Compare commits
447 Commits
.gitattributes (vendored, new file, 2 additions)
@@ -0,0 +1,2 @@
CHANGELOG.md merge=union

CHANGELOG.md (297 changes)
@@ -1,10 +1,305 @@

## v0.13 [unreleased]

### Release Notes

- **Breaking change** in jolokia plugin. See
https://github.com/influxdata/telegraf/blob/master/plugins/inputs/jolokia/README.md
for updated configuration. The plugin will now support proxy mode and will make
POST requests.

- New [agent] configuration option: `metric_batch_size`. This option tells
telegraf the maximum batch size to allow to accumulate before sending a flush
to the configured outputs. `metric_buffer_limit` now refers to the absolute
maximum number of metrics that will accumulate before metrics are dropped.

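A minimal sketch of the two batching options in the `[agent]` section (the values shown are illustrative, not defaults):

```toml
[agent]
  ## Maximum number of metrics to send to outputs in one flush.
  metric_batch_size = 1000
  ## Absolute cap on buffered metrics; beyond this, metrics are dropped.
  metric_buffer_limit = 10000
```
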
- There is no longer an option to
`flush_buffer_when_full`; this is now the default and only behavior of telegraf.

- **Breaking Change**: docker plugin tags. The cont_id tag no longer exists; it
will now be a field called container_id. Additionally, cont_image and
cont_name are being renamed to container_image and container_name.

- **Breaking Change**: docker plugin measurements. The `docker_cpu`, `docker_mem`,
`docker_blkio` and `docker_net` measurements are being renamed to
`docker_container_cpu`, `docker_container_mem`, `docker_container_blkio` and
`docker_container_net`. Why? Because these metrics are
specifically tracking per-container stats. The problem with per-container stats,
in some use-cases, is that if containers are short-lived AND names are not
kept consistent, then the series cardinality will balloon very quickly.
So adding "container" to each metric will:
(1) make it more clear that these metrics are per-container, and
(2) allow users to easily drop per-container metrics if cardinality is an
issue (`namedrop = ["docker_container_*"]`)

- `tagexclude` and `taginclude` are now available, which can be used to remove
tags from measurements on inputs and outputs. See
[the configuration doc](https://github.com/influxdata/telegraf/blob/master/docs/CONFIGURATION.md)
for more details.

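A sketch of the new tag filters; the plugin and tag names here are illustrative:

```toml
# Keep only the listed tags on metrics from this input.
[[inputs.cpu]]
  taginclude = ["cpu", "host"]

# Drop a noisy tag before metrics reach this output.
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  tagexclude = ["path"]
```
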
- **Measurement filtering:** All measurement filters now match based on glob
only. Previously there was an undocumented behavior where filters would match
based on _prefix_ in addition to globs. This means that a filter like
`fielddrop = ["time_"]` will need to be changed to `fielddrop = ["time_*"]`

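In config terms, the migration looks like this (the input plugin chosen is illustrative):

```toml
[[inputs.cpu]]
  ## Glob-only matching: "time_" no longer matches as a prefix,
  ## so an explicit wildcard is required.
  fielddrop = ["time_*"]
```
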
- **datadog**: measurement and field names will no longer have `_` replaced by `.`

- The following plugins have changed their tags to _not_ overwrite the host tag:
  - cassandra: `host -> cassandra_host`
  - disque: `host -> disque_host`
  - rethinkdb: `host -> rethinkdb_host`

- **Breaking Change**: The `win_perf_counters` input has been changed to sanitize field names, replacing `/Sec` and `/sec` with `_persec`, as well as spaces with underscores. This is needed because Graphite doesn't like slashes and spaces, and was failing to accept metrics that had them. The `/[sS]ec` -> `_persec` is just to make things clearer and uniform.

- The `disk` input plugin can now be configured with the `HOST_MOUNT_PREFIX` environment variable.
This value is prepended to any mountpaths discovered before retrieving stats.
It is not included on the report path. This is necessary for reporting host disk stats when running from within a container.

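A sketch of how the prefix behaves; the `/hostfs` mount point is an assumption for illustration, and the shell below just mirrors the prepend-for-stat, strip-for-report rule described above:

```shell
# The host filesystem is typically bind-mounted into the container, e.g.:
#   docker run -v /:/hostfs:ro -e HOST_MOUNT_PREFIX=/hostfs telegraf
HOST_MOUNT_PREFIX=/hostfs
mount_path="/var"

# The prefix is prepended before disk stats are gathered...
stat_path="${HOST_MOUNT_PREFIX}${mount_path}"
# ...but stripped again from the path that gets reported.
report_path="${stat_path#"$HOST_MOUNT_PREFIX"}"

echo "$stat_path"    # /hostfs/var
echo "$report_path"  # /var
```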
### Features

- [#1031](https://github.com/influxdata/telegraf/pull/1031): Jolokia plugin proxy mode. Thanks @saiello!
- [#1017](https://github.com/influxdata/telegraf/pull/1017): taginclude and tagexclude arguments.
- [#1015](https://github.com/influxdata/telegraf/pull/1015): Docker plugin schema refactor.
- [#889](https://github.com/influxdata/telegraf/pull/889): Improved MySQL plugin. Thanks @maksadbek!
- [#1060](https://github.com/influxdata/telegraf/pull/1060): TTL metrics added to MongoDB input plugin
- [#1056](https://github.com/influxdata/telegraf/pull/1056): Don't allow inputs to overwrite host tags.
- [#1035](https://github.com/influxdata/telegraf/issues/1035): Add `user`, `exe`, `pidfile` tags to procstat plugin.
- [#1041](https://github.com/influxdata/telegraf/issues/1041): Add `n_cpus` field to the system plugin.
- [#1072](https://github.com/influxdata/telegraf/pull/1072): New Input Plugin: filestat.
- [#1066](https://github.com/influxdata/telegraf/pull/1066): Replication lag metrics for MongoDB input plugin
- [#1086](https://github.com/influxdata/telegraf/pull/1086): Ability to specify AWS keys in config file. Thanks @johnrengleman!
- [#1096](https://github.com/influxdata/telegraf/pull/1096): Performance refactor of running output buffers.
- [#967](https://github.com/influxdata/telegraf/issues/967): Buffer logging improvements.
- [#1107](https://github.com/influxdata/telegraf/issues/1107): Support lustre2 job stats. Thanks @hanleyja!
- [#1122](https://github.com/influxdata/telegraf/pull/1122): Support setting config path through env variable and default paths.
- [#1128](https://github.com/influxdata/telegraf/pull/1128): MongoDB jumbo chunks metric for MongoDB input plugin
- [#1146](https://github.com/influxdata/telegraf/pull/1146): HAProxy socket support. Thanks weshmashian!

### Bugfixes

- [#1050](https://github.com/influxdata/telegraf/issues/1050): jolokia plugin - do not overwrite host tag. Thanks @saiello!
- [#921](https://github.com/influxdata/telegraf/pull/921): mqtt_consumer stops gathering metrics. Thanks @chaton78!
- [#1013](https://github.com/influxdata/telegraf/pull/1013): Close dead riemann output connections. Thanks @echupriyanov!
- [#1012](https://github.com/influxdata/telegraf/pull/1012): Set default tags in test accumulator.
- [#1024](https://github.com/influxdata/telegraf/issues/1024): Don't replace `.` with `_` in datadog output.
- [#1058](https://github.com/influxdata/telegraf/issues/1058): Fix possible leaky TCP connections in influxdb output.
- [#1044](https://github.com/influxdata/telegraf/pull/1044): Fix SNMP OID possible collisions. Thanks @relip
- [#1022](https://github.com/influxdata/telegraf/issues/1022): Don't error deb/rpm install on systemd errors.
- [#1078](https://github.com/influxdata/telegraf/issues/1078): Use default AWS credential chain.
- [#1070](https://github.com/influxdata/telegraf/issues/1070): SQL Server input. Fix datatype conversion.
- [#1089](https://github.com/influxdata/telegraf/issues/1089): Fix leaky TCP connections in phpfpm plugin.
- [#914](https://github.com/influxdata/telegraf/issues/914): Telegraf can drop metrics on full buffers.
- [#1098](https://github.com/influxdata/telegraf/issues/1098): Sanitize invalid OpenTSDB characters.
- [#1110](https://github.com/influxdata/telegraf/pull/1110): Sanitize * to - in graphite serializer. Thanks @goodeggs!
- [#1118](https://github.com/influxdata/telegraf/pull/1118): Sanitize Counter names for `win_perf_counters` input.
- [#1125](https://github.com/influxdata/telegraf/pull/1125): Wrap all exec command runners with a timeout, so hung os processes don't halt Telegraf.
- [#1113](https://github.com/influxdata/telegraf/pull/1113): Set MaxRetry and RequiredAcks defaults in Kafka output.
- [#1090](https://github.com/influxdata/telegraf/issues/1090): [agent] and [global_tags] config sometimes not getting applied.
- [#1133](https://github.com/influxdata/telegraf/issues/1133): Use a timeout for docker list & stat cmds.
- [#1052](https://github.com/influxdata/telegraf/issues/1052): Docker panic fix when decode fails.
- [#1136](https://github.com/influxdata/telegraf/pull/1136): "DELAYED" Inserts were deprecated in MySQL 5.6.6. Thanks @PierreF

## v0.12.1 [2016-04-14]

### Release Notes

- Breaking change in the dovecot input plugin. See Features section below.
- Graphite output templates are now supported. See
https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md#graphite
- Possible breaking change for the librato and graphite outputs. Telegraf will
no longer insert field names when the field is simply named `value`. This is
because the `value` field is redundant in the graphite/librato context.

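A sketch of a graphite output template; the server address and template string are illustrative, not defaults:

```toml
[[outputs.graphite]]
  servers = ["localhost:2003"]
  ## The template controls how the host tag, remaining tags, measurement
  ## name and field name are arranged into the graphite bucket.
  template = "host.tags.measurement.field"
```
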
### Features

- [#1009](https://github.com/influxdata/telegraf/pull/1009): Cassandra input plugin. Thanks @subhachandrachandra!
- [#976](https://github.com/influxdata/telegraf/pull/976): Reduce allocations in the UDP and statsd inputs.
- [#979](https://github.com/influxdata/telegraf/pull/979): Reduce allocations in the TCP listener.
- [#992](https://github.com/influxdata/telegraf/pull/992): Refactor allocations in TCP/UDP listeners.
- [#935](https://github.com/influxdata/telegraf/pull/935): AWS Cloudwatch input plugin. Thanks @joshhardy & @ljosa!
- [#943](https://github.com/influxdata/telegraf/pull/943): http_response input plugin. Thanks @Lswith!
- [#939](https://github.com/influxdata/telegraf/pull/939): sysstat input plugin. Thanks @zbindenren!
- [#998](https://github.com/influxdata/telegraf/pull/998): **breaking change** enabled global, user and ip queries in dovecot plugin. Thanks @mikif70!
- [#1001](https://github.com/influxdata/telegraf/pull/1001): Graphite serializer templates.
- [#1008](https://github.com/influxdata/telegraf/pull/1008): Adding memstats metrics to the influxdb plugin.

### Bugfixes

- [#968](https://github.com/influxdata/telegraf/issues/968): Processes plugin gets unknown state when spaces are in (command name)
- [#969](https://github.com/influxdata/telegraf/pull/969): ipmi_sensors: allow : in password. Thanks @awaw!
- [#972](https://github.com/influxdata/telegraf/pull/972): dovecot: remove extra newline in dovecot command. Thanks @mrannanj!
- [#645](https://github.com/influxdata/telegraf/issues/645): docker plugin i/o error on closed pipe. Thanks @tripledes!

## v0.12.0 [2016-04-05]

### Features

- [#951](https://github.com/influxdata/telegraf/pull/951): Parse environment variables in the config file.
- [#948](https://github.com/influxdata/telegraf/pull/948): Cleanup config file and make default package version include all plugins (but commented).
- [#927](https://github.com/influxdata/telegraf/pull/927): Adds parsing of tags to the statsd input when using DataDog's dogstatsd extension
- [#863](https://github.com/influxdata/telegraf/pull/863): AMQP output: allow external auth. Thanks @ekini!
- [#707](https://github.com/influxdata/telegraf/pull/707): Improved prometheus plugin. Thanks @titilambert!
- [#878](https://github.com/influxdata/telegraf/pull/878): Added json serializer. Thanks @ch3lo!
- [#880](https://github.com/influxdata/telegraf/pull/880): Add the ability to specify the bearer token to the prometheus plugin. Thanks @jchauncey!
- [#882](https://github.com/influxdata/telegraf/pull/882): Fixed SQL Server Plugin issues
- [#849](https://github.com/influxdata/telegraf/issues/849): Adding ability to parse single values as an input data type.
- [#844](https://github.com/influxdata/telegraf/pull/844): postgres_extensible plugin added. Thanks @menardorama!
- [#866](https://github.com/influxdata/telegraf/pull/866): couchbase input plugin. Thanks @ljosa!
- [#789](https://github.com/influxdata/telegraf/pull/789): Support multiple field specification and `field*` in graphite templates. Thanks @chrusty!
- [#762](https://github.com/influxdata/telegraf/pull/762): Nagios parser for the exec plugin. Thanks @titilambert!
- [#848](https://github.com/influxdata/telegraf/issues/848): Provide option to omit host tag from telegraf agent.
- [#928](https://github.com/influxdata/telegraf/pull/928): Deprecating the statsd "convert_names" options, expose separator config.
- [#919](https://github.com/influxdata/telegraf/pull/919): ipmi_sensor input plugin. Thanks @ebookbug!
- [#945](https://github.com/influxdata/telegraf/pull/945): KAFKA output: codec, acks, and retry configuration. Thanks @framiere!

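The environment-variable parsing added in #951 above can be sketched like this; the variable names are illustrative:

```toml
# Values such as $INFLUX_URL are substituted from the environment
# when telegraf loads its config file.
[[outputs.influxdb]]
  urls = ["$INFLUX_URL"]
  database = "$INFLUX_DB"
```
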
### Bugfixes

- [#890](https://github.com/influxdata/telegraf/issues/890): Create TLS config even if only ssl_ca is provided.
- [#884](https://github.com/influxdata/telegraf/issues/884): Do not call write method if there are 0 metrics to write.
- [#898](https://github.com/influxdata/telegraf/issues/898): Put database name in quotes, fixes special characters in the database name.
- [#656](https://github.com/influxdata/telegraf/issues/656): No longer run `lsof` on linux to get netstat data, fixes permissions issue.
- [#907](https://github.com/influxdata/telegraf/issues/907): Fix prometheus invalid label/measurement name key.
- [#841](https://github.com/influxdata/telegraf/issues/841): Fix memcached unix socket panic.
- [#873](https://github.com/influxdata/telegraf/issues/873): Fix SNMP plugin sometimes not returning metrics. Thanks @titiliambert!
- [#934](https://github.com/influxdata/telegraf/pull/934): phpfpm: Fix fcgi uri path. Thanks @rudenkovk!
- [#805](https://github.com/influxdata/telegraf/issues/805): Kafka consumer stops gathering after i/o timeout.
- [#959](https://github.com/influxdata/telegraf/pull/959): reduce mongodb & prometheus collection timeouts. Thanks @PierreF!

## v0.11.1 [2016-03-17]

### Release Notes

- Primarily this release was cut to fix [#859](https://github.com/influxdata/telegraf/issues/859)

### Features

- [#747](https://github.com/influxdata/telegraf/pull/747): Start telegraf on install & remove on uninstall. Thanks @pierref!
- [#794](https://github.com/influxdata/telegraf/pull/794): Add service reload ability. Thanks @entertainyou!

### Bugfixes

- [#852](https://github.com/influxdata/telegraf/issues/852): Windows zip package fix
- [#859](https://github.com/influxdata/telegraf/issues/859): httpjson plugin panic

## v0.11.0 [2016-03-15]

### Release Notes

### Features

- [#692](https://github.com/influxdata/telegraf/pull/770): Support InfluxDB retention policies
- [#771](https://github.com/influxdata/telegraf/pull/771): Default timeouts for input plugins. Thanks @PierreF!
- [#758](https://github.com/influxdata/telegraf/pull/758): UDP Listener input plugin, thanks @whatyouhide!
- [#769](https://github.com/influxdata/telegraf/issues/769): httpjson plugin: allow specifying SSL configuration.
- [#735](https://github.com/influxdata/telegraf/pull/735): SNMP Table feature. Thanks @titilambert!
- [#754](https://github.com/influxdata/telegraf/pull/754): docker plugin: adding `docker info` metrics to output. Thanks @titilambert!
- [#788](https://github.com/influxdata/telegraf/pull/788): -input-list and -output-list command-line options. Thanks @ebookbug!
- [#778](https://github.com/influxdata/telegraf/pull/778): Adding a TCP input listener.
- [#797](https://github.com/influxdata/telegraf/issues/797): Provide option for persistent MQTT consumer client sessions.
- [#799](https://github.com/influxdata/telegraf/pull/799): Add number of threads for procstat input plugin. Thanks @titilambert!
- [#776](https://github.com/influxdata/telegraf/pull/776): Add Zookeeper chroot option to kafka_consumer. Thanks @prune998!
- [#811](https://github.com/influxdata/telegraf/pull/811): Add processes plugin for classifying total procs on system. Thanks @titilambert!
- [#235](https://github.com/influxdata/telegraf/issues/235): Add number of users to the `system` input plugin.
- [#826](https://github.com/influxdata/telegraf/pull/826): "kernel" linux plugin for /proc/stat metrics (context switches, interrupts, etc.)
- [#847](https://github.com/influxdata/telegraf/pull/847): `ntpq`: Input plugin for running ntp query executable and gathering metrics.

### Bugfixes

- [#748](https://github.com/influxdata/telegraf/issues/748): Fix sensor plugin split on ":"
- [#722](https://github.com/influxdata/telegraf/pull/722): Librato output plugin fixes. Thanks @chrusty!
- [#745](https://github.com/influxdata/telegraf/issues/745): Fix Telegraf toml parse panic on large config files. Thanks @titilambert!
- [#781](https://github.com/influxdata/telegraf/pull/781): Fix mqtt_consumer username not being set. Thanks @chaton78!
- [#786](https://github.com/influxdata/telegraf/pull/786): Fix mqtt output username not being set. Thanks @msangoi!
- [#773](https://github.com/influxdata/telegraf/issues/773): Fix duplicate measurements in snmp plugin. Thanks @titilambert!
- [#708](https://github.com/influxdata/telegraf/issues/708): packaging: build ARM package
- [#713](https://github.com/influxdata/telegraf/issues/713): packaging: insecure permissions error on log directory
- [#816](https://github.com/influxdata/telegraf/issues/816): Fix phpfpm panic if fcgi endpoint unreachable.
- [#828](https://github.com/influxdata/telegraf/issues/828): fix net_response plugin overwriting host tag.
- [#821](https://github.com/influxdata/telegraf/issues/821): Remove postgres password from server tag. Thanks @menardorama!

## v0.10.4.1

### Release Notes

- Bug in the build script broke deb and rpm packages.

### Bugfixes

- [#750](https://github.com/influxdata/telegraf/issues/750): deb package broken
- [#752](https://github.com/influxdata/telegraf/issues/752): rpm package broken

## v0.10.4 [2016-02-24]

### Release Notes

- The pass/drop parameters have been renamed to fielddrop/fieldpass parameters,
to more accurately indicate their purpose.
- There are also now namedrop/namepass parameters for passing/dropping based
on the metric _name_.
- Experimental windows builds now available.

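A sketch of the renamed filter parameters; the plugin, field, and measurement names are illustrative:

```toml
[[inputs.mem]]
  ## Formerly "pass"/"drop": fieldpass/fielddrop filter on field names...
  fieldpass = ["available*", "used*"]

[[inputs.disk]]
  ## ...while namepass/namedrop filter on the metric name itself.
  namedrop = ["disk_inodes*"]
```
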
### Features

- [#727](https://github.com/influxdata/telegraf/pull/727): riak input, thanks @jcoene!
- [#694](https://github.com/influxdata/telegraf/pull/694): DNS Query input, thanks @mjasion!
- [#724](https://github.com/influxdata/telegraf/pull/724): username matching for procstat input, thanks @zorel!
- [#736](https://github.com/influxdata/telegraf/pull/736): Ignore dummy filesystems from disk plugin. Thanks @PierreF!
- [#737](https://github.com/influxdata/telegraf/pull/737): Support multiple fields for statsd input. Thanks @mattheath!

### Bugfixes

- [#701](https://github.com/influxdata/telegraf/pull/701): output write count shouldn't print in quiet mode.
- [#746](https://github.com/influxdata/telegraf/pull/746): httpjson plugin: Fix HTTP GET parameters.

## v0.10.3 [2016-02-18]

### Release Notes

- Users of the `exec` and `kafka_consumer` (and the new `nats_consumer`
and `mqtt_consumer` plugins) can now specify the incoming data
format that they would like to parse. Currently supports: "json", "influx", and
"graphite"
- Users of message broker and file output plugins can now choose what data format
they would like to output. Currently supports: "influx" and "graphite"
- More info on parsing _incoming_ data formats can be found
[here](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md)
- More info on serializing _outgoing_ data formats can be found
[here](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md)
- Telegraf now has an option `flush_buffer_when_full` that will flush the
metric buffer whenever it fills up for each output, rather than dropping
points and only flushing on a set time interval. This will default to `true`
and is in the `[agent]` config section.

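A sketch of selecting a data format on an input and an output; the command and file paths are illustrative:

```toml
[[inputs.exec]]
  command = "/usr/local/bin/mycollector"   # hypothetical script
  data_format = "json"                     # or "influx", "graphite"

[[outputs.file]]
  files = ["stdout"]
  data_format = "graphite"                 # or "influx"
```
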
### Features

- [#652](https://github.com/influxdata/telegraf/pull/652): CouchDB Input Plugin. Thanks @codehate!
- [#655](https://github.com/influxdata/telegraf/pull/655): Support parsing arbitrary data formats. Currently limited to kafka_consumer and exec inputs.
- [#671](https://github.com/influxdata/telegraf/pull/671): Dovecot input plugin. Thanks @mikif70!
- [#680](https://github.com/influxdata/telegraf/pull/680): NATS consumer input plugin. Thanks @netixen!
- [#676](https://github.com/influxdata/telegraf/pull/676): MQTT consumer input plugin.
- [#683](https://github.com/influxdata/telegraf/pull/683): Postgres input plugin: add pg_stat_bgwriter. Thanks @menardorama!
- [#679](https://github.com/influxdata/telegraf/pull/679): File/stdout output plugin.
- [#679](https://github.com/influxdata/telegraf/pull/679): Support for arbitrary output data formats.
- [#695](https://github.com/influxdata/telegraf/pull/695): raindrops input plugin. Thanks @burdandrei!
- [#650](https://github.com/influxdata/telegraf/pull/650): net_response input plugin. Thanks @titilambert!
- [#699](https://github.com/influxdata/telegraf/pull/699): Flush based on buffer size rather than time.
- [#682](https://github.com/influxdata/telegraf/pull/682): Mesos input plugin. Thanks @tripledes!

### Bugfixes

- [#443](https://github.com/influxdata/telegraf/issues/443): Fix Ping command timeout parameter on Linux.
- [#662](https://github.com/influxdata/telegraf/pull/667): Change `[tags]` to `[global_tags]` to fix multiple-plugin tags bug.
- [#642](https://github.com/influxdata/telegraf/issues/642): Riemann output plugin issues.
- [#394](https://github.com/influxdata/telegraf/issues/394): Support HTTP POST. Thanks @gabelev!
- [#715](https://github.com/influxdata/telegraf/pull/715): Fix influxdb precision config panic. Thanks @netixen!

## v0.10.2 [2016-02-04]

### Release Notes

- Statsd timing measurements are now aggregated into a single measurement with
  fields.
- Graphite output now inserts tags into the bucket in alphabetical order.
- Normalized TLS/SSL support for output plugins: MQTT, AMQP, Kafka
- `verify_ssl` config option was removed from Kafka because it was actually
  doing the opposite of what it claimed to do (yikes). It's been replaced by
  `insecure_skip_verify`

### Features

- [#575](https://github.com/influxdata/telegraf/pull/575): Support for collecting Windows Performance Counters. Thanks @TheFlyingCorpse!
- [#564](https://github.com/influxdata/telegraf/issues/564): features for plugin writing simplification. Internal metric data type.
- [#603](https://github.com/influxdata/telegraf/pull/603): Aggregate statsd timing measurements into fields. Thanks @marcinbunsch!
- [#601](https://github.com/influxdata/telegraf/issues/601): Warn when overwriting cached metrics.
- [#614](https://github.com/influxdata/telegraf/pull/614): PowerDNS input plugin. Thanks @Kasen!
- [#617](https://github.com/influxdata/telegraf/pull/617): exec plugin: parse influx line protocol in addition to JSON.
- [#628](https://github.com/influxdata/telegraf/pull/628): Windows perf counters: pre-Vista support

### Bugfixes

- [#595](https://github.com/influxdata/telegraf/issues/595): graphite output should include tags to separate duplicate measurements.
- [#599](https://github.com/influxdata/telegraf/issues/599): datadog plugin tags not working.
- [#600](https://github.com/influxdata/telegraf/issues/600): datadog measurement/field name parsing is wrong.
- [#602](https://github.com/influxdata/telegraf/issues/602): Fix statsd field name templating.
- [#612](https://github.com/influxdata/telegraf/pull/612): Docker input panic fix if stats received are nil.
- [#634](https://github.com/influxdata/telegraf/pull/634): Properly set host headers in httpjson. Thanks @reginaldosousa!

## v0.10.1 [2016-01-27]
187
CONTRIBUTING.md
@@ -12,6 +12,13 @@ but any information you can provide on how the data will look is appreciated.
 See the [OpenTSDB output](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/opentsdb)
 for a good example.
 
+## GoDoc
+
+Public interfaces for inputs, outputs, metrics, and the accumulator can be found
+on the GoDoc
+
+[](https://godoc.org/github.com/influxdata/telegraf)
+
 ## Sign the CLA
 
 Before we can merge a pull request, you will need to sign the CLA,
@@ -29,7 +36,7 @@ Assuming you can already build the project, run these in the telegraf directory:
 
 This section is for developers who want to create new collection inputs.
 Telegraf is entirely plugin driven. This interface allows for operators to
-pick and chose what is gathered as well as makes it easy for developers
+pick and choose what is gathered and makes it easy for developers
 to create new ways of generating metrics.
 
 Plugin authorship is kept as simple as possible to promote people to develop
@@ -37,7 +44,7 @@ and submit new inputs.
 
 ### Input Plugin Guidelines
 
-* A plugin must conform to the `inputs.Input` interface.
+* A plugin must conform to the `telegraf.Input` interface.
 * Input Plugins should call `inputs.Add` in their `init` function to register themselves.
 See below for a quick example.
 * Input Plugins must be added to the
@@ -46,49 +53,8 @@ See below for a quick example.
 plugin can be configured. This is include in `telegraf -sample-config`.
 * The `Description` function should say in one line what this plugin does.
 
-### Input interface
-
-```go
-type Input interface {
-    SampleConfig() string
-    Description() string
-    Gather(Accumulator) error
-}
-
-type Accumulator interface {
-    Add(measurement string,
-        value interface{},
-        tags map[string]string,
-        timestamp ...time.Time)
-    AddFields(measurement string,
-        fields map[string]interface{},
-        tags map[string]string,
-        timestamp ...time.Time)
-}
-```
-
-### Accumulator
-
-The way that a plugin emits metrics is by interacting with the Accumulator.
-
-The `Add` function takes 3 arguments:
-* **measurement**: A string description of the metric. For instance `bytes_read` or `faults`.
-* **value**: A value for the metric. This accepts 5 different types of value:
-  * **int**: The most common type. All int types are accepted but favor using `int64`. Useful for counters, etc.
-  * **float**: Favor `float64`, useful for gauges, percentages, etc.
-  * **bool**: `true` or `false`, useful to indicate the presence of a state. `light_on`, etc.
-  * **string**: Typically used to indicate a message, or some kind of freeform information.
-  * **time.Time**: Useful for indicating when a state last occurred, for instance `light_on_since`.
-* **tags**: This is a map of strings to strings to describe the where or who
-about the metric. For instance, the `net` plugin adds a tag named `"interface"`
-set to the name of the network interface, like `"eth0"`.
-
+Let's say you've written a plugin that emits metrics about processes on the
+current host.
 
 ### Input Plugin Example
 
@@ -97,7 +63,10 @@ package simple
 
 // simple.go
 
-import "github.com/influxdata/telegraf/plugins/inputs"
+import (
+    "github.com/influxdata/telegraf"
+    "github.com/influxdata/telegraf/plugins/inputs"
+)
 
 type Simple struct {
     Ok bool
@@ -111,7 +80,7 @@ func (s *Simple) SampleConfig() string {
     return "ok = true # indicate if everything is fine"
 }
 
-func (s *Simple) Gather(acc inputs.Accumulator) error {
+func (s *Simple) Gather(acc telegraf.Accumulator) error {
     if s.Ok {
         acc.Add("state", "pretty good", nil)
     } else {
@@ -122,10 +91,56 @@ func (s *Simple) Gather(acc inputs.Accumulator) error {
 }
 
 func init() {
-    inputs.Add("simple", func() inputs.Input { return &Simple{} })
+    inputs.Add("simple", func() telegraf.Input { return &Simple{} })
 }
 ```
 
+## Input Plugins Accepting Arbitrary Data Formats
+
+Some input plugins (such as
+[exec](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/exec))
+accept arbitrary input data formats. An overview of these data formats can
+be found
+[here](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md).
+
+In order to enable this, you must specify a `SetParser(parser parsers.Parser)`
+function on the plugin object (see the exec plugin for an example), as well as
+defining `parser` as a field of the object.
+
+You can then utilize the parser internally in your plugin, parsing data as you
+see fit. Telegraf's configuration layer will take care of instantiating and
+creating the `Parser` object.
+
+You should also add the following to your SampleConfig() return:
+
+```toml
+  ## Data format to consume.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  data_format = "influx"
+```
+
+Below is the `Parser` interface.
+
+```go
+// Parser is an interface defining functions that a parser plugin must satisfy.
+type Parser interface {
+    // Parse takes a byte buffer separated by newlines
+    // ie, `cpu.usage.idle 90\ncpu.usage.busy 10`
+    // and parses it into telegraf metrics
+    Parse(buf []byte) ([]telegraf.Metric, error)
+
+    // ParseLine takes a single string metric
+    // ie, "cpu.usage.idle 90"
+    // and parses it into a telegraf metric.
+    ParseLine(line string) (telegraf.Metric, error)
+}
+```
+
+And you can view the code
+[here.](https://github.com/influxdata/telegraf/blob/henrypfhu-master/plugins/parsers/registry.go)
+
 ## Service Input Plugins
 
 This section is for developers who want to create new "service" collection
@@ -145,18 +160,6 @@ and `Stop()` methods.
 * Same as the `Plugin` guidelines, except that they must conform to the
 `inputs.ServiceInput` interface.
 
-### Service Plugin interface
-
-```go
-type ServicePlugin interface {
-    SampleConfig() string
-    Description() string
-    Gather(Accumulator) error
-    Start() error
-    Stop()
-}
-```
-
 ## Output Plugins
 
 This section is for developers who want to create a new output sink. Outputs
@@ -174,18 +177,6 @@ See below for a quick example.
 output can be configured. This is include in `telegraf -sample-config`.
 * The `Description` function should say in one line what this output does.
 
-### Output interface
-
-```go
-type Output interface {
-    Connect() error
-    Close() error
-    Description() string
-    SampleConfig() string
-    Write(points []*client.Point) error
-}
-```
-
 ### Output Example
 
 ```go
@@ -193,7 +184,10 @@ package simpleoutput
 
 // simpleoutput.go
 
-import "github.com/influxdata/telegraf/plugins/outputs"
+import (
+    "github.com/influxdata/telegraf"
+    "github.com/influxdata/telegraf/plugins/outputs"
+)
 
 type Simple struct {
     Ok bool
@@ -217,7 +211,7 @@ func (s *Simple) Close() error {
     return nil
 }
 
-func (s *Simple) Write(points []*client.Point) error {
+func (s *Simple) Write(metrics []telegraf.Metric) error {
     for _, pt := range points {
         // write `pt` to the output sink here
     }
@@ -225,11 +219,38 @@ func (s *Simple) Write(points []*client.Point) error {
 }
 
 func init() {
-    outputs.Add("simpleoutput", func() outputs.Output { return &Simple{} })
+    outputs.Add("simpleoutput", func() telegraf.Output { return &Simple{} })
 }
 ```
 
+## Output Plugins Writing Arbitrary Data Formats
+
+Some output plugins (such as
+[file](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/file))
+can write arbitrary output data formats. An overview of these data formats can
+be found
+[here](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md).
+
+In order to enable this, you must specify a
+`SetSerializer(serializer serializers.Serializer)`
+function on the plugin object (see the file plugin for an example), as well as
+defining `serializer` as a field of the object.
+
+You can then utilize the serializer internally in your plugin, serializing data
+before it's written. Telegraf's configuration layer will take care of
+instantiating and creating the `Serializer` object.
+
+You should also add the following to your SampleConfig() return:
+
+```toml
+  ## Data format to output.
+  ## Each data format has its own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
+  data_format = "influx"
+```
+
 ## Service Output Plugins
 
 This section is for developers who want to create new "service" output. A
@@ -245,20 +266,6 @@ and `Stop()` methods.
 * Same as the `Output` guidelines, except that they must conform to the
 `output.ServiceOutput` interface.
 
-### Service Output interface
-
-```go
-type ServiceOutput interface {
-    Connect() error
-    Close() error
-    Description() string
-    SampleConfig() string
-    Write(points []*client.Point) error
-    Start() error
-    Stop()
-}
-```
-
 ## Unit Tests
 
 ### Execute short tests
@@ -274,7 +281,7 @@ which would take some time to replicate.
 To overcome this situation we've decided to use docker containers to provide a
 fast and reproducible environment to test those services which require it.
 For other situations
-(i.e: https://github.com/influxdata/telegraf/blob/master/plugins/redis/redis_test.go)
+(i.e: https://github.com/influxdata/telegraf/blob/master/plugins/inputs/redis/redis_test.go)
 a simple mock will suffice.
 
 To execute Telegraf tests follow these simple steps:
|
|||||||
83
Godeps
83
Godeps
@@ -1,59 +1,58 @@
-git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git dbd8d5c40a582eb9adacde36b47932b3a3ad0034
-github.com/Shopify/sarama d37c73f2b2bce85f7fa16b6a550d26c5372892ef
-github.com/Sirupsen/logrus f7f79f729e0fbe2fcc061db48a9ba0263f588252
-github.com/amir/raidman 6a8e089bbe32e6b907feae5ba688841974b3c339
-github.com/armon/go-metrics 345426c77237ece5dab0e1605c3e4b35c3f54757
-github.com/aws/aws-sdk-go 87b1e60a50b09e4812dee560b33a238f67305804
-github.com/beorn7/perks b965b613227fddccbfffe13eae360ed3fa822f8d
-github.com/boltdb/bolt ee4a0888a9abe7eefe5a0992ca4cb06864839873
-github.com/cenkalti/backoff 4dc77674aceaabba2c7e3da25d4c823edfb73f99
-github.com/dancannon/gorethink 6f088135ff288deb9d5546f4c71919207f891a70
-github.com/davecgh/go-spew 5215b55f46b2b919f50a1df0eaa5886afe4e3b3d
-github.com/eapache/go-resiliency b86b1ec0dd4209a588dc1285cdd471e73525c0b3
-github.com/eapache/queue ded5959c0d4e360646dc9e9908cff48666781367
-github.com/fsouza/go-dockerclient 7b651349f9479f5114913eefbfd3c4eeddd79ab4
-github.com/go-ini/ini afbd495e5aaea13597b5e14fe514ddeaa4d76fc3
-github.com/go-sql-driver/mysql 7c7f556282622f94213bc028b4d0a7b6151ba239
-github.com/gogo/protobuf e8904f58e872a473a5b91bc9bf3377d223555263
-github.com/golang/protobuf 6aaa8d47701fa6cf07e914ec01fde3d4a1fe79c3
-github.com/golang/snappy 723cc1e459b8eea2dea4583200fd60757d40097a
-github.com/gonuts/go-shellquote e842a11b24c6abfb3dd27af69a17f482e4b483c2
-github.com/gorilla/context 1c83b3eabd45b6d76072b66b746c20815fb2872d
-github.com/gorilla/mux 26a6070f849969ba72b72256e9f14cf519751690
-github.com/hailocab/go-hostpool e80d13ce29ede4452c43dea11e79b9bc8a15b478
-github.com/hashicorp/go-msgpack fa3f63826f7c23912c15263591e65d54d080b458
-github.com/hashicorp/raft 057b893fd996696719e98b6c44649ea14968c811
-github.com/hashicorp/raft-boltdb d1e82c1ec3f15ee991f7cc7ffd5b67ff6f5bbaee
-github.com/influxdata/config bae7cb98197d842374d3b8403905924094930f24
-github.com/influxdata/influxdb 697f48b4e62e514e701ffec39978b864a3c666e6
-github.com/influxdb/influxdb 697f48b4e62e514e701ffec39978b864a3c666e6
-github.com/jmespath/go-jmespath c01cf91b011868172fdcd9f41838e80c9d716264
-github.com/klauspost/crc32 999f3125931f6557b991b2f8472172bdfa578d38
-github.com/lib/pq 8ad2b298cadd691a77015666a5372eae5dbfac8f
-github.com/matttproud/golang_protobuf_extensions d0c3fe89de86839aecf2e0579c40ba3bb336a453
-github.com/mreiferson/go-snappystream 028eae7ab5c4c9e2d1cb4c4ca1e53259bbe7e504
-github.com/naoina/go-stringutil 6b638e95a32d0c1131db0e7fe83775cbea4a0d0b
-github.com/naoina/toml 751171607256bb66e64c9f0220c00662420c38e9
-github.com/nsqio/go-nsq 2118015c120962edc5d03325c680daf3163a8b5f
-github.com/pborman/uuid dee7705ef7b324f27ceb85a121c61f2c2e8ce988
-github.com/pmezard/go-difflib 792786c7400a136282c1664665ae0a8db921c6c2
-github.com/prometheus/client_golang 67994f177195311c3ea3d4407ed0175e34a4256f
-github.com/prometheus/client_model fa8ad6fec33561be4280a8f0514318c79d7f6cb6
-github.com/prometheus/common 14ca1097bbe21584194c15e391a9dab95ad42a59
-github.com/prometheus/procfs 406e5b7bfd8201a36e2bb5f7bdae0b03380c2ce8
-github.com/samuel/go-zookeeper 218e9c81c0dd8b3b18172b2bbfad92cc7d6db55f
-github.com/shirou/gopsutil 85bf0974ed06e4e668595ae2b4de02e772a2819b
-github.com/soniah/gosnmp b1b4f885b12c5dcbd021c5cee1c904110de6db7d
-github.com/streadway/amqp b4f3ceab0337f013208d31348b578d83c0064744
-github.com/stretchr/objx 1a9d0bb9f541897e62256577b352fdbc1fb4fd94
-github.com/stretchr/testify f390dcf405f7b83c997eac1b06768bb9f44dec18
-github.com/wvanbergen/kafka 1a8639a45164fcc245d5c7b4bd3ccfbd1a0ffbf3
-github.com/wvanbergen/kazoo-go 0f768712ae6f76454f987c3356177e138df258f8
-github.com/zensqlmonitor/go-mssqldb ffe5510c6fa5e15e6d983210ab501c815b56b363
-golang.org/x/crypto 1f22c0103821b9390939b6776727195525381532
-golang.org/x/net 04b9de9b512f58addf28c9853d50ebef61c3953e
-golang.org/x/text 6d3c22c4525a4da167968fa2479be5524d2e8bd0
-gopkg.in/dancannon/gorethink.v1 6f088135ff288deb9d5546f4c71919207f891a70
-gopkg.in/fatih/pool.v2 cba550ebf9bce999a02e963296d4bc7a486cb715
-gopkg.in/mgo.v2 03c9f3ee4c14c8e51ee521a6a7d0425658dd6f64
-gopkg.in/yaml.v2 f7716cbe52baa25d2e9b0d0da546fcf909fc16b4
+github.com/Shopify/sarama 8aadb476e66ca998f2f6bb3c993e9a2daa3666b9
+github.com/Sirupsen/logrus 219c8cb75c258c552e999735be6df753ffc7afdc
+github.com/amir/raidman 53c1b967405155bfc8758557863bf2e14f814687
+github.com/aws/aws-sdk-go 13a12060f716145019378a10e2806c174356b857
+github.com/beorn7/perks 3ac7bf7a47d159a033b107610db8a1b6575507a4
+github.com/cenkalti/backoff 4dc77674aceaabba2c7e3da25d4c823edfb73f99
+github.com/couchbase/go-couchbase cb664315a324d87d19c879d9cc67fda6be8c2ac1
+github.com/couchbase/gomemcached a5ea6356f648fec6ab89add00edd09151455b4b2
+github.com/couchbase/goutils 5823a0cbaaa9008406021dc5daf80125ea30bba6
+github.com/dancannon/gorethink e7cac92ea2bc52638791a021f212145acfedb1fc
+github.com/davecgh/go-spew 5215b55f46b2b919f50a1df0eaa5886afe4e3b3d
+github.com/docker/engine-api 8924d6900370b4c7e7984be5adc61f50a80d7537
+github.com/docker/go-connections f549a9393d05688dff0992ef3efd8bbe6c628aeb
+github.com/docker/go-units 5d2041e26a699eaca682e2ea41c8f891e1060444
+github.com/eapache/go-resiliency b86b1ec0dd4209a588dc1285cdd471e73525c0b3
+github.com/eapache/queue ded5959c0d4e360646dc9e9908cff48666781367
+github.com/eclipse/paho.mqtt.golang 0f7a459f04f13a41b7ed752d47944528d4bf9a86
+github.com/go-sql-driver/mysql 1fca743146605a172a266e1654e01e5cd5669bee
+github.com/gobwas/glob d877f6352135181470c40c73ebb81aefa22115fa
+github.com/golang/protobuf 552c7b9542c194800fd493123b3798ef0a832032
+github.com/golang/snappy 427fb6fc07997f43afa32f35e850833760e489a7
+github.com/gonuts/go-shellquote e842a11b24c6abfb3dd27af69a17f482e4b483c2
+github.com/gorilla/context 1ea25387ff6f684839d82767c1733ff4d4d15d0a
+github.com/gorilla/mux c9e326e2bdec29039a3761c07bece13133863e1e
+github.com/hailocab/go-hostpool e80d13ce29ede4452c43dea11e79b9bc8a15b478
+github.com/hpcloud/tail b2940955ab8b26e19d43a43c4da0475dd81bdb56
+github.com/influxdata/config b79f6829346b8d6e78ba73544b1e1038f1f1c9da
+github.com/influxdata/influxdb 21db76b3374c733f37ed16ad93f3484020034351
+github.com/influxdata/toml af4df43894b16e3fd2b788d01bd27ad0776ef2d0
+github.com/klauspost/crc32 19b0b332c9e4516a6370a0456e6182c3b5036720
+github.com/lib/pq e182dc4027e2ded4b19396d638610f2653295f36
+github.com/matttproud/golang_protobuf_extensions d0c3fe89de86839aecf2e0579c40ba3bb336a453
+github.com/miekg/dns cce6c130cdb92c752850880fd285bea1d64439dd
+github.com/mreiferson/go-snappystream 028eae7ab5c4c9e2d1cb4c4ca1e53259bbe7e504
+github.com/naoina/go-stringutil 6b638e95a32d0c1131db0e7fe83775cbea4a0d0b
+github.com/nats-io/nats b13fc9d12b0b123ebc374e6b808c6228ae4234a3
+github.com/nats-io/nuid 4f84f5f3b2786224e336af2e13dba0a0a80b76fa
+github.com/nsqio/go-nsq 0b80d6f05e15ca1930e0c5e1d540ed627e299980
+github.com/opencontainers/runc 89ab7f2ccc1e45ddf6485eaa802c35dcf321dfc8
+github.com/prometheus/client_golang 18acf9993a863f4c4b40612e19cdd243e7c86831
+github.com/prometheus/client_model fa8ad6fec33561be4280a8f0514318c79d7f6cb6
+github.com/prometheus/common e8eabff8812b05acf522b45fdcd725a785188e37
+github.com/prometheus/procfs 406e5b7bfd8201a36e2bb5f7bdae0b03380c2ce8
+github.com/samuel/go-zookeeper 218e9c81c0dd8b3b18172b2bbfad92cc7d6db55f
+github.com/shirou/gopsutil 1f32ce1bb380845be7f5d174ac641a2c592c0c42
+github.com/soniah/gosnmp b1b4f885b12c5dcbd021c5cee1c904110de6db7d
+github.com/streadway/amqp b4f3ceab0337f013208d31348b578d83c0064744
+github.com/stretchr/testify 1f4a1643a57e798696635ea4c126e9127adb7d3c
+github.com/wvanbergen/kafka 46f9a1cf3f670edec492029fadded9c2d9e18866
+github.com/wvanbergen/kazoo-go 0f768712ae6f76454f987c3356177e138df258f8
+github.com/zensqlmonitor/go-mssqldb ffe5510c6fa5e15e6d983210ab501c815b56b363
+golang.org/x/crypto 5dc8cb4b8a8eb076cbb5a06bc3b8682c15bdbbd3
+golang.org/x/net 6acef71eb69611914f7a30939ea9f6e194c78172
+golang.org/x/text a71fd10341b064c10f4a81ceac72bcf70f26ea34
+gopkg.in/dancannon/gorethink.v1 7d1af5be49cb5ecc7b177bf387d232050299d6ef
+gopkg.in/fatih/pool.v2 cba550ebf9bce999a02e963296d4bc7a486cb715
+gopkg.in/mgo.v2 d90005c5262a3463800497ea5a89aed5fe22c886
+gopkg.in/yaml.v2 a83829b6f1293c91addabc89d0571c246397bbf4
59
Godeps_windows
Normal file
@@ -0,0 +1,59 @@
+github.com/Microsoft/go-winio 9f57cbbcbcb41dea496528872a4f0e37a4f7ae98
+github.com/Shopify/sarama 8aadb476e66ca998f2f6bb3c993e9a2daa3666b9
+github.com/Sirupsen/logrus 219c8cb75c258c552e999735be6df753ffc7afdc
+github.com/StackExchange/wmi f3e2bae1e0cb5aef83e319133eabfee30013a4a5
+github.com/amir/raidman 53c1b967405155bfc8758557863bf2e14f814687
+github.com/aws/aws-sdk-go 13a12060f716145019378a10e2806c174356b857
+github.com/beorn7/perks 3ac7bf7a47d159a033b107610db8a1b6575507a4
+github.com/cenkalti/backoff 4dc77674aceaabba2c7e3da25d4c823edfb73f99
+github.com/couchbase/go-couchbase cb664315a324d87d19c879d9cc67fda6be8c2ac1
+github.com/couchbase/gomemcached a5ea6356f648fec6ab89add00edd09151455b4b2
+github.com/couchbase/goutils 5823a0cbaaa9008406021dc5daf80125ea30bba6
+github.com/dancannon/gorethink e7cac92ea2bc52638791a021f212145acfedb1fc
+github.com/davecgh/go-spew 5215b55f46b2b919f50a1df0eaa5886afe4e3b3d
+github.com/docker/engine-api 8924d6900370b4c7e7984be5adc61f50a80d7537
+github.com/docker/go-connections f549a9393d05688dff0992ef3efd8bbe6c628aeb
+github.com/docker/go-units 5d2041e26a699eaca682e2ea41c8f891e1060444
+github.com/eapache/go-resiliency b86b1ec0dd4209a588dc1285cdd471e73525c0b3
+github.com/eapache/queue ded5959c0d4e360646dc9e9908cff48666781367
+github.com/eclipse/paho.mqtt.golang 0f7a459f04f13a41b7ed752d47944528d4bf9a86
+github.com/go-ole/go-ole 50055884d646dd9434f16bbb5c9801749b9bafe4
+github.com/go-sql-driver/mysql 1fca743146605a172a266e1654e01e5cd5669bee
+github.com/golang/protobuf 552c7b9542c194800fd493123b3798ef0a832032
+github.com/golang/snappy 427fb6fc07997f43afa32f35e850833760e489a7
+github.com/gonuts/go-shellquote e842a11b24c6abfb3dd27af69a17f482e4b483c2
+github.com/gorilla/context 1ea25387ff6f684839d82767c1733ff4d4d15d0a
+github.com/gorilla/mux c9e326e2bdec29039a3761c07bece13133863e1e
+github.com/hailocab/go-hostpool e80d13ce29ede4452c43dea11e79b9bc8a15b478
+github.com/influxdata/config b79f6829346b8d6e78ba73544b1e1038f1f1c9da
+github.com/influxdata/influxdb e3fef5593c21644f2b43af55d6e17e70910b0e48
+github.com/influxdata/toml af4df43894b16e3fd2b788d01bd27ad0776ef2d0
+github.com/klauspost/crc32 19b0b332c9e4516a6370a0456e6182c3b5036720
+github.com/lib/pq e182dc4027e2ded4b19396d638610f2653295f36
+github.com/lxn/win 9a7734ea4db26bc593d52f6a8a957afdad39c5c1
+github.com/matttproud/golang_protobuf_extensions d0c3fe89de86839aecf2e0579c40ba3bb336a453
+github.com/miekg/dns cce6c130cdb92c752850880fd285bea1d64439dd
+github.com/mreiferson/go-snappystream 028eae7ab5c4c9e2d1cb4c4ca1e53259bbe7e504
+github.com/naoina/go-stringutil 6b638e95a32d0c1131db0e7fe83775cbea4a0d0b
+github.com/nats-io/nats b13fc9d12b0b123ebc374e6b808c6228ae4234a3
+github.com/nats-io/nuid 4f84f5f3b2786224e336af2e13dba0a0a80b76fa
+github.com/nsqio/go-nsq 0b80d6f05e15ca1930e0c5e1d540ed627e299980
+github.com/prometheus/client_golang 18acf9993a863f4c4b40612e19cdd243e7c86831
+github.com/prometheus/client_model fa8ad6fec33561be4280a8f0514318c79d7f6cb6
+github.com/prometheus/common e8eabff8812b05acf522b45fdcd725a785188e37
+github.com/prometheus/procfs 406e5b7bfd8201a36e2bb5f7bdae0b03380c2ce8
+github.com/samuel/go-zookeeper 218e9c81c0dd8b3b18172b2bbfad92cc7d6db55f
+github.com/shirou/gopsutil 1f32ce1bb380845be7f5d174ac641a2c592c0c42
+github.com/shirou/w32 ada3ba68f000aa1b58580e45c9d308fe0b7fc5c5
+github.com/soniah/gosnmp b1b4f885b12c5dcbd021c5cee1c904110de6db7d
+github.com/streadway/amqp b4f3ceab0337f013208d31348b578d83c0064744
+github.com/stretchr/testify 1f4a1643a57e798696635ea4c126e9127adb7d3c
+github.com/wvanbergen/kafka 46f9a1cf3f670edec492029fadded9c2d9e18866
+github.com/wvanbergen/kazoo-go 0f768712ae6f76454f987c3356177e138df258f8
+github.com/zensqlmonitor/go-mssqldb ffe5510c6fa5e15e6d983210ab501c815b56b363
+golang.org/x/net 6acef71eb69611914f7a30939ea9f6e194c78172
+golang.org/x/text a71fd10341b064c10f4a81ceac72bcf70f26ea34
+gopkg.in/dancannon/gorethink.v1 7d1af5be49cb5ecc7b177bf387d232050299d6ef
+gopkg.in/fatih/pool.v2 cba550ebf9bce999a02e963296d4bc7a486cb715
+gopkg.in/mgo.v2 d90005c5262a3463800497ea5a89aed5fe22c886
+gopkg.in/yaml.v2 a83829b6f1293c91addabc89d0571c246397bbf4
35
Makefile
@@ -9,23 +9,41 @@ endif
 # Standard Telegraf build
 default: prepare build
 
+# Windows build
+windows: prepare-windows build-windows
+
 # Only run the build (no dependency grabbing)
 build:
-	go build -o telegraf -ldflags \
+	go install -ldflags "-X main.Version=$(VERSION)" ./...
+
+build-windows:
+	go build -o telegraf.exe -ldflags \
 		"-X main.Version=$(VERSION)" \
 		./cmd/telegraf/telegraf.go
 
+build-for-docker:
+	CGO_ENABLED=0 GOOS=linux go build -installsuffix cgo -o telegraf -ldflags \
+		"-s -X main.Version=$(VERSION)" \
+		./cmd/telegraf/telegraf.go
+
 # Build with race detector
 dev: prepare
-	go build -race -o telegraf -ldflags \
-		"-X main.Version=$(VERSION)" \
-		./cmd/telegraf/telegraf.go
+	go build -race -ldflags "-X main.Version=$(VERSION)" ./...
+
+# run package script
+package:
+	./scripts/build.py --package --version="$(VERSION)" --platform=linux --arch=all --upload
 
 # Get dependencies and use gdm to checkout changesets
 prepare:
 	go get github.com/sparrc/gdm
 	gdm restore
 
+# Use the windows godeps file to prepare dependencies
+prepare-windows:
+	go get github.com/sparrc/gdm
+	gdm restore -f Godeps_windows
+
 # Run all docker containers necessary for unit tests
 docker-run:
 ifeq ($(UNAME), Darwin)
@@ -74,14 +92,17 @@ docker-kill:
|
|||||||
-docker rm nsq aerospike redis opentsdb rabbitmq postgres memcached mysql kafka mqtt riemann snmp
|
-docker rm nsq aerospike redis opentsdb rabbitmq postgres memcached mysql kafka mqtt riemann snmp
|
||||||
|
|
||||||
# Run full unit tests using docker containers (includes setup and teardown)
|
# Run full unit tests using docker containers (includes setup and teardown)
|
||||||
test: docker-kill docker-run
|
test: vet docker-kill docker-run
|
||||||
# Sleeping for kafka leadership election, TSDB setup, etc.
|
# Sleeping for kafka leadership election, TSDB setup, etc.
|
||||||
sleep 60
|
sleep 60
|
||||||
# SUCCESS, running tests
|
# SUCCESS, running tests
|
||||||
go test -race ./...
|
go test -race ./...
|
||||||
|
|
||||||
# Run "short" unit tests
|
# Run "short" unit tests
|
||||||
test-short:
|
test-short: vet
|
||||||
go test -short ./...
|
go test -short ./...
|
||||||
|
|
||||||
.PHONY: test
|
vet:
|
||||||
|
go vet ./...
|
||||||
|
|
||||||
|
.PHONY: test test-short vet build default
|
||||||
|
|||||||
README.md (201 changed lines)
@@ -17,24 +17,17 @@ new plugins.
 
 ## Installation:
 
-NOTE: Telegraf 0.10.x is **not** backwards-compatible with previous versions
-of telegraf, both in the database layout and the configuration file. 0.2.x
-will continue to be supported, see below for download links.
-
-For more details on the differences between Telegraf 0.2.x and 0.10.x, see
-the [release blog post](https://influxdata.com/blog/announcing-telegraf-0-10-0/).
-
-### Linux deb and rpm packages:
+### Linux deb and rpm Packages:
 
 Latest:
-* http://get.influxdb.org/telegraf/telegraf_0.10.1-1_amd64.deb
-* http://get.influxdb.org/telegraf/telegraf-0.10.1-1.x86_64.rpm
+* http://get.influxdb.org/telegraf/telegraf_0.12.1-1_amd64.deb
+* http://get.influxdb.org/telegraf/telegraf-0.12.1-1.x86_64.rpm
 
-0.2.x:
-* http://get.influxdb.org/telegraf/telegraf_0.2.4_amd64.deb
-* http://get.influxdb.org/telegraf/telegraf-0.2.4-1.x86_64.rpm
+Latest (arm):
+* http://get.influxdb.org/telegraf/telegraf_0.12.1-1_armhf.deb
+* http://get.influxdb.org/telegraf/telegraf-0.12.1-1.armhf.rpm
 
-##### Package instructions:
+##### Package Instructions:
 
 * Telegraf binary is installed in `/usr/bin/telegraf`
 * Telegraf daemon configuration file is in `/etc/telegraf/telegraf.conf`
@@ -43,32 +36,47 @@ Latest:
 * On systemd systems (such as Ubuntu 15+), the telegraf daemon can be
 controlled via `systemctl [action] telegraf`
 
+### yum/apt Repositories:
+
+There is a yum/apt repo available for the whole InfluxData stack, see
+[here](https://docs.influxdata.com/influxdb/v0.10/introduction/installation/#installation)
+for instructions on setting up the repo. Once it is configured, you will be able
+to use this repo to install & update telegraf.
+
 ### Linux tarballs:
 
 Latest:
-* http://get.influxdb.org/telegraf/telegraf-0.10.1-1_linux_amd64.tar.gz
-* http://get.influxdb.org/telegraf/telegraf-0.10.1-1_linux_386.tar.gz
-* http://get.influxdb.org/telegraf/telegraf-0.10.1-1_linux_arm.tar.gz
+* http://get.influxdb.org/telegraf/telegraf-0.12.1-1_linux_amd64.tar.gz
+* http://get.influxdb.org/telegraf/telegraf-0.12.1-1_linux_i386.tar.gz
+* http://get.influxdb.org/telegraf/telegraf-0.12.1-1_linux_armhf.tar.gz
 
-0.2.x:
-* http://get.influxdb.org/telegraf/telegraf_linux_amd64_0.2.4.tar.gz
-* http://get.influxdb.org/telegraf/telegraf_linux_386_0.2.4.tar.gz
-* http://get.influxdb.org/telegraf/telegraf_linux_arm_0.2.4.tar.gz
-
-##### tarball instructions:
+##### tarball Instructions:
 
 To install the full directory structure with config file, run:
 
 ```
-sudo tar -C / -xvf ./telegraf-v0.10.1-1_linux_amd64.tar.gz
+sudo tar -C / -zxvf ./telegraf-0.12.1-1_linux_amd64.tar.gz
 ```
 
 To extract only the binary, run:
 
 ```
-tar -zxvf telegraf-v0.10.1-1_linux_amd64.tar.gz --strip-components=3 ./usr/bin/telegraf
+tar -zxvf telegraf-0.12.1-1_linux_amd64.tar.gz --strip-components=3 ./usr/bin/telegraf
 ```
 
+### FreeBSD tarball:
+
+Latest:
+* http://get.influxdb.org/telegraf/telegraf-0.12.1-1_freebsd_amd64.tar.gz
+
+##### tarball Instructions:
+
+See linux instructions above.
+
+### Ansible Role:
+
+Ansible role: https://github.com/rossmcdonald/telegraf
+
 ### OSX via Homebrew:
 
 ```
@@ -76,6 +84,12 @@ brew update
 brew install telegraf
 ```
 
+### Windows Binaries (EXPERIMENTAL)
+
+Latest:
+* http://get.influxdb.org/telegraf/telegraf-0.12.1-1_windows_amd64.zip
+* http://get.influxdb.org/telegraf/telegraf-0.12.1-1_windows_i386.zip
+
 ### From Source:
 
 Telegraf manages dependencies via [gdm](https://github.com/sparrc/gdm),
@@ -88,7 +102,7 @@ if you don't have it already. You also must build with golang version 1.5+.
 4. Run `cd $GOPATH/src/github.com/influxdata/telegraf`
 5. Run `make`
 
-### How to use it:
+## How to use it:
 
 ```console
 $ telegraf -help
@@ -131,7 +145,7 @@ Examples:
 
 ## Configuration
 
-See the [configuration guide](CONFIGURATION.md) for a rundown of the more advanced
+See the [configuration guide](docs/CONFIGURATION.md) for a rundown of the more advanced
 configuration options.
 
 ## Supported Input Plugins
@@ -142,42 +156,60 @@ more information on each, please look at the directory of the same name in
 
 Currently implemented sources:
 
-* aerospike
-* apache
-* bcache
-* disque
-* docker
-* elasticsearch
-* exec (generic JSON-emitting executable plugin)
-* haproxy
-* httpjson (generic JSON-emitting http service plugin)
-* influxdb
-* jolokia
-* leofs
-* lustre2
-* mailchimp
-* memcached
-* mongodb
-* mysql
-* nginx
-* nsq
-* phpfpm
-* phusion passenger
-* ping
-* postgresql
-* procstat
-* prometheus
-* puppetagent
-* rabbitmq
-* redis
-* rethinkdb
-* sql server (microsoft)
-* twemproxy
-* zfs
-* zookeeper
-* sensors
-* snmp
-* system
+* [aws cloudwatch](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/cloudwatch)
+* [aerospike](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/aerospike)
+* [apache](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/apache)
+* [bcache](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/bcache)
+* [cassandra](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/cassandra)
+* [couchbase](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/couchbase)
+* [couchdb](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/couchdb)
+* [disque](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/disque)
+* [dns query time](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/dns_query)
+* [docker](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/docker)
+* [dovecot](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/dovecot)
+* [elasticsearch](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/elasticsearch)
+* [exec](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/exec) (generic executable plugin, support JSON, influx, graphite and nagios)
+* [filestat](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/filestat)
+* [haproxy](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/haproxy)
+* [http_response](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/http_response)
+* [httpjson](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/httpjson) (generic JSON-emitting http service plugin)
+* [influxdb](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/influxdb)
+* [ipmi_sensor](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/ipmi_sensor)
+* [jolokia](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/jolokia)
+* [leofs](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/leofs)
+* [lustre2](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/lustre2)
+* [mailchimp](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/mailchimp)
+* [memcached](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/memcached)
+* [mesos](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/mesos)
+* [mongodb](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/mongodb)
+* [mysql](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/mysql)
+* [net_response](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/net_response)
+* [nginx](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/nginx)
+* [nsq](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/nsq)
+* [ntpq](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/ntpq)
+* [phpfpm](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/phpfpm)
+* [phusion passenger](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/passenger)
+* [ping](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/ping)
+* [postgresql](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/postgresql)
+* [postgresql_extensible](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/postgresql_extensible)
+* [powerdns](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/powerdns)
+* [procstat](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/procstat)
+* [prometheus](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/prometheus)
+* [puppetagent](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/puppetagent)
+* [rabbitmq](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/rabbitmq)
+* [raindrops](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/raindrops)
+* [redis](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/redis)
+* [rethinkdb](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/rethinkdb)
+* [riak](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/riak)
+* [sensors](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/sensors) (only available if built from source)
+* [snmp](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/snmp)
+* [sql server](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/sqlserver) (microsoft)
+* [twemproxy](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/twemproxy)
+* [zfs](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/zfs)
+* [zookeeper](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/zookeeper)
+* [win_perf_counters](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/win_perf_counters) (windows performance counters)
+* [sysstat](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/sysstat)
+* [system](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/system)
     * cpu
     * mem
     * net
@@ -185,35 +217,42 @@ Currently implemented sources:
     * disk
     * diskio
     * swap
+    * processes
+    * kernel (/proc/stat)
 
 Telegraf can also collect metrics via the following service plugins:
 
-* statsd
-* kafka_consumer
-* github_webhooks
+* [statsd](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/statsd)
+* [udp_listener](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/udp_listener)
+* [tcp_listener](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/tcp_listener)
+* [mqtt_consumer](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/mqtt_consumer)
+* [kafka_consumer](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/kafka_consumer)
+* [nats_consumer](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/nats_consumer)
+* [github_webhooks](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/github_webhooks)
 
 We'll be adding support for many more over the coming months. Read on if you
 want to add support for another service or third-party API.
 
 ## Supported Output Plugins
 
-* influxdb
-* amon
-* amqp
-* aws kinesis
-* aws cloudwatch
-* datadog
-* graphite
-* kafka
-* librato
-* mqtt
-* nsq
-* opentsdb
-* prometheus
-* riemann
+* [influxdb](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/influxdb)
+* [amon](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/amon)
+* [amqp](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/amqp)
+* [aws kinesis](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/kinesis)
+* [aws cloudwatch](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/cloudwatch)
+* [datadog](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/datadog)
+* [file](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/file)
+* [graphite](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/graphite)
+* [kafka](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/kafka)
+* [librato](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/librato)
+* [mqtt](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/mqtt)
+* [nsq](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/nsq)
+* [opentsdb](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/opentsdb)
+* [prometheus](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/prometheus_client)
+* [riemann](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/riemann)
 
 ## Contributing
 
 Please see the
 [contributing guide](CONTRIBUTING.md)
-for details on contributing a plugin or output to Telegraf.
+for details on contributing a plugin to Telegraf.
accumulator.go (191 changed lines)
@@ -1,188 +1,21 @@
 package telegraf
 
-import (
-	"fmt"
-	"log"
-	"math"
-	"sync"
-	"time"
-
-	"github.com/influxdata/telegraf/internal/models"
-
-	"github.com/influxdata/influxdb/client/v2"
-)
+import "time"
 
 type Accumulator interface {
-	Add(measurement string, value interface{},
-		tags map[string]string, t ...time.Time)
-	AddFields(measurement string, fields map[string]interface{},
-		tags map[string]string, t ...time.Time)
-
-	SetDefaultTags(tags map[string]string)
-	AddDefaultTag(key, value string)
-
-	Prefix() string
-	SetPrefix(prefix string)
+	// Create a point with a value, decorating it with tags
+	// NOTE: tags is expected to be owned by the caller, don't mutate
+	// it after passing to Add.
+	Add(measurement string,
+		value interface{},
+		tags map[string]string,
+		t ...time.Time)
+
+	AddFields(measurement string,
+		fields map[string]interface{},
+		tags map[string]string,
+		t ...time.Time)
 
 	Debug() bool
 	SetDebug(enabled bool)
 }
-
-func NewAccumulator(
-	inputConfig *models.InputConfig,
-	points chan *client.Point,
-) Accumulator {
-	acc := accumulator{}
-	acc.points = points
-	acc.inputConfig = inputConfig
-	return &acc
-}
-
-type accumulator struct {
-	sync.Mutex
-
-	points chan *client.Point
-
-	defaultTags map[string]string
-
-	debug bool
-
-	inputConfig *models.InputConfig
-
-	prefix string
-}
-
-func (ac *accumulator) Add(
-	measurement string,
-	value interface{},
-	tags map[string]string,
-	t ...time.Time,
-) {
-	fields := make(map[string]interface{})
-	fields["value"] = value
-	ac.AddFields(measurement, fields, tags, t...)
-}
-
-func (ac *accumulator) AddFields(
-	measurement string,
-	fields map[string]interface{},
-	tags map[string]string,
-	t ...time.Time,
-) {
-	if len(fields) == 0 || len(measurement) == 0 {
-		return
-	}
-
-	if !ac.inputConfig.Filter.ShouldTagsPass(tags) {
-		return
-	}
-
-	// Override measurement name if set
-	if len(ac.inputConfig.NameOverride) != 0 {
-		measurement = ac.inputConfig.NameOverride
-	}
-	// Apply measurement prefix and suffix if set
-	if len(ac.inputConfig.MeasurementPrefix) != 0 {
-		measurement = ac.inputConfig.MeasurementPrefix + measurement
-	}
-	if len(ac.inputConfig.MeasurementSuffix) != 0 {
-		measurement = measurement + ac.inputConfig.MeasurementSuffix
-	}
-
-	if tags == nil {
-		tags = make(map[string]string)
-	}
-	// Apply plugin-wide tags if set
-	for k, v := range ac.inputConfig.Tags {
-		if _, ok := tags[k]; !ok {
-			tags[k] = v
-		}
-	}
-	// Apply daemon-wide tags if set
-	for k, v := range ac.defaultTags {
-		if _, ok := tags[k]; !ok {
-			tags[k] = v
-		}
-	}
-
-	result := make(map[string]interface{})
-	for k, v := range fields {
-		// Filter out any filtered fields
-		if ac.inputConfig != nil {
-			if !ac.inputConfig.Filter.ShouldPass(k) {
-				continue
-			}
-		}
-		result[k] = v
-
-		// Validate uint64 and float64 fields
-		switch val := v.(type) {
-		case uint64:
-			// InfluxDB does not support writing uint64
-			if val < uint64(9223372036854775808) {
-				result[k] = int64(val)
-			} else {
-				result[k] = int64(9223372036854775807)
-			}
-		case float64:
-			// NaNs are invalid values in influxdb, skip measurement
-			if math.IsNaN(val) || math.IsInf(val, 0) {
-				if ac.debug {
-					log.Printf("Measurement [%s] field [%s] has a NaN or Inf "+
-						"field, skipping",
-						measurement, k)
-				}
-				continue
-			}
-		}
-	}
-	fields = nil
-	if len(result) == 0 {
-		return
-	}
-
-	var timestamp time.Time
-	if len(t) > 0 {
-		timestamp = t[0]
-	} else {
-		timestamp = time.Now()
-	}
-
-	if ac.prefix != "" {
-		measurement = ac.prefix + measurement
-	}
-
-	pt, err := client.NewPoint(measurement, tags, result, timestamp)
-	if err != nil {
-		log.Printf("Error adding point [%s]: %s\n", measurement, err.Error())
-		return
-	}
-	if ac.debug {
-		fmt.Println("> " + pt.String())
-	}
-	ac.points <- pt
-}
-
-func (ac *accumulator) SetDefaultTags(tags map[string]string) {
-	ac.defaultTags = tags
-}
-
-func (ac *accumulator) AddDefaultTag(key, value string) {
-	ac.defaultTags[key] = value
-}
-
-func (ac *accumulator) Prefix() string {
-	return ac.prefix
-}
-
-func (ac *accumulator) SetPrefix(prefix string) {
-	ac.prefix = prefix
-}
-
-func (ac *accumulator) Debug() bool {
-	return ac.debug
-}
-
-func (ac *accumulator) SetDebug(debug bool) {
-	ac.debug = debug
-}
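The slimmed-down `Accumulator` interface in `accumulator.go` is the contract input plugins code against: a plugin reports each point with a name, fields, and tags, and the accumulator attaches the timestamp if none is given. A minimal sketch of a caller (the `toyAccumulator` type and the sample measurements are hypothetical, for illustration only; they are not part of Telegraf):

```go
package main

import (
	"fmt"
	"time"
)

// Accumulator mirrors the trimmed interface from accumulator.go.
type Accumulator interface {
	Add(measurement string, value interface{},
		tags map[string]string, t ...time.Time)
	AddFields(measurement string, fields map[string]interface{},
		tags map[string]string, t ...time.Time)
}

// toyAccumulator is a hypothetical in-memory implementation used only
// to show the calling convention; the real one feeds a metric channel.
type toyAccumulator struct {
	lines []string
}

// Add wraps a single value into a "value" field, as the real
// accumulator does.
func (a *toyAccumulator) Add(m string, v interface{},
	tags map[string]string, t ...time.Time) {
	a.AddFields(m, map[string]interface{}{"value": v}, tags, t...)
}

// AddFields records one line per reported point.
func (a *toyAccumulator) AddFields(m string, fields map[string]interface{},
	tags map[string]string, t ...time.Time) {
	a.lines = append(a.lines, fmt.Sprintf("%s %v %v", m, tags, fields))
}

func main() {
	var acc Accumulator = &toyAccumulator{}
	// A plugin's Gather method decorates each point with tags and may
	// omit the timestamp, leaving it to the accumulator.
	acc.Add("load", 0.42, map[string]string{"host": "example"})
	acc.AddFields("mem", map[string]interface{}{"free": 1024}, nil)
	fmt.Println(acc.(*toyAccumulator).lines)
}
```

Note that the variadic `t ...time.Time` parameter makes the timestamp optional at every call site without a second method.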
agent/accumulator.go (new file, 174 lines)
@@ -0,0 +1,174 @@
+package agent
+
+import (
+	"fmt"
+	"log"
+	"math"
+	"sync"
+	"time"
+
+	"github.com/influxdata/telegraf"
+	"github.com/influxdata/telegraf/internal/models"
+)
+
+func NewAccumulator(
+	inputConfig *internal_models.InputConfig,
+	metrics chan telegraf.Metric,
+) *accumulator {
+	acc := accumulator{}
+	acc.metrics = metrics
+	acc.inputConfig = inputConfig
+	return &acc
+}
+
+type accumulator struct {
+	sync.Mutex
+
+	metrics chan telegraf.Metric
+
+	defaultTags map[string]string
+
+	debug bool
+
+	inputConfig *internal_models.InputConfig
+
+	prefix string
+}
+
+func (ac *accumulator) Add(
+	measurement string,
+	value interface{},
+	tags map[string]string,
+	t ...time.Time,
+) {
+	fields := make(map[string]interface{})
+	fields["value"] = value
+
+	if !ac.inputConfig.Filter.ShouldNamePass(measurement) {
+		return
+	}
+
+	ac.AddFields(measurement, fields, tags, t...)
+}
+
+func (ac *accumulator) AddFields(
+	measurement string,
+	fields map[string]interface{},
+	tags map[string]string,
+	t ...time.Time,
+) {
+	if len(fields) == 0 || len(measurement) == 0 {
+		return
+	}
+
+	if !ac.inputConfig.Filter.ShouldNamePass(measurement) {
+		return
+	}
+
+	if !ac.inputConfig.Filter.ShouldTagsPass(tags) {
+		return
+	}
+
+	// Override measurement name if set
+	if len(ac.inputConfig.NameOverride) != 0 {
+		measurement = ac.inputConfig.NameOverride
+	}
+	// Apply measurement prefix and suffix if set
+	if len(ac.inputConfig.MeasurementPrefix) != 0 {
+		measurement = ac.inputConfig.MeasurementPrefix + measurement
+	}
+	if len(ac.inputConfig.MeasurementSuffix) != 0 {
+		measurement = measurement + ac.inputConfig.MeasurementSuffix
+	}
+
+	if tags == nil {
+		tags = make(map[string]string)
+	}
+	// Apply daemon-wide tags if set
+	for k, v := range ac.defaultTags {
+		tags[k] = v
+	}
+	// Apply plugin-wide tags if set
+	for k, v := range ac.inputConfig.Tags {
+		tags[k] = v
+	}
+	ac.inputConfig.Filter.FilterTags(tags)
+
+	result := make(map[string]interface{})
+	for k, v := range fields {
+		// Filter out any filtered fields
+		if ac.inputConfig != nil {
+			if !ac.inputConfig.Filter.ShouldFieldsPass(k) {
+				continue
+			}
+		}
+
+		// Validate uint64 and float64 fields
+		switch val := v.(type) {
+		case uint64:
+			// InfluxDB does not support writing uint64
+			if val < uint64(9223372036854775808) {
+				result[k] = int64(val)
+			} else {
+				result[k] = int64(9223372036854775807)
+			}
+			continue
+		case float64:
+			// NaNs are invalid values in influxdb, skip measurement
+			if math.IsNaN(val) || math.IsInf(val, 0) {
+				if ac.debug {
+					log.Printf("Measurement [%s] field [%s] has a NaN or Inf "+
+						"field, skipping",
+						measurement, k)
+				}
+				continue
+			}
+		}
+
+		result[k] = v
+	}
+	fields = nil
+	if len(result) == 0 {
+		return
+	}
+
+	var timestamp time.Time
+	if len(t) > 0 {
+		timestamp = t[0]
+	} else {
+		timestamp = time.Now()
+	}
+
+	if ac.prefix != "" {
+		measurement = ac.prefix + measurement
+	}
+
+	m, err := telegraf.NewMetric(measurement, tags, result, timestamp)
+	if err != nil {
+		log.Printf("Error adding point [%s]: %s\n", measurement, err.Error())
+		return
+	}
+	if ac.debug {
+		fmt.Println("> " + m.String())
+	}
+	ac.metrics <- m
+}
+
+func (ac *accumulator) Debug() bool {
+	return ac.debug
+}
+
+func (ac *accumulator) SetDebug(debug bool) {
+	ac.debug = debug
+}
+
+func (ac *accumulator) setDefaultTags(tags map[string]string) {
+	ac.defaultTags = tags
+}
+
+func (ac *accumulator) addDefaultTag(key, value string) {
+	if ac.defaultTags == nil {
+		ac.defaultTags = make(map[string]string)
+	}
+	ac.defaultTags[key] = value
+}
agent/accumulator_test.go (new file, 334 lines)
@@ -0,0 +1,334 @@
|
|||||||
|
package agent
|
||||||
|
|
||||||
|
import (
|
||||||
|
"fmt"
|
||||||
|
"math"
|
||||||
|
"testing"
|
||||||
|
"time"
|
||||||
|
|
||||||
|
"github.com/influxdata/telegraf"
|
||||||
|
"github.com/influxdata/telegraf/internal/models"
|
||||||
|
|
||||||
|
"github.com/stretchr/testify/assert"
|
||||||
|
)
|
||||||
|
|
||||||
|
func TestAdd(t *testing.T) {
|
||||||
|
a := accumulator{}
|
||||||
|
now := time.Now()
|
||||||
|
a.metrics = make(chan telegraf.Metric, 10)
|
||||||
|
defer close(a.metrics)
|
||||||
|
a.inputConfig = &internal_models.InputConfig{}
|
||||||
|
|
||||||
|
a.Add("acctest", float64(101), map[string]string{})
|
||||||
|
a.Add("acctest", float64(101), map[string]string{"acc": "test"})
|
||||||
|
a.Add("acctest", float64(101), map[string]string{"acc": "test"}, now)
|
||||||
|
|
||||||
|
testm := <-a.metrics
|
||||||
|
actual := testm.String()
|
||||||
|
assert.Contains(t, actual, "acctest value=101")
|
||||||
|
|
||||||
|
testm = <-a.metrics
|
||||||
|
actual = testm.String()
|
||||||
|
assert.Contains(t, actual, "acctest,acc=test value=101")
|
||||||
|
|
||||||
|
testm = <-a.metrics
|
||||||
|
actual = testm.String()
|
||||||
|
assert.Equal(t,
|
||||||
|
fmt.Sprintf("acctest,acc=test value=101 %d", now.UnixNano()),
|
||||||
|
actual)
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestAddDefaultTags(t *testing.T) {
|
||||||
|
a := accumulator{}
|
||||||
|
a.addDefaultTag("default", "tag")
|
||||||
|
now := time.Now()
|
||||||
|
a.metrics = make(chan telegraf.Metric, 10)
|
||||||
|
defer close(a.metrics)
|
||||||
|
a.inputConfig = &internal_models.InputConfig{}
|
||||||
|
|
||||||
|
a.Add("acctest", float64(101), map[string]string{})
|
||||||
|
a.Add("acctest", float64(101), map[string]string{"acc": "test"})
|
||||||
|
a.Add("acctest", float64(101), map[string]string{"acc": "test"}, now)
|
||||||
|
|
||||||
|
testm := <-a.metrics
|
||||||
|
actual := testm.String()
|
||||||
|
assert.Contains(t, actual, "acctest,default=tag value=101")
|
||||||
|
|
||||||
|
testm = <-a.metrics
|
||||||
|
actual = testm.String()
|
||||||
|
assert.Contains(t, actual, "acctest,acc=test,default=tag value=101")
|
||||||
|
|
||||||
|
testm = <-a.metrics
|
||||||
|
actual = testm.String()
|
||||||
|
assert.Equal(t,
|
||||||
|
fmt.Sprintf("acctest,acc=test,default=tag value=101 %d", now.UnixNano()),
|
||||||
|
actual)
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestAddFields(t *testing.T) {
	a := accumulator{}
	now := time.Now()
	a.metrics = make(chan telegraf.Metric, 10)
	defer close(a.metrics)
	a.inputConfig = &internal_models.InputConfig{}

	fields := map[string]interface{}{
		"usage": float64(99),
	}
	a.AddFields("acctest", fields, map[string]string{})
	a.AddFields("acctest", fields, map[string]string{"acc": "test"})
	a.AddFields("acctest", fields, map[string]string{"acc": "test"}, now)

	testm := <-a.metrics
	actual := testm.String()
	assert.Contains(t, actual, "acctest usage=99")

	testm = <-a.metrics
	actual = testm.String()
	assert.Contains(t, actual, "acctest,acc=test usage=99")

	testm = <-a.metrics
	actual = testm.String()
	assert.Equal(t,
		fmt.Sprintf("acctest,acc=test usage=99 %d", now.UnixNano()),
		actual)
}

// Test that all Inf fields get dropped, and not added to metrics channel
func TestAddInfFields(t *testing.T) {
	inf := math.Inf(1)
	ninf := math.Inf(-1)

	a := accumulator{}
	now := time.Now()
	a.metrics = make(chan telegraf.Metric, 10)
	defer close(a.metrics)
	a.inputConfig = &internal_models.InputConfig{}

	fields := map[string]interface{}{
		"usage":  inf,
		"nusage": ninf,
	}
	a.AddFields("acctest", fields, map[string]string{})
	a.AddFields("acctest", fields, map[string]string{"acc": "test"})
	a.AddFields("acctest", fields, map[string]string{"acc": "test"}, now)

	assert.Len(t, a.metrics, 0)

	// test that non-inf fields are kept and not dropped
	fields["notinf"] = float64(100)
	a.AddFields("acctest", fields, map[string]string{})
	testm := <-a.metrics
	actual := testm.String()
	assert.Contains(t, actual, "acctest notinf=100")
}

// Test that nan fields are dropped and not added
func TestAddNaNFields(t *testing.T) {
	nan := math.NaN()

	a := accumulator{}
	now := time.Now()
	a.metrics = make(chan telegraf.Metric, 10)
	defer close(a.metrics)
	a.inputConfig = &internal_models.InputConfig{}

	fields := map[string]interface{}{
		"usage": nan,
	}
	a.AddFields("acctest", fields, map[string]string{})
	a.AddFields("acctest", fields, map[string]string{"acc": "test"})
	a.AddFields("acctest", fields, map[string]string{"acc": "test"}, now)

	assert.Len(t, a.metrics, 0)

	// test that non-nan fields are kept and not dropped
	fields["notnan"] = float64(100)
	a.AddFields("acctest", fields, map[string]string{})
	testm := <-a.metrics
	actual := testm.String()
	assert.Contains(t, actual, "acctest notnan=100")
}

func TestAddUint64Fields(t *testing.T) {
	a := accumulator{}
	now := time.Now()
	a.metrics = make(chan telegraf.Metric, 10)
	defer close(a.metrics)
	a.inputConfig = &internal_models.InputConfig{}

	fields := map[string]interface{}{
		"usage": uint64(99),
	}
	a.AddFields("acctest", fields, map[string]string{})
	a.AddFields("acctest", fields, map[string]string{"acc": "test"})
	a.AddFields("acctest", fields, map[string]string{"acc": "test"}, now)

	testm := <-a.metrics
	actual := testm.String()
	assert.Contains(t, actual, "acctest usage=99i")

	testm = <-a.metrics
	actual = testm.String()
	assert.Contains(t, actual, "acctest,acc=test usage=99i")

	testm = <-a.metrics
	actual = testm.String()
	assert.Equal(t,
		fmt.Sprintf("acctest,acc=test usage=99i %d", now.UnixNano()),
		actual)
}

func TestAddUint64Overflow(t *testing.T) {
	a := accumulator{}
	now := time.Now()
	a.metrics = make(chan telegraf.Metric, 10)
	defer close(a.metrics)
	a.inputConfig = &internal_models.InputConfig{}

	fields := map[string]interface{}{
		"usage": uint64(9223372036854775808),
	}
	a.AddFields("acctest", fields, map[string]string{})
	a.AddFields("acctest", fields, map[string]string{"acc": "test"})
	a.AddFields("acctest", fields, map[string]string{"acc": "test"}, now)

	testm := <-a.metrics
	actual := testm.String()
	assert.Contains(t, actual, "acctest usage=9223372036854775807i")

	testm = <-a.metrics
	actual = testm.String()
	assert.Contains(t, actual, "acctest,acc=test usage=9223372036854775807i")

	testm = <-a.metrics
	actual = testm.String()
	assert.Equal(t,
		fmt.Sprintf("acctest,acc=test usage=9223372036854775807i %d", now.UnixNano()),
		actual)
}

func TestAddInts(t *testing.T) {
	a := accumulator{}
	a.addDefaultTag("default", "tag")
	now := time.Now()
	a.metrics = make(chan telegraf.Metric, 10)
	defer close(a.metrics)
	a.inputConfig = &internal_models.InputConfig{}

	a.Add("acctest", int(101), map[string]string{})
	a.Add("acctest", int32(101), map[string]string{"acc": "test"})
	a.Add("acctest", int64(101), map[string]string{"acc": "test"}, now)

	testm := <-a.metrics
	actual := testm.String()
	assert.Contains(t, actual, "acctest,default=tag value=101i")

	testm = <-a.metrics
	actual = testm.String()
	assert.Contains(t, actual, "acctest,acc=test,default=tag value=101i")

	testm = <-a.metrics
	actual = testm.String()
	assert.Equal(t,
		fmt.Sprintf("acctest,acc=test,default=tag value=101i %d", now.UnixNano()),
		actual)
}

func TestAddFloats(t *testing.T) {
	a := accumulator{}
	a.addDefaultTag("default", "tag")
	now := time.Now()
	a.metrics = make(chan telegraf.Metric, 10)
	defer close(a.metrics)
	a.inputConfig = &internal_models.InputConfig{}

	a.Add("acctest", float32(101), map[string]string{"acc": "test"})
	a.Add("acctest", float64(101), map[string]string{"acc": "test"}, now)

	testm := <-a.metrics
	actual := testm.String()
	assert.Contains(t, actual, "acctest,acc=test,default=tag value=101")

	testm = <-a.metrics
	actual = testm.String()
	assert.Equal(t,
		fmt.Sprintf("acctest,acc=test,default=tag value=101 %d", now.UnixNano()),
		actual)
}

func TestAddStrings(t *testing.T) {
	a := accumulator{}
	a.addDefaultTag("default", "tag")
	now := time.Now()
	a.metrics = make(chan telegraf.Metric, 10)
	defer close(a.metrics)
	a.inputConfig = &internal_models.InputConfig{}

	a.Add("acctest", "test", map[string]string{"acc": "test"})
	a.Add("acctest", "foo", map[string]string{"acc": "test"}, now)

	testm := <-a.metrics
	actual := testm.String()
	assert.Contains(t, actual, "acctest,acc=test,default=tag value=\"test\"")

	testm = <-a.metrics
	actual = testm.String()
	assert.Equal(t,
		fmt.Sprintf("acctest,acc=test,default=tag value=\"foo\" %d", now.UnixNano()),
		actual)
}

func TestAddBools(t *testing.T) {
	a := accumulator{}
	a.addDefaultTag("default", "tag")
	now := time.Now()
	a.metrics = make(chan telegraf.Metric, 10)
	defer close(a.metrics)
	a.inputConfig = &internal_models.InputConfig{}

	a.Add("acctest", true, map[string]string{"acc": "test"})
	a.Add("acctest", false, map[string]string{"acc": "test"}, now)

	testm := <-a.metrics
	actual := testm.String()
	assert.Contains(t, actual, "acctest,acc=test,default=tag value=true")

	testm = <-a.metrics
	actual = testm.String()
	assert.Equal(t,
		fmt.Sprintf("acctest,acc=test,default=tag value=false %d", now.UnixNano()),
		actual)
}

// Test that tag filters get applied to metrics.
func TestAccFilterTags(t *testing.T) {
	a := accumulator{}
	now := time.Now()
	a.metrics = make(chan telegraf.Metric, 10)
	defer close(a.metrics)
	filter := internal_models.Filter{
		TagExclude: []string{"acc"},
	}
	assert.NoError(t, filter.CompileFilter())
	a.inputConfig = &internal_models.InputConfig{}
	a.inputConfig.Filter = filter

	a.Add("acctest", float64(101), map[string]string{})
	a.Add("acctest", float64(101), map[string]string{"acc": "test"})
	a.Add("acctest", float64(101), map[string]string{"acc": "test"}, now)

	testm := <-a.metrics
	actual := testm.String()
	assert.Contains(t, actual, "acctest value=101")

	testm = <-a.metrics
	actual = testm.String()
	assert.Contains(t, actual, "acctest value=101")

	testm = <-a.metrics
	actual = testm.String()
	assert.Equal(t,
		fmt.Sprintf("acctest value=101 %d", now.UnixNano()),
		actual)
}

@@ -1,4 +1,4 @@
-package telegraf
+package agent

 import (
 	cryptorand "crypto/rand"
@@ -11,12 +11,9 @@ import (
 	"sync"
 	"time"

+	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/internal/config"
 	"github.com/influxdata/telegraf/internal/models"
-	"github.com/influxdata/telegraf/plugins/inputs"
-	"github.com/influxdata/telegraf/plugins/outputs"
-
-	"github.com/influxdata/influxdb/client/v2"
 )

 // Agent runs telegraf and collects data based on the given config
@@ -30,25 +27,29 @@ func NewAgent(config *config.Config) (*Agent, error) {
 		Config: config,
 	}

-	if a.Config.Agent.Hostname == "" {
-		hostname, err := os.Hostname()
-		if err != nil {
-			return nil, err
-		}
-
-		a.Config.Agent.Hostname = hostname
-	}
-
-	config.Tags["host"] = a.Config.Agent.Hostname
+	if !a.Config.Agent.OmitHostname {
+		if a.Config.Agent.Hostname == "" {
+			hostname, err := os.Hostname()
+			if err != nil {
+				return nil, err
+			}
+
+			a.Config.Agent.Hostname = hostname
+		}
+
+		config.Tags["host"] = a.Config.Agent.Hostname
+	}

 	return a, nil
 }

 // Connect connects to all configured outputs
 func (a *Agent) Connect() error {
 	for _, o := range a.Config.Outputs {
+		o.Quiet = a.Config.Agent.Quiet
+
 		switch ot := o.Output.(type) {
-		case outputs.ServiceOutput:
+		case telegraf.ServiceOutput:
 			if err := ot.Start(); err != nil {
 				log.Printf("Service for output %s failed to start, exiting\n%s\n",
 					o.Name, err.Error())
@@ -61,7 +62,8 @@ func (a *Agent) Connect() error {
 		}
 		err := o.Output.Connect()
 		if err != nil {
-			log.Printf("Failed to connect to output %s, retrying in 15s, error was '%s' \n", o.Name, err)
+			log.Printf("Failed to connect to output %s, retrying in 15s, "+
+				"error was '%s' \n", o.Name, err)
 			time.Sleep(15 * time.Second)
 			err = o.Output.Connect()
 			if err != nil {
@@ -81,14 +83,14 @@ func (a *Agent) Close() error {
 	for _, o := range a.Config.Outputs {
 		err = o.Output.Close()
 		switch ot := o.Output.(type) {
-		case outputs.ServiceOutput:
+		case telegraf.ServiceOutput:
 			ot.Stop()
 		}
 	}
 	return err
 }

-func panicRecover(input *models.RunningInput) {
+func panicRecover(input *internal_models.RunningInput) {
 	if err := recover(); err != nil {
 		trace := make([]byte, 2048)
 		runtime.Stack(trace, true)
@@ -102,7 +104,7 @@ func panicRecover(input *models.RunningInput) {

 // gatherParallel runs the inputs that are using the same reporting interval
 // as the telegraf agent.
-func (a *Agent) gatherParallel(pointChan chan *client.Point) error {
+func (a *Agent) gatherParallel(metricC chan telegraf.Metric) error {
 	var wg sync.WaitGroup

 	start := time.Now()
@@ -115,13 +117,13 @@ func (a *Agent) gatherParallel(pointChan chan *client.Point) error {

 		wg.Add(1)
 		counter++
-		go func(input *models.RunningInput) {
+		go func(input *internal_models.RunningInput) {
 			defer panicRecover(input)
 			defer wg.Done()

-			acc := NewAccumulator(input.Config, pointChan)
+			acc := NewAccumulator(input.Config, metricC)
 			acc.SetDebug(a.Config.Agent.Debug)
-			acc.SetDefaultTags(a.Config.Tags)
+			acc.setDefaultTags(a.Config.Tags)

 			if jitter != 0 {
 				nanoSleep := rand.Int63n(jitter)
@@ -159,8 +161,8 @@ func (a *Agent) gatherParallel(pointChan chan *client.Point) error {
 // reporting interval.
 func (a *Agent) gatherSeparate(
 	shutdown chan struct{},
-	input *models.RunningInput,
-	pointChan chan *client.Point,
+	input *internal_models.RunningInput,
+	metricC chan telegraf.Metric,
 ) error {
 	defer panicRecover(input)

@@ -170,9 +172,9 @@ func (a *Agent) gatherSeparate(
 	var outerr error
 	start := time.Now()

-	acc := NewAccumulator(input.Config, pointChan)
+	acc := NewAccumulator(input.Config, metricC)
 	acc.SetDebug(a.Config.Agent.Debug)
-	acc.SetDefaultTags(a.Config.Tags)
+	acc.setDefaultTags(a.Config.Tags)

 	if err := input.Input.Gather(acc); err != nil {
 		log.Printf("Error in input [%s]: %s", input.Name, err)
@@ -202,13 +204,13 @@ func (a *Agent) gatherSeparate(
 func (a *Agent) Test() error {
 	shutdown := make(chan struct{})
 	defer close(shutdown)
-	pointChan := make(chan *client.Point)
+	metricC := make(chan telegraf.Metric)

 	// dummy receiver for the point channel
 	go func() {
 		for {
 			select {
-			case <-pointChan:
+			case <-metricC:
 				// do nothing
 			case <-shutdown:
 				return
@@ -217,8 +219,9 @@ func (a *Agent) Test() error {
 	}()

 	for _, input := range a.Config.Inputs {
-		acc := NewAccumulator(input.Config, pointChan)
+		acc := NewAccumulator(input.Config, metricC)
 		acc.SetDebug(true)
+		acc.setDefaultTags(a.Config.Tags)

 		fmt.Printf("* Plugin: %s, Collection 1\n", input.Name)
 		if input.Config.Interval != 0 {
@@ -244,13 +247,13 @@ func (a *Agent) Test() error {
 	return nil
 }

-// flush writes a list of points to all configured outputs
+// flush writes a list of metrics to all configured outputs
 func (a *Agent) flush() {
 	var wg sync.WaitGroup

 	wg.Add(len(a.Config.Outputs))
 	for _, o := range a.Config.Outputs {
-		go func(output *models.RunningOutput) {
+		go func(output *internal_models.RunningOutput) {
 			defer wg.Done()
 			err := output.Write()
 			if err != nil {
@@ -263,8 +266,8 @@ func (a *Agent) flush() {
 	wg.Wait()
 }

-// flusher monitors the points input channel and flushes on the minimum interval
-func (a *Agent) flusher(shutdown chan struct{}, pointChan chan *client.Point) error {
+// flusher monitors the metrics input channel and flushes on the minimum interval
+func (a *Agent) flusher(shutdown chan struct{}, metricC chan telegraf.Metric) error {
 	// Inelegant, but this sleep is to allow the Gather threads to run, so that
 	// the flusher will flush after metrics are collected.
 	time.Sleep(time.Millisecond * 200)
@@ -274,14 +277,14 @@ func (a *Agent) flusher(shutdown chan struct{}, pointChan chan *client.Point) er
 	for {
 		select {
 		case <-shutdown:
-			log.Println("Hang on, flushing any cached points before shutdown")
+			log.Println("Hang on, flushing any cached metrics before shutdown")
 			a.flush()
 			return nil
 		case <-ticker.C:
 			a.flush()
-		case pt := <-pointChan:
+		case m := <-metricC:
 			for _, o := range a.Config.Outputs {
-				o.AddPoint(pt)
+				o.AddMetric(m)
 			}
 		}
 	}
@@ -321,8 +324,24 @@ func (a *Agent) Run(shutdown chan struct{}) error {
 		a.Config.Agent.Interval.Duration, a.Config.Agent.Debug, a.Config.Agent.Quiet,
 		a.Config.Agent.Hostname, a.Config.Agent.FlushInterval.Duration)

-	// channel shared between all input threads for accumulating points
-	pointChan := make(chan *client.Point, 1000)
+	// channel shared between all input threads for accumulating metrics
+	metricC := make(chan telegraf.Metric, 10000)
+
+	for _, input := range a.Config.Inputs {
+		// Start service of any ServicePlugins
+		switch p := input.Input.(type) {
+		case telegraf.ServiceInput:
+			acc := NewAccumulator(input.Config, metricC)
+			acc.SetDebug(a.Config.Agent.Debug)
+			acc.setDefaultTags(a.Config.Tags)
+			if err := p.Start(acc); err != nil {
+				log.Printf("Service for input %s failed to start, exiting\n%s\n",
+					input.Name, err.Error())
+				return err
+			}
+			defer p.Stop()
+		}
+	}

 	// Round collection to nearest interval by sleeping
 	if a.Config.Agent.RoundInterval {
@@ -334,32 +353,20 @@ func (a *Agent) Run(shutdown chan struct{}) error {
 	wg.Add(1)
 	go func() {
 		defer wg.Done()
-		if err := a.flusher(shutdown, pointChan); err != nil {
+		if err := a.flusher(shutdown, metricC); err != nil {
 			log.Printf("Flusher routine failed, exiting: %s\n", err.Error())
 			close(shutdown)
 		}
 	}()

 	for _, input := range a.Config.Inputs {
-
-		// Start service of any ServicePlugins
-		switch p := input.Input.(type) {
-		case inputs.ServiceInput:
-			if err := p.Start(); err != nil {
-				log.Printf("Service for input %s failed to start, exiting\n%s\n",
-					input.Name, err.Error())
-				return err
-			}
-			defer p.Stop()
-		}
-
 		// Special handling for inputs that have their own collection interval
 		// configured. Default intervals are handled below with gatherParallel
 		if input.Config.Interval != 0 {
 			wg.Add(1)
-			go func(input *models.RunningInput) {
+			go func(input *internal_models.RunningInput) {
 				defer wg.Done()
-				if err := a.gatherSeparate(shutdown, input, pointChan); err != nil {
+				if err := a.gatherSeparate(shutdown, input, metricC); err != nil {
 					log.Printf(err.Error())
 				}
 			}(input)
@@ -369,7 +376,7 @@ func (a *Agent) Run(shutdown chan struct{}) error {
 	defer wg.Wait()

 	for {
-		if err := a.gatherParallel(pointChan); err != nil {
+		if err := a.gatherParallel(metricC); err != nil {
 			log.Printf(err.Error())
 		}

@@ -1,7 +1,6 @@
-package telegraf
+package agent

 import (
-	"github.com/stretchr/testify/assert"
 	"testing"
 	"time"

@@ -11,40 +10,50 @@ import (
 	_ "github.com/influxdata/telegraf/plugins/inputs/all"
 	// needing to load the outputs
 	_ "github.com/influxdata/telegraf/plugins/outputs/all"

+	"github.com/stretchr/testify/assert"
 )

+func TestAgent_OmitHostname(t *testing.T) {
+	c := config.NewConfig()
+	c.Agent.OmitHostname = true
+	_, err := NewAgent(c)
+	assert.NoError(t, err)
+	assert.NotContains(t, c.Tags, "host")
+}
+
 func TestAgent_LoadPlugin(t *testing.T) {
 	c := config.NewConfig()
 	c.InputFilters = []string{"mysql"}
-	err := c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	err := c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
 	assert.NoError(t, err)
 	a, _ := NewAgent(c)
 	assert.Equal(t, 1, len(a.Config.Inputs))

 	c = config.NewConfig()
 	c.InputFilters = []string{"foo"}
-	err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
 	assert.NoError(t, err)
 	a, _ = NewAgent(c)
 	assert.Equal(t, 0, len(a.Config.Inputs))

 	c = config.NewConfig()
 	c.InputFilters = []string{"mysql", "foo"}
-	err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
 	assert.NoError(t, err)
 	a, _ = NewAgent(c)
 	assert.Equal(t, 1, len(a.Config.Inputs))

 	c = config.NewConfig()
 	c.InputFilters = []string{"mysql", "redis"}
-	err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
 	assert.NoError(t, err)
 	a, _ = NewAgent(c)
 	assert.Equal(t, 2, len(a.Config.Inputs))

 	c = config.NewConfig()
 	c.InputFilters = []string{"mysql", "foo", "redis", "bar"}
-	err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
 	assert.NoError(t, err)
 	a, _ = NewAgent(c)
 	assert.Equal(t, 2, len(a.Config.Inputs))
@@ -53,42 +62,42 @@ func TestAgent_LoadPlugin(t *testing.T) {
 func TestAgent_LoadOutput(t *testing.T) {
 	c := config.NewConfig()
 	c.OutputFilters = []string{"influxdb"}
-	err := c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	err := c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
 	assert.NoError(t, err)
 	a, _ := NewAgent(c)
 	assert.Equal(t, 2, len(a.Config.Outputs))

 	c = config.NewConfig()
 	c.OutputFilters = []string{"kafka"}
-	err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
 	assert.NoError(t, err)
 	a, _ = NewAgent(c)
 	assert.Equal(t, 1, len(a.Config.Outputs))

 	c = config.NewConfig()
 	c.OutputFilters = []string{}
-	err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
 	assert.NoError(t, err)
 	a, _ = NewAgent(c)
 	assert.Equal(t, 3, len(a.Config.Outputs))

 	c = config.NewConfig()
 	c.OutputFilters = []string{"foo"}
-	err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
 	assert.NoError(t, err)
 	a, _ = NewAgent(c)
 	assert.Equal(t, 0, len(a.Config.Outputs))

 	c = config.NewConfig()
 	c.OutputFilters = []string{"influxdb", "foo"}
-	err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
 	assert.NoError(t, err)
 	a, _ = NewAgent(c)
 	assert.Equal(t, 2, len(a.Config.Outputs))

 	c = config.NewConfig()
 	c.OutputFilters = []string{"influxdb", "kafka"}
-	err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
 	assert.NoError(t, err)
 	assert.Equal(t, 3, len(c.Outputs))
 	a, _ = NewAgent(c)
@@ -96,7 +105,7 @@ func TestAgent_LoadOutput(t *testing.T) {

 	c = config.NewConfig()
 	c.OutputFilters = []string{"influxdb", "foo", "kafka", "bar"}
-	err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
 	assert.NoError(t, err)
 	a, _ = NewAgent(c)
 	assert.Equal(t, 3, len(a.Config.Outputs))
build.py (713 lines deleted)
@@ -1,713 +0,0 @@
#!/usr/bin/env python
#
# This is the Telegraf build script.
#
# Current caveats:
#   - Does not checkout the correct commit/branch (for now, you will need to do so manually)
#   - Has external dependencies for packaging (fpm) and uploading (boto)
#

import sys
import os
import subprocess
import time
import datetime
import shutil
import tempfile
import hashlib
import re

debug = False

# PACKAGING VARIABLES
INSTALL_ROOT_DIR = "/usr/bin"
LOG_DIR = "/var/log/telegraf"
SCRIPT_DIR = "/usr/lib/telegraf/scripts"
CONFIG_DIR = "/etc/telegraf"
LOGROTATE_DIR = "/etc/logrotate.d"

INIT_SCRIPT = "scripts/init.sh"
SYSTEMD_SCRIPT = "scripts/telegraf.service"
LOGROTATE_SCRIPT = "etc/logrotate.d/telegraf"
DEFAULT_CONFIG = "etc/telegraf.conf"
POSTINST_SCRIPT = "scripts/post-install.sh"
PREINST_SCRIPT = "scripts/pre-install.sh"

# META-PACKAGE VARIABLES
PACKAGE_LICENSE = "MIT"
PACKAGE_URL = "https://github.com/influxdata/telegraf"
MAINTAINER = "support@influxdb.com"
VENDOR = "InfluxData"
DESCRIPTION = "Plugin-driven server agent for reporting metrics into InfluxDB."

# SCRIPT START
prereqs = [ 'git', 'go' ]
optional_prereqs = [ 'gvm', 'fpm', 'rpmbuild' ]

fpm_common_args = "-f -s dir --log error \
 --vendor {} \
 --url {} \
 --license {} \
 --maintainer {} \
 --config-files {} \
 --config-files {} \
 --after-install {} \
 --before-install {} \
 --description \"{}\"".format(
    VENDOR,
    PACKAGE_URL,
    PACKAGE_LICENSE,
    MAINTAINER,
    CONFIG_DIR + '/telegraf.conf',
    LOGROTATE_DIR + '/telegraf',
    POSTINST_SCRIPT,
    PREINST_SCRIPT,
    DESCRIPTION)

targets = {
    'telegraf' : './cmd/telegraf/telegraf.go',
}

supported_builds = {
    'darwin': [ "amd64", "i386" ],
    'windows': [ "amd64", "i386", "arm" ],
    'linux': [ "amd64", "i386", "arm" ]
}
supported_packages = {
    "darwin": [ "tar", "zip" ],
    "linux": [ "deb", "rpm", "tar", "zip" ],
    "windows": [ "tar", "zip" ],
}

def run(command, allow_failure=False, shell=False):
    out = None
    if debug:
        print("[DEBUG] {}".format(command))
    try:
        if shell:
            out = subprocess.check_output(command, stderr=subprocess.STDOUT, shell=shell)
        else:
            out = subprocess.check_output(command.split(), stderr=subprocess.STDOUT)
        out = out.decode("utf8")
    except subprocess.CalledProcessError as e:
        print("")
        print("")
        print("Executed command failed!")
        print("-- Command run was: {}".format(command))
        print("-- Failure was: {}".format(e.output))
        if allow_failure:
            print("Continuing...")
            return None
        else:
            print("")
            print("Stopping.")
            sys.exit(1)
    except OSError as e:
        print("")
        print("")
        print("Invalid command!")
        print("-- Command run was: {}".format(command))
        print("-- Failure was: {}".format(e))
        if allow_failure:
            print("Continuing...")
            return out
        else:
            print("")
            print("Stopping.")
            sys.exit(1)
    else:
        return out

def create_temp_dir(prefix=None):
    if prefix is None:
        return tempfile.mkdtemp(prefix="telegraf-build.")
    else:
        return tempfile.mkdtemp(prefix=prefix)

def get_current_version():
    command = "git describe --always --tags --abbrev=0"
    out = run(command)
    return out.strip()

def get_current_commit(short=False):
    command = None
    if short:
        command = "git log --pretty=format:'%h' -n 1"
    else:
        command = "git rev-parse HEAD"
    out = run(command)
    return out.strip('\'\n\r ')

def get_current_branch():
    command = "git rev-parse --abbrev-ref HEAD"
    out = run(command)
    return out.strip()

def get_system_arch():
    arch = os.uname()[4]
    if arch == "x86_64":
        arch = "amd64"
    return arch

def get_system_platform():
    if sys.platform.startswith("linux"):
        return "linux"
    else:
        return sys.platform

def get_go_version():
    out = run("go version")
    matches = re.search('go version go(\S+)', out)
    if matches is not None:
        return matches.groups()[0].strip()
    return None

def check_path_for(b):
    def is_exe(fpath):
        return os.path.isfile(fpath) and os.access(fpath, os.X_OK)

    for path in os.environ["PATH"].split(os.pathsep):
        path = path.strip('"')
        full_path = os.path.join(path, b)
        if os.path.isfile(full_path) and os.access(full_path, os.X_OK):
            return full_path

def check_environ(build_dir = None):
    print("\nChecking environment:")
    for v in [ "GOPATH", "GOBIN", "GOROOT" ]:
        print("\t- {} -> {}".format(v, os.environ.get(v)))

    cwd = os.getcwd()
    if build_dir == None and os.environ.get("GOPATH") and os.environ.get("GOPATH") not in cwd:
        print("\n!! WARNING: Your current directory is not under your GOPATH. This may lead to build failures.")

def check_prereqs():
    print("\nChecking for dependencies:")
    for req in prereqs:
        path = check_path_for(req)
        if path is None:
            path = '?'
        print("\t- {} -> {}".format(req, path))
    for req in optional_prereqs:
        path = check_path_for(req)
        if path is None:
            path = '?'
        print("\t- {} (optional) -> {}".format(req, path))
    print("")

def upload_packages(packages, bucket_name=None, nightly=False):
    if debug:
        print("[DEBUG] upload_packags: {}".format(packages))
    try:
        import boto
        from boto.s3.key import Key
    except ImportError:
        print "!! Cannot upload packages without the 'boto' python library."
        return 1
    print("Uploading packages to S3...")
    print("")
    c = boto.connect_s3()
    if bucket_name is None:
        bucket_name = 'get.influxdb.org/telegraf'
    bucket = c.get_bucket(bucket_name.split('/')[0])
    print("\t - Using bucket: {}".format(bucket_name))
    for p in packages:
        if '/' in bucket_name:
            # Allow for nested paths within the bucket name (ex:
            # bucket/telegraf). Assuming forward-slashes as path
            # delimiter.
            name = os.path.join('/'.join(bucket_name.split('/')[1:]),
                                os.path.basename(p))
        else:
            name = os.path.basename(p)
        if bucket.get_key(name) is None or nightly:
            print("\t - Uploading {} to {}...".format(name, bucket_name))
            k = Key(bucket)
            k.key = name
            if nightly:
                n = k.set_contents_from_filename(p, replace=True)
            else:
                n = k.set_contents_from_filename(p, replace=False)
            k.make_public()
        else:
            print("\t - Not uploading {}, already exists.".format(p))
    print("")

def run_tests(race, parallel, timeout, no_vet):
    get_command = "go get -d -t ./..."
    print("Retrieving Go dependencies...")
    sys.stdout.flush()
    run(get_command)
    print("Running tests:")
    print("\tRace: ", race)
    if parallel is not None:
        print("\tParallel:", parallel)
    if timeout is not None:
        print("\tTimeout:", timeout)
    sys.stdout.flush()
    p = subprocess.Popen(["go", "fmt", "./..."], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    if len(out) > 0 or len(err) > 0:
        print("Code not formatted. Please use 'go fmt ./...' to fix formatting errors.")
        print(out)
        print(err)
        return False
    if not no_vet:
        p = subprocess.Popen(["go", "tool", "vet", "-composites=false", "./"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = p.communicate()
        if len(out) > 0 or len(err) > 0:
            print("Go vet failed. Please run 'go vet ./...' and fix any errors.")
            print(out)
            print(err)
            return False
    else:
        print("Skipping go vet ...")
    sys.stdout.flush()
    test_command = "go test -v"
    if race:
        test_command += " -race"
    if parallel is not None:
        test_command += " -parallel {}".format(parallel)
    if timeout is not None:
        test_command += " -timeout {}".format(timeout)
    test_command += " ./..."
    code = os.system(test_command)
    if code != 0:
        print("Tests Failed")
        return False
    else:
        print("Tests Passed")
        return True

def build(version=None,
          branch=None,
          commit=None,
          platform=None,
          arch=None,
          nightly=False,
          rc=None,
          race=False,
          clean=False,
          outdir=".",
          goarm_version="6"):
    print("-------------------------")
    print("")
    print("Build plan:")
    print("\t- version: {}".format(version))
    if rc:
        print("\t- release candidate: {}".format(rc))
    print("\t- commit: {}".format(commit))
    print("\t- branch: {}".format(branch))
    print("\t- platform: {}".format(platform))
    print("\t- arch: {}".format(arch))
    if arch == 'arm' and goarm_version:
        print("\t- ARM version: {}".format(goarm_version))
    print("\t- nightly? {}".format(str(nightly).lower()))
    print("\t- race enabled? {}".format(str(race).lower()))
    print("")

    if not os.path.exists(outdir):
        os.makedirs(outdir)
    elif clean and outdir != '/':
        print("Cleaning build directory...")
        shutil.rmtree(outdir)
        os.makedirs(outdir)

    if rc:
        # If a release candidate, update the version information accordingly
        version = "{}rc{}".format(version, rc)

    # Set architecture to something that Go expects
    if arch == 'i386':
        arch = '386'
    elif arch == 'x86_64':
        arch = 'amd64'

    print("Starting build...")
    for b, c in targets.items():
        print("\t- Building '{}'...".format(os.path.join(outdir, b)))
        build_command = ""
        build_command += "GOOS={} GOARCH={} ".format(platform, arch)
        if arch == "arm" and goarm_version:
            if goarm_version not in ["5", "6", "7", "arm64"]:
                print("!! Invalid ARM build version: {}".format(goarm_version))
            build_command += "GOARM={} ".format(goarm_version)
        build_command += "go build -o {} ".format(os.path.join(outdir, b))
        if race:
            build_command += "-race "
        go_version = get_go_version()
        if "1.4" in go_version:
            build_command += "-ldflags=\"-X main.buildTime '{}' ".format(datetime.datetime.utcnow().isoformat())
            build_command += "-X main.Version {} ".format(version)
            build_command += "-X main.Branch {} ".format(get_current_branch())
            build_command += "-X main.Commit {}\" ".format(get_current_commit())
        else:
            build_command += "-ldflags=\"-X main.buildTime='{}' ".format(datetime.datetime.utcnow().isoformat())
            build_command += "-X main.Version={} ".format(version)
            build_command += "-X main.Branch={} ".format(get_current_branch())
            build_command += "-X main.Commit={}\" ".format(get_current_commit())
        build_command += c
        run(build_command, shell=True)
    print("")

def create_dir(path):
    try:
        os.makedirs(path)
    except OSError as e:
        print(e)

def rename_file(fr, to):
    try:
        os.rename(fr, to)
    except OSError as e:
        print(e)
        # Return the original filename
        return fr
    else:
        # Return the new filename
        return to

def copy_file(fr, to):
    try:
        shutil.copy(fr, to)
    except OSError as e:
        print(e)

def create_package_fs(build_root):
    print("\t- Creating a filesystem hierarchy from directory: {}".format(build_root))
    # Using [1:] for the path names due to them being absolute
    # (will overwrite previous paths, per 'os.path.join' documentation)
    dirs = [ INSTALL_ROOT_DIR[1:], LOG_DIR[1:], SCRIPT_DIR[1:], CONFIG_DIR[1:], LOGROTATE_DIR[1:] ]
    for d in dirs:
        create_dir(os.path.join(build_root, d))
        os.chmod(os.path.join(build_root, d), 0o755)

def package_scripts(build_root):
    print("\t- Copying scripts and sample configuration to build directory")
    shutil.copyfile(INIT_SCRIPT, os.path.join(build_root, SCRIPT_DIR[1:], INIT_SCRIPT.split('/')[1]))
    os.chmod(os.path.join(build_root, SCRIPT_DIR[1:], INIT_SCRIPT.split('/')[1]), 0o644)
    shutil.copyfile(SYSTEMD_SCRIPT, os.path.join(build_root, SCRIPT_DIR[1:], SYSTEMD_SCRIPT.split('/')[1]))
    os.chmod(os.path.join(build_root, SCRIPT_DIR[1:], SYSTEMD_SCRIPT.split('/')[1]), 0o644)
    shutil.copyfile(LOGROTATE_SCRIPT, os.path.join(build_root, LOGROTATE_DIR[1:], "telegraf"))
    os.chmod(os.path.join(build_root, LOGROTATE_DIR[1:], "telegraf"), 0o644)
    shutil.copyfile(DEFAULT_CONFIG, os.path.join(build_root, CONFIG_DIR[1:], "telegraf.conf"))
    os.chmod(os.path.join(build_root, CONFIG_DIR[1:], "telegraf.conf"), 0o644)

def go_get(update=False):
    get_command = None
    if update:
        get_command = "go get -u -f -d ./..."
    else:
        get_command = "go get -d ./..."
    print("Retrieving Go dependencies...")
    run(get_command)

def generate_md5_from_file(path):
    m = hashlib.md5()
    with open(path, 'rb') as f:
        while True:
            data = f.read(4096)
            if not data:
                break
            m.update(data)
    return m.hexdigest()

def build_packages(build_output, version, nightly=False, rc=None, iteration=1):
    outfiles = []
    tmp_build_dir = create_temp_dir()
    if debug:
        print("[DEBUG] build_output = {}".format(build_output))
    try:
        print("-------------------------")
        print("")
        print("Packaging...")
        for p in build_output:
            # Create top-level folder displaying which platform (linux, etc)
            create_dir(os.path.join(tmp_build_dir, p))
            for a in build_output[p]:
                current_location = build_output[p][a]
                # Create second-level directory displaying the architecture (amd64, etc)
                build_root = os.path.join(tmp_build_dir, p, a)
                # Create directory tree to mimic file system of package
                create_dir(build_root)
                create_package_fs(build_root)
                # Copy in packaging and miscellaneous scripts
                package_scripts(build_root)
                # Copy newly-built binaries to packaging directory
                for b in targets:
                    if p == 'windows':
                        b = b + '.exe'
                    fr = os.path.join(current_location, b)
                    to = os.path.join(build_root, INSTALL_ROOT_DIR[1:], b)
                    print("\t- [{}][{}] - Moving from '{}' to '{}'".format(p, a, fr, to))
                    copy_file(fr, to)
                # Package the directory structure
                for package_type in supported_packages[p]:
                    print("\t- Packaging directory '{}' as '{}'...".format(build_root, package_type))
                    name = "telegraf"
                    # Reset version, iteration, and current location on each run
                    # since they may be modified below.
                    package_version = version
                    package_iteration = iteration
                    current_location = build_output[p][a]

                    if package_type in ['zip', 'tar']:
                        if nightly:
                            name = '{}-nightly_{}_{}'.format(name, p, a)
                        else:
                            name = '{}-{}-{}_{}_{}'.format(name, package_version, package_iteration, p, a)
                    if package_type == 'tar':
                        # Add `tar.gz` to path to reduce package size
                        current_location = os.path.join(current_location, name + '.tar.gz')
                    if rc is not None:
                        package_iteration = "0.rc{}".format(rc)
                    if a == '386':
                        a = 'i386'
                    fpm_command = "fpm {} --name {} -a {} -t {} --version {} --iteration {} -C {} -p {} ".format(
                        fpm_common_args,
                        name,
                        a,
                        package_type,
                        package_version,
                        package_iteration,
                        build_root,
                        current_location)
                    if package_type == "rpm":
                        fpm_command += "--depends coreutils "
                        fpm_command += "--depends lsof"
                    out = run(fpm_command, shell=True)
                    matches = re.search(':path=>"(.*)"', out)
                    outfile = None
                    if matches is not None:
                        outfile = matches.groups()[0]
                    if outfile is None:
                        print("[ COULD NOT DETERMINE OUTPUT ]")
                    else:
                        # Strip nightly version (the unix epoch) from filename
                        if nightly and package_type in ['deb', 'rpm']:
                            outfile = rename_file(outfile, outfile.replace("{}-{}".format(version, iteration), "nightly"))
                        outfiles.append(os.path.join(os.getcwd(), outfile))
                        # Display MD5 hash for generated package
                        print("\t\tMD5 = {}".format(generate_md5_from_file(outfile)))
        print("")
        if debug:
            print("[DEBUG] package outfiles: {}".format(outfiles))
        return outfiles
    finally:
        # Cleanup
        shutil.rmtree(tmp_build_dir)

def print_usage():
    print("Usage: ./build.py [options]")
    print("")
    print("Options:")
    print("\t --outdir=<path> \n\t\t- Send build output to a specified path. Defaults to ./build.")
    print("\t --arch=<arch> \n\t\t- Build for specified architecture. Acceptable values: x86_64|amd64, 386, arm, or all")
    print("\t --goarm=<arm version> \n\t\t- Build for specified ARM version (when building for ARM). Default value is: 6")
    print("\t --platform=<platform> \n\t\t- Build for specified platform. Acceptable values: linux, windows, darwin, or all")
    print("\t --version=<version> \n\t\t- Version information to apply to build metadata. If not specified, will be pulled from repo tag.")
    print("\t --commit=<commit> \n\t\t- Use specific commit for build (currently a NOOP).")
    print("\t --branch=<branch> \n\t\t- Build from a specific branch (currently a NOOP).")
    print("\t --rc=<rc number> \n\t\t- Whether or not the build is a release candidate (affects version information).")
    print("\t --iteration=<iteration number> \n\t\t- The iteration to display on the package output (defaults to 0 for RC's, and 1 otherwise).")
    print("\t --race \n\t\t- Whether the produced build should have race detection enabled.")
    print("\t --package \n\t\t- Whether the produced builds should be packaged for the target platform(s).")
    print("\t --nightly \n\t\t- Whether the produced build is a nightly (affects version information).")
    print("\t --update \n\t\t- Whether dependencies should be updated prior to building.")
    print("\t --test \n\t\t- Run Go tests. Will not produce a build.")
    print("\t --parallel \n\t\t- Run Go tests in parallel up to the count specified.")
    print("\t --timeout \n\t\t- Timeout for Go tests. Defaults to 480s.")
    print("\t --clean \n\t\t- Clean the build output directory prior to creating build.")
    print("\t --no-get \n\t\t- Do not run `go get` before building.")
    print("\t --bucket=<S3 bucket>\n\t\t- Full path of the bucket to upload packages to (must also specify --upload).")
    print("\t --debug \n\t\t- Displays debug output.")
    print("")

def print_package_summary(packages):
    print(packages)

def main():
    # Command-line arguments
    outdir = "build"
    commit = None
    target_platform = None
    target_arch = None
    nightly = False
    race = False
    branch = None
    version = get_current_version()
    rc = None
    package = False
    update = False
    clean = False
    upload = False
    test = False
    parallel = None
    timeout = None
    iteration = 1
    no_vet = False
    goarm_version = "6"
    run_get = True
    upload_bucket = None
    global debug

    for arg in sys.argv[1:]:
        if '--outdir' in arg:
            # Output directory. If none is specified, then builds will be placed in the same directory.
            output_dir = arg.split("=")[1]
        if '--commit' in arg:
            # Commit to build from. If none is specified, then it will build from the most recent commit.
            commit = arg.split("=")[1]
        if '--branch' in arg:
            # Branch to build from. If none is specified, then it will build from the current branch.
            branch = arg.split("=")[1]
        elif '--arch' in arg:
            # Target architecture. If none is specified, then it will build for the current arch.
            target_arch = arg.split("=")[1]
        elif '--platform' in arg:
            # Target platform. If none is specified, then it will build for the current platform.
            target_platform = arg.split("=")[1]
        elif '--version' in arg:
            # Version to assign to this build (0.9.5, etc)
            version = arg.split("=")[1]
        elif '--rc' in arg:
            # Signifies that this is a release candidate build.
            rc = arg.split("=")[1]
        elif '--race' in arg:
            # Signifies that race detection should be enabled.
            race = True
        elif '--package' in arg:
            # Signifies that packages should be built.
            package = True
        elif '--nightly' in arg:
            # Signifies that this is a nightly build.
            nightly = True
        elif '--update' in arg:
            # Signifies that dependencies should be updated.
            update = True
        elif '--upload' in arg:
            # Signifies that the resulting packages should be uploaded to S3
            upload = True
        elif '--test' in arg:
            # Run tests and exit
            test = True
        elif '--parallel' in arg:
            # Set parallel for tests.
            parallel = int(arg.split("=")[1])
        elif '--timeout' in arg:
            # Set timeout for tests.
            timeout = arg.split("=")[1]
        elif '--clean' in arg:
            # Signifies that the outdir should be deleted before building
            clean = True
        elif '--iteration' in arg:
            iteration = arg.split("=")[1]
        elif '--no-vet' in arg:
            no_vet = True
        elif '--goarm' in arg:
            # Signifies GOARM flag to pass to build command when compiling for ARM
            goarm_version = arg.split("=")[1]
        elif '--bucket' in arg:
            # The bucket to upload the packages to, relies on boto
            upload_bucket = arg.split("=")[1]
        elif '--no-get' in arg:
            run_get = False
        elif '--debug' in arg:
            print "[DEBUG] Using debug output"
            debug = True
        elif '--help' in arg:
            print_usage()
            return 0
        else:
            print("!! Unknown argument: {}".format(arg))
            print_usage()
            return 1

    if nightly:
        if rc:
            print("!! Cannot be both nightly and a release candidate! Stopping.")
            return 1
        # In order to support nightly builds on the repository, we are adding the epoch timestamp
        # to the version so that version numbers are always greater than the previous nightly.
        version = "{}.n{}".format(version, int(time.time()))

    # Pre-build checks
    check_environ()
    check_prereqs()

    if not commit:
        commit = get_current_commit(short=True)
    if not branch:
        branch = get_current_branch()
    if not target_arch:
        if 'arm' in get_system_arch():
            # Prevent uname from reporting ARM arch (eg 'armv7l')
            target_arch = "arm"
        else:
            target_arch = get_system_arch()
    if not target_platform:
        target_platform = get_system_platform()
    if rc or nightly:
        # If a release candidate or nightly, set iteration to 0 (instead of 1)
        iteration = 0

    if target_arch == '386':
        target_arch = 'i386'
    elif target_arch == 'x86_64':
        target_arch = 'amd64'

    build_output = {}
    if test:
        if not run_tests(race, parallel, timeout, no_vet):
            return 1
        return 0

    if run_get:
        go_get(update=update)

    platforms = []
    single_build = True
    if target_platform == 'all':
        platforms = list(supported_builds.keys())
        single_build = False
    else:
        platforms = [target_platform]

    for platform in platforms:
        build_output.update( { platform : {} } )
        archs = []
        if target_arch == "all":
            single_build = False
            archs = supported_builds.get(platform)
        else:
            archs = [target_arch]
        for arch in archs:
            od = outdir
            if not single_build:
                od = os.path.join(outdir, platform, arch)
            build(version=version,
                  branch=branch,
                  commit=commit,
                  platform=platform,
                  arch=arch,
                  nightly=nightly,
                  rc=rc,
                  race=race,
                  clean=clean,
                  outdir=od,
                  goarm_version=goarm_version)
            build_output.get(platform).update( { arch : od } )

    # Build packages
    if package:
        if not check_path_for("fpm"):
            print("!! Cannot package without command 'fpm'. Stopping.")
            return 1
        packages = build_packages(build_output, version, nightly=nightly, rc=rc, iteration=iteration)
        # Optionally upload to S3
        if upload:
            upload_packages(packages, bucket_name=upload_bucket, nightly=nightly)
    return 0

if __name__ == '__main__':
    sys.exit(main())
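The deleted build.py embeds version metadata at link time via `-ldflags "-X main.Version=... -X main.Branch=... -X main.Commit=..."`. A minimal sketch of the Go side those flags target (the `"dev"`/`"unknown"` defaults and the `banner` helper are illustrative assumptions, not telegraf's actual declarations):

```go
package main

import "fmt"

// Package-level string variables that a build script can override at link
// time, e.g.: go build -ldflags "-X main.Version=0.13.0 -X main.Branch=master".
var (
	Version = "dev"
	Branch  = "unknown"
	Commit  = "unknown"
)

// banner formats the injected build metadata, as a -version flag might print it.
func banner() string {
	return fmt.Sprintf("Telegraf %s (branch %s, commit %s)", Version, Branch, Commit)
}

func main() {
	fmt.Println(banner())
}
```

Built without ldflags the defaults survive; built with them, the linker replaces the strings, which is why the script never edits source files to stamp a release.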
@@ -4,14 +4,17 @@ machine:
   post:
     - sudo service zookeeper stop
     - go version
-    - go version | grep 1.5.2 || sudo rm -rf /usr/local/go
+    - go version | grep 1.6.2 || sudo rm -rf /usr/local/go
-    - wget https://storage.googleapis.com/golang/go1.5.2.linux-amd64.tar.gz
+    - wget https://storage.googleapis.com/golang/go1.6.2.linux-amd64.tar.gz
-    - sudo tar -C /usr/local -xzf go1.5.2.linux-amd64.tar.gz
+    - sudo tar -C /usr/local -xzf go1.6.2.linux-amd64.tar.gz
     - go version

 dependencies:
   override:
     - docker info
+  post:
+    - gem install fpm
+    - sudo apt-get install -y rpm python-boto

 test:
   override:
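The circle.yml hunk pins the CI Go toolchain: keep the preinstalled Go only if `go version` already reports the pinned release, otherwise remove and reinstall it. A sketch of that check as a reusable shell function (`GO_PIN` and `needs_go_install` are illustrative names mirroring the 1.6.2 pin in the diff, not part of the repo):

```shell
#!/bin/sh
GO_PIN=1.6.2

# needs_go_install: succeeds (exit 0) when the given `go version` output does
# not mention the pinned release, meaning a reinstall is required.
needs_go_install() {
    ! printf '%s\n' "$1" | grep -q "$2"
}

if needs_go_install "$(go version 2>/dev/null)" "$GO_PIN"; then
    # The CI steps at this point: rm -rf /usr/local/go, fetch the
    # go${GO_PIN}.linux-amd64.tar.gz tarball, and untar into /usr/local.
    echo "would reinstall Go $GO_PIN"
fi
```

Pinning by grepping `go version` is cheap but version-exact: bumping the toolchain means editing the version string in three CI lines, which is exactly what the hunk does.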
@@ -9,9 +9,11 @@ import (
 	"strings"
 	"syscall"

-	"github.com/influxdata/telegraf"
+	"github.com/influxdata/telegraf/agent"
 	"github.com/influxdata/telegraf/internal/config"
+	"github.com/influxdata/telegraf/plugins/inputs"
 	_ "github.com/influxdata/telegraf/plugins/inputs/all"
+	"github.com/influxdata/telegraf/plugins/outputs"
 	_ "github.com/influxdata/telegraf/plugins/outputs/all"
 )

@@ -29,11 +31,14 @@ var fSampleConfig = flag.Bool("sample-config", false,
 var fPidfile = flag.String("pidfile", "", "file to write our pid to")
 var fInputFilters = flag.String("input-filter", "",
 	"filter the inputs to enable, separator is :")
+var fInputList = flag.Bool("input-list", false,
+	"print available input plugins.")
 var fOutputFilters = flag.String("output-filter", "",
 	"filter the outputs to enable, separator is :")
+var fOutputList = flag.Bool("output-list", false,
+	"print available output plugins.")
 var fUsage = flag.String("usage", "",
 	"print usage for a plugin, ie, 'telegraf -usage mysql'")

 var fInputFiltersLegacy = flag.String("filter", "",
 	"filter the inputs to enable, separator is :")
 var fOutputFiltersLegacy = flag.String("outputfilter", "",
@@ -58,12 +63,21 @@ The flags are:
   -sample-config      print out full sample configuration to stdout
   -config-directory   directory containing additional *.conf files
   -input-filter       filter the input plugins to enable, separator is :
+  -input-list         print all the plugins inputs
   -output-filter      filter the output plugins to enable, separator is :
+  -output-list        print all the available outputs
   -usage              print usage for a plugin, ie, 'telegraf -usage mysql'
   -debug              print metrics as they're generated to stdout
   -quiet              run in quiet mode
   -version            print the version to stdout

+In addition to the -config flag, telegraf will also load the config file from
+an environment variable or default location. Precedence is:
+  1. -config flag
+  2. $TELEGRAF_CONFIG_PATH environment variable
+  3. $HOME/.telegraf/telegraf.conf
+  4. /etc/telegraf/telegraf.conf
+
 Examples:

   # generate a telegraf config file:
@@ -87,15 +101,14 @@ func main() {
 	reload <- true
 	for <-reload {
 		reload <- false
-		flag.Usage = usageExit
+		flag.Usage = func() { usageExit(0) }
 		flag.Parse()
-		if flag.NFlag() == 0 {
-			usageExit()
-		}
+		args := flag.Args()
 
 		var inputFilters []string
 		if *fInputFiltersLegacy != "" {
+			fmt.Printf("WARNING '--filter' flag is deprecated, please use" +
+				" '--input-filter'")
 			inputFilter := strings.TrimSpace(*fInputFiltersLegacy)
 			inputFilters = strings.Split(":"+inputFilter+":", ":")
 		}
@@ -106,6 +119,8 @@ func main() {
 
 		var outputFilters []string
 		if *fOutputFiltersLegacy != "" {
+			fmt.Printf("WARNING '--outputfilter' flag is deprecated, please use" +
+				" '--output-filter'")
 			outputFilter := strings.TrimSpace(*fOutputFiltersLegacy)
 			outputFilters = strings.Split(":"+outputFilter+":", ":")
 		}
@@ -114,6 +129,34 @@ func main() {
 			outputFilters = strings.Split(":"+outputFilter+":", ":")
 		}
 
+		if len(args) > 0 {
+			switch args[0] {
+			case "version":
+				v := fmt.Sprintf("Telegraf - Version %s", Version)
+				fmt.Println(v)
+				return
+			case "config":
+				config.PrintSampleConfig(inputFilters, outputFilters)
+				return
+			}
+		}
+
+		if *fOutputList {
+			fmt.Println("Available Output Plugins:")
+			for k, _ := range outputs.Outputs {
+				fmt.Printf(" %s\n", k)
+			}
+			return
+		}
+
+		if *fInputList {
+			fmt.Println("Available Input Plugins:")
+			for k, _ := range inputs.Inputs {
+				fmt.Printf(" %s\n", k)
+			}
+			return
+		}
+
 		if *fVersion {
 			v := fmt.Sprintf("Telegraf - Version %s", Version)
 			fmt.Println(v)
@@ -134,26 +177,19 @@ func main() {
 			return
 		}
 
-		var (
-			c   *config.Config
-			err error
-		)
-		if *fConfig != "" {
-			c = config.NewConfig()
-			c.OutputFilters = outputFilters
-			c.InputFilters = inputFilters
-			err = c.LoadConfig(*fConfig)
-			if err != nil {
-				log.Fatal(err)
-			}
-		} else {
-			fmt.Println("Usage: Telegraf")
-			flag.PrintDefaults()
-			return
-		}
+		// If no other options are specified, load the config file and run.
+		c := config.NewConfig()
+		c.OutputFilters = outputFilters
+		c.InputFilters = inputFilters
+		err := c.LoadConfig(*fConfig)
+		if err != nil {
+			fmt.Println(err)
+			os.Exit(1)
+		}
 		}
 
 		if *fConfigDirectoryLegacy != "" {
+			fmt.Printf("WARNING '--configdirectory' flag is deprecated, please use" +
+				" '--config-directory'")
 			err = c.LoadDirectory(*fConfigDirectoryLegacy)
 			if err != nil {
 				log.Fatal(err)
@@ -173,7 +209,7 @@ func main() {
 		log.Fatalf("Error: no inputs found, did you provide a valid config file?")
 	}
 
-	ag, err := telegraf.NewAgent(c)
+	ag, err := agent.NewAgent(c)
 	if err != nil {
 		log.Fatal(err)
 	}
@@ -235,7 +271,7 @@ func main() {
 		}
 	}
 
-func usageExit() {
+func usageExit(rc int) {
 	fmt.Println(usage)
-	os.Exit(0)
+	os.Exit(rc)
 }
@@ -3,16 +3,31 @@
 ## Generating a Configuration File
 
 A default Telegraf config file can be generated using the -sample-config flag:
-`telegraf -sample-config > telegraf.conf`
+
+```
+telegraf -sample-config > telegraf.conf
+```
 
 To generate a file with specific inputs and outputs, you can use the
 -input-filter and -output-filter flags:
-`telegraf -sample-config -input-filter cpu:mem:net:swap -output-filter influxdb:kafka`
-
-## `[tags]` Configuration
-
-Global tags can be specific in the `[tags]` section of the config file in
-key="value" format. All metrics being gathered on this host will be tagged
+
+```
+telegraf -sample-config -input-filter cpu:mem:net:swap -output-filter influxdb:kafka
+```
+
+You can see the latest config file with all available plugins here:
+[telegraf.conf](https://github.com/influxdata/telegraf/blob/master/etc/telegraf.conf)
+
+## Environment Variables
+
+Environment variables can be used anywhere in the config file, simply prepend
+them with $. For strings the variable must be within quotes (ie, "$STR_VAR"),
+for numbers and booleans they should be plain (ie, $INT_VAR, $BOOL_VAR)
+
+## `[global_tags]` Configuration
+
+Global tags can be specified in the `[global_tags]` section of the config file
+in key="value" format. All metrics being gathered on this host will be tagged
 with the tags specified here.
 
 ## `[agent]` Configuration
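The environment-variable rule added in the hunk above can be illustrated with a small config fragment. This is a sketch, not text from the repo, and the variable names (`$BATCH_SIZE`, `$DEBUG_MODE`) are made up for the example:

```toml
[global_tags]
  # string value: the variable must be inside quotes
  user = "$USER"

[agent]
  # numbers and booleans are left unquoted
  metric_batch_size = $BATCH_SIZE   # hypothetical integer variable
  debug = $DEBUG_MODE               # hypothetical boolean variable
```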
@@ -23,8 +38,12 @@ config.
 * **interval**: Default data collection interval for all inputs
 * **round_interval**: Rounds collection interval to 'interval'
 ie, if interval="10s" then always collect on :00, :10, :20, etc.
+* **metric_batch_size**: Telegraf will send metrics to output in batch of at
+most metric_batch_size metrics.
 * **metric_buffer_limit**: Telegraf will cache metric_buffer_limit metrics
 for each output, and will flush this buffer on a successful write.
+This should be a multiple of metric_batch_size and could not be less
+than 2 times metric_batch_size.
 * **collection_jitter**: Collection jitter is used to jitter
 the collection by a random amount.
 Each plugin will sleep for a random time within jitter before collecting.
@@ -41,9 +60,35 @@ ie, a jitter of 5s and flush_interval 10s means flushes will happen every 10-15s
 * **quiet**: Run telegraf in quiet mode.
 * **hostname**: Override default hostname, if empty use os.Hostname().
 
-## `[inputs.xxx]` Configuration
+#### Measurement Filtering
 
-There are some configuration options that are configurable per input:
+Filters can be configured per input or output, see below for examples.
+
+* **namepass**: An array of strings that is used to filter metrics generated by the
+current input. Each string in the array is tested as a glob match against
+measurement names and if it matches, the field is emitted.
+* **namedrop**: The inverse of pass, if a measurement name matches, it is not emitted.
+* **fieldpass**: An array of strings that is used to filter metrics generated by the
+current input. Each string in the array is tested as a glob match against field names
+and if it matches, the field is emitted. fieldpass is not available for outputs.
+* **fielddrop**: The inverse of pass, if a field name matches, it is not emitted.
+fielddrop is not available for outputs.
+* **tagpass**: tag names and arrays of strings that are used to filter
+measurements by the current input. Each string in the array is tested as a glob
+match against the tag name, and if it matches the measurement is emitted.
+* **tagdrop**: The inverse of tagpass. If a tag matches, the measurement is not
+emitted. This is tested on measurements that have passed the tagpass test.
+* **tagexclude**: tagexclude can be used to exclude a tag from measurement(s).
+As opposed to tagdrop, which will drop an entire measurement based on it's
+tags, tagexclude simply strips the given tag keys from the measurement. This
+can be used on inputs & outputs, but it is _recommended_ to be used on inputs,
+as it is more efficient to filter out tags at the ingestion point.
+* **taginclude**: taginclude is the inverse of tagexclude. It will only include
+the tag keys in the final measurement.
+
+## Input Configuration
+
+Some configuration options are configurable per input:
 
 * **name_override**: Override the base name of the measurement.
 (Default is the name of the input).
@@ -54,20 +99,6 @@ There are some configuration options that are configurable per input:
 global interval, but if one particular input should be run less or more often,
 you can configure that here.
 
-#### Input Filters
-
-There are also filters that can be configured per input:
-
-* **pass**: An array of strings that is used to filter metrics generated by the
-current input. Each string in the array is tested as a glob match against field names
-and if it matches, the field is emitted.
-* **drop**: The inverse of pass, if a field name matches, it is not emitted.
-* **tagpass**: tag names and arrays of strings that are used to filter
-measurements by the current input. Each string in the array is tested as a glob
-match against the tag name, and if it matches the measurement is emitted.
-* **tagdrop**: The inverse of tagpass. If a tag matches, the measurement is not
-emitted. This is tested on measurements that have passed the tagpass test.
-
 #### Input Configuration Examples
 
 This is a full working config that will output CPU data to an InfluxDB instance
@@ -76,7 +107,7 @@ measurements at a 10s interval and will collect per-cpu data, dropping any
 fields which begin with `time_`.
 
 ```toml
-[tags]
+[global_tags]
 dc = "denver-1"
 
 [agent]
@@ -93,7 +124,7 @@ fields which begin with `time_`.
   percpu = true
   totalcpu = false
   # filter all fields beginning with 'time_'
-  drop = ["time_*"]
+  fielddrop = ["time_*"]
 ```
 
 #### Input Config: tagpass and tagdrop
@@ -102,7 +133,7 @@ fields which begin with `time_`.
 [[inputs.cpu]]
   percpu = true
   totalcpu = false
-  drop = ["cpu_time"]
+  fielddrop = ["cpu_time"]
   # Don't collect CPU data for cpu6 & cpu7
   [inputs.cpu.tagdrop]
     cpu = [ "cpu6", "cpu7" ]
@@ -117,18 +148,46 @@ fields which begin with `time_`.
     path = [ "/opt", "/home*" ]
 ```
 
-#### Input Config: pass and drop
+#### Input Config: fieldpass and fielddrop
 
 ```toml
 # Drop all metrics for guest & steal CPU usage
 [[inputs.cpu]]
   percpu = false
   totalcpu = true
-  drop = ["usage_guest", "usage_steal"]
+  fielddrop = ["usage_guest", "usage_steal"]
 
 # Only store inode related metrics for disks
 [[inputs.disk]]
-  pass = ["inodes*"]
+  fieldpass = ["inodes*"]
+```
+
+#### Input Config: namepass and namedrop
+
+```toml
+# Drop all metrics about containers for kubelet
+[[inputs.prometheus]]
+  urls = ["http://kube-node-1:4194/metrics"]
+  namedrop = ["container_*"]
+
+# Only store rest client related metrics for kubelet
+[[inputs.prometheus]]
+  urls = ["http://kube-node-1:4194/metrics"]
+  namepass = ["rest_client_*"]
+```
+
+#### Input Config: taginclude and tagexclude
+
+```toml
+# Only include the "cpu" tag in the measurements for the cpu plugin.
+[[inputs.cpu]]
+  percpu = true
+  totalcpu = true
+  taginclude = ["cpu"]
+
+# Exclude the "fstype" tag from the measurements for the disk plugin.
+[[inputs.disk]]
+  tagexclude = ["fstype"]
 ```
 
 #### Input config: prefix, suffix, and override
@@ -156,6 +215,9 @@ This will emit measurements with the name `foobar`
 This plugin will emit measurements with two additional tags: `tag1=foo` and
 `tag2=bar`
 
+NOTE: Order matters, the `[inputs.cpu.tags]` table must be at the _end_ of the
+plugin definition.
+
 ```toml
 [[inputs.cpu]]
   percpu = false
@@ -181,32 +243,29 @@ to avoid measurement collisions:
   percpu = true
   totalcpu = false
   name_override = "percpu_usage"
-  drop = ["cpu_time*"]
+  fielddrop = ["cpu_time*"]
 ```
 
-## `[outputs.xxx]` Configuration
+## Output Configuration
 
 Telegraf also supports specifying multiple output sinks to send data to,
 configuring each output sink is different, but examples can be
 found by running `telegraf -sample-config`.
 
-Outputs also support the same configurable options as inputs
-(pass, drop, tagpass, tagdrop)
-
 ```toml
 [[outputs.influxdb]]
   urls = [ "http://localhost:8086" ]
   database = "telegraf"
   precision = "s"
   # Drop all measurements that start with "aerospike"
-  drop = ["aerospike*"]
+  namedrop = ["aerospike*"]
 
 [[outputs.influxdb]]
   urls = [ "http://localhost:8086" ]
   database = "telegraf-aerospike-data"
   precision = "s"
   # Only accept aerospike data:
-  pass = ["aerospike*"]
+  namepass = ["aerospike*"]
 
 [[outputs.influxdb]]
   urls = [ "http://localhost:8086" ]
docs/DATA_FORMATS_INPUT.md (new file, 362 lines)
@@ -0,0 +1,362 @@
+# Telegraf Input Data Formats
+
+Telegraf is able to parse the following input data formats into metrics:
+
+1. [InfluxDB Line Protocol](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#influx)
+1. [JSON](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#json)
+1. [Graphite](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#graphite)
+1. [Value](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#value), ie: 45 or "booyah"
+1. [Nagios](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#nagios) (exec input only)
+
+Telegraf metrics, like InfluxDB
+[points](https://docs.influxdata.com/influxdb/v0.10/write_protocols/line/),
+are a combination of four basic parts:
+
+1. Measurement Name
+1. Tags
+1. Fields
+1. Timestamp
+
+These four parts are easily defined when using InfluxDB line-protocol as a
+data format. But there are other data formats that users may want to use which
+require more advanced configuration to create usable Telegraf metrics.
+
+Plugins such as `exec` and `kafka_consumer` parse textual data. Up until now,
+these plugins were statically configured to parse just a single
+data format. `exec` mostly only supported parsing JSON, and `kafka_consumer` only
+supported data in InfluxDB line-protocol.
+
+But now we are normalizing the parsing of various data formats across all
+plugins that can support it. You will be able to identify a plugin that supports
+different data formats by the presence of a `data_format` config option, for
+example, in the exec plugin:
+
+```toml
+[[inputs.exec]]
+  ## Commands array
+  commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]
+
+  ## measurement name suffix (for separating different commands)
+  name_suffix = "_mycollector"
+
+  ## Data format to consume.
+  ## Each data format has it's own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  data_format = "json"
+
+  ## Additional configuration options go here
+```
+
+Each data_format has an additional set of configuration options available, which
+I'll go over below.
+
+# Influx:
+
+There are no additional configuration options for InfluxDB line-protocol. The
+metrics are parsed directly into Telegraf metrics.
+
+#### Influx Configuration:
+
+```toml
+[[inputs.exec]]
+  ## Commands array
+  commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]
+
+  ## measurement name suffix (for separating different commands)
+  name_suffix = "_mycollector"
+
+  ## Data format to consume.
+  ## Each data format has it's own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  data_format = "influx"
+```
+
+# JSON:
+
+The JSON data format flattens JSON into metric _fields_.
+NOTE: Only numerical values are converted to fields, and they are converted
+into a float. strings are ignored unless specified as a tag_key (see below).
+
+So for example, this JSON:
+
+```json
+{
+    "a": 5,
+    "b": {
+        "c": 6
+    },
+    "ignored": "I'm a string"
+}
+```
+
+Would get translated into _fields_ of a measurement:
+
+```
+myjsonmetric a=5,b_c=6
+```
+
+The _measurement_ _name_ is usually the name of the plugin,
+but can be overridden using the `name_override` config option.
+
+#### JSON Configuration:
+
+The JSON data format supports specifying "tag keys". If specified, keys
+will be searched for in the root-level of the JSON blob. If the key(s) exist,
+they will be applied as tags to the Telegraf metrics.
+
+For example, if you had this configuration:
+
+```toml
+[[inputs.exec]]
+  ## Commands array
+  commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]
+
+  ## measurement name suffix (for separating different commands)
+  name_suffix = "_mycollector"
+
+  ## Data format to consume.
+  ## Each data format has it's own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  data_format = "json"
+
+  ## List of tag names to extract from top-level of JSON server response
+  tag_keys = [
+    "my_tag_1",
+    "my_tag_2"
+  ]
+```
+
+with this JSON output from a command:
+
+```json
+{
+    "a": 5,
+    "b": {
+        "c": 6
+    },
+    "my_tag_1": "foo"
+}
+```
+
+Your Telegraf metrics would get tagged with "my_tag_1"
+
+```
+exec_mycollector,my_tag_1=foo a=5,b_c=6
+```
+
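The flattening rule the new doc describes above (numeric leaves become fields, nested keys joined with `_`) can be sketched in a few lines of standalone Go. This is an illustration, not the actual Telegraf JSON parser:

```go
package main

import (
	"encoding/json"
	"fmt"
	"sort"
)

// flatten walks a decoded JSON value and collects numeric leaves into
// fields, joining nested keys with "_". Non-numeric leaves are ignored,
// matching the doc's rule (tag_keys handling is omitted here).
func flatten(prefix string, v interface{}, fields map[string]float64) {
	switch t := v.(type) {
	case map[string]interface{}:
		for k, val := range t {
			key := k
			if prefix != "" {
				key = prefix + "_" + k
			}
			flatten(key, val, fields)
		}
	case float64: // encoding/json decodes all JSON numbers as float64
		fields[prefix] = t
	}
}

func main() {
	blob := []byte(`{"a": 5, "b": {"c": 6}, "ignored": "I'm a string"}`)
	var parsed map[string]interface{}
	if err := json.Unmarshal(blob, &parsed); err != nil {
		panic(err)
	}
	fields := map[string]float64{}
	flatten("", parsed, fields)

	keys := make([]string, 0, len(fields))
	for k := range fields {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Printf("%s=%v\n", k, fields[k])
	}
	// prints:
	// a=5
	// b_c=6
}
```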
+# Value:
+
+The "value" data format translates single values into Telegraf metrics. This
+is done by assigning a measurement name and setting a single field ("value")
+as the parsed metric.
+
+#### Value Configuration:
+
+You **must** tell Telegraf what type of metric to collect by using the
+`data_type` configuration option. Available options are:
+
+1. integer
+2. float or long
+3. string
+4. boolean
+
+**Note:** It is also recommended that you set `name_override` to a measurement
+name that makes sense for your metric, otherwise it will just be set to the
+name of the plugin.
+
+```toml
+[[inputs.exec]]
+  ## Commands array
+  commands = ["cat /proc/sys/kernel/random/entropy_avail"]
+
+  ## override the default metric name of "exec"
+  name_override = "entropy_available"
+
+  ## Data format to consume.
+  ## Each data format has it's own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  data_format = "value"
+  data_type = "integer" # required
+```
+
+# Graphite:
+
+The Graphite data format translates graphite _dot_ buckets directly into
+telegraf measurement names, with a single value field, and without any tags. For
+more advanced options, Telegraf supports specifying "templates" to translate
+graphite buckets into Telegraf metrics.
+
+#### Separator:
+
+You can specify a separator to use for the parsed metrics.
+By default, it will leave the metrics with a "." separator.
+Setting `separator = "_"` will translate:
+
+```
+cpu.usage.idle 99
+=> cpu_usage_idle value=99
+```
+
+#### Measurement/Tag Templates:
+
+The most basic template is to specify a single transformation to apply to all
+incoming metrics. _measurement_ is a special keyword that tells Telegraf which
+parts of the graphite bucket to combine into the measurement name. It can have a
+trailing `*` to indicate that the remainder of the metric should be used.
+Other words are considered tag keys. So the following template:
+
+```toml
+templates = [
+    "region.measurement*"
+]
+```
+
+would result in the following Graphite -> Telegraf transformation.
+
+```
+us-west.cpu.load 100
+=> cpu.load,region=us-west value=100
+```
+
+#### Field Templates:
+
+There is also a _field_ keyword, which can only be specified once.
+The field keyword tells Telegraf to give the metric that field name.
+So the following template:
+
+```toml
+templates = [
+    "measurement.measurement.field.field.region"
+]
+```
+
+would result in the following Graphite -> Telegraf transformation.
+
+```
+cpu.usage.idle.percent.us-west 100
+=> cpu_usage,region=us-west idle_percent=100
+```
+
+The field key can also be derived from the second "half" of the input metric-name by specifying ```field*```:
+```toml
+templates = [
+    "measurement.measurement.region.field*"
+]
+```
+
+would result in the following Graphite -> Telegraf transformation.
+
+```
+cpu.usage.us-west.idle.percentage 100
+=> cpu_usage,region=us-west idle_percentage=100
+```
+(This cannot be used in conjunction with "measurement*"!)
+
+#### Filter Templates:
+
+Users can also filter the template(s) to use based on the name of the bucket,
+using glob matching, like so:
+
+```toml
+templates = [
+    "cpu.* measurement.measurement.region",
+    "mem.* measurement.measurement.host"
+]
+```
+
+which would result in the following transformation:
+
+```
+cpu.load.us-west 100
+=> cpu_load,region=us-west value=100
+
+mem.cached.localhost 256
+=> mem_cached,host=localhost value=256
+```
+
+#### Adding Tags:
+
+Additional tags can be added to a metric that don't exist on the received metric.
+You can add additional tags by specifying them after the pattern.
+Tags have the same format as the line protocol.
+Multiple tags are separated by commas.
+
+```toml
+templates = [
+    "measurement.measurement.field.region datacenter=1a"
+]
+```
+
+would result in the following Graphite -> Telegraf transformation.
+
+```
+cpu.usage.idle.us-west 100
+=> cpu_usage,region=us-west,datacenter=1a idle=100
+```
+
+There are many more options available,
+[More details can be found here](https://github.com/influxdata/influxdb/tree/master/services/graphite#templates)
+
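The template mechanics described above (template words map positionally onto bucket segments, `measurement` words are joined into the name, other words become tag keys) can be sketched in standalone Go. This toy version covers only plain keywords, none of the `*`, `field`, or filter features:

```go
package main

import (
	"fmt"
	"strings"
)

// applyTemplate maps a graphite bucket onto a template such as
// "region.measurement.measurement", returning a measurement name and
// tags. Illustrative only; not the parser used by Telegraf.
func applyTemplate(bucket, template string) (string, map[string]string) {
	parts := strings.Split(bucket, ".")
	words := strings.Split(template, ".")
	tags := map[string]string{}
	var measurement []string
	for i, w := range words {
		if i >= len(parts) {
			break
		}
		if w == "measurement" {
			measurement = append(measurement, parts[i])
		} else {
			tags[w] = parts[i]
		}
	}
	return strings.Join(measurement, "."), tags
}

func main() {
	m, tags := applyTemplate("us-west.cpu.load", "region.measurement.measurement")
	fmt.Println(m, tags["region"])
	// prints: cpu.load us-west
}
```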
+#### Graphite Configuration:
+
+```toml
+[[inputs.exec]]
+  ## Commands array
+  commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]
+
+  ## measurement name suffix (for separating different commands)
+  name_suffix = "_mycollector"
+
+  ## Data format to consume.
+  ## Each data format has it's own unique set of configuration options, read
+  ## more about them here:
+  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+  data_format = "graphite"
+
+  ## This string will be used to join the matched values.
+  separator = "_"
+
+  ## Each template line requires a template pattern. It can have an optional
+  ## filter before the template and separated by spaces. It can also have optional extra
+  ## tags following the template. Multiple tags should be separated by commas and no spaces
+  ## similar to the line protocol format. There can be only one default template.
+  ## Templates support below format:
+  ## 1. filter + template
+  ## 2. filter + template + extra tag
+  ## 3. filter + template with field key
+  ## 4. default template
+  templates = [
+    "*.app env.service.resource.measurement",
+    "stats.* .host.measurement* region=us-west,agent=sensu",
+    "stats2.* .host.measurement.field",
+    "measurement*"
+  ]
+```
+
|
# Nagios:
|
||||||
|
|
||||||
|
There are no additional configuration options for Nagios line-protocol. The
|
||||||
|
metrics are parsed directly into Telegraf metrics.
|
||||||
|
|
||||||
|
Note: Nagios Input Data Formats is only supported in `exec` input plugin.
|
||||||
|
|
||||||
|
#### Nagios Configuration:
|
||||||
|
|
||||||
|
```toml
|
||||||
|
[[inputs.exec]]
|
||||||
|
## Commands array
|
||||||
|
commands = ["/usr/lib/nagios/plugins/check_load", "-w 5,6,7 -c 7,8,9"]
|
||||||
|
|
||||||
|
## measurement name suffix (for separating different commands)
|
||||||
|
name_suffix = "_mycollector"
|
||||||
|
|
||||||
|
## Data format to consume.
|
||||||
|
## Each data format has it's own unique set of configuration options, read
|
||||||
|
## more about them here:
|
||||||
|
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
|
||||||
|
data_format = "nagios"
|
||||||
|
```
|
docs/DATA_FORMATS_OUTPUT.md (new file, 150 lines)
@@ -0,0 +1,150 @@
# Telegraf Output Data Formats

Telegraf is able to serialize metrics into the following output data formats:

1. [InfluxDB Line Protocol](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md#influx)
1. [JSON](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md#json)
1. [Graphite](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md#graphite)

Telegraf metrics, like InfluxDB
[points](https://docs.influxdata.com/influxdb/v0.10/write_protocols/line/),
are a combination of four basic parts:

1. Measurement Name
1. Tags
1. Fields
1. Timestamp

In InfluxDB line protocol, these four parts are easily defined in textual form:

```
measurement_name[,tag1=val1,...] field1=val1[,field2=val2,...] [timestamp]
```
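As a rough illustration of how these four parts combine, the following is a hedged sketch only, not Telegraf's actual serializer: `buildLine` is a hypothetical helper, and it does no escaping of special characters.

```go
package main

import (
    "fmt"
    "sort"
    "strings"
)

// buildLine assembles measurement name, tags, fields, and a timestamp into
// InfluxDB line protocol. Simplified sketch: no escaping, float fields only.
func buildLine(name string, tags map[string]string, fields map[string]float64, ts int64) string {
    var tagKeys []string
    for k := range tags {
        tagKeys = append(tagKeys, k)
    }
    sort.Strings(tagKeys) // sort tags by key for a canonical ordering

    parts := []string{name}
    for _, k := range tagKeys {
        parts = append(parts, fmt.Sprintf("%s=%s", k, tags[k]))
    }

    var fieldKeys []string
    for k := range fields {
        fieldKeys = append(fieldKeys, k)
    }
    sort.Strings(fieldKeys)
    var fieldParts []string
    for _, k := range fieldKeys {
        fieldParts = append(fieldParts, fmt.Sprintf("%s=%v", k, fields[k]))
    }

    return fmt.Sprintf("%s %s %d", strings.Join(parts, ","), strings.Join(fieldParts, ","), ts)
}

func main() {
    line := buildLine("cpu",
        map[string]string{"host": "tars"},
        map[string]float64{"usage_idle": 98.09},
        1455320660004257758)
    fmt.Println(line)
    // prints: cpu,host=tars usage_idle=98.09 1455320660004257758
}
```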

For Telegraf outputs that write textual data (such as `kafka`, `mqtt`, and `file`),
InfluxDB line protocol was originally the only available output format. But now
we are normalizing telegraf metric "serializers" into a
[plugin-like interface](https://github.com/influxdata/telegraf/tree/master/plugins/serializers)
across all output plugins that can support it.
You can identify a plugin that supports different data formats
by the presence of a `data_format`
config option, for example, in the `file` output plugin:

```toml
[[outputs.file]]
  ## Files to write to, "stdout" is a specially handled file.
  files = ["stdout"]

  ## Data format to output.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
  data_format = "influx"

  ## Additional configuration options go here
```

Each data_format has an additional set of configuration options available, which
are covered below.

# Influx:

There are no additional configuration options for InfluxDB line-protocol. The
metrics are serialized directly into InfluxDB line-protocol.

### Influx Configuration:

```toml
[[outputs.file]]
  ## Files to write to, "stdout" is a specially handled file.
  files = ["stdout", "/tmp/metrics.out"]

  ## Data format to output.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
  data_format = "influx"
```

# Graphite:

The Graphite data format translates Telegraf metrics into _dot_ buckets. A
template can be specified for the output of Telegraf metrics into Graphite
buckets. The default template is:

```
template = "host.tags.measurement.field"
```

In the above template, we have four parts:

1. _host_ is a tag key. This can be any tag key that is in the Telegraf
metric(s). If the key doesn't exist, it will be ignored. If it does exist, the
tag value will be filled in.
1. _tags_ is a special keyword that outputs all remaining tag values, separated
by dots and in alphabetical order (by tag key). These will be filled after all
tag keys are filled.
1. _measurement_ is a special keyword that outputs the measurement name.
1. _field_ is a special keyword that outputs the field name.

This means the following InfluxDB metric -> Graphite conversion would happen:

```
cpu,cpu=cpu-total,dc=us-east-1,host=tars usage_idle=98.09,usage_user=0.89 1455320660004257758
=>
tars.cpu-total.us-east-1.cpu.usage_user 0.89 1455320660
tars.cpu-total.us-east-1.cpu.usage_idle 98.09 1455320660
```
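The filling rules above can be sketched as follows. This is a simplified, hypothetical `graphiteBucket` helper, not Telegraf's serializer; it fills template parts left to right, so named tag keys appearing before `tags` are excluded from its expansion.

```go
package main

import (
    "fmt"
    "sort"
    "strings"
)

// graphiteBucket fills a template's parts: a named tag key inserts its tag
// value, "measurement" inserts the metric name, "field" inserts the field
// name, and "tags" expands to all remaining tag values in alphabetical order
// by tag key. Missing tag keys are simply skipped.
func graphiteBucket(template, name, field string, tags map[string]string) string {
    used := make(map[string]bool)
    var out []string
    for _, part := range strings.Split(template, ".") {
        switch part {
        case "measurement":
            out = append(out, name)
        case "field":
            out = append(out, field)
        case "tags":
            var keys []string
            for k := range tags {
                if !used[k] {
                    keys = append(keys, k)
                }
            }
            sort.Strings(keys)
            for _, k := range keys {
                out = append(out, tags[k])
            }
        default:
            if v, ok := tags[part]; ok {
                out = append(out, v)
                used[part] = true
            }
        }
    }
    return strings.Join(out, ".")
}

func main() {
    tags := map[string]string{"cpu": "cpu-total", "dc": "us-east-1", "host": "tars"}
    fmt.Println(graphiteBucket("host.tags.measurement.field", "cpu", "usage_user", tags))
    // prints: tars.cpu-total.us-east-1.cpu.usage_user
}
```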

### Graphite Configuration:

```toml
[[outputs.file]]
  ## Files to write to, "stdout" is a specially handled file.
  files = ["stdout", "/tmp/metrics.out"]

  ## Data format to output.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
  data_format = "graphite"

  # prefix each graphite bucket
  prefix = "telegraf"
  # graphite template
  template = "host.tags.measurement.field"
```

# JSON:

The JSON data format serializes Telegraf metrics into JSON. The format is:

```json
{
    "fields":{
        "field_1":30,
        "field_2":4,
        "field_N":59,
        "n_images":660
    },
    "name":"docker",
    "tags":{
        "host":"raynor"
    },
    "timestamp":1458229140
}
```

### JSON Configuration:

```toml
[[outputs.file]]
  ## Files to write to, "stdout" is a specially handled file.
  files = ["stdout", "/tmp/metrics.out"]

  ## Data format to output.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
  data_format = "json"
```
@@ -28,6 +28,5 @@
 - github.com/wvanbergen/kazoo-go [MIT LICENSE](https://github.com/wvanbergen/kazoo-go/blob/master/MIT-LICENSE)
 - gopkg.in/dancannon/gorethink.v1 [APACHE LICENSE](https://github.com/dancannon/gorethink/blob/v1.1.2/LICENSE)
 - gopkg.in/mgo.v2 [BSD LICENSE](https://github.com/go-mgo/mgo/blob/v2/LICENSE)
-- golang.org/x/crypto/* [BSD LICENSE](https://github.com/golang/crypto/blob/master/LICENSE)
+- golang.org/x/crypto/ [BSD LICENSE](https://github.com/golang/crypto/blob/master/LICENSE)
 - internal Glob function [MIT LICENSE](https://github.com/ryanuber/go-glob/blob/master/LICENSE)
docs/WINDOWS_SERVICE.md (new file, 36 lines)
@@ -0,0 +1,36 @@
# Running Telegraf as a Windows Service

If you have tried to install Go binaries as Windows services with the **sc.exe**
tool, you may have seen that the service errors and stops running after a while.

**NSSM** (the Non-Sucking Service Manager) is a tool that helps you in a
[number of scenarios](http://nssm.cc/scenarios), including running Go binaries
that were not specifically designed to run as Windows services.

## NSSM Installation via Chocolatey

You can install [Chocolatey](https://chocolatey.org/) and [NSSM](http://nssm.cc/)
with these commands:

```powershell
iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))
choco install -y nssm
```

## Installing Telegraf as a Windows Service with NSSM

You can download the latest Telegraf Windows binaries (still experimental at
the moment) from [the Telegraf Github repo](https://github.com/influxdata/telegraf).

Then create a C:\telegraf folder, unzip the binary there, and modify the
**telegraf.conf** sample to select the metrics you want to send to **InfluxDB**.

Once you have NSSM installed on your system, the process is quite straightforward.
You only need to type this command in your Windows shell:

```powershell
nssm install Telegraf c:\telegraf\telegraf.exe -config c:\telegraf\telegraf.conf
```

Your service is now installed in Windows, and you can start and stop it
gracefully.
etc/telegraf.conf (1477 lines): file diff suppressed because it is too large
etc/telegraf_windows.conf (new file, 164 lines)
@@ -0,0 +1,164 @@
# Telegraf configuration

# Telegraf is entirely plugin driven. All metrics are gathered from the
# declared inputs, and sent to the declared outputs.

# Plugins must be declared in here to be active.
# To deactivate a plugin, comment out the name and any variables.

# Use 'telegraf -config telegraf.conf -test' to see what metrics a config
# file would generate.

# Global tags can be specified here in key="value" format.
[global_tags]
  # dc = "us-east-1" # will tag all metrics with dc=us-east-1
  # rack = "1a"

# Configuration for telegraf agent
[agent]
  ## Default data collection interval for all inputs
  interval = "10s"
  ## Rounds collection interval to 'interval'
  ## ie, if interval="10s" then always collect on :00, :10, :20, etc.
  round_interval = true

  ## Telegraf will cache metric_buffer_limit metrics for each output, and will
  ## flush this buffer on a successful write.
  metric_buffer_limit = 1000
  ## Flush the buffer whenever full, regardless of flush_interval.
  flush_buffer_when_full = true

  ## Collection jitter is used to jitter the collection by a random amount.
  ## Each plugin will sleep for a random time within jitter before collecting.
  ## This can be used to avoid many plugins querying things like sysfs at the
  ## same time, which can have a measurable effect on the system.
  collection_jitter = "0s"

  ## Default flushing interval for all outputs. You shouldn't set this below
  ## interval. Maximum flush_interval will be flush_interval + flush_jitter
  flush_interval = "10s"
  ## Jitter the flush interval by a random amount. This is primarily to avoid
  ## large write spikes for users running a large number of telegraf instances.
  ## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
  flush_jitter = "0s"

  ## Run telegraf in debug mode
  debug = false
  ## Run telegraf in quiet mode
  quiet = false
  ## Override default hostname, if empty use os.Hostname()
  hostname = ""


###############################################################################
#                                  OUTPUTS                                    #
###############################################################################

# Configuration for influxdb server to send metrics to
[[outputs.influxdb]]
  # The full HTTP or UDP endpoint URL for your InfluxDB instance.
  # Multiple urls can be specified, but it is assumed that they are part of the
  # same cluster; this means that only ONE of the urls will be written to each
  # interval.
  # urls = ["udp://localhost:8089"] # UDP endpoint example
  urls = ["http://localhost:8086"] # required
  # The target database for metrics (telegraf will create it if it does not exist)
  database = "telegraf" # required
  # Precision of writes, valid values are "ns", "us" (or "µs"), "ms", "s", "m", "h".
  # note: using second precision greatly helps InfluxDB compression
  precision = "s"

  ## Write timeout (for the InfluxDB client), formatted as a string.
  ## If not provided, will default to 5s. 0s means no timeout (not recommended).
  timeout = "5s"
  # username = "telegraf"
  # password = "metricsmetricsmetricsmetrics"
  # Set the user agent for HTTP POSTs (can be useful for log differentiation)
  # user_agent = "telegraf"
  # Set UDP payload size, defaults to InfluxDB UDP Client default (512 bytes)
  # udp_payload = 512


###############################################################################
#                                   INPUTS                                    #
###############################################################################

# Windows Performance Counters plugin.
# This is the recommended method of monitoring system metrics on windows,
# as the regular system plugins (inputs.cpu, inputs.mem, etc.) rely on WMI,
# which utilizes a lot of system resources.
#
# See more configuration examples at:
# https://github.com/influxdata/telegraf/tree/master/plugins/inputs/win_perf_counters

[[inputs.win_perf_counters]]
  [[inputs.win_perf_counters.object]]
    # Processor usage, alternative to native; reports on a per-core basis.
    ObjectName = "Processor"
    Instances = ["*"]
    Counters = ["% Idle Time", "% Interrupt Time", "% Privileged Time", "% User Time", "% Processor Time"]
    Measurement = "win_cpu"
    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).

  [[inputs.win_perf_counters.object]]
    # Disk times and queues
    ObjectName = "LogicalDisk"
    Instances = ["*"]
    Counters = ["% Idle Time", "% Disk Time", "% Disk Read Time", "% Disk Write Time", "% User Time", "Current Disk Queue Length"]
    Measurement = "win_disk"
    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).

  [[inputs.win_perf_counters.object]]
    ObjectName = "System"
    Counters = ["Context Switches/sec", "System Calls/sec"]
    Instances = ["------"]
    Measurement = "win_system"
    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).

  [[inputs.win_perf_counters.object]]
    # Example query where the Instance portion must be removed to get data back,
    # such as from the Memory object.
    ObjectName = "Memory"
    Counters = ["Available Bytes", "Cache Faults/sec", "Demand Zero Faults/sec", "Page Faults/sec", "Pages/sec", "Transition Faults/sec", "Pool Nonpaged Bytes", "Pool Paged Bytes"]
    Instances = ["------"] # Use 6 x - to remove the Instance bit from the query.
    Measurement = "win_mem"
    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).


# Windows system plugins using WMI (disabled by default, using
# win_perf_counters over WMI is recommended)

# Read metrics about cpu usage
#[[inputs.cpu]]
  ## Whether to report per-cpu stats or not
  #percpu = true
  ## Whether to report total system cpu stats or not
  #totalcpu = true
  ## Comment this line if you want the raw CPU time metrics
  #fielddrop = ["time_*"]

# Read metrics about disk usage by mount point
#[[inputs.disk]]
  ## By default, telegraf gathers stats for all mountpoints.
  ## Setting mountpoints will restrict the stats to the specified mountpoints.
  ## mount_points=["/"]

  ## Ignore some mountpoints by filesystem type. For example (dev)tmpfs (usually
  ## present on /run, /var/run, /dev/shm or /dev).
  #ignore_fs = ["tmpfs", "devtmpfs"]

# Read metrics about disk IO by device
#[[inputs.diskio]]
  ## By default, telegraf will gather stats for all devices including
  ## disk partitions.
  ## Setting devices will restrict the stats to the specified devices.
  ## devices = ["sda", "sdb"]
  ## Uncomment the following line if you do not need disk serial numbers.
  ## skip_serial_number = true

# Read metrics about memory usage
#[[inputs.mem]]
  # no configuration

# Read metrics about swap memory usage
#[[inputs.swap]]
  # no configuration
input.go (new file, 31 lines)
@@ -0,0 +1,31 @@
package telegraf

type Input interface {
    // SampleConfig returns the default configuration of the Input
    SampleConfig() string

    // Description returns a one-sentence description of the Input
    Description() string

    // Gather takes in an accumulator and adds the metrics that the Input
    // gathers. This is called every "interval"
    Gather(Accumulator) error
}

type ServiceInput interface {
    // SampleConfig returns the default configuration of the Input
    SampleConfig() string

    // Description returns a one-sentence description of the Input
    Description() string

    // Gather takes in an accumulator and adds the metrics that the Input
    // gathers. This is called every "interval"
    Gather(Accumulator) error

    // Start starts the ServiceInput's service, whatever that may be
    Start(Accumulator) error

    // Stop stops the services and closes any necessary channels and connections
    Stop()
}
internal/buffer/buffer.go (new file, 77 lines)
@@ -0,0 +1,77 @@
package buffer

import (
    "github.com/influxdata/telegraf"
)

// Buffer is an object for storing metrics in a circular buffer.
type Buffer struct {
    buf chan telegraf.Metric
    // total dropped metrics
    drops int
    // total metrics added
    total int
}

// NewBuffer returns a Buffer.
// size is the maximum number of metrics that Buffer will cache. If Add is
// called when the buffer is full, then the oldest metric(s) will be dropped.
func NewBuffer(size int) *Buffer {
    return &Buffer{
        buf: make(chan telegraf.Metric, size),
    }
}

// IsEmpty returns true if Buffer is empty.
func (b *Buffer) IsEmpty() bool {
    return len(b.buf) == 0
}

// Len returns the current length of the buffer.
func (b *Buffer) Len() int {
    return len(b.buf)
}

// Drops returns the total number of dropped metrics that have occurred in this
// buffer since instantiation.
func (b *Buffer) Drops() int {
    return b.drops
}

// Total returns the total number of metrics that have been added to this buffer.
func (b *Buffer) Total() int {
    return b.total
}

// Add adds metrics to the buffer.
func (b *Buffer) Add(metrics ...telegraf.Metric) {
    for i := range metrics {
        b.total++
        select {
        case b.buf <- metrics[i]:
        default:
            // Buffer is full: drop the oldest metric to make room.
            b.drops++
            <-b.buf
            b.buf <- metrics[i]
        }
    }
}

// Batch returns a batch of metrics of size batchSize.
// The batch will be of maximum length batchSize. It can be less than batchSize
// if the length of Buffer is less than batchSize.
func (b *Buffer) Batch(batchSize int) []telegraf.Metric {
    n := min(len(b.buf), batchSize)
    out := make([]telegraf.Metric, n)
    for i := 0; i < n; i++ {
        out[i] = <-b.buf
    }
    return out
}

func min(a, b int) int {
    if b < a {
        return b
    }
    return a
}
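The drop-oldest behavior in Add relies on a buffered channel's non-blocking send. The idiom can be seen in isolation with a plain int channel; this is a standalone sketch for illustration, not part of the telegraf package.

```go
package main

import "fmt"

// push sends v into buf; if buf is full it discards the oldest element first,
// mirroring the select in Buffer.Add above.
func push(buf chan int, v int) {
    select {
    case buf <- v:
    default:
        <-buf // drop the oldest element
        buf <- v
    }
}

func main() {
    buf := make(chan int, 3)
    for v := 1; v <= 5; v++ {
        push(buf, v)
    }
    for len(buf) > 0 {
        fmt.Println(<-buf)
    }
    // prints 3, 4, 5: values 1 and 2 were dropped
}
```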
internal/buffer/buffer_test.go (new file, 94 lines)
@@ -0,0 +1,94 @@
package buffer

import (
    "testing"

    "github.com/influxdata/telegraf"
    "github.com/influxdata/telegraf/testutil"

    "github.com/stretchr/testify/assert"
)

var metricList = []telegraf.Metric{
    testutil.TestMetric(2, "mymetric1"),
    testutil.TestMetric(1, "mymetric2"),
    testutil.TestMetric(11, "mymetric3"),
    testutil.TestMetric(15, "mymetric4"),
    testutil.TestMetric(8, "mymetric5"),
}

func BenchmarkAddMetrics(b *testing.B) {
    buf := NewBuffer(10000)
    m := testutil.TestMetric(1, "mymetric")
    for n := 0; n < b.N; n++ {
        buf.Add(m)
    }
}

func TestNewBufferBasicFuncs(t *testing.T) {
    b := NewBuffer(10)

    assert.True(t, b.IsEmpty())
    assert.Zero(t, b.Len())
    assert.Zero(t, b.Drops())
    assert.Zero(t, b.Total())

    m := testutil.TestMetric(1, "mymetric")
    b.Add(m)
    assert.False(t, b.IsEmpty())
    assert.Equal(t, b.Len(), 1)
    assert.Equal(t, b.Drops(), 0)
    assert.Equal(t, b.Total(), 1)

    b.Add(metricList...)
    assert.False(t, b.IsEmpty())
    assert.Equal(t, b.Len(), 6)
    assert.Equal(t, b.Drops(), 0)
    assert.Equal(t, b.Total(), 6)
}

func TestDroppingMetrics(t *testing.T) {
    b := NewBuffer(10)

    // Add up to the size of the buffer
    b.Add(metricList...)
    b.Add(metricList...)
    assert.False(t, b.IsEmpty())
    assert.Equal(t, b.Len(), 10)
    assert.Equal(t, b.Drops(), 0)
    assert.Equal(t, b.Total(), 10)

    // Add 5 more and verify they were dropped
    b.Add(metricList...)
    assert.False(t, b.IsEmpty())
    assert.Equal(t, b.Len(), 10)
    assert.Equal(t, b.Drops(), 5)
    assert.Equal(t, b.Total(), 15)
}

func TestGettingBatches(t *testing.T) {
    b := NewBuffer(20)

    // Verify that the batch returned is smaller than requested when there are
    // not as many items as requested.
    b.Add(metricList...)
    batch := b.Batch(10)
    assert.Len(t, batch, 5)

    // Verify that the buffer is now empty
    assert.True(t, b.IsEmpty())
    assert.Zero(t, b.Len())
    assert.Zero(t, b.Drops())
    assert.Equal(t, b.Total(), 5)

    // Verify that the batch returned is not more than the size requested
    b.Add(metricList...)
    batch = b.Batch(3)
    assert.Len(t, batch, 3)

    // Verify that buffer is not empty
    assert.False(t, b.IsEmpty())
    assert.Equal(t, b.Len(), 2)
    assert.Equal(t, b.Drops(), 0)
    assert.Equal(t, b.Total(), 10)
}
@@ -1,22 +1,41 @@
 package config
 
 import (
+    "bytes"
     "errors"
     "fmt"
     "io/ioutil"
     "log"
+    "os"
     "path/filepath"
+    "regexp"
     "sort"
     "strings"
     "time"
 
+    "github.com/influxdata/telegraf"
     "github.com/influxdata/telegraf/internal"
     "github.com/influxdata/telegraf/internal/models"
     "github.com/influxdata/telegraf/plugins/inputs"
     "github.com/influxdata/telegraf/plugins/outputs"
+    "github.com/influxdata/telegraf/plugins/parsers"
+    "github.com/influxdata/telegraf/plugins/serializers"
 
     "github.com/influxdata/config"
-    "github.com/naoina/toml/ast"
+    "github.com/influxdata/toml"
+    "github.com/influxdata/toml/ast"
 )
 
+var (
+    // Default input plugins
+    inputDefaults = []string{"cpu", "mem", "swap", "system", "kernel",
+        "processes", "disk", "diskio"}
+
+    // Default output plugins
+    outputDefaults = []string{"influxdb"}
+
+    // envVarRe is a regex to find environment variables in the config file
+    envVarRe = regexp.MustCompile(`\$\w+`)
+)
+
 // Config specifies the URL/user/password for the database that telegraf
@@ -28,8 +47,8 @@ type Config struct {
     OutputFilters []string
 
     Agent   *AgentConfig
-    Inputs  []*models.RunningInput
-    Outputs []*models.RunningOutput
+    Inputs  []*internal_models.RunningInput
+    Outputs []*internal_models.RunningOutput
 }
 
 func NewConfig() *Config {
@@ -43,8 +62,8 @@ func NewConfig() *Config {
         },
 
         Tags:          make(map[string]string),
-        Inputs:        make([]*models.RunningInput, 0),
-        Outputs:       make([]*models.RunningOutput, 0),
+        Inputs:        make([]*internal_models.RunningInput, 0),
+        Outputs:       make([]*internal_models.RunningOutput, 0),
         InputFilters:  make([]string, 0),
         OutputFilters: make([]string, 0),
     }
@@ -65,7 +84,7 @@ type AgentConfig struct {
     // same time, which can have a measurable effect on the system.
     CollectionJitter internal.Duration
 
-    // Interval at which to flush data
+    // FlushInterval is the Interval at which to flush data
     FlushInterval internal.Duration
 
     // FlushJitter Jitters the flush interval by a random amount.
@@ -74,11 +93,22 @@ type AgentConfig struct {
     // ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
     FlushJitter internal.Duration
 
+    // MetricBatchSize is the maximum number of metrics that is wrote to an
+    // output plugin in one call.
+    MetricBatchSize int
+
     // MetricBufferLimit is the max number of metrics that each output plugin
     // will cache. The buffer is cleared when a successful write occurs. When
-    // full, the oldest metrics will be overwritten.
+    // full, the oldest metrics will be overwritten. This number should be a
+    // multiple of MetricBatchSize. Due to current implementation, this could
+    // not be less than 2 times MetricBatchSize.
     MetricBufferLimit int
 
+    // FlushBufferWhenFull tells Telegraf to flush the metric buffer whenever
+    // it fills up, regardless of FlushInterval. Setting this option to true
+    // does _not_ deactivate FlushInterval.
+    FlushBufferWhenFull bool
+
     // TODO(cam): Remove UTC and Precision parameters, they are no longer
     // valid for the agent config. Leaving them here for now for backwards-
     // compatability
@@ -89,8 +119,9 @@ type AgentConfig struct {
     Debug bool
 
     // Quiet is the option for running in quiet mode
     Quiet        bool
     Hostname     string
+    OmitHostname bool
 }
 
 // Inputs returns a list of strings of the configured inputs.
@@ -125,85 +156,171 @@ func (c *Config) ListTags() string {
     return strings.Join(tags, " ")
 }
 
-var header = `# Telegraf configuration
+var header = `# Telegraf Configuration
+#
 # Telegraf is entirely plugin driven. All metrics are gathered from the
 # declared inputs, and sent to the declared outputs.
+#
 # Plugins must be declared in here to be active.
 # To deactivate a plugin, comment out the name and any variables.
+#
 # Use 'telegraf -config telegraf.conf -test' to see what metrics a config
 # file would generate.
+#
+# Environment variables can be used anywhere in this config file, simply prepend
+# them with $. For strings the variable must be within quotes (ie, "$STR_VAR"),
+# for numbers and booleans they should be plain (ie, $INT_VAR, $BOOL_VAR)
 
 
 # Global tags can be specified here in key="value" format.
-[tags]
+[global_tags]
 # dc = "us-east-1" # will tag all metrics with dc=us-east-1
 # rack = "1a"
+## Environment variables can be used as tags, and throughout the config file
+# user = "$USER"
 
 
 # Configuration for telegraf agent
 [agent]
-# Default data collection interval for all inputs
+## Default data collection interval for all inputs
 interval = "10s"
-# Rounds collection interval to 'interval'
-# ie, if interval="10s" then always collect on :00, :10, :20, etc.
+## Rounds collection interval to 'interval'
+## ie, if interval="10s" then always collect on :00, :10, :20, etc.
 round_interval = true
 
-# Telegraf will cache metric_buffer_limit metrics for each output, and will
-# flush this buffer on a successful write.
+## Telegraf will send metrics to outputs in batches of at
+## most metric_batch_size metrics.
+metric_batch_size = 1000
+## For failed writes, telegraf will cache metric_buffer_limit metrics for each
+## output, and will flush this buffer on a successful write. Oldest metrics
+## are dropped first when this buffer fills.
 metric_buffer_limit = 10000
 
-# Collection jitter is used to jitter the collection by a random amount.
-# Each plugin will sleep for a random time within jitter before collecting.
-# This can be used to avoid many plugins querying things like sysfs at the
-# same time, which can have a measurable effect on the system.
+## Collection jitter is used to jitter the collection by a random amount.
+## Each plugin will sleep for a random time within jitter before collecting.
+## This can be used to avoid many plugins querying things like sysfs at the
|
## same time, which can have a measurable effect on the system.
|
||||||
collection_jitter = "0s"
|
collection_jitter = "0s"
|
||||||
|
|
||||||
# Default data flushing interval for all outputs. You should not set this below
|
## Default flushing interval for all outputs. You shouldn't set this below
|
||||||
# interval. Maximum flush_interval will be flush_interval + flush_jitter
|
## interval. Maximum flush_interval will be flush_interval + flush_jitter
|
||||||
flush_interval = "10s"
|
flush_interval = "10s"
|
||||||
# Jitter the flush interval by a random amount. This is primarily to avoid
|
## Jitter the flush interval by a random amount. This is primarily to avoid
|
||||||
# large write spikes for users running a large number of telegraf instances.
|
## large write spikes for users running a large number of telegraf instances.
|
||||||
# ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
|
## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
|
||||||
flush_jitter = "0s"
|
flush_jitter = "0s"
|
||||||
|
|
||||||
# Run telegraf in debug mode
|
## Run telegraf in debug mode
|
||||||
debug = false
|
debug = false
|
||||||
# Run telegraf in quiet mode
|
## Run telegraf in quiet mode
|
||||||
quiet = false
|
quiet = false
|
||||||
# Override default hostname, if empty use os.Hostname()
|
## Override default hostname, if empty use os.Hostname()
|
||||||
hostname = ""
|
hostname = ""
|
||||||
|
## If set to true, do no set the "host" tag in the telegraf agent.
|
||||||
|
omit_hostname = false
|
||||||
|
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
# OUTPUTS #
|
# OUTPUT PLUGINS #
|
||||||
###############################################################################
|
###############################################################################
|
||||||
|
|
||||||
`
|
`
|
||||||
|
|
||||||
var pluginHeader = `
|
var inputHeader = `
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
# INPUTS #
|
# INPUT PLUGINS #
|
||||||
###############################################################################
|
###############################################################################
|
||||||
|
|
||||||
`
|
`
|
||||||
|
|
||||||
var serviceInputHeader = `
|
var serviceInputHeader = `
|
||||||
|
|
||||||
###############################################################################
|
###############################################################################
|
||||||
# SERVICE INPUTS #
|
# SERVICE INPUT PLUGINS #
|
||||||
###############################################################################
|
###############################################################################
|
||||||
`
|
`
|
||||||
|
|
||||||
// PrintSampleConfig prints the sample config
|
// PrintSampleConfig prints the sample config
|
||||||
func PrintSampleConfig(pluginFilters []string, outputFilters []string) {
|
func PrintSampleConfig(inputFilters []string, outputFilters []string) {
|
||||||
fmt.Printf(header)
|
fmt.Printf(header)
|
||||||
|
|
||||||
|
if len(outputFilters) != 0 {
|
||||||
|
printFilteredOutputs(outputFilters, false)
|
||||||
|
} else {
|
||||||
|
printFilteredOutputs(outputDefaults, false)
|
||||||
|
// Print non-default outputs, commented
|
||||||
|
var pnames []string
|
||||||
|
for pname := range outputs.Outputs {
|
||||||
|
if !sliceContains(pname, outputDefaults) {
|
||||||
|
pnames = append(pnames, pname)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
sort.Strings(pnames)
|
||||||
|
printFilteredOutputs(pnames, true)
|
||||||
|
}
|
||||||
|
|
||||||
|
fmt.Printf(inputHeader)
|
||||||
|
if len(inputFilters) != 0 {
|
||||||
|
printFilteredInputs(inputFilters, false)
|
||||||
|
} else {
|
||||||
|
printFilteredInputs(inputDefaults, false)
|
||||||
|
// Print non-default inputs, commented
|
||||||
|
var pnames []string
|
||||||
|
for pname := range inputs.Inputs {
|
||||||
|
if !sliceContains(pname, inputDefaults) {
|
||||||
|
pnames = append(pnames, pname)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
sort.Strings(pnames)
|
||||||
|
printFilteredInputs(pnames, true)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func printFilteredInputs(inputFilters []string, commented bool) {
|
||||||
|
// Filter inputs
|
||||||
|
var pnames []string
|
||||||
|
for pname := range inputs.Inputs {
|
||||||
|
if sliceContains(pname, inputFilters) {
|
||||||
|
pnames = append(pnames, pname)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
sort.Strings(pnames)
|
||||||
|
|
||||||
|
// cache service inputs to print them at the end
|
||||||
|
servInputs := make(map[string]telegraf.ServiceInput)
|
||||||
|
// for alphabetical looping:
|
||||||
|
servInputNames := []string{}
|
||||||
|
|
||||||
|
// Print Inputs
|
||||||
|
for _, pname := range pnames {
|
||||||
|
creator := inputs.Inputs[pname]
|
||||||
|
input := creator()
|
||||||
|
|
||||||
|
switch p := input.(type) {
|
||||||
|
case telegraf.ServiceInput:
|
||||||
|
servInputs[pname] = p
|
||||||
|
servInputNames = append(servInputNames, pname)
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
|
||||||
|
printConfig(pname, input, "inputs", commented)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Print Service Inputs
|
||||||
|
if len(servInputs) == 0 {
|
||||||
|
return
|
||||||
|
}
|
||||||
|
sort.Strings(servInputNames)
|
||||||
|
fmt.Printf(serviceInputHeader)
|
||||||
|
for _, name := range servInputNames {
|
||||||
|
printConfig(name, servInputs[name], "inputs", commented)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func printFilteredOutputs(outputFilters []string, commented bool) {
|
||||||
// Filter outputs
|
// Filter outputs
|
||||||
var onames []string
|
var onames []string
|
||||||
for oname := range outputs.Outputs {
|
for oname := range outputs.Outputs {
|
||||||
if len(outputFilters) == 0 || sliceContains(oname, outputFilters) {
|
if sliceContains(oname, outputFilters) {
|
||||||
onames = append(onames, oname)
|
onames = append(onames, oname)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -213,38 +330,7 @@ func PrintSampleConfig(pluginFilters []string, outputFilters []string) {
     for _, oname := range onames {
         creator := outputs.Outputs[oname]
         output := creator()
-        printConfig(oname, output, "outputs")
+        printConfig(oname, output, "outputs", commented)
-    }
-
-    // Filter inputs
-    var pnames []string
-    for pname := range inputs.Inputs {
-        if len(pluginFilters) == 0 || sliceContains(pname, pluginFilters) {
-            pnames = append(pnames, pname)
-        }
-    }
-    sort.Strings(pnames)
-
-    // Print Inputs
-    fmt.Printf(pluginHeader)
-    servInputs := make(map[string]inputs.ServiceInput)
-    for _, pname := range pnames {
-        creator := inputs.Inputs[pname]
-        input := creator()
-
-        switch p := input.(type) {
-        case inputs.ServiceInput:
-            servInputs[pname] = p
-            continue
-        }
-
-        printConfig(pname, input, "inputs")
-    }
-
-    // Print Service Inputs
-    fmt.Printf(serviceInputHeader)
-    for name, input := range servInputs {
-        printConfig(name, input, "inputs")
     }
 }

@@ -253,13 +339,26 @@ type printer interface {
     SampleConfig() string
 }

-func printConfig(name string, p printer, op string) {
-    fmt.Printf("\n# %s\n[[%s.%s]]", p.Description(), op, name)
+func printConfig(name string, p printer, op string, commented bool) {
+    comment := ""
+    if commented {
+        comment = "# "
+    }
+    fmt.Printf("\n%s# %s\n%s[[%s.%s]]", comment, p.Description(), comment,
+        op, name)
+
     config := p.SampleConfig()
     if config == "" {
-        fmt.Printf("\n # no configuration\n")
+        fmt.Printf("\n%s # no configuration\n\n", comment)
     } else {
-        fmt.Printf(config)
+        lines := strings.Split(config, "\n")
+        for i, line := range lines {
+            if i == 0 || i == len(lines)-1 {
+                fmt.Print("\n")
+                continue
+            }
+            fmt.Print(comment + line + "\n")
+        }
     }
 }

@@ -275,7 +374,7 @@ func sliceContains(name string, list []string) bool {
 // PrintInputConfig prints the config usage of a single input.
 func PrintInputConfig(name string) error {
     if creator, ok := inputs.Inputs[name]; ok {
-        printConfig(name, creator(), "inputs")
+        printConfig(name, creator(), "inputs", false)
     } else {
         return errors.New(fmt.Sprintf("Input %s not found", name))
     }
@@ -285,7 +384,7 @@ func PrintInputConfig(name string) error {
 // PrintOutputConfig prints the config usage of a single output.
 func PrintOutputConfig(name string) error {
     if creator, ok := outputs.Outputs[name]; ok {
-        printConfig(name, creator(), "outputs")
+        printConfig(name, creator(), "outputs", false)
     } else {
         return errors.New(fmt.Sprintf("Output %s not found", name))
     }
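The printConfig change above prints non-default plugin sections commented out, so they stay inactive in the generated sample config. A minimal standalone sketch of that line-prefixing behavior (commentSample is a hypothetical helper for illustration, not telegraf's actual API):

```go
package main

import (
	"fmt"
	"strings"
)

// commentSample prefixes every line of a sample config with "# " when
// commented is true, mirroring how printConfig renders non-default plugins.
func commentSample(config string, commented bool) string {
	if !commented {
		return config
	}
	var b strings.Builder
	for _, line := range strings.Split(config, "\n") {
		b.WriteString("# " + line + "\n")
	}
	return b.String()
}

func main() {
	sample := "[[outputs.file]]\nfiles = [\"stdout\"]"
	// prints:
	// # [[outputs.file]]
	// # files = ["stdout"]
	fmt.Print(commentSample(sample, true))
}
```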
@@ -313,46 +412,91 @@ func (c *Config) LoadDirectory(path string) error {
     return nil
 }

-// LoadConfig loads the given config file and applies it to c
-func (c *Config) LoadConfig(path string) error {
-    tbl, err := config.ParseFile(path)
-    if err != nil {
-        return err
+// Try to find a default config file at these locations (in order):
+// 1. $TELEGRAF_CONFIG_PATH
+// 2. $HOME/.telegraf/telegraf.conf
+// 3. /etc/telegraf/telegraf.conf
+//
+func getDefaultConfigPath() (string, error) {
+    envfile := os.Getenv("TELEGRAF_CONFIG_PATH")
+    homefile := os.ExpandEnv("${HOME}/.telegraf/telegraf.conf")
+    etcfile := "/etc/telegraf/telegraf.conf"
+    for _, path := range []string{envfile, homefile, etcfile} {
+        if _, err := os.Stat(path); err == nil {
+            log.Printf("Using config file: %s", path)
+            return path, nil
+        }
     }

+    // if we got here, we didn't find a file in a default location
+    return "", fmt.Errorf("No config file specified, and could not find one"+
+        " in $TELEGRAF_CONFIG_PATH, %s, or %s", homefile, etcfile)
+}
+
+// LoadConfig loads the given config file and applies it to c
+func (c *Config) LoadConfig(path string) error {
+    var err error
+    if path == "" {
+        if path, err = getDefaultConfigPath(); err != nil {
+            return err
+        }
+    }
+    tbl, err := parseFile(path)
+    if err != nil {
+        return fmt.Errorf("Error parsing %s, %s", path, err)
+    }
+
+    // Parse tags tables first:
+    for _, tableName := range []string{"tags", "global_tags"} {
+        if val, ok := tbl.Fields[tableName]; ok {
+            subTable, ok := val.(*ast.Table)
+            if !ok {
+                return fmt.Errorf("%s: invalid configuration", path)
+            }
+            if err = config.UnmarshalTable(subTable, c.Tags); err != nil {
+                log.Printf("Could not parse [global_tags] config\n")
+                return fmt.Errorf("Error parsing %s, %s", path, err)
+            }
+        }
+    }
+
+    // Parse agent table:
+    if val, ok := tbl.Fields["agent"]; ok {
+        subTable, ok := val.(*ast.Table)
+        if !ok {
+            return fmt.Errorf("%s: invalid configuration", path)
+        }
+        if err = config.UnmarshalTable(subTable, c.Agent); err != nil {
+            log.Printf("Could not parse [agent] config\n")
+            return fmt.Errorf("Error parsing %s, %s", path, err)
+        }
+    }
+
+    // Parse all the rest of the plugins:
     for name, val := range tbl.Fields {
         subTable, ok := val.(*ast.Table)
         if !ok {
-            return errors.New("invalid configuration")
+            return fmt.Errorf("%s: invalid configuration", path)
         }

         switch name {
-        case "agent":
-            if err = config.UnmarshalTable(subTable, c.Agent); err != nil {
-                log.Printf("Could not parse [agent] config\n")
-                return err
-            }
-        case "tags":
-            if err = config.UnmarshalTable(subTable, c.Tags); err != nil {
-                log.Printf("Could not parse [tags] config\n")
-                return err
-            }
+        case "agent", "global_tags", "tags":
         case "outputs":
             for pluginName, pluginVal := range subTable.Fields {
                 switch pluginSubTable := pluginVal.(type) {
                 case *ast.Table:
                     if err = c.addOutput(pluginName, pluginSubTable); err != nil {
-                        return err
+                        return fmt.Errorf("Error parsing %s, %s", path, err)
                     }
                 case []*ast.Table:
                     for _, t := range pluginSubTable {
                         if err = c.addOutput(pluginName, t); err != nil {
-                            return err
+                            return fmt.Errorf("Error parsing %s, %s", path, err)
                         }
                     }
                 default:
-                    return fmt.Errorf("Unsupported config format: %s",
-                        pluginName)
+                    return fmt.Errorf("Unsupported config format: %s, file %s",
+                        pluginName, path)
                 }
             }
         case "inputs", "plugins":
@@ -360,30 +504,50 @@ func (c *Config) LoadConfig(path string) error {
             switch pluginSubTable := pluginVal.(type) {
             case *ast.Table:
                 if err = c.addInput(pluginName, pluginSubTable); err != nil {
-                    return err
+                    return fmt.Errorf("Error parsing %s, %s", path, err)
                 }
             case []*ast.Table:
                 for _, t := range pluginSubTable {
                     if err = c.addInput(pluginName, t); err != nil {
-                        return err
+                        return fmt.Errorf("Error parsing %s, %s", path, err)
                     }
                 }
             default:
-                return fmt.Errorf("Unsupported config format: %s",
-                    pluginName)
+                return fmt.Errorf("Unsupported config format: %s, file %s",
+                    pluginName, path)
             }
         }
         // Assume it's an input input for legacy config file support if no other
         // identifiers are present
         default:
             if err = c.addInput(name, subTable); err != nil {
-                return err
+                return fmt.Errorf("Error parsing %s, %s", path, err)
             }
         }
     }
     return nil
 }

+// parseFile loads a TOML configuration from a provided path and
+// returns the AST produced from the TOML parser. When loading the file, it
+// will find environment variables and replace them.
+func parseFile(fpath string) (*ast.Table, error) {
+    contents, err := ioutil.ReadFile(fpath)
+    if err != nil {
+        return nil, err
+    }
+
+    env_vars := envVarRe.FindAll(contents, -1)
+    for _, env_var := range env_vars {
+        env_val := os.Getenv(strings.TrimPrefix(string(env_var), "$"))
+        if env_val != "" {
+            contents = bytes.Replace(contents, env_var, []byte(env_val), 1)
+        }
+    }
+
+    return toml.Parse(contents)
+}
+
 func (c *Config) addOutput(name string, table *ast.Table) error {
     if len(c.OutputFilters) > 0 && !sliceContains(name, c.OutputFilters) {
         return nil
@@ -394,6 +558,17 @@ func (c *Config) addOutput(name string, table *ast.Table) error {
     }
     output := creator()

+    // If the output has a SetSerializer function, then this means it can write
+    // arbitrary types of output, so build the serializer and set it.
+    switch t := output.(type) {
+    case serializers.SerializerOutput:
+        serializer, err := buildSerializer(name, table)
+        if err != nil {
+            return err
+        }
+        t.SetSerializer(serializer)
+    }
+
     outputConfig, err := buildOutput(name, table)
     if err != nil {
         return err
@@ -403,11 +578,8 @@ func (c *Config) addOutput(name string, table *ast.Table) error {
         return err
     }

-    ro := models.NewRunningOutput(name, output, outputConfig)
-    if c.Agent.MetricBufferLimit > 0 {
-        ro.PointBufferLimit = c.Agent.MetricBufferLimit
-    }
-    ro.Quiet = c.Agent.Quiet
+    ro := internal_models.NewRunningOutput(name, output, outputConfig,
+        c.Agent.MetricBatchSize, c.Agent.MetricBufferLimit)
     c.Outputs = append(c.Outputs, ro)
     return nil
 }
@@ -427,6 +599,17 @@ func (c *Config) addInput(name string, table *ast.Table) error {
     }
     input := creator()

+    // If the input has a SetParser function, then this means it can accept
+    // arbitrary types of input, so build the parser and set it.
+    switch t := input.(type) {
+    case parsers.ParserInput:
+        parser, err := buildParser(name, table)
+        if err != nil {
+            return err
+        }
+        t.SetParser(parser)
+    }
+
     pluginConfig, err := buildInput(name, table)
     if err != nil {
         return err
@@ -436,7 +619,7 @@ func (c *Config) addInput(name string, table *ast.Table) error {
         return err
     }

-    rp := &models.RunningInput{
+    rp := &internal_models.RunningInput{
         Name: name,
         Input: input,
         Config: pluginConfig,
@@ -445,18 +628,19 @@ func (c *Config) addInput(name string, table *ast.Table) error {
     return nil
 }

-// buildFilter builds a Filter (tagpass/tagdrop/pass/drop) to
-// be inserted into the models.OutputConfig/models.InputConfig to be used for prefix
-// filtering on tags and measurements
-func buildFilter(tbl *ast.Table) models.Filter {
-    f := models.Filter{}
+// buildFilter builds a Filter
+// (tagpass/tagdrop/namepass/namedrop/fieldpass/fielddrop) to
+// be inserted into the internal_models.OutputConfig/internal_models.InputConfig
+// to be used for glob filtering on tags and measurements
+func buildFilter(tbl *ast.Table) (internal_models.Filter, error) {
+    f := internal_models.Filter{}
+
-    if node, ok := tbl.Fields["pass"]; ok {
+    if node, ok := tbl.Fields["namepass"]; ok {
         if kv, ok := node.(*ast.KeyValue); ok {
             if ary, ok := kv.Value.(*ast.Array); ok {
                 for _, elem := range ary.Value {
                     if str, ok := elem.(*ast.String); ok {
-                        f.Pass = append(f.Pass, str.Value)
+                        f.NamePass = append(f.NamePass, str.Value)
                         f.IsActive = true
                     }
                 }
@@ -464,12 +648,12 @@ func buildFilter(tbl *ast.Table) models.Filter {
         }
     }

-    if node, ok := tbl.Fields["drop"]; ok {
+    if node, ok := tbl.Fields["namedrop"]; ok {
         if kv, ok := node.(*ast.KeyValue); ok {
             if ary, ok := kv.Value.(*ast.Array); ok {
                 for _, elem := range ary.Value {
                     if str, ok := elem.(*ast.String); ok {
-                        f.Drop = append(f.Drop, str.Value)
+                        f.NameDrop = append(f.NameDrop, str.Value)
                         f.IsActive = true
                     }
                 }
@@ -477,11 +661,43 @@ func buildFilter(tbl *ast.Table) models.Filter {
         }
     }
+
+    fields := []string{"pass", "fieldpass"}
+    for _, field := range fields {
+        if node, ok := tbl.Fields[field]; ok {
+            if kv, ok := node.(*ast.KeyValue); ok {
+                if ary, ok := kv.Value.(*ast.Array); ok {
+                    for _, elem := range ary.Value {
+                        if str, ok := elem.(*ast.String); ok {
+                            f.FieldPass = append(f.FieldPass, str.Value)
+                            f.IsActive = true
+                        }
+                    }
+                }
+            }
+        }
+    }
+
+    fields = []string{"drop", "fielddrop"}
+    for _, field := range fields {
+        if node, ok := tbl.Fields[field]; ok {
+            if kv, ok := node.(*ast.KeyValue); ok {
+                if ary, ok := kv.Value.(*ast.Array); ok {
+                    for _, elem := range ary.Value {
+                        if str, ok := elem.(*ast.String); ok {
+                            f.FieldDrop = append(f.FieldDrop, str.Value)
+                            f.IsActive = true
+                        }
+                    }
+                }
+            }
+        }
+    }

     if node, ok := tbl.Fields["tagpass"]; ok {
         if subtbl, ok := node.(*ast.Table); ok {
             for name, val := range subtbl.Fields {
                 if kv, ok := val.(*ast.KeyValue); ok {
-                    tagfilter := &models.TagFilter{Name: name}
+                    tagfilter := &internal_models.TagFilter{Name: name}
                     if ary, ok := kv.Value.(*ast.Array); ok {
                         for _, elem := range ary.Value {
                             if str, ok := elem.(*ast.String); ok {
@@ -500,7 +716,7 @@ func buildFilter(tbl *ast.Table) models.Filter {
         if subtbl, ok := node.(*ast.Table); ok {
             for name, val := range subtbl.Fields {
                 if kv, ok := val.(*ast.KeyValue); ok {
-                    tagfilter := &models.TagFilter{Name: name}
+                    tagfilter := &internal_models.TagFilter{Name: name}
                     if ary, ok := kv.Value.(*ast.Array); ok {
                         for _, elem := range ary.Value {
                             if str, ok := elem.(*ast.String); ok {
@@ -515,18 +731,51 @@ func buildFilter(tbl *ast.Table) models.Filter {
         }
     }
+
+    if node, ok := tbl.Fields["tagexclude"]; ok {
+        if kv, ok := node.(*ast.KeyValue); ok {
+            if ary, ok := kv.Value.(*ast.Array); ok {
+                for _, elem := range ary.Value {
+                    if str, ok := elem.(*ast.String); ok {
+                        f.TagExclude = append(f.TagExclude, str.Value)
+                    }
+                }
+            }
+        }
+    }
+
+    if node, ok := tbl.Fields["taginclude"]; ok {
+        if kv, ok := node.(*ast.KeyValue); ok {
+            if ary, ok := kv.Value.(*ast.Array); ok {
+                for _, elem := range ary.Value {
+                    if str, ok := elem.(*ast.String); ok {
+                        f.TagInclude = append(f.TagInclude, str.Value)
+                    }
+                }
+            }
+        }
+    }
+    if err := f.CompileFilter(); err != nil {
+        return f, err
+    }
+
+    delete(tbl.Fields, "namedrop")
+    delete(tbl.Fields, "namepass")
+    delete(tbl.Fields, "fielddrop")
+    delete(tbl.Fields, "fieldpass")
     delete(tbl.Fields, "drop")
     delete(tbl.Fields, "pass")
     delete(tbl.Fields, "tagdrop")
     delete(tbl.Fields, "tagpass")
-    return f
+    delete(tbl.Fields, "tagexclude")
+    delete(tbl.Fields, "taginclude")
+    return f, nil
 }

 // buildInput parses input specific items from the ast.Table,
 // builds the filter and returns a
-// models.InputConfig to be inserted into models.RunningInput
-func buildInput(name string, tbl *ast.Table) (*models.InputConfig, error) {
-    cp := &models.InputConfig{Name: name}
+// internal_models.InputConfig to be inserted into internal_models.RunningInput
+func buildInput(name string, tbl *ast.Table) (*internal_models.InputConfig, error) {
+    cp := &internal_models.InputConfig{Name: name}
     if node, ok := tbl.Fields["interval"]; ok {
         if kv, ok := node.(*ast.KeyValue); ok {
             if str, ok := kv.Value.(*ast.String); ok {
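buildFilter now collects namepass/namedrop glob patterns that are compiled once via CompileFilter. A minimal sketch of those whitelist/blacklist semantics, using stdlib path.Match in place of telegraf's glob package (the filter type and shouldPass helper are illustrative, not the real internal_models API):

```go
package main

import (
	"fmt"
	"path"
)

// filter holds namepass (whitelist) and namedrop (blacklist) glob patterns.
type filter struct {
	NamePass []string
	NameDrop []string
}

// shouldPass reports whether a measurement name survives the filter:
// it must match some namepass pattern (if any are set) and no namedrop pattern.
func (f filter) shouldPass(name string) bool {
	if len(f.NamePass) > 0 {
		passed := false
		for _, pat := range f.NamePass {
			if ok, _ := path.Match(pat, name); ok {
				passed = true
				break
			}
		}
		if !passed {
			return false
		}
	}
	for _, pat := range f.NameDrop {
		if ok, _ := path.Match(pat, name); ok {
			return false
		}
	}
	return true
}

func main() {
	f := filter{NamePass: []string{"cpu*"}, NameDrop: []string{"cpu_guest"}}
	// prints: true false false
	fmt.Println(f.shouldPass("cpu_idle"), f.shouldPass("cpu_guest"), f.shouldPass("mem"))
}
```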
@@ -578,17 +827,145 @@ func buildInput(name string, tbl *ast.Table) (*models.InputConfig, error) {
|
|||||||
delete(tbl.Fields, "name_override")
|
delete(tbl.Fields, "name_override")
|
||||||
delete(tbl.Fields, "interval")
|
delete(tbl.Fields, "interval")
|
||||||
delete(tbl.Fields, "tags")
|
delete(tbl.Fields, "tags")
|
||||||
cp.Filter = buildFilter(tbl)
|
var err error
|
||||||
|
cp.Filter, err = buildFilter(tbl)
|
||||||
|
if err != nil {
|
||||||
|
return cp, err
|
||||||
|
}
|
||||||
return cp, nil
|
return cp, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
// buildOutput parses output specific items from the ast.Table, builds the filter and returns an
|
// buildParser grabs the necessary entries from the ast.Table for creating
// a parsers.Parser object, and creates it, which can then be added onto
// an Input object.
func buildParser(name string, tbl *ast.Table) (parsers.Parser, error) {
	c := &parsers.Config{}

	if node, ok := tbl.Fields["data_format"]; ok {
		if kv, ok := node.(*ast.KeyValue); ok {
			if str, ok := kv.Value.(*ast.String); ok {
				c.DataFormat = str.Value
			}
		}
	}

	// Legacy support, exec plugin originally parsed JSON by default.
	if name == "exec" && c.DataFormat == "" {
		c.DataFormat = "json"
	} else if c.DataFormat == "" {
		c.DataFormat = "influx"
	}

	if node, ok := tbl.Fields["separator"]; ok {
		if kv, ok := node.(*ast.KeyValue); ok {
			if str, ok := kv.Value.(*ast.String); ok {
				c.Separator = str.Value
			}
		}
	}

	if node, ok := tbl.Fields["templates"]; ok {
		if kv, ok := node.(*ast.KeyValue); ok {
			if ary, ok := kv.Value.(*ast.Array); ok {
				for _, elem := range ary.Value {
					if str, ok := elem.(*ast.String); ok {
						c.Templates = append(c.Templates, str.Value)
					}
				}
			}
		}
	}

	if node, ok := tbl.Fields["tag_keys"]; ok {
		if kv, ok := node.(*ast.KeyValue); ok {
			if ary, ok := kv.Value.(*ast.Array); ok {
				for _, elem := range ary.Value {
					if str, ok := elem.(*ast.String); ok {
						c.TagKeys = append(c.TagKeys, str.Value)
					}
				}
			}
		}
	}

	if node, ok := tbl.Fields["data_type"]; ok {
		if kv, ok := node.(*ast.KeyValue); ok {
			if str, ok := kv.Value.(*ast.String); ok {
				c.DataType = str.Value
			}
		}
	}

	c.MetricName = name

	delete(tbl.Fields, "data_format")
	delete(tbl.Fields, "separator")
	delete(tbl.Fields, "templates")
	delete(tbl.Fields, "tag_keys")
	delete(tbl.Fields, "data_type")

	return parsers.NewParser(c)
}
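buildParser extracts every option with the same three-step pattern: look the key up in the table, assert the node is a key/value pair, assert the value is a string. That pattern can be sketched in isolation; the `KeyValue`/`String` types here are simplified stand-ins for illustration, not the real `toml/ast` types the function uses.

```go
package main

import "fmt"

// Simplified stand-ins for the TOML AST node types (illustrative only;
// the real ones live in the influxdata toml/ast package).
type KeyValue struct{ Value interface{} }
type String struct{ Value string }

// stringField mirrors buildParser's nested type assertions:
// field present -> *KeyValue node -> *String value.
func stringField(fields map[string]interface{}, key string) (string, bool) {
	if node, ok := fields[key]; ok {
		if kv, ok := node.(*KeyValue); ok {
			if str, ok := kv.Value.(*String); ok {
				return str.Value, true
			}
		}
	}
	return "", false
}

func main() {
	fields := map[string]interface{}{
		"data_format": &KeyValue{Value: &String{Value: "json"}},
	}
	v, ok := stringField(fields, "data_format")
	fmt.Println(v, ok) // json true
	_, ok = stringField(fields, "separator")
	fmt.Println(ok) // false
}
```

A missing or mistyped field simply yields the zero value, which is why buildParser can apply its defaults (`"json"` for exec, `"influx"` otherwise) afterwards.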
// buildSerializer grabs the necessary entries from the ast.Table for creating
// a serializers.Serializer object, and creates it, which can then be added onto
// an Output object.
func buildSerializer(name string, tbl *ast.Table) (serializers.Serializer, error) {
	c := &serializers.Config{}

	if node, ok := tbl.Fields["data_format"]; ok {
		if kv, ok := node.(*ast.KeyValue); ok {
			if str, ok := kv.Value.(*ast.String); ok {
				c.DataFormat = str.Value
			}
		}
	}

	if c.DataFormat == "" {
		c.DataFormat = "influx"
	}

	if node, ok := tbl.Fields["prefix"]; ok {
		if kv, ok := node.(*ast.KeyValue); ok {
			if str, ok := kv.Value.(*ast.String); ok {
				c.Prefix = str.Value
			}
		}
	}

	if node, ok := tbl.Fields["template"]; ok {
		if kv, ok := node.(*ast.KeyValue); ok {
			if str, ok := kv.Value.(*ast.String); ok {
				c.Template = str.Value
			}
		}
	}

	delete(tbl.Fields, "data_format")
	delete(tbl.Fields, "prefix")
	delete(tbl.Fields, "template")
	return serializers.NewSerializer(c)
}
// buildOutput parses output specific items from the ast.Table,
// builds the filter and returns an
// internal_models.OutputConfig to be inserted into internal_models.RunningOutput
// Note: error exists in the return for future calls that might require error
func buildOutput(name string, tbl *ast.Table) (*internal_models.OutputConfig, error) {
	filter, err := buildFilter(tbl)
	if err != nil {
		return nil, err
	}
	oc := &internal_models.OutputConfig{
		Name:   name,
		Filter: filter,
	}
	// Outputs don't support FieldDrop/FieldPass, so set to NameDrop/NamePass
	if len(oc.Filter.FieldDrop) > 0 {
		oc.Filter.NameDrop = oc.Filter.FieldDrop
	}
	if len(oc.Filter.FieldPass) > 0 {
		oc.Filter.NamePass = oc.Filter.FieldPass
	}
	return oc, nil
}
@@ -1,6 +1,7 @@
 package config
 
 import (
+	"os"
 	"testing"
 	"time"
 
@@ -9,9 +10,55 @@ import (
 	"github.com/influxdata/telegraf/plugins/inputs/exec"
 	"github.com/influxdata/telegraf/plugins/inputs/memcached"
 	"github.com/influxdata/telegraf/plugins/inputs/procstat"
+	"github.com/influxdata/telegraf/plugins/parsers"
 
 	"github.com/stretchr/testify/assert"
 )
 
+func TestConfig_LoadSingleInputWithEnvVars(t *testing.T) {
+	c := NewConfig()
+	err := os.Setenv("MY_TEST_SERVER", "192.168.1.1")
+	assert.NoError(t, err)
+	err = os.Setenv("TEST_INTERVAL", "10s")
+	assert.NoError(t, err)
+	c.LoadConfig("./testdata/single_plugin_env_vars.toml")
+
+	memcached := inputs.Inputs["memcached"]().(*memcached.Memcached)
+	memcached.Servers = []string{"192.168.1.1"}
+
+	filter := internal_models.Filter{
+		NameDrop:  []string{"metricname2"},
+		NamePass:  []string{"metricname1"},
+		FieldDrop: []string{"other", "stuff"},
+		FieldPass: []string{"some", "strings"},
+		TagDrop: []internal_models.TagFilter{
+			internal_models.TagFilter{
+				Name:   "badtag",
+				Filter: []string{"othertag"},
+			},
+		},
+		TagPass: []internal_models.TagFilter{
+			internal_models.TagFilter{
+				Name:   "goodtag",
+				Filter: []string{"mytag"},
+			},
+		},
+		IsActive: true,
+	}
+	assert.NoError(t, filter.CompileFilter())
+	mConfig := &internal_models.InputConfig{
+		Name:     "memcached",
+		Filter:   filter,
+		Interval: 10 * time.Second,
+	}
+	mConfig.Tags = make(map[string]string)
+
+	assert.Equal(t, memcached, c.Inputs[0].Input,
+		"Testdata did not produce a correct memcached struct.")
+	assert.Equal(t, mConfig, c.Inputs[0].Config,
+		"Testdata did not produce correct memcached metadata.")
+}
+
 func TestConfig_LoadSingleInput(t *testing.T) {
 	c := NewConfig()
 	c.LoadConfig("./testdata/single_plugin.toml")
@@ -19,25 +66,29 @@ func TestConfig_LoadSingleInput(t *testing.T) {
 	memcached := inputs.Inputs["memcached"]().(*memcached.Memcached)
 	memcached.Servers = []string{"localhost"}
 
-	mConfig := &models.InputConfig{
-		Name: "memcached",
-		Filter: models.Filter{
-			Drop: []string{"other", "stuff"},
-			Pass: []string{"some", "strings"},
-			TagDrop: []models.TagFilter{
-				models.TagFilter{
-					Name:   "badtag",
-					Filter: []string{"othertag"},
-				},
-			},
-			TagPass: []models.TagFilter{
-				models.TagFilter{
-					Name:   "goodtag",
-					Filter: []string{"mytag"},
-				},
-			},
-			IsActive: true,
-		},
+	filter := internal_models.Filter{
+		NameDrop:  []string{"metricname2"},
+		NamePass:  []string{"metricname1"},
+		FieldDrop: []string{"other", "stuff"},
+		FieldPass: []string{"some", "strings"},
+		TagDrop: []internal_models.TagFilter{
+			internal_models.TagFilter{
+				Name:   "badtag",
+				Filter: []string{"othertag"},
+			},
+		},
+		TagPass: []internal_models.TagFilter{
+			internal_models.TagFilter{
+				Name:   "goodtag",
+				Filter: []string{"mytag"},
+			},
+		},
+		IsActive: true,
+	}
+	assert.NoError(t, filter.CompileFilter())
+	mConfig := &internal_models.InputConfig{
+		Name:     "memcached",
+		Filter:   filter,
 		Interval: 5 * time.Second,
 	}
 	mConfig.Tags = make(map[string]string)
@@ -62,25 +113,29 @@ func TestConfig_LoadDirectory(t *testing.T) {
 	memcached := inputs.Inputs["memcached"]().(*memcached.Memcached)
 	memcached.Servers = []string{"localhost"}
 
-	mConfig := &models.InputConfig{
-		Name: "memcached",
-		Filter: models.Filter{
-			Drop: []string{"other", "stuff"},
-			Pass: []string{"some", "strings"},
-			TagDrop: []models.TagFilter{
-				models.TagFilter{
-					Name:   "badtag",
-					Filter: []string{"othertag"},
-				},
-			},
-			TagPass: []models.TagFilter{
-				models.TagFilter{
-					Name:   "goodtag",
-					Filter: []string{"mytag"},
-				},
-			},
-			IsActive: true,
-		},
+	filter := internal_models.Filter{
+		NameDrop:  []string{"metricname2"},
+		NamePass:  []string{"metricname1"},
+		FieldDrop: []string{"other", "stuff"},
+		FieldPass: []string{"some", "strings"},
+		TagDrop: []internal_models.TagFilter{
+			internal_models.TagFilter{
+				Name:   "badtag",
+				Filter: []string{"othertag"},
+			},
+		},
+		TagPass: []internal_models.TagFilter{
+			internal_models.TagFilter{
+				Name:   "goodtag",
+				Filter: []string{"mytag"},
+			},
+		},
+		IsActive: true,
+	}
+	assert.NoError(t, filter.CompileFilter())
+	mConfig := &internal_models.InputConfig{
+		Name:     "memcached",
+		Filter:   filter,
 		Interval: 5 * time.Second,
 	}
 	mConfig.Tags = make(map[string]string)
@@ -91,8 +146,11 @@ func TestConfig_LoadDirectory(t *testing.T) {
 		"Testdata did not produce correct memcached metadata.")
 
 	ex := inputs.Inputs["exec"]().(*exec.Exec)
+	p, err := parsers.NewJSONParser("exec", nil, nil)
+	assert.NoError(t, err)
+	ex.SetParser(p)
 	ex.Command = "/usr/bin/myothercollector --foo=bar"
-	eConfig := &models.InputConfig{
+	eConfig := &internal_models.InputConfig{
 		Name:              "exec",
 		MeasurementSuffix: "_myothercollector",
 	}
@@ -111,7 +169,7 @@ func TestConfig_LoadDirectory(t *testing.T) {
 	pstat := inputs.Inputs["procstat"]().(*procstat.Procstat)
 	pstat.PidFile = "/var/run/grafana-server.pid"
 
-	pConfig := &models.InputConfig{Name: "procstat"}
+	pConfig := &internal_models.InputConfig{Name: "procstat"}
 	pConfig.Tags = make(map[string]string)
 
 	assert.Equal(t, pstat, c.Inputs[3].Input,
internal/config/testdata/single_plugin.toml
@@ -1,7 +1,9 @@
 [[inputs.memcached]]
   servers = ["localhost"]
-  pass = ["some", "strings"]
-  drop = ["other", "stuff"]
+  namepass = ["metricname1"]
+  namedrop = ["metricname2"]
+  fieldpass = ["some", "strings"]
+  fielddrop = ["other", "stuff"]
   interval = "5s"
   [inputs.memcached.tagpass]
     goodtag = ["mytag"]
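The testdata change above tracks the filter rename: the old generic `pass`/`drop` options become `namepass`/`namedrop` (measurement names) and `fieldpass`/`fielddrop` (field keys). The allow/deny semantics can be sketched as follows; note that real Telegraf filters also accept glob patterns, while this sketch uses exact string matches only.

```go
package main

import "fmt"

// pass reports whether a measurement name survives a namepass/namedrop pair:
// namepass is an allow-list (empty = allow everything), namedrop a deny-list.
// Exact matches only; Telegraf's compiled filters additionally support globs.
func pass(name string, namepass, namedrop []string) bool {
	if len(namepass) > 0 {
		ok := false
		for _, p := range namepass {
			if p == name {
				ok = true
				break
			}
		}
		if !ok {
			return false
		}
	}
	for _, d := range namedrop {
		if d == name {
			return false
		}
	}
	return true
}

func main() {
	namepass := []string{"metricname1"}
	namedrop := []string{"metricname2"}
	fmt.Println(pass("metricname1", namepass, namedrop)) // true
	fmt.Println(pass("metricname2", namepass, namedrop)) // false
}
```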
internal/config/testdata/single_plugin_env_vars.toml (new file)
@@ -0,0 +1,11 @@
+[[inputs.memcached]]
+  servers = ["$MY_TEST_SERVER"]
+  namepass = ["metricname1"]
+  namedrop = ["metricname2"]
+  fieldpass = ["some", "strings"]
+  fielddrop = ["other", "stuff"]
+  interval = "$TEST_INTERVAL"
+  [inputs.memcached.tagpass]
+    goodtag = ["mytag"]
+  [inputs.memcached.tagdrop]
+    badtag = ["othertag"]
@@ -1,5 +1,7 @@
 [[inputs.memcached]]
   servers = ["192.168.1.1"]
+  namepass = ["metricname1"]
+  namedrop = ["metricname2"]
   pass = ["some", "strings"]
   drop = ["other", "stuff"]
   interval = "5s"
internal/config/testdata/telegraf-agent.toml
@@ -20,7 +20,7 @@
 # with 'required'. Be sure to edit those to make this configuration work.
 
 # Tags can also be specified via a normal map, but only one form at a time:
-[tags]
+[global_tags]
   dc = "us-east-1"
 
 # Configuration for telegraf agent
@@ -184,6 +184,15 @@
 # If no servers are specified, then localhost is used as the host.
   servers = ["localhost"]
 
+# Telegraf plugin for gathering metrics from N Mesos masters
+[[inputs.mesos]]
+  # Timeout, in ms.
+  timeout = 100
+  # A list of Mesos masters, default value is localhost:5050.
+  masters = ["localhost:5050"]
+  # Metrics groups to be collected, by default, all enabled.
+  master_collections = ["resources","master","system","slaves","frameworks","messages","evqueue","registrar"]
+
 # Read metrics from one or many MongoDB servers
 [[inputs.mongodb]]
 # An array of URI to gather stats about. Specify an ip or hostname
internal/globpath/globpath.go (new file)

package globpath

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"

	"github.com/gobwas/glob"
)

var sepStr = fmt.Sprintf("%v", string(os.PathSeparator))

type GlobPath struct {
	path    string
	hasMeta bool
	g       glob.Glob
	root    string
}

func Compile(path string) (*GlobPath, error) {
	out := GlobPath{
		hasMeta: hasMeta(path),
		path:    path,
	}

	// if there are no glob meta characters in the path, don't bother compiling
	// a glob object or finding the root directory. (see short-circuit in Match)
	if !out.hasMeta {
		return &out, nil
	}

	var err error
	if out.g, err = glob.Compile(path, os.PathSeparator); err != nil {
		return nil, err
	}
	// Get the root directory for this filepath
	out.root = findRootDir(path)
	return &out, nil
}

func (g *GlobPath) Match() map[string]os.FileInfo {
	if !g.hasMeta {
		out := make(map[string]os.FileInfo)
		info, err := os.Stat(g.path)
		if !os.IsNotExist(err) {
			out[g.path] = info
		}
		return out
	}
	return walkFilePath(g.root, g.g)
}

// walk the filepath from the given root and return a list of files that match
// the given glob.
func walkFilePath(root string, g glob.Glob) map[string]os.FileInfo {
	matchedFiles := make(map[string]os.FileInfo)
	walkfn := func(path string, info os.FileInfo, _ error) error {
		if g.Match(path) {
			matchedFiles[path] = info
		}
		return nil
	}
	filepath.Walk(root, walkfn)
	return matchedFiles
}

// find the root dir of the given path (could include globs).
// ie:
//   /var/log/telegraf.conf -> /var/log
//   /home/** -> /home
//   /home/*/** -> /home
//   /lib/share/*/*/**.txt -> /lib/share
func findRootDir(path string) string {
	pathItems := strings.Split(path, sepStr)
	out := sepStr
	for i, item := range pathItems {
		if i == len(pathItems)-1 {
			break
		}
		if item == "" {
			continue
		}
		if hasMeta(item) {
			break
		}
		out += item + sepStr
	}
	if out != "/" {
		out = strings.TrimSuffix(out, "/")
	}
	return out
}

// hasMeta reports whether path contains any magic glob characters.
func hasMeta(path string) bool {
	return strings.IndexAny(path, "*?[") >= 0
}
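findRootDir is what keeps Match from walking the whole filesystem: it trims the pattern back to the deepest glob-free prefix and starts filepath.Walk there. It is stdlib-only, so it can be exercised outside the package; this is a standalone copy of findRootDir/hasMeta from globpath.go (no gobwas/glob dependency needed), using the "/" separator as on Unix.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

var sepStr = string(os.PathSeparator)

// hasMeta reports whether path contains any magic glob characters.
func hasMeta(path string) bool {
	return strings.IndexAny(path, "*?[") >= 0
}

// findRootDir walks the path components left to right and stops at the
// first component containing glob metacharacters (or at the final
// component), returning the accumulated prefix as the walk root.
func findRootDir(path string) string {
	pathItems := strings.Split(path, sepStr)
	out := sepStr
	for i, item := range pathItems {
		if i == len(pathItems)-1 {
			break
		}
		if item == "" {
			continue
		}
		if hasMeta(item) {
			break
		}
		out += item + sepStr
	}
	if out != "/" {
		out = strings.TrimSuffix(out, "/")
	}
	return out
}

func main() {
	fmt.Println(findRootDir("/var/log/telegraf.conf")) // /var/log
	fmt.Println(findRootDir("/home/*/**"))             // /home
	fmt.Println(findRootDir("/lib/share/*/*/**.txt"))  // /lib/share
}
```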
internal/globpath/globpath_test.go (new file)

package globpath

import (
	"runtime"
	"strings"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestCompileAndMatch(t *testing.T) {
	dir := getTestdataDir()
	// test super asterisk
	g1, err := Compile(dir + "/**")
	require.NoError(t, err)
	// test single asterisk
	g2, err := Compile(dir + "/*.log")
	require.NoError(t, err)
	// test no meta characters (file exists)
	g3, err := Compile(dir + "/log1.log")
	require.NoError(t, err)
	// test file that doesn't exist
	g4, err := Compile(dir + "/i_dont_exist.log")
	require.NoError(t, err)
	// test super asterisk that doesn't exist
	g5, err := Compile(dir + "/dir_doesnt_exist/**")
	require.NoError(t, err)

	matches := g1.Match()
	assert.Len(t, matches, 3)
	matches = g2.Match()
	assert.Len(t, matches, 2)
	matches = g3.Match()
	assert.Len(t, matches, 1)
	matches = g4.Match()
	assert.Len(t, matches, 0)
	matches = g5.Match()
	assert.Len(t, matches, 0)
}

func TestFindRootDir(t *testing.T) {
	tests := []struct {
		input  string
		output string
	}{
		{"/var/log/telegraf.conf", "/var/log"},
		{"/home/**", "/home"},
		{"/home/*/**", "/home"},
		{"/lib/share/*/*/**.txt", "/lib/share"},
	}

	for _, test := range tests {
		actual := findRootDir(test.input)
		assert.Equal(t, test.output, actual)
	}
}

func getTestdataDir() string {
	_, filename, _, _ := runtime.Caller(1)
	return strings.Replace(filename, "globpath_test.go", "testdata", 1)
}
internal/globpath/testdata/log1.log (new empty file)
internal/globpath/testdata/log2.log (new empty file)

internal/globpath/testdata/test.conf (new file)
@@ -0,0 +1,5 @@
+# this is a fake testing config file
+# for testing the filestat plugin
+
+option1 = "foo"
+option2 = "bar"
@@ -2,12 +2,27 @@ package internal
 
 import (
 	"bufio"
+	"bytes"
+	"crypto/rand"
+	"crypto/tls"
+	"crypto/x509"
 	"errors"
 	"fmt"
+	"io/ioutil"
+	"log"
 	"os"
-	"strconv"
+	"os/exec"
 	"strings"
 	"time"
+	"unicode"
 )
 
+const alphanum string = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
+
+var (
+	TimeoutErr = errors.New("Command timed out.")
+
+	NotImplementedError = errors.New("not implemented yet")
+)
+
 // Duration just wraps time.Duration
@@ -27,49 +42,6 @@ func (d *Duration) UnmarshalTOML(b []byte) error {
 	return nil
 }
 
-var NotImplementedError = errors.New("not implemented yet")
-
-type JSONFlattener struct {
-	Fields map[string]interface{}
-}
-
-// FlattenJSON flattens nested maps/interfaces into a fields map
-func (f *JSONFlattener) FlattenJSON(
-	fieldname string,
-	v interface{},
-) error {
-	if f.Fields == nil {
-		f.Fields = make(map[string]interface{})
-	}
-	fieldname = strings.Trim(fieldname, "_")
-	switch t := v.(type) {
-	case map[string]interface{}:
-		for k, v := range t {
-			err := f.FlattenJSON(fieldname+"_"+k+"_", v)
-			if err != nil {
-				return err
-			}
-		}
-	case []interface{}:
-		for i, v := range t {
-			k := strconv.Itoa(i)
-			err := f.FlattenJSON(fieldname+"_"+k+"_", v)
-			if err != nil {
-				return nil
-			}
-		}
-	case float64:
-		f.Fields[fieldname] = t
-	case bool, string, nil:
-		// ignored types
-		return nil
-	default:
-		return fmt.Errorf("JSON Flattener: got unexpected type %T with value %v (%s)",
-			t, t, fieldname)
-	}
-	return nil
-}
-
 // ReadLines reads contents from a file and splits them by new lines.
 // A convenience wrapper to ReadLinesOffsetN(filename, 0, -1).
 func ReadLines(filename string) ([]string, error) {
@@ -105,58 +77,117 @@ func ReadLinesOffsetN(filename string, offset uint, n int) ([]string, error) {
 	return ret, nil
 }
 
-// Glob will test a string pattern, potentially containing globs, against a
-// subject string. The result is a simple true/false, determining whether or
-// not the glob pattern matched the subject text.
-//
-// Adapted from https://github.com/ryanuber/go-glob/blob/master/glob.go
-// thanks Ryan Uber!
-func Glob(pattern, measurement string) bool {
-	// Empty pattern can only match empty subject
-	if pattern == "" {
-		return measurement == pattern
-	}
-
-	// If the pattern _is_ a glob, it matches everything
-	if pattern == "*" {
-		return true
-	}
-
-	parts := strings.Split(pattern, "*")
-
-	if len(parts) == 1 {
-		// No globs in pattern, so test for match
-		return pattern == measurement
-	}
-
-	leadingGlob := strings.HasPrefix(pattern, "*")
-	trailingGlob := strings.HasSuffix(pattern, "*")
-	end := len(parts) - 1
-
-	for i, part := range parts {
-		switch i {
-		case 0:
-			if leadingGlob {
-				continue
-			}
-			if !strings.HasPrefix(measurement, part) {
-				return false
-			}
-		case end:
-			if len(measurement) > 0 {
-				return trailingGlob || strings.HasSuffix(measurement, part)
-			}
-		default:
-			if !strings.Contains(measurement, part) {
-				return false
-			}
-		}
-
-		// Trim evaluated text from measurement as we loop over the pattern.
-		idx := strings.Index(measurement, part) + len(part)
-		measurement = measurement[idx:]
-	}
-
-	// All parts of the pattern matched
-	return true
-}
+// RandomString returns a random string of alpha-numeric characters
+func RandomString(n int) string {
+	var bytes = make([]byte, n)
+	rand.Read(bytes)
+	for i, b := range bytes {
+		bytes[i] = alphanum[b%byte(len(alphanum))]
+	}
+	return string(bytes)
+}
+
+// GetTLSConfig gets a tls.Config object from the given certs, key, and CA files.
+// you must give the full path to the files.
+// If all files are blank and InsecureSkipVerify=false, returns a nil pointer.
+func GetTLSConfig(
+	SSLCert, SSLKey, SSLCA string,
+	InsecureSkipVerify bool,
+) (*tls.Config, error) {
+	if SSLCert == "" && SSLKey == "" && SSLCA == "" && !InsecureSkipVerify {
+		return nil, nil
+	}
+
+	t := &tls.Config{
+		InsecureSkipVerify: InsecureSkipVerify,
+	}
+
+	if SSLCA != "" {
+		caCert, err := ioutil.ReadFile(SSLCA)
+		if err != nil {
+			return nil, errors.New(fmt.Sprintf("Could not load TLS CA: %s",
+				err))
+		}
+		caCertPool := x509.NewCertPool()
+		caCertPool.AppendCertsFromPEM(caCert)
+		t.RootCAs = caCertPool
+	}
+
+	if SSLCert != "" && SSLKey != "" {
+		cert, err := tls.LoadX509KeyPair(SSLCert, SSLKey)
+		if err != nil {
+			return nil, errors.New(fmt.Sprintf(
+				"Could not load TLS client key/certificate: %s",
+				err))
+		}
+
+		t.Certificates = []tls.Certificate{cert}
+		t.BuildNameToCertificate()
+	}
+
+	// will be nil by default if nothing is provided
+	return t, nil
+}
+
+// SnakeCase converts the given string to snake case following the Golang format:
+// acronyms are converted to lower-case and preceded by an underscore.
+func SnakeCase(in string) string {
+	runes := []rune(in)
+	length := len(runes)
+
+	var out []rune
+	for i := 0; i < length; i++ {
+		if i > 0 && unicode.IsUpper(runes[i]) && ((i+1 < length && unicode.IsLower(runes[i+1])) || unicode.IsLower(runes[i-1])) {
+			out = append(out, '_')
+		}
+		out = append(out, unicode.ToLower(runes[i]))
+	}
+
+	return string(out)
+}
+
+// CombinedOutputTimeout runs the given command with the given timeout and
+// returns the combined output of stdout and stderr.
+// If the command times out, it attempts to kill the process.
+func CombinedOutputTimeout(c *exec.Cmd, timeout time.Duration) ([]byte, error) {
+	var b bytes.Buffer
+	c.Stdout = &b
+	c.Stderr = &b
+	if err := c.Start(); err != nil {
+		return nil, err
+	}
+	err := WaitTimeout(c, timeout)
+	return b.Bytes(), err
+}
+
+// RunTimeout runs the given command with the given timeout.
+// If the command times out, it attempts to kill the process.
+func RunTimeout(c *exec.Cmd, timeout time.Duration) error {
+	if err := c.Start(); err != nil {
+		return err
+	}
+	return WaitTimeout(c, timeout)
+}
+
+// WaitTimeout waits for the given command to finish with a timeout.
+// It assumes the command has already been started.
+// If the command times out, it attempts to kill the process.
+func WaitTimeout(c *exec.Cmd, timeout time.Duration) error {
+	timer := time.NewTimer(timeout)
+	done := make(chan error)
+	go func() { done <- c.Wait() }()
+	select {
+	case err := <-done:
+		timer.Stop()
+		return err
+	case <-timer.C:
+		if err := c.Process.Kill(); err != nil {
+			log.Printf("FATAL error killing process: %s", err)
+			return err
+		}
+		// wait for the command to return after killing it
+		<-done
+		return TimeoutErr
+	}
+}
@@ -1,44 +1,108 @@
|
|||||||
package internal
|
package internal
|
||||||
|
|
||||||
import "testing"
|
import (
|
||||||
|
"os/exec"
|
||||||
|
"testing"
|
||||||
|
"time"
|
||||||
|
|
||||||
func testGlobMatch(t *testing.T, pattern, subj string) {
|
"github.com/stretchr/testify/assert"
|
||||||
if !Glob(pattern, subj) {
|
)
|
||||||
t.Errorf("%s should match %s", pattern, subj)
|
|
||||||
|
type SnakeTest struct {
|
||||||
|
input string
|
||||||
|
output string
|
||||||
|
}
|
||||||
|
|
||||||
|
var tests = []SnakeTest{
|
||||||
|
{"a", "a"},
|
||||||
|
{"snake", "snake"},
|
||||||
|
{"A", "a"},
|
||||||
|
{"ID", "id"},
|
||||||
|
{"MOTD", "motd"},
|
||||||
|
{"Snake", "snake"},
|
||||||
|
{"SnakeTest", "snake_test"},
|
||||||
|
{"APIResponse", "api_response"},
|
||||||
|
{"SnakeID", "snake_id"},
|
||||||
|
{"SnakeIDGoogle", "snake_id_google"},
|
||||||
|
{"LinuxMOTD", "linux_motd"},
|
||||||
|
{"OMGWTFBBQ", "omgwtfbbq"},
|
||||||
|
{"omg_wtf_bbq", "omg_wtf_bbq"},
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestSnakeCase(t *testing.T) {
|
||||||
|
for _, test := range tests {
|
||||||
|
if SnakeCase(test.input) != test.output {
|
||||||
|
t.Errorf(`SnakeCase("%s"), wanted "%s", got \%s"`, test.input, test.output, SnakeCase(test.input))
|
||||||
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
func testGlobNoMatch(t *testing.T, pattern, subj string) {
|
var (
|
||||||
-	if Glob(pattern, subj) {
-		t.Errorf("%s should not match %s", pattern, subj)
-	}
-}
-
-func TestEmptyPattern(t *testing.T) {
-	testGlobMatch(t, "", "")
-	testGlobNoMatch(t, "", "test")
-}
-
-func TestPatternWithoutGlobs(t *testing.T) {
-	testGlobMatch(t, "test", "test")
-}
-
-func TestGlob(t *testing.T) {
-	for _, pattern := range []string{
-		"*test",           // Leading glob
-		"this*",           // Trailing glob
-		"*is*a*",          // Lots of globs
-		"**test**",        // Double glob characters
-		"**is**a***test*", // Varying number of globs
-	} {
-		testGlobMatch(t, pattern, "this_is_a_test")
-	}
-
-	for _, pattern := range []string{
-		"test*", // Implicit substring match should fail
-		"*is",   // Partial match should fail
-		"*no*",  // Globs without a match between them should fail
-	} {
-		testGlobNoMatch(t, pattern, "this_is_a_test")
-	}
-}
+	sleepbin, _ = exec.LookPath("sleep")
+	echobin, _  = exec.LookPath("echo")
+)
+
+func TestRunTimeout(t *testing.T) {
+	if sleepbin == "" {
+		t.Skip("'sleep' binary not available on OS, skipping.")
+	}
+	cmd := exec.Command(sleepbin, "10")
+	start := time.Now()
+	err := RunTimeout(cmd, time.Millisecond*20)
+	elapsed := time.Since(start)
+
+	assert.Equal(t, TimeoutErr, err)
+	// Verify that command gets killed in 20ms, with some breathing room
+	assert.True(t, elapsed < time.Millisecond*75)
+}
+
+func TestCombinedOutputTimeout(t *testing.T) {
+	if sleepbin == "" {
+		t.Skip("'sleep' binary not available on OS, skipping.")
+	}
+	cmd := exec.Command(sleepbin, "10")
+	start := time.Now()
+	_, err := CombinedOutputTimeout(cmd, time.Millisecond*20)
+	elapsed := time.Since(start)
+
+	assert.Equal(t, TimeoutErr, err)
+	// Verify that command gets killed in 20ms, with some breathing room
+	assert.True(t, elapsed < time.Millisecond*75)
+}
+
+func TestCombinedOutput(t *testing.T) {
+	if echobin == "" {
+		t.Skip("'echo' binary not available on OS, skipping.")
+	}
+	cmd := exec.Command(echobin, "foo")
+	out, err := CombinedOutputTimeout(cmd, time.Second)
+
+	assert.NoError(t, err)
+	assert.Equal(t, "foo\n", string(out))
+}
+
+// test that CombinedOutputTimeout and exec.Cmd.CombinedOutput return
+// the same output from a failed command.
+func TestCombinedOutputError(t *testing.T) {
+	if sleepbin == "" {
+		t.Skip("'sleep' binary not available on OS, skipping.")
+	}
+	cmd := exec.Command(sleepbin, "foo")
+	expected, err := cmd.CombinedOutput()
+
+	cmd2 := exec.Command(sleepbin, "foo")
+	actual, err := CombinedOutputTimeout(cmd2, time.Second)
+
+	assert.Error(t, err)
+	assert.Equal(t, expected, actual)
+}
+
+func TestRunError(t *testing.T) {
+	if sleepbin == "" {
+		t.Skip("'sleep' binary not available on OS, skipping.")
+	}
+	cmd := exec.Command(sleepbin, "foo")
+	err := RunTimeout(cmd, time.Second)
+
+	assert.Error(t, err)
+}
@@ -1,74 +1,157 @@
-package models
+package internal_models
 
 import (
+	"fmt"
 	"strings"
 
-	"github.com/influxdata/influxdb/client/v2"
-	"github.com/influxdata/telegraf/internal"
+	"github.com/gobwas/glob"
+
+	"github.com/influxdata/telegraf"
 )
 
 // TagFilter is the name of a tag, and the values on which to filter
 type TagFilter struct {
 	Name   string
 	Filter []string
+	filter glob.Glob
 }
 
 // Filter containing drop/pass and tagdrop/tagpass rules
 type Filter struct {
-	Drop []string
-	Pass []string
+	NameDrop []string
+	nameDrop glob.Glob
+	NamePass []string
+	namePass glob.Glob
+
+	FieldDrop []string
+	fieldDrop glob.Glob
+	FieldPass []string
+	fieldPass glob.Glob
 
 	TagDrop []TagFilter
 	TagPass []TagFilter
 
+	TagExclude []string
+	tagExclude glob.Glob
+	TagInclude []string
+	tagInclude glob.Glob
+
 	IsActive bool
 }
 
-func (f Filter) ShouldPointPass(point *client.Point) bool {
-	if f.ShouldPass(point.Name()) && f.ShouldTagsPass(point.Tags()) {
-		return true
-	}
-	return false
-}
+// Compile all Filter lists into glob.Glob objects.
+func (f *Filter) CompileFilter() error {
+	var err error
+	f.nameDrop, err = compileFilter(f.NameDrop)
+	if err != nil {
+		return fmt.Errorf("Error compiling 'namedrop', %s", err)
+	}
+	f.namePass, err = compileFilter(f.NamePass)
+	if err != nil {
+		return fmt.Errorf("Error compiling 'namepass', %s", err)
+	}
+
+	f.fieldDrop, err = compileFilter(f.FieldDrop)
+	if err != nil {
+		return fmt.Errorf("Error compiling 'fielddrop', %s", err)
+	}
+	f.fieldPass, err = compileFilter(f.FieldPass)
+	if err != nil {
+		return fmt.Errorf("Error compiling 'fieldpass', %s", err)
+	}
+
+	f.tagExclude, err = compileFilter(f.TagExclude)
+	if err != nil {
+		return fmt.Errorf("Error compiling 'tagexclude', %s", err)
+	}
+	f.tagInclude, err = compileFilter(f.TagInclude)
+	if err != nil {
+		return fmt.Errorf("Error compiling 'taginclude', %s", err)
+	}
+
+	for i, _ := range f.TagDrop {
+		f.TagDrop[i].filter, err = compileFilter(f.TagDrop[i].Filter)
+		if err != nil {
+			return fmt.Errorf("Error compiling 'tagdrop', %s", err)
+		}
+	}
+	for i, _ := range f.TagPass {
+		f.TagPass[i].filter, err = compileFilter(f.TagPass[i].Filter)
+		if err != nil {
+			return fmt.Errorf("Error compiling 'tagpass', %s", err)
+		}
+	}
+	return nil
+}
+
+func compileFilter(filter []string) (glob.Glob, error) {
+	if len(filter) == 0 {
+		return nil, nil
+	}
+	var g glob.Glob
+	var err error
+	if len(filter) == 1 {
+		g, err = glob.Compile(filter[0])
+	} else {
+		g, err = glob.Compile("{" + strings.Join(filter, ",") + "}")
+	}
+	return g, err
+}
+
+func (f *Filter) ShouldMetricPass(metric telegraf.Metric) bool {
+	if f.ShouldNamePass(metric.Name()) && f.ShouldTagsPass(metric.Tags()) {
+		return true
+	}
+	return false
+}
 
-// ShouldPass returns true if the metric should pass, false if should drop
+// ShouldNamePass returns true if the metric should pass, false if should drop
 // based on the drop/pass filter parameters
-func (f Filter) ShouldPass(key string) bool {
-	if f.Pass != nil {
-		for _, pat := range f.Pass {
-			// TODO remove HasPrefix check, leaving it for now for legacy support.
-			// Cam, 2015-12-07
-			if strings.HasPrefix(key, pat) || internal.Glob(pat, key) {
-				return true
-			}
-		}
+func (f *Filter) ShouldNamePass(key string) bool {
+	if f.namePass != nil {
+		if f.namePass.Match(key) {
+			return true
+		}
 		return false
 	}
 
-	if f.Drop != nil {
-		for _, pat := range f.Drop {
-			// TODO remove HasPrefix check, leaving it for now for legacy support.
-			// Cam, 2015-12-07
-			if strings.HasPrefix(key, pat) || internal.Glob(pat, key) {
-				return false
-			}
-		}
+	if f.nameDrop != nil {
+		if f.nameDrop.Match(key) {
+			return false
+		}
+	}
+	return true
+}
+
+// ShouldFieldsPass returns true if the metric should pass, false if should drop
+// based on the drop/pass filter parameters
+func (f *Filter) ShouldFieldsPass(key string) bool {
+	if f.fieldPass != nil {
+		if f.fieldPass.Match(key) {
+			return true
+		}
+		return false
+	}
+
+	if f.fieldDrop != nil {
+		if f.fieldDrop.Match(key) {
+			return false
+		}
 	}
 	return true
 }
 
 // ShouldTagsPass returns true if the metric should pass, false if should drop
 // based on the tagdrop/tagpass filter parameters
-func (f Filter) ShouldTagsPass(tags map[string]string) bool {
+func (f *Filter) ShouldTagsPass(tags map[string]string) bool {
 	if f.TagPass != nil {
 		for _, pat := range f.TagPass {
+			if pat.filter == nil {
+				continue
+			}
 			if tagval, ok := tags[pat.Name]; ok {
-				for _, filter := range pat.Filter {
-					if internal.Glob(filter, tagval) {
-						return true
-					}
-				}
+				if pat.filter.Match(tagval) {
+					return true
+				}
 			}
 		}
@@ -77,11 +160,12 @@ func (f Filter) ShouldTagsPass(tags map[string]string) bool {
 
 	if f.TagDrop != nil {
 		for _, pat := range f.TagDrop {
+			if pat.filter == nil {
+				continue
+			}
 			if tagval, ok := tags[pat.Name]; ok {
-				for _, filter := range pat.Filter {
-					if internal.Glob(filter, tagval) {
-						return false
-					}
-				}
+				if pat.filter.Match(tagval) {
+					return false
+				}
 			}
 		}
 	}
@@ -90,3 +174,23 @@ func (f Filter) ShouldTagsPass(tags map[string]string) bool {
 	return true
 }
+
+// Apply TagInclude and TagExclude filters.
+// modifies the tags map in-place.
+func (f *Filter) FilterTags(tags map[string]string) {
+	if f.tagInclude != nil {
+		for k, _ := range tags {
+			if !f.tagInclude.Match(k) {
+				delete(tags, k)
+			}
+		}
+	}
+
+	if f.tagExclude != nil {
+		for k, _ := range tags {
+			if f.tagExclude.Match(k) {
+				delete(tags, k)
+			}
+		}
+	}
+}
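compileFilter above joins a list of patterns into a single gobwas/glob alternation, `{p1,p2,...}`, so each lookup is one Match call, and a nil compiled glob means "no filter configured". The same contract can be sketched with only the standard library's path.Match, which stands in here for gobwas/glob (the real library also supports brace sets and `**`); this is an illustration of the idea, not Telegraf's code:

```go
package main

import (
	"fmt"
	"path"
)

// matcher reports whether a key matches any pattern in a set, mirroring
// what the compiled "{p1,p2,...}" glob.Glob does in Filter.
type matcher struct{ patterns []string }

// compileFilter returns nil for an empty list, meaning "no filter configured",
// just like the compileFilter in filter.go.
func compileFilter(patterns []string) *matcher {
	if len(patterns) == 0 {
		return nil
	}
	return &matcher{patterns: patterns}
}

func (m *matcher) Match(key string) bool {
	for _, p := range m.patterns {
		// path.Match's '*' matches any run of non-separator characters,
		// which is close enough to glob semantics for metric names.
		if ok, _ := path.Match(p, key); ok {
			return true
		}
	}
	return false
}

func main() {
	namePass := compileFilter([]string{"foo*", "cpu_usage_idle"})
	fmt.Println(namePass.Match("foo_bar"))        // matches "foo*"
	fmt.Println(namePass.Match("cpu_usage_idle")) // exact pattern
	fmt.Println(namePass.Match("cpu_usage_busy")) // matches neither
}
```

Compiling once and matching many times is the point of the refactor in this diff: the old code re-parsed every pattern on every metric, while CompileFilter pays the parsing cost once at config load.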
@@ -1,7 +1,12 @@
-package models
+package internal_models
 
 import (
 	"testing"
+
+	"github.com/influxdata/telegraf/testutil"
+
+	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
 )
 
 func TestFilter_Empty(t *testing.T) {
@@ -18,16 +23,17 @@ func TestFilter_Empty(t *testing.T) {
 	}
 
 	for _, measurement := range measurements {
-		if !f.ShouldPass(measurement) {
+		if !f.ShouldFieldsPass(measurement) {
 			t.Errorf("Expected measurement %s to pass", measurement)
 		}
 	}
 }
 
-func TestFilter_Pass(t *testing.T) {
+func TestFilter_NamePass(t *testing.T) {
 	f := Filter{
-		Pass: []string{"foo*", "cpu_usage_idle"},
+		NamePass: []string{"foo*", "cpu_usage_idle"},
 	}
+	require.NoError(t, f.CompileFilter())
 
 	passes := []string{
 		"foo",
@@ -45,22 +51,23 @@ func TestFilter_Pass(t *testing.T) {
 	}
 
 	for _, measurement := range passes {
-		if !f.ShouldPass(measurement) {
+		if !f.ShouldNamePass(measurement) {
 			t.Errorf("Expected measurement %s to pass", measurement)
 		}
 	}
 
 	for _, measurement := range drops {
-		if f.ShouldPass(measurement) {
+		if f.ShouldNamePass(measurement) {
 			t.Errorf("Expected measurement %s to drop", measurement)
 		}
 	}
 }
 
-func TestFilter_Drop(t *testing.T) {
+func TestFilter_NameDrop(t *testing.T) {
 	f := Filter{
-		Drop: []string{"foo*", "cpu_usage_idle"},
+		NameDrop: []string{"foo*", "cpu_usage_idle"},
 	}
+	require.NoError(t, f.CompileFilter())
 
 	drops := []string{
 		"foo",
@@ -78,13 +85,81 @@ func TestFilter_Drop(t *testing.T) {
 	}
 
 	for _, measurement := range passes {
-		if !f.ShouldPass(measurement) {
+		if !f.ShouldNamePass(measurement) {
 			t.Errorf("Expected measurement %s to pass", measurement)
 		}
 	}
 
 	for _, measurement := range drops {
-		if f.ShouldPass(measurement) {
+		if f.ShouldNamePass(measurement) {
+			t.Errorf("Expected measurement %s to drop", measurement)
+		}
+	}
+}
+
+func TestFilter_FieldPass(t *testing.T) {
+	f := Filter{
+		FieldPass: []string{"foo*", "cpu_usage_idle"},
+	}
+	require.NoError(t, f.CompileFilter())
+
+	passes := []string{
+		"foo",
+		"foo_bar",
+		"foo.bar",
+		"foo-bar",
+		"cpu_usage_idle",
+	}
+
+	drops := []string{
+		"bar",
+		"barfoo",
+		"bar_foo",
+		"cpu_usage_busy",
+	}
+
+	for _, measurement := range passes {
+		if !f.ShouldFieldsPass(measurement) {
+			t.Errorf("Expected measurement %s to pass", measurement)
+		}
+	}
+
+	for _, measurement := range drops {
+		if f.ShouldFieldsPass(measurement) {
+			t.Errorf("Expected measurement %s to drop", measurement)
+		}
+	}
+}
+
+func TestFilter_FieldDrop(t *testing.T) {
+	f := Filter{
+		FieldDrop: []string{"foo*", "cpu_usage_idle"},
+	}
+	require.NoError(t, f.CompileFilter())
+
+	drops := []string{
+		"foo",
+		"foo_bar",
+		"foo.bar",
+		"foo-bar",
+		"cpu_usage_idle",
+	}
+
+	passes := []string{
+		"bar",
+		"barfoo",
+		"bar_foo",
+		"cpu_usage_busy",
+	}
+
+	for _, measurement := range passes {
+		if !f.ShouldFieldsPass(measurement) {
+			t.Errorf("Expected measurement %s to pass", measurement)
+		}
+	}
+
+	for _, measurement := range drops {
+		if f.ShouldFieldsPass(measurement) {
 			t.Errorf("Expected measurement %s to drop", measurement)
 		}
 	}
 }
@@ -103,6 +178,7 @@ func TestFilter_TagPass(t *testing.T) {
 	f := Filter{
 		TagPass: filters,
 	}
+	require.NoError(t, f.CompileFilter())
 
 	passes := []map[string]string{
 		{"cpu": "cpu-total"},
@@ -146,6 +222,7 @@ func TestFilter_TagDrop(t *testing.T) {
 	f := Filter{
 		TagDrop: filters,
 	}
+	require.NoError(t, f.CompileFilter())
 
 	drops := []map[string]string{
 		{"cpu": "cpu-total"},
@@ -175,3 +252,115 @@ func TestFilter_TagDrop(t *testing.T) {
 		}
 	}
 }
+
+func TestFilter_CompileFilterError(t *testing.T) {
+	f := Filter{
+		NameDrop: []string{"", ""},
+	}
+	assert.Error(t, f.CompileFilter())
+	f = Filter{
+		NamePass: []string{"", ""},
+	}
+	assert.Error(t, f.CompileFilter())
+	f = Filter{
+		FieldDrop: []string{"", ""},
+	}
+	assert.Error(t, f.CompileFilter())
+	f = Filter{
+		FieldPass: []string{"", ""},
+	}
+	assert.Error(t, f.CompileFilter())
+	f = Filter{
+		TagExclude: []string{"", ""},
+	}
+	assert.Error(t, f.CompileFilter())
+	f = Filter{
+		TagInclude: []string{"", ""},
+	}
+	assert.Error(t, f.CompileFilter())
+	filters := []TagFilter{
+		TagFilter{
+			Name:   "cpu",
+			Filter: []string{"{foobar}"},
+		}}
+	f = Filter{
+		TagDrop: filters,
+	}
+	require.Error(t, f.CompileFilter())
+	filters = []TagFilter{
+		TagFilter{
+			Name:   "cpu",
+			Filter: []string{"{foobar}"},
+		}}
+	f = Filter{
+		TagPass: filters,
+	}
+	require.Error(t, f.CompileFilter())
+}
+
+func TestFilter_ShouldMetricsPass(t *testing.T) {
+	m := testutil.TestMetric(1, "testmetric")
+	f := Filter{
+		NameDrop: []string{"foobar"},
+	}
+	require.NoError(t, f.CompileFilter())
+	require.True(t, f.ShouldMetricPass(m))
+
+	m = testutil.TestMetric(1, "foobar")
+	require.False(t, f.ShouldMetricPass(m))
+}
+
+func TestFilter_FilterTagsNoMatches(t *testing.T) {
+	pretags := map[string]string{
+		"host":  "localhost",
+		"mytag": "foobar",
+	}
+	f := Filter{
+		TagExclude: []string{"nomatch"},
+	}
+	require.NoError(t, f.CompileFilter())
+
+	f.FilterTags(pretags)
+	assert.Equal(t, map[string]string{
+		"host":  "localhost",
+		"mytag": "foobar",
+	}, pretags)
+
+	f = Filter{
+		TagInclude: []string{"nomatch"},
+	}
+	require.NoError(t, f.CompileFilter())
+
+	f.FilterTags(pretags)
+	assert.Equal(t, map[string]string{}, pretags)
+}
+
+func TestFilter_FilterTagsMatches(t *testing.T) {
+	pretags := map[string]string{
+		"host":  "localhost",
+		"mytag": "foobar",
+	}
+	f := Filter{
+		TagExclude: []string{"ho*"},
+	}
+	require.NoError(t, f.CompileFilter())
+
+	f.FilterTags(pretags)
+	assert.Equal(t, map[string]string{
+		"mytag": "foobar",
+	}, pretags)
+
+	pretags = map[string]string{
+		"host":  "localhost",
+		"mytag": "foobar",
+	}
+	f = Filter{
+		TagInclude: []string{"my*"},
+	}
+	require.NoError(t, f.CompileFilter())
+
+	f.FilterTags(pretags)
+	assert.Equal(t, map[string]string{
+		"mytag": "foobar",
+	}, pretags)
+}
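The FilterTags tests above rely on the fact that FilterTags mutates the tags map in place while ranging over it; deleting the current key during a range over a map is explicitly well-defined in Go. A small standalone sketch of the same include-then-exclude shape, with prefix matching standing in for the compiled globs (illustrative only, not Telegraf's code):

```go
package main

import (
	"fmt"
	"strings"
)

// filterTags mimics Filter.FilterTags: it modifies tags in place, keeping
// only keys matching the include prefix, then dropping keys matching the
// exclude prefix. An empty prefix means that filter is not configured.
func filterTags(tags map[string]string, include, exclude string) {
	if include != "" {
		for k := range tags {
			if !strings.HasPrefix(k, include) {
				delete(tags, k) // deleting during range is safe in Go
			}
		}
	}
	if exclude != "" {
		for k := range tags {
			if strings.HasPrefix(k, exclude) {
				delete(tags, k)
			}
		}
	}
}

func main() {
	pretags := map[string]string{"host": "localhost", "mytag": "foobar"}
	filterTags(pretags, "", "ho") // exclude keys starting with "ho"
	fmt.Println(pretags)          // only "mytag" remains
}
```

Mutating in place is why running_output.go (later in this diff) copies a metric's tags before calling FilterTags: the metric itself is treated as immutable, so the filter runs on a fresh map.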
@@ -1,14 +1,14 @@
-package models
+package internal_models
 
 import (
 	"time"
 
-	"github.com/influxdata/telegraf/plugins/inputs"
+	"github.com/influxdata/telegraf"
 )
 
 type RunningInput struct {
 	Name   string
-	Input  inputs.Input
+	Input  telegraf.Input
 	Config *InputConfig
 }
@@ -1,71 +1,154 @@
-package models
+package internal_models
 
 import (
 	"log"
 	"time"
 
-	"github.com/influxdata/telegraf/plugins/outputs"
-
-	"github.com/influxdata/influxdb/client/v2"
+	"github.com/influxdata/telegraf"
+	"github.com/influxdata/telegraf/internal/buffer"
 )
 
-const DEFAULT_POINT_BUFFER_LIMIT = 10000
+const (
+	// Default size of metrics batch size.
+	DEFAULT_METRIC_BATCH_SIZE = 1000
+
+	// Default number of metrics kept. It should be a multiple of batch size.
+	DEFAULT_METRIC_BUFFER_LIMIT = 10000
+)
 
+// RunningOutput contains the output configuration
 type RunningOutput struct {
-	Name             string
-	Output           outputs.Output
-	Config           *OutputConfig
-	Quiet            bool
-	PointBufferLimit int
-
-	points           []*client.Point
-	overwriteCounter int
+	Name              string
+	Output            telegraf.Output
+	Config            *OutputConfig
+	Quiet             bool
+	MetricBufferLimit int
+	MetricBatchSize   int
+
+	metrics     *buffer.Buffer
+	failMetrics *buffer.Buffer
 }
 
 func NewRunningOutput(
 	name string,
-	output outputs.Output,
+	output telegraf.Output,
 	conf *OutputConfig,
+	batchSize int,
+	bufferLimit int,
 ) *RunningOutput {
+	if bufferLimit == 0 {
+		bufferLimit = DEFAULT_METRIC_BUFFER_LIMIT
+	}
+	if batchSize == 0 {
+		batchSize = DEFAULT_METRIC_BATCH_SIZE
+	}
 	ro := &RunningOutput{
-		Name:             name,
-		points:           make([]*client.Point, 0),
-		Output:           output,
-		Config:           conf,
-		PointBufferLimit: DEFAULT_POINT_BUFFER_LIMIT,
+		Name:              name,
+		metrics:           buffer.NewBuffer(batchSize),
+		failMetrics:       buffer.NewBuffer(bufferLimit),
+		Output:            output,
+		Config:            conf,
+		MetricBufferLimit: bufferLimit,
+		MetricBatchSize:   batchSize,
 	}
 	return ro
 }
 
-func (ro *RunningOutput) AddPoint(point *client.Point) {
+// AddMetric adds a metric to the output. This function can also write cached
+// points if FlushBufferWhenFull is true.
+func (ro *RunningOutput) AddMetric(metric telegraf.Metric) {
 	if ro.Config.Filter.IsActive {
-		if !ro.Config.Filter.ShouldPointPass(point) {
+		if !ro.Config.Filter.ShouldMetricPass(metric) {
 			return
 		}
 	}
 
-	if len(ro.points) < ro.PointBufferLimit {
-		ro.points = append(ro.points, point)
-	} else {
-		if ro.overwriteCounter == len(ro.points) {
-			ro.overwriteCounter = 0
-		}
-		ro.points[ro.overwriteCounter] = point
-		ro.overwriteCounter++
-	}
+	// Filter any tagexclude/taginclude parameters before adding metric
+	if len(ro.Config.Filter.TagExclude) != 0 || len(ro.Config.Filter.TagInclude) != 0 {
+		// In order to filter out tags, we need to create a new metric, since
+		// metrics are immutable once created.
+		tags := metric.Tags()
+		fields := metric.Fields()
+		t := metric.Time()
+		name := metric.Name()
+		ro.Config.Filter.FilterTags(tags)
+		// error is not possible if creating from another metric, so ignore.
+		metric, _ = telegraf.NewMetric(name, tags, fields, t)
+	}
+
+	ro.metrics.Add(metric)
+	if ro.metrics.Len() == ro.MetricBatchSize {
+		batch := ro.metrics.Batch(ro.MetricBatchSize)
+		err := ro.write(batch)
+		if err != nil {
+			ro.failMetrics.Add(batch...)
+		}
+	}
 }
 
+// Write writes all cached points to this output.
 func (ro *RunningOutput) Write() error {
+	if !ro.Quiet {
+		log.Printf("Output [%s] buffer fullness: %d / %d metrics. "+
+			"Total gathered metrics: %d. Total dropped metrics: %d.",
+			ro.Name,
+			ro.failMetrics.Len()+ro.metrics.Len(),
+			ro.MetricBufferLimit,
+			ro.metrics.Total(),
+			ro.metrics.Drops()+ro.failMetrics.Drops())
+	}
+
+	var err error
+	if !ro.failMetrics.IsEmpty() {
+		bufLen := ro.failMetrics.Len()
+		// how many batches of failed writes we need to write.
+		nBatches := bufLen/ro.MetricBatchSize + 1
+		batchSize := ro.MetricBatchSize
+
+		for i := 0; i < nBatches; i++ {
+			// If it's the last batch, only grab the metrics that have not had
+			// a write attempt already (this is primarily to preserve order).
+			if i == nBatches-1 {
+				batchSize = bufLen % ro.MetricBatchSize
+			}
+			batch := ro.failMetrics.Batch(batchSize)
+			// If we've already failed previous writes, don't bother trying to
+			// write to this output again. We are not exiting the loop just so
+			// that we can rotate the metrics to preserve order.
+			if err == nil {
+				err = ro.write(batch)
+			}
+			if err != nil {
+				ro.failMetrics.Add(batch...)
+			}
+		}
+	}
+
+	batch := ro.metrics.Batch(ro.MetricBatchSize)
+	// see comment above about not trying to write to an already failed output.
+	// if ro.failMetrics is empty then err will always be nil at this point.
+	if err == nil {
+		err = ro.write(batch)
+	}
+	if err != nil {
+		ro.failMetrics.Add(batch...)
+		return err
+	}
+	return nil
+}
+
+func (ro *RunningOutput) write(metrics []telegraf.Metric) error {
+	if len(metrics) == 0 {
+		return nil
+	}
 	start := time.Now()
-	err := ro.Output.Write(ro.points)
+	err := ro.Output.Write(metrics)
 	elapsed := time.Since(start)
 	if err == nil {
 		if !ro.Quiet {
-			log.Printf("Wrote %d metrics to output %s in %s\n",
-				len(ro.points), ro.Name, elapsed)
+			log.Printf("Output [%s] wrote batch of %d metrics in %s\n",
+				ro.Name, len(metrics), elapsed)
 		}
-		ro.points = make([]*client.Point, 0)
-		ro.overwriteCounter = 0
 	}
 	return err
}
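Write() above drains the fail buffer in batches before touching fresh metrics, computing nBatches := bufLen/MetricBatchSize + 1 and re-adding any batch whose write fails so that ordering is preserved. A toy FIFO buffer with assumed drop-oldest semantics (not the actual internal/buffer package) makes the batching arithmetic concrete:

```go
package main

import "fmt"

// fifo is a minimal stand-in for internal/buffer.Buffer: a bounded FIFO
// that drops the oldest item when full (assumed semantics, for illustration).
type fifo struct {
	items []string
	limit int
	drops int
}

func newFifo(limit int) *fifo { return &fifo{limit: limit} }

func (b *fifo) Add(items ...string) {
	for _, it := range items {
		if len(b.items) == b.limit {
			b.items = b.items[1:] // drop oldest
			b.drops++
		}
		b.items = append(b.items, it)
	}
}

func (b *fifo) Len() int { return len(b.items) }

// Batch removes and returns up to n of the oldest items.
// (A real implementation would copy the slice before returning it.)
func (b *fifo) Batch(n int) []string {
	if n > len(b.items) {
		n = len(b.items)
	}
	out := b.items[:n]
	b.items = b.items[n:]
	return out
}

func main() {
	buf := newFifo(5)
	buf.Add("m1", "m2", "m3", "m4", "m5", "m6") // m1 is dropped
	fmt.Println(buf.Len(), buf.drops)

	// Draining in batches of 2, as Write() does with MetricBatchSize:
	bufLen := buf.Len()
	nBatches := bufLen/2 + 1 // same arithmetic as running_output.go
	for i := 0; i < nBatches; i++ {
		fmt.Println(buf.Batch(2))
	}
}
```

Note the `+ 1` in nBatches covers the partial final batch when bufLen is not a multiple of the batch size; running_output.go shrinks the last batch to `bufLen % MetricBatchSize` for exactly that case.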
|||||||
568
internal/models/running_output_test.go
Normal file
568
internal/models/running_output_test.go
Normal file
@@ -0,0 +1,568 @@
|
|||||||
|
package internal_models
|
||||||
|
|
||||||
|
import (
|
||||||
|
"fmt"
|
||||||
|
"sync"
|
||||||
|
"testing"
|
||||||
|
|
||||||
|
"github.com/influxdata/telegraf"
|
||||||
|
"github.com/influxdata/telegraf/testutil"
|
||||||
|
|
||||||
|
"github.com/stretchr/testify/assert"
|
||||||
|
"github.com/stretchr/testify/require"
|
||||||
|
)
|
||||||
|
|
||||||
|
var first5 = []telegraf.Metric{
|
||||||
|
testutil.TestMetric(101, "metric1"),
|
||||||
|
testutil.TestMetric(101, "metric2"),
|
||||||
|
testutil.TestMetric(101, "metric3"),
|
||||||
|
testutil.TestMetric(101, "metric4"),
|
||||||
|
testutil.TestMetric(101, "metric5"),
|
||||||
|
}
|
||||||
|
|
||||||
|
var next5 = []telegraf.Metric{
|
||||||
|
testutil.TestMetric(101, "metric6"),
|
||||||
|
testutil.TestMetric(101, "metric7"),
|
||||||
|
testutil.TestMetric(101, "metric8"),
|
||||||
|
testutil.TestMetric(101, "metric9"),
|
||||||
|
testutil.TestMetric(101, "metric10"),
|
||||||
|
}
|
||||||
|
|
||||||
|
// Benchmark adding metrics.
|
||||||
|
func BenchmarkRunningOutputAddWrite(b *testing.B) {
|
||||||
|
conf := &OutputConfig{
|
||||||
|
Filter: Filter{
|
||||||
|
IsActive: false,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
m := &perfOutput{}
|
||||||
|
ro := NewRunningOutput("test", m, conf, 1000, 10000)
|
||||||
|
ro.Quiet = true
|
||||||
|
|
||||||
|
for n := 0; n < b.N; n++ {
|
||||||
|
ro.AddMetric(first5[0])
|
||||||
|
ro.Write()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Benchmark adding metrics.
|
||||||
|
func BenchmarkRunningOutputAddWriteEvery100(b *testing.B) {
|
||||||
|
conf := &OutputConfig{
|
||||||
|
Filter: Filter{
|
||||||
|
IsActive: false,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
m := &perfOutput{}
|
||||||
|
ro := NewRunningOutput("test", m, conf, 1000, 10000)
|
||||||
|
ro.Quiet = true
|
||||||
|
|
||||||
|
for n := 0; n < b.N; n++ {
|
||||||
|
ro.AddMetric(first5[0])
|
||||||
|
if n%100 == 0 {
|
||||||
|
ro.Write()
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Benchmark adding metrics.
|
||||||
|
func BenchmarkRunningOutputAddFailWrites(b *testing.B) {
|
||||||
|
conf := &OutputConfig{
|
||||||
|
Filter: Filter{
|
||||||
|
IsActive: false,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
m := &perfOutput{}
|
||||||
|
m.failWrite = true
|
||||||
|
ro := NewRunningOutput("test", m, conf, 1000, 10000)
|
||||||
|
ro.Quiet = true
|
||||||
|
|
||||||
|
for n := 0; n < b.N; n++ {
|
||||||
|
ro.AddMetric(first5[0])
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// Test that NameDrop filters ger properly applied.
|
||||||
|
func TestRunningOutput_DropFilter(t *testing.T) {
|
||||||
|
conf := &OutputConfig{
|
||||||
|
Filter: Filter{
|
||||||
|
IsActive: true,
|
||||||
|
NameDrop: []string{"metric1", "metric2"},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
assert.NoError(t, conf.Filter.CompileFilter())
|
||||||
|
|
||||||
|
m := &mockOutput{}
|
||||||
|
ro := NewRunningOutput("test", m, conf, 1000, 10000)
|
||||||
|
|
||||||
|
for _, metric := range first5 {
|
||||||
|
ro.AddMetric(metric)
|
||||||
|
}
|
||||||
|
for _, metric := range next5 {
|
||||||
|
ro.AddMetric(metric)
|
||||||
|
}
|
||||||
|
assert.Len(t, m.Metrics(), 0)
|
||||||
|
|
||||||
|
err := ro.Write()
|
||||||
|
assert.NoError(t, err)
|
||||||
|
assert.Len(t, m.Metrics(), 8)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Test that NameDrop filters without a match do nothing.
|
||||||
|
func TestRunningOutput_PassFilter(t *testing.T) {
|
||||||
|
conf := &OutputConfig{
|
||||||
|
Filter: Filter{
|
||||||
|
IsActive: true,
|
||||||
|
NameDrop: []string{"metric1000", "foo*"},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
assert.NoError(t, conf.Filter.CompileFilter())
|
||||||
|
|
||||||
|
m := &mockOutput{}
|
||||||
|
ro := NewRunningOutput("test", m, conf, 1000, 10000)
|
||||||
|
|
||||||
|
for _, metric := range first5 {
|
||||||
|
ro.AddMetric(metric)
|
||||||
|
}
|
||||||
|
for _, metric := range next5 {
|
||||||
|
ro.AddMetric(metric)
|
||||||
|
}
|
||||||
|
assert.Len(t, m.Metrics(), 0)
|
||||||
|
|
||||||
|
err := ro.Write()
|
||||||
|
assert.NoError(t, err)
|
||||||
|
assert.Len(t, m.Metrics(), 10)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Test that tags are properly included
|
||||||
|
func TestRunningOutput_TagIncludeNoMatch(t *testing.T) {
|
||||||
|
conf := &OutputConfig{
|
||||||
|
Filter: Filter{
|
||||||
|
IsActive: true,
|
||||||
|
TagInclude: []string{"nothing*"},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
assert.NoError(t, conf.Filter.CompileFilter())
|
||||||
|
|
||||||
|
m := &mockOutput{}
|
||||||
|
ro := NewRunningOutput("test", m, conf, 1000, 10000)
|
||||||
|
|
||||||
|
ro.AddMetric(first5[0])
|
||||||
|
assert.Len(t, m.Metrics(), 0)
|
||||||
|
|
||||||
|
err := ro.Write()
|
||||||
|
assert.NoError(t, err)
|
||||||
|
assert.Len(t, m.Metrics(), 1)
|
||||||
|
assert.Empty(t, m.Metrics()[0].Tags())
|
||||||
|
}
|
||||||
|
|
||||||
|
// Test that tags are properly excluded
|
||||||
|
func TestRunningOutput_TagExcludeMatch(t *testing.T) {
|
||||||
|
conf := &OutputConfig{
|
||||||
|
Filter: Filter{
|
||||||
|
IsActive: true,
|
||||||
|
TagExclude: []string{"tag*"},
|
||||||
|
},
|
||||||
|
}
|
||||||
|
assert.NoError(t, conf.Filter.CompileFilter())
|
||||||
|
|
||||||
|
m := &mockOutput{}
|
||||||
|
ro := NewRunningOutput("test", m, conf, 1000, 10000)
|
||||||
|
|
||||||
|
ro.AddMetric(first5[0])
|
||||||
|
assert.Len(t, m.Metrics(), 0)
|
||||||
|
|
||||||
|
err := ro.Write()
|
||||||
|
assert.NoError(t, err)
|
||||||
|
assert.Len(t, m.Metrics(), 1)
|
||||||
|
assert.Len(t, m.Metrics()[0].Tags(), 0)
|
||||||
|
}
|
||||||
|
|
||||||
|
// Test that a TagExclude filter with no matching pattern keeps the tags
func TestRunningOutput_TagExcludeNoMatch(t *testing.T) {
	conf := &OutputConfig{
		Filter: Filter{
			IsActive:   true,
			TagExclude: []string{"nothing*"},
		},
	}
	assert.NoError(t, conf.Filter.CompileFilter())

	m := &mockOutput{}
	ro := NewRunningOutput("test", m, conf, 1000, 10000)

	ro.AddMetric(first5[0])
	assert.Len(t, m.Metrics(), 0)

	err := ro.Write()
	assert.NoError(t, err)
	assert.Len(t, m.Metrics(), 1)
	assert.Len(t, m.Metrics()[0].Tags(), 1)
}

// Test that a matching TagInclude filter keeps the tags
func TestRunningOutput_TagIncludeMatch(t *testing.T) {
	conf := &OutputConfig{
		Filter: Filter{
			IsActive:   true,
			TagInclude: []string{"tag*"},
		},
	}
	assert.NoError(t, conf.Filter.CompileFilter())

	m := &mockOutput{}
	ro := NewRunningOutput("test", m, conf, 1000, 10000)

	ro.AddMetric(first5[0])
	assert.Len(t, m.Metrics(), 0)

	err := ro.Write()
	assert.NoError(t, err)
	assert.Len(t, m.Metrics(), 1)
	assert.Len(t, m.Metrics()[0].Tags(), 1)
}

// Test that we can write metrics with simple default setup.
func TestRunningOutputDefault(t *testing.T) {
	conf := &OutputConfig{
		Filter: Filter{
			IsActive: false,
		},
	}

	m := &mockOutput{}
	ro := NewRunningOutput("test", m, conf, 1000, 10000)

	for _, metric := range first5 {
		ro.AddMetric(metric)
	}
	for _, metric := range next5 {
		ro.AddMetric(metric)
	}
	assert.Len(t, m.Metrics(), 0)

	err := ro.Write()
	assert.NoError(t, err)
	assert.Len(t, m.Metrics(), 10)
}

// Test that running output doesn't flush until it's full when
// FlushBufferWhenFull is set.
func TestRunningOutputFlushWhenFull(t *testing.T) {
	conf := &OutputConfig{
		Filter: Filter{
			IsActive: false,
		},
	}

	m := &mockOutput{}
	ro := NewRunningOutput("test", m, conf, 6, 10)

	// Fill buffer to 1 under limit
	for _, metric := range first5 {
		ro.AddMetric(metric)
	}
	// no flush yet
	assert.Len(t, m.Metrics(), 0)

	// add one more metric
	ro.AddMetric(next5[0])
	// now it flushed
	assert.Len(t, m.Metrics(), 6)

	// add one more metric and write it manually
	ro.AddMetric(next5[1])
	err := ro.Write()
	assert.NoError(t, err)
	assert.Len(t, m.Metrics(), 7)
}

// Test that running output doesn't flush until it's full when
// FlushBufferWhenFull is set, twice.
func TestRunningOutputMultiFlushWhenFull(t *testing.T) {
	conf := &OutputConfig{
		Filter: Filter{
			IsActive: false,
		},
	}

	m := &mockOutput{}
	ro := NewRunningOutput("test", m, conf, 4, 12)

	// Fill buffer past limit twice
	for _, metric := range first5 {
		ro.AddMetric(metric)
	}
	for _, metric := range next5 {
		ro.AddMetric(metric)
	}
	// flushed twice
	assert.Len(t, m.Metrics(), 8)
}

func TestRunningOutputWriteFail(t *testing.T) {
	conf := &OutputConfig{
		Filter: Filter{
			IsActive: false,
		},
	}

	m := &mockOutput{}
	m.failWrite = true
	ro := NewRunningOutput("test", m, conf, 4, 12)

	// Fill buffer to limit twice
	for _, metric := range first5 {
		ro.AddMetric(metric)
	}
	for _, metric := range next5 {
		ro.AddMetric(metric)
	}
	// no successful flush yet
	assert.Len(t, m.Metrics(), 0)

	// manual write fails
	err := ro.Write()
	require.Error(t, err)
	// no successful flush yet
	assert.Len(t, m.Metrics(), 0)

	m.failWrite = false
	err = ro.Write()
	require.NoError(t, err)

	assert.Len(t, m.Metrics(), 10)
}

// Verify that the order of points is preserved during a write failure.
func TestRunningOutputWriteFailOrder(t *testing.T) {
	conf := &OutputConfig{
		Filter: Filter{
			IsActive: false,
		},
	}

	m := &mockOutput{}
	m.failWrite = true
	ro := NewRunningOutput("test", m, conf, 100, 1000)

	// add 5 metrics
	for _, metric := range first5 {
		ro.AddMetric(metric)
	}
	// no successful flush yet
	assert.Len(t, m.Metrics(), 0)

	// Write fails
	err := ro.Write()
	require.Error(t, err)
	// no successful flush yet
	assert.Len(t, m.Metrics(), 0)

	m.failWrite = false
	// add 5 more metrics
	for _, metric := range next5 {
		ro.AddMetric(metric)
	}
	err = ro.Write()
	require.NoError(t, err)

	// Verify that 10 metrics were written
	assert.Len(t, m.Metrics(), 10)
	// Verify that they are in order
	expected := append(first5, next5...)
	assert.Equal(t, expected, m.Metrics())
}

// Verify that the order of points is preserved during many write failures.
func TestRunningOutputWriteFailOrder2(t *testing.T) {
	conf := &OutputConfig{
		Filter: Filter{
			IsActive: false,
		},
	}

	m := &mockOutput{}
	m.failWrite = true
	ro := NewRunningOutput("test", m, conf, 5, 100)

	// add 5 metrics
	for _, metric := range first5 {
		ro.AddMetric(metric)
	}
	// Write fails
	err := ro.Write()
	require.Error(t, err)
	// no successful flush yet
	assert.Len(t, m.Metrics(), 0)

	// add 5 metrics
	for _, metric := range next5 {
		ro.AddMetric(metric)
	}
	// Write fails
	err = ro.Write()
	require.Error(t, err)
	// no successful flush yet
	assert.Len(t, m.Metrics(), 0)

	// add 5 metrics
	for _, metric := range first5 {
		ro.AddMetric(metric)
	}
	// Write fails
	err = ro.Write()
	require.Error(t, err)
	// no successful flush yet
	assert.Len(t, m.Metrics(), 0)

	// add 5 metrics
	for _, metric := range next5 {
		ro.AddMetric(metric)
	}
	// Write fails
	err = ro.Write()
	require.Error(t, err)
	// no successful flush yet
	assert.Len(t, m.Metrics(), 0)

	m.failWrite = false
	err = ro.Write()
	require.NoError(t, err)

	// Verify that 20 metrics were written
	assert.Len(t, m.Metrics(), 20)
	// Verify that they are in order
	expected := append(first5, next5...)
	expected = append(expected, first5...)
	expected = append(expected, next5...)
	assert.Equal(t, expected, m.Metrics())
}

// Verify that the order of points is preserved when there is a remainder
// of points for the batch.
//
// ie, with a batch size of 5:
//
// 1 2 3 4 5 6 <-- order, failed points
// 6 1 2 3 4 5 <-- order, after 1st write failure (1 2 3 4 5 was batch)
// 1 2 3 4 5 6 <-- order, after 2nd write failure, (6 was batch)
//
func TestRunningOutputWriteFailOrder3(t *testing.T) {
	conf := &OutputConfig{
		Filter: Filter{
			IsActive: false,
		},
	}

	m := &mockOutput{}
	m.failWrite = true
	ro := NewRunningOutput("test", m, conf, 5, 1000)

	// add 5 metrics
	for _, metric := range first5 {
		ro.AddMetric(metric)
	}
	// no successful flush yet
	assert.Len(t, m.Metrics(), 0)

	// Write fails
	err := ro.Write()
	require.Error(t, err)
	// no successful flush yet
	assert.Len(t, m.Metrics(), 0)

	// add and attempt to write a single metric:
	ro.AddMetric(next5[0])
	err = ro.Write()
	require.Error(t, err)

	// unset fail and write metrics
	m.failWrite = false
	err = ro.Write()
	require.NoError(t, err)

	// Verify that 6 metrics were written
	assert.Len(t, m.Metrics(), 6)
	// Verify that they are in order
	expected := append(first5, next5[0])
	assert.Equal(t, expected, m.Metrics())
}

type mockOutput struct {
	sync.Mutex

	metrics []telegraf.Metric

	// if true, mock a write failure
	failWrite bool
}

func (m *mockOutput) Connect() error {
	return nil
}

func (m *mockOutput) Close() error {
	return nil
}

func (m *mockOutput) Description() string {
	return ""
}

func (m *mockOutput) SampleConfig() string {
	return ""
}

func (m *mockOutput) Write(metrics []telegraf.Metric) error {
	m.Lock()
	defer m.Unlock()
	if m.failWrite {
		return fmt.Errorf("Failed Write!")
	}

	if m.metrics == nil {
		m.metrics = []telegraf.Metric{}
	}

	for _, metric := range metrics {
		m.metrics = append(m.metrics, metric)
	}
	return nil
}

func (m *mockOutput) Metrics() []telegraf.Metric {
	m.Lock()
	defer m.Unlock()
	return m.metrics
}

type perfOutput struct {
	// if true, mock a write failure
	failWrite bool
}

func (m *perfOutput) Connect() error {
	return nil
}

func (m *perfOutput) Close() error {
	return nil
}

func (m *perfOutput) Description() string {
	return ""
}

func (m *perfOutput) SampleConfig() string {
	return ""
}

func (m *perfOutput) Write(metrics []telegraf.Metric) error {
	if m.failWrite {
		return fmt.Errorf("Failed Write!")
	}
	return nil
}

94	metric.go	Normal file
@@ -0,0 +1,94 @@
package telegraf

import (
	"time"

	"github.com/influxdata/influxdb/client/v2"
)

type Metric interface {
	// Name returns the measurement name of the metric
	Name() string

	// Tags returns the tags associated with the metric
	Tags() map[string]string

	// Time returns the timestamp for the metric
	Time() time.Time

	// UnixNano returns the unix nano time of the metric
	UnixNano() int64

	// Fields returns the fields for the metric
	Fields() map[string]interface{}

	// String returns a line-protocol string of the metric
	String() string

	// PrecisionString returns a line-protocol string of the metric, at precision
	PrecisionString(precision string) string

	// Point returns an influxdb client.Point object
	Point() *client.Point
}

// metric is a wrapper of the influxdb client.Point struct
type metric struct {
	pt *client.Point
}

// NewMetric returns a metric with the given timestamp. If a timestamp is not
// given, then data is sent to the database without a timestamp, in which case
// the server will assign local time upon reception. NOTE: it is recommended to
// send data with a timestamp.
func NewMetric(
	name string,
	tags map[string]string,
	fields map[string]interface{},
	t ...time.Time,
) (Metric, error) {
	var T time.Time
	if len(t) > 0 {
		T = t[0]
	}

	pt, err := client.NewPoint(name, tags, fields, T)
	if err != nil {
		return nil, err
	}
	return &metric{
		pt: pt,
	}, nil
}

func (m *metric) Name() string {
	return m.pt.Name()
}

func (m *metric) Tags() map[string]string {
	return m.pt.Tags()
}

func (m *metric) Time() time.Time {
	return m.pt.Time()
}

func (m *metric) UnixNano() int64 {
	return m.pt.UnixNano()
}

func (m *metric) Fields() map[string]interface{} {
	return m.pt.Fields()
}

func (m *metric) String() string {
	return m.pt.String()
}

func (m *metric) PrecisionString(precision string) string {
	return m.pt.PrecisionString(precision)
}

func (m *metric) Point() *client.Point {
	return m.pt
}

83	metric_test.go	Normal file
@@ -0,0 +1,83 @@
package telegraf

import (
	"fmt"
	"math"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
)

func TestNewMetric(t *testing.T) {
	now := time.Now()

	tags := map[string]string{
		"host":       "localhost",
		"datacenter": "us-east-1",
	}
	fields := map[string]interface{}{
		"usage_idle": float64(99),
		"usage_busy": float64(1),
	}
	m, err := NewMetric("cpu", tags, fields, now)
	assert.NoError(t, err)

	assert.Equal(t, tags, m.Tags())
	assert.Equal(t, fields, m.Fields())
	assert.Equal(t, "cpu", m.Name())
	assert.Equal(t, now, m.Time())
	assert.Equal(t, now.UnixNano(), m.UnixNano())
}

func TestNewMetricString(t *testing.T) {
	now := time.Now()

	tags := map[string]string{
		"host": "localhost",
	}
	fields := map[string]interface{}{
		"usage_idle": float64(99),
	}
	m, err := NewMetric("cpu", tags, fields, now)
	assert.NoError(t, err)

	lineProto := fmt.Sprintf("cpu,host=localhost usage_idle=99 %d",
		now.UnixNano())
	assert.Equal(t, lineProto, m.String())

	lineProtoPrecision := fmt.Sprintf("cpu,host=localhost usage_idle=99 %d",
		now.Unix())
	assert.Equal(t, lineProtoPrecision, m.PrecisionString("s"))
}

func TestNewMetricStringNoTime(t *testing.T) {
	tags := map[string]string{
		"host": "localhost",
	}
	fields := map[string]interface{}{
		"usage_idle": float64(99),
	}
	m, err := NewMetric("cpu", tags, fields)
	assert.NoError(t, err)

	lineProto := fmt.Sprintf("cpu,host=localhost usage_idle=99")
	assert.Equal(t, lineProto, m.String())

	lineProtoPrecision := fmt.Sprintf("cpu,host=localhost usage_idle=99")
	assert.Equal(t, lineProtoPrecision, m.PrecisionString("s"))
}

func TestNewMetricFailNaN(t *testing.T) {
	now := time.Now()

	tags := map[string]string{
		"host": "localhost",
	}
	fields := map[string]interface{}{
		"usage_idle": math.NaN(),
	}

	_, err := NewMetric("cpu", tags, fields, now)
	assert.Error(t, err)
}

31	output.go	Normal file
@@ -0,0 +1,31 @@
package telegraf

type Output interface {
	// Connect to the Output
	Connect() error
	// Close any connections to the Output
	Close() error
	// Description returns a one-sentence description of the Output
	Description() string
	// SampleConfig returns the default configuration of the Output
	SampleConfig() string
	// Write takes in a group of points to be written to the Output
	Write(metrics []Metric) error
}

type ServiceOutput interface {
	// Connect to the Output
	Connect() error
	// Close any connections to the Output
	Close() error
	// Description returns a one-sentence description of the Output
	Description() string
	// SampleConfig returns the default configuration of the Output
	SampleConfig() string
	// Write takes in a group of points to be written to the Output
	Write(metrics []Metric) error
	// Start the "service" that will provide an Output
	Start() error
	// Stop the "service" that will provide an Output
	Stop()
}

@@ -4,7 +4,7 @@ The example plugin gathers metrics about example things
 
 ### Configuration:
 
-```
+```toml
 # Description
 [[inputs.example]]
 # SampleConfig
@@ -30,8 +30,6 @@ The example plugin gathers metrics about example things
 
 ### Example Output:
 
-Give an example `-test` output here
-
 ```
 $ ./telegraf -config telegraf.conf -input-filter example -test
 measurement1,tag1=foo,tag2=bar field1=1i,field2=2.1 1453831884664956455

@@ -4,6 +4,7 @@ import (
 	"bytes"
 	"encoding/binary"
 	"fmt"
+	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/plugins/inputs"
 	"net"
 	"strconv"
@@ -103,11 +104,9 @@ type Aerospike struct {
 }
 
 var sampleConfig = `
-# Aerospike servers to connect to (with port)
-# Default: servers = ["localhost:3000"]
-#
-# This plugin will query all namespaces the aerospike
-# server has configured and get stats for them.
+## Aerospike servers to connect to (with port)
+## This plugin will query all namespaces the aerospike
+## server has configured and get stats for them.
 servers = ["localhost:3000"]
 `
 
@@ -119,7 +118,7 @@ func (a *Aerospike) Description() string {
 	return "Read stats from an aerospike server"
 }
 
-func (a *Aerospike) Gather(acc inputs.Accumulator) error {
+func (a *Aerospike) Gather(acc telegraf.Accumulator) error {
 	if len(a.Servers) == 0 {
 		return a.gatherServer("127.0.0.1:3000", acc)
 	}
@@ -140,7 +139,7 @@ func (a *Aerospike) Gather(acc inputs.Accumulator) error {
 	return outerr
 }
 
-func (a *Aerospike) gatherServer(host string, acc inputs.Accumulator) error {
+func (a *Aerospike) gatherServer(host string, acc telegraf.Accumulator) error {
 	aerospikeInfo, err := getMap(STATISTICS_COMMAND, host)
 	if err != nil {
 		return fmt.Errorf("Aerospike info failed: %s", err)
@@ -249,7 +248,7 @@ func get(key []byte, host string) (map[string]string, error) {
 
 func readAerospikeStats(
 	stats map[string]string,
-	acc inputs.Accumulator,
+	acc telegraf.Accumulator,
 	host string,
 	namespace string,
 ) {
@@ -336,7 +335,7 @@ func msgLenFromBytes(buf [6]byte) int64 {
 }
 
 func init() {
-	inputs.Add("aerospike", func() inputs.Input {
+	inputs.Add("aerospike", func() telegraf.Input {
 		return &Aerospike{}
 	})
 }

@@ -4,41 +4,65 @@ import (
 	_ "github.com/influxdata/telegraf/plugins/inputs/aerospike"
 	_ "github.com/influxdata/telegraf/plugins/inputs/apache"
 	_ "github.com/influxdata/telegraf/plugins/inputs/bcache"
+	_ "github.com/influxdata/telegraf/plugins/inputs/cassandra"
+	_ "github.com/influxdata/telegraf/plugins/inputs/cloudwatch"
+	_ "github.com/influxdata/telegraf/plugins/inputs/couchbase"
+	_ "github.com/influxdata/telegraf/plugins/inputs/couchdb"
 	_ "github.com/influxdata/telegraf/plugins/inputs/disque"
+	_ "github.com/influxdata/telegraf/plugins/inputs/dns_query"
 	_ "github.com/influxdata/telegraf/plugins/inputs/docker"
+	_ "github.com/influxdata/telegraf/plugins/inputs/dovecot"
 	_ "github.com/influxdata/telegraf/plugins/inputs/elasticsearch"
 	_ "github.com/influxdata/telegraf/plugins/inputs/exec"
+	_ "github.com/influxdata/telegraf/plugins/inputs/filestat"
 	_ "github.com/influxdata/telegraf/plugins/inputs/github_webhooks"
 	_ "github.com/influxdata/telegraf/plugins/inputs/haproxy"
+	_ "github.com/influxdata/telegraf/plugins/inputs/http_response"
 	_ "github.com/influxdata/telegraf/plugins/inputs/httpjson"
+	_ "github.com/influxdata/telegraf/plugins/inputs/igloo"
 	_ "github.com/influxdata/telegraf/plugins/inputs/influxdb"
+	_ "github.com/influxdata/telegraf/plugins/inputs/ipmi_sensor"
 	_ "github.com/influxdata/telegraf/plugins/inputs/jolokia"
 	_ "github.com/influxdata/telegraf/plugins/inputs/kafka_consumer"
 	_ "github.com/influxdata/telegraf/plugins/inputs/leofs"
 	_ "github.com/influxdata/telegraf/plugins/inputs/lustre2"
 	_ "github.com/influxdata/telegraf/plugins/inputs/mailchimp"
 	_ "github.com/influxdata/telegraf/plugins/inputs/memcached"
+	_ "github.com/influxdata/telegraf/plugins/inputs/mesos"
 	_ "github.com/influxdata/telegraf/plugins/inputs/mongodb"
+	_ "github.com/influxdata/telegraf/plugins/inputs/mqtt_consumer"
 	_ "github.com/influxdata/telegraf/plugins/inputs/mysql"
+	_ "github.com/influxdata/telegraf/plugins/inputs/nats_consumer"
+	_ "github.com/influxdata/telegraf/plugins/inputs/net_response"
 	_ "github.com/influxdata/telegraf/plugins/inputs/nginx"
 	_ "github.com/influxdata/telegraf/plugins/inputs/nsq"
+	_ "github.com/influxdata/telegraf/plugins/inputs/ntpq"
 	_ "github.com/influxdata/telegraf/plugins/inputs/passenger"
 	_ "github.com/influxdata/telegraf/plugins/inputs/phpfpm"
 	_ "github.com/influxdata/telegraf/plugins/inputs/ping"
 	_ "github.com/influxdata/telegraf/plugins/inputs/postgresql"
+	_ "github.com/influxdata/telegraf/plugins/inputs/postgresql_extensible"
+	_ "github.com/influxdata/telegraf/plugins/inputs/powerdns"
 	_ "github.com/influxdata/telegraf/plugins/inputs/procstat"
 	_ "github.com/influxdata/telegraf/plugins/inputs/prometheus"
 	_ "github.com/influxdata/telegraf/plugins/inputs/puppetagent"
 	_ "github.com/influxdata/telegraf/plugins/inputs/rabbitmq"
+	_ "github.com/influxdata/telegraf/plugins/inputs/raindrops"
 	_ "github.com/influxdata/telegraf/plugins/inputs/redis"
 	_ "github.com/influxdata/telegraf/plugins/inputs/rethinkdb"
+	_ "github.com/influxdata/telegraf/plugins/inputs/riak"
 	_ "github.com/influxdata/telegraf/plugins/inputs/sensors"
 	_ "github.com/influxdata/telegraf/plugins/inputs/snmp"
 	_ "github.com/influxdata/telegraf/plugins/inputs/sqlserver"
 	_ "github.com/influxdata/telegraf/plugins/inputs/statsd"
+	_ "github.com/influxdata/telegraf/plugins/inputs/sysstat"
 	_ "github.com/influxdata/telegraf/plugins/inputs/system"
+	_ "github.com/influxdata/telegraf/plugins/inputs/tail"
+	_ "github.com/influxdata/telegraf/plugins/inputs/tcp_listener"
 	_ "github.com/influxdata/telegraf/plugins/inputs/trig"
 	_ "github.com/influxdata/telegraf/plugins/inputs/twemproxy"
+	_ "github.com/influxdata/telegraf/plugins/inputs/udp_listener"
+	_ "github.com/influxdata/telegraf/plugins/inputs/win_perf_counters"
 	_ "github.com/influxdata/telegraf/plugins/inputs/zfs"
 	_ "github.com/influxdata/telegraf/plugins/inputs/zookeeper"
 )

@@ -11,6 +11,7 @@ import (
 	"sync"
 	"time"
 
+	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/plugins/inputs"
 )
 
@@ -19,7 +20,7 @@ type Apache struct {
 }
 
 var sampleConfig = `
-# An array of Apache status URI to gather stats.
+## An array of Apache status URI to gather stats.
 urls = ["http://localhost/server-status?auto"]
 `
 
@@ -31,7 +32,7 @@ func (n *Apache) Description() string {
 	return "Read Apache status information (mod_status)"
 }
 
-func (n *Apache) Gather(acc inputs.Accumulator) error {
+func (n *Apache) Gather(acc telegraf.Accumulator) error {
 	var wg sync.WaitGroup
 	var outerr error
 
@@ -57,9 +58,12 @@ var tr = &http.Transport{
 	ResponseHeaderTimeout: time.Duration(3 * time.Second),
 }
 
-var client = &http.Client{Transport: tr}
+var client = &http.Client{
+	Transport: tr,
+	Timeout:   time.Duration(4 * time.Second),
+}
 
-func (n *Apache) gatherUrl(addr *url.URL, acc inputs.Accumulator) error {
+func (n *Apache) gatherUrl(addr *url.URL, acc telegraf.Accumulator) error {
 	resp, err := client.Get(addr.String())
 	if err != nil {
 		return fmt.Errorf("error making HTTP request to %s: %s", addr.String(), err)
@@ -164,7 +168,7 @@ func getTags(addr *url.URL) map[string]string {
 }
 
 func init() {
-	inputs.Add("apache", func() inputs.Input {
+	inputs.Add("apache", func() telegraf.Input {
 		return &Apache{}
 	})
 }

@@ -8,6 +8,7 @@ import (
 	"strconv"
 	"strings"
 
+	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/plugins/inputs"
 )
 
@@ -17,14 +18,14 @@ type Bcache struct {
 }
 
 var sampleConfig = `
-# Bcache sets path
-# If not specified, then default is:
-# bcachePath = "/sys/fs/bcache"
-#
-# By default, telegraf gather stats for all bcache devices
-# Setting devices will restrict the stats to the specified
-# bcache devices.
-# bcacheDevs = ["bcache0", ...]
+## Bcache sets path
+## If not specified, then default is:
+bcachePath = "/sys/fs/bcache"
+
+## By default, telegraf gather stats for all bcache devices
+## Setting devices will restrict the stats to the specified
+## bcache devices.
+bcacheDevs = ["bcache0"]
 `
 
 func (b *Bcache) SampleConfig() string {
@@ -69,7 +70,7 @@ func prettyToBytes(v string) uint64 {
 	return uint64(result)
 }
 
-func (b *Bcache) gatherBcache(bdev string, acc inputs.Accumulator) error {
+func (b *Bcache) gatherBcache(bdev string, acc telegraf.Accumulator) error {
 	tags := getTags(bdev)
 	metrics, err := filepath.Glob(bdev + "/stats_total/*")
 	if len(metrics) < 0 {
@@ -104,7 +105,7 @@ func (b *Bcache) gatherBcache(bdev string, acc inputs.Accumulator) error {
 	return nil
 }
 
-func (b *Bcache) Gather(acc inputs.Accumulator) error {
+func (b *Bcache) Gather(acc telegraf.Accumulator) error {
 	bcacheDevsChecked := make(map[string]bool)
 	var restrictDevs bool
 	if len(b.BcacheDevs) != 0 {
@@ -135,7 +136,7 @@ func (b *Bcache) Gather(acc inputs.Accumulator) error {
 	}
 
 func init() {
-	inputs.Add("bcache", func() inputs.Input {
+	inputs.Add("bcache", func() telegraf.Input {
 		return &Bcache{}
 	})
 }
|
|||||||
plugins/inputs/cassandra/README.md (new file, 125 lines)

# Telegraf plugin: Cassandra

#### Plugin arguments:
- **context** string: Context root used for the Jolokia URL
- **servers** []string: List of servers with the format "<user:passwd@><host>:port"
- **metrics** []string: List of JMX paths that identify MBean attributes

#### Description

The Cassandra plugin collects Cassandra/JVM metrics exposed as MBean attributes through the Jolokia REST endpoint. All metrics are collected for each configured server.

See: https://jolokia.org/ and [Cassandra Documentation](http://docs.datastax.com/en/cassandra/3.x/cassandra/operations/monitoringCassandraTOC.html)

# Measurements:
The Cassandra plugin produces one or more measurements for each configured metric, adding the server's name as the `host` tag. More than one measurement is generated when querying table metrics with a wildcard for the keyspace or table name.

Given a configuration like:

```toml
[[inputs.cassandra]]
  context = "/jolokia/read"
  servers = [":8778"]
  metrics = ["/java.lang:type=Memory/HeapMemoryUsage"]
```

The collected metrics will be:

```
javaMemory,host=myHost,mname=HeapMemoryUsage HeapMemoryUsage_committed=1040187392,HeapMemoryUsage_init=1050673152,HeapMemoryUsage_max=1040187392,HeapMemoryUsage_used=368155000 1459551767230567084
```

# Useful Metrics:

Here is a list of metrics that may be useful for monitoring your Cassandra cluster. It was put together from multiple sources on the web.

- [How to monitor Cassandra performance metrics](https://www.datadoghq.com/blog/how-to-monitor-cassandra-performance-metrics)
- [Cassandra Documentation](http://docs.datastax.com/en/cassandra/3.x/cassandra/operations/monitoringCassandraTOC.html)

#### measurement = javaGarbageCollector

- /java.lang:type=GarbageCollector,name=ConcurrentMarkSweep/CollectionTime
- /java.lang:type=GarbageCollector,name=ConcurrentMarkSweep/CollectionCount
- /java.lang:type=GarbageCollector,name=ParNew/CollectionTime
- /java.lang:type=GarbageCollector,name=ParNew/CollectionCount

#### measurement = javaMemory

- /java.lang:type=Memory/HeapMemoryUsage
- /java.lang:type=Memory/NonHeapMemoryUsage

#### measurement = cassandraCache

- /org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=Hit
- /org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=Requests
- /org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=Entries
- /org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=Size
- /org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=Capacity
- /org.apache.cassandra.metrics:type=Cache,scope=RowCache,name=Hit
- /org.apache.cassandra.metrics:type=Cache,scope=RowCache,name=Requests
- /org.apache.cassandra.metrics:type=Cache,scope=RowCache,name=Entries
- /org.apache.cassandra.metrics:type=Cache,scope=RowCache,name=Size
- /org.apache.cassandra.metrics:type=Cache,scope=RowCache,name=Capacity

#### measurement = cassandraClient

- /org.apache.cassandra.metrics:type=Client,name=connectedNativeClients

#### measurement = cassandraClientRequest

- /org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=TotalLatency
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=TotalLatency
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Latency
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Latency
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Timeouts
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Timeouts
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Unavailables
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Unavailables
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Failures
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Failures

#### measurement = cassandraCommitLog

- /org.apache.cassandra.metrics:type=CommitLog,name=PendingTasks
- /org.apache.cassandra.metrics:type=CommitLog,name=TotalCommitLogSize

#### measurement = cassandraCompaction

- /org.apache.cassandra.metrics:type=Compaction,name=CompletedTask
- /org.apache.cassandra.metrics:type=Compaction,name=PendingTasks
- /org.apache.cassandra.metrics:type=Compaction,name=TotalCompactionsCompleted
- /org.apache.cassandra.metrics:type=Compaction,name=BytesCompacted

#### measurement = cassandraStorage

- /org.apache.cassandra.metrics:type=Storage,name=Load
- /org.apache.cassandra.metrics:type=Storage,name=Exceptions

#### measurement = cassandraTable
Using wildcards for "keyspace" and "scope" can create a lot of series, as metrics will be reported for every table and keyspace, including internal system tables. Specify a keyspace name and/or a table name to limit them.

- /org.apache.cassandra.metrics:type=Table,keyspace=\*,scope=\*,name=LiveDiskSpaceUsed
- /org.apache.cassandra.metrics:type=Table,keyspace=\*,scope=\*,name=TotalDiskSpaceUsed
- /org.apache.cassandra.metrics:type=Table,keyspace=\*,scope=\*,name=ReadLatency
- /org.apache.cassandra.metrics:type=Table,keyspace=\*,scope=\*,name=CoordinatorReadLatency
- /org.apache.cassandra.metrics:type=Table,keyspace=\*,scope=\*,name=WriteLatency
- /org.apache.cassandra.metrics:type=Table,keyspace=\*,scope=\*,name=ReadTotalLatency
- /org.apache.cassandra.metrics:type=Table,keyspace=\*,scope=\*,name=WriteTotalLatency

#### measurement = cassandraThreadPools

- /org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=CompactionExecutor,name=ActiveTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=AntiEntropyStage,name=ActiveTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=CounterMutationStage,name=PendingTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=CounterMutationStage,name=CurrentlyBlockedTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=MutationStage,name=PendingTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=MutationStage,name=CurrentlyBlockedTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadRepairStage,name=PendingTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadRepairStage,name=CurrentlyBlockedTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadStage,name=PendingTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadStage,name=CurrentlyBlockedTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=RequestResponseStage,name=PendingTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=RequestResponseStage,name=CurrentlyBlockedTasks
plugins/inputs/cassandra/cassandra.go (new file, 309 lines)

```go
package cassandra

import (
	"encoding/json"
	"errors"
	"fmt"
	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/inputs"
	"io/ioutil"
	"log"
	"net/http"
	"net/url"
	"strings"
)

type JolokiaClient interface {
	MakeRequest(req *http.Request) (*http.Response, error)
}

type JolokiaClientImpl struct {
	client *http.Client
}

func (c JolokiaClientImpl) MakeRequest(req *http.Request) (*http.Response, error) {
	return c.client.Do(req)
}

type Cassandra struct {
	jClient JolokiaClient
	Context string
	Servers []string
	Metrics []string
}

type javaMetric struct {
	host   string
	metric string
	acc    telegraf.Accumulator
}

type cassandraMetric struct {
	host   string
	metric string
	acc    telegraf.Accumulator
}

type jmxMetric interface {
	addTagsFields(out map[string]interface{})
}

func newJavaMetric(host string, metric string,
	acc telegraf.Accumulator) *javaMetric {
	return &javaMetric{host: host, metric: metric, acc: acc}
}

func newCassandraMetric(host string, metric string,
	acc telegraf.Accumulator) *cassandraMetric {
	return &cassandraMetric{host: host, metric: metric, acc: acc}
}

func addValuesAsFields(values map[string]interface{}, fields map[string]interface{},
	mname string) {
	for k, v := range values {
		if v != nil {
			fields[mname+"_"+k] = v
		}
	}
}

func parseJmxMetricRequest(mbean string) map[string]string {
	tokens := make(map[string]string)
	classAndPairs := strings.Split(mbean, ":")
	if classAndPairs[0] == "org.apache.cassandra.metrics" {
		tokens["class"] = "cassandra"
	} else if classAndPairs[0] == "java.lang" {
		tokens["class"] = "java"
	} else {
		return tokens
	}
	pairs := strings.Split(classAndPairs[1], ",")
	for _, pair := range pairs {
		p := strings.Split(pair, "=")
		tokens[p[0]] = p[1]
	}
	return tokens
}
```
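A standalone sketch of how the `parseJmxMetricRequest` tokenizer above behaves on a sample MBean name (the sample string is illustrative): the part before `:` selects the metric class, and the comma-separated `key=value` pairs after it become tokens.

```go
package main

import (
	"fmt"
	"strings"
)

// parseJmxMetricRequest mirrors the tokenizer above: map the MBean domain
// to a metric class, then split the remaining "key=value" pairs into tokens.
func parseJmxMetricRequest(mbean string) map[string]string {
	tokens := make(map[string]string)
	classAndPairs := strings.Split(mbean, ":")
	switch classAndPairs[0] {
	case "org.apache.cassandra.metrics":
		tokens["class"] = "cassandra"
	case "java.lang":
		tokens["class"] = "java"
	default:
		// Unknown domains yield no tokens at all.
		return tokens
	}
	for _, pair := range strings.Split(classAndPairs[1], ",") {
		p := strings.Split(pair, "=")
		tokens[p[0]] = p[1]
	}
	return tokens
}

func main() {
	t := parseJmxMetricRequest(
		"org.apache.cassandra.metrics:keyspace=ks1,name=ReadLatency,scope=t1,type=Table")
	fmt.Println(t["class"], t["keyspace"], t["name"]) // → cassandra ks1 ReadLatency
}
```

Note that the `class` and `type` tokens later form the measurement name (e.g. `cassandraTable`), while `name` is re-keyed to the `mname` tag.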
```go
func addTokensToTags(tokens map[string]string, tags map[string]string) {
	for k, v := range tokens {
		if k == "name" {
			tags["mname"] = v // "name" seems to be a reserved word in InfluxDB
		} else if k == "class" || k == "type" {
			continue // class and type are used in the metric name
		} else {
			tags[k] = v
		}
	}
}

func (j javaMetric) addTagsFields(out map[string]interface{}) {
	tags := make(map[string]string)
	fields := make(map[string]interface{})

	a := out["request"].(map[string]interface{})
	attribute := a["attribute"].(string)
	mbean := a["mbean"].(string)

	tokens := parseJmxMetricRequest(mbean)
	addTokensToTags(tokens, tags)
	tags["cassandra_host"] = j.host

	if _, ok := tags["mname"]; !ok {
		// Queries for a single value will not return a "name" tag in the response.
		tags["mname"] = attribute
	}

	if values, ok := out["value"]; ok {
		switch t := values.(type) {
		case map[string]interface{}:
			addValuesAsFields(values.(map[string]interface{}), fields, attribute)
		case interface{}:
			fields[attribute] = t
		}
		j.acc.AddFields(tokens["class"]+tokens["type"], fields, tags)
	} else {
		fmt.Printf("Missing key 'value' in '%s' output response\n%v\n",
			j.metric, out)
	}
}

func addCassandraMetric(mbean string, c cassandraMetric,
	values map[string]interface{}) {

	tags := make(map[string]string)
	fields := make(map[string]interface{})
	tokens := parseJmxMetricRequest(mbean)
	addTokensToTags(tokens, tags)
	tags["cassandra_host"] = c.host
	addValuesAsFields(values, fields, tags["mname"])
	c.acc.AddFields(tokens["class"]+tokens["type"], fields, tags)
}

func (c cassandraMetric) addTagsFields(out map[string]interface{}) {
	r := out["request"]

	tokens := parseJmxMetricRequest(r.(map[string]interface{})["mbean"].(string))
	// Requests with wildcards for keyspace or table names will return nested
	// maps in the json response
	if tokens["type"] == "Table" && (tokens["keyspace"] == "*" ||
		tokens["scope"] == "*") {
		if valuesMap, ok := out["value"]; ok {
			for k, v := range valuesMap.(map[string]interface{}) {
				addCassandraMetric(k, c, v.(map[string]interface{}))
			}
		} else {
			fmt.Printf("Missing key 'value' in '%s' output response\n%v\n",
				c.metric, out)
			return
		}
	} else {
		if values, ok := out["value"]; ok {
			addCassandraMetric(r.(map[string]interface{})["mbean"].(string),
				c, values.(map[string]interface{}))
		} else {
			fmt.Printf("Missing key 'value' in '%s' output response\n%v\n",
				c.metric, out)
			return
		}
	}
}

func (j *Cassandra) SampleConfig() string {
	return `
  ## This is the context root used to compose the jolokia url
  context = "/jolokia/read"
  ## List of cassandra servers exposing jolokia read service
  servers = ["myuser:mypassword@10.10.10.1:8778","10.10.10.2:8778",":8778"]
  ## List of metrics collected on above servers
  ## Each metric consists of a jmx path.
  ## This will collect all heap memory usage metrics from the jvm and
  ## ReadLatency metrics for all keyspaces and tables.
  ## "type=Table" in the query works with Cassandra3.0. Older versions might
  ## need to use "type=ColumnFamily"
  metrics = [
    "/java.lang:type=Memory/HeapMemoryUsage",
    "/org.apache.cassandra.metrics:type=Table,keyspace=*,scope=*,name=ReadLatency"
  ]
`
}

func (j *Cassandra) Description() string {
	return "Read Cassandra metrics through Jolokia"
}

func (j *Cassandra) getAttr(requestUrl *url.URL) (map[string]interface{}, error) {
	// Create + send request
	req, err := http.NewRequest("GET", requestUrl.String(), nil)
	if err != nil {
		return nil, err
	}

	resp, err := j.jClient.MakeRequest(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	// Process response
	if resp.StatusCode != http.StatusOK {
		err = fmt.Errorf("Response from url \"%s\" has status code %d (%s), expected %d (%s)",
			requestUrl,
			resp.StatusCode,
			http.StatusText(resp.StatusCode),
			http.StatusOK,
			http.StatusText(http.StatusOK))
		return nil, err
	}

	// read body
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}

	// Unmarshal json
	var jsonOut map[string]interface{}
	if err = json.Unmarshal([]byte(body), &jsonOut); err != nil {
		return nil, errors.New("Error decoding JSON response")
	}

	return jsonOut, nil
}

func parseServerTokens(server string) map[string]string {
	serverTokens := make(map[string]string)

	hostAndUser := strings.Split(server, "@")
	hostPort := ""
	userPasswd := ""
	if len(hostAndUser) == 2 {
		hostPort = hostAndUser[1]
		userPasswd = hostAndUser[0]
	} else {
		hostPort = hostAndUser[0]
	}
	hostTokens := strings.Split(hostPort, ":")
	serverTokens["host"] = hostTokens[0]
	serverTokens["port"] = hostTokens[1]

	if userPasswd != "" {
		userTokens := strings.Split(userPasswd, ":")
		serverTokens["user"] = userTokens[0]
		serverTokens["passwd"] = userTokens[1]
	}
	return serverTokens
}
```
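`parseServerTokens` above accepts the README's `<user:passwd@><host>:port` form; a standalone sketch of the same splitting logic on illustrative server strings:

```go
package main

import (
	"fmt"
	"strings"
)

// parseServerTokens mirrors the splitter above: an optional "user:passwd@"
// prefix, then "host:port". An empty host (e.g. ":8778") means localhost.
func parseServerTokens(server string) map[string]string {
	tokens := make(map[string]string)
	hostAndUser := strings.Split(server, "@")
	hostPort := hostAndUser[0]
	if len(hostAndUser) == 2 {
		hostPort = hostAndUser[1]
		userTokens := strings.Split(hostAndUser[0], ":")
		tokens["user"] = userTokens[0]
		tokens["passwd"] = userTokens[1]
	}
	hostTokens := strings.Split(hostPort, ":")
	tokens["host"] = hostTokens[0]
	tokens["port"] = hostTokens[1]
	return tokens
}

func main() {
	t := parseServerTokens("myuser:mypassword@10.10.10.1:8778")
	fmt.Println(t["host"], t["port"], t["user"]) // → 10.10.10.1 8778 myuser
}
```

`Gather` then composes the Jolokia read URL as `http://` + host + `:` + port + context + metric, attaching user/passwd as basic-auth credentials when present.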
```go
func (c *Cassandra) Gather(acc telegraf.Accumulator) error {
	context := c.Context
	servers := c.Servers
	metrics := c.Metrics

	for _, server := range servers {
		for _, metric := range metrics {
			serverTokens := parseServerTokens(server)

			var m jmxMetric
			if strings.HasPrefix(metric, "/java.lang:") {
				m = newJavaMetric(serverTokens["host"], metric, acc)
			} else if strings.HasPrefix(metric,
				"/org.apache.cassandra.metrics:") {
				m = newCassandraMetric(serverTokens["host"], metric, acc)
			} else {
				// unsupported metric type
				log.Printf("Unsupported Cassandra metric [%s], skipping",
					metric)
				continue
			}

			// Prepare URL
			requestUrl, err := url.Parse("http://" + serverTokens["host"] + ":" +
				serverTokens["port"] + context + metric)
			if err != nil {
				return err
			}
			if serverTokens["user"] != "" && serverTokens["passwd"] != "" {
				requestUrl.User = url.UserPassword(serverTokens["user"],
					serverTokens["passwd"])
			}
			fmt.Printf("host %s url %s\n", serverTokens["host"], requestUrl)

			out, err := c.getAttr(requestUrl)
			if out["status"] != 200.0 {
				fmt.Printf("URL returned with status %v\n", out["status"])
				continue
			}
			m.addTagsFields(out)
		}
	}
	return nil
}

func init() {
	inputs.Add("cassandra", func() telegraf.Input {
		return &Cassandra{jClient: &JolokiaClientImpl{client: &http.Client{}}}
	})
}
```
plugins/inputs/cassandra/cassandra_test.go (new file, 286 lines)

```go
package cassandra

import (
	_ "fmt"
	"io/ioutil"
	"net/http"
	"strings"
	"testing"

	"github.com/influxdata/telegraf/testutil"
	"github.com/stretchr/testify/assert"
	_ "github.com/stretchr/testify/require"
)

const validJavaMultiValueJSON = `
{
  "request":{
    "mbean":"java.lang:type=Memory",
    "attribute":"HeapMemoryUsage",
    "type":"read"
  },
  "value":{
    "init":67108864,
    "committed":456130560,
    "max":477626368,
    "used":203288528
  },
  "timestamp":1446129191,
  "status":200
}`

const validCassandraMultiValueJSON = `
{
  "request": {
    "mbean": "org.apache.cassandra.metrics:keyspace=test_keyspace1,name=ReadLatency,scope=test_table,type=Table",
    "type": "read"},
  "status": 200,
  "timestamp": 1458089229,
  "value": {
    "999thPercentile": 20.0,
    "99thPercentile": 10.0,
    "Count": 400,
    "DurationUnit": "microseconds",
    "Max": 30.0,
    "Mean": null,
    "MeanRate": 3.0,
    "Min": 1.0,
    "RateUnit": "events/second",
    "StdDev": null
  }
}`

const validCassandraNestedMultiValueJSON = `
{
  "request": {
    "mbean": "org.apache.cassandra.metrics:keyspace=test_keyspace1,name=ReadLatency,scope=*,type=Table",
    "type": "read"},
  "status": 200,
  "timestamp": 1458089184,
  "value": {
    "org.apache.cassandra.metrics:keyspace=test_keyspace1,name=ReadLatency,scope=test_table1,type=Table":
      { "999thPercentile": 1.0,
        "Count": 100,
        "DurationUnit": "microseconds",
        "OneMinuteRate": 1.0,
        "RateUnit": "events/second",
        "StdDev": null
      },
    "org.apache.cassandra.metrics:keyspace=test_keyspace2,name=ReadLatency,scope=test_table2,type=Table":
      { "999thPercentile": 2.0,
        "Count": 200,
        "DurationUnit": "microseconds",
        "OneMinuteRate": 2.0,
        "RateUnit": "events/second",
        "StdDev": null
      }
  }
}`

const validSingleValueJSON = `
{
  "request":{
    "path":"used",
    "mbean":"java.lang:type=Memory",
    "attribute":"HeapMemoryUsage",
    "type":"read"
  },
  "value":209274376,
  "timestamp":1446129256,
  "status":200
}`

const validJavaMultiTypeJSON = `
{
  "request":{
    "mbean":"java.lang:name=ConcurrentMarkSweep,type=GarbageCollector",
    "attribute":"CollectionCount",
    "type":"read"
  },
  "value":1,
  "timestamp":1459316570,
  "status":200
}`

const invalidJSON = "I don't think this is JSON"

const empty = ""

var Servers = []string{"10.10.10.10:8778"}
var AuthServers = []string{"user:passwd@10.10.10.10:8778"}
var MultipleServers = []string{"10.10.10.10:8778", "10.10.10.11:8778"}
var HeapMetric = "/java.lang:type=Memory/HeapMemoryUsage"
var ReadLatencyMetric = "/org.apache.cassandra.metrics:type=Table,keyspace=test_keyspace1,scope=test_table,name=ReadLatency"
var NestedReadLatencyMetric = "/org.apache.cassandra.metrics:type=Table,keyspace=test_keyspace1,scope=*,name=ReadLatency"
var GarbageCollectorMetric1 = "/java.lang:type=GarbageCollector,name=ConcurrentMarkSweep/CollectionCount"
var GarbageCollectorMetric2 = "/java.lang:type=GarbageCollector,name=ConcurrentMarkSweep/CollectionTime"
var Context = "/jolokia/read"

type jolokiaClientStub struct {
	responseBody string
	statusCode   int
}

func (c jolokiaClientStub) MakeRequest(req *http.Request) (*http.Response, error) {
	resp := http.Response{}
	resp.StatusCode = c.statusCode
	resp.Body = ioutil.NopCloser(strings.NewReader(c.responseBody))
	return &resp, nil
}

// Generates a pointer to a Cassandra object that uses a mock HTTP client.
// Parameters:
//     response  : Body of the response that the mock HTTP client should return
//     statusCode: HTTP status code the mock HTTP client should return
//
// Returns:
//     *Cassandra: Pointer to a Cassandra object that uses the generated mock HTTP client
func genJolokiaClientStub(response string, statusCode int, servers []string, metrics []string) *Cassandra {
	return &Cassandra{
		jClient: jolokiaClientStub{responseBody: response, statusCode: statusCode},
		Context: Context,
		Servers: servers,
		Metrics: metrics,
	}
}

// Test that the proper values are ignored or collected for class=Java
func TestHttpJsonJavaMultiValue(t *testing.T) {
	cassandra := genJolokiaClientStub(validJavaMultiValueJSON, 200,
		MultipleServers, []string{HeapMetric})

	var acc testutil.Accumulator
	acc.SetDebug(true)
	err := cassandra.Gather(&acc)

	assert.Nil(t, err)
	assert.Equal(t, 2, len(acc.Metrics))

	fields := map[string]interface{}{
		"HeapMemoryUsage_init":      67108864.0,
		"HeapMemoryUsage_committed": 456130560.0,
		"HeapMemoryUsage_max":       477626368.0,
		"HeapMemoryUsage_used":      203288528.0,
	}
	tags1 := map[string]string{
		"cassandra_host": "10.10.10.10",
		"mname":          "HeapMemoryUsage",
	}

	tags2 := map[string]string{
		"cassandra_host": "10.10.10.11",
		"mname":          "HeapMemoryUsage",
	}
	acc.AssertContainsTaggedFields(t, "javaMemory", fields, tags1)
	acc.AssertContainsTaggedFields(t, "javaMemory", fields, tags2)
}

func TestHttpJsonJavaMultiType(t *testing.T) {
	cassandra := genJolokiaClientStub(validJavaMultiTypeJSON, 200, AuthServers, []string{GarbageCollectorMetric1, GarbageCollectorMetric2})

	var acc testutil.Accumulator
	acc.SetDebug(true)
	err := cassandra.Gather(&acc)

	assert.Nil(t, err)
	assert.Equal(t, 2, len(acc.Metrics))

	fields := map[string]interface{}{
		"CollectionCount": 1.0,
	}

	tags := map[string]string{
		"cassandra_host": "10.10.10.10",
		"mname":          "ConcurrentMarkSweep",
	}
	acc.AssertContainsTaggedFields(t, "javaGarbageCollector", fields, tags)
}

// Test that the proper values are ignored or collected
func TestHttpJsonOn404(t *testing.T) {

	jolokia := genJolokiaClientStub(validJavaMultiValueJSON, 404, Servers,
		[]string{HeapMetric})

	var acc testutil.Accumulator
	err := jolokia.Gather(&acc)

	assert.Nil(t, err)
	assert.Equal(t, 0, len(acc.Metrics))
}

// Test that the proper values are ignored or collected for class=Cassandra
func TestHttpJsonCassandraMultiValue(t *testing.T) {
	cassandra := genJolokiaClientStub(validCassandraMultiValueJSON, 200, Servers, []string{ReadLatencyMetric})

	var acc testutil.Accumulator
	err := cassandra.Gather(&acc)

	assert.Nil(t, err)
	assert.Equal(t, 1, len(acc.Metrics))

	fields := map[string]interface{}{
		"ReadLatency_999thPercentile": 20.0,
		"ReadLatency_99thPercentile":  10.0,
		"ReadLatency_Count":           400.0,
		"ReadLatency_DurationUnit":    "microseconds",
		"ReadLatency_Max":             30.0,
		"ReadLatency_MeanRate":        3.0,
		"ReadLatency_Min":             1.0,
		"ReadLatency_RateUnit":        "events/second",
	}

	tags := map[string]string{
		"cassandra_host": "10.10.10.10",
		"mname":          "ReadLatency",
		"keyspace":       "test_keyspace1",
		"scope":          "test_table",
	}
	acc.AssertContainsTaggedFields(t, "cassandraTable", fields, tags)
}

// Test that the proper values are ignored or collected for class=Cassandra with
// nested values
func TestHttpJsonCassandraNestedMultiValue(t *testing.T) {
	cassandra := genJolokiaClientStub(validCassandraNestedMultiValueJSON, 200, Servers, []string{NestedReadLatencyMetric})

	var acc testutil.Accumulator
	acc.SetDebug(true)
	err := cassandra.Gather(&acc)

	assert.Nil(t, err)
	assert.Equal(t, 2, len(acc.Metrics))

	fields1 := map[string]interface{}{
		"ReadLatency_999thPercentile": 1.0,
		"ReadLatency_Count":           100.0,
		"ReadLatency_DurationUnit":    "microseconds",
		"ReadLatency_OneMinuteRate":   1.0,
		"ReadLatency_RateUnit":        "events/second",
	}

	fields2 := map[string]interface{}{
		"ReadLatency_999thPercentile": 2.0,
		"ReadLatency_Count":           200.0,
		"ReadLatency_DurationUnit":    "microseconds",
		"ReadLatency_OneMinuteRate":   2.0,
		"ReadLatency_RateUnit":        "events/second",
	}

	tags1 := map[string]string{
		"cassandra_host": "10.10.10.10",
		"mname":          "ReadLatency",
		"keyspace":       "test_keyspace1",
		"scope":          "test_table1",
	}

	tags2 := map[string]string{
		"cassandra_host": "10.10.10.10",
		"mname":          "ReadLatency",
		"keyspace":       "test_keyspace2",
		"scope":          "test_table2",
	}

	acc.AssertContainsTaggedFields(t, "cassandraTable", fields1, tags1)
	acc.AssertContainsTaggedFields(t, "cassandraTable", fields2, tags2)
}
```
plugins/inputs/cloudwatch/README.md (new file, 86 lines)
@@ -0,0 +1,86 @@
# Amazon CloudWatch Statistics Input

This plugin will pull Metric Statistics from Amazon CloudWatch.

### Amazon Authentication

This plugin uses a credential chain for authentication with the CloudWatch
API endpoint. The plugin will attempt to authenticate in the following order:
1. [IAM Role](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)
2. [Environment Variables](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables)
3. [Shared Credentials](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file)

### Configuration:

```toml
[[inputs.cloudwatch]]
  ## Amazon Region (required)
  region = 'us-east-1'

  ## Requested CloudWatch aggregation Period (required - must be a multiple of 60s)
  period = '1m'

  ## Collection Delay (required - must account for metrics availability via CloudWatch API)
  delay = '1m'

  ## Override global run interval (optional - defaults to global interval)
  ## Recommended: use a metric 'interval' that is a multiple of 'period' to avoid
  ## gaps or overlap in pulled data
  interval = '1m'

  ## Metric Statistic Namespace (required)
  namespace = 'AWS/ELB'

  ## Metrics to Pull (optional)
  ## Defaults to all Metrics in Namespace if nothing is provided
  ## Refreshes Namespace available metrics every 1h
  [[inputs.cloudwatch.metrics]]
    names = ['Latency', 'RequestCount']

    ## Dimension filters for Metric (optional)
    [[inputs.cloudwatch.metrics.dimensions]]
      name = 'LoadBalancerName'
      value = 'p-example'
```

#### Requirements and Terminology

Plugin configuration utilizes [CloudWatch concepts](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html) and access patterns to allow monitoring of any CloudWatch Metric.

- `region` must be a valid AWS [Region](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#CloudWatchRegions) value
- `period` must be a valid CloudWatch [Period](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#CloudWatchPeriods) value
- `namespace` must be a valid CloudWatch [Namespace](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#Namespace) value
- `names` must be valid CloudWatch [Metric](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#Metric) names
- `dimensions` must be valid CloudWatch [Dimension](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#Dimension) name/value pairs

#### Restrictions and Limitations
- CloudWatch metrics are not available instantly via the CloudWatch API. You should adjust your collection `delay` to account for this lag in metrics availability based on your [monitoring subscription level](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.html)
- CloudWatch API usage incurs cost - see [GetMetricStatistics Pricing](https://aws.amazon.com/cloudwatch/pricing/)

### Measurements & Fields:

Each monitored CloudWatch Namespace records a measurement with fields for each available Metric Statistic.
Namespace and Metric names are represented in [snake case](https://en.wikipedia.org/wiki/Snake_case).

- cloudwatch_{namespace}
    - {metric}_sum          (metric Sum value)
    - {metric}_average      (metric Average value)
    - {metric}_minimum      (metric Minimum value)
    - {metric}_maximum      (metric Maximum value)
    - {metric}_sample_count (metric SampleCount value)

### Tags:
Each measurement is tagged with the following identifiers to uniquely identify the associated metric.
Tag Dimension names are represented in [snake case](https://en.wikipedia.org/wiki/Snake_case).

- All measurements have the following tags:
    - region (CloudWatch Region)
    - unit (CloudWatch Metric Unit)
    - {dimension-name} (CloudWatch Dimension value - one for each metric dimension)

### Example Output:

```
$ ./telegraf -config telegraf.conf -input-filter cloudwatch -test
> cloudwatch_aws_elb,load_balancer_name=p-example,region=us-east-1,unit=seconds latency_average=0.004810798017284538,latency_maximum=0.1100282669067383,latency_minimum=0.0006084442138671875,latency_sample_count=4029,latency_sum=19.382705211639404 1459542420000000000
```
plugins/inputs/cloudwatch/cloudwatch.go (new file, 311 lines)
@@ -0,0 +1,311 @@
package cloudwatch

import (
	"fmt"
	"strings"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"

	"github.com/aws/aws-sdk-go/service/cloudwatch"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/internal"
	"github.com/influxdata/telegraf/plugins/inputs"
)

type (
	CloudWatch struct {
		Region    string            `toml:"region"`
		AccessKey string            `toml:"access_key"`
		SecretKey string            `toml:"secret_key"`
		Period    internal.Duration `toml:"period"`
		Delay     internal.Duration `toml:"delay"`
		Namespace string            `toml:"namespace"`
		Metrics   []*Metric         `toml:"metrics"`

		client      cloudwatchClient
		metricCache *MetricCache
	}

	Metric struct {
		MetricNames []string     `toml:"names"`
		Dimensions  []*Dimension `toml:"dimensions"`
	}

	Dimension struct {
		Name  string `toml:"name"`
		Value string `toml:"value"`
	}

	MetricCache struct {
		TTL     time.Duration
		Fetched time.Time
		Metrics []*cloudwatch.Metric
	}

	cloudwatchClient interface {
		ListMetrics(*cloudwatch.ListMetricsInput) (*cloudwatch.ListMetricsOutput, error)
		GetMetricStatistics(*cloudwatch.GetMetricStatisticsInput) (*cloudwatch.GetMetricStatisticsOutput, error)
	}
)

func (c *CloudWatch) SampleConfig() string {
	return `
  ## Amazon Region
  region = 'us-east-1'

  ## Amazon Credentials
  ## Credentials are loaded in the following order
  ## 1) explicit credentials from 'access_key' and 'secret_key'
  ## 2) environment variables
  ## 3) shared credentials file
  ## 4) EC2 Instance Profile
  #access_key = ""
  #secret_key = ""

  ## Requested CloudWatch aggregation Period (required - must be a multiple of 60s)
  period = '1m'

  ## Collection Delay (required - must account for metrics availability via CloudWatch API)
  delay = '1m'

  ## Recommended: use a metric 'interval' that is a multiple of 'period' to avoid
  ## gaps or overlap in pulled data
  interval = '1m'

  ## Metric Statistic Namespace (required)
  namespace = 'AWS/ELB'

  ## Metrics to Pull (optional)
  ## Defaults to all Metrics in Namespace if nothing is provided
  ## Refreshes Namespace available metrics every 1h
  #[[inputs.cloudwatch.metrics]]
  #  names = ['Latency', 'RequestCount']
  #
  #  ## Dimension filters for Metric (optional)
  #  [[inputs.cloudwatch.metrics.dimensions]]
  #    name = 'LoadBalancerName'
  #    value = 'p-example'
`
}

func (c *CloudWatch) Description() string {
	return "Pull Metric Statistics from Amazon CloudWatch"
}

func (c *CloudWatch) Gather(acc telegraf.Accumulator) error {
	if c.client == nil {
		c.initializeCloudWatch()
	}

	var metrics []*cloudwatch.Metric

	// check for provided metric filter
	if c.Metrics != nil {
		metrics = []*cloudwatch.Metric{}
		for _, m := range c.Metrics {
			dimensions := make([]*cloudwatch.Dimension, len(m.Dimensions))
			for k, d := range m.Dimensions {
				dimensions[k] = &cloudwatch.Dimension{
					Name:  aws.String(d.Name),
					Value: aws.String(d.Value),
				}
			}
			for _, name := range m.MetricNames {
				metrics = append(metrics, &cloudwatch.Metric{
					Namespace:  aws.String(c.Namespace),
					MetricName: aws.String(name),
					Dimensions: dimensions,
				})
			}
		}
	} else {
		var err error
		metrics, err = c.fetchNamespaceMetrics()
		if err != nil {
			return err
		}
	}

	metricCount := len(metrics)
	var errChan = make(chan error, metricCount)

	now := time.Now()

	// limit concurrency or we can easily exhaust user connection limit
	semaphore := make(chan byte, 64)

	for _, m := range metrics {
		semaphore <- 0x1
		go c.gatherMetric(acc, m, now, semaphore, errChan)
	}

	for i := 1; i <= metricCount; i++ {
		err := <-errChan
		if err != nil {
			return err
		}
	}
	return nil
}
func init() {
	inputs.Add("cloudwatch", func() telegraf.Input {
		return &CloudWatch{}
	})
}

/*
 * Initialize CloudWatch client
 */
func (c *CloudWatch) initializeCloudWatch() error {
	config := &aws.Config{
		Region: aws.String(c.Region),
	}
	if c.AccessKey != "" || c.SecretKey != "" {
		config.Credentials = credentials.NewStaticCredentials(c.AccessKey, c.SecretKey, "")
	}

	c.client = cloudwatch.New(session.New(config))
	return nil
}

/*
 * Fetch available metrics for given CloudWatch Namespace
 */
func (c *CloudWatch) fetchNamespaceMetrics() (metrics []*cloudwatch.Metric, err error) {
	if c.metricCache != nil && c.metricCache.IsValid() {
		metrics = c.metricCache.Metrics
		return
	}

	metrics = []*cloudwatch.Metric{}

	var token *string
	for more := true; more; {
		params := &cloudwatch.ListMetricsInput{
			Namespace:  aws.String(c.Namespace),
			Dimensions: []*cloudwatch.DimensionFilter{},
			NextToken:  token,
			MetricName: nil,
		}

		resp, err := c.client.ListMetrics(params)
		if err != nil {
			return nil, err
		}

		metrics = append(metrics, resp.Metrics...)

		token = resp.NextToken
		more = token != nil
	}

	cacheTTL, _ := time.ParseDuration("1h")
	c.metricCache = &MetricCache{
		Metrics: metrics,
		Fetched: time.Now(),
		TTL:     cacheTTL,
	}

	return
}

/*
 * Gather given Metric and emit any error
 */
func (c *CloudWatch) gatherMetric(acc telegraf.Accumulator, metric *cloudwatch.Metric, now time.Time, semaphore chan byte, errChan chan error) {
	params := c.getStatisticsInput(metric, now)
	resp, err := c.client.GetMetricStatistics(params)
	if err != nil {
		errChan <- err
		<-semaphore
		return
	}

	for _, point := range resp.Datapoints {
		tags := map[string]string{
			"region": c.Region,
			"unit":   snakeCase(*point.Unit),
		}

		for _, d := range metric.Dimensions {
			tags[snakeCase(*d.Name)] = *d.Value
		}

		// record field for each statistic
		fields := map[string]interface{}{}

		if point.Average != nil {
			fields[formatField(*metric.MetricName, cloudwatch.StatisticAverage)] = *point.Average
		}
		if point.Maximum != nil {
			fields[formatField(*metric.MetricName, cloudwatch.StatisticMaximum)] = *point.Maximum
		}
		if point.Minimum != nil {
			fields[formatField(*metric.MetricName, cloudwatch.StatisticMinimum)] = *point.Minimum
		}
		if point.SampleCount != nil {
			fields[formatField(*metric.MetricName, cloudwatch.StatisticSampleCount)] = *point.SampleCount
		}
		if point.Sum != nil {
			fields[formatField(*metric.MetricName, cloudwatch.StatisticSum)] = *point.Sum
		}

		acc.AddFields(formatMeasurement(c.Namespace), fields, tags, *point.Timestamp)
	}

	errChan <- nil
	<-semaphore
}

/*
 * Formatting helpers
 */
func formatField(metricName string, statistic string) string {
	return fmt.Sprintf("%s_%s", snakeCase(metricName), snakeCase(statistic))
}

func formatMeasurement(namespace string) string {
	namespace = strings.Replace(namespace, "/", "_", -1)
	namespace = snakeCase(namespace)
	return fmt.Sprintf("cloudwatch_%s", namespace)
}

func snakeCase(s string) string {
	s = internal.SnakeCase(s)
	s = strings.Replace(s, "__", "_", -1)
	return s
}

/*
 * Map Metric to *cloudwatch.GetMetricStatisticsInput for given timeframe
 */
func (c *CloudWatch) getStatisticsInput(metric *cloudwatch.Metric, now time.Time) *cloudwatch.GetMetricStatisticsInput {
	end := now.Add(-c.Delay.Duration)

	input := &cloudwatch.GetMetricStatisticsInput{
		StartTime:  aws.Time(end.Add(-c.Period.Duration)),
		EndTime:    aws.Time(end),
		MetricName: metric.MetricName,
		Namespace:  metric.Namespace,
		Period:     aws.Int64(int64(c.Period.Duration.Seconds())),
		Dimensions: metric.Dimensions,
		Statistics: []*string{
			aws.String(cloudwatch.StatisticAverage),
			aws.String(cloudwatch.StatisticMaximum),
			aws.String(cloudwatch.StatisticMinimum),
			aws.String(cloudwatch.StatisticSum),
			aws.String(cloudwatch.StatisticSampleCount)},
	}
	return input
}
/*
 * Check Metric Cache validity
 */
func (c *MetricCache) IsValid() bool {
	return c.Metrics != nil && time.Since(c.Fetched) < c.TTL
}
plugins/inputs/cloudwatch/cloudwatch_test.go (new file, 131 lines)
@@ -0,0 +1,131 @@
package cloudwatch

import (
	"testing"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/cloudwatch"
	"github.com/influxdata/telegraf/internal"
	"github.com/influxdata/telegraf/testutil"
	"github.com/stretchr/testify/assert"
)

type mockCloudWatchClient struct{}

func (m *mockCloudWatchClient) ListMetrics(params *cloudwatch.ListMetricsInput) (*cloudwatch.ListMetricsOutput, error) {
	metric := &cloudwatch.Metric{
		Namespace:  params.Namespace,
		MetricName: aws.String("Latency"),
		Dimensions: []*cloudwatch.Dimension{
			&cloudwatch.Dimension{
				Name:  aws.String("LoadBalancerName"),
				Value: aws.String("p-example"),
			},
		},
	}

	result := &cloudwatch.ListMetricsOutput{
		Metrics: []*cloudwatch.Metric{metric},
	}
	return result, nil
}

func (m *mockCloudWatchClient) GetMetricStatistics(params *cloudwatch.GetMetricStatisticsInput) (*cloudwatch.GetMetricStatisticsOutput, error) {
	dataPoint := &cloudwatch.Datapoint{
		Timestamp:   params.EndTime,
		Minimum:     aws.Float64(0.1),
		Maximum:     aws.Float64(0.3),
		Average:     aws.Float64(0.2),
		Sum:         aws.Float64(123),
		SampleCount: aws.Float64(100),
		Unit:        aws.String("Seconds"),
	}
	result := &cloudwatch.GetMetricStatisticsOutput{
		Label:      aws.String("Latency"),
		Datapoints: []*cloudwatch.Datapoint{dataPoint},
	}
	return result, nil
}

func TestGather(t *testing.T) {
	duration, _ := time.ParseDuration("1m")
	internalDuration := internal.Duration{
		Duration: duration,
	}
	c := &CloudWatch{
		Region:    "us-east-1",
		Namespace: "AWS/ELB",
		Delay:     internalDuration,
		Period:    internalDuration,
	}

	var acc testutil.Accumulator
	c.client = &mockCloudWatchClient{}

	c.Gather(&acc)

	fields := map[string]interface{}{}
	fields["latency_minimum"] = 0.1
	fields["latency_maximum"] = 0.3
	fields["latency_average"] = 0.2
	fields["latency_sum"] = 123.0
	fields["latency_sample_count"] = 100.0

	tags := map[string]string{}
	tags["unit"] = "seconds"
	tags["region"] = "us-east-1"
	tags["load_balancer_name"] = "p-example"

	assert.True(t, acc.HasMeasurement("cloudwatch_aws_elb"))
	acc.AssertContainsTaggedFields(t, "cloudwatch_aws_elb", fields, tags)
}

func TestGenerateStatisticsInputParams(t *testing.T) {
	d := &cloudwatch.Dimension{
		Name:  aws.String("LoadBalancerName"),
		Value: aws.String("p-example"),
	}

	m := &cloudwatch.Metric{
		MetricName: aws.String("Latency"),
		Dimensions: []*cloudwatch.Dimension{d},
	}

	duration, _ := time.ParseDuration("1m")
	internalDuration := internal.Duration{
		Duration: duration,
	}

	c := &CloudWatch{
		Namespace: "AWS/ELB",
		Delay:     internalDuration,
		Period:    internalDuration,
	}

	c.initializeCloudWatch()

	now := time.Now()

	params := c.getStatisticsInput(m, now)

	assert.EqualValues(t, *params.EndTime, now.Add(-c.Delay.Duration))
	assert.EqualValues(t, *params.StartTime, now.Add(-c.Period.Duration).Add(-c.Delay.Duration))
	assert.Len(t, params.Dimensions, 1)
	assert.Len(t, params.Statistics, 5)
	assert.EqualValues(t, *params.Period, 60)
}

func TestMetricsCacheTimeout(t *testing.T) {
	ttl, _ := time.ParseDuration("5ms")
	cache := &MetricCache{
		Metrics: []*cloudwatch.Metric{},
		Fetched: time.Now(),
		TTL:     ttl,
	}

	assert.True(t, cache.IsValid())
	time.Sleep(ttl)
	assert.False(t, cache.IsValid())
}
plugins/inputs/couchbase/README.md (new file, 63 lines)
@@ -0,0 +1,63 @@
# Telegraf Plugin: Couchbase

## Configuration:

```
# Read per-node and per-bucket metrics from Couchbase
[[inputs.couchbase]]
  ## specify servers via a url matching:
  ##  [protocol://][:password]@address[:port]
  ##  e.g.
  ##    http://couchbase-0.example.com/
  ##    http://admin:secret@couchbase-0.example.com:8091/
  ##
  ## If no servers are specified, then localhost is used as the host.
  ## If no protocol is specified, HTTP is used.
  ## If no port is specified, 8091 is used.
  servers = ["http://localhost:8091"]
```

## Measurements:

### couchbase_node

Tags:
- cluster: whatever you called it in `servers` in the configuration, e.g.: `http://couchbase-0.example.com/`
- hostname: Couchbase's name for the node and port, e.g., `172.16.10.187:8091`

Fields:
- memory_free (unit: bytes, example: 23181365248.0)
- memory_total (unit: bytes, example: 64424656896.0)

### couchbase_bucket

Tags:
- cluster: whatever you called it in `servers` in the configuration, e.g.: `http://couchbase-0.example.com/`
- bucket: the name of the couchbase bucket, e.g., `blastro-df`

Fields:
- quota_percent_used (unit: percent, example: 68.85424936294555)
- ops_per_sec (unit: count, example: 5686.789686789687)
- disk_fetches (unit: count, example: 0.0)
- item_count (unit: count, example: 943239752.0)
- disk_used (unit: bytes, example: 409178772321.0)
- data_used (unit: bytes, example: 212179309111.0)
- mem_used (unit: bytes, example: 202156957464.0)

## Example output

```
$ telegraf -config telegraf.conf -input-filter couchbase -test
* Plugin: couchbase, Collection 1
> couchbase_node,cluster=https://couchbase-0.example.com/,hostname=172.16.10.187:8091 memory_free=22927384576,memory_total=64424656896 1458381183695864929
> couchbase_node,cluster=https://couchbase-0.example.com/,hostname=172.16.10.65:8091 memory_free=23520161792,memory_total=64424656896 1458381183695972112
> couchbase_node,cluster=https://couchbase-0.example.com/,hostname=172.16.13.105:8091 memory_free=23531704320,memory_total=64424656896 1458381183695995259
> couchbase_node,cluster=https://couchbase-0.example.com/,hostname=172.16.13.173:8091 memory_free=23628767232,memory_total=64424656896 1458381183696010870
> couchbase_node,cluster=https://couchbase-0.example.com/,hostname=172.16.15.120:8091 memory_free=23616692224,memory_total=64424656896 1458381183696027406
> couchbase_node,cluster=https://couchbase-0.example.com/,hostname=172.16.8.127:8091 memory_free=23431770112,memory_total=64424656896 1458381183696041040
> couchbase_node,cluster=https://couchbase-0.example.com/,hostname=172.16.8.148:8091 memory_free=23811371008,memory_total=64424656896 1458381183696059060
> couchbase_bucket,bucket=default,cluster=https://couchbase-0.example.com/ data_used=25743360,disk_fetches=0,disk_used=31744886,item_count=0,mem_used=77729224,ops_per_sec=0,quota_percent_used=10.58976636614118 1458381183696210074
> couchbase_bucket,bucket=demoncat,cluster=https://couchbase-0.example.com/ data_used=38157584951,disk_fetches=0,disk_used=62730302441,item_count=14662532,mem_used=24015304256,ops_per_sec=1207.753207753208,quota_percent_used=79.87855353525707 1458381183696242695
> couchbase_bucket,bucket=blastro-df,cluster=https://couchbase-0.example.com/ data_used=212552491622,disk_fetches=0,disk_used=413323157621,item_count=944655680,mem_used=202421103760,ops_per_sec=1692.176692176692,quota_percent_used=68.9442170551845 1458381183696272206
```
plugins/inputs/couchbase/couchbase.go (new file, 104 lines)
@@ -0,0 +1,104 @@
package couchbase

import (
	"sync"

	couchbase "github.com/couchbase/go-couchbase"
	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/inputs"
)

type Couchbase struct {
	Servers []string
}

var sampleConfig = `
  ## specify servers via a url matching:
  ##  [protocol://][:password]@address[:port]
  ##  e.g.
  ##    http://couchbase-0.example.com/
  ##    http://admin:secret@couchbase-0.example.com:8091/
  ##
  ## If no servers are specified, then localhost is used as the host.
  ## If no protocol is specified, HTTP is used.
  ## If no port is specified, 8091 is used.
  servers = ["http://localhost:8091"]
`

func (r *Couchbase) SampleConfig() string {
	return sampleConfig
}

func (r *Couchbase) Description() string {
	return "Read metrics from one or many couchbase clusters"
}

// Gather reads stats from all configured clusters and accumulates them.
// It returns one of the errors encountered while gathering stats (if any).
func (r *Couchbase) Gather(acc telegraf.Accumulator) error {
	if len(r.Servers) == 0 {
		r.gatherServer("http://localhost:8091/", acc, nil)
		return nil
	}

	var wg sync.WaitGroup

	var outerr error

	for _, serv := range r.Servers {
		wg.Add(1)
		go func(serv string) {
			defer wg.Done()
			outerr = r.gatherServer(serv, acc, nil)
		}(serv)
	}

	wg.Wait()

	return outerr
}

func (r *Couchbase) gatherServer(addr string, acc telegraf.Accumulator, pool *couchbase.Pool) error {
	if pool == nil {
		client, err := couchbase.Connect(addr)
		if err != nil {
			return err
		}

		// `default` is the only possible pool name. It's a
		// placeholder for a possible future Couchbase feature. See
		// http://stackoverflow.com/a/16990911/17498.
		p, err := client.GetPool("default")
		if err != nil {
			return err
		}
		pool = &p
	}
	for i := 0; i < len(pool.Nodes); i++ {
		node := pool.Nodes[i]
		tags := map[string]string{"cluster": addr, "hostname": node.Hostname}
		fields := make(map[string]interface{})
		fields["memory_free"] = node.MemoryFree
		fields["memory_total"] = node.MemoryTotal
		acc.AddFields("couchbase_node", fields, tags)
	}
	for bucketName := range pool.BucketMap {
		tags := map[string]string{"cluster": addr, "bucket": bucketName}
		bs := pool.BucketMap[bucketName].BasicStats
		fields := make(map[string]interface{})
		fields["quota_percent_used"] = bs["quotaPercentUsed"]
		fields["ops_per_sec"] = bs["opsPerSec"]
		fields["disk_fetches"] = bs["diskFetches"]
		fields["item_count"] = bs["itemCount"]
		fields["disk_used"] = bs["diskUsed"]
		fields["data_used"] = bs["dataUsed"]
		fields["mem_used"] = bs["memUsed"]
		acc.AddFields("couchbase_bucket", fields, tags)
	}
	return nil
}

func init() {
	inputs.Add("couchbase", func() telegraf.Input {
		return &Couchbase{}
	})
}
plugins/inputs/couchbase/couchbase_test.go (new file, 50 lines)
File diff suppressed because one or more lines are too long

plugins/inputs/couchdb/README.md (new file, 255 lines)
@@ -0,0 +1,255 @@
# CouchDB Input Plugin

The CouchDB plugin gathers metrics of CouchDB using the [_stats](http://docs.couchdb.org/en/1.6.1/api/server/common.html?highlight=stats#get--_stats) endpoint.

### Configuration:

```
# Sample Config:
[[inputs.couchdb]]
  hosts = ["http://localhost:5984/_stats"]
```
|
|
||||||
|
### Measurements & Fields:
|
||||||
|
|
||||||
|
Statistics specific to the internals of CouchDB:
|
||||||
|
|
||||||
|
- couchdb_auth_cache_misses
|
||||||
|
- couchdb_database_writes
|
||||||
|
- couchdb_open_databases
|
||||||
|
- couchdb_auth_cache_hits
|
||||||
|
- couchdb_request_time
|
||||||
|
- couchdb_database_reads
|
||||||
|
- couchdb_open_os_files
|
||||||
|
|
||||||
|
Statistics of HTTP requests by method:
|
||||||
|
|
||||||
|
- httpd_request_methods_put
|
||||||
|
- httpd_request_methods_get
|
||||||
|
- httpd_request_methods_copy
|
||||||
|
- httpd_request_methods_delete
|
||||||
|
- httpd_request_methods_post
|
||||||
|
- httpd_request_methods_head
|
||||||
|
|
||||||
|
Statistics of HTTP requests by response code:
|
||||||
|
|
||||||
|
- httpd_status_codes_200
|
||||||
|
- httpd_status_codes_201
|
||||||
|
- httpd_status_codes_202
|
||||||
|
- httpd_status_codes_301
|
||||||
|
- httpd_status_codes_304
|
||||||
|
- httpd_status_codes_400
|
||||||
|
- httpd_status_codes_401
|
||||||
|
- httpd_status_codes_403
|
||||||
|
- httpd_status_codes_404
|
||||||
|
- httpd_status_codes_405
|
||||||
|
- httpd_status_codes_409
|
||||||
|
- httpd_status_codes_412
|
||||||
|
- httpd_status_codes_500
|
||||||
|
|
||||||
|
httpd statistics:
|
||||||
|
|
||||||
|
- httpd_clients_requesting_changes
|
||||||
|
- httpd_temporary_view_reads
|
||||||
|
- httpd_requests
|
||||||
|
- httpd_bulk_requests
|
||||||
|
- httpd_view_reads
|
||||||
|
|
||||||
|
### Tags:
|
||||||
|
|
||||||
|
- server (url of the couchdb _stats endpoint)
|
||||||
|
|
||||||
|
### Example output:
|
||||||
|
|
||||||
|
```
➜  telegraf git:(master) ✗ ./telegraf -config ./config.conf -input-filter couchdb -test
* Plugin: couchdb,
Collection 1
> couchdb,server=http://localhost:5984/_stats couchdb_auth_cache_hits_current=0,
couchdb_auth_cache_hits_max=0,
couchdb_auth_cache_hits_mean=0,
couchdb_auth_cache_hits_min=0,
couchdb_auth_cache_hits_stddev=0,
couchdb_auth_cache_hits_sum=0,
couchdb_auth_cache_misses_current=0,
couchdb_auth_cache_misses_max=0,
couchdb_auth_cache_misses_mean=0,
couchdb_auth_cache_misses_min=0,
couchdb_auth_cache_misses_stddev=0,
couchdb_auth_cache_misses_sum=0,
couchdb_database_reads_current=0,
couchdb_database_reads_max=0,
couchdb_database_reads_mean=0,
couchdb_database_reads_min=0,
couchdb_database_reads_stddev=0,
couchdb_database_reads_sum=0,
couchdb_database_writes_current=1102,
couchdb_database_writes_max=131,
couchdb_database_writes_mean=0.116,
couchdb_database_writes_min=0,
couchdb_database_writes_stddev=3.536,
couchdb_database_writes_sum=1102,
couchdb_open_databases_current=1,
couchdb_open_databases_max=1,
couchdb_open_databases_mean=0,
couchdb_open_databases_min=0,
couchdb_open_databases_stddev=0.01,
couchdb_open_databases_sum=1,
couchdb_open_os_files_current=2,
couchdb_open_os_files_max=2,
couchdb_open_os_files_mean=0,
couchdb_open_os_files_min=0,
couchdb_open_os_files_stddev=0.02,
couchdb_open_os_files_sum=2,
couchdb_request_time_current=242.21,
couchdb_request_time_max=102,
couchdb_request_time_mean=5.767,
couchdb_request_time_min=1,
couchdb_request_time_stddev=17.369,
couchdb_request_time_sum=242.21,
httpd_bulk_requests_current=0,
httpd_bulk_requests_max=0,
httpd_bulk_requests_mean=0,
httpd_bulk_requests_min=0,
httpd_bulk_requests_stddev=0,
httpd_bulk_requests_sum=0,
httpd_clients_requesting_changes_current=0,
httpd_clients_requesting_changes_max=0,
httpd_clients_requesting_changes_mean=0,
httpd_clients_requesting_changes_min=0,
httpd_clients_requesting_changes_stddev=0,
httpd_clients_requesting_changes_sum=0,
httpd_request_methods_copy_current=0,
httpd_request_methods_copy_max=0,
httpd_request_methods_copy_mean=0,
httpd_request_methods_copy_min=0,
httpd_request_methods_copy_stddev=0,
httpd_request_methods_copy_sum=0,
httpd_request_methods_delete_current=0,
httpd_request_methods_delete_max=0,
httpd_request_methods_delete_mean=0,
httpd_request_methods_delete_min=0,
httpd_request_methods_delete_stddev=0,
httpd_request_methods_delete_sum=0,
httpd_request_methods_get_current=31,
httpd_request_methods_get_max=1,
httpd_request_methods_get_mean=0.003,
httpd_request_methods_get_min=0,
httpd_request_methods_get_stddev=0.057,
httpd_request_methods_get_sum=31,
httpd_request_methods_head_current=0,
httpd_request_methods_head_max=0,
httpd_request_methods_head_mean=0,
httpd_request_methods_head_min=0,
httpd_request_methods_head_stddev=0,
httpd_request_methods_head_sum=0,
httpd_request_methods_post_current=1102,
httpd_request_methods_post_max=131,
httpd_request_methods_post_mean=0.116,
httpd_request_methods_post_min=0,
httpd_request_methods_post_stddev=3.536,
httpd_request_methods_post_sum=1102,
httpd_request_methods_put_current=1,
httpd_request_methods_put_max=1,
httpd_request_methods_put_mean=0,
httpd_request_methods_put_min=0,
httpd_request_methods_put_stddev=0.01,
httpd_request_methods_put_sum=1,
httpd_requests_current=1133,
httpd_requests_max=130,
httpd_requests_mean=0.118,
httpd_requests_min=0,
httpd_requests_stddev=3.512,
httpd_requests_sum=1133,
httpd_status_codes_200_current=31,
httpd_status_codes_200_max=1,
httpd_status_codes_200_mean=0.003,
httpd_status_codes_200_min=0,
httpd_status_codes_200_stddev=0.057,
httpd_status_codes_200_sum=31,
httpd_status_codes_201_current=1103,
httpd_status_codes_201_max=130,
httpd_status_codes_201_mean=0.116,
httpd_status_codes_201_min=0,
httpd_status_codes_201_stddev=3.532,
httpd_status_codes_201_sum=1103,
httpd_status_codes_202_current=0,
httpd_status_codes_202_max=0,
httpd_status_codes_202_mean=0,
httpd_status_codes_202_min=0,
httpd_status_codes_202_stddev=0,
httpd_status_codes_202_sum=0,
httpd_status_codes_301_current=0,
httpd_status_codes_301_max=0,
httpd_status_codes_301_mean=0,
httpd_status_codes_301_min=0,
httpd_status_codes_301_stddev=0,
httpd_status_codes_301_sum=0,
httpd_status_codes_304_current=0,
httpd_status_codes_304_max=0,
httpd_status_codes_304_mean=0,
httpd_status_codes_304_min=0,
httpd_status_codes_304_stddev=0,
httpd_status_codes_304_sum=0,
httpd_status_codes_400_current=0,
httpd_status_codes_400_max=0,
httpd_status_codes_400_mean=0,
httpd_status_codes_400_min=0,
httpd_status_codes_400_stddev=0,
httpd_status_codes_400_sum=0,
httpd_status_codes_401_current=0,
httpd_status_codes_401_max=0,
httpd_status_codes_401_mean=0,
httpd_status_codes_401_min=0,
httpd_status_codes_401_stddev=0,
httpd_status_codes_401_sum=0,
httpd_status_codes_403_current=0,
httpd_status_codes_403_max=0,
httpd_status_codes_403_mean=0,
httpd_status_codes_403_min=0,
httpd_status_codes_403_stddev=0,
httpd_status_codes_403_sum=0,
httpd_status_codes_404_current=0,
httpd_status_codes_404_max=0,
httpd_status_codes_404_mean=0,
httpd_status_codes_404_min=0,
httpd_status_codes_404_stddev=0,
httpd_status_codes_404_sum=0,
httpd_status_codes_405_current=0,
httpd_status_codes_405_max=0,
httpd_status_codes_405_mean=0,
httpd_status_codes_405_min=0,
httpd_status_codes_405_stddev=0,
httpd_status_codes_405_sum=0,
httpd_status_codes_409_current=0,
httpd_status_codes_409_max=0,
httpd_status_codes_409_mean=0,
httpd_status_codes_409_min=0,
httpd_status_codes_409_stddev=0,
httpd_status_codes_409_sum=0,
httpd_status_codes_412_current=0,
httpd_status_codes_412_max=0,
httpd_status_codes_412_mean=0,
httpd_status_codes_412_min=0,
httpd_status_codes_412_stddev=0,
httpd_status_codes_412_sum=0,
httpd_status_codes_500_current=0,
httpd_status_codes_500_max=0,
httpd_status_codes_500_mean=0,
httpd_status_codes_500_min=0,
httpd_status_codes_500_stddev=0,
httpd_status_codes_500_sum=0,
httpd_temporary_view_reads_current=0,
httpd_temporary_view_reads_max=0,
httpd_temporary_view_reads_mean=0,
httpd_temporary_view_reads_min=0,
httpd_temporary_view_reads_stddev=0,
httpd_temporary_view_reads_sum=0,
httpd_view_reads_current=0,
httpd_view_reads_max=0,
httpd_view_reads_mean=0,
httpd_view_reads_min=0,
httpd_view_reads_stddev=0,
httpd_view_reads_sum=0 1454692257621938169
```

215 plugins/inputs/couchdb/couchdb.go Normal file
@@ -0,0 +1,215 @@
```go
package couchdb

import (
	"encoding/json"
	"errors"
	"fmt"
	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/inputs"
	"net/http"
	"reflect"
	"strings"
	"sync"
	"time"
)

// Schema:
type metaData struct {
	Description string  `json:"description"`
	Current     float64 `json:"current"`
	Sum         float64 `json:"sum"`
	Mean        float64 `json:"mean"`
	Stddev      float64 `json:"stddev"`
	Min         float64 `json:"min"`
	Max         float64 `json:"max"`
}

type Stats struct {
	Couchdb struct {
		AuthCacheMisses metaData `json:"auth_cache_misses"`
		DatabaseWrites  metaData `json:"database_writes"`
		OpenDatabases   metaData `json:"open_databases"`
		AuthCacheHits   metaData `json:"auth_cache_hits"`
		RequestTime     metaData `json:"request_time"`
		DatabaseReads   metaData `json:"database_reads"`
		OpenOsFiles     metaData `json:"open_os_files"`
	} `json:"couchdb"`
	HttpdRequestMethods struct {
		Put    metaData `json:"PUT"`
		Get    metaData `json:"GET"`
		Copy   metaData `json:"COPY"`
		Delete metaData `json:"DELETE"`
		Post   metaData `json:"POST"`
		Head   metaData `json:"HEAD"`
	} `json:"httpd_request_methods"`
	HttpdStatusCodes struct {
		Status200 metaData `json:"200"`
		Status201 metaData `json:"201"`
		Status202 metaData `json:"202"`
		Status301 metaData `json:"301"`
		Status304 metaData `json:"304"`
		Status400 metaData `json:"400"`
		Status401 metaData `json:"401"`
		Status403 metaData `json:"403"`
		Status404 metaData `json:"404"`
		Status405 metaData `json:"405"`
		Status409 metaData `json:"409"`
		Status412 metaData `json:"412"`
		Status500 metaData `json:"500"`
	} `json:"httpd_status_codes"`
	Httpd struct {
		ClientsRequestingChanges metaData `json:"clients_requesting_changes"`
		TemporaryViewReads       metaData `json:"temporary_view_reads"`
		Requests                 metaData `json:"requests"`
		BulkRequests             metaData `json:"bulk_requests"`
		ViewReads                metaData `json:"view_reads"`
	} `json:"httpd"`
}

type CouchDB struct {
	HOSTs []string `toml:"hosts"`
}

func (*CouchDB) Description() string {
	return "Read CouchDB Stats from one or more servers"
}

func (*CouchDB) SampleConfig() string {
	return `
  ## Works with CouchDB stats endpoints out of the box
  ## Multiple HOSTs from which to read CouchDB stats:
  hosts = ["http://localhost:8086/_stats"]
`
}

func (c *CouchDB) Gather(accumulator telegraf.Accumulator) error {
	errorChannel := make(chan error, len(c.HOSTs))
	var wg sync.WaitGroup
	for _, u := range c.HOSTs {
		wg.Add(1)
		go func(host string) {
			defer wg.Done()
			if err := c.fetchAndInsertData(accumulator, host); err != nil {
				errorChannel <- fmt.Errorf("[host=%s]: %s", host, err)
			}
		}(u)
	}

	wg.Wait()
	close(errorChannel)

	// If there weren't any errors, we can return nil now.
	if len(errorChannel) == 0 {
		return nil
	}

	// There were errors, so join them all together as one big error.
	errorStrings := make([]string, 0, len(errorChannel))
	for err := range errorChannel {
		errorStrings = append(errorStrings, err.Error())
	}

	return errors.New(strings.Join(errorStrings, "\n"))
}

var tr = &http.Transport{
	ResponseHeaderTimeout: time.Duration(3 * time.Second),
}

var client = &http.Client{
	Transport: tr,
	Timeout:   time.Duration(4 * time.Second),
}

func (c *CouchDB) fetchAndInsertData(accumulator telegraf.Accumulator, host string) error {

	response, error := client.Get(host)
	if error != nil {
		return error
	}
	defer response.Body.Close()

	var stats Stats
	decoder := json.NewDecoder(response.Body)
	decoder.Decode(&stats)

	fields := map[string]interface{}{}

	// CouchDB meta stats:
	c.MapCopy(fields, c.generateFields("couchdb_auth_cache_misses", stats.Couchdb.AuthCacheMisses))
	c.MapCopy(fields, c.generateFields("couchdb_database_writes", stats.Couchdb.DatabaseWrites))
	c.MapCopy(fields, c.generateFields("couchdb_open_databases", stats.Couchdb.OpenDatabases))
	c.MapCopy(fields, c.generateFields("couchdb_auth_cache_hits", stats.Couchdb.AuthCacheHits))
	c.MapCopy(fields, c.generateFields("couchdb_request_time", stats.Couchdb.RequestTime))
	c.MapCopy(fields, c.generateFields("couchdb_database_reads", stats.Couchdb.DatabaseReads))
	c.MapCopy(fields, c.generateFields("couchdb_open_os_files", stats.Couchdb.OpenOsFiles))

	// http request methods stats:
	c.MapCopy(fields, c.generateFields("httpd_request_methods_put", stats.HttpdRequestMethods.Put))
	c.MapCopy(fields, c.generateFields("httpd_request_methods_get", stats.HttpdRequestMethods.Get))
	c.MapCopy(fields, c.generateFields("httpd_request_methods_copy", stats.HttpdRequestMethods.Copy))
	c.MapCopy(fields, c.generateFields("httpd_request_methods_delete", stats.HttpdRequestMethods.Delete))
	c.MapCopy(fields, c.generateFields("httpd_request_methods_post", stats.HttpdRequestMethods.Post))
	c.MapCopy(fields, c.generateFields("httpd_request_methods_head", stats.HttpdRequestMethods.Head))

	// status code stats:
	c.MapCopy(fields, c.generateFields("httpd_status_codes_200", stats.HttpdStatusCodes.Status200))
	c.MapCopy(fields, c.generateFields("httpd_status_codes_201", stats.HttpdStatusCodes.Status201))
	c.MapCopy(fields, c.generateFields("httpd_status_codes_202", stats.HttpdStatusCodes.Status202))
	c.MapCopy(fields, c.generateFields("httpd_status_codes_301", stats.HttpdStatusCodes.Status301))
	c.MapCopy(fields, c.generateFields("httpd_status_codes_304", stats.HttpdStatusCodes.Status304))
	c.MapCopy(fields, c.generateFields("httpd_status_codes_400", stats.HttpdStatusCodes.Status400))
	c.MapCopy(fields, c.generateFields("httpd_status_codes_401", stats.HttpdStatusCodes.Status401))
	c.MapCopy(fields, c.generateFields("httpd_status_codes_403", stats.HttpdStatusCodes.Status403))
	c.MapCopy(fields, c.generateFields("httpd_status_codes_404", stats.HttpdStatusCodes.Status404))
	c.MapCopy(fields, c.generateFields("httpd_status_codes_405", stats.HttpdStatusCodes.Status405))
	c.MapCopy(fields, c.generateFields("httpd_status_codes_409", stats.HttpdStatusCodes.Status409))
	c.MapCopy(fields, c.generateFields("httpd_status_codes_412", stats.HttpdStatusCodes.Status412))
	c.MapCopy(fields, c.generateFields("httpd_status_codes_500", stats.HttpdStatusCodes.Status500))

	// httpd stats:
	c.MapCopy(fields, c.generateFields("httpd_clients_requesting_changes", stats.Httpd.ClientsRequestingChanges))
	c.MapCopy(fields, c.generateFields("httpd_temporary_view_reads", stats.Httpd.TemporaryViewReads))
	c.MapCopy(fields, c.generateFields("httpd_requests", stats.Httpd.Requests))
	c.MapCopy(fields, c.generateFields("httpd_bulk_requests", stats.Httpd.BulkRequests))
	c.MapCopy(fields, c.generateFields("httpd_view_reads", stats.Httpd.ViewReads))

	tags := map[string]string{
		"server": host,
	}
	accumulator.AddFields("couchdb", fields, tags)
	return nil
}

func (*CouchDB) MapCopy(dst, src interface{}) {
	dv, sv := reflect.ValueOf(dst), reflect.ValueOf(src)
	for _, k := range sv.MapKeys() {
		dv.SetMapIndex(k, sv.MapIndex(k))
	}
}

func (*CouchDB) safeCheck(value interface{}) interface{} {
	if value == nil {
		return 0.0
	}
	return value
}

func (c *CouchDB) generateFields(prefix string, obj metaData) map[string]interface{} {
	fields := map[string]interface{}{
		prefix + "_current": c.safeCheck(obj.Current),
		prefix + "_sum":     c.safeCheck(obj.Sum),
		prefix + "_mean":    c.safeCheck(obj.Mean),
		prefix + "_stddev":  c.safeCheck(obj.Stddev),
		prefix + "_min":     c.safeCheck(obj.Min),
		prefix + "_max":     c.safeCheck(obj.Max),
	}
	return fields
}

func init() {
	inputs.Add("couchdb", func() telegraf.Input {
		return &CouchDB{}
	})
}
```
320 plugins/inputs/couchdb/couchdb_test.go Normal file
@@ -0,0 +1,320 @@
```go
package couchdb_test

import (
	"github.com/influxdata/telegraf/plugins/inputs/couchdb"
	"github.com/influxdata/telegraf/testutil"
	"github.com/stretchr/testify/require"
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestBasic(t *testing.T) {
	js := `
{
  "couchdb": {
    "auth_cache_misses": {
      "description": "number of authentication cache misses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "database_writes": {
      "description": "number of times a database was changed",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "open_databases": {
      "description": "number of open databases",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "auth_cache_hits": {
      "description": "number of authentication cache hits",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "request_time": {
      "description": "length of a request inside CouchDB without MochiWeb",
      "current": 18.0,
      "sum": 18.0,
      "mean": 18.0,
      "stddev": null,
      "min": 18.0,
      "max": 18.0
    },
    "database_reads": {
      "description": "number of times a document was read from a database",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "open_os_files": {
      "description": "number of file descriptors CouchDB has open",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    }
  },
  "httpd_request_methods": {
    "PUT": {
      "description": "number of HTTP PUT requests",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "GET": {
      "description": "number of HTTP GET requests",
      "current": 2.0,
      "sum": 2.0,
      "mean": 0.25,
      "stddev": 0.70699999999999996181,
      "min": 0,
      "max": 2
    },
    "COPY": {
      "description": "number of HTTP COPY requests",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "DELETE": {
      "description": "number of HTTP DELETE requests",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "POST": {
      "description": "number of HTTP POST requests",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "HEAD": {
      "description": "number of HTTP HEAD requests",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    }
  },
  "httpd_status_codes": {
    "403": {
      "description": "number of HTTP 403 Forbidden responses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "202": {
      "description": "number of HTTP 202 Accepted responses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "401": {
      "description": "number of HTTP 401 Unauthorized responses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "409": {
      "description": "number of HTTP 409 Conflict responses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "200": {
      "description": "number of HTTP 200 OK responses",
      "current": 1.0,
      "sum": 1.0,
      "mean": 0.125,
      "stddev": 0.35399999999999998135,
      "min": 0,
      "max": 1
    },
    "405": {
      "description": "number of HTTP 405 Method Not Allowed responses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "400": {
      "description": "number of HTTP 400 Bad Request responses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "201": {
      "description": "number of HTTP 201 Created responses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "404": {
      "description": "number of HTTP 404 Not Found responses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "500": {
      "description": "number of HTTP 500 Internal Server Error responses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "412": {
      "description": "number of HTTP 412 Precondition Failed responses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "301": {
      "description": "number of HTTP 301 Moved Permanently responses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "304": {
      "description": "number of HTTP 304 Not Modified responses",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    }
  },
  "httpd": {
    "clients_requesting_changes": {
      "description": "number of clients for continuous _changes",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "temporary_view_reads": {
      "description": "number of temporary view reads",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "requests": {
      "description": "number of HTTP requests",
      "current": 2.0,
      "sum": 2.0,
      "mean": 0.25,
      "stddev": 0.70699999999999996181,
      "min": 0,
      "max": 2
    },
    "bulk_requests": {
      "description": "number of bulk requests",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    },
    "view_reads": {
      "description": "number of view reads",
      "current": null,
      "sum": null,
      "mean": null,
      "stddev": null,
      "min": null,
      "max": null
    }
  }
}
`
	fakeServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Path == "/_stats" {
			_, _ = w.Write([]byte(js))
		} else {
			w.WriteHeader(http.StatusNotFound)
		}
	}))
	defer fakeServer.Close()

	plugin := &couchdb.CouchDB{
		HOSTs: []string{fakeServer.URL + "/_stats"},
	}

	var acc testutil.Accumulator
	require.NoError(t, plugin.Gather(&acc))
}
```
```diff
@@ -9,7 +9,9 @@ import (
 	"strconv"
 	"strings"
 	"sync"
+	"time"
 
+	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/plugins/inputs"
 )
@@ -21,14 +23,15 @@ type Disque struct {
 }
 
 var sampleConfig = `
-  # An array of URI to gather stats about. Specify an ip or hostname
-  # with optional port and password. ie disque://localhost, disque://10.10.3.33:18832,
-  # 10.0.0.1:10000, etc.
-  #
-  # If no servers are specified, then localhost is used as the host.
+  ## An array of URI to gather stats about. Specify an ip or hostname
+  ## with optional port and password.
+  ## ie disque://localhost, disque://10.10.3.33:18832, 10.0.0.1:10000, etc.
+  ## If no servers are specified, then localhost is used as the host.
+
   servers = ["localhost"]
 `
 
+var defaultTimeout = 5 * time.Second
+
 func (r *Disque) SampleConfig() string {
 	return sampleConfig
 }
@@ -61,7 +64,7 @@ var ErrProtocolError = errors.New("disque protocol error")
 
 // Reads stats from all configured servers accumulates stats.
 // Returns one of the errors encountered while gather stats (if any).
-func (g *Disque) Gather(acc inputs.Accumulator) error {
+func (g *Disque) Gather(acc telegraf.Accumulator) error {
 	if len(g.Servers) == 0 {
 		url := &url.URL{
 			Host: ":7711",
@@ -98,7 +101,7 @@ func (g *Disque) Gather(acc inputs.Accumulator) error {
 
 const defaultPort = "7711"
 
-func (g *Disque) gatherServer(addr *url.URL, acc inputs.Accumulator) error {
+func (g *Disque) gatherServer(addr *url.URL, acc telegraf.Accumulator) error {
 	if g.c == nil {
 
 		_, _, err := net.SplitHostPort(addr.Host)
@@ -106,7 +109,7 @@ func (g *Disque) gatherServer(addr *url.URL, acc inputs.Accumulator) error {
 			addr.Host = addr.Host + ":" + defaultPort
 		}
 
-		c, err := net.Dial("tcp", addr.Host)
+		c, err := net.DialTimeout("tcp", addr.Host, defaultTimeout)
 		if err != nil {
 			return fmt.Errorf("Unable to connect to disque server '%s': %s", addr.Host, err)
 		}
@@ -131,6 +134,9 @@ func (g *Disque) gatherServer(addr *url.URL, acc inputs.Accumulator) error {
 		g.c = c
 	}
 
+	// Extend connection
+	g.c.SetDeadline(time.Now().Add(defaultTimeout))
+
 	g.c.Write([]byte("info\r\n"))
 
 	r := bufio.NewReader(g.c)
@@ -156,7 +162,7 @@ func (g *Disque) gatherServer(addr *url.URL, acc inputs.Accumulator) error {
 	var read int
 
 	fields := make(map[string]interface{})
-	tags := map[string]string{"host": addr.String()}
+	tags := map[string]string{"disque_host": addr.String()}
 	for read < sz {
 		line, err := r.ReadString('\n')
 		if err != nil {
@@ -198,7 +204,7 @@ func (g *Disque) gatherServer(addr *url.URL, acc inputs.Accumulator) error {
 	}
 
 func init() {
-	inputs.Add("disque", func() inputs.Input {
+	inputs.Add("disque", func() telegraf.Input {
 		return &Disque{}
 	})
 }
```
51 plugins/inputs/dns_query/README.md Normal file
@@ -0,0 +1,51 @@
# DNS Query Input Plugin

The DNS plugin gathers DNS query times in milliseconds - like [Dig](https://en.wikipedia.org/wiki/Dig_\(command\))

### Configuration:

```
# Sample Config:
[[inputs.dns_query]]
  ## servers to query
  servers = ["8.8.8.8"] # required

  ## Domains or subdomains to query. "." (root) is default
  domains = ["."] # optional

  ## Query record type. Possible values: A, AAAA, ANY, CNAME, MX, NS, PTR, SOA, SPF, SRV, TXT. Default is "NS"
  record_type = "A" # optional

  ## Dns server port. 53 is default
  port = 53 # optional

  ## Query timeout in seconds. Default is 2 seconds
  timeout = 2 # optional
```

To query more than one record type, configure the plugin once per type:

```
[[inputs.dns_query]]
  domains = ["mjasion.pl"]
  servers = ["8.8.8.8", "8.8.4.4"]
  record_type = "A"

[[inputs.dns_query]]
  domains = ["mjasion.pl"]
  servers = ["8.8.8.8", "8.8.4.4"]
  record_type = "MX"
```

### Tags:

- server
- domain
- record_type

### Example output:

```
./telegraf -config telegraf.conf -test -input-filter dns_query -test
> dns_query,domain=mjasion.pl,record_type=A,server=8.8.8.8 query_time_ms=67.189842 1456082743585760680
```
||||||
160	plugins/inputs/dns_query/dns_query.go (new file)
@@ -0,0 +1,160 @@
package dns_query

import (
	"fmt"
	"net"
	"strconv"
	"time"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/inputs"
	"github.com/miekg/dns"
)

type DnsQuery struct {
	// Domains or subdomains to query
	Domains []string

	// Servers to query
	Servers []string

	// Record type
	RecordType string `toml:"record_type"`

	// DNS server port number
	Port int

	// DNS query timeout in seconds. 0 means no timeout
	Timeout int
}

var sampleConfig = `
  ## servers to query
  servers = ["8.8.8.8"] # required

  ## Domains or subdomains to query. "." (root) is default
  domains = ["."] # optional

  ## Query record type. Default is "A"
  ## Possible values: A, AAAA, CNAME, MX, NS, PTR, TXT, SOA, SPF, SRV.
  record_type = "A" # optional

  ## DNS server port. 53 is default
  port = 53 # optional

  ## Query timeout in seconds. Default is 2 seconds
  timeout = 2 # optional
`

func (d *DnsQuery) SampleConfig() string {
	return sampleConfig
}

func (d *DnsQuery) Description() string {
	return "Query given DNS server and gives statistics"
}

func (d *DnsQuery) Gather(acc telegraf.Accumulator) error {
	d.setDefaultValues()
	for _, domain := range d.Domains {
		for _, server := range d.Servers {
			dnsQueryTime, err := d.getDnsQueryTime(domain, server)
			if err != nil {
				return err
			}
			tags := map[string]string{
				"server":      server,
				"domain":      domain,
				"record_type": d.RecordType,
			}

			fields := map[string]interface{}{"query_time_ms": dnsQueryTime}
			acc.AddFields("dns_query", fields, tags)
		}
	}

	return nil
}

func (d *DnsQuery) setDefaultValues() {
	if len(d.RecordType) == 0 {
		d.RecordType = "NS"
	}

	if len(d.Domains) == 0 {
		d.Domains = []string{"."}
		d.RecordType = "NS"
	}

	if d.Port == 0 {
		d.Port = 53
	}

	if d.Timeout == 0 {
		d.Timeout = 2
	}
}

func (d *DnsQuery) getDnsQueryTime(domain string, server string) (float64, error) {
	dnsQueryTime := float64(0)

	c := new(dns.Client)
	c.ReadTimeout = time.Duration(d.Timeout) * time.Second

	m := new(dns.Msg)
	recordType, err := d.parseRecordType()
	if err != nil {
		return dnsQueryTime, err
	}
	m.SetQuestion(dns.Fqdn(domain), recordType)
	m.RecursionDesired = true

	r, rtt, err := c.Exchange(m, net.JoinHostPort(server, strconv.Itoa(d.Port)))
	if err != nil {
		return dnsQueryTime, err
	}
	if r.Rcode != dns.RcodeSuccess {
		return dnsQueryTime, fmt.Errorf("Invalid answer name %s after %s query for %s", domain, d.RecordType, domain)
	}
	dnsQueryTime = float64(rtt.Nanoseconds()) / 1e6
	return dnsQueryTime, nil
}

func (d *DnsQuery) parseRecordType() (uint16, error) {
	var recordType uint16
	var err error

	switch d.RecordType {
	case "A":
		recordType = dns.TypeA
	case "AAAA":
		recordType = dns.TypeAAAA
	case "ANY":
		recordType = dns.TypeANY
	case "CNAME":
		recordType = dns.TypeCNAME
	case "MX":
		recordType = dns.TypeMX
	case "NS":
		recordType = dns.TypeNS
	case "PTR":
		recordType = dns.TypePTR
	case "SOA":
		recordType = dns.TypeSOA
	case "SPF":
		recordType = dns.TypeSPF
	case "SRV":
		recordType = dns.TypeSRV
	case "TXT":
		recordType = dns.TypeTXT
	default:
		err = fmt.Errorf("Record type %s not recognized", d.RecordType)
	}

	return recordType, err
}

func init() {
	inputs.Add("dns_query", func() telegraf.Input {
		return &DnsQuery{}
	})
}
210	plugins/inputs/dns_query/dns_query_test.go (new file)
@@ -0,0 +1,210 @@
package dns_query

import (
	"testing"
	"time"

	"github.com/influxdata/telegraf/testutil"

	"github.com/miekg/dns"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

var servers = []string{"8.8.8.8"}
var domains = []string{"google.com"}

func TestGathering(t *testing.T) {
	if testing.Short() {
		t.Skip("Skipping network-dependent test in short mode.")
	}
	var dnsConfig = DnsQuery{
		Servers: servers,
		Domains: domains,
	}
	var acc testutil.Accumulator

	err := dnsConfig.Gather(&acc)
	assert.NoError(t, err)
	metric, ok := acc.Get("dns_query")
	require.True(t, ok)
	queryTime, _ := metric.Fields["query_time_ms"].(float64)

	assert.NotEqual(t, 0, queryTime)
}

func TestGatheringMxRecord(t *testing.T) {
	if testing.Short() {
		t.Skip("Skipping network-dependent test in short mode.")
	}
	var dnsConfig = DnsQuery{
		Servers: servers,
		Domains: domains,
	}
	var acc testutil.Accumulator
	dnsConfig.RecordType = "MX"

	err := dnsConfig.Gather(&acc)
	assert.NoError(t, err)
	metric, ok := acc.Get("dns_query")
	require.True(t, ok)
	queryTime, _ := metric.Fields["query_time_ms"].(float64)

	assert.NotEqual(t, 0, queryTime)
}

func TestGatheringRootDomain(t *testing.T) {
	if testing.Short() {
		t.Skip("Skipping network-dependent test in short mode.")
	}
	var dnsConfig = DnsQuery{
		Servers:    servers,
		Domains:    []string{"."},
		RecordType: "MX",
	}
	var acc testutil.Accumulator
	tags := map[string]string{
		"server":      "8.8.8.8",
		"domain":      ".",
		"record_type": "MX",
	}
	fields := map[string]interface{}{}

	err := dnsConfig.Gather(&acc)
	assert.NoError(t, err)
	metric, ok := acc.Get("dns_query")
	require.True(t, ok)
	queryTime, _ := metric.Fields["query_time_ms"].(float64)

	fields["query_time_ms"] = queryTime
	acc.AssertContainsTaggedFields(t, "dns_query", fields, tags)
}

func TestMetricContainsServerAndDomainAndRecordTypeTags(t *testing.T) {
	if testing.Short() {
		t.Skip("Skipping network-dependent test in short mode.")
	}
	var dnsConfig = DnsQuery{
		Servers: servers,
		Domains: domains,
	}
	var acc testutil.Accumulator
	tags := map[string]string{
		"server":      "8.8.8.8",
		"domain":      "google.com",
		"record_type": "NS",
	}
	fields := map[string]interface{}{}

	err := dnsConfig.Gather(&acc)
	assert.NoError(t, err)
	metric, ok := acc.Get("dns_query")
	require.True(t, ok)
	queryTime, _ := metric.Fields["query_time_ms"].(float64)

	fields["query_time_ms"] = queryTime
	acc.AssertContainsTaggedFields(t, "dns_query", fields, tags)
}

func TestGatheringTimeout(t *testing.T) {
	if testing.Short() {
		t.Skip("Skipping network-dependent test in short mode.")
	}
	var dnsConfig = DnsQuery{
		Servers: servers,
		Domains: domains,
	}
	var acc testutil.Accumulator
	dnsConfig.Port = 60054
	dnsConfig.Timeout = 1
	var err error

	channel := make(chan error, 1)
	go func() {
		channel <- dnsConfig.Gather(&acc)
	}()
	select {
	case res := <-channel:
		err = res
	case <-time.After(time.Second * 2):
		err = nil
	}

	assert.Error(t, err)
	assert.Contains(t, err.Error(), "i/o timeout")
}

func TestSettingDefaultValues(t *testing.T) {
	dnsConfig := DnsQuery{}

	dnsConfig.setDefaultValues()

	assert.Equal(t, []string{"."}, dnsConfig.Domains, "Default domain not equal \".\"")
	assert.Equal(t, "NS", dnsConfig.RecordType, "Default record type not equal 'NS'")
	assert.Equal(t, 53, dnsConfig.Port, "Default port number not equal 53")
	assert.Equal(t, 2, dnsConfig.Timeout, "Default timeout not equal 2")

	dnsConfig = DnsQuery{Domains: []string{"."}}

	dnsConfig.setDefaultValues()

	assert.Equal(t, "NS", dnsConfig.RecordType, "Default record type not equal 'NS'")
}

func TestRecordTypeParser(t *testing.T) {
	var dnsConfig = DnsQuery{}
	var recordType uint16

	dnsConfig.RecordType = "A"
	recordType, _ = dnsConfig.parseRecordType()
	assert.Equal(t, dns.TypeA, recordType)

	dnsConfig.RecordType = "AAAA"
	recordType, _ = dnsConfig.parseRecordType()
	assert.Equal(t, dns.TypeAAAA, recordType)

	dnsConfig.RecordType = "ANY"
	recordType, _ = dnsConfig.parseRecordType()
	assert.Equal(t, dns.TypeANY, recordType)

	dnsConfig.RecordType = "CNAME"
	recordType, _ = dnsConfig.parseRecordType()
	assert.Equal(t, dns.TypeCNAME, recordType)

	dnsConfig.RecordType = "MX"
	recordType, _ = dnsConfig.parseRecordType()
	assert.Equal(t, dns.TypeMX, recordType)

	dnsConfig.RecordType = "NS"
	recordType, _ = dnsConfig.parseRecordType()
	assert.Equal(t, dns.TypeNS, recordType)

	dnsConfig.RecordType = "PTR"
	recordType, _ = dnsConfig.parseRecordType()
	assert.Equal(t, dns.TypePTR, recordType)

	dnsConfig.RecordType = "SOA"
	recordType, _ = dnsConfig.parseRecordType()
	assert.Equal(t, dns.TypeSOA, recordType)

	dnsConfig.RecordType = "SPF"
	recordType, _ = dnsConfig.parseRecordType()
	assert.Equal(t, dns.TypeSPF, recordType)

	dnsConfig.RecordType = "SRV"
	recordType, _ = dnsConfig.parseRecordType()
	assert.Equal(t, dns.TypeSRV, recordType)

	dnsConfig.RecordType = "TXT"
	recordType, _ = dnsConfig.parseRecordType()
	assert.Equal(t, dns.TypeTXT, recordType)
}

func TestRecordTypeParserError(t *testing.T) {
	var dnsConfig = DnsQuery{}
	var err error

	dnsConfig.RecordType = "nil"
	_, err = dnsConfig.parseRecordType()
	assert.Error(t, err)
}
@@ -5,11 +5,11 @@ docker containers. You can read Docker's documentation for their remote API
 [here](https://docs.docker.com/engine/reference/api/docker_remote_api_v1.20/#get-container-stats-based-on-resource-usage)
 
 The docker plugin uses the excellent
-[fsouza go-dockerclient](https://github.com/fsouza/go-dockerclient) library to
+[docker engine-api](https://github.com/docker/engine-api) library to
 gather stats. Documentation for the library can be found
-[here](https://godoc.org/github.com/fsouza/go-dockerclient) and documentation
+[here](https://godoc.org/github.com/docker/engine-api) and documentation
 for the stat structure can be found
-[here](https://godoc.org/github.com/fsouza/go-dockerclient#Stats)
+[here](https://godoc.org/github.com/docker/engine-api/types#Stats)
 
 ### Configuration:
 
@@ -29,10 +29,10 @@ for the stat structure can be found
 Every effort was made to preserve the names based on the JSON response from the
 docker API.
 
-Note that the docker_cpu metric may appear multiple times per collection, based
-on the availability of per-cpu stats on your system.
+Note that the docker_container_cpu metric may appear multiple times per collection,
+based on the availability of per-cpu stats on your system.
 
-- docker_mem
+- docker_container_mem
     - total_pgmafault
     - cache
     - mapped_file
@@ -66,7 +66,8 @@ on the availability of per-cpu stats on your system.
     - usage
     - failcnt
     - limit
-- docker_cpu
+    - container_id
+- docker_container_cpu
     - throttling_periods
     - throttling_throttled_periods
     - throttling_throttled_time
@@ -74,7 +75,9 @@ on the availability of per-cpu stats on your system.
     - usage_in_usermode
     - usage_system
     - usage_total
-- docker_net
+    - usage_percent
+    - container_id
+- docker_container_net
     - rx_dropped
     - rx_bytes
     - rx_errors
@@ -83,7 +86,8 @@ on the availability of per-cpu stats on your system.
     - rx_packets
     - tx_errors
     - tx_bytes
-- docker_blkio
+    - container_id
+- docker_container_blkio
     - io_service_bytes_recursive_async
     - io_service_bytes_recursive_read
     - io_service_bytes_recursive_sync
@@ -94,18 +98,51 @@ on the availability of per-cpu stats on your system.
     - io_serviced_recursive_sync
     - io_serviced_recursive_total
     - io_serviced_recursive_write
+    - container_id
+- docker_
+    - n_used_file_descriptors
+    - n_cpus
+    - n_containers
+    - n_images
+    - n_goroutines
+    - n_listener_events
+    - memory_total
+    - pool_blocksize
+- docker_data
+    - available
+    - total
+    - used
+- docker_metadata
+    - available
+    - total
+    - used
 
 ### Tags:
 
-- All stats have the following tags:
-  - cont_id (container ID)
-  - cont_image (container image)
-  - cont_name (container name)
-- docker_cpu specific:
+- docker (memory_total)
+  - unit=bytes
+- docker (pool_blocksize)
+  - unit=bytes
+- docker_data
+  - unit=bytes
+- docker_metadata
+  - unit=bytes
+- docker_container_mem specific:
+  - container_image
+  - container_name
+- docker_container_cpu specific:
+  - container_image
+  - container_name
   - cpu
-- docker_net specific:
+- docker_container_net specific:
+  - container_image
+  - container_name
   - network
-- docker_blkio specific:
+- docker_container_blkio specific:
+  - container_image
+  - container_name
   - device
 
 ### Example Output:
@@ -113,8 +150,18 @@ on the availability of per-cpu stats on your system.
 ```
 % ./telegraf -config ~/ws/telegraf.conf -input-filter docker -test
 * Plugin: docker, Collection 1
-> docker_mem,cont_id=5705ba8ed8fb47527410653d60a8bb2f3af5e62372297c419022a3cc6d45d848,\
-cont_image=spotify/kafka,cont_name=kafka \
+> docker n_cpus=8i 1456926671065383978
+> docker n_used_file_descriptors=15i 1456926671065383978
+> docker n_containers=7i 1456926671065383978
+> docker n_images=152i 1456926671065383978
+> docker n_goroutines=36i 1456926671065383978
+> docker n_listener_events=0i 1456926671065383978
+> docker,unit=bytes memory_total=18935443456i 1456926671065383978
+> docker,unit=bytes pool_blocksize=65540i 1456926671065383978
+> docker_data,unit=bytes available=24340000000i,total=107400000000i,used=14820000000i 1456926671065383978
+> docker_metadata,unit=bytes available=2126999999i,total=2146999999i,used=20420000i 145692667106538
+> docker_container_mem,
+container_image=spotify/kafka,container_name=kafka \
 active_anon=52568064i,active_file=6926336i,cache=12038144i,fail_count=0i,\
 hierarchical_memory_limit=9223372036854771712i,inactive_anon=52707328i,\
 inactive_file=5111808i,limit=1044578304i,mapped_file=10301440i,\
@@ -125,21 +172,21 @@ total_inactive_file=5111808i,total_mapped_file=10301440i,total_pgfault=63762i,\
 total_pgmafault=0i,total_pgpgin=73355i,total_pgpgout=45736i,\
 total_rss=105275392i,total_rss_huge=4194304i,total_unevictable=0i,\
 total_writeback=0i,unevictable=0i,usage=117440512i,writeback=0i 1453409536840126713
-> docker_cpu,cont_id=5705ba8ed8fb47527410653d60a8bb2f3af5e62372297c419022a3cc6d45d848,\
-cont_image=spotify/kafka,cont_name=kafka,cpu=cpu-total \
+> docker_container_cpu,
+container_image=spotify/kafka,container_name=kafka,cpu=cpu-total \
 throttling_periods=0i,throttling_throttled_periods=0i,\
 throttling_throttled_time=0i,usage_in_kernelmode=440000000i,\
 usage_in_usermode=2290000000i,usage_system=84795360000000i,\
 usage_total=6628208865i 1453409536840126713
-> docker_cpu,cont_id=5705ba8ed8fb47527410653d60a8bb2f3af5e62372297c419022a3cc6d45d848,\
-cont_image=spotify/kafka,cont_name=kafka,cpu=cpu0 \
+> docker_container_cpu,
+container_image=spotify/kafka,container_name=kafka,cpu=cpu0 \
 usage_total=6628208865i 1453409536840126713
-> docker_net,cont_id=5705ba8ed8fb47527410653d60a8bb2f3af5e62372297c419022a3cc6d45d848,\
-cont_image=spotify/kafka,cont_name=kafka,network=eth0 \
+> docker_container_net,\
+container_image=spotify/kafka,container_name=kafka,network=eth0 \
 rx_bytes=7468i,rx_dropped=0i,rx_errors=0i,rx_packets=94i,tx_bytes=946i,\
 tx_dropped=0i,tx_errors=0i,tx_packets=13i 1453409536840126713
-> docker_blkio,cont_id=5705ba8ed8fb47527410653d60a8bb2f3af5e62372297c419022a3cc6d45d848,\
-cont_image=spotify/kafka,cont_name=kafka,device=8:0 \
+> docker_container_blkio,
+container_image=spotify/kafka,container_name=kafka,device=8:0 \
 io_service_bytes_recursive_async=80216064i,io_service_bytes_recursive_read=79925248i,\
 io_service_bytes_recursive_sync=77824i,io_service_bytes_recursive_total=80293888i,\
 io_service_bytes_recursive_write=368640i,io_serviced_recursive_async=6562i,\
@@ -1,54 +1,91 @@
|
|||||||
package system
|
package system
|
||||||
|
|
||||||
import (
|
import (
|
||||||
|
"encoding/json"
|
||||||
"fmt"
|
"fmt"
|
||||||
|
"io"
|
||||||
|
"log"
|
||||||
|
"regexp"
|
||||||
|
"strconv"
|
||||||
"strings"
|
"strings"
|
||||||
"sync"
|
"sync"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
"github.com/influxdata/telegraf/plugins/inputs"
|
"golang.org/x/net/context"
|
||||||
|
|
||||||
"github.com/fsouza/go-dockerclient"
|
"github.com/docker/engine-api/client"
|
||||||
|
"github.com/docker/engine-api/types"
|
||||||
|
"github.com/influxdata/telegraf"
|
||||||
|
"github.com/influxdata/telegraf/internal"
|
||||||
|
"github.com/influxdata/telegraf/plugins/inputs"
|
||||||
)
|
)
|
||||||
|
|
||||||
|
// Docker object
|
||||||
type Docker struct {
|
type Docker struct {
|
||||||
Endpoint string
|
Endpoint string
|
||||||
ContainerNames []string
|
ContainerNames []string
|
||||||
|
Timeout internal.Duration
|
||||||
|
|
||||||
client *docker.Client
|
client DockerClient
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// DockerClient interface, useful for testing
|
||||||
|
type DockerClient interface {
|
||||||
|
Info(ctx context.Context) (types.Info, error)
|
||||||
|
ContainerList(ctx context.Context, options types.ContainerListOptions) ([]types.Container, error)
|
||||||
|
ContainerStats(ctx context.Context, containerID string, stream bool) (io.ReadCloser, error)
|
||||||
|
}
|
||||||
|
|
||||||
|
// KB, MB, GB, TB, PB...human friendly
|
||||||
|
const (
|
||||||
|
KB = 1000
|
||||||
|
MB = 1000 * KB
|
||||||
|
GB = 1000 * MB
|
||||||
|
TB = 1000 * GB
|
||||||
|
PB = 1000 * TB
|
||||||
|
)
|
||||||
|
|
||||||
|
var (
|
||||||
|
sizeRegex = regexp.MustCompile(`^(\d+(\.\d+)*) ?([kKmMgGtTpP])?[bB]?$`)
|
||||||
|
)
|
||||||
|
|
||||||
var sampleConfig = `
|
var sampleConfig = `
|
||||||
# Docker Endpoint
|
## Docker Endpoint
|
||||||
# To use TCP, set endpoint = "tcp://[ip]:[port]"
|
## To use TCP, set endpoint = "tcp://[ip]:[port]"
|
||||||
# To use environment variables (ie, docker-machine), set endpoint = "ENV"
|
## To use environment variables (ie, docker-machine), set endpoint = "ENV"
|
||||||
endpoint = "unix:///var/run/docker.sock"
|
endpoint = "unix:///var/run/docker.sock"
|
||||||
# Only collect metrics for these containers, collect all if empty
|
## Only collect metrics for these containers, collect all if empty
|
||||||
container_names = []
|
container_names = []
|
||||||
|
## Timeout for docker list, info, and stats commands
|
||||||
|
timeout = "5s"
|
||||||
`
|
`
|
||||||
|
|
||||||
|
// Description returns input description
|
||||||
func (d *Docker) Description() string {
|
func (d *Docker) Description() string {
|
||||||
return "Read metrics about docker containers"
|
return "Read metrics about docker containers"
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// SampleConfig prints sampleConfig
|
||||||
func (d *Docker) SampleConfig() string { return sampleConfig }
|
func (d *Docker) SampleConfig() string { return sampleConfig }
|
||||||
|
|
||||||
func (d *Docker) Gather(acc inputs.Accumulator) error {
|
// Gather starts stats collection
|
||||||
|
func (d *Docker) Gather(acc telegraf.Accumulator) error {
|
||||||
if d.client == nil {
|
if d.client == nil {
|
||||||
var c *docker.Client
|
var c *client.Client
|
||||||
var err error
|
var err error
|
||||||
|
defaultHeaders := map[string]string{"User-Agent": "engine-api-cli-1.0"}
|
||||||
if d.Endpoint == "ENV" {
|
if d.Endpoint == "ENV" {
|
||||||
c, err = docker.NewClientFromEnv()
|
c, err = client.NewEnvClient()
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
} else if d.Endpoint == "" {
|
} else if d.Endpoint == "" {
|
||||||
c, err = docker.NewClient("unix:///var/run/docker.sock")
|
c, err = client.NewClient("unix:///var/run/docker.sock", "", nil, defaultHeaders)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
} else {
|
} else {
|
||||||
c, err = docker.NewClient(d.Endpoint)
|
c, err = client.NewClient(d.Endpoint, "", nil, defaultHeaders)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
@@ -56,20 +93,31 @@ func (d *Docker) Gather(acc inputs.Accumulator) error {
|
|||||||
d.client = c
|
d.client = c
|
||||||
}
|
}
|
||||||
|
|
||||||
opts := docker.ListContainersOptions{}
|
// Get daemon info
|
||||||
containers, err := d.client.ListContainers(opts)
|
err := d.gatherInfo(acc)
|
||||||
|
if err != nil {
|
||||||
|
fmt.Println(err.Error())
|
||||||
|
}
|
||||||
|
|
||||||
|
// List containers
|
||||||
|
opts := types.ContainerListOptions{}
|
||||||
|
ctx, cancel := context.WithTimeout(context.Background(), d.Timeout.Duration)
|
||||||
|
defer cancel()
|
||||||
|
containers, err := d.client.ContainerList(ctx, opts)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Get container data
|
||||||
var wg sync.WaitGroup
|
var wg sync.WaitGroup
|
||||||
wg.Add(len(containers))
|
wg.Add(len(containers))
|
||||||
for _, container := range containers {
|
for _, container := range containers {
|
||||||
go func(c docker.APIContainers) {
|
go func(c types.Container) {
|
||||||
defer wg.Done()
|
defer wg.Done()
|
||||||
err := d.gatherContainer(c, acc)
|
err := d.gatherContainer(c, acc)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
fmt.Println(err.Error())
|
log.Printf("Error gathering container %s stats: %s\n",
|
||||||
|
c.Names, err.Error())
|
||||||
}
|
}
|
||||||
}(container)
|
}(container)
|
||||||
}
|
}
|
||||||
@@ -78,10 +126,80 @@ func (d *Docker) Gather(acc inputs.Accumulator) error {
|
|||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func (d *Docker) gatherInfo(acc telegraf.Accumulator) error {
|
||||||
|
// Init vars
|
||||||
|
dataFields := make(map[string]interface{})
|
||||||
|
metadataFields := make(map[string]interface{})
|
||||||
|
now := time.Now()
|
||||||
|
// Get info from docker daemon
|
||||||
|
ctx, cancel := context.WithTimeout(context.Background(), d.Timeout.Duration)
|
||||||
|
defer cancel()
|
||||||
|
info, err := d.client.Info(ctx)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
|
||||||
|
fields := map[string]interface{}{
|
||||||
|
"n_cpus": info.NCPU,
|
||||||
|
"n_used_file_descriptors": info.NFd,
|
||||||
|
"n_containers": info.Containers,
|
||||||
|
"n_images": info.Images,
|
||||||
|
"n_goroutines": info.NGoroutines,
|
||||||
|
"n_listener_events": info.NEventsListener,
|
||||||
|
}
|
||||||
|
// Add metrics
|
||||||
|
acc.AddFields("docker",
|
||||||
|
fields,
|
||||||
|
nil,
|
||||||
|
now)
|
||||||
|
acc.AddFields("docker",
|
||||||
|
map[string]interface{}{"memory_total": info.MemTotal},
|
||||||
|
map[string]string{"unit": "bytes"},
|
||||||
|
now)
|
||||||
|
// Get storage metrics
|
||||||
|
for _, rawData := range info.DriverStatus {
|
||||||
|
// Try to convert string to int (bytes)
|
||||||
|
value, err := parseSize(rawData[1])
|
||||||
|
if err != nil {
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
name := strings.ToLower(strings.Replace(rawData[0], " ", "_", -1))
|
||||||
|
if name == "pool_blocksize" {
|
||||||
|
// pool blocksize
|
||||||
|
acc.AddFields("docker",
|
||||||
|
map[string]interface{}{"pool_blocksize": value},
|
||||||
|
map[string]string{"unit": "bytes"},
|
||||||
|
now)
|
||||||
|
} else if strings.HasPrefix(name, "data_space_") {
|
||||||
|
// data space
|
||||||
|
fieldName := strings.TrimPrefix(name, "data_space_")
|
||||||
|
dataFields[fieldName] = value
|
||||||
|
} else if strings.HasPrefix(name, "metadata_space_") {
|
||||||
|
// metadata space
|
||||||
|
fieldName := strings.TrimPrefix(name, "metadata_space_")
|
||||||
|
metadataFields[fieldName] = value
|
||||||
|
}
|
||||||
|
}
|
||||||
|
if len(dataFields) > 0 {
|
||||||
|
acc.AddFields("docker_data",
|
||||||
|
dataFields,
|
||||||
|
map[string]string{"unit": "bytes"},
|
||||||
|
now)
|
||||||
|
}
|
||||||
|
if len(metadataFields) > 0 {
|
||||||
|
acc.AddFields("docker_metadata",
|
||||||
|
metadataFields,
|
||||||
|
map[string]string{"unit": "bytes"},
|
||||||
|
now)
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
func (d *Docker) gatherContainer(
|
func (d *Docker) gatherContainer(
|
||||||
container docker.APIContainers,
|
container types.Container,
|
||||||
acc inputs.Accumulator,
|
acc telegraf.Accumulator,
|
||||||
) error {
|
) error {
|
||||||
|
var v *types.StatsJSON
|
||||||
// Parse container name
|
// Parse container name
|
||||||
cname := "unknown"
|
cname := "unknown"
|
||||||
if len(container.Names) > 0 {
|
if len(container.Names) > 0 {
|
||||||
@@ -90,9 +208,8 @@ func (d *Docker) gatherContainer(
 	}

 	tags := map[string]string{
-		"cont_id":    container.ID,
-		"cont_name":  cname,
-		"cont_image": container.Image,
+		"container_name":  cname,
+		"container_image": container.Image,
 	}
 	if len(d.ContainerNames) > 0 {
 		if !sliceContains(cname, d.ContainerNames) {
@@ -100,37 +217,36 @@ func (d *Docker) gatherContainer(
 		}
 	}

-	statChan := make(chan *docker.Stats)
-	done := make(chan bool)
-	statOpts := docker.StatsOptions{
-		Stream:  false,
-		ID:      container.ID,
-		Stats:   statChan,
-		Done:    done,
-		Timeout: time.Duration(time.Second * 5),
-	}
-
-	go func() {
-		d.client.Stats(statOpts)
-	}()
-
-	stat := <-statChan
-	close(done)
+	ctx, cancel := context.WithTimeout(context.Background(), d.Timeout.Duration)
+	defer cancel()
+	r, err := d.client.ContainerStats(ctx, container.ID, false)
+	if err != nil {
+		log.Printf("Error getting docker stats: %s\n", err.Error())
+	}
+	defer r.Close()
+	dec := json.NewDecoder(r)
+	if err = dec.Decode(&v); err != nil {
+		if err == io.EOF {
+			return nil
+		}
+		return fmt.Errorf("Error decoding: %s", err.Error())
+	}

 	// Add labels to tags
-	for k, v := range container.Labels {
-		tags[k] = v
+	for k, label := range container.Labels {
+		tags[k] = label
 	}

-	gatherContainerStats(stat, acc, tags)
+	gatherContainerStats(v, acc, tags, container.ID)

 	return nil
 }
 func gatherContainerStats(
-	stat *docker.Stats,
-	acc inputs.Accumulator,
+	stat *types.StatsJSON,
+	acc telegraf.Accumulator,
 	tags map[string]string,
+	id string,
 ) {
 	now := stat.Read

@@ -139,88 +255,114 @@ func gatherContainerStats(
 		"usage":      stat.MemoryStats.Usage,
 		"fail_count": stat.MemoryStats.Failcnt,
 		"limit":      stat.MemoryStats.Limit,
-		"total_pgmafault":           stat.MemoryStats.Stats.TotalPgmafault,
-		"cache":                     stat.MemoryStats.Stats.Cache,
-		"mapped_file":               stat.MemoryStats.Stats.MappedFile,
-		"total_inactive_file":       stat.MemoryStats.Stats.TotalInactiveFile,
-		"pgpgout":                   stat.MemoryStats.Stats.Pgpgout,
-		"rss":                       stat.MemoryStats.Stats.Rss,
-		"total_mapped_file":         stat.MemoryStats.Stats.TotalMappedFile,
-		"writeback":                 stat.MemoryStats.Stats.Writeback,
-		"unevictable":               stat.MemoryStats.Stats.Unevictable,
-		"pgpgin":                    stat.MemoryStats.Stats.Pgpgin,
-		"total_unevictable":         stat.MemoryStats.Stats.TotalUnevictable,
-		"pgmajfault":                stat.MemoryStats.Stats.Pgmajfault,
-		"total_rss":                 stat.MemoryStats.Stats.TotalRss,
-		"total_rss_huge":            stat.MemoryStats.Stats.TotalRssHuge,
-		"total_writeback":           stat.MemoryStats.Stats.TotalWriteback,
-		"total_inactive_anon":       stat.MemoryStats.Stats.TotalInactiveAnon,
-		"rss_huge":                  stat.MemoryStats.Stats.RssHuge,
-		"hierarchical_memory_limit": stat.MemoryStats.Stats.HierarchicalMemoryLimit,
-		"total_pgfault":             stat.MemoryStats.Stats.TotalPgfault,
-		"total_active_file":         stat.MemoryStats.Stats.TotalActiveFile,
-		"active_anon":               stat.MemoryStats.Stats.ActiveAnon,
-		"total_active_anon":         stat.MemoryStats.Stats.TotalActiveAnon,
-		"total_pgpgout":             stat.MemoryStats.Stats.TotalPgpgout,
-		"total_cache":               stat.MemoryStats.Stats.TotalCache,
-		"inactive_anon":             stat.MemoryStats.Stats.InactiveAnon,
-		"active_file":               stat.MemoryStats.Stats.ActiveFile,
-		"pgfault":                   stat.MemoryStats.Stats.Pgfault,
-		"inactive_file":             stat.MemoryStats.Stats.InactiveFile,
-		"total_pgpgin":              stat.MemoryStats.Stats.TotalPgpgin,
+		"total_pgmafault":           stat.MemoryStats.Stats["total_pgmajfault"],
+		"cache":                     stat.MemoryStats.Stats["cache"],
+		"mapped_file":               stat.MemoryStats.Stats["mapped_file"],
+		"total_inactive_file":       stat.MemoryStats.Stats["total_inactive_file"],
+		"pgpgout":                   stat.MemoryStats.Stats["pgpgout"],
+		"rss":                       stat.MemoryStats.Stats["rss"],
+		"total_mapped_file":         stat.MemoryStats.Stats["total_mapped_file"],
+		"writeback":                 stat.MemoryStats.Stats["writeback"],
+		"unevictable":               stat.MemoryStats.Stats["unevictable"],
+		"pgpgin":                    stat.MemoryStats.Stats["pgpgin"],
+		"total_unevictable":         stat.MemoryStats.Stats["total_unevictable"],
+		"pgmajfault":                stat.MemoryStats.Stats["pgmajfault"],
+		"total_rss":                 stat.MemoryStats.Stats["total_rss"],
+		"total_rss_huge":            stat.MemoryStats.Stats["total_rss_huge"],
+		"total_writeback":           stat.MemoryStats.Stats["total_write_back"],
+		"total_inactive_anon":       stat.MemoryStats.Stats["total_inactive_anon"],
+		"rss_huge":                  stat.MemoryStats.Stats["rss_huge"],
+		"hierarchical_memory_limit": stat.MemoryStats.Stats["hierarchical_memory_limit"],
+		"total_pgfault":             stat.MemoryStats.Stats["total_pgfault"],
+		"total_active_file":         stat.MemoryStats.Stats["total_active_file"],
+		"active_anon":               stat.MemoryStats.Stats["active_anon"],
+		"total_active_anon":         stat.MemoryStats.Stats["total_active_anon"],
+		"total_pgpgout":             stat.MemoryStats.Stats["total_pgpgout"],
+		"total_cache":               stat.MemoryStats.Stats["total_cache"],
+		"inactive_anon":             stat.MemoryStats.Stats["inactive_anon"],
+		"active_file":               stat.MemoryStats.Stats["active_file"],
+		"pgfault":                   stat.MemoryStats.Stats["pgfault"],
+		"inactive_file":             stat.MemoryStats.Stats["inactive_file"],
+		"total_pgpgin":              stat.MemoryStats.Stats["total_pgpgin"],
+		"usage_percent":             calculateMemPercent(stat),
+		"container_id":              id,
 	}
-	acc.AddFields("docker_mem", memfields, tags, now)
+	acc.AddFields("docker_container_mem", memfields, tags, now)

 	cpufields := map[string]interface{}{
 		"usage_total":                  stat.CPUStats.CPUUsage.TotalUsage,
 		"usage_in_usermode":            stat.CPUStats.CPUUsage.UsageInUsermode,
 		"usage_in_kernelmode":          stat.CPUStats.CPUUsage.UsageInKernelmode,
-		"usage_system":                 stat.CPUStats.SystemCPUUsage,
+		"usage_system":                 stat.CPUStats.SystemUsage,
 		"throttling_periods":           stat.CPUStats.ThrottlingData.Periods,
 		"throttling_throttled_periods": stat.CPUStats.ThrottlingData.ThrottledPeriods,
 		"throttling_throttled_time":    stat.CPUStats.ThrottlingData.ThrottledTime,
+		"usage_percent":                calculateCPUPercent(stat),
+		"container_id":                 id,
 	}
 	cputags := copyTags(tags)
 	cputags["cpu"] = "cpu-total"
-	acc.AddFields("docker_cpu", cpufields, cputags, now)
+	acc.AddFields("docker_container_cpu", cpufields, cputags, now)

 	for i, percpu := range stat.CPUStats.CPUUsage.PercpuUsage {
 		percputags := copyTags(tags)
 		percputags["cpu"] = fmt.Sprintf("cpu%d", i)
-		acc.AddFields("docker_cpu", map[string]interface{}{"usage_total": percpu}, percputags, now)
+		acc.AddFields("docker_container_cpu", map[string]interface{}{"usage_total": percpu}, percputags, now)
 	}

 	for network, netstats := range stat.Networks {
 		netfields := map[string]interface{}{
 			"rx_dropped":   netstats.RxDropped,
 			"rx_bytes":     netstats.RxBytes,
 			"rx_errors":    netstats.RxErrors,
 			"tx_packets":   netstats.TxPackets,
 			"tx_dropped":   netstats.TxDropped,
 			"rx_packets":   netstats.RxPackets,
 			"tx_errors":    netstats.TxErrors,
 			"tx_bytes":     netstats.TxBytes,
+			"container_id": id,
 		}
 		// Create a new network tag dictionary for the "network" tag
 		nettags := copyTags(tags)
 		nettags["network"] = network
-		acc.AddFields("docker_net", netfields, nettags, now)
+		acc.AddFields("docker_container_net", netfields, nettags, now)
 	}

-	gatherBlockIOMetrics(stat, acc, tags, now)
+	gatherBlockIOMetrics(stat, acc, tags, now, id)
 }

+func calculateMemPercent(stat *types.StatsJSON) float64 {
+	var memPercent = 0.0
+	if stat.MemoryStats.Limit > 0 {
+		memPercent = float64(stat.MemoryStats.Usage) / float64(stat.MemoryStats.Limit) * 100.0
+	}
+	return memPercent
+}
+
+func calculateCPUPercent(stat *types.StatsJSON) float64 {
+	var cpuPercent = 0.0
+	// calculate the change for the cpu and system usage of the container in between readings
+	cpuDelta := float64(stat.CPUStats.CPUUsage.TotalUsage) - float64(stat.PreCPUStats.CPUUsage.TotalUsage)
+	systemDelta := float64(stat.CPUStats.SystemUsage) - float64(stat.PreCPUStats.SystemUsage)
+
+	if systemDelta > 0.0 && cpuDelta > 0.0 {
+		cpuPercent = (cpuDelta / systemDelta) * float64(len(stat.CPUStats.CPUUsage.PercpuUsage)) * 100.0
+	}
+	return cpuPercent
+}

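The two helpers added above can be checked with plain numbers. `cpuPercent` and `memPercent` below are illustrative stand-ins that take the relevant counters directly rather than a `types.StatsJSON`; the formula is (container CPU delta / system CPU delta) × number of cores × 100, and usage/limit × 100:

```go
package main

import "fmt"

// cpuPercent reproduces the delta calculation: the container's CPU time
// consumed between two readings, as a share of total system CPU time,
// scaled by the core count.
func cpuPercent(total, preTotal, system, preSystem uint64, ncpu int) float64 {
	cpuDelta := float64(total) - float64(preTotal)
	systemDelta := float64(system) - float64(preSystem)
	if systemDelta > 0.0 && cpuDelta > 0.0 {
		return (cpuDelta / systemDelta) * float64(ncpu) * 100.0
	}
	return 0.0
}

// memPercent is usage/limit, guarded against a zero limit.
func memPercent(usage, limit uint64) float64 {
	if limit > 0 {
		return float64(usage) / float64(limit) * 100.0
	}
	return 0.0
}

func main() {
	// The values from testStats(): total 500 vs precpu 400, system 100 vs 50,
	// 2 percpu entries -> (100/50)*2*100 = 400%, matching the test's
	// "usage_percent": float64(400.0) assertion.
	fmt.Println(cpuPercent(500, 400, 100, 50, 2))
	// usage 1111 of limit 2000 -> 55.55%, matching "usage_percent": 55.55.
	fmt.Println(memPercent(1111, 2000))
}
```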
 func gatherBlockIOMetrics(
-	stat *docker.Stats,
-	acc inputs.Accumulator,
+	stat *types.StatsJSON,
+	acc telegraf.Accumulator,
 	tags map[string]string,
 	now time.Time,
+	id string,
 ) {
 	blkioStats := stat.BlkioStats
 	// Make a map of devices to their block io stats
 	deviceStatMap := make(map[string]map[string]interface{})

-	for _, metric := range blkioStats.IOServiceBytesRecursive {
+	for _, metric := range blkioStats.IoServiceBytesRecursive {
 		device := fmt.Sprintf("%d:%d", metric.Major, metric.Minor)
 		_, ok := deviceStatMap[device]
 		if !ok {
@@ -231,7 +373,7 @@ func gatherBlockIOMetrics(
 		deviceStatMap[device][field] = metric.Value
 	}

-	for _, metric := range blkioStats.IOServicedRecursive {
+	for _, metric := range blkioStats.IoServicedRecursive {
 		device := fmt.Sprintf("%d:%d", metric.Major, metric.Minor)
 		_, ok := deviceStatMap[device]
 		if !ok {
@@ -242,46 +384,45 @@ func gatherBlockIOMetrics(
 		deviceStatMap[device][field] = metric.Value
 	}

-	for _, metric := range blkioStats.IOQueueRecursive {
+	for _, metric := range blkioStats.IoQueuedRecursive {
 		device := fmt.Sprintf("%d:%d", metric.Major, metric.Minor)
 		field := fmt.Sprintf("io_queue_recursive_%s", strings.ToLower(metric.Op))
 		deviceStatMap[device][field] = metric.Value
 	}

-	for _, metric := range blkioStats.IOServiceTimeRecursive {
+	for _, metric := range blkioStats.IoServiceTimeRecursive {
 		device := fmt.Sprintf("%d:%d", metric.Major, metric.Minor)
 		field := fmt.Sprintf("io_service_time_recursive_%s", strings.ToLower(metric.Op))
 		deviceStatMap[device][field] = metric.Value
 	}

-	for _, metric := range blkioStats.IOWaitTimeRecursive {
+	for _, metric := range blkioStats.IoWaitTimeRecursive {
 		device := fmt.Sprintf("%d:%d", metric.Major, metric.Minor)
 		field := fmt.Sprintf("io_wait_time_%s", strings.ToLower(metric.Op))
 		deviceStatMap[device][field] = metric.Value
 	}

-	for _, metric := range blkioStats.IOMergedRecursive {
+	for _, metric := range blkioStats.IoMergedRecursive {
 		device := fmt.Sprintf("%d:%d", metric.Major, metric.Minor)
 		field := fmt.Sprintf("io_merged_recursive_%s", strings.ToLower(metric.Op))
 		deviceStatMap[device][field] = metric.Value
 	}

-	for _, metric := range blkioStats.IOTimeRecursive {
+	for _, metric := range blkioStats.IoTimeRecursive {
 		device := fmt.Sprintf("%d:%d", metric.Major, metric.Minor)
-		field := fmt.Sprintf("io_time_recursive_%s", strings.ToLower(metric.Op))
-		deviceStatMap[device][field] = metric.Value
+		deviceStatMap[device]["io_time_recursive"] = metric.Value
 	}

 	for _, metric := range blkioStats.SectorsRecursive {
 		device := fmt.Sprintf("%d:%d", metric.Major, metric.Minor)
-		field := fmt.Sprintf("sectors_recursive_%s", strings.ToLower(metric.Op))
-		deviceStatMap[device][field] = metric.Value
+		deviceStatMap[device]["sectors_recursive"] = metric.Value
 	}

 	for device, fields := range deviceStatMap {
 		iotags := copyTags(tags)
 		iotags["device"] = device
-		acc.AddFields("docker_blkio", fields, iotags, now)
+		fields["container_id"] = id
+		acc.AddFields("docker_container_blkio", fields, iotags, now)
 	}
 }

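Each loop above folds one blkio counter list into `deviceStatMap`, keyed by the device's `major:minor` pair, with one field per (stat name, operation) pair, so that all counters for a device land in a single `docker_container_blkio` point. A sketch of that aggregation step (`addBlkioMetric` is a hypothetical helper, not the plugin's API):

```go
package main

import (
	"fmt"
	"strings"
)

// addBlkioMetric records one blkio counter under its "major:minor" device key,
// creating the per-device field map on first use.
func addBlkioMetric(statMap map[string]map[string]interface{}, major, minor uint64, statName, op string, value uint64) {
	device := fmt.Sprintf("%d:%d", major, minor)
	if _, ok := statMap[device]; !ok {
		statMap[device] = make(map[string]interface{})
	}
	field := fmt.Sprintf("%s_%s", statName, strings.ToLower(op))
	statMap[device][field] = value
}

func main() {
	deviceStatMap := make(map[string]map[string]interface{})
	// The two entries from the test fixture: a read of 100 bytes and a
	// write count of 101 on device 6:0.
	addBlkioMetric(deviceStatMap, 6, 0, "io_service_bytes_recursive", "Read", 100)
	addBlkioMetric(deviceStatMap, 6, 0, "io_serviced_recursive", "Write", 101)
	fmt.Println(deviceStatMap["6:0"]["io_service_bytes_recursive_read"],
		deviceStatMap["6:0"]["io_serviced_recursive_write"])
}
```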
@@ -302,8 +443,29 @@ func sliceContains(in string, sl []string) bool {
 	return false
 }

+// Parses the human-readable size string into the amount it represents.
+func parseSize(sizeStr string) (int64, error) {
+	matches := sizeRegex.FindStringSubmatch(sizeStr)
+	if len(matches) != 4 {
+		return -1, fmt.Errorf("invalid size: '%s'", sizeStr)
+	}
+
+	size, err := strconv.ParseFloat(matches[1], 64)
+	if err != nil {
+		return -1, err
+	}
+
+	uMap := map[string]int64{"k": KB, "m": MB, "g": GB, "t": TB, "p": PB}
+	unitPrefix := strings.ToLower(matches[3])
+	if mul, ok := uMap[unitPrefix]; ok {
+		size *= float64(mul)
+	}
+
+	return int64(size), nil
+}
+
 func init() {
-	inputs.Add("docker", func() inputs.Input {
+	inputs.Add("docker", func() telegraf.Input {
 		return &Docker{}
 	})
 }
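`parseSize` relies on `sizeRegex` and the `KB`/`MB`/`GB`/`TB`/`PB` constants defined elsewhere in the file, which this diff does not show. The sketch below supplies plausible stand-ins (a simple regex and decimal multipliers, matching `DriverStatus` values like "65.54 kB" and "107.4 GB"); the regex and constants are assumptions, not the plugin's actual definitions:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
	"strings"
)

// Stand-in for the plugin's sizeRegex: a decimal number, optional space,
// optional unit prefix, optional trailing "B"/"b".
var sizeRegex = regexp.MustCompile(`^(\d+(\.\d+)?)\s?([kKmMgGtTpP])?[bB]?$`)

// Stand-ins for the plugin's size constants (decimal, as devicemapper reports).
const (
	KB int64 = 1000
	MB       = KB * 1000
	GB       = MB * 1000
	TB       = GB * 1000
	PB       = TB * 1000
)

// parseSize converts a human-readable size string into a byte count.
func parseSize(sizeStr string) (int64, error) {
	matches := sizeRegex.FindStringSubmatch(sizeStr)
	if len(matches) != 4 {
		return -1, fmt.Errorf("invalid size: '%s'", sizeStr)
	}
	size, err := strconv.ParseFloat(matches[1], 64)
	if err != nil {
		return -1, err
	}
	uMap := map[string]int64{"k": KB, "m": MB, "g": GB, "t": TB, "p": PB}
	if mul, ok := uMap[strings.ToLower(matches[3])]; ok {
		size *= float64(mul)
	}
	return int64(size), nil
}

func main() {
	n, err := parseSize("65.54 kB")
	if err != nil {
		panic(err)
	}
	fmt.Println(n)
}
```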
@@ -1,12 +1,19 @@
 package system

 import (
+	"io"
+	"io/ioutil"
+	"strings"
 	"testing"
 	"time"

+	"golang.org/x/net/context"
+
+	"github.com/docker/engine-api/types"
+	"github.com/docker/engine-api/types/registry"
 	"github.com/influxdata/telegraf/testutil"

-	"github.com/fsouza/go-dockerclient"
+	"github.com/stretchr/testify/require"
 )

 func TestDockerGatherContainerStats(t *testing.T) {
@@ -14,26 +21,26 @@ func TestDockerGatherContainerStats(t *testing.T) {
 	stats := testStats()

 	tags := map[string]string{
-		"cont_id":    "foobarbaz",
-		"cont_name":  "redis",
-		"cont_image": "redis/image",
+		"container_name":  "redis",
+		"container_image": "redis/image",
 	}
-	gatherContainerStats(stats, &acc, tags)
+	gatherContainerStats(stats, &acc, tags, "123456789")

-	// test docker_net measurement
+	// test docker_container_net measurement
 	netfields := map[string]interface{}{
 		"rx_dropped":   uint64(1),
 		"rx_bytes":     uint64(2),
 		"rx_errors":    uint64(3),
 		"tx_packets":   uint64(4),
 		"tx_dropped":   uint64(1),
 		"rx_packets":   uint64(2),
 		"tx_errors":    uint64(3),
 		"tx_bytes":     uint64(4),
+		"container_id": "123456789",
 	}
 	nettags := copyTags(tags)
 	nettags["network"] = "eth0"
-	acc.AssertContainsTaggedFields(t, "docker_net", netfields, nettags)
+	acc.AssertContainsTaggedFields(t, "docker_container_net", netfields, nettags)

 	// test docker_blkio measurement
 	blkiotags := copyTags(tags)
@@ -41,15 +48,16 @@ func TestDockerGatherContainerStats(t *testing.T) {
 	blkiofields := map[string]interface{}{
 		"io_service_bytes_recursive_read": uint64(100),
 		"io_serviced_recursive_write":     uint64(101),
+		"container_id":                    "123456789",
 	}
-	acc.AssertContainsTaggedFields(t, "docker_blkio", blkiofields, blkiotags)
+	acc.AssertContainsTaggedFields(t, "docker_container_blkio", blkiofields, blkiotags)

-	// test docker_mem measurement
+	// test docker_container_mem measurement
 	memfields := map[string]interface{}{
 		"max_usage":       uint64(1001),
 		"usage":           uint64(1111),
 		"fail_count":      uint64(1),
-		"limit":           uint64(20),
+		"limit":           uint64(2000),
 		"total_pgmafault": uint64(0),
 		"cache":           uint64(0),
 		"mapped_file":     uint64(0),
@@ -79,10 +87,13 @@ func TestDockerGatherContainerStats(t *testing.T) {
 		"pgfault":       uint64(2),
 		"inactive_file": uint64(3),
 		"total_pgpgin":  uint64(4),
+		"usage_percent": float64(55.55),
+		"container_id":  "123456789",
 	}
-	acc.AssertContainsTaggedFields(t, "docker_mem", memfields, tags)

-	// test docker_cpu measurement
+	acc.AssertContainsTaggedFields(t, "docker_container_mem", memfields, tags)
+
+	// test docker_container_cpu measurement
 	cputags := copyTags(tags)
 	cputags["cpu"] = "cpu-total"
 	cpufields := map[string]interface{}{
@@ -93,71 +104,76 @@ func TestDockerGatherContainerStats(t *testing.T) {
 		"throttling_periods":           uint64(1),
 		"throttling_throttled_periods": uint64(0),
 		"throttling_throttled_time":    uint64(0),
+		"usage_percent":                float64(400.0),
+		"container_id":                 "123456789",
 	}
-	acc.AssertContainsTaggedFields(t, "docker_cpu", cpufields, cputags)
+	acc.AssertContainsTaggedFields(t, "docker_container_cpu", cpufields, cputags)

 	cputags["cpu"] = "cpu0"
 	cpu0fields := map[string]interface{}{
 		"usage_total": uint64(1),
 	}
-	acc.AssertContainsTaggedFields(t, "docker_cpu", cpu0fields, cputags)
+	acc.AssertContainsTaggedFields(t, "docker_container_cpu", cpu0fields, cputags)

 	cputags["cpu"] = "cpu1"
 	cpu1fields := map[string]interface{}{
 		"usage_total": uint64(1002),
 	}
-	acc.AssertContainsTaggedFields(t, "docker_cpu", cpu1fields, cputags)
+	acc.AssertContainsTaggedFields(t, "docker_container_cpu", cpu1fields, cputags)
 }

-func testStats() *docker.Stats {
-	stats := &docker.Stats{
-		Read:     time.Now(),
-		Networks: make(map[string]docker.NetworkStats),
-	}
+func testStats() *types.StatsJSON {
+	stats := &types.StatsJSON{}
+	stats.Read = time.Now()
+	stats.Networks = make(map[string]types.NetworkStats)

 	stats.CPUStats.CPUUsage.PercpuUsage = []uint64{1, 1002}
 	stats.CPUStats.CPUUsage.UsageInUsermode = 100
 	stats.CPUStats.CPUUsage.TotalUsage = 500
 	stats.CPUStats.CPUUsage.UsageInKernelmode = 200
-	stats.CPUStats.SystemCPUUsage = 100
+	stats.CPUStats.SystemUsage = 100
 	stats.CPUStats.ThrottlingData.Periods = 1
+	stats.PreCPUStats.CPUUsage.TotalUsage = 400
+	stats.PreCPUStats.SystemUsage = 50

-	stats.MemoryStats.Stats.TotalPgmafault = 0
-	stats.MemoryStats.Stats.Cache = 0
-	stats.MemoryStats.Stats.MappedFile = 0
-	stats.MemoryStats.Stats.TotalInactiveFile = 0
-	stats.MemoryStats.Stats.Pgpgout = 0
-	stats.MemoryStats.Stats.Rss = 0
-	stats.MemoryStats.Stats.TotalMappedFile = 0
-	stats.MemoryStats.Stats.Writeback = 0
-	stats.MemoryStats.Stats.Unevictable = 0
-	stats.MemoryStats.Stats.Pgpgin = 0
-	stats.MemoryStats.Stats.TotalUnevictable = 0
-	stats.MemoryStats.Stats.Pgmajfault = 0
-	stats.MemoryStats.Stats.TotalRss = 44
-	stats.MemoryStats.Stats.TotalRssHuge = 444
-	stats.MemoryStats.Stats.TotalWriteback = 55
-	stats.MemoryStats.Stats.TotalInactiveAnon = 0
-	stats.MemoryStats.Stats.RssHuge = 0
-	stats.MemoryStats.Stats.HierarchicalMemoryLimit = 0
-	stats.MemoryStats.Stats.TotalPgfault = 0
-	stats.MemoryStats.Stats.TotalActiveFile = 0
-	stats.MemoryStats.Stats.ActiveAnon = 0
-	stats.MemoryStats.Stats.TotalActiveAnon = 0
-	stats.MemoryStats.Stats.TotalPgpgout = 0
-	stats.MemoryStats.Stats.TotalCache = 0
-	stats.MemoryStats.Stats.InactiveAnon = 0
-	stats.MemoryStats.Stats.ActiveFile = 1
-	stats.MemoryStats.Stats.Pgfault = 2
-	stats.MemoryStats.Stats.InactiveFile = 3
-	stats.MemoryStats.Stats.TotalPgpgin = 4
+	stats.MemoryStats.Stats = make(map[string]uint64)
+	stats.MemoryStats.Stats["total_pgmajfault"] = 0
+	stats.MemoryStats.Stats["cache"] = 0
+	stats.MemoryStats.Stats["mapped_file"] = 0
+	stats.MemoryStats.Stats["total_inactive_file"] = 0
+	stats.MemoryStats.Stats["pgpgout"] = 0
+	stats.MemoryStats.Stats["rss"] = 0
+	stats.MemoryStats.Stats["total_mapped_file"] = 0
+	stats.MemoryStats.Stats["writeback"] = 0
+	stats.MemoryStats.Stats["unevictable"] = 0
+	stats.MemoryStats.Stats["pgpgin"] = 0
+	stats.MemoryStats.Stats["total_unevictable"] = 0
+	stats.MemoryStats.Stats["pgmajfault"] = 0
+	stats.MemoryStats.Stats["total_rss"] = 44
+	stats.MemoryStats.Stats["total_rss_huge"] = 444
+	stats.MemoryStats.Stats["total_write_back"] = 55
+	stats.MemoryStats.Stats["total_inactive_anon"] = 0
+	stats.MemoryStats.Stats["rss_huge"] = 0
+	stats.MemoryStats.Stats["hierarchical_memory_limit"] = 0
+	stats.MemoryStats.Stats["total_pgfault"] = 0
+	stats.MemoryStats.Stats["total_active_file"] = 0
+	stats.MemoryStats.Stats["active_anon"] = 0
+	stats.MemoryStats.Stats["total_active_anon"] = 0
+	stats.MemoryStats.Stats["total_pgpgout"] = 0
+	stats.MemoryStats.Stats["total_cache"] = 0
+	stats.MemoryStats.Stats["inactive_anon"] = 0
+	stats.MemoryStats.Stats["active_file"] = 1
+	stats.MemoryStats.Stats["pgfault"] = 2
+	stats.MemoryStats.Stats["inactive_file"] = 3
+	stats.MemoryStats.Stats["total_pgpgin"] = 4

 	stats.MemoryStats.MaxUsage = 1001
 	stats.MemoryStats.Usage = 1111
 	stats.MemoryStats.Failcnt = 1
-	stats.MemoryStats.Limit = 20
+	stats.MemoryStats.Limit = 2000

-	stats.Networks["eth0"] = docker.NetworkStats{
+	stats.Networks["eth0"] = types.NetworkStats{
 		RxDropped: 1,
 		RxBytes:   2,
 		RxErrors:  3,
@@ -168,23 +184,246 @@ func testStats() *docker.Stats {
 		TxBytes:   4,
 	}

-	sbr := docker.BlkioStatsEntry{
+	sbr := types.BlkioStatEntry{
 		Major: 6,
 		Minor: 0,
 		Op:    "read",
 		Value: 100,
 	}
-	sr := docker.BlkioStatsEntry{
+	sr := types.BlkioStatEntry{
 		Major: 6,
 		Minor: 0,
 		Op:    "write",
 		Value: 101,
 	}

-	stats.BlkioStats.IOServiceBytesRecursive = append(
-		stats.BlkioStats.IOServiceBytesRecursive, sbr)
-	stats.BlkioStats.IOServicedRecursive = append(
-		stats.BlkioStats.IOServicedRecursive, sr)
+	stats.BlkioStats.IoServiceBytesRecursive = append(
+		stats.BlkioStats.IoServiceBytesRecursive, sbr)
+	stats.BlkioStats.IoServicedRecursive = append(
+		stats.BlkioStats.IoServicedRecursive, sr)

 	return stats
 }

+type FakeDockerClient struct {
+}
+
+func (d FakeDockerClient) Info(ctx context.Context) (types.Info, error) {
+	env := types.Info{
+		Containers:         108,
+		OomKillDisable:     false,
+		SystemTime:         "2016-02-24T00:55:09.15073105-05:00",
+		NEventsListener:    0,
+		ID:                 "5WQQ:TFWR:FDNG:OKQ3:37Y4:FJWG:QIKK:623T:R3ME:QTKB:A7F7:OLHD",
+		Debug:              false,
+		LoggingDriver:      "json-file",
+		KernelVersion:      "4.3.0-1-amd64",
+		IndexServerAddress: "https://index.docker.io/v1/",
+		MemTotal:           3840757760,
+		Images:             199,
+		CPUCfsQuota:        true,
+		Name:               "absol",
+		SwapLimit:          false,
+		IPv4Forwarding:     true,
+		ExecutionDriver:    "native-0.2",
+		ExperimentalBuild:  false,
+		CPUCfsPeriod:       true,
+		RegistryConfig: &registry.ServiceConfig{
+			IndexConfigs: map[string]*registry.IndexInfo{
+				"docker.io": {
+					Name:     "docker.io",
+					Mirrors:  []string{},
+					Official: true,
+					Secure:   true,
+				},
+			}, InsecureRegistryCIDRs: []*registry.NetIPNet{{IP: []byte{127, 0, 0, 0}, Mask: []byte{255, 0, 0, 0}}}, Mirrors: []string{}},
+		OperatingSystem:   "Linux Mint LMDE (containerized)",
+		BridgeNfIptables:  true,
+		HTTPSProxy:        "",
+		Labels:            []string{},
+		MemoryLimit:       false,
+		DriverStatus:      [][2]string{{"Pool Name", "docker-8:1-1182287-pool"}, {"Pool Blocksize", "65.54 kB"}, {"Backing Filesystem", "extfs"}, {"Data file", "/dev/loop0"}, {"Metadata file", "/dev/loop1"}, {"Data Space Used", "17.3 GB"}, {"Data Space Total", "107.4 GB"}, {"Data Space Available", "36.53 GB"}, {"Metadata Space Used", "20.97 MB"}, {"Metadata Space Total", "2.147 GB"}, {"Metadata Space Available", "2.127 GB"}, {"Udev Sync Supported", "true"}, {"Deferred Removal Enabled", "false"}, {"Data loop file", "/var/lib/docker/devicemapper/devicemapper/data"}, {"Metadata loop file", "/var/lib/docker/devicemapper/devicemapper/metadata"}, {"Library Version", "1.02.115 (2016-01-25)"}},
+		NFd:               19,
+		HTTPProxy:         "",
+		Driver:            "devicemapper",
+		NGoroutines:       39,
+		NCPU:              4,
+		DockerRootDir:     "/var/lib/docker",
+		NoProxy:           "",
+		BridgeNfIP6tables: true,
+	}
+	return env, nil
+}
+
+func (d FakeDockerClient) ContainerList(octx context.Context, options types.ContainerListOptions) ([]types.Container, error) {
+	container1 := types.Container{
+		ID:      "e2173b9478a6ae55e237d4d74f8bbb753f0817192b5081334dc78476296b7dfb",
+		Names:   []string{"/etcd"},
+		Image:   "quay.io/coreos/etcd:v2.2.2",
+		Command: "/etcd -name etcd0 -advertise-client-urls http://localhost:2379 -listen-client-urls http://0.0.0.0:2379",
+		Created: 1455941930,
+		Status:  "Up 4 hours",
+		Ports: []types.Port{
+			types.Port{
+				PrivatePort: 7001,
+				PublicPort:  0,
+				Type:        "tcp",
+			},
+			types.Port{
+				PrivatePort: 4001,
+				PublicPort:  0,
+				Type:        "tcp",
+			},
+			types.Port{
+				PrivatePort: 2380,
+				PublicPort:  0,
+				Type:        "tcp",
+			},
+			types.Port{
+				PrivatePort: 2379,
+				PublicPort:  2379,
+				Type:        "tcp",
+				IP:          "0.0.0.0",
+			},
+		},
+		SizeRw:     0,
+		SizeRootFs: 0,
+	}
+	container2 := types.Container{
+		ID:      "b7dfbb9478a6ae55e237d4d74f8bbb753f0817192b5081334dc78476296e2173",
+		Names:   []string{"/etcd2"},
+		Image:   "quay.io/coreos/etcd:v2.2.2",
+		Command: "/etcd -name etcd2 -advertise-client-urls http://localhost:2379 -listen-client-urls http://0.0.0.0:2379",
+		Created: 1455941933,
+		Status:  "Up 4 hours",
+		Ports: []types.Port{
+			types.Port{
+				PrivatePort: 7002,
+				PublicPort:  0,
+				Type:        "tcp",
+			},
+			types.Port{
+				PrivatePort: 4002,
+				PublicPort:  0,
+				Type:        "tcp",
+			},
+			types.Port{
+				PrivatePort: 2381,
+				PublicPort:  0,
+				Type:        "tcp",
+			},
+			types.Port{
+				PrivatePort: 2382,
+				PublicPort:  2382,
+				Type:        "tcp",
+				IP:          "0.0.0.0",
+			},
+		},
+		SizeRw:     0,
+		SizeRootFs: 0,
+	}
+
+	containers := []types.Container{container1, container2}
+	return containers, nil
+
+	//#{e6a96c84ca91a5258b7cb752579fb68826b68b49ff957487695cd4d13c343b44 titilambert/snmpsim /bin/sh -c 'snmpsimd --agent-udpv4-endpoint=0.0.0.0:31161 --process-user=root --process-group=user' 1455724831 Up 4 hours [{31161 31161 udp 0.0.0.0}] 0 0 [/snmp] map[]}]2016/02/24 01:05:01 Gathered metrics, (3s interval), from 1 inputs in 1.233836656s
+}
+
+func (d FakeDockerClient) ContainerStats(ctx context.Context, containerID string, stream bool) (io.ReadCloser, error) {
+	var stat io.ReadCloser
+	jsonStat := `{"read":"2016-02-24T11:42:27.472459608-05:00","memory_stats":{"stats":{},"limit":18935443456},"blkio_stats":{"io_service_bytes_recursive":[{"major":252,"minor":1,"op":"Read","value":753664},{"major":252,"minor":1,"op":"Write"},{"major":252,"minor":1,"op":"Sync"},{"major":252,"minor":1,"op":"Async","value":753664},{"major":252,"minor":1,"op":"Total","value":753664}],"io_serviced_recursive":[{"major":252,"minor":1,"op":"Read","value":26},{"major":252,"minor":1,"op":"Write"},{"major":252,"minor":1,"op":"Sync"},{"major":252,"minor":1,"op":"Async","value":26},{"major":252,"minor":1,"op":"Total","value":26}]},"cpu_stats":{"cpu_usage":{"percpu_usage":[17871,4959158,1646137,1231652,11829401,244656,369972,0],"usage_in_usermode":10000000,"total_usage":20298847},"system_cpu_usage":24052607520000000,"throttling_data":{}},"precpu_stats":{"cpu_usage":{"percpu_usage":[17871,4959158,1646137,1231652,11829401,244656,369972,0],"usage_in_usermode":10000000,"total_usage":20298847},"system_cpu_usage":24052599550000000,"throttling_data":{}}}`
|
||||||
|
stat = ioutil.NopCloser(strings.NewReader(jsonStat))
|
||||||
|
return stat, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestDockerGatherInfo(t *testing.T) {
|
||||||
|
var acc testutil.Accumulator
|
||||||
|
client := FakeDockerClient{}
|
||||||
|
d := Docker{client: client}
|
||||||
|
|
||||||
|
err := d.Gather(&acc)
|
||||||
|
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
acc.AssertContainsTaggedFields(t,
|
||||||
|
"docker",
|
||||||
|
map[string]interface{}{
|
||||||
|
"n_listener_events": int(0),
|
||||||
|
"n_cpus": int(4),
|
||||||
|
"n_used_file_descriptors": int(19),
|
||||||
|
"n_containers": int(108),
|
||||||
|
"n_images": int(199),
|
||||||
|
"n_goroutines": int(39),
|
||||||
|
},
|
||||||
|
map[string]string{},
|
||||||
|
)
|
||||||
|
|
||||||
|
acc.AssertContainsTaggedFields(t,
|
||||||
|
"docker_data",
|
||||||
|
map[string]interface{}{
|
||||||
|
"used": int64(17300000000),
|
||||||
|
"total": int64(107400000000),
|
||||||
|
"available": int64(36530000000),
|
||||||
|
},
|
||||||
|
map[string]string{
|
||||||
|
"unit": "bytes",
|
||||||
|
},
|
||||||
|
)
|
||||||
|
acc.AssertContainsTaggedFields(t,
|
||||||
|
"docker_container_cpu",
|
||||||
|
map[string]interface{}{
|
||||||
|
"usage_total": uint64(1231652),
|
||||||
|
},
|
||||||
|
map[string]string{
|
||||||
|
"container_name": "etcd2",
|
||||||
|
"container_image": "quay.io/coreos/etcd:v2.2.2",
|
||||||
|
"cpu": "cpu3",
|
||||||
|
},
|
||||||
|
)
|
||||||
|
acc.AssertContainsTaggedFields(t,
|
||||||
|
"docker_container_mem",
|
||||||
|
map[string]interface{}{
|
||||||
|
"total_pgpgout": uint64(0),
|
||||||
|
"usage_percent": float64(0),
|
||||||
|
"rss": uint64(0),
|
||||||
|
"total_writeback": uint64(0),
|
||||||
|
"active_anon": uint64(0),
|
||||||
|
"total_pgmafault": uint64(0),
|
||||||
|
"total_rss": uint64(0),
|
||||||
|
"total_unevictable": uint64(0),
|
||||||
|
"active_file": uint64(0),
|
||||||
|
"total_mapped_file": uint64(0),
|
||||||
|
"pgpgin": uint64(0),
|
||||||
|
"total_active_file": uint64(0),
|
||||||
|
"total_active_anon": uint64(0),
|
||||||
|
"total_cache": uint64(0),
|
||||||
|
"inactive_anon": uint64(0),
|
||||||
|
"pgmajfault": uint64(0),
|
||||||
|
"total_inactive_anon": uint64(0),
|
||||||
|
"total_rss_huge": uint64(0),
|
||||||
|
"rss_huge": uint64(0),
|
||||||
|
"hierarchical_memory_limit": uint64(0),
|
||||||
|
"pgpgout": uint64(0),
|
||||||
|
"unevictable": uint64(0),
|
||||||
|
"total_inactive_file": uint64(0),
|
||||||
|
"writeback": uint64(0),
|
||||||
|
"total_pgfault": uint64(0),
|
||||||
|
"total_pgpgin": uint64(0),
|
||||||
|
"cache": uint64(0),
|
||||||
|
"mapped_file": uint64(0),
|
||||||
|
"inactive_file": uint64(0),
|
||||||
|
"max_usage": uint64(0),
|
||||||
|
"fail_count": uint64(0),
|
||||||
|
"pgfault": uint64(0),
|
||||||
|
"usage": uint64(0),
|
||||||
|
"limit": uint64(18935443456),
|
||||||
|
"container_id": "b7dfbb9478a6ae55e237d4d74f8bbb753f0817192b5081334dc78476296e2173",
|
||||||
|
},
|
||||||
|
map[string]string{
|
||||||
|
"container_name": "etcd2",
|
||||||
|
"container_image": "quay.io/coreos/etcd:v2.2.2",
|
||||||
|
},
|
||||||
|
)
|
||||||
|
|
||||||
|
//fmt.Print(info)
|
||||||
|
}
|
||||||
|
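The `jsonStat` fixture above carries both a `cpu_stats` and a `precpu_stats` snapshot; Docker clients commonly derive a CPU percentage from the deltas between the two. A minimal sketch of that calculation (this is not the plugin's code; the constants below are taken from the fixture, and the helper name is illustrative):

```go
package main

import "fmt"

// calcCPUPercent derives a CPU usage percentage from two consecutive
// Docker stat snapshots, the way `docker stats` does: the container's
// usage delta divided by the host's usage delta, scaled by CPU count.
func calcCPUPercent(preTotal, total, preSystem, system uint64, ncpus int) float64 {
	cpuDelta := float64(total) - float64(preTotal)
	sysDelta := float64(system) - float64(preSystem)
	if sysDelta <= 0 || cpuDelta < 0 {
		return 0
	}
	return (cpuDelta / sysDelta) * float64(ncpus) * 100.0
}

func main() {
	// Values from the jsonStat fixture: total_usage is identical in
	// cpu_stats and precpu_stats, so the computed percentage is zero.
	pct := calcCPUPercent(20298847, 20298847, 24052599550000000, 24052607520000000, 8)
	fmt.Println(pct)
}
```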
74  plugins/inputs/dovecot/README.md  Normal file
@@ -0,0 +1,74 @@
# Dovecot Input Plugin

The dovecot plugin uses the Dovecot stats protocol to gather metrics on configured
domains. You can read Dovecot's documentation
[here](http://wiki2.dovecot.org/Statistics).

### Configuration:

```
# Read metrics about dovecot servers
[[inputs.dovecot]]
  ## specify dovecot servers via an address:port list
  ##  e.g.
  ##    localhost:24242
  ##
  ## If no servers are specified, then localhost is used as the host.
  servers = ["localhost:24242"]
  ## Type is one of "user", "domain", "ip", or "global"
  type = "global"
  ## Wildcard matches like "*.com". An empty string "" is the same as "*"
  ## If type = "ip", filters should be <IP/network>
  filters = [""]
```

### Tags:

    server: hostname
    type:   query type
    ip:     ip addr
    user:   username
    domain: domain name

### Fields:

    reset_timestamp        time.Time
    last_update            time.Time
    num_logins             int64
    num_cmds               int64
    num_connected_sessions int64  ## not in <user> type
    user_cpu               float32
    sys_cpu                float32
    clock_time             float64
    min_faults             int64
    maj_faults             int64
    vol_cs                 int64
    invol_cs               int64
    disk_input             int64
    disk_output            int64
    read_count             int64
    read_bytes             int64
    write_count            int64
    write_bytes            int64
    mail_lookup_path       int64
    mail_lookup_attr       int64
    mail_read_count        int64
    mail_read_bytes        int64
    mail_cache_hits        int64

### Example Output:

```
telegraf -config t.cfg -input-filter dovecot -test
* Plugin: dovecot, Collection 1
> dovecot,ip=192.168.0.1,server=dovecot-1.domain.test,type=ip clock_time=0,disk_input=0i,disk_output=0i,invol_cs=0i,last_update="2016-04-08 10:59:47.000208479 +0200 CEST",mail_cache_hits=0i,mail_lookup_attr=0i,mail_lookup_path=0i,mail_read_bytes=0i,mail_read_count=0i,maj_faults=0i,min_faults=0i,num_cmds=12i,num_connected_sessions=0i,num_logins=6i,read_bytes=0i,read_count=0i,reset_timestamp="2016-04-08 10:33:34 +0200 CEST",sys_cpu=0,user_cpu=0,vol_cs=0i,write_bytes=0i,write_count=0i 1460106251633824223
* Plugin: dovecot, Collection 1
> dovecot,server=dovecot-1.domain.test,type=user,user=user-1@domain.test clock_time=0.00006,disk_input=405504i,disk_output=77824i,invol_cs=67i,last_update="2016-04-08 11:02:55.000111634 +0200 CEST",mail_cache_hits=26i,mail_lookup_attr=0i,mail_lookup_path=6i,mail_read_bytes=86233i,mail_read_count=5i,maj_faults=0i,min_faults=975i,num_cmds=41i,num_logins=3i,read_bytes=368833i,read_count=394i,reset_timestamp="2016-04-08 11:01:32 +0200 CEST",sys_cpu=0.008,user_cpu=0.004,vol_cs=323i,write_bytes=105086i,write_count=176i 1460106256637049167
* Plugin: dovecot, Collection 1
> dovecot,domain=domain.test,server=dovecot-1.domain.test,type=domain clock_time=100896189179847.7,disk_input=6467588263936i,disk_output=17933680439296i,invol_cs=1194808498i,last_update="2016-04-08 11:04:08.000377367 +0200 CEST",mail_cache_hits=46455781i,mail_lookup_attr=0i,mail_lookup_path=571490i,mail_read_bytes=79287033067i,mail_read_count=491243i,maj_faults=16992i,min_faults=1278442541i,num_cmds=606005i,num_connected_sessions=6597i,num_logins=166381i,read_bytes=30231409780721i,read_count=1624912080i,reset_timestamp="2016-04-08 10:28:45 +0200 CEST",sys_cpu=156440.372,user_cpu=216676.476,vol_cs=2749291157i,write_bytes=17097106707594i,write_count=944448998i 1460106261639672622
* Plugin: dovecot, Collection 1
> dovecot,server=dovecot-1.domain.test,type=global clock_time=101196971074203.94,disk_input=6493168218112i,disk_output=17978638815232i,invol_cs=1198855447i,last_update="2016-04-08 11:04:13.000379245 +0200 CEST",mail_cache_hits=68192209i,mail_lookup_attr=0i,mail_lookup_path=653861i,mail_read_bytes=86705151847i,mail_read_count=566125i,maj_faults=17208i,min_faults=1286179702i,num_cmds=917469i,num_connected_sessions=8896i,num_logins=174827i,read_bytes=30327690466186i,read_count=1772396430i,reset_timestamp="2016-04-08 10:28:45 +0200 CEST",sys_cpu=157965.692,user_cpu=219337.48,vol_cs=2827615787i,write_bytes=17150837661940i,write_count=992653220i 1460106266642153907
```
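The Tags and Fields above come back from Dovecot's stats socket as tab-separated text: an `EXPORT <type>` query is answered with a header row of column names followed by one value row per user/domain/ip. A small parsing sketch of that reply shape (the helper name and map-based return type are illustrative, not part of Dovecot or the plugin):

```go
package main

import (
	"fmt"
	"strings"
)

// parseDovecotExport parses the tab-separated reply of a Dovecot
// `EXPORT <type>` stats query: the first line is the column header,
// and each following non-empty line is one row of values.
func parseDovecotExport(reply string) []map[string]string {
	lines := strings.Split(strings.TrimRight(reply, "\n"), "\n")
	if len(lines) < 2 {
		return nil
	}
	head := strings.Split(lines[0], "\t")
	var rows []map[string]string
	for _, l := range lines[1:] {
		if l == "" {
			continue
		}
		vals := strings.Split(l, "\t")
		row := make(map[string]string, len(head))
		for i, v := range vals {
			if i < len(head) {
				row[head[i]] = v
			}
		}
		rows = append(rows, row)
	}
	return rows
}

func main() {
	reply := "domain\tnum_logins\tnum_cmds\ndomain.test\t166381\t606005\n"
	rows := parseDovecotExport(reply)
	fmt.Println(rows[0]["domain"], rows[0]["num_cmds"])
}
```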
193  plugins/inputs/dovecot/dovecot.go  Normal file
@@ -0,0 +1,193 @@
package dovecot

import (
	"bytes"
	"fmt"
	"io"
	"net"
	"strconv"
	"strings"
	"sync"
	"time"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/inputs"
)

type Dovecot struct {
	Type    string
	Filters []string
	Servers []string
}

func (d *Dovecot) Description() string {
	return "Read statistics from one or many dovecot servers"
}

var sampleConfig = `
  ## specify dovecot servers via an address:port list
  ##  e.g.
  ##    localhost:24242
  ##
  ## If no servers are specified, then localhost is used as the host.
  servers = ["localhost:24242"]
  ## Type is one of "user", "domain", "ip", or "global"
  type = "global"
  ## Wildcard matches like "*.com". An empty string "" is the same as "*"
  ## If type = "ip", filters should be <IP/network>
  filters = [""]
`

var defaultTimeout = time.Second * time.Duration(5)

var validQuery = map[string]bool{
	"user": true, "domain": true, "global": true, "ip": true,
}

func (d *Dovecot) SampleConfig() string { return sampleConfig }

const defaultPort = "24242"

// Gather reads stats from all configured servers.
func (d *Dovecot) Gather(acc telegraf.Accumulator) error {
	if !validQuery[d.Type] {
		return fmt.Errorf("error: %s is not a valid query type", d.Type)
	}

	if len(d.Servers) == 0 {
		d.Servers = append(d.Servers, "127.0.0.1:24242")
	}

	if len(d.Filters) == 0 {
		d.Filters = append(d.Filters, "")
	}

	var wg sync.WaitGroup

	// outerr is shared by the collector goroutines; guard it with a
	// mutex so concurrent gatherServer failures do not race.
	var mu sync.Mutex
	var outerr error

	for _, serv := range d.Servers {
		for _, filter := range d.Filters {
			wg.Add(1)
			go func(serv string, filter string) {
				defer wg.Done()
				if err := d.gatherServer(serv, acc, d.Type, filter); err != nil {
					mu.Lock()
					outerr = err
					mu.Unlock()
				}
			}(serv, filter)
		}
	}

	wg.Wait()

	return outerr
}

func (d *Dovecot) gatherServer(addr string, acc telegraf.Accumulator, qtype string, filter string) error {
	_, _, err := net.SplitHostPort(addr)
	if err != nil {
		return fmt.Errorf("error: %s on address %s", err, addr)
	}

	c, err := net.DialTimeout("tcp", addr, defaultTimeout)
	if err != nil {
		return fmt.Errorf("unable to connect to dovecot server '%s': %s", addr, err)
	}
	defer c.Close()

	// Bound the whole exchange by the default timeout.
	c.SetDeadline(time.Now().Add(defaultTimeout))

	msg := fmt.Sprintf("EXPORT\t%s", qtype)
	if len(filter) > 0 {
		msg += fmt.Sprintf("\t%s=%s", qtype, filter)
	}
	msg += "\n"

	c.Write([]byte(msg))
	var buf bytes.Buffer
	io.Copy(&buf, c)

	host, _, _ := net.SplitHostPort(addr)

	return gatherStats(&buf, acc, host, qtype)
}

func gatherStats(buf *bytes.Buffer, acc telegraf.Accumulator, host string, qtype string) error {
	lines := strings.Split(buf.String(), "\n")
	head := strings.Split(lines[0], "\t")
	vals := lines[1:]

	for i := range vals {
		if vals[i] == "" {
			continue
		}
		val := strings.Split(vals[i], "\t")

		fields := make(map[string]interface{})
		tags := map[string]string{"server": host, "type": qtype}

		if qtype != "global" {
			tags[qtype] = val[0]
		}

		for n := range val {
			switch head[n] {
			case qtype:
				continue
			case "user_cpu", "sys_cpu", "clock_time":
				fields[head[n]] = secParser(val[n])
			case "reset_timestamp", "last_update":
				fields[head[n]] = timeParser(val[n])
			default:
				ival, _ := splitSec(val[n])
				fields[head[n]] = ival
			}
		}

		acc.AddFields("dovecot", fields, tags)
	}

	return nil
}

func splitSec(tm string) (sec int64, msec int64) {
	var err error
	ss := strings.Split(tm, ".")

	sec, err = strconv.ParseInt(ss[0], 10, 64)
	if err != nil {
		sec = 0
	}
	if len(ss) > 1 {
		msec, err = strconv.ParseInt(ss[1], 10, 64)
		if err != nil {
			msec = 0
		}
	} else {
		msec = 0
	}

	return sec, msec
}

func timeParser(tm string) time.Time {
	sec, msec := splitSec(tm)
	return time.Unix(sec, msec)
}

func secParser(tm string) float64 {
	sec, msec := splitSec(tm)
	return float64(sec) + (float64(msec) / 1000000.0)
}

func init() {
	inputs.Add("dovecot", func() telegraf.Input {
		return &Dovecot{}
	})
}
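Dovecot reports CPU and clock counters as `seconds.fraction` strings; `splitSec` splits on the dot and parses both halves as integers, and `secParser` recombines them treating the fractional digits as microseconds. A standalone sketch mirroring those two helpers (reimplemented here so it runs on its own):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// splitSec mirrors the dovecot plugin's helper: split "sec.frac" on
// the dot and parse both parts as integers, defaulting to zero when
// a part is missing or unparseable.
func splitSec(tm string) (sec int64, frac int64) {
	ss := strings.Split(tm, ".")
	sec, _ = strconv.ParseInt(ss[0], 10, 64)
	if len(ss) > 1 {
		frac, _ = strconv.ParseInt(ss[1], 10, 64)
	}
	return sec, frac
}

// secParser treats the fractional digits as microseconds, as the
// plugin does for the user_cpu, sys_cpu, and clock_time columns.
func secParser(tm string) float64 {
	sec, frac := splitSec(tm)
	return float64(sec) + float64(frac)/1000000.0
}

func main() {
	// sys_cpu-style sample value, as seen in the plugin's tests.
	fmt.Println(secParser("83849071.112000"))
}
```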
119  plugins/inputs/dovecot/dovecot_test.go  Normal file
@@ -0,0 +1,119 @@
package dovecot

import (
	"bytes"
	"testing"
	"time"

	"github.com/influxdata/telegraf/testutil"
	"github.com/stretchr/testify/require"
)

func TestDovecot(t *testing.T) {
	if testing.Short() {
		t.Skip("Skipping integration test in short mode")
	}

	fields := map[string]interface{}{
		"reset_timestamp":        time.Unix(1453969886, 0),
		"last_update":            time.Unix(1454603963, 39864),
		"num_logins":             int64(7503897),
		"num_cmds":               int64(52595715),
		"num_connected_sessions": int64(1204),
		"user_cpu":               1.00831175372e+08,
		"sys_cpu":                8.3849071112e+07,
		"clock_time":             4.3260019315281835e+15,
		"min_faults":             int64(763950011),
		"maj_faults":             int64(1112443),
		"vol_cs":                 int64(4120386897),
		"invol_cs":               int64(3685239306),
		"disk_input":             int64(41679480946688),
		"disk_output":            int64(1819070669176832),
		"read_count":             int64(2368906465),
		"read_bytes":             int64(2957928122981169),
		"write_count":            int64(3545389615),
		"write_bytes":            int64(1666822498251286),
		"mail_lookup_path":       int64(24396105),
		"mail_lookup_attr":       int64(302845),
		"mail_read_count":        int64(20155768),
		"mail_read_bytes":        int64(669946617705),
		"mail_cache_hits":        int64(1557255080),
	}

	var acc testutil.Accumulator

	// Test type=global
	tags := map[string]string{"server": "dovecot.test", "type": "global"}
	buf := bytes.NewBufferString(sampleGlobal)

	err := gatherStats(buf, &acc, "dovecot.test", "global")
	require.NoError(t, err)

	acc.AssertContainsTaggedFields(t, "dovecot", fields, tags)

	// Test type=domain
	tags = map[string]string{"server": "dovecot.test", "type": "domain", "domain": "domain.test"}
	buf = bytes.NewBufferString(sampleDomain)

	err = gatherStats(buf, &acc, "dovecot.test", "domain")
	require.NoError(t, err)

	acc.AssertContainsTaggedFields(t, "dovecot", fields, tags)

	// Test type=ip
	tags = map[string]string{"server": "dovecot.test", "type": "ip", "ip": "192.168.0.100"}
	buf = bytes.NewBufferString(sampleIp)

	err = gatherStats(buf, &acc, "dovecot.test", "ip")
	require.NoError(t, err)

	acc.AssertContainsTaggedFields(t, "dovecot", fields, tags)

	// Test type=user
	fields = map[string]interface{}{
		"reset_timestamp":  time.Unix(1453969886, 0),
		"last_update":      time.Unix(1454603963, 39864),
		"num_logins":       int64(7503897),
		"num_cmds":         int64(52595715),
		"user_cpu":         1.00831175372e+08,
		"sys_cpu":          8.3849071112e+07,
		"clock_time":       4.3260019315281835e+15,
		"min_faults":       int64(763950011),
		"maj_faults":       int64(1112443),
		"vol_cs":           int64(4120386897),
		"invol_cs":         int64(3685239306),
		"disk_input":       int64(41679480946688),
		"disk_output":      int64(1819070669176832),
		"read_count":       int64(2368906465),
		"read_bytes":       int64(2957928122981169),
		"write_count":      int64(3545389615),
		"write_bytes":      int64(1666822498251286),
		"mail_lookup_path": int64(24396105),
		"mail_lookup_attr": int64(302845),
		"mail_read_count":  int64(20155768),
		"mail_read_bytes":  int64(669946617705),
		"mail_cache_hits":  int64(1557255080),
	}

	tags = map[string]string{"server": "dovecot.test", "type": "user", "user": "user.1@domain.test"}
	buf = bytes.NewBufferString(sampleUser)

	err = gatherStats(buf, &acc, "dovecot.test", "user")
	require.NoError(t, err)

	acc.AssertContainsTaggedFields(t, "dovecot", fields, tags)
}

const sampleGlobal = `reset_timestamp	last_update	num_logins	num_cmds	num_connected_sessions	user_cpu	sys_cpu	clock_time	min_faults	maj_faults	vol_cs	invol_cs	disk_input	disk_output	read_count	read_bytes	write_count	write_bytes	mail_lookup_path	mail_lookup_attr	mail_read_count	mail_read_bytes	mail_cache_hits
1453969886	1454603963.039864	7503897	52595715	1204	100831175.372000	83849071.112000	4326001931528183.495762	763950011	1112443	4120386897	3685239306	41679480946688	1819070669176832	2368906465	2957928122981169	3545389615	1666822498251286	24396105	302845	20155768	669946617705	1557255080`

const sampleDomain = `domain	reset_timestamp	last_update	num_logins	num_cmds	num_connected_sessions	user_cpu	sys_cpu	clock_time	min_faults	maj_faults	vol_cs	invol_cs	disk_input	disk_output	read_count	read_bytes	write_count	write_bytes	mail_lookup_path	mail_lookup_attr	mail_read_count	mail_read_bytes	mail_cache_hits
domain.test	1453969886	1454603963.039864	7503897	52595715	1204	100831175.372000	83849071.112000	4326001931528183.495762	763950011	1112443	4120386897	3685239306	41679480946688	1819070669176832	2368906465	2957928122981169	3545389615	1666822498251286	24396105	302845	20155768	669946617705	1557255080`

const sampleIp = `ip	reset_timestamp	last_update	num_logins	num_cmds	num_connected_sessions	user_cpu	sys_cpu	clock_time	min_faults	maj_faults	vol_cs	invol_cs	disk_input	disk_output	read_count	read_bytes	write_count	write_bytes	mail_lookup_path	mail_lookup_attr	mail_read_count	mail_read_bytes	mail_cache_hits
192.168.0.100	1453969886	1454603963.039864	7503897	52595715	1204	100831175.372000	83849071.112000	4326001931528183.495762	763950011	1112443	4120386897	3685239306	41679480946688	1819070669176832	2368906465	2957928122981169	3545389615	1666822498251286	24396105	302845	20155768	669946617705	1557255080`

const sampleUser = `user	reset_timestamp	last_update	num_logins	num_cmds	user_cpu	sys_cpu	clock_time	min_faults	maj_faults	vol_cs	invol_cs	disk_input	disk_output	read_count	read_bytes	write_count	write_bytes	mail_lookup_path	mail_lookup_attr	mail_read_count	mail_read_bytes	mail_cache_hits
user.1@domain.test	1453969886	1454603963.039864	7503897	52595715	100831175.372000	83849071.112000	4326001931528183.495762	763950011	1112443	4120386897	3685239306	41679480946688	1819070669176832	2368906465	2957928122981169	3545389615	1666822498251286	24396105	302845	20155768	669946617705	1557255080`
@@ -9,8 +9,9 @@ import (
 	"sync"
 	"time"
 
-	"github.com/influxdata/telegraf/internal"
+	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/plugins/inputs"
+	jsonparser "github.com/influxdata/telegraf/plugins/parsers/json"
 )
 
 const statsPath = "/_nodes/stats"
@@ -58,14 +59,14 @@ type indexHealth struct {
 }
 
 const sampleConfig = `
-  # specify a list of one or more Elasticsearch servers
+  ## specify a list of one or more Elasticsearch servers
   servers = ["http://localhost:9200"]
 
-  # set local to false when you want to read the indices stats from all nodes
-  # within the cluster
+  ## set local to false when you want to read the indices stats from all nodes
+  ## within the cluster
   local = true
 
-  # set cluster_health to true when you want to also obtain cluster level stats
+  ## set cluster_health to true when you want to also obtain cluster level stats
   cluster_health = false
 `
 
@@ -80,7 +81,12 @@ type Elasticsearch struct {
 
 // NewElasticsearch return a new instance of Elasticsearch
 func NewElasticsearch() *Elasticsearch {
-	return &Elasticsearch{client: http.DefaultClient}
+	tr := &http.Transport{ResponseHeaderTimeout: time.Duration(3 * time.Second)}
+	client := &http.Client{
+		Transport: tr,
+		Timeout:   time.Duration(4 * time.Second),
+	}
+	return &Elasticsearch{client: client}
 }
 
 // SampleConfig returns sample configuration for this plugin.
@@ -95,13 +101,13 @@ func (e *Elasticsearch) Description() string {
 
 // Gather reads the stats from Elasticsearch and writes it to the
 // Accumulator.
-func (e *Elasticsearch) Gather(acc inputs.Accumulator) error {
+func (e *Elasticsearch) Gather(acc telegraf.Accumulator) error {
 	errChan := make(chan error, len(e.Servers))
 	var wg sync.WaitGroup
 	wg.Add(len(e.Servers))
 
 	for _, serv := range e.Servers {
-		go func(s string, acc inputs.Accumulator) {
+		go func(s string, acc telegraf.Accumulator) {
 			defer wg.Done()
 			var url string
 			if e.Local {
@@ -133,7 +139,7 @@ func (e *Elasticsearch) Gather(acc inputs.Accumulator) error {
 	return errors.New(strings.Join(errStrings, "\n"))
 }
 
-func (e *Elasticsearch) gatherNodeStats(url string, acc inputs.Accumulator) error {
+func (e *Elasticsearch) gatherNodeStats(url string, acc telegraf.Accumulator) error {
 	nodeStats := &struct {
 		ClusterName string           `json:"cluster_name"`
 		Nodes       map[string]*node `json:"nodes"`
@@ -167,7 +173,7 @@ func (e *Elasticsearch) gatherNodeStats(url string, acc inputs.Accumulator) erro
 
 	now := time.Now()
 	for p, s := range stats {
-		f := internal.JSONFlattener{}
+		f := jsonparser.JSONFlattener{}
 		err := f.FlattenJSON("", s)
 		if err != nil {
 			return err
@@ -178,7 +184,7 @@ func (e *Elasticsearch) gatherNodeStats(url string, acc inputs.Accumulator) erro
 	return nil
 }
 
-func (e *Elasticsearch) gatherClusterStats(url string, acc inputs.Accumulator) error {
+func (e *Elasticsearch) gatherClusterStats(url string, acc telegraf.Accumulator) error {
 	clusterStats := &clusterHealth{}
 	if err := e.gatherData(url, clusterStats); err != nil {
 		return err
@@ -243,7 +249,7 @@ func (e *Elasticsearch) gatherData(url string, v interface{}) error {
 }
 
 func init() {
-	inputs.Add("elasticsearch", func() inputs.Input {
+	inputs.Add("elasticsearch", func() telegraf.Input {
 		return NewElasticsearch()
 	})
 }

@@ -34,6 +34,9 @@ func (t *transportMock) RoundTrip(r *http.Request) (*http.Response, error) {
 	return res, nil
 }
 
+func (t *transportMock) CancelRequest(_ *http.Request) {
+}
+
 func TestElasticsearch(t *testing.T) {
 	es := NewElasticsearch()
 	es.Servers = []string{"http://example.com:9200"}
@@ -1,28 +1,57 @@
|
|||||||
# Exec Plugin
|
# Exec Input Plugin
|
||||||
|
|
||||||
The exec plugin can execute arbitrary commands which output JSON. Then it flattens JSON and finds
|
Please also see: [Telegraf Input Data Formats](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md)
|
||||||
all numeric values, treating them as floats.
|
|
||||||
|
|
||||||
For example, if you have a json-returning command called mycollector, you could
|
### Example 1 - JSON
|
||||||
setup the exec plugin with:
|
|
||||||
|
#### Configuration
|
||||||
|
|
||||||
|
In this example a script called ```/tmp/test.sh``` and a script called ```/tmp/test2.sh```
|
||||||
|
are configured for ```[[inputs.exec]]``` in JSON format.
|
||||||
|
|
||||||
```
|
```
|
||||||
|
# Read flattened metrics from one or more commands that output JSON to stdout
|
||||||
[[inputs.exec]]
|
[[inputs.exec]]
|
||||||
command = "/usr/bin/mycollector --output=json"
|
# Shell/commands array
|
||||||
|
commands = ["/tmp/test.sh", "/tmp/test2.sh"]
|
||||||
|
|
||||||
|
# Data format to consume.
|
||||||
|
# NOTE json only reads numerical measurements, strings and booleans are ignored.
|
||||||
|
data_format = "json"
|
||||||
|
|
||||||
|
# measurement name suffix (for separating different commands)
|
||||||
name_suffix = "_mycollector"
|
name_suffix = "_mycollector"
|
||||||
interval = "10s"
|
|
||||||
|
## Below configuration will be used for data_format = "graphite", can be ignored for other data_format
|
||||||
|
## If matching multiple measurement files, this string will be used to join the matched values.
|
||||||
|
#separator = "."
|
||||||
|
|
||||||
|
## Each template line requires a template pattern. It can have an optional
|
||||||
|
## filter before the template and separated by spaces. It can also have optional extra
|
||||||
|
## tags following the template. Multiple tags should be separated by commas and no spaces
|
||||||
|
## similar to the line protocol format. The can be only one default template.
|
||||||
|
## Templates support below format:
|
||||||
|
## 1. filter + template
|
||||||
|
## 2. filter + template + extra tag
|
||||||
|
## 3. filter + template with field key
|
||||||
|
## 4. default template
|
||||||
|
#templates = [
|
||||||
|
# "*.app env.service.resource.measurement",
|
||||||
|
# "stats.* .host.measurement* region=us-west,agent=sensu",
|
||||||
|
# "stats2.* .host.measurement.field",
|
||||||
|
# "measurement*"
|
||||||
|
#]
|
||||||
```

Other options for modifying the measurement names are:

```
name_override = "newname"
name_prefix = "prefix_"
```

Let's say that we have a command with the name_suffix "_mycollector", which gives the following output:

```json
{
    "a": 0.5,
    "b": {
        "c": 0.1,
        "d": 5
    }
}
```

The collected metrics will be stored as fields under the measurement "exec_mycollector":

```
exec_mycollector a=0.5,b_c=0.1,b_d=5 1452815002357578567
```

If using JSON, only numeric values are parsed and turned into floats. Booleans and strings will be ignored.

### Example 2 - Influx Line-Protocol

In this example an application called `/usr/bin/line_protocol_collector` and a script called `/tmp/test2.sh` are configured for `[[inputs.exec]]` in influx line-protocol format.

#### Configuration

```
[[inputs.exec]]
  # Shell/commands array
  # Compatible with the old version:
  # we can still use the old command configuration
  # command = "/usr/bin/line_protocol_collector"
  commands = ["/usr/bin/line_protocol_collector","/tmp/test2.sh"]

  # Data format to consume.
  # NOTE json only reads numerical measurements, strings and booleans are ignored.
  data_format = "influx"
```

The line_protocol_collector application outputs the following line protocol:

```
cpu,cpu=cpu0,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu1,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu2,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu3,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu4,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu5,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu6,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
```

You will get data in InfluxDB exactly as it is defined above: tags are cpu=cpuN, host=foo, and datacenter=us-east, with fields usage_idle and usage_busy. They will receive a timestamp at collection time. Each line must end in \n, just as the Influx line protocol does.

### Example 3 - Graphite

We can also change the data_format to "graphite" to use metrics-collecting scripts such as these (compatible with graphite):

* Nagios [Metrics Plugins](https://exchange.nagios.org/directory/Plugins)
* Sensu [Metrics Plugins](https://github.com/sensu-plugins)

In this example a script called /tmp/test.sh and a script called /tmp/test2.sh are configured for [[inputs.exec]] in graphite format.

#### Configuration

```
# Read flattened metrics from one or more commands that output JSON to stdout
[[inputs.exec]]
  # Shell/commands array
  commands = ["/tmp/test.sh","/tmp/test2.sh"]

  # Data format to consume.
  # NOTE json only reads numerical measurements, strings and booleans are ignored.
  data_format = "graphite"

  # measurement name suffix (for separating different commands)
  name_suffix = "_mycollector"

  ## Below configuration will be used for data_format = "graphite", can be ignored for other data_format
  ## If matching multiple measurement files, this string will be used to join the matched values.
  separator = "."

  ## Each template line requires a template pattern. It can have an optional
  ## filter before the template and separated by spaces. It can also have optional extra
  ## tags following the template. Multiple tags should be separated by commas and no spaces
  ## similar to the line protocol format. There can be only one default template.
  ## Templates support the below formats:
  ## 1. filter + template
  ## 2. filter + template + extra tag
  ## 3. filter + template with field key
  ## 4. default template
  templates = [
    "*.app env.service.resource.measurement",
    "stats.* .host.measurement* region=us-west,agent=sensu",
    "stats2.* .host.measurement.field",
    "measurement*"
  ]
```

Graphite messages are in this format:

```
metric_path value timestamp\n
```

__metric_path__ is the metric namespace that you want to populate.

__value__ is the value that you want to assign to the metric at this time.

__timestamp__ is the unix epoch time.

And test.sh/test2.sh will output:

```
sensu.metric.net.server0.eth0.rx_packets 461295119435 1444234982
sensu.metric.net.server0.eth0.tx_bytes 1093086493388480 1444234982
sensu.metric.net.server0.eth0.rx_bytes 1015633926034834 1444234982
sensu.metric.net.server0.eth0.tx_errors 0 1444234982
sensu.metric.net.server0.eth0.rx_errors 0 1444234982
sensu.metric.net.server0.eth0.tx_dropped 0 1444234982
sensu.metric.net.server0.eth0.rx_dropped 0 1444234982
```

The templates configuration will be used to parse the graphite metrics to support influxdb/opentsdb tagging store engines.

For more detailed information about templates, please refer to [The graphite Input](https://github.com/influxdata/influxdb/blob/master/services/graphite/README.md).
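As a rough illustration of how the second template above applies (the input line here is hypothetical, not produced by test.sh): with template `stats.* .host.measurement*` and extra tags `region=us-west,agent=sensu`, the leading empty field skips the first path element, `host` becomes a tag, and `measurement*` greedily joins the remaining elements with the configured separator:

```
# hypothetical input graphite line
stats.serverA.memory.used 100 1444234982

# approximate resulting metric in line protocol
memory.used,host=serverA,region=us-west,agent=sensu value=100 1444234982
```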
@@ -2,58 +2,133 @@ package exec

import (
	"bytes"
	"fmt"
	"os/exec"
	"sync"
	"syscall"
	"time"

	"github.com/gonuts/go-shellquote"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/internal"
	"github.com/influxdata/telegraf/plugins/inputs"
	"github.com/influxdata/telegraf/plugins/parsers"
	"github.com/influxdata/telegraf/plugins/parsers/nagios"
)

const sampleConfig = `
  ## Commands array
  commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]

  ## Timeout for each command to complete.
  timeout = "5s"

  ## measurement name suffix (for separating different commands)
  name_suffix = "_mycollector"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"
`

type Exec struct {
	Commands []string
	Command  string
	Timeout  internal.Duration

	parser parsers.Parser

	wg sync.WaitGroup

	runner  Runner
	errChan chan error
}

func NewExec() *Exec {
	return &Exec{
		runner:  CommandRunner{},
		Timeout: internal.Duration{Duration: time.Second * 5},
	}
}

type Runner interface {
	Run(*Exec, string, telegraf.Accumulator) ([]byte, error)
}

type CommandRunner struct{}

func AddNagiosState(exitCode error, acc telegraf.Accumulator) error {
	nagiosState := 0
	if exitCode != nil {
		exiterr, ok := exitCode.(*exec.ExitError)
		if ok {
			status, ok := exiterr.Sys().(syscall.WaitStatus)
			if ok {
				nagiosState = status.ExitStatus()
			} else {
				return fmt.Errorf("exec: unable to get nagios plugin exit code")
			}
		} else {
			return fmt.Errorf("exec: unable to get nagios plugin exit code")
		}
	}
	fields := map[string]interface{}{"state": nagiosState}
	acc.AddFields("nagios_state", fields, nil)
	return nil
}

func (c CommandRunner) Run(
	e *Exec,
	command string,
	acc telegraf.Accumulator,
) ([]byte, error) {
	split_cmd, err := shellquote.Split(command)
	if err != nil || len(split_cmd) == 0 {
		return nil, fmt.Errorf("exec: unable to parse command, %s", err)
	}

	cmd := exec.Command(split_cmd[0], split_cmd[1:]...)

	var out bytes.Buffer
	cmd.Stdout = &out

	if err := internal.RunTimeout(cmd, e.Timeout.Duration); err != nil {
		switch e.parser.(type) {
		case *nagios.NagiosParser:
			AddNagiosState(err, acc)
		default:
			return nil, fmt.Errorf("exec: %s for command '%s'", err, command)
		}
	} else {
		switch e.parser.(type) {
		case *nagios.NagiosParser:
			AddNagiosState(nil, acc)
		}
	}

	return out.Bytes(), nil
}

func (e *Exec) ProcessCommand(command string, acc telegraf.Accumulator) {
	defer e.wg.Done()

	out, err := e.runner.Run(e, command, acc)
	if err != nil {
		e.errChan <- err
		return
	}

	metrics, err := e.parser.Parse(out)
	if err != nil {
		e.errChan <- err
	} else {
		for _, metric := range metrics {
			acc.AddFields(metric.Name(), metric.Fields(), metric.Tags(), metric.Time())
		}
	}
}

@@ -61,34 +136,41 @@ func (e *Exec) SampleConfig() string {
	return sampleConfig
}

func (e *Exec) Description() string {
	return "Read metrics from one or more commands that can output to stdout"
}

func (e *Exec) SetParser(parser parsers.Parser) {
	e.parser = parser
}

func (e *Exec) Gather(acc telegraf.Accumulator) error {
	// Legacy single command support
	if e.Command != "" {
		e.Commands = append(e.Commands, e.Command)
		e.Command = ""
	}

	e.errChan = make(chan error, len(e.Commands))

	e.wg.Add(len(e.Commands))
	for _, command := range e.Commands {
		go e.ProcessCommand(command, acc)
	}
	e.wg.Wait()

	select {
	default:
		close(e.errChan)
		return nil
	case err := <-e.errChan:
		close(e.errChan)
		return err
	}
}

func init() {
	inputs.Add("exec", func() telegraf.Input {
		return NewExec()
	})
}
@@ -4,6 +4,9 @@ import (
	"fmt"
	"testing"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/parsers"

	"github.com/influxdata/telegraf/testutil"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
@@ -31,6 +34,18 @@ const malformedJson = `
	"status": "green",
`

const lineProtocol = "cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1"

const lineProtocolMulti = `
cpu,cpu=cpu0,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu1,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu2,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu3,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu4,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu5,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu6,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
`

type runnerMock struct {
	out []byte
	err error
@@ -43,7 +58,7 @@ func newRunnerMock(out []byte, err error) Runner {
	}
}

func (r runnerMock) Run(e *Exec, command string, acc telegraf.Accumulator) ([]byte, error) {
	if r.err != nil {
		return nil, r.err
	}
@@ -51,9 +66,11 @@ func (r runnerMock) Run(e *Exec) ([]byte, error) {
}

func TestExec(t *testing.T) {
	parser, _ := parsers.NewJSONParser("exec", []string{}, nil)
	e := &Exec{
		runner:   newRunnerMock([]byte(validJson), nil),
		Commands: []string{"testcommand arg1"},
		parser:   parser,
	}

	var acc testutil.Accumulator
@@ -75,9 +92,11 @@ func TestExec(t *testing.T) {
}

func TestExecMalformed(t *testing.T) {
	parser, _ := parsers.NewJSONParser("exec", []string{}, nil)
	e := &Exec{
		runner:   newRunnerMock([]byte(malformedJson), nil),
		Commands: []string{"badcommand arg1"},
		parser:   parser,
	}

	var acc testutil.Accumulator
@@ -87,9 +106,11 @@ func TestExecMalformed(t *testing.T) {
}

func TestCommandError(t *testing.T) {
	parser, _ := parsers.NewJSONParser("exec", []string{}, nil)
	e := &Exec{
		runner:   newRunnerMock(nil, fmt.Errorf("exit status code 1")),
		Commands: []string{"badcommand"},
		parser:   parser,
	}

	var acc testutil.Accumulator
@@ -97,3 +118,54 @@ func TestCommandError(t *testing.T) {
	require.Error(t, err)
	assert.Equal(t, acc.NFields(), 0, "No new points should have been added")
}

func TestLineProtocolParse(t *testing.T) {
	parser, _ := parsers.NewInfluxParser()
	e := &Exec{
		runner:   newRunnerMock([]byte(lineProtocol), nil),
		Commands: []string{"line-protocol"},
		parser:   parser,
	}

	var acc testutil.Accumulator
	err := e.Gather(&acc)
	require.NoError(t, err)

	fields := map[string]interface{}{
		"usage_idle": float64(99),
		"usage_busy": float64(1),
	}
	tags := map[string]string{
		"host":       "foo",
		"datacenter": "us-east",
	}
	acc.AssertContainsTaggedFields(t, "cpu", fields, tags)
}

func TestLineProtocolParseMultiple(t *testing.T) {
	parser, _ := parsers.NewInfluxParser()
	e := &Exec{
		runner:   newRunnerMock([]byte(lineProtocolMulti), nil),
		Commands: []string{"line-protocol"},
		parser:   parser,
	}

	var acc testutil.Accumulator
	err := e.Gather(&acc)
	require.NoError(t, err)

	fields := map[string]interface{}{
		"usage_idle": float64(99),
		"usage_busy": float64(1),
	}
	tags := map[string]string{
		"host":       "foo",
		"datacenter": "us-east",
	}
	cpuTags := []string{"cpu0", "cpu1", "cpu2", "cpu3", "cpu4", "cpu5", "cpu6"}

	for _, cpu := range cpuTags {
		tags["cpu"] = cpu
		acc.AssertContainsTaggedFields(t, "cpu", fields, tags)
	}
}
37 plugins/inputs/filestat/README.md Normal file
@@ -0,0 +1,37 @@
# filestat Input Plugin

The filestat plugin gathers metrics about file existence, size, and other stats.

### Configuration:

```toml
# Read stats about given file(s)
[[inputs.filestat]]
  ## Files to gather stats about.
  ## These accept standard unix glob matching rules, but with the addition of
  ## ** as a "super asterisk". See https://github.com/gobwas/glob.
  files = ["/etc/telegraf/telegraf.conf", "/var/log/**.log"]
  ## If true, read the entire file and calculate an md5 checksum.
  md5 = false
```

### Measurements & Fields:

- filestat
    - exists (int, 0 | 1)
    - size_bytes (int, bytes)
    - md5 (optional, string)

### Tags:

- All measurements have the following tags:
    - file (the path to the file, as specified in the config)

### Example Output:

```
$ telegraf -config /etc/telegraf/telegraf.conf -input-filter filestat -test
* Plugin: filestat, Collection 1
> filestat,file=/tmp/foo/bar,host=tyrion exists=0i 1461203374493128216
> filestat,file=/Users/sparrc/ws/telegraf.conf,host=tyrion exists=1i,size=47894i 1461203374493199335
```
125 plugins/inputs/filestat/filestat.go Normal file
@@ -0,0 +1,125 @@
package filestat

import (
	"crypto/md5"
	"fmt"
	"io"
	"os"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/internal/globpath"
	"github.com/influxdata/telegraf/plugins/inputs"
)

const sampleConfig = `
  ## Files to gather stats about.
  ## These accept standard unix glob matching rules, but with the addition of
  ## ** as a "super asterisk". ie:
  ##   "/var/log/**.log"     -> recursively find all .log files in /var/log
  ##   "/var/log/*/*.log"    -> find all .log files with a parent dir in /var/log
  ##   "/var/log/apache.log" -> just tail the apache log file
  ##
  ## See https://github.com/gobwas/glob for more examples
  ##
  files = ["/var/log/**.log"]
  ## If true, read the entire file and calculate an md5 checksum.
  md5 = false
`

type FileStat struct {
	Md5   bool
	Files []string

	// maps full file paths to globmatch obj
	globs map[string]*globpath.GlobPath
}

func NewFileStat() *FileStat {
	return &FileStat{
		globs: make(map[string]*globpath.GlobPath),
	}
}

func (_ *FileStat) Description() string {
	return "Read stats about given file(s)"
}

func (_ *FileStat) SampleConfig() string { return sampleConfig }

func (f *FileStat) Gather(acc telegraf.Accumulator) error {
	var errS string
	var err error

	for _, filepath := range f.Files {
		// Get the compiled glob object for this filepath
		g, ok := f.globs[filepath]
		if !ok {
			if g, err = globpath.Compile(filepath); err != nil {
				errS += err.Error() + " "
				continue
			}
			f.globs[filepath] = g
		}

		files := g.Match()
		if len(files) == 0 {
			acc.AddFields("filestat",
				map[string]interface{}{
					"exists": int64(0),
				},
				map[string]string{
					"file": filepath,
				})
			continue
		}

		for fileName, fileInfo := range files {
			tags := map[string]string{
				"file": fileName,
			}
			fields := map[string]interface{}{
				"exists":     int64(1),
				"size_bytes": fileInfo.Size(),
			}

			if f.Md5 {
				md5, err := getMd5(fileName)
				if err != nil {
					errS += err.Error() + " "
				} else {
					fields["md5_sum"] = md5
				}
			}

			acc.AddFields("filestat", fields, tags)
		}
	}

	if errS != "" {
		return fmt.Errorf(errS)
	}
	return nil
}

// Read given file and calculate an md5 hash.
func getMd5(file string) (string, error) {
	of, err := os.Open(file)
	if err != nil {
		return "", err
	}
	defer of.Close()

	hash := md5.New()
	_, err = io.Copy(hash, of)
	if err != nil {
		// fatal error
		return "", err
	}
	return fmt.Sprintf("%x", hash.Sum(nil)), nil
}

func init() {
	inputs.Add("filestat", func() telegraf.Input {
		return NewFileStat()
	})
}
180 plugins/inputs/filestat/filestat_test.go Normal file
@@ -0,0 +1,180 @@
package filestat

import (
	"runtime"
	"strings"
	"testing"

	"github.com/influxdata/telegraf/testutil"
	"github.com/stretchr/testify/assert"
)

func TestGatherNoMd5(t *testing.T) {
	dir := getTestdataDir()
	fs := NewFileStat()
	fs.Files = []string{
		dir + "log1.log",
		dir + "log2.log",
		"/non/existant/file",
	}

	acc := testutil.Accumulator{}
	fs.Gather(&acc)

	tags1 := map[string]string{
		"file": dir + "log1.log",
	}
	fields1 := map[string]interface{}{
		"size_bytes": int64(0),
		"exists":     int64(1),
	}
	acc.AssertContainsTaggedFields(t, "filestat", fields1, tags1)

	tags2 := map[string]string{
		"file": dir + "log2.log",
	}
	fields2 := map[string]interface{}{
		"size_bytes": int64(0),
		"exists":     int64(1),
	}
	acc.AssertContainsTaggedFields(t, "filestat", fields2, tags2)

	tags3 := map[string]string{
		"file": "/non/existant/file",
	}
	fields3 := map[string]interface{}{
		"exists": int64(0),
	}
	acc.AssertContainsTaggedFields(t, "filestat", fields3, tags3)
}

func TestGatherExplicitFiles(t *testing.T) {
	dir := getTestdataDir()
	fs := NewFileStat()
	fs.Md5 = true
	fs.Files = []string{
		dir + "log1.log",
		dir + "log2.log",
		"/non/existant/file",
	}

	acc := testutil.Accumulator{}
	fs.Gather(&acc)

	tags1 := map[string]string{
		"file": dir + "log1.log",
	}
	fields1 := map[string]interface{}{
		"size_bytes": int64(0),
		"exists":     int64(1),
		"md5_sum":    "d41d8cd98f00b204e9800998ecf8427e",
	}
	acc.AssertContainsTaggedFields(t, "filestat", fields1, tags1)

	tags2 := map[string]string{
		"file": dir + "log2.log",
	}
	fields2 := map[string]interface{}{
		"size_bytes": int64(0),
		"exists":     int64(1),
		"md5_sum":    "d41d8cd98f00b204e9800998ecf8427e",
	}
	acc.AssertContainsTaggedFields(t, "filestat", fields2, tags2)

	tags3 := map[string]string{
		"file": "/non/existant/file",
	}
	fields3 := map[string]interface{}{
		"exists": int64(0),
	}
	acc.AssertContainsTaggedFields(t, "filestat", fields3, tags3)
}

func TestGatherGlob(t *testing.T) {
	dir := getTestdataDir()
	fs := NewFileStat()
	fs.Md5 = true
	fs.Files = []string{
		dir + "*.log",
	}

	acc := testutil.Accumulator{}
	fs.Gather(&acc)

	tags1 := map[string]string{
		"file": dir + "log1.log",
	}
	fields1 := map[string]interface{}{
		"size_bytes": int64(0),
		"exists":     int64(1),
		"md5_sum":    "d41d8cd98f00b204e9800998ecf8427e",
	}
	acc.AssertContainsTaggedFields(t, "filestat", fields1, tags1)

	tags2 := map[string]string{
		"file": dir + "log2.log",
	}
	fields2 := map[string]interface{}{
		"size_bytes": int64(0),
		"exists":     int64(1),
		"md5_sum":    "d41d8cd98f00b204e9800998ecf8427e",
	}
	acc.AssertContainsTaggedFields(t, "filestat", fields2, tags2)
}

func TestGatherSuperAsterisk(t *testing.T) {
	dir := getTestdataDir()
	fs := NewFileStat()
	fs.Md5 = true
	fs.Files = []string{
		dir + "**",
	}

	acc := testutil.Accumulator{}
	fs.Gather(&acc)

	tags1 := map[string]string{
		"file": dir + "log1.log",
	}
	fields1 := map[string]interface{}{
		"size_bytes": int64(0),
		"exists":     int64(1),
		"md5_sum":    "d41d8cd98f00b204e9800998ecf8427e",
	}
	acc.AssertContainsTaggedFields(t, "filestat", fields1, tags1)

	tags2 := map[string]string{
		"file": dir + "log2.log",
	}
	fields2 := map[string]interface{}{
		"size_bytes": int64(0),
		"exists":     int64(1),
		"md5_sum":    "d41d8cd98f00b204e9800998ecf8427e",
	}
	acc.AssertContainsTaggedFields(t, "filestat", fields2, tags2)

	tags3 := map[string]string{
		"file": dir + "test.conf",
	}
	fields3 := map[string]interface{}{
		"size_bytes": int64(104),
		"exists":     int64(1),
		"md5_sum":    "5a7e9b77fa25e7bb411dbd17cf403c1f",
	}
	acc.AssertContainsTaggedFields(t, "filestat", fields3, tags3)
}

func TestGetMd5(t *testing.T) {
	dir := getTestdataDir()
	md5, err := getMd5(dir + "test.conf")
	assert.NoError(t, err)
	assert.Equal(t, "5a7e9b77fa25e7bb411dbd17cf403c1f", md5)

	md5, err = getMd5("/tmp/foo/bar/fooooo")
	assert.Error(t, err)
}

func getTestdataDir() string {
	_, filename, _, _ := runtime.Caller(1)
	return strings.Replace(filename, "filestat_test.go", "testdata/", 1)
}
0 plugins/inputs/filestat/testdata/log1.log vendored Normal file
0 plugins/inputs/filestat/testdata/log2.log vendored Normal file
5 plugins/inputs/filestat/testdata/test.conf vendored Normal file
@@ -0,0 +1,5 @@
# this is a fake testing config file
# for testing the filestat plugin

option1 = "foo"
option2 = "bar"
@@ -9,11 +9,12 @@ import (
 	"sync"
 
 	"github.com/gorilla/mux"
+	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/plugins/inputs"
 )
 
 func init() {
-	inputs.Add("github_webhooks", func() inputs.Input { return &GithubWebhooks{} })
+	inputs.Add("github_webhooks", func() telegraf.Input { return &GithubWebhooks{} })
 }
 
 type GithubWebhooks struct {
@@ -30,7 +31,7 @@ func NewGithubWebhooks() *GithubWebhooks {
 
 func (gh *GithubWebhooks) SampleConfig() string {
 	return `
-  # Address and port to host Webhook listener on
+  ## Address and port to host Webhook listener on
   service_address = ":1618"
 `
 }
@@ -40,11 +41,11 @@ func (gh *GithubWebhooks) Description() string {
 }
 
 // Writes the points from <-gh.in to the Accumulator
-func (gh *GithubWebhooks) Gather(acc inputs.Accumulator) error {
+func (gh *GithubWebhooks) Gather(acc telegraf.Accumulator) error {
 	gh.Lock()
 	defer gh.Unlock()
 	for _, event := range gh.events {
-		p := event.NewPoint()
+		p := event.NewMetric()
 		acc.AddFields("github_webhooks", p.Fields(), p.Tags(), p.Time())
 	}
 	gh.events = make([]Event, 0)
@@ -60,7 +61,7 @@ func (gh *GithubWebhooks) Listen() {
 	}
 }
 
-func (gh *GithubWebhooks) Start() error {
+func (gh *GithubWebhooks) Start(_ telegraf.Accumulator) error {
 	go gh.Listen()
 	log.Printf("Started the github_webhooks service on %s\n", gh.ServiceAddress)
 	return nil
@@ -72,14 +73,17 @@ func (gh *GithubWebhooks) Stop() {
 
 // Handles the / route
 func (gh *GithubWebhooks) eventHandler(w http.ResponseWriter, r *http.Request) {
+	defer r.Body.Close()
 	eventType := r.Header["X-Github-Event"][0]
 	data, err := ioutil.ReadAll(r.Body)
 	if err != nil {
 		w.WriteHeader(http.StatusBadRequest)
+		return
 	}
 	e, err := NewEvent(data, eventType)
 	if err != nil {
 		w.WriteHeader(http.StatusBadRequest)
+		return
 	}
 	gh.Lock()
 	gh.events = append(gh.events, e)
@@ -5,13 +5,13 @@ import (
 	"log"
 	"time"
 
-	"github.com/influxdata/influxdb/client/v2"
+	"github.com/influxdata/telegraf"
 )
 
 const meas = "github_webhooks"
 
 type Event interface {
-	NewPoint() *client.Point
+	NewMetric() telegraf.Metric
 }
 
 type Repository struct {
@@ -90,7 +90,7 @@ type CommitCommentEvent struct {
 	Sender Sender `json:"sender"`
 }
 
-func (s CommitCommentEvent) NewPoint() *client.Point {
+func (s CommitCommentEvent) NewMetric() telegraf.Metric {
 	event := "commit_comment"
 	t := map[string]string{
 		"event": event,
@@ -106,11 +106,11 @@ func (s CommitCommentEvent) NewPoint() *client.Point {
 		"commit": s.Comment.Commit,
 		"comment": s.Comment.Body,
 	}
-	p, err := client.NewPoint(meas, t, f, time.Now())
+	m, err := telegraf.NewMetric(meas, t, f, time.Now())
 	if err != nil {
 		log.Fatalf("Failed to create %v event", event)
 	}
-	return p
+	return m
 }
 
 type CreateEvent struct {
@@ -120,7 +120,7 @@ type CreateEvent struct {
 	Sender Sender `json:"sender"`
 }
 
-func (s CreateEvent) NewPoint() *client.Point {
+func (s CreateEvent) NewMetric() telegraf.Metric {
 	event := "create"
 	t := map[string]string{
 		"event": event,
@@ -136,11 +136,11 @@ func (s CreateEvent) NewPoint() *client.Point {
 		"ref": s.Ref,
 		"refType": s.RefType,
 	}
-	p, err := client.NewPoint(meas, t, f, time.Now())
+	m, err := telegraf.NewMetric(meas, t, f, time.Now())
 	if err != nil {
 		log.Fatalf("Failed to create %v event", event)
 	}
-	return p
+	return m
 }
 
 type DeleteEvent struct {
@@ -150,7 +150,7 @@ type DeleteEvent struct {
 	Sender Sender `json:"sender"`
 }
 
-func (s DeleteEvent) NewPoint() *client.Point {
+func (s DeleteEvent) NewMetric() telegraf.Metric {
 	event := "delete"
 	t := map[string]string{
 		"event": event,
@@ -166,11 +166,11 @@ func (s DeleteEvent) NewPoint() *client.Point {
 		"ref": s.Ref,
 		"refType": s.RefType,
 	}
-	p, err := client.NewPoint(meas, t, f, time.Now())
+	m, err := telegraf.NewMetric(meas, t, f, time.Now())
 	if err != nil {
 		log.Fatalf("Failed to create %v event", event)
 	}
-	return p
+	return m
 }
 
 type DeploymentEvent struct {
@@ -179,7 +179,7 @@ type DeploymentEvent struct {
 	Sender Sender `json:"sender"`
 }
 
-func (s DeploymentEvent) NewPoint() *client.Point {
+func (s DeploymentEvent) NewMetric() telegraf.Metric {
 	event := "deployment"
 	t := map[string]string{
 		"event": event,
@@ -197,11 +197,11 @@ func (s DeploymentEvent) NewPoint() *client.Point {
 		"environment": s.Deployment.Environment,
 		"description": s.Deployment.Description,
 	}
-	p, err := client.NewPoint(meas, t, f, time.Now())
+	m, err := telegraf.NewMetric(meas, t, f, time.Now())
 	if err != nil {
 		log.Fatalf("Failed to create %v event", event)
 	}
-	return p
+	return m
 }
 
 type DeploymentStatusEvent struct {
@@ -211,7 +211,7 @@ type DeploymentStatusEvent struct {
 	Sender Sender `json:"sender"`
 }
 
-func (s DeploymentStatusEvent) NewPoint() *client.Point {
+func (s DeploymentStatusEvent) NewMetric() telegraf.Metric {
 	event := "delete"
 	t := map[string]string{
 		"event": event,
@@ -231,11 +231,11 @@ func (s DeploymentStatusEvent) NewPoint() *client.Point {
 		"depState": s.DeploymentStatus.State,
 		"depDescription": s.DeploymentStatus.Description,
 	}
-	p, err := client.NewPoint(meas, t, f, time.Now())
+	m, err := telegraf.NewMetric(meas, t, f, time.Now())
 	if err != nil {
 		log.Fatalf("Failed to create %v event", event)
 	}
-	return p
+	return m
 }
 
 type ForkEvent struct {
@@ -244,7 +244,7 @@ type ForkEvent struct {
 	Sender Sender `json:"sender"`
 }
 
-func (s ForkEvent) NewPoint() *client.Point {
+func (s ForkEvent) NewMetric() telegraf.Metric {
 	event := "fork"
 	t := map[string]string{
 		"event": event,
@@ -259,11 +259,11 @@ func (s ForkEvent) NewPoint() *client.Point {
 		"issues": s.Repository.Issues,
 		"fork": s.Forkee.Repository,
 	}
-	p, err := client.NewPoint(meas, t, f, time.Now())
+	m, err := telegraf.NewMetric(meas, t, f, time.Now())
 	if err != nil {
 		log.Fatalf("Failed to create %v event", event)
 	}
-	return p
+	return m
 }
 
 type GollumEvent struct {
@@ -273,7 +273,7 @@ type GollumEvent struct {
 }
 
 // REVIEW: Going to be lazy and not deal with the pages.
-func (s GollumEvent) NewPoint() *client.Point {
+func (s GollumEvent) NewMetric() telegraf.Metric {
 	event := "gollum"
 	t := map[string]string{
 		"event": event,
@@ -287,11 +287,11 @@ func (s GollumEvent) NewPoint() *client.Point {
 		"forks": s.Repository.Forks,
 		"issues": s.Repository.Issues,
 	}
-	p, err := client.NewPoint(meas, t, f, time.Now())
+	m, err := telegraf.NewMetric(meas, t, f, time.Now())
 	if err != nil {
 		log.Fatalf("Failed to create %v event", event)
 	}
-	return p
+	return m
 }
 
 type IssueCommentEvent struct {
@@ -301,7 +301,7 @@ type IssueCommentEvent struct {
 	Sender Sender `json:"sender"`
 }
 
-func (s IssueCommentEvent) NewPoint() *client.Point {
+func (s IssueCommentEvent) NewMetric() telegraf.Metric {
 	event := "issue_comment"
 	t := map[string]string{
 		"event": event,
@@ -319,11 +319,11 @@ func (s IssueCommentEvent) NewPoint() *client.Point {
 		"comments": s.Issue.Comments,
 		"body": s.Comment.Body,
 	}
-	p, err := client.NewPoint(meas, t, f, time.Now())
+	m, err := telegraf.NewMetric(meas, t, f, time.Now())
 	if err != nil {
 		log.Fatalf("Failed to create %v event", event)
 	}
-	return p
+	return m
 }
 
 type IssuesEvent struct {
@@ -333,7 +333,7 @@ type IssuesEvent struct {
 	Sender Sender `json:"sender"`
 }
 
-func (s IssuesEvent) NewPoint() *client.Point {
+func (s IssuesEvent) NewMetric() telegraf.Metric {
 	event := "issue"
 	t := map[string]string{
 		"event": event,
@@ -351,11 +351,11 @@ func (s IssuesEvent) NewPoint() *client.Point {
 		"title": s.Issue.Title,
 		"comments": s.Issue.Comments,
 	}
-	p, err := client.NewPoint(meas, t, f, time.Now())
+	m, err := telegraf.NewMetric(meas, t, f, time.Now())
 	if err != nil {
 		log.Fatalf("Failed to create %v event", event)
 	}
-	return p
+	return m
 }
 
 type MemberEvent struct {
@@ -364,7 +364,7 @@ type MemberEvent struct {
 	Sender Sender `json:"sender"`
 }
 
-func (s MemberEvent) NewPoint() *client.Point {
+func (s MemberEvent) NewMetric() telegraf.Metric {
 	event := "member"
 	t := map[string]string{
 		"event": event,
@@ -380,11 +380,11 @@ func (s MemberEvent) NewPoint() *client.Point {
 		"newMember": s.Member.User,
 		"newMemberStatus": s.Member.Admin,
 	}
-	p, err := client.NewPoint(meas, t, f, time.Now())
+	m, err := telegraf.NewMetric(meas, t, f, time.Now())
 	if err != nil {
 		log.Fatalf("Failed to create %v event", event)
 	}
-	return p
+	return m
 }
 
 type MembershipEvent struct {
@@ -394,7 +394,7 @@ type MembershipEvent struct {
 	Team Team `json:"team"`
 }
 
-func (s MembershipEvent) NewPoint() *client.Point {
+func (s MembershipEvent) NewMetric() telegraf.Metric {
 	event := "membership"
 	t := map[string]string{
 		"event": event,
@@ -406,11 +406,11 @@ func (s MembershipEvent) NewPoint() *client.Point {
 		"newMember": s.Member.User,
 		"newMemberStatus": s.Member.Admin,
 	}
-	p, err := client.NewPoint(meas, t, f, time.Now())
+	m, err := telegraf.NewMetric(meas, t, f, time.Now())
 	if err != nil {
 		log.Fatalf("Failed to create %v event", event)
 	}
-	return p
+	return m
 }
 
 type PageBuildEvent struct {
@@ -418,7 +418,7 @@ type PageBuildEvent struct {
 	Sender Sender `json:"sender"`
 }
 
-func (s PageBuildEvent) NewPoint() *client.Point {
+func (s PageBuildEvent) NewMetric() telegraf.Metric {
 	event := "page_build"
 	t := map[string]string{
 		"event": event,
@@ -432,11 +432,11 @@ func (s PageBuildEvent) NewPoint() *client.Point {
 		"forks": s.Repository.Forks,
 		"issues": s.Repository.Issues,
 	}
-	p, err := client.NewPoint(meas, t, f, time.Now())
+	m, err := telegraf.NewMetric(meas, t, f, time.Now())
 	if err != nil {
 		log.Fatalf("Failed to create %v event", event)
 	}
-	return p
+	return m
 }
 
 type PublicEvent struct {
@@ -444,7 +444,7 @@ type PublicEvent struct {
 	Sender Sender `json:"sender"`
 }
 
-func (s PublicEvent) NewPoint() *client.Point {
+func (s PublicEvent) NewMetric() telegraf.Metric {
 	event := "public"
 	t := map[string]string{
 		"event": event,
@@ -458,11 +458,11 @@ func (s PublicEvent) NewPoint() *client.Point {
 		"forks": s.Repository.Forks,
 		"issues": s.Repository.Issues,
 	}
-	p, err := client.NewPoint(meas, t, f, time.Now())
+	m, err := telegraf.NewMetric(meas, t, f, time.Now())
 	if err != nil {
 		log.Fatalf("Failed to create %v event", event)
 	}
-	return p
+	return m
 }
 
 type PullRequestEvent struct {
@@ -472,7 +472,7 @@ type PullRequestEvent struct {
 	Sender Sender `json:"sender"`
 }
 
-func (s PullRequestEvent) NewPoint() *client.Point {
+func (s PullRequestEvent) NewMetric() telegraf.Metric {
 	event := "pull_request"
 	t := map[string]string{
 		"event": event,
@@ -495,11 +495,11 @@ func (s PullRequestEvent) NewPoint() *client.Point {
 		"deletions": s.PullRequest.Deletions,
 		"changedFiles": s.PullRequest.ChangedFiles,
 	}
-	p, err := client.NewPoint(meas, t, f, time.Now())
+	m, err := telegraf.NewMetric(meas, t, f, time.Now())
 	if err != nil {
 		log.Fatalf("Failed to create %v event", event)
 	}
-	return p
+	return m
 }
 
 type PullRequestReviewCommentEvent struct {
@@ -509,7 +509,7 @@ type PullRequestReviewCommentEvent struct {
 	Sender Sender `json:"sender"`
 }
 
-func (s PullRequestReviewCommentEvent) NewPoint() *client.Point {
+func (s PullRequestReviewCommentEvent) NewMetric() telegraf.Metric {
 	event := "pull_request_review_comment"
 	t := map[string]string{
 		"event": event,
@@ -533,11 +533,11 @@ func (s PullRequestReviewCommentEvent) NewPoint() *client.Point {
 		"commentFile": s.Comment.File,
 		"comment": s.Comment.Comment,
 	}
-	p, err := client.NewPoint(meas, t, f, time.Now())
+	m, err := telegraf.NewMetric(meas, t, f, time.Now())
 	if err != nil {
 		log.Fatalf("Failed to create %v event", event)
 	}
-	return p
+	return m
 }
 
 type PushEvent struct {
@@ -548,7 +548,7 @@ type PushEvent struct {
 	Sender Sender `json:"sender"`
 }
 
-func (s PushEvent) NewPoint() *client.Point {
+func (s PushEvent) NewMetric() telegraf.Metric {
 	event := "push"
 	t := map[string]string{
 		"event": event,
@@ -565,11 +565,11 @@ func (s PushEvent) NewPoint() *client.Point {
 		"before": s.Before,
 		"after": s.After,
 	}
-	p, err := client.NewPoint(meas, t, f, time.Now())
+	m, err := telegraf.NewMetric(meas, t, f, time.Now())
 	if err != nil {
 		log.Fatalf("Failed to create %v event", event)
 	}
-	return p
+	return m
 }
 
 type ReleaseEvent struct {
@@ -578,7 +578,7 @@ type ReleaseEvent struct {
 	Sender Sender `json:"sender"`
 }
 
-func (s ReleaseEvent) NewPoint() *client.Point {
+func (s ReleaseEvent) NewMetric() telegraf.Metric {
 	event := "release"
 	t := map[string]string{
 		"event": event,
@@ -593,11 +593,11 @@ func (s ReleaseEvent) NewPoint() *client.Point {
 		"issues": s.Repository.Issues,
 		"tagName": s.Release.TagName,
 	}
-	p, err := client.NewPoint(meas, t, f, time.Now())
+	m, err := telegraf.NewMetric(meas, t, f, time.Now())
 	if err != nil {
 		log.Fatalf("Failed to create %v event", event)
 	}
-	return p
+	return m
 }
 
 type RepositoryEvent struct {
@@ -605,7 +605,7 @@ type RepositoryEvent struct {
 	Sender Sender `json:"sender"`
 }
 
-func (s RepositoryEvent) NewPoint() *client.Point {
+func (s RepositoryEvent) NewMetric() telegraf.Metric {
 	event := "repository"
 	t := map[string]string{
 		"event": event,
@@ -619,11 +619,11 @@ func (s RepositoryEvent) NewPoint() *client.Point {
 		"forks": s.Repository.Forks,
 		"issues": s.Repository.Issues,
 	}
-	p, err := client.NewPoint(meas, t, f, time.Now())
+	m, err := telegraf.NewMetric(meas, t, f, time.Now())
 	if err != nil {
 		log.Fatalf("Failed to create %v event", event)
 	}
-	return p
+	return m
 }
 
 type StatusEvent struct {
@@ -633,7 +633,7 @@ type StatusEvent struct {
 	Sender Sender `json:"sender"`
 }
 
-func (s StatusEvent) NewPoint() *client.Point {
+func (s StatusEvent) NewMetric() telegraf.Metric {
 	event := "status"
 	t := map[string]string{
 		"event": event,
@@ -649,11 +649,11 @@ func (s StatusEvent) NewPoint() *client.Point {
 		"commit": s.Commit,
 		"state": s.State,
 	}
-	p, err := client.NewPoint(meas, t, f, time.Now())
+	m, err := telegraf.NewMetric(meas, t, f, time.Now())
 	if err != nil {
 		log.Fatalf("Failed to create %v event", event)
 	}
-	return p
+	return m
 }
 
 type TeamAddEvent struct {
@@ -662,7 +662,7 @@ type TeamAddEvent struct {
 	Sender Sender `json:"sender"`
 }
 
-func (s TeamAddEvent) NewPoint() *client.Point {
+func (s TeamAddEvent) NewMetric() telegraf.Metric {
 	event := "team_add"
 	t := map[string]string{
 		"event": event,
@@ -677,11 +677,11 @@ func (s TeamAddEvent) NewPoint() *client.Point {
 		"issues": s.Repository.Issues,
 		"teamName": s.Team.Name,
 	}
-	p, err := client.NewPoint(meas, t, f, time.Now())
+	m, err := telegraf.NewMetric(meas, t, f, time.Now())
 	if err != nil {
 		log.Fatalf("Failed to create %v event", event)
 	}
-	return p
+	return m
 }
 
 type WatchEvent struct {
@@ -689,7 +689,7 @@ type WatchEvent struct {
 	Sender Sender `json:"sender"`
 }
 
-func (s WatchEvent) NewPoint() *client.Point {
+func (s WatchEvent) NewMetric() telegraf.Metric {
 	event := "delete"
 	t := map[string]string{
 		"event": event,
@@ -703,9 +703,9 @@ func (s WatchEvent) NewPoint() *client.Point {
 		"forks": s.Repository.Forks,
 		"issues": s.Repository.Issues,
 	}
-	p, err := client.NewPoint(meas, t, f, time.Now())
+	m, err := telegraf.NewMetric(meas, t, f, time.Now())
 	if err != nil {
 		log.Fatalf("Failed to create %v event", event)
 	}
-	return p
+	return m
 }
@@ -3,11 +3,14 @@ package haproxy
 import (
 	"encoding/csv"
 	"fmt"
+	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/plugins/inputs"
 	"io"
+	"net"
 	"net/http"
 	"net/url"
 	"strconv"
+	"strings"
 	"sync"
 	"time"
 )
@@ -46,7 +49,7 @@ const (
 	HF_THROTTLE = 29 //29. throttle [...S]: current throttle percentage for the server, when slowstart is active, or no value if not in slowstart.
 	HF_LBTOT = 30 //30. lbtot [..BS]: total number of times a server was selected, either for new sessions, or when re-dispatching. The server counter is the number of times that server was selected.
 	HF_TRACKED = 31 //31. tracked [...S]: id of proxy/server if tracking is enabled.
 	HF_TYPE = 32 //32. type [LFBS]: (0 = frontend, 1 = backend, 2 = server, 3 = socket/listener)
 	HF_RATE = 33 //33. rate [.FBS]: number of sessions per second over last elapsed second
 	HF_RATE_LIM = 34 //34. rate_lim [.F..]: configured limit on new sessions per second
 	HF_RATE_MAX = 35 //35. rate_max [.FBS]: max number of new sessions per second
@@ -85,13 +88,13 @@ type haproxy struct {
 }
 
 var sampleConfig = `
-  # An array of address to gather stats about. Specify an ip on hostname
+  ## An array of address to gather stats about. Specify an ip on hostname
-  # with optional port. ie localhost, 10.10.3.33:1936, etc.
+  ## with optional port. ie localhost, 10.10.3.33:1936, etc.
-  #
+
-  # If no servers are specified, then default to 127.0.0.1:1936
+  ## If no servers are specified, then default to 127.0.0.1:1936
   servers = ["http://myhaproxy.com:1936", "http://anotherhaproxy.com:1936"]
-  # Or you can also use local socket(not work yet)
+  ## Or you can also use local socket
-  # servers = ["socket://run/haproxy/admin.sock"]
+  ## servers = ["socket:/run/haproxy/admin.sock"]
 `
 
 func (r *haproxy) SampleConfig() string {
@@ -104,7 +107,7 @@ func (r *haproxy) Description() string {
 
 // Reads stats from all configured servers accumulates stats.
 // Returns one of the errors encountered while gather stats (if any).
-func (g *haproxy) Gather(acc inputs.Accumulator) error {
+func (g *haproxy) Gather(acc telegraf.Accumulator) error {
 	if len(g.Servers) == 0 {
 		return g.gatherServer("http://127.0.0.1:1936", acc)
 	}
@@ -126,10 +129,42 @@ func (g *haproxy) Gather(acc inputs.Accumulator) error {
 	return outerr
 }
 
-func (g *haproxy) gatherServer(addr string, acc inputs.Accumulator) error {
-	if g.client == nil {
-		client := &http.Client{}
+func (g *haproxy) gatherServerSocket(addr string, acc telegraf.Accumulator) error {
+	var socketPath string
+	socketAddr := strings.Split(addr, ":")
+
+	if len(socketAddr) >= 2 {
+		socketPath = socketAddr[1]
+	} else {
+		socketPath = socketAddr[0]
+	}
+
+	c, err := net.Dial("unix", socketPath)
+
+	if err != nil {
+		return fmt.Errorf("Could not connect to socket '%s': %s", addr, err)
+	}
+
+	_, errw := c.Write([]byte("show stat\n"))
+
+	if errw != nil {
+		return fmt.Errorf("Could not write to socket '%s': %s", addr, errw)
+	}
+
+	return importCsvResult(c, acc, socketPath)
+}
+
+func (g *haproxy) gatherServer(addr string, acc telegraf.Accumulator) error {
+	if !strings.HasPrefix(addr, "http") {
+		return g.gatherServerSocket(addr, acc)
+	}
+
+	if g.client == nil {
+		tr := &http.Transport{ResponseHeaderTimeout: time.Duration(3 * time.Second)}
+		client := &http.Client{
+			Transport: tr,
+			Timeout:   time.Duration(4 * time.Second),
+		}
 		g.client = client
 	}
 
@@ -156,7 +191,7 @@ func (g *haproxy) gatherServer(addr string, acc inputs.Accumulator) error {
 	return importCsvResult(res.Body, acc, u.Host)
 }
 
-func importCsvResult(r io.Reader, acc inputs.Accumulator, host string) error {
+func importCsvResult(r io.Reader, acc telegraf.Accumulator, host string) error {
 	csv := csv.NewReader(r)
 	result, err := csv.ReadAll()
 	now := time.Now()
@@ -358,7 +393,7 @@ func importCsvResult(r io.Reader, acc inputs.Accumulator, host string) error {
 }
 
 func init() {
-	inputs.Add("haproxy", func() inputs.Input {
+	inputs.Add("haproxy", func() telegraf.Input {
		return &haproxy{}
 	})
 }
@@ -1,17 +1,42 @@
 package haproxy

 import (
+	"crypto/rand"
+	"encoding/binary"
 	"fmt"
+	"net"
+	"net/http"
+	"net/http/httptest"
 	"strings"
 	"testing"

 	"github.com/influxdata/telegraf/testutil"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
-	"net/http"
-	"net/http/httptest"
 )

+type statServer struct{}
+
+func (s statServer) serverSocket(l net.Listener) {
+	for {
+		conn, err := l.Accept()
+		if err != nil {
+			return
+		}
+
+		go func(c net.Conn) {
+			buf := make([]byte, 1024)
+			n, _ := c.Read(buf)
+
+			data := buf[:n]
+			if string(data) == "show stat\n" {
+				c.Write([]byte(csvOutputSample))
+				c.Close()
+			}
+		}(conn)
+	}
+}
+
 func TestHaproxyGeneratesMetricsWithAuthentication(t *testing.T) {
 	//We create a fake server to return test data
 	ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {

@@ -146,6 +171,69 @@ func TestHaproxyGeneratesMetricsWithoutAuthentication(t *testing.T) {
 	acc.AssertContainsTaggedFields(t, "haproxy", fields, tags)
 }

+func TestHaproxyGeneratesMetricsUsingSocket(t *testing.T) {
+	var randomNumber int64
+	binary.Read(rand.Reader, binary.LittleEndian, &randomNumber)
+	sock, err := net.Listen("unix", fmt.Sprintf("/tmp/test-haproxy%d.sock", randomNumber))
+	if err != nil {
+		t.Fatal("Cannot initialize socket ")
+	}
+
+	defer sock.Close()
+
+	s := statServer{}
+	go s.serverSocket(sock)
+
+	r := &haproxy{
+		Servers: []string{sock.Addr().String()},
+	}
+
+	var acc testutil.Accumulator
+
+	err = r.Gather(&acc)
+	require.NoError(t, err)
+
+	tags := map[string]string{
+		"proxy":  "be_app",
+		"server": sock.Addr().String(),
+		"sv":     "host0",
+	}
+
+	fields := map[string]interface{}{
+		"active_servers":    uint64(1),
+		"backup_servers":    uint64(0),
+		"bin":               uint64(510913516),
+		"bout":              uint64(2193856571),
+		"check_duration":    uint64(10),
+		"cli_abort":         uint64(73),
+		"ctime":             uint64(2),
+		"downtime":          uint64(0),
+		"dresp":             uint64(0),
+		"econ":              uint64(0),
+		"eresp":             uint64(1),
+		"http_response.1xx": uint64(0),
+		"http_response.2xx": uint64(119534),
+		"http_response.3xx": uint64(48051),
+		"http_response.4xx": uint64(2345),
+		"http_response.5xx": uint64(1056),
+		"lbtot":             uint64(171013),
+		"qcur":              uint64(0),
+		"qmax":              uint64(0),
+		"qtime":             uint64(0),
+		"rate":              uint64(3),
+		"rate_max":          uint64(12),
+		"rtime":             uint64(312),
+		"scur":              uint64(1),
+		"smax":              uint64(32),
+		"srv_abort":         uint64(1),
+		"stot":              uint64(171014),
+		"ttime":             uint64(2341),
+		"wredis":            uint64(0),
+		"wretr":             uint64(1),
+	}
+	acc.AssertContainsTaggedFields(t, "haproxy", fields, tags)
+}
+
 //When not passing server config, we default to localhost
 //We just want to make sure we did request stat from localhost
 func TestHaproxyDefaultGetFromLocalhost(t *testing.T) {
plugins/inputs/http_response/README.md (new file, 44 lines)
@@ -0,0 +1,44 @@
# Example Input Plugin

This input plugin will test HTTP/HTTPS connections.

### Configuration:

```
# List of UDP/TCP connections you want to check
[[inputs.http_response]]
  ## Server address (default http://localhost)
  address = "http://github.com"
  ## Set response_timeout (default 5 seconds)
  response_timeout = 5
  ## HTTP Request Method
  method = "GET"
  ## HTTP Request Headers
  [inputs.http_response.headers]
    Host = github.com
  ## Whether to follow redirects from the server (defaults to false)
  follow_redirects = true
  ## Optional HTTP Request Body
  body = '''
  {'fake':'data'}
  '''
```

### Measurements & Fields:

- http_response
    - response_time (float, seconds)
    - http_response_code (int) #The code received

### Tags:

- All measurements have the following tags:
    - server
    - method

### Example Output:

```
$ ./telegraf -config telegraf.conf -input-filter http_response -test
http_response,method=GET,server=http://www.github.com http_response_code=200i,response_time=6.223266528 1459419354977857955
```
plugins/inputs/http_response/http_response.go (new file, 154 lines)
@@ -0,0 +1,154 @@
package http_response

import (
	"errors"
	"io"
	"net/http"
	"net/url"
	"strings"
	"time"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/inputs"
)

// HTTPResponse struct
type HTTPResponse struct {
	Address         string
	Body            string
	Method          string
	ResponseTimeout int
	Headers         map[string]string
	FollowRedirects bool
}

// Description returns the plugin Description
func (h *HTTPResponse) Description() string {
	return "HTTP/HTTPS request given an address a method and a timeout"
}

var sampleConfig = `
  ## Server address (default http://localhost)
  address = "http://github.com"
  ## Set response_timeout (default 5 seconds)
  response_timeout = 5
  ## HTTP Request Method
  method = "GET"
  ## Whether to follow redirects from the server (defaults to false)
  follow_redirects = true
  ## HTTP Request Headers (all values must be strings)
  # [inputs.http_response.headers]
  #   Host = "github.com"
  ## Optional HTTP Request Body
  # body = '''
  # {'fake':'data'}
  # '''
`

// SampleConfig returns the plugin SampleConfig
func (h *HTTPResponse) SampleConfig() string {
	return sampleConfig
}

// ErrRedirectAttempted indicates that a redirect occurred
var ErrRedirectAttempted = errors.New("redirect")

// CreateHttpClient creates an http client which will timeout at the specified
// timeout period and can follow redirects if specified
func CreateHttpClient(followRedirects bool, ResponseTimeout time.Duration) *http.Client {
	client := &http.Client{
		Timeout: time.Second * ResponseTimeout,
	}

	if followRedirects == false {
		client.CheckRedirect = func(req *http.Request, via []*http.Request) error {
			return ErrRedirectAttempted
		}
	}
	return client
}
// CreateHeaders takes a map of header strings and puts them
// into a http.Header Object
func CreateHeaders(headers map[string]string) http.Header {
	httpHeaders := make(http.Header)
	for key := range headers {
		httpHeaders.Add(key, headers[key])
	}
	return httpHeaders
}

// HTTPGather gathers all fields and returns any errors it encounters
func (h *HTTPResponse) HTTPGather() (map[string]interface{}, error) {
	// Prepare fields
	fields := make(map[string]interface{})

	client := CreateHttpClient(h.FollowRedirects, time.Duration(h.ResponseTimeout))

	var body io.Reader
	if h.Body != "" {
		body = strings.NewReader(h.Body)
	}
	request, err := http.NewRequest(h.Method, h.Address, body)
	if err != nil {
		return nil, err
	}
	request.Header = CreateHeaders(h.Headers)

	// Start Timer
	start := time.Now()
	resp, err := client.Do(request)
	if err != nil {
		if h.FollowRedirects {
			return nil, err
		}
		if urlError, ok := err.(*url.Error); ok &&
			urlError.Err == ErrRedirectAttempted {
			err = nil
		} else {
			return nil, err
		}
	}
	fields["response_time"] = time.Since(start).Seconds()
	fields["http_response_code"] = resp.StatusCode
	return fields, nil
}

// Gather gets all metric fields and tags and returns any errors it encounters
func (h *HTTPResponse) Gather(acc telegraf.Accumulator) error {
	// Set default values
	if h.ResponseTimeout < 1 {
		h.ResponseTimeout = 5
	}
	// Check send and expected string
	if h.Method == "" {
		h.Method = "GET"
	}
	if h.Address == "" {
		h.Address = "http://localhost"
	}
	addr, err := url.Parse(h.Address)
	if err != nil {
		return err
	}
	if addr.Scheme != "http" && addr.Scheme != "https" {
		return errors.New("Only http and https are supported")
	}
	// Prepare data
	tags := map[string]string{"server": h.Address, "method": h.Method}
	var fields map[string]interface{}
	// Gather data
	fields, err = h.HTTPGather()
	if err != nil {
		return err
	}
	// Add metrics
	acc.AddFields("http_response", fields, tags)
	return nil
}

func init() {
	inputs.Add("http_response", func() telegraf.Input {
		return &HTTPResponse{}
	})
}
plugins/inputs/http_response/http_response_test.go (new file, 241 lines)
@@ -0,0 +1,241 @@
package http_response

import (
	"fmt"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
	"io/ioutil"
	"net/http"
	"net/http/httptest"
	"testing"
	"time"
)

func TestCreateHeaders(t *testing.T) {
	fakeHeaders := map[string]string{
		"Accept":        "text/plain",
		"Content-Type":  "application/json",
		"Cache-Control": "no-cache",
	}
	headers := CreateHeaders(fakeHeaders)
	testHeaders := make(http.Header)
	testHeaders.Add("Accept", "text/plain")
	testHeaders.Add("Content-Type", "application/json")
	testHeaders.Add("Cache-Control", "no-cache")
	assert.Equal(t, testHeaders, headers)
}

func setUpTestMux() http.Handler {
	mux := http.NewServeMux()
	mux.HandleFunc("/redirect", func(w http.ResponseWriter, req *http.Request) {
		http.Redirect(w, req, "/good", http.StatusMovedPermanently)
	})
	mux.HandleFunc("/good", func(w http.ResponseWriter, req *http.Request) {
		fmt.Fprintf(w, "hit the good page!")
	})
	mux.HandleFunc("/badredirect", func(w http.ResponseWriter, req *http.Request) {
		http.Redirect(w, req, "/badredirect", http.StatusMovedPermanently)
	})
	mux.HandleFunc("/mustbepostmethod", func(w http.ResponseWriter, req *http.Request) {
		if req.Method != "POST" {
			http.Error(w, "method wasn't post", http.StatusMethodNotAllowed)
			return
		}
		fmt.Fprintf(w, "used post correctly!")
	})
	mux.HandleFunc("/musthaveabody", func(w http.ResponseWriter, req *http.Request) {
		body, err := ioutil.ReadAll(req.Body)
		req.Body.Close()
		if err != nil {
			http.Error(w, "couldn't read request body", http.StatusBadRequest)
			return
		}
		if string(body) == "" {
			http.Error(w, "body was empty", http.StatusBadRequest)
			return
		}
		fmt.Fprintf(w, "sent a body!")
	})
	mux.HandleFunc("/twosecondnap", func(w http.ResponseWriter, req *http.Request) {
		time.Sleep(time.Second * 2)
		return
	})
	return mux
}

func TestFields(t *testing.T) {
	mux := setUpTestMux()
	ts := httptest.NewServer(mux)
	defer ts.Close()

	h := &HTTPResponse{
		Address:         ts.URL + "/good",
		Body:            "{ 'test': 'data'}",
		Method:          "GET",
		ResponseTimeout: 20,
		Headers: map[string]string{
			"Content-Type": "application/json",
		},
		FollowRedirects: true,
	}
	fields, err := h.HTTPGather()
	require.NoError(t, err)
	assert.NotEmpty(t, fields)
	if assert.NotNil(t, fields["http_response_code"]) {
		assert.Equal(t, http.StatusOK, fields["http_response_code"])
	}
	assert.NotNil(t, fields["response_time"])
}

func TestRedirects(t *testing.T) {
	mux := setUpTestMux()
	ts := httptest.NewServer(mux)
	defer ts.Close()

	h := &HTTPResponse{
		Address:         ts.URL + "/redirect",
		Body:            "{ 'test': 'data'}",
		Method:          "GET",
		ResponseTimeout: 20,
		Headers: map[string]string{
			"Content-Type": "application/json",
		},
		FollowRedirects: true,
	}
	fields, err := h.HTTPGather()
	require.NoError(t, err)
	assert.NotEmpty(t, fields)
	if assert.NotNil(t, fields["http_response_code"]) {
		assert.Equal(t, http.StatusOK, fields["http_response_code"])
	}

	h = &HTTPResponse{
		Address:         ts.URL + "/badredirect",
		Body:            "{ 'test': 'data'}",
		Method:          "GET",
		ResponseTimeout: 20,
		Headers: map[string]string{
			"Content-Type": "application/json",
		},
		FollowRedirects: true,
	}
	fields, err = h.HTTPGather()
	require.Error(t, err)
}

func TestMethod(t *testing.T) {
	mux := setUpTestMux()
	ts := httptest.NewServer(mux)
	defer ts.Close()

	h := &HTTPResponse{
		Address:         ts.URL + "/mustbepostmethod",
		Body:            "{ 'test': 'data'}",
		Method:          "POST",
		ResponseTimeout: 20,
		Headers: map[string]string{
			"Content-Type": "application/json",
		},
		FollowRedirects: true,
	}
	fields, err := h.HTTPGather()
	require.NoError(t, err)
	assert.NotEmpty(t, fields)
	if assert.NotNil(t, fields["http_response_code"]) {
		assert.Equal(t, http.StatusOK, fields["http_response_code"])
	}

	h = &HTTPResponse{
		Address:         ts.URL + "/mustbepostmethod",
		Body:            "{ 'test': 'data'}",
		Method:          "GET",
		ResponseTimeout: 20,
		Headers: map[string]string{
			"Content-Type": "application/json",
		},
		FollowRedirects: true,
	}
	fields, err = h.HTTPGather()
	require.NoError(t, err)
	assert.NotEmpty(t, fields)
	if assert.NotNil(t, fields["http_response_code"]) {
		assert.Equal(t, http.StatusMethodNotAllowed, fields["http_response_code"])
	}

	//check that lowercase methods work correctly
	h = &HTTPResponse{
		Address:         ts.URL + "/mustbepostmethod",
		Body:            "{ 'test': 'data'}",
		Method:          "head",
		ResponseTimeout: 20,
		Headers: map[string]string{
			"Content-Type": "application/json",
		},
		FollowRedirects: true,
	}
	fields, err = h.HTTPGather()
	require.NoError(t, err)
	assert.NotEmpty(t, fields)
	if assert.NotNil(t, fields["http_response_code"]) {
		assert.Equal(t, http.StatusMethodNotAllowed, fields["http_response_code"])
	}
}

func TestBody(t *testing.T) {
	mux := setUpTestMux()
	ts := httptest.NewServer(mux)
	defer ts.Close()

	h := &HTTPResponse{
		Address:         ts.URL + "/musthaveabody",
		Body:            "{ 'test': 'data'}",
		Method:          "GET",
		ResponseTimeout: 20,
		Headers: map[string]string{
			"Content-Type": "application/json",
		},
		FollowRedirects: true,
	}
	fields, err := h.HTTPGather()
	require.NoError(t, err)
	assert.NotEmpty(t, fields)
	if assert.NotNil(t, fields["http_response_code"]) {
		assert.Equal(t, http.StatusOK, fields["http_response_code"])
	}

	h = &HTTPResponse{
		Address:         ts.URL + "/musthaveabody",
		Method:          "GET",
		ResponseTimeout: 20,
		Headers: map[string]string{
			"Content-Type": "application/json",
		},
		FollowRedirects: true,
	}
	fields, err = h.HTTPGather()
	require.NoError(t, err)
	assert.NotEmpty(t, fields)
	if assert.NotNil(t, fields["http_response_code"]) {
		assert.Equal(t, http.StatusBadRequest, fields["http_response_code"])
	}
}

func TestTimeout(t *testing.T) {
	mux := setUpTestMux()
	ts := httptest.NewServer(mux)
	defer ts.Close()

	h := &HTTPResponse{
		Address:         ts.URL + "/twosecondnap",
		Body:            "{ 'test': 'data'}",
		Method:          "GET",
		ResponseTimeout: 1,
		Headers: map[string]string{
			"Content-Type": "application/json",
		},
		FollowRedirects: true,
	}
	_, err := h.HTTPGather()
	require.Error(t, err)
}
@@ -6,7 +6,7 @@ For example, if you have a service called _mycollector_, which has HTTP endpoint
 plugin like this:

 ```
-[[httpjson.services]]
+[[inputs.httpjson]]
   name = "mycollector"

   servers = [

@@ -24,7 +24,7 @@ plugin like this:
 You can also specify which keys from server response should be considered tags:

 ```
-[[httpjson.services]]
+[[inputs.httpjson]]
   ...

   tag_keys = [

@@ -36,10 +36,10 @@ You can also specify which keys from server response should be considered tags:
 You can also specify additional request parameters for the service:

 ```
-[[httpjson.services]]
+[[inputs.httpjson]]
   ...

-  [httpjson.services.parameters]
+  [inputs.httpjson.parameters]
     event_type = "cpu_spike"
     threshold = "0.75"

@@ -48,10 +48,10 @@ You can also specify additional request parameters for the service:
 You can also specify additional request header parameters for the service:

 ```
-[[httpjson.services]]
+[[inputs.httpjson]]
   ...

-  [httpjson.services.headers]
+  [inputs.httpjson.headers]
     X-Auth-Token = "my-xauth-token"
     apiVersion = "v1"
 ```

@@ -61,18 +61,14 @@ You can also specify additional request header parameters for the service:
 Let's say that we have a service named "mycollector" configured like this:

 ```
-[httpjson]
-  [[httpjson.services]]
-    name = "mycollector"
-
-    servers = [
-      "http://my.service.com/_stats"
-    ]
-
-    # HTTP method to use (case-sensitive)
-    method = "GET"
-
-    tag_keys = ["service"]
+[[inputs.httpjson]]
+  name = "mycollector"
+  servers = [
+    "http://my.service.com/_stats"
+  ]
+  # HTTP method to use (case-sensitive)
+  method = "GET"
+  tag_keys = ["service"]
 ```

 which responds with the following JSON:

@@ -102,26 +98,21 @@ There is also the option to collect JSON from multiple services, here is an
 example doing that.

 ```
-[httpjson]
-  [[httpjson.services]]
-    name = "mycollector1"
-
-    servers = [
-      "http://my.service1.com/_stats"
-    ]
-
-    # HTTP method to use (case-sensitive)
-    method = "GET"
-
-  [[httpjson.services]]
-    name = "mycollector2"
-
-    servers = [
-      "http://service.net/json/stats"
-    ]
-
-    # HTTP method to use (case-sensitive)
-    method = "POST"
+[[inputs.httpjson]]
+  name = "mycollector1"
+  servers = [
+    "http://my.service1.com/_stats"
+  ]
+  # HTTP method to use (case-sensitive)
+  method = "GET"
+
+[[inputs.httpjson]]
+  name = "mycollector2"
+  servers = [
+    "http://service.net/json/stats"
+  ]
+  # HTTP method to use (case-sensitive)
+  method = "POST"
 ```

 The services respond with the following JSON:
@@ -1,7 +1,6 @@
 package httpjson

 import (
-	"encoding/json"
 	"errors"
 	"fmt"
 	"io/ioutil"

@@ -11,8 +10,10 @@ import (
 	"sync"
 	"time"

+	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/internal"
 	"github.com/influxdata/telegraf/plugins/inputs"
+	"github.com/influxdata/telegraf/plugins/parsers"
 )

 type HttpJson struct {

@@ -22,7 +23,17 @@ type HttpJson struct {
 	TagKeys    []string
 	Parameters map[string]string
 	Headers    map[string]string
-	client HTTPClient
+
+	// Path to CA file
+	SSLCA string `toml:"ssl_ca"`
+	// Path to host cert file
+	SSLCert string `toml:"ssl_cert"`
+	// Path to cert key file
+	SSLKey string `toml:"ssl_key"`
+	// Use SSL but skip chain & host verification
+	InsecureSkipVerify bool
+
+	client HTTPClient
 }

 type HTTPClient interface {

@@ -35,48 +46,65 @@ type HTTPClient interface {
 	// http.Response: HTTP respons object
 	// error        : Any error that may have occurred
 	MakeRequest(req *http.Request) (*http.Response, error)
+
+	SetHTTPClient(client *http.Client)
+	HTTPClient() *http.Client
 }

 type RealHTTPClient struct {
 	client *http.Client
 }

-func (c RealHTTPClient) MakeRequest(req *http.Request) (*http.Response, error) {
+func (c *RealHTTPClient) MakeRequest(req *http.Request) (*http.Response, error) {
 	return c.client.Do(req)
 }

-var sampleConfig = `
-  # NOTE This plugin only reads numerical measurements, strings and booleans
-  # will be ignored.
-
-  # a name for the service being polled
+func (c *RealHTTPClient) SetHTTPClient(client *http.Client) {
+	c.client = client
+}
+
+func (c *RealHTTPClient) HTTPClient() *http.Client {
+	return c.client
+}
+
+var sampleConfig = `
+  ## NOTE This plugin only reads numerical measurements, strings and booleans
+  ## will be ignored.
+
+  ## a name for the service being polled
   name = "webserver_stats"

-  # URL of each server in the service's cluster
+  ## URL of each server in the service's cluster
   servers = [
     "http://localhost:9999/stats/",
     "http://localhost:9998/stats/",
   ]

-  # HTTP method to use (case-sensitive)
+  ## HTTP method to use: GET or POST (case-sensitive)
   method = "GET"

-  # List of tag names to extract from top-level of JSON server response
+  ## List of tag names to extract from top-level of JSON server response
   # tag_keys = [
   #   "my_tag_1",
   #   "my_tag_2"
   # ]

-  # HTTP parameters (all values must be strings)
+  ## HTTP parameters (all values must be strings)
   [inputs.httpjson.parameters]
     event_type = "cpu_spike"
     threshold = "0.75"

-  # HTTP Header parameters (all values must be strings)
+  ## HTTP Header parameters (all values must be strings)
   # [inputs.httpjson.headers]
   #   X-Auth-Token = "my-xauth-token"
   #   apiVersion = "v1"

+  ## Optional SSL Config
+  # ssl_ca = "/etc/telegraf/ca.pem"
+  # ssl_cert = "/etc/telegraf/cert.pem"
+  # ssl_key = "/etc/telegraf/key.pem"
+  ## Use SSL but skip chain & host verification
+  # insecure_skip_verify = false
 `
|
|
||||||
func (h *HttpJson) SampleConfig() string {
|
func (h *HttpJson) SampleConfig() string {
|
||||||
@@ -88,9 +116,26 @@ func (h *HttpJson) Description() string {
|
|||||||
}
|
}
|
||||||
|
|
||||||
// Gathers data for all servers.
|
// Gathers data for all servers.
|
||||||
func (h *HttpJson) Gather(acc inputs.Accumulator) error {
|
func (h *HttpJson) Gather(acc telegraf.Accumulator) error {
|
||||||
var wg sync.WaitGroup
|
var wg sync.WaitGroup
|
||||||
|
|
||||||
|
if h.client.HTTPClient() == nil {
|
||||||
|
tlsCfg, err := internal.GetTLSConfig(
|
||||||
|
h.SSLCert, h.SSLKey, h.SSLCA, h.InsecureSkipVerify)
|
||||||
|
if err != nil {
|
||||||
|
return err
|
||||||
|
}
|
||||||
|
tr := &http.Transport{
|
||||||
|
ResponseHeaderTimeout: time.Duration(3 * time.Second),
|
||||||
|
TLSClientConfig: tlsCfg,
|
||||||
|
}
|
||||||
|
client := &http.Client{
|
||||||
|
Transport: tr,
|
||||||
|
Timeout: time.Duration(4 * time.Second),
|
||||||
|
}
|
||||||
|
h.client.SetHTTPClient(client)
|
||||||
|
}
|
||||||
|
|
||||||
errorChannel := make(chan error, len(h.Servers))
|
errorChannel := make(chan error, len(h.Servers))
|
||||||
|
|
||||||
for _, server := range h.Servers {
|
for _, server := range h.Servers {
|
||||||
@@ -127,7 +172,7 @@ func (h *HttpJson) Gather(acc inputs.Accumulator) error {
|
|||||||
// Returns:
|
// Returns:
|
||||||
// error: Any error that may have occurred
|
// error: Any error that may have occurred
|
||||||
func (h *HttpJson) gatherServer(
|
func (h *HttpJson) gatherServer(
|
||||||
acc inputs.Accumulator,
|
acc telegraf.Accumulator,
|
||||||
serverURL string,
|
serverURL string,
|
||||||
) error {
|
) error {
|
||||||
resp, responseTime, err := h.sendRequest(serverURL)
|
resp, responseTime, err := h.sendRequest(serverURL)
|
||||||
@@ -136,43 +181,39 @@ func (h *HttpJson) gatherServer(
|
|||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
|
|
||||||
var jsonOut map[string]interface{}
|
|
||||||
-	if err = json.Unmarshal([]byte(resp), &jsonOut); err != nil {
-		return errors.New("Error decoding JSON response")
-	}
-
-	tags := map[string]string{
-		"server": serverURL,
-	}
-
-	for _, tag := range h.TagKeys {
-		switch v := jsonOut[tag].(type) {
-		case string:
-			tags[tag] = v
-		}
-		delete(jsonOut, tag)
-	}
-
-	if responseTime >= 0 {
-		jsonOut["response_time"] = responseTime
-	}
-	f := internal.JSONFlattener{}
-	err = f.FlattenJSON("", jsonOut)
-	if err != nil {
-		return err
-	}
-
 	var msrmnt_name string
 	if h.Name == "" {
 		msrmnt_name = "httpjson"
 	} else {
 		msrmnt_name = "httpjson_" + h.Name
 	}
-	acc.AddFields(msrmnt_name, f.Fields, tags)
+	tags := map[string]string{
+		"server": serverURL,
+	}
+
+	parser, err := parsers.NewJSONParser(msrmnt_name, h.TagKeys, tags)
+	if err != nil {
+		return err
+	}
+
+	metrics, err := parser.Parse([]byte(resp))
+	if err != nil {
+		return err
+	}
+
+	for _, metric := range metrics {
+		fields := make(map[string]interface{})
+		for k, v := range metric.Fields() {
+			fields[k] = v
+		}
+		fields["response_time"] = responseTime
+		acc.AddFields(metric.Name(), fields, metric.Tags())
+	}
 	return nil
 }
 
-// Sends an HTTP request to the server using the HttpJson object's HTTPClient
+// Sends an HTTP request to the server using the HttpJson object's HTTPClient.
+// This request can be either a GET or a POST.
 // Parameters:
 //     serverURL: endpoint to send request to
 //
@@ -186,21 +227,36 @@ func (h *HttpJson) sendRequest(serverURL string) (string, float64, error) {
 		return "", -1, fmt.Errorf("Invalid server URL \"%s\"", serverURL)
 	}
 
-	params := url.Values{}
-	for k, v := range h.Parameters {
-		params.Add(k, v)
+	data := url.Values{}
+	switch {
+	case h.Method == "GET":
+		params := requestURL.Query()
+		for k, v := range h.Parameters {
+			params.Add(k, v)
+		}
+		requestURL.RawQuery = params.Encode()
+
+	case h.Method == "POST":
+		requestURL.RawQuery = ""
+		for k, v := range h.Parameters {
+			data.Add(k, v)
+		}
 	}
-	requestURL.RawQuery = params.Encode()
 
 	// Create + send request
-	req, err := http.NewRequest(h.Method, requestURL.String(), nil)
+	req, err := http.NewRequest(h.Method, requestURL.String(),
+		strings.NewReader(data.Encode()))
 	if err != nil {
 		return "", -1, err
 	}
 
 	// Add header parameters
 	for k, v := range h.Headers {
-		req.Header.Add(k, v)
+		if strings.ToLower(k) == "host" {
+			req.Host = v
+		} else {
+			req.Header.Add(k, v)
+		}
 	}
 
 	start := time.Now()
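The GET/POST split added in this hunk can be exercised in isolation with only `net/url`: for GET the parameters are merged into the URL's query string, for POST the query is cleared and the parameters become a form-encoded body. `buildRequestParts` below is a hypothetical helper name, not part of the plugin:

```go
package main

import (
	"fmt"
	"net/url"
)

// buildRequestParts returns the final URL and (for POST) the form-encoded
// body, mirroring the switch added in sendRequest. The real plugin feeds the
// body to http.NewRequest via strings.NewReader.
func buildRequestParts(method, rawURL string, params map[string]string) (string, string, error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", "", err
	}
	data := url.Values{}
	switch method {
	case "GET":
		q := u.Query() // keep any params already on the URL
		for k, v := range params {
			q.Add(k, v)
		}
		u.RawQuery = q.Encode()
	case "POST":
		u.RawQuery = "" // POST params go in the body, not the query string
		for k, v := range params {
			data.Add(k, v)
		}
	}
	return u.String(), data.Encode(), nil
}

func main() {
	getURL, _, _ := buildRequestParts("GET", "http://example.com/metrics", map[string]string{"api_key": "mykey"})
	_, body, _ := buildRequestParts("POST", "http://example.com/metrics?x=1", map[string]string{"api_key": "mykey"})
	fmt.Println(getURL) // http://example.com/metrics?api_key=mykey
	fmt.Println(body)   // api_key=mykey
}
```

This is exactly the behavior the new `TestHttpJsonGET` and `TestHttpJsonPOST` tests assert against a `httptest` server.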
@@ -232,7 +288,9 @@ func (h *HttpJson) sendRequest(serverURL string) (string, float64, error) {
 }
 
 func init() {
-	inputs.Add("httpjson", func() inputs.Input {
-		return &HttpJson{client: RealHTTPClient{client: &http.Client{}}}
+	inputs.Add("httpjson", func() telegraf.Input {
+		return &HttpJson{
+			client: &RealHTTPClient{},
+		}
 	})
 }
@@ -1,8 +1,10 @@
 package httpjson
 
 import (
+	"fmt"
 	"io/ioutil"
 	"net/http"
+	"net/http/httptest"
 	"strings"
 	"testing"
 
@@ -27,6 +29,75 @@ const validJSON = `
 	"another_list": [4]
 }`
 
+const validJSON2 = `{
+	"user":{
+		"hash_rate":0,
+		"expected_24h_rewards":0,
+		"total_rewards":0.000595109232,
+		"paid_rewards":0,
+		"unpaid_rewards":0.000595109232,
+		"past_24h_rewards":0,
+		"total_work":"5172625408",
+		"blocks_found":0
+	},
+	"workers":{
+		"brminer.1":{
+			"hash_rate":0,
+			"hash_rate_24h":0,
+			"valid_shares":"6176",
+			"stale_shares":"0",
+			"invalid_shares":"0",
+			"rewards":4.5506464e-5,
+			"rewards_24h":0,
+			"reset_time":1455409950
+		},
+		"brminer.2":{
+			"hash_rate":0,
+			"hash_rate_24h":0,
+			"valid_shares":"0",
+			"stale_shares":"0",
+			"invalid_shares":"0",
+			"rewards":0,
+			"rewards_24h":0,
+			"reset_time":1455936726
+		},
+		"brminer.3":{
+			"hash_rate":0,
+			"hash_rate_24h":0,
+			"valid_shares":"0",
+			"stale_shares":"0",
+			"invalid_shares":"0",
+			"rewards":0,
+			"rewards_24h":0,
+			"reset_time":1455936733
+		}
+	},
+	"pool":{
+		"hash_rate":114100000,
+		"active_users":843,
+		"total_work":"5015346808842682368",
+		"pps_ratio":1.04,
+		"pps_rate":7.655e-9
+	},
+	"network":{
+		"hash_rate":1426117703,
+		"block_number":944895,
+		"time_per_block":156,
+		"difficulty":51825.72835216,
+		"next_difficulty":51916.15249019,
+		"retarget_time":95053
+	},
+	"market":{
+		"ltc_btc":0.00798,
+		"ltc_usd":3.37801,
+		"ltc_eur":3.113,
+		"ltc_gbp":2.32807,
+		"ltc_rub":241.796,
+		"ltc_cny":21.3883,
+		"btc_usd":422.852
+	}
+}`
+
 const validJSONTags = `
 {
 	"value": 15,
@@ -54,7 +125,7 @@ type mockHTTPClient struct {
 // Mock implementation of MakeRequest. Usually returns an http.Response with
 // hard-coded responseBody and statusCode. However, if the request uses a
 // nonstandard method, it uses status code 405 (method not allowed)
-func (c mockHTTPClient) MakeRequest(req *http.Request) (*http.Response, error) {
+func (c *mockHTTPClient) MakeRequest(req *http.Request) (*http.Response, error) {
 	resp := http.Response{}
 	resp.StatusCode = c.statusCode
 
@@ -76,6 +147,13 @@ func (c mockHTTPClient) MakeRequest(req *http.Request) (*http.Response, error) {
 	return &resp, nil
 }
 
+func (c *mockHTTPClient) SetHTTPClient(_ *http.Client) {
+}
+
+func (c *mockHTTPClient) HTTPClient() *http.Client {
+	return nil
+}
+
 // Generates a pointer to an HttpJson object that uses a mock HTTP client.
 // Parameters:
 //     response  : Body of the response that the mock HTTP client should return
@@ -86,7 +164,7 @@ func (c mockHTTPClient) MakeRequest(req *http.Request) (*http.Response, error) {
 func genMockHttpJson(response string, statusCode int) []*HttpJson {
 	return []*HttpJson{
 		&HttpJson{
-			client: mockHTTPClient{responseBody: response, statusCode: statusCode},
+			client: &mockHTTPClient{responseBody: response, statusCode: statusCode},
 			Servers: []string{
 				"http://server1.example.com/metrics/",
 				"http://server2.example.com/metrics/",
@@ -103,7 +181,7 @@ func genMockHttpJson(response string, statusCode int) []*HttpJson {
 		},
 		&HttpJson{
-			client: mockHTTPClient{responseBody: response, statusCode: statusCode},
+			client: &mockHTTPClient{responseBody: response, statusCode: statusCode},
 			Servers: []string{
 				"http://server3.example.com/metrics/",
 				"http://server4.example.com/metrics/",
@@ -136,7 +214,7 @@ func TestHttpJson200(t *testing.T) {
 		require.NoError(t, err)
 
 		assert.Equal(t, 12, acc.NFields())
 		// Set responsetime
-		for _, p := range acc.Points {
+		for _, p := range acc.Metrics {
 			p.Fields["response_time"] = 1.0
 		}
 
@@ -149,6 +227,222 @@ func TestHttpJson200(t *testing.T) {
 	}
 }
 
+// Test that GET Parameters from the url string are applied properly
+func TestHttpJsonGET_URL(t *testing.T) {
+	ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		key := r.FormValue("api_key")
+		assert.Equal(t, "mykey", key)
+		w.WriteHeader(http.StatusOK)
+		fmt.Fprintln(w, validJSON2)
+	}))
+	defer ts.Close()
+
+	a := HttpJson{
+		Servers: []string{ts.URL + "?api_key=mykey"},
+		Name:    "",
+		Method:  "GET",
+		client:  &RealHTTPClient{client: &http.Client{}},
+	}
+
+	var acc testutil.Accumulator
+	err := a.Gather(&acc)
+	require.NoError(t, err)
+
+	// remove response_time from gathered fields because it's non-deterministic
+	delete(acc.Metrics[0].Fields, "response_time")
+
+	fields := map[string]interface{}{
+		"market_btc_usd": float64(422.852),
+		"market_ltc_btc": float64(0.00798),
+		"market_ltc_cny": float64(21.3883),
+		"market_ltc_eur": float64(3.113),
+		"market_ltc_gbp": float64(2.32807),
+		"market_ltc_rub": float64(241.796),
+		"market_ltc_usd": float64(3.37801),
+		"network_block_number": float64(944895),
+		"network_difficulty": float64(51825.72835216),
+		"network_hash_rate": float64(1.426117703e+09),
+		"network_next_difficulty": float64(51916.15249019),
+		"network_retarget_time": float64(95053),
+		"network_time_per_block": float64(156),
+		"pool_active_users": float64(843),
+		"pool_hash_rate": float64(1.141e+08),
+		"pool_pps_rate": float64(7.655e-09),
+		"pool_pps_ratio": float64(1.04),
+		"user_blocks_found": float64(0),
+		"user_expected_24h_rewards": float64(0),
+		"user_hash_rate": float64(0),
+		"user_paid_rewards": float64(0),
+		"user_past_24h_rewards": float64(0),
+		"user_total_rewards": float64(0.000595109232),
+		"user_unpaid_rewards": float64(0.000595109232),
+		"workers_brminer.1_hash_rate": float64(0),
+		"workers_brminer.1_hash_rate_24h": float64(0),
+		"workers_brminer.1_reset_time": float64(1.45540995e+09),
+		"workers_brminer.1_rewards": float64(4.5506464e-05),
+		"workers_brminer.1_rewards_24h": float64(0),
+		"workers_brminer.2_hash_rate": float64(0),
+		"workers_brminer.2_hash_rate_24h": float64(0),
+		"workers_brminer.2_reset_time": float64(1.455936726e+09),
+		"workers_brminer.2_rewards": float64(0),
+		"workers_brminer.2_rewards_24h": float64(0),
+		"workers_brminer.3_hash_rate": float64(0),
+		"workers_brminer.3_hash_rate_24h": float64(0),
+		"workers_brminer.3_reset_time": float64(1.455936733e+09),
+		"workers_brminer.3_rewards": float64(0),
+		"workers_brminer.3_rewards_24h": float64(0),
+	}
+
+	acc.AssertContainsFields(t, "httpjson", fields)
+}
+
+// Test that GET Parameters are applied properly
+func TestHttpJsonGET(t *testing.T) {
+	params := map[string]string{
+		"api_key": "mykey",
+	}
+	ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		key := r.FormValue("api_key")
+		assert.Equal(t, "mykey", key)
+		w.WriteHeader(http.StatusOK)
+		fmt.Fprintln(w, validJSON2)
+	}))
+	defer ts.Close()
+
+	a := HttpJson{
+		Servers:    []string{ts.URL},
+		Name:       "",
+		Method:     "GET",
+		Parameters: params,
+		client:     &RealHTTPClient{client: &http.Client{}},
+	}
+
+	var acc testutil.Accumulator
+	err := a.Gather(&acc)
+	require.NoError(t, err)
+
+	// remove response_time from gathered fields because it's non-deterministic
+	delete(acc.Metrics[0].Fields, "response_time")
+
+	fields := map[string]interface{}{
+		"market_btc_usd": float64(422.852),
+		"market_ltc_btc": float64(0.00798),
+		"market_ltc_cny": float64(21.3883),
+		"market_ltc_eur": float64(3.113),
+		"market_ltc_gbp": float64(2.32807),
+		"market_ltc_rub": float64(241.796),
+		"market_ltc_usd": float64(3.37801),
+		"network_block_number": float64(944895),
+		"network_difficulty": float64(51825.72835216),
+		"network_hash_rate": float64(1.426117703e+09),
+		"network_next_difficulty": float64(51916.15249019),
+		"network_retarget_time": float64(95053),
+		"network_time_per_block": float64(156),
+		"pool_active_users": float64(843),
+		"pool_hash_rate": float64(1.141e+08),
+		"pool_pps_rate": float64(7.655e-09),
+		"pool_pps_ratio": float64(1.04),
+		"user_blocks_found": float64(0),
+		"user_expected_24h_rewards": float64(0),
+		"user_hash_rate": float64(0),
+		"user_paid_rewards": float64(0),
+		"user_past_24h_rewards": float64(0),
+		"user_total_rewards": float64(0.000595109232),
+		"user_unpaid_rewards": float64(0.000595109232),
+		"workers_brminer.1_hash_rate": float64(0),
+		"workers_brminer.1_hash_rate_24h": float64(0),
+		"workers_brminer.1_reset_time": float64(1.45540995e+09),
+		"workers_brminer.1_rewards": float64(4.5506464e-05),
+		"workers_brminer.1_rewards_24h": float64(0),
+		"workers_brminer.2_hash_rate": float64(0),
+		"workers_brminer.2_hash_rate_24h": float64(0),
+		"workers_brminer.2_reset_time": float64(1.455936726e+09),
+		"workers_brminer.2_rewards": float64(0),
+		"workers_brminer.2_rewards_24h": float64(0),
+		"workers_brminer.3_hash_rate": float64(0),
+		"workers_brminer.3_hash_rate_24h": float64(0),
+		"workers_brminer.3_reset_time": float64(1.455936733e+09),
+		"workers_brminer.3_rewards": float64(0),
+		"workers_brminer.3_rewards_24h": float64(0),
+	}
+
+	acc.AssertContainsFields(t, "httpjson", fields)
+}
+
+// Test that POST Parameters are applied properly
+func TestHttpJsonPOST(t *testing.T) {
+	params := map[string]string{
+		"api_key": "mykey",
+	}
+	ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		body, err := ioutil.ReadAll(r.Body)
+		assert.NoError(t, err)
+		assert.Equal(t, "api_key=mykey", string(body))
+		w.WriteHeader(http.StatusOK)
+		fmt.Fprintln(w, validJSON2)
+	}))
+	defer ts.Close()
+
+	a := HttpJson{
+		Servers:    []string{ts.URL},
+		Name:       "",
+		Method:     "POST",
+		Parameters: params,
+		client:     &RealHTTPClient{client: &http.Client{}},
+	}
+
+	var acc testutil.Accumulator
+	err := a.Gather(&acc)
+	require.NoError(t, err)
+
+	// remove response_time from gathered fields because it's non-deterministic
+	delete(acc.Metrics[0].Fields, "response_time")
+
+	fields := map[string]interface{}{
+		"market_btc_usd": float64(422.852),
+		"market_ltc_btc": float64(0.00798),
+		"market_ltc_cny": float64(21.3883),
+		"market_ltc_eur": float64(3.113),
+		"market_ltc_gbp": float64(2.32807),
+		"market_ltc_rub": float64(241.796),
+		"market_ltc_usd": float64(3.37801),
+		"network_block_number": float64(944895),
+		"network_difficulty": float64(51825.72835216),
+		"network_hash_rate": float64(1.426117703e+09),
+		"network_next_difficulty": float64(51916.15249019),
+		"network_retarget_time": float64(95053),
+		"network_time_per_block": float64(156),
+		"pool_active_users": float64(843),
+		"pool_hash_rate": float64(1.141e+08),
+		"pool_pps_rate": float64(7.655e-09),
+		"pool_pps_ratio": float64(1.04),
+		"user_blocks_found": float64(0),
+		"user_expected_24h_rewards": float64(0),
+		"user_hash_rate": float64(0),
+		"user_paid_rewards": float64(0),
+		"user_past_24h_rewards": float64(0),
+		"user_total_rewards": float64(0.000595109232),
+		"user_unpaid_rewards": float64(0.000595109232),
+		"workers_brminer.1_hash_rate": float64(0),
+		"workers_brminer.1_hash_rate_24h": float64(0),
+		"workers_brminer.1_reset_time": float64(1.45540995e+09),
+		"workers_brminer.1_rewards": float64(4.5506464e-05),
+		"workers_brminer.1_rewards_24h": float64(0),
+		"workers_brminer.2_hash_rate": float64(0),
+		"workers_brminer.2_hash_rate_24h": float64(0),
+		"workers_brminer.2_reset_time": float64(1.455936726e+09),
+		"workers_brminer.2_rewards": float64(0),
+		"workers_brminer.2_rewards_24h": float64(0),
+		"workers_brminer.3_hash_rate": float64(0),
+		"workers_brminer.3_hash_rate_24h": float64(0),
+		"workers_brminer.3_reset_time": float64(1.455936733e+09),
+		"workers_brminer.3_rewards": float64(0),
+		"workers_brminer.3_rewards_24h": float64(0),
+	}
+
+	acc.AssertContainsFields(t, "httpjson", fields)
+}
+
 // Test response to HTTP 500
 func TestHttpJson500(t *testing.T) {
 	httpjson := genMockHttpJson(validJSON, 500)
@@ -203,7 +497,7 @@ func TestHttpJson200Tags(t *testing.T) {
 		var acc testutil.Accumulator
 		err := service.Gather(&acc)
 		// Set responsetime
-		for _, p := range acc.Points {
+		for _, p := range acc.Metrics {
 			p.Fields["response_time"] = 1.0
 		}
 		require.NoError(t, err)
plugins/inputs/igloo/README.md (new file, +23)
@@ -0,0 +1,23 @@
+# igloo Input Plugin
+
+The igloo plugin "tails" a logfile and parses each log message.
+
+By default, the igloo plugin acts like the following unix tail command:
+
+```
+tail -F --lines=0 myfile.log
+```
+
+- `-F` means that it will follow the _name_ of the given file, so
+  that it will be compatible with log-rotated files, and that it will retry on
+  inaccessible files.
+- `--lines=0` means that it will start at the end of the file (unless
+  the `from_beginning` option is set).
+
+see http://man7.org/linux/man-pages/man1/tail.1.html for more details.
+
+### Configuration:
+
+```toml
+```
+
plugins/inputs/igloo/igloo.go (new file, +331)
@@ -0,0 +1,331 @@
+package igloo
+
+import (
+	"fmt"
+	"log"
+	"regexp"
+	"sort"
+	"strconv"
+	"strings"
+	"sync"
+	"time"
+
+	"github.com/hpcloud/tail"
+
+	"github.com/influxdata/telegraf"
+	"github.com/influxdata/telegraf/internal/globpath"
+	"github.com/influxdata/telegraf/plugins/inputs"
+)
+
+// format of timestamps
+const (
+	rfcFormat string = "%s-%s-%sT%s:%s:%s.%sZ"
+)
+
+var (
+	// regex for finding timestamps
+	tRe = regexp.MustCompile(`Timestamp=((\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2}),(\d+))`)
+)
+
+type Tail struct {
+	Files         []string
+	FromBeginning bool
+	TagKeys       []string
+	Counters      []string
+	NumFields     []string
+	StrFields     []string
+
+	numfieldsRe map[string]*regexp.Regexp
+	strfieldsRe map[string]*regexp.Regexp
+	countersRe  map[string]*regexp.Regexp
+	tagsRe      map[string]*regexp.Regexp
+
+	counters map[string]map[string]int64
+
+	tailers []*tail.Tail
+	wg      sync.WaitGroup
+	acc     telegraf.Accumulator
+
+	sync.Mutex
+}
+
+func NewTail() *Tail {
+	return &Tail{
+		FromBeginning: false,
+	}
+}
+
+const sampleConfig = `
+  ## logfiles to parse.
+  ##
+  ## These accept standard unix glob matching rules, but with the addition of
+  ## ** as a "super asterisk". ie:
+  ##   "/var/log/**.log"     -> recursively find all .log files in /var/log
+  ##   "/var/log/*/*.log"    -> find all .log files with a parent dir in /var/log
+  ##   "/var/log/apache.log" -> just tail the apache log file
+  ##
+  ## See https://github.com/gobwas/glob for more examples
+  ##
+  files = ["$HOME/sample.log"]
+  ## Read file from beginning.
+  from_beginning = false
+
+  ## Each log message is searched for these tag keys in TagKey=Value format.
+  ## Any that are found will be tagged on the resulting influx measurements.
+  tag_keys = [
+    "HostLocal",
+    "ProductName",
+    "OperationName",
+  ]
+
+  ## counters are keys which are treated as counters.
+  ## so if counters = ["Result"], then this means that the following ocurrence
+  ## on a log line:
+  ##   Result=Success
+  ## would be treated as a counter: Result_Success, and it will be incremented
+  ## for every occurrence, until Telegraf is restarted.
+  counters = ["Result"]
+  ## num_fields are log line occurrences that are translated into numerical
+  ## fields. ie:
+  ##   Duration=1
+  num_fields = ["Duration", "Attempt"]
+  ## str_fields are log line occurences that are translated into string fields,
+  ## ie:
+  ##   ActivityGUID=0bb03bf4-ae1d-4487-bb6f-311653b35760
+  str_fields = ["ActivityGUID"]
+`
+
+func (t *Tail) SampleConfig() string {
+	return sampleConfig
+}
+
+func (t *Tail) Description() string {
+	return "Stream an igloo file, like the tail -f command"
+}
+
+func (t *Tail) Gather(acc telegraf.Accumulator) error {
+	return nil
+}
+
+func (t *Tail) buildRegexes() error {
+	t.numfieldsRe = make(map[string]*regexp.Regexp)
+	t.strfieldsRe = make(map[string]*regexp.Regexp)
+	t.tagsRe = make(map[string]*regexp.Regexp)
+	t.countersRe = make(map[string]*regexp.Regexp)
+	t.counters = make(map[string]map[string]int64)
+
+	for _, field := range t.NumFields {
+		re, err := regexp.Compile(field + `=([0-9\.]+)`)
+		if err != nil {
+			return err
+		}
+		t.numfieldsRe[field] = re
+	}
+
+	for _, field := range t.StrFields {
+		re, err := regexp.Compile(field + `=([0-9a-zA-Z\.\-]+)`)
+		if err != nil {
+			return err
+		}
+		t.strfieldsRe[field] = re
+	}
+
+	for _, field := range t.TagKeys {
+		re, err := regexp.Compile(field + `=([0-9a-zA-Z\.\-]+)`)
+		if err != nil {
+			return err
+		}
+		t.tagsRe[field] = re
+	}
+
+	for _, field := range t.Counters {
+		re, err := regexp.Compile("(" + field + ")" + `=([0-9a-zA-Z\.\-]+)`)
+		if err != nil {
+			return err
+		}
+		t.countersRe[field] = re
+	}
+
+	return nil
+}
+
+func (t *Tail) Start(acc telegraf.Accumulator) error {
+	t.Lock()
+	defer t.Unlock()
+
+	t.acc = acc
+	if err := t.buildRegexes(); err != nil {
+		return err
+	}
+
+	var seek tail.SeekInfo
+	if !t.FromBeginning {
+		seek.Whence = 2
+		seek.Offset = 0
+	}
+
+	var errS string
+	// Create a "tailer" for each file
+	for _, filepath := range t.Files {
+		g, err := globpath.Compile(filepath)
+		if err != nil {
+			log.Printf("ERROR Glob %s failed to compile, %s", filepath, err)
+		}
+		for file, _ := range g.Match() {
+			tailer, err := tail.TailFile(file,
+				tail.Config{
+					ReOpen:   true,
+					Follow:   true,
+					Location: &seek,
+				})
+			if err != nil {
+				errS += err.Error() + " "
+				continue
+			}
+			// create a goroutine for each "tailer"
+			go t.receiver(tailer)
+			t.tailers = append(t.tailers, tailer)
+		}
+	}
+
+	if errS != "" {
+		return fmt.Errorf(errS)
+	}
+	return nil
+}
+
+// this is launched as a goroutine to continuously watch a tailed logfile
+// for changes, parse any incoming msgs, and add to the accumulator.
+func (t *Tail) receiver(tailer *tail.Tail) {
+	t.wg.Add(1)
+	defer t.wg.Done()
+
+	var err error
+	var line *tail.Line
+	for line = range tailer.Lines {
+		if line.Err != nil {
+			log.Printf("ERROR tailing file %s, Error: %s\n",
+				tailer.Filename, err)
+			continue
+		}
+		err = t.Parse(line.Text)
+		if err != nil {
+			log.Printf("ERROR: %s", err)
+		}
+	}
+}
+
+func (t *Tail) Parse(line string) error {
+	// find the timestamp:
+	match := tRe.FindAllStringSubmatch(line, -1)
+	if len(match) < 1 {
+		return nil
+	}
+	if len(match[0]) < 9 {
+		return nil
+	}
+	// make an rfc3339 timestamp and parse it:
+	ts, err := time.Parse(time.RFC3339Nano,
+		fmt.Sprintf(rfcFormat, match[0][2], match[0][3], match[0][4], match[0][5], match[0][6], match[0][7], match[0][8]))
+	if err != nil {
+		return nil
+	}
+
+	fields := make(map[string]interface{})
+	tags := make(map[string]string)
+
+	// parse numerical fields:
+	for name, re := range t.numfieldsRe {
+		match := re.FindAllStringSubmatch(line, -1)
+		if len(match) < 1 {
+			continue
+		}
+		if len(match[0]) < 2 {
+			continue
+		}
+		num, err := strconv.ParseFloat(match[0][1], 64)
+		if err == nil {
+			fields[name] = num
+		}
+	}
+
+	// parse string fields:
+	for name, re := range t.strfieldsRe {
+		match := re.FindAllStringSubmatch(line, -1)
+		if len(match) < 1 {
+			continue
+		}
+		if len(match[0]) < 2 {
+			continue
+		}
+		fields[name] = match[0][1]
+	}
+
+	// parse tags:
+	for name, re := range t.tagsRe {
+		match := re.FindAllStringSubmatch(line, -1)
+		if len(match) < 1 {
+			continue
+		}
+		if len(match[0]) < 2 {
+			continue
+		}
+		tags[name] = match[0][1]
+	}
+
+	if len(t.countersRe) > 0 {
+		// Make a unique key for the measurement name/tags
+		var tg []string
+		for k, v := range tags {
+			tg = append(tg, fmt.Sprintf("%s=%s", k, v))
+		}
+		sort.Strings(tg)
+		hash := fmt.Sprintf("%s%s", strings.Join(tg, ""), "igloo")
+
+		// check if this hash already has a counter map
+		_, ok := t.counters[hash]
+		if !ok {
+			// doesnt have counter map, so make one
+			t.counters[hash] = make(map[string]int64)
+		}
+
+		// search for counter matches:
+		for _, re := range t.countersRe {
+			match := re.FindAllStringSubmatch(line, -1)
+			if len(match) < 1 {
+				continue
+			}
+			if len(match[0]) < 3 {
+				continue
+			}
+			counterName := match[0][1] + "_" + match[0][2]
+			// increment this counter
+			t.counters[hash][counterName] += 1
+			// add this counter to the output fields
+			fields[counterName] = t.counters[hash][counterName]
+		}
+	}
+
+	t.acc.AddFields("igloo", fields, tags, ts)
+	return nil
+}
+
+func (t *Tail) Stop() {
+	t.Lock()
+	defer t.Unlock()
+
+	for _, t := range t.tailers {
+		err := t.Stop()
+		if err != nil {
+			log.Printf("ERROR stopping tail on file %s\n", t.Filename)
+		}
+		t.Cleanup()
+	}
+	t.wg.Wait()
+}
+
+func init() {
+	inputs.Add("igloo", func() telegraf.Input {
+		return NewTail()
+	})
+}
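The `Key=Value` matching that `buildRegexes` sets up for `num_fields` can be exercised on its own: the plugin compiles a per-field pattern like `Duration=([0-9\.]+)` and parses the capture as a float. `extractNumField` below is a hypothetical standalone version of that step, not a function in the plugin:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// extractNumField compiles the same per-field pattern the plugin builds for
// num_fields and parses the first match as a float64. Hypothetical helper.
func extractNumField(field, line string) (float64, bool) {
	re := regexp.MustCompile(field + `=([0-9\.]+)`)
	m := re.FindStringSubmatch(line)
	if len(m) < 2 {
		return 0, false
	}
	num, err := strconv.ParseFloat(m[1], 64)
	if err != nil {
		return 0, false
	}
	return num, true
}

func main() {
	line := "Timestamp=2016-02-16 04:05:06,789 OperationName=Login Duration=42 Result=Success"
	if d, ok := extractNumField("Duration", line); ok {
		fmt.Println(d) // 42
	}
}
```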
@@ -1,6 +1,41 @@
 # influxdb plugin
 
-The influxdb plugin collects InfluxDB-formatted data from JSON endpoints.
+The InfluxDB plugin will collect metrics on the given InfluxDB servers.
+
+This plugin can also gather metrics from endpoints that expose
+InfluxDB-formatted endpoints. See below for more information.
+
+### Configuration:
+
+```toml
+# Read InfluxDB-formatted JSON metrics from one or more HTTP endpoints
+[[inputs.influxdb]]
+  ## Works with InfluxDB debug endpoints out of the box,
+  ## but other services can use this format too.
+  ## See the influxdb plugin's README for more details.
+
+  ## Multiple URLs from which to read InfluxDB-formatted JSON
+  urls = [
+    "http://localhost:8086/debug/vars"
+  ]
+```
+
+### Measurements & Fields
+
+- influxdb_database
+- influxdb_httpd
+- influxdb_measurement
+- influxdb_memstats
+- influxdb_shard
+- influxdb_subscriber
+- influxdb_tsm1_cache
+- influxdb_tsm1_wal
+- influxdb_write
+
+### InfluxDB-formatted endpoints
+
+The influxdb plugin can collect InfluxDB-formatted data from JSON endpoints.
+Whether associated with an Influx database or not.
+
 With a configuration of:
 
@@ -65,8 +100,11 @@ influxdb_transactions,url='http://192.168.2.1:8086/debug/vars' total=100.0,balan
|
|||||||
|
|
||||||
There are two important details to note about the collected metrics:
|
There are two important details to note about the collected metrics:
|
||||||
|
|
||||||
1. Even though the values in JSON are being displayed as integers, the metrics are reported as floats.
|
1. Even though the values in JSON are being displayed as integers,
|
||||||
|
the metrics are reported as floats.
|
||||||
JSON encoders usually don't print the fractional part for round floats.
|
JSON encoders usually don't print the fractional part for round floats.
|
||||||
Because you cannot change the type of an existing field in InfluxDB, we assume all numbers are floats.
|
Because you cannot change the type of an existing field in InfluxDB,
|
||||||
|
we assume all numbers are floats.
|
||||||
|
|
||||||
2. The top-level keys' names (in the example above, `"k1"`, `"k2"`, and `"k3"`) are not considered when recording the metrics.
|
2. The top-level keys' names (in the example above, `"k1"`, `"k2"`, and `"k3"`)
|
||||||
|
are not considered when recording the metrics.
|
||||||
|
@@ -7,7 +7,9 @@ import (
	"net/http"
	"strings"
	"sync"
	"time"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -21,18 +23,18 @@ func (*InfluxDB) Description() string {

func (*InfluxDB) SampleConfig() string {
	return `
  ## Works with InfluxDB debug endpoints out of the box,
  ## but other services can use this format too.
  ## See the influxdb plugin's README for more details.

  ## Multiple URLs from which to read InfluxDB-formatted JSON
  urls = [
    "http://localhost:8086/debug/vars"
  ]
`
}

func (i *InfluxDB) Gather(acc telegraf.Accumulator) error {
	errorChannel := make(chan error, len(i.URLs))

	var wg sync.WaitGroup
@@ -69,6 +71,44 @@ type point struct {
	Values map[string]interface{} `json:"values"`
}

type memstats struct {
	Alloc         int64   `json:"Alloc"`
	TotalAlloc    int64   `json:"TotalAlloc"`
	Sys           int64   `json:"Sys"`
	Lookups       int64   `json:"Lookups"`
	Mallocs       int64   `json:"Mallocs"`
	Frees         int64   `json:"Frees"`
	HeapAlloc     int64   `json:"HeapAlloc"`
	HeapSys       int64   `json:"HeapSys"`
	HeapIdle      int64   `json:"HeapIdle"`
	HeapInuse     int64   `json:"HeapInuse"`
	HeapReleased  int64   `json:"HeapReleased"`
	HeapObjects   int64   `json:"HeapObjects"`
	StackInuse    int64   `json:"StackInuse"`
	StackSys      int64   `json:"StackSys"`
	MSpanInuse    int64   `json:"MSpanInuse"`
	MSpanSys      int64   `json:"MSpanSys"`
	MCacheInuse   int64   `json:"MCacheInuse"`
	MCacheSys     int64   `json:"MCacheSys"`
	BuckHashSys   int64   `json:"BuckHashSys"`
	GCSys         int64   `json:"GCSys"`
	OtherSys      int64   `json:"OtherSys"`
	NextGC        int64   `json:"NextGC"`
	LastGC        int64   `json:"LastGC"`
	PauseTotalNs  int64   `json:"PauseTotalNs"`
	NumGC         int64   `json:"NumGC"`
	GCCPUFraction float64 `json:"GCCPUFraction"`
}

var tr = &http.Transport{
	ResponseHeaderTimeout: time.Duration(3 * time.Second),
}

var client = &http.Client{
	Transport: tr,
	Timeout:   time.Duration(4 * time.Second),
}

// Gathers data from a particular URL
// Parameters:
//     acc    : The telegraf Accumulator to use
@@ -77,10 +117,10 @@ type point struct {
// Returns:
//     error: Any error that may have occurred
func (i *InfluxDB) gatherURL(
	acc telegraf.Accumulator,
	url string,
) error {
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
@@ -107,12 +147,52 @@ func (i *InfluxDB) gatherURL(
			break
		}

		// Read in a string key. We don't do anything with the top-level keys,
		// so it's discarded.
		key, err := dec.Token()
		if err != nil {
			return err
		}

		if key.(string) == "memstats" {
			var m memstats
			if err := dec.Decode(&m); err != nil {
				continue
			}
			acc.AddFields("influxdb_memstats",
				map[string]interface{}{
					"alloc":           m.Alloc,
					"total_alloc":     m.TotalAlloc,
					"sys":             m.Sys,
					"lookups":         m.Lookups,
					"mallocs":         m.Mallocs,
					"frees":           m.Frees,
					"heap_alloc":      m.HeapAlloc,
					"heap_sys":        m.HeapSys,
					"heap_idle":       m.HeapIdle,
					"heap_inuse":      m.HeapInuse,
					"heap_released":   m.HeapReleased,
					"heap_objects":    m.HeapObjects,
					"stack_inuse":     m.StackInuse,
					"stack_sys":       m.StackSys,
					"mspan_inuse":     m.MSpanInuse,
					"mspan_sys":       m.MSpanSys,
					"mcache_inuse":    m.MCacheInuse,
					"mcache_sys":      m.MCacheSys,
					"buck_hash_sys":   m.BuckHashSys,
					"gc_sys":          m.GCSys,
					"other_sys":       m.OtherSys,
					"next_gc":         m.NextGC,
					"last_gc":         m.LastGC,
					"pause_total_ns":  m.PauseTotalNs,
					"num_gc":          m.NumGC,
					"gcc_pu_fraction": m.GCCPUFraction,
				},
				map[string]string{
					"url": url,
				})
		}

		// Attempt to parse a whole object into a point.
		// It might be a non-object, like a string or array.
		// If we fail to decode it into a point, ignore it and move on.
@@ -121,7 +201,8 @@ func (i *InfluxDB) gatherURL(
			continue
		}

		// If the object was a point, but was not fully initialized,
		// ignore it and move on.
		if p.Name == "" || p.Tags == nil || p.Values == nil || len(p.Values) == 0 {
			continue
		}
@@ -140,7 +221,7 @@ func (i *InfluxDB) gatherURL(
}

func init() {
	inputs.Add("influxdb", func() telegraf.Input {
		return &InfluxDB{}
	})
}
@@ -11,7 +11,138 @@ import (
)

func TestBasic(t *testing.T) {
	fakeServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Path == "/endpoint" {
			_, _ = w.Write([]byte(basicJSON))
		} else {
			w.WriteHeader(http.StatusNotFound)
		}
	}))
	defer fakeServer.Close()

	plugin := &influxdb.InfluxDB{
		URLs: []string{fakeServer.URL + "/endpoint"},
	}

	var acc testutil.Accumulator
	require.NoError(t, plugin.Gather(&acc))

	require.Len(t, acc.Metrics, 2)
	fields := map[string]interface{}{
		// JSON will truncate floats to integer representations.
		// Since there's no distinction in JSON, we can't assume it's an int.
		"i": -1.0,
		"f": 0.5,
		"b": true,
		"s": "string",
	}
	tags := map[string]string{
		"id":  "ex1",
		"url": fakeServer.URL + "/endpoint",
	}
	acc.AssertContainsTaggedFields(t, "influxdb_foo", fields, tags)

	fields = map[string]interface{}{
		"x": "x",
	}
	tags = map[string]string{
		"id":  "ex2",
		"url": fakeServer.URL + "/endpoint",
	}
	acc.AssertContainsTaggedFields(t, "influxdb_bar", fields, tags)
}

func TestInfluxDB(t *testing.T) {
	fakeInfluxServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Path == "/endpoint" {
			_, _ = w.Write([]byte(influxReturn))
		} else {
			w.WriteHeader(http.StatusNotFound)
		}
	}))
	defer fakeInfluxServer.Close()

	plugin := &influxdb.InfluxDB{
		URLs: []string{fakeInfluxServer.URL + "/endpoint"},
	}

	var acc testutil.Accumulator
	require.NoError(t, plugin.Gather(&acc))

	require.Len(t, acc.Metrics, 33)

	fields := map[string]interface{}{
		"heap_inuse":      int64(18046976),
		"heap_released":   int64(3473408),
		"mspan_inuse":     int64(97440),
		"total_alloc":     int64(201739016),
		"sys":             int64(38537464),
		"mallocs":         int64(570251),
		"frees":           int64(381008),
		"heap_idle":       int64(15802368),
		"pause_total_ns":  int64(5132914),
		"lookups":         int64(77),
		"heap_sys":        int64(33849344),
		"mcache_sys":      int64(16384),
		"next_gc":         int64(20843042),
		"gcc_pu_fraction": float64(4.287178819113636e-05),
		"other_sys":       int64(1229737),
		"alloc":           int64(17034016),
		"stack_inuse":     int64(753664),
		"stack_sys":       int64(753664),
		"buck_hash_sys":   int64(1461583),
		"gc_sys":          int64(1112064),
		"num_gc":          int64(27),
		"heap_alloc":      int64(17034016),
		"heap_objects":    int64(189243),
		"mspan_sys":       int64(114688),
		"mcache_inuse":    int64(4800),
		"last_gc":         int64(1460434886475114239),
	}

	tags := map[string]string{
		"url": fakeInfluxServer.URL + "/endpoint",
	}
	acc.AssertContainsTaggedFields(t, "influxdb_memstats", fields, tags)
}

func TestErrorHandling(t *testing.T) {
	badServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Path == "/endpoint" {
			_, _ = w.Write([]byte("not json"))
		} else {
			w.WriteHeader(http.StatusNotFound)
		}
	}))
	defer badServer.Close()

	plugin := &influxdb.InfluxDB{
		URLs: []string{badServer.URL + "/endpoint"},
	}

	var acc testutil.Accumulator
	require.Error(t, plugin.Gather(&acc))
}

func TestErrorHandling404(t *testing.T) {
	badServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Path == "/endpoint" {
			_, _ = w.Write([]byte(basicJSON))
		} else {
			w.WriteHeader(http.StatusNotFound)
		}
	}))
	defer badServer.Close()

	plugin := &influxdb.InfluxDB{
		URLs: []string{badServer.URL},
	}

	var acc testutil.Accumulator
	require.Error(t, plugin.Gather(&acc))
}

const basicJSON = `
{
"_1": {
"name": "foo",
@@ -55,43 +186,48 @@ func TestBasic(t *testing.T) {
}
}
`

const influxReturn = `
{
"cluster": {"name": "cluster", "tags": {}, "values": {}},
"cmdline": ["influxd"],
"cq": {"name": "cq", "tags": {}, "values": {}},
"database:_internal": {"name": "database", "tags": {"database": "_internal"}, "values": {"numMeasurements": 8, "numSeries": 12}},
"database:udp": {"name": "database", "tags": {"database": "udp"}, "values": {"numMeasurements": 14, "numSeries": 38}},
"hh:/Users/csparr/.influxdb/hh": {"name": "hh", "tags": {"path": "/Users/csparr/.influxdb/hh"}, "values": {}},
"httpd::8086": {"name": "httpd", "tags": {"bind": ":8086"}, "values": {"req": 7, "reqActive": 1, "reqDurationNs": 4488799}},
"measurement:cpu_idle.udp": {"name": "measurement", "tags": {"database": "udp", "measurement": "cpu_idle"}, "values": {"numSeries": 1}},
"measurement:cpu_usage.udp": {"name": "measurement", "tags": {"database": "udp", "measurement": "cpu_usage"}, "values": {"numSeries": 1}},
"measurement:database._internal": {"name": "measurement", "tags": {"database": "_internal", "measurement": "database"}, "values": {"numSeries": 2}},
"measurement:database.udp": {"name": "measurement", "tags": {"database": "udp", "measurement": "database"}, "values": {"numSeries": 2}},
"measurement:httpd.udp": {"name": "measurement", "tags": {"database": "udp", "measurement": "httpd"}, "values": {"numSeries": 1}},
"measurement:measurement.udp": {"name": "measurement", "tags": {"database": "udp", "measurement": "measurement"}, "values": {"numSeries": 22}},
"measurement:mem.udp": {"name": "measurement", "tags": {"database": "udp", "measurement": "mem"}, "values": {"numSeries": 1}},
"measurement:net.udp": {"name": "measurement", "tags": {"database": "udp", "measurement": "net"}, "values": {"numSeries": 1}},
"measurement:runtime._internal": {"name": "measurement", "tags": {"database": "_internal", "measurement": "runtime"}, "values": {"numSeries": 1}},
"measurement:runtime.udp": {"name": "measurement", "tags": {"database": "udp", "measurement": "runtime"}, "values": {"numSeries": 1}},
"measurement:shard._internal": {"name": "measurement", "tags": {"database": "_internal", "measurement": "shard"}, "values": {"numSeries": 2}},
"measurement:shard.udp": {"name": "measurement", "tags": {"database": "udp", "measurement": "shard"}, "values": {"numSeries": 1}},
"measurement:subscriber._internal": {"name": "measurement", "tags": {"database": "_internal", "measurement": "subscriber"}, "values": {"numSeries": 1}},
"measurement:subscriber.udp": {"name": "measurement", "tags": {"database": "udp", "measurement": "subscriber"}, "values": {"numSeries": 1}},
"measurement:swap_used.udp": {"name": "measurement", "tags": {"database": "udp", "measurement": "swap_used"}, "values": {"numSeries": 1}},
"measurement:tsm1_cache._internal": {"name": "measurement", "tags": {"database": "_internal", "measurement": "tsm1_cache"}, "values": {"numSeries": 2}},
"measurement:tsm1_cache.udp": {"name": "measurement", "tags": {"database": "udp", "measurement": "tsm1_cache"}, "values": {"numSeries": 2}},
"measurement:tsm1_wal._internal": {"name": "measurement", "tags": {"database": "_internal", "measurement": "tsm1_wal"}, "values": {"numSeries": 2}},
"measurement:tsm1_wal.udp": {"name": "measurement", "tags": {"database": "udp", "measurement": "tsm1_wal"}, "values": {"numSeries": 2}},
"measurement:udp._internal": {"name": "measurement", "tags": {"database": "_internal", "measurement": "udp"}, "values": {"numSeries": 1}},
"measurement:write._internal": {"name": "measurement", "tags": {"database": "_internal", "measurement": "write"}, "values": {"numSeries": 1}},
"measurement:write.udp": {"name": "measurement", "tags": {"database": "udp", "measurement": "write"}, "values": {"numSeries": 1}},
"memstats": {"Alloc":17034016,"TotalAlloc":201739016,"Sys":38537464,"Lookups":77,"Mallocs":570251,"Frees":381008,"HeapAlloc":17034016,"HeapSys":33849344,"HeapIdle":15802368,"HeapInuse":18046976,"HeapReleased":3473408,"HeapObjects":189243,"StackInuse":753664,"StackSys":753664,"MSpanInuse":97440,"MSpanSys":114688,"MCacheInuse":4800,"MCacheSys":16384,"BuckHashSys":1461583,"GCSys":1112064,"OtherSys":1229737,"NextGC":20843042,"LastGC":1460434886475114239,"PauseTotalNs":5132914,"PauseNs":[195052,117751,139370,156933,263089,165249,713747,103904,122015,294408,213753,170864,175845,114221,121563,122409,113098,162219,229257,126726,250774,254235,117206,293588,144279,124306,127053,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],"PauseEnd":[1460433856394860455,1460433856398162739,1460433856405888337,1460433856411784017,1460433856417924684,1460433856428385687,1460433856443782908,1460433856456522851,1460433857392743223,1460433866484394564,1460433866494076235,1460433896472438632,1460433957839825106,1460433976473440328,1460434016473413006,1460434096471892794,1460434126470792929,1460434246480428250,1460434366554468369,1460434396471249528,1460434456471205885,1460434476479487292,1460434536471435965,1460434616469784776,1460434736482078216,1460434856544251733,1460434886475114239,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],"NumGC":27,"GCCPUFraction":4.287178819113636e-05,"EnableGC":true,"DebugGC":false,"BySize":[{"Size":0,"Mallocs":0,"Frees":0},{"Size":8,"Mallocs":1031,"Frees":955},{"Size":16,"Mallocs":308485,"Frees":142064},{"Size":32,"Mallocs":64937,"Frees":54321},{"Size":48,"Mallocs":33012,"Frees":29754},{"Size":64,"Mallocs":20299,"Frees":18173},{"Size":80,"Mallocs":8186,"Frees":7597},{"Size":96,"Mallocs":9806,"Frees":8982},{"Size":112,"Mallocs":5671,"Frees":4850},{"Size":128,"Mallocs":2972,"Frees":2684},{"Size":144,"Mallocs":4106,"Frees":3719},{"Size":160,"Mallocs":1324,"Frees":911},{"Size":176,"Mallocs":2574,"Frees":2391},{"Size":192,"Mallocs":4053,"Frees":3863},{"Size":208,"Mallocs":442,"Frees":307},{"Size":224,"Mallocs":336,"Frees":172},{"Size":240,"Mallocs":143,"Frees":125},{"Size":256,"Mallocs":542,"Frees":497},{"Size":288,"Mallocs":15971,"Frees":14761},{"Size":320,"Mallocs":245,"Frees":30},{"Size":352,"Mallocs":1299,"Frees":1065},{"Size":384,"Mallocs":138,"Frees":2},{"Size":416,"Mallocs":54,"Frees":47},{"Size":448,"Mallocs":75,"Frees":29},{"Size":480,"Mallocs":6,"Frees":4},{"Size":512,"Mallocs":452,"Frees":422},{"Size":576,"Mallocs":486,"Frees":395},{"Size":640,"Mallocs":81,"Frees":67},{"Size":704,"Mallocs":421,"Frees":397},{"Size":768,"Mallocs":469,"Frees":468},{"Size":896,"Mallocs":1049,"Frees":1010},{"Size":1024,"Mallocs":1078,"Frees":960},{"Size":1152,"Mallocs":750,"Frees":498},{"Size":1280,"Mallocs":84,"Frees":72},{"Size":1408,"Mallocs":218,"Frees":187},{"Size":1536,"Mallocs":73,"Frees":48},{"Size":1664,"Mallocs":43,"Frees":30},{"Size":2048,"Mallocs":153,"Frees":57},{"Size":2304,"Mallocs":41,"Frees":30},{"Size":2560,"Mallocs":18,"Frees":15},{"Size":2816,"Mallocs":164,"Frees":157},{"Size":3072,"Mallocs":0,"Frees":0},{"Size":3328,"Mallocs":13,"Frees":6},{"Size":4096,"Mallocs":101,"Frees":82},{"Size":4608,"Mallocs":32,"Frees":26
},{"Size":5376,"Mallocs":165,"Frees":151},{"Size":6144,"Mallocs":15,"Frees":9},{"Size":6400,"Mallocs":1,"Frees":1},{"Size":6656,"Mallocs":1,"Frees":0},{"Size":6912,"Mallocs":0,"Frees":0},{"Size":8192,"Mallocs":13,"Frees":13},{"Size":8448,"Mallocs":0,"Frees":0},{"Size":8704,"Mallocs":1,"Frees":1},{"Size":9472,"Mallocs":6,"Frees":4},{"Size":10496,"Mallocs":0,"Frees":0},{"Size":12288,"Mallocs":41,"Frees":35},{"Size":13568,"Mallocs":0,"Frees":0},{"Size":14080,"Mallocs":0,"Frees":0},{"Size":16384,"Mallocs":4,"Frees":4},{"Size":16640,"Mallocs":0,"Frees":0},{"Size":17664,"Mallocs":0,"Frees":0}]},
"queryExecutor": {"name": "queryExecutor", "tags": {}, "values": {}},
"shard:/Users/csparr/.influxdb/data/_internal/monitor/2:2": {"name": "shard", "tags": {"database": "_internal", "engine": "tsm1", "id": "2", "path": "/Users/csparr/.influxdb/data/_internal/monitor/2", "retentionPolicy": "monitor"}, "values": {}},
"shard:/Users/csparr/.influxdb/data/udp/default/1:1": {"name": "shard", "tags": {"database": "udp", "engine": "tsm1", "id": "1", "path": "/Users/csparr/.influxdb/data/udp/default/1", "retentionPolicy": "default"}, "values": {"fieldsCreate": 61, "seriesCreate": 33, "writePointsOk": 3613, "writeReq": 110}},
"subscriber": {"name": "subscriber", "tags": {}, "values": {"pointsWritten": 3613}},
"tsm1_cache:/Users/csparr/.influxdb/data/_internal/monitor/2": {"name": "tsm1_cache", "tags": {"database": "_internal", "path": "/Users/csparr/.influxdb/data/_internal/monitor/2", "retentionPolicy": "monitor"}, "values": {"WALCompactionTimeMs": 0, "cacheAgeMs": 1103932, "cachedBytes": 0, "diskBytes": 0, "memBytes": 40480, "snapshotCount": 0}},
"tsm1_cache:/Users/csparr/.influxdb/data/udp/default/1": {"name": "tsm1_cache", "tags": {"database": "udp", "path": "/Users/csparr/.influxdb/data/udp/default/1", "retentionPolicy": "default"}, "values": {"WALCompactionTimeMs": 0, "cacheAgeMs": 1103029, "cachedBytes": 0, "diskBytes": 0, "memBytes": 2359472, "snapshotCount": 0}},
"tsm1_filestore:/Users/csparr/.influxdb/data/_internal/monitor/2": {"name": "tsm1_filestore", "tags": {"database": "_internal", "path": "/Users/csparr/.influxdb/data/_internal/monitor/2", "retentionPolicy": "monitor"}, "values": {}},
"tsm1_filestore:/Users/csparr/.influxdb/data/udp/default/1": {"name": "tsm1_filestore", "tags": {"database": "udp", "path": "/Users/csparr/.influxdb/data/udp/default/1", "retentionPolicy": "default"}, "values": {}},
"tsm1_wal:/Users/csparr/.influxdb/wal/_internal/monitor/2": {"name": "tsm1_wal", "tags": {"database": "_internal", "path": "/Users/csparr/.influxdb/wal/_internal/monitor/2", "retentionPolicy": "monitor"}, "values": {"currentSegmentDiskBytes": 0, "oldSegmentsDiskBytes": 69532}},
"tsm1_wal:/Users/csparr/.influxdb/wal/udp/default/1": {"name": "tsm1_wal", "tags": {"database": "udp", "path": "/Users/csparr/.influxdb/wal/udp/default/1", "retentionPolicy": "default"}, "values": {"currentSegmentDiskBytes": 193728, "oldSegmentsDiskBytes": 1008330}},
"write": {"name": "write", "tags": {}, "values": {"pointReq": 3613, "pointReqLocal": 3613, "req": 110, "subWriteOk": 110, "writeOk": 110}}
}`
42 plugins/inputs/ipmi_sensor/README.md Normal file
@@ -0,0 +1,42 @@
# Telegraf ipmi plugin

Get bare metal metrics using the command line utility `ipmitool`.

See [ipmitool](https://sourceforge.net/projects/ipmitool/files/ipmitool/).

The plugin will use the following command to collect remote host sensor stats:

    ipmitool -I lan -H 192.168.1.1 -U USERID -P PASSW0RD sdr

## Measurements

- ipmi_sensor:

    * Tags: `name`, `server`, `unit`
    * Fields:
      - status
      - value

## Configuration

```toml
[[inputs.ipmi_sensor]]
  ## specify servers via a url matching:
  ##  [username[:password]@][protocol[(address)]]
  ##  e.g.
  ##    root:passwd@lan(127.0.0.1)
  ##
  servers = ["USERID:PASSW0RD@lan(10.20.2.203)"]
```

## Output

```
> ipmi_sensor,server=10.20.2.203,unit=degrees_c,name=ambient_temp status=1i,value=20 1458488465012559455
> ipmi_sensor,server=10.20.2.203,unit=feet,name=altitude status=1i,value=80 1458488465012688613
> ipmi_sensor,server=10.20.2.203,unit=watts,name=avg_power status=1i,value=220 1458488465012776511
> ipmi_sensor,server=10.20.2.203,unit=volts,name=planar_3.3v status=1i,value=3.28 1458488465012861875
> ipmi_sensor,server=10.20.2.203,unit=volts,name=planar_vbat status=1i,value=3.04 1458488465013072508
> ipmi_sensor,server=10.20.2.203,unit=rpm,name=fan_1a_tach status=1i,value=2610 1458488465013137932
> ipmi_sensor,server=10.20.2.203,unit=rpm,name=fan_1b_tach status=1i,value=1775 1458488465013279896
```