Compare commits
2880 Commits
v0.9-b1...ga-azure-m
[Commit list: 2880 commits, 4b8d0ad35d through 5b9f7e7bf3; the Author and Date columns were empty in this capture, so the bare SHA1 list is collapsed here.]
106 .circleci/config.yml Normal file
@@ -0,0 +1,106 @@
---
defaults:
  defaults: &defaults
    working_directory: '/go/src/github.com/influxdata/telegraf'
  go-1_8: &go-1_8
    docker:
      - image: 'circleci/golang:1.8.7'
  go-1_9: &go-1_9
    docker:
      - image: 'circleci/golang:1.9.5'
  go-1_10: &go-1_10
    docker:
      - image: 'circleci/golang:1.10.1'

version: 2
jobs:
  deps:
    <<: [ *defaults, *go-1_10 ]
    steps:
      - checkout
      - run: 'make deps'
      - persist_to_workspace:
          root: '/go/src'
          paths:
            - '*'
  test-go-1.8:
    <<: [ *defaults, *go-1_8 ]
    steps:
      - attach_workspace:
          at: '/go/src'
      - run: 'make test-ci'
  test-go-1.9:
    <<: [ *defaults, *go-1_9 ]
    steps:
      - attach_workspace:
          at: '/go/src'
      - run: 'make test-ci'
  test-go-1.10:
    <<: [ *defaults, *go-1_10 ]
    steps:
      - attach_workspace:
          at: '/go/src'
      - run: 'make test-ci'
      - run: 'GOARCH=386 make test-ci'
  release:
    <<: [ *defaults, *go-1_10 ]
    steps:
      - attach_workspace:
          at: '/go/src'
      - run: './scripts/release.sh'
      - store_artifacts:
          path: './artifacts'
          destination: '.'
  nightly:
    <<: [ *defaults, *go-1_10 ]
    steps:
      - attach_workspace:
          at: '/go/src'
      - run: './scripts/release.sh'
      - store_artifacts:
          path: './artifacts'
          destination: '.'

workflows:
  version: 2
  build_and_release:
    jobs:
      - 'deps'
      - 'test-go-1.8':
          requires:
            - 'deps'
      - 'test-go-1.9':
          requires:
            - 'deps'
      - 'test-go-1.10':
          requires:
            - 'deps'
      - 'release':
          requires:
            - 'test-go-1.8'
            - 'test-go-1.9'
            - 'test-go-1.10'
  nightly:
    jobs:
      - 'deps'
      - 'test-go-1.8':
          requires:
            - 'deps'
      - 'test-go-1.9':
          requires:
            - 'deps'
      - 'test-go-1.10':
          requires:
            - 'deps'
      - 'nightly':
          requires:
            - 'test-go-1.8'
            - 'test-go-1.9'
            - 'test-go-1.10'
    triggers:
      - schedule:
          cron: "0 7 * * *"
          filters:
            branches:
              only:
                - master
4 .gitattributes vendored Normal file
@@ -0,0 +1,4 @@
CHANGELOG.md merge=union
README.md merge=union
plugins/inputs/all/all.go merge=union
plugins/outputs/all/all.go merge=union
24 .github/ISSUE_TEMPLATE/Bug_report.md vendored Normal file
@@ -0,0 +1,24 @@
---
name: Bug report
about: Create a report to help us improve

---

### Relevant telegraf.conf:

### System info:

[Include Telegraf version, operating system name, and other relevant details]

### Steps to reproduce:

1. ...
2. ...

### Expected behavior:

### Actual behavior:

### Additional info:

[Include gist of relevant config, logs, etc.]
17 .github/ISSUE_TEMPLATE/Feature_request.md vendored Normal file
@@ -0,0 +1,17 @@
---
name: Feature request
about: Suggest an idea for this project

---

## Feature Request

Opening a feature request kicks off a discussion.

### Proposal:

### Current behavior:

### Desired behavior:

### Use case: [Why is this important (helps with prioritizing requests)]
5 .github/PULL_REQUEST_TEMPLATE.md vendored Normal file
@@ -0,0 +1,5 @@
### Required for all PRs:

- [ ] Signed [CLA](https://influxdata.com/community/cla/).
- [ ] Associated README.md updated.
- [ ] Has appropriate unit tests.
7 .gitignore vendored
@@ -1,3 +1,4 @@
pkg/
tivan
.vagrant
/build
/telegraf
/telegraf.exe
/telegraf.gz
1771 CHANGELOG.md Normal file
File diff suppressed because it is too large
489 CONTRIBUTING.md Normal file
@@ -0,0 +1,489 @@
## Steps for Contributing:

1. [Sign the CLA](http://influxdb.com/community/cla.html)
1. Make changes or write a plugin (see below for details)
1. Add your plugin to one of: `plugins/{inputs,outputs,aggregators,processors}/all/all.go`
1. If your plugin requires a new Go package, [add it](https://github.com/influxdata/telegraf/blob/master/CONTRIBUTING.md#adding-a-dependency)
1. Write a README for your plugin. If it's an input plugin, it should be structured like the [input example here](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/EXAMPLE_README.md). Output plugin READMEs are less structured, but any information you can provide on how the data will look is appreciated. See the [OpenTSDB output](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/opentsdb) for a good example.
1. **Optional:** Help users of your plugin by including example queries for populating dashboards. Include these sample queries in the `README.md` for the plugin.
1. **Optional:** Write a [tickscript](https://docs.influxdata.com/kapacitor/v1.0/tick/syntax/) for your plugin and add it to [Kapacitor](https://github.com/influxdata/kapacitor/tree/master/examples/telegraf).

## GoDoc

Public interfaces for inputs, outputs, processors, aggregators, metrics, and the accumulator can be found on the GoDoc:

[GoDoc](https://godoc.org/github.com/influxdata/telegraf)
## Sign the CLA

Before we can merge a pull request, you will need to sign the CLA, which can be found [on our website](http://influxdb.com/community/cla.html).

## Adding a dependency

Assuming you can already build the project, run these in the telegraf directory:

1. `go get github.com/sparrc/gdm`
1. `gdm restore`
1. `GOOS=linux gdm save`
## Input Plugins

This section is for developers who want to create new collection inputs. Telegraf is entirely plugin driven. This interface allows operators to pick and choose what is gathered and makes it easy for developers to create new ways of generating metrics.

Plugin authorship is kept as simple as possible to encourage people to develop and submit new inputs.

### Input Plugin Guidelines

* A plugin must conform to the [`telegraf.Input`](https://godoc.org/github.com/influxdata/telegraf#Input) interface.
* Input Plugins should call `inputs.Add` in their `init` function to register themselves. See below for a quick example.
* Input Plugins must be added to the `github.com/influxdata/telegraf/plugins/inputs/all/all.go` file.
* The `SampleConfig` function should return valid toml that describes how the plugin can be configured. This is included in the output of `telegraf config`.
* The `Description` function should say in one line what this plugin does.

Let's say you've written a plugin that emits metrics about processes on the current host.

### Input Plugin Example
```go
package simple

// simple.go

import (
	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/inputs"
)

type Simple struct {
	Ok bool
}

func (s *Simple) Description() string {
	return "a demo plugin"
}

func (s *Simple) SampleConfig() string {
	return `
  ## Indicate if everything is fine
  ok = true
`
}

func (s *Simple) Gather(acc telegraf.Accumulator) error {
	if s.Ok {
		acc.AddFields("state", map[string]interface{}{"value": "pretty good"}, nil)
	} else {
		acc.AddFields("state", map[string]interface{}{"value": "not great"}, nil)
	}

	return nil
}

func init() {
	inputs.Add("simple", func() telegraf.Input { return &Simple{} })
}
```
## Adding Typed Metrics

In addition to the `AddFields` function, the accumulator also supports `AddGauge` and `AddCounter` functions. These functions are for adding _typed_ metrics. Metric types are ignored for the InfluxDB output, but can be used for other outputs, such as [prometheus](https://prometheus.io/docs/concepts/metric_types/). A minimal sketch is shown below.
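For illustration, here is a minimal sketch reusing the `Simple` plugin from the example above; the measurement and field names are invented for this example:

```go
func (s *Simple) Gather(acc telegraf.Accumulator) error {
	// A gauge is a point-in-time value that may go up or down.
	acc.AddGauge("simple_queue", map[string]interface{}{"depth": 42}, nil)

	// A counter is a monotonically increasing value.
	acc.AddCounter("simple_requests", map[string]interface{}{"total": 1001}, nil)

	return nil
}
```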
## Input Plugins Accepting Arbitrary Data Formats

Some input plugins (such as [exec](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/exec)) accept arbitrary input data formats. An overview of these data formats can be found [here](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md).

In order to enable this, you must specify a `SetParser(parser parsers.Parser)` function on the plugin object (see the exec plugin for an example), as well as defining `parser` as a field of the object.

You can then utilize the parser internally in your plugin, parsing data as you see fit. Telegraf's configuration layer will take care of instantiating and creating the `Parser` object.

You should also add the following to your `SampleConfig()` return:
```toml
  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"
```
Below is the `Parser` interface.

```go
// Parser is an interface defining functions that a parser plugin must satisfy.
type Parser interface {
	// Parse takes a byte buffer separated by newlines
	// ie, `cpu.usage.idle 90\ncpu.usage.busy 10`
	// and parses it into telegraf metrics
	Parse(buf []byte) ([]telegraf.Metric, error)

	// ParseLine takes a single string metric
	// ie, "cpu.usage.idle 90"
	// and parses it into a telegraf metric.
	ParseLine(line string) (telegraf.Metric, error)
}
```

And you can view the code [here.](https://github.com/influxdata/telegraf/blob/henrypfhu-master/plugins/parsers/registry.go) A sketch of the `SetParser` hook described above follows.
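This is a minimal sketch of the `SetParser` pattern, loosely modeled on the exec plugin; the `Commands` field and the `parse` helper are illustrative, not the plugin's actual code:

```go
package exec

import (
	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/parsers"
)

type Exec struct {
	Commands []string

	// parser is set by Telegraf's configuration layer via SetParser.
	parser parsers.Parser
}

// SetParser receives a parser built from the plugin's data_format setting.
func (e *Exec) SetParser(parser parsers.Parser) {
	e.parser = parser
}

// parse shows how gathered bytes can be turned into metrics.
func (e *Exec) parse(acc telegraf.Accumulator, buf []byte) error {
	metrics, err := e.parser.Parse(buf)
	if err != nil {
		return err
	}
	for _, m := range metrics {
		acc.AddFields(m.Name(), m.Fields(), m.Tags(), m.Time())
	}
	return nil
}
```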
## Service Input Plugins

This section is for developers who want to create new "service" collection inputs. A service plugin differs from a regular plugin in that it operates a background service while Telegraf is running. One example would be the `statsd` plugin, which operates a statsd server.

Service Input Plugins are substantially more complicated than a regular plugin, as they will require threads and locks to verify data integrity. Service Input Plugins should be avoided unless there is no way to create their behavior with a regular plugin.

Their interface is quite similar to a regular plugin, with the addition of `Start()` and `Stop()` methods.

### Service Plugin Guidelines

* Same as the `Plugin` guidelines, except that they must conform to the [`telegraf.ServiceInput`](https://godoc.org/github.com/influxdata/telegraf#ServiceInput) interface; a minimal skeleton is sketched below.
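A minimal sketch of the service half of such a plugin, assuming the `Start(telegraf.Accumulator) error` and `Stop()` signatures from the GoDoc linked above; the plugin name is hypothetical and the regular `Description`, `SampleConfig`, and `Gather` methods are omitted for brevity:

```go
package mysvc

import "github.com/influxdata/telegraf"

type MySvc struct {
	done chan struct{}
}

// Start is called once when Telegraf starts: open listeners, spawn the
// background goroutine, and return quickly.
func (s *MySvc) Start(acc telegraf.Accumulator) error {
	s.done = make(chan struct{})
	go func() {
		// Collect in the background, writing into acc, until stopped.
		<-s.done
	}()
	return nil
}

// Stop is called once at shutdown and must release all resources.
func (s *MySvc) Stop() {
	close(s.done)
}
```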
## Output Plugins

This section is for developers who want to create a new output sink. Outputs are created in a similar manner as collection plugins, and their interface has similar constructs.

### Output Plugin Guidelines

* An output must conform to the [`telegraf.Output`](https://godoc.org/github.com/influxdata/telegraf#Output) interface.
* Outputs should call `outputs.Add` in their `init` function to register themselves. See below for a quick example.
* To be available within Telegraf itself, plugins must add themselves to the `github.com/influxdata/telegraf/plugins/outputs/all/all.go` file.
* The `SampleConfig` function should return valid toml that describes how the output can be configured. This is included in the output of `telegraf config`.
* The `Description` function should say in one line what this output does.

### Output Example
```go
package simpleoutput

// simpleoutput.go

import (
	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/outputs"
)

type Simple struct {
	Ok bool
}

func (s *Simple) Description() string {
	return "a demo output"
}

func (s *Simple) SampleConfig() string {
	return `
  ok = true
`
}

func (s *Simple) Connect() error {
	// Make a connection to the URL here
	return nil
}

func (s *Simple) Close() error {
	// Close connection to the URL here
	return nil
}

func (s *Simple) Write(metrics []telegraf.Metric) error {
	for _, metric := range metrics {
		// write `metric` to the output sink here
		_ = metric
	}
	return nil
}

func init() {
	outputs.Add("simpleoutput", func() telegraf.Output { return &Simple{} })
}
```
## Output Plugins Writing Arbitrary Data Formats

Some output plugins (such as [file](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/file)) can write arbitrary output data formats. An overview of these data formats can be found [here](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md).

In order to enable this, you must specify a `SetSerializer(serializer serializers.Serializer)` function on the plugin object (see the file plugin for an example), as well as defining `serializer` as a field of the object.

You can then utilize the serializer internally in your plugin, serializing data before it's written. Telegraf's configuration layer will take care of instantiating and creating the `Serializer` object. A sketch of this hook is shown after the config snippet below.

You should also add the following to your `SampleConfig()` return:

```toml
  ## Data format to output.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
  data_format = "influx"
```
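This is a minimal sketch of the `SetSerializer` pattern as a variant of the `Simple` output above; the field names are illustrative:

```go
package simpleoutput

import (
	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/serializers"
)

type Simple struct {
	Ok bool

	// serializer is set by Telegraf's configuration layer via SetSerializer.
	serializer serializers.Serializer
}

// SetSerializer receives a serializer built from the plugin's data_format setting.
func (s *Simple) SetSerializer(serializer serializers.Serializer) {
	s.serializer = serializer
}

// Write serializes each metric before sending it to the sink.
func (s *Simple) Write(metrics []telegraf.Metric) error {
	for _, metric := range metrics {
		octets, err := s.serializer.Serialize(metric)
		if err != nil {
			return err
		}
		_ = octets // send the serialized bytes to the output sink here
	}
	return nil
}
```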
## Service Output Plugins

This section is for developers who want to create new "service" outputs. A service output differs from a regular output in that it operates a background service while Telegraf is running. One example would be the `prometheus_client` output, which operates an HTTP server.

Their interface is quite similar to a regular output, with the addition of `Start()` and `Stop()` methods.

### Service Output Guidelines

* Same as the `Output` guidelines, except that they must conform to the `output.ServiceOutput` interface.
## Processor Plugins

This section is for developers who want to create a new processor plugin.

### Processor Plugin Guidelines

* A processor must conform to the [`telegraf.Processor`](https://godoc.org/github.com/influxdata/telegraf#Processor) interface.
* Processors should call `processors.Add` in their `init` function to register themselves. See below for a quick example.
* To be available within Telegraf itself, plugins must add themselves to the `github.com/influxdata/telegraf/plugins/processors/all/all.go` file.
* The `SampleConfig` function should return valid toml that describes how the processor can be configured. This is included in the output of `telegraf config`.
* The `Description` function should say in one line what this processor does.

### Processor Example
```go
package printer

// printer.go

import (
	"fmt"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/processors"
)

type Printer struct{}

var sampleConfig = `
`

func (p *Printer) SampleConfig() string {
	return sampleConfig
}

func (p *Printer) Description() string {
	return "Print all metrics that pass through this filter."
}

func (p *Printer) Apply(in ...telegraf.Metric) []telegraf.Metric {
	for _, metric := range in {
		fmt.Println(metric.String())
	}
	return in
}

func init() {
	processors.Add("printer", func() telegraf.Processor {
		return &Printer{}
	})
}
```
## Aggregator Plugins

This section is for developers who want to create a new aggregator plugin.

### Aggregator Plugin Guidelines

* An aggregator must conform to the [`telegraf.Aggregator`](https://godoc.org/github.com/influxdata/telegraf#Aggregator) interface.
* Aggregators should call `aggregators.Add` in their `init` function to register themselves. See below for a quick example.
* To be available within Telegraf itself, plugins must add themselves to the `github.com/influxdata/telegraf/plugins/aggregators/all/all.go` file.
* The `SampleConfig` function should return valid toml that describes how the aggregator can be configured. This is included in the output of `telegraf config`.
* The `Description` function should say in one line what this aggregator does.
* The aggregator plugin will need to keep caches of metrics that have passed through it. This should be done using the builtin `HashID()` function of each metric.
* When the `Reset()` function is called, all caches should be cleared.

### Aggregator Example
```go
package min

// min.go

import (
	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/aggregators"
)

type Min struct {
	// caches for metric fields, names, and tags
	fieldCache map[uint64]map[string]float64
	nameCache  map[uint64]string
	tagCache   map[uint64]map[string]string
}

func NewMin() telegraf.Aggregator {
	m := &Min{}
	m.Reset()
	return m
}

var sampleConfig = `
  ## period is the flush & clear interval of the aggregator.
  period = "30s"
  ## If true drop_original will drop the original metrics and
  ## only send aggregates.
  drop_original = false
`

func (m *Min) SampleConfig() string {
	return sampleConfig
}

func (m *Min) Description() string {
	return "Keep the aggregate min of each metric passing through."
}

func (m *Min) Add(in telegraf.Metric) {
	id := in.HashID()
	if _, ok := m.nameCache[id]; !ok {
		// hit an uncached metric, create caches for first time:
		m.nameCache[id] = in.Name()
		m.tagCache[id] = in.Tags()
		m.fieldCache[id] = make(map[string]float64)
		for k, v := range in.Fields() {
			if fv, ok := convert(v); ok {
				m.fieldCache[id][k] = fv
			}
		}
	} else {
		for k, v := range in.Fields() {
			if fv, ok := convert(v); ok {
				if _, ok := m.fieldCache[id][k]; !ok {
					// hit an uncached field of a cached metric
					m.fieldCache[id][k] = fv
					continue
				}
				if fv < m.fieldCache[id][k] {
					// set new minimum
					m.fieldCache[id][k] = fv
				}
			}
		}
	}
}

func (m *Min) Push(acc telegraf.Accumulator) {
	for id := range m.nameCache {
		fields := map[string]interface{}{}
		for k, v := range m.fieldCache[id] {
			fields[k+"_min"] = v
		}
		acc.AddFields(m.nameCache[id], fields, m.tagCache[id])
	}
}

func (m *Min) Reset() {
	m.fieldCache = make(map[uint64]map[string]float64)
	m.nameCache = make(map[uint64]string)
	m.tagCache = make(map[uint64]map[string]string)
}

func convert(in interface{}) (float64, bool) {
	switch v := in.(type) {
	case float64:
		return v, true
	case int64:
		return float64(v), true
	default:
		return 0, false
	}
}

func init() {
	aggregators.Add("min", func() telegraf.Aggregator {
		return NewMin()
	})
}
```
## Unit Tests

Before opening a pull request you should run the linter checks and the short tests.

### Execute linter

Execute `make lint`.

### Execute short tests

Execute `make test`.

### Execute integration tests

Running the integration tests requires several docker containers to be running. You can start the containers with:

```
make docker-run
```

And run the full test suite with:

```
make test-all
```

Use `make docker-kill` to stop the containers.
101 Godeps Normal file
@@ -0,0 +1,101 @@
code.cloudfoundry.org/clock e9dc86bbf0e5bbe6bf7ff5a6f71e048959b61f71
collectd.org 2ce144541b8903101fb8f1483cc0497a68798122
github.com/aerospike/aerospike-client-go 9701404f4c60a6ea256595d24bf318f721a7e8b8
github.com/Azure/go-autorest 9ad9326b278af8fa5cc67c30c0ce9a58cc0862b2
github.com/amir/raidman c74861fe6a7bb8ede0a010ce4485bdbb4fc4c985
github.com/apache/thrift 4aaa92ece8503a6da9bc6701604f69acf2b99d07
github.com/aws/aws-sdk-go c861d27d0304a79f727e9a8a4e2ac1e74602fdc0
github.com/beorn7/perks 4c0e84591b9aa9e6dcfdf3e020114cd81f89d5f9
github.com/bsm/sarama-cluster abf039439f66c1ce78017f560b490612552f6472
github.com/cenkalti/backoff b02f2bbce11d7ea6b97f282ef1771b0fe2f65ef3
github.com/couchbase/go-couchbase bfe555a140d53dc1adf390f1a1d4b0fd4ceadb28
github.com/couchbase/gomemcached 4a25d2f4e1dea9ea7dd76dfd943407abf9b07d29
github.com/couchbase/goutils 5823a0cbaaa9008406021dc5daf80125ea30bba6
github.com/davecgh/go-spew 346938d642f2ec3594ed81d874461961cd0faa76
github.com/dgrijalva/jwt-go dbeaa9332f19a944acb5736b4456cfcc02140e29
github.com/docker/docker f5ec1e2936dcbe7b5001c2b817188b095c700c27
github.com/docker/go-connections 990a1a1a70b0da4c4cb70e117971a4f0babfbf1a
github.com/eapache/go-resiliency b86b1ec0dd4209a588dc1285cdd471e73525c0b3
github.com/eapache/go-xerial-snappy bb955e01b9346ac19dc29eb16586c90ded99a98c
github.com/eapache/queue 44cc805cf13205b55f69e14bcb69867d1ae92f98
github.com/eclipse/paho.mqtt.golang aff15770515e3c57fc6109da73d42b0d46f7f483
github.com/go-logfmt/logfmt 390ab7935ee28ec6b286364bba9b4dd6410cb3d5
github.com/go-sql-driver/mysql 2e00b5cd70399450106cec6431c2e2ce3cae5034
github.com/gobwas/glob bea32b9cd2d6f55753d94a28e959b13f0244797a
github.com/go-ini/ini 9144852efba7c4daf409943ee90767da62d55438
github.com/gogo/protobuf 7b6c6391c4ff245962047fc1e2c6e08b1cdfa0e8
github.com/golang/protobuf 8ee79997227bf9b34611aee7946ae64735e6fd93
github.com/golang/snappy 7db9049039a047d955fe8c19b83c8ff5abd765c7
github.com/go-ole/go-ole be49f7c07711fcb603cff39e1de7c67926dc0ba7
github.com/google/go-cmp f94e52cad91c65a63acc1e75d4be223ea22e99bc
github.com/gorilla/mux 53c1911da2b537f792e7cafcb446b05ffe33b996
github.com/go-redis/redis 73b70592cdaa9e6abdfcfbf97b4a90d80728c836
github.com/go-sql-driver/mysql 2e00b5cd70399450106cec6431c2e2ce3cae5034
github.com/hailocab/go-hostpool e80d13ce29ede4452c43dea11e79b9bc8a15b478
github.com/hashicorp/consul 5174058f0d2bda63fa5198ab96c33d9a909c58ed
github.com/influxdata/tail c43482518d410361b6c383d7aebce33d0471d7bc
github.com/influxdata/toml 5d1d907f22ead1cd47adde17ceec5bda9cacaf8f
github.com/influxdata/wlog 7c63b0a71ef8300adc255344d275e10e5c3a71ec
github.com/fsnotify/fsnotify c2828203cd70a50dcccfb2761f8b1f8ceef9a8e9
github.com/jackc/pgx 63f58fd32edb5684b9e9f4cfaac847c6b42b3917
github.com/jmespath/go-jmespath bd40a432e4c76585ef6b72d3fd96fb9b6dc7b68d
github.com/kardianos/osext c2c54e542fb797ad986b31721e1baedf214ca413
github.com/kardianos/service 6d3a0ee7d3425d9d835debc51a0ca1ffa28f4893
github.com/kballard/go-shellquote d8ec1a69a250a17bb0e419c386eac1f3711dc142
github.com/matttproud/golang_protobuf_extensions c12348ce28de40eed0136aa2b644d0ee0650e56c
github.com/Microsoft/ApplicationInsights-Go 3612f58550c1de70f1a110c78c830e55f29aa65d
github.com/Microsoft/go-winio ce2922f643c8fd76b46cadc7f404a06282678b34
github.com/miekg/dns 99f84ae56e75126dd77e5de4fae2ea034a468ca1
github.com/mitchellh/mapstructure d0303fe809921458f417bcf828397a65db30a7e4
github.com/multiplay/go-ts3 07477f49b8dfa3ada231afc7b7b17617d42afe8e
github.com/naoina/go-stringutil 6b638e95a32d0c1131db0e7fe83775cbea4a0d0b
github.com/nats-io/gnatsd 393bbb7c031433e68707c8810fda0bfcfbe6ab9b
github.com/nats-io/go-nats ea9585611a4ab58a205b9b125ebd74c389a6b898
github.com/nats-io/nats ea9585611a4ab58a205b9b125ebd74c389a6b898
github.com/nats-io/nuid 289cccf02c178dc782430d534e3c1f5b72af807f
github.com/nsqio/go-nsq eee57a3ac4174c55924125bb15eeeda8cffb6e6f
github.com/opencontainers/runc 89ab7f2ccc1e45ddf6485eaa802c35dcf321dfc8
github.com/opentracing-contrib/go-observer a52f2342449246d5bcc273e65cbdcfa5f7d6c63c
github.com/opentracing/opentracing-go 06f47b42c792fef2796e9681353e1d908c417827
github.com/openzipkin/zipkin-go-opentracing 1cafbdfde94fbf2b373534764e0863aa3bd0bf7b
github.com/pierrec/lz4 5c9560bfa9ace2bf86080bf40d46b34ae44604df
github.com/pierrec/xxHash 5a004441f897722c627870a981d02b29924215fa
github.com/pkg/errors 645ef00459ed84a119197bfb8d8205042c6df63d
github.com/pmezard/go-difflib/difflib 792786c7400a136282c1664665ae0a8db921c6c2
github.com/prometheus/client_golang c317fb74746eac4fc65fe3909195f4cf67c5562a
github.com/prometheus/client_model fa8ad6fec33561be4280a8f0514318c79d7f6cb6
github.com/prometheus/common dd2f054febf4a6c00f2343686efb775948a8bff4
github.com/prometheus/procfs 1878d9fbb537119d24b21ca07effd591627cd160
github.com/rcrowley/go-metrics 1f30fe9094a513ce4c700b9a54458bbb0c96996c
github.com/samuel/go-zookeeper 1d7be4effb13d2d908342d349d71a284a7542693
github.com/satori/go.uuid 5bf94b69c6b68ee1b541973bb8e1144db23a194b
github.com/shirou/gopsutil c95755e4bcd7a62bb8bd33f3a597a7c7f35e2cf3
github.com/shirou/w32 3c9377fc6748f222729a8270fe2775d149a249ad
github.com/Shopify/sarama 3b1b38866a79f06deddf0487d5c27ba0697ccd65
github.com/Sirupsen/logrus 61e43dc76f7ee59a82bdf3d71033dc12bea4c77d
github.com/soniah/gosnmp f15472a4cd6f6ea7929e4c7d9f163c49f059924f
github.com/StackExchange/wmi f3e2bae1e0cb5aef83e319133eabfee30013a4a5
github.com/streadway/amqp 63795daa9a446c920826655f26ba31c81c860fd6
github.com/stretchr/objx facf9a85c22f48d2f52f2380e4efce1768749a89
github.com/stretchr/testify 12b6f73e6084dad08a7c6e575284b177ecafbc71
github.com/tidwall/gjson 0623bd8fbdbf97cc62b98d15108832851a658e59
github.com/tidwall/match 173748da739a410c5b0b813b956f89ff94730b4c
github.com/vjeantet/grok d73e972b60935c7fec0b4ffbc904ed39ecaf7efe
github.com/wvanbergen/kafka bc265fedb9ff5b5c5d3c0fdcef4a819b3523d3ee
github.com/wvanbergen/kazoo-go 968957352185472eacb69215fa3dbfcfdbac1096
github.com/yuin/gopher-lua 66c871e454fcf10251c61bf8eff02d0978cae75a
github.com/zensqlmonitor/go-mssqldb ffe5510c6fa5e15e6d983210ab501c815b56b363
golang.org/x/crypto dc137beb6cce2043eb6b5f223ab8bf51c32459f4
golang.org/x/net a337091b0525af65de94df2eb7e98bd9962dcbe2
golang.org/x/sys 739734461d1c916b6c72a63d7efda2b27edb369f
golang.org/x/text 506f9d5c962f284575e88337e7d9296d27e729d3
google.golang.org/genproto 11c7f9e547da6db876260ce49ea7536985904c9b
google.golang.org/grpc de2209a968d48e8970546c8a710189f7461370f7
gopkg.in/asn1-ber.v1 4e86f4367175e39f69d9358a5f17b4dda270378d
gopkg.in/fatih/pool.v2 6e328e67893eb46323ad06f0e92cb9536babbabc
gopkg.in/gorethink/gorethink.v3 7ab832f7b65573104a555d84a27992ae9ea1f659
gopkg.in/ldap.v2 8168ee085ee43257585e50c6441aadf54ecb2c9f
gopkg.in/mgo.v2 3f83fa5005286a7fe593b055f0d7771a7dce4655
gopkg.in/olivere/elastic.v5 3113f9b9ad37509fe5f8a0e5e91c96fdc4435e26
gopkg.in/tomb.v1 dd632973f1e7218eb1089048e0798ec9ae7dceb8
gopkg.in/yaml.v2 4c78c975fe7c825c6d1466c42be594d1d6f3aba6
95 Makefile Normal file
@@ -0,0 +1,95 @@
PREFIX := /usr/local
VERSION := $(shell git describe --exact-match --tags 2>/dev/null)
BRANCH := $(shell git rev-parse --abbrev-ref HEAD)
COMMIT := $(shell git rev-parse --short HEAD)
GOFILES ?= $(shell git ls-files '*.go')
GOFMT ?= $(shell gofmt -l $(filter-out plugins/parsers/influx/machine.go, $(GOFILES)))
BUILDFLAGS ?=

ifdef GOBIN
PATH := $(GOBIN):$(PATH)
else
PATH := $(subst :,/bin:,$(GOPATH))/bin:$(PATH)
endif

LDFLAGS := $(LDFLAGS) -X main.commit=$(COMMIT) -X main.branch=$(BRANCH)
ifdef VERSION
	LDFLAGS += -X main.version=$(VERSION)
endif

all:
	$(MAKE) deps
	$(MAKE) telegraf

deps:
	go get -u github.com/golang/lint/golint
	go get github.com/sparrc/gdm
	gdm restore --parallel=false

telegraf:
	go build -ldflags "$(LDFLAGS)" ./cmd/telegraf

go-install:
	go install -ldflags "-w -s $(LDFLAGS)" ./cmd/telegraf

install: telegraf
	mkdir -p $(DESTDIR)$(PREFIX)/bin/
	cp $(TELEGRAF) $(DESTDIR)$(PREFIX)/bin/

test:
	go test -short ./...

fmt:
	@gofmt -w $(filter-out plugins/parsers/influx/machine.go, $(GOFILES))

fmtcheck:
	@echo '[INFO] running gofmt to identify incorrectly formatted code...'
	@if [ ! -z "$(GOFMT)" ]; then \
		echo "[ERROR] gofmt has found errors in the following files:" ; \
		echo "$(GOFMT)" ; \
		echo "" ;\
		echo "Run make fmt to fix them." ; \
		exit 1 ;\
	fi
	@echo '[INFO] done.'

test-windows:
	go test ./plugins/inputs/ping/...
	go test ./plugins/inputs/win_perf_counters/...
	go test ./plugins/inputs/win_services/...
	go test ./plugins/inputs/procstat/...
	go test ./plugins/inputs/ntpq/...

# vet runs the Go source code static analysis tool `vet` to find
# any common errors.
vet:
	@echo 'go vet $$(go list ./... | grep -v ./plugins/parsers/influx)'
	@go vet $$(go list ./... | grep -v ./plugins/parsers/influx) ; if [ $$? -ne 0 ]; then \
		echo ""; \
		echo "go vet has found suspicious constructs. Please remediate any reported errors"; \
		echo "to fix them before submitting code for review."; \
		exit 1; \
	fi

test-ci: fmtcheck vet
	go test -short ./...

test-all: fmtcheck vet
	go test ./...

package:
	./scripts/build.py --package --platform=all --arch=all

clean:
	rm -f telegraf
	rm -f telegraf.exe

docker-image:
	./scripts/build.py --package --platform=linux --arch=amd64
	cp build/telegraf*$(COMMIT)*.deb .
	docker build -f scripts/dev.docker --build-arg "package=telegraf*$(COMMIT)*.deb" -t "telegraf-dev:$(COMMIT)" .

plugins/parsers/influx/machine.go: plugins/parsers/influx/machine.go.rl
	ragel -Z -G2 $^ -o $@

.PHONY: deps telegraf install test test-windows lint vet test-all package clean docker-image fmtcheck uint64
314 README.md
@@ -1 +1,313 @@
# tivan
# Telegraf [Circle CI](https://circleci.com/gh/influxdata/telegraf) [Docker](https://hub.docker.com/_/telegraf/)

Telegraf is an agent written in Go for collecting, processing, aggregating, and writing metrics.

Design goals are to have a minimal memory footprint with a plugin system so that developers in the community can easily add support for collecting metrics from local or remote services.

Telegraf is plugin-driven and has the concept of 4 distinct plugin types:

1. [Input Plugins](#input-plugins) collect metrics from the system, services, or 3rd party APIs
2. [Processor Plugins](#processor-plugins) transform, decorate, and/or filter metrics
3. [Aggregator Plugins](#aggregator-plugins) create aggregate metrics (e.g. mean, min, max, quantiles, etc.)
4. [Output Plugins](#output-plugins) write metrics to various destinations

For more information on Processor and Aggregator plugins please [read this](./docs/AGGREGATORS_AND_PROCESSORS.md).

New plugins are designed to be easy to contribute; we'll eagerly accept pull requests and will manage the set of plugins that Telegraf supports.

## Contributing

There are many ways to contribute:
- Fix and [report bugs](https://github.com/influxdata/telegraf/issues/new)
- [Improve documentation](https://github.com/influxdata/telegraf/issues?q=is%3Aopen+label%3Adocumentation)
- [Review code and feature proposals](https://github.com/influxdata/telegraf/pulls)
- Answer questions on github and on the [Community Site](https://community.influxdata.com/)
- [Contribute plugins](CONTRIBUTING.md)

## Installation:

You can download the binaries directly from the [downloads](https://www.influxdata.com/downloads) page or from the [releases](https://github.com/influxdata/telegraf/releases) section.

### Ansible Role:

Ansible role: https://github.com/rossmcdonald/telegraf

### From Source:

Telegraf requires golang version 1.8+, and the Makefile requires GNU make.

Dependencies are managed with [gdm](https://github.com/sparrc/gdm), which is installed by the Makefile if you don't have it already.

1. [Install Go](https://golang.org/doc/install)
2. [Setup your GOPATH](https://golang.org/doc/code.html#GOPATH)
3. Run `go get -d github.com/influxdata/telegraf`
4. Run `cd $GOPATH/src/github.com/influxdata/telegraf`
5. Run `make`

### Nightly Builds

These builds are generated from the master branch:
- [telegraf_nightly_amd64.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_amd64.deb)
- [telegraf_nightly_arm64.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_arm64.deb)
- [telegraf-nightly.arm64.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.arm64.rpm)
- [telegraf_nightly_armel.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_armel.deb)
- [telegraf-nightly.armel.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.armel.rpm)
- [telegraf_nightly_armhf.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_armhf.deb)
- [telegraf-nightly.armv6hl.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.armv6hl.rpm)
- [telegraf-nightly_freebsd_amd64.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_freebsd_amd64.tar.gz)
- [telegraf-nightly_freebsd_i386.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_freebsd_i386.tar.gz)
- [telegraf_nightly_i386.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_i386.deb)
- [telegraf-nightly.i386.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.i386.rpm)
- [telegraf-nightly_linux_amd64.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_amd64.tar.gz)
- [telegraf-nightly_linux_arm64.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_arm64.tar.gz)
- [telegraf-nightly_linux_armel.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_armel.tar.gz)
- [telegraf-nightly_linux_armhf.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_armhf.tar.gz)
- [telegraf-nightly_linux_i386.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_i386.tar.gz)
- [telegraf-nightly_linux_s390x.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_s390x.tar.gz)
- [telegraf_nightly_s390x.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_s390x.deb)
- [telegraf-nightly.s390x.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.s390x.rpm)
- [telegraf-nightly_windows_amd64.zip](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_windows_amd64.zip)
- [telegraf-nightly_windows_i386.zip](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_windows_i386.zip)
- [telegraf-nightly.x86_64.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.x86_64.rpm)
- [telegraf-static-nightly_linux_amd64.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-static-nightly_linux_amd64.tar.gz)
## How to use it:

See usage with:

```
./telegraf --help
```

#### Generate a telegraf config file:

```
./telegraf config > telegraf.conf
```

#### Generate config with only cpu input & influxdb output plugins defined:

```
./telegraf --input-filter cpu --output-filter influxdb config
```

#### Run a single telegraf collection, outputting metrics to stdout:

```
./telegraf --config telegraf.conf --test
```

#### Run telegraf with all plugins defined in config file:

```
./telegraf --config telegraf.conf
```

#### Run telegraf, enabling the cpu & memory input, and influxdb output plugins:

```
./telegraf --config telegraf.conf --input-filter cpu:mem --output-filter influxdb
```
## Configuration

See the [configuration guide](docs/CONFIGURATION.md) for a rundown of the more advanced configuration options.

## Input Plugins

* [aerospike](./plugins/inputs/aerospike)
* [amqp_consumer](./plugins/inputs/amqp_consumer) (rabbitmq)
* [apache](./plugins/inputs/apache)
* [aurora](./plugins/inputs/aurora)
* [aws cloudwatch](./plugins/inputs/cloudwatch)
* [bcache](./plugins/inputs/bcache)
* [bond](./plugins/inputs/bond)
* [cassandra](./plugins/inputs/cassandra) (deprecated, use [jolokia2](./plugins/inputs/jolokia2))
* [burrow](./plugins/inputs/burrow)
* [ceph](./plugins/inputs/ceph)
* [cgroup](./plugins/inputs/cgroup)
* [chrony](./plugins/inputs/chrony)
* [consul](./plugins/inputs/consul)
* [conntrack](./plugins/inputs/conntrack)
* [couchbase](./plugins/inputs/couchbase)
* [couchdb](./plugins/inputs/couchdb)
* [DC/OS](./plugins/inputs/dcos)
* [disque](./plugins/inputs/disque)
* [dmcache](./plugins/inputs/dmcache)
* [dns query time](./plugins/inputs/dns_query)
* [docker](./plugins/inputs/docker)
* [dovecot](./plugins/inputs/dovecot)
* [elasticsearch](./plugins/inputs/elasticsearch)
* [exec](./plugins/inputs/exec) (generic executable plugin, supports JSON, influx, graphite and nagios)
* [fail2ban](./plugins/inputs/fail2ban)
* [fibaro](./plugins/inputs/fibaro)
* [filestat](./plugins/inputs/filestat)
* [fluentd](./plugins/inputs/fluentd)
* [graylog](./plugins/inputs/graylog)
* [haproxy](./plugins/inputs/haproxy)
* [hddtemp](./plugins/inputs/hddtemp)
* [http](./plugins/inputs/http) (generic HTTP plugin, supports using input data formats)
* [http_response](./plugins/inputs/http_response)
* [httpjson](./plugins/inputs/httpjson) (generic JSON-emitting http service plugin)
* [internal](./plugins/inputs/internal)
* [influxdb](./plugins/inputs/influxdb)
* [interrupts](./plugins/inputs/interrupts)
* [ipmi_sensor](./plugins/inputs/ipmi_sensor)
* [iptables](./plugins/inputs/iptables)
* [ipset](./plugins/inputs/ipset)
* [jolokia](./plugins/inputs/jolokia) (deprecated, use [jolokia2](./plugins/inputs/jolokia2))
* [jolokia2](./plugins/inputs/jolokia2) (java, cassandra, kafka)
* [jti_openconfig_telemetry](./plugins/inputs/jti_openconfig_telemetry)
* [kapacitor](./plugins/inputs/kapacitor)
* [kubernetes](./plugins/inputs/kubernetes)
* [leofs](./plugins/inputs/leofs)
* [lustre2](./plugins/inputs/lustre2)
* [mailchimp](./plugins/inputs/mailchimp)
* [mcrouter](./plugins/inputs/mcrouter)
* [memcached](./plugins/inputs/memcached)
* [mesos](./plugins/inputs/mesos)
* [minecraft](./plugins/inputs/minecraft)
* [mongodb](./plugins/inputs/mongodb)
* [mysql](./plugins/inputs/mysql)
* [nats](./plugins/inputs/nats)
* [net_response](./plugins/inputs/net_response)
* [nginx](./plugins/inputs/nginx)
* [nginx_plus](./plugins/inputs/nginx_plus)
* [nsq](./plugins/inputs/nsq)
* [nstat](./plugins/inputs/nstat)
* [ntpq](./plugins/inputs/ntpq)
* [nvidia_smi](./plugins/inputs/nvidia_smi)
* [openldap](./plugins/inputs/openldap)
* [opensmtpd](./plugins/inputs/opensmtpd)
* [pf](./plugins/inputs/pf)
* [phpfpm](./plugins/inputs/phpfpm)
* [phusion passenger](./plugins/inputs/passenger)
* [ping](./plugins/inputs/ping)
* [postfix](./plugins/inputs/postfix)
* [postgresql_extensible](./plugins/inputs/postgresql_extensible)
* [postgresql](./plugins/inputs/postgresql)
* [powerdns](./plugins/inputs/powerdns)
* [procstat](./plugins/inputs/procstat)
* [prometheus](./plugins/inputs/prometheus) (can be used for [Caddy server](./plugins/inputs/prometheus/README.md#usage-for-caddy-http-server))
* [puppetagent](./plugins/inputs/puppetagent)
* [rabbitmq](./plugins/inputs/rabbitmq)
* [raindrops](./plugins/inputs/raindrops)
* [redis](./plugins/inputs/redis)
* [rethinkdb](./plugins/inputs/rethinkdb)
* [riak](./plugins/inputs/riak)
* [salesforce](./plugins/inputs/salesforce)
* [sensors](./plugins/inputs/sensors)
* [smart](./plugins/inputs/smart)
* [snmp](./plugins/inputs/snmp)
* [snmp_legacy](./plugins/inputs/snmp_legacy)
* [solr](./plugins/inputs/solr)
* [sql server](./plugins/inputs/sqlserver) (microsoft)
* [teamspeak](./plugins/inputs/teamspeak)
* [tomcat](./plugins/inputs/tomcat)
* [twemproxy](./plugins/inputs/twemproxy)
* [unbound](./plugins/inputs/unbound)
* [varnish](./plugins/inputs/varnish)
* [zfs](./plugins/inputs/zfs)
* [zookeeper](./plugins/inputs/zookeeper)
* [win_perf_counters](./plugins/inputs/win_perf_counters) (windows performance counters)
* [win_services](./plugins/inputs/win_services)
* [sysstat](./plugins/inputs/sysstat)
* [system](./plugins/inputs/system)
  * cpu
  * mem
  * net
  * netstat
  * disk
  * diskio
  * swap
  * processes
  * kernel (/proc/stat)
  * kernel (/proc/vmstat)
  * linux_sysctl_fs (/proc/sys/fs)

Telegraf can also collect metrics via the following service plugins:

* [http_listener](./plugins/inputs/http_listener)
* [kafka_consumer](./plugins/inputs/kafka_consumer)
* [mqtt_consumer](./plugins/inputs/mqtt_consumer)
* [nats_consumer](./plugins/inputs/nats_consumer)
* [nsq_consumer](./plugins/inputs/nsq_consumer)
* [logparser](./plugins/inputs/logparser)
* [statsd](./plugins/inputs/statsd)
* [socket_listener](./plugins/inputs/socket_listener)
* [tail](./plugins/inputs/tail)
* [tcp_listener](./plugins/inputs/socket_listener)
* [udp_listener](./plugins/inputs/socket_listener)
* [webhooks](./plugins/inputs/webhooks)
  * [filestack](./plugins/inputs/webhooks/filestack)
  * [github](./plugins/inputs/webhooks/github)
  * [mandrill](./plugins/inputs/webhooks/mandrill)
  * [papertrail](./plugins/inputs/webhooks/papertrail)
  * [particle](./plugins/inputs/webhooks/particle)
  * [rollbar](./plugins/inputs/webhooks/rollbar)
* [zipkin](./plugins/inputs/zipkin)

Telegraf is able to parse the following input data formats into metrics; these formats may be used with input plugins supporting the `data_format` option:

* [InfluxDB Line Protocol](./docs/DATA_FORMATS_INPUT.md#influx)
* [JSON](./docs/DATA_FORMATS_INPUT.md#json)
* [Graphite](./docs/DATA_FORMATS_INPUT.md#graphite)
* [Value](./docs/DATA_FORMATS_INPUT.md#value)
* [Nagios](./docs/DATA_FORMATS_INPUT.md#nagios)
* [Collectd](./docs/DATA_FORMATS_INPUT.md#collectd)
* [Dropwizard](./docs/DATA_FORMATS_INPUT.md#dropwizard)

## Processor Plugins

* [converter](./plugins/processors/converter)
* [override](./plugins/processors/override)
* [printer](./plugins/processors/printer)
* [regex](./plugins/processors/regex)
* [topk](./plugins/processors/topk)

## Aggregator Plugins

* [basicstats](./plugins/aggregators/basicstats)
* [minmax](./plugins/aggregators/minmax)
* [histogram](./plugins/aggregators/histogram)

## Output Plugins

* [influxdb](./plugins/outputs/influxdb)
* [amon](./plugins/outputs/amon)
* [amqp](./plugins/outputs/amqp) (rabbitmq)
* [application_insights](./plugins/outputs/application_insights)
* [aws kinesis](./plugins/outputs/kinesis)
* [aws cloudwatch](./plugins/outputs/cloudwatch)
* [cratedb](./plugins/outputs/cratedb)
* [datadog](./plugins/outputs/datadog)
* [discard](./plugins/outputs/discard)
* [elasticsearch](./plugins/outputs/elasticsearch)
* [file](./plugins/outputs/file)
* [graphite](./plugins/outputs/graphite)
* [graylog](./plugins/outputs/graylog)
* [http](./plugins/outputs/http)
* [instrumental](./plugins/outputs/instrumental)
* [kafka](./plugins/outputs/kafka)
* [librato](./plugins/outputs/librato)
* [mqtt](./plugins/outputs/mqtt)
* [nats](./plugins/outputs/nats)
* [nsq](./plugins/outputs/nsq)
* [opentsdb](./plugins/outputs/opentsdb)
* [prometheus](./plugins/outputs/prometheus_client)
* [riemann](./plugins/outputs/riemann)
* [riemann_legacy](./plugins/outputs/riemann_legacy)
* [socket_writer](./plugins/outputs/socket_writer)
* [tcp](./plugins/outputs/socket_writer)
* [udp](./plugins/outputs/socket_writer)
* [wavefront](./plugins/outputs/wavefront)
122 Vagrantfile vendored
@@ -1,122 +0,0 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  # All Vagrant configuration is done here. The most common configuration
  # options are documented and commented below. For a complete reference,
  # please see the online documentation at vagrantup.com.

  # Every Vagrant virtual environment requires a box to build off of.
  config.vm.box = "ubuntu/trusty64"

  # Disable automatic box update checking. If you disable this, then
  # boxes will only be checked for updates when the user runs
  # `vagrant box outdated`. This is not recommended.
  # config.vm.box_check_update = false

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine. In the example below,
  # accessing "localhost:8080" will access port 80 on the guest machine.
  # config.vm.network "forwarded_port", guest: 80, host: 8080

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  # config.vm.network "private_network", ip: "192.168.33.10"

  # Create a public network, which generally matched to bridged network.
  # Bridged networks make the machine appear as another physical device on
  # your network.
  # config.vm.network "public_network"

  # If true, then any SSH connections made will enable agent forwarding.
  # Default value: false
  # config.ssh.forward_agent = true

  # Share an additional folder to the guest VM. The first argument is
  # the path on the host to the actual folder. The second argument is
  # the path on the guest to mount the folder. And the optional third
  # argument is a set of non-required options.
  config.vm.synced_folder "~/go", "/home/vagrant/go"

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  #
  config.vm.provider "virtualbox" do |vb|
    # # Don't boot with headless mode
    # vb.gui = true
    #
    # # Use VBoxManage to customize the VM. For example to change memory:
    vb.customize ["modifyvm", :id, "--memory", "1024"]
  end
  #
  # View the documentation for the provider you're using for more
  # information on available options.

  # Enable provisioning with CFEngine. CFEngine Community packages are
  # automatically installed. For example, configure the host as a
  # policy server and optionally a policy file to run:
  #
  # config.vm.provision "cfengine" do |cf|
  #   cf.am_policy_hub = true
  #   # cf.run_file = "motd.cf"
  # end
  #
  # You can also configure and bootstrap a client to an existing
  # policy server:
  #
  # config.vm.provision "cfengine" do |cf|
  #   cf.policy_server_address = "10.0.2.15"
  # end

  # Enable provisioning with Puppet stand alone. Puppet manifests
  # are contained in a directory path relative to this Vagrantfile.
  # You will need to create the manifests directory and a manifest in
  # the file default.pp in the manifests_path directory.
  #
  # config.vm.provision "puppet" do |puppet|
  #   puppet.manifests_path = "manifests"
  #   puppet.manifest_file = "site.pp"
  # end

  # Enable provisioning with chef solo, specifying a cookbooks path, roles
  # path, and data_bags path (all relative to this Vagrantfile), and adding
  # some recipes and/or roles.
  #
  # config.vm.provision "chef_solo" do |chef|
  #   chef.cookbooks_path = "../my-recipes/cookbooks"
  #   chef.roles_path = "../my-recipes/roles"
  #   chef.data_bags_path = "../my-recipes/data_bags"
  #   chef.add_recipe "mysql"
  #   chef.add_role "web"
  #
  #   # You may also specify custom JSON attributes:
  #   chef.json = { mysql_password: "foo" }
  # end

  # Enable provisioning with chef server, specifying the chef server URL,
  # and the path to the validation key (relative to this Vagrantfile).
  #
  # The Opscode Platform uses HTTPS. Substitute your organization for
  # ORGNAME in the URL and validation key.
  #
  # If you have your own Chef Server, use the appropriate URL, which may be
  # HTTP instead of HTTPS depending on your configuration. Also change the
  # validation key to validation.pem.
  #
  # config.vm.provision "chef_client" do |chef|
  #   chef.chef_server_url = "https://api.opscode.com/organizations/ORGNAME"
  #   chef.validation_key_path = "ORGNAME-validator.pem"
  # end
  #
  # If you're using the Opscode platform, your validator client is
  # ORGNAME-validator, replacing ORGNAME with your organization name.
  #
  # If you have your own Chef Server, the default validation client name is
  # chef-validator, unless you changed the configuration.
  #
  #   chef.validation_client_name = "ORGNAME-validator"
end
@@ -1,37 +1,46 @@
-package tivan
+package telegraf
 
-import (
-	"fmt"
-	"sort"
-	"strings"
-
-	"github.com/influxdb/influxdb/client"
-)
+import "time"
 
-type BatchPoints struct {
-	client.BatchPoints
-
-	Debug bool
-}
-
-func (bp *BatchPoints) Add(name string, val interface{}, tags map[string]string) {
-	if bp.Debug {
-		var tg []string
-
-		for k, v := range tags {
-			tg = append(tg, fmt.Sprintf("%s=\"%s\"", k, v))
-		}
-
-		sort.Strings(tg)
-
-		fmt.Printf("> [%s] %s=%v\n", strings.Join(tg, " "), name, val)
-	}
-
-	bp.Points = append(bp.Points, client.Point{
-		Name: name,
-		Tags: tags,
-		Fields: map[string]interface{}{
-			"value": val,
-		},
-	})
-}
+// Accumulator is an interface for "accumulating" metrics from plugin(s).
+// The metrics are sent down a channel shared between all plugins.
+type Accumulator interface {
+	// AddFields adds a metric to the accumulator with the given measurement
+	// name, fields, and tags (and timestamp). If a timestamp is not provided,
+	// then the accumulator sets it to "now".
+	// Create a point with a value, decorating it with tags
+	// NOTE: tags is expected to be owned by the caller, don't mutate
+	// it after passing to Add.
+	AddFields(measurement string,
+		fields map[string]interface{},
+		tags map[string]string,
+		t ...time.Time)
+
+	// AddGauge is the same as AddFields, but will add the metric as a "Gauge" type
+	AddGauge(measurement string,
+		fields map[string]interface{},
+		tags map[string]string,
+		t ...time.Time)
+
+	// AddCounter is the same as AddFields, but will add the metric as a "Counter" type
+	AddCounter(measurement string,
+		fields map[string]interface{},
+		tags map[string]string,
+		t ...time.Time)
+
+	// AddSummary is the same as AddFields, but will add the metric as a "Summary" type
+	AddSummary(measurement string,
+		fields map[string]interface{},
+		tags map[string]string,
+		t ...time.Time)
+
+	// AddHistogram is the same as AddFields, but will add the metric as a "Histogram" type
+	AddHistogram(measurement string,
+		fields map[string]interface{},
+		tags map[string]string,
+		t ...time.Time)
+
+	SetPrecision(precision, interval time.Duration)
+
+	AddError(err error)
+}
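For orientation, here is a minimal sketch of how an input plugin targets the new `Accumulator` interface. The plugin name `example` and its `uptime_s` field are invented for illustration; only the `telegraf.Accumulator` shape and the `inputs.Add` registration reflect interfaces in this repository.

```go
package example

import (
	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/inputs"
)

// ExampleInput is a hypothetical input plugin used only to illustrate
// the Accumulator interface; it is not part of this changeset.
type ExampleInput struct{}

func (e *ExampleInput) SampleConfig() string { return "" }
func (e *ExampleInput) Description() string  { return "a hypothetical example input" }

// Gather emits one untyped metric per collection interval. The optional
// timestamp argument is omitted, so the accumulator stamps it with "now".
func (e *ExampleInput) Gather(acc telegraf.Accumulator) error {
	fields := map[string]interface{}{"uptime_s": 42}
	tags := map[string]string{"source": "example"}
	acc.AddFields("example", fields, tags)
	return nil
}

func init() {
	inputs.Add("example", func() telegraf.Input { return &ExampleInput{} })
}
```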
155 agent.go
@@ -1,155 +0,0 @@
package tivan

import (
	"log"
	"net/url"
	"os"
	"sort"
	"time"

	"github.com/influxdb/influxdb/client"
	"github.com/influxdb/tivan/plugins"
)

type Agent struct {
	Interval Duration
	Debug    bool
	HTTP     string
	Hostname string

	Config *Config

	plugins []plugins.Plugin

	conn *client.Client
}

func NewAgent(config *Config) (*Agent, error) {
	agent := &Agent{Config: config}

	err := config.Apply("agent", agent)
	if err != nil {
		return nil, err
	}

	if agent.Hostname == "" {
		hostname, err := os.Hostname()
		if err != nil {
			return nil, err
		}

		agent.Hostname = hostname

		if config.Tags == nil {
			config.Tags = map[string]string{}
		}

		config.Tags["host"] = agent.Hostname
	}

	return agent, nil
}

func (agent *Agent) Connect() error {
	config := agent.Config

	u, err := url.Parse(config.URL)
	if err != nil {
		return err
	}

	c, err := client.NewClient(client.Config{
		URL:       *u,
		Username:  config.Username,
		Password:  config.Password,
		UserAgent: config.UserAgent,
	})

	if err != nil {
		return err
	}

	agent.conn = c

	return nil
}

func (a *Agent) LoadPlugins() ([]string, error) {
	var names []string

	for name, creator := range plugins.Plugins {
		plugin := creator()

		err := a.Config.Apply(name, plugin)
		if err != nil {
			return nil, err
		}

		a.plugins = append(a.plugins, plugin)
		names = append(names, name)
	}

	sort.Strings(names)

	return names, nil
}

func (a *Agent) crank() error {
	var acc BatchPoints

	acc.Debug = a.Debug

	for _, plugin := range a.plugins {
		err := plugin.Gather(&acc)
		if err != nil {
			return err
		}
	}

	acc.Tags = a.Config.Tags
	acc.Timestamp = time.Now()
	acc.Database = a.Config.Database

	_, err := a.conn.Write(acc.BatchPoints)
	return err
}

func (a *Agent) Test() error {
	var acc BatchPoints

	acc.Debug = true

	for _, plugin := range a.plugins {
		err := plugin.Gather(&acc)
		if err != nil {
			return err
		}
	}

	return nil
}

func (a *Agent) Run(shutdown chan struct{}) error {
	if a.conn == nil {
		err := a.Connect()
		if err != nil {
			return err
		}
	}

	ticker := time.NewTicker(a.Interval.Duration)

	for {
		err := a.crank()
		if err != nil {
			log.Printf("Error in plugins: %s", err)
		}

		select {
		case <-shutdown:
			return nil
		case <-ticker.C:
			continue
		}
	}
}
141 agent/accumulator.go Normal file
@@ -0,0 +1,141 @@
package agent

import (
	"log"
	"time"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/selfstat"
)

var (
	NErrors = selfstat.Register("agent", "gather_errors", map[string]string{})
)

type MetricMaker interface {
	Name() string
	MakeMetric(
		measurement string,
		fields map[string]interface{},
		tags map[string]string,
		mType telegraf.ValueType,
		t time.Time,
	) telegraf.Metric
}

func NewAccumulator(
	maker MetricMaker,
	metrics chan telegraf.Metric,
) telegraf.Accumulator {
	acc := accumulator{
		maker:     maker,
		metrics:   metrics,
		precision: time.Nanosecond,
	}
	return &acc
}

type accumulator struct {
	metrics chan telegraf.Metric

	maker MetricMaker

	precision time.Duration
}

func (ac *accumulator) AddFields(
	measurement string,
	fields map[string]interface{},
	tags map[string]string,
	t ...time.Time,
) {
	if m := ac.maker.MakeMetric(measurement, fields, tags, telegraf.Untyped, ac.getTime(t)); m != nil {
		ac.metrics <- m
	}
}

func (ac *accumulator) AddGauge(
	measurement string,
	fields map[string]interface{},
	tags map[string]string,
	t ...time.Time,
) {
	if m := ac.maker.MakeMetric(measurement, fields, tags, telegraf.Gauge, ac.getTime(t)); m != nil {
		ac.metrics <- m
	}
}

func (ac *accumulator) AddCounter(
	measurement string,
	fields map[string]interface{},
	tags map[string]string,
	t ...time.Time,
) {
	if m := ac.maker.MakeMetric(measurement, fields, tags, telegraf.Counter, ac.getTime(t)); m != nil {
		ac.metrics <- m
	}
}

func (ac *accumulator) AddSummary(
	measurement string,
	fields map[string]interface{},
	tags map[string]string,
	t ...time.Time,
) {
	if m := ac.maker.MakeMetric(measurement, fields, tags, telegraf.Summary, ac.getTime(t)); m != nil {
		ac.metrics <- m
	}
}

func (ac *accumulator) AddHistogram(
	measurement string,
	fields map[string]interface{},
	tags map[string]string,
	t ...time.Time,
) {
	if m := ac.maker.MakeMetric(measurement, fields, tags, telegraf.Histogram, ac.getTime(t)); m != nil {
		ac.metrics <- m
	}
}

// AddError passes a runtime error to the accumulator.
// The error will be tagged with the plugin name and written to the log.
func (ac *accumulator) AddError(err error) {
	if err == nil {
		return
	}
	NErrors.Incr(1)
	// TODO suppress/throttle consecutive duplicate errors?
	log.Printf("E! Error in plugin [%s]: %s", ac.maker.Name(), err)
}

// SetPrecision takes two time.Duration objects. If the first is non-zero,
// it sets that as the precision. Otherwise, it takes the second argument
// as the order of time that the metrics should be rounded to, with the
// maximum being 1s.
func (ac *accumulator) SetPrecision(precision, interval time.Duration) {
	if precision > 0 {
		ac.precision = precision
		return
	}
	switch {
	case interval >= time.Second:
		ac.precision = time.Second
	case interval >= time.Millisecond:
		ac.precision = time.Millisecond
	case interval >= time.Microsecond:
		ac.precision = time.Microsecond
	default:
		ac.precision = time.Nanosecond
	}
}

func (ac accumulator) getTime(t []time.Time) time.Time {
	var timestamp time.Time
	if len(t) > 0 {
		timestamp = t[0]
	} else {
		timestamp = time.Now()
	}
	return timestamp.Round(ac.precision)
}
159 agent/accumulator_test.go Normal file
@@ -0,0 +1,159 @@
package agent

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"testing"
	"time"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/metric"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestAddFields(t *testing.T) {
	metrics := make(chan telegraf.Metric, 10)
	defer close(metrics)
	a := NewAccumulator(&TestMetricMaker{}, metrics)

	tags := map[string]string{"foo": "bar"}
	fields := map[string]interface{}{
		"usage": float64(99),
	}
	now := time.Now()
	a.AddCounter("acctest", fields, tags, now)

	testm := <-metrics

	require.Equal(t, "acctest", testm.Name())
	actual, ok := testm.GetField("usage")

	require.True(t, ok)
	require.Equal(t, float64(99), actual)

	actual, ok = testm.GetTag("foo")
	require.True(t, ok)
	require.Equal(t, "bar", actual)

	tm := testm.Time()
	// okay if monotonic clock differs
	require.True(t, now.Equal(tm))

	tp := testm.Type()
	require.Equal(t, telegraf.Counter, tp)
}

func TestAccAddError(t *testing.T) {
	errBuf := bytes.NewBuffer(nil)
	log.SetOutput(errBuf)
	defer log.SetOutput(os.Stderr)

	metrics := make(chan telegraf.Metric, 10)
	defer close(metrics)
	a := NewAccumulator(&TestMetricMaker{}, metrics)

	a.AddError(fmt.Errorf("foo"))
	a.AddError(fmt.Errorf("bar"))
	a.AddError(fmt.Errorf("baz"))

	errs := bytes.Split(errBuf.Bytes(), []byte{'\n'})
	assert.EqualValues(t, int64(3), NErrors.Get())
	require.Len(t, errs, 4) // 4 because of trailing newline
	assert.Contains(t, string(errs[0]), "TestPlugin")
	assert.Contains(t, string(errs[0]), "foo")
	assert.Contains(t, string(errs[1]), "TestPlugin")
	assert.Contains(t, string(errs[1]), "bar")
	assert.Contains(t, string(errs[2]), "TestPlugin")
	assert.Contains(t, string(errs[2]), "baz")
}

func TestSetPrecision(t *testing.T) {
	tests := []struct {
		name      string
		unset     bool
		precision time.Duration
		interval  time.Duration
		timestamp time.Time
		expected  time.Time
	}{
		{
			name:      "default precision is nanosecond",
			unset:     true,
			timestamp: time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC),
			expected:  time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC),
		},
		{
			name:      "second interval",
			interval:  time.Second,
			timestamp: time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC),
			expected:  time.Date(2006, time.February, 10, 12, 0, 0, 0, time.UTC),
		},
		{
			name:      "microsecond interval",
			interval:  time.Microsecond,
			timestamp: time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC),
			expected:  time.Date(2006, time.February, 10, 12, 0, 0, 82913000, time.UTC),
		},
		{
			name:      "2 second precision",
			precision: 2 * time.Second,
			timestamp: time.Date(2006, time.February, 10, 12, 0, 2, 4, time.UTC),
			expected:  time.Date(2006, time.February, 10, 12, 0, 2, 0, time.UTC),
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			metrics := make(chan telegraf.Metric, 10)

			a := NewAccumulator(&TestMetricMaker{}, metrics)
			if !tt.unset {
				a.SetPrecision(tt.precision, tt.interval)
			}

			a.AddFields("acctest",
				map[string]interface{}{"value": float64(101)},
				map[string]string{},
				tt.timestamp,
			)

			testm := <-metrics
			require.Equal(t, tt.expected, testm.Time())

			close(metrics)
		})
	}
}

type TestMetricMaker struct {
}

func (tm *TestMetricMaker) Name() string {
	return "TestPlugin"
}
func (tm *TestMetricMaker) MakeMetric(
	measurement string,
	fields map[string]interface{},
	tags map[string]string,
	mType telegraf.ValueType,
	t time.Time,
) telegraf.Metric {
	switch mType {
	case telegraf.Untyped:
		if m, err := metric.New(measurement, tags, fields, t); err == nil {
			return m
		}
	case telegraf.Counter:
		if m, err := metric.New(measurement, tags, fields, t, telegraf.Counter); err == nil {
			return m
		}
	case telegraf.Gauge:
		if m, err := metric.New(measurement, tags, fields, t, telegraf.Gauge); err == nil {
			return m
		}
	}
	return nil
}
425 agent/agent.go Normal file
@@ -0,0 +1,425 @@
package agent

import (
	"fmt"
	"log"
	"os"
	"runtime"
	"sync"
	"time"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/internal"
	"github.com/influxdata/telegraf/internal/config"
	"github.com/influxdata/telegraf/internal/models"
	"github.com/influxdata/telegraf/selfstat"
)

// Agent runs telegraf and collects data based on the given config
type Agent struct {
	Config *config.Config
}

// NewAgent returns an Agent struct based off the given Config
func NewAgent(config *config.Config) (*Agent, error) {
	a := &Agent{
		Config: config,
	}

	if !a.Config.Agent.OmitHostname {
		if a.Config.Agent.Hostname == "" {
			hostname, err := os.Hostname()
			if err != nil {
				return nil, err
			}

			a.Config.Agent.Hostname = hostname
		}

		config.Tags["host"] = a.Config.Agent.Hostname
	}

	return a, nil
}

// Connect connects to all configured outputs
func (a *Agent) Connect() error {
	for _, o := range a.Config.Outputs {
		switch ot := o.Output.(type) {
		case telegraf.ServiceOutput:
			if err := ot.Start(); err != nil {
				log.Printf("E! Service for output %s failed to start, exiting\n%s\n",
					o.Name, err.Error())
				return err
			}
		}

		log.Printf("D! Attempting connection to output: %s\n", o.Name)
		err := o.Output.Connect()
		if err != nil {
			log.Printf("E! Failed to connect to output %s, retrying in 15s, "+
				"error was '%s' \n", o.Name, err)
			time.Sleep(15 * time.Second)
			err = o.Output.Connect()
			if err != nil {
				return err
			}
		}
		log.Printf("D! Successfully connected to output: %s\n", o.Name)
	}
	return nil
}

// Close closes the connection to all configured outputs
func (a *Agent) Close() error {
	var err error
	for _, o := range a.Config.Outputs {
		err = o.Output.Close()
		switch ot := o.Output.(type) {
		case telegraf.ServiceOutput:
			ot.Stop()
		}
	}
	return err
}

func panicRecover(input *models.RunningInput) {
	if err := recover(); err != nil {
		trace := make([]byte, 2048)
		runtime.Stack(trace, true)
		log.Printf("E! FATAL: Input [%s] panicked: %s, Stack:\n%s\n",
			input.Name(), err, trace)
		log.Println("E! PLEASE REPORT THIS PANIC ON GITHUB with " +
			"stack trace, configuration, and OS information: " +
			"https://github.com/influxdata/telegraf/issues/new")
	}
}

// gatherer runs the inputs that have been configured with their own
// reporting interval.
func (a *Agent) gatherer(
	shutdown chan struct{},
	input *models.RunningInput,
	interval time.Duration,
	metricC chan telegraf.Metric,
) {
	defer panicRecover(input)

	GatherTime := selfstat.RegisterTiming("gather",
		"gather_time_ns",
		map[string]string{"input": input.Config.Name},
	)

	acc := NewAccumulator(input, metricC)
	acc.SetPrecision(a.Config.Agent.Precision.Duration,
		a.Config.Agent.Interval.Duration)

	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	for {
		internal.RandomSleep(a.Config.Agent.CollectionJitter.Duration, shutdown)

		start := time.Now()
		gatherWithTimeout(shutdown, input, acc, interval)
		elapsed := time.Since(start)

		GatherTime.Incr(elapsed.Nanoseconds())

		select {
		case <-shutdown:
			return
		case <-ticker.C:
			continue
		}
	}
}

// gatherWithTimeout gathers from the given input, with the given timeout.
// When the given timeout is reached, gatherWithTimeout logs an error message
// but continues waiting for it to return. This is to avoid leaving behind
// hung processes, and to prevent re-calling the same hung process over and
// over.
func gatherWithTimeout(
	shutdown chan struct{},
	input *models.RunningInput,
	acc telegraf.Accumulator,
	timeout time.Duration,
) {
	ticker := time.NewTicker(timeout)
	defer ticker.Stop()
	done := make(chan error)
	go func() {
		done <- input.Input.Gather(acc)
	}()

	for {
		select {
		case err := <-done:
			if err != nil {
				acc.AddError(err)
			}
			return
		case <-ticker.C:
			err := fmt.Errorf("took longer to collect than collection interval (%s)",
				timeout)
			acc.AddError(err)
			continue
		case <-shutdown:
			return
		}
	}
}

// Test verifies that we can 'Gather' from all inputs with their configured
// Config struct
func (a *Agent) Test() error {
	shutdown := make(chan struct{})
	defer close(shutdown)
	metricC := make(chan telegraf.Metric)

	// dummy receiver for the point channel
	go func() {
		for {
			select {
			case <-metricC:
				// do nothing
			case <-shutdown:
				return
			}
		}
	}()

	for _, input := range a.Config.Inputs {
		if _, ok := input.Input.(telegraf.ServiceInput); ok {
			fmt.Printf("\nWARNING: skipping plugin [[%s]]: service inputs not supported in --test mode\n",
				input.Name())
			continue
		}

		acc := NewAccumulator(input, metricC)
		acc.SetPrecision(a.Config.Agent.Precision.Duration,
			a.Config.Agent.Interval.Duration)
		input.SetTrace(true)
		input.SetDefaultTags(a.Config.Tags)

		if err := input.Input.Gather(acc); err != nil {
			return err
		}

		// Special instructions for some inputs. cpu, for example, needs to be
		// run twice in order to return cpu usage percentages.
		switch input.Name() {
		case "inputs.cpu", "inputs.mongodb", "inputs.procstat":
			time.Sleep(500 * time.Millisecond)
			if err := input.Input.Gather(acc); err != nil {
				return err
			}
		}

	}
	return nil
}

// flush writes a list of metrics to all configured outputs
func (a *Agent) flush() {
	var wg sync.WaitGroup

	wg.Add(len(a.Config.Outputs))
	for _, o := range a.Config.Outputs {
		go func(output *models.RunningOutput) {
			defer wg.Done()
			err := output.Write()
			if err != nil {
				log.Printf("E! Error writing to output [%s]: %s\n",
					output.Name, err.Error())
			}
		}(o)
	}

	wg.Wait()
}

// flusher monitors the metrics input channel and flushes on the minimum interval
func (a *Agent) flusher(shutdown chan struct{}, metricC chan telegraf.Metric, aggC chan telegraf.Metric) error {
	// Inelegant, but this sleep is to allow the Gather threads to run, so that
	// the flusher will flush after metrics are collected.
	time.Sleep(time.Millisecond * 300)

	// create an output metric channel and a goroutine that continuously passes
	// each metric onto the output plugins & aggregators.
	outMetricC := make(chan telegraf.Metric, 100)
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		for {
			select {
			case <-shutdown:
				if len(outMetricC) > 0 {
					// keep going until outMetricC is flushed
					continue
				}
				return
			case m := <-outMetricC:
				// if dropOriginal is set to true, then we will only send this
				// metric to the aggregators, not the outputs.
				var dropOriginal bool
				for _, agg := range a.Config.Aggregators {
					if ok := agg.Add(m.Copy()); ok {
						dropOriginal = true
					}
				}
				if !dropOriginal {
					for i, o := range a.Config.Outputs {
						if i == len(a.Config.Outputs)-1 {
							o.AddMetric(m)
						} else {
							o.AddMetric(m.Copy())
						}
					}
				}
			}
		}
	}()

	wg.Add(1)
	go func() {
		defer wg.Done()
		for {
			select {
			case <-shutdown:
				if len(aggC) > 0 {
					// keep going until aggC is flushed
					continue
				}
				return
			case metric := <-aggC:
				metrics := []telegraf.Metric{metric}
				for _, processor := range a.Config.Processors {
					metrics = processor.Apply(metrics...)
				}
				for _, m := range metrics {
					for i, o := range a.Config.Outputs {
						if i == len(a.Config.Outputs)-1 {
							o.AddMetric(m)
						} else {
							o.AddMetric(m.Copy())
						}
					}
				}
			}
		}
	}()

	ticker := time.NewTicker(a.Config.Agent.FlushInterval.Duration)
	semaphore := make(chan struct{}, 1)
	for {
		select {
		case <-shutdown:
			log.Println("I! Hang on, flushing any cached metrics before shutdown")
			// wait for outMetricC to get flushed before flushing outputs
			wg.Wait()
			a.flush()
			return nil
		case <-ticker.C:
			go func() {
				select {
				case semaphore <- struct{}{}:
					internal.RandomSleep(a.Config.Agent.FlushJitter.Duration, shutdown)
					a.flush()
					<-semaphore
				default:
					// skipping this flush because one is already happening
					log.Println("W! Skipping a scheduled flush because there is" +
						" already a flush ongoing.")
				}
			}()
		case metric := <-metricC:
			// NOTE potential bottleneck here as we put each metric through the
			// processors serially.
			mS := []telegraf.Metric{metric}
			for _, processor := range a.Config.Processors {
				mS = processor.Apply(mS...)
			}
			for _, m := range mS {
				outMetricC <- m
			}
		}
	}
}

// Run runs the agent daemon, gathering every Interval
func (a *Agent) Run(shutdown chan struct{}) error {
	var wg sync.WaitGroup

	log.Printf("I! Agent Config: Interval:%s, Quiet:%#v, Hostname:%#v, "+
		"Flush Interval:%s \n",
		a.Config.Agent.Interval.Duration, a.Config.Agent.Quiet,
		a.Config.Agent.Hostname, a.Config.Agent.FlushInterval.Duration)

	// channel shared between all input threads for accumulating metrics
	metricC := make(chan telegraf.Metric, 100)
	aggC := make(chan telegraf.Metric, 100)

	// Start all ServicePlugins
	for _, input := range a.Config.Inputs {
		input.SetDefaultTags(a.Config.Tags)
		switch p := input.Input.(type) {
		case telegraf.ServiceInput:
			acc := NewAccumulator(input, metricC)
			// Service input plugins should set their own precision of their
			// metrics.
			acc.SetPrecision(time.Nanosecond, 0)
			if err := p.Start(acc); err != nil {
				log.Printf("E! Service for input %s failed to start, exiting\n%s\n",
					input.Name(), err.Error())
				return err
			}
			defer p.Stop()
		}
	}

	// Round collection to nearest interval by sleeping
	if a.Config.Agent.RoundInterval {
		i := int64(a.Config.Agent.Interval.Duration)
		time.Sleep(time.Duration(i - (time.Now().UnixNano() % i)))
	}

	wg.Add(1)
	go func() {
		defer wg.Done()
		if err := a.flusher(shutdown, metricC, aggC); err != nil {
			log.Printf("E! Flusher routine failed, exiting: %s\n", err.Error())
			close(shutdown)
		}
	}()

	wg.Add(len(a.Config.Aggregators))
	for _, aggregator := range a.Config.Aggregators {
		go func(agg *models.RunningAggregator) {
			defer wg.Done()
			acc := NewAccumulator(agg, aggC)
			acc.SetPrecision(a.Config.Agent.Precision.Duration,
				a.Config.Agent.Interval.Duration)
			agg.Run(acc, shutdown)
		}(aggregator)
	}

	wg.Add(len(a.Config.Inputs))
	for _, input := range a.Config.Inputs {
		interval := a.Config.Agent.Interval.Duration
		// overwrite global interval if this plugin has its own.
		if input.Config.Interval != 0 {
			interval = input.Config.Interval
		}
		go func(in *models.RunningInput, interv time.Duration) {
			defer wg.Done()
			a.gatherer(shutdown, in, interv, metricC)
		}(input, interval)
	}

	wg.Wait()
	a.Close()
	return nil
}
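One detail worth calling out in `flusher` above is the one-slot `semaphore` channel: a non-blocking send acts as a try-lock, so a scheduled flush is skipped rather than queued when the previous one is still running. A standalone sketch of the same pattern (illustrative only, not part of the file above):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// A 1-slot channel used as a try-lock: the send succeeds only
	// when no "flush" is currently in progress.
	semaphore := make(chan struct{}, 1)
	for i := 0; i < 3; i++ {
		select {
		case semaphore <- struct{}{}:
			go func(n int) {
				defer func() { <-semaphore }() // release the slot when done
				time.Sleep(50 * time.Millisecond) // simulated flush
				fmt.Println("flush", n, "done")
			}(i)
		default:
			// slot taken: skip this tick instead of piling up flushes
			fmt.Println("skipping flush", i)
		}
	}
	time.Sleep(200 * time.Millisecond)
}
```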
111 agent/agent_test.go Normal file
@@ -0,0 +1,111 @@
package agent

import (
	"testing"

	"github.com/influxdata/telegraf/internal/config"

	// needed to load the plugins
	_ "github.com/influxdata/telegraf/plugins/inputs/all"
	// needed to load the outputs
	_ "github.com/influxdata/telegraf/plugins/outputs/all"

	"github.com/stretchr/testify/assert"
)

func TestAgent_OmitHostname(t *testing.T) {
	c := config.NewConfig()
	c.Agent.OmitHostname = true
	_, err := NewAgent(c)
	assert.NoError(t, err)
	assert.NotContains(t, c.Tags, "host")
}

func TestAgent_LoadPlugin(t *testing.T) {
	c := config.NewConfig()
	c.InputFilters = []string{"mysql"}
	err := c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
	assert.NoError(t, err)
	a, _ := NewAgent(c)
	assert.Equal(t, 1, len(a.Config.Inputs))

	c = config.NewConfig()
	c.InputFilters = []string{"foo"}
	err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
	assert.NoError(t, err)
	a, _ = NewAgent(c)
	assert.Equal(t, 0, len(a.Config.Inputs))

	c = config.NewConfig()
	c.InputFilters = []string{"mysql", "foo"}
	err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
	assert.NoError(t, err)
	a, _ = NewAgent(c)
	assert.Equal(t, 1, len(a.Config.Inputs))

	c = config.NewConfig()
	c.InputFilters = []string{"mysql", "redis"}
	err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
	assert.NoError(t, err)
	a, _ = NewAgent(c)
	assert.Equal(t, 2, len(a.Config.Inputs))

	c = config.NewConfig()
	c.InputFilters = []string{"mysql", "foo", "redis", "bar"}
	err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
	assert.NoError(t, err)
	a, _ = NewAgent(c)
	assert.Equal(t, 2, len(a.Config.Inputs))
}

func TestAgent_LoadOutput(t *testing.T) {
	c := config.NewConfig()
	c.OutputFilters = []string{"influxdb"}
	err := c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
	assert.NoError(t, err)
	a, _ := NewAgent(c)
	assert.Equal(t, 2, len(a.Config.Outputs))

	c = config.NewConfig()
	c.OutputFilters = []string{"kafka"}
	err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
	assert.NoError(t, err)
	a, _ = NewAgent(c)
	assert.Equal(t, 1, len(a.Config.Outputs))

	c = config.NewConfig()
	c.OutputFilters = []string{}
	err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
	assert.NoError(t, err)
	a, _ = NewAgent(c)
	assert.Equal(t, 3, len(a.Config.Outputs))

	c = config.NewConfig()
	c.OutputFilters = []string{"foo"}
	err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
	assert.NoError(t, err)
	a, _ = NewAgent(c)
	assert.Equal(t, 0, len(a.Config.Outputs))

	c = config.NewConfig()
	c.OutputFilters = []string{"influxdb", "foo"}
	err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
	assert.NoError(t, err)
	a, _ = NewAgent(c)
	assert.Equal(t, 2, len(a.Config.Outputs))

	c = config.NewConfig()
	c.OutputFilters = []string{"influxdb", "kafka"}
	err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
	assert.NoError(t, err)
	assert.Equal(t, 3, len(c.Outputs))
	a, _ = NewAgent(c)
	assert.Equal(t, 3, len(a.Config.Outputs))

	c = config.NewConfig()
	c.OutputFilters = []string{"influxdb", "foo", "kafka", "bar"}
	err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
	assert.NoError(t, err)
	a, _ = NewAgent(c)
	assert.Equal(t, 3, len(a.Config.Outputs))
}
@@ -1,61 +0,0 @@
package tivan

/*
func TestAgent_DrivesMetrics(t *testing.T) {
	var (
		plugin plugins.MockPlugin
	)

	defer plugin.AssertExpectations(t)
	defer metrics.AssertExpectations(t)

	a := &Agent{
		plugins: []plugins.Plugin{&plugin},
		Config:  &Config{},
	}

	plugin.On("Add", "foo", 1.2, nil).Return(nil)
	plugin.On("Add", "bar", 888, nil).Return(nil)

	err := a.crank()
	require.NoError(t, err)
}

func TestAgent_AppliesTags(t *testing.T) {
	var (
		plugin  plugins.MockPlugin
		metrics MockMetrics
	)

	defer plugin.AssertExpectations(t)
	defer metrics.AssertExpectations(t)

	a := &Agent{
		plugins: []plugins.Plugin{&plugin},
		metrics: &metrics,
		Config: &Config{
			Tags: map[string]string{
				"dc": "us-west-1",
			},
		},
	}

	m1 := cypress.Metric()
	m1.Add("name", "foo")
	m1.Add("value", 1.2)

	msgs := []*cypress.Message{m1}

	m2 := cypress.Metric()
	m2.Timestamp = m1.Timestamp
	m2.Add("name", "foo")
	m2.Add("value", 1.2)
	m2.AddTag("dc", "us-west-1")

	plugin.On("Read").Return(msgs, nil)
	metrics.On("Receive", m2).Return(nil)

	err := a.crank()
	require.NoError(t, err)
}
*/
22 aggregator.go Normal file
@@ -0,0 +1,22 @@
package telegraf

// Aggregator is an interface for implementing an Aggregator plugin.
// The RunningAggregator wraps this interface and guarantees that
// Add, Push, and Reset cannot be called concurrently, so locking is not
// required when implementing an Aggregator plugin.
type Aggregator interface {
	// SampleConfig returns the default configuration of the Input.
	SampleConfig() string

	// Description returns a one-sentence description on the Input.
	Description() string

	// Add the metric to the aggregator.
	Add(in Metric)

	// Push pushes the current aggregates to the accumulator.
	Push(acc Accumulator)

	// Reset resets the aggregator's caches and aggregates.
	Reset()
}
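To make the contract concrete, here is a hedged sketch of an aggregator that tracks a per-measurement maximum; the type and field names are invented, and only the interface shape comes from the file above. Note the absence of locking, which the RunningAggregator guarantee makes safe.

```go
package example

import "github.com/influxdata/telegraf"

// maxAggregator is a hypothetical aggregator that keeps the maximum of a
// single "value" field per measurement within each period.
type maxAggregator struct {
	max map[string]float64
}

func (m *maxAggregator) SampleConfig() string { return "" }
func (m *maxAggregator) Description() string  { return "keeps the per-measurement maximum of the value field" }

// Add inspects each metric passing through; no mutex is needed because
// Add, Push, and Reset are never called concurrently.
func (m *maxAggregator) Add(in telegraf.Metric) {
	if m.max == nil {
		m.max = make(map[string]float64)
	}
	if v, ok := in.Fields()["value"].(float64); ok {
		if cur, seen := m.max[in.Name()]; !seen || v > cur {
			m.max[in.Name()] = v
		}
	}
}

// Push emits one aggregate metric per measurement seen this period.
func (m *maxAggregator) Push(acc telegraf.Accumulator) {
	for name, v := range m.max {
		acc.AddFields(name+"_max", map[string]interface{}{"value": v}, nil)
	}
}

// Reset clears state at the end of each period.
func (m *maxAggregator) Reset() { m.max = make(map[string]float64) }
```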
33 appveyor.yml Normal file
@@ -0,0 +1,33 @@
version: "{build}"

cache:
  - C:\Cache

clone_folder: C:\gopath\src\github.com\influxdata\telegraf

environment:
  GOPATH: C:\gopath

platform: x64

install:
  - IF NOT EXIST "C:\Cache" mkdir C:\Cache
  - IF NOT EXIST "C:\Cache\go1.10.1.msi" curl -o "C:\Cache\go1.10.1.msi" https://storage.googleapis.com/golang/go1.10.1.windows-amd64.msi
  - IF NOT EXIST "C:\Cache\gnuwin32-bin.zip" curl -o "C:\Cache\gnuwin32-bin.zip" https://dl.influxdata.com/telegraf/ci/make-3.81-bin.zip
  - IF NOT EXIST "C:\Cache\gnuwin32-dep.zip" curl -o "C:\Cache\gnuwin32-dep.zip" https://dl.influxdata.com/telegraf/ci/make-3.81-dep.zip
  - IF EXIST "C:\Go" rmdir /S /Q C:\Go
  - msiexec.exe /i "C:\Cache\go1.10.1.msi" /quiet
  - 7z x "C:\Cache\gnuwin32-bin.zip" -oC:\GnuWin32 -y
  - 7z x "C:\Cache\gnuwin32-dep.zip" -oC:\GnuWin32 -y
  - go version
  - go env

build_script:
  - cmd: C:\GnuWin32\bin\make deps
  - cmd: C:\GnuWin32\bin\make telegraf

test_script:
  - cmd: C:\GnuWin32\bin\make test-windows

artifacts:
  - path: telegraf.exe
370 cmd/telegraf/telegraf.go Normal file
@@ -0,0 +1,370 @@
package main

import (
	"flag"
	"fmt"
	"log"
	"net/http"
	_ "net/http/pprof" // Comment this line to disable pprof endpoint.
	"os"
	"os/signal"
	"runtime"
	"strings"
	"syscall"

	"github.com/influxdata/telegraf/agent"
	"github.com/influxdata/telegraf/internal"
	"github.com/influxdata/telegraf/internal/config"
	"github.com/influxdata/telegraf/logger"
	_ "github.com/influxdata/telegraf/plugins/aggregators/all"
	"github.com/influxdata/telegraf/plugins/inputs"
	_ "github.com/influxdata/telegraf/plugins/inputs/all"
	"github.com/influxdata/telegraf/plugins/outputs"
	_ "github.com/influxdata/telegraf/plugins/outputs/all"
	_ "github.com/influxdata/telegraf/plugins/processors/all"
	"github.com/kardianos/service"
)

var fDebug = flag.Bool("debug", false,
	"turn on debug logging")
var pprofAddr = flag.String("pprof-addr", "",
	"pprof address to listen on; pprof is not activated if empty")
var fQuiet = flag.Bool("quiet", false,
	"run in quiet mode")
var fTest = flag.Bool("test", false, "gather metrics, print them out, and exit")
var fConfig = flag.String("config", "", "configuration file to load")
var fConfigDirectory = flag.String("config-directory", "",
	"directory containing additional *.conf files")
var fVersion = flag.Bool("version", false, "display the version")
var fSampleConfig = flag.Bool("sample-config", false,
	"print out full sample configuration")
var fPidfile = flag.String("pidfile", "", "file to write our pid to")
var fInputFilters = flag.String("input-filter", "",
	"filter the inputs to enable, separator is :")
var fInputList = flag.Bool("input-list", false,
	"print available input plugins.")
var fOutputFilters = flag.String("output-filter", "",
	"filter the outputs to enable, separator is :")
var fOutputList = flag.Bool("output-list", false,
	"print available output plugins.")
var fAggregatorFilters = flag.String("aggregator-filter", "",
	"filter the aggregators to enable, separator is :")
var fProcessorFilters = flag.String("processor-filter", "",
	"filter the processors to enable, separator is :")
var fUsage = flag.String("usage", "",
	"print usage for a plugin, ie, 'telegraf --usage mysql'")
var fService = flag.String("service", "",
	"operate on the service")
var fRunAsConsole = flag.Bool("console", false, "run as console application (windows only)")

var (
	nextVersion = "1.7.0"
	version     string
	commit      string
	branch      string
)

func init() {
	// If commit or branch are not set, make that clear.
	if commit == "" {
		commit = "unknown"
	}
	if branch == "" {
		branch = "unknown"
	}
}

var stop chan struct{}

func reloadLoop(
	stop chan struct{},
	inputFilters []string,
	outputFilters []string,
	aggregatorFilters []string,
	processorFilters []string,
) {
	reload := make(chan bool, 1)
	reload <- true
	for <-reload {
		reload <- false

		// If no other options are specified, load the config file and run.
		c := config.NewConfig()
		c.OutputFilters = outputFilters
		c.InputFilters = inputFilters
		err := c.LoadConfig(*fConfig)
		if err != nil {
			log.Fatal("E! " + err.Error())
		}

		if *fConfigDirectory != "" {
			err = c.LoadDirectory(*fConfigDirectory)
			if err != nil {
				log.Fatal("E! " + err.Error())
			}
		}
		if !*fTest && len(c.Outputs) == 0 {
			log.Fatalf("E! Error: no outputs found, did you provide a valid config file?")
		}
		if len(c.Inputs) == 0 {
			log.Fatalf("E! Error: no inputs found, did you provide a valid config file?")
		}

		if int64(c.Agent.Interval.Duration) <= 0 {
			log.Fatalf("E! Agent interval must be positive, found %s",
				c.Agent.Interval.Duration)
		}

		if int64(c.Agent.FlushInterval.Duration) <= 0 {
			log.Fatalf("E! Agent flush_interval must be positive; found %s",
				c.Agent.FlushInterval.Duration)
		}

		ag, err := agent.NewAgent(c)
		if err != nil {
			log.Fatal("E! " + err.Error())
		}

		// Setup logging
		logger.SetupLogging(
			ag.Config.Agent.Debug || *fDebug,
			ag.Config.Agent.Quiet || *fQuiet,
			ag.Config.Agent.Logfile,
		)

		if *fTest {
			err = ag.Test()
			if err != nil {
				log.Fatal("E! " + err.Error())
			}
			os.Exit(0)
		}

		err = ag.Connect()
		if err != nil {
			log.Fatal("E! " + err.Error())
		}

		shutdown := make(chan struct{})
		signals := make(chan os.Signal, 1)
		signal.Notify(signals, os.Interrupt, syscall.SIGHUP)
		go func() {
			select {
			case sig := <-signals:
				if sig == os.Interrupt {
					close(shutdown)
				}
				if sig == syscall.SIGHUP {
					log.Printf("I! Reloading Telegraf config\n")
					<-reload
					reload <- true
					close(shutdown)
				}
			case <-stop:
				close(shutdown)
			}
		}()

		log.Printf("I! Starting Telegraf %s\n", displayVersion())
		log.Printf("I! Loaded outputs: %s", strings.Join(c.OutputNames(), " "))
		log.Printf("I! Loaded inputs: %s", strings.Join(c.InputNames(), " "))
		log.Printf("I! Tags enabled: %s", c.ListTags())

		if *fPidfile != "" {
			f, err := os.OpenFile(*fPidfile, os.O_CREATE|os.O_WRONLY, 0644)
			if err != nil {
				log.Printf("E! Unable to create pidfile: %s", err)
			} else {
				fmt.Fprintf(f, "%d\n", os.Getpid())

				f.Close()

				defer func() {
					err := os.Remove(*fPidfile)
					if err != nil {
						log.Printf("E! Unable to remove pidfile: %s", err)
					}
				}()
			}
		}

		ag.Run(shutdown)
	}
}

func usageExit(rc int) {
	fmt.Println(internal.Usage)
	os.Exit(rc)
}

type program struct {
	inputFilters      []string
	outputFilters     []string
	aggregatorFilters []string
	processorFilters  []string
}

func (p *program) Start(s service.Service) error {
	go p.run()
	return nil
}
func (p *program) run() {
	stop = make(chan struct{})
	reloadLoop(
		stop,
		p.inputFilters,
		p.outputFilters,
		p.aggregatorFilters,
		p.processorFilters,
	)
}
func (p *program) Stop(s service.Service) error {
	close(stop)
	return nil
}

func displayVersion() string {
	if version == "" {
		return fmt.Sprintf("v%s~%s", nextVersion, commit)
	}
	return "v" + version
}

func main() {
	flag.Usage = func() { usageExit(0) }
	flag.Parse()
	args := flag.Args()

	inputFilters, outputFilters := []string{}, []string{}
	if *fInputFilters != "" {
		inputFilters = strings.Split(":"+strings.TrimSpace(*fInputFilters)+":", ":")
	}
	if *fOutputFilters != "" {
		outputFilters = strings.Split(":"+strings.TrimSpace(*fOutputFilters)+":", ":")
	}

	aggregatorFilters, processorFilters := []string{}, []string{}
	if *fAggregatorFilters != "" {
		aggregatorFilters = strings.Split(":"+strings.TrimSpace(*fAggregatorFilters)+":", ":")
	}
	if *fProcessorFilters != "" {
		processorFilters = strings.Split(":"+strings.TrimSpace(*fProcessorFilters)+":", ":")
	}

	if *pprofAddr != "" {
		go func() {
			pprofHostPort := *pprofAddr
			parts := strings.Split(pprofHostPort, ":")
			if len(parts) == 2 && parts[0] == "" {
				pprofHostPort = fmt.Sprintf("localhost:%s", parts[1])
			}
			pprofHostPort = "http://" + pprofHostPort + "/debug/pprof"

			log.Printf("I! Starting pprof HTTP server at: %s", pprofHostPort)

			if err := http.ListenAndServe(*pprofAddr, nil); err != nil {
				log.Fatal("E! " + err.Error())
			}
		}()
	}

	if len(args) > 0 {
		switch args[0] {
		case "version":
			fmt.Printf("Telegraf %s (git: %s %s)\n", displayVersion(), branch, commit)
			return
		case "config":
			config.PrintSampleConfig(
				inputFilters,
				outputFilters,
				aggregatorFilters,
				processorFilters,
			)
			return
		}
	}

	// switch for flags which just do something and exit immediately
	switch {
	case *fOutputList:
		fmt.Println("Available Output Plugins:")
		for k := range outputs.Outputs {
			fmt.Printf("  %s\n", k)
		}
		return
	case *fInputList:
		fmt.Println("Available Input Plugins:")
		for k := range inputs.Inputs {
			fmt.Printf("  %s\n", k)
		}
		return
	case *fVersion:
		fmt.Printf("Telegraf %s (git: %s %s)\n", displayVersion(), branch, commit)
		return
	case *fSampleConfig:
		config.PrintSampleConfig(
			inputFilters,
			outputFilters,
			aggregatorFilters,
			processorFilters,
		)
		return
	case *fUsage != "":
		err := config.PrintInputConfig(*fUsage)
		err2 := config.PrintOutputConfig(*fUsage)
		if err != nil && err2 != nil {
			log.Fatalf("E! %s and %s", err, err2)
		}
		return
	}

	if runtime.GOOS == "windows" && !(*fRunAsConsole) {
		svcConfig := &service.Config{
			Name:        "telegraf",
			DisplayName: "Telegraf Data Collector Service",
			Description: "Collects data using a series of plugins and publishes it to " +
				"another series of plugins.",
			Arguments: []string{"--config", "C:\\Program Files\\Telegraf\\telegraf.conf"},
		}

		prg := &program{
			inputFilters:      inputFilters,
			outputFilters:     outputFilters,
			aggregatorFilters: aggregatorFilters,
			processorFilters:  processorFilters,
		}
		s, err := service.New(prg, svcConfig)
		if err != nil {
			log.Fatal("E! " + err.Error())
		}
		// Handle the --service flag here to prevent any issues with tooling that
		// may not have an interactive session, e.g. installing from Ansible.
		if *fService != "" {
			if *fConfig != "" {
				svcConfig.Arguments = []string{"--config", *fConfig}
			}
			if *fConfigDirectory != "" {
				svcConfig.Arguments = append(svcConfig.Arguments, "--config-directory", *fConfigDirectory)
			}
			err := service.Control(s, *fService)
			if err != nil {
				log.Fatal("E! " + err.Error())
			}
			os.Exit(0)
		} else {
			err = s.Run()
			if err != nil {
				log.Println("E! " + err.Error())
			}
		}
	} else {
		stop = make(chan struct{})
		reloadLoop(
			stop,
			inputFilters,
			outputFilters,
			aggregatorFilters,
			processorFilters,
		)
	}
}
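A side note on the filter flags above: wrapping the user's value in sentinel colons before splitting means every plugin name, including the first and last, is matched as a complete `:`-delimited token. A standalone sketch of what the split produces (the "cpu:mem" value is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Mirrors the flag handling above: ":cpu:mem:" splits into
	// ["", "cpu", "mem", ""]; the empty edge tokens are harmless
	// because no plugin is registered under the empty name.
	filters := strings.Split(":"+strings.TrimSpace("cpu:mem")+":", ":")
	fmt.Printf("%q\n", filters) // prints ["" "cpu" "mem" ""]
}
```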
@@ -1,96 +0,0 @@
package main

import (
	"flag"
	"fmt"
	"log"
	"os"
	"os/signal"
	"strings"

	"github.com/influxdb/tivan"
	_ "github.com/influxdb/tivan/plugins/all"
)

var fDebug = flag.Bool("debug", false, "show metrics as they're generated to stdout")
var fTest = flag.Bool("test", false, "gather metrics, print them out, and exit")
var fConfig = flag.String("config", "", "configuration file to load")
var fVersion = flag.Bool("version", false, "display the version")

var Version = "unreleased"

func main() {
	flag.Parse()

	if *fVersion {
		fmt.Printf("InfluxDB Tivan agent - Version %s\n", Version)
		return
	}

	var (
		config *tivan.Config
		err    error
	)

	if *fConfig != "" {
		config, err = tivan.LoadConfig(*fConfig)
		if err != nil {
			log.Fatal(err)
		}
	} else {
		config = tivan.DefaultConfig()
	}

	ag, err := tivan.NewAgent(config)
	if err != nil {
		log.Fatal(err)
	}

	if *fDebug {
		ag.Debug = true
	}

	plugins, err := ag.LoadPlugins()
	if err != nil {
		log.Fatal(err)
	}

	if *fTest {
		err = ag.Test()
		if err != nil {
			log.Fatal(err)
		}

		return
	}

	err = ag.Connect()
	if err != nil {
		log.Fatal(err)
	}

	shutdown := make(chan struct{})

	signals := make(chan os.Signal)

	signal.Notify(signals, os.Interrupt)

	go func() {
		<-signals
		close(shutdown)
	}()

	log.Print("InfluxDB Agent running")
	log.Printf("Loaded plugins: %s", strings.Join(plugins, " "))
	if ag.Debug {
		log.Printf("Debug: enabled")
		log.Printf("Agent Config: %#v", ag)
	}

	if config.URL != "" {
		log.Printf("Sending metrics to: %s", config.URL)
		log.Printf("Tags enabled: %v", config.ListTags())
	}

	ag.Run(shutdown)
}
103 config.go
@@ -1,103 +0,0 @@
package tivan

import (
	"errors"
	"fmt"
	"io/ioutil"
	"sort"
	"strings"
	"time"

	"github.com/naoina/toml"
	"github.com/naoina/toml/ast"
)

type Duration struct {
	time.Duration
}

func (d *Duration) UnmarshalTOML(b []byte) error {
	dur, err := time.ParseDuration(string(b[1 : len(b)-1]))
	if err != nil {
		return err
	}

	d.Duration = dur

	return nil
}

type Config struct {
	URL       string
	Username  string
	Password  string
	Database  string
	UserAgent string
	Tags      map[string]string

	plugins map[string]*ast.Table
}

func (c *Config) Plugins() map[string]*ast.Table {
	return c.plugins
}

func (c *Config) Apply(name string, v interface{}) error {
	if tbl, ok := c.plugins[name]; ok {
		return toml.UnmarshalTable(tbl, v)
	}

	return nil
}

func DefaultConfig() *Config {
	return &Config{}
}

var ErrInvalidConfig = errors.New("invalid configuration")

func LoadConfig(path string) (*Config, error) {
	data, err := ioutil.ReadFile(path)
	if err != nil {
		return nil, err
	}

	tbl, err := toml.Parse(data)
	if err != nil {
		return nil, err
	}

	c := &Config{
		plugins: make(map[string]*ast.Table),
	}

	for name, val := range tbl.Fields {
		subtbl, ok := val.(*ast.Table)
		if !ok {
			return nil, ErrInvalidConfig
		}

		if name == "influxdb" {
			err := toml.UnmarshalTable(subtbl, c)
			if err != nil {
				return nil, err
			}
		} else {
			c.plugins[name] = subtbl
		}
	}

	return c, nil
}

func (c *Config) ListTags() string {
	var tags []string

	for k, v := range c.Tags {
		tags = append(tags, fmt.Sprintf("%s=%s", k, v))
	}

	sort.Strings(tags)

	return strings.Join(tags, " ")
}
93 docker-compose.yml Normal file
@@ -0,0 +1,93 @@
version: '3'

services:
  aerospike:
    image: aerospike/aerospike-server:3.9.0
    ports:
      - "3000:3000"
  zookeeper:
    image: wurstmeister/zookeeper
    environment:
      - JAVA_OPTS="-Xms256m -Xmx256m"
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    environment:
      - KAFKA_ADVERTISED_HOST_NAME=localhost
      - KAFKA_ADVERTISED_PORT=9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_CREATE_TOPICS="test:1:1"
      - JAVA_OPTS="-Xms256m -Xmx256m"
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
  elasticsearch:
    image: elasticsearch:5
    environment:
      - JAVA_OPTS="-Xms256m -Xmx256m"
    ports:
      - "9200:9200"
      - "9300:9300"
  mysql:
    image: mysql
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
    ports:
      - "3306:3306"
  memcached:
    image: memcached
    ports:
      - "11211:11211"
  postgres:
    image: postgres:alpine
    ports:
      - "5432:5432"
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "15672:15672"
      - "5672:5672"
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
  nsq:
    image: nsqio/nsq
    ports:
      - "4150:4150"
    command: "/nsqd"
  mqtt:
    image: ncarlier/mqtt
    ports:
      - "1883:1883"
  riemann:
    image: stealthly/docker-riemann
    ports:
      - "5555:5555"
  nats:
    image: nats
    ports:
      - "4222:4222"
  openldap:
    image: cobaugh/openldap-alpine
    environment:
      - SLAPD_CONFIG_ROOTDN="cn=manager,cn=config"
      - SLAPD_CONFIG_ROOTPW="secret"
    ports:
      - "389:389"
      - "636:636"
  crate:
    image: crate/crate
    ports:
      - "4200:4200"
      - "4230:4230"
      - "5432:5432"
    command:
      - crate
      - -Cnetwork.host=0.0.0.0
      - -Ctransport.host=localhost
      - -Clicense.enterprise=false
    environment:
      - CRATE_HEAP_SIZE=128m
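As a usage sketch (the workflow is an assumption, not documented in this changeset): `docker-compose up -d mysql redis` would start just the backends a particular integration test needs, and `docker-compose down` tears them back down afterwards.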
64 docs/AGGREGATORS_AND_PROCESSORS.md Normal file
@@ -0,0 +1,64 @@
# Telegraf Aggregator & Processor Plugins

As of release 1.1.0, Telegraf has the concept of Aggregator and Processor Plugins.

These plugins sit in-between Input & Output plugins, aggregating and processing
metrics as they pass through Telegraf:

```
┌───────────┐
│           │
│    CPU    │───┐
│           │   │
└───────────┘   │
                │
┌───────────┐   │                                              ┌───────────┐
│           │   │                                              │           │
│  Memory   │───┤                                          ┌──▶│ InfluxDB  │
│           │   │                                          │   │           │
└───────────┘   │    ┌─────────────┐     ┌─────────────┐   │   └───────────┘
                │    │             │     │Aggregate    │   │
┌───────────┐   │    │Process      │     │ - mean      │   │   ┌───────────┐
│           │   │    │ - transform │     │ - quantiles │   │   │           │
│   MySQL   │───┼───▶│ - decorate  │────▶│ - min/max   │───┼──▶│   File    │
│           │   │    │ - filter    │     │ - count     │   │   │           │
└───────────┘   │    │             │     │             │   │   └───────────┘
                │    └─────────────┘     └─────────────┘   │
┌───────────┐   │                                          │   ┌───────────┐
│           │   │                                          │   │           │
│   SNMP    │───┤                                          └──▶│   Kafka   │
│           │   │                                              │           │
└───────────┘   │                                              └───────────┘
                │
┌───────────┐   │
│           │   │
│  Docker   │───┘
│           │
└───────────┘
```

Both Aggregators and Processors analyze metrics as they pass through Telegraf.

Use [measurement filtering](CONFIGURATION.md#measurement-filtering)
to control which metrics are passed through a processor or aggregator. If a
metric is filtered out, the metric bypasses the plugin and is passed downstream
to the next plugin.

**Processor** plugins process metrics as they pass through and immediately emit
results based on the values they process. For example, this could be printing
all metrics or adding a tag to all metrics that pass through.

**Aggregator** plugins, on the other hand, are a bit more complicated. Aggregators
are typically for emitting new _aggregate_ metrics, such as a running mean,
minimum, maximum, quantiles, or standard deviation. For this reason, all _aggregator_
plugins are configured with a `period`. The `period` is the size of the window
of metrics that each _aggregate_ represents. In other words, the emitted
_aggregate_ metric will be the aggregated value of the past `period` seconds.
Since many users will only care about their aggregates and not every single metric
gathered, there is also a `drop_original` argument, which tells Telegraf to only
emit the aggregates and not the original metrics (a sample configuration follows
the note below).

**NOTE** Since aggregators only aggregate metrics within their period, historical
data is not supported. In other words, if your metric timestamp is more than
`now() - period` in the past, it will not be aggregated. If this is a feature
that you need, please comment on this [github issue](https://github.com/influxdata/telegraf/issues/1992).
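As an illustration of the `period` and `drop_original` settings just described, using the built-in `minmax` aggregator (the values are examples, not recommendations):

```toml
[[aggregators.minmax]]
  ## Emit min/max aggregates over a 30 second window.
  period = "30s"
  ## Drop the raw metrics and keep only the aggregates.
  drop_original = true
```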
403 docs/CONFIGURATION.md Normal file
@@ -0,0 +1,403 @@
# Telegraf Configuration

You can see the latest config file with all available plugins here:
[telegraf.conf](https://github.com/influxdata/telegraf/blob/master/etc/telegraf.conf)

## Generating a Configuration File

A default Telegraf config file can be auto-generated by telegraf:

```
telegraf config > telegraf.conf
```

To generate a file with specific inputs and outputs, you can use the
--input-filter and --output-filter flags:

```
telegraf --input-filter cpu:mem:net:swap --output-filter influxdb:kafka config
```

## Environment Variables

Environment variables can be used anywhere in the config file: simply prepend
them with $. For strings the variable must be within quotes (ie, "$STR_VAR");
for numbers and booleans they should be plain (ie, $INT_VAR, $BOOL_VAR).

When using the `.deb` or `.rpm` packages, you can define environment variables
in the `/etc/default/telegraf` file.
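A small example of both forms; the variable names are illustrative and must exist in telegraf's environment at startup:

```toml
[global_tags]
  # String value: the variable must be quoted.
  user = "$USER"

[agent]
  # Boolean/number values: the variable is left bare.
  debug = $TELEGRAF_DEBUG
```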
## Configuration file locations
|
||||
|
||||
The location of the configuration file can be set via the `--config` command
|
||||
line flag.
|
||||
|
||||
When the `--config-directory` command line flag is used files ending with
|
||||
`.conf` in the specified directory will also be included in the Telegraf
|
||||
configuration.
|
||||
|
||||
On most systems, the default locations are `/etc/telegraf/telegraf.conf` for
|
||||
the main configuration file and `/etc/telegraf/telegraf.d` for the directory of
|
||||
configuration files.
|
||||
|
||||

## Global Tags

Global tags can be specified in the `[global_tags]` section of the config file
in key="value" format. All metrics being gathered on this host will be tagged
with the tags specified here.
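
For example, a short sketch with hypothetical tag names:

```toml
[global_tags]
  dc = "us-east-1"
  rack = "1a"
```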

## Agent Configuration

Telegraf has a few options you can configure under the `[agent]` section of the
config.

* **interval**: Default data collection interval for all inputs.
* **round_interval**: Rounds the collection interval to 'interval',
  i.e., if interval="10s" then always collect on :00, :10, :20, etc.
* **metric_batch_size**: Telegraf will send metrics to outputs in batches of at
  most metric_batch_size metrics.
* **metric_buffer_limit**: Telegraf will cache metric_buffer_limit metrics
  for each output, and will flush this buffer on a successful write.
  This should be a multiple of metric_batch_size and must not be less
  than 2 times metric_batch_size.
* **collection_jitter**: Collection jitter is used to jitter
  the collection by a random amount.
  Each plugin will sleep for a random time within jitter before collecting.
  This can be used to avoid many plugins querying things like sysfs at the
  same time, which can have a measurable effect on the system.
* **flush_interval**: Default data flushing interval for all outputs.
  You should not set this below interval.
  The maximum flush_interval will be flush_interval + flush_jitter.
* **flush_jitter**: Jitter the flush interval by a random amount.
  This is primarily to avoid large write spikes for users running a large
  number of telegraf instances,
  i.e., a jitter of 5s and flush_interval 10s means flushes will happen every 10-15s.
* **precision**:
  By default or when set to "0s", precision will be set to the same
  timestamp order as the collection interval, with the maximum being 1s.
  Precision will NOT be used for service inputs. It is up to each individual
  service input to set the timestamp at the appropriate precision.
  Valid time units are "ns", "us" (or "µs"), "ms", "s".
* **logfile**: Specify the log file name. The empty string means to log to stderr.
* **debug**: Run telegraf in debug mode.
* **quiet**: Run telegraf in quiet mode (error messages only).
* **hostname**: Override default hostname; if empty, use os.Hostname().
* **omit_hostname**: If true, do not set the "host" tag in the telegraf agent.
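
Taken together, an `[agent]` section might look like the following sketch (the
values shown are illustrative, not recommended settings):

```toml
[agent]
  interval = "10s"
  round_interval = true
  metric_batch_size = 1000
  metric_buffer_limit = 10000
  collection_jitter = "0s"
  flush_interval = "10s"
  flush_jitter = "5s"
  precision = "0s"
  debug = false
  quiet = false
  logfile = ""
  hostname = ""
  omit_hostname = false
```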

## Input Configuration

The following config parameters are available for all inputs:

* **interval**: How often to gather this metric. Normal plugins use a single
  global interval, but if one particular input should be run less or more often,
  you can configure that here (see the sketch after this list).
* **name_override**: Override the base name of the measurement.
  (Default is the name of the input).
* **name_prefix**: Specifies a prefix to attach to the measurement name.
* **name_suffix**: Specifies a suffix to attach to the measurement name.
* **tags**: A map of tags to apply to a specific input's measurements.
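
For example, a sketch of running one input on its own schedule (the `cpu`
plugin and the values are for illustration):

```toml
[[inputs.cpu]]
  ## Gather this input every 30s instead of the global interval.
  interval = "30s"
  percpu = true
  totalcpu = true
```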

The [measurement filtering](#measurement-filtering) parameters can be used to
limit what metrics are emitted from the input plugin.

## Output Configuration

The [measurement filtering](#measurement-filtering) parameters can be used to
limit what metrics are emitted from the output plugin.

## Aggregator Configuration

The following config parameters are available for all aggregators:

* **period**: The period on which to flush & clear each aggregator. All metrics
  that are sent with timestamps outside of this period will be ignored by the
  aggregator.
* **delay**: The delay before each aggregator is flushed. This controls how
  long aggregators wait for metrics to arrive from input plugins, in the case
  that aggregators are flushing and inputs are gathering on the same interval
  (a sketch appears at the end of this section).
* **drop_original**: If true, the original metric will be dropped by the
  aggregator and will not get sent to the output plugins.
* **name_override**: Override the base name of the measurement.
  (Default is the name of the input).
* **name_prefix**: Specifies a prefix to attach to the measurement name.
* **name_suffix**: Specifies a suffix to attach to the measurement name.
* **tags**: A map of tags to apply to a specific aggregator's measurements.

The [measurement filtering](#measurement-filtering) parameters can be used to
limit what metrics are handled by the aggregator. Excluded metrics are passed
downstream to the next aggregator.
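
For example, a sketch showing `period` and `delay` together (the `minmax`
plugin and the values are for illustration):

```toml
[[aggregators.minmax]]
  period = "30s"
  ## Wait briefly after each period boundary so metrics gathered on the
  ## same interval have time to arrive before the flush.
  delay = "100ms"
```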

## Processor Configuration

The following config parameters are available for all processors:

* **order**: This is the order in which the processors are executed. If this
  is not specified, then processor execution order will be random (a sketch
  appears at the end of this section).

The [measurement filtering](#measurement-filtering) parameters can be used
to limit what metrics are handled by the processor. Excluded metrics are
passed downstream to the next processor.
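
For example, a sketch pinning a processor's position explicitly (the `printer`
processor is used for illustration):

```toml
[[processors.printer]]
  ## Run this processor first among the configured processors.
  order = 1
```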

#### Measurement Filtering

Filters can be configured per input, output, processor, or aggregator;
see below for examples.

* **namepass**:
  An array of glob pattern strings. Only points whose measurement name matches
  a pattern in this list are emitted.
* **namedrop**:
  The inverse of `namepass`. If a match is found the point is discarded. This
  is tested on points after they have passed the `namepass` test.
* **fieldpass**:
  An array of glob pattern strings. Only fields whose field key matches a
  pattern in this list are emitted.
* **fielddrop**:
  The inverse of `fieldpass`. Fields with a field key matching one of the
  patterns will be discarded from the point. This is tested on points after
  they have passed the `fieldpass` test.
* **tagpass**:
  A table mapping tag keys to arrays of glob pattern strings. Only points
  that contain a tag key in the table and a tag value matching one of its
  patterns are emitted.
* **tagdrop**:
  The inverse of `tagpass`. If a match is found the point is discarded. This
  is tested on points after they have passed the `tagpass` test.
* **taginclude**:
  An array of glob pattern strings. Only tags with a tag key matching one of
  the patterns are emitted. In contrast to `tagpass`, which will pass an entire
  point based on its tag, `taginclude` removes all non-matching tags from the
  point. This filter can be used on both inputs & outputs, but it is
  _recommended_ to be used on inputs, as it is more efficient to filter out tags
  at the ingestion point.
* **tagexclude**:
  The inverse of `taginclude`. Tags with a tag key matching one of the patterns
  will be discarded from the point.

**NOTE** Due to the way TOML is parsed, `tagpass` and `tagdrop` parameters
must be defined at the _end_ of the plugin definition, otherwise subsequent
plugin config options will be interpreted as part of the tagpass/tagdrop
tables.

#### Input Configuration Examples

This is a full working config that will output CPU data to an InfluxDB instance
at 192.168.59.103:8086, tagging measurements with dc="denver-1". It will output
measurements at a 10s interval and will collect per-cpu data, dropping any
fields which begin with `time_`.

```toml
[global_tags]
  dc = "denver-1"

[agent]
  interval = "10s"

# OUTPUTS
[[outputs.influxdb]]
  url = "http://192.168.59.103:8086" # required.
  database = "telegraf" # required.

# INPUTS
[[inputs.cpu]]
  percpu = true
  totalcpu = false
  # filter all fields beginning with 'time_'
  fielddrop = ["time_*"]
```

#### Input Config: tagpass and tagdrop

**NOTE** `tagpass` and `tagdrop` parameters must be defined at the _end_ of
the plugin definition, otherwise subsequent plugin config options will be
interpreted as part of the tagpass/tagdrop map.

```toml
[[inputs.cpu]]
  percpu = true
  totalcpu = false
  fielddrop = ["cpu_time"]
  # Don't collect CPU data for cpu6 & cpu7
  [inputs.cpu.tagdrop]
    cpu = [ "cpu6", "cpu7" ]

[[inputs.disk]]
  [inputs.disk.tagpass]
    # tagpass conditions are OR, not AND.
    # If the (filesystem is ext4 or xfs) OR (the path is /opt or /home)
    # then the metric passes
    fstype = [ "ext4", "xfs" ]
    # Globs can also be used on the tag values
    path = [ "/opt", "/home*" ]
```

#### Input Config: fieldpass and fielddrop

```toml
# Drop all metrics for guest & steal CPU usage
[[inputs.cpu]]
  percpu = false
  totalcpu = true
  fielddrop = ["usage_guest", "usage_steal"]

# Only store inode related metrics for disks
[[inputs.disk]]
  fieldpass = ["inodes*"]
```

#### Input Config: namepass and namedrop

```toml
# Drop all metrics about containers for kubelet
[[inputs.prometheus]]
  urls = ["http://kube-node-1:4194/metrics"]
  namedrop = ["container_*"]

# Only store rest client related metrics for kubelet
[[inputs.prometheus]]
  urls = ["http://kube-node-1:4194/metrics"]
  namepass = ["rest_client_*"]
```

#### Input Config: taginclude and tagexclude

```toml
# Only include the "cpu" tag in the measurements for the cpu plugin.
[[inputs.cpu]]
  percpu = true
  totalcpu = true
  taginclude = ["cpu"]

# Exclude the "fstype" tag from the measurements for the disk plugin.
[[inputs.disk]]
  tagexclude = ["fstype"]
```

#### Input config: prefix, suffix, and override

This plugin will emit measurements with the name `cpu_total`:

```toml
[[inputs.cpu]]
  name_suffix = "_total"
  percpu = false
  totalcpu = true
```

This will emit measurements with the name `foobar`:

```toml
[[inputs.cpu]]
  name_override = "foobar"
  percpu = false
  totalcpu = true
```

#### Input config: tags

This plugin will emit measurements with two additional tags: `tag1=foo` and
`tag2=bar`.

NOTE: Order matters; the `[inputs.cpu.tags]` table must be at the _end_ of the
plugin definition.

```toml
[[inputs.cpu]]
  percpu = false
  totalcpu = true
  [inputs.cpu.tags]
    tag1 = "foo"
    tag2 = "bar"
```

#### Multiple inputs of the same type

Additional inputs (or outputs) of the same type can be specified by defining
more instances in the config file. It is highly recommended that you use the
`name_override`, `name_prefix`, or `name_suffix` config options to avoid
measurement collisions:

```toml
[[inputs.cpu]]
  percpu = false
  totalcpu = true

[[inputs.cpu]]
  percpu = true
  totalcpu = false
  name_override = "percpu_usage"
  fielddrop = ["cpu_time*"]
```

#### Output Configuration Examples

```toml
[[outputs.influxdb]]
  urls = [ "http://localhost:8086" ]
  database = "telegraf"
  # Drop all measurements that start with "aerospike"
  namedrop = ["aerospike*"]

[[outputs.influxdb]]
  urls = [ "http://localhost:8086" ]
  database = "telegraf-aerospike-data"
  # Only accept aerospike data:
  namepass = ["aerospike*"]

[[outputs.influxdb]]
  urls = [ "http://localhost:8086" ]
  database = "telegraf-cpu0-data"
  # Only store measurements where the tag "cpu" matches the value "cpu0"
  [outputs.influxdb.tagpass]
    cpu = ["cpu0"]
```

#### Aggregator Configuration Examples

This will collect and emit the min/max of the system load1 metric every
30s, dropping the originals.

```toml
[[inputs.system]]
  fieldpass = ["load1"] # collects system load1 metric.

[[aggregators.minmax]]
  period = "30s"        # send & clear the aggregate every 30s.
  drop_original = true  # drop the original metrics.

[[outputs.file]]
  files = ["stdout"]
```

This will collect and emit the min/max of the swap metrics every
30s, dropping the originals. The aggregator will not be applied
to the system load metrics due to the `namepass` parameter.

```toml
[[inputs.swap]]

[[inputs.system]]
  fieldpass = ["load1"] # collects system load1 metric.

[[aggregators.minmax]]
  period = "30s"        # send & clear the aggregate every 30s.
  drop_original = true  # drop the original metrics.
  namepass = ["swap"]   # only "pass" swap metrics through the aggregator.

[[outputs.file]]
  files = ["stdout"]
```

#### Processor Configuration Examples

Print only the metrics with `cpu` as the measurement name; all metrics are
passed to the output:

```toml
[[processors.printer]]
  namepass = ["cpu"]

[[outputs.file]]
  files = ["/tmp/metrics.out"]
```

655 docs/DATA_FORMATS_INPUT.md Normal file
@@ -0,0 +1,655 @@

# Telegraf Input Data Formats

Telegraf is able to parse the following input data formats into metrics:

1. [InfluxDB Line Protocol](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#influx)
1. [JSON](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#json)
1. [Graphite](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#graphite)
1. [Value](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#value), i.e., 45 or "booyah"
1. [Nagios](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#nagios) (exec input only)
1. [Collectd](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#collectd)
1. [Dropwizard](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#dropwizard)

Telegraf metrics, like InfluxDB
[points](https://docs.influxdata.com/influxdb/v0.10/write_protocols/line/),
are a combination of four basic parts:

1. Measurement Name
1. Tags
1. Fields
1. Timestamp

These four parts are easily defined when using InfluxDB line-protocol as a
data format. But there are other data formats that users may want to use which
require more advanced configuration to create usable Telegraf metrics.

Plugins such as `exec` and `kafka_consumer` parse textual data. Up until now,
these plugins were statically configured to parse just a single
data format. `exec` mostly only supported parsing JSON, and `kafka_consumer` only
supported data in InfluxDB line-protocol.

But now we are normalizing the parsing of various data formats across all
plugins that can support it. You will be able to identify a plugin that supports
different data formats by the presence of a `data_format` config option, for
example, in the exec plugin:

```toml
[[inputs.exec]]
  ## Commands array
  commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]

  ## measurement name suffix (for separating different commands)
  name_suffix = "_mycollector"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "json"

  ## Additional configuration options go here
```

Each data_format has an additional set of configuration options available, which
are covered below.

# Influx:

There are no additional configuration options for InfluxDB line-protocol. The
metrics are parsed directly into Telegraf metrics.

#### Influx Configuration:

```toml
[[inputs.exec]]
  ## Commands array
  commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]

  ## measurement name suffix (for separating different commands)
  name_suffix = "_mycollector"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"
```

# JSON:

The JSON data format flattens JSON into metric _fields_.
NOTE: Only numerical values are converted to fields, and they are converted
into a float. Strings are ignored unless specified as a tag_key (see below).

So for example, this JSON:

```json
{
    "a": 5,
    "b": {
        "c": 6
    },
    "ignored": "I'm a string"
}
```

Would get translated into _fields_ of a measurement:

```
myjsonmetric a=5,b_c=6
```

The _measurement name_ is usually the name of the plugin,
but can be overridden using the `name_override` config option.

#### JSON Configuration:

The JSON data format supports specifying "tag keys". If specified, keys
will be searched for in the root-level of the JSON blob. If the key(s) exist,
they will be applied as tags to the Telegraf metrics.

For example, if you had this configuration:

```toml
[[inputs.exec]]
  ## Commands array
  commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]

  ## measurement name suffix (for separating different commands)
  name_suffix = "_mycollector"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "json"

  ## List of tag names to extract from top-level of JSON server response
  tag_keys = [
    "my_tag_1",
    "my_tag_2"
  ]
```

with this JSON output from a command:

```json
{
    "a": 5,
    "b": {
        "c": 6
    },
    "my_tag_1": "foo"
}
```

Your Telegraf metrics would get tagged with "my_tag_1":

```
exec_mycollector,my_tag_1=foo a=5,b_c=6
```

If the JSON data is an array, then each element of the array is parsed with the configured settings.
Each resulting metric will be output with the same timestamp.

For example, given the following configuration:

```toml
[[inputs.exec]]
  ## Commands array
  commands = ["/usr/bin/mycollector --foo=bar"]

  ## measurement name suffix (for separating different commands)
  name_suffix = "_mycollector"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "json"

  ## List of tag names to extract from top-level of JSON server response
  tag_keys = [
    "my_tag_1",
    "my_tag_2"
  ]
```

with this JSON output from a command:

```json
[
    {
        "a": 5,
        "b": {
            "c": 6
        },
        "my_tag_1": "foo",
        "my_tag_2": "baz"
    },
    {
        "a": 7,
        "b": {
            "c": 8
        },
        "my_tag_1": "bar",
        "my_tag_2": "baz"
    }
]
```

Your Telegraf metrics would get tagged with "my_tag_1" and "my_tag_2":

```
exec_mycollector,my_tag_1=foo,my_tag_2=baz a=5,b_c=6
exec_mycollector,my_tag_1=bar,my_tag_2=baz a=7,b_c=8
```

# Value:

The "value" data format translates single values into Telegraf metrics. This
is done by assigning a measurement name and setting a single field ("value")
as the parsed metric.

#### Value Configuration:

You **must** tell Telegraf what type of metric to collect by using the
`data_type` configuration option. Available options are:

1. integer
2. float or long
3. string
4. boolean

**Note:** It is also recommended that you set `name_override` to a measurement
name that makes sense for your metric; otherwise it will just be set to the
name of the plugin.

```toml
[[inputs.exec]]
  ## Commands array
  commands = ["cat /proc/sys/kernel/random/entropy_avail"]

  ## override the default metric name of "exec"
  name_override = "entropy_available"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "value"
  data_type = "integer" # required
```
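
If the command above printed `1024`, the parsed result would look roughly like
this in line protocol (a sketch; the `i` suffix marks an integer field):

```
entropy_available value=1024i
```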

# Graphite:

The Graphite data format translates graphite _dot_ buckets directly into
Telegraf measurement names, with a single value field, and without any tags.
By default, the separator is left as ".", but this can be changed using the
"separator" argument. For more advanced options,
Telegraf supports specifying "templates" to translate
graphite buckets into Telegraf metrics.

Templates are of the form:

```
"host.mytag.mytag.measurement.measurement.field*"
```

Where the following keywords exist:

1. `measurement`: specifies that this section of the graphite bucket corresponds
   to the measurement name. This can be specified multiple times.
2. `field`: specifies that this section of the graphite bucket corresponds
   to the field name. This can be specified multiple times.
3. `measurement*`: specifies that all remaining elements of the graphite bucket
   correspond to the measurement name.
4. `field*`: specifies that all remaining elements of the graphite bucket
   correspond to the field name.

Any part of the template that is not a keyword is treated as a tag key. This
can also be specified multiple times.

NOTE: `field*` cannot be used in conjunction with `measurement*`!
#### Measurement & Tag Templates:
|
||||
|
||||
The most basic template is to specify a single transformation to apply to all
|
||||
incoming metrics. So the following template:
|
||||
|
||||
```toml
|
||||
templates = [
|
||||
"region.region.measurement*"
|
||||
]
|
||||
```
|
||||
|
||||
would result in the following Graphite -> Telegraf transformation.
|
||||
|
||||
```
|
||||
us.west.cpu.load 100
|
||||
=> cpu.load,region=us.west value=100
|
||||
```
|
||||
|
||||
Multiple templates can also be specified, but these should be differentiated
|
||||
using _filters_ (see below for more details)
|
||||
|
||||
```toml
|
||||
templates = [
|
||||
"*.*.* region.region.measurement", # <- all 3-part measurements will match this one.
|
||||
"*.*.*.* region.region.host.measurement", # <- all 4-part measurements will match this one.
|
||||
]
|
||||
```
|
||||
|
||||

#### Field Templates:

The field keyword tells Telegraf to give the metric that field name.
So the following template:

```toml
separator = "_"
templates = [
    "measurement.measurement.field.field.region"
]
```

would result in the following Graphite -> Telegraf transformation.

```
cpu.usage.idle.percent.eu-east 100
=> cpu_usage,region=eu-east idle_percent=100
```

The field key can also be derived from all remaining elements of the graphite
bucket by specifying `field*`:

```toml
separator = "_"
templates = [
    "measurement.measurement.region.field*"
]
```

which would result in the following Graphite -> Telegraf transformation.

```
cpu.usage.eu-east.idle.percentage 100
=> cpu_usage,region=eu-east idle_percentage=100
```

#### Filter Templates:

Users can also filter the template(s) to use based on the name of the bucket,
using glob matching, like so:

```toml
templates = [
    "cpu.* measurement.measurement.region",
    "mem.* measurement.measurement.host"
]
```

which would result in the following transformation:

```
cpu.load.eu-east 100
=> cpu_load,region=eu-east value=100

mem.cached.localhost 256
=> mem_cached,host=localhost value=256
```

#### Adding Tags:

Tags that don't exist on the received metric can be added to it.
You can add additional tags by specifying them after the pattern.
Tags have the same format as the line protocol.
Multiple tags are separated by commas.

```toml
templates = [
    "measurement.measurement.field.region datacenter=1a"
]
```

would result in the following Graphite -> Telegraf transformation.

```
cpu.usage.idle.eu-east 100
=> cpu_usage,region=eu-east,datacenter=1a idle=100
```

There are many more options available;
[more details can be found here](https://github.com/influxdata/influxdb/tree/master/services/graphite#templates).

#### Graphite Configuration:

```toml
[[inputs.exec]]
  ## Commands array
  commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]

  ## measurement name suffix (for separating different commands)
  name_suffix = "_mycollector"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "graphite"

  ## This string will be used to join the matched values.
  separator = "_"

  ## Each template line requires a template pattern. It can have an optional
  ## filter before the template and separated by spaces. It can also have optional extra
  ## tags following the template. Multiple tags should be separated by commas and no spaces
  ## similar to the line protocol format. There can be only one default template.
  ## Templates support the formats below:
  ## 1. filter + template
  ## 2. filter + template + extra tag(s)
  ## 3. filter + template with field key
  ## 4. default template
  templates = [
    "*.app env.service.resource.measurement",
    "stats.* .host.measurement* region=eu-east,agent=sensu",
    "stats2.* .host.measurement.field",
    "measurement*"
  ]
```

# Nagios:

There are no additional configuration options for Nagios line-protocol. The
metrics are parsed directly into Telegraf metrics.

Note: The Nagios input data format is only supported by the `exec` input plugin.

#### Nagios Configuration:

```toml
[[inputs.exec]]
  ## Commands array
  commands = ["/usr/lib/nagios/plugins/check_load -w 5,6,7 -c 7,8,9"]

  ## measurement name suffix (for separating different commands)
  name_suffix = "_mycollector"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "nagios"
```

# Collectd:

The collectd format parses the collectd binary network protocol. Tags are
created for host, instance, type, and type instance. All collectd values are
added as float64 fields.

For more information about the binary network protocol see
[here](https://collectd.org/wiki/index.php/Binary_protocol).

You can control the cryptographic settings with parser options. Create an
authentication file and set `collectd_auth_file` to the path of the file, then
set the desired security level in `collectd_security_level`.

Additional information including client setup can be found
[here](https://collectd.org/wiki/index.php/Networking_introduction#Cryptographic_setup).

You can also change the path to the TypesDB file, or add additional TypesDB
files, using `collectd_typesdb`.

#### Collectd Configuration:

```toml
[[inputs.socket_listener]]
  service_address = "udp://127.0.0.1:25826"
  name_prefix = "collectd_"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "collectd"

  ## Authentication file for cryptographic security levels
  collectd_auth_file = "/etc/collectd/auth_file"
  ## One of none (default), sign, or encrypt
  collectd_security_level = "encrypt"
  ## Path to TypesDB specifications
  collectd_typesdb = ["/usr/share/collectd/types.db"]
```

# Dropwizard:

The dropwizard format can parse the JSON representation of a single dropwizard
metric registry. By default, tags are parsed from metric names as if they were
actual influxdb line protocol keys (`measurement<,tag_set>`), which can be
overridden by defining custom
[measurement & tag templates](./DATA_FORMATS_INPUT.md#measurement--tag-templates).
All field value types are supported: `string`, `number`, and `boolean`.

A typical JSON of a dropwizard metric registry:

```json
{
    "version": "3.0.0",
    "counters" : {
        "measurement,tag1=green" : {
            "count" : 1
        }
    },
    "meters" : {
        "measurement" : {
            "count" : 1,
            "m15_rate" : 1.0,
            "m1_rate" : 1.0,
            "m5_rate" : 1.0,
            "mean_rate" : 1.0,
            "units" : "events/second"
        }
    },
    "gauges" : {
        "measurement" : {
            "value" : 1
        }
    },
    "histograms" : {
        "measurement" : {
            "count" : 1,
            "max" : 1.0,
            "mean" : 1.0,
            "min" : 1.0,
            "p50" : 1.0,
            "p75" : 1.0,
            "p95" : 1.0,
            "p98" : 1.0,
            "p99" : 1.0,
            "p999" : 1.0,
            "stddev" : 1.0
        }
    },
    "timers" : {
        "measurement" : {
            "count" : 1,
            "max" : 1.0,
            "mean" : 1.0,
            "min" : 1.0,
            "p50" : 1.0,
            "p75" : 1.0,
            "p95" : 1.0,
            "p98" : 1.0,
            "p99" : 1.0,
            "p999" : 1.0,
            "stddev" : 1.0,
            "m15_rate" : 1.0,
            "m1_rate" : 1.0,
            "m5_rate" : 1.0,
            "mean_rate" : 1.0,
            "duration_units" : "seconds",
            "rate_units" : "calls/second"
        }
    }
}
```

Would get translated into 5 different measurements:

```
measurement,metric_type=counter,tag1=green count=1
measurement,metric_type=meter count=1,m15_rate=1.0,m1_rate=1.0,m5_rate=1.0,mean_rate=1.0
measurement,metric_type=gauge value=1
measurement,metric_type=histogram count=1,max=1.0,mean=1.0,min=1.0,p50=1.0,p75=1.0,p95=1.0,p98=1.0,p99=1.0,p999=1.0
measurement,metric_type=timer count=1,max=1.0,mean=1.0,min=1.0,p50=1.0,p75=1.0,p95=1.0,p98=1.0,p99=1.0,p999=1.0,stddev=1.0,m15_rate=1.0,m1_rate=1.0,m5_rate=1.0,mean_rate=1.0
```

You may also parse a dropwizard registry from any JSON document which contains a
dropwizard registry in some inner field. E.g., to parse the following JSON document:

```json
{
    "time" : "2017-02-22T14:33:03.662+02:00",
    "tags" : {
        "tag1" : "green",
        "tag2" : "yellow"
    },
    "metrics" : {
        "counters" : {
            "measurement" : {
                "count" : 1
            }
        },
        "meters" : {},
        "gauges" : {},
        "histograms" : {},
        "timers" : {}
    }
}
```

and translate it into:

```
measurement,metric_type=counter,tag1=green,tag2=yellow count=1 1487766783662000000
```

you simply need to use the following additional configuration properties:

```toml
dropwizard_metric_registry_path = "metrics"
dropwizard_time_path = "time"
dropwizard_time_format = "2006-01-02T15:04:05Z07:00"
dropwizard_tags_path = "tags"
## tag paths per tag are supported too, eg.
#[inputs.yourinput.dropwizard_tag_paths]
#  tag1 = "tags.tag1"
#  tag2 = "tags.tag2"
```

For more information about the dropwizard json format see
[here](http://metrics.dropwizard.io/3.1.0/manual/json/).

#### Dropwizard Configuration:

```toml
[[inputs.exec]]
  ## Commands array
  commands = ["curl http://localhost:8080/sys/metrics"]
  timeout = "5s"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "dropwizard"

  ## Used by the templating engine to join matched values when cardinality is > 1
  separator = "_"

  ## Each template line requires a template pattern. It can have an optional
  ## filter before the template and separated by spaces. It can also have optional extra
  ## tags following the template. Multiple tags should be separated by commas and no spaces
  ## similar to the line protocol format. There can be only one default template.
  ## Templates support the formats below:
  ## 1. filter + template
  ## 2. filter + template + extra tag(s)
  ## 3. filter + template with field key
  ## 4. default template
  ## By providing an empty template array, templating is disabled and measurements
  ## are parsed as influxdb line protocol keys (measurement<,tag_set>)
  templates = []

  ## You may use an appropriate [gjson path](https://github.com/tidwall/gjson#path-syntax)
  ## to locate the metric registry within the JSON document
  # dropwizard_metric_registry_path = "metrics"

  ## You may use an appropriate [gjson path](https://github.com/tidwall/gjson#path-syntax)
  ## to locate the default time of the measurements within the JSON document
  # dropwizard_time_path = "time"
  # dropwizard_time_format = "2006-01-02T15:04:05Z07:00"

  ## You may use an appropriate [gjson path](https://github.com/tidwall/gjson#path-syntax)
  ## to locate the tags map within the JSON document
  # dropwizard_tags_path = "tags"

  ## You may even use tag paths per tag
  # [inputs.exec.dropwizard_tag_paths]
  #   tag1 = "tags.tag1"
  #   tag2 = "tags.tag2"
```

211 docs/DATA_FORMATS_OUTPUT.md Normal file
@@ -0,0 +1,211 @@

# Output Data Formats

In addition to output-specific data formats, Telegraf supports a set of
standard data formats that may be selected when configuring many output
plugins.

1. [InfluxDB Line Protocol](#influx)
1. [JSON](#json)
1. [Graphite](#graphite)

You will be able to identify the plugins with support by the presence of a
`data_format` config option, for example, in the `file` output plugin:

```toml
[[outputs.file]]
  ## Files to write to, "stdout" is a specially handled file.
  files = ["stdout"]

  ## Data format to output.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
  data_format = "influx"
```

## Influx

The `influx` data format outputs metrics using
[InfluxDB Line Protocol](https://docs.influxdata.com/influxdb/latest/write_protocols/line_protocol_tutorial/).
This is the recommended format unless another format is required for
interoperability.

### Influx Configuration

```toml
[[outputs.file]]
  ## Files to write to, "stdout" is a specially handled file.
  files = ["stdout", "/tmp/metrics.out"]

  ## Data format to output.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
  data_format = "influx"

  ## Maximum line length in bytes. Useful only for debugging.
  # influx_max_line_bytes = 0

  ## When true, fields will be output in ascending lexical order. Enabling
  ## this option will result in decreased performance and is only recommended
  ## when you need predictable ordering while debugging.
  # influx_sort_fields = false

  ## When true, Telegraf will output unsigned integers as unsigned values,
  ## i.e.: `42u`. You will need a version of InfluxDB supporting unsigned
  ## integer values. Enabling this option will result in field type errors if
  ## existing data has been written.
  # influx_uint_support = false
```
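
With this configuration, each metric is serialized as one line of line
protocol; for example, a cpu metric might be written as:

```
cpu,cpu=cpu-total,dc=us-east-1,host=tars usage_idle=98.09,usage_user=0.89 1455320660004257758
```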

## Graphite

The Graphite data format is translated from Telegraf metrics using either the
template pattern or tag support method. You can select between the two
methods using the [`graphite_tag_support`](#graphite-tag-support) option. When set, the tag support
method is used; otherwise the [`template` pattern](#template-pattern) is used.

#### Template Pattern

The `template` option describes how Telegraf translates metrics into _dot_
buckets. The default template is:

```
template = "host.tags.measurement.field"
```

In the above template, we have four parts:

1. _host_ is a tag key. This can be any tag key that is in the Telegraf
   metric(s). If the key doesn't exist, it will be ignored. If it does exist,
   the tag value will be filled in.
1. _tags_ is a special keyword that outputs all remaining tag values, separated
   by dots and in alphabetical order (by tag key). These will be filled after
   all tag keys are filled.
1. _measurement_ is a special keyword that outputs the measurement name.
1. _field_ is a special keyword that outputs the field name.

**Example Conversion**:

```
cpu,cpu=cpu-total,dc=us-east-1,host=tars usage_idle=98.09,usage_user=0.89 1455320660004257758
=>
tars.cpu-total.us-east-1.cpu.usage_user 0.89 1455320690
tars.cpu-total.us-east-1.cpu.usage_idle 98.09 1455320690
```

Fields with string values will be skipped. Boolean fields will be converted
to 1 (true) or 0 (false).

#### Graphite Tag Support

When the `graphite_tag_support` option is enabled, the template pattern is not
used. Instead, tags are encoded using
[Graphite tag support](http://graphite.readthedocs.io/en/latest/tags.html)
added in Graphite 1.1. The `metric_path` is a combination of the optional
`prefix` option, measurement name, and field name.

The tag `name` is reserved by Graphite; any conflicting tags will be encoded as `_name`.

**Example Conversion**:

```
cpu,cpu=cpu-total,dc=us-east-1,host=tars usage_idle=98.09,usage_user=0.89 1455320660004257758
=>
cpu.usage_user;cpu=cpu-total;dc=us-east-1;host=tars 0.89 1455320690
cpu.usage_idle;cpu=cpu-total;dc=us-east-1;host=tars 98.09 1455320690
```

### Graphite Configuration

```toml
[[outputs.file]]
  ## Files to write to, "stdout" is a specially handled file.
  files = ["stdout", "/tmp/metrics.out"]

  ## Data format to output.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
  data_format = "graphite"

  ## Prefix added to each graphite bucket
  prefix = "telegraf"
  ## Graphite template pattern
  template = "host.tags.measurement.field"

  ## Support Graphite tags, recommended to enable when using Graphite 1.1 or later.
  # graphite_tag_support = false
```

## JSON

The JSON data format output for a single metric is of the form:

```json
{
    "fields": {
        "field_1": 30,
        "field_2": 4,
        "field_N": 59,
        "n_images": 660
    },
    "name": "docker",
    "tags": {
        "host": "raynor"
    },
    "timestamp": 1458229140
}
```

When an output plugin needs to emit multiple metrics at one time, it may use
the batch format. The use of the batch format is determined by the plugin;
refer to the documentation for the specific plugin.

```json
{
    "metrics": [
        {
            "fields": {
                "field_1": 30,
                "field_2": 4,
                "field_N": 59,
                "n_images": 660
            },
            "name": "docker",
            "tags": {
                "host": "raynor"
            },
            "timestamp": 1458229140
        },
        {
            "fields": {
                "field_1": 30,
                "field_2": 4,
                "field_N": 59,
                "n_images": 660
            },
            "name": "docker",
            "tags": {
                "host": "raynor"
            },
            "timestamp": 1458229140
        }
    ]
}
```

### JSON Configuration

```toml
[[outputs.file]]
  ## Files to write to, "stdout" is a specially handled file.
  files = ["stdout", "/tmp/metrics.out"]

  ## Data format to output.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
  data_format = "json"

  ## The resolution to use for the metric timestamp. Must be a duration string
  ## such as "1ns", "1us", "1ms", "10ms", "1s". Durations are truncated to
  ## the power of 10 less than the specified units.
  json_timestamp_units = "1s"
```

46 docs/FAQ.md Normal file
@@ -0,0 +1,46 @@

# Frequently Asked Questions

### Q: How can I monitor the Docker Engine Host from within a container?

You will need to set up several volume mounts as well as some environment
variables:
```
docker run --name telegraf \
  -v /:/hostfs:ro \
  -v /etc:/hostfs/etc:ro \
  -v /proc:/hostfs/proc:ro \
  -v /sys:/hostfs/sys:ro \
  -v /var/run/utmp:/var/run/utmp:ro \
  -e HOST_ETC=/hostfs/etc \
  -e HOST_PROC=/hostfs/proc \
  -e HOST_SYS=/hostfs/sys \
  -e HOST_MOUNT_PREFIX=/hostfs \
  telegraf
```

### Q: Why do I get a "no such host" error resolving hostnames that other programs can resolve?

Go uses a pure Go resolver by default for [name resolution](https://golang.org/pkg/net/#hdr-Name_Resolution).
This resolver behaves differently than the C library functions but is more
efficient when used with the Go runtime.

If you encounter problems or want to use more advanced name resolution methods
that are unsupported by the pure Go resolver, you can switch to the cgo
resolver.

If running manually, set:
```
export GODEBUG=netdns=cgo
```

If running as a service, add the environment variable to `/etc/default/telegraf`:
```
GODEBUG=netdns=cgo
```

### Q: When will the next version be released?

The latest release date estimate can be viewed on the
[milestones](https://github.com/influxdata/telegraf/milestones) page.

111 docs/LICENSE_OF_DEPENDENCIES.md Normal file
@@ -0,0 +1,111 @@

# Licenses of dependencies

When distributed in a binary form, Telegraf may contain portions of the
following works:

- code.cloudfoundry.org/clock [APACHE](https://github.com/cloudfoundry/clock/blob/master/LICENSE)
- collectd.org [MIT](https://github.com/collectd/go-collectd/blob/master/LICENSE)
- github.com/aerospike/aerospike-client-go [APACHE](https://github.com/aerospike/aerospike-client-go/blob/master/LICENSE)
- github.com/amir/raidman [PUBLIC DOMAIN](https://github.com/amir/raidman/blob/master/UNLICENSE)
- github.com/armon/go-metrics [MIT](https://github.com/armon/go-metrics/blob/master/LICENSE)
- github.com/aws/aws-sdk-go [APACHE](https://github.com/aws/aws-sdk-go/blob/master/LICENSE.txt)
- github.com/beorn7/perks [MIT](https://github.com/beorn7/perks/blob/master/LICENSE)
- github.com/boltdb/bolt [MIT](https://github.com/boltdb/bolt/blob/master/LICENSE)
- github.com/bsm/sarama-cluster [MIT](https://github.com/bsm/sarama-cluster/blob/master/LICENSE)
- github.com/cenkalti/backoff [MIT](https://github.com/cenkalti/backoff/blob/master/LICENSE)
- github.com/chuckpreslar/rcon [MIT](https://github.com/chuckpreslar/rcon#license)
- github.com/couchbase/go-couchbase [MIT](https://github.com/couchbase/go-couchbase/blob/master/LICENSE)
- github.com/couchbase/gomemcached [MIT](https://github.com/couchbase/gomemcached/blob/master/LICENSE)
- github.com/couchbase/goutils [MIT](https://github.com/couchbase/go-couchbase/blob/master/LICENSE)
- github.com/dancannon/gorethink [APACHE](https://github.com/dancannon/gorethink/blob/master/LICENSE)
- github.com/davecgh/go-spew [ISC](https://github.com/davecgh/go-spew/blob/master/LICENSE)
- github.com/docker/docker [APACHE](https://github.com/docker/docker/blob/master/LICENSE)
- github.com/docker/cli [APACHE](https://github.com/docker/cli/blob/master/LICENSE)
- github.com/eapache/go-resiliency [MIT](https://github.com/eapache/go-resiliency/blob/master/LICENSE)
- github.com/eapache/go-xerial-snappy [MIT](https://github.com/eapache/go-xerial-snappy/blob/master/LICENSE)
- github.com/eapache/queue [MIT](https://github.com/eapache/queue/blob/master/LICENSE)
- github.com/eclipse/paho.mqtt.golang [ECLIPSE](https://github.com/eclipse/paho.mqtt.golang/blob/master/LICENSE)
- github.com/fsnotify/fsnotify [BSD](https://github.com/fsnotify/fsnotify/blob/master/LICENSE)
- github.com/fsouza/go-dockerclient [BSD](https://github.com/fsouza/go-dockerclient/blob/master/LICENSE)
- github.com/gobwas/glob [MIT](https://github.com/gobwas/glob/blob/master/LICENSE)
- github.com/google/go-cmp [BSD](https://github.com/google/go-cmp/blob/master/LICENSE)
- github.com/gogo/protobuf [BSD](https://github.com/gogo/protobuf/blob/master/LICENSE)
- github.com/golang/protobuf [BSD](https://github.com/golang/protobuf/blob/master/LICENSE)
- github.com/golang/snappy [BSD](https://github.com/golang/snappy/blob/master/LICENSE)
- github.com/go-logfmt/logfmt [MIT](https://github.com/go-logfmt/logfmt/blob/master/LICENSE)
- github.com/gorilla/mux [BSD](https://github.com/gorilla/mux/blob/master/LICENSE)
- github.com/go-ini/ini [APACHE](https://github.com/go-ini/ini/blob/master/LICENSE)
- github.com/go-ole/go-ole [MPL](http://mattn.mit-license.org/2013)
- github.com/go-sql-driver/mysql [MPL](https://github.com/go-sql-driver/mysql/blob/master/LICENSE)
- github.com/hailocab/go-hostpool [MIT](https://github.com/hailocab/go-hostpool/blob/master/LICENSE)
- github.com/hashicorp/consul [MPL](https://github.com/hashicorp/consul/blob/master/LICENSE)
- github.com/hashicorp/go-msgpack [BSD](https://github.com/hashicorp/go-msgpack/blob/master/LICENSE)
- github.com/hashicorp/raft-boltdb [MPL](https://github.com/hashicorp/raft-boltdb/blob/master/LICENSE)
- github.com/hashicorp/raft [MPL](https://github.com/hashicorp/raft/blob/master/LICENSE)
- github.com/influxdata/tail [MIT](https://github.com/influxdata/tail/blob/master/LICENSE.txt)
- github.com/influxdata/toml [MIT](https://github.com/influxdata/toml/blob/master/LICENSE)
- github.com/influxdata/wlog [MIT](https://github.com/influxdata/wlog/blob/master/LICENSE)
- github.com/jackc/pgx [MIT](https://github.com/jackc/pgx/blob/master/LICENSE)
- github.com/jmespath/go-jmespath [APACHE](https://github.com/jmespath/go-jmespath/blob/master/LICENSE)
- github.com/kardianos/osext [BSD](https://github.com/kardianos/osext/blob/master/LICENSE)
- github.com/kardianos/service [ZLIB](https://github.com/kardianos/service/blob/master/LICENSE) (License not named but matches word for word with ZLib)
- github.com/kballard/go-shellquote [MIT](https://github.com/kballard/go-shellquote/blob/master/LICENSE)
- github.com/lib/pq [MIT](https://github.com/lib/pq/blob/master/LICENSE.md)
- github.com/matttproud/golang_protobuf_extensions [APACHE](https://github.com/matttproud/golang_protobuf_extensions/blob/master/LICENSE)
- github.com/Microsoft/ApplicationInsights-Go [APACHE](https://github.com/Microsoft/ApplicationInsights-Go/blob/master/LICENSE)
- github.com/Microsoft/go-winio [MIT](https://github.com/Microsoft/go-winio/blob/master/LICENSE)
- github.com/miekg/dns [BSD](https://github.com/miekg/dns/blob/master/LICENSE)
- github.com/naoina/go-stringutil [MIT](https://github.com/naoina/go-stringutil/blob/master/LICENSE)
- github.com/naoina/toml [MIT](https://github.com/naoina/toml/blob/master/LICENSE)
- github.com/nats-io/gnatsd [MIT](https://github.com/nats-io/gnatsd/blob/master/LICENSE)
- github.com/nats-io/go-nats [MIT](https://github.com/nats-io/go-nats/blob/master/LICENSE)
- github.com/nats-io/nats [MIT](https://github.com/nats-io/nats/blob/master/LICENSE)
- github.com/nats-io/nuid [MIT](https://github.com/nats-io/nuid/blob/master/LICENSE)
- github.com/nsqio/go-nsq [MIT](https://github.com/nsqio/go-nsq/blob/master/LICENSE)
- github.com/opentracing-contrib/go-observer [APACHE](https://github.com/opentracing-contrib/go-observer/blob/master/LICENSE)
- github.com/opentracing/opentracing-go [MIT](https://github.com/opentracing/opentracing-go/blob/master/LICENSE)
- github.com/openzipkin/zipkin-go-opentracing [MIT](https://github.com/openzipkin/zipkin-go-opentracing/blob/master/LICENSE)
- github.com/pierrec/lz4 [BSD](https://github.com/pierrec/lz4/blob/master/LICENSE)
- github.com/pierrec/xxHash [BSD](https://github.com/pierrec/xxHash/blob/master/LICENSE)
- github.com/pkg/errors [BSD](https://github.com/pkg/errors/blob/master/LICENSE)
- github.com/pmezard/go-difflib [BSD](https://github.com/pmezard/go-difflib/blob/master/LICENSE)
- github.com/prometheus/client_golang [APACHE](https://github.com/prometheus/client_golang/blob/master/LICENSE)
- github.com/prometheus/client_model [APACHE](https://github.com/prometheus/client_model/blob/master/LICENSE)
- github.com/prometheus/common [APACHE](https://github.com/prometheus/common/blob/master/LICENSE)
- github.com/prometheus/procfs [APACHE](https://github.com/prometheus/procfs/blob/master/LICENSE)
- github.com/rcrowley/go-metrics [BSD](https://github.com/rcrowley/go-metrics/blob/master/LICENSE)
- github.com/samuel/go-zookeeper [BSD](https://github.com/samuel/go-zookeeper/blob/master/LICENSE)
- github.com/satori/go.uuid [MIT](https://github.com/satori/go.uuid/blob/master/LICENSE)
- github.com/shirou/gopsutil [BSD](https://github.com/shirou/gopsutil/blob/master/LICENSE)
- github.com/shirou/w32 [BSD](https://github.com/shirou/w32/blob/master/LICENSE)
- github.com/Shopify/sarama [MIT](https://github.com/Shopify/sarama/blob/master/MIT-LICENSE)
- github.com/Sirupsen/logrus [MIT](https://github.com/Sirupsen/logrus/blob/master/LICENSE)
- github.com/StackExchange/wmi [MIT](https://github.com/StackExchange/wmi/blob/master/LICENSE)
- github.com/stretchr/objx [MIT](https://github.com/stretchr/objx/blob/master/LICENSE.md)
- github.com/soniah/gosnmp [BSD](https://github.com/soniah/gosnmp/blob/master/LICENSE)
- github.com/streadway/amqp [BSD](https://github.com/streadway/amqp/blob/master/LICENSE)
- github.com/stretchr/testify [MIT](https://github.com/stretchr/testify/blob/master/LICENCE.txt)
- github.com/tidwall/gjson [MIT](https://github.com/tidwall/gjson/blob/master/LICENSE)
- github.com/tidwall/match [MIT](https://github.com/tidwall/match/blob/master/LICENSE)
- github.com/mitchellh/mapstructure [MIT](https://github.com/mitchellh/mapstructure/blob/master/LICENSE)
- github.com/multiplay/go-ts3 [BSD](https://github.com/multiplay/go-ts3/blob/master/LICENSE)
- github.com/vjeantet/grok [APACHE](https://github.com/vjeantet/grok/blob/master/LICENSE)
- github.com/wvanbergen/kafka [MIT](https://github.com/wvanbergen/kafka/blob/master/LICENSE)
- github.com/wvanbergen/kazoo-go [MIT](https://github.com/wvanbergen/kazoo-go/blob/master/MIT-LICENSE)
- github.com/yuin/gopher-lua [MIT](https://github.com/yuin/gopher-lua/blob/master/LICENSE)
- github.com/zensqlmonitor/go-mssqldb [BSD](https://github.com/zensqlmonitor/go-mssqldb/blob/master/LICENSE.txt)
- golang.org/x/crypto [BSD](https://github.com/golang/crypto/blob/master/LICENSE)
- golang.org/x/net [BSD](https://go.googlesource.com/net/+/master/LICENSE)
- golang.org/x/text [BSD](https://go.googlesource.com/text/+/master/LICENSE)
- golang.org/x/sys [BSD](https://go.googlesource.com/sys/+/master/LICENSE)
- google.golang.org/grpc [APACHE](https://github.com/google/grpc-go/blob/master/LICENSE)
- google.golang.org/genproto [APACHE](https://github.com/google/go-genproto/blob/master/LICENSE)
- gopkg.in/asn1-ber.v1 [MIT](https://github.com/go-asn1-ber/asn1-ber/blob/v1.2/LICENSE)
- gopkg.in/dancannon/gorethink.v1 [APACHE](https://github.com/dancannon/gorethink/blob/v1.1.2/LICENSE)
- gopkg.in/fatih/pool.v2 [MIT](https://github.com/fatih/pool/blob/v2.0.0/LICENSE)
- gopkg.in/ldap.v2 [MIT](https://github.com/go-ldap/ldap/blob/v2.5.0/LICENSE)
- gopkg.in/mgo.v2 [BSD](https://github.com/go-mgo/mgo/blob/v2/LICENSE)
- gopkg.in/olivere/elastic.v5 [MIT](https://github.com/olivere/elastic/blob/v5.0.38/LICENSE)
- gopkg.in/tomb.v1 [BSD](https://github.com/go-tomb/tomb/blob/v1/LICENSE)
- gopkg.in/yaml.v2 [APACHE](https://github.com/go-yaml/yaml/blob/v2/LICENSE)
24 docs/PROFILING.md Normal file
@@ -0,0 +1,24 @@
# Telegraf profiling

Telegraf uses the standard package `net/http/pprof`. This package serves runtime profiling data via its HTTP server, in the format expected by the pprof visualization tool.

By default, profiling is turned off.

To enable profiling, pass an address to the `pprof-addr` command-line parameter, for example:

```
telegraf --config telegraf.conf --pprof-addr localhost:6060
```

There are several paths to get different profiling information:

To look at the heap profile:

`go tool pprof http://localhost:6060/debug/pprof/heap`

or to look at a 30-second CPU profile:

`go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30`

To view all available profiles, open `http://localhost:6060/debug/pprof/` in your browser.
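For context, the `pprof-addr` flag works because any Go program can expose these endpoints by importing `net/http/pprof` for its side effects; here is a minimal, self-contained sketch (not Telegraf's actual wiring) of what that enablement looks like:

```
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on http.DefaultServeMux
)

func main() {
	// Serving DefaultServeMux exposes /debug/pprof/heap,
	// /debug/pprof/profile, and the other profiles listed above.
	log.Println(http.ListenAndServe("localhost:6060", nil))
}
```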
53 docs/WINDOWS_SERVICE.md Normal file
@@ -0,0 +1,53 @@
# Running Telegraf as a Windows Service

Telegraf natively supports running as a Windows Service. Outlined below are
the general steps to set it up.

1. Obtain the Telegraf Windows distribution
2. Create the directory `C:\Program Files\Telegraf` (if you install in a different
   location, simply specify the `--config` parameter with the desired location)
3. Place the telegraf.exe and the telegraf.conf config file into `C:\Program Files\Telegraf`
4. To install the service into the Windows Service Manager, run the following in PowerShell as an administrator (if necessary, you can wrap any spaces in the file paths in double quotes ""):

```
> C:\"Program Files"\Telegraf\telegraf.exe --service install
```

5. Edit the configuration file to meet your needs
6. To check that it works, run:

```
> C:\"Program Files"\Telegraf\telegraf.exe --config C:\"Program Files"\Telegraf\telegraf.conf --test
```

7. To start collecting data, run:

```
> net start telegraf
```

## Config Directory

You can also specify a `--config-directory` for the service to use:

1. Create a directory for config snippets: `C:\Program Files\Telegraf\telegraf.d`
2. Include the `--config-directory` option when registering the service:

```
> C:\"Program Files"\Telegraf\telegraf.exe --service install --config C:\"Program Files"\Telegraf\telegraf.conf --config-directory C:\"Program Files"\Telegraf\telegraf.d
```

## Other supported operations

Telegraf can manage its own service through the `--service` flag:

| Command                            | Effect                        |
|------------------------------------|-------------------------------|
| `telegraf.exe --service install`   | Install telegraf as a service |
| `telegraf.exe --service uninstall` | Remove the telegraf service   |
| `telegraf.exe --service start`     | Start the telegraf service    |
| `telegraf.exe --service stop`      | Stop the telegraf service     |

## Troubleshooting common error #1067

When installing Telegraf as a Windows service, always double-check that you specify the full path of the config file; otherwise the Windows service will fail to start:

    --config C:\"Program Files"\Telegraf\telegraf.conf
11 etc/logrotate.d/telegraf Normal file
@@ -0,0 +1,11 @@
/var/log/telegraf/telegraf.log
{
  rotate 6
  daily
  missingok
  dateext
  copytruncate
  notifempty
  compress
}
3633 etc/telegraf.conf Normal file
File diff suppressed because it is too large
266 etc/telegraf_windows.conf Normal file
@@ -0,0 +1,266 @@
# Telegraf configuration

# Telegraf is entirely plugin driven. All metrics are gathered from the
# declared inputs, and sent to the declared outputs.

# Plugins must be declared in here to be active.
# To deactivate a plugin, comment out the name and any variables.

# Use 'telegraf -config telegraf.conf -test' to see what metrics a config
# file would generate.

# Global tags can be specified here in key="value" format.
[global_tags]
  # dc = "us-east-1" # will tag all metrics with dc=us-east-1
  # rack = "1a"

# Configuration for telegraf agent
[agent]
  ## Default data collection interval for all inputs
  interval = "10s"
  ## Rounds collection interval to 'interval'
  ## ie, if interval="10s" then always collect on :00, :10, :20, etc.
  round_interval = true

  ## Telegraf will cache metric_buffer_limit metrics for each output, and will
  ## flush this buffer on a successful write.
  metric_buffer_limit = 1000
  ## Flush the buffer whenever full, regardless of flush_interval.
  flush_buffer_when_full = true

  ## Collection jitter is used to jitter the collection by a random amount.
  ## Each plugin will sleep for a random time within jitter before collecting.
  ## This can be used to avoid many plugins querying things like sysfs at the
  ## same time, which can have a measurable effect on the system.
  collection_jitter = "0s"

  ## Default flushing interval for all outputs. You shouldn't set this below
  ## interval. Maximum flush_interval will be flush_interval + flush_jitter
  flush_interval = "10s"
  ## Jitter the flush interval by a random amount. This is primarily to avoid
  ## large write spikes for users running a large number of telegraf instances.
  ## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
  flush_jitter = "0s"

  ## Logging configuration:
  ## Run telegraf in debug mode
  debug = false
  ## Run telegraf in quiet mode
  quiet = false
  ## Specify the log file name. The empty string means to log to stdout.
  logfile = "/Program Files/Telegraf/telegraf.log"

  ## Override default hostname, if empty use os.Hostname()
  hostname = ""


###############################################################################
#                                  OUTPUTS                                    #
###############################################################################

# Configuration for influxdb server to send metrics to
[[outputs.influxdb]]
  # The full HTTP or UDP endpoint URL for your InfluxDB instance.
  # Multiple urls can be specified, but it is assumed that they are part of the
  # same cluster; this means that only ONE of the urls will be written to each interval.
  # urls = ["udp://127.0.0.1:8089"] # UDP endpoint example
  urls = ["http://127.0.0.1:8086"] # required
  # The target database for metrics (telegraf will create it if it does not exist)
  database = "telegraf" # required
  # Precision of writes, valid values are "ns", "us" (or "µs"), "ms", "s", "m", "h".
  # note: using second precision greatly helps InfluxDB compression
  precision = "s"

  ## Write timeout (for the InfluxDB client), formatted as a string.
  ## If not provided, will default to 5s. 0s means no timeout (not recommended).
  timeout = "5s"
  # username = "telegraf"
  # password = "metricsmetricsmetricsmetrics"
  # Set the user agent for HTTP POSTs (can be useful for log differentiation)
  # user_agent = "telegraf"
  # Set UDP payload size, defaults to InfluxDB UDP Client default (512 bytes)
  # udp_payload = 512


###############################################################################
#                                  INPUTS                                     #
###############################################################################

# Windows Performance Counters plugin.
# This is the recommended method of monitoring system metrics on Windows,
# as the regular system plugins (inputs.cpu, inputs.mem, etc.) rely on WMI,
# which uses more system resources.
#
# See more configuration examples at:
#   https://github.com/influxdata/telegraf/tree/master/plugins/inputs/win_perf_counters

[[inputs.win_perf_counters]]
  [[inputs.win_perf_counters.object]]
    # Processor usage; an alternative to the native plugin that reports per core.
    ObjectName = "Processor"
    Instances = ["*"]
    Counters = [
      "% Idle Time",
      "% Interrupt Time",
      "% Privileged Time",
      "% User Time",
      "% Processor Time",
      "% DPC Time",
    ]
    Measurement = "win_cpu"
    # Set to true to include _Total instance when querying for all (*).
    IncludeTotal=true

  [[inputs.win_perf_counters.object]]
    # Disk times and queues
    ObjectName = "LogicalDisk"
    Instances = ["*"]
    Counters = [
      "% Idle Time",
      "% Disk Time",
      "% Disk Read Time",
      "% Disk Write Time",
      "Current Disk Queue Length",
      "% Free Space",
      "Free Megabytes",
    ]
    Measurement = "win_disk"
    # Set to true to include _Total instance when querying for all (*).
    #IncludeTotal=false

  [[inputs.win_perf_counters.object]]
    ObjectName = "PhysicalDisk"
    Instances = ["*"]
    Counters = [
      "Disk Read Bytes/sec",
      "Disk Write Bytes/sec",
      "Current Disk Queue Length",
      "Disk Reads/sec",
      "Disk Writes/sec",
      "% Disk Time",
      "% Disk Read Time",
      "% Disk Write Time",
    ]
    Measurement = "win_diskio"

  [[inputs.win_perf_counters.object]]
    ObjectName = "Network Interface"
    Instances = ["*"]
    Counters = [
      "Bytes Received/sec",
      "Bytes Sent/sec",
      "Packets Received/sec",
      "Packets Sent/sec",
      "Packets Received Discarded",
      "Packets Outbound Discarded",
      "Packets Received Errors",
      "Packets Outbound Errors",
    ]
    Measurement = "win_net"

  [[inputs.win_perf_counters.object]]
    ObjectName = "System"
    Counters = [
      "Context Switches/sec",
      "System Calls/sec",
      "Processor Queue Length",
      "System Up Time",
    ]
    Instances = ["------"]
    Measurement = "win_system"
    # Set to true to include _Total instance when querying for all (*).
    #IncludeTotal=false

  [[inputs.win_perf_counters.object]]
    # Example query where the Instance portion must be removed to get data back,
    # such as from the Memory object.
    ObjectName = "Memory"
    Counters = [
      "Available Bytes",
      "Cache Faults/sec",
      "Demand Zero Faults/sec",
      "Page Faults/sec",
      "Pages/sec",
      "Transition Faults/sec",
      "Pool Nonpaged Bytes",
      "Pool Paged Bytes",
      "Standby Cache Reserve Bytes",
      "Standby Cache Normal Priority Bytes",
      "Standby Cache Core Bytes",
    ]
    # Use 6 x - to remove the Instance bit from the query.
    Instances = ["------"]
    Measurement = "win_mem"
    # Set to true to include _Total instance when querying for all (*).
    #IncludeTotal=false

  [[inputs.win_perf_counters.object]]
    # Example query where the Instance portion must be removed to get data back,
    # such as from the Paging File object.
    ObjectName = "Paging File"
    Counters = [
      "% Usage",
    ]
    Instances = ["_Total"]
    Measurement = "win_swap"

  [[inputs.win_perf_counters.object]]
    ObjectName = "Network Interface"
    Instances = ["*"]
    Counters = [
      "Bytes Sent/sec",
      "Bytes Received/sec",
      "Packets Sent/sec",
      "Packets Received/sec",
      "Packets Received Discarded",
      "Packets Received Errors",
      "Packets Outbound Discarded",
      "Packets Outbound Errors",
    ]


# Windows system plugins using WMI (disabled by default, using
# win_perf_counters over WMI is recommended)

# # Read metrics about cpu usage
# [[inputs.cpu]]
#   ## Whether to report per-cpu stats or not
#   percpu = true
#   ## Whether to report total system cpu stats or not
#   totalcpu = true
#   ## Comment this line if you want the raw CPU time metrics
#   fielddrop = ["time_*"]


# # Read metrics about disk usage by mount point
# [[inputs.disk]]
#   ## By default, telegraf gathers stats for all mountpoints.
#   ## Setting mountpoints will restrict the stats to the specified mountpoints.
#   ## mount_points=["/"]
#
#   ## Ignore some mountpoints by filesystem type. For example (dev)tmpfs (usually
#   ## present on /run, /var/run, /dev/shm or /dev).
#   # ignore_fs = ["tmpfs", "devtmpfs"]


# # Read metrics about disk IO by device
# [[inputs.diskio]]
#   ## By default, telegraf will gather stats for all devices including
#   ## disk partitions.
#   ## Setting devices will restrict the stats to the specified devices.
#   ## devices = ["sda", "sdb"]
#   ## Uncomment the following line if you do not need disk serial numbers.
#   ## skip_serial_number = true


# # Read metrics about memory usage
# [[inputs.mem]]
#   # no configuration


# # Read metrics about swap memory usage
# [[inputs.swap]]
#   # no configuration
116 filter/filter.go Normal file
@@ -0,0 +1,116 @@
package filter

import (
	"strings"

	"github.com/gobwas/glob"
)

type Filter interface {
	Match(string) bool
}

// Compile takes a list of string filters and returns a Filter interface
// for matching a given string against the filter list. The filter list
// supports glob matching too, ie:
//
//	f, _ := Compile([]string{"cpu", "mem", "net*"})
//	f.Match("cpu")     // true
//	f.Match("network") // true
//	f.Match("memory")  // false
//
func Compile(filters []string) (Filter, error) {
	// return if there is nothing to compile
	if len(filters) == 0 {
		return nil, nil
	}

	// check if we can compile a non-glob filter
	noGlob := true
	for _, filter := range filters {
		if hasMeta(filter) {
			noGlob = false
			break
		}
	}

	switch {
	case noGlob:
		// return non-globbing filter if not needed.
		return compileFilterNoGlob(filters), nil
	case len(filters) == 1:
		return glob.Compile(filters[0])
	default:
		return glob.Compile("{" + strings.Join(filters, ",") + "}")
	}
}

// hasMeta reports whether path contains any magic glob characters.
func hasMeta(s string) bool {
	return strings.IndexAny(s, "*?[") >= 0
}

type filter struct {
	m map[string]struct{}
}

func (f *filter) Match(s string) bool {
	_, ok := f.m[s]
	return ok
}

type filtersingle struct {
	s string
}

func (f *filtersingle) Match(s string) bool {
	return f.s == s
}

func compileFilterNoGlob(filters []string) Filter {
	if len(filters) == 1 {
		return &filtersingle{s: filters[0]}
	}
	out := filter{m: make(map[string]struct{})}
	for _, filter := range filters {
		out.m[filter] = struct{}{}
	}
	return &out
}

type IncludeExcludeFilter struct {
	include Filter
	exclude Filter
}

func NewIncludeExcludeFilter(
	include []string,
	exclude []string,
) (Filter, error) {
	in, err := Compile(include)
	if err != nil {
		return nil, err
	}

	ex, err := Compile(exclude)
	if err != nil {
		return nil, err
	}

	return &IncludeExcludeFilter{in, ex}, nil
}

func (f *IncludeExcludeFilter) Match(s string) bool {
	if f.include != nil {
		if !f.include.Match(s) {
			return false
		}
	}

	if f.exclude != nil {
		if f.exclude.Match(s) {
			return false
		}
	}
	return true
}
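A hypothetical caller of this package, showing both the plain `Compile` path and the include/exclude combination (the filter strings here are invented for illustration):

```
package main

import (
	"fmt"

	"github.com/influxdata/telegraf/filter"
)

func main() {
	// A glob is only compiled when a pattern actually contains *, ? or [;
	// pure literals get the cheaper map/single-string filter.
	f, err := filter.Compile([]string{"cpu", "mem", "net*"})
	if err != nil {
		panic(err)
	}
	fmt.Println(f.Match("network")) // true, matched by "net*"

	// Include is applied first, then exclude.
	ie, err := filter.NewIncludeExcludeFilter([]string{"net*"}, []string{"netstat"})
	if err != nil {
		panic(err)
	}
	fmt.Println(ie.Match("network")) // true
	fmt.Println(ie.Match("netstat")) // false, excluded
}
```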
96 filter/filter_test.go Normal file
@@ -0,0 +1,96 @@
package filter

import (
	"testing"

	"github.com/stretchr/testify/assert"
)

func TestCompile(t *testing.T) {
	f, err := Compile([]string{})
	assert.NoError(t, err)
	assert.Nil(t, f)

	f, err = Compile([]string{"cpu"})
	assert.NoError(t, err)
	assert.True(t, f.Match("cpu"))
	assert.False(t, f.Match("cpu0"))
	assert.False(t, f.Match("mem"))

	f, err = Compile([]string{"cpu*"})
	assert.NoError(t, err)
	assert.True(t, f.Match("cpu"))
	assert.True(t, f.Match("cpu0"))
	assert.False(t, f.Match("mem"))

	f, err = Compile([]string{"cpu", "mem"})
	assert.NoError(t, err)
	assert.True(t, f.Match("cpu"))
	assert.False(t, f.Match("cpu0"))
	assert.True(t, f.Match("mem"))

	f, err = Compile([]string{"cpu", "mem", "net*"})
	assert.NoError(t, err)
	assert.True(t, f.Match("cpu"))
	assert.False(t, f.Match("cpu0"))
	assert.True(t, f.Match("mem"))
	assert.True(t, f.Match("network"))
}

var benchbool bool

func BenchmarkFilterSingleNoGlobFalse(b *testing.B) {
	f, _ := Compile([]string{"cpu"})
	var tmp bool
	for n := 0; n < b.N; n++ {
		tmp = f.Match("network")
	}
	benchbool = tmp
}

func BenchmarkFilterSingleNoGlobTrue(b *testing.B) {
	f, _ := Compile([]string{"cpu"})
	var tmp bool
	for n := 0; n < b.N; n++ {
		tmp = f.Match("cpu")
	}
	benchbool = tmp
}

func BenchmarkFilter(b *testing.B) {
	f, _ := Compile([]string{"cpu", "mem", "net*"})
	var tmp bool
	for n := 0; n < b.N; n++ {
		tmp = f.Match("network")
	}
	benchbool = tmp
}

func BenchmarkFilterNoGlob(b *testing.B) {
	f, _ := Compile([]string{"cpu", "mem", "net"})
	var tmp bool
	for n := 0; n < b.N; n++ {
		tmp = f.Match("net")
	}
	benchbool = tmp
}

func BenchmarkFilter2(b *testing.B) {
	f, _ := Compile([]string{"aa", "bb", "c", "ad", "ar", "at", "aq",
		"aw", "az", "axxx", "ab", "cpu", "mem", "net*"})
	var tmp bool
	for n := 0; n < b.N; n++ {
		tmp = f.Match("network")
	}
	benchbool = tmp
}

func BenchmarkFilter2NoGlob(b *testing.B) {
	f, _ := Compile([]string{"aa", "bb", "c", "ad", "ar", "at", "aq",
		"aw", "az", "axxx", "ab", "cpu", "mem", "net"})
	var tmp bool
	for n := 0; n < b.N; n++ {
		tmp = f.Match("net")
	}
	benchbool = tmp
}
31 input.go Normal file
@@ -0,0 +1,31 @@
package telegraf

type Input interface {
	// SampleConfig returns the default configuration of the Input
	SampleConfig() string

	// Description returns a one-sentence description of the Input
	Description() string

	// Gather takes in an accumulator and adds the metrics that the Input
	// gathers. This is called every "interval"
	Gather(Accumulator) error
}

type ServiceInput interface {
	// SampleConfig returns the default configuration of the Input
	SampleConfig() string

	// Description returns a one-sentence description of the Input
	Description() string

	// Gather takes in an accumulator and adds the metrics that the Input
	// gathers. This is called every "interval"
	Gather(Accumulator) error

	// Start starts the ServiceInput's service, whatever that may be
	Start(Accumulator) error

	// Stop stops the services and closes any necessary channels and connections
	Stop()
}
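As a sketch of what satisfying the `Input` contract looks like, here is a hypothetical minimal plugin (the plugin name and field are invented; the `AddFields` call assumes the accumulator API used elsewhere in this tree):

```
package example

import "github.com/influxdata/telegraf"

// PingCount is a hypothetical plugin used only to illustrate the contract.
type PingCount struct {
	count int64
}

func (p *PingCount) SampleConfig() string { return "# no configuration" }

func (p *PingCount) Description() string { return "Counts how many times Gather has run" }

// Gather is called once per collection interval by the agent.
func (p *PingCount) Gather(acc telegraf.Accumulator) error {
	p.count++
	acc.AddFields("ping_count", map[string]interface{}{"count": p.count}, nil)
	return nil
}
```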
76 internal/buffer/buffer.go Normal file
@@ -0,0 +1,76 @@
package buffer

import (
	"sync"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/selfstat"
)

var (
	MetricsWritten = selfstat.Register("agent", "metrics_written", map[string]string{})
	MetricsDropped = selfstat.Register("agent", "metrics_dropped", map[string]string{})
)

// Buffer is an object for storing metrics in a circular buffer.
type Buffer struct {
	buf chan telegraf.Metric

	mu sync.Mutex
}

// NewBuffer returns a Buffer.
// size is the maximum number of metrics that Buffer will cache. If Add is
// called when the buffer is full, then the oldest metric(s) will be dropped.
func NewBuffer(size int) *Buffer {
	return &Buffer{
		buf: make(chan telegraf.Metric, size),
	}
}

// IsEmpty returns true if Buffer is empty.
func (b *Buffer) IsEmpty() bool {
	return len(b.buf) == 0
}

// Len returns the current length of the buffer.
func (b *Buffer) Len() int {
	return len(b.buf)
}

// Add adds metrics to the buffer.
func (b *Buffer) Add(metrics ...telegraf.Metric) {
	for i := range metrics {
		MetricsWritten.Incr(1)
		select {
		case b.buf <- metrics[i]:
		default:
			b.mu.Lock()
			MetricsDropped.Incr(1)
			<-b.buf
			b.buf <- metrics[i]
			b.mu.Unlock()
		}
	}
}

// Batch returns a batch of metrics of size batchSize.
// The batch will be of maximum length batchSize. It can be less than batchSize
// if the length of Buffer is less than batchSize.
func (b *Buffer) Batch(batchSize int) []telegraf.Metric {
	b.mu.Lock()
	n := min(len(b.buf), batchSize)
	out := make([]telegraf.Metric, n)
	for i := 0; i < n; i++ {
		out[i] = <-b.buf
	}
	b.mu.Unlock()
	return out
}

func min(a, b int) int {
	if b < a {
		return b
	}
	return a
}
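The core design choice here is using a buffered channel as a ring buffer: a non-blocking send that falls through to "evict one, then send" gives drop-oldest semantics without an explicit index. A self-contained sketch of that pattern, reduced from the `Add` method above (hypothetical driver code, using `int` instead of `telegraf.Metric`):

```
package main

import "fmt"

// dropOldest mimics Buffer.Add: when the buffered channel is full,
// the oldest element is evicted to make room for the newest one.
func dropOldest(buf chan int, v int) {
	select {
	case buf <- v: // room available
	default:
		<-buf // evict oldest
		buf <- v
	}
}

func main() {
	buf := make(chan int, 3)
	for v := 1; v <= 5; v++ {
		dropOldest(buf, v)
	}
	for len(buf) > 0 {
		fmt.Println(<-buf) // prints 3, 4, 5 — values 1 and 2 were dropped
	}
}
```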
100 internal/buffer/buffer_test.go Normal file
@@ -0,0 +1,100 @@
package buffer

import (
	"testing"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/testutil"

	"github.com/stretchr/testify/assert"
)

var metricList = []telegraf.Metric{
	testutil.TestMetric(2, "mymetric1"),
	testutil.TestMetric(1, "mymetric2"),
	testutil.TestMetric(11, "mymetric3"),
	testutil.TestMetric(15, "mymetric4"),
	testutil.TestMetric(8, "mymetric5"),
}

func BenchmarkAddMetrics(b *testing.B) {
	buf := NewBuffer(10000)
	m := testutil.TestMetric(1, "mymetric")
	for n := 0; n < b.N; n++ {
		buf.Add(m)
	}
}

func TestNewBufferBasicFuncs(t *testing.T) {
	b := NewBuffer(10)
	MetricsDropped.Set(0)
	MetricsWritten.Set(0)

	assert.True(t, b.IsEmpty())
	assert.Zero(t, b.Len())
	assert.Zero(t, MetricsDropped.Get())
	assert.Zero(t, MetricsWritten.Get())

	m := testutil.TestMetric(1, "mymetric")
	b.Add(m)
	assert.False(t, b.IsEmpty())
	assert.Equal(t, b.Len(), 1)
	assert.Equal(t, int64(0), MetricsDropped.Get())
	assert.Equal(t, int64(1), MetricsWritten.Get())

	b.Add(metricList...)
	assert.False(t, b.IsEmpty())
	assert.Equal(t, b.Len(), 6)
	assert.Equal(t, int64(0), MetricsDropped.Get())
	assert.Equal(t, int64(6), MetricsWritten.Get())
}

func TestDroppingMetrics(t *testing.T) {
	b := NewBuffer(10)
	MetricsDropped.Set(0)
	MetricsWritten.Set(0)

	// Add up to the size of the buffer
	b.Add(metricList...)
	b.Add(metricList...)
	assert.False(t, b.IsEmpty())
	assert.Equal(t, b.Len(), 10)
	assert.Equal(t, int64(0), MetricsDropped.Get())
	assert.Equal(t, int64(10), MetricsWritten.Get())

	// Add 5 more and verify they were dropped
	b.Add(metricList...)
	assert.False(t, b.IsEmpty())
	assert.Equal(t, b.Len(), 10)
	assert.Equal(t, int64(5), MetricsDropped.Get())
	assert.Equal(t, int64(15), MetricsWritten.Get())
}

func TestGettingBatches(t *testing.T) {
	b := NewBuffer(20)
	MetricsDropped.Set(0)
	MetricsWritten.Set(0)

	// Verify that the buffer returned is smaller than requested when there are
	// not as many items as requested.
	b.Add(metricList...)
	batch := b.Batch(10)
	assert.Len(t, batch, 5)

	// Verify that the buffer is now empty
	assert.True(t, b.IsEmpty())
	assert.Zero(t, b.Len())
	assert.Zero(t, MetricsDropped.Get())
	assert.Equal(t, int64(5), MetricsWritten.Get())

	// Verify that the buffer returned is not more than the size requested
	b.Add(metricList...)
	batch = b.Batch(3)
	assert.Len(t, batch, 3)

	// Verify that buffer is not empty
	assert.False(t, b.IsEmpty())
	assert.Equal(t, b.Len(), 2)
	assert.Equal(t, int64(0), MetricsDropped.Get())
	assert.Equal(t, int64(10), MetricsWritten.Get())
}
49 internal/config/aws/credentials.go Normal file
@@ -0,0 +1,49 @@
package aws

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/client"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/credentials/stscreds"
	"github.com/aws/aws-sdk-go/aws/session"
)

type CredentialConfig struct {
	Region    string
	AccessKey string
	SecretKey string
	RoleARN   string
	Profile   string
	Filename  string
	Token     string
}

func (c *CredentialConfig) Credentials() client.ConfigProvider {
	if c.RoleARN != "" {
		return c.assumeCredentials()
	}
	return c.rootCredentials()
}

func (c *CredentialConfig) rootCredentials() client.ConfigProvider {
	config := &aws.Config{
		Region: aws.String(c.Region),
	}
	if c.AccessKey != "" || c.SecretKey != "" {
		config.Credentials = credentials.NewStaticCredentials(c.AccessKey, c.SecretKey, c.Token)
	} else if c.Profile != "" || c.Filename != "" {
		config.Credentials = credentials.NewSharedCredentials(c.Filename, c.Profile)
	}

	return session.New(config)
}

func (c *CredentialConfig) assumeCredentials() client.ConfigProvider {
	rootCredentials := c.rootCredentials()
	config := &aws.Config{
		Region: aws.String(c.Region),
	}
	config.Credentials = stscreds.NewCredentials(rootCredentials, c.RoleARN)
	return session.New(config)
}
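A hypothetical plugin would embed `CredentialConfig` and hand the resulting config provider to any aws-sdk-go service client constructor; a sketch (the service client, region, and role ARN are illustrative, not taken from this diff):

```
package example

import (
	"github.com/aws/aws-sdk-go/service/cloudwatch"

	internalaws "github.com/influxdata/telegraf/internal/config/aws"
)

func newCloudWatchClient() *cloudwatch.CloudWatch {
	cfg := internalaws.CredentialConfig{
		Region:  "us-east-1",
		RoleARN: "arn:aws:iam::123456789012:role/example", // hypothetical; triggers assumeCredentials
	}
	// Credentials() returns a client.ConfigProvider (a session here),
	// which the AWS service client constructors accept directly.
	return cloudwatch.New(cfg.Credentials())
}
```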
1472 internal/config/config.go Normal file
File diff suppressed because it is too large
176 internal/config/config_test.go Normal file
@@ -0,0 +1,176 @@
package config

import (
	"os"
	"testing"
	"time"

	"github.com/influxdata/telegraf/internal/models"
	"github.com/influxdata/telegraf/plugins/inputs"
	"github.com/influxdata/telegraf/plugins/inputs/exec"
	"github.com/influxdata/telegraf/plugins/inputs/memcached"
	"github.com/influxdata/telegraf/plugins/inputs/procstat"
	"github.com/influxdata/telegraf/plugins/parsers"

	"github.com/stretchr/testify/assert"
)

func TestConfig_LoadSingleInputWithEnvVars(t *testing.T) {
	c := NewConfig()
	err := os.Setenv("MY_TEST_SERVER", "192.168.1.1")
	assert.NoError(t, err)
	err = os.Setenv("TEST_INTERVAL", "10s")
	assert.NoError(t, err)
	c.LoadConfig("./testdata/single_plugin_env_vars.toml")

	memcached := inputs.Inputs["memcached"]().(*memcached.Memcached)
	memcached.Servers = []string{"192.168.1.1"}

	filter := models.Filter{
		NameDrop:  []string{"metricname2"},
		NamePass:  []string{"metricname1"},
		FieldDrop: []string{"other", "stuff"},
		FieldPass: []string{"some", "strings"},
		TagDrop: []models.TagFilter{
			models.TagFilter{
				Name:   "badtag",
				Filter: []string{"othertag"},
			},
		},
		TagPass: []models.TagFilter{
			models.TagFilter{
				Name:   "goodtag",
				Filter: []string{"mytag"},
			},
		},
	}
	assert.NoError(t, filter.Compile())
	mConfig := &models.InputConfig{
		Name:     "memcached",
		Filter:   filter,
		Interval: 10 * time.Second,
	}
	mConfig.Tags = make(map[string]string)

	assert.Equal(t, memcached, c.Inputs[0].Input,
		"Testdata did not produce a correct memcached struct.")
	assert.Equal(t, mConfig, c.Inputs[0].Config,
		"Testdata did not produce correct memcached metadata.")
}

func TestConfig_LoadSingleInput(t *testing.T) {
	c := NewConfig()
	c.LoadConfig("./testdata/single_plugin.toml")

	memcached := inputs.Inputs["memcached"]().(*memcached.Memcached)
	memcached.Servers = []string{"localhost"}

	filter := models.Filter{
		NameDrop:  []string{"metricname2"},
		NamePass:  []string{"metricname1"},
		FieldDrop: []string{"other", "stuff"},
		FieldPass: []string{"some", "strings"},
		TagDrop: []models.TagFilter{
			models.TagFilter{
				Name:   "badtag",
				Filter: []string{"othertag"},
			},
		},
		TagPass: []models.TagFilter{
			models.TagFilter{
				Name:   "goodtag",
				Filter: []string{"mytag"},
			},
		},
	}
	assert.NoError(t, filter.Compile())
	mConfig := &models.InputConfig{
		Name:     "memcached",
		Filter:   filter,
		Interval: 5 * time.Second,
	}
	mConfig.Tags = make(map[string]string)

	assert.Equal(t, memcached, c.Inputs[0].Input,
		"Testdata did not produce a correct memcached struct.")
	assert.Equal(t, mConfig, c.Inputs[0].Config,
		"Testdata did not produce correct memcached metadata.")
}

func TestConfig_LoadDirectory(t *testing.T) {
	c := NewConfig()
	err := c.LoadConfig("./testdata/single_plugin.toml")
	if err != nil {
		t.Error(err)
	}
	err = c.LoadDirectory("./testdata/subconfig")
	if err != nil {
		t.Error(err)
	}

	memcached := inputs.Inputs["memcached"]().(*memcached.Memcached)
	memcached.Servers = []string{"localhost"}

	filter := models.Filter{
		NameDrop:  []string{"metricname2"},
		NamePass:  []string{"metricname1"},
		FieldDrop: []string{"other", "stuff"},
		FieldPass: []string{"some", "strings"},
		TagDrop: []models.TagFilter{
			models.TagFilter{
				Name:   "badtag",
				Filter: []string{"othertag"},
			},
		},
		TagPass: []models.TagFilter{
			models.TagFilter{
				Name:   "goodtag",
				Filter: []string{"mytag"},
			},
		},
	}
	assert.NoError(t, filter.Compile())
	mConfig := &models.InputConfig{
		Name:     "memcached",
		Filter:   filter,
		Interval: 5 * time.Second,
	}
	mConfig.Tags = make(map[string]string)

	assert.Equal(t, memcached, c.Inputs[0].Input,
		"Testdata did not produce a correct memcached struct.")
	assert.Equal(t, mConfig, c.Inputs[0].Config,
		"Testdata did not produce correct memcached metadata.")

	ex := inputs.Inputs["exec"]().(*exec.Exec)
	p, err := parsers.NewJSONParser("exec", nil, nil)
	assert.NoError(t, err)
	ex.SetParser(p)
	ex.Command = "/usr/bin/myothercollector --foo=bar"
	eConfig := &models.InputConfig{
		Name:              "exec",
		MeasurementSuffix: "_myothercollector",
	}
	eConfig.Tags = make(map[string]string)
	assert.Equal(t, ex, c.Inputs[1].Input,
		"Merged Testdata did not produce a correct exec struct.")
	assert.Equal(t, eConfig, c.Inputs[1].Config,
		"Merged Testdata did not produce correct exec metadata.")

	memcached.Servers = []string{"192.168.1.1"}
	assert.Equal(t, memcached, c.Inputs[2].Input,
		"Testdata did not produce a correct memcached struct.")
	assert.Equal(t, mConfig, c.Inputs[2].Config,
		"Testdata did not produce correct memcached metadata.")

	pstat := inputs.Inputs["procstat"]().(*procstat.Procstat)
	pstat.PidFile = "/var/run/grafana-server.pid"

	pConfig := &models.InputConfig{Name: "procstat"}
	pConfig.Tags = make(map[string]string)

	assert.Equal(t, pstat, c.Inputs[3].Input,
		"Merged Testdata did not produce a correct procstat struct.")
	assert.Equal(t, pConfig, c.Inputs[3].Config,
		"Merged Testdata did not produce correct procstat metadata.")
}
11 internal/config/testdata/single_plugin.toml vendored Normal file
@@ -0,0 +1,11 @@
[[inputs.memcached]]
  servers = ["localhost"]
  namepass = ["metricname1"]
  namedrop = ["metricname2"]
  fieldpass = ["some", "strings"]
  fielddrop = ["other", "stuff"]
  interval = "5s"
  [inputs.memcached.tagpass]
    goodtag = ["mytag"]
  [inputs.memcached.tagdrop]
    badtag = ["othertag"]
11 internal/config/testdata/single_plugin_env_vars.toml vendored Normal file
@@ -0,0 +1,11 @@
[[inputs.memcached]]
  servers = ["$MY_TEST_SERVER"]
  namepass = ["metricname1"]
  namedrop = ["metricname2"]
  fieldpass = ["some", "strings"]
  fielddrop = ["other", "stuff"]
  interval = "$TEST_INTERVAL"
  [inputs.memcached.tagpass]
    goodtag = ["mytag"]
  [inputs.memcached.tagdrop]
    badtag = ["othertag"]
4 internal/config/testdata/subconfig/..4984_10_04_08_28_06.119/invalid-config.conf vendored Normal file
@@ -0,0 +1,4 @@
# This invalid config file should be skipped during testing
# as it is an ..data folder

[[outputs.influxdb
4 internal/config/testdata/subconfig/exec.conf vendored Normal file
@@ -0,0 +1,4 @@
[[inputs.exec]]
  # the command to run
  command = "/usr/bin/myothercollector --foo=bar"
  name_suffix = "_myothercollector"
11 internal/config/testdata/subconfig/memcached.conf vendored Normal file
@@ -0,0 +1,11 @@
[[inputs.memcached]]
  servers = ["192.168.1.1"]
  namepass = ["metricname1"]
  namedrop = ["metricname2"]
  pass = ["some", "strings"]
  drop = ["other", "stuff"]
  interval = "5s"
  [inputs.memcached.tagpass]
    goodtag = ["mytag"]
  [inputs.memcached.tagdrop]
    badtag = ["othertag"]
2 internal/config/testdata/subconfig/procstat.conf vendored Normal file
@@ -0,0 +1,2 @@
[[inputs.procstat]]
  pid_file = "/var/run/grafana-server.pid"
322 internal/config/testdata/telegraf-agent.toml vendored Normal file
@@ -0,0 +1,322 @@
# Telegraf configuration

# Telegraf is entirely plugin driven. All metrics are gathered from the
# declared inputs.

# Even if a plugin has no configuration, it must be declared in here
# to be active. Declaring a plugin means just specifying the name
# as a section with no variables. To deactivate a plugin, comment
# out the name and any variables.

# Use 'telegraf -config telegraf.toml -test' to see what metrics a config
# file would generate.

# One rule that plugins conform to is wherever a connection string
# can be passed, the values '' and 'localhost' are treated specially.
# They indicate to the plugin to use their own builtin configuration to
# connect to the local system.

# NOTE: The configuration has a few required parameters. They are marked
# with 'required'. Be sure to edit those to make this configuration work.

# Tags can also be specified via a normal map, but only one form at a time:
[global_tags]
  dc = "us-east-1"

# Configuration for telegraf agent
[agent]
  # Default data collection interval for all plugins
  interval = "10s"

  # run telegraf in debug mode
  debug = false

  # Override default hostname, if empty use os.Hostname()
  hostname = ""


###############################################################################
#                                  OUTPUTS                                    #
###############################################################################

# Configuration for influxdb server to send metrics to
[[outputs.influxdb]]
  # The full HTTP endpoint URL for your InfluxDB instance
  # Multiple urls can be specified for InfluxDB cluster support. Server to
  # write to will be randomly chosen each interval.
  urls = ["http://localhost:8086"] # required.

  # The target database for metrics. This database must already exist
  database = "telegraf" # required.

[[outputs.influxdb]]
  urls = ["udp://localhost:8089"]
  database = "udp-telegraf"

# Configuration for the Kafka server to send metrics to
[[outputs.kafka]]
  # URLs of kafka brokers
  brokers = ["localhost:9092"]
  # Kafka topic for producer messages
  topic = "telegraf"
  # Telegraf tag to use as a routing key
  # ie, if this tag exists, its value will be used as the routing key
  routing_tag = "host"


###############################################################################
#                                  PLUGINS                                    #
###############################################################################

# Read Apache status information (mod_status)
[[inputs.apache]]
  # An array of Apache status URI to gather stats.
  urls = ["http://localhost/server-status?auto"]

# Read metrics about cpu usage
[[inputs.cpu]]
  # Whether to report per-cpu stats or not
  percpu = true
  # Whether to report total system cpu stats or not
  totalcpu = true
  # Comment this line if you want the raw CPU time metrics
  drop = ["cpu_time"]

# Read metrics about disk usage by mount point
[[inputs.diskio]]
  # no configuration

# Read metrics from one or many disque servers
[[inputs.disque]]
  # An array of URI to gather stats about. Specify an ip or hostname
  # with optional port and password. ie disque://localhost, disque://10.10.3.33:18832,
  # 10.0.0.1:10000, etc.
  #
  # If no servers are specified, then localhost is used as the host.
  servers = ["localhost"]

# Read stats from one or more Elasticsearch servers or clusters
[[inputs.elasticsearch]]
  # specify a list of one or more Elasticsearch servers
  servers = ["http://localhost:9200"]

  # set local to false when you want to read the indices stats from all nodes
  # within the cluster
  local = true

# Read flattened metrics from one or more commands that output JSON to stdout
[[inputs.exec]]
  # the command to run
  command = "/usr/bin/mycollector --foo=bar"
  name_suffix = "_mycollector"

# Read metrics of haproxy, via socket or csv stats page
[[inputs.haproxy]]
  # An array of address to gather stats about. Specify an ip or hostname
  # with optional port. ie localhost, 10.10.3.33:1936, etc.
  #
  # If no servers are specified, then default to 127.0.0.1:1936
  servers = ["http://myhaproxy.com:1936", "http://anotherhaproxy.com:1936"]
  # Or you can also use a local socket (not working yet)
  # servers = ["socket:/run/haproxy/admin.sock"]

# Read flattened metrics from one or more JSON HTTP endpoints
[[inputs.httpjson]]
  # a name for the service being polled
  name = "webserver_stats"

  # URL of each server in the service's cluster
  servers = [
    "http://localhost:9999/stats/",
    "http://localhost:9998/stats/",
  ]

  # HTTP method to use (case-sensitive)
  method = "GET"

  # HTTP parameters (all values must be strings)
  [httpjson.parameters]
    event_type = "cpu_spike"
    threshold = "0.75"

# Read metrics about disk IO by device
[[inputs.diskio]]
  # no configuration

# read metrics from a Kafka 0.9+ topic
[[inputs.kafka_consumer]]
  ## kafka brokers
  brokers = ["localhost:9092"]
  ## topic(s) to consume
  topics = ["telegraf"]
  ## the name of the consumer group
  consumer_group = "telegraf_metrics_consumers"
  ## Offset (must be either "oldest" or "newest")
  offset = "oldest"

# read metrics from a Kafka legacy topic
[[inputs.kafka_consumer_legacy]]
  ## topic(s) to consume
  topics = ["telegraf"]
  # an array of Zookeeper connection strings
  zookeeper_peers = ["localhost:2181"]
  ## the name of the consumer group
  consumer_group = "telegraf_metrics_consumers"
  # Maximum number of points to buffer between collection intervals
  point_buffer = 100000
  ## Offset (must be either "oldest" or "newest")
  offset = "oldest"


# Read metrics from a LeoFS Server via SNMP
[[inputs.leofs]]
  # An array of URI to gather stats about LeoFS.
  # Specify an ip or hostname with port. ie 127.0.0.1:4020
  #
  # If no servers are specified, then 127.0.0.1 is used as the host and 4020 as the port.
  servers = ["127.0.0.1:4021"]

# Read metrics from local Lustre service on OST, MDS
[[inputs.lustre2]]
  # An array of /proc globs to search for Lustre stats
  # If not specified, the default will work on Lustre 2.5.x
  #
  # ost_procfiles = ["/proc/fs/lustre/obdfilter/*/stats", "/proc/fs/lustre/osd-ldiskfs/*/stats"]
  # mds_procfiles = ["/proc/fs/lustre/mdt/*/md_stats"]

# Read metrics about memory usage
[[inputs.mem]]
  # no configuration

# Read metrics from one or many memcached servers
[[inputs.memcached]]
  # An array of address to gather stats about. Specify an ip or hostname
  # with optional port. ie localhost, 10.0.0.1:11211, etc.
  #
  # If no servers are specified, then localhost is used as the host.
  servers = ["localhost"]

# Telegraf plugin for gathering metrics from N Mesos masters
[[inputs.mesos]]
  # Timeout, in ms.
  timeout = 100
  # A list of Mesos masters, default value is localhost:5050.
  masters = ["localhost:5050"]
  # Metrics groups to be collected, by default, all enabled.
  master_collections = ["resources","master","system","slaves","frameworks","messages","evqueue","registrar"]

# Read metrics from one or many MongoDB servers
[[inputs.mongodb]]
  # An array of URI to gather stats about. Specify an ip or hostname
  # with optional port and password. ie mongodb://user:auth_key@10.10.3.30:27017,
  # mongodb://10.10.3.33:18832, 10.0.0.1:10000, etc.
  #
  # If no servers are specified, then 127.0.0.1 is used as the host and 27017 as the port.
  servers = ["127.0.0.1:27017"]

# Read metrics from one or many mysql servers
[[inputs.mysql]]
  # specify servers via a url matching:
  #  [username[:password]@][protocol[(address)]]/[?tls=[true|false|skip-verify]]
  # e.g.
  #  servers = ["root:root@http://10.0.0.18/?tls=false"]
  #  servers = ["root:passwd@tcp(127.0.0.1:3306)/"]
  #
  # If no servers are specified, then localhost is used as the host.
  servers = ["localhost"]

# Read metrics about network interface usage
[[inputs.net]]
  # By default, telegraf gathers stats from any up interface (excluding loopback)
  # Setting interfaces will tell it to gather these explicit interfaces,
  # regardless of status.
  #
  # interfaces = ["eth0", ... ]

# Read Nginx's basic status information (ngx_http_stub_status_module)
[[inputs.nginx]]
  # An array of Nginx stub_status URI to gather stats.
  urls = ["http://localhost/status"]

# Ping given url(s) and return statistics
[[inputs.ping]]
  # urls to ping
  urls = ["www.google.com"] # required
  # number of pings to send (ping -c <COUNT>)
  count = 1 # required
  # interval, in s, at which to ping. 0 == default (ping -i <PING_INTERVAL>)
  ping_interval = 0.0
  # ping timeout, in s. 0 == no timeout (ping -t <TIMEOUT>)
  timeout = 0.0
  # interface to send ping from (ping -I <INTERFACE>)
  interface = ""

# Read metrics from one or many postgresql servers
[[inputs.postgresql]]
  # specify address via a url matching:
  #   postgres://[pqgotest[:password]]@localhost[/dbname]?sslmode=[disable|verify-ca|verify-full]
  # or a simple string:
  #   host=localhost user=pqotest password=... sslmode=... dbname=app_production
  #
  # All connection parameters are optional. By default, the host is localhost
  # and the user is the currently running user. For localhost, we default
  # to sslmode=disable as well.
  #
  # Without the dbname parameter, the driver will default to a database
  # with the same name as the user. This dbname is just for instantiating a
  # connection with the server and doesn't restrict the databases we are trying
  # to grab metrics for.
  #
  address = "sslmode=disable"

  # A list of databases to pull metrics about. If not specified, metrics for all
  # databases are gathered.
  # databases = ["app_production", "blah_testing"]

  # [[postgresql.servers]]
  # address = "influx@remoteserver"

# Read metrics from one or many prometheus clients
[[inputs.prometheus]]
  # An array of urls to scrape metrics from.
  urls = ["http://localhost:9100/metrics"]

# Read metrics from one or many RabbitMQ servers via the management API
[[inputs.rabbitmq]]
  # Specify servers via an array of tables
  # name = "rmq-server-1" # optional tag
  # url = "http://localhost:15672"
  # username = "guest"
  # password = "guest"

  # A list of nodes to pull metrics about. If not specified, metrics for
  # all nodes are gathered.
  # nodes = ["rabbit@node1", "rabbit@node2"]

# Read metrics from one or many redis servers
[[inputs.redis]]
  # An array of URI to gather stats about. Specify an ip or hostname
  # with optional port and password. ie redis://localhost, redis://10.10.3.33:18832,
  # 10.0.0.1:10000, etc.
  #
  # If no servers are specified, then localhost is used as the host.
  servers = ["localhost"]

# Read metrics from one or many RethinkDB servers
[[inputs.rethinkdb]]
  # An array of URI to gather stats about. Specify an ip or hostname
  # with optional port and password. ie rethinkdb://user:auth_key@10.10.3.30:28105,
  # rethinkdb://10.10.3.33:18832, 10.0.0.1:10000, etc.
  #
  # If no servers are specified, then 127.0.0.1 is used as the host and 28015 as the port.
  servers = ["127.0.0.1:28015"]

# Read metrics about swap memory usage
[[inputs.swap]]
  # no configuration

# Read metrics about system load & uptime
[[inputs.system]]
  # no configuration
116 internal/globpath/globpath.go Normal file
@@ -0,0 +1,116 @@
package globpath

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"

	"github.com/gobwas/glob"
)

var sepStr = fmt.Sprintf("%v", string(os.PathSeparator))

type GlobPath struct {
	path         string
	hasMeta      bool
	hasSuperMeta bool
	g            glob.Glob
	root         string
}

func Compile(path string) (*GlobPath, error) {
	out := GlobPath{
		hasMeta:      hasMeta(path),
		hasSuperMeta: hasSuperMeta(path),
		path:         path,
	}

	// if there are no glob meta characters in the path, don't bother compiling
	// a glob object or finding the root directory. (see short-circuit in Match)
	if !out.hasMeta || !out.hasSuperMeta {
		return &out, nil
	}

	var err error
	if out.g, err = glob.Compile(path, os.PathSeparator); err != nil {
		return nil, err
	}
	// Get the root directory for this filepath
	out.root = findRootDir(path)
	return &out, nil
}

func (g *GlobPath) Match() map[string]os.FileInfo {
	if !g.hasMeta {
		out := make(map[string]os.FileInfo)
		info, err := os.Stat(g.path)
		if err == nil {
			out[g.path] = info
		}
		return out
	}
	if !g.hasSuperMeta {
		out := make(map[string]os.FileInfo)
		files, _ := filepath.Glob(g.path)
		for _, file := range files {
			info, err := os.Stat(file)
			if err == nil {
				out[file] = info
			}
		}
		return out
	}
	return walkFilePath(g.root, g.g)
}

// walk the filepath from the given root and return a list of files that match
// the given glob.
func walkFilePath(root string, g glob.Glob) map[string]os.FileInfo {
	matchedFiles := make(map[string]os.FileInfo)
	walkfn := func(path string, info os.FileInfo, _ error) error {
		if g.Match(path) {
			matchedFiles[path] = info
		}
		return nil
	}
	filepath.Walk(root, walkfn)
	return matchedFiles
}

// find the root dir of the given path (could include globs).
// ie:
//   /var/log/telegraf.conf -> /var/log
//   /home/**               -> /home
//   /home/*/**             -> /home
//   /lib/share/*/*/**.txt  -> /lib/share
func findRootDir(path string) string {
	pathItems := strings.Split(path, sepStr)
	out := sepStr
	for i, item := range pathItems {
		if i == len(pathItems)-1 {
			break
		}
		if item == "" {
			continue
		}
		if hasMeta(item) {
			break
		}
		out += item + sepStr
	}
	if out != "/" {
		out = strings.TrimSuffix(out, "/")
	}
	return out
}

// hasMeta reports whether path contains any magic glob characters.
func hasMeta(path string) bool {
	return strings.IndexAny(path, "*?[") >= 0
}

// hasSuperMeta reports whether path contains any super magic glob characters (**).
func hasSuperMeta(path string) bool {
	return strings.Index(path, "**") >= 0
}
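Usage is two steps: compile the pattern once, then call `Match` per collection interval. A sketch under invented paths (any `/var/log` pattern here is illustrative):

```
package main

import (
	"fmt"

	"github.com/influxdata/telegraf/internal/globpath"
)

func main() {
	// "**" is the "super meta" case, so Match walks the tree
	// starting from the computed root directory (/var/log here).
	g, err := globpath.Compile("/var/log/**.log")
	if err != nil {
		panic(err)
	}
	for path, info := range g.Match() {
		fmt.Println(path, info.Size())
	}
}
```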
90 internal/globpath/globpath_test.go Normal file
@@ -0,0 +1,90 @@
package globpath

import (
	"os"
	"runtime"
	"strings"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestCompileAndMatch(t *testing.T) {
	dir := getTestdataDir()
	// test super asterisk
	g1, err := Compile(dir + "/**")
	require.NoError(t, err)
	// test single asterisk
	g2, err := Compile(dir + "/*.log")
	require.NoError(t, err)
	// test no meta characters (file exists)
	g3, err := Compile(dir + "/log1.log")
	require.NoError(t, err)
	// test file that doesn't exist
	g4, err := Compile(dir + "/i_dont_exist.log")
	require.NoError(t, err)
	// test super asterisk that doesn't exist
	g5, err := Compile(dir + "/dir_doesnt_exist/**")
	require.NoError(t, err)

	matches := g1.Match()
	assert.Len(t, matches, 6)
	matches = g2.Match()
	assert.Len(t, matches, 2)
	matches = g3.Match()
	assert.Len(t, matches, 1)
	matches = g4.Match()
	assert.Len(t, matches, 0)
	matches = g5.Match()
	assert.Len(t, matches, 0)
}

func TestFindRootDir(t *testing.T) {
	tests := []struct {
		input  string
		output string
	}{
		{"/var/log/telegraf.conf", "/var/log"},
		{"/home/**", "/home"},
		{"/home/*/**", "/home"},
		{"/lib/share/*/*/**.txt", "/lib/share"},
	}

	for _, test := range tests {
		actual := findRootDir(test.input)
		assert.Equal(t, test.output, actual)
	}
}

func TestFindNestedTextFile(t *testing.T) {
	dir := getTestdataDir()
	// test super asterisk
	g1, err := Compile(dir + "/**.txt")
	require.NoError(t, err)

	matches := g1.Match()
	assert.Len(t, matches, 1)
}

func getTestdataDir() string {
	_, filename, _, _ := runtime.Caller(1)
	return strings.Replace(filename, "globpath_test.go", "testdata", 1)
}

func TestMatch_ErrPermission(t *testing.T) {
	tests := []struct {
		input    string
		expected map[string]os.FileInfo
	}{
		{"/root/foo", map[string]os.FileInfo{}},
		{"/root/f*", map[string]os.FileInfo{}},
	}

	for _, test := range tests {
		glob, err := Compile(test.input)
		require.NoError(t, err)
		actual := glob.Match()
		require.Equal(t, test.expected, actual)
	}
}
0 internal/globpath/testdata/log1.log vendored Normal file
0 internal/globpath/testdata/log2.log vendored Normal file
0 internal/globpath/testdata/nested1/nested2/nested.txt vendored Normal file
5 internal/globpath/testdata/test.conf vendored Normal file
@@ -0,0 +1,5 @@
# this is a fake testing config file
# for testing the filestat plugin

option1 = "foo"
option2 = "bar"
195 internal/internal.go Normal file
@@ -0,0 +1,195 @@
package internal

import (
	"bufio"
	"bytes"
	"crypto/rand"
	"errors"
	"log"
	"math/big"
	"os"
	"os/exec"
	"strconv"
	"strings"
	"time"
	"unicode"
)

const alphanum string = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

var (
	TimeoutErr = errors.New("Command timed out.")

	NotImplementedError = errors.New("not implemented yet")
)

// Duration just wraps time.Duration
type Duration struct {
	Duration time.Duration
}

// UnmarshalTOML parses the duration from the TOML config file
func (d *Duration) UnmarshalTOML(b []byte) error {
	var err error
	b = bytes.Trim(b, `'`)

	// see if we can directly convert it
	d.Duration, err = time.ParseDuration(string(b))
	if err == nil {
		return nil
	}

	// Parse string duration, ie, "1s"
	if uq, err := strconv.Unquote(string(b)); err == nil && len(uq) > 0 {
		d.Duration, err = time.ParseDuration(uq)
		if err == nil {
			return nil
		}
	}

	// First try parsing as integer seconds
	sI, err := strconv.ParseInt(string(b), 10, 64)
	if err == nil {
		d.Duration = time.Second * time.Duration(sI)
		return nil
	}
	// Second try parsing as float seconds
	sF, err := strconv.ParseFloat(string(b), 64)
	if err == nil {
		d.Duration = time.Second * time.Duration(sF)
		return nil
	}

	return nil
}

// ReadLines reads contents from a file and splits them by new lines.
// A convenience wrapper to ReadLinesOffsetN(filename, 0, -1).
func ReadLines(filename string) ([]string, error) {
	return ReadLinesOffsetN(filename, 0, -1)
}

// ReadLinesOffsetN reads contents from file and splits them by new line.
// The offset tells at which line number to start.
// The count determines the number of lines to read (starting from offset):
//   n >= 0: at most n lines
//   n < 0: whole file
func ReadLinesOffsetN(filename string, offset uint, n int) ([]string, error) {
	f, err := os.Open(filename)
	if err != nil {
		return []string{""}, err
	}
	defer f.Close()

	var ret []string

	r := bufio.NewReader(f)
	for i := 0; i < n+int(offset) || n < 0; i++ {
		line, err := r.ReadString('\n')
		if err != nil {
			break
		}
		if i < int(offset) {
			continue
		}
		ret = append(ret, strings.Trim(line, "\n"))
	}

	return ret, nil
}

// RandomString returns a random string of alpha-numeric characters
func RandomString(n int) string {
	var bytes = make([]byte, n)
	rand.Read(bytes)
	for i, b := range bytes {
		bytes[i] = alphanum[b%byte(len(alphanum))]
	}
	return string(bytes)
}

// SnakeCase converts the given string to snake case following the Golang format:
// acronyms are converted to lower-case and preceded by an underscore.
func SnakeCase(in string) string {
	runes := []rune(in)
	length := len(runes)

	var out []rune
	for i := 0; i < length; i++ {
		if i > 0 && unicode.IsUpper(runes[i]) && ((i+1 < length && unicode.IsLower(runes[i+1])) || unicode.IsLower(runes[i-1])) {
			out = append(out, '_')
		}
		out = append(out, unicode.ToLower(runes[i]))
	}

	return string(out)
}

// CombinedOutputTimeout runs the given command with the given timeout and
// returns the combined output of stdout and stderr.
|
||||
// If the command times out, it attempts to kill the process.
|
||||
func CombinedOutputTimeout(c *exec.Cmd, timeout time.Duration) ([]byte, error) {
|
||||
var b bytes.Buffer
|
||||
c.Stdout = &b
|
||||
c.Stderr = &b
|
||||
if err := c.Start(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
err := WaitTimeout(c, timeout)
|
||||
return b.Bytes(), err
|
||||
}
|
||||
|
||||
// RunTimeout runs the given command with the given timeout.
|
||||
// If the command times out, it attempts to kill the process.
|
||||
func RunTimeout(c *exec.Cmd, timeout time.Duration) error {
|
||||
if err := c.Start(); err != nil {
|
||||
return err
|
||||
}
|
||||
return WaitTimeout(c, timeout)
|
||||
}
|
||||
|
||||
// WaitTimeout waits for the given command to finish with a timeout.
|
||||
// It assumes the command has already been started.
|
||||
// If the command times out, it attempts to kill the process.
|
||||
func WaitTimeout(c *exec.Cmd, timeout time.Duration) error {
|
||||
timer := time.NewTimer(timeout)
|
||||
done := make(chan error)
|
||||
go func() { done <- c.Wait() }()
|
||||
select {
|
||||
case err := <-done:
|
||||
timer.Stop()
|
||||
return err
|
||||
case <-timer.C:
|
||||
if err := c.Process.Kill(); err != nil {
|
||||
log.Printf("E! FATAL error killing process: %s", err)
|
||||
return err
|
||||
}
|
||||
// wait for the command to return after killing it
|
||||
<-done
|
||||
return TimeoutErr
|
||||
}
|
||||
}
|
||||
|
||||
// RandomSleep will sleep for a random amount of time up to max.
|
||||
// If the shutdown channel is closed, it will return before it has finished
|
||||
// sleeping.
|
||||
func RandomSleep(max time.Duration, shutdown chan struct{}) {
|
||||
if max == 0 {
|
||||
return
|
||||
}
|
||||
maxSleep := big.NewInt(max.Nanoseconds())
|
||||
|
||||
var sleepns int64
|
||||
if j, err := rand.Int(rand.Reader, maxSleep); err == nil {
|
||||
sleepns = j.Int64()
|
||||
}
|
||||
|
||||
t := time.NewTimer(time.Nanosecond * time.Duration(sleepns))
|
||||
select {
|
||||
case <-t.C:
|
||||
return
|
||||
case <-shutdown:
|
||||
t.Stop()
|
||||
return
|
||||
}
|
||||
}
|
||||
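The Duration wrapper above deliberately swallows parse errors and accepts several spellings of the same setting. A small sketch that exercises each fallback in order, assuming only the internal package as defined above; note that the float path truncates toward zero, so 1.5 comes out as 1s, exactly as TestDuration below asserts:

package main

import (
	"fmt"

	"github.com/influxdata/telegraf/internal"
)

func main() {
	// Quoted string, single-quoted string, integer seconds, float seconds.
	for _, raw := range []string{`"1.5s"`, `'1.5s'`, `10`, `1.5`} {
		var d internal.Duration
		d.UnmarshalTOML([]byte(raw)) // always returns nil by design
		fmt.Printf("%-8s -> %v\n", raw, d.Duration)
	}
}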
164  internal/internal_test.go  Normal file
@@ -0,0 +1,164 @@
package internal

import (
	"os/exec"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
)

type SnakeTest struct {
	input  string
	output string
}

var tests = []SnakeTest{
	{"a", "a"},
	{"snake", "snake"},
	{"A", "a"},
	{"ID", "id"},
	{"MOTD", "motd"},
	{"Snake", "snake"},
	{"SnakeTest", "snake_test"},
	{"APIResponse", "api_response"},
	{"SnakeID", "snake_id"},
	{"SnakeIDGoogle", "snake_id_google"},
	{"LinuxMOTD", "linux_motd"},
	{"OMGWTFBBQ", "omgwtfbbq"},
	{"omg_wtf_bbq", "omg_wtf_bbq"},
}

func TestSnakeCase(t *testing.T) {
	for _, test := range tests {
		if SnakeCase(test.input) != test.output {
			t.Errorf(`SnakeCase("%s"), wanted "%s", got "%s"`, test.input, test.output, SnakeCase(test.input))
		}
	}
}

var (
	sleepbin, _ = exec.LookPath("sleep")
	echobin, _  = exec.LookPath("echo")
	shell, _    = exec.LookPath("sh")
)

func TestRunTimeout(t *testing.T) {
	if testing.Short() {
		t.Skip("Skipping test due to random failures.")
	}
	if sleepbin == "" {
		t.Skip("'sleep' binary not available on OS, skipping.")
	}
	cmd := exec.Command(sleepbin, "10")
	start := time.Now()
	err := RunTimeout(cmd, time.Millisecond*20)
	elapsed := time.Since(start)

	assert.Equal(t, TimeoutErr, err)
	// Verify that command gets killed in 20ms, with some breathing room
	assert.True(t, elapsed < time.Millisecond*75)
}

func TestCombinedOutputTimeout(t *testing.T) {
	// TODO: Fix this test
	t.Skip("Test failing too often, skip for now and revisit later.")
	if sleepbin == "" {
		t.Skip("'sleep' binary not available on OS, skipping.")
	}
	cmd := exec.Command(sleepbin, "10")
	start := time.Now()
	_, err := CombinedOutputTimeout(cmd, time.Millisecond*20)
	elapsed := time.Since(start)

	assert.Equal(t, TimeoutErr, err)
	// Verify that command gets killed in 20ms, with some breathing room
	assert.True(t, elapsed < time.Millisecond*75)
}

func TestCombinedOutput(t *testing.T) {
	if echobin == "" {
		t.Skip("'echo' binary not available on OS, skipping.")
	}
	cmd := exec.Command(echobin, "foo")
	out, err := CombinedOutputTimeout(cmd, time.Second)

	assert.NoError(t, err)
	assert.Equal(t, "foo\n", string(out))
}

// test that CombinedOutputTimeout and exec.Cmd.CombinedOutput return
// the same output from a failed command.
func TestCombinedOutputError(t *testing.T) {
	if shell == "" {
		t.Skip("'sh' binary not available on OS, skipping.")
	}
	cmd := exec.Command(shell, "-c", "false")
	expected, err := cmd.CombinedOutput()

	cmd2 := exec.Command(shell, "-c", "false")
	actual, err := CombinedOutputTimeout(cmd2, time.Second)

	assert.Error(t, err)
	assert.Equal(t, expected, actual)
}

func TestRunError(t *testing.T) {
	if shell == "" {
		t.Skip("'sh' binary not available on OS, skipping.")
	}
	cmd := exec.Command(shell, "-c", "false")
	err := RunTimeout(cmd, time.Second)

	assert.Error(t, err)
}

func TestRandomSleep(t *testing.T) {
	// TODO: Fix this test
	t.Skip("Test failing too often, skip for now and revisit later.")
	// test that zero max returns immediately
	s := time.Now()
	RandomSleep(time.Duration(0), make(chan struct{}))
	elapsed := time.Since(s)
	assert.True(t, elapsed < time.Millisecond)

	// test that max sleep is respected
	s = time.Now()
	RandomSleep(time.Millisecond*50, make(chan struct{}))
	elapsed = time.Since(s)
	assert.True(t, elapsed < time.Millisecond*100)

	// test that shutdown is respected
	s = time.Now()
	shutdown := make(chan struct{})
	go func() {
		time.Sleep(time.Millisecond * 100)
		close(shutdown)
	}()
	RandomSleep(time.Second, shutdown)
	elapsed = time.Since(s)
	assert.True(t, elapsed < time.Millisecond*150)
}

func TestDuration(t *testing.T) {
	var d Duration

	d.UnmarshalTOML([]byte(`"1s"`))
	assert.Equal(t, time.Second, d.Duration)

	d = Duration{}
	d.UnmarshalTOML([]byte(`1s`))
	assert.Equal(t, time.Second, d.Duration)

	d = Duration{}
	d.UnmarshalTOML([]byte(`'1s'`))
	assert.Equal(t, time.Second, d.Duration)

	d = Duration{}
	d.UnmarshalTOML([]byte(`10`))
	assert.Equal(t, 10*time.Second, d.Duration)

	d = Duration{}
	d.UnmarshalTOML([]byte(`1.5`))
	assert.Equal(t, time.Second, d.Duration)
}
59  internal/limiter/limiter.go  Normal file
@@ -0,0 +1,59 @@
package limiter

import (
	"sync"
	"time"
)

// NewRateLimiter returns a rate limiter that will emit from the C
// channel only 'n' times every 'rate' seconds.
func NewRateLimiter(n int, rate time.Duration) *rateLimiter {
	r := &rateLimiter{
		C:        make(chan bool),
		rate:     rate,
		n:        n,
		shutdown: make(chan bool),
	}
	r.wg.Add(1)
	go r.limiter()
	return r
}

type rateLimiter struct {
	C    chan bool
	rate time.Duration
	n    int

	shutdown chan bool
	wg       sync.WaitGroup
}

func (r *rateLimiter) Stop() {
	close(r.shutdown)
	r.wg.Wait()
	close(r.C)
}

func (r *rateLimiter) limiter() {
	defer r.wg.Done()
	ticker := time.NewTicker(r.rate)
	defer ticker.Stop()
	counter := 0
	for {
		select {
		case <-r.shutdown:
			return
		case <-ticker.C:
			counter = 0
		default:
			if counter < r.n {
				select {
				case r.C <- true:
					counter++
				case <-r.shutdown:
					return
				}
			}
		}
	}
}
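Callers gate work by receiving from C; the limiter goroutine hands out at most n tokens per tick and resets its counter when the ticker fires. A usage sketch, under the assumption that the package is imported from within the module as github.com/influxdata/telegraf/internal/limiter:

package main

import (
	"fmt"
	"time"

	"github.com/influxdata/telegraf/internal/limiter"
)

func main() {
	// At most 3 receives succeed per 100ms window; the 4th blocks until
	// the next tick resets the counter.
	lim := limiter.NewRateLimiter(3, 100*time.Millisecond)
	defer lim.Stop()

	for i := 0; i < 10; i++ {
		<-lim.C
		fmt.Println("work item", i, "at", time.Now().Format("15:04:05.000"))
	}
}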
263  internal/models/filter.go  Normal file
@@ -0,0 +1,263 @@
package models

import (
	"fmt"

	"github.com/influxdata/telegraf/filter"
)

// TagFilter is the name of a tag, and the values on which to filter
type TagFilter struct {
	Name   string
	Filter []string
	filter filter.Filter
}

// Filter containing drop/pass and tagdrop/tagpass rules
type Filter struct {
	NameDrop []string
	nameDrop filter.Filter
	NamePass []string
	namePass filter.Filter

	FieldDrop []string
	fieldDrop filter.Filter
	FieldPass []string
	fieldPass filter.Filter

	TagDrop []TagFilter
	TagPass []TagFilter

	TagExclude []string
	tagExclude filter.Filter
	TagInclude []string
	tagInclude filter.Filter

	isActive bool
}

// Compile all Filter lists into filter.Filter objects.
func (f *Filter) Compile() error {
	if len(f.NameDrop) == 0 &&
		len(f.NamePass) == 0 &&
		len(f.FieldDrop) == 0 &&
		len(f.FieldPass) == 0 &&
		len(f.TagInclude) == 0 &&
		len(f.TagExclude) == 0 &&
		len(f.TagPass) == 0 &&
		len(f.TagDrop) == 0 {
		return nil
	}

	f.isActive = true
	var err error
	f.nameDrop, err = filter.Compile(f.NameDrop)
	if err != nil {
		return fmt.Errorf("Error compiling 'namedrop', %s", err)
	}
	f.namePass, err = filter.Compile(f.NamePass)
	if err != nil {
		return fmt.Errorf("Error compiling 'namepass', %s", err)
	}

	f.fieldDrop, err = filter.Compile(f.FieldDrop)
	if err != nil {
		return fmt.Errorf("Error compiling 'fielddrop', %s", err)
	}
	f.fieldPass, err = filter.Compile(f.FieldPass)
	if err != nil {
		return fmt.Errorf("Error compiling 'fieldpass', %s", err)
	}

	f.tagExclude, err = filter.Compile(f.TagExclude)
	if err != nil {
		return fmt.Errorf("Error compiling 'tagexclude', %s", err)
	}
	f.tagInclude, err = filter.Compile(f.TagInclude)
	if err != nil {
		return fmt.Errorf("Error compiling 'taginclude', %s", err)
	}

	for i := range f.TagDrop {
		f.TagDrop[i].filter, err = filter.Compile(f.TagDrop[i].Filter)
		if err != nil {
			return fmt.Errorf("Error compiling 'tagdrop', %s", err)
		}
	}
	for i := range f.TagPass {
		f.TagPass[i].filter, err = filter.Compile(f.TagPass[i].Filter)
		if err != nil {
			return fmt.Errorf("Error compiling 'tagpass', %s", err)
		}
	}
	return nil
}

// Apply applies the filter to the given measurement name, fields map, and
// tags map. It will return false if the metric should be "filtered out", and
// true if the metric should "pass".
// It will modify tags & fields in-place if they need to be deleted.
func (f *Filter) Apply(
	measurement string,
	fields map[string]interface{},
	tags map[string]string,
) bool {
	if !f.isActive {
		return true
	}

	// check if the measurement name should pass
	if !f.shouldNamePass(measurement) {
		return false
	}

	// check if the tags should pass
	if !f.shouldTagsPass(tags) {
		return false
	}

	// filter fields
	for fieldkey := range fields {
		if !f.shouldFieldPass(fieldkey) {
			delete(fields, fieldkey)
		}
	}
	if len(fields) == 0 {
		return false
	}

	// filter tags
	f.filterTags(tags)

	return true
}

// IsActive returns whether the filter has any compiled rules.
func (f *Filter) IsActive() bool {
	return f.isActive
}

// shouldNamePass returns true if the metric should pass, false if it should
// drop, based on the drop/pass filter parameters.
func (f *Filter) shouldNamePass(key string) bool {
	pass := func(f *Filter) bool {
		if f.namePass.Match(key) {
			return true
		}
		return false
	}

	drop := func(f *Filter) bool {
		if f.nameDrop.Match(key) {
			return false
		}
		return true
	}

	if f.namePass != nil && f.nameDrop != nil {
		return pass(f) && drop(f)
	} else if f.namePass != nil {
		return pass(f)
	} else if f.nameDrop != nil {
		return drop(f)
	}

	return true
}

// shouldFieldPass returns true if the metric should pass, false if it should
// drop, based on the drop/pass filter parameters.
func (f *Filter) shouldFieldPass(key string) bool {
	pass := func(f *Filter) bool {
		if f.fieldPass.Match(key) {
			return true
		}
		return false
	}

	drop := func(f *Filter) bool {
		if f.fieldDrop.Match(key) {
			return false
		}
		return true
	}

	if f.fieldPass != nil && f.fieldDrop != nil {
		return pass(f) && drop(f)
	} else if f.fieldPass != nil {
		return pass(f)
	} else if f.fieldDrop != nil {
		return drop(f)
	}

	return true
}

// shouldTagsPass returns true if the metric should pass, false if it should
// drop, based on the tagdrop/tagpass filter parameters.
func (f *Filter) shouldTagsPass(tags map[string]string) bool {
	pass := func(f *Filter) bool {
		for _, pat := range f.TagPass {
			if pat.filter == nil {
				continue
			}
			if tagval, ok := tags[pat.Name]; ok {
				if pat.filter.Match(tagval) {
					return true
				}
			}
		}
		return false
	}

	drop := func(f *Filter) bool {
		for _, pat := range f.TagDrop {
			if pat.filter == nil {
				continue
			}
			if tagval, ok := tags[pat.Name]; ok {
				if pat.filter.Match(tagval) {
					return false
				}
			}
		}
		return true
	}

	// Add additional logic in case where both parameters are set.
	// see: https://github.com/influxdata/telegraf/issues/2860
	if f.TagPass != nil && f.TagDrop != nil {
		// return true only when the tag passes and won't be dropped (true, true).
		// when the same tag should be both passed and dropped, it is dropped (true, false).
		return pass(f) && drop(f)
	} else if f.TagPass != nil {
		return pass(f)
	} else if f.TagDrop != nil {
		return drop(f)
	}

	return true
}

// filterTags applies the TagInclude and TagExclude filters.
// It modifies the tags map in-place.
func (f *Filter) filterTags(tags map[string]string) {
	if f.tagInclude != nil {
		for k := range tags {
			if !f.tagInclude.Match(k) {
				delete(tags, k)
			}
		}
	}

	if f.tagExclude != nil {
		for k := range tags {
			if f.tagExclude.Match(k) {
				delete(tags, k)
			}
		}
	}
}
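To see the compile-then-apply flow end to end, here is a sketch of driving the Filter directly; it assumes it runs inside the telegraf module itself, since the models package sits under internal/ and is not importable from outside:

package main

import (
	"fmt"

	"github.com/influxdata/telegraf/internal/models"
)

func main() {
	f := models.Filter{
		NamePass:   []string{"cpu*"},
		TagExclude: []string{"host"},
	}
	// Compile turns the string lists into matchers and marks the filter active.
	if err := f.Compile(); err != nil {
		panic(err)
	}

	fields := map[string]interface{}{"usage_idle": 99.0}
	tags := map[string]string{"host": "localhost", "cpu": "cpu0"}

	// Apply mutates fields and tags in place and reports pass/drop.
	ok := f.Apply("cpu", fields, tags)
	fmt.Println(ok, tags) // true map[cpu:cpu0]; the "host" tag was excluded
}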
444  internal/models/filter_test.go  Normal file
@@ -0,0 +1,444 @@
package models

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestFilter_ApplyEmpty(t *testing.T) {
	f := Filter{}
	require.NoError(t, f.Compile())
	assert.False(t, f.IsActive())

	assert.True(t, f.Apply("m", map[string]interface{}{"value": int64(1)}, map[string]string{}))
}

func TestFilter_ApplyTagsDontPass(t *testing.T) {
	filters := []TagFilter{
		TagFilter{
			Name:   "cpu",
			Filter: []string{"cpu-*"},
		},
	}
	f := Filter{
		TagDrop: filters,
	}
	require.NoError(t, f.Compile())
	require.NoError(t, f.Compile())
	assert.True(t, f.IsActive())

	assert.False(t, f.Apply("m",
		map[string]interface{}{"value": int64(1)},
		map[string]string{"cpu": "cpu-total"}))
}

func TestFilter_ApplyDeleteFields(t *testing.T) {
	f := Filter{
		FieldDrop: []string{"value"},
	}
	require.NoError(t, f.Compile())
	require.NoError(t, f.Compile())
	assert.True(t, f.IsActive())

	fields := map[string]interface{}{"value": int64(1), "value2": int64(2)}
	assert.True(t, f.Apply("m", fields, nil))
	assert.Equal(t, map[string]interface{}{"value2": int64(2)}, fields)
}

func TestFilter_ApplyDeleteAllFields(t *testing.T) {
	f := Filter{
		FieldDrop: []string{"value*"},
	}
	require.NoError(t, f.Compile())
	require.NoError(t, f.Compile())
	assert.True(t, f.IsActive())

	fields := map[string]interface{}{"value": int64(1), "value2": int64(2)}
	assert.False(t, f.Apply("m", fields, nil))
}

func TestFilter_Empty(t *testing.T) {
	f := Filter{}

	measurements := []string{
		"foo",
		"bar",
		"barfoo",
		"foo_bar",
		"foo.bar",
		"foo-bar",
		"supercalifradjulisticexpialidocious",
	}

	for _, measurement := range measurements {
		if !f.shouldFieldPass(measurement) {
			t.Errorf("Expected measurement %s to pass", measurement)
		}
	}
}

func TestFilter_NamePass(t *testing.T) {
	f := Filter{
		NamePass: []string{"foo*", "cpu_usage_idle"},
	}
	require.NoError(t, f.Compile())

	passes := []string{
		"foo",
		"foo_bar",
		"foo.bar",
		"foo-bar",
		"cpu_usage_idle",
	}

	drops := []string{
		"bar",
		"barfoo",
		"bar_foo",
		"cpu_usage_busy",
	}

	for _, measurement := range passes {
		if !f.shouldNamePass(measurement) {
			t.Errorf("Expected measurement %s to pass", measurement)
		}
	}

	for _, measurement := range drops {
		if f.shouldNamePass(measurement) {
			t.Errorf("Expected measurement %s to drop", measurement)
		}
	}
}

func TestFilter_NameDrop(t *testing.T) {
	f := Filter{
		NameDrop: []string{"foo*", "cpu_usage_idle"},
	}
	require.NoError(t, f.Compile())

	drops := []string{
		"foo",
		"foo_bar",
		"foo.bar",
		"foo-bar",
		"cpu_usage_idle",
	}

	passes := []string{
		"bar",
		"barfoo",
		"bar_foo",
		"cpu_usage_busy",
	}

	for _, measurement := range passes {
		if !f.shouldNamePass(measurement) {
			t.Errorf("Expected measurement %s to pass", measurement)
		}
	}

	for _, measurement := range drops {
		if f.shouldNamePass(measurement) {
			t.Errorf("Expected measurement %s to drop", measurement)
		}
	}
}

func TestFilter_FieldPass(t *testing.T) {
	f := Filter{
		FieldPass: []string{"foo*", "cpu_usage_idle"},
	}
	require.NoError(t, f.Compile())

	passes := []string{
		"foo",
		"foo_bar",
		"foo.bar",
		"foo-bar",
		"cpu_usage_idle",
	}

	drops := []string{
		"bar",
		"barfoo",
		"bar_foo",
		"cpu_usage_busy",
	}

	for _, measurement := range passes {
		if !f.shouldFieldPass(measurement) {
			t.Errorf("Expected measurement %s to pass", measurement)
		}
	}

	for _, measurement := range drops {
		if f.shouldFieldPass(measurement) {
			t.Errorf("Expected measurement %s to drop", measurement)
		}
	}
}

func TestFilter_FieldDrop(t *testing.T) {
	f := Filter{
		FieldDrop: []string{"foo*", "cpu_usage_idle"},
	}
	require.NoError(t, f.Compile())

	drops := []string{
		"foo",
		"foo_bar",
		"foo.bar",
		"foo-bar",
		"cpu_usage_idle",
	}

	passes := []string{
		"bar",
		"barfoo",
		"bar_foo",
		"cpu_usage_busy",
	}

	for _, measurement := range passes {
		if !f.shouldFieldPass(measurement) {
			t.Errorf("Expected measurement %s to pass", measurement)
		}
	}

	for _, measurement := range drops {
		if f.shouldFieldPass(measurement) {
			t.Errorf("Expected measurement %s to drop", measurement)
		}
	}
}

func TestFilter_TagPass(t *testing.T) {
	filters := []TagFilter{
		TagFilter{
			Name:   "cpu",
			Filter: []string{"cpu-*"},
		},
		TagFilter{
			Name:   "mem",
			Filter: []string{"mem_free"},
		}}
	f := Filter{
		TagPass: filters,
	}
	require.NoError(t, f.Compile())

	passes := []map[string]string{
		{"cpu": "cpu-total"},
		{"cpu": "cpu-0"},
		{"cpu": "cpu-1"},
		{"cpu": "cpu-2"},
		{"mem": "mem_free"},
	}

	drops := []map[string]string{
		{"cpu": "cputotal"},
		{"cpu": "cpu0"},
		{"cpu": "cpu1"},
		{"cpu": "cpu2"},
		{"mem": "mem_used"},
	}

	for _, tags := range passes {
		if !f.shouldTagsPass(tags) {
			t.Errorf("Expected tags %v to pass", tags)
		}
	}

	for _, tags := range drops {
		if f.shouldTagsPass(tags) {
			t.Errorf("Expected tags %v to drop", tags)
		}
	}
}

func TestFilter_TagDrop(t *testing.T) {
	filters := []TagFilter{
		TagFilter{
			Name:   "cpu",
			Filter: []string{"cpu-*"},
		},
		TagFilter{
			Name:   "mem",
			Filter: []string{"mem_free"},
		}}
	f := Filter{
		TagDrop: filters,
	}
	require.NoError(t, f.Compile())

	drops := []map[string]string{
		{"cpu": "cpu-total"},
		{"cpu": "cpu-0"},
		{"cpu": "cpu-1"},
		{"cpu": "cpu-2"},
		{"mem": "mem_free"},
	}

	passes := []map[string]string{
		{"cpu": "cputotal"},
		{"cpu": "cpu0"},
		{"cpu": "cpu1"},
		{"cpu": "cpu2"},
		{"mem": "mem_used"},
	}

	for _, tags := range passes {
		if !f.shouldTagsPass(tags) {
			t.Errorf("Expected tags %v to pass", tags)
		}
	}

	for _, tags := range drops {
		if f.shouldTagsPass(tags) {
			t.Errorf("Expected tags %v to drop", tags)
		}
	}
}

func TestFilter_FilterTagsNoMatches(t *testing.T) {
	pretags := map[string]string{
		"host":  "localhost",
		"mytag": "foobar",
	}
	f := Filter{
		TagExclude: []string{"nomatch"},
	}
	require.NoError(t, f.Compile())

	f.filterTags(pretags)
	assert.Equal(t, map[string]string{
		"host":  "localhost",
		"mytag": "foobar",
	}, pretags)

	f = Filter{
		TagInclude: []string{"nomatch"},
	}
	require.NoError(t, f.Compile())

	f.filterTags(pretags)
	assert.Equal(t, map[string]string{}, pretags)
}

func TestFilter_FilterTagsMatches(t *testing.T) {
	pretags := map[string]string{
		"host":  "localhost",
		"mytag": "foobar",
	}
	f := Filter{
		TagExclude: []string{"ho*"},
	}
	require.NoError(t, f.Compile())

	f.filterTags(pretags)
	assert.Equal(t, map[string]string{
		"mytag": "foobar",
	}, pretags)

	pretags = map[string]string{
		"host":  "localhost",
		"mytag": "foobar",
	}
	f = Filter{
		TagInclude: []string{"my*"},
	}
	require.NoError(t, f.Compile())

	f.filterTags(pretags)
	assert.Equal(t, map[string]string{
		"mytag": "foobar",
	}, pretags)
}

// TestFilter_FilterNamePassAndDrop checks the case where both parameters
// are defined.
// see: https://github.com/influxdata/telegraf/issues/2860
func TestFilter_FilterNamePassAndDrop(t *testing.T) {
	inputData := []string{"name1", "name2", "name3", "name4"}
	expectedResult := []bool{false, true, false, false}

	f := Filter{
		NamePass: []string{"name1", "name2"},
		NameDrop: []string{"name1", "name3"},
	}

	require.NoError(t, f.Compile())

	for i, name := range inputData {
		assert.Equal(t, f.shouldNamePass(name), expectedResult[i])
	}
}

// TestFilter_FilterFieldPassAndDrop checks the case where both parameters
// are defined.
// see: https://github.com/influxdata/telegraf/issues/2860
func TestFilter_FilterFieldPassAndDrop(t *testing.T) {
	inputData := []string{"field1", "field2", "field3", "field4"}
	expectedResult := []bool{false, true, false, false}

	f := Filter{
		FieldPass: []string{"field1", "field2"},
		FieldDrop: []string{"field1", "field3"},
	}

	require.NoError(t, f.Compile())

	for i, field := range inputData {
		assert.Equal(t, f.shouldFieldPass(field), expectedResult[i])
	}
}

// TestFilter_FilterTagsPassAndDrop checks the case where both parameters
// are defined.
// see: https://github.com/influxdata/telegraf/issues/2860
func TestFilter_FilterTagsPassAndDrop(t *testing.T) {
	inputData := []map[string]string{
		{"tag1": "1", "tag2": "3"},
		{"tag1": "1", "tag2": "2"},
		{"tag1": "2", "tag2": "1"},
		{"tag1": "4", "tag2": "1"},
	}

	expectedResult := []bool{false, true, false, false}

	filterPass := []TagFilter{
		TagFilter{
			Name:   "tag1",
			Filter: []string{"1", "4"},
		},
	}

	filterDrop := []TagFilter{
		TagFilter{
			Name:   "tag1",
			Filter: []string{"4"},
		},
		TagFilter{
			Name:   "tag2",
			Filter: []string{"3"},
		},
	}

	f := Filter{
		TagDrop: filterDrop,
		TagPass: filterPass,
	}

	require.NoError(t, f.Compile())

	for i, tag := range inputData {
		assert.Equal(t, f.shouldTagsPass(tag), expectedResult[i])
	}
}
86  internal/models/makemetric.go  Normal file
@@ -0,0 +1,86 @@
package models

import (
	"log"
	"time"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/metric"
)

// makemetric is used by both RunningAggregator & RunningInput
// to make metrics.
//   nameOverride: override the name of the measurement being made.
//   namePrefix:   add this prefix to each measurement name.
//   nameSuffix:   add this suffix to each measurement name.
//   pluginTags:   these are tags that are specific to this plugin.
//   daemonTags:   these are daemon-wide global tags, and get applied after pluginTags.
//   filter:       this is a filter to apply to each metric being made.
//   applyFilter:  if false, the above filter is not applied to each metric.
//                 This is used by aggregators, because aggregators use filters
//                 on incoming metrics instead of on created metrics.
// TODO refactor this to not have such a huge func signature.
func makemetric(
	measurement string,
	fields map[string]interface{},
	tags map[string]string,
	nameOverride string,
	namePrefix string,
	nameSuffix string,
	pluginTags map[string]string,
	daemonTags map[string]string,
	filter Filter,
	applyFilter bool,
	mType telegraf.ValueType,
	t time.Time,
) telegraf.Metric {
	if len(fields) == 0 || len(measurement) == 0 {
		return nil
	}
	if tags == nil {
		tags = make(map[string]string)
	}

	// Override measurement name if set
	if len(nameOverride) != 0 {
		measurement = nameOverride
	}
	// Apply measurement prefix and suffix if set
	if len(namePrefix) != 0 {
		measurement = namePrefix + measurement
	}
	if len(nameSuffix) != 0 {
		measurement = measurement + nameSuffix
	}

	// Apply plugin-wide tags if set
	for k, v := range pluginTags {
		if _, ok := tags[k]; !ok {
			tags[k] = v
		}
	}
	// Apply daemon-wide tags if set
	for k, v := range daemonTags {
		if _, ok := tags[k]; !ok {
			tags[k] = v
		}
	}

	// Apply the metric filter(s).
	// For aggregators, the filter does not get applied when the metric is
	// made; instead, it is applied to metrics coming into the plugin,
	// i.e. in the RunningAggregator.Add function.
	if applyFilter {
		if ok := filter.Apply(measurement, fields, tags); !ok {
			return nil
		}
	}

	m, err := metric.New(measurement, tags, fields, t, mType)
	if err != nil {
		log.Printf("Error adding point [%s]: %s\n", measurement, err.Error())
		return nil
	}

	return m
}
166  internal/models/running_aggregator.go  Normal file
@@ -0,0 +1,166 @@
package models

import (
	"time"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/metric"
)

type RunningAggregator struct {
	a      telegraf.Aggregator
	Config *AggregatorConfig

	metrics chan telegraf.Metric

	periodStart time.Time
	periodEnd   time.Time
}

func NewRunningAggregator(
	a telegraf.Aggregator,
	conf *AggregatorConfig,
) *RunningAggregator {
	return &RunningAggregator{
		a:       a,
		Config:  conf,
		metrics: make(chan telegraf.Metric, 100),
	}
}

// AggregatorConfig containing configuration parameters for the running
// aggregator plugin.
type AggregatorConfig struct {
	Name string

	DropOriginal      bool
	NameOverride      string
	MeasurementPrefix string
	MeasurementSuffix string
	Tags              map[string]string
	Filter            Filter

	Period time.Duration
	Delay  time.Duration
}

func (r *RunningAggregator) Name() string {
	return "aggregators." + r.Config.Name
}

func (r *RunningAggregator) MakeMetric(
	measurement string,
	fields map[string]interface{},
	tags map[string]string,
	mType telegraf.ValueType,
	t time.Time,
) telegraf.Metric {
	m := makemetric(
		measurement,
		fields,
		tags,
		r.Config.NameOverride,
		r.Config.MeasurementPrefix,
		r.Config.MeasurementSuffix,
		r.Config.Tags,
		nil,
		r.Config.Filter,
		false,
		mType,
		t,
	)

	if m != nil {
		m.SetAggregate(true)
	}

	return m
}

// Add applies the given metric to the aggregator.
// Before applying it to the plugin, it runs any defined filters on the
// metric. It returns true if the original metric should be dropped.
func (r *RunningAggregator) Add(in telegraf.Metric) bool {
	if r.Config.Filter.IsActive() {
		// check if the aggregator should apply this metric
		name := in.Name()
		fields := in.Fields()
		tags := in.Tags()
		t := in.Time()
		if ok := r.Config.Filter.Apply(name, fields, tags); !ok {
			// aggregator should not apply this metric
			return false
		}

		in, _ = metric.New(name, tags, fields, t)
	}

	r.metrics <- in
	return r.Config.DropOriginal
}

func (r *RunningAggregator) add(in telegraf.Metric) {
	r.a.Add(in)
}

func (r *RunningAggregator) push(acc telegraf.Accumulator) {
	r.a.Push(acc)
}

func (r *RunningAggregator) reset() {
	r.a.Reset()
}

// Run runs the running aggregator, listens for incoming metrics, and waits
// for period ticks to tell it when to push and reset the aggregator.
func (r *RunningAggregator) Run(
	acc telegraf.Accumulator,
	shutdown chan struct{},
) {
	// The start of the period is truncated to the nearest second.
	//
	// Every metric then gets its timestamp checked and is dropped if it
	// is not within:
	//
	//   start < t < end + truncation + delay
	//
	// So if we start at now = 00:00.2 with a 10s period and 0.3s delay:
	//   now        = 00:00.2
	//   start      = 00:00
	//   truncation = 00:00.2
	//   end        = 00:10
	// 1st interval: 00:00 - 00:10.5
	// 2nd interval: 00:10 - 00:20.5
	// etc.
	now := time.Now()
	r.periodStart = now.Truncate(time.Second)
	truncation := now.Sub(r.periodStart)
	r.periodEnd = r.periodStart.Add(r.Config.Period)
	time.Sleep(r.Config.Delay)
	periodT := time.NewTicker(r.Config.Period)
	defer periodT.Stop()

	for {
		select {
		case <-shutdown:
			if len(r.metrics) > 0 {
				// wait until metrics are flushed before exiting
				continue
			}
			return
		case m := <-r.metrics:
			if m.Time().Before(r.periodStart) ||
				m.Time().After(r.periodEnd.Add(truncation).Add(r.Config.Delay)) {
				// the metric is outside the current aggregation period, so
				// skip it.
				continue
			}
			r.add(m)
		case <-periodT.C:
			r.periodStart = r.periodEnd
			r.periodEnd = r.periodStart.Add(r.Config.Period)
			r.push(acc)
			r.reset()
		}
	}
}
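The acceptance window in Run is the subtle part: a metric is kept only when it is not before periodStart and not after periodEnd + truncation + delay. A hypothetical standalone helper (not part of this diff) that restates the predicate with the numbers from the comment above:

package main

import (
	"fmt"
	"time"
)

// accepts mirrors the skip check in RunningAggregator.Run, inverted:
// keep the metric iff !t.Before(start) && !t.After(end + truncation + delay).
func accepts(t, start, end time.Time, truncation, delay time.Duration) bool {
	return !t.Before(start) && !t.After(end.Add(truncation).Add(delay))
}

func main() {
	base := time.Date(2017, 1, 1, 0, 0, 0, 0, time.UTC)
	start := base                     // 00:00, the truncated period start
	end := base.Add(10 * time.Second) // 00:10
	truncation := 200 * time.Millisecond
	delay := 300 * time.Millisecond

	// 00:10.5 is the last accepted instant of the first interval.
	fmt.Println(accepts(base.Add(10500*time.Millisecond), start, end, truncation, delay)) // true
	fmt.Println(accepts(base.Add(10501*time.Millisecond), start, end, truncation, delay)) // false
}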
192  internal/models/running_aggregator_test.go  Normal file
@@ -0,0 +1,192 @@
package models

import (
	"sync"
	"sync/atomic"
	"testing"
	"time"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/testutil"

	"github.com/stretchr/testify/assert"
)

func TestAdd(t *testing.T) {
	a := &TestAggregator{}
	ra := NewRunningAggregator(a, &AggregatorConfig{
		Name: "TestRunningAggregator",
		Filter: Filter{
			NamePass: []string{"*"},
		},
		Period: time.Millisecond * 500,
	})
	assert.NoError(t, ra.Config.Filter.Compile())
	acc := testutil.Accumulator{}
	go ra.Run(&acc, make(chan struct{}))

	m := ra.MakeMetric(
		"RITest",
		map[string]interface{}{"value": int(101)},
		map[string]string{},
		telegraf.Untyped,
		time.Now().Add(time.Millisecond*150),
	)
	assert.False(t, ra.Add(m))

	for {
		time.Sleep(time.Millisecond)
		if atomic.LoadInt64(&a.sum) > 0 {
			break
		}
	}
	assert.Equal(t, int64(101), atomic.LoadInt64(&a.sum))
}

func TestAddMetricsOutsideCurrentPeriod(t *testing.T) {
	a := &TestAggregator{}
	ra := NewRunningAggregator(a, &AggregatorConfig{
		Name: "TestRunningAggregator",
		Filter: Filter{
			NamePass: []string{"*"},
		},
		Period: time.Millisecond * 500,
	})
	assert.NoError(t, ra.Config.Filter.Compile())
	acc := testutil.Accumulator{}
	go ra.Run(&acc, make(chan struct{}))

	// metric before current period
	m := ra.MakeMetric(
		"RITest",
		map[string]interface{}{"value": int(101)},
		map[string]string{},
		telegraf.Untyped,
		time.Now().Add(-time.Hour),
	)
	assert.False(t, ra.Add(m))

	// metric after current period
	m = ra.MakeMetric(
		"RITest",
		map[string]interface{}{"value": int(101)},
		map[string]string{},
		telegraf.Untyped,
		time.Now().Add(time.Hour),
	)
	assert.False(t, ra.Add(m))

	// "now" metric
	m = ra.MakeMetric(
		"RITest",
		map[string]interface{}{"value": int(101)},
		map[string]string{},
		telegraf.Untyped,
		time.Now().Add(time.Millisecond*50),
	)
	assert.False(t, ra.Add(m))

	for {
		time.Sleep(time.Millisecond)
		if atomic.LoadInt64(&a.sum) > 0 {
			break
		}
	}
	assert.Equal(t, int64(101), atomic.LoadInt64(&a.sum))
}

func TestAddAndPushOnePeriod(t *testing.T) {
	a := &TestAggregator{}
	ra := NewRunningAggregator(a, &AggregatorConfig{
		Name: "TestRunningAggregator",
		Filter: Filter{
			NamePass: []string{"*"},
		},
		Period: time.Millisecond * 500,
	})
	assert.NoError(t, ra.Config.Filter.Compile())
	acc := testutil.Accumulator{}
	shutdown := make(chan struct{})

	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		ra.Run(&acc, shutdown)
	}()

	m := ra.MakeMetric(
		"RITest",
		map[string]interface{}{"value": int(101)},
		map[string]string{},
		telegraf.Untyped,
		time.Now().Add(time.Millisecond*100),
	)
	assert.False(t, ra.Add(m))

	for {
		time.Sleep(time.Millisecond)
		if acc.NMetrics() > 0 {
			break
		}
	}
	acc.AssertContainsFields(t, "TestMetric", map[string]interface{}{"sum": int64(101)})

	close(shutdown)
	wg.Wait()
}

func TestAddDropOriginal(t *testing.T) {
	ra := NewRunningAggregator(&TestAggregator{}, &AggregatorConfig{
		Name: "TestRunningAggregator",
		Filter: Filter{
			NamePass: []string{"RI*"},
		},
		DropOriginal: true,
	})
	assert.NoError(t, ra.Config.Filter.Compile())

	m := ra.MakeMetric(
		"RITest",
		map[string]interface{}{"value": int(101)},
		map[string]string{},
		telegraf.Untyped,
		time.Now(),
	)
	assert.True(t, ra.Add(m))

	// this metric name doesn't match the filter, so Add will return false
	m2 := ra.MakeMetric(
		"foobar",
		map[string]interface{}{"value": int(101)},
		map[string]string{},
		telegraf.Untyped,
		time.Now(),
	)
	assert.False(t, ra.Add(m2))
}

type TestAggregator struct {
	sum int64
}

func (t *TestAggregator) Description() string  { return "" }
func (t *TestAggregator) SampleConfig() string { return "" }
func (t *TestAggregator) Reset() {
	atomic.StoreInt64(&t.sum, 0)
}

func (t *TestAggregator) Push(acc telegraf.Accumulator) {
	acc.AddFields("TestMetric",
		map[string]interface{}{"sum": t.sum},
		map[string]string{},
	)
}

func (t *TestAggregator) Add(in telegraf.Metric) {
	for _, v := range in.Fields() {
		if vi, ok := v.(int64); ok {
			atomic.AddInt64(&t.sum, vi)
		}
	}
}
102  internal/models/running_input.go  Normal file
@@ -0,0 +1,102 @@
package models

import (
	"fmt"
	"time"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/serializers/influx"
	"github.com/influxdata/telegraf/selfstat"
)

var GlobalMetricsGathered = selfstat.Register("agent", "metrics_gathered", map[string]string{})

type RunningInput struct {
	Input  telegraf.Input
	Config *InputConfig

	trace       bool
	defaultTags map[string]string

	MetricsGathered selfstat.Stat
}

func NewRunningInput(
	input telegraf.Input,
	config *InputConfig,
) *RunningInput {
	return &RunningInput{
		Input:  input,
		Config: config,
		MetricsGathered: selfstat.Register(
			"gather",
			"metrics_gathered",
			map[string]string{"input": config.Name},
		),
	}
}

// InputConfig containing a name, interval, and filter
type InputConfig struct {
	Name              string
	NameOverride      string
	MeasurementPrefix string
	MeasurementSuffix string
	Tags              map[string]string
	Filter            Filter
	Interval          time.Duration
}

func (r *RunningInput) Name() string {
	return "inputs." + r.Config.Name
}

// MakeMetric either returns a metric, or returns nil if the metric doesn't
// need to be created (because of filtering, an error, etc.)
func (r *RunningInput) MakeMetric(
	measurement string,
	fields map[string]interface{},
	tags map[string]string,
	mType telegraf.ValueType,
	t time.Time,
) telegraf.Metric {
	m := makemetric(
		measurement,
		fields,
		tags,
		r.Config.NameOverride,
		r.Config.MeasurementPrefix,
		r.Config.MeasurementSuffix,
		r.Config.Tags,
		r.defaultTags,
		r.Config.Filter,
		true,
		mType,
		t,
	)

	if r.trace && m != nil {
		s := influx.NewSerializer()
		s.SetFieldSortOrder(influx.SortFields)
		octets, err := s.Serialize(m)
		if err == nil {
			fmt.Print("> " + string(octets))
		}
	}

	r.MetricsGathered.Incr(1)
	GlobalMetricsGathered.Incr(1)
	return m
}

func (r *RunningInput) Trace() bool {
	return r.trace
}

func (r *RunningInput) SetTrace(trace bool) {
	r.trace = trace
}

func (r *RunningInput) SetDefaultTags(tags map[string]string) {
	r.defaultTags = tags
}
228  internal/models/running_input_test.go  Normal file
@@ -0,0 +1,228 @@
package models

import (
	"testing"
	"time"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/metric"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestMakeMetricNoFields(t *testing.T) {
	now := time.Now()
	ri := NewRunningInput(&testInput{}, &InputConfig{
		Name: "TestRunningInput",
	})

	m := ri.MakeMetric(
		"RITest",
		map[string]interface{}{},
		map[string]string{},
		telegraf.Untyped,
		now,
	)
	assert.Nil(t, m)
}

// nil fields should get dropped
func TestMakeMetricNilFields(t *testing.T) {
	now := time.Now()
	ri := NewRunningInput(&testInput{}, &InputConfig{
		Name: "TestRunningInput",
	})

	m := ri.MakeMetric(
		"RITest",
		map[string]interface{}{
			"value": int(101),
			"nil":   nil,
		},
		map[string]string{},
		telegraf.Untyped,
		now,
	)

	expected, err := metric.New("RITest",
		map[string]string{},
		map[string]interface{}{
			"value": int(101),
		},
		now,
	)
	require.NoError(t, err)

	require.Equal(t, expected, m)
}

func TestMakeMetricWithPluginTags(t *testing.T) {
	now := time.Now()
	ri := NewRunningInput(&testInput{}, &InputConfig{
		Name: "TestRunningInput",
		Tags: map[string]string{
			"foo": "bar",
		},
	})

	ri.SetTrace(true)
	assert.Equal(t, true, ri.Trace())

	m := ri.MakeMetric(
		"RITest",
		map[string]interface{}{"value": int(101)},
		nil,
		telegraf.Untyped,
		now,
	)

	expected, err := metric.New("RITest",
		map[string]string{
			"foo": "bar",
		},
		map[string]interface{}{
			"value": 101,
		},
		now,
	)
	require.NoError(t, err)
	require.Equal(t, expected, m)
}

func TestMakeMetricFilteredOut(t *testing.T) {
	now := time.Now()
	ri := NewRunningInput(&testInput{}, &InputConfig{
		Name: "TestRunningInput",
		Tags: map[string]string{
			"foo": "bar",
		},
		Filter: Filter{NamePass: []string{"foobar"}},
	})

	ri.SetTrace(true)
	assert.Equal(t, true, ri.Trace())
	assert.NoError(t, ri.Config.Filter.Compile())

	m := ri.MakeMetric(
		"RITest",
		map[string]interface{}{"value": int(101)},
		nil,
		telegraf.Untyped,
		now,
	)
	assert.Nil(t, m)
}

func TestMakeMetricWithDaemonTags(t *testing.T) {
	now := time.Now()
	ri := NewRunningInput(&testInput{}, &InputConfig{
		Name: "TestRunningInput",
	})
	ri.SetDefaultTags(map[string]string{
		"foo": "bar",
	})

	ri.SetTrace(true)
	assert.Equal(t, true, ri.Trace())

	m := ri.MakeMetric(
		"RITest",
		map[string]interface{}{"value": int(101)},
		map[string]string{},
		telegraf.Untyped,
		now,
	)
	expected, err := metric.New("RITest",
		map[string]string{
			"foo": "bar",
		},
		map[string]interface{}{
			"value": 101,
		},
		now,
	)
	require.NoError(t, err)
	require.Equal(t, expected, m)
}

func TestMakeMetricNameOverride(t *testing.T) {
	now := time.Now()
	ri := NewRunningInput(&testInput{}, &InputConfig{
		Name:         "TestRunningInput",
		NameOverride: "foobar",
	})

	m := ri.MakeMetric(
		"RITest",
		map[string]interface{}{"value": int(101)},
		map[string]string{},
		telegraf.Untyped,
		now,
	)
	expected, err := metric.New("foobar",
		nil,
		map[string]interface{}{
			"value": 101,
		},
		now,
	)
	require.NoError(t, err)
	require.Equal(t, expected, m)
}

func TestMakeMetricNamePrefix(t *testing.T) {
	now := time.Now()
	ri := NewRunningInput(&testInput{}, &InputConfig{
		Name:              "TestRunningInput",
		MeasurementPrefix: "foobar_",
	})

	m := ri.MakeMetric(
		"RITest",
		map[string]interface{}{"value": int(101)},
		map[string]string{},
		telegraf.Untyped,
		now,
	)
	expected, err := metric.New("foobar_RITest",
		nil,
		map[string]interface{}{
			"value": 101,
		},
		now,
	)
	require.NoError(t, err)
	require.Equal(t, expected, m)
}

func TestMakeMetricNameSuffix(t *testing.T) {
	now := time.Now()
	ri := NewRunningInput(&testInput{}, &InputConfig{
		Name:              "TestRunningInput",
		MeasurementSuffix: "_foobar",
	})

	m := ri.MakeMetric(
		"RITest",
		map[string]interface{}{"value": int(101)},
		map[string]string{},
		telegraf.Untyped,
		now,
	)
	expected, err := metric.New("RITest_foobar",
		nil,
		map[string]interface{}{
			"value": 101,
		},
		now,
	)
	require.NoError(t, err)
	require.Equal(t, expected, m)
}

type testInput struct{}

func (t *testInput) Description() string                   { return "" }
func (t *testInput) SampleConfig() string                  { return "" }
func (t *testInput) Gather(acc telegraf.Accumulator) error { return nil }
205  internal/models/running_output.go  Normal file
@@ -0,0 +1,205 @@
|
||||
package models
|
||||
|
||||
import (
|
||||
"log"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/influxdata/telegraf"
|
||||
"github.com/influxdata/telegraf/internal/buffer"
|
||||
"github.com/influxdata/telegraf/metric"
|
||||
"github.com/influxdata/telegraf/selfstat"
|
||||
)
|
||||
|
||||
const (
|
||||
// Default size of metrics batch size.
|
||||
DEFAULT_METRIC_BATCH_SIZE = 1000
|
||||
|
||||
// Default number of metrics kept. It should be a multiple of batch size.
|
||||
DEFAULT_METRIC_BUFFER_LIMIT = 10000
|
||||
)
|
||||
|
||||
// RunningOutput contains the output configuration
|
||||
type RunningOutput struct {
|
||||
Name string
|
||||
Output telegraf.Output
|
||||
Config *OutputConfig
|
||||
MetricBufferLimit int
|
||||
MetricBatchSize int
|
||||
|
||||
MetricsFiltered selfstat.Stat
|
||||
MetricsWritten selfstat.Stat
|
||||
    BufferSize  selfstat.Stat
    BufferLimit selfstat.Stat
    WriteTime   selfstat.Stat

    metrics     *buffer.Buffer
    failMetrics *buffer.Buffer

    // Guards against concurrent calls to the Output as described in #3009
    sync.Mutex
}

func NewRunningOutput(
    name string,
    output telegraf.Output,
    conf *OutputConfig,
    batchSize int,
    bufferLimit int,
) *RunningOutput {
    if bufferLimit == 0 {
        bufferLimit = DEFAULT_METRIC_BUFFER_LIMIT
    }
    if batchSize == 0 {
        batchSize = DEFAULT_METRIC_BATCH_SIZE
    }
    ro := &RunningOutput{
        Name:              name,
        metrics:           buffer.NewBuffer(batchSize),
        failMetrics:       buffer.NewBuffer(bufferLimit),
        Output:            output,
        Config:            conf,
        MetricBufferLimit: bufferLimit,
        MetricBatchSize:   batchSize,
        MetricsWritten: selfstat.Register(
            "write",
            "metrics_written",
            map[string]string{"output": name},
        ),
        MetricsFiltered: selfstat.Register(
            "write",
            "metrics_filtered",
            map[string]string{"output": name},
        ),
        BufferSize: selfstat.Register(
            "write",
            "buffer_size",
            map[string]string{"output": name},
        ),
        BufferLimit: selfstat.Register(
            "write",
            "buffer_limit",
            map[string]string{"output": name},
        ),
        WriteTime: selfstat.RegisterTiming(
            "write",
            "write_time_ns",
            map[string]string{"output": name},
        ),
    }
    ro.BufferLimit.Set(int64(ro.MetricBufferLimit))
    return ro
}

// AddMetric adds a metric to the output. This function can also write cached
// points if FlushBufferWhenFull is true.
func (ro *RunningOutput) AddMetric(m telegraf.Metric) {
    if m == nil {
        return
    }
    // Apply any tagexclude/taginclude filter parameters before adding the metric.
    if ro.Config.Filter.IsActive() {
        // In order to filter out tags, we need to create a new metric, since
        // metrics are immutable once created.
        name := m.Name()
        tags := m.Tags()
        fields := m.Fields()
        t := m.Time()
        if ok := ro.Config.Filter.Apply(name, fields, tags); !ok {
            ro.MetricsFiltered.Incr(1)
            return
        }
        // An error is not possible when creating from another metric, so ignore it.
        m, _ = metric.New(name, tags, fields, t)
    }

    if output, ok := ro.Output.(telegraf.AggregatingOutput); ok {
        output.Add(m)
        return
    }

    ro.metrics.Add(m)
    if ro.metrics.Len() == ro.MetricBatchSize {
        batch := ro.metrics.Batch(ro.MetricBatchSize)
        err := ro.write(batch)
        if err != nil {
            ro.failMetrics.Add(batch...)
        }
    }
}

// Write writes all cached points to this output.
func (ro *RunningOutput) Write() error {
    if output, ok := ro.Output.(telegraf.AggregatingOutput); ok {
        metrics := output.Push()
        ro.metrics.Add(metrics...)
        output.Reset()
    }

    nFails, nMetrics := ro.failMetrics.Len(), ro.metrics.Len()
    ro.BufferSize.Set(int64(nFails + nMetrics))
    log.Printf("D! Output [%s] buffer fullness: %d / %d metrics",
        ro.Name, nFails+nMetrics, ro.MetricBufferLimit)
    var err error
    if !ro.failMetrics.IsEmpty() {
        // How many batches of failed writes we need to write.
        nBatches := nFails/ro.MetricBatchSize + 1
        batchSize := ro.MetricBatchSize

        for i := 0; i < nBatches; i++ {
            // If it's the last batch, only grab the metrics that have not had
            // a write attempt already (this is primarily to preserve order).
            if i == nBatches-1 {
                batchSize = nFails % ro.MetricBatchSize
            }
            batch := ro.failMetrics.Batch(batchSize)
            // If we've already failed previous writes, don't bother trying to
            // write to this output again. We are not exiting the loop just so
            // that we can rotate the metrics to preserve order.
            if err == nil {
                err = ro.write(batch)
            }
            if err != nil {
                ro.failMetrics.Add(batch...)
            }
        }
    }

    batch := ro.metrics.Batch(ro.MetricBatchSize)
    // See the comment above about not trying to write to an already failed output.
    // If ro.failMetrics is empty then err will always be nil at this point.
    if err == nil {
        err = ro.write(batch)
    }

    if err != nil {
        ro.failMetrics.Add(batch...)
        return err
    }
    return nil
}

func (ro *RunningOutput) write(metrics []telegraf.Metric) error {
    nMetrics := len(metrics)
    if nMetrics == 0 {
        return nil
    }
    ro.Lock()
    defer ro.Unlock()
    start := time.Now()
    err := ro.Output.Write(metrics)
    elapsed := time.Since(start)
    if err == nil {
        log.Printf("D! Output [%s] wrote batch of %d metrics in %s\n",
            ro.Name, nMetrics, elapsed)
        ro.MetricsWritten.Incr(int64(nMetrics))
        ro.WriteTime.Incr(elapsed.Nanoseconds())
    }
    return err
}

// OutputConfig contains the name and filter of an output.
type OutputConfig struct {
    Name   string
    Filter Filter
}
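A minimal usage sketch (hypothetical, not part of this change; assumes a package-internal context): NewRunningOutput wires an Output with default batch and buffer sizes when zero values are passed, AddMetric flushes automatically once a full batch accumulates, and Write drains the remainder plus any previously failed batches.

// Hypothetical helper: drains a channel of metrics through a RunningOutput.
func exampleRunningOutput(out telegraf.Output, in <-chan telegraf.Metric) error {
    ro := NewRunningOutput("example", out, &OutputConfig{Name: "example"}, 0, 0)
    for m := range in {
        // Flushes on its own each time MetricBatchSize metrics accumulate.
        ro.AddMetric(m)
    }
    // Flush whatever is left, including any previously failed batches.
    return ro.Write()
}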
556 internal/models/running_output_test.go (Normal file)
@@ -0,0 +1,556 @@
package models

import (
    "fmt"
    "sync"
    "testing"

    "github.com/influxdata/telegraf"
    "github.com/influxdata/telegraf/testutil"

    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
)

var first5 = []telegraf.Metric{
    testutil.TestMetric(101, "metric1"),
    testutil.TestMetric(101, "metric2"),
    testutil.TestMetric(101, "metric3"),
    testutil.TestMetric(101, "metric4"),
    testutil.TestMetric(101, "metric5"),
}

var next5 = []telegraf.Metric{
    testutil.TestMetric(101, "metric6"),
    testutil.TestMetric(101, "metric7"),
    testutil.TestMetric(101, "metric8"),
    testutil.TestMetric(101, "metric9"),
    testutil.TestMetric(101, "metric10"),
}

// Benchmark adding metrics and writing after each add.
func BenchmarkRunningOutputAddWrite(b *testing.B) {
    conf := &OutputConfig{
        Filter: Filter{},
    }

    m := &perfOutput{}
    ro := NewRunningOutput("test", m, conf, 1000, 10000)

    for n := 0; n < b.N; n++ {
        ro.AddMetric(testutil.TestMetric(101, "metric1"))
        ro.Write()
    }
}

// Benchmark adding metrics, writing after every 100 adds.
func BenchmarkRunningOutputAddWriteEvery100(b *testing.B) {
    conf := &OutputConfig{
        Filter: Filter{},
    }

    m := &perfOutput{}
    ro := NewRunningOutput("test", m, conf, 1000, 10000)

    for n := 0; n < b.N; n++ {
        ro.AddMetric(testutil.TestMetric(101, "metric1"))
        if n%100 == 0 {
            ro.Write()
        }
    }
}

// Benchmark adding metrics when writes fail.
func BenchmarkRunningOutputAddFailWrites(b *testing.B) {
    conf := &OutputConfig{
        Filter: Filter{},
    }

    m := &perfOutput{}
    m.failWrite = true
    ro := NewRunningOutput("test", m, conf, 1000, 10000)

    for n := 0; n < b.N; n++ {
        ro.AddMetric(testutil.TestMetric(101, "metric1"))
    }
}

func TestAddingNilMetric(t *testing.T) {
    conf := &OutputConfig{
        Filter: Filter{},
    }

    m := &mockOutput{}
    ro := NewRunningOutput("test", m, conf, 1000, 10000)

    ro.AddMetric(nil)
    ro.AddMetric(nil)
    ro.AddMetric(nil)

    err := ro.Write()
    assert.NoError(t, err)
    assert.Len(t, m.Metrics(), 0)
}

// Test that NameDrop filters get properly applied.
func TestRunningOutput_DropFilter(t *testing.T) {
    conf := &OutputConfig{
        Filter: Filter{
            NameDrop: []string{"metric1", "metric2"},
        },
    }
    assert.NoError(t, conf.Filter.Compile())

    m := &mockOutput{}
    ro := NewRunningOutput("test", m, conf, 1000, 10000)

    for _, metric := range first5 {
        ro.AddMetric(metric)
    }
    for _, metric := range next5 {
        ro.AddMetric(metric)
    }
    assert.Len(t, m.Metrics(), 0)

    err := ro.Write()
    assert.NoError(t, err)
    assert.Len(t, m.Metrics(), 8)
}

// Test that NameDrop filters without a match do nothing.
func TestRunningOutput_PassFilter(t *testing.T) {
    conf := &OutputConfig{
        Filter: Filter{
            NameDrop: []string{"metric1000", "foo*"},
        },
    }
    assert.NoError(t, conf.Filter.Compile())

    m := &mockOutput{}
    ro := NewRunningOutput("test", m, conf, 1000, 10000)

    for _, metric := range first5 {
        ro.AddMetric(metric)
    }
    for _, metric := range next5 {
        ro.AddMetric(metric)
    }
    assert.Len(t, m.Metrics(), 0)

    err := ro.Write()
    assert.NoError(t, err)
    assert.Len(t, m.Metrics(), 10)
}

// Test that all tags are dropped when the taginclude filter matches nothing.
func TestRunningOutput_TagIncludeNoMatch(t *testing.T) {
    conf := &OutputConfig{
        Filter: Filter{
            TagInclude: []string{"nothing*"},
        },
    }
    assert.NoError(t, conf.Filter.Compile())

    m := &mockOutput{}
    ro := NewRunningOutput("test", m, conf, 1000, 10000)

    ro.AddMetric(testutil.TestMetric(101, "metric1"))
    assert.Len(t, m.Metrics(), 0)

    err := ro.Write()
    assert.NoError(t, err)
    assert.Len(t, m.Metrics(), 1)
    assert.Empty(t, m.Metrics()[0].Tags())
}

// Test that tags are properly excluded.
func TestRunningOutput_TagExcludeMatch(t *testing.T) {
    conf := &OutputConfig{
        Filter: Filter{
            TagExclude: []string{"tag*"},
        },
    }
    assert.NoError(t, conf.Filter.Compile())

    m := &mockOutput{}
    ro := NewRunningOutput("test", m, conf, 1000, 10000)

    ro.AddMetric(testutil.TestMetric(101, "metric1"))
    assert.Len(t, m.Metrics(), 0)

    err := ro.Write()
    assert.NoError(t, err)
    assert.Len(t, m.Metrics(), 1)
    assert.Len(t, m.Metrics()[0].Tags(), 0)
}

// Test that tags are preserved when the tagexclude filter does not match.
func TestRunningOutput_TagExcludeNoMatch(t *testing.T) {
    conf := &OutputConfig{
        Filter: Filter{
            TagExclude: []string{"nothing*"},
        },
    }
    assert.NoError(t, conf.Filter.Compile())

    m := &mockOutput{}
    ro := NewRunningOutput("test", m, conf, 1000, 10000)

    ro.AddMetric(testutil.TestMetric(101, "metric1"))
    assert.Len(t, m.Metrics(), 0)

    err := ro.Write()
    assert.NoError(t, err)
    assert.Len(t, m.Metrics(), 1)
    assert.Len(t, m.Metrics()[0].Tags(), 1)
}

// Test that tags are properly included.
func TestRunningOutput_TagIncludeMatch(t *testing.T) {
    conf := &OutputConfig{
        Filter: Filter{
            TagInclude: []string{"tag*"},
        },
    }
    assert.NoError(t, conf.Filter.Compile())

    m := &mockOutput{}
    ro := NewRunningOutput("test", m, conf, 1000, 10000)

    ro.AddMetric(testutil.TestMetric(101, "metric1"))
    assert.Len(t, m.Metrics(), 0)

    err := ro.Write()
    assert.NoError(t, err)
    assert.Len(t, m.Metrics(), 1)
    assert.Len(t, m.Metrics()[0].Tags(), 1)
}

// Test that we can write metrics with a simple default setup.
func TestRunningOutputDefault(t *testing.T) {
    conf := &OutputConfig{
        Filter: Filter{},
    }

    m := &mockOutput{}
    ro := NewRunningOutput("test", m, conf, 1000, 10000)

    for _, metric := range first5 {
        ro.AddMetric(metric)
    }
    for _, metric := range next5 {
        ro.AddMetric(metric)
    }
    assert.Len(t, m.Metrics(), 0)

    err := ro.Write()
    assert.NoError(t, err)
    assert.Len(t, m.Metrics(), 10)
}

// Test that running output doesn't flush until it's full when
// FlushBufferWhenFull is set.
func TestRunningOutputFlushWhenFull(t *testing.T) {
    conf := &OutputConfig{
        Filter: Filter{},
    }

    m := &mockOutput{}
    ro := NewRunningOutput("test", m, conf, 6, 10)

    // Fill the buffer to one under the batch size
    for _, metric := range first5 {
        ro.AddMetric(metric)
    }
    // no flush yet
    assert.Len(t, m.Metrics(), 0)

    // add one more metric
    ro.AddMetric(next5[0])
    // now it flushed
    assert.Len(t, m.Metrics(), 6)

    // add one more metric and write it manually
    ro.AddMetric(next5[1])
    err := ro.Write()
    assert.NoError(t, err)
    assert.Len(t, m.Metrics(), 7)
}

// Test that running output doesn't flush until it's full when
// FlushBufferWhenFull is set, twice.
func TestRunningOutputMultiFlushWhenFull(t *testing.T) {
    conf := &OutputConfig{
        Filter: Filter{},
    }

    m := &mockOutput{}
    ro := NewRunningOutput("test", m, conf, 4, 12)

    // Fill the buffer past the batch size twice
    for _, metric := range first5 {
        ro.AddMetric(metric)
    }
    for _, metric := range next5 {
        ro.AddMetric(metric)
    }
    // flushed twice
    assert.Len(t, m.Metrics(), 8)
}

func TestRunningOutputWriteFail(t *testing.T) {
    conf := &OutputConfig{
        Filter: Filter{},
    }

    m := &mockOutput{}
    m.failWrite = true
    ro := NewRunningOutput("test", m, conf, 4, 12)

    // Fill the buffer past the batch size twice
    for _, metric := range first5 {
        ro.AddMetric(metric)
    }
    for _, metric := range next5 {
        ro.AddMetric(metric)
    }
    // no successful flush yet
    assert.Len(t, m.Metrics(), 0)

    // manual write fails
    err := ro.Write()
    require.Error(t, err)
    // no successful flush yet
    assert.Len(t, m.Metrics(), 0)

    m.failWrite = false
    err = ro.Write()
    require.NoError(t, err)

    assert.Len(t, m.Metrics(), 10)
}

// Verify that the order of points is preserved during a write failure.
func TestRunningOutputWriteFailOrder(t *testing.T) {
    conf := &OutputConfig{
        Filter: Filter{},
    }

    m := &mockOutput{}
    m.failWrite = true
    ro := NewRunningOutput("test", m, conf, 100, 1000)

    // add 5 metrics
    for _, metric := range first5 {
        ro.AddMetric(metric)
    }
    // no successful flush yet
    assert.Len(t, m.Metrics(), 0)

    // Write fails
    err := ro.Write()
    require.Error(t, err)
    // no successful flush yet
    assert.Len(t, m.Metrics(), 0)

    m.failWrite = false
    // add 5 more metrics
    for _, metric := range next5 {
        ro.AddMetric(metric)
    }
    err = ro.Write()
    require.NoError(t, err)

    // Verify that 10 metrics were written
    assert.Len(t, m.Metrics(), 10)
    // Verify that they are in order
    expected := append(first5, next5...)
    assert.Equal(t, expected, m.Metrics())
}

// Verify that the order of points is preserved during many write failures.
func TestRunningOutputWriteFailOrder2(t *testing.T) {
    conf := &OutputConfig{
        Filter: Filter{},
    }

    m := &mockOutput{}
    m.failWrite = true
    ro := NewRunningOutput("test", m, conf, 5, 100)

    // add 5 metrics
    for _, metric := range first5 {
        ro.AddMetric(metric)
    }
    // Write fails
    err := ro.Write()
    require.Error(t, err)
    // no successful flush yet
    assert.Len(t, m.Metrics(), 0)

    // add 5 metrics
    for _, metric := range next5 {
        ro.AddMetric(metric)
    }
    // Write fails
    err = ro.Write()
    require.Error(t, err)
    // no successful flush yet
    assert.Len(t, m.Metrics(), 0)

    // add 5 metrics
    for _, metric := range first5 {
        ro.AddMetric(metric)
    }
    // Write fails
    err = ro.Write()
    require.Error(t, err)
    // no successful flush yet
    assert.Len(t, m.Metrics(), 0)

    // add 5 metrics
    for _, metric := range next5 {
        ro.AddMetric(metric)
    }
    // Write fails
    err = ro.Write()
    require.Error(t, err)
    // no successful flush yet
    assert.Len(t, m.Metrics(), 0)

    m.failWrite = false
    err = ro.Write()
    require.NoError(t, err)

    // Verify that 20 metrics were written
    assert.Len(t, m.Metrics(), 20)
    // Verify that they are in order
    expected := append(first5, next5...)
    expected = append(expected, first5...)
    expected = append(expected, next5...)
    assert.Equal(t, expected, m.Metrics())
}

// Verify that the order of points is preserved when there is a remainder
// of points for the batch.
//
// ie, with a batch size of 5:
//
//   1 2 3 4 5 6 <-- order, failed points
//   6 1 2 3 4 5 <-- order, after 1st write failure (1 2 3 4 5 was batch)
//   1 2 3 4 5 6 <-- order, after 2nd write failure, (6 was batch)
//
func TestRunningOutputWriteFailOrder3(t *testing.T) {
    conf := &OutputConfig{
        Filter: Filter{},
    }

    m := &mockOutput{}
    m.failWrite = true
    ro := NewRunningOutput("test", m, conf, 5, 1000)

    // add 5 metrics
    for _, metric := range first5 {
        ro.AddMetric(metric)
    }
    // no successful flush yet
    assert.Len(t, m.Metrics(), 0)

    // Write fails
    err := ro.Write()
    require.Error(t, err)
    // no successful flush yet
    assert.Len(t, m.Metrics(), 0)

    // add and attempt to write a single metric:
    ro.AddMetric(next5[0])
    err = ro.Write()
    require.Error(t, err)

    // unset fail and write metrics
    m.failWrite = false
    err = ro.Write()
    require.NoError(t, err)

    // Verify that 6 metrics were written
    assert.Len(t, m.Metrics(), 6)
    // Verify that they are in order
    expected := append(first5, next5[0])
    assert.Equal(t, expected, m.Metrics())
}

type mockOutput struct {
    sync.Mutex

    metrics []telegraf.Metric

    // if true, mock a write failure
    failWrite bool
}

func (m *mockOutput) Connect() error {
    return nil
}

func (m *mockOutput) Close() error {
    return nil
}

func (m *mockOutput) Description() string {
    return ""
}

func (m *mockOutput) SampleConfig() string {
    return ""
}

func (m *mockOutput) Write(metrics []telegraf.Metric) error {
    m.Lock()
    defer m.Unlock()
    if m.failWrite {
        return fmt.Errorf("failed write")
    }

    if m.metrics == nil {
        m.metrics = []telegraf.Metric{}
    }

    m.metrics = append(m.metrics, metrics...)
    return nil
}

func (m *mockOutput) Metrics() []telegraf.Metric {
    m.Lock()
    defer m.Unlock()
    return m.metrics
}

type perfOutput struct {
    // if true, mock a write failure
    failWrite bool
}

func (m *perfOutput) Connect() error {
    return nil
}

func (m *perfOutput) Close() error {
    return nil
}

func (m *perfOutput) Description() string {
    return ""
}

func (m *perfOutput) SampleConfig() string {
    return ""
}

func (m *perfOutput) Write(metrics []telegraf.Metric) error {
    if m.failWrite {
        return fmt.Errorf("failed write")
    }
    return nil
}
51 internal/models/running_processor.go (Normal file)
@@ -0,0 +1,51 @@
package models

import (
    "sync"

    "github.com/influxdata/telegraf"
)

type RunningProcessor struct {
    Name string

    sync.Mutex
    Processor telegraf.Processor
    Config    *ProcessorConfig
}

type RunningProcessors []*RunningProcessor

func (rp RunningProcessors) Len() int           { return len(rp) }
func (rp RunningProcessors) Swap(i, j int)      { rp[i], rp[j] = rp[j], rp[i] }
func (rp RunningProcessors) Less(i, j int) bool { return rp[i].Config.Order < rp[j].Config.Order }

// ProcessorConfig contains the name, order, and filter of a processor.
type ProcessorConfig struct {
    Name   string
    Order  int64
    Filter Filter
}

func (rp *RunningProcessor) Apply(in ...telegraf.Metric) []telegraf.Metric {
    rp.Lock()
    defer rp.Unlock()

    ret := []telegraf.Metric{}

    for _, metric := range in {
        if rp.Config.Filter.IsActive() {
            // Check whether the filter selects this metric for processing.
            if ok := rp.Config.Filter.Apply(metric.Name(), metric.Fields(), metric.Tags()); !ok {
                // The filter does not select it, so pass it through unmodified.
                ret = append(ret, metric)
                continue
            }
        }
        // The metric passed the filter, so run it through the processor and
        // append the results to the output slice.
        ret = append(ret, rp.Processor.Apply(metric)...)
    }

    return ret
}
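A hedged sketch of driving a RunningProcessor (hypothetical helper; the NamePass field is assumed to exist on Filter alongside the NameDrop used in the tests below): metrics the filter does not select pass through unmodified, everything else goes through the processor.

// Hypothetical helper: processes only metrics named "cpu*".
func exampleApplyProcessor(p telegraf.Processor, in []telegraf.Metric) ([]telegraf.Metric, error) {
    rp := &RunningProcessor{
        Name:      "example",
        Processor: p,
        Config:    &ProcessorConfig{Filter: Filter{NamePass: []string{"cpu*"}}},
    }
    if err := rp.Config.Filter.Compile(); err != nil {
        return nil, err
    }
    return rp.Apply(in...), nil
}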
117 internal/models/running_processor_test.go (Normal file)
@@ -0,0 +1,117 @@
package models

import (
    "testing"

    "github.com/influxdata/telegraf"
    "github.com/influxdata/telegraf/testutil"

    "github.com/stretchr/testify/assert"
)

type TestProcessor struct{}

func (f *TestProcessor) SampleConfig() string { return "" }
func (f *TestProcessor) Description() string  { return "" }

// Apply renames:
//   "foo" to "fuz"
//   "bar" to "baz"
// And it also drops measurements named "dropme".
func (f *TestProcessor) Apply(in ...telegraf.Metric) []telegraf.Metric {
    out := make([]telegraf.Metric, 0)
    for _, m := range in {
        switch m.Name() {
        case "foo":
            out = append(out, testutil.TestMetric(1, "fuz"))
        case "bar":
            out = append(out, testutil.TestMetric(1, "baz"))
        case "dropme":
            // drop the metric!
        default:
            out = append(out, m)
        }
    }
    return out
}

func NewTestRunningProcessor() *RunningProcessor {
    out := &RunningProcessor{
        Name:      "test",
        Processor: &TestProcessor{},
        Config:    &ProcessorConfig{Filter: Filter{}},
    }
    return out
}

func TestRunningProcessor(t *testing.T) {
    inmetrics := []telegraf.Metric{
        testutil.TestMetric(1, "foo"),
        testutil.TestMetric(1, "bar"),
        testutil.TestMetric(1, "baz"),
    }

    expectedNames := []string{
        "fuz",
        "baz",
        "baz",
    }
    rfp := NewTestRunningProcessor()
    filteredMetrics := rfp.Apply(inmetrics...)

    actualNames := []string{
        filteredMetrics[0].Name(),
        filteredMetrics[1].Name(),
        filteredMetrics[2].Name(),
    }
    assert.Equal(t, expectedNames, actualNames)
}

func TestRunningProcessor_WithNameDrop(t *testing.T) {
    inmetrics := []telegraf.Metric{
        testutil.TestMetric(1, "foo"),
        testutil.TestMetric(1, "bar"),
        testutil.TestMetric(1, "baz"),
    }

    expectedNames := []string{
        "foo",
        "baz",
        "baz",
    }
    rfp := NewTestRunningProcessor()

    rfp.Config.Filter.NameDrop = []string{"foo"}
    assert.NoError(t, rfp.Config.Filter.Compile())

    filteredMetrics := rfp.Apply(inmetrics...)

    actualNames := []string{
        filteredMetrics[0].Name(),
        filteredMetrics[1].Name(),
        filteredMetrics[2].Name(),
    }
    assert.Equal(t, expectedNames, actualNames)
}

func TestRunningProcessor_DroppedMetric(t *testing.T) {
    inmetrics := []telegraf.Metric{
        testutil.TestMetric(1, "dropme"),
        testutil.TestMetric(1, "foo"),
        testutil.TestMetric(1, "bar"),
    }

    expectedNames := []string{
        "fuz",
        "baz",
    }
    rfp := NewTestRunningProcessor()
    filteredMetrics := rfp.Apply(inmetrics...)

    actualNames := []string{
        filteredMetrics[0].Name(),
        filteredMetrics[1].Name(),
    }
    assert.Equal(t, expectedNames, actualNames)
}
86 internal/templating/engine.go (Normal file)
@@ -0,0 +1,86 @@
package templating

import (
    "sort"
    "strings"
)

const (
    // DefaultSeparator is the default separation character to use when separating template parts.
    DefaultSeparator = "."
)

// Engine uses a Matcher to retrieve the appropriate template and applies the template
// to the input string.
type Engine struct {
    joiner  string
    matcher *matcher
}

// Apply extracts the template fields from the given line and returns the measurement
// name, tags and field name.
func (e *Engine) Apply(line string) (string, map[string]string, string, error) {
    return e.matcher.match(line).Apply(line, e.joiner)
}

// NewEngine creates a new templating engine.
func NewEngine(joiner string, defaultTemplate *Template, templates []string) (*Engine, error) {
    engine := Engine{
        joiner:  joiner,
        matcher: newMatcher(defaultTemplate),
    }
    templateSpecs := parseTemplateSpecs(templates)

    for _, templateSpec := range templateSpecs {
        if err := engine.matcher.addSpec(templateSpec); err != nil {
            return nil, err
        }
    }

    return &engine, nil
}

func parseTemplateSpecs(templates []string) templateSpecs {
    tmplts := templateSpecs{}
    for _, pattern := range templates {
        tmplt := templateSpec{
            separator: DefaultSeparator,
        }

        // Format is [separator] [filter] <template> [tag1=value1,tag2=value2]
        parts := strings.Fields(pattern)
        partsLength := len(parts)
        if partsLength < 1 {
            // ignore
            continue
        }
        if partsLength == 1 {
            tmplt.template = pattern
        } else if partsLength == 4 {
            tmplt.separator = parts[0]
            tmplt.filter = parts[1]
            tmplt.template = parts[2]
            tmplt.tagstring = parts[3]
        } else {
            hasTagstring := strings.Contains(parts[partsLength-1], "=")
            if hasTagstring {
                tmplt.tagstring = parts[partsLength-1]
                tmplt.template = parts[partsLength-2]
                if partsLength == 3 {
                    tmplt.filter = parts[0]
                }
            } else {
                tmplt.template = parts[partsLength-1]
                if partsLength == 2 {
                    tmplt.filter = parts[0]
                } else { // length == 3
                    tmplt.separator = parts[0]
                    tmplt.filter = parts[1]
                }
            }
        }
        tmplts = append(tmplts, tmplt)
    }
    sort.Sort(tmplts)
    return tmplts
}
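A short sketch of the spec format parseTemplateSpecs accepts (hypothetical helper; the spec strings follow the "[separator] [filter] <template> [tags]" format documented above):

// Hypothetical helper: builds an engine with a catch-all default template
// plus two filtered specs.
func exampleEngine() (*Engine, error) {
    defaultTmpl, err := NewDefaultTemplateWithPattern("measurement*")
    if err != nil {
        return nil, err
    }
    return NewEngine(".", defaultTmpl, []string{
        "servers.* host.measurement*",         // filter + template
        "stats.* host.measurement* region=us", // filter + template + default tag
    })
}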
58 internal/templating/matcher.go (Normal file)
@@ -0,0 +1,58 @@
package templating

import (
    "strings"
)

// matcher determines which template should be applied to a given metric
// based on a filter tree.
type matcher struct {
    root            *node
    defaultTemplate *Template
}

// newMatcher creates a new matcher.
func newMatcher(defaultTemplate *Template) *matcher {
    return &matcher{
        root:            &node{},
        defaultTemplate: defaultTemplate,
    }
}

func (m *matcher) addSpec(tmplt templateSpec) error {
    // Parse out the default tags specific to this template.
    tags := map[string]string{}
    if tmplt.tagstring != "" {
        for _, kv := range strings.Split(tmplt.tagstring, ",") {
            parts := strings.Split(kv, "=")
            tags[parts[0]] = parts[1]
        }
    }

    tmpl, err := NewTemplate(tmplt.separator, tmplt.template, tags)
    if err != nil {
        return err
    }
    m.add(tmplt.filter, tmpl)
    return nil
}

// add inserts the template in the filter tree based on the given filter.
func (m *matcher) add(filter string, template *Template) {
    if filter == "" {
        m.defaultTemplate = template
        m.root.separator = template.separator
        return
    }
    m.root.insert(filter, template)
}

// match returns the template that matches the given measurement line.
// If no template matches, the default template is returned.
func (m *matcher) match(line string) *Template {
    tmpl := m.root.search(line)
    if tmpl != nil {
        return tmpl
    }
    return m.defaultTemplate
}
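For illustration, a hedged sketch of addSpec in use (hypothetical helper): the tagstring "dc=us-east" is parsed into the template's default tags.

// Hypothetical helper: registers a spec whose tagstring becomes the
// template's default tags ({"dc": "us-east"}).
func exampleAddSpec(m *matcher) error {
    return m.addSpec(templateSpec{
        separator: DefaultSeparator,
        filter:    "servers.*",
        template:  "host.measurement*",
        tagstring: "dc=us-east",
    })
}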
122 internal/templating/node.go (Normal file)
@@ -0,0 +1,122 @@
package templating

import (
    "sort"
    "strings"
)

// node is an item in a sorted k-ary tree of filter parts. Each child is sorted by its part value.
// The special value "*" is always sorted last.
type node struct {
    separator string
    value     string
    children  nodes
    template  *Template
}

// insert inserts the given template into the tree. The filter string is split
// on the template separator and each part is used as the path in the tree.
func (n *node) insert(filter string, template *Template) {
    n.separator = template.separator
    n.recursiveInsert(strings.Split(filter, n.separator), template)
}

// recursiveInsert does the actual recursive insertion.
func (n *node) recursiveInsert(values []string, template *Template) {
    // At the end of the path, set the template.
    if len(values) == 0 {
        n.template = template
        return
    }

    // See if the current element already exists in the tree. If so, insert the
    // template into that sub-tree.
    for _, v := range n.children {
        if v.value == values[0] {
            v.recursiveInsert(values[1:], template)
            return
        }
    }

    // New element; add it to the tree and sort the children.
    newNode := &node{value: values[0]}
    n.children = append(n.children, newNode)
    sort.Sort(&n.children)

    // Now insert the rest of the path into the new element.
    newNode.recursiveInsert(values[1:], template)
}

// search searches for a template matching the input string.
func (n *node) search(line string) *Template {
    separator := n.separator
    return n.recursiveSearch(strings.Split(line, separator))
}

// recursiveSearch performs the actual recursive search.
func (n *node) recursiveSearch(lineParts []string) *Template {
    // Nothing to search
    if len(lineParts) == 0 || len(n.children) == 0 {
        return n.template
    }

    // If the last element is a wildcard, don't include it in this search since
    // it's sorted to the end, but lexicographically it would not always be,
    // and sort.Search assumes the slice is sorted.
    length := len(n.children)
    if n.children[length-1].value == "*" {
        length--
    }

    // Find the index of the child with an exact match.
    i := sort.Search(length, func(i int) bool {
        return n.children[i].value >= lineParts[0]
    })

    // Found an exact match, so search that child sub-tree.
    if i < len(n.children) && n.children[i].value == lineParts[0] {
        return n.children[i].recursiveSearch(lineParts[1:])
    }
    // Not an exact match; see if we have a wildcard child to search.
    if n.children[len(n.children)-1].value == "*" {
        return n.children[len(n.children)-1].recursiveSearch(lineParts[1:])
    }
    return n.template
}

// nodes is simply a slice of nodes implementing the sorting interface.
type nodes []*node

// Less returns a boolean indicating whether the filter at position j
// is less than the filter at position k. Filters are ordered by string
// comparison of each component part. A wildcard value "*" is never
// less than a non-wildcard value.
//
// For example, the filters:
//     "*.*"
//     "servers.*"
//     "servers.localhost"
//     "*.localhost"
//
// would be sorted as:
//     "servers.localhost"
//     "servers.*"
//     "*.localhost"
//     "*.*"
func (n *nodes) Less(j, k int) bool {
    if (*n)[j].value == "*" && (*n)[k].value != "*" {
        return false
    }

    if (*n)[j].value != "*" && (*n)[k].value == "*" {
        return true
    }

    return (*n)[j].value < (*n)[k].value
}

// Swap swaps two elements of the slice.
func (n *nodes) Swap(i, j int) { (*n)[i], (*n)[j] = (*n)[j], (*n)[i] }

// Len returns the length of the slice.
func (n *nodes) Len() int { return len(*n) }
148 internal/templating/template.go (Normal file)
@@ -0,0 +1,148 @@
package templating

import (
    "fmt"
    "strings"
)

// Template represents a pattern and tags to map a metric string to an InfluxDB point.
type Template struct {
    separator         string
    parts             []string
    defaultTags       map[string]string
    greedyField       bool
    greedyMeasurement bool
}

// Apply extracts the template fields from the given line and returns the measurement
// name, tags and field name.
func (t *Template) Apply(line string, joiner string) (string, map[string]string, string, error) {
    fields := strings.Split(line, t.separator)
    var (
        measurement []string
        tags        = make(map[string][]string)
        field       []string
    )

    // Set any default tags.
    for k, v := range t.defaultTags {
        tags[k] = append(tags[k], v)
    }

    // Check whether an invalid combination of greedy parts has been specified
    // in the template:
    for _, tag := range t.parts {
        if tag == "measurement*" {
            t.greedyMeasurement = true
        } else if tag == "field*" {
            t.greedyField = true
        }
    }
    if t.greedyField && t.greedyMeasurement {
        return "", nil, "",
            fmt.Errorf("either 'field*' or 'measurement*' can be used in each "+
                "template (but not both together): %q",
                strings.Join(t.parts, joiner))
    }

    for i, tag := range t.parts {
        if i >= len(fields) {
            continue
        }
        if tag == "" {
            continue
        }

        switch tag {
        case "measurement":
            measurement = append(measurement, fields[i])
        case "field":
            field = append(field, fields[i])
        case "field*":
            field = append(field, fields[i:]...)
        case "measurement*":
            measurement = append(measurement, fields[i:]...)
        default:
            tags[tag] = append(tags[tag], fields[i])
        }
    }

    // Convert to a map of strings.
    outtags := make(map[string]string)
    for k, values := range tags {
        outtags[k] = strings.Join(values, joiner)
    }

    return strings.Join(measurement, joiner), outtags, strings.Join(field, joiner), nil
}

func NewDefaultTemplateWithPattern(pattern string) (*Template, error) {
    return NewTemplate(DefaultSeparator, pattern, nil)
}

// NewTemplate returns a new template, ensuring it has a measurement
// specified.
func NewTemplate(separator string, pattern string, defaultTags map[string]string) (*Template, error) {
    parts := strings.Split(pattern, separator)
    hasMeasurement := false
    template := &Template{
        separator:   separator,
        parts:       parts,
        defaultTags: defaultTags,
    }

    for _, part := range parts {
        if strings.HasPrefix(part, "measurement") {
            hasMeasurement = true
        }
        if part == "measurement*" {
            template.greedyMeasurement = true
        } else if part == "field*" {
            template.greedyField = true
        }
    }

    if !hasMeasurement {
        return nil, fmt.Errorf("no measurement specified for template: %q", pattern)
    }

    return template, nil
}

// templateSpec is a template string split into its constituent parts.
type templateSpec struct {
    separator string
    filter    string
    template  string
    tagstring string
}

// templateSpecs is simply a slice of template specs implementing the sorting interface.
type templateSpecs []templateSpec

// Less reports whether the element with
// index j should sort before the element with index k.
func (e templateSpecs) Less(j, k int) bool {
    if len(e[j].filter) == 0 && len(e[k].filter) == 0 {
        jlength := len(strings.Split(e[j].template, e[j].separator))
        klength := len(strings.Split(e[k].template, e[k].separator))
        return jlength < klength
    }
    if len(e[j].filter) == 0 {
        return true
    }
    if len(e[k].filter) == 0 {
        return false
    }

    jlength := len(strings.Split(e[j].template, e[j].separator))
    klength := len(strings.Split(e[k].template, e[k].separator))
    return jlength < klength
}

// Swap swaps the elements with indexes i and j.
func (e templateSpecs) Swap(i, j int) { e[i], e[j] = e[j], e[i] }

// Len is the number of elements in the collection.
func (e templateSpecs) Len() int { return len(e) }
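A worked example of Template.Apply (hypothetical helper): repeated tag parts are joined with the joiner, and "measurement*" greedily consumes the remaining line parts.

// Hypothetical helper:
// "dc1.web01.cpu.loadavg" with template "host.host.measurement*" yields
// measurement "cpu.loadavg", tags {"host": "dc1.web01"}, and an empty field.
func exampleApply() (string, map[string]string, string, error) {
    tmpl, err := NewTemplate(".", "host.host.measurement*", nil)
    if err != nil {
        return "", nil, "", err
    }
    return tmpl.Apply("dc1.web01.cpu.loadavg", ".")
}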
130 internal/tls/config.go (Normal file)
@@ -0,0 +1,130 @@
package tls

import (
    "crypto/tls"
    "crypto/x509"
    "fmt"
    "io/ioutil"
)

// ClientConfig represents the standard client TLS config.
type ClientConfig struct {
    TLSCA              string `toml:"tls_ca"`
    TLSCert            string `toml:"tls_cert"`
    TLSKey             string `toml:"tls_key"`
    InsecureSkipVerify bool   `toml:"insecure_skip_verify"`

    // Deprecated in 1.7; use the TLS variables above
    SSLCA   string `toml:"ssl_ca"`
    SSLCert string `toml:"ssl_cert"`
    SSLKey  string `toml:"ssl_key"`
}

// ServerConfig represents the standard server TLS config.
type ServerConfig struct {
    TLSCert           string   `toml:"tls_cert"`
    TLSKey            string   `toml:"tls_key"`
    TLSAllowedCACerts []string `toml:"tls_allowed_cacerts"`
}

// TLSConfig returns a tls.Config, which may be nil without error if TLS is not
// configured.
func (c *ClientConfig) TLSConfig() (*tls.Config, error) {
    // Support the deprecated variable names.
    if c.TLSCA == "" && c.SSLCA != "" {
        c.TLSCA = c.SSLCA
    }
    if c.TLSCert == "" && c.SSLCert != "" {
        c.TLSCert = c.SSLCert
    }
    if c.TLSKey == "" && c.SSLKey != "" {
        c.TLSKey = c.SSLKey
    }

    // TODO: return a default tls.Config; plugins should not call this if they
    // don't want TLS. This will require using another option to determine. In
    // the case of an HTTP plugin, you could use `https`. Other plugins may
    // need a dedicated option `TLSEnable`.
    if c.TLSCA == "" && c.TLSKey == "" && c.TLSCert == "" && !c.InsecureSkipVerify {
        return nil, nil
    }

    tlsConfig := &tls.Config{
        InsecureSkipVerify: c.InsecureSkipVerify,
        Renegotiation:      tls.RenegotiateNever,
    }

    if c.TLSCA != "" {
        pool, err := makeCertPool([]string{c.TLSCA})
        if err != nil {
            return nil, err
        }
        tlsConfig.RootCAs = pool
    }

    if c.TLSCert != "" && c.TLSKey != "" {
        err := loadCertificate(tlsConfig, c.TLSCert, c.TLSKey)
        if err != nil {
            return nil, err
        }
    }

    return tlsConfig, nil
}

// TLSConfig returns a tls.Config, which may be nil without error if TLS is not
// configured.
func (c *ServerConfig) TLSConfig() (*tls.Config, error) {
    if c.TLSCert == "" && c.TLSKey == "" && len(c.TLSAllowedCACerts) == 0 {
        return nil, nil
    }

    tlsConfig := &tls.Config{}

    if len(c.TLSAllowedCACerts) != 0 {
        pool, err := makeCertPool(c.TLSAllowedCACerts)
        if err != nil {
            return nil, err
        }
        tlsConfig.ClientCAs = pool
        tlsConfig.ClientAuth = tls.RequireAndVerifyClientCert
    }

    if c.TLSCert != "" && c.TLSKey != "" {
        err := loadCertificate(tlsConfig, c.TLSCert, c.TLSKey)
        if err != nil {
            return nil, err
        }
    }

    return tlsConfig, nil
}

func makeCertPool(certFiles []string) (*x509.CertPool, error) {
    pool := x509.NewCertPool()
    for _, certFile := range certFiles {
        pem, err := ioutil.ReadFile(certFile)
        if err != nil {
            return nil, fmt.Errorf(
                "could not read certificate %q: %v", certFile, err)
        }
        ok := pool.AppendCertsFromPEM(pem)
        if !ok {
            // err is nil at this point; report the parse failure directly.
            return nil, fmt.Errorf(
                "could not parse any PEM certificates from %q", certFile)
        }
    }
    return pool, nil
}

func loadCertificate(config *tls.Config, certFile, keyFile string) error {
    cert, err := tls.LoadX509KeyPair(certFile, keyFile)
    if err != nil {
        return fmt.Errorf(
            "could not load keypair %s:%s: %v", certFile, keyFile, err)
    }

    config.Certificates = []tls.Certificate{cert}
    config.BuildNameToCertificate()
    return nil
}
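A hedged sketch of the intended plugin-side usage (the helper and its net/http and time imports are illustrative, not part of this change): embed ClientConfig in a plugin's config, then hand the resulting *tls.Config, which may be nil for plain connections, to the transport.

// Hypothetical helper: builds an HTTP client from a ClientConfig.
func exampleHTTPClient(cc ClientConfig) (*http.Client, error) {
    tlsCfg, err := cc.TLSConfig() // nil config means TLS was not configured
    if err != nil {
        return nil, err
    }
    return &http.Client{
        Transport: &http.Transport{TLSClientConfig: tlsCfg},
        Timeout:   5 * time.Second,
    }, nil
}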
226 internal/tls/config_test.go (Normal file)
@@ -0,0 +1,226 @@
package tls_test

import (
    "net/http"
    "net/http/httptest"
    "testing"
    "time"

    "github.com/influxdata/telegraf/internal/tls"
    "github.com/influxdata/telegraf/testutil"
    "github.com/stretchr/testify/require"
)

var pki = testutil.NewPKI("../../testutil/pki")

func TestClientConfig(t *testing.T) {
    tests := []struct {
        name   string
        client tls.ClientConfig
        expNil bool
        expErr bool
    }{
        {
            name:   "unset",
            client: tls.ClientConfig{},
            expNil: true,
        },
        {
            name: "success",
            client: tls.ClientConfig{
                TLSCA:   pki.CACertPath(),
                TLSCert: pki.ClientCertPath(),
                TLSKey:  pki.ClientKeyPath(),
            },
        },
        {
            name: "invalid ca",
            client: tls.ClientConfig{
                TLSCA:   pki.ClientKeyPath(),
                TLSCert: pki.ClientCertPath(),
                TLSKey:  pki.ClientKeyPath(),
            },
            expNil: true,
            expErr: true,
        },
        {
            name: "missing ca is okay",
            client: tls.ClientConfig{
                TLSCert: pki.ClientCertPath(),
                TLSKey:  pki.ClientKeyPath(),
            },
        },
        {
            name: "invalid cert",
            client: tls.ClientConfig{
                TLSCA:   pki.CACertPath(),
                TLSCert: pki.ClientKeyPath(),
                TLSKey:  pki.ClientKeyPath(),
            },
            expNil: true,
            expErr: true,
        },
        {
            name: "missing cert skips client keypair",
            client: tls.ClientConfig{
                TLSCA:  pki.CACertPath(),
                TLSKey: pki.ClientKeyPath(),
            },
            expNil: false,
            expErr: false,
        },
        {
            name: "missing key skips client keypair",
            client: tls.ClientConfig{
                TLSCA:   pki.CACertPath(),
                TLSCert: pki.ClientCertPath(),
            },
            expNil: false,
            expErr: false,
        },
        {
            name: "support deprecated ssl field names",
            client: tls.ClientConfig{
                SSLCA:   pki.CACertPath(),
                SSLCert: pki.ClientCertPath(),
                SSLKey:  pki.ClientKeyPath(),
            },
        },
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            tlsConfig, err := tt.client.TLSConfig()
            if !tt.expNil {
                require.NotNil(t, tlsConfig)
            } else {
                require.Nil(t, tlsConfig)
            }

            if !tt.expErr {
                require.NoError(t, err)
            } else {
                require.Error(t, err)
            }
        })
    }
}

func TestServerConfig(t *testing.T) {
    tests := []struct {
        name   string
        server tls.ServerConfig
        expNil bool
        expErr bool
    }{
        {
            name:   "unset",
            server: tls.ServerConfig{},
            expNil: true,
        },
        {
            name: "success",
            server: tls.ServerConfig{
                TLSCert:           pki.ServerCertPath(),
                TLSKey:            pki.ServerKeyPath(),
                TLSAllowedCACerts: []string{pki.CACertPath()},
            },
        },
        {
            name: "invalid ca",
            server: tls.ServerConfig{
                TLSCert:           pki.ServerCertPath(),
                TLSKey:            pki.ServerKeyPath(),
                TLSAllowedCACerts: []string{pki.ServerKeyPath()},
            },
            expNil: true,
            expErr: true,
        },
        {
            // Cert and key alone yield a config with no client verification,
            // matching the implementation above.
            name: "missing allowed ca is okay",
            server: tls.ServerConfig{
                TLSCert: pki.ServerCertPath(),
                TLSKey:  pki.ServerKeyPath(),
            },
            expNil: false,
            expErr: false,
        },
        {
            name: "invalid cert",
            server: tls.ServerConfig{
                TLSCert:           pki.ServerKeyPath(),
                TLSKey:            pki.ServerKeyPath(),
                TLSAllowedCACerts: []string{pki.CACertPath()},
            },
            expNil: true,
            expErr: true,
        },
        {
            name: "missing cert",
            server: tls.ServerConfig{
                TLSKey:            pki.ServerKeyPath(),
                TLSAllowedCACerts: []string{pki.CACertPath()},
            },
            expNil: true,
            expErr: true,
        },
        {
            name: "missing key",
            server: tls.ServerConfig{
                TLSCert:           pki.ServerCertPath(),
                TLSAllowedCACerts: []string{pki.CACertPath()},
            },
            expNil: true,
            expErr: true,
        },
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            tlsConfig, err := tt.server.TLSConfig()
            if !tt.expNil {
                require.NotNil(t, tlsConfig)
            }
            if !tt.expErr {
                require.NoError(t, err)
            }
        })
    }
}

func TestConnect(t *testing.T) {
    clientConfig := tls.ClientConfig{
        TLSCA:   pki.CACertPath(),
        TLSCert: pki.ClientCertPath(),
        TLSKey:  pki.ClientKeyPath(),
    }

    serverConfig := tls.ServerConfig{
        TLSCert:           pki.ServerCertPath(),
        TLSKey:            pki.ServerKeyPath(),
        TLSAllowedCACerts: []string{pki.CACertPath()},
    }

    serverTLSConfig, err := serverConfig.TLSConfig()
    require.NoError(t, err)

    ts := httptest.NewUnstartedServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.WriteHeader(http.StatusOK)
    }))
    ts.TLS = serverTLSConfig

    ts.StartTLS()
    defer ts.Close()

    clientTLSConfig, err := clientConfig.TLSConfig()
    require.NoError(t, err)

    client := http.Client{
        Transport: &http.Transport{
            TLSClientConfig: clientTLSConfig,
        },
        Timeout: 10 * time.Second,
    }

    resp, err := client.Get(ts.URL)
    require.NoError(t, err)
    require.Equal(t, 200, resp.StatusCode)
}
45 internal/usage.go (Normal file)
@@ -0,0 +1,45 @@
// +build !windows

package internal

const Usage = `Telegraf, The plugin-driven server agent for collecting and reporting metrics.

Usage:

  telegraf [commands|flags]

The commands & flags are:

  config              print out full sample configuration to stdout
  version             print the version to stdout

  --config <file>     configuration file to load
  --test              gather metrics once, print them to stdout, and exit
  --config-directory  directory containing additional *.conf files
  --input-filter      filter the input plugins to enable, separator is :
  --output-filter     filter the output plugins to enable, separator is :
  --usage             print usage for a plugin, ie, 'telegraf --usage mysql'
  --debug             print metrics as they're generated to stdout
  --pprof-addr        pprof address to listen on, format: localhost:6060 or :6060
  --quiet             run in quiet mode

Examples:

  # generate a telegraf config file:
  telegraf config > telegraf.conf

  # generate config with only cpu input & influxdb output plugins defined
  telegraf --input-filter cpu --output-filter influxdb config

  # run a single telegraf collection, outputting metrics to stdout
  telegraf --config telegraf.conf --test

  # run telegraf with all plugins defined in config file
  telegraf --config telegraf.conf

  # run telegraf, enabling the cpu & memory input, and influxdb output plugins
  telegraf --config telegraf.conf --input-filter cpu:mem --output-filter influxdb

  # run telegraf with pprof
  telegraf --config telegraf.conf --pprof-addr localhost:6060
`
54 internal/usage_windows.go (Normal file)
@@ -0,0 +1,54 @@
// +build windows

package internal

const Usage = `Telegraf, The plugin-driven server agent for collecting and reporting metrics.

Usage:

  telegraf [commands|flags]

The commands & flags are:

  config              print out full sample configuration to stdout
  version             print the version to stdout

  --config <file>     configuration file to load
  --test              gather metrics once, print them to stdout, and exit
  --config-directory  directory containing additional *.conf files
  --input-filter      filter the input plugins to enable, separator is :
  --output-filter     filter the output plugins to enable, separator is :
  --usage             print usage for a plugin, ie, 'telegraf --usage mysql'
  --debug             print metrics as they're generated to stdout
  --pprof-addr        pprof address to listen on, format: localhost:6060 or :6060
  --quiet             run in quiet mode

  --console           run as console application
  --service           operate on the service, one of: install, uninstall, start, stop

Examples:

  # generate a telegraf config file:
  telegraf config > telegraf.conf

  # generate config with only cpu input & influxdb output plugins defined
  telegraf --input-filter cpu --output-filter influxdb config

  # run a single telegraf collection, outputting metrics to stdout
  telegraf --config telegraf.conf --test

  # run telegraf with all plugins defined in config file
  telegraf --config telegraf.conf

  # run telegraf, enabling the cpu & memory input, and influxdb output plugins
  telegraf --config telegraf.conf --input-filter cpu:mem --output-filter influxdb

  # run telegraf with pprof
  telegraf --config telegraf.conf --pprof-addr localhost:6060

  # run telegraf without service controller
  telegraf --console --config "C:\Program Files\Telegraf\telegraf.conf"

  # install telegraf service
  telegraf --service install --config "C:\Program Files\Telegraf\telegraf.conf"
`
69 logger/logger.go (Normal file)
@@ -0,0 +1,69 @@
package logger

import (
    "io"
    "log"
    "os"
    "regexp"
    "time"

    "github.com/influxdata/wlog"
)

var prefixRegex = regexp.MustCompile("^[DIWE]!")

// newTelegrafWriter returns a logging-wrapped writer.
func newTelegrafWriter(w io.Writer) io.Writer {
    return &telegrafLog{
        writer: wlog.NewWriter(w),
    }
}

type telegrafLog struct {
    writer io.Writer
}

func (t *telegrafLog) Write(b []byte) (n int, err error) {
    var line []byte
    if !prefixRegex.Match(b) {
        line = append([]byte(time.Now().UTC().Format(time.RFC3339)+" I! "), b...)
    } else {
        line = append([]byte(time.Now().UTC().Format(time.RFC3339)+" "), b...)
    }
    return t.writer.Write(line)
}

// SetupLogging configures the logging output.
// debug will set the log level to DEBUG.
// quiet will set the log level to ERROR.
// logfile will direct the logging output to a file. An empty string is
// interpreted as stderr. If there is an error opening the file the
// logger will fall back to stderr.
func SetupLogging(debug, quiet bool, logfile string) {
    log.SetFlags(0)
    if debug {
        wlog.SetLevel(wlog.DEBUG)
    }
    if quiet {
        wlog.SetLevel(wlog.ERROR)
    }

    var oFile *os.File
    if logfile != "" {
        if _, err := os.Stat(logfile); os.IsNotExist(err) {
            if oFile, err = os.Create(logfile); err != nil {
                log.Printf("E! Unable to create %s (%s), using stderr", logfile, err)
                oFile = os.Stderr
            }
        } else {
            if oFile, err = os.OpenFile(logfile, os.O_APPEND|os.O_WRONLY, os.ModeAppend); err != nil {
                log.Printf("E! Unable to append to %s (%s), using stderr", logfile, err)
                oFile = os.Stderr
            }
        }
    } else {
        oFile = os.Stderr
    }

    log.SetOutput(newTelegrafWriter(oFile))
}
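A brief usage sketch (hypothetical helper; the log path is illustrative): the single-letter prefixes select the wlog level, and unprefixed messages default to info.

// Hypothetical helper, not part of this change.
func exampleLogging() {
    SetupLogging(false, false, "/tmp/telegraf-example.log")
    log.Printf("I! agent started")               // info
    log.Printf("D! dropped unless debug is set") // debug
    log.Printf("plain lines get an I! prefix")   // defaults to info
}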
75 logger/logger_test.go (Normal file)
@@ -0,0 +1,75 @@
package logger

import (
    "bytes"
    "io/ioutil"
    "log"
    "os"
    "testing"

    "github.com/stretchr/testify/assert"
)

func TestWriteLogToFile(t *testing.T) {
    tmpfile, err := ioutil.TempFile("", "")
    assert.NoError(t, err)
    defer func() { os.Remove(tmpfile.Name()) }()

    SetupLogging(false, false, tmpfile.Name())
    log.Printf("I! TEST")
    log.Printf("D! TEST") // <- should be ignored

    f, err := ioutil.ReadFile(tmpfile.Name())
    assert.NoError(t, err)
    assert.Equal(t, []byte("Z I! TEST\n"), f[19:])
}

func TestDebugWriteLogToFile(t *testing.T) {
    tmpfile, err := ioutil.TempFile("", "")
    assert.NoError(t, err)
    defer func() { os.Remove(tmpfile.Name()) }()

    SetupLogging(true, false, tmpfile.Name())
    log.Printf("D! TEST")

    f, err := ioutil.ReadFile(tmpfile.Name())
    assert.NoError(t, err)
    assert.Equal(t, []byte("Z D! TEST\n"), f[19:])
}

func TestErrorWriteLogToFile(t *testing.T) {
    tmpfile, err := ioutil.TempFile("", "")
    assert.NoError(t, err)
    defer func() { os.Remove(tmpfile.Name()) }()

    SetupLogging(false, true, tmpfile.Name())
    log.Printf("E! TEST")
    log.Printf("I! TEST") // <- should be ignored

    f, err := ioutil.ReadFile(tmpfile.Name())
    assert.NoError(t, err)
    assert.Equal(t, []byte("Z E! TEST\n"), f[19:])
}

func TestAddDefaultLogLevel(t *testing.T) {
    tmpfile, err := ioutil.TempFile("", "")
    assert.NoError(t, err)
    defer func() { os.Remove(tmpfile.Name()) }()

    SetupLogging(true, false, tmpfile.Name())
    log.Printf("TEST")

    f, err := ioutil.ReadFile(tmpfile.Name())
    assert.NoError(t, err)
    assert.Equal(t, []byte("Z I! TEST\n"), f[19:])
}

func BenchmarkTelegrafLogWrite(b *testing.B) {
    var msg = []byte("test")
    var buf bytes.Buffer
    w := newTelegrafWriter(&buf)
    for i := 0; i < b.N; i++ {
        buf.Reset()
        w.Write(msg)
    }
}
68 metric.go (Normal file)
@@ -0,0 +1,68 @@
package telegraf

import (
    "time"
)

// ValueType is an enumeration of metric types that represent a simple value.
type ValueType int

// Possible values for the ValueType enum.
const (
    _ ValueType = iota
    Counter
    Gauge
    Untyped
    Summary
    Histogram
)

type Tag struct {
    Key   string
    Value string
}

type Field struct {
    Key   string
    Value interface{}
}

type Metric interface {
    // Data structure accessors
    Name() string
    Tags() map[string]string
    TagList() []*Tag
    Fields() map[string]interface{}
    FieldList() []*Field
    Time() time.Time
    Type() ValueType

    // Name functions
    SetName(name string)
    AddPrefix(prefix string)
    AddSuffix(suffix string)

    // Tag functions
    GetTag(key string) (string, bool)
    HasTag(key string) bool
    AddTag(key, value string)
    RemoveTag(key string)

    // Field functions
    GetField(key string) (interface{}, bool)
    HasField(key string) bool
    AddField(key string, value interface{})
    RemoveField(key string)

    SetTime(t time.Time)

    // HashID returns a unique identifier for the series.
    HashID() uint64

    // Copy returns a deep copy of the Metric.
    Copy() Metric

    // Mark the Metric as an aggregate.
    SetAggregate(bool)
    IsAggregate() bool
}
53 metric/builder.go (Normal file)
@@ -0,0 +1,53 @@
|
||||
package metric
|
||||
|
||||
import (
|
||||
"time"
|
||||
|
||||
"github.com/influxdata/telegraf"
|
||||
)
|
||||
|
||||
type TimeFunc func() time.Time
|
||||
|
||||
type Builder struct {
|
||||
TimeFunc
|
||||
TimePrecision time.Duration
|
||||
|
||||
*metric
|
||||
}
|
||||
|
||||
func NewBuilder() *Builder {
|
||||
b := &Builder{
|
||||
TimeFunc: time.Now,
|
||||
TimePrecision: 1 * time.Nanosecond,
|
||||
}
|
||||
b.Reset()
|
||||
return b
|
||||
}
|
||||
|
||||
func (b *Builder) SetName(name string) {
|
||||
b.name = name
|
||||
}
|
||||
|
||||
func (b *Builder) AddTag(key string, value string) {
|
||||
b.metric.AddTag(key, value)
|
||||
}
|
||||
|
||||
func (b *Builder) AddField(key string, value interface{}) {
|
||||
b.metric.AddField(key, value)
|
||||
}
|
||||
|
||||
func (b *Builder) SetTime(tm time.Time) {
|
||||
b.tm = tm
|
||||
}
|
||||
|
||||
func (b *Builder) Reset() {
|
||||
b.metric = &metric{}
|
||||
}
|
||||
|
||||
func (b *Builder) Metric() (telegraf.Metric, error) {
|
||||
if b.tm.IsZero() {
|
||||
b.tm = b.TimeFunc().Truncate(b.TimePrecision)
|
||||
}
|
||||
|
||||
return b.metric, nil
|
||||
}
|
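A minimal sketch of driving the builder, e.g. from a parser that reuses one `Builder` across many inputs (the field values here are illustrative). Note that `Metric()` hands out the builder's internal metric, so `Reset()` must be called before building the next one:

```go
package main

import (
	"fmt"

	"github.com/influxdata/telegraf/metric"
)

func main() {
	b := metric.NewBuilder()

	b.SetName("cpu")
	b.AddTag("host", "localhost")
	b.AddField("usage_idle", 99.0)

	// SetTime was not called, so Metric() stamps TimeFunc() (time.Now by
	// default), truncated to TimePrecision (1ns by default).
	m, err := b.Metric()
	if err != nil {
		panic(err)
	}
	fmt.Println(m)

	// Metric() returned the internal metric, so reset before reuse.
	b.Reset()
}
```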
||||
286
metric/metric.go
Normal file
@@ -0,0 +1,286 @@
|
||||
package metric
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"hash/fnv"
|
||||
"sort"
|
||||
"time"
|
||||
|
||||
"github.com/influxdata/telegraf"
|
||||
)
|
||||
|
||||
type metric struct {
|
||||
name string
|
||||
tags []*telegraf.Tag
|
||||
fields []*telegraf.Field
|
||||
tm time.Time
|
||||
|
||||
tp telegraf.ValueType
|
||||
aggregate bool
|
||||
}
|
||||
|
||||
func New(
|
||||
name string,
|
||||
tags map[string]string,
|
||||
fields map[string]interface{},
|
||||
tm time.Time,
|
||||
tp ...telegraf.ValueType,
|
||||
) (telegraf.Metric, error) {
|
||||
var vtype telegraf.ValueType
|
||||
if len(tp) > 0 {
|
||||
vtype = tp[0]
|
||||
} else {
|
||||
vtype = telegraf.Untyped
|
||||
}
|
||||
|
||||
m := &metric{
|
||||
name: name,
|
||||
tags: nil,
|
||||
fields: nil,
|
||||
tm: tm,
|
||||
tp: vtype,
|
||||
}
|
||||
|
||||
if len(tags) > 0 {
|
||||
m.tags = make([]*telegraf.Tag, 0, len(tags))
|
||||
for k, v := range tags {
|
||||
m.tags = append(m.tags,
|
||||
&telegraf.Tag{Key: k, Value: v})
|
||||
}
|
||||
sort.Slice(m.tags, func(i, j int) bool { return m.tags[i].Key < m.tags[j].Key })
|
||||
}
|
||||
|
||||
m.fields = make([]*telegraf.Field, 0, len(fields))
|
||||
for k, v := range fields {
|
||||
v := convertField(v)
|
||||
if v == nil {
|
||||
continue
|
||||
}
|
||||
m.AddField(k, v)
|
||||
}
|
||||
|
||||
return m, nil
|
||||
}
|
||||
|
||||
func (m *metric) String() string {
|
||||
return fmt.Sprintf("%s %v %v %d", m.name, m.Tags(), m.Fields(), m.tm.UnixNano())
|
||||
}
|
||||
|
||||
func (m *metric) Name() string {
|
||||
return m.name
|
||||
}
|
||||
|
||||
func (m *metric) Tags() map[string]string {
|
||||
tags := make(map[string]string, len(m.tags))
|
||||
for _, tag := range m.tags {
|
||||
tags[tag.Key] = tag.Value
|
||||
}
|
||||
return tags
|
||||
}
|
||||
|
||||
func (m *metric) TagList() []*telegraf.Tag {
|
||||
return m.tags
|
||||
}
|
||||
|
||||
func (m *metric) Fields() map[string]interface{} {
|
||||
fields := make(map[string]interface{}, len(m.fields))
|
||||
for _, field := range m.fields {
|
||||
fields[field.Key] = field.Value
|
||||
}
|
||||
|
||||
return fields
|
||||
}
|
||||
|
||||
func (m *metric) FieldList() []*telegraf.Field {
|
||||
return m.fields
|
||||
}
|
||||
|
||||
func (m *metric) Time() time.Time {
|
||||
return m.tm
|
||||
}
|
||||
|
||||
func (m *metric) Type() telegraf.ValueType {
|
||||
return m.tp
|
||||
}
|
||||
|
||||
func (m *metric) SetName(name string) {
|
||||
m.name = name
|
||||
}
|
||||
|
||||
func (m *metric) AddPrefix(prefix string) {
|
||||
m.name = prefix + m.name
|
||||
}
|
||||
|
||||
func (m *metric) AddSuffix(suffix string) {
|
||||
m.name = m.name + suffix
|
||||
}
|
||||
|
||||
// AddTag inserts or replaces a tag, keeping m.tags sorted by key.
func (m *metric) AddTag(key, value string) {
|
||||
for i, tag := range m.tags {
|
||||
if key > tag.Key {
|
||||
continue
|
||||
}
|
||||
|
||||
if key == tag.Key {
|
||||
tag.Value = value
|
||||
return
|
||||
}
|
||||
|
||||
m.tags = append(m.tags, nil)
|
||||
copy(m.tags[i+1:], m.tags[i:])
|
||||
m.tags[i] = &telegraf.Tag{Key: key, Value: value}
|
||||
return
|
||||
}
|
||||
|
||||
m.tags = append(m.tags, &telegraf.Tag{Key: key, Value: value})
|
||||
}
|
||||
|
||||
func (m *metric) HasTag(key string) bool {
|
||||
for _, tag := range m.tags {
|
||||
if tag.Key == key {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func (m *metric) GetTag(key string) (string, bool) {
|
||||
for _, tag := range m.tags {
|
||||
if tag.Key == key {
|
||||
return tag.Value, true
|
||||
}
|
||||
}
|
||||
return "", false
|
||||
}
|
||||
|
||||
func (m *metric) RemoveTag(key string) {
|
||||
for i, tag := range m.tags {
|
||||
if tag.Key == key {
|
||||
copy(m.tags[i:], m.tags[i+1:])
|
||||
m.tags[len(m.tags)-1] = nil
|
||||
m.tags = m.tags[:len(m.tags)-1]
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (m *metric) AddField(key string, value interface{}) {
|
||||
for i, field := range m.fields {
|
||||
if key == field.Key {
|
||||
m.fields[i] = &telegraf.Field{Key: key, Value: convertField(value)}
return // replaced in place; don't fall through and append a duplicate
|
||||
}
|
||||
}
|
||||
m.fields = append(m.fields, &telegraf.Field{Key: key, Value: convertField(value)})
|
||||
}
|
||||
|
||||
func (m *metric) HasField(key string) bool {
|
||||
for _, field := range m.fields {
|
||||
if field.Key == key {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func (m *metric) GetField(key string) (interface{}, bool) {
|
||||
for _, field := range m.fields {
|
||||
if field.Key == key {
|
||||
return field.Value, true
|
||||
}
|
||||
}
|
||||
return nil, false
|
||||
}
|
||||
|
||||
func (m *metric) RemoveField(key string) {
|
||||
for i, field := range m.fields {
|
||||
if field.Key == key {
|
||||
copy(m.fields[i:], m.fields[i+1:])
|
||||
m.fields[len(m.fields)-1] = nil
|
||||
m.fields = m.fields[:len(m.fields)-1]
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (m *metric) SetTime(t time.Time) {
|
||||
m.tm = t
|
||||
}
|
||||
|
||||
func (m *metric) Copy() telegraf.Metric {
|
||||
m2 := &metric{
|
||||
name: m.name,
|
||||
tags: make([]*telegraf.Tag, len(m.tags)),
|
||||
fields: make([]*telegraf.Field, len(m.fields)),
|
||||
tm: m.tm,
|
||||
tp: m.tp,
|
||||
aggregate: m.aggregate,
|
||||
}
|
||||
|
||||
for i, tag := range m.tags {
|
||||
m2.tags[i] = &telegraf.Tag{Key: tag.Key, Value: tag.Value} // deep copy so mutating the copy can't affect the original
|
||||
}
|
||||
|
||||
for i, field := range m.fields {
|
||||
m2.fields[i] = &telegraf.Field{Key: field.Key, Value: field.Value} // deep copy
|
||||
}
|
||||
return m2
|
||||
}
|
||||
|
||||
func (m *metric) SetAggregate(b bool) {
|
||||
m.aggregate = b
|
||||
}
|
||||
|
||||
func (m *metric) IsAggregate() bool {
|
||||
return m.aggregate
|
||||
}
|
||||
|
||||
// HashID hashes the series identity (name plus sorted tag key/value pairs) with FNV-1a;
// the "\n" separators keep different tag splits from colliding.
func (m *metric) HashID() uint64 {
|
||||
h := fnv.New64a()
|
||||
h.Write([]byte(m.name))
|
||||
h.Write([]byte("\n"))
|
||||
for _, tag := range m.tags {
|
||||
h.Write([]byte(tag.Key))
|
||||
h.Write([]byte("\n"))
|
||||
h.Write([]byte(tag.Value))
|
||||
h.Write([]byte("\n"))
|
||||
}
|
||||
return h.Sum64()
|
||||
}
|
||||
|
||||
// Convert field to a supported type or nil if unconvertible
|
||||
func convertField(v interface{}) interface{} {
|
||||
switch v := v.(type) {
|
||||
case float64:
|
||||
return v
|
||||
case int64:
|
||||
return v
|
||||
case string:
|
||||
return v
|
||||
case bool:
|
||||
return v
|
||||
case int:
|
||||
return int64(v)
|
||||
case uint:
|
||||
return uint64(v)
|
||||
case uint64:
|
||||
return v
|
||||
case []byte:
|
||||
return string(v)
|
||||
case int32:
|
||||
return int64(v)
|
||||
case int16:
|
||||
return int64(v)
|
||||
case int8:
|
||||
return int64(v)
|
||||
case uint32:
|
||||
return uint64(v)
|
||||
case uint16:
|
||||
return uint64(v)
|
||||
case uint8:
|
||||
return uint64(v)
|
||||
case float32:
|
||||
return float64(v)
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
}
|
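A quick sketch (written as a test in this package; the test name is hypothetical and not part of the original change set) of the normalization `New` performs through `convertField`: integers are widened to `int64`, byte slices become strings, and unconvertible values are silently dropped:

```go
package metric

import (
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

func TestConvertFieldSketch(t *testing.T) {
	m, err := New(
		"example",
		nil, // no tags
		map[string]interface{}{
			"a": int32(5),     // widened to int64
			"b": []byte("hi"), // converted to string
			"c": struct{}{},   // unconvertible: dropped by New
		},
		time.Now(),
	)
	require.NoError(t, err)

	require.Equal(t, int64(5), m.Fields()["a"])
	require.Equal(t, "hi", m.Fields()["b"])
	require.False(t, m.HasField("c"))
}
```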
||||
337
metric/metric_test.go
Normal file
@@ -0,0 +1,337 @@
|
||||
package metric
|
||||
|
||||
import (
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/influxdata/telegraf"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
func TestNewMetric(t *testing.T) {
|
||||
now := time.Now()
|
||||
|
||||
tags := map[string]string{
|
||||
"host": "localhost",
|
||||
"datacenter": "us-east-1",
|
||||
}
|
||||
fields := map[string]interface{}{
|
||||
"usage_idle": float64(99),
|
||||
"usage_busy": float64(1),
|
||||
}
|
||||
m, err := New("cpu", tags, fields, now)
|
||||
require.NoError(t, err)
|
||||
|
||||
require.Equal(t, "cpu", m.Name())
|
||||
require.Equal(t, tags, m.Tags())
|
||||
require.Equal(t, fields, m.Fields())
|
||||
require.Equal(t, 2, len(m.FieldList()))
|
||||
require.Equal(t, now, m.Time())
|
||||
}
|
||||
|
||||
func baseMetric() telegraf.Metric {
|
||||
tags := map[string]string{}
|
||||
fields := map[string]interface{}{
|
||||
"value": float64(1),
|
||||
}
|
||||
now := time.Now()
|
||||
|
||||
m, err := New("cpu", tags, fields, now)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
return m
|
||||
}
|
||||
|
||||
func TestHasTag(t *testing.T) {
|
||||
m := baseMetric()
|
||||
|
||||
require.False(t, m.HasTag("host"))
|
||||
m.AddTag("host", "localhost")
|
||||
require.True(t, m.HasTag("host"))
|
||||
m.RemoveTag("host")
|
||||
require.False(t, m.HasTag("host"))
|
||||
}
|
||||
|
||||
func TestAddTagOverwrites(t *testing.T) {
|
||||
m := baseMetric()
|
||||
|
||||
m.AddTag("host", "localhost")
|
||||
m.AddTag("host", "example.org")
|
||||
|
||||
value, ok := m.GetTag("host")
|
||||
require.True(t, ok)
|
||||
require.Equal(t, "example.org", value)
|
||||
require.Equal(t, 1, len(m.TagList()))
|
||||
}
|
||||
|
||||
func TestRemoveTagNoEffectOnMissingTags(t *testing.T) {
|
||||
m := baseMetric()
|
||||
|
||||
m.RemoveTag("foo")
|
||||
m.AddTag("a", "x")
|
||||
m.RemoveTag("foo")
|
||||
m.RemoveTag("bar")
|
||||
value, ok := m.GetTag("a")
|
||||
require.True(t, ok)
|
||||
require.Equal(t, "x", value)
|
||||
}
|
||||
|
||||
func TestGetTag(t *testing.T) {
|
||||
m := baseMetric()
|
||||
|
||||
value, ok := m.GetTag("host")
|
||||
require.False(t, ok)
|
||||
|
||||
m.AddTag("host", "localhost")
|
||||
|
||||
value, ok = m.GetTag("host")
|
||||
require.True(t, ok)
|
||||
require.Equal(t, "localhost", value)
|
||||
|
||||
m.RemoveTag("host")
|
||||
value, ok = m.GetTag("host")
|
||||
require.False(t, ok)
|
||||
}
|
||||
|
||||
func TestHasField(t *testing.T) {
|
||||
m := baseMetric()
|
||||
|
||||
require.False(t, m.HasField("x"))
|
||||
m.AddField("x", 42.0)
|
||||
require.True(t, m.HasField("x"))
|
||||
m.RemoveTag("x")
|
||||
require.False(t, m.HasTag("x"))
|
||||
}
|
||||
|
||||
func TestAddFieldOverwrites(t *testing.T) {
|
||||
m := baseMetric()
|
||||
|
||||
m.AddField("value", 1.0)
|
||||
m.AddField("value", 42.0)
|
||||
|
||||
value, ok := m.GetField("value")
|
||||
require.True(t, ok)
|
||||
require.Equal(t, 42.0, value)
|
||||
}
|
||||
|
||||
func TestAddFieldChangesType(t *testing.T) {
|
||||
m := baseMetric()
|
||||
|
||||
m.AddField("value", 1.0)
|
||||
m.AddField("value", "xyzzy")
|
||||
|
||||
value, ok := m.GetField("value")
|
||||
require.True(t, ok)
|
||||
require.Equal(t, "xyzzy", value)
|
||||
}
|
||||
|
||||
func TestRemoveFieldNoEffectOnMissingFields(t *testing.T) {
|
||||
m := baseMetric()
|
||||
|
||||
m.RemoveField("foo")
|
||||
m.AddField("a", "x")
|
||||
m.RemoveField("foo")
|
||||
m.RemoveField("bar")
|
||||
value, ok := m.GetField("a")
|
||||
require.True(t, ok)
|
||||
require.Equal(t, "x", value)
|
||||
}
|
||||
|
||||
func TestGetField(t *testing.T) {
|
||||
m := baseMetric()
|
||||
|
||||
value, ok := m.GetField("foo")
|
||||
require.False(t, ok)
|
||||
|
||||
m.AddField("foo", "bar")
|
||||
|
||||
value, ok = m.GetField("foo")
|
||||
require.True(t, ok)
|
||||
require.Equal(t, "bar", value)
|
||||
|
||||
m.RemoveTag("foo")
|
||||
value, ok = m.GetField("foo")
|
||||
require.False(t, ok)
|
||||
}
|
||||
|
||||
func TestTagList_Sorted(t *testing.T) {
|
||||
m := baseMetric()
|
||||
|
||||
m.AddTag("b", "y")
|
||||
m.AddTag("c", "z")
|
||||
m.AddTag("a", "x")
|
||||
|
||||
taglist := m.TagList()
|
||||
require.Equal(t, "a", taglist[0].Key)
|
||||
require.Equal(t, "b", taglist[1].Key)
|
||||
require.Equal(t, "c", taglist[2].Key)
|
||||
}
|
||||
|
||||
func TestEquals(t *testing.T) {
|
||||
now := time.Now()
|
||||
m1, err := New("cpu",
|
||||
map[string]string{
|
||||
"host": "localhost",
|
||||
},
|
||||
map[string]interface{}{
|
||||
"value": 42.0,
|
||||
},
|
||||
now,
|
||||
)
|
||||
require.NoError(t, err)
|
||||
|
||||
m2, err := New("cpu",
|
||||
map[string]string{
|
||||
"host": "localhost",
|
||||
},
|
||||
map[string]interface{}{
|
||||
"value": 42.0,
|
||||
},
|
||||
now,
|
||||
)
|
||||
require.NoError(t, err)
|
||||
|
||||
lhs := m1.(*metric)
|
||||
require.Equal(t, lhs, m2)
|
||||
|
||||
m3 := m2.Copy()
|
||||
require.Equal(t, lhs, m3)
|
||||
m3.AddTag("a", "x")
|
||||
require.NotEqual(t, lhs, m3)
|
||||
}
|
||||
|
||||
func TestHashID(t *testing.T) {
|
||||
m, _ := New(
|
||||
"cpu",
|
||||
map[string]string{
|
||||
"datacenter": "us-east-1",
|
||||
"mytag": "foo",
|
||||
"another": "tag",
|
||||
},
|
||||
map[string]interface{}{
|
||||
"value": float64(1),
|
||||
},
|
||||
time.Now(),
|
||||
)
|
||||
hash := m.HashID()
|
||||
|
||||
// adding a field doesn't change the hash:
|
||||
m.AddField("foo", int64(100))
|
||||
assert.Equal(t, hash, m.HashID())
|
||||
|
||||
// removing a non-existent tag doesn't change the hash:
|
||||
m.RemoveTag("no-op")
|
||||
assert.Equal(t, hash, m.HashID())
|
||||
|
||||
// adding a tag does change it:
|
||||
m.AddTag("foo", "bar")
|
||||
assert.NotEqual(t, hash, m.HashID())
|
||||
hash = m.HashID()
|
||||
|
||||
// removing a tag also changes it:
|
||||
m.RemoveTag("mytag")
|
||||
assert.NotEqual(t, hash, m.HashID())
|
||||
}
|
||||
|
||||
func TestHashID_Consistency(t *testing.T) {
|
||||
m, _ := New(
|
||||
"cpu",
|
||||
map[string]string{
|
||||
"datacenter": "us-east-1",
|
||||
"mytag": "foo",
|
||||
"another": "tag",
|
||||
},
|
||||
map[string]interface{}{
|
||||
"value": float64(1),
|
||||
},
|
||||
time.Now(),
|
||||
)
|
||||
hash := m.HashID()
|
||||
|
||||
m2, _ := New(
|
||||
"cpu",
|
||||
map[string]string{
|
||||
"datacenter": "us-east-1",
|
||||
"mytag": "foo",
|
||||
"another": "tag",
|
||||
},
|
||||
map[string]interface{}{
|
||||
"value": float64(1),
|
||||
},
|
||||
time.Now(),
|
||||
)
|
||||
assert.Equal(t, hash, m2.HashID())
|
||||
|
||||
m3 := m.Copy()
|
||||
assert.Equal(t, m2.HashID(), m3.HashID())
|
||||
}
|
||||
|
||||
func TestHashID_Delimiting(t *testing.T) {
|
||||
m1, _ := New(
|
||||
"cpu",
|
||||
map[string]string{
|
||||
"a": "x",
|
||||
"b": "y",
|
||||
"c": "z",
|
||||
},
|
||||
map[string]interface{}{
|
||||
"value": float64(1),
|
||||
},
|
||||
time.Now(),
|
||||
)
|
||||
m2, _ := New(
|
||||
"cpu",
|
||||
map[string]string{
|
||||
"a": "xbycz",
|
||||
},
|
||||
map[string]interface{}{
|
||||
"value": float64(1),
|
||||
},
|
||||
time.Now(),
|
||||
)
|
||||
assert.NotEqual(t, m1.HashID(), m2.HashID())
|
||||
}
|
||||
|
||||
func TestSetName(t *testing.T) {
|
||||
m := baseMetric()
|
||||
m.SetName("foo")
|
||||
require.Equal(t, "foo", m.Name())
|
||||
}
|
||||
|
||||
func TestAddPrefix(t *testing.T) {
|
||||
m := baseMetric()
|
||||
m.AddPrefix("foo_")
|
||||
require.Equal(t, "foo_cpu", m.Name())
|
||||
m.AddPrefix("foo_")
|
||||
require.Equal(t, "foo_foo_cpu", m.Name())
|
||||
}
|
||||
|
||||
func TestAddSuffix(t *testing.T) {
|
||||
m := baseMetric()
|
||||
m.AddSuffix("_foo")
|
||||
require.Equal(t, "cpu_foo", m.Name())
|
||||
m.AddSuffix("_foo")
|
||||
require.Equal(t, "cpu_foo_foo", m.Name())
|
||||
}
|
||||
|
||||
func TestValueType(t *testing.T) {
|
||||
now := time.Now()
|
||||
|
||||
tags := map[string]string{}
|
||||
fields := map[string]interface{}{
|
||||
"value": float64(42),
|
||||
}
|
||||
m, err := New("cpu", tags, fields, now, telegraf.Gauge)
|
||||
assert.NoError(t, err)
|
||||
|
||||
assert.Equal(t, telegraf.Gauge, m.Type())
|
||||
}
|
||||
|
||||
func TestCopyAggregate(t *testing.T) {
|
||||
m1 := baseMetric()
|
||||
m1.SetAggregate(true)
|
||||
m2 := m1.Copy()
|
||||
assert.True(t, m2.IsAggregate())
|
||||
}
|
||||
7
metric/uint_support.go
Normal file
@@ -0,0 +1,7 @@
|
||||
// +build uint64
|
||||
|
||||
package metric
|
||||
|
||||
func init() {
|
||||
EnableUintSupport()
|
||||
}
|
||||
37
output.go
Normal file
@@ -0,0 +1,37 @@
|
||||
package telegraf
|
||||
|
||||
type Output interface {
|
||||
// Connect to the Output
|
||||
Connect() error
|
||||
// Close any connections to the Output
|
||||
Close() error
|
||||
// Description returns a one-sentence description of the Output
|
||||
Description() string
|
||||
// SampleConfig returns the default configuration of the Output
|
||||
SampleConfig() string
|
||||
// Write takes in a group of points to be written to the Output
|
||||
Write(metrics []Metric) error
|
||||
}
|
||||
|
||||
type AggregatingOutput interface {
|
||||
Add(in Metric)
|
||||
Push() []Metric
|
||||
Reset()
|
||||
}
|
||||
|
||||
type ServiceOutput interface {
|
||||
// Connect to the Output
|
||||
Connect() error
|
||||
// Close any connections to the Output
|
||||
Close() error
|
||||
// Description returns a one-sentence description of the Output
|
||||
Description() string
|
||||
// SampleConfig returns the default configuration of the Output
|
||||
SampleConfig() string
|
||||
// Write takes in a group of points to be written to the Output
|
||||
Write(metrics []Metric) error
|
||||
// Start the "service" that will provide an Output
|
||||
Start() error
|
||||
// Stop the "service" that will provide an Output
|
||||
Stop()
|
||||
}
|
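For orientation, a minimal sketch of an `Output` implementation (the `discard` package and its behavior are hypothetical illustrations, not part of this change):

```go
package discard

import "github.com/influxdata/telegraf"

// Discard is a hypothetical output that accepts and drops every metric.
type Discard struct{}

func (d *Discard) Connect() error       { return nil }
func (d *Discard) Close() error         { return nil }
func (d *Discard) Description() string  { return "Send metrics to nowhere at all." }
func (d *Discard) SampleConfig() string { return "" }

// Write satisfies the Output interface by accepting and ignoring the batch.
func (d *Discard) Write(metrics []telegraf.Metric) error { return nil }
```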
||||
7
plugins/aggregators/all/all.go
Normal file
@@ -0,0 +1,7 @@
|
||||
package all
|
||||
|
||||
import (
|
||||
_ "github.com/influxdata/telegraf/plugins/aggregators/basicstats"
|
||||
_ "github.com/influxdata/telegraf/plugins/aggregators/histogram"
|
||||
_ "github.com/influxdata/telegraf/plugins/aggregators/minmax"
|
||||
)
|
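Each aggregator package registers itself from its `init` function via `aggregators.Add` (as `basicstats` and `histogram` do below), so the blank imports above are enough to make the plugins available. A sketch for a hypothetical plugin named `myagg`:

```go
package myagg // hypothetical aggregator package

import (
	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/aggregators"
)

// MyAgg is a do-nothing aggregator used only to illustrate registration.
type MyAgg struct{}

func (a *MyAgg) SampleConfig() string          { return "" }
func (a *MyAgg) Description() string           { return "A hypothetical aggregator." }
func (a *MyAgg) Add(in telegraf.Metric)        {}
func (a *MyAgg) Push(acc telegraf.Accumulator) {}
func (a *MyAgg) Reset()                        {}

func init() {
	// A blank import of this package triggers init, which registers the
	// constructor under the name used in [[aggregators.myagg]] config blocks.
	aggregators.Add("myagg", func() telegraf.Aggregator {
		return &MyAgg{}
	})
}
```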
||||
56
plugins/aggregators/basicstats/README.md
Normal file
@@ -0,0 +1,56 @@
|
||||
# BasicStats Aggregator Plugin
|
||||
|
||||
The BasicStats aggregator plugin gives count, max, min, mean, sum, s2 (variance), and stdev for a set of values,
|
||||
emitting the aggregate every `period` seconds.
|
||||
|
||||
### Configuration:
|
||||
|
||||
```toml
|
||||
# Keep the aggregate basicstats of each metric passing through.
|
||||
[[aggregators.basicstats]]
|
||||
|
||||
## General Aggregator Arguments:
|
||||
|
||||
## The period on which to flush & clear the aggregator.
|
||||
period = "30s"
|
||||
|
||||
## If true, the original metric will be dropped by the
|
||||
## aggregator and will not get sent to the output plugins.
|
||||
drop_original = false
|
||||
|
||||
## BasicStats Arguments:
|
||||
|
||||
## Configures which basic stats to push as fields
|
||||
stats = ["count","min","max","mean","stdev","s2","sum"]
|
||||
```
|
||||
|
||||
- stats
|
||||
- If not specified, then `count`, `min`, `max`, `mean`, `stdev`, and `s2` are aggregated and pushed as fields. `sum` is not aggregated by default to maintain backwards compatibility.
|
||||
- If set to an empty array, no stats are aggregated.
|
||||
|
||||
### Measurements & Fields:
|
||||
|
||||
- measurement1
|
||||
- field1_count
|
||||
- field1_max
|
||||
- field1_min
|
||||
- field1_mean
|
||||
- field1_sum
|
||||
- field1_s2 (variance)
|
||||
- field1_stdev (standard deviation)
|
||||
|
||||
### Tags:
|
||||
|
||||
No tags are applied by this aggregator.
|
||||
|
||||
### Example Output:
|
||||
|
||||
```
|
||||
$ telegraf --config telegraf.conf --quiet
|
||||
system,host=tars load1=1 1475583980000000000
|
||||
system,host=tars load1=1 1475583990000000000
|
||||
system,host=tars load1_count=2,load1_max=1,load1_min=1,load1_mean=1,load1_sum=2,load1_s2=0,load1_stdev=0 1475584010000000000
|
||||
system,host=tars load1=1 1475584020000000000
|
||||
system,host=tars load1=3 1475584030000000000
|
||||
system,host=tars load1_count=2,load1_max=3,load1_min=1,load1_mean=2,load1_sum=4,load1_s2=2,load1_stdev=1.414162 1475584010000000000
|
||||
```
|
||||
258
plugins/aggregators/basicstats/basicstats.go
Normal file
@@ -0,0 +1,258 @@
|
||||
package basicstats
|
||||
|
||||
import (
|
||||
"log"
|
||||
"math"
|
||||
|
||||
"github.com/influxdata/telegraf"
|
||||
"github.com/influxdata/telegraf/plugins/aggregators"
|
||||
)
|
||||
|
||||
type BasicStats struct {
|
||||
Stats []string `toml:"stats"`
|
||||
|
||||
cache map[uint64]aggregate
|
||||
statsConfig *configuredStats
|
||||
}
|
||||
|
||||
type configuredStats struct {
|
||||
count bool
|
||||
min bool
|
||||
max bool
|
||||
mean bool
|
||||
variance bool
|
||||
stdev bool
|
||||
sum bool
|
||||
}
|
||||
|
||||
func NewBasicStats() *BasicStats {
|
||||
mm := &BasicStats{}
|
||||
mm.Reset()
|
||||
return mm
|
||||
}
|
||||
|
||||
type aggregate struct {
|
||||
fields map[string]basicstats
|
||||
name string
|
||||
tags map[string]string
|
||||
}
|
||||
|
||||
type basicstats struct {
|
||||
count float64
|
||||
min float64
|
||||
max float64
|
||||
sum float64
|
||||
mean float64
|
||||
M2 float64 // intermediate value for variance/stdev (Welford's algorithm)
|
||||
}
|
||||
|
||||
var sampleConfig = `
|
||||
## General Aggregator Arguments:
|
||||
## The period on which to flush & clear the aggregator.
|
||||
period = "30s"
|
||||
## If true, the original metric will be dropped by the
|
||||
## aggregator and will not get sent to the output plugins.
|
||||
drop_original = false
|
||||
`
|
||||
|
||||
func (m *BasicStats) SampleConfig() string {
|
||||
return sampleConfig
|
||||
}
|
||||
|
||||
func (m *BasicStats) Description() string {
|
||||
return "Keep the aggregate basicstats of each metric passing through."
|
||||
}
|
||||
|
||||
func (m *BasicStats) Add(in telegraf.Metric) {
|
||||
id := in.HashID()
|
||||
if _, ok := m.cache[id]; !ok {
|
||||
// hit an uncached metric, create caches for first time:
|
||||
a := aggregate{
|
||||
name: in.Name(),
|
||||
tags: in.Tags(),
|
||||
fields: make(map[string]basicstats),
|
||||
}
|
||||
for k, v := range in.Fields() {
|
||||
if fv, ok := convert(v); ok {
|
||||
a.fields[k] = basicstats{
|
||||
count: 1,
|
||||
min: fv,
|
||||
max: fv,
|
||||
mean: fv,
|
||||
sum: fv,
|
||||
M2: 0.0,
|
||||
}
|
||||
}
|
||||
}
|
||||
m.cache[id] = a
|
||||
} else {
|
||||
for k, v := range in.Fields() {
|
||||
if fv, ok := convert(v); ok {
|
||||
if _, ok := m.cache[id].fields[k]; !ok {
|
||||
// hit an uncached field of a cached metric
|
||||
m.cache[id].fields[k] = basicstats{
|
||||
count: 1,
|
||||
min: fv,
|
||||
max: fv,
|
||||
mean: fv,
|
||||
sum: fv,
|
||||
M2: 0.0,
|
||||
}
|
||||
continue
|
||||
}
|
||||
|
||||
tmp := m.cache[id].fields[k]
|
||||
// Welford's online algorithm: https://en.m.wikipedia.org/wiki/Algorithms_for_calculating_variance
|
||||
//variable initialization
|
||||
x := fv
|
||||
mean := tmp.mean
|
||||
M2 := tmp.M2
|
||||
//counter compute
|
||||
n := tmp.count + 1
|
||||
tmp.count = n
|
||||
//mean compute
|
||||
delta := x - mean
|
||||
mean = mean + delta/n
|
||||
tmp.mean = mean
|
||||
//variance/stdev compute
|
||||
M2 = M2 + delta*(x-mean)
|
||||
tmp.M2 = M2
|
||||
//max/min compute
|
||||
if fv < tmp.min {
|
||||
tmp.min = fv
|
||||
} else if fv > tmp.max {
|
||||
tmp.max = fv
|
||||
}
|
||||
//sum compute
|
||||
tmp.sum += fv
|
||||
//store final data
|
||||
m.cache[id].fields[k] = tmp
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
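The update above is Welford's online algorithm: a single pass, no stored samples, and better numerical behavior than the naive sum-of-squares approach. A standalone sketch of the same update (the input values are chosen so the expected mean is 2 and the sample variance is 4):

```go
package main

import (
	"fmt"
	"math"
)

// welford mirrors the count/mean/M2 trio kept in the basicstats struct above.
type welford struct {
	count, mean, m2 float64
}

func (w *welford) add(x float64) {
	w.count++
	delta := x - w.mean
	w.mean += delta / w.count
	w.m2 += delta * (x - w.mean) // note: uses the updated mean, as in Add
}

func main() {
	var w welford
	for _, x := range []float64{1, 1, 5, 1} {
		w.add(x)
	}
	variance := w.m2 / (w.count - 1) // sample variance, as in Push
	fmt.Println(w.mean, variance, math.Sqrt(variance)) // 2 4 2
}
```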
||||
|
||||
func (m *BasicStats) Push(acc telegraf.Accumulator) {
|
||||
|
||||
config := getConfiguredStats(m)
|
||||
|
||||
for _, aggregate := range m.cache {
|
||||
fields := map[string]interface{}{}
|
||||
for k, v := range aggregate.fields {
|
||||
|
||||
if config.count {
|
||||
fields[k+"_count"] = v.count
|
||||
}
|
||||
if config.min {
|
||||
fields[k+"_min"] = v.min
|
||||
}
|
||||
if config.max {
|
||||
fields[k+"_max"] = v.max
|
||||
}
|
||||
if config.mean {
|
||||
fields[k+"_mean"] = v.mean
|
||||
}
|
||||
if config.sum {
|
||||
fields[k+"_sum"] = v.sum
|
||||
}
|
||||
|
||||
// v.count is always >= 1
|
||||
if v.count > 1 {
|
||||
variance := v.M2 / (v.count - 1)
|
||||
|
||||
if config.variance {
|
||||
fields[k+"_s2"] = variance
|
||||
}
|
||||
if config.stdev {
|
||||
fields[k+"_stdev"] = math.Sqrt(variance)
|
||||
}
|
||||
}
|
||||
// if count == 1, the sample variance M2/(count-1) is undefined, so variance/stdev are not sent
|
||||
}
|
||||
|
||||
if len(fields) > 0 {
|
||||
acc.AddFields(aggregate.name, fields, aggregate.tags)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func parseStats(names []string) *configuredStats {
|
||||
|
||||
parsed := &configuredStats{}
|
||||
|
||||
for _, name := range names {
|
||||
|
||||
switch name {
|
||||
|
||||
case "count":
|
||||
parsed.count = true
|
||||
case "min":
|
||||
parsed.min = true
|
||||
case "max":
|
||||
parsed.max = true
|
||||
case "mean":
|
||||
parsed.mean = true
|
||||
case "s2":
|
||||
parsed.variance = true
|
||||
case "stdev":
|
||||
parsed.stdev = true
|
||||
case "sum":
|
||||
parsed.sum = true
|
||||
|
||||
default:
|
||||
log.Printf("W! Unrecognized basic stat '%s', ignoring", name)
|
||||
}
|
||||
}
|
||||
|
||||
return parsed
|
||||
}
|
||||
|
||||
func defaultStats() *configuredStats {
|
||||
|
||||
defaults := &configuredStats{}
|
||||
|
||||
defaults.count = true
|
||||
defaults.min = true
|
||||
defaults.max = true
|
||||
defaults.mean = true
|
||||
defaults.variance = true
|
||||
defaults.stdev = true
|
||||
defaults.sum = false
|
||||
|
||||
return defaults
|
||||
}
|
||||
|
||||
func getConfiguredStats(m *BasicStats) *configuredStats {
|
||||
|
||||
if m.statsConfig == nil {
|
||||
|
||||
if m.Stats == nil {
|
||||
m.statsConfig = defaultStats()
|
||||
} else {
|
||||
m.statsConfig = parseStats(m.Stats)
|
||||
}
|
||||
}
|
||||
|
||||
return m.statsConfig
|
||||
}
|
||||
|
||||
func (m *BasicStats) Reset() {
|
||||
m.cache = make(map[uint64]aggregate)
|
||||
}
|
||||
|
||||
func convert(in interface{}) (float64, bool) {
|
||||
switch v := in.(type) {
|
||||
case float64:
|
||||
return v, true
|
||||
case int64:
|
||||
return float64(v), true
|
||||
default:
|
||||
return 0, false
|
||||
}
|
||||
}
|
||||
|
||||
func init() {
|
||||
aggregators.Add("basicstats", func() telegraf.Aggregator {
|
||||
return NewBasicStats()
|
||||
})
|
||||
}
|
||||
511
plugins/aggregators/basicstats/basicstats_test.go
Normal file
@@ -0,0 +1,511 @@
|
||||
package basicstats
|
||||
|
||||
import (
|
||||
"math"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/influxdata/telegraf/metric"
|
||||
"github.com/influxdata/telegraf/testutil"
|
||||
"github.com/stretchr/testify/assert"
|
||||
)
|
||||
|
||||
var m1, _ = metric.New("m1",
|
||||
map[string]string{"foo": "bar"},
|
||||
map[string]interface{}{
|
||||
"a": int64(1),
|
||||
"b": int64(1),
|
||||
"c": float64(2),
|
||||
"d": float64(2),
|
||||
},
|
||||
time.Now(),
|
||||
)
|
||||
var m2, _ = metric.New("m1",
|
||||
map[string]string{"foo": "bar"},
|
||||
map[string]interface{}{
|
||||
"a": int64(1),
|
||||
"b": int64(3),
|
||||
"c": float64(4),
|
||||
"d": float64(6),
|
||||
"e": float64(200),
|
||||
"ignoreme": "string",
|
||||
"andme": true,
|
||||
},
|
||||
time.Now(),
|
||||
)
|
||||
|
||||
func BenchmarkApply(b *testing.B) {
|
||||
minmax := NewBasicStats()
|
||||
|
||||
for n := 0; n < b.N; n++ {
|
||||
minmax.Add(m1)
|
||||
minmax.Add(m2)
|
||||
}
|
||||
}
|
||||
|
||||
// Test two metrics getting added.
|
||||
func TestBasicStatsWithPeriod(t *testing.T) {
|
||||
acc := testutil.Accumulator{}
|
||||
minmax := NewBasicStats()
|
||||
|
||||
minmax.Add(m1)
|
||||
minmax.Add(m2)
|
||||
minmax.Push(&acc)
|
||||
|
||||
expectedFields := map[string]interface{}{
|
||||
"a_count": float64(2), //a
|
||||
"a_max": float64(1),
|
||||
"a_min": float64(1),
|
||||
"a_mean": float64(1),
|
||||
"a_stdev": float64(0),
|
||||
"a_s2": float64(0),
|
||||
"b_count": float64(2), //b
|
||||
"b_max": float64(3),
|
||||
"b_min": float64(1),
|
||||
"b_mean": float64(2),
|
||||
"b_s2": float64(2),
|
||||
"b_stdev": math.Sqrt(2),
|
||||
"c_count": float64(2), //c
|
||||
"c_max": float64(4),
|
||||
"c_min": float64(2),
|
||||
"c_mean": float64(3),
|
||||
"c_s2": float64(2),
|
||||
"c_stdev": math.Sqrt(2),
|
||||
"d_count": float64(2), //d
|
||||
"d_max": float64(6),
|
||||
"d_min": float64(2),
|
||||
"d_mean": float64(4),
|
||||
"d_s2": float64(8),
|
||||
"d_stdev": math.Sqrt(8),
|
||||
"e_count": float64(1), //e
|
||||
"e_max": float64(200),
|
||||
"e_min": float64(200),
|
||||
"e_mean": float64(200),
|
||||
}
|
||||
expectedTags := map[string]string{
|
||||
"foo": "bar",
|
||||
}
|
||||
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
|
||||
}
|
||||
|
||||
// Test two metrics getting added with a push/reset in between (simulates
|
||||
// getting added in different periods).
|
||||
func TestBasicStatsDifferentPeriods(t *testing.T) {
|
||||
acc := testutil.Accumulator{}
|
||||
minmax := NewBasicStats()
|
||||
|
||||
minmax.Add(m1)
|
||||
minmax.Push(&acc)
|
||||
expectedFields := map[string]interface{}{
|
||||
"a_count": float64(1), //a
|
||||
"a_max": float64(1),
|
||||
"a_min": float64(1),
|
||||
"a_mean": float64(1),
|
||||
"b_count": float64(1), //b
|
||||
"b_max": float64(1),
|
||||
"b_min": float64(1),
|
||||
"b_mean": float64(1),
|
||||
"c_count": float64(1), //c
|
||||
"c_max": float64(2),
|
||||
"c_min": float64(2),
|
||||
"c_mean": float64(2),
|
||||
"d_count": float64(1), //d
|
||||
"d_max": float64(2),
|
||||
"d_min": float64(2),
|
||||
"d_mean": float64(2),
|
||||
}
|
||||
expectedTags := map[string]string{
|
||||
"foo": "bar",
|
||||
}
|
||||
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
|
||||
|
||||
acc.ClearMetrics()
|
||||
minmax.Reset()
|
||||
minmax.Add(m2)
|
||||
minmax.Push(&acc)
|
||||
expectedFields = map[string]interface{}{
|
||||
"a_count": float64(1), //a
|
||||
"a_max": float64(1),
|
||||
"a_min": float64(1),
|
||||
"a_mean": float64(1),
|
||||
"b_count": float64(1), //b
|
||||
"b_max": float64(3),
|
||||
"b_min": float64(3),
|
||||
"b_mean": float64(3),
|
||||
"c_count": float64(1), //c
|
||||
"c_max": float64(4),
|
||||
"c_min": float64(4),
|
||||
"c_mean": float64(4),
|
||||
"d_count": float64(1), //d
|
||||
"d_max": float64(6),
|
||||
"d_min": float64(6),
|
||||
"d_mean": float64(6),
|
||||
"e_count": float64(1), //e
|
||||
"e_max": float64(200),
|
||||
"e_min": float64(200),
|
||||
"e_mean": float64(200),
|
||||
}
|
||||
expectedTags = map[string]string{
|
||||
"foo": "bar",
|
||||
}
|
||||
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
|
||||
}
|
||||
|
||||
// Test only aggregating count
|
||||
func TestBasicStatsWithOnlyCount(t *testing.T) {
|
||||
|
||||
aggregator := NewBasicStats()
|
||||
aggregator.Stats = []string{"count"}
|
||||
|
||||
aggregator.Add(m1)
|
||||
aggregator.Add(m2)
|
||||
|
||||
acc := testutil.Accumulator{}
|
||||
aggregator.Push(&acc)
|
||||
|
||||
expectedFields := map[string]interface{}{
|
||||
"a_count": float64(2),
|
||||
"b_count": float64(2),
|
||||
"c_count": float64(2),
|
||||
"d_count": float64(2),
|
||||
"e_count": float64(1),
|
||||
}
|
||||
expectedTags := map[string]string{
|
||||
"foo": "bar",
|
||||
}
|
||||
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
|
||||
}
|
||||
|
||||
// Test only aggregating minimum
|
||||
func TestBasicStatsWithOnlyMin(t *testing.T) {
|
||||
|
||||
aggregator := NewBasicStats()
|
||||
aggregator.Stats = []string{"min"}
|
||||
|
||||
aggregator.Add(m1)
|
||||
aggregator.Add(m2)
|
||||
|
||||
acc := testutil.Accumulator{}
|
||||
aggregator.Push(&acc)
|
||||
|
||||
expectedFields := map[string]interface{}{
|
||||
"a_min": float64(1),
|
||||
"b_min": float64(1),
|
||||
"c_min": float64(2),
|
||||
"d_min": float64(2),
|
||||
"e_min": float64(200),
|
||||
}
|
||||
expectedTags := map[string]string{
|
||||
"foo": "bar",
|
||||
}
|
||||
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
|
||||
}
|
||||
|
||||
// Test only aggregating maximum
|
||||
func TestBasicStatsWithOnlyMax(t *testing.T) {
|
||||
|
||||
aggregator := NewBasicStats()
|
||||
aggregator.Stats = []string{"max"}
|
||||
|
||||
aggregator.Add(m1)
|
||||
aggregator.Add(m2)
|
||||
|
||||
acc := testutil.Accumulator{}
|
||||
aggregator.Push(&acc)
|
||||
|
||||
expectedFields := map[string]interface{}{
|
||||
"a_max": float64(1),
|
||||
"b_max": float64(3),
|
||||
"c_max": float64(4),
|
||||
"d_max": float64(6),
|
||||
"e_max": float64(200),
|
||||
}
|
||||
expectedTags := map[string]string{
|
||||
"foo": "bar",
|
||||
}
|
||||
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
|
||||
}
|
||||
|
||||
// Test only aggregating mean
|
||||
func TestBasicStatsWithOnlyMean(t *testing.T) {
|
||||
|
||||
aggregator := NewBasicStats()
|
||||
aggregator.Stats = []string{"mean"}
|
||||
|
||||
aggregator.Add(m1)
|
||||
aggregator.Add(m2)
|
||||
|
||||
acc := testutil.Accumulator{}
|
||||
aggregator.Push(&acc)
|
||||
|
||||
expectedFields := map[string]interface{}{
|
||||
"a_mean": float64(1),
|
||||
"b_mean": float64(2),
|
||||
"c_mean": float64(3),
|
||||
"d_mean": float64(4),
|
||||
"e_mean": float64(200),
|
||||
}
|
||||
expectedTags := map[string]string{
|
||||
"foo": "bar",
|
||||
}
|
||||
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
|
||||
}
|
||||
|
||||
// Test only aggregating sum
|
||||
func TestBasicStatsWithOnlySum(t *testing.T) {
|
||||
|
||||
aggregator := NewBasicStats()
|
||||
aggregator.Stats = []string{"sum"}
|
||||
|
||||
aggregator.Add(m1)
|
||||
aggregator.Add(m2)
|
||||
|
||||
acc := testutil.Accumulator{}
|
||||
aggregator.Push(&acc)
|
||||
|
||||
expectedFields := map[string]interface{}{
|
||||
"a_sum": float64(2),
|
||||
"b_sum": float64(4),
|
||||
"c_sum": float64(6),
|
||||
"d_sum": float64(8),
|
||||
"e_sum": float64(200),
|
||||
}
|
||||
expectedTags := map[string]string{
|
||||
"foo": "bar",
|
||||
}
|
||||
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
|
||||
}
|
||||
|
||||
// Verify that sum doesn't suffer from floating point errors. Early
|
||||
// implementations of sum were calculated from mean and count, which
|
||||
// e.g. summed "1, 1, 5, 1" as "7.999999..." instead of 8.
|
||||
func TestBasicStatsWithOnlySumFloatingPointErrata(t *testing.T) {
|
||||
|
||||
var sum1, _ = metric.New("m1",
|
||||
map[string]string{},
|
||||
map[string]interface{}{
|
||||
"a": int64(1),
|
||||
},
|
||||
time.Now(),
|
||||
)
|
||||
var sum2, _ = metric.New("m1",
|
||||
map[string]string{},
|
||||
map[string]interface{}{
|
||||
"a": int64(1),
|
||||
},
|
||||
time.Now(),
|
||||
)
|
||||
var sum3, _ = metric.New("m1",
|
||||
map[string]string{},
|
||||
map[string]interface{}{
|
||||
"a": int64(5),
|
||||
},
|
||||
time.Now(),
|
||||
)
|
||||
var sum4, _ = metric.New("m1",
|
||||
map[string]string{},
|
||||
map[string]interface{}{
|
||||
"a": int64(1),
|
||||
},
|
||||
time.Now(),
|
||||
)
|
||||
|
||||
aggregator := NewBasicStats()
|
||||
aggregator.Stats = []string{"sum"}
|
||||
|
||||
aggregator.Add(sum1)
|
||||
aggregator.Add(sum2)
|
||||
aggregator.Add(sum3)
|
||||
aggregator.Add(sum4)
|
||||
|
||||
acc := testutil.Accumulator{}
|
||||
aggregator.Push(&acc)
|
||||
|
||||
expectedFields := map[string]interface{}{
|
||||
"a_sum": float64(8),
|
||||
}
|
||||
expectedTags := map[string]string{}
|
||||
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
|
||||
}
|
||||
|
||||
// Test only aggregating variance
|
||||
func TestBasicStatsWithOnlyVariance(t *testing.T) {
|
||||
|
||||
aggregator := NewBasicStats()
|
||||
aggregator.Stats = []string{"s2"}
|
||||
|
||||
aggregator.Add(m1)
|
||||
aggregator.Add(m2)
|
||||
|
||||
acc := testutil.Accumulator{}
|
||||
aggregator.Push(&acc)
|
||||
|
||||
expectedFields := map[string]interface{}{
|
||||
"a_s2": float64(0),
|
||||
"b_s2": float64(2),
|
||||
"c_s2": float64(2),
|
||||
"d_s2": float64(8),
|
||||
}
|
||||
expectedTags := map[string]string{
|
||||
"foo": "bar",
|
||||
}
|
||||
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
|
||||
}
|
||||
|
||||
// Test only aggregating standard deviation
|
||||
func TestBasicStatsWithOnlyStandardDeviation(t *testing.T) {
|
||||
|
||||
aggregator := NewBasicStats()
|
||||
aggregator.Stats = []string{"stdev"}
|
||||
|
||||
aggregator.Add(m1)
|
||||
aggregator.Add(m2)
|
||||
|
||||
acc := testutil.Accumulator{}
|
||||
aggregator.Push(&acc)
|
||||
|
||||
expectedFields := map[string]interface{}{
|
||||
"a_stdev": float64(0),
|
||||
"b_stdev": math.Sqrt(2),
|
||||
"c_stdev": math.Sqrt(2),
|
||||
"d_stdev": math.Sqrt(8),
|
||||
}
|
||||
expectedTags := map[string]string{
|
||||
"foo": "bar",
|
||||
}
|
||||
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
|
||||
}
|
||||
|
||||
// Test only aggregating minimum and maximum
|
||||
func TestBasicStatsWithMinAndMax(t *testing.T) {
|
||||
|
||||
aggregator := NewBasicStats()
|
||||
aggregator.Stats = []string{"min", "max"}
|
||||
|
||||
aggregator.Add(m1)
|
||||
aggregator.Add(m2)
|
||||
|
||||
acc := testutil.Accumulator{}
|
||||
aggregator.Push(&acc)
|
||||
|
||||
expectedFields := map[string]interface{}{
|
||||
"a_max": float64(1), //a
|
||||
"a_min": float64(1),
|
||||
"b_max": float64(3), //b
|
||||
"b_min": float64(1),
|
||||
"c_max": float64(4), //c
|
||||
"c_min": float64(2),
|
||||
"d_max": float64(6), //d
|
||||
"d_min": float64(2),
|
||||
"e_max": float64(200), //e
|
||||
"e_min": float64(200),
|
||||
}
|
||||
expectedTags := map[string]string{
|
||||
"foo": "bar",
|
||||
}
|
||||
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
|
||||
}
|
||||
|
||||
// Test aggregating with all stats
|
||||
func TestBasicStatsWithAllStats(t *testing.T) {
|
||||
acc := testutil.Accumulator{}
|
||||
minmax := NewBasicStats()
|
||||
minmax.Stats = []string{"count", "min", "max", "mean", "stdev", "s2", "sum"}
|
||||
|
||||
minmax.Add(m1)
|
||||
minmax.Add(m2)
|
||||
minmax.Push(&acc)
|
||||
|
||||
expectedFields := map[string]interface{}{
|
||||
"a_count": float64(2), //a
|
||||
"a_max": float64(1),
|
||||
"a_min": float64(1),
|
||||
"a_mean": float64(1),
|
||||
"a_stdev": float64(0),
|
||||
"a_s2": float64(0),
|
||||
"a_sum": float64(2),
|
||||
"b_count": float64(2), //b
|
||||
"b_max": float64(3),
|
||||
"b_min": float64(1),
|
||||
"b_mean": float64(2),
|
||||
"b_s2": float64(2),
|
||||
"b_sum": float64(4),
|
||||
"b_stdev": math.Sqrt(2),
|
||||
"c_count": float64(2), //c
|
||||
"c_max": float64(4),
|
||||
"c_min": float64(2),
|
||||
"c_mean": float64(3),
|
||||
"c_s2": float64(2),
|
||||
"c_stdev": math.Sqrt(2),
|
||||
"c_sum": float64(6),
|
||||
"d_count": float64(2), //d
|
||||
"d_max": float64(6),
|
||||
"d_min": float64(2),
|
||||
"d_mean": float64(4),
|
||||
"d_s2": float64(8),
|
||||
"d_stdev": math.Sqrt(8),
|
||||
"d_sum": float64(8),
|
||||
"e_count": float64(1), //e
|
||||
"e_max": float64(200),
|
||||
"e_min": float64(200),
|
||||
"e_mean": float64(200),
|
||||
"e_sum": float64(200),
|
||||
}
|
||||
expectedTags := map[string]string{
|
||||
"foo": "bar",
|
||||
}
|
||||
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
|
||||
}
|
||||
|
||||
// Test that if an empty array is passed, no points are pushed
|
||||
func TestBasicStatsWithNoStats(t *testing.T) {
|
||||
|
||||
aggregator := NewBasicStats()
|
||||
aggregator.Stats = []string{}
|
||||
|
||||
aggregator.Add(m1)
|
||||
aggregator.Add(m2)
|
||||
|
||||
acc := testutil.Accumulator{}
|
||||
aggregator.Push(&acc)
|
||||
|
||||
acc.AssertDoesNotContainMeasurement(t, "m1")
|
||||
}
|
||||
|
||||
// Test that if an unknown stat is configured, it doesn't explode
|
||||
func TestBasicStatsWithUnknownStat(t *testing.T) {
|
||||
|
||||
aggregator := NewBasicStats()
|
||||
aggregator.Stats = []string{"crazy"}
|
||||
|
||||
aggregator.Add(m1)
|
||||
aggregator.Add(m2)
|
||||
|
||||
acc := testutil.Accumulator{}
|
||||
aggregator.Push(&acc)
|
||||
|
||||
acc.AssertDoesNotContainMeasurement(t, "m1")
|
||||
}
|
||||
|
||||
// Test that if Stats isn't supplied, then we only do count, min, max, mean,
|
||||
// stdev, and s2. We purposely exclude sum for backwards compatibility,
|
||||
// otherwise users' working systems would suddenly (and surprisingly) start
|
||||
// capturing sum without their input.
|
||||
func TestBasicStatsWithDefaultStats(t *testing.T) {
|
||||
|
||||
aggregator := NewBasicStats()
|
||||
|
||||
aggregator.Add(m1)
|
||||
aggregator.Add(m2)
|
||||
|
||||
acc := testutil.Accumulator{}
|
||||
aggregator.Push(&acc)
|
||||
|
||||
assert.True(t, acc.HasField("m1", "a_count"))
|
||||
assert.True(t, acc.HasField("m1", "a_min"))
|
||||
assert.True(t, acc.HasField("m1", "a_max"))
|
||||
assert.True(t, acc.HasField("m1", "a_mean"))
|
||||
assert.True(t, acc.HasField("m1", "a_stdev"))
|
||||
assert.True(t, acc.HasField("m1", "a_s2"))
|
||||
assert.False(t, acc.HasField("m1", "a_sum"))
|
||||
}
|
||||
97
plugins/aggregators/histogram/README.md
Normal file
@@ -0,0 +1,97 @@
|
||||
# Histogram Aggregator Plugin
|
||||
|
||||
The histogram aggregator plugin creates histograms containing the counts of
|
||||
field values within a range.
|
||||
|
||||
Values added to a bucket are also added to the larger buckets in the
|
||||
distribution. This creates a [cumulative histogram](https://en.wikipedia.org/wiki/Histogram#/media/File:Cumulative_vs_normal_histogram.svg).
|
||||
|
||||
Like other Telegraf aggregators, the metric is emitted every `period` seconds.
|
||||
Bucket counts, however, are not reset between periods and are monotonically
|
||||
non-decreasing while Telegraf is running.
|
||||
|
||||
#### Design
|
||||
|
||||
Each metric is passed to the aggregator, which looks up the histogram
|
||||
buckets configured for the metric's fields. For each field with configured
|
||||
buckets, the value increments the matching bucket; values above the highest
|
||||
border are counted in the `+Inf` bucket. Every `period` seconds this data
|
||||
is forwarded to the outputs.
|
||||
|
||||
The bucket hit-counting algorithm is based on the algorithm implemented in
|
||||
the Prometheus
|
||||
[client](https://github.com/prometheus/client_golang/blob/master/prometheus/histogram.go).
|
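Concretely, the bucket lookup can be expressed with `sort.SearchFloat64s`, which returns the index of the first border that is greater than or equal to the value (so each border is an inclusive upper bound); an index past the last border means the `+Inf` bucket. A sketch with illustrative values, mirroring the `Add` method in `histogram.go`:

```go
package main

import (
	"fmt"
	"sort"
)

func main() {
	buckets := []float64{0.0, 10.0, 20.0, 30.0, 40.0}
	counts := make([]int64, len(buckets)+1) // last slot is the +Inf bucket

	for _, v := range []float64{15.3, 15.9, 105.0} {
		index := sort.SearchFloat64s(buckets, v) // len(buckets) => +Inf
		counts[index]++
	}
	fmt.Println(counts) // [0 0 2 0 0 1]
}
```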
||||
|
||||
### Configuration
|
||||
|
||||
```toml
|
||||
# Configuration for aggregate histogram metrics
|
||||
[[aggregators.histogram]]
|
||||
## The period in which to flush the aggregator.
|
||||
period = "30s"
|
||||
|
||||
## If true, the original metric will be dropped by the
|
||||
## aggregator and will not get sent to the output plugins.
|
||||
drop_original = false
|
||||
|
||||
## Example config that aggregates all fields of the metric.
|
||||
# [[aggregators.histogram.config]]
|
||||
# ## The set of buckets.
|
||||
# buckets = [0.0, 15.6, 34.5, 49.1, 71.5, 80.5, 94.5, 100.0]
|
||||
# ## The name of metric.
|
||||
# measurement_name = "cpu"
|
||||
|
||||
## Example config that aggregates only specific fields of the metric.
|
||||
# [[aggregators.histogram.config]]
|
||||
# ## The set of buckets.
|
||||
# buckets = [0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0]
|
||||
# ## The name of metric.
|
||||
# measurement_name = "diskio"
|
||||
# ## The concrete fields of metric
|
||||
# fields = ["io_time", "read_time", "write_time"]
|
||||
```
|
||||
|
||||
The user is responsible for defining the bounds of the histogram bucket as
|
||||
well as the measurement name and fields to aggregate.
|
||||
|
||||
Each histogram config section must contain a `buckets` and `measurement_name`
|
||||
option. Optionally, if `fields` is set, only the fields listed will be
|
||||
aggregated. If `fields` is not set, all fields are aggregated.
|
||||
|
||||
The `buckets` option contains a list of floats which specify the bucket
|
||||
boundaries. Each float value defines the inclusive upper bound of the bucket.
|
||||
The `+Inf` bucket is added automatically and does not need to be defined.
|
||||
|
||||
### Measurements & Fields:
|
||||
|
||||
The postfix `bucket` will be added to each field key.
|
||||
|
||||
- measurement1
|
||||
- field1_bucket
|
||||
- field2_bucket
|
||||
|
||||
### Tags:
|
||||
|
||||
All measurements are given the tag `le`, which holds the upper border of the
|
||||
bucket: the metric value is less than or equal to the value of this tag. For
|
||||
example, assume a metric value of 10 and the buckets [5, 10, 30, 70, 100].
|
||||
Then the value is counted in the bucket tagged `le=10` (and, cumulatively, in
|
||||
every larger bucket), because `10` is that bucket's inclusive upper border.
|
||||
|
||||
### Example Output:
|
||||
|
||||
```
|
||||
cpu,cpu=cpu1,host=localhost,le=0.0 usage_idle_bucket=0i 1486998330000000000
|
||||
cpu,cpu=cpu1,host=localhost,le=10.0 usage_idle_bucket=0i 1486998330000000000
|
||||
cpu,cpu=cpu1,host=localhost,le=20.0 usage_idle_bucket=1i 1486998330000000000
|
||||
cpu,cpu=cpu1,host=localhost,le=30.0 usage_idle_bucket=2i 1486998330000000000
|
||||
cpu,cpu=cpu1,host=localhost,le=40.0 usage_idle_bucket=2i 1486998330000000000
|
||||
cpu,cpu=cpu1,host=localhost,le=50.0 usage_idle_bucket=2i 1486998330000000000
|
||||
cpu,cpu=cpu1,host=localhost,le=60.0 usage_idle_bucket=2i 1486998330000000000
|
||||
cpu,cpu=cpu1,host=localhost,le=70.0 usage_idle_bucket=2i 1486998330000000000
|
||||
cpu,cpu=cpu1,host=localhost,le=80.0 usage_idle_bucket=2i 1486998330000000000
|
||||
cpu,cpu=cpu1,host=localhost,le=90.0 usage_idle_bucket=2i 1486998330000000000
|
||||
cpu,cpu=cpu1,host=localhost,le=100.0 usage_idle_bucket=2i 1486998330000000000
|
||||
cpu,cpu=cpu1,host=localhost,le=+Inf usage_idle_bucket=2i 1486998330000000000
|
||||
```
|
||||
315
plugins/aggregators/histogram/histogram.go
Normal file
@@ -0,0 +1,315 @@
|
||||
package histogram
|
||||
|
||||
import (
|
||||
"sort"
|
||||
"strconv"
|
||||
|
||||
"github.com/influxdata/telegraf"
|
||||
"github.com/influxdata/telegraf/plugins/aggregators"
|
||||
)
|
||||
|
||||
// bucketTag is the tag that contains the right bucket border
|
||||
const bucketTag = "le"
|
||||
|
||||
// bucketInf is the right bucket border for infinite values
|
||||
const bucketInf = "+Inf"
|
||||
|
||||
// HistogramAggregator is an aggregator with histogram configs and the histograms collected for the defined metrics
|
||||
type HistogramAggregator struct {
|
||||
Configs []config `toml:"config"`
|
||||
|
||||
buckets bucketsByMetrics
|
||||
cache map[uint64]metricHistogramCollection
|
||||
}
|
||||
|
||||
// config contains the metric name, its fields, and the histogram buckets.
|
||||
type config struct {
|
||||
Metric string `toml:"measurement_name"`
|
||||
Fields []string `toml:"fields"`
|
||||
Buckets buckets `toml:"buckets"`
|
||||
}
|
||||
|
||||
// bucketsByMetrics contains the buckets grouped by metric and field name
|
||||
type bucketsByMetrics map[string]bucketsByFields
|
||||
|
||||
// bucketsByFields contains the buckets grouped by field name
|
||||
type bucketsByFields map[string]buckets
|
||||
|
||||
// buckets contains the right bucket borders
|
||||
type buckets []float64
|
||||
|
||||
// metricHistogramCollection aggregates the histogram data
|
||||
type metricHistogramCollection struct {
|
||||
histogramCollection map[string]counts
|
||||
name string
|
||||
tags map[string]string
|
||||
}
|
||||
|
||||
// counts holds the number of hits in each bucket
|
||||
type counts []int64
|
||||
|
||||
// groupedByCountFields groups fields with their count values for one metric and tag set
|
||||
type groupedByCountFields struct {
|
||||
name string
|
||||
tags map[string]string
|
||||
fieldsWithCount map[string]int64
|
||||
}
|
||||
|
||||
// NewHistogramAggregator creates a new histogram aggregator
|
||||
func NewHistogramAggregator() telegraf.Aggregator {
|
||||
h := &HistogramAggregator{}
|
||||
h.buckets = make(bucketsByMetrics)
|
||||
h.resetCache()
|
||||
|
||||
return h
|
||||
}
|
||||
|
||||
var sampleConfig = `
|
||||
## The period in which to flush the aggregator.
|
||||
period = "30s"
|
||||
|
||||
## If true, the original metric will be dropped by the
|
||||
## aggregator and will not get sent to the output plugins.
|
||||
drop_original = false
|
||||
|
||||
## Example config that aggregates all fields of the metric.
|
||||
# [[aggregators.histogram.config]]
|
||||
# ## The set of buckets.
|
||||
# buckets = [0.0, 15.6, 34.5, 49.1, 71.5, 80.5, 94.5, 100.0]
|
||||
# ## The name of metric.
|
||||
# measurement_name = "cpu"
|
||||
|
||||
## Example config that aggregates only specific fields of the metric.
|
||||
# [[aggregators.histogram.config]]
|
||||
# ## The set of buckets.
|
||||
# buckets = [0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0]
|
||||
# ## The name of metric.
|
||||
# measurement_name = "diskio"
|
||||
# ## The concrete fields of metric
|
||||
# fields = ["io_time", "read_time", "write_time"]
|
||||
`
|
||||
|
||||
// SampleConfig returns a sample config
|
||||
func (h *HistogramAggregator) SampleConfig() string {
|
||||
return sampleConfig
|
||||
}
|
||||
|
||||
// Description returns the description of the aggregator plugin
|
||||
func (h *HistogramAggregator) Description() string {
|
||||
return "Create aggregate histograms."
|
||||
}
|
||||
|
||||
// Add adds new hit to the buckets
|
||||
func (h *HistogramAggregator) Add(in telegraf.Metric) {
|
||||
bucketsByField := make(map[string][]float64)
|
||||
for field := range in.Fields() {
|
||||
buckets := h.getBuckets(in.Name(), field)
|
||||
if buckets != nil {
|
||||
bucketsByField[field] = buckets
|
||||
}
|
||||
}
|
||||
|
||||
if len(bucketsByField) == 0 {
|
||||
return
|
||||
}
|
||||
|
||||
id := in.HashID()
|
||||
agr, ok := h.cache[id]
|
||||
if !ok {
|
||||
agr = metricHistogramCollection{
|
||||
name: in.Name(),
|
||||
tags: in.Tags(),
|
||||
histogramCollection: make(map[string]counts),
|
||||
}
|
||||
}
|
||||
|
||||
for field, value := range in.Fields() {
|
||||
if buckets, ok := bucketsByField[field]; ok {
|
||||
if agr.histogramCollection[field] == nil {
|
||||
agr.histogramCollection[field] = make(counts, len(buckets)+1)
|
||||
}
|
||||
|
||||
if value, ok := convert(value); ok {
|
||||
index := sort.SearchFloat64s(buckets, value)
|
||||
agr.histogramCollection[field][index]++
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
h.cache[id] = agr
|
||||
}
|
||||
|
||||
// Push emits the histogram values for the cached metrics into the accumulator
|
||||
func (h *HistogramAggregator) Push(acc telegraf.Accumulator) {
|
||||
metricsWithGroupedFields := []groupedByCountFields{}
|
||||
|
||||
for _, aggregate := range h.cache {
|
||||
for field, counts := range aggregate.histogramCollection {
|
||||
h.groupFieldsByBuckets(&metricsWithGroupedFields, aggregate.name, field, copyTags(aggregate.tags), counts)
|
||||
}
|
||||
}
|
||||
|
||||
for _, metric := range metricsWithGroupedFields {
|
||||
acc.AddFields(metric.name, makeFieldsWithCount(metric.fieldsWithCount), metric.tags)
|
||||
}
|
||||
}
|
||||
|
||||
// groupFieldsByBuckets groups fields by metric buckets which are represented as tags
|
||||
func (h *HistogramAggregator) groupFieldsByBuckets(
|
||||
metricsWithGroupedFields *[]groupedByCountFields,
|
||||
name string,
|
||||
field string,
|
||||
tags map[string]string,
|
||||
counts []int64,
|
||||
) {
|
||||
count := int64(0)
|
||||
for index, bucket := range h.getBuckets(name, field) {
|
||||
count += counts[index]
|
||||
|
||||
tags[bucketTag] = strconv.FormatFloat(bucket, 'f', -1, 64)
|
||||
h.groupField(metricsWithGroupedFields, name, field, count, copyTags(tags))
|
||||
}
|
||||
|
||||
count += counts[len(counts)-1]
|
||||
tags[bucketTag] = bucketInf
|
||||
|
||||
h.groupField(metricsWithGroupedFields, name, field, count, tags)
|
||||
}
|
||||
|
||||
// groupField groups field by count value
|
||||
func (h *HistogramAggregator) groupField(
|
||||
metricsWithGroupedFields *[]groupedByCountFields,
|
||||
name string,
|
||||
field string,
|
||||
count int64,
|
||||
tags map[string]string,
|
||||
) {
|
||||
for key, metric := range *metricsWithGroupedFields {
|
||||
if name == metric.name && isTagsIdentical(tags, metric.tags) {
|
||||
(*metricsWithGroupedFields)[key].fieldsWithCount[field] = count
|
||||
return
|
||||
}
|
||||
}
|
||||
|
||||
fieldsWithCount := map[string]int64{
|
||||
field: count,
|
||||
}
|
||||
|
||||
*metricsWithGroupedFields = append(
|
||||
*metricsWithGroupedFields,
|
||||
groupedByCountFields{name: name, tags: tags, fieldsWithCount: fieldsWithCount},
|
||||
)
|
||||
}
|
||||
|
||||
// Reset does nothing, because counts must be collected for a long time; otherwise, with a
|
||||
// small period, the histogram would cover only a small part of the distribution.
|
||||
func (h *HistogramAggregator) Reset() {}
|
||||
|
||||
// resetCache resets the cached counts (hits) in the buckets
|
||||
func (h *HistogramAggregator) resetCache() {
|
||||
h.cache = make(map[uint64]metricHistogramCollection)
|
||||
}
|
||||
|
||||
// getBuckets finds buckets and returns them
|
||||
func (h *HistogramAggregator) getBuckets(metric string, field string) []float64 {
|
||||
if buckets, ok := h.buckets[metric][field]; ok {
|
||||
return buckets
|
||||
}
|
||||
|
||||
for _, config := range h.Configs {
|
||||
if config.Metric == metric {
|
||||
if !isBucketExists(field, config) {
|
||||
continue
|
||||
}
|
||||
|
||||
if _, ok := h.buckets[metric]; !ok {
|
||||
h.buckets[metric] = make(bucketsByFields)
|
||||
}
|
||||
|
||||
h.buckets[metric][field] = sortBuckets(config.Buckets)
|
||||
}
|
||||
}
|
||||
|
||||
return h.buckets[metric][field]
|
||||
}
|
||||
|
||||
// isBucketExists checks whether buckets exist for the passed field
|
||||
func isBucketExists(field string, cfg config) bool {
|
||||
if len(cfg.Fields) == 0 {
|
||||
return true
|
||||
}
|
||||
|
||||
for _, fl := range cfg.Fields {
|
||||
if fl == field {
|
||||
return true
|
||||
}
|
||||
}
|
||||
|
||||
return false
|
||||
}
|
||||
|
||||
// sortBuckets sorts the buckets if needed
|
||||
func sortBuckets(buckets []float64) []float64 {
|
||||
for i, bucket := range buckets {
|
||||
if i < len(buckets)-1 && bucket >= buckets[i+1] {
|
||||
sort.Float64s(buckets)
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
return buckets
|
||||
}
|
||||
|
||||
// convert converts interface to concrete type
|
||||
func convert(in interface{}) (float64, bool) {
|
||||
switch v := in.(type) {
|
||||
case float64:
|
||||
return v, true
|
||||
case int64:
|
||||
return float64(v), true
|
||||
default:
|
||||
return 0, false
|
||||
}
|
||||
}
|
||||
|
||||
// copyTags copies tags
|
||||
func copyTags(tags map[string]string) map[string]string {
|
||||
copiedTags := map[string]string{}
|
||||
for key, val := range tags {
|
||||
copiedTags[key] = val
|
||||
}
|
||||
|
||||
return copiedTags
|
||||
}
|
||||
|
||||
// isTagsIdentical checks the identity of two list of tags
|
||||
func isTagsIdentical(originalTags, checkedTags map[string]string) bool {
|
||||
if len(originalTags) != len(checkedTags) {
|
||||
return false
|
||||
}
|
||||
|
||||
for tagName, tagValue := range originalTags {
|
||||
if tagValue != checkedTags[tagName] {
|
||||
return false
|
||||
}
|
||||
}
|
||||
|
||||
return true
|
||||
}
|
||||
|
||||
// makeFieldsWithCount assigns count value to all metric fields
|
||||
func makeFieldsWithCount(fieldsWithCountIn map[string]int64) map[string]interface{} {
|
||||
fieldsWithCountOut := map[string]interface{}{}
|
||||
for field, count := range fieldsWithCountIn {
|
||||
fieldsWithCountOut[field+"_bucket"] = count
|
||||
}
|
||||
|
||||
return fieldsWithCountOut
|
||||
}
|
||||
|
||||
// init initializes histogram aggregator plugin
|
||||
func init() {
|
||||
aggregators.Add("histogram", func() telegraf.Aggregator {
|
||||
return NewHistogramAggregator()
|
||||
})
|
||||
}
|
||||
210
plugins/aggregators/histogram/histogram_test.go
Normal file
@@ -0,0 +1,210 @@
|
||||
package histogram
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/influxdata/telegraf"
|
||||
"github.com/influxdata/telegraf/metric"
|
||||
"github.com/influxdata/telegraf/testutil"
|
||||
"github.com/stretchr/testify/assert"
|
||||
)
|
||||
|
||||
// NewTestHistogram creates a new test histogram aggregator with the specified config
|
||||
func NewTestHistogram(cfg []config) telegraf.Aggregator {
|
||||
htm := &HistogramAggregator{Configs: cfg}
|
||||
htm.buckets = make(bucketsByMetrics)
|
||||
htm.resetCache()
|
||||
|
||||
return htm
|
||||
}
|
||||
|
||||
// firstMetric1 is the first test metric
|
||||
var firstMetric1, _ = metric.New(
|
||||
"first_metric_name",
|
||||
map[string]string{"tag_name": "tag_value"},
|
||||
map[string]interface{}{
|
||||
"a": float64(15.3),
|
||||
"b": float64(40),
|
||||
},
|
||||
time.Now(),
|
||||
)
|
||||
|
||||
// firstMetric2 is the first test metric with different values
|
||||
var firstMetric2, _ = metric.New(
|
||||
"first_metric_name",
|
||||
map[string]string{"tag_name": "tag_value"},
|
||||
map[string]interface{}{
|
||||
"a": float64(15.9),
|
||||
"c": float64(40),
|
||||
},
|
||||
time.Now(),
|
||||
)
|
||||
|
||||
// secondMetric is the second metric
|
||||
var secondMetric, _ = metric.New(
|
||||
"second_metric_name",
|
||||
map[string]string{"tag_name": "tag_value"},
|
||||
map[string]interface{}{
|
||||
"a": float64(105),
|
||||
"ignoreme": "string",
|
||||
"andme": true,
|
||||
},
|
||||
time.Now(),
|
||||
)
|
||||
|
||||
// BenchmarkApply runs benchmarks
|
||||
func BenchmarkApply(b *testing.B) {
|
||||
histogram := NewHistogramAggregator()
|
||||
|
||||
for n := 0; n < b.N; n++ {
|
||||
histogram.Add(firstMetric1)
|
||||
histogram.Add(firstMetric2)
|
||||
histogram.Add(secondMetric)
|
||||
}
|
||||
}
|
||||
|
||||
// TestHistogramWithPeriodAndOneField tests metrics for one period and for one field
|
||||
func TestHistogramWithPeriodAndOneField(t *testing.T) {
|
||||
var cfg []config
|
||||
cfg = append(cfg, config{Metric: "first_metric_name", Fields: []string{"a"}, Buckets: []float64{0.0, 10.0, 20.0, 30.0, 40.0}})
|
||||
histogram := NewTestHistogram(cfg)
|
||||
|
||||
acc := &testutil.Accumulator{}
|
||||
|
||||
histogram.Add(firstMetric1)
|
||||
histogram.Add(firstMetric2)
|
||||
histogram.Push(acc)
|
||||
|
||||
if len(acc.Metrics) != 6 {
|
||||
assert.Fail(t, "Incorrect number of metrics")
|
||||
}
|
||||
assertContainsTaggedField(t, acc, "first_metric_name", map[string]interface{}{"a_bucket": int64(0)}, "0")
|
||||
assertContainsTaggedField(t, acc, "first_metric_name", map[string]interface{}{"a_bucket": int64(0)}, "10")
|
||||
assertContainsTaggedField(t, acc, "first_metric_name", map[string]interface{}{"a_bucket": int64(2)}, "20")
|
||||
assertContainsTaggedField(t, acc, "first_metric_name", map[string]interface{}{"a_bucket": int64(2)}, "30")
|
||||
assertContainsTaggedField(t, acc, "first_metric_name", map[string]interface{}{"a_bucket": int64(2)}, "40")
|
||||
assertContainsTaggedField(t, acc, "first_metric_name", map[string]interface{}{"a_bucket": int64(2)}, bucketInf)
|
||||
}
|
||||
|
||||
// TestHistogramWithPeriodAndAllFields tests two metrics for one period and for all fields
|
||||
func TestHistogramWithPeriodAndAllFields(t *testing.T) {
|
||||
var cfg []config
|
||||
cfg = append(cfg, config{Metric: "first_metric_name", Buckets: []float64{0.0, 15.5, 20.0, 30.0, 40.0}})
|
||||
cfg = append(cfg, config{Metric: "second_metric_name", Buckets: []float64{0.0, 4.0, 10.0, 23.0, 30.0}})
|
||||
histogram := NewTestHistogram(cfg)
|
||||
|
||||
acc := &testutil.Accumulator{}
|
||||
|
||||
histogram.Add(firstMetric1)
|
||||
histogram.Add(firstMetric2)
|
||||
histogram.Add(secondMetric)
|
||||
histogram.Push(acc)
|
||||
|
||||
if len(acc.Metrics) != 12 {
|
||||
assert.Fail(t, "Incorrect number of metrics")
|
||||
}
|
||||
|
||||
assertContainsTaggedField(t, acc, "first_metric_name", map[string]interface{}{"a_bucket": int64(0), "b_bucket": int64(0), "c_bucket": int64(0)}, "0")
|
||||
assertContainsTaggedField(t, acc, "first_metric_name", map[string]interface{}{"a_bucket": int64(1), "b_bucket": int64(0), "c_bucket": int64(0)}, "15.5")
|
||||
assertContainsTaggedField(t, acc, "first_metric_name", map[string]interface{}{"a_bucket": int64(2), "b_bucket": int64(0), "c_bucket": int64(0)}, "20")
|
||||
assertContainsTaggedField(t, acc, "first_metric_name", map[string]interface{}{"a_bucket": int64(2), "b_bucket": int64(0), "c_bucket": int64(0)}, "30")
|
||||
assertContainsTaggedField(t, acc, "first_metric_name", map[string]interface{}{"a_bucket": int64(2), "b_bucket": int64(1), "c_bucket": int64(1)}, "40")
|
||||
assertContainsTaggedField(t, acc, "first_metric_name", map[string]interface{}{"a_bucket": int64(2), "b_bucket": int64(1), "c_bucket": int64(1)}, bucketInf)
|
||||
|
||||
assertContainsTaggedField(t, acc, "second_metric_name", map[string]interface{}{"a_bucket": int64(0), "ignoreme_bucket": int64(0), "andme_bucket": int64(0)}, "0")
|
||||
assertContainsTaggedField(t, acc, "second_metric_name", map[string]interface{}{"a_bucket": int64(0), "ignoreme_bucket": int64(0), "andme_bucket": int64(0)}, "4")
|
||||
assertContainsTaggedField(t, acc, "second_metric_name", map[string]interface{}{"a_bucket": int64(0), "ignoreme_bucket": int64(0), "andme_bucket": int64(0)}, "10")
|
||||
assertContainsTaggedField(t, acc, "second_metric_name", map[string]interface{}{"a_bucket": int64(0), "ignoreme_bucket": int64(0), "andme_bucket": int64(0)}, "23")
|
||||
assertContainsTaggedField(t, acc, "second_metric_name", map[string]interface{}{"a_bucket": int64(0), "ignoreme_bucket": int64(0), "andme_bucket": int64(0)}, "30")
|
||||
assertContainsTaggedField(t, acc, "second_metric_name", map[string]interface{}{"a_bucket": int64(1), "ignoreme_bucket": int64(0), "andme_bucket": int64(0)}, bucketInf)
|
||||
}
|
||||
|
||||
// TestHistogramDifferentPeriodsAndAllFields tests two metrics getting added with a push/reset in between (simulates
|
||||
// getting added in different periods) for all fields
|
||||
func TestHistogramDifferentPeriodsAndAllFields(t *testing.T) {
|
||||
|
||||
var cfg []config
|
||||
cfg = append(cfg, config{Metric: "first_metric_name", Buckets: []float64{0.0, 10.0, 20.0, 30.0, 40.0}})
|
||||
histogram := NewTestHistogram(cfg)
|
||||
|
||||
acc := &testutil.Accumulator{}
|
||||
histogram.Add(firstMetric1)
|
||||
histogram.Push(acc)
|
||||
|
||||
assertContainsTaggedField(t, acc, "first_metric_name", map[string]interface{}{"a_bucket": int64(0), "b_bucket": int64(0)}, "0")
|
||||
assertContainsTaggedField(t, acc, "first_metric_name", map[string]interface{}{"a_bucket": int64(0), "b_bucket": int64(0)}, "10")
|
||||
assertContainsTaggedField(t, acc, "first_metric_name", map[string]interface{}{"a_bucket": int64(1), "b_bucket": int64(0)}, "20")
|
||||
assertContainsTaggedField(t, acc, "first_metric_name", map[string]interface{}{"a_bucket": int64(1), "b_bucket": int64(0)}, "30")
|
||||
assertContainsTaggedField(t, acc, "first_metric_name", map[string]interface{}{"a_bucket": int64(1), "b_bucket": int64(1)}, "40")
|
||||
assertContainsTaggedField(t, acc, "first_metric_name", map[string]interface{}{"a_bucket": int64(1), "b_bucket": int64(1)}, bucketInf)
|
||||
|
||||
acc.ClearMetrics()
|
||||
histogram.Add(firstMetric2)
|
||||
histogram.Push(acc)
|
||||
|
||||
assertContainsTaggedField(t, acc, "first_metric_name", map[string]interface{}{"a_bucket": int64(0), "b_bucket": int64(0), "c_bucket": int64(0)}, "0")
|
||||
assertContainsTaggedField(t, acc, "first_metric_name", map[string]interface{}{"a_bucket": int64(0), "b_bucket": int64(0), "c_bucket": int64(0)}, "10")
|
||||
assertContainsTaggedField(t, acc, "first_metric_name", map[string]interface{}{"a_bucket": int64(2), "b_bucket": int64(0), "c_bucket": int64(0)}, "20")
|
||||
assertContainsTaggedField(t, acc, "first_metric_name", map[string]interface{}{"a_bucket": int64(2), "b_bucket": int64(0), "c_bucket": int64(0)}, "30")
|
||||
assertContainsTaggedField(t, acc, "first_metric_name", map[string]interface{}{"a_bucket": int64(2), "b_bucket": int64(1), "c_bucket": int64(1)}, "40")
|
||||
assertContainsTaggedField(t, acc, "first_metric_name", map[string]interface{}{"a_bucket": int64(2), "b_bucket": int64(1), "c_bucket": int64(1)}, bucketInf)
|
||||
}
|
||||
|
||||
// TestWrongBucketsOrder tests the calling panic with incorrect order of buckets
|
||||
func TestWrongBucketsOrder(t *testing.T) {
|
||||
defer func() {
|
||||
if r := recover(); r != nil {
|
||||
assert.Equal(
|
||||
t,
|
||||
"histogram buckets must be in increasing order: 90.00 >= 20.00, metrics: first_metric_name, field: a",
|
||||
fmt.Sprint(r),
|
||||
)
|
||||
}
|
||||
}()
|
||||
|
||||
var cfg []config
|
||||
cfg = append(cfg, config{Metric: "first_metric_name", Buckets: []float64{0.0, 90.0, 20.0, 30.0, 40.0}})
|
||||
histogram := NewTestHistogram(cfg)
|
||||
histogram.Add(firstMetric2)
|
||||
}
|
||||
|
||||
// assertContainsTaggedField is help functions to test histogram data
|
||||
func assertContainsTaggedField(t *testing.T, acc *testutil.Accumulator, metricName string, fields map[string]interface{}, le string) {
|
||||
acc.Lock()
|
||||
defer acc.Unlock()
|
||||
|
||||
for _, checkedMetric := range acc.Metrics {
|
||||
// check metric name
|
||||
if checkedMetric.Measurement != metricName {
|
||||
continue
|
||||
}
|
||||
|
||||
// check "le" tag
|
||||
if checkedMetric.Tags[bucketTag] != le {
|
||||
continue
|
||||
}
|
||||
|
||||
// check fields
|
||||
isFieldsIdentical := true
|
||||
for field := range fields {
|
||||
if _, ok := checkedMetric.Fields[field]; !ok {
|
||||
isFieldsIdentical = false
|
||||
break
|
||||
}
|
||||
}
|
||||
if !isFieldsIdentical {
|
||||
continue
|
||||
}
|
||||
|
||||
// check fields with their counts
|
||||
if assert.Equal(t, fields, checkedMetric.Fields) {
|
||||
return
|
||||
}
|
||||
|
||||
assert.Fail(t, fmt.Sprintf("incorrect fields %v of metric %s", fields, metricName))
|
||||
}
|
||||
|
||||
assert.Fail(t, fmt.Sprintf("unknown measurement '%s' with tags: %v, fields: %v", metricName, map[string]string{"le": le}, fields))
|
||||
}
|
||||
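The assertions above encode Prometheus-style cumulative buckets: an output point tagged `le=X` counts every observed value less than or equal to X, and `bucketInf` counts all values. A self-contained sketch (not the plugin's code) that reproduces the counts asserted in TestHistogramWithPeriodAndOneField:

```go
package main

import "fmt"

// cumulativeCounts mirrors the bucket semantics: for each bucket bound le,
// count how many observed values are <= le.
func cumulativeCounts(buckets, values []float64) map[float64]int64 {
	counts := make(map[float64]int64, len(buckets))
	for _, le := range buckets {
		counts[le] = 0
		for _, v := range values {
			if v <= le {
				counts[le]++
			}
		}
	}
	return counts
}

func main() {
	// Field "a" of firstMetric1 and firstMetric2: 15.3 and 15.9.
	fmt.Println(cumulativeCounts(
		[]float64{0, 10, 20, 30, 40},
		[]float64{15.3, 15.9},
	))
	// map[0:0 10:0 20:2 30:2 40:2] — the counts the test asserts, with
	// bucketInf (+Inf) additionally counting every value.
}
```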
42 plugins/aggregators/minmax/README.md Normal file
@@ -0,0 +1,42 @@
# MinMax Aggregator Plugin

The minmax aggregator plugin aggregates min & max values of each field it sees,
emitting the aggregate every `period` seconds.

### Configuration:

```toml
# Keep the aggregate min/max of each metric passing through.
[[aggregators.minmax]]
  ## General Aggregator Arguments:
  ## The period on which to flush & clear the aggregator.
  period = "30s"
  ## If true, the original metric will be dropped by the
  ## aggregator and will not get sent to the output plugins.
  drop_original = false
```

### Measurements & Fields:

- measurement1
    - field1_max
    - field1_min

### Tags:

No tags are applied by this aggregator.

### Example Output:

```
$ telegraf --config telegraf.conf --quiet
system,host=tars load1=1.72 1475583980000000000
system,host=tars load1=1.6 1475583990000000000
system,host=tars load1=1.66 1475584000000000000
system,host=tars load1=1.63 1475584010000000000
system,host=tars load1_max=1.72,load1_min=1.6 1475584010000000000
system,host=tars load1=1.46 1475584020000000000
system,host=tars load1=1.39 1475584030000000000
system,host=tars load1=1.41 1475584040000000000
system,host=tars load1_max=1.46,load1_min=1.39 1475584040000000000
```
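To make that per-period behavior concrete, here is a minimal hypothetical Go program that drives the aggregator directly, using only APIs that appear in this changeset (`NewMinMax`, `Add`, `Push`, `metric.New`, and the testutil accumulator); in a running Telegraf the agent performs these calls on the configured `period`:

```go
package main

import (
	"fmt"
	"time"

	"github.com/influxdata/telegraf/metric"
	"github.com/influxdata/telegraf/plugins/aggregators/minmax"
	"github.com/influxdata/telegraf/testutil"
)

func main() {
	mm := minmax.NewMinMax()

	// Two observations of the same series within one period.
	for _, load := range []float64{1.72, 1.6} {
		m, err := metric.New("system",
			map[string]string{"host": "tars"},
			map[string]interface{}{"load1": load},
			time.Now(),
		)
		if err != nil {
			panic(err)
		}
		mm.Add(m)
	}

	// At the end of the period the agent would call Push (and then Reset).
	acc := &testutil.Accumulator{}
	mm.Push(acc)
	for _, out := range acc.Metrics {
		fmt.Println(out.Measurement, out.Fields)
		// system map[load1_max:1.72 load1_min:1.6]
	}
}
```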
119 plugins/aggregators/minmax/minmax.go Normal file
@@ -0,0 +1,119 @@
package minmax

import (
	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/aggregators"
)

type MinMax struct {
	cache map[uint64]aggregate
}

func NewMinMax() telegraf.Aggregator {
	mm := &MinMax{}
	mm.Reset()
	return mm
}

type aggregate struct {
	fields map[string]minmax
	name   string
	tags   map[string]string
}

type minmax struct {
	min float64
	max float64
}

var sampleConfig = `
  ## General Aggregator Arguments:
  ## The period on which to flush & clear the aggregator.
  period = "30s"
  ## If true, the original metric will be dropped by the
  ## aggregator and will not get sent to the output plugins.
  drop_original = false
`

func (m *MinMax) SampleConfig() string {
	return sampleConfig
}

func (m *MinMax) Description() string {
	return "Keep the aggregate min/max of each metric passing through."
}

func (m *MinMax) Add(in telegraf.Metric) {
	id := in.HashID()
	if _, ok := m.cache[id]; !ok {
		// hit an uncached metric, create caches for first time:
		a := aggregate{
			name:   in.Name(),
			tags:   in.Tags(),
			fields: make(map[string]minmax),
		}
		for k, v := range in.Fields() {
			if fv, ok := convert(v); ok {
				a.fields[k] = minmax{
					min: fv,
					max: fv,
				}
			}
		}
		m.cache[id] = a
	} else {
		for k, v := range in.Fields() {
			if fv, ok := convert(v); ok {
				if _, ok := m.cache[id].fields[k]; !ok {
					// hit an uncached field of a cached metric
					m.cache[id].fields[k] = minmax{
						min: fv,
						max: fv,
					}
					continue
				}
				if fv < m.cache[id].fields[k].min {
					tmp := m.cache[id].fields[k]
					tmp.min = fv
					m.cache[id].fields[k] = tmp
				} else if fv > m.cache[id].fields[k].max {
					tmp := m.cache[id].fields[k]
					tmp.max = fv
					m.cache[id].fields[k] = tmp
				}
			}
		}
	}
}

func (m *MinMax) Push(acc telegraf.Accumulator) {
	for _, aggregate := range m.cache {
		fields := map[string]interface{}{}
		for k, v := range aggregate.fields {
			fields[k+"_min"] = v.min
			fields[k+"_max"] = v.max
		}
		acc.AddFields(aggregate.name, fields, aggregate.tags)
	}
}

func (m *MinMax) Reset() {
	m.cache = make(map[uint64]aggregate)
}

func convert(in interface{}) (float64, bool) {
	switch v := in.(type) {
	case float64:
		return v, true
	case int64:
		return float64(v), true
	default:
		return 0, false
	}
}

func init() {
	aggregators.Add("minmax", func() telegraf.Aggregator {
		return NewMinMax()
	})
}
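One detail in `Add` worth calling out: Go forbids assigning to a field of a struct value stored directly in a map, which is why the code copies into `tmp`, mutates the copy, and writes it back. A standalone illustration of that language rule:

```go
package main

import "fmt"

type minmax struct{ min, max float64 }

func main() {
	cache := map[string]minmax{"load1": {min: 1.6, max: 1.6}}

	// cache["load1"].max = 1.72 // compile error: cannot assign to struct field
	tmp := cache["load1"]
	tmp.max = 1.72
	cache["load1"] = tmp

	fmt.Println(cache["load1"]) // {1.6 1.72}
}
```

The alternative would be a map of pointers (`map[string]*minmax`), which allows in-place mutation at the cost of extra allocations.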
162 plugins/aggregators/minmax/minmax_test.go Normal file
@@ -0,0 +1,162 @@
package minmax

import (
	"testing"
	"time"

	"github.com/influxdata/telegraf/metric"
	"github.com/influxdata/telegraf/testutil"
)

var m1, _ = metric.New("m1",
	map[string]string{"foo": "bar"},
	map[string]interface{}{
		"a": int64(1),
		"b": int64(1),
		"c": int64(1),
		"d": int64(1),
		"e": int64(1),
		"f": float64(2),
		"g": float64(2),
		"h": float64(2),
		"i": float64(2),
		"j": float64(3),
	},
	time.Now(),
)
var m2, _ = metric.New("m1",
	map[string]string{"foo": "bar"},
	map[string]interface{}{
		"a":        int64(1),
		"b":        int64(3),
		"c":        int64(3),
		"d":        int64(3),
		"e":        int64(3),
		"f":        float64(1),
		"g":        float64(1),
		"h":        float64(1),
		"i":        float64(1),
		"j":        float64(1),
		"k":        float64(200),
		"ignoreme": "string",
		"andme":    true,
	},
	time.Now(),
)

func BenchmarkApply(b *testing.B) {
	minmax := NewMinMax()

	for n := 0; n < b.N; n++ {
		minmax.Add(m1)
		minmax.Add(m2)
	}
}

// Test two metrics getting added.
func TestMinMaxWithPeriod(t *testing.T) {
	acc := testutil.Accumulator{}
	minmax := NewMinMax()

	minmax.Add(m1)
	minmax.Add(m2)
	minmax.Push(&acc)

	expectedFields := map[string]interface{}{
		"a_max": float64(1),
		"a_min": float64(1),
		"b_max": float64(3),
		"b_min": float64(1),
		"c_max": float64(3),
		"c_min": float64(1),
		"d_max": float64(3),
		"d_min": float64(1),
		"e_max": float64(3),
		"e_min": float64(1),
		"f_max": float64(2),
		"f_min": float64(1),
		"g_max": float64(2),
		"g_min": float64(1),
		"h_max": float64(2),
		"h_min": float64(1),
		"i_max": float64(2),
		"i_min": float64(1),
		"j_max": float64(3),
		"j_min": float64(1),
		"k_max": float64(200),
		"k_min": float64(200),
	}
	expectedTags := map[string]string{
		"foo": "bar",
	}
	acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
}

// Test two metrics getting added with a push/reset in between (simulates
// getting added in different periods).
func TestMinMaxDifferentPeriods(t *testing.T) {
	acc := testutil.Accumulator{}
	minmax := NewMinMax()

	minmax.Add(m1)
	minmax.Push(&acc)
	expectedFields := map[string]interface{}{
		"a_max": float64(1),
		"a_min": float64(1),
		"b_max": float64(1),
		"b_min": float64(1),
		"c_max": float64(1),
		"c_min": float64(1),
		"d_max": float64(1),
		"d_min": float64(1),
		"e_max": float64(1),
		"e_min": float64(1),
		"f_max": float64(2),
		"f_min": float64(2),
		"g_max": float64(2),
		"g_min": float64(2),
		"h_max": float64(2),
		"h_min": float64(2),
		"i_max": float64(2),
		"i_min": float64(2),
		"j_max": float64(3),
		"j_min": float64(3),
	}
	expectedTags := map[string]string{
		"foo": "bar",
	}
	acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)

	acc.ClearMetrics()
	minmax.Reset()
	minmax.Add(m2)
	minmax.Push(&acc)
	expectedFields = map[string]interface{}{
		"a_max": float64(1),
		"a_min": float64(1),
		"b_max": float64(3),
		"b_min": float64(3),
		"c_max": float64(3),
		"c_min": float64(3),
		"d_max": float64(3),
		"d_min": float64(3),
		"e_max": float64(3),
		"e_min": float64(3),
		"f_max": float64(1),
		"f_min": float64(1),
		"g_max": float64(1),
		"g_min": float64(1),
		"h_max": float64(1),
		"h_min": float64(1),
		"i_max": float64(1),
		"i_min": float64(1),
		"j_max": float64(1),
		"j_min": float64(1),
		"k_max": float64(200),
		"k_min": float64(200),
	}
	expectedTags = map[string]string{
		"foo": "bar",
	}
	acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
}
11 plugins/aggregators/registry.go Normal file
@@ -0,0 +1,11 @@
package aggregators

import "github.com/influxdata/telegraf"

type Creator func() telegraf.Aggregator

var Aggregators = map[string]Creator{}

func Add(name string, creator Creator) {
	Aggregators[name] = creator
}
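This registry is what ties the aggregators into the rest of Telegraf: each plugin package calls `Add` from its `init()`, and a consumer instantiates by name from the `Aggregators` map. A hedged sketch of that flow, assuming the blank-imported minmax package above and assuming `telegraf.Aggregator` exposes `Description()` as the MinMax implementation suggests:

```go
package main

import (
	"fmt"

	"github.com/influxdata/telegraf/plugins/aggregators"
	// A blank import runs the plugin package's init(), which registers it:
	_ "github.com/influxdata/telegraf/plugins/aggregators/minmax"
)

func main() {
	creator, ok := aggregators.Aggregators["minmax"]
	if !ok {
		panic("minmax aggregator not registered")
	}
	agg := creator()
	fmt.Println(agg.Description()) // Keep the aggregate min/max of each metric passing through.
}
```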
@@ -1,6 +0,0 @@
package all

import (
	_ "github.com/influxdb/tivan/plugins/redis"
	_ "github.com/influxdb/tivan/plugins/system"
)
61 plugins/inputs/EXAMPLE_README.md Normal file
@@ -0,0 +1,61 @@
# Example Input Plugin

The example plugin gathers metrics about example things. This description
explains at a high level what the plugin does and provides links to where
additional information can be found.

### Configuration:

This section contains the default TOML to configure the plugin. You can
generate it using `telegraf --usage <plugin-name>`.

```toml
# Description
[[inputs.example]]
  example_option = "example_value"
```

### Metrics:

Here you should add an optional description and links to where the user can
get more information about the measurements.

If the output is determined dynamically based on the input source, or there
are more metrics than can reasonably be listed, describe how the input is
mapped to the output.

- measurement1
  - tags:
    - tag1 (optional description)
    - tag2
  - fields:
    - field1 (type, unit)
    - field2 (float, percent)

- measurement2
  - tags:
    - tag3
  - fields:
    - field3 (integer, bytes)

### Sample Queries:

This section should contain some useful InfluxDB queries that can be used to
get started with the plugin or to generate dashboards. For each query listed,
describe at a high level what data is returned.

Get the max, mean, and min for the measurement in the last hour:
```
SELECT max(field1), mean(field1), min(field1) FROM measurement1 WHERE tag1='bar' AND time > now() - 1h GROUP BY tag1
```

### Example Output:

This section shows example output in Line Protocol format. You can often use
`telegraf --input-filter <plugin-name> --test` or use the `file` output to get
this information.

```
measurement1,tag1=foo,tag2=bar field1=1i,field2=2.1 1453831884664956455
measurement2,tag1=foo,tag2=bar,tag3=baz field3=1i 1453831884664956455
```