Compare commits

122 Commits: 0.13.2...1.0.0-beta

| SHA1 |
|---|
| 03d02fa67a |
| b58cd78c79 |
| dabb6f5466 |
| 281a4d5500 |
| 1c2965703d |
| 5dc4cce157 |
| 8c7edeb53b |
| 1d9745ee98 |
| 2d6c8767f7 |
| b4a6d9c647 |
| 6afe9ceef1 |
| 704d9ad76c |
| 300d9adbd0 |
| 207c5498e7 |
| d5e7439343 |
| 21add2c799 |
| 4651ab88ad |
| 53f40063b3 |
| 97d92bba67 |
| bfdd665435 |
| 821d3fafa6 |
| 7c9b312cee |
| 69ab8a645c |
| 7b550c11cb |
| bb4f18ca88 |
| 6efe91ea9c |
| 5f0a63f554 |
| d14e7536ab |
| c873937356 |
| e1c3800cd9 |
| c046232425 |
| 2d4864e126 |
| 048448aa93 |
| 755b2ec953 |
| f62c493c77 |
| a6365a6086 |
| f7e057ec55 |
| 30cc00d11b |
| d641c42029 |
| 9c2ca805da |
| b0484d8a0c |
| 5ddd61d2e2 |
| 50ea7f4a9d |
| b18134a4e3 |
| 7825df4771 |
| d6951dacdc |
| e603825e37 |
| e3448153e1 |
| 25848c545a |
| 3098564896 |
| 4b6f9b93dd |
| 2beef21231 |
| cb3c54a1ae |
| d50a1e83ac |
| 1f10639222 |
| af0979cce5 |
| 5b43901bd8 |
| d7efb7a71d |
| 4d242836ee |
| 06cb5a041e |
| ea2521bf27 |
| 4cd1f7a104 |
| 137843b2f6 |
| 008ed17a79 |
| 75e6cb9064 |
| ad88a9421a |
| 346deb30a3 |
| 8c3d7cd145 |
| 821b30eb92 |
| a362352587 |
| 94f952787f |
| 3ff184c061 |
| 80368e3936 |
| 2c448e22e1 |
| 1aabd38eb2 |
| 675457873a |
| 8173338f8a |
| c4841843a9 |
| f08a27be5d |
| a4b36d12dd |
| c842724b61 |
| fb5f40319e |
| 52b9fc837c |
| 6f991ec78a |
| 7921d87a45 |
| 9f7a758bf9 |
| 0aff7a0bc1 |
| c4cfdb8a25 |
| 342cfc4087 |
| bd1282eddf |
| 892abec025 |
| e809c4e445 |
| 9ff536d94d |
| 4f27315720 |
| 958ef2f872 |
| 069764f05e |
| eeeab5192b |
| a7dfbce3d3 |
| ed2d1d9bb7 |
| 0fb2d2ffae |
| 3af65e7abb |
| 984b6cb0fb |
| ca504a19ec |
| c2797c85d1 |
| d5add07c0b |
| 0ebf1c1ad7 |
| 42d7fc5e16 |
| 6828fc48e1 |
| 98d91b1c89 |
| 9bbdb2d562 |
| a8334c3261 |
| 9144f9630b |
| 3e4a19539a |
| 5fe7e6e40e |
| 58f2ba1247 |
| 5f3a91bffd |
| 6351aa5167 |
| 9966099d1a |
| 1ef5599361 |
| c78b6cdb4e |
| d736c7235a |
| 475252d873 |
.gitattributes (vendored, 4 changes)

@@ -1,2 +1,4 @@
CHANGELOG.md merge=union
README.md merge=union
plugins/inputs/all/all.go merge=union
plugins/outputs/all/all.go merge=union
.github/ISSUE_TEMPLATE.md (vendored, 2 changes)

@@ -11,6 +11,8 @@ Erase the other section and everything on and above this line.

## Bug report

### Relevant telegraf.conf:

### System info:

[Include Telegraf version, operating system name, and other relevant details]
.gitignore (vendored, 1 change)

@@ -1,3 +1,4 @@
build
tivan
.vagrant
/telegraf
CHANGELOG.md (141 changes)

@@ -1,5 +1,146 @@
## v1.0 [unreleased]

## v1.0 beta 3 [2016-07-18]

### Release Notes

**Breaking Change**: Aerospike main server node measurements have been renamed
aerospike_node. Aerospike namespace measurements have been renamed to
aerospike_namespace. They will also now be tagged with the node_name
that they correspond to. This has been done to differentiate measurements
that pertain to node vs. namespace statistics.

**Breaking Change**: users of github_webhooks must change to the new
`[[inputs.webhooks]]` plugin.

This means that the default github_webhooks config:

```
# A Github Webhook Event collector
[[inputs.github_webhooks]]
  ## Address and port to host Webhook listener on
  service_address = ":1618"
```

should now look like:

```
# A Webhooks Event collector
[[inputs.webhooks]]
  ## Address and port to host Webhook listener on
  service_address = ":1618"

  [inputs.webhooks.github]
    path = "/"
```

### Features

- [#1289](https://github.com/influxdata/telegraf/pull/1289): webhooks input plugin. Thanks @francois2metz and @cduez!
- [#1247](https://github.com/influxdata/telegraf/pull/1247): rollbar webhook plugin.
- [#1408](https://github.com/influxdata/telegraf/pull/1408): mandrill webhook plugin.
- [#1402](https://github.com/influxdata/telegraf/pull/1402): docker-machine/boot2docker no longer required for unit tests.
- [#1350](https://github.com/influxdata/telegraf/pull/1350): cgroup input plugin.
- [#1369](https://github.com/influxdata/telegraf/pull/1369): Add input plugin for consuming metrics from NSQD.
- [#1480](https://github.com/influxdata/telegraf/pull/1480): add ability to read redis from a socket.
- [#1387](https://github.com/influxdata/telegraf/pull/1387): **Breaking Change** - Redis `role` tag renamed to `replication_role` to avoid global_tags override
- [#1437](https://github.com/influxdata/telegraf/pull/1437): Fetching Galera status metrics in MySQL
- [#1500](https://github.com/influxdata/telegraf/pull/1500): Aerospike plugin refactored to use official client lib.
- [#1434](https://github.com/influxdata/telegraf/pull/1434): Add measurement name arg to logparser plugin.
- [#1479](https://github.com/influxdata/telegraf/pull/1479): logparser: change resp_code from a field to a tag.

### Bugfixes

- [#1472](https://github.com/influxdata/telegraf/pull/1472): diskio input plugin: set 'skip_serial_number = true' by default to avoid high cardinality.
- [#1426](https://github.com/influxdata/telegraf/pull/1426): nil metrics panic fix.
- [#1384](https://github.com/influxdata/telegraf/pull/1384): Fix datarace in apache input plugin.
- [#1399](https://github.com/influxdata/telegraf/issues/1399): Add `read_repairs` statistics to riak plugin.
- [#1405](https://github.com/influxdata/telegraf/issues/1405): Fix memory/connection leak in prometheus input plugin.
- [#1378](https://github.com/influxdata/telegraf/issues/1378): Trim BOM from config file for Windows support.
- [#1339](https://github.com/influxdata/telegraf/issues/1339): Prometheus client output panic on service reload.
- [#1461](https://github.com/influxdata/telegraf/pull/1461): Prometheus parser, protobuf format header fix.
- [#1334](https://github.com/influxdata/telegraf/issues/1334): Prometheus output, metric refresh and caching fixes.
- [#1432](https://github.com/influxdata/telegraf/issues/1432): Panic fix for multiple graphite outputs under very high load.
- [#1412](https://github.com/influxdata/telegraf/pull/1412): Instrumental output has better reconnect behavior
- [#1460](https://github.com/influxdata/telegraf/issues/1460): Remove PID from procstat plugin to fix cardinality issues.
- [#1427](https://github.com/influxdata/telegraf/issues/1427): Cassandra input: version 2.x "column family" fix.
- [#1463](https://github.com/influxdata/telegraf/issues/1463): Shared WaitGroup in Exec plugin
- [#1436](https://github.com/influxdata/telegraf/issues/1436): logparser: honor modifiers in "pattern" config.
- [#1418](https://github.com/influxdata/telegraf/issues/1418): logparser: error and exit on file permissions/missing errors.

## v1.0 beta 2 [2016-06-21]

### Features

- [#1340](https://github.com/influxdata/telegraf/issues/1340): statsd: do not log every dropped metric.
- [#1368](https://github.com/influxdata/telegraf/pull/1368): Add precision rounding to all metrics on collection.
- [#1390](https://github.com/influxdata/telegraf/pull/1390): Add support for Tengine
- [#1320](https://github.com/influxdata/telegraf/pull/1320): Logparser input plugin for parsing grok-style log patterns.
- [#1397](https://github.com/influxdata/telegraf/issues/1397): ElasticSearch: now supports connecting to ElasticSearch via SSL

### Bugfixes

- [#1330](https://github.com/influxdata/telegraf/issues/1330): Fix exec plugin panic when using single binary.
- [#1336](https://github.com/influxdata/telegraf/issues/1336): Fixed incorrect prometheus metrics source selection.
- [#1112](https://github.com/influxdata/telegraf/issues/1112): Set default Zookeeper chroot to empty string.
- [#1335](https://github.com/influxdata/telegraf/issues/1335): Fix overall ping timeout to be calculated based on per-ping timeout.
- [#1374](https://github.com/influxdata/telegraf/pull/1374): Change "default" retention policy to "".
- [#1377](https://github.com/influxdata/telegraf/issues/1377): Graphite output mangling '%' character.
- [#1396](https://github.com/influxdata/telegraf/pull/1396): Prometheus input plugin now supports x509 certs authentication

## v1.0 beta 1 [2016-06-07]

### Release Notes

- `flush_jitter` behavior has been changed. The random jitter will now be
evaluated at every flush interval, rather than once at startup. This makes it
consistent with the behavior of `collection_jitter`.
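For illustration, a minimal `[agent]` snippet exercising the new behavior (the interval and jitter values here are hypothetical; the comment mirrors the sample config further down in this diff):

```toml
[agent]
  ## Default flushing interval for all outputs.
  flush_interval = "10s"
  ## Jitter the flush interval by a random amount. Since the jitter is now
  ## re-evaluated at every flush, a jitter of 5s and an interval of 10s means
  ## flushes will happen every 10-15s.
  flush_jitter = "5s"
```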
- All AWS plugins now utilize a standard mechanism for evaluating credentials.
This allows all AWS plugins to support environment variables, shared credential
files & profiles, and role assumptions. See the specific plugin README for
details.
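As a concrete sketch, these are the shared credential options as they appear in the sample `[[outputs.cloudwatch]]` section later in this diff; all keys are optional and are evaluated in the documented order:

```toml
[[outputs.cloudwatch]]
  region = 'us-east-1'
  ## Credentials are loaded in the following order
  ## 1) Assumed credentials via STS if role_arn is specified
  ## 2) explicit credentials from 'access_key' and 'secret_key'
  ## 3) shared profile from 'profile'
  ## 4) environment variables
  ## 5) shared credentials file
  ## 6) EC2 Instance Profile
  # access_key = ""
  # secret_key = ""
  # token = ""
  # role_arn = ""
  # profile = ""
  # shared_credential_file = ""
```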
- The AWS CloudWatch input plugin can now declare a wildcard value for a metric
dimension. This causes the plugin to read all metrics that contain the specified
dimension key regardless of value. This is used to export collections of metrics
without having to know the dimension values ahead of time.

- The AWS CloudWatch input plugin can now be configured with the `cache_ttl`
attribute. This configures the TTL of the internal metric cache. This is useful
in conjunction with wildcard dimension values as it will control the amount of
time before a new metric is included by the plugin.
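A sketch combining both options; the `metrics`/`dimensions` block structure follows the sample config later in this diff, while the `'*'` wildcard value and the `Latency`/`LoadBalancerName` names are illustrative assumptions:

```toml
[[inputs.cloudwatch]]
  region = 'us-east-1'
  namespace = 'AWS/ELB'
  period = '1m'
  delay = '1m'
  ## TTL of the internal metric cache; controls how long before a newly
  ## appearing metric is picked up (defaults to 1 hr if not specified)
  cache_ttl = '10m'
  [[inputs.cloudwatch.metrics]]
    names = ['Latency']
    [[inputs.cloudwatch.metrics.dimensions]]
      name = 'LoadBalancerName'
      ## wildcard: pull all metrics that carry this dimension key, any value
      value = '*'
```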
### Features

- [#1262](https://github.com/influxdata/telegraf/pull/1261): Add graylog input plugin.
- [#1294](https://github.com/influxdata/telegraf/pull/1294): consul input plugin. Thanks @harnash
- [#1164](https://github.com/influxdata/telegraf/pull/1164): conntrack input plugin. Thanks @robinpercy!
- [#1165](https://github.com/influxdata/telegraf/pull/1165): vmstat input plugin. Thanks @jshim-xm!
- [#1208](https://github.com/influxdata/telegraf/pull/1208): Standardized AWS credentials evaluation & wildcard CloudWatch dimensions. Thanks @johnrengelman!
- [#1264](https://github.com/influxdata/telegraf/pull/1264): Add SSL config options to http_response plugin.
- [#1272](https://github.com/influxdata/telegraf/pull/1272): graphite parser: add ability to specify multiple tag keys, for consistency with influxdb parser.
- [#1265](https://github.com/influxdata/telegraf/pull/1265): Make dns lookups for chrony configurable. Thanks @zbindenren!
- [#1275](https://github.com/influxdata/telegraf/pull/1275): Allow wildcard filtering of varnish stats.
- [#1142](https://github.com/influxdata/telegraf/pull/1142): Support for glob patterns in exec plugin commands configuration.
- [#1278](https://github.com/influxdata/telegraf/pull/1278): RabbitMQ input: made url parameter optional by using DefaultURL (http://localhost:15672) if not specified
- [#1197](https://github.com/influxdata/telegraf/pull/1197): Limit AWS GetMetricStatistics requests to 10 per second.
- [#1278](https://github.com/influxdata/telegraf/pull/1278) & [#1288](https://github.com/influxdata/telegraf/pull/1288) & [#1295](https://github.com/influxdata/telegraf/pull/1295): RabbitMQ/Apache/InfluxDB inputs: made url(s) parameter optional by using reasonable input defaults if not specified
- [#1296](https://github.com/influxdata/telegraf/issues/1296): Refactor of flush_jitter argument.
- [#1213](https://github.com/influxdata/telegraf/issues/1213): Add inactive & active memory to mem plugin.

### Bugfixes

- [#1252](https://github.com/influxdata/telegraf/pull/1252) & [#1279](https://github.com/influxdata/telegraf/pull/1279): Fix systemd service. Thanks @zbindenren & @PierreF!
- [#1221](https://github.com/influxdata/telegraf/pull/1221): Fix influxdb n_shards counter.
- [#1258](https://github.com/influxdata/telegraf/pull/1258): Fix potential kernel plugin integer parse error.
- [#1268](https://github.com/influxdata/telegraf/pull/1268): Fix potential influxdb input type assertion panic.
- [#1283](https://github.com/influxdata/telegraf/pull/1283): Still send processes metrics if a process exited during metric collection.
- [#1297](https://github.com/influxdata/telegraf/issues/1297): disk plugin panic when usage grab fails.
- [#1316](https://github.com/influxdata/telegraf/pull/1316): Removed leaked "database" tag on redis metrics. Thanks @PierreF!
- [#1323](https://github.com/influxdata/telegraf/issues/1323): Processes plugin: fix potential error with /proc/net/stat directory.
- [#1322](https://github.com/influxdata/telegraf/issues/1322): Fix rare RHEL 5.2 panic in gopsutil diskio gathering function.

## v0.13.1 [2016-05-24]

### Release Notes

@@ -114,7 +114,7 @@ creating the `Parser` object.

You should also add the following to your SampleConfig() return:

```toml
  ## Data format to consume.
  ## Each data format has it's own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
```

@@ -212,8 +212,8 @@ func (s *Simple) Close() error {
}

func (s *Simple) Write(metrics []telegraf.Metric) error {
-	for _, pt := range points {
-		// write `pt` to the output sink here
+	for _, metric := range metrics {
+		// write `metric` to the output sink here
	}
	return nil
}

@@ -244,7 +244,7 @@ instantiating and creating the `Serializer` object.

You should also add the following to your SampleConfig() return:

```toml
  ## Data format to output.
  ## Each data format has it's own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
```

@@ -290,10 +290,6 @@ To execute Telegraf tests follow these simple steps:
instructions
- execute `make test`

-**OSX users**: you will need to install `boot2docker` or `docker-machine`.
-The Makefile will assume that you have a `docker-machine` box called `default` to
-get the IP address.

### Unit test troubleshooting

Try cleaning up your test environment by executing `make docker-kill` and
Godeps (9 changes)

@@ -1,5 +1,6 @@
github.com/Shopify/sarama 8aadb476e66ca998f2f6bb3c993e9a2daa3666b9
github.com/Sirupsen/logrus 219c8cb75c258c552e999735be6df753ffc7afdc
+github.com/aerospike/aerospike-client-go 45863b7fd8640dc12f7fdd397104d97e1986f25a
github.com/amir/raidman 53c1b967405155bfc8758557863bf2e14f814687
github.com/aws/aws-sdk-go 13a12060f716145019378a10e2806c174356b857
github.com/beorn7/perks 3ac7bf7a47d159a033b107610db8a1b6575507a4

@@ -16,13 +17,14 @@ github.com/eapache/go-resiliency b86b1ec0dd4209a588dc1285cdd471e73525c0b3
github.com/eapache/queue ded5959c0d4e360646dc9e9908cff48666781367
github.com/eclipse/paho.mqtt.golang 0f7a459f04f13a41b7ed752d47944528d4bf9a86
github.com/go-sql-driver/mysql 1fca743146605a172a266e1654e01e5cd5669bee
-github.com/gobwas/glob d877f6352135181470c40c73ebb81aefa22115fa
+github.com/gobwas/glob 49571a1557cd20e6a2410adc6421f85b66c730b5
github.com/golang/protobuf 552c7b9542c194800fd493123b3798ef0a832032
github.com/golang/snappy 427fb6fc07997f43afa32f35e850833760e489a7
github.com/gonuts/go-shellquote e842a11b24c6abfb3dd27af69a17f482e4b483c2
github.com/gorilla/context 1ea25387ff6f684839d82767c1733ff4d4d15d0a
github.com/gorilla/mux c9e326e2bdec29039a3761c07bece13133863e1e
github.com/hailocab/go-hostpool e80d13ce29ede4452c43dea11e79b9bc8a15b478
+github.com/hashicorp/consul 5aa90455ce78d4d41578bafc86305e6e6b28d7d2
github.com/hpcloud/tail b2940955ab8b26e19d43a43c4da0475dd81bdb56
github.com/influxdata/config b79f6829346b8d6e78ba73544b1e1038f1f1c9da
github.com/influxdata/influxdb e094138084855d444195b252314dfee9eae34cab

@@ -42,12 +44,15 @@ github.com/prometheus/client_model fa8ad6fec33561be4280a8f0514318c79d7f6cb6
github.com/prometheus/common e8eabff8812b05acf522b45fdcd725a785188e37
github.com/prometheus/procfs 406e5b7bfd8201a36e2bb5f7bdae0b03380c2ce8
github.com/samuel/go-zookeeper 218e9c81c0dd8b3b18172b2bbfad92cc7d6db55f
-github.com/shirou/gopsutil 83c6e72cbdef6e8ada934549abf700ff0ba96776
+github.com/shirou/gopsutil 586bb697f3ec9f8ec08ffefe18f521a64534037c
github.com/soniah/gosnmp b1b4f885b12c5dcbd021c5cee1c904110de6db7d
github.com/sparrc/aerospike-client-go d4bb42d2c2d39dae68e054116f4538af189e05d5
github.com/streadway/amqp b4f3ceab0337f013208d31348b578d83c0064744
github.com/stretchr/testify 1f4a1643a57e798696635ea4c126e9127adb7d3c
github.com/vjeantet/grok 83bfdfdfd1a8146795b28e547a8e3c8b28a466c2
github.com/wvanbergen/kafka 46f9a1cf3f670edec492029fadded9c2d9e18866
github.com/wvanbergen/kazoo-go 0f768712ae6f76454f987c3356177e138df258f8
github.com/yuin/gopher-lua bf3808abd44b1e55143a2d7f08571aaa80db1808
github.com/zensqlmonitor/go-mssqldb ffe5510c6fa5e15e6d983210ab501c815b56b363
golang.org/x/crypto 5dc8cb4b8a8eb076cbb5a06bc3b8682c15bdbbd3
golang.org/x/net 6acef71eb69611914f7a30939ea9f6e194c78172
Makefile (24 changes)

@@ -1,4 +1,3 @@
-UNAME := $(shell sh -c 'uname')
VERSION := $(shell sh -c 'git describe --always --tags')
ifdef GOBIN
PATH := $(GOBIN):$(PATH)

@@ -26,10 +25,6 @@ build-for-docker:
		"-s -X main.version=$(VERSION)" \
		./cmd/telegraf/telegraf.go

-# Build with race detector
-dev: prepare
-	go build -race -ldflags "-X main.version=$(VERSION)" ./...
-
# run package script
package:
	./scripts/build.py --package --version="$(VERSION)" --platform=linux --arch=all --upload

@@ -46,27 +41,17 @@ prepare-windows:

# Run all docker containers necessary for unit tests
docker-run:
-ifeq ($(UNAME), Darwin)
-	docker run --name kafka \
-		-e ADVERTISED_HOST=$(shell sh -c 'boot2docker ip || docker-machine ip default') \
-		-e ADVERTISED_PORT=9092 \
-		-p "2181:2181" -p "9092:9092" \
-		-d spotify/kafka
-endif
-ifeq ($(UNAME), Linux)
	docker run --name kafka \
		-e ADVERTISED_HOST=localhost \
		-e ADVERTISED_PORT=9092 \
		-p "2181:2181" -p "9092:9092" \
		-d spotify/kafka
-endif
	docker run --name mysql -p "3306:3306" -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -d mysql
	docker run --name memcached -p "11211:11211" -d memcached
	docker run --name postgres -p "5432:5432" -d postgres
	docker run --name rabbitmq -p "15672:15672" -p "5672:5672" -d rabbitmq:3-management
-	docker run --name opentsdb -p "4242:4242" -d petergrace/opentsdb-docker
	docker run --name redis -p "6379:6379" -d redis
-	docker run --name aerospike -p "3000:3000" -d aerospike
+	docker run --name aerospike -p "3000:3000" -d aerospike/aerospike-server
	docker run --name nsq -p "4150:4150" -d nsqio/nsq /nsqd
	docker run --name mqtt -p "1883:1883" -d ncarlier/mqtt
	docker run --name riemann -p "5555:5555" -d blalor/riemann

@@ -79,8 +64,7 @@ docker-run-circle:
		-e ADVERTISED_PORT=9092 \
		-p "2181:2181" -p "9092:9092" \
		-d spotify/kafka
-	docker run --name opentsdb -p "4242:4242" -d petergrace/opentsdb-docker
-	docker run --name aerospike -p "3000:3000" -d aerospike
+	docker run --name aerospike -p "3000:3000" -d aerospike/aerospike-server
	docker run --name nsq -p "4150:4150" -d nsqio/nsq /nsqd
	docker run --name mqtt -p "1883:1883" -d ncarlier/mqtt
	docker run --name riemann -p "5555:5555" -d blalor/riemann

@@ -88,8 +72,8 @@ docker-run-circle:

# Kill all docker containers, ignore errors
docker-kill:
-	-docker kill nsq aerospike redis opentsdb rabbitmq postgres memcached mysql kafka mqtt riemann snmp
-	-docker rm nsq aerospike redis opentsdb rabbitmq postgres memcached mysql kafka mqtt riemann snmp
+	-docker kill nsq aerospike redis rabbitmq postgres memcached mysql kafka mqtt riemann snmp
+	-docker rm nsq aerospike redis rabbitmq postgres memcached mysql kafka mqtt riemann snmp

# Run full unit tests using docker containers (includes setup and teardown)
test: vet docker-kill docker-run
README.md (29 changes)

@@ -20,12 +20,12 @@ new plugins.

### Linux deb and rpm Packages:

Latest:
-* https://dl.influxdata.com/telegraf/releases/telegraf_0.13.1_amd64.deb
-* https://dl.influxdata.com/telegraf/releases/telegraf-0.13.1.x86_64.rpm
+* https://dl.influxdata.com/telegraf/releases/telegraf_1.0.0-beta3_amd64.deb
+* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0_beta3.x86_64.rpm

Latest (arm):
-* https://dl.influxdata.com/telegraf/releases/telegraf_0.13.1_armhf.deb
-* https://dl.influxdata.com/telegraf/releases/telegraf-0.13.1.armhf.rpm
+* https://dl.influxdata.com/telegraf/releases/telegraf_1.0.0-beta3_armhf.deb
+* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0_beta3.armhf.rpm

##### Package Instructions:

@@ -46,14 +46,14 @@ to use this repo to install & update telegraf.

### Linux tarballs:

Latest:
-* https://dl.influxdata.com/telegraf/releases/telegraf-0.13.1_linux_amd64.tar.gz
-* https://dl.influxdata.com/telegraf/releases/telegraf-0.13.1_linux_i386.tar.gz
-* https://dl.influxdata.com/telegraf/releases/telegraf-0.13.1_linux_armhf.tar.gz
+* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0-beta3_linux_amd64.tar.gz
+* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0-beta3_linux_i386.tar.gz
+* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0-beta3_linux_armhf.tar.gz

### FreeBSD tarball:

Latest:
-* https://dl.influxdata.com/telegraf/releases/telegraf-0.13.1_freebsd_amd64.tar.gz
+* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0-beta3_freebsd_amd64.tar.gz

### Ansible Role:

@@ -69,8 +69,7 @@ brew install telegraf

### Windows Binaries (EXPERIMENTAL)

Latest:
-* https://dl.influxdata.com/telegraf/releases/telegraf-0.13.1_windows_amd64.zip
-* https://dl.influxdata.com/telegraf/releases/telegraf-0.13.1_windows_i386.zip
+* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0-beta3_windows_amd64.zip

### From Source:

@@ -145,6 +144,8 @@ Currently implemented sources:

* [cassandra](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/cassandra)
* [ceph](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/ceph)
* [chrony](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/chrony)
+* [consul](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/consul)
+* [conntrack](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/conntrack)
* [couchbase](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/couchbase)
* [couchdb](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/couchdb)
* [disque](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/disque)

@@ -205,6 +206,7 @@ Currently implemented sources:

* swap
* processes
* kernel (/proc/stat)
+* kernel (/proc/vmstat)

Telegraf can also collect metrics via the following service plugins:

@@ -215,7 +217,11 @@ Telegraf can also collect metrics via the following service plugins:

* [mqtt_consumer](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/mqtt_consumer)
* [kafka_consumer](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/kafka_consumer)
* [nats_consumer](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/nats_consumer)
-* [github_webhooks](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/github_webhooks)
+* [webhooks](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/webhooks)
+  * [github](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/webhooks/github)
+  * [mandrill](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/webhooks/mandrill)
+  * [rollbar](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/webhooks/rollbar)
+* [nsq_consumer](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/nsq_consumer)

We'll be adding support for many more over the coming months. Read on if you
want to add support for another service or third-party API.

@@ -230,6 +236,7 @@ want to add support for another service or third-party API.

* [datadog](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/datadog)
* [file](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/file)
* [graphite](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/graphite)
+* [graylog](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/graylog)
* [instrumental](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/instrumental)
* [kafka](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/kafka)
* [librato](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/librato)
@@ -18,4 +18,8 @@ type Accumulator interface {

	Debug() bool
	SetDebug(enabled bool)

+	SetPrecision(precision, interval time.Duration)
+
+	DisablePrecision()
}
@@ -17,6 +17,7 @@ func NewAccumulator(
	acc := accumulator{}
	acc.metrics = metrics
	acc.inputConfig = inputConfig
+	acc.precision = time.Nanosecond
	return &acc
}

@@ -32,6 +33,8 @@ type accumulator struct {
	inputConfig *internal_models.InputConfig

	prefix string
+
+	precision time.Duration
}

func (ac *accumulator) Add(

@@ -141,6 +144,7 @@ func (ac *accumulator) AddFields(
	} else {
		timestamp = time.Now()
	}
+	timestamp = timestamp.Round(ac.precision)

	if ac.prefix != "" {
		measurement = ac.prefix + measurement

@@ -173,6 +177,31 @@ func (ac *accumulator) SetTrace(trace bool) {
	ac.trace = trace
}

+// SetPrecision takes two time.Duration objects. If the first is non-zero,
+// it sets that as the precision. Otherwise, it takes the second argument
+// as the order of time that the metrics should be rounded to, with the
+// maximum being 1s.
+func (ac *accumulator) SetPrecision(precision, interval time.Duration) {
+	if precision > 0 {
+		ac.precision = precision
+		return
+	}
+	switch {
+	case interval >= time.Second:
+		ac.precision = time.Second
+	case interval >= time.Millisecond:
+		ac.precision = time.Millisecond
+	case interval >= time.Microsecond:
+		ac.precision = time.Microsecond
+	default:
+		ac.precision = time.Nanosecond
+	}
+}
+
+func (ac *accumulator) DisablePrecision() {
+	ac.precision = time.Nanosecond
+}
+
func (ac *accumulator) setDefaultTags(tags map[string]string) {
	ac.defaultTags = tags
}
@@ -38,6 +38,128 @@ func TestAdd(t *testing.T) {
		actual)
}

+func TestAddNoPrecisionWithInterval(t *testing.T) {
+	a := accumulator{}
+	now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC)
+	a.metrics = make(chan telegraf.Metric, 10)
+	defer close(a.metrics)
+	a.inputConfig = &internal_models.InputConfig{}
+
+	a.SetPrecision(0, time.Second)
+	a.Add("acctest", float64(101), map[string]string{})
+	a.Add("acctest", float64(101), map[string]string{"acc": "test"})
+	a.Add("acctest", float64(101), map[string]string{"acc": "test"}, now)
+
+	testm := <-a.metrics
+	actual := testm.String()
+	assert.Contains(t, actual, "acctest value=101")
+
+	testm = <-a.metrics
+	actual = testm.String()
+	assert.Contains(t, actual, "acctest,acc=test value=101")
+
+	testm = <-a.metrics
+	actual = testm.String()
+	assert.Equal(t,
+		fmt.Sprintf("acctest,acc=test value=101 %d", int64(1139572800000000000)),
+		actual)
+}
+
+func TestAddNoIntervalWithPrecision(t *testing.T) {
+	a := accumulator{}
+	now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC)
+	a.metrics = make(chan telegraf.Metric, 10)
+	defer close(a.metrics)
+	a.inputConfig = &internal_models.InputConfig{}
+
+	a.SetPrecision(time.Second, time.Millisecond)
+	a.Add("acctest", float64(101), map[string]string{})
+	a.Add("acctest", float64(101), map[string]string{"acc": "test"})
+	a.Add("acctest", float64(101), map[string]string{"acc": "test"}, now)
+
+	testm := <-a.metrics
+	actual := testm.String()
+	assert.Contains(t, actual, "acctest value=101")
+
+	testm = <-a.metrics
+	actual = testm.String()
+	assert.Contains(t, actual, "acctest,acc=test value=101")
+
+	testm = <-a.metrics
+	actual = testm.String()
+	assert.Equal(t,
+		fmt.Sprintf("acctest,acc=test value=101 %d", int64(1139572800000000000)),
+		actual)
+}
+
+func TestAddDisablePrecision(t *testing.T) {
+	a := accumulator{}
+	now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC)
+	a.metrics = make(chan telegraf.Metric, 10)
+	defer close(a.metrics)
+	a.inputConfig = &internal_models.InputConfig{}
+
+	a.SetPrecision(time.Second, time.Millisecond)
+	a.DisablePrecision()
+	a.Add("acctest", float64(101), map[string]string{})
+	a.Add("acctest", float64(101), map[string]string{"acc": "test"})
+	a.Add("acctest", float64(101), map[string]string{"acc": "test"}, now)
+
+	testm := <-a.metrics
+	actual := testm.String()
+	assert.Contains(t, actual, "acctest value=101")
+
+	testm = <-a.metrics
+	actual = testm.String()
+	assert.Contains(t, actual, "acctest,acc=test value=101")
+
+	testm = <-a.metrics
+	actual = testm.String()
+	assert.Equal(t,
+		fmt.Sprintf("acctest,acc=test value=101 %d", int64(1139572800082912748)),
+		actual)
+}
+
+func TestDifferentPrecisions(t *testing.T) {
+	a := accumulator{}
+	now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC)
+	a.metrics = make(chan telegraf.Metric, 10)
+	defer close(a.metrics)
+	a.inputConfig = &internal_models.InputConfig{}
+
+	a.SetPrecision(0, time.Second)
+	a.Add("acctest", float64(101), map[string]string{"acc": "test"}, now)
+	testm := <-a.metrics
+	actual := testm.String()
+	assert.Equal(t,
+		fmt.Sprintf("acctest,acc=test value=101 %d", int64(1139572800000000000)),
+		actual)
+
+	a.SetPrecision(0, time.Millisecond)
+	a.Add("acctest", float64(101), map[string]string{"acc": "test"}, now)
+	testm = <-a.metrics
+	actual = testm.String()
+	assert.Equal(t,
+		fmt.Sprintf("acctest,acc=test value=101 %d", int64(1139572800083000000)),
+		actual)
+
+	a.SetPrecision(0, time.Microsecond)
+	a.Add("acctest", float64(101), map[string]string{"acc": "test"}, now)
+	testm = <-a.metrics
+	actual = testm.String()
+	assert.Equal(t,
+		fmt.Sprintf("acctest,acc=test value=101 %d", int64(1139572800082913000)),
+		actual)
+
+	a.SetPrecision(0, time.Nanosecond)
+	a.Add("acctest", float64(101), map[string]string{"acc": "test"}, now)
+	testm = <-a.metrics
+	actual = testm.String()
+	assert.Equal(t,
+		fmt.Sprintf("acctest,acc=test value=101 %d", int64(1139572800082912748)),
+		actual)
+}
+
func TestAddDefaultTags(t *testing.T) {
	a := accumulator{}
	a.addDefaultTag("default", "tag")
@@ -1,17 +1,15 @@
package agent

import (
-	cryptorand "crypto/rand"
	"fmt"
	"log"
-	"math/big"
-	"math/rand"
	"os"
	"runtime"
	"sync"
	"time"

	"github.com/influxdata/telegraf"
+	"github.com/influxdata/telegraf/internal"
	"github.com/influxdata/telegraf/internal/config"
	"github.com/influxdata/telegraf/internal/models"
)

@@ -115,27 +113,18 @@ func (a *Agent) gatherer(
	ticker := time.NewTicker(interval)
	defer ticker.Stop()

-	jitter := a.Config.Agent.CollectionJitter.Duration.Nanoseconds()
-
	for {
		var outerr error
-		start := time.Now()

		acc := NewAccumulator(input.Config, metricC)
		acc.SetDebug(a.Config.Agent.Debug)
+		acc.SetPrecision(a.Config.Agent.Precision.Duration,
+			a.Config.Agent.Interval.Duration)
		acc.setDefaultTags(a.Config.Tags)

-		if jitter != 0 {
-			nanoSleep := rand.Int63n(jitter)
-			d, err := time.ParseDuration(fmt.Sprintf("%dns", nanoSleep))
-			if err != nil {
-				log.Printf("Jittering collection interval failed for plugin %s",
-					input.Name)
-			} else {
-				time.Sleep(d)
-			}
-		}
+		internal.RandomSleep(a.Config.Agent.CollectionJitter.Duration, shutdown)

+		start := time.Now()
		gatherWithTimeout(shutdown, input, acc, interval)
		elapsed := time.Since(start)

@@ -214,6 +203,8 @@ func (a *Agent) Test() error {
	for _, input := range a.Config.Inputs {
		acc := NewAccumulator(input.Config, metricC)
		acc.SetTrace(true)
+		acc.SetPrecision(a.Config.Agent.Precision.Duration,
+			a.Config.Agent.Interval.Duration)
		acc.setDefaultTags(a.Config.Tags)

		fmt.Printf("* Plugin: %s, Collection 1\n", input.Name)

@@ -274,44 +265,40 @@ func (a *Agent) flusher(shutdown chan struct{}, metricC chan telegraf.Metric) er
			a.flush()
			return nil
		case <-ticker.C:
+			internal.RandomSleep(a.Config.Agent.FlushJitter.Duration, shutdown)
			a.flush()
		case m := <-metricC:
-			for _, o := range a.Config.Outputs {
-				o.AddMetric(m)
+			for i, o := range a.Config.Outputs {
+				if i == len(a.Config.Outputs)-1 {
+					o.AddMetric(m)
+				} else {
+					o.AddMetric(copyMetric(m))
+				}
			}
		}
	}
}

-// jitterInterval applies the the interval jitter to the flush interval using
-// crypto/rand number generator
-func jitterInterval(ininterval, injitter time.Duration) time.Duration {
-	var jitter int64
-	outinterval := ininterval
-	if injitter.Nanoseconds() != 0 {
-		maxjitter := big.NewInt(injitter.Nanoseconds())
-		if j, err := cryptorand.Int(cryptorand.Reader, maxjitter); err == nil {
-			jitter = j.Int64()
-		}
-		outinterval = time.Duration(jitter + ininterval.Nanoseconds())
-	}
-	if outinterval.Nanoseconds() < time.Duration(500*time.Millisecond).Nanoseconds() {
-		log.Printf("Flush interval %s too low, setting to 500ms\n", outinterval)
-		outinterval = time.Duration(500 * time.Millisecond)
-	}
-	return outinterval
-}
+func copyMetric(m telegraf.Metric) telegraf.Metric {
+	t := time.Time(m.Time())
+
+	tags := make(map[string]string)
+	fields := make(map[string]interface{})
+	for k, v := range m.Tags() {
+		tags[k] = v
+	}
+	for k, v := range m.Fields() {
+		fields[k] = v
+	}
+
+	out, _ := telegraf.NewMetric(m.Name(), tags, fields, t)
+	return out
+}

// Run runs the agent daemon, gathering every Interval
func (a *Agent) Run(shutdown chan struct{}) error {
	var wg sync.WaitGroup

-	a.Config.Agent.FlushInterval.Duration = jitterInterval(
-		a.Config.Agent.FlushInterval.Duration,
-		a.Config.Agent.FlushJitter.Duration)
-
	log.Printf("Agent Config: Interval:%s, Debug:%#v, Quiet:%#v, Hostname:%#v, "+
		"Flush Interval:%s \n",
		a.Config.Agent.Interval.Duration, a.Config.Agent.Debug, a.Config.Agent.Quiet,

@@ -326,6 +313,9 @@ func (a *Agent) Run(shutdown chan struct{}) error {
		case telegraf.ServiceInput:
			acc := NewAccumulator(input.Config, metricC)
			acc.SetDebug(a.Config.Agent.Debug)
+			// Service input plugins should set their own precision of their
+			// metrics.
+			acc.DisablePrecision()
			acc.setDefaultTags(a.Config.Tags)
			if err := p.Start(acc); err != nil {
				log.Printf("Service for input %s failed to start, exiting\n%s\n",
@@ -2,7 +2,6 @@ package agent

import (
	"testing"
-	"time"

	"github.com/influxdata/telegraf/internal/config"

@@ -110,75 +109,3 @@ func TestAgent_LoadOutput(t *testing.T) {
	a, _ = NewAgent(c)
	assert.Equal(t, 3, len(a.Config.Outputs))
}
-
-func TestAgent_ZeroJitter(t *testing.T) {
-	flushinterval := jitterInterval(time.Duration(10*time.Second),
-		time.Duration(0*time.Second))
-
-	actual := flushinterval.Nanoseconds()
-	exp := time.Duration(10 * time.Second).Nanoseconds()
-
-	if actual != exp {
-		t.Errorf("Actual %v, expected %v", actual, exp)
-	}
-}
-
-func TestAgent_ZeroInterval(t *testing.T) {
-	min := time.Duration(500 * time.Millisecond).Nanoseconds()
-	max := time.Duration(5 * time.Second).Nanoseconds()
-
-	for i := 0; i < 1000; i++ {
-		flushinterval := jitterInterval(time.Duration(0*time.Second),
-			time.Duration(5*time.Second))
-		actual := flushinterval.Nanoseconds()
-
-		if actual > max {
-			t.Errorf("Didn't expect interval %d to be > %d", actual, max)
-			break
-		}
-		if actual < min {
-			t.Errorf("Didn't expect interval %d to be < %d", actual, min)
-			break
-		}
-	}
-}
-
-func TestAgent_ZeroBoth(t *testing.T) {
-	flushinterval := jitterInterval(time.Duration(0*time.Second),
-		time.Duration(0*time.Second))
-
-	actual := flushinterval
-	exp := time.Duration(500 * time.Millisecond)
-
-	if actual != exp {
-		t.Errorf("Actual %v, expected %v", actual, exp)
-	}
-}
-
-func TestAgent_JitterMax(t *testing.T) {
-	max := time.Duration(32 * time.Second).Nanoseconds()
-
-	for i := 0; i < 1000; i++ {
-		flushinterval := jitterInterval(time.Duration(30*time.Second),
-			time.Duration(2*time.Second))
-		actual := flushinterval.Nanoseconds()
-		if actual > max {
-			t.Errorf("Didn't expect interval %d to be > %d", actual, max)
-			break
-		}
-	}
-}
-
-func TestAgent_JitterMin(t *testing.T) {
-	min := time.Duration(30 * time.Second).Nanoseconds()
-
-	for i := 0; i < 1000; i++ {
-		flushinterval := jitterInterval(time.Duration(30*time.Second),
-			time.Duration(2*time.Second))
-		actual := flushinterval.Nanoseconds()
-		if actual < min {
-			t.Errorf("Didn't expect interval %d to be < %d", actual, min)
-			break
-		}
-	}
-}
@@ -186,49 +186,59 @@ name of the plugin.

# Graphite:

The Graphite data format translates graphite _dot_ buckets directly into
-telegraf measurement names, with a single value field, and without any tags. For
-more advanced options, Telegraf supports specifying "templates" to translate
+telegraf measurement names, with a single value field, and without any tags.
+By default, the separator is left as ".", but this can be changed using the
+"separator" argument. For more advanced options,
+Telegraf supports specifying "templates" to translate
graphite buckets into Telegraf metrics.

-#### Separator:
-
-You can specify a separator to use for the parsed metrics.
-By default, it will leave the metrics with a "." separator.
-Setting `separator = "_"` will translate:
+Templates are of the form:

```
-cpu.usage.idle 99
-=> cpu_usage_idle value=99
+"host.mytag.mytag.measurement.measurement.field*"
```

-#### Measurement/Tag Templates:
+Where the following keywords exist:
+
+1. `measurement`: specifies that this section of the graphite bucket corresponds
+to the measurement name. This can be specified multiple times.
+2. `field`: specifies that this section of the graphite bucket corresponds
+to the field name. This can be specified multiple times.
+3. `measurement*`: specifies that all remaining elements of the graphite bucket
+correspond to the measurement name.
+4. `field*`: specifies that all remaining elements of the graphite bucket
+correspond to the field name.
+
+Any part of the template that is not a keyword is treated as a tag key. This
+can also be specified multiple times.
+
+NOTE: `field*` cannot be used in conjunction with `measurement*`!

+#### Measurement & Tag Templates:

The most basic template is to specify a single transformation to apply to all
-incoming metrics. _measurement_ is a special keyword that tells Telegraf which
-parts of the graphite bucket to combine into the measurement name. It can have a
-trailing `*` to indicate that the remainder of the metric should be used.
-Other words are considered tag keys. So the following template:
+incoming metrics. So the following template:

```toml
templates = [
-    "region.measurement*"
+    "region.region.measurement*"
]
```

would result in the following Graphite -> Telegraf transformation.

```
-us-west.cpu.load 100
-=> cpu.load,region=us-west value=100
+us.west.cpu.load 100
+=> cpu.load,region=us.west value=100
```

#### Field Templates:

There is also a _field_ keyword, which can only be specified once.
The field keyword tells Telegraf to give the metric that field name.
So the following template:

```toml
separator = "_"
templates = [
    "measurement.measurement.field.field.region"
]
```

@@ -237,24 +247,26 @@ templates = [

would result in the following Graphite -> Telegraf transformation.

```
-cpu.usage.idle.percent.us-west 100
-=> cpu_usage,region=us-west idle_percent=100
+cpu.usage.idle.percent.eu-east 100
+=> cpu_usage,region=eu-east idle_percent=100
```

-The field key can also be derived from the second "half" of the input metric-name by specifying ```field*```:
+The field key can also be derived from all remaining elements of the graphite
+bucket by specifying `field*`:

```toml
separator = "_"
templates = [
    "measurement.measurement.region.field*"
]
```

-would result in the following Graphite -> Telegraf transformation.
+which would result in the following Graphite -> Telegraf transformation.

```
-cpu.usage.us-west.idle.percentage 100
-=> cpu_usage,region=us-west idle_percentage=100
+cpu.usage.eu-east.idle.percentage 100
+=> cpu_usage,region=eu-east idle_percentage=100
```
+(This cannot be used in conjunction with "measurement*"!)

#### Filter Templates:

@@ -271,8 +283,8 @@ templates = [

which would result in the following transformation:

```
-cpu.load.us-west 100
-=> cpu_load,region=us-west value=100
+cpu.load.eu-east 100
+=> cpu_load,region=eu-east value=100

mem.cached.localhost 256
=> mem_cached,host=localhost value=256
```

@@ -294,8 +306,8 @@ templates = [

would result in the following Graphite -> Telegraf transformation.

```
-cpu.usage.idle.us-west 100
-=> cpu_usage,region=us-west,datacenter=1a idle=100
+cpu.usage.idle.eu-east 100
+=> cpu_usage,region=eu-east,datacenter=1a idle=100
```

There are many more options available,

@@ -326,12 +338,12 @@ There are many more options available,
## similar to the line protocol format. There can be only one default template.
## Templates support below format:
## 1. filter + template
-## 2. filter + template + extra tag
+## 2. filter + template + extra tag(s)
## 3. filter + template with field key
## 4. default template
templates = [
  "*.app env.service.resource.measurement",
-  "stats.* .host.measurement* region=us-west,agent=sensu",
+  "stats.* .host.measurement* region=eu-east,agent=sensu",
  "stats2.* .host.measurement.field",
  "measurement*"
]
@@ -52,6 +52,11 @@
|
||||
## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
|
||||
flush_jitter = "0s"
|
||||
|
||||
## By default, precision will be set to the same timestamp order as the
|
||||
## collection interval, with the maximum being 1s.
|
||||
## Precision will NOT be used for service inputs, such as logparser and statsd.
|
||||
## Valid values are "Nns", "Nus" (or "Nµs"), "Nms", "Ns".
|
||||
precision = ""
|
||||
## Run telegraf in debug mode
|
||||
debug = false
|
||||
## Run telegraf in quiet mode
|
||||
@@ -75,12 +80,9 @@
|
||||
urls = ["http://localhost:8086"] # required
|
||||
## The target database for metrics (telegraf will create it if not exists).
|
||||
database = "telegraf" # required
|
||||
## Precision of writes, valid values are "ns", "us" (or "µs"), "ms", "s", "m", "h".
|
||||
## note: using "s" precision greatly improves InfluxDB compression.
|
||||
precision = "s"
|
||||
|
||||
## Retention policy to write to.
|
||||
retention_policy = "default"
|
||||
## Retention policy to write to. Empty string writes to the default rp.
|
||||
retention_policy = ""
|
||||
## Write consistency (clusters only), can be: "any", "one", "quorom", "all"
|
||||
write_consistency = "any"
|
||||
|
||||
@@ -106,10 +108,10 @@
|
||||
# [[outputs.amon]]
|
||||
# ## Amon Server Key
|
||||
# server_key = "my-server-key" # required.
|
||||
#
|
||||
#
|
||||
# ## Amon Instance URL
|
||||
# amon_instance = "https://youramoninstance" # required
|
||||
#
|
||||
#
|
||||
# ## Connection timeout.
|
||||
# # timeout = "5s"
|
||||
|
||||
@@ -125,21 +127,21 @@
|
||||
# ## Telegraf tag to use as a routing key
|
||||
# ## ie, if this tag exists, it's value will be used as the routing key
|
||||
# routing_tag = "host"
|
||||
#
|
||||
#
|
||||
# ## InfluxDB retention policy
|
||||
# # retention_policy = "default"
|
||||
# ## InfluxDB database
|
||||
# # database = "telegraf"
|
||||
# ## InfluxDB precision
|
||||
# # precision = "s"
|
||||
#
|
||||
#
|
||||
# ## Optional SSL Config
|
||||
# # ssl_ca = "/etc/telegraf/ca.pem"
|
||||
# # ssl_cert = "/etc/telegraf/cert.pem"
|
||||
# # ssl_key = "/etc/telegraf/key.pem"
|
||||
# ## Use SSL but skip chain & host verification
|
||||
# # insecure_skip_verify = false
|
||||
#
|
||||
#
|
||||
# ## Data format to output.
|
||||
# ## Each data format has it's own unique set of configuration options, read
|
||||
# ## more about them here:
|
||||
@@ -151,16 +153,22 @@
|
||||
# [[outputs.cloudwatch]]
|
||||
# ## Amazon REGION
|
||||
# region = 'us-east-1'
|
||||
#
|
||||
#
|
||||
# ## Amazon Credentials
|
||||
# ## Credentials are loaded in the following order
|
||||
# ## 1) explicit credentials from 'access_key' and 'secret_key'
|
||||
# ## 2) environment variables
|
||||
# ## 3) shared credentials file
|
||||
# ## 4) EC2 Instance Profile
|
||||
# ## 1) Assumed credentials via STS if role_arn is specified
|
||||
# ## 2) explicit credentials from 'access_key' and 'secret_key'
|
||||
# ## 3) shared profile from 'profile'
|
||||
# ## 4) environment variables
|
||||
# ## 5) shared credentials file
|
||||
# ## 6) EC2 Instance Profile
|
||||
# #access_key = ""
|
||||
# #secret_key = ""
|
||||
#
|
||||
# #token = ""
|
||||
# #role_arn = ""
|
||||
# #profile = ""
|
||||
# #shared_credential_file = ""
|
||||
#
|
||||
# ## Namespace for the CloudWatch MetricDatums
|
||||
# namespace = 'InfluxData/Telegraf'
|
||||
|
||||
@@ -169,7 +177,7 @@
|
||||
# [[outputs.datadog]]
|
||||
# ## Datadog API key
|
||||
# apikey = "my-secret-key" # required.
|
||||
#
|
||||
#
|
||||
# ## Connection timeout.
|
||||
# # timeout = "5s"
|
||||
|
||||
@@ -178,7 +186,7 @@
|
||||
# [[outputs.file]]
|
||||
# ## Files to write to, "stdout" is a specially handled file.
|
||||
# files = ["stdout", "/tmp/metrics.out"]
|
||||
#
|
||||
#
|
||||
# ## Data format to output.
|
||||
# ## Each data format has it's own unique set of configuration options, read
|
||||
# ## more about them here:
|
||||
@@ -189,6 +197,8 @@
|
||||
# # Configuration for Graphite server to send metrics to
|
||||
# [[outputs.graphite]]
|
||||
# ## TCP endpoint for your graphite instance.
|
||||
# ## If multiple endpoints are configured, the output will be load balanced.
|
||||
# ## Only one of the endpoints will be written to with each iteration.
|
||||
# servers = ["localhost:2003"]
|
||||
# ## Prefix metrics name
|
||||
# prefix = ""
|
||||
@@ -199,6 +209,12 @@
|
||||
# timeout = 2
|
||||
|
||||
|
||||
# # Send telegraf metrics to graylog(s)
|
||||
# [[outputs.graylog]]
|
||||
# ## Udp endpoint for your graylog instance.
|
||||
# servers = ["127.0.0.1:12201", "192.168.1.1:12201"]
|
||||
|
||||
|
||||
# # Configuration for sending metrics to an Instrumental project
|
||||
# [[outputs.instrumental]]
|
||||
# ## Project API Token (required)
|
||||
@@ -223,14 +239,14 @@
|
||||
# ## Telegraf tag to use as a routing key
|
||||
# ## ie, if this tag exists, it's value will be used as the routing key
|
||||
# routing_tag = "host"
|
||||
#
|
||||
#
|
||||
# ## CompressionCodec represents the various compression codecs recognized by
|
||||
# ## Kafka in messages.
|
||||
# ## 0 : No compression
|
||||
# ## 1 : Gzip compression
|
||||
# ## 2 : Snappy compression
|
||||
# compression_codec = 0
|
||||
#
|
||||
#
|
||||
# ## RequiredAcks is used in Produce Requests to tell the broker how many
|
||||
# ## replica acknowledgements it must see before responding
|
||||
# ## 0 : the producer never waits for an acknowledgement from the broker.
|
||||
@@ -246,17 +262,17 @@
|
||||
# ## guarantee that no messages will be lost as long as at least one in
|
||||
# ## sync replica remains.
|
||||
# required_acks = -1
|
||||
#
|
||||
#
|
||||
# ## The total number of times to retry sending a message
|
||||
# max_retry = 3
|
||||
#
|
||||
#
|
||||
# ## Optional SSL Config
|
||||
# # ssl_ca = "/etc/telegraf/ca.pem"
|
||||
# # ssl_cert = "/etc/telegraf/cert.pem"
|
||||
# # ssl_key = "/etc/telegraf/key.pem"
|
||||
# ## Use SSL but skip chain & host verification
|
||||
# # insecure_skip_verify = false
|
||||
#
|
||||
#
|
||||
# ## Data format to output.
|
||||
# ## Each data format has it's own unique set of configuration options, read
|
||||
# ## more about them here:
|
||||
@@ -268,16 +284,22 @@
|
||||
# [[outputs.kinesis]]
|
||||
# ## Amazon REGION of kinesis endpoint.
|
||||
# region = "ap-southeast-2"
|
||||
#
|
||||
#
|
||||
# ## Amazon Credentials
|
||||
# ## Credentials are loaded in the following order
|
||||
# ## 1) explicit credentials from 'access_key' and 'secret_key'
|
||||
# ## 2) environment variables
|
||||
# ## 3) shared credentials file
|
||||
# ## 4) EC2 Instance Profile
|
||||
# ## 1) Assumed credentials via STS if role_arn is specified
|
||||
# ## 2) explicit credentials from 'access_key' and 'secret_key'
|
||||
# ## 3) shared profile from 'profile'
|
||||
# ## 4) environment variables
|
||||
# ## 5) shared credentials file
|
||||
# ## 6) EC2 Instance Profile
|
||||
# #access_key = ""
|
||||
# #secret_key = ""
|
||||
#
|
||||
# #token = ""
|
||||
# #role_arn = ""
|
||||
# #profile = ""
|
||||
# #shared_credential_file = ""
|
||||
#
|
||||
# ## Kinesis StreamName must exist prior to starting telegraf.
|
||||
# streamname = "StreamName"
|
||||
# ## PartitionKey as used for sharding data.
|
||||
@@ -312,23 +334,23 @@
|
||||
# # Configuration for MQTT server to send metrics to
|
||||
# [[outputs.mqtt]]
|
||||
# servers = ["localhost:1883"] # required.
|
||||
#
|
||||
#
|
||||
# ## MQTT outputs send metrics to this topic format
|
||||
# ## "<topic_prefix>/<hostname>/<pluginname>/"
|
||||
# ## ex: prefix/web01.example.com/mem
|
||||
# topic_prefix = "telegraf"
|
||||
#
|
||||
#
|
||||
# ## username and password to connect MQTT server.
|
||||
# # username = "telegraf"
|
||||
# # password = "metricsmetricsmetricsmetrics"
|
||||
#
|
||||
#
|
||||
# ## Optional SSL Config
|
||||
# # ssl_ca = "/etc/telegraf/ca.pem"
|
||||
# # ssl_cert = "/etc/telegraf/cert.pem"
|
||||
# # ssl_key = "/etc/telegraf/key.pem"
|
||||
# ## Use SSL but skip chain & host verification
|
||||
# # insecure_skip_verify = false
|
||||
#
|
||||
#
|
||||
# ## Data format to output.
|
||||
# ## Each data format has it's own unique set of configuration options, read
|
||||
# ## more about them here:
|
||||
@@ -342,7 +364,7 @@
|
||||
# server = "localhost:4150"
|
||||
# ## NSQ topic for producer messages
|
||||
# topic = "telegraf"
|
||||
#
|
||||
#
|
||||
# ## Data format to output.
|
||||
# ## Each data format has it's own unique set of configuration options, read
|
||||
# ## more about them here:
|
||||
@@ -354,14 +376,14 @@
|
||||
# [[outputs.opentsdb]]
|
||||
# ## prefix for metrics keys
|
||||
# prefix = "my.specific.prefix."
|
||||
#
|
||||
#
|
||||
# ## Telnet Mode ##
|
||||
# ## DNS name of the OpenTSDB server in telnet mode
|
||||
# host = "opentsdb.example.com"
|
||||
#
|
||||
#
|
||||
# ## Port of the OpenTSDB server in telnet mode
|
||||
# port = 4242
|
||||
#
|
||||
#
|
||||
# ## Debug true - Prints OpenTSDB communication
|
||||
# debug = false
|
||||
|
||||
@@ -454,6 +476,7 @@
|
||||
# # Read Apache status information (mod_status)
|
||||
# [[inputs.apache]]
|
||||
# ## An array of Apache status URI to gather stats.
|
||||
# ## Default is "http://localhost/server-status?auto".
|
||||
# urls = ["http://localhost/server-status?auto"]
|
||||
|
||||
|
||||
@@ -462,7 +485,7 @@
|
||||
# ## Bcache sets path
|
||||
# ## If not specified, then default is:
|
||||
# bcachePath = "/sys/fs/bcache"
|
||||
#
|
||||
#
|
||||
# ## By default, telegraf gather stats for all bcache devices
|
||||
# ## Setting devices will restrict the stats to the specified
|
||||
# ## bcache devices.
|
||||
@@ -490,48 +513,71 @@

# # Collects performance metrics from the MON and OSD nodes in a Ceph storage cluster.
# [[inputs.ceph]]
# ## All configuration values are optional, defaults are shown below
#
# ## location of ceph binary
# ceph_binary = "/usr/bin/ceph"
#
# ## directory in which to look for socket files
# socket_dir = "/var/run/ceph"
#
# ## prefix of MON and OSD socket files, used to determine socket type
# mon_prefix = "ceph-mon"
# osd_prefix = "ceph-osd"
#
# ## suffix used to identify socket files
# socket_suffix = "asok"


# # Read specific statistics per cgroup
# [[inputs.cgroup]]
# ## Directories in which to look for files, globs are supported.
# # paths = [
# # "/cgroup/memory",
# # "/cgroup/memory/child1",
# # "/cgroup/memory/child2/*",
# # ]
# ## cgroup stat fields, as file names, globs are supported.
# ## these file names are appended to each path from above.
# # files = ["memory.*usage*", "memory.limit_in_bytes"]


# # Pull Metric Statistics from Amazon CloudWatch
# [[inputs.cloudwatch]]
# ## Amazon Region
# region = 'us-east-1'
#
# ## Amazon Credentials
# ## Credentials are loaded in the following order
# ## 1) explicit credentials from 'access_key' and 'secret_key'
# ## 2) environment variables
# ## 3) shared credentials file
# ## 4) EC2 Instance Profile
# ## 1) Assumed credentials via STS if role_arn is specified
# ## 2) explicit credentials from 'access_key' and 'secret_key'
# ## 3) shared profile from 'profile'
# ## 4) environment variables
# ## 5) shared credentials file
# ## 6) EC2 Instance Profile
# #access_key = ""
# #secret_key = ""
#
# #token = ""
# #role_arn = ""
# #profile = ""
# #shared_credential_file = ""
#
# ## Requested CloudWatch aggregation Period (required - must be a multiple of 60s)
# period = '1m'
#
# ## Collection Delay (required - must account for metrics availability via CloudWatch API)
# delay = '1m'
#
# ## Recommended: use metric 'interval' that is a multiple of 'period' to avoid
# ## gaps or overlap in pulled data
# interval = '1m'
#
# ## Configure the TTL for the internal cache of metrics.
# ## Defaults to 1 hr if not specified
# #cache_ttl = '10m'
#
# ## Metric Statistic Namespace (required)
# namespace = 'AWS/ELB'
#
# ## Metrics to Pull (optional)
# ## Defaults to all Metrics in Namespace if nothing is provided
# ## Refreshes Namespace available metrics every 1h

@@ -544,6 +590,23 @@

# # value = 'p-example'

# # Gather health check statuses from services registered in Consul
# [[inputs.consul]]
# ## Most of these values default to the ones configured at the Consul agent level.
# ## Optional Consul server address (default: "localhost")
# # address = "localhost"
# ## Optional URI scheme for the Consul server (default: "http")
# # scheme = "http"
# ## Optional ACL token used in every request (default: "")
# # token = ""
# ## Optional username used for request HTTP Basic Authentication (default: "")
# # username = ""
# ## Optional password used for HTTP Basic Authentication (default: "")
# # password = ""
# ## Optional data centre to query the health checks from (default: "")
# # datacentre = ""


# # Read metrics from one or many couchbase clusters
# [[inputs.couchbase]]
# ## specify servers via a url matching:
@@ -578,17 +641,17 @@

# [[inputs.dns_query]]
# ## servers to query
# servers = ["8.8.8.8"] # required
#
# ## Domains or subdomains to query. "."(root) is default
# domains = ["."] # optional
#
# ## Query record type. Default is "A"
# ## Possible values: A, AAAA, CNAME, MX, NS, PTR, TXT, SOA, SPF, SRV.
# record_type = "A" # optional
#
# ## Dns server port. 53 is default
# port = 53 # optional
#
# ## Query timeout in seconds. Default is 2 seconds
# timeout = 2 # optional
@@ -624,26 +687,37 @@

# [[inputs.elasticsearch]]
# ## specify a list of one or more Elasticsearch servers
# servers = ["http://localhost:9200"]
#
# ## set local to false when you want to read the indices stats from all nodes
# ## within the cluster
# local = true
#
# ## set cluster_health to true when you want to also obtain cluster level stats
# cluster_health = false
#
# ## Optional SSL Config
# # ssl_ca = "/etc/telegraf/ca.pem"
# # ssl_cert = "/etc/telegraf/cert.pem"
# # ssl_key = "/etc/telegraf/key.pem"
# ## Use SSL but skip chain & host verification
# # insecure_skip_verify = false


# # Read metrics from one or more commands that can output to stdout
# [[inputs.exec]]
# ## Commands array
# commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]
#
# commands = [
# "/tmp/test.sh",
# "/usr/bin/mycollector --foo=bar",
# "/tmp/collect_*.sh"
# ]
#
# ## Timeout for each command to complete.
# timeout = "5s"
#
# ## measurement name suffix (for separating different commands)
# name_suffix = "_mycollector"
#
# ## Data format to consume.
# ## Each data format has its own unique set of configuration options, read
# ## more about them here:

@@ -667,11 +741,48 @@

# md5 = false

# # Read flattened metrics from one or more GrayLog HTTP endpoints
# [[inputs.graylog]]
# ## API endpoint, currently supported API:
# ##
# ## - multiple (Ex http://<host>:12900/system/metrics/multiple)
# ## - namespace (Ex http://<host>:12900/system/metrics/namespace/{namespace})
# ##
# ## For namespace endpoint, the metrics array will be ignored for that call.
# ## Endpoint can contain namespace and multiple type calls.
# ##
# ## Please check http://[graylog-server-ip]:12900/api-browser for full list
# ## of endpoints
# servers = [
# "http://[graylog-server-ip]:12900/system/metrics/multiple",
# ]
#
# ## Metrics list
# ## List of metrics can be found on Graylog webservice documentation.
# ## Or by hitting the web service api at:
# ## http://[graylog-host]:12900/system/metrics
# metrics = [
# "jvm.cl.loaded",
# "jvm.memory.pools.Metaspace.committed"
# ]
#
# ## Username and password
# username = ""
# password = ""
#
# ## Optional SSL Config
# # ssl_ca = "/etc/telegraf/ca.pem"
# # ssl_cert = "/etc/telegraf/cert.pem"
# # ssl_key = "/etc/telegraf/key.pem"
# ## Use SSL but skip chain & host verification
# # insecure_skip_verify = false

# # Read metrics of haproxy, via socket or csv stats page
# [[inputs.haproxy]]
# ## An array of address to gather stats about. Specify an ip or hostname
# ## with optional port. ie localhost, 10.10.3.33:1936, etc.
#
# ## If no servers are specified, then default to 127.0.0.1:1936
# servers = ["http://myhaproxy.com:1936", "http://anotherhaproxy.com:1936"]
# ## Or you can also use local socket

@@ -695,41 +806,48 @@

# # body = '''
# # {'fake':'data'}
# # '''
#
# ## Optional SSL Config
# # ssl_ca = "/etc/telegraf/ca.pem"
# # ssl_cert = "/etc/telegraf/cert.pem"
# # ssl_key = "/etc/telegraf/key.pem"
# ## Use SSL but skip chain & host verification
# # insecure_skip_verify = false

# # Read flattened metrics from one or more JSON HTTP endpoints
# [[inputs.httpjson]]
# ## NOTE This plugin only reads numerical measurements, strings and booleans
# ## will be ignored.
#
# ## a name for the service being polled
# name = "webserver_stats"
#
# ## URL of each server in the service's cluster
# servers = [
# "http://localhost:9999/stats/",
# "http://localhost:9998/stats/",
# ]
#
# ## HTTP method to use: GET or POST (case-sensitive)
# method = "GET"
#
# ## List of tag names to extract from top-level of JSON server response
# # tag_keys = [
# # "my_tag_1",
# # "my_tag_2"
# # ]
#
# ## HTTP parameters (all values must be strings)
# [inputs.httpjson.parameters]
# event_type = "cpu_spike"
# threshold = "0.75"
#
# ## HTTP Header parameters (all values must be strings)
# # [inputs.httpjson.headers]
# # X-Auth-Token = "my-xauth-token"
# # apiVersion = "v1"
#
# ## Optional SSL Config
# # ssl_ca = "/etc/telegraf/ca.pem"
# # ssl_cert = "/etc/telegraf/cert.pem"
@@ -743,8 +861,9 @@

# ## Works with InfluxDB debug endpoints out of the box,
# ## but other services can use this format too.
# ## See the influxdb plugin's README for more details.
#
# ## Multiple URLs from which to read InfluxDB-formatted JSON
# ## Default is "http://localhost:8086/debug/vars".
# urls = [
# "http://localhost:8086/debug/vars"
# ]

@@ -764,7 +883,7 @@

# [[inputs.jolokia]]
# ## This is the context root used to compose the jolokia url
# context = "/jolokia"
#
# ## This specifies the mode used
# # mode = "proxy"
# #

@@ -774,8 +893,8 @@

# # [inputs.jolokia.proxy]
# # host = "127.0.0.1"
# # port = "8080"
#
# ## List of servers exposing jolokia read service
# [[inputs.jolokia.servers]]
# name = "as-server-01"

@@ -783,7 +902,7 @@

# port = "8080"
# # username = "myuser"
# # password = "mypassword"
#
# ## List of metrics collected on above servers
# ## Each metric consists of a name, a jmx path and either
# ## a pass or drop slice attribute.

@@ -792,13 +911,13 @@

# name = "heap_memory_usage"
# mbean = "java.lang:type=Memory"
# attribute = "HeapMemoryUsage"
#
# ## This collects thread count metrics.
# [[inputs.jolokia.metrics]]
# name = "thread_count"
# mbean = "java.lang:type=Threading"
# attribute = "TotalStartedThreadCount,ThreadCount,DaemonThreadCount,PeakThreadCount"
#
# ## This collects class loaded/unloaded count metrics.
# [[inputs.jolokia.metrics]]
# name = "class_count"
@@ -951,7 +1070,7 @@

# address = "github.com:80"
# ## Set timeout
# timeout = "1s"
#
# ## Optional string sent to the server
# # send = "ssh"
# ## Optional expected string in answer

@@ -1043,7 +1162,7 @@

# count = 1 # required
# ## interval, in s, at which to ping. 0 == default (ping -i <PING_INTERVAL>)
# ping_interval = 0.0
# ## ping timeout, in s. 0 == no timeout (ping -W <TIMEOUT>)
# ## per-ping timeout, in s. 0 == no timeout (ping -W <TIMEOUT>)
# timeout = 1.0
# ## interface to send ping from (ping -I <INTERFACE>)
# interface = ""
@@ -1065,7 +1184,7 @@

# ## to grab metrics for.
# ##
# address = "host=localhost user=postgres sslmode=disable"
#
# ## A list of databases to pull metrics about. If not specified, metrics for all
# ## databases are gathered.
# # databases = ["app_production", "testing"]

@@ -1147,7 +1266,7 @@

# # pattern = "nginx"
# ## user as argument for pgrep (ie, pgrep -u <user>)
# # user = "nginx"
#
# ## override for process_name
# ## This is optional; default is sourced from /proc/<pid>/status
# # process_name = "bar"
@@ -1161,11 +1280,16 @@

# [[inputs.prometheus]]
# ## An array of urls to scrape metrics from.
# urls = ["http://localhost:9100/metrics"]
#
# ## Use SSL but skip chain & host verification
# # insecure_skip_verify = false
#
# ## Use bearer token for authorization
# # bearer_token = /path/to/bearer/token
#
# ## Optional SSL Config
# # ssl_ca = /path/to/cafile
# # ssl_cert = /path/to/certfile
# # ssl_key = /path/to/keyfile
# ## Use SSL but skip chain & host verification
# # insecure_skip_verify = false


# # Reads last_run_summary.yaml file and converts to measurements
@@ -1176,11 +1300,11 @@

# # Read metrics from one or many RabbitMQ servers via the management API
# [[inputs.rabbitmq]]
# url = "http://localhost:15672" # required
# # url = "http://localhost:15672"
# # name = "rmq-server-1" # optional tag
# # username = "guest"
# # password = "guest"
#
# ## A list of nodes to pull metrics about. If not specified, metrics for
# ## all nodes are gathered.
# # nodes = ["rabbit@node1", "rabbit@node2"]

@@ -1244,7 +1368,7 @@

# collect = ["mybulk", "sysservices", "sysdescr"]
# # Simple list of OIDs to get, in addition to "collect"
# get_oids = []
#
# [[inputs.snmp.host]]
# address = "192.168.2.3:161"
# community = "public"

@@ -1256,31 +1380,31 @@

# "ifNumber",
# ".1.3.6.1.2.1.1.3.0",
# ]
#
# [[inputs.snmp.get]]
# name = "ifnumber"
# oid = "ifNumber"
#
# [[inputs.snmp.get]]
# name = "interface_speed"
# oid = "ifSpeed"
# instance = "0"
#
# [[inputs.snmp.get]]
# name = "sysuptime"
# oid = ".1.3.6.1.2.1.1.3.0"
# unit = "second"
#
# [[inputs.snmp.bulk]]
# name = "mybulk"
# max_repetition = 127
# oid = ".1.3.6.1.2.1.1"
#
# [[inputs.snmp.bulk]]
# name = "ifoutoctets"
# max_repetition = 127
# oid = "ifOutOctets"
#
# [[inputs.snmp.host]]
# address = "192.168.2.13:161"
# #address = "127.0.0.1:161"

@@ -1293,19 +1417,19 @@

# [[inputs.snmp.host.table]]
# name = "iftable3"
# include_instances = ["enp5s0", "eth1"]
#
# # SNMP TABLEs
# # table with neither mapping nor subtables
# [[inputs.snmp.table]]
# name = "iftable1"
# oid = ".1.3.6.1.2.1.31.1.1.1"
#
# # table without mapping but with subtables
# [[inputs.snmp.table]]
# name = "iftable2"
# oid = ".1.3.6.1.2.1.31.1.1.1"
# sub_tables = [".1.3.6.1.2.1.2.2.1.13"]
#
# # table with mapping but without subtables
# [[inputs.snmp.table]]
# name = "iftable3"

@@ -1313,7 +1437,7 @@

# # if empty, get all instances
# mapping_table = ".1.3.6.1.2.1.31.1.1.1.1"
# # if empty, get all subtables
#
# # table with both mapping and subtables
# [[inputs.snmp.table]]
# name = "iftable4"

@@ -1356,32 +1480,33 @@

# [[inputs.varnish]]
# ## The default location of the varnishstat binary can be overridden with:
# binary = "/usr/bin/varnishstat"
#
# ## By default, telegraf gathers stats for 3 metric points.
# ## Setting stats will override the defaults shown below.
# ## stats may also be set to ["all"], which will collect all stats
# ## Glob matching can be used, ie, stats = ["MAIN.*"]
# ## stats may also be set to ["*"], which will collect all stats
# stats = ["MAIN.cache_hit", "MAIN.cache_miss", "MAIN.uptime"]


# # Read metrics of ZFS from arcstats, zfetchstats and vdev_cache_stats
# # Read metrics of ZFS from arcstats, zfetchstats, vdev_cache_stats, and pools
# [[inputs.zfs]]
# ## ZFS kstat path
# ## ZFS kstat path. Ignored on FreeBSD
# ## If not specified, then default is:
# kstatPath = "/proc/spl/kstat/zfs"
#
# # kstatPath = "/proc/spl/kstat/zfs"
#
# ## By default, telegraf gathers all zfs stats
# ## If not specified, then default is:
# kstatMetrics = ["arcstats", "zfetchstats", "vdev_cache_stats"]
#
# # kstatMetrics = ["arcstats", "zfetchstats", "vdev_cache_stats"]
#
# ## By default, don't gather zpool stats
# poolMetrics = false
# # poolMetrics = false


# # Reads 'mntr' stats from one or many zookeeper servers
# [[inputs.zookeeper]]
# ## An array of address to gather stats about. Specify an ip or hostname
# ## with port. ie localhost:2181, 10.0.0.1:2181, etc.
#
# ## If no servers are specified, then localhost is used as the host.
# ## If no port is specified, 2181 is used
# servers = [":2181"]
@@ -1392,12 +1517,6 @@

#                            SERVICE INPUT PLUGINS                            #
###############################################################################

# # A Github Webhook Event collector
# [[inputs.github_webhooks]]
# ## Address and port to host Webhook listener on
# service_address = ":1618"

# # Read metrics from Kafka topic(s)
# [[inputs.kafka_consumer]]
# ## topic(s) to consume

@@ -1405,12 +1524,12 @@

# ## an array of Zookeeper connection strings
# zookeeper_peers = ["localhost:2181"]
# ## Zookeeper Chroot
# zookeeper_chroot = "/"
# zookeeper_chroot = ""
# ## the name of the consumer group
# consumer_group = "telegraf_metrics_consumers"
# ## Offset (must be either "oldest" or "newest")
# offset = "oldest"
#
# ## Data format to consume.
# ## Each data format has its own unique set of configuration options, read
# ## more about them here:
@@ -1418,37 +1537,66 @@

# data_format = "influx"


# # Stream and parse log file(s).
# [[inputs.logparser]]
# ## Log files to parse.
# ## These accept standard unix glob matching rules, but with the addition of
# ## ** as a "super asterisk". ie:
# ## /var/log/**.log -> recursively find all .log files in /var/log
# ## /var/log/*/*.log -> find all .log files with a parent dir in /var/log
# ## /var/log/apache.log -> only tail the apache log file
# files = ["/var/log/influxdb/influxdb.log"]
# ## Read file from beginning.
# from_beginning = false
#
# ## Parse logstash-style "grok" patterns:
# ## Telegraf built-in parsing patterns: https://goo.gl/dkay10
# [inputs.logparser.grok]
# ## This is a list of patterns to check the given log file(s) for.
# ## Note that adding patterns here increases processing time. The most
# ## efficient configuration is to have one pattern per logparser.
# ## Other common built-in patterns are:
# ## %{COMMON_LOG_FORMAT} (plain apache & nginx access logs)
# ## %{COMBINED_LOG_FORMAT} (access logs + referrer & agent)
# patterns = ["%{INFLUXDB_HTTPD_LOG}"]
# ## Full path(s) to custom pattern files.
# custom_pattern_files = []
# ## Custom patterns can also be defined here. Put one pattern per line.
# custom_patterns = '''
# '''


# # Read metrics from MQTT topic(s)
# [[inputs.mqtt_consumer]]
# servers = ["localhost:1883"]
# ## MQTT QoS, must be 0, 1, or 2
# qos = 0
#
# ## Topics to subscribe to
# topics = [
# "telegraf/host01/cpu",
# "telegraf/+/mem",
# "sensors/#",
# ]
#
# # if true, messages that can't be delivered while the subscriber is offline
# # will be delivered when it comes back (such as on service restart).
# # NOTE: if true, client_id MUST be set
# persistent_session = false
# # If empty, a random client ID will be generated.
# client_id = ""
#
# ## username and password to connect to the MQTT server.
# # username = "telegraf"
# # password = "metricsmetricsmetricsmetrics"
#
# ## Optional SSL Config
# # ssl_ca = "/etc/telegraf/ca.pem"
# # ssl_cert = "/etc/telegraf/cert.pem"
# # ssl_key = "/etc/telegraf/key.pem"
# ## Use SSL but skip chain & host verification
# # insecure_skip_verify = false
#
# ## Data format to consume.
# ## Each data format has its own unique set of configuration options, read
# ## more about them here:
@@ -1466,7 +1614,7 @@

# subjects = ["telegraf"]
# ## name a queue group
# queue_group = "telegraf_consumers"
#
# ## Data format to consume.
# ## Each data format has its own unique set of configuration options, read
# ## more about them here:
@@ -1488,24 +1636,24 @@

# delete_timings = true
# ## Percentiles to calculate for timing & histogram stats
# percentiles = [90]
#
# ## separator to use between elements of a statsd metric
# metric_separator = "_"
#
# ## Parses tags in the datadog statsd format
# ## http://docs.datadoghq.com/guides/dogstatsd/
# parse_data_dog_tags = false
#
# ## Statsd data translation templates, more info can be read here:
# ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#graphite
# # templates = [
# # "cpu.* measurement*"
# # ]
#
# ## Number of UDP messages allowed to queue up, once filled,
# ## the statsd server will start dropping packets
# allowed_pending_messages = 10000
#
# ## Number of timing/histogram values to track per-measurement in the
# ## calculation of percentiles. Raising this limit increases the accuracy
# ## of percentiles but also increases the memory usage and cpu time.
@@ -1526,7 +1674,7 @@

# files = ["/var/mymetrics.out"]
# ## Read file from beginning.
# from_beginning = false
#
# ## Data format to consume.
# ## Each data format has its own unique set of configuration options, read
# ## more about them here:
@@ -1538,14 +1686,14 @@

# [[inputs.tcp_listener]]
# ## Address and port to host TCP listener on
# service_address = ":8094"
#
# ## Number of TCP messages allowed to queue up. Once filled, the
# ## TCP listener will start dropping packets.
# allowed_pending_messages = 10000
#
# ## Maximum number of concurrent TCP connections to allow
# max_tcp_connections = 250
#
# ## Data format to consume.
# ## Each data format has its own unique set of configuration options, read
# ## more about them here:
@@ -1557,14 +1705,26 @@

# [[inputs.udp_listener]]
# ## Address and port to host UDP listener on
# service_address = ":8092"
#
# ## Number of UDP messages allowed to queue up. Once filled, the
# ## UDP listener will start dropping packets.
# allowed_pending_messages = 10000
#
# ## Data format to consume.
# ## Each data format has its own unique set of configuration options, read
# ## more about them here:
# ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
# data_format = "influx"


# # A Webhooks Event collector
# [[inputs.webhooks]]
# ## Address and port to host Webhook listener on
# service_address = ":1619"
#
# [inputs.webhooks.github]
# path = "/github"
#
# [inputs.webhooks.rollbar]
# path = "/rollbar"

79 filter/filter.go Normal file
@@ -0,0 +1,79 @@
package filter

import (
    "strings"

    "github.com/gobwas/glob"
)

type Filter interface {
    Match(string) bool
}

// CompileFilter takes a list of string filters and returns a Filter interface
// for matching a given string against the filter list. The filter list
// supports glob matching too, ie:
//
//   f, _ := CompileFilter([]string{"cpu", "mem", "net*"})
//   f.Match("cpu")     // true
//   f.Match("network") // true
//   f.Match("memory")  // false
//
func CompileFilter(filters []string) (Filter, error) {
    // return if there is nothing to compile
    if len(filters) == 0 {
        return nil, nil
    }

    // check if we can compile a non-glob filter
    noGlob := true
    for _, filter := range filters {
        if hasMeta(filter) {
            noGlob = false
            break
        }
    }

    switch {
    case noGlob:
        // return non-globbing filter if not needed.
        return compileFilterNoGlob(filters), nil
    case len(filters) == 1:
        return glob.Compile(filters[0])
    default:
        return glob.Compile("{" + strings.Join(filters, ",") + "}")
    }
}

// hasMeta reports whether path contains any magic glob characters.
func hasMeta(s string) bool {
    return strings.IndexAny(s, "*?[") >= 0
}

type filter struct {
    m map[string]struct{}
}

func (f *filter) Match(s string) bool {
    _, ok := f.m[s]
    return ok
}

type filtersingle struct {
    s string
}

func (f *filtersingle) Match(s string) bool {
    return f.s == s
}

func compileFilterNoGlob(filters []string) Filter {
    if len(filters) == 1 {
        return &filtersingle{s: filters[0]}
    }
    out := filter{m: make(map[string]struct{})}
    for _, filter := range filters {
        out.m[filter] = struct{}{}
    }
    return &out
}
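One usage note: CompileFilter returns a nil Filter (with a nil error) for an empty filter list, so callers must nil-check before calling Match, as the models package below does. A minimal sketch of the intended call pattern:

package main

import (
    "fmt"

    "github.com/influxdata/telegraf/filter"
)

func main() {
    // "net*" forces the glob path; lists of plain names compile to a
    // faster map (or single-string) lookup instead.
    f, err := filter.CompileFilter([]string{"cpu", "mem", "net*"})
    if err != nil {
        panic(err)
    }
    // An empty list would have returned f == nil, so guard before Match.
    if f != nil && f.Match("network") {
        fmt.Println("matched")
    }
}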
96 filter/filter_test.go Normal file
@@ -0,0 +1,96 @@
package filter

import (
    "testing"

    "github.com/stretchr/testify/assert"
)

func TestCompileFilter(t *testing.T) {
    f, err := CompileFilter([]string{})
    assert.NoError(t, err)
    assert.Nil(t, f)

    f, err = CompileFilter([]string{"cpu"})
    assert.NoError(t, err)
    assert.True(t, f.Match("cpu"))
    assert.False(t, f.Match("cpu0"))
    assert.False(t, f.Match("mem"))

    f, err = CompileFilter([]string{"cpu*"})
    assert.NoError(t, err)
    assert.True(t, f.Match("cpu"))
    assert.True(t, f.Match("cpu0"))
    assert.False(t, f.Match("mem"))

    f, err = CompileFilter([]string{"cpu", "mem"})
    assert.NoError(t, err)
    assert.True(t, f.Match("cpu"))
    assert.False(t, f.Match("cpu0"))
    assert.True(t, f.Match("mem"))

    f, err = CompileFilter([]string{"cpu", "mem", "net*"})
    assert.NoError(t, err)
    assert.True(t, f.Match("cpu"))
    assert.False(t, f.Match("cpu0"))
    assert.True(t, f.Match("mem"))
    assert.True(t, f.Match("network"))
}

var benchbool bool

func BenchmarkFilterSingleNoGlobFalse(b *testing.B) {
    f, _ := CompileFilter([]string{"cpu"})
    var tmp bool
    for n := 0; n < b.N; n++ {
        tmp = f.Match("network")
    }
    benchbool = tmp
}

func BenchmarkFilterSingleNoGlobTrue(b *testing.B) {
    f, _ := CompileFilter([]string{"cpu"})
    var tmp bool
    for n := 0; n < b.N; n++ {
        tmp = f.Match("cpu")
    }
    benchbool = tmp
}

func BenchmarkFilter(b *testing.B) {
    f, _ := CompileFilter([]string{"cpu", "mem", "net*"})
    var tmp bool
    for n := 0; n < b.N; n++ {
        tmp = f.Match("network")
    }
    benchbool = tmp
}

func BenchmarkFilterNoGlob(b *testing.B) {
    f, _ := CompileFilter([]string{"cpu", "mem", "net"})
    var tmp bool
    for n := 0; n < b.N; n++ {
        tmp = f.Match("net")
    }
    benchbool = tmp
}

func BenchmarkFilter2(b *testing.B) {
    f, _ := CompileFilter([]string{"aa", "bb", "c", "ad", "ar", "at", "aq",
        "aw", "az", "axxx", "ab", "cpu", "mem", "net*"})
    var tmp bool
    for n := 0; n < b.N; n++ {
        tmp = f.Match("network")
    }
    benchbool = tmp
}

func BenchmarkFilter2NoGlob(b *testing.B) {
    f, _ := CompileFilter([]string{"aa", "bb", "c", "ad", "ar", "at", "aq",
        "aw", "az", "axxx", "ab", "cpu", "mem", "net"})
    var tmp bool
    for n := 0; n < b.N; n++ {
        tmp = f.Match("net")
    }
    benchbool = tmp
}
49 internal/config/aws/credentials.go Normal file
@@ -0,0 +1,49 @@
package aws

import (
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/client"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/credentials/stscreds"
    "github.com/aws/aws-sdk-go/aws/session"
)

type CredentialConfig struct {
    Region    string
    AccessKey string
    SecretKey string
    RoleARN   string
    Profile   string
    Filename  string
    Token     string
}

func (c *CredentialConfig) Credentials() client.ConfigProvider {
    if c.RoleARN != "" {
        return c.assumeCredentials()
    } else {
        return c.rootCredentials()
    }
}

func (c *CredentialConfig) rootCredentials() client.ConfigProvider {
    config := &aws.Config{
        Region: aws.String(c.Region),
    }
    if c.AccessKey != "" || c.SecretKey != "" {
        config.Credentials = credentials.NewStaticCredentials(c.AccessKey, c.SecretKey, c.Token)
    } else if c.Profile != "" || c.Filename != "" {
        config.Credentials = credentials.NewSharedCredentials(c.Filename, c.Profile)
    }

    return session.New(config)
}

func (c *CredentialConfig) assumeCredentials() client.ConfigProvider {
    rootCredentials := c.rootCredentials()
    config := &aws.Config{
        Region: aws.String(c.Region),
    }
    config.Credentials = stscreds.NewCredentials(rootCredentials, c.RoleARN)
    return session.New(config)
}
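This is the shared credential-resolution helper behind the cloudwatch and kinesis plugin options shown earlier. A sketch of how a plugin is expected to consume it: populate a CredentialConfig from the TOML options and hand the resulting ConfigProvider to a service client. The cloudwatch constructor is from aws-sdk-go; the import alias is illustrative:

package main

import (
    "github.com/aws/aws-sdk-go/service/cloudwatch"

    internalaws "github.com/influxdata/telegraf/internal/config/aws"
)

func main() {
    // Leaving AccessKey/SecretKey empty falls through to environment
    // variables, the shared credentials file, and finally the EC2 instance
    // profile; setting RoleARN switches to STS-assumed credentials.
    c := &internalaws.CredentialConfig{
        Region: "us-east-1",
    }
    svc := cloudwatch.New(c.Credentials())
    _ = svc // issue ListMetrics / GetMetricStatistics calls from here
}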
@@ -58,7 +58,6 @@ func NewConfig() *Config {

            Interval:      internal.Duration{Duration: 10 * time.Second},
            RoundInterval: true,
            FlushInterval: internal.Duration{Duration: 10 * time.Second},
            FlushJitter:   internal.Duration{Duration: 5 * time.Second},
        },

        Tags: make(map[string]string),

@@ -78,6 +77,14 @@ type AgentConfig struct {

    // ie, if Interval=10s then always collect on :00, :10, :20, etc.
    RoundInterval bool

    // By default, precision will be set to the same timestamp order as the
    // collection interval, with the maximum being 1s.
    // ie, when interval = "10s", precision will be "1s"
    //     when interval = "250ms", precision will be "1ms"
    // Precision will NOT be used for service inputs. It is up to each individual
    // service input to set the timestamp at the appropriate precision.
    Precision internal.Duration

    // CollectionJitter is used to jitter the collection by a random amount.
    // Each plugin will sleep for a random time within jitter before collecting.
    // This can be used to avoid many plugins querying things like sysfs at the

@@ -109,11 +116,10 @@ type AgentConfig struct {

    // does _not_ deactivate FlushInterval.
    FlushBufferWhenFull bool

    // TODO(cam): Remove UTC and Precision parameters, they are no longer
    // TODO(cam): Remove UTC and parameter, they are no longer
    // valid for the agent config. Leaving them here for now for backwards-
    // compatibility
    UTC bool `toml:"utc"`
    Precision string
    UTC bool `toml:"utc"`

    // Debug is the option for running in debug mode
    Debug bool

@@ -210,6 +216,11 @@ var header = `# Telegraf Configuration

  ## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
  flush_jitter = "0s"

  ## By default, precision will be set to the same timestamp order as the
  ## collection interval, with the maximum being 1s.
  ## Precision will NOT be used for service inputs, such as logparser and statsd.
  ## Valid values are "Nns", "Nus" (or "Nµs"), "Nms", "Ns".
  precision = ""
  ## Run telegraf in debug mode
  debug = false
  ## Run telegraf in quiet mode

@@ -357,7 +368,7 @@ func printConfig(name string, p printer, op string, commented bool) {

            fmt.Print("\n")
            continue
        }
        fmt.Print(comment + line + "\n")
        fmt.Print(strings.TrimRight(comment+line, " ") + "\n")
        }
    }
}
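The interval-to-precision rule described in the new AgentConfig comment can be pictured with a small helper. This is an illustrative sketch of the documented rule, not the agent's actual implementation:

package main

import "time"

// precisionFor mirrors the documented rule: timestamp precision follows the
// order of magnitude of the collection interval, capped at one second.
func precisionFor(interval time.Duration) time.Duration {
    switch {
    case interval >= time.Second:
        return time.Second // e.g. interval = "10s" -> precision "1s"
    case interval >= time.Millisecond:
        return time.Millisecond // e.g. interval = "250ms" -> precision "1ms"
    case interval >= time.Microsecond:
        return time.Microsecond
    default:
        return time.Nanosecond
    }
}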
@@ -528,6 +539,13 @@ func (c *Config) LoadConfig(path string) error {

    return nil
}

// trimBOM trims the Byte-Order-Marks from the beginning of the file.
// this is for Windows compatibility only.
// see https://github.com/influxdata/telegraf/issues/1378
func trimBOM(f []byte) []byte {
    return bytes.TrimPrefix(f, []byte("\xef\xbb\xbf"))
}

// parseFile loads a TOML configuration from a provided path and
// returns the AST produced from the TOML parser. When loading the file, it
// will find environment variables and replace them.

@@ -536,6 +554,8 @@ func parseFile(fpath string) (*ast.Table, error) {

    if err != nil {
        return nil, err
    }
    // ugh windows why
    contents = trimBOM(contents)

    env_vars := envVarRe.FindAll(contents, -1)
    for _, env_var := range env_vars {
37 internal/errchan/errchan.go Normal file
@@ -0,0 +1,37 @@
package errchan

import (
    "fmt"
    "strings"
)

type ErrChan struct {
    C chan error
}

// New returns an error channel of max length 'n'
// errors can be sent to the ErrChan.C channel, and will be returned when
// ErrChan.Error() is called.
func New(n int) *ErrChan {
    return &ErrChan{
        C: make(chan error, n),
    }
}

// Error closes the ErrChan.C channel and returns an error if there are any
// non-nil errors, otherwise returns nil.
func (e *ErrChan) Error() error {
    close(e.C)

    var out string
    for err := range e.C {
        if err != nil {
            out += "[" + err.Error() + "], "
        }
    }

    if out != "" {
        return fmt.Errorf("Errors encountered: " + strings.TrimRight(out, ", "))
    }
    return nil
}
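The intended fan-out pattern is the one the aerospike plugin adopts further down this diff: buffer the channel to the number of goroutines, have each goroutine send exactly one (possibly nil) error, then collect with Error(). A minimal sketch; gatherOne is a hypothetical per-server collection function:

package main

import (
    "sync"

    "github.com/influxdata/telegraf/internal/errchan"
)

// gatherOne stands in for a per-server collection function.
func gatherOne(server string) error { return nil }

func gatherAll(servers []string) error {
    errChan := errchan.New(len(servers))
    var wg sync.WaitGroup
    wg.Add(len(servers))
    for _, server := range servers {
        go func(s string) {
            defer wg.Done()
            errChan.C <- gatherOne(s) // one (possibly nil) error per goroutine
        }(server)
    }
    wg.Wait()
    // Error closes the channel and joins any non-nil errors into one.
    return errChan.Error()
}

func main() { _ = gatherAll([]string{"localhost:3000"}) }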
@@ -10,6 +10,7 @@ import (

    "fmt"
    "io/ioutil"
    "log"
    "math/big"
    "os"
    "os/exec"
    "strconv"

@@ -132,8 +133,8 @@ func GetTLSConfig(

        cert, err := tls.LoadX509KeyPair(SSLCert, SSLKey)
        if err != nil {
            return nil, errors.New(fmt.Sprintf(
                "Could not load TLS client key/certificate: %s",
                err))
                "Could not load TLS client key/certificate from %s:%s: %s",
                SSLKey, SSLCert, err))
        }

        t.Certificates = []tls.Certificate{cert}

@@ -205,3 +206,27 @@ func WaitTimeout(c *exec.Cmd, timeout time.Duration) error {

        return TimeoutErr
    }
}

// RandomSleep will sleep for a random amount of time up to max.
// If the shutdown channel is closed, it will return before it has finished
// sleeping.
func RandomSleep(max time.Duration, shutdown chan struct{}) {
    if max == 0 {
        return
    }
    maxSleep := big.NewInt(max.Nanoseconds())

    var sleepns int64
    if j, err := rand.Int(rand.Reader, maxSleep); err == nil {
        sleepns = j.Int64()
    }

    t := time.NewTimer(time.Nanosecond * time.Duration(sleepns))
    select {
    case <-t.C:
        return
    case <-shutdown:
        t.Stop()
        return
    }
}
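How a caller might use RandomSleep for collection jitter; the wiring here is illustrative, not the agent's actual code:

package main

import (
    "time"

    "github.com/influxdata/telegraf/internal"
)

func main() {
    shutdown := make(chan struct{})
    // Sleep up to 5s of jitter before collecting; this returns early if
    // shutdown is closed while we are still sleeping.
    internal.RandomSleep(5*time.Second, shutdown)
    // ... collect metrics here ...
    close(shutdown)
}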
@@ -106,3 +106,28 @@ func TestRunError(t *testing.T) {

    assert.Error(t, err)
}

func TestRandomSleep(t *testing.T) {
    // test that zero max returns immediately
    s := time.Now()
    RandomSleep(time.Duration(0), make(chan struct{}))
    elapsed := time.Since(s)
    assert.True(t, elapsed < time.Millisecond)

    // test that max sleep is respected
    s = time.Now()
    RandomSleep(time.Millisecond*50, make(chan struct{}))
    elapsed = time.Since(s)
    assert.True(t, elapsed < time.Millisecond*50)

    // test that shutdown is respected
    s = time.Now()
    shutdown := make(chan struct{})
    go func() {
        time.Sleep(time.Millisecond * 100)
        close(shutdown)
    }()
    RandomSleep(time.Second, shutdown)
    elapsed = time.Since(s)
    assert.True(t, elapsed < time.Millisecond*150)
}
59 internal/limiter/limiter.go Normal file
@@ -0,0 +1,59 @@
package limiter

import (
    "sync"
    "time"
)

// NewRateLimiter returns a rate limiter that will emit from the C
// channel only 'n' times every 'rate' seconds.
func NewRateLimiter(n int, rate time.Duration) *rateLimiter {
    r := &rateLimiter{
        C:        make(chan bool),
        rate:     rate,
        n:        n,
        shutdown: make(chan bool),
    }
    r.wg.Add(1)
    go r.limiter()
    return r
}

type rateLimiter struct {
    C    chan bool
    rate time.Duration
    n    int

    shutdown chan bool
    wg       sync.WaitGroup
}

func (r *rateLimiter) Stop() {
    close(r.shutdown)
    r.wg.Wait()
    close(r.C)
}

func (r *rateLimiter) limiter() {
    defer r.wg.Done()
    ticker := time.NewTicker(r.rate)
    defer ticker.Stop()
    counter := 0
    for {
        select {
        case <-r.shutdown:
            return
        case <-ticker.C:
            counter = 0
        default:
            if counter < r.n {
                select {
                case r.C <- true:
                    counter++
                case <-r.shutdown:
                    return
                }
            }
        }
    }
}
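A usage sketch: receive a token from C before each unit of work; at most 'n' tokens are handed out per 'rate' interval, and Stop shuts the limiter down cleanly. The doWrite function is hypothetical:

package main

import (
    "time"

    "github.com/influxdata/telegraf/internal/limiter"
)

func doWrite() {} // stand-in for a rate-limited operation

func main() {
    r := limiter.NewRateLimiter(3, time.Second) // at most 3 emits per second
    defer r.Stop()

    for i := 0; i < 10; i++ {
        <-r.C // blocks once this interval's budget is spent
        doWrite()
    }
}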
54 internal/limiter/limiter_test.go Normal file
@@ -0,0 +1,54 @@
package limiter

import (
    "testing"
    "time"

    "github.com/stretchr/testify/assert"
)

func TestRateLimiter(t *testing.T) {
    r := NewRateLimiter(5, time.Second)
    ticker := time.NewTicker(time.Millisecond * 75)

    // test that we can only get 5 receives from the rate limiter
    counter := 0
outer:
    for {
        select {
        case <-r.C:
            counter++
        case <-ticker.C:
            break outer
        }
    }

    assert.Equal(t, 5, counter)
    r.Stop()
    // verify that the Stop function closes the channel.
    _, ok := <-r.C
    assert.False(t, ok)
}

func TestRateLimiterMultipleIterations(t *testing.T) {
    r := NewRateLimiter(5, time.Millisecond*50)
    ticker := time.NewTicker(time.Millisecond * 250)

    // test that we can get 15 receives from the rate limiter
    counter := 0
outer:
    for {
        select {
        case <-ticker.C:
            break outer
        case <-r.C:
            counter++
        }
    }

    assert.True(t, counter > 10)
    r.Stop()
    // verify that the Stop function closes the channel.
    _, ok := <-r.C
    assert.False(t, ok)
}
@@ -2,81 +2,79 @@ package internal_models

import (
    "fmt"
    "strings"

    "github.com/gobwas/glob"

    "github.com/influxdata/telegraf"
    "github.com/influxdata/telegraf/filter"
)

// TagFilter is the name of a tag, and the values on which to filter
type TagFilter struct {
    Name   string
    Filter []string
    filter glob.Glob
    filter filter.Filter
}

// Filter containing drop/pass and tagdrop/tagpass rules
type Filter struct {
    NameDrop []string
    nameDrop glob.Glob
    nameDrop filter.Filter
    NamePass []string
    namePass glob.Glob
    namePass filter.Filter

    FieldDrop []string
    fieldDrop glob.Glob
    fieldDrop filter.Filter
    FieldPass []string
    fieldPass glob.Glob
    fieldPass filter.Filter

    TagDrop []TagFilter
    TagPass []TagFilter

    TagExclude []string
    tagExclude glob.Glob
    tagExclude filter.Filter
    TagInclude []string
    tagInclude glob.Glob
    tagInclude filter.Filter

    IsActive bool
}

// Compile all Filter lists into glob.Glob objects.
// Compile all Filter lists into filter.Filter objects.
func (f *Filter) CompileFilter() error {
    var err error
    f.nameDrop, err = compileFilter(f.NameDrop)
    f.nameDrop, err = filter.CompileFilter(f.NameDrop)
    if err != nil {
        return fmt.Errorf("Error compiling 'namedrop', %s", err)
    }
    f.namePass, err = compileFilter(f.NamePass)
    f.namePass, err = filter.CompileFilter(f.NamePass)
    if err != nil {
        return fmt.Errorf("Error compiling 'namepass', %s", err)
    }

    f.fieldDrop, err = compileFilter(f.FieldDrop)
    f.fieldDrop, err = filter.CompileFilter(f.FieldDrop)
    if err != nil {
        return fmt.Errorf("Error compiling 'fielddrop', %s", err)
    }
    f.fieldPass, err = compileFilter(f.FieldPass)
    f.fieldPass, err = filter.CompileFilter(f.FieldPass)
    if err != nil {
        return fmt.Errorf("Error compiling 'fieldpass', %s", err)
    }

    f.tagExclude, err = compileFilter(f.TagExclude)
    f.tagExclude, err = filter.CompileFilter(f.TagExclude)
    if err != nil {
        return fmt.Errorf("Error compiling 'tagexclude', %s", err)
    }
    f.tagInclude, err = compileFilter(f.TagInclude)
    f.tagInclude, err = filter.CompileFilter(f.TagInclude)
    if err != nil {
        return fmt.Errorf("Error compiling 'taginclude', %s", err)
    }

    for i, _ := range f.TagDrop {
        f.TagDrop[i].filter, err = compileFilter(f.TagDrop[i].Filter)
        f.TagDrop[i].filter, err = filter.CompileFilter(f.TagDrop[i].Filter)
        if err != nil {
            return fmt.Errorf("Error compiling 'tagdrop', %s", err)
        }
    }
    for i, _ := range f.TagPass {
        f.TagPass[i].filter, err = compileFilter(f.TagPass[i].Filter)
        f.TagPass[i].filter, err = filter.CompileFilter(f.TagPass[i].Filter)
        if err != nil {
            return fmt.Errorf("Error compiling 'tagpass', %s", err)
        }

@@ -84,20 +82,6 @@ func (f *Filter) CompileFilter() error {

    return nil
}

func compileFilter(filter []string) (glob.Glob, error) {
    if len(filter) == 0 {
        return nil, nil
    }
    var g glob.Glob
    var err error
    if len(filter) == 1 {
        g, err = glob.Compile(filter[0])
    } else {
        g, err = glob.Compile("{" + strings.Join(filter, ",") + "}")
    }
    return g, err
}

func (f *Filter) ShouldMetricPass(metric telegraf.Metric) bool {
    if f.ShouldNamePass(metric.Name()) && f.ShouldTagsPass(metric.Tags()) {
        return true

@@ -253,51 +253,6 @@ func TestFilter_TagDrop(t *testing.T) {

    }
}

func TestFilter_CompileFilterError(t *testing.T) {
    f := Filter{
        NameDrop: []string{"", ""},
    }
    assert.Error(t, f.CompileFilter())
    f = Filter{
        NamePass: []string{"", ""},
    }
    assert.Error(t, f.CompileFilter())
    f = Filter{
        FieldDrop: []string{"", ""},
    }
    assert.Error(t, f.CompileFilter())
    f = Filter{
        FieldPass: []string{"", ""},
    }
    assert.Error(t, f.CompileFilter())
    f = Filter{
        TagExclude: []string{"", ""},
    }
    assert.Error(t, f.CompileFilter())
    f = Filter{
        TagInclude: []string{"", ""},
    }
    assert.Error(t, f.CompileFilter())
    filters := []TagFilter{
        TagFilter{
            Name:   "cpu",
            Filter: []string{"{foobar}"},
        }}
    f = Filter{
        TagDrop: filters,
    }
    require.Error(t, f.CompileFilter())
    filters = []TagFilter{
        TagFilter{
            Name:   "cpu",
            Filter: []string{"{foobar}"},
        }}
    f = Filter{
        TagPass: filters,
    }
    require.Error(t, f.CompileFilter())
}

func TestFilter_ShouldMetricsPass(t *testing.T) {
    m := testutil.TestMetric(1, "testmetric")
    f := Filter{

@@ -138,7 +138,7 @@ func (ro *RunningOutput) Write() error {

    }
}

func (ro *RunningOutput) write(metrics []telegraf.Metric) error {
    if len(metrics) == 0 {
    if metrics == nil || len(metrics) == 0 {
        return nil
    }
    start := time.Now()

@@ -45,14 +45,9 @@ func NewMetric(

    name string,
    tags map[string]string,
    fields map[string]interface{},
    t ...time.Time,
    t time.Time,
) (Metric, error) {
    var T time.Time
    if len(t) > 0 {
        T = t[0]
    }

    pt, err := client.NewPoint(name, tags, fields, T)
    pt, err := client.NewPoint(name, tags, fields, t)
    if err != nil {
        return nil, err
    }

@@ -51,23 +51,6 @@ func TestNewMetricString(t *testing.T) {

    assert.Equal(t, lineProtoPrecision, m.PrecisionString("s"))
}

func TestNewMetricStringNoTime(t *testing.T) {
    tags := map[string]string{
        "host": "localhost",
    }
    fields := map[string]interface{}{
        "usage_idle": float64(99),
    }
    m, err := NewMetric("cpu", tags, fields)
    assert.NoError(t, err)

    lineProto := fmt.Sprintf("cpu,host=localhost usage_idle=99")
    assert.Equal(t, lineProto, m.String())

    lineProtoPrecision := fmt.Sprintf("cpu,host=localhost usage_idle=99")
    assert.Equal(t, lineProtoPrecision, m.PrecisionString("s"))
}

func TestNewMetricFailNaN(t *testing.T) {
    now := time.Now()

@@ -1,104 +1,19 @@

package aerospike

import (
    "bytes"
    "encoding/binary"
    "fmt"
    "github.com/influxdata/telegraf"
    "github.com/influxdata/telegraf/plugins/inputs"
    "net"
    "strconv"
    "strings"
    "sync"
    "time"

    "github.com/influxdata/telegraf"
    "github.com/influxdata/telegraf/internal/errchan"
    "github.com/influxdata/telegraf/plugins/inputs"

    as "github.com/sparrc/aerospike-client-go"
)

const (
    MSG_HEADER_SIZE = 8
    MSG_TYPE        = 1 // Info is 1
    MSG_VERSION     = 2
)

var (
    STATISTICS_COMMAND = []byte("statistics\n")
    NAMESPACES_COMMAND = []byte("namespaces\n")
)

type aerospikeMessageHeader struct {
    Version uint8
    Type    uint8
    DataLen [6]byte
}

type aerospikeMessage struct {
    aerospikeMessageHeader
    Data []byte
}

// Taken from aerospike-client-go/types/message.go
func (msg *aerospikeMessage) Serialize() []byte {
    msg.DataLen = msgLenToBytes(int64(len(msg.Data)))
    buf := bytes.NewBuffer([]byte{})
    binary.Write(buf, binary.BigEndian, msg.aerospikeMessageHeader)
    binary.Write(buf, binary.BigEndian, msg.Data[:])
    return buf.Bytes()
}

type aerospikeInfoCommand struct {
    msg *aerospikeMessage
}

// Taken from aerospike-client-go/info.go
func (nfo *aerospikeInfoCommand) parseMultiResponse() (map[string]string, error) {
    responses := make(map[string]string)
    offset := int64(0)
    begin := int64(0)

    dataLen := int64(len(nfo.msg.Data))

    // Create reusable StringBuilder for performance.
    for offset < dataLen {
        b := nfo.msg.Data[offset]

        if b == '\t' {
            name := nfo.msg.Data[begin:offset]
            offset++
            begin = offset

            // Parse field value.
            for offset < dataLen {
                if nfo.msg.Data[offset] == '\n' {
                    break
                }
                offset++
            }

            if offset > begin {
                value := nfo.msg.Data[begin:offset]
                responses[string(name)] = string(value)
            } else {
                responses[string(name)] = ""
            }
            offset++
            begin = offset
        } else if b == '\n' {
            if offset > begin {
                name := nfo.msg.Data[begin:offset]
                responses[string(name)] = ""
            }
            offset++
            begin = offset
        } else {
            offset++
        }
    }

    if offset > begin {
        name := nfo.msg.Data[begin:offset]
        responses[string(name)] = ""
    }
    return responses, nil
}

type Aerospike struct {
    Servers []string
}

@@ -115,7 +30,7 @@ func (a *Aerospike) SampleConfig() string {

}

func (a *Aerospike) Description() string {
    return "Read stats from an aerospike server"
    return "Read stats from aerospike server(s)"
}

func (a *Aerospike) Gather(acc telegraf.Accumulator) error {
@@ -124,214 +39,90 @@ func (a *Aerospike) Gather(acc telegraf.Accumulator) error {
|
||||
}
|
||||
|
||||
var wg sync.WaitGroup
|
||||
|
||||
var outerr error
|
||||
|
||||
errChan := errchan.New(len(a.Servers))
|
||||
wg.Add(len(a.Servers))
|
||||
for _, server := range a.Servers {
|
||||
wg.Add(1)
|
||||
go func(server string) {
|
||||
go func(serv string) {
|
||||
defer wg.Done()
|
||||
outerr = a.gatherServer(server, acc)
|
||||
errChan.C <- a.gatherServer(serv, acc)
|
||||
}(server)
|
||||
}
|
||||
|
||||
wg.Wait()
|
||||
return outerr
|
||||
return errChan.Error()
|
||||
}
|
||||
|
||||
func (a *Aerospike) gatherServer(host string, acc telegraf.Accumulator) error {
|
||||
aerospikeInfo, err := getMap(STATISTICS_COMMAND, host)
|
||||
func (a *Aerospike) gatherServer(hostport string, acc telegraf.Accumulator) error {
|
||||
host, port, err := net.SplitHostPort(hostport)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Aerospike info failed: %s", err)
|
||||
return err
|
||||
}
|
||||
readAerospikeStats(aerospikeInfo, acc, host, "")
|
||||
namespaces, err := getList(NAMESPACES_COMMAND, host)
|
||||
|
||||
iport, err := strconv.Atoi(port)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Aerospike namespace list failed: %s", err)
|
||||
iport = 3000
|
||||
}
|
||||
for ix := range namespaces {
|
||||
nsInfo, err := getMap([]byte("namespace/"+namespaces[ix]+"\n"), host)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Aerospike namespace '%s' query failed: %s", namespaces[ix], err)
|
||||
|
||||
c, err := as.NewClient(host, iport)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer c.Close()
|
||||
|
||||
nodes := c.GetNodes()
|
||||
for _, n := range nodes {
|
||||
tags := map[string]string{
|
||||
"node_name": n.GetName(),
|
||||
"aerospike_host": hostport,
|
||||
}
|
||||
fields := make(map[string]interface{})
|
||||
stats, err := as.RequestNodeStats(n)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
for k, v := range stats {
|
||||
if iv, err := strconv.ParseInt(v, 10, 64); err == nil {
|
||||
fields[strings.Replace(k, "-", "_", -1)] = iv
|
||||
}
|
||||
}
|
||||
acc.AddFields("aerospike_node", fields, tags, time.Now())
|
||||
|
||||
info, err := as.RequestNodeInfo(n, "namespaces")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
namespaces := strings.Split(info["namespaces"], ";")
|
||||
|
||||
for _, namespace := range namespaces {
|
||||
nTags := copyTags(tags)
|
||||
nTags["namespace"] = namespace
|
||||
nFields := make(map[string]interface{})
|
||||
info, err := as.RequestNodeInfo(n, "namespace/"+namespace)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
stats := strings.Split(info["namespace/"+namespace], ";")
|
||||
for _, stat := range stats {
|
||||
parts := strings.Split(stat, "=")
|
||||
if len(parts) < 2 {
|
||||
continue
|
||||
}
|
||||
if iv, err := strconv.ParseInt(parts[1], 10, 64); err == nil {
|
||||
nFields[strings.Replace(parts[0], "-", "_", -1)] = iv
|
||||
}
|
||||
}
|
||||
acc.AddFields("aerospike_namespace", nFields, nTags, time.Now())
|
||||
}
|
||||
readAerospikeStats(nsInfo, acc, host, namespaces[ix])
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func getMap(key []byte, host string) (map[string]string, error) {
|
||||
data, err := get(key, host)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("Failed to get data: %s", err)
|
||||
func copyTags(m map[string]string) map[string]string {
|
||||
out := make(map[string]string)
|
||||
for k, v := range m {
|
||||
out[k] = v
|
||||
}
|
||||
parsed, err := unmarshalMapInfo(data, string(key))
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("Failed to unmarshal data: %s", err)
|
||||
}
|
||||
|
||||
return parsed, nil
|
||||
}
|
||||
|
||||
func getList(key []byte, host string) ([]string, error) {
|
||||
data, err := get(key, host)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("Failed to get data: %s", err)
|
||||
}
|
||||
parsed, err := unmarshalListInfo(data, string(key))
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("Failed to unmarshal data: %s", err)
|
||||
}
|
||||
|
||||
return parsed, nil
|
||||
}

func get(key []byte, host string) (map[string]string, error) {
	var err error
	var data map[string]string

	asInfo := &aerospikeInfoCommand{
		msg: &aerospikeMessage{
			aerospikeMessageHeader: aerospikeMessageHeader{
				Version: uint8(MSG_VERSION),
				Type:    uint8(MSG_TYPE),
				DataLen: msgLenToBytes(int64(len(key))),
			},
			Data: key,
		},
	}

	cmd := asInfo.msg.Serialize()
	addr, err := net.ResolveTCPAddr("tcp", host)
	if err != nil {
		return data, fmt.Errorf("Lookup failed for '%s': %s", host, err)
	}

	conn, err := net.DialTCP("tcp", nil, addr)
	if err != nil {
		return data, fmt.Errorf("Connection failed for '%s': %s", host, err)
	}
	defer conn.Close()

	_, err = conn.Write(cmd)
	if err != nil {
		return data, fmt.Errorf("Failed to send to '%s': %s", host, err)
	}

	msgHeader := bytes.NewBuffer(make([]byte, MSG_HEADER_SIZE))
	_, err = readLenFromConn(conn, msgHeader.Bytes(), MSG_HEADER_SIZE)
	if err != nil {
		return data, fmt.Errorf("Failed to read header: %s", err)
	}
	err = binary.Read(msgHeader, binary.BigEndian, &asInfo.msg.aerospikeMessageHeader)
	if err != nil {
		return data, fmt.Errorf("Failed to unmarshal header: %s", err)
	}

	msgLen := msgLenFromBytes(asInfo.msg.aerospikeMessageHeader.DataLen)

	if int64(len(asInfo.msg.Data)) != msgLen {
		asInfo.msg.Data = make([]byte, msgLen)
	}

	_, err = readLenFromConn(conn, asInfo.msg.Data, len(asInfo.msg.Data))
	if err != nil {
		return data, fmt.Errorf("Failed to read from connection to '%s': %s", host, err)
	}

	data, err = asInfo.parseMultiResponse()
	if err != nil {
		return data, fmt.Errorf("Failed to parse response from '%s': %s", host, err)
	}

	return data, err
}

func readAerospikeStats(
	stats map[string]string,
	acc telegraf.Accumulator,
	host string,
	namespace string,
) {
	fields := make(map[string]interface{})
	tags := map[string]string{
		"aerospike_host": host,
		"namespace":      "_service",
	}

	if namespace != "" {
		tags["namespace"] = namespace
	}
	for key, value := range stats {
		// We are going to ignore all string based keys
		val, err := strconv.ParseInt(value, 10, 64)
		if err == nil {
			if strings.Contains(key, "-") {
				key = strings.Replace(key, "-", "_", -1)
			}
			fields[key] = val
		}
	}
	acc.AddFields("aerospike", fields, tags)
}

func unmarshalMapInfo(infoMap map[string]string, key string) (map[string]string, error) {
	key = strings.TrimSuffix(key, "\n")
	res := map[string]string{}

	v, exists := infoMap[key]
	if !exists {
		return res, fmt.Errorf("Key '%s' missing from info", key)
	}

	values := strings.Split(v, ";")
	for i := range values {
		kv := strings.Split(values[i], "=")
		if len(kv) > 1 {
			res[kv[0]] = kv[1]
		}
	}

	return res, nil
}

func unmarshalListInfo(infoMap map[string]string, key string) ([]string, error) {
	key = strings.TrimSuffix(key, "\n")

	v, exists := infoMap[key]
	if !exists {
		return []string{}, fmt.Errorf("Key '%s' missing from info", key)
	}

	values := strings.Split(v, ";")
	return values, nil
}

func readLenFromConn(c net.Conn, buffer []byte, length int) (total int, err error) {
	var r int
	for total < length {
		r, err = c.Read(buffer[total:length])
		total += r
		if err != nil {
			break
		}
	}
	return
}

// Taken from aerospike-client-go/types/message.go
func msgLenToBytes(DataLen int64) [6]byte {
	b := make([]byte, 8)
	binary.BigEndian.PutUint64(b, uint64(DataLen))
	res := [6]byte{}
	copy(res[:], b[2:])
	return res
}

// Taken from aerospike-client-go/types/message.go
func msgLenFromBytes(buf [6]byte) int64 {
	nbytes := append([]byte{0, 0}, buf[:]...)
	DataLen := binary.BigEndian.Uint64(nbytes)
	return int64(DataLen)
}

func init() {
@@ -1,7 +1,6 @@
package aerospike

import (
	"reflect"
	"testing"

	"github.com/influxdata/telegraf/testutil"
@@ -23,96 +22,29 @@ func TestAerospikeStatistics(t *testing.T) {
	err := a.Gather(&acc)
	require.NoError(t, err)

	// Only use a few of the metrics
	asMetrics := []string{
		"transactions",
		"stat_write_errs",
		"stat_read_reqs",
		"stat_write_reqs",
	}

	for _, metric := range asMetrics {
		assert.True(t, acc.HasIntField("aerospike", metric), metric)
	}

	assert.True(t, acc.HasMeasurement("aerospike_node"))
	assert.True(t, acc.HasMeasurement("aerospike_namespace"))
	assert.True(t, acc.HasIntField("aerospike_node", "batch_error"))
}

func TestAerospikeMsgLenFromToBytes(t *testing.T) {
	var i int64 = 8
	assert.True(t, i == msgLenFromBytes(msgLenToBytes(i)))
}

func TestAerospikeStatisticsPartialErr(t *testing.T) {
	if testing.Short() {
		t.Skip("Skipping integration test in short mode")
	}

	a := &Aerospike{
		Servers: []string{
			testutil.GetLocalHost() + ":3000",
			testutil.GetLocalHost() + ":9999",
		},
	}

	var acc testutil.Accumulator

	err := a.Gather(&acc)
	require.Error(t, err)

	assert.True(t, acc.HasMeasurement("aerospike_node"))
	assert.True(t, acc.HasMeasurement("aerospike_namespace"))
	assert.True(t, acc.HasIntField("aerospike_node", "batch_error"))
}

func TestReadAerospikeStatsNoNamespace(t *testing.T) {
	// Also test for re-writing
	var acc testutil.Accumulator
	stats := map[string]string{
		"stat-write-errs": "12345",
		"stat_read_reqs":  "12345",
	}
	readAerospikeStats(stats, &acc, "host1", "")

	fields := map[string]interface{}{
		"stat_write_errs": int64(12345),
		"stat_read_reqs":  int64(12345),
	}
	tags := map[string]string{
		"aerospike_host": "host1",
		"namespace":      "_service",
	}
	acc.AssertContainsTaggedFields(t, "aerospike", fields, tags)
}

func TestReadAerospikeStatsNamespace(t *testing.T) {
	var acc testutil.Accumulator
	stats := map[string]string{
		"stat_write_errs": "12345",
		"stat_read_reqs":  "12345",
	}
	readAerospikeStats(stats, &acc, "host1", "test")

	fields := map[string]interface{}{
		"stat_write_errs": int64(12345),
		"stat_read_reqs":  int64(12345),
	}
	tags := map[string]string{
		"aerospike_host": "host1",
		"namespace":      "test",
	}
	acc.AssertContainsTaggedFields(t, "aerospike", fields, tags)
}

func TestAerospikeUnmarshalList(t *testing.T) {
	i := map[string]string{
		"test": "one;two;three",
	}

	expected := []string{"one", "two", "three"}

	list, err := unmarshalListInfo(i, "test2")
	assert.True(t, err != nil)

	list, err = unmarshalListInfo(i, "test")
	assert.True(t, err == nil)
	equal := true
	for ix := range expected {
		if list[ix] != expected[ix] {
			equal = false
			break
		}
	}
	assert.True(t, equal)
}

func TestAerospikeUnmarshalMap(t *testing.T) {
	i := map[string]string{
		"test": "key1=value1;key2=value2",
	}

	expected := map[string]string{
		"key1": "value1",
		"key2": "value2",
	}
	m, err := unmarshalMapInfo(i, "test")
	assert.True(t, err == nil)
	assert.True(t, reflect.DeepEqual(m, expected))
}

@@ -6,8 +6,11 @@ import (
	_ "github.com/influxdata/telegraf/plugins/inputs/bcache"
	_ "github.com/influxdata/telegraf/plugins/inputs/cassandra"
	_ "github.com/influxdata/telegraf/plugins/inputs/ceph"
	_ "github.com/influxdata/telegraf/plugins/inputs/cgroup"
	_ "github.com/influxdata/telegraf/plugins/inputs/chrony"
	_ "github.com/influxdata/telegraf/plugins/inputs/cloudwatch"
	_ "github.com/influxdata/telegraf/plugins/inputs/conntrack"
	_ "github.com/influxdata/telegraf/plugins/inputs/consul"
	_ "github.com/influxdata/telegraf/plugins/inputs/couchbase"
	_ "github.com/influxdata/telegraf/plugins/inputs/couchdb"
	_ "github.com/influxdata/telegraf/plugins/inputs/disque"
@@ -17,7 +20,7 @@ import (
	_ "github.com/influxdata/telegraf/plugins/inputs/elasticsearch"
	_ "github.com/influxdata/telegraf/plugins/inputs/exec"
	_ "github.com/influxdata/telegraf/plugins/inputs/filestat"
	_ "github.com/influxdata/telegraf/plugins/inputs/github_webhooks"
	_ "github.com/influxdata/telegraf/plugins/inputs/graylog"
	_ "github.com/influxdata/telegraf/plugins/inputs/haproxy"
	_ "github.com/influxdata/telegraf/plugins/inputs/http_response"
	_ "github.com/influxdata/telegraf/plugins/inputs/httpjson"
@@ -26,6 +29,7 @@ import (
	_ "github.com/influxdata/telegraf/plugins/inputs/jolokia"
	_ "github.com/influxdata/telegraf/plugins/inputs/kafka_consumer"
	_ "github.com/influxdata/telegraf/plugins/inputs/leofs"
	_ "github.com/influxdata/telegraf/plugins/inputs/logparser"
	_ "github.com/influxdata/telegraf/plugins/inputs/lustre2"
	_ "github.com/influxdata/telegraf/plugins/inputs/mailchimp"
	_ "github.com/influxdata/telegraf/plugins/inputs/memcached"
@@ -37,6 +41,7 @@ import (
	_ "github.com/influxdata/telegraf/plugins/inputs/net_response"
	_ "github.com/influxdata/telegraf/plugins/inputs/nginx"
	_ "github.com/influxdata/telegraf/plugins/inputs/nsq"
	_ "github.com/influxdata/telegraf/plugins/inputs/nsq_consumer"
	_ "github.com/influxdata/telegraf/plugins/inputs/nstat"
	_ "github.com/influxdata/telegraf/plugins/inputs/ntpq"
	_ "github.com/influxdata/telegraf/plugins/inputs/passenger"
@@ -65,6 +70,7 @@ import (
	_ "github.com/influxdata/telegraf/plugins/inputs/twemproxy"
	_ "github.com/influxdata/telegraf/plugins/inputs/udp_listener"
	_ "github.com/influxdata/telegraf/plugins/inputs/varnish"
	_ "github.com/influxdata/telegraf/plugins/inputs/webhooks"
	_ "github.com/influxdata/telegraf/plugins/inputs/win_perf_counters"
	_ "github.com/influxdata/telegraf/plugins/inputs/zfs"
	_ "github.com/influxdata/telegraf/plugins/inputs/zookeeper"

@@ -1,7 +1,7 @@
# Telegraf plugin: Apache

#### Plugin arguments:
- **urls** []string: List of apache-status URLs to collect from.
- **urls** []string: List of apache-status URLs to collect from. Default is "http://localhost/server-status?auto".

#### Description

@@ -8,7 +8,6 @@ import (
	"net/url"
	"strconv"
	"strings"
	"sync"
	"time"

	"github.com/influxdata/telegraf"
@@ -21,6 +20,7 @@ type Apache struct {

var sampleConfig = `
  ## An array of Apache status URI to gather stats.
  ## Default is "http://localhost/server-status?auto".
  urls = ["http://localhost/server-status?auto"]
`

@@ -33,8 +33,12 @@ func (n *Apache) Description() string {
}

func (n *Apache) Gather(acc telegraf.Accumulator) error {
	var wg sync.WaitGroup
	if len(n.Urls) == 0 {
		n.Urls = []string{"http://localhost/server-status?auto"}
	}

	var outerr error
	var errch = make(chan error)

	for _, u := range n.Urls {
		addr, err := url.Parse(u)
@@ -42,14 +46,17 @@ func (n *Apache) Gather(acc telegraf.Accumulator) error {
			return fmt.Errorf("Unable to parse address '%s': %s", u, err)
		}

		wg.Add(1)
		go func(addr *url.URL) {
			defer wg.Done()
			outerr = n.gatherUrl(addr, acc)
			errch <- n.gatherUrl(addr, acc)
		}(addr)
	}

	wg.Wait()
	// Drain channel, waiting for all requests to finish and save last error.
	for range n.Urls {
		if err := <-errch; err != nil {
			outerr = err
		}
	}

	return outerr
}

@@ -36,7 +36,8 @@ func TestHTTPApache(t *testing.T) {
	defer ts.Close()

	a := Apache{
		Urls: []string{ts.URL},
		// Fetch it 2 times to catch possible data races.
		Urls: []string{ts.URL, ts.URL},
	}

	var acc testutil.Accumulator

@@ -148,7 +148,7 @@ func (c cassandraMetric) addTagsFields(out map[string]interface{}) {
	tokens := parseJmxMetricRequest(r.(map[string]interface{})["mbean"].(string))
	// Requests with wildcards for keyspace or table names will return nested
	// maps in the json response
	if tokens["type"] == "Table" && (tokens["keyspace"] == "*" ||
	if (tokens["type"] == "Table" || tokens["type"] == "ColumnFamily") && (tokens["keyspace"] == "*" ||
		tokens["scope"] == "*") {
		if valuesMap, ok := out["value"]; ok {
			for k, v := range valuesMap.(map[string]interface{}) {

59 plugins/inputs/cgroup/README.md (Normal file)
@@ -0,0 +1,59 @@
# CGroup Input Plugin For Telegraf Agent

This input plugin will capture specific statistics per cgroup.

The following file formats are supported (a short parsing sketch follows the list):

* Single value

```
VAL\n
```

* New line separated values

```
VAL0\n
VAL1\n
```

* Space separated values

```
VAL0 VAL1 ...\n
```

* New line separated key-space-value's

```
KEY0 VAL0\n
KEY1 VAL1\n
```

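To make the formats above concrete, here is a minimal, self-contained Go sketch (an illustration, not code from this change set) that parses the key-space-value format with the same regex approach the plugin source further down uses; the sample file contents and the `memory.stat` field prefix are assumptions for the example:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// numberOrString mirrors the plugin's behavior: numeric values become ints,
// anything else is kept as the raw string.
func numberOrString(s string) interface{} {
	if i, err := strconv.Atoi(s); err == nil {
		return i
	}
	return s
}

func main() {
	// Hypothetical contents of a key-space-value file such as memory.stat.
	raw := "cache 1739362304\nrss 1775325184\n"
	re := regexp.MustCompile("([[:alpha:]_]+) ([\\d-]+)\n")
	fields := map[string]interface{}{}
	for _, m := range re.FindAllStringSubmatch(raw, -1) {
		fields["memory.stat."+m[1]] = numberOrString(m[2])
	}
	fmt.Println(fields) // e.g. map[memory.stat.cache:1739362304 memory.stat.rss:1775325184]
}
```
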
### Tags:

Measurements don't have any specific tags unless you define them at the telegraf level (defaults). We
used to have the path listed as a tag, but to keep cardinality in check it's easier to move this
value to a field. Thanks @sebito91!


### Configuration:

```
# [[inputs.cgroup]]
  # paths = [
  #   "/cgroup/memory",           # root cgroup
  #   "/cgroup/memory/child1",    # container cgroup
  #   "/cgroup/memory/child2/*",  # all children cgroups under child2, but not child2 itself
  # ]
  # files = ["memory.*usage*", "memory.limit_in_bytes"]

# [[inputs.cgroup]]
  # paths = [
  #   "/cgroup/cpu",              # root cgroup
  #   "/cgroup/cpu/*",            # all container cgroups
  #   "/cgroup/cpu/*/*",          # all children cgroups under each container cgroup
  # ]
  # files = ["cpuacct.usage", "cpu.cfs_period_us", "cpu.cfs_quota_us"]
```

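The glob patterns in `paths` expand to directories before any files are read. A tiny, hypothetical demonstration of that expansion (the plugin's `generateDirs` in `cgroup_linux.go` below does the real work; the pattern here is made up):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Hypothetical pattern; in the plugin this comes from the 'paths' setting.
	pattern := "/sys/fs/cgroup/memory/*"
	items, err := filepath.Glob(pattern)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, item := range items {
		// Keep directories only, as the plugin does for cgroup paths.
		if fi, err := os.Stat(item); err == nil && fi.IsDir() {
			fmt.Println(item)
		}
	}
}
```
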
35 plugins/inputs/cgroup/cgroup.go (Normal file)
@@ -0,0 +1,35 @@
package cgroup

import (
	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/inputs"
)

type CGroup struct {
	Paths []string `toml:"paths"`
	Files []string `toml:"files"`
}

var sampleConfig = `
  ## Directories in which to look for files, globs are supported.
  # paths = [
  #   "/cgroup/memory",
  #   "/cgroup/memory/child1",
  #   "/cgroup/memory/child2/*",
  # ]
  ## cgroup stat fields, as file names, globs are supported.
  ## these file names are appended to each path from above.
  # files = ["memory.*usage*", "memory.limit_in_bytes"]
`

func (g *CGroup) SampleConfig() string {
	return sampleConfig
}

func (g *CGroup) Description() string {
	return "Read specific statistics per cgroup"
}

func init() {
	inputs.Add("cgroup", func() telegraf.Input { return &CGroup{} })
}
243 plugins/inputs/cgroup/cgroup_linux.go (Normal file)
@@ -0,0 +1,243 @@
// +build linux

package cgroup

import (
	"fmt"
	"io/ioutil"
	"os"
	"path"
	"path/filepath"
	"regexp"
	"strconv"

	"github.com/influxdata/telegraf"
)

const metricName = "cgroup"

func (g *CGroup) Gather(acc telegraf.Accumulator) error {
	list := make(chan pathInfo)
	go g.generateDirs(list)

	for dir := range list {
		if dir.err != nil {
			return dir.err
		}
		if err := g.gatherDir(dir.path, acc); err != nil {
			return err
		}
	}

	return nil
}

func (g *CGroup) gatherDir(dir string, acc telegraf.Accumulator) error {
	fields := make(map[string]interface{})

	list := make(chan pathInfo)
	go g.generateFiles(dir, list)

	for file := range list {
		if file.err != nil {
			return file.err
		}

		raw, err := ioutil.ReadFile(file.path)
		if err != nil {
			return err
		}
		if len(raw) == 0 {
			continue
		}

		fd := fileData{data: raw, path: file.path}
		if err := fd.parse(fields); err != nil {
			return err
		}
	}
	fields["path"] = dir

	acc.AddFields(metricName, fields, nil)

	return nil
}

// ======================================================================

type pathInfo struct {
	path string
	err  error
}

func isDir(path string) (bool, error) {
	result, err := os.Stat(path)
	if err != nil {
		return false, err
	}
	return result.IsDir(), nil
}

func (g *CGroup) generateDirs(list chan<- pathInfo) {
	for _, dir := range g.Paths {
		// getting all dirs that match the pattern 'dir'
		items, err := filepath.Glob(dir)
		if err != nil {
			list <- pathInfo{err: err}
			return
		}

		for _, item := range items {
			ok, err := isDir(item)
			if err != nil {
				list <- pathInfo{err: err}
				return
			}
			// supply only dirs
			if ok {
				list <- pathInfo{path: item}
			}
		}
	}
	close(list)
}

func (g *CGroup) generateFiles(dir string, list chan<- pathInfo) {
	for _, file := range g.Files {
		// getting all file paths that match the pattern 'dir + file'
		// path.Base makes sure that the file variable does not contain a path component
		items, err := filepath.Glob(path.Join(dir, path.Base(file)))
		if err != nil {
			list <- pathInfo{err: err}
			return
		}

		for _, item := range items {
			ok, err := isDir(item)
			if err != nil {
				list <- pathInfo{err: err}
				return
			}
			// supply only files, not dirs
			if !ok {
				list <- pathInfo{path: item}
			}
		}
	}
	close(list)
}

// ======================================================================

type fileData struct {
	data []byte
	path string
}

func (fd *fileData) format() (*fileFormat, error) {
	for _, ff := range fileFormats {
		ok, err := ff.match(fd.data)
		if err != nil {
			return nil, err
		}
		if ok {
			return &ff, nil
		}
	}

	return nil, fmt.Errorf("%v: unknown file format", fd.path)
}

func (fd *fileData) parse(fields map[string]interface{}) error {
	format, err := fd.format()
	if err != nil {
		return err
	}

	format.parser(filepath.Base(fd.path), fields, fd.data)
	return nil
}

// ======================================================================

type fileFormat struct {
	name    string
	pattern string
	parser  func(measurement string, fields map[string]interface{}, b []byte)
}

const keyPattern = "[[:alpha:]_]+"
const valuePattern = "[\\d-]+"

var fileFormats = [...]fileFormat{
	// VAL\n
	fileFormat{
		name:    "Single value",
		pattern: "^" + valuePattern + "\n$",
		parser: func(measurement string, fields map[string]interface{}, b []byte) {
			re := regexp.MustCompile("^(" + valuePattern + ")\n$")
			matches := re.FindAllStringSubmatch(string(b), -1)
			fields[measurement] = numberOrString(matches[0][1])
		},
	},
	// VAL0\n
	// VAL1\n
	// ...
	fileFormat{
		name:    "New line separated values",
		pattern: "^(" + valuePattern + "\n){2,}$",
		parser: func(measurement string, fields map[string]interface{}, b []byte) {
			re := regexp.MustCompile("(" + valuePattern + ")\n")
			matches := re.FindAllStringSubmatch(string(b), -1)
			for i, v := range matches {
				fields[measurement+"."+strconv.Itoa(i)] = numberOrString(v[1])
			}
		},
	},
	// VAL0 VAL1 ...\n
	fileFormat{
		name:    "Space separated values",
		pattern: "^(" + valuePattern + " )+\n$",
		parser: func(measurement string, fields map[string]interface{}, b []byte) {
			re := regexp.MustCompile("(" + valuePattern + ") ")
			matches := re.FindAllStringSubmatch(string(b), -1)
			for i, v := range matches {
				fields[measurement+"."+strconv.Itoa(i)] = numberOrString(v[1])
			}
		},
	},
	// KEY0 VAL0\n
	// KEY1 VAL1\n
	// ...
	fileFormat{
		name:    "New line separated key-space-value's",
		pattern: "^(" + keyPattern + " " + valuePattern + "\n)+$",
		parser: func(measurement string, fields map[string]interface{}, b []byte) {
			re := regexp.MustCompile("(" + keyPattern + ") (" + valuePattern + ")\n")
			matches := re.FindAllStringSubmatch(string(b), -1)
			for _, v := range matches {
				fields[measurement+"."+v[1]] = numberOrString(v[2])
			}
		},
	},
}

func numberOrString(s string) interface{} {
	i, err := strconv.Atoi(s)
	if err == nil {
		return i
	}

	return s
}

func (f fileFormat) match(b []byte) (bool, error) {
	ok, err := regexp.Match(f.pattern, b)
	if err != nil {
		return false, err
	}
	if ok {
		return true, nil
	}
	return false, nil
}
11 plugins/inputs/cgroup/cgroup_notlinux.go (Normal file)
@@ -0,0 +1,11 @@
// +build !linux

package cgroup

import (
	"github.com/influxdata/telegraf"
)

func (g *CGroup) Gather(acc telegraf.Accumulator) error {
	return nil
}
194 plugins/inputs/cgroup/cgroup_test.go (Normal file)
@@ -0,0 +1,194 @@
// +build linux

package cgroup

import (
	"fmt"
	"testing"

	"github.com/influxdata/telegraf/testutil"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
	"reflect"
)

var cg1 = &CGroup{
	Paths: []string{"testdata/memory"},
	Files: []string{
		"memory.empty",
		"memory.max_usage_in_bytes",
		"memory.limit_in_bytes",
		"memory.stat",
		"memory.use_hierarchy",
		"notify_on_release",
	},
}

func assertContainsFields(a *testutil.Accumulator, t *testing.T, measurement string, fieldSet []map[string]interface{}) {
	a.Lock()
	defer a.Unlock()

	numEquals := 0
	for _, p := range a.Metrics {
		if p.Measurement == measurement {
			for _, fields := range fieldSet {
				if reflect.DeepEqual(fields, p.Fields) {
					numEquals++
				}
			}
		}
	}

	if numEquals != len(fieldSet) {
		assert.Fail(t, fmt.Sprintf("only %d of %d are equal", numEquals, len(fieldSet)))
	}
}

func TestCgroupStatistics_1(t *testing.T) {
	var acc testutil.Accumulator

	err := cg1.Gather(&acc)
	require.NoError(t, err)

	fields := map[string]interface{}{
		"memory.stat.cache":           1739362304123123123,
		"memory.stat.rss":             1775325184,
		"memory.stat.rss_huge":        778043392,
		"memory.stat.mapped_file":     421036032,
		"memory.stat.dirty":           -307200,
		"memory.max_usage_in_bytes.0": 0,
		"memory.max_usage_in_bytes.1": -1,
		"memory.max_usage_in_bytes.2": 2,
		"memory.limit_in_bytes":       223372036854771712,
		"memory.use_hierarchy":        "12-781",
		"notify_on_release":           0,
		"path":                        "testdata/memory",
	}
	assertContainsFields(&acc, t, "cgroup", []map[string]interface{}{fields})
}

// ======================================================================

var cg2 = &CGroup{
	Paths: []string{"testdata/cpu"},
	Files: []string{"cpuacct.usage_percpu"},
}

func TestCgroupStatistics_2(t *testing.T) {
	var acc testutil.Accumulator

	err := cg2.Gather(&acc)
	require.NoError(t, err)

	fields := map[string]interface{}{
		"cpuacct.usage_percpu.0": -1452543795404,
		"cpuacct.usage_percpu.1": 1376681271659,
		"cpuacct.usage_percpu.2": 1450950799997,
		"cpuacct.usage_percpu.3": -1473113374257,
		"path":                   "testdata/cpu",
	}
	assertContainsFields(&acc, t, "cgroup", []map[string]interface{}{fields})
}

// ======================================================================

var cg3 = &CGroup{
	Paths: []string{"testdata/memory/*"},
	Files: []string{"memory.limit_in_bytes"},
}

func TestCgroupStatistics_3(t *testing.T) {
	var acc testutil.Accumulator

	err := cg3.Gather(&acc)
	require.NoError(t, err)

	fields := map[string]interface{}{
		"memory.limit_in_bytes": 223372036854771712,
		"path":                  "testdata/memory/group_1",
	}

	fieldsTwo := map[string]interface{}{
		"memory.limit_in_bytes": 223372036854771712,
		"path":                  "testdata/memory/group_2",
	}
	assertContainsFields(&acc, t, "cgroup", []map[string]interface{}{fields, fieldsTwo})
}

// ======================================================================

var cg4 = &CGroup{
	Paths: []string{"testdata/memory/*/*", "testdata/memory/group_2"},
	Files: []string{"memory.limit_in_bytes"},
}

func TestCgroupStatistics_4(t *testing.T) {
	var acc testutil.Accumulator

	err := cg4.Gather(&acc)
	require.NoError(t, err)

	fields := map[string]interface{}{
		"memory.limit_in_bytes": 223372036854771712,
		"path":                  "testdata/memory/group_1/group_1_1",
	}

	fieldsTwo := map[string]interface{}{
		"memory.limit_in_bytes": 223372036854771712,
		"path":                  "testdata/memory/group_1/group_1_2",
	}

	fieldsThree := map[string]interface{}{
		"memory.limit_in_bytes": 223372036854771712,
		"path":                  "testdata/memory/group_2",
	}

	assertContainsFields(&acc, t, "cgroup", []map[string]interface{}{fields, fieldsTwo, fieldsThree})
}

// ======================================================================

var cg5 = &CGroup{
	Paths: []string{"testdata/memory/*/group_1_1"},
	Files: []string{"memory.limit_in_bytes"},
}

func TestCgroupStatistics_5(t *testing.T) {
	var acc testutil.Accumulator

	err := cg5.Gather(&acc)
	require.NoError(t, err)

	fields := map[string]interface{}{
		"memory.limit_in_bytes": 223372036854771712,
		"path":                  "testdata/memory/group_1/group_1_1",
	}

	fieldsTwo := map[string]interface{}{
		"memory.limit_in_bytes": 223372036854771712,
		"path":                  "testdata/memory/group_2/group_1_1",
	}
	assertContainsFields(&acc, t, "cgroup", []map[string]interface{}{fields, fieldsTwo})
}

// ======================================================================

var cg6 = &CGroup{
	Paths: []string{"testdata/memory"},
	Files: []string{"memory.us*", "*/memory.kmem.*"},
}

func TestCgroupStatistics_6(t *testing.T) {
	var acc testutil.Accumulator

	err := cg6.Gather(&acc)
	require.NoError(t, err)

	fields := map[string]interface{}{
		"memory.usage_in_bytes":      3513667584,
		"memory.use_hierarchy":       "12-781",
		"memory.kmem.limit_in_bytes": 9223372036854771712,
		"path":                       "testdata/memory",
	}
	assertContainsFields(&acc, t, "cgroup", []map[string]interface{}{fields})
}
1 plugins/inputs/cgroup/testdata/blkio/blkio.io_serviced (vendored, Normal file)
@@ -0,0 +1 @@
Total 0
131 plugins/inputs/cgroup/testdata/blkio/blkio.throttle.io_serviced (vendored, Normal file)
@@ -0,0 +1,131 @@
11:0 Read 0
11:0 Write 0
11:0 Sync 0
11:0 Async 0
11:0 Total 0
8:0 Read 49134
8:0 Write 216703
8:0 Sync 177906
8:0 Async 87931
8:0 Total 265837
7:7 Read 0
7:7 Write 0
7:7 Sync 0
7:7 Async 0
7:7 Total 0
7:6 Read 0
7:6 Write 0
7:6 Sync 0
7:6 Async 0
7:6 Total 0
7:5 Read 0
7:5 Write 0
7:5 Sync 0
7:5 Async 0
7:5 Total 0
7:4 Read 0
7:4 Write 0
7:4 Sync 0
7:4 Async 0
7:4 Total 0
7:3 Read 0
7:3 Write 0
7:3 Sync 0
7:3 Async 0
7:3 Total 0
7:2 Read 0
7:2 Write 0
7:2 Sync 0
7:2 Async 0
7:2 Total 0
7:1 Read 0
7:1 Write 0
7:1 Sync 0
7:1 Async 0
7:1 Total 0
7:0 Read 0
7:0 Write 0
7:0 Sync 0
7:0 Async 0
7:0 Total 0
1:15 Read 3
1:15 Write 0
1:15 Sync 0
1:15 Async 3
1:15 Total 3
1:14 Read 3
1:14 Write 0
1:14 Sync 0
1:14 Async 3
1:14 Total 3
1:13 Read 3
1:13 Write 0
1:13 Sync 0
1:13 Async 3
1:13 Total 3
1:12 Read 3
1:12 Write 0
1:12 Sync 0
1:12 Async 3
1:12 Total 3
1:11 Read 3
1:11 Write 0
1:11 Sync 0
1:11 Async 3
1:11 Total 3
1:10 Read 3
1:10 Write 0
1:10 Sync 0
1:10 Async 3
1:10 Total 3
1:9 Read 3
1:9 Write 0
1:9 Sync 0
1:9 Async 3
1:9 Total 3
1:8 Read 3
1:8 Write 0
1:8 Sync 0
1:8 Async 3
1:8 Total 3
1:7 Read 3
1:7 Write 0
1:7 Sync 0
1:7 Async 3
1:7 Total 3
1:6 Read 3
1:6 Write 0
1:6 Sync 0
1:6 Async 3
1:6 Total 3
1:5 Read 3
1:5 Write 0
1:5 Sync 0
1:5 Async 3
1:5 Total 3
1:4 Read 3
1:4 Write 0
1:4 Sync 0
1:4 Async 3
1:4 Total 3
1:3 Read 3
1:3 Write 0
1:3 Sync 0
1:3 Async 3
1:3 Total 3
1:2 Read 3
1:2 Write 0
1:2 Sync 0
1:2 Async 3
1:2 Total 3
1:1 Read 3
1:1 Write 0
1:1 Sync 0
1:1 Async 3
1:1 Total 3
1:0 Read 3
1:0 Write 0
1:0 Sync 0
1:0 Async 3
1:0 Total 3
Total 265885
1 plugins/inputs/cgroup/testdata/cpu/cpu.cfs_quota_us (vendored, Normal file)
@@ -0,0 +1 @@
-1
1 plugins/inputs/cgroup/testdata/cpu/cpuacct.usage_percpu (vendored, Normal file)
@@ -0,0 +1 @@
-1452543795404 1376681271659 1450950799997 -1473113374257
1 plugins/inputs/cgroup/testdata/memory/group_1/group_1_1/memory.limit_in_bytes (vendored, Normal file)
@@ -0,0 +1 @@
223372036854771712
5 plugins/inputs/cgroup/testdata/memory/group_1/group_1_1/memory.stat (vendored, Normal file)
@@ -0,0 +1,5 @@
cache 1739362304123123123
rss 1775325184
rss_huge 778043392
mapped_file 421036032
dirty -307200
1 plugins/inputs/cgroup/testdata/memory/group_1/group_1_2/memory.limit_in_bytes (vendored, Normal file)
@@ -0,0 +1 @@
223372036854771712
5 plugins/inputs/cgroup/testdata/memory/group_1/group_1_2/memory.stat (vendored, Normal file)
@@ -0,0 +1,5 @@
cache 1739362304123123123
rss 1775325184
rss_huge 778043392
mapped_file 421036032
dirty -307200
1 plugins/inputs/cgroup/testdata/memory/group_1/memory.kmem.limit_in_bytes (vendored, Normal file)
@@ -0,0 +1 @@
9223372036854771712
1 plugins/inputs/cgroup/testdata/memory/group_1/memory.kmem.max_usage_in_bytes (vendored, Normal file)
@@ -0,0 +1 @@
0
1 plugins/inputs/cgroup/testdata/memory/group_1/memory.limit_in_bytes (vendored, Normal file)
@@ -0,0 +1 @@
223372036854771712
5 plugins/inputs/cgroup/testdata/memory/group_1/memory.stat (vendored, Normal file)
@@ -0,0 +1,5 @@
cache 1739362304123123123
rss 1775325184
rss_huge 778043392
mapped_file 421036032
dirty -307200
1 plugins/inputs/cgroup/testdata/memory/group_2/group_1_1/memory.limit_in_bytes (vendored, Normal file)
@@ -0,0 +1 @@
223372036854771712
5 plugins/inputs/cgroup/testdata/memory/group_2/group_1_1/memory.stat (vendored, Normal file)
@@ -0,0 +1,5 @@
cache 1739362304123123123
rss 1775325184
rss_huge 778043392
mapped_file 421036032
dirty -307200
1 plugins/inputs/cgroup/testdata/memory/group_2/memory.limit_in_bytes (vendored, Normal file)
@@ -0,0 +1 @@
223372036854771712
5 plugins/inputs/cgroup/testdata/memory/group_2/memory.stat (vendored, Normal file)
@@ -0,0 +1,5 @@
cache 1739362304123123123
rss 1775325184
rss_huge 778043392
mapped_file 421036032
dirty -307200
0 plugins/inputs/cgroup/testdata/memory/memory.empty (vendored, Normal file)
1 plugins/inputs/cgroup/testdata/memory/memory.kmem.limit_in_bytes (vendored, Normal file)
@@ -0,0 +1 @@
9223372036854771712
1 plugins/inputs/cgroup/testdata/memory/memory.limit_in_bytes (vendored, Normal file)
@@ -0,0 +1 @@
223372036854771712
3 plugins/inputs/cgroup/testdata/memory/memory.max_usage_in_bytes (vendored, Normal file)
@@ -0,0 +1,3 @@
0
-1
2
8 plugins/inputs/cgroup/testdata/memory/memory.numa_stat (vendored, Normal file)
@@ -0,0 +1,8 @@
total=858067 N0=858067
file=406254 N0=406254
anon=451792 N0=451792
unevictable=21 N0=21
hierarchical_total=858067 N0=858067
hierarchical_file=406254 N0=406254
hierarchical_anon=451792 N0=451792
hierarchical_unevictable=21 N0=21
5 plugins/inputs/cgroup/testdata/memory/memory.stat (vendored, Normal file)
@@ -0,0 +1,5 @@
cache 1739362304123123123
rss 1775325184
rss_huge 778043392
mapped_file 421036032
dirty -307200
1 plugins/inputs/cgroup/testdata/memory/memory.usage_in_bytes (vendored, Normal file)
@@ -0,0 +1 @@
3513667584
1 plugins/inputs/cgroup/testdata/memory/memory.use_hierarchy (vendored, Normal file)
@@ -0,0 +1 @@
12-781
1 plugins/inputs/cgroup/testdata/memory/notify_on_release (vendored, Normal file)
@@ -0,0 +1 @@
0
@@ -40,7 +40,7 @@ is computed for the new frequency, with weights depending on these accuracies. If
measurements from the reference source follow a consistent trend, the residual will be
driven to zero over time.
- Skew - This is the estimated error bound on the frequency.
- Root delay -This is the total of the network path delays to the stratum-1 computer
- Root delay - This is the total of the network path delays to the stratum-1 computer
from which the computer is ultimately synchronised. In certain extreme situations, this
value can be negative. (This can arise in a symmetric peer arrangement where the computers’
frequencies are not tracking each other and the network delay is very short relative to the
@@ -56,7 +56,8 @@ Delete second or Not synchronised.
```toml
# Get standard chrony metrics, requires chronyc executable.
[[inputs.chrony]]
  # no configuration
  ## If true, chronyc tries to perform a DNS lookup for the time server.
  # dns_lookup = false
```

### Measurements & Fields:

@@ -20,7 +20,8 @@ var (
)

type Chrony struct {
	path string
	DNSLookup bool `toml:"dns_lookup"`
	path      string
}

func (*Chrony) Description() string {
@@ -28,14 +29,24 @@ func (*Chrony) Description() string {
}

func (*Chrony) SampleConfig() string {
	return ""
	return `
  ## If true, chronyc tries to perform a DNS lookup for the time server.
  # dns_lookup = false
`
}

func (c *Chrony) Gather(acc telegraf.Accumulator) error {
	if len(c.path) == 0 {
		return errors.New("chronyc not found: verify that chrony is installed and that chronyc is in your PATH")
	}
	cmd := execCommand(c.path, "tracking")

	flags := []string{}
	if !c.DNSLookup {
		flags = append(flags, "-n")
	}
	flags = append(flags, "tracking")

	cmd := execCommand(c.path, flags...)
	out, err := internal.CombinedOutputTimeout(cmd, time.Second*5)
	if err != nil {
		return fmt.Errorf("failed to run command %s: %s - %s", strings.Join(cmd.Args, " "), err, string(out))

@@ -42,6 +42,15 @@ func TestGather(t *testing.T) {
	}

	acc.AssertContainsTaggedFields(t, "chrony", fields, tags)

	// test with dns lookup
	c.DNSLookup = true
	err = c.Gather(&acc)
	if err != nil {
		t.Fatal(err)
	}
	acc.AssertContainsTaggedFields(t, "chrony", fields, tags)

}

// fakeExecCommand is a helper function that mocks
@@ -63,8 +72,9 @@ func TestHelperProcess(t *testing.T) {
		return
	}

	mockData := `Reference ID : 192.168.1.22 (ntp.example.com)
Stratum : 3
	lookup := "Reference ID : 192.168.1.22 (ntp.example.com)\n"
	noLookup := "Reference ID : 192.168.1.22 (192.168.1.22)\n"
	mockData := `Stratum : 3
Ref time (UTC) : Thu May 12 14:27:07 2016
System time : 0.000020390 seconds fast of NTP time
Last offset : +0.000012651 seconds
@@ -84,8 +94,12 @@ Leap status : Normal
	// /tmp/go-build970079519/…/_test/integration.test -test.run=TestHelperProcess --
	cmd, args := args[3], args[4:]

	if cmd == "chronyc" && args[0] == "tracking" {
		fmt.Fprint(os.Stdout, mockData)
	if cmd == "chronyc" {
		if args[0] == "tracking" {
			fmt.Fprint(os.Stdout, lookup+mockData)
		} else {
			fmt.Fprint(os.Stdout, noLookup+mockData)
		}
	} else {
		fmt.Fprint(os.Stdout, "command not found")
		os.Exit(1)

@@ -6,9 +6,12 @@ This plugin will pull Metric Statistics from Amazon CloudWatch.

This plugin uses a credential chain for Authentication with the CloudWatch
API endpoint. In the following order the plugin will attempt to authenticate.
1. [IAM Role](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)
2. [Environment Variables](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables)
3. [Shared Credentials](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file)
1. Assumed credentials via STS if `role_arn` attribute is specified (source credentials are evaluated from subsequent rules)
2. Explicit credentials from `access_key`, `secret_key`, and `token` attributes
3. Shared profile from `profile` attribute
4. [Environment Variables](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#environment-variables)
5. [Shared Credentials](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file)
6. [EC2 Instance Profile](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)

### Configuration:

@@ -24,7 +27,7 @@ API endpoint. In the following order the plugin will attempt to authenticate.
  delay = '1m'

  ## Override global run interval (optional - defaults to global interval)
  ## Recommended: use metric 'interval' that is a multiple of 'period' to avoid
  ## gaps or overlap in pulled data
  interval = '1m'

@@ -36,11 +39,15 @@ API endpoint. In the following order the plugin will attempt to authenticate.
  ## Refreshes Namespace available metrics every 1h
  [[inputs.cloudwatch.metrics]]
    names = ['Latency', 'RequestCount']

    ## Dimension filters for Metric (optional)
    [[inputs.cloudwatch.metrics.dimensions]]
      name = 'LoadBalancerName'
      value = 'p-example'

    [[inputs.cloudwatch.metrics.dimensions]]
      name = 'AvailabilityZone'
      value = '*'
```
#### Requirements and Terminology

@@ -52,6 +59,39 @@ Plugin Configuration utilizes [CloudWatch concepts](http://docs.aws.amazon.com/A
- `names` must be valid CloudWatch [Metric](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#Metric) names
- `dimensions` must be valid CloudWatch [Dimension](http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_concepts.html#Dimension) name/value pairs

Omitting or specifying a value of `'*'` for a dimension value configures all available metrics that contain a dimension with the specified name
to be retrieved. If specifying >1 dimension, then the metric must contain *all* the configured dimensions, where the value of the
wildcard dimension is ignored.

Example:
```
[[inputs.cloudwatch.metrics]]
  names = ['Latency']

  ## Dimension filters for Metric (optional)
  [[inputs.cloudwatch.metrics.dimensions]]
    name = 'LoadBalancerName'
    value = 'p-example'

  [[inputs.cloudwatch.metrics.dimensions]]
    name = 'AvailabilityZone'
    value = '*'
```

If the following ELBs are available:
- name: `p-example`, availabilityZone: `us-east-1a`
- name: `p-example`, availabilityZone: `us-east-1b`
- name: `q-example`, availabilityZone: `us-east-1a`
- name: `q-example`, availabilityZone: `us-east-1b`

Then 2 metrics will be output:
- name: `p-example`, availabilityZone: `us-east-1a`
- name: `p-example`, availabilityZone: `us-east-1b`

If the `AvailabilityZone` wildcard dimension was omitted, then a single metric (name: `p-example`)
would be exported containing the aggregate values of the ELB across availability zones.

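As a self-contained illustration of this selection rule (a sketch with made-up types, not the plugin's actual code), the following program reproduces the ELB example above: with `AvailabilityZone` as a wildcard, only the `p-example` metrics are selected:

```go
package main

import "fmt"

type dim struct{ Name, Value string }

// isSelected applies the rule described above: a metric matches only when it
// carries exactly the configured dimensions, with "" or "*" matching any value.
func isSelected(metric, configured []dim) bool {
	if len(metric) != len(configured) {
		return false
	}
	for _, c := range configured {
		ok := false
		for _, m := range metric {
			if c.Name == m.Name && (c.Value == "" || c.Value == "*" || c.Value == m.Value) {
				ok = true
			}
		}
		if !ok {
			return false
		}
	}
	return true
}

func main() {
	conf := []dim{{"LoadBalancerName", "p-example"}, {"AvailabilityZone", "*"}}
	p := []dim{{"LoadBalancerName", "p-example"}, {"AvailabilityZone", "us-east-1a"}}
	q := []dim{{"LoadBalancerName", "q-example"}, {"AvailabilityZone", "us-east-1a"}}
	fmt.Println(isSelected(p, conf), isSelected(q, conf)) // true false
}
```
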
#### Restrictions and Limitations
- CloudWatch metrics are not available instantly via the CloudWatch API. You should adjust your collection `delay` to account for this lag in metrics availability based on your [monitoring subscription level](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.html)
- CloudWatch API usage incurs cost - see [GetMetricStatistics Pricing](https://aws.amazon.com/cloudwatch/pricing/)

@@ -3,28 +3,36 @@ package cloudwatch
import (
	"fmt"
	"strings"
	"sync"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"

	"github.com/aws/aws-sdk-go/service/cloudwatch"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/internal"
	internalaws "github.com/influxdata/telegraf/internal/config/aws"
	"github.com/influxdata/telegraf/internal/errchan"
	"github.com/influxdata/telegraf/internal/limiter"
	"github.com/influxdata/telegraf/plugins/inputs"
)

type (
	CloudWatch struct {
		Region    string `toml:"region"`
		AccessKey string `toml:"access_key"`
		SecretKey string `toml:"secret_key"`
		Region    string `toml:"region"`
		AccessKey string `toml:"access_key"`
		SecretKey string `toml:"secret_key"`
		RoleARN   string `toml:"role_arn"`
		Profile   string `toml:"profile"`
		Filename  string `toml:"shared_credential_file"`
		Token     string `toml:"token"`

		Period      internal.Duration `toml:"period"`
		Delay       internal.Duration `toml:"delay"`
		Namespace   string            `toml:"namespace"`
		Metrics     []*Metric         `toml:"metrics"`
		CacheTTL    internal.Duration `toml:"cache_ttl"`
		client      cloudwatchClient
		metricCache *MetricCache
	}
@@ -58,12 +66,18 @@ func (c *CloudWatch) SampleConfig() string {

  ## Amazon Credentials
  ## Credentials are loaded in the following order
  ## 1) explicit credentials from 'access_key' and 'secret_key'
  ## 2) environment variables
  ## 3) shared credentials file
  ## 4) EC2 Instance Profile
  ## 1) Assumed credentials via STS if role_arn is specified
  ## 2) explicit credentials from 'access_key' and 'secret_key'
  ## 3) shared profile from 'profile'
  ## 4) environment variables
  ## 5) shared credentials file
  ## 6) EC2 Instance Profile
  #access_key = ""
  #secret_key = ""
  #token = ""
  #role_arn = ""
  #profile = ""
  #shared_credential_file = ""

  ## Requested CloudWatch aggregation Period (required - must be a multiple of 60s)
  period = '1m'
@@ -75,6 +89,10 @@ func (c *CloudWatch) SampleConfig() string {
  ## gaps or overlap in pulled data
  interval = '1m'

  ## Configure the TTL for the internal cache of metrics.
  ## Defaults to 1 hr if not specified
  #cache_ttl = '10m'

  ## Metric Statistic Namespace (required)
  namespace = 'AWS/ELB'

@@ -106,20 +124,40 @@ func (c *CloudWatch) Gather(acc telegraf.Accumulator) error {
	if c.Metrics != nil {
		metrics = []*cloudwatch.Metric{}
		for _, m := range c.Metrics {
			dimensions := make([]*cloudwatch.Dimension, len(m.Dimensions))
			for k, d := range m.Dimensions {
				dimensions[k] = &cloudwatch.Dimension{
					Name:  aws.String(d.Name),
					Value: aws.String(d.Value),
			if !hasWilcard(m.Dimensions) {
				dimensions := make([]*cloudwatch.Dimension, len(m.Dimensions))
				for k, d := range m.Dimensions {
					fmt.Printf("Dimension [%s]:[%s]\n", d.Name, d.Value)
					dimensions[k] = &cloudwatch.Dimension{
						Name:  aws.String(d.Name),
						Value: aws.String(d.Value),
					}
				}
				for _, name := range m.MetricNames {
					metrics = append(metrics, &cloudwatch.Metric{
						Namespace:  aws.String(c.Namespace),
						MetricName: aws.String(name),
						Dimensions: dimensions,
					})
				}
			} else {
				allMetrics, err := c.fetchNamespaceMetrics()
				if err != nil {
					return err
				}
				for _, name := range m.MetricNames {
					for _, metric := range allMetrics {
						if isSelected(metric, m.Dimensions) {
							metrics = append(metrics, &cloudwatch.Metric{
								Namespace:  aws.String(c.Namespace),
								MetricName: aws.String(name),
								Dimensions: metric.Dimensions,
							})
						}
					}
				}
			}
			for _, name := range m.MetricNames {
				metrics = append(metrics, &cloudwatch.Metric{
					Namespace:  aws.String(c.Namespace),
					MetricName: aws.String(name),
					Dimensions: dimensions,
				})
			}

		}
	} else {
		var err error
@@ -130,30 +168,35 @@ func (c *CloudWatch) Gather(acc telegraf.Accumulator) error {
	}

	metricCount := len(metrics)
	var errChan = make(chan error, metricCount)
	errChan := errchan.New(metricCount)

	now := time.Now()

	// limit concurrency or we can easily exhaust user connection limit
	semaphore := make(chan byte, 64)

	// see cloudwatch API request limits:
	// http://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/cloudwatch_limits.html
	lmtr := limiter.NewRateLimiter(10, time.Second)
	defer lmtr.Stop()
	var wg sync.WaitGroup
	wg.Add(len(metrics))
	for _, m := range metrics {
		semaphore <- 0x1
		go c.gatherMetric(acc, m, now, semaphore, errChan)
		<-lmtr.C
		go func(inm *cloudwatch.Metric) {
			defer wg.Done()
			c.gatherMetric(acc, inm, now, errChan.C)
		}(m)
	}
	wg.Wait()

	for i := 1; i <= metricCount; i++ {
		err := <-errChan
		if err != nil {
			return err
		}
	}
	return nil
	return errChan.Error()
}

func init() {
	inputs.Add("cloudwatch", func() telegraf.Input {
		return &CloudWatch{}
		ttl, _ := time.ParseDuration("1hr")
		return &CloudWatch{
			CacheTTL: internal.Duration{Duration: ttl},
		}
	})
}

@@ -161,14 +204,18 @@ func init() {
 * Initialize CloudWatch client
 */
func (c *CloudWatch) initializeCloudWatch() error {
	config := &aws.Config{
		Region: aws.String(c.Region),
	}
	if c.AccessKey != "" || c.SecretKey != "" {
		config.Credentials = credentials.NewStaticCredentials(c.AccessKey, c.SecretKey, "")
	credentialConfig := &internalaws.CredentialConfig{
		Region:    c.Region,
		AccessKey: c.AccessKey,
		SecretKey: c.SecretKey,
		RoleARN:   c.RoleARN,
		Profile:   c.Profile,
		Filename:  c.Filename,
		Token:     c.Token,
	}
	configProvider := credentialConfig.Credentials()

	c.client = cloudwatch.New(session.New(config))
	c.client = cloudwatch.New(configProvider)
	return nil
}

@@ -203,11 +250,10 @@ func (c *CloudWatch) fetchNamespaceMetrics() (metrics []*cloudwatch.Metric, err
		more = token != nil
	}

	cacheTTL, _ := time.ParseDuration("1hr")
	c.metricCache = &MetricCache{
		Metrics: metrics,
		Fetched: time.Now(),
		TTL:     cacheTTL,
		TTL:     c.CacheTTL.Duration,
	}

	return
@@ -216,12 +262,16 @@ func (c *CloudWatch) fetchNamespaceMetrics() (metrics []*cloudwatch.Metric, err
/*
 * Gather given Metric and emit any error
 */
func (c *CloudWatch) gatherMetric(acc telegraf.Accumulator, metric *cloudwatch.Metric, now time.Time, semaphore chan byte, errChan chan error) {
func (c *CloudWatch) gatherMetric(
	acc telegraf.Accumulator,
	metric *cloudwatch.Metric,
	now time.Time,
	errChan chan error,
) {
	params := c.getStatisticsInput(metric, now)
	resp, err := c.client.GetMetricStatistics(params)
	if err != nil {
		errChan <- err
		<-semaphore
		return
	}

@@ -258,7 +308,6 @@ func (c *CloudWatch) gatherMetric(acc telegraf.Accumulator, metric *cloudwatch.M
	}

	errChan <- nil
	<-semaphore
}

/*
@@ -309,3 +358,32 @@ func (c *CloudWatch) getStatisticsInput(metric *cloudwatch.Metric, now time.Time
func (c *MetricCache) IsValid() bool {
	return c.Metrics != nil && time.Since(c.Fetched) < c.TTL
}

func hasWilcard(dimensions []*Dimension) bool {
	for _, d := range dimensions {
		if d.Value == "" || d.Value == "*" {
			return true
		}
	}
	return false
}

func isSelected(metric *cloudwatch.Metric, dimensions []*Dimension) bool {
	if len(metric.Dimensions) != len(dimensions) {
		return false
	}
	for _, d := range dimensions {
		selected := false
		for _, d2 := range metric.Dimensions {
			if d.Name == *d2.Name {
				if d.Value == "" || d.Value == "*" || d.Value == *d2.Value {
					selected = true
				}
			}
		}
		if !selected {
			return false
		}
	}
	return true
}

56 plugins/inputs/conntrack/README.md (Normal file)
@@ -0,0 +1,56 @@
# Conntrack Plugin

Collects stats from Netfilter's conntrack-tools.

The conntrack-tools provide a mechanism for tracking various aspects of
network connections as they are processed by netfilter. At runtime,
conntrack exposes many of those connection statistics within /proc/sys/net.
Depending on your kernel version, these files can be found in either
/proc/sys/net/ipv4/netfilter or /proc/sys/net/netfilter and will be
prefixed with either ip_ or nf_. This plugin reads the files specified
in its configuration and publishes each one as a field, with the prefix
normalized to ip_.

In order to simplify configuration in a heterogeneous environment, a superset
of directory and filenames can be specified. Any locations that don't exist
will be ignored.

For more information on conntrack-tools, see the
[Netfilter Documentation](http://conntrack-tools.netfilter.org/).

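As a rough standalone sketch of the prefix normalization just described (illustrative only, with a hypothetical filename; this is not the plugin code, which appears below):

```go
package main

import (
	"fmt"
	"io/ioutil"
	"path/filepath"
	"strconv"
	"strings"
)

func main() {
	dir := "/proc/sys/net/netfilter"
	file := "nf_conntrack_count" // hypothetical example; ip_ files work the same way

	contents, err := ioutil.ReadFile(filepath.Join(dir, file))
	if err != nil {
		fmt.Println("skipping missing file:", err)
		return
	}
	// Normalize the nf_/ip_ prefix to ip_ before publishing the field.
	key := "ip_" + strings.SplitN(file, "_", 2)[1]
	v, _ := strconv.ParseFloat(strings.TrimSpace(string(contents)), 64)
	fmt.Printf("%s=%v\n", key, v)
}
```
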
### Configuration:

```toml
# Collects conntrack stats from the configured directories and files.
[[inputs.conntrack]]
  ## The following defaults would work with multiple versions of conntrack.
  ## Note the nf_ and ip_ filename prefixes are mutually exclusive across
  ## kernel versions, as are the directory locations.

  ## Superset of filenames to look for within the conntrack dirs.
  ## Missing files will be ignored.
  files = ["ip_conntrack_count","ip_conntrack_max",
           "nf_conntrack_count","nf_conntrack_max"]

  ## Directories to search within for the conntrack files above.
  ## Missing directories will be ignored.
  dirs = ["/proc/sys/net/ipv4/netfilter","/proc/sys/net/netfilter"]
```

### Measurements & Fields:

- conntrack
  - ip_conntrack_count (int, count): the number of entries in the conntrack table
  - ip_conntrack_max (int, size): the max capacity of the conntrack table

### Tags:

This input does not use tags.

### Example Output:

```
$ ./telegraf -config telegraf.conf -input-filter conntrack -test
conntrack,host=myhost ip_conntrack_count=2,ip_conntrack_max=262144 1461620427667995735
```
119
plugins/inputs/conntrack/conntrack.go
Normal file
119
plugins/inputs/conntrack/conntrack.go
Normal file
@@ -0,0 +1,119 @@
|
||||
// +build linux
|
||||
|
||||
package conntrack
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/influxdata/telegraf"
|
||||
"github.com/influxdata/telegraf/plugins/inputs"
|
||||
"log"
|
||||
"path/filepath"
|
||||
)
|
||||
|
||||
type Conntrack struct {
|
||||
Path string
|
||||
Dirs []string
|
||||
Files []string
|
||||
}
|
||||
|
||||
const (
|
||||
inputName = "conntrack"
|
||||
)
|
||||
|
||||
var dfltDirs = []string{
|
||||
"/proc/sys/net/ipv4/netfilter",
|
||||
"/proc/sys/net/netfilter",
|
||||
}
|
||||
|
||||
var dfltFiles = []string{
|
||||
"ip_conntrack_count",
|
||||
"ip_conntrack_max",
|
||||
"nf_conntrack_count",
|
||||
"nf_conntrack_max",
|
||||
}
|
||||
|
||||
func (c *Conntrack) setDefaults() {
|
||||
if len(c.Dirs) == 0 {
|
||||
c.Dirs = dfltDirs
|
||||
}
|
||||
|
||||
if len(c.Files) == 0 {
|
||||
c.Files = dfltFiles
|
||||
}
|
||||
}
|
||||
|
||||
func (c *Conntrack) Description() string {
|
||||
return "Collects conntrack stats from the configured directories and files."
|
||||
}
|
||||
|
||||
var sampleConfig = `
|
||||
## The following defaults would work with multiple versions of conntrack.
|
||||
## Note the nf_ and ip_ filename prefixes are mutually exclusive across
|
||||
## kernel versions, as are the directory locations.
|
||||
|
||||
## Superset of filenames to look for within the conntrack dirs.
|
||||
## Missing files will be ignored.
|
||||
files = ["ip_conntrack_count","ip_conntrack_max",
|
||||
"nf_conntrack_count","nf_conntrack_max"]
|
||||
|
||||
## Directories to search within for the conntrack files above.
|
||||
## Missing directories will be ignored.
|
||||
dirs = ["/proc/sys/net/ipv4/netfilter","/proc/sys/net/netfilter"]
|
||||
`
|
||||
|
||||
func (c *Conntrack) SampleConfig() string {
|
||||
return sampleConfig
|
||||
}
|
||||
|
||||
func (c *Conntrack) Gather(acc telegraf.Accumulator) error {
|
||||
c.setDefaults()
|
||||
|
||||
var metricKey string
|
||||
fields := make(map[string]interface{})
|
||||
|
||||
for _, dir := range c.Dirs {
|
||||
for _, file := range c.Files {
|
||||
// NOTE: no system will have both nf_ and ip_ prefixes,
|
||||
// so we're safe to branch on suffix only.
|
||||
parts := strings.SplitN(file, "_", 2)
|
||||
if len(parts) < 2 {
|
||||
continue
|
||||
}
|
||||
metricKey = "ip_" + parts[1]
|
||||
|
||||
fName := filepath.Join(dir, file)
|
||||
if _, err := os.Stat(fName); err != nil {
|
||||
continue
|
||||
}
|
||||
|
||||
contents, err := ioutil.ReadFile(fName)
|
||||
if err != nil {
|
||||
log.Printf("failed to read file '%s': %v", fName, err)
|
||||
}
|
||||
|
||||
v := strings.TrimSpace(string(contents))
|
||||
fields[metricKey], err = strconv.ParseFloat(v, 64)
|
||||
if err != nil {
|
||||
log.Printf("failed to parse metric, expected number but "+
|
||||
"found '%s': %v", v, err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if len(fields) == 0 {
|
||||
return fmt.Errorf("Conntrack input failed to collect metrics. " +
|
||||
"Is the conntrack kernel module loaded?")
|
||||
}
|
||||
|
||||
acc.AddFields(inputName, fields, nil)
|
||||
return nil
|
||||
}
|
||||
|
||||
func init() {
|
||||
inputs.Add(inputName, func() telegraf.Input { return &Conntrack{} })
|
||||
}
|
||||
3
plugins/inputs/conntrack/conntrack_notlinux.go
Normal file
@@ -0,0 +1,3 @@
|
||||
// +build !linux
|
||||
|
||||
package conntrack
|
||||
90
plugins/inputs/conntrack/conntrack_test.go
Normal file
@@ -0,0 +1,90 @@
|
||||
// +build linux
|
||||
|
||||
package conntrack
|
||||
|
||||
import (
|
||||
"github.com/influxdata/telegraf/testutil"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"path"
|
||||
"strconv"
|
||||
"strings"
|
||||
"testing"
|
||||
)
|
||||
|
||||
func restoreDflts(savedFiles, savedDirs []string) {
|
||||
dfltFiles = savedFiles
|
||||
dfltDirs = savedDirs
|
||||
}
|
||||
|
||||
func TestNoFilesFound(t *testing.T) {
|
||||
defer restoreDflts(dfltFiles, dfltDirs)
|
||||
|
||||
dfltFiles = []string{"baz.txt"}
|
||||
dfltDirs = []string{"./foo/bar"}
|
||||
c := &Conntrack{}
|
||||
acc := &testutil.Accumulator{}
|
||||
err := c.Gather(acc)
|
||||
|
||||
assert.EqualError(t, err, "Conntrack input failed to collect metrics. "+
|
||||
"Is the conntrack kernel module loaded?")
|
||||
}
|
||||
|
||||
func TestDefaultsUsed(t *testing.T) {
|
||||
defer restoreDflts(dfltFiles, dfltDirs)
|
||||
tmpdir, err := ioutil.TempDir("", "tmp1")
|
||||
assert.NoError(t, err)
|
||||
defer os.Remove(tmpdir)
|
||||
|
||||
tmpFile, err := ioutil.TempFile(tmpdir, "ip_conntrack_count")
|
||||
assert.NoError(t, err)
|
||||
|
||||
dfltDirs = []string{tmpdir}
|
||||
fname := path.Base(tmpFile.Name())
|
||||
dfltFiles = []string{fname}
|
||||
|
||||
count := 1234321
|
||||
ioutil.WriteFile(tmpFile.Name(), []byte(strconv.Itoa(count)), 0660)
|
||||
c := &Conntrack{}
|
||||
acc := &testutil.Accumulator{}
|
||||
|
||||
c.Gather(acc)
|
||||
acc.AssertContainsFields(t, inputName, map[string]interface{}{
|
||||
fname: float64(count)})
|
||||
}
|
||||
|
||||
func TestConfigsUsed(t *testing.T) {
|
||||
defer restoreDflts(dfltFiles, dfltDirs)
|
||||
tmpdir, err := ioutil.TempDir("", "tmp1")
|
||||
assert.NoError(t, err)
|
||||
defer os.Remove(tmpdir)
|
||||
|
||||
cntFile, err := ioutil.TempFile(tmpdir, "nf_conntrack_count")
|
||||
maxFile, err := ioutil.TempFile(tmpdir, "nf_conntrack_max")
|
||||
assert.NoError(t, err)
|
||||
|
||||
dfltDirs = []string{tmpdir}
|
||||
cntFname := path.Base(cntFile.Name())
|
||||
maxFname := path.Base(maxFile.Name())
|
||||
dfltFiles = []string{cntFname, maxFname}
|
||||
|
||||
count := 1234321
|
||||
max := 9999999
|
||||
ioutil.WriteFile(cntFile.Name(), []byte(strconv.Itoa(count)), 0660)
|
||||
ioutil.WriteFile(maxFile.Name(), []byte(strconv.Itoa(max)), 0660)
|
||||
c := &Conntrack{}
|
||||
acc := &testutil.Accumulator{}
|
||||
|
||||
c.Gather(acc)
|
||||
|
||||
fix := func(s string) string {
|
||||
return strings.Replace(s, "nf_", "ip_", 1)
|
||||
}
|
||||
|
||||
acc.AssertContainsFields(t, inputName,
|
||||
map[string]interface{}{
|
||||
fix(cntFname): float64(count),
|
||||
fix(maxFname): float64(max),
|
||||
})
|
||||
}
|
||||
46
plugins/inputs/consul/README.md
Normal file
@@ -0,0 +1,46 @@
|
||||
# Telegraf Input Plugin: Consul
|
||||
|
||||
This plugin will collect statistics about all health checks registered in Consul. It uses the [Consul API](https://www.consul.io/docs/agent/http/health.html#health_state)
|
||||
to query the data. It does not report [telemetry](https://www.consul.io/docs/agent/telemetry.html), but Consul can already report those stats via the StatsD protocol if needed.
|
||||
|
||||
## Configuration:
|
||||
|
||||
```
|
||||
# Gather health check statuses from services registered in Consul
|
||||
[[inputs.consul]]
|
||||
## Most of these values default to the ones configured at the Consul agent level.
|
||||
## Optional Consul server address (default: "")
|
||||
# address = ""
|
||||
## Optional URI scheme for the Consul server (default: "")
|
||||
# scheme = ""
|
||||
## Optional ACL token used in every request (default: "")
|
||||
# token = ""
|
||||
## Optional username used for request HTTP Basic Authentication (default: "")
|
||||
# username = ""
|
||||
## Optional password used for HTTP Basic Authentication (default: "")
|
||||
# password = ""
|
||||
## Optional data centre to query the health checks from (default: "")
|
||||
# datacentre = ""
|
||||
```
|
||||
|
||||
## Measurements:
|
||||
|
||||
### Consul:
|
||||
Tags:
|
||||
- node: the node the check/service is registered on
|
||||
- service_name: name of the service (this is the service name not the service ID)
|
||||
|
||||
Fields:
|
||||
- check_id
|
||||
- check_name
|
||||
- service_id
|
||||
- status
|
||||
|
||||
## Example output
|
||||
|
||||
```
|
||||
$ telegraf --config ./telegraf.conf -input-filter consul -test
|
||||
* Plugin: consul, Collection 1
|
||||
> consul_health_checks,host=wolfpit,node=consul-server-node check_id="serfHealth",check_name="Serf Health Status",service_id="",status="passing" 1464698464486439902
|
||||
> consul_health_checks,host=wolfpit,node=consul-server-node,service_name=www.example.com check_id="service:www-example-com.test01",check_name="Service 'www.example.com' check",service_id="www-example-com.test01",status="critical" 1464698464486519036
|
||||
```
|
||||
136
plugins/inputs/consul/consul.go
Normal file
@@ -0,0 +1,136 @@
|
||||
package consul
|
||||
|
||||
import (
|
||||
"net/http"
|
||||
|
||||
"github.com/hashicorp/consul/api"
|
||||
"github.com/influxdata/telegraf"
|
||||
"github.com/influxdata/telegraf/internal"
|
||||
"github.com/influxdata/telegraf/plugins/inputs"
|
||||
)
|
||||
|
||||
type Consul struct {
|
||||
Address string
|
||||
Scheme string
|
||||
Token string
|
||||
Username string
|
||||
Password string
|
||||
Datacentre string
|
||||
|
||||
// Path to CA file
|
||||
SSLCA string `toml:"ssl_ca"`
|
||||
// Path to host cert file
|
||||
SSLCert string `toml:"ssl_cert"`
|
||||
// Path to cert key file
|
||||
SSLKey string `toml:"ssl_key"`
|
||||
// Use SSL but skip chain & host verification
|
||||
InsecureSkipVerify bool
|
||||
|
||||
// client used to connect to the Consul agent
|
||||
client *api.Client
|
||||
}
|
||||
|
||||
var sampleConfig = `
|
||||
## Most of these values default to the ones configured at the Consul agent level.
|
||||
## Optional Consul server address (default: "localhost")
|
||||
# address = "localhost"
|
||||
## Optional URI scheme for the Consul server (default: "http")
|
||||
# scheme = "http"
|
||||
## Optional ACL token used in every request (default: "")
|
||||
# token = ""
|
||||
## Optional username used for request HTTP Basic Authentication (default: "")
|
||||
# username = ""
|
||||
## Optional password used for HTTP Basic Authentication (default: "")
|
||||
# password = ""
|
||||
## Optional data centre to query the health checks from (default: "")
|
||||
# datacentre = ""
|
||||
`
|
||||
|
||||
func (c *Consul) Description() string {
|
||||
return "Gather health check statuses from services registered in Consul"
|
||||
}
|
||||
|
||||
func (c *Consul) SampleConfig() string {
|
||||
return sampleConfig
|
||||
}
|
||||
|
||||
func (c *Consul) createAPIClient() (*api.Client, error) {
|
||||
config := api.DefaultConfig()
|
||||
|
||||
if c.Address != "" {
|
||||
config.Address = c.Address
|
||||
}
|
||||
|
||||
if c.Scheme != "" {
|
||||
config.Scheme = c.Scheme
|
||||
}
|
||||
|
||||
if c.Datacentre != "" {
|
||||
config.Datacenter = c.Datacentre
|
||||
}
|
||||
|
||||
if c.Username != "" {
|
||||
config.HttpAuth = &api.HttpBasicAuth{
|
||||
Username: c.Username,
|
||||
Password: c.Password,
|
||||
}
|
||||
}
|
||||
|
||||
tlsCfg, err := internal.GetTLSConfig(
|
||||
c.SSLCert, c.SSLKey, c.SSLCA, c.InsecureSkipVerify)
|
||||
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
config.HttpClient.Transport = &http.Transport{
|
||||
TLSClientConfig: tlsCfg,
|
||||
}
|
||||
|
||||
return api.NewClient(config)
|
||||
}
|
||||
|
||||
func (c *Consul) GatherHealthCheck(acc telegraf.Accumulator, checks []*api.HealthCheck) {
|
||||
for _, check := range checks {
|
||||
record := make(map[string]interface{})
|
||||
tags := make(map[string]string)
|
||||
|
||||
record["check_id"] = check.CheckID
|
||||
record["check_name"] = check.Name
|
||||
record["service_id"] = check.ServiceID
|
||||
record["status"] = check.Status
|
||||
|
||||
tags["node"] = check.Node
|
||||
tags["service_name"] = check.ServiceName
|
||||
|
||||
acc.AddFields("consul_health_checks", record, tags)
|
||||
}
|
||||
}
|
||||
|
||||
func (c *Consul) Gather(acc telegraf.Accumulator) error {
|
||||
if c.client == nil {
|
||||
newClient, err := c.createAPIClient()
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
c.client = newClient
|
||||
}
|
||||
|
||||
checks, _, err := c.client.Health().State("any", nil)
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
c.GatherHealthCheck(acc, checks)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func init() {
|
||||
inputs.Add("consul", func() telegraf.Input {
|
||||
return &Consul{}
|
||||
})
|
||||
}
|
||||
42
plugins/inputs/consul/consul_test.go
Normal file
@@ -0,0 +1,42 @@
|
||||
package consul
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/hashicorp/consul/api"
|
||||
"github.com/influxdata/telegraf/testutil"
|
||||
)
|
||||
|
||||
var sampleChecks = []*api.HealthCheck{
|
||||
&api.HealthCheck{
|
||||
Node: "localhost",
|
||||
CheckID: "foo.health123",
|
||||
Name: "foo.health",
|
||||
Status: "passing",
|
||||
Notes: "lorem ipsum",
|
||||
Output: "OK",
|
||||
ServiceID: "foo.123",
|
||||
ServiceName: "foo",
|
||||
},
|
||||
}
|
||||
|
||||
func TestGatherHealthCheck(t *testing.T) {
|
||||
expectedFields := map[string]interface{}{
|
||||
"check_id": "foo.health123",
|
||||
"check_name": "foo.health",
|
||||
"status": "passing",
|
||||
"service_id": "foo.123",
|
||||
}
|
||||
|
||||
expectedTags := map[string]string{
|
||||
"node": "localhost",
|
||||
"service_name": "foo",
|
||||
}
|
||||
|
||||
var acc testutil.Accumulator
|
||||
|
||||
consul := &Consul{}
|
||||
consul.GatherHealthCheck(&acc, sampleChecks)
|
||||
|
||||
acc.AssertContainsTaggedFields(t, "consul_health_checks", expectedFields, expectedTags)
|
||||
}
|
||||
@@ -11,6 +11,13 @@ and optionally [cluster](https://www.elastic.co/guide/en/elasticsearch/reference
|
||||
servers = ["http://localhost:9200"]
|
||||
local = true
|
||||
cluster_health = true
|
||||
|
||||
## Optional SSL Config
|
||||
# ssl_ca = "/etc/telegraf/ca.pem"
|
||||
# ssl_cert = "/etc/telegraf/cert.pem"
|
||||
# ssl_key = "/etc/telegraf/key.pem"
|
||||
## Use SSL but skip chain & host verification
|
||||
# insecure_skip_verify = false
|
||||
```
|
||||
|
||||
### Measurements & Fields:
|
||||
|
||||
@@ -2,14 +2,14 @@ package elasticsearch
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/influxdata/telegraf"
|
||||
"github.com/influxdata/telegraf/internal"
|
||||
"github.com/influxdata/telegraf/internal/errchan"
|
||||
"github.com/influxdata/telegraf/plugins/inputs"
|
||||
jsonparser "github.com/influxdata/telegraf/plugins/parsers/json"
|
||||
)
|
||||
@@ -68,25 +68,31 @@ const sampleConfig = `
|
||||
|
||||
## set cluster_health to true when you want to also obtain cluster level stats
|
||||
cluster_health = false
|
||||
|
||||
## Optional SSL Config
|
||||
# ssl_ca = "/etc/telegraf/ca.pem"
|
||||
# ssl_cert = "/etc/telegraf/cert.pem"
|
||||
# ssl_key = "/etc/telegraf/key.pem"
|
||||
## Use SSL but skip chain & host verification
|
||||
# insecure_skip_verify = false
|
||||
`
|
||||
|
||||
// Elasticsearch is a plugin to read stats from one or many Elasticsearch
|
||||
// servers.
|
||||
type Elasticsearch struct {
|
||||
Local bool
|
||||
Servers []string
|
||||
ClusterHealth bool
|
||||
client *http.Client
|
||||
Local bool
|
||||
Servers []string
|
||||
ClusterHealth bool
|
||||
SSLCA string `toml:"ssl_ca"` // Path to CA file
|
||||
SSLCert string `toml:"ssl_cert"` // Path to host cert file
|
||||
SSLKey string `toml:"ssl_key"` // Path to cert key file
|
||||
InsecureSkipVerify bool // Use SSL but skip chain & host verification
|
||||
client *http.Client
|
||||
}
|
||||
|
||||
// NewElasticsearch return a new instance of Elasticsearch
|
||||
func NewElasticsearch() *Elasticsearch {
|
||||
tr := &http.Transport{ResponseHeaderTimeout: time.Duration(3 * time.Second)}
|
||||
client := &http.Client{
|
||||
Transport: tr,
|
||||
Timeout: time.Duration(4 * time.Second),
|
||||
}
|
||||
return &Elasticsearch{client: client}
|
||||
return &Elasticsearch{}
|
||||
}
|
||||
|
||||
// SampleConfig returns sample configuration for this plugin.
|
||||
@@ -102,7 +108,16 @@ func (e *Elasticsearch) Description() string {
|
||||
// Gather reads the stats from Elasticsearch and writes it to the
|
||||
// Accumulator.
|
||||
func (e *Elasticsearch) Gather(acc telegraf.Accumulator) error {
|
||||
errChan := make(chan error, len(e.Servers))
|
||||
if e.client == nil {
|
||||
client, err := e.createHttpClient()
|
||||
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
e.client = client
|
||||
}
|
||||
|
||||
errChan := errchan.New(len(e.Servers))
|
||||
var wg sync.WaitGroup
|
||||
wg.Add(len(e.Servers))
|
||||
|
||||
@@ -116,7 +131,7 @@ func (e *Elasticsearch) Gather(acc telegraf.Accumulator) error {
|
||||
url = s + statsPath
|
||||
}
|
||||
if err := e.gatherNodeStats(url, acc); err != nil {
|
||||
errChan <- err
|
||||
errChan.C <- err
|
||||
return
|
||||
}
|
||||
if e.ClusterHealth {
|
||||
@@ -126,17 +141,24 @@ func (e *Elasticsearch) Gather(acc telegraf.Accumulator) error {
|
||||
}
|
||||
|
||||
wg.Wait()
|
||||
close(errChan)
|
||||
// Get all errors and return them as one giant error
|
||||
errStrings := []string{}
|
||||
for err := range errChan {
|
||||
errStrings = append(errStrings, err.Error())
|
||||
return errChan.Error()
|
||||
}
|
||||
|
||||
func (e *Elasticsearch) createHttpClient() (*http.Client, error) {
|
||||
tlsCfg, err := internal.GetTLSConfig(e.SSLCert, e.SSLKey, e.SSLCA, e.InsecureSkipVerify)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
tr := &http.Transport{
|
||||
ResponseHeaderTimeout: time.Duration(3 * time.Second),
|
||||
TLSClientConfig: tlsCfg,
|
||||
}
|
||||
client := &http.Client{
|
||||
Transport: tr,
|
||||
Timeout: time.Duration(4 * time.Second),
|
||||
}
|
||||
|
||||
if len(errStrings) == 0 {
|
||||
return nil
|
||||
}
|
||||
return errors.New(strings.Join(errStrings, "\n"))
|
||||
return client, nil
|
||||
}
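Based only on the `errchan` calls visible in this diff (`New`, the `C` channel, and `Error`), the fan-out pattern being adopted here looks roughly like the sketch below; `gatherAll` and its arguments are placeholders:

```go
// Sketch of the errchan fan-out pattern, assuming only the errchan.New,
// .C, and .Error calls shown in this diff. The channel is buffered with
// one slot per server, so every goroutine can send without blocking.
func gatherAll(servers []string, gather func(string) error) error {
	errChan := errchan.New(len(servers))
	var wg sync.WaitGroup
	wg.Add(len(servers))
	for _, s := range servers {
		go func(s string) {
			defer wg.Done()
			// Sending nil appears to be fine: the haproxy change below
			// forwards every result unconditionally and relies on Error()
			// to collapse only the non-nil ones.
			errChan.C <- gather(s)
		}(s)
	}
	wg.Wait()
	return errChan.Error()
}
```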
|
||||
|
||||
func (e *Elasticsearch) gatherNodeStats(url string, acc telegraf.Accumulator) error {
|
||||
|
||||
@@ -38,7 +38,7 @@ func (t *transportMock) CancelRequest(_ *http.Request) {
|
||||
}
|
||||
|
||||
func TestElasticsearch(t *testing.T) {
|
||||
es := NewElasticsearch()
|
||||
es := newElasticsearchWithClient()
|
||||
es.Servers = []string{"http://example.com:9200"}
|
||||
es.client.Transport = newTransportMock(http.StatusOK, statsResponse)
|
||||
|
||||
@@ -67,7 +67,7 @@ func TestElasticsearch(t *testing.T) {
|
||||
}
|
||||
|
||||
func TestGatherClusterStats(t *testing.T) {
|
||||
es := NewElasticsearch()
|
||||
es := newElasticsearchWithClient()
|
||||
es.Servers = []string{"http://example.com:9200"}
|
||||
es.ClusterHealth = true
|
||||
es.client.Transport = newTransportMock(http.StatusOK, clusterResponse)
|
||||
@@ -87,3 +87,9 @@ func TestGatherClusterStats(t *testing.T) {
|
||||
v2IndexExpected,
|
||||
map[string]string{"index": "v2"})
|
||||
}
|
||||
|
||||
func newElasticsearchWithClient() *Elasticsearch {
|
||||
es := NewElasticsearch()
|
||||
es.client = &http.Client{}
|
||||
return es
|
||||
}
|
||||
|
||||
@@ -6,14 +6,20 @@ Please also see: [Telegraf Input Data Formats](https://github.com/influxdata/tel
|
||||
|
||||
#### Configuration
|
||||
|
||||
In this example a script called ```/tmp/test.sh``` and a script called ```/tmp/test2.sh```
|
||||
are configured for ```[[inputs.exec]]``` in JSON format.
|
||||
In this example a script called ```/tmp/test.sh```, a script called ```/tmp/test2.sh```, and
|
||||
all scripts matching glob pattern ```/tmp/collect_*.sh``` are configured for ```[[inputs.exec]]```
|
||||
in JSON format. Glob patterns are matched on every run, so adding new scripts that match the pattern
|
||||
will cause them to be picked up immediately.
|
||||
|
||||
```
|
||||
```toml
|
||||
# Read flattened metrics from one or more commands that output JSON to stdout
|
||||
[[inputs.exec]]
|
||||
# Shell/commands array
|
||||
commands = ["/tmp/test.sh", "/tmp/test2.sh"]
|
||||
# Full command line to executable with parameters, or a glob pattern to run all matching files.
|
||||
commands = ["/tmp/test.sh", "/tmp/test2.sh", "/tmp/collect_*.sh"]
|
||||
|
||||
## Timeout for each command to complete.
|
||||
timeout = "5s"
|
||||
|
||||
# Data format to consume.
|
||||
# NOTE json only reads numerical measurements, strings and booleans are ignored.
|
||||
@@ -21,26 +27,6 @@ are configured for ```[[inputs.exec]]``` in JSON format.
|
||||
|
||||
# measurement name suffix (for separating different commands)
|
||||
name_suffix = "_mycollector"
|
||||
|
||||
## Below configuration will be used for data_format = "graphite", can be ignored for other data_format
|
||||
## If matching multiple measurement files, this string will be used to join the matched values.
|
||||
#separator = "."
|
||||
|
||||
## Each template line requires a template pattern. It can have an optional
|
||||
## filter before the template and separated by spaces. It can also have optional extra
|
||||
## tags following the template. Multiple tags should be separated by commas and no spaces
|
||||
## similar to the line protocol format. There can be only one default template.
|
||||
## Templates support below format:
|
||||
## 1. filter + template
|
||||
## 2. filter + template + extra tag
|
||||
## 3. filter + template with field key
|
||||
## 4. default template
|
||||
#templates = [
|
||||
# "*.app env.service.resource.measurement",
|
||||
# "stats.* .host.measurement* region=us-west,agent=sensu",
|
||||
# "stats2.* .host.measurement.field",
|
||||
# "measurement*"
|
||||
#]
|
||||
```
|
||||
|
||||
Other options for modifying the measurement names are:
|
||||
@@ -79,7 +65,7 @@ in influx line-protocol format.
|
||||
|
||||
#### Configuration
|
||||
|
||||
```
|
||||
```toml
|
||||
[[inputs.exec]]
|
||||
# Shell/commands array
|
||||
# compatible with old version
|
||||
@@ -87,6 +73,9 @@ in influx line-protocol format.
|
||||
# command = "/usr/bin/line_protocol_collector"
|
||||
commands = ["/usr/bin/line_protocol_collector","/tmp/test2.sh"]
|
||||
|
||||
## Timeout for each command to complete.
|
||||
timeout = "5s"
|
||||
|
||||
# Data format to consume.
|
||||
# NOTE json only reads numerical measurements, strings and booleans are ignored.
|
||||
data_format = "influx"
|
||||
@@ -120,12 +109,16 @@ We can also change the data_format to "graphite" to use the metrics collecting s
|
||||
In this example a script called /tmp/test.sh and a script called /tmp/test2.sh are configured for [[inputs.exec]] in graphite format.
|
||||
|
||||
#### Configuration
|
||||
```
|
||||
|
||||
```toml
|
||||
# Read flattened metrics from one or more commands that output JSON to stdout
|
||||
[[inputs.exec]]
|
||||
# Shell/commands array
|
||||
commands = ["/tmp/test.sh","/tmp/test2.sh"]
|
||||
|
||||
## Timeout for each command to complete.
|
||||
timeout = "5s"
|
||||
|
||||
# Data format to consume.
|
||||
# NOTE json only reads numerical measurements, strings and booleans are ignored.
|
||||
data_format = "graphite"
|
||||
@@ -180,4 +173,3 @@ sensu.metric.net.server0.eth0.rx_dropped 0 1444234982
|
||||
The templates configuration is used to parse the graphite metrics to support InfluxDB/OpenTSDB tagging store engines.
|
||||
|
||||
For more detailed information about templates, please refer to [The graphite Input](https://github.com/influxdata/influxdb/blob/master/services/graphite/README.md).
|
||||
|
||||
|
||||
@@ -4,6 +4,8 @@ import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"sync"
|
||||
"syscall"
|
||||
"time"
|
||||
@@ -12,6 +14,7 @@ import (
|
||||
|
||||
"github.com/influxdata/telegraf"
|
||||
"github.com/influxdata/telegraf/internal"
|
||||
"github.com/influxdata/telegraf/internal/errchan"
|
||||
"github.com/influxdata/telegraf/plugins/inputs"
|
||||
"github.com/influxdata/telegraf/plugins/parsers"
|
||||
"github.com/influxdata/telegraf/plugins/parsers/nagios"
|
||||
@@ -19,7 +22,11 @@ import (
|
||||
|
||||
const sampleConfig = `
|
||||
## Commands array
|
||||
commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]
|
||||
commands = [
|
||||
"/tmp/test.sh",
|
||||
"/usr/bin/mycollector --foo=bar",
|
||||
"/tmp/collect_*.sh"
|
||||
]
|
||||
|
||||
## Timeout for each command to complete.
|
||||
timeout = "5s"
|
||||
@@ -41,8 +48,6 @@ type Exec struct {
|
||||
|
||||
parser parsers.Parser
|
||||
|
||||
wg sync.WaitGroup
|
||||
|
||||
runner Runner
|
||||
errChan chan error
|
||||
}
|
||||
@@ -112,8 +117,8 @@ func (c CommandRunner) Run(
|
||||
return out.Bytes(), nil
|
||||
}
|
||||
|
||||
func (e *Exec) ProcessCommand(command string, acc telegraf.Accumulator) {
|
||||
defer e.wg.Done()
|
||||
func (e *Exec) ProcessCommand(command string, acc telegraf.Accumulator, wg *sync.WaitGroup) {
|
||||
defer wg.Done()
|
||||
|
||||
out, err := e.runner.Run(e, command, acc)
|
||||
if err != nil {
|
||||
@@ -144,29 +149,52 @@ func (e *Exec) SetParser(parser parsers.Parser) {
|
||||
}
|
||||
|
||||
func (e *Exec) Gather(acc telegraf.Accumulator) error {
|
||||
var wg sync.WaitGroup
|
||||
// Legacy single command support
|
||||
if e.Command != "" {
|
||||
e.Commands = append(e.Commands, e.Command)
|
||||
e.Command = ""
|
||||
}
|
||||
|
||||
e.errChan = make(chan error, len(e.Commands))
|
||||
commands := make([]string, 0, len(e.Commands))
|
||||
for _, pattern := range e.Commands {
|
||||
cmdAndArgs := strings.SplitN(pattern, " ", 2)
|
||||
if len(cmdAndArgs) == 0 {
|
||||
continue
|
||||
}
|
||||
|
||||
e.wg.Add(len(e.Commands))
|
||||
for _, command := range e.Commands {
|
||||
go e.ProcessCommand(command, acc)
|
||||
}
|
||||
e.wg.Wait()
|
||||
matches, err := filepath.Glob(cmdAndArgs[0])
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
select {
|
||||
default:
|
||||
close(e.errChan)
|
||||
return nil
|
||||
case err := <-e.errChan:
|
||||
close(e.errChan)
|
||||
return err
|
||||
if len(matches) == 0 {
|
||||
// There were no matches with the glob pattern, so let's assume
|
||||
// that the command is in PATH and just run it as it is
|
||||
commands = append(commands, pattern)
|
||||
} else {
|
||||
// There were matches, so we'll append each match together with
|
||||
// the arguments to the commands slice
|
||||
for _, match := range matches {
|
||||
if len(cmdAndArgs) == 1 {
|
||||
commands = append(commands, match)
|
||||
} else {
|
||||
commands = append(commands,
|
||||
strings.Join([]string{match, cmdAndArgs[1]}, " "))
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
errChan := errchan.New(len(commands))
|
||||
e.errChan = errChan.C
|
||||
|
||||
wg.Add(len(commands))
|
||||
for _, command := range commands {
|
||||
go e.ProcessCommand(command, acc, &wg)
|
||||
}
|
||||
wg.Wait()
|
||||
return errChan.Error()
|
||||
}
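The glob handling above can be read as a small standalone function; this sketch reproduces its behavior (`/bin/ech*` is the same pattern the new tests below exercise):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// expand splits off the first token of a command line, globs it, and
// re-attaches the arguments to every match, falling back to the raw
// pattern when nothing matches (i.e. assume the command is on PATH).
func expand(pattern string) []string {
	cmdAndArgs := strings.SplitN(pattern, " ", 2)
	matches, _ := filepath.Glob(cmdAndArgs[0])
	if len(matches) == 0 {
		return []string{pattern}
	}
	out := make([]string, 0, len(matches))
	for _, match := range matches {
		if len(cmdAndArgs) == 1 {
			out = append(out, match)
		} else {
			out = append(out, match+" "+cmdAndArgs[1])
		}
	}
	return out
}

func main() {
	fmt.Println(expand("/bin/ech* metric_value")) // e.g. [/bin/echo metric_value]
}
```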
|
||||
|
||||
func init() {
|
||||
|
||||
@@ -169,3 +169,51 @@ func TestLineProtocolParseMultiple(t *testing.T) {
|
||||
acc.AssertContainsTaggedFields(t, "cpu", fields, tags)
|
||||
}
|
||||
}
|
||||
|
||||
func TestExecCommandWithGlob(t *testing.T) {
|
||||
parser, _ := parsers.NewValueParser("metric", "string", nil)
|
||||
e := NewExec()
|
||||
e.Commands = []string{"/bin/ech* metric_value"}
|
||||
e.SetParser(parser)
|
||||
|
||||
var acc testutil.Accumulator
|
||||
err := e.Gather(&acc)
|
||||
require.NoError(t, err)
|
||||
|
||||
fields := map[string]interface{}{
|
||||
"value": "metric_value",
|
||||
}
|
||||
acc.AssertContainsFields(t, "metric", fields)
|
||||
}
|
||||
|
||||
func TestExecCommandWithoutGlob(t *testing.T) {
|
||||
parser, _ := parsers.NewValueParser("metric", "string", nil)
|
||||
e := NewExec()
|
||||
e.Commands = []string{"/bin/echo metric_value"}
|
||||
e.SetParser(parser)
|
||||
|
||||
var acc testutil.Accumulator
|
||||
err := e.Gather(&acc)
|
||||
require.NoError(t, err)
|
||||
|
||||
fields := map[string]interface{}{
|
||||
"value": "metric_value",
|
||||
}
|
||||
acc.AssertContainsFields(t, "metric", fields)
|
||||
}
|
||||
|
||||
func TestExecCommandWithoutGlobAndPath(t *testing.T) {
|
||||
parser, _ := parsers.NewValueParser("metric", "string", nil)
|
||||
e := NewExec()
|
||||
e.Commands = []string{"echo metric_value"}
|
||||
e.SetParser(parser)
|
||||
|
||||
var acc testutil.Accumulator
|
||||
err := e.Gather(&acc)
|
||||
require.NoError(t, err)
|
||||
|
||||
fields := map[string]interface{}{
|
||||
"value": "metric_value",
|
||||
}
|
||||
acc.AssertContainsFields(t, "metric", fields)
|
||||
}
|
||||
|
||||
55
plugins/inputs/graylog/README.md
Normal file
@@ -0,0 +1,55 @@
|
||||
# GrayLog plugin
|
||||
|
||||
The Graylog plugin can collect data from remote Graylog service URLs.
|
||||
|
||||
The plugin currently supports two types of endpoints:
|
||||
|
||||
- multiple (Ex http://[graylog-server-ip]:12900/system/metrics/multiple)
|
||||
- namespace (Ex http://[graylog-server-ip]:12900/system/metrics/namespace/{namespace})
|
||||
|
||||
The endpoint list can be a mix of one multiple endpoint and several namespace endpoints.
|
||||
|
||||
|
||||
Note: if a namespace endpoint is specified, the metrics array will be ignored for that call.
|
||||
|
||||
### Configuration:
|
||||
|
||||
```toml
|
||||
# Read flattened metrics from one or more GrayLog HTTP endpoints
|
||||
[[inputs.graylog]]
|
||||
## API endpoint, currently supported API:
|
||||
##
|
||||
## - multiple (Ex http://<host>:12900/system/metrics/multiple)
|
||||
## - namespace (Ex http://<host>:12900/system/metrics/namespace/{namespace})
|
||||
##
|
||||
## For namespace endpoint, the metrics array will be ignored for that call.
|
||||
## Endpoint can contain namespace and multiple type calls.
|
||||
##
|
||||
## Please check http://[graylog-server-ip]:12900/api-browser for full list
|
||||
## of endpoints
|
||||
servers = [
|
||||
"http://[graylog-server-ip]:12900/system/metrics/multiple",
|
||||
]
|
||||
|
||||
## Metrics list
|
||||
## List of metrics can be found on Graylog webservice documentation.
|
||||
## Or by hitting the web service API at:
|
||||
## http://[graylog-host]:12900/system/metrics
|
||||
metrics = [
|
||||
"jvm.cl.loaded",
|
||||
"jvm.memory.pools.Metaspace.committed"
|
||||
]
|
||||
|
||||
## Username and password
|
||||
username = ""
|
||||
password = ""
|
||||
|
||||
## Optional SSL Config
|
||||
# ssl_ca = "/etc/telegraf/ca.pem"
|
||||
# ssl_cert = "/etc/telegraf/cert.pem"
|
||||
# ssl_key = "/etc/telegraf/key.pem"
|
||||
## Use SSL but skip chain & host verification
|
||||
# insecure_skip_verify = false
|
||||
```
|
||||
|
||||
Please refer to the GrayLog metrics API browser for the full list of metric endpoints: http://host:12900/api-browser
|
||||
312
plugins/inputs/graylog/graylog.go
Normal file
@@ -0,0 +1,312 @@
|
||||
package graylog
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/base64"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"net"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/influxdata/telegraf"
|
||||
"github.com/influxdata/telegraf/internal"
|
||||
"github.com/influxdata/telegraf/plugins/inputs"
|
||||
)
|
||||
|
||||
type ResponseMetrics struct {
|
||||
total int
|
||||
Metrics []Metric `json:"metrics"`
|
||||
}
|
||||
|
||||
type Metric struct {
|
||||
FullName string `json:"full_name"`
|
||||
Name string `json:"name"`
|
||||
Type string `json:"type"`
|
||||
Fields map[string]interface{} `json:"metric"`
|
||||
}
|
||||
|
||||
type GrayLog struct {
|
||||
Servers []string
|
||||
Metrics []string
|
||||
Username string
|
||||
Password string
|
||||
|
||||
// Path to CA file
|
||||
SSLCA string `toml:"ssl_ca"`
|
||||
// Path to host cert file
|
||||
SSLCert string `toml:"ssl_cert"`
|
||||
// Path to cert key file
|
||||
SSLKey string `toml:"ssl_key"`
|
||||
// Use SSL but skip chain & host verification
|
||||
InsecureSkipVerify bool
|
||||
|
||||
client HTTPClient
|
||||
}
|
||||
|
||||
type HTTPClient interface {
|
||||
// Returns the result of an http request
|
||||
//
|
||||
// Parameters:
|
||||
// req: HTTP request object
|
||||
//
|
||||
// Returns:
|
||||
// http.Response: HTTP response object
|
||||
// error : Any error that may have occurred
|
||||
MakeRequest(req *http.Request) (*http.Response, error)
|
||||
|
||||
SetHTTPClient(client *http.Client)
|
||||
HTTPClient() *http.Client
|
||||
}
|
||||
|
||||
type Messagebody struct {
|
||||
Metrics []string `json:"metrics"`
|
||||
}
|
||||
|
||||
type RealHTTPClient struct {
|
||||
client *http.Client
|
||||
}
|
||||
|
||||
func (c *RealHTTPClient) MakeRequest(req *http.Request) (*http.Response, error) {
|
||||
return c.client.Do(req)
|
||||
}
|
||||
|
||||
func (c *RealHTTPClient) SetHTTPClient(client *http.Client) {
|
||||
c.client = client
|
||||
}
|
||||
|
||||
func (c *RealHTTPClient) HTTPClient() *http.Client {
|
||||
return c.client
|
||||
}
|
||||
|
||||
var sampleConfig = `
|
||||
## API endpoint, currently supported API:
|
||||
##
|
||||
## - multiple (Ex http://<host>:12900/system/metrics/multiple)
|
||||
## - namespace (Ex http://<host>:12900/system/metrics/namespace/{namespace})
|
||||
##
|
||||
## For namespace endpoint, the metrics array will be ignored for that call.
|
||||
## Endpoint can contain namespace and multiple type calls.
|
||||
##
|
||||
## Please check http://[graylog-server-ip]:12900/api-browser for full list
|
||||
## of endpoints
|
||||
servers = [
|
||||
"http://[graylog-server-ip]:12900/system/metrics/multiple",
|
||||
]
|
||||
|
||||
## Metrics list
|
||||
## List of metrics can be found on Graylog webservice documentation.
|
||||
## Or by hitting the web service API at:
|
||||
## http://[graylog-host]:12900/system/metrics
|
||||
metrics = [
|
||||
"jvm.cl.loaded",
|
||||
"jvm.memory.pools.Metaspace.committed"
|
||||
]
|
||||
|
||||
## Username and password
|
||||
username = ""
|
||||
password = ""
|
||||
|
||||
## Optional SSL Config
|
||||
# ssl_ca = "/etc/telegraf/ca.pem"
|
||||
# ssl_cert = "/etc/telegraf/cert.pem"
|
||||
# ssl_key = "/etc/telegraf/key.pem"
|
||||
## Use SSL but skip chain & host verification
|
||||
# insecure_skip_verify = false
|
||||
`
|
||||
|
||||
func (h *GrayLog) SampleConfig() string {
|
||||
return sampleConfig
|
||||
}
|
||||
|
||||
func (h *GrayLog) Description() string {
|
||||
return "Read flattened metrics from one or more GrayLog HTTP endpoints"
|
||||
}
|
||||
|
||||
// Gathers data for all servers.
|
||||
func (h *GrayLog) Gather(acc telegraf.Accumulator) error {
|
||||
var wg sync.WaitGroup
|
||||
|
||||
if h.client.HTTPClient() == nil {
|
||||
tlsCfg, err := internal.GetTLSConfig(
|
||||
h.SSLCert, h.SSLKey, h.SSLCA, h.InsecureSkipVerify)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
tr := &http.Transport{
|
||||
ResponseHeaderTimeout: time.Duration(3 * time.Second),
|
||||
TLSClientConfig: tlsCfg,
|
||||
}
|
||||
client := &http.Client{
|
||||
Transport: tr,
|
||||
Timeout: time.Duration(4 * time.Second),
|
||||
}
|
||||
h.client.SetHTTPClient(client)
|
||||
}
|
||||
|
||||
errorChannel := make(chan error, len(h.Servers))
|
||||
|
||||
for _, server := range h.Servers {
|
||||
wg.Add(1)
|
||||
go func(server string) {
|
||||
defer wg.Done()
|
||||
if err := h.gatherServer(acc, server); err != nil {
|
||||
errorChannel <- err
|
||||
}
|
||||
}(server)
|
||||
}
|
||||
|
||||
wg.Wait()
|
||||
close(errorChannel)
|
||||
|
||||
// Get all errors and return them as one giant error
|
||||
errorStrings := []string{}
|
||||
for err := range errorChannel {
|
||||
errorStrings = append(errorStrings, err.Error())
|
||||
}
|
||||
|
||||
if len(errorStrings) == 0 {
|
||||
return nil
|
||||
}
|
||||
return errors.New(strings.Join(errorStrings, "\n"))
|
||||
}
|
||||
|
||||
// Gathers data from a particular server
|
||||
// Parameters:
|
||||
// acc : The telegraf Accumulator to use
|
||||
// serverURL: endpoint to send request to
|
||||
// service : the service being queried
|
||||
//
|
||||
// Returns:
|
||||
// error: Any error that may have occurred
|
||||
func (h *GrayLog) gatherServer(
|
||||
acc telegraf.Accumulator,
|
||||
serverURL string,
|
||||
) error {
|
||||
resp, _, err := h.sendRequest(serverURL)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
requestURL, err := url.Parse(serverURL)
|
||||
host, port, _ := net.SplitHostPort(requestURL.Host)
|
||||
var dat ResponseMetrics
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if err := json.Unmarshal([]byte(resp), &dat); err != nil {
|
||||
return err
|
||||
}
|
||||
for _, m_item := range dat.Metrics {
|
||||
fields := make(map[string]interface{})
|
||||
tags := map[string]string{
|
||||
"server": host,
|
||||
"port": port,
|
||||
"name": m_item.Name,
|
||||
"type": m_item.Type,
|
||||
}
|
||||
h.flatten(m_item.Fields, fields, "")
|
||||
acc.AddFields(m_item.FullName, fields, tags)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Flatten JSON hierarchy to produce field name and field value
|
||||
// Parameters:
|
||||
// item: Item map to flatten
|
||||
// fields: Map to store generated fields.
|
||||
// id: Prefix for top level metric (empty string "")
|
||||
// Returns:
|
||||
// void
|
||||
func (h *GrayLog) flatten(item map[string]interface{}, fields map[string]interface{}, id string) {
|
||||
if id != "" {
|
||||
id = id + "_"
|
||||
}
|
||||
for k, i := range item {
|
||||
switch i.(type) {
|
||||
case int:
|
||||
fields[id+k] = i.(float64)
|
||||
case float64:
|
||||
fields[id+k] = i.(float64)
|
||||
case map[string]interface{}:
|
||||
h.flatten(i.(map[string]interface{}), fields, id+k)
|
||||
default:
|
||||
}
|
||||
}
|
||||
}
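To make the flattening concrete, here is what it produces for the `KafkaJournal.writeTime` metric used in the tests below (values taken from that fixture):

```go
g := &GrayLog{}
fields := make(map[string]interface{})
g.flatten(map[string]interface{}{
	"time": map[string]interface{}{"min": float64(99)},
	"rate": map[string]interface{}{"total": float64(10), "mean": float64(2)},
}, fields, "")
// fields now holds: time_min=99, rate_total=10, rate_mean=2
```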
|
||||
|
||||
// Sends an HTTP request to the server using the GrayLog object's HTTPClient.
|
||||
// Parameters:
|
||||
// serverURL: endpoint to send request to
|
||||
//
|
||||
// Returns:
|
||||
// string: body of the response
|
||||
// error : Any error that may have occurred
|
||||
func (h *GrayLog) sendRequest(serverURL string) (string, float64, error) {
|
||||
headers := map[string]string{
|
||||
"Content-Type": "application/json",
|
||||
"Accept": "application/json",
|
||||
}
|
||||
method := "GET"
|
||||
content := bytes.NewBufferString("")
|
||||
headers["Authorization"] = "Basic " + base64.URLEncoding.EncodeToString([]byte(h.Username+":"+h.Password))
|
||||
// Prepare URL
|
||||
requestURL, err := url.Parse(serverURL)
|
||||
if err != nil {
|
||||
return "", -1, fmt.Errorf("Invalid server URL \"%s\"", serverURL)
|
||||
}
|
||||
if strings.Contains(requestURL.String(), "multiple") {
|
||||
m := &Messagebody{Metrics: h.Metrics}
|
||||
http_body, err := json.Marshal(m)
|
||||
if err != nil {
|
||||
return "", -1, fmt.Errorf("Invalid list of Metrics %s", h.Metrics)
|
||||
}
|
||||
method = "POST"
|
||||
content = bytes.NewBuffer(http_body)
|
||||
}
|
||||
req, err := http.NewRequest(method, requestURL.String(), content)
|
||||
if err != nil {
|
||||
return "", -1, err
|
||||
}
|
||||
// Add header parameters
|
||||
for k, v := range headers {
|
||||
req.Header.Add(k, v)
|
||||
}
|
||||
start := time.Now()
|
||||
resp, err := h.client.MakeRequest(req)
|
||||
if err != nil {
|
||||
return "", -1, err
|
||||
}
|
||||
|
||||
defer resp.Body.Close()
|
||||
responseTime := time.Since(start).Seconds()
|
||||
|
||||
body, err := ioutil.ReadAll(resp.Body)
|
||||
if err != nil {
|
||||
return string(body), responseTime, err
|
||||
}
|
||||
|
||||
// Process response
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
err = fmt.Errorf("Response from url \"%s\" has status code %d (%s), expected %d (%s)",
|
||||
requestURL.String(),
|
||||
resp.StatusCode,
|
||||
http.StatusText(resp.StatusCode),
|
||||
http.StatusOK,
|
||||
http.StatusText(http.StatusOK))
|
||||
return string(body), responseTime, err
|
||||
}
|
||||
return string(body), responseTime, err
|
||||
}
|
||||
|
||||
func init() {
|
||||
inputs.Add("graylog", func() telegraf.Input {
|
||||
return &GrayLog{
|
||||
client: &RealHTTPClient{},
|
||||
}
|
||||
})
|
||||
}
|
||||
199
plugins/inputs/graylog/graylog_test.go
Normal file
@@ -0,0 +1,199 @@
|
||||
package graylog
|
||||
|
||||
import (
|
||||
"io/ioutil"
|
||||
"net/http"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"github.com/influxdata/telegraf/testutil"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
)
|
||||
|
||||
const validJSON = `
|
||||
{
|
||||
"total": 3,
|
||||
"metrics": [
|
||||
{
|
||||
"full_name": "jvm.cl.loaded",
|
||||
"metric": {
|
||||
"value": 18910
|
||||
},
|
||||
"name": "loaded",
|
||||
"type": "gauge"
|
||||
},
|
||||
{
|
||||
"full_name": "jvm.memory.pools.Metaspace.committed",
|
||||
"metric": {
|
||||
"value": 108040192
|
||||
},
|
||||
"name": "committed",
|
||||
"type": "gauge"
|
||||
},
|
||||
{
|
||||
"full_name": "org.graylog2.shared.journal.KafkaJournal.writeTime",
|
||||
"metric": {
|
||||
"time": {
|
||||
"min": 99
|
||||
},
|
||||
"rate": {
|
||||
"total": 10,
|
||||
"mean": 2
|
||||
},
|
||||
"duration_unit": "microseconds",
|
||||
"rate_unit": "events/second"
|
||||
},
|
||||
"name": "writeTime",
|
||||
"type": "hdrtimer"
|
||||
}
|
||||
]
|
||||
}`
|
||||
|
||||
var validTags = map[string]map[string]string{
|
||||
"jvm.cl.loaded": {
|
||||
"name": "loaded",
|
||||
"type": "gauge",
|
||||
"port": "12900",
|
||||
"server": "localhost",
|
||||
},
|
||||
"jvm.memory.pools.Metaspace.committed": {
|
||||
"name": "committed",
|
||||
"type": "gauge",
|
||||
"port": "12900",
|
||||
"server": "localhost",
|
||||
},
|
||||
"org.graylog2.shared.journal.KafkaJournal.writeTime": {
|
||||
"name": "writeTime",
|
||||
"type": "hdrtimer",
|
||||
"port": "12900",
|
||||
"server": "localhost",
|
||||
},
|
||||
}
|
||||
|
||||
var expectedFields = map[string]map[string]interface{}{
|
||||
"jvm.cl.loaded": {
|
||||
"value": float64(18910),
|
||||
},
|
||||
"jvm.memory.pools.Metaspace.committed": {
|
||||
"value": float64(108040192),
|
||||
},
|
||||
"org.graylog2.shared.journal.KafkaJournal.writeTime": {
|
||||
"time_min": float64(99),
|
||||
"rate_total": float64(10),
|
||||
"rate_mean": float64(2),
|
||||
},
|
||||
}
|
||||
|
||||
const invalidJSON = "I don't think this is JSON"
|
||||
|
||||
const empty = ""
|
||||
|
||||
type mockHTTPClient struct {
|
||||
responseBody string
|
||||
statusCode int
|
||||
}
|
||||
|
||||
// Mock implementation of MakeRequest. Usually returns an http.Response with
|
||||
// hard-coded responseBody and statusCode. However, if the request uses a
|
||||
// nonstandard method, it uses status code 405 (method not allowed)
|
||||
func (c *mockHTTPClient) MakeRequest(req *http.Request) (*http.Response, error) {
|
||||
resp := http.Response{}
|
||||
resp.StatusCode = c.statusCode
|
||||
|
||||
// basic error checking on request method
|
||||
allowedMethods := []string{"GET", "HEAD", "POST", "PUT", "DELETE", "TRACE", "CONNECT"}
|
||||
methodValid := false
|
||||
for _, method := range allowedMethods {
|
||||
if req.Method == method {
|
||||
methodValid = true
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
if !methodValid {
|
||||
resp.StatusCode = 405 // Method not allowed
|
||||
}
|
||||
|
||||
resp.Body = ioutil.NopCloser(strings.NewReader(c.responseBody))
|
||||
return &resp, nil
|
||||
}
|
||||
|
||||
func (c *mockHTTPClient) SetHTTPClient(_ *http.Client) {
|
||||
}
|
||||
|
||||
func (c *mockHTTPClient) HTTPClient() *http.Client {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Generates GrayLog objects that use a mock HTTP client.
|
||||
// Parameters:
|
||||
// response : Body of the response that the mock HTTP client should return
|
||||
// statusCode: HTTP status code the mock HTTP client should return
|
||||
//
|
||||
// Returns:
|
||||
// []*GrayLog: GrayLog objects that use the generated mock HTTP client
|
||||
func genMockGrayLog(response string, statusCode int) []*GrayLog {
|
||||
return []*GrayLog{
|
||||
&GrayLog{
|
||||
client: &mockHTTPClient{responseBody: response, statusCode: statusCode},
|
||||
Servers: []string{
|
||||
"http://localhost:12900/system/metrics/multiple",
|
||||
},
|
||||
Metrics: []string{
|
||||
"jvm.memory.pools.Metaspace.committed",
|
||||
"jvm.cl.loaded",
|
||||
"org.graylog2.shared.journal.KafkaJournal.writeTime",
|
||||
},
|
||||
Username: "test",
|
||||
Password: "test",
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
// Test that the proper values are ignored or collected
|
||||
func TestNormalResponse(t *testing.T) {
|
||||
graylog := genMockGrayLog(validJSON, 200)
|
||||
|
||||
for _, service := range graylog {
|
||||
var acc testutil.Accumulator
|
||||
err := service.Gather(&acc)
|
||||
require.NoError(t, err)
|
||||
for k, v := range expectedFields {
|
||||
acc.AssertContainsTaggedFields(t, k, v, validTags[k])
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Test response to HTTP 500
|
||||
func TestHttpJson500(t *testing.T) {
|
||||
graylog := genMockGrayLog(validJSON, 500)
|
||||
|
||||
var acc testutil.Accumulator
|
||||
err := graylog[0].Gather(&acc)
|
||||
|
||||
assert.NotNil(t, err)
|
||||
assert.Equal(t, 0, acc.NFields())
|
||||
}
|
||||
|
||||
// Test response to malformed JSON
|
||||
func TestHttpJsonBadJson(t *testing.T) {
|
||||
graylog := genMockGrayLog(invalidJSON, 200)
|
||||
|
||||
var acc testutil.Accumulator
|
||||
err := graylog[0].Gather(&acc)
|
||||
|
||||
assert.NotNil(t, err)
|
||||
assert.Equal(t, 0, acc.NFields())
|
||||
}
|
||||
|
||||
// Test response to an empty string as the response object
|
||||
func TestHttpJsonEmptyResponse(t *testing.T) {
|
||||
graylog := genMockGrayLog(empty, 200)
|
||||
|
||||
var acc testutil.Accumulator
|
||||
err := graylog[0].Gather(&acc)
|
||||
|
||||
assert.NotNil(t, err)
|
||||
assert.Equal(t, 0, acc.NFields())
|
||||
}
|
||||
@@ -3,8 +3,6 @@ package haproxy
|
||||
import (
|
||||
"encoding/csv"
|
||||
"fmt"
|
||||
"github.com/influxdata/telegraf"
|
||||
"github.com/influxdata/telegraf/plugins/inputs"
|
||||
"io"
|
||||
"net"
|
||||
"net/http"
|
||||
@@ -13,6 +11,10 @@ import (
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/influxdata/telegraf"
|
||||
"github.com/influxdata/telegraf/internal/errchan"
|
||||
"github.com/influxdata/telegraf/plugins/inputs"
|
||||
)
|
||||
|
||||
//CSV format: https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.1
|
||||
@@ -113,20 +115,17 @@ func (g *haproxy) Gather(acc telegraf.Accumulator) error {
|
||||
}
|
||||
|
||||
var wg sync.WaitGroup
|
||||
|
||||
var outerr error
|
||||
|
||||
for _, serv := range g.Servers {
|
||||
wg.Add(1)
|
||||
errChan := errchan.New(len(g.Servers))
|
||||
wg.Add(len(g.Servers))
|
||||
for _, server := range g.Servers {
|
||||
go func(serv string) {
|
||||
defer wg.Done()
|
||||
outerr = g.gatherServer(serv, acc)
|
||||
}(serv)
|
||||
errChan.C <- g.gatherServer(serv, acc)
|
||||
}(server)
|
||||
}
|
||||
|
||||
wg.Wait()
|
||||
|
||||
return outerr
|
||||
return errChan.Error()
|
||||
}
|
||||
|
||||
func (g *haproxy) gatherServerSocket(addr string, acc telegraf.Accumulator) error {
|
||||
|
||||
@@ -22,6 +22,13 @@ This input plugin will test HTTP/HTTPS connections.
|
||||
# body = '''
|
||||
# {'fake':'data'}
|
||||
# '''
|
||||
|
||||
## Optional SSL Config
|
||||
# ssl_ca = "/etc/telegraf/ca.pem"
|
||||
# ssl_cert = "/etc/telegraf/cert.pem"
|
||||
# ssl_key = "/etc/telegraf/key.pem"
|
||||
## Use SSL but skip chain & host verification
|
||||
# insecure_skip_verify = false
|
||||
```
|
||||
|
||||
### Measurements & Fields:
|
||||
|
||||
@@ -21,6 +21,15 @@ type HTTPResponse struct {
|
||||
ResponseTimeout internal.Duration
|
||||
Headers map[string]string
|
||||
FollowRedirects bool
|
||||
|
||||
// Path to CA file
|
||||
SSLCA string `toml:"ssl_ca"`
|
||||
// Path to host cert file
|
||||
SSLCert string `toml:"ssl_cert"`
|
||||
// Path to cert key file
|
||||
SSLKey string `toml:"ssl_key"`
|
||||
// Use SSL but skip chain & host verification
|
||||
InsecureSkipVerify bool
|
||||
}
|
||||
|
||||
// Description returns the plugin Description
|
||||
@@ -44,6 +53,13 @@ var sampleConfig = `
|
||||
# body = '''
|
||||
# {'fake':'data'}
|
||||
# '''
|
||||
|
||||
## Optional SSL Config
|
||||
# ssl_ca = "/etc/telegraf/ca.pem"
|
||||
# ssl_cert = "/etc/telegraf/cert.pem"
|
||||
# ssl_key = "/etc/telegraf/key.pem"
|
||||
## Use SSL but skip chain & host verification
|
||||
# insecure_skip_verify = false
|
||||
`
|
||||
|
||||
// SampleConfig returns the plugin SampleConfig
|
||||
@@ -56,17 +72,27 @@ var ErrRedirectAttempted = errors.New("redirect")
|
||||
|
||||
// CreateHttpClient creates an http client which will timeout at the specified
|
||||
// timeout period and can follow redirects if specified
|
||||
func CreateHttpClient(followRedirects bool, ResponseTimeout time.Duration) *http.Client {
|
||||
func (h *HTTPResponse) createHttpClient() (*http.Client, error) {
|
||||
tlsCfg, err := internal.GetTLSConfig(
|
||||
h.SSLCert, h.SSLKey, h.SSLCA, h.InsecureSkipVerify)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
tr := &http.Transport{
|
||||
ResponseHeaderTimeout: h.ResponseTimeout.Duration,
|
||||
TLSClientConfig: tlsCfg,
|
||||
}
|
||||
client := &http.Client{
|
||||
Timeout: ResponseTimeout,
|
||||
Transport: tr,
|
||||
Timeout: h.ResponseTimeout.Duration,
|
||||
}
|
||||
|
||||
if followRedirects == false {
|
||||
if h.FollowRedirects == false {
|
||||
client.CheckRedirect = func(req *http.Request, via []*http.Request) error {
|
||||
return ErrRedirectAttempted
|
||||
}
|
||||
}
|
||||
return client
|
||||
return client, nil
|
||||
}
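For context, the redirect handling this refactor keeps is the standard `CheckRedirect` pattern; a minimal sketch using the error value from this file:

```go
// Returning an error from CheckRedirect makes client.Do stop at the
// first redirect instead of following it.
client := &http.Client{
	CheckRedirect: func(req *http.Request, via []*http.Request) error {
		return ErrRedirectAttempted
	},
}
```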
|
||||
|
||||
// HTTPGather gathers all fields and returns any errors it encounters
|
||||
@@ -74,7 +100,10 @@ func (h *HTTPResponse) HTTPGather() (map[string]interface{}, error) {
|
||||
// Prepare fields
|
||||
fields := make(map[string]interface{})
|
||||
|
||||
client := CreateHttpClient(h.FollowRedirects, h.ResponseTimeout.Duration)
|
||||
client, err := h.createHttpClient()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var body io.Reader
|
||||
if h.Body != "" {
|
||||
|
||||
@@ -15,6 +15,7 @@ InfluxDB-formatted endpoints. See below for more information.
|
||||
## See the influxdb plugin's README for more details.
|
||||
|
||||
## Multiple URLs from which to read InfluxDB-formatted JSON
|
||||
## Default is "http://localhost:8086/debug/vars".
|
||||
urls = [
|
||||
"http://localhost:8086/debug/vars"
|
||||
]
|
||||
|
||||
@@ -28,6 +28,7 @@ func (*InfluxDB) SampleConfig() string {
|
||||
## See the influxdb plugin's README for more details.
|
||||
|
||||
## Multiple URLs from which to read InfluxDB-formatted JSON
|
||||
## Default is "http://localhost:8086/debug/vars".
|
||||
urls = [
|
||||
"http://localhost:8086/debug/vars"
|
||||
]
|
||||
@@ -35,6 +36,9 @@ func (*InfluxDB) SampleConfig() string {
|
||||
}
|
||||
|
||||
func (i *InfluxDB) Gather(acc telegraf.Accumulator) error {
|
||||
if len(i.URLs) == 0 {
|
||||
i.URLs = []string{"http://localhost:8086/debug/vars"}
|
||||
}
|
||||
errorChannel := make(chan error, len(i.URLs))
|
||||
|
||||
var wg sync.WaitGroup
|
||||
@@ -157,43 +161,45 @@ func (i *InfluxDB) gatherURL(
|
||||
return err
|
||||
}
|
||||
|
||||
if key.(string) == "memstats" {
|
||||
var m memstats
|
||||
if err := dec.Decode(&m); err != nil {
|
||||
continue
|
||||
if keyStr, ok := key.(string); ok {
|
||||
if keyStr == "memstats" {
|
||||
var m memstats
|
||||
if err := dec.Decode(&m); err != nil {
|
||||
continue
|
||||
}
|
||||
acc.AddFields("influxdb_memstats",
|
||||
map[string]interface{}{
|
||||
"alloc": m.Alloc,
|
||||
"total_alloc": m.TotalAlloc,
|
||||
"sys": m.Sys,
|
||||
"lookups": m.Lookups,
|
||||
"mallocs": m.Mallocs,
|
||||
"frees": m.Frees,
|
||||
"heap_alloc": m.HeapAlloc,
|
||||
"heap_sys": m.HeapSys,
|
||||
"heap_idle": m.HeapIdle,
|
||||
"heap_inuse": m.HeapInuse,
|
||||
"heap_released": m.HeapReleased,
|
||||
"heap_objects": m.HeapObjects,
|
||||
"stack_inuse": m.StackInuse,
|
||||
"stack_sys": m.StackSys,
|
||||
"mspan_inuse": m.MSpanInuse,
|
||||
"mspan_sys": m.MSpanSys,
|
||||
"mcache_inuse": m.MCacheInuse,
|
||||
"mcache_sys": m.MCacheSys,
|
||||
"buck_hash_sys": m.BuckHashSys,
|
||||
"gc_sys": m.GCSys,
|
||||
"other_sys": m.OtherSys,
|
||||
"next_gc": m.NextGC,
|
||||
"last_gc": m.LastGC,
|
||||
"pause_total_ns": m.PauseTotalNs,
|
||||
"num_gc": m.NumGC,
|
||||
"gcc_pu_fraction": m.GCCPUFraction,
|
||||
},
|
||||
map[string]string{
|
||||
"url": url,
|
||||
})
|
||||
}
|
||||
acc.AddFields("influxdb_memstats",
|
||||
map[string]interface{}{
|
||||
"alloc": m.Alloc,
|
||||
"total_alloc": m.TotalAlloc,
|
||||
"sys": m.Sys,
|
||||
"lookups": m.Lookups,
|
||||
"mallocs": m.Mallocs,
|
||||
"frees": m.Frees,
|
||||
"heap_alloc": m.HeapAlloc,
|
||||
"heap_sys": m.HeapSys,
|
||||
"heap_idle": m.HeapIdle,
|
||||
"heap_inuse": m.HeapInuse,
|
||||
"heap_released": m.HeapReleased,
|
||||
"heap_objects": m.HeapObjects,
|
||||
"stack_inuse": m.StackInuse,
|
||||
"stack_sys": m.StackSys,
|
||||
"mspan_inuse": m.MSpanInuse,
|
||||
"mspan_sys": m.MSpanSys,
|
||||
"mcache_inuse": m.MCacheInuse,
|
||||
"mcache_sys": m.MCacheSys,
|
||||
"buck_hash_sys": m.BuckHashSys,
|
||||
"gc_sys": m.GCSys,
|
||||
"other_sys": m.OtherSys,
|
||||
"next_gc": m.NextGC,
|
||||
"last_gc": m.LastGC,
|
||||
"pause_total_ns": m.PauseTotalNs,
|
||||
"num_gc": m.NumGC,
|
||||
"gcc_pu_fraction": m.GCCPUFraction,
|
||||
},
|
||||
map[string]string{
|
||||
"url": url,
|
||||
})
|
||||
}
|
||||
|
||||
// Attempt to parse a whole object into a point.
|
||||
@@ -204,16 +210,16 @@ func (i *InfluxDB) gatherURL(
|
||||
continue
|
||||
}
|
||||
|
||||
if p.Name == "shard" {
|
||||
shardCounter++
|
||||
}
|
||||
|
||||
// If the object was a point, but was not fully initialized,
|
||||
// ignore it and move on.
|
||||
if p.Name == "" || p.Tags == nil || p.Values == nil || len(p.Values) == 0 {
|
||||
continue
|
||||
}
|
||||
|
||||
if p.Name == "shard" {
|
||||
shardCounter++
|
||||
}
|
||||
|
||||
// Add a tag to indicate the source of the data.
|
||||
p.Tags["url"] = url
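The memstats change above also swaps a bare type assertion for the comma-ok form; a minimal self-contained sketch of the difference:

```go
package main

import "fmt"

func main() {
	var key interface{} = "memstats"

	// Comma-ok assertion: ok is false instead of panicking when the
	// underlying type is not string, which is what gatherURL now relies on.
	if keyStr, ok := key.(string); ok {
		fmt.Println("got string:", keyStr)
	}

	// A bare assertion like key.(string) would panic at runtime if key
	// held any other dynamic type.
}
```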
|
||||
|
||||
|
||||
@@ -112,7 +112,7 @@ func TestInfluxDB(t *testing.T) {
|
||||
|
||||
acc.AssertContainsTaggedFields(t, "influxdb",
|
||||
map[string]interface{}{
|
||||
"n_shards": 2,
|
||||
"n_shards": 1,
|
||||
}, map[string]string{})
|
||||
}
|
||||
|
||||
|
||||
@@ -22,7 +22,7 @@ from the same topic in parallel.
|
||||
## Offset (must be either "oldest" or "newest")
|
||||
offset = "oldest"
|
||||
|
||||
## Data format to consume.
|
||||
## Data format to consume.
|
||||
|
||||
## Each data format has its own unique set of configuration options, read
|
||||
## more about them here:
|
||||
@@ -32,11 +32,5 @@ from the same topic in parallel.
|
||||
|
||||
## Testing
|
||||
|
||||
Running integration tests requires running Zookeeper & Kafka. The following
|
||||
commands assume you're on OS X & using [boot2docker](http://boot2docker.io/) or docker-machine through [Docker Toolbox](https://www.docker.com/docker-toolbox).
|
||||
|
||||
To start Kafka & Zookeeper:
|
||||
|
||||
```
|
||||
docker run -d -p 2181:2181 -p 9092:9092 --env ADVERTISED_HOST=`boot2docker ip || docker-machine ip <your_machine_name>` --env ADVERTISED_PORT=9092 spotify/kafka
|
||||
```
|
||||
Running integration tests requires running Zookeeper & Kafka. See Makefile
|
||||
for kafka container command.
|
||||
|
||||
@@ -50,7 +50,7 @@ var sampleConfig = `
|
||||
## an array of Zookeeper connection strings
|
||||
zookeeper_peers = ["localhost:2181"]
|
||||
## Zookeeper Chroot
|
||||
zookeeper_chroot = "/"
|
||||
zookeeper_chroot = ""
|
||||
## the name of the consumer group
|
||||
consumer_group = "telegraf_metrics_consumers"
|
||||
## Offset (must be either "oldest" or "newest")
|
||||
|
||||
91
plugins/inputs/logparser/README.md
Normal file
@@ -0,0 +1,91 @@
|
||||
# logparser Input Plugin
|
||||
|
||||
The logparser plugin streams and parses the given logfiles. Currently it can
only parse "grok" patterns from logfiles; grok patterns themselves also
support regular expressions.
|
||||
|
||||
### Configuration:
|
||||
|
||||
```toml
|
||||
[[inputs.logparser]]
|
||||
## Log files to parse.
|
||||
## These accept standard unix glob matching rules, but with the addition of
|
||||
## ** as a "super asterisk". ie:
|
||||
## /var/log/**.log -> recursively find all .log files in /var/log
|
||||
## /var/log/*/*.log -> find all .log files with a parent dir in /var/log
|
||||
## /var/log/apache.log -> only tail the apache log file
|
||||
files = ["/var/log/influxdb/influxdb.log"]
|
||||
## Read file from beginning.
|
||||
from_beginning = false
|
||||
|
||||
## Parse logstash-style "grok" patterns:
|
||||
## Telegraf builtin parsing patterns: https://goo.gl/dkay10
|
||||
[inputs.logparser.grok]
|
||||
## This is a list of patterns to check the given log file(s) for.
|
||||
## Note that adding patterns here increases processing time. The most
|
||||
## efficient configuration is to have one file & pattern per logparser.
|
||||
patterns = ["%{INFLUXDB_HTTPD_LOG}"]
|
||||
## Full path(s) to custom pattern files.
|
||||
custom_pattern_files = []
|
||||
## Custom patterns can also be defined here. Put one pattern per line.
|
||||
custom_patterns = '''
|
||||
'''
|
||||
```
|
||||
|
||||
> **Note:** The InfluxDB log pattern in the default configuration only works for Influx versions 1.0.0-beta1 or higher.
|
||||
|
||||
## Grok Parser
|
||||
|
||||
The grok parser uses a slightly modified version of logstash "grok" patterns,
|
||||
with the format `%{<capture_syntax>[:<semantic_name>][:<modifier>]}`
|
||||
|
||||
|
||||
Telegraf has many of it's own
|
||||
[built-in patterns](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/logparser/grok/patterns/influx-patterns),
|
||||
as well as supporting
|
||||
[logstash's builtin patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns).
|
||||
|
||||
|
||||
The best way to get acquainted with grok patterns is to read the logstash docs,
|
||||
which are available here:
|
||||
https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html
|
||||
|
||||
|
||||
If you need help building patterns to match your logs,
|
||||
you will find the http://grokdebug.herokuapp.com application quite useful!
|
||||
|
||||
|
||||
By default all named captures are converted into string fields.
|
||||
Modifiers can be used to convert captures to other types or tags.
|
||||
Timestamp modifiers can be used to convert captures to the timestamp of the
|
||||
parsed metric.
|
||||
|
||||
|
||||
- Available modifiers:
|
||||
- string (default if nothing is specified)
|
||||
- int
|
||||
- float
|
||||
- duration (ie, 5.23ms gets converted to int nanoseconds)
|
||||
- tag (converts the field into a tag)
|
||||
- drop (drops the field completely)
|
||||
- Timestamp modifiers:
|
||||
- ts-ansic ("Mon Jan _2 15:04:05 2006")
|
||||
- ts-unix ("Mon Jan _2 15:04:05 MST 2006")
|
||||
- ts-ruby ("Mon Jan 02 15:04:05 -0700 2006")
|
||||
- ts-rfc822 ("02 Jan 06 15:04 MST")
|
||||
- ts-rfc822z ("02 Jan 06 15:04 -0700")
|
||||
- ts-rfc850 ("Monday, 02-Jan-06 15:04:05 MST")
|
||||
- ts-rfc1123 ("Mon, 02 Jan 2006 15:04:05 MST")
|
||||
- ts-rfc1123z ("Mon, 02 Jan 2006 15:04:05 -0700")
|
||||
- ts-rfc3339 ("2006-01-02T15:04:05Z07:00")
|
||||
- ts-rfc3339nano ("2006-01-02T15:04:05.999999999Z07:00")
|
||||
- ts-httpd ("02/Jan/2006:15:04:05 -0700")
|
||||
- ts-epoch (seconds since unix epoch)
|
||||
- ts-epochnano (nanoseconds since unix epoch)
|
||||
- ts-"CUSTOM"
|
||||
|
||||
|
||||
CUSTOM time layouts must be within quotes and be the representation of the
|
||||
"reference time", which is `Mon Jan 2 15:04:05 -0700 MST 2006`
|
||||
See https://golang.org/pkg/time/#Parse for more details.
|
||||
|
||||
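As an illustration of the modifier syntax above — using a hypothetical log format, file path, and pattern name, not anything shipped with the plugin — a custom pattern combining type, tag, and timestamp modifiers could be configured like this:

```toml
[[inputs.logparser]]
  files = ["/var/log/myapp/app.log"]   # hypothetical log file
  [inputs.logparser.grok]
    patterns = ["%{MYAPP_LOG}"]
    custom_patterns = '''
      MYAPP_LOG \[%{HTTPDATE:ts:ts-httpd}\] %{NUMBER:latency:float} %{WORD:status:tag} %{NUMBER:bytes:int}
    '''
```

A line like `[04/Jun/2016:12:41:45 +0100] 1.25 ok 2326` would then yield a float field `latency=1.25`, an integer field `bytes=2326`, a tag `status=ok`, and the bracketed date becomes the metric's timestamp.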
394 plugins/inputs/logparser/grok/grok.go Normal file
@@ -0,0 +1,394 @@
package grok

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"regexp"
	"strconv"
	"strings"
	"time"

	"github.com/vjeantet/grok"

	"github.com/influxdata/telegraf"
)

var timeFormats = map[string]string{
	"ts-ansic":       "Mon Jan _2 15:04:05 2006",
	"ts-unix":        "Mon Jan _2 15:04:05 MST 2006",
	"ts-ruby":        "Mon Jan 02 15:04:05 -0700 2006",
	"ts-rfc822":      "02 Jan 06 15:04 MST",
	"ts-rfc822z":     "02 Jan 06 15:04 -0700", // RFC822 with numeric zone
	"ts-rfc850":      "Monday, 02-Jan-06 15:04:05 MST",
	"ts-rfc1123":     "Mon, 02 Jan 2006 15:04:05 MST",
	"ts-rfc1123z":    "Mon, 02 Jan 2006 15:04:05 -0700", // RFC1123 with numeric zone
	"ts-rfc3339":     "2006-01-02T15:04:05Z07:00",
	"ts-rfc3339nano": "2006-01-02T15:04:05.999999999Z07:00",
	"ts-httpd":       "02/Jan/2006:15:04:05 -0700",
	"ts-epoch":       "EPOCH",
	"ts-epochnano":   "EPOCH_NANO",
}

const (
	INT      = "int"
	TAG      = "tag"
	FLOAT    = "float"
	STRING   = "string"
	DURATION = "duration"
	DROP     = "drop"
)

var (
	// matches named captures that contain a type.
	//   ie,
	//     %{NUMBER:bytes:int}
	//     %{IPORHOST:clientip:tag}
	//     %{HTTPDATE:ts1:ts-http}
	//     %{HTTPDATE:ts2:ts-"02 Jan 06 15:04"}
	typedRe = regexp.MustCompile(`%{\w+:(\w+):(ts-".+"|t?s?-?\w+)}`)
	// matches a plain pattern name. ie, %{NUMBER}
	patternOnlyRe = regexp.MustCompile(`%{(\w+)}`)
)

type Parser struct {
	Patterns []string
	// namedPatterns is a list of internally-assigned names to the patterns
	// specified by the user in Patterns.
	// They will look like:
	//   GROK_INTERNAL_PATTERN_0, GROK_INTERNAL_PATTERN_1, etc.
	namedPatterns      []string
	CustomPatterns     string
	CustomPatternFiles []string
	Measurement        string

	// typeMap is a map of patterns -> capture name -> modifier,
	//   ie, {
	//         "%{TESTLOG}":
	//             {
	//               "bytes": "int",
	//               "clientip": "tag"
	//             }
	//       }
	typeMap map[string]map[string]string
	// tsMap is a map of patterns -> capture name -> timestamp layout.
	//   ie, {
	//         "%{TESTLOG}":
	//             {
	//               "httptime": "02/Jan/2006:15:04:05 -0700"
	//             }
	//       }
	tsMap map[string]map[string]string
	// patterns is a map of all of the parsed patterns from CustomPatterns
	// and CustomPatternFiles.
	//   ie, {
	//         "DURATION":      "%{NUMBER}[nuµm]?s"
	//         "RESPONSE_CODE": "%{NUMBER:rc:tag}"
	//       }
	patterns map[string]string

	g        *grok.Grok
	tsModder *tsModder
}

func (p *Parser) Compile() error {
	p.typeMap = make(map[string]map[string]string)
	p.tsMap = make(map[string]map[string]string)
	p.patterns = make(map[string]string)
	p.tsModder = &tsModder{}
	var err error
	p.g, err = grok.NewWithConfig(&grok.Config{NamedCapturesOnly: true})
	if err != nil {
		return err
	}

	// Give Patterns fake names so that they can be treated as named
	// "custom patterns"
	p.namedPatterns = make([]string, len(p.Patterns))
	for i, pattern := range p.Patterns {
		name := fmt.Sprintf("GROK_INTERNAL_PATTERN_%d", i)
		p.CustomPatterns += "\n" + name + " " + pattern + "\n"
		p.namedPatterns[i] = "%{" + name + "}"
	}

	// Combine user-supplied CustomPatterns with DEFAULT_PATTERNS and parse
	// them together as the same type of pattern.
	p.CustomPatterns = DEFAULT_PATTERNS + p.CustomPatterns
	if len(p.CustomPatterns) != 0 {
		scanner := bufio.NewScanner(strings.NewReader(p.CustomPatterns))
		p.addCustomPatterns(scanner)
	}

	// Parse any custom pattern files supplied.
	for _, filename := range p.CustomPatternFiles {
		file, err := os.Open(filename)
		if err != nil {
			return err
		}

		scanner := bufio.NewScanner(bufio.NewReader(file))
		p.addCustomPatterns(scanner)
	}

	if p.Measurement == "" {
		p.Measurement = "logparser_grok"
	}

	return p.compileCustomPatterns()
}

func (p *Parser) ParseLine(line string) (telegraf.Metric, error) {
	var err error
	var values map[string]string
	// the matching pattern string
	var patternName string
	for _, pattern := range p.namedPatterns {
		if values, err = p.g.Parse(pattern, line); err != nil {
			return nil, err
		}
		if len(values) != 0 {
			patternName = pattern
			break
		}
	}

	if len(values) == 0 {
		return nil, nil
	}

	fields := make(map[string]interface{})
	tags := make(map[string]string)
	timestamp := time.Now()
	for k, v := range values {
		if k == "" || v == "" {
			continue
		}

		var t string
		// check if pattern has some modifiers
		if types, ok := p.typeMap[patternName]; ok {
			t = types[k]
		}
		// if we didn't find a modifier, check if we have a timestamp layout
		if t == "" {
			if ts, ok := p.tsMap[patternName]; ok {
				// check if the modifier is a timestamp layout
				if layout, ok := ts[k]; ok {
					t = layout
				}
			}
		}
		// if we didn't find a type OR timestamp modifier, assume string
		if t == "" {
			t = STRING
		}

		switch t {
		case INT:
			iv, err := strconv.ParseInt(v, 10, 64)
			if err != nil {
				log.Printf("ERROR parsing %s to int: %s", v, err)
			} else {
				fields[k] = iv
			}
		case FLOAT:
			fv, err := strconv.ParseFloat(v, 64)
			if err != nil {
				log.Printf("ERROR parsing %s to float: %s", v, err)
			} else {
				fields[k] = fv
			}
		case DURATION:
			d, err := time.ParseDuration(v)
			if err != nil {
				log.Printf("ERROR parsing %s to duration: %s", v, err)
			} else {
				fields[k] = int64(d)
			}
		case TAG:
			tags[k] = v
		case STRING:
			fields[k] = strings.Trim(v, `"`)
		case "EPOCH":
			iv, err := strconv.ParseInt(v, 10, 64)
			if err != nil {
				log.Printf("ERROR parsing %s to int: %s", v, err)
			} else {
				timestamp = time.Unix(iv, 0)
			}
		case "EPOCH_NANO":
			iv, err := strconv.ParseInt(v, 10, 64)
			if err != nil {
				log.Printf("ERROR parsing %s to int: %s", v, err)
			} else {
				timestamp = time.Unix(0, iv)
			}
		case DROP:
			// goodbye!
		default:
			ts, err := time.Parse(t, v)
			if err == nil {
				timestamp = ts
			} else {
				log.Printf("ERROR parsing %s to time layout [%s]: %s", v, t, err)
			}
		}
	}

	return telegraf.NewMetric(p.Measurement, tags, fields, p.tsModder.tsMod(timestamp))
}

func (p *Parser) addCustomPatterns(scanner *bufio.Scanner) {
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if len(line) > 0 && line[0] != '#' {
			names := strings.SplitN(line, " ", 2)
			p.patterns[names[0]] = names[1]
		}
	}
}

func (p *Parser) compileCustomPatterns() error {
	var err error
	// check if the pattern contains a subpattern that is already defined
	// replace it with the subpattern for modifier inheritance.
	for i := 0; i < 2; i++ {
		for name, pattern := range p.patterns {
			subNames := patternOnlyRe.FindAllStringSubmatch(pattern, -1)
			for _, subName := range subNames {
				if subPattern, ok := p.patterns[subName[1]]; ok {
					pattern = strings.Replace(pattern, subName[0], subPattern, 1)
				}
			}
			p.patterns[name] = pattern
		}
	}

	// check if pattern contains modifiers. Parse them out if it does.
	for name, pattern := range p.patterns {
		if typedRe.MatchString(pattern) {
			// this pattern has modifiers, so parse out the modifiers
			pattern, err = p.parseTypedCaptures(name, pattern)
			if err != nil {
				return err
			}
			p.patterns[name] = pattern
		}
	}

	return p.g.AddPatternsFromMap(p.patterns)
}

// parseTypedCaptures parses the capture types, and then deletes the type from
// the line so that it is a valid "grok" pattern again.
//   ie,
//     %{NUMBER:bytes:int}      => %{NUMBER:bytes}      (stores %{NUMBER}->bytes->int)
//     %{IPORHOST:clientip:tag} => %{IPORHOST:clientip} (stores %{IPORHOST}->clientip->tag)
func (p *Parser) parseTypedCaptures(name, pattern string) (string, error) {
	matches := typedRe.FindAllStringSubmatch(pattern, -1)

	// grab the name of the capture pattern
	patternName := "%{" + name + "}"
	// create type map for this pattern
	p.typeMap[patternName] = make(map[string]string)
	p.tsMap[patternName] = make(map[string]string)

	// boolean to verify that each pattern only has a single ts- data type.
	hasTimestamp := false
	for _, match := range matches {
		// regex capture 1 is the name of the capture
		// regex capture 2 is the type of the capture
		if strings.HasPrefix(match[2], "ts-") {
			if hasTimestamp {
				return pattern, fmt.Errorf("logparser pattern compile error: "+
					"Each pattern is allowed only one named "+
					"timestamp data type. pattern: %s", pattern)
			}
			if f, ok := timeFormats[match[2]]; ok {
				p.tsMap[patternName][match[1]] = f
			} else {
				p.tsMap[patternName][match[1]] = strings.TrimSuffix(strings.TrimPrefix(match[2], `ts-"`), `"`)
			}
			hasTimestamp = true
		} else {
			p.typeMap[patternName][match[1]] = match[2]
		}

		// the modifier is not a valid part of a "grok" pattern, so remove it
		// from the pattern.
		pattern = strings.Replace(pattern, ":"+match[2]+"}", "}", 1)
	}

	return pattern, nil
}

// tsModder is a struct for incrementing identical timestamps of log lines
// so that we don't push identical metrics that will get overwritten.
type tsModder struct {
	dupe     time.Time
	last     time.Time
	incr     time.Duration
	incrn    time.Duration
	rollover time.Duration
}

// tsMod increments the given timestamp one unit more from the previous
// duplicate timestamp.
// the increment unit is determined as the next smallest time unit below the
// most significant time unit of ts.
//   ie, if the input is at ms precision, it will increment it 1µs.
func (t *tsModder) tsMod(ts time.Time) time.Time {
	defer func() { t.last = ts }()
	// don't mod the time if we don't need to
	if t.last.IsZero() || ts.IsZero() {
		t.incrn = 0
		t.rollover = 0
		return ts
	}
	if !ts.Equal(t.last) && !ts.Equal(t.dupe) {
		t.incr = 0
		t.incrn = 0
		t.rollover = 0
		return ts
	}

	if ts.Equal(t.last) {
		t.dupe = ts
	}

	if ts.Equal(t.dupe) && t.incr == time.Duration(0) {
		tsNano := ts.UnixNano()

		d := int64(10)
		counter := 1
		for {
			a := tsNano % d
			if a > 0 {
				break
			}
			d = d * 10
			counter++
		}

		switch {
		case counter <= 6:
			t.incr = time.Nanosecond
		case counter <= 9:
			t.incr = time.Microsecond
		case counter > 9:
			t.incr = time.Millisecond
		}
	}

	t.incrn++
	if t.incrn == 999 && t.incr > time.Nanosecond {
		t.rollover = t.incr * t.incrn
		t.incrn = 1
		t.incr = t.incr / 1000
		if t.incr < time.Nanosecond {
			t.incr = time.Nanosecond
		}
	}
	return ts.Add(t.incr*t.incrn + t.rollover)
}
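As an orientation aid (an editorial sketch, not part of this diff), here is a minimal way to drive the parser above from Go. The import path follows the file location shown, and the sample line is the Common Log Format example used in the tests that follow:

```go
package main

import (
	"fmt"
	"log"

	"github.com/influxdata/telegraf/plugins/inputs/logparser/grok"
)

func main() {
	// COMMON_LOG_FORMAT ships in DEFAULT_PATTERNS (see influx_patterns.go below).
	p := &grok.Parser{Patterns: []string{"%{COMMON_LOG_FORMAT}"}}
	if err := p.Compile(); err != nil {
		log.Fatal(err)
	}

	m, err := p.ParseLine(`127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326`)
	if err != nil {
		log.Fatal(err)
	}
	if m == nil {
		// ParseLine returns nil, nil when no pattern matches the line.
		log.Fatal("line did not match any pattern")
	}
	// e.g. logparser_grok map[resp_bytes:2326 ...] map[resp_code:200 verb:GET]
	fmt.Println(m.Name(), m.Fields(), m.Tags())
}
```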
564 plugins/inputs/logparser/grok/grok_test.go Normal file
@@ -0,0 +1,564 @@
package grok

import (
	"testing"
	"time"

	"github.com/influxdata/telegraf"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

var benchM telegraf.Metric

func Benchmark_ParseLine_CommonLogFormat(b *testing.B) {
	p := &Parser{
		Patterns: []string{"%{COMMON_LOG_FORMAT}"},
	}
	p.Compile()

	var m telegraf.Metric
	for n := 0; n < b.N; n++ {
		m, _ = p.ParseLine(`127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326`)
	}
	benchM = m
}

func Benchmark_ParseLine_CombinedLogFormat(b *testing.B) {
	p := &Parser{
		Patterns: []string{"%{COMBINED_LOG_FORMAT}"},
	}
	p.Compile()

	var m telegraf.Metric
	for n := 0; n < b.N; n++ {
		m, _ = p.ParseLine(`127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326 "-" "Mozilla"`)
	}
	benchM = m
}

func Benchmark_ParseLine_InfluxLog(b *testing.B) {
	p := &Parser{
		Patterns: []string{"%{INFLUXDB_HTTPD_LOG}"},
	}
	p.Compile()

	var m telegraf.Metric
	for n := 0; n < b.N; n++ {
		m, _ = p.ParseLine(`[httpd] 192.168.1.1 - - [14/Jun/2016:11:33:29 +0100] "POST /write?consistency=any&db=telegraf&precision=ns&rp= HTTP/1.1" 204 0 "-" "InfluxDBClient" 6f61bc44-321b-11e6-8050-000000000000 2513`)
	}
	benchM = m
}

func Benchmark_ParseLine_InfluxLog_NoMatch(b *testing.B) {
	p := &Parser{
		Patterns: []string{"%{INFLUXDB_HTTPD_LOG}"},
	}
	p.Compile()

	var m telegraf.Metric
	for n := 0; n < b.N; n++ {
		m, _ = p.ParseLine(`[retention] 2016/06/14 14:38:24 retention policy shard deletion check commencing`)
	}
	benchM = m
}

func Benchmark_ParseLine_CustomPattern(b *testing.B) {
	p := &Parser{
		Patterns: []string{"%{TEST_LOG_A}", "%{TEST_LOG_B}"},
		CustomPatterns: `
			DURATION %{NUMBER}[nuµm]?s
			RESPONSE_CODE %{NUMBER:response_code:tag}
			RESPONSE_TIME %{DURATION:response_time:duration}
			TEST_LOG_A %{NUMBER:myfloat:float} %{RESPONSE_CODE} %{IPORHOST:clientip} %{RESPONSE_TIME}
		`,
	}
	p.Compile()

	var m telegraf.Metric
	for n := 0; n < b.N; n++ {
		m, _ = p.ParseLine(`[04/Jun/2016:12:41:45 +0100] 1.25 200 192.168.1.1 5.432µs 101`)
	}
	benchM = m
}

func TestMeasurementName(t *testing.T) {
	p := &Parser{
		Measurement: "my_web_log",
		Patterns:    []string{"%{COMMON_LOG_FORMAT}"},
	}
	assert.NoError(t, p.Compile())

	// Parse an influxdb POST request
	m, err := p.ParseLine(`127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326`)
	require.NotNil(t, m)
	assert.NoError(t, err)
	assert.Equal(t,
		map[string]interface{}{
			"resp_bytes":   int64(2326),
			"auth":         "frank",
			"client_ip":    "127.0.0.1",
			"http_version": float64(1.0),
			"ident":        "user-identifier",
			"request":      "/apache_pb.gif",
		},
		m.Fields())
	assert.Equal(t, map[string]string{"verb": "GET", "resp_code": "200"}, m.Tags())
	assert.Equal(t, "my_web_log", m.Name())
}

func TestBuiltinInfluxdbHttpd(t *testing.T) {
	p := &Parser{
		Patterns: []string{"%{INFLUXDB_HTTPD_LOG}"},
	}
	assert.NoError(t, p.Compile())

	// Parse an influxdb POST request
	m, err := p.ParseLine(`[httpd] ::1 - - [14/Jun/2016:11:33:29 +0100] "POST /write?consistency=any&db=telegraf&precision=ns&rp= HTTP/1.1" 204 0 "-" "InfluxDBClient" 6f61bc44-321b-11e6-8050-000000000000 2513`)
	require.NotNil(t, m)
	assert.NoError(t, err)
	assert.Equal(t,
		map[string]interface{}{
			"resp_bytes":       int64(0),
			"auth":             "-",
			"client_ip":        "::1",
			"http_version":     float64(1.1),
			"ident":            "-",
			"referrer":         "-",
			"request":          "/write?consistency=any&db=telegraf&precision=ns&rp=",
			"response_time_us": int64(2513),
			"agent":            "InfluxDBClient",
		},
		m.Fields())
	assert.Equal(t, map[string]string{"verb": "POST", "resp_code": "204"}, m.Tags())

	// Parse an influxdb GET request
	m, err = p.ParseLine(`[httpd] ::1 - - [14/Jun/2016:12:10:02 +0100] "GET /query?db=telegraf&q=SELECT+bytes%2Cresponse_time_us+FROM+logparser_grok+WHERE+http_method+%3D+%27GET%27+AND+response_time_us+%3E+0+AND+time+%3E+now%28%29+-+1h HTTP/1.1" 200 578 "http://localhost:8083/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.84 Safari/537.36" 8a3806f1-3220-11e6-8006-000000000000 988`)
	require.NotNil(t, m)
	assert.NoError(t, err)
	assert.Equal(t,
		map[string]interface{}{
			"resp_bytes":       int64(578),
			"auth":             "-",
			"client_ip":        "::1",
			"http_version":     float64(1.1),
			"ident":            "-",
			"referrer":         "http://localhost:8083/",
			"request":          "/query?db=telegraf&q=SELECT+bytes%2Cresponse_time_us+FROM+logparser_grok+WHERE+http_method+%3D+%27GET%27+AND+response_time_us+%3E+0+AND+time+%3E+now%28%29+-+1h",
			"response_time_us": int64(988),
			"agent":            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.84 Safari/537.36",
		},
		m.Fields())
	assert.Equal(t, map[string]string{"verb": "GET", "resp_code": "200"}, m.Tags())
}

// common log format
// 127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326
func TestBuiltinCommonLogFormat(t *testing.T) {
	p := &Parser{
		Patterns: []string{"%{COMMON_LOG_FORMAT}"},
	}
	assert.NoError(t, p.Compile())

	// Parse an influxdb POST request
	m, err := p.ParseLine(`127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326`)
	require.NotNil(t, m)
	assert.NoError(t, err)
	assert.Equal(t,
		map[string]interface{}{
			"resp_bytes":   int64(2326),
			"auth":         "frank",
			"client_ip":    "127.0.0.1",
			"http_version": float64(1.0),
			"ident":        "user-identifier",
			"request":      "/apache_pb.gif",
		},
		m.Fields())
	assert.Equal(t, map[string]string{"verb": "GET", "resp_code": "200"}, m.Tags())
}

// combined log format
// 127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326 "-" "Mozilla"
func TestBuiltinCombinedLogFormat(t *testing.T) {
	p := &Parser{
		Patterns: []string{"%{COMBINED_LOG_FORMAT}"},
	}
	assert.NoError(t, p.Compile())

	// Parse an influxdb POST request
	m, err := p.ParseLine(`127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326 "-" "Mozilla"`)
	require.NotNil(t, m)
	assert.NoError(t, err)
	assert.Equal(t,
		map[string]interface{}{
			"resp_bytes":   int64(2326),
			"auth":         "frank",
			"client_ip":    "127.0.0.1",
			"http_version": float64(1.0),
			"ident":        "user-identifier",
			"request":      "/apache_pb.gif",
			"referrer":     "-",
			"agent":        "Mozilla",
		},
		m.Fields())
	assert.Equal(t, map[string]string{"verb": "GET", "resp_code": "200"}, m.Tags())
}

func TestCompileStringAndParse(t *testing.T) {
	p := &Parser{
		Patterns: []string{"%{TEST_LOG_A}"},
		CustomPatterns: `
			DURATION %{NUMBER}[nuµm]?s
			RESPONSE_CODE %{NUMBER:response_code:tag}
			RESPONSE_TIME %{DURATION:response_time:duration}
			TEST_LOG_A %{NUMBER:myfloat:float} %{RESPONSE_CODE} %{IPORHOST:clientip} %{RESPONSE_TIME}
		`,
	}
	assert.NoError(t, p.Compile())

	metricA, err := p.ParseLine(`1.25 200 192.168.1.1 5.432µs`)
	require.NotNil(t, metricA)
	assert.NoError(t, err)
	assert.Equal(t,
		map[string]interface{}{
			"clientip":      "192.168.1.1",
			"myfloat":       float64(1.25),
			"response_time": int64(5432),
		},
		metricA.Fields())
	assert.Equal(t, map[string]string{"response_code": "200"}, metricA.Tags())
}

func TestCompileErrorsOnInvalidPattern(t *testing.T) {
	p := &Parser{
		Patterns: []string{"%{TEST_LOG_A}", "%{TEST_LOG_B}"},
		CustomPatterns: `
			DURATION %{NUMBER}[nuµm]?s
			RESPONSE_CODE %{NUMBER:response_code:tag}
			RESPONSE_TIME %{DURATION:response_time:duration}
			TEST_LOG_A %{NUMBER:myfloat:float} %{RESPONSE_CODE} %{IPORHOST:clientip} %{RESPONSE_TIME}
		`,
	}
	assert.Error(t, p.Compile())

	metricA, _ := p.ParseLine(`1.25 200 192.168.1.1 5.432µs`)
	require.Nil(t, metricA)
}

func TestParsePatternsWithoutCustom(t *testing.T) {
	p := &Parser{
		Patterns: []string{"%{POSINT:ts:ts-epochnano} response_time=%{POSINT:response_time:int} mymetric=%{NUMBER:metric:float}"},
	}
	assert.NoError(t, p.Compile())

	metricA, err := p.ParseLine(`1466004605359052000 response_time=20821 mymetric=10890.645`)
	require.NotNil(t, metricA)
	assert.NoError(t, err)
	assert.Equal(t,
		map[string]interface{}{
			"response_time": int64(20821),
			"metric":        float64(10890.645),
		},
		metricA.Fields())
	assert.Equal(t, map[string]string{}, metricA.Tags())
	assert.Equal(t, time.Unix(0, 1466004605359052000), metricA.Time())
}

func TestParseEpochNano(t *testing.T) {
	p := &Parser{
		Patterns: []string{"%{MYAPP}"},
		CustomPatterns: `
			MYAPP %{POSINT:ts:ts-epochnano} response_time=%{POSINT:response_time:int} mymetric=%{NUMBER:metric:float}
		`,
	}
	assert.NoError(t, p.Compile())

	metricA, err := p.ParseLine(`1466004605359052000 response_time=20821 mymetric=10890.645`)
	require.NotNil(t, metricA)
	assert.NoError(t, err)
	assert.Equal(t,
		map[string]interface{}{
			"response_time": int64(20821),
			"metric":        float64(10890.645),
		},
		metricA.Fields())
	assert.Equal(t, map[string]string{}, metricA.Tags())
	assert.Equal(t, time.Unix(0, 1466004605359052000), metricA.Time())
}

func TestParseEpoch(t *testing.T) {
	p := &Parser{
		Patterns: []string{"%{MYAPP}"},
		CustomPatterns: `
			MYAPP %{POSINT:ts:ts-epoch} response_time=%{POSINT:response_time:int} mymetric=%{NUMBER:metric:float}
		`,
	}
	assert.NoError(t, p.Compile())

	metricA, err := p.ParseLine(`1466004605 response_time=20821 mymetric=10890.645`)
	require.NotNil(t, metricA)
	assert.NoError(t, err)
	assert.Equal(t,
		map[string]interface{}{
			"response_time": int64(20821),
			"metric":        float64(10890.645),
		},
		metricA.Fields())
	assert.Equal(t, map[string]string{}, metricA.Tags())
	assert.Equal(t, time.Unix(1466004605, 0), metricA.Time())
}

func TestParseEpochErrors(t *testing.T) {
	p := &Parser{
		Patterns: []string{"%{MYAPP}"},
		CustomPatterns: `
			MYAPP %{WORD:ts:ts-epoch} response_time=%{POSINT:response_time:int} mymetric=%{NUMBER:metric:float}
		`,
	}
	assert.NoError(t, p.Compile())

	_, err := p.ParseLine(`foobar response_time=20821 mymetric=10890.645`)
	assert.NoError(t, err)

	p = &Parser{
		Patterns: []string{"%{MYAPP}"},
		CustomPatterns: `
			MYAPP %{WORD:ts:ts-epochnano} response_time=%{POSINT:response_time:int} mymetric=%{NUMBER:metric:float}
		`,
	}
	assert.NoError(t, p.Compile())

	_, err = p.ParseLine(`foobar response_time=20821 mymetric=10890.645`)
	assert.NoError(t, err)
}

func TestCompileFileAndParse(t *testing.T) {
	p := &Parser{
		Patterns:           []string{"%{TEST_LOG_A}", "%{TEST_LOG_B}"},
		CustomPatternFiles: []string{"./testdata/test-patterns"},
	}
	assert.NoError(t, p.Compile())

	metricA, err := p.ParseLine(`[04/Jun/2016:12:41:45 +0100] 1.25 200 192.168.1.1 5.432µs 101`)
	require.NotNil(t, metricA)
	assert.NoError(t, err)
	assert.Equal(t,
		map[string]interface{}{
			"clientip":      "192.168.1.1",
			"myfloat":       float64(1.25),
			"response_time": int64(5432),
			"myint":         int64(101),
		},
		metricA.Fields())
	assert.Equal(t, map[string]string{"response_code": "200"}, metricA.Tags())
	assert.Equal(t,
		time.Date(2016, time.June, 4, 12, 41, 45, 0, time.FixedZone("foo", 60*60)).Nanosecond(),
		metricA.Time().Nanosecond())

	metricB, err := p.ParseLine(`[04/06/2016--12:41:45] 1.25 mystring dropme nomodifier`)
	require.NotNil(t, metricB)
	assert.NoError(t, err)
	assert.Equal(t,
		map[string]interface{}{
			"myfloat":    1.25,
			"mystring":   "mystring",
			"nomodifier": "nomodifier",
		},
		metricB.Fields())
	assert.Equal(t, map[string]string{}, metricB.Tags())
	assert.Equal(t,
		time.Date(2016, time.June, 4, 12, 41, 45, 0, time.FixedZone("foo", 60*60)).Nanosecond(),
		metricB.Time().Nanosecond())
}

func TestCompileNoModifiersAndParse(t *testing.T) {
	p := &Parser{
		Patterns: []string{"%{TEST_LOG_C}"},
		CustomPatterns: `
			DURATION %{NUMBER}[nuµm]?s
			TEST_LOG_C %{NUMBER:myfloat} %{NUMBER} %{IPORHOST:clientip} %{DURATION:rt}
		`,
	}
	assert.NoError(t, p.Compile())

	metricA, err := p.ParseLine(`1.25 200 192.168.1.1 5.432µs`)
	require.NotNil(t, metricA)
	assert.NoError(t, err)
	assert.Equal(t,
		map[string]interface{}{
			"clientip": "192.168.1.1",
			"myfloat":  "1.25",
			"rt":       "5.432µs",
		},
		metricA.Fields())
	assert.Equal(t, map[string]string{}, metricA.Tags())
}

func TestCompileNoNamesAndParse(t *testing.T) {
	p := &Parser{
		Patterns: []string{"%{TEST_LOG_C}"},
		CustomPatterns: `
			DURATION %{NUMBER}[nuµm]?s
			TEST_LOG_C %{NUMBER} %{NUMBER} %{IPORHOST} %{DURATION}
		`,
	}
	assert.NoError(t, p.Compile())

	metricA, err := p.ParseLine(`1.25 200 192.168.1.1 5.432µs`)
	require.Nil(t, metricA)
	assert.NoError(t, err)
}

func TestParseNoMatch(t *testing.T) {
	p := &Parser{
		Patterns:           []string{"%{TEST_LOG_A}", "%{TEST_LOG_B}"},
		CustomPatternFiles: []string{"./testdata/test-patterns"},
	}
	assert.NoError(t, p.Compile())

	metricA, err := p.ParseLine(`[04/Jun/2016:12:41:45 +0100] notnumber 200 192.168.1.1 5.432µs 101`)
	assert.NoError(t, err)
	assert.Nil(t, metricA)
}

func TestCompileErrors(t *testing.T) {
	// Compile fails because there are multiple timestamps:
	p := &Parser{
		Patterns: []string{"%{TEST_LOG_A}", "%{TEST_LOG_B}"},
		CustomPatterns: `
			TEST_LOG_A %{HTTPDATE:ts1:ts-httpd} %{HTTPDATE:ts2:ts-httpd} %{NUMBER:mynum:int}
		`,
	}
	assert.Error(t, p.Compile())

	// Compile fails because file doesn't exist:
	p = &Parser{
		Patterns:           []string{"%{TEST_LOG_A}", "%{TEST_LOG_B}"},
		CustomPatternFiles: []string{"/tmp/foo/bar/baz"},
	}
	assert.Error(t, p.Compile())
}

func TestParseErrors(t *testing.T) {
	// Parse fails because the pattern doesn't exist
	p := &Parser{
		Patterns: []string{"%{TEST_LOG_B}"},
		CustomPatterns: `
			TEST_LOG_A %{HTTPDATE:ts:ts-httpd} %{WORD:myword:int} %{}
		`,
	}
	assert.Error(t, p.Compile())
	_, err := p.ParseLine(`[04/Jun/2016:12:41:45 +0100] notnumber 200 192.168.1.1 5.432µs 101`)
	assert.Error(t, err)

	// Parse fails because myword is not an int
	p = &Parser{
		Patterns: []string{"%{TEST_LOG_A}"},
		CustomPatterns: `
			TEST_LOG_A %{HTTPDATE:ts:ts-httpd} %{WORD:myword:int}
		`,
	}
	assert.NoError(t, p.Compile())
	_, err = p.ParseLine(`04/Jun/2016:12:41:45 +0100 notnumber`)
	assert.Error(t, err)

	// Parse fails because myword is not a float
	p = &Parser{
		Patterns: []string{"%{TEST_LOG_A}"},
		CustomPatterns: `
			TEST_LOG_A %{HTTPDATE:ts:ts-httpd} %{WORD:myword:float}
		`,
	}
	assert.NoError(t, p.Compile())
	_, err = p.ParseLine(`04/Jun/2016:12:41:45 +0100 notnumber`)
	assert.Error(t, err)

	// Parse fails because myword is not a duration
	p = &Parser{
		Patterns: []string{"%{TEST_LOG_A}"},
		CustomPatterns: `
			TEST_LOG_A %{HTTPDATE:ts:ts-httpd} %{WORD:myword:duration}
		`,
	}
	assert.NoError(t, p.Compile())
	_, err = p.ParseLine(`04/Jun/2016:12:41:45 +0100 notnumber`)
	assert.Error(t, err)

	// Parse fails because the time layout is wrong.
	p = &Parser{
		Patterns: []string{"%{TEST_LOG_A}"},
		CustomPatterns: `
			TEST_LOG_A %{HTTPDATE:ts:ts-unix} %{WORD:myword:duration}
		`,
	}
	assert.NoError(t, p.Compile())
	_, err = p.ParseLine(`04/Jun/2016:12:41:45 +0100 notnumber`)
	assert.Error(t, err)
}

func TestTsModder(t *testing.T) {
	tsm := &tsModder{}

	reftime := time.Date(2006, time.December, 1, 1, 1, 1, int(time.Millisecond), time.UTC)
	modt := tsm.tsMod(reftime)
	assert.Equal(t, reftime, modt)
	modt = tsm.tsMod(reftime)
	assert.Equal(t, reftime.Add(time.Microsecond*1), modt)
	modt = tsm.tsMod(reftime)
	assert.Equal(t, reftime.Add(time.Microsecond*2), modt)
	modt = tsm.tsMod(reftime)
	assert.Equal(t, reftime.Add(time.Microsecond*3), modt)

	reftime = time.Date(2006, time.December, 1, 1, 1, 1, int(time.Microsecond), time.UTC)
	modt = tsm.tsMod(reftime)
	assert.Equal(t, reftime, modt)
	modt = tsm.tsMod(reftime)
	assert.Equal(t, reftime.Add(time.Nanosecond*1), modt)
	modt = tsm.tsMod(reftime)
	assert.Equal(t, reftime.Add(time.Nanosecond*2), modt)
	modt = tsm.tsMod(reftime)
	assert.Equal(t, reftime.Add(time.Nanosecond*3), modt)

	reftime = time.Date(2006, time.December, 1, 1, 1, 1, int(time.Microsecond)*999, time.UTC)
	modt = tsm.tsMod(reftime)
	assert.Equal(t, reftime, modt)
	modt = tsm.tsMod(reftime)
	assert.Equal(t, reftime.Add(time.Nanosecond*1), modt)
	modt = tsm.tsMod(reftime)
	assert.Equal(t, reftime.Add(time.Nanosecond*2), modt)
	modt = tsm.tsMod(reftime)
	assert.Equal(t, reftime.Add(time.Nanosecond*3), modt)

	reftime = time.Date(2006, time.December, 1, 1, 1, 1, 0, time.UTC)
	modt = tsm.tsMod(reftime)
	assert.Equal(t, reftime, modt)
	modt = tsm.tsMod(reftime)
	assert.Equal(t, reftime.Add(time.Millisecond*1), modt)
	modt = tsm.tsMod(reftime)
	assert.Equal(t, reftime.Add(time.Millisecond*2), modt)
	modt = tsm.tsMod(reftime)
	assert.Equal(t, reftime.Add(time.Millisecond*3), modt)

	reftime = time.Time{}
	modt = tsm.tsMod(reftime)
	assert.Equal(t, reftime, modt)
}

func TestTsModder_Rollover(t *testing.T) {
	tsm := &tsModder{}

	reftime := time.Date(2006, time.December, 1, 1, 1, 1, int(time.Millisecond), time.UTC)
	modt := tsm.tsMod(reftime)
	for i := 1; i < 1000; i++ {
		modt = tsm.tsMod(reftime)
	}
	assert.Equal(t, reftime.Add(time.Microsecond*999+time.Nanosecond), modt)

	reftime = time.Date(2006, time.December, 1, 1, 1, 1, int(time.Microsecond), time.UTC)
	modt = tsm.tsMod(reftime)
	for i := 1; i < 1001; i++ {
		modt = tsm.tsMod(reftime)
	}
	assert.Equal(t, reftime.Add(time.Nanosecond*1000), modt)
}
80 plugins/inputs/logparser/grok/influx_patterns.go Normal file
@@ -0,0 +1,80 @@
package grok

// THIS SHOULD BE KEPT IN-SYNC WITH patterns/influx-patterns
const DEFAULT_PATTERNS = `
# Captures are a slightly modified version of logstash "grok" patterns, with
# the format %{<capture syntax>[:<semantic name>][:<modifier>]}
# By default all named captures are converted into string fields.
# Modifiers can be used to convert captures to other types or tags.
# Timestamp modifiers can be used to convert captures to the timestamp of the
# parsed metric.

# View logstash grok pattern docs here:
#   https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html
# All default logstash patterns are supported, these can be viewed here:
#   https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns

# Available modifiers:
#   string   (default if nothing is specified)
#   int
#   float
#   duration (ie, 5.23ms gets converted to int nanoseconds)
#   tag      (converts the field into a tag)
#   drop     (drops the field completely)
# Timestamp modifiers:
#   ts-ansic       ("Mon Jan _2 15:04:05 2006")
#   ts-unix        ("Mon Jan _2 15:04:05 MST 2006")
#   ts-ruby        ("Mon Jan 02 15:04:05 -0700 2006")
#   ts-rfc822      ("02 Jan 06 15:04 MST")
#   ts-rfc822z     ("02 Jan 06 15:04 -0700")
#   ts-rfc850      ("Monday, 02-Jan-06 15:04:05 MST")
#   ts-rfc1123     ("Mon, 02 Jan 2006 15:04:05 MST")
#   ts-rfc1123z    ("Mon, 02 Jan 2006 15:04:05 -0700")
#   ts-rfc3339     ("2006-01-02T15:04:05Z07:00")
#   ts-rfc3339nano ("2006-01-02T15:04:05.999999999Z07:00")
#   ts-httpd       ("02/Jan/2006:15:04:05 -0700")
#   ts-epoch       (seconds since unix epoch)
#   ts-epochnano   (nanoseconds since unix epoch)
#   ts-"CUSTOM"
# CUSTOM time layouts must be within quotes and be the representation of the
# "reference time", which is Mon Jan 2 15:04:05 -0700 MST 2006
# See https://golang.org/pkg/time/#Parse for more details.

# Example log file pattern, example log looks like this:
#   [04/Jun/2016:12:41:45 +0100] 1.25 200 192.168.1.1 5.432µs
# Breakdown of the DURATION pattern below:
#   NUMBER is a builtin logstash grok pattern matching float & int numbers.
#   [nuµm]? is a regex specifying 0 or 1 of the characters within brackets.
#   s is also regex, this pattern must end in "s".
# so DURATION will match something like '5.324ms' or '6.1µs' or '10s'
DURATION %{NUMBER}[nuµm]?s
RESPONSE_CODE %{NUMBER:response_code:tag}
RESPONSE_TIME %{DURATION:response_time_ns:duration}
EXAMPLE_LOG \[%{HTTPDATE:ts:ts-httpd}\] %{NUMBER:myfloat:float} %{RESPONSE_CODE} %{IPORHOST:clientip} %{RESPONSE_TIME}

# Wider-ranging username matching vs. logstash built-in %{USER}
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}

##
## COMMON LOG PATTERNS
##

# InfluxDB log patterns
CLIENT (?:%{IPORHOST}|%{HOSTPORT}|::1)
INFLUXDB_HTTPD_LOG \[httpd\] %{COMBINED_LOG_FORMAT} %{UUID:uuid:drop} %{NUMBER:response_time_us:int}

# apache & nginx logs, this is also known as the "common log format"
#   see https://en.wikipedia.org/wiki/Common_Log_Format
COMMON_LOG_FORMAT %{CLIENT:client_ip} %{NGUSER:ident} %{NGUSER:auth} \[%{HTTPDATE:ts:ts-httpd}\] "(?:%{WORD:verb:tag} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http_version:float})?|%{DATA})" %{NUMBER:resp_code:tag} (?:%{NUMBER:resp_bytes:int}|-)

# Combined log format is the same as the common log format but with the addition
# of two quoted strings at the end for "referrer" and "agent"
#   See Examples at http://httpd.apache.org/docs/current/mod/mod_log_config.html
COMBINED_LOG_FORMAT %{COMMON_LOG_FORMAT} %{QS:referrer} %{QS:agent}

# HTTPD log formats
HTTPD20_ERRORLOG \[%{HTTPDERROR_DATE:timestamp}\] \[%{LOGLEVEL:loglevel:tag}\] (?:\[client %{IPORHOST:clientip}\] ){0,1}%{GREEDYDATA:errormsg}
HTTPD24_ERRORLOG \[%{HTTPDERROR_DATE:timestamp}\] \[%{WORD:module}:%{LOGLEVEL:loglevel:tag}\] \[pid %{POSINT:pid:int}:tid %{NUMBER:tid:int}\]( \(%{POSINT:proxy_errorcode:int}\)%{DATA:proxy_errormessage}:)?( \[client %{IPORHOST:client}:%{POSINT:clientport}\])? %{DATA:errorcode}: %{GREEDYDATA:message}
HTTPD_ERRORLOG %{HTTPD20_ERRORLOG}|%{HTTPD24_ERRORLOG}
`
75 plugins/inputs/logparser/grok/patterns/influx-patterns Normal file
@@ -0,0 +1,75 @@
# Captures are a slightly modified version of logstash "grok" patterns, with
# the format %{<capture syntax>[:<semantic name>][:<modifier>]}
# By default all named captures are converted into string fields.
# Modifiers can be used to convert captures to other types or tags.
# Timestamp modifiers can be used to convert captures to the timestamp of the
# parsed metric.

# View logstash grok pattern docs here:
#   https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html
# All default logstash patterns are supported, these can be viewed here:
#   https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns

# Available modifiers:
#   string   (default if nothing is specified)
#   int
#   float
#   duration (ie, 5.23ms gets converted to int nanoseconds)
#   tag      (converts the field into a tag)
#   drop     (drops the field completely)
# Timestamp modifiers:
#   ts-ansic       ("Mon Jan _2 15:04:05 2006")
#   ts-unix        ("Mon Jan _2 15:04:05 MST 2006")
#   ts-ruby        ("Mon Jan 02 15:04:05 -0700 2006")
#   ts-rfc822      ("02 Jan 06 15:04 MST")
#   ts-rfc822z     ("02 Jan 06 15:04 -0700")
#   ts-rfc850      ("Monday, 02-Jan-06 15:04:05 MST")
#   ts-rfc1123     ("Mon, 02 Jan 2006 15:04:05 MST")
#   ts-rfc1123z    ("Mon, 02 Jan 2006 15:04:05 -0700")
#   ts-rfc3339     ("2006-01-02T15:04:05Z07:00")
#   ts-rfc3339nano ("2006-01-02T15:04:05.999999999Z07:00")
#   ts-httpd       ("02/Jan/2006:15:04:05 -0700")
#   ts-epoch       (seconds since unix epoch)
#   ts-epochnano   (nanoseconds since unix epoch)
#   ts-"CUSTOM"
# CUSTOM time layouts must be within quotes and be the representation of the
# "reference time", which is Mon Jan 2 15:04:05 -0700 MST 2006
# See https://golang.org/pkg/time/#Parse for more details.

# Example log file pattern, example log looks like this:
#   [04/Jun/2016:12:41:45 +0100] 1.25 200 192.168.1.1 5.432µs
# Breakdown of the DURATION pattern below:
#   NUMBER is a builtin logstash grok pattern matching float & int numbers.
#   [nuµm]? is a regex specifying 0 or 1 of the characters within brackets.
#   s is also regex, this pattern must end in "s".
# so DURATION will match something like '5.324ms' or '6.1µs' or '10s'
DURATION %{NUMBER}[nuµm]?s
RESPONSE_CODE %{NUMBER:response_code:tag}
RESPONSE_TIME %{DURATION:response_time_ns:duration}
EXAMPLE_LOG \[%{HTTPDATE:ts:ts-httpd}\] %{NUMBER:myfloat:float} %{RESPONSE_CODE} %{IPORHOST:clientip} %{RESPONSE_TIME}

# Wider-ranging username matching vs. logstash built-in %{USER}
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}

##
## COMMON LOG PATTERNS
##

# InfluxDB log patterns
CLIENT (?:%{IPORHOST}|%{HOSTPORT}|::1)
INFLUXDB_HTTPD_LOG \[httpd\] %{COMBINED_LOG_FORMAT} %{UUID:uuid:drop} %{NUMBER:response_time_us:int}

# apache & nginx logs, this is also known as the "common log format"
#   see https://en.wikipedia.org/wiki/Common_Log_Format
COMMON_LOG_FORMAT %{CLIENT:client_ip} %{NGUSER:ident} %{NGUSER:auth} \[%{HTTPDATE:ts:ts-httpd}\] "(?:%{WORD:verb:tag} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http_version:float})?|%{DATA})" %{NUMBER:resp_code:tag} (?:%{NUMBER:resp_bytes:int}|-)

# Combined log format is the same as the common log format but with the addition
# of two quoted strings at the end for "referrer" and "agent"
#   See Examples at http://httpd.apache.org/docs/current/mod/mod_log_config.html
COMBINED_LOG_FORMAT %{COMMON_LOG_FORMAT} %{QS:referrer} %{QS:agent}

# HTTPD log formats
HTTPD20_ERRORLOG \[%{HTTPDERROR_DATE:timestamp}\] \[%{LOGLEVEL:loglevel:tag}\] (?:\[client %{IPORHOST:clientip}\] ){0,1}%{GREEDYDATA:errormsg}
HTTPD24_ERRORLOG \[%{HTTPDERROR_DATE:timestamp}\] \[%{WORD:module}:%{LOGLEVEL:loglevel:tag}\] \[pid %{POSINT:pid:int}:tid %{NUMBER:tid:int}\]( \(%{POSINT:proxy_errorcode:int}\)%{DATA:proxy_errormessage}:)?( \[client %{IPORHOST:client}:%{POSINT:clientport}\])? %{DATA:errorcode}: %{GREEDYDATA:message}
HTTPD_ERRORLOG %{HTTPD20_ERRORLOG}|%{HTTPD24_ERRORLOG}
Some files were not shown because too many files have changed in this diff.