Merge remote-tracking branch 'origin/master' into pagerduty
commit 2a5c92710b

Files changed:

.github
.gitignore
CHANGELOG.md
CONTRIBUTING.md
Godeps
Godeps_windows
Makefile
README.md
accumulator.go
agent
cmd/telegraf
docs
etc
filter
internal
metric.go
metric_test.go
plugins/inputs
    EXAMPLE_README.md
    aerospike
    all
    apache
    cassandra
    ceph
    cgroup
        README.md
        cgroup.go
        cgroup_linux.go
        cgroup_notlinux.go
        cgroup_test.go
        testdata
            blkio
            cpu
            memory
    dns_query
    docker
    dovecot
    elasticsearch
    exec
    haproxy
    hddtemp
    jolokia
    kafka_consumer
    logparser
    memcached
    mesos
.github/ISSUE_TEMPLATE.md

@@ -11,6 +11,8 @@ Erase the other section and everything on and above this line.

## Bug report

### Relevant telegraf.conf:

### System info:

[Include Telegraf version, operating system name, and other relevant details]
.github/PULL_REQUEST_TEMPLATE.md

@@ -1,5 +1,5 @@

### Required for all PRs:

-- [ ] CHANGELOG.md updated
+- [ ] CHANGELOG.md updated (we recommend not updating this until the PR has been approved by a maintainer)
- [ ] Sign [CLA](https://influxdata.com/community/cla/) (if not already signed)
- [ ] README.md updated (if adding a new plugin)
.gitignore

@@ -1,3 +1,4 @@

build
tivan
.vagrant
+/telegraf
CHANGELOG.md (121 changed lines)
@@ -2,31 +2,80 @@

### Release Notes

**Breaking Change** The SNMP plugin is being deprecated in its current form.
There is a [new SNMP plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/snmp)
which fixes many of the issues and confusions of its predecessor. For users
wanting to continue to use the deprecated SNMP plugin, you will need to change
your config file from `[[inputs.snmp]]` to `[[inputs.snmp_legacy]]`. The
configuration of the new SNMP plugin is _not_ backwards-compatible.
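For users of the deprecated plugin, the migration is purely a rename of the
TOML table; a sketch (the body of the section, whatever options it contains,
stays unchanged):

```toml
# Deprecated name:
# [[inputs.snmp]]
#   # ...existing options unchanged...

# New name for the legacy plugin:
[[inputs.snmp_legacy]]
  # ...existing options unchanged...
```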
- Telegraf now supports being installed as an official windows service,
which can be installed via
`> C:\Program Files\Telegraf\telegraf.exe --service install`
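The same flag drives the other service verbs. `install` is confirmed above;
`start`, `stop`, and `uninstall` are assumptions based on the kardianos/service
library that this commit vendors, so treat these as a sketch:

```
> C:\Program Files\Telegraf\telegraf.exe --service install
> C:\Program Files\Telegraf\telegraf.exe --service start
> C:\Program Files\Telegraf\telegraf.exe --service stop
> C:\Program Files\Telegraf\telegraf.exe --service uninstall
```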
**Breaking Change**: Aerospike main server node measurements have been renamed
aerospike_node. Aerospike namespace measurements have been renamed to
aerospike_namespace. They will also now be tagged with the node_name
that they correspond to. This has been done to differentiate measurements
that pertain to node vs. namespace statistics.

**Breaking Change**: users of github_webhooks must change to the new
`[[inputs.webhooks]]` plugin.

This means that the default github_webhooks config:

```
# A Github Webhook Event collector
[[inputs.github_webhooks]]
  ## Address and port to host Webhook listener on
  service_address = ":1618"
```

should now look like:

```
# A Webhooks Event collector
[[inputs.webhooks]]
  ## Address and port to host Webhook listener on
  service_address = ":1618"

  [inputs.webhooks.github]
    path = "/"
```
- `flush_jitter` behavior has been changed. The random jitter will now be
evaluated at every flush interval, rather than once at startup. This makes it
consistent with the behavior of `collection_jitter`.
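As a sketch of where these settings live (the values shown are illustrative):

```toml
[agent]
  interval = "10s"
  ## jitter applied independently to each collection
  collection_jitter = "0s"
  flush_interval = "10s"
  ## with this change, a new random jitter in [0, flush_jitter]
  ## is drawn at every flush, not just once at startup
  flush_jitter = "5s"
```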
- All AWS plugins now utilize a standard mechanism for evaluating credentials.
This allows all AWS plugins to support environment variables, shared credential
files & profiles, and role assumptions. See the specific plugin README for
details.
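A sketch of the common credential block as it might appear in an AWS plugin
section; the option names follow the shared mechanism described above, but
confirm them against the specific plugin README:

```toml
[[inputs.cloudwatch]]
  region = "us-east-1"
  ## Explicit static credentials:
  # access_key = ""
  # secret_key = ""
  ## ...or a shared credentials file and profile:
  # profile = ""
  # shared_credential_file = ""
  ## ...or assume a role on top of the base credentials:
  # role_arn = ""
```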
- The AWS CloudWatch input plugin can now declare a wildcard value for a metric
dimension. This causes the plugin to read all metrics that contain the specified
dimension key regardless of value. This is used to export collections of metrics
without having to know the dimension values ahead of time. (See the combined
example after the next item.)
- The AWS CloudWatch input plugin can now be configured with the `cache_ttl`
attribute. This configures the TTL of the internal metric cache. This is useful
in conjunction with wildcard dimension values as it will control the amount of
time before a new metric is included by the plugin.
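Putting the two together, a sketch of a CloudWatch section using a wildcard
dimension with a cache TTL (the namespace and metric names are illustrative):

```toml
[[inputs.cloudwatch]]
  region = "us-east-1"
  namespace = "AWS/EC2"
  ## TTL of the internal metric cache; with wildcard dimensions this bounds
  ## how long before newly appearing metrics are picked up
  cache_ttl = "10m"

  [[inputs.cloudwatch.metrics]]
    names = ["CPUUtilization"]
    ## a wildcard value matches every value of the InstanceId dimension
    [[inputs.cloudwatch.metrics.dimensions]]
      name = "InstanceId"
      value = "*"
```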
### Features

- [#1413](https://github.com/influxdata/telegraf/issues/1413): Separate container_version from container_image tag.
- [#1525](https://github.com/influxdata/telegraf/pull/1525): Support setting per-device and total metrics for Docker network and blockio.
- [#1466](https://github.com/influxdata/telegraf/pull/1466): MongoDB input plugin: adding per DB stats from db.stats()
- [#1503](https://github.com/influxdata/telegraf/pull/1503): Add tls support for certs to RabbitMQ input plugin
- [#1289](https://github.com/influxdata/telegraf/pull/1289): webhooks input plugin. Thanks @francois2metz and @cduez!
- [#1247](https://github.com/influxdata/telegraf/pull/1247): rollbar webhook plugin.
- [#1408](https://github.com/influxdata/telegraf/pull/1408): mandrill webhook plugin.
- [#1402](https://github.com/influxdata/telegraf/pull/1402): docker-machine/boot2docker no longer required for unit tests.
- [#1350](https://github.com/influxdata/telegraf/pull/1350): cgroup input plugin.
- [#1369](https://github.com/influxdata/telegraf/pull/1369): Add input plugin for consuming metrics from NSQD.
- [#1480](https://github.com/influxdata/telegraf/pull/1480): add ability to read redis from a socket.
- [#1387](https://github.com/influxdata/telegraf/pull/1387): **Breaking Change** - Redis `role` tag renamed to `replication_role` to avoid global_tags override
- [#1437](https://github.com/influxdata/telegraf/pull/1437): Fetching Galera status metrics in MySQL
- [#1500](https://github.com/influxdata/telegraf/pull/1500): Aerospike plugin refactored to use official client lib.
- [#1434](https://github.com/influxdata/telegraf/pull/1434): Add measurement name arg to logparser plugin.
- [#1479](https://github.com/influxdata/telegraf/pull/1479): logparser: change resp_code from a field to a tag.
- [#1411](https://github.com/influxdata/telegraf/pull/1411): Implement support for fetching hddtemp data
- [#1340](https://github.com/influxdata/telegraf/issues/1340): statsd: do not log every dropped metric.
- [#1368](https://github.com/influxdata/telegraf/pull/1368): Add precision rounding to all metrics on collection.
- [#1390](https://github.com/influxdata/telegraf/pull/1390): Add support for Tengine
- [#1320](https://github.com/influxdata/telegraf/pull/1320): Logparser input plugin for parsing grok-style log patterns.
- [#1397](https://github.com/influxdata/telegraf/issues/1397): ElasticSearch: now supports connecting to ElasticSearch via SSL
- [#1262](https://github.com/influxdata/telegraf/pull/1261): Add graylog input plugin.
- [#1294](https://github.com/influxdata/telegraf/pull/1294): consul input plugin. Thanks @harnash
- [#1164](https://github.com/influxdata/telegraf/pull/1164): conntrack input plugin. Thanks @robinpercy!
- [#1165](https://github.com/influxdata/telegraf/pull/1165): vmstat input plugin. Thanks @jshim-xm!
- [#1247](https://github.com/influxdata/telegraf/pull/1247): rollbar input plugin. Thanks @francois2metz and @cduez!
- [#1208](https://github.com/influxdata/telegraf/pull/1208): Standardized AWS credentials evaluation & wildcard CloudWatch dimensions. Thanks @johnrengelman!
- [#1264](https://github.com/influxdata/telegraf/pull/1264): Add SSL config options to http_response plugin.
- [#1272](https://github.com/influxdata/telegraf/pull/1272): graphite parser: add ability to specify multiple tag keys, for consistency with influxdb parser.
@@ -38,9 +87,44 @@ time before a new metric is included by the plugin.

- [#1278](https://github.com/influxdata/telegraf/pull/1278) & [#1288](https://github.com/influxdata/telegraf/pull/1288) & [#1295](https://github.com/influxdata/telegraf/pull/1295): RabbitMQ/Apache/InfluxDB inputs: made url(s) parameter optional by using reasonable input defaults if not specified
- [#1296](https://github.com/influxdata/telegraf/issues/1296): Refactor of flush_jitter argument.
- [#1213](https://github.com/influxdata/telegraf/issues/1213): Add inactive & active memory to mem plugin.
- [#1543](https://github.com/influxdata/telegraf/pull/1543): Official Windows service.
- [#1414](https://github.com/influxdata/telegraf/pull/1414): Forking sensors command to remove C package dependency.

### Bugfixes

- [#1619](https://github.com/influxdata/telegraf/issues/1619): Fix `make windows` build target
- [#1519](https://github.com/influxdata/telegraf/pull/1519): Fix error race conditions and partial failures.
- [#1477](https://github.com/influxdata/telegraf/issues/1477): nstat: fix inaccurate config panic.
- [#1481](https://github.com/influxdata/telegraf/issues/1481): jolokia: fix handling multiple multi-dimensional attributes.
- [#1430](https://github.com/influxdata/telegraf/issues/1430): Fix prometheus character sanitizing. Sanitize more win_perf_counters characters.
- [#1534](https://github.com/influxdata/telegraf/pull/1534): Add diskio io_time to FreeBSD & report timing metrics as ms (as linux does).
- [#1379](https://github.com/influxdata/telegraf/issues/1379): Fix covering Amazon Linux for post remove flow.
- [#1584](https://github.com/influxdata/telegraf/issues/1584): procstat missing fields: read/write bytes & count
- [#1472](https://github.com/influxdata/telegraf/pull/1472): diskio input plugin: set 'skip_serial_number = true' by default to avoid high cardinality.
- [#1426](https://github.com/influxdata/telegraf/pull/1426): nil metrics panic fix.
- [#1384](https://github.com/influxdata/telegraf/pull/1384): Fix datarace in apache input plugin.
- [#1399](https://github.com/influxdata/telegraf/issues/1399): Add `read_repairs` statistics to riak plugin.
- [#1405](https://github.com/influxdata/telegraf/issues/1405): Fix memory/connection leak in prometheus input plugin.
- [#1378](https://github.com/influxdata/telegraf/issues/1378): Trim BOM from config file for Windows support.
- [#1339](https://github.com/influxdata/telegraf/issues/1339): Prometheus client output panic on service reload.
- [#1461](https://github.com/influxdata/telegraf/pull/1461): Prometheus parser, protobuf format header fix.
- [#1334](https://github.com/influxdata/telegraf/issues/1334): Prometheus output, metric refresh and caching fixes.
- [#1432](https://github.com/influxdata/telegraf/issues/1432): Panic fix for multiple graphite outputs under very high load.
- [#1412](https://github.com/influxdata/telegraf/pull/1412): Instrumental output has better reconnect behavior
- [#1460](https://github.com/influxdata/telegraf/issues/1460): Remove PID from procstat plugin to fix cardinality issues.
- [#1427](https://github.com/influxdata/telegraf/issues/1427): Cassandra input: version 2.x "column family" fix.
- [#1463](https://github.com/influxdata/telegraf/issues/1463): Shared WaitGroup in Exec plugin
- [#1436](https://github.com/influxdata/telegraf/issues/1436): logparser: honor modifiers in "pattern" config.
- [#1418](https://github.com/influxdata/telegraf/issues/1418): logparser: error and exit on file permissions/missing errors.
- [#1499](https://github.com/influxdata/telegraf/pull/1499): Make the user able to specify full path for HAproxy stats
- [#1521](https://github.com/influxdata/telegraf/pull/1521): Fix Redis url, an extra "tcp://" was added.
- [#1330](https://github.com/influxdata/telegraf/issues/1330): Fix exec plugin panic when using single binary.
- [#1336](https://github.com/influxdata/telegraf/issues/1336): Fixed incorrect prometheus metrics source selection.
- [#1112](https://github.com/influxdata/telegraf/issues/1112): Set default Zookeeper chroot to empty string.
- [#1335](https://github.com/influxdata/telegraf/issues/1335): Fix overall ping timeout to be calculated based on per-ping timeout.
- [#1374](https://github.com/influxdata/telegraf/pull/1374): Change "default" retention policy to "".
- [#1377](https://github.com/influxdata/telegraf/issues/1377): Graphite output mangling '%' character.
- [#1396](https://github.com/influxdata/telegraf/pull/1396): Prometheus input plugin now supports x509 certs authentication
- [#1252](https://github.com/influxdata/telegraf/pull/1252) & [#1279](https://github.com/influxdata/telegraf/pull/1279): Fix systemd service. Thanks @zbindenren & @PierreF!
- [#1221](https://github.com/influxdata/telegraf/pull/1221): Fix influxdb n_shards counter.
- [#1258](https://github.com/influxdata/telegraf/pull/1258): Fix potential kernel plugin integer parse error.
@@ -50,6 +134,11 @@ time before a new metric is included by the plugin.

- [#1316](https://github.com/influxdata/telegraf/pull/1316): Removed leaked "database" tag on redis metrics. Thanks @PierreF!
- [#1323](https://github.com/influxdata/telegraf/issues/1323): Processes plugin: fix potential error with /proc/net/stat directory.
- [#1322](https://github.com/influxdata/telegraf/issues/1322): Fix rare RHEL 5.2 panic in gopsutil diskio gathering function.
- [#1586](https://github.com/influxdata/telegraf/pull/1586): Remove IF NOT EXISTS from influxdb output database creation.
- [#1600](https://github.com/influxdata/telegraf/issues/1600): Fix quoting with text values in postgresql_extensible plugin.
- [#1425](https://github.com/influxdata/telegraf/issues/1425): Fix win_perf_counter "index out of range" panic.
- [#1634](https://github.com/influxdata/telegraf/issues/1634): Fix ntpq panic when field is missing.
- [#1637](https://github.com/influxdata/telegraf/issues/1637): Sanitize graphite output field names.

## v0.13.1 [2016-05-24]
CONTRIBUTING.md

@@ -11,6 +11,8 @@ Output plugins READMEs are less structured,

but any information you can provide on how the data will look is appreciated.
See the [OpenTSDB output](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/opentsdb)
for a good example.
+1. **Optional:** Help users of your plugin by including example queries for populating dashboards. Include these sample queries in the `README.md` for the plugin.
+1. **Optional:** Write a [tickscript](https://docs.influxdata.com/kapacitor/v1.0/tick/syntax/) for your plugin and add it to [Kapacitor](https://github.com/influxdata/kapacitor/tree/master/examples/telegraf). Or mention @jackzampolin in a PR comment with some common queries that you would want to alert on and he will write one for you.

## GoDoc
@@ -114,7 +116,7 @@ creating the `Parser` object.

You should also add the following to your SampleConfig() return:

```toml
-  ## Data format to consume.
+  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
```
@@ -244,7 +246,7 @@ instantiating and creating the `Serializer` object.

You should also add the following to your SampleConfig() return:

```toml
-  ## Data format to output.
+  ## Data format to output.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
```
@@ -290,10 +292,6 @@ To execute Telegraf tests follow these simple steps:

instructions
- execute `make test`

-**OSX users**: you will need to install `boot2docker` or `docker-machine`.
-The Makefile will assume that you have a `docker-machine` box called `default` to
-get the IP address.

### Unit test troubleshooting

Try cleaning up your test environment by executing `make docker-kill` and
Godeps (8 changed lines)
@@ -1,5 +1,6 @@

github.com/Shopify/sarama 8aadb476e66ca998f2f6bb3c993e9a2daa3666b9
github.com/Sirupsen/logrus 219c8cb75c258c552e999735be6df753ffc7afdc
+github.com/aerospike/aerospike-client-go 45863b7fd8640dc12f7fdd397104d97e1986f25a
github.com/amir/raidman 53c1b967405155bfc8758557863bf2e14f814687
github.com/aws/aws-sdk-go 13a12060f716145019378a10e2806c174356b857
github.com/beorn7/perks 3ac7bf7a47d159a033b107610db8a1b6575507a4
@@ -28,6 +29,8 @@ github.com/hpcloud/tail b2940955ab8b26e19d43a43c4da0475dd81bdb56

github.com/influxdata/config b79f6829346b8d6e78ba73544b1e1038f1f1c9da
github.com/influxdata/influxdb e094138084855d444195b252314dfee9eae34cab
github.com/influxdata/toml af4df43894b16e3fd2b788d01bd27ad0776ef2d0
+github.com/kardianos/osext 29ae4ffbc9a6fe9fb2bc5029050ce6996ea1d3bc
+github.com/kardianos/service 5e335590050d6d00f3aa270217d288dda1c94d0a
github.com/klauspost/crc32 19b0b332c9e4516a6370a0456e6182c3b5036720
github.com/lib/pq e182dc4027e2ded4b19396d638610f2653295f36
github.com/matttproud/golang_protobuf_extensions d0c3fe89de86839aecf2e0579c40ba3bb336a453
@@ -43,12 +46,15 @@ github.com/prometheus/client_model fa8ad6fec33561be4280a8f0514318c79d7f6cb6

github.com/prometheus/common e8eabff8812b05acf522b45fdcd725a785188e37
github.com/prometheus/procfs 406e5b7bfd8201a36e2bb5f7bdae0b03380c2ce8
github.com/samuel/go-zookeeper 218e9c81c0dd8b3b18172b2bbfad92cc7d6db55f
-github.com/shirou/gopsutil 586bb697f3ec9f8ec08ffefe18f521a64534037c
+github.com/shirou/gopsutil 4d0c402af66c78735c5ccf820dc2ca7de5e4ff08
github.com/soniah/gosnmp b1b4f885b12c5dcbd021c5cee1c904110de6db7d
github.com/sparrc/aerospike-client-go d4bb42d2c2d39dae68e054116f4538af189e05d5
github.com/streadway/amqp b4f3ceab0337f013208d31348b578d83c0064744
github.com/stretchr/testify 1f4a1643a57e798696635ea4c126e9127adb7d3c
github.com/vjeantet/grok 83bfdfdfd1a8146795b28e547a8e3c8b28a466c2
github.com/wvanbergen/kafka 46f9a1cf3f670edec492029fadded9c2d9e18866
github.com/wvanbergen/kazoo-go 0f768712ae6f76454f987c3356177e138df258f8
github.com/yuin/gopher-lua bf3808abd44b1e55143a2d7f08571aaa80db1808
github.com/zensqlmonitor/go-mssqldb ffe5510c6fa5e15e6d983210ab501c815b56b363
golang.org/x/crypto 5dc8cb4b8a8eb076cbb5a06bc3b8682c15bdbbd3
golang.org/x/net 6acef71eb69611914f7a30939ea9f6e194c78172
Godeps_windows

@@ -1,59 +1,12 @@

github.com/Microsoft/go-winio 9f57cbbcbcb41dea496528872a4f0e37a4f7ae98
github.com/Shopify/sarama 8aadb476e66ca998f2f6bb3c993e9a2daa3666b9
github.com/Sirupsen/logrus 219c8cb75c258c552e999735be6df753ffc7afdc
github.com/Microsoft/go-winio ce2922f643c8fd76b46cadc7f404a06282678b34
github.com/StackExchange/wmi f3e2bae1e0cb5aef83e319133eabfee30013a4a5
github.com/amir/raidman 53c1b967405155bfc8758557863bf2e14f814687
github.com/aws/aws-sdk-go 13a12060f716145019378a10e2806c174356b857
github.com/beorn7/perks 3ac7bf7a47d159a033b107610db8a1b6575507a4
github.com/cenkalti/backoff 4dc77674aceaabba2c7e3da25d4c823edfb73f99
github.com/couchbase/go-couchbase cb664315a324d87d19c879d9cc67fda6be8c2ac1
github.com/couchbase/gomemcached a5ea6356f648fec6ab89add00edd09151455b4b2
github.com/couchbase/goutils 5823a0cbaaa9008406021dc5daf80125ea30bba6
github.com/dancannon/gorethink e7cac92ea2bc52638791a021f212145acfedb1fc
github.com/davecgh/go-spew 5215b55f46b2b919f50a1df0eaa5886afe4e3b3d
github.com/docker/engine-api 8924d6900370b4c7e7984be5adc61f50a80d7537
github.com/docker/go-connections f549a9393d05688dff0992ef3efd8bbe6c628aeb
github.com/docker/go-units 5d2041e26a699eaca682e2ea41c8f891e1060444
github.com/eapache/go-resiliency b86b1ec0dd4209a588dc1285cdd471e73525c0b3
github.com/eapache/queue ded5959c0d4e360646dc9e9908cff48666781367
github.com/eclipse/paho.mqtt.golang 0f7a459f04f13a41b7ed752d47944528d4bf9a86
github.com/go-ole/go-ole 50055884d646dd9434f16bbb5c9801749b9bafe4
github.com/go-sql-driver/mysql 1fca743146605a172a266e1654e01e5cd5669bee
github.com/golang/protobuf 552c7b9542c194800fd493123b3798ef0a832032
github.com/golang/snappy 427fb6fc07997f43afa32f35e850833760e489a7
github.com/gonuts/go-shellquote e842a11b24c6abfb3dd27af69a17f482e4b483c2
github.com/gorilla/context 1ea25387ff6f684839d82767c1733ff4d4d15d0a
github.com/gorilla/mux c9e326e2bdec29039a3761c07bece13133863e1e
github.com/hailocab/go-hostpool e80d13ce29ede4452c43dea11e79b9bc8a15b478
github.com/influxdata/config b79f6829346b8d6e78ba73544b1e1038f1f1c9da
github.com/influxdata/influxdb e3fef5593c21644f2b43af55d6e17e70910b0e48
github.com/influxdata/toml af4df43894b16e3fd2b788d01bd27ad0776ef2d0
github.com/klauspost/crc32 19b0b332c9e4516a6370a0456e6182c3b5036720
github.com/lib/pq e182dc4027e2ded4b19396d638610f2653295f36
github.com/lxn/win 9a7734ea4db26bc593d52f6a8a957afdad39c5c1
github.com/matttproud/golang_protobuf_extensions d0c3fe89de86839aecf2e0579c40ba3bb336a453
github.com/miekg/dns cce6c130cdb92c752850880fd285bea1d64439dd
github.com/mreiferson/go-snappystream 028eae7ab5c4c9e2d1cb4c4ca1e53259bbe7e504
github.com/naoina/go-stringutil 6b638e95a32d0c1131db0e7fe83775cbea4a0d0b
github.com/nats-io/nats b13fc9d12b0b123ebc374e6b808c6228ae4234a3
github.com/nats-io/nuid 4f84f5f3b2786224e336af2e13dba0a0a80b76fa
github.com/nsqio/go-nsq 0b80d6f05e15ca1930e0c5e1d540ed627e299980
github.com/prometheus/client_golang 18acf9993a863f4c4b40612e19cdd243e7c86831
github.com/prometheus/client_model fa8ad6fec33561be4280a8f0514318c79d7f6cb6
github.com/prometheus/common e8eabff8812b05acf522b45fdcd725a785188e37
github.com/prometheus/procfs 406e5b7bfd8201a36e2bb5f7bdae0b03380c2ce8
github.com/samuel/go-zookeeper 218e9c81c0dd8b3b18172b2bbfad92cc7d6db55f
github.com/shirou/gopsutil 1f32ce1bb380845be7f5d174ac641a2c592c0c42
github.com/shirou/w32 ada3ba68f000aa1b58580e45c9d308fe0b7fc5c5
github.com/soniah/gosnmp b1b4f885b12c5dcbd021c5cee1c904110de6db7d
github.com/streadway/amqp b4f3ceab0337f013208d31348b578d83c0064744
github.com/stretchr/testify 1f4a1643a57e798696635ea4c126e9127adb7d3c
github.com/wvanbergen/kafka 46f9a1cf3f670edec492029fadded9c2d9e18866
github.com/wvanbergen/kazoo-go 0f768712ae6f76454f987c3356177e138df258f8
github.com/zensqlmonitor/go-mssqldb ffe5510c6fa5e15e6d983210ab501c815b56b363
golang.org/x/net 6acef71eb69611914f7a30939ea9f6e194c78172
golang.org/x/text a71fd10341b064c10f4a81ceac72bcf70f26ea34
gopkg.in/dancannon/gorethink.v1 7d1af5be49cb5ecc7b177bf387d232050299d6ef
gopkg.in/fatih/pool.v2 cba550ebf9bce999a02e963296d4bc7a486cb715
gopkg.in/mgo.v2 d90005c5262a3463800497ea5a89aed5fe22c886
gopkg.in/yaml.v2 a83829b6f1293c91addabc89d0571c246397bbf4
github.com/go-ole/go-ole be49f7c07711fcb603cff39e1de7c67926dc0ba7
github.com/lxn/win 950a0e81e7678e63d8e6cd32412bdecb325ccd88
github.com/shirou/w32 3c9377fc6748f222729a8270fe2775d149a249ad
golang.org/x/sys a646d33e2ee3172a661fc09bca23bb4889a41bc8
github.com/go-ini/ini 9144852efba7c4daf409943ee90767da62d55438
github.com/jmespath/go-jmespath bd40a432e4c76585ef6b72d3fd96fb9b6dc7b68d
github.com/pmezard/go-difflib/difflib 792786c7400a136282c1664665ae0a8db921c6c2
github.com/stretchr/objx 1a9d0bb9f541897e62256577b352fdbc1fb4fd94
gopkg.in/fsnotify.v1 a8a77c9133d2d6fd8334f3260d06f60e8d80a5fb
gopkg.in/tomb.v1 dd632973f1e7218eb1089048e0798ec9ae7dceb8
Makefile (27 changed lines)
@@ -1,4 +1,3 @@

-UNAME := $(shell sh -c 'uname')
VERSION := $(shell sh -c 'git describe --always --tags')
ifdef GOBIN
PATH := $(GOBIN):$(PATH)
@@ -17,7 +16,7 @@ build:

	go install -ldflags "-X main.version=$(VERSION)" ./...

build-windows:
-	go build -o telegraf.exe -ldflags \
+	GOOS=windows GOARCH=amd64 go build -o telegraf.exe -ldflags \
		"-X main.version=$(VERSION)" \
		./cmd/telegraf/telegraf.go
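Setting `GOOS=windows GOARCH=amd64` in the recipe lets `make build-windows`
cross-compile the Windows binary from any host platform, rather than relying on
the build host's native target.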
@@ -26,10 +25,6 @@ build-for-docker:

		"-s -X main.version=$(VERSION)" \
		./cmd/telegraf/telegraf.go

-# Build with race detector
-dev: prepare
-	go build -race -ldflags "-X main.version=$(VERSION)" ./...

# run package script
package:
	./scripts/build.py --package --version="$(VERSION)" --platform=linux --arch=all --upload
@@ -42,31 +37,22 @@ prepare:

# Use the windows godeps file to prepare dependencies
prepare-windows:
	go get github.com/sparrc/gdm
-	gdm restore
+	gdm restore -f Godeps_windows

# Run all docker containers necessary for unit tests
docker-run:
-ifeq ($(UNAME), Darwin)
-	docker run --name kafka \
-		-e ADVERTISED_HOST=$(shell sh -c 'boot2docker ip || docker-machine ip default') \
-		-e ADVERTISED_PORT=9092 \
-		-p "2181:2181" -p "9092:9092" \
-		-d spotify/kafka
-endif
-ifeq ($(UNAME), Linux)
	docker run --name kafka \
		-e ADVERTISED_HOST=localhost \
		-e ADVERTISED_PORT=9092 \
		-p "2181:2181" -p "9092:9092" \
		-d spotify/kafka
-endif
	docker run --name mysql -p "3306:3306" -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -d mysql
	docker run --name memcached -p "11211:11211" -d memcached
	docker run --name postgres -p "5432:5432" -d postgres
	docker run --name rabbitmq -p "15672:15672" -p "5672:5672" -d rabbitmq:3-management
	docker run --name opentsdb -p "4242:4242" -d petergrace/opentsdb-docker
	docker run --name redis -p "6379:6379" -d redis
-	docker run --name aerospike -p "3000:3000" -d aerospike
+	docker run --name aerospike -p "3000:3000" -d aerospike/aerospike-server
	docker run --name nsq -p "4150:4150" -d nsqio/nsq /nsqd
	docker run --name mqtt -p "1883:1883" -d ncarlier/mqtt
	docker run --name riemann -p "5555:5555" -d blalor/riemann
@@ -79,8 +65,7 @@ docker-run-circle:

		-e ADVERTISED_PORT=9092 \
		-p "2181:2181" -p "9092:9092" \
		-d spotify/kafka
-	docker run --name opentsdb -p "4242:4242" -d petergrace/opentsdb-docker
-	docker run --name aerospike -p "3000:3000" -d aerospike
+	docker run --name aerospike -p "3000:3000" -d aerospike/aerospike-server
	docker run --name nsq -p "4150:4150" -d nsqio/nsq /nsqd
	docker run --name mqtt -p "1883:1883" -d ncarlier/mqtt
	docker run --name riemann -p "5555:5555" -d blalor/riemann
@@ -88,8 +73,8 @@ docker-run-circle:

# Kill all docker containers, ignore errors
docker-kill:
-	-docker kill nsq aerospike redis opentsdb rabbitmq postgres memcached mysql kafka mqtt riemann snmp
-	-docker rm nsq aerospike redis opentsdb rabbitmq postgres memcached mysql kafka mqtt riemann snmp
+	-docker kill nsq aerospike redis rabbitmq postgres memcached mysql kafka mqtt riemann snmp
+	-docker rm nsq aerospike redis rabbitmq postgres memcached mysql kafka mqtt riemann snmp

# Run full unit tests using docker containers (includes setup and teardown)
test: vet docker-kill docker-run
README.md (29 changed lines)
@@ -20,12 +20,12 @@ new plugins.

### Linux deb and rpm Packages:

Latest:
-* https://dl.influxdata.com/telegraf/releases/telegraf_0.13.1_amd64.deb
-* https://dl.influxdata.com/telegraf/releases/telegraf-0.13.1.x86_64.rpm
+* https://dl.influxdata.com/telegraf/releases/telegraf_1.0.0-beta3_amd64.deb
+* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0_beta3.x86_64.rpm

Latest (arm):
-* https://dl.influxdata.com/telegraf/releases/telegraf_0.13.1_armhf.deb
-* https://dl.influxdata.com/telegraf/releases/telegraf-0.13.1.armhf.rpm
+* https://dl.influxdata.com/telegraf/releases/telegraf_1.0.0-beta3_armhf.deb
+* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0_beta3.armhf.rpm

##### Package Instructions:
@@ -46,14 +46,14 @@ to use this repo to install & update telegraf.

### Linux tarballs:

Latest:
-* https://dl.influxdata.com/telegraf/releases/telegraf-0.13.1_linux_amd64.tar.gz
-* https://dl.influxdata.com/telegraf/releases/telegraf-0.13.1_linux_i386.tar.gz
-* https://dl.influxdata.com/telegraf/releases/telegraf-0.13.1_linux_armhf.tar.gz
+* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0-beta3_linux_amd64.tar.gz
+* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0-beta3_linux_i386.tar.gz
+* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0-beta3_linux_armhf.tar.gz

### FreeBSD tarball:

Latest:
-* https://dl.influxdata.com/telegraf/releases/telegraf-0.13.1_freebsd_amd64.tar.gz
+* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0-beta3_freebsd_amd64.tar.gz

### Ansible Role:
@@ -69,8 +69,7 @@ brew install telegraf

### Windows Binaries (EXPERIMENTAL)

Latest:
-* https://dl.influxdata.com/telegraf/releases/telegraf-0.13.1_windows_amd64.zip
-* https://dl.influxdata.com/telegraf/releases/telegraf-0.13.1_windows_i386.zip
+* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0-beta3_windows_amd64.zip

### From Source:
@@ -157,6 +156,7 @@ Currently implemented sources:

* [exec](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/exec) (generic executable plugin, support JSON, influx, graphite and nagios)
* [filestat](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/filestat)
* [haproxy](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/haproxy)
+* [hddtemp](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/hddtemp)
* [http_response](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/http_response)
* [httpjson](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/httpjson) (generic JSON-emitting http service plugin)
* [influxdb](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/influxdb)
@@ -188,7 +188,7 @@ Currently implemented sources:

* [redis](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/redis)
* [rethinkdb](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/rethinkdb)
* [riak](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/riak)
-* [sensors ](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/sensors) (only available if built from source)
+* [sensors](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/sensors)
* [snmp](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/snmp)
* [sql server](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/sqlserver) (microsoft)
* [twemproxy](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/twemproxy)
@@ -218,8 +218,11 @@ Telegraf can also collect metrics via the following service plugins:

* [mqtt_consumer](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/mqtt_consumer)
* [kafka_consumer](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/kafka_consumer)
* [nats_consumer](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/nats_consumer)
-* [github_webhooks](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/github_webhooks)
-* [rollbar_webhooks](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/rollbar_webhooks)
+* [webhooks](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/webhooks)
+  * [github](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/webhooks/github)
+  * [mandrill](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/webhooks/mandrill)
+  * [rollbar](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/webhooks/rollbar)
+* [nsq_consumer](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/nsq_consumer)

We'll be adding support for many more over the coming months. Read on if you
want to add support for another service or third-party API.
accumulator.go

@@ -16,6 +16,12 @@ type Accumulator interface {

		tags map[string]string,
		t ...time.Time)

+	AddError(err error)

	Debug() bool
	SetDebug(enabled bool)

+	SetPrecision(precision, interval time.Duration)

+	DisablePrecision()
}
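The new `AddError` method gives input plugins a way to report non-fatal
collection errors instead of aborting `Gather`. A minimal sketch of how a
plugin might use it; the `mockInput` type, its `Servers` field, and the `poll`
helper are hypothetical and exist only for illustration:

```go
package mock

import (
	"fmt"

	"github.com/influxdata/telegraf"
)

// mockInput is a hypothetical input plugin used only to illustrate AddError.
type mockInput struct {
	Servers []string
}

func (m *mockInput) Gather(acc telegraf.Accumulator) error {
	for _, server := range m.Servers {
		fields, err := m.poll(server)
		if err != nil {
			// Report the failure and keep gathering from the other servers;
			// the accumulator tags the error with the plugin name and logs it.
			acc.AddError(fmt.Errorf("polling %s: %s", server, err))
			continue
		}
		acc.AddFields("mock", fields, map[string]string{"server": server})
	}
	return nil
}

func (m *mockInput) poll(server string) (map[string]interface{}, error) {
	// Hypothetical placeholder for real collection logic.
	return map[string]interface{}{"up": 1}, nil
}
```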
agent/accumulator.go

@@ -4,6 +4,7 @@ import (

	"fmt"
	"log"
	"math"
+	"sync/atomic"
	"time"

	"github.com/influxdata/telegraf"
@@ -11,12 +12,13 @@ import (

)

func NewAccumulator(
-	inputConfig *internal_models.InputConfig,
+	inputConfig *models.InputConfig,
	metrics chan telegraf.Metric,
) *accumulator {
	acc := accumulator{}
	acc.metrics = metrics
	acc.inputConfig = inputConfig
+	acc.precision = time.Nanosecond
	return &acc
}
@@ -29,9 +31,11 @@ type accumulator struct {

	// print every point added to the accumulator
	trace bool

-	inputConfig *internal_models.InputConfig
+	inputConfig *models.InputConfig

	prefix    string
	precision time.Duration

+	errCount uint64
}

func (ac *accumulator) Add(
@@ -141,10 +145,7 @@ func (ac *accumulator) AddFields(

	} else {
		timestamp = time.Now()
	}

	if ac.prefix != "" {
		measurement = ac.prefix + measurement
	}
+	timestamp = timestamp.Round(ac.precision)

	m, err := telegraf.NewMetric(measurement, tags, result, timestamp)
	if err != nil {
@@ -157,6 +158,17 @@ func (ac *accumulator) AddFields(

	ac.metrics <- m
}

+// AddError passes a runtime error to the accumulator.
+// The error will be tagged with the plugin name and written to the log.
+func (ac *accumulator) AddError(err error) {
+	if err == nil {
+		return
+	}
+	atomic.AddUint64(&ac.errCount, 1)
+	//TODO suppress/throttle consecutive duplicate errors?
+	log.Printf("ERROR in input [%s]: %s", ac.inputConfig.Name, err)
+}

func (ac *accumulator) Debug() bool {
	return ac.debug
}
@@ -173,6 +185,31 @@ func (ac *accumulator) SetTrace(trace bool) {

	ac.trace = trace
}

// SetPrecision takes two time.Duration objects. If the first is non-zero,
// it sets that as the precision. Otherwise, it takes the second argument
// as the order of time that the metrics should be rounded to, with the
// maximum being 1s.
func (ac *accumulator) SetPrecision(precision, interval time.Duration) {
	if precision > 0 {
		ac.precision = precision
		return
	}
	switch {
	case interval >= time.Second:
		ac.precision = time.Second
	case interval >= time.Millisecond:
		ac.precision = time.Millisecond
	case interval >= time.Microsecond:
		ac.precision = time.Microsecond
	default:
		ac.precision = time.Nanosecond
	}
}

func (ac *accumulator) DisablePrecision() {
	ac.precision = time.Nanosecond
}

func (ac *accumulator) setDefaultTags(tags map[string]string) {
	ac.defaultTags = tags
}
agent/accumulator_test.go

@@ -1,8 +1,11 @@

package agent

import (
+	"bytes"
	"fmt"
+	"log"
	"math"
+	"os"
	"testing"
	"time"

@@ -10,6 +13,7 @@ import (

	"github.com/influxdata/telegraf/internal/models"

	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
)

func TestAdd(t *testing.T) {
|
|||
now := time.Now()
|
||||
a.metrics = make(chan telegraf.Metric, 10)
|
||||
defer close(a.metrics)
|
||||
a.inputConfig = &internal_models.InputConfig{}
|
||||
a.inputConfig = &models.InputConfig{}
|
||||
|
||||
a.Add("acctest", float64(101), map[string]string{})
|
||||
a.Add("acctest", float64(101), map[string]string{"acc": "test"})
|
||||
|
@@ -38,13 +42,135 @@ func TestAdd(t *testing.T) {

		actual)
}

func TestAddNoPrecisionWithInterval(t *testing.T) {
	a := accumulator{}
	now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC)
	a.metrics = make(chan telegraf.Metric, 10)
	defer close(a.metrics)
	a.inputConfig = &models.InputConfig{}

	a.SetPrecision(0, time.Second)
	a.Add("acctest", float64(101), map[string]string{})
	a.Add("acctest", float64(101), map[string]string{"acc": "test"})
	a.Add("acctest", float64(101), map[string]string{"acc": "test"}, now)

	testm := <-a.metrics
	actual := testm.String()
	assert.Contains(t, actual, "acctest value=101")

	testm = <-a.metrics
	actual = testm.String()
	assert.Contains(t, actual, "acctest,acc=test value=101")

	testm = <-a.metrics
	actual = testm.String()
	assert.Equal(t,
		fmt.Sprintf("acctest,acc=test value=101 %d", int64(1139572800000000000)),
		actual)
}

func TestAddNoIntervalWithPrecision(t *testing.T) {
	a := accumulator{}
	now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC)
	a.metrics = make(chan telegraf.Metric, 10)
	defer close(a.metrics)
	a.inputConfig = &models.InputConfig{}

	a.SetPrecision(time.Second, time.Millisecond)
	a.Add("acctest", float64(101), map[string]string{})
	a.Add("acctest", float64(101), map[string]string{"acc": "test"})
	a.Add("acctest", float64(101), map[string]string{"acc": "test"}, now)

	testm := <-a.metrics
	actual := testm.String()
	assert.Contains(t, actual, "acctest value=101")

	testm = <-a.metrics
	actual = testm.String()
	assert.Contains(t, actual, "acctest,acc=test value=101")

	testm = <-a.metrics
	actual = testm.String()
	assert.Equal(t,
		fmt.Sprintf("acctest,acc=test value=101 %d", int64(1139572800000000000)),
		actual)
}

func TestAddDisablePrecision(t *testing.T) {
	a := accumulator{}
	now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC)
	a.metrics = make(chan telegraf.Metric, 10)
	defer close(a.metrics)
	a.inputConfig = &models.InputConfig{}

	a.SetPrecision(time.Second, time.Millisecond)
	a.DisablePrecision()
	a.Add("acctest", float64(101), map[string]string{})
	a.Add("acctest", float64(101), map[string]string{"acc": "test"})
	a.Add("acctest", float64(101), map[string]string{"acc": "test"}, now)

	testm := <-a.metrics
	actual := testm.String()
	assert.Contains(t, actual, "acctest value=101")

	testm = <-a.metrics
	actual = testm.String()
	assert.Contains(t, actual, "acctest,acc=test value=101")

	testm = <-a.metrics
	actual = testm.String()
	assert.Equal(t,
		fmt.Sprintf("acctest,acc=test value=101 %d", int64(1139572800082912748)),
		actual)
}

func TestDifferentPrecisions(t *testing.T) {
	a := accumulator{}
	now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC)
	a.metrics = make(chan telegraf.Metric, 10)
	defer close(a.metrics)
	a.inputConfig = &models.InputConfig{}

	a.SetPrecision(0, time.Second)
	a.Add("acctest", float64(101), map[string]string{"acc": "test"}, now)
	testm := <-a.metrics
	actual := testm.String()
	assert.Equal(t,
		fmt.Sprintf("acctest,acc=test value=101 %d", int64(1139572800000000000)),
		actual)

	a.SetPrecision(0, time.Millisecond)
	a.Add("acctest", float64(101), map[string]string{"acc": "test"}, now)
	testm = <-a.metrics
	actual = testm.String()
	assert.Equal(t,
		fmt.Sprintf("acctest,acc=test value=101 %d", int64(1139572800083000000)),
		actual)

	a.SetPrecision(0, time.Microsecond)
	a.Add("acctest", float64(101), map[string]string{"acc": "test"}, now)
	testm = <-a.metrics
	actual = testm.String()
	assert.Equal(t,
		fmt.Sprintf("acctest,acc=test value=101 %d", int64(1139572800082913000)),
		actual)

	a.SetPrecision(0, time.Nanosecond)
	a.Add("acctest", float64(101), map[string]string{"acc": "test"}, now)
	testm = <-a.metrics
	actual = testm.String()
	assert.Equal(t,
		fmt.Sprintf("acctest,acc=test value=101 %d", int64(1139572800082912748)),
		actual)
}

func TestAddDefaultTags(t *testing.T) {
	a := accumulator{}
	a.addDefaultTag("default", "tag")
	now := time.Now()
	a.metrics = make(chan telegraf.Metric, 10)
	defer close(a.metrics)
-	a.inputConfig = &internal_models.InputConfig{}
+	a.inputConfig = &models.InputConfig{}

	a.Add("acctest", float64(101), map[string]string{})
	a.Add("acctest", float64(101), map[string]string{"acc": "test"})
@@ -70,7 +196,7 @@ func TestAddFields(t *testing.T) {

	now := time.Now()
	a.metrics = make(chan telegraf.Metric, 10)
	defer close(a.metrics)
-	a.inputConfig = &internal_models.InputConfig{}
+	a.inputConfig = &models.InputConfig{}

	fields := map[string]interface{}{
		"usage": float64(99),
@@ -103,7 +229,7 @@ func TestAddInfFields(t *testing.T) {

	now := time.Now()
	a.metrics = make(chan telegraf.Metric, 10)
	defer close(a.metrics)
-	a.inputConfig = &internal_models.InputConfig{}
+	a.inputConfig = &models.InputConfig{}

	fields := map[string]interface{}{
		"usage": inf,
@@ -131,7 +257,7 @@ func TestAddNaNFields(t *testing.T) {

	now := time.Now()
	a.metrics = make(chan telegraf.Metric, 10)
	defer close(a.metrics)
-	a.inputConfig = &internal_models.InputConfig{}
+	a.inputConfig = &models.InputConfig{}

	fields := map[string]interface{}{
		"usage": nan,
@@ -155,7 +281,7 @@ func TestAddUint64Fields(t *testing.T) {

	now := time.Now()
	a.metrics = make(chan telegraf.Metric, 10)
	defer close(a.metrics)
-	a.inputConfig = &internal_models.InputConfig{}
+	a.inputConfig = &models.InputConfig{}

	fields := map[string]interface{}{
		"usage": uint64(99),
@@ -184,7 +310,7 @@ func TestAddUint64Overflow(t *testing.T) {

	now := time.Now()
	a.metrics = make(chan telegraf.Metric, 10)
	defer close(a.metrics)
-	a.inputConfig = &internal_models.InputConfig{}
+	a.inputConfig = &models.InputConfig{}

	fields := map[string]interface{}{
		"usage": uint64(9223372036854775808),
@@ -214,7 +340,7 @@ func TestAddInts(t *testing.T) {

	now := time.Now()
	a.metrics = make(chan telegraf.Metric, 10)
	defer close(a.metrics)
-	a.inputConfig = &internal_models.InputConfig{}
+	a.inputConfig = &models.InputConfig{}

	a.Add("acctest", int(101), map[string]string{})
	a.Add("acctest", int32(101), map[string]string{"acc": "test"})
@@ -241,7 +367,7 @@ func TestAddFloats(t *testing.T) {

	now := time.Now()
	a.metrics = make(chan telegraf.Metric, 10)
	defer close(a.metrics)
-	a.inputConfig = &internal_models.InputConfig{}
+	a.inputConfig = &models.InputConfig{}

	a.Add("acctest", float32(101), map[string]string{"acc": "test"})
	a.Add("acctest", float64(101), map[string]string{"acc": "test"}, now)
@@ -263,7 +389,7 @@ func TestAddStrings(t *testing.T) {

	now := time.Now()
	a.metrics = make(chan telegraf.Metric, 10)
	defer close(a.metrics)
-	a.inputConfig = &internal_models.InputConfig{}
+	a.inputConfig = &models.InputConfig{}

	a.Add("acctest", "test", map[string]string{"acc": "test"})
	a.Add("acctest", "foo", map[string]string{"acc": "test"}, now)
@@ -285,7 +411,7 @@ func TestAddBools(t *testing.T) {

	now := time.Now()
	a.metrics = make(chan telegraf.Metric, 10)
	defer close(a.metrics)
-	a.inputConfig = &internal_models.InputConfig{}
+	a.inputConfig = &models.InputConfig{}

	a.Add("acctest", true, map[string]string{"acc": "test"})
	a.Add("acctest", false, map[string]string{"acc": "test"}, now)
@@ -307,11 +433,11 @@ func TestAccFilterTags(t *testing.T) {

	now := time.Now()
	a.metrics = make(chan telegraf.Metric, 10)
	defer close(a.metrics)
-	filter := internal_models.Filter{
+	filter := models.Filter{
		TagExclude: []string{"acc"},
	}
	assert.NoError(t, filter.CompileFilter())
-	a.inputConfig = &internal_models.InputConfig{}
+	a.inputConfig = &models.InputConfig{}
	a.inputConfig.Filter = filter

	a.Add("acctest", float64(101), map[string]string{})
@@ -332,3 +458,27 @@ func TestAccFilterTags(t *testing.T) {

		fmt.Sprintf("acctest value=101 %d", now.UnixNano()),
		actual)
}

func TestAccAddError(t *testing.T) {
	errBuf := bytes.NewBuffer(nil)
	log.SetOutput(errBuf)
	defer log.SetOutput(os.Stderr)

	a := accumulator{}
	a.inputConfig = &models.InputConfig{}
	a.inputConfig.Name = "mock_plugin"

	a.AddError(fmt.Errorf("foo"))
	a.AddError(fmt.Errorf("bar"))
	a.AddError(fmt.Errorf("baz"))

	errs := bytes.Split(errBuf.Bytes(), []byte{'\n'})
	assert.EqualValues(t, 3, a.errCount)
	require.Len(t, errs, 4) // 4 because of trailing newline
	assert.Contains(t, string(errs[0]), "mock_plugin")
	assert.Contains(t, string(errs[0]), "foo")
	assert.Contains(t, string(errs[1]), "mock_plugin")
	assert.Contains(t, string(errs[1]), "bar")
	assert.Contains(t, string(errs[2]), "mock_plugin")
	assert.Contains(t, string(errs[2]), "baz")
}
agent/agent.go

@@ -88,7 +88,7 @@ func (a *Agent) Close() error {

	return err
}

-func panicRecover(input *internal_models.RunningInput) {
+func panicRecover(input *models.RunningInput) {
	if err := recover(); err != nil {
		trace := make([]byte, 2048)
		runtime.Stack(trace, true)
@@ -104,7 +104,7 @@ func panicRecover(input *models.RunningInput) {

// reporting interval.
func (a *Agent) gatherer(
	shutdown chan struct{},
-	input *internal_models.RunningInput,
+	input *models.RunningInput,
	interval time.Duration,
	metricC chan telegraf.Metric,
) error {
@@ -118,6 +118,8 @@ func (a *Agent) gatherer(

	acc := NewAccumulator(input.Config, metricC)
	acc.SetDebug(a.Config.Agent.Debug)
+	acc.SetPrecision(a.Config.Agent.Precision.Duration,
+		a.Config.Agent.Interval.Duration)
	acc.setDefaultTags(a.Config.Tags)

	internal.RandomSleep(a.Config.Agent.CollectionJitter.Duration, shutdown)
@@ -150,7 +152,7 @@ func (a *Agent) gatherer(

// over.
func gatherWithTimeout(
	shutdown chan struct{},
-	input *internal_models.RunningInput,
+	input *models.RunningInput,
	acc *accumulator,
	timeout time.Duration,
) {
@@ -201,6 +203,8 @@ func (a *Agent) Test() error {

	for _, input := range a.Config.Inputs {
		acc := NewAccumulator(input.Config, metricC)
		acc.SetTrace(true)
+		acc.SetPrecision(a.Config.Agent.Precision.Duration,
+			a.Config.Agent.Interval.Duration)
		acc.setDefaultTags(a.Config.Tags)

		fmt.Printf("* Plugin: %s, Collection 1\n", input.Name)
@@ -211,6 +215,9 @@ func (a *Agent) Test() error {

	if err := input.Input.Gather(acc); err != nil {
		return err
	}
+	if acc.errCount > 0 {
+		return fmt.Errorf("Errors encountered during processing")
+	}

	// Special instructions for some inputs. cpu, for example, needs to be
	// run twice in order to return cpu usage percentages.
@@ -233,7 +240,7 @@ func (a *Agent) flush() {

	wg.Add(len(a.Config.Outputs))
	for _, o := range a.Config.Outputs {
-		go func(output *internal_models.RunningOutput) {
+		go func(output *models.RunningOutput) {
			defer wg.Done()
			err := output.Write()
			if err != nil {
@@ -264,13 +271,33 @@ func (a *Agent) flusher(shutdown chan struct{}, metricC chan telegraf.Metric) er

			internal.RandomSleep(a.Config.Agent.FlushJitter.Duration, shutdown)
			a.flush()
		case m := <-metricC:
-			for _, o := range a.Config.Outputs {
-				o.AddMetric(m)
+			for i, o := range a.Config.Outputs {
+				if i == len(a.Config.Outputs)-1 {
+					o.AddMetric(m)
+				} else {
+					o.AddMetric(copyMetric(m))
+				}
			}
		}
	}
}

func copyMetric(m telegraf.Metric) telegraf.Metric {
	t := time.Time(m.Time())

	tags := make(map[string]string)
	fields := make(map[string]interface{})
	for k, v := range m.Tags() {
		tags[k] = v
	}
	for k, v := range m.Fields() {
		fields[k] = v
	}

	out, _ := telegraf.NewMetric(m.Name(), tags, fields, t)
	return out
}
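A note on the design choice here: each output buffers and may mutate the
metrics handed to it, so every output except the last receives its own copy via
`copyMetric`, while the final output takes ownership of the original. This
appears intended to keep multiple outputs from sharing mutable state while
avoiding one unnecessary copy per metric.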
// Run runs the agent daemon, gathering every Interval
func (a *Agent) Run(shutdown chan struct{}) error {
	var wg sync.WaitGroup

@@ -289,6 +316,9 @@ func (a *Agent) Run(shutdown chan struct{}) error {

	case telegraf.ServiceInput:
		acc := NewAccumulator(input.Config, metricC)
		acc.SetDebug(a.Config.Agent.Debug)
+		// Service input plugins should set their own precision of their
+		// metrics.
+		acc.DisablePrecision()
		acc.setDefaultTags(a.Config.Tags)
		if err := p.Start(acc); err != nil {
			log.Printf("Service for input %s failed to start, exiting\n%s\n",

@@ -321,7 +351,7 @@ func (a *Agent) Run(shutdown chan struct{}) error {

	if input.Config.Interval != 0 {
		interval = input.Config.Interval
	}
-	go func(in *internal_models.RunningInput, interv time.Duration) {
+	go func(in *models.RunningInput, interv time.Duration) {
		defer wg.Done()
		if err := a.gatherer(shutdown, in, interv, metricC); err != nil {
			log.Printf(err.Error())
cmd/telegraf/telegraf.go

@@ -6,6 +6,7 @@ import (

	"log"
	"os"
	"os/signal"
+	"runtime"
	"strings"
	"syscall"

@@ -15,6 +16,7 @@ import (

	_ "github.com/influxdata/telegraf/plugins/inputs/all"
	"github.com/influxdata/telegraf/plugins/outputs"
	_ "github.com/influxdata/telegraf/plugins/outputs/all"
+	"github.com/kardianos/service"
)

var fDebug = flag.Bool("debug", false,
@@ -39,12 +41,8 @@ var fOutputList = flag.Bool("output-list", false,

	"print available output plugins.")
var fUsage = flag.String("usage", "",
	"print usage for a plugin, ie, 'telegraf -usage mysql'")
-var fInputFiltersLegacy = flag.String("filter", "",
-	"filter the inputs to enable, separator is :")
-var fOutputFiltersLegacy = flag.String("outputfilter", "",
-	"filter the outputs to enable, separator is :")
-var fConfigDirectoryLegacy = flag.String("configdirectory", "",
-	"directory containing additional *.conf files")
+var fService = flag.String("service", "",
+	"operate on the service")

// Telegraf version, populated by the linker.
// ie, -ldflags "-X main.version=`git describe --always --tags`"
@@ -74,6 +72,7 @@ The flags are:

	-debug     print metrics as they're generated to stdout
	-quiet     run in quiet mode
	-version   print the version to stdout
+	-service   Control the service, ie, 'telegraf -service install (windows only)'

In addition to the -config flag, telegraf will also load the config file from
an environment variable or default location. Precedence is:
@@ -100,7 +99,22 @@ Examples:

	telegraf -config telegraf.conf -input-filter cpu:mem -output-filter influxdb
`

-func main() {
+var logger service.Logger
+
+var stop chan struct{}
+
+var srvc service.Service
+var svcConfig *service.Config
+
+type program struct{}
+
+func reloadLoop(stop chan struct{}, s service.Service) {
+	defer func() {
+		if service.Interactive() {
+			os.Exit(0)
+		}
+		return
+	}()
	reload := make(chan bool, 1)
	reload <- true
	for <-reload {
@@ -110,24 +124,11 @@ func main() {

	args := flag.Args()

	var inputFilters []string
-	if *fInputFiltersLegacy != "" {
-		fmt.Printf("WARNING '--filter' flag is deprecated, please use" +
-			" '--input-filter'")
-		inputFilter := strings.TrimSpace(*fInputFiltersLegacy)
-		inputFilters = strings.Split(":"+inputFilter+":", ":")
-	}
	if *fInputFilters != "" {
		inputFilter := strings.TrimSpace(*fInputFilters)
		inputFilters = strings.Split(":"+inputFilter+":", ":")
	}

	var outputFilters []string
-	if *fOutputFiltersLegacy != "" {
-		fmt.Printf("WARNING '--outputfilter' flag is deprecated, please use" +
-			" '--output-filter'")
-		outputFilter := strings.TrimSpace(*fOutputFiltersLegacy)
-		outputFilters = strings.Split(":"+outputFilter+":", ":")
-	}
	if *fOutputFilters != "" {
		outputFilter := strings.TrimSpace(*fOutputFilters)
		outputFilters = strings.Split(":"+outputFilter+":", ":")
@@ -145,40 +146,43 @@ func main() {

		}
	}

+	// switch for flags which just do something and exit immediately
+	switch {
-	if *fOutputList {
+	case *fOutputList:
		fmt.Println("Available Output Plugins:")
		for k, _ := range outputs.Outputs {
			fmt.Printf(" %s\n", k)
		}
		return
-	}

-	if *fInputList {
+	case *fInputList:
		fmt.Println("Available Input Plugins:")
		for k, _ := range inputs.Inputs {
			fmt.Printf(" %s\n", k)
		}
		return
-	}

-	if *fVersion {
+	case *fVersion:
		v := fmt.Sprintf("Telegraf - version %s", version)
		fmt.Println(v)
		return
-	}

-	if *fSampleConfig {
+	case *fSampleConfig:
		config.PrintSampleConfig(inputFilters, outputFilters)
		return
-	}

-	if *fUsage != "" {
+	case *fUsage != "":
		if err := config.PrintInputConfig(*fUsage); err != nil {
			if err2 := config.PrintOutputConfig(*fUsage); err2 != nil {
				log.Fatalf("%s and %s", err, err2)
			}
		}
		return
+	case *fService != "" && runtime.GOOS == "windows":
+		if *fConfig != "" {
+			(*svcConfig).Arguments = []string{"-config", *fConfig}
+		}
+		err := service.Control(s, *fService)
+		if err != nil {
+			log.Fatal(err)
+		}
+		return
	}

	// If no other options are specified, load the config file and run.
@@ -191,15 +195,6 @@ func main() {

		os.Exit(1)
	}

-	if *fConfigDirectoryLegacy != "" {
-		fmt.Printf("WARNING '--configdirectory' flag is deprecated, please use" +
-			" '--config-directory'")
-		err = c.LoadDirectory(*fConfigDirectoryLegacy)
-		if err != nil {
-			log.Fatal(err)
-		}
-	}

	if *fConfigDirectory != "" {
		err = c.LoadDirectory(*fConfigDirectory)
		if err != nil {
@ -243,14 +238,18 @@ func main() {
|
|||
signals := make(chan os.Signal)
|
||||
signal.Notify(signals, os.Interrupt, syscall.SIGHUP)
|
||||
go func() {
|
||||
sig := <-signals
|
||||
if sig == os.Interrupt {
|
||||
close(shutdown)
|
||||
}
|
||||
if sig == syscall.SIGHUP {
|
||||
log.Printf("Reloading Telegraf config\n")
|
||||
<-reload
|
||||
reload <- true
|
||||
select {
|
||||
case sig := <-signals:
|
||||
if sig == os.Interrupt {
|
||||
close(shutdown)
|
||||
}
|
||||
if sig == syscall.SIGHUP {
|
||||
log.Printf("Reloading Telegraf config\n")
|
||||
<-reload
|
||||
reload <- true
|
||||
close(shutdown)
|
||||
}
|
||||
case <-stop:
|
||||
close(shutdown)
|
||||
}
|
||||
}()
|
||||
|
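The buffered `reload` channel is the pivot of the whole loop: it always holds exactly one pending decision, and the SIGHUP branch swaps that decision from false to true before closing `shutdown`, so the current pass stops and a fresh one starts. Below is a self-contained sketch of the pattern; the `reload <- false` re-arm sits in a hunk elided above, so this is a best-effort reconstruction, not the commit's exact code:

```go
package main

import "fmt"

func main() {
	reload := make(chan bool, 1)
	reload <- true
	pass := 0
	for <-reload {
		pass++
		reload <- false // by default, this is the last pass
		if pass == 1 {  // simulate one SIGHUP: swap the pending decision
			<-reload
			reload <- true
		}
		fmt.Println("finished pass", pass)
	}
	// Prints "finished pass 1" and "finished pass 2", then exits.
}
```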
@ -279,3 +278,46 @@ func usageExit(rc int) {
|
|||
fmt.Println(usage)
|
||||
os.Exit(rc)
|
||||
}
|
||||
|
||||
func (p *program) Start(s service.Service) error {
|
||||
srvc = s
|
||||
go p.run()
|
||||
return nil
|
||||
}
|
||||
func (p *program) run() {
|
||||
stop = make(chan struct{})
|
||||
reloadLoop(stop, srvc)
|
||||
}
|
||||
func (p *program) Stop(s service.Service) error {
|
||||
close(stop)
|
||||
return nil
|
||||
}
|
||||
|
||||
func main() {
|
||||
if runtime.GOOS == "windows" {
|
||||
svcConfig = &service.Config{
|
||||
Name: "telegraf",
|
||||
DisplayName: "Telegraf Data Collector Service",
|
||||
Description: "Collects data using a series of plugins and publishes it to" +
|
||||
"another series of plugins.",
|
||||
Arguments: []string{"-config", "C:\\Program Files\\Telegraf\\telegraf.conf"},
|
||||
}
|
||||
|
||||
prg := &program{}
|
||||
s, err := service.New(prg, svcConfig)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
logger, err = s.Logger(nil)
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
err = s.Run()
|
||||
if err != nil {
|
||||
logger.Error(err)
|
||||
}
|
||||
} else {
|
||||
stop = make(chan struct{})
|
||||
reloadLoop(stop, nil)
|
||||
}
|
||||
}
|
||||
|
|
|
@ -16,6 +16,7 @@
|
|||
- github.com/hashicorp/go-msgpack [BSD LICENSE](https://github.com/hashicorp/go-msgpack/blob/master/LICENSE)
|
||||
- github.com/hashicorp/raft [MPL LICENSE](https://github.com/hashicorp/raft/blob/master/LICENSE)
|
||||
- github.com/hashicorp/raft-boltdb [MPL LICENSE](https://github.com/hashicorp/raft-boltdb/blob/master/LICENSE)
|
||||
- github.com/kardianos/service [ZLIB LICENSE](https://github.com/kardianos/service/blob/master/LICENSE) (License not named but matches word for word with ZLib)
|
||||
- github.com/lib/pq [MIT LICENSE](https://github.com/lib/pq/blob/master/LICENSE.md)
|
||||
- github.com/matttproud/golang_protobuf_extensions [APACHE LICENSE](https://github.com/matttproud/golang_protobuf_extensions/blob/master/LICENSE)
|
||||
- github.com/naoina/go-stringutil [MIT LICENSE](https://github.com/naoina/go-stringutil/blob/master/LICENSE)
|
||||
|
|
|
@ -1,36 +1,40 @@
# Running Telegraf as a Windows Service

Telegraf natively supports running as a Windows Service. Outlined below are
the general steps to set it up.

1. Obtain the telegraf windows distribution
2. Create the directory `C:\Program Files\Telegraf` (if you install in a different
location simply specify the `-config` parameter with the desired location)
3. Place the telegraf.exe and the config file into `C:\Program Files\Telegraf`
4. To install the service into the Windows Service Manager, run (as an
administrator):

```
> C:\Program Files\Telegraf\telegraf.exe --service install
```

5. Edit the configuration file to meet your needs
6. To check that it works, run:

```
> C:\Program Files\Telegraf\telegraf.exe --config C:\Program Files\Telegraf\telegraf.conf --test
```

7. To start collecting data, run:

```
> net start telegraf
```

## Other supported operations

Telegraf can manage its own service through the --service flag:

| Command                            | Effect                        |
|------------------------------------|-------------------------------|
| `telegraf.exe --service install`   | Install telegraf as a service |
| `telegraf.exe --service uninstall` | Remove the telegraf service   |
| `telegraf.exe --service start`     | Start the telegraf service    |
| `telegraf.exe --service stop`      | Stop the telegraf service     |
|
|
@ -52,6 +52,11 @@
|
|||
## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
|
||||
flush_jitter = "0s"
|
||||
|
||||
## By default, precision will be set to the same timestamp order as the
|
||||
## collection interval, with the maximum being 1s.
|
||||
## Precision will NOT be used for service inputs, such as logparser and statsd.
|
||||
## Valid values are "ns", "us" (or "µs"), "ms", "s".
|
||||
precision = ""
|
||||
## Run telegraf in debug mode
|
||||
debug = false
|
||||
## Run telegraf in quiet mode
|
||||
|
@ -75,13 +80,10 @@
|
|||
urls = ["http://localhost:8086"] # required
|
||||
## The target database for metrics (telegraf will create it if not exists).
|
||||
database = "telegraf" # required
|
||||
## Precision of writes, valid values are "ns", "us" (or "µs"), "ms", "s", "m", "h".
|
||||
## note: using "s" precision greatly improves InfluxDB compression.
|
||||
precision = "s"
|
||||
|
||||
## Retention policy to write to.
|
||||
retention_policy = "default"
|
||||
## Write consistency (clusters only), can be: "any", "one", "quorom", "all"
|
||||
## Retention policy to write to. Empty string writes to the default rp.
|
||||
retention_policy = ""
|
||||
## Write consistency (clusters only), can be: "any", "one", "quorum", "all"
|
||||
write_consistency = "any"
|
||||
|
||||
## Write timeout (for the InfluxDB client), formatted as a string.
|
||||
|
@ -195,6 +197,8 @@
|
|||
# # Configuration for Graphite server to send metrics to
|
||||
# [[outputs.graphite]]
|
||||
# ## TCP endpoint for your graphite instance.
|
||||
# ## If multiple endpoints are configured, output will be load balanced.
|
||||
# ## Only one of the endpoints will be written to with each iteration.
|
||||
# servers = ["localhost:2003"]
|
||||
# ## Prefix metrics name
|
||||
# prefix = ""
|
||||
|
@ -317,14 +321,13 @@
|
|||
# api_token = "my-secret-token" # required.
|
||||
# ## Debug
|
||||
# # debug = false
|
||||
# ## Tag Field to populate source attribute (optional)
|
||||
# ## This is typically the _hostname_ from which the metric was obtained.
|
||||
# source_tag = "host"
|
||||
# ## Connection timeout.
|
||||
# # timeout = "5s"
|
||||
# ## Output Name Template (same as graphite buckets)
|
||||
# ## Output source Template (same as graphite buckets)
|
||||
# ## see https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md#graphite
|
||||
# template = "host.tags.measurement.field"
|
||||
# ## This template is used in librato's source (not metric's name)
|
||||
# template = "host"
|
||||
#
|
||||
|
||||
|
||||
# # Configuration for MQTT server to send metrics to
|
||||
|
@ -432,8 +435,8 @@
|
|||
## disk partitions.
|
||||
## Setting devices will restrict the stats to the specified devices.
|
||||
# devices = ["sda", "sdb"]
|
||||
## Uncomment the following line if you do not need disk serial numbers.
|
||||
# skip_serial_number = true
|
||||
## Uncomment the following line if you need disk serial numbers.
|
||||
# skip_serial_number = false
|
||||
|
||||
|
||||
# Get kernel statistics from /proc/stat
|
||||
|
@ -461,7 +464,7 @@
|
|||
# no configuration
|
||||
|
||||
|
||||
# # Read stats from an aerospike server
|
||||
# # Read stats from aerospike server(s)
|
||||
# [[inputs.aerospike]]
|
||||
# ## Aerospike servers to connect to (with port)
|
||||
# ## This plugin will query all namespaces the aerospike
|
||||
|
@ -524,6 +527,19 @@
|
|||
# socket_suffix = "asok"
|
||||
|
||||
|
||||
# # Read specific statistics per cgroup
|
||||
# [[inputs.cgroup]]
|
||||
# ## Directories in which to look for files, globs are supported.
|
||||
# # paths = [
|
||||
# # "/cgroup/memory",
|
||||
# # "/cgroup/memory/child1",
|
||||
# # "/cgroup/memory/child2/*",
|
||||
# # ]
|
||||
# ## cgroup stat fields, as file names, globs are supported.
|
||||
# ## these file names are appended to each path from above.
|
||||
# # files = ["memory.*usage*", "memory.limit_in_bytes"]
|
||||
|
||||
|
||||
# # Pull Metric Statistics from Amazon CloudWatch
|
||||
# [[inputs.cloudwatch]]
|
||||
# ## Amazon Region
|
||||
|
@ -649,6 +665,13 @@
|
|||
# container_names = []
|
||||
# ## Timeout for docker list, info, and stats commands
|
||||
# timeout = "5s"
|
||||
#
|
||||
# ## Whether to report for each container per-device blkio (8:0, 8:1...) and
|
||||
# ## network (eth0, eth1, ...) stats or not
|
||||
# perdevice = true
|
||||
# ## Whether to report for each container total blkio and network stats or not
|
||||
# total = false
|
||||
#
|
||||
|
||||
|
||||
# # Read statistics from one or many dovecot servers
|
||||
|
@ -677,6 +700,13 @@
|
|||
#
|
||||
# ## set cluster_health to true when you want to also obtain cluster level stats
|
||||
# cluster_health = false
|
||||
#
|
||||
# ## Optional SSL Config
|
||||
# # ssl_ca = "/etc/telegraf/ca.pem"
|
||||
# # ssl_cert = "/etc/telegraf/cert.pem"
|
||||
# # ssl_key = "/etc/telegraf/key.pem"
|
||||
# ## Use SSL but skip chain & host verification
|
||||
# # insecure_skip_verify = false
|
||||
|
||||
|
||||
# # Read metrics from one or more commands that can output to stdout
|
||||
|
@ -758,9 +788,11 @@
|
|||
# [[inputs.haproxy]]
|
||||
# ## An array of addresses to gather stats about. Specify an ip or hostname
# ## with optional port. ie localhost, 10.10.3.33:1936, etc.
|
||||
#
|
||||
# ## If no servers are specified, then default to 127.0.0.1:1936
|
||||
# servers = ["http://myhaproxy.com:1936", "http://anotherhaproxy.com:1936"]
|
||||
# ## Make sure you specify the complete path to the stats endpoint
|
||||
# ## ie 10.10.3.33:1936/haproxy?stats
|
||||
# #
|
||||
# ## If no servers are specified, then default to 127.0.0.1:1936/haproxy?stats
|
||||
# servers = ["http://myhaproxy.com:1936/haproxy?stats"]
|
||||
# ## Or you can also use local socket
|
||||
# ## servers = ["socket:/run/haproxy/admin.sock"]
|
||||
|
||||
|
@ -946,21 +978,35 @@
|
|||
|
||||
# # Telegraf plugin for gathering metrics from N Mesos masters
|
||||
# [[inputs.mesos]]
|
||||
# # Timeout, in ms.
|
||||
# ## Timeout, in ms.
|
||||
# timeout = 100
|
||||
# # A list of Mesos masters, default value is localhost:5050.
|
||||
# ## A list of Mesos masters.
|
||||
# masters = ["localhost:5050"]
|
||||
# # Metrics groups to be collected, by default, all enabled.
|
||||
# ## Master metrics groups to be collected, by default, all enabled.
|
||||
# master_collections = [
|
||||
# "resources",
|
||||
# "master",
|
||||
# "system",
|
||||
# "slaves",
|
||||
# "agents",
|
||||
# "frameworks",
|
||||
# "tasks",
|
||||
# "messages",
|
||||
# "evqueue",
|
||||
# "registrar",
|
||||
# ]
|
||||
# ## A list of Mesos slaves, default is []
|
||||
# # slaves = []
|
||||
# ## Slave metrics groups to be collected, by default, all enabled.
|
||||
# # slave_collections = [
|
||||
# # "resources",
|
||||
# # "agent",
|
||||
# # "system",
|
||||
# # "executors",
|
||||
# # "tasks",
|
||||
# # "messages",
|
||||
# # ]
|
||||
# ## Include mesos tasks statistics, default is false
|
||||
# # slave_tasks = true
|
||||
|
||||
|
||||
# # Read metrics from one or many MongoDB servers
|
||||
|
@ -971,6 +1017,7 @@
|
|||
# ## mongodb://10.10.3.33:18832,
|
||||
# ## 10.0.0.1:10000, etc.
|
||||
# servers = ["127.0.0.1:27017"]
|
||||
# gather_perdb_stats = false
|
||||
|
||||
|
||||
# # Read metrics from one or many mysql servers
|
||||
|
@ -1077,9 +1124,9 @@
|
|||
# ## file paths for proc files. If empty default paths will be used:
|
||||
# ## /proc/net/netstat, /proc/net/snmp, /proc/net/snmp6
|
||||
# ## These can also be overridden with env variables, see README.
|
||||
# proc_net_netstat = ""
|
||||
# proc_net_snmp = ""
|
||||
# proc_net_snmp6 = ""
|
||||
# proc_net_netstat = "/proc/net/netstat"
|
||||
# proc_net_snmp = "/proc/net/snmp"
|
||||
# proc_net_snmp6 = "/proc/net/snmp6"
|
||||
# ## dump metrics with 0 values too
|
||||
# dump_zeros = true
|
||||
|
||||
|
@ -1103,6 +1150,23 @@
|
|||
# command = "passenger-status -v --show=xml"
|
||||
|
||||
|
||||
# # Read metrics from one or many pgbouncer servers
|
||||
# [[inputs.pgbouncer]]
|
||||
# ## specify address via a url matching:
|
||||
# ## postgres://[pqgotest[:password]]@localhost:port[/dbname]\
|
||||
# ## ?sslmode=[disable|verify-ca|verify-full]
|
||||
# ## or a simple string:
|
||||
# ## host=localhost user=pqotest port=6432 password=... sslmode=... dbname=pgbouncer
|
||||
# ##
|
||||
# ## All connection parameters are optional, except for dbname,
|
||||
# ## you need to set it always as pgbouncer.
|
||||
# address = "host=localhost user=postgres port=6432 sslmode=disable dbname=pgbouncer"
|
||||
#
|
||||
# ## A list of databases to pull metrics about. If not specified, metrics for all
|
||||
# ## databases are gathered.
|
||||
# # databases = ["app_production", "testing"]
|
||||
|
||||
|
||||
# # Read metrics of phpfpm, via HTTP status page or socket
|
||||
# [[inputs.phpfpm]]
|
||||
# ## An array of addresses to gather stats about. Specify an ip or hostname
|
||||
|
@ -1138,7 +1202,7 @@
|
|||
# count = 1 # required
|
||||
# ## interval, in s, at which to ping. 0 == default (ping -i <PING_INTERVAL>)
|
||||
# ping_interval = 0.0
|
||||
# ## ping timeout, in s. 0 == no timeout (ping -W <TIMEOUT>)
|
||||
# ## per-ping timeout, in s. 0 == no timeout (ping -W <TIMEOUT>)
|
||||
# timeout = 1.0
|
||||
# ## interface to send ping from (ping -I <INTERFACE>)
|
||||
# interface = ""
|
||||
|
@ -1257,10 +1321,15 @@
|
|||
# ## An array of urls to scrape metrics from.
|
||||
# urls = ["http://localhost:9100/metrics"]
|
||||
#
|
||||
# ## Use SSL but skip chain & host verification
|
||||
# # insecure_skip_verify = false
|
||||
# ## Use bearer token for authorization
|
||||
# # bearer_token = /path/to/bearer/token
|
||||
#
|
||||
# ## Optional SSL Config
|
||||
# # ssl_ca = /path/to/cafile
|
||||
# # ssl_cert = /path/to/certfile
|
||||
# # ssl_key = /path/to/keyfile
|
||||
# ## Use SSL but skip chain & host verification
|
||||
# # insecure_skip_verify = false
|
||||
|
||||
|
||||
# # Reads last_run_summary.yaml file and converts to measurements
|
||||
|
@ -1276,6 +1345,13 @@
|
|||
# # username = "guest"
|
||||
# # password = "guest"
|
||||
#
|
||||
# ## Optional SSL Config
|
||||
# # ssl_ca = "/etc/telegraf/ca.pem"
|
||||
# # ssl_cert = "/etc/telegraf/cert.pem"
|
||||
# # ssl_key = "/etc/telegraf/key.pem"
|
||||
# ## Use SSL but skip chain & host verification
|
||||
# # insecure_skip_verify = false
|
||||
#
|
||||
# ## A list of nodes to pull metrics about. If not specified, metrics for
|
||||
# ## all nodes are gathered.
|
||||
# # nodes = ["rabbit@node1", "rabbit@node2"]
|
||||
|
@ -1294,6 +1370,7 @@
|
|||
# ## e.g.
|
||||
# ## tcp://localhost:6379
|
||||
# ## tcp://:password@192.168.99.100
|
||||
# ## unix:///var/run/redis.sock
|
||||
# ##
|
||||
# ## If no servers are specified, then localhost is used as the host.
|
||||
# ## If no port is specified, 6379 is used
|
||||
|
@ -1316,8 +1393,8 @@
|
|||
# servers = ["http://localhost:8098"]
|
||||
|
||||
|
||||
# # Reads oids value from one or many snmp agents
|
||||
# [[inputs.snmp]]
|
||||
# # DEPRECATED! PLEASE USE inputs.snmp INSTEAD.
|
||||
# [[inputs.snmp_legacy]]
|
||||
# ## Use 'oids.txt' file to translate oids to names
|
||||
# ## To generate 'oids.txt' you need to run:
|
||||
# ## snmptranslate -m all -Tz -On | sed -e 's/"//g' > /tmp/oids.txt
|
||||
|
@ -1488,12 +1565,6 @@
|
|||
# SERVICE INPUT PLUGINS #
|
||||
###############################################################################
|
||||
|
||||
# # A Github Webhook Event collector
|
||||
# [[inputs.github_webhooks]]
|
||||
# ## Address and port to host Webhook listener on
|
||||
# service_address = ":1618"
|
||||
|
||||
|
||||
# # Read metrics from Kafka topic(s)
|
||||
# [[inputs.kafka_consumer]]
|
||||
# ## topic(s) to consume
|
||||
|
@ -1501,7 +1572,7 @@
|
|||
# ## an array of Zookeeper connection strings
|
||||
# zookeeper_peers = ["localhost:2181"]
|
||||
# ## Zookeeper Chroot
|
||||
# zookeeper_chroot = "/"
|
||||
# zookeeper_chroot = ""
|
||||
# ## the name of the consumer group
|
||||
# consumer_group = "telegraf_metrics_consumers"
|
||||
# ## Offset (must be either "oldest" or "newest")
|
||||
|
@ -1514,6 +1585,37 @@
|
|||
# data_format = "influx"
|
||||
|
||||
|
||||
# # Stream and parse log file(s).
|
||||
# [[inputs.logparser]]
|
||||
# ## Log files to parse.
|
||||
# ## These accept standard unix glob matching rules, but with the addition of
|
||||
# ## ** as a "super asterisk". ie:
|
||||
# ## /var/log/**.log -> recursively find all .log files in /var/log
|
||||
# ## /var/log/*/*.log -> find all .log files with a parent dir in /var/log
|
||||
# ## /var/log/apache.log -> only tail the apache log file
|
||||
# files = ["/var/log/apache/access.log"]
|
||||
# ## Read file from beginning.
|
||||
# from_beginning = false
|
||||
#
|
||||
# ## Parse logstash-style "grok" patterns:
|
||||
# ## Telegraf built-in parsing patterns: https://goo.gl/dkay10
|
||||
# [inputs.logparser.grok]
|
||||
# ## This is a list of patterns to check the given log file(s) for.
|
||||
# ## Note that adding patterns here increases processing time. The most
|
||||
# ## efficient configuration is to have one pattern per logparser.
|
||||
# ## Other common built-in patterns are:
|
||||
# ## %{COMMON_LOG_FORMAT} (plain apache & nginx access logs)
|
||||
# ## %{COMBINED_LOG_FORMAT} (access logs + referrer & agent)
|
||||
# patterns = ["%{COMBINED_LOG_FORMAT}"]
|
||||
# ## Name of the outputted measurement.
|
||||
# measurement = "apache_access_log"
|
||||
# ## Full path(s) to custom pattern files.
|
||||
# custom_pattern_files = []
|
||||
# ## Custom patterns can also be defined here. Put one pattern per line.
|
||||
# custom_patterns = '''
|
||||
# '''
|
||||
|
||||
|
||||
# # Read metrics from MQTT topic(s)
|
||||
# [[inputs.mqtt_consumer]]
|
||||
# servers = ["localhost:1883"]
|
||||
|
@ -1570,10 +1672,19 @@
|
|||
# data_format = "influx"
|
||||
|
||||
|
||||
# # A Rollbar Webhook Event collector
|
||||
# [[inputs.rollbar_webhooks]]
|
||||
# ## Address and port to host Webhook listener on
|
||||
# service_address = ":1619"
|
||||
# # Read NSQ topic for metrics.
|
||||
# [[inputs.nsq_consumer]]
|
||||
# ## A string representing the NSQD TCP endpoint
|
||||
# server = "localhost:4150"
|
||||
# topic = "telegraf"
|
||||
# channel = "consumer"
|
||||
# max_in_flight = 100
|
||||
#
|
||||
# ## Data format to consume.
|
||||
# ## Each data format has its own unique set of configuration options, read
|
||||
# ## more about them here:
|
||||
# ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
|
||||
# data_format = "influx"
|
||||
|
||||
|
||||
# # Statsd Server
|
||||
|
@ -1670,3 +1781,18 @@
|
|||
# ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
|
||||
# data_format = "influx"
|
||||
|
||||
|
||||
# # A Webhooks Event collector
|
||||
# [[inputs.webhooks]]
|
||||
# ## Address and port to host Webhook listener on
|
||||
# service_address = ":1619"
|
||||
#
|
||||
# [inputs.webhooks.github]
|
||||
# path = "/github"
|
||||
#
|
||||
# [inputs.webhooks.mandrill]
|
||||
# path = "/mandrill"
|
||||
#
|
||||
# [inputs.webhooks.rollbar]
|
||||
# path = "/rollbar"
|
||||
|
||||
|
|
|
@ -0,0 +1,79 @@
|
|||
package filter
|
||||
|
||||
import (
|
||||
"strings"
|
||||
|
||||
"github.com/gobwas/glob"
|
||||
)
|
||||
|
||||
type Filter interface {
|
||||
Match(string) bool
|
||||
}
|
||||
|
||||
// CompileFilter takes a list of string filters and returns a Filter interface
|
||||
// for matching a given string against the filter list. The filter list
|
||||
// supports glob matching too, ie:
|
||||
//
|
||||
// f, _ := CompileFilter([]string{"cpu", "mem", "net*"})
|
||||
// f.Match("cpu") // true
|
||||
// f.Match("network") // true
|
||||
// f.Match("memory") // false
|
||||
//
|
||||
func CompileFilter(filters []string) (Filter, error) {
|
||||
// return if there is nothing to compile
|
||||
if len(filters) == 0 {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
// check if we can compile a non-glob filter
|
||||
noGlob := true
|
||||
for _, filter := range filters {
|
||||
if hasMeta(filter) {
|
||||
noGlob = false
|
||||
break
|
||||
}
|
||||
}
|
||||
|
||||
switch {
|
||||
case noGlob:
|
||||
// return the fast non-globbing filter when no globs are needed.
|
||||
return compileFilterNoGlob(filters), nil
|
||||
case len(filters) == 1:
|
||||
return glob.Compile(filters[0])
|
||||
default:
|
||||
return glob.Compile("{" + strings.Join(filters, ",") + "}")
|
||||
}
|
||||
}
|
||||
|
||||
// hasMeta reports whether path contains any magic glob characters.
|
||||
func hasMeta(s string) bool {
|
||||
return strings.IndexAny(s, "*?[") >= 0
|
||||
}
|
||||
|
||||
type filter struct {
|
||||
m map[string]struct{}
|
||||
}
|
||||
|
||||
func (f *filter) Match(s string) bool {
|
||||
_, ok := f.m[s]
|
||||
return ok
|
||||
}
|
||||
|
||||
type filtersingle struct {
|
||||
s string
|
||||
}
|
||||
|
||||
func (f *filtersingle) Match(s string) bool {
|
||||
return f.s == s
|
||||
}
|
||||
|
||||
func compileFilterNoGlob(filters []string) Filter {
|
||||
if len(filters) == 1 {
|
||||
return &filtersingle{s: filters[0]}
|
||||
}
|
||||
out := filter{m: make(map[string]struct{})}
|
||||
for _, filter := range filters {
|
||||
out.m[filter] = struct{}{}
|
||||
}
|
||||
return &out
|
||||
}
|
|
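The new package is easiest to see end to end; here is a minimal usage sketch, with the import path taken from this commit's layout:

```go
package main

import (
	"fmt"
	"log"

	"github.com/influxdata/telegraf/filter"
)

func main() {
	// One glob in the list sends the whole set down the gobwas/glob path;
	// an all-literal list would get the map-based fast path instead.
	f, err := filter.CompileFilter([]string{"cpu", "mem", "net*"})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(f.Match("cpu"))     // true  (exact)
	fmt.Println(f.Match("network")) // true  (glob)
	fmt.Println(f.Match("disk"))    // false
}
```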
@ -0,0 +1,96 @@
|
|||
package filter
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"github.com/stretchr/testify/assert"
|
||||
)
|
||||
|
||||
func TestCompileFilter(t *testing.T) {
|
||||
f, err := CompileFilter([]string{})
|
||||
assert.NoError(t, err)
|
||||
assert.Nil(t, f)
|
||||
|
||||
f, err = CompileFilter([]string{"cpu"})
|
||||
assert.NoError(t, err)
|
||||
assert.True(t, f.Match("cpu"))
|
||||
assert.False(t, f.Match("cpu0"))
|
||||
assert.False(t, f.Match("mem"))
|
||||
|
||||
f, err = CompileFilter([]string{"cpu*"})
|
||||
assert.NoError(t, err)
|
||||
assert.True(t, f.Match("cpu"))
|
||||
assert.True(t, f.Match("cpu0"))
|
||||
assert.False(t, f.Match("mem"))
|
||||
|
||||
f, err = CompileFilter([]string{"cpu", "mem"})
|
||||
assert.NoError(t, err)
|
||||
assert.True(t, f.Match("cpu"))
|
||||
assert.False(t, f.Match("cpu0"))
|
||||
assert.True(t, f.Match("mem"))
|
||||
|
||||
f, err = CompileFilter([]string{"cpu", "mem", "net*"})
|
||||
assert.NoError(t, err)
|
||||
assert.True(t, f.Match("cpu"))
|
||||
assert.False(t, f.Match("cpu0"))
|
||||
assert.True(t, f.Match("mem"))
|
||||
assert.True(t, f.Match("network"))
|
||||
}
|
||||
|
||||
var benchbool bool
|
||||
|
||||
func BenchmarkFilterSingleNoGlobFalse(b *testing.B) {
|
||||
f, _ := CompileFilter([]string{"cpu"})
|
||||
var tmp bool
|
||||
for n := 0; n < b.N; n++ {
|
||||
tmp = f.Match("network")
|
||||
}
|
||||
benchbool = tmp
|
||||
}
|
||||
|
||||
func BenchmarkFilterSingleNoGlobTrue(b *testing.B) {
|
||||
f, _ := CompileFilter([]string{"cpu"})
|
||||
var tmp bool
|
||||
for n := 0; n < b.N; n++ {
|
||||
tmp = f.Match("cpu")
|
||||
}
|
||||
benchbool = tmp
|
||||
}
|
||||
|
||||
func BenchmarkFilter(b *testing.B) {
|
||||
f, _ := CompileFilter([]string{"cpu", "mem", "net*"})
|
||||
var tmp bool
|
||||
for n := 0; n < b.N; n++ {
|
||||
tmp = f.Match("network")
|
||||
}
|
||||
benchbool = tmp
|
||||
}
|
||||
|
||||
func BenchmarkFilterNoGlob(b *testing.B) {
|
||||
f, _ := CompileFilter([]string{"cpu", "mem", "net"})
|
||||
var tmp bool
|
||||
for n := 0; n < b.N; n++ {
|
||||
tmp = f.Match("net")
|
||||
}
|
||||
benchbool = tmp
|
||||
}
|
||||
|
||||
func BenchmarkFilter2(b *testing.B) {
|
||||
f, _ := CompileFilter([]string{"aa", "bb", "c", "ad", "ar", "at", "aq",
|
||||
"aw", "az", "axxx", "ab", "cpu", "mem", "net*"})
|
||||
var tmp bool
|
||||
for n := 0; n < b.N; n++ {
|
||||
tmp = f.Match("network")
|
||||
}
|
||||
benchbool = tmp
|
||||
}
|
||||
|
||||
func BenchmarkFilter2NoGlob(b *testing.B) {
|
||||
f, _ := CompileFilter([]string{"aa", "bb", "c", "ad", "ar", "at", "aq",
|
||||
"aw", "az", "axxx", "ab", "cpu", "mem", "net"})
|
||||
var tmp bool
|
||||
for n := 0; n < b.N; n++ {
|
||||
tmp = f.Match("net")
|
||||
}
|
||||
benchbool = tmp
|
||||
}
|
|
@ -9,6 +9,7 @@ import (
|
|||
"os"
|
||||
"path/filepath"
|
||||
"regexp"
|
||||
"runtime"
|
||||
"sort"
|
||||
"strings"
|
||||
"time"
|
||||
|
@ -47,8 +48,8 @@ type Config struct {
|
|||
OutputFilters []string
|
||||
|
||||
Agent *AgentConfig
|
||||
Inputs []*internal_models.RunningInput
|
||||
Outputs []*internal_models.RunningOutput
|
||||
Inputs []*models.RunningInput
|
||||
Outputs []*models.RunningOutput
|
||||
}
|
||||
|
||||
func NewConfig() *Config {
|
||||
|
@ -61,8 +62,8 @@ func NewConfig() *Config {
|
|||
},
|
||||
|
||||
Tags: make(map[string]string),
|
||||
Inputs: make([]*internal_models.RunningInput, 0),
|
||||
Outputs: make([]*internal_models.RunningOutput, 0),
|
||||
Inputs: make([]*models.RunningInput, 0),
|
||||
Outputs: make([]*models.RunningOutput, 0),
|
||||
InputFilters: make([]string, 0),
|
||||
OutputFilters: make([]string, 0),
|
||||
}
|
||||
|
@ -77,6 +78,14 @@ type AgentConfig struct {
|
|||
// ie, if Interval=10s then always collect on :00, :10, :20, etc.
|
||||
RoundInterval bool
|
||||
|
||||
// By default, precision will be set to the same timestamp order as the
|
||||
// collection interval, with the maximum being 1s.
|
||||
// ie, when interval = "10s", precision will be "1s"
|
||||
// when interval = "250ms", precision will be "1ms"
|
||||
// Precision will NOT be used for service inputs. It is up to each individual
|
||||
// service input to set the timestamp at the appropriate precision.
|
||||
Precision internal.Duration
|
||||
|
||||
// CollectionJitter is used to jitter the collection by a random amount.
|
||||
// Each plugin will sleep for a random time within jitter before collecting.
|
||||
// This can be used to avoid many plugins querying things like sysfs at the
|
||||
|
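The precision default described in the Precision comments above ("same timestamp order as the collection interval, with the maximum being 1s") boils down to a small switch; this is our own illustration of the documented rule, not the agent's actual code:

```go
package main

import (
	"fmt"
	"time"
)

// defaultPrecision mirrors the documented rule: interval "10s" -> "1s",
// interval "250ms" -> "1ms", never coarser than one second.
func defaultPrecision(interval time.Duration) time.Duration {
	switch {
	case interval >= time.Second:
		return time.Second
	case interval >= time.Millisecond:
		return time.Millisecond
	case interval >= time.Microsecond:
		return time.Microsecond
	default:
		return time.Nanosecond
	}
}

func main() {
	fmt.Println(defaultPrecision(10 * time.Second))       // 1s
	fmt.Println(defaultPrecision(250 * time.Millisecond)) // 1ms
}
```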
@ -108,11 +117,10 @@ type AgentConfig struct {
|
|||
// does _not_ deactivate FlushInterval.
|
||||
FlushBufferWhenFull bool
|
||||
|
||||
// TODO(cam): Remove UTC and Precision parameters, they are no longer
|
||||
// TODO(cam): Remove the UTC parameter, it is no longer
// valid for the agent config. Leaving it here for now for backwards-
// compatibility
|
||||
UTC bool `toml:"utc"`
|
||||
Precision string
|
||||
UTC bool `toml:"utc"`
|
||||
|
||||
// Debug is the option for running in debug mode
|
||||
Debug bool
|
||||
|
@ -132,7 +140,7 @@ func (c *Config) InputNames() []string {
|
|||
return name
|
||||
}
|
||||
|
||||
// Outputs returns a list of strings of the configured inputs.
|
||||
// Outputs returns a list of strings of the configured outputs.
|
||||
func (c *Config) OutputNames() []string {
|
||||
var name []string
|
||||
for _, output := range c.Outputs {
|
||||
|
@ -209,6 +217,11 @@ var header = `# Telegraf Configuration
|
|||
## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
|
||||
flush_jitter = "0s"
|
||||
|
||||
## By default, precision will be set to the same timestamp order as the
|
||||
## collection interval, with the maximum being 1s.
|
||||
## Precision will NOT be used for service inputs, such as logparser and statsd.
|
||||
## Valid values are "ns", "us" (or "µs"), "ms", "s".
|
||||
precision = ""
|
||||
## Run telegraf in debug mode
|
||||
debug = false
|
||||
## Run telegraf in quiet mode
|
||||
|
@ -420,6 +433,9 @@ func getDefaultConfigPath() (string, error) {
|
|||
envfile := os.Getenv("TELEGRAF_CONFIG_PATH")
|
||||
homefile := os.ExpandEnv("${HOME}/.telegraf/telegraf.conf")
|
||||
etcfile := "/etc/telegraf/telegraf.conf"
|
||||
if runtime.GOOS == "windows" {
|
||||
etcfile = `C:\Program Files\Telegraf\telegraf.conf`
|
||||
}
|
||||
for _, path := range []string{envfile, homefile, etcfile} {
|
||||
if _, err := os.Stat(path); err == nil {
|
||||
log.Printf("Using config file: %s", path)
|
||||
|
@ -527,6 +543,13 @@ func (c *Config) LoadConfig(path string) error {
|
|||
return nil
|
||||
}
|
||||
|
||||
// trimBOM trims the Byte-Order-Marks from the beginning of the file.
|
||||
// this is for Windows compatibility only.
|
||||
// see https://github.com/influxdata/telegraf/issues/1378
|
||||
func trimBOM(f []byte) []byte {
|
||||
return bytes.TrimPrefix(f, []byte("\xef\xbb\xbf"))
|
||||
}
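A quick demonstration of the helper above; the three bytes are the UTF-8 byte order mark that Windows editors commonly prepend, which the TOML parser would otherwise trip over (standalone sketch):

```go
package main

import (
	"bytes"
	"fmt"
)

// trimBOM is copied from the diff above.
func trimBOM(f []byte) []byte {
	return bytes.TrimPrefix(f, []byte("\xef\xbb\xbf"))
}

func main() {
	raw := []byte("\xef\xbb\xbf[agent]\n  interval = \"10s\"\n")
	fmt.Printf("%q\n", trimBOM(raw)) // "[agent]\n  interval = \"10s\"\n"
}
```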
|
||||
|
||||
// parseFile loads a TOML configuration from a provided path and
|
||||
// returns the AST produced from the TOML parser. When loading the file, it
|
||||
// will find environment variables and replace them.
|
||||
|
@ -535,6 +558,8 @@ func parseFile(fpath string) (*ast.Table, error) {
|
|||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// ugh windows why
|
||||
contents = trimBOM(contents)
|
||||
|
||||
env_vars := envVarRe.FindAll(contents, -1)
|
||||
for _, env_var := range env_vars {
|
||||
|
@ -577,7 +602,7 @@ func (c *Config) addOutput(name string, table *ast.Table) error {
|
|||
return err
|
||||
}
|
||||
|
||||
ro := internal_models.NewRunningOutput(name, output, outputConfig,
|
||||
ro := models.NewRunningOutput(name, output, outputConfig,
|
||||
c.Agent.MetricBatchSize, c.Agent.MetricBufferLimit)
|
||||
c.Outputs = append(c.Outputs, ro)
|
||||
return nil
|
||||
|
@ -618,7 +643,7 @@ func (c *Config) addInput(name string, table *ast.Table) error {
|
|||
return err
|
||||
}
|
||||
|
||||
rp := &internal_models.RunningInput{
|
||||
rp := &models.RunningInput{
|
||||
Name: name,
|
||||
Input: input,
|
||||
Config: pluginConfig,
|
||||
|
@ -629,10 +654,10 @@ func (c *Config) addInput(name string, table *ast.Table) error {
|
|||
|
||||
// buildFilter builds a Filter
|
||||
// (tagpass/tagdrop/namepass/namedrop/fieldpass/fielddrop) to
|
||||
// be inserted into the internal_models.OutputConfig/internal_models.InputConfig
|
||||
// be inserted into the models.OutputConfig/models.InputConfig
|
||||
// to be used for glob filtering on tags and measurements
|
||||
func buildFilter(tbl *ast.Table) (internal_models.Filter, error) {
|
||||
f := internal_models.Filter{}
|
||||
func buildFilter(tbl *ast.Table) (models.Filter, error) {
|
||||
f := models.Filter{}
|
||||
|
||||
if node, ok := tbl.Fields["namepass"]; ok {
|
||||
if kv, ok := node.(*ast.KeyValue); ok {
|
||||
|
@ -696,7 +721,7 @@ func buildFilter(tbl *ast.Table) (internal_models.Filter, error) {
|
|||
if subtbl, ok := node.(*ast.Table); ok {
|
||||
for name, val := range subtbl.Fields {
|
||||
if kv, ok := val.(*ast.KeyValue); ok {
|
||||
tagfilter := &internal_models.TagFilter{Name: name}
|
||||
tagfilter := &models.TagFilter{Name: name}
|
||||
if ary, ok := kv.Value.(*ast.Array); ok {
|
||||
for _, elem := range ary.Value {
|
||||
if str, ok := elem.(*ast.String); ok {
|
||||
|
@ -715,7 +740,7 @@ func buildFilter(tbl *ast.Table) (internal_models.Filter, error) {
|
|||
if subtbl, ok := node.(*ast.Table); ok {
|
||||
for name, val := range subtbl.Fields {
|
||||
if kv, ok := val.(*ast.KeyValue); ok {
|
||||
tagfilter := &internal_models.TagFilter{Name: name}
|
||||
tagfilter := &models.TagFilter{Name: name}
|
||||
if ary, ok := kv.Value.(*ast.Array); ok {
|
||||
for _, elem := range ary.Value {
|
||||
if str, ok := elem.(*ast.String); ok {
|
||||
|
@ -772,9 +797,9 @@ func buildFilter(tbl *ast.Table) (internal_models.Filter, error) {
|
|||
|
||||
// buildInput parses input specific items from the ast.Table,
|
||||
// builds the filter and returns a
|
||||
// internal_models.InputConfig to be inserted into internal_models.RunningInput
|
||||
func buildInput(name string, tbl *ast.Table) (*internal_models.InputConfig, error) {
|
||||
cp := &internal_models.InputConfig{Name: name}
|
||||
// models.InputConfig to be inserted into models.RunningInput
|
||||
func buildInput(name string, tbl *ast.Table) (*models.InputConfig, error) {
|
||||
cp := &models.InputConfig{Name: name}
|
||||
if node, ok := tbl.Fields["interval"]; ok {
|
||||
if kv, ok := node.(*ast.KeyValue); ok {
|
||||
if str, ok := kv.Value.(*ast.String); ok {
|
||||
|
@ -948,14 +973,14 @@ func buildSerializer(name string, tbl *ast.Table) (serializers.Serializer, error
|
|||
|
||||
// buildOutput parses output specific items from the ast.Table,
|
||||
// builds the filter and returns an
|
||||
// internal_models.OutputConfig to be inserted into internal_models.RunningInput
|
||||
// models.OutputConfig to be inserted into models.RunningOutput
|
||||
// Note: error exists in the return for future calls that might require error
|
||||
func buildOutput(name string, tbl *ast.Table) (*internal_models.OutputConfig, error) {
|
||||
func buildOutput(name string, tbl *ast.Table) (*models.OutputConfig, error) {
|
||||
filter, err := buildFilter(tbl)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
oc := &internal_models.OutputConfig{
|
||||
oc := &models.OutputConfig{
|
||||
Name: name,
|
||||
Filter: filter,
|
||||
}
|
||||
|
|
|
@ -26,19 +26,19 @@ func TestConfig_LoadSingleInputWithEnvVars(t *testing.T) {
|
|||
memcached := inputs.Inputs["memcached"]().(*memcached.Memcached)
|
||||
memcached.Servers = []string{"192.168.1.1"}
|
||||
|
||||
filter := internal_models.Filter{
|
||||
filter := models.Filter{
|
||||
NameDrop: []string{"metricname2"},
|
||||
NamePass: []string{"metricname1"},
|
||||
FieldDrop: []string{"other", "stuff"},
|
||||
FieldPass: []string{"some", "strings"},
|
||||
TagDrop: []internal_models.TagFilter{
|
||||
internal_models.TagFilter{
|
||||
TagDrop: []models.TagFilter{
|
||||
models.TagFilter{
|
||||
Name: "badtag",
|
||||
Filter: []string{"othertag"},
|
||||
},
|
||||
},
|
||||
TagPass: []internal_models.TagFilter{
|
||||
internal_models.TagFilter{
|
||||
TagPass: []models.TagFilter{
|
||||
models.TagFilter{
|
||||
Name: "goodtag",
|
||||
Filter: []string{"mytag"},
|
||||
},
|
||||
|
@ -46,7 +46,7 @@ func TestConfig_LoadSingleInputWithEnvVars(t *testing.T) {
|
|||
IsActive: true,
|
||||
}
|
||||
assert.NoError(t, filter.CompileFilter())
|
||||
mConfig := &internal_models.InputConfig{
|
||||
mConfig := &models.InputConfig{
|
||||
Name: "memcached",
|
||||
Filter: filter,
|
||||
Interval: 10 * time.Second,
|
||||
|
@ -66,19 +66,19 @@ func TestConfig_LoadSingleInput(t *testing.T) {
|
|||
memcached := inputs.Inputs["memcached"]().(*memcached.Memcached)
|
||||
memcached.Servers = []string{"localhost"}
|
||||
|
||||
filter := internal_models.Filter{
|
||||
filter := models.Filter{
|
||||
NameDrop: []string{"metricname2"},
|
||||
NamePass: []string{"metricname1"},
|
||||
FieldDrop: []string{"other", "stuff"},
|
||||
FieldPass: []string{"some", "strings"},
|
||||
TagDrop: []internal_models.TagFilter{
|
||||
internal_models.TagFilter{
|
||||
TagDrop: []models.TagFilter{
|
||||
models.TagFilter{
|
||||
Name: "badtag",
|
||||
Filter: []string{"othertag"},
|
||||
},
|
||||
},
|
||||
TagPass: []internal_models.TagFilter{
|
||||
internal_models.TagFilter{
|
||||
TagPass: []models.TagFilter{
|
||||
models.TagFilter{
|
||||
Name: "goodtag",
|
||||
Filter: []string{"mytag"},
|
||||
},
|
||||
|
@ -86,7 +86,7 @@ func TestConfig_LoadSingleInput(t *testing.T) {
|
|||
IsActive: true,
|
||||
}
|
||||
assert.NoError(t, filter.CompileFilter())
|
||||
mConfig := &internal_models.InputConfig{
|
||||
mConfig := &models.InputConfig{
|
||||
Name: "memcached",
|
||||
Filter: filter,
|
||||
Interval: 5 * time.Second,
|
||||
|
@ -113,19 +113,19 @@ func TestConfig_LoadDirectory(t *testing.T) {
|
|||
memcached := inputs.Inputs["memcached"]().(*memcached.Memcached)
|
||||
memcached.Servers = []string{"localhost"}
|
||||
|
||||
filter := internal_models.Filter{
|
||||
filter := models.Filter{
|
||||
NameDrop: []string{"metricname2"},
|
||||
NamePass: []string{"metricname1"},
|
||||
FieldDrop: []string{"other", "stuff"},
|
||||
FieldPass: []string{"some", "strings"},
|
||||
TagDrop: []internal_models.TagFilter{
|
||||
internal_models.TagFilter{
|
||||
TagDrop: []models.TagFilter{
|
||||
models.TagFilter{
|
||||
Name: "badtag",
|
||||
Filter: []string{"othertag"},
|
||||
},
|
||||
},
|
||||
TagPass: []internal_models.TagFilter{
|
||||
internal_models.TagFilter{
|
||||
TagPass: []models.TagFilter{
|
||||
models.TagFilter{
|
||||
Name: "goodtag",
|
||||
Filter: []string{"mytag"},
|
||||
},
|
||||
|
@ -133,7 +133,7 @@ func TestConfig_LoadDirectory(t *testing.T) {
|
|||
IsActive: true,
|
||||
}
|
||||
assert.NoError(t, filter.CompileFilter())
|
||||
mConfig := &internal_models.InputConfig{
|
||||
mConfig := &models.InputConfig{
|
||||
Name: "memcached",
|
||||
Filter: filter,
|
||||
Interval: 5 * time.Second,
|
||||
|
@ -150,7 +150,7 @@ func TestConfig_LoadDirectory(t *testing.T) {
|
|||
assert.NoError(t, err)
|
||||
ex.SetParser(p)
|
||||
ex.Command = "/usr/bin/myothercollector --foo=bar"
|
||||
eConfig := &internal_models.InputConfig{
|
||||
eConfig := &models.InputConfig{
|
||||
Name: "exec",
|
||||
MeasurementSuffix: "_myothercollector",
|
||||
}
|
||||
|
@ -169,7 +169,7 @@ func TestConfig_LoadDirectory(t *testing.T) {
|
|||
pstat := inputs.Inputs["procstat"]().(*procstat.Procstat)
|
||||
pstat.PidFile = "/var/run/grafana-server.pid"
|
||||
|
||||
pConfig := &internal_models.InputConfig{Name: "procstat"}
|
||||
pConfig := &models.InputConfig{Name: "procstat"}
|
||||
pConfig.Tags = make(map[string]string)
|
||||
|
||||
assert.Equal(t, pstat, c.Inputs[3].Input,
|
||||
|
|
|
@ -17,8 +17,6 @@ import (
|
|||
"strings"
|
||||
"time"
|
||||
"unicode"
|
||||
|
||||
"github.com/gobwas/glob"
|
||||
)
|
||||
|
||||
const alphanum string = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
|
||||
|
@ -135,8 +133,8 @@ func GetTLSConfig(
|
|||
cert, err := tls.LoadX509KeyPair(SSLCert, SSLKey)
|
||||
if err != nil {
|
||||
return nil, errors.New(fmt.Sprintf(
|
||||
"Could not load TLS client key/certificate: %s",
|
||||
err))
|
||||
"Could not load TLS client key/certificate from %s:%s: %s",
|
||||
SSLKey, SSLCert, err))
|
||||
}
|
||||
|
||||
t.Certificates = []tls.Certificate{cert}
|
||||
|
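For context, the calls behind `GetTLSConfig` are plain `crypto/tls` and `crypto/x509`; the following is a stdlib-only sketch of assembling the same kind of client config, with illustrative paths matching the ssl_ca/ssl_cert/ssl_key options in the sample configs:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"io/ioutil"
	"log"
)

func main() {
	caPem, err := ioutil.ReadFile("/etc/telegraf/ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPem) {
		log.Fatal("could not parse any CA certificates")
	}

	cert, err := tls.LoadX509KeyPair("/etc/telegraf/cert.pem", "/etc/telegraf/key.pem")
	if err != nil {
		// Matches the improved error above: name both paths, not just the error.
		log.Fatalf("Could not load TLS client key/certificate from %s:%s: %s",
			"/etc/telegraf/key.pem", "/etc/telegraf/cert.pem", err)
	}

	t := &tls.Config{
		RootCAs:            pool,
		Certificates:       []tls.Certificate{cert},
		InsecureSkipVerify: false, // exposed as insecure_skip_verify in the configs
	}
	_ = t
}
```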
@ -209,27 +207,6 @@ func WaitTimeout(c *exec.Cmd, timeout time.Duration) error {
|
|||
}
|
||||
}
|
||||
|
||||
// CompileFilter takes a list of glob "filters", ie:
|
||||
// ["MAIN.*", "CPU.*", "NET"]
|
||||
// and compiles them into a glob object. This glob object can
|
||||
// then be used to match keys to the filter.
|
||||
func CompileFilter(filters []string) (glob.Glob, error) {
|
||||
var out glob.Glob
|
||||
|
||||
// return if there is nothing to compile
|
||||
if len(filters) == 0 {
|
||||
return out, nil
|
||||
}
|
||||
|
||||
var err error
|
||||
if len(filters) == 1 {
|
||||
out, err = glob.Compile(filters[0])
|
||||
} else {
|
||||
out, err = glob.Compile("{" + strings.Join(filters, ",") + "}")
|
||||
}
|
||||
return out, err
|
||||
}
|
||||
|
||||
// RandomSleep will sleep for a random amount of time up to max.
|
||||
// If the shutdown channel is closed, it will return before it has finished
|
||||
// sleeping.
|
||||
|
|
|
@ -107,37 +107,6 @@ func TestRunError(t *testing.T) {
|
|||
assert.Error(t, err)
|
||||
}
|
||||
|
||||
func TestCompileFilter(t *testing.T) {
|
||||
f, err := CompileFilter([]string{})
|
||||
assert.NoError(t, err)
|
||||
assert.Nil(t, f)
|
||||
|
||||
f, err = CompileFilter([]string{"cpu"})
|
||||
assert.NoError(t, err)
|
||||
assert.True(t, f.Match("cpu"))
|
||||
assert.False(t, f.Match("cpu0"))
|
||||
assert.False(t, f.Match("mem"))
|
||||
|
||||
f, err = CompileFilter([]string{"cpu*"})
|
||||
assert.NoError(t, err)
|
||||
assert.True(t, f.Match("cpu"))
|
||||
assert.True(t, f.Match("cpu0"))
|
||||
assert.False(t, f.Match("mem"))
|
||||
|
||||
f, err = CompileFilter([]string{"cpu", "mem"})
|
||||
assert.NoError(t, err)
|
||||
assert.True(t, f.Match("cpu"))
|
||||
assert.False(t, f.Match("cpu0"))
|
||||
assert.True(t, f.Match("mem"))
|
||||
|
||||
f, err = CompileFilter([]string{"cpu", "mem", "net*"})
|
||||
assert.NoError(t, err)
|
||||
assert.True(t, f.Match("cpu"))
|
||||
assert.False(t, f.Match("cpu0"))
|
||||
assert.True(t, f.Match("mem"))
|
||||
assert.True(t, f.Match("network"))
|
||||
}
|
||||
|
||||
func TestRandomSleep(t *testing.T) {
|
||||
// test that zero max returns immediately
|
||||
s := time.Now()
|
||||
|
|
|
@ -1,82 +1,80 @@
|
|||
package internal_models
|
||||
package models
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"github.com/gobwas/glob"
|
||||
|
||||
"github.com/influxdata/telegraf"
|
||||
"github.com/influxdata/telegraf/internal"
|
||||
"github.com/influxdata/telegraf/filter"
|
||||
)
|
||||
|
||||
// TagFilter is the name of a tag, and the values on which to filter
|
||||
type TagFilter struct {
|
||||
Name string
|
||||
Filter []string
|
||||
filter glob.Glob
|
||||
filter filter.Filter
|
||||
}
|
||||
|
||||
// Filter containing drop/pass and tagdrop/tagpass rules
|
||||
type Filter struct {
|
||||
NameDrop []string
|
||||
nameDrop glob.Glob
|
||||
nameDrop filter.Filter
|
||||
NamePass []string
|
||||
namePass glob.Glob
|
||||
namePass filter.Filter
|
||||
|
||||
FieldDrop []string
|
||||
fieldDrop glob.Glob
|
||||
fieldDrop filter.Filter
|
||||
FieldPass []string
|
||||
fieldPass glob.Glob
|
||||
fieldPass filter.Filter
|
||||
|
||||
TagDrop []TagFilter
|
||||
TagPass []TagFilter
|
||||
|
||||
TagExclude []string
|
||||
tagExclude glob.Glob
|
||||
tagExclude filter.Filter
|
||||
TagInclude []string
|
||||
tagInclude glob.Glob
|
||||
tagInclude filter.Filter
|
||||
|
||||
IsActive bool
|
||||
}
|
||||
|
||||
// Compile all Filter lists into glob.Glob objects.
|
||||
// Compile all Filter lists into filter.Filter objects.
|
||||
func (f *Filter) CompileFilter() error {
|
||||
var err error
|
||||
f.nameDrop, err = internal.CompileFilter(f.NameDrop)
|
||||
f.nameDrop, err = filter.CompileFilter(f.NameDrop)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error compiling 'namedrop', %s", err)
|
||||
}
|
||||
f.namePass, err = internal.CompileFilter(f.NamePass)
|
||||
f.namePass, err = filter.CompileFilter(f.NamePass)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error compiling 'namepass', %s", err)
|
||||
}
|
||||
|
||||
f.fieldDrop, err = internal.CompileFilter(f.FieldDrop)
|
||||
f.fieldDrop, err = filter.CompileFilter(f.FieldDrop)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error compiling 'fielddrop', %s", err)
|
||||
}
|
||||
f.fieldPass, err = internal.CompileFilter(f.FieldPass)
|
||||
f.fieldPass, err = filter.CompileFilter(f.FieldPass)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error compiling 'fieldpass', %s", err)
|
||||
}
|
||||
|
||||
f.tagExclude, err = internal.CompileFilter(f.TagExclude)
|
||||
f.tagExclude, err = filter.CompileFilter(f.TagExclude)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error compiling 'tagexclude', %s", err)
|
||||
}
|
||||
f.tagInclude, err = internal.CompileFilter(f.TagInclude)
|
||||
f.tagInclude, err = filter.CompileFilter(f.TagInclude)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error compiling 'taginclude', %s", err)
|
||||
}
|
||||
|
||||
for i, _ := range f.TagDrop {
|
||||
f.TagDrop[i].filter, err = internal.CompileFilter(f.TagDrop[i].Filter)
|
||||
f.TagDrop[i].filter, err = filter.CompileFilter(f.TagDrop[i].Filter)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error compiling 'tagdrop', %s", err)
|
||||
}
|
||||
}
|
||||
for i, _ := range f.TagPass {
|
||||
f.TagPass[i].filter, err = internal.CompileFilter(f.TagPass[i].Filter)
|
||||
f.TagPass[i].filter, err = filter.CompileFilter(f.TagPass[i].Filter)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Error compiling 'tagpass', %s", err)
|
||||
}
|
||||
|
|
|
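Putting the pieces together, a filter is built declaratively and compiled once; here is a sketch mirroring the shapes used in the config tests later in this commit (usable only inside the telegraf repo itself, since internal/models is an internal package):

```go
package models_test

import (
	"log"

	"github.com/influxdata/telegraf/internal/models"
)

func Example_compile() {
	f := models.Filter{
		NamePass: []string{"metricname1"},
		NameDrop: []string{"metricname2"},
		TagPass: []models.TagFilter{
			{Name: "goodtag", Filter: []string{"mytag"}},
		},
		IsActive: true,
	}
	// CompileFilter turns every string list above into a filter.Filter.
	if err := f.CompileFilter(); err != nil {
		log.Fatal(err)
	}
}
```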
@ -1,4 +1,4 @@
|
|||
package internal_models
|
||||
package models
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
@ -253,51 +253,6 @@ func TestFilter_TagDrop(t *testing.T) {
|
|||
}
|
||||
}
|
||||
|
||||
func TestFilter_CompileFilterError(t *testing.T) {
|
||||
f := Filter{
|
||||
NameDrop: []string{"", ""},
|
||||
}
|
||||
assert.Error(t, f.CompileFilter())
|
||||
f = Filter{
|
||||
NamePass: []string{"", ""},
|
||||
}
|
||||
assert.Error(t, f.CompileFilter())
|
||||
f = Filter{
|
||||
FieldDrop: []string{"", ""},
|
||||
}
|
||||
assert.Error(t, f.CompileFilter())
|
||||
f = Filter{
|
||||
FieldPass: []string{"", ""},
|
||||
}
|
||||
assert.Error(t, f.CompileFilter())
|
||||
f = Filter{
|
||||
TagExclude: []string{"", ""},
|
||||
}
|
||||
assert.Error(t, f.CompileFilter())
|
||||
f = Filter{
|
||||
TagInclude: []string{"", ""},
|
||||
}
|
||||
assert.Error(t, f.CompileFilter())
|
||||
filters := []TagFilter{
|
||||
TagFilter{
|
||||
Name: "cpu",
|
||||
Filter: []string{"{foobar}"},
|
||||
}}
|
||||
f = Filter{
|
||||
TagDrop: filters,
|
||||
}
|
||||
require.Error(t, f.CompileFilter())
|
||||
filters = []TagFilter{
|
||||
TagFilter{
|
||||
Name: "cpu",
|
||||
Filter: []string{"{foobar}"},
|
||||
}}
|
||||
f = Filter{
|
||||
TagPass: filters,
|
||||
}
|
||||
require.Error(t, f.CompileFilter())
|
||||
}
|
||||
|
||||
func TestFilter_ShouldMetricsPass(t *testing.T) {
|
||||
m := testutil.TestMetric(1, "testmetric")
|
||||
f := Filter{
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
package internal_models
|
||||
package models
|
||||
|
||||
import (
|
||||
"time"
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
package internal_models
|
||||
package models
|
||||
|
||||
import (
|
||||
"log"
|
||||
|
@ -138,7 +138,7 @@ func (ro *RunningOutput) Write() error {
|
|||
}
|
||||
|
||||
func (ro *RunningOutput) write(metrics []telegraf.Metric) error {
|
||||
if len(metrics) == 0 {
|
||||
if metrics == nil || len(metrics) == 0 {
|
||||
return nil
|
||||
}
|
||||
start := time.Now()
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
package internal_models
|
||||
package models
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
|
|
@ -45,14 +45,9 @@ func NewMetric(
|
|||
name string,
|
||||
tags map[string]string,
|
||||
fields map[string]interface{},
|
||||
t ...time.Time,
|
||||
t time.Time,
|
||||
) (Metric, error) {
|
||||
var T time.Time
|
||||
if len(t) > 0 {
|
||||
T = t[0]
|
||||
}
|
||||
|
||||
pt, err := client.NewPoint(name, tags, fields, T)
|
||||
pt, err := client.NewPoint(name, tags, fields, t)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
|
|
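With the variadic timestamp gone, every caller now passes the time explicitly; a minimal sketch of the new call shape, with values borrowed from the surviving tests:

```go
package telegraf_test

import (
	"fmt"
	"time"

	"github.com/influxdata/telegraf"
)

func ExampleNewMetric() {
	m, err := telegraf.NewMetric(
		"cpu",
		map[string]string{"host": "localhost"},
		map[string]interface{}{"usage_idle": float64(99)},
		time.Unix(1257894000, 0), // the timestamp is now a required argument
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(m.String())
	// cpu,host=localhost usage_idle=99 1257894000000000000
}
```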
@ -51,23 +51,6 @@ func TestNewMetricString(t *testing.T) {
|
|||
assert.Equal(t, lineProtoPrecision, m.PrecisionString("s"))
|
||||
}
|
||||
|
||||
func TestNewMetricStringNoTime(t *testing.T) {
|
||||
tags := map[string]string{
|
||||
"host": "localhost",
|
||||
}
|
||||
fields := map[string]interface{}{
|
||||
"usage_idle": float64(99),
|
||||
}
|
||||
m, err := NewMetric("cpu", tags, fields)
|
||||
assert.NoError(t, err)
|
||||
|
||||
lineProto := fmt.Sprintf("cpu,host=localhost usage_idle=99")
|
||||
assert.Equal(t, lineProto, m.String())
|
||||
|
||||
lineProtoPrecision := fmt.Sprintf("cpu,host=localhost usage_idle=99")
|
||||
assert.Equal(t, lineProtoPrecision, m.PrecisionString("s"))
|
||||
}
|
||||
|
||||
func TestNewMetricFailNaN(t *testing.T) {
|
||||
now := time.Now()
|
||||
|
||||
|
|
|
@ -27,6 +27,14 @@ The example plugin gathers metrics about example things
|
|||
- tag2
|
||||
- measurement2 has the following tags:
|
||||
- tag3
|
||||
|
||||
### Sample Queries:
|
||||
|
||||
These are some useful queries (to generate dashboards or alerts, for example) to run against data from this plugin:
|
||||
|
||||
```
|
||||
SELECT max(field1), mean(field1), min(field1) FROM measurement1 WHERE tag1='bar' AND time > now() - 1h GROUP BY tag1
|
||||
```
|
||||
|
||||
### Example Output:
|
||||
|
||||
|
|
File diff suppressed because one or more lines are too long
|
@ -1,104 +1,19 @@
|
|||
package aerospike
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/binary"
|
||||
"fmt"
|
||||
"github.com/influxdata/telegraf"
|
||||
"github.com/influxdata/telegraf/plugins/inputs"
|
||||
"net"
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/influxdata/telegraf"
|
||||
"github.com/influxdata/telegraf/internal/errchan"
|
||||
"github.com/influxdata/telegraf/plugins/inputs"
|
||||
|
||||
as "github.com/sparrc/aerospike-client-go"
|
||||
)
|
||||
|
||||
const (
|
||||
MSG_HEADER_SIZE = 8
|
||||
MSG_TYPE = 1 // Info is 1
|
||||
MSG_VERSION = 2
|
||||
)
|
||||
|
||||
var (
|
||||
STATISTICS_COMMAND = []byte("statistics\n")
|
||||
NAMESPACES_COMMAND = []byte("namespaces\n")
|
||||
)
|
||||
|
||||
type aerospikeMessageHeader struct {
|
||||
Version uint8
|
||||
Type uint8
|
||||
DataLen [6]byte
|
||||
}
|
||||
|
||||
type aerospikeMessage struct {
|
||||
aerospikeMessageHeader
|
||||
Data []byte
|
||||
}
|
||||
|
||||
// Taken from aerospike-client-go/types/message.go
|
||||
func (msg *aerospikeMessage) Serialize() []byte {
|
||||
msg.DataLen = msgLenToBytes(int64(len(msg.Data)))
|
||||
buf := bytes.NewBuffer([]byte{})
|
||||
binary.Write(buf, binary.BigEndian, msg.aerospikeMessageHeader)
|
||||
binary.Write(buf, binary.BigEndian, msg.Data[:])
|
||||
return buf.Bytes()
|
||||
}
|
||||
|
||||
type aerospikeInfoCommand struct {
|
||||
msg *aerospikeMessage
|
||||
}
|
||||
|
||||
// Taken from aerospike-client-go/info.go
|
||||
func (nfo *aerospikeInfoCommand) parseMultiResponse() (map[string]string, error) {
|
||||
responses := make(map[string]string)
|
||||
offset := int64(0)
|
||||
begin := int64(0)
|
||||
|
||||
dataLen := int64(len(nfo.msg.Data))
|
||||
|
||||
// Create reusable StringBuilder for performance.
|
||||
for offset < dataLen {
|
||||
b := nfo.msg.Data[offset]
|
||||
|
||||
if b == '\t' {
|
||||
name := nfo.msg.Data[begin:offset]
|
||||
offset++
|
||||
begin = offset
|
||||
|
||||
// Parse field value.
|
||||
for offset < dataLen {
|
||||
if nfo.msg.Data[offset] == '\n' {
|
||||
break
|
||||
}
|
||||
offset++
|
||||
}
|
||||
|
||||
if offset > begin {
|
||||
value := nfo.msg.Data[begin:offset]
|
||||
responses[string(name)] = string(value)
|
||||
} else {
|
||||
responses[string(name)] = ""
|
||||
}
|
||||
offset++
|
||||
begin = offset
|
||||
} else if b == '\n' {
|
||||
if offset > begin {
|
||||
name := nfo.msg.Data[begin:offset]
|
||||
responses[string(name)] = ""
|
||||
}
|
||||
offset++
|
||||
begin = offset
|
||||
} else {
|
||||
offset++
|
||||
}
|
||||
}
|
||||
|
||||
if offset > begin {
|
||||
name := nfo.msg.Data[begin:offset]
|
||||
responses[string(name)] = ""
|
||||
}
|
||||
return responses, nil
|
||||
}
|
||||
|
||||
type Aerospike struct {
|
||||
Servers []string
|
||||
}
|
||||
|
@ -115,7 +30,7 @@ func (a *Aerospike) SampleConfig() string {
|
|||
}
|
||||
|
||||
func (a *Aerospike) Description() string {
|
||||
return "Read stats from an aerospike server"
|
||||
return "Read stats from aerospike server(s)"
|
||||
}
|
||||
|
||||
func (a *Aerospike) Gather(acc telegraf.Accumulator) error {
|
||||
|
@ -124,214 +39,101 @@ func (a *Aerospike) Gather(acc telegraf.Accumulator) error {
|
|||
}
|
||||
|
||||
var wg sync.WaitGroup
|
||||
|
||||
var outerr error
|
||||
|
||||
errChan := errchan.New(len(a.Servers))
|
||||
wg.Add(len(a.Servers))
|
||||
for _, server := range a.Servers {
|
||||
wg.Add(1)
|
||||
go func(server string) {
|
||||
go func(serv string) {
|
||||
defer wg.Done()
|
||||
outerr = a.gatherServer(server, acc)
|
||||
errChan.C <- a.gatherServer(serv, acc)
|
||||
}(server)
|
||||
}
|
||||
|
||||
wg.Wait()
|
||||
return outerr
|
||||
return errChan.Error()
|
||||
}
|
||||
|
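The old single `outerr` variable could both race and silently keep only the last error; `errchan` gives each goroutine its own slot. A self-contained sketch of the pattern, assuming (as the usage above implies) that `errchan.New(n)` allocates a buffered channel `C` of capacity n and `Error()` folds the collected values into one error:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// miniErrChan imitates the internal/errchan helper used above.
type miniErrChan struct{ C chan error }

func newMini(n int) *miniErrChan { return &miniErrChan{C: make(chan error, n)} }

func (e *miniErrChan) Error() error {
	close(e.C)
	for err := range e.C {
		if err != nil {
			return err // first non-nil error wins in this sketch
		}
	}
	return nil
}

func main() {
	servers := []string{"a:3000", "b:3000", "c:3000"}
	ec := newMini(len(servers))
	var wg sync.WaitGroup
	wg.Add(len(servers))
	for _, server := range servers {
		go func(serv string) {
			defer wg.Done()
			if serv == "b:3000" {
				ec.C <- errors.New("connection refused: " + serv)
				return
			}
			ec.C <- nil // a real gatherServer(serv, acc) call would go here
		}(server)
	}
	wg.Wait()
	fmt.Println(ec.Error()) // connection refused: b:3000
}
```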
||||
func (a *Aerospike) gatherServer(host string, acc telegraf.Accumulator) error {
|
||||
aerospikeInfo, err := getMap(STATISTICS_COMMAND, host)
|
||||
func (a *Aerospike) gatherServer(hostport string, acc telegraf.Accumulator) error {
|
||||
host, port, err := net.SplitHostPort(hostport)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Aerospike info failed: %s", err)
|
||||
return err
|
||||
}
|
||||
readAerospikeStats(aerospikeInfo, acc, host, "")
|
||||
namespaces, err := getList(NAMESPACES_COMMAND, host)
|
||||
|
||||
iport, err := strconv.Atoi(port)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Aerospike namespace list failed: %s", err)
|
||||
iport = 3000
|
||||
}
|
||||
for ix := range namespaces {
|
||||
nsInfo, err := getMap([]byte("namespace/"+namespaces[ix]+"\n"), host)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Aerospike namespace '%s' query failed: %s", namespaces[ix], err)
|
||||
|
||||
c, err := as.NewClient(host, iport)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer c.Close()
|
||||
|
||||
nodes := c.GetNodes()
|
||||
for _, n := range nodes {
|
||||
tags := map[string]string{
|
||||
"aerospike_host": hostport,
|
||||
}
|
||||
fields := map[string]interface{}{
|
||||
"node_name": n.GetName(),
|
||||
}
|
||||
stats, err := as.RequestNodeStats(n)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
for k, v := range stats {
|
||||
fields[strings.Replace(k, "-", "_", -1)] = parseValue(v)
|
||||
}
|
||||
acc.AddFields("aerospike_node", fields, tags, time.Now())
|
||||
|
||||
info, err := as.RequestNodeInfo(n, "namespaces")
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
namespaces := strings.Split(info["namespaces"], ";")
|
||||
|
||||
for _, namespace := range namespaces {
|
||||
nTags := map[string]string{
|
||||
"aerospike_host": hostport,
|
||||
}
|
||||
nTags["namespace"] = namespace
|
||||
nFields := map[string]interface{}{
|
||||
"node_name": n.GetName(),
|
||||
}
|
||||
info, err := as.RequestNodeInfo(n, "namespace/"+namespace)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
stats := strings.Split(info["namespace/"+namespace], ";")
|
||||
for _, stat := range stats {
|
||||
parts := strings.Split(stat, "=")
|
||||
if len(parts) < 2 {
|
||||
continue
|
||||
}
|
||||
nFields[strings.Replace(parts[0], "-", "_", -1)] = parseValue(parts[1])
|
||||
}
|
||||
acc.AddFields("aerospike_namespace", nFields, nTags, time.Now())
|
||||
}
|
||||
readAerospikeStats(nsInfo, acc, host, namespaces[ix])
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func getMap(key []byte, host string) (map[string]string, error) {
|
||||
data, err := get(key, host)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("Failed to get data: %s", err)
|
||||
func parseValue(v string) interface{} {
|
||||
if parsed, err := strconv.ParseInt(v, 10, 64); err == nil {
|
||||
return parsed
|
||||
} else if parsed, err := strconv.ParseBool(v); err == nil {
|
||||
return parsed
|
||||
} else {
|
||||
return v
|
||||
}
|
||||
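parseValue above tries int64, then bool, and falls back to the raw string, so numeric aerospike stats land in InfluxDB as numbers rather than strings; a quick standalone check:

```go
package main

import (
	"fmt"
	"strconv"
)

// parseValue is copied from the diff above.
func parseValue(v string) interface{} {
	if parsed, err := strconv.ParseInt(v, 10, 64); err == nil {
		return parsed
	} else if parsed, err := strconv.ParseBool(v); err == nil {
		return parsed
	} else {
		return v
	}
}

func main() {
	fmt.Printf("%T %T %T\n", parseValue("42"), parseValue("true"), parseValue("5.5"))
	// int64 bool string
}
```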
parsed, err := unmarshalMapInfo(data, string(key))
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("Failed to unmarshal data: %s", err)
|
||||
}
|
||||
|
||||
return parsed, nil
|
||||
}
|
||||
|
||||
func getList(key []byte, host string) ([]string, error) {
|
||||
data, err := get(key, host)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("Failed to get data: %s", err)
|
||||
func copyTags(m map[string]string) map[string]string {
|
||||
out := make(map[string]string)
|
||||
for k, v := range m {
|
||||
out[k] = v
|
||||
}
|
||||
parsed, err := unmarshalListInfo(data, string(key))
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("Failed to unmarshal data: %s", err)
|
||||
}
|
||||
|
||||
return parsed, nil
|
||||
}
|
||||
|
||||
func get(key []byte, host string) (map[string]string, error) {
|
||||
var err error
|
||||
var data map[string]string
|
||||
|
||||
asInfo := &aerospikeInfoCommand{
|
||||
msg: &aerospikeMessage{
|
||||
aerospikeMessageHeader: aerospikeMessageHeader{
|
||||
Version: uint8(MSG_VERSION),
|
||||
Type: uint8(MSG_TYPE),
|
||||
DataLen: msgLenToBytes(int64(len(key))),
|
||||
},
|
||||
Data: key,
|
||||
},
|
||||
}
|
||||
|
||||
cmd := asInfo.msg.Serialize()
|
||||
addr, err := net.ResolveTCPAddr("tcp", host)
|
||||
if err != nil {
|
||||
return data, fmt.Errorf("Lookup failed for '%s': %s", host, err)
|
||||
}
|
||||
|
||||
conn, err := net.DialTCP("tcp", nil, addr)
|
||||
if err != nil {
|
||||
return data, fmt.Errorf("Connection failed for '%s': %s", host, err)
|
||||
}
|
||||
defer conn.Close()
|
||||
|
||||
_, err = conn.Write(cmd)
|
||||
if err != nil {
|
||||
return data, fmt.Errorf("Failed to send to '%s': %s", host, err)
|
||||
}
|
||||
|
||||
msgHeader := bytes.NewBuffer(make([]byte, MSG_HEADER_SIZE))
|
||||
_, err = readLenFromConn(conn, msgHeader.Bytes(), MSG_HEADER_SIZE)
|
||||
if err != nil {
|
||||
return data, fmt.Errorf("Failed to read header: %s", err)
|
||||
}
|
||||
err = binary.Read(msgHeader, binary.BigEndian, &asInfo.msg.aerospikeMessageHeader)
|
||||
if err != nil {
|
||||
return data, fmt.Errorf("Failed to unmarshal header: %s", err)
|
||||
}
|
||||
|
||||
msgLen := msgLenFromBytes(asInfo.msg.aerospikeMessageHeader.DataLen)
|
||||
|
||||
if int64(len(asInfo.msg.Data)) != msgLen {
|
||||
asInfo.msg.Data = make([]byte, msgLen)
|
||||
}
|
||||
|
||||
_, err = readLenFromConn(conn, asInfo.msg.Data, len(asInfo.msg.Data))
|
||||
if err != nil {
|
||||
return data, fmt.Errorf("Failed to read from connection to '%s': %s", host, err)
|
||||
}
|
||||
|
||||
data, err = asInfo.parseMultiResponse()
|
||||
if err != nil {
|
||||
return data, fmt.Errorf("Failed to parse response from '%s': %s", host, err)
|
||||
}
|
||||
|
||||
return data, err
|
||||
}
|
||||
|
||||
func readAerospikeStats(
|
||||
stats map[string]string,
|
||||
acc telegraf.Accumulator,
|
||||
host string,
|
||||
namespace string,
|
||||
) {
|
||||
fields := make(map[string]interface{})
|
||||
tags := map[string]string{
|
||||
"aerospike_host": host,
|
||||
"namespace": "_service",
|
||||
}
|
||||
|
||||
if namespace != "" {
|
||||
tags["namespace"] = namespace
|
||||
}
|
||||
for key, value := range stats {
|
||||
// We are going to ignore all string based keys
|
||||
val, err := strconv.ParseInt(value, 10, 64)
|
||||
if err == nil {
|
||||
if strings.Contains(key, "-") {
|
||||
key = strings.Replace(key, "-", "_", -1)
|
||||
}
|
||||
fields[key] = val
|
||||
}
|
||||
}
|
||||
acc.AddFields("aerospike", fields, tags)
|
||||
}
|
||||
|
||||
func unmarshalMapInfo(infoMap map[string]string, key string) (map[string]string, error) {
|
||||
key = strings.TrimSuffix(key, "\n")
|
||||
res := map[string]string{}
|
||||
|
||||
v, exists := infoMap[key]
|
||||
if !exists {
|
||||
return res, fmt.Errorf("Key '%s' missing from info", key)
|
||||
}
|
||||
|
||||
values := strings.Split(v, ";")
|
||||
for i := range values {
|
||||
kv := strings.Split(values[i], "=")
|
||||
if len(kv) > 1 {
|
||||
res[kv[0]] = kv[1]
|
||||
}
|
||||
}
|
||||
|
||||
return res, nil
|
||||
}
|
||||
|
||||
func unmarshalListInfo(infoMap map[string]string, key string) ([]string, error) {
|
||||
key = strings.TrimSuffix(key, "\n")
|
||||
|
||||
v, exists := infoMap[key]
|
||||
if !exists {
|
||||
return []string{}, fmt.Errorf("Key '%s' missing from info", key)
|
||||
}
|
||||
|
||||
values := strings.Split(v, ";")
|
||||
return values, nil
|
||||
}
|
||||
|
||||
func readLenFromConn(c net.Conn, buffer []byte, length int) (total int, err error) {
|
||||
var r int
|
||||
for total < length {
|
||||
r, err = c.Read(buffer[total:length])
|
||||
total += r
|
||||
if err != nil {
|
||||
break
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// Taken from aerospike-client-go/types/message.go
|
||||
func msgLenToBytes(DataLen int64) [6]byte {
|
||||
b := make([]byte, 8)
|
||||
binary.BigEndian.PutUint64(b, uint64(DataLen))
|
||||
res := [6]byte{}
|
||||
copy(res[:], b[2:])
|
||||
return res
|
||||
}
|
||||
|
||||
// Taken from aerospike-client-go/types/message.go
|
||||
func msgLenFromBytes(buf [6]byte) int64 {
|
||||
nbytes := append([]byte{0, 0}, buf[:]...)
|
||||
DataLen := binary.BigEndian.Uint64(nbytes)
|
||||
return int64(DataLen)
|
||||
return out
|
||||
}
|
||||
|
||||
func init() {
|
||||
|
|
|
@ -1,7 +1,6 @@
package aerospike

import (
	"reflect"
	"testing"

	"github.com/influxdata/telegraf/testutil"

@ -23,96 +22,29 @@ func TestAerospikeStatistics(t *testing.T) {
	err := a.Gather(&acc)
	require.NoError(t, err)

	// Only use a few of the metrics
	asMetrics := []string{
		"transactions",
		"stat_write_errs",
		"stat_read_reqs",
		"stat_write_reqs",
	}

	for _, metric := range asMetrics {
		assert.True(t, acc.HasIntField("aerospike", metric), metric)
	}

	assert.True(t, acc.HasMeasurement("aerospike_node"))
	assert.True(t, acc.HasMeasurement("aerospike_namespace"))
	assert.True(t, acc.HasIntField("aerospike_node", "batch_error"))
}

func TestAerospikeMsgLenFromToBytes(t *testing.T) {
	var i int64 = 8
	assert.True(t, i == msgLenFromBytes(msgLenToBytes(i)))
}
func TestAerospikeStatisticsPartialErr(t *testing.T) {
	if testing.Short() {
		t.Skip("Skipping integration test in short mode")
	}

	a := &Aerospike{
		Servers: []string{
			testutil.GetLocalHost() + ":3000",
			testutil.GetLocalHost() + ":9999",
		},
	}

func TestReadAerospikeStatsNoNamespace(t *testing.T) {
	// Also test for re-writing
	var acc testutil.Accumulator
	stats := map[string]string{
		"stat-write-errs": "12345",
		"stat_read_reqs":  "12345",
	}
	readAerospikeStats(stats, &acc, "host1", "")

	fields := map[string]interface{}{
		"stat_write_errs": int64(12345),
		"stat_read_reqs":  int64(12345),
	}
	tags := map[string]string{
		"aerospike_host": "host1",
		"namespace":      "_service",
	}
	acc.AssertContainsTaggedFields(t, "aerospike", fields, tags)
}

func TestReadAerospikeStatsNamespace(t *testing.T) {
	var acc testutil.Accumulator
	stats := map[string]string{
		"stat_write_errs": "12345",
		"stat_read_reqs":  "12345",
	}
	readAerospikeStats(stats, &acc, "host1", "test")

	fields := map[string]interface{}{
		"stat_write_errs": int64(12345),
		"stat_read_reqs":  int64(12345),
	}
	tags := map[string]string{
		"aerospike_host": "host1",
		"namespace":      "test",
	}
	acc.AssertContainsTaggedFields(t, "aerospike", fields, tags)
}

func TestAerospikeUnmarshalList(t *testing.T) {
	i := map[string]string{
		"test": "one;two;three",
	}

	expected := []string{"one", "two", "three"}

	list, err := unmarshalListInfo(i, "test2")
	assert.True(t, err != nil)

	list, err = unmarshalListInfo(i, "test")
	assert.True(t, err == nil)
	equal := true
	for ix := range expected {
		if list[ix] != expected[ix] {
			equal = false
			break
		}
	}
	assert.True(t, equal)
}

func TestAerospikeUnmarshalMap(t *testing.T) {
	i := map[string]string{
		"test": "key1=value1;key2=value2",
	}

	expected := map[string]string{
		"key1": "value1",
		"key2": "value2",
	}
	m, err := unmarshalMapInfo(i, "test")
	assert.True(t, err == nil)
	assert.True(t, reflect.DeepEqual(m, expected))
	err := a.Gather(&acc)
	require.Error(t, err)

	assert.True(t, acc.HasMeasurement("aerospike_node"))
	assert.True(t, acc.HasMeasurement("aerospike_namespace"))
	assert.True(t, acc.HasIntField("aerospike_node", "batch_error"))
}
@ -6,6 +6,7 @@ import (
	_ "github.com/influxdata/telegraf/plugins/inputs/bcache"
	_ "github.com/influxdata/telegraf/plugins/inputs/cassandra"
	_ "github.com/influxdata/telegraf/plugins/inputs/ceph"
	_ "github.com/influxdata/telegraf/plugins/inputs/cgroup"
	_ "github.com/influxdata/telegraf/plugins/inputs/chrony"
	_ "github.com/influxdata/telegraf/plugins/inputs/cloudwatch"
	_ "github.com/influxdata/telegraf/plugins/inputs/conntrack"

@ -19,9 +20,9 @@ import (
	_ "github.com/influxdata/telegraf/plugins/inputs/elasticsearch"
	_ "github.com/influxdata/telegraf/plugins/inputs/exec"
	_ "github.com/influxdata/telegraf/plugins/inputs/filestat"
	_ "github.com/influxdata/telegraf/plugins/inputs/github_webhooks"
	_ "github.com/influxdata/telegraf/plugins/inputs/graylog"
	_ "github.com/influxdata/telegraf/plugins/inputs/haproxy"
	_ "github.com/influxdata/telegraf/plugins/inputs/hddtemp"
	_ "github.com/influxdata/telegraf/plugins/inputs/http_response"
	_ "github.com/influxdata/telegraf/plugins/inputs/httpjson"
	_ "github.com/influxdata/telegraf/plugins/inputs/influxdb"

@ -29,6 +30,7 @@ import (
	_ "github.com/influxdata/telegraf/plugins/inputs/jolokia"
	_ "github.com/influxdata/telegraf/plugins/inputs/kafka_consumer"
	_ "github.com/influxdata/telegraf/plugins/inputs/leofs"
	_ "github.com/influxdata/telegraf/plugins/inputs/logparser"
	_ "github.com/influxdata/telegraf/plugins/inputs/lustre2"
	_ "github.com/influxdata/telegraf/plugins/inputs/mailchimp"
	_ "github.com/influxdata/telegraf/plugins/inputs/memcached"

@ -40,6 +42,7 @@ import (
	_ "github.com/influxdata/telegraf/plugins/inputs/net_response"
	_ "github.com/influxdata/telegraf/plugins/inputs/nginx"
	_ "github.com/influxdata/telegraf/plugins/inputs/nsq"
	_ "github.com/influxdata/telegraf/plugins/inputs/nsq_consumer"
	_ "github.com/influxdata/telegraf/plugins/inputs/nstat"
	_ "github.com/influxdata/telegraf/plugins/inputs/ntpq"
	_ "github.com/influxdata/telegraf/plugins/inputs/passenger"

@ -56,9 +59,8 @@ import (
	_ "github.com/influxdata/telegraf/plugins/inputs/redis"
	_ "github.com/influxdata/telegraf/plugins/inputs/rethinkdb"
	_ "github.com/influxdata/telegraf/plugins/inputs/riak"
	_ "github.com/influxdata/telegraf/plugins/inputs/rollbar_webhooks"
	_ "github.com/influxdata/telegraf/plugins/inputs/sensors"
	_ "github.com/influxdata/telegraf/plugins/inputs/snmp"
	_ "github.com/influxdata/telegraf/plugins/inputs/snmp_legacy"
	_ "github.com/influxdata/telegraf/plugins/inputs/sqlserver"
	_ "github.com/influxdata/telegraf/plugins/inputs/statsd"
	_ "github.com/influxdata/telegraf/plugins/inputs/sysstat"

@ -69,6 +71,7 @@ import (
	_ "github.com/influxdata/telegraf/plugins/inputs/twemproxy"
	_ "github.com/influxdata/telegraf/plugins/inputs/udp_listener"
	_ "github.com/influxdata/telegraf/plugins/inputs/varnish"
	_ "github.com/influxdata/telegraf/plugins/inputs/webhooks"
	_ "github.com/influxdata/telegraf/plugins/inputs/win_perf_counters"
	_ "github.com/influxdata/telegraf/plugins/inputs/zfs"
	_ "github.com/influxdata/telegraf/plugins/inputs/zookeeper"
@ -8,7 +8,6 @@ import (
	"net/url"
	"strconv"
	"strings"
	"sync"
	"time"

	"github.com/influxdata/telegraf"

@ -38,8 +37,8 @@ func (n *Apache) Gather(acc telegraf.Accumulator) error {
		n.Urls = []string{"http://localhost/server-status?auto"}
	}

	var wg sync.WaitGroup
	var outerr error
	var errch = make(chan error)

	for _, u := range n.Urls {
		addr, err := url.Parse(u)

@ -47,14 +46,17 @@ func (n *Apache) Gather(acc telegraf.Accumulator) error {
			return fmt.Errorf("Unable to parse address '%s': %s", u, err)
		}

		wg.Add(1)
		go func(addr *url.URL) {
			defer wg.Done()
			outerr = n.gatherUrl(addr, acc)
			errch <- n.gatherUrl(addr, acc)
		}(addr)
	}

	wg.Wait()
	// Drain channel, waiting for all requests to finish and save last error.
	for range n.Urls {
		if err := <-errch; err != nil {
			outerr = err
		}
	}

	return outerr
}
@ -36,7 +36,8 @@ func TestHTTPApache(t *testing.T) {
	defer ts.Close()

	a := Apache{
		Urls: []string{ts.URL},
		// Fetch it 2 times to catch possible data races.
		Urls: []string{ts.URL, ts.URL},
	}

	var acc testutil.Accumulator
@ -148,7 +148,7 @@ func (c cassandraMetric) addTagsFields(out map[string]interface{}) {
	tokens := parseJmxMetricRequest(r.(map[string]interface{})["mbean"].(string))
	// Requests with wildcards for keyspace or table names will return nested
	// maps in the json response
	if tokens["type"] == "Table" && (tokens["keyspace"] == "*" ||
	if (tokens["type"] == "Table" || tokens["type"] == "ColumnFamily") && (tokens["keyspace"] == "*" ||
		tokens["scope"] == "*") {
		if valuesMap, ok := out["value"]; ok {
			for k, v := range valuesMap.(map[string]interface{}) {
@ -1,18 +1,18 @@
# Ceph Storage Input Plugin

Collects performance metrics from the MON and OSD nodes in a Ceph storage cluster.

The plugin works by scanning the configured SocketDir for OSD and MON socket files. When it finds
a MON socket, it runs **ceph --admin-daemon $file perfcounters_dump**. For OSDs it runs **ceph --admin-daemon $file perf dump**.

The resulting JSON is parsed and grouped into collections, based on top-level key. Top-level keys are
used as collection tags, and all sub-keys are flattened. For example:

```
{
  "paxos": {
    "refresh": 9363435,
    "refresh_latency": {
      "avgcount": 9363435,
      "sum": 5378.794002000
    }
```

@ -50,7 +50,7 @@ Would be parsed into the following metrics, all of which would be tagged with co

### Measurements & Fields:

All fields are collected under the **ceph** measurement and stored as float64s. For a full list of fields, see the sample perf dumps in ceph_test.go.
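
To make the flattening described above concrete, here is a minimal sketch of how nested perf-dump JSON can be turned into dotted field names. This is an illustrative assumption for this README, not the plugin's actual code:

```
package main

import (
	"encoding/json"
	"fmt"
)

// flatten walks a decoded JSON object and emits "parent.child" style
// keys for every numeric leaf value.
func flatten(prefix string, in map[string]interface{}, out map[string]float64) {
	for k, v := range in {
		key := k
		if prefix != "" {
			key = prefix + "." + k
		}
		switch val := v.(type) {
		case map[string]interface{}:
			flatten(key, val, out)
		case float64: // encoding/json decodes every JSON number as float64
			out[key] = val
		}
	}
}

func main() {
	raw := "{\"paxos\": {\"refresh\": 9363435, \"refresh_latency\": {\"avgcount\": 9363435, \"sum\": 5378.794002}}}"
	var m map[string]interface{}
	if err := json.Unmarshal([]byte(raw), &m); err != nil {
		panic(err)
	}
	out := map[string]float64{}
	flatten("", m, out)
	fmt.Println(out)
	// Yields keys like paxos.refresh, paxos.refresh_latency.avgcount,
	// paxos.refresh_latency.sum (map iteration order is not deterministic).
}
```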

### Tags:

@ -95,7 +95,7 @@ All measurements will have the following tags:
- throttle-objecter_ops
- throttle-osd_client_bytes
- throttle-osd_client_messages


### Example Output:

@ -0,0 +1,59 @@
# CGroup Input Plugin For Telegraf Agent

This input plugin will capture specific statistics per cgroup.

The following file formats are supported (a short parsing sketch follows this list):

* Single value

```
VAL\n
```

* New line separated values

```
VAL0\n
VAL1\n
```

* Space separated values

```
VAL0 VAL1 ...\n
```

* New line separated key-space-value's

```
KEY0 VAL0\n
KEY1 VAL1\n
```
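
As a hedged sketch (not the plugin's parser; the actual implementation appears in cgroup_linux.go below, using the same key and value patterns), the key-space-value format can be matched with a simple regular expression:

```
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Input shaped like the "KEY0 VAL0\n" format described above.
	data := "cache 1739362304\nrss 1775325184\n"

	// One alphabetic/underscore key and one (possibly negative) integer per line.
	re := regexp.MustCompile("([[:alpha:]_]+) ([\\d-]+)\n")
	for _, m := range re.FindAllStringSubmatch(data, -1) {
		fmt.Printf("key=%s value=%s\n", m[1], m[2])
	}
}
```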


### Tags:

Measurements don't have any specific tags unless you define them at the telegraf level (defaults). We
used to have the path listed as a tag, but to keep cardinality in check it's easier to move this
value to a field. Thanks @sebito91!


### Configuration:

```
# [[inputs.cgroup]]
  # paths = [
  #   "/cgroup/memory",           # root cgroup
  #   "/cgroup/memory/child1",    # container cgroup
  #   "/cgroup/memory/child2/*",  # all children cgroups under child2, but not child2 itself
  # ]
  # files = ["memory.*usage*", "memory.limit_in_bytes"]

# [[inputs.cgroup]]
  # paths = [
  #   "/cgroup/cpu",              # root cgroup
  #   "/cgroup/cpu/*",            # all container cgroups
  #   "/cgroup/cpu/*/*",          # all children cgroups under each container cgroup
  # ]
  # files = ["cpuacct.usage", "cpu.cfs_period_us", "cpu.cfs_quota_us"]
```
@ -0,0 +1,35 @@
package cgroup

import (
	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/inputs"
)

type CGroup struct {
	Paths []string `toml:"paths"`
	Files []string `toml:"files"`
}

var sampleConfig = `
  ## Directories in which to look for files, globs are supported.
  # paths = [
  #   "/cgroup/memory",
  #   "/cgroup/memory/child1",
  #   "/cgroup/memory/child2/*",
  # ]
  ## cgroup stat fields, as file names, globs are supported.
  ## these file names are appended to each path from above.
  # files = ["memory.*usage*", "memory.limit_in_bytes"]
`

func (g *CGroup) SampleConfig() string {
	return sampleConfig
}

func (g *CGroup) Description() string {
	return "Read specific statistics per cgroup"
}

func init() {
	inputs.Add("cgroup", func() telegraf.Input { return &CGroup{} })
}
@ -0,0 +1,243 @@
|
|||
// +build linux
|
||||
|
||||
package cgroup
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"path"
|
||||
"path/filepath"
|
||||
"regexp"
|
||||
"strconv"
|
||||
|
||||
"github.com/influxdata/telegraf"
|
||||
)
|
||||
|
||||
const metricName = "cgroup"
|
||||
|
||||
func (g *CGroup) Gather(acc telegraf.Accumulator) error {
|
||||
list := make(chan pathInfo)
|
||||
go g.generateDirs(list)
|
||||
|
||||
for dir := range list {
|
||||
if dir.err != nil {
|
||||
return dir.err
|
||||
}
|
||||
if err := g.gatherDir(dir.path, acc); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (g *CGroup) gatherDir(dir string, acc telegraf.Accumulator) error {
|
||||
fields := make(map[string]interface{})
|
||||
|
||||
list := make(chan pathInfo)
|
||||
go g.generateFiles(dir, list)
|
||||
|
||||
for file := range list {
|
||||
if file.err != nil {
|
||||
return file.err
|
||||
}
|
||||
|
||||
raw, err := ioutil.ReadFile(file.path)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if len(raw) == 0 {
|
||||
continue
|
||||
}
|
||||
|
||||
fd := fileData{data: raw, path: file.path}
|
||||
if err := fd.parse(fields); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
fields["path"] = dir
|
||||
|
||||
acc.AddFields(metricName, fields, nil)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// ======================================================================
|
||||
|
||||
type pathInfo struct {
|
||||
path string
|
||||
err error
|
||||
}
|
||||
|
||||
func isDir(path string) (bool, error) {
|
||||
result, err := os.Stat(path)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
return result.IsDir(), nil
|
||||
}
|
||||
|
||||
func (g *CGroup) generateDirs(list chan<- pathInfo) {
|
||||
for _, dir := range g.Paths {
|
||||
// getting all dirs that match the pattern 'dir'
|
||||
items, err := filepath.Glob(dir)
|
||||
if err != nil {
|
||||
list <- pathInfo{err: err}
|
||||
return
|
||||
}
|
||||
|
||||
for _, item := range items {
|
||||
ok, err := isDir(item)
|
||||
if err != nil {
|
||||
list <- pathInfo{err: err}
|
||||
return
|
||||
}
|
||||
// supply only dirs
|
||||
if ok {
|
||||
list <- pathInfo{path: item}
|
||||
}
|
||||
}
|
||||
}
|
||||
close(list)
|
||||
}
|
||||
|
||||
func (g *CGroup) generateFiles(dir string, list chan<- pathInfo) {
|
||||
for _, file := range g.Files {
|
||||
// getting all file paths that match the pattern 'dir + file'
|
||||
// path.Base make sure that file variable does not contains part of path
|
||||
items, err := filepath.Glob(path.Join(dir, path.Base(file)))
|
||||
if err != nil {
|
||||
list <- pathInfo{err: err}
|
||||
return
|
||||
}
|
||||
|
||||
for _, item := range items {
|
||||
ok, err := isDir(item)
|
||||
if err != nil {
|
||||
list <- pathInfo{err: err}
|
||||
return
|
||||
}
|
||||
// supply only files not dirs
|
||||
if !ok {
|
||||
list <- pathInfo{path: item}
|
||||
}
|
||||
}
|
||||
}
|
||||
close(list)
|
||||
}
|
||||
|
||||
// ======================================================================
|
||||
|
||||
type fileData struct {
|
||||
data []byte
|
||||
path string
|
||||
}
|
||||
|
||||
func (fd *fileData) format() (*fileFormat, error) {
|
||||
for _, ff := range fileFormats {
|
||||
ok, err := ff.match(fd.data)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if ok {
|
||||
return &ff, nil
|
||||
}
|
||||
}
|
||||
|
||||
return nil, fmt.Errorf("%v: unknown file format", fd.path)
|
||||
}
|
||||
|
||||
func (fd *fileData) parse(fields map[string]interface{}) error {
|
||||
format, err := fd.format()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
format.parser(filepath.Base(fd.path), fields, fd.data)
|
||||
return nil
|
||||
}
|
||||
|
||||
// ======================================================================
|
||||
|
||||
type fileFormat struct {
|
||||
name string
|
||||
pattern string
|
||||
parser func(measurement string, fields map[string]interface{}, b []byte)
|
||||
}
|
||||
|
||||
const keyPattern = "[[:alpha:]_]+"
|
||||
const valuePattern = "[\\d-]+"
|
||||
|
||||
var fileFormats = [...]fileFormat{
|
||||
// VAL\n
|
||||
fileFormat{
|
||||
name: "Single value",
|
||||
pattern: "^" + valuePattern + "\n$",
|
||||
parser: func(measurement string, fields map[string]interface{}, b []byte) {
|
||||
re := regexp.MustCompile("^(" + valuePattern + ")\n$")
|
||||
matches := re.FindAllStringSubmatch(string(b), -1)
|
||||
fields[measurement] = numberOrString(matches[0][1])
|
||||
},
|
||||
},
|
||||
// VAL0\n
|
||||
// VAL1\n
|
||||
// ...
|
||||
fileFormat{
|
||||
name: "New line separated values",
|
||||
pattern: "^(" + valuePattern + "\n){2,}$",
|
||||
parser: func(measurement string, fields map[string]interface{}, b []byte) {
|
||||
re := regexp.MustCompile("(" + valuePattern + ")\n")
|
||||
matches := re.FindAllStringSubmatch(string(b), -1)
|
||||
for i, v := range matches {
|
||||
fields[measurement+"."+strconv.Itoa(i)] = numberOrString(v[1])
|
||||
}
|
||||
},
|
||||
},
|
||||
// VAL0 VAL1 ...\n
|
||||
fileFormat{
|
||||
name: "Space separated values",
|
||||
pattern: "^(" + valuePattern + " )+\n$",
|
||||
parser: func(measurement string, fields map[string]interface{}, b []byte) {
|
||||
re := regexp.MustCompile("(" + valuePattern + ") ")
|
||||
matches := re.FindAllStringSubmatch(string(b), -1)
|
||||
for i, v := range matches {
|
||||
fields[measurement+"."+strconv.Itoa(i)] = numberOrString(v[1])
|
||||
}
|
||||
},
|
||||
},
|
||||
// KEY0 VAL0\n
|
||||
// KEY1 VAL1\n
|
||||
// ...
|
||||
fileFormat{
|
||||
name: "New line separated key-space-value's",
|
||||
pattern: "^(" + keyPattern + " " + valuePattern + "\n)+$",
|
||||
parser: func(measurement string, fields map[string]interface{}, b []byte) {
|
||||
re := regexp.MustCompile("(" + keyPattern + ") (" + valuePattern + ")\n")
|
||||
matches := re.FindAllStringSubmatch(string(b), -1)
|
||||
for _, v := range matches {
|
||||
fields[measurement+"."+v[1]] = numberOrString(v[2])
|
||||
}
|
||||
},
|
||||
},
|
||||
}
|
||||
|
||||
func numberOrString(s string) interface{} {
|
||||
i, err := strconv.Atoi(s)
|
||||
if err == nil {
|
||||
return i
|
||||
}
|
||||
|
||||
return s
|
||||
}
|
||||
|
||||
func (f fileFormat) match(b []byte) (bool, error) {
|
||||
ok, err := regexp.Match(f.pattern, b)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
if ok {
|
||||
return true, nil
|
||||
}
|
||||
return false, nil
|
||||
}
|
|
@ -0,0 +1,11 @@
|
|||
// +build !linux
|
||||
|
||||
package cgroup
|
||||
|
||||
import (
|
||||
"github.com/influxdata/telegraf"
|
||||
)
|
||||
|
||||
func (g *CGroup) Gather(acc telegraf.Accumulator) error {
|
||||
return nil
|
||||
}
|
|
@ -0,0 +1,194 @@
// +build linux

package cgroup

import (
	"fmt"
	"testing"

	"github.com/influxdata/telegraf/testutil"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
	"reflect"
)

var cg1 = &CGroup{
	Paths: []string{"testdata/memory"},
	Files: []string{
		"memory.empty",
		"memory.max_usage_in_bytes",
		"memory.limit_in_bytes",
		"memory.stat",
		"memory.use_hierarchy",
		"notify_on_release",
	},
}

func assertContainsFields(a *testutil.Accumulator, t *testing.T, measurement string, fieldSet []map[string]interface{}) {
	a.Lock()
	defer a.Unlock()

	numEquals := 0
	for _, p := range a.Metrics {
		if p.Measurement == measurement {
			for _, fields := range fieldSet {
				if reflect.DeepEqual(fields, p.Fields) {
					numEquals++
				}
			}
		}
	}

	if numEquals != len(fieldSet) {
		assert.Fail(t, fmt.Sprintf("only %d of %d are equal", numEquals, len(fieldSet)))
	}
}

func TestCgroupStatistics_1(t *testing.T) {
	var acc testutil.Accumulator

	err := cg1.Gather(&acc)
	require.NoError(t, err)

	fields := map[string]interface{}{
		"memory.stat.cache":           1739362304123123123,
		"memory.stat.rss":             1775325184,
		"memory.stat.rss_huge":        778043392,
		"memory.stat.mapped_file":     421036032,
		"memory.stat.dirty":           -307200,
		"memory.max_usage_in_bytes.0": 0,
		"memory.max_usage_in_bytes.1": -1,
		"memory.max_usage_in_bytes.2": 2,
		"memory.limit_in_bytes":       223372036854771712,
		"memory.use_hierarchy":        "12-781",
		"notify_on_release":           0,
		"path":                        "testdata/memory",
	}
	assertContainsFields(&acc, t, "cgroup", []map[string]interface{}{fields})
}

// ======================================================================

var cg2 = &CGroup{
	Paths: []string{"testdata/cpu"},
	Files: []string{"cpuacct.usage_percpu"},
}

func TestCgroupStatistics_2(t *testing.T) {
	var acc testutil.Accumulator

	err := cg2.Gather(&acc)
	require.NoError(t, err)

	fields := map[string]interface{}{
		"cpuacct.usage_percpu.0": -1452543795404,
		"cpuacct.usage_percpu.1": 1376681271659,
		"cpuacct.usage_percpu.2": 1450950799997,
		"cpuacct.usage_percpu.3": -1473113374257,
		"path":                   "testdata/cpu",
	}
	assertContainsFields(&acc, t, "cgroup", []map[string]interface{}{fields})
}

// ======================================================================

var cg3 = &CGroup{
	Paths: []string{"testdata/memory/*"},
	Files: []string{"memory.limit_in_bytes"},
}

func TestCgroupStatistics_3(t *testing.T) {
	var acc testutil.Accumulator

	err := cg3.Gather(&acc)
	require.NoError(t, err)

	fields := map[string]interface{}{
		"memory.limit_in_bytes": 223372036854771712,
		"path":                  "testdata/memory/group_1",
	}

	fieldsTwo := map[string]interface{}{
		"memory.limit_in_bytes": 223372036854771712,
		"path":                  "testdata/memory/group_2",
	}
	assertContainsFields(&acc, t, "cgroup", []map[string]interface{}{fields, fieldsTwo})
}

// ======================================================================

var cg4 = &CGroup{
	Paths: []string{"testdata/memory/*/*", "testdata/memory/group_2"},
	Files: []string{"memory.limit_in_bytes"},
}

func TestCgroupStatistics_4(t *testing.T) {
	var acc testutil.Accumulator

	err := cg4.Gather(&acc)
	require.NoError(t, err)

	fields := map[string]interface{}{
		"memory.limit_in_bytes": 223372036854771712,
		"path":                  "testdata/memory/group_1/group_1_1",
	}

	fieldsTwo := map[string]interface{}{
		"memory.limit_in_bytes": 223372036854771712,
		"path":                  "testdata/memory/group_1/group_1_2",
	}

	fieldsThree := map[string]interface{}{
		"memory.limit_in_bytes": 223372036854771712,
		"path":                  "testdata/memory/group_2",
	}

	assertContainsFields(&acc, t, "cgroup", []map[string]interface{}{fields, fieldsTwo, fieldsThree})
}

// ======================================================================

var cg5 = &CGroup{
	Paths: []string{"testdata/memory/*/group_1_1"},
	Files: []string{"memory.limit_in_bytes"},
}

func TestCgroupStatistics_5(t *testing.T) {
	var acc testutil.Accumulator

	err := cg5.Gather(&acc)
	require.NoError(t, err)

	fields := map[string]interface{}{
		"memory.limit_in_bytes": 223372036854771712,
		"path":                  "testdata/memory/group_1/group_1_1",
	}

	fieldsTwo := map[string]interface{}{
		"memory.limit_in_bytes": 223372036854771712,
		"path":                  "testdata/memory/group_2/group_1_1",
	}
	assertContainsFields(&acc, t, "cgroup", []map[string]interface{}{fields, fieldsTwo})
}

// ======================================================================

var cg6 = &CGroup{
	Paths: []string{"testdata/memory"},
	Files: []string{"memory.us*", "*/memory.kmem.*"},
}

func TestCgroupStatistics_6(t *testing.T) {
	var acc testutil.Accumulator

	err := cg6.Gather(&acc)
	require.NoError(t, err)

	fields := map[string]interface{}{
		"memory.usage_in_bytes":      3513667584,
		"memory.use_hierarchy":       "12-781",
		"memory.kmem.limit_in_bytes": 9223372036854771712,
		"path":                       "testdata/memory",
	}
	assertContainsFields(&acc, t, "cgroup", []map[string]interface{}{fields})
}
@ -0,0 +1 @@
Total 0

@ -0,0 +1,131 @@
11:0 Read 0
11:0 Write 0
11:0 Sync 0
11:0 Async 0
11:0 Total 0
8:0 Read 49134
8:0 Write 216703
8:0 Sync 177906
8:0 Async 87931
8:0 Total 265837
7:7 Read 0
7:7 Write 0
7:7 Sync 0
7:7 Async 0
7:7 Total 0
7:6 Read 0
7:6 Write 0
7:6 Sync 0
7:6 Async 0
7:6 Total 0
7:5 Read 0
7:5 Write 0
7:5 Sync 0
7:5 Async 0
7:5 Total 0
7:4 Read 0
7:4 Write 0
7:4 Sync 0
7:4 Async 0
7:4 Total 0
7:3 Read 0
7:3 Write 0
7:3 Sync 0
7:3 Async 0
7:3 Total 0
7:2 Read 0
7:2 Write 0
7:2 Sync 0
7:2 Async 0
7:2 Total 0
7:1 Read 0
7:1 Write 0
7:1 Sync 0
7:1 Async 0
7:1 Total 0
7:0 Read 0
7:0 Write 0
7:0 Sync 0
7:0 Async 0
7:0 Total 0
1:15 Read 3
1:15 Write 0
1:15 Sync 0
1:15 Async 3
1:15 Total 3
1:14 Read 3
1:14 Write 0
1:14 Sync 0
1:14 Async 3
1:14 Total 3
1:13 Read 3
1:13 Write 0
1:13 Sync 0
1:13 Async 3
1:13 Total 3
1:12 Read 3
1:12 Write 0
1:12 Sync 0
1:12 Async 3
1:12 Total 3
1:11 Read 3
1:11 Write 0
1:11 Sync 0
1:11 Async 3
1:11 Total 3
1:10 Read 3
1:10 Write 0
1:10 Sync 0
1:10 Async 3
1:10 Total 3
1:9 Read 3
1:9 Write 0
1:9 Sync 0
1:9 Async 3
1:9 Total 3
1:8 Read 3
1:8 Write 0
1:8 Sync 0
1:8 Async 3
1:8 Total 3
1:7 Read 3
1:7 Write 0
1:7 Sync 0
1:7 Async 3
1:7 Total 3
1:6 Read 3
1:6 Write 0
1:6 Sync 0
1:6 Async 3
1:6 Total 3
1:5 Read 3
1:5 Write 0
1:5 Sync 0
1:5 Async 3
1:5 Total 3
1:4 Read 3
1:4 Write 0
1:4 Sync 0
1:4 Async 3
1:4 Total 3
1:3 Read 3
1:3 Write 0
1:3 Sync 0
1:3 Async 3
1:3 Total 3
1:2 Read 3
1:2 Write 0
1:2 Sync 0
1:2 Async 3
1:2 Total 3
1:1 Read 3
1:1 Write 0
1:1 Sync 0
1:1 Async 3
1:1 Total 3
1:0 Read 3
1:0 Write 0
1:0 Sync 0
1:0 Async 3
1:0 Total 3
Total 265885

@ -0,0 +1 @@
-1

@ -0,0 +1 @@
-1452543795404 1376681271659 1450950799997 -1473113374257
plugins/inputs/cgroup/testdata/memory/group_1/group_1_1/memory.limit_in_bytes
@ -0,0 +1 @@
223372036854771712

@ -0,0 +1,5 @@
cache 1739362304123123123
rss 1775325184
rss_huge 778043392
mapped_file 421036032
dirty -307200

plugins/inputs/cgroup/testdata/memory/group_1/group_1_2/memory.limit_in_bytes
@ -0,0 +1 @@
223372036854771712

@ -0,0 +1,5 @@
cache 1739362304123123123
rss 1775325184
rss_huge 778043392
mapped_file 421036032
dirty -307200

@ -0,0 +1 @@
9223372036854771712

@ -0,0 +1 @@
0

@ -0,0 +1 @@
223372036854771712

@ -0,0 +1,5 @@
cache 1739362304123123123
rss 1775325184
rss_huge 778043392
mapped_file 421036032
dirty -307200

plugins/inputs/cgroup/testdata/memory/group_2/group_1_1/memory.limit_in_bytes
@ -0,0 +1 @@
223372036854771712

@ -0,0 +1,5 @@
cache 1739362304123123123
rss 1775325184
rss_huge 778043392
mapped_file 421036032
dirty -307200

@ -0,0 +1 @@
223372036854771712

@ -0,0 +1,5 @@
cache 1739362304123123123
rss 1775325184
rss_huge 778043392
mapped_file 421036032
dirty -307200

@ -0,0 +1 @@
9223372036854771712

@ -0,0 +1 @@
223372036854771712

@ -0,0 +1,3 @@
0
-1
2

@ -0,0 +1,8 @@
total=858067 N0=858067
file=406254 N0=406254
anon=451792 N0=451792
unevictable=21 N0=21
hierarchical_total=858067 N0=858067
hierarchical_file=406254 N0=406254
hierarchical_anon=451792 N0=451792
hierarchical_unevictable=21 N0=21

@ -0,0 +1,5 @@
cache 1739362304123123123
rss 1775325184
rss_huge 778043392
mapped_file 421036032
dirty -307200

@ -0,0 +1 @@
3513667584

@ -0,0 +1 @@
12-781

@ -0,0 +1 @@
0
@ -3,12 +3,14 @@ package dns_query

import (
	"errors"
	"fmt"
	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/inputs"
	"github.com/miekg/dns"
	"net"
	"strconv"
	"time"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/internal/errchan"
	"github.com/influxdata/telegraf/plugins/inputs"
)

type DnsQuery struct {

@ -55,12 +57,12 @@ func (d *DnsQuery) Description() string {
}
func (d *DnsQuery) Gather(acc telegraf.Accumulator) error {
	d.setDefaultValues()

	errChan := errchan.New(len(d.Domains) * len(d.Servers))
	for _, domain := range d.Domains {
		for _, server := range d.Servers {
			dnsQueryTime, err := d.getDnsQueryTime(domain, server)
			if err != nil {
				return err
			}
			errChan.C <- err
			tags := map[string]string{
				"server": server,
				"domain": domain,

@ -72,7 +74,7 @@ func (d *DnsQuery) Gather(acc telegraf.Accumulator) error {
		}
	}

	return nil
	return errChan.Error()
}

func (d *DnsQuery) setDefaultValues() {
@ -25,6 +25,8 @@ type Docker struct {
	Endpoint       string
	ContainerNames []string
	Timeout        internal.Duration
	PerDevice      bool `toml:"perdevice"`
	Total          bool `toml:"total"`

	client DockerClient
}

@ -58,6 +60,13 @@ var sampleConfig = `
  container_names = []
  ## Timeout for docker list, info, and stats commands
  timeout = "5s"

  ## Whether to report for each container per-device blkio (8:0, 8:1...) and
  ## network (eth0, eth1, ...) stats or not
  perdevice = true
  ## Whether to report for each container total blkio and network stats or not
  total = false

`

// Description returns input description

@ -207,9 +216,18 @@ func (d *Docker) gatherContainer(
		cname = strings.TrimPrefix(container.Names[0], "/")
	}

	// The image name sometimes has a version part, e.g. rabbitmq:3-management
	imageParts := strings.Split(container.Image, ":")
	imageName := imageParts[0]
	imageVersion := "unknown"
	if len(imageParts) > 1 {
		imageVersion = imageParts[1]
	}
	tags := map[string]string{
		"container_name":  cname,
		"container_image": container.Image,
		"container_name":    cname,
		"container_image":   imageName,
		"container_version": imageVersion,
	}
	if len(d.ContainerNames) > 0 {
		if !sliceContains(cname, d.ContainerNames) {

@ -237,7 +255,7 @@ func (d *Docker) gatherContainer(
		tags[k] = label
	}

	gatherContainerStats(v, acc, tags, container.ID)
	gatherContainerStats(v, acc, tags, container.ID, d.PerDevice, d.Total)

	return nil
}

@ -247,6 +265,8 @@ func gatherContainerStats(
	acc telegraf.Accumulator,
	tags map[string]string,
	id string,
	perDevice bool,
	total bool,
) {
	now := stat.Read

@ -314,6 +334,7 @@ func gatherContainerStats(
		acc.AddFields("docker_container_cpu", fields, percputags, now)
	}

	totalNetworkStatMap := make(map[string]interface{})
	for network, netstats := range stat.Networks {
		netfields := map[string]interface{}{
			"rx_dropped": netstats.RxDropped,

@ -327,12 +348,35 @@ func gatherContainerStats(
			"container_id": id,
		}
		// Create a new network tag dictionary for the "network" tag
		nettags := copyTags(tags)
		nettags["network"] = network
		acc.AddFields("docker_container_net", netfields, nettags, now)
		if perDevice {
			nettags := copyTags(tags)
			nettags["network"] = network
			acc.AddFields("docker_container_net", netfields, nettags, now)
		}
		if total {
			for field, value := range netfields {
				if field == "container_id" {
					continue
				}
				_, ok := totalNetworkStatMap[field]
				if ok {
					totalNetworkStatMap[field] = totalNetworkStatMap[field].(uint64) + value.(uint64)
				} else {
					totalNetworkStatMap[field] = value
				}
			}
		}
	}

	gatherBlockIOMetrics(stat, acc, tags, now, id)
	// totalNetworkStatMap could be empty if container is running with --net=host.
	if total && len(totalNetworkStatMap) != 0 {
		nettags := copyTags(tags)
		nettags["network"] = "total"
		totalNetworkStatMap["container_id"] = id
		acc.AddFields("docker_container_net", totalNetworkStatMap, nettags, now)
	}

	gatherBlockIOMetrics(stat, acc, tags, now, id, perDevice, total)
}

func calculateMemPercent(stat *types.StatsJSON) float64 {

@ -361,6 +405,8 @@ func gatherBlockIOMetrics(
	tags map[string]string,
	now time.Time,
	id string,
	perDevice bool,
	total bool,
) {
	blkioStats := stat.BlkioStats
	// Make a map of devices to their block io stats

@ -422,11 +468,33 @@ func gatherBlockIOMetrics(
		deviceStatMap[device]["sectors_recursive"] = metric.Value
	}

	totalStatMap := make(map[string]interface{})
	for device, fields := range deviceStatMap {
		iotags := copyTags(tags)
		iotags["device"] = device
		fields["container_id"] = id
		acc.AddFields("docker_container_blkio", fields, iotags, now)
		if perDevice {
			iotags := copyTags(tags)
			iotags["device"] = device
			acc.AddFields("docker_container_blkio", fields, iotags, now)
		}
		if total {
			for field, value := range fields {
				if field == "container_id" {
					continue
				}
				_, ok := totalStatMap[field]
				if ok {
					totalStatMap[field] = totalStatMap[field].(uint64) + value.(uint64)
				} else {
					totalStatMap[field] = value
				}
			}
		}
	}
	if total {
		totalStatMap["container_id"] = id
		iotags := copyTags(tags)
		iotags["device"] = "total"
		acc.AddFields("docker_container_blkio", totalStatMap, iotags, now)
	}
}


@ -471,7 +539,8 @@ func parseSize(sizeStr string) (int64, error) {
func init() {
	inputs.Add("docker", func() telegraf.Input {
		return &Docker{
			Timeout: internal.Duration{Duration: time.Second * 5},
			PerDevice: true,
			Timeout:   internal.Duration{Duration: time.Second * 5},
		}
	})
}
@ -24,7 +24,7 @@ func TestDockerGatherContainerStats(t *testing.T) {
		"container_name":  "redis",
		"container_image": "redis/image",
	}
	gatherContainerStats(stats, &acc, tags, "123456789")
	gatherContainerStats(stats, &acc, tags, "123456789", true, true)

	// test docker_container_net measurement
	netfields := map[string]interface{}{

@ -42,6 +42,21 @@ func TestDockerGatherContainerStats(t *testing.T) {
	nettags["network"] = "eth0"
	acc.AssertContainsTaggedFields(t, "docker_container_net", netfields, nettags)

	netfields = map[string]interface{}{
		"rx_dropped":   uint64(6),
		"rx_bytes":     uint64(8),
		"rx_errors":    uint64(10),
		"tx_packets":   uint64(12),
		"tx_dropped":   uint64(6),
		"rx_packets":   uint64(8),
		"tx_errors":    uint64(10),
		"tx_bytes":     uint64(12),
		"container_id": "123456789",
	}
	nettags = copyTags(tags)
	nettags["network"] = "total"
	acc.AssertContainsTaggedFields(t, "docker_container_net", netfields, nettags)

	// test docker_blkio measurement
	blkiotags := copyTags(tags)
	blkiotags["device"] = "6:0"

@ -52,6 +67,15 @@ func TestDockerGatherContainerStats(t *testing.T) {
	}
	acc.AssertContainsTaggedFields(t, "docker_container_blkio", blkiofields, blkiotags)

	blkiotags = copyTags(tags)
	blkiotags["device"] = "total"
	blkiofields = map[string]interface{}{
		"io_service_bytes_recursive_read": uint64(100),
		"io_serviced_recursive_write":     uint64(302),
		"container_id":                    "123456789",
	}
	acc.AssertContainsTaggedFields(t, "docker_container_blkio", blkiofields, blkiotags)

	// test docker_container_mem measurement
	memfields := map[string]interface{}{
		"max_usage": uint64(1001),

@ -186,6 +210,17 @@ func testStats() *types.StatsJSON {
		TxBytes:   4,
	}

	stats.Networks["eth1"] = types.NetworkStats{
		RxDropped: 5,
		RxBytes:   6,
		RxErrors:  7,
		TxPackets: 8,
		TxDropped: 5,
		RxPackets: 6,
		TxErrors:  7,
		TxBytes:   8,
	}

	sbr := types.BlkioStatEntry{
		Major: 6,
		Minor: 0,

@ -198,11 +233,19 @@ func testStats() *types.StatsJSON {
		Op:    "write",
		Value: 101,
	}
	sr2 := types.BlkioStatEntry{
		Major: 6,
		Minor: 1,
		Op:    "write",
		Value: 201,
	}

	stats.BlkioStats.IoServiceBytesRecursive = append(
		stats.BlkioStats.IoServiceBytesRecursive, sbr)
	stats.BlkioStats.IoServicedRecursive = append(
		stats.BlkioStats.IoServicedRecursive, sr)
	stats.BlkioStats.IoServicedRecursive = append(
		stats.BlkioStats.IoServicedRecursive, sr2)

	return stats
}

@ -378,9 +421,10 @@ func TestDockerGatherInfo(t *testing.T) {
			"container_id": "b7dfbb9478a6ae55e237d4d74f8bbb753f0817192b5081334dc78476296e2173",
		},
		map[string]string{
			"container_name":  "etcd2",
			"container_image": "quay.io/coreos/etcd:v2.2.2",
			"cpu":             "cpu3",
			"container_name":    "etcd2",
			"container_image":   "quay.io/coreos/etcd",
			"cpu":               "cpu3",
			"container_version": "v2.2.2",
		},
	)
	acc.AssertContainsTaggedFields(t,

@ -423,8 +467,9 @@ func TestDockerGatherInfo(t *testing.T) {
			"container_id": "b7dfbb9478a6ae55e237d4d74f8bbb753f0817192b5081334dc78476296e2173",
		},
		map[string]string{
			"container_name":  "etcd2",
			"container_image": "quay.io/coreos/etcd:v2.2.2",
			"container_name":    "etcd2",
			"container_image":   "quay.io/coreos/etcd",
			"container_version": "v2.2.2",
		},
	)
@ -12,6 +12,7 @@ import (
	"time"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/internal/errchan"
	"github.com/influxdata/telegraf/plugins/inputs"
)

@ -51,7 +52,6 @@ const defaultPort = "24242"

// Reads stats from all configured servers.
func (d *Dovecot) Gather(acc telegraf.Accumulator) error {

	if !validQuery[d.Type] {
		return fmt.Errorf("Error: %s is not a valid query type\n",
			d.Type)

@ -61,31 +61,27 @@ func (d *Dovecot) Gather(acc telegraf.Accumulator) error {
		d.Servers = append(d.Servers, "127.0.0.1:24242")
	}

	var wg sync.WaitGroup

	var outerr error

	if len(d.Filters) <= 0 {
		d.Filters = append(d.Filters, "")
	}

	for _, serv := range d.Servers {
	var wg sync.WaitGroup
	errChan := errchan.New(len(d.Servers) * len(d.Filters))
	for _, server := range d.Servers {
		for _, filter := range d.Filters {
			wg.Add(1)
			go func(serv string, filter string) {
			go func(s string, f string) {
				defer wg.Done()
				outerr = d.gatherServer(serv, acc, d.Type, filter)
			}(serv, filter)
				errChan.C <- d.gatherServer(s, acc, d.Type, f)
			}(server, filter)
		}
	}

	wg.Wait()

	return outerr
	return errChan.Error()
}
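
The errchan-based fan-out used above recurs in the dns_query and elasticsearch changes in this commit. A self-contained sketch of the same pattern follows; this is a plain re-implementation for illustration, not the actual internal/errchan package:

```
package main

import (
	"fmt"
	"sync"
)

// gather fans work out to goroutines, collects one error per task on a
// buffered channel, and reports the first non-nil error after all finish.
func gather(tasks []string, do func(string) error) error {
	errc := make(chan error, len(tasks)) // buffered: senders never block

	var wg sync.WaitGroup
	wg.Add(len(tasks))
	for _, t := range tasks {
		go func(t string) {
			defer wg.Done()
			errc <- do(t)
		}(t)
	}
	wg.Wait()
	close(errc)

	for err := range errc {
		if err != nil {
			return err
		}
	}
	return nil
}

func main() {
	err := gather([]string{"127.0.0.1:24242", "10.0.0.2:24242"}, func(addr string) error {
		if addr == "10.0.0.2:24242" { // hypothetical failing server for the demo
			return fmt.Errorf("connect to %s failed", addr)
		}
		return nil
	})
	fmt.Println(err)
}
```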

func (d *Dovecot) gatherServer(addr string, acc telegraf.Accumulator, qtype string, filter string) error {

	_, _, err := net.SplitHostPort(addr)
	if err != nil {
		return fmt.Errorf("Error: %s on url %s\n", err, addr)
@ -11,6 +11,13 @@ and optionally [cluster](https://www.elastic.co/guide/en/elasticsearch/reference
  servers = ["http://localhost:9200"]
  local = true
  cluster_health = true

  ## Optional SSL Config
  # ssl_ca = "/etc/telegraf/ca.pem"
  # ssl_cert = "/etc/telegraf/cert.pem"
  # ssl_key = "/etc/telegraf/key.pem"
  ## Use SSL but skip chain & host verification
  # insecure_skip_verify = false
```

### Measurements & Fields:
@ -8,6 +8,7 @@ import (
	"time"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/internal"
	"github.com/influxdata/telegraf/internal/errchan"
	"github.com/influxdata/telegraf/plugins/inputs"
	jsonparser "github.com/influxdata/telegraf/plugins/parsers/json"

@ -67,25 +68,31 @@ const sampleConfig = `

  ## set cluster_health to true when you want to also obtain cluster level stats
  cluster_health = false

  ## Optional SSL Config
  # ssl_ca = "/etc/telegraf/ca.pem"
  # ssl_cert = "/etc/telegraf/cert.pem"
  # ssl_key = "/etc/telegraf/key.pem"
  ## Use SSL but skip chain & host verification
  # insecure_skip_verify = false
`

// Elasticsearch is a plugin to read stats from one or many Elasticsearch
// servers.
type Elasticsearch struct {
	Local         bool
	Servers       []string
	ClusterHealth bool
	client        *http.Client
	Local              bool
	Servers            []string
	ClusterHealth      bool
	SSLCA              string `toml:"ssl_ca"`   // Path to CA file
	SSLCert            string `toml:"ssl_cert"` // Path to host cert file
	SSLKey             string `toml:"ssl_key"`  // Path to cert key file
	InsecureSkipVerify bool   // Use SSL but skip chain & host verification
	client             *http.Client
}

// NewElasticsearch return a new instance of Elasticsearch
func NewElasticsearch() *Elasticsearch {
	tr := &http.Transport{ResponseHeaderTimeout: time.Duration(3 * time.Second)}
	client := &http.Client{
		Transport: tr,
		Timeout:   time.Duration(4 * time.Second),
	}
	return &Elasticsearch{client: client}
	return &Elasticsearch{}
}

// SampleConfig returns sample configuration for this plugin.

@ -101,6 +108,15 @@ func (e *Elasticsearch) Description() string {
// Gather reads the stats from Elasticsearch and writes it to the
// Accumulator.
func (e *Elasticsearch) Gather(acc telegraf.Accumulator) error {
	if e.client == nil {
		client, err := e.createHttpClient()

		if err != nil {
			return err
		}
		e.client = client
	}

	errChan := errchan.New(len(e.Servers))
	var wg sync.WaitGroup
	wg.Add(len(e.Servers))

@ -128,6 +144,23 @@ func (e *Elasticsearch) Gather(acc telegraf.Accumulator) error {
	return errChan.Error()
}

func (e *Elasticsearch) createHttpClient() (*http.Client, error) {
	tlsCfg, err := internal.GetTLSConfig(e.SSLCert, e.SSLKey, e.SSLCA, e.InsecureSkipVerify)
	if err != nil {
		return nil, err
	}
	tr := &http.Transport{
		ResponseHeaderTimeout: time.Duration(3 * time.Second),
		TLSClientConfig:       tlsCfg,
	}
	client := &http.Client{
		Transport: tr,
		Timeout:   time.Duration(4 * time.Second),
	}

	return client, nil
}

func (e *Elasticsearch) gatherNodeStats(url string, acc telegraf.Accumulator) error {
	nodeStats := &struct {
		ClusterName string `json:"cluster_name"`

@ -38,7 +38,7 @@ func (t *transportMock) CancelRequest(_ *http.Request) {
}

func TestElasticsearch(t *testing.T) {
	es := NewElasticsearch()
	es := newElasticsearchWithClient()
	es.Servers = []string{"http://example.com:9200"}
	es.client.Transport = newTransportMock(http.StatusOK, statsResponse)

@ -67,7 +67,7 @@ func TestElasticsearch(t *testing.T) {
}

func TestGatherClusterStats(t *testing.T) {
	es := NewElasticsearch()
	es := newElasticsearchWithClient()
	es.Servers = []string{"http://example.com:9200"}
	es.ClusterHealth = true
	es.client.Transport = newTransportMock(http.StatusOK, clusterResponse)

@ -87,3 +87,9 @@ func TestGatherClusterStats(t *testing.T) {
		v2IndexExpected,
		map[string]string{"index": "v2"})
}

func newElasticsearchWithClient() *Elasticsearch {
	es := NewElasticsearch()
	es.client = &http.Client{}
	return es
}
@ -48,8 +48,6 @@ type Exec struct {

	parser parsers.Parser

	wg sync.WaitGroup

	runner  Runner
	errChan chan error
}

@ -119,8 +117,8 @@ func (c CommandRunner) Run(
	return out.Bytes(), nil
}

func (e *Exec) ProcessCommand(command string, acc telegraf.Accumulator) {
	defer e.wg.Done()
func (e *Exec) ProcessCommand(command string, acc telegraf.Accumulator, wg *sync.WaitGroup) {
	defer wg.Done()

	out, err := e.runner.Run(e, command, acc)
	if err != nil {

@ -151,6 +149,7 @@ func (e *Exec) SetParser(parser parsers.Parser) {
}

func (e *Exec) Gather(acc telegraf.Accumulator) error {
	var wg sync.WaitGroup
	// Legacy single command support
	if e.Command != "" {
		e.Commands = append(e.Commands, e.Command)

@ -177,8 +176,12 @@ func (e *Exec) Gather(acc telegraf.Accumulator) error {
			// There were matches, so we'll append each match together with
			// the arguments to the commands slice
			for _, match := range matches {
				commands = append(
					commands, strings.Join([]string{match, cmdAndArgs[1]}, " "))
				if len(cmdAndArgs) == 1 {
					commands = append(commands, match)
				} else {
					commands = append(commands,
						strings.Join([]string{match, cmdAndArgs[1]}, " "))
				}
			}
		}
	}

@ -186,11 +189,11 @@ func (e *Exec) Gather(acc telegraf.Accumulator) error {
	errChan := errchan.New(len(commands))
	e.errChan = errChan.C

	e.wg.Add(len(commands))
	wg.Add(len(commands))
	for _, command := range commands {
		go e.ProcessCommand(command, acc)
		go e.ProcessCommand(command, acc, &wg)
	}
	e.wg.Wait()
	wg.Wait()
	return errChan.Error()
}
@ -92,9 +92,11 @@ type haproxy struct {
var sampleConfig = `
  ## An array of addresses to gather stats about. Specify an ip or hostname
  ## with optional port. ie localhost, 10.10.3.33:1936, etc.

  ## If no servers are specified, then default to 127.0.0.1:1936
  servers = ["http://myhaproxy.com:1936", "http://anotherhaproxy.com:1936"]
  ## Make sure you specify the complete path to the stats endpoint
  ## ie 10.10.3.33:1936/haproxy?stats
  #
  ## If no servers are specified, then default to 127.0.0.1:1936/haproxy?stats
  servers = ["http://myhaproxy.com:1936/haproxy?stats"]
  ## Or you can also use local socket
  ## servers = ["socket:/run/haproxy/admin.sock"]
`

@ -111,7 +113,7 @@ func (r *haproxy) Description() string {
// Returns one of the errors encountered while gather stats (if any).
func (g *haproxy) Gather(acc telegraf.Accumulator) error {
	if len(g.Servers) == 0 {
		return g.gatherServer("http://127.0.0.1:1936", acc)
		return g.gatherServer("http://127.0.0.1:1936/haproxy?stats", acc)
	}

	var wg sync.WaitGroup

@ -167,12 +169,16 @@ func (g *haproxy) gatherServer(addr string, acc telegraf.Accumulator) error {
		g.client = client
	}

	if !strings.HasSuffix(addr, ";csv") {
		addr += "/;csv"
	}

	u, err := url.Parse(addr)
	if err != nil {
		return fmt.Errorf("Unable to parse server address '%s': %s", addr, err)
	}

	req, err := http.NewRequest("GET", fmt.Sprintf("%s://%s%s/;csv", u.Scheme, u.Host, u.Path), nil)
	req, err := http.NewRequest("GET", addr, nil)
	if u.User != nil {
		p, _ := u.User.Password()
		req.SetBasicAuth(u.User.Username(), p)

@ -184,7 +190,7 @@ func (g *haproxy) gatherServer(addr string, acc telegraf.Accumulator) error {
	}

	if res.StatusCode != 200 {
		return fmt.Errorf("Unable to get valid stat result from '%s': %s", addr, err)
		return fmt.Errorf("Unable to get valid stat result from '%s', http response code : %d", addr, res.StatusCode)
	}

	return importCsvResult(res.Body, acc, u.Host)

@ -243,7 +243,7 @@ func TestHaproxyDefaultGetFromLocalhost(t *testing.T) {

	err := r.Gather(&acc)
	require.Error(t, err)
	assert.Contains(t, err.Error(), "127.0.0.1:1936/;csv")
	assert.Contains(t, err.Error(), "127.0.0.1:1936/haproxy?stats/;csv")
}

const csvOutputSample = `
@@ -0,0 +1,22 @@
# Hddtemp Input Plugin

This plugin reads data from a hddtemp daemon.

## Requirements

Hddtemp should be installed and its daemon running.

## Configuration

```
[[inputs.hddtemp]]
  ## By default, telegraf gathers temps data from all disks detected by the
  ## hddtemp daemon.
  ##
  ## Only collect temps from the selected disks.
  ##
  ## A * as the device name will return the temperature values of all disks.
  ##
  # address = "127.0.0.1:7634"
  # devices = ["sda", "*"]
```
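For reference, each matched disk becomes one `hddtemp` point whose single field is keyed by the device name (see the plugin source added later in this commit). A roughly representative line-protocol sample, with illustrative values:

```
hddtemp,device=sda,model=ST380011A,unit=C sda=36i
```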
@@ -0,0 +1,21 @@
The MIT License (MIT)

Copyright (c) 2016 Mendelson Gusmão

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@@ -0,0 +1,61 @@
package hddtemp

import (
    "bytes"
    "io"
    "net"
    "strconv"
    "strings"
)

type disk struct {
    DeviceName  string
    Model       string
    Temperature int32
    Unit        string
    Status      string
}

func Fetch(address string) ([]disk, error) {
    var (
        err    error
        conn   net.Conn
        buffer bytes.Buffer
        disks  []disk
    )

    if conn, err = net.Dial("tcp", address); err != nil {
        return nil, err
    }

    if _, err = io.Copy(&buffer, conn); err != nil {
        return nil, err
    }

    fields := strings.Split(buffer.String(), "|")

    for index := 0; index < len(fields)/5; index++ {
        status := ""
        offset := index * 5
        device := fields[offset+1]
        device = device[strings.LastIndex(device, "/")+1:]

        temperatureField := fields[offset+3]
        temperature, err := strconv.ParseInt(temperatureField, 10, 32)

        if err != nil {
            temperature = 0
            status = temperatureField
        }

        disks = append(disks, disk{
            DeviceName:  device,
            Model:       fields[offset+2],
            Temperature: int32(temperature),
            Unit:        fields[offset+4],
            Status:      status,
        })
    }

    return disks, nil
}
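The daemon's reply is one flat string of `|`-separated fields in groups of five (leading separator, device path, model, temperature, unit), which is why `Fetch` advances `offset` in steps of five. A sample two-disk payload, taken from the tests that follow:

```
|/dev/hda|ST380011A|46|C||/dev/hdd|ST340016A|SLP|*|
```

When the temperature field is not numeric (e.g. `SLP` for a sleeping disk), it is kept in `Status` and `Temperature` defaults to 0.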
@@ -0,0 +1,116 @@
package hddtemp

import (
    "net"
    "reflect"
    "testing"
)

func TestFetch(t *testing.T) {
    l := serve(t, []byte("|/dev/sda|foobar|36|C|"))
    defer l.Close()

    disks, err := Fetch(l.Addr().String())

    if err != nil {
        t.Error("expecting err to be nil")
    }

    expected := []disk{
        {
            DeviceName:  "sda",
            Model:       "foobar",
            Temperature: 36,
            Unit:        "C",
        },
    }

    if !reflect.DeepEqual(expected, disks) {
        t.Error("disks' slice is different from expected")
    }
}

func TestFetchWrongAddress(t *testing.T) {
    _, err := Fetch("127.0.0.1:1")

    if err == nil {
        t.Error("expecting err to be non-nil")
    }
}

func TestFetchStatus(t *testing.T) {
    l := serve(t, []byte("|/dev/sda|foobar|SLP|C|"))
    defer l.Close()

    disks, err := Fetch(l.Addr().String())

    if err != nil {
        t.Error("expecting err to be nil")
    }

    expected := []disk{
        {
            DeviceName:  "sda",
            Model:       "foobar",
            Temperature: 0,
            Unit:        "C",
            Status:      "SLP",
        },
    }

    if !reflect.DeepEqual(expected, disks) {
        t.Error("disks' slice is different from expected")
    }
}

func TestFetchTwoDisks(t *testing.T) {
    l := serve(t, []byte("|/dev/hda|ST380011A|46|C||/dev/hdd|ST340016A|SLP|*|"))
    defer l.Close()

    disks, err := Fetch(l.Addr().String())

    if err != nil {
        t.Error("expecting err to be nil")
    }

    expected := []disk{
        {
            DeviceName:  "hda",
            Model:       "ST380011A",
            Temperature: 46,
            Unit:        "C",
        },
        {
            DeviceName:  "hdd",
            Model:       "ST340016A",
            Temperature: 0,
            Unit:        "*",
            Status:      "SLP",
        },
    }

    if !reflect.DeepEqual(expected, disks) {
        t.Error("disks' slice is different from expected")
    }
}

func serve(t *testing.T, data []byte) net.Listener {
    l, err := net.Listen("tcp", "127.0.0.1:0")

    if err != nil {
        t.Fatal(err)
    }

    go func(t *testing.T) {
        conn, err := l.Accept()

        if err != nil {
            t.Fatal(err)
        }

        conn.Write(data)
        conn.Close()
    }(t)

    return l
}
@@ -0,0 +1,74 @@
// +build linux

package hddtemp

import (
    "github.com/influxdata/telegraf"
    "github.com/influxdata/telegraf/plugins/inputs"
    gohddtemp "github.com/influxdata/telegraf/plugins/inputs/hddtemp/go-hddtemp"
)

const defaultAddress = "127.0.0.1:7634"

type HDDTemp struct {
    Address string
    Devices []string
}

func (_ *HDDTemp) Description() string {
    return "Monitor disks' temperatures using hddtemp"
}

var hddtempSampleConfig = `
  ## By default, telegraf gathers temps data from all disks detected by the
  ## hddtemp daemon.
  ##
  ## Only collect temps from the selected disks.
  ##
  ## A * as the device name will return the temperature values of all disks.
  ##
  # address = "127.0.0.1:7634"
  # devices = ["sda", "*"]
`

func (_ *HDDTemp) SampleConfig() string {
    return hddtempSampleConfig
}

func (h *HDDTemp) Gather(acc telegraf.Accumulator) error {
    disks, err := gohddtemp.Fetch(h.Address)

    if err != nil {
        return err
    }

    for _, disk := range disks {
        for _, chosenDevice := range h.Devices {
            if chosenDevice == "*" || chosenDevice == disk.DeviceName {
                tags := map[string]string{
                    "device": disk.DeviceName,
                    "model":  disk.Model,
                    "unit":   disk.Unit,
                    "status": disk.Status,
                }

                fields := map[string]interface{}{
                    disk.DeviceName: disk.Temperature,
                }

                acc.AddFields("hddtemp", fields, tags)
            }
        }
    }

    return nil
}

func init() {
    inputs.Add("hddtemp", func() telegraf.Input {
        return &HDDTemp{
            Address: defaultAddress,
            Devices: []string{"*"},
        }
    })
}
@@ -0,0 +1,3 @@
// +build !linux

package hddtemp
@@ -249,7 +249,14 @@ func (j *Jolokia) Gather(acc telegraf.Accumulator) error {
         switch t := values.(type) {
         case map[string]interface{}:
             for k, v := range t {
-                fields[measurement+"_"+k] = v
+                switch t2 := v.(type) {
+                case map[string]interface{}:
+                    for k2, v2 := range t2 {
+                        fields[measurement+"_"+k+"_"+k2] = v2
+                    }
+                case interface{}:
+                    fields[measurement+"_"+k] = t2
+                }
             }
         case interface{}:
             fields[measurement] = t
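This change flattens one level of nested maps in the Jolokia response instead of storing a Go map as a single, unusable field value. An illustrative sketch (the measurement name and attribute are hypothetical, not taken from this diff):

```
values = {"HeapMemoryUsage": {"used": 203288528, "max": 518979584}}

before: fields["java_memory_HeapMemoryUsage"] = map[...]        // not a valid field value
after:  fields["java_memory_HeapMemoryUsage_used"] = 203288528
        fields["java_memory_HeapMemoryUsage_max"]  = 518979584
```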
@@ -22,7 +22,7 @@ from the same topic in parallel.
   ## Offset (must be either "oldest" or "newest")
   offset = "oldest"

-  ## Data format to consume.
+  ## Data format to consume.
   ## Each data format has its own unique set of configuration options, read
   ## more about them here:

@@ -32,11 +32,5 @@ from the same topic in parallel.

 ## Testing

-Running integration tests requires running Zookeeper & Kafka. The following
-commands assume you're on OS X & using [boot2docker](http://boot2docker.io/) or docker-machine through [Docker Toolbox](https://www.docker.com/docker-toolbox).
-
-To start Kafka & Zookeeper:
-
-```
-docker run -d -p 2181:2181 -p 9092:9092 --env ADVERTISED_HOST=`boot2docker ip || docker-machine ip <your_machine_name>` --env ADVERTISED_PORT=9092 spotify/kafka
-```
+Running integration tests requires running Zookeeper & Kafka. See Makefile
+for kafka container command.
@@ -50,7 +50,7 @@ var sampleConfig = `
   ## an array of Zookeeper connection strings
   zookeeper_peers = ["localhost:2181"]
   ## Zookeeper Chroot
-  zookeeper_chroot = "/"
+  zookeeper_chroot = ""
   ## the name of the consumer group
   consumer_group = "telegraf_metrics_consumers"
   ## Offset (must be either "oldest" or "newest")
@@ -0,0 +1,95 @@
# logparser Input Plugin

The logparser plugin streams and parses the given logfiles. Currently it only
has the capability of parsing "grok" patterns from logfiles, which also
supports regex patterns.

### Configuration:

```toml
[[inputs.logparser]]
  ## Log files to parse.
  ## These accept standard unix glob matching rules, but with the addition of
  ## ** as a "super asterisk". ie:
  ##   /var/log/**.log     -> recursively find all .log files in /var/log
  ##   /var/log/*/*.log    -> find all .log files with a parent dir in /var/log
  ##   /var/log/apache.log -> only tail the apache log file
  files = ["/var/log/apache/access.log"]
  ## Read file from beginning.
  from_beginning = false

  ## Parse logstash-style "grok" patterns:
  ##   Telegraf built-in parsing patterns: https://goo.gl/dkay10
  [inputs.logparser.grok]
    ## This is a list of patterns to check the given log file(s) for.
    ## Note that adding patterns here increases processing time. The most
    ## efficient configuration is to have one pattern per logparser.
    ## Other common built-in patterns are:
    ##   %{COMMON_LOG_FORMAT}   (plain apache & nginx access logs)
    ##   %{COMBINED_LOG_FORMAT} (access logs + referrer & agent)
    patterns = ["%{COMBINED_LOG_FORMAT}"]
    ## Name of the output measurement.
    measurement = "apache_access_log"
    ## Full path(s) to custom pattern files.
    custom_pattern_files = []
    ## Custom patterns can also be defined here. Put one pattern per line.
    custom_patterns = '''
    '''
```

## Grok Parser

The grok parser uses a slightly modified version of logstash "grok" patterns,
with the format `%{<capture_syntax>[:<semantic_name>][:<modifier>]}`.

Telegraf has many of its own
[built-in patterns](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/logparser/grok/patterns/influx-patterns),
as well as supporting
[logstash's builtin patterns](https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns).

The best way to get acquainted with grok patterns is to read the logstash docs,
which are available here:
https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html

If you need help building patterns to match your logs,
you will find the http://grokdebug.herokuapp.com application quite useful!

By default all named captures are converted into string fields.
Modifiers can be used to convert captures to other types or tags.
Timestamp modifiers can be used to convert captures to the timestamp of the
parsed metric.

- Available modifiers:
  - string (default if nothing is specified)
  - int
  - float
  - duration (ie, 5.23ms gets converted to int nanoseconds)
  - tag (converts the field into a tag)
  - drop (drops the field completely)
- Timestamp modifiers:
  - ts (This will auto-learn the timestamp format)
  - ts-ansic ("Mon Jan _2 15:04:05 2006")
  - ts-unix ("Mon Jan _2 15:04:05 MST 2006")
  - ts-ruby ("Mon Jan 02 15:04:05 -0700 2006")
  - ts-rfc822 ("02 Jan 06 15:04 MST")
  - ts-rfc822z ("02 Jan 06 15:04 -0700")
  - ts-rfc850 ("Monday, 02-Jan-06 15:04:05 MST")
  - ts-rfc1123 ("Mon, 02 Jan 2006 15:04:05 MST")
  - ts-rfc1123z ("Mon, 02 Jan 2006 15:04:05 -0700")
  - ts-rfc3339 ("2006-01-02T15:04:05Z07:00")
  - ts-rfc3339nano ("2006-01-02T15:04:05.999999999Z07:00")
  - ts-httpd ("02/Jan/2006:15:04:05 -0700")
  - ts-epoch (seconds since unix epoch)
  - ts-epochnano (nanoseconds since unix epoch)
  - ts-"CUSTOM"

CUSTOM time layouts must be within quotes and be the representation of the
"reference time", which is `Mon Jan 2 15:04:05 -0700 MST 2006`.
See https://golang.org/pkg/time/#Parse for more details. A worked example
follows.
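As a worked example (taken from the test patterns added later in this commit), the following custom patterns combine type, tag, duration, and timestamp modifiers:

```
DURATION %{NUMBER}[nuµm]?s
RESPONSE_CODE %{NUMBER:response_code:tag}
RESPONSE_TIME %{DURATION:response_time:duration}
TEST_LOG_A \[%{HTTPDATE:timestamp:ts-httpd}\] %{NUMBER:myfloat:float} %{RESPONSE_CODE} %{IPORHOST:clientip} %{RESPONSE_TIME} %{NUMBER:myint:int}
```

Applied to the line `[04/Jun/2016:12:41:45 +0100] 1.25 200 192.168.1.1 5.432µs 101`, this produces the fields `myfloat=1.25`, `clientip="192.168.1.1"`, `response_time=5432` (the duration modifier converts `5.432µs` to integer nanoseconds), and `myint=101`, plus the tag `response_code=200`, with the metric timestamp parsed from the httpd date.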
@@ -0,0 +1,440 @@
package grok

import (
    "bufio"
    "fmt"
    "log"
    "os"
    "regexp"
    "strconv"
    "strings"
    "time"

    "github.com/vjeantet/grok"

    "github.com/influxdata/telegraf"
)

var timeLayouts = map[string]string{
    "ts-ansic":       "Mon Jan _2 15:04:05 2006",
    "ts-unix":        "Mon Jan _2 15:04:05 MST 2006",
    "ts-ruby":        "Mon Jan 02 15:04:05 -0700 2006",
    "ts-rfc822":      "02 Jan 06 15:04 MST",
    "ts-rfc822z":     "02 Jan 06 15:04 -0700", // RFC822 with numeric zone
    "ts-rfc850":      "Monday, 02-Jan-06 15:04:05 MST",
    "ts-rfc1123":     "Mon, 02 Jan 2006 15:04:05 MST",
    "ts-rfc1123z":    "Mon, 02 Jan 2006 15:04:05 -0700", // RFC1123 with numeric zone
    "ts-rfc3339":     "2006-01-02T15:04:05Z07:00",
    "ts-rfc3339nano": "2006-01-02T15:04:05.999999999Z07:00",
    "ts-httpd":       "02/Jan/2006:15:04:05 -0700",
    // These three are not exactly "layouts", but they are special cases that
    // will get handled in the ParseLine function.
    "ts-epoch":     "EPOCH",
    "ts-epochnano": "EPOCH_NANO",
    "ts":           "GENERIC_TIMESTAMP", // try parsing all known timestamp layouts.
}

const (
    INT               = "int"
    TAG               = "tag"
    FLOAT             = "float"
    STRING            = "string"
    DURATION          = "duration"
    DROP              = "drop"
    EPOCH             = "EPOCH"
    EPOCH_NANO        = "EPOCH_NANO"
    GENERIC_TIMESTAMP = "GENERIC_TIMESTAMP"
)

var (
    // matches named captures that contain a modifier.
    //   ie,
    //     %{NUMBER:bytes:int}
    //     %{IPORHOST:clientip:tag}
    //     %{HTTPDATE:ts1:ts-http}
    //     %{HTTPDATE:ts2:ts-"02 Jan 06 15:04"}
    modifierRe = regexp.MustCompile(`%{\w+:(\w+):(ts-".+"|t?s?-?\w+)}`)
    // matches a plain pattern name. ie, %{NUMBER}
    patternOnlyRe = regexp.MustCompile(`%{(\w+)}`)
)

type Parser struct {
    Patterns []string
    // namedPatterns is a list of internally-assigned names to the patterns
    // specified by the user in Patterns.
    // They will look like:
    //   GROK_INTERNAL_PATTERN_0, GROK_INTERNAL_PATTERN_1, etc.
    namedPatterns      []string
    CustomPatterns     string
    CustomPatternFiles []string
    Measurement        string

    // typeMap is a map of patterns -> capture name -> modifier,
    //   ie, {
    //       "%{TESTLOG}":
    //           {
    //               "bytes": "int",
    //               "clientip": "tag"
    //           }
    //   }
    typeMap map[string]map[string]string
    // tsMap is a map of patterns -> capture name -> timestamp layout.
    //   ie, {
    //       "%{TESTLOG}":
    //           {
    //               "httptime": "02/Jan/2006:15:04:05 -0700"
    //           }
    //   }
    tsMap map[string]map[string]string
    // patterns is a map of all of the parsed patterns from CustomPatterns
    // and CustomPatternFiles.
    //   ie, {
    //       "DURATION":      "%{NUMBER}[nuµm]?s"
    //       "RESPONSE_CODE": "%{NUMBER:rc:tag}"
    //   }
    patterns map[string]string
    // foundTsLayouts is a slice of timestamp patterns that have been found
    // in the log lines. This slice gets updated if the user uses the generic
    // 'ts' modifier for timestamps. This slice is checked first for matches,
    // so that previously-matched layouts get priority over all other timestamp
    // layouts.
    foundTsLayouts []string

    g        *grok.Grok
    tsModder *tsModder
}

func (p *Parser) Compile() error {
    p.typeMap = make(map[string]map[string]string)
    p.tsMap = make(map[string]map[string]string)
    p.patterns = make(map[string]string)
    p.tsModder = &tsModder{}
    var err error
    p.g, err = grok.NewWithConfig(&grok.Config{NamedCapturesOnly: true})
    if err != nil {
        return err
    }

    // Give Patterns fake names so that they can be treated as named
    // "custom patterns"
    p.namedPatterns = make([]string, len(p.Patterns))
    for i, pattern := range p.Patterns {
        name := fmt.Sprintf("GROK_INTERNAL_PATTERN_%d", i)
        p.CustomPatterns += "\n" + name + " " + pattern + "\n"
        p.namedPatterns[i] = "%{" + name + "}"
    }

    // Combine user-supplied CustomPatterns with DEFAULT_PATTERNS and parse
    // them together as the same type of pattern.
    p.CustomPatterns = DEFAULT_PATTERNS + p.CustomPatterns
    if len(p.CustomPatterns) != 0 {
        scanner := bufio.NewScanner(strings.NewReader(p.CustomPatterns))
        p.addCustomPatterns(scanner)
    }

    // Parse any custom pattern files supplied.
    for _, filename := range p.CustomPatternFiles {
        file, err := os.Open(filename)
        if err != nil {
            return err
        }

        scanner := bufio.NewScanner(bufio.NewReader(file))
        p.addCustomPatterns(scanner)
    }

    if p.Measurement == "" {
        p.Measurement = "logparser_grok"
    }

    return p.compileCustomPatterns()
}

func (p *Parser) ParseLine(line string) (telegraf.Metric, error) {
    var err error
    // values are the parsed fields from the log line
    var values map[string]string
    // the matching pattern string
    var patternName string
    for _, pattern := range p.namedPatterns {
        if values, err = p.g.Parse(pattern, line); err != nil {
            return nil, err
        }
        if len(values) != 0 {
            patternName = pattern
            break
        }
    }

    if len(values) == 0 {
        return nil, nil
    }

    fields := make(map[string]interface{})
    tags := make(map[string]string)
    timestamp := time.Now()
    for k, v := range values {
        if k == "" || v == "" {
            continue
        }

        // t is the modifier of the field
        var t string
        // check if pattern has some modifiers
        if types, ok := p.typeMap[patternName]; ok {
            t = types[k]
        }
        // if we didn't find a modifier, check if we have a timestamp layout
        if t == "" {
            if ts, ok := p.tsMap[patternName]; ok {
                // check if the modifier is a timestamp layout
                if layout, ok := ts[k]; ok {
                    t = layout
                }
            }
        }
        // if we didn't find a type OR timestamp modifier, assume string
        if t == "" {
            t = STRING
        }

        switch t {
        case INT:
            iv, err := strconv.ParseInt(v, 10, 64)
            if err != nil {
                log.Printf("ERROR parsing %s to int: %s", v, err)
            } else {
                fields[k] = iv
            }
        case FLOAT:
            fv, err := strconv.ParseFloat(v, 64)
            if err != nil {
                log.Printf("ERROR parsing %s to float: %s", v, err)
            } else {
                fields[k] = fv
            }
        case DURATION:
            d, err := time.ParseDuration(v)
            if err != nil {
                log.Printf("ERROR parsing %s to duration: %s", v, err)
            } else {
                fields[k] = int64(d)
            }
        case TAG:
            tags[k] = v
        case STRING:
            fields[k] = strings.Trim(v, `"`)
        case EPOCH:
            iv, err := strconv.ParseInt(v, 10, 64)
            if err != nil {
                log.Printf("ERROR parsing %s to int: %s", v, err)
            } else {
                timestamp = time.Unix(iv, 0)
            }
        case EPOCH_NANO:
            iv, err := strconv.ParseInt(v, 10, 64)
            if err != nil {
                log.Printf("ERROR parsing %s to int: %s", v, err)
            } else {
                timestamp = time.Unix(0, iv)
            }
        case GENERIC_TIMESTAMP:
            var foundTs bool
            // first try timestamp layouts that we've already found
            for _, layout := range p.foundTsLayouts {
                ts, err := time.Parse(layout, v)
                if err == nil {
                    timestamp = ts
                    foundTs = true
                    break
                }
            }
            // if we haven't found a timestamp layout yet, try all timestamp
            // layouts.
            if !foundTs {
                for _, layout := range timeLayouts {
                    ts, err := time.Parse(layout, v)
                    if err == nil {
                        timestamp = ts
                        foundTs = true
                        p.foundTsLayouts = append(p.foundTsLayouts, layout)
                        break
                    }
                }
            }
            // if we still haven't found a timestamp layout, log it and we will
            // just use time.Now()
            if !foundTs {
                log.Printf("ERROR parsing timestamp [%s], could not find any "+
                    "suitable time layouts.", v)
            }
        case DROP:
            // goodbye!
        default:
            ts, err := time.Parse(t, v)
            if err == nil {
                timestamp = ts
            } else {
                log.Printf("ERROR parsing %s to time layout [%s]: %s", v, t, err)
            }
        }
    }

    return telegraf.NewMetric(p.Measurement, tags, fields, p.tsModder.tsMod(timestamp))
}

func (p *Parser) addCustomPatterns(scanner *bufio.Scanner) {
    for scanner.Scan() {
        line := strings.TrimSpace(scanner.Text())
        if len(line) > 0 && line[0] != '#' {
            names := strings.SplitN(line, " ", 2)
            p.patterns[names[0]] = names[1]
        }
    }
}

func (p *Parser) compileCustomPatterns() error {
    var err error
    // check if the pattern contains a subpattern that is already defined
    // replace it with the subpattern for modifier inheritance.
    for i := 0; i < 2; i++ {
        for name, pattern := range p.patterns {
            subNames := patternOnlyRe.FindAllStringSubmatch(pattern, -1)
            for _, subName := range subNames {
                if subPattern, ok := p.patterns[subName[1]]; ok {
                    pattern = strings.Replace(pattern, subName[0], subPattern, 1)
                }
            }
            p.patterns[name] = pattern
        }
    }

    // check if pattern contains modifiers. Parse them out if it does.
    for name, pattern := range p.patterns {
        if modifierRe.MatchString(pattern) {
            // this pattern has modifiers, so parse out the modifiers
            pattern, err = p.parseTypedCaptures(name, pattern)
            if err != nil {
                return err
            }
            p.patterns[name] = pattern
        }
    }

    return p.g.AddPatternsFromMap(p.patterns)
}

// parseTypedCaptures parses the capture modifiers, and then deletes the
// modifier from the line so that it is a valid "grok" pattern again.
//   ie,
//     %{NUMBER:bytes:int}      => %{NUMBER:bytes}      (stores %{NUMBER}->bytes->int)
//     %{IPORHOST:clientip:tag} => %{IPORHOST:clientip} (stores %{IPORHOST}->clientip->tag)
func (p *Parser) parseTypedCaptures(name, pattern string) (string, error) {
    matches := modifierRe.FindAllStringSubmatch(pattern, -1)

    // grab the name of the capture pattern
    patternName := "%{" + name + "}"
    // create type map for this pattern
    p.typeMap[patternName] = make(map[string]string)
    p.tsMap[patternName] = make(map[string]string)

    // boolean to verify that each pattern only has a single ts- data type.
    hasTimestamp := false
    for _, match := range matches {
        // regex capture 1 is the name of the capture
        // regex capture 2 is the modifier of the capture
        if strings.HasPrefix(match[2], "ts") {
            if hasTimestamp {
                return pattern, fmt.Errorf("logparser pattern compile error: "+
                    "Each pattern is allowed only one named "+
                    "timestamp data type. pattern: %s", pattern)
            }
            if layout, ok := timeLayouts[match[2]]; ok {
                // built-in time format
                p.tsMap[patternName][match[1]] = layout
            } else {
                // custom time format
                p.tsMap[patternName][match[1]] = strings.TrimSuffix(strings.TrimPrefix(match[2], `ts-"`), `"`)
            }
            hasTimestamp = true
        } else {
            p.typeMap[patternName][match[1]] = match[2]
        }

        // the modifier is not a valid part of a "grok" pattern, so remove it
        // from the pattern.
        pattern = strings.Replace(pattern, ":"+match[2]+"}", "}", 1)
    }

    return pattern, nil
}

// tsModder is a struct for incrementing identical timestamps of log lines
// so that we don't push identical metrics that will get overwritten.
type tsModder struct {
    dupe     time.Time
    last     time.Time
    incr     time.Duration
    incrn    time.Duration
    rollover time.Duration
}

// tsMod increments the given timestamp one unit more from the previous
// duplicate timestamp.
// the increment unit is determined as the next smallest time unit below the
// most significant time unit of ts.
//   ie, if the input is at ms precision, it will increment it 1µs.
func (t *tsModder) tsMod(ts time.Time) time.Time {
    defer func() { t.last = ts }()
    // don't mod the time if we don't need to
    if t.last.IsZero() || ts.IsZero() {
        t.incrn = 0
        t.rollover = 0
        return ts
    }
    if !ts.Equal(t.last) && !ts.Equal(t.dupe) {
        t.incr = 0
        t.incrn = 0
        t.rollover = 0
        return ts
    }

    if ts.Equal(t.last) {
        t.dupe = ts
    }

    if ts.Equal(t.dupe) && t.incr == time.Duration(0) {
        tsNano := ts.UnixNano()

        d := int64(10)
        counter := 1
        for {
            a := tsNano % d
            if a > 0 {
                break
            }
            d = d * 10
            counter++
        }

        switch {
        case counter <= 6:
            t.incr = time.Nanosecond
        case counter <= 9:
            t.incr = time.Microsecond
        case counter > 9:
            t.incr = time.Millisecond
        }
    }

    t.incrn++
    if t.incrn == 999 && t.incr > time.Nanosecond {
        t.rollover = t.incr * t.incrn
        t.incrn = 1
        t.incr = t.incr / 1000
        if t.incr < time.Nanosecond {
            t.incr = time.Nanosecond
        }
    }
    return ts.Add(t.incr*t.incrn + t.rollover)
}
@@ -0,0 +1,587 @@
package grok

import (
    "testing"
    "time"

    "github.com/influxdata/telegraf"

    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
)

var benchM telegraf.Metric

func Benchmark_ParseLine_CommonLogFormat(b *testing.B) {
    p := &Parser{
        Patterns: []string{"%{COMMON_LOG_FORMAT}"},
    }
    p.Compile()

    var m telegraf.Metric
    for n := 0; n < b.N; n++ {
        m, _ = p.ParseLine(`127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326`)
    }
    benchM = m
}

func Benchmark_ParseLine_CombinedLogFormat(b *testing.B) {
    p := &Parser{
        Patterns: []string{"%{COMBINED_LOG_FORMAT}"},
    }
    p.Compile()

    var m telegraf.Metric
    for n := 0; n < b.N; n++ {
        m, _ = p.ParseLine(`127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326 "-" "Mozilla"`)
    }
    benchM = m
}

func Benchmark_ParseLine_CustomPattern(b *testing.B) {
    p := &Parser{
        Patterns: []string{"%{TEST_LOG_A}", "%{TEST_LOG_B}"},
        CustomPatterns: `
            DURATION %{NUMBER}[nuµm]?s
            RESPONSE_CODE %{NUMBER:response_code:tag}
            RESPONSE_TIME %{DURATION:response_time:duration}
            TEST_LOG_A %{NUMBER:myfloat:float} %{RESPONSE_CODE} %{IPORHOST:clientip} %{RESPONSE_TIME}
        `,
    }
    p.Compile()

    var m telegraf.Metric
    for n := 0; n < b.N; n++ {
        m, _ = p.ParseLine(`[04/Jun/2016:12:41:45 +0100] 1.25 200 192.168.1.1 5.432µs 101`)
    }
    benchM = m
}

func TestMeasurementName(t *testing.T) {
    p := &Parser{
        Measurement: "my_web_log",
        Patterns:    []string{"%{COMMON_LOG_FORMAT}"},
    }
    assert.NoError(t, p.Compile())

    // Parse an influxdb POST request
    m, err := p.ParseLine(`127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326`)
    require.NotNil(t, m)
    assert.NoError(t, err)
    assert.Equal(t,
        map[string]interface{}{
            "resp_bytes":   int64(2326),
            "auth":         "frank",
            "client_ip":    "127.0.0.1",
            "http_version": float64(1.0),
            "ident":        "user-identifier",
            "request":      "/apache_pb.gif",
        },
        m.Fields())
    assert.Equal(t, map[string]string{"verb": "GET", "resp_code": "200"}, m.Tags())
    assert.Equal(t, "my_web_log", m.Name())
}

func TestCustomInfluxdbHttpd(t *testing.T) {
    p := &Parser{
        Patterns: []string{`\[httpd\] %{COMBINED_LOG_FORMAT} %{UUID:uuid:drop} %{NUMBER:response_time_us:int}`},
    }
    assert.NoError(t, p.Compile())

    // Parse an influxdb POST request
    m, err := p.ParseLine(`[httpd] ::1 - - [14/Jun/2016:11:33:29 +0100] "POST /write?consistency=any&db=telegraf&precision=ns&rp= HTTP/1.1" 204 0 "-" "InfluxDBClient" 6f61bc44-321b-11e6-8050-000000000000 2513`)
    require.NotNil(t, m)
    assert.NoError(t, err)
    assert.Equal(t,
        map[string]interface{}{
            "resp_bytes":       int64(0),
            "auth":             "-",
            "client_ip":        "::1",
            "http_version":     float64(1.1),
            "ident":            "-",
            "referrer":         "-",
            "request":          "/write?consistency=any&db=telegraf&precision=ns&rp=",
            "response_time_us": int64(2513),
            "agent":            "InfluxDBClient",
        },
        m.Fields())
    assert.Equal(t, map[string]string{"verb": "POST", "resp_code": "204"}, m.Tags())

    // Parse an influxdb GET request
    m, err = p.ParseLine(`[httpd] ::1 - - [14/Jun/2016:12:10:02 +0100] "GET /query?db=telegraf&q=SELECT+bytes%2Cresponse_time_us+FROM+logparser_grok+WHERE+http_method+%3D+%27GET%27+AND+response_time_us+%3E+0+AND+time+%3E+now%28%29+-+1h HTTP/1.1" 200 578 "http://localhost:8083/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.84 Safari/537.36" 8a3806f1-3220-11e6-8006-000000000000 988`)
    require.NotNil(t, m)
    assert.NoError(t, err)
    assert.Equal(t,
        map[string]interface{}{
            "resp_bytes":       int64(578),
            "auth":             "-",
            "client_ip":        "::1",
            "http_version":     float64(1.1),
            "ident":            "-",
            "referrer":         "http://localhost:8083/",
            "request":          "/query?db=telegraf&q=SELECT+bytes%2Cresponse_time_us+FROM+logparser_grok+WHERE+http_method+%3D+%27GET%27+AND+response_time_us+%3E+0+AND+time+%3E+now%28%29+-+1h",
            "response_time_us": int64(988),
            "agent":            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.84 Safari/537.36",
        },
        m.Fields())
    assert.Equal(t, map[string]string{"verb": "GET", "resp_code": "200"}, m.Tags())
}

// common log format
// 127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326
func TestBuiltinCommonLogFormat(t *testing.T) {
    p := &Parser{
        Patterns: []string{"%{COMMON_LOG_FORMAT}"},
    }
    assert.NoError(t, p.Compile())

    // Parse an influxdb POST request
    m, err := p.ParseLine(`127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326`)
    require.NotNil(t, m)
    assert.NoError(t, err)
    assert.Equal(t,
        map[string]interface{}{
            "resp_bytes":   int64(2326),
            "auth":         "frank",
            "client_ip":    "127.0.0.1",
            "http_version": float64(1.0),
            "ident":        "user-identifier",
            "request":      "/apache_pb.gif",
        },
        m.Fields())
    assert.Equal(t, map[string]string{"verb": "GET", "resp_code": "200"}, m.Tags())
}

// combined log format
// 127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326 "-" "Mozilla"
func TestBuiltinCombinedLogFormat(t *testing.T) {
    p := &Parser{
        Patterns: []string{"%{COMBINED_LOG_FORMAT}"},
    }
    assert.NoError(t, p.Compile())

    // Parse an influxdb POST request
    m, err := p.ParseLine(`127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326 "-" "Mozilla"`)
    require.NotNil(t, m)
    assert.NoError(t, err)
    assert.Equal(t,
        map[string]interface{}{
            "resp_bytes":   int64(2326),
            "auth":         "frank",
            "client_ip":    "127.0.0.1",
            "http_version": float64(1.0),
            "ident":        "user-identifier",
            "request":      "/apache_pb.gif",
            "referrer":     "-",
            "agent":        "Mozilla",
        },
        m.Fields())
    assert.Equal(t, map[string]string{"verb": "GET", "resp_code": "200"}, m.Tags())
}

func TestCompileStringAndParse(t *testing.T) {
    p := &Parser{
        Patterns: []string{"%{TEST_LOG_A}"},
        CustomPatterns: `
            DURATION %{NUMBER}[nuµm]?s
            RESPONSE_CODE %{NUMBER:response_code:tag}
            RESPONSE_TIME %{DURATION:response_time:duration}
            TEST_LOG_A %{NUMBER:myfloat:float} %{RESPONSE_CODE} %{IPORHOST:clientip} %{RESPONSE_TIME}
        `,
    }
    assert.NoError(t, p.Compile())

    metricA, err := p.ParseLine(`1.25 200 192.168.1.1 5.432µs`)
    require.NotNil(t, metricA)
    assert.NoError(t, err)
    assert.Equal(t,
        map[string]interface{}{
            "clientip":      "192.168.1.1",
            "myfloat":       float64(1.25),
            "response_time": int64(5432),
        },
        metricA.Fields())
    assert.Equal(t, map[string]string{"response_code": "200"}, metricA.Tags())
}

func TestCompileErrorsOnInvalidPattern(t *testing.T) {
    p := &Parser{
        Patterns: []string{"%{TEST_LOG_A}", "%{TEST_LOG_B}"},
        CustomPatterns: `
            DURATION %{NUMBER}[nuµm]?s
            RESPONSE_CODE %{NUMBER:response_code:tag}
            RESPONSE_TIME %{DURATION:response_time:duration}
            TEST_LOG_A %{NUMBER:myfloat:float} %{RESPONSE_CODE} %{IPORHOST:clientip} %{RESPONSE_TIME}
        `,
    }
    assert.Error(t, p.Compile())

    metricA, _ := p.ParseLine(`1.25 200 192.168.1.1 5.432µs`)
    require.Nil(t, metricA)
}

func TestParsePatternsWithoutCustom(t *testing.T) {
    p := &Parser{
        Patterns: []string{"%{POSINT:ts:ts-epochnano} response_time=%{POSINT:response_time:int} mymetric=%{NUMBER:metric:float}"},
    }
    assert.NoError(t, p.Compile())

    metricA, err := p.ParseLine(`1466004605359052000 response_time=20821 mymetric=10890.645`)
    require.NotNil(t, metricA)
    assert.NoError(t, err)
    assert.Equal(t,
        map[string]interface{}{
            "response_time": int64(20821),
            "metric":        float64(10890.645),
        },
        metricA.Fields())
    assert.Equal(t, map[string]string{}, metricA.Tags())
    assert.Equal(t, time.Unix(0, 1466004605359052000), metricA.Time())
}

func TestParseEpochNano(t *testing.T) {
    p := &Parser{
        Patterns: []string{"%{MYAPP}"},
        CustomPatterns: `
            MYAPP %{POSINT:ts:ts-epochnano} response_time=%{POSINT:response_time:int} mymetric=%{NUMBER:metric:float}
        `,
    }
    assert.NoError(t, p.Compile())

    metricA, err := p.ParseLine(`1466004605359052000 response_time=20821 mymetric=10890.645`)
    require.NotNil(t, metricA)
    assert.NoError(t, err)
    assert.Equal(t,
        map[string]interface{}{
            "response_time": int64(20821),
            "metric":        float64(10890.645),
        },
        metricA.Fields())
    assert.Equal(t, map[string]string{}, metricA.Tags())
    assert.Equal(t, time.Unix(0, 1466004605359052000), metricA.Time())
}

func TestParseEpoch(t *testing.T) {
    p := &Parser{
        Patterns: []string{"%{MYAPP}"},
        CustomPatterns: `
            MYAPP %{POSINT:ts:ts-epoch} response_time=%{POSINT:response_time:int} mymetric=%{NUMBER:metric:float}
        `,
    }
    assert.NoError(t, p.Compile())

    metricA, err := p.ParseLine(`1466004605 response_time=20821 mymetric=10890.645`)
    require.NotNil(t, metricA)
    assert.NoError(t, err)
    assert.Equal(t,
        map[string]interface{}{
            "response_time": int64(20821),
            "metric":        float64(10890.645),
        },
        metricA.Fields())
    assert.Equal(t, map[string]string{}, metricA.Tags())
    assert.Equal(t, time.Unix(1466004605, 0), metricA.Time())
}

func TestParseEpochErrors(t *testing.T) {
    p := &Parser{
        Patterns: []string{"%{MYAPP}"},
        CustomPatterns: `
            MYAPP %{WORD:ts:ts-epoch} response_time=%{POSINT:response_time:int} mymetric=%{NUMBER:metric:float}
        `,
    }
    assert.NoError(t, p.Compile())

    _, err := p.ParseLine(`foobar response_time=20821 mymetric=10890.645`)
    assert.NoError(t, err)

    p = &Parser{
        Patterns: []string{"%{MYAPP}"},
        CustomPatterns: `
            MYAPP %{WORD:ts:ts-epochnano} response_time=%{POSINT:response_time:int} mymetric=%{NUMBER:metric:float}
        `,
    }
    assert.NoError(t, p.Compile())

    _, err = p.ParseLine(`foobar response_time=20821 mymetric=10890.645`)
    assert.NoError(t, err)
}

func TestParseGenericTimestamp(t *testing.T) {
    p := &Parser{
        Patterns: []string{`\[%{HTTPDATE:ts:ts}\] response_time=%{POSINT:response_time:int} mymetric=%{NUMBER:metric:float}`},
    }
    assert.NoError(t, p.Compile())

    metricA, err := p.ParseLine(`[09/Jun/2016:03:37:03 +0000] response_time=20821 mymetric=10890.645`)
    require.NotNil(t, metricA)
    assert.NoError(t, err)
    assert.Equal(t,
        map[string]interface{}{
            "response_time": int64(20821),
            "metric":        float64(10890.645),
        },
        metricA.Fields())
    assert.Equal(t, map[string]string{}, metricA.Tags())
    assert.Equal(t, time.Unix(1465443423, 0).UTC(), metricA.Time().UTC())

    metricB, err := p.ParseLine(`[09/Jun/2016:03:37:04 +0000] response_time=20821 mymetric=10890.645`)
    require.NotNil(t, metricB)
    assert.NoError(t, err)
    assert.Equal(t,
        map[string]interface{}{
            "response_time": int64(20821),
            "metric":        float64(10890.645),
        },
        metricB.Fields())
    assert.Equal(t, map[string]string{}, metricB.Tags())
    assert.Equal(t, time.Unix(1465443424, 0).UTC(), metricB.Time().UTC())
}

func TestParseGenericTimestampNotFound(t *testing.T) {
    p := &Parser{
        Patterns: []string{`\[%{NOTSPACE:ts:ts}\] response_time=%{POSINT:response_time:int} mymetric=%{NUMBER:metric:float}`},
    }
    assert.NoError(t, p.Compile())

    metricA, err := p.ParseLine(`[foobar] response_time=20821 mymetric=10890.645`)
    require.NotNil(t, metricA)
    assert.NoError(t, err)
    assert.Equal(t,
        map[string]interface{}{
            "response_time": int64(20821),
            "metric":        float64(10890.645),
        },
        metricA.Fields())
    assert.Equal(t, map[string]string{}, metricA.Tags())
}

func TestCompileFileAndParse(t *testing.T) {
    p := &Parser{
        Patterns:           []string{"%{TEST_LOG_A}", "%{TEST_LOG_B}"},
        CustomPatternFiles: []string{"./testdata/test-patterns"},
    }
    assert.NoError(t, p.Compile())

    metricA, err := p.ParseLine(`[04/Jun/2016:12:41:45 +0100] 1.25 200 192.168.1.1 5.432µs 101`)
    require.NotNil(t, metricA)
    assert.NoError(t, err)
    assert.Equal(t,
        map[string]interface{}{
            "clientip":      "192.168.1.1",
            "myfloat":       float64(1.25),
            "response_time": int64(5432),
            "myint":         int64(101),
        },
        metricA.Fields())
    assert.Equal(t, map[string]string{"response_code": "200"}, metricA.Tags())
    assert.Equal(t,
        time.Date(2016, time.June, 4, 12, 41, 45, 0, time.FixedZone("foo", 60*60)).Nanosecond(),
        metricA.Time().Nanosecond())

    metricB, err := p.ParseLine(`[04/06/2016--12:41:45] 1.25 mystring dropme nomodifier`)
    require.NotNil(t, metricB)
    assert.NoError(t, err)
    assert.Equal(t,
        map[string]interface{}{
            "myfloat":    1.25,
            "mystring":   "mystring",
            "nomodifier": "nomodifier",
        },
        metricB.Fields())
    assert.Equal(t, map[string]string{}, metricB.Tags())
    assert.Equal(t,
        time.Date(2016, time.June, 4, 12, 41, 45, 0, time.FixedZone("foo", 60*60)).Nanosecond(),
        metricB.Time().Nanosecond())
}

func TestCompileNoModifiersAndParse(t *testing.T) {
    p := &Parser{
        Patterns: []string{"%{TEST_LOG_C}"},
        CustomPatterns: `
            DURATION %{NUMBER}[nuµm]?s
            TEST_LOG_C %{NUMBER:myfloat} %{NUMBER} %{IPORHOST:clientip} %{DURATION:rt}
        `,
    }
    assert.NoError(t, p.Compile())

    metricA, err := p.ParseLine(`1.25 200 192.168.1.1 5.432µs`)
    require.NotNil(t, metricA)
    assert.NoError(t, err)
    assert.Equal(t,
        map[string]interface{}{
            "clientip": "192.168.1.1",
            "myfloat":  "1.25",
            "rt":       "5.432µs",
        },
        metricA.Fields())
    assert.Equal(t, map[string]string{}, metricA.Tags())
}

func TestCompileNoNamesAndParse(t *testing.T) {
    p := &Parser{
        Patterns: []string{"%{TEST_LOG_C}"},
        CustomPatterns: `
            DURATION %{NUMBER}[nuµm]?s
            TEST_LOG_C %{NUMBER} %{NUMBER} %{IPORHOST} %{DURATION}
        `,
    }
    assert.NoError(t, p.Compile())

    metricA, err := p.ParseLine(`1.25 200 192.168.1.1 5.432µs`)
    require.Nil(t, metricA)
    assert.NoError(t, err)
}

func TestParseNoMatch(t *testing.T) {
    p := &Parser{
        Patterns:           []string{"%{TEST_LOG_A}", "%{TEST_LOG_B}"},
        CustomPatternFiles: []string{"./testdata/test-patterns"},
    }
    assert.NoError(t, p.Compile())

    metricA, err := p.ParseLine(`[04/Jun/2016:12:41:45 +0100] notnumber 200 192.168.1.1 5.432µs 101`)
    assert.NoError(t, err)
    assert.Nil(t, metricA)
}

func TestCompileErrors(t *testing.T) {
    // Compile fails because there are multiple timestamps:
    p := &Parser{
        Patterns: []string{"%{TEST_LOG_A}", "%{TEST_LOG_B}"},
        CustomPatterns: `
            TEST_LOG_A %{HTTPDATE:ts1:ts-httpd} %{HTTPDATE:ts2:ts-httpd} %{NUMBER:mynum:int}
        `,
    }
    assert.Error(t, p.Compile())

    // Compile fails because file doesn't exist:
    p = &Parser{
        Patterns:           []string{"%{TEST_LOG_A}", "%{TEST_LOG_B}"},
        CustomPatternFiles: []string{"/tmp/foo/bar/baz"},
    }
    assert.Error(t, p.Compile())
}

func TestParseErrors(t *testing.T) {
    // Parse fails because the pattern doesn't exist
    p := &Parser{
        Patterns: []string{"%{TEST_LOG_B}"},
        CustomPatterns: `
            TEST_LOG_A %{HTTPDATE:ts:ts-httpd} %{WORD:myword:int} %{}
        `,
    }
    assert.Error(t, p.Compile())
    _, err := p.ParseLine(`[04/Jun/2016:12:41:45 +0100] notnumber 200 192.168.1.1 5.432µs 101`)
    assert.Error(t, err)

    // Parse fails because myword is not an int
    p = &Parser{
        Patterns: []string{"%{TEST_LOG_A}"},
        CustomPatterns: `
            TEST_LOG_A %{HTTPDATE:ts:ts-httpd} %{WORD:myword:int}
        `,
    }
    assert.NoError(t, p.Compile())
    _, err = p.ParseLine(`04/Jun/2016:12:41:45 +0100 notnumber`)
    assert.Error(t, err)

    // Parse fails because myword is not a float
    p = &Parser{
        Patterns: []string{"%{TEST_LOG_A}"},
        CustomPatterns: `
            TEST_LOG_A %{HTTPDATE:ts:ts-httpd} %{WORD:myword:float}
        `,
    }
    assert.NoError(t, p.Compile())
    _, err = p.ParseLine(`04/Jun/2016:12:41:45 +0100 notnumber`)
    assert.Error(t, err)

    // Parse fails because myword is not a duration
    p = &Parser{
        Patterns: []string{"%{TEST_LOG_A}"},
        CustomPatterns: `
            TEST_LOG_A %{HTTPDATE:ts:ts-httpd} %{WORD:myword:duration}
        `,
    }
    assert.NoError(t, p.Compile())
    _, err = p.ParseLine(`04/Jun/2016:12:41:45 +0100 notnumber`)
    assert.Error(t, err)

    // Parse fails because the time layout is wrong.
    p = &Parser{
        Patterns: []string{"%{TEST_LOG_A}"},
        CustomPatterns: `
            TEST_LOG_A %{HTTPDATE:ts:ts-unix} %{WORD:myword:duration}
        `,
    }
    assert.NoError(t, p.Compile())
    _, err = p.ParseLine(`04/Jun/2016:12:41:45 +0100 notnumber`)
    assert.Error(t, err)
}

func TestTsModder(t *testing.T) {
    tsm := &tsModder{}

    reftime := time.Date(2006, time.December, 1, 1, 1, 1, int(time.Millisecond), time.UTC)
    modt := tsm.tsMod(reftime)
    assert.Equal(t, reftime, modt)
    modt = tsm.tsMod(reftime)
    assert.Equal(t, reftime.Add(time.Microsecond*1), modt)
    modt = tsm.tsMod(reftime)
    assert.Equal(t, reftime.Add(time.Microsecond*2), modt)
    modt = tsm.tsMod(reftime)
    assert.Equal(t, reftime.Add(time.Microsecond*3), modt)

    reftime = time.Date(2006, time.December, 1, 1, 1, 1, int(time.Microsecond), time.UTC)
    modt = tsm.tsMod(reftime)
    assert.Equal(t, reftime, modt)
    modt = tsm.tsMod(reftime)
    assert.Equal(t, reftime.Add(time.Nanosecond*1), modt)
    modt = tsm.tsMod(reftime)
    assert.Equal(t, reftime.Add(time.Nanosecond*2), modt)
    modt = tsm.tsMod(reftime)
    assert.Equal(t, reftime.Add(time.Nanosecond*3), modt)

    reftime = time.Date(2006, time.December, 1, 1, 1, 1, int(time.Microsecond)*999, time.UTC)
    modt = tsm.tsMod(reftime)
    assert.Equal(t, reftime, modt)
    modt = tsm.tsMod(reftime)
    assert.Equal(t, reftime.Add(time.Nanosecond*1), modt)
    modt = tsm.tsMod(reftime)
    assert.Equal(t, reftime.Add(time.Nanosecond*2), modt)
    modt = tsm.tsMod(reftime)
    assert.Equal(t, reftime.Add(time.Nanosecond*3), modt)

    reftime = time.Date(2006, time.December, 1, 1, 1, 1, 0, time.UTC)
    modt = tsm.tsMod(reftime)
    assert.Equal(t, reftime, modt)
    modt = tsm.tsMod(reftime)
    assert.Equal(t, reftime.Add(time.Millisecond*1), modt)
    modt = tsm.tsMod(reftime)
    assert.Equal(t, reftime.Add(time.Millisecond*2), modt)
    modt = tsm.tsMod(reftime)
    assert.Equal(t, reftime.Add(time.Millisecond*3), modt)

    reftime = time.Time{}
    modt = tsm.tsMod(reftime)
    assert.Equal(t, reftime, modt)
}

func TestTsModder_Rollover(t *testing.T) {
    tsm := &tsModder{}

    reftime := time.Date(2006, time.December, 1, 1, 1, 1, int(time.Millisecond), time.UTC)
    modt := tsm.tsMod(reftime)
    for i := 1; i < 1000; i++ {
        modt = tsm.tsMod(reftime)
    }
    assert.Equal(t, reftime.Add(time.Microsecond*999+time.Nanosecond), modt)

    reftime = time.Date(2006, time.December, 1, 1, 1, 1, int(time.Microsecond), time.UTC)
    modt = tsm.tsMod(reftime)
    for i := 1; i < 1001; i++ {
        modt = tsm.tsMod(reftime)
    }
    assert.Equal(t, reftime.Add(time.Nanosecond*1000), modt)
}
@@ -0,0 +1,78 @@
package grok

// THIS SHOULD BE KEPT IN-SYNC WITH patterns/influx-patterns
const DEFAULT_PATTERNS = `
# Captures are a slightly modified version of logstash "grok" patterns, with
# the format %{<capture syntax>[:<semantic name>][:<modifier>]}
# By default all named captures are converted into string fields.
# Modifiers can be used to convert captures to other types or tags.
# Timestamp modifiers can be used to convert captures to the timestamp of the
# parsed metric.

# View logstash grok pattern docs here:
#   https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html
# All default logstash patterns are supported, these can be viewed here:
#   https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns

# Available modifiers:
#   string   (default if nothing is specified)
#   int
#   float
#   duration (ie, 5.23ms gets converted to int nanoseconds)
#   tag      (converts the field into a tag)
#   drop     (drops the field completely)
# Timestamp modifiers:
#   ts-ansic         ("Mon Jan _2 15:04:05 2006")
#   ts-unix          ("Mon Jan _2 15:04:05 MST 2006")
#   ts-ruby          ("Mon Jan 02 15:04:05 -0700 2006")
#   ts-rfc822        ("02 Jan 06 15:04 MST")
#   ts-rfc822z       ("02 Jan 06 15:04 -0700")
#   ts-rfc850        ("Monday, 02-Jan-06 15:04:05 MST")
#   ts-rfc1123       ("Mon, 02 Jan 2006 15:04:05 MST")
#   ts-rfc1123z      ("Mon, 02 Jan 2006 15:04:05 -0700")
#   ts-rfc3339       ("2006-01-02T15:04:05Z07:00")
#   ts-rfc3339nano   ("2006-01-02T15:04:05.999999999Z07:00")
#   ts-httpd         ("02/Jan/2006:15:04:05 -0700")
#   ts-epoch         (seconds since unix epoch)
#   ts-epochnano     (nanoseconds since unix epoch)
#   ts-"CUSTOM"
# CUSTOM time layouts must be within quotes and be the representation of the
# "reference time", which is Mon Jan 2 15:04:05 -0700 MST 2006
# See https://golang.org/pkg/time/#Parse for more details.

# Example log file pattern, example log looks like this:
#   [04/Jun/2016:12:41:45 +0100] 1.25 200 192.168.1.1 5.432µs
# Breakdown of the DURATION pattern below:
#   NUMBER is a builtin logstash grok pattern matching float & int numbers.
#   [nuµm]? is a regex specifying 0 or 1 of the characters within brackets.
#   s is also regex, this pattern must end in "s".
# so DURATION will match something like '5.324ms' or '6.1µs' or '10s'
DURATION %{NUMBER}[nuµm]?s
RESPONSE_CODE %{NUMBER:response_code:tag}
RESPONSE_TIME %{DURATION:response_time_ns:duration}
EXAMPLE_LOG \[%{HTTPDATE:ts:ts-httpd}\] %{NUMBER:myfloat:float} %{RESPONSE_CODE} %{IPORHOST:clientip} %{RESPONSE_TIME}

# Wider-ranging username matching vs. logstash built-in %{USER}
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
# Wider-ranging client IP matching
CLIENT (?:%{IPORHOST}|%{HOSTPORT}|::1)

##
## COMMON LOG PATTERNS
##

# apache & nginx logs, this is also known as the "common log format"
#   see https://en.wikipedia.org/wiki/Common_Log_Format
COMMON_LOG_FORMAT %{CLIENT:client_ip} %{NGUSER:ident} %{NGUSER:auth} \[%{HTTPDATE:ts:ts-httpd}\] "(?:%{WORD:verb:tag} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http_version:float})?|%{DATA})" %{NUMBER:resp_code:tag} (?:%{NUMBER:resp_bytes:int}|-)

# Combined log format is the same as the common log format but with the addition
# of two quoted strings at the end for "referrer" and "agent"
#   See Examples at http://httpd.apache.org/docs/current/mod/mod_log_config.html
COMBINED_LOG_FORMAT %{COMMON_LOG_FORMAT} %{QS:referrer} %{QS:agent}

# HTTPD log formats
HTTPD20_ERRORLOG \[%{HTTPDERROR_DATE:timestamp}\] \[%{LOGLEVEL:loglevel:tag}\] (?:\[client %{IPORHOST:clientip}\] ){0,1}%{GREEDYDATA:errormsg}
HTTPD24_ERRORLOG \[%{HTTPDERROR_DATE:timestamp}\] \[%{WORD:module}:%{LOGLEVEL:loglevel:tag}\] \[pid %{POSINT:pid:int}:tid %{NUMBER:tid:int}\]( \(%{POSINT:proxy_errorcode:int}\)%{DATA:proxy_errormessage}:)?( \[client %{IPORHOST:client}:%{POSINT:clientport}\])? %{DATA:errorcode}: %{GREEDYDATA:message}
HTTPD_ERRORLOG %{HTTPD20_ERRORLOG}|%{HTTPD24_ERRORLOG}
`
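To make `EXAMPLE_LOG` above concrete: parsing the sample line from the comments, `[04/Jun/2016:12:41:45 +0100] 1.25 200 192.168.1.1 5.432µs`, yields the fields `myfloat=1.25`, `clientip="192.168.1.1"`, and `response_time_ns=5432`, the tag `response_code=200`, and a metric timestamp taken from the `ts-httpd` capture.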
@ -0,0 +1,73 @@
# Captures are a slightly modified version of logstash "grok" patterns, with
# the format %{<capture syntax>[:<semantic name>][:<modifier>]}
# By default all named captures are converted into string fields.
# Modifiers can be used to convert captures to other types or tags.
# Timestamp modifiers can be used to convert captures to the timestamp of the
# parsed metric.

# View logstash grok pattern docs here:
#   https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html
# All default logstash patterns are supported, these can be viewed here:
#   https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns

# Available modifiers:
#   string   (default if nothing is specified)
#   int
#   float
#   duration (ie, 5.23ms gets converted to an int in nanoseconds)
#   tag      (converts the field into a tag)
#   drop     (drops the field completely)
# Timestamp modifiers:
#   ts-ansic       ("Mon Jan _2 15:04:05 2006")
#   ts-unix        ("Mon Jan _2 15:04:05 MST 2006")
#   ts-ruby        ("Mon Jan 02 15:04:05 -0700 2006")
#   ts-rfc822      ("02 Jan 06 15:04 MST")
#   ts-rfc822z     ("02 Jan 06 15:04 -0700")
#   ts-rfc850      ("Monday, 02-Jan-06 15:04:05 MST")
#   ts-rfc1123     ("Mon, 02 Jan 2006 15:04:05 MST")
#   ts-rfc1123z    ("Mon, 02 Jan 2006 15:04:05 -0700")
#   ts-rfc3339     ("2006-01-02T15:04:05Z07:00")
#   ts-rfc3339nano ("2006-01-02T15:04:05.999999999Z07:00")
#   ts-httpd       ("02/Jan/2006:15:04:05 -0700")
#   ts-epoch       (seconds since unix epoch)
#   ts-epochnano   (nanoseconds since unix epoch)
#   ts-"CUSTOM"
# CUSTOM time layouts must be within quotes and be the representation of the
# "reference time", which is Mon Jan 2 15:04:05 -0700 MST 2006
# See https://golang.org/pkg/time/#Parse for more details.

# Example log file pattern, example log looks like this:
#   [04/Jun/2016:12:41:45 +0100] 1.25 200 192.168.1.1 5.432µs
# Breakdown of the DURATION pattern below:
#   NUMBER is a builtin logstash grok pattern matching float & int numbers.
#   [nuµm]? is a regex specifying 0 or 1 of the characters within brackets.
#   s is a literal character, so the pattern must end in "s".
# so DURATION will match something like '5.324ms' or '6.1µs' or '10s'
DURATION %{NUMBER}[nuµm]?s
RESPONSE_CODE %{NUMBER:response_code:tag}
RESPONSE_TIME %{DURATION:response_time_ns:duration}
EXAMPLE_LOG \[%{HTTPDATE:ts:ts-httpd}\] %{NUMBER:myfloat:float} %{RESPONSE_CODE} %{IPORHOST:clientip} %{RESPONSE_TIME}

# Wider-ranging username matching vs. logstash built-in %{USER}
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
# Wider-ranging client IP matching
CLIENT (?:%{IPORHOST}|%{HOSTPORT}|::1)

##
## COMMON LOG PATTERNS
##

# apache & nginx logs, this is also known as the "common log format"
#   see https://en.wikipedia.org/wiki/Common_Log_Format
COMMON_LOG_FORMAT %{CLIENT:client_ip} %{NGUSER:ident} %{NGUSER:auth} \[%{HTTPDATE:ts:ts-httpd}\] "(?:%{WORD:verb:tag} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http_version:float})?|%{DATA})" %{NUMBER:resp_code:tag} (?:%{NUMBER:resp_bytes:int}|-)

# Combined log format is the same as the common log format but with the addition
# of two quoted strings at the end for "referrer" and "agent"
# See Examples at http://httpd.apache.org/docs/current/mod/mod_log_config.html
COMBINED_LOG_FORMAT %{COMMON_LOG_FORMAT} %{QS:referrer} %{QS:agent}

# HTTPD log formats
HTTPD20_ERRORLOG \[%{HTTPDERROR_DATE:timestamp}\] \[%{LOGLEVEL:loglevel:tag}\] (?:\[client %{IPORHOST:clientip}\] ){0,1}%{GREEDYDATA:errormsg}
HTTPD24_ERRORLOG \[%{HTTPDERROR_DATE:timestamp}\] \[%{WORD:module}:%{LOGLEVEL:loglevel:tag}\] \[pid %{POSINT:pid:int}:tid %{NUMBER:tid:int}\]( \(%{POSINT:proxy_errorcode:int}\)%{DATA:proxy_errormessage}:)?( \[client %{IPORHOST:client}:%{POSINT:clientport}\])? %{DATA:errorcode}: %{GREEDYDATA:message}
HTTPD_ERRORLOG %{HTTPD20_ERRORLOG}|%{HTTPD24_ERRORLOG}
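
For reference, a minimal Go sketch of how these modifiers might be exercised through the grok package added in this diff. `Patterns`, `Compile`, and `ParseLine` come from the tests and the `LogParser` interface below; the `CustomPatterns` field is an assumption inferred from the `custom_patterns` sample config, so treat this as illustrative rather than authoritative:

```go
package main

import (
	"fmt"

	"github.com/influxdata/telegraf/plugins/inputs/logparser/grok"
)

func main() {
	p := &grok.Parser{
		Patterns: []string{"%{EXAMPLE_LOG}"},
		// CustomPatterns is assumed to accept inline pattern definitions,
		// mirroring the custom_patterns config option.
		CustomPatterns: `
DURATION %{NUMBER}[nuµm]?s
RESPONSE_CODE %{NUMBER:response_code:tag}
RESPONSE_TIME %{DURATION:response_time_ns:duration}
EXAMPLE_LOG \[%{HTTPDATE:ts:ts-httpd}\] %{NUMBER:myfloat:float} %{RESPONSE_CODE} %{IPORHOST:clientip} %{RESPONSE_TIME}
`,
	}
	if err := p.Compile(); err != nil {
		panic(err)
	}
	m, err := p.ParseLine(`[04/Jun/2016:12:41:45 +0100] 1.25 200 192.168.1.1 5.432µs`)
	if err != nil {
		panic(err)
	}
	if m == nil {
		fmt.Println("no match")
		return
	}
	// Per the modifier docs above: response_code becomes a tag,
	// response_time_ns an int64 field (5432), myfloat a float64 field,
	// and ts the metric timestamp.
	fmt.Println(m.Name(), m.Tags(), m.Fields(), m.Time())
}
```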
@ -0,0 +1,14 @@
# Test A log line:
#  [04/Jun/2016:12:41:45 +0100] 1.25 200 192.168.1.1 5.432µs 101
DURATION %{NUMBER}[nuµm]?s
RESPONSE_CODE %{NUMBER:response_code:tag}
RESPONSE_TIME %{DURATION:response_time:duration}
TEST_LOG_A \[%{HTTPDATE:timestamp:ts-httpd}\] %{NUMBER:myfloat:float} %{RESPONSE_CODE} %{IPORHOST:clientip} %{RESPONSE_TIME} %{NUMBER:myint:int}

# Test B log line:
#  [04/06/2016--12:41:45] 1.25 mystring dropme nomodifier
TEST_TIMESTAMP %{MONTHDAY}/%{MONTHNUM}/%{YEAR}--%{TIME}
TEST_LOG_B \[%{TEST_TIMESTAMP:timestamp:ts-"02/01/2006--15:04:05"}\] %{NUMBER:myfloat:float} %{WORD:mystring:string} %{WORD:dropme:drop} %{WORD:nomodifier}

TEST_TIMESTAMP %{MONTHDAY}/%{MONTHNUM}/%{YEAR}--%{TIME}
TEST_LOG_BAD \[%{TEST_TIMESTAMP:timestamp:ts-"02/01/2006--15:04:05"}\] %{NUMBER:myfloat:float} %{WORD:mystring:int} %{WORD:dropme:drop} %{WORD:nomodifier}
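
The `ts-"02/01/2006--15:04:05"` modifier on TEST_LOG_B works because custom layouts are resolved against Go's reference time. A standalone sketch of the same conversion, using only the standard library:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// 02=day, 01=month, 2006=year, 15:04:05=time of day of the Go
	// reference time; see https://golang.org/pkg/time/#Parse
	layout := "02/01/2006--15:04:05"
	ts, err := time.Parse(layout, "04/06/2016--12:41:45")
	if err != nil {
		panic(err)
	}
	fmt.Println(ts.UTC()) // 2016-06-04 12:41:45 +0000 UTC
}
```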
@ -0,0 +1 @@
[04/Jun/2016:12:41:45 +0100] 1.25 200 192.168.1.1 5.432µs 101

@ -0,0 +1 @@
[04/06/2016--12:41:45] 1.25 mystring dropme nomodifier

@ -0,0 +1,231 @@
package logparser

import (
	"fmt"
	"log"
	"reflect"
	"sync"

	"github.com/hpcloud/tail"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/internal/errchan"
	"github.com/influxdata/telegraf/internal/globpath"
	"github.com/influxdata/telegraf/plugins/inputs"

	// Parsers
	"github.com/influxdata/telegraf/plugins/inputs/logparser/grok"
)

// LogParser is the interface that each parser type must satisfy to be
// discovered and used by this plugin.
type LogParser interface {
	ParseLine(line string) (telegraf.Metric, error)
	Compile() error
}

type LogParserPlugin struct {
	Files         []string
	FromBeginning bool

	tailers []*tail.Tail
	lines   chan string
	done    chan struct{}
	wg      sync.WaitGroup
	acc     telegraf.Accumulator
	parsers []LogParser

	sync.Mutex

	GrokParser *grok.Parser `toml:"grok"`
}

const sampleConfig = `
  ## Log files to parse.
  ## These accept standard unix glob matching rules, but with the addition of
  ## ** as a "super asterisk". ie:
  ##   /var/log/**.log     -> recursively find all .log files in /var/log
  ##   /var/log/*/*.log    -> find all .log files with a parent dir in /var/log
  ##   /var/log/apache.log -> only tail the apache log file
  files = ["/var/log/apache/access.log"]
  ## Read file from beginning.
  from_beginning = false

  ## Parse logstash-style "grok" patterns:
  ##   Telegraf built-in parsing patterns: https://goo.gl/dkay10
  [inputs.logparser.grok]
    ## This is a list of patterns to check the given log file(s) for.
    ## Note that adding patterns here increases processing time. The most
    ## efficient configuration is to have one pattern per logparser.
    ## Other common built-in patterns are:
    ##   %{COMMON_LOG_FORMAT}   (plain apache & nginx access logs)
    ##   %{COMBINED_LOG_FORMAT} (access logs + referrer & agent)
    patterns = ["%{COMBINED_LOG_FORMAT}"]
    ## Name of the output measurement.
    measurement = "apache_access_log"
    ## Full path(s) to custom pattern files.
    custom_pattern_files = []
    ## Custom patterns can also be defined here. Put one pattern per line.
    custom_patterns = '''
    '''
`

func (l *LogParserPlugin) SampleConfig() string {
	return sampleConfig
}

func (l *LogParserPlugin) Description() string {
	return "Stream and parse log file(s)."
}

// Gather is a no-op; all metrics are collected asynchronously by the
// tailing goroutines started in Start.
func (l *LogParserPlugin) Gather(acc telegraf.Accumulator) error {
	return nil
}

func (l *LogParserPlugin) Start(acc telegraf.Accumulator) error {
	l.Lock()
	defer l.Unlock()

	l.acc = acc
	l.lines = make(chan string, 1000)
	l.done = make(chan struct{})

	// Look for fields which implement the LogParser interface
	l.parsers = []LogParser{}
	s := reflect.ValueOf(l).Elem()
	for i := 0; i < s.NumField(); i++ {
		f := s.Field(i)

		if !f.CanInterface() {
			continue
		}

		if lpPlugin, ok := f.Interface().(LogParser); ok {
			// skip fields that hold a nil pointer
			if reflect.ValueOf(lpPlugin).IsNil() {
				continue
			}
			l.parsers = append(l.parsers, lpPlugin)
		}
	}

	if len(l.parsers) == 0 {
		return fmt.Errorf("logparser input plugin: no parser defined")
	}

	// compile log parser patterns:
	errChan := errchan.New(len(l.parsers))
	for _, parser := range l.parsers {
		if err := parser.Compile(); err != nil {
			errChan.C <- err
		}
	}
	if err := errChan.Error(); err != nil {
		return err
	}

	var seek tail.SeekInfo
	if !l.FromBeginning {
		// whence 2 == seek relative to the end of the file (os.SEEK_END),
		// i.e. only tail lines written after startup
		seek.Whence = 2
		seek.Offset = 0
	}

	l.wg.Add(1)
	go l.parser()

	// Create a "tailer" for each file
	for _, filepath := range l.Files {
		g, err := globpath.Compile(filepath)
		if err != nil {
			log.Printf("ERROR Glob %s failed to compile, %s", filepath, err)
			continue
		}
		files := g.Match()
		errChan = errchan.New(len(files))
		for file := range files {
			tailer, err := tail.TailFile(file,
				tail.Config{
					ReOpen:    true,
					Follow:    true,
					Location:  &seek,
					MustExist: true,
				})
			errChan.C <- err

			// create a goroutine for each "tailer"
			l.wg.Add(1)
			go l.receiver(tailer)
			l.tailers = append(l.tailers, tailer)
		}
	}

	return errChan.Error()
}

// receiver is launched as a goroutine to continuously watch a tailed logfile
// for changes and send any log lines down the l.lines channel.
func (l *LogParserPlugin) receiver(tailer *tail.Tail) {
	defer l.wg.Done()

	var line *tail.Line
	for line = range tailer.Lines {
		if line.Err != nil {
			log.Printf("ERROR tailing file %s, Error: %s\n",
				tailer.Filename, line.Err)
			continue
		}

		select {
		case <-l.done:
		case l.lines <- line.Text:
		}
	}
}

// parser is launched as a goroutine to watch the l.lines channel.
// when a line is available, parser parses it and adds the metric(s) to the
// accumulator.
func (l *LogParserPlugin) parser() {
	defer l.wg.Done()

	var m telegraf.Metric
	var err error
	var line string
	for {
		select {
		case <-l.done:
			return
		case line = <-l.lines:
			if line == "" || line == "\n" {
				continue
			}
		}

		for _, parser := range l.parsers {
			m, err = parser.ParseLine(line)
			if err == nil {
				if m != nil {
					l.acc.AddFields(m.Name(), m.Fields(), m.Tags(), m.Time())
				}
			}
		}
	}
}

func (l *LogParserPlugin) Stop() {
	l.Lock()
	defer l.Unlock()

	for _, t := range l.tailers {
		err := t.Stop()
		if err != nil {
			log.Printf("ERROR stopping tail on file %s\n", t.Filename)
		}
		t.Cleanup()
	}
	close(l.done)
	l.wg.Wait()
}

func init() {
	inputs.Add("logparser", func() telegraf.Input {
		return &LogParserPlugin{}
	})
}
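
The reflection loop in `Start` is what keeps parser discovery generic: any exported field of `LogParserPlugin` whose value satisfies `LogParser` is picked up automatically, so a future parser type only needs a new struct field. A self-contained sketch of the same technique (the names here are illustrative, not from the diff):

```go
package main

import (
	"fmt"
	"reflect"
)

type Compiler interface{ Compile() error }

type fakeParser struct{}

func (f *fakeParser) Compile() error { return nil }

type plugin struct {
	A *fakeParser // non-nil: discovered
	B *fakeParser // nil: skipped by the IsNil check
}

func main() {
	p := &plugin{A: &fakeParser{}}
	var found []Compiler
	s := reflect.ValueOf(p).Elem()
	for i := 0; i < s.NumField(); i++ {
		f := s.Field(i)
		if !f.CanInterface() {
			continue
		}
		if c, ok := f.Interface().(Compiler); ok && !reflect.ValueOf(c).IsNil() {
			found = append(found, c)
		}
	}
	fmt.Println("parsers found:", len(found)) // parsers found: 1
}
```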
@ -0,0 +1,119 @@
package logparser

import (
	"runtime"
	"strings"
	"testing"
	"time"

	"github.com/influxdata/telegraf/testutil"

	"github.com/influxdata/telegraf/plugins/inputs/logparser/grok"

	"github.com/stretchr/testify/assert"
)

func TestStartNoParsers(t *testing.T) {
	logparser := &LogParserPlugin{
		FromBeginning: true,
		Files:         []string{"grok/testdata/*.log"},
	}

	acc := testutil.Accumulator{}
	assert.Error(t, logparser.Start(&acc))
}

func TestGrokParseLogFilesNonExistPattern(t *testing.T) {
	thisdir := getCurrentDir()
	p := &grok.Parser{
		Patterns:           []string{"%{FOOBAR}"},
		CustomPatternFiles: []string{thisdir + "grok/testdata/test-patterns"},
	}

	logparser := &LogParserPlugin{
		FromBeginning: true,
		Files:         []string{thisdir + "grok/testdata/*.log"},
		GrokParser:    p,
	}

	acc := testutil.Accumulator{}
	assert.Error(t, logparser.Start(&acc))

	time.Sleep(time.Millisecond * 500)
	logparser.Stop()
}

func TestGrokParseLogFiles(t *testing.T) {
	thisdir := getCurrentDir()
	p := &grok.Parser{
		Patterns:           []string{"%{TEST_LOG_A}", "%{TEST_LOG_B}"},
		CustomPatternFiles: []string{thisdir + "grok/testdata/test-patterns"},
	}

	logparser := &LogParserPlugin{
		FromBeginning: true,
		Files:         []string{thisdir + "grok/testdata/*.log"},
		GrokParser:    p,
	}

	acc := testutil.Accumulator{}
	assert.NoError(t, logparser.Start(&acc))

	time.Sleep(time.Millisecond * 500)
	logparser.Stop()

	acc.AssertContainsTaggedFields(t, "logparser_grok",
		map[string]interface{}{
			"clientip":      "192.168.1.1",
			"myfloat":       float64(1.25),
			"response_time": int64(5432),
			"myint":         int64(101),
		},
		map[string]string{"response_code": "200"})

	acc.AssertContainsTaggedFields(t, "logparser_grok",
		map[string]interface{}{
			"myfloat":    1.25,
			"mystring":   "mystring",
			"nomodifier": "nomodifier",
		},
		map[string]string{})
}

// Test that the test_a.log line gets parsed even though we don't have the
// correct pattern available for test_b.log
func TestGrokParseLogFilesOneBad(t *testing.T) {
	thisdir := getCurrentDir()
	p := &grok.Parser{
		Patterns:           []string{"%{TEST_LOG_A}", "%{TEST_LOG_BAD}"},
		CustomPatternFiles: []string{thisdir + "grok/testdata/test-patterns"},
	}
	assert.NoError(t, p.Compile())

	logparser := &LogParserPlugin{
		FromBeginning: true,
		Files:         []string{thisdir + "grok/testdata/test_a.log"},
		GrokParser:    p,
	}

	acc := testutil.Accumulator{}
	acc.SetDebug(true)
	assert.NoError(t, logparser.Start(&acc))

	time.Sleep(time.Millisecond * 500)
	logparser.Stop()

	acc.AssertContainsTaggedFields(t, "logparser_grok",
		map[string]interface{}{
			"clientip":      "192.168.1.1",
			"myfloat":       float64(1.25),
			"response_time": int64(5432),
			"myint":         int64(101),
		},
		map[string]string{"response_code": "200"})
}

func getCurrentDir() string {
	_, filename, _, _ := runtime.Caller(1)
	return strings.Replace(filename, "logparser_test.go", "", 1)
}
@ -9,6 +9,7 @@ import (
 	"time"
 
 	"github.com/influxdata/telegraf"
+	"github.com/influxdata/telegraf/internal/errchan"
 	"github.com/influxdata/telegraf/plugins/inputs"
 )
@ -73,19 +74,16 @@ func (m *Memcached) Gather(acc telegraf.Accumulator) error {
 		return m.gatherServer(":11211", false, acc)
 	}
 
+	errChan := errchan.New(len(m.Servers) + len(m.UnixSockets))
 	for _, serverAddress := range m.Servers {
-		if err := m.gatherServer(serverAddress, false, acc); err != nil {
-			return err
-		}
+		errChan.C <- m.gatherServer(serverAddress, false, acc)
 	}
 
 	for _, unixAddress := range m.UnixSockets {
-		if err := m.gatherServer(unixAddress, true, acc); err != nil {
-			return err
-		}
+		errChan.C <- m.gatherServer(unixAddress, true, acc)
 	}
 
-	return nil
+	return errChan.Error()
 }
 
 func (m *Memcached) gatherServer(
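
The change above swaps early returns for the internal `errchan` helper, so one unreachable server no longer aborts collection from the rest. A rough stand-in for how such a helper can work; the real `internal/errchan` implementation is not shown in this diff, and this version returns only the first error for brevity:

```go
package main

import (
	"errors"
	"fmt"
)

// ErrChan is a buffered channel of errors plus a drain method,
// mimicking the shape used by the plugins in this diff.
type ErrChan struct{ C chan error }

func New(n int) *ErrChan { return &ErrChan{C: make(chan error, n)} }

// Error drains the channel and returns the first non-nil error seen.
func (e *ErrChan) Error() error {
	for {
		select {
		case err := <-e.C:
			if err != nil {
				return err
			}
		default:
			return nil
		}
	}
}

func main() {
	ec := New(3)
	ec.C <- nil
	ec.C <- errors.New("server 10.0.0.2:11211 unreachable")
	ec.C <- nil
	fmt.Println(ec.Error()) // server 10.0.0.2:11211 unreachable
}
```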
@ -1,6 +1,6 @@
 # Mesos Input Plugin
 
-This input plugin gathers metrics from Mesos (*currently only Mesos masters*).
+This input plugin gathers metrics from Mesos.
 For more information, please check the [Mesos Observability Metrics](http://mesos.apache.org/documentation/latest/monitoring/) page.
 
 ### Configuration:
@ -8,14 +8,41 @@ For more information, please check the [Mesos Observability Metrics](http://meso
 ```toml
 # Telegraf plugin for gathering metrics from N Mesos masters
 [[inputs.mesos]]
-  # Timeout, in ms.
+  ## Timeout, in ms.
   timeout = 100
-  # A list of Mesos masters, default value is localhost:5050.
+  ## A list of Mesos masters.
   masters = ["localhost:5050"]
-  # Metrics groups to be collected, by default, all enabled.
-  master_collections = ["resources","master","system","slaves","frameworks","messages","evqueue","registrar"]
+  ## Master metrics groups to be collected, by default, all enabled.
+  master_collections = [
+    "resources",
+    "master",
+    "system",
+    "agents",
+    "frameworks",
+    "tasks",
+    "messages",
+    "evqueue",
+    "registrar",
+  ]
+  ## A list of Mesos slaves, default is []
+  # slaves = []
+  ## Slave metrics groups to be collected, by default, all enabled.
+  # slave_collections = [
+  #   "resources",
+  #   "agent",
+  #   "system",
+  #   "executors",
+  #   "tasks",
+  #   "messages",
+  # ]
+  ## Include mesos tasks statistics, default is false
+  # slave_tasks = true
 ```
 
+By default this plugin is not configured to gather metrics from Mesos. Since a Mesos
+cluster can be deployed in numerous ways, it does not provide default values; the user
+needs to specify the master/slave nodes this plugin will gather metrics from.
+Additionally, enabling `slave_tasks` allows gathering metrics from tasks running on the
+specified slaves (this option is disabled by default).
+
 ### Measurements & Fields:
 
 Mesos master metric groups
@ -33,6 +60,12 @@ Mesos master metric groups
     - master/disk_revocable_percent
     - master/disk_revocable_total
     - master/disk_revocable_used
+    - master/gpus_percent
+    - master/gpus_used
+    - master/gpus_total
+    - master/gpus_revocable_percent
+    - master/gpus_revocable_total
+    - master/gpus_revocable_used
     - master/mem_percent
     - master/mem_used
     - master/mem_total
@ -136,17 +169,111 @@ Mesos master metric groups
     - registrar/state_store_ms/p999
     - registrar/state_store_ms/p9999
 
+Mesos slave metric groups
+
+- resources
+    - slave/cpus_percent
+    - slave/cpus_used
+    - slave/cpus_total
+    - slave/cpus_revocable_percent
+    - slave/cpus_revocable_total
+    - slave/cpus_revocable_used
+    - slave/disk_percent
+    - slave/disk_used
+    - slave/disk_total
+    - slave/disk_revocable_percent
+    - slave/disk_revocable_total
+    - slave/disk_revocable_used
+    - slave/gpus_percent
+    - slave/gpus_used
+    - slave/gpus_total
+    - slave/gpus_revocable_percent
+    - slave/gpus_revocable_total
+    - slave/gpus_revocable_used
+    - slave/mem_percent
+    - slave/mem_used
+    - slave/mem_total
+    - slave/mem_revocable_percent
+    - slave/mem_revocable_total
+    - slave/mem_revocable_used
+
+- agent
+    - slave/registered
+    - slave/uptime_secs
+
+- system
+    - system/cpus_total
+    - system/load_15min
+    - system/load_5min
+    - system/load_1min
+    - system/mem_free_bytes
+    - system/mem_total_bytes
+
+- executors
+    - containerizer/mesos/container_destroy_errors
+    - slave/container_launch_errors
+    - slave/executors_preempted
+    - slave/frameworks_active
+    - slave/executor_directory_max_allowed_age_secs
+    - slave/executors_registering
+    - slave/executors_running
+    - slave/executors_terminated
+    - slave/executors_terminating
+    - slave/recovery_errors
+
+- tasks
+    - slave/tasks_failed
+    - slave/tasks_finished
+    - slave/tasks_killed
+    - slave/tasks_lost
+    - slave/tasks_running
+    - slave/tasks_staging
+    - slave/tasks_starting
+
+- messages
+    - slave/invalid_framework_messages
+    - slave/invalid_status_updates
+    - slave/valid_framework_messages
+    - slave/valid_status_updates
+
+Mesos tasks metric groups
+
+- executor_id
+- executor_name
+- framework_id
+- source
+- statistics (all metrics below will have the `statistics_` prefix included in their names)
+    - cpus_limit
+    - cpus_system_time_secs
+    - cpus_user_time_secs
+    - mem_anon_bytes
+    - mem_cache_bytes
+    - mem_critical_pressure_counter
+    - mem_file_bytes
+    - mem_limit_bytes
+    - mem_low_pressure_counter
+    - mem_mapped_file_bytes
+    - mem_medium_pressure_counter
+    - mem_rss_bytes
+    - mem_swap_bytes
+    - mem_total_bytes
+    - mem_total_memsw_bytes
+    - mem_unevictable_bytes
+    - timestamp
 
 ### Tags:
 
-- All measurements have the following tags:
+- All master/slave measurements have the following tags:
     - server
+    - role (master/slave)
+
+- Tasks measurements have the following tags:
+    - server
 
 ### Example Output:
 
 ```
 $ telegraf -config ~/mesos.conf -input-filter mesos -test
 * Plugin: mesos, Collection 1
-mesos,server=172.17.8.101 allocator/event_queue_dispatches=0,master/cpus_percent=0,
+mesos,host=172.17.8.102,server=172.17.8.101 allocator/event_queue_dispatches=0,master/cpus_percent=0,
 master/cpus_revocable_percent=0,master/cpus_revocable_total=0,
 master/cpus_revocable_used=0,master/cpus_total=2,
 master/cpus_used=0,master/disk_percent=0,master/disk_revocable_percent=0,
@ -163,3 +290,16 @@ master/mem_revocable_used=0,master/mem_total=1002,
 master/mem_used=0,master/messages_authenticate=0,
 master/messages_deactivate_framework=0 ...
 ```
+
+Mesos tasks metrics (if enabled):
+```
+mesos-tasks,host=172.17.8.102,server=172.17.8.101,task_id=hello-world.e4b5b497-2ccd-11e6-a659-0242fb222ce2
+statistics_cpus_limit=0.2,statistics_cpus_system_time_secs=142.49,statistics_cpus_user_time_secs=388.14,
+statistics_mem_anon_bytes=359129088,statistics_mem_cache_bytes=3964928,
+statistics_mem_critical_pressure_counter=0,statistics_mem_file_bytes=3964928,
+statistics_mem_limit_bytes=767557632,statistics_mem_low_pressure_counter=0,
+statistics_mem_mapped_file_bytes=114688,statistics_mem_medium_pressure_counter=0,
+statistics_mem_rss_bytes=359129088,statistics_mem_swap_bytes=0,statistics_mem_total_bytes=363094016,
+statistics_mem_total_memsw_bytes=363094016,statistics_mem_unevictable_bytes=0,
+statistics_timestamp=1465486052.70525 1465486053052811792...
+```
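
Collection for this plugin boils down to polling each configured node's metrics snapshot and flattening the JSON keys into fields. A rough sketch of that fetch, assuming the standard Mesos `/metrics/snapshot` endpoint from the Mesos observability docs (the plugin's actual HTTP code is not part of this diff):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// timeout mirrors the `timeout = 100` (ms) config option
	client := &http.Client{Timeout: 100 * time.Millisecond}
	resp, err := client.Get("http://localhost:5050/metrics/snapshot")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// the snapshot is assumed to be a flat JSON object of metric name -> value
	metrics := map[string]interface{}{}
	if err := json.NewDecoder(resp.Body).Decode(&metrics); err != nil {
		panic(err)
	}
	fmt.Println("master/cpus_total =", metrics["master/cpus_total"])
}
```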