Aurélien Hébert 2016-07-25 09:49:25 +02:00
commit be7b64172a
87 changed files with 2484 additions and 1318 deletions

View File

@ -1,7 +1,28 @@
## v1.0 [unreleased]
### Features
- [#1413](https://github.com/influxdata/telegraf/issues/1413): Separate container_version from container_image tag.
- [#1525](https://github.com/influxdata/telegraf/pull/1525): Support setting per-device and total metrics for Docker network and blockio.
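The new per-device and total switches keep the old behavior by default (perdevice = true, total = false, matching the plugin defaults shown later in this diff); a minimal telegraf.conf sketch:
[[inputs.docker]]
  ## Report per-device blkio (8:0, 8:1, ...) and network (eth0, eth1, ...) stats
  perdevice = true
  ## Report aggregated blkio and network stats under the "total" device/network tag
  total = false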
### Bugfixes
- [#1519](https://github.com/influxdata/telegraf/pull/1519): Fix error race conditions and partial failures.
- [#1477](https://github.com/influxdata/telegraf/issues/1477): nstat: fix inaccurate config panic.
- [#1481](https://github.com/influxdata/telegraf/issues/1481): jolokia: fix handling multiple multi-dimensional attributes.
- [#1430](https://github.com/influxdata/telegraf/issues/1430): Fix prometheus character sanitizing. Sanitize more win_perf_counters characters.
- [#1534](https://github.com/influxdata/telegraf/pull/1534): Add diskio io_time to FreeBSD & report timing metrics as ms (as linux does).
## v1.0 beta 3 [2016-07-18]
### Release Notes
**Breaking Change**: Aerospike main server node measurements have been renamed
to aerospike_node. Aerospike namespace measurements have been renamed to
aerospike_namespace. They will also now be tagged with the node_name
that they correspond to. This has been done to differentiate measurements
that pertain to node vs. namespace statistics.
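In line protocol terms the new layout looks roughly like this (illustrative values; note that in the plugin code later in this diff, node_name is actually emitted as a field on both measurements):
aerospike_node,aerospike_host=localhost:3000 node_name="BB9020011AC4202",batch_error=0i
aerospike_namespace,aerospike_host=localhost:3000,namespace=test node_name="BB9020011AC4202",objects=0i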
**Breaking Change**: users of github_webhooks must change to the new
`[[inputs.webhooks]]` plugin.
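For reference, a minimal migrated config looks like this (the "/github" path matches the webhooks sample config later in this diff; the ":1619" listen address is the plugin's documented default, an assumption since it is not shown in this excerpt):
[[inputs.webhooks]]
  ## Address and port to host the webhook listener on (plugin default, assumed)
  service_address = ":1619"
  [inputs.webhooks.github]
    path = "/github"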
@ -28,20 +49,42 @@ should now look like:
### Features
- [#1503](https://github.com/influxdata/telegraf/pull/1503): Add tls support for certs to RabbitMQ input plugin
- [#1289](https://github.com/influxdata/telegraf/pull/1289): webhooks input plugin. Thanks @francois2metz and @cduez!
- [#1247](https://github.com/influxdata/telegraf/pull/1247): rollbar webhook plugin.
- [#1408](https://github.com/influxdata/telegraf/pull/1408): mandrill webhook plugin.
- [#1402](https://github.com/influxdata/telegraf/pull/1402): docker-machine/boot2docker no longer required for unit tests.
- [#1350](https://github.com/influxdata/telegraf/pull/1350): cgroup input plugin.
- [#1369](https://github.com/influxdata/telegraf/pull/1369): Add input plugin for consuming metrics from NSQD.
- [#1480](https://github.com/influxdata/telegraf/pull/1480): add ability to read redis from a socket.
- [#1387](https://github.com/influxdata/telegraf/pull/1387): **Breaking Change** - Redis `role` tag renamed to `replication_role` to avoid global_tags override
- [#1437](https://github.com/influxdata/telegraf/pull/1437): Fetching Galera status metrics in MySQL
- [#1500](https://github.com/influxdata/telegraf/pull/1500): Aerospike plugin refactored to use official client lib.
- [#1434](https://github.com/influxdata/telegraf/pull/1434): Add measurement name arg to logparser plugin.
- [#1479](https://github.com/influxdata/telegraf/pull/1479): logparser: change resp_code from a field to a tag.
- [#1466](https://github.com/influxdata/telegraf/pull/1466): MongoDB input plugin: adding per DB stats from db.stats()
- [#1411](https://github.com/influxdata/telegraf/pull/1411): Implement support for fetching hddtemp data
### Bugfixes
- [#1472](https://github.com/influxdata/telegraf/pull/1472): diskio input plugin: set 'skip_serial_number = true' by default to avoid high cardinality.
- [#1426](https://github.com/influxdata/telegraf/pull/1426): nil metrics panic fix.
- [#1384](https://github.com/influxdata/telegraf/pull/1384): Fix datarace in apache input plugin.
- [#1399](https://github.com/influxdata/telegraf/issues/1399): Add `read_repairs` statistics to riak plugin.
- [#1405](https://github.com/influxdata/telegraf/issues/1405): Fix memory/connection leak in prometheus input plugin.
- [#1378](https://github.com/influxdata/telegraf/issues/1378): Trim BOM from config file for Windows support.
- [#1339](https://github.com/influxdata/telegraf/issues/1339): Prometheus client output panic on service reload.
- [#1461](https://github.com/influxdata/telegraf/pull/1461): Prometheus parser, protobuf format header fix.
- [#1334](https://github.com/influxdata/telegraf/issues/1334): Prometheus output, metric refresh and caching fixes.
- [#1432](https://github.com/influxdata/telegraf/issues/1432): Panic fix for multiple graphite outputs under very high load.
- [#1412](https://github.com/influxdata/telegraf/pull/1412): Instrumental output has better reconnect behavior
- [#1460](https://github.com/influxdata/telegraf/issues/1460): Remove PID from procstat plugin to fix cardinality issues.
- [#1427](https://github.com/influxdata/telegraf/issues/1427): Cassandra input: version 2.x "column family" fix.
- [#1463](https://github.com/influxdata/telegraf/issues/1463): Shared WaitGroup in Exec plugin
- [#1436](https://github.com/influxdata/telegraf/issues/1436): logparser: honor modifiers in "pattern" config.
- [#1418](https://github.com/influxdata/telegraf/issues/1418): logparser: error and exit on file permissions/missing errors.
- [#1499](https://github.com/influxdata/telegraf/pull/1499): Make the user able to specify full path for HAproxy stats
- [#1521](https://github.com/influxdata/telegraf/pull/1521): Fix Redis URL parsing; an extra "tcp://" prefix was being added.
## v1.0 beta 2 [2016-06-21]

Godeps
View File

@ -1,5 +1,6 @@
github.com/Shopify/sarama 8aadb476e66ca998f2f6bb3c993e9a2daa3666b9
github.com/Sirupsen/logrus 219c8cb75c258c552e999735be6df753ffc7afdc
github.com/aerospike/aerospike-client-go 45863b7fd8640dc12f7fdd397104d97e1986f25a
github.com/amir/raidman 53c1b967405155bfc8758557863bf2e14f814687
github.com/aws/aws-sdk-go 13a12060f716145019378a10e2806c174356b857
github.com/beorn7/perks 3ac7bf7a47d159a033b107610db8a1b6575507a4
@ -43,13 +44,15 @@ github.com/prometheus/client_model fa8ad6fec33561be4280a8f0514318c79d7f6cb6
github.com/prometheus/common e8eabff8812b05acf522b45fdcd725a785188e37
github.com/prometheus/procfs 406e5b7bfd8201a36e2bb5f7bdae0b03380c2ce8
github.com/samuel/go-zookeeper 218e9c81c0dd8b3b18172b2bbfad92cc7d6db55f
github.com/shirou/gopsutil ee66bc560c366dd33b9a4046ba0b644caba46bed
github.com/soniah/gosnmp b1b4f885b12c5dcbd021c5cee1c904110de6db7d
github.com/sparrc/aerospike-client-go d4bb42d2c2d39dae68e054116f4538af189e05d5
github.com/streadway/amqp b4f3ceab0337f013208d31348b578d83c0064744
github.com/stretchr/testify 1f4a1643a57e798696635ea4c126e9127adb7d3c
github.com/vjeantet/grok 83bfdfdfd1a8146795b28e547a8e3c8b28a466c2
github.com/wvanbergen/kafka 46f9a1cf3f670edec492029fadded9c2d9e18866
github.com/wvanbergen/kazoo-go 0f768712ae6f76454f987c3356177e138df258f8
github.com/yuin/gopher-lua bf3808abd44b1e55143a2d7f08571aaa80db1808
github.com/zensqlmonitor/go-mssqldb ffe5510c6fa5e15e6d983210ab501c815b56b363
golang.org/x/crypto 5dc8cb4b8a8eb076cbb5a06bc3b8682c15bdbbd3
golang.org/x/net 6acef71eb69611914f7a30939ea9f6e194c78172

View File

@ -25,10 +25,6 @@ build-for-docker:
"-s -X main.version=$(VERSION)" \ "-s -X main.version=$(VERSION)" \
./cmd/telegraf/telegraf.go ./cmd/telegraf/telegraf.go
# Build with race detector
dev: prepare
go build -race -ldflags "-X main.version=$(VERSION)" ./...
# run package script # run package script
package: package:
./scripts/build.py --package --version="$(VERSION)" --platform=linux --arch=all --upload ./scripts/build.py --package --version="$(VERSION)" --platform=linux --arch=all --upload
@ -55,7 +51,7 @@ docker-run:
docker run --name postgres -p "5432:5432" -d postgres docker run --name postgres -p "5432:5432" -d postgres
docker run --name rabbitmq -p "15672:15672" -p "5672:5672" -d rabbitmq:3-management docker run --name rabbitmq -p "15672:15672" -p "5672:5672" -d rabbitmq:3-management
docker run --name redis -p "6379:6379" -d redis docker run --name redis -p "6379:6379" -d redis
docker run --name aerospike -p "3000:3000" -d aerospike docker run --name aerospike -p "3000:3000" -d aerospike/aerospike-server
docker run --name nsq -p "4150:4150" -d nsqio/nsq /nsqd docker run --name nsq -p "4150:4150" -d nsqio/nsq /nsqd
docker run --name mqtt -p "1883:1883" -d ncarlier/mqtt docker run --name mqtt -p "1883:1883" -d ncarlier/mqtt
docker run --name riemann -p "5555:5555" -d blalor/riemann docker run --name riemann -p "5555:5555" -d blalor/riemann
@ -68,7 +64,7 @@ docker-run-circle:
-e ADVERTISED_PORT=9092 \ -e ADVERTISED_PORT=9092 \
-p "2181:2181" -p "9092:9092" \ -p "2181:2181" -p "9092:9092" \
-d spotify/kafka -d spotify/kafka
docker run --name aerospike -p "3000:3000" -d aerospike docker run --name aerospike -p "3000:3000" -d aerospike/aerospike-server
docker run --name nsq -p "4150:4150" -d nsqio/nsq /nsqd docker run --name nsq -p "4150:4150" -d nsqio/nsq /nsqd
docker run --name mqtt -p "1883:1883" -d ncarlier/mqtt docker run --name mqtt -p "1883:1883" -d ncarlier/mqtt
docker run --name riemann -p "5555:5555" -d blalor/riemann docker run --name riemann -p "5555:5555" -d blalor/riemann

View File

@ -20,12 +20,12 @@ new plugins.
### Linux deb and rpm Packages:
Latest:
* https://dl.influxdata.com/telegraf/releases/telegraf_1.0.0-beta3_amd64.deb
* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0_beta3.x86_64.rpm
Latest (arm):
* https://dl.influxdata.com/telegraf/releases/telegraf_1.0.0-beta3_armhf.deb
* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0_beta3.armhf.rpm
##### Package Instructions:
@ -46,14 +46,14 @@ to use this repo to install & update telegraf.
### Linux tarballs:
Latest:
* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0-beta3_linux_amd64.tar.gz
* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0-beta3_linux_i386.tar.gz
* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0-beta3_linux_armhf.tar.gz
### FreeBSD tarball:
Latest:
* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0-beta3_freebsd_amd64.tar.gz
### Ansible Role:
@ -69,7 +69,7 @@ brew install telegraf
### Windows Binaries (EXPERIMENTAL)
Latest:
* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0-beta3_windows_amd64.zip
### From Source:
@ -156,6 +156,7 @@ Currently implemented sources:
* [exec](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/exec) (generic executable plugin, supports JSON, influx, graphite and nagios)
* [filestat](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/filestat)
* [haproxy](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/haproxy)
* [hddtemp](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/hddtemp)
* [http_response](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/http_response)
* [httpjson](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/httpjson) (generic JSON-emitting http service plugin)
* [influxdb](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/influxdb)
@ -219,10 +220,9 @@ Telegraf can also collect metrics via the following service plugins:
* [nats_consumer](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/nats_consumer)
* [webhooks](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/webhooks)
* [github](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/webhooks/github)
* [mandrill](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/webhooks/mandrill)
* [rollbar](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/webhooks/rollbar)
* [nsq_consumer](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/nsq_consumer)
* [github_webhooks](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/github_webhooks)
* [rollbar_webhooks](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/rollbar_webhooks)
We'll be adding support for many more over the coming months. Read on if you
want to add support for another service or third-party API.

View File

@ -32,8 +32,6 @@ type accumulator struct {
inputConfig *internal_models.InputConfig
prefix string
precision time.Duration
}
@ -146,10 +144,6 @@ func (ac *accumulator) AddFields(
}
timestamp = timestamp.Round(ac.precision)
if ac.prefix != "" {
measurement = ac.prefix + measurement
}
m, err := telegraf.NewMetric(measurement, tags, result, timestamp)
if err != nil {
log.Printf("Error adding point [%s]: %s\n", measurement, err.Error())

View File

@ -268,11 +268,31 @@ func (a *Agent) flusher(shutdown chan struct{}, metricC chan telegraf.Metric) er
internal.RandomSleep(a.Config.Agent.FlushJitter.Duration, shutdown)
a.flush()
case m := <-metricC:
for i, o := range a.Config.Outputs {
if i == len(a.Config.Outputs)-1 {
o.AddMetric(m)
} else {
o.AddMetric(copyMetric(m))
}
}
}
}
}
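// copyMetric duplicates a metric's name, tags, fields, and timestamp so that
// each output gets its own instance; the flusher above hands the original to
// the last output and copies to the rest, avoiding one needless copy.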
func copyMetric(m telegraf.Metric) telegraf.Metric {
t := time.Time(m.Time())
tags := make(map[string]string)
fields := make(map[string]interface{})
for k, v := range m.Tags() {
tags[k] = v
}
for k, v := range m.Fields() {
fields[k] = v
}
out, _ := telegraf.NewMetric(m.Name(), tags, fields, t)
return out
}
// Run runs the agent daemon, gathering every Interval

View File

@ -197,6 +197,8 @@
# # Configuration for Graphite server to send metrics to
# [[outputs.graphite]]
# ## TCP endpoint for your graphite instance.
# ## If multiple endpoints are configured, output will be load balanced.
# ## Only one of the endpoints will be written to with each iteration.
# servers = ["localhost:2003"]
# ## Prefix metrics name
# prefix = ""
@ -434,8 +436,8 @@
## disk partitions.
## Setting devices will restrict the stats to the specified devices.
# devices = ["sda", "sdb"]
## Uncomment the following line if you need disk serial numbers.
# skip_serial_number = false
# Get kernel statistics from /proc/stat # Get kernel statistics from /proc/stat
@ -463,7 +465,7 @@
# no configuration
# # Read stats from aerospike server(s)
# [[inputs.aerospike]]
# ## Aerospike servers to connect to (with port)
# ## This plugin will query all namespaces the aerospike
@ -664,6 +666,13 @@
# container_names = []
# ## Timeout for docker list, info, and stats commands
# timeout = "5s"
#
# ## Whether to report for each container per-device blkio (8:0, 8:1...) and
# ## network (eth0, eth1, ...) stats or not
# perdevice = true
# ## Whether to report for each container total blkio and network stats or not
# total = false
#
# # Read statistics from one or many dovecot servers
@ -780,9 +789,11 @@
# [[inputs.haproxy]]
# ## An array of address to gather stats about. Specify an ip on hostname
# ## with optional port. ie localhost, 10.10.3.33:1936, etc.
# ## Make sure you specify the complete path to the stats endpoint
# ## ie 10.10.3.33:1936/haproxy?stats
# #
# ## If no servers are specified, then default to 127.0.0.1:1936/haproxy?stats
# servers = ["http://myhaproxy.com:1936/haproxy?stats"]
# ## Or you can also use local socket
# ## servers = ["socket:/run/haproxy/admin.sock"]
@ -968,21 +979,35 @@
# # Telegraf plugin for gathering metrics from N Mesos masters
# [[inputs.mesos]]
# ## Timeout, in ms.
# timeout = 100
# ## A list of Mesos masters.
# masters = ["localhost:5050"]
# ## Master metrics groups to be collected, by default, all enabled.
# master_collections = [
# "resources",
# "master",
# "system",
# "agents",
# "frameworks",
# "tasks",
# "messages",
# "evqueue",
# "registrar",
# ]
# ## A list of Mesos slaves, default is []
# # slaves = []
# ## Slave metrics groups to be collected, by default, all enabled.
# # slave_collections = [
# # "resources",
# # "agent",
# # "system",
# # "executors",
# # "tasks",
# # "messages",
# # ]
# ## Include mesos tasks statistics, default is false
# # slave_tasks = true
# # Read metrics from one or many MongoDB servers
@ -993,6 +1018,7 @@
# ## mongodb://10.10.3.33:18832,
# ## 10.0.0.1:10000, etc.
# servers = ["127.0.0.1:27017"]
# gather_perdb_stats = false
# # Read metrics from one or many mysql servers
@ -1099,9 +1125,9 @@
# ## file paths for proc files. If empty default paths will be used:
# ## /proc/net/netstat, /proc/net/snmp, /proc/net/snmp6
# ## These can also be overridden with env variables, see README.
# proc_net_netstat = "/proc/net/netstat"
# proc_net_snmp = "/proc/net/snmp"
# proc_net_snmp6 = "/proc/net/snmp6"
# ## dump metrics with 0 values too
# dump_zeros = true
@ -1303,6 +1329,13 @@
# # username = "guest"
# # password = "guest"
# #
# ## Optional SSL Config
# # ssl_ca = "/etc/telegraf/ca.pem"
# # ssl_cert = "/etc/telegraf/cert.pem"
# # ssl_key = "/etc/telegraf/key.pem"
# ## Use SSL but skip chain & host verification
# # insecure_skip_verify = false
#
# ## A list of nodes to pull metrics about. If not specified, metrics for
# ## all nodes are gathered.
# # nodes = ["rabbit@node1", "rabbit@node2"]
@ -1321,6 +1354,7 @@
# ## e.g.
# ## tcp://localhost:6379
# ## tcp://:password@192.168.99.100
# ## unix:///var/run/redis.sock
# ##
# ## If no servers are specified, then localhost is used as the host.
# ## If no port is specified, 6379 is used
@ -1557,6 +1591,8 @@
# ## %{COMMON_LOG_FORMAT} (plain apache & nginx access logs)
# ## %{COMBINED_LOG_FORMAT} (access logs + referrer & agent)
# patterns = ["%{INFLUXDB_HTTPD_LOG}"]
# ## Name of the output measurement.
# measurement = "influxdb_log"
# ## Full path(s) to custom pattern files.
# custom_pattern_files = []
# ## Custom patterns can also be defined here. Put one pattern per line.
@ -1620,6 +1656,21 @@
# data_format = "influx"
# # Read NSQ topic for metrics.
# [[inputs.nsq_consumer]]
# ## A string representing the NSQD TCP endpoint
# server = "localhost:4150"
# topic = "telegraf"
# channel = "consumer"
# max_in_flight = 100
#
# ## Data format to consume.
# ## Each data format has its own unique set of configuration options, read
# ## more about them here:
# ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
# data_format = "influx"
# # Statsd Server
# [[inputs.statsd]]
# ## Address and port to host UDP listener on
@ -1723,6 +1774,9 @@
# [inputs.webhooks.github]
# path = "/github"
# #
# [inputs.webhooks.mandrill]
# path = "/mandrill"
#
# [inputs.webhooks.rollbar]
# path = "/rollbar"

View File

@ -139,7 +139,7 @@ func (c *Config) InputNames() []string {
return name
}
// Outputs returns a list of strings of the configured outputs.
func (c *Config) OutputNames() []string {
var name []string
for _, output := range c.Outputs {

File diff suppressed because one or more lines are too long

View File

@ -1,104 +1,19 @@
package aerospike
import (
"bytes"
"encoding/binary"
"fmt"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"net" "net"
"strconv" "strconv"
"strings" "strings"
"sync" "sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/plugins/inputs"
as "github.com/sparrc/aerospike-client-go"
)
const (
MSG_HEADER_SIZE = 8
MSG_TYPE = 1 // Info is 1
MSG_VERSION = 2
)
var (
STATISTICS_COMMAND = []byte("statistics\n")
NAMESPACES_COMMAND = []byte("namespaces\n")
)
type aerospikeMessageHeader struct {
Version uint8
Type uint8
DataLen [6]byte
}
type aerospikeMessage struct {
aerospikeMessageHeader
Data []byte
}
// Taken from aerospike-client-go/types/message.go
func (msg *aerospikeMessage) Serialize() []byte {
msg.DataLen = msgLenToBytes(int64(len(msg.Data)))
buf := bytes.NewBuffer([]byte{})
binary.Write(buf, binary.BigEndian, msg.aerospikeMessageHeader)
binary.Write(buf, binary.BigEndian, msg.Data[:])
return buf.Bytes()
}
type aerospikeInfoCommand struct {
msg *aerospikeMessage
}
// Taken from aerospike-client-go/info.go
func (nfo *aerospikeInfoCommand) parseMultiResponse() (map[string]string, error) {
responses := make(map[string]string)
offset := int64(0)
begin := int64(0)
dataLen := int64(len(nfo.msg.Data))
// Create reusable StringBuilder for performance.
for offset < dataLen {
b := nfo.msg.Data[offset]
if b == '\t' {
name := nfo.msg.Data[begin:offset]
offset++
begin = offset
// Parse field value.
for offset < dataLen {
if nfo.msg.Data[offset] == '\n' {
break
}
offset++
}
if offset > begin {
value := nfo.msg.Data[begin:offset]
responses[string(name)] = string(value)
} else {
responses[string(name)] = ""
}
offset++
begin = offset
} else if b == '\n' {
if offset > begin {
name := nfo.msg.Data[begin:offset]
responses[string(name)] = ""
}
offset++
begin = offset
} else {
offset++
}
}
if offset > begin {
name := nfo.msg.Data[begin:offset]
responses[string(name)] = ""
}
return responses, nil
}
type Aerospike struct {
Servers []string
}
@ -115,7 +30,7 @@ func (a *Aerospike) SampleConfig() string {
}
func (a *Aerospike) Description() string {
return "Read stats from aerospike server(s)"
}
func (a *Aerospike) Gather(acc telegraf.Accumulator) error {
@ -124,214 +39,101 @@ func (a *Aerospike) Gather(acc telegraf.Accumulator) error {
}
var wg sync.WaitGroup
errChan := errchan.New(len(a.Servers))
wg.Add(len(a.Servers))
for _, server := range a.Servers {
go func(serv string) {
defer wg.Done()
errChan.C <- a.gatherServer(serv, acc)
}(server)
}
wg.Wait()
return errChan.Error()
}
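// gatherServer dials a single host:port with the official client, then emits
// one aerospike_node measurement per cluster node and one aerospike_namespace
// measurement per namespace; parseValue (below) coerces stat strings to int64,
// then bool, falling back to the raw string.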
func (a *Aerospike) gatherServer(hostport string, acc telegraf.Accumulator) error {
host, port, err := net.SplitHostPort(hostport)
if err != nil {
return err
}
iport, err := strconv.Atoi(port)
if err != nil {
iport = 3000
}
c, err := as.NewClient(host, iport)
if err != nil {
return err
}
defer c.Close()
nodes := c.GetNodes()
for _, n := range nodes {
tags := map[string]string{
"aerospike_host": hostport,
}
fields := map[string]interface{}{
"node_name": n.GetName(),
}
stats, err := as.RequestNodeStats(n)
if err != nil {
return err
}
for k, v := range stats {
fields[strings.Replace(k, "-", "_", -1)] = parseValue(v)
}
acc.AddFields("aerospike_node", fields, tags, time.Now())
info, err := as.RequestNodeInfo(n, "namespaces")
if err != nil {
return err
}
namespaces := strings.Split(info["namespaces"], ";")
for _, namespace := range namespaces {
nTags := map[string]string{
"aerospike_host": hostport,
}
nTags["namespace"] = namespace
nFields := map[string]interface{}{
"node_name": n.GetName(),
}
info, err := as.RequestNodeInfo(n, "namespace/"+namespace)
if err != nil {
continue
}
stats := strings.Split(info["namespace/"+namespace], ";")
for _, stat := range stats {
parts := strings.Split(stat, "=")
if len(parts) < 2 {
continue
}
nFields[strings.Replace(parts[0], "-", "_", -1)] = parseValue(parts[1])
}
acc.AddFields("aerospike_namespace", nFields, nTags, time.Now())
}
}
return nil
}
func parseValue(v string) interface{} {
if parsed, err := strconv.ParseInt(v, 10, 64); err == nil {
return parsed
} else if parsed, err := strconv.ParseBool(v); err == nil {
return parsed
} else {
return v
}
}
func copyTags(m map[string]string) map[string]string {
out := make(map[string]string)
for k, v := range m {
out[k] = v
}
return out
}
func (a *Aerospike) gatherServer(host string, acc telegraf.Accumulator) error {
aerospikeInfo, err := getMap(STATISTICS_COMMAND, host)
if err != nil {
return fmt.Errorf("Aerospike info failed: %s", err)
}
readAerospikeStats(aerospikeInfo, acc, host, "")
namespaces, err := getList(NAMESPACES_COMMAND, host)
if err != nil {
return fmt.Errorf("Aerospike namespace list failed: %s", err)
}
for ix := range namespaces {
nsInfo, err := getMap([]byte("namespace/"+namespaces[ix]+"\n"), host)
if err != nil {
return fmt.Errorf("Aerospike namespace '%s' query failed: %s", namespaces[ix], err)
}
readAerospikeStats(nsInfo, acc, host, namespaces[ix])
}
return nil
}
func getMap(key []byte, host string) (map[string]string, error) {
data, err := get(key, host)
if err != nil {
return nil, fmt.Errorf("Failed to get data: %s", err)
}
parsed, err := unmarshalMapInfo(data, string(key))
if err != nil {
return nil, fmt.Errorf("Failed to unmarshal data: %s", err)
}
return parsed, nil
}
func getList(key []byte, host string) ([]string, error) {
data, err := get(key, host)
if err != nil {
return nil, fmt.Errorf("Failed to get data: %s", err)
}
parsed, err := unmarshalListInfo(data, string(key))
if err != nil {
return nil, fmt.Errorf("Failed to unmarshal data: %s", err)
}
return parsed, nil
}
func get(key []byte, host string) (map[string]string, error) {
var err error
var data map[string]string
asInfo := &aerospikeInfoCommand{
msg: &aerospikeMessage{
aerospikeMessageHeader: aerospikeMessageHeader{
Version: uint8(MSG_VERSION),
Type: uint8(MSG_TYPE),
DataLen: msgLenToBytes(int64(len(key))),
},
Data: key,
},
}
cmd := asInfo.msg.Serialize()
addr, err := net.ResolveTCPAddr("tcp", host)
if err != nil {
return data, fmt.Errorf("Lookup failed for '%s': %s", host, err)
}
conn, err := net.DialTCP("tcp", nil, addr)
if err != nil {
return data, fmt.Errorf("Connection failed for '%s': %s", host, err)
}
defer conn.Close()
_, err = conn.Write(cmd)
if err != nil {
return data, fmt.Errorf("Failed to send to '%s': %s", host, err)
}
msgHeader := bytes.NewBuffer(make([]byte, MSG_HEADER_SIZE))
_, err = readLenFromConn(conn, msgHeader.Bytes(), MSG_HEADER_SIZE)
if err != nil {
return data, fmt.Errorf("Failed to read header: %s", err)
}
err = binary.Read(msgHeader, binary.BigEndian, &asInfo.msg.aerospikeMessageHeader)
if err != nil {
return data, fmt.Errorf("Failed to unmarshal header: %s", err)
}
msgLen := msgLenFromBytes(asInfo.msg.aerospikeMessageHeader.DataLen)
if int64(len(asInfo.msg.Data)) != msgLen {
asInfo.msg.Data = make([]byte, msgLen)
}
_, err = readLenFromConn(conn, asInfo.msg.Data, len(asInfo.msg.Data))
if err != nil {
return data, fmt.Errorf("Failed to read from connection to '%s': %s", host, err)
}
data, err = asInfo.parseMultiResponse()
if err != nil {
return data, fmt.Errorf("Failed to parse response from '%s': %s", host, err)
}
return data, err
}
func readAerospikeStats(
stats map[string]string,
acc telegraf.Accumulator,
host string,
namespace string,
) {
fields := make(map[string]interface{})
tags := map[string]string{
"aerospike_host": host,
"namespace": "_service",
}
if namespace != "" {
tags["namespace"] = namespace
}
for key, value := range stats {
// We are going to ignore all string based keys
val, err := strconv.ParseInt(value, 10, 64)
if err == nil {
if strings.Contains(key, "-") {
key = strings.Replace(key, "-", "_", -1)
}
fields[key] = val
}
}
acc.AddFields("aerospike", fields, tags)
}
func unmarshalMapInfo(infoMap map[string]string, key string) (map[string]string, error) {
key = strings.TrimSuffix(key, "\n")
res := map[string]string{}
v, exists := infoMap[key]
if !exists {
return res, fmt.Errorf("Key '%s' missing from info", key)
}
values := strings.Split(v, ";")
for i := range values {
kv := strings.Split(values[i], "=")
if len(kv) > 1 {
res[kv[0]] = kv[1]
}
}
return res, nil
}
func unmarshalListInfo(infoMap map[string]string, key string) ([]string, error) {
key = strings.TrimSuffix(key, "\n")
v, exists := infoMap[key]
if !exists {
return []string{}, fmt.Errorf("Key '%s' missing from info", key)
}
values := strings.Split(v, ";")
return values, nil
}
func readLenFromConn(c net.Conn, buffer []byte, length int) (total int, err error) {
var r int
for total < length {
r, err = c.Read(buffer[total:length])
total += r
if err != nil {
break
}
}
return
}
// Taken from aerospike-client-go/types/message.go
func msgLenToBytes(DataLen int64) [6]byte {
b := make([]byte, 8)
binary.BigEndian.PutUint64(b, uint64(DataLen))
res := [6]byte{}
copy(res[:], b[2:])
return res
}
// Taken from aerospike-client-go/types/message.go
func msgLenFromBytes(buf [6]byte) int64 {
nbytes := append([]byte{0, 0}, buf[:]...)
DataLen := binary.BigEndian.Uint64(nbytes)
return int64(DataLen)
}
func init() {

View File

@ -1,7 +1,6 @@
package aerospike
import (
"reflect"
"testing"
"github.com/influxdata/telegraf/testutil"
@ -23,96 +22,29 @@ func TestAerospikeStatistics(t *testing.T) {
err := a.Gather(&acc)
require.NoError(t, err)
assert.True(t, acc.HasMeasurement("aerospike_node"))
assert.True(t, acc.HasMeasurement("aerospike_namespace"))
assert.True(t, acc.HasIntField("aerospike_node", "batch_error"))
}
func TestAerospikeStatisticsPartialErr(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
a := &Aerospike{
Servers: []string{
testutil.GetLocalHost() + ":3000",
testutil.GetLocalHost() + ":9999",
},
}
var acc testutil.Accumulator
err := a.Gather(&acc)
require.Error(t, err)
assert.True(t, acc.HasMeasurement("aerospike_node"))
assert.True(t, acc.HasMeasurement("aerospike_namespace"))
assert.True(t, acc.HasIntField("aerospike_node", "batch_error"))
}
// Only use a few of the metrics
asMetrics := []string{
"transactions",
"stat_write_errs",
"stat_read_reqs",
"stat_write_reqs",
}
for _, metric := range asMetrics {
assert.True(t, acc.HasIntField("aerospike", metric), metric)
}
}
func TestAerospikeMsgLenFromToBytes(t *testing.T) {
var i int64 = 8
assert.True(t, i == msgLenFromBytes(msgLenToBytes(i)))
}
func TestReadAerospikeStatsNoNamespace(t *testing.T) {
// Also test for re-writing
var acc testutil.Accumulator
stats := map[string]string{
"stat-write-errs": "12345",
"stat_read_reqs": "12345",
}
readAerospikeStats(stats, &acc, "host1", "")
fields := map[string]interface{}{
"stat_write_errs": int64(12345),
"stat_read_reqs": int64(12345),
}
tags := map[string]string{
"aerospike_host": "host1",
"namespace": "_service",
}
acc.AssertContainsTaggedFields(t, "aerospike", fields, tags)
}
func TestReadAerospikeStatsNamespace(t *testing.T) {
var acc testutil.Accumulator
stats := map[string]string{
"stat_write_errs": "12345",
"stat_read_reqs": "12345",
}
readAerospikeStats(stats, &acc, "host1", "test")
fields := map[string]interface{}{
"stat_write_errs": int64(12345),
"stat_read_reqs": int64(12345),
}
tags := map[string]string{
"aerospike_host": "host1",
"namespace": "test",
}
acc.AssertContainsTaggedFields(t, "aerospike", fields, tags)
}
func TestAerospikeUnmarshalList(t *testing.T) {
i := map[string]string{
"test": "one;two;three",
}
expected := []string{"one", "two", "three"}
list, err := unmarshalListInfo(i, "test2")
assert.True(t, err != nil)
list, err = unmarshalListInfo(i, "test")
assert.True(t, err == nil)
equal := true
for ix := range expected {
if list[ix] != expected[ix] {
equal = false
break
}
}
assert.True(t, equal)
}
func TestAerospikeUnmarshalMap(t *testing.T) {
i := map[string]string{
"test": "key1=value1;key2=value2",
}
expected := map[string]string{
"key1": "value1",
"key2": "value2",
}
m, err := unmarshalMapInfo(i, "test")
assert.True(t, err == nil)
assert.True(t, reflect.DeepEqual(m, expected))
}

View File

@ -22,6 +22,7 @@ import (
_ "github.com/influxdata/telegraf/plugins/inputs/filestat" _ "github.com/influxdata/telegraf/plugins/inputs/filestat"
_ "github.com/influxdata/telegraf/plugins/inputs/graylog" _ "github.com/influxdata/telegraf/plugins/inputs/graylog"
_ "github.com/influxdata/telegraf/plugins/inputs/haproxy" _ "github.com/influxdata/telegraf/plugins/inputs/haproxy"
_ "github.com/influxdata/telegraf/plugins/inputs/hddtemp"
_ "github.com/influxdata/telegraf/plugins/inputs/http_response" _ "github.com/influxdata/telegraf/plugins/inputs/http_response"
_ "github.com/influxdata/telegraf/plugins/inputs/httpjson" _ "github.com/influxdata/telegraf/plugins/inputs/httpjson"
_ "github.com/influxdata/telegraf/plugins/inputs/influxdb" _ "github.com/influxdata/telegraf/plugins/inputs/influxdb"

View File

@ -148,7 +148,7 @@ func (c cassandraMetric) addTagsFields(out map[string]interface{}) {
tokens := parseJmxMetricRequest(r.(map[string]interface{})["mbean"].(string))
// Requests with wildcards for keyspace or table names will return nested
// maps in the json response
if (tokens["type"] == "Table" || tokens["type"] == "ColumnFamily") && (tokens["keyspace"] == "*" ||
tokens["scope"] == "*") {
if valuesMap, ok := out["value"]; ok {
for k, v := range valuesMap.(map[string]interface{}) {

View File

@ -33,8 +33,9 @@ KEY1 VAL1\n
### Tags:
Measurements don't have any specific tags unless you define them at the telegraf level (defaults). We
used to have the path listed as a tag, but to keep cardinality in check it's easier to move this
value to a field. Thanks @sebito91!
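A gathered cgroup metric therefore carries the path as a field, roughly like this in line protocol (illustrative values borrowed from the tests):
cgroup memory.limit_in_bytes=223372036854771712i,path="testdata/memory/group_1"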
### Configuration:

View File

@ -56,10 +56,9 @@ func (g *CGroup) gatherDir(dir string, acc telegraf.Accumulator) error {
return err
}
}
fields["path"] = dir
acc.AddFields(metricName, fields, nil)
return nil
}

View File

@ -3,10 +3,13 @@
package cgroup
import (
"fmt"
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"reflect"
)
var cg1 = &CGroup{
@ -21,15 +24,32 @@
},
}
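// assertContainsFields checks that, for the given measurement, each expected
// field set in fieldSet matches one of the metrics recorded by the accumulator.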
func assertContainsFields(a *testutil.Accumulator, t *testing.T, measurement string, fieldSet []map[string]interface{}) {
a.Lock()
defer a.Unlock()
numEquals := 0
for _, p := range a.Metrics {
if p.Measurement == measurement {
for _, fields := range fieldSet {
if reflect.DeepEqual(fields, p.Fields) {
numEquals++
}
}
}
}
if numEquals != len(fieldSet) {
assert.Fail(t, fmt.Sprintf("only %d of %d are equal", numEquals, len(fieldSet)))
}
}
func TestCgroupStatistics_1(t *testing.T) {
var acc testutil.Accumulator
err := cg1.Gather(&acc)
require.NoError(t, err)
fields := map[string]interface{}{
"memory.stat.cache": 1739362304123123123,
"memory.stat.rss": 1775325184,
@ -42,8 +62,9 @@ func TestCgroupStatistics_1(t *testing.T) {
"memory.limit_in_bytes": 223372036854771712,
"memory.use_hierarchy": "12-781",
"notify_on_release": 0,
"path": "testdata/memory",
}
assertContainsFields(&acc, t, "cgroup", []map[string]interface{}{fields})
}
// ======================================================================
@ -59,16 +80,14 @@ func TestCgroupStatistics_2(t *testing.T) {
err := cg2.Gather(&acc)
require.NoError(t, err)
fields := map[string]interface{}{
"cpuacct.usage_percpu.0": -1452543795404,
"cpuacct.usage_percpu.1": 1376681271659,
"cpuacct.usage_percpu.2": 1450950799997,
"cpuacct.usage_percpu.3": -1473113374257,
"path": "testdata/cpu",
}
assertContainsFields(&acc, t, "cgroup", []map[string]interface{}{fields})
}
// ======================================================================
@ -84,18 +103,16 @@ func TestCgroupStatistics_3(t *testing.T) {
err := cg3.Gather(&acc)
require.NoError(t, err)
fields := map[string]interface{}{
"memory.limit_in_bytes": 223372036854771712,
"path": "testdata/memory/group_1",
}
fieldsTwo := map[string]interface{}{
"memory.limit_in_bytes": 223372036854771712,
"path": "testdata/memory/group_2",
}
assertContainsFields(&acc, t, "cgroup", []map[string]interface{}{fields, fieldsTwo})
}
// ======================================================================
@ -111,23 +128,22 @@ func TestCgroupStatistics_4(t *testing.T) {
err := cg4.Gather(&acc)
require.NoError(t, err)
fields := map[string]interface{}{
"memory.limit_in_bytes": 223372036854771712,
"path": "testdata/memory/group_1/group_1_1",
}
fieldsTwo := map[string]interface{}{
"memory.limit_in_bytes": 223372036854771712,
"path": "testdata/memory/group_1/group_1_2",
}
fieldsThree := map[string]interface{}{
"memory.limit_in_bytes": 223372036854771712,
"path": "testdata/memory/group_2",
}
assertContainsFields(&acc, t, "cgroup", []map[string]interface{}{fields, fieldsTwo, fieldsThree})
}
// ======================================================================
@ -143,18 +159,16 @@ func TestCgroupStatistics_5(t *testing.T) {
err := cg5.Gather(&acc)
require.NoError(t, err)
fields := map[string]interface{}{
"memory.limit_in_bytes": 223372036854771712,
"path": "testdata/memory/group_1/group_1_1",
}
fieldsTwo := map[string]interface{}{
"memory.limit_in_bytes": 223372036854771712,
"path": "testdata/memory/group_2/group_1_1",
}
assertContainsFields(&acc, t, "cgroup", []map[string]interface{}{fields, fieldsTwo})
}
// ======================================================================
@ -170,13 +184,11 @@ func TestCgroupStatistics_6(t *testing.T) {
err := cg6.Gather(&acc)
require.NoError(t, err)
fields := map[string]interface{}{
"memory.usage_in_bytes": 3513667584,
"memory.use_hierarchy": "12-781",
"memory.kmem.limit_in_bytes": 9223372036854771712,
"path": "testdata/memory",
}
assertContainsFields(&acc, t, "cgroup", []map[string]interface{}{fields})
}

View File

@ -3,12 +3,14 @@ package dns_query
import (
"errors"
"fmt"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/miekg/dns"
"net"
"strconv"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/plugins/inputs"
)
type DnsQuery struct {
@ -55,12 +57,12 @@ func (d *DnsQuery) Description() string {
}
func (d *DnsQuery) Gather(acc telegraf.Accumulator) error {
d.setDefaultValues()
errChan := errchan.New(len(d.Domains) * len(d.Servers))
for _, domain := range d.Domains {
for _, server := range d.Servers {
dnsQueryTime, err := d.getDnsQueryTime(domain, server)
errChan.C <- err
tags := map[string]string{
"server": server,
"domain": domain,
@ -72,7 +74,7 @@ func (d *DnsQuery) Gather(acc telegraf.Accumulator) error {
}
}
return errChan.Error()
}
func (d *DnsQuery) setDefaultValues() {
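Several plugins in this commit (aerospike, dns_query, dovecot) replace a single shared outerr variable, which raced across goroutines and kept only the last error, with the new internal/errchan helper. A minimal sketch of such a helper, assuming telegraf's implementation follows this shape (a channel buffered to the expected number of sends, closed and drained once all senders are done):

package errchan

import (
	"fmt"
	"strings"
)

// ErrChan collects errors from up to n concurrent operations.
type ErrChan struct {
	C chan error
}

// New sizes the buffer to n so each sender can write once without blocking.
func New(n int) *ErrChan {
	return &ErrChan{C: make(chan error, n)}
}

// Error closes the channel, drains it, and folds all non-nil errors into one.
// Call it only after every send has completed (e.g. after wg.Wait()).
func (e *ErrChan) Error() error {
	close(e.C)
	var errs []string
	for err := range e.C {
		if err != nil {
			errs = append(errs, err.Error())
		}
	}
	if len(errs) == 0 {
		return nil
	}
	return fmt.Errorf("errors encountered: %s", strings.Join(errs, "; "))
}

Because the buffer is sized to the exact number of gather operations, callers can send every result, nil or not, and let Error() do the filtering.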

View File

@ -25,6 +25,8 @@ type Docker struct {
Endpoint string
ContainerNames []string
Timeout internal.Duration
PerDevice bool `toml:"perdevice"`
Total bool `toml:"total"`
client DockerClient
}
@ -58,6 +60,13 @@ var sampleConfig = `
container_names = []
## Timeout for docker list, info, and stats commands
timeout = "5s"
## Whether to report for each container per-device blkio (8:0, 8:1...) and
## network (eth0, eth1, ...) stats or not
perdevice = true
## Whether to report for each container total blkio and network stats or not
total = false
`
// Description returns input description
@ -207,9 +216,18 @@ func (d *Docker) gatherContainer(
cname = strings.TrimPrefix(container.Names[0], "/")
}
// the image name sometimes has a version part.
// ie, rabbitmq:3-management
imageParts := strings.Split(container.Image, ":")
imageName := imageParts[0]
imageVersion := "unknown"
if len(imageParts) > 1 {
imageVersion = imageParts[1]
}
tags := map[string]string{
"container_name": cname,
"container_image": imageName,
"container_version": imageVersion,
}
if len(d.ContainerNames) > 0 {
if !sliceContains(cname, d.ContainerNames) {
@ -237,7 +255,7 @@ func (d *Docker) gatherContainer(
tags[k] = label
}
gatherContainerStats(v, acc, tags, container.ID, d.PerDevice, d.Total)
return nil
}
@ -247,6 +265,8 @@ func gatherContainerStats(
acc telegraf.Accumulator,
tags map[string]string,
id string,
perDevice bool,
total bool,
) {
now := stat.Read
@ -314,6 +334,7 @@ func gatherContainerStats(
acc.AddFields("docker_container_cpu", fields, percputags, now) acc.AddFields("docker_container_cpu", fields, percputags, now)
} }
totalNetworkStatMap := make(map[string]interface{})
for network, netstats := range stat.Networks { for network, netstats := range stat.Networks {
netfields := map[string]interface{}{ netfields := map[string]interface{}{
"rx_dropped": netstats.RxDropped, "rx_dropped": netstats.RxDropped,
@ -327,12 +348,35 @@ func gatherContainerStats(
"container_id": id, "container_id": id,
} }
// Create a new network tag dictionary for the "network" tag // Create a new network tag dictionary for the "network" tag
if perDevice {
nettags := copyTags(tags) nettags := copyTags(tags)
nettags["network"] = network nettags["network"] = network
acc.AddFields("docker_container_net", netfields, nettags, now) acc.AddFields("docker_container_net", netfields, nettags, now)
} }
if total {
for field, value := range netfields {
if field == "container_id" {
continue
}
_, ok := totalNetworkStatMap[field]
if ok {
totalNetworkStatMap[field] = totalNetworkStatMap[field].(uint64) + value.(uint64)
} else {
totalNetworkStatMap[field] = value
}
}
}
}
// totalNetworkStatMap could be empty if container is running with --net=host.
if total && len(totalNetworkStatMap) != 0 {
nettags := copyTags(tags)
nettags["network"] = "total"
totalNetworkStatMap["container_id"] = id
acc.AddFields("docker_container_net", totalNetworkStatMap, nettags, now)
}
gatherBlockIOMetrics(stat, acc, tags, now, id, perDevice, total)
}
func calculateMemPercent(stat *types.StatsJSON) float64 {
@ -361,6 +405,8 @@ func gatherBlockIOMetrics(
tags map[string]string,
now time.Time,
id string,
perDevice bool,
total bool,
) {
blkioStats := stat.BlkioStats
// Make a map of devices to their block io stats
@ -422,12 +468,34 @@ func gatherBlockIOMetrics(
deviceStatMap[device]["sectors_recursive"] = metric.Value
}
totalStatMap := make(map[string]interface{})
for device, fields := range deviceStatMap {
fields["container_id"] = id
if perDevice {
iotags := copyTags(tags)
iotags["device"] = device
acc.AddFields("docker_container_blkio", fields, iotags, now)
}
if total {
for field, value := range fields {
if field == "container_id" {
continue
}
_, ok := totalStatMap[field]
if ok {
totalStatMap[field] = totalStatMap[field].(uint64) + value.(uint64)
} else {
totalStatMap[field] = value
}
}
}
}
if total {
totalStatMap["container_id"] = id
iotags := copyTags(tags)
iotags["device"] = "total"
acc.AddFields("docker_container_blkio", totalStatMap, iotags, now)
}
}
func copyTags(in map[string]string) map[string]string {
@ -471,6 +539,7 @@ func parseSize(sizeStr string) (int64, error) {
func init() {
inputs.Add("docker", func() telegraf.Input {
return &Docker{
PerDevice: true,
Timeout: internal.Duration{Duration: time.Second * 5},
}
})

View File

@ -24,7 +24,7 @@ func TestDockerGatherContainerStats(t *testing.T) {
"container_name": "redis", "container_name": "redis",
"container_image": "redis/image", "container_image": "redis/image",
} }
gatherContainerStats(stats, &acc, tags, "123456789") gatherContainerStats(stats, &acc, tags, "123456789", true, true)
// test docker_container_net measurement // test docker_container_net measurement
netfields := map[string]interface{}{ netfields := map[string]interface{}{
@ -42,6 +42,21 @@ func TestDockerGatherContainerStats(t *testing.T) {
nettags["network"] = "eth0" nettags["network"] = "eth0"
acc.AssertContainsTaggedFields(t, "docker_container_net", netfields, nettags) acc.AssertContainsTaggedFields(t, "docker_container_net", netfields, nettags)
netfields = map[string]interface{}{
"rx_dropped": uint64(6),
"rx_bytes": uint64(8),
"rx_errors": uint64(10),
"tx_packets": uint64(12),
"tx_dropped": uint64(6),
"rx_packets": uint64(8),
"tx_errors": uint64(10),
"tx_bytes": uint64(12),
"container_id": "123456789",
}
nettags = copyTags(tags)
nettags["network"] = "total"
acc.AssertContainsTaggedFields(t, "docker_container_net", netfields, nettags)
// test docker_blkio measurement
blkiotags := copyTags(tags)
blkiotags["device"] = "6:0"
@ -52,6 +67,15 @@ func TestDockerGatherContainerStats(t *testing.T) {
}
acc.AssertContainsTaggedFields(t, "docker_container_blkio", blkiofields, blkiotags)
blkiotags = copyTags(tags)
blkiotags["device"] = "total"
blkiofields = map[string]interface{}{
"io_service_bytes_recursive_read": uint64(100),
"io_serviced_recursive_write": uint64(302),
"container_id": "123456789",
}
acc.AssertContainsTaggedFields(t, "docker_container_blkio", blkiofields, blkiotags)
// test docker_container_mem measurement // test docker_container_mem measurement
memfields := map[string]interface{}{ memfields := map[string]interface{}{
"max_usage": uint64(1001), "max_usage": uint64(1001),
@ -186,6 +210,17 @@ func testStats() *types.StatsJSON {
TxBytes: 4, TxBytes: 4,
} }
stats.Networks["eth1"] = types.NetworkStats{
RxDropped: 5,
RxBytes: 6,
RxErrors: 7,
TxPackets: 8,
TxDropped: 5,
RxPackets: 6,
TxErrors: 7,
TxBytes: 8,
}
sbr := types.BlkioStatEntry{ sbr := types.BlkioStatEntry{
Major: 6, Major: 6,
Minor: 0, Minor: 0,
@ -198,11 +233,19 @@ func testStats() *types.StatsJSON {
Op: "write", Op: "write",
Value: 101, Value: 101,
} }
sr2 := types.BlkioStatEntry{
Major: 6,
Minor: 1,
Op: "write",
Value: 201,
}
stats.BlkioStats.IoServiceBytesRecursive = append( stats.BlkioStats.IoServiceBytesRecursive = append(
stats.BlkioStats.IoServiceBytesRecursive, sbr) stats.BlkioStats.IoServiceBytesRecursive, sbr)
stats.BlkioStats.IoServicedRecursive = append( stats.BlkioStats.IoServicedRecursive = append(
stats.BlkioStats.IoServicedRecursive, sr) stats.BlkioStats.IoServicedRecursive, sr)
stats.BlkioStats.IoServicedRecursive = append(
stats.BlkioStats.IoServicedRecursive, sr2)
return stats return stats
} }
@ -379,8 +422,9 @@ func TestDockerGatherInfo(t *testing.T) {
}, },
map[string]string{ map[string]string{
"container_name": "etcd2", "container_name": "etcd2",
"container_image": "quay.io/coreos/etcd:v2.2.2", "container_image": "quay.io/coreos/etcd",
"cpu": "cpu3", "cpu": "cpu3",
"container_version": "v2.2.2",
}, },
) )
acc.AssertContainsTaggedFields(t, acc.AssertContainsTaggedFields(t,
@ -424,7 +468,8 @@ func TestDockerGatherInfo(t *testing.T) {
}, },
map[string]string{ map[string]string{
"container_name": "etcd2", "container_name": "etcd2",
"container_image": "quay.io/coreos/etcd:v2.2.2", "container_image": "quay.io/coreos/etcd",
"container_version": "v2.2.2",
}, },
) )


@ -12,6 +12,7 @@ import (
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
) )
@ -51,7 +52,6 @@ const defaultPort = "24242"
// Reads stats from all configured servers. // Reads stats from all configured servers.
func (d *Dovecot) Gather(acc telegraf.Accumulator) error { func (d *Dovecot) Gather(acc telegraf.Accumulator) error {
if !validQuery[d.Type] { if !validQuery[d.Type] {
return fmt.Errorf("Error: %s is not a valid query type\n", return fmt.Errorf("Error: %s is not a valid query type\n",
d.Type) d.Type)
@ -61,31 +61,27 @@ func (d *Dovecot) Gather(acc telegraf.Accumulator) error {
d.Servers = append(d.Servers, "127.0.0.1:24242") d.Servers = append(d.Servers, "127.0.0.1:24242")
} }
var wg sync.WaitGroup
var outerr error
if len(d.Filters) <= 0 { if len(d.Filters) <= 0 {
d.Filters = append(d.Filters, "") d.Filters = append(d.Filters, "")
} }
for _, serv := range d.Servers { var wg sync.WaitGroup
errChan := errchan.New(len(d.Servers) * len(d.Filters))
for _, server := range d.Servers {
for _, filter := range d.Filters { for _, filter := range d.Filters {
wg.Add(1) wg.Add(1)
go func(serv string, filter string) { go func(s string, f string) {
defer wg.Done() defer wg.Done()
outerr = d.gatherServer(serv, acc, d.Type, filter) errChan.C <- d.gatherServer(s, acc, d.Type, f)
}(serv, filter) }(server, filter)
} }
} }
wg.Wait() wg.Wait()
return errChan.Error()
return outerr
} }
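Several plugins in this commit (dovecot, memcached, logparser) move to the `internal/errchan` helper. Its implementation is not part of this diff; the sketch below is a hypothetical equivalent inferred from how it is used here: a channel buffered for the expected number of results, drained once into a combined error.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// ErrChan is a hypothetical stand-in for telegraf's internal/errchan:
// each goroutine sends exactly one error (possibly nil) on C, and the
// caller collects a combined error once all sends are done.
type ErrChan struct{ C chan error }

func New(n int) *ErrChan { return &ErrChan{C: make(chan error, n)} }

// Error drains the channel and joins all non-nil errors.
func (e *ErrChan) Error() error {
	close(e.C)
	var msgs []string
	for err := range e.C {
		if err != nil {
			msgs = append(msgs, err.Error())
		}
	}
	if len(msgs) == 0 {
		return nil
	}
	return errors.New(strings.Join(msgs, "; "))
}

func main() {
	ec := New(3)
	ec.C <- nil
	ec.C <- errors.New("server A unreachable")
	ec.C <- nil
	fmt.Println(ec.Error()) // server A unreachable
}
```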
func (d *Dovecot) gatherServer(addr string, acc telegraf.Accumulator, qtype string, filter string) error { func (d *Dovecot) gatherServer(addr string, acc telegraf.Accumulator, qtype string, filter string) error {
_, _, err := net.SplitHostPort(addr) _, _, err := net.SplitHostPort(addr)
if err != nil { if err != nil {
return fmt.Errorf("Error: %s on url %s\n", err, addr) return fmt.Errorf("Error: %s on url %s\n", err, addr)


@ -48,8 +48,6 @@ type Exec struct {
parser parsers.Parser parser parsers.Parser
wg sync.WaitGroup
runner Runner runner Runner
errChan chan error errChan chan error
} }
@ -119,8 +117,8 @@ func (c CommandRunner) Run(
return out.Bytes(), nil return out.Bytes(), nil
} }
func (e *Exec) ProcessCommand(command string, acc telegraf.Accumulator) { func (e *Exec) ProcessCommand(command string, acc telegraf.Accumulator, wg *sync.WaitGroup) {
defer e.wg.Done() defer wg.Done()
out, err := e.runner.Run(e, command, acc) out, err := e.runner.Run(e, command, acc)
if err != nil { if err != nil {
@ -151,6 +149,7 @@ func (e *Exec) SetParser(parser parsers.Parser) {
} }
func (e *Exec) Gather(acc telegraf.Accumulator) error { func (e *Exec) Gather(acc telegraf.Accumulator) error {
var wg sync.WaitGroup
// Legacy single command support // Legacy single command support
if e.Command != "" { if e.Command != "" {
e.Commands = append(e.Commands, e.Command) e.Commands = append(e.Commands, e.Command)
@ -190,11 +189,11 @@ func (e *Exec) Gather(acc telegraf.Accumulator) error {
errChan := errchan.New(len(commands)) errChan := errchan.New(len(commands))
e.errChan = errChan.C e.errChan = errChan.C
e.wg.Add(len(commands)) wg.Add(len(commands))
for _, command := range commands { for _, command := range commands {
go e.ProcessCommand(command, acc) go e.ProcessCommand(command, acc, &wg)
} }
e.wg.Wait() wg.Wait()
return errChan.Error() return errChan.Error()
} }
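Moving the `WaitGroup` off the `Exec` struct and into `Gather` keeps each collection cycle self-contained, so two overlapping cycles cannot corrupt a shared counter. A minimal sketch of the pattern, with a hypothetical `process` worker:

```go
package main

import (
	"fmt"
	"sync"
)

// process receives the WaitGroup explicitly instead of reading it from
// a shared struct field, as ProcessCommand now does.
func process(cmd string, wg *sync.WaitGroup, results chan<- string) {
	defer wg.Done()
	results <- "ran: " + cmd
}

func gather(commands []string) []string {
	var wg sync.WaitGroup // local to this gather cycle
	results := make(chan string, len(commands))
	wg.Add(len(commands))
	for _, c := range commands {
		go process(c, &wg, results)
	}
	wg.Wait()
	close(results)
	var out []string
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() { fmt.Println(gather([]string{"echo a", "echo b"})) }
```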


@ -92,9 +92,11 @@ type haproxy struct {
var sampleConfig = ` var sampleConfig = `
## An array of address to gather stats about. Specify an ip on hostname ## An array of address to gather stats about. Specify an ip on hostname
## with optional port. ie localhost, 10.10.3.33:1936, etc. ## with optional port. ie localhost, 10.10.3.33:1936, etc.
## Make sure you specify the complete path to the stats endpoint
## If no servers are specified, then default to 127.0.0.1:1936 ## ie 10.10.3.33:1936/haproxy?stats
servers = ["http://myhaproxy.com:1936", "http://anotherhaproxy.com:1936"] #
## If no servers are specified, then default to 127.0.0.1:1936/haproxy?stats
servers = ["http://myhaproxy.com:1936/haproxy?stats"]
## Or you can also use local socket ## Or you can also use local socket
## servers = ["socket:/run/haproxy/admin.sock"] ## servers = ["socket:/run/haproxy/admin.sock"]
` `
@ -111,7 +113,7 @@ func (r *haproxy) Description() string {
// Returns one of the errors encountered while gather stats (if any). // Returns one of the errors encountered while gather stats (if any).
func (g *haproxy) Gather(acc telegraf.Accumulator) error { func (g *haproxy) Gather(acc telegraf.Accumulator) error {
if len(g.Servers) == 0 { if len(g.Servers) == 0 {
return g.gatherServer("http://127.0.0.1:1936", acc) return g.gatherServer("http://127.0.0.1:1936/haproxy?stats", acc)
} }
var wg sync.WaitGroup var wg sync.WaitGroup
@ -167,12 +169,16 @@ func (g *haproxy) gatherServer(addr string, acc telegraf.Accumulator) error {
g.client = client g.client = client
} }
if !strings.HasSuffix(addr, ";csv") {
addr += "/;csv"
}
u, err := url.Parse(addr) u, err := url.Parse(addr)
if err != nil { if err != nil {
return fmt.Errorf("Unable parse server address '%s': %s", addr, err) return fmt.Errorf("Unable parse server address '%s': %s", addr, err)
} }
req, err := http.NewRequest("GET", fmt.Sprintf("%s://%s%s/;csv", u.Scheme, u.Host, u.Path), nil) req, err := http.NewRequest("GET", addr, nil)
if u.User != nil { if u.User != nil {
p, _ := u.User.Password() p, _ := u.User.Password()
req.SetBasicAuth(u.User.Username(), p) req.SetBasicAuth(u.User.Username(), p)
@ -184,7 +190,7 @@ func (g *haproxy) gatherServer(addr string, acc telegraf.Accumulator) error {
} }
if res.StatusCode != 200 { if res.StatusCode != 200 {
return fmt.Errorf("Unable to get valid stat result from '%s': %s", addr, err) return fmt.Errorf("Unable to get valid stat result from '%s', http response code : %d", addr, res.StatusCode)
} }
return importCsvResult(res.Body, acc, u.Host) return importCsvResult(res.Body, acc, u.Host)


@ -243,7 +243,7 @@ func TestHaproxyDefaultGetFromLocalhost(t *testing.T) {
err := r.Gather(&acc) err := r.Gather(&acc)
require.Error(t, err) require.Error(t, err)
assert.Contains(t, err.Error(), "127.0.0.1:1936/;csv") assert.Contains(t, err.Error(), "127.0.0.1:1936/haproxy?stats/;csv")
} }
const csvOutputSample = ` const csvOutputSample = `


@ -0,0 +1,22 @@
# Hddtemp Input Plugin
This plugin reads data from the hddtemp daemon.
## Requirements
Hddtemp should be installed and its daemon running.
## Configuration
```
[[inputs.hddtemp]]
## By default, telegraf gathers temperature data from all disks detected by
## the hddtemp daemon.
##
## Only collect temps from the selected disks.
##
## A * as the device name will return the temperature values of all disks.
##
# address = "127.0.0.1:7634"
# devices = ["sda", "*"]
```
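The daemon answers every TCP connection with a single pipe-delimited string, five fields per disk (empty leader, device path, model, temperature, unit), which is the layout the parser added in this commit assumes. A minimal sketch of reading it directly:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net"
	"strings"
)

func main() {
	// Assumes a local hddtemp daemon on the default port.
	conn, err := net.Dial("tcp", "127.0.0.1:7634")
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	raw, err := ioutil.ReadAll(conn) // e.g. "|/dev/sda|ST380011A|46|C|"
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	fields := strings.Split(string(raw), "|")
	for i := 0; i < len(fields)/5; i++ {
		off := i * 5
		fmt.Printf("device=%s model=%s temp=%s%s\n",
			fields[off+1], fields[off+2], fields[off+3], fields[off+4])
	}
}
```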


@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2016 Mendelson Gusmão
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@ -0,0 +1,61 @@
package hddtemp
import (
"bytes"
"io"
"net"
"strconv"
"strings"
)
type disk struct {
DeviceName string
Model string
Temperature int32
Unit string
Status string
}
func Fetch(address string) ([]disk, error) {
var (
err error
conn net.Conn
buffer bytes.Buffer
disks []disk
)
if conn, err = net.Dial("tcp", address); err != nil {
return nil, err
}
if _, err = io.Copy(&buffer, conn); err != nil {
return nil, err
}
fields := strings.Split(buffer.String(), "|")
for index := 0; index < len(fields)/5; index++ {
status := ""
offset := index * 5
device := fields[offset+1]
device = device[strings.LastIndex(device, "/")+1:]
temperatureField := fields[offset+3]
temperature, err := strconv.ParseInt(temperatureField, 10, 32)
if err != nil {
temperature = 0
status = temperatureField
}
disks = append(disks, disk{
DeviceName: device,
Model: fields[offset+2],
Temperature: int32(temperature),
Unit: fields[offset+4],
Status: status,
})
}
return disks, nil
}


@ -0,0 +1,116 @@
package hddtemp
import (
"net"
"reflect"
"testing"
)
func TestFetch(t *testing.T) {
l := serve(t, []byte("|/dev/sda|foobar|36|C|"))
defer l.Close()
disks, err := Fetch(l.Addr().String())
if err != nil {
t.Error("expecting err to be nil")
}
expected := []disk{
{
DeviceName: "sda",
Model: "foobar",
Temperature: 36,
Unit: "C",
},
}
if !reflect.DeepEqual(expected, disks) {
t.Error("disks' slice is different from expected")
}
}
func TestFetchWrongAddress(t *testing.T) {
_, err := Fetch("127.0.0.1:1")
if err == nil {
t.Error("expecting err to be non-nil")
}
}
func TestFetchStatus(t *testing.T) {
l := serve(t, []byte("|/dev/sda|foobar|SLP|C|"))
defer l.Close()
disks, err := Fetch(l.Addr().String())
if err != nil {
t.Error("expecting err to be nil")
}
expected := []disk{
{
DeviceName: "sda",
Model: "foobar",
Temperature: 0,
Unit: "C",
Status: "SLP",
},
}
if !reflect.DeepEqual(expected, disks) {
t.Error("disks' slice is different from expected")
}
}
func TestFetchTwoDisks(t *testing.T) {
l := serve(t, []byte("|/dev/hda|ST380011A|46|C||/dev/hdd|ST340016A|SLP|*|"))
defer l.Close()
disks, err := Fetch(l.Addr().String())
if err != nil {
t.Error("expecting err to be nil")
}
expected := []disk{
{
DeviceName: "hda",
Model: "ST380011A",
Temperature: 46,
Unit: "C",
},
{
DeviceName: "hdd",
Model: "ST340016A",
Temperature: 0,
Unit: "*",
Status: "SLP",
},
}
if !reflect.DeepEqual(expected, disks) {
t.Error("disks' slice is different from expected")
}
}
func serve(t *testing.T, data []byte) net.Listener {
l, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil {
t.Fatal(err)
}
go func(t *testing.T) {
conn, err := l.Accept()
if err != nil {
t.Fatal(err)
}
conn.Write(data)
conn.Close()
}(t)
return l
}


@ -0,0 +1,74 @@
// +build linux
package hddtemp
import (
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
gohddtemp "github.com/influxdata/telegraf/plugins/inputs/hddtemp/go-hddtemp"
)
const defaultAddress = "127.0.0.1:7634"
type HDDTemp struct {
Address string
Devices []string
}
func (_ *HDDTemp) Description() string {
return "Monitor disks' temperatures using hddtemp"
}
var hddtempSampleConfig = `
## By default, telegraf gathers temperature data from all disks detected by
## the hddtemp daemon.
##
## Only collect temps from the selected disks.
##
## A * as the device name will return the temperature values of all disks.
##
# address = "127.0.0.1:7634"
# devices = ["sda", "*"]
`
func (_ *HDDTemp) SampleConfig() string {
return hddtempSampleConfig
}
func (h *HDDTemp) Gather(acc telegraf.Accumulator) error {
disks, err := gohddtemp.Fetch(h.Address)
if err != nil {
return err
}
for _, disk := range disks {
for _, chosenDevice := range h.Devices {
if chosenDevice == "*" || chosenDevice == disk.DeviceName {
tags := map[string]string{
"device": disk.DeviceName,
"model": disk.Model,
"unit": disk.Unit,
"status": disk.Status,
}
fields := map[string]interface{}{
disk.DeviceName: disk.Temperature,
}
acc.AddFields("hddtemp", fields, tags)
}
}
}
return nil
}
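Device selection in `Gather` is a plain wildcard-or-exact match against the configured list; a minimal sketch of the same check:

```go
package main

import "fmt"

// selected reports whether a disk should be collected given the
// configured device list; "*" matches every disk.
func selected(device string, devices []string) bool {
	for _, d := range devices {
		if d == "*" || d == device {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(selected("sda", []string{"*"}))          // true
	fmt.Println(selected("sdb", []string{"sda", "sdc"})) // false
}
```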
func init() {
inputs.Add("hddtemp", func() telegraf.Input {
return &HDDTemp{
Address: defaultAddress,
Devices: []string{"*"},
}
})
}


@ -0,0 +1,3 @@
// +build !linux
package hddtemp


@ -249,7 +249,14 @@ func (j *Jolokia) Gather(acc telegraf.Accumulator) error {
switch t := values.(type) { switch t := values.(type) {
case map[string]interface{}: case map[string]interface{}:
for k, v := range t { for k, v := range t {
fields[measurement+"_"+k] = v switch t2 := v.(type) {
case map[string]interface{}:
for k2, v2 := range t2 {
fields[measurement+"_"+k+"_"+k2] = v2
}
case interface{}:
fields[measurement+"_"+k] = t2
}
} }
case interface{}: case interface{}:
fields[measurement] = t fields[measurement] = t
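The extra `switch` handles exactly one additional level of nesting in a Jolokia response, joining key segments with `_`. A standalone sketch of that flattening, assuming string-keyed maps as in the plugin:

```go
package main

import "fmt"

// flatten copies one level of nested Jolokia values into a flat fields
// map, as the plugin now does for multi-dimensional attributes.
func flatten(measurement string, values interface{}, fields map[string]interface{}) {
	switch t := values.(type) {
	case map[string]interface{}:
		for k, v := range t {
			switch t2 := v.(type) {
			case map[string]interface{}:
				for k2, v2 := range t2 {
					fields[measurement+"_"+k+"_"+k2] = v2
				}
			default:
				fields[measurement+"_"+k] = t2
			}
		}
	default:
		fields[measurement] = t
	}
}

func main() {
	fields := map[string]interface{}{}
	flatten("heap", map[string]interface{}{
		"used":      map[string]interface{}{"init": 1, "max": 4},
		"committed": 2,
	}, fields)
	fmt.Println(fields) // map[heap_committed:2 heap_used_init:1 heap_used_max:4]
}
```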


@ -32,6 +32,8 @@ regex patterns.
''' '''
``` ```
> **Note:** The InfluxDB log pattern in the default configuration only works for Influx versions 1.0.0-beta1 or higher.
## Grok Parser ## Grok Parser
The grok parser uses a slightly modified version of logstash "grok" patterns, The grok parser uses a slightly modified version of logstash "grok" patterns,


@ -54,8 +54,14 @@ var (
type Parser struct { type Parser struct {
Patterns []string Patterns []string
// namedPatterns is a list of internally-assigned names to the patterns
// specified by the user in Patterns.
// They will look like:
// GROK_INTERNAL_PATTERN_0, GROK_INTERNAL_PATTERN_1, etc.
namedPatterns []string
CustomPatterns string CustomPatterns string
CustomPatternFiles []string CustomPatternFiles []string
Measurement string
// typeMap is a map of patterns -> capture name -> modifier, // typeMap is a map of patterns -> capture name -> modifier,
// ie, { // ie, {
@ -97,13 +103,24 @@ func (p *Parser) Compile() error {
return err return err
} }
p.CustomPatterns = DEFAULT_PATTERNS + p.CustomPatterns // Give Patterns fake names so that they can be treated as named
// "custom patterns"
p.namedPatterns = make([]string, len(p.Patterns))
for i, pattern := range p.Patterns {
name := fmt.Sprintf("GROK_INTERNAL_PATTERN_%d", i)
p.CustomPatterns += "\n" + name + " " + pattern + "\n"
p.namedPatterns[i] = "%{" + name + "}"
}
// Combine user-supplied CustomPatterns with DEFAULT_PATTERNS and parse
// them together as the same type of pattern.
p.CustomPatterns = DEFAULT_PATTERNS + p.CustomPatterns
if len(p.CustomPatterns) != 0 { if len(p.CustomPatterns) != 0 {
scanner := bufio.NewScanner(strings.NewReader(p.CustomPatterns)) scanner := bufio.NewScanner(strings.NewReader(p.CustomPatterns))
p.addCustomPatterns(scanner) p.addCustomPatterns(scanner)
} }
// Parse any custom pattern files supplied.
for _, filename := range p.CustomPatternFiles { for _, filename := range p.CustomPatternFiles {
file, err := os.Open(filename) file, err := os.Open(filename)
if err != nil { if err != nil {
@ -114,6 +131,10 @@ func (p *Parser) Compile() error {
p.addCustomPatterns(scanner) p.addCustomPatterns(scanner)
} }
if p.Measurement == "" {
p.Measurement = "logparser_grok"
}
return p.compileCustomPatterns() return p.compileCustomPatterns()
} }
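The renaming step gives each raw top-level pattern an internal name so it can be compiled together with the user's custom patterns. A minimal sketch of just that transformation:

```go
package main

import "fmt"

// namePatterns wraps each raw pattern as an internally named custom
// pattern, mirroring the GROK_INTERNAL_PATTERN_N scheme above.
func namePatterns(patterns []string) (custom string, named []string) {
	named = make([]string, len(patterns))
	for i, pattern := range patterns {
		name := fmt.Sprintf("GROK_INTERNAL_PATTERN_%d", i)
		custom += "\n" + name + " " + pattern + "\n"
		named[i] = "%{" + name + "}"
	}
	return custom, named
}

func main() {
	custom, named := namePatterns([]string{"%{COMMON_LOG_FORMAT}"})
	fmt.Println(named) // [%{GROK_INTERNAL_PATTERN_0}]
	fmt.Print(custom)  // GROK_INTERNAL_PATTERN_0 %{COMMON_LOG_FORMAT}
}
```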
@ -122,7 +143,7 @@ func (p *Parser) ParseLine(line string) (telegraf.Metric, error) {
var values map[string]string var values map[string]string
// the matching pattern string // the matching pattern string
var patternName string var patternName string
for _, pattern := range p.Patterns { for _, pattern := range p.namedPatterns {
if values, err = p.g.Parse(pattern, line); err != nil { if values, err = p.g.Parse(pattern, line); err != nil {
return nil, err return nil, err
} }
@ -215,7 +236,7 @@ func (p *Parser) ParseLine(line string) (telegraf.Metric, error) {
} }
} }
return telegraf.NewMetric("logparser_grok", tags, fields, p.tsModder.tsMod(timestamp)) return telegraf.NewMetric(p.Measurement, tags, fields, p.tsModder.tsMod(timestamp))
} }
func (p *Parser) addCustomPatterns(scanner *bufio.Scanner) { func (p *Parser) addCustomPatterns(scanner *bufio.Scanner) {


@ -83,6 +83,31 @@ func Benchmark_ParseLine_CustomPattern(b *testing.B) {
benchM = m benchM = m
} }
func TestMeasurementName(t *testing.T) {
p := &Parser{
Measurement: "my_web_log",
Patterns: []string{"%{COMMON_LOG_FORMAT}"},
}
assert.NoError(t, p.Compile())
// Parse an influxdb POST request
m, err := p.ParseLine(`127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326`)
require.NotNil(t, m)
assert.NoError(t, err)
assert.Equal(t,
map[string]interface{}{
"resp_bytes": int64(2326),
"auth": "frank",
"client_ip": "127.0.0.1",
"http_version": float64(1.0),
"ident": "user-identifier",
"request": "/apache_pb.gif",
},
m.Fields())
assert.Equal(t, map[string]string{"verb": "GET", "resp_code": "200"}, m.Tags())
assert.Equal(t, "my_web_log", m.Name())
}
func TestBuiltinInfluxdbHttpd(t *testing.T) { func TestBuiltinInfluxdbHttpd(t *testing.T) {
p := &Parser{ p := &Parser{
Patterns: []string{"%{INFLUXDB_HTTPD_LOG}"}, Patterns: []string{"%{INFLUXDB_HTTPD_LOG}"},
@ -98,7 +123,6 @@ func TestBuiltinInfluxdbHttpd(t *testing.T) {
"resp_bytes": int64(0), "resp_bytes": int64(0),
"auth": "-", "auth": "-",
"client_ip": "::1", "client_ip": "::1",
"resp_code": int64(204),
"http_version": float64(1.1), "http_version": float64(1.1),
"ident": "-", "ident": "-",
"referrer": "-", "referrer": "-",
@ -107,7 +131,7 @@ func TestBuiltinInfluxdbHttpd(t *testing.T) {
"agent": "InfluxDBClient", "agent": "InfluxDBClient",
}, },
m.Fields()) m.Fields())
assert.Equal(t, map[string]string{"verb": "POST"}, m.Tags()) assert.Equal(t, map[string]string{"verb": "POST", "resp_code": "204"}, m.Tags())
// Parse an influxdb GET request // Parse an influxdb GET request
m, err = p.ParseLine(`[httpd] ::1 - - [14/Jun/2016:12:10:02 +0100] "GET /query?db=telegraf&q=SELECT+bytes%2Cresponse_time_us+FROM+logparser_grok+WHERE+http_method+%3D+%27GET%27+AND+response_time_us+%3E+0+AND+time+%3E+now%28%29+-+1h HTTP/1.1" 200 578 "http://localhost:8083/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.84 Safari/537.36" 8a3806f1-3220-11e6-8006-000000000000 988`) m, err = p.ParseLine(`[httpd] ::1 - - [14/Jun/2016:12:10:02 +0100] "GET /query?db=telegraf&q=SELECT+bytes%2Cresponse_time_us+FROM+logparser_grok+WHERE+http_method+%3D+%27GET%27+AND+response_time_us+%3E+0+AND+time+%3E+now%28%29+-+1h HTTP/1.1" 200 578 "http://localhost:8083/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.84 Safari/537.36" 8a3806f1-3220-11e6-8006-000000000000 988`)
@ -118,7 +142,6 @@ func TestBuiltinInfluxdbHttpd(t *testing.T) {
"resp_bytes": int64(578), "resp_bytes": int64(578),
"auth": "-", "auth": "-",
"client_ip": "::1", "client_ip": "::1",
"resp_code": int64(200),
"http_version": float64(1.1), "http_version": float64(1.1),
"ident": "-", "ident": "-",
"referrer": "http://localhost:8083/", "referrer": "http://localhost:8083/",
@ -127,7 +150,7 @@ func TestBuiltinInfluxdbHttpd(t *testing.T) {
"agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.84 Safari/537.36", "agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.84 Safari/537.36",
}, },
m.Fields()) m.Fields())
assert.Equal(t, map[string]string{"verb": "GET"}, m.Tags()) assert.Equal(t, map[string]string{"verb": "GET", "resp_code": "200"}, m.Tags())
} }
// common log format // common log format
@ -147,13 +170,12 @@ func TestBuiltinCommonLogFormat(t *testing.T) {
"resp_bytes": int64(2326), "resp_bytes": int64(2326),
"auth": "frank", "auth": "frank",
"client_ip": "127.0.0.1", "client_ip": "127.0.0.1",
"resp_code": int64(200),
"http_version": float64(1.0), "http_version": float64(1.0),
"ident": "user-identifier", "ident": "user-identifier",
"request": "/apache_pb.gif", "request": "/apache_pb.gif",
}, },
m.Fields()) m.Fields())
assert.Equal(t, map[string]string{"verb": "GET"}, m.Tags()) assert.Equal(t, map[string]string{"verb": "GET", "resp_code": "200"}, m.Tags())
} }
// combined log format // combined log format
@ -173,7 +195,6 @@ func TestBuiltinCombinedLogFormat(t *testing.T) {
"resp_bytes": int64(2326), "resp_bytes": int64(2326),
"auth": "frank", "auth": "frank",
"client_ip": "127.0.0.1", "client_ip": "127.0.0.1",
"resp_code": int64(200),
"http_version": float64(1.0), "http_version": float64(1.0),
"ident": "user-identifier", "ident": "user-identifier",
"request": "/apache_pb.gif", "request": "/apache_pb.gif",
@ -181,12 +202,12 @@ func TestBuiltinCombinedLogFormat(t *testing.T) {
"agent": "Mozilla", "agent": "Mozilla",
}, },
m.Fields()) m.Fields())
assert.Equal(t, map[string]string{"verb": "GET"}, m.Tags()) assert.Equal(t, map[string]string{"verb": "GET", "resp_code": "200"}, m.Tags())
} }
func TestCompileStringAndParse(t *testing.T) { func TestCompileStringAndParse(t *testing.T) {
p := &Parser{ p := &Parser{
Patterns: []string{"%{TEST_LOG_A}", "%{TEST_LOG_B}"}, Patterns: []string{"%{TEST_LOG_A}"},
CustomPatterns: ` CustomPatterns: `
DURATION %{NUMBER}[nuµm]?s DURATION %{NUMBER}[nuµm]?s
RESPONSE_CODE %{NUMBER:response_code:tag} RESPONSE_CODE %{NUMBER:response_code:tag}
@ -209,6 +230,41 @@ func TestCompileStringAndParse(t *testing.T) {
assert.Equal(t, map[string]string{"response_code": "200"}, metricA.Tags()) assert.Equal(t, map[string]string{"response_code": "200"}, metricA.Tags())
} }
func TestCompileErrorsOnInvalidPattern(t *testing.T) {
p := &Parser{
Patterns: []string{"%{TEST_LOG_A}", "%{TEST_LOG_B}"},
CustomPatterns: `
DURATION %{NUMBER}[nuµm]?s
RESPONSE_CODE %{NUMBER:response_code:tag}
RESPONSE_TIME %{DURATION:response_time:duration}
TEST_LOG_A %{NUMBER:myfloat:float} %{RESPONSE_CODE} %{IPORHOST:clientip} %{RESPONSE_TIME}
`,
}
assert.Error(t, p.Compile())
metricA, _ := p.ParseLine(`1.25 200 192.168.1.1 5.432µs`)
require.Nil(t, metricA)
}
func TestParsePatternsWithoutCustom(t *testing.T) {
p := &Parser{
Patterns: []string{"%{POSINT:ts:ts-epochnano} response_time=%{POSINT:response_time:int} mymetric=%{NUMBER:metric:float}"},
}
assert.NoError(t, p.Compile())
metricA, err := p.ParseLine(`1466004605359052000 response_time=20821 mymetric=10890.645`)
require.NotNil(t, metricA)
assert.NoError(t, err)
assert.Equal(t,
map[string]interface{}{
"response_time": int64(20821),
"metric": float64(10890.645),
},
metricA.Fields())
assert.Equal(t, map[string]string{}, metricA.Tags())
assert.Equal(t, time.Unix(0, 1466004605359052000), metricA.Time())
}
func TestParseEpochNano(t *testing.T) { func TestParseEpochNano(t *testing.T) {
p := &Parser{ p := &Parser{
Patterns: []string{"%{MYAPP}"}, Patterns: []string{"%{MYAPP}"},
@ -392,7 +448,7 @@ func TestParseErrors(t *testing.T) {
TEST_LOG_A %{HTTPDATE:ts:ts-httpd} %{WORD:myword:int} %{} TEST_LOG_A %{HTTPDATE:ts:ts-httpd} %{WORD:myword:int} %{}
`, `,
} }
assert.NoError(t, p.Compile()) assert.Error(t, p.Compile())
_, err := p.ParseLine(`[04/Jun/2016:12:41:45 +0100] notnumber 200 192.168.1.1 5.432µs 101`) _, err := p.ParseLine(`[04/Jun/2016:12:41:45 +0100] notnumber 200 192.168.1.1 5.432µs 101`)
assert.Error(t, err) assert.Error(t, err)


@ -66,7 +66,7 @@ INFLUXDB_HTTPD_LOG \[httpd\] %{COMBINED_LOG_FORMAT} %{UUID:uuid:drop} %{NUMBER:r
# apache & nginx logs, this is also known as the "common log format" # apache & nginx logs, this is also known as the "common log format"
# see https://en.wikipedia.org/wiki/Common_Log_Format # see https://en.wikipedia.org/wiki/Common_Log_Format
COMMON_LOG_FORMAT %{CLIENT:client_ip} %{NGUSER:ident} %{NGUSER:auth} \[%{HTTPDATE:ts:ts-httpd}\] "(?:%{WORD:verb:tag} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http_version:float})?|%{DATA})" %{NUMBER:resp_code:int} (?:%{NUMBER:resp_bytes:int}|-) COMMON_LOG_FORMAT %{CLIENT:client_ip} %{NGUSER:ident} %{NGUSER:auth} \[%{HTTPDATE:ts:ts-httpd}\] "(?:%{WORD:verb:tag} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http_version:float})?|%{DATA})" %{NUMBER:resp_code:tag} (?:%{NUMBER:resp_bytes:int}|-)
# Combined log format is the same as the common log format but with the addition # Combined log format is the same as the common log format but with the addition
# of two quoted strings at the end for "referrer" and "agent" # of two quoted strings at the end for "referrer" and "agent"


@ -62,7 +62,7 @@ INFLUXDB_HTTPD_LOG \[httpd\] %{COMBINED_LOG_FORMAT} %{UUID:uuid:drop} %{NUMBER:r
# apache & nginx logs, this is also known as the "common log format" # apache & nginx logs, this is also known as the "common log format"
# see https://en.wikipedia.org/wiki/Common_Log_Format # see https://en.wikipedia.org/wiki/Common_Log_Format
COMMON_LOG_FORMAT %{CLIENT:client_ip} %{NGUSER:ident} %{NGUSER:auth} \[%{HTTPDATE:ts:ts-httpd}\] "(?:%{WORD:verb:tag} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http_version:float})?|%{DATA})" %{NUMBER:resp_code:int} (?:%{NUMBER:resp_bytes:int}|-) COMMON_LOG_FORMAT %{CLIENT:client_ip} %{NGUSER:ident} %{NGUSER:auth} \[%{HTTPDATE:ts:ts-httpd}\] "(?:%{WORD:verb:tag} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http_version:float})?|%{DATA})" %{NUMBER:resp_code:tag} (?:%{NUMBER:resp_bytes:int}|-)
# Combined log format is the same as the common log format but with the addition # Combined log format is the same as the common log format but with the addition
# of two quoted strings at the end for "referrer" and "agent" # of two quoted strings at the end for "referrer" and "agent"


@ -9,6 +9,7 @@ import (
"github.com/hpcloud/tail" "github.com/hpcloud/tail"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/internal/globpath" "github.com/influxdata/telegraf/internal/globpath"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
@ -58,6 +59,8 @@ const sampleConfig = `
## %{COMMON_LOG_FORMAT} (plain apache & nginx access logs) ## %{COMMON_LOG_FORMAT} (plain apache & nginx access logs)
## %{COMBINED_LOG_FORMAT} (access logs + referrer & agent) ## %{COMBINED_LOG_FORMAT} (access logs + referrer & agent)
patterns = ["%{INFLUXDB_HTTPD_LOG}"] patterns = ["%{INFLUXDB_HTTPD_LOG}"]
## Name of the outputted measurement name.
measurement = "influxdb_log"
## Full path(s) to custom pattern files. ## Full path(s) to custom pattern files.
custom_pattern_files = [] custom_pattern_files = []
## Custom patterns can also be defined here. Put one pattern per line. ## Custom patterns can also be defined here. Put one pattern per line.
@ -108,11 +111,15 @@ func (l *LogParserPlugin) Start(acc telegraf.Accumulator) error {
} }
// compile log parser patterns: // compile log parser patterns:
errChan := errchan.New(len(l.parsers))
for _, parser := range l.parsers { for _, parser := range l.parsers {
if err := parser.Compile(); err != nil { if err := parser.Compile(); err != nil {
return err errChan.C <- err
} }
} }
if err := errChan.Error(); err != nil {
return err
}
var seek tail.SeekInfo var seek tail.SeekInfo
if !l.FromBeginning { if !l.FromBeginning {
@ -123,24 +130,25 @@ func (l *LogParserPlugin) Start(acc telegraf.Accumulator) error {
l.wg.Add(1) l.wg.Add(1)
go l.parser() go l.parser()
var errS string
// Create a "tailer" for each file // Create a "tailer" for each file
for _, filepath := range l.Files { for _, filepath := range l.Files {
g, err := globpath.Compile(filepath) g, err := globpath.Compile(filepath)
if err != nil { if err != nil {
log.Printf("ERROR Glob %s failed to compile, %s", filepath, err) log.Printf("ERROR Glob %s failed to compile, %s", filepath, err)
continue
} }
for file, _ := range g.Match() { files := g.Match()
errChan = errchan.New(len(files))
for file, _ := range files {
tailer, err := tail.TailFile(file, tailer, err := tail.TailFile(file,
tail.Config{ tail.Config{
ReOpen: true, ReOpen: true,
Follow: true, Follow: true,
Location: &seek, Location: &seek,
MustExist: true,
}) })
if err != nil { errChan.C <- err
errS += err.Error() + " "
continue
}
// create a goroutine for each "tailer" // create a goroutine for each "tailer"
l.wg.Add(1) l.wg.Add(1)
go l.receiver(tailer) go l.receiver(tailer)
@ -148,10 +156,7 @@ func (l *LogParserPlugin) Start(acc telegraf.Accumulator) error {
} }
} }
if errS != "" { return errChan.Error()
return fmt.Errorf(errS)
}
return nil
} }
// receiver is launched as a goroutine to continuously watch a tailed logfile // receiver is launched as a goroutine to continuously watch a tailed logfile
@ -199,8 +204,6 @@ func (l *LogParserPlugin) parser() {
if m != nil { if m != nil {
l.acc.AddFields(m.Name(), m.Fields(), m.Tags(), m.Time()) l.acc.AddFields(m.Name(), m.Fields(), m.Tags(), m.Time())
} }
} else {
log.Printf("Malformed log line in [%s], Error: %s\n", line, err)
} }
} }
} }


@ -37,7 +37,7 @@ func TestGrokParseLogFilesNonExistPattern(t *testing.T) {
} }
acc := testutil.Accumulator{} acc := testutil.Accumulator{}
assert.NoError(t, logparser.Start(&acc)) assert.Error(t, logparser.Start(&acc))
time.Sleep(time.Millisecond * 500) time.Sleep(time.Millisecond * 500)
logparser.Stop() logparser.Stop()
@ -80,6 +80,8 @@ func TestGrokParseLogFiles(t *testing.T) {
map[string]string{}) map[string]string{})
} }
// Test that test_a.log line gets parsed even though we don't have the correct
// pattern available for test_b.log
func TestGrokParseLogFilesOneBad(t *testing.T) { func TestGrokParseLogFilesOneBad(t *testing.T) {
thisdir := getCurrentDir() thisdir := getCurrentDir()
p := &grok.Parser{ p := &grok.Parser{
@ -90,11 +92,12 @@ func TestGrokParseLogFilesOneBad(t *testing.T) {
logparser := &LogParserPlugin{ logparser := &LogParserPlugin{
FromBeginning: true, FromBeginning: true,
Files: []string{thisdir + "grok/testdata/*.log"}, Files: []string{thisdir + "grok/testdata/test_a.log"},
GrokParser: p, GrokParser: p,
} }
acc := testutil.Accumulator{} acc := testutil.Accumulator{}
acc.SetDebug(true)
assert.NoError(t, logparser.Start(&acc)) assert.NoError(t, logparser.Start(&acc))
time.Sleep(time.Millisecond * 500) time.Sleep(time.Millisecond * 500)


@ -9,6 +9,7 @@ import (
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
) )
@ -73,19 +74,16 @@ func (m *Memcached) Gather(acc telegraf.Accumulator) error {
return m.gatherServer(":11211", false, acc) return m.gatherServer(":11211", false, acc)
} }
errChan := errchan.New(len(m.Servers) + len(m.UnixSockets))
for _, serverAddress := range m.Servers { for _, serverAddress := range m.Servers {
if err := m.gatherServer(serverAddress, false, acc); err != nil { errChan.C <- m.gatherServer(serverAddress, false, acc)
return err
}
} }
for _, unixAddress := range m.UnixSockets { for _, unixAddress := range m.UnixSockets {
if err := m.gatherServer(unixAddress, true, acc); err != nil { errChan.C <- m.gatherServer(unixAddress, true, acc)
return err
}
} }
return nil return errChan.Error()
} }
func (m *Memcached) gatherServer( func (m *Memcached) gatherServer(


@ -1,6 +1,6 @@
# Mesos Input Plugin # Mesos Input Plugin
This input plugin gathers metrics from Mesos (*currently only Mesos masters*). This input plugin gathers metrics from Mesos.
For more information, please check the [Mesos Observability Metrics](http://mesos.apache.org/documentation/latest/monitoring/) page. For more information, please check the [Mesos Observability Metrics](http://mesos.apache.org/documentation/latest/monitoring/) page.
### Configuration: ### Configuration:
@ -8,14 +8,41 @@ For more information, please check the [Mesos Observability Metrics](http://meso
```toml ```toml
# Telegraf plugin for gathering metrics from N Mesos masters # Telegraf plugin for gathering metrics from N Mesos masters
[[inputs.mesos]] [[inputs.mesos]]
# Timeout, in ms. ## Timeout, in ms.
timeout = 100 timeout = 100
# A list of Mesos masters, default value is localhost:5050. ## A list of Mesos masters.
masters = ["localhost:5050"] masters = ["localhost:5050"]
# Metrics groups to be collected, by default, all enabled. ## Master metrics groups to be collected, by default, all enabled.
master_collections = ["resources","master","system","slaves","frameworks","messages","evqueue","registrar"] master_collections = [
"resources",
"master",
"system",
"agents",
"frameworks",
"tasks",
"messages",
"evqueue",
"registrar",
]
## A list of Mesos slaves, default is []
# slaves = []
## Slave metrics groups to be collected, by default, all enabled.
# slave_collections = [
# "resources",
# "agent",
# "system",
# "executors",
# "tasks",
# "messages",
# ]
## Include mesos tasks statistics, default is false
# slave_tasks = true
``` ```
By default this plugin is not configured to gather metrics from Mesos. Since a Mesos cluster can be deployed in numerous ways, it does not provide any default
values in that respect. Users need to specify the master/slave nodes this plugin will gather metrics from. Additionally, enabling `slave_tasks` will allow
gathering metrics from tasks running on the specified slaves (this option is disabled by default).
### Measurements & Fields: ### Measurements & Fields:
Mesos master metric groups Mesos master metric groups
@ -33,6 +60,12 @@ Mesos master metric groups
- master/disk_revocable_percent - master/disk_revocable_percent
- master/disk_revocable_total - master/disk_revocable_total
- master/disk_revocable_used - master/disk_revocable_used
- master/gpus_percent
- master/gpus_used
- master/gpus_total
- master/gpus_revocable_percent
- master/gpus_revocable_total
- master/gpus_revocable_used
- master/mem_percent - master/mem_percent
- master/mem_used - master/mem_used
- master/mem_total - master/mem_total
@ -136,17 +169,111 @@ Mesos master metric groups
- registrar/state_store_ms/p999 - registrar/state_store_ms/p999
- registrar/state_store_ms/p9999 - registrar/state_store_ms/p9999
Mesos slave metric groups
- resources
- slave/cpus_percent
- slave/cpus_used
- slave/cpus_total
- slave/cpus_revocable_percent
- slave/cpus_revocable_total
- slave/cpus_revocable_used
- slave/disk_percent
- slave/disk_used
- slave/disk_total
- slave/disk_revocable_percent
- slave/disk_revocable_total
- slave/disk_revocable_used
- slave/gpus_percent
- slave/gpus_used
- slave/gpus_total
- slave/gpus_revocable_percent
- slave/gpus_revocable_total
- slave/gpus_revocable_used
- slave/mem_percent
- slave/mem_used
- slave/mem_total
- slave/mem_revocable_percent
- slave/mem_revocable_total
- slave/mem_revocable_used
- agent
- slave/registered
- slave/uptime_secs
- system
- system/cpus_total
- system/load_15min
- system/load_5min
- system/load_1min
- system/mem_free_bytes
- system/mem_total_bytes
- executors
- containerizer/mesos/container_destroy_errors
- slave/container_launch_errors
- slave/executors_preempted
- slave/frameworks_active
- slave/executor_directory_max_allowed_age_secs
- slave/executors_registering
- slave/executors_running
- slave/executors_terminated
- slave/executors_terminating
- slave/recovery_errors
- tasks
- slave/tasks_failed
- slave/tasks_finished
- slave/tasks_killed
- slave/tasks_lost
- slave/tasks_running
- slave/tasks_staging
- slave/tasks_starting
- messages
- slave/invalid_framework_messages
- slave/invalid_status_updates
- slave/valid_framework_messages
- slave/valid_status_updates
Mesos task metric groups
- executor_id
- executor_name
- framework_id
- source
- statistics (all metrics below will have the `statistics_` prefix included in their names)
- cpus_limit
- cpus_system_time_secs
- cpus_user_time_secs
- mem_anon_bytes
- mem_cache_bytes
- mem_critical_pressure_counter
- mem_file_bytes
- mem_limit_bytes
- mem_low_pressure_counter
- mem_mapped_file_bytes
- mem_medium_pressure_counter
- mem_rss_bytes
- mem_swap_bytes
- mem_total_bytes
- mem_total_memsw_bytes
- mem_unevictable_bytes
- timestamp
### Tags: ### Tags:
- All measurements have the following tags: - All master/slave measurements have the following tags:
- server
- role (master/slave)
- Task measurements have the following tags:
- server - server
### Example Output: ### Example Output:
``` ```
$ telegraf -config ~/mesos.conf -input-filter mesos -test $ telegraf -config ~/mesos.conf -input-filter mesos -test
* Plugin: mesos, Collection 1 * Plugin: mesos, Collection 1
mesos,server=172.17.8.101 allocator/event_queue_dispatches=0,master/cpus_percent=0, mesos,host=172.17.8.102,server=172.17.8.101 allocator/event_queue_dispatches=0,master/cpus_percent=0,
master/cpus_revocable_percent=0,master/cpus_revocable_total=0, master/cpus_revocable_percent=0,master/cpus_revocable_total=0,
master/cpus_revocable_used=0,master/cpus_total=2, master/cpus_revocable_used=0,master/cpus_total=2,
master/cpus_used=0,master/disk_percent=0,master/disk_revocable_percent=0, master/cpus_used=0,master/disk_percent=0,master/disk_revocable_percent=0,
@ -163,3 +290,16 @@ master/mem_revocable_used=0,master/mem_total=1002,
master/mem_used=0,master/messages_authenticate=0, master/mem_used=0,master/messages_authenticate=0,
master/messages_deactivate_framework=0 ... master/messages_deactivate_framework=0 ...
``` ```
Mesos task metrics (if enabled):
```
mesos-tasks,host=172.17.8.102,server=172.17.8.101,task_id=hello-world.e4b5b497-2ccd-11e6-a659-0242fb222ce2
statistics_cpus_limit=0.2,statistics_cpus_system_time_secs=142.49,statistics_cpus_user_time_secs=388.14,
statistics_mem_anon_bytes=359129088,statistics_mem_cache_bytes=3964928,
statistics_mem_critical_pressure_counter=0,statistics_mem_file_bytes=3964928,
statistics_mem_limit_bytes=767557632,statistics_mem_low_pressure_counter=0,
statistics_mem_mapped_file_bytes=114688,statistics_mem_medium_pressure_counter=0,
statistics_mem_rss_bytes=359129088,statistics_mem_swap_bytes=0,statistics_mem_total_bytes=363094016,
statistics_mem_total_memsw_bytes=363094016,statistics_mem_unevictable_bytes=0,
statistics_timestamp=1465486052.70525 1465486053052811792...
```
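The `statistics_` prefix in the output above comes from flattening each task's nested `statistics` object into top-level fields; the plugin uses telegraf's JSON flattener for this, and the sketch below shows the equivalent one-level fold:

```go
package main

import "fmt"

// flattenTask folds a task's nested "statistics" map into prefixed
// top-level fields, e.g. statistics_cpus_limit.
func flattenTask(task map[string]interface{}) map[string]interface{} {
	fields := make(map[string]interface{})
	for k, v := range task {
		if nested, ok := v.(map[string]interface{}); ok {
			for k2, v2 := range nested {
				fields[k+"_"+k2] = v2
			}
			continue
		}
		fields[k] = v
	}
	return fields
}

func main() {
	task := map[string]interface{}{
		"executor_id": "hello-world.e4b5b497",
		"statistics": map[string]interface{}{
			"cpus_limit":    0.2,
			"mem_rss_bytes": 359129088.0,
		},
	}
	fmt.Println(flattenTask(task))
}
```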


@ -17,33 +17,57 @@ import (
jsonparser "github.com/influxdata/telegraf/plugins/parsers/json" jsonparser "github.com/influxdata/telegraf/plugins/parsers/json"
) )
type Role string
const (
MASTER Role = "master"
SLAVE = "slave"
)
type Mesos struct { type Mesos struct {
Timeout int Timeout int
Masters []string Masters []string
MasterCols []string `toml:"master_collections"` MasterCols []string `toml:"master_collections"`
Slaves []string
SlaveCols []string `toml:"slave_collections"`
SlaveTasks bool
} }
var defaultMetrics = []string{ var allMetrics = map[Role][]string{
"resources", "master", "system", "slaves", "frameworks", MASTER: []string{"resources", "master", "system", "agents", "frameworks", "tasks", "messages", "evqueue", "registrar"},
"tasks", "messages", "evqueue", "messages", "registrar", SLAVE: []string{"resources", "agent", "system", "executors", "tasks", "messages"},
} }
var sampleConfig = ` var sampleConfig = `
# Timeout, in ms. ## Timeout, in ms.
timeout = 100 timeout = 100
# A list of Mesos masters, default value is localhost:5050. ## A list of Mesos masters.
masters = ["localhost:5050"] masters = ["localhost:5050"]
# Metrics groups to be collected, by default, all enabled. ## Master metrics groups to be collected, by default, all enabled.
master_collections = [ master_collections = [
"resources", "resources",
"master", "master",
"system", "system",
"slaves", "agents",
"frameworks", "frameworks",
"tasks",
"messages", "messages",
"evqueue", "evqueue",
"registrar", "registrar",
] ]
## A list of Mesos slaves, default is []
# slaves = []
## Slave metrics groups to be collected, by default, all enabled.
# slave_collections = [
# "resources",
# "agent",
# "system",
# "executors",
# "tasks",
# "messages",
# ]
## Include mesos tasks statistics, default is false
# slave_tasks = true
` `
// SampleConfig returns a sample configuration block // SampleConfig returns a sample configuration block
@ -56,21 +80,54 @@ func (m *Mesos) Description() string {
return "Telegraf plugin for gathering metrics from N Mesos masters" return "Telegraf plugin for gathering metrics from N Mesos masters"
} }
func (m *Mesos) SetDefaults() {
if len(m.MasterCols) == 0 {
m.MasterCols = allMetrics[MASTER]
}
if len(m.SlaveCols) == 0 {
m.SlaveCols = allMetrics[SLAVE]
}
if m.Timeout == 0 {
log.Println("[mesos] Missing timeout value, setting default value (100ms)")
m.Timeout = 100
}
}
// Gather() metrics from given list of Mesos Masters // Gather() metrics from given list of Mesos Masters
func (m *Mesos) Gather(acc telegraf.Accumulator) error { func (m *Mesos) Gather(acc telegraf.Accumulator) error {
var wg sync.WaitGroup var wg sync.WaitGroup
var errorChannel chan error var errorChannel chan error
if len(m.Masters) == 0 { m.SetDefaults()
m.Masters = []string{"localhost:5050"}
}
errorChannel = make(chan error, len(m.Masters)*2) errorChannel = make(chan error, len(m.Masters)+2*len(m.Slaves))
for _, v := range m.Masters { for _, v := range m.Masters {
wg.Add(1) wg.Add(1)
go func(c string) { go func(c string) {
errorChannel <- m.gatherMetrics(c, acc) errorChannel <- m.gatherMainMetrics(c, ":5050", MASTER, acc)
wg.Done()
return
}(v)
}
for _, v := range m.Slaves {
wg.Add(1)
go func(c string) {
errorChannel <- m.gatherMainMetrics(c, ":5051", MASTER, acc)
wg.Done()
return
}(v)
if !m.SlaveTasks {
continue
}
wg.Add(1)
go func(c string) {
errorChannel <- m.gatherSlaveTaskMetrics(c, ":5051", acc)
wg.Done() wg.Done()
return return
}(v) }(v)
@ -94,7 +151,7 @@ func (m *Mesos) Gather(acc telegraf.Accumulator) error {
} }
// metricsDiff() returns set names for removal // metricsDiff() returns set names for removal
func metricsDiff(w []string) []string { func metricsDiff(role Role, w []string) []string {
b := []string{} b := []string{}
s := make(map[string]bool) s := make(map[string]bool)
@ -106,7 +163,7 @@ func metricsDiff(w []string) []string {
s[v] = true s[v] = true
} }
for _, d := range defaultMetrics { for _, d := range allMetrics[role] {
if _, ok := s[d]; !ok { if _, ok := s[d]; !ok {
b = append(b, d) b = append(b, d)
} }
@ -116,11 +173,12 @@ func metricsDiff(w []string) []string {
} }
// masterBlocks serves as a kind of metrics registry grouping them in sets func masterBlocks(g string) []string { func getMetrics(role Role, group string) []string {
func masterBlocks(g string) []string { func getMetrics(role Role, group string) []string {
var m map[string][]string var m map[string][]string
m = make(map[string][]string) m = make(map[string][]string)
if role == MASTER {
m["resources"] = []string{ m["resources"] = []string{
"master/cpus_percent", "master/cpus_percent",
"master/cpus_used", "master/cpus_used",
@ -134,6 +192,12 @@ func masterBlocks(g string) []string {
"master/disk_revocable_percent", "master/disk_revocable_percent",
"master/disk_revocable_total", "master/disk_revocable_total",
"master/disk_revocable_used", "master/disk_revocable_used",
"master/gpus_percent",
"master/gpus_used",
"master/gpus_total",
"master/gpus_revocable_percent",
"master/gpus_revocable_total",
"master/gpus_revocable_used",
"master/mem_percent", "master/mem_percent",
"master/mem_used", "master/mem_used",
"master/mem_total", "master/mem_total",
@ -156,7 +220,7 @@ func masterBlocks(g string) []string {
"system/mem_total_bytes", "system/mem_total_bytes",
} }
m["slaves"] = []string{ m["agents"] = []string{
"master/slave_registrations", "master/slave_registrations",
"master/slave_removals", "master/slave_removals",
"master/slave_reregistrations", "master/slave_reregistrations",
@ -245,27 +309,103 @@ func masterBlocks(g string) []string {
"registrar/state_store_ms/p999", "registrar/state_store_ms/p999",
"registrar/state_store_ms/p9999", "registrar/state_store_ms/p9999",
} }
} else if role == SLAVE {
m["resources"] = []string{
"slave/cpus_percent",
"slave/cpus_used",
"slave/cpus_total",
"slave/cpus_revocable_percent",
"slave/cpus_revocable_total",
"slave/cpus_revocable_used",
"slave/disk_percent",
"slave/disk_used",
"slave/disk_total",
"slave/disk_revocable_percent",
"slave/disk_revocable_total",
"slave/disk_revocable_used",
"slave/gpus_percent",
"slave/gpus_used",
"slave/gpus_total",
"slave/gpus_revocable_percent",
"slave/gpus_revocable_total",
"slave/gpus_revocable_used",
"slave/mem_percent",
"slave/mem_used",
"slave/mem_total",
"slave/mem_revocable_percent",
"slave/mem_revocable_total",
"slave/mem_revocable_used",
}
ret, ok := m[g] m["agent"] = []string{
"slave/registered",
"slave/uptime_secs",
}
m["system"] = []string{
"system/cpus_total",
"system/load_15min",
"system/load_5min",
"system/load_1min",
"system/mem_free_bytes",
"system/mem_total_bytes",
}
m["executors"] = []string{
"containerizer/mesos/container_destroy_errors",
"slave/container_launch_errors",
"slave/executors_preempted",
"slave/frameworks_active",
"slave/executor_directory_max_allowed_age_secs",
"slave/executors_registering",
"slave/executors_running",
"slave/executors_terminated",
"slave/executors_terminating",
"slave/recovery_errors",
}
m["tasks"] = []string{
"slave/tasks_failed",
"slave/tasks_finished",
"slave/tasks_killed",
"slave/tasks_lost",
"slave/tasks_running",
"slave/tasks_staging",
"slave/tasks_starting",
}
m["messages"] = []string{
"slave/invalid_framework_messages",
"slave/invalid_status_updates",
"slave/valid_framework_messages",
"slave/valid_status_updates",
}
}
ret, ok := m[group]
if !ok { if !ok {
log.Println("[mesos] Unkown metrics group: ", g) log.Printf("[mesos] Unkown %s metrics group: %s\n", role, group)
return []string{} return []string{}
} }
return ret return ret
} }
// removeGroup(), remove unwanted sets func (m *Mesos) filterMetrics(role Role, metrics *map[string]interface{}) {
func (m *Mesos) removeGroup(j *map[string]interface{}) {
var ok bool var ok bool
var selectedMetrics []string
b := metricsDiff(m.MasterCols) if role == MASTER {
selectedMetrics = m.MasterCols
} else if role == SLAVE {
selectedMetrics = m.SlaveCols
}
for _, k := range b { for _, k := range metricsDiff(role, selectedMetrics) {
for _, v := range masterBlocks(k) { for _, v := range getMetrics(role, k) {
if _, ok = (*j)[v]; ok { if _, ok = (*metrics)[v]; ok {
delete((*j), v) delete((*metrics), v)
} }
} }
} }
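`filterMetrics` works by complement: `metricsDiff` returns the groups the user did not select, and every metric belonging to those groups is deleted from the decoded response. A condensed sketch of the set difference:

```go
package main

import "fmt"

// unselected returns the groups in all that are absent from selected;
// metrics in these groups are the ones filterMetrics deletes.
func unselected(all, selected []string) []string {
	seen := make(map[string]bool)
	for _, s := range selected {
		seen[s] = true
	}
	var out []string
	for _, g := range all {
		if !seen[g] {
			out = append(out, g)
		}
	}
	return out
}

func main() {
	all := []string{"resources", "master", "system", "registrar"}
	fmt.Println(unselected(all, []string{"resources", "master"}))
	// [system registrar]
}
```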
@ -280,23 +420,66 @@ var client = &http.Client{
Timeout: time.Duration(4 * time.Second), Timeout: time.Duration(4 * time.Second),
} }
// This should not belong to the object func (m *Mesos) gatherSlaveTaskMetrics(address string, defaultPort string, acc telegraf.Accumulator) error {
func (m *Mesos) gatherMetrics(a string, acc telegraf.Accumulator) error { var metrics []map[string]interface{}
var jsonOut map[string]interface{}
host, _, err := net.SplitHostPort(a) host, _, err := net.SplitHostPort(address)
if err != nil { if err != nil {
host = a host = address
a = a + ":5050" address = address + defaultPort
} }
tags := map[string]string{ tags := map[string]string{
"server": host, "server": host,
} }
if m.Timeout == 0 { ts := strconv.Itoa(m.Timeout) + "ms"
log.Println("[mesos] Missing timeout value, setting default value (100ms)")
m.Timeout = 100 resp, err := client.Get("http://" + address + "/monitor/statistics?timeout=" + ts)
if err != nil {
return err
}
data, err := ioutil.ReadAll(resp.Body)
resp.Body.Close()
if err != nil {
return err
}
if err = json.Unmarshal([]byte(data), &metrics); err != nil {
return errors.New("Error decoding JSON response")
}
for _, task := range metrics {
tags["task_id"] = task["executor_id"].(string)
jf := jsonparser.JSONFlattener{}
err = jf.FlattenJSON("", task)
if err != nil {
return err
}
acc.AddFields("mesos-tasks", jf.Fields, tags)
}
return nil
}
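A standalone sketch of the same fetch-and-decode step against a Mesos agent; the `/monitor/statistics` endpoint returns a JSON array with one object per running task (host and timeout below are placeholder values):

```go
package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	resp, err := http.Get("http://localhost:5051/monitor/statistics?timeout=100ms")
	if err != nil {
		fmt.Println("get:", err)
		return
	}
	defer resp.Body.Close()

	data, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("read:", err)
		return
	}

	var tasks []map[string]interface{}
	if err := json.Unmarshal(data, &tasks); err != nil {
		fmt.Println("decode:", err)
		return
	}
	for _, task := range tasks {
		fmt.Println("task:", task["executor_id"])
	}
}
```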
// This should not belong to the object
func (m *Mesos) gatherMainMetrics(a string, defaultPort string, role Role, acc telegraf.Accumulator) error {
var jsonOut map[string]interface{}
host, _, err := net.SplitHostPort(a)
if err != nil {
host = a
a = a + defaultPort
}
tags := map[string]string{
"server": host,
"role": string(role),
} }
ts := strconv.Itoa(m.Timeout) + "ms" ts := strconv.Itoa(m.Timeout) + "ms"
@ -317,7 +500,7 @@ func (m *Mesos) gatherMetrics(a string, acc telegraf.Accumulator) error {
return errors.New("Error decoding JSON response") return errors.New("Error decoding JSON response")
} }
m.removeGroup(&jsonOut) m.filterMetrics(role, &jsonOut)
jf := jsonparser.JSONFlattener{} jf := jsonparser.JSONFlattener{}


@ -2,70 +2,275 @@ package mesos
import ( import (
"encoding/json" "encoding/json"
"fmt"
"math/rand" "math/rand"
"net/http" "net/http"
"net/http/httptest" "net/http/httptest"
"os" "os"
"testing" "testing"
jsonparser "github.com/influxdata/telegraf/plugins/parsers/json"
"github.com/influxdata/telegraf/testutil" "github.com/influxdata/telegraf/testutil"
) )
var mesosMetrics map[string]interface{} var masterMetrics map[string]interface{}
var ts *httptest.Server var masterTestServer *httptest.Server
var slaveMetrics map[string]interface{}
var slaveTaskMetrics map[string]interface{}
var slaveTestServer *httptest.Server
func randUUID() string {
b := make([]byte, 16)
rand.Read(b)
return fmt.Sprintf("%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:])
}
func generateMetrics() { func generateMetrics() {
mesosMetrics = make(map[string]interface{}) masterMetrics = make(map[string]interface{})
metricNames := []string{"master/cpus_percent", "master/cpus_used", "master/cpus_total", metricNames := []string{
"master/cpus_revocable_percent", "master/cpus_revocable_total", "master/cpus_revocable_used", // resources
"master/disk_percent", "master/disk_used", "master/disk_total", "master/disk_revocable_percent", "master/cpus_percent",
"master/disk_revocable_total", "master/disk_revocable_used", "master/mem_percent", "master/cpus_used",
"master/mem_used", "master/mem_total", "master/mem_revocable_percent", "master/mem_revocable_total", "master/cpus_total",
"master/mem_revocable_used", "master/elected", "master/uptime_secs", "system/cpus_total", "master/cpus_revocable_percent",
"system/load_15min", "system/load_5min", "system/load_1min", "system/mem_free_bytes", "master/cpus_revocable_total",
"system/mem_total_bytes", "master/slave_registrations", "master/slave_removals", "master/cpus_revocable_used",
"master/slave_reregistrations", "master/slave_shutdowns_scheduled", "master/slave_shutdowns_canceled", "master/disk_percent",
"master/slave_shutdowns_completed", "master/slaves_active", "master/slaves_connected", "master/disk_used",
"master/slaves_disconnected", "master/slaves_inactive", "master/frameworks_active", "master/disk_total",
"master/frameworks_connected", "master/frameworks_disconnected", "master/frameworks_inactive", "master/disk_revocable_percent",
"master/outstanding_offers", "master/tasks_error", "master/tasks_failed", "master/tasks_finished", "master/disk_revocable_total",
"master/tasks_killed", "master/tasks_lost", "master/tasks_running", "master/tasks_staging", "master/disk_revocable_used",
"master/tasks_starting", "master/invalid_executor_to_framework_messages", "master/invalid_framework_to_executor_messages", "master/gpus_percent",
"master/invalid_status_update_acknowledgements", "master/invalid_status_updates", "master/gpus_used",
"master/dropped_messages", "master/messages_authenticate", "master/messages_deactivate_framework", "master/gpus_total",
"master/messages_decline_offers", "master/messages_executor_to_framework", "master/messages_exited_executor", "master/gpus_revocable_percent",
"master/messages_framework_to_executor", "master/messages_kill_task", "master/messages_launch_tasks", "master/gpus_revocable_total",
"master/messages_reconcile_tasks", "master/messages_register_framework", "master/messages_register_slave", "master/gpus_revocable_used",
"master/messages_reregister_framework", "master/messages_reregister_slave", "master/messages_resource_request", "master/mem_percent",
"master/messages_revive_offers", "master/messages_status_update", "master/messages_status_update_acknowledgement", "master/mem_used",
"master/messages_unregister_framework", "master/messages_unregister_slave", "master/messages_update_slave", "master/mem_total",
"master/recovery_slave_removals", "master/slave_removals/reason_registered", "master/slave_removals/reason_unhealthy", "master/mem_revocable_percent",
"master/slave_removals/reason_unregistered", "master/valid_framework_to_executor_messages", "master/valid_status_update_acknowledgements", "master/mem_revocable_total",
"master/valid_status_updates", "master/task_lost/source_master/reason_invalid_offers", "master/mem_revocable_used",
"master/task_lost/source_master/reason_slave_removed", "master/task_lost/source_slave/reason_executor_terminated", // master
"master/valid_executor_to_framework_messages", "master/event_queue_dispatches", "master/elected",
"master/event_queue_http_requests", "master/event_queue_messages", "registrar/state_fetch_ms", "master/uptime_secs",
"registrar/state_store_ms", "registrar/state_store_ms/max", "registrar/state_store_ms/min", // system
"registrar/state_store_ms/p50", "registrar/state_store_ms/p90", "registrar/state_store_ms/p95", "system/cpus_total",
"registrar/state_store_ms/p99", "registrar/state_store_ms/p999", "registrar/state_store_ms/p9999"} "system/load_15min",
"system/load_5min",
"system/load_1min",
"system/mem_free_bytes",
"system/mem_total_bytes",
// agents
"master/slave_registrations",
"master/slave_removals",
"master/slave_reregistrations",
"master/slave_shutdowns_scheduled",
"master/slave_shutdowns_canceled",
"master/slave_shutdowns_completed",
"master/slaves_active",
"master/slaves_connected",
"master/slaves_disconnected",
"master/slaves_inactive",
// frameworks
"master/frameworks_active",
"master/frameworks_connected",
"master/frameworks_disconnected",
"master/frameworks_inactive",
"master/outstanding_offers",
// tasks
"master/tasks_error",
"master/tasks_failed",
"master/tasks_finished",
"master/tasks_killed",
"master/tasks_lost",
"master/tasks_running",
"master/tasks_staging",
"master/tasks_starting",
// messages
"master/invalid_executor_to_framework_messages",
"master/invalid_framework_to_executor_messages",
"master/invalid_status_update_acknowledgements",
"master/invalid_status_updates",
"master/dropped_messages",
"master/messages_authenticate",
"master/messages_deactivate_framework",
"master/messages_decline_offers",
"master/messages_executor_to_framework",
"master/messages_exited_executor",
"master/messages_framework_to_executor",
"master/messages_kill_task",
"master/messages_launch_tasks",
"master/messages_reconcile_tasks",
"master/messages_register_framework",
"master/messages_register_slave",
"master/messages_reregister_framework",
"master/messages_reregister_slave",
"master/messages_resource_request",
"master/messages_revive_offers",
"master/messages_status_update",
"master/messages_status_update_acknowledgement",
"master/messages_unregister_framework",
"master/messages_unregister_slave",
"master/messages_update_slave",
"master/recovery_slave_removals",
"master/slave_removals/reason_registered",
"master/slave_removals/reason_unhealthy",
"master/slave_removals/reason_unregistered",
"master/valid_framework_to_executor_messages",
"master/valid_status_update_acknowledgements",
"master/valid_status_updates",
"master/task_lost/source_master/reason_invalid_offers",
"master/task_lost/source_master/reason_slave_removed",
"master/task_lost/source_slave/reason_executor_terminated",
"master/valid_executor_to_framework_messages",
// evgqueue
"master/event_queue_dispatches",
"master/event_queue_http_requests",
"master/event_queue_messages",
// registrar
"registrar/state_fetch_ms",
"registrar/state_store_ms",
"registrar/state_store_ms/max",
"registrar/state_store_ms/min",
"registrar/state_store_ms/p50",
"registrar/state_store_ms/p90",
"registrar/state_store_ms/p95",
"registrar/state_store_ms/p99",
"registrar/state_store_ms/p999",
"registrar/state_store_ms/p9999",
}
for _, k := range metricNames {
masterMetrics[k] = rand.Float64()
}
slaveMetrics = make(map[string]interface{})
metricNames = []string{
// resources
"slave/cpus_percent",
"slave/cpus_used",
"slave/cpus_total",
"slave/cpus_revocable_percent",
"slave/cpus_revocable_total",
"slave/cpus_revocable_used",
"slave/disk_percent",
"slave/disk_used",
"slave/disk_total",
"slave/disk_revocable_percent",
"slave/disk_revocable_total",
"slave/disk_revocable_used",
"slave/gpus_percent",
"slave/gpus_used",
"slave/gpus_total",
"slave/gpus_revocable_percent",
"slave/gpus_revocable_total",
"slave/gpus_revocable_used",
"slave/mem_percent",
"slave/mem_used",
"slave/mem_total",
"slave/mem_revocable_percent",
"slave/mem_revocable_total",
"slave/mem_revocable_used",
// agent
"slave/registered",
"slave/uptime_secs",
// system
"system/cpus_total",
"system/load_15min",
"system/load_5min",
"system/load_1min",
"system/mem_free_bytes",
"system/mem_total_bytes",
// executors
"containerizer/mesos/container_destroy_errors",
"slave/container_launch_errors",
"slave/executors_preempted",
"slave/frameworks_active",
"slave/executor_directory_max_allowed_age_secs",
"slave/executors_registering",
"slave/executors_running",
"slave/executors_terminated",
"slave/executors_terminating",
"slave/recovery_errors",
// tasks
"slave/tasks_failed",
"slave/tasks_finished",
"slave/tasks_killed",
"slave/tasks_lost",
"slave/tasks_running",
"slave/tasks_staging",
"slave/tasks_starting",
// messages
"slave/invalid_framework_messages",
"slave/invalid_status_updates",
"slave/valid_framework_messages",
"slave/valid_status_updates",
}
for _, k := range metricNames {
slaveMetrics[k] = rand.Float64()
}
slaveTaskMetrics = map[string]interface{}{
"executor_id": fmt.Sprintf("task_%s", randUUID()),
"executor_name": "Some task description",
"framework_id": randUUID(),
"source": fmt.Sprintf("task_source_%s", randUUID()),
"statistics": map[string]interface{}{
"cpus_limit": rand.Float64(),
"cpus_system_time_secs": rand.Float64(),
"cpus_user_time_secs": rand.Float64(),
"mem_anon_bytes": float64(rand.Int63()),
"mem_cache_bytes": float64(rand.Int63()),
"mem_critical_pressure_counter": float64(rand.Int63()),
"mem_file_bytes": float64(rand.Int63()),
"mem_limit_bytes": float64(rand.Int63()),
"mem_low_pressure_counter": float64(rand.Int63()),
"mem_mapped_file_bytes": float64(rand.Int63()),
"mem_medium_pressure_counter": float64(rand.Int63()),
"mem_rss_bytes": float64(rand.Int63()),
"mem_swap_bytes": float64(rand.Int63()),
"mem_total_bytes": float64(rand.Int63()),
"mem_total_memsw_bytes": float64(rand.Int63()),
"mem_unevictable_bytes": float64(rand.Int63()),
"timestamp": rand.Float64(),
},
}
}
func TestMain(m *testing.M) {
generateMetrics()
masterRouter := http.NewServeMux()
masterRouter.HandleFunc("/metrics/snapshot", func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(masterMetrics)
})
masterTestServer = httptest.NewServer(masterRouter)
slaveRouter := http.NewServeMux()
slaveRouter.HandleFunc("/metrics/snapshot", func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode(slaveMetrics)
})
slaveRouter.HandleFunc("/monitor/statistics", func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
w.Header().Set("Content-Type", "application/json")
json.NewEncoder(w).Encode([]map[string]interface{}{slaveTaskMetrics})
})
slaveTestServer = httptest.NewServer(slaveRouter)
rc := m.Run()
masterTestServer.Close()
slaveTestServer.Close()
os.Exit(rc)
}
@ -73,7 +278,7 @@ func TestMesosMaster(t *testing.T) {
var acc testutil.Accumulator
m := Mesos{
Masters: []string{masterTestServer.Listener.Addr().String()},
Timeout: 10,
}
@ -83,34 +288,88 @@ func TestMesosMaster(t *testing.T) {
t.Errorf(err.Error())
}
acc.AssertContainsFields(t, "mesos", masterMetrics)
}
func TestMasterFilter(t *testing.T) {
generateMetrics()
m := Mesos{
MasterCols: []string{
"resources", "master", "registrar",
},
}
b := []string{
"system", "agents", "frameworks",
"messages", "evqueue", "tasks",
}
m.filterMetrics(MASTER, &masterMetrics)
for _, v := range b {
for _, x := range getMetrics(MASTER, v) {
if _, ok := masterMetrics[x]; ok {
t.Errorf("Found key %s, it should be gone.", x)
}
}
}
for _, v := range m.MasterCols {
for _, x := range getMetrics(MASTER, v) {
if _, ok := masterMetrics[x]; !ok {
t.Errorf("Didn't find key %s, it should present.", x)
}
}
}
}
func TestMesosSlave(t *testing.T) {
var acc testutil.Accumulator
m := Mesos{
Masters: []string{},
Slaves: []string{slaveTestServer.Listener.Addr().String()},
SlaveTasks: true,
Timeout: 10,
}
err := m.Gather(&acc)
if err != nil {
t.Errorf(err.Error())
}
acc.AssertContainsFields(t, "mesos", slaveMetrics)
jf := jsonparser.JSONFlattener{}
err = jf.FlattenJSON("", slaveTaskMetrics)
if err != nil {
t.Errorf(err.Error())
}
acc.AssertContainsFields(t, "mesos-tasks", jf.Fields)
}
func TestSlaveFilter(t *testing.T) {
m := Mesos{
SlaveCols: []string{
"resources", "agent", "tasks",
},
}
b := []string{
"system", "executors", "messages",
}
m.filterMetrics(SLAVE, &slaveMetrics)
for _, v := range b {
for _, x := range getMetrics(SLAVE, v) {
if _, ok := slaveMetrics[x]; ok {
t.Errorf("Found key %s, it should be gone.", x)
}
}
}
for _, v := range m.SlaveCols {
for _, x := range getMetrics(SLAVE, v) {
if _, ok := slaveMetrics[x]; !ok {
t.Errorf("Didn't find key %s, it should present.", x) t.Errorf("Didn't find key %s, it should present.", x)
} }
} }
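For reference, a hypothetical telegraf.conf fragment exercising the same column filters as these tests (the key names are assumptions inferred from the Masters, MasterCols, Slaves, SlaveCols, and SlaveTasks fields, not taken from this diff):

```
[[inputs.mesos]]
  ## hypothetical endpoints and collection filters
  masters = ["localhost:5050"]
  master_collections = ["resources", "master", "registrar"]
  slaves = ["localhost:5051"]
  slave_collections = ["resources", "agent", "tasks"]
  slave_tasks = true
```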

View File

@ -10,6 +10,7 @@
## mongodb://10.10.3.33:18832,
## 10.0.0.1:10000, etc.
servers = ["127.0.0.1:27017"]
gather_perdb_stats = false
```
For authenticated mongodb instances use a mongodb connection URI
@ -52,3 +53,15 @@ and create a single measurement containing values e.g.
* ttl_passes_per_sec
* repl_lag
* jumbo_chunks (only if mongos or mongo config)
If gather_perdb_stats is set to true, it will also collect per-database stats exposed by db.stats(),
creating another measurement called mongodb_db_stats and containing the values:
* collections
* objects
* avg_obj_size
* data_size
* storage_size
* num_extents
* indexes
* index_size
* ok
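As a minimal sketch, enabling the per-database stats in telegraf.conf would look like this (the server address is just a placeholder):

```
[[inputs.mongodb]]
  servers = ["127.0.0.1:27017"]
  gather_perdb_stats = true
```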

View File

@ -10,6 +10,7 @@ import (
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
"gopkg.in/mgo.v2" "gopkg.in/mgo.v2"
) )
@ -18,6 +19,7 @@ type MongoDB struct {
Servers []string
Ssl Ssl
mongos map[string]*Server
GatherPerdbStats bool
}
type Ssl struct {
@ -32,6 +34,7 @@ var sampleConfig = `
## mongodb://10.10.3.33:18832,
## 10.0.0.1:10000, etc.
servers = ["127.0.0.1:27017"]
gather_perdb_stats = false
`
func (m *MongoDB) SampleConfig() string {
@ -53,9 +56,7 @@ func (m *MongoDB) Gather(acc telegraf.Accumulator) error {
}
var wg sync.WaitGroup
errChan := errchan.New(len(m.Servers))
for _, serv := range m.Servers {
u, err := url.Parse(serv)
if err != nil {
@ -71,13 +72,12 @@ func (m *MongoDB) Gather(acc telegraf.Accumulator) error {
wg.Add(1)
go func(srv *Server) {
defer wg.Done()
errChan.C <- m.gatherServer(srv, acc)
}(m.getMongoServer(u))
}
wg.Wait()
return errChan.Error()
}
func (m *MongoDB) getMongoServer(url *url.URL) *Server {
@ -135,7 +135,7 @@ func (m *MongoDB) gatherServer(server *Server, acc telegraf.Accumulator) error {
}
server.Session = sess
}
return server.gatherData(acc, m.GatherPerdbStats)
}
func init() {

View File

@ -12,6 +12,12 @@ type MongodbData struct {
StatLine *StatLine
Fields map[string]interface{}
Tags map[string]string
DbData []DbData
}
type DbData struct {
Name string
Fields map[string]interface{}
}
func NewMongodbData(statLine *StatLine, tags map[string]string) *MongodbData {
@ -22,6 +28,7 @@ func NewMongodbData(statLine *StatLine, tags map[string]string) *MongodbData {
StatLine: statLine,
Tags: tags,
Fields: make(map[string]interface{}),
DbData: []DbData{},
}
}
@ -72,6 +79,34 @@ var WiredTigerStats = map[string]string{
"percent_cache_used": "CacheUsedPercent", "percent_cache_used": "CacheUsedPercent",
} }
var DbDataStats = map[string]string{
"collections": "Collections",
"objects": "Objects",
"avg_obj_size": "AvgObjSize",
"data_size": "DataSize",
"storage_size": "StorageSize",
"num_extents": "NumExtents",
"indexes": "Indexes",
"index_size": "IndexSize",
"ok": "Ok",
}
func (d *MongodbData) AddDbStats() {
for _, dbstat := range d.StatLine.DbStatsLines {
dbStatLine := reflect.ValueOf(&dbstat).Elem()
newDbData := &DbData{
Name: dbstat.Name,
Fields: make(map[string]interface{}),
}
newDbData.Fields["type"] = "db_stat"
for key, value := range DbDataStats {
val := dbStatLine.FieldByName(value).Interface()
newDbData.Fields[key] = val
}
d.DbData = append(d.DbData, *newDbData)
}
}
func (d *MongodbData) AddDefaultStats() {
statLine := reflect.ValueOf(d.StatLine).Elem()
d.addStat(statLine, DefaultStats)
@ -113,4 +148,15 @@ func (d *MongodbData) flush(acc telegraf.Accumulator) {
d.StatLine.Time,
)
d.Fields = make(map[string]interface{})
for _, db := range d.DbData {
d.Tags["db_name"] = db.Name
acc.AddFields(
"mongodb_db_stats",
db.Fields,
d.Tags,
d.StatLine.Time,
)
db.Fields = make(map[string]interface{})
}
}

View File

@ -22,7 +22,7 @@ func (s *Server) getDefaultTags() map[string]string {
return tags
}
func (s *Server) gatherData(acc telegraf.Accumulator, gatherDbStats bool) error {
s.Session.SetMode(mgo.Eventual, true)
s.Session.SetSocketTimeout(0)
result_server := &ServerStatus{}
@ -42,10 +42,34 @@ func (s *Server) gatherData(acc telegraf.Accumulator) error {
JumboChunksCount: int64(jumbo_chunks),
}
result_db_stats := &DbStats{}
if gatherDbStats == true {
names := []string{}
names, err = s.Session.DatabaseNames()
if err != nil {
log.Println("Error getting database names (" + err.Error() + ")")
}
for _, db_name := range names {
db_stat_line := &DbStatsData{}
err = s.Session.DB(db_name).Run(bson.D{{"dbStats", 1}}, db_stat_line)
if err != nil {
log.Println("Error getting db stats from " + db_name + "(" + err.Error() + ")")
}
db := &Db{
Name: db_name,
DbStatsData: db_stat_line,
}
result_db_stats.Dbs = append(result_db_stats.Dbs, *db)
}
}
result := &MongoStatus{
ServerStatus: result_server,
ReplSetStatus: result_repl,
ClusterStatus: result_cluster,
DbStats: result_db_stats,
}
defer func() {
@ -64,6 +88,7 @@ func (s *Server) gatherData(acc telegraf.Accumulator) error {
s.getDefaultTags(),
)
data.AddDefaultStats()
data.AddDbStats()
data.flush(acc)
}
return nil

View File

@ -29,12 +29,12 @@ func TestGetDefaultTags(t *testing.T) {
func TestAddDefaultStats(t *testing.T) {
var acc testutil.Accumulator
err := server.gatherData(&acc, false)
require.NoError(t, err)
time.Sleep(time.Duration(1) * time.Second)
// need to call this twice so it can perform the diff
err = server.gatherData(&acc, false)
require.NoError(t, err)
for key, _ := range DefaultStats {

View File

@ -35,6 +35,7 @@ type MongoStatus struct {
ServerStatus *ServerStatus
ReplSetStatus *ReplSetStatus
ClusterStatus *ClusterStatus
DbStats *DbStats
}
type ServerStatus struct {
@ -65,6 +66,32 @@ type ServerStatus struct {
Metrics *MetricsStats `bson:"metrics"`
}
// DbStats stores stats from all dbs
type DbStats struct {
Dbs []Db
}
// Db represent a single DB
type Db struct {
Name string
DbStatsData *DbStatsData
}
// DbStatsData stores stats from a db
type DbStatsData struct {
Db string `bson:"db"`
Collections int64 `bson:"collections"`
Objects int64 `bson:"objects"`
AvgObjSize float64 `bson:"avgObjSize"`
DataSize int64 `bson:"dataSize"`
StorageSize int64 `bson:"storageSize"`
NumExtents int64 `bson:"numExtents"`
Indexes int64 `bson:"indexes"`
IndexSize int64 `bson:"indexSize"`
Ok int64 `bson:"ok"`
GleStats interface{} `bson:"gleStats"`
}
// ClusterStatus stores information related to the whole cluster
type ClusterStatus struct {
JumboChunksCount int64
@ -396,6 +423,22 @@ type StatLine struct {
// Cluster fields
JumboChunksCount int64
// DB stats field
DbStatsLines []DbStatLine
}
type DbStatLine struct {
Name string
Collections int64
Objects int64
AvgObjSize float64
DataSize int64
StorageSize int64
NumExtents int64
Indexes int64
IndexSize int64
Ok int64
}
func parseLocks(stat ServerStatus) map[string]LockUsage {
@ -677,5 +720,27 @@ func NewStatLine(oldMongo, newMongo MongoStatus, key string, all bool, sampleSec
newClusterStat := *newMongo.ClusterStatus
returnVal.JumboChunksCount = newClusterStat.JumboChunksCount
newDbStats := *newMongo.DbStats
for _, db := range newDbStats.Dbs {
dbStatsData := db.DbStatsData
// mongos doesn't have the db key, so setting the db name
if dbStatsData.Db == "" {
dbStatsData.Db = db.Name
}
dbStatLine := &DbStatLine{
Name: dbStatsData.Db,
Collections: dbStatsData.Collections,
Objects: dbStatsData.Objects,
AvgObjSize: dbStatsData.AvgObjSize,
DataSize: dbStatsData.DataSize,
StorageSize: dbStatsData.StorageSize,
NumExtents: dbStatsData.NumExtents,
Indexes: dbStatsData.Indexes,
IndexSize: dbStatsData.IndexSize,
Ok: dbStatsData.Ok,
}
returnVal.DbStatsLines = append(returnVal.DbStatsLines, *dbStatLine)
}
return returnVal
}

View File

@ -7,10 +7,12 @@ import (
"net/url" "net/url"
"strconv" "strconv"
"strings" "strings"
"sync"
"time" "time"
_ "github.com/go-sql-driver/mysql" _ "github.com/go-sql-driver/mysql"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
) )
@ -118,26 +120,27 @@ func (m *Mysql) InitMysql() {
func (m *Mysql) Gather(acc telegraf.Accumulator) error {
if len(m.Servers) == 0 {
// default to localhost if nothing specified.
return m.gatherServer(localhost, acc)
}
// Initialise additional query intervals
if !initDone {
m.InitMysql()
}
var wg sync.WaitGroup
errChan := errchan.New(len(m.Servers))
// Loop through each server and collect metrics
for _, server := range m.Servers {
wg.Add(1)
go func(s string) {
defer wg.Done()
errChan.C <- m.gatherServer(s, acc)
}(server)
}
wg.Wait()
return errChan.Error()
}
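The errchan helper used here is internal to telegraf, so as a rough, self-contained sketch of the same fan-out idea (the names gatherAll and gather are illustrative, not the actual errchan API):

```
package gather

import (
	"errors"
	"strings"
	"sync"
)

// gatherAll fans out one goroutine per server; the buffered channel gives
// every goroutine a slot to report its result without blocking.
func gatherAll(servers []string, gather func(string) error) error {
	var wg sync.WaitGroup
	errs := make(chan error, len(servers))
	for _, s := range servers {
		wg.Add(1)
		go func(s string) {
			defer wg.Done()
			errs <- gather(s) // nil on success, like errChan.C <- above
		}(s)
	}
	wg.Wait()
	close(errs)
	// Collapse the results into one error, roughly what errChan.Error()
	// does for the plugin: nil if every server succeeded.
	var msgs []string
	for err := range errs {
		if err != nil {
			msgs = append(msgs, err.Error())
		}
	}
	if len(msgs) == 0 {
		return nil
	}
	return errors.New(strings.Join(msgs, ", "))
}
```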
type mapping struct {
@ -306,6 +309,10 @@ var mappings = []*mapping{
onServer: "Threadpool_", onServer: "Threadpool_",
inExport: "threadpool_", inExport: "threadpool_",
}, },
{
onServer: "wsrep_",
inExport: "wsrep_",
},
}
var (

View File

@ -20,7 +20,6 @@ func TestMysqlDefaultsToLocal(t *testing.T) {
}
var acc testutil.Accumulator
err := m.Gather(&acc)
require.NoError(t, err)

View File

@ -12,6 +12,7 @@ import (
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
) )
@ -34,7 +35,7 @@ func (n *Nginx) Description() string {
func (n *Nginx) Gather(acc telegraf.Accumulator) error {
var wg sync.WaitGroup
errChan := errchan.New(len(n.Urls))
for _, u := range n.Urls {
addr, err := url.Parse(u)
@ -45,13 +46,12 @@ func (n *Nginx) Gather(acc telegraf.Accumulator) error {
wg.Add(1)
go func(addr *url.URL) {
defer wg.Done()
errChan.C <- n.gatherUrl(addr, acc)
}(addr)
}
wg.Wait()
return errChan.Error()
}
var tr = &http.Transport{

View File

@ -32,6 +32,7 @@ import (
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
) )
@ -65,19 +66,17 @@ func (n *NSQ) Description() string {
func (n *NSQ) Gather(acc telegraf.Accumulator) error {
var wg sync.WaitGroup
errChan := errchan.New(len(n.Endpoints))
for _, e := range n.Endpoints {
wg.Add(1)
go func(e string) {
defer wg.Done()
errChan.C <- n.gatherEndpoint(e, acc)
}(e)
}
wg.Wait()
return errChan.Error()
}
var tr = &http.Transport{

View File

@ -43,9 +43,9 @@ var sampleConfig = `
## file paths for proc files. If empty default paths will be used:
## /proc/net/netstat, /proc/net/snmp, /proc/net/snmp6
## These can also be overridden with env variables, see README.
proc_net_netstat = "/proc/net/netstat"
proc_net_snmp = "/proc/net/snmp"
proc_net_snmp6 = "/proc/net/snmp6"
## dump metrics with 0 values too
dump_zeros = true
`
@ -141,7 +141,7 @@ func (ns *Nstat) loadPaths() {
ns.ProcNetSNMP = proc(ENV_SNMP, NET_SNMP)
}
if ns.ProcNetSNMP6 == "" {
ns.ProcNetSNMP6 = proc(ENV_SNMP6, NET_SNMP6)
}
}

View File

@ -70,7 +70,7 @@ func (p *Procstat) Gather(acc telegraf.Accumulator) error {
p.Exe, p.PidFile, p.Pattern, p.User, err.Error())
} else {
for pid, proc := range p.pidmap {
p := NewSpecProcessor(p.ProcessName, p.Prefix, pid, acc, proc, p.tagmap[pid])
p.pushMetrics()
}
}
@ -140,7 +140,6 @@ func (p *Procstat) pidsFromFile() ([]int32, error) {
out = append(out, int32(pid))
p.tagmap[int32(pid)] = map[string]string{
"pidfile": p.PidFile,
}
}
}
@ -165,7 +164,6 @@ func (p *Procstat) pidsFromExe() ([]int32, error) {
out = append(out, int32(ipid))
p.tagmap[int32(ipid)] = map[string]string{
"exe": p.Exe,
}
} else {
outerr = err
@ -193,7 +191,6 @@ func (p *Procstat) pidsFromPattern() ([]int32, error) {
out = append(out, int32(ipid))
p.tagmap[int32(ipid)] = map[string]string{
"pattern": p.Pattern,
}
} else {
outerr = err
@ -221,7 +218,6 @@ func (p *Procstat) pidsFromUser() ([]int32, error) {
out = append(out, int32(ipid))
p.tagmap[int32(ipid)] = map[string]string{
"user": p.User,
}
} else {
outerr = err

View File

@ -10,6 +10,7 @@ import (
type SpecProcessor struct {
Prefix string
pid int32
tags map[string]string
fields map[string]interface{}
acc telegraf.Accumulator
@ -19,6 +20,7 @@ type SpecProcessor struct {
func NewSpecProcessor(
processName string,
prefix string,
pid int32,
acc telegraf.Accumulator,
p *process.Process,
tags map[string]string,
@ -33,6 +35,7 @@ func NewSpecProcessor(
}
return &SpecProcessor{
Prefix: prefix,
pid: pid,
tags: tags,
fields: make(map[string]interface{}),
acc: acc,
@ -45,7 +48,7 @@ func (p *SpecProcessor) pushMetrics() {
if p.Prefix != "" { if p.Prefix != "" {
prefix = p.Prefix + "_" prefix = p.Prefix + "_"
} }
fields := map[string]interface{}{} fields := map[string]interface{}{"pid": p.pid}
numThreads, err := p.proc.NumThreads() numThreads, err := p.proc.NumThreads()
if err == nil { if err == nil {

View File

@ -10,6 +10,7 @@ import (
"io" "io"
"math" "math"
"mime" "mime"
"net/http"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
@ -19,17 +20,9 @@ import (
"github.com/prometheus/common/expfmt" "github.com/prometheus/common/expfmt"
) )
// PrometheusParser is an object for Parsing incoming metrics.
type PrometheusParser struct {
// PromFormat
PromFormat map[string]string
// DefaultTags will be added to every parsed metric
// DefaultTags map[string]string
}
// Parse returns a slice of Metrics from a text representation of
// metrics
func Parse(buf []byte, header http.Header) ([]telegraf.Metric, error) {
var metrics []telegraf.Metric
var parser expfmt.TextParser
// parse even if the buffer begins with a newline
@ -38,38 +31,35 @@ func (p *PrometheusParser) Parse(buf []byte) ([]telegraf.Metric, error) {
buffer := bytes.NewBuffer(buf)
reader := bufio.NewReader(buffer)
mediatype, params, err := mime.ParseMediaType(header.Get("Content-Type"))
// Prepare output
metricFamilies := make(map[string]*dto.MetricFamily)
if err == nil && mediatype == "application/vnd.google.protobuf" &&
params["encoding"] == "delimited" &&
params["proto"] == "io.prometheus.client.MetricFamily" {
for {
mf := &dto.MetricFamily{}
if _, ierr := pbutil.ReadDelimited(reader, mf); ierr != nil {
if ierr == io.EOF {
break
}
return nil, fmt.Errorf("reading metric family protocol buffer failed: %s", ierr)
}
metricFamilies[mf.GetName()] = mf
}
} else {
metricFamilies, err = parser.TextToMetricFamilies(reader)
if err != nil {
return nil, fmt.Errorf("reading text format failed: %s", err)
}
}
// read metrics
for metricName, mf := range metricFamilies {
for _, m := range mf.Metric {
// reading tags
tags := makeLabels(m)
/*
for key, value := range p.DefaultTags {
tags[key] = value
}
*/
// reading fields
fields := make(map[string]interface{})
if mf.GetType() == dto.MetricType_SUMMARY {
@ -102,33 +92,10 @@ func (p *PrometheusParser) Parse(buf []byte) ([]telegraf.Metric, error) {
}
}
}
}
return metrics, err
}
// Parse one line
func (p *PrometheusParser) ParseLine(line string) (telegraf.Metric, error) {
metrics, err := p.Parse([]byte(line + "\n"))
if err != nil {
return nil, err
}
if len(metrics) < 1 {
return nil, fmt.Errorf(
"Can not parse the line: %s, for data format: prometheus", line)
}
return metrics[0], nil
}
/*
// Set default tags
func (p *PrometheusParser) SetDefaultTags(tags map[string]string) {
p.DefaultTags = tags
}
*/
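A hedged usage sketch of the new signature: the caller hands over both the body and the HTTP response headers, and Parse picks the decoder from Content-Type (the endpoint URL below is a placeholder):

```
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"

	"github.com/influxdata/telegraf/plugins/inputs/prometheus"
)

func main() {
	// Placeholder endpoint; any Prometheus-format /metrics page works.
	resp, err := http.Get("http://localhost:9100/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}

	// Content-Type decides between text parsing and delimited protobuf.
	metrics, err := prometheus.Parse(body, resp.Header)
	if err != nil {
		panic(err)
	}
	for _, m := range metrics {
		fmt.Println(m.Name(), m.Fields())
	}
}
```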
// Get Quantiles from summary metric
func makeQuantiles(m *dto.Metric) map[string]interface{} {
fields := make(map[string]interface{})

View File

@ -1,6 +1,7 @@
package prometheus
import (
"net/http"
"testing"
"time"
@ -101,10 +102,8 @@ cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
`
func TestParseValidPrometheus(t *testing.T) {
// Gauge value
metrics, err := Parse([]byte(validUniqueGauge), http.Header{})
assert.NoError(t, err)
assert.Len(t, metrics, 1)
assert.Equal(t, "cadvisor_version_info", metrics[0].Name())
@ -118,8 +117,7 @@ func TestParseValidPrometheus(t *testing.T) {
}, metrics[0].Tags())
// Counter value
metrics, err = Parse([]byte(validUniqueCounter), http.Header{})
assert.NoError(t, err)
assert.Len(t, metrics, 1)
assert.Equal(t, "get_token_fail_count", metrics[0].Name())
@ -129,8 +127,8 @@ func TestParseValidPrometheus(t *testing.T) {
assert.Equal(t, map[string]string{}, metrics[0].Tags())
// Summary data
//SetDefaultTags(map[string]string{})
metrics, err = Parse([]byte(validUniqueSummary), http.Header{})
assert.NoError(t, err)
assert.Len(t, metrics, 1)
assert.Equal(t, "http_request_duration_microseconds", metrics[0].Name())
@ -144,7 +142,7 @@ func TestParseValidPrometheus(t *testing.T) {
assert.Equal(t, map[string]string{"handler": "prometheus"}, metrics[0].Tags()) assert.Equal(t, map[string]string{"handler": "prometheus"}, metrics[0].Tags())
// histogram data // histogram data
metrics, err = parser.Parse([]byte(validUniqueHistogram)) metrics, err = Parse([]byte(validUniqueHistogram), http.Header{})
assert.NoError(t, err) assert.NoError(t, err)
assert.Len(t, metrics, 1) assert.Len(t, metrics, 1)
assert.Equal(t, "apiserver_request_latencies", metrics[0].Name()) assert.Equal(t, "apiserver_request_latencies", metrics[0].Name())
@ -165,11 +163,3 @@ func TestParseValidPrometheus(t *testing.T) {
metrics[0].Tags())
}
func TestParseLineInvalidPrometheus(t *testing.T) {
parser := PrometheusParser{}
metric, err := parser.ParseLine(validUniqueLine)
assert.NotNil(t, err)
assert.Nil(t, metric)
}

View File

@ -13,6 +13,8 @@ import (
"time" "time"
) )
const acceptHeader = `application/vnd.google.protobuf;proto=io.prometheus.client.MetricFamily;encoding=delimited;q=0.7,text/plain;version=0.0.4;q=0.3`
type Prometheus struct {
Urls []string
@ -86,7 +88,7 @@ var client = &http.Client{
func (p *Prometheus) gatherURL(url string, acc telegraf.Accumulator) error {
collectDate := time.Now()
var req, err = http.NewRequest("GET", url, nil)
req.Header.Add("Accept", acceptHeader)
var token []byte
var resp *http.Response
@ -129,20 +131,9 @@ func (p *Prometheus) gatherURL(url string, acc telegraf.Accumulator) error {
return fmt.Errorf("error reading body: %s", err) return fmt.Errorf("error reading body: %s", err)
} }
// Headers metrics, err := Parse(body, resp.Header)
headers := make(map[string]string)
for key, value := range headers {
headers[key] = value
}
// Prepare Prometheus parser config
promparser := PrometheusParser{
PromFormat: headers,
}
metrics, err := promparser.Parse(body)
if err != nil {
return fmt.Errorf("error reading metrics for %s: %s",
url, err)
}
// Add (or not) collected metrics

View File

@ -9,35 +9,59 @@ import (
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/internal/errchan" "github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
) )
// DefaultUsername will set a default value that corresponds to the default
// value used by Rabbitmq
const DefaultUsername = "guest"
// DefaultPassword will set a default value that corresponds to the default
// value used by Rabbitmq
const DefaultPassword = "guest"
// DefaultURL will set a default value that corresponds to the default value
// used by Rabbitmq
const DefaultURL = "http://localhost:15672"
// RabbitMQ defines the configuration necessary for gathering metrics,
// see the sample config for further details
type RabbitMQ struct {
URL string
Name string
Username string
Password string
// Path to CA file
SSLCA string `toml:"ssl_ca"`
// Path to host cert file
SSLCert string `toml:"ssl_cert"`
// Path to cert key file
SSLKey string `toml:"ssl_key"`
// Use SSL but skip chain & host verification
InsecureSkipVerify bool
Nodes []string
Queues []string
Client *http.Client
}
// OverviewResponse ...
type OverviewResponse struct {
MessageStats *MessageStats `json:"message_stats"`
ObjectTotals *ObjectTotals `json:"object_totals"`
QueueTotals *QueueTotals `json:"queue_totals"`
}
// Details ...
type Details struct {
Rate float64
}
// MessageStats ...
type MessageStats struct {
Ack int64
AckDetails Details `json:"ack_details"`
@ -51,6 +75,7 @@ type MessageStats struct {
RedeliverDetails Details `json:"redeliver_details"`
}
// ObjectTotals ...
type ObjectTotals struct {
Channels int64
Connections int64
@ -59,6 +84,7 @@ type ObjectTotals struct {
Queues int64
}
// QueueTotals ...
type QueueTotals struct {
Messages int64
MessagesReady int64 `json:"messages_ready"`
@ -66,10 +92,11 @@ type QueueTotals struct {
MessageBytes int64 `json:"message_bytes"`
MessageBytesReady int64 `json:"message_bytes_ready"`
MessageBytesUnacknowledged int64 `json:"message_bytes_unacknowledged"`
MessageRAM int64 `json:"message_bytes_ram"`
MessagePersistent int64 `json:"message_bytes_persistent"`
}
// Queue ...
type Queue struct {
QueueTotals // just to not repeat the same code
MessageStats `json:"message_stats"`
@ -83,6 +110,7 @@ type Queue struct {
AutoDelete bool `json:"auto_delete"`
}
// Node ...
type Node struct {
Name string
@ -99,6 +127,7 @@ type Node struct {
SocketsUsed int64 `json:"sockets_used"`
}
// gatherFunc ...
type gatherFunc func(r *RabbitMQ, acc telegraf.Accumulator, errChan chan error)
var gatherFunctions = []gatherFunc{gatherOverview, gatherNodes, gatherQueues}
@ -109,22 +138,40 @@ var sampleConfig = `
# username = "guest" # username = "guest"
# password = "guest" # password = "guest"
## Optional SSL Config
# ssl_ca = "/etc/telegraf/ca.pem"
# ssl_cert = "/etc/telegraf/cert.pem"
# ssl_key = "/etc/telegraf/key.pem"
## Use SSL but skip chain & host verification
# insecure_skip_verify = false
## A list of nodes to pull metrics about. If not specified, metrics for
## all nodes are gathered.
# nodes = ["rabbit@node1", "rabbit@node2"]
`
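Putting the new options together, a sketch of a TLS-enabled input section (the certificate paths are placeholders):

```
[[inputs.rabbitmq]]
  url = "http://localhost:15672"
  # username = "guest"
  # password = "guest"
  ssl_ca = "/etc/telegraf/ca.pem"
  ssl_cert = "/etc/telegraf/cert.pem"
  ssl_key = "/etc/telegraf/key.pem"
  # insecure_skip_verify = false
```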
// SampleConfig ...
func (r *RabbitMQ) SampleConfig() string {
return sampleConfig
}
// Description ...
func (r *RabbitMQ) Description() string {
return "Read metrics from one or many RabbitMQ servers via the management API"
}
// Gather ...
func (r *RabbitMQ) Gather(acc telegraf.Accumulator) error {
if r.Client == nil {
tlsCfg, err := internal.GetTLSConfig(
r.SSLCert, r.SSLKey, r.SSLCA, r.InsecureSkipVerify)
if err != nil {
return err
}
tr := &http.Transport{
ResponseHeaderTimeout: time.Duration(3 * time.Second),
TLSClientConfig: tlsCfg,
}
r.Client = &http.Client{
Transport: tr,
Timeout: time.Duration(4 * time.Second),
@ -286,7 +333,7 @@ func gatherQueues(r *RabbitMQ, acc telegraf.Accumulator, errChan chan error) {
"message_bytes": queue.MessageBytes, "message_bytes": queue.MessageBytes,
"message_bytes_ready": queue.MessageBytesReady, "message_bytes_ready": queue.MessageBytesReady,
"message_bytes_unacked": queue.MessageBytesUnacknowledged, "message_bytes_unacked": queue.MessageBytesUnacknowledged,
"message_bytes_ram": queue.MessageRam, "message_bytes_ram": queue.MessageRAM,
"message_bytes_persist": queue.MessagePersistent, "message_bytes_persist": queue.MessagePersistent,
"messages": queue.Messages, "messages": queue.Messages,
"messages_ready": queue.MessagesReady, "messages_ready": queue.MessagesReady,

View File

@ -43,6 +43,7 @@
- latest_fork_usec
- connected_slaves
- master_repl_offset
- master_last_io_seconds_ago
- repl_backlog_active
- repl_backlog_size
- repl_backlog_histlen
@ -57,6 +58,7 @@
- All measurements have the following tags:
- port
- server
- replication_role
### Example Output:
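A hypothetical point illustrating the new tag; the field values here are invented for the example:

```
redis,server=localhost,port=6379,replication_role=master clients=1i,uptime=238i 1468928660000000000
```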

View File

@ -12,6 +12,7 @@ import (
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
) )
@ -25,6 +26,7 @@ var sampleConfig = `
## e.g.
## tcp://localhost:6379
## tcp://:password@192.168.99.100
## unix:///var/run/redis.sock
##
## If no servers are specified, then localhost is used as the host.
## If no port is specified, 6379 is used
@ -66,6 +68,7 @@ var Tracking = map[string]string{
"latest_fork_usec": "latest_fork_usec", "latest_fork_usec": "latest_fork_usec",
"connected_slaves": "connected_slaves", "connected_slaves": "connected_slaves",
"master_repl_offset": "master_repl_offset", "master_repl_offset": "master_repl_offset",
"master_last_io_seconds_ago": "master_last_io_seconds_ago",
"repl_backlog_active": "repl_backlog_active", "repl_backlog_active": "repl_backlog_active",
"repl_backlog_size": "repl_backlog_size", "repl_backlog_size": "repl_backlog_size",
"repl_backlog_histlen": "repl_backlog_histlen", "repl_backlog_histlen": "repl_backlog_histlen",
@ -74,16 +77,19 @@ var Tracking = map[string]string{
"used_cpu_user": "used_cpu_user", "used_cpu_user": "used_cpu_user",
"used_cpu_sys_children": "used_cpu_sys_children", "used_cpu_sys_children": "used_cpu_sys_children",
"used_cpu_user_children": "used_cpu_user_children", "used_cpu_user_children": "used_cpu_user_children",
"role": "role", "role": "replication_role",
} }
var ErrProtocolError = errors.New("redis protocol error") var ErrProtocolError = errors.New("redis protocol error")
const defaultPort = "6379"
// Reads stats from all configured servers accumulates stats. // Reads stats from all configured servers accumulates stats.
// Returns one of the errors encountered while gather stats (if any). // Returns one of the errors encountered while gather stats (if any).
func (r *Redis) Gather(acc telegraf.Accumulator) error { func (r *Redis) Gather(acc telegraf.Accumulator) error {
if len(r.Servers) == 0 { if len(r.Servers) == 0 {
url := &url.URL{ url := &url.URL{
Scheme: "tcp",
Host: ":6379", Host: ":6379",
} }
r.gatherServer(url, acc) r.gatherServer(url, acc)
@ -91,10 +97,12 @@ func (r *Redis) Gather(acc telegraf.Accumulator) error {
}
var wg sync.WaitGroup
errChan := errchan.New(len(r.Servers))
for _, serv := range r.Servers {
if !strings.HasPrefix(serv, "tcp://") && !strings.HasPrefix(serv, "unix://") {
serv = "tcp://" + serv
}
u, err := url.Parse(serv)
if err != nil {
return fmt.Errorf("Unable to parse to address '%s': %s", serv, err)
@ -104,29 +112,35 @@ func (r *Redis) Gather(acc telegraf.Accumulator) error {
u.Host = serv
u.Path = ""
}
if u.Scheme == "tcp" {
_, _, err := net.SplitHostPort(u.Host)
if err != nil {
u.Host = u.Host + ":" + defaultPort
}
}
wg.Add(1)
go func(serv string) {
defer wg.Done()
errChan.C <- r.gatherServer(u, acc)
}(serv)
}
wg.Wait()
return errChan.Error()
}
const defaultPort = "6379"
func (r *Redis) gatherServer(addr *url.URL, acc telegraf.Accumulator) error { func (r *Redis) gatherServer(addr *url.URL, acc telegraf.Accumulator) error {
_, _, err := net.SplitHostPort(addr.Host) var address string
if err != nil {
addr.Host = addr.Host + ":" + defaultPort
}
c, err := net.DialTimeout("tcp", addr.Host, defaultTimeout) if addr.Scheme == "unix" {
address = addr.Path
} else {
address = addr.Host
}
c, err := net.DialTimeout(addr.Scheme, address, defaultTimeout)
if err != nil { if err != nil {
return fmt.Errorf("Unable to connect to redis server '%s': %s", addr.Host, err) return fmt.Errorf("Unable to connect to redis server '%s': %s", address, err)
}
defer c.Close()
@ -154,12 +168,17 @@ func (r *Redis) gatherServer(addr *url.URL, acc telegraf.Accumulator) error {
c.Write([]byte("EOF\r\n")) c.Write([]byte("EOF\r\n"))
rdr := bufio.NewReader(c) rdr := bufio.NewReader(c)
var tags map[string]string
if addr.Scheme == "unix" {
tags = map[string]string{"socket": addr.Path}
} else {
// Setup tags for all redis metrics
host, port := "unknown", "unknown"
// If there's an error, ignore and use 'unknown' tags
host, port, _ = net.SplitHostPort(addr.Host)
tags = map[string]string{"server": host, "port": port}
}
return gatherInfoOutput(rdr, acc, tags)
}
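A sketch of a servers list mixing the supported address forms; per the code above, bare entries get a tcp:// scheme prepended and the default port 6379 appended when missing:

```
[[inputs.redis]]
  servers = ["tcp://localhost:6379", "unix:///var/run/redis.sock", "192.168.99.100"]
```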
@ -208,7 +227,7 @@ func gatherInfoOutput(
}
if name == "role" {
tags["replication_role"] = val
continue
}

View File

@ -35,7 +35,7 @@ func TestRedis_ParseMetrics(t *testing.T) {
err := gatherInfoOutput(rdr, &acc, tags)
require.NoError(t, err)
tags = map[string]string{"host": "redis.net", "replication_role": "master"}
fields := map[string]interface{}{
"uptime": uint64(238),
"clients": uint64(1),
@ -71,7 +71,7 @@ func TestRedis_ParseMetrics(t *testing.T) {
"used_cpu_user_children": float64(0.00), "used_cpu_user_children": float64(0.00),
"keyspace_hitrate": float64(0.50), "keyspace_hitrate": float64(0.50),
} }
keyspaceTags := map[string]string{"host": "redis.net", "role": "master", "database": "db0"} keyspaceTags := map[string]string{"host": "redis.net", "replication_role": "master", "database": "db0"}
keyspaceFields := map[string]interface{}{ keyspaceFields := map[string]interface{}{
"avg_ttl": uint64(0), "avg_ttl": uint64(0),
"expires": uint64(0), "expires": uint64(0),

View File

@ -92,8 +92,8 @@ var diskIoSampleConfig = `
## disk partitions.
## Setting devices will restrict the stats to the specified devices.
# devices = ["sda", "sdb"]
## Uncomment the following line if you need disk serial numbers.
# skip_serial_number = false
`
func (_ *DiskIOStats) SampleConfig() string {
@ -151,6 +151,6 @@ func init() {
})
inputs.Add("diskio", func() telegraf.Input {
return &DiskIOStats{ps: &systemPS{}, SkipSerialNumber: true}
})
}

View File

@ -89,6 +89,7 @@ func (t *Tail) Start(acc telegraf.Accumulator) error {
ReOpen: true,
Follow: true,
Location: &seek,
MustExist: true,
})
if err != nil {
errS += err.Error() + " "

View File

@ -31,6 +31,8 @@ type TcpListener struct {
accept chan bool
// drops tracks the number of dropped metrics.
drops int
// malformed tracks the number of malformed packets
malformed int
// track the listener here so we can close it in Stop()
listener *net.TCPListener
@ -45,6 +47,9 @@ var dropwarn = "ERROR: tcp_listener message queue full. " +
"We have dropped %d messages so far. " + "We have dropped %d messages so far. " +
"You may want to increase allowed_pending_messages in the config\n" "You may want to increase allowed_pending_messages in the config\n"
var malformedwarn = "WARNING: tcp_listener has received %d malformed packets" +
" thus far."
const sampleConfig = `
## Address and port to host TCP listener on
service_address = ":8094"
@ -243,8 +248,10 @@ func (t *TcpListener) tcpParser() error {
if err == nil {
t.storeMetrics(metrics)
} else {
t.malformed++
if t.malformed == 1 || t.malformed%1000 == 0 {
log.Printf(malformedwarn, t.malformed)
}
}
}
}

View File

@ -27,6 +27,8 @@ type UdpListener struct {
done chan struct{}
// drops tracks the number of dropped metrics.
drops int
// malformed tracks the number of malformed packets
malformed int
parser parsers.Parser
@ -44,6 +46,9 @@ var dropwarn = "ERROR: udp_listener message queue full. " +
"We have dropped %d messages so far. " + "We have dropped %d messages so far. " +
"You may want to increase allowed_pending_messages in the config\n" "You may want to increase allowed_pending_messages in the config\n"
var malformedwarn = "WARNING: udp_listener has received %d malformed packets" +
" thus far."
const sampleConfig = `
## Address and port to host UDP listener on
service_address = ":8092"
@ -152,7 +157,10 @@ func (u *UdpListener) udpParser() error {
if err == nil {
u.storeMetrics(metrics)
} else {
u.malformed++
if u.malformed == 1 || u.malformed%1000 == 0 {
log.Printf(malformedwarn, u.malformed)
}
}
}
}

View File

@ -16,6 +16,7 @@ $ sudo service telegraf start
## Available webhooks
- [Github](github/)
- [Mandrill](mandrill/)
- [Rollbar](rollbar/)
## Adding new webhooks plugin

View File

@ -0,0 +1,15 @@
# mandrill webhook
You should configure your Mandrill webhooks to point at the `webhooks` service. To do this go to `mandrillapp.com/` and click `Settings > Webhooks`. In the resulting page, click `Add a Webhook`, select all events, set the `URL` to `http://<my_ip>:1619/mandrill`, and click `Create Webhook`.
## Events
See the [webhook doc](https://mandrill.zendesk.com/hc/en-us/articles/205583307-Message-Event-Webhook-format).
All events log the original timestamp, the event name, and the unique identifier of the message that generated the event.
**Tags:**
* 'event' = `event.event` string
**Fields:**
* 'id' = `event._id` string
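Using the `send` sample payload from the test data later in this commit, the emitted point would look roughly like this in line protocol (the timestamp is the event's `ts` in seconds converted to nanoseconds; illustrative, not captured output):

```
mandrill_webhooks,event=send id="id1" 1384954004000000000
```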

View File

@ -0,0 +1,56 @@
package mandrill
import (
"encoding/json"
"io/ioutil"
"log"
"net/http"
"net/url"
"time"
"github.com/gorilla/mux"
"github.com/influxdata/telegraf"
)
type MandrillWebhook struct {
Path string
acc telegraf.Accumulator
}
func (md *MandrillWebhook) Register(router *mux.Router, acc telegraf.Accumulator) {
router.HandleFunc(md.Path, md.returnOK).Methods("HEAD")
router.HandleFunc(md.Path, md.eventHandler).Methods("POST")
log.Printf("Started the webhooks_mandrill on %s\n", md.Path)
md.acc = acc
}
func (md *MandrillWebhook) returnOK(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
}
func (md *MandrillWebhook) eventHandler(w http.ResponseWriter, r *http.Request) {
defer r.Body.Close()
body, err := ioutil.ReadAll(r.Body)
if err != nil {
w.WriteHeader(http.StatusBadRequest)
return
}
data, err := url.ParseQuery(string(body))
if err != nil {
w.WriteHeader(http.StatusBadRequest)
return
}
var events []MandrillEvent
err = json.Unmarshal([]byte(data.Get("mandrill_events")), &events)
if err != nil {
w.WriteHeader(http.StatusBadRequest)
return
}
for _, event := range events {
md.acc.AddFields("mandrill_webhooks", event.Fields(), event.Tags(), time.Unix(event.TimeStamp, 0))
}
w.WriteHeader(http.StatusOK)
}

View File

@ -0,0 +1,24 @@
package mandrill
type Event interface {
Tags() map[string]string
Fields() map[string]interface{}
}
type MandrillEvent struct {
EventName string `json:"event"`
TimeStamp int64 `json:"ts"`
Id string `json:"_id"`
}
func (me *MandrillEvent) Tags() map[string]string {
return map[string]string{
"event": me.EventName,
}
}
func (me *MandrillEvent) Fields() map[string]interface{} {
return map[string]interface{}{
"id": me.Id,
}
}

View File

@ -0,0 +1,58 @@
package mandrill
func SendEventJSON() string {
return `
{
"event": "send",
"msg": {
"ts": 1365109999,
"subject": "This an example webhook message",
"email": "example.webhook@mandrillapp.com",
"sender": "example.sender@mandrillapp.com",
"tags": [
"webhook-example"
],
"opens": [
],
"clicks": [
],
"state": "sent",
"metadata": {
"user_id": 111
},
"_id": "exampleaaaaaaaaaaaaaaaaaaaaaaaaa",
"_version": "exampleaaaaaaaaaaaaaaa"
},
"_id": "id1",
"ts": 1384954004
}`
}
func HardBounceEventJSON() string {
return `
{
"event": "hard_bounce",
"msg": {
"ts": 1365109999,
"subject": "This an example webhook message",
"email": "example.webhook@mandrillapp.com",
"sender": "example.sender@mandrillapp.com",
"tags": [
"webhook-example"
],
"state": "bounced",
"metadata": {
"user_id": 111
},
"_id": "exampleaaaaaaaaaaaaaaaaaaaaaaaaa2",
"_version": "exampleaaaaaaaaaaaaaaa",
"bounce_description": "bad_mailbox",
"bgtools_code": 10,
"diag": "smtp;550 5.1.1 The email account that you tried to reach does not exist. Please try double-checking the recipient's email address for typos or unnecessary spaces."
},
"_id": "id2",
"ts": 1384954004
}`
}

View File

@ -0,0 +1,85 @@
package mandrill
import (
"github.com/influxdata/telegraf/testutil"
"net/http"
"net/http/httptest"
"net/url"
"strings"
"testing"
)
func postWebhooks(md *MandrillWebhook, eventBody string) *httptest.ResponseRecorder {
body := url.Values{}
body.Set("mandrill_events", eventBody)
req, _ := http.NewRequest("POST", "/mandrill", strings.NewReader(body.Encode()))
w := httptest.NewRecorder()
md.eventHandler(w, req)
return w
}
func headRequest(md *MandrillWebhook) *httptest.ResponseRecorder {
req, _ := http.NewRequest("HEAD", "/mandrill", strings.NewReader(""))
w := httptest.NewRecorder()
md.returnOK(w, req)
return w
}
func TestHead(t *testing.T) {
md := &MandrillWebhook{Path: "/mandrill"}
resp := headRequest(md)
if resp.Code != http.StatusOK {
t.Errorf("HEAD returned HTTP status code %v.\nExpected %v", resp.Code, http.StatusOK)
}
}
func TestSendEvent(t *testing.T) {
var acc testutil.Accumulator
md := &MandrillWebhook{Path: "/mandrill", acc: &acc}
resp := postWebhooks(md, "["+SendEventJSON()+"]")
if resp.Code != http.StatusOK {
t.Errorf("POST send returned HTTP status code %v.\nExpected %v", resp.Code, http.StatusOK)
}
fields := map[string]interface{}{
"id": "id1",
}
tags := map[string]string{
"event": "send",
}
acc.AssertContainsTaggedFields(t, "mandrill_webhooks", fields, tags)
}
func TestMultipleEvents(t *testing.T) {
var acc testutil.Accumulator
md := &MandrillWebhook{Path: "/mandrill", acc: &acc}
resp := postWebhooks(md, "["+SendEventJSON()+","+HardBounceEventJSON()+"]")
if resp.Code != http.StatusOK {
t.Errorf("POST send returned HTTP status code %v.\nExpected %v", resp.Code, http.StatusOK)
}
fields := map[string]interface{}{
"id": "id1",
}
tags := map[string]string{
"event": "send",
}
acc.AssertContainsTaggedFields(t, "mandrill_webhooks", fields, tags)
fields = map[string]interface{}{
"id": "id2",
}
tags = map[string]string{
"event": "hard_bounce",
}
acc.AssertContainsTaggedFields(t, "mandrill_webhooks", fields, tags)
}

View File

@@ -11,6 +11,7 @@ import (
 	"github.com/influxdata/telegraf/plugins/inputs"
 	"github.com/influxdata/telegraf/plugins/inputs/webhooks/github"
+	"github.com/influxdata/telegraf/plugins/inputs/webhooks/mandrill"
 	"github.com/influxdata/telegraf/plugins/inputs/webhooks/rollbar"
 )
@@ -26,6 +27,7 @@ type Webhooks struct {
 	ServiceAddress string
 	Github   *github.GithubWebhook
+	Mandrill *mandrill.MandrillWebhook
 	Rollbar  *rollbar.RollbarWebhook
 }
@@ -41,6 +43,9 @@ func (wb *Webhooks) SampleConfig() string {
   [inputs.webhooks.github]
     path = "/github"
+
+  [inputs.webhooks.mandrill]
+    path = "/mandrill"
   [inputs.webhooks.rollbar]
     path = "/rollbar"
 `

View File

@@ -107,7 +107,8 @@ type item struct {
 	counterHandle win.PDH_HCOUNTER
 }

-var sanitizedChars = strings.NewReplacer("/sec", "_persec", "/Sec", "_persec", " ", "_")
+var sanitizedChars = strings.NewReplacer("/sec", "_persec", "/Sec", "_persec",
+	" ", "_", "%", "Percent", `\`, "")

 func (m *Win_PerfCounters) AddItem(metrics *itemList, query string, objectName string, counter string, instance string,
 	measurement string, include_total bool) {
@@ -299,13 +300,12 @@ func (m *Win_PerfCounters) Gather(acc telegraf.Accumulator) error {
 				tags["instance"] = s
 			}
 			tags["objectname"] = metric.objectName
-			fields[sanitizedChars.Replace(string(metric.counter))] = float32(c.FmtValue.DoubleValue)
+			fields[sanitizedChars.Replace(metric.counter)] =
+				float32(c.FmtValue.DoubleValue)

-			var measurement string
-			if metric.measurement == "" {
+			measurement := sanitizedChars.Replace(metric.measurement)
+			if measurement == "" {
 				measurement = "win_perf_counters"
-			} else {
-				measurement = metric.measurement
 			}
 			acc.AddFields(measurement, fields, tags)
 		}
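
The expanded replacer now also maps `%` to `Percent` and strips backslashes before counter names become field keys. A quick standalone check of what it produces (the counter names are invented for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Same replacer as above: translate or strip the characters that
	// are problematic in measurement and field names.
	sanitizedChars := strings.NewReplacer("/sec", "_persec", "/Sec", "_persec",
		" ", "_", "%", "Percent", `\`, "")

	fmt.Println(sanitizedChars.Replace("% Processor Time"))    // Percent_Processor_Time
	fmt.Println(sanitizedChars.Replace("Bytes Sent/sec"))      // Bytes_Sent_persec
	fmt.Println(sanitizedChars.Replace(`\Memory\Page Faults`)) // MemoryPage_Faults
}
```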

View File

@@ -32,7 +32,7 @@ echo mntr | nc localhost 2181
 Meta:
 - units: int64
-- tags: `server=<hostname> port=<port>`
+- tags: `server=<hostname> port=<port> state=<leader|follower>`

 Measurement names:
 - zookeeper_avg_latency
@@ -55,8 +55,12 @@ Measurement names:
 Meta:
 - units: string
-- tags: `server=<hostname> port=<port>`
+- tags: `server=<hostname> port=<port> state=<leader|follower>`

 Measurement names:
 - zookeeper_version
+- zookeeper_server_state
+
+### Tags:
+- All measurements have the following tags:
+-

View File

@@ -55,6 +55,7 @@ func (z *Zookeeper) Gather(acc telegraf.Accumulator) error {
 }

 func (z *Zookeeper) gatherServer(address string, acc telegraf.Accumulator) error {
+	var zookeeper_state string
 	_, _, err := net.SplitHostPort(address)
 	if err != nil {
 		address = address + ":2181"
@@ -78,7 +79,6 @@ func (z *Zookeeper) gatherServer(address string, acc telegraf.Accumulator) error
 	if len(service) != 2 {
 		return fmt.Errorf("Invalid service address: %s", address)
 	}
-	tags := map[string]string{"server": service[0], "port": service[1]}

 	fields := make(map[string]interface{})
 	for scanner.Scan() {
@@ -92,6 +92,9 @@ func (z *Zookeeper) gatherServer(address string, acc telegraf.Accumulator) error
 		}

 		measurement := strings.TrimPrefix(parts[1], "zk_")
+		if measurement == "server_state" {
+			zookeeper_state = parts[2]
+		} else {
 			sValue := string(parts[2])

 			iVal, err := strconv.ParseInt(sValue, 10, 64)
@@ -101,6 +104,12 @@ func (z *Zookeeper) gatherServer(address string, acc telegraf.Accumulator) error
 				fields[measurement] = sValue
 			}
 		}
+	}
+	tags := map[string]string{
+		"server": service[0],
+		"port":   service[1],
+		"state":  zookeeper_state,
+	}
 	acc.AddFields("zookeeper", fields, tags)

 	return nil

View File

@@ -9,6 +9,8 @@ via raw TCP.
 # Configuration for Graphite server to send metrics to
 [[outputs.graphite]]
   ## TCP endpoint for your graphite instance.
+  ## If multiple endpoints are configured, the output will be load balanced.
+  ## Only one of the endpoints will be written to with each iteration.
   servers = ["localhost:2003"]
   ## Prefix metrics name
   prefix = ""

View File

@@ -2,7 +2,6 @@ package graphite

 import (
 	"errors"
-	"fmt"
 	"log"
 	"math/rand"
 	"net"
@@ -25,6 +24,8 @@ type Graphite struct {

 var sampleConfig = `
   ## TCP endpoint for your graphite instance.
+  ## If multiple endpoints are configured, output will be load balanced.
+  ## Only one of the endpoints will be written to with each iteration.
   servers = ["localhost:2003"]
   ## Prefix metrics name
   prefix = ""
@@ -96,9 +97,12 @@ func (g *Graphite) Write(metrics []telegraf.Metric) error {
 	// Send data to a random server
 	p := rand.Perm(len(g.conns))
 	for _, n := range p {
-		if _, e := fmt.Fprint(g.conns[n], graphitePoints); e != nil {
+		if g.Timeout > 0 {
+			g.conns[n].SetWriteDeadline(time.Now().Add(time.Duration(g.Timeout) * time.Second))
+		}
+		if _, e := g.conns[n].Write([]byte(graphitePoints)); e != nil {
 			// Error
-			log.Println("ERROR: " + err.Error())
+			log.Println("ERROR: " + e.Error())
 			// Let's try the next one
 		} else {
 			// Success
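
The rewritten loop pairs random server selection with per-write deadlines, so a hung endpoint surfaces as a write error instead of blocking the whole flush. Condensed into a standalone helper, it looks like the sketch below (the function name and error text are illustrative, not part of this commit):

```go
package main

import (
	"errors"
	"log"
	"math/rand"
	"net"
	"time"
)

// writeWithFailover tries each connection in random order and stops at
// the first successful write, the same pattern as Graphite.Write above.
func writeWithFailover(conns []net.Conn, payload []byte, timeout time.Duration) error {
	for _, n := range rand.Perm(len(conns)) {
		if timeout > 0 {
			// The deadline turns a stalled endpoint into a write error.
			conns[n].SetWriteDeadline(time.Now().Add(timeout))
		}
		if _, err := conns[n].Write(payload); err != nil {
			log.Println("ERROR: " + err.Error())
			continue // let's try the next one
		}
		return nil
	}
	return errors.New("could not write to any Graphite server")
}

func main() {
	conn, err := net.Dial("tcp", "localhost:2003")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	if err := writeWithFailover([]net.Conn{conn},
		[]byte("telegraf.cpu.usage 42 1384954004\n"), 2*time.Second); err != nil {
		log.Println(err)
	}
}
```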

View File

@@ -29,7 +29,9 @@ type Instrumental struct {

 const (
 	DefaultHost = "collector.instrumentalapp.com"
-	AuthFormat  = "hello version go/telegraf/1.0\nauthenticate %s\n"
+	HelloMessage    = "hello version go/telegraf/1.1\n"
+	AuthFormat      = "authenticate %s\n"
+	HandshakeFormat = HelloMessage + AuthFormat
 )

 var (
@@ -52,6 +54,7 @@ var sampleConfig = `
 func (i *Instrumental) Connect() error {
 	connection, err := net.DialTimeout("tcp", i.Host+":8000", i.Timeout.Duration)
 	if err != nil {
+		i.conn = nil
 		return err
@@ -151,6 +154,11 @@ func (i *Instrumental) Write(metrics []telegraf.Metric) error {
 		return err
 	}

+	// force the connection closed after sending data
+	// to deal with various disconnection scenarios and eschew holding
+	// open idle connections en masse
+	i.Close()
+
 	return nil
 }
@@ -163,7 +171,7 @@ func (i *Instrumental) SampleConfig() string {
 }

 func (i *Instrumental) authenticate(conn net.Conn) error {
-	_, err := fmt.Fprintf(conn, AuthFormat, i.ApiToken)
+	_, err := fmt.Fprintf(conn, HandshakeFormat, i.ApiToken)
 	if err != nil {
 		return err
 	}
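
Since `HandshakeFormat` concatenates the two messages, a single `fmt.Fprintf` now emits the whole handshake, which is why the TCP test server below reads "hello ..." and "authenticate ..." as two consecutive lines. A tiny sketch of the wire format (the token is invented):

```go
package main

import "fmt"

const (
	HelloMessage    = "hello version go/telegraf/1.1\n"
	AuthFormat      = "authenticate %s\n"
	HandshakeFormat = HelloMessage + AuthFormat
)

func main() {
	// One write of HandshakeFormat produces both protocol lines.
	fmt.Printf(HandshakeFormat, "abc123token")
	// Output:
	// hello version go/telegraf/1.1
	// authenticate abc123token
}
```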

View File

@@ -24,7 +24,6 @@ func TestWrite(t *testing.T) {
 		ApiToken: "abc123token",
 		Prefix:   "my.prefix",
 	}
-	i.Connect()

 	// Default to gauge
 	m1, _ := telegraf.NewMetric(
@@ -40,10 +39,8 @@ func TestWrite(t *testing.T) {
 		time.Date(2010, time.November, 10, 23, 0, 0, 0, time.UTC),
 	)

-	// Simulate a connection close and reconnect.
 	metrics := []telegraf.Metric{m1, m2}
 	i.Write(metrics)
-	i.Close()

 	// Counter and Histogram are increments
 	m3, _ := telegraf.NewMetric(
@@ -70,7 +67,6 @@ func TestWrite(t *testing.T) {
 	i.Write(metrics)

 	wg.Wait()
-	i.Close()
 }

 func TCPServer(t *testing.T, wg *sync.WaitGroup) {
@@ -82,10 +78,9 @@ func TCPServer(t *testing.T, wg *sync.WaitGroup) {
 	tp := textproto.NewReader(reader)

 	hello, _ := tp.ReadLine()
-	assert.Equal(t, "hello version go/telegraf/1.0", hello)
+	assert.Equal(t, "hello version go/telegraf/1.1", hello)
 	auth, _ := tp.ReadLine()
 	assert.Equal(t, "authenticate abc123token", auth)
 	conn.Write([]byte("ok\nok\n"))

 	data1, _ := tp.ReadLine()
@@ -99,10 +94,9 @@ func TCPServer(t *testing.T, wg *sync.WaitGroup) {
 	tp = textproto.NewReader(reader)

 	hello, _ = tp.ReadLine()
-	assert.Equal(t, "hello version go/telegraf/1.0", hello)
+	assert.Equal(t, "hello version go/telegraf/1.1", hello)
 	auth, _ = tp.ReadLine()
 	assert.Equal(t, "authenticate abc123token", auth)
 	conn.Write([]byte("ok\nok\n"))

 	data3, _ := tp.ReadLine()

View File

@@ -153,8 +153,7 @@ func (l *Librato) Description() string {

 func (l *Librato) buildGauges(m telegraf.Metric) ([]*Gauge, error) {
 	gauges := []*Gauge{}
-	serializer := graphite.GraphiteSerializer{Template: l.Template}
-	bucket := serializer.SerializeBucketName(m.Name(), m.Tags())
+	bucket := graphite.SerializeBucketName(m.Name(), m.Tags(), l.Template, "")
 	for fieldName, value := range m.Fields() {
 		gauge := &Gauge{
 			Name: graphite.InsertField(bucket, fieldName),

View File

@@ -5,27 +5,21 @@ import (
 	"log"
 	"net/http"
 	"regexp"
-	"strings"
+	"sync"

 	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/plugins/outputs"
 	"github.com/prometheus/client_golang/prometheus"
 )

-var (
-	sanitizedChars = strings.NewReplacer("/", "_", "@", "_", " ", "_", "-", "_", ".", "_")
-
-	// Prometheus metric names must match this regex
-	// see https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels
-	metricName = regexp.MustCompile("^[a-zA-Z_:][a-zA-Z0-9_:]*$")
-
-	// Prometheus labels must match this regex
-	// see https://prometheus.io/docs/concepts/data_model/#metric-names-and-labels
-	labelName = regexp.MustCompile("^[a-zA-Z_][a-zA-Z0-9_]*$")
-)
+var invalidNameCharRE = regexp.MustCompile(`[^a-zA-Z0-9_]`)

 type PrometheusClient struct {
 	Listen string
+
+	metrics map[string]prometheus.Metric
+
+	sync.Mutex
 }

 var sampleConfig = `
@@ -34,6 +28,7 @@ var sampleConfig = `
 `

 func (p *PrometheusClient) Start() error {
+	prometheus.MustRegister(p)
 	defer func() {
 		if r := recover(); r != nil {
 			// recovering from panic here because there is no way to stop a
@@ -78,25 +73,42 @@ func (p *PrometheusClient) Description() string {
 	return "Configuration for the Prometheus client to spawn"
 }

+// Implements prometheus.Collector
+func (p *PrometheusClient) Describe(ch chan<- *prometheus.Desc) {
+	prometheus.NewGauge(prometheus.GaugeOpts{Name: "Dummy", Help: "Dummy"}).Describe(ch)
+}
+
+// Implements prometheus.Collector
+func (p *PrometheusClient) Collect(ch chan<- prometheus.Metric) {
+	p.Lock()
+	defer p.Unlock()
+
+	for _, m := range p.metrics {
+		ch <- m
+	}
+}
+
 func (p *PrometheusClient) Write(metrics []telegraf.Metric) error {
+	p.Lock()
+	defer p.Unlock()
+
+	p.metrics = make(map[string]prometheus.Metric)
+
 	if len(metrics) == 0 {
 		return nil
 	}

 	for _, point := range metrics {
 		key := point.Name()
-		key = sanitizedChars.Replace(key)
+		key = invalidNameCharRE.ReplaceAllString(key, "_")

 		var labels []string
 		l := prometheus.Labels{}
 		for k, v := range point.Tags() {
-			k = sanitizedChars.Replace(k)
+			k = invalidNameCharRE.ReplaceAllString(k, "_")
 			if len(k) == 0 {
 				continue
 			}
-			if !labelName.MatchString(k) {
-				continue
-			}
 			labels = append(labels, k)
 			l[k] = v
 		}
@@ -111,7 +123,7 @@ func (p *PrometheusClient) Write(metrics []telegraf.Metric) error {
 			}

 			// sanitize the measurement name
-			n = sanitizedChars.Replace(n)
+			n = invalidNameCharRE.ReplaceAllString(n, "_")
 			var mname string
 			if n == "value" {
 				mname = key
@@ -119,50 +131,23 @@ func (p *PrometheusClient) Write(metrics []telegraf.Metric) error {
 				mname = fmt.Sprintf("%s_%s", key, n)
 			}

-			// verify that it is a valid measurement name
-			if !metricName.MatchString(mname) {
-				continue
-			}
-
-			mVec := prometheus.NewUntypedVec(
-				prometheus.UntypedOpts{
-					Name: mname,
-					Help: "Telegraf collected metric",
-				},
-				labels,
-			)
-			collector, err := prometheus.RegisterOrGet(mVec)
-			if err != nil {
-				log.Printf("prometheus_client: Metric failed to register with prometheus, %s", err)
-				continue
-			}
-			mVec, ok := collector.(*prometheus.UntypedVec)
-			if !ok {
-				continue
-			}
+			desc := prometheus.NewDesc(mname, "Telegraf collected metric", nil, l)
+			var metric prometheus.Metric
+			var err error

 			switch val := val.(type) {
 			case int64:
-				m, err := mVec.GetMetricWith(l)
-				if err != nil {
-					log.Printf("ERROR Getting metric in Prometheus output, "+
-						"key: %s, labels: %v,\nerr: %s\n",
-						mname, l, err.Error())
-					continue
-				}
-				m.Set(float64(val))
+				metric, err = prometheus.NewConstMetric(desc, prometheus.UntypedValue, float64(val))
 			case float64:
-				m, err := mVec.GetMetricWith(l)
-				if err != nil {
-					log.Printf("ERROR Getting metric in Prometheus output, "+
-						"key: %s, labels: %v,\nerr: %s\n",
-						mname, l, err.Error())
-					continue
-				}
-				m.Set(val)
+				metric, err = prometheus.NewConstMetric(desc, prometheus.UntypedValue, val)
 			default:
 				continue
 			}
+
+			if err != nil {
+				log.Printf("ERROR creating prometheus metric, "+
+					"key: %s, labels: %v,\nerr: %s\n",
+					mname, l, err.Error())
+			}
+
+			p.metrics[desc.String()] = metric
 		}
 	}
 	return nil
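
For context, here is a minimal standalone sketch of the collector pattern this file moves to: each Write() rebuilds the batch as const metrics, and the registered collector simply replays the stored batch on every scrape. The metric name and label below are invented for illustration:

```go
package main

import (
	"fmt"
	"sync"

	"github.com/prometheus/client_golang/prometheus"
)

// constCollector mirrors the pattern adopted above: keep the latest
// batch of metrics in a map and emit them on demand, instead of
// registering a vector per metric name up front.
type constCollector struct {
	sync.Mutex
	metrics map[string]prometheus.Metric
}

func (c *constCollector) Describe(ch chan<- *prometheus.Desc) {
	// Send a throwaway descriptor; the real metric set is only known
	// at write time.
	prometheus.NewGauge(prometheus.GaugeOpts{Name: "Dummy", Help: "Dummy"}).Describe(ch)
}

func (c *constCollector) Collect(ch chan<- prometheus.Metric) {
	c.Lock()
	defer c.Unlock()
	for _, m := range c.metrics {
		ch <- m
	}
}

func main() {
	c := &constCollector{metrics: make(map[string]prometheus.Metric)}
	prometheus.MustRegister(c)

	// Simulate one Write(): build a const metric and stash it.
	desc := prometheus.NewDesc("cpu_usage_idle", "Telegraf collected metric",
		nil, prometheus.Labels{"cpu": "cpu0"})
	m, err := prometheus.NewConstMetric(desc, prometheus.UntypedValue, 98.6)
	if err != nil {
		fmt.Println("const metric:", err)
		return
	}
	c.Lock()
	c.metrics[desc.String()] = m
	c.Unlock()

	fmt.Println("registered; each scrape now returns the stored batch")
}
```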

View File

@@ -10,22 +10,23 @@ import (

 const DEFAULT_TEMPLATE = "host.tags.measurement.field"

-var fieldDeleter = strings.NewReplacer(".FIELDNAME", "", "FIELDNAME.", "")
+var (
+	fieldDeleter   = strings.NewReplacer(".FIELDNAME", "", "FIELDNAME.", "")
+	sanitizedChars = strings.NewReplacer("/", "-", "@", "-", "*", "-", " ", "_", "..", ".")
+)

 type GraphiteSerializer struct {
 	Prefix   string
 	Template string
 }

-var sanitizedChars = strings.NewReplacer("/", "-", "@", "-", "*", "-", " ", "_", "..", ".")
-
 func (s *GraphiteSerializer) Serialize(metric telegraf.Metric) ([]string, error) {
 	out := []string{}

 	// Convert UnixNano to Unix timestamps
 	timestamp := metric.UnixNano() / 1000000000

-	bucket := s.SerializeBucketName(metric.Name(), metric.Tags())
+	bucket := SerializeBucketName(metric.Name(), metric.Tags(), s.Template, s.Prefix)
 	if bucket == "" {
 		return out, nil
 	}
@@ -51,12 +52,14 @@ func (s *GraphiteSerializer) Serialize(metric telegraf.Metric) ([]string, error)
 // FIELDNAME. It is up to the user to replace this. This is so that
 // SerializeBucketName can be called just once per measurement, rather than
 // once per field. See GraphiteSerializer.InsertField() function.
-func (s *GraphiteSerializer) SerializeBucketName(
+func SerializeBucketName(
 	measurement string,
 	tags map[string]string,
+	template string,
+	prefix string,
 ) string {
-	if s.Template == "" {
-		s.Template = DEFAULT_TEMPLATE
+	if template == "" {
+		template = DEFAULT_TEMPLATE
 	}

 	tagsCopy := make(map[string]string)
 	for k, v := range tags {
@@ -64,7 +67,7 @@ func (s *GraphiteSerializer) SerializeBucketName(
 	}

 	var out []string
-	templateParts := strings.Split(s.Template, ".")
+	templateParts := strings.Split(template, ".")
 	for _, templatePart := range templateParts {
 		switch templatePart {
 		case "measurement":
@@ -96,10 +99,10 @@ func (s *GraphiteSerializer) SerializeBucketName(
 		return ""
 	}

-	if s.Prefix == "" {
+	if prefix == "" {
 		return sanitizedChars.Replace(strings.Join(out, "."))
 	}

-	return sanitizedChars.Replace(s.Prefix + "." + strings.Join(out, "."))
+	return sanitizedChars.Replace(prefix + "." + strings.Join(out, "."))
 }

 // InsertField takes the bucket string from SerializeBucketName and replaces the
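
With `SerializeBucketName` now a package-level function, callers such as the Librato output above pass the template and prefix explicitly instead of mutating serializer state. A small usage sketch, assuming the `plugins/serializers/graphite` import path and tag values borrowed from the tests below:

```go
package main

import (
	"fmt"

	"github.com/influxdata/telegraf/plugins/serializers/graphite"
)

func main() {
	tags := map[string]string{
		"host":       "localhost",
		"cpu":        "cpu0",
		"datacenter": "us-west-2",
	}

	// Empty template falls back to DEFAULT_TEMPLATE
	// ("host.tags.measurement.field"); empty prefix adds nothing.
	bucket := graphite.SerializeBucketName("cpu", tags, "", "")
	fmt.Println(bucket) // localhost.cpu0.us-west-2.cpu.FIELDNAME
}
```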

View File

@@ -225,8 +225,7 @@ func TestSerializeBucketNameNoHost(t *testing.T) {
 	m, err := telegraf.NewMetric("cpu", tags, fields, now)
 	assert.NoError(t, err)

-	s := GraphiteSerializer{}
-	mS := s.SerializeBucketName(m.Name(), m.Tags())
+	mS := SerializeBucketName(m.Name(), m.Tags(), "", "")

 	expS := "cpu0.us-west-2.cpu.FIELDNAME"
 	assert.Equal(t, expS, mS)
@@ -240,8 +239,7 @@ func TestSerializeBucketNameHost(t *testing.T) {
 	m, err := telegraf.NewMetric("cpu", defaultTags, fields, now)
 	assert.NoError(t, err)

-	s := GraphiteSerializer{}
-	mS := s.SerializeBucketName(m.Name(), m.Tags())
+	mS := SerializeBucketName(m.Name(), m.Tags(), "", "")

 	expS := "localhost.cpu0.us-west-2.cpu.FIELDNAME"
 	assert.Equal(t, expS, mS)
@@ -255,8 +253,7 @@ func TestSerializeBucketNamePrefix(t *testing.T) {
 	m, err := telegraf.NewMetric("cpu", defaultTags, fields, now)
 	assert.NoError(t, err)

-	s := GraphiteSerializer{Prefix: "prefix"}
-	mS := s.SerializeBucketName(m.Name(), m.Tags())
+	mS := SerializeBucketName(m.Name(), m.Tags(), "", "prefix")

 	expS := "prefix.localhost.cpu0.us-west-2.cpu.FIELDNAME"
 	assert.Equal(t, expS, mS)
@@ -270,8 +267,7 @@ func TestTemplate1(t *testing.T) {
 	m, err := telegraf.NewMetric("cpu", defaultTags, fields, now)
 	assert.NoError(t, err)

-	s := GraphiteSerializer{Template: template1}
-	mS := s.SerializeBucketName(m.Name(), m.Tags())
+	mS := SerializeBucketName(m.Name(), m.Tags(), template1, "")

 	expS := "cpu0.us-west-2.localhost.cpu.FIELDNAME"
 	assert.Equal(t, expS, mS)
@@ -285,8 +281,7 @@ func TestTemplate2(t *testing.T) {
 	m, err := telegraf.NewMetric("cpu", defaultTags, fields, now)
 	assert.NoError(t, err)

-	s := GraphiteSerializer{Template: template2}
-	mS := s.SerializeBucketName(m.Name(), m.Tags())
+	mS := SerializeBucketName(m.Name(), m.Tags(), template2, "")

 	expS := "localhost.cpu.FIELDNAME"
 	assert.Equal(t, expS, mS)
@@ -300,8 +295,7 @@ func TestTemplate3(t *testing.T) {
 	m, err := telegraf.NewMetric("cpu", defaultTags, fields, now)
 	assert.NoError(t, err)

-	s := GraphiteSerializer{Template: template3}
-	mS := s.SerializeBucketName(m.Name(), m.Tags())
+	mS := SerializeBucketName(m.Name(), m.Tags(), template3, "")

 	expS := "localhost.cpu0.us-west-2.FIELDNAME"
 	assert.Equal(t, expS, mS)
@@ -315,8 +309,7 @@ func TestTemplate4(t *testing.T) {
 	m, err := telegraf.NewMetric("cpu", defaultTags, fields, now)
 	assert.NoError(t, err)

-	s := GraphiteSerializer{Template: template4}
-	mS := s.SerializeBucketName(m.Name(), m.Tags())
+	mS := SerializeBucketName(m.Name(), m.Tags(), template4, "")

 	expS := "localhost.cpu0.us-west-2.cpu"
 	assert.Equal(t, expS, mS)
@@ -330,8 +323,7 @@ func TestTemplate5(t *testing.T) {
 	m, err := telegraf.NewMetric("cpu", defaultTags, fields, now)
 	assert.NoError(t, err)

-	s := GraphiteSerializer{Template: template5}
-	mS := s.SerializeBucketName(m.Name(), m.Tags())
+	mS := SerializeBucketName(m.Name(), m.Tags(), template5, "")

 	expS := "localhost.us-west-2.cpu0.cpu.FIELDNAME"
 	assert.Equal(t, expS, mS)
@@ -345,8 +337,7 @@ func TestTemplate6(t *testing.T) {
 	m, err := telegraf.NewMetric("cpu", defaultTags, fields, now)
 	assert.NoError(t, err)

-	s := GraphiteSerializer{Template: template6}
-	mS := s.SerializeBucketName(m.Name(), m.Tags())
+	mS := SerializeBucketName(m.Name(), m.Tags(), template6, "")

 	expS := "localhost.cpu0.us-west-2.cpu.FIELDNAME"
 	assert.Equal(t, expS, mS)

View File

@@ -69,6 +69,8 @@ exit_if_fail telegraf -config $tmpdir/config.toml \
   -test -input-filter cpu:mem

 cat $GOPATH/bin/telegraf | gzip > $CIRCLE_ARTIFACTS/telegraf.gz
+go build -o telegraf-race -race -ldflags "-X main.version=${VERSION}-RACE" cmd/telegraf/telegraf.go
+cat telegraf-race | gzip > $CIRCLE_ARTIFACTS/telegraf-race.gz

 eval "git describe --exact-match HEAD"
 if [ $? -eq 0 ]; then

View File

@@ -37,6 +37,10 @@ chmod 755 $LOG_DIR
 if [[ -L /etc/init.d/telegraf ]]; then
     rm -f /etc/init.d/telegraf
 fi
+# Remove legacy symlink, if it exists
+if [[ -L /etc/systemd/system/telegraf.service ]]; then
+    rm -f /etc/systemd/system/telegraf.service
+fi

 # Add defaults file, if it doesn't exist
 if [[ ! -f /etc/default/telegraf ]]; then

View File

@@ -15,4 +15,3 @@ KillMode=control-group

 [Install]
 WantedBy=multi-user.target
-Alias=telegraf.service