Compare commits: plugins-rc...cam-extern (1 commit: 15e58c59fb)
.github/ISSUE_TEMPLATE.md (2 changes; vendored)

@@ -1,7 +1,7 @@
 ## Directions
 
 GitHub Issues are reserved for actionable bug reports and feature requests.
-General questions should be asked at the [InfluxData Community](https://community.influxdata.com) site.
+General questions should be sent to the [InfluxDB mailing list](https://groups.google.com/forum/#!forum/influxdb).
 
 Before opening an issue, search for similar bug reports or feature requests on GitHub Issues.
 If no similar issue can be found, fill out either the "Bug Report" or the "Feature Request" section below.
CHANGELOG.md (30 changes)

@@ -2,12 +2,6 @@
 ### Release Notes
 
-- Users of the windows `ping` plugin will need to drop or migrate their
-measurements in order to continue using the plugin. The reason for this is that
-the windows plugin was outputting a different type than the linux plugin. This
-made it impossible to use the `ping` plugin for both windows and linux
-machines.
-
 - Ceph: the `ceph_pgmap_state` metric content has been modified to use a unique field `count`, with each state expressed as a `state` tag.
 
 Telegraf < 1.3:

@@ -33,15 +27,8 @@ The previous riemann output will still be available using
 `outputs.riemann_legacy` if needed, but that will eventually be deprecated.
 It is highly recommended that all users migrate to the new riemann output plugin.
-
-- Generic [socket_listener](./plugins/inputs/socket_listener) and
-[socket_writer](./plugins/outputs/socket_writer) plugins have been implemented
-for receiving and sending UDP, TCP, unix, & unix-datagram data. These plugins
-will replace udp_listener and tcp_listener, which are still available but will
-be deprecated eventually.
 
 ### Features
 
-- [#2094](https://github.com/influxdata/telegraf/pull/2094): Add generic socket listener & writer.
 - [#2204](https://github.com/influxdata/telegraf/pull/2204): Extend http_response to support searching for a substring in response. Return 1 if found, else 0.
 - [#2137](https://github.com/influxdata/telegraf/pull/2137): Added userstats to mysql input plugin.
 - [#2179](https://github.com/influxdata/telegraf/pull/2179): Added more InnoDB metric to MySQL plugin.

@@ -54,30 +41,18 @@ be deprecated eventually.
 - [#2201](https://github.com/influxdata/telegraf/pull/2201): Add lock option to the IPtables input plugin.
 - [#2244](https://github.com/influxdata/telegraf/pull/2244): Support ipmi_sensor plugin querying local ipmi sensors.
 - [#2339](https://github.com/influxdata/telegraf/pull/2339): Increment gather_errors for all errors emitted by inputs.
-- [#2071](https://github.com/influxdata/telegraf/issues/2071): Use official docker SDK.
-- [#1678](https://github.com/influxdata/telegraf/pull/1678): Add AMQP consumer input plugin
 
 ### Bugfixes
 
 - [#2077](https://github.com/influxdata/telegraf/issues/2077): SQL Server Input - Arithmetic overflow error converting numeric to data type int.
 - [#2262](https://github.com/influxdata/telegraf/issues/2262): Flush jitter can inhibit metric collection.
+- [#2287](https://github.com/influxdata/telegraf/issues/2287): Kubernetes input: Handle null startTime for stopped pods
+- [#1636](https://github.com/influxdata/telegraf/issues/1636): procstat - stop caching PIDs.
 - [#2318](https://github.com/influxdata/telegraf/issues/2318): haproxy input - Add missing fields.
 - [#2287](https://github.com/influxdata/telegraf/issues/2287): Kubernetes input: Handle null startTime for stopped pods.
 - [#2356](https://github.com/influxdata/telegraf/issues/2356): cpu input panic when /proc/stat is empty.
 - [#2341](https://github.com/influxdata/telegraf/issues/2341): telegraf swallowing panics in --test mode.
 - [#2358](https://github.com/influxdata/telegraf/pull/2358): Create pidfile with 644 permissions & defer file deletion.
-- [#2282](https://github.com/influxdata/telegraf/issues/2282): Reloading telegraf freezes prometheus output.
-- [#2390](https://github.com/influxdata/telegraf/issues/2390): Empty tag value causes error on InfluxDB output.
-- [#2380](https://github.com/influxdata/telegraf/issues/2380): buffer_size field value is negative number from "internal" plugin.
-- [#2414](https://github.com/influxdata/telegraf/issues/2414): Missing error handling in the MySQL plugin leads to segmentation violation.
-- [#2462](https://github.com/influxdata/telegraf/pull/2462): Fix type conflict in windows ping plugin.
-- [#2178](https://github.com/influxdata/telegraf/issues/2178): logparser: regexp with lookahead.
-- [#2466](https://github.com/influxdata/telegraf/issues/2466): Telegraf can crash in LoadDirectory on 0600 files.
-- [#2215](https://github.com/influxdata/telegraf/issues/2215): Iptables input: document better that rules without a comment are ignored.
-- [#2483](https://github.com/influxdata/telegraf/pull/2483): Fix win_perf_counters capping values at 100.
-- [#2498](https://github.com/influxdata/telegraf/pull/2498): Exporting Ipmi.Path to be set by config.
-- [#2500](https://github.com/influxdata/telegraf/pull/2500): Remove warning if parse empty content
-- [#2513](https://github.com/influxdata/telegraf/issues/2513): create /etc/telegraf/telegraf.d directory in tarball.
 
 ## v1.2.1 [2017-02-01]
 

@@ -137,6 +112,7 @@ plugins, not just statsd.
 - [#1980](https://github.com/influxdata/telegraf/issues/1980): Hide username/password from elasticsearch error log messages.
 - [#2097](https://github.com/influxdata/telegraf/issues/2097): Configurable HTTP timeouts in Jolokia plugin
 - [#2255](https://github.com/influxdata/telegraf/pull/2255): Allow changing jolokia attribute delimiter
+- [#2094](https://github.com/influxdata/telegraf/pull/2094): Add generic socket listener & writer.
 
 ### Bugfixes
 
Godeps (10 changes)

@@ -9,7 +9,10 @@ github.com/couchbase/go-couchbase bfe555a140d53dc1adf390f1a1d4b0fd4ceadb28
 github.com/couchbase/gomemcached 4a25d2f4e1dea9ea7dd76dfd943407abf9b07d29
 github.com/couchbase/goutils 5823a0cbaaa9008406021dc5daf80125ea30bba6
 github.com/davecgh/go-spew 346938d642f2ec3594ed81d874461961cd0faa76
-github.com/docker/docker b89aff1afa1f61993ab2ba18fd62d9375a195f5d
+github.com/docker/distribution fb0bebc4b64e3881cc52a2478d749845ed76d2a8
+github.com/docker/engine-api 4290f40c056686fcaa5c9caf02eac1dde9315adf
+github.com/docker/go-connections 9670439d95da2651d9dfc7acc5d2ed92d3f25ee6
+github.com/docker/go-units 0dadbb0345b35ec7ef35e228dabb8de89a65bf52
 github.com/eapache/go-resiliency b86b1ec0dd4209a588dc1285cdd471e73525c0b3
 github.com/eapache/go-xerial-snappy bb955e01b9346ac19dc29eb16586c90ded99a98c
 github.com/eapache/queue 44cc805cf13205b55f69e14bcb69867d1ae92f98

@@ -22,7 +25,8 @@ github.com/gorilla/mux 392c28fe23e1c45ddba891b0320b3b5df220beea
 github.com/hailocab/go-hostpool e80d13ce29ede4452c43dea11e79b9bc8a15b478
 github.com/hashicorp/consul 63d2fc68239b996096a1c55a0d4b400ea4c2583f
 github.com/hpcloud/tail 915e5feba042395f5fda4dbe9c0e99aeab3088b3
-github.com/influxdata/toml 5d1d907f22ead1cd47adde17ceec5bda9cacaf8f
+github.com/influxdata/config 8ec4638a81500c20be24855812bc8498ebe2dc92
+github.com/influxdata/toml ad49a5c2936f96b8f5943c3fdba47630ccf45a0d
 github.com/influxdata/wlog 7c63b0a71ef8300adc255344d275e10e5c3a71ec
 github.com/jackc/pgx c8080fc4a1bfa44bf90383ad0fdce2f68b7d313c
 github.com/kardianos/osext c2c54e542fb797ad986b31721e1baedf214ca413

@@ -44,7 +48,7 @@ github.com/prometheus/common dd2f054febf4a6c00f2343686efb775948a8bff4
 github.com/prometheus/procfs 1878d9fbb537119d24b21ca07effd591627cd160
 github.com/rcrowley/go-metrics 1f30fe9094a513ce4c700b9a54458bbb0c96996c
 github.com/samuel/go-zookeeper 1d7be4effb13d2d908342d349d71a284a7542693
-github.com/shirou/gopsutil d371ba1293cb48fedc6850526ea48b3846c54f2c
+github.com/shirou/gopsutil 77b5d0080adb6f028e457906f1944d9fcca34442
 github.com/soniah/gosnmp 5ad50dc75ab389f8a1c9f8a67d3a1cd85f67ed15
 github.com/streadway/amqp 63795daa9a446c920826655f26ba31c81c860fd6
 github.com/stretchr/testify 4d4bfba8f1d1027c4fdbe371823030df51419987
Makefile (3 changes)

@@ -15,7 +15,8 @@ windows: prepare-windows build-windows
 
 # Only run the build (no dependency grabbing)
 build:
-	go install -ldflags "-X main.version=$(VERSION) -X main.commit=$(COMMIT) -X main.branch=$(BRANCH)" ./...
+	go install -ldflags \
+		"-X main.version=$(VERSION) -X main.commit=$(COMMIT) -X main.branch=$(BRANCH)" ./...
 
 build-windows:
 	GOOS=windows GOARCH=amd64 go build -o telegraf.exe -ldflags \
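The Makefile hunk only reflows the `go install` line; the `-X main.version=...` flags themselves ask the linker to overwrite string variables in package `main` at build time. A minimal runnable sketch of that mechanism (the variable names mirror the Makefile, but the program is illustrative, not Telegraf's actual `main`):

```go
// Build normally and the variables stay empty; build with
//   go build -ldflags "-X main.version=1.3.0" .
// and the linker fills them in. This is why cmd/telegraf's init()
// falls back to "unknown" when they are unset.
package main

import "fmt"

var (
	version string
	commit  string
	branch  string
)

// describe formats the version string, defaulting to "unknown" when the
// binary was built without -ldflags, just as the init() in the diff does.
func describe() string {
	v := version
	if v == "" {
		v = "unknown"
	}
	return "Telegraf v" + v
}

func main() {
	fmt.Println(describe()) // prints "Telegraf vunknown" without -ldflags
}
```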
README.md (16 changes)

@@ -43,7 +43,7 @@ Ansible role: https://github.com/rossmcdonald/telegraf
 
 Telegraf manages dependencies via [gdm](https://github.com/sparrc/gdm),
 which gets installed via the Makefile
-if you don't have it already. You also must build with golang version 1.8+.
+if you don't have it already. You also must build with golang version 1.5+.
 
 1. [Install Go](https://golang.org/doc/install)
 2. [Setup your GOPATH](https://golang.org/doc/code.html#GOPATH)

@@ -97,14 +97,12 @@ configuration options.
 
 ## Input Plugins
 
-* [aerospike](./plugins/inputs/aerospike)
-* [amqp_consumer](./plugins/inputs/amqp_consumer) (rabbitmq)
-* [apache](./plugins/inputs/apache)
 * [aws cloudwatch](./plugins/inputs/cloudwatch)
+* [aerospike](./plugins/inputs/aerospike)
+* [apache](./plugins/inputs/apache)
 * [bcache](./plugins/inputs/bcache)
 * [cassandra](./plugins/inputs/cassandra)
 * [ceph](./plugins/inputs/ceph)
-* [cgroup](./plugins/inputs/cgroup)
 * [chrony](./plugins/inputs/chrony)
 * [consul](./plugins/inputs/consul)
 * [conntrack](./plugins/inputs/conntrack)

@@ -186,8 +184,8 @@ Telegraf can also collect metrics via the following service plugins:
 * [statsd](./plugins/inputs/statsd)
 * [socket_listener](./plugins/inputs/socket_listener)
 * [tail](./plugins/inputs/tail)
-* [tcp_listener](./plugins/inputs/socket_listener)
-* [udp_listener](./plugins/inputs/socket_listener)
+* [tcp_listener](./plugins/inputs/tcp_listener)
+* [udp_listener](./plugins/inputs/udp_listener)
 * [webhooks](./plugins/inputs/webhooks)
   * [filestack](./plugins/inputs/webhooks/filestack)
   * [github](./plugins/inputs/webhooks/github)

@@ -222,11 +220,9 @@ Telegraf can also collect metrics via the following service plugins:
 * [nsq](./plugins/outputs/nsq)
 * [opentsdb](./plugins/outputs/opentsdb)
 * [prometheus](./plugins/outputs/prometheus_client)
+* [socket_writer](./plugins/outputs/socket_writer)
 * [riemann](./plugins/outputs/riemann)
 * [riemann_legacy](./plugins/outputs/riemann_legacy)
-* [socket_writer](./plugins/outputs/socket_writer)
-* [tcp](./plugins/outputs/socket_writer)
-* [udp](./plugins/outputs/socket_writer)
 
 ## Contributing
 
@@ -191,12 +191,6 @@ func (a *Agent) Test() error {
 	}()
 
 	for _, input := range a.Config.Inputs {
-		if _, ok := input.Input.(telegraf.ServiceInput); ok {
-			fmt.Printf("\nWARNING: skipping plugin [[%s]]: service inputs not supported in --test mode\n",
-				input.Name())
-			continue
-		}
-
 		acc := NewAccumulator(input, metricC)
 		acc.SetPrecision(a.Config.Agent.Precision.Duration,
 			a.Config.Agent.Interval.Duration)

@@ -215,7 +209,7 @@ func (a *Agent) Test() error {
 		// Special instructions for some inputs. cpu, for example, needs to be
 		// run twice in order to return cpu usage percentages.
 		switch input.Name() {
-		case "inputs.cpu", "inputs.mongodb", "inputs.procstat":
+		case "cpu", "mongodb", "procstat":
 			time.Sleep(500 * time.Millisecond)
 			fmt.Printf("* Plugin: %s, Collection 2\n", input.Name())
 			if err := input.Input.Gather(acc); err != nil {

@@ -398,6 +392,5 @@ func (a *Agent) Run(shutdown chan struct{}) error {
 	}
 
 	wg.Wait()
-	a.Close()
 	return nil
 }
@@ -5,8 +5,8 @@ machine:
     - sudo service zookeeper stop
     - go version
     - sudo rm -rf /usr/local/go
-    - wget https://storage.googleapis.com/golang/go1.8.linux-amd64.tar.gz
-    - sudo tar -C /usr/local -xzf go1.8.linux-amd64.tar.gz
+    - wget https://storage.googleapis.com/golang/go1.8rc3.linux-amd64.tar.gz
+    - sudo tar -C /usr/local -xzf go1.8rc3.linux-amd64.tar.gz
     - go version
 
 dependencies:
@@ -13,20 +13,15 @@ import (
 	"strings"
 	"syscall"
 
-	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/agent"
 	"github.com/influxdata/telegraf/internal/config"
 	"github.com/influxdata/telegraf/logger"
-	"github.com/influxdata/telegraf/plugins/aggregators"
-	"github.com/influxdata/telegraf/plugins/inputs"
-	"github.com/influxdata/telegraf/plugins/outputs"
-	"github.com/influxdata/telegraf/plugins/processors"
-
 	_ "github.com/influxdata/telegraf/plugins/aggregators/all"
+	"github.com/influxdata/telegraf/plugins/inputs"
 	_ "github.com/influxdata/telegraf/plugins/inputs/all"
+	"github.com/influxdata/telegraf/plugins/outputs"
 	_ "github.com/influxdata/telegraf/plugins/outputs/all"
 	_ "github.com/influxdata/telegraf/plugins/processors/all"
 
 	"github.com/kardianos/service"
 )
 

@@ -58,29 +53,25 @@ var fUsage = flag.String("usage", "",
 	"print usage for a plugin, ie, 'telegraf -usage mysql'")
 var fService = flag.String("service", "",
 	"operate on the service")
-var fPlugins = flag.String("external-plugins", "",
+var fPlugins = flag.String("plugins", "",
 	"path to directory containing external plugins")
 
 // Telegraf version, populated linker.
 // ie, -ldflags "-X main.version=`git describe --always --tags`"
 var (
 	version string
 	commit  string
 	branch  string
-	goversion string
 )
 
 func init() {
-	if version == "" {
-		version = "unknown"
-	}
+	// If commit or branch are not set, make that clear.
 	if commit == "" {
 		commit = "unknown"
 	}
 	if branch == "" {
 		branch = "unknown"
 	}
-	goversion = runtime.Version() + " " + runtime.GOOS + "/" + runtime.GOARCH
 }
 
 const usage = `Telegraf, The plugin-driven server agent for collecting and reporting metrics.

@@ -97,9 +88,6 @@ The commands & flags are:
 	--config <file>     configuration file to load
 	--test              gather metrics once, print them to stdout, and exit
 	--config-directory  directory containing additional *.conf files
-	--external-plugins  directory containing *.so files, this directory will be
-	                    searched recursively. Any Plugin found will be loaded
-	                    and namespaced.
 	--input-filter      filter the input plugins to enable, separator is :
 	--output-filter     filter the output plugins to enable, separator is :
 	--usage             print usage for a plugin, ie, 'telegraf --usage mysql'

@@ -205,8 +193,7 @@ func reloadLoop(
 		}
 	}()
 
-	log.Printf("I! Starting Telegraf (version %s), Go version: %s\n",
-		version, goversion)
+	log.Printf("I! Starting Telegraf (version %s)\n", version)
 	log.Printf("I! Loaded outputs: %s", strings.Join(c.OutputNames(), " "))
 	log.Printf("I! Loaded inputs: %s", strings.Join(c.InputNames(), " "))
 	log.Printf("I! Tags enabled: %s", c.ListTags())

@@ -266,8 +253,8 @@ func (p *program) Stop(s service.Service) error {
 
 // loadExternalPlugins loads external plugins from shared libraries (.so, .dll, etc.)
 // in the specified directory.
-func loadExternalPlugins(rootDir string) error {
-	return filepath.Walk(rootDir, func(pth string, info os.FileInfo, err error) error {
+func loadExternalPlugins(dir string) error {
+	return filepath.Walk(dir, func(pth string, info os.FileInfo, err error) error {
 		// Stop if there was an error.
 		if err != nil {
 			return err

@@ -284,68 +271,30 @@ func loadExternalPlugins(rootDir string) error {
 			return nil
 		}
 
-		// name will be the path to the plugin file beginning at the root
-		// directory, minus the extension.
-		// ie, if the plugin file is /opt/telegraf-plugins/group1/foo.so, name
-		// will be "group1/foo"
-		name := strings.TrimPrefix(strings.TrimPrefix(pth, rootDir), string(os.PathSeparator))
-		name = strings.TrimSuffix(name, filepath.Ext(pth))
-		name = "external" + string(os.PathSeparator) + name
-
 		// Load plugin.
-		p, err := plugin.Open(pth)
+		_, err = plugin.Open(pth)
 		if err != nil {
-			return fmt.Errorf("error loading [%s]: %s", pth, err)
-		}
-
-		s, err := p.Lookup("Plugin")
-		if err != nil {
-			fmt.Printf("ERROR Could not find 'Plugin' symbol in [%s]\n", pth)
-			return nil
-		}
-
-		switch tplugin := s.(type) {
-		case *telegraf.Input:
-			fmt.Printf("Adding external input plugin: %s\n", name)
-			inputs.Add(name, func() telegraf.Input { return *tplugin })
-		case *telegraf.Output:
-			fmt.Printf("Adding external output plugin: %s\n", name)
-			outputs.Add(name, func() telegraf.Output { return *tplugin })
-		case *telegraf.Processor:
-			fmt.Printf("Adding external processor plugin: %s\n", name)
-			processors.Add(name, func() telegraf.Processor { return *tplugin })
-		case *telegraf.Aggregator:
-			fmt.Printf("Adding external aggregator plugin: %s\n", name)
-			aggregators.Add(name, func() telegraf.Aggregator { return *tplugin })
-		default:
-			fmt.Printf("ERROR: 'Plugin' symbol from [%s] is not a telegraf interface, it has type: %T\n", pth, tplugin)
+			return fmt.Errorf("error opening [%s]: %s", pth, err)
 		}
 
 		return nil
 	})
 }
 
-func printVersion() {
-	fmt.Printf(`Telegraf %s
-branch: %s
-commit: %s
-go version: %s
-`, version, branch, commit, goversion)
-}
-
 func main() {
 	flag.Usage = func() { usageExit(0) }
 	flag.Parse()
 	args := flag.Args()
 
 	// Load external plugins, if requested.
 	if *fPlugins != "" {
 		pluginsDir, err := filepath.Abs(*fPlugins)
 		if err != nil {
-			log.Fatal(err.Error())
+			log.Fatal("E! " + err.Error())
 		}
-		fmt.Printf("Loading external plugins from: %s\n", pluginsDir)
+		log.Printf("I! Loading external plugins from: %s\n", pluginsDir)
 		if err := loadExternalPlugins(*fPlugins); err != nil {
-			log.Fatal(err.Error())
+			log.Fatal("E! " + err.Error())
 		}
 	}
 

@@ -368,7 +317,7 @@ func main() {
 	if len(args) > 0 {
 		switch args[0] {
 		case "version":
-			printVersion()
+			fmt.Printf("Telegraf v%s (git: %s %s)\n", version, branch, commit)
 			return
 		case "config":
 			config.PrintSampleConfig(

@@ -396,7 +345,7 @@ func main() {
 		}
 		return
 	case *fVersion:
-		printVersion()
+		fmt.Printf("Telegraf v%s (git: %s %s)\n", version, branch, commit)
 		return
 	case *fSampleConfig:
 		config.PrintSampleConfig(
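The large hunk above removes the logic that derived a namespaced plugin name from the shared-library path. As context for what that code computed, here is a standalone sketch of the derivation (the `pluginName` helper is illustrative; the expected result assumes a Unix path separator):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// pluginName reproduces the removed name derivation: the plugin path below
// the root directory, minus its file extension, namespaced under "external".
func pluginName(rootDir, pth string) string {
	name := strings.TrimPrefix(strings.TrimPrefix(pth, rootDir), string(os.PathSeparator))
	name = strings.TrimSuffix(name, filepath.Ext(pth))
	return "external" + string(os.PathSeparator) + name
}

func main() {
	// e.g. /opt/telegraf-plugins/group1/foo.so -> external/group1/foo (on Unix)
	fmt.Println(pluginName("/opt/telegraf-plugins", "/opt/telegraf-plugins/group1/foo.so"))
}
```

Dropping this means loaded plugins are no longer registered under an `external/...` name; the new code only verifies that `plugin.Open` succeeds.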
@@ -24,16 +24,6 @@ Environment variables can be used anywhere in the config file, simply prepend
 them with $. For strings the variable must be within quotes (ie, "$STR_VAR"),
 for numbers and booleans they should be plain (ie, $INT_VAR, $BOOL_VAR)
 
-## Configuration file locations
-
-The location of the configuration file can be set via the `--config` command
-line flag. Telegraf will also pick up all files matching the pattern `*.conf` if
-the `-config-directory` command line flag is used.
-
-On most systems, the default locations are `/etc/telegraf/telegraf.conf` for
-the main configuration file and `/etc/telegraf/telegraf.d` for the directory of
-configuration files.
-
 # Global Tags
 
 Global tags can be specified in the `[global_tags]` section of the config file

@@ -361,4 +351,4 @@ to the system load metrics due to the `namepass` parameter.
 
 [[outputs.file]]
     files = ["stdout"]
 ```
@@ -117,8 +117,7 @@
       Instances = ["*"]
       Counters = [
         "% Idle Time",
-        "% Disk Time",
-        "% Disk Read Time",
+        "% Disk Time","% Disk Read Time",
         "% Disk Write Time",
         "Current Disk Queue Length",
         "% Free Space",
@@ -25,6 +25,7 @@ import (
 	"github.com/influxdata/telegraf/plugins/processors"
 	"github.com/influxdata/telegraf/plugins/serializers"
 
+	"github.com/influxdata/config"
 	"github.com/influxdata/toml"
 	"github.com/influxdata/toml/ast"
 )

@@ -39,14 +40,6 @@ var (
 
 	// envVarRe is a regex to find environment variables in the config file
 	envVarRe = regexp.MustCompile(`\$\w+`)
 
-	// addQuoteRe is a regex for finding and adding quotes around / characters
-	// when they are used for distinguishing external plugins.
-	// ie, a ReplaceAll() with this pattern will be used to turn this:
-	//   [[inputs.external/test/example]]
-	// to
-	//   [[inputs."external/test/example"]]
-	addQuoteRe = regexp.MustCompile(`(\[?\[?inputs|outputs|processors|aggregators)\.(external\/[^.\]]+)`)
 )
 
 // Config specifies the URL/user/password for the database that telegraf
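The `addQuoteRe` pattern removed above rewrites external-plugin section names so the TOML parser treats the slash-containing path as a single quoted key. A minimal standalone sketch of that rewrite, using the exact regex and the example input from the deleted comment:

```go
package main

import (
	"fmt"
	"regexp"
)

// addQuoteRe matches a config section prefix (inputs/outputs/...) followed
// by a slash-containing external plugin path, capturing both parts.
var addQuoteRe = regexp.MustCompile(`(\[?\[?inputs|outputs|processors|aggregators)\.(external\/[^.\]]+)`)

// quoteExternal wraps the external plugin path in quotes so TOML parses
// it as one key instead of choking on the slashes.
func quoteExternal(s string) string {
	return string(addQuoteRe.ReplaceAll([]byte(s), []byte(`$1."$2"`)))
}

func main() {
	fmt.Println(quoteExternal(`[[inputs.external/test/example]]`))
	// [[inputs."external/test/example"]]
}
```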
@@ -513,10 +506,6 @@ func PrintOutputConfig(name string) error {
 
 func (c *Config) LoadDirectory(path string) error {
 	walkfn := func(thispath string, info os.FileInfo, _ error) error {
-		if info == nil {
-			log.Printf("W! Telegraf is not permitted to read %s", thispath)
-			return nil
-		}
 		if info.IsDir() {
 			return nil
 		}

@@ -577,7 +566,7 @@ func (c *Config) LoadConfig(path string) error {
 			if !ok {
 				return fmt.Errorf("%s: invalid configuration", path)
 			}
-			if err = toml.UnmarshalTable(subTable, c.Tags); err != nil {
+			if err = config.UnmarshalTable(subTable, c.Tags); err != nil {
 				log.Printf("E! Could not parse [global_tags] config\n")
 				return fmt.Errorf("Error parsing %s, %s", path, err)
 			}

@@ -590,7 +579,7 @@ func (c *Config) LoadConfig(path string) error {
 			if !ok {
 				return fmt.Errorf("%s: invalid configuration", path)
 			}
-			if err = toml.UnmarshalTable(subTable, c.Agent); err != nil {
+			if err = config.UnmarshalTable(subTable, c.Agent); err != nil {
 				log.Printf("E! Could not parse [agent] config\n")
 				return fmt.Errorf("Error parsing %s, %s", path, err)
 			}

@@ -712,9 +701,6 @@ func parseFile(fpath string) (*ast.Table, error) {
 		}
 	}
 
-	// add quotes around external plugin paths.
-	contents = addQuoteRe.ReplaceAll(contents, []byte(`$1."$2"`))
-
 	return toml.Parse(contents)
 }
 

@@ -730,7 +716,7 @@ func (c *Config) addAggregator(name string, table *ast.Table) error {
 		return err
 	}
 
-	if err := toml.UnmarshalTable(table, aggregator); err != nil {
+	if err := config.UnmarshalTable(table, aggregator); err != nil {
 		return err
 	}
 

@@ -750,7 +736,7 @@ func (c *Config) addProcessor(name string, table *ast.Table) error {
 		return err
 	}
 
-	if err := toml.UnmarshalTable(table, processor); err != nil {
+	if err := config.UnmarshalTable(table, processor); err != nil {
 		return err
 	}
 

@@ -790,7 +776,7 @@ func (c *Config) addOutput(name string, table *ast.Table) error {
 		return err
 	}
 
-	if err := toml.UnmarshalTable(table, output); err != nil {
+	if err := config.UnmarshalTable(table, output); err != nil {
 		return err
 	}
 

@@ -831,7 +817,7 @@ func (c *Config) addInput(name string, table *ast.Table) error {
 		return err
 	}
 
-	if err := toml.UnmarshalTable(table, input); err != nil {
+	if err := config.UnmarshalTable(table, input); err != nil {
 		return err
 	}
 

@@ -923,7 +909,7 @@ func buildAggregator(name string, tbl *ast.Table) (*models.AggregatorConfig, err
 	conf.Tags = make(map[string]string)
 	if node, ok := tbl.Fields["tags"]; ok {
 		if subtbl, ok := node.(*ast.Table); ok {
-			if err := toml.UnmarshalTable(subtbl, conf.Tags); err != nil {
+			if err := config.UnmarshalTable(subtbl, conf.Tags); err != nil {
 				log.Printf("Could not parse tags for input %s\n", name)
 			}
 		}

@@ -1160,7 +1146,7 @@ func buildInput(name string, tbl *ast.Table) (*models.InputConfig, error) {
 	cp.Tags = make(map[string]string)
 	if node, ok := tbl.Fields["tags"]; ok {
 		if subtbl, ok := node.(*ast.Table); ok {
-			if err := toml.UnmarshalTable(subtbl, cp.Tags); err != nil {
+			if err := config.UnmarshalTable(subtbl, cp.Tags); err != nil {
 				log.Printf("E! Could not parse tags for input %s\n", name)
 			}
 		}
@@ -122,9 +122,9 @@ func (ro *RunningOutput) AddMetric(m telegraf.Metric) {
 // Write writes all cached points to this output.
 func (ro *RunningOutput) Write() error {
 	nFails, nMetrics := ro.failMetrics.Len(), ro.metrics.Len()
-	ro.BufferSize.Set(int64(nFails + nMetrics))
 	log.Printf("D! Output [%s] buffer fullness: %d / %d metrics. ",
 		ro.Name, nFails+nMetrics, ro.MetricBufferLimit)
+	ro.BufferSize.Incr(int64(nFails + nMetrics))
 	var err error
 	if !ro.failMetrics.IsEmpty() {
 		// how many batches of failed writes we need to write.

@@ -176,6 +176,7 @@ func (ro *RunningOutput) write(metrics []telegraf.Metric) error {
 		log.Printf("D! Output [%s] wrote batch of %d metrics in %s\n",
 			ro.Name, nMetrics, elapsed)
 		ro.MetricsWritten.Incr(int64(nMetrics))
+		ro.BufferSize.Incr(-int64(nMetrics))
 		ro.WriteTime.Incr(elapsed.Nanoseconds())
 	}
 	return err
@@ -44,18 +44,13 @@ func New(
 	// pre-allocate exact size of the tags slice
 	taglen := 0
 	for k, v := range tags {
-		if len(k) == 0 || len(v) == 0 {
-			continue
-		}
+		// TODO check that length of tag key & value are > 0
 		taglen += 2 + len(escape(k, "tagkey")) + len(escape(v, "tagval"))
 	}
 	m.tags = make([]byte, taglen)
 
 	i := 0
 	for k, v := range tags {
-		if len(k) == 0 || len(v) == 0 {
-			continue
-		}
 		m.tags[i] = ','
 		i++
 		i += copy(m.tags[i:], escape(k, "tagkey"))
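The guard removed above skipped tags whose key or value is empty; without it, a tag like `"": "valuewithoutkey"` would serialize into malformed line protocol. A small sketch of the skip logic on its own — the escape-free serialization here is a simplification (the real code escapes commas, spaces, and equals signs):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// tagString joins tags as ,k=v pairs, skipping empty keys and values the
// way the removed guard in metric.New did. No escaping is performed here;
// the real implementation escapes special characters in keys and values.
func tagString(tags map[string]string) string {
	keys := make([]string, 0, len(tags))
	for k := range tags {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic order for the example
	var b strings.Builder
	for _, k := range keys {
		v := tags[k]
		if len(k) == 0 || len(v) == 0 {
			continue // drop tags that would corrupt line protocol
		}
		fmt.Fprintf(&b, ",%s=%s", k, v)
	}
	return b.String()
}

func main() {
	fmt.Println(tagString(map[string]string{
		"host":     "localhost",
		"emptytag": "",
		"":         "valuewithoutkey",
	}))
	// ,host=localhost
}
```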
@@ -625,26 +625,3 @@ func TestNewMetricFailNaN(t *testing.T) {
 	_, err := New("cpu", tags, fields, now)
 	assert.NoError(t, err)
 }
-
-func TestEmptyTagValueOrKey(t *testing.T) {
-	now := time.Now()
-
-	tags := map[string]string{
-		"host":     "localhost",
-		"emptytag": "",
-		"":         "valuewithoutkey",
-	}
-	fields := map[string]interface{}{
-		"usage_idle": float64(99),
-	}
-	m, err := New("cpu", tags, fields, now)
-
-	assert.True(t, m.HasTag("host"))
-	assert.False(t, m.HasTag("emptytag"))
-	assert.Equal(t,
-		fmt.Sprintf("cpu,host=localhost usage_idle=99 %d\n", now.UnixNano()),
-		m.String())
-
-	assert.NoError(t, err)
-
-}
@@ -44,9 +44,6 @@ func Parse(buf []byte) ([]telegraf.Metric, error) {
 }
 
 func ParseWithDefaultTime(buf []byte, t time.Time) ([]telegraf.Metric, error) {
-	if len(buf) == 0 {
-		return []telegraf.Metric{}, nil
-	}
 	if len(buf) <= 6 {
 		return []telegraf.Metric{}, makeError("buffer too short", buf, 0)
 	}

@@ -2,7 +2,6 @@ package all
 
 import (
 	_ "github.com/influxdata/telegraf/plugins/inputs/aerospike"
-	_ "github.com/influxdata/telegraf/plugins/inputs/amqp_consumer"
 	_ "github.com/influxdata/telegraf/plugins/inputs/apache"
 	_ "github.com/influxdata/telegraf/plugins/inputs/bcache"
 	_ "github.com/influxdata/telegraf/plugins/inputs/cassandra"
@@ -1,47 +0,0 @@
-# AMQP Consumer Input Plugin
-
-This plugin provides a consumer for use with AMQP 0-9-1, a promenent implementation of this protocol being [RabbitMQ](https://www.rabbitmq.com/).
-
-Metrics are read from a topic exchange using the configured queue and binding_key.
-
-Message payload should be formatted in one of the [Telegraf Data Formats](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md).
-
-For an introduction to AMQP see:
-- https://www.rabbitmq.com/tutorials/amqp-concepts.html
-- https://www.rabbitmq.com/getstarted.html
-
-The following defaults are known to work with RabbitMQ:
-
-```toml
-# AMQP consumer plugin
-[[inputs.amqp_consumer]]
-  ## AMQP url
-  url = "amqp://localhost:5672/influxdb"
-  ## AMQP exchange
-  exchange = "telegraf"
-  ## AMQP queue name
-  queue = "telegraf"
-  ## Binding Key
-  binding_key = "#"
-
-  ## Controls how many messages the server will try to keep on the network
-  ## for consumers before receiving delivery acks.
-  #prefetch_count = 50
-
-  ## Auth method. PLAIN and EXTERNAL are supported.
-  ## Using EXTERNAL requires enabling the rabbitmq_auth_mechanism_ssl plugin as
-  ## described here: https://www.rabbitmq.com/plugins.html
-  # auth_method = "PLAIN"
-  ## Optional SSL Config
-  # ssl_ca = "/etc/telegraf/ca.pem"
-  # ssl_cert = "/etc/telegraf/cert.pem"
-  # ssl_key = "/etc/telegraf/key.pem"
-  ## Use SSL but skip chain & host verification
-  # insecure_skip_verify = false
-
-  ## Data format to output.
-  ## Each data format has it's own unique set of configuration options, read
-  ## more about them here:
-  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
-  data_format = "influx"
-```
@@ -1,280 +0,0 @@
-package amqp_consumer
-
-import (
-	"fmt"
-	"log"
-	"strings"
-	"sync"
-	"time"
-
-	"github.com/streadway/amqp"
-
-	"github.com/influxdata/telegraf"
-	"github.com/influxdata/telegraf/internal"
-	"github.com/influxdata/telegraf/plugins/inputs"
-	"github.com/influxdata/telegraf/plugins/parsers"
-)
-
-// AMQPConsumer is the top level struct for this plugin
-type AMQPConsumer struct {
-	URL string
-	// AMQP exchange
-	Exchange string
-	// Queue Name
-	Queue string
-	// Binding Key
-	BindingKey string `toml:"binding_key"`
-
-	// Controls how many messages the server will try to keep on the network
-	// for consumers before receiving delivery acks.
-	PrefetchCount int
-
-	// AMQP Auth method
-	AuthMethod string
-	// Path to CA file
-	SSLCA string `toml:"ssl_ca"`
-	// Path to host cert file
-	SSLCert string `toml:"ssl_cert"`
-	// Path to cert key file
-	SSLKey string `toml:"ssl_key"`
-	// Use SSL but skip chain & host verification
-	InsecureSkipVerify bool
-
-	parser parsers.Parser
-	conn   *amqp.Connection
-	wg     *sync.WaitGroup
-}
-
-type externalAuth struct{}
-
-func (a *externalAuth) Mechanism() string {
-	return "EXTERNAL"
-}
-func (a *externalAuth) Response() string {
-	return fmt.Sprintf("\000")
-}
-
-const (
-	DefaultAuthMethod    = "PLAIN"
-	DefaultPrefetchCount = 50
-)
-
-func (a *AMQPConsumer) SampleConfig() string {
-	return `
-  ## AMQP url
-  url = "amqp://localhost:5672/influxdb"
-  ## AMQP exchange
-  exchange = "telegraf"
-  ## AMQP queue name
-  queue = "telegraf"
-  ## Binding Key
-  binding_key = "#"
-
-  ## Maximum number of messages server should give to the worker.
-  prefetch_count = 50
-
-  ## Auth method. PLAIN and EXTERNAL are supported
-  ## Using EXTERNAL requires enabling the rabbitmq_auth_mechanism_ssl plugin as
-  ## described here: https://www.rabbitmq.com/plugins.html
-  # auth_method = "PLAIN"
-
-  ## Optional SSL Config
-  # ssl_ca = "/etc/telegraf/ca.pem"
-  # ssl_cert = "/etc/telegraf/cert.pem"
-  # ssl_key = "/etc/telegraf/key.pem"
-  ## Use SSL but skip chain & host verification
-  # insecure_skip_verify = false
-
-  ## Data format to output.
-  ## Each data format has it's own unique set of configuration options, read
-  ## more about them here:
-  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
-  data_format = "influx"
-`
-}
-
-func (a *AMQPConsumer) Description() string {
-	return "AMQP consumer plugin"
-}
-
-func (a *AMQPConsumer) SetParser(parser parsers.Parser) {
-	a.parser = parser
-}
-
-// All gathering is done in the Start function
-func (a *AMQPConsumer) Gather(_ telegraf.Accumulator) error {
-	return nil
-}
-
-func (a *AMQPConsumer) createConfig() (*amqp.Config, error) {
-	// make new tls config
-	tls, err := internal.GetTLSConfig(
-		a.SSLCert, a.SSLKey, a.SSLCA, a.InsecureSkipVerify)
-	if err != nil {
-		return nil, err
-	}
-
-	// parse auth method
-	var sasl []amqp.Authentication // nil by default
-
-	if strings.ToUpper(a.AuthMethod) == "EXTERNAL" {
-		sasl = []amqp.Authentication{&externalAuth{}}
-	}
-
-	config := amqp.Config{
-		TLSClientConfig: tls,
-		SASL:            sasl, // if nil, it will be PLAIN
-	}
-	return &config, nil
-}
-
-// Start satisfies the telegraf.ServiceInput interface
-func (a *AMQPConsumer) Start(acc telegraf.Accumulator) error {
-	amqpConf, err := a.createConfig()
-	if err != nil {
-		return err
-	}
-
-	msgs, err := a.connect(amqpConf)
-	if err != nil {
-		return err
-	}
-
-	a.wg = &sync.WaitGroup{}
-	a.wg.Add(1)
-	go a.process(msgs, acc)
-
-	go func() {
-		err := <-a.conn.NotifyClose(make(chan *amqp.Error))
-		if err == nil {
-			return
-		}
-
-		log.Printf("I! AMQP consumer connection closed: %s; trying to reconnect", err)
-		for {
-			msgs, err := a.connect(amqpConf)
-			if err != nil {
-				log.Printf("E! AMQP connection failed: %s", err)
-				time.Sleep(10 * time.Second)
-				continue
-			}
-
-			a.wg.Add(1)
-			go a.process(msgs, acc)
-			break
-		}
-	}()
-
-	return nil
-}
-
-func (a *AMQPConsumer) connect(amqpConf *amqp.Config) (<-chan amqp.Delivery, error) {
-	conn, err := amqp.DialConfig(a.URL, *amqpConf)
-	if err != nil {
-		return nil, err
-	}
-	a.conn = conn
-
-	ch, err := conn.Channel()
-	if err != nil {
-		return nil, fmt.Errorf("Failed to open a channel: %s", err)
-	}
-
-	err = ch.ExchangeDeclare(
-		a.Exchange, // name
-		"topic",    // type
-		true,       // durable
-		false,      // auto-deleted
-		false,      // internal
-		false,      // no-wait
-		nil,        // arguments
-	)
-	if err != nil {
-		return nil, fmt.Errorf("Failed to declare an exchange: %s", err)
-	}
-
-	q, err := ch.QueueDeclare(
-		a.Queue, // queue
-		true,    // durable
-		false,   // delete when unused
-		false,   // exclusive
-		false,   // no-wait
-		nil,     // arguments
-	)
-	if err != nil {
-		return nil, fmt.Errorf("Failed to declare a queue: %s", err)
-	}
-
-	err = ch.QueueBind(
-		q.Name,       // queue
-		a.BindingKey, // binding-key
-		a.Exchange,   // exchange
-		false,
-		nil,
-	)
-	if err != nil {
-		return nil, fmt.Errorf("Failed to bind a queue: %s", err)
-	}
-
-	err = ch.Qos(
-		a.PrefetchCount,
-		0,     // prefetch-size
-		false, // global
-	)
-	if err != nil {
-		return nil, fmt.Errorf("Failed to set QoS: %s", err)
-	}
-
-	msgs, err := ch.Consume(
-		q.Name, // queue
-		"",     // consumer
-		false,  // auto-ack
-		false,  // exclusive
-		false,  // no-local
-		false,  // no-wait
-		nil,    // arguments
-	)
-	if err != nil {
-		return nil, fmt.Errorf("Failed establishing connection to queue: %s", err)
-	}
-
-	log.Println("I! Started AMQP consumer")
-	return msgs, err
-}
-
-// Read messages from queue and add them to the Accumulator
-func (a *AMQPConsumer) process(msgs <-chan amqp.Delivery, acc telegraf.Accumulator) {
-	defer a.wg.Done()
-	for d := range msgs {
-		metrics, err := a.parser.Parse(d.Body)
-		if err != nil {
-			log.Printf("E! %v: error parsing metric - %v", err, string(d.Body))
-		} else {
-			for _, m := range metrics {
-				acc.AddFields(m.Name(), m.Fields(), m.Tags(), m.Time())
-			}
-		}
-
-		d.Ack(false)
-	}
-	log.Printf("I! AMQP consumer queue closed")
-}
-
-func (a *AMQPConsumer) Stop() {
-	err := a.conn.Close()
-	if err != nil && err != amqp.ErrClosed {
-		log.Printf("E! Error closing AMQP connection: %s", err)
-		return
-	}
-	a.wg.Wait()
-	log.Println("I! Stopped AMQP service")
-}
-
-func init() {
-	inputs.Add("amqp_consumer", func() telegraf.Input {
-		return &AMQPConsumer{
-			AuthMethod:    DefaultAuthMethod,
-			PrefetchCount: DefaultPrefetchCount,
-		}
-	})
-}
@@ -4,7 +4,7 @@
 - **urls** []string: List of apache-status URLs to collect from. Default is "http://localhost/server-status?auto".
 - **username** string: Username for HTTP basic authentication
 - **password** string: Password for HTTP basic authentication
-- **timeout** duration: time that the HTTP connection will remain waiting for response. Default 4 seconds ("4s")
+- **timeout** duration: time that the HTTP connection will remain waiting for response. Defalt 4 seconds ("4s")
 
 ##### Optional SSL Config
 
@@ -16,20 +16,12 @@ for the stat structure can be found
 ```
 # Read metrics about docker containers
 [[inputs.docker]]
-  ## Docker Endpoint
-  ## To use TCP, set endpoint = "tcp://[ip]:[port]"
-  ## To use environment variables (ie, docker-machine), set endpoint = "ENV"
+  # Docker Endpoint
+  # To use TCP, set endpoint = "tcp://[ip]:[port]"
+  # To use environment variables (ie, docker-machine), set endpoint = "ENV"
   endpoint = "unix:///var/run/docker.sock"
-  ## Only collect metrics for these containers, collect all if empty
+  # Only collect metrics for these containers, collect all if empty
   container_names = []
-  ## Timeout for docker list, info, and stats commands
-  timeout = "5s"
-
-  ## Whether to report for each container per-device blkio (8:0, 8:1...) and
-  ## network (eth0, eth1, ...) stats or not
-  perdevice = true
-  ## Whether to report for each container total blkio and network stats or not
-  total = false
 ```
 
 ### Measurements & Fields:
@@ -1,7 +1,6 @@
-package docker
+package system
 
 import (
-	"context"
 	"encoding/json"
 	"fmt"
 	"io"

@@ -12,9 +11,10 @@ import (
 	"sync"
 	"time"
 
-	"github.com/docker/docker/api/types"
-	"github.com/docker/docker/client"
+	"golang.org/x/net/context"
 
+	"github.com/docker/engine-api/client"
+	"github.com/docker/engine-api/types"
 	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/internal"
 	"github.com/influxdata/telegraf/plugins/inputs"

@@ -28,46 +28,15 @@ type Docker struct {
 	PerDevice bool `toml:"perdevice"`
 	Total     bool `toml:"total"`
 
-	client      *client.Client
+	client      DockerClient
 	engine_host string
 
-	testing bool
 }
 
-// infoWrapper wraps client.Client.List for testing.
-func infoWrapper(c *client.Client, ctx context.Context) (types.Info, error) {
-	if c != nil {
-		return c.Info(ctx)
-	}
-	fc := FakeDockerClient{}
-	return fc.Info(ctx)
-}
-
-// listWrapper wraps client.Client.ContainerList for testing.
-func listWrapper(
-	c *client.Client,
-	ctx context.Context,
-	options types.ContainerListOptions,
-) ([]types.Container, error) {
-	if c != nil {
-		return c.ContainerList(ctx, options)
-	}
-	fc := FakeDockerClient{}
-	return fc.ContainerList(ctx, options)
-}
-
-// statsWrapper wraps client.Client.ContainerStats for testing.
-func statsWrapper(
-	c *client.Client,
-	ctx context.Context,
-	containerID string,
-	stream bool,
-) (types.ContainerStats, error) {
-	if c != nil {
-		return c.ContainerStats(ctx, containerID, stream)
-	}
-	fc := FakeDockerClient{}
-	return fc.ContainerStats(ctx, containerID, stream)
-}
+// DockerClient interface, useful for testing
+type DockerClient interface {
+	Info(ctx context.Context) (types.Info, error)
+	ContainerList(ctx context.Context, options types.ContainerListOptions) ([]types.Container, error)
+	ContainerStats(ctx context.Context, containerID string, stream bool) (io.ReadCloser, error)
+}
 
 // KB, MB, GB, TB, PB...human friendly

@@ -111,7 +80,7 @@ func (d *Docker) SampleConfig() string { return sampleConfig }
 
 // Gather starts stats collection
 func (d *Docker) Gather(acc telegraf.Accumulator) error {
-	if d.client == nil && !d.testing {
+	if d.client == nil {
 		var c *client.Client
 		var err error
 		defaultHeaders := map[string]string{"User-Agent": "engine-api-cli-1.0"}

@@ -144,7 +113,7 @@ func (d *Docker) Gather(acc telegraf.Accumulator) error {
 	opts := types.ContainerListOptions{}
 	ctx, cancel := context.WithTimeout(context.Background(), d.Timeout.Duration)
 	defer cancel()
-	containers, err := listWrapper(d.client, ctx, opts)
+	containers, err := d.client.ContainerList(ctx, opts)
 	if err != nil {
 		return err
 	}

@@ -175,7 +144,7 @@ func (d *Docker) gatherInfo(acc telegraf.Accumulator) error {
 	// Get info from docker daemon
 	ctx, cancel := context.WithTimeout(context.Background(), d.Timeout.Duration)
 	defer cancel()
-	info, err := infoWrapper(d.client, ctx)
+	info, err := d.client.Info(ctx)
 	if err != nil {
 		return err
 	}

@@ -278,12 +247,12 @@ func (d *Docker) gatherContainer(
 
 	ctx, cancel := context.WithTimeout(context.Background(), d.Timeout.Duration)
 	defer cancel()
-	r, err := statsWrapper(d.client, ctx, container.ID, false)
+	r, err := d.client.ContainerStats(ctx, container.ID, false)
 	if err != nil {
 		return fmt.Errorf("Error getting docker stats: %s", err.Error())
 	}
-	defer r.Body.Close()
-	dec := json.NewDecoder(r.Body)
+	defer r.Close()
+	dec := json.NewDecoder(r)
 	if err = dec.Decode(&v); err != nil {
 		if err == io.EOF {
 			return nil
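The hunks above replace the nil-checking wrapper functions with a `DockerClient` interface, so tests can inject a fake client instead of branching to `FakeDockerClient{}` inside every wrapper. A self-contained sketch of that seam — the `Client`/`FakeClient` names and the single `Info` method here are illustrative stand-ins for the real engine-api types:

```go
package main

import "fmt"

// Info is a stand-in for the engine-api types.Info payload.
type Info struct{ Containers int }

// Client is the seam: production code depends on this interface,
// not on a concrete *client.Client.
type Client interface {
	Info() (Info, error)
}

// FakeClient satisfies Client for tests; no Docker daemon required.
type FakeClient struct{}

func (FakeClient) Info() (Info, error) { return Info{Containers: 108}, nil }

// gatherInfo only sees the interface, so a fake slots straight in.
func gatherInfo(c Client) (int, error) {
	info, err := c.Info()
	if err != nil {
		return 0, err
	}
	return info.Containers, nil
}

func main() {
	n, _ := gatherInfo(FakeClient{})
	fmt.Println(n)
	// 108
}
```

The same idea drives the test-file changes that follow: the fake implements the interface directly, so the `testing bool` flag and the per-call nil checks disappear.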
@@ -1,12 +1,18 @@
-package docker
+package system
 
 import (
+	"io"
+	"io/ioutil"
+	"strings"
 	"testing"
 	"time"
 
+	"golang.org/x/net/context"
+
+	"github.com/docker/engine-api/types"
+	"github.com/docker/engine-api/types/registry"
 	"github.com/influxdata/telegraf/testutil"
 
-	"github.com/docker/docker/api/types"
 	"github.com/stretchr/testify/require"
 )

@@ -244,14 +250,146 @@ func testStats() *types.StatsJSON {
 	return stats
 }
 
-func TestDockerGatherInfo(t *testing.T) {
-	var acc testutil.Accumulator
-	d := Docker{
-		client:  nil,
-		testing: true,
+type FakeDockerClient struct {
+}
+
+func (d FakeDockerClient) Info(ctx context.Context) (types.Info, error) {
+	env := types.Info{
+		Containers:         108,
+		ContainersRunning:  98,
+		ContainersStopped:  6,
+		ContainersPaused:   3,
+		OomKillDisable:     false,
+		SystemTime:         "2016-02-24T00:55:09.15073105-05:00",
+		NEventsListener:    0,
+		ID:                 "5WQQ:TFWR:FDNG:OKQ3:37Y4:FJWG:QIKK:623T:R3ME:QTKB:A7F7:OLHD",
+		Debug:              false,
+		LoggingDriver:      "json-file",
+		KernelVersion:      "4.3.0-1-amd64",
+		IndexServerAddress: "https://index.docker.io/v1/",
+		MemTotal:           3840757760,
+		Images:             199,
+		CPUCfsQuota:        true,
+		Name:               "absol",
+		SwapLimit:          false,
+		IPv4Forwarding:     true,
+		ExperimentalBuild:  false,
+		CPUCfsPeriod:       true,
+		RegistryConfig: &registry.ServiceConfig{
+			IndexConfigs: map[string]*registry.IndexInfo{
+				"docker.io": {
+					Name:     "docker.io",
+					Mirrors:  []string{},
+					Official: true,
+					Secure:   true,
+				},
+			}, InsecureRegistryCIDRs: []*registry.NetIPNet{{IP: []byte{127, 0, 0, 0}, Mask: []byte{255, 0, 0, 0}}}, Mirrors: []string{}},
+		OperatingSystem:   "Linux Mint LMDE (containerized)",
+		BridgeNfIptables:  true,
+		HTTPSProxy:        "",
+		Labels:            []string{},
+		MemoryLimit:       false,
+		DriverStatus:      [][2]string{{"Pool Name", "docker-8:1-1182287-pool"}, {"Pool Blocksize", "65.54 kB"}, {"Backing Filesystem", "extfs"}, {"Data file", "/dev/loop0"}, {"Metadata file", "/dev/loop1"}, {"Data Space Used", "17.3 GB"}, {"Data Space Total", "107.4 GB"}, {"Data Space Available", "36.53 GB"}, {"Metadata Space Used", "20.97 MB"}, {"Metadata Space Total", "2.147 GB"}, {"Metadata Space Available", "2.127 GB"}, {"Udev Sync Supported", "true"}, {"Deferred Removal Enabled", "false"}, {"Data loop file", "/var/lib/docker/devicemapper/devicemapper/data"}, {"Metadata loop file", "/var/lib/docker/devicemapper/devicemapper/metadata"}, {"Library Version", "1.02.115 (2016-01-25)"}},
+		NFd:               19,
+		HTTPProxy:         "",
+		Driver:            "devicemapper",
+		NGoroutines:       39,
+		NCPU:              4,
+		DockerRootDir:     "/var/lib/docker",
+		NoProxy:           "",
+		BridgeNfIP6tables: true,
+	}
+	return env, nil
+}
+
+func (d FakeDockerClient) ContainerList(octx context.Context, options types.ContainerListOptions) ([]types.Container, error) {
+	container1 := types.Container{
+		ID:      "e2173b9478a6ae55e237d4d74f8bbb753f0817192b5081334dc78476296b7dfb",
+		Names:   []string{"/etcd"},
+		Image:   "quay.io/coreos/etcd:v2.2.2",
+		Command: "/etcd -name etcd0 -advertise-client-urls http://localhost:2379 -listen-client-urls http://0.0.0.0:2379",
+		Created: 1455941930,
+		Status:  "Up 4 hours",
+		Ports: []types.Port{
+			types.Port{
+				PrivatePort: 7001,
+				PublicPort:  0,
+				Type:        "tcp",
+			},
+			types.Port{
+				PrivatePort: 4001,
+				PublicPort:  0,
+				Type:        "tcp",
+			},
+			types.Port{
+				PrivatePort: 2380,
+				PublicPort:  0,
+				Type:        "tcp",
+			},
+			types.Port{
+				PrivatePort: 2379,
+				PublicPort:  2379,
+				Type:        "tcp",
+				IP:          "0.0.0.0",
+			},
+		},
+		SizeRw:     0,
+		SizeRootFs: 0,
+	}
+	container2 := types.Container{
+		ID:      "b7dfbb9478a6ae55e237d4d74f8bbb753f0817192b5081334dc78476296e2173",
+		Names:   []string{"/etcd2"},
+		Image:   "quay.io:4443/coreos/etcd:v2.2.2",
+		Command: "/etcd -name etcd2 -advertise-client-urls http://localhost:2379 -listen-client-urls http://0.0.0.0:2379",
+		Created: 1455941933,
+		Status:  "Up 4 hours",
+		Ports: []types.Port{
+			types.Port{
+				PrivatePort: 7002,
+				PublicPort:  0,
+				Type:        "tcp",
+			},
+			types.Port{
+				PrivatePort: 4002,
+				PublicPort:  0,
|
Type: "tcp",
|
||||||
|
},
|
||||||
|
types.Port{
|
||||||
|
PrivatePort: 2381,
|
||||||
|
PublicPort: 0,
|
||||||
|
Type: "tcp",
|
||||||
|
},
|
||||||
|
types.Port{
|
||||||
|
PrivatePort: 2382,
|
||||||
|
PublicPort: 2382,
|
||||||
|
Type: "tcp",
|
||||||
|
IP: "0.0.0.0",
|
||||||
|
},
|
||||||
|
},
|
||||||
|
SizeRw: 0,
|
||||||
|
SizeRootFs: 0,
|
||||||
}
|
}
|
||||||
|
|
||||||
|
containers := []types.Container{container1, container2}
|
||||||
|
return containers, nil
|
||||||
|
|
||||||
|
//#{e6a96c84ca91a5258b7cb752579fb68826b68b49ff957487695cd4d13c343b44 titilambert/snmpsim /bin/sh -c 'snmpsimd --agent-udpv4-endpoint=0.0.0.0:31161 --process-user=root --process-group=user' 1455724831 Up 4 hours [{31161 31161 udp 0.0.0.0}] 0 0 [/snmp] map[]}]2016/02/24 01:05:01 Gathered metrics, (3s interval), from 1 inputs in 1.233836656s
|
||||||
|
}
|
||||||
|
|
||||||
|
func (d FakeDockerClient) ContainerStats(ctx context.Context, containerID string, stream bool) (io.ReadCloser, error) {
|
||||||
|
var stat io.ReadCloser
|
||||||
|
jsonStat := `{"read":"2016-02-24T11:42:27.472459608-05:00","memory_stats":{"stats":{},"limit":18935443456},"blkio_stats":{"io_service_bytes_recursive":[{"major":252,"minor":1,"op":"Read","value":753664},{"major":252,"minor":1,"op":"Write"},{"major":252,"minor":1,"op":"Sync"},{"major":252,"minor":1,"op":"Async","value":753664},{"major":252,"minor":1,"op":"Total","value":753664}],"io_serviced_recursive":[{"major":252,"minor":1,"op":"Read","value":26},{"major":252,"minor":1,"op":"Write"},{"major":252,"minor":1,"op":"Sync"},{"major":252,"minor":1,"op":"Async","value":26},{"major":252,"minor":1,"op":"Total","value":26}]},"cpu_stats":{"cpu_usage":{"percpu_usage":[17871,4959158,1646137,1231652,11829401,244656,369972,0],"usage_in_usermode":10000000,"total_usage":20298847},"system_cpu_usage":24052607520000000,"throttling_data":{}},"precpu_stats":{"cpu_usage":{"percpu_usage":[17871,4959158,1646137,1231652,11829401,244656,369972,0],"usage_in_usermode":10000000,"total_usage":20298847},"system_cpu_usage":24052599550000000,"throttling_data":{}}}`
|
||||||
|
stat = ioutil.NopCloser(strings.NewReader(jsonStat))
|
||||||
|
return stat, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestDockerGatherInfo(t *testing.T) {
|
||||||
|
var acc testutil.Accumulator
|
||||||
|
client := FakeDockerClient{}
|
||||||
|
d := Docker{client: client}
|
||||||
|
|
||||||
err := d.Gather(&acc)
|
err := d.Gather(&acc)
|
||||||
|
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
|
||||||
acc.AssertContainsTaggedFields(t,
|
acc.AssertContainsTaggedFields(t,
|
||||||
|
|||||||
@@ -1,143 +0,0 @@
-package docker
-
-import (
-	"context"
-	"io/ioutil"
-	"strings"
-
-	"github.com/docker/docker/api/types"
-	"github.com/docker/docker/api/types/registry"
-)
-
-type FakeDockerClient struct {
-}
-
-func (d FakeDockerClient) Info(ctx context.Context) (types.Info, error) {
-	env := types.Info{
-		Containers:         108,
-		ContainersRunning:  98,
-		ContainersStopped:  6,
-		ContainersPaused:   3,
-		OomKillDisable:     false,
-		SystemTime:         "2016-02-24T00:55:09.15073105-05:00",
-		NEventsListener:    0,
-		ID:                 "5WQQ:TFWR:FDNG:OKQ3:37Y4:FJWG:QIKK:623T:R3ME:QTKB:A7F7:OLHD",
-		Debug:              false,
-		LoggingDriver:      "json-file",
-		KernelVersion:      "4.3.0-1-amd64",
-		IndexServerAddress: "https://index.docker.io/v1/",
-		MemTotal:           3840757760,
-		Images:             199,
-		CPUCfsQuota:        true,
-		Name:               "absol",
-		SwapLimit:          false,
-		IPv4Forwarding:     true,
-		ExperimentalBuild:  false,
-		CPUCfsPeriod:       true,
-		RegistryConfig: &registry.ServiceConfig{
-			IndexConfigs: map[string]*registry.IndexInfo{
-				"docker.io": {
-					Name:     "docker.io",
-					Mirrors:  []string{},
-					Official: true,
-					Secure:   true,
-				},
-			}, InsecureRegistryCIDRs: []*registry.NetIPNet{{IP: []byte{127, 0, 0, 0}, Mask: []byte{255, 0, 0, 0}}}, Mirrors: []string{}},
-		OperatingSystem:   "Linux Mint LMDE (containerized)",
-		BridgeNfIptables:  true,
-		HTTPSProxy:        "",
-		Labels:            []string{},
-		MemoryLimit:       false,
-		DriverStatus:      [][2]string{{"Pool Name", "docker-8:1-1182287-pool"}, {"Pool Blocksize", "65.54 kB"}, {"Backing Filesystem", "extfs"}, {"Data file", "/dev/loop0"}, {"Metadata file", "/dev/loop1"}, {"Data Space Used", "17.3 GB"}, {"Data Space Total", "107.4 GB"}, {"Data Space Available", "36.53 GB"}, {"Metadata Space Used", "20.97 MB"}, {"Metadata Space Total", "2.147 GB"}, {"Metadata Space Available", "2.127 GB"}, {"Udev Sync Supported", "true"}, {"Deferred Removal Enabled", "false"}, {"Data loop file", "/var/lib/docker/devicemapper/devicemapper/data"}, {"Metadata loop file", "/var/lib/docker/devicemapper/devicemapper/metadata"}, {"Library Version", "1.02.115 (2016-01-25)"}},
-		NFd:               19,
-		HTTPProxy:         "",
-		Driver:            "devicemapper",
-		NGoroutines:       39,
-		NCPU:              4,
-		DockerRootDir:     "/var/lib/docker",
-		NoProxy:           "",
-		BridgeNfIP6tables: true,
-	}
-	return env, nil
-}
-
-func (d FakeDockerClient) ContainerList(octx context.Context, options types.ContainerListOptions) ([]types.Container, error) {
-	container1 := types.Container{
-		ID:      "e2173b9478a6ae55e237d4d74f8bbb753f0817192b5081334dc78476296b7dfb",
-		Names:   []string{"/etcd"},
-		Image:   "quay.io/coreos/etcd:v2.2.2",
-		Command: "/etcd -name etcd0 -advertise-client-urls http://localhost:2379 -listen-client-urls http://0.0.0.0:2379",
-		Created: 1455941930,
-		Status:  "Up 4 hours",
-		Ports: []types.Port{
-			types.Port{
-				PrivatePort: 7001,
-				PublicPort:  0,
-				Type:        "tcp",
-			},
-			types.Port{
-				PrivatePort: 4001,
-				PublicPort:  0,
-				Type:        "tcp",
-			},
-			types.Port{
-				PrivatePort: 2380,
-				PublicPort:  0,
-				Type:        "tcp",
-			},
-			types.Port{
-				PrivatePort: 2379,
-				PublicPort:  2379,
-				Type:        "tcp",
-				IP:          "0.0.0.0",
-			},
-		},
-		SizeRw:     0,
-		SizeRootFs: 0,
-	}
-	container2 := types.Container{
-		ID:      "b7dfbb9478a6ae55e237d4d74f8bbb753f0817192b5081334dc78476296e2173",
-		Names:   []string{"/etcd2"},
-		Image:   "quay.io:4443/coreos/etcd:v2.2.2",
-		Command: "/etcd -name etcd2 -advertise-client-urls http://localhost:2379 -listen-client-urls http://0.0.0.0:2379",
-		Created: 1455941933,
-		Status:  "Up 4 hours",
-		Ports: []types.Port{
-			types.Port{
-				PrivatePort: 7002,
-				PublicPort:  0,
-				Type:        "tcp",
-			},
-			types.Port{
-				PrivatePort: 4002,
-				PublicPort:  0,
-				Type:        "tcp",
-			},
-			types.Port{
-				PrivatePort: 2381,
-				PublicPort:  0,
-				Type:        "tcp",
-			},
-			types.Port{
-				PrivatePort: 2382,
-				PublicPort:  2382,
-				Type:        "tcp",
-				IP:          "0.0.0.0",
-			},
-		},
-		SizeRw:     0,
-		SizeRootFs: 0,
-	}
-
-	containers := []types.Container{container1, container2}
-	return containers, nil
-
-	//#{e6a96c84ca91a5258b7cb752579fb68826b68b49ff957487695cd4d13c343b44 titilambert/snmpsim /bin/sh -c 'snmpsimd --agent-udpv4-endpoint=0.0.0.0:31161 --process-user=root --process-group=user' 1455724831 Up 4 hours [{31161 31161 udp 0.0.0.0}] 0 0 [/snmp] map[]}]2016/02/24 01:05:01 Gathered metrics, (3s interval), from 1 inputs in 1.233836656s
-}
-
-func (d FakeDockerClient) ContainerStats(ctx context.Context, containerID string, stream bool) (types.ContainerStats, error) {
-	var stat types.ContainerStats
-	jsonStat := `{"read":"2016-02-24T11:42:27.472459608-05:00","memory_stats":{"stats":{},"limit":18935443456},"blkio_stats":{"io_service_bytes_recursive":[{"major":252,"minor":1,"op":"Read","value":753664},{"major":252,"minor":1,"op":"Write"},{"major":252,"minor":1,"op":"Sync"},{"major":252,"minor":1,"op":"Async","value":753664},{"major":252,"minor":1,"op":"Total","value":753664}],"io_serviced_recursive":[{"major":252,"minor":1,"op":"Read","value":26},{"major":252,"minor":1,"op":"Write"},{"major":252,"minor":1,"op":"Sync"},{"major":252,"minor":1,"op":"Async","value":26},{"major":252,"minor":1,"op":"Total","value":26}]},"cpu_stats":{"cpu_usage":{"percpu_usage":[17871,4959158,1646137,1231652,11829401,244656,369972,0],"usage_in_usermode":10000000,"total_usage":20298847},"system_cpu_usage":24052607520000000,"throttling_data":{}},"precpu_stats":{"cpu_usage":{"percpu_usage":[17871,4959158,1646137,1231652,11829401,244656,369972,0],"usage_in_usermode":10000000,"total_usage":20298847},"system_cpu_usage":24052599550000000,"throttling_data":{}}}`
-	stat.Body = ioutil.NopCloser(strings.NewReader(jsonStat))
-	return stat, nil
-}
@@ -37,8 +37,6 @@ const malformedJson = `
 `

 const lineProtocol = "cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1\n"
-const lineProtocolEmpty = ""
-const lineProtocolShort = "ab"

 const lineProtocolMulti = `
 cpu,cpu=cpu0,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
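The `lineProtocol` constants in the hunk above are InfluxDB line-protocol points, whose shape is `measurement[,tag=value...] field=value[,field=value...]`. As a rough illustration of that layout only — not Telegraf's actual parser, which is created via `parsers.NewInfluxParser()` — a minimal split might look like this (no escaping, timestamps, or validation are handled):

```go
package main

import (
	"fmt"
	"strings"
)

// splitLineProtocol breaks one line-protocol point into its measurement,
// tag set, and (unparsed, string-valued) field set. Simplified sketch.
func splitLineProtocol(line string) (measurement string, tags, fields map[string]string) {
	// The first unescaped space separates "measurement,tags" from "fields".
	parts := strings.SplitN(strings.TrimSpace(line), " ", 2)
	keyParts := strings.Split(parts[0], ",")
	measurement = keyParts[0]
	tags = map[string]string{}
	for _, kv := range keyParts[1:] {
		p := strings.SplitN(kv, "=", 2)
		tags[p[0]] = p[1]
	}
	fields = map[string]string{}
	for _, kv := range strings.Split(parts[1], ",") {
		p := strings.SplitN(kv, "=", 2)
		fields[p[0]] = p[1]
	}
	return measurement, tags, fields
}

func main() {
	m, tags, fields := splitLineProtocol("cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1")
	fmt.Println(m, tags["host"], fields["usage_idle"]) // cpu foo 99
}
```

This also makes clear why the removed `lineProtocolShort = "ab"` constant triggered a parse error in the deleted test: with no space, there is no field section to split off.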
@@ -169,33 +167,6 @@ func TestLineProtocolParse(t *testing.T) {
 	acc.AssertContainsTaggedFields(t, "cpu", fields, tags)
 }

-func TestLineProtocolEmptyParse(t *testing.T) {
-	parser, _ := parsers.NewInfluxParser()
-	e := &Exec{
-		runner:   newRunnerMock([]byte(lineProtocolEmpty), nil),
-		Commands: []string{"line-protocol"},
-		parser:   parser,
-	}
-
-	var acc testutil.Accumulator
-	err := e.Gather(&acc)
-	require.NoError(t, err)
-}
-
-func TestLineProtocolShortParse(t *testing.T) {
-	parser, _ := parsers.NewInfluxParser()
-	e := &Exec{
-		runner:   newRunnerMock([]byte(lineProtocolShort), nil),
-		Commands: []string{"line-protocol"},
-		parser:   parser,
-	}
-
-	var acc testutil.Accumulator
-	err := e.Gather(&acc)
-	require.Error(t, err)
-	assert.Contains(t, err.Error(), "buffer too short", "A buffer too short error was expected")
-}
-
 func TestLineProtocolParseMultiple(t *testing.T) {
 	parser, _ := parsers.NewInfluxParser()
 	e := &Exec{
@@ -17,7 +17,7 @@ var (
 )

 type Ipmi struct {
-	Path    string
+	path    string
 	Servers []string
 }

@@ -44,7 +44,7 @@ func (m *Ipmi) Description() string {
 }

 func (m *Ipmi) Gather(acc telegraf.Accumulator) error {
-	if len(m.Path) == 0 {
+	if len(m.path) == 0 {
 		return fmt.Errorf("ipmitool not found: verify that ipmitool is installed and that ipmitool is in your PATH")
 	}

@@ -76,7 +76,7 @@ func (m *Ipmi) parse(acc telegraf.Accumulator, server string) error {
 	}

 	opts = append(opts, "sdr")
-	cmd := execCommand(m.Path, opts...)
+	cmd := execCommand(m.path, opts...)
 	out, err := internal.CombinedOutputTimeout(cmd, time.Second*5)
 	if err != nil {
 		return fmt.Errorf("failed to run command %s: %s - %s", strings.Join(cmd.Args, " "), err, string(out))

@@ -149,7 +149,7 @@ func init() {
 	m := Ipmi{}
 	path, _ := exec.LookPath("ipmitool")
 	if len(path) > 0 {
-		m.Path = path
+		m.path = path
 	}
 	inputs.Add("ipmi_sensor", func() telegraf.Input {
 		return &m

@@ -14,7 +14,7 @@ import (
 func TestGather(t *testing.T) {
 	i := &Ipmi{
 		Servers: []string{"USERID:PASSW0RD@lan(192.168.1.1)"},
-		Path:    "ipmitool",
+		path:    "ipmitool",
 	}
 	// overwriting exec commands with mock commands
 	execCommand = fakeExecCommand

@@ -118,7 +118,7 @@ func TestGather(t *testing.T) {
 	}

 	i = &Ipmi{
-		Path: "ipmitool",
+		path: "ipmitool",
 	}

 	err = i.Gather(&acc)
@@ -2,11 +2,7 @@

 The iptables plugin gathers packets and bytes counters for rules within a set of table and chain from the Linux's iptables firewall.

-Rules are identified through associated comment. **Rules without comment are ignored**.
-Indeed we need a unique ID for the rule and the rule number is not a constant: it may vary when rules are inserted/deleted at start-up or by automatic tools (interactive firewalls, fail2ban, ...).
-Also when the rule set is becoming big (hundreds of lines) most people are interested in monitoring only a small part of the rule set.
-
-Before using this plugin **you must ensure that the rules you want to monitor are named with a unique comment**. Comments are added using the `-m comment --comment "my comment"` iptables options.
+Rules are identified through associated comment. Rules without comment are ignored.

 The iptables command requires CAP_NET_ADMIN and CAP_NET_RAW capabilities. You have several options to grant telegraf to run iptables:
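Both sides of the README hunk above rely on comment-named rules. For context, a rule gets its stable identifier from the `-m comment` match mentioned in the removed text; an illustrative configuration fragment only (chain, port, and comment text are placeholders, and the command requires root):

```sh
# The quoted comment becomes the rule's identifier in the plugin's output.
iptables -A INPUT -p tcp --dport 22 -m comment --comment "ssh_in" -j ACCEPT
```

Because the comment, rather than the rule's position, is the key, counters stay attributable even when other rules are inserted or deleted around it.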
@@ -33,16 +33,14 @@ func (ipt *Iptables) SampleConfig() string {
   ## iptables require root access on most systems.
   ## Setting 'use_sudo' to true will make use of sudo to run iptables.
   ## Users must configure sudo to allow telegraf user to run iptables with no password.
-  ## iptables can be restricted to only list command "iptables -nvL".
+  ## iptables can be restricted to only list command "iptables -nvL"
   use_sudo = false
   ## Setting 'use_lock' to true runs iptables with the "-w" option.
   ## Adjust your sudo settings appropriately if using this option ("iptables -wnvl")
   use_lock = false
   ## defines the table to monitor:
   table = "filter"
-  ## defines the chains to monitor.
-  ## NOTE: iptables rules without a comment will not be monitored.
-  ## Read the plugin documentation for more information.
+  ## defines the chains to monitor:
   chains = [ "INPUT" ]
 `
 }
@@ -57,43 +57,6 @@ func Benchmark_ParseLine_CustomPattern(b *testing.B) {
 	benchM = m
 }

-// Test a very simple parse pattern.
-func TestSimpleParse(t *testing.T) {
-	p := &Parser{
-		Patterns: []string{"%{TESTLOG}"},
-		CustomPatterns: `
-			TESTLOG %{NUMBER:num:int} %{WORD:client}
-		`,
-	}
-	assert.NoError(t, p.Compile())
-
-	m, err := p.ParseLine(`142 bot`)
-	assert.NoError(t, err)
-	require.NotNil(t, m)
-
-	assert.Equal(t,
-		map[string]interface{}{
-			"num":    int64(142),
-			"client": "bot",
-		},
-		m.Fields())
-}
-
-// Verify that patterns with a regex lookahead fail at compile time.
-func TestParsePatternsWithLookahead(t *testing.T) {
-	p := &Parser{
-		Patterns: []string{"%{MYLOG}"},
-		CustomPatterns: `
-			NOBOT ((?!bot|crawl).)*
-			MYLOG %{NUMBER:num:int} %{NOBOT:client}
-		`,
-	}
-	assert.NoError(t, p.Compile())
-
-	_, err := p.ParseLine(`1466004605359052000 bot`)
-	assert.Error(t, err)
-}
-
 func TestMeasurementName(t *testing.T) {
 	p := &Parser{
 		Measurement: "my_web_log",
@@ -226,8 +226,6 @@ func (l *LogParserPlugin) parser() {
 			if m != nil {
 				l.acc.AddFields(m.Name(), m.Fields(), m.Tags(), m.Time())
 			}
-		} else {
-			log.Println("E! Error parsing log line: " + err.Error())
 		}
 	}
 }
@@ -4,7 +4,6 @@ import (
 	"bytes"
 	"database/sql"
 	"fmt"
-	"log"
 	"strconv"
 	"strings"
 	"sync"
@@ -905,98 +904,92 @@ func (m *Mysql) gatherGlobalStatuses(db *sql.DB, serv string, acc telegraf.Accum
 	// gather connection metrics from processlist for each user
 	if m.GatherProcessList {
 		conn_rows, err := db.Query("SELECT user, sum(1) FROM INFORMATION_SCHEMA.PROCESSLIST GROUP BY user")
-		if err != nil {
-			log.Printf("E! MySQL Error gathering process list: %s", err)
-		} else {
-			for conn_rows.Next() {
-				var user string
-				var connections int64
-
-				err = conn_rows.Scan(&user, &connections)
-				if err != nil {
-					return err
-				}
-
-				tags := map[string]string{"server": servtag, "user": user}
-				fields := make(map[string]interface{})
-
-				if err != nil {
-					return err
-				}
-				fields["connections"] = connections
-				acc.AddFields("mysql_users", fields, tags)
-			}
+
+		for conn_rows.Next() {
+			var user string
+			var connections int64
+
+			err = conn_rows.Scan(&user, &connections)
+			if err != nil {
+				return err
+			}
+
+			tags := map[string]string{"server": servtag, "user": user}
+			fields := make(map[string]interface{})
+
+			if err != nil {
+				return err
+			}
+			fields["connections"] = connections
+			acc.AddFields("mysql_users", fields, tags)
 		}
 	}

 	// gather connection metrics from user_statistics for each user
 	if m.GatherUserStatistics {
 		conn_rows, err := db.Query("select user, total_connections, concurrent_connections, connected_time, busy_time, cpu_time, bytes_received, bytes_sent, binlog_bytes_written, rows_fetched, rows_updated, table_rows_read, select_commands, update_commands, other_commands, commit_transactions, rollback_transactions, denied_connections, lost_connections, access_denied, empty_queries, total_ssl_connections FROM INFORMATION_SCHEMA.USER_STATISTICS GROUP BY user")
-		if err != nil {
-			log.Printf("E! MySQL Error gathering user stats: %s", err)
-		} else {
-			for conn_rows.Next() {
-				var user string
-				var total_connections int64
-				var concurrent_connections int64
-				var connected_time int64
-				var busy_time int64
-				var cpu_time int64
-				var bytes_received int64
-				var bytes_sent int64
-				var binlog_bytes_written int64
-				var rows_fetched int64
-				var rows_updated int64
-				var table_rows_read int64
-				var select_commands int64
-				var update_commands int64
-				var other_commands int64
-				var commit_transactions int64
-				var rollback_transactions int64
-				var denied_connections int64
-				var lost_connections int64
-				var access_denied int64
-				var empty_queries int64
-				var total_ssl_connections int64
-
-				err = conn_rows.Scan(&user, &total_connections, &concurrent_connections,
-					&connected_time, &busy_time, &cpu_time, &bytes_received, &bytes_sent, &binlog_bytes_written,
-					&rows_fetched, &rows_updated, &table_rows_read, &select_commands, &update_commands, &other_commands,
-					&commit_transactions, &rollback_transactions, &denied_connections, &lost_connections, &access_denied,
-					&empty_queries, &total_ssl_connections,
-				)
-
-				if err != nil {
-					return err
-				}
-
-				tags := map[string]string{"server": servtag, "user": user}
-				fields := map[string]interface{}{
-					"total_connections":      total_connections,
-					"concurrent_connections": concurrent_connections,
-					"connected_time":         connected_time,
-					"busy_time":              busy_time,
-					"cpu_time":               cpu_time,
-					"bytes_received":         bytes_received,
-					"bytes_sent":             bytes_sent,
-					"binlog_bytes_written":   binlog_bytes_written,
-					"rows_fetched":           rows_fetched,
-					"rows_updated":           rows_updated,
-					"table_rows_read":        table_rows_read,
-					"select_commands":        select_commands,
-					"update_commands":        update_commands,
-					"other_commands":         other_commands,
-					"commit_transactions":    commit_transactions,
-					"rollback_transactions":  rollback_transactions,
-					"denied_connections":     denied_connections,
-					"lost_connections":       lost_connections,
-					"access_denied":          access_denied,
-					"empty_queries":          empty_queries,
-					"total_ssl_connections":  total_ssl_connections,
-				}
-
-				acc.AddFields("mysql_user_stats", fields, tags)
-			}
+
+		for conn_rows.Next() {
+			var user string
+			var total_connections int64
+			var concurrent_connections int64
+			var connected_time int64
+			var busy_time int64
+			var cpu_time int64
+			var bytes_received int64
+			var bytes_sent int64
+			var binlog_bytes_written int64
+			var rows_fetched int64
+			var rows_updated int64
+			var table_rows_read int64
+			var select_commands int64
+			var update_commands int64
+			var other_commands int64
+			var commit_transactions int64
+			var rollback_transactions int64
+			var denied_connections int64
+			var lost_connections int64
+			var access_denied int64
+			var empty_queries int64
+			var total_ssl_connections int64
+
+			err = conn_rows.Scan(&user, &total_connections, &concurrent_connections,
+				&connected_time, &busy_time, &cpu_time, &bytes_received, &bytes_sent, &binlog_bytes_written,
+				&rows_fetched, &rows_updated, &table_rows_read, &select_commands, &update_commands, &other_commands,
+				&commit_transactions, &rollback_transactions, &denied_connections, &lost_connections, &access_denied,
+				&empty_queries, &total_ssl_connections,
+			)
+
+			if err != nil {
+				return err
+			}
+
+			tags := map[string]string{"server": servtag, "user": user}
+			fields := map[string]interface{}{
+				"total_connections":      total_connections,
+				"concurrent_connections": concurrent_connections,
+				"connected_time":         connected_time,
+				"busy_time":              busy_time,
+				"cpu_time":               cpu_time,
+				"bytes_received":         bytes_received,
+				"bytes_sent":             bytes_sent,
+				"binlog_bytes_written":   binlog_bytes_written,
+				"rows_fetched":           rows_fetched,
+				"rows_updated":           rows_updated,
+				"table_rows_read":        table_rows_read,
+				"select_commands":        select_commands,
+				"update_commands":        update_commands,
+				"other_commands":         other_commands,
+				"commit_transactions":    commit_transactions,
+				"rollback_transactions":  rollback_transactions,
+				"denied_connections":     denied_connections,
+				"lost_connections":       lost_connections,
+				"access_denied":          access_denied,
+				"empty_queries":          empty_queries,
+				"total_ssl_connections":  total_ssl_connections,
+			}
+
+			acc.AddFields("mysql_user_stats", fields, tags)
 		}
 	}
|||||||
@@ -40,10 +40,10 @@ func (s *Ping) Description() string {
 const sampleConfig = `
 	## urls to ping
 	urls = ["www.google.com"] # required

 	## number of pings to send per collection (ping -n <COUNT>)
 	count = 4 # required

 	## Ping timeout, in seconds. 0 means default timeout (ping -w <TIMEOUT>)
 	Timeout = 0
 `

@@ -64,7 +64,7 @@ func hostPinger(timeout float64, args ...string) (string, error) {
 }

 // processPingOutput takes in a string output from the ping command
-// based on linux implementation but using regex ( multilanguage support )
+// based on linux implementation but using regex ( multilanguage support ) ( shouldn't affect the performance of the program )
 // It returns (<transmitted packets>, <received reply>, <received packet>, <average response>, <min response>, <max response>)
 func processPingOutput(out string) (int, int, int, int, int, int, error) {
 	// So find a line contain 3 numbers except reply lines
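The regex-based, locale-independent idea that the `processPingOutput` comments above describe — find the summary line carrying several integers while skipping per-packet reply lines — can be sketched roughly as follows. This is a hypothetical simplified illustration, not the plugin's implementation: `findStats` is an invented helper, and the real function extracts more values and handles more line shapes.

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
	"strings"
)

// findStats scans ping output for the first line, other than per-packet
// reply lines, that contains at least three integers, and reads the first
// two as packets transmitted and received.
func findStats(out string) (trans, recv int, err error) {
	re := regexp.MustCompile(`(\d+)\D+(\d+)\D+(\d+)`)
	for _, line := range strings.Split(out, "\n") {
		// Per-packet lines also contain several numbers; skip them.
		if strings.Contains(line, "Reply from") || strings.Contains(line, "bytes from") {
			continue
		}
		if m := re.FindStringSubmatch(line); m != nil {
			trans, _ = strconv.Atoi(m[1])
			recv, _ = strconv.Atoi(m[2])
			return trans, recv, nil
		}
	}
	return 0, 0, fmt.Errorf("no summary line found")
}

func main() {
	// Works on a Windows-style summary without matching any localized words.
	out := "Reply from 8.8.8.8: bytes=32 time=40ms TTL=111\n" +
		"Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),"
	trans, recv, _ := findStats(out)
	fmt.Println(trans, recv) // 4 4
}
```

Matching on digit patterns rather than words like "Sent" or "transmitted" is what gives the multi-language support the comment mentions.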
@@ -189,13 +189,13 @@ func (p *Ping) Gather(acc telegraf.Accumulator) error {
 				"percent_reply_loss": lossReply,
 			}
 			if avg > 0 {
-				fields["average_response_ms"] = float64(avg)
+				fields["average_response_ms"] = avg
 			}
 			if min > 0 {
-				fields["minimum_response_ms"] = float64(min)
+				fields["minimum_response_ms"] = min
 			}
 			if max > 0 {
-				fields["maximum_response_ms"] = float64(max)
+				fields["maximum_response_ms"] = max
 			}
 			acc.AddFields("ping", fields, tags)
 		}(url)
|||||||
@@ -77,9 +77,9 @@ func TestPingGather(t *testing.T) {
|
|||||||
"reply_received": 4,
|
"reply_received": 4,
|
||||||
"percent_packet_loss": 0.0,
|
"percent_packet_loss": 0.0,
|
||||||
"percent_reply_loss": 0.0,
|
"percent_reply_loss": 0.0,
|
||||||
"average_response_ms": 50.0,
|
"average_response_ms": 50,
|
||||||
"minimum_response_ms": 50.0,
|
"minimum_response_ms": 50,
|
||||||
"maximum_response_ms": 52.0,
|
"maximum_response_ms": 52,
|
||||||
}
|
}
|
||||||
acc.AssertContainsTaggedFields(t, "ping", fields, tags)
|
acc.AssertContainsTaggedFields(t, "ping", fields, tags)
|
||||||
|
|
||||||
|
|||||||
@@ -29,25 +29,3 @@ _* value ignored and therefore not recorded._
|
|||||||
|
|
||||||
|
|
||||||
More information about the meaning of these metrics can be found in the [PostgreSQL Documentation](http://www.postgresql.org/docs/9.2/static/monitoring-stats.html#PG-STAT-DATABASE-VIEW)
|
More information about the meaning of these metrics can be found in the [PostgreSQL Documentation](http://www.postgresql.org/docs/9.2/static/monitoring-stats.html#PG-STAT-DATABASE-VIEW)
|
||||||
|
|
||||||
## Configruation
|
|
||||||
Specify address via a url matching:
|
|
||||||
|
|
||||||
`postgres://[pqgotest[:password]]@localhost[/dbname]?sslmode=[disable|verify-ca|verify-full]`
|
|
||||||
|
|
||||||
All connection parameters are optional. Without the dbname parameter, the driver will default to a database with the same name as the user. This dbname is just for instantiating a connection with the server and doesn't restrict the databases we are trying to grab metrics for.
|
|
||||||
|
|
||||||
A list of databases to explicitly ignore. If not specified, metrics for all databases are gathered. Do NOT use with the 'databases' option.
|
|
||||||
|
|
||||||
`ignored_databases = ["postgres", "template0", "template1"]`
|
|
||||||
|
|
||||||
A list of databases to pull metrics about. If not specified, metrics for all databases are gathered. Do NOT use with the 'ignored_databases' option.
|
|
||||||
|
|
||||||
`databases = ["app_production", "testing"]`
|
|
||||||
|
|
||||||
### Configuration example
|
|
||||||
```
|
|
||||||
[[inputs.postgresql]]
|
|
||||||
address = "postgres://telegraf@localhost/someDB"
|
|
||||||
ignored_databases = ["template0", "template1"]
|
|
||||||
```
|
|
||||||
@@ -43,7 +43,7 @@ var sampleConfig = `
 # ignored_databases = ["postgres", "template0", "template1"]
 
 ## A list of databases to pull metrics about. If not specified, metrics for all
-## databases are gathered. Do NOT use with the 'ignored_databases' option.
+## databases are gathered. Do NOT use with the 'ignore_databases' option.
 # databases = ["app_production", "testing"]
 `
@@ -8,8 +8,6 @@ import (
 "strconv"
 "strings"
 
-"github.com/shirou/gopsutil/process"
-
 "github.com/influxdata/telegraf"
 "github.com/influxdata/telegraf/plugins/inputs"
 )
@@ -23,15 +21,12 @@ type Procstat struct {
 User string
 PidTag bool
 
-// pidmap maps a pid to a process object, so we don't recreate every gather
-pidmap map[int32]*process.Process
 // tagmap maps a pid to a map of tags for that pid
 tagmap map[int32]map[string]string
 }
 
 func NewProcstat() *Procstat {
 return &Procstat{
-pidmap: make(map[int32]*process.Process),
 tagmap: make(map[int32]map[string]string),
 }
 }
@@ -67,51 +62,26 @@ func (_ *Procstat) Description() string {
 }
 
 func (p *Procstat) Gather(acc telegraf.Accumulator) error {
-err := p.createProcesses()
+pids, err := p.getAllPids()
 if err != nil {
 log.Printf("E! Error: procstat getting process, exe: [%s] pidfile: [%s] pattern: [%s] user: [%s] %s",
 p.Exe, p.PidFile, p.Pattern, p.User, err.Error())
 } else {
-for pid, proc := range p.pidmap {
+for _, pid := range pids {
 if p.PidTag {
 p.tagmap[pid]["pid"] = fmt.Sprint(pid)
 }
-p := NewSpecProcessor(p.ProcessName, p.Prefix, pid, acc, proc, p.tagmap[pid])
-p.pushMetrics()
+p := NewSpecProcessor(p.ProcessName, p.Prefix, pid, acc, p.tagmap[pid])
+err := p.pushMetrics()
+if err != nil {
+log.Printf("E! Error: procstat: %s", err.Error())
+}
 }
 }
 
 return nil
 }
 
-func (p *Procstat) createProcesses() error {
-var errstring string
-var outerr error
-
-pids, err := p.getAllPids()
-if err != nil {
-errstring += err.Error() + " "
-}
-
-for _, pid := range pids {
-_, ok := p.pidmap[pid]
-if !ok {
-proc, err := process.NewProcess(pid)
-if err == nil {
-p.pidmap[pid] = proc
-} else {
-errstring += err.Error() + " "
-}
-}
-}
-
-if errstring != "" {
-outerr = fmt.Errorf("%s", errstring)
-}
-
-return outerr
-}
-
 func (p *Procstat) getAllPids() ([]int32, error) {
 var pids []int32
 var err error
@@ -6,7 +6,6 @@ import (
 "strconv"
 "testing"
 
-"github.com/shirou/gopsutil/process"
 "github.com/stretchr/testify/assert"
 "github.com/stretchr/testify/require"
 
@@ -24,7 +23,6 @@ func TestGather(t *testing.T) {
 p := Procstat{
 PidFile: file.Name(),
 Prefix: "foo",
-pidmap: make(map[int32]*process.Process),
 tagmap: make(map[int32]map[string]string),
 }
 p.Gather(&acc)
@@ -1,6 +1,7 @@
 package procstat
 
 import (
+"fmt"
 "time"
 
 "github.com/shirou/gopsutil/process"
@@ -9,12 +10,13 @@ import (
 )
 
 type SpecProcessor struct {
-Prefix string
-pid int32
-tags map[string]string
-fields map[string]interface{}
-acc telegraf.Accumulator
-proc *process.Process
+ProcessName string
+Prefix string
+pid int32
+tags map[string]string
+fields map[string]interface{}
+acc telegraf.Accumulator
+proc *process.Process
 }
 
 func NewSpecProcessor(
@@ -22,29 +24,35 @@ func NewSpecProcessor(
 prefix string,
 pid int32,
 acc telegraf.Accumulator,
-p *process.Process,
 tags map[string]string,
 ) *SpecProcessor {
-if processName != "" {
-tags["process_name"] = processName
-} else {
-name, err := p.Name()
-if err == nil {
-tags["process_name"] = name
-}
-}
 return &SpecProcessor{
-Prefix: prefix,
-pid: pid,
-tags: tags,
-fields: make(map[string]interface{}),
-acc: acc,
-proc: p,
+ProcessName: processName,
+Prefix: prefix,
+pid: pid,
+tags: tags,
+fields: make(map[string]interface{}),
+acc: acc,
 }
 }
 
-func (p *SpecProcessor) pushMetrics() {
+func (p *SpecProcessor) pushMetrics() error {
 var prefix string
+proc, err := process.NewProcess(p.pid)
+if err != nil {
+return fmt.Errorf("Failed to open process with pid '%d'. Error: '%s'",
+p.pid, err)
+}
+p.proc = proc
+if p.ProcessName != "" {
+p.tags["process_name"] = p.ProcessName
+} else {
+name, err := p.proc.Name()
+if err == nil {
+p.tags["process_name"] = name
+}
+}
+
 if p.Prefix != "" {
 prefix = p.Prefix + "_"
 }
@@ -107,4 +115,5 @@ func (p *SpecProcessor) pushMetrics() {
 }
 
 p.acc.AddFields("procstat", fields, p.tags)
+return nil
 }
@@ -111,9 +111,11 @@ func TestParseValidPrometheus(t *testing.T) {
 "gauge": float64(1),
 }, metrics[0].Fields())
 assert.Equal(t, map[string]string{
 "osVersion": "CentOS Linux 7 (Core)",
 "dockerVersion": "1.8.2",
 "kernelVersion": "3.10.0-229.20.1.el7.x86_64",
+"cadvisorRevision": "",
+"cadvisorVersion": "",
 }, metrics[0].Tags())
 
 // Counter value
@@ -1,112 +0,0 @@
-# socket listener service input plugin
-
-The Socket Listener is a service input plugin that listens for messages from
-streaming (tcp, unix) or datagram (udp, unixgram) protocols.
-
-The plugin expects messages in the
-[Telegraf Input Data Formats](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md).
-
-### Configuration:
-
-This is a sample configuration for the plugin.
-
-```toml
-# Generic socket listener capable of handling multiple socket types.
-[[inputs.socket_listener]]
-## URL to listen on
-# service_address = "tcp://:8094"
-# service_address = "tcp://127.0.0.1:http"
-# service_address = "tcp4://:8094"
-# service_address = "tcp6://:8094"
-# service_address = "tcp6://[2001:db8::1]:8094"
-# service_address = "udp://:8094"
-# service_address = "udp4://:8094"
-# service_address = "udp6://:8094"
-# service_address = "unix:///tmp/telegraf.sock"
-# service_address = "unixgram:///tmp/telegraf.sock"
-
-## Maximum number of concurrent connections.
-## Only applies to stream sockets (e.g. TCP).
-## 0 (default) is unlimited.
-# max_connections = 1024
-
-## Maximum socket buffer size in bytes.
-## For stream sockets, once the buffer fills up, the sender will start backing up.
-## For datagram sockets, once the buffer fills up, metrics will start dropping.
-## Defaults to the OS default.
-# read_buffer_size = 65535
-
-## Data format to consume.
-## Each data format has it's own unique set of configuration options, read
-## more about them here:
-## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
-# data_format = "influx"
-```
-
-## A Note on UDP OS Buffer Sizes
-
-The `read_buffer_size` config option can be used to adjust the size of the socket
-buffer, but this number is limited by OS settings. On Linux, `read_buffer_size`
-will default to `rmem_default` and will be capped by `rmem_max`. On BSD systems,
-`read_buffer_size` is capped by `maxsockbuf`, and there is no OS default
-setting.
-
-Instructions on how to adjust these OS settings are available below.
-
-Some OSes (most notably, Linux) place very restricive limits on the performance
-of UDP protocols. It is _highly_ recommended that you increase these OS limits to
-at least 8MB before trying to run large amounts of UDP traffic to your instance.
-8MB is just a recommendation, and can be adjusted higher.
-
-### Linux
-Check the current UDP/IP receive buffer limit & default by typing the following
-commands:
-
-```
-sysctl net.core.rmem_max
-sysctl net.core.rmem_default
-```
-
-If the values are less than 8388608 bytes you should add the following lines to
-the /etc/sysctl.conf file:
-
-```
-net.core.rmem_max=8388608
-net.core.rmem_default=8388608
-```
-
-Changes to /etc/sysctl.conf do not take effect until reboot.
-To update the values immediately, type the following commands as root:
-
-```
-sysctl -w net.core.rmem_max=8388608
-sysctl -w net.core.rmem_default=8388608
-```
-
-### BSD/Darwin
-
-On BSD/Darwin systems you need to add about a 15% padding to the kernel limit
-socket buffer. Meaning if you want an 8MB buffer (8388608 bytes) you need to set
-the kernel limit to `8388608*1.15 = 9646900`. This is not documented anywhere but
-happens
-[in the kernel here.](https://github.com/freebsd/freebsd/blob/master/sys/kern/uipc_sockbuf.c#L63-L64)
-
-Check the current UDP/IP buffer limit by typing the following command:
-
-```
-sysctl kern.ipc.maxsockbuf
-```
-
-If the value is less than 9646900 bytes you should add the following lines
-to the /etc/sysctl.conf file (create it if necessary):
-
-```
-kern.ipc.maxsockbuf=9646900
-```
-
-Changes to /etc/sysctl.conf do not take effect until reboot.
-To update the values immediately, type the following command as root:
-
-```
-sysctl -w kern.ipc.maxsockbuf=9646900
-```
@@ -1,4 +1,30 @@
 # TCP listener service input plugin
 
-> DEPRECATED: As of version 1.3 the TCP listener plugin has been deprecated in favor of the
-> [socket_listener plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/socket_listener)
+The TCP listener is a service input plugin that listens for messages on a TCP
+socket and adds those messages to InfluxDB.
+The plugin expects messages in the
+[Telegraf Input Data Formats](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md).
+
+### Configuration:
+
+This is a sample configuration for the plugin.
+
+```toml
+# Generic TCP listener
+[[inputs.tcp_listener]]
+## Address and port to host TCP listener on
+service_address = ":8094"
+
+## Number of TCP messages allowed to queue up. Once filled, the
+## TCP listener will start dropping packets.
+allowed_pending_messages = 10000
+
+## Maximum number of concurrent TCP connections to allow
+max_tcp_connections = 250
+
+## Data format to consume.
+## Each data format has it's own unique set of configuration options, read
+## more about them here:
+## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+data_format = "influx"
+```
@@ -58,9 +58,21 @@ var malformedwarn = "E! tcp_listener has received %d malformed packets" +
 " thus far."
 
 const sampleConfig = `
-# DEPRECATED: the TCP listener plugin has been deprecated in favor of the
-# socket_listener plugin
-# see https://github.com/influxdata/telegraf/tree/master/plugins/inputs/socket_listener
+## Address and port to host TCP listener on
+# service_address = ":8094"
+
+## Number of TCP messages allowed to queue up. Once filled, the
+## TCP listener will start dropping packets.
+# allowed_pending_messages = 10000
+
+## Maximum number of concurrent TCP connections to allow
+# max_tcp_connections = 250
+
+## Data format to consume.
+## Each data format has it's own unique set of configuration options, read
+## more about them here:
+## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+data_format = "influx"
 `
 
 func (t *TcpListener) SampleConfig() string {
@@ -86,10 +98,6 @@ func (t *TcpListener) Start(acc telegraf.Accumulator) error {
 t.Lock()
 defer t.Unlock()
 
-log.Println("W! DEPRECATED: the TCP listener plugin has been deprecated " +
-"in favor of the socket_listener plugin " +
-"(https://github.com/influxdata/telegraf/tree/master/plugins/inputs/socket_listener)")
-
 tags := map[string]string{
 "address": t.ServiceAddress,
 }
@@ -1,4 +1,86 @@
 # UDP listener service input plugin
 
-> DEPRECATED: As of version 1.3 the UDP listener plugin has been deprecated in favor of the
-> [socket_listener plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/socket_listener)
+The UDP listener is a service input plugin that listens for messages on a UDP
+socket and adds those messages to InfluxDB.
+The plugin expects messages in the
+[Telegraf Input Data Formats](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md).
+
+### Configuration:
+
+This is a sample configuration for the plugin.
+
+```toml
+[[inputs.udp_listener]]
+## Address and port to host UDP listener on
+service_address = ":8092"
+
+## Number of UDP messages allowed to queue up. Once filled, the
+## UDP listener will start dropping packets.
+allowed_pending_messages = 10000
+
+## Data format to consume.
+## Each data format has it's own unique set of configuration options, read
+## more about them here:
+## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+data_format = "influx"
+```
+
+## A Note on UDP OS Buffer Sizes
+
+Some OSes (most notably, Linux) place very restricive limits on the performance
+of UDP protocols. It is _highly_ recommended that you increase these OS limits to
+at least 8MB before trying to run large amounts of UDP traffic to your instance.
+8MB is just a recommendation, and can be adjusted higher.
+
+### Linux
+Check the current UDP/IP receive buffer limit & default by typing the following
+commands:
+
+```
+sysctl net.core.rmem_max
+sysctl net.core.rmem_default
+```
+
+If the values are less than 8388608 bytes you should add the following lines to
+the /etc/sysctl.conf file:
+
+```
+net.core.rmem_max=8388608
+net.core.rmem_default=8388608
+```
+
+Changes to /etc/sysctl.conf do not take effect until reboot.
+To update the values immediately, type the following commands as root:
+
+```
+sysctl -w net.core.rmem_max=8388608
+sysctl -w net.core.rmem_default=8388608
+```
+
+### BSD/Darwin
+
+On BSD/Darwin systems you need to add about a 15% padding to the kernel limit
+socket buffer. Meaning if you want an 8MB buffer (8388608 bytes) you need to set
+the kernel limit to `8388608*1.15 = 9646900`. This is not documented anywhere but
+happens
+[in the kernel here.](https://github.com/freebsd/freebsd/blob/master/sys/kern/uipc_sockbuf.c#L63-L64)
+
+Check the current UDP/IP buffer limit by typing the following command:
+
+```
+sysctl kern.ipc.maxsockbuf
+```
+
+If the value is less than 9646900 bytes you should add the following lines
+to the /etc/sysctl.conf file (create it if necessary):
+
+```
+kern.ipc.maxsockbuf=9646900
+```
+
+Changes to /etc/sysctl.conf do not take effect until reboot.
+To update the values immediately, type the following commands as root:
+
+```
+sysctl -w kern.ipc.maxsockbuf=9646900
+```
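The 15% BSD/Darwin padding rule in the README text is simple arithmetic; a quick sketch in Go (the helper name is illustrative, and the 1.15 factor is the README's approximation, not an exact kernel constant):

```go
package main

import (
	"fmt"
	"math"
)

// paddedSockbuf returns the kern.ipc.maxsockbuf value needed for a desired
// usable buffer size, adding the roughly 15% kernel overhead described in
// the README. Illustrative helper, not part of Telegraf.
func paddedSockbuf(want int) int {
	return int(math.Ceil(float64(want) * 1.15))
}

func main() {
	fmt.Println(paddedSockbuf(8388608)) // 8 MB usable buffer -> 9646900
}
```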
@@ -66,9 +66,22 @@ var malformedwarn = "E! udp_listener has received %d malformed packets" +
 " thus far."
 
 const sampleConfig = `
-# DEPRECATED: the TCP listener plugin has been deprecated in favor of the
-# socket_listener plugin
-# see https://github.com/influxdata/telegraf/tree/master/plugins/inputs/socket_listener
+## Address and port to host UDP listener on
+# service_address = ":8092"
+
+## Number of UDP messages allowed to queue up. Once filled, the
+## UDP listener will start dropping packets.
+# allowed_pending_messages = 10000
+
+## Set the buffer size of the UDP connection outside of OS default (in bytes)
+## If set to 0, take OS default
+udp_buffer_size = 16777216
+
+## Data format to consume.
+## Each data format has it's own unique set of configuration options, read
+## more about them here:
+## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
+data_format = "influx"
 `
 
 func (u *UdpListener) SampleConfig() string {
@@ -93,10 +106,6 @@ func (u *UdpListener) Start(acc telegraf.Accumulator) error {
 u.Lock()
 defer u.Unlock()
 
-log.Println("W! DEPRECATED: the UDP listener plugin has been deprecated " +
-"in favor of the socket_listener plugin " +
-"(https://github.com/influxdata/telegraf/tree/master/plugins/inputs/socket_listener)")
-
 tags := map[string]string{
 "address": u.ServiceAddress,
 }
@@ -34,10 +34,9 @@ func (gh *GithubWebhook) eventHandler(w http.ResponseWriter, r *http.Request) {
 w.WriteHeader(http.StatusBadRequest)
 return
 }
-if e != nil {
 p := e.NewMetric()
 gh.acc.AddFields("github_webhooks", p.Fields(), p.Tags(), p.Time())
-}
 
 w.WriteHeader(http.StatusOK)
 }
@@ -85,8 +84,6 @@ func NewEvent(data []byte, name string) (Event, error) {
 return generateEvent(data, &MembershipEvent{})
 case "page_build":
 return generateEvent(data, &PageBuildEvent{})
-case "ping":
-return nil, nil
 case "public":
 return generateEvent(data, &PublicEvent{})
 case "pull_request":
@@ -25,10 +25,6 @@ func TestCommitCommentEvent(t *testing.T) {
 GithubWebhookRequest("commit_comment", CommitCommentEventJSON(), t)
 }
 
-func TestPingEvent(t *testing.T) {
-GithubWebhookRequest("ping", "", t)
-}
-
 func TestDeleteEvent(t *testing.T) {
 GithubWebhookRequest("delete", DeleteEventJSON(), t)
 }
@@ -331,7 +331,7 @@ func PdhCollectQueryData(hQuery PDH_HQUERY) uint32 {
 func PdhGetFormattedCounterValueDouble(hCounter PDH_HCOUNTER, lpdwType *uint32, pValue *PDH_FMT_COUNTERVALUE_DOUBLE) uint32 {
 ret, _, _ := pdh_GetFormattedCounterValue.Call(
 uintptr(hCounter),
-uintptr(PDH_FMT_DOUBLE|PDH_FMT_NOCAP100),
+uintptr(PDH_FMT_DOUBLE),
 uintptr(unsafe.Pointer(lpdwType)),
 uintptr(unsafe.Pointer(pValue)))
 
@@ -378,7 +378,7 @@ func PdhGetFormattedCounterValueDouble(hCounter PDH_HCOUNTER, lpdwType *uint32,
 func PdhGetFormattedCounterArrayDouble(hCounter PDH_HCOUNTER, lpdwBufferSize *uint32, lpdwBufferCount *uint32, itemBuffer *PDH_FMT_COUNTERVALUE_ITEM_DOUBLE) uint32 {
 ret, _, _ := pdh_GetFormattedCounterArrayW.Call(
 uintptr(hCounter),
-uintptr(PDH_FMT_DOUBLE|PDH_FMT_NOCAP100),
+uintptr(PDH_FMT_DOUBLE),
 uintptr(unsafe.Pointer(lpdwBufferSize)),
 uintptr(unsafe.Pointer(lpdwBufferCount)),
 uintptr(unsafe.Pointer(itemBuffer)))
@@ -1,18 +1,13 @@
|
|||||||
# AMQP Output Plugin
|
# AMQP Output Plugin
|
||||||
|
|
||||||
This plugin writes to a AMQP 0-9-1 Exchange, a promenent implementation of this protocol being [RabbitMQ](https://www.rabbitmq.com/).
|
This plugin writes to a AMQP exchange using tag, defined in configuration file
|
||||||
|
as RoutingTag, as a routing key.
|
||||||
Metrics are written to a topic exchange using tag, defined in configuration file as RoutingTag, as a routing key.
|
|
||||||
|
|
||||||
If RoutingTag is empty, then empty routing key will be used.
|
If RoutingTag is empty, then empty routing key will be used.
|
||||||
Metrics are grouped in batches by RoutingTag.
|
Metrics are grouped in batches by RoutingTag.
|
||||||
|
|
||||||
This plugin doesn't bind exchange to a queue, so it should be done by consumer.
|
This plugin doesn't bind exchange to a queue, so it should be done by consumer.
|
||||||
|
|
||||||
For an introduction to AMQP see:
|
|
||||||
- https://www.rabbitmq.com/tutorials/amqp-concepts.html
|
|
||||||
- https://www.rabbitmq.com/getstarted.html
|
|
||||||
|
|
||||||
### Configuration:
|
### Configuration:
|
||||||
|
|
||||||
```
|
```
|
||||||
@@ -23,8 +18,6 @@ For an introduction to AMQP see:
|
|||||||
## AMQP exchange
|
## AMQP exchange
|
||||||
exchange = "telegraf"
|
exchange = "telegraf"
|
||||||
## Auth method. PLAIN and EXTERNAL are supported
|
## Auth method. PLAIN and EXTERNAL are supported
|
||||||
## Using EXTERNAL requires enabling the rabbitmq_auth_mechanism_ssl plugin as
|
|
||||||
## described here: https://www.rabbitmq.com/plugins.html
|
|
||||||
# auth_method = "PLAIN"
|
# auth_method = "PLAIN"
|
||||||
## Telegraf tag to use as a routing key
|
## Telegraf tag to use as a routing key
|
||||||
## ie, if this tag exists, it's value will be used as the routing key
|
## ie, if this tag exists, it's value will be used as the routing key
|
||||||
|
|||||||
@@ -40,7 +40,6 @@ type AMQP struct {
|
|||||||
// Use SSL but skip chain & host verification
|
// Use SSL but skip chain & host verification
|
||||||
InsecureSkipVerify bool
|
InsecureSkipVerify bool
|
||||||
|
|
||||||
conn *amqp.Connection
|
|
||||||
channel *amqp.Channel
|
channel *amqp.Channel
|
||||||
sync.Mutex
|
sync.Mutex
|
||||||
headers amqp.Table
|
headers amqp.Table
|
||||||
@@ -69,8 +68,6 @@ var sampleConfig = `
|
|||||||
## AMQP exchange
|
## AMQP exchange
|
||||||
exchange = "telegraf"
|
exchange = "telegraf"
|
||||||
## Auth method. PLAIN and EXTERNAL are supported
|
## Auth method. PLAIN and EXTERNAL are supported
|
||||||
## Using EXTERNAL requires enabling the rabbitmq_auth_mechanism_ssl plugin as
|
|
||||||
## described here: https://www.rabbitmq.com/plugins.html
|
|
||||||
# auth_method = "PLAIN"
|
# auth_method = "PLAIN"
|
||||||
## Telegraf tag to use as a routing key
|
## Telegraf tag to use as a routing key
|
||||||
## ie, if this tag exists, it's value will be used as the routing key
|
## ie, if this tag exists, it's value will be used as the routing key
|
||||||
@@ -132,8 +129,6 @@ func (q *AMQP) Connect() error {
|
|||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
q.conn = connection
|
|
||||||
|
|
||||||
channel, err := connection.Channel()
|
channel, err := connection.Channel()
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return fmt.Errorf("Failed to open a channel: %s", err)
|
return fmt.Errorf("Failed to open a channel: %s", err)
|
||||||
@@ -153,11 +148,7 @@ func (q *AMQP) Connect() error {
|
|||||||
}
|
}
|
||||||
q.channel = channel
|
q.channel = channel
|
||||||
go func() {
|
go func() {
|
||||||
err := <-connection.NotifyClose(make(chan *amqp.Error))
|
log.Printf("I! Closing: %s", <-connection.NotifyClose(make(chan *amqp.Error)))
|
||||||
if err == nil {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
log.Printf("I! Closing: %s", err)
|
|
||||||
log.Printf("I! Trying to reconnect")
|
log.Printf("I! Trying to reconnect")
|
||||||
for err := q.Connect(); err != nil; err = q.Connect() {
|
for err := q.Connect(); err != nil; err = q.Connect() {
|
||||||
log.Println("E! ", err.Error())
|
log.Println("E! ", err.Error())
|
||||||
@@ -169,12 +160,7 @@ func (q *AMQP) Connect() error {
 }
 
 func (q *AMQP) Close() error {
-	err := q.conn.Close()
-	if err != nil && err != amqp.ErrClosed {
-		log.Printf("E! Error closing AMQP connection: %s", err)
-		return err
-	}
-	return nil
+	return q.channel.Close()
 }
 
 func (q *AMQP) SampleConfig() string {
@@ -221,7 +207,7 @@ func (q *AMQP) Write(metrics []telegraf.Metric) error {
 			Body: buf,
 		})
 		if err != nil {
-			return fmt.Errorf("Failed to send AMQP message: %s", err)
+			return fmt.Errorf("FAILED to send amqp message: %s", err)
 		}
 	}
 	return nil
@@ -1,16 +1 @@
 # file Output Plugin
-
-This plugin writes telegraf metrics to files
-
-### Configuration
-```
-[[outputs.file]]
-  ## Files to write to, "stdout" is a specially handled file.
-  files = ["stdout", "/tmp/metrics.out"]
-
-  ## Data format to output.
-  ## Each data format has it's own unique set of configuration options, read
-  ## more about them here:
-  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
-  data_format = "influx"
-```
@@ -112,8 +112,6 @@ func (i *InfluxDB) Connect() error {
 		Timeout:   i.Timeout.Duration,
 		TLSConfig: tlsConfig,
 		UserAgent: i.UserAgent,
-		Username:  i.Username,
-		Password:  i.Password,
 	}
 	wp := client.WriteParams{
 		Database: i.Database,
@@ -1,7 +1,6 @@
 package prometheus_client
 
 import (
-	"context"
 	"fmt"
 	"log"
 	"net/http"
@@ -25,7 +24,6 @@ type MetricWithExpiration struct {
 type PrometheusClient struct {
 	Listen             string
 	ExpirationInterval internal.Duration `toml:"expiration_interval"`
-	server             *http.Server
 
 	metrics map[string]*MetricWithExpiration
 
@@ -43,25 +41,30 @@ var sampleConfig = `
 func (p *PrometheusClient) Start() error {
 	p.metrics = make(map[string]*MetricWithExpiration)
 	prometheus.Register(p)
+	defer func() {
+		if r := recover(); r != nil {
+			// recovering from panic here because there is no way to stop a
+			// running http go server except by a kill signal. Since the server
+			// does not stop on SIGHUP, Start() will panic when the process
+			// is reloaded.
+		}
+	}()
 	if p.Listen == "" {
 		p.Listen = "localhost:9126"
 	}
 
-	mux := http.NewServeMux()
-	mux.Handle("/metrics", prometheus.Handler())
-
-	p.server = &http.Server{
-		Addr:    p.Listen,
-		Handler: mux,
+	http.Handle("/metrics", prometheus.Handler())
+	server := &http.Server{
+		Addr: p.Listen,
 	}
 
-	go p.server.ListenAndServe()
+	go server.ListenAndServe()
 	return nil
 }
 
 func (p *PrometheusClient) Stop() {
-	// plugin gets cleaned up in Close() already.
+	// TODO: Use a listener for http.Server that counts active connections
+	// that can be stopped and closed gracefully
 }
 
 func (p *PrometheusClient) Connect() error {
@@ -70,9 +73,8 @@ func (p *PrometheusClient) Connect() error {
 }
 
 func (p *PrometheusClient) Close() error {
-	ctx, cancel := context.WithTimeout(context.Background(), time.Second*5)
-	defer cancel()
-	return p.server.Shutdown(ctx)
+	// This service output does not need to close any of its connections
+	return nil
 }
 
 func (p *PrometheusClient) SampleConfig() string {
@@ -193,16 +193,7 @@ func TestConnectAndWrite(t *testing.T) {
 	err = r.Write(metrics)
 	require.NoError(t, err)
 
-	start := time.Now()
-	for true {
-		events, _ := r.client.Query(`tagged "docker"`)
-		if len(events) > 0 {
-			break
-		}
-		if time.Since(start) > time.Second {
-			break
-		}
-	}
-
+	time.Sleep(200 * time.Millisecond)
+
 	// are there any "docker" tagged events in Riemann?
 	events, err := r.client.Query(`tagged "docker"`)
@@ -1,27 +0,0 @@
-# socket_writer Plugin
-
-The socket_writer plugin can write to a UDP, TCP, or unix socket.
-
-It can output data in any of the [supported output formats](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md).
-
-```toml
-# Generic socket writer capable of handling multiple socket types.
-[[outputs.socket_writer]]
-  ## URL to connect to
-  # address = "tcp://127.0.0.1:8094"
-  # address = "tcp://example.com:http"
-  # address = "tcp4://127.0.0.1:8094"
-  # address = "tcp6://127.0.0.1:8094"
-  # address = "tcp6://[2001:db8::1]:8094"
-  # address = "udp://127.0.0.1:8094"
-  # address = "udp4://127.0.0.1:8094"
-  # address = "udp6://127.0.0.1:8094"
-  # address = "unix:///tmp/telegraf.sock"
-  # address = "unixgram:///tmp/telegraf.sock"
-
-  ## Data format to generate.
-  ## Each data format has it's own unique set of configuration options, read
-  ## more about them here:
-  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
-  # data_format = "influx"
-```
@@ -67,7 +67,7 @@ func TestParseValidOutput(t *testing.T) {
 	assert.Equal(t, map[string]interface{}{
 		"value": float64(0.008457),
 	}, metrics[0].Fields())
-	assert.Equal(t, map[string]string{}, metrics[0].Tags())
+	assert.Equal(t, map[string]string{"unit": ""}, metrics[0].Tags())
 }
 
 func TestParseInvalidOutput(t *testing.T) {
@@ -22,7 +22,6 @@ INSTALL_ROOT_DIR = "/usr/bin"
 LOG_DIR = "/var/log/telegraf"
 SCRIPT_DIR = "/usr/lib/telegraf/scripts"
 CONFIG_DIR = "/etc/telegraf"
-CONFIG_DIR_D = "/etc/telegraf/telegraf.d"
 LOGROTATE_DIR = "/etc/logrotate.d"
 
 INIT_SCRIPT = "scripts/init.sh"
@@ -116,7 +115,7 @@ def create_package_fs(build_root):
     logging.debug("Creating a filesystem hierarchy from directory: {}".format(build_root))
     # Using [1:] for the path names due to them being absolute
     # (will overwrite previous paths, per 'os.path.join' documentation)
-    dirs = [ INSTALL_ROOT_DIR[1:], LOG_DIR[1:], SCRIPT_DIR[1:], CONFIG_DIR[1:], LOGROTATE_DIR[1:], CONFIG_DIR_D[1:] ]
+    dirs = [ INSTALL_ROOT_DIR[1:], LOG_DIR[1:], SCRIPT_DIR[1:], CONFIG_DIR[1:], LOGROTATE_DIR[1:] ]
     for d in dirs:
         os.makedirs(os.path.join(build_root, d))
         os.chmod(os.path.join(build_root, d), 0o755)