Compare commits

...

55 Commits

Author SHA1 Message Date
David Norton
81caa56859 move plugin interfaces into separate package 2016-12-23 10:18:27 -05:00
David Norton
3e6c4a53a4 fix plugin namespacing 2016-12-22 12:06:04 -05:00
David Norton
ba8f91a038 update Circle build to use go1.8beta1 2016-12-22 12:06:04 -05:00
David Norton
2b77751df8 make plugin dir configurable and fix namespacing 2016-12-22 12:06:04 -05:00
David Norton
70d678c442 load external plugins 2016-12-22 12:06:04 -05:00
YKlausz
fd1feff7b4 Remove print call in cassandra plugin (#2192) 2016-12-21 17:23:54 +00:00
Dominik Labuda
37bc9cf795 [plugins] jolokia input plugin: configurable http timeouts (#2098) 2016-12-21 12:41:58 +00:00
Cameron Sparr
b762546fa7 docker: check type when totalling blkio & net metrics
closes #2027
2016-12-21 12:18:38 +00:00
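The fix in b762546fa7 checks value types before totalling, so an unexpected type is skipped instead of triggering an interface conversion panic. A minimal sketch of the pattern, using a hypothetical sumInts helper rather than telegraf's actual docker plugin code:

```go
package main

import "fmt"

// sumInts totals metric values that arrive as interface{} from a stats API.
// The comma-ok type assertion skips values of unexpected types instead of
// panicking on a bare conversion. (Hypothetical helper for illustration.)
func sumInts(values []interface{}) int64 {
	var total int64
	for _, v := range values {
		if n, ok := v.(int64); ok { // ok is false on a type mismatch
			total += n
		}
	}
	return total
}

func main() {
	vals := []interface{}{int64(10), "not-a-number", int64(5)}
	fmt.Println(sumInts(vals)) // 15: the string is skipped, no panic
}
```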
Cameron Sparr
bf5f2659a1 Do not try Uint parsing in redis plugin
this is just a waste of cpu cycles, since telegraf converts all uints to
int64 anyways.
2016-12-20 23:42:14 +00:00
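The rationale in bf5f2659a1 — skip the uint pass entirely and try int64 first, since uints get converted to int64 downstream anyway — looks roughly like this (hypothetical parseValue helper; the plugin's actual code differs):

```go
package main

import (
	"fmt"
	"strconv"
)

// parseValue tries int64, then float64, then falls back to the raw string.
// There is no separate ParseUint attempt: any uint would be converted to
// int64 downstream anyway, so that pass would only burn CPU cycles.
// (Sketch of the rationale, not the redis plugin's exact code.)
func parseValue(s string) interface{} {
	if n, err := strconv.ParseInt(s, 10, 64); err == nil {
		return n
	}
	if f, err := strconv.ParseFloat(s, 64); err == nil {
		return f
	}
	return s
}

func main() {
	v := parseValue("42")
	fmt.Printf("%T %v\n", v, v) // int64 42
}
```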
Mark Wolfe
d2787e8ef5 Fix for loop over value array range issue. (#2187) 2016-12-20 22:56:02 +00:00
Cameron Sparr
a9f03a72f5 Mask username/password from error messages
closes #1980
2016-12-20 19:35:45 +00:00
Cameron Sparr
7fc57812a7 changelog update 2016-12-20 18:50:32 +00:00
Mark Wolfe
8a982ca68f Moved to using the inbuilt serializer. (#1942)
* Moved to using the inbuilt serializer.

* Remove Atomic variable as it is not required.

* Adjusted metric type in line with latest changes.
2016-12-20 18:49:28 +00:00
Cameron Sparr
200237a515 Do not create a global statsd "previous instance"
this basically reverts #887

at some point we might want to do some special handling of reloading
plugins and keeping their state intact, but that will need to be done at
a higher level, and in a way that is thread-safe for multiple input
plugins of the same type.

Unfortunately this is a rather large feature that will not have a quick
fix available for it.

fixes #1975
fixes #2102
2016-12-20 17:55:04 +00:00
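The thread-safety concern in 200237a515 is about package-level state shared across reloads and instances. Keeping all state per plugin instance, behind that instance's own lock, avoids the race entirely. A minimal sketch with illustrative types — not telegraf's actual statsd implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// Statsd sketches per-instance state: each configured [[inputs.statsd]]
// holds its own counter cache behind its own mutex, so two instances never
// race on a shared package-level "previous instance" variable.
// (Illustrative types only.)
type Statsd struct {
	mu       sync.Mutex
	counters map[string]int64
}

func NewStatsd() *Statsd {
	return &Statsd{counters: make(map[string]int64)}
}

func (s *Statsd) Add(name string, delta int64) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.counters[name] += delta
}

func main() {
	a, b := NewStatsd(), NewStatsd() // two instances, two independent caches
	a.Add("requests", 1)
	b.Add("requests", 5)
	fmt.Println(a.counters["requests"], b.counters["requests"]) // 1 5
}
```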
Cameron Sparr
0ae1e0611c changelog update 2016-12-20 16:30:49 +00:00
Matt O'Hara
1392e73125 Add clusterstats to elasticsearch plugin (#1979)
* add clusterstats to elasticsearch input plugin

* add clusterstats to elasticsearch input plugin

* add clusterstats to elasticsearch input plugin

* add clusterstats to elasticsearch input plugin

* add clusterstats to elasticsearch input plugin

* responses to requested changes

* remove unnecessary recommendation
2016-12-20 16:30:03 +00:00
Cameron Sparr
a90afd95c6 Fix & unit test logparser CLF pattern with IPv6
deals partially with #1973

see also https://github.com/vjeantet/grok/issues/17
2016-12-20 15:57:32 +00:00
Cameron Sparr
9866146545 Support negative statsd counters
closes #1898
2016-12-20 13:21:51 +00:00
Cameron Sparr
8df325a68c changelog update 2016-12-20 13:04:51 +00:00
Łukasz Harasimowicz
48ae105a11 Fixing consul with multiple health checks per service (#1994)
* plugins/input/consul: moved check_id from regular fields to tags.

When a service has more than one check, sending data for both would overwrite each other,
resulting in only one check being written (the last one). Adding check_id as a tag
ensures we will get info for all unique checks per service.

* plugins/inputs/consul: updated tests
2016-12-20 13:03:31 +00:00
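Why a tag fixes the overwrite in 48ae105a11: points sharing a measurement and tag set land in the same series, so two checks on one service collide unless check_id is part of the tags. A toy seriesKey function to make that concrete (hypothetical, for illustration only):

```go
package main

import "fmt"

// seriesKey shows the identity a point is stored under: measurement plus
// tag set. With check_id as a tag, each health check maps to a distinct
// series instead of the last write winning. (Hypothetical helper.)
func seriesKey(measurement string, tags map[string]string) string {
	return fmt.Sprintf("%s,service=%s,check_id=%s",
		measurement, tags["service"], tags["check_id"])
}

func main() {
	a := seriesKey("consul_health_checks",
		map[string]string{"service": "web", "check_id": "http"})
	b := seriesKey("consul_health_checks",
		map[string]string{"service": "web", "check_id": "tcp"})
	fmt.Println(a != b) // true: distinct series, no overwrite
}
```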
Jeff Ashton
4e808c5c20 Importing pdh from github.com/lxn/win
closes #1763
closes #2017
2016-12-20 12:06:40 +00:00
Ken Dilley
eb96443a34 Update MySQL Readme to clarify connection string examples. (#2175)
* Update MySQL Readme to clarify connection string examples.

* Update mysql sample config to clarify connection string examples
2016-12-20 10:17:00 +00:00
Cameron Sparr
e36c354ff5 internal.Duration build fixup 2016-12-17 13:10:33 +00:00
Pierre Tessier
f09c08d1f3 Added response_timeout property
closes #2006
2016-12-17 13:06:04 +00:00
Steven Pall
0e8122a2fc Add trailing slash to jolokia context (#2105) 2016-12-17 12:51:46 +00:00
Cameron Sparr
6723ea5fe6 changelog update 2016-12-16 17:30:13 +00:00
Vincent
e8bf968c78 fix mongodb replica set lag always 0 #1449 (#2125) 2016-12-16 17:29:04 +00:00
Cameron Sparr
9c8f24601f rabbitmq, decrease timeout verbosity in config 2016-12-16 14:12:50 +00:00
Tevin Jeffrey
4957717df5 Add field for last GC pause time (#2121) 2016-12-16 14:03:53 +00:00
Cameron Sparr
21fac3ebec changelog update 2016-12-16 14:02:11 +00:00
Patrick Hemmer
ecbc634221 fix tail input seeking when used with pipe (#2090) 2016-12-16 14:01:49 +00:00
alekseyp
90cec20d1d Standard deviation (jitter) for Input plugin Ping (#2078) 2016-12-16 13:58:27 +00:00
Cameron Sparr
bcbf82f8e8 changelog update 2016-12-16 13:54:51 +00:00
Alex Sherwin
3a45d8851d fixes #1987 custom docker repos with non-standard port (#2018)
* fixed parsing of docker image name/version

now accounts for custom docker repos which contain a colon for a non-default port

* 1978: modifying docker test case to have a custom repo with non-standard port

* using a temp var to store index, ran gofmt

* fixes #1987, renaming iterator to 'i'
2016-12-16 13:53:16 +00:00
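The parsing problem fixed in 3a45d8851d: a registry with a non-default port ("localhost:5000/telegraf") contains a colon that is not the tag separator. Only a colon after the last "/" can separate the tag. A sketch of that idea, not telegraf's exact code:

```go
package main

import (
	"fmt"
	"strings"
)

// parseImage splits a docker image reference into name and tag, tolerating
// registries with a non-default port. A colon only counts as the tag
// separator when it appears after the last "/". (Illustrative sketch.)
func parseImage(image string) (name, tag string) {
	i := strings.LastIndex(image, ":")
	if i > strings.LastIndex(image, "/") {
		return image[:i], image[i+1:]
	}
	return image, "latest" // no tag given
}

func main() {
	fmt.Println(parseImage("localhost:5000/telegraf:1.1"))
	fmt.Println(parseImage("localhost:5000/telegraf"))
}
```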
Pierre Tessier
4a83c8c518 Add Questions status variable for issue: #1988 (#2004) 2016-12-16 13:47:47 +00:00
Doug Reese
bc13d32d53 MongoDB input plugin: Improve state data (#2001)
* MongoDB input plugin: Improve state data

Adds ARB as a "member_status" (replica set arbiter).
Uses MongoDB replica set state string for "state" value.

* MongoDB input plugin: Improve state data - changelog update
2016-12-16 13:46:32 +00:00
Frank Stutz
e6fc32bdf0 fix for puppetagent config - test 1
put Makefile back to normal

removed comment from puppetagent.go

changed config_version to config_version_string and fixed yaml for build

changed wording from branch to environment for config_string

fixed casing and Changelog

fixed test case

closes #1917
2016-12-16 13:36:06 +00:00
Cameron Sparr
a970b9c62c Revert "Rabbitmq plugin: connection-related metrics." (#2169) 2016-12-15 19:31:40 +00:00
Florian Klink
17b307a7bc ping: fix typo in README (#2163) 2016-12-14 19:47:48 +00:00
Jose Luis Navarro
393f5044bb Collect JSON values recursively
closes #1993
closes #1693
2016-12-13 21:06:05 +00:00
Pieter Slabbert
c630212dde Enable setting a clientID for MQTT Output
closes #2079
closes #1910
2016-12-13 20:03:09 +00:00
Cameron Sparr
f39db08c6d Set default values for delete_ configuration options
closes #1893
2016-12-13 20:00:52 +00:00
Jonas Falck
b4f9bc8745 Change hddtemp to always put temperature in temperature field (#1905)
Added unit tests for the changes

Fixes #1904
2016-12-13 19:40:55 +00:00
Cameron Sparr
5f06bd2566 Graylog output should set short_message field
closes #2045
2016-12-13 16:10:59 +00:00
Cameron Sparr
8a4ab3654d Fix documentation for net_response plugin
closes #2103
2016-12-13 16:02:03 +00:00
Cameron Sparr
e2f9617228 Support strings in statsd set measurements
closes #2068
2016-12-13 15:42:22 +00:00
Cameron Sparr
e097ae9632 Fix possible panic when file info cannot be gotten
closes #2061
2016-12-13 14:54:07 +00:00
Cameron Sparr
07684fb030 Update changelog 2016-12-13 14:28:28 +00:00
Da1den
17fa6f9b17 Fixed bug that you cannot gather data on non-English systems (#1944) 2016-12-13 14:24:41 +00:00
krise3k
8e3fbaa9dd Add missing slim (#1937) 2016-12-13 14:23:18 +00:00
Kishore Nallan
dede3e70ad Rabbitmq plugin: connection-related metrics. (#1908)
* Rabbitmq plugin: connection-related metrics.

* Run go fmt.
2016-12-13 14:17:20 +00:00
Anthony Arnaud
7558081873 Output openTSDB HTTPS with basic auth (#1913) 2016-12-13 14:15:51 +00:00
Leon Barrett
6e241611be Fix bug: too many cloudwatch metrics (#1885)
* Fix bug: too many cloudwatch metrics

Cloudwatch metrics were being added incorrectly. The most obvious
symptom of this was that too many metrics were being added. A simple
check against the name of the metric proved to be a sufficient fix. In
order to test the fix, a metric selection function was factored out.

* Go fmt cloudwatch

* Cloudwatch isSelected checks metric name

* Move cloudwatch line in changelog to 1.2 features
2016-12-13 14:13:53 +00:00
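The "simple check against the name of the metric" described in 6e241611be can be sketched as a selection predicate: keep a cloudwatch metric only when its name is among the configured ones. (Hypothetical simplification of the factored-out function.)

```go
package main

import "fmt"

// isSelected keeps a cloudwatch metric only when its name matches one of
// the configured metric names, which stops unrelated metrics from being
// pulled in. (Hypothetical simplification for illustration.)
func isSelected(name string, configured []string) bool {
	for _, c := range configured {
		if c == name {
			return true
		}
	}
	return false
}

func main() {
	want := []string{"CPUUtilization", "NetworkIn"}
	fmt.Println(isSelected("CPUUtilization", want)) // true
	fmt.Println(isSelected("DiskReadOps", want))    // false
}
```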
Rikaard Hosein
fc9f921b62 Can turn pid into tag instead of field
closes #1843
fixes  #1668
2016-12-13 13:21:39 +00:00
Cameron Sparr
12db3b9120 Check if metric is nil before calling SetAggregate
fixes #2146
2016-12-13 12:27:10 +00:00
220 changed files with 2780 additions and 1245 deletions


@@ -2,6 +2,19 @@
### Release Notes ### Release Notes
- The StatsD plugin will now default all "delete_" config options to "true". This
will change the default behavior for users who were not specifying these parameters
in their config file.
- The StatsD plugin will also no longer save its state on a service reload.
Essentially we have reverted PR [#887](https://github.com/influxdata/telegraf/pull/887).
The reason for this is that saving the state in a global variable is not
thread-safe (see [#1975](https://github.com/influxdata/telegraf/issues/1975) & [#2102](https://github.com/influxdata/telegraf/issues/2102)),
and this creates issues if users want to define multiple instances
of the statsd plugin. Saving state on reload may be considered in the future,
but this would need to be implemented at a higher level and applied to all
plugins, not just statsd.
### Features ### Features
- [#2123](https://github.com/influxdata/telegraf/pull/2123): Fix improper calculation of CPU percentages - [#2123](https://github.com/influxdata/telegraf/pull/2123): Fix improper calculation of CPU percentages
@@ -14,12 +27,45 @@
- [#2127](https://github.com/influxdata/telegraf/pull/2127): Update Go version to 1.7.4. - [#2127](https://github.com/influxdata/telegraf/pull/2127): Update Go version to 1.7.4.
- [#2126](https://github.com/influxdata/telegraf/pull/2126): Support a metric.Split function. - [#2126](https://github.com/influxdata/telegraf/pull/2126): Support a metric.Split function.
- [#2065](https://github.com/influxdata/telegraf/pull/2065): elasticsearch "shield" (basic auth) support doc. - [#2065](https://github.com/influxdata/telegraf/pull/2065): elasticsearch "shield" (basic auth) support doc.
- [#1885](https://github.com/influxdata/telegraf/pull/1885): Fix over-querying of cloudwatch metrics
- [#1913](https://github.com/influxdata/telegraf/pull/1913): OpenTSDB basic auth support.
- [#1908](https://github.com/influxdata/telegraf/pull/1908): RabbitMQ Connection metrics.
- [#1937](https://github.com/influxdata/telegraf/pull/1937): HAProxy session limit metric.
- [#2068](https://github.com/influxdata/telegraf/issues/2068): Accept strings for StatsD sets.
- [#1893](https://github.com/influxdata/telegraf/issues/1893): Change StatsD default "reset" behavior.
- [#2079](https://github.com/influxdata/telegraf/pull/2079): Enable setting ClientID in MQTT output.
- [#2001](https://github.com/influxdata/telegraf/pull/2001): MongoDB input plugin: Improve state data.
- [#2078](https://github.com/influxdata/telegraf/pull/2078): Ping input: add standard deviation field.
- [#2121](https://github.com/influxdata/telegraf/pull/2121): Add GC pause metric to InfluxDB input plugin.
- [#2006](https://github.com/influxdata/telegraf/pull/2006): Added response_timeout property to prometheus input plugin.
- [#1763](https://github.com/influxdata/telegraf/issues/1763): Pulling github.com/lxn/win's pdh wrapper into telegraf.
- [#1898](https://github.com/influxdata/telegraf/issues/1898): Support negative statsd counters.
- [#1921](https://github.com/influxdata/telegraf/issues/1921): Elasticsearch cluster stats support.
- [#1942](https://github.com/influxdata/telegraf/pull/1942): Change Amazon Kinesis output plugin to use the built-in serializer plugins.
- [#1980](https://github.com/influxdata/telegraf/issues/1980): Hide username/password from elasticsearch error log messages.
- [#2097](https://github.com/influxdata/telegraf/issues/2097): Configurable HTTP timeouts in Jolokia plugin
### Bugfixes ### Bugfixes
- [#2049](https://github.com/influxdata/telegraf/pull/2049): Fix the Value data format not trimming null characters from input. - [#2049](https://github.com/influxdata/telegraf/pull/2049): Fix the Value data format not trimming null characters from input.
- [#1949](https://github.com/influxdata/telegraf/issues/1949): Fix windows `net` plugin. - [#1949](https://github.com/influxdata/telegraf/issues/1949): Fix windows `net` plugin.
- [#1775](https://github.com/influxdata/telegraf/issues/1775): Cache & expire metrics for delivery to prometheus - [#1775](https://github.com/influxdata/telegraf/issues/1775): Cache & expire metrics for delivery to prometheus
- [#1775](https://github.com/influxdata/telegraf/issues/1775): Cache & expire metrics for delivery to prometheus.
- [#2146](https://github.com/influxdata/telegraf/issues/2146): Fix potential panic in aggregator plugin metric maker.
- [#1843](https://github.com/influxdata/telegraf/pull/1843) & [#1668](https://github.com/influxdata/telegraf/issues/1668): Add optional ability to define PID as a tag.
- [#1730](https://github.com/influxdata/telegraf/issues/1730): Fix win_perf_counters not gathering non-English counters.
- [#2061](https://github.com/influxdata/telegraf/issues/2061): Fix panic when file stat info cannot be collected due to permissions or other issue(s).
- [#2045](https://github.com/influxdata/telegraf/issues/2045): Graylog output should set short_message field.
- [#1904](https://github.com/influxdata/telegraf/issues/1904): Hddtemp now always puts the temperature in the temperature field.
- [#1693](https://github.com/influxdata/telegraf/issues/1693): Properly collect nested jolokia struct data.
- [#1917](https://github.com/influxdata/telegraf/pull/1917): fix puppetagent inputs plugin to support string for config variable.
- [#1987](https://github.com/influxdata/telegraf/issues/1987): fix docker input plugin tags when registry has port.
- [#2089](https://github.com/influxdata/telegraf/issues/2089): Fix tail input when reading from a pipe.
- [#1449](https://github.com/influxdata/telegraf/issues/1449): MongoDB plugin always shows 0 replication lag.
- [#1825](https://github.com/influxdata/telegraf/issues/1825): Consul plugin: add check_id as a tag in metrics to avoid overwrites.
- [#1973](https://github.com/influxdata/telegraf/issues/1973): Partial fix: logparser CLF pattern with IPv6 addresses.
- [#1975](https://github.com/influxdata/telegraf/issues/1975) & [#2102](https://github.com/influxdata/telegraf/issues/2102): Fix thread-safety when using multiple instances of the statsd input plugin.
- [#2027](https://github.com/influxdata/telegraf/issues/2027): docker input: interface conversion panic fix.
## v1.1.2 [2016-12-12] ## v1.1.2 [2016-12-12]


@@ -1,7 +1,6 @@
github.com/Microsoft/go-winio ce2922f643c8fd76b46cadc7f404a06282678b34 github.com/Microsoft/go-winio ce2922f643c8fd76b46cadc7f404a06282678b34
github.com/StackExchange/wmi f3e2bae1e0cb5aef83e319133eabfee30013a4a5 github.com/StackExchange/wmi f3e2bae1e0cb5aef83e319133eabfee30013a4a5
github.com/go-ole/go-ole be49f7c07711fcb603cff39e1de7c67926dc0ba7 github.com/go-ole/go-ole be49f7c07711fcb603cff39e1de7c67926dc0ba7
github.com/lxn/win 950a0e81e7678e63d8e6cd32412bdecb325ccd88
github.com/shirou/w32 3c9377fc6748f222729a8270fe2775d149a249ad github.com/shirou/w32 3c9377fc6748f222729a8270fe2775d149a249ad
golang.org/x/sys a646d33e2ee3172a661fc09bca23bb4889a41bc8 golang.org/x/sys a646d33e2ee3172a661fc09bca23bb4889a41bc8
github.com/go-ini/ini 9144852efba7c4daf409943ee90767da62d55438 github.com/go-ini/ini 9144852efba7c4daf409943ee90767da62d55438


@@ -4,7 +4,7 @@ import (
"log" "log"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/selfstat" "github.com/influxdata/telegraf/selfstat"
) )
@@ -18,14 +18,14 @@ type MetricMaker interface {
measurement string, measurement string,
fields map[string]interface{}, fields map[string]interface{},
tags map[string]string, tags map[string]string,
mType telegraf.ValueType, mType plugins.ValueType,
t time.Time, t time.Time,
) telegraf.Metric ) plugins.Metric
} }
func NewAccumulator( func NewAccumulator(
maker MetricMaker, maker MetricMaker,
metrics chan telegraf.Metric, metrics chan plugins.Metric,
) *accumulator { ) *accumulator {
acc := accumulator{ acc := accumulator{
maker: maker, maker: maker,
@@ -36,7 +36,7 @@ func NewAccumulator(
} }
type accumulator struct { type accumulator struct {
metrics chan telegraf.Metric metrics chan plugins.Metric
maker MetricMaker maker MetricMaker
@@ -49,7 +49,7 @@ func (ac *accumulator) AddFields(
tags map[string]string, tags map[string]string,
t ...time.Time, t ...time.Time,
) { ) {
if m := ac.maker.MakeMetric(measurement, fields, tags, telegraf.Untyped, ac.getTime(t)); m != nil { if m := ac.maker.MakeMetric(measurement, fields, tags, plugins.Untyped, ac.getTime(t)); m != nil {
ac.metrics <- m ac.metrics <- m
} }
} }
@@ -60,7 +60,7 @@ func (ac *accumulator) AddGauge(
tags map[string]string, tags map[string]string,
t ...time.Time, t ...time.Time,
) { ) {
if m := ac.maker.MakeMetric(measurement, fields, tags, telegraf.Gauge, ac.getTime(t)); m != nil { if m := ac.maker.MakeMetric(measurement, fields, tags, plugins.Gauge, ac.getTime(t)); m != nil {
ac.metrics <- m ac.metrics <- m
} }
} }
@@ -71,7 +71,7 @@ func (ac *accumulator) AddCounter(
tags map[string]string, tags map[string]string,
t ...time.Time, t ...time.Time,
) { ) {
if m := ac.maker.MakeMetric(measurement, fields, tags, telegraf.Counter, ac.getTime(t)); m != nil { if m := ac.maker.MakeMetric(measurement, fields, tags, plugins.Counter, ac.getTime(t)); m != nil {
ac.metrics <- m ac.metrics <- m
} }
} }


@@ -8,7 +8,7 @@ import (
"testing" "testing"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/metric" "github.com/influxdata/telegraf/metric"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
@@ -17,7 +17,7 @@ import (
func TestAdd(t *testing.T) { func TestAdd(t *testing.T) {
now := time.Now() now := time.Now()
metrics := make(chan telegraf.Metric, 10) metrics := make(chan plugins.Metric, 10)
defer close(metrics) defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics) a := NewAccumulator(&TestMetricMaker{}, metrics)
@@ -48,7 +48,7 @@ func TestAdd(t *testing.T) {
func TestAddFields(t *testing.T) { func TestAddFields(t *testing.T) {
now := time.Now() now := time.Now()
metrics := make(chan telegraf.Metric, 10) metrics := make(chan plugins.Metric, 10)
defer close(metrics) defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics) a := NewAccumulator(&TestMetricMaker{}, metrics)
@@ -79,7 +79,7 @@ func TestAccAddError(t *testing.T) {
log.SetOutput(errBuf) log.SetOutput(errBuf)
defer log.SetOutput(os.Stderr) defer log.SetOutput(os.Stderr)
metrics := make(chan telegraf.Metric, 10) metrics := make(chan plugins.Metric, 10)
defer close(metrics) defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics) a := NewAccumulator(&TestMetricMaker{}, metrics)
@@ -100,7 +100,7 @@ func TestAccAddError(t *testing.T) {
func TestAddNoIntervalWithPrecision(t *testing.T) { func TestAddNoIntervalWithPrecision(t *testing.T) {
now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC) now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC)
metrics := make(chan telegraf.Metric, 10) metrics := make(chan plugins.Metric, 10)
defer close(metrics) defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics) a := NewAccumulator(&TestMetricMaker{}, metrics)
a.SetPrecision(0, time.Second) a.SetPrecision(0, time.Second)
@@ -132,7 +132,7 @@ func TestAddNoIntervalWithPrecision(t *testing.T) {
func TestAddDisablePrecision(t *testing.T) { func TestAddDisablePrecision(t *testing.T) {
now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC) now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC)
metrics := make(chan telegraf.Metric, 10) metrics := make(chan plugins.Metric, 10)
defer close(metrics) defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics) a := NewAccumulator(&TestMetricMaker{}, metrics)
@@ -164,7 +164,7 @@ func TestAddDisablePrecision(t *testing.T) {
func TestAddNoPrecisionWithInterval(t *testing.T) { func TestAddNoPrecisionWithInterval(t *testing.T) {
now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC) now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC)
metrics := make(chan telegraf.Metric, 10) metrics := make(chan plugins.Metric, 10)
defer close(metrics) defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics) a := NewAccumulator(&TestMetricMaker{}, metrics)
@@ -196,7 +196,7 @@ func TestAddNoPrecisionWithInterval(t *testing.T) {
func TestDifferentPrecisions(t *testing.T) { func TestDifferentPrecisions(t *testing.T) {
now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC) now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC)
metrics := make(chan telegraf.Metric, 10) metrics := make(chan plugins.Metric, 10)
defer close(metrics) defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics) a := NewAccumulator(&TestMetricMaker{}, metrics)
@@ -243,7 +243,7 @@ func TestDifferentPrecisions(t *testing.T) {
func TestAddGauge(t *testing.T) { func TestAddGauge(t *testing.T) {
now := time.Now() now := time.Now()
metrics := make(chan telegraf.Metric, 10) metrics := make(chan plugins.Metric, 10)
defer close(metrics) defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics) a := NewAccumulator(&TestMetricMaker{}, metrics)
@@ -260,24 +260,24 @@ func TestAddGauge(t *testing.T) {
testm := <-metrics testm := <-metrics
actual := testm.String() actual := testm.String()
assert.Contains(t, actual, "acctest value=101") assert.Contains(t, actual, "acctest value=101")
assert.Equal(t, testm.Type(), telegraf.Gauge) assert.Equal(t, testm.Type(), plugins.Gauge)
testm = <-metrics testm = <-metrics
actual = testm.String() actual = testm.String()
assert.Contains(t, actual, "acctest,acc=test value=101") assert.Contains(t, actual, "acctest,acc=test value=101")
assert.Equal(t, testm.Type(), telegraf.Gauge) assert.Equal(t, testm.Type(), plugins.Gauge)
testm = <-metrics testm = <-metrics
actual = testm.String() actual = testm.String()
assert.Equal(t, assert.Equal(t,
fmt.Sprintf("acctest,acc=test value=101 %d\n", now.UnixNano()), fmt.Sprintf("acctest,acc=test value=101 %d\n", now.UnixNano()),
actual) actual)
assert.Equal(t, testm.Type(), telegraf.Gauge) assert.Equal(t, testm.Type(), plugins.Gauge)
} }
func TestAddCounter(t *testing.T) { func TestAddCounter(t *testing.T) {
now := time.Now() now := time.Now()
metrics := make(chan telegraf.Metric, 10) metrics := make(chan plugins.Metric, 10)
defer close(metrics) defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics) a := NewAccumulator(&TestMetricMaker{}, metrics)
@@ -294,19 +294,19 @@ func TestAddCounter(t *testing.T) {
testm := <-metrics testm := <-metrics
actual := testm.String() actual := testm.String()
assert.Contains(t, actual, "acctest value=101") assert.Contains(t, actual, "acctest value=101")
assert.Equal(t, testm.Type(), telegraf.Counter) assert.Equal(t, testm.Type(), plugins.Counter)
testm = <-metrics testm = <-metrics
actual = testm.String() actual = testm.String()
assert.Contains(t, actual, "acctest,acc=test value=101") assert.Contains(t, actual, "acctest,acc=test value=101")
assert.Equal(t, testm.Type(), telegraf.Counter) assert.Equal(t, testm.Type(), plugins.Counter)
testm = <-metrics testm = <-metrics
actual = testm.String() actual = testm.String()
assert.Equal(t, assert.Equal(t,
fmt.Sprintf("acctest,acc=test value=101 %d\n", now.UnixNano()), fmt.Sprintf("acctest,acc=test value=101 %d\n", now.UnixNano()),
actual) actual)
assert.Equal(t, testm.Type(), telegraf.Counter) assert.Equal(t, testm.Type(), plugins.Counter)
} }
type TestMetricMaker struct { type TestMetricMaker struct {
@@ -319,20 +319,20 @@ func (tm *TestMetricMaker) MakeMetric(
measurement string, measurement string,
fields map[string]interface{}, fields map[string]interface{},
tags map[string]string, tags map[string]string,
mType telegraf.ValueType, mType plugins.ValueType,
t time.Time, t time.Time,
) telegraf.Metric { ) plugins.Metric {
switch mType { switch mType {
case telegraf.Untyped: case plugins.Untyped:
if m, err := metric.New(measurement, tags, fields, t); err == nil { if m, err := metric.New(measurement, tags, fields, t); err == nil {
return m return m
} }
case telegraf.Counter: case plugins.Counter:
if m, err := metric.New(measurement, tags, fields, t, telegraf.Counter); err == nil { if m, err := metric.New(measurement, tags, fields, t, plugins.Counter); err == nil {
return m return m
} }
case telegraf.Gauge: case plugins.Gauge:
if m, err := metric.New(measurement, tags, fields, t, telegraf.Gauge); err == nil { if m, err := metric.New(measurement, tags, fields, t, plugins.Gauge); err == nil {
return m return m
} }
} }


@@ -8,10 +8,10 @@ import (
"sync" "sync"
"time" "time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal" "github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/internal/config" "github.com/influxdata/telegraf/internal/config"
"github.com/influxdata/telegraf/internal/models" "github.com/influxdata/telegraf/internal/models"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/selfstat" "github.com/influxdata/telegraf/selfstat"
) )
@@ -46,7 +46,7 @@ func NewAgent(config *config.Config) (*Agent, error) {
func (a *Agent) Connect() error { func (a *Agent) Connect() error {
for _, o := range a.Config.Outputs { for _, o := range a.Config.Outputs {
switch ot := o.Output.(type) { switch ot := o.Output.(type) {
case telegraf.ServiceOutput: case plugins.ServiceOutput:
if err := ot.Start(); err != nil { if err := ot.Start(); err != nil {
log.Printf("E! Service for output %s failed to start, exiting\n%s\n", log.Printf("E! Service for output %s failed to start, exiting\n%s\n",
o.Name, err.Error()) o.Name, err.Error())
@@ -76,7 +76,7 @@ func (a *Agent) Close() error {
for _, o := range a.Config.Outputs { for _, o := range a.Config.Outputs {
err = o.Output.Close() err = o.Output.Close()
switch ot := o.Output.(type) { switch ot := o.Output.(type) {
case telegraf.ServiceOutput: case plugins.ServiceOutput:
ot.Stop() ot.Stop()
} }
} }
@@ -101,7 +101,7 @@ func (a *Agent) gatherer(
shutdown chan struct{}, shutdown chan struct{},
input *models.RunningInput, input *models.RunningInput,
interval time.Duration, interval time.Duration,
metricC chan telegraf.Metric, metricC chan plugins.Metric,
) { ) {
defer panicRecover(input) defer panicRecover(input)
@@ -176,7 +176,7 @@ func gatherWithTimeout(
func (a *Agent) Test() error { func (a *Agent) Test() error {
shutdown := make(chan struct{}) shutdown := make(chan struct{})
defer close(shutdown) defer close(shutdown)
metricC := make(chan telegraf.Metric) metricC := make(chan plugins.Metric)
// dummy receiver for the point channel // dummy receiver for the point channel
go func() { go func() {
@@ -241,14 +241,14 @@ func (a *Agent) flush() {
} }
// flusher monitors the metrics input channel and flushes on the minimum interval // flusher monitors the metrics input channel and flushes on the minimum interval
func (a *Agent) flusher(shutdown chan struct{}, metricC chan telegraf.Metric) error { func (a *Agent) flusher(shutdown chan struct{}, metricC chan plugins.Metric) error {
// Inelegant, but this sleep is to allow the Gather threads to run, so that // Inelegant, but this sleep is to allow the Gather threads to run, so that
// the flusher will flush after metrics are collected. // the flusher will flush after metrics are collected.
time.Sleep(time.Millisecond * 300) time.Sleep(time.Millisecond * 300)
// create an output metric channel and a goroutine that continuously passes // each metric onto the output plugins & aggregators.
// each metric onto the output plugins & aggregators. // each metric onto the output plugins & aggregators.
outMetricC := make(chan telegraf.Metric, 100) outMetricC := make(chan plugins.Metric, 100)
var wg sync.WaitGroup var wg sync.WaitGroup
wg.Add(1) wg.Add(1)
go func() { go func() {
@@ -300,7 +300,7 @@ func (a *Agent) flusher(shutdown chan struct{}, metricC chan telegraf.Metric) er
case metric := <-metricC: case metric := <-metricC:
// NOTE potential bottleneck here as we put each metric through the // NOTE potential bottleneck here as we put each metric through the
// processors serially. // processors serially.
mS := []telegraf.Metric{metric} mS := []plugins.Metric{metric}
for _, processor := range a.Config.Processors { for _, processor := range a.Config.Processors {
mS = processor.Apply(mS...) mS = processor.Apply(mS...)
} }
@@ -321,13 +321,13 @@ func (a *Agent) Run(shutdown chan struct{}) error {
a.Config.Agent.Hostname, a.Config.Agent.FlushInterval.Duration) a.Config.Agent.Hostname, a.Config.Agent.FlushInterval.Duration)
// channel shared between all input threads for accumulating metrics // channel shared between all input threads for accumulating metrics
metricC := make(chan telegraf.Metric, 100) metricC := make(chan plugins.Metric, 100)
// Start all ServicePlugins // Start all ServicePlugins
for _, input := range a.Config.Inputs { for _, input := range a.Config.Inputs {
input.SetDefaultTags(a.Config.Tags) input.SetDefaultTags(a.Config.Tags)
switch p := input.Input.(type) { switch p := input.Input.(type) {
case telegraf.ServiceInput: case plugins.ServiceInput:
acc := NewAccumulator(input, metricC) acc := NewAccumulator(input, metricC)
// Service input plugins should set their own precision of their // Service input plugins should set their own precision of their
// metrics. // metrics.


@@ -5,8 +5,8 @@ machine:
- sudo service zookeeper stop - sudo service zookeeper stop
- go version - go version
- go version | grep 1.7.4 || sudo rm -rf /usr/local/go - go version | grep 1.7.4 || sudo rm -rf /usr/local/go
- wget https://storage.googleapis.com/golang/go1.7.4.linux-amd64.tar.gz - wget https://storage.googleapis.com/golang/go1.8beta1.linux-amd64.tar.gz
- sudo tar -C /usr/local -xzf go1.7.4.linux-amd64.tar.gz - sudo tar -C /usr/local -xzf go1.8beta1.linux-amd64.tar.gz
- go version - go version
dependencies: dependencies:


@@ -6,6 +6,9 @@ import (
"log" "log"
"os" "os"
"os/signal" "os/signal"
"path"
"path/filepath"
"plugin"
"runtime" "runtime"
"strings" "strings"
"syscall" "syscall"
@@ -13,11 +16,14 @@ import (
"github.com/influxdata/telegraf/agent" "github.com/influxdata/telegraf/agent"
"github.com/influxdata/telegraf/internal/config" "github.com/influxdata/telegraf/internal/config"
"github.com/influxdata/telegraf/logger" "github.com/influxdata/telegraf/logger"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/aggregators"
_ "github.com/influxdata/telegraf/plugins/aggregators/all" _ "github.com/influxdata/telegraf/plugins/aggregators/all"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
_ "github.com/influxdata/telegraf/plugins/inputs/all" _ "github.com/influxdata/telegraf/plugins/inputs/all"
"github.com/influxdata/telegraf/plugins/outputs" "github.com/influxdata/telegraf/plugins/outputs"
_ "github.com/influxdata/telegraf/plugins/outputs/all" _ "github.com/influxdata/telegraf/plugins/outputs/all"
"github.com/influxdata/telegraf/plugins/processors"
_ "github.com/influxdata/telegraf/plugins/processors/all" _ "github.com/influxdata/telegraf/plugins/processors/all"
"github.com/kardianos/service" "github.com/kardianos/service"
) )
@@ -50,6 +56,8 @@ var fUsage = flag.String("usage", "",
 	"print usage for a plugin, ie, 'telegraf -usage mysql'")
 var fService = flag.String("service", "",
 	"operate on the service")
+var fPlugins = flag.String("plugins", "",
+	"path to directory containing external plugins")

 // Telegraf version, populated linker.
 // ie, -ldflags "-X main.version=`git describe --always --tags`"
@@ -304,9 +312,93 @@ func (p *program) Stop(s service.Service) error {
 	return nil
 }

+// loadExternalPlugins loads external plugins from shared libraries (.so, .dll, etc.)
+// in the specified directory.
+func loadExternalPlugins(dir string) error {
+	return filepath.Walk(dir, func(pth string, info os.FileInfo, err error) error {
+		// Stop if there was an error.
+		if err != nil {
+			return err
+		}
+
+		// Ignore directories.
+		if info.IsDir() {
+			return nil
+		}
+
+		// Ignore files that aren't shared libraries.
+		ext := strings.ToLower(path.Ext(pth))
+		if ext != ".so" && ext != ".dll" {
+			return nil
+		}
+
+		// Load the plugin.
+		p, err := plugin.Open(pth)
+		if err != nil {
+			return err
+		}
+
+		// Register the plugin.
+		if err := registerPlugin(dir, pth, p); err != nil {
+			return err
+		}
+		return nil
+	})
+}
+// registerPlugin registers an external plugin with telegraf.
+func registerPlugin(pluginsDir, filePath string, p *plugin.Plugin) error {
+	// Clean the file path and make sure it's relative to the root plugins
+	// directory. This is done because plugin names are namespaced using the
+	// directory structure. E.g., if the root plugin directory, passed in the
+	// pluginsDir argument, is '/home/jdoe/bin/telegraf/plugins' and we're
+	// registering plugin '/home/jdoe/bin/telegraf/plugins/input/mysql.so',
+	// the resulting plugin name is 'plugins.input.mysql'.
+	pluginsDir = filepath.Clean(pluginsDir)
+	parentDir, _ := filepath.Split(pluginsDir)
+	var err error
+	if filePath, err = filepath.Rel(parentDir, filePath); err != nil {
+		return err
+	}
+
+	// Strip the file extension and save it.
+	ext := path.Ext(filePath)
+	filePath = strings.TrimSuffix(filePath, ext)
+
+	// Convert path separators to "." to generate a plugin name namespaced by directory names.
+	name := strings.Replace(filePath, string(os.PathSeparator), ".", -1)
+
+	if create, err := p.Lookup("NewInput"); err == nil {
+		inputs.Add(name, inputs.Creator(create.(func() plugins.Input)))
+	} else if create, err := p.Lookup("NewOutput"); err == nil {
+		outputs.Add(name, outputs.Creator(create.(func() plugins.Output)))
+	} else if create, err := p.Lookup("NewProcessor"); err == nil {
+		processors.Add(name, processors.Creator(create.(func() plugins.Processor)))
+	} else if create, err := p.Lookup("NewAggregator"); err == nil {
+		aggregators.Add(name, aggregators.Creator(create.(func() plugins.Aggregator)))
+	} else {
+		return fmt.Errorf("not a telegraf plugin: %s%s", filePath, ext)
+	}
+
+	log.Printf("I! Registered: %s (from %s%s)\n", name, filePath, ext)
+	return nil
+}
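The namespacing transform can be reproduced outside telegraf. This standalone sketch (with a hypothetical `deriveName` helper) applies the same Rel/TrimSuffix/Replace steps to the example path from the comment above:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// deriveName mirrors the namespacing logic above: the plugin file path is
// made relative to the parent of the root plugins directory, its extension
// is stripped, and path separators become dots.
func deriveName(pluginsDir, filePath string) (string, error) {
	pluginsDir = filepath.Clean(pluginsDir)
	parentDir, _ := filepath.Split(pluginsDir)
	rel, err := filepath.Rel(parentDir, filePath)
	if err != nil {
		return "", err
	}
	rel = strings.TrimSuffix(rel, filepath.Ext(rel))
	return strings.Replace(rel, string(os.PathSeparator), ".", -1), nil
}

func main() {
	name, err := deriveName(
		"/home/jdoe/bin/telegraf/plugins",
		"/home/jdoe/bin/telegraf/plugins/input/mysql.so",
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(name) // plugins.input.mysql
}
```

So a plugin's configuration name is determined purely by where its shared library sits under the plugins directory.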
 func main() {
 	flag.Usage = func() { usageExit(0) }
 	flag.Parse()

+	// Load external plugins, if requested.
+	if *fPlugins != "" {
+		pluginsDir, err := filepath.Abs(*fPlugins)
+		if err != nil {
+			log.Fatal("E! " + err.Error())
+		}
+		log.Printf("I! Loading external plugins from: %s\n", pluginsDir)
+		if err := loadExternalPlugins(pluginsDir); err != nil {
+			log.Fatal("E! " + err.Error())
+		}
+	}
+
 	if runtime.GOOS == "windows" {
 		svcConfig := &service.Config{
 			Name: "telegraf",

View File

@@ -784,13 +784,18 @@
 # ## Timeout for HTTP requests to the elastic search server(s)
 # http_timeout = "5s"
 #
-# ## set local to false when you want to read the indices stats from all nodes
-# ## within the cluster
+# ## When local is true (the default), the node will read only its own stats.
+# ## Set local to false when you want to read the node stats from all nodes
+# ## of the cluster.
 # local = true
 #
-# ## set cluster_health to true when you want to also obtain cluster level stats
+# ## set cluster_health to true when you want to also obtain cluster health stats
 # cluster_health = false
 #
+# ## Set cluster_stats to true when you want to obtain cluster stats from the
+# ## Master node.
+# cluster_stats = false
+#
 # ## Optional SSL Config
 # # ssl_ca = "/etc/telegraf/ca.pem"
 # # ssl_cert = "/etc/telegraf/cert.pem"

View File

@@ -3,7 +3,7 @@ package buffer
 import (
 	"sync"

-	"github.com/influxdata/telegraf"
+	"github.com/influxdata/telegraf/plugins"
 	"github.com/influxdata/telegraf/selfstat"
 )
@@ -14,7 +14,7 @@ var (
 // Buffer is an object for storing metrics in a circular buffer.
 type Buffer struct {
-	buf chan telegraf.Metric
+	buf chan plugins.Metric
 	mu  sync.Mutex
 }
@@ -24,7 +24,7 @@ type Buffer struct {
 // called when the buffer is full, then the oldest metric(s) will be dropped.
 func NewBuffer(size int) *Buffer {
 	return &Buffer{
-		buf: make(chan telegraf.Metric, size),
+		buf: make(chan plugins.Metric, size),
 	}
 }
@@ -39,7 +39,7 @@ func (b *Buffer) Len() int {
 }

 // Add adds metrics to the buffer.
-func (b *Buffer) Add(metrics ...telegraf.Metric) {
+func (b *Buffer) Add(metrics ...plugins.Metric) {
 	for i, _ := range metrics {
 		MetricsWritten.Incr(1)
 		select {
@@ -55,10 +55,10 @@ func (b *Buffer) Add(metrics ...telegraf.Metric) {
 // Batch returns a batch of metrics of size batchSize.
 // the batch will be of maximum length batchSize. It can be less than batchSize,
 // if the length of Buffer is less than batchSize.
-func (b *Buffer) Batch(batchSize int) []telegraf.Metric {
+func (b *Buffer) Batch(batchSize int) []plugins.Metric {
 	b.mu.Lock()
 	n := min(len(b.buf), batchSize)
-	out := make([]telegraf.Metric, n)
+	out := make([]plugins.Metric, n)
 	for i := 0; i < n; i++ {
 		out[i] = <-b.buf
 	}
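The drop-oldest behavior of the circular buffer comes from a `select` with a `default` branch over a buffered channel. A minimal single-goroutine sketch of that pattern (ignoring the mutex and the selfstat counters the real Buffer also maintains):

```go
package main

import "fmt"

// add mimics Buffer.Add for one value: try to enqueue; if the channel is
// full, drop the oldest element and enqueue the new one in its place.
func add(buf chan int, v int) {
	select {
	case buf <- v:
	default:
		<-buf // buffer full: drop the oldest metric
		buf <- v
	}
}

func main() {
	buf := make(chan int, 3)
	for i := 1; i <= 5; i++ {
		add(buf, i)
	}
	// 1 and 2 were dropped; 3, 4, 5 remain in order.
	for len(buf) > 0 {
		fmt.Println(<-buf)
	}
}
```

The `default` branch is what keeps `Add` non-blocking: writers never stall on a full buffer, at the cost of losing the oldest data.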

View File

@@ -3,13 +3,13 @@ package buffer
 import (
 	"testing"

-	"github.com/influxdata/telegraf"
+	"github.com/influxdata/telegraf/plugins"
 	"github.com/influxdata/telegraf/testutil"

 	"github.com/stretchr/testify/assert"
 )

-var metricList = []telegraf.Metric{
+var metricList = []plugins.Metric{
 	testutil.TestMetric(2, "mymetric1"),
 	testutil.TestMetric(1, "mymetric2"),
 	testutil.TestMetric(11, "mymetric3"),

View File

@@ -15,9 +15,9 @@ import (
 	"strings"
 	"time"

-	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/internal"
 	"github.com/influxdata/telegraf/internal/models"
+	"github.com/influxdata/telegraf/plugins"
 	"github.com/influxdata/telegraf/plugins/aggregators"
 	"github.com/influxdata/telegraf/plugins/inputs"
 	"github.com/influxdata/telegraf/plugins/outputs"
@@ -399,7 +399,7 @@ func printFilteredInputs(inputFilters []string, commented bool) {
 	sort.Strings(pnames)

 	// cache service inputs to print them at the end
-	servInputs := make(map[string]telegraf.ServiceInput)
+	servInputs := make(map[string]plugins.ServiceInput)
 	// for alphabetical looping:
 	servInputNames := []string{}
@@ -409,7 +409,7 @@ func printFilteredInputs(inputFilters []string, commented bool) {
 		input := creator()
 		switch p := input.(type) {
-		case telegraf.ServiceInput:
+		case plugins.ServiceInput:
 			servInputs[pname] = p
 			servInputNames = append(servInputNames, pname)
 			continue

View File

@@ -5,8 +5,8 @@ import (
 	"math"
 	"time"

-	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/metric"
+	"github.com/influxdata/telegraf/plugins"
 )

 // makemetric is used by both RunningAggregator & RunningInput
@@ -32,9 +32,9 @@ func makemetric(
 	daemonTags map[string]string,
 	filter Filter,
 	applyFilter bool,
-	mType telegraf.ValueType,
+	mType plugins.ValueType,
 	t time.Time,
-) telegraf.Metric {
+) plugins.Metric {
 	if len(fields) == 0 || len(measurement) == 0 {
 		return nil
 	}

View File

@@ -3,28 +3,28 @@ package models
 import (
 	"time"

-	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/metric"
+	"github.com/influxdata/telegraf/plugins"
 )

 type RunningAggregator struct {
-	a      telegraf.Aggregator
+	a      plugins.Aggregator
 	Config *AggregatorConfig

-	metrics     chan telegraf.Metric
+	metrics     chan plugins.Metric
 	periodStart time.Time
 	periodEnd   time.Time
 }

 func NewRunningAggregator(
-	a telegraf.Aggregator,
+	a plugins.Aggregator,
 	conf *AggregatorConfig,
 ) *RunningAggregator {
 	return &RunningAggregator{
 		a:       a,
 		Config:  conf,
-		metrics: make(chan telegraf.Metric, 100),
+		metrics: make(chan plugins.Metric, 100),
 	}
 }
@@ -52,9 +52,9 @@ func (r *RunningAggregator) MakeMetric(
 	measurement string,
 	fields map[string]interface{},
 	tags map[string]string,
-	mType telegraf.ValueType,
+	mType plugins.ValueType,
 	t time.Time,
-) telegraf.Metric {
+) plugins.Metric {
 	m := makemetric(
 		measurement,
 		fields,
@@ -70,7 +70,9 @@ func (r *RunningAggregator) MakeMetric(
 		t,
 	)

-	m.SetAggregate(true)
+	if m != nil {
+		m.SetAggregate(true)
+	}

 	return m
 }
@@ -78,7 +80,7 @@ func (r *RunningAggregator) MakeMetric(
 // Add applies the given metric to the aggregator.
 // Before applying to the plugin, it will run any defined filters on the metric.
 // Apply returns true if the original metric should be dropped.
-func (r *RunningAggregator) Add(in telegraf.Metric) bool {
+func (r *RunningAggregator) Add(in plugins.Metric) bool {
 	if r.Config.Filter.IsActive() {
 		// check if the aggregator should apply this metric
 		name := in.Name()
@@ -96,11 +98,11 @@ func (r *RunningAggregator) Add(in telegraf.Metric) bool {
 	r.metrics <- in
 	return r.Config.DropOriginal
 }

-func (r *RunningAggregator) add(in telegraf.Metric) {
+func (r *RunningAggregator) add(in plugins.Metric) {
 	r.a.Add(in)
 }

-func (r *RunningAggregator) push(acc telegraf.Accumulator) {
+func (r *RunningAggregator) push(acc plugins.Accumulator) {
 	r.a.Push(acc)
 }
@@ -111,7 +113,7 @@ func (r *RunningAggregator) reset() {
 // Run runs the running aggregator, listens for incoming metrics, and waits
 // for period ticks to tell it when to push and reset the aggregator.
 func (r *RunningAggregator) Run(
-	acc telegraf.Accumulator,
+	acc plugins.Accumulator,
 	shutdown chan struct{},
 ) {
 	// The start of the period is truncated to the nearest second.

View File

@@ -7,7 +7,7 @@ import (
 	"testing"
 	"time"

-	"github.com/influxdata/telegraf"
+	"github.com/influxdata/telegraf/plugins"
 	"github.com/influxdata/telegraf/testutil"

 	"github.com/stretchr/testify/assert"
@@ -30,7 +30,7 @@ func TestAdd(t *testing.T) {
 		"RITest",
 		map[string]interface{}{"value": int(101)},
 		map[string]string{},
-		telegraf.Untyped,
+		plugins.Untyped,
 		time.Now().Add(time.Millisecond*150),
 	)
 	assert.False(t, ra.Add(m))
@@ -62,7 +62,7 @@ func TestAddMetricsOutsideCurrentPeriod(t *testing.T) {
 		"RITest",
 		map[string]interface{}{"value": int(101)},
 		map[string]string{},
-		telegraf.Untyped,
+		plugins.Untyped,
 		time.Now().Add(-time.Hour),
 	)
 	assert.False(t, ra.Add(m))
@@ -72,7 +72,7 @@ func TestAddMetricsOutsideCurrentPeriod(t *testing.T) {
 		"RITest",
 		map[string]interface{}{"value": int(101)},
 		map[string]string{},
-		telegraf.Untyped,
+		plugins.Untyped,
 		time.Now().Add(time.Hour),
 	)
 	assert.False(t, ra.Add(m))
@@ -82,7 +82,7 @@ func TestAddMetricsOutsideCurrentPeriod(t *testing.T) {
 		"RITest",
 		map[string]interface{}{"value": int(101)},
 		map[string]string{},
-		telegraf.Untyped,
+		plugins.Untyped,
 		time.Now().Add(time.Millisecond*50),
 	)
 	assert.False(t, ra.Add(m))
@@ -120,7 +120,7 @@ func TestAddAndPushOnePeriod(t *testing.T) {
 		"RITest",
 		map[string]interface{}{"value": int(101)},
 		map[string]string{},
-		telegraf.Untyped,
+		plugins.Untyped,
 		time.Now().Add(time.Millisecond*100),
 	)
 	assert.False(t, ra.Add(m))
@@ -151,7 +151,7 @@ func TestAddDropOriginal(t *testing.T) {
 		"RITest",
 		map[string]interface{}{"value": int(101)},
 		map[string]string{},
-		telegraf.Untyped,
+		plugins.Untyped,
 		time.Now(),
 	)
 	assert.True(t, ra.Add(m))
@@ -161,7 +161,7 @@ func TestAddDropOriginal(t *testing.T) {
 		"foobar",
 		map[string]interface{}{"value": int(101)},
 		map[string]string{},
-		telegraf.Untyped,
+		plugins.Untyped,
 		time.Now(),
 	)
 	assert.False(t, ra.Add(m2))
@@ -179,7 +179,7 @@ func TestMakeMetricA(t *testing.T) {
 		"RITest",
 		map[string]interface{}{"value": int(101)},
 		map[string]string{},
-		telegraf.Untyped,
+		plugins.Untyped,
 		now,
 	)
 	assert.Equal(
@@ -190,14 +190,14 @@ func TestMakeMetricA(t *testing.T) {
 	assert.Equal(
 		t,
 		m.Type(),
-		telegraf.Untyped,
+		plugins.Untyped,
 	)

 	m = ra.MakeMetric(
 		"RITest",
 		map[string]interface{}{"value": int(101)},
 		map[string]string{},
-		telegraf.Counter,
+		plugins.Counter,
 		now,
 	)
 	assert.Equal(
@@ -208,14 +208,14 @@ func TestMakeMetricA(t *testing.T) {
 	assert.Equal(
 		t,
 		m.Type(),
-		telegraf.Counter,
+		plugins.Counter,
 	)

 	m = ra.MakeMetric(
 		"RITest",
 		map[string]interface{}{"value": int(101)},
 		map[string]string{},
-		telegraf.Gauge,
+		plugins.Gauge,
 		now,
 	)
 	assert.Equal(
@@ -226,7 +226,7 @@ func TestMakeMetricA(t *testing.T) {
 	assert.Equal(
 		t,
 		m.Type(),
-		telegraf.Gauge,
+		plugins.Gauge,
 	)
 }
@@ -240,14 +240,14 @@ func (t *TestAggregator) Reset() {
 	atomic.StoreInt64(&t.sum, 0)
 }

-func (t *TestAggregator) Push(acc telegraf.Accumulator) {
+func (t *TestAggregator) Push(acc plugins.Accumulator) {
 	acc.AddFields("TestMetric",
 		map[string]interface{}{"sum": t.sum},
 		map[string]string{},
 	)
 }

-func (t *TestAggregator) Add(in telegraf.Metric) {
+func (t *TestAggregator) Add(in plugins.Metric) {
 	for _, v := range in.Fields() {
 		if vi, ok := v.(int64); ok {
 			atomic.AddInt64(&t.sum, vi)

View File

@@ -4,14 +4,14 @@ import (
 	"fmt"
 	"time"

-	"github.com/influxdata/telegraf"
+	"github.com/influxdata/telegraf/plugins"
 	"github.com/influxdata/telegraf/selfstat"
 )

 var GlobalMetricsGathered = selfstat.Register("agent", "metrics_gathered", map[string]string{})

 type RunningInput struct {
-	Input  telegraf.Input
+	Input  plugins.Input
 	Config *InputConfig

 	trace       bool
@@ -21,7 +21,7 @@ type RunningInput struct {
 }

 func NewRunningInput(
-	input telegraf.Input,
+	input plugins.Input,
 	config *InputConfig,
 ) *RunningInput {
 	return &RunningInput{
@@ -56,9 +56,9 @@ func (r *RunningInput) MakeMetric(
 	measurement string,
 	fields map[string]interface{},
 	tags map[string]string,
-	mType telegraf.ValueType,
+	mType plugins.ValueType,
 	t time.Time,
-) telegraf.Metric {
+) plugins.Metric {
 	m := makemetric(
 		measurement,
 		fields,

View File

@@ -6,7 +6,7 @@ import (
 	"testing"
 	"time"

-	"github.com/influxdata/telegraf"
+	"github.com/influxdata/telegraf/plugins"
 	"github.com/stretchr/testify/assert"
 )
@@ -21,7 +21,7 @@ func TestMakeMetricNoFields(t *testing.T) {
 		"RITest",
 		map[string]interface{}{},
 		map[string]string{},
-		telegraf.Untyped,
+		plugins.Untyped,
 		now,
 	)
 	assert.Nil(t, m)
@@ -41,7 +41,7 @@ func TestMakeMetricNilFields(t *testing.T) {
 			"nil": nil,
 		},
 		map[string]string{},
-		telegraf.Untyped,
+		plugins.Untyped,
 		now,
 	)
 	assert.Equal(
@@ -66,7 +66,7 @@ func TestMakeMetric(t *testing.T) {
 		"RITest",
 		map[string]interface{}{"value": int(101)},
 		map[string]string{},
-		telegraf.Untyped,
+		plugins.Untyped,
 		now,
 	)
 	assert.Equal(
@@ -77,14 +77,14 @@ func TestMakeMetric(t *testing.T) {
 	assert.Equal(
 		t,
 		m.Type(),
-		telegraf.Untyped,
+		plugins.Untyped,
 	)

 	m = ri.MakeMetric(
 		"RITest",
 		map[string]interface{}{"value": int(101)},
 		map[string]string{},
-		telegraf.Counter,
+		plugins.Counter,
 		now,
 	)
 	assert.Equal(
@@ -95,14 +95,14 @@ func TestMakeMetric(t *testing.T) {
 	assert.Equal(
 		t,
 		m.Type(),
-		telegraf.Counter,
+		plugins.Counter,
 	)

 	m = ri.MakeMetric(
 		"RITest",
 		map[string]interface{}{"value": int(101)},
 		map[string]string{},
-		telegraf.Gauge,
+		plugins.Gauge,
 		now,
 	)
 	assert.Equal(
@@ -113,7 +113,7 @@ func TestMakeMetric(t *testing.T) {
 	assert.Equal(
 		t,
 		m.Type(),
-		telegraf.Gauge,
+		plugins.Gauge,
 	)
 }
@@ -133,7 +133,7 @@ func TestMakeMetricWithPluginTags(t *testing.T) {
 		"RITest",
 		map[string]interface{}{"value": int(101)},
 		nil,
-		telegraf.Untyped,
+		plugins.Untyped,
 		now,
 	)
 	assert.Equal(
@@ -161,7 +161,7 @@ func TestMakeMetricFilteredOut(t *testing.T) {
 		"RITest",
 		map[string]interface{}{"value": int(101)},
 		nil,
-		telegraf.Untyped,
+		plugins.Untyped,
 		now,
 	)
 	assert.Nil(t, m)
@@ -183,7 +183,7 @@ func TestMakeMetricWithDaemonTags(t *testing.T) {
 		"RITest",
 		map[string]interface{}{"value": int(101)},
 		map[string]string{},
-		telegraf.Untyped,
+		plugins.Untyped,
 		now,
 	)
 	assert.Equal(
@@ -213,7 +213,7 @@ func TestMakeMetricInfFields(t *testing.T) {
 			"ninf": ninf,
 		},
 		map[string]string{},
-		telegraf.Untyped,
+		plugins.Untyped,
 		now,
 	)
 	assert.Equal(
@@ -250,7 +250,7 @@ func TestMakeMetricAllFieldTypes(t *testing.T) {
 			"m": true,
 		},
 		map[string]string{},
-		telegraf.Untyped,
+		plugins.Untyped,
 		now,
 	)
 	assert.Contains(t, m.String(), "a=10i")
@@ -280,7 +280,7 @@ func TestMakeMetricNameOverride(t *testing.T) {
 		"RITest",
 		map[string]interface{}{"value": int(101)},
 		map[string]string{},
-		telegraf.Untyped,
+		plugins.Untyped,
 		now,
 	)
 	assert.Equal(
@@ -301,7 +301,7 @@ func TestMakeMetricNamePrefix(t *testing.T) {
 		"RITest",
 		map[string]interface{}{"value": int(101)},
 		map[string]string{},
-		telegraf.Untyped,
+		plugins.Untyped,
 		now,
 	)
 	assert.Equal(
@@ -322,7 +322,7 @@ func TestMakeMetricNameSuffix(t *testing.T) {
 		"RITest",
 		map[string]interface{}{"value": int(101)},
 		map[string]string{},
-		telegraf.Untyped,
+		plugins.Untyped,
 		now,
 	)
 	assert.Equal(
@@ -336,4 +336,4 @@ type testInput struct{}
 func (t *testInput) Description() string  { return "" }
 func (t *testInput) SampleConfig() string { return "" }
-func (t *testInput) Gather(acc telegraf.Accumulator) error { return nil }
+func (t *testInput) Gather(acc plugins.Accumulator) error { return nil }

View File

@@ -4,7 +4,7 @@ import (
 	"log"
 	"time"

-	"github.com/influxdata/telegraf"
+	"github.com/influxdata/telegraf/plugins"
 	"github.com/influxdata/telegraf/internal/buffer"
 	"github.com/influxdata/telegraf/metric"
 	"github.com/influxdata/telegraf/selfstat"
@@ -21,7 +21,7 @@ const (
 // RunningOutput contains the output configuration
 type RunningOutput struct {
 	Name              string
-	Output            telegraf.Output
+	Output            plugins.Output
 	Config            *OutputConfig
 	MetricBufferLimit int
 	MetricBatchSize   int
@@ -38,7 +38,7 @@ type RunningOutput struct {
 func NewRunningOutput(
 	name string,
-	output telegraf.Output,
+	output plugins.Output,
 	conf *OutputConfig,
 	batchSize int,
 	bufferLimit int,
@@ -89,7 +89,7 @@ func NewRunningOutput(
 // AddMetric adds a metric to the output. This function can also write cached
 // points if FlushBufferWhenFull is true.
-func (ro *RunningOutput) AddMetric(m telegraf.Metric) {
+func (ro *RunningOutput) AddMetric(m plugins.Metric) {
 	// Filter any tagexclude/taginclude parameters before adding metric
 	if ro.Config.Filter.IsActive() {
 		// In order to filter out tags, we need to create a new metric, since
@@ -161,7 +161,7 @@ func (ro *RunningOutput) Write() error {
 	return nil
 }

-func (ro *RunningOutput) write(metrics []telegraf.Metric) error {
+func (ro *RunningOutput) write(metrics []plugins.Metric) error {
 	nMetrics := len(metrics)
 	if nMetrics == 0 {
 		return nil

View File

@@ -5,14 +5,14 @@ import (
 	"sync"
 	"testing"

-	"github.com/influxdata/telegraf"
+	"github.com/influxdata/telegraf/plugins"
 	"github.com/influxdata/telegraf/testutil"

 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )

-var first5 = []telegraf.Metric{
+var first5 = []plugins.Metric{
 	testutil.TestMetric(101, "metric1"),
 	testutil.TestMetric(101, "metric2"),
 	testutil.TestMetric(101, "metric3"),
@@ -20,7 +20,7 @@ var first5 = []telegraf.Metric{
 	testutil.TestMetric(101, "metric5"),
 }

-var next5 = []telegraf.Metric{
+var next5 = []plugins.Metric{
 	testutil.TestMetric(101, "metric6"),
 	testutil.TestMetric(101, "metric7"),
 	testutil.TestMetric(101, "metric8"),
@@ -465,7 +465,7 @@ func TestRunningOutputWriteFailOrder3(t *testing.T) {
 type mockOutput struct {
 	sync.Mutex

-	metrics []telegraf.Metric
+	metrics []plugins.Metric

 	// if true, mock a write failure
 	failWrite bool
@@ -487,7 +487,7 @@ func (m *mockOutput) SampleConfig() string {
 	return ""
 }

-func (m *mockOutput) Write(metrics []telegraf.Metric) error {
+func (m *mockOutput) Write(metrics []plugins.Metric) error {
 	m.Lock()
 	defer m.Unlock()
 	if m.failWrite {
@@ -495,7 +495,7 @@ func (m *mockOutput) Write(metrics []telegraf.Metric) error {
 	}

 	if m.metrics == nil {
-		m.metrics = []telegraf.Metric{}
+		m.metrics = []plugins.Metric{}
 	}

 	for _, metric := range metrics {
@@ -504,7 +504,7 @@ func (m *mockOutput) Write(metrics []telegraf.Metric) error {
 	return nil
 }

-func (m *mockOutput) Metrics() []telegraf.Metric {
+func (m *mockOutput) Metrics() []plugins.Metric {
 	m.Lock()
 	defer m.Unlock()
 	return m.metrics
@@ -531,7 +531,7 @@ func (m *perfOutput) SampleConfig() string {
 	return ""
 }

-func (m *perfOutput) Write(metrics []telegraf.Metric) error {
+func (m *perfOutput) Write(metrics []plugins.Metric) error {
 	if m.failWrite {
 		return fmt.Errorf("Failed Write!")
 	}

View File

@@ -1,12 +1,12 @@
 package models

 import (
-	"github.com/influxdata/telegraf"
+	"github.com/influxdata/telegraf/plugins"
 )

 type RunningProcessor struct {
 	Name      string
-	Processor telegraf.Processor
+	Processor plugins.Processor
 	Config    *ProcessorConfig
 }
@@ -23,8 +23,8 @@ type ProcessorConfig struct {
 	Filter Filter
 }

-func (rp *RunningProcessor) Apply(in ...telegraf.Metric) []telegraf.Metric {
-	ret := []telegraf.Metric{}
+func (rp *RunningProcessor) Apply(in ...plugins.Metric) []plugins.Metric {
+	ret := []plugins.Metric{}
 	for _, metric := range in {
 		if rp.Config.Filter.IsActive() {

View File

@@ -3,7 +3,7 @@ package models
 import (
 	"testing"

-	"github.com/influxdata/telegraf"
+	"github.com/influxdata/telegraf/plugins"
 	"github.com/influxdata/telegraf/testutil"

 	"github.com/stretchr/testify/assert"
@@ -19,8 +19,8 @@ func (f *TestProcessor) Description() string { return "" }
 // "foo" to "fuz"
 // "bar" to "baz"
 // And it also drops measurements named "dropme"
-func (f *TestProcessor) Apply(in ...telegraf.Metric) []telegraf.Metric {
-	out := make([]telegraf.Metric, 0)
+func (f *TestProcessor) Apply(in ...plugins.Metric) []plugins.Metric {
+	out := make([]plugins.Metric, 0)
 	for _, m := range in {
 		switch m.Name() {
 		case "foo":
@@ -46,7 +46,7 @@ func NewTestRunningProcessor() *RunningProcessor {
 }

 func TestRunningProcessor(t *testing.T) {
-	inmetrics := []telegraf.Metric{
+	inmetrics := []plugins.Metric{
 		testutil.TestMetric(1, "foo"),
 		testutil.TestMetric(1, "bar"),
 		testutil.TestMetric(1, "baz"),
@@ -69,7 +69,7 @@ func TestRunningProcessor(t *testing.T) {
 }

 func TestRunningProcessor_WithNameDrop(t *testing.T) {
-	inmetrics := []telegraf.Metric{
+	inmetrics := []plugins.Metric{
 		testutil.TestMetric(1, "foo"),
 		testutil.TestMetric(1, "bar"),
 		testutil.TestMetric(1, "baz"),
@@ -96,7 +96,7 @@ func TestRunningProcessor_WithNameDrop(t *testing.T) {
 }

 func TestRunningProcessor_DroppedMetric(t *testing.T) {
-	inmetrics := []telegraf.Metric{
+	inmetrics := []plugins.Metric{
 		testutil.TestMetric(1, "dropme"),
 		testutil.TestMetric(1, "foo"),
 		testutil.TestMetric(1, "bar"),

View File

@@ -8,7 +8,7 @@ import (
"strconv" "strconv"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
// TODO remove // TODO remove
"github.com/influxdata/influxdb/client/v2" "github.com/influxdata/influxdb/client/v2"
@@ -21,8 +21,8 @@ func New(
tags map[string]string, tags map[string]string,
fields map[string]interface{}, fields map[string]interface{},
t time.Time, t time.Time,
mType ...telegraf.ValueType, mType ...plugins.ValueType,
) (telegraf.Metric, error) { ) (plugins.Metric, error) {
if len(fields) == 0 { if len(fields) == 0 {
return nil, fmt.Errorf("Metric cannot be made without any fields") return nil, fmt.Errorf("Metric cannot be made without any fields")
} }
@@ -30,11 +30,11 @@ func New(
return nil, fmt.Errorf("Metric cannot be made with an empty name") return nil, fmt.Errorf("Metric cannot be made with an empty name")
} }
var thisType telegraf.ValueType var thisType plugins.ValueType
if len(mType) > 0 { if len(mType) > 0 {
thisType = mType[0] thisType = mType[0]
} else { } else {
thisType = telegraf.Untyped thisType = plugins.Untyped
} }
m := &metric{ m := &metric{
@@ -129,7 +129,7 @@ type metric struct {
fields []byte fields []byte
t []byte t []byte
mType telegraf.ValueType mType plugins.ValueType
aggregate bool aggregate bool
// cached values for reuse in "get" functions // cached values for reuse in "get" functions
@@ -154,7 +154,7 @@ func (m *metric) IsAggregate() bool {
return m.aggregate return m.aggregate
} }
func (m *metric) Type() telegraf.ValueType { func (m *metric) Type() plugins.ValueType {
return m.mType return m.mType
} }
@@ -178,11 +178,11 @@ func (m *metric) Serialize() []byte {
return tmp return tmp
} }
func (m *metric) Split(maxSize int) []telegraf.Metric { func (m *metric) Split(maxSize int) []plugins.Metric {
if m.Len() < maxSize { if m.Len() < maxSize {
return []telegraf.Metric{m} return []plugins.Metric{m}
} }
var out []telegraf.Metric var out []plugins.Metric
// constant number of bytes for each metric (in addition to field bytes) // constant number of bytes for each metric (in addition to field bytes)
constant := len(m.name) + len(m.tags) + len(m.t) + 3 constant := len(m.name) + len(m.tags) + len(m.t) + 3
@@ -430,11 +430,11 @@ func (m *metric) RemoveField(key string) error {
return nil return nil
} }
func (m *metric) Copy() telegraf.Metric { func (m *metric) Copy() plugins.Metric {
return copyWith(m.name, m.tags, m.fields, m.t) return copyWith(m.name, m.tags, m.fields, m.t)
} }
func copyWith(name, tags, fields, t []byte) telegraf.Metric { func copyWith(name, tags, fields, t []byte) plugins.Metric {
out := metric{ out := metric{
name: make([]byte, len(name)), name: make([]byte, len(name)),
tags: make([]byte, len(tags)), tags: make([]byte, len(tags)),
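The `Split(maxSize int)` method in the diff above divides a metric into several metrics whose serialized forms each fit under a byte budget, accounting for a constant per-metric overhead of name, tags, and timestamp. A minimal, self-contained sketch of that logic (the `simpleMetric` type and field layout here are illustrative stand-ins, not Telegraf's actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// simpleMetric is a toy stand-in for Telegraf's metric: name, a pre-rendered
// tag string, individual field strings, and a timestamp string.
type simpleMetric struct {
	name   string
	tags   string // e.g. ",host=localhost"
	fields []string
	t      string // e.g. "1480614053000000000"
}

// length of the fully serialized metric (separators approximated by +3).
func (m simpleMetric) len() int {
	return len(m.name) + len(m.tags) + len(strings.Join(m.fields, ",")) + len(m.t) + 3
}

// split returns one metric if it already fits, otherwise packs fields into
// successive metrics so each stays under maxSize bytes.
func (m simpleMetric) split(maxSize int) []simpleMetric {
	if m.len() < maxSize {
		return []simpleMetric{m}
	}
	// constant number of bytes for each metric (in addition to field bytes)
	constant := len(m.name) + len(m.tags) + len(m.t) + 3
	var out []simpleMetric
	cur := simpleMetric{name: m.name, tags: m.tags, t: m.t}
	size := constant
	for _, f := range m.fields {
		if len(cur.fields) > 0 && size+len(f)+1 > maxSize {
			out = append(out, cur)
			cur = simpleMetric{name: m.name, tags: m.tags, t: m.t}
			size = constant
		}
		cur.fields = append(cur.fields, f)
		size += len(f) + 1
	}
	out = append(out, cur)
	return out
}

func main() {
	m := simpleMetric{
		name:   "cpu",
		tags:   ",host=localhost",
		fields: []string{"a=101", "b=10i", "c=10101", "d=101010", "e=42"},
		t:      "1480614053000000000",
	}
	for _, part := range m.split(60) {
		fmt.Println(part.name+part.tags, strings.Join(part.fields, ","), part.t)
	}
}
```

This mirrors why `BenchmarkSplit` above calls `mt.Split(60)`: splitting is a hot path when metrics exceed an output's maximum line size.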


@@ -5,7 +5,7 @@ import (
"testing" "testing"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
) )
// vars for making sure that the compiler doesn't optimize out the benchmarks: // vars for making sure that the compiler doesn't optimize out the benchmarks:
@@ -17,7 +17,7 @@ var (
) )
func BenchmarkNewMetric(b *testing.B) { func BenchmarkNewMetric(b *testing.B) {
var mt telegraf.Metric var mt plugins.Metric
for n := 0; n < b.N; n++ { for n := 0; n < b.N; n++ {
mt, _ = New("test_metric", mt, _ = New("test_metric",
map[string]string{ map[string]string{
@@ -37,7 +37,7 @@ func BenchmarkNewMetric(b *testing.B) {
} }
func BenchmarkAddTag(b *testing.B) { func BenchmarkAddTag(b *testing.B) {
var mt telegraf.Metric var mt plugins.Metric
mt = &metric{ mt = &metric{
name: []byte("cpu"), name: []byte("cpu"),
tags: []byte(",host=localhost"), tags: []byte(",host=localhost"),
@@ -51,14 +51,14 @@ func BenchmarkAddTag(b *testing.B) {
} }
func BenchmarkSplit(b *testing.B) { func BenchmarkSplit(b *testing.B) {
var mt telegraf.Metric var mt plugins.Metric
mt = &metric{ mt = &metric{
name: []byte("cpu"), name: []byte("cpu"),
tags: []byte(",host=localhost"), tags: []byte(",host=localhost"),
fields: []byte("a=101,b=10i,c=10101,d=101010,e=42"), fields: []byte("a=101,b=10i,c=10101,d=101010,e=42"),
t: []byte("1480614053000000000"), t: []byte("1480614053000000000"),
} }
var metrics []telegraf.Metric var metrics []plugins.Metric
for n := 0; n < b.N; n++ { for n := 0; n < b.N; n++ {
metrics = mt.Split(60) metrics = mt.Split(60)
} }


@@ -7,7 +7,7 @@ import (
"testing" "testing"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
) )
@@ -26,7 +26,7 @@ func TestNewMetric(t *testing.T) {
m, err := New("cpu", tags, fields, now) m, err := New("cpu", tags, fields, now)
assert.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, telegraf.Untyped, m.Type()) assert.Equal(t, plugins.Untyped, m.Type())
assert.Equal(t, tags, m.Tags()) assert.Equal(t, tags, m.Tags())
assert.Equal(t, fields, m.Fields()) assert.Equal(t, fields, m.Fields())
assert.Equal(t, "cpu", m.Name()) assert.Equal(t, "cpu", m.Name())
@@ -402,10 +402,10 @@ func TestNewGaugeMetric(t *testing.T) {
"usage_idle": float64(99), "usage_idle": float64(99),
"usage_busy": float64(1), "usage_busy": float64(1),
} }
m, err := New("cpu", tags, fields, now, telegraf.Gauge) m, err := New("cpu", tags, fields, now, plugins.Gauge)
assert.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, telegraf.Gauge, m.Type()) assert.Equal(t, plugins.Gauge, m.Type())
assert.Equal(t, tags, m.Tags()) assert.Equal(t, tags, m.Tags())
assert.Equal(t, fields, m.Fields()) assert.Equal(t, fields, m.Fields())
assert.Equal(t, "cpu", m.Name()) assert.Equal(t, "cpu", m.Name())
@@ -424,10 +424,10 @@ func TestNewCounterMetric(t *testing.T) {
"usage_idle": float64(99), "usage_idle": float64(99),
"usage_busy": float64(1), "usage_busy": float64(1),
} }
m, err := New("cpu", tags, fields, now, telegraf.Counter) m, err := New("cpu", tags, fields, now, plugins.Counter)
assert.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, telegraf.Counter, m.Type()) assert.Equal(t, plugins.Counter, m.Type())
assert.Equal(t, tags, m.Tags()) assert.Equal(t, tags, m.Tags())
assert.Equal(t, fields, m.Fields()) assert.Equal(t, fields, m.Fields())
assert.Equal(t, "cpu", m.Name()) assert.Equal(t, "cpu", m.Name())


@@ -6,7 +6,7 @@ import (
"fmt" "fmt"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
) )
var ( var (
@@ -39,15 +39,15 @@ const (
fieldsState fieldsState
) )
func Parse(buf []byte) ([]telegraf.Metric, error) { func Parse(buf []byte) ([]plugins.Metric, error) {
return ParseWithDefaultTime(buf, time.Now()) return ParseWithDefaultTime(buf, time.Now())
} }
func ParseWithDefaultTime(buf []byte, t time.Time) ([]telegraf.Metric, error) { func ParseWithDefaultTime(buf []byte, t time.Time) ([]plugins.Metric, error) {
if len(buf) <= 6 { if len(buf) <= 6 {
return []telegraf.Metric{}, makeError("buffer too short", buf, 0) return []plugins.Metric{}, makeError("buffer too short", buf, 0)
} }
metrics := make([]telegraf.Metric, 0, bytes.Count(buf, []byte("\n"))+1) metrics := make([]plugins.Metric, 0, bytes.Count(buf, []byte("\n"))+1)
var errStr string var errStr string
i := 0 i := 0
for { for {
@@ -77,7 +77,7 @@ func ParseWithDefaultTime(buf []byte, t time.Time) ([]telegraf.Metric, error) {
return metrics, nil return metrics, nil
} }
func parseMetric(buf []byte, defaultTime time.Time) (telegraf.Metric, error) { func parseMetric(buf []byte, defaultTime time.Time) (plugins.Metric, error) {
var dTime string var dTime string
// scan the first block which is measurement[,tag1=value1,tag2=value=2...] // scan the first block which is measurement[,tag1=value1,tag2=value=2...]
pos, key, err := scanKey(buf, 0) pos, key, err := scanKey(buf, 0)


@@ -1,4 +1,4 @@
package telegraf package plugins
import "time" import "time"


@@ -1,4 +1,4 @@
package telegraf package plugins
// Aggregator is an interface for implementing an Aggregator plugin. // Aggregator is an interface for implementing an Aggregator plugin.
// the RunningAggregator wraps this interface and guarantees that // the RunningAggregator wraps this interface and guarantees that


@@ -1,7 +1,7 @@
package minmax package minmax
import ( import (
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/aggregators" "github.com/influxdata/telegraf/plugins/aggregators"
) )
@@ -9,7 +9,7 @@ type MinMax struct {
cache map[uint64]aggregate cache map[uint64]aggregate
} }
func NewMinMax() telegraf.Aggregator { func NewMinMax() plugins.Aggregator {
mm := &MinMax{} mm := &MinMax{}
mm.Reset() mm.Reset()
return mm return mm
@@ -43,7 +43,7 @@ func (m *MinMax) Description() string {
return "Keep the aggregate min/max of each metric passing through." return "Keep the aggregate min/max of each metric passing through."
} }
func (m *MinMax) Add(in telegraf.Metric) { func (m *MinMax) Add(in plugins.Metric) {
id := in.HashID() id := in.HashID()
if _, ok := m.cache[id]; !ok { if _, ok := m.cache[id]; !ok {
// hit an uncached metric, create caches for first time: // hit an uncached metric, create caches for first time:
@@ -86,7 +86,7 @@ func (m *MinMax) Add(in telegraf.Metric) {
} }
} }
func (m *MinMax) Push(acc telegraf.Accumulator) { func (m *MinMax) Push(acc plugins.Accumulator) {
for _, aggregate := range m.cache { for _, aggregate := range m.cache {
fields := map[string]interface{}{} fields := map[string]interface{}{}
for k, v := range aggregate.fields { for k, v := range aggregate.fields {
@@ -113,7 +113,7 @@ func convert(in interface{}) (float64, bool) {
} }
func init() { func init() {
aggregators.Add("minmax", func() telegraf.Aggregator { aggregators.Add("minmax", func() plugins.Aggregator {
return NewMinMax() return NewMinMax()
}) })
} }
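The minmax aggregator above keeps a per-series cache that `Add` folds new values into and `Push` later flushes. A minimal sketch of that accumulate-then-push shape (not Telegraf's actual code; the series key here is just the metric name, where Telegraf uses `HashID()` over name and tags):

```go
package main

import "fmt"

// minmax holds the running aggregate for one series.
type minmax struct{ min, max float64 }

type aggregator struct {
	cache map[string]minmax
}

func newAggregator() *aggregator {
	return &aggregator{cache: make(map[string]minmax)}
}

// Add folds a single observed value into the cache for its series.
func (a *aggregator) Add(series string, v float64) {
	agg, ok := a.cache[series]
	if !ok {
		// hit an uncached series: the first value is both min and max
		a.cache[series] = minmax{min: v, max: v}
		return
	}
	if v < agg.min {
		agg.min = v
	}
	if v > agg.max {
		agg.max = v
	}
	a.cache[series] = agg
}

// Push emits the aggregates; the real plugin writes them to an Accumulator.
func (a *aggregator) Push() {
	for series, agg := range a.cache {
		fmt.Printf("%s min=%v max=%v\n", series, agg.min, agg.max)
	}
}

func main() {
	a := newAggregator()
	a.Add("cpu", 42)
	a.Add("cpu", 7)
	a.Add("cpu", 99)
	a.Push() // prints: cpu min=7 max=99
}
```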


@@ -1,8 +1,8 @@
package aggregators package aggregators
import "github.com/influxdata/telegraf" import "github.com/influxdata/telegraf/plugins"
type Creator func() telegraf.Aggregator type Creator func() plugins.Aggregator
var Aggregators = map[string]Creator{} var Aggregators = map[string]Creator{}
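The `Creator` map above is the registry pattern every Telegraf plugin type uses: each plugin package registers a factory under a name in its `init()`, and the host later instantiates plugins by name. A self-contained sketch with simplified stand-in types:

```go
package main

import "fmt"

// Aggregator is a simplified stand-in for the plugins.Aggregator interface.
type Aggregator interface {
	Description() string
}

// Creator is a factory returning a fresh plugin instance.
type Creator func() Aggregator

// Aggregators maps registered plugin names to their factories.
var Aggregators = map[string]Creator{}

func Add(name string, c Creator) {
	Aggregators[name] = c
}

type minMax struct{}

func (minMax) Description() string {
	return "Keep the aggregate min/max of each metric passing through."
}

func init() {
	// in the real codebase this registration lives in the plugin's own
	// package, so importing the package is what makes it available
	Add("minmax", func() Aggregator { return minMax{} })
}

func main() {
	agg := Aggregators["minmax"]()
	fmt.Println(agg.Description())
}
```

Registering factories rather than instances is what lets the config loader create one fresh, independent plugin instance per configured block.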


@@ -1,4 +1,4 @@
package telegraf package plugins
type Input interface { type Input interface {
// SampleConfig returns the default configuration of the Input // SampleConfig returns the default configuration of the Input


@@ -9,7 +9,7 @@ import (
"sync" "sync"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal/errchan" "github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
@@ -35,7 +35,7 @@ func (a *Aerospike) Description() string {
return "Read stats from aerospike server(s)" return "Read stats from aerospike server(s)"
} }
func (a *Aerospike) Gather(acc telegraf.Accumulator) error { func (a *Aerospike) Gather(acc plugins.Accumulator) error {
if len(a.Servers) == 0 { if len(a.Servers) == 0 {
return a.gatherServer("127.0.0.1:3000", acc) return a.gatherServer("127.0.0.1:3000", acc)
} }
@@ -54,7 +54,7 @@ func (a *Aerospike) Gather(acc telegraf.Accumulator) error {
return errChan.Error() return errChan.Error()
} }
func (a *Aerospike) gatherServer(hostport string, acc telegraf.Accumulator) error { func (a *Aerospike) gatherServer(hostport string, acc plugins.Accumulator) error {
host, port, err := net.SplitHostPort(hostport) host, port, err := net.SplitHostPort(hostport)
if err != nil { if err != nil {
return err return err
@@ -152,7 +152,7 @@ func copyTags(m map[string]string) map[string]string {
} }
func init() { func init() {
inputs.Add("aerospike", func() telegraf.Input { inputs.Add("aerospike", func() plugins.Input {
return &Aerospike{} return &Aerospike{}
}) })
} }


@@ -10,7 +10,7 @@ import (
"strings" "strings"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal" "github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
) )
@@ -57,7 +57,7 @@ func (n *Apache) Description() string {
return "Read Apache status information (mod_status)" return "Read Apache status information (mod_status)"
} }
func (n *Apache) Gather(acc telegraf.Accumulator) error { func (n *Apache) Gather(acc plugins.Accumulator) error {
if len(n.Urls) == 0 { if len(n.Urls) == 0 {
n.Urls = []string{"http://localhost/server-status?auto"} n.Urls = []string{"http://localhost/server-status?auto"}
} }
@@ -89,7 +89,7 @@ func (n *Apache) Gather(acc telegraf.Accumulator) error {
return outerr return outerr
} }
func (n *Apache) gatherUrl(addr *url.URL, acc telegraf.Accumulator) error { func (n *Apache) gatherUrl(addr *url.URL, acc plugins.Accumulator) error {
var tr *http.Transport var tr *http.Transport
@@ -228,7 +228,7 @@ func getTags(addr *url.URL) map[string]string {
} }
func init() { func init() {
inputs.Add("apache", func() telegraf.Input { inputs.Add("apache", func() plugins.Input {
return &Apache{} return &Apache{}
}) })
} }


@@ -8,7 +8,7 @@ import (
"strconv" "strconv"
"strings" "strings"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
) )
@@ -70,7 +70,7 @@ func prettyToBytes(v string) uint64 {
return uint64(result) return uint64(result)
} }
func (b *Bcache) gatherBcache(bdev string, acc telegraf.Accumulator) error { func (b *Bcache) gatherBcache(bdev string, acc plugins.Accumulator) error {
tags := getTags(bdev) tags := getTags(bdev)
metrics, err := filepath.Glob(bdev + "/stats_total/*") metrics, err := filepath.Glob(bdev + "/stats_total/*")
if len(metrics) < 0 { if len(metrics) < 0 {
@@ -105,7 +105,7 @@ func (b *Bcache) gatherBcache(bdev string, acc telegraf.Accumulator) error {
return nil return nil
} }
func (b *Bcache) Gather(acc telegraf.Accumulator) error { func (b *Bcache) Gather(acc plugins.Accumulator) error {
bcacheDevsChecked := make(map[string]bool) bcacheDevsChecked := make(map[string]bool)
var restrictDevs bool var restrictDevs bool
if len(b.BcacheDevs) != 0 { if len(b.BcacheDevs) != 0 {
@@ -136,7 +136,7 @@ func (b *Bcache) Gather(acc telegraf.Accumulator) error {
} }
func init() { func init() {
inputs.Add("bcache", func() telegraf.Input { inputs.Add("bcache", func() plugins.Input {
return &Bcache{} return &Bcache{}
}) })
} }


@@ -4,7 +4,7 @@ import (
"encoding/json" "encoding/json"
"errors" "errors"
"fmt" "fmt"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
"io/ioutil" "io/ioutil"
"log" "log"
@@ -35,13 +35,13 @@ type Cassandra struct {
type javaMetric struct { type javaMetric struct {
host string host string
metric string metric string
acc telegraf.Accumulator acc plugins.Accumulator
} }
type cassandraMetric struct { type cassandraMetric struct {
host string host string
metric string metric string
acc telegraf.Accumulator acc plugins.Accumulator
} }
type jmxMetric interface { type jmxMetric interface {
@@ -49,12 +49,12 @@ type jmxMetric interface {
} }
func newJavaMetric(host string, metric string, func newJavaMetric(host string, metric string,
acc telegraf.Accumulator) *javaMetric { acc plugins.Accumulator) *javaMetric {
return &javaMetric{host: host, metric: metric, acc: acc} return &javaMetric{host: host, metric: metric, acc: acc}
} }
func newCassandraMetric(host string, metric string, func newCassandraMetric(host string, metric string,
acc telegraf.Accumulator) *cassandraMetric { acc plugins.Accumulator) *cassandraMetric {
return &cassandraMetric{host: host, metric: metric, acc: acc} return &cassandraMetric{host: host, metric: metric, acc: acc}
} }
@@ -257,7 +257,7 @@ func parseServerTokens(server string) map[string]string {
return serverTokens return serverTokens
} }
func (c *Cassandra) Gather(acc telegraf.Accumulator) error { func (c *Cassandra) Gather(acc plugins.Accumulator) error {
context := c.Context context := c.Context
servers := c.Servers servers := c.Servers
metrics := c.Metrics metrics := c.Metrics
@@ -289,7 +289,6 @@ func (c *Cassandra) Gather(acc telegraf.Accumulator) error {
requestUrl.User = url.UserPassword(serverTokens["user"], requestUrl.User = url.UserPassword(serverTokens["user"],
serverTokens["passwd"]) serverTokens["passwd"])
} }
fmt.Printf("host %s url %s\n", serverTokens["host"], requestUrl)
out, err := c.getAttr(requestUrl) out, err := c.getAttr(requestUrl)
if out["status"] != 200.0 { if out["status"] != 200.0 {
@@ -303,7 +302,7 @@ func (c *Cassandra) Gather(acc telegraf.Accumulator) error {
} }
func init() { func init() {
inputs.Add("cassandra", func() telegraf.Input { inputs.Add("cassandra", func() plugins.Input {
return &Cassandra{jClient: &JolokiaClientImpl{client: &http.Client{}}} return &Cassandra{jClient: &JolokiaClientImpl{client: &http.Client{}}}
}) })
} }


@@ -4,7 +4,7 @@ import (
"bytes" "bytes"
"encoding/json" "encoding/json"
"fmt" "fmt"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
"io/ioutil" "io/ioutil"
"log" "log"
@@ -75,7 +75,7 @@ func (c *Ceph) SampleConfig() string {
return sampleConfig return sampleConfig
} }
func (c *Ceph) Gather(acc telegraf.Accumulator) error { func (c *Ceph) Gather(acc plugins.Accumulator) error {
if c.GatherAdminSocketStats { if c.GatherAdminSocketStats {
if err := c.gatherAdminSocketStats(acc); err != nil { if err := c.gatherAdminSocketStats(acc); err != nil {
return err return err
@@ -91,7 +91,7 @@ func (c *Ceph) Gather(acc telegraf.Accumulator) error {
return nil return nil
} }
func (c *Ceph) gatherAdminSocketStats(acc telegraf.Accumulator) error { func (c *Ceph) gatherAdminSocketStats(acc plugins.Accumulator) error {
sockets, err := findSockets(c) sockets, err := findSockets(c)
if err != nil { if err != nil {
return fmt.Errorf("failed to find sockets at path '%s': %v", c.SocketDir, err) return fmt.Errorf("failed to find sockets at path '%s': %v", c.SocketDir, err)
@@ -117,10 +117,10 @@ func (c *Ceph) gatherAdminSocketStats(acc telegraf.Accumulator) error {
return nil return nil
} }
func (c *Ceph) gatherClusterStats(acc telegraf.Accumulator) error { func (c *Ceph) gatherClusterStats(acc plugins.Accumulator) error {
jobs := []struct { jobs := []struct {
command string command string
parser func(telegraf.Accumulator, string) error parser func(plugins.Accumulator, string) error
}{ }{
{"status", decodeStatus}, {"status", decodeStatus},
{"df", decodeDf}, {"df", decodeDf},
@@ -155,7 +155,7 @@ func init() {
GatherClusterStats: false, GatherClusterStats: false,
} }
inputs.Add(measurement, func() telegraf.Input { return &c }) inputs.Add(measurement, func() plugins.Input { return &c })
} }
@@ -322,7 +322,7 @@ func (c *Ceph) exec(command string) (string, error) {
return output, nil return output, nil
} }
func decodeStatus(acc telegraf.Accumulator, input string) error { func decodeStatus(acc plugins.Accumulator, input string) error {
data := make(map[string]interface{}) data := make(map[string]interface{})
err := json.Unmarshal([]byte(input), &data) err := json.Unmarshal([]byte(input), &data)
if err != nil { if err != nil {
@@ -347,7 +347,7 @@ func decodeStatus(acc telegraf.Accumulator, input string) error {
return nil return nil
} }
func decodeStatusOsdmap(acc telegraf.Accumulator, data map[string]interface{}) error { func decodeStatusOsdmap(acc plugins.Accumulator, data map[string]interface{}) error {
osdmap, ok := data["osdmap"].(map[string]interface{}) osdmap, ok := data["osdmap"].(map[string]interface{})
if !ok { if !ok {
return fmt.Errorf("WARNING %s - unable to decode osdmap", measurement) return fmt.Errorf("WARNING %s - unable to decode osdmap", measurement)
@@ -360,7 +360,7 @@ func decodeStatusOsdmap(acc telegraf.Accumulator, data map[string]interface{}) e
return nil return nil
} }
func decodeStatusPgmap(acc telegraf.Accumulator, data map[string]interface{}) error { func decodeStatusPgmap(acc plugins.Accumulator, data map[string]interface{}) error {
pgmap, ok := data["pgmap"].(map[string]interface{}) pgmap, ok := data["pgmap"].(map[string]interface{})
if !ok { if !ok {
return fmt.Errorf("WARNING %s - unable to decode pgmap", measurement) return fmt.Errorf("WARNING %s - unable to decode pgmap", measurement)
@@ -376,7 +376,7 @@ func decodeStatusPgmap(acc telegraf.Accumulator, data map[string]interface{}) er
return nil return nil
} }
func decodeStatusPgmapState(acc telegraf.Accumulator, data map[string]interface{}) error { func decodeStatusPgmapState(acc plugins.Accumulator, data map[string]interface{}) error {
pgmap, ok := data["pgmap"].(map[string]interface{}) pgmap, ok := data["pgmap"].(map[string]interface{})
if !ok { if !ok {
return fmt.Errorf("WARNING %s - unable to decode pgmap", measurement) return fmt.Errorf("WARNING %s - unable to decode pgmap", measurement)
@@ -409,7 +409,7 @@ func decodeStatusPgmapState(acc telegraf.Accumulator, data map[string]interface{
return nil return nil
} }
func decodeDf(acc telegraf.Accumulator, input string) error { func decodeDf(acc plugins.Accumulator, input string) error {
data := make(map[string]interface{}) data := make(map[string]interface{})
err := json.Unmarshal([]byte(input), &data) err := json.Unmarshal([]byte(input), &data)
if err != nil { if err != nil {
@@ -451,7 +451,7 @@ func decodeDf(acc telegraf.Accumulator, input string) error {
return nil return nil
} }
func decodeOsdPoolStats(acc telegraf.Accumulator, input string) error { func decodeOsdPoolStats(acc plugins.Accumulator, input string) error {
data := make([]map[string]interface{}, 0) data := make([]map[string]interface{}, 0)
err := json.Unmarshal([]byte(input), &data) err := json.Unmarshal([]byte(input), &data)
if err != nil { if err != nil {


@@ -1,7 +1,7 @@
package cgroup package cgroup
import ( import (
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
) )
@@ -34,5 +34,5 @@ func (g *CGroup) Description() string {
} }
func init() { func init() {
inputs.Add("cgroup", func() telegraf.Input { return &CGroup{} }) inputs.Add("cgroup", func() plugins.Input { return &CGroup{} })
} }


@@ -11,12 +11,12 @@ import (
"regexp" "regexp"
"strconv" "strconv"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
) )
const metricName = "cgroup" const metricName = "cgroup"
func (g *CGroup) Gather(acc telegraf.Accumulator) error { func (g *CGroup) Gather(acc plugins.Accumulator) error {
list := make(chan pathInfo) list := make(chan pathInfo)
go g.generateDirs(list) go g.generateDirs(list)
@@ -32,7 +32,7 @@ func (g *CGroup) Gather(acc telegraf.Accumulator) error {
return nil return nil
} }
func (g *CGroup) gatherDir(dir string, acc telegraf.Accumulator) error { func (g *CGroup) gatherDir(dir string, acc plugins.Accumulator) error {
fields := make(map[string]interface{}) fields := make(map[string]interface{})
list := make(chan pathInfo) list := make(chan pathInfo)


@@ -3,9 +3,9 @@
package cgroup package cgroup
import ( import (
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
) )
func (g *CGroup) Gather(acc telegraf.Accumulator) error { func (g *CGroup) Gather(acc plugins.Accumulator) error {
return nil return nil
} }


@@ -10,7 +10,7 @@ import (
"strings" "strings"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal" "github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
) )
@@ -35,7 +35,7 @@ func (*Chrony) SampleConfig() string {
` `
} }
func (c *Chrony) Gather(acc telegraf.Accumulator) error { func (c *Chrony) Gather(acc plugins.Accumulator) error {
if len(c.path) == 0 { if len(c.path) == 0 {
return errors.New("chronyc not found: verify that chrony is installed and that chronyc is in your PATH") return errors.New("chronyc not found: verify that chrony is installed and that chronyc is in your PATH")
} }
@@ -127,7 +127,7 @@ func init() {
if len(path) > 0 { if len(path) > 0 {
c.path = path c.path = path
} }
inputs.Add("chrony", func() telegraf.Input { inputs.Add("chrony", func() plugins.Input {
return &c return &c
}) })
} }
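The chrony plugin above resolves the `chronyc` binary once at registration time and has `Gather` fail fast with a clear error if it was never found. A sketch of that lookup-at-init pattern using `os/exec.LookPath` (the constructor name here is illustrative):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

type Chrony struct {
	path string // resolved path to chronyc, empty if not found
}

// Gather refuses to run when the binary was not found, mirroring the
// error message in the diff above.
func (c *Chrony) Gather() error {
	if len(c.path) == 0 {
		return errors.New("chronyc not found: verify that chrony is installed and that chronyc is in your PATH")
	}
	// the real plugin would exec `chronyc tracking` here and parse its output
	fmt.Println("would run:", c.path, "tracking")
	return nil
}

// newChrony performs the one-time PATH lookup the plugin does in init().
func newChrony() *Chrony {
	c := &Chrony{}
	if path, err := exec.LookPath("chronyc"); err == nil && len(path) > 0 {
		c.path = path
	}
	return c
}

func main() {
	c := newChrony()
	if err := c.Gather(); err != nil {
		fmt.Println(err)
	}
}
```

Doing the lookup once at registration keeps the per-interval gather cheap and turns a missing dependency into a stable, descriptive error instead of a repeated exec failure.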


@@ -10,7 +10,7 @@ import (
"github.com/aws/aws-sdk-go/service/cloudwatch" "github.com/aws/aws-sdk-go/service/cloudwatch"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal" "github.com/influxdata/telegraf/internal"
internalaws "github.com/influxdata/telegraf/internal/config/aws" internalaws "github.com/influxdata/telegraf/internal/config/aws"
"github.com/influxdata/telegraf/internal/errchan" "github.com/influxdata/telegraf/internal/errchan"
@@ -126,11 +126,7 @@ func (c *CloudWatch) Description() string {
return "Pull Metric Statistics from Amazon CloudWatch" return "Pull Metric Statistics from Amazon CloudWatch"
} }
func (c *CloudWatch) Gather(acc telegraf.Accumulator) error { func SelectMetrics(c *CloudWatch) ([]*cloudwatch.Metric, error) {
if c.client == nil {
c.initializeCloudWatch()
}
var metrics []*cloudwatch.Metric var metrics []*cloudwatch.Metric
// check for provided metric filter // check for provided metric filter
@@ -155,11 +151,11 @@ func (c *CloudWatch) Gather(acc telegraf.Accumulator) error {
} else { } else {
allMetrics, err := c.fetchNamespaceMetrics() allMetrics, err := c.fetchNamespaceMetrics()
if err != nil { if err != nil {
return err return nil, err
} }
for _, name := range m.MetricNames { for _, name := range m.MetricNames {
for _, metric := range allMetrics { for _, metric := range allMetrics {
if isSelected(metric, m.Dimensions) { if isSelected(name, metric, m.Dimensions) {
metrics = append(metrics, &cloudwatch.Metric{ metrics = append(metrics, &cloudwatch.Metric{
Namespace: aws.String(c.Namespace), Namespace: aws.String(c.Namespace),
MetricName: aws.String(name), MetricName: aws.String(name),
@@ -169,16 +165,26 @@ func (c *CloudWatch) Gather(acc telegraf.Accumulator) error {
} }
} }
} }
} }
} else { } else {
var err error var err error
metrics, err = c.fetchNamespaceMetrics() metrics, err = c.fetchNamespaceMetrics()
if err != nil { if err != nil {
return err return nil, err
} }
} }
return metrics, nil
}
func (c *CloudWatch) Gather(acc plugins.Accumulator) error {
if c.client == nil {
c.initializeCloudWatch()
}
metrics, err := SelectMetrics(c)
if err != nil {
return err
}
metricCount := len(metrics) metricCount := len(metrics)
errChan := errchan.New(metricCount) errChan := errchan.New(metricCount)
@@ -204,7 +210,7 @@ func (c *CloudWatch) Gather(acc telegraf.Accumulator) error {
} }
func init() { func init() {
inputs.Add("cloudwatch", func() telegraf.Input { inputs.Add("cloudwatch", func() plugins.Input {
ttl, _ := time.ParseDuration("1hr") ttl, _ := time.ParseDuration("1hr")
return &CloudWatch{ return &CloudWatch{
CacheTTL: internal.Duration{Duration: ttl}, CacheTTL: internal.Duration{Duration: ttl},
@@ -275,7 +281,7 @@ func (c *CloudWatch) fetchNamespaceMetrics() ([]*cloudwatch.Metric, error) {
* Gather given Metric and emit any error * Gather given Metric and emit any error
*/ */
func (c *CloudWatch) gatherMetric( func (c *CloudWatch) gatherMetric(
acc telegraf.Accumulator, acc plugins.Accumulator,
metric *cloudwatch.Metric, metric *cloudwatch.Metric,
now time.Time, now time.Time,
errChan chan error, errChan chan error,
@@ -380,7 +386,10 @@ func hasWilcard(dimensions []*Dimension) bool {
return false return false
} }
func isSelected(metric *cloudwatch.Metric, dimensions []*Dimension) bool { func isSelected(name string, metric *cloudwatch.Metric, dimensions []*Dimension) bool {
if name != *metric.MetricName {
return false
}
if len(metric.Dimensions) != len(dimensions) { if len(metric.Dimensions) != len(dimensions) {
return false return false
} }
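The CloudWatch change above extracts the selection logic out of `Gather` into a standalone `SelectMetrics` function, and threads the requested metric name into `isSelected`, so the selection can be unit-tested against a mock client without running a full gather cycle. A minimal, self-contained sketch of that refactoring pattern (the `Client`, `Plugin`, and `mockClient` names are illustrative stand-ins for the AWS types):

```go
package main

import "fmt"

// Client is the narrow interface the plugin depends on; tests substitute a mock.
type Client interface {
	ListMetrics() ([]string, error)
}

type Plugin struct {
	client Client
	names  []string // requested metric names; empty means "take everything"
}

// SelectMetrics is pure selection logic, extracted from Gather so it can be
// exercised directly with a mock Client.
func SelectMetrics(p *Plugin) ([]string, error) {
	all, err := p.client.ListMetrics()
	if err != nil {
		return nil, err
	}
	if len(p.names) == 0 {
		return all, nil
	}
	var out []string
	for _, want := range p.names {
		for _, m := range all {
			// checking the name here mirrors the isSelected fix above
			if m == want {
				out = append(out, m)
			}
		}
	}
	return out, nil
}

// Gather now only delegates selection and handles collection/emission.
func (p *Plugin) Gather() error {
	metrics, err := SelectMetrics(p)
	if err != nil {
		return err
	}
	for _, m := range metrics {
		fmt.Println("gathering", m)
	}
	return nil
}

type mockClient struct{}

func (mockClient) ListMetrics() ([]string, error) {
	return []string{"Latency", "RequestCount", "HealthyHostCount"}, nil
}

func main() {
	p := &Plugin{client: mockClient{}, names: []string{"Latency", "RequestCount"}}
	metrics, _ := SelectMetrics(p)
	fmt.Println(metrics) // [Latency RequestCount]
}
```

This is exactly the shape the test file below exploits: `mockSelectMetricsCloudWatchClient` satisfies the client interface and `TestSelectMetrics` asserts on the selected set without touching AWS.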


@@ -11,9 +11,9 @@ import (
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
) )
type mockCloudWatchClient struct{} type mockGatherCloudWatchClient struct{}
func (m *mockCloudWatchClient) ListMetrics(params *cloudwatch.ListMetricsInput) (*cloudwatch.ListMetricsOutput, error) { func (m *mockGatherCloudWatchClient) ListMetrics(params *cloudwatch.ListMetricsInput) (*cloudwatch.ListMetricsOutput, error) {
metric := &cloudwatch.Metric{ metric := &cloudwatch.Metric{
Namespace: params.Namespace, Namespace: params.Namespace,
MetricName: aws.String("Latency"), MetricName: aws.String("Latency"),
@@ -31,7 +31,7 @@ func (m *mockCloudWatchClient) ListMetrics(params *cloudwatch.ListMetricsInput)
return result, nil return result, nil
} }
func (m *mockCloudWatchClient) GetMetricStatistics(params *cloudwatch.GetMetricStatisticsInput) (*cloudwatch.GetMetricStatisticsOutput, error) { func (m *mockGatherCloudWatchClient) GetMetricStatistics(params *cloudwatch.GetMetricStatisticsInput) (*cloudwatch.GetMetricStatisticsOutput, error) {
dataPoint := &cloudwatch.Datapoint{ dataPoint := &cloudwatch.Datapoint{
Timestamp: params.EndTime, Timestamp: params.EndTime,
Minimum: aws.Float64(0.1), Minimum: aws.Float64(0.1),
@@ -62,7 +62,7 @@ func TestGather(t *testing.T) {
} }
var acc testutil.Accumulator var acc testutil.Accumulator
c.client = &mockCloudWatchClient{} c.client = &mockGatherCloudWatchClient{}
c.Gather(&acc) c.Gather(&acc)
@@ -83,6 +83,94 @@ func TestGather(t *testing.T) {
} }
type mockSelectMetricsCloudWatchClient struct{}
func (m *mockSelectMetricsCloudWatchClient) ListMetrics(params *cloudwatch.ListMetricsInput) (*cloudwatch.ListMetricsOutput, error) {
metrics := []*cloudwatch.Metric{}
// 4 metrics are available
metricNames := []string{"Latency", "RequestCount", "HealthyHostCount", "UnHealthyHostCount"}
// for 3 ELBs
loadBalancers := []string{"lb-1", "lb-2", "lb-3"}
// in 2 AZs
availabilityZones := []string{"us-east-1a", "us-east-1b"}
for _, m := range metricNames {
for _, lb := range loadBalancers {
// For each metric/ELB pair, we get an aggregate value across all AZs.
metrics = append(metrics, &cloudwatch.Metric{
Namespace: aws.String("AWS/ELB"),
MetricName: aws.String(m),
Dimensions: []*cloudwatch.Dimension{
&cloudwatch.Dimension{
Name: aws.String("LoadBalancerName"),
Value: aws.String(lb),
},
},
})
for _, az := range availabilityZones {
// We get a metric for each metric/ELB/AZ triplet.
metrics = append(metrics, &cloudwatch.Metric{
Namespace: aws.String("AWS/ELB"),
MetricName: aws.String(m),
Dimensions: []*cloudwatch.Dimension{
&cloudwatch.Dimension{
Name: aws.String("LoadBalancerName"),
Value: aws.String(lb),
},
&cloudwatch.Dimension{
Name: aws.String("AvailabilityZone"),
Value: aws.String(az),
},
},
})
}
}
}
result := &cloudwatch.ListMetricsOutput{
Metrics: metrics,
}
return result, nil
}
func (m *mockSelectMetricsCloudWatchClient) GetMetricStatistics(params *cloudwatch.GetMetricStatisticsInput) (*cloudwatch.GetMetricStatisticsOutput, error) {
return nil, nil
}
func TestSelectMetrics(t *testing.T) {
duration, _ := time.ParseDuration("1m")
internalDuration := internal.Duration{
Duration: duration,
}
c := &CloudWatch{
Region: "us-east-1",
Namespace: "AWS/ELB",
Delay: internalDuration,
Period: internalDuration,
RateLimit: 10,
Metrics: []*Metric{
&Metric{
MetricNames: []string{"Latency", "RequestCount"},
Dimensions: []*Dimension{
&Dimension{
Name: "LoadBalancerName",
Value: "*",
},
&Dimension{
Name: "AvailabilityZone",
Value: "*",
},
},
},
},
}
c.client = &mockSelectMetricsCloudWatchClient{}
metrics, err := SelectMetrics(c)
// We've asked for 2 (out of 4) metrics, over all 3 load balancers in all 2
// AZs. We should get 12 metrics.
assert.Equal(t, 12, len(metrics))
assert.Nil(t, err)
}
func TestGenerateStatisticsInputParams(t *testing.T) { func TestGenerateStatisticsInputParams(t *testing.T) {
d := &cloudwatch.Dimension{ d := &cloudwatch.Dimension{
Name: aws.String("LoadBalancerName"), Name: aws.String("LoadBalancerName"),


@@ -9,7 +9,7 @@ import (
"strconv" "strconv"
"strings" "strings"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
"log" "log"
"path/filepath" "path/filepath"
@@ -70,7 +70,7 @@ func (c *Conntrack) SampleConfig() string {
return sampleConfig return sampleConfig
} }
func (c *Conntrack) Gather(acc telegraf.Accumulator) error { func (c *Conntrack) Gather(acc plugins.Accumulator) error {
c.setDefaults() c.setDefaults()
var metricKey string var metricKey string
@@ -116,5 +116,5 @@ func (c *Conntrack) Gather(acc telegraf.Accumulator) error {
} }
func init() { func init() {
inputs.Add(inputName, func() telegraf.Input { return &Conntrack{} }) inputs.Add(inputName, func() plugins.Input { return &Conntrack{} })
} }


@@ -29,9 +29,9 @@ to query the data. It will not report the [telemetry](https://www.consul.io/docs
 Tags:
 - node: on which node check/service is registered on
 - service_name: name of the service (this is the service name not the service ID)
+- check_id
 Fields:
-- check_id
 - check_name
 - service_id
 - status
@@ -41,6 +41,6 @@ Fields:
 ```
 $ telegraf --config ./telegraf.conf -input-filter consul -test
 * Plugin: consul, Collection 1
-> consul_health_checks,host=wolfpit,node=consul-server-node check_id="serfHealth",check_name="Serf Health Status",service_id="",status="passing" 1464698464486439902
-> consul_health_checks,host=wolfpit,node=consul-server-node,service_name=www.example.com check_id="service:www-example-com.test01",check_name="Service 'www.example.com' check",service_id="www-example-com.test01",status="critical" 1464698464486519036
+> consul_health_checks,host=wolfpit,node=consul-server-node,check_id="serfHealth" check_name="Serf Health Status",service_id="",status="passing" 1464698464486439902
+> consul_health_checks,host=wolfpit,node=consul-server-node,service_name=www.example.com,check_id="service:www-example-com.test01" check_name="Service 'www.example.com' check",service_id="www-example-com.test01",status="critical" 1464698464486519036
 ```


@@ -4,7 +4,7 @@ import (
 "net/http"
 "github.com/hashicorp/consul/api"
-"github.com/influxdata/telegraf"
+"github.com/influxdata/telegraf/plugins"
 "github.com/influxdata/telegraf/internal"
 "github.com/influxdata/telegraf/plugins/inputs"
 )
@@ -90,24 +90,24 @@ func (c *Consul) createAPIClient() (*api.Client, error) {
 return api.NewClient(config)
 }
-func (c *Consul) GatherHealthCheck(acc telegraf.Accumulator, checks []*api.HealthCheck) {
+func (c *Consul) GatherHealthCheck(acc plugins.Accumulator, checks []*api.HealthCheck) {
 for _, check := range checks {
 record := make(map[string]interface{})
 tags := make(map[string]string)
-record["check_id"] = check.CheckID
 record["check_name"] = check.Name
 record["service_id"] = check.ServiceID
 record["status"] = check.Status
 tags["node"] = check.Node
 tags["service_name"] = check.ServiceName
+tags["check_id"] = check.CheckID
 acc.AddFields("consul_health_checks", record, tags)
 }
 }
-func (c *Consul) Gather(acc telegraf.Accumulator) error {
+func (c *Consul) Gather(acc plugins.Accumulator) error {
 if c.client == nil {
 newClient, err := c.createAPIClient()
@@ -130,7 +130,7 @@ func (c *Consul) Gather(acc telegraf.Accumulator) error {
 }
 func init() {
-inputs.Add("consul", func() telegraf.Input {
+inputs.Add("consul", func() plugins.Input {
 return &Consul{}
 })
 }


@@ -22,7 +22,6 @@ var sampleChecks = []*api.HealthCheck{
 func TestGatherHealtCheck(t *testing.T) {
 expectedFields := map[string]interface{}{
-"check_id": "foo.health123",
 "check_name": "foo.health",
 "status": "passing",
 "service_id": "foo.123",
@@ -31,6 +30,7 @@ func TestGatherHealtCheck(t *testing.T) {
 expectedTags := map[string]string{
 "node": "localhost",
 "service_name": "foo",
+"check_id": "foo.health123",
 }
 var acc testutil.Accumulator


@@ -2,7 +2,7 @@ package couchbase
 import (
 couchbase "github.com/couchbase/go-couchbase"
-"github.com/influxdata/telegraf"
+"github.com/influxdata/telegraf/plugins"
 "github.com/influxdata/telegraf/plugins/inputs"
 "sync"
 )
@@ -34,7 +34,7 @@ func (r *Couchbase) Description() string {
 // Reads stats from all configured clusters. Accumulates stats.
 // Returns one of the errors encountered while gathering stats (if any).
-func (r *Couchbase) Gather(acc telegraf.Accumulator) error {
+func (r *Couchbase) Gather(acc plugins.Accumulator) error {
 if len(r.Servers) == 0 {
 r.gatherServer("http://localhost:8091/", acc, nil)
 return nil
@@ -57,7 +57,7 @@ func (r *Couchbase) Gather(acc telegraf.Accumulator) error {
 return outerr
 }
-func (r *Couchbase) gatherServer(addr string, acc telegraf.Accumulator, pool *couchbase.Pool) error {
+func (r *Couchbase) gatherServer(addr string, acc plugins.Accumulator, pool *couchbase.Pool) error {
 if pool == nil {
 client, err := couchbase.Connect(addr)
 if err != nil {
@@ -98,7 +98,7 @@ func (r *Couchbase) gatherServer(addr string, acc telegraf.Accumulator, pool *co
 }
 func init() {
-inputs.Add("couchbase", func() telegraf.Input {
+inputs.Add("couchbase", func() plugins.Input {
 return &Couchbase{}
 })
 }


@@ -4,7 +4,7 @@ import (
 "encoding/json"
 "errors"
 "fmt"
-"github.com/influxdata/telegraf"
+"github.com/influxdata/telegraf/plugins"
 "github.com/influxdata/telegraf/plugins/inputs"
 "net/http"
 "reflect"
@@ -82,7 +82,7 @@ func (*CouchDB) SampleConfig() string {
 `
 }
-func (c *CouchDB) Gather(accumulator telegraf.Accumulator) error {
+func (c *CouchDB) Gather(accumulator plugins.Accumulator) error {
 errorChannel := make(chan error, len(c.HOSTs))
 var wg sync.WaitGroup
 for _, u := range c.HOSTs {
@@ -122,7 +122,7 @@ var client = &http.Client{
 Timeout: time.Duration(4 * time.Second),
 }
-func (c *CouchDB) fetchAndInsertData(accumulator telegraf.Accumulator, host string) error {
+func (c *CouchDB) fetchAndInsertData(accumulator plugins.Accumulator, host string) error {
 response, error := client.Get(host)
 if error != nil {
@@ -209,7 +209,7 @@ func (c *CouchDB) generateFields(prefix string, obj metaData) map[string]interfa
 }
 func init() {
-inputs.Add("couchdb", func() telegraf.Input {
+inputs.Add("couchdb", func() plugins.Input {
 return &CouchDB{}
 })
 }


@@ -11,7 +11,7 @@ import (
 "sync"
 "time"
-"github.com/influxdata/telegraf"
+"github.com/influxdata/telegraf/plugins"
 "github.com/influxdata/telegraf/plugins/inputs"
 )
@@ -64,7 +64,7 @@ var ErrProtocolError = errors.New("disque protocol error")
 // Reads stats from all configured servers accumulates stats.
 // Returns one of the errors encountered while gather stats (if any).
-func (g *Disque) Gather(acc telegraf.Accumulator) error {
+func (g *Disque) Gather(acc plugins.Accumulator) error {
 if len(g.Servers) == 0 {
 url := &url.URL{
 Host: ":7711",
@@ -101,7 +101,7 @@ func (g *Disque) Gather(acc telegraf.Accumulator) error {
 const defaultPort = "7711"
-func (g *Disque) gatherServer(addr *url.URL, acc telegraf.Accumulator) error {
+func (g *Disque) gatherServer(addr *url.URL, acc plugins.Accumulator) error {
 if g.c == nil {
 _, _, err := net.SplitHostPort(addr.Host)
@@ -204,7 +204,7 @@ func (g *Disque) gatherServer(addr *url.URL, acc telegraf.Accumulator) error {
 }
 func init() {
-inputs.Add("disque", func() telegraf.Input {
+inputs.Add("disque", func() plugins.Input {
 return &Disque{}
 })
 }


@@ -8,7 +8,7 @@ import (
 "strconv"
 "time"
-"github.com/influxdata/telegraf"
+"github.com/influxdata/telegraf/plugins"
 "github.com/influxdata/telegraf/internal/errchan"
 "github.com/influxdata/telegraf/plugins/inputs"
 )
@@ -55,7 +55,7 @@ func (d *DnsQuery) SampleConfig() string {
 func (d *DnsQuery) Description() string {
 return "Query given DNS server and gives statistics"
 }
-func (d *DnsQuery) Gather(acc telegraf.Accumulator) error {
+func (d *DnsQuery) Gather(acc plugins.Accumulator) error {
 d.setDefaultValues()
 errChan := errchan.New(len(d.Domains) * len(d.Servers))
@@ -156,7 +156,7 @@ func (d *DnsQuery) parseRecordType() (uint16, error) {
 }
 func init() {
-inputs.Add("dns_query", func() telegraf.Input {
+inputs.Add("dns_query", func() plugins.Input {
 return &DnsQuery{}
 })
 }


@@ -15,7 +15,7 @@ import (
 "github.com/docker/engine-api/client"
 "github.com/docker/engine-api/types"
-"github.com/influxdata/telegraf"
+"github.com/influxdata/telegraf/plugins"
 "github.com/influxdata/telegraf/internal"
 "github.com/influxdata/telegraf/plugins/inputs"
 )
@@ -79,7 +79,7 @@ func (d *Docker) Description() string {
 func (d *Docker) SampleConfig() string { return sampleConfig }
 // Gather starts stats collection
-func (d *Docker) Gather(acc telegraf.Accumulator) error {
+func (d *Docker) Gather(acc plugins.Accumulator) error {
 if d.client == nil {
 var c *client.Client
 var err error
@@ -136,7 +136,7 @@ func (d *Docker) Gather(acc telegraf.Accumulator) error {
 return nil
 }
-func (d *Docker) gatherInfo(acc telegraf.Accumulator) error {
+func (d *Docker) gatherInfo(acc plugins.Accumulator) error {
 // Init vars
 dataFields := make(map[string]interface{})
 metadataFields := make(map[string]interface{})
@@ -211,7 +211,7 @@ func (d *Docker) gatherInfo(acc telegraf.Accumulator) error {
 func (d *Docker) gatherContainer(
 container types.Container,
-acc telegraf.Accumulator,
+acc plugins.Accumulator,
 ) error {
 var v *types.StatsJSON
 // Parse container name
@@ -221,14 +221,18 @@ func (d *Docker) gatherContainer(
 cname = strings.TrimPrefix(container.Names[0], "/")
 }
-// the image name sometimes has a version part.
-// ie, rabbitmq:3-management
-imageParts := strings.Split(container.Image, ":")
-imageName := imageParts[0]
+// the image name sometimes has a version part, or a private repo
+// ie, rabbitmq:3-management or docker.someco.net:4443/rabbitmq:3-management
+imageName := ""
 imageVersion := "unknown"
-if len(imageParts) > 1 {
-imageVersion = imageParts[1]
+i := strings.LastIndex(container.Image, ":") // index of last ':' character
+if i > -1 {
+imageVersion = container.Image[i+1:]
+imageName = container.Image[:i]
+} else {
+imageName = container.Image
 }
 tags := map[string]string{
 "engine_host": d.engine_host,
 "container_name": cname,
@@ -268,7 +272,7 @@ func (d *Docker) gatherContainer(
 func gatherContainerStats(
 stat *types.StatsJSON,
-acc telegraf.Accumulator,
+acc plugins.Accumulator,
 tags map[string]string,
 id string,
 perDevice bool,
@@ -364,11 +368,22 @@ func gatherContainerStats(
 if field == "container_id" {
 continue
 }
+var uintV uint64
+switch v := value.(type) {
+case uint64:
+uintV = v
+case int64:
+uintV = uint64(v)
+default:
+continue
+}
 _, ok := totalNetworkStatMap[field]
 if ok {
-totalNetworkStatMap[field] = totalNetworkStatMap[field].(uint64) + value.(uint64)
+totalNetworkStatMap[field] = totalNetworkStatMap[field].(uint64) + uintV
 } else {
-totalNetworkStatMap[field] = value
+totalNetworkStatMap[field] = uintV
 }
 }
 }
@@ -407,7 +422,7 @@ func calculateCPUPercent(stat *types.StatsJSON) float64 {
 func gatherBlockIOMetrics(
 stat *types.StatsJSON,
-acc telegraf.Accumulator,
+acc plugins.Accumulator,
 tags map[string]string,
 now time.Time,
 id string,
@@ -487,11 +502,22 @@ func gatherBlockIOMetrics(
 if field == "container_id" {
 continue
 }
+var uintV uint64
+switch v := value.(type) {
+case uint64:
+uintV = v
+case int64:
+uintV = uint64(v)
+default:
+continue
+}
 _, ok := totalStatMap[field]
 if ok {
-totalStatMap[field] = totalStatMap[field].(uint64) + value.(uint64)
+totalStatMap[field] = totalStatMap[field].(uint64) + uintV
 } else {
-totalStatMap[field] = value
+totalStatMap[field] = uintV
 }
 }
 }
@@ -543,7 +569,7 @@ func parseSize(sizeStr string) (int64, error) {
 }
 func init() {
-inputs.Add("docker", func() telegraf.Input {
+inputs.Add("docker", func() plugins.Input {
 return &Docker{
 PerDevice: true,
 Timeout: internal.Duration{Duration: time.Second * 5},


@@ -340,7 +340,7 @@ func (d FakeDockerClient) ContainerList(octx context.Context, options types.Cont
 container2 := types.Container{
 ID: "b7dfbb9478a6ae55e237d4d74f8bbb753f0817192b5081334dc78476296e2173",
 Names: []string{"/etcd2"},
-Image: "quay.io/coreos/etcd:v2.2.2",
+Image: "quay.io:4443/coreos/etcd:v2.2.2",
 Command: "/etcd -name etcd2 -advertise-client-urls http://localhost:2379 -listen-client-urls http://0.0.0.0:2379",
 Created: 1455941933,
 Status: "Up 4 hours",
@@ -429,7 +429,7 @@ func TestDockerGatherInfo(t *testing.T) {
 },
 map[string]string{
 "container_name": "etcd2",
-"container_image": "quay.io/coreos/etcd",
+"container_image": "quay.io:4443/coreos/etcd",
 "cpu": "cpu3",
 "container_version": "v2.2.2",
 "engine_host": "absol",
@@ -477,7 +477,7 @@ func TestDockerGatherInfo(t *testing.T) {
 map[string]string{
 "engine_host": "absol",
 "container_name": "etcd2",
-"container_image": "quay.io/coreos/etcd",
+"container_image": "quay.io:4443/coreos/etcd",
 "container_version": "v2.2.2",
 },
 )


@@ -11,7 +11,7 @@ import (
 "sync"
 "time"
-"github.com/influxdata/telegraf"
+"github.com/influxdata/telegraf/plugins"
 "github.com/influxdata/telegraf/internal/errchan"
 "github.com/influxdata/telegraf/plugins/inputs"
 )
@@ -51,7 +51,7 @@ func (d *Dovecot) SampleConfig() string { return sampleConfig }
 const defaultPort = "24242"
 // Reads stats from all configured servers.
-func (d *Dovecot) Gather(acc telegraf.Accumulator) error {
+func (d *Dovecot) Gather(acc plugins.Accumulator) error {
 if !validQuery[d.Type] {
 return fmt.Errorf("Error: %s is not a valid query type\n",
 d.Type)
@@ -81,7 +81,7 @@ func (d *Dovecot) Gather(acc telegraf.Accumulator) error {
 return errChan.Error()
 }
-func (d *Dovecot) gatherServer(addr string, acc telegraf.Accumulator, qtype string, filter string) error {
+func (d *Dovecot) gatherServer(addr string, acc plugins.Accumulator, qtype string, filter string) error {
 _, _, err := net.SplitHostPort(addr)
 if err != nil {
 return fmt.Errorf("Error: %s on url %s\n", err, addr)
@@ -111,7 +111,7 @@ func (d *Dovecot) gatherServer(addr string, acc telegraf.Accumulator, qtype stri
 return gatherStats(&buf, acc, host, qtype)
 }
-func gatherStats(buf *bytes.Buffer, acc telegraf.Accumulator, host string, qtype string) error {
+func gatherStats(buf *bytes.Buffer, acc plugins.Accumulator, host string, qtype string) error {
 lines := strings.Split(buf.String(), "\n")
 head := strings.Split(lines[0], "\t")
@@ -183,7 +183,7 @@ func secParser(tm string) float64 {
 }
 func init() {
-inputs.Add("dovecot", func() telegraf.Input {
+inputs.Add("dovecot", func() plugins.Input {
 return &Dovecot{}
 })
 }


@@ -2,7 +2,8 @@
 The [elasticsearch](https://www.elastic.co/) plugin queries endpoints to obtain
 [node](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html)
-and optionally [cluster](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html) stats.
+and optionally [cluster-health](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html)
+or [cluster-stats](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-stats.html) metrics.
 ### Configuration:
@@ -14,13 +15,18 @@ and optionally [cluster](https://www.elastic.co/guide/en/elasticsearch/reference
 ## Timeout for HTTP requests to the elastic search server(s)
 http_timeout = "5s"
-## set local to false when you want to read the indices stats from all nodes
-## within the cluster
+## When local is true (the default), the node will read only its own stats.
+## Set local to false when you want to read the node stats from all nodes
+## of the cluster.
 local = true
-## set cluster_health to true when you want to also obtain cluster level stats
+## Set cluster_health to true when you want to also obtain cluster health stats
 cluster_health = false
+## Set cluster_stats to true when you want to obtain cluster stats from the
+## Master node.
+cluster_stats = false
 ## Optional SSL Config
 # ssl_ca = "/etc/telegraf/ca.pem"
 # ssl_cert = "/etc/telegraf/cert.pem"


@@ -4,21 +4,27 @@ import (
 "encoding/json"
 "fmt"
 "net/http"
+"regexp"
 "sync"
 "time"
-"github.com/influxdata/telegraf"
+"github.com/influxdata/telegraf/plugins"
 "github.com/influxdata/telegraf/internal"
 "github.com/influxdata/telegraf/internal/errchan"
 "github.com/influxdata/telegraf/plugins/inputs"
 jsonparser "github.com/influxdata/telegraf/plugins/parsers/json"
+"io/ioutil"
+"strings"
 )
+// mask for masking username/password from error messages
+var mask = regexp.MustCompile(`https?:\/\/\S+:\S+@`)
+// Nodestats are always generated, so simply define a constant for these endpoints
 const statsPath = "/_nodes/stats"
 const statsPathLocal = "/_nodes/_local/stats"
-const healthPath = "/_cluster/health"
-type node struct {
+type nodeStat struct {
 Host string `json:"host"`
 Name string `json:"name"`
 Attributes map[string]string `json:"attributes"`
@@ -58,6 +64,20 @@ type indexHealth struct {
 UnassignedShards int `json:"unassigned_shards"`
 }
+type clusterStats struct {
+NodeName string `json:"node_name"`
+ClusterName string `json:"cluster_name"`
+Status string `json:"status"`
+Indices interface{} `json:"indices"`
+Nodes interface{} `json:"nodes"`
+}
+type catMaster struct {
+NodeID string `json:"id"`
+NodeIP string `json:"ip"`
+NodeName string `json:"node"`
+}
 const sampleConfig = `
 ## specify a list of one or more Elasticsearch servers
 # you can add username and password to your url to use basic authentication:
@@ -67,13 +87,18 @@ const sampleConfig = `
 ## Timeout for HTTP requests to the elastic search server(s)
 http_timeout = "5s"
-## set local to false when you want to read the indices stats from all nodes
-## within the cluster
+## When local is true (the default), the node will read only its own stats.
+## Set local to false when you want to read the node stats from all nodes
+## of the cluster.
 local = true
-## set cluster_health to true when you want to also obtain cluster level stats
+## Set cluster_health to true when you want to also obtain cluster health stats
 cluster_health = false
+## Set cluster_stats to true when you want to also obtain cluster stats from the
+## Master node.
+cluster_stats = false
 ## Optional SSL Config
 # ssl_ca = "/etc/telegraf/ca.pem"
 # ssl_cert = "/etc/telegraf/cert.pem"
@@ -85,15 +110,18 @@ const sampleConfig = `
 // Elasticsearch is a plugin to read stats from one or many Elasticsearch
 // servers.
 type Elasticsearch struct {
 Local bool
 Servers []string
 HttpTimeout internal.Duration
 ClusterHealth bool
+ClusterStats bool
 SSLCA string `toml:"ssl_ca"` // Path to CA file
 SSLCert string `toml:"ssl_cert"` // Path to host cert file
 SSLKey string `toml:"ssl_key"` // Path to cert key file
 InsecureSkipVerify bool // Use SSL but skip chain & host verification
 client *http.Client
+catMasterResponseTokens []string
+isMaster bool
 }
 // NewElasticsearch return a new instance of Elasticsearch
@@ -115,7 +143,7 @@ func (e *Elasticsearch) Description() string {
 // Gather reads the stats from Elasticsearch and writes it to the
 // Accumulator.
-func (e *Elasticsearch) Gather(acc telegraf.Accumulator) error {
+func (e *Elasticsearch) Gather(acc plugins.Accumulator) error {
 if e.client == nil {
 client, err := e.createHttpClient()
@@ -125,12 +153,12 @@ func (e *Elasticsearch) Gather(acc telegraf.Accumulator) error {
 e.client = client
 }
-errChan := errchan.New(len(e.Servers))
+errChan := errchan.New(len(e.Servers) * 3)
 var wg sync.WaitGroup
 wg.Add(len(e.Servers))
 for _, serv := range e.Servers {
-go func(s string, acc telegraf.Accumulator) {
+go func(s string, acc plugins.Accumulator) {
 defer wg.Done()
 var url string
 if e.Local {
@@ -138,12 +166,36 @@ func (e *Elasticsearch) Gather(acc telegraf.Accumulator) error {
 } else {
 url = s + statsPath
 }
+e.isMaster = false
+if e.ClusterStats {
+// get cat/master information here so NodeStats can determine
+// whether this node is the Master
+e.setCatMaster(s + "/_cat/master")
+}
+// Always gather node states
 if err := e.gatherNodeStats(url, acc); err != nil {
+err = fmt.Errorf(mask.ReplaceAllString(err.Error(), "http(s)://XXX:XXX@"))
 errChan.C <- err
 return
 }
 if e.ClusterHealth {
-e.gatherClusterStats(fmt.Sprintf("%s/_cluster/health?level=indices", s), acc)
+url = s + "/_cluster/health?level=indices"
+if err := e.gatherClusterHealth(url, acc); err != nil {
+err = fmt.Errorf(mask.ReplaceAllString(err.Error(), "http(s)://XXX:XXX@"))
+errChan.C <- err
+return
+}
 }
+if e.ClusterStats && e.isMaster {
+if err := e.gatherClusterStats(s+"/_cluster/stats", acc); err != nil {
+err = fmt.Errorf(mask.ReplaceAllString(err.Error(), "http(s)://XXX:XXX@"))
+errChan.C <- err
+return
+}
+}
 }(serv, acc)
 }
@@ -169,14 +221,15 @@ func (e *Elasticsearch) createHttpClient() (*http.Client, error) {
 return client, nil
 }
-func (e *Elasticsearch) gatherNodeStats(url string, acc telegraf.Accumulator) error {
+func (e *Elasticsearch) gatherNodeStats(url string, acc plugins.Accumulator) error {
 nodeStats := &struct {
 ClusterName string `json:"cluster_name"`
-Nodes map[string]*node `json:"nodes"`
+Nodes map[string]*nodeStat `json:"nodes"`
 }{}
-if err := e.gatherData(url, nodeStats); err != nil {
+if err := e.gatherJsonData(url, nodeStats); err != nil {
 return err
 }
 for id, n := range nodeStats.Nodes {
 tags := map[string]string{
 "node_id": id,
@@ -185,6 +238,11 @@ func (e *Elasticsearch) gatherNodeStats(url string, acc telegraf.Accumulator) er
 "cluster_name": nodeStats.ClusterName,
 }
+if e.ClusterStats {
+// check for master
+e.isMaster = (id == e.catMasterResponseTokens[0])
+}
 for k, v := range n.Attributes {
 tags["node_attribute_"+k] = v
 }
@@ -204,6 +262,7 @@ func (e *Elasticsearch) gatherNodeStats(url string, acc telegraf.Accumulator) er
 now := time.Now()
 for p, s := range stats {
 f := jsonparser.JSONFlattener{}
+// parse Json, ignoring strings and bools
 err := f.FlattenJSON("", s)
 if err != nil {
 return err
@@ -214,31 +273,31 @@ func (e *Elasticsearch) gatherNodeStats(url string, acc telegraf.Accumulator) er
 return nil
 }
-func (e *Elasticsearch) gatherClusterStats(url string, acc telegraf.Accumulator) error {
-clusterStats := &clusterHealth{}
-if err := e.gatherData(url, clusterStats); err != nil {
+func (e *Elasticsearch) gatherClusterHealth(url string, acc plugins.Accumulator) error {
+healthStats := &clusterHealth{}
+if err := e.gatherJsonData(url, healthStats); err != nil {
 return err
 }
 measurementTime := time.Now()
 clusterFields := map[string]interface{}{
-"status": clusterStats.Status,
-"timed_out": clusterStats.TimedOut,
-"number_of_nodes": clusterStats.NumberOfNodes,
-"number_of_data_nodes": clusterStats.NumberOfDataNodes,
-"active_primary_shards": clusterStats.ActivePrimaryShards,
-"active_shards": clusterStats.ActiveShards,
-"relocating_shards": clusterStats.RelocatingShards,
-"initializing_shards": clusterStats.InitializingShards,
-"unassigned_shards": clusterStats.UnassignedShards,
+"status": healthStats.Status,
+"timed_out": healthStats.TimedOut,
+"number_of_nodes": healthStats.NumberOfNodes,
+"number_of_data_nodes": healthStats.NumberOfDataNodes,
+"active_primary_shards": healthStats.ActivePrimaryShards,
+"active_shards": healthStats.ActiveShards,
+"relocating_shards": healthStats.RelocatingShards,
+"initializing_shards": healthStats.InitializingShards,
+"unassigned_shards": healthStats.UnassignedShards,
 }
 acc.AddFields(
 "elasticsearch_cluster_health",
 clusterFields,
-map[string]string{"name": clusterStats.ClusterName},
+map[string]string{"name": healthStats.ClusterName},
 measurementTime,
 )
-for name, health := range clusterStats.Indices {
+for name, health := range healthStats.Indices {
 indexFields := map[string]interface{}{
 "status": health.Status,
 "number_of_shards": health.NumberOfShards,
@@ -259,7 +318,60 @@ func (e *Elasticsearch) gatherClusterStats(url string, acc telegraf.Accumulator)
 return nil
 }
-func (e *Elasticsearch) gatherData(url string, v interface{}) error {
+func (e *Elasticsearch) gatherClusterStats(url string, acc plugins.Accumulator) error {
+clusterStats := &clusterStats{}
+if err := e.gatherJsonData(url, clusterStats); err != nil {
+return err
+}
+now := time.Now()
+tags := map[string]string{
+"node_name": clusterStats.NodeName,
+"cluster_name": clusterStats.ClusterName,
+"status": clusterStats.Status,
+}
+stats := map[string]interface{}{
+"nodes": clusterStats.Nodes,
+"indices": clusterStats.Indices,
+}
+for p, s := range stats {
+f := jsonparser.JSONFlattener{}
+// parse json, including bools and strings
+err := f.FullFlattenJSON("", s, true, true)
+if err != nil {
+return err
+}
+acc.AddFields("elasticsearch_clusterstats_"+p, f.Fields, tags, now)
+}
+return nil
+}
+func (e *Elasticsearch) setCatMaster(url string) error {
+r, err := e.client.Get(url)
+if err != nil {
+return err
+}
+defer r.Body.Close()
+if r.StatusCode != http.StatusOK {
+// NOTE: we are not going to read/discard r.Body under the assumption we'd prefer
+// to let the underlying transport close the connection and re-establish a new one for
+// future calls.
+return fmt.Errorf("status-code %d, expected %d", r.StatusCode, http.StatusOK)
+}
+response, err := ioutil.ReadAll(r.Body)
+if err != nil {
+return err
+}
+e.catMasterResponseTokens = strings.Split(string(response), " ")
+return nil
+}
+func (e *Elasticsearch) gatherJsonData(url string, v interface{}) error {
 r, err := e.client.Get(url)
 if err != nil {
 return err
@@ -272,14 +384,16 @@ func (e *Elasticsearch) gatherData(url string, v interface{}) error {
 return fmt.Errorf("elasticsearch: API responded with status-code %d, expected %d",
 r.StatusCode, http.StatusOK)
 }
 if err = json.NewDecoder(r.Body).Decode(v); err != nil {
 return err
 }
 return nil
 }
 func init() {
-inputs.Add("elasticsearch", func() telegraf.Input {
+inputs.Add("elasticsearch", func() plugins.Input {
 return NewElasticsearch()
 })
 }

View File

@@ -8,6 +8,8 @@ import (
	"github.com/influxdata/telegraf/testutil"
	"fmt"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)
@@ -37,16 +39,13 @@ func (t *transportMock) RoundTrip(r *http.Request) (*http.Response, error) {
func (t *transportMock) CancelRequest(_ *http.Request) {
}
-func TestElasticsearch(t *testing.T) {
-	es := newElasticsearchWithClient()
-	es.Servers = []string{"http://example.com:9200"}
-	es.client.Transport = newTransportMock(http.StatusOK, statsResponse)
-	var acc testutil.Accumulator
-	if err := es.Gather(&acc); err != nil {
-		t.Fatal(err)
-	}
+func checkIsMaster(es *Elasticsearch, expected bool, t *testing.T) {
+	if es.isMaster != expected {
+		msg := fmt.Sprintf("IsMaster set incorrectly")
+		assert.Fail(t, msg)
+	}
+}
+func checkNodeStatsResult(t *testing.T, acc *testutil.Accumulator) {
	tags := map[string]string{
		"cluster_name":          "es-testcluster",
		"node_attribute_master": "true",
@@ -55,25 +54,55 @@ func TestElasticsearch(t *testing.T) {
		"node_host": "test",
	}
-	acc.AssertContainsTaggedFields(t, "elasticsearch_indices", indicesExpected, tags)
-	acc.AssertContainsTaggedFields(t, "elasticsearch_os", osExpected, tags)
-	acc.AssertContainsTaggedFields(t, "elasticsearch_process", processExpected, tags)
-	acc.AssertContainsTaggedFields(t, "elasticsearch_jvm", jvmExpected, tags)
-	acc.AssertContainsTaggedFields(t, "elasticsearch_thread_pool", threadPoolExpected, tags)
-	acc.AssertContainsTaggedFields(t, "elasticsearch_fs", fsExpected, tags)
-	acc.AssertContainsTaggedFields(t, "elasticsearch_transport", transportExpected, tags)
-	acc.AssertContainsTaggedFields(t, "elasticsearch_http", httpExpected, tags)
-	acc.AssertContainsTaggedFields(t, "elasticsearch_breakers", breakersExpected, tags)
+	acc.AssertContainsTaggedFields(t, "elasticsearch_indices", nodestatsIndicesExpected, tags)
+	acc.AssertContainsTaggedFields(t, "elasticsearch_os", nodestatsOsExpected, tags)
+	acc.AssertContainsTaggedFields(t, "elasticsearch_process", nodestatsProcessExpected, tags)
+	acc.AssertContainsTaggedFields(t, "elasticsearch_jvm", nodestatsJvmExpected, tags)
+	acc.AssertContainsTaggedFields(t, "elasticsearch_thread_pool", nodestatsThreadPoolExpected, tags)
+	acc.AssertContainsTaggedFields(t, "elasticsearch_fs", nodestatsFsExpected, tags)
+	acc.AssertContainsTaggedFields(t, "elasticsearch_transport", nodestatsTransportExpected, tags)
+	acc.AssertContainsTaggedFields(t, "elasticsearch_http", nodestatsHttpExpected, tags)
+	acc.AssertContainsTaggedFields(t, "elasticsearch_breakers", nodestatsBreakersExpected, tags)
}
-func TestGatherClusterStats(t *testing.T) {
+func TestGather(t *testing.T) {
es := newElasticsearchWithClient()
es.Servers = []string{"http://example.com:9200"}
es.client.Transport = newTransportMock(http.StatusOK, nodeStatsResponse)
var acc testutil.Accumulator
if err := es.Gather(&acc); err != nil {
t.Fatal(err)
}
checkIsMaster(es, false, t)
checkNodeStatsResult(t, &acc)
}
func TestGatherNodeStats(t *testing.T) {
es := newElasticsearchWithClient()
es.Servers = []string{"http://example.com:9200"}
es.client.Transport = newTransportMock(http.StatusOK, nodeStatsResponse)
var acc testutil.Accumulator
if err := es.gatherNodeStats("junk", &acc); err != nil {
t.Fatal(err)
}
checkIsMaster(es, false, t)
checkNodeStatsResult(t, &acc)
}
func TestGatherClusterHealth(t *testing.T) {
	es := newElasticsearchWithClient()
	es.Servers = []string{"http://example.com:9200"}
	es.ClusterHealth = true
-	es.client.Transport = newTransportMock(http.StatusOK, clusterResponse)
+	es.client.Transport = newTransportMock(http.StatusOK, clusterHealthResponse)
	var acc testutil.Accumulator
-	require.NoError(t, es.Gather(&acc))
+	require.NoError(t, es.gatherClusterHealth("junk", &acc))
	checkIsMaster(es, false, t)
	acc.AssertContainsTaggedFields(t, "elasticsearch_cluster_health",
		clusterHealthExpected,
@@ -88,6 +117,77 @@ func TestGatherClusterStats(t *testing.T) {
		map[string]string{"index": "v2"})
}
func TestGatherClusterStatsMaster(t *testing.T) {
// This needs multiple steps to replicate the multiple calls internally.
es := newElasticsearchWithClient()
es.ClusterStats = true
es.Servers = []string{"http://example.com:9200"}
// first get catMaster
es.client.Transport = newTransportMock(http.StatusOK, IsMasterResult)
require.NoError(t, es.setCatMaster("junk"))
IsMasterResultTokens := strings.Split(string(IsMasterResult), " ")
if es.catMasterResponseTokens[0] != IsMasterResultTokens[0] {
msg := fmt.Sprintf("catmaster is incorrect")
assert.Fail(t, msg)
}
// now get node status, which determines whether we're master
var acc testutil.Accumulator
es.Local = true
es.client.Transport = newTransportMock(http.StatusOK, nodeStatsResponse)
if err := es.gatherNodeStats("junk", &acc); err != nil {
t.Fatal(err)
}
checkIsMaster(es, true, t)
checkNodeStatsResult(t, &acc)
// now test the clusterstats method
es.client.Transport = newTransportMock(http.StatusOK, clusterStatsResponse)
require.NoError(t, es.gatherClusterStats("junk", &acc))
tags := map[string]string{
"cluster_name": "es-testcluster",
"node_name": "test.host.com",
"status": "red",
}
acc.AssertContainsTaggedFields(t, "elasticsearch_clusterstats_nodes", clusterstatsNodesExpected, tags)
acc.AssertContainsTaggedFields(t, "elasticsearch_clusterstats_indices", clusterstatsIndicesExpected, tags)
}
func TestGatherClusterStatsNonMaster(t *testing.T) {
// This needs multiple steps to replicate the multiple calls internally.
es := newElasticsearchWithClient()
es.ClusterStats = true
es.Servers = []string{"http://example.com:9200"}
// first get catMaster
es.client.Transport = newTransportMock(http.StatusOK, IsNotMasterResult)
require.NoError(t, es.setCatMaster("junk"))
IsNotMasterResultTokens := strings.Split(string(IsNotMasterResult), " ")
if es.catMasterResponseTokens[0] != IsNotMasterResultTokens[0] {
msg := fmt.Sprintf("catmaster is incorrect")
assert.Fail(t, msg)
}
// now get node status, which determines whether we're master
var acc testutil.Accumulator
es.Local = true
es.client.Transport = newTransportMock(http.StatusOK, nodeStatsResponse)
if err := es.gatherNodeStats("junk", &acc); err != nil {
t.Fatal(err)
}
// ensure flag is clear so Cluster Stats would not be done
checkIsMaster(es, false, t)
checkNodeStatsResult(t, &acc)
}
func newElasticsearchWithClient() *Elasticsearch {
	es := NewElasticsearch()
	es.client = &http.Client{}

View File

@@ -1,6 +1,6 @@
package elasticsearch
-const clusterResponse = `
+const clusterHealthResponse = `
{
	"cluster_name": "elasticsearch_telegraf",
	"status": "green",
@@ -71,7 +71,7 @@ var v2IndexExpected = map[string]interface{}{
	"unassigned_shards": 20,
}
-const statsResponse = `
+const nodeStatsResponse = `
{
	"cluster_name": "es-testcluster",
	"nodes": {
@@ -489,7 +489,7 @@ const statsResponse = `
}
`
-var indicesExpected = map[string]interface{}{
+var nodestatsIndicesExpected = map[string]interface{}{
	"id_cache_memory_size_in_bytes": float64(0),
	"completion_size_in_bytes":      float64(0),
	"suggest_total":                 float64(0),
@@ -561,7 +561,7 @@ var indicesExpected = map[string]interface{}{
	"segments_fixed_bit_set_memory_in_bytes": float64(0),
}
-var osExpected = map[string]interface{}{
+var nodestatsOsExpected = map[string]interface{}{
	"load_average_0": float64(0.01),
	"load_average_1": float64(0.04),
	"load_average_2": float64(0.05),
@@ -576,7 +576,7 @@ var osExpected = map[string]interface{}{
	"mem_used_in_bytes": float64(1621868544),
}
-var processExpected = map[string]interface{}{
+var nodestatsProcessExpected = map[string]interface{}{
	"mem_total_virtual_in_bytes": float64(4747890688),
	"timestamp":                  float64(1436460392945),
	"open_file_descriptors":      float64(160),
@@ -586,7 +586,7 @@ var processExpected = map[string]interface{}{
	"cpu_user_in_millis": float64(13610),
}
-var jvmExpected = map[string]interface{}{
+var nodestatsJvmExpected = map[string]interface{}{
	"timestamp":                  float64(1436460392945),
	"uptime_in_millis":           float64(202245),
	"mem_non_heap_used_in_bytes": float64(39634576),
@@ -621,7 +621,7 @@ var jvmExpected = map[string]interface{}{
	"buffer_pools_mapped_total_capacity_in_bytes": float64(0),
}
-var threadPoolExpected = map[string]interface{}{
+var nodestatsThreadPoolExpected = map[string]interface{}{
	"merge_threads": float64(6),
	"merge_queue":   float64(4),
	"merge_active":  float64(5),
@@ -726,7 +726,7 @@ var threadPoolExpected = map[string]interface{}{
	"flush_completed": float64(3),
}
-var fsExpected = map[string]interface{}{
+var nodestatsFsExpected = map[string]interface{}{
	"data_0_total_in_bytes":     float64(19507089408),
	"data_0_free_in_bytes":      float64(16909316096),
	"data_0_available_in_bytes": float64(15894814720),
@@ -736,7 +736,7 @@ var fsExpected = map[string]interface{}{
	"total_total_in_bytes": float64(19507089408),
}
-var transportExpected = map[string]interface{}{
+var nodestatsTransportExpected = map[string]interface{}{
	"server_open":      float64(13),
	"rx_count":         float64(6),
	"rx_size_in_bytes": float64(1380),
@@ -744,12 +744,12 @@ var transportExpected = map[string]interface{}{
	"tx_size_in_bytes": float64(1380),
}
-var httpExpected = map[string]interface{}{
+var nodestatsHttpExpected = map[string]interface{}{
	"current_open": float64(3),
	"total_opened": float64(3),
}
-var breakersExpected = map[string]interface{}{
+var nodestatsBreakersExpected = map[string]interface{}{
	"fielddata_estimated_size_in_bytes": float64(0),
	"fielddata_overhead":                float64(1.03),
	"fielddata_tripped":                 float64(0),
@@ -763,3 +763,273 @@ var breakersExpected = map[string]interface{}{
	"parent_limit_size_in_bytes":     float64(727213670),
	"parent_estimated_size_in_bytes": float64(0),
}
const clusterStatsResponse = `
{
"host":"ip-10-0-1-214",
"log_type":"metrics",
"timestamp":1475767451229,
"log_level":"INFO",
"node_name":"test.host.com",
"cluster_name":"es-testcluster",
"status":"red",
"indices":{
"count":1,
"shards":{
"total":4,
"primaries":4,
"replication":0.0,
"index":{
"shards":{
"min":4,
"max":4,
"avg":4.0
},
"primaries":{
"min":4,
"max":4,
"avg":4.0
},
"replication":{
"min":0.0,
"max":0.0,
"avg":0.0
}
}
},
"docs":{
"count":4,
"deleted":0
},
"store":{
"size_in_bytes":17084,
"throttle_time_in_millis":0
},
"fielddata":{
"memory_size_in_bytes":0,
"evictions":0
},
"query_cache":{
"memory_size_in_bytes":0,
"total_count":0,
"hit_count":0,
"miss_count":0,
"cache_size":0,
"cache_count":0,
"evictions":0
},
"completion":{
"size_in_bytes":0
},
"segments":{
"count":4,
"memory_in_bytes":11828,
"terms_memory_in_bytes":8932,
"stored_fields_memory_in_bytes":1248,
"term_vectors_memory_in_bytes":0,
"norms_memory_in_bytes":1280,
"doc_values_memory_in_bytes":368,
"index_writer_memory_in_bytes":0,
"index_writer_max_memory_in_bytes":2048000,
"version_map_memory_in_bytes":0,
"fixed_bit_set_memory_in_bytes":0
},
"percolate":{
"total":0,
"time_in_millis":0,
"current":0,
"memory_size_in_bytes":-1,
"memory_size":"-1b",
"queries":0
}
},
"nodes":{
"count":{
"total":1,
"master_only":0,
"data_only":0,
"master_data":1,
"client":0
},
"versions":[
{
"version": "2.3.3"
}
],
"os":{
"available_processors":1,
"allocated_processors":1,
"mem":{
"total_in_bytes":593301504
},
"names":[
{
"name":"Linux",
"count":1
}
]
},
"process":{
"cpu":{
"percent":0
},
"open_file_descriptors":{
"min":145,
"max":145,
"avg":145
}
},
"jvm":{
"max_uptime_in_millis":11580527,
"versions":[
{
"version":"1.8.0_101",
"vm_name":"OpenJDK 64-Bit Server VM",
"vm_version":"25.101-b13",
"vm_vendor":"Oracle Corporation",
"count":1
}
],
"mem":{
"heap_used_in_bytes":70550288,
"heap_max_in_bytes":1065025536
},
"threads":30
},
"fs":{
"total_in_bytes":8318783488,
"free_in_bytes":6447439872,
"available_in_bytes":6344785920
},
"plugins":[
{
"name":"cloud-aws",
"version":"2.3.3",
"description":"The Amazon Web Service (AWS) Cloud plugin allows to use AWS API for the unicast discovery mechanism and add S3 repositories.",
"jvm":true,
"classname":"org.elasticsearch.plugin.cloud.aws.CloudAwsPlugin",
"isolated":true,
"site":false
},
{
"name":"kopf",
"version":"2.0.1",
"description":"kopf - simple web administration tool for Elasticsearch",
"url":"/_plugin/kopf/",
"jvm":false,
"site":true
},
{
"name":"tr-metrics",
"version":"7bd5b4b",
"description":"Logs cluster and node stats for performance monitoring.",
"jvm":true,
"classname":"com.trgr.elasticsearch.plugin.metrics.MetricsPlugin",
"isolated":true,
"site":false
}
]
}
}
`
var clusterstatsIndicesExpected = map[string]interface{}{
"completion_size_in_bytes": float64(0),
"count": float64(1),
"docs_count": float64(4),
"docs_deleted": float64(0),
"fielddata_evictions": float64(0),
"fielddata_memory_size_in_bytes": float64(0),
"percolate_current": float64(0),
"percolate_memory_size_in_bytes": float64(-1),
"percolate_queries": float64(0),
"percolate_time_in_millis": float64(0),
"percolate_total": float64(0),
"percolate_memory_size": "-1b",
"query_cache_cache_count": float64(0),
"query_cache_cache_size": float64(0),
"query_cache_evictions": float64(0),
"query_cache_hit_count": float64(0),
"query_cache_memory_size_in_bytes": float64(0),
"query_cache_miss_count": float64(0),
"query_cache_total_count": float64(0),
"segments_count": float64(4),
"segments_doc_values_memory_in_bytes": float64(368),
"segments_fixed_bit_set_memory_in_bytes": float64(0),
"segments_index_writer_max_memory_in_bytes": float64(2.048e+06),
"segments_index_writer_memory_in_bytes": float64(0),
"segments_memory_in_bytes": float64(11828),
"segments_norms_memory_in_bytes": float64(1280),
"segments_stored_fields_memory_in_bytes": float64(1248),
"segments_term_vectors_memory_in_bytes": float64(0),
"segments_terms_memory_in_bytes": float64(8932),
"segments_version_map_memory_in_bytes": float64(0),
"shards_index_primaries_avg": float64(4),
"shards_index_primaries_max": float64(4),
"shards_index_primaries_min": float64(4),
"shards_index_replication_avg": float64(0),
"shards_index_replication_max": float64(0),
"shards_index_replication_min": float64(0),
"shards_index_shards_avg": float64(4),
"shards_index_shards_max": float64(4),
"shards_index_shards_min": float64(4),
"shards_primaries": float64(4),
"shards_replication": float64(0),
"shards_total": float64(4),
"store_size_in_bytes": float64(17084),
"store_throttle_time_in_millis": float64(0),
}
var clusterstatsNodesExpected = map[string]interface{}{
"count_client": float64(0),
"count_data_only": float64(0),
"count_master_data": float64(1),
"count_master_only": float64(0),
"count_total": float64(1),
"fs_available_in_bytes": float64(6.34478592e+09),
"fs_free_in_bytes": float64(6.447439872e+09),
"fs_total_in_bytes": float64(8.318783488e+09),
"jvm_max_uptime_in_millis": float64(1.1580527e+07),
"jvm_mem_heap_max_in_bytes": float64(1.065025536e+09),
"jvm_mem_heap_used_in_bytes": float64(7.0550288e+07),
"jvm_threads": float64(30),
"jvm_versions_0_count": float64(1),
"jvm_versions_0_version": "1.8.0_101",
"jvm_versions_0_vm_name": "OpenJDK 64-Bit Server VM",
"jvm_versions_0_vm_vendor": "Oracle Corporation",
"jvm_versions_0_vm_version": "25.101-b13",
"os_allocated_processors": float64(1),
"os_available_processors": float64(1),
"os_mem_total_in_bytes": float64(5.93301504e+08),
"os_names_0_count": float64(1),
"os_names_0_name": "Linux",
"process_cpu_percent": float64(0),
"process_open_file_descriptors_avg": float64(145),
"process_open_file_descriptors_max": float64(145),
"process_open_file_descriptors_min": float64(145),
"versions_0_version": "2.3.3",
"plugins_0_classname": "org.elasticsearch.plugin.cloud.aws.CloudAwsPlugin",
"plugins_0_description": "The Amazon Web Service (AWS) Cloud plugin allows to use AWS API for the unicast discovery mechanism and add S3 repositories.",
"plugins_0_isolated": true,
"plugins_0_jvm": true,
"plugins_0_name": "cloud-aws",
"plugins_0_site": false,
"plugins_0_version": "2.3.3",
"plugins_1_description": "kopf - simple web administration tool for Elasticsearch",
"plugins_1_jvm": false,
"plugins_1_name": "kopf",
"plugins_1_site": true,
"plugins_1_url": "/_plugin/kopf/",
"plugins_1_version": "2.0.1",
"plugins_2_classname": "com.trgr.elasticsearch.plugin.metrics.MetricsPlugin",
"plugins_2_description": "Logs cluster and node stats for performance monitoring.",
"plugins_2_isolated": true,
"plugins_2_jvm": true,
"plugins_2_name": "tr-metrics",
"plugins_2_site": false,
"plugins_2_version": "7bd5b4b",
}
const IsMasterResult = "SDFsfSDFsdfFSDSDfSFDSDF 10.206.124.66 10.206.124.66 test.host.com "
const IsNotMasterResult = "junk 10.206.124.66 10.206.124.66 test.junk.com "

View File

@@ -13,7 +13,7 @@ import (
	"github.com/kballard/go-shellquote"
-	"github.com/influxdata/telegraf"
+	"github.com/influxdata/telegraf/plugins"
	"github.com/influxdata/telegraf/internal"
	"github.com/influxdata/telegraf/internal/errchan"
	"github.com/influxdata/telegraf/plugins/inputs"
@@ -61,12 +61,12 @@ func NewExec() *Exec {
}
type Runner interface {
-	Run(*Exec, string, telegraf.Accumulator) ([]byte, error)
+	Run(*Exec, string, plugins.Accumulator) ([]byte, error)
}
type CommandRunner struct{}
-func AddNagiosState(exitCode error, acc telegraf.Accumulator) error {
+func AddNagiosState(exitCode error, acc plugins.Accumulator) error {
	nagiosState := 0
	if exitCode != nil {
		exiterr, ok := exitCode.(*exec.ExitError)
@@ -89,7 +89,7 @@ func AddNagiosState(exitCode error, acc telegraf.Accumulator) error {
func (c CommandRunner) Run(
	e *Exec,
	command string,
-	acc telegraf.Accumulator,
+	acc plugins.Accumulator,
) ([]byte, error) {
	split_cmd, err := shellquote.Split(command)
	if err != nil || len(split_cmd) == 0 {
@@ -145,7 +145,7 @@ func removeCarriageReturns(b bytes.Buffer) bytes.Buffer {
}
-func (e *Exec) ProcessCommand(command string, acc telegraf.Accumulator, wg *sync.WaitGroup) {
+func (e *Exec) ProcessCommand(command string, acc plugins.Accumulator, wg *sync.WaitGroup) {
	defer wg.Done()
	out, err := e.runner.Run(e, command, acc)
@@ -176,7 +176,7 @@ func (e *Exec) SetParser(parser parsers.Parser) {
	e.parser = parser
}
-func (e *Exec) Gather(acc telegraf.Accumulator) error {
+func (e *Exec) Gather(acc plugins.Accumulator) error {
	var wg sync.WaitGroup
	// Legacy single command support
	if e.Command != "" {
@@ -226,7 +226,7 @@ func (e *Exec) Gather(acc telegraf.Accumulator) error {
}
func init() {
-	inputs.Add("exec", func() telegraf.Input {
+	inputs.Add("exec", func() plugins.Input {
		return NewExec()
	})
}

View File

@@ -6,7 +6,7 @@ import (
"runtime" "runtime"
"testing" "testing"
-	"github.com/influxdata/telegraf"
+	"github.com/influxdata/telegraf/plugins"
	"github.com/influxdata/telegraf/plugins/parsers"
	"github.com/influxdata/telegraf/testutil"
@@ -83,7 +83,7 @@ func newRunnerMock(out []byte, err error) Runner {
	}
}
-func (r runnerMock) Run(e *Exec, command string, acc telegraf.Accumulator) ([]byte, error) {
+func (r runnerMock) Run(e *Exec, command string, acc plugins.Accumulator) ([]byte, error) {
	if r.err != nil {
		return nil, r.err
	}

View File

@@ -4,9 +4,10 @@ import (
	"crypto/md5"
	"fmt"
	"io"
	"log"
	"os"
-	"github.com/influxdata/telegraf"
+	"github.com/influxdata/telegraf/plugins"
	"github.com/influxdata/telegraf/internal/globpath"
	"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -46,7 +47,7 @@ func (_ *FileStat) Description() string {
func (_ *FileStat) SampleConfig() string { return sampleConfig }
-func (f *FileStat) Gather(acc telegraf.Accumulator) error {
+func (f *FileStat) Gather(acc plugins.Accumulator) error {
	var errS string
	var err error
@@ -78,8 +79,14 @@ func (f *FileStat) Gather(acc telegraf.Accumulator) error {
			"file": fileName,
		}
		fields := map[string]interface{}{
			"exists": int64(1),
-			"size_bytes": fileInfo.Size(),
		}
+		if fileInfo == nil {
+			log.Printf("E! Unable to get info for file [%s], possible permissions issue",
+				fileName)
+		} else {
+			fields["size_bytes"] = fileInfo.Size()
+		}
		if f.Md5 {
@@ -119,7 +126,7 @@ func getMd5(file string) (string, error) {
}
func init() {
-	inputs.Add("filestat", func() telegraf.Input {
+	inputs.Add("filestat", func() plugins.Input {
		return NewFileStat()
	})
}

View File

@@ -14,7 +14,7 @@ import (
"sync" "sync"
"time" "time"
-	"github.com/influxdata/telegraf"
+	"github.com/influxdata/telegraf/plugins"
	"github.com/influxdata/telegraf/internal"
	"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -129,7 +129,7 @@ func (h *GrayLog) Description() string {
}
// Gathers data for all servers.
-func (h *GrayLog) Gather(acc telegraf.Accumulator) error {
+func (h *GrayLog) Gather(acc plugins.Accumulator) error {
	var wg sync.WaitGroup
	if h.client.HTTPClient() == nil {
@@ -178,14 +178,14 @@ func (h *GrayLog) Gather(acc telegraf.Accumulator) error {
// Gathers data from a particular server
// Parameters:
-//     acc      : The telegraf Accumulator to use
+//     acc      : The plugins.Accumulator to use
//     serverURL: endpoint to send request to
//     service  : the service being queried
//
// Returns:
//     error: Any error that may have occurred
func (h *GrayLog) gatherServer(
-	acc telegraf.Accumulator,
+	acc plugins.Accumulator,
	serverURL string,
) error {
	resp, _, err := h.sendRequest(serverURL)
@@ -304,7 +304,7 @@ func (h *GrayLog) sendRequest(serverURL string) (string, float64, error) {
}
func init() {
-	inputs.Add("graylog", func() telegraf.Input {
+	inputs.Add("graylog", func() plugins.Input {
		return &GrayLog{
			client: &RealHTTPClient{},
		}

View File

@@ -13,7 +13,7 @@ import (
"sync" "sync"
"time" "time"
-	"github.com/influxdata/telegraf"
+	"github.com/influxdata/telegraf/plugins"
	"github.com/influxdata/telegraf/internal/errchan"
	"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -115,7 +115,7 @@ func (r *haproxy) Description() string {
// Reads stats from all configured servers accumulates stats.
// Returns one of the errors encountered while gather stats (if any).
-func (g *haproxy) Gather(acc telegraf.Accumulator) error {
+func (g *haproxy) Gather(acc plugins.Accumulator) error {
	if len(g.Servers) == 0 {
		return g.gatherServer("http://127.0.0.1:1936/haproxy?stats", acc)
	}
@@ -160,7 +160,7 @@ func (g *haproxy) Gather(acc telegraf.Accumulator) error {
	return errChan.Error()
}
-func (g *haproxy) gatherServerSocket(addr string, acc telegraf.Accumulator) error {
+func (g *haproxy) gatherServerSocket(addr string, acc plugins.Accumulator) error {
	socketPath := getSocketAddr(addr)
	c, err := net.Dial("unix", socketPath)
@@ -178,7 +178,7 @@ func (g *haproxy) gatherServerSocket(addr string, acc telegraf.Accumulator) erro
	return importCsvResult(c, acc, socketPath)
}
-func (g *haproxy) gatherServer(addr string, acc telegraf.Accumulator) error {
+func (g *haproxy) gatherServer(addr string, acc plugins.Accumulator) error {
	if !strings.HasPrefix(addr, "http") {
		return g.gatherServerSocket(addr, acc)
	}
@@ -229,7 +229,7 @@ func getSocketAddr(sock string) string {
	}
}
-func importCsvResult(r io.Reader, acc telegraf.Accumulator, host string) error {
+func importCsvResult(r io.Reader, acc plugins.Accumulator, host string) error {
	csv := csv.NewReader(r)
	result, err := csv.ReadAll()
	now := time.Now()
@@ -263,6 +263,11 @@ func importCsvResult(r io.Reader, acc telegraf.Accumulator, host string) error {
			if err == nil {
				fields["smax"] = ival
			}
+		case HF_SLIM:
+			ival, err := strconv.ParseUint(v, 10, 64)
+			if err == nil {
+				fields["slim"] = ival
+			}
		case HF_STOT:
			ival, err := strconv.ParseUint(v, 10, 64)
			if err == nil {
@@ -431,7 +436,7 @@ func importCsvResult(r io.Reader, acc telegraf.Accumulator, host string) error {
}
func init() {
-	inputs.Add("haproxy", func() telegraf.Input {
+	inputs.Add("haproxy", func() plugins.Input {
		return &haproxy{}
	})
}

View File

@@ -198,6 +198,7 @@ func HaproxyGetFieldValues() map[string]interface{} {
	"rtime":     uint64(312),
	"scur":      uint64(1),
	"smax":      uint64(32),
	"slim":      uint64(32),
	"srv_abort": uint64(1),
	"stot":      uint64(171014),
	"ttime":     uint64(2341),
@@ -223,6 +224,6 @@ be_static,host1,0,0,0,1,,28,7873,1209688,,0,,0,0,0,0,UP,1,1,0,0,0,70698,0,,2,18,
be_static,host2,0,0,0,1,,28,13830,1085929,,0,,0,0,0,0,UP,1,1,0,0,0,70698,0,,2,18,9,,28,,2,0,,1,L4OK,,0,0,19,6,3,0,0,0,,,,0,0,,,,,338,,,0,1,1,38,
be_static,host3,0,0,0,1,,28,17959,1259760,,0,,0,0,0,0,UP,1,1,0,0,0,70698,0,,2,18,10,,28,,2,0,,1,L4OK,,1,0,20,6,2,0,0,0,,,,0,0,,,,,92,,,0,1,1,17,
be_static,BACKEND,0,0,0,2,200,307,160276,13322728,0,0,,0,0,0,0,UP,11,11,0,,0,70698,0,,2,18,0,,307,,1,0,,4,,,,0,205,73,29,0,0,,,,,0,0,0,0,0,0,92,,,0,1,3,381,
-be_app,host0,0,0,1,32,,171014,510913516,2193856571,,0,,0,1,1,0,UP,100,1,0,1,0,70698,0,,2,19,1,,171013,,2,3,,12,L7OK,301,10,0,119534,48051,2345,1056,0,0,,,,73,1,,,,,0,Moved Permanently,,0,2,312,2341,
+be_app,host0,0,0,1,32,32,171014,510913516,2193856571,,0,,0,1,1,0,UP,100,1,0,1,0,70698,0,,2,19,1,,171013,,2,3,,12,L7OK,301,10,0,119534,48051,2345,1056,0,0,,,,73,1,,,,,0,Moved Permanently,,0,2,312,2341,
-be_app,host4,0,0,2,29,,171013,499318742,2195595896,12,34,,0,2,0,0,UP,100,1,0,2,0,70698,0,,2,19,2,,171013,,2,3,,12,L7OK,301,12,0,119572,47882,2441,1088,0,0,,,,84,2,,,,,0,Moved Permanently,,0,2,316,2355,
+be_app,host4,0,0,2,29,32,171013,499318742,2195595896,12,34,,0,2,0,0,UP,100,1,0,2,0,70698,0,,2,19,2,,171013,,2,3,,12,L7OK,301,12,0,119572,47882,2441,1088,0,0,,,,84,2,,,,,0,Moved Permanently,,0,2,316,2355,
`

View File

@@ -8,7 +8,7 @@ Hddtemp should be installed and its daemon running
## Configuration
-```
+```toml
[[inputs.hddtemp]]
## By default, telegraf gathers temps data from all disks detected by the
## hddtemp.
@@ -20,3 +20,24 @@ Hddtemp should be installed and its daemon running
# address = "127.0.0.1:7634"
# devices = ["sda", "*"]
```
## Measurements
- hddtemp
- temperature
Tags:
- device
- model
- unit
- status
## Example output
```
> hddtemp,unit=C,status=,host=server1,device=sdb,model=WDC\ WD740GD-00FLA1 temperature=43i 1481655647000000000
> hddtemp,device=sdc,model=SAMSUNG\ HD103UI,unit=C,status=,host=server1 temperature=38i 148165564700000000
> hddtemp,device=sdd,model=SAMSUNG\ HD103UI,unit=C,status=,host=server1 temperature=36i 1481655647000000000
```

View File

@@ -8,7 +8,7 @@ import (
"strings" "strings"
) )
-type disk struct {
+type Disk struct {
	DeviceName  string
	Model       string
	Temperature int32
@@ -16,12 +16,19 @@ type disk struct {
	Status      string
}
-func Fetch(address string) ([]disk, error) {
+type hddtemp struct {
+}
+func New() *hddtemp {
+	return &hddtemp{}
+}
+func (h *hddtemp) Fetch(address string) ([]Disk, error) {
	var (
		err    error
		conn   net.Conn
		buffer bytes.Buffer
-		disks  []disk
+		disks  []Disk
	)
	if conn, err = net.Dial("tcp", address); err != nil {
@@ -48,7 +55,7 @@ func Fetch(address string) ([]disk, error) {
			status = temperatureField
		}
-		disks = append(disks, disk{
+		disks = append(disks, Disk{
			DeviceName:  device,
			Model:       fields[offset+2],
			Temperature: int32(temperature),

View File

@@ -10,13 +10,13 @@ func TestFetch(t *testing.T) {
l := serve(t, []byte("|/dev/sda|foobar|36|C|")) l := serve(t, []byte("|/dev/sda|foobar|36|C|"))
defer l.Close() defer l.Close()
disks, err := Fetch(l.Addr().String()) disks, err := New().Fetch(l.Addr().String())
if err != nil { if err != nil {
t.Error("expecting err to be nil") t.Error("expecting err to be nil")
} }
expected := []disk{ expected := []Disk{
{ {
DeviceName: "sda", DeviceName: "sda",
Model: "foobar", Model: "foobar",
@@ -31,7 +31,7 @@ func TestFetch(t *testing.T) {
} }
func TestFetchWrongAddress(t *testing.T) { func TestFetchWrongAddress(t *testing.T) {
_, err := Fetch("127.0.0.1:1") _, err := New().Fetch("127.0.0.1:1")
if err == nil { if err == nil {
t.Error("expecting err to be non-nil") t.Error("expecting err to be non-nil")
@@ -42,13 +42,13 @@ func TestFetchStatus(t *testing.T) {
l := serve(t, []byte("|/dev/sda|foobar|SLP|C|")) l := serve(t, []byte("|/dev/sda|foobar|SLP|C|"))
defer l.Close() defer l.Close()
disks, err := Fetch(l.Addr().String()) disks, err := New().Fetch(l.Addr().String())
if err != nil { if err != nil {
t.Error("expecting err to be nil") t.Error("expecting err to be nil")
} }
expected := []disk{ expected := []Disk{
{ {
DeviceName: "sda", DeviceName: "sda",
Model: "foobar", Model: "foobar",
@@ -67,13 +67,13 @@ func TestFetchTwoDisks(t *testing.T) {
l := serve(t, []byte("|/dev/hda|ST380011A|46|C||/dev/hdd|ST340016A|SLP|*|")) l := serve(t, []byte("|/dev/hda|ST380011A|46|C||/dev/hdd|ST340016A|SLP|*|"))
defer l.Close() defer l.Close()
disks, err := Fetch(l.Addr().String()) disks, err := New().Fetch(l.Addr().String())
if err != nil { if err != nil {
t.Error("expecting err to be nil") t.Error("expecting err to be nil")
} }
expected := []disk{ expected := []Disk{
{ {
DeviceName: "hda", DeviceName: "hda",
Model: "ST380011A", Model: "ST380011A",

View File

@@ -3,7 +3,7 @@
package hddtemp package hddtemp
import ( import (
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
gohddtemp "github.com/influxdata/telegraf/plugins/inputs/hddtemp/go-hddtemp" gohddtemp "github.com/influxdata/telegraf/plugins/inputs/hddtemp/go-hddtemp"
) )
@@ -13,6 +13,11 @@ const defaultAddress = "127.0.0.1:7634"
type HDDTemp struct { type HDDTemp struct {
Address string Address string
Devices []string Devices []string
fetcher Fetcher
}
type Fetcher interface {
Fetch(address string) ([]gohddtemp.Disk, error)
} }
func (_ *HDDTemp) Description() string { func (_ *HDDTemp) Description() string {
@@ -35,8 +40,11 @@ func (_ *HDDTemp) SampleConfig() string {
return hddtempSampleConfig return hddtempSampleConfig
} }
func (h *HDDTemp) Gather(acc telegraf.Accumulator) error { func (h *HDDTemp) Gather(acc plugins.Accumulator) error {
disks, err := gohddtemp.Fetch(h.Address) if h.fetcher == nil {
h.fetcher = gohddtemp.New()
}
disks, err := h.fetcher.Fetch(h.Address)
if err != nil { if err != nil {
return err return err
@@ -53,7 +61,7 @@ func (h *HDDTemp) Gather(acc telegraf.Accumulator) error {
} }
fields := map[string]interface{}{ fields := map[string]interface{}{
disk.DeviceName: disk.Temperature, "temperature": disk.Temperature,
} }
acc.AddFields("hddtemp", fields, tags) acc.AddFields("hddtemp", fields, tags)
@@ -65,7 +73,7 @@ func (h *HDDTemp) Gather(acc telegraf.Accumulator) error {
} }
func init() { func init() {
inputs.Add("hddtemp", func() telegraf.Input { inputs.Add("hddtemp", func() plugins.Input {
return &HDDTemp{ return &HDDTemp{
Address: defaultAddress, Address: defaultAddress,
Devices: []string{"*"}, Devices: []string{"*"},

View File

@@ -0,0 +1,80 @@
package hddtemp
import (
"testing"
hddtemp "github.com/influxdata/telegraf/plugins/inputs/hddtemp/go-hddtemp"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
type mockFetcher struct {
}
func (h *mockFetcher) Fetch(address string) ([]hddtemp.Disk, error) {
return []hddtemp.Disk{
hddtemp.Disk{
DeviceName: "Disk1",
Model: "Model1",
Temperature: 13,
Unit: "C",
},
hddtemp.Disk{
DeviceName: "Disk2",
Model: "Model2",
Temperature: 14,
Unit: "C",
},
}, nil
}
func newMockFetcher() *mockFetcher {
return &mockFetcher{}
}
func TestFetch(t *testing.T) {
hddtemp := &HDDTemp{
fetcher: newMockFetcher(),
Devices: []string{"*"},
}
acc := &testutil.Accumulator{}
err := hddtemp.Gather(acc)
require.NoError(t, err)
assert.Equal(t, acc.NFields(), 2)
var tests = []struct {
fields map[string]interface{}
tags map[string]string
}{
{
map[string]interface{}{
"temperature": int32(13),
},
map[string]string{
"device": "Disk1",
"model": "Model1",
"unit": "C",
"status": "",
},
},
{
map[string]interface{}{
"temperature": int32(14),
},
map[string]string{
"device": "Disk2",
"model": "Model2",
"unit": "C",
"status": "",
},
},
}
for _, test := range tests {
acc.AssertContainsTaggedFields(t, "hddtemp", test.fields, test.tags)
}
}

View File

@@ -10,7 +10,7 @@ import (
"sync" "sync"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal" "github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/parsers/influx" "github.com/influxdata/telegraf/plugins/parsers/influx"
@@ -42,7 +42,7 @@ type HTTPListener struct {
listener net.Listener listener net.Listener
parser influx.InfluxParser parser influx.InfluxParser
acc telegraf.Accumulator acc plugins.Accumulator
pool *pool pool *pool
BytesRecv selfstat.Stat BytesRecv selfstat.Stat
@@ -84,13 +84,13 @@ func (h *HTTPListener) Description() string {
return "Influx HTTP write listener" return "Influx HTTP write listener"
} }
func (h *HTTPListener) Gather(_ telegraf.Accumulator) error { func (h *HTTPListener) Gather(_ plugins.Accumulator) error {
h.BuffersCreated.Set(h.pool.ncreated()) h.BuffersCreated.Set(h.pool.ncreated())
return nil return nil
} }
// Start starts the http listener service. // Start starts the http listener service.
func (h *HTTPListener) Start(acc telegraf.Accumulator) error { func (h *HTTPListener) Start(acc plugins.Accumulator) error {
h.mu.Lock() h.mu.Lock()
defer h.mu.Unlock() defer h.mu.Unlock()
@@ -324,7 +324,7 @@ func badRequest(res http.ResponseWriter) {
} }
func init() { func init() {
inputs.Add("http_listener", func() telegraf.Input { inputs.Add("http_listener", func() plugins.Input {
return &HTTPListener{ return &HTTPListener{
ServiceAddress: ":8186", ServiceAddress: ":8186",
} }

View File

@@ -8,7 +8,7 @@ import (
"strings" "strings"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal" "github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
) )
@@ -141,7 +141,7 @@ func (h *HTTPResponse) HTTPGather() (map[string]interface{}, error) {
} }
// Gather gets all metric fields and tags and returns any errors it encounters // Gather gets all metric fields and tags and returns any errors it encounters
func (h *HTTPResponse) Gather(acc telegraf.Accumulator) error { func (h *HTTPResponse) Gather(acc plugins.Accumulator) error {
// Set default values // Set default values
if h.ResponseTimeout.Duration < time.Second { if h.ResponseTimeout.Duration < time.Second {
h.ResponseTimeout.Duration = time.Second * 5 h.ResponseTimeout.Duration = time.Second * 5
@@ -174,7 +174,7 @@ func (h *HTTPResponse) Gather(acc telegraf.Accumulator) error {
} }
func init() { func init() {
inputs.Add("http_response", func() telegraf.Input { inputs.Add("http_response", func() plugins.Input {
return &HTTPResponse{} return &HTTPResponse{}
}) })
} }

View File

@@ -10,7 +10,7 @@ import (
"sync" "sync"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal" "github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/parsers" "github.com/influxdata/telegraf/plugins/parsers"
@@ -120,7 +120,7 @@ func (h *HttpJson) Description() string {
} }
// Gathers data for all servers. // Gathers data for all servers.
func (h *HttpJson) Gather(acc telegraf.Accumulator) error { func (h *HttpJson) Gather(acc plugins.Accumulator) error {
var wg sync.WaitGroup var wg sync.WaitGroup
if h.client.HTTPClient() == nil { if h.client.HTTPClient() == nil {
@@ -169,14 +169,14 @@ func (h *HttpJson) Gather(acc telegraf.Accumulator) error {
// Gathers data from a particular server // Gathers data from a particular server
// Parameters: // Parameters:
// acc : The telegraf Accumulator to use // acc : The plugins.Accumulator to use
// serverURL: endpoint to send request to // serverURL: endpoint to send request to
// service : the service being queried // service : the service being queried
// //
// Returns: // Returns:
// error: Any error that may have occurred // error: Any error that may have occurred
func (h *HttpJson) gatherServer( func (h *HttpJson) gatherServer(
acc telegraf.Accumulator, acc plugins.Accumulator,
serverURL string, serverURL string,
) error { ) error {
resp, responseTime, err := h.sendRequest(serverURL) resp, responseTime, err := h.sendRequest(serverURL)
@@ -292,7 +292,7 @@ func (h *HttpJson) sendRequest(serverURL string) (string, float64, error) {
} }
func init() { func init() {
inputs.Add("httpjson", func() telegraf.Input { inputs.Add("httpjson", func() plugins.Input {
return &HttpJson{ return &HttpJson{
client: &RealHTTPClient{}, client: &RealHTTPClient{},
ResponseTimeout: internal.Duration{ ResponseTimeout: internal.Duration{

View File

@@ -9,7 +9,7 @@ import (
"sync" "sync"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal" "github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
) )
@@ -43,7 +43,7 @@ func (*InfluxDB) SampleConfig() string {
` `
} }
func (i *InfluxDB) Gather(acc telegraf.Accumulator) error { func (i *InfluxDB) Gather(acc plugins.Accumulator) error {
if len(i.URLs) == 0 { if len(i.URLs) == 0 {
i.URLs = []string{"http://localhost:8086/debug/vars"} i.URLs = []string{"http://localhost:8086/debug/vars"}
} }
@@ -94,43 +94,44 @@ type point struct {
} }
type memstats struct { type memstats struct {
Alloc int64 `json:"Alloc"` Alloc int64 `json:"Alloc"`
TotalAlloc int64 `json:"TotalAlloc"` TotalAlloc int64 `json:"TotalAlloc"`
Sys int64 `json:"Sys"` Sys int64 `json:"Sys"`
Lookups int64 `json:"Lookups"` Lookups int64 `json:"Lookups"`
Mallocs int64 `json:"Mallocs"` Mallocs int64 `json:"Mallocs"`
Frees int64 `json:"Frees"` Frees int64 `json:"Frees"`
HeapAlloc int64 `json:"HeapAlloc"` HeapAlloc int64 `json:"HeapAlloc"`
HeapSys int64 `json:"HeapSys"` HeapSys int64 `json:"HeapSys"`
HeapIdle int64 `json:"HeapIdle"` HeapIdle int64 `json:"HeapIdle"`
HeapInuse int64 `json:"HeapInuse"` HeapInuse int64 `json:"HeapInuse"`
HeapReleased int64 `json:"HeapReleased"` HeapReleased int64 `json:"HeapReleased"`
HeapObjects int64 `json:"HeapObjects"` HeapObjects int64 `json:"HeapObjects"`
StackInuse int64 `json:"StackInuse"` StackInuse int64 `json:"StackInuse"`
StackSys int64 `json:"StackSys"` StackSys int64 `json:"StackSys"`
MSpanInuse int64 `json:"MSpanInuse"` MSpanInuse int64 `json:"MSpanInuse"`
MSpanSys int64 `json:"MSpanSys"` MSpanSys int64 `json:"MSpanSys"`
MCacheInuse int64 `json:"MCacheInuse"` MCacheInuse int64 `json:"MCacheInuse"`
MCacheSys int64 `json:"MCacheSys"` MCacheSys int64 `json:"MCacheSys"`
BuckHashSys int64 `json:"BuckHashSys"` BuckHashSys int64 `json:"BuckHashSys"`
GCSys int64 `json:"GCSys"` GCSys int64 `json:"GCSys"`
OtherSys int64 `json:"OtherSys"` OtherSys int64 `json:"OtherSys"`
NextGC int64 `json:"NextGC"` NextGC int64 `json:"NextGC"`
LastGC int64 `json:"LastGC"` LastGC int64 `json:"LastGC"`
PauseTotalNs int64 `json:"PauseTotalNs"` PauseTotalNs int64 `json:"PauseTotalNs"`
NumGC int64 `json:"NumGC"` PauseNs [256]int64 `json:"PauseNs"`
GCCPUFraction float64 `json:"GCCPUFraction"` NumGC int64 `json:"NumGC"`
GCCPUFraction float64 `json:"GCCPUFraction"`
} }
// Gathers data from a particular URL // Gathers data from a particular URL
// Parameters: // Parameters:
// acc : The telegraf Accumulator to use // acc : The plugins.Accumulator to use
// url : endpoint to send request to // url : endpoint to send request to
// //
// Returns: // Returns:
// error: Any error that may have occurred // error: Any error that may have occurred
func (i *InfluxDB) gatherURL( func (i *InfluxDB) gatherURL(
acc telegraf.Accumulator, acc plugins.Accumulator,
url string, url string,
) error { ) error {
shardCounter := 0 shardCounter := 0
@@ -202,6 +203,7 @@ func (i *InfluxDB) gatherURL(
"next_gc": m.NextGC, "next_gc": m.NextGC,
"last_gc": m.LastGC, "last_gc": m.LastGC,
"pause_total_ns": m.PauseTotalNs, "pause_total_ns": m.PauseTotalNs,
"pause_ns": m.PauseNs[(m.NumGC+255)%256],
"num_gc": m.NumGC, "num_gc": m.NumGC,
"gcc_pu_fraction": m.GCCPUFraction, "gcc_pu_fraction": m.GCCPUFraction,
}, },
@@ -256,7 +258,7 @@ func (i *InfluxDB) gatherURL(
} }
func init() { func init() {
inputs.Add("influxdb", func() telegraf.Input { inputs.Add("influxdb", func() plugins.Input {
return &InfluxDB{ return &InfluxDB{
Timeout: internal.Duration{Duration: time.Second * 5}, Timeout: internal.Duration{Duration: time.Second * 5},
} }
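The new `pause_ns` field above indexes `runtime.MemStats.PauseNs`, which is a 256-entry circular buffer of recent GC pause durations; the most recent pause lives at slot `(NumGC+255)%256`. A minimal sketch of that index arithmetic (the helper name is illustrative, not part of the plugin):

```go
package main

import "fmt"

// mostRecentPauseIndex returns the slot in runtime.MemStats.PauseNs that
// holds the latest GC pause: PauseNs is a 256-entry circular buffer, and
// the most recent entry sits at (NumGC+255)%256.
func mostRecentPauseIndex(numGC uint32) uint32 {
	return (numGC + 255) % 256
}

func main() {
	fmt.Println(mostRecentPauseIndex(1))   // after the first GC: slot 0
	fmt.Println(mostRecentPauseIndex(256)) // the 256th GC wraps to slot 255
}
```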

View File

@@ -86,6 +86,7 @@ func TestInfluxDB(t *testing.T) {
"frees": int64(381008), "frees": int64(381008),
"heap_idle": int64(15802368), "heap_idle": int64(15802368),
"pause_total_ns": int64(5132914), "pause_total_ns": int64(5132914),
"pause_ns": int64(127053),
"lookups": int64(77), "lookups": int64(77),
"heap_sys": int64(33849344), "heap_sys": int64(33849344),
"mcache_sys": int64(16384), "mcache_sys": int64(16384),

View File

@@ -3,7 +3,7 @@ package internal
import ( import (
"runtime" "runtime"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/selfstat" "github.com/influxdata/telegraf/selfstat"
) )
@@ -12,7 +12,7 @@ type Self struct {
CollectMemstats bool CollectMemstats bool
} }
func NewSelf() telegraf.Input { func NewSelf() plugins.Input {
return &Self{ return &Self{
CollectMemstats: true, CollectMemstats: true,
} }
@@ -31,7 +31,7 @@ func (s *Self) SampleConfig() string {
return sampleConfig return sampleConfig
} }
func (s *Self) Gather(acc telegraf.Accumulator) error { func (s *Self) Gather(acc plugins.Accumulator) error {
if s.CollectMemstats { if s.CollectMemstats {
m := &runtime.MemStats{} m := &runtime.MemStats{}
runtime.ReadMemStats(m) runtime.ReadMemStats(m)

View File

@@ -5,7 +5,7 @@ import (
"strings" "strings"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
) )
@@ -37,7 +37,7 @@ func (m *Ipmi) Description() string {
return "Read metrics from one or many bare metal servers" return "Read metrics from one or many bare metal servers"
} }
func (m *Ipmi) Gather(acc telegraf.Accumulator) error { func (m *Ipmi) Gather(acc plugins.Accumulator) error {
if m.runner == nil { if m.runner == nil {
m.runner = CommandRunner{} m.runner = CommandRunner{}
} }
@@ -51,7 +51,7 @@ func (m *Ipmi) Gather(acc telegraf.Accumulator) error {
return nil return nil
} }
func (m *Ipmi) gatherServer(serv string, acc telegraf.Accumulator) error { func (m *Ipmi) gatherServer(serv string, acc plugins.Accumulator) error {
conn := NewConnection(serv) conn := NewConnection(serv)
res, err := m.runner.Run(conn, "sdr") res, err := m.runner.Run(conn, "sdr")
@@ -123,7 +123,7 @@ func transform(s string) string {
} }
func init() { func init() {
inputs.Add("ipmi_sensor", func() telegraf.Input { inputs.Add("ipmi_sensor", func() plugins.Input {
return &Ipmi{} return &Ipmi{}
}) })
} }

View File

@@ -9,7 +9,7 @@ import (
"strconv" "strconv"
"strings" "strings"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
) )
@@ -42,7 +42,7 @@ func (ipt *Iptables) SampleConfig() string {
} }
// Gather gathers iptables packets and bytes throughput from the configured tables and chains. // Gather gathers iptables packets and bytes throughput from the configured tables and chains.
func (ipt *Iptables) Gather(acc telegraf.Accumulator) error { func (ipt *Iptables) Gather(acc plugins.Accumulator) error {
if ipt.Table == "" || len(ipt.Chains) == 0 { if ipt.Table == "" || len(ipt.Chains) == 0 {
return nil return nil
} }
@@ -88,7 +88,7 @@ var chainNameRe = regexp.MustCompile(`^Chain\s+(\S+)`)
var fieldsHeaderRe = regexp.MustCompile(`^\s*pkts\s+bytes\s+`) var fieldsHeaderRe = regexp.MustCompile(`^\s*pkts\s+bytes\s+`)
var valuesRe = regexp.MustCompile(`^\s*([0-9]+)\s+([0-9]+)\s+.*?(/\*\s(.*)\s\*/)?$`) var valuesRe = regexp.MustCompile(`^\s*([0-9]+)\s+([0-9]+)\s+.*?(/\*\s(.*)\s\*/)?$`)
func (ipt *Iptables) parseAndGather(data string, acc telegraf.Accumulator) error { func (ipt *Iptables) parseAndGather(data string, acc plugins.Accumulator) error {
lines := strings.Split(data, "\n") lines := strings.Split(data, "\n")
if len(lines) < 3 { if len(lines) < 3 {
return nil return nil
@@ -120,7 +120,7 @@ func (ipt *Iptables) parseAndGather(data string, acc telegraf.Accumulator) error
type chainLister func(table, chain string) (string, error) type chainLister func(table, chain string) (string, error)
func init() { func init() {
inputs.Add("iptables", func() telegraf.Input { inputs.Add("iptables", func() plugins.Input {
ipt := new(Iptables) ipt := new(Iptables)
ipt.lister = ipt.chainList ipt.lister = ipt.chainList
return ipt return ipt

View File

@@ -6,7 +6,8 @@
# Read JMX metrics through Jolokia # Read JMX metrics through Jolokia
[[inputs.jolokia]] [[inputs.jolokia]]
## This is the context root used to compose the jolokia url ## This is the context root used to compose the jolokia url
context = "/jolokia" ## NOTE that Jolokia requires a trailing slash at the end of the context root
context = "/jolokia/"
## This specifies the mode used ## This specifies the mode used
# mode = "proxy" # mode = "proxy"
@@ -17,7 +18,16 @@
# [inputs.jolokia.proxy] # [inputs.jolokia.proxy]
# host = "127.0.0.1" # host = "127.0.0.1"
# port = "8080" # port = "8080"
## Optional http timeouts
##
## response_header_timeout, if non-zero, specifies the amount of time to wait
## for a server's response headers after fully writing the request.
# response_header_timeout = "3s"
##
## client_timeout specifies a time limit for requests made by this client.
## Includes connection time, any redirects, and reading the response body.
# client_timeout = "4s"
## List of servers exposing jolokia read service ## List of servers exposing jolokia read service
[[inputs.jolokia.servers]] [[inputs.jolokia.servers]]

View File

@@ -10,10 +10,15 @@ import (
"net/url" "net/url"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
) )
// Default http timeouts
var DefaultResponseHeaderTimeout = internal.Duration{Duration: 3 * time.Second}
var DefaultClientTimeout = internal.Duration{Duration: 4 * time.Second}
type Server struct { type Server struct {
Name string Name string
Host string Host string
@@ -48,12 +53,16 @@ type Jolokia struct {
Servers []Server Servers []Server
Metrics []Metric Metrics []Metric
Proxy Server Proxy Server
ResponseHeaderTimeout internal.Duration `toml:"response_header_timeout"`
ClientTimeout internal.Duration `toml:"client_timeout"`
} }
const sampleConfig = ` const sampleConfig = `
## This is the context root used to compose the jolokia url ## This is the context root used to compose the jolokia url
## NOTE that Jolokia requires a trailing slash at the end of the context root
## NOTE that your jolokia security policy must allow for POST requests. ## NOTE that your jolokia security policy must allow for POST requests.
context = "/jolokia" context = "/jolokia/"
## This specifies the mode used ## This specifies the mode used
# mode = "proxy" # mode = "proxy"
@@ -65,6 +74,15 @@ const sampleConfig = `
# host = "127.0.0.1" # host = "127.0.0.1"
# port = "8080" # port = "8080"
## Optional http timeouts
##
## response_header_timeout, if non-zero, specifies the amount of time to wait
## for a server's response headers after fully writing the request.
# response_header_timeout = "3s"
##
## client_timeout specifies a time limit for requests made by this client.
## Includes connection time, any redirects, and reading the response body.
# client_timeout = "4s"
## List of servers exposing jolokia read service ## List of servers exposing jolokia read service
[[inputs.jolokia.servers]] [[inputs.jolokia.servers]]
@@ -148,7 +166,7 @@ func (j *Jolokia) doRequest(req *http.Request) (map[string]interface{}, error) {
func (j *Jolokia) prepareRequest(server Server, metric Metric) (*http.Request, error) { func (j *Jolokia) prepareRequest(server Server, metric Metric) (*http.Request, error) {
var jolokiaUrl *url.URL var jolokiaUrl *url.URL
context := j.Context // Usually "/jolokia" context := j.Context // Usually "/jolokia/"
// Create bodyContent // Create bodyContent
bodyContent := map[string]interface{}{ bodyContent := map[string]interface{}{
@@ -220,7 +238,26 @@ func (j *Jolokia) prepareRequest(server Server, metric Metric) (*http.Request, e
return req, nil return req, nil
} }
func (j *Jolokia) Gather(acc telegraf.Accumulator) error { func extractValues(measurement string, value interface{}, fields map[string]interface{}) {
if mapValues, ok := value.(map[string]interface{}); ok {
for k2, v2 := range mapValues {
extractValues(measurement+"_"+k2, v2, fields)
}
} else {
fields[measurement] = value
}
}
func (j *Jolokia) Gather(acc plugins.Accumulator) error {
if j.jClient == nil {
tr := &http.Transport{ResponseHeaderTimeout: j.ResponseHeaderTimeout.Duration}
j.jClient = &JolokiaClientImpl{&http.Client{
Transport: tr,
Timeout: j.ClientTimeout.Duration,
}}
}
servers := j.Servers servers := j.Servers
metrics := j.Metrics metrics := j.Metrics
tags := make(map[string]string) tags := make(map[string]string)
@@ -244,23 +281,8 @@ func (j *Jolokia) Gather(acc telegraf.Accumulator) error {
if err != nil { if err != nil {
fmt.Printf("Error handling response: %s\n", err) fmt.Printf("Error handling response: %s\n", err)
} else { } else {
if values, ok := out["value"]; ok { if values, ok := out["value"]; ok {
switch t := values.(type) { extractValues(measurement, values, fields)
case map[string]interface{}:
for k, v := range t {
switch t2 := v.(type) {
case map[string]interface{}:
for k2, v2 := range t2 {
fields[measurement+"_"+k+"_"+k2] = v2
}
case interface{}:
fields[measurement+"_"+k] = t2
}
}
case interface{}:
fields[measurement] = t
}
} else { } else {
fmt.Printf("Missing key 'value' in output response\n") fmt.Printf("Missing key 'value' in output response\n")
} }
@@ -275,12 +297,10 @@ func (j *Jolokia) Gather(acc telegraf.Accumulator) error {
} }
func init() { func init() {
inputs.Add("jolokia", func() telegraf.Input { inputs.Add("jolokia", func() plugins.Input {
tr := &http.Transport{ResponseHeaderTimeout: time.Duration(3 * time.Second)} return &Jolokia{
client := &http.Client{ ResponseHeaderTimeout: DefaultResponseHeaderTimeout,
Transport: tr, ClientTimeout: DefaultClientTimeout,
Timeout: time.Duration(4 * time.Second),
} }
return &Jolokia{jClient: &JolokiaClientImpl{client: client}}
}) })
} }
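The recursive `extractValues` helper introduced in this diff replaces the old two-level type switch with flattening of arbitrarily deep Jolokia `value` maps. A self-contained sketch of the same logic:

```go
package main

import "fmt"

// extractValues mirrors the recursive helper from the diff above: nested
// maps are flattened into measurement_key1_key2... field names, and
// scalar leaves are stored as-is.
func extractValues(measurement string, value interface{}, fields map[string]interface{}) {
	if mapValues, ok := value.(map[string]interface{}); ok {
		for k, v := range mapValues {
			extractValues(measurement+"_"+k, v, fields)
		}
		return
	}
	fields[measurement] = value
}

func main() {
	fields := map[string]interface{}{}
	extractValues("heap_memory_usage", map[string]interface{}{
		"HeapMemoryUsage": map[string]interface{}{
			"init": 134217728.0,
			"used": 16840016.0,
		},
		"Verbose": false,
	}, fields)
	fmt.Println(len(fields)) // 3 flattened fields
}
```

Because the recursion bottoms out at any non-map value, three-level responses (as exercised by `TestHttpJsonThreeLevelMultiValue`) flatten without special-casing each depth.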

View File

@@ -12,6 +12,37 @@ import (
_ "github.com/stretchr/testify/require" _ "github.com/stretchr/testify/require"
) )
const validThreeLevelMultiValueJSON = `
{
"request":{
"mbean":"java.lang:type=*",
"type":"read"
},
"value":{
"java.lang:type=Memory":{
"ObjectPendingFinalizationCount":0,
"Verbose":false,
"HeapMemoryUsage":{
"init":134217728,
"committed":173015040,
"max":1908932608,
"used":16840016
},
"NonHeapMemoryUsage":{
"init":2555904,
"committed":51380224,
"max":-1,
"used":49944048
},
"ObjectName":{
"objectName":"java.lang:type=Memory"
}
}
},
"timestamp":1446129191,
"status":200
}`
const validMultiValueJSON = ` const validMultiValueJSON = `
{ {
"request":{ "request":{
@@ -103,6 +134,38 @@ func TestHttpJsonMultiValue(t *testing.T) {
acc.AssertContainsTaggedFields(t, "jolokia", fields, tags) acc.AssertContainsTaggedFields(t, "jolokia", fields, tags)
} }
// Test that the proper values are ignored or collected
func TestHttpJsonThreeLevelMultiValue(t *testing.T) {
jolokia := genJolokiaClientStub(validThreeLevelMultiValueJSON, 200, Servers, []Metric{HeapMetric})
var acc testutil.Accumulator
err := jolokia.Gather(&acc)
assert.Nil(t, err)
assert.Equal(t, 1, len(acc.Metrics))
fields := map[string]interface{}{
"heap_memory_usage_java.lang:type=Memory_ObjectPendingFinalizationCount": 0.0,
"heap_memory_usage_java.lang:type=Memory_Verbose": false,
"heap_memory_usage_java.lang:type=Memory_HeapMemoryUsage_init": 134217728.0,
"heap_memory_usage_java.lang:type=Memory_HeapMemoryUsage_max": 1908932608.0,
"heap_memory_usage_java.lang:type=Memory_HeapMemoryUsage_used": 16840016.0,
"heap_memory_usage_java.lang:type=Memory_HeapMemoryUsage_committed": 173015040.0,
"heap_memory_usage_java.lang:type=Memory_NonHeapMemoryUsage_init": 2555904.0,
"heap_memory_usage_java.lang:type=Memory_NonHeapMemoryUsage_committed": 51380224.0,
"heap_memory_usage_java.lang:type=Memory_NonHeapMemoryUsage_max": -1.0,
"heap_memory_usage_java.lang:type=Memory_NonHeapMemoryUsage_used": 49944048.0,
"heap_memory_usage_java.lang:type=Memory_ObjectName_objectName": "java.lang:type=Memory",
}
tags := map[string]string{
"jolokia_host": "127.0.0.1",
"jolokia_port": "8080",
"jolokia_name": "as1",
}
acc.AssertContainsTaggedFields(t, "jolokia", fields, tags)
}
// Test that the proper values are ignored or collected // Test that the proper values are ignored or collected
func TestHttpJsonOn404(t *testing.T) { func TestHttpJsonOn404(t *testing.T) {

View File

@@ -5,7 +5,7 @@ import (
"strings" "strings"
"sync" "sync"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/parsers" "github.com/influxdata/telegraf/plugins/parsers"
@@ -37,7 +37,7 @@ type Kafka struct {
done chan struct{} done chan struct{}
// keep the accumulator internally: // keep the accumulator internally:
acc telegraf.Accumulator acc plugins.Accumulator
// doNotCommitMsgs tells the parser not to call CommitUpTo on the consumer // doNotCommitMsgs tells the parser not to call CommitUpTo on the consumer
// this is mostly for test purposes, but there may be a use-case for it later. // this is mostly for test purposes, but there may be a use-case for it later.
@@ -75,7 +75,7 @@ func (k *Kafka) SetParser(parser parsers.Parser) {
k.parser = parser k.parser = parser
} }
func (k *Kafka) Start(acc telegraf.Accumulator) error { func (k *Kafka) Start(acc plugins.Accumulator) error {
k.Lock() k.Lock()
defer k.Unlock() defer k.Unlock()
var consumerErr error var consumerErr error
@@ -162,12 +162,12 @@ func (k *Kafka) Stop() {
} }
} }
func (k *Kafka) Gather(acc telegraf.Accumulator) error { func (k *Kafka) Gather(acc plugins.Accumulator) error {
return nil return nil
} }
func init() { func init() {
inputs.Add("kafka_consumer", func() telegraf.Input { inputs.Add("kafka_consumer", func() plugins.Input {
return &Kafka{} return &Kafka{}
}) })
} }

View File

@@ -9,7 +9,7 @@ import (
"sync" "sync"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal" "github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/internal/errchan" "github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
@@ -54,7 +54,7 @@ const (
) )
func init() { func init() {
inputs.Add("kubernetes", func() telegraf.Input { inputs.Add("kubernetes", func() plugins.Input {
return &Kubernetes{} return &Kubernetes{}
}) })
} }
@@ -70,7 +70,7 @@ func (k *Kubernetes) Description() string {
} }
//Gather collects kubernetes metrics from a given URL //Gather collects kubernetes metrics from a given URL
func (k *Kubernetes) Gather(acc telegraf.Accumulator) error { func (k *Kubernetes) Gather(acc plugins.Accumulator) error {
var wg sync.WaitGroup var wg sync.WaitGroup
errChan := errchan.New(1) errChan := errchan.New(1)
wg.Add(1) wg.Add(1)
@@ -91,7 +91,7 @@ func buildURL(endpoint string, base string) (*url.URL, error) {
return addr, nil return addr, nil
} }
func (k *Kubernetes) gatherSummary(baseURL string, acc telegraf.Accumulator) error { func (k *Kubernetes) gatherSummary(baseURL string, acc plugins.Accumulator) error {
url := fmt.Sprintf("%s/stats/summary", baseURL) url := fmt.Sprintf("%s/stats/summary", baseURL)
var req, err = http.NewRequest("GET", url, nil) var req, err = http.NewRequest("GET", url, nil)
var token []byte var token []byte
@@ -139,7 +139,7 @@ func (k *Kubernetes) gatherSummary(baseURL string, acc telegraf.Accumulator) err
return nil return nil
} }
func buildSystemContainerMetrics(summaryMetrics *SummaryMetrics, acc telegraf.Accumulator) { func buildSystemContainerMetrics(summaryMetrics *SummaryMetrics, acc plugins.Accumulator) {
for _, container := range summaryMetrics.Node.SystemContainers { for _, container := range summaryMetrics.Node.SystemContainers {
tags := map[string]string{ tags := map[string]string{
"node_name": summaryMetrics.Node.NodeName, "node_name": summaryMetrics.Node.NodeName,
@@ -161,7 +161,7 @@ func buildSystemContainerMetrics(summaryMetrics *SummaryMetrics, acc telegraf.Ac
} }
} }
func buildNodeMetrics(summaryMetrics *SummaryMetrics, acc telegraf.Accumulator) { func buildNodeMetrics(summaryMetrics *SummaryMetrics, acc plugins.Accumulator) {
tags := map[string]string{ tags := map[string]string{
"node_name": summaryMetrics.Node.NodeName, "node_name": summaryMetrics.Node.NodeName,
} }
@@ -187,7 +187,7 @@ func buildNodeMetrics(summaryMetrics *SummaryMetrics, acc telegraf.Accumulator)
acc.AddFields("kubernetes_node", fields, tags) acc.AddFields("kubernetes_node", fields, tags)
} }
func buildPodMetrics(summaryMetrics *SummaryMetrics, acc telegraf.Accumulator) { func buildPodMetrics(summaryMetrics *SummaryMetrics, acc plugins.Accumulator) {
for _, pod := range summaryMetrics.Pods { for _, pod := range summaryMetrics.Pods {
for _, container := range pod.Containers { for _, container := range pod.Containers {
tags := map[string]string{ tags := map[string]string{

View File

@@ -10,7 +10,7 @@ import (
"sync" "sync"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal" "github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
) )
@@ -148,7 +148,7 @@ func (l *LeoFS) Description() string {
return "Read metrics from a LeoFS Server via SNMP" return "Read metrics from a LeoFS Server via SNMP"
} }
func (l *LeoFS) Gather(acc telegraf.Accumulator) error { func (l *LeoFS) Gather(acc plugins.Accumulator) error {
if len(l.Servers) == 0 { if len(l.Servers) == 0 {
l.gatherServer(defaultEndpoint, ServerTypeManagerMaster, acc) l.gatherServer(defaultEndpoint, ServerTypeManagerMaster, acc)
return nil return nil
@@ -181,7 +181,7 @@ func (l *LeoFS) Gather(acc telegraf.Accumulator) error {
func (l *LeoFS) gatherServer( func (l *LeoFS) gatherServer(
endpoint string, endpoint string,
serverType ServerType, serverType ServerType,
acc telegraf.Accumulator, acc plugins.Accumulator,
) error { ) error {
cmd := exec.Command("snmpwalk", "-v2c", "-cpublic", endpoint, oid) cmd := exec.Command("snmpwalk", "-v2c", "-cpublic", endpoint, oid)
stdout, err := cmd.StdoutPipe() stdout, err := cmd.StdoutPipe()
@@ -231,7 +231,7 @@ func retrieveTokenAfterColon(line string) (string, error) {
} }
func init() { func init() {
inputs.Add("leofs", func() telegraf.Input { inputs.Add("leofs", func() plugins.Input {
return &LeoFS{} return &LeoFS{}
}) })
} }

View File

@@ -40,8 +40,11 @@ regex patterns.
## Grok Parser ## Grok Parser
The grok parser uses a slightly modified version of logstash "grok" patterns, The grok parser uses a slightly modified version of logstash "grok" patterns,
with the format `%{<capture_syntax>[:<semantic_name>][:<modifier>]}` with the format
```
%{<capture_syntax>[:<semantic_name>][:<modifier>]}
```
Telegraf has many of its own Telegraf has many of its own
[built-in patterns](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/logparser/grok/patterns/influx-patterns), [built-in patterns](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/logparser/grok/patterns/influx-patterns),
@@ -92,4 +95,3 @@ Timestamp modifiers can be used to convert captures to the timestamp of the
CUSTOM time layouts must be within quotes and be the representation of the CUSTOM time layouts must be within quotes and be the representation of the
"reference time", which is `Mon Jan 2 15:04:05 -0700 MST 2006` "reference time", which is `Mon Jan 2 15:04:05 -0700 MST 2006`
See https://golang.org/pkg/time/#Parse for more details. See https://golang.org/pkg/time/#Parse for more details.

View File

@@ -12,7 +12,7 @@ import (
"github.com/vjeantet/grok" "github.com/vjeantet/grok"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/metric" "github.com/influxdata/telegraf/metric"
) )
@@ -151,7 +151,7 @@ func (p *Parser) Compile() error {
return p.compileCustomPatterns() return p.compileCustomPatterns()
} }
func (p *Parser) ParseLine(line string) (telegraf.Metric, error) { func (p *Parser) ParseLine(line string) (plugins.Metric, error) {
var err error var err error
// values are the parsed fields from the log line // values are the parsed fields from the log line
var values map[string]string var values map[string]string

View File

@@ -4,13 +4,13 @@ import (
"testing" "testing"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
var benchM telegraf.Metric var benchM plugins.Metric
func Benchmark_ParseLine_CommonLogFormat(b *testing.B) { func Benchmark_ParseLine_CommonLogFormat(b *testing.B) {
p := &Parser{ p := &Parser{
@@ -18,7 +18,7 @@ func Benchmark_ParseLine_CommonLogFormat(b *testing.B) {
} }
p.Compile() p.Compile()
var m telegraf.Metric var m plugins.Metric
for n := 0; n < b.N; n++ { for n := 0; n < b.N; n++ {
m, _ = p.ParseLine(`127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326`) m, _ = p.ParseLine(`127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326`)
} }
@@ -31,7 +31,7 @@ func Benchmark_ParseLine_CombinedLogFormat(b *testing.B) {
} }
p.Compile() p.Compile()
var m telegraf.Metric var m plugins.Metric
for n := 0; n < b.N; n++ { for n := 0; n < b.N; n++ {
m, _ = p.ParseLine(`127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326 "-" "Mozilla"`) m, _ = p.ParseLine(`127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326 "-" "Mozilla"`)
} }
@@ -50,7 +50,7 @@ func Benchmark_ParseLine_CustomPattern(b *testing.B) {
} }
p.Compile() p.Compile()
var m telegraf.Metric var m plugins.Metric
for n := 0; n < b.N; n++ { for n := 0; n < b.N; n++ {
m, _ = p.ParseLine(`[04/Jun/2016:12:41:45 +0100] 1.25 200 192.168.1.1 5.432µs 101`) m, _ = p.ParseLine(`[04/Jun/2016:12:41:45 +0100] 1.25 200 192.168.1.1 5.432µs 101`)
} }
@@ -82,6 +82,46 @@ func TestMeasurementName(t *testing.T) {
assert.Equal(t, "my_web_log", m.Name()) assert.Equal(t, "my_web_log", m.Name())
} }
func TestCLF_IPv6(t *testing.T) {
p := &Parser{
Measurement: "my_web_log",
Patterns: []string{"%{COMMON_LOG_FORMAT}"},
}
assert.NoError(t, p.Compile())
m, err := p.ParseLine(`2001:0db8:85a3:0000:0000:8a2e:0370:7334 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326`)
require.NotNil(t, m)
assert.NoError(t, err)
assert.Equal(t,
map[string]interface{}{
"resp_bytes": int64(2326),
"auth": "frank",
"client_ip": "2001:0db8:85a3:0000:0000:8a2e:0370:7334",
"http_version": float64(1.0),
"ident": "user-identifier",
"request": "/apache_pb.gif",
},
m.Fields())
assert.Equal(t, map[string]string{"verb": "GET", "resp_code": "200"}, m.Tags())
assert.Equal(t, "my_web_log", m.Name())
m, err = p.ParseLine(`::1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326`)
require.NotNil(t, m)
assert.NoError(t, err)
assert.Equal(t,
map[string]interface{}{
"resp_bytes": int64(2326),
"auth": "frank",
"client_ip": "::1",
"http_version": float64(1.0),
"ident": "user-identifier",
"request": "/apache_pb.gif",
},
m.Fields())
assert.Equal(t, map[string]string{"verb": "GET", "resp_code": "200"}, m.Tags())
assert.Equal(t, "my_web_log", m.Name())
}
func TestCustomInfluxdbHttpd(t *testing.T) { func TestCustomInfluxdbHttpd(t *testing.T) {
p := &Parser{ p := &Parser{
Patterns: []string{`\[httpd\] %{COMBINED_LOG_FORMAT} %{UUID:uuid:drop} %{NUMBER:response_time_us:int}`}, Patterns: []string{`\[httpd\] %{COMBINED_LOG_FORMAT} %{UUID:uuid:drop} %{NUMBER:response_time_us:int}`},

View File

@@ -56,7 +56,7 @@ EXAMPLE_LOG \[%{HTTPDATE:ts:ts-httpd}\] %{NUMBER:myfloat:float} %{RESPONSE_CODE}
NGUSERNAME [a-zA-Z0-9\.\@\-\+_%]+ NGUSERNAME [a-zA-Z0-9\.\@\-\+_%]+
NGUSER %{NGUSERNAME} NGUSER %{NGUSERNAME}
# Wider-ranging client IP matching # Wider-ranging client IP matching
CLIENT (?:%{IPORHOST}|%{HOSTPORT}|::1) CLIENT (?:%{IPV6}|%{IPV4}|%{HOSTNAME}|%{HOSTPORT})
## ##
## COMMON LOG PATTERNS ## COMMON LOG PATTERNS

View File

@@ -8,7 +8,7 @@ import (
"github.com/hpcloud/tail" "github.com/hpcloud/tail"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal/errchan" "github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/internal/globpath" "github.com/influxdata/telegraf/internal/globpath"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
@@ -18,7 +18,7 @@ import (
) )
type LogParser interface { type LogParser interface {
ParseLine(line string) (telegraf.Metric, error) ParseLine(line string) (plugins.Metric, error)
Compile() error Compile() error
} }
@@ -30,7 +30,7 @@ type LogParserPlugin struct {
lines chan string lines chan string
done chan struct{} done chan struct{}
wg sync.WaitGroup wg sync.WaitGroup
acc telegraf.Accumulator acc plugins.Accumulator
parsers []LogParser parsers []LogParser
sync.Mutex sync.Mutex
@@ -76,11 +76,11 @@ func (l *LogParserPlugin) Description() string {
return "Stream and parse log file(s)." return "Stream and parse log file(s)."
} }
func (l *LogParserPlugin) Gather(acc telegraf.Accumulator) error { func (l *LogParserPlugin) Gather(acc plugins.Accumulator) error {
return nil return nil
} }
func (l *LogParserPlugin) Start(acc telegraf.Accumulator) error { func (l *LogParserPlugin) Start(acc plugins.Accumulator) error {
l.Lock() l.Lock()
defer l.Unlock() defer l.Unlock()
@@ -185,7 +185,7 @@ func (l *LogParserPlugin) receiver(tailer *tail.Tail) {
func (l *LogParserPlugin) parser() { func (l *LogParserPlugin) parser() {
defer l.wg.Done() defer l.wg.Done()
var m telegraf.Metric var m plugins.Metric
var err error var err error
var line string var line string
for { for {
@@ -225,7 +225,7 @@ func (l *LogParserPlugin) Stop() {
} }
func init() { func init() {
inputs.Add("logparser", func() telegraf.Input { inputs.Add("logparser", func() plugins.Input {
return &LogParserPlugin{} return &LogParserPlugin{}
}) })
} }

View File

@@ -13,7 +13,7 @@ import (
"strconv" "strconv"
"strings" "strings"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal" "github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
) )
@@ -353,7 +353,7 @@ var wanted_mdt_jobstats_fields = []*mapping{
}, },
} }
func (l *Lustre2) GetLustreProcStats(fileglob string, wanted_fields []*mapping, acc telegraf.Accumulator) error { func (l *Lustre2) GetLustreProcStats(fileglob string, wanted_fields []*mapping, acc plugins.Accumulator) error {
files, err := filepath.Glob(fileglob) files, err := filepath.Glob(fileglob)
if err != nil { if err != nil {
return err return err
@@ -422,7 +422,7 @@ func (l *Lustre2) Description() string {
} }
// Gather reads stats from all lustre targets // Gather reads stats from all lustre targets
func (l *Lustre2) Gather(acc telegraf.Accumulator) error { func (l *Lustre2) Gather(acc plugins.Accumulator) error {
l.allFields = make(map[string]map[string]interface{}) l.allFields = make(map[string]map[string]interface{})
if len(l.Ost_procfiles) == 0 { if len(l.Ost_procfiles) == 0 {
@@ -500,7 +500,7 @@ func (l *Lustre2) Gather(acc telegraf.Accumulator) error {
} }
func init() { func init() {
inputs.Add("lustre2", func() telegraf.Input { inputs.Add("lustre2", func() plugins.Input {
return &Lustre2{} return &Lustre2{}
}) })
} }

View File

@@ -4,7 +4,7 @@ import (
"fmt" "fmt"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
) )
@@ -35,7 +35,7 @@ func (m *MailChimp) Description() string {
return "Gathers metrics from the /3.0/reports MailChimp API" return "Gathers metrics from the /3.0/reports MailChimp API"
} }
func (m *MailChimp) Gather(acc telegraf.Accumulator) error { func (m *MailChimp) Gather(acc plugins.Accumulator) error {
if m.api == nil { if m.api == nil {
m.api = NewChimpAPI(m.ApiKey) m.api = NewChimpAPI(m.ApiKey)
} }
@@ -72,7 +72,7 @@ func (m *MailChimp) Gather(acc telegraf.Accumulator) error {
return nil return nil
} }
func gatherReport(acc telegraf.Accumulator, report Report, now time.Time) { func gatherReport(acc plugins.Accumulator, report Report, now time.Time) {
tags := make(map[string]string) tags := make(map[string]string)
tags["id"] = report.ID tags["id"] = report.ID
tags["campaign_title"] = report.CampaignTitle tags["campaign_title"] = report.CampaignTitle
@@ -111,7 +111,7 @@ func gatherReport(acc telegraf.Accumulator, report Report, now time.Time) {
} }
func init() { func init() {
inputs.Add("mailchimp", func() telegraf.Input { inputs.Add("mailchimp", func() plugins.Input {
return &MailChimp{} return &MailChimp{}
}) })
} }

View File

@@ -8,7 +8,7 @@ import (
"strconv" "strconv"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal/errchan" "github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
) )
@@ -69,7 +69,7 @@ func (m *Memcached) Description() string {
} }
// Gather reads stats from all configured servers and accumulates stats // Gather reads stats from all configured servers and accumulates stats
func (m *Memcached) Gather(acc telegraf.Accumulator) error { func (m *Memcached) Gather(acc plugins.Accumulator) error {
if len(m.Servers) == 0 && len(m.UnixSockets) == 0 { if len(m.Servers) == 0 && len(m.UnixSockets) == 0 {
return m.gatherServer(":11211", false, acc) return m.gatherServer(":11211", false, acc)
} }
@@ -89,7 +89,7 @@ func (m *Memcached) Gather(acc telegraf.Accumulator) error {
func (m *Memcached) gatherServer( func (m *Memcached) gatherServer(
address string, address string,
unix bool, unix bool,
acc telegraf.Accumulator, acc plugins.Accumulator,
) error { ) error {
var conn net.Conn var conn net.Conn
var err error var err error
@@ -180,7 +180,7 @@ func parseResponse(r *bufio.Reader) (map[string]string, error) {
} }
func init() { func init() {
inputs.Add("memcached", func() telegraf.Input { inputs.Add("memcached", func() plugins.Input {
return &Memcached{} return &Memcached{}
}) })
} }

View File

@@ -12,7 +12,7 @@ import (
"sync" "sync"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
jsonparser "github.com/influxdata/telegraf/plugins/parsers/json" jsonparser "github.com/influxdata/telegraf/plugins/parsers/json"
) )
@@ -94,7 +94,7 @@ func (m *Mesos) SetDefaults() {
} }
// Gather metrics from the given list of Mesos Masters // Gather metrics from the given list of Mesos Masters
func (m *Mesos) Gather(acc telegraf.Accumulator) error { func (m *Mesos) Gather(acc plugins.Accumulator) error {
var wg sync.WaitGroup var wg sync.WaitGroup
var errorChannel chan error var errorChannel chan error
@@ -425,7 +425,7 @@ type TaskStats struct {
Statistics map[string]interface{} `json:"statistics"` Statistics map[string]interface{} `json:"statistics"`
} }
func (m *Mesos) gatherSlaveTaskMetrics(address string, defaultPort string, acc telegraf.Accumulator) error { func (m *Mesos) gatherSlaveTaskMetrics(address string, defaultPort string, acc plugins.Accumulator) error {
var metrics []TaskStats var metrics []TaskStats
host, _, err := net.SplitHostPort(address) host, _, err := net.SplitHostPort(address)
@@ -476,7 +476,7 @@ func (m *Mesos) gatherSlaveTaskMetrics(address string, defaultPort string, acc t
} }
// This should not belong to the object // This should not belong to the object
func (m *Mesos) gatherMainMetrics(a string, defaultPort string, role Role, acc telegraf.Accumulator) error { func (m *Mesos) gatherMainMetrics(a string, defaultPort string, role Role, acc plugins.Accumulator) error {
var jsonOut map[string]interface{} var jsonOut map[string]interface{}
host, _, err := net.SplitHostPort(a) host, _, err := net.SplitHostPort(a)
@@ -532,7 +532,7 @@ func (m *Mesos) gatherMainMetrics(a string, defaultPort string, role Role, acc t
} }
func init() { func init() {
inputs.Add("mesos", func() telegraf.Input { inputs.Add("mesos", func() plugins.Input {
return &Mesos{} return &Mesos{}
}) })
} }

View File

@@ -1,7 +1,7 @@
package inputs package inputs
import ( import (
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/stretchr/testify/mock" "github.com/stretchr/testify/mock"
) )
@@ -22,7 +22,7 @@ func (m *MockPlugin) SampleConfig() string {
} }
// Gather defines what data the plugin will gather. // Gather defines what data the plugin will gather.
func (m *MockPlugin) Gather(_a0 telegraf.Accumulator) error { func (m *MockPlugin) Gather(_a0 plugins.Accumulator) error {
ret := m.Called(_a0) ret := m.Called(_a0)
r0 := ret.Error(0) r0 := ret.Error(0)

View File

@@ -9,7 +9,7 @@ import (
"sync" "sync"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal/errchan" "github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
"gopkg.in/mgo.v2" "gopkg.in/mgo.v2"
@@ -49,7 +49,7 @@ var localhost = &url.URL{Host: "127.0.0.1:27017"}
// Reads stats from all configured servers and accumulates stats. // Reads stats from all configured servers and accumulates stats.
// Returns one of the errors encountered while gather stats (if any). // Returns one of the errors encountered while gather stats (if any).
func (m *MongoDB) Gather(acc telegraf.Accumulator) error { func (m *MongoDB) Gather(acc plugins.Accumulator) error {
if len(m.Servers) == 0 { if len(m.Servers) == 0 {
m.gatherServer(m.getMongoServer(localhost), acc) m.gatherServer(m.getMongoServer(localhost), acc)
return nil return nil
@@ -89,7 +89,7 @@ func (m *MongoDB) getMongoServer(url *url.URL) *Server {
return m.mongos[url.Host] return m.mongos[url.Host]
} }
func (m *MongoDB) gatherServer(server *Server, acc telegraf.Accumulator) error { func (m *MongoDB) gatherServer(server *Server, acc plugins.Accumulator) error {
if server.Session == nil { if server.Session == nil {
var dialAddrs []string var dialAddrs []string
if server.Url.User != nil { if server.Url.User != nil {
@@ -139,7 +139,7 @@ func (m *MongoDB) gatherServer(server *Server, acc telegraf.Accumulator) error {
} }
func init() { func init() {
inputs.Add("mongodb", func() telegraf.Input { inputs.Add("mongodb", func() plugins.Input {
return &MongoDB{ return &MongoDB{
mongos: make(map[string]*Server), mongos: make(map[string]*Server),
} }

View File

@@ -5,7 +5,7 @@ import (
"reflect" "reflect"
"strconv" "strconv"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
) )
type MongodbData struct { type MongodbData struct {
@@ -21,9 +21,6 @@ type DbData struct {
} }
func NewMongodbData(statLine *StatLine, tags map[string]string) *MongodbData { func NewMongodbData(statLine *StatLine, tags map[string]string) *MongodbData {
if statLine.NodeType != "" && statLine.NodeType != "UNK" {
tags["state"] = statLine.NodeType
}
return &MongodbData{ return &MongodbData{
StatLine: statLine, StatLine: statLine,
Tags: tags, Tags: tags,
@@ -61,6 +58,7 @@ var DefaultReplStats = map[string]string{
"repl_getmores_per_sec": "GetMoreR", "repl_getmores_per_sec": "GetMoreR",
"repl_commands_per_sec": "CommandR", "repl_commands_per_sec": "CommandR",
"member_status": "NodeType", "member_status": "NodeType",
"state": "NodeState",
"repl_lag": "ReplLag", "repl_lag": "ReplLag",
} }
@@ -140,7 +138,7 @@ func (d *MongodbData) add(key string, val interface{}) {
d.Fields[key] = val d.Fields[key] = val
} }
func (d *MongodbData) flush(acc telegraf.Accumulator) { func (d *MongodbData) flush(acc plugins.Accumulator) {
acc.AddFields( acc.AddFields(
"mongodb", "mongodb",
d.Fields, d.Fields,

View File

@@ -95,12 +95,12 @@ func TestStateTag(t *testing.T) {
Insert: 0, Insert: 0,
Query: 0, Query: 0,
NodeType: "PRI", NodeType: "PRI",
NodeState: "PRIMARY",
}, },
tags, tags,
) )
stateTags := make(map[string]string) stateTags := make(map[string]string)
stateTags["state"] = "PRI"
var acc testutil.Accumulator var acc testutil.Accumulator
@@ -115,6 +115,7 @@ func TestStateTag(t *testing.T) {
"getmores_per_sec": int64(0), "getmores_per_sec": int64(0),
"inserts_per_sec": int64(0), "inserts_per_sec": int64(0),
"member_status": "PRI", "member_status": "PRI",
"state": "PRIMARY",
"net_in_bytes": int64(0), "net_in_bytes": int64(0),
"net_out_bytes": int64(0), "net_out_bytes": int64(0),
"open_connections": int64(0), "open_connections": int64(0),

View File

@@ -5,7 +5,7 @@ import (
"net/url" "net/url"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"gopkg.in/mgo.v2" "gopkg.in/mgo.v2"
"gopkg.in/mgo.v2/bson" "gopkg.in/mgo.v2/bson"
) )
@@ -22,7 +22,7 @@ func (s *Server) getDefaultTags() map[string]string {
return tags return tags
} }
func (s *Server) gatherData(acc telegraf.Accumulator, gatherDbStats bool) error { func (s *Server) gatherData(acc plugins.Accumulator, gatherDbStats bool) error {
s.Session.SetMode(mgo.Eventual, true) s.Session.SetMode(mgo.Eventual, true)
s.Session.SetSocketTimeout(0) s.Session.SetSocketTimeout(0)
result_server := &ServerStatus{} result_server := &ServerStatus{}

View File

@@ -11,8 +11,6 @@ import (
"sort" "sort"
"strings" "strings"
"time" "time"
"gopkg.in/mgo.v2/bson"
) )
const ( const (
@@ -105,9 +103,10 @@ type ReplSetStatus struct {
// ReplSetMember stores information related to a replica set member // ReplSetMember stores information related to a replica set member
type ReplSetMember struct { type ReplSetMember struct {
Name string `bson:"name"` Name string `bson:"name"`
State int64 `bson:"state"` State int64 `bson:"state"`
OptimeDate *bson.MongoTimestamp `bson:"optimeDate"` StateStr string `bson:"stateStr"`
OptimeDate time.Time `bson:"optimeDate"`
} }
// WiredTiger stores information related to the WiredTiger storage engine. // WiredTiger stores information related to the WiredTiger storage engine.
@@ -420,6 +419,7 @@ type StatLine struct {
NumConnections int64 NumConnections int64
ReplSetName string ReplSetName string
NodeType string NodeType string
NodeState string
// Cluster fields // Cluster fields
JumboChunksCount int64 JumboChunksCount int64
@@ -566,6 +566,8 @@ func NewStatLine(oldMongo, newMongo MongoStatus, key string, all bool, sampleSec
returnVal.NodeType = "PRI" returnVal.NodeType = "PRI"
} else if newStat.Repl.Secondary.(bool) { } else if newStat.Repl.Secondary.(bool) {
returnVal.NodeType = "SEC" returnVal.NodeType = "SEC"
} else if newStat.Repl.ArbiterOnly != nil && newStat.Repl.ArbiterOnly.(bool) {
returnVal.NodeType = "ARB"
} else { } else {
returnVal.NodeType = "UNK" returnVal.NodeType = "UNK"
} }
@@ -692,6 +694,8 @@ func NewStatLine(oldMongo, newMongo MongoStatus, key string, all bool, sampleSec
me := ReplSetMember{} me := ReplSetMember{}
for _, member := range newReplStat.Members { for _, member := range newReplStat.Members {
if member.Name == myName { if member.Name == myName {
// Store my state string
returnVal.NodeState = member.StateStr
if member.State == 1 { if member.State == 1 {
// I'm the master // I'm the master
returnVal.ReplLag = 0 returnVal.ReplLag = 0
@@ -706,9 +710,9 @@ func NewStatLine(oldMongo, newMongo MongoStatus, key string, all bool, sampleSec
} }
} }
if me.OptimeDate != nil && master.OptimeDate != nil && me.State == 2 { if me.State == 2 {
// MongoTimestamp type is int64 where the first 32bits are the unix timestamp // OptimeDate.Unix() type is int64
lag := int64(*master.OptimeDate>>32 - *me.OptimeDate>>32) lag := master.OptimeDate.Unix() - me.OptimeDate.Unix()
if lag < 0 { if lag < 0 {
returnVal.ReplLag = 0 returnVal.ReplLag = 0
} else { } else {

View File

@@ -7,7 +7,7 @@ import (
"sync" "sync"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal" "github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/parsers" "github.com/influxdata/telegraf/plugins/parsers"
@@ -46,7 +46,7 @@ type MQTTConsumer struct {
done chan struct{} done chan struct{}
// keep the accumulator internally: // keep the accumulator internally:
acc telegraf.Accumulator acc plugins.Accumulator
started bool started bool
} }
@@ -100,7 +100,7 @@ func (m *MQTTConsumer) SetParser(parser parsers.Parser) {
m.parser = parser m.parser = parser
} }
func (m *MQTTConsumer) Start(acc telegraf.Accumulator) error { func (m *MQTTConsumer) Start(acc plugins.Accumulator) error {
m.Lock() m.Lock()
defer m.Unlock() defer m.Unlock()
m.started = false m.started = false
@@ -191,7 +191,7 @@ func (m *MQTTConsumer) Stop() {
m.started = false m.started = false
} }
func (m *MQTTConsumer) Gather(acc telegraf.Accumulator) error { func (m *MQTTConsumer) Gather(acc plugins.Accumulator) error {
return nil return nil
} }
@@ -242,7 +242,7 @@ func (m *MQTTConsumer) createOpts() (*mqtt.ClientOptions, error) {
} }
func init() { func init() {
inputs.Add("mqtt_consumer", func() telegraf.Input { inputs.Add("mqtt_consumer", func() plugins.Input {
return &MQTTConsumer{} return &MQTTConsumer{}
}) })
} }

View File

@@ -25,8 +25,8 @@ This plugin gathers the statistic data from MySQL server
## [username[:password]@][protocol[(address)]]/[?tls=[true|false|skip-verify]] ## [username[:password]@][protocol[(address)]]/[?tls=[true|false|skip-verify]]
## see https://github.com/go-sql-driver/mysql#dsn-data-source-name ## see https://github.com/go-sql-driver/mysql#dsn-data-source-name
## e.g. ## e.g.
## db_user:passwd@tcp(127.0.0.1:3306)/?tls=false ## servers = ["user:passwd@tcp(127.0.0.1:3306)/?tls=false"]
## db_user@tcp(127.0.0.1:3306)/?tls=false ## servers = ["user@tcp(127.0.0.1:3306)/?tls=false"]
# #
## If no servers are specified, then localhost is used as the host. ## If no servers are specified, then localhost is used as the host.
servers = ["tcp(127.0.0.1:3306)/"] servers = ["tcp(127.0.0.1:3306)/"]
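The DSN grammar above, `[username[:password]@][protocol[(address)]]/[?tls=...]`, can be assembled mechanically. A hedged sketch of how the example server strings are built (`buildDSN` is an illustrative helper, not part of the driver, which only consumes such strings):

```go
package main

import "fmt"

// buildDSN assembles a go-sql-driver style DSN from its parts. Every
// segment before the "/" is optional; credentials are omitted entirely
// when user is empty, matching the bare default above.
func buildDSN(user, pass, addr string, tls bool) string {
	cred := ""
	if user != "" {
		cred = user
		if pass != "" {
			cred += ":" + pass
		}
		cred += "@"
	}
	return fmt.Sprintf("%stcp(%s)/?tls=%v", cred, addr, tls)
}

func main() {
	fmt.Println(buildDSN("", "", "127.0.0.1:3306", false))
	fmt.Println(buildDSN("user", "passwd", "127.0.0.1:3306", false))
}
```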

View File

@@ -9,7 +9,7 @@ import (
"sync" "sync"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal/errchan" "github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
@@ -41,8 +41,8 @@ var sampleConfig = `
## [username[:password]@][protocol[(address)]]/[?tls=[true|false|skip-verify]] ## [username[:password]@][protocol[(address)]]/[?tls=[true|false|skip-verify]]
## see https://github.com/go-sql-driver/mysql#dsn-data-source-name ## see https://github.com/go-sql-driver/mysql#dsn-data-source-name
## e.g. ## e.g.
## db_user:passwd@tcp(127.0.0.1:3306)/?tls=false ## servers = ["user:passwd@tcp(127.0.0.1:3306)/?tls=false"]
## db_user@tcp(127.0.0.1:3306)/?tls=false ## servers = ["user@tcp(127.0.0.1:3306)/?tls=false"]
# #
## If no servers are specified, then localhost is used as the host. ## If no servers are specified, then localhost is used as the host.
servers = ["tcp(127.0.0.1:3306)/"] servers = ["tcp(127.0.0.1:3306)/"]
@@ -118,7 +118,7 @@ func (m *Mysql) InitMysql() {
initDone = true initDone = true
} }
func (m *Mysql) Gather(acc telegraf.Accumulator) error { func (m *Mysql) Gather(acc plugins.Accumulator) error {
if len(m.Servers) == 0 { if len(m.Servers) == 0 {
// default to localhost if nothing specified. // default to localhost if nothing specified.
return m.gatherServer(localhost, acc) return m.gatherServer(localhost, acc)
@@ -534,7 +534,7 @@ const (
` `
) )
func (m *Mysql) gatherServer(serv string, acc telegraf.Accumulator) error { func (m *Mysql) gatherServer(serv string, acc plugins.Accumulator) error {
serv, err := dsnAddTimeout(serv) serv, err := dsnAddTimeout(serv)
if err != nil { if err != nil {
return err return err
@@ -649,7 +649,7 @@ func (m *Mysql) gatherServer(serv string, acc telegraf.Accumulator) error {
// gatherGlobalVariables can be used to fetch all global variables from // gatherGlobalVariables can be used to fetch all global variables from
// MySQL environment. // MySQL environment.
func (m *Mysql) gatherGlobalVariables(db *sql.DB, serv string, acc telegraf.Accumulator) error { func (m *Mysql) gatherGlobalVariables(db *sql.DB, serv string, acc plugins.Accumulator) error {
// run query // run query
rows, err := db.Query(globalVariablesQuery) rows, err := db.Query(globalVariablesQuery)
if err != nil { if err != nil {
@@ -690,7 +690,7 @@ func (m *Mysql) gatherGlobalVariables(db *sql.DB, serv string, acc telegraf.Accu
// When the server is slave, then it returns only one row. // When the server is slave, then it returns only one row.
// If the multi-source replication is set, then everything works differently // If the multi-source replication is set, then everything works differently
// This code does not work with multi-source replication. // This code does not work with multi-source replication.
func (m *Mysql) gatherSlaveStatuses(db *sql.DB, serv string, acc telegraf.Accumulator) error { func (m *Mysql) gatherSlaveStatuses(db *sql.DB, serv string, acc plugins.Accumulator) error {
// run query // run query
rows, err := db.Query(slaveStatusQuery) rows, err := db.Query(slaveStatusQuery)
if err != nil { if err != nil {
@@ -734,7 +734,7 @@ func (m *Mysql) gatherSlaveStatuses(db *sql.DB, serv string, acc telegraf.Accumu
// gatherBinaryLogs can be used to collect size and count of all binary files // gatherBinaryLogs can be used to collect size and count of all binary files
// binlogs metric requires the MySQL server to turn it on in configuration // binlogs metric requires the MySQL server to turn it on in configuration
func (m *Mysql) gatherBinaryLogs(db *sql.DB, serv string, acc telegraf.Accumulator) error { func (m *Mysql) gatherBinaryLogs(db *sql.DB, serv string, acc plugins.Accumulator) error {
// run query // run query
rows, err := db.Query(binaryLogsQuery) rows, err := db.Query(binaryLogsQuery)
if err != nil { if err != nil {
@@ -771,7 +771,7 @@ func (m *Mysql) gatherBinaryLogs(db *sql.DB, serv string, acc telegraf.Accumulat
// gatherGlobalStatuses can be used to get MySQL status metrics // gatherGlobalStatuses can be used to get MySQL status metrics
// the mappings of actual names and names of each status to be exported // the mappings of actual names and names of each status to be exported
// to output is provided on mappings variable // to output is provided on mappings variable
func (m *Mysql) gatherGlobalStatuses(db *sql.DB, serv string, acc telegraf.Accumulator) error { func (m *Mysql) gatherGlobalStatuses(db *sql.DB, serv string, acc plugins.Accumulator) error {
// If user forgot the '/', add it // If user forgot the '/', add it
if strings.HasSuffix(serv, ")") { if strings.HasSuffix(serv, ")") {
serv = serv + "/" serv = serv + "/"
@@ -828,6 +828,13 @@ func (m *Mysql) gatherGlobalStatuses(db *sql.DB, serv string, acc telegraf.Accum
} }
fields["queries"] = i fields["queries"] = i
case "Questions":
i, err := strconv.ParseInt(string(val.([]byte)), 10, 64)
if err != nil {
return err
}
fields["questions"] = i
case "Slow_queries": case "Slow_queries":
i, err := strconv.ParseInt(string(val.([]byte)), 10, 64) i, err := strconv.ParseInt(string(val.([]byte)), 10, 64)
if err != nil { if err != nil {
@@ -882,7 +889,7 @@ func (m *Mysql) gatherGlobalStatuses(db *sql.DB, serv string, acc telegraf.Accum
// GatherProcessList can be used to collect metrics on each running command // GatherProcessList can be used to collect metrics on each running command
// and its state with its running count // and its state with its running count
func (m *Mysql) GatherProcessListStatuses(db *sql.DB, serv string, acc telegraf.Accumulator) error { func (m *Mysql) GatherProcessListStatuses(db *sql.DB, serv string, acc plugins.Accumulator) error {
// run query // run query
rows, err := db.Query(infoSchemaProcessListQuery) rows, err := db.Query(infoSchemaProcessListQuery)
if err != nil { if err != nil {
@@ -927,7 +934,7 @@ func (m *Mysql) GatherProcessListStatuses(db *sql.DB, serv string, acc telegraf.
// gatherPerfTableIOWaits can be used to get total count and time // gatherPerfTableIOWaits can be used to get total count and time
// of I/O wait event for each table and process // of I/O wait event for each table and process
func (m *Mysql) gatherPerfTableIOWaits(db *sql.DB, serv string, acc telegraf.Accumulator) error { func (m *Mysql) gatherPerfTableIOWaits(db *sql.DB, serv string, acc plugins.Accumulator) error {
rows, err := db.Query(perfTableIOWaitsQuery) rows, err := db.Query(perfTableIOWaitsQuery)
if err != nil { if err != nil {
return err return err
@@ -976,7 +983,7 @@ func (m *Mysql) gatherPerfTableIOWaits(db *sql.DB, serv string, acc telegraf.Acc
// gatherPerfIndexIOWaits can be used to get total count and time // gatherPerfIndexIOWaits can be used to get total count and time
// of I/O wait event for each index and process // of I/O wait event for each index and process
func (m *Mysql) gatherPerfIndexIOWaits(db *sql.DB, serv string, acc telegraf.Accumulator) error { func (m *Mysql) gatherPerfIndexIOWaits(db *sql.DB, serv string, acc plugins.Accumulator) error {
rows, err := db.Query(perfIndexIOWaitsQuery) rows, err := db.Query(perfIndexIOWaitsQuery)
if err != nil { if err != nil {
return err return err
@@ -1029,7 +1036,7 @@ func (m *Mysql) gatherPerfIndexIOWaits(db *sql.DB, serv string, acc telegraf.Acc
 }
 // gatherInfoSchemaAutoIncStatuses can be used to get auto incremented values of the column
-func (m *Mysql) gatherInfoSchemaAutoIncStatuses(db *sql.DB, serv string, acc telegraf.Accumulator) error {
+func (m *Mysql) gatherInfoSchemaAutoIncStatuses(db *sql.DB, serv string, acc plugins.Accumulator) error {
 	rows, err := db.Query(infoSchemaAutoIncQuery)
 	if err != nil {
 		return err
@@ -1066,7 +1073,7 @@ func (m *Mysql) gatherInfoSchemaAutoIncStatuses(db *sql.DB, serv string, acc tel
 // the total number and time for SQL and external lock wait events
 // for each table and operation
 // requires the MySQL server to be enabled to save this metric
-func (m *Mysql) gatherPerfTableLockWaits(db *sql.DB, serv string, acc telegraf.Accumulator) error {
+func (m *Mysql) gatherPerfTableLockWaits(db *sql.DB, serv string, acc plugins.Accumulator) error {
 	// check if table exists,
 	// if performance_schema is not enabled, tables do not exist
 	// then there is no need to scan them
@@ -1195,7 +1202,7 @@ func (m *Mysql) gatherPerfTableLockWaits(db *sql.DB, serv string, acc telegraf.A
 }
 // gatherPerfEventWaits can be used to get total time and number of event waits
-func (m *Mysql) gatherPerfEventWaits(db *sql.DB, serv string, acc telegraf.Accumulator) error {
+func (m *Mysql) gatherPerfEventWaits(db *sql.DB, serv string, acc plugins.Accumulator) error {
 	rows, err := db.Query(perfEventWaitsQuery)
 	if err != nil {
 		return err
@@ -1227,7 +1234,7 @@ func (m *Mysql) gatherPerfEventWaits(db *sql.DB, serv string, acc telegraf.Accum
 }
 // gatherPerfFileEvents can be used to get stats on file events
-func (m *Mysql) gatherPerfFileEventsStatuses(db *sql.DB, serv string, acc telegraf.Accumulator) error {
+func (m *Mysql) gatherPerfFileEventsStatuses(db *sql.DB, serv string, acc plugins.Accumulator) error {
 	rows, err := db.Query(perfFileEventsQuery)
 	if err != nil {
 		return err
@@ -1285,7 +1292,7 @@ func (m *Mysql) gatherPerfFileEventsStatuses(db *sql.DB, serv string, acc telegr
 }
 // gatherPerfEventsStatements can be used to get attributes of each event
-func (m *Mysql) gatherPerfEventsStatements(db *sql.DB, serv string, acc telegraf.Accumulator) error {
+func (m *Mysql) gatherPerfEventsStatements(db *sql.DB, serv string, acc plugins.Accumulator) error {
 	query := fmt.Sprintf(
 		perfEventsStatementsQuery,
 		m.PerfEventsStatementsDigestTextLimit,
@@ -1352,7 +1359,7 @@ func (m *Mysql) gatherPerfEventsStatements(db *sql.DB, serv string, acc telegraf
 }
 // gatherTableSchema can be used to gather stats on each schema
-func (m *Mysql) gatherTableSchema(db *sql.DB, serv string, acc telegraf.Accumulator) error {
+func (m *Mysql) gatherTableSchema(db *sql.DB, serv string, acc plugins.Accumulator) error {
 	var dbList []string
 	servtag := getDSNTag(serv)
@@ -1532,7 +1539,7 @@ func getDSNTag(dsn string) string {
 }
 func init() {
-	inputs.Add("mysql", func() telegraf.Input {
+	inputs.Add("mysql", func() plugins.Input {
 		return &Mysql{}
 	})
 }
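The only functional change in this file is the interface package: every gatherer now takes a `plugins.Accumulator`, and `init()` registers the constructor as a `plugins.Input`. A minimal, self-contained sketch of that registration pattern (illustrative names and a trimmed-down interface, not Telegraf's actual types):

```go
package main

import "fmt"

// Input mirrors the role of plugins.Input in this refactor: the minimal
// contract an input plugin must satisfy. (Illustrative only.)
type Input interface {
    Description() string
}

// Creator builds a fresh plugin instance each time it is called.
type Creator func() Input

// registry maps plugin names to constructors, as inputs.Add does.
var registry = map[string]Creator{}

// Add registers a constructor under a name; each plugin package calls
// this from its init().
func Add(name string, c Creator) {
    registry[name] = c
}

type mysqlStub struct{}

func (m *mysqlStub) Description() string { return "reads metrics from MySQL servers" }

func main() {
    Add("mysql", func() Input { return &mysqlStub{} })
    fmt.Println(registry["mysql"]().Description()) // reads metrics from MySQL servers
}
```

Because registration happens in `init()`, merely importing a plugin package is enough to make it available by name, which is what lets the external-plugin loader in this branch pick plugins up without hard-coded wiring.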


@@ -5,7 +5,7 @@ import (
 	"log"
 	"sync"
-	"github.com/influxdata/telegraf"
+	"github.com/influxdata/telegraf/plugins"
 	"github.com/influxdata/telegraf/plugins/inputs"
 	"github.com/influxdata/telegraf/plugins/parsers"
 	"github.com/nats-io/nats"
@@ -47,7 +47,7 @@ type natsConsumer struct {
 	// channel for all NATS read errors
 	errs chan error
 	done chan struct{}
-	acc  telegraf.Accumulator
+	acc  plugins.Accumulator
 }
 var sampleConfig = `
@@ -93,7 +93,7 @@ func (n *natsConsumer) natsErrHandler(c *nats.Conn, s *nats.Subscription, e erro
 }
 // Start the nats consumer. Caller must call *natsConsumer.Stop() to clean up.
-func (n *natsConsumer) Start(acc telegraf.Accumulator) error {
+func (n *natsConsumer) Start(acc plugins.Accumulator) error {
 	n.Lock()
 	defer n.Unlock()
@@ -197,12 +197,12 @@ func (n *natsConsumer) Stop() {
 	n.Unlock()
 }
-func (n *natsConsumer) Gather(acc telegraf.Accumulator) error {
+func (n *natsConsumer) Gather(acc plugins.Accumulator) error {
 	return nil
 }
 func init() {
-	inputs.Add("nats_consumer", func() telegraf.Input {
+	inputs.Add("nats_consumer", func() plugins.Input {
 		return &natsConsumer{
 			Servers: []string{"nats://localhost:4222"},
 			Secure:  false,


@@ -6,6 +6,27 @@ It can also check response text.
 ### Configuration:
 ```
+[[inputs.net_response]]
+  ## Protocol, must be "tcp" or "udp"
+  ## NOTE: because the "udp" protocol does not respond to requests, it requires
+  ## a send/expect string pair (see below).
+  protocol = "tcp"
+  ## Server address (default localhost)
+  address = "localhost:80"
+  ## Set timeout
+  timeout = "1s"
+  ## Set read timeout (only used if expecting a response)
+  read_timeout = "1s"
+  ## The following options are required for UDP checks. For TCP, they are
+  ## optional. The plugin will send the given string to the server and then
+  ## expect to receive the given 'expect' string back.
+  ## string sent to the server
+  # send = "ssh"
+  ## expected string in answer
+  # expect = "ssh"
 [[inputs.net_response]]
   protocol = "tcp"
   address = ":80"
@@ -30,6 +51,8 @@ It can also check response text.
   protocol = "udp"
   address = "localhost:161"
   timeout = "2s"
+  send = "hello server"
+  expect = "hello client"
 ```
### Measurements & Fields: ### Measurements & Fields:

Some files were not shown because too many files have changed in this diff.