Compare commits


28 Commits

Author SHA1 Message Date
Cameron Sparr
434c08a357 Release 0.10.2 2016-02-04 11:04:29 -07:00
Cameron Sparr
bd9c5b6995 mqtt output: cleanup, implement TLS
Also normalize TLS config and comment strings across all
output plugins.
2016-02-04 10:44:37 -07:00
Cameron Sparr
b941d270ce changelog update 2016-02-03 08:35:03 -07:00
Reginaldo Sousa
9406961125 Fix a bug when setting host header in httpjson
closes #634
2016-02-02 21:59:18 -07:00
Rune Darrud
0d391b66a3 Added support for pre-Vista Windows operating systems. 2016-02-02 21:57:38 -07:00
Cameron Sparr
a11e07e250 Minor change to forgotten config file exit 2016-02-01 17:44:19 -07:00
Cameron Sparr
d266dad1f4 Don't compile ping plugin on windows.
closes #496
2016-02-01 16:39:53 -07:00
Rune Darrud
331b700d1b Corrected an issue from an earlier code cleanup
wherein missing performance counters caused an early
return from the loop instead of being ignored in
default configuration mode.

closes #625
2016-01-31 23:17:45 -07:00
Christoph Wegener
2163fde0a4 Fix memory leak: Remove signal.Notify code from plugins/inputs/win_perf_counters.(*Win_PerfCounters).Gather 2016-01-31 23:16:09 -07:00
Cameron Sparr
24a2aaef4b Ansible role in readme 2016-01-30 11:55:48 -07:00
Cameron Sparr
042cf517b2 Mention yum/apt repo in README
Also add `make windows-build` to Makefile

closes #618
2016-01-30 11:35:39 -07:00
Cameron Sparr
b97027ac9a Allow exec plugin to parse line-protocol
closes #613
2016-01-30 11:12:59 -07:00
Christoph Wegener
4ea3f82e50 Replace all single percent characters with doubled
percent characters in sampleConfig strings so that fmt.Printf
interprets them as literal percent signs when
running 'telegraf.exe -sample-config'

closes #620
2016-01-30 10:10:55 -07:00
Cameron Sparr
38c4111e6c Add unit tests for the root telegraf package 2016-01-29 16:01:34 -07:00
Cameron Sparr
338341add8 Put windows dependencies into a separate Godeps file 2016-01-29 11:10:18 -07:00
Cameron Sparr
93bb679f9d Fix possible panic if stat is nil
closes #612
2016-01-29 10:47:30 -07:00
Pavel Yudin
40d859354f Add powerdns input plugin
closes #614
2016-01-29 09:40:04 -07:00
Cameron Sparr
9e7c8df384 statsd: allow template parsing fields. Default to value=
closes #602
2016-01-28 16:56:50 -07:00
Rune Darrud
f088dd7e00 Added plugin to read Windows performance counters
closes #575
2016-01-28 16:35:13 -07:00
Cameron Sparr
10c4e4f63f Fix datadog json marshalling
fixes #607
2016-01-28 16:12:33 -07:00
Cameron Sparr
962325cc40 Warn when metrics are being overwritten
closes #601
2016-01-28 14:00:14 -07:00
root
a9c33abfa5 sql server: update README.md
closes #594
2016-01-28 13:50:26 -07:00
Cameron Sparr
d835c19fce Insert `.` between measurement and field name in datadog output
fixes #600
2016-01-28 12:04:26 -07:00
Marcin Bunsch
1f1384afc6 Use a single measurement with fields for timings in statsd plugin.
closes #603
2016-01-28 12:03:48 -07:00
Cameron Sparr
9d4b55be19 Include all tag values in graphite output
closes #595
2016-01-28 10:58:35 -07:00
Cameron Sparr
c549ab907a Throughout telegraf, use telegraf.Metric rather than client.Point
closes #599
2016-01-27 23:47:32 -07:00
Cameron Sparr
9c0d14bb60 Create public models for telegraf metrics, accumulator, plugins
This will basically make the root directory a place for storing the
major telegraf interfaces, which will make telegraf's godoc look quite
a bit nicer and make it easier for contributors to look up the few data
types that they actually care about.

closes #564
2016-01-27 15:42:50 -07:00
Cameron Sparr
a822d942cd 386 -> i386 2016-01-27 13:42:34 -07:00
132 changed files with 4788 additions and 1282 deletions


@@ -1,4 +1,4 @@
## v0.10.2 [unreleased]
## v0.10.3 [unreleased]
### Release Notes
@@ -6,6 +6,34 @@
### Bugfixes
## v0.10.2 [2016-02-04]
### Release Notes
- Statsd timing measurements are now aggregated into a single measurement with
fields.
- Graphite output now inserts tags into the bucket in alphabetical order.
- Normalized TLS/SSL support for output plugins: MQTT, AMQP, Kafka
- `verify_ssl` config option was removed from Kafka because it was actually
doing the opposite of what it claimed to do (yikes). It's been replaced by
`insecure_skip_verify`
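The alphabetical tag ordering mentioned above can be sketched in Go; the bucket naming below is illustrative, not Telegraf's exact graphite serializer:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// buildBucket inserts tag values into a graphite bucket in alphabetical
// tag-key order, which makes the output deterministic regardless of Go's
// randomized map iteration order.
func buildBucket(measurement string, tags map[string]string) string {
	keys := make([]string, 0, len(tags))
	for k := range tags {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	parts := []string{measurement}
	for _, k := range keys {
		parts = append(parts, tags[k])
	}
	return strings.Join(parts, ".")
}

func main() {
	// "dc" sorts before "host", so the dc value comes first.
	fmt.Println(buildBucket("cpu", map[string]string{"host": "a", "dc": "us"}))
	// prints: cpu.us.a
}
```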
### Features
- [#575](https://github.com/influxdata/telegraf/pull/575): Support for collecting Windows Performance Counters. Thanks @TheFlyingCorpse!
- [#564](https://github.com/influxdata/telegraf/issues/564): Simplify plugin writing; introduce an internal metric data type.
- [#603](https://github.com/influxdata/telegraf/pull/603): Aggregate statsd timing measurements into fields. Thanks @marcinbunsch!
- [#601](https://github.com/influxdata/telegraf/issues/601): Warn when overwriting cached metrics.
- [#614](https://github.com/influxdata/telegraf/pull/614): PowerDNS input plugin. Thanks @Kasen!
- [#617](https://github.com/influxdata/telegraf/pull/617): exec plugin: parse influx line protocol in addition to JSON.
- [#628](https://github.com/influxdata/telegraf/pull/628): Windows perf counters: pre-Vista support
### Bugfixes
- [#595](https://github.com/influxdata/telegraf/issues/595): graphite output should include tags to separate duplicate measurements.
- [#599](https://github.com/influxdata/telegraf/issues/599): datadog plugin tags not working.
- [#600](https://github.com/influxdata/telegraf/issues/600): datadog measurement/field name parsing is wrong.
- [#602](https://github.com/influxdata/telegraf/issues/602): Fix statsd field name templating.
- [#612](https://github.com/influxdata/telegraf/pull/612): Docker input panic fix if stats received are nil.
- [#634](https://github.com/influxdata/telegraf/pull/634): Properly set host headers in httpjson. Thanks @reginaldosousa!
## v0.10.1 [2016-01-27]
### Release Notes


@@ -37,7 +37,7 @@ and submit new inputs.
### Input Plugin Guidelines
* A plugin must conform to the `inputs.Input` interface.
* A plugin must conform to the `telegraf.Input` interface.
* Input Plugins should call `inputs.Add` in their `init` function to register themselves.
See below for a quick example.
* Input Plugins must be added to the
@@ -97,7 +97,10 @@ package simple
// simple.go
import "github.com/influxdata/telegraf/plugins/inputs"
import (
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
type Simple struct {
Ok bool
@@ -122,7 +125,7 @@ func (s *Simple) Gather(acc inputs.Accumulator) error {
}
func init() {
inputs.Add("simple", func() inputs.Input { return &Simple{} })
inputs.Add("simple", func() telegraf.Input { return &Simple{} })
}
```
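The registration pattern in the example above can be sketched as a self-contained Go program, using local stand-in types in place of the real `telegraf` and `inputs` packages:

```go
package main

import "fmt"

// Stand-ins for telegraf.Input and the inputs registry.
type Input interface {
	Description() string
	Gather() error
}

type Creator func() Input

var registry = map[string]Creator{}

// Add mirrors inputs.Add: plugins register a factory under a name in init().
func Add(name string, c Creator) { registry[name] = c }

type Simple struct{ Ok bool }

func (s *Simple) Description() string { return "a simple example plugin" }
func (s *Simple) Gather() error       { return nil }

func main() {
	// Mirrors: inputs.Add("simple", func() telegraf.Input { return &Simple{} })
	Add("simple", func() Input { return &Simple{} })
	in := registry["simple"]()
	fmt.Println(in.Description()) // prints: a simple example plugin
}
```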
@@ -182,7 +185,7 @@ type Output interface {
Close() error
Description() string
SampleConfig() string
Write(points []*client.Point) error
Write(metrics []telegraf.Metric) error
}
```
@@ -193,7 +196,10 @@ package simpleoutput
// simpleoutput.go
import "github.com/influxdata/telegraf/plugins/outputs"
import (
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/outputs"
)
type Simple struct {
Ok bool
@@ -217,7 +223,7 @@ func (s *Simple) Close() error {
return nil
}
func (s *Simple) Write(points []*client.Point) error {
func (s *Simple) Write(metrics []telegraf.Metric) error {
for _, pt := range points {
// write `pt` to the output sink here
}
@@ -225,7 +231,7 @@ func (s *Simple) Write(points []*client.Point) error {
}
func init() {
outputs.Add("simpleoutput", func() outputs.Output { return &Simple{} })
outputs.Add("simpleoutput", func() telegraf.Output { return &Simple{} })
}
```
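The new `Write(metrics []telegraf.Metric)` signature changes the loop body accordingly; a self-contained sketch with a stand-in `Metric` type:

```go
package main

import "fmt"

// Metric stands in for telegraf.Metric, which exposes a line-protocol String().
type Metric interface{ String() string }

type lineMetric string

func (m lineMetric) String() string { return string(m) }

type Simple struct{}

// Write ranges over metrics rather than client.Point values,
// matching the new signature in the diff above.
func (s *Simple) Write(metrics []Metric) error {
	for _, m := range metrics {
		fmt.Println(m.String()) // write each metric to the output sink here
	}
	return nil
}

func main() {
	s := &Simple{}
	s.Write([]Metric{lineMetric("cpu value=1"), lineMetric("mem value=2")})
}
```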
@@ -253,7 +259,7 @@ type ServiceOutput interface {
Close() error
Description() string
SampleConfig() string
Write(points []*client.Point) error
Write(metrics []telegraf.Metric) error
Start() error
Stop()
}
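A `ServiceOutput` adds a `Start`/`Stop` lifecycle on top of a plain `Output`; a minimal sketch of that lifecycle with stand-in types (not the real telegraf interfaces):

```go
package main

import (
	"fmt"
	"sync"
)

// svcOutput sketches the Start/Stop contract: Start launches background
// work, Stop signals it to end and waits for it to finish.
type svcOutput struct {
	done chan struct{}
	wg   sync.WaitGroup
}

func (s *svcOutput) Start() error {
	s.done = make(chan struct{})
	s.wg.Add(1)
	go func() {
		defer s.wg.Done()
		<-s.done // long-running service work would live here
	}()
	return nil
}

func (s *svcOutput) Stop() {
	close(s.done)
	s.wg.Wait() // block until the background goroutine exits
	fmt.Println("stopped")
}

func main() {
	s := &svcOutput{}
	s.Start()
	s.Stop()
}
```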

Godeps_windows (new file, 63 lines)

@@ -0,0 +1,63 @@
git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git dbd8d5c40a582eb9adacde36b47932b3a3ad0034
github.com/Shopify/sarama b1da1753dedcf77d053613b7eae907b98a2ddad5
github.com/Sirupsen/logrus f7f79f729e0fbe2fcc061db48a9ba0263f588252
github.com/StackExchange/wmi f3e2bae1e0cb5aef83e319133eabfee30013a4a5
github.com/amir/raidman 6a8e089bbe32e6b907feae5ba688841974b3c339
github.com/armon/go-metrics 345426c77237ece5dab0e1605c3e4b35c3f54757
github.com/aws/aws-sdk-go 2a34ea8812f32aae75b43400f9424a0559840659
github.com/beorn7/perks b965b613227fddccbfffe13eae360ed3fa822f8d
github.com/boltdb/bolt ee4a0888a9abe7eefe5a0992ca4cb06864839873
github.com/cenkalti/backoff 4dc77674aceaabba2c7e3da25d4c823edfb73f99
github.com/dancannon/gorethink 6f088135ff288deb9d5546f4c71919207f891a70
github.com/davecgh/go-spew 5215b55f46b2b919f50a1df0eaa5886afe4e3b3d
github.com/eapache/go-resiliency b86b1ec0dd4209a588dc1285cdd471e73525c0b3
github.com/eapache/queue ded5959c0d4e360646dc9e9908cff48666781367
github.com/fsouza/go-dockerclient 02a8beb401b20e112cff3ea740545960b667eab1
github.com/go-ini/ini afbd495e5aaea13597b5e14fe514ddeaa4d76fc3
github.com/go-ole/go-ole 50055884d646dd9434f16bbb5c9801749b9bafe4
github.com/go-sql-driver/mysql 7c7f556282622f94213bc028b4d0a7b6151ba239
github.com/gogo/protobuf e8904f58e872a473a5b91bc9bf3377d223555263
github.com/golang/protobuf 45bba206dd5270d96bac4942dcfe515726613249
github.com/golang/snappy 1963d058044b19e16595f80d5050fa54e2070438
github.com/gonuts/go-shellquote e842a11b24c6abfb3dd27af69a17f482e4b483c2
github.com/gorilla/context 1c83b3eabd45b6d76072b66b746c20815fb2872d
github.com/gorilla/mux 26a6070f849969ba72b72256e9f14cf519751690
github.com/hailocab/go-hostpool e80d13ce29ede4452c43dea11e79b9bc8a15b478
github.com/hashicorp/go-msgpack fa3f63826f7c23912c15263591e65d54d080b458
github.com/hashicorp/raft 057b893fd996696719e98b6c44649ea14968c811
github.com/hashicorp/raft-boltdb d1e82c1ec3f15ee991f7cc7ffd5b67ff6f5bbaee
github.com/influxdata/config bae7cb98197d842374d3b8403905924094930f24
github.com/influxdata/influxdb 60df13fb566d07ff2cdd07aa23a4796a02b0df3c
github.com/influxdb/influxdb 60df13fb566d07ff2cdd07aa23a4796a02b0df3c
github.com/jmespath/go-jmespath c01cf91b011868172fdcd9f41838e80c9d716264
github.com/klauspost/crc32 999f3125931f6557b991b2f8472172bdfa578d38
github.com/lib/pq 8ad2b298cadd691a77015666a5372eae5dbfac8f
github.com/lxn/win 9a7734ea4db26bc593d52f6a8a957afdad39c5c1
github.com/matttproud/golang_protobuf_extensions d0c3fe89de86839aecf2e0579c40ba3bb336a453
github.com/mreiferson/go-snappystream 028eae7ab5c4c9e2d1cb4c4ca1e53259bbe7e504
github.com/naoina/go-stringutil 6b638e95a32d0c1131db0e7fe83775cbea4a0d0b
github.com/naoina/toml 751171607256bb66e64c9f0220c00662420c38e9
github.com/nsqio/go-nsq 2118015c120962edc5d03325c680daf3163a8b5f
github.com/pborman/uuid dee7705ef7b324f27ceb85a121c61f2c2e8ce988
github.com/pmezard/go-difflib 792786c7400a136282c1664665ae0a8db921c6c2
github.com/prometheus/client_golang 67994f177195311c3ea3d4407ed0175e34a4256f
github.com/prometheus/client_model fa8ad6fec33561be4280a8f0514318c79d7f6cb6
github.com/prometheus/common 14ca1097bbe21584194c15e391a9dab95ad42a59
github.com/prometheus/procfs 406e5b7bfd8201a36e2bb5f7bdae0b03380c2ce8
github.com/samuel/go-zookeeper 218e9c81c0dd8b3b18172b2bbfad92cc7d6db55f
github.com/shirou/gopsutil 9d8191d6a6e17dcf43b10a20084a11e8c1aa92e6
github.com/shirou/w32 ada3ba68f000aa1b58580e45c9d308fe0b7fc5c5
github.com/soniah/gosnmp b1b4f885b12c5dcbd021c5cee1c904110de6db7d
github.com/streadway/amqp b4f3ceab0337f013208d31348b578d83c0064744
github.com/stretchr/objx 1a9d0bb9f541897e62256577b352fdbc1fb4fd94
github.com/stretchr/testify f390dcf405f7b83c997eac1b06768bb9f44dec18
github.com/wvanbergen/kafka 1a8639a45164fcc245d5c7b4bd3ccfbd1a0ffbf3
github.com/wvanbergen/kazoo-go 0f768712ae6f76454f987c3356177e138df258f8
github.com/zensqlmonitor/go-mssqldb ffe5510c6fa5e15e6d983210ab501c815b56b363
golang.org/x/crypto 1f22c0103821b9390939b6776727195525381532
golang.org/x/net 04b9de9b512f58addf28c9853d50ebef61c3953e
golang.org/x/text 6fc2e00a0d64b1f7fc1212dae5b0c939cf6d9ac4
gopkg.in/dancannon/gorethink.v1 6f088135ff288deb9d5546f4c71919207f891a70
gopkg.in/fatih/pool.v2 cba550ebf9bce999a02e963296d4bc7a486cb715
gopkg.in/mgo.v2 03c9f3ee4c14c8e51ee521a6a7d0425658dd6f64
gopkg.in/yaml.v2 f7716cbe52baa25d2e9b0d0da546fcf909fc16b4


@@ -9,12 +9,20 @@ endif
# Standard Telegraf build
default: prepare build
# Windows build
windows: prepare-windows build-windows
# Only run the build (no dependency grabbing)
build:
go build -o telegraf -ldflags \
"-X main.Version=$(VERSION)" \
./cmd/telegraf/telegraf.go
build-windows:
go build -o telegraf.exe -ldflags \
"-X main.Version=$(VERSION)" \
./cmd/telegraf/telegraf.go
# Build with race detector
dev: prepare
go build -race -o telegraf -ldflags \
@@ -26,6 +34,11 @@ prepare:
go get github.com/sparrc/gdm
gdm restore
# Use the windows godeps file to prepare dependencies
prepare-windows:
go get github.com/sparrc/gdm
gdm restore -f Godeps_windows
# Run all docker containers necessary for unit tests
docker-run:
ifeq ($(UNAME), Darwin)


@@ -24,17 +24,17 @@ will continue to be supported, see below for download links.
For more details on the differences between Telegraf 0.2.x and 0.10.x, see
the [release blog post](https://influxdata.com/blog/announcing-telegraf-0-10-0/).
### Linux deb and rpm packages:
### Linux deb and rpm Packages:
Latest:
* http://get.influxdb.org/telegraf/telegraf_0.10.1-1_amd64.deb
* http://get.influxdb.org/telegraf/telegraf-0.10.1-1.x86_64.rpm
* http://get.influxdb.org/telegraf/telegraf_0.10.2-1_amd64.deb
* http://get.influxdb.org/telegraf/telegraf-0.10.2-1.x86_64.rpm
0.2.x:
* http://get.influxdb.org/telegraf/telegraf_0.2.4_amd64.deb
* http://get.influxdb.org/telegraf/telegraf-0.2.4-1.x86_64.rpm
##### Package instructions:
##### Package Instructions:
* Telegraf binary is installed in `/usr/bin/telegraf`
* Telegraf daemon configuration file is in `/etc/telegraf/telegraf.conf`
@@ -43,32 +43,42 @@ Latest:
* On systemd systems (such as Ubuntu 15+), the telegraf daemon can be
controlled via `systemctl [action] telegraf`
### yum/apt Repositories:
There is a yum/apt repo available for the whole InfluxData stack, see
[here](https://docs.influxdata.com/influxdb/v0.9/introduction/installation/#installation)
for instructions, replacing the `influxdb` package name with `telegraf`.
### Linux tarballs:
Latest:
* http://get.influxdb.org/telegraf/telegraf-0.10.1-1_linux_amd64.tar.gz
* http://get.influxdb.org/telegraf/telegraf-0.10.1-1_linux_386.tar.gz
* http://get.influxdb.org/telegraf/telegraf-0.10.1-1_linux_arm.tar.gz
* http://get.influxdb.org/telegraf/telegraf-0.10.2-1_linux_amd64.tar.gz
* http://get.influxdb.org/telegraf/telegraf-0.10.2-1_linux_i386.tar.gz
* http://get.influxdb.org/telegraf/telegraf-0.10.2-1_linux_arm.tar.gz
0.2.x:
* http://get.influxdb.org/telegraf/telegraf_linux_amd64_0.2.4.tar.gz
* http://get.influxdb.org/telegraf/telegraf_linux_386_0.2.4.tar.gz
* http://get.influxdb.org/telegraf/telegraf_linux_arm_0.2.4.tar.gz
##### tarball instructions:
##### tarball Instructions:
To install the full directory structure with config file, run:
```
sudo tar -C / -xvf ./telegraf-v0.10.1-1_linux_amd64.tar.gz
sudo tar -C / -xvf ./telegraf-v0.10.2-1_linux_amd64.tar.gz
```
To extract only the binary, run:
```
tar -zxvf telegraf-v0.10.1-1_linux_amd64.tar.gz --strip-components=3 ./usr/bin/telegraf
tar -zxvf telegraf-v0.10.2-1_linux_amd64.tar.gz --strip-components=3 ./usr/bin/telegraf
```
### Ansible Role:
Ansible role: https://github.com/rossmcdonald/telegraf
### OSX via Homebrew:
```
@@ -88,7 +98,7 @@ if you don't have it already. You also must build with golang version 1.5+.
4. Run `cd $GOPATH/src/github.com/influxdata/telegraf`
5. Run `make`
### How to use it:
## How to use it:
```console
$ telegraf -help
@@ -165,6 +175,7 @@ Currently implemented sources:
* phusion passenger
* ping
* postgresql
* powerdns
* procstat
* prometheus
* puppetagent
@@ -177,6 +188,7 @@ Currently implemented sources:
* zookeeper
* sensors
* snmp
* win_perf_counters (windows performance counters)
* system
* cpu
* mem
@@ -216,4 +228,4 @@ want to add support for another service or third-party API.
Please see the
[contributing guide](CONTRIBUTING.md)
for details on contributing a plugin or output to Telegraf.
for details on contributing a plugin to Telegraf.


@@ -1,188 +1,21 @@
package telegraf
import (
"fmt"
"log"
"math"
"sync"
"time"
"github.com/influxdata/telegraf/internal/models"
"github.com/influxdata/influxdb/client/v2"
)
import "time"
type Accumulator interface {
Add(measurement string, value interface{},
tags map[string]string, t ...time.Time)
AddFields(measurement string, fields map[string]interface{},
tags map[string]string, t ...time.Time)
// Create a point with a value, decorating it with tags
// NOTE: tags is expected to be owned by the caller, don't mutate
// it after passing to Add.
Add(measurement string,
value interface{},
tags map[string]string,
t ...time.Time)
SetDefaultTags(tags map[string]string)
AddDefaultTag(key, value string)
Prefix() string
SetPrefix(prefix string)
AddFields(measurement string,
fields map[string]interface{},
tags map[string]string,
t ...time.Time)
Debug() bool
SetDebug(enabled bool)
}
func NewAccumulator(
inputConfig *models.InputConfig,
points chan *client.Point,
) Accumulator {
acc := accumulator{}
acc.points = points
acc.inputConfig = inputConfig
return &acc
}
type accumulator struct {
sync.Mutex
points chan *client.Point
defaultTags map[string]string
debug bool
inputConfig *models.InputConfig
prefix string
}
func (ac *accumulator) Add(
measurement string,
value interface{},
tags map[string]string,
t ...time.Time,
) {
fields := make(map[string]interface{})
fields["value"] = value
ac.AddFields(measurement, fields, tags, t...)
}
func (ac *accumulator) AddFields(
measurement string,
fields map[string]interface{},
tags map[string]string,
t ...time.Time,
) {
if len(fields) == 0 || len(measurement) == 0 {
return
}
if !ac.inputConfig.Filter.ShouldTagsPass(tags) {
return
}
// Override measurement name if set
if len(ac.inputConfig.NameOverride) != 0 {
measurement = ac.inputConfig.NameOverride
}
// Apply measurement prefix and suffix if set
if len(ac.inputConfig.MeasurementPrefix) != 0 {
measurement = ac.inputConfig.MeasurementPrefix + measurement
}
if len(ac.inputConfig.MeasurementSuffix) != 0 {
measurement = measurement + ac.inputConfig.MeasurementSuffix
}
if tags == nil {
tags = make(map[string]string)
}
// Apply plugin-wide tags if set
for k, v := range ac.inputConfig.Tags {
if _, ok := tags[k]; !ok {
tags[k] = v
}
}
// Apply daemon-wide tags if set
for k, v := range ac.defaultTags {
if _, ok := tags[k]; !ok {
tags[k] = v
}
}
result := make(map[string]interface{})
for k, v := range fields {
// Filter out any filtered fields
if ac.inputConfig != nil {
if !ac.inputConfig.Filter.ShouldPass(k) {
continue
}
}
result[k] = v
// Validate uint64 and float64 fields
switch val := v.(type) {
case uint64:
// InfluxDB does not support writing uint64
if val < uint64(9223372036854775808) {
result[k] = int64(val)
} else {
result[k] = int64(9223372036854775807)
}
case float64:
// NaNs are invalid values in influxdb, skip measurement
if math.IsNaN(val) || math.IsInf(val, 0) {
if ac.debug {
log.Printf("Measurement [%s] field [%s] has a NaN or Inf "+
"field, skipping",
measurement, k)
}
continue
}
}
}
fields = nil
if len(result) == 0 {
return
}
var timestamp time.Time
if len(t) > 0 {
timestamp = t[0]
} else {
timestamp = time.Now()
}
if ac.prefix != "" {
measurement = ac.prefix + measurement
}
pt, err := client.NewPoint(measurement, tags, result, timestamp)
if err != nil {
log.Printf("Error adding point [%s]: %s\n", measurement, err.Error())
return
}
if ac.debug {
fmt.Println("> " + pt.String())
}
ac.points <- pt
}
func (ac *accumulator) SetDefaultTags(tags map[string]string) {
ac.defaultTags = tags
}
func (ac *accumulator) AddDefaultTag(key, value string) {
ac.defaultTags[key] = value
}
func (ac *accumulator) Prefix() string {
return ac.prefix
}
func (ac *accumulator) SetPrefix(prefix string) {
ac.prefix = prefix
}
func (ac *accumulator) Debug() bool {
return ac.debug
}
func (ac *accumulator) SetDebug(debug bool) {
ac.debug = debug
}

agent/accumulator.go (new file, 163 lines)

@@ -0,0 +1,163 @@
package agent
import (
"fmt"
"log"
"math"
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal/models"
)
func NewAccumulator(
inputConfig *internal_models.InputConfig,
metrics chan telegraf.Metric,
) *accumulator {
acc := accumulator{}
acc.metrics = metrics
acc.inputConfig = inputConfig
return &acc
}
type accumulator struct {
sync.Mutex
metrics chan telegraf.Metric
defaultTags map[string]string
debug bool
inputConfig *internal_models.InputConfig
prefix string
}
func (ac *accumulator) Add(
measurement string,
value interface{},
tags map[string]string,
t ...time.Time,
) {
fields := make(map[string]interface{})
fields["value"] = value
ac.AddFields(measurement, fields, tags, t...)
}
func (ac *accumulator) AddFields(
measurement string,
fields map[string]interface{},
tags map[string]string,
t ...time.Time,
) {
if len(fields) == 0 || len(measurement) == 0 {
return
}
if !ac.inputConfig.Filter.ShouldTagsPass(tags) {
return
}
// Override measurement name if set
if len(ac.inputConfig.NameOverride) != 0 {
measurement = ac.inputConfig.NameOverride
}
// Apply measurement prefix and suffix if set
if len(ac.inputConfig.MeasurementPrefix) != 0 {
measurement = ac.inputConfig.MeasurementPrefix + measurement
}
if len(ac.inputConfig.MeasurementSuffix) != 0 {
measurement = measurement + ac.inputConfig.MeasurementSuffix
}
if tags == nil {
tags = make(map[string]string)
}
// Apply plugin-wide tags if set
for k, v := range ac.inputConfig.Tags {
if _, ok := tags[k]; !ok {
tags[k] = v
}
}
// Apply daemon-wide tags if set
for k, v := range ac.defaultTags {
if _, ok := tags[k]; !ok {
tags[k] = v
}
}
result := make(map[string]interface{})
for k, v := range fields {
// Filter out any filtered fields
if ac.inputConfig != nil {
if !ac.inputConfig.Filter.ShouldPass(k) {
continue
}
}
result[k] = v
// Validate uint64 and float64 fields
switch val := v.(type) {
case uint64:
// InfluxDB does not support writing uint64
if val < uint64(9223372036854775808) {
result[k] = int64(val)
} else {
result[k] = int64(9223372036854775807)
}
case float64:
// NaNs are invalid values in influxdb, skip measurement
if math.IsNaN(val) || math.IsInf(val, 0) {
if ac.debug {
log.Printf("Measurement [%s] field [%s] has a NaN or Inf "+
"field, skipping",
measurement, k)
}
continue
}
}
}
fields = nil
if len(result) == 0 {
return
}
var timestamp time.Time
if len(t) > 0 {
timestamp = t[0]
} else {
timestamp = time.Now()
}
if ac.prefix != "" {
measurement = ac.prefix + measurement
}
m, err := telegraf.NewMetric(measurement, tags, result, timestamp)
if err != nil {
log.Printf("Error adding point [%s]: %s\n", measurement, err.Error())
return
}
if ac.debug {
fmt.Println("> " + m.String())
}
ac.metrics <- m
}
func (ac *accumulator) Debug() bool {
return ac.debug
}
func (ac *accumulator) SetDebug(debug bool) {
ac.debug = debug
}
func (ac *accumulator) setDefaultTags(tags map[string]string) {
ac.defaultTags = tags
}
func (ac *accumulator) addDefaultTag(key, value string) {
ac.defaultTags[key] = value
}


@@ -1,4 +1,4 @@
package telegraf
package agent
import (
cryptorand "crypto/rand"
@@ -11,12 +11,9 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal/config"
"github.com/influxdata/telegraf/internal/models"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/outputs"
"github.com/influxdata/influxdb/client/v2"
)
// Agent runs telegraf and collects data based on the given config
@@ -48,7 +45,7 @@ func NewAgent(config *config.Config) (*Agent, error) {
func (a *Agent) Connect() error {
for _, o := range a.Config.Outputs {
switch ot := o.Output.(type) {
case outputs.ServiceOutput:
case telegraf.ServiceOutput:
if err := ot.Start(); err != nil {
log.Printf("Service for output %s failed to start, exiting\n%s\n",
o.Name, err.Error())
@@ -81,14 +78,14 @@ func (a *Agent) Close() error {
for _, o := range a.Config.Outputs {
err = o.Output.Close()
switch ot := o.Output.(type) {
case outputs.ServiceOutput:
case telegraf.ServiceOutput:
ot.Stop()
}
}
return err
}
func panicRecover(input *models.RunningInput) {
func panicRecover(input *internal_models.RunningInput) {
if err := recover(); err != nil {
trace := make([]byte, 2048)
runtime.Stack(trace, true)
@@ -102,7 +99,7 @@ func panicRecover(input *models.RunningInput) {
// gatherParallel runs the inputs that are using the same reporting interval
// as the telegraf agent.
func (a *Agent) gatherParallel(pointChan chan *client.Point) error {
func (a *Agent) gatherParallel(metricC chan telegraf.Metric) error {
var wg sync.WaitGroup
start := time.Now()
@@ -115,13 +112,13 @@ func (a *Agent) gatherParallel(pointChan chan *client.Point) error {
wg.Add(1)
counter++
go func(input *models.RunningInput) {
go func(input *internal_models.RunningInput) {
defer panicRecover(input)
defer wg.Done()
acc := NewAccumulator(input.Config, pointChan)
acc := NewAccumulator(input.Config, metricC)
acc.SetDebug(a.Config.Agent.Debug)
acc.SetDefaultTags(a.Config.Tags)
acc.setDefaultTags(a.Config.Tags)
if jitter != 0 {
nanoSleep := rand.Int63n(jitter)
@@ -159,8 +156,8 @@ func (a *Agent) gatherParallel(pointChan chan *client.Point) error {
// reporting interval.
func (a *Agent) gatherSeparate(
shutdown chan struct{},
input *models.RunningInput,
pointChan chan *client.Point,
input *internal_models.RunningInput,
metricC chan telegraf.Metric,
) error {
defer panicRecover(input)
@@ -170,9 +167,9 @@ func (a *Agent) gatherSeparate(
var outerr error
start := time.Now()
acc := NewAccumulator(input.Config, pointChan)
acc := NewAccumulator(input.Config, metricC)
acc.SetDebug(a.Config.Agent.Debug)
acc.SetDefaultTags(a.Config.Tags)
acc.setDefaultTags(a.Config.Tags)
if err := input.Input.Gather(acc); err != nil {
log.Printf("Error in input [%s]: %s", input.Name, err)
@@ -202,13 +199,13 @@ func (a *Agent) gatherSeparate(
func (a *Agent) Test() error {
shutdown := make(chan struct{})
defer close(shutdown)
pointChan := make(chan *client.Point)
metricC := make(chan telegraf.Metric)
// dummy receiver for the point channel
go func() {
for {
select {
case <-pointChan:
case <-metricC:
// do nothing
case <-shutdown:
return
@@ -217,7 +214,7 @@ func (a *Agent) Test() error {
}()
for _, input := range a.Config.Inputs {
acc := NewAccumulator(input.Config, pointChan)
acc := NewAccumulator(input.Config, metricC)
acc.SetDebug(true)
fmt.Printf("* Plugin: %s, Collection 1\n", input.Name)
@@ -250,7 +247,7 @@ func (a *Agent) flush() {
wg.Add(len(a.Config.Outputs))
for _, o := range a.Config.Outputs {
go func(output *models.RunningOutput) {
go func(output *internal_models.RunningOutput) {
defer wg.Done()
err := output.Write()
if err != nil {
@@ -264,7 +261,7 @@ func (a *Agent) flush() {
}
// flusher monitors the points input channel and flushes on the minimum interval
func (a *Agent) flusher(shutdown chan struct{}, pointChan chan *client.Point) error {
func (a *Agent) flusher(shutdown chan struct{}, metricC chan telegraf.Metric) error {
// Inelegant, but this sleep is to allow the Gather threads to run, so that
// the flusher will flush after metrics are collected.
time.Sleep(time.Millisecond * 200)
@@ -279,9 +276,9 @@ func (a *Agent) flusher(shutdown chan struct{}, pointChan chan *client.Point) er
return nil
case <-ticker.C:
a.flush()
case pt := <-pointChan:
case m := <-metricC:
for _, o := range a.Config.Outputs {
o.AddPoint(pt)
o.AddPoint(m)
}
}
}
@@ -322,7 +319,7 @@ func (a *Agent) Run(shutdown chan struct{}) error {
a.Config.Agent.Hostname, a.Config.Agent.FlushInterval.Duration)
// channel shared between all input threads for accumulating points
pointChan := make(chan *client.Point, 1000)
metricC := make(chan telegraf.Metric, 1000)
// Round collection to nearest interval by sleeping
if a.Config.Agent.RoundInterval {
@@ -334,7 +331,7 @@ func (a *Agent) Run(shutdown chan struct{}) error {
wg.Add(1)
go func() {
defer wg.Done()
if err := a.flusher(shutdown, pointChan); err != nil {
if err := a.flusher(shutdown, metricC); err != nil {
log.Printf("Flusher routine failed, exiting: %s\n", err.Error())
close(shutdown)
}
@@ -344,7 +341,7 @@ func (a *Agent) Run(shutdown chan struct{}) error {
// Start service of any ServicePlugins
switch p := input.Input.(type) {
case inputs.ServiceInput:
case telegraf.ServiceInput:
if err := p.Start(); err != nil {
log.Printf("Service for input %s failed to start, exiting\n%s\n",
input.Name, err.Error())
@@ -357,9 +354,9 @@ func (a *Agent) Run(shutdown chan struct{}) error {
// configured. Default intervals are handled below with gatherParallel
if input.Config.Interval != 0 {
wg.Add(1)
go func(input *models.RunningInput) {
go func(input *internal_models.RunningInput) {
defer wg.Done()
if err := a.gatherSeparate(shutdown, input, pointChan); err != nil {
if err := a.gatherSeparate(shutdown, input, metricC); err != nil {
log.Printf(err.Error())
}
}(input)
@@ -369,7 +366,7 @@ func (a *Agent) Run(shutdown chan struct{}) error {
defer wg.Wait()
for {
if err := a.gatherParallel(pointChan); err != nil {
if err := a.gatherParallel(metricC); err != nil {
log.Printf(err.Error())
}
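The flusher pattern above (buffer metrics from a channel, flush on a ticker, stop on shutdown) can be sketched as a self-contained Go program; the `flushed` counter channel is an illustrative addition for observability, not part of the agent:

```go
package main

import (
	"fmt"
	"time"
)

// flusher buffers metrics arriving on metricC and flushes them on each
// ticker tick, until shutdown is closed; the final count goes to flushed.
func flusher(shutdown chan struct{}, metricC chan string, flushed chan<- int) {
	ticker := time.NewTicker(10 * time.Millisecond)
	defer ticker.Stop()
	var buf []string
	total := 0
	for {
		select {
		case <-shutdown:
			// Drain anything still queued, then report the total.
			for {
				select {
				case m := <-metricC:
					buf = append(buf, m)
				default:
					flushed <- total + len(buf)
					return
				}
			}
		case <-ticker.C:
			total += len(buf) // a real flusher would write buf to outputs here
			buf = buf[:0]
		case m := <-metricC:
			buf = append(buf, m)
		}
	}
}

func main() {
	shutdown := make(chan struct{})
	metricC := make(chan string, 10)
	flushed := make(chan int)
	go flusher(shutdown, metricC, flushed)
	metricC <- "cpu value=1"
	metricC <- "mem value=2"
	close(shutdown)
	fmt.Println("flushed", <-flushed, "metrics") // prints: flushed 2 metrics
}
```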


@@ -1,4 +1,4 @@
package telegraf
package agent
import (
"github.com/stretchr/testify/assert"
@@ -16,35 +16,35 @@ import (
func TestAgent_LoadPlugin(t *testing.T) {
c := config.NewConfig()
c.InputFilters = []string{"mysql"}
err := c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err := c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ := NewAgent(c)
assert.Equal(t, 1, len(a.Config.Inputs))
c = config.NewConfig()
c.InputFilters = []string{"foo"}
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 0, len(a.Config.Inputs))
c = config.NewConfig()
c.InputFilters = []string{"mysql", "foo"}
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 1, len(a.Config.Inputs))
c = config.NewConfig()
c.InputFilters = []string{"mysql", "redis"}
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 2, len(a.Config.Inputs))
c = config.NewConfig()
c.InputFilters = []string{"mysql", "foo", "redis", "bar"}
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 2, len(a.Config.Inputs))
@@ -53,42 +53,42 @@ func TestAgent_LoadPlugin(t *testing.T) {
func TestAgent_LoadOutput(t *testing.T) {
c := config.NewConfig()
c.OutputFilters = []string{"influxdb"}
err := c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err := c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ := NewAgent(c)
assert.Equal(t, 2, len(a.Config.Outputs))
c = config.NewConfig()
c.OutputFilters = []string{"kafka"}
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 1, len(a.Config.Outputs))
c = config.NewConfig()
c.OutputFilters = []string{}
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 3, len(a.Config.Outputs))
c = config.NewConfig()
c.OutputFilters = []string{"foo"}
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 0, len(a.Config.Outputs))
c = config.NewConfig()
c.OutputFilters = []string{"influxdb", "foo"}
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 2, len(a.Config.Outputs))
c = config.NewConfig()
c.OutputFilters = []string{"influxdb", "kafka"}
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
assert.Equal(t, 3, len(c.Outputs))
a, _ = NewAgent(c)
@@ -96,7 +96,7 @@ func TestAgent_LoadOutput(t *testing.T) {
c = config.NewConfig()
c.OutputFilters = []string{"influxdb", "foo", "kafka", "bar"}
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 3, len(a.Config.Outputs))


@@ -9,7 +9,7 @@ import (
"strings"
"syscall"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/agent"
"github.com/influxdata/telegraf/internal/config"
_ "github.com/influxdata/telegraf/plugins/inputs/all"
_ "github.com/influxdata/telegraf/plugins/outputs/all"
@@ -87,11 +87,11 @@ func main() {
reload <- true
for <-reload {
reload <- false
flag.Usage = usageExit
flag.Usage = func() { usageExit(0) }
flag.Parse()
if flag.NFlag() == 0 {
usageExit()
usageExit(0)
}
var inputFilters []string
@@ -148,9 +148,8 @@ func main() {
log.Fatal(err)
}
} else {
fmt.Println("Usage: Telegraf")
flag.PrintDefaults()
return
fmt.Println("You must specify a config file. See telegraf --help")
os.Exit(1)
}
if *fConfigDirectoryLegacy != "" {
@@ -173,7 +172,7 @@ func main() {
log.Fatalf("Error: no inputs found, did you provide a valid config file?")
}
ag, err := telegraf.NewAgent(c)
ag, err := agent.NewAgent(c)
if err != nil {
log.Fatal(err)
}
@@ -235,7 +234,7 @@ func main() {
}
}
func usageExit() {
func usageExit(rc int) {
fmt.Println(usage)
os.Exit(0)
os.Exit(rc)
}

input.go Normal file

@@ -0,0 +1,31 @@
package telegraf
type Input interface {
// SampleConfig returns the default configuration of the Input
SampleConfig() string
// Description returns a one-sentence description of the Input
Description() string
// Gather takes in an accumulator and adds the metrics that the Input
// gathers. This is called every "interval"
Gather(Accumulator) error
}
type ServiceInput interface {
// SampleConfig returns the default configuration of the Input
SampleConfig() string
// Description returns a one-sentence description of the Input
Description() string
// Gather takes in an accumulator and adds the metrics that the Input
// gathers. This is called every "interval"
Gather(Accumulator) error
// Start starts the ServiceInput's service, whatever that may be
Start() error
// Stop stops the services and closes any necessary channels and connections
Stop()
}
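The Input contract above can be exercised end-to-end with a toy plugin. The sketch below is illustrative only: the one-method Accumulator stub and the Uptime type are hypothetical stand-ins, not telegraf's real interfaces.

```go
package main

import "fmt"

// Accumulator is a stub of the dependency an Input receives; the real
// telegraf.Accumulator has more methods, this keeps the sketch self-contained.
type Accumulator interface {
	Add(measurement string, value interface{}, tags map[string]string)
}

type testAcc struct {
	values map[string]interface{}
}

func (a *testAcc) Add(m string, v interface{}, tags map[string]string) {
	a.values[m] = v
}

// Uptime is a minimal Input in the shape the interface requires.
type Uptime struct{ ticks int64 }

func (u *Uptime) SampleConfig() string { return "# no configuration" }
func (u *Uptime) Description() string  { return "Report a monotonically increasing tick count" }
func (u *Uptime) Gather(acc Accumulator) error {
	u.ticks++
	acc.Add("uptime_ticks", u.ticks, nil)
	return nil
}

func main() {
	acc := &testAcc{values: make(map[string]interface{})}
	u := &Uptime{}
	u.Gather(acc)
	u.Gather(acc) // Gather is called once per collection interval
	fmt.Println(acc.values["uptime_ticks"]) // 2
}
```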


@@ -10,6 +10,7 @@ import (
"strings"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/internal/models"
"github.com/influxdata/telegraf/plugins/inputs"
@@ -28,8 +29,8 @@ type Config struct {
OutputFilters []string
Agent *AgentConfig
Inputs []*models.RunningInput
Outputs []*models.RunningOutput
Inputs []*internal_models.RunningInput
Outputs []*internal_models.RunningOutput
}
func NewConfig() *Config {
@@ -43,8 +44,8 @@ func NewConfig() *Config {
},
Tags: make(map[string]string),
Inputs: make([]*models.RunningInput, 0),
Outputs: make([]*models.RunningOutput, 0),
Inputs: make([]*internal_models.RunningInput, 0),
Outputs: make([]*internal_models.RunningOutput, 0),
InputFilters: make([]string, 0),
OutputFilters: make([]string, 0),
}
@@ -227,13 +228,13 @@ func PrintSampleConfig(pluginFilters []string, outputFilters []string) {
// Print Inputs
fmt.Printf(pluginHeader)
servInputs := make(map[string]inputs.ServiceInput)
servInputs := make(map[string]telegraf.ServiceInput)
for _, pname := range pnames {
creator := inputs.Inputs[pname]
input := creator()
switch p := input.(type) {
case inputs.ServiceInput:
case telegraf.ServiceInput:
servInputs[pname] = p
continue
}
@@ -403,7 +404,7 @@ func (c *Config) addOutput(name string, table *ast.Table) error {
return err
}
ro := models.NewRunningOutput(name, output, outputConfig)
ro := internal_models.NewRunningOutput(name, output, outputConfig)
if c.Agent.MetricBufferLimit > 0 {
ro.PointBufferLimit = c.Agent.MetricBufferLimit
}
@@ -436,7 +437,7 @@ func (c *Config) addInput(name string, table *ast.Table) error {
return err
}
rp := &models.RunningInput{
rp := &internal_models.RunningInput{
Name: name,
Input: input,
Config: pluginConfig,
@@ -446,10 +447,10 @@ func (c *Config) addInput(name string, table *ast.Table) error {
}
// buildFilter builds a Filter (tagpass/tagdrop/pass/drop) to
// be inserted into the models.OutputConfig/models.InputConfig to be used for prefix
// be inserted into the internal_models.OutputConfig/internal_models.InputConfig to be used for prefix
// filtering on tags and measurements
func buildFilter(tbl *ast.Table) models.Filter {
f := models.Filter{}
func buildFilter(tbl *ast.Table) internal_models.Filter {
f := internal_models.Filter{}
if node, ok := tbl.Fields["pass"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
@@ -481,7 +482,7 @@ func buildFilter(tbl *ast.Table) models.Filter {
if subtbl, ok := node.(*ast.Table); ok {
for name, val := range subtbl.Fields {
if kv, ok := val.(*ast.KeyValue); ok {
tagfilter := &models.TagFilter{Name: name}
tagfilter := &internal_models.TagFilter{Name: name}
if ary, ok := kv.Value.(*ast.Array); ok {
for _, elem := range ary.Value {
if str, ok := elem.(*ast.String); ok {
@@ -500,7 +501,7 @@ func buildFilter(tbl *ast.Table) models.Filter {
if subtbl, ok := node.(*ast.Table); ok {
for name, val := range subtbl.Fields {
if kv, ok := val.(*ast.KeyValue); ok {
tagfilter := &models.TagFilter{Name: name}
tagfilter := &internal_models.TagFilter{Name: name}
if ary, ok := kv.Value.(*ast.Array); ok {
for _, elem := range ary.Value {
if str, ok := elem.(*ast.String); ok {
@@ -524,9 +525,9 @@ func buildFilter(tbl *ast.Table) models.Filter {
// buildInput parses input specific items from the ast.Table,
// builds the filter and returns a
// models.InputConfig to be inserted into models.RunningInput
func buildInput(name string, tbl *ast.Table) (*models.InputConfig, error) {
cp := &models.InputConfig{Name: name}
// internal_models.InputConfig to be inserted into internal_models.RunningInput
func buildInput(name string, tbl *ast.Table) (*internal_models.InputConfig, error) {
cp := &internal_models.InputConfig{Name: name}
if node, ok := tbl.Fields["interval"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
@@ -583,10 +584,10 @@ func buildInput(name string, tbl *ast.Table) (*models.InputConfig, error) {
}
// buildOutput parses output specific items from the ast.Table, builds the filter and returns an
// models.OutputConfig to be inserted into models.RunningInput
// internal_models.OutputConfig to be inserted into internal_models.RunningInput
// Note: error exists in the return for future calls that might require error
func buildOutput(name string, tbl *ast.Table) (*models.OutputConfig, error) {
oc := &models.OutputConfig{
func buildOutput(name string, tbl *ast.Table) (*internal_models.OutputConfig, error) {
oc := &internal_models.OutputConfig{
Name: name,
Filter: buildFilter(tbl),
}


@@ -19,19 +19,19 @@ func TestConfig_LoadSingleInput(t *testing.T) {
memcached := inputs.Inputs["memcached"]().(*memcached.Memcached)
memcached.Servers = []string{"localhost"}
mConfig := &models.InputConfig{
mConfig := &internal_models.InputConfig{
Name: "memcached",
Filter: models.Filter{
Filter: internal_models.Filter{
Drop: []string{"other", "stuff"},
Pass: []string{"some", "strings"},
TagDrop: []models.TagFilter{
models.TagFilter{
TagDrop: []internal_models.TagFilter{
internal_models.TagFilter{
Name: "badtag",
Filter: []string{"othertag"},
},
},
TagPass: []models.TagFilter{
models.TagFilter{
TagPass: []internal_models.TagFilter{
internal_models.TagFilter{
Name: "goodtag",
Filter: []string{"mytag"},
},
@@ -62,19 +62,19 @@ func TestConfig_LoadDirectory(t *testing.T) {
memcached := inputs.Inputs["memcached"]().(*memcached.Memcached)
memcached.Servers = []string{"localhost"}
mConfig := &models.InputConfig{
mConfig := &internal_models.InputConfig{
Name: "memcached",
Filter: models.Filter{
Filter: internal_models.Filter{
Drop: []string{"other", "stuff"},
Pass: []string{"some", "strings"},
TagDrop: []models.TagFilter{
models.TagFilter{
TagDrop: []internal_models.TagFilter{
internal_models.TagFilter{
Name: "badtag",
Filter: []string{"othertag"},
},
},
TagPass: []models.TagFilter{
models.TagFilter{
TagPass: []internal_models.TagFilter{
internal_models.TagFilter{
Name: "goodtag",
Filter: []string{"mytag"},
},
@@ -92,7 +92,7 @@ func TestConfig_LoadDirectory(t *testing.T) {
ex := inputs.Inputs["exec"]().(*exec.Exec)
ex.Command = "/usr/bin/myothercollector --foo=bar"
eConfig := &models.InputConfig{
eConfig := &internal_models.InputConfig{
Name: "exec",
MeasurementSuffix: "_myothercollector",
}
@@ -111,7 +111,7 @@ func TestConfig_LoadDirectory(t *testing.T) {
pstat := inputs.Inputs["procstat"]().(*procstat.Procstat)
pstat.PidFile = "/var/run/grafana-server.pid"
pConfig := &models.InputConfig{Name: "procstat"}
pConfig := &internal_models.InputConfig{Name: "procstat"}
pConfig.Tags = make(map[string]string)
assert.Equal(t, pstat, c.Inputs[3].Input,


@@ -2,14 +2,20 @@ package internal
import (
"bufio"
"crypto/rand"
"crypto/tls"
"crypto/x509"
"errors"
"fmt"
"io/ioutil"
"os"
"strconv"
"strings"
"time"
)
const alphanum string = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
// Duration just wraps time.Duration
type Duration struct {
Duration time.Duration
@@ -105,6 +111,57 @@ func ReadLinesOffsetN(filename string, offset uint, n int) ([]string, error) {
return ret, nil
}
// RandomString returns a random string of alpha-numeric characters
func RandomString(n int) string {
var bytes = make([]byte, n)
rand.Read(bytes)
for i, b := range bytes {
bytes[i] = alphanum[b%byte(len(alphanum))]
}
return string(bytes)
}
// GetTLSConfig gets a tls.Config object from the given certs, key, and CA files.
// You must give the full path to the files.
// If all files are blank and InsecureSkipVerify=false, returns a nil pointer.
func GetTLSConfig(
SSLCert, SSLKey, SSLCA string,
InsecureSkipVerify bool,
) (*tls.Config, error) {
t := &tls.Config{}
if SSLCert != "" && SSLKey != "" && SSLCA != "" {
cert, err := tls.LoadX509KeyPair(SSLCert, SSLKey)
if err != nil {
return nil, errors.New(fmt.Sprintf(
"Could not load TLS client key/certificate: %s",
err))
}
caCert, err := ioutil.ReadFile(SSLCA)
if err != nil {
return nil, errors.New(fmt.Sprintf("Could not load TLS CA: %s",
err))
}
caCertPool := x509.NewCertPool()
caCertPool.AppendCertsFromPEM(caCert)
t = &tls.Config{
Certificates: []tls.Certificate{cert},
RootCAs: caCertPool,
InsecureSkipVerify: InsecureSkipVerify,
}
} else {
if InsecureSkipVerify {
t.InsecureSkipVerify = true
} else {
return nil, nil
}
}
// will be nil by default if nothing is provided
return t, nil
}
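The branching in GetTLSConfig can be summarized as: all three files given yields a full mutual-TLS config; no files but InsecureSkipVerify yields a config that skips verification; otherwise nil, letting callers fall back to the client library's default TLS behaviour. A trimmed sketch of just that decision, with the certificate loading stubbed out by a boolean:

```go
package main

import (
	"crypto/tls"
	"fmt"
)

// tlsConfigFor mirrors the selection logic above without touching the
// filesystem: successful cert/key/CA loading is represented by haveFiles.
func tlsConfigFor(haveFiles, insecureSkipVerify bool) *tls.Config {
	switch {
	case haveFiles:
		// the real code loads the key pair and CA pool here
		return &tls.Config{InsecureSkipVerify: insecureSkipVerify}
	case insecureSkipVerify:
		return &tls.Config{InsecureSkipVerify: true}
	default:
		// nil means "use the client library's default TLS behaviour"
		return nil
	}
}

func main() {
	fmt.Println(tlsConfigFor(false, false) == nil)            // true
	fmt.Println(tlsConfigFor(false, true).InsecureSkipVerify) // true
	fmt.Println(tlsConfigFor(true, false).InsecureSkipVerify) // false
}
```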
// Glob will test a string pattern, potentially containing globs, against a
// subject string. The result is a simple true/false, determining whether or
// not the glob pattern matched the subject text.


@@ -1,9 +1,9 @@
package models
package internal_models
import (
"strings"
"github.com/influxdata/influxdb/client/v2"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
)
@@ -24,8 +24,8 @@ type Filter struct {
IsActive bool
}
func (f Filter) ShouldPointPass(point *client.Point) bool {
if f.ShouldPass(point.Name()) && f.ShouldTagsPass(point.Tags()) {
func (f Filter) ShouldMetricPass(metric telegraf.Metric) bool {
if f.ShouldPass(metric.Name()) && f.ShouldTagsPass(metric.Tags()) {
return true
}
return false


@@ -1,4 +1,4 @@
package models
package internal_models
import (
"testing"


@@ -1,14 +1,14 @@
package models
package internal_models
import (
"time"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf"
)
type RunningInput struct {
Name string
Input inputs.Input
Input telegraf.Input
Config *InputConfig
}


@@ -1,35 +1,33 @@
package models
package internal_models
import (
"log"
"time"
"github.com/influxdata/telegraf/plugins/outputs"
"github.com/influxdata/influxdb/client/v2"
"github.com/influxdata/telegraf"
)
const DEFAULT_POINT_BUFFER_LIMIT = 10000
type RunningOutput struct {
Name string
Output outputs.Output
Output telegraf.Output
Config *OutputConfig
Quiet bool
PointBufferLimit int
points []*client.Point
metrics []telegraf.Metric
overwriteCounter int
}
func NewRunningOutput(
name string,
output outputs.Output,
output telegraf.Output,
conf *OutputConfig,
) *RunningOutput {
ro := &RunningOutput{
Name: name,
points: make([]*client.Point, 0),
metrics: make([]telegraf.Metric, 0),
Output: output,
Config: conf,
PointBufferLimit: DEFAULT_POINT_BUFFER_LIMIT,
@@ -37,34 +35,37 @@ func NewRunningOutput(
return ro
}
func (ro *RunningOutput) AddPoint(point *client.Point) {
func (ro *RunningOutput) AddPoint(point telegraf.Metric) {
if ro.Config.Filter.IsActive {
if !ro.Config.Filter.ShouldPointPass(point) {
if !ro.Config.Filter.ShouldMetricPass(point) {
return
}
}
if len(ro.points) < ro.PointBufferLimit {
ro.points = append(ro.points, point)
if len(ro.metrics) < ro.PointBufferLimit {
ro.metrics = append(ro.metrics, point)
} else {
if ro.overwriteCounter == len(ro.points) {
log.Printf("WARNING: overwriting cached metrics, you may want to " +
"increase the metric_buffer_limit setting in your [agent] config " +
"if you do not wish to overwrite metrics.\n")
if ro.overwriteCounter == len(ro.metrics) {
ro.overwriteCounter = 0
}
ro.points[ro.overwriteCounter] = point
ro.metrics[ro.overwriteCounter] = point
ro.overwriteCounter++
}
}
func (ro *RunningOutput) Write() error {
start := time.Now()
err := ro.Output.Write(ro.points)
err := ro.Output.Write(ro.metrics)
elapsed := time.Since(start)
if err == nil {
if !ro.Quiet {
log.Printf("Wrote %d metrics to output %s in %s\n",
len(ro.points), ro.Name, elapsed)
len(ro.metrics), ro.Name, elapsed)
}
ro.points = make([]*client.Point, 0)
ro.metrics = make([]telegraf.Metric, 0)
ro.overwriteCounter = 0
}
return err

metric.go Normal file

@@ -0,0 +1,115 @@
package telegraf
import (
"bytes"
"time"
"github.com/influxdata/influxdb/client/v2"
"github.com/influxdata/influxdb/models"
)
type Metric interface {
// Name returns the measurement name of the metric
Name() string
// Tags returns the tags associated with the metric
Tags() map[string]string
// Time returns the timestamp for the metric
Time() time.Time
// UnixNano returns the unix nano time of the metric
UnixNano() int64
// Fields returns the fields for the metric
Fields() map[string]interface{}
// String returns a line-protocol string of the metric
String() string
// PrecisionString returns a line-protocol string of the metric, at the given precision
PrecisionString(precision string) string
// Point returns a influxdb client.Point object
Point() *client.Point
}
// metric is a wrapper of the influxdb client.Point struct
type metric struct {
pt *client.Point
}
// NewMetric returns a metric with the given timestamp. If a timestamp is not
// given, then data is sent to the database without a timestamp, in which case
// the server will assign local time upon reception. NOTE: it is recommended to
// send data with a timestamp.
func NewMetric(
name string,
tags map[string]string,
fields map[string]interface{},
t ...time.Time,
) (Metric, error) {
var T time.Time
if len(t) > 0 {
T = t[0]
}
pt, err := client.NewPoint(name, tags, fields, T)
if err != nil {
return nil, err
}
return &metric{
pt: pt,
}, nil
}
// ParseMetrics returns a slice of Metrics from a text representation of
// metrics in line-protocol format, with each metric separated by newlines.
// If any metrics fail to parse, a non-nil error will be returned in
// addition to the metrics that parsed successfully.
func ParseMetrics(buf []byte) ([]Metric, error) {
// parse even if the buffer begins with a newline
buf = bytes.TrimPrefix(buf, []byte("\n"))
points, err := models.ParsePoints(buf)
metrics := make([]Metric, len(points))
for i, point := range points {
// Ignore error here because it's impossible that a model.Point
// wouldn't parse into client.Point properly
metrics[i], _ = NewMetric(point.Name(), point.Tags(),
point.Fields(), point.Time())
}
return metrics, err
}
func (m *metric) Name() string {
return m.pt.Name()
}
func (m *metric) Tags() map[string]string {
return m.pt.Tags()
}
func (m *metric) Time() time.Time {
return m.pt.Time()
}
func (m *metric) UnixNano() int64 {
return m.pt.UnixNano()
}
func (m *metric) Fields() map[string]interface{} {
return m.pt.Fields()
}
func (m *metric) String() string {
return m.pt.String()
}
func (m *metric) PrecisionString(precision string) string {
return m.pt.PrecisionString(precision)
}
func (m *metric) Point() *client.Point {
return m.pt
}

metric_test.go Normal file

@@ -0,0 +1,135 @@
package telegraf
import (
"fmt"
"math"
"testing"
"time"
"github.com/stretchr/testify/assert"
)
const validMs = `
cpu,cpu=cpu0,host=foo,datacenter=us-east usage_idle=99,usage_busy=1 1454105876344540456
`
const invalidMs = `
cpu, cpu=cpu0,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,host=foo usage_idle
cpu,host usage_idle=99
cpu,host=foo usage_idle=99 very bad metric
`
const validInvalidMs = `
cpu,cpu=cpu0,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu1,host=foo,datacenter=us-east usage_idle=51,usage_busy=49
cpu,cpu=cpu2,host=foo,datacenter=us-east usage_idle=60,usage_busy=40
cpu,host usage_idle=99
`
func TestParseValidMetrics(t *testing.T) {
metrics, err := ParseMetrics([]byte(validMs))
assert.NoError(t, err)
assert.Len(t, metrics, 1)
m := metrics[0]
tags := map[string]string{
"host": "foo",
"datacenter": "us-east",
"cpu": "cpu0",
}
fields := map[string]interface{}{
"usage_idle": float64(99),
"usage_busy": float64(1),
}
assert.Equal(t, tags, m.Tags())
assert.Equal(t, fields, m.Fields())
assert.Equal(t, "cpu", m.Name())
assert.Equal(t, int64(1454105876344540456), m.UnixNano())
}
func TestParseInvalidMetrics(t *testing.T) {
metrics, err := ParseMetrics([]byte(invalidMs))
assert.Error(t, err)
assert.Len(t, metrics, 0)
}
func TestParseValidAndInvalidMetrics(t *testing.T) {
metrics, err := ParseMetrics([]byte(validInvalidMs))
assert.Error(t, err)
assert.Len(t, metrics, 3)
}
func TestNewMetric(t *testing.T) {
now := time.Now()
tags := map[string]string{
"host": "localhost",
"datacenter": "us-east-1",
}
fields := map[string]interface{}{
"usage_idle": float64(99),
"usage_busy": float64(1),
}
m, err := NewMetric("cpu", tags, fields, now)
assert.NoError(t, err)
assert.Equal(t, tags, m.Tags())
assert.Equal(t, fields, m.Fields())
assert.Equal(t, "cpu", m.Name())
assert.Equal(t, now, m.Time())
assert.Equal(t, now.UnixNano(), m.UnixNano())
}
func TestNewMetricString(t *testing.T) {
now := time.Now()
tags := map[string]string{
"host": "localhost",
}
fields := map[string]interface{}{
"usage_idle": float64(99),
}
m, err := NewMetric("cpu", tags, fields, now)
assert.NoError(t, err)
lineProto := fmt.Sprintf("cpu,host=localhost usage_idle=99 %d",
now.UnixNano())
assert.Equal(t, lineProto, m.String())
lineProtoPrecision := fmt.Sprintf("cpu,host=localhost usage_idle=99 %d",
now.Unix())
assert.Equal(t, lineProtoPrecision, m.PrecisionString("s"))
}
func TestNewMetricStringNoTime(t *testing.T) {
tags := map[string]string{
"host": "localhost",
}
fields := map[string]interface{}{
"usage_idle": float64(99),
}
m, err := NewMetric("cpu", tags, fields)
assert.NoError(t, err)
lineProto := "cpu,host=localhost usage_idle=99"
assert.Equal(t, lineProto, m.String())
lineProtoPrecision := "cpu,host=localhost usage_idle=99"
assert.Equal(t, lineProtoPrecision, m.PrecisionString("s"))
}
func TestNewMetricFailNaN(t *testing.T) {
now := time.Now()
tags := map[string]string{
"host": "localhost",
}
fields := map[string]interface{}{
"usage_idle": math.NaN(),
}
_, err := NewMetric("cpu", tags, fields, now)
assert.Error(t, err)
}

output.go Normal file

@@ -0,0 +1,31 @@
package telegraf
type Output interface {
// Connect to the Output
Connect() error
// Close any connections to the Output
Close() error
// Description returns a one-sentence description of the Output
Description() string
// SampleConfig returns the default configuration of the Output
SampleConfig() string
// Write takes in a group of metrics to be written to the Output
Write(metrics []Metric) error
}
type ServiceOutput interface {
// Connect to the Output
Connect() error
// Close any connections to the Output
Close() error
// Description returns a one-sentence description of the Output
Description() string
// SampleConfig returns the default configuration of the Output
SampleConfig() string
// Write takes in a group of metrics to be written to the Output
Write(metrics []Metric) error
// Start the "service" that will provide an Output
Start() error
// Stop the "service" that will provide an Output
Stop()
}


@@ -4,6 +4,7 @@ import (
"bytes"
"encoding/binary"
"fmt"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"net"
"strconv"
@@ -119,7 +120,7 @@ func (a *Aerospike) Description() string {
return "Read stats from an aerospike server"
}
func (a *Aerospike) Gather(acc inputs.Accumulator) error {
func (a *Aerospike) Gather(acc telegraf.Accumulator) error {
if len(a.Servers) == 0 {
return a.gatherServer("127.0.0.1:3000", acc)
}
@@ -140,7 +141,7 @@ func (a *Aerospike) Gather(acc inputs.Accumulator) error {
return outerr
}
func (a *Aerospike) gatherServer(host string, acc inputs.Accumulator) error {
func (a *Aerospike) gatherServer(host string, acc telegraf.Accumulator) error {
aerospikeInfo, err := getMap(STATISTICS_COMMAND, host)
if err != nil {
return fmt.Errorf("Aerospike info failed: %s", err)
@@ -249,7 +250,7 @@ func get(key []byte, host string) (map[string]string, error) {
func readAerospikeStats(
stats map[string]string,
acc inputs.Accumulator,
acc telegraf.Accumulator,
host string,
namespace string,
) {
@@ -336,7 +337,7 @@ func msgLenFromBytes(buf [6]byte) int64 {
}
func init() {
inputs.Add("aerospike", func() inputs.Input {
inputs.Add("aerospike", func() telegraf.Input {
return &Aerospike{}
})
}


@@ -26,6 +26,7 @@ import (
_ "github.com/influxdata/telegraf/plugins/inputs/phpfpm"
_ "github.com/influxdata/telegraf/plugins/inputs/ping"
_ "github.com/influxdata/telegraf/plugins/inputs/postgresql"
_ "github.com/influxdata/telegraf/plugins/inputs/powerdns"
_ "github.com/influxdata/telegraf/plugins/inputs/procstat"
_ "github.com/influxdata/telegraf/plugins/inputs/prometheus"
_ "github.com/influxdata/telegraf/plugins/inputs/puppetagent"
@@ -39,6 +40,7 @@ import (
_ "github.com/influxdata/telegraf/plugins/inputs/system"
_ "github.com/influxdata/telegraf/plugins/inputs/trig"
_ "github.com/influxdata/telegraf/plugins/inputs/twemproxy"
_ "github.com/influxdata/telegraf/plugins/inputs/win_perf_counters"
_ "github.com/influxdata/telegraf/plugins/inputs/zfs"
_ "github.com/influxdata/telegraf/plugins/inputs/zookeeper"
)


@@ -11,6 +11,7 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -31,7 +32,7 @@ func (n *Apache) Description() string {
return "Read Apache status information (mod_status)"
}
func (n *Apache) Gather(acc inputs.Accumulator) error {
func (n *Apache) Gather(acc telegraf.Accumulator) error {
var wg sync.WaitGroup
var outerr error
@@ -59,7 +60,7 @@ var tr = &http.Transport{
var client = &http.Client{Transport: tr}
func (n *Apache) gatherUrl(addr *url.URL, acc inputs.Accumulator) error {
func (n *Apache) gatherUrl(addr *url.URL, acc telegraf.Accumulator) error {
resp, err := client.Get(addr.String())
if err != nil {
return fmt.Errorf("error making HTTP request to %s: %s", addr.String(), err)
@@ -164,7 +165,7 @@ func getTags(addr *url.URL) map[string]string {
}
func init() {
inputs.Add("apache", func() inputs.Input {
inputs.Add("apache", func() telegraf.Input {
return &Apache{}
})
}


@@ -8,6 +8,7 @@ import (
"strconv"
"strings"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -69,7 +70,7 @@ func prettyToBytes(v string) uint64 {
return uint64(result)
}
func (b *Bcache) gatherBcache(bdev string, acc inputs.Accumulator) error {
func (b *Bcache) gatherBcache(bdev string, acc telegraf.Accumulator) error {
tags := getTags(bdev)
metrics, err := filepath.Glob(bdev + "/stats_total/*")
if len(metrics) == 0 {
@@ -104,7 +105,7 @@ func (b *Bcache) gatherBcache(bdev string, acc inputs.Accumulator) error {
return nil
}
func (b *Bcache) Gather(acc inputs.Accumulator) error {
func (b *Bcache) Gather(acc telegraf.Accumulator) error {
bcacheDevsChecked := make(map[string]bool)
var restrictDevs bool
if len(b.BcacheDevs) != 0 {
@@ -135,7 +136,7 @@ func (b *Bcache) Gather(acc inputs.Accumulator) error {
}
func init() {
inputs.Add("bcache", func() inputs.Input {
inputs.Add("bcache", func() telegraf.Input {
return &Bcache{}
})
}


@@ -10,6 +10,7 @@ import (
"strings"
"sync"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -61,7 +62,7 @@ var ErrProtocolError = errors.New("disque protocol error")
// Reads stats from all configured servers and accumulates stats.
// Returns one of the errors encountered while gather stats (if any).
func (g *Disque) Gather(acc inputs.Accumulator) error {
func (g *Disque) Gather(acc telegraf.Accumulator) error {
if len(g.Servers) == 0 {
url := &url.URL{
Host: ":7711",
@@ -98,7 +99,7 @@ func (g *Disque) Gather(acc inputs.Accumulator) error {
const defaultPort = "7711"
func (g *Disque) gatherServer(addr *url.URL, acc inputs.Accumulator) error {
func (g *Disque) gatherServer(addr *url.URL, acc telegraf.Accumulator) error {
if g.c == nil {
_, _, err := net.SplitHostPort(addr.Host)
@@ -198,7 +199,7 @@ func (g *Disque) gatherServer(addr *url.URL, acc inputs.Accumulator) error {
}
func init() {
inputs.Add("disque", func() inputs.Input {
inputs.Add("disque", func() telegraf.Input {
return &Disque{}
})
}


@@ -2,10 +2,12 @@ package system
import (
"fmt"
"log"
"strings"
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/fsouza/go-dockerclient"
@@ -33,7 +35,7 @@ func (d *Docker) Description() string {
func (d *Docker) SampleConfig() string { return sampleConfig }
func (d *Docker) Gather(acc inputs.Accumulator) error {
func (d *Docker) Gather(acc telegraf.Accumulator) error {
if d.client == nil {
var c *docker.Client
var err error
@@ -80,7 +82,7 @@ func (d *Docker) Gather(acc inputs.Accumulator) error {
func (d *Docker) gatherContainer(
container docker.APIContainers,
acc inputs.Accumulator,
acc telegraf.Accumulator,
) error {
// Parse container name
cname := "unknown"
@@ -111,12 +113,19 @@ func (d *Docker) gatherContainer(
}
go func() {
d.client.Stats(statOpts)
err := d.client.Stats(statOpts)
if err != nil {
log.Printf("Error getting docker stats: %s\n", err.Error())
}
}()
stat := <-statChan
close(done)
if stat == nil {
return nil
}
// Add labels to tags
for k, v := range container.Labels {
tags[k] = v
@@ -129,7 +138,7 @@ func (d *Docker) gatherContainer(
func gatherContainerStats(
stat *docker.Stats,
acc inputs.Accumulator,
acc telegraf.Accumulator,
tags map[string]string,
) {
now := stat.Read
@@ -212,7 +221,7 @@ func gatherContainerStats(
func gatherBlockIOMetrics(
stat *docker.Stats,
acc inputs.Accumulator,
acc telegraf.Accumulator,
tags map[string]string,
now time.Time,
) {
@@ -303,7 +312,7 @@ func sliceContains(in string, sl []string) bool {
}
func init() {
inputs.Add("docker", func() inputs.Input {
inputs.Add("docker", func() telegraf.Input {
return &Docker{}
})
}


@@ -9,6 +9,7 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -95,13 +96,13 @@ func (e *Elasticsearch) Description() string {
// Gather reads the stats from Elasticsearch and writes it to the
// Accumulator.
func (e *Elasticsearch) Gather(acc inputs.Accumulator) error {
func (e *Elasticsearch) Gather(acc telegraf.Accumulator) error {
errChan := make(chan error, len(e.Servers))
var wg sync.WaitGroup
wg.Add(len(e.Servers))
for _, serv := range e.Servers {
go func(s string, acc inputs.Accumulator) {
go func(s string, acc telegraf.Accumulator) {
defer wg.Done()
var url string
if e.Local {
@@ -133,7 +134,7 @@ func (e *Elasticsearch) Gather(acc inputs.Accumulator) error {
return errors.New(strings.Join(errStrings, "\n"))
}
func (e *Elasticsearch) gatherNodeStats(url string, acc inputs.Accumulator) error {
func (e *Elasticsearch) gatherNodeStats(url string, acc telegraf.Accumulator) error {
nodeStats := &struct {
ClusterName string `json:"cluster_name"`
Nodes map[string]*node `json:"nodes"`
@@ -178,7 +179,7 @@ func (e *Elasticsearch) gatherNodeStats(url string, acc inputs.Accumulator) erro
return nil
}
func (e *Elasticsearch) gatherClusterStats(url string, acc inputs.Accumulator) error {
func (e *Elasticsearch) gatherClusterStats(url string, acc telegraf.Accumulator) error {
clusterStats := &clusterHealth{}
if err := e.gatherData(url, clusterStats); err != nil {
return err
@@ -243,7 +244,7 @@ func (e *Elasticsearch) gatherData(url string, v interface{}) error {
}
func init() {
inputs.Add("elasticsearch", func() inputs.Input {
inputs.Add("elasticsearch", func() telegraf.Input {
return NewElasticsearch()
})
}


@@ -1,28 +1,39 @@
# Exec Plugin
# Exec Input Plugin
The exec plugin can execute arbitrary commands which output JSON. Then it flattens JSON and finds
all numeric values, treating them as floats.
The exec plugin can execute arbitrary commands which output JSON or
InfluxDB [line-protocol](https://docs.influxdata.com/influxdb/v0.9/write_protocols/line/).
For example, if you have a json-returning command called mycollector, you could
set up the exec plugin with:
If using JSON, only numeric values are parsed and turned into floats. Booleans
and strings will be ignored.
### Configuration
```
# Read flattened metrics from one or more commands that output JSON to stdout
[[inputs.exec]]
command = "/usr/bin/mycollector --output=json"
# the command to run
command = "/usr/bin/mycollector --foo=bar"
# Data format to consume. This can be "json" or "influx" (line-protocol)
# NOTE json only reads numerical measurements, strings and booleans are ignored.
data_format = "json"
# measurement name suffix (for separating different commands)
name_suffix = "_mycollector"
interval = "10s"
```
The name suffix is appended to exec as "exec_name_suffix" to identify the input stream.
Other options for modifying the measurement names are:
The interval is used to determine how often a particular command should be run. Each
time the exec plugin runs, it will only run a particular command if it has been at least
`interval` seconds since the exec plugin last ran the command.
```
name_override = "measurement_name"
name_prefix = "prefix_"
```
### Example 1
# Sample
Let's say that we have the above configuration, and mycollector outputs the
following JSON:
Let's say that we have a command with the name_suffix "_mycollector", which gives the following output:
```json
{
"a": 0.5,
@@ -33,13 +44,39 @@ Let's say that we have a command with the name_suffix "_mycollector", which give
}
```
The collected metrics will be stored as field values under the same measurement "exec_mycollector":
The collected metrics will be stored as fields under the measurement
"exec_mycollector":
```
exec_mycollector a=0.5,b_c=0.1,b_d=5 1452815002357578567
```
### Example 2

Now let's say we have the following configuration:

```
[[inputs.exec]]
  # the command to run
  command = "/usr/bin/line_protocol_collector"

  # Data format to consume. This can be "json" or "influx" (line-protocol)
  # NOTE json only reads numerical measurements, strings and booleans are ignored.
  data_format = "influx"
```
And line_protocol_collector outputs the following line protocol:
```
cpu,cpu=cpu0,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu1,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu2,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu3,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu4,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu5,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu6,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
```
You will get data in InfluxDB exactly as it is defined above: tags are
cpu=cpuN, host=foo, and datacenter=us-east, with fields usage_idle and
usage_busy. The metrics will receive a timestamp at collection time.
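Each line above has the shape `measurement,tag=value,... field=value,...`. A deliberately simplified parser sketch — assuming no escaping, no quoted strings, and no trailing timestamp, and not the parser Telegraf actually uses — shows how the measurement, tags, and fields separate:

```go
package main

import (
	"fmt"
	"strings"
)

// parseLine splits one line of InfluxDB line protocol into measurement,
// tags, and raw field values. Simplified: it ignores escaping, quoted
// strings, and the optional trailing timestamp.
func parseLine(line string) (string, map[string]string, map[string]string) {
	parts := strings.SplitN(line, " ", 2)

	// The first space-separated token is the measurement plus comma-separated tags.
	head := strings.Split(parts[0], ",")
	measurement := head[0]
	tags := make(map[string]string)
	for _, kv := range head[1:] {
		pair := strings.SplitN(kv, "=", 2)
		tags[pair[0]] = pair[1]
	}

	// The second token is the comma-separated field set.
	fields := make(map[string]string)
	if len(parts) == 2 {
		for _, kv := range strings.Split(parts[1], ",") {
			pair := strings.SplitN(kv, "=", 2)
			fields[pair[0]] = pair[1]
		}
	}
	return measurement, tags, fields
}

func main() {
	m, tags, fields := parseLine("cpu,cpu=cpu0,host=foo,datacenter=us-east usage_idle=99,usage_busy=1")
	fmt.Println(m, tags["cpu"], tags["host"], fields["usage_idle"]) // cpu cpu0 foo 99
}
```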


@@ -5,26 +5,30 @@ import (
"encoding/json"
"fmt"
"os/exec"
"time"
"github.com/gonuts/go-shellquote"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
)
const sampleConfig = `
# NOTE This plugin only reads numerical measurements, strings and booleans
# will be ignored.
# the command to run
command = "/usr/bin/mycollector --foo=bar"
# Data format to consume. This can be "json" or "influx" (line-protocol)
# NOTE json only reads numerical measurements, strings and booleans are ignored.
data_format = "json"
# measurement name suffix (for separating different commands)
name_suffix = "_mycollector"
`
type Exec struct {
Command string
DataFormat string
runner Runner
}
@@ -64,31 +68,43 @@ func (e *Exec) Description() string {
return "Read flattened metrics from one or more commands that output JSON to stdout"
}
func (e *Exec) Gather(acc inputs.Accumulator) error {
func (e *Exec) Gather(acc telegraf.Accumulator) error {
out, err := e.runner.Run(e)
if err != nil {
return err
}
switch e.DataFormat {
case "", "json":
var jsonOut interface{}
err = json.Unmarshal(out, &jsonOut)
if err != nil {
return fmt.Errorf("exec: unable to parse output of '%s' as JSON, %s",
e.Command, err)
}
f := internal.JSONFlattener{}
err = f.FlattenJSON("", jsonOut)
if err != nil {
return err
}
acc.AddFields("exec", f.Fields, nil)
case "influx":
now := time.Now()
metrics, err := telegraf.ParseMetrics(out)
for _, metric := range metrics {
acc.AddFields(metric.Name(), metric.Fields(), metric.Tags(), now)
}
return err
default:
return fmt.Errorf("Unsupported data format: %s. Must be either json "+
"or influx.", e.DataFormat)
}
return nil
}
func init() {
inputs.Add("exec", func() inputs.Input {
inputs.Add("exec", func() telegraf.Input {
return NewExec()
})
}


@@ -31,6 +31,18 @@ const malformedJson = `
"status": "green",
`
const lineProtocol = "cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1"
const lineProtocolMulti = `
cpu,cpu=cpu0,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu1,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu2,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu3,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu4,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu5,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu6,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
`
type runnerMock struct {
out []byte
err error
@@ -97,3 +109,64 @@ func TestCommandError(t *testing.T) {
require.Error(t, err)
assert.Equal(t, acc.NFields(), 0, "No new points should have been added")
}
func TestLineProtocolParse(t *testing.T) {
e := &Exec{
runner: newRunnerMock([]byte(lineProtocol), nil),
Command: "line-protocol",
DataFormat: "influx",
}
var acc testutil.Accumulator
err := e.Gather(&acc)
require.NoError(t, err)
fields := map[string]interface{}{
"usage_idle": float64(99),
"usage_busy": float64(1),
}
tags := map[string]string{
"host": "foo",
"datacenter": "us-east",
}
acc.AssertContainsTaggedFields(t, "cpu", fields, tags)
}
func TestLineProtocolParseMultiple(t *testing.T) {
e := &Exec{
runner: newRunnerMock([]byte(lineProtocolMulti), nil),
Command: "line-protocol",
DataFormat: "influx",
}
var acc testutil.Accumulator
err := e.Gather(&acc)
require.NoError(t, err)
fields := map[string]interface{}{
"usage_idle": float64(99),
"usage_busy": float64(1),
}
tags := map[string]string{
"host": "foo",
"datacenter": "us-east",
}
cpuTags := []string{"cpu0", "cpu1", "cpu2", "cpu3", "cpu4", "cpu5", "cpu6"}
for _, cpu := range cpuTags {
tags["cpu"] = cpu
acc.AssertContainsTaggedFields(t, "cpu", fields, tags)
}
}
func TestInvalidDataFormat(t *testing.T) {
e := &Exec{
runner: newRunnerMock([]byte(lineProtocol), nil),
Command: "bad data format",
DataFormat: "FooBar",
}
var acc testutil.Accumulator
err := e.Gather(&acc)
require.Error(t, err)
}


@@ -9,11 +9,12 @@ import (
"sync"
"github.com/gorilla/mux"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
func init() {
inputs.Add("github_webhooks", func() inputs.Input { return &GithubWebhooks{} })
inputs.Add("github_webhooks", func() telegraf.Input { return &GithubWebhooks{} })
}
type GithubWebhooks struct {
@@ -40,11 +41,11 @@ func (gh *GithubWebhooks) Description() string {
}
// Writes the points from <-gh.in to the Accumulator
func (gh *GithubWebhooks) Gather(acc inputs.Accumulator) error {
func (gh *GithubWebhooks) Gather(acc telegraf.Accumulator) error {
gh.Lock()
defer gh.Unlock()
for _, event := range gh.events {
p := event.NewPoint()
p := event.NewMetric()
acc.AddFields("github_webhooks", p.Fields(), p.Tags(), p.Time())
}
gh.events = make([]Event, 0)


@@ -5,13 +5,13 @@ import (
"log"
"time"
"github.com/influxdata/influxdb/client/v2"
"github.com/influxdata/telegraf"
)
const meas = "github_webhooks"
type Event interface {
NewPoint() *client.Point
NewMetric() telegraf.Metric
}
type Repository struct {
@@ -90,7 +90,7 @@ type CommitCommentEvent struct {
Sender Sender `json:"sender"`
}
func (s CommitCommentEvent) NewPoint() *client.Point {
func (s CommitCommentEvent) NewMetric() telegraf.Metric {
event := "commit_comment"
t := map[string]string{
"event": event,
@@ -106,11 +106,11 @@ func (s CommitCommentEvent) NewPoint() *client.Point {
"commit": s.Comment.Commit,
"comment": s.Comment.Body,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type CreateEvent struct {
@@ -120,7 +120,7 @@ type CreateEvent struct {
Sender Sender `json:"sender"`
}
func (s CreateEvent) NewPoint() *client.Point {
func (s CreateEvent) NewMetric() telegraf.Metric {
event := "create"
t := map[string]string{
"event": event,
@@ -136,11 +136,11 @@ func (s CreateEvent) NewPoint() *client.Point {
"ref": s.Ref,
"refType": s.RefType,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type DeleteEvent struct {
@@ -150,7 +150,7 @@ type DeleteEvent struct {
Sender Sender `json:"sender"`
}
func (s DeleteEvent) NewPoint() *client.Point {
func (s DeleteEvent) NewMetric() telegraf.Metric {
event := "delete"
t := map[string]string{
"event": event,
@@ -166,11 +166,11 @@ func (s DeleteEvent) NewPoint() *client.Point {
"ref": s.Ref,
"refType": s.RefType,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type DeploymentEvent struct {
@@ -179,7 +179,7 @@ type DeploymentEvent struct {
Sender Sender `json:"sender"`
}
func (s DeploymentEvent) NewPoint() *client.Point {
func (s DeploymentEvent) NewMetric() telegraf.Metric {
event := "deployment"
t := map[string]string{
"event": event,
@@ -197,11 +197,11 @@ func (s DeploymentEvent) NewPoint() *client.Point {
"environment": s.Deployment.Environment,
"description": s.Deployment.Description,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type DeploymentStatusEvent struct {
@@ -211,7 +211,7 @@ type DeploymentStatusEvent struct {
Sender Sender `json:"sender"`
}
func (s DeploymentStatusEvent) NewPoint() *client.Point {
func (s DeploymentStatusEvent) NewMetric() telegraf.Metric {
event := "deployment_status"
t := map[string]string{
"event": event,
@@ -231,11 +231,11 @@ func (s DeploymentStatusEvent) NewPoint() *client.Point {
"depState": s.DeploymentStatus.State,
"depDescription": s.DeploymentStatus.Description,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type ForkEvent struct {
@@ -244,7 +244,7 @@ type ForkEvent struct {
Sender Sender `json:"sender"`
}
func (s ForkEvent) NewPoint() *client.Point {
func (s ForkEvent) NewMetric() telegraf.Metric {
event := "fork"
t := map[string]string{
"event": event,
@@ -259,11 +259,11 @@ func (s ForkEvent) NewPoint() *client.Point {
"issues": s.Repository.Issues,
"fork": s.Forkee.Repository,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type GollumEvent struct {
@@ -273,7 +273,7 @@ type GollumEvent struct {
}
// REVIEW: Going to be lazy and not deal with the pages.
func (s GollumEvent) NewPoint() *client.Point {
func (s GollumEvent) NewMetric() telegraf.Metric {
event := "gollum"
t := map[string]string{
"event": event,
@@ -287,11 +287,11 @@ func (s GollumEvent) NewPoint() *client.Point {
"forks": s.Repository.Forks,
"issues": s.Repository.Issues,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type IssueCommentEvent struct {
@@ -301,7 +301,7 @@ type IssueCommentEvent struct {
Sender Sender `json:"sender"`
}
func (s IssueCommentEvent) NewPoint() *client.Point {
func (s IssueCommentEvent) NewMetric() telegraf.Metric {
event := "issue_comment"
t := map[string]string{
"event": event,
@@ -319,11 +319,11 @@ func (s IssueCommentEvent) NewPoint() *client.Point {
"comments": s.Issue.Comments,
"body": s.Comment.Body,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type IssuesEvent struct {
@@ -333,7 +333,7 @@ type IssuesEvent struct {
Sender Sender `json:"sender"`
}
func (s IssuesEvent) NewPoint() *client.Point {
func (s IssuesEvent) NewMetric() telegraf.Metric {
event := "issue"
t := map[string]string{
"event": event,
@@ -351,11 +351,11 @@ func (s IssuesEvent) NewPoint() *client.Point {
"title": s.Issue.Title,
"comments": s.Issue.Comments,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type MemberEvent struct {
@@ -364,7 +364,7 @@ type MemberEvent struct {
Sender Sender `json:"sender"`
}
func (s MemberEvent) NewPoint() *client.Point {
func (s MemberEvent) NewMetric() telegraf.Metric {
event := "member"
t := map[string]string{
"event": event,
@@ -380,11 +380,11 @@ func (s MemberEvent) NewPoint() *client.Point {
"newMember": s.Member.User,
"newMemberStatus": s.Member.Admin,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type MembershipEvent struct {
@@ -394,7 +394,7 @@ type MembershipEvent struct {
Team Team `json:"team"`
}
func (s MembershipEvent) NewPoint() *client.Point {
func (s MembershipEvent) NewMetric() telegraf.Metric {
event := "membership"
t := map[string]string{
"event": event,
@@ -406,11 +406,11 @@ func (s MembershipEvent) NewPoint() *client.Point {
"newMember": s.Member.User,
"newMemberStatus": s.Member.Admin,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type PageBuildEvent struct {
@@ -418,7 +418,7 @@ type PageBuildEvent struct {
Sender Sender `json:"sender"`
}
func (s PageBuildEvent) NewPoint() *client.Point {
func (s PageBuildEvent) NewMetric() telegraf.Metric {
event := "page_build"
t := map[string]string{
"event": event,
@@ -432,11 +432,11 @@ func (s PageBuildEvent) NewPoint() *client.Point {
"forks": s.Repository.Forks,
"issues": s.Repository.Issues,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type PublicEvent struct {
@@ -444,7 +444,7 @@ type PublicEvent struct {
Sender Sender `json:"sender"`
}
func (s PublicEvent) NewPoint() *client.Point {
func (s PublicEvent) NewMetric() telegraf.Metric {
event := "public"
t := map[string]string{
"event": event,
@@ -458,11 +458,11 @@ func (s PublicEvent) NewPoint() *client.Point {
"forks": s.Repository.Forks,
"issues": s.Repository.Issues,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type PullRequestEvent struct {
@@ -472,7 +472,7 @@ type PullRequestEvent struct {
Sender Sender `json:"sender"`
}
func (s PullRequestEvent) NewPoint() *client.Point {
func (s PullRequestEvent) NewMetric() telegraf.Metric {
event := "pull_request"
t := map[string]string{
"event": event,
@@ -495,11 +495,11 @@ func (s PullRequestEvent) NewPoint() *client.Point {
"deletions": s.PullRequest.Deletions,
"changedFiles": s.PullRequest.ChangedFiles,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type PullRequestReviewCommentEvent struct {
@@ -509,7 +509,7 @@ type PullRequestReviewCommentEvent struct {
Sender Sender `json:"sender"`
}
func (s PullRequestReviewCommentEvent) NewPoint() *client.Point {
func (s PullRequestReviewCommentEvent) NewMetric() telegraf.Metric {
event := "pull_request_review_comment"
t := map[string]string{
"event": event,
@@ -533,11 +533,11 @@ func (s PullRequestReviewCommentEvent) NewPoint() *client.Point {
"commentFile": s.Comment.File,
"comment": s.Comment.Comment,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type PushEvent struct {
@@ -548,7 +548,7 @@ type PushEvent struct {
Sender Sender `json:"sender"`
}
func (s PushEvent) NewPoint() *client.Point {
func (s PushEvent) NewMetric() telegraf.Metric {
event := "push"
t := map[string]string{
"event": event,
@@ -565,11 +565,11 @@ func (s PushEvent) NewPoint() *client.Point {
"before": s.Before,
"after": s.After,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type ReleaseEvent struct {
@@ -578,7 +578,7 @@ type ReleaseEvent struct {
Sender Sender `json:"sender"`
}
func (s ReleaseEvent) NewPoint() *client.Point {
func (s ReleaseEvent) NewMetric() telegraf.Metric {
event := "release"
t := map[string]string{
"event": event,
@@ -593,11 +593,11 @@ func (s ReleaseEvent) NewPoint() *client.Point {
"issues": s.Repository.Issues,
"tagName": s.Release.TagName,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type RepositoryEvent struct {
@@ -605,7 +605,7 @@ type RepositoryEvent struct {
Sender Sender `json:"sender"`
}
func (s RepositoryEvent) NewPoint() *client.Point {
func (s RepositoryEvent) NewMetric() telegraf.Metric {
event := "repository"
t := map[string]string{
"event": event,
@@ -619,11 +619,11 @@ func (s RepositoryEvent) NewPoint() *client.Point {
"forks": s.Repository.Forks,
"issues": s.Repository.Issues,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type StatusEvent struct {
@@ -633,7 +633,7 @@ type StatusEvent struct {
Sender Sender `json:"sender"`
}
func (s StatusEvent) NewPoint() *client.Point {
func (s StatusEvent) NewMetric() telegraf.Metric {
event := "status"
t := map[string]string{
"event": event,
@@ -649,11 +649,11 @@ func (s StatusEvent) NewPoint() *client.Point {
"commit": s.Commit,
"state": s.State,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type TeamAddEvent struct {
@@ -662,7 +662,7 @@ type TeamAddEvent struct {
Sender Sender `json:"sender"`
}
func (s TeamAddEvent) NewPoint() *client.Point {
func (s TeamAddEvent) NewMetric() telegraf.Metric {
event := "team_add"
t := map[string]string{
"event": event,
@@ -677,11 +677,11 @@ func (s TeamAddEvent) NewPoint() *client.Point {
"issues": s.Repository.Issues,
"teamName": s.Team.Name,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type WatchEvent struct {
@@ -689,7 +689,7 @@ type WatchEvent struct {
Sender Sender `json:"sender"`
}
func (s WatchEvent) NewPoint() *client.Point {
func (s WatchEvent) NewMetric() telegraf.Metric {
event := "watch"
t := map[string]string{
"event": event,
@@ -703,9 +703,9 @@ func (s WatchEvent) NewPoint() *client.Point {
"forks": s.Repository.Forks,
"issues": s.Repository.Issues,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}


@@ -3,6 +3,7 @@ package haproxy
import (
"encoding/csv"
"fmt"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"io"
"net/http"
@@ -104,7 +105,7 @@ func (r *haproxy) Description() string {
// Reads stats from all configured servers and accumulates stats.
// Returns one of the errors encountered while gathering stats (if any).
func (g *haproxy) Gather(acc inputs.Accumulator) error {
func (g *haproxy) Gather(acc telegraf.Accumulator) error {
if len(g.Servers) == 0 {
return g.gatherServer("http://127.0.0.1:1936", acc)
}
@@ -126,7 +127,7 @@ func (g *haproxy) Gather(acc inputs.Accumulator) error {
return outerr
}
func (g *haproxy) gatherServer(addr string, acc inputs.Accumulator) error {
func (g *haproxy) gatherServer(addr string, acc telegraf.Accumulator) error {
if g.client == nil {
client := &http.Client{}
@@ -156,7 +157,7 @@ func (g *haproxy) gatherServer(addr string, acc inputs.Accumulator) error {
return importCsvResult(res.Body, acc, u.Host)
}
func importCsvResult(r io.Reader, acc inputs.Accumulator, host string) error {
func importCsvResult(r io.Reader, acc telegraf.Accumulator, host string) error {
csv := csv.NewReader(r)
result, err := csv.ReadAll()
now := time.Now()
@@ -358,7 +359,7 @@ func importCsvResult(r io.Reader, acc inputs.Accumulator, host string) error {
}
func init() {
inputs.Add("haproxy", func() inputs.Input {
inputs.Add("haproxy", func() telegraf.Input {
return &haproxy{}
})
}


@@ -11,6 +11,7 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -88,7 +89,7 @@ func (h *HttpJson) Description() string {
}
// Gathers data for all servers.
func (h *HttpJson) Gather(acc inputs.Accumulator) error {
func (h *HttpJson) Gather(acc telegraf.Accumulator) error {
var wg sync.WaitGroup
errorChannel := make(chan error, len(h.Servers))
@@ -127,7 +128,7 @@ func (h *HttpJson) Gather(acc inputs.Accumulator) error {
// Returns:
// error: Any error that may have occurred
func (h *HttpJson) gatherServer(
acc inputs.Accumulator,
acc telegraf.Accumulator,
serverURL string,
) error {
resp, responseTime, err := h.sendRequest(serverURL)
@@ -200,7 +201,11 @@ func (h *HttpJson) sendRequest(serverURL string) (string, float64, error) {
// Add header parameters
for k, v := range h.Headers {
req.Header.Add(k, v)
if strings.ToLower(k) == "host" {
req.Host = v
} else {
req.Header.Add(k, v)
}
}
start := time.Now()
@@ -232,7 +237,7 @@ func (h *HttpJson) sendRequest(serverURL string) (string, float64, error) {
}
func init() {
inputs.Add("httpjson", func() inputs.Input {
inputs.Add("httpjson", func() telegraf.Input {
return &HttpJson{client: RealHTTPClient{client: &http.Client{}}}
})
}


@@ -136,7 +136,7 @@ func TestHttpJson200(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, 12, acc.NFields())
// Set responsetime
for _, p := range acc.Points {
for _, p := range acc.Metrics {
p.Fields["response_time"] = 1.0
}
@@ -203,7 +203,7 @@ func TestHttpJson200Tags(t *testing.T) {
var acc testutil.Accumulator
err := service.Gather(&acc)
// Set responsetime
for _, p := range acc.Points {
for _, p := range acc.Metrics {
p.Fields["response_time"] = 1.0
}
require.NoError(t, err)


@@ -8,6 +8,7 @@ import (
"strings"
"sync"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -32,7 +33,7 @@ func (*InfluxDB) SampleConfig() string {
`
}
func (i *InfluxDB) Gather(acc inputs.Accumulator) error {
func (i *InfluxDB) Gather(acc telegraf.Accumulator) error {
errorChannel := make(chan error, len(i.URLs))
var wg sync.WaitGroup
@@ -77,7 +78,7 @@ type point struct {
// Returns:
// error: Any error that may have occurred
func (i *InfluxDB) gatherURL(
acc inputs.Accumulator,
acc telegraf.Accumulator,
url string,
) error {
resp, err := http.Get(url)
@@ -140,7 +141,7 @@ func (i *InfluxDB) gatherURL(
}
func init() {
inputs.Add("influxdb", func() inputs.Input {
inputs.Add("influxdb", func() telegraf.Input {
return &InfluxDB{}
})
}


@@ -71,7 +71,7 @@ func TestBasic(t *testing.T) {
var acc testutil.Accumulator
require.NoError(t, plugin.Gather(&acc))
require.Len(t, acc.Points, 2)
require.Len(t, acc.Metrics, 2)
fields := map[string]interface{}{
// JSON will truncate floats to integer representations.
// Since there's no distinction in JSON, we can't assume it's an int.


@@ -8,6 +8,7 @@ import (
"net/http"
"net/url"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -108,7 +109,7 @@ func (j *Jolokia) getAttr(requestUrl *url.URL) (map[string]interface{}, error) {
return jsonOut, nil
}
func (j *Jolokia) Gather(acc inputs.Accumulator) error {
func (j *Jolokia) Gather(acc telegraf.Accumulator) error {
context := j.Context //"/jolokia/read"
servers := j.Servers
metrics := j.Metrics
@@ -157,7 +158,7 @@ func (j *Jolokia) Gather(acc inputs.Accumulator) error {
}
func init() {
inputs.Add("jolokia", func() inputs.Input {
inputs.Add("jolokia", func() telegraf.Input {
return &Jolokia{jClient: &JolokiaClientImpl{client: &http.Client{}}}
})
}


@@ -85,7 +85,7 @@ func TestHttpJsonMultiValue(t *testing.T) {
err := jolokia.Gather(&acc)
assert.Nil(t, err)
assert.Equal(t, 1, len(acc.Points))
assert.Equal(t, 1, len(acc.Metrics))
fields := map[string]interface{}{
"heap_memory_usage_init": 67108864.0,
@@ -112,5 +112,5 @@ func TestHttpJsonOn404(t *testing.T) {
err := jolokia.Gather(&acc)
assert.Nil(t, err)
assert.Equal(t, 0, len(acc.Points))
assert.Equal(t, 0, len(acc.Metrics))
}


@@ -5,7 +5,7 @@ import (
"strings"
"sync"
"github.com/influxdata/influxdb/models"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/Shopify/sarama"
@@ -27,8 +27,8 @@ type Kafka struct {
// channel for all kafka consumer errors
errs <-chan *sarama.ConsumerError
// channel for all incoming parsed kafka points
pointChan chan models.Point
done chan struct{}
metricC chan telegraf.Metric
done chan struct{}
// doNotCommitMsgs tells the parser not to call CommitUpTo on the consumer
// this is mostly for test purposes, but there may be a use-case for it later.
@@ -93,7 +93,7 @@ func (k *Kafka) Start() error {
if k.PointBuffer == 0 {
k.PointBuffer = 100000
}
k.pointChan = make(chan models.Point, k.PointBuffer)
k.metricC = make(chan telegraf.Metric, k.PointBuffer)
// Start the kafka message reader
go k.parser()
@@ -112,18 +112,18 @@ func (k *Kafka) parser() {
case err := <-k.errs:
log.Printf("Kafka Consumer Error: %s\n", err.Error())
case msg := <-k.in:
points, err := models.ParsePoints(msg.Value)
metrics, err := telegraf.ParseMetrics(msg.Value)
if err != nil {
log.Printf("Could not parse kafka message: %s, error: %s",
string(msg.Value), err.Error())
}
for _, point := range points {
for _, metric := range metrics {
select {
case k.pointChan <- point:
case k.metricC <- metric:
continue
default:
log.Printf("Kafka Consumer buffer is full, dropping a point." +
log.Printf("Kafka Consumer buffer is full, dropping a metric." +
" You may want to increase the point_buffer setting")
}
}
@@ -148,19 +148,19 @@ func (k *Kafka) Stop() {
}
}
func (k *Kafka) Gather(acc inputs.Accumulator) error {
func (k *Kafka) Gather(acc telegraf.Accumulator) error {
k.Lock()
defer k.Unlock()
npoints := len(k.pointChan)
npoints := len(k.metricC)
for i := 0; i < npoints; i++ {
point := <-k.pointChan
point := <-k.metricC
acc.AddFields(point.Name(), point.Fields(), point.Tags(), point.Time())
}
return nil
}
func init() {
inputs.Add("kafka_consumer", func() inputs.Input {
inputs.Add("kafka_consumer", func() telegraf.Input {
return &Kafka{}
})
}


@@ -51,13 +51,13 @@ func TestReadsMetricsFromKafka(t *testing.T) {
// Verify that we can now gather the sent message
var acc testutil.Accumulator
// Sanity check
assert.Equal(t, 0, len(acc.Points), "There should not be any points")
assert.Equal(t, 0, len(acc.Metrics), "There should not be any points")
// Gather points
err = k.Gather(&acc)
require.NoError(t, err)
if len(acc.Points) == 1 {
point := acc.Points[0]
if len(acc.Metrics) == 1 {
point := acc.Metrics[0]
assert.Equal(t, "cpu_load_short", point.Measurement)
assert.Equal(t, map[string]interface{}{"value": 23422.0}, point.Fields)
assert.Equal(t, map[string]string{
@@ -83,7 +83,7 @@ func waitForPoint(k *Kafka, t *testing.T) {
counter++
if counter > 1000 {
t.Fatal("Waited for 5s, point never arrived to consumer")
} else if len(k.pointChan) == 1 {
} else if len(k.metricC) == 1 {
return
}
}


@@ -4,7 +4,7 @@ import (
"testing"
"time"
"github.com/influxdata/influxdb/models"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/testutil"
"github.com/Shopify/sarama"
@@ -29,7 +29,7 @@ func NewTestKafka() (*Kafka, chan *sarama.ConsumerMessage) {
doNotCommitMsgs: true,
errs: make(chan *sarama.ConsumerError, pointBuffer),
done: make(chan struct{}),
pointChan: make(chan models.Point, pointBuffer),
metricC: make(chan telegraf.Metric, pointBuffer),
}
return &k, in
}
@@ -43,7 +43,7 @@ func TestRunParser(t *testing.T) {
in <- saramaMsg(testMsg)
time.Sleep(time.Millisecond)
assert.Equal(t, len(k.pointChan), 1)
assert.Equal(t, len(k.metricC), 1)
}
// Test that the parser ignores invalid messages
@@ -55,7 +55,7 @@ func TestRunParserInvalidMsg(t *testing.T) {
in <- saramaMsg(invalidMsg)
time.Sleep(time.Millisecond)
assert.Equal(t, len(k.pointChan), 0)
assert.Equal(t, len(k.metricC), 0)
}
// Test that points are dropped when we hit the buffer limit
@@ -69,7 +69,7 @@ func TestRunParserRespectsBuffer(t *testing.T) {
}
time.Sleep(time.Millisecond)
assert.Equal(t, len(k.pointChan), 5)
assert.Equal(t, len(k.metricC), 5)
}
// Test that the parser parses kafka messages into points
@@ -84,7 +84,7 @@ func TestRunParserAndGather(t *testing.T) {
acc := testutil.Accumulator{}
k.Gather(&acc)
assert.Equal(t, len(acc.Points), 1)
assert.Equal(t, len(acc.Metrics), 1)
acc.AssertContainsFields(t, "cpu_load_short",
map[string]interface{}{"value": float64(23422)})
}


@@ -3,6 +3,7 @@ package leofs
import (
"bufio"
"fmt"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"net/url"
"os/exec"
@@ -146,7 +147,7 @@ func (l *LeoFS) Description() string {
return "Read metrics from a LeoFS Server via SNMP"
}
func (l *LeoFS) Gather(acc inputs.Accumulator) error {
func (l *LeoFS) Gather(acc telegraf.Accumulator) error {
if len(l.Servers) == 0 {
l.gatherServer(defaultEndpoint, ServerTypeManagerMaster, acc)
return nil
@@ -176,7 +177,7 @@ func (l *LeoFS) Gather(acc inputs.Accumulator) error {
return outerr
}
func (l *LeoFS) gatherServer(endpoint string, serverType ServerType, acc inputs.Accumulator) error {
func (l *LeoFS) gatherServer(endpoint string, serverType ServerType, acc telegraf.Accumulator) error {
cmd := exec.Command("snmpwalk", "-v2c", "-cpublic", endpoint, oid)
stdout, err := cmd.StdoutPipe()
if err != nil {
@@ -225,7 +226,7 @@ func retrieveTokenAfterColon(line string) (string, error) {
}
func init() {
inputs.Add("leofs", func() inputs.Input {
inputs.Add("leofs", func() telegraf.Input {
return &LeoFS{}
})
}


@@ -13,6 +13,7 @@ import (
"strconv"
"strings"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -129,7 +130,7 @@ var wanted_mds_fields = []*mapping{
},
}
func (l *Lustre2) GetLustreProcStats(fileglob string, wanted_fields []*mapping, acc inputs.Accumulator) error {
func (l *Lustre2) GetLustreProcStats(fileglob string, wanted_fields []*mapping, acc telegraf.Accumulator) error {
files, err := filepath.Glob(fileglob)
if err != nil {
return err
@@ -193,7 +194,7 @@ func (l *Lustre2) Description() string {
}
// Gather reads stats from all lustre targets
func (l *Lustre2) Gather(acc inputs.Accumulator) error {
func (l *Lustre2) Gather(acc telegraf.Accumulator) error {
l.allFields = make(map[string]map[string]interface{})
if len(l.Ost_procfiles) == 0 {
@@ -244,7 +245,7 @@ func (l *Lustre2) Gather(acc inputs.Accumulator) error {
}
func init() {
inputs.Add("lustre2", func() inputs.Input {
inputs.Add("lustre2", func() telegraf.Input {
return &Lustre2{}
})
}


@@ -4,6 +4,7 @@ import (
"fmt"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -34,7 +35,7 @@ func (m *MailChimp) Description() string {
return "Gathers metrics from the /3.0/reports MailChimp API"
}
func (m *MailChimp) Gather(acc inputs.Accumulator) error {
func (m *MailChimp) Gather(acc telegraf.Accumulator) error {
if m.api == nil {
m.api = NewChimpAPI(m.ApiKey)
}
@@ -71,7 +72,7 @@ func (m *MailChimp) Gather(acc inputs.Accumulator) error {
return nil
}
func gatherReport(acc inputs.Accumulator, report Report, now time.Time) {
func gatherReport(acc telegraf.Accumulator, report Report, now time.Time) {
tags := make(map[string]string)
tags["id"] = report.ID
tags["campaign_title"] = report.CampaignTitle
@@ -110,7 +111,7 @@ func gatherReport(acc inputs.Accumulator, report Report, now time.Time) {
}
func init() {
inputs.Add("mailchimp", func() inputs.Input {
inputs.Add("mailchimp", func() telegraf.Input {
return &MailChimp{}
})
}

View File

@@ -8,6 +8,7 @@ import (
"strconv"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -69,7 +70,7 @@ func (m *Memcached) Description() string {
}
// Gather reads stats from all configured servers and accumulates stats
func (m *Memcached) Gather(acc inputs.Accumulator) error {
func (m *Memcached) Gather(acc telegraf.Accumulator) error {
if len(m.Servers) == 0 && len(m.UnixSockets) == 0 {
return m.gatherServer(":11211", false, acc)
}
@@ -92,7 +93,7 @@ func (m *Memcached) Gather(acc inputs.Accumulator) error {
func (m *Memcached) gatherServer(
address string,
unix bool,
acc inputs.Accumulator,
acc telegraf.Accumulator,
) error {
var conn net.Conn
if unix {
@@ -178,7 +179,7 @@ func parseResponse(r *bufio.Reader) (map[string]string, error) {
}
func init() {
inputs.Add("memcached", func() inputs.Input {
inputs.Add("memcached", func() telegraf.Input {
return &Memcached{}
})
}

View File

@@ -1,12 +1,16 @@
package inputs
import "github.com/stretchr/testify/mock"
import (
"github.com/influxdata/telegraf"
"github.com/stretchr/testify/mock"
)
type MockPlugin struct {
mock.Mock
}
func (m *MockPlugin) Gather(_a0 Accumulator) error {
func (m *MockPlugin) Gather(_a0 telegraf.Accumulator) error {
ret := m.Called(_a0)
r0 := ret.Error(0)

View File

@@ -9,6 +9,7 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"gopkg.in/mgo.v2"
)
@@ -45,7 +46,7 @@ var localhost = &url.URL{Host: "127.0.0.1:27017"}
// Reads stats from all configured servers and accumulates stats.
// Returns one of the errors encountered while gather stats (if any).
func (m *MongoDB) Gather(acc inputs.Accumulator) error {
func (m *MongoDB) Gather(acc telegraf.Accumulator) error {
if len(m.Servers) == 0 {
m.gatherServer(m.getMongoServer(localhost), acc)
return nil
@@ -88,7 +89,7 @@ func (m *MongoDB) getMongoServer(url *url.URL) *Server {
return m.mongos[url.Host]
}
func (m *MongoDB) gatherServer(server *Server, acc inputs.Accumulator) error {
func (m *MongoDB) gatherServer(server *Server, acc telegraf.Accumulator) error {
if server.Session == nil {
var dialAddrs []string
if server.Url.User != nil {
@@ -138,7 +139,7 @@ func (m *MongoDB) gatherServer(server *Server, acc inputs.Accumulator) error {
}
func init() {
inputs.Add("mongodb", func() inputs.Input {
inputs.Add("mongodb", func() telegraf.Input {
return &MongoDB{
mongos: make(map[string]*Server),
}

View File

@@ -5,7 +5,7 @@ import (
"reflect"
"strconv"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf"
)
type MongodbData struct {
@@ -97,7 +97,7 @@ func (d *MongodbData) add(key string, val interface{}) {
d.Fields[key] = val
}
func (d *MongodbData) flush(acc inputs.Accumulator) {
func (d *MongodbData) flush(acc telegraf.Accumulator) {
acc.AddFields(
"mongodb",
d.Fields,

View File

@@ -4,7 +4,7 @@ import (
"net/url"
"time"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf"
"gopkg.in/mgo.v2"
"gopkg.in/mgo.v2/bson"
)
@@ -21,7 +21,7 @@ func (s *Server) getDefaultTags() map[string]string {
return tags
}
func (s *Server) gatherData(acc inputs.Accumulator) error {
func (s *Server) gatherData(acc telegraf.Accumulator) error {
s.Session.SetMode(mgo.Eventual, true)
s.Session.SetSocketTimeout(0)
result := &ServerStatus{}

View File

@@ -6,6 +6,7 @@ import (
"strings"
_ "github.com/go-sql-driver/mysql"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -35,7 +36,7 @@ func (m *Mysql) Description() string {
var localhost = ""
func (m *Mysql) Gather(acc inputs.Accumulator) error {
func (m *Mysql) Gather(acc telegraf.Accumulator) error {
if len(m.Servers) == 0 {
// if we can't get stats in this case, that's fine, don't report
// an error.
@@ -113,7 +114,7 @@ var mappings = []*mapping{
},
}
func (m *Mysql) gatherServer(serv string, acc inputs.Accumulator) error {
func (m *Mysql) gatherServer(serv string, acc telegraf.Accumulator) error {
// If user forgot the '/', add it
if strings.HasSuffix(serv, ")") {
serv = serv + "/"
@@ -207,7 +208,7 @@ func (m *Mysql) gatherServer(serv string, acc inputs.Accumulator) error {
}
func init() {
inputs.Add("mysql", func() inputs.Input {
inputs.Add("mysql", func() telegraf.Input {
return &Mysql{}
})
}

View File

@@ -11,6 +11,7 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -31,7 +32,7 @@ func (n *Nginx) Description() string {
return "Read Nginx's basic status information (ngx_http_stub_status_module)"
}
func (n *Nginx) Gather(acc inputs.Accumulator) error {
func (n *Nginx) Gather(acc telegraf.Accumulator) error {
var wg sync.WaitGroup
var outerr error
@@ -59,7 +60,7 @@ var tr = &http.Transport{
var client = &http.Client{Transport: tr}
func (n *Nginx) gatherUrl(addr *url.URL, acc inputs.Accumulator) error {
func (n *Nginx) gatherUrl(addr *url.URL, acc telegraf.Accumulator) error {
resp, err := client.Get(addr.String())
if err != nil {
return fmt.Errorf("error making HTTP request to %s: %s", addr.String(), err)
@@ -159,7 +160,7 @@ func getTags(addr *url.URL) map[string]string {
}
func init() {
inputs.Add("nginx", func() inputs.Input {
inputs.Add("nginx", func() telegraf.Input {
return &Nginx{}
})
}

View File

@@ -31,6 +31,7 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -49,7 +50,7 @@ const (
)
func init() {
inputs.Add("nsq", func() inputs.Input {
inputs.Add("nsq", func() telegraf.Input {
return &NSQ{}
})
}
@@ -62,7 +63,7 @@ func (n *NSQ) Description() string {
return "Read NSQ topic and channel statistics."
}
func (n *NSQ) Gather(acc inputs.Accumulator) error {
func (n *NSQ) Gather(acc telegraf.Accumulator) error {
var wg sync.WaitGroup
var outerr error
@@ -85,7 +86,7 @@ var tr = &http.Transport{
var client = &http.Client{Transport: tr}
func (n *NSQ) gatherEndpoint(e string, acc inputs.Accumulator) error {
func (n *NSQ) gatherEndpoint(e string, acc telegraf.Accumulator) error {
u, err := buildURL(e)
if err != nil {
return err
@@ -136,7 +137,7 @@ func buildURL(e string) (*url.URL, error) {
return addr, nil
}
func topicStats(t TopicStats, acc inputs.Accumulator, host, version string) {
func topicStats(t TopicStats, acc telegraf.Accumulator, host, version string) {
// per topic overall (tag: name, paused, channel count)
tags := map[string]string{
"server_host": host,
@@ -157,7 +158,7 @@ func topicStats(t TopicStats, acc inputs.Accumulator, host, version string) {
}
}
func channelStats(c ChannelStats, acc inputs.Accumulator, host, version, topic string) {
func channelStats(c ChannelStats, acc telegraf.Accumulator, host, version, topic string) {
tags := map[string]string{
"server_host": host,
"server_version": version,
@@ -182,7 +183,7 @@ func channelStats(c ChannelStats, acc inputs.Accumulator, host, version, topic s
}
}
func clientStats(c ClientStats, acc inputs.Accumulator, host, version, topic, channel string) {
func clientStats(c ClientStats, acc telegraf.Accumulator, host, version, topic, channel string) {
tags := map[string]string{
"server_host": host,
"server_version": version,

View File

@@ -8,6 +8,7 @@ import (
"strconv"
"strings"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"golang.org/x/net/html/charset"
)
@@ -145,7 +146,7 @@ func (r *passenger) Description() string {
return "Read metrics of passenger using passenger-status"
}
func (g *passenger) Gather(acc inputs.Accumulator) error {
func (g *passenger) Gather(acc telegraf.Accumulator) error {
if g.Command == "" {
g.Command = "passenger-status -v --show=xml"
}
@@ -164,7 +165,7 @@ func (g *passenger) Gather(acc inputs.Accumulator) error {
return nil
}
func importMetric(stat []byte, acc inputs.Accumulator) error {
func importMetric(stat []byte, acc telegraf.Accumulator) error {
var p info
decoder := xml.NewDecoder(bytes.NewReader(stat))
@@ -244,7 +245,7 @@ func importMetric(stat []byte, acc inputs.Accumulator) error {
}
func init() {
inputs.Add("passenger", func() inputs.Input {
inputs.Add("passenger", func() telegraf.Input {
return &passenger{}
})
}

View File

@@ -12,6 +12,7 @@ import (
"strings"
"sync"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -73,7 +74,7 @@ func (r *phpfpm) Description() string {
// Reads stats from all configured servers and accumulates stats.
// Returns one of the errors encountered while gather stats (if any).
func (g *phpfpm) Gather(acc inputs.Accumulator) error {
func (g *phpfpm) Gather(acc telegraf.Accumulator) error {
if len(g.Urls) == 0 {
return g.gatherServer("http://127.0.0.1/status", acc)
}
@@ -96,7 +97,7 @@ func (g *phpfpm) Gather(acc inputs.Accumulator) error {
}
// Request status page to get stat raw data and import it
func (g *phpfpm) gatherServer(addr string, acc inputs.Accumulator) error {
func (g *phpfpm) gatherServer(addr string, acc telegraf.Accumulator) error {
if g.client == nil {
client := &http.Client{}
g.client = client
@@ -140,7 +141,7 @@ func (g *phpfpm) gatherServer(addr string, acc inputs.Accumulator) error {
}
// Gather stat using fcgi protocol
func (g *phpfpm) gatherFcgi(fcgi *conn, statusPath string, acc inputs.Accumulator) error {
func (g *phpfpm) gatherFcgi(fcgi *conn, statusPath string, acc telegraf.Accumulator) error {
fpmOutput, fpmErr, err := fcgi.Request(map[string]string{
"SCRIPT_NAME": "/" + statusPath,
"SCRIPT_FILENAME": statusPath,
@@ -160,7 +161,7 @@ func (g *phpfpm) gatherFcgi(fcgi *conn, statusPath string, acc inputs.Accumulato
}
// Gather stat using http protocol
func (g *phpfpm) gatherHttp(addr string, acc inputs.Accumulator) error {
func (g *phpfpm) gatherHttp(addr string, acc telegraf.Accumulator) error {
u, err := url.Parse(addr)
if err != nil {
return fmt.Errorf("Unable to parse server address '%s': %s", addr, err)
@@ -184,7 +185,7 @@ func (g *phpfpm) gatherHttp(addr string, acc inputs.Accumulator) error {
}
// Import stat data into Telegraf system
func importMetric(r io.Reader, acc inputs.Accumulator) (poolStat, error) {
func importMetric(r io.Reader, acc telegraf.Accumulator) (poolStat, error) {
stats := make(poolStat)
var currentPool string
@@ -239,7 +240,7 @@ func importMetric(r io.Reader, acc inputs.Accumulator) (poolStat, error) {
}
func init() {
inputs.Add("phpfpm", func() inputs.Input {
inputs.Add("phpfpm", func() telegraf.Input {
return &phpfpm{}
})
}

View File

@@ -1,3 +1,5 @@
// +build !windows
package ping
import (
@@ -7,6 +9,7 @@ import (
"strings"
"sync"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -40,6 +43,9 @@ func (_ *Ping) Description() string {
}
var sampleConfig = `
# NOTE: this plugin forks the ping command. You may need to set capabilities
# via setcap cap_net_raw+p /bin/ping
# urls to ping
urls = ["www.google.com"] # required
# number of pings to send (ping -c <COUNT>)
@@ -56,7 +62,7 @@ func (_ *Ping) SampleConfig() string {
return sampleConfig
}
func (p *Ping) Gather(acc inputs.Accumulator) error {
func (p *Ping) Gather(acc telegraf.Accumulator) error {
var wg sync.WaitGroup
errorChannel := make(chan error, len(p.Urls)*2)
@@ -64,7 +70,7 @@ func (p *Ping) Gather(acc inputs.Accumulator) error {
// Spin off a go routine for each url to ping
for _, url := range p.Urls {
wg.Add(1)
go func(url string, acc inputs.Accumulator) {
go func(url string, acc telegraf.Accumulator) {
defer wg.Done()
args := p.args(url)
out, err := p.pingHost(args...)
@@ -110,7 +116,11 @@ func (p *Ping) Gather(acc inputs.Accumulator) error {
}
func hostPinger(args ...string) (string, error) {
c := exec.Command("ping", args...)
bin, err := exec.LookPath("ping")
if err != nil {
return "", err
}
c := exec.Command(bin, args...)
out, err := c.CombinedOutput()
return string(out), err
}
@@ -176,7 +186,7 @@ func processPingOutput(out string) (int, int, float64, error) {
}
func init() {
inputs.Add("ping", func() inputs.Input {
inputs.Add("ping", func() telegraf.Input {
return &Ping{pingHost: hostPinger}
})
}
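The hostPinger fix above swaps a bare `exec.Command("ping", ...)` for an explicit `exec.LookPath`, so a missing binary is reported as a clear lookup error before anything is forked. The same pattern in isolation (`runCommand` and the `echo` invocation are illustrative, not part of the plugin):

```go
package main

import (
	"fmt"
	"os/exec"
)

// runCommand resolves the binary on PATH before executing it, so a
// missing binary surfaces as a LookPath error instead of a failed exec.
func runCommand(name string, args ...string) (string, error) {
	bin, err := exec.LookPath(name)
	if err != nil {
		return "", err
	}
	out, err := exec.Command(bin, args...).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := runCommand("echo", "hello")
	fmt.Printf("%q %v\n", out, err) // prints: "hello\n" <nil>
}
```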

View File

@@ -1,3 +1,5 @@
// +build !windows
package ping
import (

View File

@@ -0,0 +1,3 @@
// +build windows
package ping

View File

@@ -6,6 +6,7 @@ import (
"fmt"
"strings"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
_ "github.com/lib/pq"
@@ -53,7 +54,7 @@ func (p *Postgresql) IgnoredColumns() map[string]bool {
var localhost = "host=localhost sslmode=disable"
func (p *Postgresql) Gather(acc inputs.Accumulator) error {
func (p *Postgresql) Gather(acc telegraf.Accumulator) error {
var query string
if p.Address == "" || p.Address == "localhost" {
@@ -101,7 +102,7 @@ type scanner interface {
Scan(dest ...interface{}) error
}
func (p *Postgresql) accRow(row scanner, acc inputs.Accumulator) error {
func (p *Postgresql) accRow(row scanner, acc telegraf.Accumulator) error {
var columnVars []interface{}
var dbname bytes.Buffer
@@ -145,7 +146,7 @@ func (p *Postgresql) accRow(row scanner, acc inputs.Accumulator) error {
}
func init() {
inputs.Add("postgresql", func() inputs.Input {
inputs.Add("postgresql", func() telegraf.Input {
return &Postgresql{}
})
}

View File

@@ -113,7 +113,7 @@ func TestPostgresqlDefaultsToAllDatabases(t *testing.T) {
var found bool
for _, pnt := range acc.Points {
for _, pnt := range acc.Metrics {
if pnt.Measurement == "postgresql" {
if pnt.Tags["db"] == "postgres" {
found = true

View File

@@ -0,0 +1,68 @@
# PowerDNS Input Plugin
The powerdns plugin gathers metrics about PowerDNS using its unix control socket.
### Configuration:
```
# Description
[[inputs.powerdns]]
# An array of sockets to gather stats about.
# Specify a path to unix socket.
#
# If no servers are specified, then '/var/run/pdns.controlsocket' is used as the path.
unix_sockets = ["/var/run/pdns.controlsocket"]
```
### Measurements & Fields:
- powerdns
- corrupt-packets
- deferred-cache-inserts
- deferred-cache-lookup
- dnsupdate-answers
- dnsupdate-changes
- dnsupdate-queries
- dnsupdate-refused
- packetcache-hit
- packetcache-miss
- packetcache-size
- query-cache-hit
- query-cache-miss
- rd-queries
- recursing-answers
- recursing-questions
- recursion-unanswered
- security-status
- servfail-packets
- signatures
- tcp-answers
- tcp-queries
- timedout-packets
- udp-answers
- udp-answers-bytes
- udp-do-queries
- udp-queries
- udp4-answers
- udp4-queries
- udp6-answers
- udp6-queries
- key-cache-size
- latency
- meta-cache-size
- qsize-q
- signature-cache-size
- sys-msec
- uptime
- user-msec
### Tags:
- tags: `server=socket`
### Example Output:
```
$ ./telegraf -config telegraf.conf -input-filter powerdns -test
> powerdns,server=/var/run/pdns.controlsocket corrupt-packets=0i,deferred-cache-inserts=0i,deferred-cache-lookup=0i,dnsupdate-answers=0i,dnsupdate-changes=0i,dnsupdate-queries=0i,dnsupdate-refused=0i,key-cache-size=0i,latency=26i,meta-cache-size=0i,packetcache-hit=0i,packetcache-miss=1i,packetcache-size=0i,qsize-q=0i,query-cache-hit=0i,query-cache-miss=6i,rd-queries=1i,recursing-answers=0i,recursing-questions=0i,recursion-unanswered=0i,security-status=3i,servfail-packets=0i,signature-cache-size=0i,signatures=0i,sys-msec=4349i,tcp-answers=0i,tcp-queries=0i,timedout-packets=0i,udp-answers=1i,udp-answers-bytes=50i,udp-do-queries=0i,udp-queries=0i,udp4-answers=1i,udp4-queries=1i,udp6-answers=0i,udp6-queries=0i,uptime=166738i,user-msec=3036i 1454078624932715706
```

View File

@@ -0,0 +1,126 @@
package powerdns
import (
"bufio"
"fmt"
"io"
"net"
"strconv"
"strings"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
type Powerdns struct {
UnixSockets []string
}
var sampleConfig = `
# An array of sockets to gather stats about.
# Specify a path to unix socket.
#
# If no servers are specified, then '/var/run/pdns.controlsocket' is used as the path.
unix_sockets = ["/var/run/pdns.controlsocket"]
`
var defaultTimeout = 5 * time.Second
func (p *Powerdns) SampleConfig() string {
return sampleConfig
}
func (p *Powerdns) Description() string {
return "Read metrics from one or many PowerDNS servers"
}
func (p *Powerdns) Gather(acc telegraf.Accumulator) error {
if len(p.UnixSockets) == 0 {
return p.gatherServer("/var/run/pdns.controlsocket", acc)
}
for _, serverSocket := range p.UnixSockets {
if err := p.gatherServer(serverSocket, acc); err != nil {
return err
}
}
return nil
}
func (p *Powerdns) gatherServer(address string, acc telegraf.Accumulator) error {
conn, err := net.DialTimeout("unix", address, defaultTimeout)
if err != nil {
return err
}
defer conn.Close()
conn.SetDeadline(time.Now().Add(defaultTimeout))
// Read and write buffer
rw := bufio.NewReadWriter(bufio.NewReader(conn), bufio.NewWriter(conn))
// Send command
if _, err := fmt.Fprint(conn, "show * \n"); err != nil {
return err
}
if err := rw.Flush(); err != nil {
return err
}
// Read data
buf := make([]byte, 0, 4096)
tmp := make([]byte, 1024)
for {
n, err := rw.Read(tmp)
if err != nil {
if err != io.EOF {
return err
}
break
}
buf = append(buf, tmp[:n]...)
}
metrics := string(buf)
// Process data
fields, err := parseResponse(metrics)
if err != nil {
return err
}
// Add server socket as a tag
tags := map[string]string{"server": address}
acc.AddFields("powerdns", fields, tags)
return nil
}
func parseResponse(metrics string) (map[string]interface{}, error) {
values := make(map[string]interface{})
s := strings.Split(metrics, ",")
for _, metric := range s[:len(s)-1] {
m := strings.Split(metric, "=")
i, err := strconv.ParseInt(m[1], 10, 64)
if err != nil {
return values, err
}
values[m[0]] = i
}
return values, nil
}
func init() {
inputs.Add("powerdns", func() telegraf.Input {
return &Powerdns{}
})
}
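`parseResponse` above splits the comma-terminated reply on `,` and `=`; the trailing comma produces an empty final element, which `s[:len(s)-1]` skips. The same logic as a standalone sketch (the name `parsePdnsResponse` is mine, and it adds a guard for malformed pairs that the plugin version omits):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parsePdnsResponse turns a PowerDNS control-socket reply of the form
// "key1=val1,key2=val2," into an int64 map.
func parsePdnsResponse(metrics string) (map[string]int64, error) {
	values := make(map[string]int64)
	s := strings.Split(metrics, ",")
	// The reply ends with a comma, so the last split element is empty.
	for _, metric := range s[:len(s)-1] {
		m := strings.SplitN(metric, "=", 2)
		if len(m) != 2 {
			return values, fmt.Errorf("malformed pair %q", metric)
		}
		i, err := strconv.ParseInt(m[1], 10, 64)
		if err != nil {
			return values, err
		}
		values[m[0]] = i
	}
	return values, nil
}

func main() {
	v, err := parsePdnsResponse("corrupt-packets=0,latency=26,uptime=86317,")
	fmt.Println(v["latency"], v["uptime"], err) // prints: 26 86317 <nil>
}
```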

View File

@@ -0,0 +1,147 @@
package powerdns
import (
"crypto/rand"
"encoding/binary"
"fmt"
"net"
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
type statServer struct{}
var metrics = "corrupt-packets=0,deferred-cache-inserts=0,deferred-cache-lookup=0," +
"dnsupdate-answers=0,dnsupdate-changes=0,dnsupdate-queries=0," +
"dnsupdate-refused=0,packetcache-hit=0,packetcache-miss=1,packetcache-size=0," +
"query-cache-hit=0,query-cache-miss=6,rd-queries=1,recursing-answers=0," +
"recursing-questions=0,recursion-unanswered=0,security-status=3," +
"servfail-packets=0,signatures=0,tcp-answers=0,tcp-queries=0," +
"timedout-packets=0,udp-answers=1,udp-answers-bytes=50,udp-do-queries=0," +
"udp-queries=0,udp4-answers=1,udp4-queries=1,udp6-answers=0,udp6-queries=0," +
"key-cache-size=0,latency=26,meta-cache-size=0,qsize-q=0," +
"signature-cache-size=0,sys-msec=2889,uptime=86317,user-msec=2167,"
func (s statServer) serverSocket(l net.Listener) {
for {
conn, err := l.Accept()
if err != nil {
return
}
go func(c net.Conn) {
buf := make([]byte, 1024)
n, _ := c.Read(buf)
data := buf[:n]
if string(data) == "show * \n" {
c.Write([]byte(metrics))
c.Close()
}
}(conn)
}
}
func TestPowerdnsGeneratesMetrics(t *testing.T) {
// We create a fake server to return test data
var randomNumber int64
binary.Read(rand.Reader, binary.LittleEndian, &randomNumber)
socket, err := net.Listen("unix", fmt.Sprintf("/tmp/pdns%d.controlsocket", randomNumber))
if err != nil {
t.Fatal("Cannot initialize server on socket")
}
defer socket.Close()
s := statServer{}
go s.serverSocket(socket)
p := &Powerdns{
UnixSockets: []string{fmt.Sprintf("/tmp/pdns%d.controlsocket", randomNumber)},
}
var acc testutil.Accumulator
err = p.Gather(&acc)
require.NoError(t, err)
intMetrics := []string{"corrupt-packets", "deferred-cache-inserts",
"deferred-cache-lookup", "dnsupdate-answers", "dnsupdate-changes",
"dnsupdate-queries", "dnsupdate-refused", "packetcache-hit",
"packetcache-miss", "packetcache-size", "query-cache-hit", "query-cache-miss",
"rd-queries", "recursing-answers", "recursing-questions",
"recursion-unanswered", "security-status", "servfail-packets", "signatures",
"tcp-answers", "tcp-queries", "timedout-packets", "udp-answers",
"udp-answers-bytes", "udp-do-queries", "udp-queries", "udp4-answers",
"udp4-queries", "udp6-answers", "udp6-queries", "key-cache-size", "latency",
"meta-cache-size", "qsize-q", "signature-cache-size", "sys-msec", "uptime", "user-msec"}
for _, metric := range intMetrics {
assert.True(t, acc.HasIntField("powerdns", metric), metric)
}
}
func TestPowerdnsParseMetrics(t *testing.T) {
values, err := parseResponse(metrics)
require.NoError(t, err, "Error parsing powerdns response")
tests := []struct {
key string
value int64
}{
{"corrupt-packets", 0},
{"deferred-cache-inserts", 0},
{"deferred-cache-lookup", 0},
{"dnsupdate-answers", 0},
{"dnsupdate-changes", 0},
{"dnsupdate-queries", 0},
{"dnsupdate-refused", 0},
{"packetcache-hit", 0},
{"packetcache-miss", 1},
{"packetcache-size", 0},
{"query-cache-hit", 0},
{"query-cache-miss", 6},
{"rd-queries", 1},
{"recursing-answers", 0},
{"recursing-questions", 0},
{"recursion-unanswered", 0},
{"security-status", 3},
{"servfail-packets", 0},
{"signatures", 0},
{"tcp-answers", 0},
{"tcp-queries", 0},
{"timedout-packets", 0},
{"udp-answers", 1},
{"udp-answers-bytes", 50},
{"udp-do-queries", 0},
{"udp-queries", 0},
{"udp4-answers", 1},
{"udp4-queries", 1},
{"udp6-answers", 0},
{"udp6-queries", 0},
{"key-cache-size", 0},
{"latency", 26},
{"meta-cache-size", 0},
{"qsize-q", 0},
{"signature-cache-size", 0},
{"sys-msec", 2889},
{"uptime", 86317},
{"user-msec", 2167},
}
for _, test := range tests {
value, ok := values[test.key]
if !ok {
t.Errorf("Did not find key for metric %s in values", test.key)
continue
}
if value != test.value {
t.Errorf("Metric: %s, Expected: %d, actual: %d",
test.key, test.value, value)
}
}
}

View File

@@ -10,6 +10,7 @@ import (
"github.com/shirou/gopsutil/process"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -49,7 +50,7 @@ func (_ *Procstat) Description() string {
return "Monitor process cpu and memory usage"
}
func (p *Procstat) Gather(acc inputs.Accumulator) error {
func (p *Procstat) Gather(acc telegraf.Accumulator) error {
err := p.createProcesses()
if err != nil {
log.Printf("Error: procstat getting process, exe: [%s] pidfile: [%s] pattern: [%s] %s",
@@ -175,7 +176,7 @@ func pidsFromPattern(pattern string) ([]int32, error) {
}
func init() {
inputs.Add("procstat", func() inputs.Input {
inputs.Add("procstat", func() telegraf.Input {
return NewProcstat()
})
}

View File

@@ -6,14 +6,14 @@ import (
"github.com/shirou/gopsutil/process"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf"
)
type SpecProcessor struct {
Prefix string
tags map[string]string
fields map[string]interface{}
acc inputs.Accumulator
acc telegraf.Accumulator
proc *process.Process
}
@@ -34,7 +34,7 @@ func (p *SpecProcessor) flush() {
func NewSpecProcessor(
prefix string,
acc inputs.Accumulator,
acc telegraf.Accumulator,
p *process.Process,
) *SpecProcessor {
tags := make(map[string]string)

View File

@@ -3,6 +3,7 @@ package prometheus
import (
"errors"
"fmt"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/prometheus/common/expfmt"
"github.com/prometheus/common/model"
@@ -32,7 +33,7 @@ var ErrProtocolError = errors.New("prometheus protocol error")
// Reads stats from all configured servers and accumulates stats.
// Returns one of the errors encountered while gather stats (if any).
func (g *Prometheus) Gather(acc inputs.Accumulator) error {
func (g *Prometheus) Gather(acc telegraf.Accumulator) error {
var wg sync.WaitGroup
var outerr error
@@ -50,7 +51,7 @@ func (g *Prometheus) Gather(acc inputs.Accumulator) error {
return outerr
}
func (g *Prometheus) gatherURL(url string, acc inputs.Accumulator) error {
func (g *Prometheus) gatherURL(url string, acc telegraf.Accumulator) error {
resp, err := http.Get(url)
if err != nil {
return fmt.Errorf("error making HTTP request to %s: %s", url, err)
@@ -97,7 +98,7 @@ func (g *Prometheus) gatherURL(url string, acc inputs.Accumulator) error {
}
func init() {
inputs.Add("prometheus", func() inputs.Input {
inputs.Add("prometheus", func() telegraf.Input {
return &Prometheus{}
})
}

View File

@@ -8,6 +8,7 @@ import (
"reflect"
"strings"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -82,7 +83,7 @@ func (pa *PuppetAgent) Description() string {
}
// Gather reads stats from all configured servers and accumulates stats
func (pa *PuppetAgent) Gather(acc inputs.Accumulator) error {
func (pa *PuppetAgent) Gather(acc telegraf.Accumulator) error {
if len(pa.Location) == 0 {
pa.Location = "/var/lib/puppet/state/last_run_summary.yaml"
@@ -110,7 +111,7 @@ func (pa *PuppetAgent) Gather(acc inputs.Accumulator) error {
return nil
}
func structPrinter(s *State, acc inputs.Accumulator, tags map[string]string) {
func structPrinter(s *State, acc telegraf.Accumulator, tags map[string]string) {
e := reflect.ValueOf(s).Elem()
fields := make(map[string]interface{})
@@ -131,7 +132,7 @@ func structPrinter(s *State, acc inputs.Accumulator, tags map[string]string) {
}
func init() {
inputs.Add("puppetagent", func() inputs.Input {
inputs.Add("puppetagent", func() telegraf.Input {
return &PuppetAgent{}
})
}

View File

@@ -7,6 +7,7 @@ import (
"strconv"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -96,7 +97,7 @@ type Node struct {
SocketsUsed int64 `json:"sockets_used"`
}
type gatherFunc func(r *RabbitMQ, acc inputs.Accumulator, errChan chan error)
type gatherFunc func(r *RabbitMQ, acc telegraf.Accumulator, errChan chan error)
var gatherFunctions = []gatherFunc{gatherOverview, gatherNodes, gatherQueues}
@@ -119,7 +120,7 @@ func (r *RabbitMQ) Description() string {
return "Read metrics from one or many RabbitMQ servers via the management API"
}
func (r *RabbitMQ) Gather(acc inputs.Accumulator) error {
func (r *RabbitMQ) Gather(acc telegraf.Accumulator) error {
if r.Client == nil {
r.Client = &http.Client{}
}
@@ -172,7 +173,7 @@ func (r *RabbitMQ) requestJSON(u string, target interface{}) error {
return nil
}
func gatherOverview(r *RabbitMQ, acc inputs.Accumulator, errChan chan error) {
func gatherOverview(r *RabbitMQ, acc telegraf.Accumulator, errChan chan error) {
overview := &OverviewResponse{}
err := r.requestJSON("/api/overview", &overview)
@@ -208,7 +209,7 @@ func gatherOverview(r *RabbitMQ, acc inputs.Accumulator, errChan chan error) {
errChan <- nil
}
func gatherNodes(r *RabbitMQ, acc inputs.Accumulator, errChan chan error) {
func gatherNodes(r *RabbitMQ, acc telegraf.Accumulator, errChan chan error) {
nodes := make([]Node, 0)
// Gather information about nodes
err := r.requestJSON("/api/nodes", &nodes)
@@ -245,7 +246,7 @@ func gatherNodes(r *RabbitMQ, acc inputs.Accumulator, errChan chan error) {
errChan <- nil
}
func gatherQueues(r *RabbitMQ, acc inputs.Accumulator, errChan chan error) {
func gatherQueues(r *RabbitMQ, acc telegraf.Accumulator, errChan chan error) {
// Gather information about queues
queues := make([]Queue, 0)
err := r.requestJSON("/api/queues", &queues)
@@ -330,7 +331,7 @@ func (r *RabbitMQ) shouldGatherQueue(queue Queue) bool {
}
func init() {
inputs.Add("rabbitmq", func() inputs.Input {
inputs.Add("rabbitmq", func() telegraf.Input {
return &RabbitMQ{}
})
}

View File

@@ -10,6 +10,7 @@ import (
"strings"
"sync"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -76,7 +77,7 @@ var ErrProtocolError = errors.New("redis protocol error")
// Reads stats from all configured servers and accumulates stats.
// Returns one of the errors encountered while gather stats (if any).
func (r *Redis) Gather(acc inputs.Accumulator) error {
func (r *Redis) Gather(acc telegraf.Accumulator) error {
if len(r.Servers) == 0 {
url := &url.URL{
Host: ":6379",
@@ -113,7 +114,7 @@ func (r *Redis) Gather(acc inputs.Accumulator) error {
const defaultPort = "6379"
func (r *Redis) gatherServer(addr *url.URL, acc inputs.Accumulator) error {
func (r *Redis) gatherServer(addr *url.URL, acc telegraf.Accumulator) error {
_, _, err := net.SplitHostPort(addr.Host)
if err != nil {
addr.Host = addr.Host + ":" + defaultPort
@@ -158,7 +159,7 @@ func (r *Redis) gatherServer(addr *url.URL, acc inputs.Accumulator) error {
// gatherInfoOutput gathers
func gatherInfoOutput(
rdr *bufio.Reader,
acc inputs.Accumulator,
acc telegraf.Accumulator,
tags map[string]string,
) error {
var keyspace_hits, keyspace_misses uint64 = 0, 0
@@ -227,7 +228,7 @@ func gatherInfoOutput(
func gatherKeyspaceLine(
name string,
line string,
acc inputs.Accumulator,
acc telegraf.Accumulator,
tags map[string]string,
) {
if strings.Contains(line, "keys=") {
@@ -246,7 +247,7 @@ func gatherKeyspaceLine(
}
func init() {
inputs.Add("redis", func() inputs.Input {
inputs.Add("redis", func() telegraf.Input {
return &Redis{}
})
}

View File

@@ -1,53 +1,8 @@
package inputs
import "time"
import "github.com/influxdata/telegraf"
type Accumulator interface {
// Create a point with a value, decorating it with tags
// NOTE: tags is expected to be owned by the caller, don't mutate
// it after passing to Add.
Add(measurement string,
value interface{},
tags map[string]string,
t ...time.Time)
AddFields(measurement string,
fields map[string]interface{},
tags map[string]string,
t ...time.Time)
}
type Input interface {
// SampleConfig returns the default configuration of the Input
SampleConfig() string
// Description returns a one-sentence description on the Input
Description() string
// Gather takes in an accumulator and adds the metrics that the Input
// gathers. This is called every "interval"
Gather(Accumulator) error
}
type ServiceInput interface {
// SampleConfig returns the default configuration of the Input
SampleConfig() string
// Description returns a one-sentence description on the Input
Description() string
// Gather takes in an accumulator and adds the metrics that the Input
// gathers. This is called every "interval"
Gather(Accumulator) error
// Start starts the ServiceInput's service, whatever that may be
Start() error
// Stop stops the services and closes any necessary channels and connections
Stop()
}
type Creator func() Input
type Creator func() telegraf.Input
var Inputs = map[string]Creator{}
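After this refactor the registry file keeps only the `Creator` type and the `Inputs` map; each plugin's `init()` registers a constructor via `inputs.Add`, exactly as the diffs above show. A stripped-down sketch of the pattern (`Input`, `Add`, and `fakePlugin` are simplified stand-ins for the telegraf types):

```go
package main

import "fmt"

// Input mirrors the minimal plugin contract.
type Input interface{ Description() string }

// Creator builds a fresh plugin instance on each call.
type Creator func() Input

// Inputs is the global registry, populated by each plugin's init().
var Inputs = map[string]Creator{}

func Add(name string, c Creator) { Inputs[name] = c }

type fakePlugin struct{}

func (f *fakePlugin) Description() string { return "a fake input" }

// init runs before main, so registration happens as a side effect of
// importing the plugin package.
func init() { Add("fake", func() Input { return &fakePlugin{} }) }

func main() {
	fmt.Println(Inputs["fake"]().Description()) // prints: a fake input
}
```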

View File

@@ -5,6 +5,7 @@ import (
"net/url"
"sync"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"gopkg.in/dancannon/gorethink.v1"
@@ -35,7 +36,7 @@ var localhost = &Server{Url: &url.URL{Host: "127.0.0.1:28015"}}
// Reads stats from all configured servers and accumulates stats.
// Returns one of the errors encountered while gather stats (if any).
func (r *RethinkDB) Gather(acc inputs.Accumulator) error {
func (r *RethinkDB) Gather(acc telegraf.Accumulator) error {
if len(r.Servers) == 0 {
r.gatherServer(localhost, acc)
return nil
@@ -65,7 +66,7 @@ func (r *RethinkDB) Gather(acc inputs.Accumulator) error {
return outerr
}
func (r *RethinkDB) gatherServer(server *Server, acc inputs.Accumulator) error {
func (r *RethinkDB) gatherServer(server *Server, acc telegraf.Accumulator) error {
var err error
connectOpts := gorethink.ConnectOpts{
Address: server.Url.Host,
@@ -87,7 +88,7 @@ func (r *RethinkDB) gatherServer(server *Server, acc inputs.Accumulator) error {
}
func init() {
inputs.Add("rethinkdb", func() inputs.Input {
inputs.Add("rethinkdb", func() telegraf.Input {
return &RethinkDB{}
})
}

View File

@@ -4,7 +4,7 @@ import (
"reflect"
"time"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf"
)
type serverStatus struct {
@@ -88,7 +88,7 @@ var engineStats = map[string]string{
func (e *Engine) AddEngineStats(
keys []string,
acc inputs.Accumulator,
acc telegraf.Accumulator,
tags map[string]string,
) {
engine := reflect.ValueOf(e).Elem()
@@ -99,7 +99,7 @@ func (e *Engine) AddEngineStats(
acc.AddFields("rethinkdb_engine", fields, tags)
}
func (s *Storage) AddStats(acc inputs.Accumulator, tags map[string]string) {
func (s *Storage) AddStats(acc telegraf.Accumulator, tags map[string]string) {
fields := map[string]interface{}{
"cache_bytes_in_use": s.Cache.BytesInUse,
"disk_read_bytes_per_sec": s.Disk.ReadBytesPerSec,

View File

@@ -9,7 +9,7 @@ import (
"strconv"
"strings"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf"
"gopkg.in/dancannon/gorethink.v1"
)
@@ -20,7 +20,7 @@ type Server struct {
serverStatus serverStatus
}
func (s *Server) gatherData(acc inputs.Accumulator) error {
func (s *Server) gatherData(acc telegraf.Accumulator) error {
if err := s.getServerStatus(); err != nil {
return fmt.Errorf("Failed to get server_status, %s\n", err)
}
@@ -110,7 +110,7 @@ var ClusterTracking = []string{
"written_docs_per_sec",
}
func (s *Server) addClusterStats(acc inputs.Accumulator) error {
func (s *Server) addClusterStats(acc telegraf.Accumulator) error {
cursor, err := gorethink.DB("rethinkdb").Table("stats").Get([]string{"cluster"}).Run(s.session)
if err != nil {
return fmt.Errorf("cluster stats query error, %s\n", err.Error())
@@ -138,7 +138,7 @@ var MemberTracking = []string{
"total_writes",
}
func (s *Server) addMemberStats(acc inputs.Accumulator) error {
func (s *Server) addMemberStats(acc telegraf.Accumulator) error {
cursor, err := gorethink.DB("rethinkdb").Table("stats").Get([]string{"server", s.serverStatus.Id}).Run(s.session)
if err != nil {
return fmt.Errorf("member stats query error, %s\n", err.Error())
@@ -162,7 +162,7 @@ var TableTracking = []string{
"total_writes",
}
func (s *Server) addTableStats(acc inputs.Accumulator) error {
func (s *Server) addTableStats(acc telegraf.Accumulator) error {
tablesCursor, err := gorethink.DB("rethinkdb").Table("table_status").Run(s.session)
defer tablesCursor.Close()
var tables []tableStatus

View File

@@ -7,6 +7,7 @@ import (
"github.com/md14454/gosensors"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -35,7 +36,7 @@ func (_ *Sensors) SampleConfig() string {
return sensorsSampleConfig
}
func (s *Sensors) Gather(acc inputs.Accumulator) error {
func (s *Sensors) Gather(acc telegraf.Accumulator) error {
gosensors.Init()
defer gosensors.Cleanup()
@@ -84,7 +85,7 @@ func (s *Sensors) Gather(acc inputs.Accumulator) error {
}
func init() {
inputs.Add("sensors", func() inputs.Input {
inputs.Add("sensors", func() telegraf.Input {
return &Sensors{}
})
}

View File

@@ -9,6 +9,7 @@ import (
"strings"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/soniah/gosnmp"
@@ -187,7 +188,7 @@ func findnodename(node Node, ids []string) (string, string) {
return node.name, ""
}
func (s *Snmp) Gather(acc inputs.Accumulator) error {
func (s *Snmp) Gather(acc telegraf.Accumulator) error {
// Create oid tree
if s.SnmptranslateFile != "" && len(initNode.subnodes) == 0 {
data, err := ioutil.ReadFile(s.SnmptranslateFile)
@@ -283,7 +284,7 @@ func (s *Snmp) Gather(acc inputs.Accumulator) error {
return nil
}
func (h *Host) SNMPGet(acc inputs.Accumulator) error {
func (h *Host) SNMPGet(acc telegraf.Accumulator) error {
// Get snmp client
snmpClient, err := h.GetSNMPClient()
if err != nil {
@@ -324,7 +325,7 @@ func (h *Host) SNMPGet(acc inputs.Accumulator) error {
return nil
}
func (h *Host) SNMPBulk(acc inputs.Accumulator) error {
func (h *Host) SNMPBulk(acc telegraf.Accumulator) error {
// Get snmp client
snmpClient, err := h.GetSNMPClient()
if err != nil {
@@ -411,7 +412,7 @@ func (h *Host) GetSNMPClient() (*gosnmp.GoSNMP, error) {
return snmpClient, nil
}
func (h *Host) HandleResponse(oids map[string]Data, result *gosnmp.SnmpPacket, acc inputs.Accumulator) (string, error) {
func (h *Host) HandleResponse(oids map[string]Data, result *gosnmp.SnmpPacket, acc telegraf.Accumulator) (string, error) {
var lastOid string
for _, variable := range result.Variables {
lastOid = variable.Name
@@ -467,7 +468,7 @@ func (h *Host) HandleResponse(oids map[string]Data, result *gosnmp.SnmpPacket, a
}
func init() {
inputs.Add("snmp", func() inputs.Input {
inputs.Add("snmp", func() telegraf.Input {
return &Snmp{}
})
}

File diff suppressed because it is too large

View File

@@ -2,6 +2,7 @@ package sqlserver
import (
"database/sql"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"sync"
"time"
@@ -70,7 +71,7 @@ func initQueries() {
}
// Gather collect data from SQL Server
func (s *SQLServer) Gather(acc inputs.Accumulator) error {
func (s *SQLServer) Gather(acc telegraf.Accumulator) error {
initQueries()
if len(s.Servers) == 0 {
@@ -94,7 +95,7 @@ func (s *SQLServer) Gather(acc inputs.Accumulator) error {
return outerr
}
func (s *SQLServer) gatherServer(server string, query Query, acc inputs.Accumulator) error {
func (s *SQLServer) gatherServer(server string, query Query, acc telegraf.Accumulator) error {
// deferred opening
conn, err := sql.Open("mssql", server)
if err != nil {
@@ -130,7 +131,7 @@ func (s *SQLServer) gatherServer(server string, query Query, acc inputs.Accumula
return rows.Err()
}
func (s *SQLServer) accRow(query Query, acc inputs.Accumulator, row scanner) error {
func (s *SQLServer) accRow(query Query, acc telegraf.Accumulator, row scanner) error {
var columnVars []interface{}
var fields = make(map[string]interface{})
@@ -180,7 +181,7 @@ func (s *SQLServer) accRow(query Query, acc inputs.Accumulator, row scanner) err
}
func init() {
inputs.Add("sqlserver", func() inputs.Input {
inputs.Add("sqlserver", func() telegraf.Input {
return &SQLServer{}
})
}

View File

@@ -9,9 +9,11 @@ import (
"strconv"
"strings"
"sync"
"time"
"github.com/influxdata/influxdb/services/graphite"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -50,6 +52,8 @@ type Statsd struct {
done chan struct{}
// Cache gauges, counters & sets so they can be aggregated as they arrive
// gauges and counters map measurement/tags hash -> field name -> metrics
// sets and timings map measurement/tags hash -> metrics
gauges map[string]cachedgauge
counters map[string]cachedcounter
sets map[string]cachedset
@@ -79,6 +83,7 @@ func NewStatsd() *Statsd {
// One statsd metric, form is <bucket>:<value>|<mtype>|@<samplerate>
type metric struct {
name string
field string
bucket string
hash string
intvalue int64
@@ -90,21 +95,21 @@ type metric struct {
}
type cachedset struct {
name string
set map[int64]bool
tags map[string]string
name string
fields map[string]map[int64]bool
tags map[string]string
}
type cachedgauge struct {
name string
value float64
tags map[string]string
name string
fields map[string]interface{}
tags map[string]string
}
type cachedcounter struct {
name string
value int64
tags map[string]string
name string
fields map[string]interface{}
tags map[string]string
}
type cachedtimings struct {
@@ -156,41 +161,48 @@ func (_ *Statsd) SampleConfig() string {
return sampleConfig
}
func (s *Statsd) Gather(acc inputs.Accumulator) error {
func (s *Statsd) Gather(acc telegraf.Accumulator) error {
s.Lock()
defer s.Unlock()
now := time.Now()
for _, metric := range s.timings {
acc.Add(metric.name+"_mean", metric.stats.Mean(), metric.tags)
acc.Add(metric.name+"_stddev", metric.stats.Stddev(), metric.tags)
acc.Add(metric.name+"_upper", metric.stats.Upper(), metric.tags)
acc.Add(metric.name+"_lower", metric.stats.Lower(), metric.tags)
acc.Add(metric.name+"_count", metric.stats.Count(), metric.tags)
fields := make(map[string]interface{})
fields["mean"] = metric.stats.Mean()
fields["stddev"] = metric.stats.Stddev()
fields["upper"] = metric.stats.Upper()
fields["lower"] = metric.stats.Lower()
fields["count"] = metric.stats.Count()
for _, percentile := range s.Percentiles {
name := fmt.Sprintf("%s_percentile_%v", metric.name, percentile)
acc.Add(name, metric.stats.Percentile(percentile), metric.tags)
name := fmt.Sprintf("%v_percentile", percentile)
fields[name] = metric.stats.Percentile(percentile)
}
acc.AddFields(metric.name, fields, metric.tags, now)
}
if s.DeleteTimings {
s.timings = make(map[string]cachedtimings)
}
for _, metric := range s.gauges {
acc.Add(metric.name, metric.value, metric.tags)
acc.AddFields(metric.name, metric.fields, metric.tags, now)
}
if s.DeleteGauges {
s.gauges = make(map[string]cachedgauge)
}
for _, metric := range s.counters {
acc.Add(metric.name, metric.value, metric.tags)
acc.AddFields(metric.name, metric.fields, metric.tags, now)
}
if s.DeleteCounters {
s.counters = make(map[string]cachedcounter)
}
for _, metric := range s.sets {
acc.Add(metric.name, int64(len(metric.set)), metric.tags)
fields := make(map[string]interface{})
for field, set := range metric.fields {
fields[field] = int64(len(set))
}
acc.AddFields(metric.name, fields, metric.tags, now)
}
if s.DeleteSets {
s.sets = make(map[string]cachedset)
@@ -355,7 +367,12 @@ func (s *Statsd) parseStatsdLine(line string) error {
}
// Parse the name & tags from bucket
m.name, m.tags = s.parseName(m.bucket)
m.name, m.field, m.tags = s.parseName(m.bucket)
// fields are not supported for timings, so if specified combine into
// the name
if (m.mtype == "ms" || m.mtype == "h") && m.field != "value" {
m.name += "_" + m.field
}
switch m.mtype {
case "c":
m.tags["metric_type"] = "counter"
@@ -386,8 +403,8 @@ func (s *Statsd) parseStatsdLine(line string) error {
// parseName parses the given bucket name with the list of bucket maps in the
// config file. If there is a match, it will parse the name of the metric and
// map of tags.
// Return values are (<name>, <tags>)
func (s *Statsd) parseName(bucket string) (string, map[string]string) {
// Return values are (<name>, <field>, <tags>)
func (s *Statsd) parseName(bucket string) (string, string, map[string]string) {
tags := make(map[string]string)
bucketparts := strings.Split(bucket, ",")
@@ -407,17 +424,21 @@ func (s *Statsd) parseName(bucket string) (string, map[string]string) {
DefaultTags: tags,
}
var field string
name := bucketparts[0]
p, err := graphite.NewParserWithOptions(o)
if err == nil {
name, tags, _, _ = p.ApplyTemplate(name)
name, tags, field, _ = p.ApplyTemplate(name)
}
if s.ConvertNames {
name = strings.Replace(name, ".", "_", -1)
name = strings.Replace(name, "-", "__", -1)
}
if field == "" {
field = "value"
}
return name, tags
return name, field, tags
}
// Parse the key,value out of a string that looks like "key=value"
@@ -463,46 +484,59 @@ func (s *Statsd) aggregate(m metric) {
s.timings[m.hash] = cached
}
case "c":
cached, ok := s.counters[m.hash]
// check if the measurement exists
_, ok := s.counters[m.hash]
if !ok {
s.counters[m.hash] = cachedcounter{
name: m.name,
value: m.intvalue,
tags: m.tags,
name: m.name,
fields: make(map[string]interface{}),
tags: m.tags,
}
} else {
cached.value += m.intvalue
s.counters[m.hash] = cached
}
// check if the field exists
_, ok = s.counters[m.hash].fields[m.field]
if !ok {
s.counters[m.hash].fields[m.field] = int64(0)
}
s.counters[m.hash].fields[m.field] =
s.counters[m.hash].fields[m.field].(int64) + m.intvalue
case "g":
cached, ok := s.gauges[m.hash]
// check if the measurement exists
_, ok := s.gauges[m.hash]
if !ok {
s.gauges[m.hash] = cachedgauge{
name: m.name,
value: m.floatvalue,
tags: m.tags,
name: m.name,
fields: make(map[string]interface{}),
tags: m.tags,
}
}
// check if the field exists
_, ok = s.gauges[m.hash].fields[m.field]
if !ok {
s.gauges[m.hash].fields[m.field] = float64(0)
}
if m.additive {
s.gauges[m.hash].fields[m.field] =
s.gauges[m.hash].fields[m.field].(float64) + m.floatvalue
} else {
if m.additive {
cached.value = cached.value + m.floatvalue
} else {
cached.value = m.floatvalue
}
s.gauges[m.hash] = cached
s.gauges[m.hash].fields[m.field] = m.floatvalue
}
case "s":
cached, ok := s.sets[m.hash]
// check if the measurement exists
_, ok := s.sets[m.hash]
if !ok {
// Completely new metric (initialize with count of 1)
s.sets[m.hash] = cachedset{
name: m.name,
tags: m.tags,
set: map[int64]bool{m.intvalue: true},
name: m.name,
fields: make(map[string]map[int64]bool),
tags: m.tags,
}
} else {
cached.set[m.intvalue] = true
s.sets[m.hash] = cached
}
// check if the field exists
_, ok = s.sets[m.hash].fields[m.field]
if !ok {
s.sets[m.hash].fields[m.field] = make(map[int64]bool)
}
s.sets[m.hash].fields[m.field][m.intvalue] = true
}
}
@@ -515,7 +549,7 @@ func (s *Statsd) Stop() {
}
func init() {
inputs.Add("statsd", func() inputs.Input {
inputs.Add("statsd", func() telegraf.Input {
return &Statsd{
ConvertNames: true,
UDPPacketSize: UDP_PACKET_SIZE,

View File

@@ -243,6 +243,113 @@ func TestParse_TemplateSpecificity(t *testing.T) {
}
}
// Test that most specific template is chosen
func TestParse_TemplateFields(t *testing.T) {
s := NewStatsd()
s.Templates = []string{
"* measurement.measurement.field",
}
lines := []string{
"my.counter.f1:1|c",
"my.counter.f1:1|c",
"my.counter.f2:1|c",
"my.counter.f3:10|c",
"my.counter.f3:100|c",
"my.gauge.f1:10.1|g",
"my.gauge.f2:10.1|g",
"my.gauge.f1:0.9|g",
"my.set.f1:1|s",
"my.set.f1:2|s",
"my.set.f1:1|s",
"my.set.f2:100|s",
}
for _, line := range lines {
err := s.parseStatsdLine(line)
if err != nil {
t.Errorf("Parsing line %s should not have resulted in an error\n", line)
}
}
counter_tests := []struct {
name string
value int64
field string
}{
{
"my_counter",
2,
"f1",
},
{
"my_counter",
1,
"f2",
},
{
"my_counter",
110,
"f3",
},
}
// Validate counters
for _, test := range counter_tests {
err := test_validate_counter(test.name, test.value, s.counters, test.field)
if err != nil {
t.Error(err.Error())
}
}
gauge_tests := []struct {
name string
value float64
field string
}{
{
"my_gauge",
0.9,
"f1",
},
{
"my_gauge",
10.1,
"f2",
},
}
// Validate gauges
for _, test := range gauge_tests {
err := test_validate_gauge(test.name, test.value, s.gauges, test.field)
if err != nil {
t.Error(err.Error())
}
}
set_tests := []struct {
name string
value int64
field string
}{
{
"my_set",
2,
"f1",
},
{
"my_set",
1,
"f2",
},
}
// Validate sets
for _, test := range set_tests {
err := test_validate_set(test.name, test.value, s.sets, test.field)
if err != nil {
t.Error(err.Error())
}
}
}
// Test that fields are parsed correctly
func TestParse_Fields(t *testing.T) {
if false {
@@ -286,7 +393,7 @@ func TestParse_Tags(t *testing.T) {
}
for _, test := range tests {
name, tags := s.parseName(test.bucket)
name, _, tags := s.parseName(test.bucket)
if name != test.name {
t.Errorf("Expected: %s, got %s", test.name, name)
}
@@ -326,7 +433,7 @@ func TestParseName(t *testing.T) {
}
for _, test := range tests {
name, _ := s.parseName(test.in_name)
name, _, _ := s.parseName(test.in_name)
if name != test.out_name {
t.Errorf("Expected: %s, got %s", test.out_name, name)
}
@@ -354,7 +461,7 @@ func TestParseName(t *testing.T) {
}
for _, test := range tests {
name, _ := s.parseName(test.in_name)
name, _, _ := s.parseName(test.in_name)
if name != test.out_name {
t.Errorf("Expected: %s, got %s", test.out_name, name)
}
@@ -710,7 +817,7 @@ func TestParse_Timings(t *testing.T) {
// Test that counters work
valid_lines := []string{
"test.timing:1|ms",
"test.timing:1|ms",
"test.timing:11|ms",
"test.timing:1|ms",
"test.timing:1|ms",
"test.timing:1|ms",
@@ -725,40 +832,17 @@ func TestParse_Timings(t *testing.T) {
s.Gather(acc)
tests := []struct {
name string
value interface{}
}{
{
"test_timing_mean",
float64(1),
},
{
"test_timing_stddev",
float64(0),
},
{
"test_timing_upper",
float64(1),
},
{
"test_timing_lower",
float64(1),
},
{
"test_timing_count",
int64(5),
},
{
"test_timing_percentile_90",
float64(1),
},
valid := map[string]interface{}{
"90_percentile": float64(11),
"count": int64(5),
"lower": float64(1),
"mean": float64(3),
"stddev": float64(4),
"upper": float64(11),
}
for _, test := range tests {
acc.AssertContainsFields(t, test.name,
map[string]interface{}{"value": test.value})
}
acc.AssertContainsFields(t, "test_timing", valid)
}
func TestParse_Timings_Delete(t *testing.T) {
@@ -886,7 +970,14 @@ func test_validate_set(
name string,
value int64,
cache map[string]cachedset,
field ...string,
) error {
var f string
if len(field) > 0 {
f = field[0]
} else {
f = "value"
}
var metric cachedset
var found bool
for _, v := range cache {
@@ -900,23 +991,30 @@ func test_validate_set(
return errors.New(fmt.Sprintf("Test Error: Metric name %s not found\n", name))
}
if value != int64(len(metric.set)) {
if value != int64(len(metric.fields[f])) {
return errors.New(fmt.Sprintf("Measurement: %s, expected %d, actual %d\n",
name, value, len(metric.set)))
name, value, len(metric.fields[f])))
}
return nil
}
func test_validate_counter(
name string,
value int64,
valueExpected int64,
cache map[string]cachedcounter,
field ...string,
) error {
var metric cachedcounter
var f string
if len(field) > 0 {
f = field[0]
} else {
f = "value"
}
var valueActual int64
var found bool
for _, v := range cache {
if v.name == name {
metric = v
valueActual = v.fields[f].(int64)
found = true
break
}
@@ -925,23 +1023,30 @@ func test_validate_counter(
return errors.New(fmt.Sprintf("Test Error: Metric name %s not found\n", name))
}
if value != metric.value {
if valueExpected != valueActual {
return errors.New(fmt.Sprintf("Measurement: %s, expected %d, actual %d\n",
name, value, metric.value))
name, valueExpected, valueActual))
}
return nil
}
func test_validate_gauge(
name string,
value float64,
valueExpected float64,
cache map[string]cachedgauge,
field ...string,
) error {
var metric cachedgauge
var f string
if len(field) > 0 {
f = field[0]
} else {
f = "value"
}
var valueActual float64
var found bool
for _, v := range cache {
if v.name == name {
metric = v
valueActual = v.fields[f].(float64)
found = true
break
}
@@ -950,9 +1055,9 @@ func test_validate_gauge(
return errors.New(fmt.Sprintf("Test Error: Metric name %s not found\n", name))
}
if value != metric.value {
if valueExpected != valueActual {
return errors.New(fmt.Sprintf("Measurement: %s, expected %f, actual %f\n",
name, value, metric.value))
name, valueExpected, valueActual))
}
return nil
}

View File

@@ -4,6 +4,7 @@ import (
"fmt"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/shirou/gopsutil/cpu"
)
@@ -39,7 +40,7 @@ func (_ *CPUStats) SampleConfig() string {
return sampleConfig
}
func (s *CPUStats) Gather(acc inputs.Accumulator) error {
func (s *CPUStats) Gather(acc telegraf.Accumulator) error {
times, err := s.ps.CPUTimes(s.PerCPU, s.TotalCPU)
if err != nil {
return fmt.Errorf("error getting CPU info: %s", err)
@@ -111,7 +112,7 @@ func totalCpuTime(t cpu.CPUTimesStat) float64 {
}
func init() {
inputs.Add("cpu", func() inputs.Input {
inputs.Add("cpu", func() telegraf.Input {
return &CPUStats{ps: &systemPS{}}
})
}

View File

@@ -123,7 +123,7 @@ func assertContainsTaggedFloat(
tags map[string]string,
) {
var actualValue float64
for _, pt := range acc.Points {
for _, pt := range acc.Metrics {
if pt.Measurement == measurement {
for fieldname, value := range pt.Fields {
if fieldname == field {

View File

@@ -3,6 +3,7 @@ package system
import (
"fmt"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -29,7 +30,7 @@ func (_ *DiskStats) SampleConfig() string {
return diskSampleConfig
}
func (s *DiskStats) Gather(acc inputs.Accumulator) error {
func (s *DiskStats) Gather(acc telegraf.Accumulator) error {
// Legacy support:
if len(s.Mountpoints) != 0 {
s.MountPoints = s.Mountpoints
@@ -90,7 +91,7 @@ func (_ *DiskIOStats) SampleConfig() string {
return diskIoSampleConfig
}
func (s *DiskIOStats) Gather(acc inputs.Accumulator) error {
func (s *DiskIOStats) Gather(acc telegraf.Accumulator) error {
diskio, err := s.ps.DiskIO()
if err != nil {
return fmt.Errorf("error getting disk io info: %s", err)
@@ -136,11 +137,11 @@ func (s *DiskIOStats) Gather(acc inputs.Accumulator) error {
}
func init() {
inputs.Add("disk", func() inputs.Input {
inputs.Add("disk", func() telegraf.Input {
return &DiskStats{ps: &systemPS{}}
})
inputs.Add("diskio", func() inputs.Input {
inputs.Add("diskio", func() telegraf.Input {
return &DiskIOStats{ps: &systemPS{}}
})
}

View File

@@ -57,9 +57,9 @@ func TestDiskStats(t *testing.T) {
err = (&DiskStats{ps: &mps}).Gather(&acc)
require.NoError(t, err)
numDiskPoints := acc.NFields()
expectedAllDiskPoints := 14
assert.Equal(t, expectedAllDiskPoints, numDiskPoints)
numDiskMetrics := acc.NFields()
expectedAllDiskMetrics := 14
assert.Equal(t, expectedAllDiskMetrics, numDiskMetrics)
tags1 := map[string]string{
"path": "/",
@@ -91,15 +91,15 @@ func TestDiskStats(t *testing.T) {
acc.AssertContainsTaggedFields(t, "disk", fields1, tags1)
acc.AssertContainsTaggedFields(t, "disk", fields2, tags2)
// We expect 6 more DiskPoints to show up with an explicit match on "/"
// We expect 6 more DiskMetrics to show up with an explicit match on "/"
// and /home not matching the /dev in MountPoints
err = (&DiskStats{ps: &mps, MountPoints: []string{"/", "/dev"}}).Gather(&acc)
assert.Equal(t, expectedAllDiskPoints+7, acc.NFields())
assert.Equal(t, expectedAllDiskMetrics+7, acc.NFields())
// We should see all the diskpoints as MountPoints includes both
// / and /home
err = (&DiskStats{ps: &mps, MountPoints: []string{"/", "/home"}}).Gather(&acc)
assert.Equal(t, 2*expectedAllDiskPoints+7, acc.NFields())
assert.Equal(t, 2*expectedAllDiskMetrics+7, acc.NFields())
}
// func TestDiskIOStats(t *testing.T) {
@@ -138,9 +138,9 @@ func TestDiskStats(t *testing.T) {
// err = (&DiskIOStats{ps: &mps}).Gather(&acc)
// require.NoError(t, err)
// numDiskIOPoints := acc.NFields()
// expectedAllDiskIOPoints := 14
// assert.Equal(t, expectedAllDiskIOPoints, numDiskIOPoints)
// numDiskIOMetrics := acc.NFields()
// expectedAllDiskIOMetrics := 14
// assert.Equal(t, expectedAllDiskIOMetrics, numDiskIOMetrics)
// dtags1 := map[string]string{
// "name": "sda1",
@@ -166,10 +166,10 @@ func TestDiskStats(t *testing.T) {
// assert.True(t, acc.CheckTaggedValue("write_time", uint64(6087), dtags2))
// assert.True(t, acc.CheckTaggedValue("io_time", uint64(246552), dtags2))
// // We expect 7 more DiskIOPoints to show up with an explicit match on "sdb1"
// // We expect 7 more DiskIOMetrics to show up with an explicit match on "sdb1"
// // and serial should be missing from the tags with SkipSerialNumber set
// err = (&DiskIOStats{ps: &mps, Devices: []string{"sdb1"}, SkipSerialNumber: true}).Gather(&acc)
// assert.Equal(t, expectedAllDiskIOPoints+7, acc.NFields())
// assert.Equal(t, expectedAllDiskIOMetrics+7, acc.NFields())
// dtags3 := map[string]string{
// "name": "sdb1",

View File

@@ -3,6 +3,7 @@ package system
import (
"fmt"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -16,7 +17,7 @@ func (_ *MemStats) Description() string {
func (_ *MemStats) SampleConfig() string { return "" }
func (s *MemStats) Gather(acc inputs.Accumulator) error {
func (s *MemStats) Gather(acc telegraf.Accumulator) error {
vm, err := s.ps.VMStat()
if err != nil {
return fmt.Errorf("error getting virtual memory info: %s", err)
@@ -47,7 +48,7 @@ func (_ *SwapStats) Description() string {
func (_ *SwapStats) SampleConfig() string { return "" }
func (s *SwapStats) Gather(acc inputs.Accumulator) error {
func (s *SwapStats) Gather(acc telegraf.Accumulator) error {
swap, err := s.ps.SwapStat()
if err != nil {
return fmt.Errorf("error getting swap memory info: %s", err)
@@ -67,11 +68,11 @@ func (s *SwapStats) Gather(acc inputs.Accumulator) error {
}
func init() {
inputs.Add("mem", func() inputs.Input {
inputs.Add("mem", func() telegraf.Input {
return &MemStats{ps: &systemPS{}}
})
inputs.Add("swap", func() inputs.Input {
inputs.Add("swap", func() telegraf.Input {
return &SwapStats{ps: &systemPS{}}
})
}

View File

@@ -55,7 +55,7 @@ func TestMemStats(t *testing.T) {
}
acc.AssertContainsTaggedFields(t, "mem", memfields, make(map[string]string))
acc.Points = nil
acc.Metrics = nil
err = (&SwapStats{&mps}).Gather(&acc)
require.NoError(t, err)

View File

@@ -5,6 +5,7 @@ import (
"net"
"strings"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -31,7 +32,7 @@ func (_ *NetIOStats) SampleConfig() string {
return netSampleConfig
}
func (s *NetIOStats) Gather(acc inputs.Accumulator) error {
func (s *NetIOStats) Gather(acc telegraf.Accumulator) error {
netio, err := s.ps.NetIO()
if err != nil {
return fmt.Errorf("error getting net io info: %s", err)
@@ -103,7 +104,7 @@ func (s *NetIOStats) Gather(acc inputs.Accumulator) error {
}
func init() {
inputs.Add("net", func() inputs.Input {
inputs.Add("net", func() telegraf.Input {
return &NetIOStats{ps: &systemPS{}}
})
}

View File

@@ -85,7 +85,7 @@ func TestNetStats(t *testing.T) {
}
acc.AssertContainsTaggedFields(t, "net", fields2, ntags)
acc.Points = nil
acc.Metrics = nil
err = (&NetStats{&mps}).Gather(&acc)
require.NoError(t, err)

View File

@@ -4,6 +4,7 @@ import (
"fmt"
"syscall"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -21,7 +22,7 @@ func (_ *NetStats) SampleConfig() string {
return tcpstatSampleConfig
}
func (s *NetStats) Gather(acc inputs.Accumulator) error {
func (s *NetStats) Gather(acc telegraf.Accumulator) error {
netconns, err := s.ps.NetConnections()
if err != nil {
return fmt.Errorf("error getting net connections info: %s", err)
@@ -64,7 +65,7 @@ func (s *NetStats) Gather(acc inputs.Accumulator) error {
}
func init() {
inputs.Add("netstat", func() inputs.Input {
inputs.Add("netstat", func() telegraf.Input {
return &NetStats{ps: &systemPS{}}
})
}

View File

@@ -3,8 +3,8 @@ package system
import (
"os"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/shirou/gopsutil/cpu"
"github.com/shirou/gopsutil/disk"
@@ -23,7 +23,7 @@ type PS interface {
NetConnections() ([]net.NetConnectionStat, error)
}
func add(acc inputs.Accumulator,
func add(acc telegraf.Accumulator,
name string, val float64, tags map[string]string) {
if val >= 0 {
acc.Add(name, val, tags)

View File

@@ -8,6 +8,7 @@ import (
"github.com/shirou/gopsutil/host"
"github.com/shirou/gopsutil/load"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -19,7 +20,7 @@ func (_ *SystemStats) Description() string {
func (_ *SystemStats) SampleConfig() string { return "" }
func (_ *SystemStats) Gather(acc inputs.Accumulator) error {
func (_ *SystemStats) Gather(acc telegraf.Accumulator) error {
loadavg, err := load.LoadAvg()
if err != nil {
return err
@@ -68,7 +69,7 @@ func format_uptime(uptime uint64) string {
}
func init() {
inputs.Add("system", func() inputs.Input {
inputs.Add("system", func() telegraf.Input {
return &SystemStats{}
})
}

View File

@@ -3,6 +3,7 @@ package trig
import (
"math"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -24,7 +25,7 @@ func (s *Trig) Description() string {
return "Inserts sine and cosine waves for demonstration purposes"
}
func (s *Trig) Gather(acc inputs.Accumulator) error {
func (s *Trig) Gather(acc telegraf.Accumulator) error {
sinner := math.Sin((s.x*math.Pi)/5.0) * s.Amplitude
cosinner := math.Cos((s.x*math.Pi)/5.0) * s.Amplitude
@@ -41,5 +42,5 @@ func (s *Trig) Gather(acc inputs.Accumulator) error {
}
func init() {
inputs.Add("Trig", func() inputs.Input { return &Trig{x: 0.0} })
inputs.Add("Trig", func() telegraf.Input { return &Trig{x: 0.0} })
}

View File

@@ -7,6 +7,7 @@ import (
"net"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -31,7 +32,7 @@ func (t *Twemproxy) Description() string {
}
// Gather data from all Twemproxy instances
func (t *Twemproxy) Gather(acc inputs.Accumulator) error {
func (t *Twemproxy) Gather(acc telegraf.Accumulator) error {
conn, err := net.DialTimeout("tcp", t.Addr, 1*time.Second)
if err != nil {
return err
@@ -55,7 +56,7 @@ func (t *Twemproxy) Gather(acc inputs.Accumulator) error {
// Process Twemproxy server stats
func (t *Twemproxy) processStat(
acc inputs.Accumulator,
acc telegraf.Accumulator,
tags map[string]string,
data map[string]interface{},
) {
@@ -89,7 +90,7 @@ func (t *Twemproxy) processStat(
// Process pool data in Twemproxy stats
func (t *Twemproxy) processPool(
acc inputs.Accumulator,
acc telegraf.Accumulator,
tags map[string]string,
data map[string]interface{},
) {
@@ -117,7 +118,7 @@ func (t *Twemproxy) processPool(
// Process backend server(redis/memcached) stats
func (t *Twemproxy) processServer(
acc inputs.Accumulator,
acc telegraf.Accumulator,
tags map[string]string,
data map[string]interface{},
) {
@@ -143,7 +144,7 @@ func copyTags(tags map[string]string) map[string]string {
}
func init() {
inputs.Add("twemproxy", func() inputs.Input {
inputs.Add("twemproxy", func() telegraf.Input {
return &Twemproxy{}
})
}

View File

@@ -0,0 +1,327 @@
# win_perf_counters readme
On load, Telegraf hands this plugin its configuration.
The configuration is parsed and then tested for validity, such as
whether the Object, Instance and Counter exist.
If a combination does not match at startup, it will not be fetched.
Exceptions to this are cases where you query for all instances with "*".
By default the plugin does not return _Total
when querying for all (*), as this is redundant.
## Basics
The examples contained in this file have been found on the internet
as counters used when performance monitoring
Active Directory and IIS in particular.
There are a lot of other good objects to monitor, if you know what to look for.
This file is likely to be updated in the future with more examples of
useful configurations for separate scenarios.
### Plugin wide
Plugin wide entries are underneath `[[inputs.win_perf_counters]]`.
#### PrintValid
Bool; if set to `true`, prints out all matching performance objects.
Example:
`PrintValid=true`
#### PreVistaSupport
Bool, if set to `true` will use the localized PerfCounter interface that is present before Vista for backwards compatability.
It is recommended NOT to use this on OSes starting with Vista and newer because it requires more configuration to use this than the newer interface present since Vista.
Example for Windows Server 2003, this would be set to true:
`PreVistaSupport=true`
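Taken together, the plugin-wide options above can be sketched as a single TOML section (the values here are illustrative, not defaults):

```
[[inputs.win_perf_counters]]
  # Print all matching performance objects at startup.
  PrintValid = true
  # Only for pre-Vista systems such as Windows Server 2003.
  PreVistaSupport = true
```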
### Object
See Entry below.
### Entry
A new configuration entry starts with the TOML header
`[[inputs.win_perf_counters.object]]`.
This must come before any other plugin's configuration,
beneath the main win_perf_counters entry, `[[inputs.win_perf_counters]]`.
Following this are three required key/value pairs and three optional parameters, with their usage described below.
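As a rough sketch, a complete entry combining the three required pairs with the optional keys described below might look like this (the counter names are illustrative):

```
[[inputs.win_perf_counters.object]]
  # Required:
  ObjectName = "LogicalDisk"
  Instances = ["*"]
  Counters = ["% Idle Time", "% Disk Read Time"]
  # Optional:
  Measurement = "win_disk"
  IncludeTotal = false
  WarnOnMissing = true
```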
#### ObjectName
**Required**
ObjectName is the Object to query for, like Processor, DirectoryServices, LogicalDisk or similar.
Example: `ObjectName = "LogicalDisk"`
#### Instances
**Required**
Instances (this is an array) lists the instances of a counter you would like returned;
it can be one or more values.
For example, `Instances = ["C:","D:","E:"]` will return results only for the instances
C:, D: and E: where relevant. To get all instances of a Counter, use ["*"] only.
By default any results containing _Total are stripped,
unless _Total is specified as a wanted instance.
Alternatively, see the option IncludeTotal below.
Some Objects do not have instances to select from at all;
here only one option is valid if you want data back,
and that is to specify `Instances = ["------"]`.
#### Counters
**Required**
Counters (this is an array) lists the counters of the ObjectName
you would like returned; it can also be one or more values.
Example: `Counters = ["% Idle Time", "% Disk Read Time", "% Disk Write Time"]`
Every counter you want results for must be specified;
it is not possible to ask for all counters of an ObjectName.
#### Measurement
*Optional*
This key is optional; if it is not set, the measurement will be win_perf_counters.
In InfluxDB this is the key the returned data is stored underneath,
so to order your data in a sensible manner,
this is a good key to set to control where your IIS and Disk results are stored,
separate from Processor results.
Example: `Measurement = "win_disk"`
#### IncludeTotal
*Optional*
This key is optional; it is a simple bool.
If it is not set to true, or not included, it is treated as false.
This key only has an effect if Instances is set to "*"
and you would also like all instances containing _Total returned,
like "_Total", "0,_Total" and so on where applicable
(Processor Information is one example).
#### WarnOnMissing
*Optional*
This key is optional; it is a simple bool.
If it is not set to true, or not included at all, it is treated as false.
It only has an effect on the first execution of the plugin:
it will print out any ObjectName/Instance/Counter combinations
asked for that do not match. Useful when debugging new configurations.
#### FailOnMissing
*Internal*
This key should not be used; it is for testing purposes only.
It is a simple bool; if it is not set to true, or not included at all, it is treated as false.
If it is set to true, the plugin will abort and end prematurely
if any of the combinations of ObjectName/Instances/Counters are invalid.
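Putting the keys above together, a minimal object entry might look like the following (the measurement name and counter choice here are only illustrative; see the full examples below):

```
[[inputs.win_perf_counters.object]]
ObjectName = "Processor"
Instances = ["*"]
Counters = ["% Processor Time"]
Measurement = "win_cpu"  # Optional, defaults to win_perf_counters.
IncludeTotal = false     # Optional, set to true to also return _Total instances.
WarnOnMissing = true     # Optional, print unmatched combinations on first run.
```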
## Examples
### Generic Queries
```
[[inputs.win_perf_counters.object]]
# Processor usage, alternative to native, reports on a per core.
ObjectName = "Processor"
Instances = ["*"]
Counters = ["% Idle Time", "% Interrupt Time", "% Privileged Time", "% User Time", "% Processor Time"]
Measurement = "win_cpu"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
# Disk times and queues
ObjectName = "LogicalDisk"
Instances = ["*"]
Counters = ["% Idle Time", "% Disk Time","% Disk Read Time", "% Disk Write Time", "% User Time", "Current Disk Queue Length"]
Measurement = "win_disk"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
ObjectName = "System"
Counters = ["Context Switches/sec","System Calls/sec"]
Instances = ["------"]
Measurement = "win_system"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
# Example query where the Instance portion must be removed to get data back, such as from the Memory object.
ObjectName = "Memory"
Counters = ["Available Bytes","Cache Faults/sec","Demand Zero Faults/sec","Page Faults/sec","Pages/sec","Transition Faults/sec","Pool Nonpaged Bytes","Pool Paged Bytes"]
Instances = ["------"] # Use 6 x - to remove the Instance bit from the query.
Measurement = "win_mem"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
```
### Active Directory Domain Controller
```
[[inputs.win_perf_counters.object]]
ObjectName = "DirectoryServices"
Instances = ["*"]
Counters = ["Base Searches/sec","Database adds/sec","Database deletes/sec","Database modifys/sec","Database recycles/sec","LDAP Client Sessions","LDAP Searches/sec","LDAP Writes/sec"]
Measurement = "win_ad" # Set an alternative measurement to win_perf_counters if wanted.
#Instances = [""] # Gathers all instances by default, specify to only gather these
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
ObjectName = "Security System-Wide Statistics"
Instances = ["*"]
Counters = ["NTLM Authentications","Kerberos Authentications","Digest Authentications"]
Measurement = "win_ad"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
ObjectName = "Database"
Instances = ["*"]
Counters = ["Database Cache % Hit","Database Cache Page Fault Stalls/sec","Database Cache Page Faults/sec","Database Cache Size"]
Measurement = "win_db"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
```
### DFS Namespace + Domain Controllers
```
[[inputs.win_perf_counters.object]]
# AD, DFS N, Useful if the server hosts a DFS Namespace or is a Domain Controller
ObjectName = "DFS Namespace Service Referrals"
Instances = ["*"]
Counters = ["Requests Processed","Requests Failed","Avg. Response Time"]
Measurement = "win_dfsn"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
#WarnOnMissing = false # Print out when the performance counter is missing, either of object, counter or instance.
```
### DFS Replication + Domain Controllers
```
[[inputs.win_perf_counters.object]]
# AD, DFS R, Useful if the server hosts a DFS Replication folder or is a Domain Controller
ObjectName = "DFS Replication Service Volumes"
Instances = ["*"]
Counters = ["Data Lookups","Database Commits"]
Measurement = "win_dfsr"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
#WarnOnMissing = false # Print out when the performance counter is missing, either of object, counter or instance.
```
### DNS Server + Domain Controllers
```
[[inputs.win_perf_counters.object]]
ObjectName = "DNS"
Counters = ["Dynamic Update Received","Dynamic Update Rejected","Recursive Queries","Recursive Queries Failure","Secure Update Failure","Secure Update Received","TCP Query Received","TCP Response Sent","UDP Query Received","UDP Response Sent","Total Query Received","Total Response Sent"]
Instances = ["------"]
Measurement = "win_dns"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
```
### IIS / ASP.NET
```
[[inputs.win_perf_counters.object]]
# HTTP Service request queues in the Kernel before being handed over to User Mode.
ObjectName = "HTTP Service Request Queues"
Instances = ["*"]
Counters = ["CurrentQueueSize","RejectedRequests"]
Measurement = "win_http_queues"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
# IIS, ASP.NET Applications
ObjectName = "ASP.NET Applications"
Counters = ["Cache Total Entries","Cache Total Hit Ratio","Cache Total Turnover Rate","Output Cache Entries","Output Cache Hits","Output Cache Hit Ratio","Output Cache Turnover Rate","Compilations Total","Errors Total/Sec","Pipeline Instance Count","Requests Executing","Requests in Application Queue","Requests/Sec"]
Instances = ["*"]
Measurement = "win_aspnet_app"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
# IIS, ASP.NET
ObjectName = "ASP.NET"
Counters = ["Application Restarts","Request Wait Time","Requests Current","Requests Queued","Requests Rejected"]
Instances = ["*"]
Measurement = "win_aspnet"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
# IIS, Web Service
ObjectName = "Web Service"
Counters = ["Get Requests/sec","Post Requests/sec","Connection Attempts/sec","Current Connections","ISAPI Extension Requests/sec"]
Instances = ["*"]
Measurement = "win_websvc"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
# Web Service Cache / IIS
ObjectName = "Web Service Cache"
Counters = ["URI Cache Hits %","Kernel: URI Cache Hits %","File Cache Hits %"]
Instances = ["*"]
Measurement = "win_websvc_cache"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
```
### Process
```
[[inputs.win_perf_counters.object]]
# Process metrics, in this case for IIS only
ObjectName = "Process"
Counters = ["% Processor Time","Handle Count","Private Bytes","Thread Count","Virtual Bytes","Working Set"]
Instances = ["w3wp"]
Measurement = "win_proc"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
```
### .NET Monitoring
```
[[inputs.win_perf_counters.object]]
# .NET CLR Exceptions, in this case for IIS only
ObjectName = ".NET CLR Exceptions"
Counters = ["# of Exceps Thrown / sec"]
Instances = ["w3wp"]
Measurement = "win_dotnet_exceptions"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
# .NET CLR Jit, in this case for IIS only
ObjectName = ".NET CLR Jit"
Counters = ["% Time in Jit","IL Bytes Jitted / sec"]
Instances = ["w3wp"]
Measurement = "win_dotnet_jit"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
# .NET CLR Loading, in this case for IIS only
ObjectName = ".NET CLR Loading"
Counters = ["% Time Loading"]
Instances = ["w3wp"]
Measurement = "win_dotnet_loading"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
# .NET CLR LocksAndThreads, in this case for IIS only
ObjectName = ".NET CLR LocksAndThreads"
Counters = ["# of current logical Threads","# of current physical Threads","# of current recognized threads","# of total recognized threads","Queue Length / sec","Total # of Contentions","Current Queue Length"]
Instances = ["w3wp"]
Measurement = "win_dotnet_locks"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
# .NET CLR Memory, in this case for IIS only
ObjectName = ".NET CLR Memory"
Counters = ["% Time in GC","# Bytes in all Heaps","# Gen 0 Collections","# Gen 1 Collections","# Gen 2 Collections","# Induced GC","Allocated Bytes/sec","Finalization Survivors","Gen 0 heap size","Gen 1 heap size","Gen 2 heap size","Large Object Heap size","# of Pinned Objects"]
Instances = ["w3wp"]
Measurement = "win_dotnet_mem"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
# .NET CLR Security, in this case for IIS only
ObjectName = ".NET CLR Security"
Counters = ["% Time in RT checks","Stack Walk Depth","Total Runtime Checks"]
Instances = ["w3wp"]
Measurement = "win_dotnet_security"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
```


@@ -0,0 +1,327 @@
// +build windows
package win_perf_counters
import (
"errors"
"fmt"
"strings"
"unsafe"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/lxn/win"
)
var sampleConfig string = `
# By default this plugin returns basic CPU and Disk statistics.
# See the README file for more examples.
# Uncomment examples below or write your own as you see fit. If the system
# being polled for data does not have the Object at startup of the Telegraf
# agent, it will not be gathered.
# Settings:
# PrintValid = false # Print All matching performance counters
[[inputs.win_perf_counters.object]]
# Processor usage, alternative to native, reports on a per core.
ObjectName = "Processor"
Instances = ["*"]
Counters = [
"%% Idle Time", "%% Interrupt Time",
"%% Privileged Time", "%% User Time",
"%% Processor Time"
]
Measurement = "win_cpu"
# Set to true to include _Total instance when querying for all (*).
# IncludeTotal=false
# Print out when the performance counter is missing from object, counter or instance.
# WarnOnMissing = false
[[inputs.win_perf_counters.object]]
# Disk times and queues
ObjectName = "LogicalDisk"
Instances = ["*"]
Counters = [
"%% Idle Time", "%% Disk Time","%% Disk Read Time",
"%% Disk Write Time", "%% User Time", "Current Disk Queue Length"
]
Measurement = "win_disk"
[[inputs.win_perf_counters.object]]
ObjectName = "System"
Counters = ["Context Switches/sec","System Calls/sec"]
Instances = ["------"]
Measurement = "win_system"
[[inputs.win_perf_counters.object]]
# Example query where the Instance portion must be removed to get data back,
# such as from the Memory object.
ObjectName = "Memory"
Counters = [
"Available Bytes", "Cache Faults/sec", "Demand Zero Faults/sec",
"Page Faults/sec", "Pages/sec", "Transition Faults/sec",
"Pool Nonpaged Bytes", "Pool Paged Bytes"
]
Instances = ["------"] # Use 6 x - to remove the Instance bit from the query.
Measurement = "win_mem"
`
// Valid queries end up in this map.
var gItemList = make(map[int]*item)
var configParsed bool
var testConfigParsed bool
var testObject string
type Win_PerfCounters struct {
PrintValid bool
TestName string
PreVistaSupport bool
Object []perfobject
}
type perfobject struct {
ObjectName string
Counters []string
Instances []string
Measurement string
WarnOnMissing bool
FailOnMissing bool
IncludeTotal bool
}
// Parsed configuration ends up here after it has been validated for valid
// Performance Counter paths
type itemList struct {
items map[int]*item
}
type item struct {
query string
objectName string
counter string
instance string
measurement string
include_total bool
handle win.PDH_HQUERY
counterHandle win.PDH_HCOUNTER
}
func (m *Win_PerfCounters) AddItem(metrics *itemList, query string, objectName string, counter string, instance string,
measurement string, include_total bool) {
var handle win.PDH_HQUERY
var counterHandle win.PDH_HCOUNTER
ret := win.PdhOpenQuery(0, 0, &handle)
if m.PreVistaSupport {
ret = win.PdhAddCounter(handle, query, 0, &counterHandle)
} else {
ret = win.PdhAddEnglishCounter(handle, query, 0, &counterHandle)
}
_ = ret
temp := &item{query, objectName, counter, instance, measurement,
include_total, handle, counterHandle}
index := len(gItemList)
gItemList[index] = temp
if metrics.items == nil {
metrics.items = make(map[int]*item)
}
metrics.items[index] = temp
}
func (m *Win_PerfCounters) InvalidObject(exists uint32, query string, PerfObject perfobject, instance string, counter string) error {
if exists == 3221228472 { // win.PDH_CSTATUS_NO_OBJECT
if PerfObject.FailOnMissing {
err := errors.New("Performance object does not exist")
return err
} else {
fmt.Printf("Performance Object '%s' does not exist in query: %s\n", PerfObject.ObjectName, query)
}
} else if exists == 3221228473 { //win.PDH_CSTATUS_NO_COUNTER
if PerfObject.FailOnMissing {
err := errors.New("Counter in Performance object does not exist")
return err
} else {
fmt.Printf("Counter '%s' does not exist in query: %s\n", counter, query)
}
} else if exists == 2147485649 { //win.PDH_CSTATUS_NO_INSTANCE
if PerfObject.FailOnMissing {
err := errors.New("Instance in Performance object does not exist")
return err
} else {
fmt.Printf("Instance '%s' does not exist in query: %s\n", instance, query)
}
} else {
fmt.Printf("Invalid result: %v, query: %s\n", exists, query)
if PerfObject.FailOnMissing {
err := errors.New("Invalid query for Performance Counters")
return err
}
}
return nil
}
func (m *Win_PerfCounters) Description() string {
return "Input plugin to query Performance Counters on Windows operating systems"
}
func (m *Win_PerfCounters) SampleConfig() string {
return sampleConfig
}
func (m *Win_PerfCounters) ParseConfig(metrics *itemList) error {
var query string
configParsed = true
if len(m.Object) > 0 {
for _, PerfObject := range m.Object {
for _, counter := range PerfObject.Counters {
for _, instance := range PerfObject.Instances {
objectname := PerfObject.ObjectName
if instance == "------" {
query = "\\" + objectname + "\\" + counter
} else {
query = "\\" + objectname + "(" + instance + ")\\" + counter
}
var exists uint32 = win.PdhValidatePath(query)
if exists == win.ERROR_SUCCESS {
if m.PrintValid {
fmt.Printf("Valid: %s\n", query)
}
m.AddItem(metrics, query, objectname, counter, instance,
PerfObject.Measurement, PerfObject.IncludeTotal)
} else {
if PerfObject.FailOnMissing || PerfObject.WarnOnMissing {
err := m.InvalidObject(exists, query, PerfObject, instance, counter)
return err
}
}
}
}
}
return nil
} else {
err := errors.New("No performance objects configured!")
return err
}
}
func (m *Win_PerfCounters) Cleanup(metrics *itemList) {
// Cleanup
for _, metric := range metrics.items {
ret := win.PdhCloseQuery(metric.handle)
_ = ret
}
}
func (m *Win_PerfCounters) CleanupTestMode() {
// Cleanup for the testmode.
for _, metric := range gItemList {
ret := win.PdhCloseQuery(metric.handle)
_ = ret
}
}
func (m *Win_PerfCounters) Gather(acc telegraf.Accumulator) error {
metrics := itemList{}
// Both values are empty in normal use.
if m.TestName != testObject {
// Cleanup any handles before emptying the global variable containing valid queries.
m.CleanupTestMode()
gItemList = make(map[int]*item)
testObject = m.TestName
testConfigParsed = true
configParsed = false
}
// We only need to parse the config during the init, it uses the global variable after.
if !configParsed {
err := m.ParseConfig(&metrics)
if err != nil {
return err
}
}
var bufSize uint32
var bufCount uint32
var size uint32 = uint32(unsafe.Sizeof(win.PDH_FMT_COUNTERVALUE_ITEM_DOUBLE{}))
var emptyBuf [1]win.PDH_FMT_COUNTERVALUE_ITEM_DOUBLE // need at least 1 addressable null ptr.
// Iterate over the known metrics and get the samples.
for _, metric := range gItemList {
// collect
ret := win.PdhCollectQueryData(metric.handle)
if ret == win.ERROR_SUCCESS {
ret = win.PdhGetFormattedCounterArrayDouble(metric.counterHandle, &bufSize,
&bufCount, &emptyBuf[0]) // uses null ptr here according to MSDN.
if ret == win.PDH_MORE_DATA {
filledBuf := make([]win.PDH_FMT_COUNTERVALUE_ITEM_DOUBLE, bufCount)
ret = win.PdhGetFormattedCounterArrayDouble(metric.counterHandle,
&bufSize, &bufCount, &filledBuf[0])
for i := 0; i < int(bufCount); i++ {
c := filledBuf[i]
var s string = win.UTF16PtrToString(c.SzName)
var add bool
if metric.include_total {
// If IncludeTotal is set, include all.
add = true
} else if metric.instance == "*" && !strings.Contains(s, "_Total") {
// Catch if set to * and that it is not a '*_Total*' instance.
add = true
} else if metric.instance == s {
// Catch if we set it to total or some form of it
add = true
} else if metric.instance == "------" {
add = true
}
if add {
fields := make(map[string]interface{})
tags := make(map[string]string)
if s != "" {
tags["instance"] = s
}
tags["objectname"] = metric.objectName
fields[metric.counter] = float32(c.FmtValue.DoubleValue)
var measurement string
if metric.measurement == "" {
measurement = "win_perf_counters"
} else {
measurement = metric.measurement
}
acc.AddFields(measurement, fields, tags)
}
}
filledBuf = nil
// Need to at least set bufSize to zero, because if not, the function will not
// return PDH_MORE_DATA and will not set the bufSize.
bufCount = 0
bufSize = 0
}
}
}
return nil
}
func init() {
inputs.Add("win_perf_counters", func() telegraf.Input { return &Win_PerfCounters{} })
}


@@ -0,0 +1,3 @@
// +build !windows
package win_perf_counters


@@ -0,0 +1,527 @@
// +build windows
package win_perf_counters
import (
"errors"
"testing"
"time"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/require"
)
func TestWinPerfcountersConfigGet1(t *testing.T) {
validmetrics := itemList{}
var instances = make([]string, 1)
var counters = make([]string, 1)
var perfobjects = make([]perfobject, 1)
objectname := "Processor Information"
instances[0] = "_Total"
counters[0] = "% Processor Time"
var measurement string = "test"
var warnonmissing bool = false
var failonmissing bool = true
var includetotal bool = false
PerfObject := perfobject{
ObjectName: objectname,
Instances: instances,
Counters: counters,
Measurement: measurement,
WarnOnMissing: warnonmissing,
FailOnMissing: failonmissing,
IncludeTotal: includetotal,
}
perfobjects[0] = PerfObject
m := Win_PerfCounters{PrintValid: false, TestName: "ConfigGet1", Object: perfobjects}
err := m.ParseConfig(&validmetrics)
require.NoError(t, err)
}
func TestWinPerfcountersConfigGet2(t *testing.T) {
metrics := itemList{}
var instances = make([]string, 1)
var counters = make([]string, 1)
var perfobjects = make([]perfobject, 1)
objectname := "Processor Information"
instances[0] = "_Total"
counters[0] = "% Processor Time"
var measurement string = "test"
var warnonmissing bool = false
var failonmissing bool = true
var includetotal bool = false
PerfObject := perfobject{
ObjectName: objectname,
Instances: instances,
Counters: counters,
Measurement: measurement,
WarnOnMissing: warnonmissing,
FailOnMissing: failonmissing,
IncludeTotal: includetotal,
}
perfobjects[0] = PerfObject
m := Win_PerfCounters{PrintValid: false, TestName: "ConfigGet2", Object: perfobjects}
err := m.ParseConfig(&metrics)
require.NoError(t, err)
if len(metrics.items) == 1 {
require.NoError(t, nil)
} else if len(metrics.items) == 0 {
err2 := errors.New("No results returned from the query")
require.NoError(t, err2)
} else if len(metrics.items) > 1 {
err2 := errors.New("Too many results returned from the query")
require.NoError(t, err2)
}
}
func TestWinPerfcountersConfigGet3(t *testing.T) {
metrics := itemList{}
var instances = make([]string, 1)
var counters = make([]string, 2)
var perfobjects = make([]perfobject, 1)
objectname := "Processor Information"
instances[0] = "_Total"
counters[0] = "% Processor Time"
counters[1] = "% Idle Time"
var measurement string = "test"
var warnonmissing bool = false
var failonmissing bool = true
var includetotal bool = false
PerfObject := perfobject{
ObjectName: objectname,
Instances: instances,
Counters: counters,
Measurement: measurement,
WarnOnMissing: warnonmissing,
FailOnMissing: failonmissing,
IncludeTotal: includetotal,
}
perfobjects[0] = PerfObject
m := Win_PerfCounters{PrintValid: false, TestName: "ConfigGet3", Object: perfobjects}
err := m.ParseConfig(&metrics)
require.NoError(t, err)
if len(metrics.items) == 2 {
require.NoError(t, nil)
} else if len(metrics.items) < 2 {
err2 := errors.New("Too few results returned from the query")
require.NoError(t, err2)
} else if len(metrics.items) > 2 {
err2 := errors.New("Too many results returned from the query")
require.NoError(t, err2)
}
}
func TestWinPerfcountersConfigGet4(t *testing.T) {
metrics := itemList{}
var instances = make([]string, 2)
var counters = make([]string, 1)
var perfobjects = make([]perfobject, 1)
objectname := "Processor Information"
instances[0] = "_Total"
instances[1] = "0"
counters[0] = "% Processor Time"
var measurement string = "test"
var warnonmissing bool = false
var failonmissing bool = true
var includetotal bool = false
PerfObject := perfobject{
ObjectName: objectname,
Instances: instances,
Counters: counters,
Measurement: measurement,
WarnOnMissing: warnonmissing,
FailOnMissing: failonmissing,
IncludeTotal: includetotal,
}
perfobjects[0] = PerfObject
m := Win_PerfCounters{PrintValid: false, TestName: "ConfigGet4", Object: perfobjects}
err := m.ParseConfig(&metrics)
require.NoError(t, err)
if len(metrics.items) == 2 {
require.NoError(t, nil)
} else if len(metrics.items) < 2 {
err2 := errors.New("Too few results returned from the query")
require.NoError(t, err2)
} else if len(metrics.items) > 2 {
err2 := errors.New("Too many results returned from the query")
require.NoError(t, err2)
}
}
func TestWinPerfcountersConfigGet5(t *testing.T) {
metrics := itemList{}
var instances = make([]string, 2)
var counters = make([]string, 2)
var perfobjects = make([]perfobject, 1)
objectname := "Processor Information"
instances[0] = "_Total"
instances[1] = "0"
counters[0] = "% Processor Time"
counters[1] = "% Idle Time"
var measurement string = "test"
var warnonmissing bool = false
var failonmissing bool = true
var includetotal bool = false
PerfObject := perfobject{
ObjectName: objectname,
Instances: instances,
Counters: counters,
Measurement: measurement,
WarnOnMissing: warnonmissing,
FailOnMissing: failonmissing,
IncludeTotal: includetotal,
}
perfobjects[0] = PerfObject
m := Win_PerfCounters{PrintValid: false, TestName: "ConfigGet5", Object: perfobjects}
err := m.ParseConfig(&metrics)
require.NoError(t, err)
if len(metrics.items) == 4 {
require.NoError(t, nil)
} else if len(metrics.items) < 4 {
err2 := errors.New("Too few results returned from the query")
require.NoError(t, err2)
} else if len(metrics.items) > 4 {
err2 := errors.New("Too many results returned from the query")
require.NoError(t, err2)
}
}
func TestWinPerfcountersConfigGet6(t *testing.T) {
validmetrics := itemList{}
var instances = make([]string, 1)
var counters = make([]string, 1)
var perfobjects = make([]perfobject, 1)
objectname := "System"
instances[0] = "------"
counters[0] = "Context Switches/sec"
var measurement string = "test"
var warnonmissing bool = false
var failonmissing bool = true
var includetotal bool = false
PerfObject := perfobject{
ObjectName: objectname,
Instances: instances,
Counters: counters,
Measurement: measurement,
WarnOnMissing: warnonmissing,
FailOnMissing: failonmissing,
IncludeTotal: includetotal,
}
perfobjects[0] = PerfObject
m := Win_PerfCounters{PrintValid: false, TestName: "ConfigGet6", Object: perfobjects}
err := m.ParseConfig(&validmetrics)
require.NoError(t, err)
}
func TestWinPerfcountersConfigGet7(t *testing.T) {
metrics := itemList{}
var instances = make([]string, 1)
var counters = make([]string, 3)
var perfobjects = make([]perfobject, 1)
objectname := "Processor Information"
instances[0] = "_Total"
counters[0] = "% Processor Time"
counters[1] = "% Processor TimeERROR"
counters[2] = "% Idle Time"
var measurement string = "test"
var warnonmissing bool = false
var failonmissing bool = false
var includetotal bool = false
PerfObject := perfobject{
ObjectName: objectname,
Instances: instances,
Counters: counters,
Measurement: measurement,
WarnOnMissing: warnonmissing,
FailOnMissing: failonmissing,
IncludeTotal: includetotal,
}
perfobjects[0] = PerfObject
m := Win_PerfCounters{PrintValid: false, TestName: "ConfigGet7", Object: perfobjects}
err := m.ParseConfig(&metrics)
require.NoError(t, err)
if len(metrics.items) == 2 {
require.NoError(t, nil)
} else if len(metrics.items) < 2 {
err2 := errors.New("Too few results returned from the query")
require.NoError(t, err2)
} else if len(metrics.items) > 2 {
err2 := errors.New("Too many results returned from the query")
require.NoError(t, err2)
}
}
func TestWinPerfcountersConfigError1(t *testing.T) {
metrics := itemList{}
var instances = make([]string, 1)
var counters = make([]string, 1)
var perfobjects = make([]perfobject, 1)
objectname := "Processor InformationERROR"
instances[0] = "_Total"
counters[0] = "% Processor Time"
var measurement string = "test"
var warnonmissing bool = false
var failonmissing bool = true
var includetotal bool = false
PerfObject := perfobject{
ObjectName: objectname,
Instances: instances,
Counters: counters,
Measurement: measurement,
WarnOnMissing: warnonmissing,
FailOnMissing: failonmissing,
IncludeTotal: includetotal,
}
perfobjects[0] = PerfObject
m := Win_PerfCounters{PrintValid: false, TestName: "ConfigError1", Object: perfobjects}
err := m.ParseConfig(&metrics)
require.Error(t, err)
}
func TestWinPerfcountersConfigError2(t *testing.T) {
metrics := itemList{}
var instances = make([]string, 1)
var counters = make([]string, 1)
var perfobjects = make([]perfobject, 1)
objectname := "Processor"
instances[0] = "SuperERROR"
counters[0] = "% C1 Time"
var measurement string = "test"
var warnonmissing bool = false
var failonmissing bool = true
var includetotal bool = false
PerfObject := perfobject{
ObjectName: objectname,
Instances: instances,
Counters: counters,
Measurement: measurement,
WarnOnMissing: warnonmissing,
FailOnMissing: failonmissing,
IncludeTotal: includetotal,
}
perfobjects[0] = PerfObject
m := Win_PerfCounters{PrintValid: false, TestName: "ConfigError2", Object: perfobjects}
err := m.ParseConfig(&metrics)
require.Error(t, err)
}
func TestWinPerfcountersConfigError3(t *testing.T) {
metrics := itemList{}
var instances = make([]string, 1)
var counters = make([]string, 1)
var perfobjects = make([]perfobject, 1)
objectname := "Processor Information"
instances[0] = "_Total"
counters[0] = "% Processor TimeERROR"
var measurement string = "test"
var warnonmissing bool = false
var failonmissing bool = true
var includetotal bool = false
PerfObject := perfobject{
ObjectName: objectname,
Instances: instances,
Counters: counters,
Measurement: measurement,
WarnOnMissing: warnonmissing,
FailOnMissing: failonmissing,
IncludeTotal: includetotal,
}
perfobjects[0] = PerfObject
m := Win_PerfCounters{PrintValid: false, TestName: "ConfigError3", Object: perfobjects}
err := m.ParseConfig(&metrics)
require.Error(t, err)
}
func TestWinPerfcountersCollect1(t *testing.T) {
var instances = make([]string, 1)
var counters = make([]string, 1)
var perfobjects = make([]perfobject, 1)
objectname := "Processor Information"
instances[0] = "_Total"
counters[0] = "Parking Status"
var measurement string = "test"
var warnonmissing bool = false
var failonmissing bool = true
var includetotal bool = false
PerfObject := perfobject{
ObjectName: objectname,
Instances: instances,
Counters: counters,
Measurement: measurement,
WarnOnMissing: warnonmissing,
FailOnMissing: failonmissing,
IncludeTotal: includetotal,
}
perfobjects[0] = PerfObject
m := Win_PerfCounters{PrintValid: false, TestName: "Collect1", Object: perfobjects}
var acc testutil.Accumulator
err := m.Gather(&acc)
require.NoError(t, err)
time.Sleep(2000 * time.Millisecond)
err = m.Gather(&acc)
tags := map[string]string{
"instance": instances[0],
"objectname": objectname,
}
fields := map[string]interface{}{
counters[0]: float32(0),
}
acc.AssertContainsTaggedFields(t, measurement, fields, tags)
}
func TestWinPerfcountersCollect2(t *testing.T) {
var instances = make([]string, 2)
var counters = make([]string, 1)
var perfobjects = make([]perfobject, 1)
objectname := "Processor Information"
instances[0] = "_Total"
instances[1] = "0,0"
counters[0] = "Performance Limit Flags"
var measurement string = "test"
var warnonmissing bool = false
var failonmissing bool = true
var includetotal bool = false
PerfObject := perfobject{
ObjectName: objectname,
Instances: instances,
Counters: counters,
Measurement: measurement,
WarnOnMissing: warnonmissing,
FailOnMissing: failonmissing,
IncludeTotal: includetotal,
}
perfobjects[0] = PerfObject
m := Win_PerfCounters{PrintValid: false, TestName: "Collect2", Object: perfobjects}
var acc testutil.Accumulator
err := m.Gather(&acc)
require.NoError(t, err)
time.Sleep(2000 * time.Millisecond)
err = m.Gather(&acc)
tags := map[string]string{
"instance": instances[0],
"objectname": objectname,
}
fields := map[string]interface{}{
counters[0]: float32(0),
}
acc.AssertContainsTaggedFields(t, measurement, fields, tags)
tags = map[string]string{
"instance": instances[1],
"objectname": objectname,
}
fields = map[string]interface{}{
counters[0]: float32(0),
}
acc.AssertContainsTaggedFields(t, measurement, fields, tags)
}


@@ -6,6 +6,7 @@ import (
"strconv"
"strings"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -68,7 +69,7 @@ func getTags(pools []poolInfo) map[string]string {
return map[string]string{"pools": poolNames}
}
func gatherPoolStats(pool poolInfo, acc inputs.Accumulator) error {
func gatherPoolStats(pool poolInfo, acc telegraf.Accumulator) error {
lines, err := internal.ReadLines(pool.ioFilename)
if err != nil {
return err
@@ -101,7 +102,7 @@ func gatherPoolStats(pool poolInfo, acc inputs.Accumulator) error {
return nil
}
func (z *Zfs) Gather(acc inputs.Accumulator) error {
func (z *Zfs) Gather(acc telegraf.Accumulator) error {
kstatMetrics := z.KstatMetrics
if len(kstatMetrics) == 0 {
kstatMetrics = []string{"arcstats", "zfetchstats", "vdev_cache_stats"}
@@ -149,7 +150,7 @@ func (z *Zfs) Gather(acc inputs.Accumulator) error {
}
func init() {
inputs.Add("zfs", func() inputs.Input {
inputs.Add("zfs", func() telegraf.Input {
return &Zfs{}
})
}


@@ -148,7 +148,7 @@ func TestZfsPoolMetrics(t *testing.T) {
require.NoError(t, err)
require.False(t, acc.HasMeasurement("zfs_pool"))
acc.Points = nil
acc.Metrics = nil
z = &Zfs{KstatPath: testKstatPath, KstatMetrics: []string{"arcstats"}, PoolMetrics: true}
err = z.Gather(&acc)
@@ -198,7 +198,7 @@ func TestZfsGeneratesMetrics(t *testing.T) {
require.NoError(t, err)
acc.AssertContainsTaggedFields(t, "zfs", intMetrics, tags)
acc.Points = nil
acc.Metrics = nil
//two pools, all metrics
err = os.MkdirAll(testKstatPath+"/STORAGE", 0755)
@@ -217,7 +217,7 @@ func TestZfsGeneratesMetrics(t *testing.T) {
require.NoError(t, err)
acc.AssertContainsTaggedFields(t, "zfs", intMetrics, tags)
acc.Points = nil
acc.Metrics = nil
intMetrics = getKstatMetricsArcOnly()


@@ -10,6 +10,7 @@ import (
"strings"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -40,7 +41,7 @@ func (z *Zookeeper) Description() string {
}
// Gather reads stats from all configured servers accumulates stats
func (z *Zookeeper) Gather(acc inputs.Accumulator) error {
func (z *Zookeeper) Gather(acc telegraf.Accumulator) error {
if len(z.Servers) == 0 {
return nil
}
@@ -53,7 +54,7 @@ func (z *Zookeeper) Gather(acc inputs.Accumulator) error {
return nil
}
func (z *Zookeeper) gatherServer(address string, acc inputs.Accumulator) error {
func (z *Zookeeper) gatherServer(address string, acc telegraf.Accumulator) error {
_, _, err := net.SplitHostPort(address)
if err != nil {
address = address + ":2181"
@@ -103,7 +104,7 @@ func (z *Zookeeper) gatherServer(address string, acc inputs.Accumulator) error {
}
func init() {
inputs.Add("zookeeper", func() inputs.Input {
inputs.Add("zookeeper", func() telegraf.Input {
return &Zookeeper{}
})
}
