commit 37630f16d6

CHANGELOG.md (10 changed lines)
@@ -2,6 +2,9 @@
 
 ### Release Notes:
 
+- Telegraf now keeps a fixed-length buffer of metrics per-output. This buffer
+  defaults to 10,000 metrics, and is adjustable. The buffer is cleared when a
+  successful write to that output occurs.
 - The docker plugin has been significantly overhauled to add more metrics
   and allow for docker-machine (incl OSX) support.
   [See the readme](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/docker/README.md)
@@ -26,6 +29,11 @@ specifying a docker endpoint to get metrics from.
 - [#553](https://github.com/influxdata/telegraf/pull/553): Amazon CloudWatch output. thanks @skwong2!
 - [#503](https://github.com/influxdata/telegraf/pull/503): Support docker endpoint configuration.
 - [#563](https://github.com/influxdata/telegraf/pull/563): Docker plugin overhaul.
+- [#285](https://github.com/influxdata/telegraf/issues/285): Fixed-size buffer of points.
+- [#546](https://github.com/influxdata/telegraf/pull/546): SNMP Input plugin. Thanks @titilambert!
+- [#589](https://github.com/influxdata/telegraf/pull/589): Microsoft SQL Server input plugin. Thanks @zensqlmonitor!
+- [#573](https://github.com/influxdata/telegraf/pull/573): Github webhooks consumer input. Thanks @jackzampolin!
+- [#471](https://github.com/influxdata/telegraf/pull/471): httpjson request headers. Thanks @asosso!
 
 ### Bugfixes
 - [#506](https://github.com/influxdata/telegraf/pull/506): Ping input doesn't return response time metric when timeout. Thanks @titilambert!
@@ -34,6 +42,8 @@ specifying a docker endpoint to get metrics from.
 - [#543](https://github.com/influxdata/telegraf/issues/543): Statsd Packet size sometimes truncated.
 - [#440](https://github.com/influxdata/telegraf/issues/440): Don't query filtered devices for disk stats.
 - [#463](https://github.com/influxdata/telegraf/issues/463): Docker plugin not working on AWS Linux
+- [#568](https://github.com/influxdata/telegraf/issues/568): Multiple output race condition.
+- [#585](https://github.com/influxdata/telegraf/pull/585): Log stack trace and continue on Telegraf panic. Thanks @wutaizeng!
 
 ## v0.10.0 [2016-01-12]
 
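The fixed-length per-output buffer described in the release notes can be sketched as follows. This is a minimal illustration, not Telegraf's actual implementation; the `Metric` type and the `Buffer` method names are invented for the example:

```go
package main

import "fmt"

// Metric is a stand-in for Telegraf's point type (hypothetical for this sketch).
type Metric struct {
	Name  string
	Value float64
}

// Buffer keeps at most `limit` metrics for one output: adding to a full
// buffer drops the oldest metric, and a successful write clears it.
type Buffer struct {
	limit   int
	metrics []Metric
}

func NewBuffer(limit int) *Buffer { return &Buffer{limit: limit} }

func (b *Buffer) Add(m Metric) {
	if len(b.metrics) == b.limit {
		b.metrics = b.metrics[1:] // full: drop the oldest metric
	}
	b.metrics = append(b.metrics, m)
}

// Flush hands the buffered metrics to write() and clears the buffer only
// when the write succeeds, so failed writes keep metrics for the next try.
func (b *Buffer) Flush(write func([]Metric) error) error {
	if err := write(b.metrics); err != nil {
		return err
	}
	b.metrics = nil
	return nil
}

func (b *Buffer) Len() int { return len(b.metrics) }

func main() {
	buf := NewBuffer(3)
	for i := 0; i < 5; i++ {
		buf.Add(Metric{Name: "cpu", Value: float64(i)})
	}
	fmt.Println(buf.Len()) // 3: the two oldest metrics were dropped
}
```

The key property is that a dead output cannot grow memory without bound: the buffer caps at its limit and sheds the oldest points.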
CONTRIBUTING.md

@@ -1,10 +1,16 @@
 ## Steps for Contributing:
 
-1. [Sign the CLA](https://github.com/influxdata/telegraf/blob/master/CONTRIBUTING.md#sign-the-cla)
-1. Write your input or output plugin (see below for details)
+1. [Sign the CLA](http://influxdb.com/community/cla.html)
+1. Make changes or write plugin (see below for details)
 1. Add your plugin to `plugins/inputs/all/all.go` or `plugins/outputs/all/all.go`
 1. If your plugin requires a new Go package,
 [add it](https://github.com/influxdata/telegraf/blob/master/CONTRIBUTING.md#adding-a-dependency)
+1. Write a README for your plugin, if it's an input plugin, it should be structured
+like the [input example here](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/EXAMPLE_README.md).
+Output plugins READMEs are less structured,
+but any information you can provide on how the data will look is appreciated.
+See the [OpenTSDB output](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/opentsdb)
+for a good example.
 
 ## Sign the CLA
 
@@ -13,9 +19,10 @@ which can be found [on our website](http://influxdb.com/community/cla.html)
 
 ## Adding a dependency
 
-Assuming you can already build the project:
+Assuming you can already build the project, run these in the telegraf directory:
 
 1. `go get github.com/sparrc/gdm`
+1. `gdm restore`
 1. `gdm save`
 
 ## Input Plugins
 
Godeps (39 changed lines)

@@ -3,27 +3,30 @@ github.com/Shopify/sarama d37c73f2b2bce85f7fa16b6a550d26c5372892ef
 github.com/Sirupsen/logrus f7f79f729e0fbe2fcc061db48a9ba0263f588252
 github.com/amir/raidman 6a8e089bbe32e6b907feae5ba688841974b3c339
 github.com/armon/go-metrics 345426c77237ece5dab0e1605c3e4b35c3f54757
-github.com/aws/aws-sdk-go 3ad0b07b44c22c21c734d1094981540b7a11e942
+github.com/aws/aws-sdk-go 87b1e60a50b09e4812dee560b33a238f67305804
 github.com/beorn7/perks b965b613227fddccbfffe13eae360ed3fa822f8d
-github.com/boltdb/bolt 6465994716bf6400605746e79224cf1e7ed68725
+github.com/boltdb/bolt ee4a0888a9abe7eefe5a0992ca4cb06864839873
 github.com/cenkalti/backoff 4dc77674aceaabba2c7e3da25d4c823edfb73f99
-github.com/dancannon/gorethink ff457cac6a529d9749d841a733d76e8305cba3c8
+github.com/dancannon/gorethink 6f088135ff288deb9d5546f4c71919207f891a70
 github.com/davecgh/go-spew 5215b55f46b2b919f50a1df0eaa5886afe4e3b3d
 github.com/eapache/go-resiliency b86b1ec0dd4209a588dc1285cdd471e73525c0b3
 github.com/eapache/queue ded5959c0d4e360646dc9e9908cff48666781367
-github.com/fsouza/go-dockerclient 6fb38e6bb3d544d7eb5b55fd396cd4e6850802d8
+github.com/fsouza/go-dockerclient 7b651349f9479f5114913eefbfd3c4eeddd79ab4
 github.com/go-ini/ini afbd495e5aaea13597b5e14fe514ddeaa4d76fc3
-github.com/go-sql-driver/mysql 72ea5d0b32a04c67710bf63e97095d82aea5f352
+github.com/go-sql-driver/mysql 7c7f556282622f94213bc028b4d0a7b6151ba239
-github.com/gogo/protobuf c57e439bad574c2e0877ff18d514badcfced004d
+github.com/gogo/protobuf e8904f58e872a473a5b91bc9bf3377d223555263
-github.com/golang/protobuf 2402d76f3d41f928c7902a765dfc872356dd3aad
+github.com/golang/protobuf 6aaa8d47701fa6cf07e914ec01fde3d4a1fe79c3
 github.com/golang/snappy 723cc1e459b8eea2dea4583200fd60757d40097a
 github.com/gonuts/go-shellquote e842a11b24c6abfb3dd27af69a17f482e4b483c2
-github.com/hailocab/go-hostpool 50839ee41f32bfca8d03a183031aa634b2dc1c64
+github.com/gorilla/context 1c83b3eabd45b6d76072b66b746c20815fb2872d
+github.com/gorilla/mux 26a6070f849969ba72b72256e9f14cf519751690
+github.com/hailocab/go-hostpool e80d13ce29ede4452c43dea11e79b9bc8a15b478
 github.com/hashicorp/go-msgpack fa3f63826f7c23912c15263591e65d54d080b458
-github.com/hashicorp/raft b95f335efee1992886864389183ebda0c0a5d0f6
+github.com/hashicorp/raft 057b893fd996696719e98b6c44649ea14968c811
 github.com/hashicorp/raft-boltdb d1e82c1ec3f15ee991f7cc7ffd5b67ff6f5bbaee
-github.com/influxdata/influxdb 0e0f85a0c1fd1788ae4f9145531b02c539cfa5b5
-github.com/influxdb/influxdb 0e0f85a0c1fd1788ae4f9145531b02c539cfa5b5
+github.com/influxdata/config bae7cb98197d842374d3b8403905924094930f24
+github.com/influxdata/influxdb 697f48b4e62e514e701ffec39978b864a3c666e6
+github.com/influxdb/influxdb 697f48b4e62e514e701ffec39978b864a3c666e6
 github.com/jmespath/go-jmespath c01cf91b011868172fdcd9f41838e80c9d716264
 github.com/klauspost/crc32 999f3125931f6557b991b2f8472172bdfa578d38
 github.com/lib/pq 8ad2b298cadd691a77015666a5372eae5dbfac8f
@@ -36,19 +39,21 @@ github.com/pborman/uuid dee7705ef7b324f27ceb85a121c61f2c2e8ce988
 github.com/pmezard/go-difflib 792786c7400a136282c1664665ae0a8db921c6c2
 github.com/prometheus/client_golang 67994f177195311c3ea3d4407ed0175e34a4256f
 github.com/prometheus/client_model fa8ad6fec33561be4280a8f0514318c79d7f6cb6
-github.com/prometheus/common 0a3005bb37bc411040083a55372e77c405f6464c
+github.com/prometheus/common 14ca1097bbe21584194c15e391a9dab95ad42a59
 github.com/prometheus/procfs 406e5b7bfd8201a36e2bb5f7bdae0b03380c2ce8
 github.com/samuel/go-zookeeper 218e9c81c0dd8b3b18172b2bbfad92cc7d6db55f
-github.com/shirou/gopsutil 8850f58d7035653e1ab90711481954c8ca1b9813
+github.com/shirou/gopsutil 85bf0974ed06e4e668595ae2b4de02e772a2819b
+github.com/soniah/gosnmp b1b4f885b12c5dcbd021c5cee1c904110de6db7d
 github.com/streadway/amqp b4f3ceab0337f013208d31348b578d83c0064744
 github.com/stretchr/objx 1a9d0bb9f541897e62256577b352fdbc1fb4fd94
 github.com/stretchr/testify f390dcf405f7b83c997eac1b06768bb9f44dec18
 github.com/wvanbergen/kafka 1a8639a45164fcc245d5c7b4bd3ccfbd1a0ffbf3
 github.com/wvanbergen/kazoo-go 0f768712ae6f76454f987c3356177e138df258f8
+github.com/zensqlmonitor/go-mssqldb ffe5510c6fa5e15e6d983210ab501c815b56b363
-golang.org/x/crypto 3760e016850398b85094c4c99e955b8c3dea5711
+golang.org/x/crypto 1f22c0103821b9390939b6776727195525381532
-golang.org/x/net 72aa00c6241a8013dc9b040abb45f57edbe73945
+golang.org/x/net 04b9de9b512f58addf28c9853d50ebef61c3953e
-golang.org/x/text cf4986612c83df6c55578ba198316d1684a9a287
+golang.org/x/text 6d3c22c4525a4da167968fa2479be5524d2e8bd0
-gopkg.in/dancannon/gorethink.v1 e2cef022d0495329dfb0635991de76efcab5cf50
+gopkg.in/dancannon/gorethink.v1 6f088135ff288deb9d5546f4c71919207f891a70
 gopkg.in/fatih/pool.v2 cba550ebf9bce999a02e963296d4bc7a486cb715
 gopkg.in/mgo.v2 03c9f3ee4c14c8e51ee521a6a7d0425658dd6f64
 gopkg.in/yaml.v2 f7716cbe52baa25d2e9b0d0da546fcf909fc16b4
Makefile (6 changed lines)

@@ -52,6 +52,7 @@ endif
 	docker run --name nsq -p "4150:4150" -d nsqio/nsq /nsqd
 	docker run --name mqtt -p "1883:1883" -d ncarlier/mqtt
 	docker run --name riemann -p "5555:5555" -d blalor/riemann
+	docker run --name snmp -p "31161:31161/udp" -d titilambert/snmpsim
 
 # Run docker containers necessary for CircleCI unit tests
 docker-run-circle:
@@ -65,11 +66,12 @@ docker-run-circle:
 	docker run --name nsq -p "4150:4150" -d nsqio/nsq /nsqd
 	docker run --name mqtt -p "1883:1883" -d ncarlier/mqtt
 	docker run --name riemann -p "5555:5555" -d blalor/riemann
+	docker run --name snmp -p "31161:31161/udp" -d titilambert/snmpsim
 
 # Kill all docker containers, ignore errors
 docker-kill:
-	-docker kill nsq aerospike redis opentsdb rabbitmq postgres memcached mysql kafka mqtt riemann
-	-docker rm nsq aerospike redis opentsdb rabbitmq postgres memcached mysql kafka mqtt riemann
+	-docker kill nsq aerospike redis opentsdb rabbitmq postgres memcached mysql kafka mqtt riemann snmp
+	-docker rm nsq aerospike redis opentsdb rabbitmq postgres memcached mysql kafka mqtt riemann snmp
 
 # Run full unit tests using docker containers (includes setup and teardown)
 test: docker-kill docker-run
README.md

@@ -164,10 +164,12 @@ Currently implemented sources:
 * rabbitmq
 * redis
 * rethinkdb
+* sql server (microsoft)
 * twemproxy
 * zfs
 * zookeeper
 * sensors
+* snmp
 * system
     * cpu
     * mem

@@ -181,6 +183,7 @@ Telegraf can also collect metrics via the following service plugins:
 
 * statsd
 * kafka_consumer
+* github_webhooks
 
 We'll be adding support for many more over the coming months. Read on if you
 want to add support for another service or third-party API.
accumulator.go

@@ -7,7 +7,7 @@ import (
 	"sync"
 	"time"
 
-	"github.com/influxdata/telegraf/internal/config"
+	"github.com/influxdata/telegraf/internal/models"
 
 	"github.com/influxdata/influxdb/client/v2"
 )

@@ -29,7 +29,7 @@ type Accumulator interface {
 }
 
 func NewAccumulator(
-	inputConfig *config.InputConfig,
+	inputConfig *models.InputConfig,
 	points chan *client.Point,
 ) Accumulator {
 	acc := accumulator{}

@@ -47,7 +47,7 @@ type accumulator struct {
 
 	debug bool
 
-	inputConfig *config.InputConfig
+	inputConfig *models.InputConfig
 
 	prefix string
 }
agent.go (108 changed lines)
@@ -7,10 +7,12 @@ import (
 	"math/big"
 	"math/rand"
 	"os"
+	"runtime"
 	"sync"
 	"time"
 
 	"github.com/influxdata/telegraf/internal/config"
+	"github.com/influxdata/telegraf/internal/models"
 	"github.com/influxdata/telegraf/plugins/inputs"
 	"github.com/influxdata/telegraf/plugins/outputs"
 
@@ -86,6 +88,18 @@ func (a *Agent) Close() error {
 	return err
 }
 
+func panicRecover(input *models.RunningInput) {
+	if err := recover(); err != nil {
+		trace := make([]byte, 2048)
+		runtime.Stack(trace, true)
+		log.Printf("FATAL: Input [%s] panicked: %s, Stack:\n%s\n",
+			input.Name, err, trace)
+		log.Println("PLEASE REPORT THIS PANIC ON GITHUB with " +
+			"stack trace, configuration, and OS information: " +
+			"https://github.com/influxdata/telegraf/issues/new")
+	}
+}
+
 // gatherParallel runs the inputs that are using the same reporting interval
 // as the telegraf agent.
 func (a *Agent) gatherParallel(pointChan chan *client.Point) error {
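The `panicRecover` handler added in this commit is Go's standard deferred-recover pattern plus a `runtime.Stack` capture. The sketch below demonstrates the same mechanism in isolation; the plugin name and the `report` callback are invented for the example:

```go
package main

import (
	"fmt"
	"runtime"
)

// recoverInput mirrors the shape of the agent's panic handler: a deferred
// recover() turns a panicking plugin goroutine into a reported error instead
// of crashing the whole process.
func recoverInput(name string, report func(string)) {
	if err := recover(); err != nil {
		trace := make([]byte, 2048)
		n := runtime.Stack(trace, false) // capture this goroutine's stack
		report(fmt.Sprintf("input [%s] panicked: %v\n%s", name, err, trace[:n]))
	}
}

// gather simulates one plugin collection that panics.
func gather(name string, report func(string)) {
	defer recoverInput(name, report)
	panic("boom") // a buggy plugin
}

func main() {
	var logged string
	gather("fake_plugin", func(msg string) { logged = msg })
	fmt.Println(logged != "") // true: the panic was captured, not fatal
}
```

Because the `defer` runs inside the same goroutine that panicked, other input goroutines and the agent itself keep running.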
@@ -101,7 +115,8 @@ func (a *Agent) gatherParallel(pointChan chan *client.Point) error {
 
 		wg.Add(1)
 		counter++
-		go func(input *config.RunningInput) {
+		go func(input *models.RunningInput) {
+			defer panicRecover(input)
 			defer wg.Done()
 
 			acc := NewAccumulator(input.Config, pointChan)
@@ -144,9 +159,11 @@ func (a *Agent) gatherParallel(pointChan chan *client.Point) error {
 // reporting interval.
 func (a *Agent) gatherSeparate(
 	shutdown chan struct{},
-	input *config.RunningInput,
+	input *models.RunningInput,
 	pointChan chan *client.Point,
 ) error {
+	defer panicRecover(input)
+
 	ticker := time.NewTicker(input.Config.Interval)
 
 	for {
@@ -202,7 +219,6 @@ func (a *Agent) Test() error {
 	for _, input := range a.Config.Inputs {
 		acc := NewAccumulator(input.Config, pointChan)
 		acc.SetDebug(true)
-		// acc.SetPrefix(input.Name + "_")
 
 		fmt.Printf("* Plugin: %s, Collection 1\n", input.Name)
 		if input.Config.Interval != 0 {
@@ -228,93 +244,45 @@ func (a *Agent) Test() error {
 	return nil
 }
 
-// writeOutput writes a list of points to a single output, with retries.
-// Optionally takes a `done` channel to indicate that it is done writing.
-func (a *Agent) writeOutput(
-	points []*client.Point,
-	ro *config.RunningOutput,
-	shutdown chan struct{},
-	wg *sync.WaitGroup,
-) {
-	defer wg.Done()
-	if len(points) == 0 {
-		return
-	}
-	retry := 0
-	retries := a.Config.Agent.FlushRetries
-	start := time.Now()
-
-	for {
-		filtered := ro.FilterPoints(points)
-		err := ro.Output.Write(filtered)
-		if err == nil {
-			// Write successful
-			elapsed := time.Since(start)
-			if !a.Config.Agent.Quiet {
-				log.Printf("Flushed %d metrics to output %s in %s\n",
-					len(filtered), ro.Name, elapsed)
-			}
-			return
-		}
-
-		select {
-		case <-shutdown:
-			return
-		default:
-			if retry >= retries {
-				// No more retries
-				msg := "FATAL: Write to output [%s] failed %d times, dropping" +
-					" %d metrics\n"
-				log.Printf(msg, ro.Name, retries+1, len(points))
-				return
-			} else if err != nil {
-				// Sleep for a retry
-				log.Printf("Error in output [%s]: %s, retrying in %s",
-					ro.Name, err.Error(), a.Config.Agent.FlushInterval.Duration)
-				time.Sleep(a.Config.Agent.FlushInterval.Duration)
-			}
-		}
-
-		retry++
-	}
-}
-
 // flush writes a list of points to all configured outputs
-func (a *Agent) flush(
-	points []*client.Point,
-	shutdown chan struct{},
-	wait bool,
-) {
+func (a *Agent) flush() {
 	var wg sync.WaitGroup
 
+	wg.Add(len(a.Config.Outputs))
 	for _, o := range a.Config.Outputs {
-		wg.Add(1)
-		go a.writeOutput(points, o, shutdown, &wg)
-	}
-	if wait {
-		wg.Wait()
-	}
+		go func(output *models.RunningOutput) {
+			defer wg.Done()
+			err := output.Write()
+			if err != nil {
+				log.Printf("Error writing to output [%s]: %s\n",
+					output.Name, err.Error())
+			}
+		}(o)
+	}
+
+	wg.Wait()
 }
 
 // flusher monitors the points input channel and flushes on the minimum interval
 func (a *Agent) flusher(shutdown chan struct{}, pointChan chan *client.Point) error {
 	// Inelegant, but this sleep is to allow the Gather threads to run, so that
 	// the flusher will flush after metrics are collected.
-	time.Sleep(time.Millisecond * 100)
+	time.Sleep(time.Millisecond * 200)
 
 	ticker := time.NewTicker(a.Config.Agent.FlushInterval.Duration)
-	points := make([]*client.Point, 0)
 
 	for {
 		select {
 		case <-shutdown:
 			log.Println("Hang on, flushing any cached points before shutdown")
-			a.flush(points, shutdown, true)
+			a.flush()
 			return nil
 		case <-ticker.C:
-			a.flush(points, shutdown, false)
-			points = make([]*client.Point, 0)
+			a.flush()
 		case pt := <-pointChan:
-			points = append(points, pt)
+			for _, o := range a.Config.Outputs {
+				o.AddPoint(pt)
+			}
 		}
 	}
 }
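The rewritten `flush()` fans writes out with one goroutine per output and a `WaitGroup` sized up front, logging per-output errors instead of retrying inline. A self-contained sketch of the same pattern, with plain functions standing in for `RunningOutput`:

```go
package main

import (
	"fmt"
	"sync"
)

// writeAll mirrors the shape of the new flush(): size the WaitGroup once,
// start one goroutine per output, log per-output errors, then wait for all
// writes to finish before returning.
func writeAll(outputs []func() error, logf func(format string, args ...interface{})) {
	var wg sync.WaitGroup
	wg.Add(len(outputs))
	for _, o := range outputs {
		go func(write func() error) {
			defer wg.Done()
			if err := write(); err != nil {
				logf("Error writing to output: %s", err.Error())
			}
		}(o)
	}
	wg.Wait() // block until every output has attempted its write
}

func main() {
	wrote := make(chan string, 1)
	errc := make(chan string, 1)
	ok := func() error { wrote <- "ok"; return nil }
	bad := func() error { return fmt.Errorf("connection refused") }
	writeAll([]func() error{ok, bad}, func(format string, args ...interface{}) {
		errc <- fmt.Sprintf(format, args...)
	})
	fmt.Println(<-wrote, <-errc)
}
```

One slow or failing output no longer blocks the others: each write runs concurrently, and the caller only waits for the slowest attempt.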
@@ -389,7 +357,7 @@ func (a *Agent) Run(shutdown chan struct{}) error {
 		// configured. Default intervals are handled below with gatherParallel
 		if input.Config.Interval != 0 {
 			wg.Add(1)
-			go func(input *config.RunningInput) {
+			go func(input *models.RunningInput) {
 				defer wg.Done()
 				if err := a.gatherSeparate(shutdown, input, pointChan); err != nil {
 					log.Printf(err.Error())
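The `flusher` loop in this commit is a standard ticker-plus-shutdown `select` loop: flush on each tick, hand incoming points to the outputs, and do one final flush on shutdown. A runnable sketch with simplified stand-in types (int points, plain callbacks instead of real outputs):

```go
package main

import (
	"fmt"
	"time"
)

// flusher mirrors the agent loop: a ticker drives periodic flushes, incoming
// points are handed off as they arrive, and shutdown triggers a final flush.
func flusher(shutdown chan struct{}, points chan int, addPoint func(int), flush func()) {
	ticker := time.NewTicker(10 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-shutdown:
			flush() // flush anything cached before exiting
			return
		case <-ticker.C:
			flush()
		case pt := <-points:
			addPoint(pt) // the real agent adds pt to every output's buffer
		}
	}
}

func main() {
	shutdown := make(chan struct{})
	points := make(chan int)
	got := make(chan int, 1)
	flushes := make(chan struct{}, 1)
	done := make(chan struct{})
	go func() {
		flusher(shutdown, points,
			func(pt int) { got <- pt },
			func() { // count flushes without ever blocking
				select {
				case flushes <- struct{}{}:
				default:
				}
			})
		close(done)
	}()
	points <- 42
	close(shutdown)
	<-done
	fmt.Println(<-got, len(flushes) > 0) // 42 true
}
```

A closed channel is always ready to receive, so `close(shutdown)` reliably breaks the loop no matter which case fired last.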
build.py (121 changed lines)
@@ -1,4 +1,4 @@
-#!/usr/bin/env python2.7
+#!/usr/bin/env python
 #
 # This is the Telegraf build script.
 #
@@ -17,11 +17,7 @@ import tempfile
 import hashlib
 import re
 
-try:
-    import boto
-    from boto.s3.key import Key
-except ImportError:
-    pass
+debug = False
 
 # PACKAGING VARIABLES
 INSTALL_ROOT_DIR = "/usr/bin"
@@ -73,12 +69,10 @@ targets = {
 }
 
 supported_builds = {
-    # TODO(rossmcdonald): Add support for multiple GOARM values
-    'darwin': [ "amd64", "386" ],
-    # 'windows': [ "amd64", "386", "arm", "arm64" ],
-    'linux': [ "amd64", "386", "arm" ]
+    'darwin': [ "amd64", "i386" ],
+    'windows': [ "amd64", "i386", "arm" ],
+    'linux': [ "amd64", "i386", "arm" ]
 }
-supported_go = [ '1.5.1' ]
 supported_packages = {
     "darwin": [ "tar", "zip" ],
     "linux": [ "deb", "rpm", "tar", "zip" ],
@@ -87,6 +81,8 @@ supported_packages = {
 
 def run(command, allow_failure=False, shell=False):
     out = None
+    if debug:
+        print("[DEBUG] {}".format(command))
     try:
         if shell:
             out = subprocess.check_output(command, stderr=subprocess.STDOUT, shell=shell)
@@ -122,8 +118,11 @@ def run(command, allow_failure=False, shell=False):
     else:
         return out
 
-def create_temp_dir():
-    return tempfile.mkdtemp(prefix="telegraf-build.")
+def create_temp_dir(prefix=None):
+    if prefix is None:
+        return tempfile.mkdtemp(prefix="telegraf-build.")
+    else:
+        return tempfile.mkdtemp(prefix=prefix)
 
 def get_current_version():
     command = "git describe --always --tags --abbrev=0"
@@ -185,31 +184,44 @@ def check_environ(build_dir = None):
 def check_prereqs():
     print("\nChecking for dependencies:")
     for req in prereqs:
-        print("\t- {} ->".format(req),)
         path = check_path_for(req)
-        if path:
-            print("{}".format(path))
-        else:
-            print("?")
+        if path is None:
+            path = '?'
+        print("\t- {} -> {}".format(req, path))
     for req in optional_prereqs:
-        print("\t- {} (optional) ->".format(req))
         path = check_path_for(req)
-        if path:
-            print("{}".format(path))
-        else:
-            print("?")
+        if path is None:
+            path = '?'
+        print("\t- {} (optional) -> {}".format(req, path))
     print("")
 
-def upload_packages(packages, nightly=False):
+def upload_packages(packages, bucket_name=None, nightly=False):
+    if debug:
+        print("[DEBUG] upload_packags: {}".format(packages))
+    try:
+        import boto
+        from boto.s3.key import Key
+    except ImportError:
+        print "!! Cannot upload packages without the 'boto' python library."
+        return 1
     print("Uploading packages to S3...")
     print("")
     c = boto.connect_s3()
-    # TODO(rossmcdonald) - Set to different S3 bucket for release vs nightly
-    bucket = c.get_bucket('get.influxdb.org')
+    if bucket_name is None:
+        bucket_name = 'get.influxdb.org/telegraf'
+    bucket = c.get_bucket(bucket_name.split('/')[0])
+    print("\t - Using bucket: {}".format(bucket_name))
     for p in packages:
-        name = os.path.join('telegraf', os.path.basename(p))
+        if '/' in bucket_name:
+            # Allow for nested paths within the bucket name (ex:
+            # bucket/telegraf). Assuming forward-slashes as path
+            # delimiter.
+            name = os.path.join('/'.join(bucket_name.split('/')[1:]),
+                                os.path.basename(p))
+        else:
+            name = os.path.basename(p)
         if bucket.get_key(name) is None or nightly:
-            print("\t - Uploading {}...".format(name))
+            print("\t - Uploading {} to {}...".format(name, bucket_name))
             k = Key(bucket)
             k.key = name
             if nightly:
@@ -217,7 +229,6 @@ def upload_packages(packages, nightly=False):
             else:
                 n = k.set_contents_from_filename(p, replace=False)
                 k.make_public()
-            print("[ DONE ]")
         else:
             print("\t - Not uploading {}, already exists.".format(p))
     print("")
@@ -227,7 +238,6 @@ def run_tests(race, parallel, timeout, no_vet):
     print("Retrieving Go dependencies...")
     sys.stdout.flush()
     run(get_command)
-    print("done.")
     print("Running tests:")
     print("\tRace: ", race)
     if parallel is not None:
@@ -307,9 +317,15 @@ def build(version=None,
         # If a release candidate, update the version information accordingly
         version = "{}rc{}".format(version, rc)
 
+    # Set architecture to something that Go expects
+    if arch == 'i386':
+        arch = '386'
+    elif arch == 'x86_64':
+        arch = 'amd64'
+
     print("Starting build...")
     for b, c in targets.items():
-        print("\t- Building '{}'...".format(os.path.join(outdir, b)),)
+        print("\t- Building '{}'...".format(os.path.join(outdir, b)))
         build_command = ""
         build_command += "GOOS={} GOARCH={} ".format(platform, arch)
         if arch == "arm" and goarm_version:
@ -323,16 +339,15 @@ def build(version=None,
|
||||||
if "1.4" in go_version:
|
if "1.4" in go_version:
|
||||||
build_command += "-ldflags=\"-X main.buildTime '{}' ".format(datetime.datetime.utcnow().isoformat())
|
build_command += "-ldflags=\"-X main.buildTime '{}' ".format(datetime.datetime.utcnow().isoformat())
|
||||||
build_command += "-X main.Version {} ".format(version)
|
build_command += "-X main.Version {} ".format(version)
|
||||||
build_command += "-X main.Branch {} ".format(branch)
|
build_command += "-X main.Branch {} ".format(get_current_branch())
|
||||||
build_command += "-X main.Commit {}\" ".format(get_current_commit())
|
build_command += "-X main.Commit {}\" ".format(get_current_commit())
|
||||||
else:
|
else:
|
||||||
build_command += "-ldflags=\"-X main.buildTime='{}' ".format(datetime.datetime.utcnow().isoformat())
|
build_command += "-ldflags=\"-X main.buildTime='{}' ".format(datetime.datetime.utcnow().isoformat())
|
||||||
build_command += "-X main.Version={} ".format(version)
|
build_command += "-X main.Version={} ".format(version)
|
||||||
build_command += "-X main.Branch={} ".format(branch)
|
build_command += "-X main.Branch={} ".format(get_current_branch())
|
||||||
build_command += "-X main.Commit={}\" ".format(get_current_commit())
|
build_command += "-X main.Commit={}\" ".format(get_current_commit())
|
||||||
build_command += c
|
build_command += c
|
||||||
run(build_command, shell=True)
|
run(build_command, shell=True)
|
||||||
print("[ DONE ]")
|
|
||||||
print("")
|
print("")
|
||||||
|
|
||||||
def create_dir(path):
|
def create_dir(path):
|
||||||
|
@ -386,7 +401,6 @@ def go_get(update=False):
|
||||||
get_command = "go get -d ./..."
|
get_command = "go get -d ./..."
|
||||||
print("Retrieving Go dependencies...")
|
print("Retrieving Go dependencies...")
|
||||||
run(get_command)
|
run(get_command)
|
||||||
print("done.\n")
|
|
||||||
|
|
||||||
def generate_md5_from_file(path):
|
def generate_md5_from_file(path):
|
||||||
m = hashlib.md5()
|
m = hashlib.md5()
|
||||||
|
@ -401,6 +415,8 @@ def generate_md5_from_file(path):
|
||||||
def build_packages(build_output, version, nightly=False, rc=None, iteration=1):
|
def build_packages(build_output, version, nightly=False, rc=None, iteration=1):
|
||||||
outfiles = []
|
outfiles = []
|
||||||
tmp_build_dir = create_temp_dir()
|
tmp_build_dir = create_temp_dir()
|
||||||
|
if debug:
|
||||||
|
print("[DEBUG] build_output = {}".format(build_output))
|
||||||
try:
|
try:
|
||||||
print("-------------------------")
|
print("-------------------------")
|
||||||
print("")
|
print("")
|
||||||
|
@ -429,18 +445,24 @@ def build_packages(build_output, version, nightly=False, rc=None, iteration=1):
|
||||||
for package_type in supported_packages[p]:
|
for package_type in supported_packages[p]:
|
||||||
print("\t- Packaging directory '{}' as '{}'...".format(build_root, package_type))
|
print("\t- Packaging directory '{}' as '{}'...".format(build_root, package_type))
|
||||||
name = "telegraf"
|
name = "telegraf"
|
||||||
|
# Reset version, iteration, and current location on each run
|
||||||
|
# since they may be modified below.
|
||||||
package_version = version
|
package_version = version
|
||||||
package_iteration = iteration
|
package_iteration = iteration
|
||||||
|
current_location = build_output[p][a]
|
||||||
|
|
||||||
if package_type in ['zip', 'tar']:
|
if package_type in ['zip', 'tar']:
|
||||||
if nightly:
|
if nightly:
|
||||||
name = '{}-nightly_{}_{}'.format(name, p, a)
|
name = '{}-nightly_{}_{}'.format(name, p, a)
|
||||||
else:
|
else:
|
||||||
name = '{}-{}_{}_{}'.format(name, version, p, a)
|
name = '{}-{}-{}_{}_{}'.format(name, package_version, package_iteration, p, a)
|
||||||
if package_type == 'tar':
|
if package_type == 'tar':
|
||||||
# Add `tar.gz` to path to reduce package size
|
# Add `tar.gz` to path to reduce package size
|
||||||
current_location = os.path.join(current_location, name + '.tar.gz')
|
current_location = os.path.join(current_location, name + '.tar.gz')
|
||||||
if rc is not None:
|
if rc is not None:
|
||||||
package_iteration = "0.rc{}".format(rc)
|
package_iteration = "0.rc{}".format(rc)
|
||||||
|
if a == '386':
|
||||||
|
a = 'i386'
|
||||||
fpm_command = "fpm {} --name {} -a {} -t {} --version {} --iteration {} -C {} -p {} ".format(
|
fpm_command = "fpm {} --name {} -a {} -t {} --version {} --iteration {} -C {} -p {} ".format(
|
||||||
fpm_common_args,
|
fpm_common_args,
|
||||||
name,
|
name,
|
||||||
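The hunk above changes the archive name for versioned builds to embed both the package version and iteration. A minimal sketch of that naming rule (the helper function and its defaults are illustrative, not part of the script):

```python
def archive_name(platform, arch, version, iteration, nightly=False):
    """Mirror the naming logic in the diff: nightly builds drop the
    version, release builds embed version and iteration."""
    name = "telegraf"
    if nightly:
        return '{}-nightly_{}_{}'.format(name, platform, arch)
    return '{}-{}-{}_{}_{}'.format(name, version, iteration, platform, arch)
```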
@@ -465,10 +487,11 @@ def build_packages(build_output, version, nightly=False, rc=None, iteration=1):
                     if nightly and package_type in ['deb', 'rpm']:
                         outfile = rename_file(outfile, outfile.replace("{}-{}".format(version, iteration), "nightly"))
                     outfiles.append(os.path.join(os.getcwd(), outfile))
-                    print("[ DONE ]")
                     # Display MD5 hash for generated package
                     print("\t\tMD5 = {}".format(generate_md5_from_file(outfile)))
         print("")
+        if debug:
+            print("[DEBUG] package outfiles: {}".format(outfiles))
         return outfiles
     finally:
         # Cleanup
@@ -495,6 +518,9 @@ def print_usage():
     print("\t --parallel \n\t\t- Run Go tests in parallel up to the count specified.")
     print("\t --timeout \n\t\t- Timeout for Go tests. Defaults to 480s.")
     print("\t --clean \n\t\t- Clean the build output directory prior to creating build.")
+    print("\t --no-get \n\t\t- Do not run `go get` before building.")
+    print("\t --bucket=<S3 bucket>\n\t\t- Full path of the bucket to upload packages to (must also specify --upload).")
+    print("\t --debug \n\t\t- Displays debug output.")
     print("")

 def print_package_summary(packages):
@@ -521,6 +547,9 @@ def main():
     iteration = 1
     no_vet = False
     goarm_version = "6"
+    run_get = True
+    upload_bucket = None
+    global debug

     for arg in sys.argv[1:]:
         if '--outdir' in arg:
@@ -578,6 +607,14 @@ def main():
         elif '--goarm' in arg:
             # Signifies GOARM flag to pass to build command when compiling for ARM
             goarm_version = arg.split("=")[1]
+        elif '--bucket' in arg:
+            # The bucket to upload the packages to, relies on boto
+            upload_bucket = arg.split("=")[1]
+        elif '--no-get' in arg:
+            run_get = False
+        elif '--debug' in arg:
+            print("[DEBUG] Using debug output")
+            debug = True
         elif '--help' in arg:
             print_usage()
             return 0
@@ -614,14 +651,18 @@ def main():
         # If a release candidate or nightly, set iteration to 0 (instead of 1)
         iteration = 0

+    if target_arch == '386':
+        target_arch = 'i386'
+    elif target_arch == 'x86_64':
+        target_arch = 'amd64'
+
     build_output = {}
-    # TODO(rossmcdonald): Prepare git repo for build (checking out correct branch/commit, etc.)
-    # prepare(branch=branch, commit=commit)
     if test:
         if not run_tests(race, parallel, timeout, no_vet):
             return 1
         return 0

-    go_get(update=update)
+    if run_get:
+        go_get(update=update)

     platforms = []
@@ -663,11 +704,9 @@ def main():
         print("!! Cannot package without command 'fpm'. Stopping.")
         return 1
     packages = build_packages(build_output, version, nightly=nightly, rc=rc, iteration=iteration)
-    # TODO(rossmcdonald): Add nice output for print_package_summary()
-    # print_package_summary(packages)
     # Optionally upload to S3
     if upload:
-        upload_packages(packages, nightly=nightly)
+        upload_packages(packages, bucket_name=upload_bucket, nightly=nightly)
     return 0

 if __name__ == '__main__':
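The build() hunks above assemble the `-ldflags` string that embeds build metadata, switching the branch value from a stale variable to `get_current_branch()`. A sketch of that assembly (function name and parameters are illustrative; note the separator difference between Go 1.4 and 1.5+):

```python
import datetime

def ldflags(version, branch, commit, go15plus=True):
    """Build the -ldflags argument for `go build`.
    Go 1.5+ uses '-X name=value'; Go 1.4 used '-X name value'."""
    sep = '=' if go15plus else ' '
    ts = datetime.datetime.utcnow().isoformat()
    parts = [
        "-X main.buildTime{}'{}'".format(sep, ts),
        "-X main.Version{}{}".format(sep, version),
        "-X main.Branch{}{}".format(sep, branch),
        "-X main.Commit{}{}".format(sep, commit),
    ]
    return '-ldflags="{}"'.format(' '.join(parts))
```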
--- a/internal/config/config.go
+++ b/internal/config/config.go
@@ -11,13 +11,12 @@ import (
 	"time"

 	"github.com/influxdata/telegraf/internal"
+	"github.com/influxdata/telegraf/internal/models"
 	"github.com/influxdata/telegraf/plugins/inputs"
 	"github.com/influxdata/telegraf/plugins/outputs"

-	"github.com/naoina/toml"
+	"github.com/influxdata/config"
 	"github.com/naoina/toml/ast"

-	"github.com/influxdata/influxdb/client/v2"
 )

 // Config specifies the URL/user/password for the database that telegraf
@@ -29,8 +28,8 @@ type Config struct {
 	OutputFilters []string

 	Agent   *AgentConfig
-	Inputs  []*RunningInput
-	Outputs []*RunningOutput
+	Inputs  []*models.RunningInput
+	Outputs []*models.RunningOutput
 }

 func NewConfig() *Config {
@@ -40,13 +39,12 @@ func NewConfig() *Config {
 		Interval:      internal.Duration{Duration: 10 * time.Second},
 		RoundInterval: true,
 		FlushInterval: internal.Duration{Duration: 10 * time.Second},
-		FlushRetries:  2,
 		FlushJitter:   internal.Duration{Duration: 5 * time.Second},
 	},

 	Tags:          make(map[string]string),
-	Inputs:        make([]*RunningInput, 0),
-	Outputs:       make([]*RunningOutput, 0),
+	Inputs:        make([]*models.RunningInput, 0),
+	Outputs:       make([]*models.RunningOutput, 0),
 	InputFilters:  make([]string, 0),
 	OutputFilters: make([]string, 0),
 }
@@ -70,15 +68,17 @@ type AgentConfig struct {
 	// Interval at which to flush data
 	FlushInterval internal.Duration

-	// FlushRetries is the number of times to retry each data flush
-	FlushRetries int
-
 	// FlushJitter Jitters the flush interval by a random amount.
 	// This is primarily to avoid large write spikes for users running a large
 	// number of telegraf instances.
 	// ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
 	FlushJitter internal.Duration

+	// MetricBufferLimit is the max number of metrics that each output plugin
+	// will cache. The buffer is cleared when a successful write occurs. When
+	// full, the oldest metrics will be overwritten.
+	MetricBufferLimit int
+
 	// TODO(cam): Remove UTC and Precision parameters, they are no longer
 	// valid for the agent config. Leaving them here for now for backwards-
 	// compatability
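The new MetricBufferLimit field documents a fixed-size per-output buffer: cleared on a successful write, oldest entries overwritten when full. A Python sketch of that behavior (the class and its API are illustrative; the real implementation lives in the Go internal/models package):

```python
from collections import deque

class MetricBuffer:
    """Fixed-size buffer: when full, the oldest metric is overwritten;
    a successful flush clears everything."""

    def __init__(self, limit=10000):
        # deque with maxlen silently drops from the left when full
        self.buf = deque(maxlen=limit)

    def add(self, metric):
        self.buf.append(metric)

    def flush(self, write):
        """Call write() with the buffered metrics; clear only on success."""
        metrics = list(self.buf)
        if write(metrics):
            self.buf.clear()
            return True
        return False
```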
@@ -93,129 +93,6 @@ type AgentConfig struct {
 	Hostname string
 }

-// TagFilter is the name of a tag, and the values on which to filter
-type TagFilter struct {
-	Name   string
-	Filter []string
-}
-
-type RunningOutput struct {
-	Name   string
-	Output outputs.Output
-	Config *OutputConfig
-}
-
-type RunningInput struct {
-	Name   string
-	Input  inputs.Input
-	Config *InputConfig
-}
-
-// Filter containing drop/pass and tagdrop/tagpass rules
-type Filter struct {
-	Drop []string
-	Pass []string
-
-	TagDrop []TagFilter
-	TagPass []TagFilter
-
-	IsActive bool
-}
-
-// InputConfig containing a name, interval, and filter
-type InputConfig struct {
-	Name              string
-	NameOverride      string
-	MeasurementPrefix string
-	MeasurementSuffix string
-	Tags              map[string]string
-	Filter            Filter
-	Interval          time.Duration
-}
-
-// OutputConfig containing name and filter
-type OutputConfig struct {
-	Name   string
-	Filter Filter
-}
-
-// Filter returns filtered slice of client.Points based on whether filters
-// are active for this RunningOutput.
-func (ro *RunningOutput) FilterPoints(points []*client.Point) []*client.Point {
-	if !ro.Config.Filter.IsActive {
-		return points
-	}
-
-	var filteredPoints []*client.Point
-	for i := range points {
-		if !ro.Config.Filter.ShouldPass(points[i].Name()) || !ro.Config.Filter.ShouldTagsPass(points[i].Tags()) {
-			continue
-		}
-		filteredPoints = append(filteredPoints, points[i])
-	}
-	return filteredPoints
-}
-
-// ShouldPass returns true if the metric should pass, false if should drop
-// based on the drop/pass filter parameters
-func (f Filter) ShouldPass(fieldkey string) bool {
-	if f.Pass != nil {
-		for _, pat := range f.Pass {
-			// TODO remove HasPrefix check, leaving it for now for legacy support.
-			// Cam, 2015-12-07
-			if strings.HasPrefix(fieldkey, pat) || internal.Glob(pat, fieldkey) {
-				return true
-			}
-		}
-		return false
-	}
-
-	if f.Drop != nil {
-		for _, pat := range f.Drop {
-			// TODO remove HasPrefix check, leaving it for now for legacy support.
-			// Cam, 2015-12-07
-			if strings.HasPrefix(fieldkey, pat) || internal.Glob(pat, fieldkey) {
-				return false
-			}
-		}
-		return true
-	}
-	return true
-}
-
-// ShouldTagsPass returns true if the metric should pass, false if should drop
-// based on the tagdrop/tagpass filter parameters
-func (f Filter) ShouldTagsPass(tags map[string]string) bool {
-	if f.TagPass != nil {
-		for _, pat := range f.TagPass {
-			if tagval, ok := tags[pat.Name]; ok {
-				for _, filter := range pat.Filter {
-					if internal.Glob(filter, tagval) {
-						return true
-					}
-				}
-			}
-		}
-		return false
-	}
-
-	if f.TagDrop != nil {
-		for _, pat := range f.TagDrop {
-			if tagval, ok := tags[pat.Name]; ok {
-				for _, filter := range pat.Filter {
-					if internal.Glob(filter, tagval) {
-						return false
-					}
-				}
-			}
-		}
-		return true
-	}
-
-	return true
-}
-
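The Filter code removed above (moved to internal/models by this commit) implements pass/drop matching with globs plus a legacy prefix check. A rough Python equivalent, using fnmatch as an approximation of internal.Glob (function name and signature are illustrative):

```python
from fnmatch import fnmatch

def should_pass(fieldkey, pass_list=None, drop_list=None):
    """Pass list wins when present: only matching keys pass.
    Otherwise a drop list rejects matching keys. Default: pass.
    Keeps the legacy startswith check alongside glob matching."""
    if pass_list:
        return any(fieldkey.startswith(p) or fnmatch(fieldkey, p)
                   for p in pass_list)
    if drop_list:
        return not any(fieldkey.startswith(p) or fnmatch(fieldkey, p)
                       for p in drop_list)
    return True
```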
 // Inputs returns a list of strings of the configured inputs.
 func (c *Config) InputNames() []string {
 	var name []string
@@ -251,24 +128,14 @@ func (c *Config) ListTags() string {
 var header = `# Telegraf configuration

 # Telegraf is entirely plugin driven. All metrics are gathered from the
-# declared inputs.
+# declared inputs, and sent to the declared outputs.

-# Even if a plugin has no configuration, it must be declared in here
-# to be active. Declaring a plugin means just specifying the name
-# as a section with no variables. To deactivate a plugin, comment
-# out the name and any variables.
+# Plugins must be declared in here to be active.
+# To deactivate a plugin, comment out the name and any variables.

-# Use 'telegraf -config telegraf.toml -test' to see what metrics a config
+# Use 'telegraf -config telegraf.conf -test' to see what metrics a config
 # file would generate.

-# One rule that plugins conform to is wherever a connection string
-# can be passed, the values '' and 'localhost' are treated specially.
-# They indicate to the plugin to use their own builtin configuration to
-# connect to the local system.
-
-# NOTE: The configuration has a few required parameters. They are marked
-# with 'required'. Be sure to edit those to make this configuration work.
-
 # Tags can also be specified via a normal map, but only one form at a time:
 [tags]
 # dc = "us-east-1"
@@ -280,6 +147,11 @@ var header = `# Telegraf configuration
 # Rounds collection interval to 'interval'
 # ie, if interval="10s" then always collect on :00, :10, :20, etc.
 round_interval = true

+# Telegraf will cache metric_buffer_limit metrics for each output, and will
+# flush this buffer on a successful write.
+metric_buffer_limit = 10000
+
 # Collection jitter is used to jitter the collection by a random amount.
 # Each plugin will sleep for a random time within jitter before collecting.
 # This can be used to avoid many plugins querying things like sysfs at the
@@ -442,12 +314,7 @@ func (c *Config) LoadDirectory(path string) error {

 // LoadConfig loads the given config file and applies it to c
 func (c *Config) LoadConfig(path string) error {
-	data, err := ioutil.ReadFile(path)
-	if err != nil {
-		return err
-	}
-
-	tbl, err := toml.Parse(data)
+	tbl, err := config.ParseFile(path)
 	if err != nil {
 		return err
 	}
@@ -460,12 +327,12 @@ func (c *Config) LoadConfig(path string) error {

 	switch name {
 	case "agent":
-		if err = toml.UnmarshalTable(subTable, c.Agent); err != nil {
+		if err = config.UnmarshalTable(subTable, c.Agent); err != nil {
 			log.Printf("Could not parse [agent] config\n")
 			return err
 		}
 	case "tags":
-		if err = toml.UnmarshalTable(subTable, c.Tags); err != nil {
+		if err = config.UnmarshalTable(subTable, c.Tags); err != nil {
 			log.Printf("Could not parse [tags] config\n")
 			return err
 		}
@@ -531,15 +398,15 @@ func (c *Config) addOutput(name string, table *ast.Table) error {
 		return err
 	}

-	if err := toml.UnmarshalTable(table, output); err != nil {
+	if err := config.UnmarshalTable(table, output); err != nil {
 		return err
 	}

-	ro := &RunningOutput{
-		Name:   name,
-		Output: output,
-		Config: outputConfig,
+	ro := models.NewRunningOutput(name, output, outputConfig)
+	if c.Agent.MetricBufferLimit > 0 {
+		ro.PointBufferLimit = c.Agent.MetricBufferLimit
 	}
+	ro.Quiet = c.Agent.Quiet
 	c.Outputs = append(c.Outputs, ro)
 	return nil
 }
@@ -564,11 +431,11 @@ func (c *Config) addInput(name string, table *ast.Table) error {
 		return err
 	}

-	if err := toml.UnmarshalTable(table, input); err != nil {
+	if err := config.UnmarshalTable(table, input); err != nil {
 		return err
 	}

-	rp := &RunningInput{
+	rp := &models.RunningInput{
 		Name:   name,
 		Input:  input,
 		Config: pluginConfig,
@@ -578,10 +445,10 @@ func (c *Config) addInput(name string, table *ast.Table) error {
 }

 // buildFilter builds a Filter (tagpass/tagdrop/pass/drop) to
-// be inserted into the OutputConfig/InputConfig to be used for prefix
+// be inserted into the models.OutputConfig/models.InputConfig to be used for prefix
 // filtering on tags and measurements
-func buildFilter(tbl *ast.Table) Filter {
-	f := Filter{}
+func buildFilter(tbl *ast.Table) models.Filter {
+	f := models.Filter{}

 	if node, ok := tbl.Fields["pass"]; ok {
 		if kv, ok := node.(*ast.KeyValue); ok {
@@ -613,7 +480,7 @@ func buildFilter(tbl *ast.Table) Filter {
 	if subtbl, ok := node.(*ast.Table); ok {
 		for name, val := range subtbl.Fields {
 			if kv, ok := val.(*ast.KeyValue); ok {
-				tagfilter := &TagFilter{Name: name}
+				tagfilter := &models.TagFilter{Name: name}
 				if ary, ok := kv.Value.(*ast.Array); ok {
 					for _, elem := range ary.Value {
 						if str, ok := elem.(*ast.String); ok {
@@ -632,7 +499,7 @@ func buildFilter(tbl *ast.Table) Filter {
 	if subtbl, ok := node.(*ast.Table); ok {
 		for name, val := range subtbl.Fields {
 			if kv, ok := val.(*ast.KeyValue); ok {
-				tagfilter := &TagFilter{Name: name}
+				tagfilter := &models.TagFilter{Name: name}
 				if ary, ok := kv.Value.(*ast.Array); ok {
 					for _, elem := range ary.Value {
 						if str, ok := elem.(*ast.String); ok {
@@ -656,9 +523,9 @@ func buildFilter(tbl *ast.Table) Filter {

 // buildInput parses input specific items from the ast.Table,
 // builds the filter and returns a
-// InputConfig to be inserted into RunningInput
-func buildInput(name string, tbl *ast.Table) (*InputConfig, error) {
-	cp := &InputConfig{Name: name}
+// models.InputConfig to be inserted into models.RunningInput
+func buildInput(name string, tbl *ast.Table) (*models.InputConfig, error) {
+	cp := &models.InputConfig{Name: name}
 	if node, ok := tbl.Fields["interval"]; ok {
 		if kv, ok := node.(*ast.KeyValue); ok {
 			if str, ok := kv.Value.(*ast.String); ok {
@@ -699,7 +566,7 @@ func buildInput(name string, tbl *ast.Table) (*InputConfig, error) {
 	cp.Tags = make(map[string]string)
 	if node, ok := tbl.Fields["tags"]; ok {
 		if subtbl, ok := node.(*ast.Table); ok {
-			if err := toml.UnmarshalTable(subtbl, cp.Tags); err != nil {
+			if err := config.UnmarshalTable(subtbl, cp.Tags); err != nil {
 				log.Printf("Could not parse tags for input %s\n", name)
 			}
 		}
@@ -715,10 +582,10 @@ func buildInput(name string, tbl *ast.Table) (*InputConfig, error) {
 }

 // buildOutput parses output specific items from the ast.Table, builds the filter and returns an
-// OutputConfig to be inserted into RunningInput
+// models.OutputConfig to be inserted into models.RunningInput
 // Note: error exists in the return for future calls that might require error
-func buildOutput(name string, tbl *ast.Table) (*OutputConfig, error) {
-	oc := &OutputConfig{
+func buildOutput(name string, tbl *ast.Table) (*models.OutputConfig, error) {
+	oc := &models.OutputConfig{
 		Name:   name,
 		Filter: buildFilter(tbl),
 	}
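LoadConfig above dispatches on each top-level table name: `agent` and `tags` go to dedicated structs, and other sections are handed to the plugin builders. A simplified sketch of that dispatch over an already-parsed tree (the dict-based `load_config` is illustrative; the real code parses TOML via github.com/influxdata/config and builds typed plugin configs):

```python
def load_config(tables, cfg):
    """Route each top-level section of a parsed config tree into cfg.
    `tables` stands in for the parsed TOML tree; plugin registration
    is reduced to a list append."""
    for name, sub in tables.items():
        if name == "agent":
            cfg.setdefault("agent", {}).update(sub)
        elif name == "tags":
            cfg.setdefault("tags", {}).update(sub)
        else:
            # any other section name is treated as a plugin table
            cfg.setdefault("plugins", []).append({name: sub})
    return cfg
```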
|
|
|
@ -4,6 +4,7 @@ import (
|
||||||
"testing"
|
"testing"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
|
"github.com/influxdata/telegraf/internal/models"
|
||||||
"github.com/influxdata/telegraf/plugins/inputs"
|
"github.com/influxdata/telegraf/plugins/inputs"
|
||||||
"github.com/influxdata/telegraf/plugins/inputs/exec"
|
"github.com/influxdata/telegraf/plugins/inputs/exec"
|
||||||
"github.com/influxdata/telegraf/plugins/inputs/memcached"
|
"github.com/influxdata/telegraf/plugins/inputs/memcached"
|
||||||
|
@ -18,19 +19,19 @@ func TestConfig_LoadSingleInput(t *testing.T) {
|
||||||
memcached := inputs.Inputs["memcached"]().(*memcached.Memcached)
|
memcached := inputs.Inputs["memcached"]().(*memcached.Memcached)
|
||||||
memcached.Servers = []string{"localhost"}
|
memcached.Servers = []string{"localhost"}
|
||||||
|
|
||||||
mConfig := &InputConfig{
|
mConfig := &models.InputConfig{
|
||||||
Name: "memcached",
|
Name: "memcached",
|
||||||
Filter: Filter{
|
Filter: models.Filter{
|
||||||
Drop: []string{"other", "stuff"},
|
Drop: []string{"other", "stuff"},
|
||||||
Pass: []string{"some", "strings"},
|
Pass: []string{"some", "strings"},
|
||||||
TagDrop: []TagFilter{
|
TagDrop: []models.TagFilter{
|
||||||
TagFilter{
|
models.TagFilter{
|
||||||
Name: "badtag",
|
Name: "badtag",
|
||||||
Filter: []string{"othertag"},
|
Filter: []string{"othertag"},
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
TagPass: []TagFilter{
|
TagPass: []models.TagFilter{
|
||||||
TagFilter{
|
models.TagFilter{
|
||||||
Name: "goodtag",
|
Name: "goodtag",
|
||||||
Filter: []string{"mytag"},
|
Filter: []string{"mytag"},
|
||||||
},
|
},
|
||||||
|
@ -61,19 +62,19 @@ func TestConfig_LoadDirectory(t *testing.T) {
|
||||||
memcached := inputs.Inputs["memcached"]().(*memcached.Memcached)
|
memcached := inputs.Inputs["memcached"]().(*memcached.Memcached)
|
||||||
memcached.Servers = []string{"localhost"}
|
memcached.Servers = []string{"localhost"}
|
||||||
|
|
||||||
mConfig := &InputConfig{
|
mConfig := &models.InputConfig{
|
||||||
Name: "memcached",
|
Name: "memcached",
|
||||||
Filter: Filter{
|
Filter: models.Filter{
|
||||||
Drop: []string{"other", "stuff"},
|
Drop: []string{"other", "stuff"},
|
||||||
Pass: []string{"some", "strings"},
|
Pass: []string{"some", "strings"},
|
||||||
TagDrop: []TagFilter{
|
TagDrop: []models.TagFilter{
|
||||||
TagFilter{
|
models.TagFilter{
|
||||||
Name: "badtag",
|
Name: "badtag",
|
||||||
Filter: []string{"othertag"},
|
Filter: []string{"othertag"},
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
TagPass: []TagFilter{
|
TagPass: []models.TagFilter{
|
||||||
TagFilter{
|
models.TagFilter{
|
||||||
Name: "goodtag",
|
Name: "goodtag",
|
||||||
Filter: []string{"mytag"},
|
Filter: []string{"mytag"},
|
||||||
},
|
},
|
||||||
|
@ -91,7 +92,7 @@ func TestConfig_LoadDirectory(t *testing.T) {
|
||||||
|
|
||||||
ex := inputs.Inputs["exec"]().(*exec.Exec)
|
ex := inputs.Inputs["exec"]().(*exec.Exec)
|
||||||
ex.Command = "/usr/bin/myothercollector --foo=bar"
|
ex.Command = "/usr/bin/myothercollector --foo=bar"
|
||||||
eConfig := &InputConfig{
|
eConfig := &models.InputConfig{
|
||||||
Name: "exec",
|
Name: "exec",
|
||||||
MeasurementSuffix: "_myothercollector",
|
MeasurementSuffix: "_myothercollector",
|
||||||
}
|
}
|
||||||
|
@ -110,7 +111,7 @@ func TestConfig_LoadDirectory(t *testing.T) {
|
||||||
pstat := inputs.Inputs["procstat"]().(*procstat.Procstat)
|
pstat := inputs.Inputs["procstat"]().(*procstat.Procstat)
|
||||||
pstat.PidFile = "/var/run/grafana-server.pid"
|
pstat.PidFile = "/var/run/grafana-server.pid"
|
||||||
|
|
||||||
pConfig := &InputConfig{Name: "procstat"}
|
pConfig := &models.InputConfig{Name: "procstat"}
|
||||||
pConfig.Tags = make(map[string]string)
|
pConfig.Tags = make(map[string]string)
|
||||||
|
|
||||||
assert.Equal(t, pstat, c.Inputs[3].Input,
|
assert.Equal(t, pstat, c.Inputs[3].Input,
|
||||||
@@ -118,175 +119,3 @@ func TestConfig_LoadDirectory(t *testing.T) {
 	assert.Equal(t, pConfig, c.Inputs[3].Config,
 		"Merged Testdata did not produce correct procstat metadata.")
 }
@@ -0,0 +1,92 @@
package models

import (
	"strings"

	"github.com/influxdata/influxdb/client/v2"
	"github.com/influxdata/telegraf/internal"
)

// TagFilter is the name of a tag, and the values on which to filter
type TagFilter struct {
	Name   string
	Filter []string
}

// Filter containing drop/pass and tagdrop/tagpass rules
type Filter struct {
	Drop []string
	Pass []string

	TagDrop []TagFilter
	TagPass []TagFilter

	IsActive bool
}

func (f Filter) ShouldPointPass(point *client.Point) bool {
	if f.ShouldPass(point.Name()) && f.ShouldTagsPass(point.Tags()) {
		return true
	}
	return false
}

// ShouldPass returns true if the metric should pass, false if should drop
// based on the drop/pass filter parameters
func (f Filter) ShouldPass(key string) bool {
	if f.Pass != nil {
		for _, pat := range f.Pass {
			// TODO remove HasPrefix check, leaving it for now for legacy support.
			// Cam, 2015-12-07
			if strings.HasPrefix(key, pat) || internal.Glob(pat, key) {
				return true
			}
		}
		return false
	}

	if f.Drop != nil {
		for _, pat := range f.Drop {
			// TODO remove HasPrefix check, leaving it for now for legacy support.
			// Cam, 2015-12-07
			if strings.HasPrefix(key, pat) || internal.Glob(pat, key) {
				return false
			}
		}

		return true
	}
	return true
}

// ShouldTagsPass returns true if the metric should pass, false if should drop
// based on the tagdrop/tagpass filter parameters
func (f Filter) ShouldTagsPass(tags map[string]string) bool {
	if f.TagPass != nil {
		for _, pat := range f.TagPass {
			if tagval, ok := tags[pat.Name]; ok {
				for _, filter := range pat.Filter {
					if internal.Glob(filter, tagval) {
						return true
					}
				}
			}
		}
		return false
	}

	if f.TagDrop != nil {
		for _, pat := range f.TagDrop {
			if tagval, ok := tags[pat.Name]; ok {
				for _, filter := range pat.Filter {
					if internal.Glob(filter, tagval) {
						return false
					}
				}
			}
		}
		return true
	}

	return true
}
@@ -0,0 +1,177 @@
package models

import (
	"testing"
)

func TestFilter_Empty(t *testing.T) {
	f := Filter{}

	measurements := []string{
		"foo",
		"bar",
		"barfoo",
		"foo_bar",
		"foo.bar",
		"foo-bar",
		"supercalifradjulisticexpialidocious",
	}

	for _, measurement := range measurements {
		if !f.ShouldPass(measurement) {
			t.Errorf("Expected measurement %s to pass", measurement)
		}
	}
}

func TestFilter_Pass(t *testing.T) {
	f := Filter{
		Pass: []string{"foo*", "cpu_usage_idle"},
	}

	passes := []string{
		"foo",
		"foo_bar",
		"foo.bar",
		"foo-bar",
		"cpu_usage_idle",
	}

	drops := []string{
		"bar",
		"barfoo",
		"bar_foo",
		"cpu_usage_busy",
	}

	for _, measurement := range passes {
		if !f.ShouldPass(measurement) {
			t.Errorf("Expected measurement %s to pass", measurement)
		}
	}

	for _, measurement := range drops {
		if f.ShouldPass(measurement) {
			t.Errorf("Expected measurement %s to drop", measurement)
		}
	}
}

func TestFilter_Drop(t *testing.T) {
	f := Filter{
		Drop: []string{"foo*", "cpu_usage_idle"},
	}

	drops := []string{
		"foo",
		"foo_bar",
		"foo.bar",
		"foo-bar",
		"cpu_usage_idle",
	}

	passes := []string{
		"bar",
		"barfoo",
		"bar_foo",
		"cpu_usage_busy",
	}

	for _, measurement := range passes {
		if !f.ShouldPass(measurement) {
			t.Errorf("Expected measurement %s to pass", measurement)
		}
	}

	for _, measurement := range drops {
		if f.ShouldPass(measurement) {
			t.Errorf("Expected measurement %s to drop", measurement)
		}
	}
}

func TestFilter_TagPass(t *testing.T) {
	filters := []TagFilter{
		TagFilter{
			Name:   "cpu",
			Filter: []string{"cpu-*"},
		},
		TagFilter{
			Name:   "mem",
			Filter: []string{"mem_free"},
		}}
	f := Filter{
		TagPass: filters,
	}

	passes := []map[string]string{
		{"cpu": "cpu-total"},
		{"cpu": "cpu-0"},
		{"cpu": "cpu-1"},
		{"cpu": "cpu-2"},
		{"mem": "mem_free"},
	}

	drops := []map[string]string{
		{"cpu": "cputotal"},
		{"cpu": "cpu0"},
		{"cpu": "cpu1"},
		{"cpu": "cpu2"},
		{"mem": "mem_used"},
	}

	for _, tags := range passes {
		if !f.ShouldTagsPass(tags) {
			t.Errorf("Expected tags %v to pass", tags)
		}
	}

	for _, tags := range drops {
		if f.ShouldTagsPass(tags) {
			t.Errorf("Expected tags %v to drop", tags)
		}
	}
}

func TestFilter_TagDrop(t *testing.T) {
	filters := []TagFilter{
		TagFilter{
			Name:   "cpu",
			Filter: []string{"cpu-*"},
		},
		TagFilter{
			Name:   "mem",
			Filter: []string{"mem_free"},
		}}
	f := Filter{
		TagDrop: filters,
	}

	drops := []map[string]string{
		{"cpu": "cpu-total"},
		{"cpu": "cpu-0"},
		{"cpu": "cpu-1"},
		{"cpu": "cpu-2"},
		{"mem": "mem_free"},
	}

	passes := []map[string]string{
		{"cpu": "cputotal"},
		{"cpu": "cpu0"},
		{"cpu": "cpu1"},
		{"cpu": "cpu2"},
		{"mem": "mem_used"},
	}

	for _, tags := range passes {
		if !f.ShouldTagsPass(tags) {
			t.Errorf("Expected tags %v to pass", tags)
		}
	}

	for _, tags := range drops {
		if f.ShouldTagsPass(tags) {
			t.Errorf("Expected tags %v to drop", tags)
		}
	}
}
@@ -0,0 +1,24 @@
package models

import (
	"time"

	"github.com/influxdata/telegraf/plugins/inputs"
)

type RunningInput struct {
	Name   string
	Input  inputs.Input
	Config *InputConfig
}

// InputConfig containing a name, interval, and filter
type InputConfig struct {
	Name              string
	NameOverride      string
	MeasurementPrefix string
	MeasurementSuffix string
	Tags              map[string]string
	Filter            Filter
	Interval          time.Duration
}
@@ -0,0 +1,77 @@
package models

import (
	"log"
	"time"

	"github.com/influxdata/telegraf/plugins/outputs"

	"github.com/influxdata/influxdb/client/v2"
)

const DEFAULT_POINT_BUFFER_LIMIT = 10000

type RunningOutput struct {
	Name             string
	Output           outputs.Output
	Config           *OutputConfig
	Quiet            bool
	PointBufferLimit int

	points           []*client.Point
	overwriteCounter int
}

func NewRunningOutput(
	name string,
	output outputs.Output,
	conf *OutputConfig,
) *RunningOutput {
	ro := &RunningOutput{
		Name:             name,
		points:           make([]*client.Point, 0),
		Output:           output,
		Config:           conf,
		PointBufferLimit: DEFAULT_POINT_BUFFER_LIMIT,
	}
	return ro
}

func (ro *RunningOutput) AddPoint(point *client.Point) {
	if ro.Config.Filter.IsActive {
		if !ro.Config.Filter.ShouldPointPass(point) {
			return
		}
	}

	if len(ro.points) < ro.PointBufferLimit {
		ro.points = append(ro.points, point)
	} else {
		if ro.overwriteCounter == len(ro.points) {
			ro.overwriteCounter = 0
		}
		ro.points[ro.overwriteCounter] = point
		ro.overwriteCounter++
	}
}

func (ro *RunningOutput) Write() error {
	start := time.Now()
	err := ro.Output.Write(ro.points)
	elapsed := time.Since(start)
	if err == nil {
		if !ro.Quiet {
			log.Printf("Wrote %d metrics to output %s in %s\n",
				len(ro.points), ro.Name, elapsed)
		}
		ro.points = make([]*client.Point, 0)
		ro.overwriteCounter = 0
	}
	return err
}

// OutputConfig containing name and filter
type OutputConfig struct {
	Name   string
	Filter Filter
}
@@ -0,0 +1,39 @@
# Example Input Plugin

The example plugin gathers metrics about example things.

### Configuration:

```
# Description
[[inputs.example]]
  # SampleConfig
```

### Measurements & Fields:

<optional description>

- measurement1
    - field1 (type, unit)
    - field2 (float, percent)
- measurement2
    - field3 (integer, bytes)

### Tags:

- All measurements have the following tags:
    - tag1 (optional description)
    - tag2
- measurement2 has the following tags:
    - tag3

### Example Output:

Give an example `-test` output here

```
$ ./telegraf -config telegraf.conf -input-filter example -test
measurement1,tag1=foo,tag2=bar field1=1i,field2=2.1 1453831884664956455
measurement2,tag1=foo,tag2=bar,tag3=baz field3=1i 1453831884664956455
```
@@ -8,6 +8,7 @@ import (
 	_ "github.com/influxdata/telegraf/plugins/inputs/docker"
 	_ "github.com/influxdata/telegraf/plugins/inputs/elasticsearch"
 	_ "github.com/influxdata/telegraf/plugins/inputs/exec"
+	_ "github.com/influxdata/telegraf/plugins/inputs/github_webhooks"
 	_ "github.com/influxdata/telegraf/plugins/inputs/haproxy"
 	_ "github.com/influxdata/telegraf/plugins/inputs/httpjson"
 	_ "github.com/influxdata/telegraf/plugins/inputs/influxdb"

@@ -33,6 +34,8 @@ import (
 	_ "github.com/influxdata/telegraf/plugins/inputs/redis"
 	_ "github.com/influxdata/telegraf/plugins/inputs/rethinkdb"
 	_ "github.com/influxdata/telegraf/plugins/inputs/sensors"
+	_ "github.com/influxdata/telegraf/plugins/inputs/snmp"
+	_ "github.com/influxdata/telegraf/plugins/inputs/sqlserver"
 	_ "github.com/influxdata/telegraf/plugins/inputs/statsd"
 	_ "github.com/influxdata/telegraf/plugins/inputs/system"
 	_ "github.com/influxdata/telegraf/plugins/inputs/trig"
@@ -110,15 +110,12 @@ func (d *Docker) gatherContainer(
 		Timeout: time.Duration(time.Second * 5),
 	}
-	var err error
 	go func() {
-		err = d.client.Stats(statOpts)
+		d.client.Stats(statOpts)
 	}()

 	stat := <-statChan
-	if err != nil {
-		return err
-	}
+	close(done)

 	// Add labels to tags
 	for k, v := range container.Labels {
@@ -13,6 +13,9 @@ import (
 )

 const sampleConfig = `
+  # NOTE This plugin only reads numerical measurements, strings and booleans
+  # will be ignored.
+
   # the command to run
   command = "/usr/bin/mycollector --foo=bar"
@@ -0,0 +1,369 @@
# github_webhooks

This is a Telegraf service plugin that listens for events kicked off by Github's Webhooks service and persists data from them into configured outputs. To set up the listener, first generate the proper configuration:

```sh
$ telegraf -sample-config -input-filter github_webhooks -output-filter influxdb > config.conf.new
```

Change the config file to point to the InfluxDB server you are using and adjust the settings to match your environment. Once that is complete:

```sh
$ cp config.conf.new /etc/telegraf/telegraf.conf
$ sudo service telegraf start
```

Once the server is running you should configure your Organization's Webhooks to point at the `github_webhooks` service. To do this go to `github.com/{my_organization}` and click `Settings > Webhooks > Add webhook`. In the resulting menu set `Payload URL` to `http://<my_ip>:1618`, `Content type` to `application/json`, and under the section `Which events would you like to trigger this webhook?` select 'Send me <b>everything</b>'. By default all of the events write to the `github_webhooks` measurement; this is configurable by setting the `measurement_name` in the config file.

## Events

The titles of the following sections are links to the full payloads and details for each event. The body contains what information from the event is persisted. The format is as follows:

```
# TAGS
* 'tagKey' = `tagValue` type
# FIELDS
* 'fieldKey' = `fieldValue` type
```

The tag values and field values show the place on the incoming JSON object where the data is sourced from.

#### [`commit_comment` event](https://developer.github.com/v3/activity/events/types/#commitcommentevent)

**Tags:**
* 'event' = `headers[X-Github-Event]` string
* 'repository' = `event.repository.full_name` string
* 'private' = `event.repository.private` bool
* 'user' = `event.sender.login` string
* 'admin' = `event.sender.site_admin` bool

**Fields:**
* 'stars' = `event.repository.stargazers_count` int
* 'forks' = `event.repository.forks_count` int
* 'issues' = `event.repository.open_issues_count` int
* 'commit' = `event.comment.commit_id` string
* 'comment' = `event.comment.body` string

#### [`create` event](https://developer.github.com/v3/activity/events/types/#createevent)

**Tags:**
* 'event' = `headers[X-Github-Event]` string
* 'repository' = `event.repository.full_name` string
* 'private' = `event.repository.private` bool
* 'user' = `event.sender.login` string
* 'admin' = `event.sender.site_admin` bool

**Fields:**
* 'stars' = `event.repository.stargazers_count` int
* 'forks' = `event.repository.forks_count` int
* 'issues' = `event.repository.open_issues_count` int
* 'ref' = `event.ref` string
* 'issues' = `event.ref_type` string

#### [`delete` event](https://developer.github.com/v3/activity/events/types/#deleteevent)

**Tags:**
* 'event' = `headers[X-Github-Event]` string
* 'repository' = `event.repository.full_name` string
* 'private' = `event.repository.private` bool
* 'user' = `event.sender.login` string
* 'admin' = `event.sender.site_admin` bool

**Fields:**
* 'stars' = `event.repository.stargazers_count` int
* 'forks' = `event.repository.forks_count` int
* 'issues' = `event.repository.open_issues_count` int
* 'ref' = `event.ref` string
* 'issues' = `event.ref_type` string

#### [`deployment` event](https://developer.github.com/v3/activity/events/types/#deploymentevent)

**Tags:**
* 'event' = `headers[X-Github-Event]` string
* 'repository' = `event.repository.full_name` string
* 'private' = `event.repository.private` bool
* 'user' = `event.sender.login` string
* 'admin' = `event.sender.site_admin` bool

**Fields:**
* 'stars' = `event.repository.stargazers_count` int
* 'forks' = `event.repository.forks_count` int
* 'issues' = `event.repository.open_issues_count` int
* 'commit' = `event.deployment.sha` string
* 'task' = `event.deployment.task` string
* 'environment' = `event.deployment.environment` string
* 'description' = `event.deployment.description` string

#### [`deployment_status` event](https://developer.github.com/v3/activity/events/types/#deploymentstatusevent)

**Tags:**
* 'event' = `headers[X-Github-Event]` string
* 'repository' = `event.repository.full_name` string
* 'private' = `event.repository.private` bool
* 'user' = `event.sender.login` string
* 'admin' = `event.sender.site_admin` bool

**Fields:**
* 'stars' = `event.repository.stargazers_count` int
* 'forks' = `event.repository.forks_count` int
* 'issues' = `event.repository.open_issues_count` int
* 'commit' = `event.deployment.sha` string
* 'task' = `event.deployment.task` string
* 'environment' = `event.deployment.environment` string
* 'description' = `event.deployment.description` string
* 'depState' = `event.deployment_status.state` string
* 'depDescription' = `event.deployment_status.description` string

#### [`fork` event](https://developer.github.com/v3/activity/events/types/#forkevent)

**Tags:**
* 'event' = `headers[X-Github-Event]` string
* 'repository' = `event.repository.full_name` string
* 'private' = `event.repository.private` bool
* 'user' = `event.sender.login` string
* 'admin' = `event.sender.site_admin` bool

**Fields:**
* 'stars' = `event.repository.stargazers_count` int
* 'forks' = `event.repository.forks_count` int
* 'issues' = `event.repository.open_issues_count` int
* 'forkee' = `event.forkee.repository` string

#### [`gollum` event](https://developer.github.com/v3/activity/events/types/#gollumevent)

**Tags:**
* 'event' = `headers[X-Github-Event]` string
* 'repository' = `event.repository.full_name` string
* 'private' = `event.repository.private` bool
* 'user' = `event.sender.login` string
* 'admin' = `event.sender.site_admin` bool

**Fields:**
* 'stars' = `event.repository.stargazers_count` int
* 'forks' = `event.repository.forks_count` int
* 'issues' = `event.repository.open_issues_count` int

#### [`issue_comment` event](https://developer.github.com/v3/activity/events/types/#issuecommentevent)

**Tags:**
* 'event' = `headers[X-Github-Event]` string
* 'repository' = `event.repository.full_name` string
* 'private' = `event.repository.private` bool
* 'user' = `event.sender.login` string
* 'admin' = `event.sender.site_admin` bool
* 'issue' = `event.issue.number` int

**Fields:**
* 'stars' = `event.repository.stargazers_count` int
* 'forks' = `event.repository.forks_count` int
* 'issues' = `event.repository.open_issues_count` int
* 'title' = `event.issue.title` string
* 'comments' = `event.issue.comments` int
* 'body' = `event.comment.body` string

#### [`issues` event](https://developer.github.com/v3/activity/events/types/#issuesevent)

**Tags:**
* 'event' = `headers[X-Github-Event]` string
* 'repository' = `event.repository.full_name` string
* 'private' = `event.repository.private` bool
* 'user' = `event.sender.login` string
* 'admin' = `event.sender.site_admin` bool
* 'issue' = `event.issue.number` int
* 'action' = `event.action` string

**Fields:**
* 'stars' = `event.repository.stargazers_count` int
* 'forks' = `event.repository.forks_count` int
* 'issues' = `event.repository.open_issues_count` int
* 'title' = `event.issue.title` string
* 'comments' = `event.issue.comments` int

#### [`member` event](https://developer.github.com/v3/activity/events/types/#memberevent)

**Tags:**
* 'event' = `headers[X-Github-Event]` string
* 'repository' = `event.repository.full_name` string
* 'private' = `event.repository.private` bool
* 'user' = `event.sender.login` string
* 'admin' = `event.sender.site_admin` bool

**Fields:**
* 'stars' = `event.repository.stargazers_count` int
* 'forks' = `event.repository.forks_count` int
* 'issues' = `event.repository.open_issues_count` int
* 'newMember' = `event.sender.login` string
* 'newMemberStatus' = `event.sender.site_admin` bool

#### [`membership` event](https://developer.github.com/v3/activity/events/types/#membershipevent)

**Tags:**
* 'event' = `headers[X-Github-Event]` string
* 'user' = `event.sender.login` string
* 'admin' = `event.sender.site_admin` bool
* 'action' = `event.action` string

**Fields:**
* 'newMember' = `event.sender.login` string
* 'newMemberStatus' = `event.sender.site_admin` bool

#### [`page_build` event](https://developer.github.com/v3/activity/events/types/#pagebuildevent)

**Tags:**
* 'event' = `headers[X-Github-Event]` string
* 'repository' = `event.repository.full_name` string
* 'private' = `event.repository.private` bool
* 'user' = `event.sender.login` string
* 'admin' = `event.sender.site_admin` bool

**Fields:**
* 'stars' = `event.repository.stargazers_count` int
* 'forks' = `event.repository.forks_count` int
* 'issues' = `event.repository.open_issues_count` int

#### [`public` event](https://developer.github.com/v3/activity/events/types/#publicevent)

**Tags:**
* 'event' = `headers[X-Github-Event]` string
* 'repository' = `event.repository.full_name` string
* 'private' = `event.repository.private` bool
* 'user' = `event.sender.login` string
* 'admin' = `event.sender.site_admin` bool

**Fields:**
* 'stars' = `event.repository.stargazers_count` int
* 'forks' = `event.repository.forks_count` int
* 'issues' = `event.repository.open_issues_count` int

#### [`pull_request_review_comment` event](https://developer.github.com/v3/activity/events/types/#pullrequestreviewcommentevent)

**Tags:**
* 'event' = `headers[X-Github-Event]` string
* 'action' = `event.action` string
* 'repository' = `event.repository.full_name` string
* 'private' = `event.repository.private` bool
* 'user' = `event.sender.login` string
* 'admin' = `event.sender.site_admin` bool
* 'prNumber' = `event.pull_request.number` int

**Fields:**
* 'stars' = `event.repository.stargazers_count` int
* 'forks' = `event.repository.forks_count` int
* 'issues' = `event.repository.open_issues_count` int
* 'state' = `event.pull_request.state` string
* 'title' = `event.pull_request.title` string
* 'comments' = `event.pull_request.comments` int
* 'commits' = `event.pull_request.commits` int
* 'additions' = `event.pull_request.additions` int
* 'deletions' = `event.pull_request.deletions` int
* 'changedFiles' = `event.pull_request.changed_files` int
* 'commentFile' = `event.comment.file` string
* 'comment' = `event.comment.body` string

#### [`pull_request` event](https://developer.github.com/v3/activity/events/types/#pullrequestevent)

**Tags:**
* 'event' = `headers[X-Github-Event]` string
* 'action' = `event.action` string
* 'repository' = `event.repository.full_name` string
* 'private' = `event.repository.private` bool
* 'user' = `event.sender.login` string
* 'admin' = `event.sender.site_admin` bool
* 'prNumber' = `event.pull_request.number` int

**Fields:**
* 'stars' = `event.repository.stargazers_count` int
* 'forks' = `event.repository.forks_count` int
* 'issues' = `event.repository.open_issues_count` int
* 'state' = `event.pull_request.state` string
* 'title' = `event.pull_request.title` string
* 'comments' = `event.pull_request.comments` int
* 'commits' = `event.pull_request.commits` int
* 'additions' = `event.pull_request.additions` int
* 'deletions' = `event.pull_request.deletions` int
* 'changedFiles' = `event.pull_request.changed_files` int

#### [`push` event](https://developer.github.com/v3/activity/events/types/#pushevent)

**Tags:**
* 'event' = `headers[X-Github-Event]` string
* 'repository' = `event.repository.full_name` string
* 'private' = `event.repository.private` bool
* 'user' = `event.sender.login` string
* 'admin' = `event.sender.site_admin` bool

**Fields:**
* 'stars' = `event.repository.stargazers_count` int
* 'forks' = `event.repository.forks_count` int
* 'issues' = `event.repository.open_issues_count` int
* 'ref' = `event.ref` string
* 'before' = `event.before` string
* 'after' = `event.after` string

#### [`repository` event](https://developer.github.com/v3/activity/events/types/#repositoryevent)
|
||||||
|
|
||||||
|
**Tags:**
|
||||||
|
* 'event' = `headers[X-Github-Event]` string
|
||||||
|
* 'repository' = `event.repository.full_name` string
|
||||||
|
* 'private' = `event.repository.private` bool
|
||||||
|
* 'user' = `event.sender.login` string
|
||||||
|
* 'admin' = `event.sender.site_admin` bool
|
||||||
|
|
||||||
|
**Fields:**
|
||||||
|
* 'stars' = `event.repository.stargazers_count` int
|
||||||
|
* 'forks' = `event.repository.forks_count` int
|
||||||
|
* 'issues' = `event.repository.open_issues_count` int
|
||||||
|
|
||||||
|
#### [`release` event](https://developer.github.com/v3/activity/events/types/#releaseevent)
|
||||||
|
|
||||||
|
**Tags:**
|
||||||
|
* 'event' = `headers[X-Github-Event]` string
|
||||||
|
* 'repository' = `event.repository.full_name` string
|
||||||
|
* 'private' = `event.repository.private` bool
|
||||||
|
* 'user' = `event.sender.login` string
|
||||||
|
* 'admin' = `event.sender.site_admin` bool
|
||||||
|
|
||||||
|
**Fields:**
|
||||||
|
* 'stars' = `event.repository.stargazers_count` int
|
||||||
|
* 'forks' = `event.repository.forks_count` int
|
||||||
|
* 'issues' = `event.repository.open_issues_count` int
|
||||||
|
* 'tagName' = `event.release.tag_name` string
|
||||||
|
|
||||||
|
#### [`status` event](https://developer.github.com/v3/activity/events/types/#statusevent)
|
||||||
|
|
||||||
|
**Tags:**
|
||||||
|
* 'event' = `headers[X-Github-Event]` string
|
||||||
|
* 'repository' = `event.repository.full_name` string
|
||||||
|
* 'private' = `event.repository.private` bool
|
||||||
|
* 'user' = `event.sender.login` string
|
||||||
|
* 'admin' = `event.sender.site_admin` bool
|
||||||
|
|
||||||
|
**Fields:**
|
||||||
|
* 'stars' = `event.repository.stargazers_count` int
|
||||||
|
* 'forks' = `event.repository.forks_count` int
|
||||||
|
* 'issues' = `event.repository.open_issues_count` int
|
||||||
|
* 'commit' = `event.sha` string
|
||||||
|
* 'state' = `event.state` string
|
||||||
|
|
||||||
|
#### [`team_add` event](https://developer.github.com/v3/activity/events/types/#teamaddevent)
|
||||||
|
|
||||||
|
**Tags:**
|
||||||
|
* 'event' = `headers[X-Github-Event]` string
|
||||||
|
* 'repository' = `event.repository.full_name` string
|
||||||
|
* 'private' = `event.repository.private` bool
|
||||||
|
* 'user' = `event.sender.login` string
|
||||||
|
* 'admin' = `event.sender.site_admin` bool
|
||||||
|
|
||||||
|
**Fields:**
|
||||||
|
* 'stars' = `event.repository.stargazers_count` int
|
||||||
|
* 'forks' = `event.repository.forks_count` int
|
||||||
|
* 'issues' = `event.repository.open_issues_count` int
|
||||||
|
* 'teamName' = `event.team.name` string
|
||||||
|
|
||||||
|
#### [`watch` event](https://developer.github.com/v3/activity/events/types/#watchevent)
|
||||||
|
|
||||||
|
**Tags:**
|
||||||
|
* 'event' = `headers[X-Github-Event]` string
|
||||||
|
* 'repository' = `event.repository.full_name` string
|
||||||
|
* 'private' = `event.repository.private` bool
|
||||||
|
* 'user' = `event.sender.login` string
|
||||||
|
* 'admin' = `event.sender.site_admin` bool
|
||||||
|
|
||||||
|
**Fields:**
|
||||||
|
* 'stars' = `event.repository.stargazers_count` int
|
||||||
|
* 'forks' = `event.repository.forks_count` int
|
||||||
|
* 'issues' = `event.repository.open_issues_count` int
|
|
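Events are collected by running the `github_webhooks` service input and pointing a repository's webhook at it. A minimal configuration sketch — the table name follows the plugin's registration name, and `:1618` matches the plugin's sample config:

```toml
[[inputs.github_webhooks]]
  # Address and port to host Webhook listener on
  service_address = ":1618"
```

The repository's webhook should then be configured to POST its JSON payloads to `/` on that address.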
@ -0,0 +1,334 @@
package github_webhooks

import (
	"encoding/json"
	"io/ioutil"
	"log"
	"net/http"
	"sync"

	"github.com/gorilla/mux"
	"github.com/influxdata/telegraf/plugins/inputs"
)

func init() {
	inputs.Add("github_webhooks", func() inputs.Input { return &GithubWebhooks{} })
}

type GithubWebhooks struct {
	ServiceAddress string
	// Lock for the struct
	sync.Mutex
	// Events buffer to store events between Gather calls
	events []Event
}

func NewGithubWebhooks() *GithubWebhooks {
	return &GithubWebhooks{}
}

func (gh *GithubWebhooks) SampleConfig() string {
	return `
  # Address and port to host Webhook listener on
  service_address = ":1618"
`
}

func (gh *GithubWebhooks) Description() string {
	return "A Github Webhook Event collector"
}

// Gather writes the buffered events to the Accumulator and clears the buffer.
func (gh *GithubWebhooks) Gather(acc inputs.Accumulator) error {
	gh.Lock()
	defer gh.Unlock()
	for _, event := range gh.events {
		p := event.NewPoint()
		acc.AddFields("github_webhooks", p.Fields(), p.Tags(), p.Time())
	}
	gh.events = make([]Event, 0)
	return nil
}

func (gh *GithubWebhooks) Listen() {
	r := mux.NewRouter()
	r.HandleFunc("/", gh.eventHandler).Methods("POST")
	err := http.ListenAndServe(gh.ServiceAddress, r)
	if err != nil {
		log.Printf("Error starting server: %v", err)
	}
}

func (gh *GithubWebhooks) Start() error {
	go gh.Listen()
	log.Printf("Started the github_webhooks service on %s\n", gh.ServiceAddress)
	return nil
}

func (gh *GithubWebhooks) Stop() {
	log.Println("Stopping the github_webhooks service")
}

// eventHandler handles the / route.
func (gh *GithubWebhooks) eventHandler(w http.ResponseWriter, r *http.Request) {
	eventType := r.Header.Get("X-Github-Event")
	data, err := ioutil.ReadAll(r.Body)
	if err != nil {
		w.WriteHeader(http.StatusBadRequest)
		return
	}
	e, err := NewEvent(data, eventType)
	if err != nil {
		w.WriteHeader(http.StatusBadRequest)
		return
	}
	gh.Lock()
	gh.events = append(gh.events, e)
	gh.Unlock()
	w.WriteHeader(http.StatusOK)
}
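The handler and Gather above share one pattern: the HTTP goroutine appends events to a slice under a mutex, and Gather drains that slice under the same mutex. A self-contained sketch of just that pattern (the `buffer` type is hypothetical, not part of the plugin):

```go
package main

import (
	"fmt"
	"sync"
)

// buffer mimics the plugin's event buffering: producers append under a
// lock, and gather drains the slice under the same lock.
type buffer struct {
	sync.Mutex
	events []string
}

// add appends one event; safe to call from multiple goroutines.
func (b *buffer) add(e string) {
	b.Lock()
	b.events = append(b.events, e)
	b.Unlock()
}

// gather returns all buffered events and resets the buffer.
func (b *buffer) gather() []string {
	b.Lock()
	defer b.Unlock()
	drained := b.events
	b.events = nil
	return drained
}

func main() {
	b := &buffer{}
	b.add("push")
	b.add("watch")
	fmt.Println(len(b.gather())) // 2
	fmt.Println(len(b.gather())) // 0
}
```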
func newCommitComment(data []byte) (Event, error) {
	commitCommentStruct := CommitCommentEvent{}
	err := json.Unmarshal(data, &commitCommentStruct)
	if err != nil {
		return nil, err
	}
	return commitCommentStruct, nil
}

func newCreate(data []byte) (Event, error) {
	createStruct := CreateEvent{}
	err := json.Unmarshal(data, &createStruct)
	if err != nil {
		return nil, err
	}
	return createStruct, nil
}

func newDelete(data []byte) (Event, error) {
	deleteStruct := DeleteEvent{}
	err := json.Unmarshal(data, &deleteStruct)
	if err != nil {
		return nil, err
	}
	return deleteStruct, nil
}

func newDeployment(data []byte) (Event, error) {
	deploymentStruct := DeploymentEvent{}
	err := json.Unmarshal(data, &deploymentStruct)
	if err != nil {
		return nil, err
	}
	return deploymentStruct, nil
}

func newDeploymentStatus(data []byte) (Event, error) {
	deploymentStatusStruct := DeploymentStatusEvent{}
	err := json.Unmarshal(data, &deploymentStatusStruct)
	if err != nil {
		return nil, err
	}
	return deploymentStatusStruct, nil
}

func newFork(data []byte) (Event, error) {
	forkStruct := ForkEvent{}
	err := json.Unmarshal(data, &forkStruct)
	if err != nil {
		return nil, err
	}
	return forkStruct, nil
}

func newGollum(data []byte) (Event, error) {
	gollumStruct := GollumEvent{}
	err := json.Unmarshal(data, &gollumStruct)
	if err != nil {
		return nil, err
	}
	return gollumStruct, nil
}

func newIssueComment(data []byte) (Event, error) {
	issueCommentStruct := IssueCommentEvent{}
	err := json.Unmarshal(data, &issueCommentStruct)
	if err != nil {
		return nil, err
	}
	return issueCommentStruct, nil
}

func newIssues(data []byte) (Event, error) {
	issuesStruct := IssuesEvent{}
	err := json.Unmarshal(data, &issuesStruct)
	if err != nil {
		return nil, err
	}
	return issuesStruct, nil
}

func newMember(data []byte) (Event, error) {
	memberStruct := MemberEvent{}
	err := json.Unmarshal(data, &memberStruct)
	if err != nil {
		return nil, err
	}
	return memberStruct, nil
}

func newMembership(data []byte) (Event, error) {
	membershipStruct := MembershipEvent{}
	err := json.Unmarshal(data, &membershipStruct)
	if err != nil {
		return nil, err
	}
	return membershipStruct, nil
}

func newPageBuild(data []byte) (Event, error) {
	pageBuildEvent := PageBuildEvent{}
	err := json.Unmarshal(data, &pageBuildEvent)
	if err != nil {
		return nil, err
	}
	return pageBuildEvent, nil
}

func newPublic(data []byte) (Event, error) {
	publicEvent := PublicEvent{}
	err := json.Unmarshal(data, &publicEvent)
	if err != nil {
		return nil, err
	}
	return publicEvent, nil
}

func newPullRequest(data []byte) (Event, error) {
	pullRequestStruct := PullRequestEvent{}
	err := json.Unmarshal(data, &pullRequestStruct)
	if err != nil {
		return nil, err
	}
	return pullRequestStruct, nil
}

func newPullRequestReviewComment(data []byte) (Event, error) {
	pullRequestReviewCommentStruct := PullRequestReviewCommentEvent{}
	err := json.Unmarshal(data, &pullRequestReviewCommentStruct)
	if err != nil {
		return nil, err
	}
	return pullRequestReviewCommentStruct, nil
}

func newPush(data []byte) (Event, error) {
	pushStruct := PushEvent{}
	err := json.Unmarshal(data, &pushStruct)
	if err != nil {
		return nil, err
	}
	return pushStruct, nil
}

func newRelease(data []byte) (Event, error) {
	releaseStruct := ReleaseEvent{}
	err := json.Unmarshal(data, &releaseStruct)
	if err != nil {
		return nil, err
	}
	return releaseStruct, nil
}

func newRepository(data []byte) (Event, error) {
	repositoryStruct := RepositoryEvent{}
	err := json.Unmarshal(data, &repositoryStruct)
	if err != nil {
		return nil, err
	}
	return repositoryStruct, nil
}

func newStatus(data []byte) (Event, error) {
	statusStruct := StatusEvent{}
	err := json.Unmarshal(data, &statusStruct)
	if err != nil {
		return nil, err
	}
	return statusStruct, nil
}

func newTeamAdd(data []byte) (Event, error) {
	teamAddStruct := TeamAddEvent{}
	err := json.Unmarshal(data, &teamAddStruct)
	if err != nil {
		return nil, err
	}
	return teamAddStruct, nil
}

func newWatch(data []byte) (Event, error) {
	watchStruct := WatchEvent{}
	err := json.Unmarshal(data, &watchStruct)
	if err != nil {
		return nil, err
	}
	return watchStruct, nil
}
type newEventError struct {
	s string
}

func (e *newEventError) Error() string {
	return e.s
}

func NewEvent(r []byte, t string) (Event, error) {
	log.Printf("New %v event received", t)
	switch t {
	case "commit_comment":
		return newCommitComment(r)
	case "create":
		return newCreate(r)
	case "delete":
		return newDelete(r)
	case "deployment":
		return newDeployment(r)
	case "deployment_status":
		return newDeploymentStatus(r)
	case "fork":
		return newFork(r)
	case "gollum":
		return newGollum(r)
	case "issue_comment":
		return newIssueComment(r)
	case "issues":
		return newIssues(r)
	case "member":
		return newMember(r)
	case "membership":
		return newMembership(r)
	case "page_build":
		return newPageBuild(r)
	case "public":
		return newPublic(r)
	case "pull_request":
		return newPullRequest(r)
	case "pull_request_review_comment":
		return newPullRequestReviewComment(r)
	case "push":
		return newPush(r)
	case "release":
		return newRelease(r)
	case "repository":
		return newRepository(r)
	case "status":
		return newStatus(r)
	case "team_add":
		return newTeamAdd(r)
	case "watch":
		return newWatch(r)
	}
	return nil, &newEventError{"Not a recognized event type"}
}
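The `NewEvent` dispatch above can be exercised in isolation; a minimal, self-contained sketch of the same pattern (the `pushEvent`/`newEvent` names here are illustrative stand-ins, not the plugin's own types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// pushEvent mirrors the shape decoded for a "push" webhook payload.
type pushEvent struct {
	Ref    string `json:"ref"`
	Before string `json:"before"`
	After  string `json:"after"`
}

// newEvent dispatches on the X-Github-Event header value, like the
// plugin's NewEvent, but for a single event type.
func newEvent(data []byte, t string) (interface{}, error) {
	switch t {
	case "push":
		e := pushEvent{}
		if err := json.Unmarshal(data, &e); err != nil {
			return nil, err
		}
		return e, nil
	}
	return nil, fmt.Errorf("not a recognized event type: %q", t)
}

func main() {
	body := []byte(`{"ref":"refs/heads/master","before":"abc","after":"def"}`)
	e, err := newEvent(body, "push")
	if err != nil {
		panic(err)
	}
	fmt.Println(e.(pushEvent).Ref) // refs/heads/master
}
```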
@ -0,0 +1,711 @@
package github_webhooks

import (
	"fmt"
	"log"
	"time"

	"github.com/influxdata/influxdb/client/v2"
)

const meas = "github_webhooks"

type Event interface {
	NewPoint() *client.Point
}

type Repository struct {
	Repository string `json:"full_name"`
	Private    bool   `json:"private"`
	Stars      int    `json:"stargazers_count"`
	Forks      int    `json:"forks_count"`
	Issues     int    `json:"open_issues_count"`
}

type Sender struct {
	User  string `json:"login"`
	Admin bool   `json:"site_admin"`
}

type CommitComment struct {
	Commit string `json:"commit_id"`
	Body   string `json:"body"`
}

type Deployment struct {
	Commit      string `json:"sha"`
	Task        string `json:"task"`
	Environment string `json:"environment"`
	Description string `json:"description"`
}

type Page struct {
	Name   string `json:"page_name"`
	Title  string `json:"title"`
	Action string `json:"action"`
}

type Issue struct {
	Number   int    `json:"number"`
	Title    string `json:"title"`
	Comments int    `json:"comments"`
}

type IssueComment struct {
	Body string `json:"body"`
}

type Team struct {
	Name string `json:"name"`
}

type PullRequest struct {
	Number       int    `json:"number"`
	State        string `json:"state"`
	Title        string `json:"title"`
	Comments     int    `json:"comments"`
	Commits      int    `json:"commits"`
	Additions    int    `json:"additions"`
	Deletions    int    `json:"deletions"`
	ChangedFiles int    `json:"changed_files"`
}

type PullRequestReviewComment struct {
	File    string `json:"path"`
	Comment string `json:"body"`
}

type Release struct {
	TagName string `json:"tag_name"`
}

type DeploymentStatus struct {
	State       string `json:"state"`
	Description string `json:"description"`
}
type CommitCommentEvent struct {
	Comment    CommitComment `json:"comment"`
	Repository Repository    `json:"repository"`
	Sender     Sender        `json:"sender"`
}

func (s CommitCommentEvent) NewPoint() *client.Point {
	event := "commit_comment"
	t := map[string]string{
		"event":      event,
		"repository": s.Repository.Repository,
		"private":    fmt.Sprintf("%v", s.Repository.Private),
		"user":       s.Sender.User,
		"admin":      fmt.Sprintf("%v", s.Sender.Admin),
	}
	f := map[string]interface{}{
		"stars":   s.Repository.Stars,
		"forks":   s.Repository.Forks,
		"issues":  s.Repository.Issues,
		"commit":  s.Comment.Commit,
		"comment": s.Comment.Body,
	}
	p, err := client.NewPoint(meas, t, f, time.Now())
	if err != nil {
		log.Fatalf("Failed to create %v event", event)
	}
	return p
}

type CreateEvent struct {
	Ref        string     `json:"ref"`
	RefType    string     `json:"ref_type"`
	Repository Repository `json:"repository"`
	Sender     Sender     `json:"sender"`
}

func (s CreateEvent) NewPoint() *client.Point {
	event := "create"
	t := map[string]string{
		"event":      event,
		"repository": s.Repository.Repository,
		"private":    fmt.Sprintf("%v", s.Repository.Private),
		"user":       s.Sender.User,
		"admin":      fmt.Sprintf("%v", s.Sender.Admin),
	}
	f := map[string]interface{}{
		"stars":   s.Repository.Stars,
		"forks":   s.Repository.Forks,
		"issues":  s.Repository.Issues,
		"ref":     s.Ref,
		"refType": s.RefType,
	}
	p, err := client.NewPoint(meas, t, f, time.Now())
	if err != nil {
		log.Fatalf("Failed to create %v event", event)
	}
	return p
}
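Each `NewPoint` method above splits the payload into string tags (tag values in InfluxDB are strings, so booleans are formatted with `%v`) and typed fields. A self-contained sketch of that split, without the influxdb client dependency (`repoTags`/`repoFields` are hypothetical helpers for illustration):

```go
package main

import "fmt"

// Repository mirrors the subset of the webhook payload the plugin keeps.
type Repository struct {
	Repository string `json:"full_name"`
	Private    bool   `json:"private"`
	Stars      int    `json:"stargazers_count"`
	Forks      int    `json:"forks_count"`
	Issues     int    `json:"open_issues_count"`
}

// repoTags builds the common tag set attached to every event.
func repoTags(event string, r Repository) map[string]string {
	return map[string]string{
		"event":      event,
		"repository": r.Repository,
		"private":    fmt.Sprintf("%v", r.Private),
	}
}

// repoFields builds the common numeric field set.
func repoFields(r Repository) map[string]interface{} {
	return map[string]interface{}{
		"stars":  r.Stars,
		"forks":  r.Forks,
		"issues": r.Issues,
	}
}

func main() {
	r := Repository{Repository: "influxdata/telegraf", Private: false, Stars: 42}
	fmt.Println(repoTags("create", r)["private"]) // false
	fmt.Println(repoFields(r)["stars"])           // 42
}
```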
type DeleteEvent struct {
	Ref        string     `json:"ref"`
	RefType    string     `json:"ref_type"`
	Repository Repository `json:"repository"`
	Sender     Sender     `json:"sender"`
}

func (s DeleteEvent) NewPoint() *client.Point {
	event := "delete"
	t := map[string]string{
		"event":      event,
		"repository": s.Repository.Repository,
		"private":    fmt.Sprintf("%v", s.Repository.Private),
		"user":       s.Sender.User,
		"admin":      fmt.Sprintf("%v", s.Sender.Admin),
	}
	f := map[string]interface{}{
		"stars":   s.Repository.Stars,
		"forks":   s.Repository.Forks,
		"issues":  s.Repository.Issues,
		"ref":     s.Ref,
		"refType": s.RefType,
	}
	p, err := client.NewPoint(meas, t, f, time.Now())
	if err != nil {
		log.Fatalf("Failed to create %v event", event)
	}
	return p
}

type DeploymentEvent struct {
	Deployment Deployment `json:"deployment"`
	Repository Repository `json:"repository"`
	Sender     Sender     `json:"sender"`
}

func (s DeploymentEvent) NewPoint() *client.Point {
	event := "deployment"
	t := map[string]string{
		"event":      event,
		"repository": s.Repository.Repository,
		"private":    fmt.Sprintf("%v", s.Repository.Private),
		"user":       s.Sender.User,
		"admin":      fmt.Sprintf("%v", s.Sender.Admin),
	}
	f := map[string]interface{}{
		"stars":       s.Repository.Stars,
		"forks":       s.Repository.Forks,
		"issues":      s.Repository.Issues,
		"commit":      s.Deployment.Commit,
		"task":        s.Deployment.Task,
		"environment": s.Deployment.Environment,
		"description": s.Deployment.Description,
	}
	p, err := client.NewPoint(meas, t, f, time.Now())
	if err != nil {
		log.Fatalf("Failed to create %v event", event)
	}
	return p
}

type DeploymentStatusEvent struct {
	Deployment       Deployment       `json:"deployment"`
	DeploymentStatus DeploymentStatus `json:"deployment_status"`
	Repository       Repository       `json:"repository"`
	Sender           Sender           `json:"sender"`
}

func (s DeploymentStatusEvent) NewPoint() *client.Point {
	event := "deployment_status"
	t := map[string]string{
		"event":      event,
		"repository": s.Repository.Repository,
		"private":    fmt.Sprintf("%v", s.Repository.Private),
		"user":       s.Sender.User,
		"admin":      fmt.Sprintf("%v", s.Sender.Admin),
	}
	f := map[string]interface{}{
		"stars":          s.Repository.Stars,
		"forks":          s.Repository.Forks,
		"issues":         s.Repository.Issues,
		"commit":         s.Deployment.Commit,
		"task":           s.Deployment.Task,
		"environment":    s.Deployment.Environment,
		"description":    s.Deployment.Description,
		"depState":       s.DeploymentStatus.State,
		"depDescription": s.DeploymentStatus.Description,
	}
	p, err := client.NewPoint(meas, t, f, time.Now())
	if err != nil {
		log.Fatalf("Failed to create %v event", event)
	}
	return p
}
type ForkEvent struct {
	Forkee     Repository `json:"forkee"`
	Repository Repository `json:"repository"`
	Sender     Sender     `json:"sender"`
}

func (s ForkEvent) NewPoint() *client.Point {
	event := "fork"
	t := map[string]string{
		"event":      event,
		"repository": s.Repository.Repository,
		"private":    fmt.Sprintf("%v", s.Repository.Private),
		"user":       s.Sender.User,
		"admin":      fmt.Sprintf("%v", s.Sender.Admin),
	}
	f := map[string]interface{}{
		"stars":  s.Repository.Stars,
		"forks":  s.Repository.Forks,
		"issues": s.Repository.Issues,
		"fork":   s.Forkee.Repository,
	}
	p, err := client.NewPoint(meas, t, f, time.Now())
	if err != nil {
		log.Fatalf("Failed to create %v event", event)
	}
	return p
}

type GollumEvent struct {
	Pages      []Page     `json:"pages"`
	Repository Repository `json:"repository"`
	Sender     Sender     `json:"sender"`
}

// REVIEW: Going to be lazy and not deal with the pages.
func (s GollumEvent) NewPoint() *client.Point {
	event := "gollum"
	t := map[string]string{
		"event":      event,
		"repository": s.Repository.Repository,
		"private":    fmt.Sprintf("%v", s.Repository.Private),
		"user":       s.Sender.User,
		"admin":      fmt.Sprintf("%v", s.Sender.Admin),
	}
	f := map[string]interface{}{
		"stars":  s.Repository.Stars,
		"forks":  s.Repository.Forks,
		"issues": s.Repository.Issues,
	}
	p, err := client.NewPoint(meas, t, f, time.Now())
	if err != nil {
		log.Fatalf("Failed to create %v event", event)
	}
	return p
}

type IssueCommentEvent struct {
	Issue      Issue        `json:"issue"`
	Comment    IssueComment `json:"comment"`
	Repository Repository   `json:"repository"`
	Sender     Sender       `json:"sender"`
}

func (s IssueCommentEvent) NewPoint() *client.Point {
	event := "issue_comment"
	t := map[string]string{
		"event":      event,
		"repository": s.Repository.Repository,
		"private":    fmt.Sprintf("%v", s.Repository.Private),
		"user":       s.Sender.User,
		"admin":      fmt.Sprintf("%v", s.Sender.Admin),
		"issue":      fmt.Sprintf("%v", s.Issue.Number),
	}
	f := map[string]interface{}{
		"stars":    s.Repository.Stars,
		"forks":    s.Repository.Forks,
		"issues":   s.Repository.Issues,
		"title":    s.Issue.Title,
		"comments": s.Issue.Comments,
		"body":     s.Comment.Body,
	}
	p, err := client.NewPoint(meas, t, f, time.Now())
	if err != nil {
		log.Fatalf("Failed to create %v event", event)
	}
	return p
}

type IssuesEvent struct {
	Action     string     `json:"action"`
	Issue      Issue      `json:"issue"`
	Repository Repository `json:"repository"`
	Sender     Sender     `json:"sender"`
}

func (s IssuesEvent) NewPoint() *client.Point {
	event := "issue"
	t := map[string]string{
		"event":      event,
		"repository": s.Repository.Repository,
		"private":    fmt.Sprintf("%v", s.Repository.Private),
		"user":       s.Sender.User,
		"admin":      fmt.Sprintf("%v", s.Sender.Admin),
		"issue":      fmt.Sprintf("%v", s.Issue.Number),
		"action":     s.Action,
	}
	f := map[string]interface{}{
		"stars":    s.Repository.Stars,
		"forks":    s.Repository.Forks,
		"issues":   s.Repository.Issues,
		"title":    s.Issue.Title,
		"comments": s.Issue.Comments,
	}
	p, err := client.NewPoint(meas, t, f, time.Now())
	if err != nil {
		log.Fatalf("Failed to create %v event", event)
	}
	return p
}

type MemberEvent struct {
	Member     Sender     `json:"member"`
	Repository Repository `json:"repository"`
	Sender     Sender     `json:"sender"`
}

func (s MemberEvent) NewPoint() *client.Point {
	event := "member"
	t := map[string]string{
		"event":      event,
		"repository": s.Repository.Repository,
		"private":    fmt.Sprintf("%v", s.Repository.Private),
		"user":       s.Sender.User,
		"admin":      fmt.Sprintf("%v", s.Sender.Admin),
	}
	f := map[string]interface{}{
		"stars":           s.Repository.Stars,
		"forks":           s.Repository.Forks,
		"issues":          s.Repository.Issues,
		"newMember":       s.Member.User,
		"newMemberStatus": s.Member.Admin,
	}
	p, err := client.NewPoint(meas, t, f, time.Now())
	if err != nil {
		log.Fatalf("Failed to create %v event", event)
	}
	return p
}

type MembershipEvent struct {
	Action string `json:"action"`
	Member Sender `json:"member"`
	Sender Sender `json:"sender"`
	Team   Team   `json:"team"`
}

func (s MembershipEvent) NewPoint() *client.Point {
	event := "membership"
	t := map[string]string{
		"event":  event,
		"user":   s.Sender.User,
		"admin":  fmt.Sprintf("%v", s.Sender.Admin),
		"action": s.Action,
	}
	f := map[string]interface{}{
		"newMember":       s.Member.User,
		"newMemberStatus": s.Member.Admin,
	}
	p, err := client.NewPoint(meas, t, f, time.Now())
	if err != nil {
		log.Fatalf("Failed to create %v event", event)
	}
	return p
}

type PageBuildEvent struct {
	Repository Repository `json:"repository"`
	Sender     Sender     `json:"sender"`
}

func (s PageBuildEvent) NewPoint() *client.Point {
	event := "page_build"
	t := map[string]string{
		"event":      event,
		"repository": s.Repository.Repository,
		"private":    fmt.Sprintf("%v", s.Repository.Private),
		"user":       s.Sender.User,
		"admin":      fmt.Sprintf("%v", s.Sender.Admin),
	}
	f := map[string]interface{}{
		"stars":  s.Repository.Stars,
		"forks":  s.Repository.Forks,
		"issues": s.Repository.Issues,
	}
	p, err := client.NewPoint(meas, t, f, time.Now())
	if err != nil {
		log.Fatalf("Failed to create %v event", event)
	}
	return p
}

type PublicEvent struct {
	Repository Repository `json:"repository"`
	Sender     Sender     `json:"sender"`
}

func (s PublicEvent) NewPoint() *client.Point {
	event := "public"
	t := map[string]string{
		"event":      event,
		"repository": s.Repository.Repository,
		"private":    fmt.Sprintf("%v", s.Repository.Private),
		"user":       s.Sender.User,
		"admin":      fmt.Sprintf("%v", s.Sender.Admin),
	}
	f := map[string]interface{}{
		"stars":  s.Repository.Stars,
		"forks":  s.Repository.Forks,
		"issues": s.Repository.Issues,
	}
	p, err := client.NewPoint(meas, t, f, time.Now())
	if err != nil {
		log.Fatalf("Failed to create %v event", event)
	}
	return p
}

type PullRequestEvent struct {
	Action      string      `json:"action"`
	PullRequest PullRequest `json:"pull_request"`
	Repository  Repository  `json:"repository"`
	Sender      Sender      `json:"sender"`
}

func (s PullRequestEvent) NewPoint() *client.Point {
	event := "pull_request"
	t := map[string]string{
		"event":      event,
		"action":     s.Action,
		"repository": s.Repository.Repository,
		"private":    fmt.Sprintf("%v", s.Repository.Private),
		"user":       s.Sender.User,
		"admin":      fmt.Sprintf("%v", s.Sender.Admin),
		"prNumber":   fmt.Sprintf("%v", s.PullRequest.Number),
	}
	f := map[string]interface{}{
		"stars":        s.Repository.Stars,
		"forks":        s.Repository.Forks,
		"issues":       s.Repository.Issues,
		"state":        s.PullRequest.State,
		"title":        s.PullRequest.Title,
		"comments":     s.PullRequest.Comments,
		"commits":      s.PullRequest.Commits,
		"additions":    s.PullRequest.Additions,
		"deletions":    s.PullRequest.Deletions,
		"changedFiles": s.PullRequest.ChangedFiles,
	}
	p, err := client.NewPoint(meas, t, f, time.Now())
	if err != nil {
		log.Fatalf("Failed to create %v event", event)
	}
	return p
}

type PullRequestReviewCommentEvent struct {
	Comment     PullRequestReviewComment `json:"comment"`
	PullRequest PullRequest              `json:"pull_request"`
	Repository  Repository               `json:"repository"`
	Sender      Sender                   `json:"sender"`
||||||
|
}
|
||||||
|
|
||||||
|
func (s PullRequestReviewCommentEvent) NewPoint() *client.Point {
|
||||||
|
event := "pull_request_review_comment"
|
||||||
|
t := map[string]string{
|
||||||
|
"event": event,
|
||||||
|
"repository": s.Repository.Repository,
|
||||||
|
"private": fmt.Sprintf("%v", s.Repository.Private),
|
||||||
|
"user": s.Sender.User,
|
||||||
|
"admin": fmt.Sprintf("%v", s.Sender.Admin),
|
||||||
|
"prNumber": fmt.Sprintf("%v", s.PullRequest.Number),
|
||||||
|
}
|
||||||
|
f := map[string]interface{}{
|
||||||
|
"stars": s.Repository.Stars,
|
||||||
|
"forks": s.Repository.Forks,
|
||||||
|
"issues": s.Repository.Issues,
|
||||||
|
"state": s.PullRequest.State,
|
||||||
|
"title": s.PullRequest.Title,
|
||||||
|
"comments": s.PullRequest.Comments,
|
||||||
|
"commits": s.PullRequest.Commits,
|
||||||
|
"additions": s.PullRequest.Additions,
|
||||||
|
"deletions": s.PullRequest.Deletions,
|
||||||
|
"changedFiles": s.PullRequest.ChangedFiles,
|
||||||
|
"commentFile": s.Comment.File,
|
||||||
|
"comment": s.Comment.Comment,
|
||||||
|
}
|
||||||
|
p, err := client.NewPoint(meas, t, f, time.Now())
|
||||||
|
if err != nil {
|
||||||
|
log.Fatalf("Failed to create %v event", event)
|
||||||
|
}
|
||||||
|
return p
|
||||||
|
}
|
||||||
|
|
||||||
|
type PushEvent struct {
|
||||||
|
Ref string `json:"ref"`
|
||||||
|
Before string `json:"before"`
|
||||||
|
After string `json:"after"`
|
||||||
|
Repository Repository `json:"repository"`
|
||||||
|
Sender Sender `json:"sender"`
|
||||||
|
}
|
||||||
|
|
||||||
|
func (s PushEvent) NewPoint() *client.Point {
|
||||||
|
event := "push"
|
||||||
|
t := map[string]string{
|
||||||
|
"event": event,
|
||||||
|
"repository": s.Repository.Repository,
|
||||||
|
"private": fmt.Sprintf("%v", s.Repository.Private),
|
||||||
|
"user": s.Sender.User,
|
||||||
|
"admin": fmt.Sprintf("%v", s.Sender.Admin),
|
||||||
|
}
|
||||||
|
f := map[string]interface{}{
|
||||||
|
"stars": s.Repository.Stars,
|
||||||
|
"forks": s.Repository.Forks,
|
||||||
|
"issues": s.Repository.Issues,
|
||||||
|
"ref": s.Ref,
|
||||||
|
"before": s.Before,
|
||||||
|
"after": s.After,
|
||||||
|
}
|
||||||
|
p, err := client.NewPoint(meas, t, f, time.Now())
|
||||||
|
if err != nil {
|
||||||
|
log.Fatalf("Failed to create %v event", event)
|
||||||
|
}
|
||||||
|
return p
|
||||||
|
}
|
||||||
|
|
||||||
|
type ReleaseEvent struct {
|
||||||
|
Release Release `json:"release"`
|
||||||
|
Repository Repository `json:"repository"`
|
||||||
|
Sender Sender `json:"sender"`
|
||||||
|
}
|
||||||
|
|
||||||
|
func (s ReleaseEvent) NewPoint() *client.Point {
|
||||||
|
event := "release"
|
||||||
|
t := map[string]string{
|
||||||
|
"event": event,
|
||||||
|
"repository": s.Repository.Repository,
|
||||||
|
"private": fmt.Sprintf("%v", s.Repository.Private),
|
||||||
|
"user": s.Sender.User,
|
||||||
|
"admin": fmt.Sprintf("%v", s.Sender.Admin),
|
||||||
|
}
|
||||||
|
f := map[string]interface{}{
|
||||||
|
"stars": s.Repository.Stars,
|
||||||
|
"forks": s.Repository.Forks,
|
||||||
|
"issues": s.Repository.Issues,
|
||||||
|
"tagName": s.Release.TagName,
|
||||||
|
}
|
||||||
|
p, err := client.NewPoint(meas, t, f, time.Now())
|
||||||
|
if err != nil {
|
||||||
|
log.Fatalf("Failed to create %v event", event)
|
||||||
|
}
|
||||||
|
return p
|
||||||
|
}
|
||||||
|
|
||||||
|
type RepositoryEvent struct {
|
||||||
|
Repository Repository `json:"repository"`
|
||||||
|
Sender Sender `json:"sender"`
|
||||||
|
}
|
||||||
|
|
||||||
|
func (s RepositoryEvent) NewPoint() *client.Point {
|
||||||
|
event := "repository"
|
||||||
|
t := map[string]string{
|
||||||
|
"event": event,
|
||||||
|
"repository": s.Repository.Repository,
|
||||||
|
"private": fmt.Sprintf("%v", s.Repository.Private),
|
||||||
|
"user": s.Sender.User,
|
||||||
|
"admin": fmt.Sprintf("%v", s.Sender.Admin),
|
||||||
|
}
|
||||||
|
f := map[string]interface{}{
|
||||||
|
"stars": s.Repository.Stars,
|
||||||
|
"forks": s.Repository.Forks,
|
||||||
|
"issues": s.Repository.Issues,
|
||||||
|
}
|
||||||
|
p, err := client.NewPoint(meas, t, f, time.Now())
|
||||||
|
if err != nil {
|
||||||
|
log.Fatalf("Failed to create %v event", event)
|
||||||
|
}
|
||||||
|
return p
|
||||||
|
}
|
||||||
|
|
||||||
|
type StatusEvent struct {
|
||||||
|
Commit string `json:"sha"`
|
||||||
|
State string `json:"state"`
|
||||||
|
Repository Repository `json:"repository"`
|
||||||
|
Sender Sender `json:"sender"`
|
||||||
|
}
|
||||||
|
|
||||||
|
func (s StatusEvent) NewPoint() *client.Point {
|
||||||
|
event := "status"
|
||||||
|
t := map[string]string{
|
||||||
|
"event": event,
|
||||||
|
"repository": s.Repository.Repository,
|
||||||
|
"private": fmt.Sprintf("%v", s.Repository.Private),
|
||||||
|
"user": s.Sender.User,
|
||||||
|
"admin": fmt.Sprintf("%v", s.Sender.Admin),
|
||||||
|
}
|
||||||
|
f := map[string]interface{}{
|
||||||
|
"stars": s.Repository.Stars,
|
||||||
|
"forks": s.Repository.Forks,
|
||||||
|
"issues": s.Repository.Issues,
|
||||||
|
"commit": s.Commit,
|
||||||
|
"state": s.State,
|
||||||
|
}
|
||||||
|
p, err := client.NewPoint(meas, t, f, time.Now())
|
||||||
|
if err != nil {
|
||||||
|
log.Fatalf("Failed to create %v event", event)
|
||||||
|
}
|
||||||
|
return p
|
||||||
|
}
|
||||||
|
|
||||||
|
type TeamAddEvent struct {
|
||||||
|
Team Team `json:"team"`
|
||||||
|
Repository Repository `json:"repository"`
|
||||||
|
Sender Sender `json:"sender"`
|
||||||
|
}
|
||||||
|
|
||||||
|
func (s TeamAddEvent) NewPoint() *client.Point {
|
||||||
|
event := "team_add"
|
||||||
|
t := map[string]string{
|
||||||
|
"event": event,
|
||||||
|
"repository": s.Repository.Repository,
|
||||||
|
"private": fmt.Sprintf("%v", s.Repository.Private),
|
||||||
|
"user": s.Sender.User,
|
||||||
|
"admin": fmt.Sprintf("%v", s.Sender.Admin),
|
||||||
|
}
|
||||||
|
f := map[string]interface{}{
|
||||||
|
"stars": s.Repository.Stars,
|
||||||
|
"forks": s.Repository.Forks,
|
||||||
|
"issues": s.Repository.Issues,
|
||||||
|
"teamName": s.Team.Name,
|
||||||
|
}
|
||||||
|
p, err := client.NewPoint(meas, t, f, time.Now())
|
||||||
|
if err != nil {
|
||||||
|
log.Fatalf("Failed to create %v event", event)
|
||||||
|
}
|
||||||
|
return p
|
||||||
|
}
|
||||||
|
|
||||||
|
type WatchEvent struct {
|
||||||
|
Repository Repository `json:"repository"`
|
||||||
|
Sender Sender `json:"sender"`
|
||||||
|
}
|
||||||
|
|
||||||
|
func (s WatchEvent) NewPoint() *client.Point {
	event := "watch"
	t := map[string]string{
		"event":      event,
		"repository": s.Repository.Repository,
		"private":    fmt.Sprintf("%v", s.Repository.Private),
		"user":       s.Sender.User,
		"admin":      fmt.Sprintf("%v", s.Sender.Admin),
	}
	f := map[string]interface{}{
		"stars":  s.Repository.Stars,
		"forks":  s.Repository.Forks,
		"issues": s.Repository.Issues,
	}
	p, err := client.NewPoint(meas, t, f, time.Now())
	if err != nil {
		log.Fatalf("Failed to create %v event", event)
	}
	return p
}
@@ -0,0 +1,237 @@
package github_webhooks

import (
	"net/http"
	"net/http/httptest"
	"strings"
	"testing"
)

// postWebhookEvent POSTs the given payload with the given X-Github-Event
// header and fails the test if the handler does not return 200 OK.
func postWebhookEvent(t *testing.T, event string, jsonString string) {
	gh := NewGithubWebhooks()
	req, _ := http.NewRequest("POST", "/", strings.NewReader(jsonString))
	req.Header.Add("X-Github-Event", event)
	w := httptest.NewRecorder()
	gh.eventHandler(w, req)
	if w.Code != http.StatusOK {
		t.Errorf("POST %s returned HTTP status code %v.\nExpected %v", event, w.Code, http.StatusOK)
	}
}

func TestCommitCommentEvent(t *testing.T) {
	postWebhookEvent(t, "commit_comment", CommitCommentEventJSON())
}

func TestDeleteEvent(t *testing.T) {
	postWebhookEvent(t, "delete", DeleteEventJSON())
}

func TestDeploymentEvent(t *testing.T) {
	postWebhookEvent(t, "deployment", DeploymentEventJSON())
}

func TestDeploymentStatusEvent(t *testing.T) {
	postWebhookEvent(t, "deployment_status", DeploymentStatusEventJSON())
}

func TestForkEvent(t *testing.T) {
	postWebhookEvent(t, "fork", ForkEventJSON())
}

func TestGollumEvent(t *testing.T) {
	postWebhookEvent(t, "gollum", GollumEventJSON())
}

func TestIssueCommentEvent(t *testing.T) {
	postWebhookEvent(t, "issue_comment", IssueCommentEventJSON())
}

func TestIssuesEvent(t *testing.T) {
	postWebhookEvent(t, "issues", IssuesEventJSON())
}

func TestMemberEvent(t *testing.T) {
	postWebhookEvent(t, "member", MemberEventJSON())
}

func TestMembershipEvent(t *testing.T) {
	postWebhookEvent(t, "membership", MembershipEventJSON())
}

func TestPageBuildEvent(t *testing.T) {
	postWebhookEvent(t, "page_build", PageBuildEventJSON())
}

func TestPublicEvent(t *testing.T) {
	postWebhookEvent(t, "public", PublicEventJSON())
}

func TestPullRequestReviewCommentEvent(t *testing.T) {
	postWebhookEvent(t, "pull_request_review_comment", PullRequestReviewCommentEventJSON())
}

func TestPushEvent(t *testing.T) {
	postWebhookEvent(t, "push", PushEventJSON())
}

func TestReleaseEvent(t *testing.T) {
	postWebhookEvent(t, "release", ReleaseEventJSON())
}

func TestRepositoryEvent(t *testing.T) {
	postWebhookEvent(t, "repository", RepositoryEventJSON())
}

func TestStatusEvent(t *testing.T) {
	postWebhookEvent(t, "status", StatusEventJSON())
}

func TestTeamAddEvent(t *testing.T) {
	postWebhookEvent(t, "team_add", TeamAddEventJSON())
}

func TestWatchEvent(t *testing.T) {
	postWebhookEvent(t, "watch", WatchEventJSON())
}
@@ -45,6 +45,16 @@ You can also specify additional request parameters for the service:
```

You can also specify additional request header parameters for the service:

```
[[httpjson.services]]
  ...

  [httpjson.services.headers]
    X-Auth-Token = "my-xauth-token"
    apiVersion = "v1"
```

# Example:
@@ -21,6 +21,7 @@ type HttpJson struct {
	Method     string
	TagKeys    []string
	Parameters map[string]string
	Headers    map[string]string
	client     HTTPClient
}

@@ -45,6 +46,9 @@ func (c RealHTTPClient) MakeRequest(req *http.Request) (*http.Response, error) {
}

var sampleConfig = `
  # NOTE This plugin only reads numerical measurements, strings and booleans
  # will be ignored.

  # a name for the service being polled
  name = "webserver_stats"

@@ -67,6 +71,12 @@ var sampleConfig = `
  [inputs.httpjson.parameters]
    event_type = "cpu_spike"
    threshold = "0.75"

  # HTTP Header parameters (all values must be strings)
  # [inputs.httpjson.headers]
  #   X-Auth-Token = "my-xauth-token"
  #   apiVersion = "v1"
`

func (h *HttpJson) SampleConfig() string {

@@ -188,6 +198,11 @@ func (h *HttpJson) sendRequest(serverURL string) (string, float64, error) {
		return "", -1, err
	}

	// Add header parameters
	for k, v := range h.Headers {
		req.Header.Add(k, v)
	}

	start := time.Now()
	resp, err := h.client.MakeRequest(req)
	if err != nil {
@@ -97,6 +97,10 @@ func genMockHttpJson(response string, statusCode int) []*HttpJson {
			"httpParam1": "12",
			"httpParam2": "the second parameter",
		},
		Headers: map[string]string{
			"X-Auth-Token": "the-first-parameter",
			"apiVersion":   "v1",
		},
	},
	&HttpJson{
		client: mockHTTPClient{responseBody: response, statusCode: statusCode},

@@ -110,6 +114,10 @@ func genMockHttpJson(response string, statusCode int) []*HttpJson {
		"httpParam1": "12",
		"httpParam2": "the second parameter",
	},
	Headers: map[string]string{
		"X-Auth-Token": "the-first-parameter",
		"apiVersion":   "v1",
	},
	TagKeys: []string{
		"role",
		"build",
@@ -130,7 +130,7 @@ func (i *InfluxDB) gatherURL(
	p.Tags["url"] = url

	acc.AddFields(
-		p.Name,
+		"influxdb_"+p.Name,
		p.Values,
		p.Tags,
	)
@@ -84,7 +84,7 @@ func TestBasic(t *testing.T) {
		"id":  "ex1",
		"url": fakeServer.URL + "/endpoint",
	}
-	acc.AssertContainsTaggedFields(t, "foo", fields, tags)
+	acc.AssertContainsTaggedFields(t, "influxdb_foo", fields, tags)

	fields = map[string]interface{}{
		"x": "x",

@@ -93,5 +93,5 @@ func TestBasic(t *testing.T) {
		"id":  "ex2",
		"url": fakeServer.URL + "/endpoint",
	}
-	acc.AssertContainsTaggedFields(t, "bar", fields, tags)
+	acc.AssertContainsTaggedFields(t, "influxdb_bar", fields, tags)
}
@@ -129,9 +129,13 @@ func pidsFromFile(file string) ([]int32, error) {
func pidsFromExe(exe string) ([]int32, error) {
	var out []int32
	var outerr error
-	pgrep, err := exec.Command("pgrep", exe).Output()
+	bin, err := exec.LookPath("pgrep")
	if err != nil {
-		return out, fmt.Errorf("Failed to execute pgrep. Error: '%s'", err)
+		return out, fmt.Errorf("Couldn't find pgrep binary: %s", err)
+	}
+	pgrep, err := exec.Command(bin, exe).Output()
+	if err != nil {
+		return out, fmt.Errorf("Failed to execute %s. Error: '%s'", bin, err)
	} else {
		pids := strings.Fields(string(pgrep))
		for _, pid := range pids {

@@ -149,9 +153,13 @@ func pidsFromExe(exe string) ([]int32, error) {
func pidsFromPattern(pattern string) ([]int32, error) {
	var out []int32
	var outerr error
-	pgrep, err := exec.Command("pgrep", "-f", pattern).Output()
+	bin, err := exec.LookPath("pgrep")
	if err != nil {
-		return out, fmt.Errorf("Failed to execute pgrep. Error: '%s'", err)
+		return out, fmt.Errorf("Couldn't find pgrep binary: %s", err)
+	}
+	pgrep, err := exec.Command(bin, "-f", pattern).Output()
+	if err != nil {
+		return out, fmt.Errorf("Failed to execute %s. Error: '%s'", bin, err)
	} else {
		pids := strings.Fields(string(pgrep))
		for _, pid := range pids {
@@ -60,6 +60,11 @@ type QueueTotals struct {
	Messages                   int64
	MessagesReady              int64 `json:"messages_ready"`
	MessagesUnacknowledged     int64 `json:"messages_unacknowledged"`
	MessageBytes               int64 `json:"message_bytes"`
	MessageBytesReady          int64 `json:"message_bytes_ready"`
	MessageBytesUnacknowledged int64 `json:"message_bytes_unacknowledged"`
	MessageRam                 int64 `json:"message_bytes_ram"`
	MessagePersistent          int64 `json:"message_bytes_persistent"`
}

type Queue struct {

@@ -270,6 +275,11 @@ func gatherQueues(r *RabbitMQ, acc inputs.Accumulator, errChan chan error) {
	"consumer_utilisation": queue.ConsumerUtilisation,
	"memory":               queue.Memory,
	// messages information
	"message_bytes":         queue.MessageBytes,
	"message_bytes_ready":   queue.MessageBytesReady,
	"message_bytes_unacked": queue.MessageBytesUnacknowledged,
	"message_bytes_ram":     queue.MessageRam,
	"message_bytes_persist": queue.MessagePersistent,
	"messages":              queue.Messages,
	"messages_ready":        queue.MessagesReady,
	"messages_unack":        queue.MessagesUnacknowledged,
@ -0,0 +1,473 @@
|
||||||
|
package snmp
|
||||||
|
|
||||||
|
import (
|
||||||
|
"io/ioutil"
|
||||||
|
"log"
|
||||||
|
"net"
|
||||||
|
"regexp"
|
||||||
|
"strconv"
|
||||||
|
"strings"
|
||||||
|
"time"
|
||||||
|
|
||||||
|
"github.com/influxdata/telegraf/plugins/inputs"
|
||||||
|
|
||||||
|
"github.com/soniah/gosnmp"
|
||||||
|
)
|
||||||
|
|
||||||
|
// Snmp is a snmp plugin
|
||||||
|
type Snmp struct {
|
||||||
|
Host []Host
|
||||||
|
Get []Data
|
||||||
|
Bulk []Data
|
||||||
|
SnmptranslateFile string
|
||||||
|
}
|
||||||
|
|
||||||
|
type Host struct {
|
||||||
|
Address string
|
||||||
|
Community string
|
||||||
|
// SNMP version. Default 2
|
||||||
|
Version int
|
||||||
|
// SNMP timeout, in seconds. 0 means no timeout
|
||||||
|
Timeout float64
|
||||||
|
// SNMP retries
|
||||||
|
Retries int
|
||||||
|
// Data to collect (list of Data names)
|
||||||
|
Collect []string
|
||||||
|
// easy get oids
|
||||||
|
GetOids []string
|
||||||
|
// Oids
|
||||||
|
getOids []Data
|
||||||
|
bulkOids []Data
|
||||||
|
}
|
||||||
|
|
||||||
|
type Data struct {
|
||||||
|
Name string
|
||||||
|
// OID (could be numbers or name)
|
||||||
|
Oid string
|
||||||
|
// Unit
|
||||||
|
Unit string
|
||||||
|
// SNMP getbulk max repetition
|
||||||
|
MaxRepetition uint8 `toml:"max_repetition"`
|
||||||
|
// SNMP Instance (default 0)
|
||||||
|
// (only used with GET request and if
|
||||||
|
// OID is a name from snmptranslate file)
|
||||||
|
Instance string
|
||||||
|
// OID (only number) (used for computation)
|
||||||
|
rawOid string
|
||||||
|
}
|
||||||
|
|
||||||
|
type Node struct {
|
||||||
|
id string
|
||||||
|
name string
|
||||||
|
subnodes map[string]Node
|
||||||
|
}
|
||||||
|
|
||||||
|
var initNode = Node{
|
||||||
|
id: "1",
|
||||||
|
name: "",
|
||||||
|
subnodes: make(map[string]Node),
|
||||||
|
}
|
||||||
|
|
||||||
|
var NameToOid = make(map[string]string)
|
||||||
|
|
||||||
|
var sampleConfig = `
|
||||||
|
# Use 'oids.txt' file to translate oids to names
|
||||||
|
# To generate 'oids.txt' you need to run:
|
||||||
|
# snmptranslate -m all -Tz -On | sed -e 's/"//g' > /tmp/oids.txt
|
||||||
|
# Or if you have an other MIB folder with custom MIBs
|
||||||
|
# snmptranslate -M /mycustommibfolder -Tz -On -m all | sed -e 's/"//g' > oids.txt
|
||||||
|
snmptranslate_file = "/tmp/oids.txt"
|
||||||
|
[[inputs.snmp.host]]
|
||||||
|
address = "192.168.2.2:161"
|
||||||
|
# SNMP community
|
||||||
|
community = "public" # default public
|
||||||
|
# SNMP version (1, 2 or 3)
|
||||||
|
# Version 3 not supported yet
|
||||||
|
version = 2 # default 2
|
||||||
|
# SNMP response timeout
|
||||||
|
timeout = 2.0 # default 2.0
|
||||||
|
# SNMP request retries
|
||||||
|
retries = 2 # default 2
|
||||||
|
# Which get/bulk do you want to collect for this host
|
||||||
|
collect = ["mybulk", "sysservices", "sysdescr"]
|
||||||
|
# Simple list of OIDs to get, in addition to "collect"
|
||||||
|
get_oids = []
|
||||||
|
|
||||||
|
[[inputs.snmp.host]]
|
||||||
|
address = "192.168.2.3:161"
|
||||||
|
community = "public"
|
||||||
|
version = 2
|
||||||
|
timeout = 2.0
|
||||||
|
retries = 2
|
||||||
|
collect = ["mybulk"]
|
||||||
|
get_oids = [
|
||||||
|
"ifNumber",
|
||||||
|
".1.3.6.1.2.1.1.3.0",
|
||||||
|
]
|
||||||
|
|
||||||
|
[[inputs.snmp.get]]
|
||||||
|
name = "ifnumber"
|
||||||
|
oid = "ifNumber"
|
||||||
|
|
||||||
|
[[inputs.snmp.get]]
|
||||||
|
name = "interface_speed"
|
||||||
|
oid = "ifSpeed"
|
||||||
|
instance = 0
|
||||||
|
|
||||||
|
[[inputs.snmp.get]]
|
||||||
|
name = "sysuptime"
|
||||||
|
oid = ".1.3.6.1.2.1.1.3.0"
|
||||||
|
unit = "second"
|
||||||
|
|
||||||
|
[[inputs.snmp.bulk]]
|
||||||
|
name = "mybulk"
|
||||||
|
max_repetition = 127
|
||||||
|
oid = ".1.3.6.1.2.1.1"
|
||||||
|
|
||||||
|
[[inputs.snmp.bulk]]
|
||||||
|
name = "ifoutoctets"
|
||||||
|
max_repetition = 127
|
||||||
|
oid = "ifOutOctets"
|
||||||
|
`
|
||||||
|
|
||||||
|
// SampleConfig returns sample configuration message
|
||||||
|
func (s *Snmp) SampleConfig() string {
|
||||||
|
return sampleConfig
|
||||||
|
}
|
||||||
|
|
||||||
|
// Description returns description of Zookeeper plugin
|
||||||
|
func (s *Snmp) Description() string {
|
||||||
|
return `Reads oids value from one or many snmp agents`
|
||||||
|
}
|
||||||
|
|
||||||
func fillnode(parentNode Node, oid_name string, ids []string) {
	// ids = ["1", "3", "6", ...]
	id, ids := ids[0], ids[1:]
	node, ok := parentNode.subnodes[id]
	if !ok {
		node = Node{
			id:       id,
			name:     "",
			subnodes: make(map[string]Node),
		}
		if len(ids) == 0 {
			node.name = oid_name
		}
		parentNode.subnodes[id] = node
	}
	if len(ids) > 0 {
		fillnode(node, oid_name, ids)
	}
}

func findnodename(node Node, ids []string) (string, string) {
	// ids = ["1", "3", "6", ...]
	if len(ids) == 1 {
		return node.name, ids[0]
	}
	id, ids := ids[0], ids[1:]
	// Get node
	subnode, ok := node.subnodes[id]
	if ok {
		return findnodename(subnode, ids)
	}
	// We got a node
	// Get node name
	if node.name != "" && len(ids) == 0 && id == "0" {
		// node with instance 0
		return node.name, "0"
	} else if node.name != "" && len(ids) == 0 && id != "0" {
		// node with an instance
		return node.name, id
	} else if node.name != "" && len(ids) > 0 {
		// node with subinstances
		return node.name, strings.Join(ids, ".")
	}
	// return an empty node name
	return node.name, ""
}

func (s *Snmp) Gather(acc inputs.Accumulator) error {
	// Create oid tree
	if s.SnmptranslateFile != "" && len(initNode.subnodes) == 0 {
		data, err := ioutil.ReadFile(s.SnmptranslateFile)
		if err != nil {
			log.Printf("Reading SNMPtranslate file error: %s", err)
			return err
		}
		oidsRegEx := regexp.MustCompile(`([^\t]*)\t*([^\t]*)`)
		for _, line := range strings.Split(string(data), "\n") {
			oids := oidsRegEx.FindStringSubmatch(line)
			if oids[2] != "" {
				oid_name := oids[1]
				oid := oids[2]
				fillnode(initNode, oid_name, strings.Split(oid, "."))
				NameToOid[oid_name] = oid
			}
		}
	}
	// Fetching data
	for _, host := range s.Host {
		// Set default args
		if len(host.Address) == 0 {
			host.Address = "127.0.0.1:161"
		}
		if host.Community == "" {
			host.Community = "public"
		}
		if host.Timeout <= 0 {
			host.Timeout = 2.0
		}
		if host.Retries <= 0 {
			host.Retries = 2
		}
		// Prepare host
		// Get Easy GET oids
		for _, oidstring := range host.GetOids {
			oid := Data{}
			if val, ok := NameToOid[oidstring]; ok {
				// TODO should we add the 0 instance ?
				oid.Name = oidstring
				oid.Oid = val
				oid.rawOid = "." + val + ".0"
			} else {
				oid.Name = oidstring
				oid.Oid = oidstring
				if oidstring[:1] != "." {
					oid.rawOid = "." + oidstring
				} else {
					oid.rawOid = oidstring
				}
			}
			host.getOids = append(host.getOids, oid)
		}

		for _, oid_name := range host.Collect {
			// Get GET oids
			for _, oid := range s.Get {
				if oid.Name == oid_name {
					if val, ok := NameToOid[oid.Oid]; ok {
						// TODO should we add the 0 instance ?
						if oid.Instance != "" {
							oid.rawOid = "." + val + "." + oid.Instance
						} else {
							oid.rawOid = "." + val + ".0"
						}
					} else {
						oid.rawOid = oid.Oid
					}
					host.getOids = append(host.getOids, oid)
				}
			}
			// Get GETBULK oids
			for _, oid := range s.Bulk {
				if oid.Name == oid_name {
					if val, ok := NameToOid[oid.Oid]; ok {
						oid.rawOid = "." + val
					} else {
						oid.rawOid = oid.Oid
					}
					host.bulkOids = append(host.bulkOids, oid)
				}
			}
		}
		// Launch Get requests
		if err := host.SNMPGet(acc); err != nil {
			return err
		}
		if err := host.SNMPBulk(acc); err != nil {
			return err
		}
	}
	return nil
}

func (h *Host) SNMPGet(acc inputs.Accumulator) error {
	// Get snmp client
	snmpClient, err := h.GetSNMPClient()
	if err != nil {
		return err
	}
	// Disconnect when done
	defer snmpClient.Conn.Close()
	// Prepare OIDs
	oidsList := make(map[string]Data)
	for _, oid := range h.getOids {
		oidsList[oid.rawOid] = oid
	}
	oidsNameList := make([]string, 0, len(oidsList))
	for _, oid := range oidsList {
		oidsNameList = append(oidsNameList, oid.rawOid)
	}

	// gosnmp.MAX_OIDS == 60
	// TODO use gosnmp.MAX_OIDS instead of hard coded value
	max_oids := 60
	// limit to 60 (MAX_OIDS) oids per request
	for i := 0; i < len(oidsList); i = i + max_oids {
		// Launch request
		max_index := i + max_oids
		if i+max_oids > len(oidsList) {
			max_index = len(oidsList)
		}
		result, err3 := snmpClient.Get(oidsNameList[i:max_index]) // Get() accepts up to g.MAX_OIDS
		if err3 != nil {
			return err3
		}
		// Handle response
		_, err = h.HandleResponse(oidsList, result, acc)
		if err != nil {
			return err
		}
	}
	return nil
}

func (h *Host) SNMPBulk(acc inputs.Accumulator) error {
	// Get snmp client
	snmpClient, err := h.GetSNMPClient()
	if err != nil {
		return err
	}
	// Disconnect when done
	defer snmpClient.Conn.Close()
	// Prepare OIDs
	oidsList := make(map[string]Data)
	for _, oid := range h.bulkOids {
		oidsList[oid.rawOid] = oid
	}
	oidsNameList := make([]string, 0, len(oidsList))
	for _, oid := range oidsList {
		oidsNameList = append(oidsNameList, oid.rawOid)
	}
	// TODO Try to make requests with more than one OID
	// to reduce the number of requests
	for _, oid := range oidsNameList {
		oid_asked := oid
		need_more_requests := true
		// Set max repetition
		maxRepetition := oidsList[oid].MaxRepetition
		if maxRepetition <= 0 {
			maxRepetition = 32
		}
		// Launch requests
		for need_more_requests {
			// Launch request
			result, err3 := snmpClient.GetBulk([]string{oid}, 0, maxRepetition)
			if err3 != nil {
				return err3
			}
			// Handle response
			last_oid, err := h.HandleResponse(oidsList, result, acc)
			if err != nil {
				return err
			}
			// Determine if we need more requests
			if strings.HasPrefix(last_oid, oid_asked) {
				need_more_requests = true
				oid = last_oid
			} else {
				need_more_requests = false
			}
		}
	}
	return nil
}

func (h *Host) GetSNMPClient() (*gosnmp.GoSNMP, error) {
	// Prepare Version
	var version gosnmp.SnmpVersion
	if h.Version == 1 {
		version = gosnmp.Version1
	} else if h.Version == 3 {
		version = gosnmp.Version3
	} else {
		version = gosnmp.Version2c
	}
	// Prepare host and port
	host, port_str, err := net.SplitHostPort(h.Address)
	if err != nil {
		// No port in the address: use it as-is with the default SNMP port
		host = h.Address
		port_str = "161"
	}
	// convert port_str to port in uint16
	port_64, err := strconv.ParseUint(port_str, 10, 16)
	if err != nil {
		return nil, err
	}
	port := uint16(port_64)
	// Get SNMP client
	snmpClient := &gosnmp.GoSNMP{
		Target:    host,
		Port:      port,
		Community: h.Community,
		Version:   version,
		Timeout:   time.Duration(h.Timeout) * time.Second,
		Retries:   h.Retries,
	}
	// Connection
	err2 := snmpClient.Connect()
	if err2 != nil {
		return nil, err2
	}
	// Return snmpClient
	return snmpClient, nil
}

func (h *Host) HandleResponse(oids map[string]Data, result *gosnmp.SnmpPacket, acc inputs.Accumulator) (string, error) {
	var lastOid string
	for _, variable := range result.Variables {
		lastOid = variable.Name
		// Remove unwanted oid
		for oid_key, oid := range oids {
			if strings.HasPrefix(variable.Name, oid_key) {
				switch variable.Type {
				// handle Metrics
				case gosnmp.Boolean, gosnmp.Integer, gosnmp.Counter32, gosnmp.Gauge32,
					gosnmp.TimeTicks, gosnmp.Counter64, gosnmp.Uinteger32:
					// Prepare tags
					tags := make(map[string]string)
					if oid.Unit != "" {
						tags["unit"] = oid.Unit
					}
					// Get name and instance
					var oid_name string
					var instance string
					// Get oid name and instance from translate file
					oid_name, instance = findnodename(initNode,
						strings.Split(variable.Name[1:], "."))

					if instance != "" {
						tags["instance"] = instance
					}

					// Set name
					var field_name string
					if oid_name != "" {
						// Set fieldname as oid name from translate file
						field_name = oid_name
					} else {
						// Set fieldname as oid name from inputs.snmp.get section
						// Because the result oid is equal to inputs.snmp.get section
						field_name = oid.Name
					}
					tags["host"], _, _ = net.SplitHostPort(h.Address)
					fields := make(map[string]interface{})
					fields[field_name] = variable.Value

					acc.AddFields(field_name, fields, tags)
				case gosnmp.NoSuchObject, gosnmp.NoSuchInstance:
					// Oid not found
					log.Printf("[snmp input] Oid not found: %s", oid_key)
				default:
					// ignore other variable types
				}
				break
			}
		}
	}
	return lastOid, nil
}

func init() {
	inputs.Add("snmp", func() inputs.Input {
		return &Snmp{}
	})
}
@@ -0,0 +1,459 @@
package snmp

import (
	"testing"

	"github.com/influxdata/telegraf/testutil"

	// "github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func TestSNMPErrorGet1(t *testing.T) {
	get1 := Data{
		Name: "oid1",
		Unit: "octets",
		Oid:  ".1.3.6.1.2.1.2.2.1.16.1",
	}
	h := Host{
		Collect: []string{"oid1"},
	}
	s := Snmp{
		SnmptranslateFile: "bad_oid.txt",
		Host:              []Host{h},
		Get:               []Data{get1},
	}

	var acc testutil.Accumulator
	err := s.Gather(&acc)
	require.Error(t, err)
}

func TestSNMPErrorGet2(t *testing.T) {
	get1 := Data{
		Name: "oid1",
		Unit: "octets",
		Oid:  ".1.3.6.1.2.1.2.2.1.16.1",
	}
	h := Host{
		Collect: []string{"oid1"},
	}
	s := Snmp{
		Host: []Host{h},
		Get:  []Data{get1},
	}

	var acc testutil.Accumulator
	err := s.Gather(&acc)
	require.Error(t, err)
}

func TestSNMPErrorBulk(t *testing.T) {
	bulk1 := Data{
		Name: "oid1",
		Unit: "octets",
		Oid:  ".1.3.6.1.2.1.2.2.1.16",
	}
	h := Host{
		Address: "127.0.0.1",
		Collect: []string{"oid1"},
	}
	s := Snmp{
		Host: []Host{h},
		Bulk: []Data{bulk1},
	}

	var acc testutil.Accumulator
	err := s.Gather(&acc)
	require.Error(t, err)
}

func TestSNMPGet1(t *testing.T) {
	get1 := Data{
		Name: "oid1",
		Unit: "octets",
		Oid:  ".1.3.6.1.2.1.2.2.1.16.1",
	}
	h := Host{
		Address:   "127.0.0.1:31161",
		Community: "telegraf",
		Version:   2,
		Timeout:   2.0,
		Retries:   2,
		Collect:   []string{"oid1"},
	}
	s := Snmp{
		Host: []Host{h},
		Get:  []Data{get1},
	}

	var acc testutil.Accumulator
	err := s.Gather(&acc)
	require.NoError(t, err)

	acc.AssertContainsTaggedFields(t,
		"oid1",
		map[string]interface{}{
			"oid1": uint(543846),
		},
		map[string]string{
			"unit": "octets",
			"host": "127.0.0.1",
		},
	)
}

func TestSNMPGet2(t *testing.T) {
	get1 := Data{
		Name: "oid1",
		Oid:  "ifNumber",
	}
	h := Host{
		Address:   "127.0.0.1:31161",
		Community: "telegraf",
		Version:   2,
		Timeout:   2.0,
		Retries:   2,
		Collect:   []string{"oid1"},
	}
	s := Snmp{
		SnmptranslateFile: "./testdata/oids.txt",
		Host:              []Host{h},
		Get:               []Data{get1},
	}

	var acc testutil.Accumulator
	err := s.Gather(&acc)
	require.NoError(t, err)

	acc.AssertContainsTaggedFields(t,
		"ifNumber",
		map[string]interface{}{
			"ifNumber": int(4),
		},
		map[string]string{
			"instance": "0",
			"host":     "127.0.0.1",
		},
	)
}

func TestSNMPGet3(t *testing.T) {
	get1 := Data{
		Name:     "oid1",
		Unit:     "octets",
		Oid:      "ifSpeed",
		Instance: "1",
	}
	h := Host{
		Address:   "127.0.0.1:31161",
		Community: "telegraf",
		Version:   2,
		Timeout:   2.0,
		Retries:   2,
		Collect:   []string{"oid1"},
	}
	s := Snmp{
		SnmptranslateFile: "./testdata/oids.txt",
		Host:              []Host{h},
		Get:               []Data{get1},
	}

	var acc testutil.Accumulator
	err := s.Gather(&acc)
	require.NoError(t, err)

	acc.AssertContainsTaggedFields(t,
		"ifSpeed",
		map[string]interface{}{
			"ifSpeed": uint(10000000),
		},
		map[string]string{
			"unit":     "octets",
			"instance": "1",
			"host":     "127.0.0.1",
		},
	)
}

func TestSNMPEasyGet4(t *testing.T) {
	get1 := Data{
		Name:     "oid1",
		Unit:     "octets",
		Oid:      "ifSpeed",
		Instance: "1",
	}
	h := Host{
		Address:   "127.0.0.1:31161",
		Community: "telegraf",
		Version:   2,
		Timeout:   2.0,
		Retries:   2,
		Collect:   []string{"oid1"},
		GetOids:   []string{"ifNumber"},
	}
	s := Snmp{
		SnmptranslateFile: "./testdata/oids.txt",
		Host:              []Host{h},
		Get:               []Data{get1},
	}

	var acc testutil.Accumulator
	err := s.Gather(&acc)
	require.NoError(t, err)

	acc.AssertContainsTaggedFields(t,
		"ifSpeed",
		map[string]interface{}{
			"ifSpeed": uint(10000000),
		},
		map[string]string{
			"unit":     "octets",
			"instance": "1",
			"host":     "127.0.0.1",
		},
	)

	acc.AssertContainsTaggedFields(t,
		"ifNumber",
		map[string]interface{}{
			"ifNumber": int(4),
		},
		map[string]string{
			"instance": "0",
			"host":     "127.0.0.1",
		},
	)
}

func TestSNMPEasyGet5(t *testing.T) {
	get1 := Data{
		Name:     "oid1",
		Unit:     "octets",
		Oid:      "ifSpeed",
		Instance: "1",
	}
	h := Host{
		Address:   "127.0.0.1:31161",
		Community: "telegraf",
		Version:   2,
		Timeout:   2.0,
		Retries:   2,
		Collect:   []string{"oid1"},
		GetOids:   []string{".1.3.6.1.2.1.2.1.0"},
	}
	s := Snmp{
		SnmptranslateFile: "./testdata/oids.txt",
		Host:              []Host{h},
		Get:               []Data{get1},
	}

	var acc testutil.Accumulator
	err := s.Gather(&acc)
	require.NoError(t, err)

	acc.AssertContainsTaggedFields(t,
		"ifSpeed",
		map[string]interface{}{
			"ifSpeed": uint(10000000),
		},
		map[string]string{
			"unit":     "octets",
			"instance": "1",
			"host":     "127.0.0.1",
		},
	)

	acc.AssertContainsTaggedFields(t,
		"ifNumber",
		map[string]interface{}{
			"ifNumber": int(4),
		},
		map[string]string{
			"instance": "0",
			"host":     "127.0.0.1",
		},
	)
}

func TestSNMPEasyGet6(t *testing.T) {
	h := Host{
		Address:   "127.0.0.1:31161",
		Community: "telegraf",
		Version:   2,
		Timeout:   2.0,
		Retries:   2,
		GetOids:   []string{"1.3.6.1.2.1.2.1.0"},
	}
	s := Snmp{
		SnmptranslateFile: "./testdata/oids.txt",
		Host:              []Host{h},
	}

	var acc testutil.Accumulator
	err := s.Gather(&acc)
	require.NoError(t, err)

	acc.AssertContainsTaggedFields(t,
		"ifNumber",
		map[string]interface{}{
			"ifNumber": int(4),
		},
		map[string]string{
			"instance": "0",
			"host":     "127.0.0.1",
		},
	)
}

func TestSNMPBulk1(t *testing.T) {
	bulk1 := Data{
		Name:          "oid1",
		Unit:          "octets",
		Oid:           ".1.3.6.1.2.1.2.2.1.16",
		MaxRepetition: 2,
	}
	h := Host{
		Address:   "127.0.0.1:31161",
		Community: "telegraf",
		Version:   2,
		Timeout:   2.0,
		Retries:   2,
		Collect:   []string{"oid1"},
	}
	s := Snmp{
		SnmptranslateFile: "./testdata/oids.txt",
		Host:              []Host{h},
		Bulk:              []Data{bulk1},
	}

	var acc testutil.Accumulator
	err := s.Gather(&acc)
	require.NoError(t, err)

	acc.AssertContainsTaggedFields(t,
		"ifOutOctets",
		map[string]interface{}{
			"ifOutOctets": uint(543846),
		},
		map[string]string{
			"unit":     "octets",
			"instance": "1",
			"host":     "127.0.0.1",
		},
	)

	acc.AssertContainsTaggedFields(t,
		"ifOutOctets",
		map[string]interface{}{
			"ifOutOctets": uint(26475179),
		},
		map[string]string{
			"unit":     "octets",
			"instance": "2",
			"host":     "127.0.0.1",
		},
	)

	acc.AssertContainsTaggedFields(t,
		"ifOutOctets",
		map[string]interface{}{
			"ifOutOctets": uint(108963968),
		},
		map[string]string{
			"unit":     "octets",
			"instance": "3",
			"host":     "127.0.0.1",
		},
	)

	acc.AssertContainsTaggedFields(t,
		"ifOutOctets",
		map[string]interface{}{
			"ifOutOctets": uint(12991453),
		},
		map[string]string{
			"unit":     "octets",
			"instance": "36",
			"host":     "127.0.0.1",
		},
	)
}

// TODO find out why, when this test is active,
// Circle CI stops with the following error:
// bash scripts/circle-test.sh died unexpectedly
// Maybe the test is too long?
func dTestSNMPBulk2(t *testing.T) {
	bulk1 := Data{
		Name:          "oid1",
		Unit:          "octets",
		Oid:           "ifOutOctets",
		MaxRepetition: 2,
	}
	h := Host{
		Address:   "127.0.0.1:31161",
		Community: "telegraf",
		Version:   2,
		Timeout:   2.0,
		Retries:   2,
		Collect:   []string{"oid1"},
	}
	s := Snmp{
		SnmptranslateFile: "./testdata/oids.txt",
		Host:              []Host{h},
		Bulk:              []Data{bulk1},
	}

	var acc testutil.Accumulator
	err := s.Gather(&acc)
	require.NoError(t, err)

	acc.AssertContainsTaggedFields(t,
		"ifOutOctets",
		map[string]interface{}{
			"ifOutOctets": uint(543846),
		},
		map[string]string{
			"unit":     "octets",
			"instance": "1",
			"host":     "127.0.0.1",
		},
	)

	acc.AssertContainsTaggedFields(t,
		"ifOutOctets",
		map[string]interface{}{
			"ifOutOctets": uint(26475179),
		},
		map[string]string{
			"unit":     "octets",
			"instance": "2",
			"host":     "127.0.0.1",
		},
	)

	acc.AssertContainsTaggedFields(t,
		"ifOutOctets",
		map[string]interface{}{
			"ifOutOctets": uint(108963968),
		},
		map[string]string{
			"unit":     "octets",
			"instance": "3",
			"host":     "127.0.0.1",
		},
	)

	acc.AssertContainsTaggedFields(t,
		"ifOutOctets",
		map[string]interface{}{
			"ifOutOctets": uint(12991453),
		},
		map[string]string{
			"unit":     "octets",
			"instance": "36",
			"host":     "127.0.0.1",
		},
	)
}
@@ -0,0 +1,32 @@
org	1.3
dod	1.3.6
internet	1.3.6.1
directory	1.3.6.1.1
mgmt	1.3.6.1.2
mib-2	1.3.6.1.2.1
interfaces	1.3.6.1.2.1.2
ifNumber	1.3.6.1.2.1.2.1
ifTable	1.3.6.1.2.1.2.2
ifEntry	1.3.6.1.2.1.2.2.1
ifIndex	1.3.6.1.2.1.2.2.1.1
ifDescr	1.3.6.1.2.1.2.2.1.2
ifType	1.3.6.1.2.1.2.2.1.3
ifMtu	1.3.6.1.2.1.2.2.1.4
ifSpeed	1.3.6.1.2.1.2.2.1.5
ifPhysAddress	1.3.6.1.2.1.2.2.1.6
ifAdminStatus	1.3.6.1.2.1.2.2.1.7
ifOperStatus	1.3.6.1.2.1.2.2.1.8
ifLastChange	1.3.6.1.2.1.2.2.1.9
ifInOctets	1.3.6.1.2.1.2.2.1.10
ifInUcastPkts	1.3.6.1.2.1.2.2.1.11
ifInNUcastPkts	1.3.6.1.2.1.2.2.1.12
ifInDiscards	1.3.6.1.2.1.2.2.1.13
ifInErrors	1.3.6.1.2.1.2.2.1.14
ifInUnknownProtos	1.3.6.1.2.1.2.2.1.15
ifOutOctets	1.3.6.1.2.1.2.2.1.16
ifOutUcastPkts	1.3.6.1.2.1.2.2.1.17
ifOutNUcastPkts	1.3.6.1.2.1.2.2.1.18
ifOutDiscards	1.3.6.1.2.1.2.2.1.19
ifOutErrors	1.3.6.1.2.1.2.2.1.20
ifOutQLen	1.3.6.1.2.1.2.2.1.21
ifSpecific	1.3.6.1.2.1.2.2.1.22
@@ -0,0 +1,50 @@
# SQL Server plugin

This sqlserver plugin provides metrics for your SQL Server instance.
It currently works with SQL Server versions 2008+.
Recorded metrics are lightweight and use Dynamic Management Views supplied by SQL Server:
```
Performance counters : 1000+ metrics from sys.dm_os_performance_counters
Performance metrics  : special performance and ratio metrics
Wait stats           : wait tasks categorized from sys.dm_os_wait_stats
Memory clerk         : memory breakdown from sys.dm_os_memory_clerks
Database size        : databases size trend from sys.dm_io_virtual_file_stats
Database IO          : databases I/O from sys.dm_io_virtual_file_stats
Database latency     : databases latency from sys.dm_io_virtual_file_stats
Database properties  : databases properties, state and recovery model, from sys.databases
OS Volume            : available, used and total space from sys.dm_os_volume_stats
CPU                  : cpu usage from sys.dm_os_ring_buffers
```

You must create a login on every instance you want to monitor, with the following script:
```SQL
USE master;
GO
CREATE LOGIN [telegraf] WITH PASSWORD = N'mystrongpassword';
GO
GRANT VIEW SERVER STATE TO [telegraf];
GO
GRANT VIEW ANY DEFINITION TO [telegraf];
GO
```

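Once the login exists, the instance can be declared in telegraf's configuration. A sketch of what that section might look like (the `servers` key and ADO-style connection string are assumed from the plugin's sample config; the host, port and password are placeholders):

```toml
[[inputs.sqlserver]]
  # One connection string per instance to monitor,
  # using the [telegraf] login created above.
  servers = [
    "Server=192.168.1.10;Port=1433;User Id=telegraf;Password=mystrongpassword;app name=telegraf;log=1;",
  ]
```

Run `telegraf -sample-config` against your build for the authoritative key names.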
Overview
![telegraf-sqlserver-0](https://cloud.githubusercontent.com/assets/16494280/12538189/ec1b70aa-c2d3-11e5-97ec-1a4f575e8a07.png)

General Activity
![telegraf-sqlserver-1](https://cloud.githubusercontent.com/assets/16494280/12591410/f098b602-c467-11e5-9acf-2edea077ed7e.png)

Memory
![telegraf-sqlserver-2](https://cloud.githubusercontent.com/assets/16494280/12591412/f2075688-c467-11e5-9d0f-d256e032cd0e.png)

I/O
![telegraf-sqlserver-3](https://cloud.githubusercontent.com/assets/16494280/12591417/f40ccb84-c467-11e5-89ff-498fb1bc3110.png)

Disks
![telegraf-sqlserver-4](https://cloud.githubusercontent.com/assets/16494280/12591420/f5de5f68-c467-11e5-90c8-9185444ac490.png)

CPU
![telegraf-sqlserver-5](https://cloud.githubusercontent.com/assets/16494280/12591446/11dfe7b8-c468-11e5-9681-6e33296e70e8.png)

Full view
![telegraf-sqlserver-full](https://cloud.githubusercontent.com/assets/16494280/12591426/fa2b17b4-c467-11e5-9c00-929f4c4aea57.png)
File diff suppressed because it is too large
File diff suppressed because it is too large
@@ -0,0 +1,45 @@
# Disk Input Plugin

The disk input plugin gathers metrics about disk usage.

Note that `used_percent` is calculated by doing `used / (used + free)`, _not_
`used / total`, which is how the unix `df` command does it. See
https://en.wikipedia.org/wiki/Df_(Unix) for more details.

### Configuration:

```
# Read metrics about disk usage by mount point
[[inputs.disk]]
  # By default, telegraf gathers stats for all mountpoints.
  # Setting mountpoints will restrict the stats to the specified mountpoints.
  # mount_points = ["/"]
```

### Measurements & Fields:

- disk
    - free (integer, bytes)
    - total (integer, bytes)
    - used (integer, bytes)
    - used_percent (float, percent)
    - inodes_free (integer, files)
    - inodes_total (integer, files)
    - inodes_used (integer, files)

### Tags:

- All measurements have the following tags:
    - fstype (filesystem type)
    - path (mount point path)

### Example Output:

```
% ./telegraf -config ~/ws/telegraf.conf -input-filter disk -test
* Plugin: disk, Collection 1
> disk,fstype=hfs,path=/ free=398407520256i,inodes_free=97267461i,inodes_total=121847806i,inodes_used=24580345i,total=499088621568i,used=100418957312i,used_percent=20.131039916242397 1453832006274071563
> disk,fstype=devfs,path=/dev free=0i,inodes_free=0i,inodes_total=628i,inodes_used=628i,total=185856i,used=185856i,used_percent=100 1453832006274137913
> disk,fstype=autofs,path=/net free=0i,inodes_free=0i,inodes_total=0i,inodes_used=0i,total=0i,used=0i,used_percent=0 1453832006274157077
> disk,fstype=autofs,path=/home free=0i,inodes_free=0i,inodes_total=0i,inodes_used=0i,total=0i,used=0i,used_percent=0 1453832006274169688
```
@@ -45,13 +45,20 @@ func (s *DiskStats) Gather(acc inputs.Accumulator) error {
 			"path":   du.Path,
 			"fstype": du.Fstype,
 		}
+		var used_percent float64
+		if du.Used+du.Free > 0 {
+			used_percent = float64(du.Used) /
+				(float64(du.Used) + float64(du.Free)) * 100
+		}
+
 		fields := map[string]interface{}{
 			"total":        du.Total,
 			"free":         du.Free,
-			"used":         du.Total - du.Free,
+			"used":         du.Used,
+			"used_percent": used_percent,
 			"inodes_total": du.InodesTotal,
 			"inodes_free":  du.InodesFree,
-			"inodes_used":  du.InodesTotal - du.InodesFree,
+			"inodes_used":  du.InodesUsed,
 		}
 		acc.AddFields("disk", fields, tags)
 	}

@@ -21,16 +21,20 @@ func TestDiskStats(t *testing.T) {
 			Fstype:      "ext4",
 			Total:       128,
 			Free:        23,
+			Used:        100,
 			InodesTotal: 1234,
 			InodesFree:  234,
+			InodesUsed:  1000,
 		},
 		{
 			Path:        "/home",
 			Fstype:      "ext4",
 			Total:       256,
 			Free:        46,
+			Used:        200,
 			InodesTotal: 2468,
 			InodesFree:  468,
+			InodesUsed:  2000,
 		},
 	}
 	duFiltered := []*disk.DiskUsageStat{
@@ -39,8 +43,10 @@ func TestDiskStats(t *testing.T) {
 			Fstype:      "ext4",
 			Total:       128,
 			Free:        23,
+			Used:        100,
 			InodesTotal: 1234,
 			InodesFree:  234,
+			InodesUsed:  1000,
 		},
 	}
 
@@ -52,7 +58,7 @@ func TestDiskStats(t *testing.T) {
 	require.NoError(t, err)
 
 	numDiskPoints := acc.NFields()
-	expectedAllDiskPoints := 12
+	expectedAllDiskPoints := 14
 	assert.Equal(t, expectedAllDiskPoints, numDiskPoints)
 
 	tags1 := map[string]string{
@@ -66,19 +72,21 @@ func TestDiskStats(t *testing.T) {
 
 	fields1 := map[string]interface{}{
 		"total":        uint64(128),
-		"used":         uint64(105),
+		"used":         uint64(100),
 		"free":         uint64(23),
 		"inodes_total": uint64(1234),
 		"inodes_free":  uint64(234),
 		"inodes_used":  uint64(1000),
+		"used_percent": float64(81.30081300813008),
 	}
 	fields2 := map[string]interface{}{
 		"total":        uint64(256),
-		"used":         uint64(210),
+		"used":         uint64(200),
 		"free":         uint64(46),
 		"inodes_total": uint64(2468),
 		"inodes_free":  uint64(468),
 		"inodes_used":  uint64(2000),
+		"used_percent": float64(81.30081300813008),
 	}
 	acc.AssertContainsTaggedFields(t, "disk", fields1, tags1)
 	acc.AssertContainsTaggedFields(t, "disk", fields2, tags2)
@@ -86,12 +94,12 @@ func TestDiskStats(t *testing.T) {
 	// We expect 6 more DiskPoints to show up with an explicit match on "/"
 	// and /home not matching the /dev in MountPoints
 	err = (&DiskStats{ps: &mps, MountPoints: []string{"/", "/dev"}}).Gather(&acc)
-	assert.Equal(t, expectedAllDiskPoints+6, acc.NFields())
+	assert.Equal(t, expectedAllDiskPoints+7, acc.NFields())
 
 	// We should see all the diskpoints as MountPoints includes both
 	// / and /home
 	err = (&DiskStats{ps: &mps, MountPoints: []string{"/", "/home"}}).Gather(&acc)
-	assert.Equal(t, 2*expectedAllDiskPoints+6, acc.NFields())
+	assert.Equal(t, 2*expectedAllDiskPoints+7, acc.NFields())
 }
 
 // func TestDiskIOStats(t *testing.T) {

@@ -1,7 +1,6 @@
 package kinesis
 
 import (
-	"errors"
 	"fmt"
 	"log"
 	"os"
@@ -101,7 +100,7 @@ func (k *KinesisOutput) Connect() error {
 }
 
 func (k *KinesisOutput) Close() error {
-	return errors.New("Error")
+	return nil
 }
 
 func FormatMetric(k *KinesisOutput, point *client.Point) (string, error) {

@@ -66,4 +66,11 @@ elif [[ -f /etc/debian_version ]]; then
 		install_init
 		install_update_rcd
 	fi
+elif [[ -f /etc/os-release ]]; then
+	source /etc/os-release
+	if [[ $ID = "amzn" ]]; then
+		# Amazon Linux logic
+		install_init
+		install_chkconfig
+	fi
 fi
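The branch added above relies on `/etc/os-release` exposing an `ID` variable when sourced. A minimal sketch of that detection, using an inline sample instead of the real file so it runs anywhere (the `ID="amzn"` value comes from the diff; the sample contents are otherwise illustrative):

```shell
#!/bin/sh
# Sample of what /etc/os-release looks like on Amazon Linux (illustrative).
sample='NAME="Amazon Linux AMI"
ID="amzn"'

# Extract the ID assignment, stripping optional quotes — the same value
# `source /etc/os-release` would place in $ID in the install script.
ID=$(printf '%s\n' "$sample" | sed -n 's/^ID=//p' | tr -d '"')

if [ "$ID" = "amzn" ]; then
  echo "amzn detected: install_init + install_chkconfig"
fi
```

Sourcing the file directly is simpler in the script itself, but it executes the file's contents, which is why it is only done after confirming `/etc/os-release` exists.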