Merge remote-tracking branch 'upstream/master'

Ali Alrahahleh 2016-05-24 15:31:04 -07:00
commit 4c762c6950
72 changed files with 4776 additions and 1181 deletions

.github/ISSUE_TEMPLATE.md

@ -0,0 +1,42 @@
## Directions
GitHub Issues are reserved for actionable bug reports and feature requests.
General questions should be sent to the [InfluxDB mailing list](https://groups.google.com/forum/#!forum/influxdb).
Before opening an issue, search for similar bug reports or feature requests on GitHub Issues.
If no similar issue can be found, fill out either the "Bug Report" or the "Feature Request" section below.
Erase the other section and everything on and above this line.
*Please note, the quickest way to fix a bug is to open a Pull Request.*
## Bug report
### System info:
[Include Telegraf version, operating system name, and other relevant details]
### Steps to reproduce:
1. ...
2. ...
### Expected behavior:
### Actual behavior:
### Additional info:
[Include gist of relevant config, logs, etc.]
## Feature Request
Opening a feature request kicks off a discussion.
### Proposal:
### Current behavior:
### Desired behavior:
### Use case: [Why is this important (helps with prioritizing requests)]

.github/PULL_REQUEST_TEMPLATE.md

@ -0,0 +1,5 @@
### Required for all PRs:
- [ ] CHANGELOG.md updated
- [ ] Sign [CLA](https://influxdata.com/community/cla/) (if not already signed)
- [ ] README.md updated (if adding a new plugin)

CHANGELOG.md

@ -1,4 +1,50 @@
## v0.13 [unreleased]
## v1.0 [unreleased]
### Release Notes
### Features
- [#1247](https://github.com/influxdata/telegraf/pull/1247): rollbar input plugin. Thanks @francois2metz and @cduez!
### Bugfixes
- [#1252](https://github.com/influxdata/telegraf/pull/1252): Fix systemd service. Thanks @zbindenren!
## v0.13.1 [2016-05-24]
### Release Notes
- net_response and http_response plugin timeouts will now accept duration
strings, i.e., "2s" or "500ms".
- Input plugin Gathers will no longer be logged by default, but a Gather for
_each_ plugin will be logged in Debug mode.
- Debug mode will no longer print every point added to the accumulator. This
functionality can be duplicated using the `file` output plugin and printing
to "stdout".
### Features
- [#1173](https://github.com/influxdata/telegraf/pull/1173): varnish input plugin. Thanks @sfox-xmatters!
- [#1138](https://github.com/influxdata/telegraf/pull/1138): nstat input plugin. Thanks @Maksadbek!
- [#1139](https://github.com/influxdata/telegraf/pull/1139): instrumental output plugin. Thanks @jasonroelofs!
- [#1172](https://github.com/influxdata/telegraf/pull/1172): Ceph storage stats. Thanks @robinpercy!
- [#1233](https://github.com/influxdata/telegraf/pull/1233): Updated golint gopsutil dependency.
- [#1238](https://github.com/influxdata/telegraf/pull/1238): chrony input plugin. Thanks @zbindenren!
- [#479](https://github.com/influxdata/telegraf/issues/479): per-plugin execution time added to debug output.
- [#1249](https://github.com/influxdata/telegraf/issues/1249): influxdb output: added write_consistency argument.
### Bugfixes
- [#1195](https://github.com/influxdata/telegraf/pull/1195): Docker panic on timeout. Thanks @zstyblik!
- [#1211](https://github.com/influxdata/telegraf/pull/1211): mongodb input. Fix possible panic. Thanks @kols!
- [#1215](https://github.com/influxdata/telegraf/pull/1215): Fix for possible gopsutil-dependent plugin hangs.
- [#1228](https://github.com/influxdata/telegraf/pull/1228): Fix service plugin host tag overwrite.
- [#1198](https://github.com/influxdata/telegraf/pull/1198): http_response: override request Host header properly
- [#1230](https://github.com/influxdata/telegraf/issues/1230): Fix Telegraf process hangup due to a single plugin hanging.
- [#1214](https://github.com/influxdata/telegraf/issues/1214): Use TCP timeout argument in net_response plugin.
- [#1243](https://github.com/influxdata/telegraf/pull/1243): Logfile not created on systemd.
## v0.13 [2016-05-11]
### Release Notes
@ -48,7 +94,15 @@ based on _prefix_ in addition to globs. This means that a filter like
- disque: `host -> disque_host`
- rethinkdb: `host -> rethinkdb_host`
- **Breaking Change**: The `win_perf_counters` input has been changed to
sanitize field names, replacing `/Sec` and `/sec` with `_persec`, as well as
spaces with underscores. This is needed because Graphite doesn't like slashes
and spaces, and was failing to accept metrics that had them.
The `/[sS]ec` -> `_persec` is just to make things clearer and uniform.
- **Breaking Change**: snmp plugin. The `host` tag of the snmp plugin has been
changed to the `snmp_host` tag.
- The `disk` input plugin can now be configured with the `HOST_MOUNT_PREFIX` environment variable.
This value is prepended to any mount paths discovered before retrieving stats.
It is not included in the reported path. This is necessary for reporting host disk stats when running from within a container.
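A sketch of the described behavior, assuming a hypothetical `/hostfs` bind mount (illustrative only, not the plugin's code):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// e.g. HOST_MOUNT_PREFIX=/hostfs when the host's filesystem is
	// bind-mounted into the container at /hostfs
	prefix := os.Getenv("HOST_MOUNT_PREFIX")
	mount := "/var/lib/data"

	statPath := prefix + mount                       // path actually examined for stats
	reported := strings.TrimPrefix(statPath, prefix) // path used in the reported metric
	fmt.Printf("stat %s, report as %s\n", statPath, reported)
}
```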

Godeps

@ -25,7 +25,7 @@ github.com/gorilla/mux c9e326e2bdec29039a3761c07bece13133863e1e
github.com/hailocab/go-hostpool e80d13ce29ede4452c43dea11e79b9bc8a15b478
github.com/hpcloud/tail b2940955ab8b26e19d43a43c4da0475dd81bdb56
github.com/influxdata/config b79f6829346b8d6e78ba73544b1e1038f1f1c9da
github.com/influxdata/influxdb 21db76b3374c733f37ed16ad93f3484020034351
github.com/influxdata/influxdb e094138084855d444195b252314dfee9eae34cab
github.com/influxdata/toml af4df43894b16e3fd2b788d01bd27ad0776ef2d0
github.com/klauspost/crc32 19b0b332c9e4516a6370a0456e6182c3b5036720
github.com/lib/pq e182dc4027e2ded4b19396d638610f2653295f36
@ -42,7 +42,7 @@ github.com/prometheus/client_model fa8ad6fec33561be4280a8f0514318c79d7f6cb6
github.com/prometheus/common e8eabff8812b05acf522b45fdcd725a785188e37
github.com/prometheus/procfs 406e5b7bfd8201a36e2bb5f7bdae0b03380c2ce8
github.com/samuel/go-zookeeper 218e9c81c0dd8b3b18172b2bbfad92cc7d6db55f
github.com/shirou/gopsutil 1f32ce1bb380845be7f5d174ac641a2c592c0c42
github.com/shirou/gopsutil 83c6e72cbdef6e8ada934549abf700ff0ba96776
github.com/soniah/gosnmp b1b4f885b12c5dcbd021c5cee1c904110de6db7d
github.com/streadway/amqp b4f3ceab0337f013208d31348b578d83c0064744
github.com/stretchr/testify 1f4a1643a57e798696635ea4c126e9127adb7d3c

Makefile

@ -14,21 +14,21 @@ windows: prepare-windows build-windows
# Only run the build (no dependency grabbing)
build:
go install -ldflags "-X main.Version=$(VERSION)" ./...
go install -ldflags "-X main.version=$(VERSION)" ./...
build-windows:
go build -o telegraf.exe -ldflags \
"-X main.Version=$(VERSION)" \
"-X main.version=$(VERSION)" \
./cmd/telegraf/telegraf.go
build-for-docker:
CGO_ENABLED=0 GOOS=linux go build -installsuffix cgo -o telegraf -ldflags \
"-s -X main.Version=$(VERSION)" \
"-s -X main.version=$(VERSION)" \
./cmd/telegraf/telegraf.go
# Build with race detector
dev: prepare
go build -race -ldflags "-X main.Version=$(VERSION)" ./...
go build -race -ldflags "-X main.version=$(VERSION)" ./...
# run package script
package:

README.md

@ -1,4 +1,4 @@
# Telegraf [![Circle CI](https://circleci.com/gh/influxdata/telegraf.svg?style=svg)](https://circleci.com/gh/influxdata/telegraf)
# Telegraf [![Circle CI](https://circleci.com/gh/influxdata/telegraf.svg?style=svg)](https://circleci.com/gh/influxdata/telegraf) [![Docker pulls](https://img.shields.io/docker/pulls/library/telegraf.svg)](https://hub.docker.com/_/telegraf/)
Telegraf is an agent written in Go for collecting metrics from the system it's
running on, or from other services, and writing them into InfluxDB or other
@ -20,12 +20,12 @@ new plugins.
### Linux deb and rpm Packages:
Latest:
* http://get.influxdb.org/telegraf/telegraf_0.12.1-1_amd64.deb
* http://get.influxdb.org/telegraf/telegraf-0.12.1-1.x86_64.rpm
* https://dl.influxdata.com/telegraf/releases/telegraf_0.13.1_amd64.deb
* https://dl.influxdata.com/telegraf/releases/telegraf-0.13.1.x86_64.rpm
Latest (arm):
* http://get.influxdb.org/telegraf/telegraf_0.12.1-1_armhf.deb
* http://get.influxdb.org/telegraf/telegraf-0.12.1-1.armhf.rpm
* https://dl.influxdata.com/telegraf/releases/telegraf_0.13.1_armhf.deb
* https://dl.influxdata.com/telegraf/releases/telegraf-0.13.1.armhf.rpm
##### Package Instructions:
@ -46,32 +46,14 @@ to use this repo to install & update telegraf.
### Linux tarballs:
Latest:
* http://get.influxdb.org/telegraf/telegraf-0.12.1-1_linux_amd64.tar.gz
* http://get.influxdb.org/telegraf/telegraf-0.12.1-1_linux_i386.tar.gz
* http://get.influxdb.org/telegraf/telegraf-0.12.1-1_linux_armhf.tar.gz
##### tarball Instructions:
To install the full directory structure with config file, run:
```
sudo tar -C / -zxvf ./telegraf-0.12.1-1_linux_amd64.tar.gz
```
To extract only the binary, run:
```
tar -zxvf telegraf-0.12.1-1_linux_amd64.tar.gz --strip-components=3 ./usr/bin/telegraf
```
* https://dl.influxdata.com/telegraf/releases/telegraf-0.13.1_linux_amd64.tar.gz
* https://dl.influxdata.com/telegraf/releases/telegraf-0.13.1_linux_i386.tar.gz
* https://dl.influxdata.com/telegraf/releases/telegraf-0.13.1_linux_armhf.tar.gz
### FreeBSD tarball:
Latest:
* http://get.influxdb.org/telegraf/telegraf-0.12.1-1_freebsd_amd64.tar.gz
##### tarball Instructions:
See linux instructions above.
* https://dl.influxdata.com/telegraf/releases/telegraf-0.13.1_freebsd_amd64.tar.gz
### Ansible Role:
@ -87,8 +69,8 @@ brew install telegraf
### Windows Binaries (EXPERIMENTAL)
Latest:
* http://get.influxdb.org/telegraf/telegraf-0.12.1-1_windows_amd64.zip
* http://get.influxdb.org/telegraf/telegraf-0.12.1-1_windows_i386.zip
* https://dl.influxdata.com/telegraf/releases/telegraf-0.13.1_windows_amd64.zip
* https://dl.influxdata.com/telegraf/releases/telegraf-0.13.1_windows_i386.zip
### From Source:
@ -161,6 +143,8 @@ Currently implemented sources:
* [apache](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/apache)
* [bcache](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/bcache)
* [cassandra](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/cassandra)
* [ceph](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/ceph)
* [chrony](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/chrony)
* [couchbase](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/couchbase)
* [couchdb](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/couchdb)
* [disque](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/disque)
@ -186,6 +170,7 @@ Currently implemented sources:
* [net_response](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/net_response)
* [nginx](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/nginx)
* [nsq](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/nsq)
* [nstat](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/nstat)
* [ntpq](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/ntpq)
* [phpfpm](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/phpfpm)
* [phusion passenger](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/passenger)
@ -205,6 +190,7 @@ Currently implemented sources:
* [snmp](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/snmp)
* [sql server](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/sqlserver) (microsoft)
* [twemproxy](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/twemproxy)
* [varnish](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/varnish)
* [zfs](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/zfs)
* [zookeeper](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/zookeeper)
* [win_perf_counters ](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/win_perf_counters) (windows performance counters)
@ -223,12 +209,14 @@ Currently implemented sources:
Telegraf can also collect metrics via the following service plugins:
* [statsd](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/statsd)
* [tail](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/tail)
* [udp_listener](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/udp_listener)
* [tcp_listener](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/tcp_listener)
* [mqtt_consumer](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/mqtt_consumer)
* [kafka_consumer](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/kafka_consumer)
* [nats_consumer](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/nats_consumer)
* [github_webhooks](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/github_webhooks)
* [rollbar_webhooks](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/rollbar_webhooks)
We'll be adding support for many more over the coming months. Read on if you
want to add support for another service or third-party API.
@ -243,6 +231,7 @@ want to add support for another service or third-party API.
* [datadog](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/datadog)
* [file](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/file)
* [graphite](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/graphite)
* [instrumental](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/instrumental)
* [kafka](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/kafka)
* [librato](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/librato)
* [mqtt](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/mqtt)

agent/accumulator.go

@ -4,7 +4,6 @@ import (
"fmt"
"log"
"math"
"sync"
"time"
"github.com/influxdata/telegraf"
@ -22,13 +21,13 @@ func NewAccumulator(
}
type accumulator struct {
sync.Mutex
metrics chan telegraf.Metric
defaultTags map[string]string
debug bool
// print every point added to the accumulator
trace bool
inputConfig *internal_models.InputConfig
@ -84,14 +83,18 @@ func (ac *accumulator) AddFields(
if tags == nil {
tags = make(map[string]string)
}
// Apply daemon-wide tags if set
for k, v := range ac.defaultTags {
tags[k] = v
}
// Apply plugin-wide tags if set
for k, v := range ac.inputConfig.Tags {
if _, ok := tags[k]; !ok {
tags[k] = v
}
}
// Apply daemon-wide tags if set
for k, v := range ac.defaultTags {
if _, ok := tags[k]; !ok {
tags[k] = v
}
}
ac.inputConfig.Filter.FilterTags(tags)
result := make(map[string]interface{})
@ -148,7 +151,7 @@ func (ac *accumulator) AddFields(
log.Printf("Error adding point [%s]: %s\n", measurement, err.Error())
return
}
if ac.debug {
if ac.trace {
fmt.Println("> " + m.String())
}
ac.metrics <- m
@ -162,6 +165,14 @@ func (ac *accumulator) SetDebug(debug bool) {
ac.debug = debug
}
func (ac *accumulator) Trace() bool {
return ac.trace
}
func (ac *accumulator) SetTrace(trace bool) {
ac.trace = trace
}
func (ac *accumulator) setDefaultTags(tags map[string]string) {
ac.defaultTags = tags
}
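The reordering in AddFields above establishes tag precedence: tags already on the metric win over plugin-level tags, which in turn win over daemon-wide tags, since each layer is applied only when the key is not yet set. A minimal sketch of that rule using plain maps (names are illustrative, not the accumulator's API):

```go
package main

import "fmt"

// applyTags layers tag maps from highest to lowest precedence;
// a tag that is already present is never overwritten.
func applyTags(metricTags map[string]string, layers ...map[string]string) map[string]string {
	for _, layer := range layers {
		for k, v := range layer {
			if _, ok := metricTags[k]; !ok {
				metricTags[k] = v
			}
		}
	}
	return metricTags
}

func main() {
	metric := map[string]string{"host": "from-metric"}
	plugin := map[string]string{"host": "from-plugin", "dc": "us-east"}
	daemon := map[string]string{"host": "from-daemon", "dc": "eu-west", "env": "prod"}
	// host stays "from-metric", dc comes from the plugin, env from the daemon
	fmt.Println(applyTags(metric, plugin, daemon))
}
```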

agent/agent.go

@ -102,24 +102,24 @@ func panicRecover(input *internal_models.RunningInput) {
}
}
// gatherParallel runs the inputs that are using the same reporting interval
// as the telegraf agent.
func (a *Agent) gatherParallel(metricC chan telegraf.Metric) error {
var wg sync.WaitGroup
start := time.Now()
counter := 0
jitter := a.Config.Agent.CollectionJitter.Duration.Nanoseconds()
for _, input := range a.Config.Inputs {
if input.Config.Interval != 0 {
continue
}
wg.Add(1)
counter++
go func(input *internal_models.RunningInput) {
// gatherer runs the inputs that have been configured with their own
// reporting interval.
func (a *Agent) gatherer(
shutdown chan struct{},
input *internal_models.RunningInput,
interval time.Duration,
metricC chan telegraf.Metric,
) error {
defer panicRecover(input)
defer wg.Done()
ticker := time.NewTicker(interval)
defer ticker.Stop()
jitter := a.Config.Agent.CollectionJitter.Duration.Nanoseconds()
for {
var outerr error
start := time.Now()
acc := NewAccumulator(input.Config, metricC)
acc.SetDebug(a.Config.Agent.Debug)
@ -136,59 +136,16 @@ func (a *Agent) gatherParallel(metricC chan telegraf.Metric) error {
}
}
if err := input.Input.Gather(acc); err != nil {
log.Printf("Error in input [%s]: %s", input.Name, err)
}
}(input)
}
if counter == 0 {
return nil
}
wg.Wait()
gatherWithTimeout(shutdown, input, acc, interval)
elapsed := time.Since(start)
if !a.Config.Agent.Quiet {
log.Printf("Gathered metrics, (%s interval), from %d inputs in %s\n",
a.Config.Agent.Interval.Duration, counter, elapsed)
}
return nil
}
// gatherSeparate runs the inputs that have been configured with their own
// reporting interval.
func (a *Agent) gatherSeparate(
shutdown chan struct{},
input *internal_models.RunningInput,
metricC chan telegraf.Metric,
) error {
defer panicRecover(input)
ticker := time.NewTicker(input.Config.Interval)
for {
var outerr error
start := time.Now()
acc := NewAccumulator(input.Config, metricC)
acc.SetDebug(a.Config.Agent.Debug)
acc.setDefaultTags(a.Config.Tags)
if err := input.Input.Gather(acc); err != nil {
log.Printf("Error in input [%s]: %s", input.Name, err)
}
elapsed := time.Since(start)
if !a.Config.Agent.Quiet {
log.Printf("Gathered metrics, (separate %s interval), from %s in %s\n",
input.Config.Interval, input.Name, elapsed)
}
if outerr != nil {
return outerr
}
if a.Config.Agent.Debug {
log.Printf("Input [%s] gathered metrics, (%s interval) in %s\n",
input.Name, interval, elapsed)
}
select {
case <-shutdown:
@ -199,6 +156,42 @@ func (a *Agent) gatherSeparate(
}
}
// gatherWithTimeout gathers from the given input, with the given timeout.
// when the given timeout is reached, gatherWithTimeout logs an error message
// but continues waiting for it to return. This is to avoid leaving behind
// hung processes, and to prevent re-calling the same hung process over and
// over.
func gatherWithTimeout(
shutdown chan struct{},
input *internal_models.RunningInput,
acc *accumulator,
timeout time.Duration,
) {
ticker := time.NewTicker(timeout)
defer ticker.Stop()
done := make(chan error)
go func() {
done <- input.Input.Gather(acc)
}()
for {
select {
case err := <-done:
if err != nil {
log.Printf("ERROR in input [%s]: %s", input.Name, err)
}
return
case <-ticker.C:
log.Printf("ERROR: input [%s] took longer to collect than "+
"collection interval (%s)",
input.Name, timeout)
continue
case <-shutdown:
return
}
}
}
// Test verifies that we can 'Gather' from all inputs with their configured
// Config struct
func (a *Agent) Test() error {
@ -220,7 +213,7 @@ func (a *Agent) Test() error {
for _, input := range a.Config.Inputs {
acc := NewAccumulator(input.Config, metricC)
acc.SetDebug(true)
acc.SetTrace(true)
acc.setDefaultTags(a.Config.Tags)
fmt.Printf("* Plugin: %s, Collection 1\n", input.Name)
@ -348,7 +341,6 @@ func (a *Agent) Run(shutdown chan struct{}) error {
i := int64(a.Config.Agent.Interval.Duration)
time.Sleep(time.Duration(i - (time.Now().UnixNano() % i)))
}
ticker := time.NewTicker(a.Config.Agent.Interval.Duration)
wg.Add(1)
go func() {
@ -359,32 +351,21 @@ func (a *Agent) Run(shutdown chan struct{}) error {
}
}()
wg.Add(len(a.Config.Inputs))
for _, input := range a.Config.Inputs {
// Special handling for inputs that have their own collection interval
// configured. Default intervals are handled below with gatherParallel
interval := a.Config.Agent.Interval.Duration
// overwrite global interval if this plugin has its own.
if input.Config.Interval != 0 {
wg.Add(1)
go func(input *internal_models.RunningInput) {
interval = input.Config.Interval
}
go func(in *internal_models.RunningInput, interv time.Duration) {
defer wg.Done()
if err := a.gatherSeparate(shutdown, input, metricC); err != nil {
if err := a.gatherer(shutdown, in, interv, metricC); err != nil {
log.Printf(err.Error())
}
}(input)
}
}(input, interval)
}
defer wg.Wait()
for {
if err := a.gatherParallel(metricC); err != nil {
log.Printf(err.Error())
}
select {
case <-shutdown:
wg.Wait()
return nil
case <-ticker.C:
continue
}
}
}

cmd/telegraf/telegraf.go

@ -46,9 +46,13 @@ var fOutputFiltersLegacy = flag.String("outputfilter", "",
var fConfigDirectoryLegacy = flag.String("configdirectory", "",
"directory containing additional *.conf files")
// Telegraf version
// -ldflags "-X main.Version=`git describe --always --tags`"
var Version string
// Telegraf version, populated by the linker, e.g.
// -ldflags "-X main.version=`git describe --always --tags`"
var (
version string
commit string
branch string
)
const usage = `Telegraf, The plugin-driven server agent for collecting and reporting metrics.
@ -132,7 +136,7 @@ func main() {
if len(args) > 0 {
switch args[0] {
case "version":
v := fmt.Sprintf("Telegraf - Version %s", Version)
v := fmt.Sprintf("Telegraf - version %s", version)
fmt.Println(v)
return
case "config":
@ -158,7 +162,7 @@ func main() {
}
if *fVersion {
v := fmt.Sprintf("Telegraf - Version %s", Version)
v := fmt.Sprintf("Telegraf - version %s", version)
fmt.Println(v)
return
}
@ -251,7 +255,7 @@ func main() {
}
}()
log.Printf("Starting Telegraf (version %s)\n", Version)
log.Printf("Starting Telegraf (version %s)\n", version)
log.Printf("Loaded outputs: %s", strings.Join(c.OutputNames(), " "))
log.Printf("Loaded inputs: %s", strings.Join(c.InputNames(), " "))
log.Printf("Tags enabled: %s", c.ListTags())

etc/telegraf.conf

@ -75,12 +75,15 @@
urls = ["http://localhost:8086"] # required
## The target database for metrics (telegraf will create it if not exists).
database = "telegraf" # required
## Retention policy to write to.
retention_policy = "default"
## Precision of writes, valid values are "ns", "us" (or "µs"), "ms", "s", "m", "h".
## note: using "s" precision greatly improves InfluxDB compression.
precision = "s"
## Retention policy to write to.
retention_policy = "default"
## Write consistency (clusters only), can be: "any", "one", "quorum", "all"
write_consistency = "any"
## Write timeout (for the InfluxDB client), formatted as a string.
## If not provided, will default to 5s. 0s means no timeout (not recommended).
timeout = "5s"
@ -196,6 +199,21 @@
# timeout = 2
# # Configuration for sending metrics to an Instrumental project
# [[outputs.instrumental]]
# ## Project API Token (required)
# api_token = "API Token" # required
# ## Prefix the metrics with a given name
# prefix = ""
# ## Stats output template (Graphite formatting)
# ## see https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md#graphite
# template = "host.tags.measurement.field"
# ## Timeout to connect, as a duration string
# timeout = "2s"
# ## Display Communication to Instrumental
# debug = false
# # Configuration for the Kafka server to send metrics to
# [[outputs.kafka]]
# ## URLs of kafka brokers
@ -469,6 +487,24 @@
# ]
# # Collects performance metrics from the MON and OSD nodes in a Ceph storage cluster.
# [[inputs.ceph]]
# ## All configuration values are optional, defaults are shown below
#
# ## location of ceph binary
# ceph_binary = "/usr/bin/ceph"
#
# ## directory in which to look for socket files
# socket_dir = "/var/run/ceph"
#
# ## prefix of MON and OSD socket files, used to determine socket type
# mon_prefix = "ceph-mon"
# osd_prefix = "ceph-osd"
#
# ## suffix used to identify socket files
# socket_suffix = "asok"
# # Pull Metric Statistics from Amazon CloudWatch
# [[inputs.cloudwatch]]
# ## Amazon Region
@ -638,8 +674,8 @@
#
# ## If no servers are specified, then default to 127.0.0.1:1936
# servers = ["http://myhaproxy.com:1936", "http://anotherhaproxy.com:1936"]
# ## Or you can also use local socket(not work yet)
# ## servers = ["socket://run/haproxy/admin.sock"]
# ## Or you can also use local socket
# ## servers = ["socket:/run/haproxy/admin.sock"]
# # HTTP/HTTPS request given an address, a method and a timeout
@ -647,7 +683,7 @@
# ## Server address (default http://localhost)
# address = "http://github.com"
# ## Set response_timeout (default 5 seconds)
# response_timeout = 5
# response_timeout = "5s"
# ## HTTP Request Method
# method = "GET"
# ## Whether to follow redirects from the server (defaults to false)
@ -848,8 +884,8 @@
# ## [username[:password]@][protocol[(address)]]/[?tls=[true|false|skip-verify]]
# ## see https://github.com/go-sql-driver/mysql#dsn-data-source-name
# ## e.g.
# ## root:passwd@tcp(127.0.0.1:3306)/?tls=false
# ## root@tcp(127.0.0.1:3306)/?tls=false
# ## db_user:passwd@tcp(127.0.0.1:3306)/?tls=false
# ## db_user@tcp(127.0.0.1:3306)/?tls=false
# #
# ## If no servers are specified, then localhost is used as the host.
# servers = ["tcp(127.0.0.1:3306)/"]
@ -913,14 +949,15 @@
# protocol = "tcp"
# ## Server address (default localhost)
# address = "github.com:80"
# ## Set timeout (default 1.0 seconds)
# timeout = 1.0
# ## Set read timeout (default 1.0 seconds)
# read_timeout = 1.0
# ## Set timeout
# timeout = "1s"
#
# ## Optional string sent to the server
# # send = "ssh"
# ## Optional expected string in answer
# # expect = "ssh"
# ## Set read timeout (only used if expecting a response)
# read_timeout = "1s"
# # Read TCP metrics such as established, time wait and sockets counts.
@ -940,6 +977,18 @@
# endpoints = ["http://localhost:4151"]
# # Collect kernel snmp counters and network interface statistics
# [[inputs.nstat]]
# ## file paths for proc files. If empty, default paths will be used:
# ## /proc/net/netstat, /proc/net/snmp, /proc/net/snmp6
# ## These can also be overridden with env variables, see README.
# proc_net_netstat = ""
# proc_net_snmp = ""
# proc_net_snmp6 = ""
# ## dump metrics with 0 values too
# dump_zeros = true
# # Get standard NTP query metrics, requires ntpq executable.
# [[inputs.ntpq]]
# ## If false, set the -n ntpq flag. Can reduce metric gather time.
@ -1099,6 +1148,9 @@
# ## user as argument for pgrep (ie, pgrep -u <user>)
# # user = "nginx"
#
# ## override for process_name
# ## This is optional; default is sourced from /proc/<pid>/status
# # process_name = "bar"
# ## Field name prefix
# prefix = ""
# ## comment this out if you want raw cpu_time stats
@ -1300,6 +1352,17 @@
# pools = ["redis_pool", "mc_pool"]
# # A plugin to collect stats from Varnish HTTP Cache
# [[inputs.varnish]]
# ## The default location of the varnishstat binary can be overridden with:
# binary = "/usr/bin/varnishstat"
#
# ## By default, telegraf gathers stats for 3 metric points.
# ## Setting stats will override the defaults shown below.
# ## stats may also be set to ["all"], which will collect all stats
# stats = ["MAIN.cache_hit", "MAIN.cache_miss", "MAIN.uptime"]
# # Read metrics of ZFS from arcstats, zfetchstats and vdev_cache_stats
# [[inputs.zfs]]
# ## ZFS kstat path
@ -1411,6 +1474,12 @@
# data_format = "influx"
# # A Rollbar Webhook Event collector
# [[inputs.rollbar_webhooks]]
# ## Address and port to host Webhook listener on
# service_address = ":1619"
# # Statsd Server
# [[inputs.statsd]]
# ## Address and port to host UDP listener on

internal/internal.go

@ -12,6 +12,7 @@ import (
"log"
"os"
"os/exec"
"strconv"
"strings"
"time"
"unicode"
@ -32,12 +33,25 @@ type Duration struct {
// UnmarshalTOML parses the duration from the TOML config file
func (d *Duration) UnmarshalTOML(b []byte) error {
dur, err := time.ParseDuration(string(b[1 : len(b)-1]))
if err != nil {
return err
var err error
// Parse string duration, ie, "1s"
d.Duration, err = time.ParseDuration(string(b[1 : len(b)-1]))
if err == nil {
return nil
}
d.Duration = dur
// Next, try parsing as integer seconds
sI, err := strconv.ParseInt(string(b), 10, 64)
if err == nil {
d.Duration = time.Second * time.Duration(sI)
return nil
}
// Finally, try parsing as float seconds
sF, err := strconv.ParseFloat(string(b), 64)
if err == nil {
// convert after multiplying so fractional seconds are not truncated
d.Duration = time.Duration(sF * float64(time.Second))
return nil
}
return nil // unparseable values fall back to the zero Duration
}
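With this fallback chain, a configured timeout can be written as a duration string, an integer, or a float, all of which TOML passes through as raw bytes. A standalone sketch of the same chain (a hypothetical helper, not part of the package):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseDuration mirrors the fallbacks above: quoted duration string
// first, then integer seconds, then float seconds.
func parseDuration(raw string) time.Duration {
	if d, err := time.ParseDuration(strings.Trim(raw, `"`)); err == nil {
		return d
	}
	if i, err := strconv.ParseInt(raw, 10, 64); err == nil {
		return time.Duration(i) * time.Second
	}
	if f, err := strconv.ParseFloat(raw, 64); err == nil {
		return time.Duration(f * float64(time.Second))
	}
	return 0
}

func main() {
	for _, raw := range []string{`"1m30s"`, `2`, `2.5`} {
		// "1m30s" -> 1m30s, 2 -> 2s, 2.5 -> 2.5s
		fmt.Printf("%s -> %s\n", raw, parseDuration(raw))
	}
}
```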

plugins/inputs/all/all.go

@ -5,6 +5,8 @@ import (
_ "github.com/influxdata/telegraf/plugins/inputs/apache"
_ "github.com/influxdata/telegraf/plugins/inputs/bcache"
_ "github.com/influxdata/telegraf/plugins/inputs/cassandra"
_ "github.com/influxdata/telegraf/plugins/inputs/ceph"
_ "github.com/influxdata/telegraf/plugins/inputs/chrony"
_ "github.com/influxdata/telegraf/plugins/inputs/cloudwatch"
_ "github.com/influxdata/telegraf/plugins/inputs/couchbase"
_ "github.com/influxdata/telegraf/plugins/inputs/couchdb"
@ -35,6 +37,7 @@ import (
_ "github.com/influxdata/telegraf/plugins/inputs/net_response"
_ "github.com/influxdata/telegraf/plugins/inputs/nginx"
_ "github.com/influxdata/telegraf/plugins/inputs/nsq"
_ "github.com/influxdata/telegraf/plugins/inputs/nstat"
_ "github.com/influxdata/telegraf/plugins/inputs/ntpq"
_ "github.com/influxdata/telegraf/plugins/inputs/passenger"
_ "github.com/influxdata/telegraf/plugins/inputs/phpfpm"
@ -50,6 +53,7 @@ import (
_ "github.com/influxdata/telegraf/plugins/inputs/redis"
_ "github.com/influxdata/telegraf/plugins/inputs/rethinkdb"
_ "github.com/influxdata/telegraf/plugins/inputs/riak"
_ "github.com/influxdata/telegraf/plugins/inputs/rollbar_webhooks"
_ "github.com/influxdata/telegraf/plugins/inputs/sensors"
_ "github.com/influxdata/telegraf/plugins/inputs/snmp"
_ "github.com/influxdata/telegraf/plugins/inputs/sqlserver"
@ -61,6 +65,7 @@ import (
_ "github.com/influxdata/telegraf/plugins/inputs/trig"
_ "github.com/influxdata/telegraf/plugins/inputs/twemproxy"
_ "github.com/influxdata/telegraf/plugins/inputs/udp_listener"
_ "github.com/influxdata/telegraf/plugins/inputs/varnish"
_ "github.com/influxdata/telegraf/plugins/inputs/win_perf_counters"
_ "github.com/influxdata/telegraf/plugins/inputs/zfs"
_ "github.com/influxdata/telegraf/plugins/inputs/zookeeper"

plugins/inputs/ceph/README.md

@ -0,0 +1,109 @@
# Ceph Storage Input Plugin
Collects performance metrics from the MON and OSD nodes in a Ceph storage cluster.
The plugin works by scanning the configured SocketDir for OSD and MON socket files. When it finds
a MON socket, it runs **ceph --admin-daemon $file perfcounters_dump**. For OSDs it runs **ceph --admin-daemon $file perf dump**.
The resulting JSON is parsed and grouped into collections, based on top-level key. Top-level keys are
used as collection tags, and all sub-keys are flattened. For example:
```
{
"paxos": {
"refresh": 9363435,
"refresh_latency": {
"avgcount": 9363435,
"sum": 5378.794002000
}
}
}
```
Would be parsed into the following metrics, all of which would be tagged with collection=paxos:
- refresh = 9363435
- refresh_latency.avgcount = 9363435
- refresh_latency.sum = 5378.794002000
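For reference, here is a standalone sketch of the same flattening rule, joining nested keys with `.` top-down (the plugin's own implementation in ceph.go below uses a key stack instead):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// flatten walks a decoded JSON object, joining nested keys with "."
// and keeping only the numeric leaves.
func flatten(prefix string, v interface{}, out map[string]float64) {
	switch val := v.(type) {
	case float64:
		out[prefix] = val
	case map[string]interface{}:
		for k, child := range val {
			key := k
			if prefix != "" {
				key = prefix + "." + k
			}
			flatten(key, child, out)
		}
	}
}

func main() {
	raw := `{"refresh": 9363435, "refresh_latency": {"avgcount": 9363435, "sum": 5378.794002}}`
	var data map[string]interface{}
	if err := json.Unmarshal([]byte(raw), &data); err != nil {
		panic(err)
	}
	out := make(map[string]float64)
	flatten("", data, out)
	fmt.Println(out) // refresh, refresh_latency.avgcount, refresh_latency.sum
}
```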
### Configuration:
```
# Collects performance metrics from the MON and OSD nodes in a Ceph storage cluster.
[[inputs.ceph]]
## All configuration values are optional, defaults are shown below
## location of ceph binary
ceph_binary = "/usr/bin/ceph"
## directory in which to look for socket files
socket_dir = "/var/run/ceph"
## prefix of MON and OSD socket files, used to determine socket type
mon_prefix = "ceph-mon"
osd_prefix = "ceph-osd"
## suffix used to identify socket files
socket_suffix = "asok"
```
### Measurements & Fields:
All fields are collected under the **ceph** measurement and stored as float64s. For a full list of fields, see the sample perf dumps in ceph_test.go.
### Tags:
All measurements will have the following tags:
- type: either 'osd' or 'mon' to indicate which type of node was queried
- id: a unique string identifier, parsed from the socket file name for the node
- collection: the top-level key under which these fields were reported. Possible values are:
- for MON nodes:
- cluster
- leveldb
- mon
- paxos
- throttle-mon_client_bytes
- throttle-mon_daemon_bytes
- throttle-msgr_dispatch_throttler-mon
- for OSD nodes:
- WBThrottle
- filestore
- leveldb
- mutex-FileJournal::completions_lock
- mutex-FileJournal::finisher_lock
- mutex-FileJournal::write_lock
- mutex-FileJournal::writeq_lock
- mutex-JOS::ApplyManager::apply_lock
- mutex-JOS::ApplyManager::com_lock
- mutex-JOS::SubmitManager::lock
- mutex-WBThrottle::lock
- objecter
- osd
- recoverystate_perf
- throttle-filestore_bytes
- throttle-filestore_ops
- throttle-msgr_dispatch_throttler-client
- throttle-msgr_dispatch_throttler-cluster
- throttle-msgr_dispatch_throttler-hb_back_server
- throttle-msgr_dispatch_throttler-hb_front_server
- throttle-msgr_dispatch_throttler-hbclient
- throttle-msgr_dispatch_throttler-ms_objecter
- throttle-objecter_bytes
- throttle-objecter_ops
- throttle-osd_client_bytes
- throttle-osd_client_messages
### Example Output:
<pre>
telegraf -test -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d -input-filter ceph
* Plugin: ceph, Collection 1
> ceph,collection=paxos,id=node-2,role=openstack,type=mon accept_timeout=0,begin=14931264,begin_bytes.avgcount=14931264,begin_bytes.sum=180309683362,begin_keys.avgcount=0,begin_keys.sum=0,begin_latency.avgcount=14931264,begin_latency.sum=9293.29589,collect=1,collect_bytes.avgcount=1,collect_bytes.sum=24,collect_keys.avgcount=1,collect_keys.sum=1,collect_latency.avgcount=1,collect_latency.sum=0.00028,collect_timeout=0,collect_uncommitted=0,commit=14931264,commit_bytes.avgcount=0,commit_bytes.sum=0,commit_keys.avgcount=0,commit_keys.sum=0,commit_latency.avgcount=0,commit_latency.sum=0,lease_ack_timeout=0,lease_timeout=0,new_pn=0,new_pn_latency.avgcount=0,new_pn_latency.sum=0,refresh=14931264,refresh_latency.avgcount=14931264,refresh_latency.sum=8706.98498,restart=4,share_state=0,share_state_bytes.avgcount=0,share_state_bytes.sum=0,share_state_keys.avgcount=0,share_state_keys.sum=0,start_leader=0,start_peon=1,store_state=14931264,store_state_bytes.avgcount=14931264,store_state_bytes.sum=353119959211,store_state_keys.avgcount=14931264,store_state_keys.sum=289807523,store_state_latency.avgcount=14931264,store_state_latency.sum=10952.835724 1462821234814535148
> ceph,collection=throttle-mon_client_bytes,id=node-2,type=mon get=1413017,get_or_fail_fail=0,get_or_fail_success=0,get_sum=71211705,max=104857600,put=1413013,put_sum=71211459,take=0,take_sum=0,val=246,wait.avgcount=0,wait.sum=0 1462821234814737219
> ceph,collection=throttle-mon_daemon_bytes,id=node-2,type=mon get=4058121,get_or_fail_fail=0,get_or_fail_success=0,get_sum=6027348117,max=419430400,put=4058121,put_sum=6027348117,take=0,take_sum=0,val=0,wait.avgcount=0,wait.sum=0 1462821234814815661
> ceph,collection=throttle-msgr_dispatch_throttler-mon,id=node-2,type=mon get=54276277,get_or_fail_fail=0,get_or_fail_success=0,get_sum=370232877040,max=104857600,put=54276277,put_sum=370232877040,take=0,take_sum=0,val=0,wait.avgcount=0,wait.sum=0 1462821234814872064
</pre>

plugins/inputs/ceph/ceph.go

@ -0,0 +1,249 @@
package ceph
import (
"bytes"
"encoding/json"
"fmt"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"io/ioutil"
"log"
"os/exec"
"path/filepath"
"strings"
)
const (
measurement = "ceph"
typeMon = "monitor"
typeOsd = "osd"
osdPrefix = "ceph-osd"
monPrefix = "ceph-mon"
sockSuffix = "asok"
)
type Ceph struct {
CephBinary string
OsdPrefix string
MonPrefix string
SocketDir string
SocketSuffix string
}
func (c *Ceph) setDefaults() {
if c.CephBinary == "" {
c.CephBinary = "/usr/bin/ceph"
}
if c.OsdPrefix == "" {
c.OsdPrefix = osdPrefix
}
if c.MonPrefix == "" {
c.MonPrefix = monPrefix
}
if c.SocketDir == "" {
c.SocketDir = "/var/run/ceph"
}
if c.SocketSuffix == "" {
c.SocketSuffix = sockSuffix
}
}
func (c *Ceph) Description() string {
return "Collects performance metrics from the MON and OSD nodes in a Ceph storage cluster."
}
var sampleConfig = `
## All configuration values are optional, defaults are shown below
## location of ceph binary
ceph_binary = "/usr/bin/ceph"
## directory in which to look for socket files
socket_dir = "/var/run/ceph"
## prefix of MON and OSD socket files, used to determine socket type
mon_prefix = "ceph-mon"
osd_prefix = "ceph-osd"
## suffix used to identify socket files
socket_suffix = "asok"
`
func (c *Ceph) SampleConfig() string {
return sampleConfig
}
func (c *Ceph) Gather(acc telegraf.Accumulator) error {
c.setDefaults()
sockets, err := findSockets(c)
if err != nil {
return fmt.Errorf("failed to find sockets at path '%s': %v", c.SocketDir, err)
}
for _, s := range sockets {
dump, err := perfDump(c.CephBinary, s)
if err != nil {
log.Printf("error reading from socket '%s': %v", s.socket, err)
continue
}
data, err := parseDump(dump)
if err != nil {
log.Printf("error parsing dump from socket '%s': %v", s.socket, err)
continue
}
for tag, metrics := range *data {
acc.AddFields(measurement,
map[string]interface{}(metrics),
map[string]string{"type": s.sockType, "id": s.sockId, "collection": tag})
}
}
return nil
}
func init() {
inputs.Add(measurement, func() telegraf.Input { return &Ceph{} })
}
var perfDump = func(binary string, socket *socket) (string, error) {
cmdArgs := []string{"--admin-daemon", socket.socket}
if socket.sockType == typeOsd {
cmdArgs = append(cmdArgs, "perf", "dump")
} else if socket.sockType == typeMon {
cmdArgs = append(cmdArgs, "perfcounters_dump")
} else {
return "", fmt.Errorf("ignoring unknown socket type: %s", socket.sockType)
}
cmd := exec.Command(binary, cmdArgs...)
var out bytes.Buffer
cmd.Stdout = &out
err := cmd.Run()
if err != nil {
return "", fmt.Errorf("error running ceph dump: %s", err)
}
return out.String(), nil
}
var findSockets = func(c *Ceph) ([]*socket, error) {
listing, err := ioutil.ReadDir(c.SocketDir)
if err != nil {
return []*socket{}, fmt.Errorf("Failed to read socket directory '%s': %v", c.SocketDir, err)
}
sockets := make([]*socket, 0, len(listing))
for _, info := range listing {
f := info.Name()
var sockType string
var sockPrefix string
if strings.HasPrefix(f, c.MonPrefix) {
sockType = typeMon
sockPrefix = monPrefix
}
if strings.HasPrefix(f, c.OsdPrefix) {
sockType = typeOsd
sockPrefix = osdPrefix
}
if sockType == typeOsd || sockType == typeMon {
path := filepath.Join(c.SocketDir, f)
sockets = append(sockets, &socket{parseSockId(f, sockPrefix, c.SocketSuffix), sockType, path})
}
}
return sockets, nil
}
func parseSockId(fname, prefix, suffix string) string {
s := fname
s = strings.TrimPrefix(s, prefix)
s = strings.TrimSuffix(s, suffix)
s = strings.Trim(s, ".-_")
return s
}
type socket struct {
sockId string
sockType string
socket string
}
type metric struct {
pathStack []string // lifo stack of name components
value float64
}
// Pops names off pathStack to build the flattened name for a metric
func (m *metric) name() string {
buf := bytes.Buffer{}
for i := len(m.pathStack) - 1; i >= 0; i-- {
if buf.Len() > 0 {
buf.WriteString(".")
}
buf.WriteString(m.pathStack[i])
}
return buf.String()
}
type metricMap map[string]interface{}
type taggedMetricMap map[string]metricMap
// Parses a raw JSON string into a taggedMetricMap
// Delegates the actual parsing to newTaggedMetricMap(..)
func parseDump(dump string) (*taggedMetricMap, error) {
data := make(map[string]interface{})
err := json.Unmarshal([]byte(dump), &data)
if err != nil {
return nil, fmt.Errorf("failed to parse json: '%s': %v", dump, err)
}
tmm := newTaggedMetricMap(data)
return tmm, nil
}
// Builds a TaggedMetricMap out of a generic string map.
// The top-level key is used as a tag and all sub-keys are flattened into metrics
func newTaggedMetricMap(data map[string]interface{}) *taggedMetricMap {
tmm := make(taggedMetricMap)
for tag, datapoints := range data {
mm := make(metricMap)
for _, m := range flatten(datapoints) {
mm[m.name()] = m.value
}
tmm[tag] = mm
}
return &tmm
}
// Recursively flattens any k-v hierarchy present in data.
// Nested keys are flattened into ordered slices associated with a metric value.
// The key slices are treated as stacks, and are expected to be reversed and concatenated
// when passed as metrics to the accumulator. (see (*metric).name())
func flatten(data interface{}) []*metric {
var metrics []*metric
switch val := data.(type) {
case float64:
metrics = []*metric{&metric{make([]string, 0, 1), val}}
case map[string]interface{}:
metrics = make([]*metric, 0, len(val))
for k, v := range val {
for _, m := range flatten(v) {
m.pathStack = append(m.pathStack, k)
metrics = append(metrics, m)
}
}
default:
log.Printf("Ignoring unexpected type '%T' for value %v", val, val)
}
return metrics
}

plugins/inputs/ceph/ceph_test.go

@ -0,0 +1,682 @@
package ceph
import (
"fmt"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"io/ioutil"
"os"
"path"
"strconv"
"strings"
"testing"
)
const (
epsilon = float64(0.00000001)
)
func TestParseSockId(t *testing.T) {
s := parseSockId(sockFile(osdPrefix, 1), osdPrefix, sockSuffix)
assert.Equal(t, s, "1")
}
func TestParseMonDump(t *testing.T) {
dump, err := parseDump(monPerfDump)
assert.NoError(t, err)
assert.InEpsilon(t, 5678670180, (*dump)["cluster"]["osd_kb_used"], epsilon)
assert.InEpsilon(t, 6866.540527000, (*dump)["paxos"]["store_state_latency.sum"], epsilon)
}
func TestParseOsdDump(t *testing.T) {
dump, err := parseDump(osdPerfDump)
assert.NoError(t, err)
assert.InEpsilon(t, 552132.109360000, (*dump)["filestore"]["commitcycle_interval.sum"], epsilon)
assert.Equal(t, float64(0), (*dump)["mutex-FileJournal::finisher_lock"]["wait.avgcount"])
}
func TestGather(t *testing.T) {
saveFind := findSockets
saveDump := perfDump
defer func() {
findSockets = saveFind
perfDump = saveDump
}()
findSockets = func(c *Ceph) ([]*socket, error) {
return []*socket{&socket{"osd.1", typeOsd, ""}}, nil
}
perfDump = func(binary string, s *socket) (string, error) {
return osdPerfDump, nil
}
acc := &testutil.Accumulator{}
c := &Ceph{}
c.Gather(acc)
}
func TestFindSockets(t *testing.T) {
tmpdir, err := ioutil.TempDir("", "socktest")
assert.NoError(t, err)
defer func() {
err := os.Remove(tmpdir)
assert.NoError(t, err)
}()
c := &Ceph{
CephBinary: "foo",
SocketDir: tmpdir,
}
c.setDefaults()
for _, st := range sockTestParams {
createTestFiles(tmpdir, st)
sockets, err := findSockets(c)
assert.NoError(t, err)
for i := 1; i <= st.osds; i++ {
assertFoundSocket(t, tmpdir, typeOsd, i, sockets)
}
for i := 1; i <= st.mons; i++ {
assertFoundSocket(t, tmpdir, typeMon, i, sockets)
}
cleanupTestFiles(tmpdir, st)
}
}
func assertFoundSocket(t *testing.T, dir, sockType string, i int, sockets []*socket) {
var prefix string
if sockType == typeOsd {
prefix = osdPrefix
} else {
prefix = monPrefix
}
expected := path.Join(dir, sockFile(prefix, i))
found := false
for _, s := range sockets {
fmt.Printf("Checking %s\n", s.socket)
if s.socket == expected {
found = true
assert.Equal(t, s.sockType, sockType, "Unexpected socket type for '%s'", s)
assert.Equal(t, s.sockId, strconv.Itoa(i))
}
}
assert.True(t, found, "Did not find socket: %s", expected)
}
func sockFile(prefix string, i int) string {
return strings.Join([]string{prefix, strconv.Itoa(i), sockSuffix}, ".")
}
func createTestFiles(dir string, st *SockTest) {
writeFile := func(prefix string, i int) {
f := sockFile(prefix, i)
fpath := path.Join(dir, f)
ioutil.WriteFile(fpath, []byte(""), 0777)
}
tstFileApply(st, writeFile)
}
func cleanupTestFiles(dir string, st *SockTest) {
rmFile := func(prefix string, i int) {
f := sockFile(prefix, i)
fpath := path.Join(dir, f)
err := os.Remove(fpath)
if err != nil {
fmt.Printf("Error removing test file %s: %v\n", fpath, err)
}
}
tstFileApply(st, rmFile)
}
func tstFileApply(st *SockTest, fn func(prefix string, i int)) {
for i := 1; i <= st.osds; i++ {
fn(osdPrefix, i)
}
for i := 1; i <= st.mons; i++ {
fn(monPrefix, i)
}
}
type SockTest struct {
osds int
mons int
}
var sockTestParams = []*SockTest{
&SockTest{
osds: 2,
mons: 2,
},
&SockTest{
mons: 1,
},
&SockTest{
osds: 1,
},
&SockTest{},
}
var monPerfDump = `
{ "cluster": { "num_mon": 2,
"num_mon_quorum": 2,
"num_osd": 26,
"num_osd_up": 26,
"num_osd_in": 26,
"osd_epoch": 3306,
"osd_kb": 11487846448,
"osd_kb_used": 5678670180,
"osd_kb_avail": 5809176268,
"num_pool": 12,
"num_pg": 768,
"num_pg_active_clean": 768,
"num_pg_active": 768,
"num_pg_peering": 0,
"num_object": 397616,
"num_object_degraded": 0,
"num_object_unfound": 0,
"num_bytes": 2917848227467,
"num_mds_up": 0,
"num_mds_in": 0,
"num_mds_failed": 0,
"mds_epoch": 1},
"leveldb": { "leveldb_get": 321950312,
"leveldb_transaction": 18729922,
"leveldb_compact": 0,
"leveldb_compact_range": 74141,
"leveldb_compact_queue_merge": 0,
"leveldb_compact_queue_len": 0},
"mon": {},
"paxos": { "start_leader": 0,
"start_peon": 1,
"restart": 4,
"refresh": 9363435,
"refresh_latency": { "avgcount": 9363435,
"sum": 5378.794002000},
"begin": 9363435,
"begin_keys": { "avgcount": 0,
"sum": 0},
"begin_bytes": { "avgcount": 9363435,
"sum": 110468605489},
"begin_latency": { "avgcount": 9363435,
"sum": 5850.060682000},
"commit": 9363435,
"commit_keys": { "avgcount": 0,
"sum": 0},
"commit_bytes": { "avgcount": 0,
"sum": 0},
"commit_latency": { "avgcount": 0,
"sum": 0.000000000},
"collect": 1,
"collect_keys": { "avgcount": 1,
"sum": 1},
"collect_bytes": { "avgcount": 1,
"sum": 24},
"collect_latency": { "avgcount": 1,
"sum": 0.000280000},
"collect_uncommitted": 0,
"collect_timeout": 0,
"accept_timeout": 0,
"lease_ack_timeout": 0,
"lease_timeout": 0,
"store_state": 9363435,
"store_state_keys": { "avgcount": 9363435,
"sum": 176572789},
"store_state_bytes": { "avgcount": 9363435,
"sum": 216355887217},
"store_state_latency": { "avgcount": 9363435,
"sum": 6866.540527000},
"share_state": 0,
"share_state_keys": { "avgcount": 0,
"sum": 0},
"share_state_bytes": { "avgcount": 0,
"sum": 0},
"new_pn": 0,
"new_pn_latency": { "avgcount": 0,
"sum": 0.000000000}},
"throttle-mon_client_bytes": { "val": 246,
"max": 104857600,
"get": 896030,
"get_sum": 45854374,
"get_or_fail_fail": 0,
"get_or_fail_success": 0,
"take": 0,
"take_sum": 0,
"put": 896026,
"put_sum": 45854128,
"wait": { "avgcount": 0,
"sum": 0.000000000}},
"throttle-mon_daemon_bytes": { "val": 0,
"max": 419430400,
"get": 2773768,
"get_sum": 3627676976,
"get_or_fail_fail": 0,
"get_or_fail_success": 0,
"take": 0,
"take_sum": 0,
"put": 2773768,
"put_sum": 3627676976,
"wait": { "avgcount": 0,
"sum": 0.000000000}},
"throttle-msgr_dispatch_throttler-mon": { "val": 0,
"max": 104857600,
"get": 34504949,
"get_sum": 226860281124,
"get_or_fail_fail": 0,
"get_or_fail_success": 0,
"take": 0,
"take_sum": 0,
"put": 34504949,
"put_sum": 226860281124,
"wait": { "avgcount": 0,
"sum": 0.000000000}}}
`
var osdPerfDump = `
{ "WBThrottle": { "bytes_dirtied": 28405539,
"bytes_wb": 0,
"ios_dirtied": 93,
"ios_wb": 0,
"inodes_dirtied": 86,
"inodes_wb": 0},
"filestore": { "journal_queue_max_ops": 0,
"journal_queue_ops": 0,
"journal_ops": 1108008,
"journal_queue_max_bytes": 0,
"journal_queue_bytes": 0,
"journal_bytes": 73233416196,
"journal_latency": { "avgcount": 1108008,
"sum": 290.981036000},
"journal_wr": 1091866,
"journal_wr_bytes": { "avgcount": 1091866,
"sum": 74925682688},
"journal_full": 0,
"committing": 0,
"commitcycle": 110389,
"commitcycle_interval": { "avgcount": 110389,
"sum": 552132.109360000},
"commitcycle_latency": { "avgcount": 110389,
"sum": 178.657804000},
"op_queue_max_ops": 50,
"op_queue_ops": 0,
"ops": 1108008,
"op_queue_max_bytes": 104857600,
"op_queue_bytes": 0,
"bytes": 73226768148,
"apply_latency": { "avgcount": 1108008,
"sum": 947.742722000},
"queue_transaction_latency_avg": { "avgcount": 1108008,
"sum": 0.511327000}},
"leveldb": { "leveldb_get": 4361221,
"leveldb_transaction": 4351276,
"leveldb_compact": 0,
"leveldb_compact_range": 0,
"leveldb_compact_queue_merge": 0,
"leveldb_compact_queue_len": 0},
"mutex-FileJournal::completions_lock": { "wait": { "avgcount": 0,
"sum": 0.000000000}},
"mutex-FileJournal::finisher_lock": { "wait": { "avgcount": 0,
"sum": 0.000000000}},
"mutex-FileJournal::write_lock": { "wait": { "avgcount": 0,
"sum": 0.000000000}},
"mutex-FileJournal::writeq_lock": { "wait": { "avgcount": 0,
"sum": 0.000000000}},
"mutex-JOS::ApplyManager::apply_lock": { "wait": { "avgcount": 0,
"sum": 0.000000000}},
"mutex-JOS::ApplyManager::com_lock": { "wait": { "avgcount": 0,
"sum": 0.000000000}},
"mutex-JOS::SubmitManager::lock": { "wait": { "avgcount": 0,
"sum": 0.000000000}},
"mutex-WBThrottle::lock": { "wait": { "avgcount": 0,
"sum": 0.000000000}},
"objecter": { "op_active": 0,
"op_laggy": 0,
"op_send": 0,
"op_send_bytes": 0,
"op_resend": 0,
"op_ack": 0,
"op_commit": 0,
"op": 0,
"op_r": 0,
"op_w": 0,
"op_rmw": 0,
"op_pg": 0,
"osdop_stat": 0,
"osdop_create": 0,
"osdop_read": 0,
"osdop_write": 0,
"osdop_writefull": 0,
"osdop_append": 0,
"osdop_zero": 0,
"osdop_truncate": 0,
"osdop_delete": 0,
"osdop_mapext": 0,
"osdop_sparse_read": 0,
"osdop_clonerange": 0,
"osdop_getxattr": 0,
"osdop_setxattr": 0,
"osdop_cmpxattr": 0,
"osdop_rmxattr": 0,
"osdop_resetxattrs": 0,
"osdop_tmap_up": 0,
"osdop_tmap_put": 0,
"osdop_tmap_get": 0,
"osdop_call": 0,
"osdop_watch": 0,
"osdop_notify": 0,
"osdop_src_cmpxattr": 0,
"osdop_pgls": 0,
"osdop_pgls_filter": 0,
"osdop_other": 0,
"linger_active": 0,
"linger_send": 0,
"linger_resend": 0,
"poolop_active": 0,
"poolop_send": 0,
"poolop_resend": 0,
"poolstat_active": 0,
"poolstat_send": 0,
"poolstat_resend": 0,
"statfs_active": 0,
"statfs_send": 0,
"statfs_resend": 0,
"command_active": 0,
"command_send": 0,
"command_resend": 0,
"map_epoch": 3300,
"map_full": 0,
"map_inc": 3293,
"osd_sessions": 0,
"osd_session_open": 0,
"osd_session_close": 0,
"osd_laggy": 0},
"osd": { "opq": 0,
"op_wip": 0,
"op": 23939,
"op_in_bytes": 1245903961,
"op_out_bytes": 29103083856,
"op_latency": { "avgcount": 23939,
"sum": 440.192015000},
"op_process_latency": { "avgcount": 23939,
"sum": 30.170685000},
"op_r": 23112,
"op_r_out_bytes": 29103056146,
"op_r_latency": { "avgcount": 23112,
"sum": 19.373526000},
"op_r_process_latency": { "avgcount": 23112,
"sum": 14.625928000},
"op_w": 549,
"op_w_in_bytes": 1245804358,
"op_w_rlat": { "avgcount": 549,
"sum": 17.022299000},
"op_w_latency": { "avgcount": 549,
"sum": 418.494610000},
"op_w_process_latency": { "avgcount": 549,
"sum": 13.316555000},
"op_rw": 278,
"op_rw_in_bytes": 99603,
"op_rw_out_bytes": 27710,
"op_rw_rlat": { "avgcount": 278,
"sum": 2.213785000},
"op_rw_latency": { "avgcount": 278,
"sum": 2.323879000},
"op_rw_process_latency": { "avgcount": 278,
"sum": 2.228202000},
"subop": 1074774,
"subop_in_bytes": 26841811636,
"subop_latency": { "avgcount": 1074774,
"sum": 745.509160000},
"subop_w": 0,
"subop_w_in_bytes": 26841811636,
"subop_w_latency": { "avgcount": 1074774,
"sum": 745.509160000},
"subop_pull": 0,
"subop_pull_latency": { "avgcount": 0,
"sum": 0.000000000},
"subop_push": 0,
"subop_push_in_bytes": 0,
"subop_push_latency": { "avgcount": 0,
"sum": 0.000000000},
"pull": 0,
"push": 28,
"push_out_bytes": 103483392,
"push_in": 0,
"push_in_bytes": 0,
"recovery_ops": 15,
"loadavg": 202,
"buffer_bytes": 0,
"numpg": 18,
"numpg_primary": 8,
"numpg_replica": 10,
"numpg_stray": 0,
"heartbeat_to_peers": 10,
"heartbeat_from_peers": 0,
"map_messages": 7413,
"map_message_epochs": 9792,
"map_message_epoch_dups": 10105,
"messages_delayed_for_map": 83,
"stat_bytes": 102123175936,
"stat_bytes_used": 49961820160,
"stat_bytes_avail": 52161355776,
"copyfrom": 0,
"tier_promote": 0,
"tier_flush": 0,
"tier_flush_fail": 0,
"tier_try_flush": 0,
"tier_try_flush_fail": 0,
"tier_evict": 0,
"tier_whiteout": 0,
"tier_dirty": 230,
"tier_clean": 0,
"tier_delay": 0,
"agent_wake": 0,
"agent_skip": 0,
"agent_flush": 0,
"agent_evict": 0},
"recoverystate_perf": { "initial_latency": { "avgcount": 473,
"sum": 0.027207000},
"started_latency": { "avgcount": 1480,
"sum": 9854902.397648000},
"reset_latency": { "avgcount": 1953,
"sum": 0.096206000},
"start_latency": { "avgcount": 1953,
"sum": 0.059947000},
"primary_latency": { "avgcount": 765,
"sum": 4688922.186935000},
"peering_latency": { "avgcount": 704,
"sum": 1668.652135000},
"backfilling_latency": { "avgcount": 0,
"sum": 0.000000000},
"waitremotebackfillreserved_latency": { "avgcount": 0,
"sum": 0.000000000},
"waitlocalbackfillreserved_latency": { "avgcount": 0,
"sum": 0.000000000},
"notbackfilling_latency": { "avgcount": 0,
"sum": 0.000000000},
"repnotrecovering_latency": { "avgcount": 462,
"sum": 5158922.114600000},
"repwaitrecoveryreserved_latency": { "avgcount": 15,
"sum": 0.008275000},
"repwaitbackfillreserved_latency": { "avgcount": 1,
"sum": 0.000095000},
"RepRecovering_latency": { "avgcount": 16,
"sum": 2274.944727000},
"activating_latency": { "avgcount": 514,
"sum": 261.008520000},
"waitlocalrecoveryreserved_latency": { "avgcount": 20,
"sum": 0.175422000},
"waitremoterecoveryreserved_latency": { "avgcount": 20,
"sum": 0.682778000},
"recovering_latency": { "avgcount": 20,
"sum": 0.697551000},
"recovered_latency": { "avgcount": 511,
"sum": 0.011038000},
"clean_latency": { "avgcount": 503,
"sum": 4686961.154278000},
"active_latency": { "avgcount": 506,
"sum": 4687223.640464000},
"replicaactive_latency": { "avgcount": 446,
"sum": 5161197.078966000},
"stray_latency": { "avgcount": 794,
"sum": 4805.105128000},
"getinfo_latency": { "avgcount": 704,
"sum": 1138.477937000},
"getlog_latency": { "avgcount": 678,
"sum": 0.036393000},
"waitactingchange_latency": { "avgcount": 69,
"sum": 59.172893000},
"incomplete_latency": { "avgcount": 0,
"sum": 0.000000000},
"getmissing_latency": { "avgcount": 609,
"sum": 0.012288000},
"waitupthru_latency": { "avgcount": 576,
"sum": 530.106999000}},
"throttle-filestore_bytes": { "val": 0,
"max": 0,
"get": 0,
"get_sum": 0,
"get_or_fail_fail": 0,
"get_or_fail_success": 0,
"take": 0,
"take_sum": 0,
"put": 0,
"put_sum": 0,
"wait": { "avgcount": 0,
"sum": 0.000000000}},
"throttle-filestore_ops": { "val": 0,
"max": 0,
"get": 0,
"get_sum": 0,
"get_or_fail_fail": 0,
"get_or_fail_success": 0,
"take": 0,
"take_sum": 0,
"put": 0,
"put_sum": 0,
"wait": { "avgcount": 0,
"sum": 0.000000000}},
"throttle-msgr_dispatch_throttler-client": { "val": 0,
"max": 104857600,
"get": 130730,
"get_sum": 1246039872,
"get_or_fail_fail": 0,
"get_or_fail_success": 0,
"take": 0,
"take_sum": 0,
"put": 130730,
"put_sum": 1246039872,
"wait": { "avgcount": 0,
"sum": 0.000000000}},
"throttle-msgr_dispatch_throttler-cluster": { "val": 0,
"max": 104857600,
"get": 1108033,
"get_sum": 71277949992,
"get_or_fail_fail": 0,
"get_or_fail_success": 0,
"take": 0,
"take_sum": 0,
"put": 1108033,
"put_sum": 71277949992,
"wait": { "avgcount": 0,
"sum": 0.000000000}},
"throttle-msgr_dispatch_throttler-hb_back_server": { "val": 0,
"max": 104857600,
"get": 18320575,
"get_sum": 861067025,
"get_or_fail_fail": 0,
"get_or_fail_success": 0,
"take": 0,
"take_sum": 0,
"put": 18320575,
"put_sum": 861067025,
"wait": { "avgcount": 0,
"sum": 0.000000000}},
"throttle-msgr_dispatch_throttler-hb_front_server": { "val": 0,
"max": 104857600,
"get": 18320575,
"get_sum": 861067025,
"get_or_fail_fail": 0,
"get_or_fail_success": 0,
"take": 0,
"take_sum": 0,
"put": 18320575,
"put_sum": 861067025,
"wait": { "avgcount": 0,
"sum": 0.000000000}},
"throttle-msgr_dispatch_throttler-hbclient": { "val": 0,
"max": 104857600,
"get": 40479394,
"get_sum": 1902531518,
"get_or_fail_fail": 0,
"get_or_fail_success": 0,
"take": 0,
"take_sum": 0,
"put": 40479394,
"put_sum": 1902531518,
"wait": { "avgcount": 0,
"sum": 0.000000000}},
"throttle-msgr_dispatch_throttler-ms_objecter": { "val": 0,
"max": 104857600,
"get": 0,
"get_sum": 0,
"get_or_fail_fail": 0,
"get_or_fail_success": 0,
"take": 0,
"take_sum": 0,
"put": 0,
"put_sum": 0,
"wait": { "avgcount": 0,
"sum": 0.000000000}},
"throttle-objecter_bytes": { "val": 0,
"max": 104857600,
"get": 0,
"get_sum": 0,
"get_or_fail_fail": 0,
"get_or_fail_success": 0,
"take": 0,
"take_sum": 0,
"put": 0,
"put_sum": 0,
"wait": { "avgcount": 0,
"sum": 0.000000000}},
"throttle-objecter_ops": { "val": 0,
"max": 1024,
"get": 0,
"get_sum": 0,
"get_or_fail_fail": 0,
"get_or_fail_success": 0,
"take": 0,
"take_sum": 0,
"put": 0,
"put_sum": 0,
"wait": { "avgcount": 0,
"sum": 0.000000000}},
"throttle-osd_client_bytes": { "val": 0,
"max": 524288000,
"get": 24241,
"get_sum": 1241992581,
"get_or_fail_fail": 0,
"get_or_fail_success": 0,
"take": 0,
"take_sum": 0,
"put": 25958,
"put_sum": 1241992581,
"wait": { "avgcount": 0,
"sum": 0.000000000}},
"throttle-osd_client_messages": { "val": 0,
"max": 100,
"get": 49214,
"get_sum": 49214,
"get_or_fail_fail": 0,
"get_or_fail_success": 0,
"take": 0,
"take_sum": 0,
"put": 49214,
"put_sum": 49214,
"wait": { "avgcount": 0,
"sum": 0.000000000}}}
`

plugins/inputs/chrony/README.md

@ -0,0 +1,91 @@
# chrony Input Plugin
Get standard chrony metrics, requires chronyc executable.
Below is the documentation of the various headers returned by `chronyc tracking`.
- Reference ID - This is the refid and name (or IP address), if available, of the
server to which the computer is currently synchronised. If this is 127.127.1.1
it means the computer is not synchronised to any external source and is
operating in local mode (via the local command in chronyc, or the local
directive in the /etc/chrony.conf file).
- Stratum - The stratum indicates how many hops away the computer is from a
computer with an attached reference clock. Such a computer is a stratum-1
computer, a computer synchronised to it is stratum-2, and so on.
- Ref time - This is the time (UTC) at which the last measurement from the reference
source was processed.
- System time - In normal operation, chronyd never steps the system clock, because any
jump in the timescale can have adverse consequences for certain application programs.
Instead, any error in the system clock is corrected by slightly speeding up or slowing
down the system clock until the error has been removed, and then returning to the system
clock's normal speed. A consequence of this is that there will be a period when the
system clock (as read by other programs using the gettimeofday() system call, or by the
date command in the shell) will be different from chronyd's estimate of the current true
time (which it reports to NTP clients when it is operating in server mode). The value
reported on this line is the difference due to this effect.
- Last offset - This is the estimated local offset on the last clock update.
- RMS offset - This is a long-term average of the offset value.
- Frequency - The frequency is the rate at which the system's clock would drift
if chronyd were not correcting it. It is expressed in ppm (parts per million).
For example, a value of 1 ppm means that when the system's clock thinks it has
advanced 1 second, it has actually advanced by 1.000001 seconds relative to true
time (see the worked example after this list).
- Residual freq - This shows the residual frequency for the currently selected
reference source. This reflects any difference between what the measurements from the
reference source indicate the frequency should be and the frequency currently being used.
The reason this is not always zero is that a smoothing procedure is applied to the
frequency. Each time a measurement from the reference source is obtained and a new
residual frequency computed, the estimated accuracy of this residual is compared with the
estimated accuracy (see skew next) of the existing frequency value. A weighted average
is computed for the new frequency, with weights depending on these accuracies. If the
measurements from the reference source follow a consistent trend, the residual will be
driven to zero over time.
- Skew - This is the estimated error bound on the frequency.
- Root delay - This is the total of the network path delays to the stratum-1 computer
from which the computer is ultimately synchronised. In certain extreme situations, this
value can be negative. (This can arise in a symmetric peer arrangement where the computers'
frequencies are not tracking each other and the network delay is very short relative to the
turn-around time at each computer.)
- Root dispersion - This is the total dispersion accumulated through all the computers
back to the stratum-1 computer from which the computer is ultimately synchronised.
Dispersion is due to system clock resolution, statistical measurement variations etc.
- Leap status - This is the leap status, which can be Normal, Insert second,
Delete second or Not synchronised.
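To make the ppm figures above concrete, here is a minimal, hypothetical
calculation (not part of the plugin; the numbers are illustrative) of how much
drift a given frequency error implies over an hour:
```go
package main

import "fmt"

func main() {
	// A clock running 16.001 ppm slow loses 16.001 microseconds per
	// second, i.e. roughly 57.6 ms per hour if left uncorrected.
	const freqPPM = 16.001
	const elapsedSeconds = 3600.0
	fmt.Printf("drift over an hour: %.4f s\n", freqPPM*elapsedSeconds/1e6)
}
```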
### Configuration:
```toml
# Get standard chrony metrics, requires chronyc executable.
[[inputs.chrony]]
# no configuration
```
### Measurements & Fields:
- chrony
- last_offset (float, seconds)
- rms_offset (float, seconds)
- frequency (float, ppm)
- residual_freq (float, ppm)
- skew (float, ppm)
- root_delay (float, seconds)
- root_dispersion (float, seconds)
- update_interval (float, seconds)
### Tags:
- All measurements have the following tags:
- reference_id
- stratum
- leap_status
### Example Output:
```
$ telegraf -config telegraf.conf -input-filter chrony -test
* Plugin: chrony, Collection 1
> chrony,leap_status=normal,reference_id=192.168.1.1,stratum=3 frequency=-35.657,last_offset=-0.000013616,residual_freq=-0,rms_offset=0.000027073,root_delay=0.000644,root_dispersion=0.003444,skew=0.001,update_interval=1031.2 1463750789687639161
```


@ -0,0 +1,118 @@
// +build linux
package chrony
import (
"errors"
"fmt"
"os/exec"
"strconv"
"strings"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
)
var (
execCommand = exec.Command // execCommand is used to mock commands in tests.
)
type Chrony struct {
path string
}
func (*Chrony) Description() string {
return "Get standard chrony metrics, requires chronyc executable."
}
func (*Chrony) SampleConfig() string {
return ""
}
func (c *Chrony) Gather(acc telegraf.Accumulator) error {
if len(c.path) == 0 {
return errors.New("chronyc not found: verify that chrony is installed and that chronyc is in your PATH")
}
cmd := execCommand(c.path, "tracking")
out, err := internal.CombinedOutputTimeout(cmd, time.Second*5)
if err != nil {
return fmt.Errorf("failed to run command %s: %s - %s", strings.Join(cmd.Args, " "), err, string(out))
}
fields, tags, err := processChronycOutput(string(out))
if err != nil {
return err
}
acc.AddFields("chrony", fields, tags)
return nil
}
// processChronycOutput takes in a string output from the chronyc command, like:
//
// Reference ID : 192.168.1.22 (ntp.example.com)
// Stratum : 3
// Ref time (UTC) : Thu May 12 14:27:07 2016
// System time : 0.000020390 seconds fast of NTP time
// Last offset : +0.000012651 seconds
// RMS offset : 0.000025577 seconds
// Frequency : 16.001 ppm slow
// Residual freq : -0.000 ppm
// Skew : 0.006 ppm
// Root delay : 0.001655 seconds
// Root dispersion : 0.003307 seconds
// Update interval : 507.2 seconds
// Leap status : Normal
//
// The value on the left side of the colon is used as field name, if the first field on
// the right side is a float. If it cannot be parsed as float, it is a tag name.
//
// Ref time is ignored and all names are converted to snake case.
//
// It returns (<fields>, <tags>)
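//
// For the sample above, this yields fields such as
// fields["last_offset"] = 0.000012651 and fields["skew"] = 0.006, and tags
// such as tags["reference_id"] = "192.168.1.22" and tags["leap_status"] = "normal".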
func processChronycOutput(out string) (map[string]interface{}, map[string]string, error) {
tags := map[string]string{}
fields := map[string]interface{}{}
lines := strings.Split(strings.TrimSpace(out), "\n")
for _, line := range lines {
stats := strings.Split(line, ":")
if len(stats) < 2 {
return nil, nil, fmt.Errorf("unexpected output from chronyc, expected ':' in %s", out)
}
name := strings.ToLower(strings.Replace(strings.TrimSpace(stats[0]), " ", "_", -1))
// ignore reference time
if strings.Contains(name, "time") {
continue
}
valueFields := strings.Fields(stats[1])
if len(valueFields) == 0 {
return nil, nil, fmt.Errorf("unexpected output from chronyc: %s", out)
}
if strings.Contains(strings.ToLower(name), "stratum") {
tags["stratum"] = valueFields[0]
continue
}
value, err := strconv.ParseFloat(valueFields[0], 64)
if err != nil {
tags[name] = strings.ToLower(valueFields[0])
continue
}
if strings.Contains(stats[1], "slow") {
value = -value
}
fields[name] = value
}
return fields, tags, nil
}
func init() {
c := Chrony{}
path, _ := exec.LookPath("chronyc")
if len(path) > 0 {
c.path = path
}
inputs.Add("chrony", func() telegraf.Input {
return &c
})
}


@ -0,0 +1,3 @@
// +build !linux
package chrony


@ -0,0 +1,95 @@
// +build linux
package chrony
import (
"fmt"
"os"
"os/exec"
"testing"
"github.com/influxdata/telegraf/testutil"
)
func TestGather(t *testing.T) {
c := Chrony{
path: "chronyc",
}
// overwriting exec commands with mock commands
execCommand = fakeExecCommand
defer func() { execCommand = exec.Command }()
var acc testutil.Accumulator
err := c.Gather(&acc)
if err != nil {
t.Fatal(err)
}
tags := map[string]string{
"reference_id": "192.168.1.22",
"leap_status": "normal",
"stratum": "3",
}
fields := map[string]interface{}{
"last_offset": 0.000012651,
"rms_offset": 0.000025577,
"frequency": -16.001,
"residual_freq": 0.0,
"skew": 0.006,
"root_delay": 0.001655,
"root_dispersion": 0.003307,
"update_interval": 507.2,
}
acc.AssertContainsTaggedFields(t, "chrony", fields, tags)
}
// fakeExecCommand is a helper function that mocks
// the exec.Command call (and calls the test binary instead)
func fakeExecCommand(command string, args ...string) *exec.Cmd {
cs := []string{"-test.run=TestHelperProcess", "--", command}
cs = append(cs, args...)
cmd := exec.Command(os.Args[0], cs...)
cmd.Env = []string{"GO_WANT_HELPER_PROCESS=1"}
return cmd
}
// TestHelperProcess isn't a real test. It's used to mock exec.Command.
// For example, if you run:
// GO_WANT_HELPER_PROCESS=1 go test -test.run=TestHelperProcess -- chrony tracking
// it prints the mockData below.
func TestHelperProcess(t *testing.T) {
if os.Getenv("GO_WANT_HELPER_PROCESS") != "1" {
return
}
mockData := `Reference ID : 192.168.1.22 (ntp.example.com)
Stratum : 3
Ref time (UTC) : Thu May 12 14:27:07 2016
System time : 0.000020390 seconds fast of NTP time
Last offset : +0.000012651 seconds
RMS offset : 0.000025577 seconds
Frequency : 16.001 ppm slow
Residual freq : -0.000 ppm
Skew : 0.006 ppm
Root delay : 0.001655 seconds
Root dispersion : 0.003307 seconds
Update interval : 507.2 seconds
Leap status : Normal
`
args := os.Args
// The preceding args are the test binary's own flags, which look like:
// /tmp/go-build970079519/…/_test/integration.test -test.run=TestHelperProcess --
cmd, args := args[3], args[4:]
if cmd == "chronyc" && args[0] == "tracking" {
fmt.Fprint(os.Stdout, mockData)
} else {
fmt.Fprint(os.Stdout, "command not found")
os.Exit(1)
}
os.Exit(0)
}


@ -221,7 +221,7 @@ func (d *Docker) gatherContainer(
defer cancel()
r, err := d.client.ContainerStats(ctx, container.ID, false)
if err != nil {
log.Printf("Error getting docker stats: %s\n", err.Error())
return fmt.Errorf("Error getting docker stats: %s", err.Error())
}
defer r.Close()
dec := json.NewDecoder(r)
@ -470,6 +470,8 @@ func parseSize(sizeStr string) (int64, error) {
func init() {
inputs.Add("docker", func() telegraf.Input {
return &Docker{}
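// Default to a 5s stats timeout so a slow or hung Docker daemon
// cannot block a Gather cycle indefinitely.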
return &Docker{
Timeout: internal.Duration{Duration: time.Second * 5},
}
})
}


@ -1,320 +1,307 @@
# Elasticsearch plugin
#### Plugin arguments:
- **servers** []string: list of one or more Elasticsearch servers
- **local** boolean: If false, it will read the indices stats from all nodes
- **cluster_health** boolean: If true, it will also obtain cluster level stats
#### Description
# Elasticsearch input plugin
The [elasticsearch](https://www.elastic.co/) plugin queries endpoints to obtain
[node](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-nodes-stats.html)
and optionally [cluster](https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-health.html) stats.
Example:
### Configuration:
```
[elasticsearch]
[[inputs.elasticsearch]]
servers = ["http://localhost:9200"]
local = true
cluster_health = true
```
# Measurements
#### cluster measurements (utilizes fields instead of single values):
contains `status`, `timed_out`, `number_of_nodes`, `number_of_data_nodes`,
`active_primary_shards`, `active_shards`, `relocating_shards`,
`initializing_shards`, `unassigned_shards` fields
- elasticsearch_cluster_health
contains `status`, `number_of_shards`, `number_of_replicas`,
`active_primary_shards`, `active_shards`, `relocating_shards`,
`initializing_shards`, `unassigned_shards` fields
- elasticsearch_indices
#### node measurements:
### Measurements & Fields:
field data circuit breaker measurement names:
- elasticsearch_breakers_fielddata_estimated_size_in_bytes value=0
- elasticsearch_breakers_fielddata_overhead value=1.03
- elasticsearch_breakers_fielddata_tripped value=0
- elasticsearch_breakers_fielddata_limit_size_in_bytes value=623326003
- elasticsearch_breakers_request_estimated_size_in_bytes value=0
- elasticsearch_breakers_request_overhead value=1.0
- elasticsearch_breakers_request_tripped value=0
- elasticsearch_breakers_request_limit_size_in_bytes value=415550668
- elasticsearch_breakers_parent_overhead value=1.0
- elasticsearch_breakers_parent_tripped value=0
- elasticsearch_breakers_parent_limit_size_in_bytes value=727213670
- elasticsearch_breakers_parent_estimated_size_in_bytes value=0
- elasticsearch_breakers
- fielddata_estimated_size_in_bytes value=0
- fielddata_overhead value=1.03
- fielddata_tripped value=0
- fielddata_limit_size_in_bytes value=623326003
- request_estimated_size_in_bytes value=0
- request_overhead value=1.0
- request_tripped value=0
- request_limit_size_in_bytes value=415550668
- parent_overhead value=1.0
- parent_tripped value=0
- parent_limit_size_in_bytes value=727213670
- parent_estimated_size_in_bytes value=0
File system information, data path, free disk space, read/write measurement names:
- elasticsearch_fs_timestamp value=1436460392946
- elasticsearch_fs_total_free_in_bytes value=16909316096
- elasticsearch_fs_total_available_in_bytes value=15894814720
- elasticsearch_fs_total_total_in_bytes value=19507089408
- elasticsearch_fs
- timestamp value=1436460392946
- total_free_in_bytes value=16909316096
- total_available_in_bytes value=15894814720
- total_total_in_bytes value=19507089408
indices size, document count, indexing and deletion times, search times,
field cache size, merges and flushes measurement names:
- elasticsearch_indices_id_cache_memory_size_in_bytes value=0
- elasticsearch_indices_completion_size_in_bytes value=0
- elasticsearch_indices_suggest_total value=0
- elasticsearch_indices_suggest_time_in_millis value=0
- elasticsearch_indices_suggest_current value=0
- elasticsearch_indices_query_cache_memory_size_in_bytes value=0
- elasticsearch_indices_query_cache_evictions value=0
- elasticsearch_indices_query_cache_hit_count value=0
- elasticsearch_indices_query_cache_miss_count value=0
- elasticsearch_indices_store_size_in_bytes value=37715234
- elasticsearch_indices_store_throttle_time_in_millis value=215
- elasticsearch_indices_merges_current_docs value=0
- elasticsearch_indices_merges_current_size_in_bytes value=0
- elasticsearch_indices_merges_total value=133
- elasticsearch_indices_merges_total_time_in_millis value=21060
- elasticsearch_indices_merges_total_docs value=203672
- elasticsearch_indices_merges_total_size_in_bytes value=142900226
- elasticsearch_indices_merges_current value=0
- elasticsearch_indices_filter_cache_memory_size_in_bytes value=7384
- elasticsearch_indices_filter_cache_evictions value=0
- elasticsearch_indices_indexing_index_total value=84790
- elasticsearch_indices_indexing_index_time_in_millis value=29680
- elasticsearch_indices_indexing_index_current value=0
- elasticsearch_indices_indexing_noop_update_total value=0
- elasticsearch_indices_indexing_throttle_time_in_millis value=0
- elasticsearch_indices_indexing_delete_total value=13879
- elasticsearch_indices_indexing_delete_time_in_millis value=1139
- elasticsearch_indices_indexing_delete_current value=0
- elasticsearch_indices_get_exists_time_in_millis value=0
- elasticsearch_indices_get_missing_total value=1
- elasticsearch_indices_get_missing_time_in_millis value=2
- elasticsearch_indices_get_current value=0
- elasticsearch_indices_get_total value=1
- elasticsearch_indices_get_time_in_millis value=2
- elasticsearch_indices_get_exists_total value=0
- elasticsearch_indices_refresh_total value=1076
- elasticsearch_indices_refresh_total_time_in_millis value=20078
- elasticsearch_indices_percolate_current value=0
- elasticsearch_indices_percolate_memory_size_in_bytes value=-1
- elasticsearch_indices_percolate_queries value=0
- elasticsearch_indices_percolate_total value=0
- elasticsearch_indices_percolate_time_in_millis value=0
- elasticsearch_indices_translog_operations value=17702
- elasticsearch_indices_translog_size_in_bytes value=17
- elasticsearch_indices_recovery_current_as_source value=0
- elasticsearch_indices_recovery_current_as_target value=0
- elasticsearch_indices_recovery_throttle_time_in_millis value=0
- elasticsearch_indices_docs_count value=29652
- elasticsearch_indices_docs_deleted value=5229
- elasticsearch_indices_flush_total_time_in_millis value=2401
- elasticsearch_indices_flush_total value=115
- elasticsearch_indices_fielddata_memory_size_in_bytes value=12996
- elasticsearch_indices_fielddata_evictions value=0
- elasticsearch_indices_search_fetch_current value=0
- elasticsearch_indices_search_open_contexts value=0
- elasticsearch_indices_search_query_total value=1452
- elasticsearch_indices_search_query_time_in_millis value=5695
- elasticsearch_indices_search_query_current value=0
- elasticsearch_indices_search_fetch_total value=414
- elasticsearch_indices_search_fetch_time_in_millis value=146
- elasticsearch_indices_warmer_current value=0
- elasticsearch_indices_warmer_total value=2319
- elasticsearch_indices_warmer_total_time_in_millis value=448
- elasticsearch_indices_segments_count value=134
- elasticsearch_indices_segments_memory_in_bytes value=1285212
- elasticsearch_indices_segments_index_writer_memory_in_bytes value=0
- elasticsearch_indices_segments_index_writer_max_memory_in_bytes value=172368955
- elasticsearch_indices_segments_version_map_memory_in_bytes value=611844
- elasticsearch_indices_segments_fixed_bit_set_memory_in_bytes value=0
- elasticsearch_indices
- id_cache_memory_size_in_bytes value=0
- completion_size_in_bytes value=0
- suggest_total value=0
- suggest_time_in_millis value=0
- suggest_current value=0
- query_cache_memory_size_in_bytes value=0
- query_cache_evictions value=0
- query_cache_hit_count value=0
- query_cache_miss_count value=0
- store_size_in_bytes value=37715234
- store_throttle_time_in_millis value=215
- merges_current_docs value=0
- merges_current_size_in_bytes value=0
- merges_total value=133
- merges_total_time_in_millis value=21060
- merges_total_docs value=203672
- merges_total_size_in_bytes value=142900226
- merges_current value=0
- filter_cache_memory_size_in_bytes value=7384
- filter_cache_evictions value=0
- indexing_index_total value=84790
- indexing_index_time_in_millis value=29680
- indexing_index_current value=0
- indexing_noop_update_total value=0
- indexing_throttle_time_in_millis value=0
- indexing_delete_total value=13879
- indexing_delete_time_in_millis value=1139
- indexing_delete_current value=0
- get_exists_time_in_millis value=0
- get_missing_total value=1
- get_missing_time_in_millis value=2
- get_current value=0
- get_total value=1
- get_time_in_millis value=2
- get_exists_total value=0
- refresh_total value=1076
- refresh_total_time_in_millis value=20078
- percolate_current value=0
- percolate_memory_size_in_bytes value=-1
- percolate_queries value=0
- percolate_total value=0
- percolate_time_in_millis value=0
- translog_operations value=17702
- translog_size_in_bytes value=17
- recovery_current_as_source value=0
- recovery_current_as_target value=0
- recovery_throttle_time_in_millis value=0
- docs_count value=29652
- docs_deleted value=5229
- flush_total_time_in_millis value=2401
- flush_total value=115
- fielddata_memory_size_in_bytes value=12996
- fielddata_evictions value=0
- search_fetch_current value=0
- search_open_contexts value=0
- search_query_total value=1452
- search_query_time_in_millis value=5695
- search_query_current value=0
- search_fetch_total value=414
- search_fetch_time_in_millis value=146
- warmer_current value=0
- warmer_total value=2319
- warmer_total_time_in_millis value=448
- segments_count value=134
- segments_memory_in_bytes value=1285212
- segments_index_writer_memory_in_bytes value=0
- segments_index_writer_max_memory_in_bytes value=172368955
- segments_version_map_memory_in_bytes value=611844
- segments_fixed_bit_set_memory_in_bytes value=0
HTTP connection measurement names:
- elasticsearch_http_current_open value=3
- elasticsearch_http_total_opened value=3
- elasticsearch_http
- current_open value=3
- total_opened value=3
JVM stats, memory pool information, garbage collection, buffer pools measurement names:
- elasticsearch_jvm_timestamp value=1436460392945
- elasticsearch_jvm_uptime_in_millis value=202245
- elasticsearch_jvm_mem_non_heap_used_in_bytes value=39634576
- elasticsearch_jvm_mem_non_heap_committed_in_bytes value=40841216
- elasticsearch_jvm_mem_pools_young_max_in_bytes value=279183360
- elasticsearch_jvm_mem_pools_young_peak_used_in_bytes value=71630848
- elasticsearch_jvm_mem_pools_young_peak_max_in_bytes value=279183360
- elasticsearch_jvm_mem_pools_young_used_in_bytes value=32685760
- elasticsearch_jvm_mem_pools_survivor_peak_used_in_bytes value=8912888
- elasticsearch_jvm_mem_pools_survivor_peak_max_in_bytes value=34865152
- elasticsearch_jvm_mem_pools_survivor_used_in_bytes value=8912880
- elasticsearch_jvm_mem_pools_survivor_max_in_bytes value=34865152
- elasticsearch_jvm_mem_pools_old_peak_max_in_bytes value=724828160
- elasticsearch_jvm_mem_pools_old_used_in_bytes value=11110928
- elasticsearch_jvm_mem_pools_old_max_in_bytes value=724828160
- elasticsearch_jvm_mem_pools_old_peak_used_in_bytes value=14354608
- elasticsearch_jvm_mem_heap_used_in_bytes value=52709568
- elasticsearch_jvm_mem_heap_used_percent value=5
- elasticsearch_jvm_mem_heap_committed_in_bytes value=259522560
- elasticsearch_jvm_mem_heap_max_in_bytes value=1038876672
- elasticsearch_jvm_threads_peak_count value=45
- elasticsearch_jvm_threads_count value=44
- elasticsearch_jvm_gc_collectors_young_collection_count value=2
- elasticsearch_jvm_gc_collectors_young_collection_time_in_millis value=98
- elasticsearch_jvm_gc_collectors_old_collection_count value=1
- elasticsearch_jvm_gc_collectors_old_collection_time_in_millis value=24
- elasticsearch_jvm_buffer_pools_direct_count value=40
- elasticsearch_jvm_buffer_pools_direct_used_in_bytes value=6304239
- elasticsearch_jvm_buffer_pools_direct_total_capacity_in_bytes value=6304239
- elasticsearch_jvm_buffer_pools_mapped_count value=0
- elasticsearch_jvm_buffer_pools_mapped_used_in_bytes value=0
- elasticsearch_jvm_buffer_pools_mapped_total_capacity_in_bytes value=0
- elasticsearch_jvm
- timestamp value=1436460392945
- uptime_in_millis value=202245
- mem_non_heap_used_in_bytes value=39634576
- mem_non_heap_committed_in_bytes value=40841216
- mem_pools_young_max_in_bytes value=279183360
- mem_pools_young_peak_used_in_bytes value=71630848
- mem_pools_young_peak_max_in_bytes value=279183360
- mem_pools_young_used_in_bytes value=32685760
- mem_pools_survivor_peak_used_in_bytes value=8912888
- mem_pools_survivor_peak_max_in_bytes value=34865152
- mem_pools_survivor_used_in_bytes value=8912880
- mem_pools_survivor_max_in_bytes value=34865152
- mem_pools_old_peak_max_in_bytes value=724828160
- mem_pools_old_used_in_bytes value=11110928
- mem_pools_old_max_in_bytes value=724828160
- mem_pools_old_peak_used_in_bytes value=14354608
- mem_heap_used_in_bytes value=52709568
- mem_heap_used_percent value=5
- mem_heap_committed_in_bytes value=259522560
- mem_heap_max_in_bytes value=1038876672
- threads_peak_count value=45
- threads_count value=44
- gc_collectors_young_collection_count value=2
- gc_collectors_young_collection_time_in_millis value=98
- gc_collectors_old_collection_count value=1
- gc_collectors_old_collection_time_in_millis value=24
- buffer_pools_direct_count value=40
- buffer_pools_direct_used_in_bytes value=6304239
- buffer_pools_direct_total_capacity_in_bytes value=6304239
- buffer_pools_mapped_count value=0
- buffer_pools_mapped_used_in_bytes value=0
- buffer_pools_mapped_total_capacity_in_bytes value=0
TCP information measurement names:
- elasticsearch_network_tcp_in_errs value=0
- elasticsearch_network_tcp_passive_opens value=16
- elasticsearch_network_tcp_curr_estab value=29
- elasticsearch_network_tcp_in_segs value=113
- elasticsearch_network_tcp_out_segs value=97
- elasticsearch_network_tcp_retrans_segs value=0
- elasticsearch_network_tcp_attempt_fails value=0
- elasticsearch_network_tcp_active_opens value=13
- elasticsearch_network_tcp_estab_resets value=0
- elasticsearch_network_tcp_out_rsts value=0
- elasticsearch_network
- tcp_in_errs value=0
- tcp_passive_opens value=16
- tcp_curr_estab value=29
- tcp_in_segs value=113
- tcp_out_segs value=97
- tcp_retrans_segs value=0
- tcp_attempt_fails value=0
- tcp_active_opens value=13
- tcp_estab_resets value=0
- tcp_out_rsts value=0
Operating system stats, load average, cpu, mem, swap measurement names:
- elasticsearch_os_swap_used_in_bytes value=0
- elasticsearch_os_swap_free_in_bytes value=487997440
- elasticsearch_os_timestamp value=1436460392944
- elasticsearch_os_uptime_in_millis value=25092
- elasticsearch_os_cpu_sys value=0
- elasticsearch_os_cpu_user value=0
- elasticsearch_os_cpu_idle value=99
- elasticsearch_os_cpu_usage value=0
- elasticsearch_os_cpu_stolen value=0
- elasticsearch_os_mem_free_percent value=74
- elasticsearch_os_mem_used_percent value=25
- elasticsearch_os_mem_actual_free_in_bytes value=1565470720
- elasticsearch_os_mem_actual_used_in_bytes value=534159360
- elasticsearch_os_mem_free_in_bytes value=477761536
- elasticsearch_os_mem_used_in_bytes value=1621868544
- elasticsearch_os
- swap_used_in_bytes value=0
- swap_free_in_bytes value=487997440
- timestamp value=1436460392944
- uptime_in_millis value=25092
- cpu_sys value=0
- cpu_user value=0
- cpu_idle value=99
- cpu_usage value=0
- cpu_stolen value=0
- mem_free_percent value=74
- mem_used_percent value=25
- mem_actual_free_in_bytes value=1565470720
- mem_actual_used_in_bytes value=534159360
- mem_free_in_bytes value=477761536
- mem_used_in_bytes value=1621868544
Process statistics, memory consumption, cpu usage, open file descriptors measurement names:
- elasticsearch_process_mem_resident_in_bytes value=246382592
- elasticsearch_process_mem_share_in_bytes value=18747392
- elasticsearch_process_mem_total_virtual_in_bytes value=4747890688
- elasticsearch_process_timestamp value=1436460392945
- elasticsearch_process_open_file_descriptors value=160
- elasticsearch_process_cpu_total_in_millis value=15480
- elasticsearch_process_cpu_percent value=2
- elasticsearch_process_cpu_sys_in_millis value=1870
- elasticsearch_process_cpu_user_in_millis value=13610
- elasticsearch_process
- mem_resident_in_bytes value=246382592
- mem_share_in_bytes value=18747392
- mem_total_virtual_in_bytes value=4747890688
- timestamp value=1436460392945
- open_file_descriptors value=160
- cpu_total_in_millis value=15480
- cpu_percent value=2
- cpu_sys_in_millis value=1870
- cpu_user_in_millis value=13610
Statistics about each thread pool, including current size, queue and rejected tasks measurement names:
- elasticsearch_thread_pool_merge_threads value=6
- elasticsearch_thread_pool_merge_queue value=4
- elasticsearch_thread_pool_merge_active value=5
- elasticsearch_thread_pool_merge_rejected value=2
- elasticsearch_thread_pool_merge_largest value=5
- elasticsearch_thread_pool_merge_completed value=1
- elasticsearch_thread_pool_bulk_threads value=4
- elasticsearch_thread_pool_bulk_queue value=5
- elasticsearch_thread_pool_bulk_active value=7
- elasticsearch_thread_pool_bulk_rejected value=3
- elasticsearch_thread_pool_bulk_largest value=1
- elasticsearch_thread_pool_bulk_completed value=4
- elasticsearch_thread_pool_warmer_threads value=2
- elasticsearch_thread_pool_warmer_queue value=7
- elasticsearch_thread_pool_warmer_active value=3
- elasticsearch_thread_pool_warmer_rejected value=2
- elasticsearch_thread_pool_warmer_largest value=3
- elasticsearch_thread_pool_warmer_completed value=1
- elasticsearch_thread_pool_get_largest value=2
- elasticsearch_thread_pool_get_completed value=1
- elasticsearch_thread_pool_get_threads value=1
- elasticsearch_thread_pool_get_queue value=8
- elasticsearch_thread_pool_get_active value=4
- elasticsearch_thread_pool_get_rejected value=3
- elasticsearch_thread_pool_index_threads value=6
- elasticsearch_thread_pool_index_queue value=8
- elasticsearch_thread_pool_index_active value=4
- elasticsearch_thread_pool_index_rejected value=2
- elasticsearch_thread_pool_index_largest value=3
- elasticsearch_thread_pool_index_completed value=6
- elasticsearch_thread_pool_suggest_threads value=2
- elasticsearch_thread_pool_suggest_queue value=7
- elasticsearch_thread_pool_suggest_active value=2
- elasticsearch_thread_pool_suggest_rejected value=1
- elasticsearch_thread_pool_suggest_largest value=8
- elasticsearch_thread_pool_suggest_completed value=3
- elasticsearch_thread_pool_fetch_shard_store_queue value=7
- elasticsearch_thread_pool_fetch_shard_store_active value=4
- elasticsearch_thread_pool_fetch_shard_store_rejected value=2
- elasticsearch_thread_pool_fetch_shard_store_largest value=4
- elasticsearch_thread_pool_fetch_shard_store_completed value=1
- elasticsearch_thread_pool_fetch_shard_store_threads value=1
- elasticsearch_thread_pool_management_threads value=2
- elasticsearch_thread_pool_management_queue value=3
- elasticsearch_thread_pool_management_active value=1
- elasticsearch_thread_pool_management_rejected value=6
- elasticsearch_thread_pool_management_largest value=2
- elasticsearch_thread_pool_management_completed value=22
- elasticsearch_thread_pool_percolate_queue value=23
- elasticsearch_thread_pool_percolate_active value=13
- elasticsearch_thread_pool_percolate_rejected value=235
- elasticsearch_thread_pool_percolate_largest value=23
- elasticsearch_thread_pool_percolate_completed value=33
- elasticsearch_thread_pool_percolate_threads value=123
- elasticsearch_thread_pool_listener_active value=4
- elasticsearch_thread_pool_listener_rejected value=8
- elasticsearch_thread_pool_listener_largest value=1
- elasticsearch_thread_pool_listener_completed value=1
- elasticsearch_thread_pool_listener_threads value=1
- elasticsearch_thread_pool_listener_queue value=2
- elasticsearch_thread_pool_search_rejected value=7
- elasticsearch_thread_pool_search_largest value=2
- elasticsearch_thread_pool_search_completed value=4
- elasticsearch_thread_pool_search_threads value=5
- elasticsearch_thread_pool_search_queue value=7
- elasticsearch_thread_pool_search_active value=2
- elasticsearch_thread_pool_fetch_shard_started_threads value=3
- elasticsearch_thread_pool_fetch_shard_started_queue value=1
- elasticsearch_thread_pool_fetch_shard_started_active value=5
- elasticsearch_thread_pool_fetch_shard_started_rejected value=6
- elasticsearch_thread_pool_fetch_shard_started_largest value=4
- elasticsearch_thread_pool_fetch_shard_started_completed value=54
- elasticsearch_thread_pool_refresh_rejected value=4
- elasticsearch_thread_pool_refresh_largest value=8
- elasticsearch_thread_pool_refresh_completed value=3
- elasticsearch_thread_pool_refresh_threads value=23
- elasticsearch_thread_pool_refresh_queue value=7
- elasticsearch_thread_pool_refresh_active value=3
- elasticsearch_thread_pool_optimize_threads value=3
- elasticsearch_thread_pool_optimize_queue value=4
- elasticsearch_thread_pool_optimize_active value=1
- elasticsearch_thread_pool_optimize_rejected value=2
- elasticsearch_thread_pool_optimize_largest value=7
- elasticsearch_thread_pool_optimize_completed value=3
- elasticsearch_thread_pool_snapshot_largest value=1
- elasticsearch_thread_pool_snapshot_completed value=0
- elasticsearch_thread_pool_snapshot_threads value=8
- elasticsearch_thread_pool_snapshot_queue value=5
- elasticsearch_thread_pool_snapshot_active value=6
- elasticsearch_thread_pool_snapshot_rejected value=2
- elasticsearch_thread_pool_generic_threads value=1
- elasticsearch_thread_pool_generic_queue value=4
- elasticsearch_thread_pool_generic_active value=6
- elasticsearch_thread_pool_generic_rejected value=3
- elasticsearch_thread_pool_generic_largest value=2
- elasticsearch_thread_pool_generic_completed value=27
- elasticsearch_thread_pool_flush_threads value=3
- elasticsearch_thread_pool_flush_queue value=8
- elasticsearch_thread_pool_flush_active value=0
- elasticsearch_thread_pool_flush_rejected value=1
- elasticsearch_thread_pool_flush_largest value=5
- elasticsearch_thread_pool_flush_completed value=3
- elasticsearch_thread_pool
- merge_threads value=6
- merge_queue value=4
- merge_active value=5
- merge_rejected value=2
- merge_largest value=5
- merge_completed value=1
- bulk_threads value=4
- bulk_queue value=5
- bulk_active value=7
- bulk_rejected value=3
- bulk_largest value=1
- bulk_completed value=4
- warmer_threads value=2
- warmer_queue value=7
- warmer_active value=3
- warmer_rejected value=2
- warmer_largest value=3
- warmer_completed value=1
- get_largest value=2
- get_completed value=1
- get_threads value=1
- get_queue value=8
- get_active value=4
- get_rejected value=3
- index_threads value=6
- index_queue value=8
- index_active value=4
- index_rejected value=2
- index_largest value=3
- index_completed value=6
- suggest_threads value=2
- suggest_queue value=7
- suggest_active value=2
- suggest_rejected value=1
- suggest_largest value=8
- suggest_completed value=3
- fetch_shard_store_queue value=7
- fetch_shard_store_active value=4
- fetch_shard_store_rejected value=2
- fetch_shard_store_largest value=4
- fetch_shard_store_completed value=1
- fetch_shard_store_threads value=1
- management_threads value=2
- management_queue value=3
- management_active value=1
- management_rejected value=6
- management_largest value=2
- management_completed value=22
- percolate_queue value=23
- percolate_active value=13
- percolate_rejected value=235
- percolate_largest value=23
- percolate_completed value=33
- percolate_threads value=123
- listener_active value=4
- listener_rejected value=8
- listener_largest value=1
- listener_completed value=1
- listener_threads value=1
- listener_queue value=2
- search_rejected value=7
- search_largest value=2
- search_completed value=4
- search_threads value=5
- search_queue value=7
- search_active value=2
- fetch_shard_started_threads value=3
- fetch_shard_started_queue value=1
- fetch_shard_started_active value=5
- fetch_shard_started_rejected value=6
- fetch_shard_started_largest value=4
- fetch_shard_started_completed value=54
- refresh_rejected value=4
- refresh_largest value=8
- refresh_completed value=3
- refresh_threads value=23
- refresh_queue value=7
- refresh_active value=3
- optimize_threads value=3
- optimize_queue value=4
- optimize_active value=1
- optimize_rejected value=2
- optimize_largest value=7
- optimize_completed value=3
- snapshot_largest value=1
- snapshot_completed value=0
- snapshot_threads value=8
- snapshot_queue value=5
- snapshot_active value=6
- snapshot_rejected value=2
- generic_threads value=1
- generic_queue value=4
- generic_active value=6
- generic_rejected value=3
- generic_largest value=2
- generic_completed value=27
- flush_threads value=3
- flush_queue value=8
- flush_active value=0
- flush_rejected value=1
- flush_largest value=5
- flush_completed value=3
Transport statistics about sent and received bytes in cluster communication measurement names:
- elasticsearch_transport_server_open value=13
- elasticsearch_transport_rx_count value=6
- elasticsearch_transport_rx_size_in_bytes value=1380
- elasticsearch_transport_tx_count value=6
- elasticsearch_transport_tx_size_in_bytes value=1380
- elasticsearch_transport
- server_open value=13
- rx_count value=6
- rx_size_in_bytes value=1380
- tx_count value=6
- tx_size_in_bytes value=1380


@ -91,193 +91,12 @@ func (gh *GithubWebhooks) eventHandler(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
}
func newCommitComment(data []byte) (Event, error) {
commitCommentStruct := CommitCommentEvent{}
err := json.Unmarshal(data, &commitCommentStruct)
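// generateEvent unmarshals the raw webhook payload into the supplied
// Event and returns it, replacing the per-event-type constructors.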
func generateEvent(data []byte, event Event) (Event, error) {
err := json.Unmarshal(data, event)
if err != nil {
return nil, err
}
return commitCommentStruct, nil
}
func newCreate(data []byte) (Event, error) {
createStruct := CreateEvent{}
err := json.Unmarshal(data, &createStruct)
if err != nil {
return nil, err
}
return createStruct, nil
}
func newDelete(data []byte) (Event, error) {
deleteStruct := DeleteEvent{}
err := json.Unmarshal(data, &deleteStruct)
if err != nil {
return nil, err
}
return deleteStruct, nil
}
func newDeployment(data []byte) (Event, error) {
deploymentStruct := DeploymentEvent{}
err := json.Unmarshal(data, &deploymentStruct)
if err != nil {
return nil, err
}
return deploymentStruct, nil
}
func newDeploymentStatus(data []byte) (Event, error) {
deploymentStatusStruct := DeploymentStatusEvent{}
err := json.Unmarshal(data, &deploymentStatusStruct)
if err != nil {
return nil, err
}
return deploymentStatusStruct, nil
}
func newFork(data []byte) (Event, error) {
forkStruct := ForkEvent{}
err := json.Unmarshal(data, &forkStruct)
if err != nil {
return nil, err
}
return forkStruct, nil
}
func newGollum(data []byte) (Event, error) {
gollumStruct := GollumEvent{}
err := json.Unmarshal(data, &gollumStruct)
if err != nil {
return nil, err
}
return gollumStruct, nil
}
func newIssueComment(data []byte) (Event, error) {
issueCommentStruct := IssueCommentEvent{}
err := json.Unmarshal(data, &issueCommentStruct)
if err != nil {
return nil, err
}
return issueCommentStruct, nil
}
func newIssues(data []byte) (Event, error) {
issuesStruct := IssuesEvent{}
err := json.Unmarshal(data, &issuesStruct)
if err != nil {
return nil, err
}
return issuesStruct, nil
}
func newMember(data []byte) (Event, error) {
memberStruct := MemberEvent{}
err := json.Unmarshal(data, &memberStruct)
if err != nil {
return nil, err
}
return memberStruct, nil
}
func newMembership(data []byte) (Event, error) {
membershipStruct := MembershipEvent{}
err := json.Unmarshal(data, &membershipStruct)
if err != nil {
return nil, err
}
return membershipStruct, nil
}
func newPageBuild(data []byte) (Event, error) {
pageBuildEvent := PageBuildEvent{}
err := json.Unmarshal(data, &pageBuildEvent)
if err != nil {
return nil, err
}
return pageBuildEvent, nil
}
func newPublic(data []byte) (Event, error) {
publicEvent := PublicEvent{}
err := json.Unmarshal(data, &publicEvent)
if err != nil {
return nil, err
}
return publicEvent, nil
}
func newPullRequest(data []byte) (Event, error) {
pullRequestStruct := PullRequestEvent{}
err := json.Unmarshal(data, &pullRequestStruct)
if err != nil {
return nil, err
}
return pullRequestStruct, nil
}
func newPullRequestReviewComment(data []byte) (Event, error) {
pullRequestReviewCommentStruct := PullRequestReviewCommentEvent{}
err := json.Unmarshal(data, &pullRequestReviewCommentStruct)
if err != nil {
return nil, err
}
return pullRequestReviewCommentStruct, nil
}
func newPush(data []byte) (Event, error) {
pushStruct := PushEvent{}
err := json.Unmarshal(data, &pushStruct)
if err != nil {
return nil, err
}
return pushStruct, nil
}
func newRelease(data []byte) (Event, error) {
releaseStruct := ReleaseEvent{}
err := json.Unmarshal(data, &releaseStruct)
if err != nil {
return nil, err
}
return releaseStruct, nil
}
func newRepository(data []byte) (Event, error) {
repositoryStruct := RepositoryEvent{}
err := json.Unmarshal(data, &repositoryStruct)
if err != nil {
return nil, err
}
return repositoryStruct, nil
}
func newStatus(data []byte) (Event, error) {
statusStruct := StatusEvent{}
err := json.Unmarshal(data, &statusStruct)
if err != nil {
return nil, err
}
return statusStruct, nil
}
func newTeamAdd(data []byte) (Event, error) {
teamAddStruct := TeamAddEvent{}
err := json.Unmarshal(data, &teamAddStruct)
if err != nil {
return nil, err
}
return teamAddStruct, nil
}
func newWatch(data []byte) (Event, error) {
watchStruct := WatchEvent{}
err := json.Unmarshal(data, &watchStruct)
if err != nil {
return nil, err
}
return watchStruct, nil
return event, nil
}
type newEventError struct {
@ -288,51 +107,51 @@ func (e *newEventError) Error() string {
return e.s
}
func NewEvent(r []byte, t string) (Event, error) {
log.Printf("New %v event recieved", t)
switch t {
func NewEvent(data []byte, name string) (Event, error) {
log.Printf("New %v event received", name)
switch name {
case "commit_comment":
return newCommitComment(r)
return generateEvent(data, &CommitCommentEvent{})
case "create":
return newCreate(r)
return generateEvent(data, &CreateEvent{})
case "delete":
return newDelete(r)
return generateEvent(data, &DeleteEvent{})
case "deployment":
return newDeployment(r)
return generateEvent(data, &DeploymentEvent{})
case "deployment_status":
return newDeploymentStatus(r)
return generateEvent(data, &DeploymentStatusEvent{})
case "fork":
return newFork(r)
return generateEvent(data, &ForkEvent{})
case "gollum":
return newGollum(r)
return generateEvent(data, &GollumEvent{})
case "issue_comment":
return newIssueComment(r)
return generateEvent(data, &IssueCommentEvent{})
case "issues":
return newIssues(r)
return generateEvent(data, &IssuesEvent{})
case "member":
return newMember(r)
return generateEvent(data, &MemberEvent{})
case "membership":
return newMembership(r)
return generateEvent(data, &MembershipEvent{})
case "page_build":
return newPageBuild(r)
return generateEvent(data, &PageBuildEvent{})
case "public":
return newPublic(r)
return generateEvent(data, &PublicEvent{})
case "pull_request":
return newPullRequest(r)
return generateEvent(data, &PullRequestEvent{})
case "pull_request_review_comment":
return newPullRequestReviewComment(r)
return generateEvent(data, &PullRequestReviewCommentEvent{})
case "push":
return newPush(r)
return generateEvent(data, &PushEvent{})
case "release":
return newRelease(r)
return generateEvent(data, &ReleaseEvent{})
case "repository":
return newRepository(r)
return generateEvent(data, &RepositoryEvent{})
case "status":
return newStatus(r)
return generateEvent(data, &StatusEvent{})
case "team_add":
return newTeamAdd(r)
return generateEvent(data, &TeamAddEvent{})
case "watch":
return newWatch(r)
return generateEvent(data, &WatchEvent{})
}
return nil, &newEventError{"Not a recgonized event type"}
return nil, &newEventError{"Not a recognized event type"}
}


@ -7,231 +7,89 @@ import (
"testing"
)
func TestCommitCommentEvent(t *testing.T) {
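// GithubWebhookRequest posts the given JSON payload with the given
// X-Github-Event header and fails the test if the handler does not
// respond with HTTP 200.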
func GithubWebhookRequest(event string, jsonString string, t *testing.T) {
gh := NewGithubWebhooks()
jsonString := CommitCommentEventJSON()
req, _ := http.NewRequest("POST", "/", strings.NewReader(jsonString))
req.Header.Add("X-Github-Event", "commit_comment")
req.Header.Add("X-Github-Event", event)
w := httptest.NewRecorder()
gh.eventHandler(w, req)
if w.Code != http.StatusOK {
t.Errorf("POST commit_comment returned HTTP status code %v.\nExpected %v", w.Code, http.StatusOK)
t.Errorf("POST "+event+" returned HTTP status code %v.\nExpected %v", w.Code, http.StatusOK)
}
}
func TestCommitCommentEvent(t *testing.T) {
GithubWebhookRequest("commit_comment", CommitCommentEventJSON(), t)
}
func TestDeleteEvent(t *testing.T) {
gh := NewGithubWebhooks()
jsonString := DeleteEventJSON()
req, _ := http.NewRequest("POST", "/", strings.NewReader(jsonString))
req.Header.Add("X-Github-Event", "delete")
w := httptest.NewRecorder()
gh.eventHandler(w, req)
if w.Code != http.StatusOK {
t.Errorf("POST commit_comment returned HTTP status code %v.\nExpected %v", w.Code, http.StatusOK)
}
GithubWebhookRequest("delete", DeleteEventJSON(), t)
}
func TestDeploymentEvent(t *testing.T) {
gh := NewGithubWebhooks()
jsonString := DeploymentEventJSON()
req, _ := http.NewRequest("POST", "/", strings.NewReader(jsonString))
req.Header.Add("X-Github-Event", "deployment")
w := httptest.NewRecorder()
gh.eventHandler(w, req)
if w.Code != http.StatusOK {
t.Errorf("POST commit_comment returned HTTP status code %v.\nExpected %v", w.Code, http.StatusOK)
}
GithubWebhookRequest("deployment", DeploymentEventJSON(), t)
}
func TestDeploymentStatusEvent(t *testing.T) {
gh := NewGithubWebhooks()
jsonString := DeploymentStatusEventJSON()
req, _ := http.NewRequest("POST", "/", strings.NewReader(jsonString))
req.Header.Add("X-Github-Event", "deployment_status")
w := httptest.NewRecorder()
gh.eventHandler(w, req)
if w.Code != http.StatusOK {
t.Errorf("POST commit_comment returned HTTP status code %v.\nExpected %v", w.Code, http.StatusOK)
}
GithubWebhookRequest("deployment_status", DeploymentStatusEventJSON(), t)
}
func TestForkEvent(t *testing.T) {
gh := NewGithubWebhooks()
jsonString := ForkEventJSON()
req, _ := http.NewRequest("POST", "/", strings.NewReader(jsonString))
req.Header.Add("X-Github-Event", "fork")
w := httptest.NewRecorder()
gh.eventHandler(w, req)
if w.Code != http.StatusOK {
t.Errorf("POST commit_comment returned HTTP status code %v.\nExpected %v", w.Code, http.StatusOK)
}
GithubWebhookRequest("fork", ForkEventJSON(), t)
}
func TestGollumEvent(t *testing.T) {
gh := NewGithubWebhooks()
jsonString := GollumEventJSON()
req, _ := http.NewRequest("POST", "/", strings.NewReader(jsonString))
req.Header.Add("X-Github-Event", "gollum")
w := httptest.NewRecorder()
gh.eventHandler(w, req)
if w.Code != http.StatusOK {
t.Errorf("POST commit_comment returned HTTP status code %v.\nExpected %v", w.Code, http.StatusOK)
}
GithubWebhookRequest("gollum", GollumEventJSON(), t)
}
func TestIssueCommentEvent(t *testing.T) {
gh := NewGithubWebhooks()
jsonString := IssueCommentEventJSON()
req, _ := http.NewRequest("POST", "/", strings.NewReader(jsonString))
req.Header.Add("X-Github-Event", "issue_comment")
w := httptest.NewRecorder()
gh.eventHandler(w, req)
if w.Code != http.StatusOK {
t.Errorf("POST commit_comment returned HTTP status code %v.\nExpected %v", w.Code, http.StatusOK)
}
GithubWebhookRequest("issue_comment", IssueCommentEventJSON(), t)
}
func TestIssuesEvent(t *testing.T) {
gh := NewGithubWebhooks()
jsonString := IssuesEventJSON()
req, _ := http.NewRequest("POST", "/", strings.NewReader(jsonString))
req.Header.Add("X-Github-Event", "issues")
w := httptest.NewRecorder()
gh.eventHandler(w, req)
if w.Code != http.StatusOK {
t.Errorf("POST commit_comment returned HTTP status code %v.\nExpected %v", w.Code, http.StatusOK)
}
GithubWebhookRequest("issues", IssuesEventJSON(), t)
}
func TestMemberEvent(t *testing.T) {
gh := NewGithubWebhooks()
jsonString := MemberEventJSON()
req, _ := http.NewRequest("POST", "/", strings.NewReader(jsonString))
req.Header.Add("X-Github-Event", "member")
w := httptest.NewRecorder()
gh.eventHandler(w, req)
if w.Code != http.StatusOK {
t.Errorf("POST commit_comment returned HTTP status code %v.\nExpected %v", w.Code, http.StatusOK)
}
GithubWebhookRequest("member", MemberEventJSON(), t)
}
func TestMembershipEvent(t *testing.T) {
gh := NewGithubWebhooks()
jsonString := MembershipEventJSON()
req, _ := http.NewRequest("POST", "/", strings.NewReader(jsonString))
req.Header.Add("X-Github-Event", "membership")
w := httptest.NewRecorder()
gh.eventHandler(w, req)
if w.Code != http.StatusOK {
t.Errorf("POST commit_comment returned HTTP status code %v.\nExpected %v", w.Code, http.StatusOK)
}
GithubWebhookRequest("membership", MembershipEventJSON(), t)
}
func TestPageBuildEvent(t *testing.T) {
gh := NewGithubWebhooks()
jsonString := PageBuildEventJSON()
req, _ := http.NewRequest("POST", "/", strings.NewReader(jsonString))
req.Header.Add("X-Github-Event", "page_build")
w := httptest.NewRecorder()
gh.eventHandler(w, req)
if w.Code != http.StatusOK {
t.Errorf("POST commit_comment returned HTTP status code %v.\nExpected %v", w.Code, http.StatusOK)
}
GithubWebhookRequest("page_build", PageBuildEventJSON(), t)
}
func TestPublicEvent(t *testing.T) {
gh := NewGithubWebhooks()
jsonString := PublicEventJSON()
req, _ := http.NewRequest("POST", "/", strings.NewReader(jsonString))
req.Header.Add("X-Github-Event", "public")
w := httptest.NewRecorder()
gh.eventHandler(w, req)
if w.Code != http.StatusOK {
t.Errorf("POST commit_comment returned HTTP status code %v.\nExpected %v", w.Code, http.StatusOK)
}
GithubWebhookRequest("public", PublicEventJSON(), t)
}
func TestPullRequestReviewCommentEvent(t *testing.T) {
gh := NewGithubWebhooks()
jsonString := PullRequestReviewCommentEventJSON()
req, _ := http.NewRequest("POST", "/", strings.NewReader(jsonString))
req.Header.Add("X-Github-Event", "pull_request_review_comment")
w := httptest.NewRecorder()
gh.eventHandler(w, req)
if w.Code != http.StatusOK {
t.Errorf("POST commit_comment returned HTTP status code %v.\nExpected %v", w.Code, http.StatusOK)
}
GithubWebhookRequest("pull_request_review_comment", PullRequestReviewCommentEventJSON(), t)
}
func TestPushEvent(t *testing.T) {
gh := NewGithubWebhooks()
jsonString := PushEventJSON()
req, _ := http.NewRequest("POST", "/", strings.NewReader(jsonString))
req.Header.Add("X-Github-Event", "push")
w := httptest.NewRecorder()
gh.eventHandler(w, req)
if w.Code != http.StatusOK {
t.Errorf("POST commit_comment returned HTTP status code %v.\nExpected %v", w.Code, http.StatusOK)
}
GithubWebhookRequest("push", PushEventJSON(), t)
}
func TestReleaseEvent(t *testing.T) {
gh := NewGithubWebhooks()
jsonString := ReleaseEventJSON()
req, _ := http.NewRequest("POST", "/", strings.NewReader(jsonString))
req.Header.Add("X-Github-Event", "release")
w := httptest.NewRecorder()
gh.eventHandler(w, req)
if w.Code != http.StatusOK {
t.Errorf("POST commit_comment returned HTTP status code %v.\nExpected %v", w.Code, http.StatusOK)
}
GithubWebhookRequest("release", ReleaseEventJSON(), t)
}
func TestRepositoryEvent(t *testing.T) {
gh := NewGithubWebhooks()
jsonString := RepositoryEventJSON()
req, _ := http.NewRequest("POST", "/", strings.NewReader(jsonString))
req.Header.Add("X-Github-Event", "repository")
w := httptest.NewRecorder()
gh.eventHandler(w, req)
if w.Code != http.StatusOK {
t.Errorf("POST commit_comment returned HTTP status code %v.\nExpected %v", w.Code, http.StatusOK)
}
GithubWebhookRequest("repository", RepositoryEventJSON(), t)
}
func TestStatusEvent(t *testing.T) {
gh := NewGithubWebhooks()
jsonString := StatusEventJSON()
req, _ := http.NewRequest("POST", "/", strings.NewReader(jsonString))
req.Header.Add("X-Github-Event", "status")
w := httptest.NewRecorder()
gh.eventHandler(w, req)
if w.Code != http.StatusOK {
t.Errorf("POST commit_comment returned HTTP status code %v.\nExpected %v", w.Code, http.StatusOK)
}
GithubWebhookRequest("status", StatusEventJSON(), t)
}
func TestTeamAddEvent(t *testing.T) {
gh := NewGithubWebhooks()
jsonString := TeamAddEventJSON()
req, _ := http.NewRequest("POST", "/", strings.NewReader(jsonString))
req.Header.Add("X-Github-Event", "team_add")
w := httptest.NewRecorder()
gh.eventHandler(w, req)
if w.Code != http.StatusOK {
t.Errorf("POST commit_comment returned HTTP status code %v.\nExpected %v", w.Code, http.StatusOK)
}
GithubWebhookRequest("team_add", TeamAddEventJSON(), t)
}
func TestWatchEvent(t *testing.T) {
gh := NewGithubWebhooks()
jsonString := WatchEventJSON()
req, _ := http.NewRequest("POST", "/", strings.NewReader(jsonString))
req.Header.Add("X-Github-Event", "watch")
w := httptest.NewRecorder()
gh.eventHandler(w, req)
if w.Code != http.StatusOK {
t.Errorf("POST commit_comment returned HTTP status code %v.\nExpected %v", w.Code, http.StatusOK)
}
GithubWebhookRequest("watch", WatchEventJSON(), t)
}


@ -5,23 +5,23 @@ This input plugin will test HTTP/HTTPS connections.
### Configuration:
```
# List of UDP/TCP connections you want to check
# HTTP/HTTPS request given an address, a method, and a timeout
[[inputs.http_response]]
## Server address (default http://localhost)
address = "http://github.com"
## Set response_timeout (default 5 seconds)
response_timeout = 5
response_timeout = "5s"
## HTTP Request Method
method = "GET"
## HTTP Request Headers
[inputs.http_response.headers]
Host = github.com
## Whether to follow redirects from the server (defaults to false)
follow_redirects = true
## HTTP Request Headers (all values must be strings)
# [inputs.http_response.headers]
# Host = "github.com"
## Optional HTTP Request Body
body = '''
{'fake':'data'}
'''
# body = '''
# {'fake':'data'}
# '''
```
### Measurements & Fields:


@ -9,6 +9,7 @@ import (
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
)
@ -17,7 +18,7 @@ type HTTPResponse struct {
Address string
Body string
Method string
ResponseTimeout int
ResponseTimeout internal.Duration
Headers map[string]string
FollowRedirects bool
}
@ -31,7 +32,7 @@ var sampleConfig = `
## Server address (default http://localhost)
address = "http://github.com"
## Set response_timeout (default 5 seconds)
response_timeout = 5
response_timeout = "5s"
## HTTP Request Method
method = "GET"
## Whether to follow redirects from the server (defaults to false)
@ -57,7 +58,7 @@ var ErrRedirectAttempted = errors.New("redirect")
// timeout period and can follow redirects if specified
func CreateHttpClient(followRedirects bool, ResponseTimeout time.Duration) *http.Client {
client := &http.Client{
Timeout: time.Second * ResponseTimeout,
Timeout: ResponseTimeout,
}
if followRedirects == false {
@ -68,22 +69,12 @@ func CreateHttpClient(followRedirects bool, ResponseTimeout time.Duration) *http
return client
}
// CreateHeaders takes a map of header strings and puts them
// into a http.Header Object
func CreateHeaders(headers map[string]string) http.Header {
httpHeaders := make(http.Header)
for key := range headers {
httpHeaders.Add(key, headers[key])
}
return httpHeaders
}
// HTTPGather gathers all fields and returns any errors it encounters
func (h *HTTPResponse) HTTPGather() (map[string]interface{}, error) {
// Prepare fields
fields := make(map[string]interface{})
client := CreateHttpClient(h.FollowRedirects, time.Duration(h.ResponseTimeout))
client := CreateHttpClient(h.FollowRedirects, h.ResponseTimeout.Duration)
var body io.Reader
if h.Body != "" {
@ -93,7 +84,13 @@ func (h *HTTPResponse) HTTPGather() (map[string]interface{}, error) {
if err != nil {
return nil, err
}
request.Header = CreateHeaders(h.Headers)
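// Go's net/http ignores a "Host" entry in request.Header; the host
// must be set on request.Host instead, hence the special case below.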
for key, val := range h.Headers {
request.Header.Add(key, val)
if key == "Host" {
request.Host = val
}
}
// Start Timer
start := time.Now()
@ -117,8 +114,8 @@ func (h *HTTPResponse) HTTPGather() (map[string]interface{}, error) {
// Gather gets all metric fields and tags and returns any errors it encounters
func (h *HTTPResponse) Gather(acc telegraf.Accumulator) error {
// Set default values
if h.ResponseTimeout < 1 {
h.ResponseTimeout = 5
if h.ResponseTimeout.Duration < time.Second {
h.ResponseTimeout.Duration = time.Second * 5
}
// Check send and expected string
if h.Method == "" {


@ -2,28 +2,17 @@ package http_response
import (
"fmt"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"io/ioutil"
"net/http"
"net/http/httptest"
"testing"
"time"
)
func TestCreateHeaders(t *testing.T) {
fakeHeaders := map[string]string{
"Accept": "text/plain",
"Content-Type": "application/json",
"Cache-Control": "no-cache",
}
headers := CreateHeaders(fakeHeaders)
testHeaders := make(http.Header)
testHeaders.Add("Accept", "text/plain")
testHeaders.Add("Content-Type", "application/json")
testHeaders.Add("Cache-Control", "no-cache")
assert.Equal(t, testHeaders, headers)
}
"github.com/influxdata/telegraf/internal"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func setUpTestMux() http.Handler {
mux := http.NewServeMux()
@ -63,6 +52,33 @@ func setUpTestMux() http.Handler {
return mux
}
func TestHeaders(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
cHeader := r.Header.Get("Content-Type")
assert.Equal(t, "Hello", r.Host)
assert.Equal(t, "application/json", cHeader)
w.WriteHeader(http.StatusOK)
}))
defer ts.Close()
h := &HTTPResponse{
Address: ts.URL,
Method: "GET",
ResponseTimeout: internal.Duration{Duration: time.Second * 2},
Headers: map[string]string{
"Content-Type": "application/json",
"Host": "Hello",
},
}
fields, err := h.HTTPGather()
require.NoError(t, err)
assert.NotEmpty(t, fields)
if assert.NotNil(t, fields["http_response_code"]) {
assert.Equal(t, http.StatusOK, fields["http_response_code"])
}
assert.NotNil(t, fields["response_time"])
}
func TestFields(t *testing.T) {
mux := setUpTestMux()
ts := httptest.NewServer(mux)
@ -72,7 +88,7 @@ func TestFields(t *testing.T) {
Address: ts.URL + "/good",
Body: "{ 'test': 'data'}",
Method: "GET",
ResponseTimeout: 20,
ResponseTimeout: internal.Duration{Duration: time.Second * 20},
Headers: map[string]string{
"Content-Type": "application/json",
},
@ -85,7 +101,6 @@ func TestFields(t *testing.T) {
assert.Equal(t, http.StatusOK, fields["http_response_code"])
}
assert.NotNil(t, fields["response_time"])
}
func TestRedirects(t *testing.T) {
@ -97,7 +112,7 @@ func TestRedirects(t *testing.T) {
Address: ts.URL + "/redirect",
Body: "{ 'test': 'data'}",
Method: "GET",
ResponseTimeout: 20,
ResponseTimeout: internal.Duration{Duration: time.Second * 20},
Headers: map[string]string{
"Content-Type": "application/json",
},
@ -114,7 +129,7 @@ func TestRedirects(t *testing.T) {
Address: ts.URL + "/badredirect",
Body: "{ 'test': 'data'}",
Method: "GET",
ResponseTimeout: 20,
ResponseTimeout: internal.Duration{Duration: time.Second * 20},
Headers: map[string]string{
"Content-Type": "application/json",
},
@ -133,7 +148,7 @@ func TestMethod(t *testing.T) {
Address: ts.URL + "/mustbepostmethod",
Body: "{ 'test': 'data'}",
Method: "POST",
ResponseTimeout: 20,
ResponseTimeout: internal.Duration{Duration: time.Second * 20},
Headers: map[string]string{
"Content-Type": "application/json",
},
@ -150,7 +165,7 @@ func TestMethod(t *testing.T) {
Address: ts.URL + "/mustbepostmethod",
Body: "{ 'test': 'data'}",
Method: "GET",
ResponseTimeout: 20,
ResponseTimeout: internal.Duration{Duration: time.Second * 20},
Headers: map[string]string{
"Content-Type": "application/json",
},
@ -168,7 +183,7 @@ func TestMethod(t *testing.T) {
Address: ts.URL + "/mustbepostmethod",
Body: "{ 'test': 'data'}",
Method: "head",
ResponseTimeout: 20,
ResponseTimeout: internal.Duration{Duration: time.Second * 20},
Headers: map[string]string{
"Content-Type": "application/json",
},
@ -191,7 +206,7 @@ func TestBody(t *testing.T) {
Address: ts.URL + "/musthaveabody",
Body: "{ 'test': 'data'}",
Method: "GET",
ResponseTimeout: 20,
ResponseTimeout: internal.Duration{Duration: time.Second * 20},
Headers: map[string]string{
"Content-Type": "application/json",
},
@ -207,7 +222,7 @@ func TestBody(t *testing.T) {
h = &HTTPResponse{
Address: ts.URL + "/musthaveabody",
Method: "GET",
ResponseTimeout: 20,
ResponseTimeout: internal.Duration{Duration: time.Second * 20},
Headers: map[string]string{
"Content-Type": "application/json",
},
@ -230,7 +245,7 @@ func TestTimeout(t *testing.T) {
Address: ts.URL + "/twosecondnap",
Body: "{ 'test': 'data'}",
Method: "GET",
ResponseTimeout: 1,
ResponseTimeout: internal.Duration{Duration: time.Second * 1},
Headers: map[string]string{
"Content-Type": "application/json",
},
@ -22,16 +22,78 @@ InfluxDB-formatted endpoints. See below for more information.
### Measurements & Fields
- influxdb
- n_shards
- influxdb_database
- influxdb_httpd
- influxdb_measurement
- influxdb_memstats
- heap_inuse
- heap_released
- mspan_inuse
- total_alloc
- sys
- mallocs
- frees
- heap_idle
- pause_total_ns
- lookups
- heap_sys
- mcache_sys
- next_gc
- gcc_pu_fraction
- other_sys
- alloc
- stack_inuse
- stack_sys
- buck_hash_sys
- gc_sys
- num_gc
- heap_alloc
- heap_objects
- mspan_sys
- mcache_inuse
- last_gc
- influxdb_shard
- influxdb_subscriber
- influxdb_tsm1_cache
- influxdb_tsm1_wal
- influxdb_write
### Example Output:
```
telegraf -config ~/ws/telegraf.conf -input-filter influxdb -test
* Plugin: influxdb, Collection 1
> influxdb_database,database=_internal,host=tyrion,url=http://localhost:8086/debug/vars numMeasurements=10,numSeries=29 1463590500247354636
> influxdb_httpd,bind=:8086,host=tyrion,url=http://localhost:8086/debug/vars req=7,reqActive=1,reqDurationNs=14227734 1463590500247354636
> influxdb_measurement,database=_internal,host=tyrion,measurement=database,url=http://localhost:8086/debug/vars numSeries=1 1463590500247354636
> influxdb_measurement,database=_internal,host=tyrion,measurement=httpd,url=http://localhost:8086/debug/vars numSeries=1 1463590500247354636
> influxdb_measurement,database=_internal,host=tyrion,measurement=measurement,url=http://localhost:8086/debug/vars numSeries=10 1463590500247354636
> influxdb_measurement,database=_internal,host=tyrion,measurement=runtime,url=http://localhost:8086/debug/vars numSeries=1 1463590500247354636
> influxdb_measurement,database=_internal,host=tyrion,measurement=shard,url=http://localhost:8086/debug/vars numSeries=4 1463590500247354636
> influxdb_measurement,database=_internal,host=tyrion,measurement=subscriber,url=http://localhost:8086/debug/vars numSeries=1 1463590500247354636
> influxdb_measurement,database=_internal,host=tyrion,measurement=tsm1_cache,url=http://localhost:8086/debug/vars numSeries=4 1463590500247354636
> influxdb_measurement,database=_internal,host=tyrion,measurement=tsm1_filestore,url=http://localhost:8086/debug/vars numSeries=2 1463590500247354636
> influxdb_measurement,database=_internal,host=tyrion,measurement=tsm1_wal,url=http://localhost:8086/debug/vars numSeries=4 1463590500247354636
> influxdb_measurement,database=_internal,host=tyrion,measurement=write,url=http://localhost:8086/debug/vars numSeries=1 1463590500247354636
> influxdb_memstats,host=tyrion,url=http://localhost:8086/debug/vars alloc=7642384i,buck_hash_sys=1463471i,frees=1169558i,gc_sys=653312i,gcc_pu_fraction=0.00003825652361068311,heap_alloc=7642384i,heap_idle=9912320i,heap_inuse=9125888i,heap_objects=48276i,heap_released=0i,heap_sys=19038208i,last_gc=1463590480877651621i,lookups=90i,mallocs=1217834i,mcache_inuse=4800i,mcache_sys=16384i,mspan_inuse=70920i,mspan_sys=81920i,next_gc=11679787i,num_gc=141i,other_sys=1244233i,pause_total_ns=24034027i,stack_inuse=884736i,stack_sys=884736i,sys=23382264i,total_alloc=679012200i 1463590500277918755
> influxdb_shard,database=_internal,engine=tsm1,host=tyrion,id=4,path=/Users/sparrc/.influxdb/data/_internal/monitor/4,retentionPolicy=monitor,url=http://localhost:8086/debug/vars fieldsCreate=65,seriesCreate=26,writePointsOk=7274,writeReq=280 1463590500247354636
> influxdb_subscriber,host=tyrion,url=http://localhost:8086/debug/vars pointsWritten=7274 1463590500247354636
> influxdb_tsm1_cache,database=_internal,host=tyrion,path=/Users/sparrc/.influxdb/data/_internal/monitor/1,retentionPolicy=monitor,url=http://localhost:8086/debug/vars WALCompactionTimeMs=0,cacheAgeMs=2809192,cachedBytes=0,diskBytes=0,memBytes=0,snapshotCount=0 1463590500247354636
> influxdb_tsm1_cache,database=_internal,host=tyrion,path=/Users/sparrc/.influxdb/data/_internal/monitor/2,retentionPolicy=monitor,url=http://localhost:8086/debug/vars WALCompactionTimeMs=0,cacheAgeMs=2809184,cachedBytes=0,diskBytes=0,memBytes=0,snapshotCount=0 1463590500247354636
> influxdb_tsm1_cache,database=_internal,host=tyrion,path=/Users/sparrc/.influxdb/data/_internal/monitor/3,retentionPolicy=monitor,url=http://localhost:8086/debug/vars WALCompactionTimeMs=0,cacheAgeMs=2809180,cachedBytes=0,diskBytes=0,memBytes=42368,snapshotCount=0 1463590500247354636
> influxdb_tsm1_cache,database=_internal,host=tyrion,path=/Users/sparrc/.influxdb/data/_internal/monitor/4,retentionPolicy=monitor,url=http://localhost:8086/debug/vars WALCompactionTimeMs=0,cacheAgeMs=2799155,cachedBytes=0,diskBytes=0,memBytes=331216,snapshotCount=0 1463590500247354636
> influxdb_tsm1_filestore,database=_internal,host=tyrion,path=/Users/sparrc/.influxdb/data/_internal/monitor/1,retentionPolicy=monitor,url=http://localhost:8086/debug/vars diskBytes=37892 1463590500247354636
> influxdb_tsm1_filestore,database=_internal,host=tyrion,path=/Users/sparrc/.influxdb/data/_internal/monitor/2,retentionPolicy=monitor,url=http://localhost:8086/debug/vars diskBytes=52907 1463590500247354636
> influxdb_tsm1_wal,database=_internal,host=tyrion,path=/Users/sparrc/.influxdb/wal/_internal/monitor/1,retentionPolicy=monitor,url=http://localhost:8086/debug/vars currentSegmentDiskBytes=0,oldSegmentsDiskBytes=0 1463590500247354636
> influxdb_tsm1_wal,database=_internal,host=tyrion,path=/Users/sparrc/.influxdb/wal/_internal/monitor/2,retentionPolicy=monitor,url=http://localhost:8086/debug/vars currentSegmentDiskBytes=0,oldSegmentsDiskBytes=0 1463590500247354636
> influxdb_tsm1_wal,database=_internal,host=tyrion,path=/Users/sparrc/.influxdb/wal/_internal/monitor/3,retentionPolicy=monitor,url=http://localhost:8086/debug/vars currentSegmentDiskBytes=0,oldSegmentsDiskBytes=65651 1463590500247354636
> influxdb_tsm1_wal,database=_internal,host=tyrion,path=/Users/sparrc/.influxdb/wal/_internal/monitor/4,retentionPolicy=monitor,url=http://localhost:8086/debug/vars currentSegmentDiskBytes=495687,oldSegmentsDiskBytes=0 1463590500247354636
> influxdb_write,host=tyrion,url=http://localhost:8086/debug/vars pointReq=7274,pointReqLocal=7274,req=280,subWriteOk=280,writeOk=280 1463590500247354636
> influxdb_shard,host=tyrion n_shards=4i 1463590500247354636
```
### InfluxDB-formatted endpoints
The influxdb plugin can collect InfluxDB-formatted data from JSON endpoints.
@ -46,65 +108,3 @@ With a configuration of:
"http://192.168.2.1:8086/debug/vars"
]
```
And if 127.0.0.1 responds with this JSON:
```json
{
"k1": {
"name": "fruit",
"tags": {
"kind": "apple"
},
"values": {
"inventory": 371,
"sold": 112
}
},
"k2": {
"name": "fruit",
"tags": {
"kind": "banana"
},
"values": {
"inventory": 1000,
"sold": 403
}
}
}
```
And if 192.168.2.1 responds like so:
```json
{
"k3": {
"name": "transactions",
"tags": {},
"values": {
"total": 100,
"balance": 184.75
}
}
}
```
Then the collected metrics will be:
```
influxdb_fruit,url='http://127.0.0.1:8086/debug/vars',kind='apple' inventory=371.0,sold=112.0
influxdb_fruit,url='http://127.0.0.1:8086/debug/vars',kind='banana' inventory=1000.0,sold=403.0
influxdb_transactions,url='http://192.168.2.1:8086/debug/vars' total=100.0,balance=184.75
```
There are two important details to note about the collected metrics:
1. Even though the JSON values are displayed as integers,
the metrics are reported as floats.
JSON encoders usually don't print the fractional part for round floats.
Because you cannot change the type of an existing field in InfluxDB,
we assume all numbers are floats.
2. The top-level keys' names (in the example above, `"k1"`, `"k2"`, and `"k3"`)
are not considered when recording the metrics.
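This float behavior falls out of Go's `encoding/json`: a JSON number decoded into an `interface{}` always becomes a `float64`, whether or not it was written with a fractional part. A minimal sketch:
```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var v map[string]interface{}
	if err := json.Unmarshal([]byte(`{"inventory": 371}`), &v); err != nil {
		panic(err)
	}
	fmt.Printf("%T\n", v["inventory"]) // prints "float64", not "int"
}
```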
@ -120,6 +120,9 @@ func (i *InfluxDB) gatherURL(
acc telegraf.Accumulator,
url string,
) error {
shardCounter := 0
now := time.Now()
resp, err := client.Get(url)
if err != nil {
return err
@ -201,6 +204,10 @@ func (i *InfluxDB) gatherURL(
continue
}
if p.Name == "shard" {
shardCounter++
}
// If the object was a point, but was not fully initialized,
// ignore it and move on.
if p.Name == "" || p.Tags == nil || p.Values == nil || len(p.Values) == 0 {
@ -214,9 +221,18 @@ func (i *InfluxDB) gatherURL(
"influxdb_"+p.Name,
p.Values,
p.Tags,
now,
)
}
acc.AddFields("influxdb",
map[string]interface{}{
"n_shards": shardCounter,
},
nil,
now,
)
return nil
}
@ -27,7 +27,7 @@ func TestBasic(t *testing.T) {
var acc testutil.Accumulator
require.NoError(t, plugin.Gather(&acc))
require.Len(t, acc.Metrics, 2)
require.Len(t, acc.Metrics, 3)
fields := map[string]interface{}{
// JSON will truncate floats to integer representations.
// Since there's no distinction in JSON, we can't assume it's an int.
@ -50,6 +50,11 @@ func TestBasic(t *testing.T) {
"url": fakeServer.URL + "/endpoint",
}
acc.AssertContainsTaggedFields(t, "influxdb_bar", fields, tags)
acc.AssertContainsTaggedFields(t, "influxdb",
map[string]interface{}{
"n_shards": 0,
}, map[string]string{})
}
func TestInfluxDB(t *testing.T) {
@ -69,7 +74,7 @@ func TestInfluxDB(t *testing.T) {
var acc testutil.Accumulator
require.NoError(t, plugin.Gather(&acc))
require.Len(t, acc.Metrics, 33)
require.Len(t, acc.Metrics, 34)
fields := map[string]interface{}{
"heap_inuse": int64(18046976),
@ -104,6 +109,11 @@ func TestInfluxDB(t *testing.T) {
"url": fakeInfluxServer.URL + "/endpoint",
}
acc.AssertContainsTaggedFields(t, "influxdb_memstats", fields, tags)
acc.AssertContainsTaggedFields(t, "influxdb",
map[string]interface{}{
"n_shards": 2,
}, map[string]string{})
}
func TestErrorHandling(t *testing.T) {
@ -69,10 +69,10 @@ func (m *MongoDB) Gather(acc telegraf.Accumulator) error {
}
}
wg.Add(1)
go func() {
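// evaluating m.getMongoServer(u) at spawn time and passing it as a parameter gives each goroutine its own server value instead of capturing the shared loop variable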
go func(srv *Server) {
defer wg.Done()
outerr = m.gatherServer(m.getMongoServer(u), acc)
}()
outerr = m.gatherServer(srv, acc)
}(m.getMongoServer(u))
}
wg.Wait()
@ -21,8 +21,12 @@ The plugin expects messages in the
"sensors/#",
]
## Maximum number of metrics to buffer between collection intervals
metric_buffer = 100000
# if true, messages that can't be delivered while the subscriber is offline
# will be delivered when it comes back (such as on service restart).
# NOTE: if true, client_id MUST be set
persistent_session = false
# If empty, a random client ID will be generated.
client_id = ""
## username and password to connect to the MQTT server.
# username = "telegraf"
@ -25,8 +25,8 @@ This plugin gathers the statistic data from MySQL server
## [username[:password]@][protocol[(address)]]/[?tls=[true|false|skip-verify]]
## see https://github.com/go-sql-driver/mysql#dsn-data-source-name
## e.g.
## root:passwd@tcp(127.0.0.1:3306)/?tls=false
## root@tcp(127.0.0.1:3306)/?tls=false
## db_user:passwd@tcp(127.0.0.1:3306)/?tls=false
## db_user@tcp(127.0.0.1:3306)/?tls=false
#
## If no servers are specified, then localhost is used as the host.
servers = ["tcp(127.0.0.1:3306)/"]
@ -39,8 +39,8 @@ var sampleConfig = `
## [username[:password]@][protocol[(address)]]/[?tls=[true|false|skip-verify]]
## see https://github.com/go-sql-driver/mysql#dsn-data-source-name
## e.g.
## root:passwd@tcp(127.0.0.1:3306)/?tls=false
## root@tcp(127.0.0.1:3306)/?tls=false
## db_user:passwd@tcp(127.0.0.1:3306)/?tls=false
## db_user@tcp(127.0.0.1:3306)/?tls=false
#
## If no servers are specified, then localhost is used as the host.
servers = ["tcp(127.0.0.1:3306)/"]
@ -6,41 +6,30 @@ It can also check response text.
### Configuration:
```
# List of UDP/TCP connections you want to check
[[inputs.net_response]]
protocol = "tcp"
# Server address (default IP localhost)
address = "github.com:80"
# Set timeout (default 1.0)
timeout = 1.0
# Set read timeout (default 1.0)
read_timeout = 1.0
# String sent to the server
send = "ssh"
# Expected string in answer
expect = "ssh"
[[inputs.net_response]]
protocol = "tcp"
address = ":80"
# TCP or UDP 'ping' given url and collect response time in seconds
[[inputs.net_response]]
protocol = "udp"
# Server address (default IP localhost)
## Protocol, must be "tcp" or "udp"
protocol = "tcp"
## Server address (default localhost)
address = "github.com:80"
# Set timeout (default 1.0)
timeout = 1.0
# Set read timeout (default 1.0)
read_timeout = 1.0
# String sent to the server
## Set timeout
timeout = "1s"
## Optional string sent to the server
send = "ssh"
# Expected string in answer
## Optional expected string in answer
expect = "ssh"
## Set read timeout (only used if expecting a response)
read_timeout = "1s"
[[inputs.net_response]]
protocol = "udp"
address = "localhost:161"
timeout = 2.0
timeout = "2s"
```
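The `timeout` and `read_timeout` values above are duration strings rather than bare numbers. A minimal sketch of how such strings resolve, assuming they follow the syntax of Go's `time.ParseDuration` (the `internal.Duration` type used in the plugin source below wraps a `time.Duration`):
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	for _, s := range []string{"1s", "2s", "500ms"} {
		d, err := time.ParseDuration(s)
		if err != nil {
			panic(err)
		}
		fmt.Println(s, "parses to", d)
	}
}
```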
### Measurements & Fields:
@ -9,14 +9,15 @@ import (
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
)
// NetResponses struct
type NetResponse struct {
Address string
Timeout float64
ReadTimeout float64
Timeout internal.Duration
ReadTimeout internal.Duration
Send string
Expect string
Protocol string
@ -31,29 +32,28 @@ var sampleConfig = `
protocol = "tcp"
## Server address (default localhost)
address = "github.com:80"
## Set timeout (default 1.0 seconds)
timeout = 1.0
## Set read timeout (default 1.0 seconds)
read_timeout = 1.0
## Set timeout
timeout = "1s"
## Optional string sent to the server
# send = "ssh"
## Optional expected string in answer
# expect = "ssh"
## Set read timeout (only used if expecting a response)
read_timeout = "1s"
`
func (_ *NetResponse) SampleConfig() string {
return sampleConfig
}
func (t *NetResponse) TcpGather() (map[string]interface{}, error) {
func (n *NetResponse) TcpGather() (map[string]interface{}, error) {
// Prepare fields
fields := make(map[string]interface{})
// Start Timer
start := time.Now()
// Resolving
tcpAddr, err := net.ResolveTCPAddr("tcp", t.Address)
// Connecting
conn, err := net.DialTCP("tcp", nil, tcpAddr)
conn, err := net.DialTimeout("tcp", n.Address, n.Timeout.Duration)
// Stop timer
responseTime := time.Since(start).Seconds()
// Handle error
@ -62,17 +62,16 @@ func (t *NetResponse) TcpGather() (map[string]interface{}, error) {
}
defer conn.Close()
// Send string if needed
if t.Send != "" {
msg := []byte(t.Send)
if n.Send != "" {
msg := []byte(n.Send)
conn.Write(msg)
conn.CloseWrite()
// Stop timer
responseTime = time.Since(start).Seconds()
}
// Read string if needed
if t.Expect != "" {
if n.Expect != "" {
// Set read timeout
conn.SetReadDeadline(time.Now().Add(time.Duration(t.ReadTimeout) * time.Second))
conn.SetReadDeadline(time.Now().Add(n.ReadTimeout.Duration))
// Prepare reader
reader := bufio.NewReader(conn)
tp := textproto.NewReader(reader)
@ -85,7 +84,7 @@ func (t *NetResponse) TcpGather() (map[string]interface{}, error) {
fields["string_found"] = false
} else {
// Looking for string in answer
RegEx := regexp.MustCompile(`.*` + t.Expect + `.*`)
RegEx := regexp.MustCompile(`.*` + n.Expect + `.*`)
find := RegEx.FindString(string(data))
if find != "" {
fields["string_found"] = true
@ -99,13 +98,13 @@ func (t *NetResponse) TcpGather() (map[string]interface{}, error) {
return fields, nil
}
func (u *NetResponse) UdpGather() (map[string]interface{}, error) {
func (n *NetResponse) UdpGather() (map[string]interface{}, error) {
// Prepare fields
fields := make(map[string]interface{})
// Start Timer
start := time.Now()
// Resolving
udpAddr, err := net.ResolveUDPAddr("udp", u.Address)
udpAddr, err := net.ResolveUDPAddr("udp", n.Address)
LocalAddr, err := net.ResolveUDPAddr("udp", "127.0.0.1:0")
// Connecting
conn, err := net.DialUDP("udp", LocalAddr, udpAddr)
@ -115,11 +114,11 @@ func (u *NetResponse) UdpGather() (map[string]interface{}, error) {
return nil, err
}
// Send string
msg := []byte(u.Send)
msg := []byte(n.Send)
conn.Write(msg)
// Read string
// Set read timeout
conn.SetReadDeadline(time.Now().Add(time.Duration(u.ReadTimeout) * time.Second))
conn.SetReadDeadline(time.Now().Add(n.ReadTimeout.Duration))
// Read
buf := make([]byte, 1024)
_, _, err = conn.ReadFromUDP(buf)
@ -130,7 +129,7 @@ func (u *NetResponse) UdpGather() (map[string]interface{}, error) {
return nil, err
} else {
// Looking for string in answer
RegEx := regexp.MustCompile(`.*` + u.Expect + `.*`)
RegEx := regexp.MustCompile(`.*` + n.Expect + `.*`)
find := RegEx.FindString(string(buf))
if find != "" {
fields["string_found"] = true
@ -142,28 +141,28 @@ func (u *NetResponse) UdpGather() (map[string]interface{}, error) {
return fields, nil
}
func (c *NetResponse) Gather(acc telegraf.Accumulator) error {
func (n *NetResponse) Gather(acc telegraf.Accumulator) error {
// Set default values
if c.Timeout == 0 {
c.Timeout = 1.0
if n.Timeout.Duration == 0 {
n.Timeout.Duration = time.Second
}
if c.ReadTimeout == 0 {
c.ReadTimeout = 1.0
if n.ReadTimeout.Duration == 0 {
n.ReadTimeout.Duration = time.Second
}
// Check send and expected string
if c.Protocol == "udp" && c.Send == "" {
if n.Protocol == "udp" && n.Send == "" {
return errors.New("Send string cannot be empty")
}
if c.Protocol == "udp" && c.Expect == "" {
if n.Protocol == "udp" && n.Expect == "" {
return errors.New("Expected string cannot be empty")
}
// Prepare host and port
host, port, err := net.SplitHostPort(c.Address)
host, port, err := net.SplitHostPort(n.Address)
if err != nil {
return err
}
if host == "" {
c.Address = "localhost:" + port
n.Address = "localhost:" + port
}
if port == "" {
return errors.New("Bad port")
@ -172,11 +171,11 @@ func (c *NetResponse) Gather(acc telegraf.Accumulator) error {
tags := map[string]string{"server": host, "port": port}
var fields map[string]interface{}
// Gather data
if c.Protocol == "tcp" {
fields, err = c.TcpGather()
if n.Protocol == "tcp" {
fields, err = n.TcpGather()
tags["protocol"] = "tcp"
} else if c.Protocol == "udp" {
fields, err = c.UdpGather()
} else if n.Protocol == "udp" {
fields, err = n.UdpGather()
tags["protocol"] = "udp"
} else {
return errors.New("Bad protocol")
@ -5,7 +5,9 @@ import (
"regexp"
"sync"
"testing"
"time"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
@ -35,7 +37,7 @@ func TestTCPError(t *testing.T) {
// Error
err1 := c.Gather(&acc)
require.Error(t, err1)
assert.Equal(t, "dial tcp 127.0.0.1:9999: getsockopt: connection refused", err1.Error())
assert.Contains(t, err1.Error(), "getsockopt: connection refused")
}
func TestTCPOK1(t *testing.T) {
@ -46,8 +48,8 @@ func TestTCPOK1(t *testing.T) {
Address: "127.0.0.1:2004",
Send: "test",
Expect: "test",
ReadTimeout: 3.0,
Timeout: 1.0,
ReadTimeout: internal.Duration{Duration: time.Second * 3},
Timeout: internal.Duration{Duration: time.Second},
Protocol: "tcp",
}
// Start TCP server
@ -86,8 +88,8 @@ func TestTCPOK2(t *testing.T) {
Address: "127.0.0.1:2004",
Send: "test",
Expect: "test2",
ReadTimeout: 3.0,
Timeout: 1.0,
ReadTimeout: internal.Duration{Duration: time.Second * 3},
Timeout: internal.Duration{Duration: time.Second},
Protocol: "tcp",
}
// Start TCP server
@ -141,8 +143,8 @@ func TestUDPOK1(t *testing.T) {
Address: "127.0.0.1:2004",
Send: "test",
Expect: "test",
ReadTimeout: 3.0,
Timeout: 1.0,
ReadTimeout: internal.Duration{Duration: time.Second * 3},
Timeout: internal.Duration{Duration: time.Second},
Protocol: "udp",
}
// Start UDP server
@ -0,0 +1,343 @@
## Nstat input plugin
The plugin collects network metrics from the `/proc/net/netstat`, `/proc/net/snmp`, and `/proc/net/snmp6` files
### Configuration
The plugin first tries to read the file paths from the config values;
if a value is empty, it falls back to these environment variables:
* `PROC_NET_NETSTAT`
* `PROC_NET_SNMP`
* `PROC_NET_SNMP6`
If these variables are not set either,
it reads the proc root from the `PROC_ROOT` environment variable,
falling back to `/proc` when that is empty too,
and then appends the default file paths:
* `/net/netstat`
* `/net/snmp`
* `/net/snmp6`
So if nothing is given, neither paths in the config nor in env vars, the plugin takes the default paths (a sketch of the full resolution order follows the list):
* `/proc/net/netstat`
* `/proc/net/snmp`
* `/proc/net/snmp6`
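A minimal sketch of that resolution order, mirroring the `proc` helper that appears further down in this diff:
```go
package main

import (
	"fmt"
	"os"
)

// resolve returns the path of a proc file: an explicit env override wins,
// otherwise the default suffix is appended to PROC_ROOT (or to /proc).
func resolve(env, suffix string) string {
	if p := os.Getenv(env); p != "" {
		return p
	}
	root := os.Getenv("PROC_ROOT")
	if root == "" {
		root = "/proc"
	}
	return root + suffix
}

func main() {
	fmt.Println(resolve("PROC_NET_SNMP6", "/net/snmp6")) // "/proc/net/snmp6" when nothing is set
}
```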
The sample config file:
```toml
[[inputs.nstat]]
## file paths
## e.g: /proc/net/netstat, /proc/net/snmp, /proc/net/snmp6
# proc_net_netstat = ""
# proc_net_snmp = ""
# proc_net_snmp6 = ""
## dump metrics with 0 values too
# dump_zeros = true
```
### Measurements & Fields
- nstat
- Icmp6InCsumErrors
- Icmp6InDestUnreachs
- Icmp6InEchoReplies
- Icmp6InEchos
- Icmp6InErrors
- Icmp6InGroupMembQueries
- Icmp6InGroupMembReductions
- Icmp6InGroupMembResponses
- Icmp6InMLDv2Reports
- Icmp6InMsgs
- Icmp6InNeighborAdvertisements
- Icmp6InNeighborSolicits
- Icmp6InParmProblems
- Icmp6InPktTooBigs
- Icmp6InRedirects
- Icmp6InRouterAdvertisements
- Icmp6InRouterSolicits
- Icmp6InTimeExcds
- Icmp6OutDestUnreachs
- Icmp6OutEchoReplies
- Icmp6OutEchos
- Icmp6OutErrors
- Icmp6OutGroupMembQueries
- Icmp6OutGroupMembReductions
- Icmp6OutGroupMembResponses
- Icmp6OutMLDv2Reports
- Icmp6OutMsgs
- Icmp6OutNeighborAdvertisements
- Icmp6OutNeighborSolicits
- Icmp6OutParmProblems
- Icmp6OutPktTooBigs
- Icmp6OutRedirects
- Icmp6OutRouterAdvertisements
- Icmp6OutRouterSolicits
- Icmp6OutTimeExcds
- Icmp6OutType133
- Icmp6OutType135
- Icmp6OutType143
- IcmpInAddrMaskReps
- IcmpInAddrMasks
- IcmpInCsumErrors
- IcmpInDestUnreachs
- IcmpInEchoReps
- IcmpInEchos
- IcmpInErrors
- IcmpInMsgs
- IcmpInParmProbs
- IcmpInRedirects
- IcmpInSrcQuenchs
- IcmpInTimeExcds
- IcmpInTimestampReps
- IcmpInTimestamps
- IcmpMsgInType3
- IcmpMsgOutType3
- IcmpOutAddrMaskReps
- IcmpOutAddrMasks
- IcmpOutDestUnreachs
- IcmpOutEchoReps
- IcmpOutEchos
- IcmpOutErrors
- IcmpOutMsgs
- IcmpOutParmProbs
- IcmpOutRedirects
- IcmpOutSrcQuenchs
- IcmpOutTimeExcds
- IcmpOutTimestampReps
- IcmpOutTimestamps
- Ip6FragCreates
- Ip6FragFails
- Ip6FragOKs
- Ip6InAddrErrors
- Ip6InBcastOctets
- Ip6InCEPkts
- Ip6InDelivers
- Ip6InDiscards
- Ip6InECT0Pkts
- Ip6InECT1Pkts
- Ip6InHdrErrors
- Ip6InMcastOctets
- Ip6InMcastPkts
- Ip6InNoECTPkts
- Ip6InNoRoutes
- Ip6InOctets
- Ip6InReceives
- Ip6InTooBigErrors
- Ip6InTruncatedPkts
- Ip6InUnknownProtos
- Ip6OutBcastOctets
- Ip6OutDiscards
- Ip6OutForwDatagrams
- Ip6OutMcastOctets
- Ip6OutMcastPkts
- Ip6OutNoRoutes
- Ip6OutOctets
- Ip6OutRequests
- Ip6ReasmFails
- Ip6ReasmOKs
- Ip6ReasmReqds
- Ip6ReasmTimeout
- IpDefaultTTL
- IpExtInBcastOctets
- IpExtInBcastPkts
- IpExtInCEPkts
- IpExtInCsumErrors
- IpExtInECT0Pkts
- IpExtInECT1Pkts
- IpExtInMcastOctets
- IpExtInMcastPkts
- IpExtInNoECTPkts
- IpExtInNoRoutes
- IpExtInOctets
- IpExtInTruncatedPkts
- IpExtOutBcastOctets
- IpExtOutBcastPkts
- IpExtOutMcastOctets
- IpExtOutMcastPkts
- IpExtOutOctets
- IpForwDatagrams
- IpForwarding
- IpFragCreates
- IpFragFails
- IpFragOKs
- IpInAddrErrors
- IpInDelivers
- IpInDiscards
- IpInHdrErrors
- IpInReceives
- IpInUnknownProtos
- IpOutDiscards
- IpOutNoRoutes
- IpOutRequests
- IpReasmFails
- IpReasmOKs
- IpReasmReqds
- IpReasmTimeout
- TcpActiveOpens
- TcpAttemptFails
- TcpCurrEstab
- TcpEstabResets
- TcpExtArpFilter
- TcpExtBusyPollRxPackets
- TcpExtDelayedACKLocked
- TcpExtDelayedACKLost
- TcpExtDelayedACKs
- TcpExtEmbryonicRsts
- TcpExtIPReversePathFilter
- TcpExtListenDrops
- TcpExtListenOverflows
- TcpExtLockDroppedIcmps
- TcpExtOfoPruned
- TcpExtOutOfWindowIcmps
- TcpExtPAWSActive
- TcpExtPAWSEstab
- TcpExtPAWSPassive
- TcpExtPruneCalled
- TcpExtRcvPruned
- TcpExtSyncookiesFailed
- TcpExtSyncookiesRecv
- TcpExtSyncookiesSent
- TcpExtTCPACKSkippedChallenge
- TcpExtTCPACKSkippedFinWait2
- TcpExtTCPACKSkippedPAWS
- TcpExtTCPACKSkippedSeq
- TcpExtTCPACKSkippedSynRecv
- TcpExtTCPACKSkippedTimeWait
- TcpExtTCPAbortFailed
- TcpExtTCPAbortOnClose
- TcpExtTCPAbortOnData
- TcpExtTCPAbortOnLinger
- TcpExtTCPAbortOnMemory
- TcpExtTCPAbortOnTimeout
- TcpExtTCPAutoCorking
- TcpExtTCPBacklogDrop
- TcpExtTCPChallengeACK
- TcpExtTCPDSACKIgnoredNoUndo
- TcpExtTCPDSACKIgnoredOld
- TcpExtTCPDSACKOfoRecv
- TcpExtTCPDSACKOfoSent
- TcpExtTCPDSACKOldSent
- TcpExtTCPDSACKRecv
- TcpExtTCPDSACKUndo
- TcpExtTCPDeferAcceptDrop
- TcpExtTCPDirectCopyFromBacklog
- TcpExtTCPDirectCopyFromPrequeue
- TcpExtTCPFACKReorder
- TcpExtTCPFastOpenActive
- TcpExtTCPFastOpenActiveFail
- TcpExtTCPFastOpenCookieReqd
- TcpExtTCPFastOpenListenOverflow
- TcpExtTCPFastOpenPassive
- TcpExtTCPFastOpenPassiveFail
- TcpExtTCPFastRetrans
- TcpExtTCPForwardRetrans
- TcpExtTCPFromZeroWindowAdv
- TcpExtTCPFullUndo
- TcpExtTCPHPAcks
- TcpExtTCPHPHits
- TcpExtTCPHPHitsToUser
- TcpExtTCPHystartDelayCwnd
- TcpExtTCPHystartDelayDetect
- TcpExtTCPHystartTrainCwnd
- TcpExtTCPHystartTrainDetect
- TcpExtTCPKeepAlive
- TcpExtTCPLossFailures
- TcpExtTCPLossProbeRecovery
- TcpExtTCPLossProbes
- TcpExtTCPLossUndo
- TcpExtTCPLostRetransmit
- TcpExtTCPMD5NotFound
- TcpExtTCPMD5Unexpected
- TcpExtTCPMTUPFail
- TcpExtTCPMTUPSuccess
- TcpExtTCPMemoryPressures
- TcpExtTCPMinTTLDrop
- TcpExtTCPOFODrop
- TcpExtTCPOFOMerge
- TcpExtTCPOFOQueue
- TcpExtTCPOrigDataSent
- TcpExtTCPPartialUndo
- TcpExtTCPPrequeueDropped
- TcpExtTCPPrequeued
- TcpExtTCPPureAcks
- TcpExtTCPRcvCoalesce
- TcpExtTCPRcvCollapsed
- TcpExtTCPRenoFailures
- TcpExtTCPRenoRecovery
- TcpExtTCPRenoRecoveryFail
- TcpExtTCPRenoReorder
- TcpExtTCPReqQFullDoCookies
- TcpExtTCPReqQFullDrop
- TcpExtTCPRetransFail
- TcpExtTCPSACKDiscard
- TcpExtTCPSACKReneging
- TcpExtTCPSACKReorder
- TcpExtTCPSYNChallenge
- TcpExtTCPSackFailures
- TcpExtTCPSackMerged
- TcpExtTCPSackRecovery
- TcpExtTCPSackRecoveryFail
- TcpExtTCPSackShiftFallback
- TcpExtTCPSackShifted
- TcpExtTCPSchedulerFailed
- TcpExtTCPSlowStartRetrans
- TcpExtTCPSpuriousRTOs
- TcpExtTCPSpuriousRtxHostQueues
- TcpExtTCPSynRetrans
- TcpExtTCPTSReorder
- TcpExtTCPTimeWaitOverflow
- TcpExtTCPTimeouts
- TcpExtTCPToZeroWindowAdv
- TcpExtTCPWantZeroWindowAdv
- TcpExtTCPWinProbe
- TcpExtTW
- TcpExtTWKilled
- TcpExtTWRecycled
- TcpInCsumErrors
- TcpInErrs
- TcpInSegs
- TcpMaxConn
- TcpOutRsts
- TcpOutSegs
- TcpPassiveOpens
- TcpRetransSegs
- TcpRtoAlgorithm
- TcpRtoMax
- TcpRtoMin
- Udp6IgnoredMulti
- Udp6InCsumErrors
- Udp6InDatagrams
- Udp6InErrors
- Udp6NoPorts
- Udp6OutDatagrams
- Udp6RcvbufErrors
- Udp6SndbufErrors
- UdpIgnoredMulti
- UdpInCsumErrors
- UdpInDatagrams
- UdpInErrors
- UdpLite6InCsumErrors
- UdpLite6InDatagrams
- UdpLite6InErrors
- UdpLite6NoPorts
- UdpLite6OutDatagrams
- UdpLite6RcvbufErrors
- UdpLite6SndbufErrors
- UdpLiteIgnoredMulti
- UdpLiteInCsumErrors
- UdpLiteInDatagrams
- UdpLiteInErrors
- UdpLiteNoPorts
- UdpLiteOutDatagrams
- UdpLiteRcvbufErrors
- UdpLiteSndbufErrors
- UdpNoPorts
- UdpOutDatagrams
- UdpRcvbufErrors
- UdpSndbufErrors
### Tags
- All measurements have the following tags
- host (host of the system)
- name (the type of the metric: snmp, snmp6 or netstat)
@ -0,0 +1,233 @@
package nstat
import (
"bytes"
"io/ioutil"
"os"
"strconv"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
var (
zeroByte = []byte("0")
newLineByte = []byte("\n")
colonByte = []byte(":")
)
// default file paths
const (
NET_NETSTAT = "/net/netstat"
NET_SNMP = "/net/snmp"
NET_SNMP6 = "/net/snmp6"
NET_PROC = "/proc"
)
// env variable names
const (
ENV_NETSTAT = "PROC_NET_NETSTAT"
ENV_SNMP = "PROC_NET_SNMP"
ENV_SNMP6 = "PROC_NET_SNMP6"
ENV_ROOT = "PROC_ROOT"
)
type Nstat struct {
ProcNetNetstat string `toml:"proc_net_netstat"`
ProcNetSNMP string `toml:"proc_net_snmp"`
ProcNetSNMP6 string `toml:"proc_net_snmp6"`
DumpZeros bool `toml:"dump_zeros"`
}
var sampleConfig = `
## file paths for proc files. If empty default paths will be used:
## /proc/net/netstat, /proc/net/snmp, /proc/net/snmp6
## These can also be overridden with env variables, see README.
proc_net_netstat = ""
proc_net_snmp = ""
proc_net_snmp6 = ""
## dump metrics with 0 values too
dump_zeros = true
`
func (ns *Nstat) Description() string {
return "Collect kernel snmp counters and network interface statistics"
}
func (ns *Nstat) SampleConfig() string {
return sampleConfig
}
func (ns *Nstat) Gather(acc telegraf.Accumulator) error {
// load paths, get from env if config values are empty
ns.loadPaths()
netstat, err := ioutil.ReadFile(ns.ProcNetNetstat)
if err != nil {
return err
}
// collect netstat data
err = ns.gatherNetstat(netstat, acc)
if err != nil {
return err
}
// collect SNMP data
snmp, err := ioutil.ReadFile(ns.ProcNetSNMP)
if err != nil {
return err
}
err = ns.gatherSNMP(snmp, acc)
if err != nil {
return err
}
// collect SNMP6 data
snmp6, err := ioutil.ReadFile(ns.ProcNetSNMP6)
if err != nil {
return err
}
err = ns.gatherSNMP6(snmp6, acc)
if err != nil {
return err
}
return nil
}
func (ns *Nstat) gatherNetstat(data []byte, acc telegraf.Accumulator) error {
metrics, err := loadUglyTable(data, ns.DumpZeros)
if err != nil {
return err
}
tags := map[string]string{
"name": "netstat",
}
acc.AddFields("nstat", metrics, tags)
return nil
}
func (ns *Nstat) gatherSNMP(data []byte, acc telegraf.Accumulator) error {
metrics, err := loadUglyTable(data, ns.DumpZeros)
if err != nil {
return err
}
tags := map[string]string{
"name": "snmp",
}
acc.AddFields("nstat", metrics, tags)
return nil
}
func (ns *Nstat) gatherSNMP6(data []byte, acc telegraf.Accumulator) error {
metrics, err := loadGoodTable(data, ns.DumpZeros)
if err != nil {
return err
}
tags := map[string]string{
"name": "snmp6",
}
acc.AddFields("nstat", metrics, tags)
return nil
}
// loadPaths reads the paths from the config first;
// any empty value falls back to the corresponding env variable
func (ns *Nstat) loadPaths() {
if ns.ProcNetNetstat == "" {
ns.ProcNetNetstat = proc(ENV_NETSTAT, NET_NETSTAT)
}
if ns.ProcNetSNMP == "" {
ns.ProcNetSNMP = proc(ENV_SNMP, NET_SNMP)
}
if ns.ProcNetSNMP6 == "" {
ns.ProcNetSNMP = proc(ENV_SNMP6, NET_SNMP6)
}
}
// loadGoodTable parses a table whose tokens alternate
// header, value, header, value, ... in the right order
func loadGoodTable(table []byte, dumpZeros bool) (map[string]interface{}, error) {
entries := map[string]interface{}{}
fields := bytes.Fields(table)
var value int64
var err error
// iterate over two values each time
// first value is header, second is value
for i := 0; i < len(fields); i = i + 2 {
// counter is zero
if bytes.Equal(fields[i+1], zeroByte) {
if !dumpZeros {
continue
} else {
entries[string(fields[i])] = int64(0)
continue
}
}
// the counter is not zero, so parse it.
value, err = strconv.ParseInt(string(fields[i+1]), 10, 64)
if err == nil {
entries[string(fields[i])] = value
}
}
return entries, nil
}
// loadUglyTable parses a table in which each line of headers
// is followed by a separate line of values
func loadUglyTable(table []byte, dumpZeros bool) (map[string]interface{}, error) {
entries := map[string]interface{}{}
// split the lines by newline
lines := bytes.Split(table, newLineByte)
var value int64
var err error
// iterate over lines, take 2 lines each time
// first line contains header names
// second line contains values
for i := 0; i < len(lines); i = i + 2 {
if len(lines[i]) == 0 {
continue
}
headers := bytes.Fields(lines[i])
prefix := bytes.TrimSuffix(headers[0], colonByte)
metrics := bytes.Fields(lines[i+1])
for j := 1; j < len(headers); j++ {
// counter is zero
if bytes.Equal(metrics[j], zeroByte) {
if !dumpZeros {
continue
} else {
entries[string(append(prefix, headers[j]...))] = int64(0)
continue
}
}
// the counter is not zero, so parse it.
value, err = strconv.ParseInt(string(metrics[j]), 10, 64)
if err == nil {
entries[string(append(prefix, headers[j]...))] = value
}
}
}
return entries, nil
}
// proc can be used to read file paths from env
func proc(env, path string) string {
// try to read full file path
if p := os.Getenv(env); p != "" {
return p
}
// try to read root path, or use default root path
root := os.Getenv(ENV_ROOT)
if root == "" {
root = NET_PROC
}
return root + path
}
func init() {
inputs.Add("nstat", func() telegraf.Input {
return &Nstat{}
})
}
@ -0,0 +1,56 @@
package nstat
import "testing"
func TestLoadUglyTable(t *testing.T) {
uglyStr := `IpExt: InNoRoutes InTruncatedPkts InMcastPkts InCEPkts
IpExt: 332 433718 0 2660494435`
parsed := map[string]interface{}{
"IpExtInNoRoutes": int64(332),
"IpExtInTruncatedPkts": int64(433718),
"IpExtInMcastPkts": int64(0),
"IpExtInCEPkts": int64(2660494435),
}
got, err := loadUglyTable([]byte(uglyStr), true)
if err != nil {
t.Fatal(err)
}
if len(got) == 0 {
t.Fatalf("want %+v, got %+v", parsed, got)
}
for key := range parsed {
if parsed[key].(int64) != got[key].(int64) {
t.Fatalf("want %+v, got %+v", parsed[key], got[key])
}
}
}
func TestLoadGoodTable(t *testing.T) {
goodStr := `Ip6InReceives 11707
Ip6InTooBigErrors 0
Ip6InDelivers 62
Ip6InMcastOctets 1242966`
parsed := map[string]interface{}{
"Ip6InReceives": int64(11707),
"Ip6InTooBigErrors": int64(0),
"Ip6InDelivers": int64(62),
"Ip6InMcastOctets": int64(1242966),
}
got, err := loadGoodTable([]byte(goodStr), true)
if err != nil {
t.Fatal(err)
}
if len(got) == 0 {
t.Fatalf("want %+v, got %+v", parsed, got)
}
for key := range parsed {
if parsed[key].(int64) != got[key].(int64) {
t.Fatalf("want %+v, got %+v", parsed[key], got[key])
}
}
}
@ -70,7 +70,17 @@ func (n *NTPQ) Gather(acc telegraf.Accumulator) error {
lineCounter := 0
scanner := bufio.NewScanner(bytes.NewReader(out))
for scanner.Scan() {
fields := strings.Fields(scanner.Text())
line := scanner.Text()
tags := make(map[string]string)
// if there is an ntpq state prefix, remove it and make it its own tag
// see https://github.com/influxdata/telegraf/issues/1161
if strings.ContainsAny(string(line[0]), "*#o+x.-") {
tags["state_prefix"] = string(line[0])
line = strings.TrimLeft(line, "*#o+x.-")
}
fields := strings.Fields(line)
if len(fields) < 2 {
continue
}
@ -97,7 +107,6 @@ func (n *NTPQ) Gather(acc telegraf.Accumulator) error {
}
}
} else {
tags := make(map[string]string)
mFields := make(map[string]interface{})
// Get tags from output
@ -113,6 +122,9 @@ func (n *NTPQ) Gather(acc telegraf.Accumulator) error {
if index == -1 {
continue
}
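// ntpq prints "-" for fields with no data; skip them rather than failing to parse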
if fields[index] == "-" {
continue
}
if key == "when" {
when := fields[index]
@ -160,6 +172,9 @@ func (n *NTPQ) Gather(acc telegraf.Accumulator) error {
if index == -1 {
continue
}
if fields[index] == "-" {
continue
}
m, err := strconv.ParseFloat(fields[index], 64)
if err != nil {
@ -32,7 +32,8 @@ func TestSingleNTPQ(t *testing.T) {
"jitter": float64(17.462),
}
tags := map[string]string{
"remote": "*uschi5-ntp-002.",
"remote": "uschi5-ntp-002.",
"state_prefix": "*",
"refid": "10.177.80.46",
"stratum": "2",
"type": "u",
@ -60,7 +61,8 @@ func TestBadIntNTPQ(t *testing.T) {
"jitter": float64(17.462),
}
tags := map[string]string{
"remote": "*uschi5-ntp-002.",
"remote": "uschi5-ntp-002.",
"state_prefix": "*",
"refid": "10.177.80.46",
"stratum": "2",
"type": "u",
@ -88,7 +90,8 @@ func TestBadFloatNTPQ(t *testing.T) {
"jitter": float64(17.462),
}
tags := map[string]string{
"remote": "*uschi5-ntp-002.",
"remote": "uschi5-ntp-002.",
"state_prefix": "*",
"refid": "10.177.80.46",
"stratum": "2",
"type": "u",
@ -117,7 +120,8 @@ func TestDaysNTPQ(t *testing.T) {
"jitter": float64(17.462),
}
tags := map[string]string{
"remote": "*uschi5-ntp-002.",
"remote": "uschi5-ntp-002.",
"state_prefix": "*",
"refid": "10.177.80.46",
"stratum": "2",
"type": "u",
@ -146,7 +150,8 @@ func TestHoursNTPQ(t *testing.T) {
"jitter": float64(17.462),
}
tags := map[string]string{
"remote": "*uschi5-ntp-002.",
"remote": "uschi5-ntp-002.",
"state_prefix": "*",
"refid": "10.177.80.46",
"stratum": "2",
"type": "u",
@ -175,7 +180,8 @@ func TestMinutesNTPQ(t *testing.T) {
"jitter": float64(17.462),
}
tags := map[string]string{
"remote": "*uschi5-ntp-002.",
"remote": "uschi5-ntp-002.",
"state_prefix": "*",
"refid": "10.177.80.46",
"stratum": "2",
"type": "u",
@ -203,7 +209,8 @@ func TestBadWhenNTPQ(t *testing.T) {
"jitter": float64(17.462),
}
tags := map[string]string{
"remote": "*uschi5-ntp-002.",
"remote": "uschi5-ntp-002.",
"state_prefix": "*",
"refid": "10.177.80.46",
"stratum": "2",
"type": "u",
@ -278,7 +285,8 @@ func TestBadHeaderNTPQ(t *testing.T) {
"jitter": float64(17.462),
}
tags := map[string]string{
"remote": "*uschi5-ntp-002.",
"remote": "uschi5-ntp-002.",
"state_prefix": "*",
"refid": "10.177.80.46",
"type": "u",
}
@ -306,7 +314,8 @@ func TestMissingDelayColumnNTPQ(t *testing.T) {
"jitter": float64(17.462),
}
tags := map[string]string{
"remote": "*uschi5-ntp-002.",
"remote": "uschi5-ntp-002.",
"state_prefix": "*",
"refid": "10.177.80.46",
"type": "u",
}
@ -19,6 +19,7 @@ type Procstat struct {
Exe string
Pattern string
Prefix string
ProcessName string
User string
// pidmap maps a pid to a process object, so we don't recreate every gather
@ -45,6 +46,9 @@ var sampleConfig = `
## user as argument for pgrep (ie, pgrep -u <user>)
# user = "nginx"
## override for process_name
## This is optional; default is sourced from /proc/<pid>/status
# process_name = "bar"
## Field name prefix
prefix = ""
## comment this out if you want raw cpu_time stats
@ -66,7 +70,7 @@ func (p *Procstat) Gather(acc telegraf.Accumulator) error {
p.Exe, p.PidFile, p.Pattern, p.User, err.Error())
} else {
for pid, proc := range p.pidmap {
p := NewSpecProcessor(p.Prefix, acc, proc, p.tagmap[pid])
p := NewSpecProcessor(p.ProcessName, p.Prefix, acc, proc, p.tagmap[pid])
p.pushMetrics()
}
}
@ -17,14 +17,20 @@ type SpecProcessor struct {
}
func NewSpecProcessor(
processName string,
prefix string,
acc telegraf.Accumulator,
p *process.Process,
tags map[string]string,
) *SpecProcessor {
if name, err := p.Name(); err == nil {
if processName != "" {
tags["process_name"] = processName
} else {
name, err := p.Name()
if err == nil {
tags["process_name"] = name
}
}
return &SpecProcessor{
Prefix: prefix,
tags: tags,
@ -65,7 +71,7 @@ func (p *SpecProcessor) pushMetrics() {
fields[prefix+"write_bytes"] = io.WriteCount
}
cpu_time, err := p.proc.CPUTimes()
cpu_time, err := p.proc.Times()
if err == nil {
fields[prefix+"cpu_time_user"] = cpu_time.User
fields[prefix+"cpu_time_system"] = cpu_time.System
@ -80,7 +86,7 @@ func (p *SpecProcessor) pushMetrics() {
fields[prefix+"cpu_time_guest_nice"] = cpu_time.GuestNice
}
cpu_perc, err := p.proc.CPUPercent(time.Duration(0))
cpu_perc, err := p.proc.Percent(time.Duration(0))
if err == nil && cpu_perc != 0 {
fields[prefix+"cpu_usage"] = cpu_perc
}
@ -74,6 +74,7 @@ var Tracking = map[string]string{
"used_cpu_user": "used_cpu_user",
"used_cpu_sys_children": "used_cpu_sys_children",
"used_cpu_user_children": "used_cpu_user_children",
"role": "role",
}
var ErrProtocolError = errors.New("redis protocol error")
@ -206,6 +207,11 @@ func gatherInfoOutput(
keyspace_misses = ival
}
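// the role value is a string ("master" or "slave"), so record it as a tag rather than a numeric field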
if name == "role" {
tags["role"] = val
continue
}
if err == nil {
fields[metric] = ival
continue
@ -0,0 +1,47 @@
# rollbar_webhooks
This is a Telegraf service plugin that listens for events kicked off by the Rollbar Webhooks service and persists data from them into the configured outputs. To set up the listener, first generate the proper configuration:
```sh
$ telegraf -sample-config -input-filter rollbar_webhooks -output-filter influxdb > config.conf.new
```
Change the config file to point to the InfluxDB server you are using and adjust the settings to match your environment. Once that is complete:
```sh
$ cp config.conf.new /etc/telegraf/telegraf.conf
$ sudo service telegraf start
```
Once the server is running, configure your Rollbar Webhooks to point at the `rollbar_webhooks` service. To do this, go to `rollbar.com/` and click `Settings > Notifications > Webhook`. On the resulting page, set `URL` to `http://<my_ip>:1619` and click `Enable Webhook Integration`.
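To smoke-test the listener without waiting for a real event, you can POST a payload by hand. A sketch, assuming the service is running locally on the default `:1619` address; the payload is a trimmed-down version of the `deploy` example shown later in this document:
```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

func main() {
	payload := `{"event_name": "deploy", "data": {"deploy": {"id": 187585, "environment": "production", "project_id": 90}}}`
	resp, err := http.Post("http://localhost:1619/", "application/json", strings.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // the handler replies "200 OK" on success
}
```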
## Events
The titles of the following sections are links to the full payloads and details for each event. Each body lists the information from the event that is persisted. The format is as follows:
```
# TAGS
* 'tagKey' = `tagValue` type
# FIELDS
* 'fieldKey' = `fieldValue` type
```
The tag and field values show where on the incoming JSON object each piece of data is sourced from.
See [webhook doc](https://rollbar.com/docs/webhooks/)
#### `new_item` event
**Tags:**
* 'event' = `event.event_name` string
* 'environment' = `event.data.item.environment` string
* 'project_id' = `event.data.item.project_id` int
* 'language' = `event.data.item.last_occurence.language` string
* 'level' = `event.data.item.last_occurence.level` string
**Fields:**
* 'id' = `event.data.item.id` int
#### `deploy` event
**Tags:**
* 'event' = `event.event_name` string
* 'environment' = `event.data.deploy.environment` string
* 'project_id' = `event.data.deploy.project_id` int
**Fields:**
* 'id' = `event.data.item.id` int
@ -0,0 +1,119 @@
package rollbar_webhooks
import (
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"log"
"net/http"
"sync"
"time"
"github.com/gorilla/mux"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
func init() {
inputs.Add("rollbar_webhooks", func() telegraf.Input { return NewRollbarWebhooks() })
}
type RollbarWebhooks struct {
ServiceAddress string
// Lock for the struct
sync.Mutex
// Events buffer to store events between Gather calls
events []Event
}
func NewRollbarWebhooks() *RollbarWebhooks {
return &RollbarWebhooks{}
}
func (rb *RollbarWebhooks) SampleConfig() string {
return `
## Address and port to host Webhook listener on
service_address = ":1619"
`
}
func (rb *RollbarWebhooks) Description() string {
return "A Rollbar Webhook Event collector"
}
func (rb *RollbarWebhooks) Gather(acc telegraf.Accumulator) error {
rb.Lock()
defer rb.Unlock()
for _, event := range rb.events {
acc.AddFields("rollbar_webhooks", event.Fields(), event.Tags(), time.Now())
}
rb.events = make([]Event, 0)
return nil
}
func (rb *RollbarWebhooks) Listen() {
r := mux.NewRouter()
r.HandleFunc("/", rb.eventHandler).Methods("POST")
err := http.ListenAndServe(rb.ServiceAddress, r)
if err != nil {
log.Printf("Error starting server: %v", err)
}
}
func (rb *RollbarWebhooks) Start(_ telegraf.Accumulator) error {
go rb.Listen()
log.Printf("Started the rollbar_webhooks service on %s\n", rb.ServiceAddress)
return nil
}
func (rb *RollbarWebhooks) Stop() {
log.Println("Stopping the rbWebhooks service")
}
func (rb *RollbarWebhooks) eventHandler(w http.ResponseWriter, r *http.Request) {
defer r.Body.Close()
data, err := ioutil.ReadAll(r.Body)
if err != nil {
w.WriteHeader(http.StatusBadRequest)
return
}
dummyEvent := &DummyEvent{}
err = json.Unmarshal(data, dummyEvent)
if err != nil {
w.WriteHeader(http.StatusBadRequest)
return
}
event, err := NewEvent(dummyEvent, data)
if err != nil {
w.WriteHeader(http.StatusOK)
return
}
rb.Lock()
rb.events = append(rb.events, event)
rb.Unlock()
w.WriteHeader(http.StatusOK)
}
func generateEvent(event Event, data []byte) (Event, error) {
err := json.Unmarshal(data, event)
if err != nil {
return nil, err
}
return event, nil
}
func NewEvent(dummyEvent *DummyEvent, data []byte) (Event, error) {
switch dummyEvent.EventName {
case "new_item":
return generateEvent(&NewItem{}, data)
case "deploy":
return generateEvent(&Deploy{}, data)
default:
return nil, errors.New("Not implemented type: " + dummyEvent.EventName)
}
}
@ -0,0 +1,78 @@
package rollbar_webhooks
import "strconv"
type Event interface {
Tags() map[string]string
Fields() map[string]interface{}
}
type DummyEvent struct {
EventName string `json:"event_name"`
}
type NewItemDataItemLastOccurence struct {
Language string `json:"language"`
Level string `json:"level"`
}
type NewItemDataItem struct {
Id int `json:"id"`
Environment string `json:"environment"`
ProjectId int `json:"project_id"`
LastOccurence NewItemDataItemLastOccurence `json:"last_occurrence"`
}
type NewItemData struct {
Item NewItemDataItem `json:"item"`
}
type NewItem struct {
EventName string `json:"event_name"`
Data NewItemData `json:"data"`
}
func (ni *NewItem) Tags() map[string]string {
return map[string]string{
"event": ni.EventName,
"environment": ni.Data.Item.Environment,
"project_id": strconv.Itoa(ni.Data.Item.ProjectId),
"language": ni.Data.Item.LastOccurence.Language,
"level": ni.Data.Item.LastOccurence.Level,
}
}
func (ni *NewItem) Fields() map[string]interface{} {
return map[string]interface{}{
"id": ni.Data.Item.Id,
}
}
type DeployDataDeploy struct {
Id int `json:"id"`
Environment string `json:"environment"`
ProjectId int `json:"project_id"`
}
type DeployData struct {
Deploy DeployDataDeploy `json:"deploy"`
}
type Deploy struct {
EventName string `json:"event_name"`
Data DeployData `json:"data"`
}
func (ni *Deploy) Tags() map[string]string {
return map[string]string{
"event": ni.EventName,
"environment": ni.Data.Deploy.Environment,
"project_id": strconv.Itoa(ni.Data.Deploy.ProjectId),
}
}
func (ni *Deploy) Fields() map[string]interface{} {
return map[string]interface{}{
"id": ni.Data.Deploy.Id,
}
}
@ -0,0 +1,96 @@
package rollbar_webhooks
func NewItemJSON() string {
return `
{
"event_name": "new_item",
"data": {
"item": {
"public_item_id": null,
"integrations_data": {},
"last_activated_timestamp": 1382655421,
"unique_occurrences": null,
"id": 272716944,
"environment": "production",
"title": "testing aobg98wrwe",
"last_occurrence_id": 481761639,
"last_occurrence_timestamp": 1382655421,
"platform": 0,
"first_occurrence_timestamp": 1382655421,
"project_id": 90,
"resolved_in_version": null,
"status": 1,
"hash": "c595b2ae0af9b397bb6bdafd57104ac4d5f6b382",
"last_occurrence": {
"body": {
"message": {
"body": "testing aobg98wrwe"
}
},
"uuid": "d2036647-e0b7-4cad-bc98-934831b9b6d1",
"language": "python",
"level": "error",
"timestamp": 1382655421,
"server": {
"host": "dev",
"argv": [
""
]
},
"environment": "production",
"framework": "unknown",
"notifier": {
"version": "0.5.12",
"name": "pyrollbar"
},
"metadata": {
"access_token": "",
"debug": {
"routes": {
"start_time": 1382212080401,
"counters": {
"post_item": 3274122
}
}
},
"customer_timestamp": 1382655421,
"api_server_hostname": "web6"
}
},
"framework": 0,
"total_occurrences": 1,
"level": 40,
"counter": 4,
"first_occurrence_id": 481761639,
"activating_occurrence_id": 481761639
}
}
}`
}
func DeployJSON() string {
return `
{
"event_name": "deploy",
"data": {
"deploy": {
"comment": "deploying webs",
"user_id": 1,
"finish_time": 1382656039,
"start_time": 1382656038,
"id": 187585,
"environment": "production",
"project_id": 90,
"local_username": "brian",
"revision": "e4b9b7db860b2e5ac799f8c06b9498b71ab270bb"
}
}
}`
}
func UnknowJSON() string {
return `
{
"event_name": "roger"
}`
}
@ -0,0 +1,74 @@
package rollbar_webhooks
import (
"net/http"
"net/http/httptest"
"strings"
"testing"
"github.com/influxdata/telegraf/testutil"
)
func postWebhooks(rb *RollbarWebhooks, eventBody string) *httptest.ResponseRecorder {
req, _ := http.NewRequest("POST", "/", strings.NewReader(eventBody))
w := httptest.NewRecorder()
w.Code = 500
rb.eventHandler(w, req)
return w
}
func TestNewItem(t *testing.T) {
var acc testutil.Accumulator
rb := NewRollbarWebhooks()
resp := postWebhooks(rb, NewItemJSON())
if resp.Code != http.StatusOK {
t.Errorf("POST new_item returned HTTP status code %v.\nExpected %v", resp.Code, http.StatusOK)
}
rb.Gather(&acc)
fields := map[string]interface{}{
"id": 272716944,
}
tags := map[string]string{
"event": "new_item",
"environment": "production",
"project_id": "90",
"language": "python",
"level": "error",
}
acc.AssertContainsTaggedFields(t, "rollbar_webhooks", fields, tags)
}
func TestDeploy(t *testing.T) {
var acc testutil.Accumulator
rb := NewRollbarWebhooks()
resp := postWebhooks(rb, DeployJSON())
if resp.Code != http.StatusOK {
t.Errorf("POST deploy returned HTTP status code %v.\nExpected %v", resp.Code, http.StatusOK)
}
rb.Gather(&acc)
fields := map[string]interface{}{
"id": 187585,
}
tags := map[string]string{
"event": "deploy",
"environment": "production",
"project_id": "90",
}
acc.AssertContainsTaggedFields(t, "rollbar_webhooks", fields, tags)
}
func TestUnknowItem(t *testing.T) {
rb := NewRollbarWebhooks()
resp := postWebhooks(rb, UnknowJSON())
if resp.Code != http.StatusOK {
t.Errorf("POST unknow returned HTTP status code %v.\nExpected %v", resp.Code, http.StatusOK)
}
}
@ -749,7 +749,7 @@ func (h *Host) HandleResponse(
switch variable.Type {
// handle Metrics
case gosnmp.Boolean, gosnmp.Integer, gosnmp.Counter32, gosnmp.Gauge32,
gosnmp.TimeTicks, gosnmp.Counter64, gosnmp.Uinteger32:
gosnmp.TimeTicks, gosnmp.Counter64, gosnmp.Uinteger32, gosnmp.OctetString:
// Prepare tags
tags := make(map[string]string)
if oid.Unit != "" {
@ -792,7 +792,7 @@ func (h *Host) HandleResponse(
// Because the result oid is equal to inputs.snmp.get section
field_name = oid.Name
}
tags["host"], _, _ = net.SplitHostPort(h.Address)
tags["snmp_host"], _, _ = net.SplitHostPort(h.Address)
fields := make(map[string]interface{})
fields[string(field_name)] = variable.Value
@ -103,7 +103,7 @@ func TestSNMPGet1(t *testing.T) {
},
map[string]string{
"unit": "octets",
"host": testutil.GetLocalHost(),
"snmp_host": testutil.GetLocalHost(),
},
)
}
@ -141,7 +141,7 @@ func TestSNMPGet2(t *testing.T) {
},
map[string]string{
"instance": "0",
"host": testutil.GetLocalHost(),
"snmp_host": testutil.GetLocalHost(),
},
)
}
@ -182,7 +182,7 @@ func TestSNMPGet3(t *testing.T) {
map[string]string{
"unit": "octets",
"instance": "1",
"host": testutil.GetLocalHost(),
"snmp_host": testutil.GetLocalHost(),
},
)
}
@ -224,7 +224,7 @@ func TestSNMPEasyGet4(t *testing.T) {
map[string]string{
"unit": "octets",
"instance": "1",
"host": testutil.GetLocalHost(),
"snmp_host": testutil.GetLocalHost(),
},
)
@ -235,7 +235,7 @@ func TestSNMPEasyGet4(t *testing.T) {
},
map[string]string{
"instance": "0",
"host": testutil.GetLocalHost(),
"snmp_host": testutil.GetLocalHost(),
},
)
}
@ -277,7 +277,7 @@ func TestSNMPEasyGet5(t *testing.T) {
map[string]string{
"unit": "octets",
"instance": "1",
"host": testutil.GetLocalHost(),
"snmp_host": testutil.GetLocalHost(),
},
)
@ -288,7 +288,7 @@ func TestSNMPEasyGet5(t *testing.T) {
},
map[string]string{
"instance": "0",
"host": testutil.GetLocalHost(),
"snmp_host": testutil.GetLocalHost(),
},
)
}
@ -321,7 +321,7 @@ func TestSNMPEasyGet6(t *testing.T) {
},
map[string]string{
"instance": "0",
"host": testutil.GetLocalHost(),
"snmp_host": testutil.GetLocalHost(),
},
)
}
@ -362,7 +362,7 @@ func TestSNMPBulk1(t *testing.T) {
map[string]string{
"unit": "octets",
"instance": "1",
"host": testutil.GetLocalHost(),
"snmp_host": testutil.GetLocalHost(),
},
)
@ -374,7 +374,7 @@ func TestSNMPBulk1(t *testing.T) {
map[string]string{
"unit": "octets",
"instance": "2",
"host": testutil.GetLocalHost(),
"snmp_host": testutil.GetLocalHost(),
},
)
@ -386,7 +386,7 @@ func TestSNMPBulk1(t *testing.T) {
map[string]string{
"unit": "octets",
"instance": "3",
"host": testutil.GetLocalHost(),
"snmp_host": testutil.GetLocalHost(),
},
)
@ -398,7 +398,7 @@ func TestSNMPBulk1(t *testing.T) {
map[string]string{
"unit": "octets",
"instance": "36",
"host": testutil.GetLocalHost(),
"snmp_host": testutil.GetLocalHost(),
},
)
}
@ -440,7 +440,7 @@ func dTestSNMPBulk2(t *testing.T) {
map[string]string{
"unit": "octets",
"instance": "1",
"host": testutil.GetLocalHost(),
"snmp_host": testutil.GetLocalHost(),
},
)
@ -452,7 +452,7 @@ func dTestSNMPBulk2(t *testing.T) {
map[string]string{
"unit": "octets",
"instance": "2",
"host": testutil.GetLocalHost(),
"snmp_host": testutil.GetLocalHost(),
},
)
@ -464,7 +464,7 @@ func dTestSNMPBulk2(t *testing.T) {
map[string]string{
"unit": "octets",
"instance": "3",
"host": testutil.GetLocalHost(),
"snmp_host": testutil.GetLocalHost(),
},
)
@ -476,7 +476,7 @@ func dTestSNMPBulk2(t *testing.T) {
map[string]string{
"unit": "octets",
"instance": "36",
"host": testutil.GetLocalHost(),
"snmp_host": testutil.GetLocalHost(),
},
)
}
@ -140,7 +140,7 @@ func (s *Sysstat) Gather(acc telegraf.Accumulator) error {
if firstTimestamp.IsZero() {
firstTimestamp = time.Now()
} else {
s.interval = int(time.Since(firstTimestamp).Seconds())
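// adding 0.5 before the int conversion rounds the elapsed seconds to the nearest integer instead of truncating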
s.interval = int(time.Since(firstTimestamp).Seconds() + 0.5)
}
}
ts := time.Now().Add(time.Duration(s.interval) * time.Second)
@ -11,7 +11,7 @@ import (
type CPUStats struct {
ps PS
lastStats []cpu.CPUTimesStat
lastStats []cpu.TimesStat
PerCPU bool `toml:"percpu"`
TotalCPU bool `toml:"totalcpu"`
@ -105,7 +105,7 @@ func (s *CPUStats) Gather(acc telegraf.Accumulator) error {
return nil
}
func totalCpuTime(t cpu.CPUTimesStat) float64 {
func totalCpuTime(t cpu.TimesStat) float64 {
total := t.User + t.System + t.Nice + t.Iowait + t.Irq + t.Softirq + t.Steal +
t.Guest + t.GuestNice + t.Idle
return total
@ -15,7 +15,7 @@ func TestCPUStats(t *testing.T) {
defer mps.AssertExpectations(t)
var acc testutil.Accumulator
cts := cpu.CPUTimesStat{
cts := cpu.TimesStat{
CPU: "cpu0",
User: 3.1,
System: 8.2,
@ -29,7 +29,7 @@ func TestCPUStats(t *testing.T) {
GuestNice: 0.324,
}
cts2 := cpu.CPUTimesStat{
cts2 := cpu.TimesStat{
CPU: "cpu0",
User: 11.4, // increased by 8.3
System: 10.9, // increased by 2.7
@ -43,7 +43,7 @@ func TestCPUStats(t *testing.T) {
GuestNice: 2.524, // increased by 2.2
}
mps.On("CPUTimes").Return([]cpu.CPUTimesStat{cts}, nil)
mps.On("CPUTimes").Return([]cpu.TimesStat{cts}, nil)
cs := NewCPUStats(&mps)
@ -68,7 +68,7 @@ func TestCPUStats(t *testing.T) {
assertContainsTaggedFloat(t, &acc, "cpu", "time_guest_nice", 0.324, 0, cputags)
mps2 := MockPS{}
mps2.On("CPUTimes").Return([]cpu.CPUTimesStat{cts2}, nil)
mps2.On("CPUTimes").Return([]cpu.TimesStat{cts2}, nil)
cs.ps = &mps2
// Should have added cpu percentages too
@ -15,7 +15,7 @@ func TestDiskStats(t *testing.T) {
var acc testutil.Accumulator
var err error
duAll := []*disk.DiskUsageStat{
duAll := []*disk.UsageStat{
{
Path: "/",
Fstype: "ext4",
@ -37,7 +37,7 @@ func TestDiskStats(t *testing.T) {
InodesUsed: 2000,
},
}
duFiltered := []*disk.DiskUsageStat{
duFiltered := []*disk.UsageStat{
{
Path: "/",
Fstype: "ext4",
@ -108,7 +108,7 @@ func TestDiskStats(t *testing.T) {
// var acc testutil.Accumulator
// var err error
// diskio1 := disk.DiskIOCountersStat{
// diskio1 := disk.IOCountersStat{
// ReadCount: 888,
// WriteCount: 5341,
// ReadBytes: 100000,
@ -119,7 +119,7 @@ func TestDiskStats(t *testing.T) {
// IoTime: 123552,
// SerialNumber: "ab-123-ad",
// }
// diskio2 := disk.DiskIOCountersStat{
// diskio2 := disk.IOCountersStat{
// ReadCount: 444,
// WriteCount: 2341,
// ReadBytes: 200000,
@ -132,7 +132,7 @@ func TestDiskStats(t *testing.T) {
// }
// mps.On("DiskIO").Return(
// map[string]disk.DiskIOCountersStat{"sda1": diskio1, "sdb1": diskio2},
// map[string]disk.IOCountersStat{"sda1": diskio1, "sdb1": diskio2},
// nil)
// err = (&DiskIOStats{ps: &mps}).Gather(&acc)
@ -15,55 +15,55 @@ type MockPS struct {
mock.Mock
}
func (m *MockPS) LoadAvg() (*load.LoadAvgStat, error) {
func (m *MockPS) LoadAvg() (*load.AvgStat, error) {
ret := m.Called()
r0 := ret.Get(0).(*load.LoadAvgStat)
r0 := ret.Get(0).(*load.AvgStat)
r1 := ret.Error(1)
return r0, r1
}
func (m *MockPS) CPUTimes(perCPU, totalCPU bool) ([]cpu.CPUTimesStat, error) {
func (m *MockPS) CPUTimes(perCPU, totalCPU bool) ([]cpu.TimesStat, error) {
ret := m.Called()
r0 := ret.Get(0).([]cpu.CPUTimesStat)
r0 := ret.Get(0).([]cpu.TimesStat)
r1 := ret.Error(1)
return r0, r1
}
func (m *MockPS) DiskUsage(mountPointFilter []string, fstypeExclude []string) ([]*disk.DiskUsageStat, error) {
func (m *MockPS) DiskUsage(mountPointFilter []string, fstypeExclude []string) ([]*disk.UsageStat, error) {
ret := m.Called(mountPointFilter, fstypeExclude)
r0 := ret.Get(0).([]*disk.DiskUsageStat)
r0 := ret.Get(0).([]*disk.UsageStat)
r1 := ret.Error(1)
return r0, r1
}
func (m *MockPS) NetIO() ([]net.NetIOCountersStat, error) {
func (m *MockPS) NetIO() ([]net.IOCountersStat, error) {
ret := m.Called()
r0 := ret.Get(0).([]net.NetIOCountersStat)
r0 := ret.Get(0).([]net.IOCountersStat)
r1 := ret.Error(1)
return r0, r1
}
func (m *MockPS) NetProto() ([]net.NetProtoCountersStat, error) {
func (m *MockPS) NetProto() ([]net.ProtoCountersStat, error) {
ret := m.Called()
r0 := ret.Get(0).([]net.NetProtoCountersStat)
r0 := ret.Get(0).([]net.ProtoCountersStat)
r1 := ret.Error(1)
return r0, r1
}
func (m *MockPS) DiskIO() (map[string]disk.DiskIOCountersStat, error) {
func (m *MockPS) DiskIO() (map[string]disk.IOCountersStat, error) {
ret := m.Called()
r0 := ret.Get(0).(map[string]disk.DiskIOCountersStat)
r0 := ret.Get(0).(map[string]disk.IOCountersStat)
r1 := ret.Error(1)
return r0, r1
@ -87,10 +87,10 @@ func (m *MockPS) SwapStat() (*mem.SwapMemoryStat, error) {
return r0, r1
}
func (m *MockPS) NetConnections() ([]net.NetConnectionStat, error) {
func (m *MockPS) NetConnections() ([]net.ConnectionStat, error) {
ret := m.Called()
r0 := ret.Get(0).([]net.NetConnectionStat)
r0 := ret.Get(0).([]net.ConnectionStat)
r1 := ret.Error(1)
return r0, r1

View File

@ -15,7 +15,7 @@ func TestNetStats(t *testing.T) {
defer mps.AssertExpectations(t)
var acc testutil.Accumulator
netio := net.NetIOCountersStat{
netio := net.IOCountersStat{
Name: "eth0",
BytesSent: 1123,
BytesRecv: 8734422,
@ -27,10 +27,10 @@ func TestNetStats(t *testing.T) {
Dropout: 1,
}
mps.On("NetIO").Return([]net.NetIOCountersStat{netio}, nil)
mps.On("NetIO").Return([]net.IOCountersStat{netio}, nil)
netprotos := []net.NetProtoCountersStat{
net.NetProtoCountersStat{
netprotos := []net.ProtoCountersStat{
net.ProtoCountersStat{
Protocol: "Udp",
Stats: map[string]int64{
"InDatagrams": 4655,
@ -40,17 +40,17 @@ func TestNetStats(t *testing.T) {
}
mps.On("NetProto").Return(netprotos, nil)
netstats := []net.NetConnectionStat{
net.NetConnectionStat{
netstats := []net.ConnectionStat{
net.ConnectionStat{
Type: syscall.SOCK_DGRAM,
},
net.NetConnectionStat{
net.ConnectionStat{
Status: "ESTABLISHED",
},
net.NetConnectionStat{
net.ConnectionStat{
Status: "ESTABLISHED",
},
net.NetConnectionStat{
net.ConnectionStat{
Status: "CLOSE",
},
}

View File

@ -70,6 +70,7 @@ func getEmptyFields() map[string]interface{} {
"running": int64(0),
"sleeping": int64(0),
"total": int64(0),
"unknown": int64(0),
}
switch runtime.GOOS {
case "freebsd":
@ -114,6 +115,8 @@ func (p *Processes) gatherFromPS(fields map[string]interface{}) error {
fields["sleeping"] = fields["sleeping"].(int64) + int64(1)
case 'I':
fields["idle"] = fields["idle"].(int64) + int64(1)
case '?':
fields["unknown"] = fields["unknown"].(int64) + int64(1)
default:
log.Printf("processes: Unknown state [ %s ] from ps",
string(status[0]))

View File

@ -13,14 +13,14 @@ import (
)
type PS interface {
CPUTimes(perCPU, totalCPU bool) ([]cpu.CPUTimesStat, error)
DiskUsage(mountPointFilter []string, fstypeExclude []string) ([]*disk.DiskUsageStat, error)
NetIO() ([]net.NetIOCountersStat, error)
NetProto() ([]net.NetProtoCountersStat, error)
DiskIO() (map[string]disk.DiskIOCountersStat, error)
CPUTimes(perCPU, totalCPU bool) ([]cpu.TimesStat, error)
DiskUsage(mountPointFilter []string, fstypeExclude []string) ([]*disk.UsageStat, error)
NetIO() ([]net.IOCountersStat, error)
NetProto() ([]net.ProtoCountersStat, error)
DiskIO() (map[string]disk.IOCountersStat, error)
VMStat() (*mem.VirtualMemoryStat, error)
SwapStat() (*mem.SwapMemoryStat, error)
NetConnections() ([]net.NetConnectionStat, error)
NetConnections() ([]net.ConnectionStat, error)
}
func add(acc telegraf.Accumulator,
@ -32,17 +32,17 @@ func add(acc telegraf.Accumulator,
type systemPS struct{}
func (s *systemPS) CPUTimes(perCPU, totalCPU bool) ([]cpu.CPUTimesStat, error) {
var cpuTimes []cpu.CPUTimesStat
func (s *systemPS) CPUTimes(perCPU, totalCPU bool) ([]cpu.TimesStat, error) {
var cpuTimes []cpu.TimesStat
if perCPU {
if perCPUTimes, err := cpu.CPUTimes(true); err == nil {
if perCPUTimes, err := cpu.Times(true); err == nil {
cpuTimes = append(cpuTimes, perCPUTimes...)
} else {
return nil, err
}
}
if totalCPU {
if totalCPUTimes, err := cpu.CPUTimes(false); err == nil {
if totalCPUTimes, err := cpu.Times(false); err == nil {
cpuTimes = append(cpuTimes, totalCPUTimes...)
} else {
return nil, err
@ -54,8 +54,8 @@ func (s *systemPS) CPUTimes(perCPU, totalCPU bool) ([]cpu.CPUTimesStat, error) {
func (s *systemPS) DiskUsage(
mountPointFilter []string,
fstypeExclude []string,
) ([]*disk.DiskUsageStat, error) {
parts, err := disk.DiskPartitions(true)
) ([]*disk.UsageStat, error) {
parts, err := disk.Partitions(true)
if err != nil {
return nil, err
}
@ -70,7 +70,7 @@ func (s *systemPS) DiskUsage(
fstypeExcludeSet[filter] = true
}
var usage []*disk.DiskUsageStat
var usage []*disk.UsageStat
for _, p := range parts {
if len(mountPointFilter) > 0 {
@ -83,7 +83,7 @@ func (s *systemPS) DiskUsage(
}
mountpoint := os.Getenv("HOST_MOUNT_PREFIX") + p.Mountpoint
if _, err := os.Stat(mountpoint); err == nil {
du, err := disk.DiskUsage(mountpoint)
du, err := disk.Usage(mountpoint)
du.Path = p.Mountpoint
if err != nil {
return nil, err
@ -102,20 +102,20 @@ func (s *systemPS) DiskUsage(
return usage, nil
}
func (s *systemPS) NetProto() ([]net.NetProtoCountersStat, error) {
return net.NetProtoCounters(nil)
func (s *systemPS) NetProto() ([]net.ProtoCountersStat, error) {
return net.ProtoCounters(nil)
}
func (s *systemPS) NetIO() ([]net.NetIOCountersStat, error) {
return net.NetIOCounters(true)
func (s *systemPS) NetIO() ([]net.IOCountersStat, error) {
return net.IOCounters(true)
}
func (s *systemPS) NetConnections() ([]net.NetConnectionStat, error) {
return net.NetConnections("all")
func (s *systemPS) NetConnections() ([]net.ConnectionStat, error) {
return net.Connections("all")
}
func (s *systemPS) DiskIO() (map[string]disk.DiskIOCountersStat, error) {
m, err := disk.DiskIOCounters()
func (s *systemPS) DiskIO() (map[string]disk.IOCountersStat, error) {
m, err := disk.IOCounters()
if err == internal.NotImplementedError {
return nil, nil
}

View File

@ -22,12 +22,12 @@ func (_ *SystemStats) Description() string {
func (_ *SystemStats) SampleConfig() string { return "" }
func (_ *SystemStats) Gather(acc telegraf.Accumulator) error {
loadavg, err := load.LoadAvg()
loadavg, err := load.Avg()
if err != nil {
return err
}
hostinfo, err := host.HostInfo()
hostinfo, err := host.Info()
if err != nil {
return err
}

View File

@ -95,6 +95,7 @@ func (t *Tail) Start(acc telegraf.Accumulator) error {
continue
}
// create a goroutine for each "tailer"
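// (wg.Add must happen before the goroutine is launched so a concurrent
// Stop cannot return from wg.Wait before the receiver has registered)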
t.wg.Add(1)
go t.receiver(tailer)
t.tailers = append(t.tailers, tailer)
}
@ -109,7 +110,6 @@ func (t *Tail) Start(acc telegraf.Accumulator) error {
// this is launched as a goroutine to continuously watch a tailed logfile
// for changes, parse any incoming msgs, and add to the accumulator.
func (t *Tail) receiver(tailer *tail.Tail) {
t.wg.Add(1)
defer t.wg.Done()
var m telegraf.Metric

View File

@ -0,0 +1,339 @@
# Varnish Input Plugin
This plugin gathers stats from [Varnish HTTP Cache](https://varnish-cache.org/).
### Configuration:
```toml
# A plugin to collect stats from Varnish HTTP Cache
[[inputs.varnish]]
## The default location of the varnishstat binary can be overridden with:
binary = "/usr/bin/varnishstat"
## By default, telegraf gathers stats for 3 metric points.
## Setting stats will override the defaults shown below.
## stats may also be set to ["all"], which will collect all stats
stats = ["MAIN.cache_hit", "MAIN.cache_miss", "MAIN.uptime"]
```
### Measurements & Fields:
This is the full list of stats provided by Varnish. Stats will be grouped by their capitalized prefix (e.g. MAIN,
MEMPOOL, etc.). In the output, the prefix will be used as a tag and removed from field names.
- varnish
- MAIN.uptime (int, count, Child process uptime)
- MAIN.sess_conn (int, count, Sessions accepted)
- MAIN.sess_drop (int, count, Sessions dropped)
- MAIN.sess_fail (int, count, Session accept failures)
- MAIN.sess_pipe_overflow (int, count, Session pipe overflow)
- MAIN.client_req_400 (int, count, Client requests received,)
- MAIN.client_req_411 (int, count, Client requests received,)
- MAIN.client_req_413 (int, count, Client requests received,)
- MAIN.client_req_417 (int, count, Client requests received,)
- MAIN.client_req (int, count, Good client requests)
- MAIN.cache_hit (int, count, Cache hits)
- MAIN.cache_hitpass (int, count, Cache hits for)
- MAIN.cache_miss (int, count, Cache misses)
- MAIN.backend_conn (int, count, Backend conn. success)
- MAIN.backend_unhealthy (int, count, Backend conn. not)
- MAIN.backend_busy (int, count, Backend conn. too)
- MAIN.backend_fail (int, count, Backend conn. failures)
- MAIN.backend_reuse (int, count, Backend conn. reuses)
- MAIN.backend_toolate (int, count, Backend conn. was)
- MAIN.backend_recycle (int, count, Backend conn. recycles)
- MAIN.backend_retry (int, count, Backend conn. retry)
- MAIN.fetch_head (int, count, Fetch no body)
- MAIN.fetch_length (int, count, Fetch with Length)
- MAIN.fetch_chunked (int, count, Fetch chunked)
- MAIN.fetch_eof (int, count, Fetch EOF)
- MAIN.fetch_bad (int, count, Fetch bad T- E)
- MAIN.fetch_close (int, count, Fetch wanted close)
- MAIN.fetch_oldhttp (int, count, Fetch pre HTTP/1.1)
- MAIN.fetch_zero (int, count, Fetch zero len)
- MAIN.fetch_1xx (int, count, Fetch no body)
- MAIN.fetch_204 (int, count, Fetch no body)
- MAIN.fetch_304 (int, count, Fetch no body)
- MAIN.fetch_failed (int, count, Fetch failed (all)
- MAIN.fetch_no_thread (int, count, Fetch failed (no)
- MAIN.pools (int, count, Number of thread)
- MAIN.threads (int, count, Total number of)
- MAIN.threads_limited (int, count, Threads hit max)
- MAIN.threads_created (int, count, Threads created)
- MAIN.threads_destroyed (int, count, Threads destroyed)
- MAIN.threads_failed (int, count, Thread creation failed)
- MAIN.thread_queue_len (int, count, Length of session)
- MAIN.busy_sleep (int, count, Number of requests)
- MAIN.busy_wakeup (int, count, Number of requests)
- MAIN.sess_queued (int, count, Sessions queued for)
- MAIN.sess_dropped (int, count, Sessions dropped for)
- MAIN.n_object (int, count, object structs made)
- MAIN.n_vampireobject (int, count, unresurrected objects)
- MAIN.n_objectcore (int, count, objectcore structs made)
- MAIN.n_objecthead (int, count, objecthead structs made)
- MAIN.n_waitinglist (int, count, waitinglist structs made)
- MAIN.n_backend (int, count, Number of backends)
- MAIN.n_expired (int, count, Number of expired)
- MAIN.n_lru_nuked (int, count, Number of LRU)
- MAIN.n_lru_moved (int, count, Number of LRU)
- MAIN.losthdr (int, count, HTTP header overflows)
- MAIN.s_sess (int, count, Total sessions seen)
- MAIN.s_req (int, count, Total requests seen)
- MAIN.s_pipe (int, count, Total pipe sessions)
- MAIN.s_pass (int, count, Total pass- ed requests)
- MAIN.s_fetch (int, count, Total backend fetches)
- MAIN.s_synth (int, count, Total synthethic responses)
- MAIN.s_req_hdrbytes (int, count, Request header bytes)
- MAIN.s_req_bodybytes (int, count, Request body bytes)
- MAIN.s_resp_hdrbytes (int, count, Response header bytes)
- MAIN.s_resp_bodybytes (int, count, Response body bytes)
- MAIN.s_pipe_hdrbytes (int, count, Pipe request header)
- MAIN.s_pipe_in (int, count, Piped bytes from)
- MAIN.s_pipe_out (int, count, Piped bytes to)
- MAIN.sess_closed (int, count, Session Closed)
- MAIN.sess_pipeline (int, count, Session Pipeline)
- MAIN.sess_readahead (int, count, Session Read Ahead)
- MAIN.sess_herd (int, count, Session herd)
- MAIN.shm_records (int, count, SHM records)
- MAIN.shm_writes (int, count, SHM writes)
- MAIN.shm_flushes (int, count, SHM flushes due)
- MAIN.shm_cont (int, count, SHM MTX contention)
- MAIN.shm_cycles (int, count, SHM cycles through)
- MAIN.sms_nreq (int, count, SMS allocator requests)
- MAIN.sms_nobj (int, count, SMS outstanding allocations)
- MAIN.sms_nbytes (int, count, SMS outstanding bytes)
- MAIN.sms_balloc (int, count, SMS bytes allocated)
- MAIN.sms_bfree (int, count, SMS bytes freed)
- MAIN.backend_req (int, count, Backend requests made)
- MAIN.n_vcl (int, count, Number of loaded)
- MAIN.n_vcl_avail (int, count, Number of VCLs)
- MAIN.n_vcl_discard (int, count, Number of discarded)
- MAIN.bans (int, count, Count of bans)
- MAIN.bans_completed (int, count, Number of bans)
- MAIN.bans_obj (int, count, Number of bans)
- MAIN.bans_req (int, count, Number of bans)
- MAIN.bans_added (int, count, Bans added)
- MAIN.bans_deleted (int, count, Bans deleted)
- MAIN.bans_tested (int, count, Bans tested against)
- MAIN.bans_obj_killed (int, count, Objects killed by)
- MAIN.bans_lurker_tested (int, count, Bans tested against)
- MAIN.bans_tests_tested (int, count, Ban tests tested)
- MAIN.bans_lurker_tests_tested (int, count, Ban tests tested)
- MAIN.bans_lurker_obj_killed (int, count, Objects killed by)
- MAIN.bans_dups (int, count, Bans superseded by)
- MAIN.bans_lurker_contention (int, count, Lurker gave way)
- MAIN.bans_persisted_bytes (int, count, Bytes used by)
- MAIN.bans_persisted_fragmentation (int, count, Extra bytes in)
- MAIN.n_purges (int, count, Number of purge)
- MAIN.n_obj_purged (int, count, Number of purged)
- MAIN.exp_mailed (int, count, Number of objects)
- MAIN.exp_received (int, count, Number of objects)
- MAIN.hcb_nolock (int, count, HCB Lookups without)
- MAIN.hcb_lock (int, count, HCB Lookups with)
- MAIN.hcb_insert (int, count, HCB Inserts)
- MAIN.esi_errors (int, count, ESI parse errors)
- MAIN.esi_warnings (int, count, ESI parse warnings)
- MAIN.vmods (int, count, Loaded VMODs)
- MAIN.n_gzip (int, count, Gzip operations)
- MAIN.n_gunzip (int, count, Gunzip operations)
- MAIN.vsm_free (int, count, Free VSM space)
- MAIN.vsm_used (int, count, Used VSM space)
- MAIN.vsm_cooling (int, count, Cooling VSM space)
- MAIN.vsm_overflow (int, count, Overflow VSM space)
- MAIN.vsm_overflowed (int, count, Overflowed VSM space)
- MGT.uptime (int, count, Management process uptime)
- MGT.child_start (int, count, Child process started)
- MGT.child_exit (int, count, Child process normal)
- MGT.child_stop (int, count, Child process unexpected)
- MGT.child_died (int, count, Child process died)
- MGT.child_dump (int, count, Child process core)
- MGT.child_panic (int, count, Child process panic)
- MEMPOOL.vbc.live (int, count, In use)
- MEMPOOL.vbc.pool (int, count, In Pool)
- MEMPOOL.vbc.sz_wanted (int, count, Size requested)
- MEMPOOL.vbc.sz_needed (int, count, Size allocated)
- MEMPOOL.vbc.allocs (int, count, Allocations )
- MEMPOOL.vbc.frees (int, count, Frees )
- MEMPOOL.vbc.recycle (int, count, Recycled from pool)
- MEMPOOL.vbc.timeout (int, count, Timed out from)
- MEMPOOL.vbc.toosmall (int, count, Too small to)
- MEMPOOL.vbc.surplus (int, count, Too many for)
- MEMPOOL.vbc.randry (int, count, Pool ran dry)
- MEMPOOL.busyobj.live (int, count, In use)
- MEMPOOL.busyobj.pool (int, count, In Pool)
- MEMPOOL.busyobj.sz_wanted (int, count, Size requested)
- MEMPOOL.busyobj.sz_needed (int, count, Size allocated)
- MEMPOOL.busyobj.allocs (int, count, Allocations )
- MEMPOOL.busyobj.frees (int, count, Frees )
- MEMPOOL.busyobj.recycle (int, count, Recycled from pool)
- MEMPOOL.busyobj.timeout (int, count, Timed out from)
- MEMPOOL.busyobj.toosmall (int, count, Too small to)
- MEMPOOL.busyobj.surplus (int, count, Too many for)
- MEMPOOL.busyobj.randry (int, count, Pool ran dry)
- MEMPOOL.req0.live (int, count, In use)
- MEMPOOL.req0.pool (int, count, In Pool)
- MEMPOOL.req0.sz_wanted (int, count, Size requested)
- MEMPOOL.req0.sz_needed (int, count, Size allocated)
- MEMPOOL.req0.allocs (int, count, Allocations )
- MEMPOOL.req0.frees (int, count, Frees )
- MEMPOOL.req0.recycle (int, count, Recycled from pool)
- MEMPOOL.req0.timeout (int, count, Timed out from)
- MEMPOOL.req0.toosmall (int, count, Too small to)
- MEMPOOL.req0.surplus (int, count, Too many for)
- MEMPOOL.req0.randry (int, count, Pool ran dry)
- MEMPOOL.sess0.live (int, count, In use)
- MEMPOOL.sess0.pool (int, count, In Pool)
- MEMPOOL.sess0.sz_wanted (int, count, Size requested)
- MEMPOOL.sess0.sz_needed (int, count, Size allocated)
- MEMPOOL.sess0.allocs (int, count, Allocations )
- MEMPOOL.sess0.frees (int, count, Frees )
- MEMPOOL.sess0.recycle (int, count, Recycled from pool)
- MEMPOOL.sess0.timeout (int, count, Timed out from)
- MEMPOOL.sess0.toosmall (int, count, Too small to)
- MEMPOOL.sess0.surplus (int, count, Too many for)
- MEMPOOL.sess0.randry (int, count, Pool ran dry)
- MEMPOOL.req1.live (int, count, In use)
- MEMPOOL.req1.pool (int, count, In Pool)
- MEMPOOL.req1.sz_wanted (int, count, Size requested)
- MEMPOOL.req1.sz_needed (int, count, Size allocated)
- MEMPOOL.req1.allocs (int, count, Allocations )
- MEMPOOL.req1.frees (int, count, Frees )
- MEMPOOL.req1.recycle (int, count, Recycled from pool)
- MEMPOOL.req1.timeout (int, count, Timed out from)
- MEMPOOL.req1.toosmall (int, count, Too small to)
- MEMPOOL.req1.surplus (int, count, Too many for)
- MEMPOOL.req1.randry (int, count, Pool ran dry)
- MEMPOOL.sess1.live (int, count, In use)
- MEMPOOL.sess1.pool (int, count, In Pool)
- MEMPOOL.sess1.sz_wanted (int, count, Size requested)
- MEMPOOL.sess1.sz_needed (int, count, Size allocated)
- MEMPOOL.sess1.allocs (int, count, Allocations )
- MEMPOOL.sess1.frees (int, count, Frees )
- MEMPOOL.sess1.recycle (int, count, Recycled from pool)
- MEMPOOL.sess1.timeout (int, count, Timed out from)
- MEMPOOL.sess1.toosmall (int, count, Too small to)
- MEMPOOL.sess1.surplus (int, count, Too many for)
- MEMPOOL.sess1.randry (int, count, Pool ran dry)
- SMA.s0.c_req (int, count, Allocator requests)
- SMA.s0.c_fail (int, count, Allocator failures)
- SMA.s0.c_bytes (int, count, Bytes allocated)
- SMA.s0.c_freed (int, count, Bytes freed)
- SMA.s0.g_alloc (int, count, Allocations outstanding)
- SMA.s0.g_bytes (int, count, Bytes outstanding)
- SMA.s0.g_space (int, count, Bytes available)
- SMA.Transient.c_req (int, count, Allocator requests)
- SMA.Transient.c_fail (int, count, Allocator failures)
- SMA.Transient.c_bytes (int, count, Bytes allocated)
- SMA.Transient.c_freed (int, count, Bytes freed)
- SMA.Transient.g_alloc (int, count, Allocations outstanding)
- SMA.Transient.g_bytes (int, count, Bytes outstanding)
- SMA.Transient.g_space (int, count, Bytes available)
- VBE.default(127.0.0.1,,8080).vcls (int, count, VCL references)
- VBE.default(127.0.0.1,,8080).happy (int, count, Happy health probes)
- VBE.default(127.0.0.1,,8080).bereq_hdrbytes (int, count, Request header bytes)
- VBE.default(127.0.0.1,,8080).bereq_bodybytes (int, count, Request body bytes)
- VBE.default(127.0.0.1,,8080).beresp_hdrbytes (int, count, Response header bytes)
- VBE.default(127.0.0.1,,8080).beresp_bodybytes (int, count, Response body bytes)
- VBE.default(127.0.0.1,,8080).pipe_hdrbytes (int, count, Pipe request header)
- VBE.default(127.0.0.1,,8080).pipe_out (int, count, Piped bytes to)
- VBE.default(127.0.0.1,,8080).pipe_in (int, count, Piped bytes from)
- LCK.sms.creat (int, count, Created locks)
- LCK.sms.destroy (int, count, Destroyed locks)
- LCK.sms.locks (int, count, Lock Operations)
- LCK.smp.creat (int, count, Created locks)
- LCK.smp.destroy (int, count, Destroyed locks)
- LCK.smp.locks (int, count, Lock Operations)
- LCK.sma.creat (int, count, Created locks)
- LCK.sma.destroy (int, count, Destroyed locks)
- LCK.sma.locks (int, count, Lock Operations)
- LCK.smf.creat (int, count, Created locks)
- LCK.smf.destroy (int, count, Destroyed locks)
- LCK.smf.locks (int, count, Lock Operations)
- LCK.hsl.creat (int, count, Created locks)
- LCK.hsl.destroy (int, count, Destroyed locks)
- LCK.hsl.locks (int, count, Lock Operations)
- LCK.hcb.creat (int, count, Created locks)
- LCK.hcb.destroy (int, count, Destroyed locks)
- LCK.hcb.locks (int, count, Lock Operations)
- LCK.hcl.creat (int, count, Created locks)
- LCK.hcl.destroy (int, count, Destroyed locks)
- LCK.hcl.locks (int, count, Lock Operations)
- LCK.vcl.creat (int, count, Created locks)
- LCK.vcl.destroy (int, count, Destroyed locks)
- LCK.vcl.locks (int, count, Lock Operations)
- LCK.sessmem.creat (int, count, Created locks)
- LCK.sessmem.destroy (int, count, Destroyed locks)
- LCK.sessmem.locks (int, count, Lock Operations)
- LCK.sess.creat (int, count, Created locks)
- LCK.sess.destroy (int, count, Destroyed locks)
- LCK.sess.locks (int, count, Lock Operations)
- LCK.wstat.creat (int, count, Created locks)
- LCK.wstat.destroy (int, count, Destroyed locks)
- LCK.wstat.locks (int, count, Lock Operations)
- LCK.herder.creat (int, count, Created locks)
- LCK.herder.destroy (int, count, Destroyed locks)
- LCK.herder.locks (int, count, Lock Operations)
- LCK.wq.creat (int, count, Created locks)
- LCK.wq.destroy (int, count, Destroyed locks)
- LCK.wq.locks (int, count, Lock Operations)
- LCK.objhdr.creat (int, count, Created locks)
- LCK.objhdr.destroy (int, count, Destroyed locks)
- LCK.objhdr.locks (int, count, Lock Operations)
- LCK.exp.creat (int, count, Created locks)
- LCK.exp.destroy (int, count, Destroyed locks)
- LCK.exp.locks (int, count, Lock Operations)
- LCK.lru.creat (int, count, Created locks)
- LCK.lru.destroy (int, count, Destroyed locks)
- LCK.lru.locks (int, count, Lock Operations)
- LCK.cli.creat (int, count, Created locks)
- LCK.cli.destroy (int, count, Destroyed locks)
- LCK.cli.locks (int, count, Lock Operations)
- LCK.ban.creat (int, count, Created locks)
- LCK.ban.destroy (int, count, Destroyed locks)
- LCK.ban.locks (int, count, Lock Operations)
- LCK.vbp.creat (int, count, Created locks)
- LCK.vbp.destroy (int, count, Destroyed locks)
- LCK.vbp.locks (int, count, Lock Operations)
- LCK.backend.creat (int, count, Created locks)
- LCK.backend.destroy (int, count, Destroyed locks)
- LCK.backend.locks (int, count, Lock Operations)
- LCK.vcapace.creat (int, count, Created locks)
- LCK.vcapace.destroy (int, count, Destroyed locks)
- LCK.vcapace.locks (int, count, Lock Operations)
- LCK.nbusyobj.creat (int, count, Created locks)
- LCK.nbusyobj.destroy (int, count, Destroyed locks)
- LCK.nbusyobj.locks (int, count, Lock Operations)
- LCK.busyobj.creat (int, count, Created locks)
- LCK.busyobj.destroy (int, count, Destroyed locks)
- LCK.busyobj.locks (int, count, Lock Operations)
- LCK.mempool.creat (int, count, Created locks)
- LCK.mempool.destroy (int, count, Destroyed locks)
- LCK.mempool.locks (int, count, Lock Operations)
- LCK.vxid.creat (int, count, Created locks)
- LCK.vxid.destroy (int, count, Destroyed locks)
- LCK.vxid.locks (int, count, Lock Operations)
- LCK.pipestat.creat (int, count, Created locks)
- LCK.pipestat.destroy (int, count, Destroyed locks)
- LCK.pipestat.locks (int, count, Lock Operations)
### Tags:
As indicated above, the prefix of a varnish stat will be used as its 'section' tag, so the section tag may have one of
the following values:
- section:
- MAIN
- MGT
- MEMPOOL
- SMA
- VBE
- LCK
### Example Output:
```
telegraf -test -config etc/telegraf.conf -input-filter varnish
* Plugin: varnish, Collection 1
> varnish,host=rpercy-VirtualBox,section=MAIN cache_hit=0i,cache_miss=0i,uptime=8416i 1462765437090957980
```

View File

@ -0,0 +1,164 @@
// +build !windows
package varnish
import (
"bufio"
"bytes"
"fmt"
"os"
"os/exec"
"strconv"
"strings"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
"time"
)
const (
kwAll = "all"
)
// Varnish is used to store configuration values
type Varnish struct {
Stats []string
Binary string
}
var defaultStats = []string{"MAIN.cache_hit", "MAIN.cache_miss", "MAIN.uptime"}
var defaultBinary = "/usr/bin/varnishstat"
var varnishSampleConfig = `
## The default location of the varnishstat binary can be overridden with:
binary = "/usr/bin/varnishstat"
## By default, telegraf gathers stats for 3 metric points.
## Setting stats will override the defaults shown below.
## stats may also be set to ["all"], which will collect all stats
stats = ["MAIN.cache_hit", "MAIN.cache_miss", "MAIN.uptime"]
`
func (s *Varnish) Description() string {
return "A plugin to collect stats from Varnish HTTP Cache"
}
// SampleConfig displays configuration instructions
func (s *Varnish) SampleConfig() string {
return varnishSampleConfig
}
func (s *Varnish) setDefaults() {
if len(s.Stats) == 0 {
s.Stats = defaultStats
}
if s.Binary == "" {
s.Binary = defaultBinary
}
}
// Builds a filter function that will indicate whether a given stat should
// be reported
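//
// For example (illustrative): with Stats = ["MAIN.uptime"], the returned
// function is true only for "MAIN.uptime"; with Stats = ["all"], it is
// true for every stat.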
func (s *Varnish) statsFilter() func(string) bool {
s.setDefaults()
// Build a set for constant-time lookup of whether stats should be included
filter := make(map[string]struct{})
for _, s := range s.Stats {
filter[s] = struct{}{}
}
// Create a function that respects the kwAll by always returning true
// if it is set
return func(stat string) bool {
if s.Stats[0] == kwAll {
return true
}
_, found := filter[stat]
return found
}
}
// Shell out to varnishstat and return the output
var varnishStat = func(cmdName string) (*bytes.Buffer, error) {
cmdArgs := []string{"-1"}
cmd := exec.Command(cmdName, cmdArgs...)
var out bytes.Buffer
cmd.Stdout = &out
err := internal.RunTimeout(cmd, time.Millisecond*200)
if err != nil {
return &out, fmt.Errorf("error running varnishstat: %s", err)
}
return &out, nil
}
// Gather collects the configured stats from varnishstat and adds them to the
// Accumulator
//
// The prefix of each stat (e.g. MAIN, MEMPOOL, LCK, etc.) will be used as a
// 'section' tag and all stats that share that prefix will be reported as fields
// with that tag
func (s *Varnish) Gather(acc telegraf.Accumulator) error {
s.setDefaults()
out, err := varnishStat(s.Binary)
if err != nil {
return fmt.Errorf("error gathering metrics: %s", err)
}
statsFilter := s.statsFilter()
sectionMap := make(map[string]map[string]interface{})
scanner := bufio.NewScanner(out)
for scanner.Scan() {
cols := strings.Fields(scanner.Text())
if len(cols) < 2 {
continue
}
if !strings.Contains(cols[0], ".") {
continue
}
stat := cols[0]
value := cols[1]
if !statsFilter(stat) {
continue
}
parts := strings.SplitN(stat, ".", 2)
section := parts[0]
field := parts[1]
// Init the section if necessary
if _, ok := sectionMap[section]; !ok {
sectionMap[section] = make(map[string]interface{})
}
sectionMap[section][field], err = strconv.Atoi(value)
if err != nil {
fmt.Fprintf(os.Stderr, "Expected a numeric value for %s = %v\n",
stat, value)
}
}
for section, fields := range sectionMap {
tags := map[string]string{
"section": section,
}
if len(fields) == 0 {
continue
}
acc.AddFields("varnish", fields, tags)
}
return nil
}
func init() {
inputs.Add("varnish", func() telegraf.Input { return &Varnish{} })
}

View File

@ -0,0 +1,442 @@
// +build !windows
package varnish
import (
"bytes"
"fmt"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"strings"
"testing"
)
func fakeVarnishStat(output string) func(string) (*bytes.Buffer, error) {
return func(string) (*bytes.Buffer, error) {
return bytes.NewBuffer([]byte(output)), nil
}
}
func TestConfigsUsed(t *testing.T) {
saved := varnishStat
defer func() {
varnishStat = saved
}()
expectations := map[string]string{
"": defaultBinary,
"/foo/bar/baz": "/foo/bar/baz",
}
for in, expected := range expectations {
varnishStat = func(actual string) (*bytes.Buffer, error) {
assert.Equal(t, expected, actual)
return &bytes.Buffer{}, nil
}
acc := &testutil.Accumulator{}
v := &Varnish{Binary: in}
v.Gather(acc)
}
}
func TestGather(t *testing.T) {
saved := varnishStat
defer func() {
varnishStat = saved
}()
varnishStat = fakeVarnishStat(smOutput)
acc := &testutil.Accumulator{}
v := &Varnish{Stats: []string{"all"}}
v.Gather(acc)
acc.HasMeasurement("varnish")
for tag, fields := range parsedSmOutput {
acc.AssertContainsTaggedFields(t, "varnish", fields, map[string]string{
"section": tag,
})
}
}
func TestParseFullOutput(t *testing.T) {
saved := varnishStat
defer func() {
varnishStat = saved
}()
varnishStat = fakeVarnishStat(fullOutput)
acc := &testutil.Accumulator{}
v := &Varnish{Stats: []string{"all"}}
err := v.Gather(acc)
assert.NoError(t, err)
acc.HasMeasurement("varnish")
flat := flatten(acc.Metrics)
assert.Len(t, acc.Metrics, 6)
assert.Equal(t, 293, len(flat))
}
func TestFieldConfig(t *testing.T) {
saved := varnishStat
defer func() {
varnishStat = saved
}()
varnishStat = fakeVarnishStat(fullOutput)
expect := map[string]int{
"all": 293,
"": 0, // default
"MAIN.uptime": 1,
"MEMPOOL.req0.sz_needed,MAIN.fetch_bad": 2,
}
for fieldCfg, expected := range expect {
acc := &testutil.Accumulator{}
v := &Varnish{Stats: strings.Split(fieldCfg, ",")}
err := v.Gather(acc)
assert.NoError(t, err)
acc.HasMeasurement("varnish")
flat := flatten(acc.Metrics)
assert.Equal(t, expected, len(flat))
}
}
func flatten(metrics []*testutil.Metric) map[string]interface{} {
flat := map[string]interface{}{}
for _, m := range metrics {
buf := &bytes.Buffer{}
for k, v := range m.Tags {
buf.WriteString(fmt.Sprintf("%s=%s", k, v))
}
for k, v := range m.Fields {
flat[fmt.Sprintf("%s %s", buf.String(), k)] = v
}
}
return flat
}
var smOutput = `
MAIN.uptime 895 1.00 Child process uptime
MAIN.cache_hit 95 0.00 Cache hits
MAIN.cache_miss 5 0.00 Cache misses
MGT.uptime 896 1.00 Management process uptime
MGT.child_start 1 0.00 Child process started
MEMPOOL.vbc.live 0 . In use
MEMPOOL.vbc.pool 10 . In Pool
MEMPOOL.vbc.sz_wanted 88 . Size requested
`
var parsedSmOutput = map[string]map[string]interface{}{
"MAIN": map[string]interface{}{
"uptime": 895,
"cache_hit": 95,
"cache_miss": 5,
},
"MGT": map[string]interface{}{
"uptime": 896,
"child_start": 1,
},
"MEMPOOL": map[string]interface{}{
"vbc.live": 0,
"vbc.pool": 10,
"vbc.sz_wanted": 88,
},
}
var fullOutput = `
MAIN.uptime 2872 1.00 Child process uptime
MAIN.sess_conn 0 0.00 Sessions accepted
MAIN.sess_drop 0 0.00 Sessions dropped
MAIN.sess_fail 0 0.00 Session accept failures
MAIN.sess_pipe_overflow 0 0.00 Session pipe overflow
MAIN.client_req_400 0 0.00 Client requests received, subject to 400 errors
MAIN.client_req_411 0 0.00 Client requests received, subject to 411 errors
MAIN.client_req_413 0 0.00 Client requests received, subject to 413 errors
MAIN.client_req_417 0 0.00 Client requests received, subject to 417 errors
MAIN.client_req 0 0.00 Good client requests received
MAIN.cache_hit 0 0.00 Cache hits
MAIN.cache_hitpass 0 0.00 Cache hits for pass
MAIN.cache_miss 0 0.00 Cache misses
MAIN.backend_conn 0 0.00 Backend conn. success
MAIN.backend_unhealthy 0 0.00 Backend conn. not attempted
MAIN.backend_busy 0 0.00 Backend conn. too many
MAIN.backend_fail 0 0.00 Backend conn. failures
MAIN.backend_reuse 0 0.00 Backend conn. reuses
MAIN.backend_toolate 0 0.00 Backend conn. was closed
MAIN.backend_recycle 0 0.00 Backend conn. recycles
MAIN.backend_retry 0 0.00 Backend conn. retry
MAIN.fetch_head 0 0.00 Fetch no body (HEAD)
MAIN.fetch_length 0 0.00 Fetch with Length
MAIN.fetch_chunked 0 0.00 Fetch chunked
MAIN.fetch_eof 0 0.00 Fetch EOF
MAIN.fetch_bad 0 0.00 Fetch bad T-E
MAIN.fetch_close 0 0.00 Fetch wanted close
MAIN.fetch_oldhttp 0 0.00 Fetch pre HTTP/1.1 closed
MAIN.fetch_zero 0 0.00 Fetch zero len body
MAIN.fetch_1xx 0 0.00 Fetch no body (1xx)
MAIN.fetch_204 0 0.00 Fetch no body (204)
MAIN.fetch_304 0 0.00 Fetch no body (304)
MAIN.fetch_failed 0 0.00 Fetch failed (all causes)
MAIN.fetch_no_thread 0 0.00 Fetch failed (no thread)
MAIN.pools 2 . Number of thread pools
MAIN.threads 200 . Total number of threads
MAIN.threads_limited 0 0.00 Threads hit max
MAIN.threads_created 200 0.07 Threads created
MAIN.threads_destroyed 0 0.00 Threads destroyed
MAIN.threads_failed 0 0.00 Thread creation failed
MAIN.thread_queue_len 0 . Length of session queue
MAIN.busy_sleep 0 0.00 Number of requests sent to sleep on busy objhdr
MAIN.busy_wakeup 0 0.00 Number of requests woken after sleep on busy objhdr
MAIN.sess_queued 0 0.00 Sessions queued for thread
MAIN.sess_dropped 0 0.00 Sessions dropped for thread
MAIN.n_object 0 . object structs made
MAIN.n_vampireobject 0 . unresurrected objects
MAIN.n_objectcore 0 . objectcore structs made
MAIN.n_objecthead 0 . objecthead structs made
MAIN.n_waitinglist 0 . waitinglist structs made
MAIN.n_backend 1 . Number of backends
MAIN.n_expired 0 . Number of expired objects
MAIN.n_lru_nuked 0 . Number of LRU nuked objects
MAIN.n_lru_moved 0 . Number of LRU moved objects
MAIN.losthdr 0 0.00 HTTP header overflows
MAIN.s_sess 0 0.00 Total sessions seen
MAIN.s_req 0 0.00 Total requests seen
MAIN.s_pipe 0 0.00 Total pipe sessions seen
MAIN.s_pass 0 0.00 Total pass-ed requests seen
MAIN.s_fetch 0 0.00 Total backend fetches initiated
MAIN.s_synth 0 0.00 Total synthethic responses made
MAIN.s_req_hdrbytes 0 0.00 Request header bytes
MAIN.s_req_bodybytes 0 0.00 Request body bytes
MAIN.s_resp_hdrbytes 0 0.00 Response header bytes
MAIN.s_resp_bodybytes 0 0.00 Response body bytes
MAIN.s_pipe_hdrbytes 0 0.00 Pipe request header bytes
MAIN.s_pipe_in 0 0.00 Piped bytes from client
MAIN.s_pipe_out 0 0.00 Piped bytes to client
MAIN.sess_closed 0 0.00 Session Closed
MAIN.sess_pipeline 0 0.00 Session Pipeline
MAIN.sess_readahead 0 0.00 Session Read Ahead
MAIN.sess_herd 0 0.00 Session herd
MAIN.shm_records 1918 0.67 SHM records
MAIN.shm_writes 1918 0.67 SHM writes
MAIN.shm_flushes 0 0.00 SHM flushes due to overflow
MAIN.shm_cont 0 0.00 SHM MTX contention
MAIN.shm_cycles 0 0.00 SHM cycles through buffer
MAIN.sms_nreq 0 0.00 SMS allocator requests
MAIN.sms_nobj 0 . SMS outstanding allocations
MAIN.sms_nbytes 0 . SMS outstanding bytes
MAIN.sms_balloc 0 . SMS bytes allocated
MAIN.sms_bfree 0 . SMS bytes freed
MAIN.backend_req 0 0.00 Backend requests made
MAIN.n_vcl 1 0.00 Number of loaded VCLs in total
MAIN.n_vcl_avail 1 0.00 Number of VCLs available
MAIN.n_vcl_discard 0 0.00 Number of discarded VCLs
MAIN.bans 1 . Count of bans
MAIN.bans_completed 1 . Number of bans marked 'completed'
MAIN.bans_obj 0 . Number of bans using obj.*
MAIN.bans_req 0 . Number of bans using req.*
MAIN.bans_added 1 0.00 Bans added
MAIN.bans_deleted 0 0.00 Bans deleted
MAIN.bans_tested 0 0.00 Bans tested against objects (lookup)
MAIN.bans_obj_killed 0 0.00 Objects killed by bans (lookup)
MAIN.bans_lurker_tested 0 0.00 Bans tested against objects (lurker)
MAIN.bans_tests_tested 0 0.00 Ban tests tested against objects (lookup)
MAIN.bans_lurker_tests_tested 0 0.00 Ban tests tested against objects (lurker)
MAIN.bans_lurker_obj_killed 0 0.00 Objects killed by bans (lurker)
MAIN.bans_dups 0 0.00 Bans superseded by other bans
MAIN.bans_lurker_contention 0 0.00 Lurker gave way for lookup
MAIN.bans_persisted_bytes 13 . Bytes used by the persisted ban lists
MAIN.bans_persisted_fragmentation 0 . Extra bytes in persisted ban lists due to fragmentation
MAIN.n_purges 0 . Number of purge operations executed
MAIN.n_obj_purged 0 . Number of purged objects
MAIN.exp_mailed 0 0.00 Number of objects mailed to expiry thread
MAIN.exp_received 0 0.00 Number of objects received by expiry thread
MAIN.hcb_nolock 0 0.00 HCB Lookups without lock
MAIN.hcb_lock 0 0.00 HCB Lookups with lock
MAIN.hcb_insert 0 0.00 HCB Inserts
MAIN.esi_errors 0 0.00 ESI parse errors (unlock)
MAIN.esi_warnings 0 0.00 ESI parse warnings (unlock)
MAIN.vmods 0 . Loaded VMODs
MAIN.n_gzip 0 0.00 Gzip operations
MAIN.n_gunzip 0 0.00 Gunzip operations
MAIN.vsm_free 972528 . Free VSM space
MAIN.vsm_used 83962080 . Used VSM space
MAIN.vsm_cooling 0 . Cooling VSM space
MAIN.vsm_overflow 0 . Overflow VSM space
MAIN.vsm_overflowed 0 0.00 Overflowed VSM space
MGT.uptime 2871 1.00 Management process uptime
MGT.child_start 1 0.00 Child process started
MGT.child_exit 0 0.00 Child process normal exit
MGT.child_stop 0 0.00 Child process unexpected exit
MGT.child_died 0 0.00 Child process died (signal)
MGT.child_dump 0 0.00 Child process core dumped
MGT.child_panic 0 0.00 Child process panic
MEMPOOL.vbc.live 0 . In use
MEMPOOL.vbc.pool 10 . In Pool
MEMPOOL.vbc.sz_wanted 88 . Size requested
MEMPOOL.vbc.sz_needed 120 . Size allocated
MEMPOOL.vbc.allocs 0 0.00 Allocations
MEMPOOL.vbc.frees 0 0.00 Frees
MEMPOOL.vbc.recycle 0 0.00 Recycled from pool
MEMPOOL.vbc.timeout 0 0.00 Timed out from pool
MEMPOOL.vbc.toosmall 0 0.00 Too small to recycle
MEMPOOL.vbc.surplus 0 0.00 Too many for pool
MEMPOOL.vbc.randry 0 0.00 Pool ran dry
MEMPOOL.busyobj.live 0 . In use
MEMPOOL.busyobj.pool 10 . In Pool
MEMPOOL.busyobj.sz_wanted 65536 . Size requested
MEMPOOL.busyobj.sz_needed 65568 . Size allocated
MEMPOOL.busyobj.allocs 0 0.00 Allocations
MEMPOOL.busyobj.frees 0 0.00 Frees
MEMPOOL.busyobj.recycle 0 0.00 Recycled from pool
MEMPOOL.busyobj.timeout 0 0.00 Timed out from pool
MEMPOOL.busyobj.toosmall 0 0.00 Too small to recycle
MEMPOOL.busyobj.surplus 0 0.00 Too many for pool
MEMPOOL.busyobj.randry 0 0.00 Pool ran dry
MEMPOOL.req0.live 0 . In use
MEMPOOL.req0.pool 10 . In Pool
MEMPOOL.req0.sz_wanted 65536 . Size requested
MEMPOOL.req0.sz_needed 65568 . Size allocated
MEMPOOL.req0.allocs 0 0.00 Allocations
MEMPOOL.req0.frees 0 0.00 Frees
MEMPOOL.req0.recycle 0 0.00 Recycled from pool
MEMPOOL.req0.timeout 0 0.00 Timed out from pool
MEMPOOL.req0.toosmall 0 0.00 Too small to recycle
MEMPOOL.req0.surplus 0 0.00 Too many for pool
MEMPOOL.req0.randry 0 0.00 Pool ran dry
MEMPOOL.sess0.live 0 . In use
MEMPOOL.sess0.pool 10 . In Pool
MEMPOOL.sess0.sz_wanted 384 . Size requested
MEMPOOL.sess0.sz_needed 416 . Size allocated
MEMPOOL.sess0.allocs 0 0.00 Allocations
MEMPOOL.sess0.frees 0 0.00 Frees
MEMPOOL.sess0.recycle 0 0.00 Recycled from pool
MEMPOOL.sess0.timeout 0 0.00 Timed out from pool
MEMPOOL.sess0.toosmall 0 0.00 Too small to recycle
MEMPOOL.sess0.surplus 0 0.00 Too many for pool
MEMPOOL.sess0.randry 0 0.00 Pool ran dry
MEMPOOL.req1.live 0 . In use
MEMPOOL.req1.pool 10 . In Pool
MEMPOOL.req1.sz_wanted 65536 . Size requested
MEMPOOL.req1.sz_needed 65568 . Size allocated
MEMPOOL.req1.allocs 0 0.00 Allocations
MEMPOOL.req1.frees 0 0.00 Frees
MEMPOOL.req1.recycle 0 0.00 Recycled from pool
MEMPOOL.req1.timeout 0 0.00 Timed out from pool
MEMPOOL.req1.toosmall 0 0.00 Too small to recycle
MEMPOOL.req1.surplus 0 0.00 Too many for pool
MEMPOOL.req1.randry 0 0.00 Pool ran dry
MEMPOOL.sess1.live 0 . In use
MEMPOOL.sess1.pool 10 . In Pool
MEMPOOL.sess1.sz_wanted 384 . Size requested
MEMPOOL.sess1.sz_needed 416 . Size allocated
MEMPOOL.sess1.allocs 0 0.00 Allocations
MEMPOOL.sess1.frees 0 0.00 Frees
MEMPOOL.sess1.recycle 0 0.00 Recycled from pool
MEMPOOL.sess1.timeout 0 0.00 Timed out from pool
MEMPOOL.sess1.toosmall 0 0.00 Too small to recycle
MEMPOOL.sess1.surplus 0 0.00 Too many for pool
MEMPOOL.sess1.randry 0 0.00 Pool ran dry
SMA.s0.c_req 0 0.00 Allocator requests
SMA.s0.c_fail 0 0.00 Allocator failures
SMA.s0.c_bytes 0 0.00 Bytes allocated
SMA.s0.c_freed 0 0.00 Bytes freed
SMA.s0.g_alloc 0 . Allocations outstanding
SMA.s0.g_bytes 0 . Bytes outstanding
SMA.s0.g_space 268435456 . Bytes available
SMA.Transient.c_req 0 0.00 Allocator requests
SMA.Transient.c_fail 0 0.00 Allocator failures
SMA.Transient.c_bytes 0 0.00 Bytes allocated
SMA.Transient.c_freed 0 0.00 Bytes freed
SMA.Transient.g_alloc 0 . Allocations outstanding
SMA.Transient.g_bytes 0 . Bytes outstanding
SMA.Transient.g_space 0 . Bytes available
VBE.default(127.0.0.1,,8080).vcls 1 . VCL references
VBE.default(127.0.0.1,,8080).happy 0 . Happy health probes
VBE.default(127.0.0.1,,8080).bereq_hdrbytes 0 0.00 Request header bytes
VBE.default(127.0.0.1,,8080).bereq_bodybytes 0 0.00 Request body bytes
VBE.default(127.0.0.1,,8080).beresp_hdrbytes 0 0.00 Response header bytes
VBE.default(127.0.0.1,,8080).beresp_bodybytes 0 0.00 Response body bytes
VBE.default(127.0.0.1,,8080).pipe_hdrbytes 0 0.00 Pipe request header bytes
VBE.default(127.0.0.1,,8080).pipe_out 0 0.00 Piped bytes to backend
VBE.default(127.0.0.1,,8080).pipe_in 0 0.00 Piped bytes from backend
LCK.sms.creat 0 0.00 Created locks
LCK.sms.destroy 0 0.00 Destroyed locks
LCK.sms.locks 0 0.00 Lock Operations
LCK.smp.creat 0 0.00 Created locks
LCK.smp.destroy 0 0.00 Destroyed locks
LCK.smp.locks 0 0.00 Lock Operations
LCK.sma.creat 2 0.00 Created locks
LCK.sma.destroy 0 0.00 Destroyed locks
LCK.sma.locks 0 0.00 Lock Operations
LCK.smf.creat 0 0.00 Created locks
LCK.smf.destroy 0 0.00 Destroyed locks
LCK.smf.locks 0 0.00 Lock Operations
LCK.hsl.creat 0 0.00 Created locks
LCK.hsl.destroy 0 0.00 Destroyed locks
LCK.hsl.locks 0 0.00 Lock Operations
LCK.hcb.creat 1 0.00 Created locks
LCK.hcb.destroy 0 0.00 Destroyed locks
LCK.hcb.locks 16 0.01 Lock Operations
LCK.hcl.creat 0 0.00 Created locks
LCK.hcl.destroy 0 0.00 Destroyed locks
LCK.hcl.locks 0 0.00 Lock Operations
LCK.vcl.creat 1 0.00 Created locks
LCK.vcl.destroy 0 0.00 Destroyed locks
LCK.vcl.locks 2 0.00 Lock Operations
LCK.sessmem.creat 0 0.00 Created locks
LCK.sessmem.destroy 0 0.00 Destroyed locks
LCK.sessmem.locks 0 0.00 Lock Operations
LCK.sess.creat 0 0.00 Created locks
LCK.sess.destroy 0 0.00 Destroyed locks
LCK.sess.locks 0 0.00 Lock Operations
LCK.wstat.creat 1 0.00 Created locks
LCK.wstat.destroy 0 0.00 Destroyed locks
LCK.wstat.locks 930 0.32 Lock Operations
LCK.herder.creat 0 0.00 Created locks
LCK.herder.destroy 0 0.00 Destroyed locks
LCK.herder.locks 0 0.00 Lock Operations
LCK.wq.creat 3 0.00 Created locks
LCK.wq.destroy 0 0.00 Destroyed locks
LCK.wq.locks 1554 0.54 Lock Operations
LCK.objhdr.creat 1 0.00 Created locks
LCK.objhdr.destroy 0 0.00 Destroyed locks
LCK.objhdr.locks 0 0.00 Lock Operations
LCK.exp.creat 1 0.00 Created locks
LCK.exp.destroy 0 0.00 Destroyed locks
LCK.exp.locks 915 0.32 Lock Operations
LCK.lru.creat 2 0.00 Created locks
LCK.lru.destroy 0 0.00 Destroyed locks
LCK.lru.locks 0 0.00 Lock Operations
LCK.cli.creat 1 0.00 Created locks
LCK.cli.destroy 0 0.00 Destroyed locks
LCK.cli.locks 970 0.34 Lock Operations
LCK.ban.creat 1 0.00 Created locks
LCK.ban.destroy 0 0.00 Destroyed locks
LCK.ban.locks 9413 3.28 Lock Operations
LCK.vbp.creat 1 0.00 Created locks
LCK.vbp.destroy 0 0.00 Destroyed locks
LCK.vbp.locks 0 0.00 Lock Operations
LCK.backend.creat 1 0.00 Created locks
LCK.backend.destroy 0 0.00 Destroyed locks
LCK.backend.locks 0 0.00 Lock Operations
LCK.vcapace.creat 1 0.00 Created locks
LCK.vcapace.destroy 0 0.00 Destroyed locks
LCK.vcapace.locks 0 0.00 Lock Operations
LCK.nbusyobj.creat 0 0.00 Created locks
LCK.nbusyobj.destroy 0 0.00 Destroyed locks
LCK.nbusyobj.locks 0 0.00 Lock Operations
LCK.busyobj.creat 0 0.00 Created locks
LCK.busyobj.destroy 0 0.00 Destroyed locks
LCK.busyobj.locks 0 0.00 Lock Operations
LCK.mempool.creat 6 0.00 Created locks
LCK.mempool.destroy 0 0.00 Destroyed locks
LCK.mempool.locks 15306 5.33 Lock Operations
LCK.vxid.creat 1 0.00 Created locks
LCK.vxid.destroy 0 0.00 Destroyed locks
LCK.vxid.locks 0 0.00 Lock Operations
LCK.pipestat.creat 1 0.00 Created locks
LCK.pipestat.destroy 0 0.00 Destroyed locks
LCK.pipestat.locks 0 0.00 Lock Operations
`

View File

@ -0,0 +1,3 @@
// +build windows
package varnish

View File

@ -8,6 +8,7 @@ import (
_ "github.com/influxdata/telegraf/plugins/outputs/file"
_ "github.com/influxdata/telegraf/plugins/outputs/graphite"
_ "github.com/influxdata/telegraf/plugins/outputs/influxdb"
_ "github.com/influxdata/telegraf/plugins/outputs/instrumental"
_ "github.com/influxdata/telegraf/plugins/outputs/kafka"
_ "github.com/influxdata/telegraf/plugins/outputs/kinesis"
_ "github.com/influxdata/telegraf/plugins/outputs/librato"

View File

@ -26,6 +26,7 @@ type InfluxDB struct {
UserAgent string
Precision string
RetentionPolicy string
WriteConsistency string
Timeout internal.Duration
UDPPayload int `toml:"udp_payload"`
@ -49,12 +50,15 @@ var sampleConfig = `
urls = ["http://localhost:8086"] # required
## The target database for metrics (telegraf will create it if not exists).
database = "telegraf" # required
## Retention policy to write to.
retention_policy = "default"
## Precision of writes, valid values are "ns", "us" (or "µs"), "ms", "s", "m", "h".
## note: using "s" precision greatly improves InfluxDB compression.
precision = "s"
## Retention policy to write to.
retention_policy = "default"
## Write consistency (clusters only), can be: "any", "one", "quorum", "all"
write_consistency = "any"
## Write timeout (for the InfluxDB client), formatted as a string.
## If not provided, will default to 5s. 0s means no timeout (not recommended).
timeout = "5s"
@ -182,6 +186,7 @@ func (i *InfluxDB) Write(metrics []telegraf.Metric) error {
Database: i.Database,
Precision: i.Precision,
RetentionPolicy: i.RetentionPolicy,
WriteConsistency: i.WriteConsistency,
})
if err != nil {
return err

View File

@ -0,0 +1,25 @@
# Instrumental Output Plugin
This plugin writes to the [Instrumental Collector API](https://instrumentalapp.com/docs/tcp-collector)
and requires a Project-specific API token.
Instrumental accepts stats in a format very close to Graphite, with the only difference being that
the type of stat (gauge, increment) is the first token, separated from the metric itself
by whitespace. The `increment` type is only used when the metric comes in as a counter (or histogram) through `[[inputs.statsd]]`.
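For illustration, lines on the wire look like the following (hypothetical metric names and values):
```
gauge my.prefix.host1.cpu.usage 12.3 1289430000
increment my.prefix.host1.requests.count 4 1289430000
```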
## Configuration:
```toml
[[outputs.instrumental]]
## Project API Token (required)
api_token = "API Token" # required
## Prefix the metrics with a given name
prefix = ""
## Stats output template (Graphite formatting)
## see https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md#graphite
template = "host.tags.measurement.field"
## Connection timeout, formatted as a duration string
timeout = "2s"
## Debug true - Print communication to Instrumental
debug = false
```

View File

@ -0,0 +1,192 @@
package instrumental
import (
"fmt"
"io"
"log"
"net"
"regexp"
"strings"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/outputs"
"github.com/influxdata/telegraf/plugins/serializers"
"github.com/influxdata/telegraf/plugins/serializers/graphite"
)
type Instrumental struct {
Host string
ApiToken string
Prefix string
DataFormat string
Template string
Timeout internal.Duration
Debug bool
conn net.Conn
}
const (
DefaultHost = "collector.instrumentalapp.com"
AuthFormat = "hello version go/telegraf/1.0\nauthenticate %s\n"
)
var (
StatIncludesBadChar = regexp.MustCompile("[^[:alnum:][:blank:]-_.]")
)
var sampleConfig = `
## Project API Token (required)
api_token = "API Token" # required
## Prefix the metrics with a given name
prefix = ""
## Stats output template (Graphite formatting)
## see https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md#graphite
template = "host.tags.measurement.field"
## Connection timeout, formatted as a duration string
timeout = "2s"
## Display communication to Instrumental
debug = false
`
func (i *Instrumental) Connect() error {
connection, err := net.DialTimeout("tcp", i.Host+":8000", i.Timeout.Duration)
if err != nil {
i.conn = nil
return err
}
err = i.authenticate(connection)
if err != nil {
i.conn = nil
return err
}
return nil
}
func (i *Instrumental) Close() error {
i.conn.Close()
i.conn = nil
return nil
}
func (i *Instrumental) Write(metrics []telegraf.Metric) error {
if i.conn == nil {
err := i.Connect()
if err != nil {
return fmt.Errorf("FAILED to (re)connect to Instrumental. Error: %s\n", err)
}
}
s, err := serializers.NewGraphiteSerializer(i.Prefix, i.Template)
if err != nil {
return err
}
var points []string
var metricType string
var toSerialize telegraf.Metric
var newTags map[string]string
for _, metric := range metrics {
// Pull the metric_type out of the metric's tags. We don't want the type
// to show up with the other tags pulled from the system, since the type
// goes at the beginning of the line instead.
// e.g. we want:
//
// increment some_prefix.host.tag1.tag2.tag3.field value timestamp
//
// vs
//
// increment some_prefix.host.tag1.tag2.tag3.counter.field value timestamp
//
newTags = metric.Tags()
metricType = newTags["metric_type"]
delete(newTags, "metric_type")
toSerialize, _ = telegraf.NewMetric(
metric.Name(),
newTags,
metric.Fields(),
metric.Time(),
)
stats, err := s.Serialize(toSerialize)
if err != nil {
log.Printf("Error serializing a metric to Instrumental: %s", err)
}
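// statsd counters and histograms are sent as Instrumental increments;
// everything else is sent as a gauge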
switch metricType {
case "counter":
fallthrough
case "histogram":
metricType = "increment"
default:
metricType = "gauge"
}
for _, stat := range stats {
if !StatIncludesBadChar.MatchString(stat) {
points = append(points, fmt.Sprintf("%s %s", metricType, stat))
} else if i.Debug {
log.Printf("Unable to send bad stat: %s", stat)
}
}
}
allPoints := strings.Join(points, "\n") + "\n"
_, err = fmt.Fprint(i.conn, allPoints)
if i.Debug {
log.Println(allPoints)
}
if err != nil {
if err == io.EOF {
i.Close()
}
return err
}
return nil
}
func (i *Instrumental) Description() string {
return "Configuration for sending metrics to an Instrumental project"
}
func (i *Instrumental) SampleConfig() string {
return sampleConfig
}
func (i *Instrumental) authenticate(conn net.Conn) error {
_, err := fmt.Fprintf(conn, AuthFormat, i.ApiToken)
if err != nil {
return err
}
// The response here will either be two "ok"s or an error message.
responses := make([]byte, 512)
if _, err = conn.Read(responses); err != nil {
return err
}
if string(responses)[:6] != "ok\nok\n" {
return fmt.Errorf("Authentication failed: %s", responses)
}
i.conn = conn
return nil
}
func init() {
outputs.Add("instrumental", func() telegraf.Output {
return &Instrumental{
Host: DefaultHost,
Template: graphite.DEFAULT_TEMPLATE,
}
})
}

View File

@ -0,0 +1,114 @@
package instrumental
import (
"bufio"
"net"
"net/textproto"
"sync"
"testing"
"time"
"github.com/influxdata/telegraf"
"github.com/stretchr/testify/assert"
)
func TestWrite(t *testing.T) {
var wg sync.WaitGroup
wg.Add(1)
go TCPServer(t, &wg)
// Give the fake TCP server some time to start:
time.Sleep(time.Millisecond * 100)
i := Instrumental{
Host: "127.0.0.1",
ApiToken: "abc123token",
Prefix: "my.prefix",
}
i.Connect()
// Default to gauge
m1, _ := telegraf.NewMetric(
"mymeasurement",
map[string]string{"host": "192.168.0.1"},
map[string]interface{}{"myfield": float64(3.14)},
time.Date(2010, time.November, 10, 23, 0, 0, 0, time.UTC),
)
m2, _ := telegraf.NewMetric(
"mymeasurement",
map[string]string{"host": "192.168.0.1", "metric_type": "set"},
map[string]interface{}{"value": float64(3.14)},
time.Date(2010, time.November, 10, 23, 0, 0, 0, time.UTC),
)
// Simulate a connection close and reconnect.
metrics := []telegraf.Metric{m1, m2}
i.Write(metrics)
i.Close()
// Counter and Histogram are increments
m3, _ := telegraf.NewMetric(
"my_histogram",
map[string]string{"host": "192.168.0.1", "metric_type": "histogram"},
map[string]interface{}{"value": float64(3.14)},
time.Date(2010, time.November, 10, 23, 0, 0, 0, time.UTC),
)
// We will drop metrics that simply won't be accepted by Instrumental
m4, _ := telegraf.NewMetric(
"bad_values",
map[string]string{"host": "192.168.0.1", "metric_type": "counter"},
map[string]interface{}{"value": "\" 3:30\""},
time.Date(2010, time.November, 10, 23, 0, 0, 0, time.UTC),
)
m5, _ := telegraf.NewMetric(
"my_counter",
map[string]string{"host": "192.168.0.1", "metric_type": "counter"},
map[string]interface{}{"value": float64(3.14)},
time.Date(2010, time.November, 10, 23, 0, 0, 0, time.UTC),
)
metrics = []telegraf.Metric{m3, m4, m5}
i.Write(metrics)
wg.Wait()
i.Close()
}
func TCPServer(t *testing.T, wg *sync.WaitGroup) {
tcpServer, _ := net.Listen("tcp", "127.0.0.1:8000")
defer wg.Done()
conn, _ := tcpServer.Accept()
conn.SetDeadline(time.Now().Add(1 * time.Second))
reader := bufio.NewReader(conn)
tp := textproto.NewReader(reader)
hello, _ := tp.ReadLine()
assert.Equal(t, "hello version go/telegraf/1.0", hello)
auth, _ := tp.ReadLine()
assert.Equal(t, "authenticate abc123token", auth)
conn.Write([]byte("ok\nok\n"))
data1, _ := tp.ReadLine()
assert.Equal(t, "gauge my.prefix.192_168_0_1.mymeasurement.myfield 3.14 1289430000", data1)
data2, _ := tp.ReadLine()
assert.Equal(t, "gauge my.prefix.192_168_0_1.mymeasurement 3.14 1289430000", data2)
conn, _ = tcpServer.Accept()
conn.SetDeadline(time.Now().Add(1 * time.Second))
reader = bufio.NewReader(conn)
tp = textproto.NewReader(reader)
hello, _ = tp.ReadLine()
assert.Equal(t, "hello version go/telegraf/1.0", hello)
auth, _ = tp.ReadLine()
assert.Equal(t, "authenticate abc123token", auth)
conn.Write([]byte("ok\nok\n"))
data3, _ := tp.ReadLine()
assert.Equal(t, "increment my.prefix.192_168_0_1.my_histogram 3.14 1289430000", data3)
data4, _ := tp.ReadLine()
assert.Equal(t, "increment my.prefix.192_168_0_1.my_counter 3.14 1289430000", data4)
conn.Close()
}

View File

@ -138,7 +138,7 @@ case $1 in
if which start-stop-daemon > /dev/null 2>&1; then
start-stop-daemon --chuid $USER:$GROUP --start --quiet --pidfile $pidfile --exec $daemon -- -pidfile $pidfile -config $config -config-directory $confdir $TELEGRAF_OPTS >>$STDOUT 2>>$STDERR &
else
nohup sudo -u $USER $daemon -pidfile $pidfile -config $config -config-directory $confdir $TELEGRAF_OPTS >>$STDOUT 2>>$STDERR &
su -s /bin/sh -c "nohup $daemon -pidfile $pidfile -config $config -config-directory $confdir $TELEGRAF_OPTS >>$STDOUT 2>>$STDERR &" $USER
fi
log_success_msg "$name process was started"
;;

View File

@ -6,10 +6,12 @@ After=network.target
[Service]
EnvironmentFile=-/etc/default/telegraf
User=telegraf
ExecStart=/usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d ${TELEGRAF_OPTS}
Environment='STDOUT=/var/log/telegraf/telegraf.log'
Environment='STDERR=/var/log/telegraf/telegraf.log'
ExecStart=/bin/sh -c "/usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d ${TELEGRAF_OPTS} >>${STDOUT} 2>>${STDERR}"
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure
KillMode=process
KillMode=control-group
[Install]
WantedBy=multi-user.target