Thibault Cohen 2016-01-08 00:40:31 +00:00
commit c18951323c
192 changed files with 5055 additions and 5400 deletions

View File

@ -1,3 +1,33 @@
## v0.3.0 [unreleased]
### Release Notes
- **breaking change** `plugins` have been renamed to `inputs`. This was done because
`plugins` is too generic, as there are now also "output plugins", and there will likely
be "aggregator plugins" and "filter plugins" in the future. Additionally, the
`inputs/` and `outputs/` directories have been placed in the root-level `plugins/`
directory. A before/after config sketch follows these notes.
- **breaking change** the `io` plugin has been renamed `diskio`
- **breaking change** Plugin measurements aggregated into a single measurement.
- **breaking change** `jolokia` plugin: must use global tag/drop/pass parameters
for configuration.
- **breaking change** `twemproxy` plugin: `prefix` option removed.
- **breaking change** `procstat` cpu measurements are now prepended with `cpu_time_`
instead of only `cpu_`
- **breaking change** some command-line flags have been renamed to separate words.
`-configdirectory` -> `-config-directory`, `-filter` -> `-input-filter`,
`-outputfilter` -> `-output-filter`
- The prometheus plugin schema has not been changed (measurements have not been
aggregated).
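As a quick before/after sketch of what the renames mean for a config file (a minimal, illustrative example only):
```toml
# 0.2.x
[plugins]
  [[plugins.cpu]]
    percpu = true
  [[plugins.io]]

# 0.3.0: `plugins` becomes `inputs`, and the `io` plugin becomes `diskio`
[inputs]
  [[inputs.cpu]]
    percpu = true
  [[inputs.diskio]]
```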
### Features
- Plugin measurements aggregated into a single measurement.
- Added ability to specify per-plugin tags
- Added ability to specify per-plugin measurement suffix and prefix.
(`name_prefix` and `name_suffix`)
- Added ability to override base plugin measurement name. (`name_override`)
### Bugfixes
## v0.2.5 [unreleased]
### Features
@ -38,11 +68,11 @@ functional.
same type can be specified, like this:
```
[[plugins.cpu]]
[[inputs.cpu]]
percpu = false
totalcpu = true
[[plugins.cpu]]
[[inputs.cpu]]
percpu = true
totalcpu = false
drop = ["cpu_time"]
@ -69,7 +99,7 @@ same type can be specified, like this:
lists of servers/URLs. 0.2.2 is being released solely to fix that bug
### Bugfixes
- [#377](https://github.com/influxdb/telegraf/pull/377): Fix for duplicate slices in plugins.
- [#377](https://github.com/influxdb/telegraf/pull/377): Fix for duplicate slices in inputs.
## v0.2.1 [2015-11-16]
@ -130,7 +160,7 @@ be controlled via the `round_interval` and `flush_jitter` config options.
- [#241](https://github.com/influxdb/telegraf/pull/241): MQTT Output. Thanks @shirou!
- Memory plugin: cached and buffered measurements re-added
- Logging: additional logging for each collection interval, track the number
of metrics collected and from how many plugins.
of metrics collected and from how many inputs.
- [#240](https://github.com/influxdb/telegraf/pull/240): procstat plugin, thanks @ranjib!
- [#244](https://github.com/influxdb/telegraf/pull/244): netstat plugin, thanks @shirou!
- [#262](https://github.com/influxdb/telegraf/pull/262): zookeeper plugin, thanks @jrxFive!
@ -163,7 +193,7 @@ will still be backwards compatible if only `url` is specified.
- The -test flag will now output two metric collections
- Support for filtering telegraf outputs on the CLI -- Telegraf will now
allow filtering of output sinks on the command-line using the `-outputfilter`
flag, much like how the `-filter` flag works for plugins.
flag, much like how the `-filter` flag works for inputs.
- Support for filtering on config-file creation -- Telegraf now supports
filtering to -sample-config command. You can now run
`telegraf -sample-config -filter cpu -outputfilter influxdb` to get a config

189
CONFIGURATION.md Normal file
View File

@ -0,0 +1,189 @@
# Telegraf Configuration
## Generating a config file
A default Telegraf config file can be generated using the `-sample-config` flag,
like this: `telegraf -sample-config`
To generate a file with specific inputs and outputs, you can use the
`-input-filter` and `-output-filter` flags, like this:
`telegraf -sample-config -input-filter cpu:mem:net:swap -output-filter influxdb:kafka`
## Input Configuration
The following configuration options are available on a per-plugin basis:
* **name_override**: Override the base name of the measurement.
(Default is the name of the plugin).
* **name_prefix**: Specifies a prefix to attach to the measurement name.
* **name_suffix**: Specifies a suffix to attach to the measurement name.
* **tags**: A map of tags to apply to a specific plugin's measurements.
* **interval**: How often to gather this metric. Normal plugins use a single
global interval, but if one particular plugin should be run less or more often,
you can configure that here (see the example after this list).
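For example, the collection interval can be overridden for a single plugin like this (a minimal sketch; the `30s` value is illustrative):
```toml
# Collect CPU metrics every 30s instead of the global agent interval
[[inputs.cpu]]
  interval = "30s"
  percpu = true
  totalcpu = true
```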
### Input Filters
There are also filters that can be configured per plugin:
* **pass**: An array of strings used to filter fields generated by the
current plugin. Each string in the array is tested as a glob match against field names,
and if it matches, the field is emitted.
* **drop**: The inverse of pass; if a field name matches, it is not emitted.
* **tagpass**: A map of tag names to arrays of strings, used to filter
measurements from the current plugin. Each string in the array is tested as a glob
match against the tag's value, and if it matches, the measurement is emitted.
* **tagdrop**: The inverse of tagpass. If a tag matches, the measurement is not
emitted. This is tested on measurements that have passed the tagpass test.
### Input Configuration Examples
This is a full working config that will output CPU data to an InfluxDB instance
at 192.168.59.103:8086, tagging measurements with dc="denver-1". It will output
measurements at a 10s interval and will collect per-cpu data, dropping any
fields which begin with `time_`.
```toml
[tags]
dc = "denver-1"
[agent]
interval = "10s"
# OUTPUTS
[outputs]
[[outputs.influxdb]]
url = "http://192.168.59.103:8086" # required.
database = "telegraf" # required.
precision = "s"
# INPUTS
[inputs]
[[inputs.cpu]]
percpu = true
totalcpu = false
# filter all fields beginning with 'time_'
drop = ["time_*"]
```
### Input Config: tagpass and tagdrop
```toml
[inputs]
[[inputs.cpu]]
percpu = true
totalcpu = false
drop = ["cpu_time"]
# Don't collect CPU data for cpu6 & cpu7
[inputs.cpu.tagdrop]
cpu = [ "cpu6", "cpu7" ]
[[inputs.disk]]
[inputs.disk.tagpass]
# tagpass conditions are OR, not AND.
# If the (filesystem is ext4 or xfs) OR (the path is /opt or /home)
# then the metric passes
fstype = [ "ext4", "xfs" ]
# Globs can also be used on the tag values
path = [ "/opt", "/home*" ]
```
### Input Config: pass and drop
```toml
# Drop all metrics for guest & steal CPU usage
[[inputs.cpu]]
percpu = false
totalcpu = true
drop = ["usage_guest", "usage_steal"]
# Only store inode related metrics for disks
[[inputs.disk]]
pass = ["inodes*"]
```
### Input Config: prefix, suffix, and override
This plugin will emit measurements with the name `cpu_total`
```toml
[[inputs.cpu]]
name_suffix = "_total"
percpu = false
totalcpu = true
```
This will emit measurements with the name `foobar`
```toml
[[inputs.cpu]]
name_override = "foobar"
percpu = false
totalcpu = true
```
### Input Config: tags
This plugin will emit measurements with two additional tags: `tag1=foo` and
`tag2=bar`
```toml
[[inputs.cpu]]
percpu = false
totalcpu = true
[inputs.cpu.tags]
tag1 = "foo"
tag2 = "bar"
```
### Multiple inputs of the same type
Additional inputs (or outputs) of the same type can be specified by defining
more instances in the config file. It is highly recommended that you use the
`name_override`, `name_prefix`, or `name_suffix` config options
to avoid measurement collisions:
```toml
[[inputs.cpu]]
percpu = false
totalcpu = true
[[inputs.cpu]]
percpu = true
totalcpu = false
name_override = "percpu_usage"
drop = ["cpu_time*"]
```
## Output Configuration
Telegraf also supports specifying multiple output sinks to send data to.
Configuring each output sink is different, but examples can be
found by running `telegraf -sample-config`.
Outputs also support the same configurable options as inputs
(pass, drop, tagpass, tagdrop):
```toml
[[outputs.influxdb]]
urls = [ "http://localhost:8086" ]
database = "telegraf"
precision = "s"
# Drop all measurements that start with "aerospike"
drop = ["aerospike*"]
[[outputs.influxdb]]
urls = [ "http://localhost:8086" ]
database = "telegraf-aerospike-data"
precision = "s"
# Only accept aerospike data:
pass = ["aerospike*"]
[[outputs.influxdb]]
urls = [ "http://localhost:8086" ]
database = "telegraf-cpu0-data"
precision = "s"
# Only store measurements where the tag "cpu" matches the value "cpu0"
[outputs.influxdb.tagpass]
cpu = ["cpu0"]
```

View File

@ -5,23 +5,23 @@ which can be found [on our website](http://influxdb.com/community/cla.html)
## Plugins
This section is for developers who want to create new collection plugins.
This section is for developers who want to create new collection inputs.
Telegraf is entirely plugin driven. This interface allows operators to
pick and choose what is gathered, and makes it easy for developers
to create new ways of generating metrics.
Plugin authorship is kept as simple as possible to encourage people to develop
and submit new plugins.
and submit new inputs.
### Plugin Guidelines
* A plugin must conform to the `plugins.Plugin` interface.
* A plugin must conform to the `inputs.Input` interface.
* Each generated metric automatically has the name of the plugin that generated
it prepended. This is to keep plugins honest.
* Plugins should call `plugins.Add` in their `init` function to register themselves.
* Plugins should call `inputs.Add` in their `init` function to register themselves.
See below for a quick example.
* To be available within Telegraf itself, plugins must add themselves to the
`github.com/influxdb/telegraf/plugins/all/all.go` file.
`github.com/influxdb/telegraf/plugins/inputs/all/all.go` file.
* The `SampleConfig` function should return valid toml that describes how the
plugin can be configured. This is included in `telegraf -sample-config`.
* The `Description` function should say in one line what this plugin does.
@ -78,7 +78,7 @@ type Process struct {
PID int
}
func Gather(acc plugins.Accumulator) error {
func Gather(acc inputs.Accumulator) error {
for _, process := range system.Processes() {
tags := map[string]string {
"pid": fmt.Sprintf("%d", process.Pid),
@ -97,7 +97,7 @@ package simple
// simple.go
import "github.com/influxdb/telegraf/plugins"
import "github.com/influxdb/telegraf/plugins/inputs"
type Simple struct {
Ok bool
@ -111,7 +111,7 @@ func (s *Simple) SampleConfig() string {
return "ok = true # indicate if everything is fine"
}
func (s *Simple) Gather(acc plugins.Accumulator) error {
func (s *Simple) Gather(acc inputs.Accumulator) error {
if s.Ok {
acc.Add("state", "pretty good", nil)
} else {
@ -122,14 +122,14 @@ func (s *Simple) Gather(acc plugins.Accumulator) error {
}
func init() {
plugins.Add("simple", func() plugins.Plugin { return &Simple{} })
inputs.Add("simple", func() inputs.Input { return &Simple{} })
}
```
## Service Plugins
This section is for developers who want to create new "service" collection
plugins. A service plugin differs from a regular plugin in that it operates
inputs. A service plugin differs from a regular plugin in that it operates
a background service while Telegraf is running. One example would be the `statsd`
plugin, which operates a statsd server.
@ -143,7 +143,7 @@ and `Stop()` methods.
### Service Plugin Guidelines
* Same as the `Plugin` guidelines, except that they must conform to the
`plugins.ServicePlugin` interface.
`inputs.ServiceInput` interface.
### Service Plugin interface
@ -169,7 +169,7 @@ similar constructs.
* Outputs should call `outputs.Add` in their `init` function to register themselves.
See below for a quick example.
* To be available within Telegraf itself, plugins must add themselves to the
`github.com/influxdb/telegraf/outputs/all/all.go` file.
`github.com/influxdb/telegraf/plugins/outputs/all/all.go` file.
* The `SampleConfig` function should return valid toml that describes how the
output can be configured. This is included in `telegraf -sample-config`.
* The `Description` function should say in one line what this output does.
@ -193,7 +193,7 @@ package simpleoutput
// simpleoutput.go
import "github.com/influxdb/telegraf/outputs"
import "github.com/influxdb/telegraf/plugins/outputs"
type Simple struct {
Ok bool
@ -243,7 +243,7 @@ and `Stop()` methods.
### Service Output Guidelines
* Same as the `Output` guidelines, except that they must conform to the
`plugins.ServiceOutput` interface.
`inputs.ServiceOutput` interface.
### Service Output interface

38
Godeps
View File

@ -1,51 +1,51 @@
git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git dbd8d5c40a582eb9adacde36b47932b3a3ad0034
github.com/Shopify/sarama 159e9990b0796511607dd0d7aaa3eb37d1829d16
github.com/Shopify/sarama d37c73f2b2bce85f7fa16b6a550d26c5372892ef
github.com/Sirupsen/logrus 446d1c146faa8ed3f4218f056fcd165f6bcfda81
github.com/amir/raidman 6a8e089bbe32e6b907feae5ba688841974b3c339
github.com/armon/go-metrics 06b60999766278efd6d2b5d8418a58c3d5b99e87
github.com/aws/aws-sdk-go 999b1591218c36d5050d1ba7266eba956e65965f
github.com/armon/go-metrics 345426c77237ece5dab0e1605c3e4b35c3f54757
github.com/aws/aws-sdk-go f09322ae1e6468fe828c862542389bc45baf3c00
github.com/beorn7/perks b965b613227fddccbfffe13eae360ed3fa822f8d
github.com/boltdb/bolt b34b35ea8d06bb9ae69d9a349119252e4c1d8ee0
github.com/boltdb/bolt 34a0fa5307f7562980fb8e7ff4723f7987edf49b
github.com/cenkalti/backoff 4dc77674aceaabba2c7e3da25d4c823edfb73f99
github.com/dancannon/gorethink a124c9663325ed9f7fb669d17c69961b59151e6e
github.com/davecgh/go-spew 5215b55f46b2b919f50a1df0eaa5886afe4e3b3d
github.com/eapache/go-resiliency f341fb4dca45128e4aa86389fa6a675d55fe25e1
github.com/eapache/go-resiliency b86b1ec0dd4209a588dc1285cdd471e73525c0b3
github.com/eapache/queue ded5959c0d4e360646dc9e9908cff48666781367
github.com/fsouza/go-dockerclient 7177a9e3543b0891a5d91dbf7051e0f71455c8ef
github.com/go-ini/ini 9314fb0ef64171d6a3d0a4fa570dfa33441cba05
github.com/go-sql-driver/mysql d512f204a577a4ab037a1816604c48c9c13210be
github.com/gogo/protobuf e492fd34b12d0230755c45aa5fb1e1eea6a84aa9
github.com/golang/protobuf 68415e7123da32b07eab49c96d2c4d6158360e9b
github.com/fsouza/go-dockerclient 175e1df973274f04e9b459a62cffc49808f1a649
github.com/go-ini/ini afbd495e5aaea13597b5e14fe514ddeaa4d76fc3
github.com/go-sql-driver/mysql 7a8740a6bd8feb6af5786ab9a9f1513970019d8c
github.com/gogo/protobuf 7b1331554dbe882cb3613ee8f1824a5583627963
github.com/golang/protobuf 2402d76f3d41f928c7902a765dfc872356dd3aad
github.com/golang/snappy 723cc1e459b8eea2dea4583200fd60757d40097a
github.com/gonuts/go-shellquote e842a11b24c6abfb3dd27af69a17f482e4b483c2
github.com/hailocab/go-hostpool 0637eae892be221164aff5fcbccc57171aea6406
github.com/hailocab/go-hostpool 50839ee41f32bfca8d03a183031aa634b2dc1c64
github.com/hashicorp/go-msgpack fa3f63826f7c23912c15263591e65d54d080b458
github.com/hashicorp/raft d136cd15dfb7876fd7c89cad1995bc4f19ceb294
github.com/hashicorp/raft-boltdb d1e82c1ec3f15ee991f7cc7ffd5b67ff6f5bbaee
github.com/influxdb/influxdb 69a7664f2d4b75aec300b7cbfc7e57c971721f04
github.com/influxdb/influxdb bd63489ef0faae2465ae5b1f0a28bd7e71e02e38
github.com/jmespath/go-jmespath c01cf91b011868172fdcd9f41838e80c9d716264
github.com/klauspost/crc32 0aff1ea9c20474c3901672b5b6ead0ac611156de
github.com/klauspost/crc32 a3b15ae34567abb20a22992b989cd76f48d09c47
github.com/lib/pq 11fc39a580a008f1f39bb3d11d984fb34ed778d9
github.com/matttproud/golang_protobuf_extensions d0c3fe89de86839aecf2e0579c40ba3bb336a453
github.com/mreiferson/go-snappystream 028eae7ab5c4c9e2d1cb4c4ca1e53259bbe7e504
github.com/naoina/go-stringutil 6b638e95a32d0c1131db0e7fe83775cbea4a0d0b
github.com/naoina/toml 751171607256bb66e64c9f0220c00662420c38e9
github.com/nsqio/go-nsq 2118015c120962edc5d03325c680daf3163a8b5f
github.com/pborman/uuid cccd189d45f7ac3368a0d127efb7f4d08ae0b655
github.com/pborman/uuid dee7705ef7b324f27ceb85a121c61f2c2e8ce988
github.com/pmezard/go-difflib e8554b8641db39598be7f6342874b958f12ae1d4
github.com/prometheus/client_golang 67994f177195311c3ea3d4407ed0175e34a4256f
github.com/prometheus/client_model fa8ad6fec33561be4280a8f0514318c79d7f6cb6
github.com/prometheus/common 56b90312e937d43b930f06a59bf0d6a4ae1944bc
github.com/prometheus/common 0a3005bb37bc411040083a55372e77c405f6464c
github.com/prometheus/procfs 406e5b7bfd8201a36e2bb5f7bdae0b03380c2ce8
github.com/samuel/go-zookeeper 218e9c81c0dd8b3b18172b2bbfad92cc7d6db55f
github.com/shirou/gopsutil fc932d9090f13a84fb4b3cb8baa124610cab184c
github.com/shirou/gopsutil ef151b7ff7fe76308f89a389447b7b78dfa02e0f
github.com/streadway/amqp b4f3ceab0337f013208d31348b578d83c0064744
github.com/stretchr/objx 1a9d0bb9f541897e62256577b352fdbc1fb4fd94
github.com/stretchr/testify e3a8ff8ce36581f87a15341206f205b1da467059
github.com/stretchr/testify c92828f29518bc633893affbce12904ba41a7cfa
github.com/wvanbergen/kafka 1a8639a45164fcc245d5c7b4bd3ccfbd1a0ffbf3
github.com/wvanbergen/kazoo-go 0f768712ae6f76454f987c3356177e138df258f8
golang.org/x/crypto 7b85b097bf7527677d54d3220065e966a0e3b613
golang.org/x/net 1796f9b8b7178e3c7587dff118d3bb9d37f9b0b3
golang.org/x/crypto f23ba3a5ee43012fcb4b92e1a2a405a92554f4f2
golang.org/x/net 520af5de654dc4dd4f0f65aa40e66dbbd9043df1
gopkg.in/dancannon/gorethink.v1 a124c9663325ed9f7fb669d17c69961b59151e6e
gopkg.in/fatih/pool.v2 cba550ebf9bce999a02e963296d4bc7a486cb715
gopkg.in/mgo.v2 e30de8ac9ae3b30df7065f766c71f88bba7d4e49

198
README.md
View File

@ -1,23 +1,35 @@
# Telegraf - A native agent for InfluxDB [![Circle CI](https://circleci.com/gh/influxdata/telegraf.svg?style=svg)](https://circleci.com/gh/influxdata/telegraf)
# Telegraf [![Circle CI](https://circleci.com/gh/influxdata/telegraf.svg?style=svg)](https://circleci.com/gh/influxdata/telegraf)
Telegraf is an agent written in Go for collecting metrics from the system it's
running on, or from other services, and writing them into InfluxDB.
running on, or from other services, and writing them into InfluxDB or other
[outputs](https://github.com/influxdata/telegraf#supported-output-plugins).
Design goals are to have a minimal memory footprint with a plugin system so
that developers in the community can easily add support for collecting metrics
from well known services (like Hadoop, Postgres, or Redis) and third party
APIs (like Mailchimp, AWS CloudWatch, or Google Analytics).
We'll eagerly accept pull requests for new plugins and will manage the set of
plugins that Telegraf supports. See the
[contributing guide](CONTRIBUTING.md) for instructions on
writing new plugins.
New input and output plugins are designed to be easy to contribute; we'll eagerly
accept pull requests and will manage the set of plugins that Telegraf supports.
See the [contributing guide](CONTRIBUTING.md) for instructions on writing
new plugins.
## Installation:
NOTE: Telegraf 0.3.x is **not** backwards-compatible with previous versions of
telegraf, both in the database layout and the configuration file. 0.2.x will
continue to be supported; see below for download links.
TODO: link to blog post about 0.3.x changes.
### Linux deb and rpm packages:
Latest:
* http://get.influxdb.org/telegraf/telegraf_0.3.0_amd64.deb
* http://get.influxdb.org/telegraf/telegraf-0.3.0-1.x86_64.rpm
0.2.x:
* http://get.influxdb.org/telegraf/telegraf_0.2.4_amd64.deb
* http://get.influxdb.org/telegraf/telegraf-0.2.4-1.x86_64.rpm
@ -33,6 +45,11 @@ controlled via `systemctl [action] telegraf`
### Linux binaries:
Latest:
* http://get.influxdb.org/telegraf/telegraf_linux_amd64_0.3.0.tar.gz
* http://get.influxdb.org/telegraf/telegraf_linux_386_0.3.0.tar.gz
* http://get.influxdb.org/telegraf/telegraf_linux_arm_0.3.0.tar.gz
0.2.x:
* http://get.influxdb.org/telegraf/telegraf_linux_amd64_0.2.4.tar.gz
* http://get.influxdb.org/telegraf/telegraf_linux_386_0.2.4.tar.gz
* http://get.influxdb.org/telegraf/telegraf_linux_arm_0.2.4.tar.gz
@ -51,32 +68,6 @@ brew update
brew install telegraf
```
### Version 0.3.0 Beta
Version 0.3.0 will introduce many new breaking changes to Telegraf. For starters,
plugin measurements will be aggregated into fields. This means that there will no
longer be a `cpu_usage_idle` measurement, there will be a `cpu` measurement with
a `usage_idle` field.
There will also be config file changes, meaning that your 0.2.x Telegraf config
files will no longer work properly. It is recommended that you use the
`-sample-config` flag to generate a new config file to see what the changes are.
You can also read the
[0.3.0 configuration guide](https://github.com/influxdb/telegraf/blob/0.3.0/CONFIGURATION.md)
to see some of the new features and options available.
You can read more about the justifications for the aggregated measurements
[here](https://github.com/influxdb/telegraf/issues/152), and a more detailed
breakdown of the work [here](https://github.com/influxdb/telegraf/pull/437).
Once we're closer to a full release, there will be a detailed blog post
explaining all the changes.
* http://get.influxdb.org/telegraf/telegraf_0.3.0-beta2_amd64.deb
* http://get.influxdb.org/telegraf/telegraf-0.3.0_beta2-1.x86_64.rpm
* http://get.influxdb.org/telegraf/telegraf_linux_amd64_0.3.0-beta2.tar.gz
* http://get.influxdb.org/telegraf/telegraf_linux_386_0.3.0-beta2.tar.gz
* http://get.influxdb.org/telegraf/telegraf_linux_arm_0.3.0-beta2.tar.gz
### From Source:
Telegraf manages dependencies via [gdm](https://github.com/sparrc/gdm),
@ -92,7 +83,7 @@ if you don't have it already. You also must build with golang version 1.4+.
### How to use it:
* Run `telegraf -sample-config > telegraf.conf` to create an initial configuration.
* Or run `telegraf -sample-config -filter cpu:mem -outputfilter influxdb > telegraf.conf`.
* Or run `telegraf -sample-config -input-filter cpu:mem -output-filter influxdb > telegraf.conf`.
to create a config file with only CPU and memory plugins defined, and InfluxDB
output defined.
* Edit the configuration to match your needs.
@ -100,7 +91,7 @@ output defined.
sample to STDOUT. NOTE: you may want to run as the telegraf user if you are using
the linux packages `sudo -u telegraf telegraf -config telegraf.conf -test`
* Run `telegraf -config telegraf.conf` to gather and send metrics to configured outputs.
* Run `telegraf -config telegraf.conf -filter system:swap`.
* Run `telegraf -config telegraf.conf -input-filter system:swap`.
to run telegraf with only the system & swap plugins defined in the config.
## Telegraf Options
@ -116,101 +107,12 @@ unit parser, e.g. "10s" for 10 seconds or "5m" for 5 minutes.
* **debug**: Set to true to gather and send metrics to STDOUT as well as
InfluxDB.
## Plugin Options
## Configuration
There are 5 configuration options that are configurable per plugin:
See the [configuration guide](CONFIGURATION.md) for a rundown of the more advanced
configuration options.
* **pass**: An array of strings that is used to filter metrics generated by the
current plugin. Each string in the array is tested as a glob match against metric names
and if it matches, the metric is emitted.
* **drop**: The inverse of pass, if a metric name matches, it is not emitted.
* **tagpass**: tag names and arrays of strings that are used to filter metrics by the current plugin. Each string in the array is tested as a glob match against
the tag name, and if it matches the metric is emitted.
* **tagdrop**: The inverse of tagpass. If a tag matches, the metric is not emitted.
This is tested on metrics that have passed the tagpass test.
* **interval**: How often to gather this metric. Normal plugins use a single
global interval, but if one particular plugin should be run less or more often,
you can configure that here.
### Plugin Configuration Examples
This is a full working config that will output CPU data to an InfluxDB instance
at 192.168.59.103:8086, tagging measurements with dc="denver-1". It will output
measurements at a 10s interval and will collect per-cpu data, dropping any
measurements which begin with `cpu_time`.
```toml
[tags]
dc = "denver-1"
[agent]
interval = "10s"
# OUTPUTS
[outputs]
[[outputs.influxdb]]
url = "http://192.168.59.103:8086" # required.
database = "telegraf" # required.
precision = "s"
# PLUGINS
[plugins]
[[plugins.cpu]]
percpu = true
totalcpu = false
drop = ["cpu_time*"]
```
Below is how to configure `tagpass` and `tagdrop` parameters
```toml
[plugins]
[[plugins.cpu]]
percpu = true
totalcpu = false
drop = ["cpu_time"]
# Don't collect CPU data for cpu6 & cpu7
[plugins.cpu.tagdrop]
cpu = [ "cpu6", "cpu7" ]
[[plugins.disk]]
[plugins.disk.tagpass]
# tagpass conditions are OR, not AND.
# If the (filesystem is ext4 or xfs) OR (the path is /opt or /home)
# then the metric passes
fstype = [ "ext4", "xfs" ]
# Globs can also be used on the tag values
path = [ "/opt", "/home*" ]
```
Below is how to configure `pass` and `drop` parameters
```toml
# Drop all metrics for guest CPU usage
[[plugins.cpu]]
drop = [ "cpu_usage_guest" ]
# Only store inode related metrics for disks
[[plugins.disk]]
pass = [ "disk_inodes*" ]
```
Additional plugins (or outputs) of the same type can be specified,
just define more instances in the config file:
```toml
[[plugins.cpu]]
percpu = false
totalcpu = true
[[plugins.cpu]]
percpu = true
totalcpu = false
drop = ["cpu_time*"]
```
## Supported Plugins
## Supported Input Plugins
**You can view usage instructions for each plugin by running**
`telegraf -usage <pluginname>`.
@ -226,7 +128,7 @@ Telegraf currently has support for collecting metrics from:
* haproxy
* httpjson (generic JSON-emitting http service plugin)
* influxdb
* jolokia (remote JMX with JSON over HTTP)
* jolokia
* leofs
* lustre2
* mailchimp
@ -249,13 +151,13 @@ Telegraf currently has support for collecting metrics from:
* system
* cpu
* mem
* io
* net
* netstat
* disk
* diskio
* swap
## Supported Service Plugins
## Supported Input Service Plugins
Telegraf can collect metrics via the following services:
@ -265,41 +167,7 @@ Telegraf can collect metrics via the following services:
We'll be adding support for many more over the coming months. Read on if you
want to add support for another service or third-party API.
## Output options
Telegraf also supports specifying multiple output sinks to send data to,
configuring each output sink is different, but examples can be
found by running `telegraf -sample-config`.
Outputs also support the same configurable options as plugins
(pass, drop, tagpass, tagdrop), added in 0.2.4
```toml
[[outputs.influxdb]]
urls = [ "http://localhost:8086" ]
database = "telegraf"
precision = "s"
# Drop all measurements that start with "aerospike"
drop = ["aerospike*"]
[[outputs.influxdb]]
urls = [ "http://localhost:8086" ]
database = "telegraf-aerospike-data"
precision = "s"
# Only accept aerospike data:
pass = ["aerospike*"]
[[outputs.influxdb]]
urls = [ "http://localhost:8086" ]
database = "telegraf-cpu0-data"
precision = "s"
# Only store measurements where the tag "cpu" matches the value "cpu0"
[outputs.influxdb.tagpass]
cpu = ["cpu0"]
```
## Supported Outputs
## Supported Output Plugins
* influxdb
* nsq

View File

@ -29,12 +29,12 @@ type Accumulator interface {
}
func NewAccumulator(
pluginConfig *config.PluginConfig,
inputConfig *config.InputConfig,
points chan *client.Point,
) Accumulator {
acc := accumulator{}
acc.points = points
acc.pluginConfig = pluginConfig
acc.inputConfig = inputConfig
return &acc
}
@ -47,7 +47,7 @@ type accumulator struct {
debug bool
pluginConfig *config.PluginConfig
inputConfig *config.InputConfig
prefix string
}
@ -69,30 +69,76 @@ func (ac *accumulator) AddFields(
tags map[string]string,
t ...time.Time,
) {
// Validate uint64 and float64 fields
if len(fields) == 0 || len(measurement) == 0 {
return
}
if !ac.inputConfig.Filter.ShouldTagsPass(tags) {
return
}
// Override measurement name if set
if len(ac.inputConfig.NameOverride) != 0 {
measurement = ac.inputConfig.NameOverride
}
// Apply measurement prefix and suffix if set
if len(ac.inputConfig.MeasurementPrefix) != 0 {
measurement = ac.inputConfig.MeasurementPrefix + measurement
}
if len(ac.inputConfig.MeasurementSuffix) != 0 {
measurement = measurement + ac.inputConfig.MeasurementSuffix
}
if tags == nil {
tags = make(map[string]string)
}
// Apply plugin-wide tags if set
for k, v := range ac.inputConfig.Tags {
if _, ok := tags[k]; !ok {
tags[k] = v
}
}
// Apply daemon-wide tags if set
for k, v := range ac.defaultTags {
if _, ok := tags[k]; !ok {
tags[k] = v
}
}
result := make(map[string]interface{})
for k, v := range fields {
// Filter out any filtered fields
if ac.inputConfig != nil {
if !ac.inputConfig.Filter.ShouldPass(k) {
continue
}
}
result[k] = v
// Validate uint64 and float64 fields
switch val := v.(type) {
case uint64:
// InfluxDB does not support writing uint64
if val < uint64(9223372036854775808) {
fields[k] = int64(val)
result[k] = int64(val)
} else {
fields[k] = int64(9223372036854775807)
result[k] = int64(9223372036854775807)
}
case float64:
// NaNs are invalid values in influxdb, skip measurement
if math.IsNaN(val) || math.IsInf(val, 0) {
if ac.debug {
log.Printf("Measurement [%s] has a NaN or Inf field, skipping",
measurement)
log.Printf("Measurement [%s] field [%s] has a NaN or Inf "+
"field, skipping",
measurement, k)
}
return
continue
}
}
}
if tags == nil {
tags = make(map[string]string)
fields = nil
if len(result) == 0 {
return
}
var timestamp time.Time
@ -106,19 +152,7 @@ func (ac *accumulator) AddFields(
measurement = ac.prefix + measurement
}
if ac.pluginConfig != nil {
if !ac.pluginConfig.Filter.ShouldPass(measurement) || !ac.pluginConfig.Filter.ShouldTagsPass(tags) {
return
}
}
for k, v := range ac.defaultTags {
if _, ok := tags[k]; !ok {
tags[k] = v
}
}
pt, err := client.NewPoint(measurement, tags, fields, timestamp)
pt, err := client.NewPoint(measurement, tags, result, timestamp)
if err != nil {
log.Printf("Error adding point [%s]: %s\n", measurement, err.Error())
return

View File

@ -10,8 +10,8 @@ import (
"time"
"github.com/influxdb/telegraf/internal/config"
"github.com/influxdb/telegraf/outputs"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/inputs"
"github.com/influxdb/telegraf/plugins/outputs"
"github.com/influxdb/influxdb/client/v2"
)
@ -85,33 +85,33 @@ func (a *Agent) Close() error {
return err
}
// gatherParallel runs the plugins that are using the same reporting interval
// gatherParallel runs the inputs that are using the same reporting interval
// as the telegraf agent.
func (a *Agent) gatherParallel(pointChan chan *client.Point) error {
var wg sync.WaitGroup
start := time.Now()
counter := 0
for _, plugin := range a.Config.Plugins {
if plugin.Config.Interval != 0 {
for _, input := range a.Config.Inputs {
if input.Config.Interval != 0 {
continue
}
wg.Add(1)
counter++
go func(plugin *config.RunningPlugin) {
go func(input *config.RunningInput) {
defer wg.Done()
acc := NewAccumulator(plugin.Config, pointChan)
acc := NewAccumulator(input.Config, pointChan)
acc.SetDebug(a.Config.Agent.Debug)
acc.SetPrefix(plugin.Name + "_")
// acc.SetPrefix(input.Name + "_")
acc.SetDefaultTags(a.Config.Tags)
if err := plugin.Plugin.Gather(acc); err != nil {
log.Printf("Error in plugin [%s]: %s", plugin.Name, err)
if err := input.Input.Gather(acc); err != nil {
log.Printf("Error in input [%s]: %s", input.Name, err)
}
}(plugin)
}(input)
}
if counter == 0 {
@ -121,36 +121,36 @@ func (a *Agent) gatherParallel(pointChan chan *client.Point) error {
wg.Wait()
elapsed := time.Since(start)
log.Printf("Gathered metrics, (%s interval), from %d plugins in %s\n",
a.Config.Agent.Interval, counter, elapsed)
log.Printf("Gathered metrics, (%s interval), from %d inputs in %s\n",
a.Config.Agent.Interval.Duration, counter, elapsed)
return nil
}
// gatherSeparate runs the plugins that have been configured with their own
// gatherSeparate runs the inputs that have been configured with their own
// reporting interval.
func (a *Agent) gatherSeparate(
shutdown chan struct{},
plugin *config.RunningPlugin,
input *config.RunningInput,
pointChan chan *client.Point,
) error {
ticker := time.NewTicker(plugin.Config.Interval)
ticker := time.NewTicker(input.Config.Interval)
for {
var outerr error
start := time.Now()
acc := NewAccumulator(plugin.Config, pointChan)
acc := NewAccumulator(input.Config, pointChan)
acc.SetDebug(a.Config.Agent.Debug)
acc.SetPrefix(plugin.Name + "_")
// acc.SetPrefix(input.Name + "_")
acc.SetDefaultTags(a.Config.Tags)
if err := plugin.Plugin.Gather(acc); err != nil {
log.Printf("Error in plugin [%s]: %s", plugin.Name, err)
if err := input.Input.Gather(acc); err != nil {
log.Printf("Error in input [%s]: %s", input.Name, err)
}
elapsed := time.Since(start)
log.Printf("Gathered metrics, (separate %s interval), from %s in %s\n",
plugin.Config.Interval, plugin.Name, elapsed)
input.Config.Interval, input.Name, elapsed)
if outerr != nil {
return outerr
@ -165,7 +165,7 @@ func (a *Agent) gatherSeparate(
}
}
// Test verifies that we can 'Gather' from all plugins with their configured
// Test verifies that we can 'Gather' from all inputs with their configured
// Config struct
func (a *Agent) Test() error {
shutdown := make(chan struct{})
@ -184,27 +184,27 @@ func (a *Agent) Test() error {
}
}()
for _, plugin := range a.Config.Plugins {
acc := NewAccumulator(plugin.Config, pointChan)
for _, input := range a.Config.Inputs {
acc := NewAccumulator(input.Config, pointChan)
acc.SetDebug(true)
acc.SetPrefix(plugin.Name + "_")
// acc.SetPrefix(input.Name + "_")
fmt.Printf("* Plugin: %s, Collection 1\n", plugin.Name)
if plugin.Config.Interval != 0 {
fmt.Printf("* Internal: %s\n", plugin.Config.Interval)
fmt.Printf("* Plugin: %s, Collection 1\n", input.Name)
if input.Config.Interval != 0 {
fmt.Printf("* Internal: %s\n", input.Config.Interval)
}
if err := plugin.Plugin.Gather(acc); err != nil {
if err := input.Input.Gather(acc); err != nil {
return err
}
// Special instructions for some plugins. cpu, for example, needs to be
// Special instructions for some inputs. cpu, for example, needs to be
// run twice in order to return cpu usage percentages.
switch plugin.Name {
switch input.Name {
case "cpu", "mongodb":
time.Sleep(500 * time.Millisecond)
fmt.Printf("* Plugin: %s, Collection 2\n", plugin.Name)
if err := plugin.Plugin.Gather(acc); err != nil {
fmt.Printf("* Plugin: %s, Collection 2\n", input.Name)
if err := input.Input.Gather(acc); err != nil {
return err
}
}
@ -332,10 +332,10 @@ func (a *Agent) Run(shutdown chan struct{}) error {
log.Printf("Agent Config: Interval:%s, Debug:%#v, Hostname:%#v, "+
"Flush Interval:%s\n",
a.Config.Agent.Interval, a.Config.Agent.Debug,
a.Config.Agent.Hostname, a.Config.Agent.FlushInterval)
a.Config.Agent.Interval.Duration, a.Config.Agent.Debug,
a.Config.Agent.Hostname, a.Config.Agent.FlushInterval.Duration)
// channel shared between all plugin threads for accumulating points
// channel shared between all input threads for accumulating points
pointChan := make(chan *client.Point, 1000)
// Round collection to nearest interval by sleeping
@ -354,29 +354,29 @@ func (a *Agent) Run(shutdown chan struct{}) error {
}
}()
for _, plugin := range a.Config.Plugins {
for _, input := range a.Config.Inputs {
// Start service of any ServicePlugins
switch p := plugin.Plugin.(type) {
case plugins.ServicePlugin:
switch p := input.Input.(type) {
case inputs.ServiceInput:
if err := p.Start(); err != nil {
log.Printf("Service for plugin %s failed to start, exiting\n%s\n",
plugin.Name, err.Error())
log.Printf("Service for input %s failed to start, exiting\n%s\n",
input.Name, err.Error())
return err
}
defer p.Stop()
}
// Special handling for plugins that have their own collection interval
// Special handling for inputs that have their own collection interval
// configured. Default intervals are handled below with gatherParallel
if plugin.Config.Interval != 0 {
if input.Config.Interval != 0 {
wg.Add(1)
go func(plugin *config.RunningPlugin) {
go func(input *config.RunningInput) {
defer wg.Done()
if err := a.gatherSeparate(shutdown, plugin, pointChan); err != nil {
if err := a.gatherSeparate(shutdown, input, pointChan); err != nil {
log.Printf(err.Error())
}
}(plugin)
}(input)
}
}

View File

@ -8,77 +8,96 @@ import (
"github.com/influxdb/telegraf/internal/config"
// needing to load the plugins
_ "github.com/influxdb/telegraf/plugins/all"
_ "github.com/influxdb/telegraf/plugins/inputs/all"
// needing to load the outputs
_ "github.com/influxdb/telegraf/outputs/all"
_ "github.com/influxdb/telegraf/plugins/outputs/all"
)
func TestAgent_LoadPlugin(t *testing.T) {
c := config.NewConfig()
c.PluginFilters = []string{"mysql"}
c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
c.InputFilters = []string{"mysql"}
err := c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ := NewAgent(c)
assert.Equal(t, 1, len(a.Config.Plugins))
assert.Equal(t, 1, len(a.Config.Inputs))
c = config.NewConfig()
c.PluginFilters = []string{"foo"}
c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
c.InputFilters = []string{"foo"}
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 0, len(a.Config.Plugins))
assert.Equal(t, 0, len(a.Config.Inputs))
c = config.NewConfig()
c.PluginFilters = []string{"mysql", "foo"}
c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
c.InputFilters = []string{"mysql", "foo"}
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 1, len(a.Config.Plugins))
assert.Equal(t, 1, len(a.Config.Inputs))
c = config.NewConfig()
c.PluginFilters = []string{"mysql", "redis"}
c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
c.InputFilters = []string{"mysql", "redis"}
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 2, len(a.Config.Plugins))
assert.Equal(t, 2, len(a.Config.Inputs))
c = config.NewConfig()
c.PluginFilters = []string{"mysql", "foo", "redis", "bar"}
c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
c.InputFilters = []string{"mysql", "foo", "redis", "bar"}
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 2, len(a.Config.Plugins))
assert.Equal(t, 2, len(a.Config.Inputs))
}
func TestAgent_LoadOutput(t *testing.T) {
c := config.NewConfig()
c.OutputFilters = []string{"influxdb"}
c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err := c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ := NewAgent(c)
assert.Equal(t, 2, len(a.Config.Outputs))
c = config.NewConfig()
c.OutputFilters = []string{"kafka"}
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 1, len(a.Config.Outputs))
c = config.NewConfig()
c.OutputFilters = []string{}
c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 3, len(a.Config.Outputs))
c = config.NewConfig()
c.OutputFilters = []string{"foo"}
c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 0, len(a.Config.Outputs))
c = config.NewConfig()
c.OutputFilters = []string{"influxdb", "foo"}
c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 2, len(a.Config.Outputs))
c = config.NewConfig()
c.OutputFilters = []string{"influxdb", "kafka"}
c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
assert.Equal(t, 3, len(c.Outputs))
a, _ = NewAgent(c)
assert.Equal(t, 3, len(a.Config.Outputs))
c = config.NewConfig()
c.OutputFilters = []string{"influxdb", "foo", "kafka", "bar"}
c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 3, len(a.Config.Outputs))
}

View File

@ -4,14 +4,12 @@ machine:
post:
- sudo service zookeeper stop
- go version
- go version | grep 1.5.1 || sudo rm -rf /usr/local/go
- wget https://storage.googleapis.com/golang/go1.5.1.linux-amd64.tar.gz
- sudo tar -C /usr/local -xzf go1.5.1.linux-amd64.tar.gz
- go version | grep 1.5.2 || sudo rm -rf /usr/local/go
- wget https://storage.googleapis.com/golang/go1.5.2.linux-amd64.tar.gz
- sudo tar -C /usr/local -xzf go1.5.2.linux-amd64.tar.gz
- go version
dependencies:
cache_directories:
- "~/telegraf-build/src"
override:
- docker info

View File

@ -10,41 +10,96 @@ import (
"github.com/influxdb/telegraf"
"github.com/influxdb/telegraf/internal/config"
_ "github.com/influxdb/telegraf/outputs/all"
_ "github.com/influxdb/telegraf/plugins/all"
_ "github.com/influxdb/telegraf/plugins/inputs/all"
_ "github.com/influxdb/telegraf/plugins/outputs/all"
)
var fDebug = flag.Bool("debug", false,
"show metrics as they're generated to stdout")
var fTest = flag.Bool("test", false, "gather metrics, print them out, and exit")
var fConfig = flag.String("config", "", "configuration file to load")
var fConfigDirectory = flag.String("configdirectory", "",
var fConfigDirectory = flag.String("config-directory", "",
"directory containing additional *.conf files")
var fVersion = flag.Bool("version", false, "display the version")
var fSampleConfig = flag.Bool("sample-config", false,
"print out full sample configuration")
var fPidfile = flag.String("pidfile", "", "file to write our pid to")
var fPLuginFilters = flag.String("filter", "",
var fInputFilters = flag.String("input-filter", "",
"filter the plugins to enable, separator is :")
var fOutputFilters = flag.String("outputfilter", "",
var fOutputFilters = flag.String("output-filter", "",
"filter the outputs to enable, separator is :")
var fUsage = flag.String("usage", "",
"print usage for a plugin, ie, 'telegraf -usage mysql'")
var fInputFiltersLegacy = flag.String("filter", "",
"filter the plugins to enable, separator is :")
var fOutputFiltersLegacy = flag.String("outputfilter", "",
"filter the outputs to enable, separator is :")
var fConfigDirectoryLegacy = flag.String("configdirectory", "",
"directory containing additional *.conf files")
// Telegraf version
// -ldflags "-X main.Version=`git describe --always --tags`"
var Version string
const usage = `Telegraf, The plugin-driven server agent for collecting and reporting metrics.
Usage:
telegraf <flags>
The flags are:
-config <file> configuration file to load
-test gather metrics once, print them to stdout, and exit
-sample-config print out full sample configuration to stdout
-config-directory directory containing additional *.conf files
-input-filter filter the input plugins to enable, separator is :
-output-filter filter the output plugins to enable, separator is :
-usage print usage for a plugin, ie, 'telegraf -usage mysql'
-version print the version to stdout
Examples:
# generate a telegraf config file:
telegraf -sample-config > telegraf.conf
# generate config with only cpu input & influxdb output plugins defined
telegraf -sample-config -input-filter cpu -output-filter influxdb
# run a single telegraf collection, outputting metrics to stdout
telegraf -config telegraf.conf -test
# run telegraf with all plugins defined in config file
telegraf -config telegraf.conf
# run telegraf, enabling the cpu & memory input, and influxdb output plugins
telegraf -config telegraf.conf -input-filter cpu:mem -output-filter influxdb
`
func main() {
flag.Usage = usageExit
flag.Parse()
var pluginFilters []string
if *fPLuginFilters != "" {
pluginsFilter := strings.TrimSpace(*fPLuginFilters)
pluginFilters = strings.Split(":"+pluginsFilter+":", ":")
if flag.NFlag() == 0 {
usageExit()
}
var inputFilters []string
if *fInputFiltersLegacy != "" {
inputFilter := strings.TrimSpace(*fInputFiltersLegacy)
inputFilters = strings.Split(":"+inputFilter+":", ":")
}
if *fInputFilters != "" {
inputFilter := strings.TrimSpace(*fInputFilters)
inputFilters = strings.Split(":"+inputFilter+":", ":")
}
var outputFilters []string
if *fOutputFiltersLegacy != "" {
outputFilter := strings.TrimSpace(*fOutputFiltersLegacy)
outputFilters = strings.Split(":"+outputFilter+":", ":")
}
if *fOutputFilters != "" {
outputFilter := strings.TrimSpace(*fOutputFilters)
outputFilters = strings.Split(":"+outputFilter+":", ":")
@ -57,12 +112,12 @@ func main() {
}
if *fSampleConfig {
config.PrintSampleConfig(pluginFilters, outputFilters)
config.PrintSampleConfig(inputFilters, outputFilters)
return
}
if *fUsage != "" {
if err := config.PrintPluginConfig(*fUsage); err != nil {
if err := config.PrintInputConfig(*fUsage); err != nil {
if err2 := config.PrintOutputConfig(*fUsage); err2 != nil {
log.Fatalf("%s and %s", err, err2)
}
@ -78,7 +133,7 @@ func main() {
if *fConfig != "" {
c = config.NewConfig()
c.OutputFilters = outputFilters
c.PluginFilters = pluginFilters
c.InputFilters = inputFilters
err = c.LoadConfig(*fConfig)
if err != nil {
log.Fatal(err)
@ -89,6 +144,13 @@ func main() {
return
}
if *fConfigDirectoryLegacy != "" {
err = c.LoadDirectory(*fConfigDirectoryLegacy)
if err != nil {
log.Fatal(err)
}
}
if *fConfigDirectory != "" {
err = c.LoadDirectory(*fConfigDirectory)
if err != nil {
@ -98,7 +160,7 @@ func main() {
if len(c.Outputs) == 0 {
log.Fatalf("Error: no outputs found, did you provide a valid config file?")
}
if len(c.Plugins) == 0 {
if len(c.Inputs) == 0 {
log.Fatalf("Error: no plugins found, did you provide a valid config file?")
}
@ -134,7 +196,7 @@ func main() {
log.Printf("Starting Telegraf (version %s)\n", Version)
log.Printf("Loaded outputs: %s", strings.Join(c.OutputNames(), " "))
log.Printf("Loaded plugins: %s", strings.Join(c.PluginNames(), " "))
log.Printf("Loaded plugins: %s", strings.Join(c.InputNames(), " "))
log.Printf("Tags enabled: %s", c.ListTags())
if *fPidfile != "" {
@ -150,3 +212,8 @@ func main() {
ag.Run(shutdown)
}
func usageExit() {
fmt.Println(usage)
os.Exit(0)
}

View File

@ -1,7 +1,7 @@
# Telegraf configuration
# Telegraf is entirely plugin driven. All metrics are gathered from the
# declared plugins.
# declared inputs.
# Even if a plugin has no configuration, it must be declared in here
# to be active. Declaring a plugin means just specifying the name
@ -76,13 +76,13 @@
###############################################################################
# PLUGINS #
# INPUTS #
###############################################################################
[plugins]
[inputs]
# Read metrics about cpu usage
[[plugins.cpu]]
[[inputs.cpu]]
# Whether to report per-cpu stats or not
percpu = true
# Whether to report total system cpu stats or not
@ -91,14 +91,14 @@
drop = ["cpu_time"]
# Read metrics about disk usage by mount point
[[plugins.disk]]
[[inputs.disk]]
# By default, telegraf gather stats for all mountpoints.
# Setting mountpoints will restrict the stats to the specified mountpoints.
# Mountpoints=["/"]
# Read metrics about disk IO by device
[[plugins.io]]
# By default, telegraf will gather stats for all devices including
[[inputs.diskio]]
# By default, telegraf will gather stats for all devices including
# disk partitions.
# Setting devices will restrict the stats to the specified devices.
# Devices=["sda","sdb"]
@ -106,18 +106,18 @@
# SkipSerialNumber = true
# Read metrics about memory usage
[[plugins.mem]]
[[inputs.mem]]
# no configuration
# Read metrics about swap memory usage
[[plugins.swap]]
[[inputs.swap]]
# no configuration
# Read metrics about system load & uptime
[[plugins.system]]
[[inputs.system]]
# no configuration
###############################################################################
# SERVICE PLUGINS #
# SERVICE INPUTS #
###############################################################################

View File

@ -11,8 +11,8 @@ import (
"time"
"github.com/influxdb/telegraf/internal"
"github.com/influxdb/telegraf/outputs"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/inputs"
"github.com/influxdb/telegraf/plugins/outputs"
"github.com/naoina/toml"
"github.com/naoina/toml/ast"
@ -25,11 +25,11 @@ import (
// specified
type Config struct {
Tags map[string]string
PluginFilters []string
InputFilters []string
OutputFilters []string
Agent *AgentConfig
Plugins []*RunningPlugin
Inputs []*RunningInput
Outputs []*RunningOutput
}
@ -45,9 +45,9 @@ func NewConfig() *Config {
},
Tags: make(map[string]string),
Plugins: make([]*RunningPlugin, 0),
Inputs: make([]*RunningInput, 0),
Outputs: make([]*RunningOutput, 0),
PluginFilters: make([]string, 0),
InputFilters: make([]string, 0),
OutputFilters: make([]string, 0),
}
return c
@ -93,10 +93,10 @@ type RunningOutput struct {
Config *OutputConfig
}
type RunningPlugin struct {
type RunningInput struct {
Name string
Plugin plugins.Plugin
Config *PluginConfig
Input inputs.Input
Config *InputConfig
}
// Filter containing drop/pass and tagdrop/tagpass rules
@ -110,11 +110,15 @@ type Filter struct {
IsActive bool
}
// PluginConfig containing a name, interval, and filter
type PluginConfig struct {
Name string
Filter Filter
Interval time.Duration
// InputConfig containing a name, interval, and filter
type InputConfig struct {
Name string
NameOverride string
MeasurementPrefix string
MeasurementSuffix string
Tags map[string]string
Filter Filter
Interval time.Duration
}
// OutputConfig containing name and filter
@ -142,12 +146,12 @@ func (ro *RunningOutput) FilterPoints(points []*client.Point) []*client.Point {
// ShouldPass returns true if the metric should pass, false if should drop
// based on the drop/pass filter parameters
func (f Filter) ShouldPass(measurement string) bool {
func (f Filter) ShouldPass(fieldkey string) bool {
if f.Pass != nil {
for _, pat := range f.Pass {
// TODO remove HasPrefix check, leaving it for now for legacy support.
// Cam, 2015-12-07
if strings.HasPrefix(measurement, pat) || internal.Glob(pat, measurement) {
if strings.HasPrefix(fieldkey, pat) || internal.Glob(pat, fieldkey) {
return true
}
}
@ -158,7 +162,7 @@ func (f Filter) ShouldPass(measurement string) bool {
for _, pat := range f.Drop {
// TODO remove HasPrefix check, leaving it for now for legacy support.
// Cam, 2015-12-07
if strings.HasPrefix(measurement, pat) || internal.Glob(pat, measurement) {
if strings.HasPrefix(fieldkey, pat) || internal.Glob(pat, fieldkey) {
return false
}
}
@ -200,16 +204,16 @@ func (f Filter) ShouldTagsPass(tags map[string]string) bool {
return true
}
// Plugins returns a list of strings of the configured plugins.
func (c *Config) PluginNames() []string {
// Inputs returns a list of strings of the configured inputs.
func (c *Config) InputNames() []string {
var name []string
for _, plugin := range c.Plugins {
name = append(name, plugin.Name)
for _, input := range c.Inputs {
name = append(name, input.Name)
}
return name
}
// Outputs returns a list of strings of the configured plugins.
// Outputs returns a list of strings of the configured inputs.
func (c *Config) OutputNames() []string {
var name []string
for _, output := range c.Outputs {
@ -235,7 +239,7 @@ func (c *Config) ListTags() string {
var header = `# Telegraf configuration
# Telegraf is entirely plugin driven. All metrics are gathered from the
# declared plugins.
# declared inputs.
# Even if a plugin has no configuration, it must be declared in here
# to be active. Declaring a plugin means just specifying the name
@ -259,7 +263,7 @@ var header = `# Telegraf configuration
# Configuration for telegraf agent
[agent]
# Default data collection interval for all plugins
# Default data collection interval for all inputs
interval = "10s"
# Rounds collection interval to 'interval'
# ie, if interval="10s" then always collect on :00, :10, :20, etc.
@ -289,16 +293,16 @@ var header = `# Telegraf configuration
var pluginHeader = `
###############################################################################
# PLUGINS #
# INPUTS #
###############################################################################
[plugins]
[inputs]
`
var servicePluginHeader = `
var serviceInputHeader = `
###############################################################################
# SERVICE PLUGINS #
# SERVICE INPUTS #
###############################################################################
`
@ -322,35 +326,35 @@ func PrintSampleConfig(pluginFilters []string, outputFilters []string) {
printConfig(oname, output, "outputs")
}
// Filter plugins
// Filter inputs
var pnames []string
for pname := range plugins.Plugins {
for pname := range inputs.Inputs {
if len(pluginFilters) == 0 || sliceContains(pname, pluginFilters) {
pnames = append(pnames, pname)
}
}
sort.Strings(pnames)
// Print Plugins
// Print Inputs
fmt.Printf(pluginHeader)
servPlugins := make(map[string]plugins.ServicePlugin)
servInputs := make(map[string]inputs.ServiceInput)
for _, pname := range pnames {
creator := plugins.Plugins[pname]
plugin := creator()
creator := inputs.Inputs[pname]
input := creator()
switch p := plugin.(type) {
case plugins.ServicePlugin:
servPlugins[pname] = p
switch p := input.(type) {
case inputs.ServiceInput:
servInputs[pname] = p
continue
}
printConfig(pname, plugin, "plugins")
printConfig(pname, input, "inputs")
}
// Print Service Plugins
fmt.Printf(servicePluginHeader)
for name, plugin := range servPlugins {
printConfig(name, plugin, "plugins")
// Print Service Inputs
fmt.Printf(serviceInputHeader)
for name, input := range servInputs {
printConfig(name, input, "inputs")
}
}
@ -378,12 +382,12 @@ func sliceContains(name string, list []string) bool {
return false
}
// PrintPluginConfig prints the config usage of a single plugin.
func PrintPluginConfig(name string) error {
if creator, ok := plugins.Plugins[name]; ok {
printConfig(name, creator(), "plugins")
// PrintInputConfig prints the config usage of a single input.
func PrintInputConfig(name string) error {
if creator, ok := inputs.Inputs[name]; ok {
printConfig(name, creator(), "inputs")
} else {
return errors.New(fmt.Sprintf("Plugin %s not found", name))
return errors.New(fmt.Sprintf("Input %s not found", name))
}
return nil
}
@ -449,33 +453,15 @@ func (c *Config) LoadConfig(path string) error {
return err
}
case "outputs":
for outputName, outputVal := range subTable.Fields {
switch outputSubTable := outputVal.(type) {
case *ast.Table:
if err = c.addOutput(outputName, outputSubTable); err != nil {
return err
}
case []*ast.Table:
for _, t := range outputSubTable {
if err = c.addOutput(outputName, t); err != nil {
return err
}
}
default:
return fmt.Errorf("Unsupported config format: %s",
outputName)
}
}
case "plugins":
for pluginName, pluginVal := range subTable.Fields {
switch pluginSubTable := pluginVal.(type) {
case *ast.Table:
if err = c.addPlugin(pluginName, pluginSubTable); err != nil {
if err = c.addOutput(pluginName, pluginSubTable); err != nil {
return err
}
case []*ast.Table:
for _, t := range pluginSubTable {
if err = c.addPlugin(pluginName, t); err != nil {
if err = c.addOutput(pluginName, t); err != nil {
return err
}
}
@ -484,10 +470,28 @@ func (c *Config) LoadConfig(path string) error {
pluginName)
}
}
// Assume it's a plugin for legacy config file support if no other
case "inputs":
for pluginName, pluginVal := range subTable.Fields {
switch pluginSubTable := pluginVal.(type) {
case *ast.Table:
if err = c.addInput(pluginName, pluginSubTable); err != nil {
return err
}
case []*ast.Table:
for _, t := range pluginSubTable {
if err = c.addInput(pluginName, t); err != nil {
return err
}
}
default:
return fmt.Errorf("Unsupported config format: %s",
pluginName)
}
}
// Assume it's an input for legacy config file support if no other
// identifiers are present
default:
if err = c.addPlugin(name, subTable); err != nil {
if err = c.addInput(name, subTable); err != nil {
return err
}
}
@ -523,36 +527,41 @@ func (c *Config) addOutput(name string, table *ast.Table) error {
return nil
}
func (c *Config) addPlugin(name string, table *ast.Table) error {
if len(c.PluginFilters) > 0 && !sliceContains(name, c.PluginFilters) {
func (c *Config) addInput(name string, table *ast.Table) error {
if len(c.InputFilters) > 0 && !sliceContains(name, c.InputFilters) {
return nil
}
creator, ok := plugins.Plugins[name]
if !ok {
return fmt.Errorf("Undefined but requested plugin: %s", name)
// Legacy support renaming io input to diskio
if name == "io" {
name = "diskio"
}
plugin := creator()
pluginConfig, err := buildPlugin(name, table)
creator, ok := inputs.Inputs[name]
if !ok {
return fmt.Errorf("Undefined but requested input: %s", name)
}
input := creator()
pluginConfig, err := buildInput(name, table)
if err != nil {
return err
}
if err := toml.UnmarshalTable(table, plugin); err != nil {
if err := toml.UnmarshalTable(table, input); err != nil {
return err
}
rp := &RunningPlugin{
rp := &RunningInput{
Name: name,
Plugin: plugin,
Input: input,
Config: pluginConfig,
}
c.Plugins = append(c.Plugins, rp)
c.Inputs = append(c.Inputs, rp)
return nil
}
// buildFilter builds a Filter (tagpass/tagdrop/pass/drop) to
// be inserted into the OutputConfig/PluginConfig to be used for prefix
// be inserted into the OutputConfig/InputConfig to be used for prefix
// filtering on tags and measurements
func buildFilter(tbl *ast.Table) Filter {
f := Filter{}
@ -628,10 +637,11 @@ func buildFilter(tbl *ast.Table) Filter {
return f
}
// buildPlugin parses plugin specific items from the ast.Table, builds the filter and returns a
// PluginConfig to be inserted into RunningPlugin
func buildPlugin(name string, tbl *ast.Table) (*PluginConfig, error) {
cp := &PluginConfig{Name: name}
// buildInput parses input specific items from the ast.Table,
// builds the filter and returns a
// InputConfig to be inserted into RunningInput
func buildInput(name string, tbl *ast.Table) (*InputConfig, error) {
cp := &InputConfig{Name: name}
if node, ok := tbl.Fields["interval"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
@ -644,14 +654,51 @@ func buildPlugin(name string, tbl *ast.Table) (*PluginConfig, error) {
}
}
}
if node, ok := tbl.Fields["name_prefix"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
cp.MeasurementPrefix = str.Value
}
}
}
if node, ok := tbl.Fields["name_suffix"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
cp.MeasurementSuffix = str.Value
}
}
}
if node, ok := tbl.Fields["name_override"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
cp.NameOverride = str.Value
}
}
}
cp.Tags = make(map[string]string)
if node, ok := tbl.Fields["tags"]; ok {
if subtbl, ok := node.(*ast.Table); ok {
if err := toml.UnmarshalTable(subtbl, cp.Tags); err != nil {
log.Printf("Could not parse tags for input %s\n", name)
}
}
}
delete(tbl.Fields, "name_prefix")
delete(tbl.Fields, "name_suffix")
delete(tbl.Fields, "name_override")
delete(tbl.Fields, "interval")
delete(tbl.Fields, "tags")
cp.Filter = buildFilter(tbl)
return cp, nil
}
// buildOutput parses output specific items from the ast.Table, builds the filter and returns an
// OutputConfig to be inserted into RunningPlugin
// OutputConfig to be inserted into RunningInput
// Note: error exists in the return for future calls that might require error
func buildOutput(name string, tbl *ast.Table) (*OutputConfig, error) {
oc := &OutputConfig{
@ -659,5 +706,4 @@ func buildOutput(name string, tbl *ast.Table) (*OutputConfig, error) {
Filter: buildFilter(tbl),
}
return oc, nil
}
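For reference, a minimal `[[inputs.xxx]]` table exercising the options parsed by `buildInput` above might look like the sketch below; the option keys come from the parser, while the cpu input and the tag value are illustrative only:
```
[[inputs.cpu]]
  percpu = true
  # handled by buildInput and stripped from the table before the
  # remaining keys are unmarshalled into the input struct itself
  interval = "10s"
  name_prefix = "dev_"
  name_suffix = "_east"
  name_override = "cpustats"
  [inputs.cpu.tags]
    dc = "us-east-1"
```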

View File

@ -4,21 +4,21 @@ import (
"testing"
"time"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/exec"
"github.com/influxdb/telegraf/plugins/memcached"
"github.com/influxdb/telegraf/plugins/procstat"
"github.com/influxdb/telegraf/plugins/inputs"
"github.com/influxdb/telegraf/plugins/inputs/exec"
"github.com/influxdb/telegraf/plugins/inputs/memcached"
"github.com/influxdb/telegraf/plugins/inputs/procstat"
"github.com/stretchr/testify/assert"
)
func TestConfig_LoadSinglePlugin(t *testing.T) {
func TestConfig_LoadSingleInput(t *testing.T) {
c := NewConfig()
c.LoadConfig("./testdata/single_plugin.toml")
memcached := plugins.Plugins["memcached"]().(*memcached.Memcached)
memcached := inputs.Inputs["memcached"]().(*memcached.Memcached)
memcached.Servers = []string{"localhost"}
mConfig := &PluginConfig{
mConfig := &InputConfig{
Name: "memcached",
Filter: Filter{
Drop: []string{"other", "stuff"},
@ -39,10 +39,11 @@ func TestConfig_LoadSinglePlugin(t *testing.T) {
},
Interval: 5 * time.Second,
}
mConfig.Tags = make(map[string]string)
assert.Equal(t, memcached, c.Plugins[0].Plugin,
assert.Equal(t, memcached, c.Inputs[0].Input,
"Testdata did not produce a correct memcached struct.")
assert.Equal(t, mConfig, c.Plugins[0].Config,
assert.Equal(t, mConfig, c.Inputs[0].Config,
"Testdata did not produce correct memcached metadata.")
}
@ -57,10 +58,10 @@ func TestConfig_LoadDirectory(t *testing.T) {
t.Error(err)
}
memcached := plugins.Plugins["memcached"]().(*memcached.Memcached)
memcached := inputs.Inputs["memcached"]().(*memcached.Memcached)
memcached.Servers = []string{"localhost"}
mConfig := &PluginConfig{
mConfig := &InputConfig{
Name: "memcached",
Filter: Filter{
Drop: []string{"other", "stuff"},
@ -81,45 +82,40 @@ func TestConfig_LoadDirectory(t *testing.T) {
},
Interval: 5 * time.Second,
}
assert.Equal(t, memcached, c.Plugins[0].Plugin,
mConfig.Tags = make(map[string]string)
assert.Equal(t, memcached, c.Inputs[0].Input,
"Testdata did not produce a correct memcached struct.")
assert.Equal(t, mConfig, c.Plugins[0].Config,
assert.Equal(t, mConfig, c.Inputs[0].Config,
"Testdata did not produce correct memcached metadata.")
ex := plugins.Plugins["exec"]().(*exec.Exec)
ex.Commands = []*exec.Command{
&exec.Command{
Command: "/usr/bin/myothercollector --foo=bar",
Name: "myothercollector",
},
ex := inputs.Inputs["exec"]().(*exec.Exec)
ex.Command = "/usr/bin/myothercollector --foo=bar"
eConfig := &InputConfig{
Name: "exec",
MeasurementSuffix: "_myothercollector",
}
eConfig := &PluginConfig{Name: "exec"}
assert.Equal(t, ex, c.Plugins[1].Plugin,
eConfig.Tags = make(map[string]string)
assert.Equal(t, ex, c.Inputs[1].Input,
"Merged Testdata did not produce a correct exec struct.")
assert.Equal(t, eConfig, c.Plugins[1].Config,
assert.Equal(t, eConfig, c.Inputs[1].Config,
"Merged Testdata did not produce correct exec metadata.")
memcached.Servers = []string{"192.168.1.1"}
assert.Equal(t, memcached, c.Plugins[2].Plugin,
assert.Equal(t, memcached, c.Inputs[2].Input,
"Testdata did not produce a correct memcached struct.")
assert.Equal(t, mConfig, c.Plugins[2].Config,
assert.Equal(t, mConfig, c.Inputs[2].Config,
"Testdata did not produce correct memcached metadata.")
pstat := plugins.Plugins["procstat"]().(*procstat.Procstat)
pstat.Specifications = []*procstat.Specification{
&procstat.Specification{
PidFile: "/var/run/grafana-server.pid",
},
&procstat.Specification{
PidFile: "/var/run/influxdb/influxd.pid",
},
}
pstat := inputs.Inputs["procstat"]().(*procstat.Procstat)
pstat.PidFile = "/var/run/grafana-server.pid"
pConfig := &PluginConfig{Name: "procstat"}
pConfig := &InputConfig{Name: "procstat"}
pConfig.Tags = make(map[string]string)
assert.Equal(t, pstat, c.Plugins[3].Plugin,
assert.Equal(t, pstat, c.Inputs[3].Input,
"Merged Testdata did not produce a correct procstat struct.")
assert.Equal(t, pConfig, c.Plugins[3].Config,
assert.Equal(t, pConfig, c.Inputs[3].Config,
"Merged Testdata did not produce correct procstat metadata.")
}

View File

@ -1,9 +1,9 @@
[[plugins.memcached]]
[[inputs.memcached]]
servers = ["localhost"]
pass = ["some", "strings"]
drop = ["other", "stuff"]
interval = "5s"
[plugins.memcached.tagpass]
[inputs.memcached.tagpass]
goodtag = ["mytag"]
[plugins.memcached.tagdrop]
[inputs.memcached.tagdrop]
badtag = ["othertag"]

View File

@ -1,8 +1,4 @@
[[plugins.exec]]
# specify commands via an array of tables
[[plugins.exec.commands]]
[[inputs.exec]]
# the command to run
command = "/usr/bin/myothercollector --foo=bar"
# name of the command (used as a prefix for measurements)
name = "myothercollector"
name_suffix = "_myothercollector"

View File

@ -1,9 +1,9 @@
[[plugins.memcached]]
[[inputs.memcached]]
servers = ["192.168.1.1"]
pass = ["some", "strings"]
drop = ["other", "stuff"]
interval = "5s"
[plugins.memcached.tagpass]
[inputs.memcached.tagpass]
goodtag = ["mytag"]
[plugins.memcached.tagdrop]
[inputs.memcached.tagdrop]
badtag = ["othertag"]

View File

@ -1,5 +1,2 @@
[[plugins.procstat]]
[[plugins.procstat.specifications]]
[[inputs.procstat]]
pid_file = "/var/run/grafana-server.pid"
[[plugins.procstat.specifications]]
pid_file = "/var/run/influxdb/influxd.pid"

View File

@ -1,7 +1,7 @@
# Telegraf configuration
# Telegraf is entirely plugin driven. All metrics are gathered from the
# declared plugins.
# declared inputs.
# Even if a plugin has no configuration, it must be declared in here
# to be active. Declaring a plugin means just specifying the name
@ -21,20 +21,13 @@
# Tags can also be specified via a normal map, but only one form at a time:
[tags]
# dc = "us-east-1"
dc = "us-east-1"
# Configuration for telegraf agent
[agent]
# Default data collection interval for all plugins
interval = "10s"
# If utc = false, uses local time (utc is highly recommended)
utc = true
# Precision of writes, valid values are n, u, ms, s, m, and h
# note: using second precision greatly helps InfluxDB compression
precision = "s"
# run telegraf in debug mode
debug = false
@ -58,17 +51,6 @@
# The target database for metrics. This database must already exist
database = "telegraf" # required.
# Connection timeout (for the connection with InfluxDB), formatted as a string.
# Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
# If not provided, will default to 0 (no timeout)
# timeout = "5s"
# username = "telegraf"
# password = "metricsmetricsmetricsmetrics"
# Set the user agent for the POSTs (can be useful for log differentiation)
# user_agent = "telegraf"
[[outputs.influxdb]]
urls = ["udp://localhost:8089"]
database = "udp-telegraf"
@ -88,15 +70,15 @@
# PLUGINS #
###############################################################################
[plugins]
[inputs]
# Read Apache status information (mod_status)
[[plugins.apache]]
# An array of Apache status URI to gather stats.
urls = ["http://localhost/server-status?auto"]
[[inputs.apache]]
# An array of Apache status URI to gather stats.
urls = ["http://localhost/server-status?auto"]
# Read metrics about cpu usage
[[plugins.cpu]]
[[inputs.cpu]]
# Whether to report per-cpu stats or not
percpu = true
# Whether to report total system cpu stats or not
@ -105,11 +87,11 @@ urls = ["http://localhost/server-status?auto"]
drop = ["cpu_time"]
# Read metrics about disk usage by mount point
[[plugins.disk]]
[[inputs.diskio]]
# no configuration
# Read metrics from one or many disque servers
[[plugins.disque]]
[[inputs.disque]]
# An array of URI to gather stats about. Specify an ip or hostname
# with optional port and password. ie disque://localhost, disque://10.10.3.33:18832,
# 10.0.0.1:10000, etc.
@ -118,7 +100,7 @@ urls = ["http://localhost/server-status?auto"]
servers = ["localhost"]
# Read stats from one or more Elasticsearch servers or clusters
[[plugins.elasticsearch]]
[[inputs.elasticsearch]]
# specify a list of one or more Elasticsearch servers
servers = ["http://localhost:9200"]
@ -127,17 +109,13 @@ urls = ["http://localhost/server-status?auto"]
local = true
# Read flattened metrics from one or more commands that output JSON to stdout
[[plugins.exec]]
# specify commands via an array of tables
[[exec.commands]]
[[inputs.exec]]
# the command to run
command = "/usr/bin/mycollector --foo=bar"
# name of the command (used as a prefix for measurements)
name = "mycollector"
name_suffix = "_mycollector"
# Read metrics of haproxy, via socket or csv stats page
[[plugins.haproxy]]
[[inputs.haproxy]]
# An array of addresses to gather stats about. Specify an ip or hostname
# with optional port. ie localhost, 10.10.3.33:1936, etc.
#
@ -147,33 +125,30 @@ urls = ["http://localhost/server-status?auto"]
# servers = ["socket:/run/haproxy/admin.sock"]
# Read flattened metrics from one or more JSON HTTP endpoints
[[plugins.httpjson]]
# Specify services via an array of tables
[[httpjson.services]]
[[inputs.httpjson]]
# a name for the service being polled
name = "webserver_stats"
# a name for the service being polled
name = "webserver_stats"
# URL of each server in the service's cluster
servers = [
"http://localhost:9999/stats/",
"http://localhost:9998/stats/",
]
# URL of each server in the service's cluster
servers = [
"http://localhost:9999/stats/",
"http://localhost:9998/stats/",
]
# HTTP method to use (case-sensitive)
method = "GET"
# HTTP method to use (case-sensitive)
method = "GET"
# HTTP parameters (all values must be strings)
[httpjson.services.parameters]
event_type = "cpu_spike"
threshold = "0.75"
# HTTP parameters (all values must be strings)
[httpjson.parameters]
event_type = "cpu_spike"
threshold = "0.75"
# Read metrics about disk IO by device
[[plugins.io]]
[[inputs.diskio]]
# no configuration
# read metrics from a Kafka topic
[[plugins.kafka_consumer]]
[[inputs.kafka_consumer]]
# topic(s) to consume
topics = ["telegraf"]
# an array of Zookeeper connection strings
@ -186,7 +161,7 @@ urls = ["http://localhost/server-status?auto"]
offset = "oldest"
# Read metrics from a LeoFS Server via SNMP
[[plugins.leofs]]
[[inputs.leofs]]
# An array of URI to gather stats about LeoFS.
# Specify an ip or hostname with port. ie 127.0.0.1:4020
#
@ -194,7 +169,7 @@ urls = ["http://localhost/server-status?auto"]
servers = ["127.0.0.1:4021"]
# Read metrics from local Lustre service on OST, MDS
[[plugins.lustre2]]
[[inputs.lustre2]]
# An array of /proc globs to search for Lustre stats
# If not specified, the default will work on Lustre 2.5.x
#
@ -202,11 +177,11 @@ urls = ["http://localhost/server-status?auto"]
# mds_procfiles = ["/proc/fs/lustre/mdt/*/md_stats"]
# Read metrics about memory usage
[[plugins.mem]]
[[inputs.mem]]
# no configuration
# Read metrics from one or many memcached servers
[[plugins.memcached]]
[[inputs.memcached]]
# An array of addresses to gather stats about. Specify an ip or hostname
# with optional port. ie localhost, 10.0.0.1:11211, etc.
#
@ -214,7 +189,7 @@ urls = ["http://localhost/server-status?auto"]
servers = ["localhost"]
# Read metrics from one or many MongoDB servers
[[plugins.mongodb]]
[[inputs.mongodb]]
# An array of URI to gather stats about. Specify an ip or hostname
# with optional port and password. ie mongodb://user:auth_key@10.10.3.30:27017,
# mongodb://10.10.3.33:18832, 10.0.0.1:10000, etc.
@ -223,7 +198,7 @@ urls = ["http://localhost/server-status?auto"]
servers = ["127.0.0.1:27017"]
# Read metrics from one or many mysql servers
[[plugins.mysql]]
[[inputs.mysql]]
# specify servers via a url matching:
# [username[:password]@][protocol[(address)]]/[?tls=[true|false|skip-verify]]
# e.g.
@ -234,7 +209,7 @@ urls = ["http://localhost/server-status?auto"]
servers = ["localhost"]
# Read metrics about network interface usage
[[plugins.net]]
[[inputs.net]]
# By default, telegraf gathers stats from any up interface (excluding loopback)
# Setting interfaces will tell it to gather these explicit interfaces,
# regardless of status.
@ -242,12 +217,12 @@ urls = ["http://localhost/server-status?auto"]
# interfaces = ["eth0", ... ]
# Read Nginx's basic status information (ngx_http_stub_status_module)
[[plugins.nginx]]
[[inputs.nginx]]
# An array of Nginx stub_status URI to gather stats.
urls = ["http://localhost/status"]
# Ping given url(s) and return statistics
[[plugins.ping]]
[[inputs.ping]]
# urls to ping
urls = ["www.google.com"] # required
# number of pings to send (ping -c <COUNT>)
@ -260,10 +235,7 @@ urls = ["http://localhost/server-status?auto"]
interface = ""
# Read metrics from one or many postgresql servers
[[plugins.postgresql]]
# specify servers via an array of tables
[[postgresql.servers]]
[[inputs.postgresql]]
# specify address via a url matching:
# postgres://[pqgotest[:password]]@localhost[/dbname]?sslmode=[disable|verify-ca|verify-full]
# or a simple string:
@ -290,14 +262,13 @@ urls = ["http://localhost/server-status?auto"]
# address = "influx@remoteserver"
# Read metrics from one or many prometheus clients
[[plugins.prometheus]]
[[inputs.prometheus]]
# An array of urls to scrape metrics from.
urls = ["http://localhost:9100/metrics"]
# Read metrics from one or many RabbitMQ servers via the management API
[[plugins.rabbitmq]]
[[inputs.rabbitmq]]
# Specify servers via an array of tables
[[rabbitmq.servers]]
# name = "rmq-server-1" # optional tag
# url = "http://localhost:15672"
# username = "guest"
@ -308,7 +279,7 @@ urls = ["http://localhost/server-status?auto"]
# nodes = ["rabbit@node1", "rabbit@node2"]
# Read metrics from one or many redis servers
[[plugins.redis]]
[[inputs.redis]]
# An array of URI to gather stats about. Specify an ip or hostname
# with optional port and password. ie redis://localhost, redis://10.10.3.33:18832,
# 10.0.0.1:10000, etc.
@ -317,7 +288,7 @@ urls = ["http://localhost/server-status?auto"]
servers = ["localhost"]
# Read metrics from one or many RethinkDB servers
[[plugins.rethinkdb]]
[[inputs.rethinkdb]]
# An array of URI to gather stats about. Specify an ip or hostname
# with optional port and password. ie rethinkdb://user:auth_key@10.10.3.30:28105,
# rethinkdb://10.10.3.33:18832, 10.0.0.1:10000, etc.
@ -326,9 +297,9 @@ urls = ["http://localhost/server-status?auto"]
servers = ["127.0.0.1:28015"]
# Read metrics about swap memory usage
[[plugins.swap]]
[[inputs.swap]]
# no configuration
# Read metrics about system load & uptime
[[plugins.system]]
[[inputs.system]]
# no configuration

View File

@ -3,6 +3,7 @@ package internal
import (
"bufio"
"errors"
"fmt"
"os"
"strings"
"time"
@ -27,6 +28,39 @@ func (d *Duration) UnmarshalTOML(b []byte) error {
var NotImplementedError = errors.New("not implemented yet")
type JSONFlattener struct {
Fields map[string]interface{}
}
// FlattenJSON flattens nested maps/interfaces into a fields map
func (f *JSONFlattener) FlattenJSON(
fieldname string,
v interface{},
) error {
if f.Fields == nil {
f.Fields = make(map[string]interface{})
}
fieldname = strings.Trim(fieldname, "_")
switch t := v.(type) {
case map[string]interface{}:
for k, v := range t {
err := f.FlattenJSON(fieldname+"_"+k+"_", v)
if err != nil {
return err
}
}
case float64:
f.Fields[fieldname] = t
case bool, string, []interface{}, nil:
// ignored types
return nil
default:
return fmt.Errorf("JSON Flattener: got unexpected type %T with value %v (%s)",
t, t, fieldname)
}
return nil
}
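// Example (illustrative): FlattenJSON("", map[string]interface{}{
//     "a": map[string]interface{}{"b": 3.5}, "ok": true,
// }) stores Fields["a_b"] = 3.5; "ok" is skipped because only float64 values are kept.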
// ReadLines reads contents from a file and splits them by new lines.
// A convenience wrapper to ReadLinesOffsetN(filename, 0, -1).
func ReadLines(filename string) ([]string, error) {

View File

@ -1,16 +0,0 @@
package all
import (
_ "github.com/influxdb/telegraf/outputs/amon"
_ "github.com/influxdb/telegraf/outputs/amqp"
_ "github.com/influxdb/telegraf/outputs/datadog"
_ "github.com/influxdb/telegraf/outputs/influxdb"
_ "github.com/influxdb/telegraf/outputs/kafka"
_ "github.com/influxdb/telegraf/outputs/kinesis"
_ "github.com/influxdb/telegraf/outputs/librato"
_ "github.com/influxdb/telegraf/outputs/mqtt"
_ "github.com/influxdb/telegraf/outputs/nsq"
_ "github.com/influxdb/telegraf/outputs/opentsdb"
_ "github.com/influxdb/telegraf/outputs/prometheus_client"
_ "github.com/influxdb/telegraf/outputs/riemann"
)

View File

@ -1,37 +0,0 @@
package all
import (
_ "github.com/influxdb/telegraf/plugins/aerospike"
_ "github.com/influxdb/telegraf/plugins/apache"
_ "github.com/influxdb/telegraf/plugins/bcache"
_ "github.com/influxdb/telegraf/plugins/disque"
_ "github.com/influxdb/telegraf/plugins/elasticsearch"
_ "github.com/influxdb/telegraf/plugins/exec"
_ "github.com/influxdb/telegraf/plugins/haproxy"
_ "github.com/influxdb/telegraf/plugins/httpjson"
_ "github.com/influxdb/telegraf/plugins/influxdb"
_ "github.com/influxdb/telegraf/plugins/jolokia"
_ "github.com/influxdb/telegraf/plugins/kafka_consumer"
_ "github.com/influxdb/telegraf/plugins/leofs"
_ "github.com/influxdb/telegraf/plugins/lustre2"
_ "github.com/influxdb/telegraf/plugins/mailchimp"
_ "github.com/influxdb/telegraf/plugins/memcached"
_ "github.com/influxdb/telegraf/plugins/mongodb"
_ "github.com/influxdb/telegraf/plugins/mysql"
_ "github.com/influxdb/telegraf/plugins/nginx"
_ "github.com/influxdb/telegraf/plugins/phpfpm"
_ "github.com/influxdb/telegraf/plugins/ping"
_ "github.com/influxdb/telegraf/plugins/postgresql"
_ "github.com/influxdb/telegraf/plugins/procstat"
_ "github.com/influxdb/telegraf/plugins/prometheus"
_ "github.com/influxdb/telegraf/plugins/puppetagent"
_ "github.com/influxdb/telegraf/plugins/rabbitmq"
_ "github.com/influxdb/telegraf/plugins/redis"
_ "github.com/influxdb/telegraf/plugins/rethinkdb"
_ "github.com/influxdb/telegraf/plugins/statsd"
_ "github.com/influxdb/telegraf/plugins/system"
_ "github.com/influxdb/telegraf/plugins/trig"
_ "github.com/influxdb/telegraf/plugins/twemproxy"
_ "github.com/influxdb/telegraf/plugins/zfs"
_ "github.com/influxdb/telegraf/plugins/zookeeper"
)

View File

@ -1,759 +0,0 @@
package elasticsearch
const clusterResponse = `
{
"cluster_name": "elasticsearch_telegraf",
"status": "green",
"timed_out": false,
"number_of_nodes": 3,
"number_of_data_nodes": 3,
"active_primary_shards": 5,
"active_shards": 15,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0,
"indices": {
"v1": {
"status": "green",
"number_of_shards": 10,
"number_of_replicas": 1,
"active_primary_shards": 10,
"active_shards": 20,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0
},
"v2": {
"status": "red",
"number_of_shards": 10,
"number_of_replicas": 1,
"active_primary_shards": 0,
"active_shards": 0,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 20
}
}
}
`
var clusterHealthExpected = map[string]interface{}{
"status": "green",
"timed_out": false,
"number_of_nodes": 3,
"number_of_data_nodes": 3,
"active_primary_shards": 5,
"active_shards": 15,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0,
}
var v1IndexExpected = map[string]interface{}{
"status": "green",
"number_of_shards": 10,
"number_of_replicas": 1,
"active_primary_shards": 10,
"active_shards": 20,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0,
}
var v2IndexExpected = map[string]interface{}{
"status": "red",
"number_of_shards": 10,
"number_of_replicas": 1,
"active_primary_shards": 0,
"active_shards": 0,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 20,
}
const statsResponse = `
{
"cluster_name": "es-testcluster",
"nodes": {
"SDFsfSDFsdfFSDSDfSFDSDF": {
"timestamp": 1436365550135,
"name": "test.host.com",
"transport_address": "inet[/127.0.0.1:9300]",
"host": "test",
"ip": [
"inet[/127.0.0.1:9300]",
"NONE"
],
"attributes": {
"master": "true"
},
"indices": {
"docs": {
"count": 29652,
"deleted": 5229
},
"store": {
"size_in_bytes": 37715234,
"throttle_time_in_millis": 215
},
"indexing": {
"index_total": 84790,
"index_time_in_millis": 29680,
"index_current": 0,
"delete_total": 13879,
"delete_time_in_millis": 1139,
"delete_current": 0,
"noop_update_total": 0,
"is_throttled": false,
"throttle_time_in_millis": 0
},
"get": {
"total": 1,
"time_in_millis": 2,
"exists_total": 0,
"exists_time_in_millis": 0,
"missing_total": 1,
"missing_time_in_millis": 2,
"current": 0
},
"search": {
"open_contexts": 0,
"query_total": 1452,
"query_time_in_millis": 5695,
"query_current": 0,
"fetch_total": 414,
"fetch_time_in_millis": 146,
"fetch_current": 0
},
"merges": {
"current": 0,
"current_docs": 0,
"current_size_in_bytes": 0,
"total": 133,
"total_time_in_millis": 21060,
"total_docs": 203672,
"total_size_in_bytes": 142900226
},
"refresh": {
"total": 1076,
"total_time_in_millis": 20078
},
"flush": {
"total": 115,
"total_time_in_millis": 2401
},
"warmer": {
"current": 0,
"total": 2319,
"total_time_in_millis": 448
},
"filter_cache": {
"memory_size_in_bytes": 7384,
"evictions": 0
},
"id_cache": {
"memory_size_in_bytes": 0
},
"fielddata": {
"memory_size_in_bytes": 12996,
"evictions": 0
},
"percolate": {
"total": 0,
"time_in_millis": 0,
"current": 0,
"memory_size_in_bytes": -1,
"memory_size": "-1b",
"queries": 0
},
"completion": {
"size_in_bytes": 0
},
"segments": {
"count": 134,
"memory_in_bytes": 1285212,
"index_writer_memory_in_bytes": 0,
"index_writer_max_memory_in_bytes": 172368955,
"version_map_memory_in_bytes": 611844,
"fixed_bit_set_memory_in_bytes": 0
},
"translog": {
"operations": 17702,
"size_in_bytes": 17
},
"suggest": {
"total": 0,
"time_in_millis": 0,
"current": 0
},
"query_cache": {
"memory_size_in_bytes": 0,
"evictions": 0,
"hit_count": 0,
"miss_count": 0
},
"recovery": {
"current_as_source": 0,
"current_as_target": 0,
"throttle_time_in_millis": 0
}
},
"os": {
"timestamp": 1436460392944,
"load_average": [
0.01,
0.04,
0.05
],
"mem": {
"free_in_bytes": 477761536,
"used_in_bytes": 1621868544,
"free_percent": 74,
"used_percent": 25,
"actual_free_in_bytes": 1565470720,
"actual_used_in_bytes": 534159360
},
"swap": {
"used_in_bytes": 0,
"free_in_bytes": 487997440
}
},
"process": {
"timestamp": 1436460392945,
"open_file_descriptors": 160,
"cpu": {
"percent": 2,
"sys_in_millis": 1870,
"user_in_millis": 13610,
"total_in_millis": 15480
},
"mem": {
"total_virtual_in_bytes": 4747890688
}
},
"jvm": {
"timestamp": 1436460392945,
"uptime_in_millis": 202245,
"mem": {
"heap_used_in_bytes": 52709568,
"heap_used_percent": 5,
"heap_committed_in_bytes": 259522560,
"heap_max_in_bytes": 1038876672,
"non_heap_used_in_bytes": 39634576,
"non_heap_committed_in_bytes": 40841216,
"pools": {
"young": {
"used_in_bytes": 32685760,
"max_in_bytes": 279183360,
"peak_used_in_bytes": 71630848,
"peak_max_in_bytes": 279183360
},
"survivor": {
"used_in_bytes": 8912880,
"max_in_bytes": 34865152,
"peak_used_in_bytes": 8912888,
"peak_max_in_bytes": 34865152
},
"old": {
"used_in_bytes": 11110928,
"max_in_bytes": 724828160,
"peak_used_in_bytes": 14354608,
"peak_max_in_bytes": 724828160
}
}
},
"threads": {
"count": 44,
"peak_count": 45
},
"gc": {
"collectors": {
"young": {
"collection_count": 2,
"collection_time_in_millis": 98
},
"old": {
"collection_count": 1,
"collection_time_in_millis": 24
}
}
},
"buffer_pools": {
"direct": {
"count": 40,
"used_in_bytes": 6304239,
"total_capacity_in_bytes": 6304239
},
"mapped": {
"count": 0,
"used_in_bytes": 0,
"total_capacity_in_bytes": 0
}
}
},
"thread_pool": {
"percolate": {
"threads": 123,
"queue": 23,
"active": 13,
"rejected": 235,
"largest": 23,
"completed": 33
},
"fetch_shard_started": {
"threads": 3,
"queue": 1,
"active": 5,
"rejected": 6,
"largest": 4,
"completed": 54
},
"listener": {
"threads": 1,
"queue": 2,
"active": 4,
"rejected": 8,
"largest": 1,
"completed": 1
},
"index": {
"threads": 6,
"queue": 8,
"active": 4,
"rejected": 2,
"largest": 3,
"completed": 6
},
"refresh": {
"threads": 23,
"queue": 7,
"active": 3,
"rejected": 4,
"largest": 8,
"completed": 3
},
"suggest": {
"threads": 2,
"queue": 7,
"active": 2,
"rejected": 1,
"largest": 8,
"completed": 3
},
"generic": {
"threads": 1,
"queue": 4,
"active": 6,
"rejected": 3,
"largest": 2,
"completed": 27
},
"warmer": {
"threads": 2,
"queue": 7,
"active": 3,
"rejected": 2,
"largest": 3,
"completed": 1
},
"search": {
"threads": 5,
"queue": 7,
"active": 2,
"rejected": 7,
"largest": 2,
"completed": 4
},
"flush": {
"threads": 3,
"queue": 8,
"active": 0,
"rejected": 1,
"largest": 5,
"completed": 3
},
"optimize": {
"threads": 3,
"queue": 4,
"active": 1,
"rejected": 2,
"largest": 7,
"completed": 3
},
"fetch_shard_store": {
"threads": 1,
"queue": 7,
"active": 4,
"rejected": 2,
"largest": 4,
"completed": 1
},
"management": {
"threads": 2,
"queue": 3,
"active": 1,
"rejected": 6,
"largest": 2,
"completed": 22
},
"get": {
"threads": 1,
"queue": 8,
"active": 4,
"rejected": 3,
"largest": 2,
"completed": 1
},
"merge": {
"threads": 6,
"queue": 4,
"active": 5,
"rejected": 2,
"largest": 5,
"completed": 1
},
"bulk": {
"threads": 4,
"queue": 5,
"active": 7,
"rejected": 3,
"largest": 1,
"completed": 4
},
"snapshot": {
"threads": 8,
"queue": 5,
"active": 6,
"rejected": 2,
"largest": 1,
"completed": 0
}
},
"fs": {
"timestamp": 1436460392946,
"total": {
"total_in_bytes": 19507089408,
"free_in_bytes": 16909316096,
"available_in_bytes": 15894814720
},
"data": [
{
"path": "/usr/share/elasticsearch/data/elasticsearch/nodes/0",
"mount": "/usr/share/elasticsearch/data",
"type": "ext4",
"total_in_bytes": 19507089408,
"free_in_bytes": 16909316096,
"available_in_bytes": 15894814720
}
]
},
"transport": {
"server_open": 13,
"rx_count": 6,
"rx_size_in_bytes": 1380,
"tx_count": 6,
"tx_size_in_bytes": 1380
},
"http": {
"current_open": 3,
"total_opened": 3
},
"breakers": {
"fielddata": {
"limit_size_in_bytes": 623326003,
"limit_size": "594.4mb",
"estimated_size_in_bytes": 0,
"estimated_size": "0b",
"overhead": 1.03,
"tripped": 0
},
"request": {
"limit_size_in_bytes": 415550668,
"limit_size": "396.2mb",
"estimated_size_in_bytes": 0,
"estimated_size": "0b",
"overhead": 1.0,
"tripped": 0
},
"parent": {
"limit_size_in_bytes": 727213670,
"limit_size": "693.5mb",
"estimated_size_in_bytes": 0,
"estimated_size": "0b",
"overhead": 1.0,
"tripped": 0
}
}
}
}
}
`
var indicesExpected = map[string]float64{
"indices_id_cache_memory_size_in_bytes": 0,
"indices_completion_size_in_bytes": 0,
"indices_suggest_total": 0,
"indices_suggest_time_in_millis": 0,
"indices_suggest_current": 0,
"indices_query_cache_memory_size_in_bytes": 0,
"indices_query_cache_evictions": 0,
"indices_query_cache_hit_count": 0,
"indices_query_cache_miss_count": 0,
"indices_store_size_in_bytes": 37715234,
"indices_store_throttle_time_in_millis": 215,
"indices_merges_current_docs": 0,
"indices_merges_current_size_in_bytes": 0,
"indices_merges_total": 133,
"indices_merges_total_time_in_millis": 21060,
"indices_merges_total_docs": 203672,
"indices_merges_total_size_in_bytes": 142900226,
"indices_merges_current": 0,
"indices_filter_cache_memory_size_in_bytes": 7384,
"indices_filter_cache_evictions": 0,
"indices_indexing_index_total": 84790,
"indices_indexing_index_time_in_millis": 29680,
"indices_indexing_index_current": 0,
"indices_indexing_noop_update_total": 0,
"indices_indexing_throttle_time_in_millis": 0,
"indices_indexing_delete_total": 13879,
"indices_indexing_delete_time_in_millis": 1139,
"indices_indexing_delete_current": 0,
"indices_get_exists_time_in_millis": 0,
"indices_get_missing_total": 1,
"indices_get_missing_time_in_millis": 2,
"indices_get_current": 0,
"indices_get_total": 1,
"indices_get_time_in_millis": 2,
"indices_get_exists_total": 0,
"indices_refresh_total": 1076,
"indices_refresh_total_time_in_millis": 20078,
"indices_percolate_current": 0,
"indices_percolate_memory_size_in_bytes": -1,
"indices_percolate_queries": 0,
"indices_percolate_total": 0,
"indices_percolate_time_in_millis": 0,
"indices_translog_operations": 17702,
"indices_translog_size_in_bytes": 17,
"indices_recovery_current_as_source": 0,
"indices_recovery_current_as_target": 0,
"indices_recovery_throttle_time_in_millis": 0,
"indices_docs_count": 29652,
"indices_docs_deleted": 5229,
"indices_flush_total_time_in_millis": 2401,
"indices_flush_total": 115,
"indices_fielddata_memory_size_in_bytes": 12996,
"indices_fielddata_evictions": 0,
"indices_search_fetch_current": 0,
"indices_search_open_contexts": 0,
"indices_search_query_total": 1452,
"indices_search_query_time_in_millis": 5695,
"indices_search_query_current": 0,
"indices_search_fetch_total": 414,
"indices_search_fetch_time_in_millis": 146,
"indices_warmer_current": 0,
"indices_warmer_total": 2319,
"indices_warmer_total_time_in_millis": 448,
"indices_segments_count": 134,
"indices_segments_memory_in_bytes": 1285212,
"indices_segments_index_writer_memory_in_bytes": 0,
"indices_segments_index_writer_max_memory_in_bytes": 172368955,
"indices_segments_version_map_memory_in_bytes": 611844,
"indices_segments_fixed_bit_set_memory_in_bytes": 0,
}
var osExpected = map[string]float64{
"os_swap_used_in_bytes": 0,
"os_swap_free_in_bytes": 487997440,
"os_timestamp": 1436460392944,
"os_mem_free_percent": 74,
"os_mem_used_percent": 25,
"os_mem_actual_free_in_bytes": 1565470720,
"os_mem_actual_used_in_bytes": 534159360,
"os_mem_free_in_bytes": 477761536,
"os_mem_used_in_bytes": 1621868544,
}
var processExpected = map[string]float64{
"process_mem_total_virtual_in_bytes": 4747890688,
"process_timestamp": 1436460392945,
"process_open_file_descriptors": 160,
"process_cpu_total_in_millis": 15480,
"process_cpu_percent": 2,
"process_cpu_sys_in_millis": 1870,
"process_cpu_user_in_millis": 13610,
}
var jvmExpected = map[string]float64{
"jvm_timestamp": 1436460392945,
"jvm_uptime_in_millis": 202245,
"jvm_mem_non_heap_used_in_bytes": 39634576,
"jvm_mem_non_heap_committed_in_bytes": 40841216,
"jvm_mem_pools_young_max_in_bytes": 279183360,
"jvm_mem_pools_young_peak_used_in_bytes": 71630848,
"jvm_mem_pools_young_peak_max_in_bytes": 279183360,
"jvm_mem_pools_young_used_in_bytes": 32685760,
"jvm_mem_pools_survivor_peak_used_in_bytes": 8912888,
"jvm_mem_pools_survivor_peak_max_in_bytes": 34865152,
"jvm_mem_pools_survivor_used_in_bytes": 8912880,
"jvm_mem_pools_survivor_max_in_bytes": 34865152,
"jvm_mem_pools_old_peak_max_in_bytes": 724828160,
"jvm_mem_pools_old_used_in_bytes": 11110928,
"jvm_mem_pools_old_max_in_bytes": 724828160,
"jvm_mem_pools_old_peak_used_in_bytes": 14354608,
"jvm_mem_heap_used_in_bytes": 52709568,
"jvm_mem_heap_used_percent": 5,
"jvm_mem_heap_committed_in_bytes": 259522560,
"jvm_mem_heap_max_in_bytes": 1038876672,
"jvm_threads_peak_count": 45,
"jvm_threads_count": 44,
"jvm_gc_collectors_young_collection_count": 2,
"jvm_gc_collectors_young_collection_time_in_millis": 98,
"jvm_gc_collectors_old_collection_count": 1,
"jvm_gc_collectors_old_collection_time_in_millis": 24,
"jvm_buffer_pools_direct_count": 40,
"jvm_buffer_pools_direct_used_in_bytes": 6304239,
"jvm_buffer_pools_direct_total_capacity_in_bytes": 6304239,
"jvm_buffer_pools_mapped_count": 0,
"jvm_buffer_pools_mapped_used_in_bytes": 0,
"jvm_buffer_pools_mapped_total_capacity_in_bytes": 0,
}
var threadPoolExpected = map[string]float64{
"thread_pool_merge_threads": 6,
"thread_pool_merge_queue": 4,
"thread_pool_merge_active": 5,
"thread_pool_merge_rejected": 2,
"thread_pool_merge_largest": 5,
"thread_pool_merge_completed": 1,
"thread_pool_bulk_threads": 4,
"thread_pool_bulk_queue": 5,
"thread_pool_bulk_active": 7,
"thread_pool_bulk_rejected": 3,
"thread_pool_bulk_largest": 1,
"thread_pool_bulk_completed": 4,
"thread_pool_warmer_threads": 2,
"thread_pool_warmer_queue": 7,
"thread_pool_warmer_active": 3,
"thread_pool_warmer_rejected": 2,
"thread_pool_warmer_largest": 3,
"thread_pool_warmer_completed": 1,
"thread_pool_get_largest": 2,
"thread_pool_get_completed": 1,
"thread_pool_get_threads": 1,
"thread_pool_get_queue": 8,
"thread_pool_get_active": 4,
"thread_pool_get_rejected": 3,
"thread_pool_index_threads": 6,
"thread_pool_index_queue": 8,
"thread_pool_index_active": 4,
"thread_pool_index_rejected": 2,
"thread_pool_index_largest": 3,
"thread_pool_index_completed": 6,
"thread_pool_suggest_threads": 2,
"thread_pool_suggest_queue": 7,
"thread_pool_suggest_active": 2,
"thread_pool_suggest_rejected": 1,
"thread_pool_suggest_largest": 8,
"thread_pool_suggest_completed": 3,
"thread_pool_fetch_shard_store_queue": 7,
"thread_pool_fetch_shard_store_active": 4,
"thread_pool_fetch_shard_store_rejected": 2,
"thread_pool_fetch_shard_store_largest": 4,
"thread_pool_fetch_shard_store_completed": 1,
"thread_pool_fetch_shard_store_threads": 1,
"thread_pool_management_threads": 2,
"thread_pool_management_queue": 3,
"thread_pool_management_active": 1,
"thread_pool_management_rejected": 6,
"thread_pool_management_largest": 2,
"thread_pool_management_completed": 22,
"thread_pool_percolate_queue": 23,
"thread_pool_percolate_active": 13,
"thread_pool_percolate_rejected": 235,
"thread_pool_percolate_largest": 23,
"thread_pool_percolate_completed": 33,
"thread_pool_percolate_threads": 123,
"thread_pool_listener_active": 4,
"thread_pool_listener_rejected": 8,
"thread_pool_listener_largest": 1,
"thread_pool_listener_completed": 1,
"thread_pool_listener_threads": 1,
"thread_pool_listener_queue": 2,
"thread_pool_search_rejected": 7,
"thread_pool_search_largest": 2,
"thread_pool_search_completed": 4,
"thread_pool_search_threads": 5,
"thread_pool_search_queue": 7,
"thread_pool_search_active": 2,
"thread_pool_fetch_shard_started_threads": 3,
"thread_pool_fetch_shard_started_queue": 1,
"thread_pool_fetch_shard_started_active": 5,
"thread_pool_fetch_shard_started_rejected": 6,
"thread_pool_fetch_shard_started_largest": 4,
"thread_pool_fetch_shard_started_completed": 54,
"thread_pool_refresh_rejected": 4,
"thread_pool_refresh_largest": 8,
"thread_pool_refresh_completed": 3,
"thread_pool_refresh_threads": 23,
"thread_pool_refresh_queue": 7,
"thread_pool_refresh_active": 3,
"thread_pool_optimize_threads": 3,
"thread_pool_optimize_queue": 4,
"thread_pool_optimize_active": 1,
"thread_pool_optimize_rejected": 2,
"thread_pool_optimize_largest": 7,
"thread_pool_optimize_completed": 3,
"thread_pool_snapshot_largest": 1,
"thread_pool_snapshot_completed": 0,
"thread_pool_snapshot_threads": 8,
"thread_pool_snapshot_queue": 5,
"thread_pool_snapshot_active": 6,
"thread_pool_snapshot_rejected": 2,
"thread_pool_generic_threads": 1,
"thread_pool_generic_queue": 4,
"thread_pool_generic_active": 6,
"thread_pool_generic_rejected": 3,
"thread_pool_generic_largest": 2,
"thread_pool_generic_completed": 27,
"thread_pool_flush_threads": 3,
"thread_pool_flush_queue": 8,
"thread_pool_flush_active": 0,
"thread_pool_flush_rejected": 1,
"thread_pool_flush_largest": 5,
"thread_pool_flush_completed": 3,
}
var fsExpected = map[string]float64{
"fs_timestamp": 1436460392946,
"fs_total_free_in_bytes": 16909316096,
"fs_total_available_in_bytes": 15894814720,
"fs_total_total_in_bytes": 19507089408,
}
var transportExpected = map[string]float64{
"transport_server_open": 13,
"transport_rx_count": 6,
"transport_rx_size_in_bytes": 1380,
"transport_tx_count": 6,
"transport_tx_size_in_bytes": 1380,
}
var httpExpected = map[string]float64{
"http_current_open": 3,
"http_total_opened": 3,
}
var breakersExpected = map[string]float64{
"breakers_fielddata_estimated_size_in_bytes": 0,
"breakers_fielddata_overhead": 1.03,
"breakers_fielddata_tripped": 0,
"breakers_fielddata_limit_size_in_bytes": 623326003,
"breakers_request_estimated_size_in_bytes": 0,
"breakers_request_overhead": 1.0,
"breakers_request_tripped": 0,
"breakers_request_limit_size_in_bytes": 415550668,
"breakers_parent_overhead": 1.0,
"breakers_parent_tripped": 0,
"breakers_parent_limit_size_in_bytes": 727213670,
"breakers_parent_estimated_size_in_bytes": 0,
}

View File

@ -1,162 +0,0 @@
package exec
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"github.com/gonuts/go-shellquote"
"github.com/influxdb/telegraf/plugins"
"math"
"os/exec"
"strings"
"sync"
"time"
)
const sampleConfig = `
# specify commands via an array of tables
[[plugins.exec.commands]]
# the command to run
command = "/usr/bin/mycollector --foo=bar"
# name of the command (used as a prefix for measurements)
name = "mycollector"
# Only run this command if it has been at least this many
# seconds since it last ran
interval = 10
`
type Exec struct {
Commands []*Command
runner Runner
clock Clock
}
type Command struct {
Command string
Name string
Interval int
lastRunAt time.Time
}
type Runner interface {
Run(*Command) ([]byte, error)
}
type Clock interface {
Now() time.Time
}
type CommandRunner struct{}
type RealClock struct{}
func (c CommandRunner) Run(command *Command) ([]byte, error) {
command.lastRunAt = time.Now()
split_cmd, err := shellquote.Split(command.Command)
if err != nil || len(split_cmd) == 0 {
return nil, fmt.Errorf("exec: unable to parse command, %s", err)
}
cmd := exec.Command(split_cmd[0], split_cmd[1:]...)
var out bytes.Buffer
cmd.Stdout = &out
if err := cmd.Run(); err != nil {
return nil, fmt.Errorf("exec: %s for command '%s'", err, command.Command)
}
return out.Bytes(), nil
}
func (c RealClock) Now() time.Time {
return time.Now()
}
func NewExec() *Exec {
return &Exec{runner: CommandRunner{}, clock: RealClock{}}
}
func (e *Exec) SampleConfig() string {
return sampleConfig
}
func (e *Exec) Description() string {
return "Read flattened metrics from one or more commands that output JSON to stdout"
}
func (e *Exec) Gather(acc plugins.Accumulator) error {
var wg sync.WaitGroup
errorChannel := make(chan error, len(e.Commands))
for _, c := range e.Commands {
wg.Add(1)
go func(c *Command, acc plugins.Accumulator) {
defer wg.Done()
err := e.gatherCommand(c, acc)
if err != nil {
errorChannel <- err
}
}(c, acc)
}
wg.Wait()
close(errorChannel)
// Get all errors and return them as one giant error
errorStrings := []string{}
for err := range errorChannel {
errorStrings = append(errorStrings, err.Error())
}
if len(errorStrings) == 0 {
return nil
}
return errors.New(strings.Join(errorStrings, "\n"))
}
func (e *Exec) gatherCommand(c *Command, acc plugins.Accumulator) error {
secondsSinceLastRun := 0.0
if c.lastRunAt.Unix() == 0 { // means time is uninitialized
secondsSinceLastRun = math.Inf(1)
} else {
secondsSinceLastRun = (e.clock.Now().Sub(c.lastRunAt)).Seconds()
}
if secondsSinceLastRun >= float64(c.Interval) {
out, err := e.runner.Run(c)
if err != nil {
return err
}
var jsonOut interface{}
err = json.Unmarshal(out, &jsonOut)
if err != nil {
return fmt.Errorf("exec: unable to parse output of '%s' as JSON, %s", c.Command, err)
}
processResponse(acc, c.Name, map[string]string{}, jsonOut)
}
return nil
}
func processResponse(acc plugins.Accumulator, prefix string, tags map[string]string, v interface{}) {
switch t := v.(type) {
case map[string]interface{}:
for k, v := range t {
processResponse(acc, prefix+"_"+k, tags, v)
}
case float64:
acc.Add(prefix, v, tags)
}
}
func init() {
plugins.Add("exec", func() plugins.Plugin {
return NewExec()
})
}

View File

@ -1,262 +0,0 @@
package exec
import (
"fmt"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"math"
"testing"
"time"
)
// Midnight 9/22/2015
const baseTimeSeconds = 1442905200
const validJson = `
{
"status": "green",
"num_processes": 82,
"cpu": {
"status": "red",
"nil_status": null,
"used": 8234,
"free": 32
},
"percent": 0.81,
"users": [0, 1, 2, 3]
}`
const malformedJson = `
{
"status": "green",
`
type runnerMock struct {
out []byte
err error
}
type clockMock struct {
now time.Time
}
func newRunnerMock(out []byte, err error) Runner {
return &runnerMock{
out: out,
err: err,
}
}
func (r runnerMock) Run(command *Command) ([]byte, error) {
if r.err != nil {
return nil, r.err
}
return r.out, nil
}
func newClockMock(now time.Time) Clock {
return &clockMock{now: now}
}
func (c clockMock) Now() time.Time {
return c.now
}
func TestExec(t *testing.T) {
runner := newRunnerMock([]byte(validJson), nil)
clock := newClockMock(time.Unix(baseTimeSeconds+20, 0))
command := Command{
Command: "testcommand arg1",
Name: "mycollector",
Interval: 10,
lastRunAt: time.Unix(baseTimeSeconds, 0),
}
e := &Exec{
runner: runner,
clock: clock,
Commands: []*Command{&command},
}
var acc testutil.Accumulator
initialPoints := len(acc.Points)
err := e.Gather(&acc)
deltaPoints := len(acc.Points) - initialPoints
require.NoError(t, err)
checkFloat := []struct {
name string
value float64
}{
{"mycollector_num_processes", 82},
{"mycollector_cpu_used", 8234},
{"mycollector_cpu_free", 32},
{"mycollector_percent", 0.81},
}
for _, c := range checkFloat {
assert.True(t, acc.CheckValue(c.name, c.value))
}
assert.Equal(t, deltaPoints, 4, "non-numeric measurements should be ignored")
}
func TestExecMalformed(t *testing.T) {
runner := newRunnerMock([]byte(malformedJson), nil)
clock := newClockMock(time.Unix(baseTimeSeconds+20, 0))
command := Command{
Command: "badcommand arg1",
Name: "mycollector",
Interval: 10,
lastRunAt: time.Unix(baseTimeSeconds, 0),
}
e := &Exec{
runner: runner,
clock: clock,
Commands: []*Command{&command},
}
var acc testutil.Accumulator
initialPoints := len(acc.Points)
err := e.Gather(&acc)
deltaPoints := len(acc.Points) - initialPoints
require.Error(t, err)
assert.Equal(t, deltaPoints, 0, "No new points should have been added")
}
func TestCommandError(t *testing.T) {
runner := newRunnerMock(nil, fmt.Errorf("exit status code 1"))
clock := newClockMock(time.Unix(baseTimeSeconds+20, 0))
command := Command{
Command: "badcommand",
Name: "mycollector",
Interval: 10,
lastRunAt: time.Unix(baseTimeSeconds, 0),
}
e := &Exec{
runner: runner,
clock: clock,
Commands: []*Command{&command},
}
var acc testutil.Accumulator
initialPoints := len(acc.Points)
err := e.Gather(&acc)
deltaPoints := len(acc.Points) - initialPoints
require.Error(t, err)
assert.Equal(t, deltaPoints, 0, "No new points should have been added")
}
func TestExecNotEnoughTime(t *testing.T) {
runner := newRunnerMock([]byte(validJson), nil)
clock := newClockMock(time.Unix(baseTimeSeconds+5, 0))
command := Command{
Command: "testcommand arg1",
Name: "mycollector",
Interval: 10,
lastRunAt: time.Unix(baseTimeSeconds, 0),
}
e := &Exec{
runner: runner,
clock: clock,
Commands: []*Command{&command},
}
var acc testutil.Accumulator
initialPoints := len(acc.Points)
err := e.Gather(&acc)
deltaPoints := len(acc.Points) - initialPoints
require.NoError(t, err)
assert.Equal(t, deltaPoints, 0, "No new points should have been added")
}
func TestExecUninitializedLastRunAt(t *testing.T) {
runner := newRunnerMock([]byte(validJson), nil)
clock := newClockMock(time.Unix(baseTimeSeconds, 0))
command := Command{
Command: "testcommand arg1",
Name: "mycollector",
Interval: math.MaxInt32,
// Uninitialized lastRunAt should default to time.Unix(0, 0), so this should
// run no matter what the interval is
}
e := &Exec{
runner: runner,
clock: clock,
Commands: []*Command{&command},
}
var acc testutil.Accumulator
initialPoints := len(acc.Points)
err := e.Gather(&acc)
deltaPoints := len(acc.Points) - initialPoints
require.NoError(t, err)
checkFloat := []struct {
name string
value float64
}{
{"mycollector_num_processes", 82},
{"mycollector_cpu_used", 8234},
{"mycollector_cpu_free", 32},
{"mycollector_percent", 0.81},
}
for _, c := range checkFloat {
assert.True(t, acc.CheckValue(c.name, c.value))
}
assert.Equal(t, deltaPoints, 4, "non-numeric measurements should be ignored")
}
func TestExecOneNotEnoughTimeAndOneEnoughTime(t *testing.T) {
runner := newRunnerMock([]byte(validJson), nil)
clock := newClockMock(time.Unix(baseTimeSeconds+5, 0))
notEnoughTimeCommand := Command{
Command: "testcommand arg1",
Name: "mycollector",
Interval: 10,
lastRunAt: time.Unix(baseTimeSeconds, 0),
}
enoughTimeCommand := Command{
Command: "testcommand arg1",
Name: "mycollector",
Interval: 3,
lastRunAt: time.Unix(baseTimeSeconds, 0),
}
e := &Exec{
runner: runner,
clock: clock,
Commands: []*Command{&notEnoughTimeCommand, &enoughTimeCommand},
}
var acc testutil.Accumulator
initialPoints := len(acc.Points)
err := e.Gather(&acc)
deltaPoints := len(acc.Points) - initialPoints
require.NoError(t, err)
checkFloat := []struct {
name string
value float64
}{
{"mycollector_num_processes", 82},
{"mycollector_cpu_used", 8234},
{"mycollector_cpu_free", 32},
{"mycollector_percent", 0.81},
}
for _, c := range checkFloat {
assert.True(t, acc.CheckValue(c.name, c.value))
}
assert.Equal(t, deltaPoints, 4, "Only one command should have been run")
}

View File

@ -4,7 +4,7 @@ import (
"bytes"
"encoding/binary"
"fmt"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/inputs"
"net"
"strconv"
"strings"
@ -119,7 +119,7 @@ func (a *Aerospike) Description() string {
return "Read stats from an aerospike server"
}
func (a *Aerospike) Gather(acc plugins.Accumulator) error {
func (a *Aerospike) Gather(acc inputs.Accumulator) error {
if len(a.Servers) == 0 {
return a.gatherServer("127.0.0.1:3000", acc)
}
@ -140,7 +140,7 @@ func (a *Aerospike) Gather(acc plugins.Accumulator) error {
return outerr
}
func (a *Aerospike) gatherServer(host string, acc plugins.Accumulator) error {
func (a *Aerospike) gatherServer(host string, acc inputs.Accumulator) error {
aerospikeInfo, err := getMap(STATISTICS_COMMAND, host)
if err != nil {
return fmt.Errorf("Aerospike info failed: %s", err)
@ -247,26 +247,32 @@ func get(key []byte, host string) (map[string]string, error) {
return data, err
}
func readAerospikeStats(stats map[string]string, acc plugins.Accumulator, host, namespace string) {
func readAerospikeStats(
stats map[string]string,
acc inputs.Accumulator,
host string,
namespace string,
) {
fields := make(map[string]interface{})
tags := map[string]string{
"aerospike_host": host,
"namespace": "_service",
}
if namespace != "" {
tags["namespace"] = namespace
}
for key, value := range stats {
tags := map[string]string{
"aerospike_host": host,
"namespace": "_service",
}
if namespace != "" {
tags["namespace"] = namespace
}
// We are going to ignore all string based keys
val, err := strconv.ParseInt(value, 10, 64)
if err == nil {
if strings.Contains(key, "-") {
key = strings.Replace(key, "-", "_", -1)
}
acc.Add(key, val, tags)
fields[key] = val
}
}
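// Emit all numeric statistics as fields of a single "aerospike" measurement,
// tagged with the source host and the namespace ("_service" when none is given).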
acc.AddFields("aerospike", fields, tags)
}
func unmarshalMapInfo(infoMap map[string]string, key string) (map[string]string, error) {
@ -330,7 +336,7 @@ func msgLenFromBytes(buf [6]byte) int64 {
}
func init() {
plugins.Add("aerospike", func() plugins.Plugin {
inputs.Add("aerospike", func() inputs.Input {
return &Aerospike{}
})
}

View File

@ -1,11 +1,12 @@
package aerospike
import (
"reflect"
"testing"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"reflect"
"testing"
)
func TestAerospikeStatistics(t *testing.T) {
@ -31,7 +32,7 @@ func TestAerospikeStatistics(t *testing.T) {
}
for _, metric := range asMetrics {
assert.True(t, acc.HasIntValue(metric), metric)
assert.True(t, acc.HasIntField("aerospike", metric), metric)
}
}
@ -49,13 +50,16 @@ func TestReadAerospikeStatsNoNamespace(t *testing.T) {
"stat_read_reqs": "12345",
}
readAerospikeStats(stats, &acc, "host1", "")
for k := range stats {
if k == "stat-write-errs" {
k = "stat_write_errs"
}
assert.True(t, acc.HasMeasurement(k))
assert.True(t, acc.CheckValue(k, int64(12345)))
fields := map[string]interface{}{
"stat_write_errs": int64(12345),
"stat_read_reqs": int64(12345),
}
tags := map[string]string{
"aerospike_host": "host1",
"namespace": "_service",
}
acc.AssertContainsTaggedFields(t, "aerospike", fields, tags)
}
func TestReadAerospikeStatsNamespace(t *testing.T) {
@ -66,13 +70,15 @@ func TestReadAerospikeStatsNamespace(t *testing.T) {
}
readAerospikeStats(stats, &acc, "host1", "test")
fields := map[string]interface{}{
"stat_write_errs": int64(12345),
"stat_read_reqs": int64(12345),
}
tags := map[string]string{
"aerospike_host": "host1",
"namespace": "test",
}
for k := range stats {
assert.True(t, acc.ValidateTaggedValue(k, int64(12345), tags) == nil)
}
acc.AssertContainsTaggedFields(t, "aerospike", fields, tags)
}
func TestAerospikeUnmarshalList(t *testing.T) {

plugins/inputs/all/all.go (new file)
View File

@ -0,0 +1,37 @@
package all
import (
_ "github.com/influxdb/telegraf/plugins/inputs/aerospike"
_ "github.com/influxdb/telegraf/plugins/inputs/apache"
_ "github.com/influxdb/telegraf/plugins/inputs/bcache"
_ "github.com/influxdb/telegraf/plugins/inputs/disque"
_ "github.com/influxdb/telegraf/plugins/inputs/elasticsearch"
_ "github.com/influxdb/telegraf/plugins/inputs/exec"
_ "github.com/influxdb/telegraf/plugins/inputs/haproxy"
_ "github.com/influxdb/telegraf/plugins/inputs/httpjson"
_ "github.com/influxdb/telegraf/plugins/inputs/influxdb"
_ "github.com/influxdb/telegraf/plugins/inputs/jolokia"
_ "github.com/influxdb/telegraf/plugins/inputs/kafka_consumer"
_ "github.com/influxdb/telegraf/plugins/inputs/leofs"
_ "github.com/influxdb/telegraf/plugins/inputs/lustre2"
_ "github.com/influxdb/telegraf/plugins/inputs/mailchimp"
_ "github.com/influxdb/telegraf/plugins/inputs/memcached"
_ "github.com/influxdb/telegraf/plugins/inputs/mongodb"
_ "github.com/influxdb/telegraf/plugins/inputs/mysql"
_ "github.com/influxdb/telegraf/plugins/inputs/nginx"
_ "github.com/influxdb/telegraf/plugins/inputs/phpfpm"
_ "github.com/influxdb/telegraf/plugins/inputs/ping"
_ "github.com/influxdb/telegraf/plugins/inputs/postgresql"
_ "github.com/influxdb/telegraf/plugins/inputs/procstat"
_ "github.com/influxdb/telegraf/plugins/inputs/prometheus"
_ "github.com/influxdb/telegraf/plugins/inputs/puppetagent"
_ "github.com/influxdb/telegraf/plugins/inputs/rabbitmq"
_ "github.com/influxdb/telegraf/plugins/inputs/redis"
_ "github.com/influxdb/telegraf/plugins/inputs/rethinkdb"
_ "github.com/influxdb/telegraf/plugins/inputs/statsd"
_ "github.com/influxdb/telegraf/plugins/inputs/system"
_ "github.com/influxdb/telegraf/plugins/inputs/trig"
_ "github.com/influxdb/telegraf/plugins/inputs/twemproxy"
_ "github.com/influxdb/telegraf/plugins/inputs/zfs"
_ "github.com/influxdb/telegraf/plugins/inputs/zookeeper"
)

View File

@ -11,7 +11,7 @@ import (
"sync"
"time"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/inputs"
)
type Apache struct {
@ -31,7 +31,7 @@ func (n *Apache) Description() string {
return "Read Apache status information (mod_status)"
}
func (n *Apache) Gather(acc plugins.Accumulator) error {
func (n *Apache) Gather(acc inputs.Accumulator) error {
var wg sync.WaitGroup
var outerr error
@ -59,7 +59,7 @@ var tr = &http.Transport{
var client = &http.Client{Transport: tr}
func (n *Apache) gatherUrl(addr *url.URL, acc plugins.Accumulator) error {
func (n *Apache) gatherUrl(addr *url.URL, acc inputs.Accumulator) error {
resp, err := client.Get(addr.String())
if err != nil {
return fmt.Errorf("error making HTTP request to %s: %s", addr.String(), err)
@ -72,32 +72,33 @@ func (n *Apache) gatherUrl(addr *url.URL, acc plugins.Accumulator) error {
tags := getTags(addr)
sc := bufio.NewScanner(resp.Body)
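// Collect every parsed status value, including the scoreboard counters, into
// one fields map; it is emitted below as a single "apache" measurement.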
fields := make(map[string]interface{})
for sc.Scan() {
line := sc.Text()
if strings.Contains(line, ":") {
parts := strings.SplitN(line, ":", 2)
key, part := strings.Replace(parts[0], " ", "", -1), strings.TrimSpace(parts[1])
switch key {
case "Scoreboard":
n.gatherScores(part, acc, tags)
for field, value := range n.gatherScores(part) {
fields[field] = value
}
default:
value, err := strconv.ParseFloat(part, 64)
if err != nil {
continue
}
acc.Add(key, value, tags)
fields[key] = value
}
}
}
acc.AddFields("apache", fields, tags)
return nil
}
func (n *Apache) gatherScores(data string, acc plugins.Accumulator, tags map[string]string) {
func (n *Apache) gatherScores(data string) map[string]interface{} {
var waiting, open int = 0, 0
var S, R, W, K, D, C, L, G, I int = 0, 0, 0, 0, 0, 0, 0, 0, 0
@ -129,17 +130,20 @@ func (n *Apache) gatherScores(data string, acc plugins.Accumulator, tags map[str
}
}
acc.Add("scboard_waiting", float64(waiting), tags)
acc.Add("scboard_starting", float64(S), tags)
acc.Add("scboard_reading", float64(R), tags)
acc.Add("scboard_sending", float64(W), tags)
acc.Add("scboard_keepalive", float64(K), tags)
acc.Add("scboard_dnslookup", float64(D), tags)
acc.Add("scboard_closing", float64(C), tags)
acc.Add("scboard_logging", float64(L), tags)
acc.Add("scboard_finishing", float64(G), tags)
acc.Add("scboard_idle_cleanup", float64(I), tags)
acc.Add("scboard_open", float64(open), tags)
fields := map[string]interface{}{
"scboard_waiting": float64(waiting),
"scboard_starting": float64(S),
"scboard_reading": float64(R),
"scboard_sending": float64(W),
"scboard_keepalive": float64(K),
"scboard_dnslookup": float64(D),
"scboard_closing": float64(C),
"scboard_logging": float64(L),
"scboard_finishing": float64(G),
"scboard_idle_cleanup": float64(I),
"scboard_open": float64(open),
}
return fields
}
// Get tag(s) for the apache plugin
@ -160,7 +164,7 @@ func getTags(addr *url.URL) map[string]string {
}
func init() {
plugins.Add("apache", func() plugins.Plugin {
inputs.Add("apache", func() inputs.Input {
return &Apache{}
})
}

View File

@ -8,7 +8,6 @@ import (
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@ -44,37 +43,31 @@ func TestHTTPApache(t *testing.T) {
err := a.Gather(&acc)
require.NoError(t, err)
testInt := []struct {
measurement string
value float64
}{
{"TotalAccesses", 1.29811861e+08},
{"TotalkBytes", 5.213701865e+09},
{"CPULoad", 6.51929},
{"Uptime", 941553},
{"ReqPerSec", 137.87},
{"BytesPerSec", 5.67024e+06},
{"BytesPerReq", 41127.4},
{"BusyWorkers", 270},
{"IdleWorkers", 630},
{"ConnsTotal", 1451},
{"ConnsAsyncWriting", 32},
{"ConnsAsyncKeepAlive", 945},
{"ConnsAsyncClosing", 205},
{"scboard_waiting", 630},
{"scboard_starting", 0},
{"scboard_reading", 157},
{"scboard_sending", 113},
{"scboard_keepalive", 0},
{"scboard_dnslookup", 0},
{"scboard_closing", 0},
{"scboard_logging", 0},
{"scboard_finishing", 0},
{"scboard_idle_cleanup", 0},
{"scboard_open", 2850},
}
for _, test := range testInt {
assert.True(t, acc.CheckValue(test.measurement, test.value))
fields := map[string]interface{}{
"TotalAccesses": float64(1.29811861e+08),
"TotalkBytes": float64(5.213701865e+09),
"CPULoad": float64(6.51929),
"Uptime": float64(941553),
"ReqPerSec": float64(137.87),
"BytesPerSec": float64(5.67024e+06),
"BytesPerReq": float64(41127.4),
"BusyWorkers": float64(270),
"IdleWorkers": float64(630),
"ConnsTotal": float64(1451),
"ConnsAsyncWriting": float64(32),
"ConnsAsyncKeepAlive": float64(945),
"ConnsAsyncClosing": float64(205),
"scboard_waiting": float64(630),
"scboard_starting": float64(0),
"scboard_reading": float64(157),
"scboard_sending": float64(113),
"scboard_keepalive": float64(0),
"scboard_dnslookup": float64(0),
"scboard_closing": float64(0),
"scboard_logging": float64(0),
"scboard_finishing": float64(0),
"scboard_idle_cleanup": float64(0),
"scboard_open": float64(2850),
}
acc.AssertContainsFields(t, "apache", fields)
}

View File

@ -26,27 +26,27 @@ Measurement names:
dirty_data
Amount of dirty data for this backing device in the cache. Continuously
updated unlike the cache set's version, but may be slightly off.
bypassed
Amount of IO (both reads and writes) that has bypassed the cache
cache_bypass_hits
cache_bypass_misses
Hits and misses for IO that is intended to skip the cache are still counted,
but broken out here.
cache_hits
cache_misses
cache_hit_ratio
Hits and misses are counted per individual IO as bcache sees them; a
partial hit is counted as a miss.
cache_miss_collisions
Counts instances where data was going to be inserted into the cache from a
cache miss, but raced with a write and data was already present (usually 0
since the synchronization for cache misses was rewritten)
cache_readaheads
Count of times readahead occurred.
```
@ -70,7 +70,7 @@ Using this configuration:
When run with:
```
./telegraf -config telegraf.conf -filter bcache -test
./telegraf -config telegraf.conf -input-filter bcache -test
```
It produces:

View File

@ -8,7 +8,7 @@ import (
"strconv"
"strings"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/inputs"
)
type Bcache struct {
@ -69,7 +69,7 @@ func prettyToBytes(v string) uint64 {
return uint64(result)
}
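The body of prettyToBytes is outside this hunk. bcache's sysfs stats are human-readable strings (e.g. "1.5G"), and the test further down expects dirty_data to come back as 1610612736 bytes (1.5 GiB), so the conversion presumably parses the numeric part and applies a 1024-based suffix multiplier. A hedged reconstruction, not the plugin's actual code:
```
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// prettyToBytesSketch is an illustrative reconstruction of prettyToBytes:
// parse the numeric part, then apply a k/M/G/T multiplier if a suffix is present.
func prettyToBytesSketch(v string) uint64 {
	if len(v) == 0 {
		return 0
	}
	mult := map[string]uint64{"k": 1 << 10, "M": 1 << 20, "G": 1 << 30, "T": 1 << 40}
	factor := uint64(1)
	if m, ok := mult[v[len(v)-1:]]; ok {
		factor = m
		v = v[:len(v)-1]
	}
	result, _ := strconv.ParseFloat(strings.TrimSpace(v), 64)
	return uint64(result * float64(factor))
}

func main() {
	fmt.Println(prettyToBytesSketch("1.5G")) // 1610612736 (1.5 GiB)
}
```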
func (b *Bcache) gatherBcache(bdev string, acc plugins.Accumulator) error {
func (b *Bcache) gatherBcache(bdev string, acc inputs.Accumulator) error {
tags := getTags(bdev)
metrics, err := filepath.Glob(bdev + "/stats_total/*")
if len(metrics) < 1 {
@ -81,7 +81,9 @@ func (b *Bcache) gatherBcache(bdev string, acc plugins.Accumulator) error {
}
rawValue := strings.TrimSpace(string(file))
value := prettyToBytes(rawValue)
acc.Add("dirty_data", value, tags)
fields := make(map[string]interface{})
fields["dirty_data"] = value
for _, path := range metrics {
key := filepath.Base(path)
@ -92,16 +94,17 @@ func (b *Bcache) gatherBcache(bdev string, acc plugins.Accumulator) error {
}
if key == "bypassed" {
value := prettyToBytes(rawValue)
acc.Add(key, value, tags)
fields[key] = value
} else {
value, _ := strconv.ParseUint(rawValue, 10, 64)
acc.Add(key, value, tags)
fields[key] = value
}
}
acc.AddFields("bcache", fields, tags)
return nil
}
func (b *Bcache) Gather(acc plugins.Accumulator) error {
func (b *Bcache) Gather(acc inputs.Accumulator) error {
bcacheDevsChecked := make(map[string]bool)
var restrictDevs bool
if len(b.BcacheDevs) != 0 {
@ -117,7 +120,7 @@ func (b *Bcache) Gather(acc plugins.Accumulator) error {
}
bdevs, _ := filepath.Glob(bcachePath + "/*/bdev*")
if len(bdevs) < 1 {
return errors.New("Can't found any bcache device")
return errors.New("Can't find any bcache device")
}
for _, bdev := range bdevs {
if restrictDevs {
@ -132,7 +135,7 @@ func (b *Bcache) Gather(acc plugins.Accumulator) error {
}
func init() {
plugins.Add("bcache", func() plugins.Plugin {
inputs.Add("bcache", func() inputs.Input {
return &Bcache{}
})
}

View File

@ -6,7 +6,6 @@ import (
"testing"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@ -29,11 +28,6 @@ var (
testBcacheBackingDevPath = os.TempDir() + "/telegraf/sys/devices/virtual/block/md10"
)
type metrics struct {
name string
value uint64
}
func TestBcacheGeneratesMetrics(t *testing.T) {
err := os.MkdirAll(testBcacheUuidPath, 0755)
require.NoError(t, err)
@ -53,70 +47,52 @@ func TestBcacheGeneratesMetrics(t *testing.T) {
err = os.MkdirAll(testBcacheUuidPath+"/bdev0/stats_total", 0755)
require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/dirty_data", []byte(dirty_data), 0644)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/dirty_data",
[]byte(dirty_data), 0644)
require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/bypassed", []byte(bypassed), 0644)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/bypassed",
[]byte(bypassed), 0644)
require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_bypass_hits", []byte(cache_bypass_hits), 0644)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_bypass_hits",
[]byte(cache_bypass_hits), 0644)
require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_bypass_misses", []byte(cache_bypass_misses), 0644)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_bypass_misses",
[]byte(cache_bypass_misses), 0644)
require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_hit_ratio", []byte(cache_hit_ratio), 0644)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_hit_ratio",
[]byte(cache_hit_ratio), 0644)
require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_hits", []byte(cache_hits), 0644)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_hits",
[]byte(cache_hits), 0644)
require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_miss_collisions", []byte(cache_miss_collisions), 0644)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_miss_collisions",
[]byte(cache_miss_collisions), 0644)
require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_misses", []byte(cache_misses), 0644)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_misses",
[]byte(cache_misses), 0644)
require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_readaheads", []byte(cache_readaheads), 0644)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_readaheads",
[]byte(cache_readaheads), 0644)
require.NoError(t, err)
intMetrics := []*metrics{
{
name: "dirty_data",
value: 1610612736,
},
{
name: "bypassed",
value: 5167704440832,
},
{
name: "cache_bypass_hits",
value: 146155333,
},
{
name: "cache_bypass_misses",
value: 0,
},
{
name: "cache_hit_ratio",
value: 90,
},
{
name: "cache_hits",
value: 511469583,
},
{
name: "cache_miss_collisions",
value: 157567,
},
{
name: "cache_misses",
value: 50616331,
},
{
name: "cache_readaheads",
value: 2,
},
fields := map[string]interface{}{
"dirty_data": uint64(1610612736),
"bypassed": uint64(5167704440832),
"cache_bypass_hits": uint64(146155333),
"cache_bypass_misses": uint64(0),
"cache_hit_ratio": uint64(90),
"cache_hits": uint64(511469583),
"cache_miss_collisions": uint64(157567),
"cache_misses": uint64(50616331),
"cache_readaheads": uint64(2),
}
tags := map[string]string{
@ -126,27 +102,19 @@ func TestBcacheGeneratesMetrics(t *testing.T) {
var acc testutil.Accumulator
//all devs
// all devs
b := &Bcache{BcachePath: testBcachePath}
err = b.Gather(&acc)
require.NoError(t, err)
acc.AssertContainsTaggedFields(t, "bcache", fields, tags)
for _, metric := range intMetrics {
assert.True(t, acc.HasUIntValue(metric.name), metric.name)
assert.True(t, acc.CheckTaggedValue(metric.name, metric.value, tags))
}
//one exist dev
// one existing dev
b = &Bcache{BcachePath: testBcachePath, BcacheDevs: []string{"bcache0"}}
err = b.Gather(&acc)
require.NoError(t, err)
for _, metric := range intMetrics {
assert.True(t, acc.HasUIntValue(metric.name), metric.name)
assert.True(t, acc.CheckTaggedValue(metric.name, metric.value, tags))
}
acc.AssertContainsTaggedFields(t, "bcache", fields, tags)
err = os.RemoveAll(os.TempDir() + "/telegraf")
require.NoError(t, err)

View File

@ -10,7 +10,7 @@ import (
"strings"
"sync"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/inputs"
)
type Disque struct {
@ -61,7 +61,7 @@ var ErrProtocolError = errors.New("disque protocol error")
// Reads stats from all configured servers and accumulates them.
// Returns one of the errors encountered while gathering stats (if any).
func (g *Disque) Gather(acc plugins.Accumulator) error {
func (g *Disque) Gather(acc inputs.Accumulator) error {
if len(g.Servers) == 0 {
url := &url.URL{
Host: ":7711",
@ -98,7 +98,7 @@ func (g *Disque) Gather(acc plugins.Accumulator) error {
const defaultPort = "7711"
func (g *Disque) gatherServer(addr *url.URL, acc plugins.Accumulator) error {
func (g *Disque) gatherServer(addr *url.URL, acc inputs.Accumulator) error {
if g.c == nil {
_, _, err := net.SplitHostPort(addr.Host)
@ -155,6 +155,8 @@ func (g *Disque) gatherServer(addr *url.URL, acc plugins.Accumulator) error {
var read int
fields := make(map[string]interface{})
tags := map[string]string{"host": addr.String()}
for read < sz {
line, err := r.ReadString('\n')
if err != nil {
@ -176,12 +178,11 @@ func (g *Disque) gatherServer(addr *url.URL, acc plugins.Accumulator) error {
continue
}
tags := map[string]string{"host": addr.String()}
val := strings.TrimSpace(parts[1])
ival, err := strconv.ParseUint(val, 10, 64)
if err == nil {
acc.Add(metric, ival, tags)
fields[metric] = ival
continue
}
@ -190,14 +191,14 @@ func (g *Disque) gatherServer(addr *url.URL, acc plugins.Accumulator) error {
return err
}
acc.Add(metric, fval, tags)
fields[metric] = fval
}
acc.AddFields("disque", fields, tags)
return nil
}
func init() {
plugins.Add("disque", func() plugins.Plugin {
inputs.Add("disque", func() inputs.Input {
return &Disque{}
})
}
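Worth noting about the value handling above: each `key:value` line from the INFO-style reply is stored as uint64 when it parses cleanly, and only falls back to float64 otherwise. A minimal standalone sketch of that fallback (the helper name is hypothetical; gatherServer inlines the logic):
```
package main

import (
	"fmt"
	"strconv"
)

// parseInfoValue mirrors the uint-then-float fallback used in gatherServer.
func parseInfoValue(val string) (interface{}, error) {
	if ival, err := strconv.ParseUint(val, 10, 64); err == nil {
		return ival, nil
	}
	fval, err := strconv.ParseFloat(val, 64)
	if err != nil {
		return nil, err
	}
	return fval, nil
}

func main() {
	for _, v := range []string{"1452705", "1.75"} {
		parsed, _ := parseInfoValue(v)
		fmt.Printf("%v (%T)\n", parsed, parsed) // 1452705 (uint64), then 1.75 (float64)
	}
}
```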

View File

@ -7,7 +7,6 @@ import (
"testing"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@ -55,42 +54,26 @@ func TestDisqueGeneratesMetrics(t *testing.T) {
err = r.Gather(&acc)
require.NoError(t, err)
checkInt := []struct {
name string
value uint64
}{
{"uptime", 1452705},
{"clients", 31},
{"blocked_clients", 13},
{"used_memory", 1840104},
{"used_memory_rss", 3227648},
{"used_memory_peak", 89603656},
{"total_connections_received", 5062777},
{"total_commands_processed", 12308396},
{"instantaneous_ops_per_sec", 18},
{"latest_fork_usec", 1644},
{"registered_jobs", 360},
{"registered_queues", 12},
}
for _, c := range checkInt {
assert.True(t, acc.CheckValue(c.name, c.value))
}
checkFloat := []struct {
name string
value float64
}{
{"mem_fragmentation_ratio", 1.75},
{"used_cpu_sys", 19585.73},
{"used_cpu_user", 11255.96},
{"used_cpu_sys_children", 1.75},
{"used_cpu_user_children", 1.91},
}
for _, c := range checkFloat {
assert.True(t, acc.CheckValue(c.name, c.value))
fields := map[string]interface{}{
"uptime": uint64(1452705),
"clients": uint64(31),
"blocked_clients": uint64(13),
"used_memory": uint64(1840104),
"used_memory_rss": uint64(3227648),
"used_memory_peak": uint64(89603656),
"total_connections_received": uint64(5062777),
"total_commands_processed": uint64(12308396),
"instantaneous_ops_per_sec": uint64(18),
"latest_fork_usec": uint64(1644),
"registered_jobs": uint64(360),
"registered_queues": uint64(12),
"mem_fragmentation_ratio": float64(1.75),
"used_cpu_sys": float64(19585.73),
"used_cpu_user": float64(11255.96),
"used_cpu_sys_children": float64(1.75),
"used_cpu_user_children": float64(1.91),
}
acc.AssertContainsFields(t, "disque", fields)
}
func TestDisqueCanPullStatsFromMultipleServers(t *testing.T) {
@ -137,42 +120,26 @@ func TestDisqueCanPullStatsFromMultipleServers(t *testing.T) {
err = r.Gather(&acc)
require.NoError(t, err)
checkInt := []struct {
name string
value uint64
}{
{"uptime", 1452705},
{"clients", 31},
{"blocked_clients", 13},
{"used_memory", 1840104},
{"used_memory_rss", 3227648},
{"used_memory_peak", 89603656},
{"total_connections_received", 5062777},
{"total_commands_processed", 12308396},
{"instantaneous_ops_per_sec", 18},
{"latest_fork_usec", 1644},
{"registered_jobs", 360},
{"registered_queues", 12},
}
for _, c := range checkInt {
assert.True(t, acc.CheckValue(c.name, c.value))
}
checkFloat := []struct {
name string
value float64
}{
{"mem_fragmentation_ratio", 1.75},
{"used_cpu_sys", 19585.73},
{"used_cpu_user", 11255.96},
{"used_cpu_sys_children", 1.75},
{"used_cpu_user_children", 1.91},
}
for _, c := range checkFloat {
assert.True(t, acc.CheckValue(c.name, c.value))
fields := map[string]interface{}{
"uptime": uint64(1452705),
"clients": uint64(31),
"blocked_clients": uint64(13),
"used_memory": uint64(1840104),
"used_memory_rss": uint64(3227648),
"used_memory_peak": uint64(89603656),
"total_connections_received": uint64(5062777),
"total_commands_processed": uint64(12308396),
"instantaneous_ops_per_sec": uint64(18),
"latest_fork_usec": uint64(1644),
"registered_jobs": uint64(360),
"registered_queues": uint64(12),
"mem_fragmentation_ratio": float64(1.75),
"used_cpu_sys": float64(19585.73),
"used_cpu_user": float64(11255.96),
"used_cpu_sys_children": float64(1.75),
"used_cpu_user_children": float64(1.91),
}
acc.AssertContainsFields(t, "disque", fields)
}
const testOutput = `# Server

View File

@ -31,8 +31,9 @@ contains `status`, `timed_out`, `number_of_nodes`, `number_of_data_nodes`,
`initializing_shards`, `unassigned_shards` fields
- elasticsearch_cluster_health
contains `status`, `number_of_shards`, `number_of_replicas`, `active_primary_shards`,
`active_shards`, `relocating_shards`, `initializing_shards`, `unassigned_shards` fields
contains `status`, `number_of_shards`, `number_of_replicas`,
`active_primary_shards`, `active_shards`, `relocating_shards`,
`initializing_shards`, `unassigned_shards` fields
- elasticsearch_indices
#### node measurements:
@ -316,4 +317,4 @@ Transport statistics about sent and received bytes in cluster communication meas
- elasticsearch_transport_rx_count value=6
- elasticsearch_transport_rx_size_in_bytes value=1380
- elasticsearch_transport_tx_count value=6
- elasticsearch_transport_tx_size_in_bytes value=1380
- elasticsearch_transport_tx_size_in_bytes value=1380

View File

@ -6,7 +6,8 @@ import (
"net/http"
"time"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/internal"
"github.com/influxdb/telegraf/plugins/inputs"
)
const statsPath = "/_nodes/stats"
@ -91,7 +92,7 @@ func (e *Elasticsearch) Description() string {
// Gather reads the stats from Elasticsearch and writes them to the
// Accumulator.
func (e *Elasticsearch) Gather(acc plugins.Accumulator) error {
func (e *Elasticsearch) Gather(acc inputs.Accumulator) error {
for _, serv := range e.Servers {
var url string
if e.Local {
@ -109,7 +110,7 @@ func (e *Elasticsearch) Gather(acc plugins.Accumulator) error {
return nil
}
func (e *Elasticsearch) gatherNodeStats(url string, acc plugins.Accumulator) error {
func (e *Elasticsearch) gatherNodeStats(url string, acc inputs.Accumulator) error {
nodeStats := &struct {
ClusterName string `json:"cluster_name"`
Nodes map[string]*node `json:"nodes"`
@ -141,16 +142,20 @@ func (e *Elasticsearch) gatherNodeStats(url string, acc plugins.Accumulator) err
"breakers": n.Breakers,
}
now := time.Now()
for p, s := range stats {
if err := e.parseInterface(acc, p, tags, s); err != nil {
f := internal.JSONFlattener{}
err := f.FlattenJSON("", s)
if err != nil {
return err
}
acc.AddFields("elasticsearch_"+p, f.Fields, tags, now)
}
}
return nil
}
func (e *Elasticsearch) gatherClusterStats(url string, acc plugins.Accumulator) error {
func (e *Elasticsearch) gatherClusterStats(url string, acc inputs.Accumulator) error {
clusterStats := &clusterHealth{}
if err := e.gatherData(url, clusterStats); err != nil {
return err
@ -168,7 +173,7 @@ func (e *Elasticsearch) gatherClusterStats(url string, acc plugins.Accumulator)
"unassigned_shards": clusterStats.UnassignedShards,
}
acc.AddFields(
"cluster_health",
"elasticsearch_cluster_health",
clusterFields,
map[string]string{"name": clusterStats.ClusterName},
measurementTime,
@ -186,7 +191,7 @@ func (e *Elasticsearch) gatherClusterStats(url string, acc plugins.Accumulator)
"unassigned_shards": health.UnassignedShards,
}
acc.AddFields(
"indices",
"elasticsearch_indices",
indexFields,
map[string]string{"index": name},
measurementTime,
@ -205,7 +210,8 @@ func (e *Elasticsearch) gatherData(url string, v interface{}) error {
// NOTE: we are not going to read/discard r.Body under the assumption we'd prefer
// to let the underlying transport close the connection and re-establish a new one for
// future calls.
return fmt.Errorf("elasticsearch: API responded with status-code %d, expected %d", r.StatusCode, http.StatusOK)
return fmt.Errorf("elasticsearch: API responded with status-code %d, expected %d",
r.StatusCode, http.StatusOK)
}
if err = json.NewDecoder(r.Body).Decode(v); err != nil {
return err
@ -213,27 +219,8 @@ func (e *Elasticsearch) gatherData(url string, v interface{}) error {
return nil
}
func (e *Elasticsearch) parseInterface(acc plugins.Accumulator, prefix string, tags map[string]string, v interface{}) error {
switch t := v.(type) {
case map[string]interface{}:
for k, v := range t {
if err := e.parseInterface(acc, prefix+"_"+k, tags, v); err != nil {
return err
}
}
case float64:
acc.Add(prefix, t, tags)
case bool, string, []interface{}:
// ignored types
return nil
default:
return fmt.Errorf("elasticsearch: got unexpected type %T with value %v (%s)", t, t, prefix)
}
return nil
}
func init() {
plugins.Add("elasticsearch", func() plugins.Plugin {
inputs.Add("elasticsearch", func() inputs.Input {
return NewElasticsearch()
})
}

View File

@ -7,7 +7,7 @@ import (
"testing"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@ -52,23 +52,15 @@ func TestElasticsearch(t *testing.T) {
"node_host": "test",
}
testTables := []map[string]float64{
indicesExpected,
osExpected,
processExpected,
jvmExpected,
threadPoolExpected,
fsExpected,
transportExpected,
httpExpected,
breakersExpected,
}
for _, testTable := range testTables {
for k, v := range testTable {
assert.NoError(t, acc.ValidateTaggedValue(k, v, tags))
}
}
acc.AssertContainsTaggedFields(t, "elasticsearch_indices", indicesExpected, tags)
acc.AssertContainsTaggedFields(t, "elasticsearch_os", osExpected, tags)
acc.AssertContainsTaggedFields(t, "elasticsearch_process", processExpected, tags)
acc.AssertContainsTaggedFields(t, "elasticsearch_jvm", jvmExpected, tags)
acc.AssertContainsTaggedFields(t, "elasticsearch_thread_pool", threadPoolExpected, tags)
acc.AssertContainsTaggedFields(t, "elasticsearch_fs", fsExpected, tags)
acc.AssertContainsTaggedFields(t, "elasticsearch_transport", transportExpected, tags)
acc.AssertContainsTaggedFields(t, "elasticsearch_http", httpExpected, tags)
acc.AssertContainsTaggedFields(t, "elasticsearch_breakers", breakersExpected, tags)
}
func TestGatherClusterStats(t *testing.T) {
@ -80,29 +72,15 @@ func TestGatherClusterStats(t *testing.T) {
var acc testutil.Accumulator
require.NoError(t, es.Gather(&acc))
var clusterHealthTests = []struct {
measurement string
fields map[string]interface{}
tags map[string]string
}{
{
"cluster_health",
clusterHealthExpected,
map[string]string{"name": "elasticsearch_telegraf"},
},
{
"indices",
v1IndexExpected,
map[string]string{"index": "v1"},
},
{
"indices",
v2IndexExpected,
map[string]string{"index": "v2"},
},
}
acc.AssertContainsTaggedFields(t, "elasticsearch_cluster_health",
clusterHealthExpected,
map[string]string{"name": "elasticsearch_telegraf"})
for _, exp := range clusterHealthTests {
assert.NoError(t, acc.ValidateTaggedFields(exp.measurement, exp.fields, exp.tags))
}
acc.AssertContainsTaggedFields(t, "elasticsearch_indices",
v1IndexExpected,
map[string]string{"index": "v1"})
acc.AssertContainsTaggedFields(t, "elasticsearch_indices",
v2IndexExpected,
map[string]string{"index": "v2"})
}

View File

@ -0,0 +1,759 @@
package elasticsearch
const clusterResponse = `
{
"cluster_name": "elasticsearch_telegraf",
"status": "green",
"timed_out": false,
"number_of_nodes": 3,
"number_of_data_nodes": 3,
"active_primary_shards": 5,
"active_shards": 15,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0,
"indices": {
"v1": {
"status": "green",
"number_of_shards": 10,
"number_of_replicas": 1,
"active_primary_shards": 10,
"active_shards": 20,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0
},
"v2": {
"status": "red",
"number_of_shards": 10,
"number_of_replicas": 1,
"active_primary_shards": 0,
"active_shards": 0,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 20
}
}
}
`
var clusterHealthExpected = map[string]interface{}{
"status": "green",
"timed_out": false,
"number_of_nodes": 3,
"number_of_data_nodes": 3,
"active_primary_shards": 5,
"active_shards": 15,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0,
}
var v1IndexExpected = map[string]interface{}{
"status": "green",
"number_of_shards": 10,
"number_of_replicas": 1,
"active_primary_shards": 10,
"active_shards": 20,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0,
}
var v2IndexExpected = map[string]interface{}{
"status": "red",
"number_of_shards": 10,
"number_of_replicas": 1,
"active_primary_shards": 0,
"active_shards": 0,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 20,
}
const statsResponse = `
{
"cluster_name": "es-testcluster",
"nodes": {
"SDFsfSDFsdfFSDSDfSFDSDF": {
"timestamp": 1436365550135,
"name": "test.host.com",
"transport_address": "inet[/127.0.0.1:9300]",
"host": "test",
"ip": [
"inet[/127.0.0.1:9300]",
"NONE"
],
"attributes": {
"master": "true"
},
"indices": {
"docs": {
"count": 29652,
"deleted": 5229
},
"store": {
"size_in_bytes": 37715234,
"throttle_time_in_millis": 215
},
"indexing": {
"index_total": 84790,
"index_time_in_millis": 29680,
"index_current": 0,
"delete_total": 13879,
"delete_time_in_millis": 1139,
"delete_current": 0,
"noop_update_total": 0,
"is_throttled": false,
"throttle_time_in_millis": 0
},
"get": {
"total": 1,
"time_in_millis": 2,
"exists_total": 0,
"exists_time_in_millis": 0,
"missing_total": 1,
"missing_time_in_millis": 2,
"current": 0
},
"search": {
"open_contexts": 0,
"query_total": 1452,
"query_time_in_millis": 5695,
"query_current": 0,
"fetch_total": 414,
"fetch_time_in_millis": 146,
"fetch_current": 0
},
"merges": {
"current": 0,
"current_docs": 0,
"current_size_in_bytes": 0,
"total": 133,
"total_time_in_millis": 21060,
"total_docs": 203672,
"total_size_in_bytes": 142900226
},
"refresh": {
"total": 1076,
"total_time_in_millis": 20078
},
"flush": {
"total": 115,
"total_time_in_millis": 2401
},
"warmer": {
"current": 0,
"total": 2319,
"total_time_in_millis": 448
},
"filter_cache": {
"memory_size_in_bytes": 7384,
"evictions": 0
},
"id_cache": {
"memory_size_in_bytes": 0
},
"fielddata": {
"memory_size_in_bytes": 12996,
"evictions": 0
},
"percolate": {
"total": 0,
"time_in_millis": 0,
"current": 0,
"memory_size_in_bytes": -1,
"memory_size": "-1b",
"queries": 0
},
"completion": {
"size_in_bytes": 0
},
"segments": {
"count": 134,
"memory_in_bytes": 1285212,
"index_writer_memory_in_bytes": 0,
"index_writer_max_memory_in_bytes": 172368955,
"version_map_memory_in_bytes": 611844,
"fixed_bit_set_memory_in_bytes": 0
},
"translog": {
"operations": 17702,
"size_in_bytes": 17
},
"suggest": {
"total": 0,
"time_in_millis": 0,
"current": 0
},
"query_cache": {
"memory_size_in_bytes": 0,
"evictions": 0,
"hit_count": 0,
"miss_count": 0
},
"recovery": {
"current_as_source": 0,
"current_as_target": 0,
"throttle_time_in_millis": 0
}
},
"os": {
"timestamp": 1436460392944,
"load_average": [
0.01,
0.04,
0.05
],
"mem": {
"free_in_bytes": 477761536,
"used_in_bytes": 1621868544,
"free_percent": 74,
"used_percent": 25,
"actual_free_in_bytes": 1565470720,
"actual_used_in_bytes": 534159360
},
"swap": {
"used_in_bytes": 0,
"free_in_bytes": 487997440
}
},
"process": {
"timestamp": 1436460392945,
"open_file_descriptors": 160,
"cpu": {
"percent": 2,
"sys_in_millis": 1870,
"user_in_millis": 13610,
"total_in_millis": 15480
},
"mem": {
"total_virtual_in_bytes": 4747890688
}
},
"jvm": {
"timestamp": 1436460392945,
"uptime_in_millis": 202245,
"mem": {
"heap_used_in_bytes": 52709568,
"heap_used_percent": 5,
"heap_committed_in_bytes": 259522560,
"heap_max_in_bytes": 1038876672,
"non_heap_used_in_bytes": 39634576,
"non_heap_committed_in_bytes": 40841216,
"pools": {
"young": {
"used_in_bytes": 32685760,
"max_in_bytes": 279183360,
"peak_used_in_bytes": 71630848,
"peak_max_in_bytes": 279183360
},
"survivor": {
"used_in_bytes": 8912880,
"max_in_bytes": 34865152,
"peak_used_in_bytes": 8912888,
"peak_max_in_bytes": 34865152
},
"old": {
"used_in_bytes": 11110928,
"max_in_bytes": 724828160,
"peak_used_in_bytes": 14354608,
"peak_max_in_bytes": 724828160
}
}
},
"threads": {
"count": 44,
"peak_count": 45
},
"gc": {
"collectors": {
"young": {
"collection_count": 2,
"collection_time_in_millis": 98
},
"old": {
"collection_count": 1,
"collection_time_in_millis": 24
}
}
},
"buffer_pools": {
"direct": {
"count": 40,
"used_in_bytes": 6304239,
"total_capacity_in_bytes": 6304239
},
"mapped": {
"count": 0,
"used_in_bytes": 0,
"total_capacity_in_bytes": 0
}
}
},
"thread_pool": {
"percolate": {
"threads": 123,
"queue": 23,
"active": 13,
"rejected": 235,
"largest": 23,
"completed": 33
},
"fetch_shard_started": {
"threads": 3,
"queue": 1,
"active": 5,
"rejected": 6,
"largest": 4,
"completed": 54
},
"listener": {
"threads": 1,
"queue": 2,
"active": 4,
"rejected": 8,
"largest": 1,
"completed": 1
},
"index": {
"threads": 6,
"queue": 8,
"active": 4,
"rejected": 2,
"largest": 3,
"completed": 6
},
"refresh": {
"threads": 23,
"queue": 7,
"active": 3,
"rejected": 4,
"largest": 8,
"completed": 3
},
"suggest": {
"threads": 2,
"queue": 7,
"active": 2,
"rejected": 1,
"largest": 8,
"completed": 3
},
"generic": {
"threads": 1,
"queue": 4,
"active": 6,
"rejected": 3,
"largest": 2,
"completed": 27
},
"warmer": {
"threads": 2,
"queue": 7,
"active": 3,
"rejected": 2,
"largest": 3,
"completed": 1
},
"search": {
"threads": 5,
"queue": 7,
"active": 2,
"rejected": 7,
"largest": 2,
"completed": 4
},
"flush": {
"threads": 3,
"queue": 8,
"active": 0,
"rejected": 1,
"largest": 5,
"completed": 3
},
"optimize": {
"threads": 3,
"queue": 4,
"active": 1,
"rejected": 2,
"largest": 7,
"completed": 3
},
"fetch_shard_store": {
"threads": 1,
"queue": 7,
"active": 4,
"rejected": 2,
"largest": 4,
"completed": 1
},
"management": {
"threads": 2,
"queue": 3,
"active": 1,
"rejected": 6,
"largest": 2,
"completed": 22
},
"get": {
"threads": 1,
"queue": 8,
"active": 4,
"rejected": 3,
"largest": 2,
"completed": 1
},
"merge": {
"threads": 6,
"queue": 4,
"active": 5,
"rejected": 2,
"largest": 5,
"completed": 1
},
"bulk": {
"threads": 4,
"queue": 5,
"active": 7,
"rejected": 3,
"largest": 1,
"completed": 4
},
"snapshot": {
"threads": 8,
"queue": 5,
"active": 6,
"rejected": 2,
"largest": 1,
"completed": 0
}
},
"fs": {
"timestamp": 1436460392946,
"total": {
"total_in_bytes": 19507089408,
"free_in_bytes": 16909316096,
"available_in_bytes": 15894814720
},
"data": [
{
"path": "/usr/share/elasticsearch/data/elasticsearch/nodes/0",
"mount": "/usr/share/elasticsearch/data",
"type": "ext4",
"total_in_bytes": 19507089408,
"free_in_bytes": 16909316096,
"available_in_bytes": 15894814720
}
]
},
"transport": {
"server_open": 13,
"rx_count": 6,
"rx_size_in_bytes": 1380,
"tx_count": 6,
"tx_size_in_bytes": 1380
},
"http": {
"current_open": 3,
"total_opened": 3
},
"breakers": {
"fielddata": {
"limit_size_in_bytes": 623326003,
"limit_size": "594.4mb",
"estimated_size_in_bytes": 0,
"estimated_size": "0b",
"overhead": 1.03,
"tripped": 0
},
"request": {
"limit_size_in_bytes": 415550668,
"limit_size": "396.2mb",
"estimated_size_in_bytes": 0,
"estimated_size": "0b",
"overhead": 1.0,
"tripped": 0
},
"parent": {
"limit_size_in_bytes": 727213670,
"limit_size": "693.5mb",
"estimated_size_in_bytes": 0,
"estimated_size": "0b",
"overhead": 1.0,
"tripped": 0
}
}
}
}
}
`
var indicesExpected = map[string]interface{}{
"id_cache_memory_size_in_bytes": float64(0),
"completion_size_in_bytes": float64(0),
"suggest_total": float64(0),
"suggest_time_in_millis": float64(0),
"suggest_current": float64(0),
"query_cache_memory_size_in_bytes": float64(0),
"query_cache_evictions": float64(0),
"query_cache_hit_count": float64(0),
"query_cache_miss_count": float64(0),
"store_size_in_bytes": float64(37715234),
"store_throttle_time_in_millis": float64(215),
"merges_current_docs": float64(0),
"merges_current_size_in_bytes": float64(0),
"merges_total": float64(133),
"merges_total_time_in_millis": float64(21060),
"merges_total_docs": float64(203672),
"merges_total_size_in_bytes": float64(142900226),
"merges_current": float64(0),
"filter_cache_memory_size_in_bytes": float64(7384),
"filter_cache_evictions": float64(0),
"indexing_index_total": float64(84790),
"indexing_index_time_in_millis": float64(29680),
"indexing_index_current": float64(0),
"indexing_noop_update_total": float64(0),
"indexing_throttle_time_in_millis": float64(0),
"indexing_delete_total": float64(13879),
"indexing_delete_time_in_millis": float64(1139),
"indexing_delete_current": float64(0),
"get_exists_time_in_millis": float64(0),
"get_missing_total": float64(1),
"get_missing_time_in_millis": float64(2),
"get_current": float64(0),
"get_total": float64(1),
"get_time_in_millis": float64(2),
"get_exists_total": float64(0),
"refresh_total": float64(1076),
"refresh_total_time_in_millis": float64(20078),
"percolate_current": float64(0),
"percolate_memory_size_in_bytes": float64(-1),
"percolate_queries": float64(0),
"percolate_total": float64(0),
"percolate_time_in_millis": float64(0),
"translog_operations": float64(17702),
"translog_size_in_bytes": float64(17),
"recovery_current_as_source": float64(0),
"recovery_current_as_target": float64(0),
"recovery_throttle_time_in_millis": float64(0),
"docs_count": float64(29652),
"docs_deleted": float64(5229),
"flush_total_time_in_millis": float64(2401),
"flush_total": float64(115),
"fielddata_memory_size_in_bytes": float64(12996),
"fielddata_evictions": float64(0),
"search_fetch_current": float64(0),
"search_open_contexts": float64(0),
"search_query_total": float64(1452),
"search_query_time_in_millis": float64(5695),
"search_query_current": float64(0),
"search_fetch_total": float64(414),
"search_fetch_time_in_millis": float64(146),
"warmer_current": float64(0),
"warmer_total": float64(2319),
"warmer_total_time_in_millis": float64(448),
"segments_count": float64(134),
"segments_memory_in_bytes": float64(1285212),
"segments_index_writer_memory_in_bytes": float64(0),
"segments_index_writer_max_memory_in_bytes": float64(172368955),
"segments_version_map_memory_in_bytes": float64(611844),
"segments_fixed_bit_set_memory_in_bytes": float64(0),
}
var osExpected = map[string]interface{}{
"swap_used_in_bytes": float64(0),
"swap_free_in_bytes": float64(487997440),
"timestamp": float64(1436460392944),
"mem_free_percent": float64(74),
"mem_used_percent": float64(25),
"mem_actual_free_in_bytes": float64(1565470720),
"mem_actual_used_in_bytes": float64(534159360),
"mem_free_in_bytes": float64(477761536),
"mem_used_in_bytes": float64(1621868544),
}
var processExpected = map[string]interface{}{
"mem_total_virtual_in_bytes": float64(4747890688),
"timestamp": float64(1436460392945),
"open_file_descriptors": float64(160),
"cpu_total_in_millis": float64(15480),
"cpu_percent": float64(2),
"cpu_sys_in_millis": float64(1870),
"cpu_user_in_millis": float64(13610),
}
var jvmExpected = map[string]interface{}{
"timestamp": float64(1436460392945),
"uptime_in_millis": float64(202245),
"mem_non_heap_used_in_bytes": float64(39634576),
"mem_non_heap_committed_in_bytes": float64(40841216),
"mem_pools_young_max_in_bytes": float64(279183360),
"mem_pools_young_peak_used_in_bytes": float64(71630848),
"mem_pools_young_peak_max_in_bytes": float64(279183360),
"mem_pools_young_used_in_bytes": float64(32685760),
"mem_pools_survivor_peak_used_in_bytes": float64(8912888),
"mem_pools_survivor_peak_max_in_bytes": float64(34865152),
"mem_pools_survivor_used_in_bytes": float64(8912880),
"mem_pools_survivor_max_in_bytes": float64(34865152),
"mem_pools_old_peak_max_in_bytes": float64(724828160),
"mem_pools_old_used_in_bytes": float64(11110928),
"mem_pools_old_max_in_bytes": float64(724828160),
"mem_pools_old_peak_used_in_bytes": float64(14354608),
"mem_heap_used_in_bytes": float64(52709568),
"mem_heap_used_percent": float64(5),
"mem_heap_committed_in_bytes": float64(259522560),
"mem_heap_max_in_bytes": float64(1038876672),
"threads_peak_count": float64(45),
"threads_count": float64(44),
"gc_collectors_young_collection_count": float64(2),
"gc_collectors_young_collection_time_in_millis": float64(98),
"gc_collectors_old_collection_count": float64(1),
"gc_collectors_old_collection_time_in_millis": float64(24),
"buffer_pools_direct_count": float64(40),
"buffer_pools_direct_used_in_bytes": float64(6304239),
"buffer_pools_direct_total_capacity_in_bytes": float64(6304239),
"buffer_pools_mapped_count": float64(0),
"buffer_pools_mapped_used_in_bytes": float64(0),
"buffer_pools_mapped_total_capacity_in_bytes": float64(0),
}
var threadPoolExpected = map[string]interface{}{
"merge_threads": float64(6),
"merge_queue": float64(4),
"merge_active": float64(5),
"merge_rejected": float64(2),
"merge_largest": float64(5),
"merge_completed": float64(1),
"bulk_threads": float64(4),
"bulk_queue": float64(5),
"bulk_active": float64(7),
"bulk_rejected": float64(3),
"bulk_largest": float64(1),
"bulk_completed": float64(4),
"warmer_threads": float64(2),
"warmer_queue": float64(7),
"warmer_active": float64(3),
"warmer_rejected": float64(2),
"warmer_largest": float64(3),
"warmer_completed": float64(1),
"get_largest": float64(2),
"get_completed": float64(1),
"get_threads": float64(1),
"get_queue": float64(8),
"get_active": float64(4),
"get_rejected": float64(3),
"index_threads": float64(6),
"index_queue": float64(8),
"index_active": float64(4),
"index_rejected": float64(2),
"index_largest": float64(3),
"index_completed": float64(6),
"suggest_threads": float64(2),
"suggest_queue": float64(7),
"suggest_active": float64(2),
"suggest_rejected": float64(1),
"suggest_largest": float64(8),
"suggest_completed": float64(3),
"fetch_shard_store_queue": float64(7),
"fetch_shard_store_active": float64(4),
"fetch_shard_store_rejected": float64(2),
"fetch_shard_store_largest": float64(4),
"fetch_shard_store_completed": float64(1),
"fetch_shard_store_threads": float64(1),
"management_threads": float64(2),
"management_queue": float64(3),
"management_active": float64(1),
"management_rejected": float64(6),
"management_largest": float64(2),
"management_completed": float64(22),
"percolate_queue": float64(23),
"percolate_active": float64(13),
"percolate_rejected": float64(235),
"percolate_largest": float64(23),
"percolate_completed": float64(33),
"percolate_threads": float64(123),
"listener_active": float64(4),
"listener_rejected": float64(8),
"listener_largest": float64(1),
"listener_completed": float64(1),
"listener_threads": float64(1),
"listener_queue": float64(2),
"search_rejected": float64(7),
"search_largest": float64(2),
"search_completed": float64(4),
"search_threads": float64(5),
"search_queue": float64(7),
"search_active": float64(2),
"fetch_shard_started_threads": float64(3),
"fetch_shard_started_queue": float64(1),
"fetch_shard_started_active": float64(5),
"fetch_shard_started_rejected": float64(6),
"fetch_shard_started_largest": float64(4),
"fetch_shard_started_completed": float64(54),
"refresh_rejected": float64(4),
"refresh_largest": float64(8),
"refresh_completed": float64(3),
"refresh_threads": float64(23),
"refresh_queue": float64(7),
"refresh_active": float64(3),
"optimize_threads": float64(3),
"optimize_queue": float64(4),
"optimize_active": float64(1),
"optimize_rejected": float64(2),
"optimize_largest": float64(7),
"optimize_completed": float64(3),
"snapshot_largest": float64(1),
"snapshot_completed": float64(0),
"snapshot_threads": float64(8),
"snapshot_queue": float64(5),
"snapshot_active": float64(6),
"snapshot_rejected": float64(2),
"generic_threads": float64(1),
"generic_queue": float64(4),
"generic_active": float64(6),
"generic_rejected": float64(3),
"generic_largest": float64(2),
"generic_completed": float64(27),
"flush_threads": float64(3),
"flush_queue": float64(8),
"flush_active": float64(0),
"flush_rejected": float64(1),
"flush_largest": float64(5),
"flush_completed": float64(3),
}
var fsExpected = map[string]interface{}{
"timestamp": float64(1436460392946),
"total_free_in_bytes": float64(16909316096),
"total_available_in_bytes": float64(15894814720),
"total_total_in_bytes": float64(19507089408),
}
var transportExpected = map[string]interface{}{
"server_open": float64(13),
"rx_count": float64(6),
"rx_size_in_bytes": float64(1380),
"tx_count": float64(6),
"tx_size_in_bytes": float64(1380),
}
var httpExpected = map[string]interface{}{
"current_open": float64(3),
"total_opened": float64(3),
}
var breakersExpected = map[string]interface{}{
"fielddata_estimated_size_in_bytes": float64(0),
"fielddata_overhead": float64(1.03),
"fielddata_tripped": float64(0),
"fielddata_limit_size_in_bytes": float64(623326003),
"request_estimated_size_in_bytes": float64(0),
"request_overhead": float64(1.0),
"request_tripped": float64(0),
"request_limit_size_in_bytes": float64(415550668),
"parent_overhead": float64(1.0),
"parent_tripped": float64(0),
"parent_limit_size_in_bytes": float64(727213670),
"parent_estimated_size_in_bytes": float64(0),
}

View File

@ -0,0 +1,91 @@
package exec
import (
"bytes"
"encoding/json"
"fmt"
"os/exec"
"github.com/gonuts/go-shellquote"
"github.com/influxdb/telegraf/internal"
"github.com/influxdb/telegraf/plugins/inputs"
)
const sampleConfig = `
# the command to run
command = "/usr/bin/mycollector --foo=bar"
# measurement name suffix (for separating different commands)
name_suffix = "_mycollector"
`
type Exec struct {
Command string
runner Runner
}
type Runner interface {
Run(*Exec) ([]byte, error)
}
type CommandRunner struct{}
func (c CommandRunner) Run(e *Exec) ([]byte, error) {
split_cmd, err := shellquote.Split(e.Command)
if err != nil || len(split_cmd) == 0 {
return nil, fmt.Errorf("exec: unable to parse command, %s", err)
}
cmd := exec.Command(split_cmd[0], split_cmd[1:]...)
var out bytes.Buffer
cmd.Stdout = &out
if err := cmd.Run(); err != nil {
return nil, fmt.Errorf("exec: %s for command '%s'", err, e.Command)
}
return out.Bytes(), nil
}
func NewExec() *Exec {
return &Exec{runner: CommandRunner{}}
}
func (e *Exec) SampleConfig() string {
return sampleConfig
}
func (e *Exec) Description() string {
return "Read flattened metrics from one or more commands that output JSON to stdout"
}
func (e *Exec) Gather(acc inputs.Accumulator) error {
out, err := e.runner.Run(e)
if err != nil {
return err
}
var jsonOut interface{}
err = json.Unmarshal(out, &jsonOut)
if err != nil {
return fmt.Errorf("exec: unable to parse output of '%s' as JSON, %s",
e.Command, err)
}
f := internal.JSONFlattener{}
err = f.FlattenJSON("", jsonOut)
if err != nil {
return err
}
acc.AddFields("exec", f.Fields, nil)
return nil
}
func init() {
inputs.Add("exec", func() inputs.Input {
return NewExec()
})
}
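The Gather method above leans entirely on the JSONFlattener used throughout this commit: nested keys are joined with underscores and, per the exec tests, non-numeric values are dropped. A small standalone illustration, assuming that behaviour:
```
package main

import (
	"encoding/json"
	"fmt"

	"github.com/influxdb/telegraf/internal"
)

func main() {
	// Sample output a collector command might print to stdout.
	raw := []byte(`{"status": "green", "cpu": {"used": 8234, "free": 32}, "percent": 0.81}`)

	var parsed interface{}
	if err := json.Unmarshal(raw, &parsed); err != nil {
		panic(err)
	}

	// Flatten the same way Exec.Gather does.
	f := internal.JSONFlattener{}
	if err := f.FlattenJSON("", parsed); err != nil {
		panic(err)
	}

	// Expect cpu_used=8234, cpu_free=32, percent=0.81; the string field is ignored.
	fmt.Println(f.Fields)
}
```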

View File

@ -0,0 +1,95 @@
package exec
import (
"fmt"
"testing"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// Midnight 9/22/2015
const baseTimeSeconds = 1442905200
const validJson = `
{
"status": "green",
"num_processes": 82,
"cpu": {
"status": "red",
"nil_status": null,
"used": 8234,
"free": 32
},
"percent": 0.81,
"users": [0, 1, 2, 3]
}`
const malformedJson = `
{
"status": "green",
`
type runnerMock struct {
out []byte
err error
}
func newRunnerMock(out []byte, err error) Runner {
return &runnerMock{
out: out,
err: err,
}
}
func (r runnerMock) Run(e *Exec) ([]byte, error) {
if r.err != nil {
return nil, r.err
}
return r.out, nil
}
func TestExec(t *testing.T) {
e := &Exec{
runner: newRunnerMock([]byte(validJson), nil),
Command: "testcommand arg1",
}
var acc testutil.Accumulator
err := e.Gather(&acc)
require.NoError(t, err)
assert.Equal(t, acc.NFields(), 4, "non-numeric measurements should be ignored")
fields := map[string]interface{}{
"num_processes": float64(82),
"cpu_used": float64(8234),
"cpu_free": float64(32),
"percent": float64(0.81),
}
acc.AssertContainsFields(t, "exec", fields)
}
func TestExecMalformed(t *testing.T) {
e := &Exec{
runner: newRunnerMock([]byte(malformedJson), nil),
Command: "badcommand arg1",
}
var acc testutil.Accumulator
err := e.Gather(&acc)
require.Error(t, err)
assert.Equal(t, acc.NFields(), 0, "No new points should have been added")
}
func TestCommandError(t *testing.T) {
e := &Exec{
runner: newRunnerMock(nil, fmt.Errorf("exit status code 1")),
Command: "badcommand",
}
var acc testutil.Accumulator
err := e.Gather(&acc)
require.Error(t, err)
assert.Equal(t, acc.NFields(), 0, "No new points should have been added")
}

View File

@ -3,12 +3,13 @@ package haproxy
import (
"encoding/csv"
"fmt"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/inputs"
"io"
"net/http"
"net/url"
"strconv"
"sync"
"time"
)
//CSV format: https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.1
@ -90,7 +91,7 @@ var sampleConfig = `
# If no servers are specified, then default to 127.0.0.1:1936
servers = ["http://myhaproxy.com:1936", "http://anotherhaproxy.com:1936"]
# Or you can also use a local socket (not working yet)
# servers = ["socket:/run/haproxy/admin.sock"]
# servers = ["socket://run/haproxy/admin.sock"]
`
func (r *haproxy) SampleConfig() string {
@ -103,7 +104,7 @@ func (r *haproxy) Description() string {
// Reads stats from all configured servers and accumulates them.
// Returns one of the errors encountered while gathering stats (if any).
func (g *haproxy) Gather(acc plugins.Accumulator) error {
func (g *haproxy) Gather(acc inputs.Accumulator) error {
if len(g.Servers) == 0 {
return g.gatherServer("http://127.0.0.1:1936", acc)
}
@ -125,7 +126,7 @@ func (g *haproxy) Gather(acc plugins.Accumulator) error {
return outerr
}
func (g *haproxy) gatherServer(addr string, acc plugins.Accumulator) error {
func (g *haproxy) gatherServer(addr string, acc inputs.Accumulator) error {
if g.client == nil {
client := &http.Client{}
@ -152,214 +153,212 @@ func (g *haproxy) gatherServer(addr string, acc plugins.Accumulator) error {
return fmt.Errorf("Unable to get valid stat result from '%s': %s", addr, err)
}
importCsvResult(res.Body, acc, u.Host)
return nil
return importCsvResult(res.Body, acc, u.Host)
}
func importCsvResult(r io.Reader, acc plugins.Accumulator, host string) ([][]string, error) {
func importCsvResult(r io.Reader, acc inputs.Accumulator, host string) error {
csv := csv.NewReader(r)
result, err := csv.ReadAll()
now := time.Now()
for _, row := range result {
fields := make(map[string]interface{})
tags := map[string]string{
"server": host,
"proxy": row[HF_PXNAME],
"sv": row[HF_SVNAME],
}
for field, v := range row {
tags := map[string]string{
"server": host,
"proxy": row[HF_PXNAME],
"sv": row[HF_SVNAME],
}
switch field {
case HF_QCUR:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("qcur", ival, tags)
fields["qcur"] = ival
}
case HF_QMAX:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("qmax", ival, tags)
fields["qmax"] = ival
}
case HF_SCUR:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("scur", ival, tags)
fields["scur"] = ival
}
case HF_SMAX:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("smax", ival, tags)
fields["smax"] = ival
}
case HF_STOT:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("stot", ival, tags)
fields["stot"] = ival
}
case HF_BIN:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("bin", ival, tags)
fields["bin"] = ival
}
case HF_BOUT:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("bout", ival, tags)
fields["bout"] = ival
}
case HF_DREQ:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("dreq", ival, tags)
fields["dreq"] = ival
}
case HF_DRESP:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("dresp", ival, tags)
fields["dresp"] = ival
}
case HF_EREQ:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("ereq", ival, tags)
fields["ereq"] = ival
}
case HF_ECON:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("econ", ival, tags)
fields["econ"] = ival
}
case HF_ERESP:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("eresp", ival, tags)
fields["eresp"] = ival
}
case HF_WRETR:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("wretr", ival, tags)
fields["wretr"] = ival
}
case HF_WREDIS:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("wredis", ival, tags)
fields["wredis"] = ival
}
case HF_ACT:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("active_servers", ival, tags)
fields["active_servers"] = ival
}
case HF_BCK:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("backup_servers", ival, tags)
fields["backup_servers"] = ival
}
case HF_DOWNTIME:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("downtime", ival, tags)
fields["downtime"] = ival
}
case HF_THROTTLE:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("throttle", ival, tags)
fields["throttle"] = ival
}
case HF_LBTOT:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("lbtot", ival, tags)
fields["lbtot"] = ival
}
case HF_RATE:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("rate", ival, tags)
fields["rate"] = ival
}
case HF_RATE_MAX:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("rate_max", ival, tags)
fields["rate_max"] = ival
}
case HF_CHECK_DURATION:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("check_duration", ival, tags)
fields["check_duration"] = ival
}
case HF_HRSP_1xx:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("http_response.1xx", ival, tags)
fields["http_response.1xx"] = ival
}
case HF_HRSP_2xx:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("http_response.2xx", ival, tags)
fields["http_response.2xx"] = ival
}
case HF_HRSP_3xx:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("http_response.3xx", ival, tags)
fields["http_response.3xx"] = ival
}
case HF_HRSP_4xx:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("http_response.4xx", ival, tags)
fields["http_response.4xx"] = ival
}
case HF_HRSP_5xx:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("http_response.5xx", ival, tags)
fields["http_response.5xx"] = ival
}
case HF_REQ_RATE:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("req_rate", ival, tags)
fields["req_rate"] = ival
}
case HF_REQ_RATE_MAX:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("req_rate_max", ival, tags)
fields["req_rate_max"] = ival
}
case HF_REQ_TOT:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("req_tot", ival, tags)
fields["req_tot"] = ival
}
case HF_CLI_ABRT:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("cli_abort", ival, tags)
fields["cli_abort"] = ival
}
case HF_SRV_ABRT:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("srv_abort", ival, tags)
fields["srv_abort"] = ival
}
case HF_QTIME:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("qtime", ival, tags)
fields["qtime"] = ival
}
case HF_CTIME:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("ctime", ival, tags)
fields["ctime"] = ival
}
case HF_RTIME:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("rtime", ival, tags)
fields["rtime"] = ival
}
case HF_TTIME:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
acc.Add("ttime", ival, tags)
fields["ttime"] = ival
}
}
}
acc.AddFields("haproxy", fields, tags, now)
}
return result, err
return err
}
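Each numeric CSV column above follows the same parse-or-skip pattern, so a malformed or empty cell simply drops that one field instead of failing the whole row. A condensed sketch of the pattern (the helper name is hypothetical; importCsvResult spells it out per column):
```
package main

import (
	"fmt"
	"strconv"
)

// addUintField mirrors the per-column pattern in importCsvResult:
// parse as uint64, keep the field on success, silently skip otherwise.
func addUintField(fields map[string]interface{}, name, v string) {
	if ival, err := strconv.ParseUint(v, 10, 64); err == nil {
		fields[name] = ival
	}
}

func main() {
	fields := make(map[string]interface{})
	addUintField(fields, "scur", "288")
	addUintField(fields, "qcur", "") // empty cell: skipped
	fmt.Println(fields)              // map[scur:288]
}
```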
func init() {
plugins.Add("haproxy", func() plugins.Plugin {
inputs.Add("haproxy", func() inputs.Input {
return &haproxy{}
})
}

View File

@ -47,52 +47,39 @@ func TestHaproxyGeneratesMetricsWithAuthentication(t *testing.T) {
"sv": "host0",
}
assert.NoError(t, acc.ValidateTaggedValue("stot", uint64(171014), tags))
checkInt := []struct {
name string
value uint64
}{
{"qmax", 81},
{"scur", 288},
{"smax", 713},
{"bin", 5557055817},
{"bout", 24096715169},
{"dreq", 1102},
{"dresp", 80},
{"ereq", 95740},
{"econ", 0},
{"eresp", 0},
{"wretr", 17},
{"wredis", 19},
{"active_servers", 1},
{"backup_servers", 0},
{"downtime", 0},
{"throttle", 13},
{"lbtot", 114},
{"rate", 18},
{"rate_max", 102},
{"check_duration", 1},
{"http_response.1xx", 0},
{"http_response.2xx", 1314093},
{"http_response.3xx", 537036},
{"http_response.4xx", 123452},
{"http_response.5xx", 11966},
{"req_rate", 35},
{"req_rate_max", 140},
{"req_tot", 1987928},
{"cli_abort", 0},
{"srv_abort", 0},
{"qtime", 0},
{"ctime", 2},
{"rtime", 23},
{"ttime", 545},
}
for _, c := range checkInt {
assert.Equal(t, true, acc.CheckValue(c.name, c.value))
fields := map[string]interface{}{
"active_servers": uint64(1),
"backup_servers": uint64(0),
"bin": uint64(510913516),
"bout": uint64(2193856571),
"check_duration": uint64(10),
"cli_abort": uint64(73),
"ctime": uint64(2),
"downtime": uint64(0),
"dresp": uint64(0),
"econ": uint64(0),
"eresp": uint64(1),
"http_response.1xx": uint64(0),
"http_response.2xx": uint64(119534),
"http_response.3xx": uint64(48051),
"http_response.4xx": uint64(2345),
"http_response.5xx": uint64(1056),
"lbtot": uint64(171013),
"qcur": uint64(0),
"qmax": uint64(0),
"qtime": uint64(0),
"rate": uint64(3),
"rate_max": uint64(12),
"rtime": uint64(312),
"scur": uint64(1),
"smax": uint64(32),
"srv_abort": uint64(1),
"stot": uint64(171014),
"ttime": uint64(2341),
"wredis": uint64(0),
"wretr": uint64(1),
}
acc.AssertContainsTaggedFields(t, "haproxy", fields, tags)
// Here, we should get an error because we don't pass authentication data
r = &haproxy{
@ -124,10 +111,39 @@ func TestHaproxyGeneratesMetricsWithoutAuthentication(t *testing.T) {
"sv": "host0",
}
assert.NoError(t, acc.ValidateTaggedValue("stot", uint64(171014), tags))
assert.NoError(t, acc.ValidateTaggedValue("scur", uint64(1), tags))
assert.NoError(t, acc.ValidateTaggedValue("rate", uint64(3), tags))
assert.Equal(t, true, acc.CheckValue("bin", uint64(5557055817)))
fields := map[string]interface{}{
"active_servers": uint64(1),
"backup_servers": uint64(0),
"bin": uint64(510913516),
"bout": uint64(2193856571),
"check_duration": uint64(10),
"cli_abort": uint64(73),
"ctime": uint64(2),
"downtime": uint64(0),
"dresp": uint64(0),
"econ": uint64(0),
"eresp": uint64(1),
"http_response.1xx": uint64(0),
"http_response.2xx": uint64(119534),
"http_response.3xx": uint64(48051),
"http_response.4xx": uint64(2345),
"http_response.5xx": uint64(1056),
"lbtot": uint64(171013),
"qcur": uint64(0),
"qmax": uint64(0),
"qtime": uint64(0),
"rate": uint64(3),
"rate_max": uint64(12),
"rtime": uint64(312),
"scur": uint64(1),
"smax": uint64(32),
"srv_abort": uint64(1),
"stot": uint64(171014),
"ttime": uint64(2341),
"wredis": uint64(0),
"wretr": uint64(1),
}
acc.AssertContainsTaggedFields(t, "haproxy", fields, tags)
}
//When not passing server config, we default to localhost

View File

@ -10,20 +10,17 @@ import (
"strings"
"sync"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/internal"
"github.com/influxdb/telegraf/plugins/inputs"
)
type HttpJson struct {
Services []Service
client HTTPClient
}
type Service struct {
Name string
Servers []string
Method string
TagKeys []string
Parameters map[string]string
client HTTPClient
}
type HTTPClient interface {
@ -47,31 +44,28 @@ func (c RealHTTPClient) MakeRequest(req *http.Request) (*http.Response, error) {
}
var sampleConfig = `
# Specify services via an array of tables
[[plugins.httpjson.services]]
# a name for the service being polled
name = "webserver_stats"
# a name for the service being polled
name = "webserver_stats"
# URL of each server in the service's cluster
servers = [
"http://localhost:9999/stats/",
"http://localhost:9998/stats/",
]
# URL of each server in the service's cluster
servers = [
"http://localhost:9999/stats/",
"http://localhost:9998/stats/",
]
# HTTP method to use (case-sensitive)
method = "GET"
# HTTP method to use (case-sensitive)
method = "GET"
# List of tag names to extract from top-level of JSON server response
# tag_keys = [
# "my_tag_1",
# "my_tag_2"
# ]
# List of tag names to extract from top-level of JSON server response
# tag_keys = [
# "my_tag_1",
# "my_tag_2"
# ]
# HTTP parameters (all values must be strings)
[plugins.httpjson.services.parameters]
event_type = "cpu_spike"
threshold = "0.75"
# HTTP parameters (all values must be strings)
[inputs.httpjson.parameters]
event_type = "cpu_spike"
threshold = "0.75"
`
func (h *HttpJson) SampleConfig() string {
@ -83,25 +77,19 @@ func (h *HttpJson) Description() string {
}
// Gathers data for all servers.
func (h *HttpJson) Gather(acc plugins.Accumulator) error {
func (h *HttpJson) Gather(acc inputs.Accumulator) error {
var wg sync.WaitGroup
totalServers := 0
for _, service := range h.Services {
totalServers += len(service.Servers)
}
errorChannel := make(chan error, totalServers)
errorChannel := make(chan error, len(h.Servers))
for _, service := range h.Services {
for _, server := range service.Servers {
wg.Add(1)
go func(service Service, server string) {
defer wg.Done()
if err := h.gatherServer(acc, service, server); err != nil {
errorChannel <- err
}
}(service, server)
}
for _, server := range h.Servers {
wg.Add(1)
go func(server string) {
defer wg.Done()
if err := h.gatherServer(acc, server); err != nil {
errorChannel <- err
}
}(server)
}
wg.Wait()
@ -128,11 +116,10 @@ func (h *HttpJson) Gather(acc plugins.Accumulator) error {
// Returns:
// error: Any error that may have occurred
func (h *HttpJson) gatherServer(
acc plugins.Accumulator,
service Service,
acc inputs.Accumulator,
serverURL string,
) error {
resp, err := h.sendRequest(service, serverURL)
resp, err := h.sendRequest(serverURL)
if err != nil {
return err
}
@ -146,7 +133,7 @@ func (h *HttpJson) gatherServer(
"server": serverURL,
}
for _, tag := range service.TagKeys {
for _, tag := range h.TagKeys {
switch v := jsonOut[tag].(type) {
case string:
tags[tag] = v
@ -154,7 +141,19 @@ func (h *HttpJson) gatherServer(
delete(jsonOut, tag)
}
processResponse(acc, service.Name, tags, jsonOut)
f := internal.JSONFlattener{}
err = f.FlattenJSON("", jsonOut)
if err != nil {
return err
}
var msrmnt_name string
if h.Name == "" {
msrmnt_name = "httpjson"
} else {
msrmnt_name = "httpjson_" + h.Name
}
acc.AddFields(msrmnt_name, f.Fields, tags)
return nil
}
@ -165,7 +164,7 @@ func (h *HttpJson) gatherServer(
// Returns:
// string: body of the response
// error : Any error that may have occurred
func (h *HttpJson) sendRequest(service Service, serverURL string) (string, error) {
func (h *HttpJson) sendRequest(serverURL string) (string, error) {
// Prepare URL
requestURL, err := url.Parse(serverURL)
if err != nil {
@ -173,13 +172,13 @@ func (h *HttpJson) sendRequest(service Service, serverURL string) (string, error
}
params := url.Values{}
for k, v := range service.Parameters {
for k, v := range h.Parameters {
params.Add(k, v)
}
requestURL.RawQuery = params.Encode()
// Create + send request
req, err := http.NewRequest(service.Method, requestURL.String(), nil)
req, err := http.NewRequest(h.Method, requestURL.String(), nil)
if err != nil {
return "", err
}
@ -188,6 +187,7 @@ func (h *HttpJson) sendRequest(service Service, serverURL string) (string, error
if err != nil {
return "", err
}
defer resp.Body.Close()
defer resp.Body.Close()
body, err := ioutil.ReadAll(resp.Body)
@ -209,25 +209,8 @@ func (h *HttpJson) sendRequest(service Service, serverURL string) (string, error
return string(body), err
}
// Flattens the map generated from the JSON object and stores its float values using a
// plugins.Accumulator. It ignores any non-float values.
// Parameters:
// acc: the Accumulator to use
// prefix: What the name of the measurement name should be prefixed by.
// tags: telegraf tags to
func processResponse(acc plugins.Accumulator, prefix string, tags map[string]string, v interface{}) {
switch t := v.(type) {
case map[string]interface{}:
for k, v := range t {
processResponse(acc, prefix+"_"+k, tags, v)
}
case float64:
acc.Add(prefix, v, tags)
}
}
func init() {
plugins.Add("httpjson", func() plugins.Plugin {
inputs.Add("httpjson", func() inputs.Input {
return &HttpJson{client: RealHTTPClient{client: &http.Client{}}}
})
}

View File

@ -1,7 +1,6 @@
package httpjson
import (
"fmt"
"io/ioutil"
"net/http"
"strings"
@ -35,6 +34,11 @@ const validJSONTags = `
"build": "123"
}`
var expectedFields = map[string]interface{}{
"parent_child": float64(3),
"integer": float64(4),
}
const invalidJSON = "I don't think this is JSON"
const empty = ""
@ -76,37 +80,36 @@ func (c mockHTTPClient) MakeRequest(req *http.Request) (*http.Response, error) {
//
// Returns:
// *HttpJson: Pointer to an HttpJson object that uses the generated mock HTTP client
func genMockHttpJson(response string, statusCode int) *HttpJson {
return &HttpJson{
client: mockHTTPClient{responseBody: response, statusCode: statusCode},
Services: []Service{
Service{
Servers: []string{
"http://server1.example.com/metrics/",
"http://server2.example.com/metrics/",
},
Name: "my_webapp",
Method: "GET",
Parameters: map[string]string{
"httpParam1": "12",
"httpParam2": "the second parameter",
},
func genMockHttpJson(response string, statusCode int) []*HttpJson {
return []*HttpJson{
&HttpJson{
client: mockHTTPClient{responseBody: response, statusCode: statusCode},
Servers: []string{
"http://server1.example.com/metrics/",
"http://server2.example.com/metrics/",
},
Service{
Servers: []string{
"http://server3.example.com/metrics/",
"http://server4.example.com/metrics/",
},
Name: "other_webapp",
Method: "POST",
Parameters: map[string]string{
"httpParam1": "12",
"httpParam2": "the second parameter",
},
TagKeys: []string{
"role",
"build",
},
Name: "my_webapp",
Method: "GET",
Parameters: map[string]string{
"httpParam1": "12",
"httpParam2": "the second parameter",
},
},
&HttpJson{
client: mockHTTPClient{responseBody: response, statusCode: statusCode},
Servers: []string{
"http://server3.example.com/metrics/",
"http://server4.example.com/metrics/",
},
Name: "other_webapp",
Method: "POST",
Parameters: map[string]string{
"httpParam1": "12",
"httpParam2": "the second parameter",
},
TagKeys: []string{
"role",
"build",
},
},
}
@ -116,28 +119,15 @@ func genMockHttpJson(response string, statusCode int) *HttpJson {
func TestHttpJson200(t *testing.T) {
httpjson := genMockHttpJson(validJSON, 200)
var acc testutil.Accumulator
err := httpjson.Gather(&acc)
require.NoError(t, err)
assert.Equal(t, 8, len(acc.Points))
for _, service := range httpjson.Services {
for _, service := range httpjson {
var acc testutil.Accumulator
err := service.Gather(&acc)
require.NoError(t, err)
assert.Equal(t, 4, acc.NFields())
for _, srv := range service.Servers {
require.NoError(t,
acc.ValidateTaggedValue(
fmt.Sprintf("%s_parent_child", service.Name),
3.0,
map[string]string{"server": srv},
),
)
require.NoError(t,
acc.ValidateTaggedValue(
fmt.Sprintf("%s_integer", service.Name),
4.0,
map[string]string{"server": srv},
),
)
tags := map[string]string{"server": srv}
mname := "httpjson_" + service.Name
acc.AssertContainsTaggedFields(t, mname, expectedFields, tags)
}
}
}
@ -147,28 +137,22 @@ func TestHttpJson500(t *testing.T) {
httpjson := genMockHttpJson(validJSON, 500)
var acc testutil.Accumulator
err := httpjson.Gather(&acc)
err := httpjson[0].Gather(&acc)
assert.NotNil(t, err)
// 4 error lines for (2 urls) * (2 services)
assert.Equal(t, len(strings.Split(err.Error(), "\n")), 4)
assert.Equal(t, 0, len(acc.Points))
assert.Equal(t, 0, acc.NFields())
}
// Test response to HTTP 405
func TestHttpJsonBadMethod(t *testing.T) {
httpjson := genMockHttpJson(validJSON, 200)
httpjson.Services[0].Method = "NOT_A_REAL_METHOD"
httpjson[0].Method = "NOT_A_REAL_METHOD"
var acc testutil.Accumulator
err := httpjson.Gather(&acc)
err := httpjson[0].Gather(&acc)
assert.NotNil(t, err)
// 2 error lines for (2 urls) * (1 failed service)
assert.Equal(t, len(strings.Split(err.Error(), "\n")), 2)
// (2 measurements) * (2 servers) * (1 successful service)
assert.Equal(t, 4, len(acc.Points))
assert.Equal(t, 0, acc.NFields())
}
// Test response to malformed JSON
@ -176,12 +160,10 @@ func TestHttpJsonBadJson(t *testing.T) {
httpjson := genMockHttpJson(invalidJSON, 200)
var acc testutil.Accumulator
err := httpjson.Gather(&acc)
err := httpjson[0].Gather(&acc)
assert.NotNil(t, err)
// 4 error lines for (2 urls) * (2 services)
assert.Equal(t, len(strings.Split(err.Error(), "\n")), 4)
assert.Equal(t, 0, len(acc.Points))
assert.Equal(t, 0, acc.NFields())
}
// Test response to empty string as response object
@ -189,34 +171,27 @@ func TestHttpJsonEmptyResponse(t *testing.T) {
httpjson := genMockHttpJson(empty, 200)
var acc testutil.Accumulator
err := httpjson.Gather(&acc)
err := httpjson[0].Gather(&acc)
assert.NotNil(t, err)
// 4 error lines for (2 urls) * (2 services)
assert.Equal(t, len(strings.Split(err.Error(), "\n")), 4)
assert.Equal(t, 0, len(acc.Points))
assert.Equal(t, 0, acc.NFields())
}
// Test that the proper values are ignored or collected
func TestHttpJson200Tags(t *testing.T) {
httpjson := genMockHttpJson(validJSONTags, 200)
var acc testutil.Accumulator
err := httpjson.Gather(&acc)
require.NoError(t, err)
assert.Equal(t, 4, len(acc.Points))
for _, service := range httpjson.Services {
for _, service := range httpjson {
if service.Name == "other_webapp" {
var acc testutil.Accumulator
err := service.Gather(&acc)
require.NoError(t, err)
assert.Equal(t, 2, acc.NFields())
for _, srv := range service.Servers {
require.NoError(t,
acc.ValidateTaggedValue(
fmt.Sprintf("%s_value", service.Name),
15.0,
map[string]string{"server": srv, "role": "master", "build": "123"},
),
)
tags := map[string]string{"server": srv, "role": "master", "build": "123"}
fields := map[string]interface{}{"value": float64(15)}
mname := "httpjson_" + service.Name
acc.AssertContainsTaggedFields(t, mname, fields, tags)
}
}
}

View File

@ -5,7 +5,7 @@ The influxdb plugin collects InfluxDB-formatted data from JSON endpoints.
With a configuration of:
```toml
[[plugins.influxdb]]
[[inputs.influxdb]]
urls = [
"http://127.0.0.1:8086/debug/vars",
"http://192.168.2.1:8086/debug/vars"

View File

@ -8,7 +8,7 @@ import (
"strings"
"sync"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/inputs"
)
type InfluxDB struct {
@ -32,7 +32,7 @@ func (*InfluxDB) SampleConfig() string {
`
}
func (i *InfluxDB) Gather(acc plugins.Accumulator) error {
func (i *InfluxDB) Gather(acc inputs.Accumulator) error {
errorChannel := make(chan error, len(i.URLs))
var wg sync.WaitGroup
@ -77,7 +77,7 @@ type point struct {
// Returns:
// error: Any error that may have occurred
func (i *InfluxDB) gatherURL(
acc plugins.Accumulator,
acc inputs.Accumulator,
url string,
) error {
resp, err := http.Get(url)
@ -140,7 +140,7 @@ func (i *InfluxDB) gatherURL(
}
func init() {
plugins.Add("influxdb", func() plugins.Plugin {
inputs.Add("influxdb", func() inputs.Input {
return &InfluxDB{}
})
}

View File

@ -5,7 +5,7 @@ import (
"net/http/httptest"
"testing"
"github.com/influxdb/telegraf/plugins/influxdb"
"github.com/influxdb/telegraf/plugins/inputs/influxdb"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/require"
)
@ -72,29 +72,26 @@ func TestBasic(t *testing.T) {
require.NoError(t, plugin.Gather(&acc))
require.Len(t, acc.Points, 2)
require.NoError(t, acc.ValidateTaggedFieldsValue(
"foo",
map[string]interface{}{
// JSON will truncate floats to integer representations.
// Since there's no distinction in JSON, we can't assume it's an int.
"i": -1.0,
"f": 0.5,
"b": true,
"s": "string",
},
map[string]string{
"id": "ex1",
"url": fakeServer.URL + "/endpoint",
},
))
require.NoError(t, acc.ValidateTaggedFieldsValue(
"bar",
map[string]interface{}{
"x": "x",
},
map[string]string{
"id": "ex2",
"url": fakeServer.URL + "/endpoint",
},
))
fields := map[string]interface{}{
// JSON will truncate floats to integer representations.
// Since there's no distinction in JSON, we can't assume it's an int.
"i": -1.0,
"f": 0.5,
"b": true,
"s": "string",
}
tags := map[string]string{
"id": "ex1",
"url": fakeServer.URL + "/endpoint",
}
acc.AssertContainsTaggedFields(t, "foo", fields, tags)
fields = map[string]interface{}{
"x": "x",
}
tags = map[string]string{
"id": "ex2",
"url": fakeServer.URL + "/endpoint",
}
acc.AssertContainsTaggedFields(t, "bar", fields, tags)
}

View File

@ -7,9 +7,8 @@ import (
"io/ioutil"
"net/http"
"net/url"
"strings"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/inputs"
)
type Server struct {
@ -23,8 +22,6 @@ type Server struct {
type Metric struct {
Name string
Jmx string
Pass []string
Drop []string
}
type JolokiaClient interface {
@ -44,7 +41,6 @@ type Jolokia struct {
Context string
Servers []Server
Metrics []Metric
Tags map[string]string
}
func (j *Jolokia) SampleConfig() string {
@ -52,12 +48,8 @@ func (j *Jolokia) SampleConfig() string {
# This is the context root used to compose the jolokia url
context = "/jolokia/read"
# Tags added to each measurements
[jolokia.tags]
group = "as"
# List of servers exposing jolokia read service
[[plugins.jolokia.servers]]
[[inputs.jolokia.servers]]
name = "stable"
host = "192.168.103.2"
port = "8180"
@ -67,26 +59,9 @@ func (j *Jolokia) SampleConfig() string {
# List of metrics collected on above servers
# Each metric consists of a name, a jmx path and either a pass or drop slice attribute
# This collects all heap memory usage metrics
[[plugins.jolokia.metrics]]
[[inputs.jolokia.metrics]]
name = "heap_memory_usage"
jmx = "/java.lang:type=Memory/HeapMemoryUsage"
# This drops the 'committed' value from Eden space measurement
[[plugins.jolokia.metrics]]
name = "memory_eden"
jmx = "/java.lang:type=MemoryPool,name=PS Eden Space/Usage"
drop = [ "committed" ]
# This passes only DaemonThreadCount and ThreadCount
[[plugins.jolokia.metrics]]
name = "heap_threads"
jmx = "/java.lang:type=Threading"
pass = [
"DaemonThreadCount",
"ThreadCount"
]
`
}
@ -102,10 +77,6 @@ func (j *Jolokia) getAttr(requestUrl *url.URL) (map[string]interface{}, error) {
}
resp, err := j.jClient.MakeRequest(req)
if err != nil {
return nil, err
}
if err != nil {
return nil, err
}
@ -137,65 +108,22 @@ func (j *Jolokia) getAttr(requestUrl *url.URL) (map[string]interface{}, error) {
return jsonOut, nil
}
func (m *Metric) shouldPass(field string) bool {
if m.Pass != nil {
for _, pass := range m.Pass {
if strings.HasPrefix(field, pass) {
return true
}
}
return false
}
if m.Drop != nil {
for _, drop := range m.Drop {
if strings.HasPrefix(field, drop) {
return false
}
}
return true
}
return true
}
func (m *Metric) filterFields(fields map[string]interface{}) map[string]interface{} {
for field, _ := range fields {
if !m.shouldPass(field) {
delete(fields, field)
}
}
return fields
}
func (j *Jolokia) Gather(acc plugins.Accumulator) error {
func (j *Jolokia) Gather(acc inputs.Accumulator) error {
context := j.Context //"/jolokia/read"
servers := j.Servers
metrics := j.Metrics
tags := j.Tags
if tags == nil {
tags = map[string]string{}
}
tags := make(map[string]string)
for _, server := range servers {
tags["server"] = server.Name
tags["port"] = server.Port
tags["host"] = server.Host
fields := make(map[string]interface{})
for _, metric := range metrics {
measurement := metric.Name
jmxPath := metric.Jmx
tags["server"] = server.Name
tags["port"] = server.Port
tags["host"] = server.Host
// Prepare URL
requestUrl, err := url.Parse("http://" + server.Host + ":" +
server.Port + context + jmxPath)
@ -209,23 +137,27 @@ func (j *Jolokia) Gather(acc plugins.Accumulator) error {
out, _ := j.getAttr(requestUrl)
if values, ok := out["value"]; ok {
switch values.(type) {
switch t := values.(type) {
case map[string]interface{}:
acc.AddFields(measurement, metric.filterFields(values.(map[string]interface{})), tags)
for k, v := range t {
fields[measurement+"_"+k] = v
}
case interface{}:
acc.Add(measurement, values.(interface{}), tags)
fields[measurement] = t
}
} else {
fmt.Printf("Missing key 'value' in '%s' output response\n", requestUrl.String())
fmt.Printf("Missing key 'value' in '%s' output response\n",
requestUrl.String())
}
}
acc.AddFields("jolokia", fields, tags)
}
return nil
}
func init() {
plugins.Add("jolokia", func() plugins.Plugin {
inputs.Add("jolokia", func() inputs.Input {
return &Jolokia{jClient: &JolokiaClientImpl{client: &http.Client{}}}
})
}
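
With the hunk above, every metric for a server is folded into one `jolokia` measurement: a map-valued Jolokia `value` fans out into `<metric>_<key>` fields, while a scalar `value` becomes a single `<metric>` field. Here is a small stand-alone sketch of that type switch, using plain maps instead of the real accumulator; the fixture values follow the heap memory test below.

```go
package main

import "fmt"

// addJolokiaValue mimics the type switch in Gather: nested attribute maps
// become "<metric>_<key>" fields, scalars become a single "<metric>" field.
func addJolokiaValue(metric string, value interface{}, fields map[string]interface{}) {
	switch t := value.(type) {
	case map[string]interface{}:
		for k, v := range t {
			fields[metric+"_"+k] = v
		}
	default:
		fields[metric] = t
	}
}

func main() {
	fields := make(map[string]interface{})

	// Multi-value response, as in the heap_memory_usage test fixture.
	addJolokiaValue("heap_memory_usage", map[string]interface{}{
		"init":      67108864.0,
		"committed": 456130560.0,
		"max":       477626368.0,
		"used":      203288528.0,
	}, fields)

	// All four keys end up prefixed with the metric name, ready for a
	// single acc.AddFields("jolokia", fields, tags) call.
	fmt.Println(fields)
}
```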

View File

@ -48,7 +48,7 @@ const empty = ""
var Servers = []Server{Server{Name: "as1", Host: "127.0.0.1", Port: "8080"}}
var HeapMetric = Metric{Name: "heap_memory_usage", Jmx: "/java.lang:type=Memory/HeapMemoryUsage"}
var UsedHeapMetric = Metric{Name: "heap_memory_usage", Jmx: "/java.lang:type=Memory/HeapMemoryUsage", Pass: []string{"used"}}
var UsedHeapMetric = Metric{Name: "heap_memory_usage", Jmx: "/java.lang:type=Memory/HeapMemoryUsage"}
type jolokiaClientStub struct {
responseBody string
@ -79,7 +79,6 @@ func genJolokiaClientStub(response string, statusCode int, servers []Server, met
// Test that the proper values are ignored or collected
func TestHttpJsonMultiValue(t *testing.T) {
jolokia := genJolokiaClientStub(validMultiValueJSON, 200, Servers, []Metric{HeapMetric})
var acc testutil.Accumulator
@ -88,58 +87,28 @@ func TestHttpJsonMultiValue(t *testing.T) {
assert.Nil(t, err)
assert.Equal(t, 1, len(acc.Points))
assert.True(t, acc.CheckFieldsValue("heap_memory_usage", map[string]interface{}{"init": 67108864.0,
"committed": 456130560.0,
"max": 477626368.0,
"used": 203288528.0}))
}
// Test that the proper values are ignored or collected
func TestHttpJsonMultiValueWithPass(t *testing.T) {
jolokia := genJolokiaClientStub(validMultiValueJSON, 200, Servers, []Metric{UsedHeapMetric})
var acc testutil.Accumulator
err := jolokia.Gather(&acc)
assert.Nil(t, err)
assert.Equal(t, 1, len(acc.Points))
assert.True(t, acc.CheckFieldsValue("heap_memory_usage", map[string]interface{}{"used": 203288528.0}))
}
// Test that the proper values are ignored or collected
func TestHttpJsonMultiValueTags(t *testing.T) {
jolokia := genJolokiaClientStub(validMultiValueJSON, 200, Servers, []Metric{UsedHeapMetric})
var acc testutil.Accumulator
err := jolokia.Gather(&acc)
assert.Nil(t, err)
assert.Equal(t, 1, len(acc.Points))
assert.NoError(t, acc.ValidateTaggedFieldsValue("heap_memory_usage", map[string]interface{}{"used": 203288528.0}, map[string]string{"host": "127.0.0.1", "port": "8080", "server": "as1"}))
}
// Test that the proper values are ignored or collected
func TestHttpJsonSingleValueTags(t *testing.T) {
jolokia := genJolokiaClientStub(validSingleValueJSON, 200, Servers, []Metric{UsedHeapMetric})
var acc testutil.Accumulator
err := jolokia.Gather(&acc)
assert.Nil(t, err)
assert.Equal(t, 1, len(acc.Points))
assert.NoError(t, acc.ValidateTaggedFieldsValue("heap_memory_usage", map[string]interface{}{"value": 209274376.0}, map[string]string{"host": "127.0.0.1", "port": "8080", "server": "as1"}))
fields := map[string]interface{}{
"heap_memory_usage_init": 67108864.0,
"heap_memory_usage_committed": 456130560.0,
"heap_memory_usage_max": 477626368.0,
"heap_memory_usage_used": 203288528.0,
}
tags := map[string]string{
"host": "127.0.0.1",
"port": "8080",
"server": "as1",
}
acc.AssertContainsTaggedFields(t, "jolokia", fields, tags)
}
// Test that the proper values are ignored or collected
func TestHttpJsonOn404(t *testing.T) {
jolokia := genJolokiaClientStub(validMultiValueJSON, 404, Servers, []Metric{UsedHeapMetric})
jolokia := genJolokiaClientStub(validMultiValueJSON, 404, Servers,
[]Metric{UsedHeapMetric})
var acc testutil.Accumulator
acc.SetDebug(true)
err := jolokia.Gather(&acc)
assert.Nil(t, err)

View File

@ -6,7 +6,7 @@ import (
"sync"
"github.com/influxdb/influxdb/models"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/inputs"
"github.com/Shopify/sarama"
"github.com/wvanbergen/kafka/consumergroup"
@ -148,7 +148,7 @@ func (k *Kafka) Stop() {
}
}
func (k *Kafka) Gather(acc plugins.Accumulator) error {
func (k *Kafka) Gather(acc inputs.Accumulator) error {
k.Lock()
defer k.Unlock()
npoints := len(k.pointChan)
@ -160,7 +160,7 @@ func (k *Kafka) Gather(acc plugins.Accumulator) error {
}
func init() {
plugins.Add("kafka_consumer", func() plugins.Plugin {
inputs.Add("kafka_consumer", func() inputs.Input {
return &Kafka{}
})
}

View File

@ -85,7 +85,8 @@ func TestRunParserAndGather(t *testing.T) {
k.Gather(&acc)
assert.Equal(t, len(acc.Points), 1)
assert.True(t, acc.CheckValue("cpu_load_short", 23422.0))
acc.AssertContainsFields(t, "cpu_load_short",
map[string]interface{}{"value": float64(23422)})
}
func saramaMsg(val string) *sarama.ConsumerMessage {

View File

@ -3,7 +3,7 @@ package leofs
import (
"bufio"
"fmt"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/inputs"
"net/url"
"os/exec"
"strconv"
@ -146,7 +146,7 @@ func (l *LeoFS) Description() string {
return "Read metrics from a LeoFS Server via SNMP"
}
func (l *LeoFS) Gather(acc plugins.Accumulator) error {
func (l *LeoFS) Gather(acc inputs.Accumulator) error {
if len(l.Servers) == 0 {
l.gatherServer(defaultEndpoint, ServerTypeManagerMaster, acc)
return nil
@ -176,7 +176,7 @@ func (l *LeoFS) Gather(acc plugins.Accumulator) error {
return outerr
}
func (l *LeoFS) gatherServer(endpoint string, serverType ServerType, acc plugins.Accumulator) error {
func (l *LeoFS) gatherServer(endpoint string, serverType ServerType, acc inputs.Accumulator) error {
cmd := exec.Command("snmpwalk", "-v2c", "-cpublic", endpoint, oid)
stdout, err := cmd.StdoutPipe()
if err != nil {
@ -197,6 +197,8 @@ func (l *LeoFS) gatherServer(endpoint string, serverType ServerType, acc plugins
"node": nodeNameTrimmed,
}
i := 0
fields := make(map[string]interface{})
for scanner.Scan() {
key := KeyMapping[serverType][i]
val, err := retrieveTokenAfterColon(scanner.Text())
@ -207,9 +209,10 @@ func (l *LeoFS) gatherServer(endpoint string, serverType ServerType, acc plugins
if err != nil {
return fmt.Errorf("Unable to parse the value:%s, err:%s", val, err)
}
acc.Add(key, fVal, tags)
fields[key] = fVal
i++
}
acc.AddFields("leofs", fields, tags)
return nil
}
@ -222,7 +225,7 @@ func retrieveTokenAfterColon(line string) (string, error) {
}
func init() {
plugins.Add("leofs", func() plugins.Plugin {
inputs.Add("leofs", func() inputs.Input {
return &LeoFS{}
})
}
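
The leofs change above builds a single `leofs` field map while walking the snmpwalk output: each line's value (the token after the colon) is parsed as a float and stored under the key at the same position in `KeyMapping[serverType]`. A rough sketch of that per-line parsing follows; the OID lines and field names are made up for illustration only.

```go
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// retrieveTokenAfterColon keeps whatever follows the first ':' on an
// snmpwalk output line, mirroring the helper used by gatherServer.
func retrieveTokenAfterColon(line string) (string, error) {
	parts := strings.SplitN(line, ":", 2)
	if len(parts) != 2 {
		return "", fmt.Errorf("no colon in line: %q", line)
	}
	return strings.TrimSpace(parts[1]), nil
}

func main() {
	// Hypothetical snmpwalk output and the field names it maps to, in
	// order (the real plugin takes these from KeyMapping[serverType]).
	output := `iso.3.6.1.4.1.35450.15.1.0 = Gauge32: 1
iso.3.6.1.4.1.35450.15.2.0 = Gauge32: 2048
`
	keys := []string{"num_of_processes", "total_memory_usage"}

	fields := make(map[string]interface{})
	scanner := bufio.NewScanner(strings.NewReader(output))
	for i := 0; scanner.Scan() && i < len(keys); i++ {
		val, err := retrieveTokenAfterColon(scanner.Text())
		if err != nil {
			panic(err)
		}
		fVal, err := strconv.ParseFloat(val, 64)
		if err != nil {
			panic(err)
		}
		fields[keys[i]] = fVal
	}

	fmt.Println("leofs", fields)
}
```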

View File

@ -129,7 +129,6 @@ func buildFakeSNMPCmd(src string) {
}
func testMain(t *testing.T, code string, endpoint string, serverType ServerType) {
// Build the fake snmpwalk for test
src := makeFakeSNMPSrc(code)
defer os.Remove(src)
@ -145,6 +144,7 @@ func testMain(t *testing.T, code string, endpoint string, serverType ServerType)
}
var acc testutil.Accumulator
acc.SetDebug(true)
err := l.Gather(&acc)
require.NoError(t, err)
@ -152,7 +152,7 @@ func testMain(t *testing.T, code string, endpoint string, serverType ServerType)
floatMetrics := KeyMapping[serverType]
for _, metric := range floatMetrics {
assert.True(t, acc.HasFloatValue(metric), metric)
assert.True(t, acc.HasFloatField("leofs", metric), metric)
}
}

View File

@ -14,7 +14,7 @@ import (
"strings"
"github.com/influxdb/telegraf/internal"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/inputs"
)
// Lustre proc files can change between versions, so we want to future-proof
@ -22,6 +22,9 @@ import (
type Lustre2 struct {
Ost_procfiles []string
Mds_procfiles []string
// allFields maps an OST name to the metric fields associated with that OST
allFields map[string]map[string]interface{}
}
var sampleConfig = `
@ -126,7 +129,7 @@ var wanted_mds_fields = []*mapping{
},
}
func (l *Lustre2) GetLustreProcStats(fileglob string, wanted_fields []*mapping, acc plugins.Accumulator) error {
func (l *Lustre2) GetLustreProcStats(fileglob string, wanted_fields []*mapping, acc inputs.Accumulator) error {
files, err := filepath.Glob(fileglob)
if err != nil {
return err
@ -140,8 +143,11 @@ func (l *Lustre2) GetLustreProcStats(fileglob string, wanted_fields []*mapping,
*/
path := strings.Split(file, "/")
name := path[len(path)-2]
tags := map[string]string{
"name": name,
var fields map[string]interface{}
fields, ok := l.allFields[name]
if !ok {
fields = make(map[string]interface{})
l.allFields[name] = fields
}
lines, err := internal.ReadLines(file)
@ -150,18 +156,17 @@ func (l *Lustre2) GetLustreProcStats(fileglob string, wanted_fields []*mapping,
}
for _, line := range lines {
fields := strings.Fields(line)
parts := strings.Fields(line)
for _, wanted := range wanted_fields {
var data uint64
if fields[0] == wanted.inProc {
if parts[0] == wanted.inProc {
wanted_field := wanted.field
// if not set, assume field[1]. Shouldn't be field[0], as
// that's a string
if wanted_field == 0 {
wanted_field = 1
}
data, err = strconv.ParseUint((fields[wanted_field]), 10, 64)
data, err = strconv.ParseUint((parts[wanted_field]), 10, 64)
if err != nil {
return err
}
@ -169,8 +174,7 @@ func (l *Lustre2) GetLustreProcStats(fileglob string, wanted_fields []*mapping,
if wanted.reportAs != "" {
report_name = wanted.reportAs
}
acc.Add(report_name, data, tags)
fields[report_name] = data
}
}
}
@ -189,16 +193,19 @@ func (l *Lustre2) Description() string {
}
// Gather reads stats from all lustre targets
func (l *Lustre2) Gather(acc plugins.Accumulator) error {
func (l *Lustre2) Gather(acc inputs.Accumulator) error {
l.allFields = make(map[string]map[string]interface{})
if len(l.Ost_procfiles) == 0 {
// read/write bytes are in obdfilter/<ost_name>/stats
err := l.GetLustreProcStats("/proc/fs/lustre/obdfilter/*/stats", wanted_ost_fields, acc)
err := l.GetLustreProcStats("/proc/fs/lustre/obdfilter/*/stats",
wanted_ost_fields, acc)
if err != nil {
return err
}
// cache counters are in osd-ldiskfs/<ost_name>/stats
err = l.GetLustreProcStats("/proc/fs/lustre/osd-ldiskfs/*/stats", wanted_ost_fields, acc)
err = l.GetLustreProcStats("/proc/fs/lustre/osd-ldiskfs/*/stats",
wanted_ost_fields, acc)
if err != nil {
return err
}
@ -206,7 +213,8 @@ func (l *Lustre2) Gather(acc plugins.Accumulator) error {
if len(l.Mds_procfiles) == 0 {
// Metadata server stats
err := l.GetLustreProcStats("/proc/fs/lustre/mdt/*/md_stats", wanted_mds_fields, acc)
err := l.GetLustreProcStats("/proc/fs/lustre/mdt/*/md_stats",
wanted_mds_fields, acc)
if err != nil {
return err
}
@ -225,11 +233,18 @@ func (l *Lustre2) Gather(acc plugins.Accumulator) error {
}
}
for name, fields := range l.allFields {
tags := map[string]string{
"name": name,
}
acc.AddFields("lustre2", fields, tags)
}
return nil
}
func init() {
plugins.Add("lustre2", func() plugins.Plugin {
inputs.Add("lustre2", func() inputs.Input {
return &Lustre2{}
})
}
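
The heart of the lustre2 change is the new `allFields` map: counters from every globbed proc file are merged into the field map for their OST/MDT name, and `Gather` then flushes one `lustre2` measurement per target. A compact sketch of that merge-then-flush flow, with plain maps standing in for the accumulator and counter values borrowed from the test fixture:

```go
package main

import "fmt"

func main() {
	// allFields maps a target name (OST/MDT) to its accumulated fields,
	// mirroring Lustre2.allFields.
	allFields := make(map[string]map[string]interface{})

	// merge folds counters from one proc file into the target's field map,
	// creating the map on first use, as GetLustreProcStats does.
	merge := func(name string, counters map[string]uint64) {
		fields, ok := allFields[name]
		if !ok {
			fields = make(map[string]interface{})
			allFields[name] = fields
		}
		for k, v := range counters {
			fields[k] = v
		}
	}

	// Two different proc files (e.g. obdfilter and osd-ldiskfs stats)
	// contribute to the same hypothetical target "OST0001".
	merge("OST0001", map[string]uint64{"read_bytes": 78026117632000, "write_bytes": 15201500833981})
	merge("OST0001", map[string]uint64{"cache_hit": 7393729777, "cache_miss": 11653333250})

	// The flush loop at the end of Gather: one measurement per target,
	// tagged with its name.
	for name, fields := range allFields {
		tags := map[string]string{"name": name}
		fmt.Println("lustre2", tags, fields)
	}
}
```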

View File

@ -6,7 +6,6 @@ import (
"testing"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@ -58,11 +57,6 @@ samedir_rename 259625 samples [reqs]
crossdir_rename 369571 samples [reqs]
`
type metrics struct {
name string
value uint64
}
func TestLustre2GeneratesMetrics(t *testing.T) {
tempdir := os.TempDir() + "/telegraf/proc/fs/lustre/"
@ -103,41 +97,33 @@ func TestLustre2GeneratesMetrics(t *testing.T) {
"name": ost_name,
}
intMetrics := []*metrics{
{
name: "write_bytes",
value: 15201500833981,
},
{
name: "read_bytes",
value: 78026117632000,
},
{
name: "write_calls",
value: 71893382,
},
{
name: "read_calls",
value: 203238095,
},
{
name: "cache_hit",
value: 7393729777,
},
{
name: "cache_access",
value: 19047063027,
},
{
name: "cache_miss",
value: 11653333250,
},
fields := map[string]interface{}{
"cache_access": uint64(19047063027),
"cache_hit": uint64(7393729777),
"cache_miss": uint64(11653333250),
"close": uint64(873243496),
"crossdir_rename": uint64(369571),
"getattr": uint64(1503663097),
"getxattr": uint64(6145349681),
"link": uint64(445),
"mkdir": uint64(705499),
"mknod": uint64(349042),
"open": uint64(1024577037),
"read_bytes": uint64(78026117632000),
"read_calls": uint64(203238095),
"rename": uint64(629196),
"rmdir": uint64(227434),
"samedir_rename": uint64(259625),
"setattr": uint64(1898364),
"setxattr": uint64(83969),
"statfs": uint64(2916320),
"sync": uint64(434081),
"unlink": uint64(3549417),
"write_bytes": uint64(15201500833981),
"write_calls": uint64(71893382),
}
for _, metric := range intMetrics {
assert.True(t, acc.HasUIntValue(metric.name), metric.name)
assert.True(t, acc.CheckTaggedValue(metric.name, metric.value, tags))
}
acc.AssertContainsTaggedFields(t, "lustre2", fields, tags)
err = os.RemoveAll(os.TempDir() + "/telegraf")
require.NoError(t, err)

View File

@ -0,0 +1,116 @@
package mailchimp
import (
"fmt"
"time"
"github.com/influxdb/telegraf/plugins/inputs"
)
type MailChimp struct {
api *ChimpAPI
ApiKey string
DaysOld int
CampaignId string
}
var sampleConfig = `
# MailChimp API key
# get from https://admin.mailchimp.com/account/api/
api_key = "" # required
# Reports for campaigns sent more than days_old ago will not be collected.
# 0 means collect all.
days_old = 0
# Campaign ID to get; if empty, gets all campaigns. This option overrides days_old
# campaign_id = ""
`
func (m *MailChimp) SampleConfig() string {
return sampleConfig
}
func (m *MailChimp) Description() string {
return "Gathers metrics from the /3.0/reports MailChimp API"
}
func (m *MailChimp) Gather(acc inputs.Accumulator) error {
if m.api == nil {
m.api = NewChimpAPI(m.ApiKey)
}
m.api.Debug = false
if m.CampaignId == "" {
since := ""
if m.DaysOld > 0 {
now := time.Now()
d, _ := time.ParseDuration(fmt.Sprintf("%dh", 24*m.DaysOld))
since = now.Add(-d).Format(time.RFC3339)
}
reports, err := m.api.GetReports(ReportsParams{
SinceSendTime: since,
})
if err != nil {
return err
}
now := time.Now()
for _, report := range reports.Reports {
gatherReport(acc, report, now)
}
} else {
report, err := m.api.GetReport(m.CampaignId)
if err != nil {
return err
}
now := time.Now()
gatherReport(acc, report, now)
}
return nil
}
func gatherReport(acc inputs.Accumulator, report Report, now time.Time) {
tags := make(map[string]string)
tags["id"] = report.ID
tags["campaign_title"] = report.CampaignTitle
fields := map[string]interface{}{
"emails_sent": report.EmailsSent,
"abuse_reports": report.AbuseReports,
"unsubscribed": report.Unsubscribed,
"hard_bounces": report.Bounces.HardBounces,
"soft_bounces": report.Bounces.SoftBounces,
"syntax_errors": report.Bounces.SyntaxErrors,
"forwards_count": report.Forwards.ForwardsCount,
"forwards_opens": report.Forwards.ForwardsOpens,
"opens_total": report.Opens.OpensTotal,
"unique_opens": report.Opens.UniqueOpens,
"open_rate": report.Opens.OpenRate,
"clicks_total": report.Clicks.ClicksTotal,
"unique_clicks": report.Clicks.UniqueClicks,
"unique_subscriber_clicks": report.Clicks.UniqueSubscriberClicks,
"click_rate": report.Clicks.ClickRate,
"facebook_recipient_likes": report.FacebookLikes.RecipientLikes,
"facebook_unique_likes": report.FacebookLikes.UniqueLikes,
"facebook_likes": report.FacebookLikes.FacebookLikes,
"industry_type": report.IndustryStats.Type,
"industry_open_rate": report.IndustryStats.OpenRate,
"industry_click_rate": report.IndustryStats.ClickRate,
"industry_bounce_rate": report.IndustryStats.BounceRate,
"industry_unopen_rate": report.IndustryStats.UnopenRate,
"industry_unsub_rate": report.IndustryStats.UnsubRate,
"industry_abuse_rate": report.IndustryStats.AbuseRate,
"list_stats_sub_rate": report.ListStats.SubRate,
"list_stats_unsub_rate": report.ListStats.UnsubRate,
"list_stats_open_rate": report.ListStats.OpenRate,
"list_stats_click_rate": report.ListStats.ClickRate,
}
acc.AddFields("mailchimp", fields, tags, now)
}
func init() {
inputs.Add("mailchimp", func() inputs.Input {
return &MailChimp{}
})
}
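
The new mailchimp input turns `days_old` into an RFC3339 `since` timestamp before querying the reports endpoint, as in the `Gather` body above. A tiny sketch of just that conversion, with a made-up `days_old` value:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	daysOld := 7 // hypothetical days_old setting

	since := ""
	if daysOld > 0 {
		now := time.Now()
		// 24h per day, formatted the same way Gather builds the duration.
		d, _ := time.ParseDuration(fmt.Sprintf("%dh", 24*daysOld))
		since = now.Add(-d).Format(time.RFC3339)
	}

	// since is passed as ReportsParams.SinceSendTime, so only campaigns
	// sent in the last seven days are reported on.
	fmt.Println("since_send_time =", since)
}
```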

View File

@ -9,7 +9,6 @@ import (
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@ -42,67 +41,38 @@ func TestMailChimpGatherReports(t *testing.T) {
tags["id"] = "42694e9e57"
tags["campaign_title"] = "Freddie's Jokes Vol. 1"
testInts := []struct {
measurement string
value int
}{
{"emails_sent", 200},
{"abuse_reports", 0},
{"unsubscribed", 2},
{"hard_bounces", 0},
{"soft_bounces", 2},
{"syntax_errors", 0},
{"forwards_count", 0},
{"forwards_opens", 0},
{"opens_total", 186},
{"unique_opens", 100},
{"clicks_total", 42},
{"unique_clicks", 400},
{"unique_subscriber_clicks", 42},
{"facebook_recipient_likes", 5},
{"facebook_unique_likes", 8},
{"facebook_likes", 42},
}
for _, test := range testInts {
assert.True(t, acc.CheckTaggedValue(test.measurement, test.value, tags),
fmt.Sprintf("Measurement: %v, value: %v, tags: %v not found",
test.measurement, test.value, tags))
}
testFloats := []struct {
measurement string
value float64
}{
{"open_rate", 42},
{"click_rate", 42},
{"industry_open_rate", 0.17076777144396},
{"industry_click_rate", 0.027431311866951},
{"industry_bounce_rate", 0.0063767751251474},
{"industry_unopen_rate", 0.82285545343089},
{"industry_unsub_rate", 0.001436957032815},
{"industry_abuse_rate", 0.00021111996110887},
{"list_stats_sub_rate", 10},
{"list_stats_unsub_rate", 20},
{"list_stats_open_rate", 42},
{"list_stats_click_rate", 42},
}
for _, test := range testFloats {
assert.True(t, acc.CheckTaggedValue(test.measurement, test.value, tags),
fmt.Sprintf("Measurement: %v, value: %v, tags: %v not found",
test.measurement, test.value, tags))
}
testStrings := []struct {
measurement string
value string
}{
{"industry_type", "Social Networks and Online Communities"},
}
for _, test := range testStrings {
assert.True(t, acc.CheckTaggedValue(test.measurement, test.value, tags),
fmt.Sprintf("Measurement: %v, value: %v, tags: %v not found",
test.measurement, test.value, tags))
fields := map[string]interface{}{
"emails_sent": int(200),
"abuse_reports": int(0),
"unsubscribed": int(2),
"hard_bounces": int(0),
"soft_bounces": int(2),
"syntax_errors": int(0),
"forwards_count": int(0),
"forwards_opens": int(0),
"opens_total": int(186),
"unique_opens": int(100),
"clicks_total": int(42),
"unique_clicks": int(400),
"unique_subscriber_clicks": int(42),
"facebook_recipient_likes": int(5),
"facebook_unique_likes": int(8),
"facebook_likes": int(42),
"open_rate": float64(42),
"click_rate": float64(42),
"industry_open_rate": float64(0.17076777144396),
"industry_click_rate": float64(0.027431311866951),
"industry_bounce_rate": float64(0.0063767751251474),
"industry_unopen_rate": float64(0.82285545343089),
"industry_unsub_rate": float64(0.001436957032815),
"industry_abuse_rate": float64(0.00021111996110887),
"list_stats_sub_rate": float64(10),
"list_stats_unsub_rate": float64(20),
"list_stats_open_rate": float64(42),
"list_stats_click_rate": float64(42),
"industry_type": "Social Networks and Online Communities",
}
acc.AssertContainsTaggedFields(t, "mailchimp", fields, tags)
}
func TestMailChimpGatherReport(t *testing.T) {
@ -135,67 +105,39 @@ func TestMailChimpGatherReport(t *testing.T) {
tags["id"] = "42694e9e57"
tags["campaign_title"] = "Freddie's Jokes Vol. 1"
testInts := []struct {
measurement string
value int
}{
{"emails_sent", 200},
{"abuse_reports", 0},
{"unsubscribed", 2},
{"hard_bounces", 0},
{"soft_bounces", 2},
{"syntax_errors", 0},
{"forwards_count", 0},
{"forwards_opens", 0},
{"opens_total", 186},
{"unique_opens", 100},
{"clicks_total", 42},
{"unique_clicks", 400},
{"unique_subscriber_clicks", 42},
{"facebook_recipient_likes", 5},
{"facebook_unique_likes", 8},
{"facebook_likes", 42},
}
for _, test := range testInts {
assert.True(t, acc.CheckTaggedValue(test.measurement, test.value, tags),
fmt.Sprintf("Measurement: %v, value: %v, tags: %v not found",
test.measurement, test.value, tags))
fields := map[string]interface{}{
"emails_sent": int(200),
"abuse_reports": int(0),
"unsubscribed": int(2),
"hard_bounces": int(0),
"soft_bounces": int(2),
"syntax_errors": int(0),
"forwards_count": int(0),
"forwards_opens": int(0),
"opens_total": int(186),
"unique_opens": int(100),
"clicks_total": int(42),
"unique_clicks": int(400),
"unique_subscriber_clicks": int(42),
"facebook_recipient_likes": int(5),
"facebook_unique_likes": int(8),
"facebook_likes": int(42),
"open_rate": float64(42),
"click_rate": float64(42),
"industry_open_rate": float64(0.17076777144396),
"industry_click_rate": float64(0.027431311866951),
"industry_bounce_rate": float64(0.0063767751251474),
"industry_unopen_rate": float64(0.82285545343089),
"industry_unsub_rate": float64(0.001436957032815),
"industry_abuse_rate": float64(0.00021111996110887),
"list_stats_sub_rate": float64(10),
"list_stats_unsub_rate": float64(20),
"list_stats_open_rate": float64(42),
"list_stats_click_rate": float64(42),
"industry_type": "Social Networks and Online Communities",
}
acc.AssertContainsTaggedFields(t, "mailchimp", fields, tags)
testFloats := []struct {
measurement string
value float64
}{
{"open_rate", 42},
{"click_rate", 42},
{"industry_open_rate", 0.17076777144396},
{"industry_click_rate", 0.027431311866951},
{"industry_bounce_rate", 0.0063767751251474},
{"industry_unopen_rate", 0.82285545343089},
{"industry_unsub_rate", 0.001436957032815},
{"industry_abuse_rate", 0.00021111996110887},
{"list_stats_sub_rate", 10},
{"list_stats_unsub_rate", 20},
{"list_stats_open_rate", 42},
{"list_stats_click_rate", 42},
}
for _, test := range testFloats {
assert.True(t, acc.CheckTaggedValue(test.measurement, test.value, tags),
fmt.Sprintf("Measurement: %v, value: %v, tags: %v not found",
test.measurement, test.value, tags))
}
testStrings := []struct {
measurement string
value string
}{
{"industry_type", "Social Networks and Online Communities"},
}
for _, test := range testStrings {
assert.True(t, acc.CheckTaggedValue(test.measurement, test.value, tags),
fmt.Sprintf("Measurement: %v, value: %v, tags: %v not found",
test.measurement, test.value, tags))
}
}
func TestMailChimpGatherError(t *testing.T) {

View File

@ -8,7 +8,7 @@ import (
"strconv"
"time"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/inputs"
)
// Memcached is a memcached plugin
@ -69,7 +69,7 @@ func (m *Memcached) Description() string {
}
// Gather reads stats from all configured servers and accumulates stats
func (m *Memcached) Gather(acc plugins.Accumulator) error {
func (m *Memcached) Gather(acc inputs.Accumulator) error {
if len(m.Servers) == 0 && len(m.UnixSockets) == 0 {
return m.gatherServer(":11211", false, acc)
}
@ -92,7 +92,7 @@ func (m *Memcached) Gather(acc plugins.Accumulator) error {
func (m *Memcached) gatherServer(
address string,
unix bool,
acc plugins.Accumulator,
acc inputs.Accumulator,
) error {
var conn net.Conn
if unix {
@ -137,16 +137,18 @@ func (m *Memcached) gatherServer(
tags := map[string]string{"server": address}
// Process values
fields := make(map[string]interface{})
for _, key := range sendMetrics {
if value, ok := values[key]; ok {
// Most values are integers; keep the raw string otherwise
if iValue, errParse := strconv.ParseInt(value, 10, 64); errParse != nil {
acc.Add(key, value, tags)
if iValue, errParse := strconv.ParseInt(value, 10, 64); errParse == nil {
fields[key] = iValue
} else {
acc.Add(key, iValue, tags)
fields[key] = value
}
}
}
acc.AddFields("memcached", fields, tags)
return nil
}
@ -176,7 +178,7 @@ func parseResponse(r *bufio.Reader) (map[string]string, error) {
}
func init() {
plugins.Add("memcached", func() plugins.Plugin {
inputs.Add("memcached", func() inputs.Input {
return &Memcached{}
})
}
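
The memcached hunk inverts the `ParseInt` check and batches everything into one `memcached` measurement: parseable stats become int64 fields, anything else keeps its raw string value. A stand-alone sketch of that conversion over a hypothetical handful of stats:

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	// Raw values as returned by the memcached "stats" command, keyed by
	// stat name (a hypothetical subset).
	values := map[string]string{
		"get_hits": "1245",
		"uptime":   "3600",
		"version":  "1.4.25", // not an integer, kept as a string field
		"threads":  "4",
	}

	fields := make(map[string]interface{})
	for key, value := range values {
		// Numeric stats become int64 fields; fall back to the raw string.
		if iValue, err := strconv.ParseInt(value, 10, 64); err == nil {
			fields[key] = iValue
		} else {
			fields[key] = value
		}
	}

	tags := map[string]string{"server": "localhost:11211"}
	fmt.Println("memcached", tags, fields)
}
```

The test change below switches to `HasIntField("memcached", metric)` to match this single-measurement shape.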

View File

@ -32,7 +32,7 @@ func TestMemcachedGeneratesMetrics(t *testing.T) {
"bytes_read", "bytes_written", "threads", "conn_yields"}
for _, metric := range intMetrics {
assert.True(t, acc.HasIntValue(metric), metric)
assert.True(t, acc.HasIntField("memcached", metric), metric)
}
}

View File

@ -1,4 +1,4 @@
package plugins
package inputs
import "github.com/stretchr/testify/mock"

View File

@ -9,7 +9,7 @@ import (
"sync"
"time"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/inputs"
"gopkg.in/mgo.v2"
)
@ -45,7 +45,7 @@ var localhost = &url.URL{Host: "127.0.0.1:27017"}
// Reads stats from all configured servers and accumulates stats.
// Returns one of the errors encountered while gathering stats (if any).
func (m *MongoDB) Gather(acc plugins.Accumulator) error {
func (m *MongoDB) Gather(acc inputs.Accumulator) error {
if len(m.Servers) == 0 {
m.gatherServer(m.getMongoServer(localhost), acc)
return nil
@ -88,7 +88,7 @@ func (m *MongoDB) getMongoServer(url *url.URL) *Server {
return m.mongos[url.Host]
}
func (m *MongoDB) gatherServer(server *Server, acc plugins.Accumulator) error {
func (m *MongoDB) gatherServer(server *Server, acc inputs.Accumulator) error {
if server.Session == nil {
var dialAddrs []string
if server.Url.User != nil {
@ -98,7 +98,8 @@ func (m *MongoDB) gatherServer(server *Server, acc plugins.Accumulator) error {
}
dialInfo, err := mgo.ParseURL(dialAddrs[0])
if err != nil {
return fmt.Errorf("Unable to parse URL (%s), %s\n", dialAddrs[0], err.Error())
return fmt.Errorf("Unable to parse URL (%s), %s\n",
dialAddrs[0], err.Error())
}
dialInfo.Direct = true
dialInfo.Timeout = time.Duration(10) * time.Second
@ -137,7 +138,7 @@ func (m *MongoDB) gatherServer(server *Server, acc plugins.Accumulator) error {
}
func init() {
plugins.Add("mongodb", func() plugins.Plugin {
inputs.Add("mongodb", func() inputs.Input {
return &MongoDB{
mongos: make(map[string]*Server),
}

View File

@ -5,11 +5,12 @@ import (
"reflect"
"strconv"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/inputs"
)
type MongodbData struct {
StatLine *StatLine
Fields map[string]interface{}
Tags map[string]string
}
@ -20,6 +21,7 @@ func NewMongodbData(statLine *StatLine, tags map[string]string) *MongodbData {
return &MongodbData{
StatLine: statLine,
Tags: tags,
Fields: make(map[string]interface{}),
}
}
@ -63,38 +65,44 @@ var WiredTigerStats = map[string]string{
"percent_cache_used": "CacheUsedPercent",
}
func (d *MongodbData) AddDefaultStats(acc plugins.Accumulator) {
func (d *MongodbData) AddDefaultStats() {
statLine := reflect.ValueOf(d.StatLine).Elem()
d.addStat(acc, statLine, DefaultStats)
d.addStat(statLine, DefaultStats)
if d.StatLine.NodeType != "" {
d.addStat(acc, statLine, DefaultReplStats)
d.addStat(statLine, DefaultReplStats)
}
if d.StatLine.StorageEngine == "mmapv1" {
d.addStat(acc, statLine, MmapStats)
d.addStat(statLine, MmapStats)
} else if d.StatLine.StorageEngine == "wiredTiger" {
for key, value := range WiredTigerStats {
val := statLine.FieldByName(value).Interface()
percentVal := fmt.Sprintf("%.1f", val.(float64)*100)
floatVal, _ := strconv.ParseFloat(percentVal, 64)
d.add(acc, key, floatVal)
d.add(key, floatVal)
}
}
}
func (d *MongodbData) addStat(acc plugins.Accumulator, statLine reflect.Value, stats map[string]string) {
func (d *MongodbData) addStat(
statLine reflect.Value,
stats map[string]string,
) {
for key, value := range stats {
val := statLine.FieldByName(value).Interface()
d.add(acc, key, val)
d.add(key, val)
}
}
func (d *MongodbData) add(acc plugins.Accumulator, key string, val interface{}) {
func (d *MongodbData) add(key string, val interface{}) {
d.Fields[key] = val
}
func (d *MongodbData) flush(acc inputs.Accumulator) {
acc.AddFields(
key,
map[string]interface{}{
"value": val,
},
"mongodb",
d.Fields,
d.Tags,
d.StatLine.Time,
)
d.Fields = make(map[string]interface{})
}
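
`MongodbData` no longer writes a point per stat key: `AddDefaultStats` now only fills `d.Fields`, and the new `flush` emits them as one `mongodb` measurement stamped with the stat line's time. A rough sketch of the same accumulate-then-flush shape, with a trivial print-only stand-in for `inputs.Accumulator`:

```go
package main

import (
	"fmt"
	"time"
)

// printAccumulator is a stand-in for inputs.Accumulator that just prints
// what would be emitted.
type printAccumulator struct{}

func (printAccumulator) AddFields(measurement string, fields map[string]interface{},
	tags map[string]string, t ...time.Time) {
	fmt.Println(measurement, tags, fields, t)
}

// mongodbData mirrors the accumulate-then-flush shape of MongodbData.
type mongodbData struct {
	fields map[string]interface{}
	tags   map[string]string
	when   time.Time
}

func (d *mongodbData) add(key string, val interface{}) { d.fields[key] = val }

func (d *mongodbData) flush(acc printAccumulator) {
	acc.AddFields("mongodb", d.fields, d.tags, d.when)
	d.fields = make(map[string]interface{}) // reset for the next interval
}

func main() {
	d := &mongodbData{
		fields: make(map[string]interface{}),
		tags:   map[string]string{"hostname": "127.0.0.1:27017"},
		when:   time.Now(),
	}
	d.add("inserts_per_sec", int64(12))
	d.add("queries_per_sec", int64(34))
	d.flush(printAccumulator{})
}
```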

View File

@ -6,7 +6,6 @@ import (
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
var tags = make(map[string]string)
@ -37,10 +36,11 @@ func TestAddNonReplStats(t *testing.T) {
)
var acc testutil.Accumulator
d.AddDefaultStats(&acc)
d.AddDefaultStats()
d.flush(&acc)
for key, _ := range DefaultStats {
assert.True(t, acc.HasIntValue(key))
assert.True(t, acc.HasIntField("mongodb", key))
}
}
@ -57,10 +57,11 @@ func TestAddReplStats(t *testing.T) {
var acc testutil.Accumulator
d.AddDefaultStats(&acc)
d.AddDefaultStats()
d.flush(&acc)
for key, _ := range MmapStats {
assert.True(t, acc.HasIntValue(key))
assert.True(t, acc.HasIntField("mongodb", key))
}
}
@ -76,10 +77,11 @@ func TestAddWiredTigerStats(t *testing.T) {
var acc testutil.Accumulator
d.AddDefaultStats(&acc)
d.AddDefaultStats()
d.flush(&acc)
for key, _ := range WiredTigerStats {
assert.True(t, acc.HasFloatValue(key))
assert.True(t, acc.HasFloatField("mongodb", key))
}
}
@ -95,17 +97,37 @@ func TestStateTag(t *testing.T) {
tags,
)
stats := []string{"inserts_per_sec", "queries_per_sec"}
stateTags := make(map[string]string)
stateTags["state"] = "PRI"
var acc testutil.Accumulator
d.AddDefaultStats(&acc)
for _, key := range stats {
err := acc.ValidateTaggedValue(key, int64(0), stateTags)
require.NoError(t, err)
d.AddDefaultStats()
d.flush(&acc)
fields := map[string]interface{}{
"active_reads": int64(0),
"active_writes": int64(0),
"commands_per_sec": int64(0),
"deletes_per_sec": int64(0),
"flushes_per_sec": int64(0),
"getmores_per_sec": int64(0),
"inserts_per_sec": int64(0),
"member_status": "PRI",
"net_in_bytes": int64(0),
"net_out_bytes": int64(0),
"open_connections": int64(0),
"queries_per_sec": int64(0),
"queued_reads": int64(0),
"queued_writes": int64(0),
"repl_commands_per_sec": int64(0),
"repl_deletes_per_sec": int64(0),
"repl_getmores_per_sec": int64(0),
"repl_inserts_per_sec": int64(0),
"repl_queries_per_sec": int64(0),
"repl_updates_per_sec": int64(0),
"resident_megabytes": int64(0),
"updates_per_sec": int64(0),
"vsize_megabytes": int64(0),
}
acc.AssertContainsTaggedFields(t, "mongodb", fields, stateTags)
}

View File

@ -4,7 +4,7 @@ import (
"net/url"
"time"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/inputs"
"gopkg.in/mgo.v2"
"gopkg.in/mgo.v2/bson"
)
@ -21,7 +21,7 @@ func (s *Server) getDefaultTags() map[string]string {
return tags
}
func (s *Server) gatherData(acc plugins.Accumulator) error {
func (s *Server) gatherData(acc inputs.Accumulator) error {
s.Session.SetMode(mgo.Eventual, true)
s.Session.SetSocketTimeout(0)
result := &ServerStatus{}
@ -44,7 +44,8 @@ func (s *Server) gatherData(acc plugins.Accumulator) error {
NewStatLine(*s.lastResult, *result, s.Url.Host, true, durationInSeconds),
s.getDefaultTags(),
)
data.AddDefaultStats(acc)
data.AddDefaultStats()
data.flush(acc)
}
return nil
}

View File

@ -6,7 +6,7 @@ import (
"strings"
_ "github.com/go-sql-driver/mysql"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/inputs"
)
type Mysql struct {
@ -35,7 +35,7 @@ func (m *Mysql) Description() string {
var localhost = ""
func (m *Mysql) Gather(acc plugins.Accumulator) error {
func (m *Mysql) Gather(acc inputs.Accumulator) error {
if len(m.Servers) == 0 {
// if we can't get stats in this case, that's fine, don't report
// an error.
@ -113,7 +113,7 @@ var mappings = []*mapping{
},
}
func (m *Mysql) gatherServer(serv string, acc plugins.Accumulator) error {
func (m *Mysql) gatherServer(serv string, acc inputs.Accumulator) error {
// If user forgot the '/', add it
if strings.HasSuffix(serv, ")") {
serv = serv + "/"
@ -138,6 +138,8 @@ func (m *Mysql) gatherServer(serv string, acc plugins.Accumulator) error {
if err != nil {
servtag = "localhost"
}
tags := map[string]string{"server": servtag}
fields := make(map[string]interface{})
for rows.Next() {
var name string
var val interface{}
@ -149,12 +151,10 @@ func (m *Mysql) gatherServer(serv string, acc plugins.Accumulator) error {
var found bool
tags := map[string]string{"server": servtag}
for _, mapped := range mappings {
if strings.HasPrefix(name, mapped.onServer) {
i, _ := strconv.Atoi(string(val.([]byte)))
acc.Add(mapped.inExport+name[len(mapped.onServer):], i, tags)
fields[mapped.inExport+name[len(mapped.onServer):]] = i
found = true
}
}
@ -170,16 +170,17 @@ func (m *Mysql) gatherServer(serv string, acc plugins.Accumulator) error {
return err
}
acc.Add("queries", i, tags)
fields["queries"] = i
case "Slow_queries":
i, err := strconv.ParseInt(string(val.([]byte)), 10, 64)
if err != nil {
return err
}
acc.Add("slow_queries", i, tags)
fields["slow_queries"] = i
}
}
acc.AddFields("mysql", fields, tags)
conn_rows, err := db.Query("SELECT user, sum(1) FROM INFORMATION_SCHEMA.PROCESSLIST GROUP BY user")
@ -193,18 +194,20 @@ func (m *Mysql) gatherServer(serv string, acc plugins.Accumulator) error {
}
tags := map[string]string{"server": servtag, "user": user}
fields := make(map[string]interface{})
if err != nil {
return err
}
acc.Add("connections", connections, tags)
fields["connections"] = connections
acc.AddFields("mysql_users", fields, tags)
}
return nil
}
func init() {
plugins.Add("mysql", func() plugins.Plugin {
inputs.Add("mysql", func() inputs.Input {
return &Mysql{}
})
}
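
The mysql hunks collapse the per-row `acc.Add` calls into two batches: global status rows whose names match a `mappings` prefix land in a single `mysql` measurement, and per-user connection counts go to `mysql_users`. Here is a small sketch of the prefix-mapping step over a couple of hypothetical status rows; only the `Bytes_` and `Threads_` prefixes are shown.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// mapping mirrors the plugin's prefix translation: a SHOW GLOBAL STATUS
// prefix on the server side and the field prefix it is exported under.
type mapping struct {
	onServer string
	inExport string
}

func main() {
	mappings := []mapping{
		{onServer: "Bytes_", inExport: "bytes_"},
		{onServer: "Threads_", inExport: "threads_"},
	}

	// Hypothetical rows from SHOW GLOBAL STATUS: name -> value.
	rows := map[string]string{
		"Bytes_received":    "1024",
		"Threads_connected": "8",
		"Uptime":            "4711", // no mapping, skipped in this sketch
	}

	fields := make(map[string]interface{})
	for name, val := range rows {
		for _, m := range mappings {
			if strings.HasPrefix(name, m.onServer) {
				i, _ := strconv.Atoi(val)
				fields[m.inExport+name[len(m.onServer):]] = i
			}
		}
	}

	tags := map[string]string{"server": "localhost"}
	fmt.Println("mysql", tags, fields) // bytes_received and threads_connected
}
```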

View File

@ -2,7 +2,6 @@ package mysql
import (
"fmt"
"strings"
"testing"
"github.com/influxdb/telegraf/testutil"
@ -10,64 +9,6 @@ import (
"github.com/stretchr/testify/require"
)
func TestMysqlGeneratesMetrics(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
m := &Mysql{
Servers: []string{fmt.Sprintf("root@tcp(%s:3306)/", testutil.GetLocalHost())},
}
var acc testutil.Accumulator
err := m.Gather(&acc)
require.NoError(t, err)
prefixes := []struct {
prefix string
count int
}{
{"commands", 139},
{"handler", 16},
{"bytes", 2},
{"innodb", 46},
{"threads", 4},
{"aborted", 2},
{"created", 3},
{"key", 7},
{"open", 7},
{"opened", 3},
{"qcache", 8},
{"table", 1},
}
intMetrics := []string{
"queries",
"slow_queries",
"connections",
}
for _, prefix := range prefixes {
var count int
for _, p := range acc.Points {
if strings.HasPrefix(p.Measurement, prefix.prefix) {
count++
}
}
if prefix.count > count {
t.Errorf("Expected less than %d measurements with prefix %s, got %d",
count, prefix.prefix, prefix.count)
}
}
for _, metric := range intMetrics {
assert.True(t, acc.HasIntValue(metric))
}
}
func TestMysqlDefaultsToLocal(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
@ -82,7 +23,7 @@ func TestMysqlDefaultsToLocal(t *testing.T) {
err := m.Gather(&acc)
require.NoError(t, err)
assert.True(t, len(acc.Points) > 0)
assert.True(t, acc.HasMeasurement("mysql"))
}
func TestMysqlParseDSN(t *testing.T) {

View File

@ -11,7 +11,7 @@ import (
"sync"
"time"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/inputs"
)
type Nginx struct {
@ -31,7 +31,7 @@ func (n *Nginx) Description() string {
return "Read Nginx's basic status information (ngx_http_stub_status_module)"
}
func (n *Nginx) Gather(acc plugins.Accumulator) error {
func (n *Nginx) Gather(acc inputs.Accumulator) error {
var wg sync.WaitGroup
var outerr error
@ -59,7 +59,7 @@ var tr = &http.Transport{
var client = &http.Client{Transport: tr}
func (n *Nginx) gatherUrl(addr *url.URL, acc plugins.Accumulator) error {
func (n *Nginx) gatherUrl(addr *url.URL, acc inputs.Accumulator) error {
resp, err := client.Get(addr.String())
if err != nil {
return fmt.Errorf("error making HTTP request to %s: %s", addr.String(), err)
@ -127,14 +127,16 @@ func (n *Nginx) gatherUrl(addr *url.URL, acc plugins.Accumulator) error {
}
tags := getTags(addr)
acc.Add("active", active, tags)
acc.Add("accepts", accepts, tags)
acc.Add("handled", handled, tags)
acc.Add("requests", requests, tags)
acc.Add("reading", reading, tags)
acc.Add("writing", writing, tags)
acc.Add("waiting", waiting, tags)
fields := map[string]interface{}{
"active": active,
"accepts": accepts,
"handled": handled,
"requests": requests,
"reading": reading,
"writing": writing,
"waiting": waiting,
}
acc.AddFields("nginx", fields, tags)
return nil
}
@ -157,7 +159,7 @@ func getTags(addr *url.URL) map[string]string {
}
func init() {
plugins.Add("nginx", func() plugins.Plugin {
inputs.Add("nginx", func() inputs.Input {
return &Nginx{}
})
}

View File

@ -54,17 +54,14 @@ func TestNginxGeneratesMetrics(t *testing.T) {
err := n.Gather(&acc)
require.NoError(t, err)
metrics := []struct {
name string
value uint64
}{
{"active", 585},
{"accepts", 85340},
{"handled", 85340},
{"requests", 35085},
{"reading", 4},
{"writing", 135},
{"waiting", 446},
fields := map[string]interface{}{
"active": uint64(585),
"accepts": uint64(85340),
"handled": uint64(85340),
"requests": uint64(35085),
"reading": uint64(4),
"writing": uint64(135),
"waiting": uint64(446),
}
addr, err := url.Parse(ts.URL)
if err != nil {
@ -84,8 +81,5 @@ func TestNginxGeneratesMetrics(t *testing.T) {
}
tags := map[string]string{"server": host, "port": port}
for _, m := range metrics {
assert.NoError(t, acc.ValidateTaggedValue(m.name, m.value, tags))
}
acc.AssertContainsTaggedFields(t, "nginx", fields, tags)
}

View File

@ -43,7 +43,7 @@ Using this configuration:
When run with:
```
./telegraf -config telegraf.conf -filter phpfpm -test
./telegraf -config telegraf.conf -input-filter phpfpm -test
```
It produces:

View File

@ -11,7 +11,7 @@ import (
"strings"
"sync"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/inputs"
)
const (
@ -67,7 +67,7 @@ func (r *phpfpm) Description() string {
// Reads stats from all configured servers and accumulates stats.
// Returns one of the errors encountered while gathering stats (if any).
func (g *phpfpm) Gather(acc plugins.Accumulator) error {
func (g *phpfpm) Gather(acc inputs.Accumulator) error {
if len(g.Urls) == 0 {
return g.gatherServer("http://127.0.0.1/status", acc)
}
@ -90,7 +90,7 @@ func (g *phpfpm) Gather(acc plugins.Accumulator) error {
}
// Request status page to get stat raw data
func (g *phpfpm) gatherServer(addr string, acc plugins.Accumulator) error {
func (g *phpfpm) gatherServer(addr string, acc inputs.Accumulator) error {
if g.client == nil {
client := &http.Client{}
@ -153,7 +153,7 @@ func (g *phpfpm) gatherServer(addr string, acc plugins.Accumulator) error {
}
// Import HTTP stat data into Telegraf system
func importMetric(r io.Reader, acc plugins.Accumulator, host string) (poolStat, error) {
func importMetric(r io.Reader, acc inputs.Accumulator, host string) (poolStat, error) {
stats := make(poolStat)
var currentPool string
@ -198,16 +198,18 @@ func importMetric(r io.Reader, acc plugins.Accumulator, host string) (poolStat,
"url": host,
"pool": pool,
}
fields := make(map[string]interface{})
for k, v := range stats[pool] {
acc.Add(strings.Replace(k, " ", "_", -1), v, tags)
fields[strings.Replace(k, " ", "_", -1)] = v
}
acc.AddFields("phpfpm", fields, tags)
}
return stats, nil
}
func init() {
plugins.Add("phpfpm", func() plugins.Plugin {
inputs.Add("phpfpm", func() inputs.Input {
return &phpfpm{}
})
}

View File

@ -32,27 +32,21 @@ func TestPhpFpmGeneratesMetrics(t *testing.T) {
"url": ts.Listener.Addr().String(),
"pool": "www",
}
assert.NoError(t, acc.ValidateTaggedValue("accepted_conn", int64(3), tags))
checkInt := []struct {
name string
value int64
}{
{"accepted_conn", 3},
{"listen_queue", 1},
{"max_listen_queue", 0},
{"listen_queue_len", 0},
{"idle_processes", 1},
{"active_processes", 1},
{"total_processes", 2},
{"max_active_processes", 1},
{"max_children_reached", 2},
{"slow_requests", 1},
fields := map[string]interface{}{
"accepted_conn": int64(3),
"listen_queue": int64(1),
"max_listen_queue": int64(0),
"listen_queue_len": int64(0),
"idle_processes": int64(1),
"active_processes": int64(1),
"total_processes": int64(2),
"max_active_processes": int64(1),
"max_children_reached": int64(2),
"slow_requests": int64(1),
}
for _, c := range checkInt {
assert.Equal(t, true, acc.CheckValue(c.name, c.value))
}
acc.AssertContainsTaggedFields(t, "phpfpm", fields, tags)
}
//When not passing server config, we default to localhost

View File

@ -7,7 +7,7 @@ import (
"strings"
"sync"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/inputs"
)
// HostPinger is a function that runs the "ping" function using a list of
@ -56,7 +56,7 @@ func (_ *Ping) SampleConfig() string {
return sampleConfig
}
func (p *Ping) Gather(acc plugins.Accumulator) error {
func (p *Ping) Gather(acc inputs.Accumulator) error {
var wg sync.WaitGroup
errorChannel := make(chan error, len(p.Urls)*2)
@ -64,7 +64,7 @@ func (p *Ping) Gather(acc plugins.Accumulator) error {
// Spin off a go routine for each url to ping
for _, url := range p.Urls {
wg.Add(1)
go func(url string, acc plugins.Accumulator) {
go func(url string, acc inputs.Accumulator) {
defer wg.Done()
args := p.args(url)
out, err := p.pingHost(args...)
@ -82,10 +82,13 @@ func (p *Ping) Gather(acc plugins.Accumulator) error {
}
// Calculate packet loss percentage
loss := float64(trans-rec) / float64(trans) * 100.0
acc.Add("packets_transmitted", trans, tags)
acc.Add("packets_received", rec, tags)
acc.Add("percent_packet_loss", loss, tags)
acc.Add("average_response_ms", avg, tags)
fields := map[string]interface{}{
"packets_transmitted": trans,
"packets_received": rec,
"percent_packet_loss": loss,
"average_response_ms": avg,
}
acc.AddFields("ping", fields, tags)
}(url, acc)
}
@ -171,7 +174,7 @@ func processPingOutput(out string) (int, int, float64, error) {
}
func init() {
plugins.Add("ping", func() plugins.Plugin {
inputs.Add("ping", func() inputs.Input {
return &Ping{pingHost: hostPinger}
})
}
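
The only derived value above is plain arithmetic: `loss = (trans - rec) / trans * 100`. With the lossy fixture in the tests below (5 packets transmitted, 3 received), that is `(5 - 3) / 5 * 100 = 40.0`, matching the `percent_packet_loss` field the test asserts.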

View File

@ -120,18 +120,16 @@ func TestPingGather(t *testing.T) {
p.Gather(&acc)
tags := map[string]string{"url": "www.google.com"}
assert.NoError(t, acc.ValidateTaggedValue("packets_transmitted", 5, tags))
assert.NoError(t, acc.ValidateTaggedValue("packets_received", 5, tags))
assert.NoError(t, acc.ValidateTaggedValue("percent_packet_loss", 0.0, tags))
assert.NoError(t, acc.ValidateTaggedValue("average_response_ms",
43.628, tags))
fields := map[string]interface{}{
"packets_transmitted": 5,
"packets_received": 5,
"percent_packet_loss": 0.0,
"average_response_ms": 43.628,
}
acc.AssertContainsTaggedFields(t, "ping", fields, tags)
tags = map[string]string{"url": "www.reddit.com"}
assert.NoError(t, acc.ValidateTaggedValue("packets_transmitted", 5, tags))
assert.NoError(t, acc.ValidateTaggedValue("packets_received", 5, tags))
assert.NoError(t, acc.ValidateTaggedValue("percent_packet_loss", 0.0, tags))
assert.NoError(t, acc.ValidateTaggedValue("average_response_ms",
43.628, tags))
acc.AssertContainsTaggedFields(t, "ping", fields, tags)
}
var lossyPingOutput = `
@ -159,10 +157,13 @@ func TestLossyPingGather(t *testing.T) {
p.Gather(&acc)
tags := map[string]string{"url": "www.google.com"}
assert.NoError(t, acc.ValidateTaggedValue("packets_transmitted", 5, tags))
assert.NoError(t, acc.ValidateTaggedValue("packets_received", 3, tags))
assert.NoError(t, acc.ValidateTaggedValue("percent_packet_loss", 40.0, tags))
assert.NoError(t, acc.ValidateTaggedValue("average_response_ms", 44.033, tags))
fields := map[string]interface{}{
"packets_transmitted": 5,
"packets_received": 3,
"percent_packet_loss": 40.0,
"average_response_ms": 44.033,
}
acc.AssertContainsTaggedFields(t, "ping", fields, tags)
}
var errorPingOutput = `
@ -188,10 +189,13 @@ func TestBadPingGather(t *testing.T) {
p.Gather(&acc)
tags := map[string]string{"url": "www.amazon.com"}
assert.NoError(t, acc.ValidateTaggedValue("packets_transmitted", 2, tags))
assert.NoError(t, acc.ValidateTaggedValue("packets_received", 0, tags))
assert.NoError(t, acc.ValidateTaggedValue("percent_packet_loss", 100.0, tags))
assert.NoError(t, acc.ValidateTaggedValue("average_response_ms", 0.0, tags))
fields := map[string]interface{}{
"packets_transmitted": 2,
"packets_received": 0,
"percent_packet_loss": 100.0,
"average_response_ms": 0.0,
}
acc.AssertContainsTaggedFields(t, "ping", fields, tags)
}
func mockFatalHostPinger(args ...string) (string, error) {

View File

@ -6,51 +6,37 @@ import (
"fmt"
"strings"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/inputs"
_ "github.com/lib/pq"
)
type Server struct {
type Postgresql struct {
Address string
Databases []string
OrderedColumns []string
}
type Postgresql struct {
Servers []*Server
}
var ignoredColumns = map[string]bool{"datid": true, "datname": true, "stats_reset": true}
var sampleConfig = `
# specify servers via an array of tables
[[plugins.postgresql.servers]]
# specify address via a url matching:
# postgres://[pqgotest[:password]]@localhost[/dbname]?sslmode=[disable|verify-ca|verify-full]
# or a simple string:
# host=localhost user=pqotest password=... sslmode=... dbname=app_production
#
# All connection parameters are optional. By default, the host is localhost
# and the user is the currently running user. For localhost, we default
# to sslmode=disable as well.
# All connection parameters are optional.
#
# Without the dbname parameter, the driver will default to a database
# with the same name as the user. This dbname is just for instantiating a
# connection with the server and doesn't restrict the databases we are trying
# to grab metrics for.
#
address = "sslmode=disable"
address = "host=localhost user=postgres sslmode=disable"
# A list of databases to pull metrics about. If not specified, metrics for all
# databases are gathered.
# databases = ["app_production", "blah_testing"]
# [[plugins.postgresql.servers]]
# address = "influx@remoteserver"
# databases = ["app_production", "testing"]
`
func (p *Postgresql) SampleConfig() string {
@ -65,42 +51,27 @@ func (p *Postgresql) IgnoredColumns() map[string]bool {
return ignoredColumns
}
var localhost = &Server{Address: "sslmode=disable"}
var localhost = "host=localhost sslmode=disable"
func (p *Postgresql) Gather(acc plugins.Accumulator) error {
if len(p.Servers) == 0 {
p.gatherServer(localhost, acc)
return nil
}
for _, serv := range p.Servers {
err := p.gatherServer(serv, acc)
if err != nil {
return err
}
}
return nil
}
func (p *Postgresql) gatherServer(serv *Server, acc plugins.Accumulator) error {
func (p *Postgresql) Gather(acc inputs.Accumulator) error {
var query string
if serv.Address == "" || serv.Address == "localhost" {
serv = localhost
if p.Address == "" || p.Address == "localhost" {
p.Address = localhost
}
db, err := sql.Open("postgres", serv.Address)
db, err := sql.Open("postgres", p.Address)
if err != nil {
return err
}
defer db.Close()
if len(serv.Databases) == 0 {
if len(p.Databases) == 0 {
query = `SELECT * FROM pg_stat_database`
} else {
query = fmt.Sprintf(`SELECT * FROM pg_stat_database WHERE datname IN ('%s')`, strings.Join(serv.Databases, "','"))
query = fmt.Sprintf(`SELECT * FROM pg_stat_database WHERE datname IN ('%s')`,
strings.Join(p.Databases, "','"))
}
rows, err := db.Query(query)
@ -111,13 +82,13 @@ func (p *Postgresql) gatherServer(serv *Server, acc plugins.Accumulator) error {
defer rows.Close()
// grab the column information from the result
serv.OrderedColumns, err = rows.Columns()
p.OrderedColumns, err = rows.Columns()
if err != nil {
return err
}
for rows.Next() {
err = p.accRow(rows, acc, serv)
err = p.accRow(rows, acc)
if err != nil {
return err
}
@ -130,20 +101,20 @@ type scanner interface {
Scan(dest ...interface{}) error
}
func (p *Postgresql) accRow(row scanner, acc plugins.Accumulator, serv *Server) error {
func (p *Postgresql) accRow(row scanner, acc inputs.Accumulator) error {
var columnVars []interface{}
var dbname bytes.Buffer
// this is where we'll store the column name with its *interface{}
columnMap := make(map[string]*interface{})
for _, column := range serv.OrderedColumns {
for _, column := range p.OrderedColumns {
columnMap[column] = new(interface{})
}
// populate the array of interface{} with the pointers in the right order
for i := 0; i < len(columnMap); i++ {
columnVars = append(columnVars, columnMap[serv.OrderedColumns[i]])
columnVars = append(columnVars, columnMap[p.OrderedColumns[i]])
}
// deconstruct array of variables and send to Scan
@ -159,20 +130,22 @@ func (p *Postgresql) accRow(row scanner, acc plugins.Accumulator, serv *Server)
dbname.WriteString(string(dbnameChars[i]))
}
tags := map[string]string{"server": serv.Address, "db": dbname.String()}
tags := map[string]string{"server": p.Address, "db": dbname.String()}
fields := make(map[string]interface{})
for col, val := range columnMap {
_, ignore := ignoredColumns[col]
if !ignore {
acc.Add(col, *val, tags)
fields[col] = *val
}
}
acc.AddFields("postgresql", fields, tags)
return nil
}
func init() {
plugins.Add("postgresql", func() plugins.Plugin {
inputs.Add("postgresql", func() inputs.Input {
return &Postgresql{}
})
}
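Because the `Servers` slice is gone, one `Postgresql` value now describes a single server, and collecting from several servers means repeating the plugin table, following the multi-instance configuration pattern used elsewhere in this release. A minimal sketch, assuming the `[[inputs.postgresql]]` table name implied by the plugins-to-inputs rename; the second address is illustrative only:

```
[[inputs.postgresql]]
  address = "host=localhost user=postgres sslmode=disable"
  databases = ["app_production"]

[[inputs.postgresql]]
  # hypothetical second server; gathers all databases when none are listed
  address = "host=db2.example.com user=telegraf sslmode=disable"
```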


@ -15,13 +15,9 @@ func TestPostgresqlGeneratesMetrics(t *testing.T) {
}
p := &Postgresql{
Servers: []*Server{
{
Address: fmt.Sprintf("host=%s user=postgres sslmode=disable",
testutil.GetLocalHost()),
Databases: []string{"postgres"},
},
},
Address: fmt.Sprintf("host=%s user=postgres sslmode=disable",
testutil.GetLocalHost()),
Databases: []string{"postgres"},
}
var acc testutil.Accumulator
@ -30,7 +26,7 @@ func TestPostgresqlGeneratesMetrics(t *testing.T) {
require.NoError(t, err)
availableColumns := make(map[string]bool)
for _, col := range p.Servers[0].OrderedColumns {
for _, col := range p.OrderedColumns {
availableColumns[col] = true
}
@ -61,7 +57,7 @@ func TestPostgresqlGeneratesMetrics(t *testing.T) {
for _, metric := range intMetrics {
_, ok := availableColumns[metric]
if ok {
assert.True(t, acc.HasIntValue(metric))
assert.True(t, acc.HasIntField("postgresql", metric))
metricsCounted++
}
}
@ -69,7 +65,7 @@ func TestPostgresqlGeneratesMetrics(t *testing.T) {
for _, metric := range floatMetrics {
_, ok := availableColumns[metric]
if ok {
assert.True(t, acc.HasFloatValue(metric))
assert.True(t, acc.HasFloatField("postgresql", metric))
metricsCounted++
}
}
@ -84,13 +80,9 @@ func TestPostgresqlTagsMetricsWithDatabaseName(t *testing.T) {
}
p := &Postgresql{
Servers: []*Server{
{
Address: fmt.Sprintf("host=%s user=postgres sslmode=disable",
testutil.GetLocalHost()),
Databases: []string{"postgres"},
},
},
Address: fmt.Sprintf("host=%s user=postgres sslmode=disable",
testutil.GetLocalHost()),
Databases: []string{"postgres"},
}
var acc testutil.Accumulator
@ -98,7 +90,7 @@ func TestPostgresqlTagsMetricsWithDatabaseName(t *testing.T) {
err := p.Gather(&acc)
require.NoError(t, err)
point, ok := acc.Get("xact_commit")
point, ok := acc.Get("postgresql")
require.True(t, ok)
assert.Equal(t, "postgres", point.Tags["db"])
@ -110,12 +102,8 @@ func TestPostgresqlDefaultsToAllDatabases(t *testing.T) {
}
p := &Postgresql{
Servers: []*Server{
{
Address: fmt.Sprintf("host=%s user=postgres sslmode=disable",
testutil.GetLocalHost()),
},
},
Address: fmt.Sprintf("host=%s user=postgres sslmode=disable",
testutil.GetLocalHost()),
}
var acc testutil.Accumulator
@ -126,7 +114,7 @@ func TestPostgresqlDefaultsToAllDatabases(t *testing.T) {
var found bool
for _, pnt := range acc.Points {
if pnt.Measurement == "xact_commit" {
if pnt.Measurement == "postgresql" {
if pnt.Tags["db"] == "postgres" {
found = true
break
@ -143,12 +131,8 @@ func TestPostgresqlIgnoresUnwantedColumns(t *testing.T) {
}
p := &Postgresql{
Servers: []*Server{
{
Address: fmt.Sprintf("host=%s user=postgres sslmode=disable",
testutil.GetLocalHost()),
},
},
Address: fmt.Sprintf("host=%s user=postgres sslmode=disable",
testutil.GetLocalHost()),
}
var acc testutil.Accumulator


@ -7,22 +7,17 @@ import (
"os/exec"
"strconv"
"strings"
"sync"
"github.com/shirou/gopsutil/process"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/inputs"
)
type Specification struct {
type Procstat struct {
PidFile string `toml:"pid_file"`
Exe string
Prefix string
Pattern string
}
type Procstat struct {
Specifications []*Specification
Prefix string
}
func NewProcstat() *Procstat {
@ -30,8 +25,6 @@ func NewProcstat() *Procstat {
}
var sampleConfig = `
[[plugins.procstat.specifications]]
prefix = "" # optional string to prefix measurements
# Must specify one of: pid_file, exe, or pattern
# PID file to monitor process
pid_file = "/var/run/nginx.pid"
@ -39,6 +32,9 @@ var sampleConfig = `
# exe = "nginx"
# pattern as argument for pgrep (ie, pgrep -f <pattern>)
# pattern = "nginx"
# Field name prefix
prefix = ""
`
func (_ *Procstat) SampleConfig() string {
@ -49,36 +45,27 @@ func (_ *Procstat) Description() string {
return "Monitor process cpu and memory usage"
}
func (p *Procstat) Gather(acc plugins.Accumulator) error {
var wg sync.WaitGroup
for _, specification := range p.Specifications {
wg.Add(1)
go func(spec *Specification, acc plugins.Accumulator) {
defer wg.Done()
procs, err := spec.createProcesses()
if err != nil {
log.Printf("Error: procstat getting process, exe: [%s] pidfile: [%s] pattern: [%s] %s",
spec.Exe, spec.PidFile, spec.Pattern, err.Error())
} else {
for _, proc := range procs {
p := NewSpecProcessor(spec.Prefix, acc, proc)
p.pushMetrics()
}
}
}(specification, acc)
func (p *Procstat) Gather(acc inputs.Accumulator) error {
procs, err := p.createProcesses()
if err != nil {
log.Printf("Error: procstat getting process, exe: [%s] pidfile: [%s] pattern: [%s] %s",
p.Exe, p.PidFile, p.Pattern, err.Error())
} else {
for _, proc := range procs {
p := NewSpecProcessor(p.Prefix, acc, proc)
p.pushMetrics()
}
}
wg.Wait()
return nil
}
func (spec *Specification) createProcesses() ([]*process.Process, error) {
func (p *Procstat) createProcesses() ([]*process.Process, error) {
var out []*process.Process
var errstring string
var outerr error
pids, err := spec.getAllPids()
pids, err := p.getAllPids()
if err != nil {
errstring += err.Error() + " "
}
@ -99,16 +86,16 @@ func (spec *Specification) createProcesses() ([]*process.Process, error) {
return out, outerr
}
func (spec *Specification) getAllPids() ([]int32, error) {
func (p *Procstat) getAllPids() ([]int32, error) {
var pids []int32
var err error
if spec.PidFile != "" {
pids, err = pidsFromFile(spec.PidFile)
} else if spec.Exe != "" {
pids, err = pidsFromExe(spec.Exe)
} else if spec.Pattern != "" {
pids, err = pidsFromPattern(spec.Pattern)
if p.PidFile != "" {
pids, err = pidsFromFile(p.PidFile)
} else if p.Exe != "" {
pids, err = pidsFromExe(p.Exe)
} else if p.Pattern != "" {
pids, err = pidsFromPattern(p.Pattern)
} else {
err = fmt.Errorf("Either exe, pid_file or pattern has to be specified")
}
@ -174,7 +161,7 @@ func pidsFromPattern(pattern string) ([]int32, error) {
}
func init() {
plugins.Add("procstat", func() plugins.Plugin {
inputs.Add("procstat", func() inputs.Input {
return NewProcstat()
})
}
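With the `Specifications` slice removed, each procstat instance watches exactly one of `pid_file`, `exe`, or `pattern`, so monitoring several processes means repeating the section. A minimal sketch, assuming the `[[inputs.procstat]]` table name implied by the rename; the process names are illustrative:

```
[[inputs.procstat]]
  pid_file = "/var/run/nginx.pid"
  prefix = "nginx"

[[inputs.procstat]]
  # hypothetical second instance, matched by executable name
  exe = "influxd"
  prefix = "influxd"
```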


@ -20,11 +20,11 @@ func TestGather(t *testing.T) {
file.Write([]byte(strconv.Itoa(pid)))
file.Close()
defer os.Remove(file.Name())
specifications := []*Specification{&Specification{PidFile: file.Name(), Prefix: "foo"}}
p := Procstat{
Specifications: specifications,
PidFile: file.Name(),
Prefix: "foo",
}
p.Gather(&acc)
assert.True(t, acc.HasFloatValue("foo_cpu_user"))
assert.True(t, acc.HasUIntValue("foo_memory_vms"))
assert.True(t, acc.HasFloatField("procstat", "foo_cpu_time_user"))
assert.True(t, acc.HasUIntField("procstat", "foo_memory_vms"))
}


@ -6,13 +6,14 @@ import (
"github.com/shirou/gopsutil/process"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/inputs"
)
type SpecProcessor struct {
Prefix string
tags map[string]string
acc plugins.Accumulator
fields map[string]interface{}
acc inputs.Accumulator
proc *process.Process
}
@ -23,12 +24,17 @@ func (p *SpecProcessor) add(metric string, value interface{}) {
} else {
mname = p.Prefix + "_" + metric
}
p.acc.Add(mname, value, p.tags)
p.fields[mname] = value
}
func (p *SpecProcessor) flush() {
p.acc.AddFields("procstat", p.fields, p.tags)
p.fields = make(map[string]interface{})
}
func NewSpecProcessor(
prefix string,
acc plugins.Accumulator,
acc inputs.Accumulator,
p *process.Process,
) *SpecProcessor {
tags := make(map[string]string)
@ -39,6 +45,7 @@ func NewSpecProcessor(
return &SpecProcessor{
Prefix: prefix,
tags: tags,
fields: make(map[string]interface{}),
acc: acc,
proc: p,
}
@ -60,6 +67,7 @@ func (p *SpecProcessor) pushMetrics() {
if err := p.pushMemoryStats(); err != nil {
log.Printf("procstat, mem stats not available: %s", err.Error())
}
p.flush()
}
func (p *SpecProcessor) pushFDStats() error {
@ -94,21 +102,22 @@ func (p *SpecProcessor) pushIOStats() error {
}
func (p *SpecProcessor) pushCPUStats() error {
cpu, err := p.proc.CPUTimes()
cpu_time, err := p.proc.CPUTimes()
if err != nil {
return err
}
p.add("cpu_user", cpu.User)
p.add("cpu_system", cpu.System)
p.add("cpu_idle", cpu.Idle)
p.add("cpu_nice", cpu.Nice)
p.add("cpu_iowait", cpu.Iowait)
p.add("cpu_irq", cpu.Irq)
p.add("cpu_soft_irq", cpu.Softirq)
p.add("cpu_soft_steal", cpu.Steal)
p.add("cpu_soft_stolen", cpu.Stolen)
p.add("cpu_soft_guest", cpu.Guest)
p.add("cpu_soft_guest_nice", cpu.GuestNice)
p.add("cpu_time_user", cpu_time.User)
p.add("cpu_time_system", cpu_time.System)
p.add("cpu_time_idle", cpu_time.Idle)
p.add("cpu_time_nice", cpu_time.Nice)
p.add("cpu_time_iowait", cpu_time.Iowait)
p.add("cpu_time_irq", cpu_time.Irq)
p.add("cpu_time_soft_irq", cpu_time.Softirq)
p.add("cpu_time_soft_steal", cpu_time.Steal)
p.add("cpu_time_soft_stolen", cpu_time.Stolen)
p.add("cpu_time_soft_guest", cpu_time.Guest)
p.add("cpu_time_soft_guest_nice", cpu_time.GuestNice)
return nil
}


@ -3,7 +3,7 @@ package prometheus
import (
"errors"
"fmt"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/inputs"
"github.com/prometheus/common/expfmt"
"github.com/prometheus/common/model"
"io"
@ -32,7 +32,7 @@ var ErrProtocolError = errors.New("prometheus protocol error")
// Reads stats from all configured servers accumulates stats.
// Returns one of the errors encountered while gather stats (if any).
func (g *Prometheus) Gather(acc plugins.Accumulator) error {
func (g *Prometheus) Gather(acc inputs.Accumulator) error {
var wg sync.WaitGroup
var outerr error
@ -50,7 +50,7 @@ func (g *Prometheus) Gather(acc plugins.Accumulator) error {
return outerr
}
func (g *Prometheus) gatherURL(url string, acc plugins.Accumulator) error {
func (g *Prometheus) gatherURL(url string, acc inputs.Accumulator) error {
resp, err := http.Get(url)
if err != nil {
return fmt.Errorf("error making HTTP request to %s: %s", url, err)
@ -77,17 +77,18 @@ func (g *Prometheus) gatherURL(url string, acc plugins.Accumulator) error {
if err == io.EOF {
break
} else if err != nil {
return fmt.Errorf("error getting processing samples for %s: %s", url, err)
return fmt.Errorf("error getting processing samples for %s: %s",
url, err)
}
for _, sample := range samples {
tags := map[string]string{}
tags := make(map[string]string)
for key, value := range sample.Metric {
if key == model.MetricNameLabel {
continue
}
tags[string(key)] = string(value)
}
acc.Add(string(sample.Metric[model.MetricNameLabel]),
acc.Add("prometheus_"+string(sample.Metric[model.MetricNameLabel]),
float64(sample.Value), tags)
}
}
@ -96,7 +97,7 @@ func (g *Prometheus) gatherURL(url string, acc plugins.Accumulator) error {
}
func init() {
plugins.Add("prometheus", func() plugins.Plugin {
inputs.Add("prometheus", func() inputs.Input {
return &Prometheus{}
})
}


@ -45,11 +45,11 @@ func TestPrometheusGeneratesMetrics(t *testing.T) {
value float64
tags map[string]string
}{
{"go_gc_duration_seconds_count", 7, map[string]string{}},
{"go_goroutines", 15, map[string]string{}},
{"prometheus_go_gc_duration_seconds_count", 7, map[string]string{}},
{"prometheus_go_goroutines", 15, map[string]string{}},
}
for _, e := range expected {
assert.NoError(t, acc.ValidateValue(e.name, e.value))
assert.True(t, acc.HasFloatField(e.name, "value"))
}
}


@ -8,7 +8,7 @@ import (
"reflect"
"strings"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/inputs"
)
// PuppetAgent is a PuppetAgent plugin
@ -82,7 +82,7 @@ func (pa *PuppetAgent) Description() string {
}
// Gather reads stats from all configured servers accumulates stats
func (pa *PuppetAgent) Gather(acc plugins.Accumulator) error {
func (pa *PuppetAgent) Gather(acc inputs.Accumulator) error {
if len(pa.Location) == 0 {
pa.Location = "/var/lib/puppet/state/last_run_summary.yaml"
@ -104,15 +104,16 @@ func (pa *PuppetAgent) Gather(acc plugins.Accumulator) error {
return fmt.Errorf("%s", err)
}
structPrinter(&puppetState, acc)
tags := map[string]string{"location": pa.Location}
structPrinter(&puppetState, acc, tags)
return nil
}
func structPrinter(s *State, acc plugins.Accumulator) {
func structPrinter(s *State, acc inputs.Accumulator, tags map[string]string) {
e := reflect.ValueOf(s).Elem()
fields := make(map[string]interface{})
for tLevelFNum := 0; tLevelFNum < e.NumField(); tLevelFNum++ {
name := e.Type().Field(tLevelFNum).Name
nameNumField := e.FieldByName(name).NumField()
@ -123,14 +124,14 @@ func structPrinter(s *State, acc plugins.Accumulator) {
lname := strings.ToLower(name)
lsName := strings.ToLower(sName)
acc.Add(fmt.Sprintf("%s_%s", lname, lsName), sValue, nil)
fields[fmt.Sprintf("%s_%s", lname, lsName)] = sValue
}
}
acc.AddFields("puppetagent", fields, tags)
}
func init() {
plugins.Add("puppetagent", func() plugins.Plugin {
inputs.Add("puppetagent", func() inputs.Input {
return &PuppetAgent{}
})
}
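The gather path above now tags each field set with the configured location and falls back to `/var/lib/puppet/state/last_run_summary.yaml` when none is given. A minimal sketch of overriding that default, assuming the `[[inputs.puppetagent]]` table name and that the `location` key maps onto the `Location` field:

```
[[inputs.puppetagent]]
  # optional; defaults to /var/lib/puppet/state/last_run_summary.yaml
  location = "/var/lib/puppet/state/last_run_summary.yaml"
```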


@ -0,0 +1,48 @@
package puppetagent
import (
"github.com/influxdb/telegraf/testutil"
"testing"
)
func TestGather(t *testing.T) {
var acc testutil.Accumulator
pa := PuppetAgent{
Location: "last_run_summary.yaml",
}
pa.Gather(&acc)
tags := map[string]string{"location": "last_run_summary.yaml"}
fields := map[string]interface{}{
"events_failure": int64(0),
"events_total": int64(0),
"events_success": int64(0),
"resources_failed": int64(0),
"resources_scheduled": int64(0),
"resources_changed": int64(0),
"resources_skipped": int64(0),
"resources_total": int64(109),
"resources_failedtorestart": int64(0),
"resources_restarted": int64(0),
"resources_outofsync": int64(0),
"changes_total": int64(0),
"time_lastrun": int64(1444936531),
"version_config": int64(1444936521),
"time_user": float64(0.004331),
"time_schedule": float64(0.001123),
"time_filebucket": float64(0.000353),
"time_file": float64(0.441472),
"time_exec": float64(0.508123),
"time_anchor": float64(0.000555),
"time_sshauthorizedkey": float64(0.000764),
"time_service": float64(1.807795),
"time_package": float64(1.325788),
"time_total": float64(8.85354707064819),
"time_configretrieval": float64(4.75567007064819),
"time_cron": float64(0.000584),
"version_puppet": "3.7.5",
}
acc.AssertContainsTaggedFields(t, "puppetagent", fields, tags)
}


@ -5,25 +5,22 @@ import (
"fmt"
"net/http"
"strconv"
"time"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/inputs"
)
const DefaultUsername = "guest"
const DefaultPassword = "guest"
const DefaultURL = "http://localhost:15672"
type Server struct {
type RabbitMQ struct {
URL string
Name string
Username string
Password string
Nodes []string
Queues []string
}
type RabbitMQ struct {
Servers []*Server
Client *http.Client
}
@ -94,15 +91,13 @@ type Node struct {
SocketsUsed int64 `json:"sockets_used"`
}
type gatherFunc func(r *RabbitMQ, serv *Server, acc plugins.Accumulator, errChan chan error)
type gatherFunc func(r *RabbitMQ, acc inputs.Accumulator, errChan chan error)
var gatherFunctions = []gatherFunc{gatherOverview, gatherNodes, gatherQueues}
var sampleConfig = `
# Specify servers via an array of tables
[[plugins.rabbitmq.servers]]
url = "http://localhost:15672" # required
# name = "rmq-server-1" # optional tag
# url = "http://localhost:15672"
# username = "guest"
# password = "guest"
@ -119,27 +114,18 @@ func (r *RabbitMQ) Description() string {
return "Read metrics from one or many RabbitMQ servers via the management API"
}
var localhost = &Server{URL: DefaultURL}
func (r *RabbitMQ) Gather(acc plugins.Accumulator) error {
func (r *RabbitMQ) Gather(acc inputs.Accumulator) error {
if r.Client == nil {
r.Client = &http.Client{}
}
var errChan = make(chan error, len(r.Servers))
var errChan = make(chan error, len(gatherFunctions))
// use localhost if no servers are specified in config
if len(r.Servers) == 0 {
r.Servers = append(r.Servers, localhost)
for _, f := range gatherFunctions {
go f(r, acc, errChan)
}
for _, serv := range r.Servers {
for _, f := range gatherFunctions {
go f(r, serv, acc, errChan)
}
}
for i := 1; i <= len(r.Servers)*len(gatherFunctions); i++ {
for i := 1; i <= len(gatherFunctions); i++ {
err := <-errChan
if err != nil {
return err
@ -149,20 +135,20 @@ func (r *RabbitMQ) Gather(acc plugins.Accumulator) error {
return nil
}
func (r *RabbitMQ) requestJSON(serv *Server, u string, target interface{}) error {
u = fmt.Sprintf("%s%s", serv.URL, u)
func (r *RabbitMQ) requestJSON(u string, target interface{}) error {
u = fmt.Sprintf("%s%s", r.URL, u)
req, err := http.NewRequest("GET", u, nil)
if err != nil {
return err
}
username := serv.Username
username := r.Username
if username == "" {
username = DefaultUsername
}
password := serv.Password
password := r.Password
if password == "" {
password = DefaultPassword
}
@ -181,10 +167,10 @@ func (r *RabbitMQ) requestJSON(serv *Server, u string, target interface{}) error
return nil
}
func gatherOverview(r *RabbitMQ, serv *Server, acc plugins.Accumulator, errChan chan error) {
func gatherOverview(r *RabbitMQ, acc inputs.Accumulator, errChan chan error) {
overview := &OverviewResponse{}
err := r.requestJSON(serv, "/api/overview", &overview)
err := r.requestJSON("/api/overview", &overview)
if err != nil {
errChan <- err
return
@ -195,76 +181,80 @@ func gatherOverview(r *RabbitMQ, serv *Server, acc plugins.Accumulator, errChan
return
}
tags := map[string]string{"url": serv.URL}
if serv.Name != "" {
tags["name"] = serv.Name
tags := map[string]string{"url": r.URL}
if r.Name != "" {
tags["name"] = r.Name
}
acc.Add("messages", overview.QueueTotals.Messages, tags)
acc.Add("messages_ready", overview.QueueTotals.MessagesReady, tags)
acc.Add("messages_unacked", overview.QueueTotals.MessagesUnacknowledged, tags)
acc.Add("channels", overview.ObjectTotals.Channels, tags)
acc.Add("connections", overview.ObjectTotals.Connections, tags)
acc.Add("consumers", overview.ObjectTotals.Consumers, tags)
acc.Add("exchanges", overview.ObjectTotals.Exchanges, tags)
acc.Add("queues", overview.ObjectTotals.Queues, tags)
acc.Add("messages_acked", overview.MessageStats.Ack, tags)
acc.Add("messages_delivered", overview.MessageStats.Deliver, tags)
acc.Add("messages_published", overview.MessageStats.Publish, tags)
fields := map[string]interface{}{
"messages": overview.QueueTotals.Messages,
"messages_ready": overview.QueueTotals.MessagesReady,
"messages_unacked": overview.QueueTotals.MessagesUnacknowledged,
"channels": overview.ObjectTotals.Channels,
"connections": overview.ObjectTotals.Connections,
"consumers": overview.ObjectTotals.Consumers,
"exchanges": overview.ObjectTotals.Exchanges,
"queues": overview.ObjectTotals.Queues,
"messages_acked": overview.MessageStats.Ack,
"messages_delivered": overview.MessageStats.Deliver,
"messages_published": overview.MessageStats.Publish,
}
acc.AddFields("rabbitmq_overview", fields, tags)
errChan <- nil
}
func gatherNodes(r *RabbitMQ, serv *Server, acc plugins.Accumulator, errChan chan error) {
func gatherNodes(r *RabbitMQ, acc inputs.Accumulator, errChan chan error) {
nodes := make([]Node, 0)
// Gather information about nodes
err := r.requestJSON(serv, "/api/nodes", &nodes)
err := r.requestJSON("/api/nodes", &nodes)
if err != nil {
errChan <- err
return
}
now := time.Now()
for _, node := range nodes {
if !shouldGatherNode(node, serv) {
if !r.shouldGatherNode(node) {
continue
}
tags := map[string]string{"url": serv.URL}
tags := map[string]string{"url": r.URL}
tags["node"] = node.Name
acc.Add("disk_free", node.DiskFree, tags)
acc.Add("disk_free_limit", node.DiskFreeLimit, tags)
acc.Add("fd_total", node.FdTotal, tags)
acc.Add("fd_used", node.FdUsed, tags)
acc.Add("mem_limit", node.MemLimit, tags)
acc.Add("mem_used", node.MemUsed, tags)
acc.Add("proc_total", node.ProcTotal, tags)
acc.Add("proc_used", node.ProcUsed, tags)
acc.Add("run_queue", node.RunQueue, tags)
acc.Add("sockets_total", node.SocketsTotal, tags)
acc.Add("sockets_used", node.SocketsUsed, tags)
fields := map[string]interface{}{
"disk_free": node.DiskFree,
"disk_free_limit": node.DiskFreeLimit,
"fd_total": node.FdTotal,
"fd_used": node.FdUsed,
"mem_limit": node.MemLimit,
"mem_used": node.MemUsed,
"proc_total": node.ProcTotal,
"proc_used": node.ProcUsed,
"run_queue": node.RunQueue,
"sockets_total": node.SocketsTotal,
"sockets_used": node.SocketsUsed,
}
acc.AddFields("rabbitmq_node", fields, tags, now)
}
errChan <- nil
}
func gatherQueues(r *RabbitMQ, serv *Server, acc plugins.Accumulator, errChan chan error) {
func gatherQueues(r *RabbitMQ, acc inputs.Accumulator, errChan chan error) {
// Gather information about queues
queues := make([]Queue, 0)
err := r.requestJSON(serv, "/api/queues", &queues)
err := r.requestJSON("/api/queues", &queues)
if err != nil {
errChan <- err
return
}
for _, queue := range queues {
if !shouldGatherQueue(queue, serv) {
if !r.shouldGatherQueue(queue) {
continue
}
tags := map[string]string{
"url": serv.URL,
"url": r.URL,
"queue": queue.Name,
"vhost": queue.Vhost,
"node": queue.Node,
@ -273,7 +263,7 @@ func gatherQueues(r *RabbitMQ, serv *Server, acc plugins.Accumulator, errChan ch
}
acc.AddFields(
"queue",
"rabbitmq_queue",
map[string]interface{}{
// common information
"consumers": queue.Consumers,
@ -301,12 +291,12 @@ func gatherQueues(r *RabbitMQ, serv *Server, acc plugins.Accumulator, errChan ch
errChan <- nil
}
func shouldGatherNode(node Node, serv *Server) bool {
if len(serv.Nodes) == 0 {
func (r *RabbitMQ) shouldGatherNode(node Node) bool {
if len(r.Nodes) == 0 {
return true
}
for _, name := range serv.Nodes {
for _, name := range r.Nodes {
if name == node.Name {
return true
}
@ -315,12 +305,12 @@ func shouldGatherNode(node Node, serv *Server) bool {
return false
}
func shouldGatherQueue(queue Queue, serv *Server) bool {
if len(serv.Queues) == 0 {
func (r *RabbitMQ) shouldGatherQueue(queue Queue) bool {
if len(r.Queues) == 0 {
return true
}
for _, name := range serv.Queues {
for _, name := range r.Queues {
if name == queue.Name {
return true
}
@ -330,7 +320,7 @@ func shouldGatherQueue(queue Queue, serv *Server) bool {
}
func init() {
plugins.Add("rabbitmq", func() plugins.Plugin {
inputs.Add("rabbitmq", func() inputs.Input {
return &RabbitMQ{}
})
}
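Here as well, the `Servers` slice is gone: one RabbitMQ instance holds a single management URL plus optional credentials and `nodes`/`queues` filters, and additional brokers are configured by repeating the table. A minimal sketch, assuming the `[[inputs.rabbitmq]]` table name; the second URL and the node name are illustrative:

```
[[inputs.rabbitmq]]
  url = "http://localhost:15672"
  # name = "rmq-server-1"   # optional tag
  # username = "guest"
  # password = "guest"
  # nodes = ["rabbit@node1"]   # limit node stats; all nodes when omitted

[[inputs.rabbitmq]]
  url = "http://rmq2.example.com:15672"
  username = "monitor"
  password = "monitor"
```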
