Compare commits


118 Commits

Author SHA1 Message Date
Cameron Sparr
0074696e67 Experimental windows build process changes 2016-02-22 18:20:37 -07:00
Cameron Sparr
54ee44839c Put arm deb and rpm downloads on readme 2016-02-22 16:59:45 -07:00
Cameron Sparr
8362aa9d66 Some windows build script fixes 2016-02-22 15:12:35 -07:00
Cameron Sparr
2a6ff16819 Fix up config panic points for naoina/toml support
closes #736
2016-02-22 14:44:33 -07:00
Pierre Fersing
47ad73cc89 Ignore boring filesystems from disk plugin
Modern Linux has lots of boring filesystems (tmpfs on /dev, devpts on
/dev/pts, lots of cgroup mounts on /sys/fs/cgroup/*, ...).

* Ignore filesystems with 0 bytes (this covers cgroup, devpts and others).
* Add IgnoreFS to ignore additional filesystems by their type. Add tmpfs and
  devtmpfs as the default ignored types.
2016-02-22 14:34:26 -07:00
Aurélien DEHAY
9687f71a17 README updated for pgrep user support
closes #724
2016-02-22 14:33:37 -07:00
Aurélien DEHAY
ed684be18d Adding pgrep user support 2016-02-22 14:32:04 -07:00
Cameron Sparr
5aef725c13 Change pass/drop to namepass/namedrop for outputs
closes #730
2016-02-22 13:35:06 -07:00
Thibault Cohen
d00550c45f Add metric pass/drop filter 2016-02-22 12:11:33 -07:00
Cameron Sparr
9ce8d78835 Set running output quiet mode in agent connect func
closes #701
2016-02-22 11:42:02 -07:00
Cameron Sparr
29016822fd Sensors input currently only available if built from source 2016-02-21 16:35:56 -07:00
Marcin Jasion
bb50d7edb4 dns_query plugin fixups:
- renamed plugin to dns_query
- domains are optional
- new record types

closes #694
2016-02-21 16:33:04 -07:00
Marcin Jasion
d43d6f2b13 renamed plugin to dns_query and value to query_time_ms
small polishings

added more record types - AAAA and ANY
2016-02-21 16:21:11 -07:00
Marcin Jasion
636dc27ead Dns query input plugin 2016-02-21 16:21:11 -07:00
Cameron Sparr
a18f535f21 Circle script: unset GOGC so it uses default 2016-02-21 16:00:41 -07:00
Cameron Sparr
6994d4a712 Turn GOGC on for packaging, use go 1.5.3 2016-02-21 10:41:46 -07:00
Cameron Sparr
c9d0ae7cf3 Circle script: create packages if commit is tagged 2016-02-20 12:47:31 -07:00
Jason Coene
9edc25999e Minor formatting improvements
closes #727
2016-02-19 16:18:06 -07:00
Jason Coene
53c130b704 Add riak plugin 2016-02-19 16:16:50 -07:00
Cameron Sparr
e4e174981d Skip snmp tests that require docker in short mode 2016-02-19 16:15:14 -07:00
Cameron Sparr
584a52ac21 InfluxDB output should not default to 'no timeout' for http writes
default to 5s instead, since even if it times out we will cache the
points and move on

closes #685
2016-02-19 15:38:51 -07:00
Cameron Sparr
f9b5767dae Provide default args: percpu=true and totalcpu=true for cpu plugin
Also if outputs.file is empty, write to stdout

closes #720
2016-02-19 11:56:33 -07:00
Cameron Sparr
3179829fa5 Update changelog for 0.10.3 2016-02-18 17:18:43 -07:00
Cameron Sparr
187d1b853d Update Makefile to 'go install' rather than 'go build' 2016-02-18 16:48:59 -07:00
Cameron Sparr
8d2e5f0bda Seems to be a toml parse bug around triple pounds 2016-02-18 14:36:03 -07:00
Cameron Sparr
7def6663bd Root directory cleanup 2016-02-18 13:37:36 -07:00
Dragostin Yanev (netixen)
a13d19c582 plugins/outputs/influxdb: Prevent runtime panic.
- Check and return error from NewBatchPoints to prevent runtime panic if
   the user provides an unparsable precision time unit in config.
- Provide correct sample config precision examples.
- Update etc/telegraf.conf precision comment.

closes #715
2016-02-18 13:12:20 -07:00
Gabriel Levine
1837f83282 cleaned up the httpjson POST function.
closes #688
closes #394
2016-02-18 10:11:56 -07:00
Cameron Sparr
b14cfd6c64 Add Configuration to statsd input readme
closes #714
2016-02-18 10:09:57 -07:00
Sergio Jimenez
963c51f473 fix(config): Made sample config consistent.
closes #682
2016-02-18 10:01:03 -07:00
Sergio Jimenez
1f77b75e14 fix(sample): Made TOML parser happy again 2016-02-18 09:00:27 +01:00
Sergio Jimenez
e5f3acd139 doc(readme): Added README.md. 2016-02-18 09:00:27 +01:00
Sergio Jimenez
c8365b3b7e test(unit): Removed useless tests 2016-02-18 09:00:27 +01:00
Sergio Jimenez
29c671ce46 fix(mesos): TOML annotation
* It was still using the previous config name
2016-02-18 09:00:27 +01:00
Sergio Jimenez
38ac9d2ecf List mesos in main README
And on the test configuration file
2016-02-18 09:00:27 +01:00
Sergio Jimenez
3573d93855 fix(vet): Range var used by goroutine
* Use it as a parameter for the closure
2016-02-18 09:00:27 +01:00
Sergio Jimenez
3cc2cda026 refactor(naming): For master specific settings
* This should help backwards compatibility when adding more features or
  supported Mesos components
2016-02-18 09:00:27 +01:00
Sergio Jimenez
7d10986f10 test(unit): Test for whitelisted metrics 2016-02-18 09:00:27 +01:00
Sergio Jimenez
8c6a6604ce Comments and cleanup 2016-02-18 09:00:27 +01:00
Sergio Jimenez
7170280401 fix(import): Json parser lives outside internal
* Fixed import for JSONFlattener{}; it's now in parsers. Broke after
  rebasing.
2016-02-18 09:00:27 +01:00
Sergio Jimenez
babecb6d49 feat(timeout): Use timeout setting
* Use timeout as parameter in the http request
* A bit of cleanup
* More tests
2016-02-18 09:00:27 +01:00
Sergio Jimenez
9770802901 feat(whitelist): Converted black to whitelist
* Defined global var for holding default metric groups
* Refactor removeGroup() to work with the whitelist
* Refactor TestRemoveGroup()
2016-02-18 09:00:27 +01:00
Sergio Jimenez
4c1e817b38 fix(indent): For configuration sample 2016-02-18 09:00:27 +01:00
Sergio Jimenez
52b329be4e plugin(mesos): Reversed removeGroup()
* Now the user selects what to push instead of what not to
* Tests still need to be checked and improved
* Checks are still missing for when MetricsCol is empty
2016-02-18 09:00:27 +01:00
Sergio Jimenez
1d50d62a79 plugin(mesos): Added goroutines.
The plugin will iterate over the Servers slice and create a goroutine
for each of them.
2016-02-18 09:00:27 +01:00
Sergio Jimenez
07502c9804 Don't add port to tags just the host 2016-02-18 09:00:27 +01:00
Sergio Jimenez
59e0e49822 Indentation for sample config string 2016-02-18 09:00:27 +01:00
Sergio Jimenez
05170d78be plugin(mesos): Initial commit
The plugin is able to query a Mesos master and push the metrics; a
blacklist and a timeout can be configured, though the timeout is not yet used.

Added unit tests; it might be a good idea to add a system test using Docker.
2016-02-18 09:00:27 +01:00
Cameron Sparr
88c83277c6 Write unit tests for RunningOutput 2016-02-17 17:06:34 -07:00
Cameron Sparr
d0734b105b Start service plugins immediately, fix off-by-one bug 2016-02-17 15:10:32 -07:00
Cameron Sparr
4860dc148c changelog update 2016-02-17 09:53:41 -07:00
Cameron Sparr
ee468be696 Flush based on buffer size rather than time
this includes:
- Add Accumulator to the Start() function of service inputs
- For message consumer plugins, use the Accumulator to constantly add
  metrics and make Gather a dummy function
- rework unit tests to match this new behavior.
- make "flush_buffer_when_full" a config option that defaults to true

closes #666
2016-02-16 22:25:22 -07:00
Cameron Sparr
7f539c951a changelog update 2016-02-15 16:08:45 -07:00
Thibault Cohen
e495ae9030 Add tcp/udp check connection input plugin
closes #650
2016-02-15 13:38:58 -07:00
Cameron Sparr
ccb6b3c64b Small readme formattings 2016-02-14 18:44:48 -07:00
Anton Bykov
85594cc92e Readme: specify compression format for unpacking
closes #693
2016-02-14 15:59:53 -07:00
Andrei Burd
0b72612cd2 Code formatted, Readme updated based on example
closes #695
2016-02-14 15:58:05 -07:00
Vladislav Shub
dd086c7830 Added full support for raindrops and tests 2016-02-14 18:52:26 +02:00
Cameron Sparr
6a601ceb97 Add support for specifying SSL config for influxdb output
closes #191
2016-02-12 17:02:01 -07:00
Cameron Sparr
8236534e3c changelog update 2016-02-12 16:55:27 -07:00
Cameron Sparr
0fef147713 data output readme update 2016-02-12 16:52:33 -07:00
Cameron Sparr
0198296ced Data format output documentation 2016-02-12 16:47:07 -07:00
Cameron Sparr
37726a02af Add Serializer plugins, and 'file' output plugin 2016-02-12 15:05:27 -07:00
Cameron Sparr
a9c135488e Add Serializer plugins, and 'file' output plugin 2016-02-12 14:13:49 -07:00
Thomas Menard
72f5c9b62d postgres plugin bgwriter stats
Add pg_stat_bg_writer stats

closes #683
2016-02-12 11:21:53 -07:00
Cameron Sparr
8d0f50a6fd MQTT Consumer Input plugin 2016-02-12 11:13:32 -07:00
Dragostin Yanev (netixen)
6c353e8b8f Change point_buffer to metric_buffer to conform with changes in https://github.com/influxdata/telegraf/pull/676
closes #680
2016-02-12 10:01:56 -07:00
Dragostin Yanev (netixen)
512d9822f0 Add NATS consumer input plugin. 2016-02-12 09:58:32 -07:00
Cameron Sparr
d003ca46c7 Merge pull request #673 from miketonks/f-docker-percentages
Add calculated cpu and memory percentages to docker input (via config option)
2016-02-11 08:43:55 -07:00
Mike Tonks
7587dc350e Remove config option; the percent option is always activated. Fix review issues 2016-02-11 10:49:48 +00:00
Cameron Sparr
28664fedb2 Support exec input plugin legacy behavior 2016-02-10 13:26:02 -07:00
Marcus Geiger
ef20f05221 Add --pkgarch option to build.py to specify the packaging architecture
The package architecture can differ from GOARCH.

Example: build for debian on raspberry pi. GOARCH will be arm
but the packaging architecture on debian will be armhf (arm
hard float). The --pkgarch option is passed to fpm to specify
the required architecture which is reflected in the package
manifest and also in the result filename.

closes #675
2016-02-09 17:41:48 -07:00
Miki
cabf5d004d added dovecot plugin
closes #671
2016-02-09 14:10:17 -07:00
Cameron Sparr
d551da26e5 Fix exec input legacy behavior, command='' 2016-02-09 13:49:14 -07:00
Dhruv Bansal
893357f01e Updated Riemann output:
* Customizable 'separator' option instead of hard-coded '_'

* String values are sent as "State" instead of "Metric", preventing
  Riemann from rejecting them

* Riemann service name is set to an (ugly) combination of input name &
  (sorted) tags' values...this allows connecting different events for
  the same input together on the Riemann side

closes #642
2016-02-09 11:17:07 -07:00
Cameron Sparr
fc7fa4b6c5 Cleanup comments and indentation in config file 2016-02-09 11:01:50 -07:00
Cameron Sparr
fb75db2f1f re-arrange and cleanup graphite output test 2016-02-09 11:01:13 -07:00
Mike Tonks
7c20522a30 Add calculated cpu and memory percentages to docker input (via config option) 2016-02-09 15:20:56 +00:00
Cameron Sparr
c09884c686 Fixup some URL typos 2016-02-08 21:36:53 -07:00
Cameron Sparr
9273782093 changelog update 2016-02-08 21:34:22 -07:00
Cameron Sparr
44ffe29c10 Update Godeps and Godeps_windows files 2016-02-08 21:26:56 -07:00
Cameron Sparr
e619493ece Implementing generic parser plugins and documentation
This constitutes a large change in how we will parse different data
formats going forward (for the plugins that support it)

This is working off @henrypfhu's changes.
2016-02-08 21:08:44 -07:00
Henry Hu
1449c8b887 Add Graphite line protocol parsing to exec plugin
closes #637
2016-02-08 17:12:28 -07:00
Cameron Sparr
6b06a23102 Change [tags] to [global_tags] to deal with toml bug
closes #662
2016-02-08 16:20:47 -07:00
Cameron Sparr
b55a93a3e1 update changelog 2016-02-07 09:06:51 -07:00
Cameron Sparr
f5f43e6d1b ping plugin: use -W for linux, -t for bsd/darwin
closes #443
2016-02-06 23:24:47 -07:00
Cameron Sparr
1e03a9440b Try ping plugin with -n and -s options added 2016-02-06 23:09:29 -07:00
codehate
9a59512f75 Add: Telegraf CouchDB Plugin
CouchDB Plugin - Formatted Code

closes #652

Minor fix for CouchDB Plugin

Formatted code fix for CouchDB Plugin

CouchDB Plugin - Changed hosts to full urls

CouchDB Plugin - Formatted Code

CouchDB Plugin - Fatal commit from local fix

CouchDB Plugin - Updated test case
2016-02-05 14:14:19 -07:00
Thibault Cohen
35150caea4 Add a make command with CGO disabled
closes #458
2016-02-04 17:33:40 -07:00
Cameron Sparr
f01da8fee4 Remove extraneous 'v' from README tarball 2016-02-04 11:23:46 -07:00
Cameron Sparr
434c08a357 Release 0.10.2 2016-02-04 11:04:29 -07:00
Cameron Sparr
bd9c5b6995 mqtt output: cleanup, implement TLS
Also normalize TLS config across all output plugins and normalize
comment strings as well.
2016-02-04 10:44:37 -07:00
Cameron Sparr
b941d270ce changelog update 2016-02-03 08:35:03 -07:00
Reginaldo Sousa
9406961125 Fix a bug when setting host header in httpjson
closes #634
2016-02-02 21:59:18 -07:00
Rune Darrud
0d391b66a3 Added support for Windows operating systems pre-Vista. 2016-02-02 21:57:38 -07:00
Cameron Sparr
a11e07e250 Minor change to forgotten config file exit 2016-02-01 17:44:19 -07:00
Cameron Sparr
d266dad1f4 Don't compile ping plugin on windows.
closes #496
2016-02-01 16:39:53 -07:00
Rune Darrud
331b700d1b Corrected an issue that came from earlier code cleanup,
wherein missing performance counters caused an early return from the
loop instead of being ignored in the default configuration mode.

closes #625
2016-01-31 23:17:45 -07:00
Christoph Wegener
2163fde0a4 Fix memory leak: Remove signal.Notify code from plugins/inputs/win_perf_counters.(*Win_PerfCounters).Gather 2016-01-31 23:16:09 -07:00
Cameron Sparr
24a2aaef4b Ansible role in readme 2016-01-30 11:55:48 -07:00
Cameron Sparr
042cf517b2 Mention yum/apt repo in README
Also add `make windows-build` to Makefile

closes #618
2016-01-30 11:35:39 -07:00
Cameron Sparr
b97027ac9a Allow exec plugin to parse line-protocol
closes #613
2016-01-30 11:12:59 -07:00
Christoph Wegener
4ea3f82e50 Replace all single percentage characters with double
percentage characters in sampleConfig string so that fmt.Printf
will interpret them as literal percentage characters when
running 'telegraf.exe -sample-config'

closes #620
2016-01-30 10:10:55 -07:00
Cameron Sparr
38c4111e6c Add unit tests for the root telegraf package 2016-01-29 16:01:34 -07:00
Cameron Sparr
338341add8 Put windows dependencies into a separate Godeps file 2016-01-29 11:10:18 -07:00
Cameron Sparr
93bb679f9d Fix possible panic if stat is nil
closes #612
2016-01-29 10:47:30 -07:00
Pavel Yudin
40d859354f Add powerdns input plugin
closes #614
2016-01-29 09:40:04 -07:00
Cameron Sparr
9e7c8df384 statsd: allow template parsing fields. Default to value=
closes #602
2016-01-28 16:56:50 -07:00
Rune Darrud
f088dd7e00 Added plugin to read Windows performance counters
closes #575
2016-01-28 16:35:13 -07:00
Cameron Sparr
10c4e4f63f Fix datadog json marshalling
fixes #607
2016-01-28 16:12:33 -07:00
Cameron Sparr
962325cc40 Warn when metrics are being overwritten
closes #601
2016-01-28 14:00:14 -07:00
root
a9c33abfa5 sql server: update README.md
closes #594
2016-01-28 13:50:26 -07:00
Cameron Sparr
d835c19fce Insert . between measurement and field name in datadog output
fixes #600
2016-01-28 12:04:26 -07:00
Marcin Bunsch
1f1384afc6 Use a single measurement with fields for timings in statsd plugin.
closes #603
2016-01-28 12:03:48 -07:00
Cameron Sparr
9d4b55be19 Include all tag values in graphite output
closes #595
2016-01-28 10:58:35 -07:00
Cameron Sparr
c549ab907a Throughout telegraf, use telegraf.Metric rather than client.Point
closes #599
2016-01-27 23:47:32 -07:00
Cameron Sparr
9c0d14bb60 Create public models for telegraf metrics, accumulator, plugins
This will basically make the root directory a place for storing the
major telegraf interfaces, which will make telegraf's godoc look quite
a bit nicer, and make it easier for contributors to look up the few data
types that they actually care about.

closes #564
2016-01-27 15:42:50 -07:00
Cameron Sparr
a822d942cd 386 -> i386 2016-01-27 13:42:34 -07:00
198 changed files with 13851 additions and 2140 deletions

CHANGELOG.md

@@ -1,10 +1,80 @@
## v0.10.2 [unreleased]
### Release Notes
## v0.10.4 [unreleased]
### Features
- [#727](https://github.com/influxdata/telegraf/pull/727): riak input, thanks @jcoene!
- [#694](https://github.com/influxdata/telegraf/pull/694): DNS Query input, thanks @mjasion!
- [#724](https://github.com/influxdata/telegraf/pull/724): username matching for procstat input, thanks @zorel!
- [#736](https://github.com/influxdata/telegraf/pull/736): Ignore dummy filesystems from disk plugin. Thanks @PierreF!
### Bugfixes
- [#701](https://github.com/influxdata/telegraf/pull/701): output write count shouldn't print in quiet mode.
## v0.10.3 [2016-02-18]
### Release Notes
- Users of the `exec` and `kafka_consumer` (and the new `nats_consumer`
and `mqtt_consumer` plugins) can now specify the incoming data
format that they would like to parse. Currently supports: "json", "influx", and
"graphite"
- Users of message broker and file output plugins can now choose what data format
they would like to output. Currently supports: "influx" and "graphite"
- More info on parsing _incoming_ data formats can be found
[here](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md)
- More info on serializing _outgoing_ data formats can be found
[here](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md)
- Telegraf now has an option `flush_buffer_when_full` that will flush the
metric buffer whenever it fills up for each output, rather than dropping
points and only flushing on a set time interval. This will default to `true`
and is in the `[agent]` config section.
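For illustration, a minimal sketch of how this might look in the config (option name, section, and default are per the notes above; the `flush_interval` value is just an example):
```toml
[agent]
  ## flush to outputs on this interval
  flush_interval = "10s"
  ## new: also flush whenever the metric buffer fills up (defaults to true)
  flush_buffer_when_full = true
```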
### Features
- [#652](https://github.com/influxdata/telegraf/pull/652): CouchDB Input Plugin. Thanks @codehate!
- [#655](https://github.com/influxdata/telegraf/pull/655): Support parsing arbitrary data formats. Currently limited to kafka_consumer and exec inputs.
- [#671](https://github.com/influxdata/telegraf/pull/671): Dovecot input plugin. Thanks @mikif70!
- [#680](https://github.com/influxdata/telegraf/pull/680): NATS consumer input plugin. Thanks @netixen!
- [#676](https://github.com/influxdata/telegraf/pull/676): MQTT consumer input plugin.
- [#683](https://github.com/influxdata/telegraf/pull/683): Postgres input plugin: add pg_stat_bgwriter. Thanks @menardorama!
- [#679](https://github.com/influxdata/telegraf/pull/679): File/stdout output plugin.
- [#679](https://github.com/influxdata/telegraf/pull/679): Support for arbitrary output data formats.
- [#695](https://github.com/influxdata/telegraf/pull/695): raindrops input plugin. Thanks @burdandrei!
- [#650](https://github.com/influxdata/telegraf/pull/650): net_response input plugin. Thanks @titilambert!
- [#699](https://github.com/influxdata/telegraf/pull/699): Flush based on buffer size rather than time.
- [#682](https://github.com/influxdata/telegraf/pull/682): Mesos input plugin. Thanks @tripledes!
### Bugfixes
- [#443](https://github.com/influxdata/telegraf/issues/443): Fix Ping command timeout parameter on Linux.
- [#662](https://github.com/influxdata/telegraf/pull/662): Change `[tags]` to `[global_tags]` to fix multiple-plugin tags bug.
- [#642](https://github.com/influxdata/telegraf/issues/642): Riemann output plugin issues.
- [#394](https://github.com/influxdata/telegraf/issues/394): Support HTTP POST. Thanks @gabelev!
- [#715](https://github.com/influxdata/telegraf/pull/715): Fix influxdb precision config panic. Thanks @netixen!
## v0.10.2 [2016-02-04]
### Release Notes
- Statsd timing measurements are now aggregated into a single measurement with
fields.
- Graphite output now inserts tags into the bucket in alphabetical order.
- Normalized TLS/SSL support for output plugins: MQTT, AMQP, Kafka
- `verify_ssl` config option was removed from Kafka because it was actually
doing the opposite of what it claimed to do (yikes). It's been replaced by
`insecure_skip_verify`
### Features
- [#575](https://github.com/influxdata/telegraf/pull/575): Support for collecting Windows Performance Counters. Thanks @TheFlyingCorpse!
- [#564](https://github.com/influxdata/telegraf/issues/564): Features for plugin-writing simplification; internal metric data type.
- [#603](https://github.com/influxdata/telegraf/pull/603): Aggregate statsd timing measurements into fields. Thanks @marcinbunsch!
- [#601](https://github.com/influxdata/telegraf/issues/601): Warn when overwriting cached metrics.
- [#614](https://github.com/influxdata/telegraf/pull/614): PowerDNS input plugin. Thanks @Kasen!
- [#617](https://github.com/influxdata/telegraf/pull/617): exec plugin: parse influx line protocol in addition to JSON.
- [#628](https://github.com/influxdata/telegraf/pull/628): Windows perf counters: pre-vista support
### Bugfixes
- [#595](https://github.com/influxdata/telegraf/issues/595): graphite output should include tags to separate duplicate measurements.
- [#599](https://github.com/influxdata/telegraf/issues/599): datadog plugin tags not working.
- [#600](https://github.com/influxdata/telegraf/issues/600): datadog measurement/field name parsing is wrong.
- [#602](https://github.com/influxdata/telegraf/issues/602): Fix statsd field name templating.
- [#612](https://github.com/influxdata/telegraf/pull/612): Docker input panic fix if stats received are nil.
- [#634](https://github.com/influxdata/telegraf/pull/634): Properly set host headers in httpjson. Thanks @reginaldosousa!
## v0.10.1 [2016-01-27]

CONTRIBUTING.md

@@ -12,6 +12,13 @@ but any information you can provide on how the data will look is appreciated.
See the [OpenTSDB output](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/opentsdb)
for a good example.
## GoDoc
Public interfaces for inputs, outputs, metrics, and the accumulator can be found
on the GoDoc
[![GoDoc](https://godoc.org/github.com/influxdata/telegraf?status.svg)](https://godoc.org/github.com/influxdata/telegraf)
## Sign the CLA
Before we can merge a pull request, you will need to sign the CLA,
@@ -29,7 +36,7 @@ Assuming you can already build the project, run these in the telegraf directory:
This section is for developers who want to create new collection inputs.
Telegraf is entirely plugin driven. This interface allows for operators to
pick and choose what is gathered as well as makes it easy for developers
pick and choose what is gathered and makes it easy for developers
to create new ways of generating metrics.
Plugin authorship is kept as simple as possible to encourage people to develop
@@ -37,7 +44,7 @@ and submit new inputs.
### Input Plugin Guidelines
* A plugin must conform to the `inputs.Input` interface.
* A plugin must conform to the `telegraf.Input` interface.
* Input Plugins should call `inputs.Add` in their `init` function to register themselves.
See below for a quick example.
* Input Plugins must be added to the
@@ -46,49 +53,8 @@ See below for a quick example.
plugin can be configured. This is included in `telegraf -sample-config`.
* The `Description` function should say in one line what this plugin does.
### Input interface
```go
type Input interface {
SampleConfig() string
Description() string
Gather(Accumulator) error
}
type Accumulator interface {
Add(measurement string,
value interface{},
tags map[string]string,
timestamp ...time.Time)
AddFields(measurement string,
fields map[string]interface{},
tags map[string]string,
timestamp ...time.Time)
}
```
### Accumulator
The way that a plugin emits metrics is by interacting with the Accumulator.
The `Add` function takes 3 arguments:
* **measurement**: A string description of the metric. For instance `bytes_read` or `faults`.
* **value**: A value for the metric. This accepts 5 different types of value:
* **int**: The most common type. All int types are accepted, but favor using `int64`. Useful for counters, etc.
* **float**: Favor `float64`, useful for gauges, percentages, etc.
* **bool**: `true` or `false`, useful to indicate the presence of a state. `light_on`,
etc.
* **string**: Typically used to indicate a message, or some kind of freeform
information.
* **time.Time**: Useful for indicating when a state last occurred, for instance `light_on_since`.
* **tags**: This is a map of strings to strings describing the where or who
of the metric. For instance, the `net` plugin adds a tag named `"interface"`
set to the name of the network interface, like `"eth0"` (see the sketch after this list).
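A hedged sketch of what calling the Accumulator might look like from a plugin's `Gather` (the `NetStats` receiver and the numbers are made up for illustration; the single-value `Add` form stores the value in a field named `value`, as the accumulator code later in this compare shows):
```go
func (n *NetStats) Gather(acc telegraf.Accumulator) error {
	// Single-value form: the field will be named "value".
	acc.Add("bytes_read", int64(40280), map[string]string{"interface": "eth0"})

	// Multi-field form:
	acc.AddFields("net",
		map[string]interface{}{"bytes_read": int64(40280), "bytes_sent": int64(512)},
		map[string]string{"interface": "eth0"})
	return nil
}
```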
Let's say you've written a plugin that emits metrics about processes on the current host.
Let's say you've written a plugin that emits metrics about processes on the
current host.
### Input Plugin Example
@@ -97,7 +63,10 @@ package simple
// simple.go
import "github.com/influxdata/telegraf/plugins/inputs"
import (
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
type Simple struct {
Ok bool
@@ -122,10 +91,56 @@ func (s *Simple) Gather(acc inputs.Accumulator) error {
}
func init() {
inputs.Add("simple", func() inputs.Input { return &Simple{} })
inputs.Add("simple", func() telegraf.Input { return &Simple{} })
}
```
## Input Plugins Accepting Arbitrary Data Formats
Some input plugins (such as
[exec](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/exec))
accept arbitrary input data formats. An overview of these data formats can
be found
[here](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md).
In order to enable this, you must specify a `SetParser(parser parsers.Parser)`
function on the plugin object (see the exec plugin for an example), as well as
define `parser` as a field of the object.
You can then utilize the parser internally in your plugin, parsing data as you
see fit. Telegraf's configuration layer will take care of instantiating the
`Parser` object.
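A minimal sketch of what this looks like on a plugin struct (struct and field names here echo the exec plugin but should be treated as illustrative):
```go
type Exec struct {
	Commands []string

	parser parsers.Parser
}

// Called by Telegraf's configuration layer with a Parser built from
// the plugin's data_format settings.
func (e *Exec) SetParser(parser parsers.Parser) {
	e.parser = parser
}
```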
You should also add the following to your SampleConfig() return:
```toml
## Data format to consume. This can be "json", "influx" or "graphite"
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "influx"
```
Below is the `Parser` interface.
```go
// Parser is an interface defining functions that a parser plugin must satisfy.
type Parser interface {
// Parse takes a byte buffer separated by newlines
// ie, `cpu.usage.idle 90\ncpu.usage.busy 10`
// and parses it into telegraf metrics
Parse(buf []byte) ([]telegraf.Metric, error)
// ParseLine takes a single string metric
// ie, "cpu.usage.idle 90"
// and parses it into a telegraf metric.
ParseLine(line string) (telegraf.Metric, error)
}
```
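Inside `Gather`, usage might then look like the following hedged sketch (`e.run()` is a hypothetical helper returning the raw bytes your plugin collected; the `Name`/`Fields`/`Tags`/`Time` accessors are assumed from the `telegraf.Metric` type):
```go
out, err := e.run() // hypothetical helper returning raw bytes
if err != nil {
	return err
}
metrics, err := e.parser.Parse(out)
if err != nil {
	return err
}
for _, m := range metrics {
	acc.AddFields(m.Name(), m.Fields(), m.Tags(), m.Time())
}
return nil
```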
You can view the code
[here](https://github.com/influxdata/telegraf/blob/henrypfhu-master/plugins/parsers/registry.go).
## Service Input Plugins
This section is for developers who want to create new "service" collection
@@ -145,18 +160,6 @@ and `Stop()` methods.
* Same as the `Plugin` guidelines, except that they must conform to the
`inputs.ServiceInput` interface.
### Service Plugin interface
```go
type ServicePlugin interface {
SampleConfig() string
Description() string
Gather(Accumulator) error
Start() error
Stop()
}
```
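The interface listing above was removed from this document in favor of the GoDoc; given that this compare also moves the Accumulator into `Start()` (see the flush-buffer commit above), the updated interface presumably looks like:
```go
type ServiceInput interface {
	Input

	// Start the ServiceInput. The Accumulator may be retained and
	// used to push metrics as they arrive.
	Start(acc Accumulator) error

	// Stop the ServiceInput and clean up.
	Stop()
}
```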
## Output Plugins
This section is for developers who want to create a new output sink. Outputs
@@ -174,18 +177,6 @@ See below for a quick example.
output can be configured. This is included in `telegraf -sample-config`.
* The `Description` function should say in one line what this output does.
### Output interface
```go
type Output interface {
Connect() error
Close() error
Description() string
SampleConfig() string
Write(points []*client.Point) error
}
```
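As above, the listing now lives in the GoDoc; based on the `Write` signature change shown in the example below, the updated interface should read:
```go
type Output interface {
	Connect() error
	Close() error
	Description() string
	SampleConfig() string
	Write(metrics []telegraf.Metric) error
}
```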
### Output Example
```go
@@ -193,7 +184,10 @@ package simpleoutput
// simpleoutput.go
import "github.com/influxdata/telegraf/plugins/outputs"
import (
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/outputs"
)
type Simple struct {
Ok bool
@@ -217,7 +211,7 @@ func (s *Simple) Close() error {
return nil
}
func (s *Simple) Write(points []*client.Point) error {
func (s *Simple) Write(metrics []telegraf.Metric) error {
for _, m := range metrics {
// write `m` to the output sink here
}
@@ -225,11 +219,38 @@ func (s *Simple) Write(points []*client.Point) error {
}
func init() {
outputs.Add("simpleoutput", func() outputs.Output { return &Simple{} })
outputs.Add("simpleoutput", func() telegraf.Output { return &Simple{} })
}
```
## Output Plugins Writing Arbitrary Data Formats
Some output plugins (such as
[file](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/file))
can write arbitrary output data formats. An overview of these data formats can
be found
[here](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md).
In order to enable this, you must specify a
`SetSerializer(serializer serializers.Serializer)`
function on the plugin object (see the file plugin for an example), as well as
define `serializer` as a field of the object.
You can then utilize the serializer internally in your plugin, serializing data
before it's written. Telegraf's configuration layer will take care of
instantiating the `Serializer` object.
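A minimal sketch of what this looks like on an output struct (struct and field names here echo the file plugin but should be treated as illustrative):
```go
type File struct {
	Files []string

	serializer serializers.Serializer
}

// Called by Telegraf's configuration layer with a Serializer built from
// the plugin's data_format settings.
func (f *File) SetSerializer(serializer serializers.Serializer) {
	f.serializer = serializer
}
```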
You should also add the following to your SampleConfig() return:
```toml
## Data format to output. This can be "influx" or "graphite"
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "influx"
```
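A hedged sketch of using the serializer in `Write`, assuming `Serialize` returns the serialized lines for one metric and that the plugin holds a hypothetical `f.writer`:
```go
func (f *File) Write(metrics []telegraf.Metric) error {
	for _, m := range metrics {
		lines, err := f.serializer.Serialize(m)
		if err != nil {
			return err
		}
		for _, line := range lines {
			// f.writer is a hypothetical io.Writer held by the plugin
			if _, err := f.writer.Write([]byte(line + "\n")); err != nil {
				return err
			}
		}
	}
	return nil
}
```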
## Service Output Plugins
This section is for developers who want to create a new "service" output. A
@@ -245,20 +266,6 @@ and `Stop()` methods.
* Same as the `Output` guidelines, except that they must conform to the
`output.ServiceOutput` interface.
### Service Output interface
```go
type ServiceOutput interface {
Connect() error
Close() error
Description() string
SampleConfig() string
Write(points []*client.Point) error
Start() error
Stop()
}
```
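Again the listing now lives in the GoDoc; from the `telegraf.ServiceOutput` usage in agent.go later in this compare (`Start()` on connect, `Stop()` on close), the shape is presumably:
```go
type ServiceOutput interface {
	Output

	Start() error
	Stop()
}
```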
## Unit Tests
### Execute short tests
@@ -274,7 +281,7 @@ which would take some time to replicate.
To overcome this situation we've decided to use docker containers to provide a
fast and reproducible environment to test those services which require it.
For other situations
(i.e: https://github.com/influxdata/telegraf/blob/master/plugins/redis/redis_test.go)
(i.e: https://github.com/influxdata/telegraf/blob/master/plugins/inputs/redis/redis_test.go)
a simple mock will suffice.
To execute Telegraf tests follow these simple steps:

Godeps

@@ -1,11 +1,9 @@
git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git dbd8d5c40a582eb9adacde36b47932b3a3ad0034
git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git 617c801af238c3af2d9e72c5d4a0f02edad03ce5
github.com/Shopify/sarama d37c73f2b2bce85f7fa16b6a550d26c5372892ef
github.com/Sirupsen/logrus f7f79f729e0fbe2fcc061db48a9ba0263f588252
github.com/amir/raidman 6a8e089bbe32e6b907feae5ba688841974b3c339
github.com/armon/go-metrics 345426c77237ece5dab0e1605c3e4b35c3f54757
github.com/aws/aws-sdk-go 87b1e60a50b09e4812dee560b33a238f67305804
github.com/beorn7/perks b965b613227fddccbfffe13eae360ed3fa822f8d
github.com/boltdb/bolt ee4a0888a9abe7eefe5a0992ca4cb06864839873
github.com/cenkalti/backoff 4dc77674aceaabba2c7e3da25d4c823edfb73f99
github.com/dancannon/gorethink 6f088135ff288deb9d5546f4c71919207f891a70
github.com/davecgh/go-spew 5215b55f46b2b919f50a1df0eaa5886afe4e3b3d
@@ -14,19 +12,14 @@ github.com/eapache/queue ded5959c0d4e360646dc9e9908cff48666781367
github.com/fsouza/go-dockerclient 7b651349f9479f5114913eefbfd3c4eeddd79ab4
github.com/go-ini/ini afbd495e5aaea13597b5e14fe514ddeaa4d76fc3
github.com/go-sql-driver/mysql 7c7f556282622f94213bc028b4d0a7b6151ba239
github.com/gogo/protobuf e8904f58e872a473a5b91bc9bf3377d223555263
github.com/golang/protobuf 6aaa8d47701fa6cf07e914ec01fde3d4a1fe79c3
github.com/golang/snappy 723cc1e459b8eea2dea4583200fd60757d40097a
github.com/gonuts/go-shellquote e842a11b24c6abfb3dd27af69a17f482e4b483c2
github.com/gorilla/context 1c83b3eabd45b6d76072b66b746c20815fb2872d
github.com/gorilla/mux 26a6070f849969ba72b72256e9f14cf519751690
github.com/hailocab/go-hostpool e80d13ce29ede4452c43dea11e79b9bc8a15b478
github.com/hashicorp/go-msgpack fa3f63826f7c23912c15263591e65d54d080b458
github.com/hashicorp/raft 057b893fd996696719e98b6c44649ea14968c811
github.com/hashicorp/raft-boltdb d1e82c1ec3f15ee991f7cc7ffd5b67ff6f5bbaee
github.com/influxdata/config bae7cb98197d842374d3b8403905924094930f24
github.com/influxdata/influxdb 697f48b4e62e514e701ffec39978b864a3c666e6
github.com/influxdb/influxdb 697f48b4e62e514e701ffec39978b864a3c666e6
github.com/influxdata/influxdb ef571fc104dc24b77cd3710c156cd95e5cfd7aa5
github.com/jmespath/go-jmespath c01cf91b011868172fdcd9f41838e80c9d716264
github.com/klauspost/crc32 999f3125931f6557b991b2f8472172bdfa578d38
github.com/lib/pq 8ad2b298cadd691a77015666a5372eae5dbfac8f
@@ -34,15 +27,15 @@ github.com/matttproud/golang_protobuf_extensions d0c3fe89de86839aecf2e0579c40ba3
github.com/mreiferson/go-snappystream 028eae7ab5c4c9e2d1cb4c4ca1e53259bbe7e504
github.com/naoina/go-stringutil 6b638e95a32d0c1131db0e7fe83775cbea4a0d0b
github.com/naoina/toml 751171607256bb66e64c9f0220c00662420c38e9
github.com/nats-io/nats 6a83f1a633cfbfd90aa648ac99fb38c06a8b40df
github.com/nsqio/go-nsq 2118015c120962edc5d03325c680daf3163a8b5f
github.com/pborman/uuid dee7705ef7b324f27ceb85a121c61f2c2e8ce988
github.com/pmezard/go-difflib 792786c7400a136282c1664665ae0a8db921c6c2
github.com/prometheus/client_golang 67994f177195311c3ea3d4407ed0175e34a4256f
github.com/prometheus/client_model fa8ad6fec33561be4280a8f0514318c79d7f6cb6
github.com/prometheus/common 14ca1097bbe21584194c15e391a9dab95ad42a59
github.com/prometheus/procfs 406e5b7bfd8201a36e2bb5f7bdae0b03380c2ce8
github.com/samuel/go-zookeeper 218e9c81c0dd8b3b18172b2bbfad92cc7d6db55f
github.com/shirou/gopsutil 85bf0974ed06e4e668595ae2b4de02e772a2819b
github.com/shirou/gopsutil e77438504d45b9985c99a75730fe65220ceea00e
github.com/soniah/gosnmp b1b4f885b12c5dcbd021c5cee1c904110de6db7d
github.com/streadway/amqp b4f3ceab0337f013208d31348b578d83c0064744
github.com/stretchr/objx 1a9d0bb9f541897e62256577b352fdbc1fb4fd94
@@ -57,3 +50,4 @@ gopkg.in/dancannon/gorethink.v1 6f088135ff288deb9d5546f4c71919207f891a70
gopkg.in/fatih/pool.v2 cba550ebf9bce999a02e963296d4bc7a486cb715
gopkg.in/mgo.v2 03c9f3ee4c14c8e51ee521a6a7d0425658dd6f64
gopkg.in/yaml.v2 f7716cbe52baa25d2e9b0d0da546fcf909fc16b4
github.com/miekg/dns e0d84d97e59bcb6561eae269c4e94d25b66822cb

Godeps_windows (new file)

@@ -0,0 +1,56 @@
git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git 617c801af238c3af2d9e72c5d4a0f02edad03ce5
github.com/Shopify/sarama d37c73f2b2bce85f7fa16b6a550d26c5372892ef
github.com/Sirupsen/logrus f7f79f729e0fbe2fcc061db48a9ba0263f588252
github.com/StackExchange/wmi f3e2bae1e0cb5aef83e319133eabfee30013a4a5
github.com/amir/raidman 6a8e089bbe32e6b907feae5ba688841974b3c339
github.com/aws/aws-sdk-go 87b1e60a50b09e4812dee560b33a238f67305804
github.com/beorn7/perks b965b613227fddccbfffe13eae360ed3fa822f8d
github.com/cenkalti/backoff 4dc77674aceaabba2c7e3da25d4c823edfb73f99
github.com/dancannon/gorethink 6f088135ff288deb9d5546f4c71919207f891a70
github.com/davecgh/go-spew 5215b55f46b2b919f50a1df0eaa5886afe4e3b3d
github.com/eapache/go-resiliency b86b1ec0dd4209a588dc1285cdd471e73525c0b3
github.com/eapache/queue ded5959c0d4e360646dc9e9908cff48666781367
github.com/fsouza/go-dockerclient 7b651349f9479f5114913eefbfd3c4eeddd79ab4
github.com/go-ini/ini afbd495e5aaea13597b5e14fe514ddeaa4d76fc3
github.com/go-ole/go-ole 50055884d646dd9434f16bbb5c9801749b9bafe4
github.com/go-sql-driver/mysql 7c7f556282622f94213bc028b4d0a7b6151ba239
github.com/golang/protobuf 6aaa8d47701fa6cf07e914ec01fde3d4a1fe79c3
github.com/golang/snappy 723cc1e459b8eea2dea4583200fd60757d40097a
github.com/gonuts/go-shellquote e842a11b24c6abfb3dd27af69a17f482e4b483c2
github.com/gorilla/context 1c83b3eabd45b6d76072b66b746c20815fb2872d
github.com/gorilla/mux 26a6070f849969ba72b72256e9f14cf519751690
github.com/hailocab/go-hostpool e80d13ce29ede4452c43dea11e79b9bc8a15b478
github.com/influxdata/config bae7cb98197d842374d3b8403905924094930f24
github.com/influxdata/influxdb ef571fc104dc24b77cd3710c156cd95e5cfd7aa5
github.com/jmespath/go-jmespath c01cf91b011868172fdcd9f41838e80c9d716264
github.com/klauspost/crc32 999f3125931f6557b991b2f8472172bdfa578d38
github.com/lib/pq 8ad2b298cadd691a77015666a5372eae5dbfac8f
github.com/lxn/win 9a7734ea4db26bc593d52f6a8a957afdad39c5c1
github.com/matttproud/golang_protobuf_extensions d0c3fe89de86839aecf2e0579c40ba3bb336a453
github.com/miekg/dns e0d84d97e59bcb6561eae269c4e94d25b66822cb
github.com/mreiferson/go-snappystream 028eae7ab5c4c9e2d1cb4c4ca1e53259bbe7e504
github.com/naoina/go-stringutil 6b638e95a32d0c1131db0e7fe83775cbea4a0d0b
github.com/naoina/toml 751171607256bb66e64c9f0220c00662420c38e9
github.com/nats-io/nats 6a83f1a633cfbfd90aa648ac99fb38c06a8b40df
github.com/nsqio/go-nsq 2118015c120962edc5d03325c680daf3163a8b5f
github.com/pmezard/go-difflib 792786c7400a136282c1664665ae0a8db921c6c2
github.com/prometheus/client_golang 67994f177195311c3ea3d4407ed0175e34a4256f
github.com/prometheus/client_model fa8ad6fec33561be4280a8f0514318c79d7f6cb6
github.com/prometheus/common 14ca1097bbe21584194c15e391a9dab95ad42a59
github.com/prometheus/procfs 406e5b7bfd8201a36e2bb5f7bdae0b03380c2ce8
github.com/samuel/go-zookeeper 218e9c81c0dd8b3b18172b2bbfad92cc7d6db55f
github.com/shirou/gopsutil e77438504d45b9985c99a75730fe65220ceea00e
github.com/shirou/w32 ada3ba68f000aa1b58580e45c9d308fe0b7fc5c5
github.com/soniah/gosnmp b1b4f885b12c5dcbd021c5cee1c904110de6db7d
github.com/streadway/amqp b4f3ceab0337f013208d31348b578d83c0064744
github.com/stretchr/objx 1a9d0bb9f541897e62256577b352fdbc1fb4fd94
github.com/stretchr/testify f390dcf405f7b83c997eac1b06768bb9f44dec18
github.com/wvanbergen/kafka 1a8639a45164fcc245d5c7b4bd3ccfbd1a0ffbf3
github.com/wvanbergen/kazoo-go 0f768712ae6f76454f987c3356177e138df258f8
github.com/zensqlmonitor/go-mssqldb ffe5510c6fa5e15e6d983210ab501c815b56b363
golang.org/x/net 04b9de9b512f58addf28c9853d50ebef61c3953e
golang.org/x/text 6d3c22c4525a4da167968fa2479be5524d2e8bd0
gopkg.in/dancannon/gorethink.v1 6f088135ff288deb9d5546f4c71919207f891a70
gopkg.in/fatih/pool.v2 cba550ebf9bce999a02e963296d4bc7a486cb715
gopkg.in/mgo.v2 03c9f3ee4c14c8e51ee521a6a7d0425658dd6f64
gopkg.in/yaml.v2 f7716cbe52baa25d2e9b0d0da546fcf909fc16b4

Makefile

@@ -9,23 +9,41 @@ endif
# Standard Telegraf build
default: prepare build
# Windows build
windows: prepare-windows build-windows
# Only run the build (no dependency grabbing)
build:
go build -o telegraf -ldflags \
go install -ldflags "-X main.Version=$(VERSION)" ./...
build-windows:
go build -o telegraf.exe -ldflags \
"-X main.Version=$(VERSION)" \
./cmd/telegraf/telegraf.go
build-for-docker:
CGO_ENABLED=0 GOOS=linux go build -o telegraf -ldflags \
"-X main.Version=$(VERSION)" \
./cmd/telegraf/telegraf.go
# Build with race detector
dev: prepare
go build -race -o telegraf -ldflags \
"-X main.Version=$(VERSION)" \
./cmd/telegraf/telegraf.go
go build -race -ldflags "-X main.Version=$(VERSION)" ./...
# run package script
package:
./scripts/build.py --package --version="$(VERSION)" --platform=linux --arch=all --upload
# Get dependencies and use gdm to checkout changesets
prepare:
go get github.com/sparrc/gdm
gdm restore
# Use the windows godeps file to prepare dependencies
prepare-windows:
go get github.com/sparrc/gdm
gdm restore -f Godeps_windows
# Run all docker containers necessary for unit tests
docker-run:
ifeq ($(UNAME), Darwin)
@@ -74,14 +92,17 @@ docker-kill:
-docker rm nsq aerospike redis opentsdb rabbitmq postgres memcached mysql kafka mqtt riemann snmp
# Run full unit tests using docker containers (includes setup and teardown)
test: docker-kill docker-run
test: vet docker-kill docker-run
# Sleeping for kafka leadership election, TSDB setup, etc.
sleep 60
# SUCCESS, running tests
go test -race ./...
# Run "short" unit tests
test-short:
test-short: vet
go test -short ./...
.PHONY: test
vet:
go vet ./...
.PHONY: test test-short vet build default

README.md

@@ -24,17 +24,21 @@ will continue to be supported, see below for download links.
For more details on the differences between Telegraf 0.2.x and 0.10.x, see
the [release blog post](https://influxdata.com/blog/announcing-telegraf-0-10-0/).
### Linux deb and rpm packages:
### Linux deb and rpm Packages:
Latest:
* http://get.influxdb.org/telegraf/telegraf_0.10.1-1_amd64.deb
* http://get.influxdb.org/telegraf/telegraf-0.10.1-1.x86_64.rpm
* http://get.influxdb.org/telegraf/telegraf_0.10.3-1_amd64.deb
* http://get.influxdb.org/telegraf/telegraf-0.10.3-1.x86_64.rpm
Latest (arm):
* http://get.influxdb.org/telegraf/telegraf_0.10.3-1_arm.deb
* http://get.influxdb.org/telegraf/telegraf-0.10.3-1.arm.rpm
0.2.x:
* http://get.influxdb.org/telegraf/telegraf_0.2.4_amd64.deb
* http://get.influxdb.org/telegraf/telegraf-0.2.4-1.x86_64.rpm
##### Package instructions:
##### Package Instructions:
* Telegraf binary is installed in `/usr/bin/telegraf`
* Telegraf daemon configuration file is in `/etc/telegraf/telegraf.conf`
@@ -43,32 +47,42 @@ Latest:
* On systemd systems (such as Ubuntu 15+), the telegraf daemon can be
controlled via `systemctl [action] telegraf`
### yum/apt Repositories:
There is a yum/apt repo available for the whole InfluxData stack; see
[here](https://docs.influxdata.com/influxdb/v0.9/introduction/installation/#installation)
for instructions, replacing the `influxdb` package name with `telegraf`.
### Linux tarballs:
Latest:
* http://get.influxdb.org/telegraf/telegraf-0.10.1-1_linux_amd64.tar.gz
* http://get.influxdb.org/telegraf/telegraf-0.10.1-1_linux_386.tar.gz
* http://get.influxdb.org/telegraf/telegraf-0.10.1-1_linux_arm.tar.gz
* http://get.influxdb.org/telegraf/telegraf-0.10.3-1_linux_amd64.tar.gz
* http://get.influxdb.org/telegraf/telegraf-0.10.3-1_linux_i386.tar.gz
* http://get.influxdb.org/telegraf/telegraf-0.10.3-1_linux_arm.tar.gz
0.2.x:
* http://get.influxdb.org/telegraf/telegraf_linux_amd64_0.2.4.tar.gz
* http://get.influxdb.org/telegraf/telegraf_linux_386_0.2.4.tar.gz
* http://get.influxdb.org/telegraf/telegraf_linux_arm_0.2.4.tar.gz
##### tarball instructions:
##### tarball Instructions:
To install the full directory structure with config file, run:
```
sudo tar -C / -xvf ./telegraf-v0.10.1-1_linux_amd64.tar.gz
sudo tar -C / -zxvf ./telegraf-0.10.3-1_linux_amd64.tar.gz
```
To extract only the binary, run:
```
tar -zxvf telegraf-v0.10.1-1_linux_amd64.tar.gz --strip-components=3 ./usr/bin/telegraf
tar -zxvf telegraf-0.10.3-1_linux_amd64.tar.gz --strip-components=3 ./usr/bin/telegraf
```
### Ansible Role:
Ansible role: https://github.com/rossmcdonald/telegraf
### OSX via Homebrew:
```
@@ -88,7 +102,7 @@ if you don't have it already. You also must build with golang version 1.5+.
4. Run `cd $GOPATH/src/github.com/influxdata/telegraf`
5. Run `make`
### How to use it:
## How to use it:
```console
$ telegraf -help
@@ -131,7 +145,7 @@ Examples:
## Configuration
See the [configuration guide](CONFIGURATION.md) for a rundown of the more advanced
See the [configuration guide](docs/CONFIGURATION.md) for a rundown of the more advanced
configuration options.
## Supported Input Plugins
@@ -145,10 +159,13 @@ Currently implemented sources:
* aerospike
* apache
* bcache
* couchdb
* disque
* dns query time
* docker
* dovecot
* elasticsearch
* exec (generic JSON-emitting executable plugin)
* exec (generic executable plugin, supports JSON, influx and graphite)
* haproxy
* httpjson (generic JSON-emitting http service plugin)
* influxdb
@@ -157,26 +174,32 @@ Currently implemented sources:
* lustre2
* mailchimp
* memcached
* mesos
* mongodb
* mysql
* net_response
* nginx
* nsq
* phpfpm
* phusion passenger
* ping
* postgresql
* powerdns
* procstat
* prometheus
* puppetagent
* rabbitmq
* raindrops
* redis
* rethinkdb
* riak
* sensors (only available if built from source)
* snmp
* sql server (microsoft)
* twemproxy
* zfs
* zookeeper
* sensors
* snmp
* win_perf_counters (windows performance counters)
* system
* cpu
* mem
@@ -189,7 +212,9 @@ Currently implemented sources:
Telegraf can also collect metrics via the following service plugins:
* statsd
* mqtt_consumer
* kafka_consumer
* nats_consumer
* github_webhooks
We'll be adding support for many more over the coming months. Read on if you
@@ -216,4 +241,4 @@ want to add support for another service or third-party API.
Please see the
[contributing guide](CONTRIBUTING.md)
for details on contributing a plugin or output to Telegraf.
for details on contributing a plugin to Telegraf.

accumulator.go

@@ -1,188 +1,21 @@
package telegraf
import (
"fmt"
"log"
"math"
"sync"
"time"
"github.com/influxdata/telegraf/internal/models"
"github.com/influxdata/influxdb/client/v2"
)
import "time"
type Accumulator interface {
Add(measurement string, value interface{},
tags map[string]string, t ...time.Time)
AddFields(measurement string, fields map[string]interface{},
tags map[string]string, t ...time.Time)
// Create a point with a value, decorating it with tags
// NOTE: tags is expected to be owned by the caller, don't mutate
// it after passing to Add.
Add(measurement string,
value interface{},
tags map[string]string,
t ...time.Time)
SetDefaultTags(tags map[string]string)
AddDefaultTag(key, value string)
Prefix() string
SetPrefix(prefix string)
AddFields(measurement string,
fields map[string]interface{},
tags map[string]string,
t ...time.Time)
Debug() bool
SetDebug(enabled bool)
}
func NewAccumulator(
inputConfig *models.InputConfig,
points chan *client.Point,
) Accumulator {
acc := accumulator{}
acc.points = points
acc.inputConfig = inputConfig
return &acc
}
type accumulator struct {
sync.Mutex
points chan *client.Point
defaultTags map[string]string
debug bool
inputConfig *models.InputConfig
prefix string
}
func (ac *accumulator) Add(
measurement string,
value interface{},
tags map[string]string,
t ...time.Time,
) {
fields := make(map[string]interface{})
fields["value"] = value
ac.AddFields(measurement, fields, tags, t...)
}
func (ac *accumulator) AddFields(
measurement string,
fields map[string]interface{},
tags map[string]string,
t ...time.Time,
) {
if len(fields) == 0 || len(measurement) == 0 {
return
}
if !ac.inputConfig.Filter.ShouldTagsPass(tags) {
return
}
// Override measurement name if set
if len(ac.inputConfig.NameOverride) != 0 {
measurement = ac.inputConfig.NameOverride
}
// Apply measurement prefix and suffix if set
if len(ac.inputConfig.MeasurementPrefix) != 0 {
measurement = ac.inputConfig.MeasurementPrefix + measurement
}
if len(ac.inputConfig.MeasurementSuffix) != 0 {
measurement = measurement + ac.inputConfig.MeasurementSuffix
}
if tags == nil {
tags = make(map[string]string)
}
// Apply plugin-wide tags if set
for k, v := range ac.inputConfig.Tags {
if _, ok := tags[k]; !ok {
tags[k] = v
}
}
// Apply daemon-wide tags if set
for k, v := range ac.defaultTags {
if _, ok := tags[k]; !ok {
tags[k] = v
}
}
result := make(map[string]interface{})
for k, v := range fields {
// Filter out any filtered fields
if ac.inputConfig != nil {
if !ac.inputConfig.Filter.ShouldPass(k) {
continue
}
}
result[k] = v
// Validate uint64 and float64 fields
switch val := v.(type) {
case uint64:
// InfluxDB does not support writing uint64
if val < uint64(9223372036854775808) {
result[k] = int64(val)
} else {
result[k] = int64(9223372036854775807)
}
case float64:
// NaNs are invalid values in influxdb, skip measurement
if math.IsNaN(val) || math.IsInf(val, 0) {
if ac.debug {
log.Printf("Measurement [%s] field [%s] has a NaN or Inf "+
"field, skipping",
measurement, k)
}
continue
}
}
}
fields = nil
if len(result) == 0 {
return
}
var timestamp time.Time
if len(t) > 0 {
timestamp = t[0]
} else {
timestamp = time.Now()
}
if ac.prefix != "" {
measurement = ac.prefix + measurement
}
pt, err := client.NewPoint(measurement, tags, result, timestamp)
if err != nil {
log.Printf("Error adding point [%s]: %s\n", measurement, err.Error())
return
}
if ac.debug {
fmt.Println("> " + pt.String())
}
ac.points <- pt
}
func (ac *accumulator) SetDefaultTags(tags map[string]string) {
ac.defaultTags = tags
}
func (ac *accumulator) AddDefaultTag(key, value string) {
ac.defaultTags[key] = value
}
func (ac *accumulator) Prefix() string {
return ac.prefix
}
func (ac *accumulator) SetPrefix(prefix string) {
ac.prefix = prefix
}
func (ac *accumulator) Debug() bool {
return ac.debug
}
func (ac *accumulator) SetDebug(debug bool) {
ac.debug = debug
}

agent/accumulator.go (new file)

@@ -0,0 +1,172 @@
package agent
import (
"fmt"
"log"
"math"
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal/models"
)
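// NewAccumulator returns an accumulator bound to a single input: metrics
// added through it are filtered, renamed, and tagged per the input's config
// before being pushed onto the shared metrics channel.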
func NewAccumulator(
inputConfig *internal_models.InputConfig,
metrics chan telegraf.Metric,
) *accumulator {
acc := accumulator{}
acc.metrics = metrics
acc.inputConfig = inputConfig
return &acc
}
type accumulator struct {
sync.Mutex
metrics chan telegraf.Metric
defaultTags map[string]string
debug bool
inputConfig *internal_models.InputConfig
prefix string
}
func (ac *accumulator) Add(
measurement string,
value interface{},
tags map[string]string,
t ...time.Time,
) {
fields := make(map[string]interface{})
fields["value"] = value
if !ac.inputConfig.Filter.ShouldNamePass(measurement) {
return
}
ac.AddFields(measurement, fields, tags, t...)
}
func (ac *accumulator) AddFields(
measurement string,
fields map[string]interface{},
tags map[string]string,
t ...time.Time,
) {
if len(fields) == 0 || len(measurement) == 0 {
return
}
if !ac.inputConfig.Filter.ShouldNamePass(measurement) {
return
}
if !ac.inputConfig.Filter.ShouldTagsPass(tags) {
return
}
// Override measurement name if set
if len(ac.inputConfig.NameOverride) != 0 {
measurement = ac.inputConfig.NameOverride
}
// Apply measurement prefix and suffix if set
if len(ac.inputConfig.MeasurementPrefix) != 0 {
measurement = ac.inputConfig.MeasurementPrefix + measurement
}
if len(ac.inputConfig.MeasurementSuffix) != 0 {
measurement = measurement + ac.inputConfig.MeasurementSuffix
}
if tags == nil {
tags = make(map[string]string)
}
// Apply plugin-wide tags if set
for k, v := range ac.inputConfig.Tags {
if _, ok := tags[k]; !ok {
tags[k] = v
}
}
// Apply daemon-wide tags if set
for k, v := range ac.defaultTags {
if _, ok := tags[k]; !ok {
tags[k] = v
}
}
result := make(map[string]interface{})
for k, v := range fields {
// Filter out any filtered fields
if ac.inputConfig != nil {
if !ac.inputConfig.Filter.ShouldFieldsPass(k) {
continue
}
}
result[k] = v
// Validate uint64 and float64 fields
switch val := v.(type) {
case uint64:
// InfluxDB does not support writing uint64
if val < uint64(9223372036854775808) {
result[k] = int64(val)
} else {
result[k] = int64(9223372036854775807)
}
case float64:
// NaNs are invalid values in influxdb, skip measurement
if math.IsNaN(val) || math.IsInf(val, 0) {
if ac.debug {
log.Printf("Measurement [%s] field [%s] has a NaN or Inf "+
"field, skipping",
measurement, k)
}
continue
}
}
}
fields = nil
if len(result) == 0 {
return
}
var timestamp time.Time
if len(t) > 0 {
timestamp = t[0]
} else {
timestamp = time.Now()
}
if ac.prefix != "" {
measurement = ac.prefix + measurement
}
m, err := telegraf.NewMetric(measurement, tags, result, timestamp)
if err != nil {
log.Printf("Error adding point [%s]: %s\n", measurement, err.Error())
return
}
if ac.debug {
fmt.Println("> " + m.String())
}
ac.metrics <- m
}
func (ac *accumulator) Debug() bool {
return ac.debug
}
func (ac *accumulator) SetDebug(debug bool) {
ac.debug = debug
}
func (ac *accumulator) setDefaultTags(tags map[string]string) {
ac.defaultTags = tags
}
func (ac *accumulator) addDefaultTag(key, value string) {
ac.defaultTags[key] = value
}

agent/agent.go

@@ -1,4 +1,4 @@
package telegraf
package agent
import (
cryptorand "crypto/rand"
@@ -11,12 +11,9 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal/config"
"github.com/influxdata/telegraf/internal/models"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/outputs"
"github.com/influxdata/influxdb/client/v2"
)
// Agent runs telegraf and collects data based on the given config
@@ -47,8 +44,10 @@ func NewAgent(config *config.Config) (*Agent, error) {
// Connect connects to all configured outputs
func (a *Agent) Connect() error {
for _, o := range a.Config.Outputs {
o.Quiet = a.Config.Agent.Quiet
switch ot := o.Output.(type) {
case outputs.ServiceOutput:
case telegraf.ServiceOutput:
if err := ot.Start(); err != nil {
log.Printf("Service for output %s failed to start, exiting\n%s\n",
o.Name, err.Error())
@@ -61,7 +60,8 @@ func (a *Agent) Connect() error {
}
err := o.Output.Connect()
if err != nil {
log.Printf("Failed to connect to output %s, retrying in 15s, error was '%s' \n", o.Name, err)
log.Printf("Failed to connect to output %s, retrying in 15s, "+
"error was '%s' \n", o.Name, err)
time.Sleep(15 * time.Second)
err = o.Output.Connect()
if err != nil {
@@ -81,14 +81,14 @@ func (a *Agent) Close() error {
for _, o := range a.Config.Outputs {
err = o.Output.Close()
switch ot := o.Output.(type) {
case outputs.ServiceOutput:
case telegraf.ServiceOutput:
ot.Stop()
}
}
return err
}
func panicRecover(input *models.RunningInput) {
func panicRecover(input *internal_models.RunningInput) {
if err := recover(); err != nil {
trace := make([]byte, 2048)
runtime.Stack(trace, true)
@@ -102,7 +102,7 @@ func panicRecover(input *models.RunningInput) {
// gatherParallel runs the inputs that are using the same reporting interval
// as the telegraf agent.
func (a *Agent) gatherParallel(pointChan chan *client.Point) error {
func (a *Agent) gatherParallel(metricC chan telegraf.Metric) error {
var wg sync.WaitGroup
start := time.Now()
@@ -115,13 +115,13 @@ func (a *Agent) gatherParallel(pointChan chan *client.Point) error {
wg.Add(1)
counter++
go func(input *models.RunningInput) {
go func(input *internal_models.RunningInput) {
defer panicRecover(input)
defer wg.Done()
acc := NewAccumulator(input.Config, pointChan)
acc := NewAccumulator(input.Config, metricC)
acc.SetDebug(a.Config.Agent.Debug)
acc.SetDefaultTags(a.Config.Tags)
acc.setDefaultTags(a.Config.Tags)
if jitter != 0 {
nanoSleep := rand.Int63n(jitter)
@@ -159,8 +159,8 @@ func (a *Agent) gatherParallel(pointChan chan *client.Point) error {
// reporting interval.
func (a *Agent) gatherSeparate(
shutdown chan struct{},
input *models.RunningInput,
pointChan chan *client.Point,
input *internal_models.RunningInput,
metricC chan telegraf.Metric,
) error {
defer panicRecover(input)
@@ -170,9 +170,9 @@ func (a *Agent) gatherSeparate(
var outerr error
start := time.Now()
acc := NewAccumulator(input.Config, pointChan)
acc := NewAccumulator(input.Config, metricC)
acc.SetDebug(a.Config.Agent.Debug)
acc.SetDefaultTags(a.Config.Tags)
acc.setDefaultTags(a.Config.Tags)
if err := input.Input.Gather(acc); err != nil {
log.Printf("Error in input [%s]: %s", input.Name, err)
@@ -202,13 +202,13 @@ func (a *Agent) gatherSeparate(
func (a *Agent) Test() error {
shutdown := make(chan struct{})
defer close(shutdown)
pointChan := make(chan *client.Point)
metricC := make(chan telegraf.Metric)
// dummy receiver for the point channel
go func() {
for {
select {
case <-pointChan:
case <-metricC:
// do nothing
case <-shutdown:
return
@@ -217,7 +217,7 @@ func (a *Agent) Test() error {
}()
for _, input := range a.Config.Inputs {
acc := NewAccumulator(input.Config, pointChan)
acc := NewAccumulator(input.Config, metricC)
acc.SetDebug(true)
fmt.Printf("* Plugin: %s, Collection 1\n", input.Name)
@@ -244,13 +244,13 @@ func (a *Agent) Test() error {
return nil
}
// flush writes a list of points to all configured outputs
// flush writes a list of metrics to all configured outputs
func (a *Agent) flush() {
var wg sync.WaitGroup
wg.Add(len(a.Config.Outputs))
for _, o := range a.Config.Outputs {
go func(output *models.RunningOutput) {
go func(output *internal_models.RunningOutput) {
defer wg.Done()
err := output.Write()
if err != nil {
@@ -263,8 +263,8 @@ func (a *Agent) flush() {
wg.Wait()
}
// flusher monitors the points input channel and flushes on the minimum interval
func (a *Agent) flusher(shutdown chan struct{}, pointChan chan *client.Point) error {
// flusher monitors the metrics input channel and flushes on the minimum interval
func (a *Agent) flusher(shutdown chan struct{}, metricC chan telegraf.Metric) error {
// Inelegant, but this sleep is to allow the Gather threads to run, so that
// the flusher will flush after metrics are collected.
time.Sleep(time.Millisecond * 200)
@@ -274,14 +274,14 @@ func (a *Agent) flusher(shutdown chan struct{}, pointChan chan *client.Point) er
for {
select {
case <-shutdown:
log.Println("Hang on, flushing any cached points before shutdown")
log.Println("Hang on, flushing any cached metrics before shutdown")
a.flush()
return nil
case <-ticker.C:
a.flush()
case pt := <-pointChan:
case m := <-metricC:
for _, o := range a.Config.Outputs {
o.AddPoint(pt)
o.AddMetric(m)
}
}
}
@@ -321,8 +321,24 @@ func (a *Agent) Run(shutdown chan struct{}) error {
a.Config.Agent.Interval.Duration, a.Config.Agent.Debug, a.Config.Agent.Quiet,
a.Config.Agent.Hostname, a.Config.Agent.FlushInterval.Duration)
// channel shared between all input threads for accumulating points
pointChan := make(chan *client.Point, 1000)
// channel shared between all input threads for accumulating metrics
metricC := make(chan telegraf.Metric, 10000)
for _, input := range a.Config.Inputs {
// Start service of any ServicePlugins
switch p := input.Input.(type) {
case telegraf.ServiceInput:
acc := NewAccumulator(input.Config, metricC)
acc.SetDebug(a.Config.Agent.Debug)
acc.setDefaultTags(a.Config.Tags)
if err := p.Start(acc); err != nil {
log.Printf("Service for input %s failed to start, exiting\n%s\n",
input.Name, err.Error())
return err
}
defer p.Stop()
}
}
// Round collection to nearest interval by sleeping
if a.Config.Agent.RoundInterval {
@@ -334,32 +350,20 @@ func (a *Agent) Run(shutdown chan struct{}) error {
wg.Add(1)
go func() {
defer wg.Done()
if err := a.flusher(shutdown, pointChan); err != nil {
if err := a.flusher(shutdown, metricC); err != nil {
log.Printf("Flusher routine failed, exiting: %s\n", err.Error())
close(shutdown)
}
}()
for _, input := range a.Config.Inputs {
// Start service of any ServicePlugins
switch p := input.Input.(type) {
case inputs.ServiceInput:
if err := p.Start(); err != nil {
log.Printf("Service for input %s failed to start, exiting\n%s\n",
input.Name, err.Error())
return err
}
defer p.Stop()
}
// Special handling for inputs that have their own collection interval
// configured. Default intervals are handled below with gatherParallel
if input.Config.Interval != 0 {
wg.Add(1)
go func(input *models.RunningInput) {
go func(input *internal_models.RunningInput) {
defer wg.Done()
if err := a.gatherSeparate(shutdown, input, pointChan); err != nil {
if err := a.gatherSeparate(shutdown, input, metricC); err != nil {
log.Printf(err.Error())
}
}(input)
@@ -369,7 +373,7 @@ func (a *Agent) Run(shutdown chan struct{}) error {
defer wg.Wait()
for {
if err := a.gatherParallel(pointChan); err != nil {
if err := a.gatherParallel(metricC); err != nil {
log.Printf(err.Error())
}


@@ -1,4 +1,4 @@
package telegraf
package agent
import (
"github.com/stretchr/testify/assert"
@@ -16,35 +16,35 @@ import (
func TestAgent_LoadPlugin(t *testing.T) {
c := config.NewConfig()
c.InputFilters = []string{"mysql"}
err := c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err := c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ := NewAgent(c)
assert.Equal(t, 1, len(a.Config.Inputs))
c = config.NewConfig()
c.InputFilters = []string{"foo"}
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 0, len(a.Config.Inputs))
c = config.NewConfig()
c.InputFilters = []string{"mysql", "foo"}
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 1, len(a.Config.Inputs))
c = config.NewConfig()
c.InputFilters = []string{"mysql", "redis"}
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 2, len(a.Config.Inputs))
c = config.NewConfig()
c.InputFilters = []string{"mysql", "foo", "redis", "bar"}
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 2, len(a.Config.Inputs))
@@ -53,42 +53,42 @@ func TestAgent_LoadPlugin(t *testing.T) {
func TestAgent_LoadOutput(t *testing.T) {
c := config.NewConfig()
c.OutputFilters = []string{"influxdb"}
err := c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err := c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ := NewAgent(c)
assert.Equal(t, 2, len(a.Config.Outputs))
c = config.NewConfig()
c.OutputFilters = []string{"kafka"}
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 1, len(a.Config.Outputs))
c = config.NewConfig()
c.OutputFilters = []string{}
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 3, len(a.Config.Outputs))
c = config.NewConfig()
c.OutputFilters = []string{"foo"}
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 0, len(a.Config.Outputs))
c = config.NewConfig()
c.OutputFilters = []string{"influxdb", "foo"}
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 2, len(a.Config.Outputs))
c = config.NewConfig()
c.OutputFilters = []string{"influxdb", "kafka"}
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
assert.Equal(t, 3, len(c.Outputs))
a, _ = NewAgent(c)
@@ -96,7 +96,7 @@ func TestAgent_LoadOutput(t *testing.T) {
c = config.NewConfig()
c.OutputFilters = []string{"influxdb", "foo", "kafka", "bar"}
err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 3, len(a.Config.Outputs))


@@ -4,14 +4,17 @@ machine:
post:
- sudo service zookeeper stop
- go version
- go version | grep 1.5.2 || sudo rm -rf /usr/local/go
- wget https://storage.googleapis.com/golang/go1.5.2.linux-amd64.tar.gz
- sudo tar -C /usr/local -xzf go1.5.2.linux-amd64.tar.gz
- go version | grep 1.5.3 || sudo rm -rf /usr/local/go
- wget https://storage.googleapis.com/golang/go1.5.3.linux-amd64.tar.gz
- sudo tar -C /usr/local -xzf go1.5.3.linux-amd64.tar.gz
- go version
dependencies:
override:
- docker info
post:
- gem install fpm
- sudo apt-get install -y rpm python-boto
test:
override:


@@ -9,8 +9,9 @@ import (
"strings"
"syscall"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/agent"
"github.com/influxdata/telegraf/internal/config"
_ "github.com/influxdata/telegraf/plugins/inputs/all"
_ "github.com/influxdata/telegraf/plugins/outputs/all"
)
@@ -87,11 +88,11 @@ func main() {
reload <- true
for <-reload {
reload <- false
flag.Usage = usageExit
flag.Usage = func() { usageExit(0) }
flag.Parse()
if flag.NFlag() == 0 {
usageExit()
usageExit(0)
}
var inputFilters []string
@@ -148,9 +149,8 @@ func main() {
log.Fatal(err)
}
} else {
fmt.Println("Usage: Telegraf")
flag.PrintDefaults()
return
fmt.Println("You must specify a config file. See telegraf --help")
os.Exit(1)
}
if *fConfigDirectoryLegacy != "" {
@@ -173,7 +173,7 @@ func main() {
log.Fatalf("Error: no inputs found, did you provide a valid config file?")
}
ag, err := telegraf.NewAgent(c)
ag, err := agent.NewAgent(c)
if err != nil {
log.Fatal(err)
}
@@ -235,7 +235,7 @@ func main() {
}
}
func usageExit() {
func usageExit(rc int) {
fmt.Println(usage)
os.Exit(0)
os.Exit(rc)
}


@@ -9,9 +9,9 @@ To generate a file with specific inputs and outputs, you can use the
-input-filter and -output-filter flags:
`telegraf -sample-config -input-filter cpu:mem:net:swap -output-filter influxdb:kafka`
## `[tags]` Configuration
## `[global_tags]` Configuration
Global tags can be specified in the `[tags]` section of the config file in
Global tags can be specified in the `[global_tags]` section of the config file in
key="value" format. All metrics being gathered on this host will be tagged
with the tags specified here.
@@ -58,10 +58,14 @@ you can configure that here.
There are also filters that can be configured per input:
* **pass**: An array of strings that is used to filter metrics generated by the
* **namepass**: An array of strings that is used to filter metrics generated by the
current input. Each string in the array is tested as a glob match against
measurement names and if it matches, the metric is emitted.
* **namedrop**: The inverse of namepass; if a measurement name matches, it is not emitted.
* **fieldpass**: An array of strings that is used to filter metrics generated by the
current input. Each string in the array is tested as a glob match against field names
and if it matches, the field is emitted.
* **drop**: The inverse of pass, if a field name matches, it is not emitted.
* **fielddrop**: The inverse of fieldpass; if a field name matches, it is not emitted.
* **tagpass**: tag names and arrays of strings that are used to filter
measurements by the current input. Each string in the array is tested as a glob
match against the tag name, and if it matches the measurement is emitted.
@@ -76,7 +80,7 @@ measurements at a 10s interval and will collect per-cpu data, dropping any
fields which begin with `time_`.
```toml
[tags]
[global_tags]
dc = "denver-1"
[agent]
@@ -117,18 +121,32 @@ fields which begin with `time_`.
path = [ "/opt", "/home*" ]
```
#### Input Config: pass and drop
#### Input Config: fieldpass and fielddrop
```toml
# Drop all metrics for guest & steal CPU usage
[[inputs.cpu]]
percpu = false
totalcpu = true
drop = ["usage_guest", "usage_steal"]
fielddrop = ["usage_guest", "usage_steal"]
# Only store inode related metrics for disks
[[inputs.disk]]
pass = ["inodes*"]
fieldpass = ["inodes*"]
```
#### Input Config: namepass and namedrop
```toml
# Drop all metrics about containers for kubelet
[[inputs.prometheus]]
urls = ["http://kube-node-1:4194/metrics"]
namedrop = ["container_"]
# Only store rest client related metrics for kubelet
[[inputs.prometheus]]
urls = ["http://kube-node-1:4194/metrics"]
namepass = ["rest_client_"]
```
#### Input config: prefix, suffix, and override
@@ -191,7 +209,7 @@ configuring each output sink is different, but examples can be
found by running `telegraf -sample-config`.
Outputs also support the same configurable options as inputs
(pass, drop, tagpass, tagdrop)
(namepass, namedrop, tagpass, tagdrop)
```toml
[[outputs.influxdb]]
@@ -199,14 +217,14 @@ Outputs also support the same configurable options as inputs
database = "telegraf"
precision = "s"
# Drop all measurements that start with "aerospike"
drop = ["aerospike*"]
namedrop = ["aerospike*"]
[[outputs.influxdb]]
urls = [ "http://localhost:8086" ]
database = "telegraf-aerospike-data"
precision = "s"
# Only accept aerospike data:
pass = ["aerospike*"]
namepass = ["aerospike*"]
[[outputs.influxdb]]
urls = [ "http://localhost:8086" ]

docs/DATA_FORMATS_INPUT.md

@@ -0,0 +1,274 @@
# Telegraf Input Data Formats
Telegraf metrics, like InfluxDB
[points](https://docs.influxdata.com/influxdb/v0.10/write_protocols/line/),
are a combination of four basic parts:
1. Measurement Name
1. Tags
1. Fields
1. Timestamp
These four parts are easily defined when using InfluxDB line-protocol as a
data format. But there are other data formats that users may want to use which
require more advanced configuration to create usable Telegraf metrics.
Plugins such as `exec` and `kafka_consumer` parse textual data. Up until now,
these plugins were statically configured to parse just a single
data format. `exec` supported only JSON, and `kafka_consumer` only
supported data in InfluxDB line-protocol.
But now we are normalizing the parsing of various data formats across all
plugins that can support it. You will be able to identify a plugin that supports
different data formats by the presence of a `data_format` config option, for
example, in the exec plugin:
```toml
[[inputs.exec]]
## Commands array
commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]
## measurement name suffix (for separating different commands)
name_suffix = "_mycollector"
## Data format to consume. This can be "json", "influx" or "graphite"
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "json"
## Additional configuration options go here
```
Each data_format has an additional set of configuration options available, which
I'll go over below.
## Influx:
There are no additional configuration options for InfluxDB line-protocol. The
metrics are parsed directly into Telegraf metrics.
#### Influx Configuration:
```toml
[[inputs.exec]]
## Commands array
commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]
## measurement name suffix (for separating different commands)
name_suffix = "_mycollector"
## Data format to consume. This can be "json", "influx" or "graphite"
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "influx"
```
## JSON:
The JSON data format flattens JSON into metric _fields_. For example, this JSON:
```json
{
"a": 5,
"b": {
"c": 6
}
}
```
Would get translated into _fields_ of a measurement:
```
myjsonmetric a=5,b_c=6
```
The _measurement_ _name_ is usually the name of the plugin,
but can be overridden using the `name_override` config option.
#### JSON Configuration:
The JSON data format supports specifying "tag keys". If specified, keys
will be searched for in the root-level of the JSON blob. If the key(s) exist,
they will be applied as tags to the Telegraf metrics.
For example, if you had this configuration:
```toml
[[inputs.exec]]
## Commands array
commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]
## measurement name suffix (for separating different commands)
name_suffix = "_mycollector"
## Data format to consume. This can be "json", "influx" or "graphite"
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "json"
## List of tag names to extract from top-level of JSON server response
tag_keys = [
"my_tag_1",
"my_tag_2"
]
```
with this JSON output from a command:
```json
{
"a": 5,
"b": {
"c": 6
},
"my_tag_1": "foo"
}
```
Your Telegraf metrics would get tagged with "my_tag_1"
```
exec_mycollector,my_tag_1=foo a=5,b_c=6
```
## Graphite:
The Graphite data format translates graphite _dot_ buckets directly into
telegraf measurement names, with a single value field, and without any tags. For
more advanced options, Telegraf supports specifying "templates" to translate
graphite buckets into Telegraf metrics.
#### Separator:
You can specify a separator to use for the parsed metrics.
By default, it will leave the metrics with a "." separator.
Setting `separator = "_"` will translate:
```
cpu.usage.idle 99
=> cpu_usage_idle value=99
```
#### Measurement/Tag Templates:
The most basic template is to specify a single transformation to apply to all
incoming metrics. _measurement_ is a special keyword that tells Telegraf which
parts of the graphite bucket to combine into the measurement name. It can have a
trailing `*` to indicate that the remainder of the metric should be used.
Other words are considered tag keys. So the following template:
```toml
templates = [
"region.measurement*"
]
```
would result in the following Graphite -> Telegraf transformation.
```
us-west.cpu.load 100
=> cpu.load,region=us-west value=100
```
#### Field Templates:
There is also a _field_ keyword, which can only be specified once.
The field keyword tells Telegraf to give the metric that field name.
So the following template:
```toml
templates = [
"measurement.measurement.field.region"
]
```
would result in the following Graphite -> Telegraf transformation.
```
cpu.usage.idle.us-west 100
=> cpu_usage,region=us-west idle=100
```
#### Filter Templates:
Users can also filter the template(s) to use based on the name of the bucket,
using glob matching, like so:
```toml
templates = [
"cpu.* measurement.measurement.region",
"mem.* measurement.measurement.host"
]
```
which would result in the following transformation:
```
cpu.load.us-west 100
=> cpu_load,region=us-west value=100
mem.cached.localhost 256
=> mem_cached,host=localhost value=256
```
#### Adding Tags:
Additional tags that do not exist on the received metric can be added by
specifying them after the template pattern.
Tags have the same format as the line protocol.
Multiple tags are separated by commas.
```toml
templates = [
"measurement.measurement.field.region datacenter=1a"
]
```
would result in the following Graphite -> Telegraf transformation.
```
cpu.usage.idle.us-west 100
=> cpu_usage,region=us-west,datacenter=1a idle=100
```
There are many more options available.
[More details can be found here](https://github.com/influxdata/influxdb/tree/master/services/graphite#templates).
#### Graphite Configuration:
```toml
[[inputs.exec]]
## Commands array
commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]
## measurement name suffix (for separating different commands)
name_suffix = "_mycollector"
## Data format to consume. This can be "json", "influx" or "graphite" (line-protocol)
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "graphite"
## This string will be used to join the matched values.
separator = "_"
## Each template line requires a template pattern. It can have an optional
## filter before the template, separated by spaces. It can also have optional
## extra tags following the template. Multiple tags should be separated by
## commas with no spaces, similar to the line protocol format. There can be
## only one default template.
## Templates support the following formats:
## 1. filter + template
## 2. filter + template + extra tag
## 3. filter + template with field key
## 4. default template
templates = [
"*.app env.service.resource.measurement",
"stats.* .host.measurement* region=us-west,agent=sensu",
"stats2.* .host.measurement.field",
"measurement*"
]
```
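
To make the templates above concrete, here is how a couple of sample buckets
would be transformed under them (illustrative inputs of my own, following the
template rules described earlier):

```
stats.srv01.cpu.load 100
=> cpu_load,host=srv01,region=us-west,agent=sensu value=100

stats2.srv01.mem.cached 256
=> mem,host=srv01 cached=256
```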


@@ -0,0 +1,97 @@
# Telegraf Output Data Formats
Telegraf metrics, like InfluxDB
[points](https://docs.influxdata.com/influxdb/v0.10/write_protocols/line/),
are a combination of four basic parts:
1. Measurement Name
1. Tags
1. Fields
1. Timestamp
In InfluxDB line protocol, these 4 parts are easily defined in textual form:
```
measurement_name[,tag1=val1,...] field1=val1[,field2=val2,...] [timestamp]
```
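For example, a basic CPU metric in this form might look like (illustrative
values):
```
cpu,cpu=cpu-total,host=tars usage_idle=98.09 1455320660004257758
```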
For Telegraf outputs that write textual data (such as `kafka`, `mqtt`, and `file`),
InfluxDB line protocol was originally the only available output format. But now
we are normalizing telegraf metric "serializers" into a
[plugin-like interface](https://github.com/influxdata/telegraf/tree/master/plugins/serializers)
across all output plugins that can support it.
You will be able to identify a plugin that supports different data formats
by the presence of a `data_format`
config option, for example, in the `file` output plugin:
```toml
[[outputs.file]]
## Files to write to, "stdout" is a specially handled file.
files = ["stdout"]
## Data format to output. This can be "influx" or "graphite"
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "influx"
## Additional configuration options go here
```
Each data_format has an additional set of configuration options available, which
I'll go over below.
## Influx:
There are no additional configuration options for InfluxDB line-protocol. The
metrics are serialized directly into InfluxDB line-protocol.
#### Influx Configuration:
```toml
[[outputs.file]]
## Files to write to, "stdout" is a specially handled file.
files = ["stdout", "/tmp/metrics.out"]
## Data format to output. This can be "influx" or "graphite"
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "influx"
```
## Graphite:
The Graphite data format translates Telegraf metrics into _dot_ buckets.
The format is:
```
[prefix].[host tag].[all tags (alphabetical)].[measurement name].[field name] value timestamp
```
Which means the following influx metric -> graphite conversion would happen:
```
cpu,cpu=cpu-total,dc=us-east-1,host=tars usage_idle=98.09,usage_user=0.89 1455320660004257758
=>
tars.cpu-total.us-east-1.cpu.usage_user 0.89 1455320690
tars.cpu-total.us-east-1.cpu.usage_idle 98.09 1455320690
```
`prefix` is a configuration option when using the graphite output data format.
#### Graphite Configuration:
```toml
[[outputs.file]]
## Files to write to, "stdout" is a specially handled file.
files = ["stdout", "/tmp/metrics.out"]
## Data format to output. This can be "influx" or "graphite"
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "influx"
prefix = "telegraf"
```
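With `data_format = "graphite"` and `prefix = "telegraf"` set as above, the
earlier example lines would simply gain the prefix:
```
telegraf.tars.cpu-total.us-east-1.cpu.usage_idle 98.09 1455320690
```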


@@ -10,29 +10,43 @@
# file would generate.
# Global tags can be specified here in key="value" format.
[tags]
[global_tags]
# dc = "us-east-1" # will tag all metrics with dc=us-east-1
# rack = "1a"
# Configuration for telegraf agent
[agent]
# Default data collection interval for all plugins
## Default data collection interval for all inputs
interval = "10s"
# Rounds collection interval to 'interval'
# ie, if interval="10s" then always collect on :00, :10, :20, etc.
## Rounds collection interval to 'interval'
## ie, if interval="10s" then always collect on :00, :10, :20, etc.
round_interval = true
# Default data flushing interval for all outputs. You should not set this below
# interval. Maximum flush_interval will be flush_interval + flush_jitter
## Telegraf will cache metric_buffer_limit metrics for each output, and will
## flush this buffer on a successful write.
metric_buffer_limit = 10000
## Flush the buffer whenever full, regardless of flush_interval.
flush_buffer_when_full = true
## Collection jitter is used to jitter the collection by a random amount.
## Each plugin will sleep for a random time within jitter before collecting.
## This can be used to avoid many plugins querying things like sysfs at the
## same time, which can have a measurable effect on the system.
collection_jitter = "0s"
## Default flushing interval for all outputs. You shouldn't set this below
## interval. Maximum flush_interval will be flush_interval + flush_jitter
flush_interval = "10s"
# Jitter the flush interval by a random amount. This is primarily to avoid
# large write spikes for users running a large number of telegraf instances.
# ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
## Jitter the flush interval by a random amount. This is primarily to avoid
## large write spikes for users running a large number of telegraf instances.
## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
flush_jitter = "0s"
# Run telegraf in debug mode
## Run telegraf in debug mode
debug = false
# Override default hostname, if empty use os.Hostname()
## Run telegraf in quiet mode
quiet = false
## Override default hostname, if empty use os.Hostname()
hostname = ""
@@ -49,13 +63,13 @@
urls = ["http://localhost:8086"] # required
# The target database for metrics (telegraf will create it if not exists)
database = "telegraf" # required
# Precision of writes, valid values are n, u, ms, s, m, and h
# Precision of writes, valid values are "ns", "us" (or "µs"), "ms", "s", "m", "h".
# note: using second precision greatly helps InfluxDB compression
precision = "s"
# Connection timeout (for the connection with InfluxDB), formatted as a string.
# If not provided, will default to 0 (no timeout)
# timeout = "5s"
## Write timeout (for the InfluxDB client), formatted as a string.
## If not provided, will default to 5s. 0s means no timeout (not recommended).
timeout = "5s"
# username = "telegraf"
# password = "metricsmetricsmetricsmetrics"
# Set the user agent for HTTP POSTs (can be useful for log differentiation)
@@ -75,7 +89,7 @@
# Whether to report total system cpu stats or not
totalcpu = true
# Comment this line if you want the raw CPU time metrics
drop = ["time_*"]
fielddrop = ["time_*"]
# Read metrics about disk usage by mount point
[[inputs.disk]]
@@ -83,6 +97,10 @@
# Setting mountpoints will restrict the stats to the specified mountpoints.
# mount_points=["/"]
# Ignore some mountpoints by filesystem type. For example (dev)tmpfs (usually
# present on /run, /var/run, /dev/shm or /dev).
ignore_fs = ["tmpfs", "devtmpfs"]
# Read metrics about disk IO by device
[[inputs.diskio]]
# By default, telegraf will gather stats for all devices including

input.go

@@ -0,0 +1,31 @@
package telegraf
type Input interface {
// SampleConfig returns the default configuration of the Input
SampleConfig() string
// Description returns a one-sentence description on the Input
Description() string
// Gather takes in an accumulator and adds the metrics that the Input
// gathers. This is called every "interval"
Gather(Accumulator) error
}
type ServiceInput interface {
// SampleConfig returns the default configuration of the Input
SampleConfig() string
// Description returns a one-sentence description on the Input
Description() string
// Gather takes in an accumulator and adds the metrics that the Input
// gathers. This is called every "interval"
Gather(Accumulator) error
// Start starts the ServiceInput's service, whatever that may be
Start(Accumulator) error
// Stop stops the services and closes any necessary channels and connections
Stop()
}
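// A rough sketch (editor's illustration, not part of this changeset) of a
// plugin satisfying ServiceInput. The Mock names and the Accumulator's
// AddFields call are assumptions for the example, not code from this
// repository:
//
//	type Mock struct {
//		stop chan struct{}
//	}
//
//	func (m *Mock) SampleConfig() string { return "# mock has no configuration\n" }
//
//	func (m *Mock) Description() string { return "A mock service input, for illustration" }
//
//	// Gather emits a single metric each collection interval.
//	func (m *Mock) Gather(acc Accumulator) error {
//		acc.AddFields("mock", map[string]interface{}{"value": 42}, nil)
//		return nil
//	}
//
//	// Start launches whatever background listener the service needs.
//	func (m *Mock) Start(acc Accumulator) error {
//		m.stop = make(chan struct{})
//		return nil
//	}
//
//	// Stop shuts the background work down.
//	func (m *Mock) Stop() {
//		close(m.stop)
//	}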


@@ -10,10 +10,13 @@ import (
"strings"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/internal/models"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/outputs"
"github.com/influxdata/telegraf/plugins/parsers"
"github.com/influxdata/telegraf/plugins/serializers"
"github.com/influxdata/config"
"github.com/naoina/toml/ast"
@@ -28,8 +31,8 @@ type Config struct {
OutputFilters []string
Agent *AgentConfig
Inputs []*models.RunningInput
Outputs []*models.RunningOutput
Inputs []*internal_models.RunningInput
Outputs []*internal_models.RunningOutput
}
func NewConfig() *Config {
@@ -43,8 +46,8 @@ func NewConfig() *Config {
},
Tags: make(map[string]string),
Inputs: make([]*models.RunningInput, 0),
Outputs: make([]*models.RunningOutput, 0),
Inputs: make([]*internal_models.RunningInput, 0),
Outputs: make([]*internal_models.RunningOutput, 0),
InputFilters: make([]string, 0),
OutputFilters: make([]string, 0),
}
@@ -65,7 +68,7 @@ type AgentConfig struct {
// same time, which can have a measurable effect on the system.
CollectionJitter internal.Duration
// Interval at which to flush data
// FlushInterval is the Interval at which to flush data
FlushInterval internal.Duration
// FlushJitter Jitters the flush interval by a random amount.
@@ -79,6 +82,11 @@ type AgentConfig struct {
// full, the oldest metrics will be overwritten.
MetricBufferLimit int
// FlushBufferWhenFull tells Telegraf to flush the metric buffer whenever
// it fills up, regardless of FlushInterval. Setting this option to true
// does _not_ deactivate FlushInterval.
FlushBufferWhenFull bool
// TODO(cam): Remove UTC and Precision parameters, they are no longer
// valid for the agent config. Leaving them here for now for backwards-
// compatibility
@@ -125,7 +133,7 @@ func (c *Config) ListTags() string {
return strings.Join(tags, " ")
}
var header = `# Telegraf configuration
var header = `# Telegraf Configuration
# Telegraf is entirely plugin driven. All metrics are gathered from the
# declared inputs, and sent to the declared outputs.
@@ -137,63 +145,62 @@ var header = `# Telegraf configuration
# file would generate.
# Global tags can be specified here in key="value" format.
[tags]
[global_tags]
# dc = "us-east-1" # will tag all metrics with dc=us-east-1
# rack = "1a"
# Configuration for telegraf agent
[agent]
# Default data collection interval for all inputs
## Default data collection interval for all inputs
interval = "10s"
# Rounds collection interval to 'interval'
# ie, if interval="10s" then always collect on :00, :10, :20, etc.
## Rounds collection interval to 'interval'
## ie, if interval="10s" then always collect on :00, :10, :20, etc.
round_interval = true
# Telegraf will cache metric_buffer_limit metrics for each output, and will
# flush this buffer on a successful write.
## Telegraf will cache metric_buffer_limit metrics for each output, and will
## flush this buffer on a successful write.
metric_buffer_limit = 10000
## Flush the buffer whenever full, regardless of flush_interval.
flush_buffer_when_full = true
# Collection jitter is used to jitter the collection by a random amount.
# Each plugin will sleep for a random time within jitter before collecting.
# This can be used to avoid many plugins querying things like sysfs at the
# same time, which can have a measurable effect on the system.
## Collection jitter is used to jitter the collection by a random amount.
## Each plugin will sleep for a random time within jitter before collecting.
## This can be used to avoid many plugins querying things like sysfs at the
## same time, which can have a measurable effect on the system.
collection_jitter = "0s"
# Default data flushing interval for all outputs. You should not set this below
# interval. Maximum flush_interval will be flush_interval + flush_jitter
## Default flushing interval for all outputs. You shouldn't set this below
## interval. Maximum flush_interval will be flush_interval + flush_jitter
flush_interval = "10s"
# Jitter the flush interval by a random amount. This is primarily to avoid
# large write spikes for users running a large number of telegraf instances.
# ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
## Jitter the flush interval by a random amount. This is primarily to avoid
## large write spikes for users running a large number of telegraf instances.
## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
flush_jitter = "0s"
# Run telegraf in debug mode
## Run telegraf in debug mode
debug = false
# Run telegraf in quiet mode
## Run telegraf in quiet mode
quiet = false
# Override default hostname, if empty use os.Hostname()
## Override default hostname, if empty use os.Hostname()
hostname = ""
###############################################################################
# OUTPUTS #
###############################################################################
#
# OUTPUTS:
#
`
var pluginHeader = `
###############################################################################
# INPUTS #
###############################################################################
#
# INPUTS:
#
`
var serviceInputHeader = `
###############################################################################
# SERVICE INPUTS #
###############################################################################
#
# SERVICE INPUTS:
#
`
// PrintSampleConfig prints the sample config
@@ -227,13 +234,13 @@ func PrintSampleConfig(pluginFilters []string, outputFilters []string) {
// Print Inputs
fmt.Printf(pluginHeader)
servInputs := make(map[string]inputs.ServiceInput)
servInputs := make(map[string]telegraf.ServiceInput)
for _, pname := range pnames {
creator := inputs.Inputs[pname]
input := creator()
switch p := input.(type) {
case inputs.ServiceInput:
case telegraf.ServiceInput:
servInputs[pname] = p
continue
}
@@ -332,9 +339,9 @@ func (c *Config) LoadConfig(path string) error {
log.Printf("Could not parse [agent] config\n")
return err
}
case "tags":
case "global_tags", "tags":
if err = config.UnmarshalTable(subTable, c.Tags); err != nil {
log.Printf("Could not parse [tags] config\n")
log.Printf("Could not parse [global_tags] config\n")
return err
}
case "outputs":
@@ -394,6 +401,17 @@ func (c *Config) addOutput(name string, table *ast.Table) error {
}
output := creator()
// If the output has a SetSerializer function, then this means it can write
// arbitrary types of output, so build the serializer and set it.
switch t := output.(type) {
case serializers.SerializerOutput:
serializer, err := buildSerializer(name, table)
if err != nil {
return err
}
t.SetSerializer(serializer)
}
outputConfig, err := buildOutput(name, table)
if err != nil {
return err
@@ -403,11 +421,11 @@ func (c *Config) addOutput(name string, table *ast.Table) error {
return err
}
ro := models.NewRunningOutput(name, output, outputConfig)
ro := internal_models.NewRunningOutput(name, output, outputConfig)
if c.Agent.MetricBufferLimit > 0 {
ro.PointBufferLimit = c.Agent.MetricBufferLimit
ro.MetricBufferLimit = c.Agent.MetricBufferLimit
}
ro.Quiet = c.Agent.Quiet
ro.FlushBufferWhenFull = c.Agent.FlushBufferWhenFull
c.Outputs = append(c.Outputs, ro)
return nil
}
@@ -427,6 +445,17 @@ func (c *Config) addInput(name string, table *ast.Table) error {
}
input := creator()
// If the input has a SetParser function, then this means it can accept
// arbitrary types of input, so build the parser and set it.
switch t := input.(type) {
case parsers.ParserInput:
parser, err := buildParser(name, table)
if err != nil {
return err
}
t.SetParser(parser)
}
pluginConfig, err := buildInput(name, table)
if err != nil {
return err
@@ -436,7 +465,7 @@ func (c *Config) addInput(name string, table *ast.Table) error {
return err
}
rp := &models.RunningInput{
rp := &internal_models.RunningInput{
Name: name,
Input: input,
Config: pluginConfig,
@@ -445,18 +474,19 @@ func (c *Config) addInput(name string, table *ast.Table) error {
return nil
}
// buildFilter builds a Filter (tagpass/tagdrop/pass/drop) to
// be inserted into the models.OutputConfig/models.InputConfig to be used for prefix
// buildFilter builds a Filter
// (tagpass/tagdrop/namepass/namedrop/fieldpass/fielddrop) to
// be inserted into the internal_models.OutputConfig/internal_models.InputConfig to be used for prefix
// filtering on tags and measurements
func buildFilter(tbl *ast.Table) models.Filter {
f := models.Filter{}
func buildFilter(tbl *ast.Table) internal_models.Filter {
f := internal_models.Filter{}
if node, ok := tbl.Fields["pass"]; ok {
if node, ok := tbl.Fields["namepass"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if ary, ok := kv.Value.(*ast.Array); ok {
for _, elem := range ary.Value {
if str, ok := elem.(*ast.String); ok {
f.Pass = append(f.Pass, str.Value)
f.NamePass = append(f.NamePass, str.Value)
f.IsActive = true
}
}
@@ -464,12 +494,12 @@ func buildFilter(tbl *ast.Table) models.Filter {
}
}
if node, ok := tbl.Fields["drop"]; ok {
if node, ok := tbl.Fields["namedrop"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if ary, ok := kv.Value.(*ast.Array); ok {
for _, elem := range ary.Value {
if str, ok := elem.(*ast.String); ok {
f.Drop = append(f.Drop, str.Value)
f.NameDrop = append(f.NameDrop, str.Value)
f.IsActive = true
}
}
@@ -477,11 +507,43 @@ func buildFilter(tbl *ast.Table) models.Filter {
}
}
fields := []string{"pass", "fieldpass"}
for _, field := range fields {
if node, ok := tbl.Fields[field]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if ary, ok := kv.Value.(*ast.Array); ok {
for _, elem := range ary.Value {
if str, ok := elem.(*ast.String); ok {
f.FieldPass = append(f.FieldPass, str.Value)
f.IsActive = true
}
}
}
}
}
}
fields = []string{"drop", "fielddrop"}
for _, field := range fields {
if node, ok := tbl.Fields[field]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if ary, ok := kv.Value.(*ast.Array); ok {
for _, elem := range ary.Value {
if str, ok := elem.(*ast.String); ok {
f.FieldDrop = append(f.FieldDrop, str.Value)
f.IsActive = true
}
}
}
}
}
}
if node, ok := tbl.Fields["tagpass"]; ok {
if subtbl, ok := node.(*ast.Table); ok {
for name, val := range subtbl.Fields {
if kv, ok := val.(*ast.KeyValue); ok {
tagfilter := &models.TagFilter{Name: name}
tagfilter := &internal_models.TagFilter{Name: name}
if ary, ok := kv.Value.(*ast.Array); ok {
for _, elem := range ary.Value {
if str, ok := elem.(*ast.String); ok {
@@ -500,7 +562,7 @@ func buildFilter(tbl *ast.Table) models.Filter {
if subtbl, ok := node.(*ast.Table); ok {
for name, val := range subtbl.Fields {
if kv, ok := val.(*ast.KeyValue); ok {
tagfilter := &models.TagFilter{Name: name}
tagfilter := &internal_models.TagFilter{Name: name}
if ary, ok := kv.Value.(*ast.Array); ok {
for _, elem := range ary.Value {
if str, ok := elem.(*ast.String); ok {
@@ -515,6 +577,10 @@ func buildFilter(tbl *ast.Table) models.Filter {
}
}
delete(tbl.Fields, "namedrop")
delete(tbl.Fields, "namepass")
delete(tbl.Fields, "fielddrop")
delete(tbl.Fields, "fieldpass")
delete(tbl.Fields, "drop")
delete(tbl.Fields, "pass")
delete(tbl.Fields, "tagdrop")
@@ -524,9 +590,9 @@ func buildFilter(tbl *ast.Table) models.Filter {
// buildInput parses input specific items from the ast.Table,
// builds the filter and returns a
// models.InputConfig to be inserted into models.RunningInput
func buildInput(name string, tbl *ast.Table) (*models.InputConfig, error) {
cp := &models.InputConfig{Name: name}
// internal_models.InputConfig to be inserted into internal_models.RunningInput
func buildInput(name string, tbl *ast.Table) (*internal_models.InputConfig, error) {
cp := &internal_models.InputConfig{Name: name}
if node, ok := tbl.Fields["interval"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
@@ -582,13 +648,114 @@ func buildInput(name string, tbl *ast.Table) (*models.InputConfig, error) {
return cp, nil
}
// buildParser grabs the necessary entries from the ast.Table for creating
// a parsers.Parser object, and creates it, which can then be added onto
// an Input object.
func buildParser(name string, tbl *ast.Table) (parsers.Parser, error) {
c := &parsers.Config{}
if node, ok := tbl.Fields["data_format"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
c.DataFormat = str.Value
}
}
}
// Legacy support, exec plugin originally parsed JSON by default.
if name == "exec" && c.DataFormat == "" {
c.DataFormat = "json"
} else if c.DataFormat == "" {
c.DataFormat = "influx"
}
if node, ok := tbl.Fields["separator"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
c.Separator = str.Value
}
}
}
if node, ok := tbl.Fields["templates"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if ary, ok := kv.Value.(*ast.Array); ok {
for _, elem := range ary.Value {
if str, ok := elem.(*ast.String); ok {
c.Templates = append(c.Templates, str.Value)
}
}
}
}
}
if node, ok := tbl.Fields["tag_keys"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if ary, ok := kv.Value.(*ast.Array); ok {
for _, elem := range ary.Value {
if str, ok := elem.(*ast.String); ok {
c.TagKeys = append(c.TagKeys, str.Value)
}
}
}
}
}
c.MetricName = name
delete(tbl.Fields, "data_format")
delete(tbl.Fields, "separator")
delete(tbl.Fields, "templates")
delete(tbl.Fields, "tag_keys")
return parsers.NewParser(c)
}
// buildSerializer grabs the necessary entries from the ast.Table for creating
// a serializers.Serializer object, and creates it, which can then be added onto
// an Output object.
func buildSerializer(name string, tbl *ast.Table) (serializers.Serializer, error) {
c := &serializers.Config{}
if node, ok := tbl.Fields["data_format"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
c.DataFormat = str.Value
}
}
}
if c.DataFormat == "" {
c.DataFormat = "influx"
}
if node, ok := tbl.Fields["prefix"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
c.Prefix = str.Value
}
}
}
delete(tbl.Fields, "data_format")
delete(tbl.Fields, "prefix")
return serializers.NewSerializer(c)
}
// buildOutput parses output specific items from the ast.Table, builds the filter and returns an
// models.OutputConfig to be inserted into models.RunningInput
// internal_models.OutputConfig to be inserted into internal_models.RunningOutput
// Note: error exists in the return for future calls that might require error
func buildOutput(name string, tbl *ast.Table) (*models.OutputConfig, error) {
oc := &models.OutputConfig{
func buildOutput(name string, tbl *ast.Table) (*internal_models.OutputConfig, error) {
oc := &internal_models.OutputConfig{
Name: name,
Filter: buildFilter(tbl),
}
// Outputs don't support FieldDrop/FieldPass, so set to NameDrop/NamePass
if len(oc.Filter.FieldDrop) > 0 {
oc.Filter.NameDrop = oc.Filter.FieldDrop
}
if len(oc.Filter.FieldPass) > 0 {
oc.Filter.NamePass = oc.Filter.FieldPass
}
return oc, nil
}
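// For example (editor's illustration with a hypothetical config), a legacy
// output section such as:
//
//	[[outputs.influxdb]]
//	  urls = ["http://localhost:8086"]
//	  pass = ["cpu*"]
//
// has its "pass" list parsed into FieldPass by buildFilter, then promoted to
// NamePass here, so it is applied as measurement-name filtering.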


@@ -9,6 +9,7 @@ import (
"github.com/influxdata/telegraf/plugins/inputs/exec"
"github.com/influxdata/telegraf/plugins/inputs/memcached"
"github.com/influxdata/telegraf/plugins/inputs/procstat"
"github.com/influxdata/telegraf/plugins/parsers"
"github.com/stretchr/testify/assert"
)
@@ -19,19 +20,21 @@ func TestConfig_LoadSingleInput(t *testing.T) {
memcached := inputs.Inputs["memcached"]().(*memcached.Memcached)
memcached.Servers = []string{"localhost"}
mConfig := &models.InputConfig{
mConfig := &internal_models.InputConfig{
Name: "memcached",
Filter: models.Filter{
Drop: []string{"other", "stuff"},
Pass: []string{"some", "strings"},
TagDrop: []models.TagFilter{
models.TagFilter{
Filter: internal_models.Filter{
NameDrop: []string{"metricname2"},
NamePass: []string{"metricname1"},
FieldDrop: []string{"other", "stuff"},
FieldPass: []string{"some", "strings"},
TagDrop: []internal_models.TagFilter{
internal_models.TagFilter{
Name: "badtag",
Filter: []string{"othertag"},
},
},
TagPass: []models.TagFilter{
models.TagFilter{
TagPass: []internal_models.TagFilter{
internal_models.TagFilter{
Name: "goodtag",
Filter: []string{"mytag"},
},
@@ -62,19 +65,21 @@ func TestConfig_LoadDirectory(t *testing.T) {
memcached := inputs.Inputs["memcached"]().(*memcached.Memcached)
memcached.Servers = []string{"localhost"}
mConfig := &models.InputConfig{
mConfig := &internal_models.InputConfig{
Name: "memcached",
Filter: models.Filter{
Drop: []string{"other", "stuff"},
Pass: []string{"some", "strings"},
TagDrop: []models.TagFilter{
models.TagFilter{
Filter: internal_models.Filter{
NameDrop: []string{"metricname2"},
NamePass: []string{"metricname1"},
FieldDrop: []string{"other", "stuff"},
FieldPass: []string{"some", "strings"},
TagDrop: []internal_models.TagFilter{
internal_models.TagFilter{
Name: "badtag",
Filter: []string{"othertag"},
},
},
TagPass: []models.TagFilter{
models.TagFilter{
TagPass: []internal_models.TagFilter{
internal_models.TagFilter{
Name: "goodtag",
Filter: []string{"mytag"},
},
@@ -91,8 +96,11 @@ func TestConfig_LoadDirectory(t *testing.T) {
"Testdata did not produce correct memcached metadata.")
ex := inputs.Inputs["exec"]().(*exec.Exec)
p, err := parsers.NewJSONParser("exec", nil, nil)
assert.NoError(t, err)
ex.SetParser(p)
ex.Command = "/usr/bin/myothercollector --foo=bar"
eConfig := &models.InputConfig{
eConfig := &internal_models.InputConfig{
Name: "exec",
MeasurementSuffix: "_myothercollector",
}
@@ -111,7 +119,7 @@ func TestConfig_LoadDirectory(t *testing.T) {
pstat := inputs.Inputs["procstat"]().(*procstat.Procstat)
pstat.PidFile = "/var/run/grafana-server.pid"
pConfig := &models.InputConfig{Name: "procstat"}
pConfig := &internal_models.InputConfig{Name: "procstat"}
pConfig.Tags = make(map[string]string)
assert.Equal(t, pstat, c.Inputs[3].Input,


@@ -1,7 +1,9 @@
[[inputs.memcached]]
servers = ["localhost"]
pass = ["some", "strings"]
drop = ["other", "stuff"]
namepass = ["metricname1"]
namedrop = ["metricname2"]
fieldpass = ["some", "strings"]
fielddrop = ["other", "stuff"]
interval = "5s"
[inputs.memcached.tagpass]
goodtag = ["mytag"]


@@ -1,5 +1,7 @@
[[inputs.memcached]]
servers = ["192.168.1.1"]
namepass = ["metricname1"]
namedrop = ["metricname2"]
pass = ["some", "strings"]
drop = ["other", "stuff"]
interval = "5s"


@@ -20,7 +20,7 @@
# with 'required'. Be sure to edit those to make this configuration work.
# Tags can also be specified via a normal map, but only one form at a time:
[tags]
[global_tags]
dc = "us-east-1"
# Configuration for telegraf agent
@@ -184,6 +184,15 @@
# If no servers are specified, then localhost is used as the host.
servers = ["localhost"]
# Telegraf plugin for gathering metrics from N Mesos masters
[[inputs.mesos]]
# Timeout, in ms.
timeout = 100
# A list of Mesos masters, default value is localhost:5050.
masters = ["localhost:5050"]
# Metrics groups to be collected, by default, all enabled.
master_collections = ["resources","master","system","slaves","frameworks","messages","evqueue","registrar"]
# Read metrics from one or many MongoDB servers
[[inputs.mongodb]]
# An array of URI to gather stats about. Specify an ip or hostname


@@ -2,14 +2,19 @@ package internal
import (
"bufio"
"crypto/rand"
"crypto/tls"
"crypto/x509"
"errors"
"fmt"
"io/ioutil"
"os"
"strconv"
"strings"
"time"
)
const alphanum string = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
// Duration just wraps time.Duration
type Duration struct {
Duration time.Duration
@@ -29,47 +34,6 @@ func (d *Duration) UnmarshalTOML(b []byte) error {
var NotImplementedError = errors.New("not implemented yet")
type JSONFlattener struct {
Fields map[string]interface{}
}
// FlattenJSON flattens nested maps/interfaces into a fields map
func (f *JSONFlattener) FlattenJSON(
fieldname string,
v interface{},
) error {
if f.Fields == nil {
f.Fields = make(map[string]interface{})
}
fieldname = strings.Trim(fieldname, "_")
switch t := v.(type) {
case map[string]interface{}:
for k, v := range t {
err := f.FlattenJSON(fieldname+"_"+k+"_", v)
if err != nil {
return err
}
}
case []interface{}:
for i, v := range t {
k := strconv.Itoa(i)
err := f.FlattenJSON(fieldname+"_"+k+"_", v)
if err != nil {
return nil
}
}
case float64:
f.Fields[fieldname] = t
case bool, string, nil:
// ignored types
return nil
default:
return fmt.Errorf("JSON Flattener: got unexpected type %T with value %v (%s)",
t, t, fieldname)
}
return nil
}
// ReadLines reads contents from a file and splits them by new lines.
// A convenience wrapper to ReadLinesOffsetN(filename, 0, -1).
func ReadLines(filename string) ([]string, error) {
@@ -105,6 +69,57 @@ func ReadLinesOffsetN(filename string, offset uint, n int) ([]string, error) {
return ret, nil
}
// RandomString returns a random string of alpha-numeric characters
func RandomString(n int) string {
var bytes = make([]byte, n)
rand.Read(bytes)
for i, b := range bytes {
bytes[i] = alphanum[b%byte(len(alphanum))]
}
return string(bytes)
}
// GetTLSConfig gets a tls.Config object from the given certs, key, and CA files.
// you must give the full path to the files.
// If all files are blank and InsecureSkipVerify=false, returns a nil pointer.
func GetTLSConfig(
SSLCert, SSLKey, SSLCA string,
InsecureSkipVerify bool,
) (*tls.Config, error) {
t := &tls.Config{}
if SSLCert != "" && SSLKey != "" && SSLCA != "" {
cert, err := tls.LoadX509KeyPair(SSLCert, SSLKey)
if err != nil {
return nil, fmt.Errorf(
"Could not load TLS client key/certificate: %s",
err)
}
caCert, err := ioutil.ReadFile(SSLCA)
if err != nil {
return nil, errors.New(fmt.Sprintf("Could not load TLS CA: %s",
err))
}
caCertPool := x509.NewCertPool()
caCertPool.AppendCertsFromPEM(caCert)
t = &tls.Config{
Certificates: []tls.Certificate{cert},
RootCAs: caCertPool,
InsecureSkipVerify: InsecureSkipVerify,
}
} else {
if InsecureSkipVerify {
t.InsecureSkipVerify = true
} else {
return nil, nil
}
}
// will be nil by default if nothing is provided
return t, nil
}
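// A hypothetical caller (editor's sketch; the paths and surrounding code are
// illustrative, not from this changeset) would use GetTLSConfig roughly like:
//
//	tlsConfig, err := internal.GetTLSConfig(
//		"/etc/telegraf/cert.pem", // SSLCert
//		"/etc/telegraf/key.pem",  // SSLKey
//		"/etc/telegraf/ca.pem",   // SSLCA
//		false,                    // InsecureSkipVerify
//	)
//	if err != nil {
//		log.Fatal(err)
//	}
//	// A nil config means no custom TLS was requested.
//	client := &http.Client{
//		Transport: &http.Transport{TLSClientConfig: tlsConfig},
//	}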
// Glob will test a string pattern, potentially containing globs, against a
// subject string. The result is a simple true/false, determining whether or
// not the glob pattern matched the subject text.


@@ -1,9 +1,9 @@
package models
package internal_models
import (
"strings"
"github.com/influxdata/influxdb/client/v2"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
)
@@ -15,8 +15,11 @@ type TagFilter struct {
// Filter containing drop/pass and tagdrop/tagpass rules
type Filter struct {
Drop []string
Pass []string
NameDrop []string
NamePass []string
FieldDrop []string
FieldPass []string
TagDrop []TagFilter
TagPass []TagFilter
@@ -24,18 +27,18 @@ type Filter struct {
IsActive bool
}
func (f Filter) ShouldPointPass(point *client.Point) bool {
if f.ShouldPass(point.Name()) && f.ShouldTagsPass(point.Tags()) {
func (f Filter) ShouldMetricPass(metric telegraf.Metric) bool {
if f.ShouldNamePass(metric.Name()) && f.ShouldTagsPass(metric.Tags()) {
return true
}
return false
}
// ShouldPass returns true if the metric should pass, false if should drop
// ShouldNamePass returns true if the metric should pass, false if should drop
// based on the drop/pass filter parameters
func (f Filter) ShouldPass(key string) bool {
if f.Pass != nil {
for _, pat := range f.Pass {
func (f Filter) ShouldNamePass(key string) bool {
if f.NamePass != nil {
for _, pat := range f.NamePass {
// TODO remove HasPrefix check, leaving it for now for legacy support.
// Cam, 2015-12-07
if strings.HasPrefix(key, pat) || internal.Glob(pat, key) {
@@ -45,8 +48,36 @@ func (f Filter) ShouldPass(key string) bool {
return false
}
if f.Drop != nil {
for _, pat := range f.Drop {
if f.NameDrop != nil {
for _, pat := range f.NameDrop {
// TODO remove HasPrefix check, leaving it for now for legacy support.
// Cam, 2015-12-07
if strings.HasPrefix(key, pat) || internal.Glob(pat, key) {
return false
}
}
return true
}
return true
}
// ShouldFieldsPass returns true if the metric should pass, false if should drop
// based on the fielddrop/fieldpass filter parameters
func (f Filter) ShouldFieldsPass(key string) bool {
if f.FieldPass != nil {
for _, pat := range f.FieldPass {
// TODO remove HasPrefix check, leaving it for now for legacy support.
// Cam, 2015-12-07
if strings.HasPrefix(key, pat) || internal.Glob(pat, key) {
return true
}
}
return false
}
if f.FieldDrop != nil {
for _, pat := range f.FieldDrop {
// TODO remove HasPrefix check, leaving it for now for legacy support.
// Cam, 2015-12-07
if strings.HasPrefix(key, pat) || internal.Glob(pat, key) {


@@ -1,4 +1,4 @@
package models
package internal_models
import (
"testing"
@@ -18,15 +18,15 @@ func TestFilter_Empty(t *testing.T) {
}
for _, measurement := range measurements {
if !f.ShouldPass(measurement) {
if !f.ShouldFieldsPass(measurement) {
t.Errorf("Expected measurement %s to pass", measurement)
}
}
}
func TestFilter_Pass(t *testing.T) {
func TestFilter_NamePass(t *testing.T) {
f := Filter{
Pass: []string{"foo*", "cpu_usage_idle"},
NamePass: []string{"foo*", "cpu_usage_idle"},
}
passes := []string{
@@ -45,21 +45,21 @@ func TestFilter_Pass(t *testing.T) {
}
for _, measurement := range passes {
if !f.ShouldPass(measurement) {
if !f.ShouldNamePass(measurement) {
t.Errorf("Expected measurement %s to pass", measurement)
}
}
for _, measurement := range drops {
if f.ShouldPass(measurement) {
if f.ShouldNamePass(measurement) {
t.Errorf("Expected measurement %s to drop", measurement)
}
}
}
func TestFilter_Drop(t *testing.T) {
func TestFilter_NameDrop(t *testing.T) {
f := Filter{
Drop: []string{"foo*", "cpu_usage_idle"},
NameDrop: []string{"foo*", "cpu_usage_idle"},
}
drops := []string{
@@ -78,13 +78,79 @@ func TestFilter_Drop(t *testing.T) {
}
for _, measurement := range passes {
if !f.ShouldPass(measurement) {
if !f.ShouldNamePass(measurement) {
t.Errorf("Expected measurement %s to pass", measurement)
}
}
for _, measurement := range drops {
if f.ShouldPass(measurement) {
if f.ShouldNamePass(measurement) {
t.Errorf("Expected measurement %s to drop", measurement)
}
}
}
func TestFilter_FieldPass(t *testing.T) {
f := Filter{
FieldPass: []string{"foo*", "cpu_usage_idle"},
}
passes := []string{
"foo",
"foo_bar",
"foo.bar",
"foo-bar",
"cpu_usage_idle",
}
drops := []string{
"bar",
"barfoo",
"bar_foo",
"cpu_usage_busy",
}
for _, measurement := range passes {
if !f.ShouldFieldsPass(measurement) {
t.Errorf("Expected measurement %s to pass", measurement)
}
}
for _, measurement := range drops {
if f.ShouldFieldsPass(measurement) {
t.Errorf("Expected measurement %s to drop", measurement)
}
}
}
func TestFilter_FieldDrop(t *testing.T) {
f := Filter{
FieldDrop: []string{"foo*", "cpu_usage_idle"},
}
drops := []string{
"foo",
"foo_bar",
"foo.bar",
"foo-bar",
"cpu_usage_idle",
}
passes := []string{
"bar",
"barfoo",
"bar_foo",
"cpu_usage_busy",
}
for _, measurement := range passes {
if !f.ShouldFieldsPass(measurement) {
t.Errorf("Expected measurement %s to pass", measurement)
}
}
for _, measurement := range drops {
if f.ShouldFieldsPass(measurement) {
t.Errorf("Expected measurement %s to drop", measurement)
}
}


@@ -1,14 +1,14 @@
package models
package internal_models
import (
"time"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf"
)
type RunningInput struct {
Name string
Input inputs.Input
Input telegraf.Input
Config *InputConfig
}


@@ -1,71 +1,132 @@
package models
package internal_models
import (
"log"
"sync"
"time"
"github.com/influxdata/telegraf/plugins/outputs"
"github.com/influxdata/influxdb/client/v2"
"github.com/influxdata/telegraf"
)
const DEFAULT_POINT_BUFFER_LIMIT = 10000
const (
// Default number of metrics kept between flushes.
DEFAULT_METRIC_BUFFER_LIMIT = 10000
// Limit how many full metric buffers are kept due to failed writes.
FULL_METRIC_BUFFERS_LIMIT = 100
)
type RunningOutput struct {
Name string
Output outputs.Output
Config *OutputConfig
Quiet bool
PointBufferLimit int
Name string
Output telegraf.Output
Config *OutputConfig
Quiet bool
MetricBufferLimit int
FlushBufferWhenFull bool
points []*client.Point
overwriteCounter int
metrics []telegraf.Metric
tmpmetrics map[int][]telegraf.Metric
overwriteI int
mapI int
sync.Mutex
}
func NewRunningOutput(
name string,
output outputs.Output,
output telegraf.Output,
conf *OutputConfig,
) *RunningOutput {
ro := &RunningOutput{
Name: name,
points: make([]*client.Point, 0),
Output: output,
Config: conf,
PointBufferLimit: DEFAULT_POINT_BUFFER_LIMIT,
Name: name,
metrics: make([]telegraf.Metric, 0),
tmpmetrics: make(map[int][]telegraf.Metric),
Output: output,
Config: conf,
MetricBufferLimit: DEFAULT_METRIC_BUFFER_LIMIT,
}
return ro
}
func (ro *RunningOutput) AddPoint(point *client.Point) {
// AddMetric adds a metric to the output. This function can also write cached
// metrics if FlushBufferWhenFull is true.
func (ro *RunningOutput) AddMetric(metric telegraf.Metric) {
if ro.Config.Filter.IsActive {
if !ro.Config.Filter.ShouldPointPass(point) {
if !ro.Config.Filter.ShouldMetricPass(metric) {
return
}
}
ro.Lock()
defer ro.Unlock()
if len(ro.points) < ro.PointBufferLimit {
ro.points = append(ro.points, point)
if len(ro.metrics) < ro.MetricBufferLimit {
ro.metrics = append(ro.metrics, metric)
} else {
if ro.overwriteCounter == len(ro.points) {
ro.overwriteCounter = 0
if ro.FlushBufferWhenFull {
ro.metrics = append(ro.metrics, metric)
tmpmetrics := make([]telegraf.Metric, len(ro.metrics))
copy(tmpmetrics, ro.metrics)
ro.metrics = make([]telegraf.Metric, 0)
err := ro.write(tmpmetrics)
if err != nil {
log.Printf("ERROR writing full metric buffer to output %s, %s",
ro.Name, err)
if len(ro.tmpmetrics) == FULL_METRIC_BUFFERS_LIMIT {
ro.mapI = 0
// overwrite one
ro.tmpmetrics[ro.mapI] = tmpmetrics
ro.mapI++
} else {
ro.tmpmetrics[ro.mapI] = tmpmetrics
ro.mapI++
}
}
} else {
log.Printf("WARNING: overwriting cached metrics, you may want to " +
"increase the metric_buffer_limit setting in your [agent] " +
"config if you do not wish to overwrite metrics.\n")
if ro.overwriteI == len(ro.metrics) {
ro.overwriteI = 0
}
ro.metrics[ro.overwriteI] = metric
ro.overwriteI++
}
ro.points[ro.overwriteCounter] = point
ro.overwriteCounter++
}
}
// Write writes all cached points to this output.
func (ro *RunningOutput) Write() error {
ro.Lock()
defer ro.Unlock()
err := ro.write(ro.metrics)
if err != nil {
return err
} else {
ro.metrics = make([]telegraf.Metric, 0)
ro.overwriteI = 0
}
// Write any cached metric buffers that failed previously
for i, tmpmetrics := range ro.tmpmetrics {
if err := ro.write(tmpmetrics); err != nil {
return err
} else {
delete(ro.tmpmetrics, i)
}
}
return nil
}
func (ro *RunningOutput) write(metrics []telegraf.Metric) error {
start := time.Now()
err := ro.Output.Write(ro.points)
err := ro.Output.Write(metrics)
elapsed := time.Since(start)
if err == nil {
if !ro.Quiet {
log.Printf("Wrote %d metrics to output %s in %s\n",
len(ro.points), ro.Name, elapsed)
len(metrics), ro.Name, elapsed)
}
ro.points = make([]*client.Point, 0)
ro.overwriteCounter = 0
}
return err
}
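For illustration, a minimal self-contained Go sketch (not part of this diff; the type and names are hypothetical) of the overwrite path above: when FlushBufferWhenFull is off and the buffer is at MetricBufferLimit, new entries overwrite the oldest ones in place.

```go
package main

import "fmt"

// miniBuffer mimics RunningOutput's overwrite behaviour: once the buffer
// reaches its limit, new entries overwrite the oldest ones in place.
type miniBuffer struct {
	limit      int
	entries    []string
	overwriteI int
}

func (b *miniBuffer) add(m string) {
	if len(b.entries) < b.limit {
		b.entries = append(b.entries, m)
		return
	}
	// Wrap the overwrite index when it runs off the end of the buffer.
	if b.overwriteI == len(b.entries) {
		b.overwriteI = 0
	}
	b.entries[b.overwriteI] = m
	b.overwriteI++
}

func main() {
	b := &miniBuffer{limit: 3}
	for i := 1; i <= 5; i++ {
		b.add(fmt.Sprintf("metric%d", i))
	}
	fmt.Println(b.entries) // [metric4 metric5 metric3]
}
```

This is the same ring-buffer behaviour the overwrite tests below assert on (the assertions sort, since ordering wraps).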


@@ -0,0 +1,265 @@
package internal_models
import (
"fmt"
"sort"
"sync"
"testing"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
var first5 = []telegraf.Metric{
testutil.TestMetric(101, "metric1"),
testutil.TestMetric(101, "metric2"),
testutil.TestMetric(101, "metric3"),
testutil.TestMetric(101, "metric4"),
testutil.TestMetric(101, "metric5"),
}
var next5 = []telegraf.Metric{
testutil.TestMetric(101, "metric6"),
testutil.TestMetric(101, "metric7"),
testutil.TestMetric(101, "metric8"),
testutil.TestMetric(101, "metric9"),
testutil.TestMetric(101, "metric10"),
}
// Test that we can write metrics with simple default setup.
func TestRunningOutputDefault(t *testing.T) {
conf := &OutputConfig{
Filter: Filter{
IsActive: false,
},
}
m := &mockOutput{}
ro := NewRunningOutput("test", m, conf)
for _, metric := range first5 {
ro.AddMetric(metric)
}
for _, metric := range next5 {
ro.AddMetric(metric)
}
assert.Len(t, m.Metrics(), 0)
err := ro.Write()
assert.NoError(t, err)
assert.Len(t, m.Metrics(), 10)
}
// Test that the first metric gets overwritten if there is a buffer overflow.
func TestRunningOutputOverwrite(t *testing.T) {
conf := &OutputConfig{
Filter: Filter{
IsActive: false,
},
}
m := &mockOutput{}
ro := NewRunningOutput("test", m, conf)
ro.MetricBufferLimit = 4
for _, metric := range first5 {
ro.AddMetric(metric)
}
require.Len(t, m.Metrics(), 0)
err := ro.Write()
require.NoError(t, err)
require.Len(t, m.Metrics(), 4)
var expected, actual []string
for i, exp := range first5[1:] {
expected = append(expected, exp.String())
actual = append(actual, m.Metrics()[i].String())
}
sort.Strings(expected)
sort.Strings(actual)
assert.Equal(t, expected, actual)
}
// Test that multiple buffer overflows are handled properly.
func TestRunningOutputMultiOverwrite(t *testing.T) {
conf := &OutputConfig{
Filter: Filter{
IsActive: false,
},
}
m := &mockOutput{}
ro := NewRunningOutput("test", m, conf)
ro.MetricBufferLimit = 3
for _, metric := range first5 {
ro.AddMetric(metric)
}
for _, metric := range next5 {
ro.AddMetric(metric)
}
require.Len(t, m.Metrics(), 0)
err := ro.Write()
require.NoError(t, err)
require.Len(t, m.Metrics(), 3)
var expected, actual []string
for i, exp := range next5[2:] {
expected = append(expected, exp.String())
actual = append(actual, m.Metrics()[i].String())
}
sort.Strings(expected)
sort.Strings(actual)
assert.Equal(t, expected, actual)
}
// Test that running output doesn't flush until it's full when
// FlushBufferWhenFull is set.
func TestRunningOutputFlushWhenFull(t *testing.T) {
conf := &OutputConfig{
Filter: Filter{
IsActive: false,
},
}
m := &mockOutput{}
ro := NewRunningOutput("test", m, conf)
ro.FlushBufferWhenFull = true
ro.MetricBufferLimit = 5
// Fill buffer to limit
for _, metric := range first5 {
ro.AddMetric(metric)
}
// no flush yet
assert.Len(t, m.Metrics(), 0)
// add one more metric
ro.AddMetric(next5[0])
// now it flushed
assert.Len(t, m.Metrics(), 6)
// add one more metric and write it manually
ro.AddMetric(next5[1])
err := ro.Write()
assert.NoError(t, err)
assert.Len(t, m.Metrics(), 7)
}
// Test that running output doesn't flush until it's full when
// FlushBufferWhenFull is set, twice.
func TestRunningOutputMultiFlushWhenFull(t *testing.T) {
conf := &OutputConfig{
Filter: Filter{
IsActive: false,
},
}
m := &mockOutput{}
ro := NewRunningOutput("test", m, conf)
ro.FlushBufferWhenFull = true
ro.MetricBufferLimit = 4
// Fill buffer past limit twice
for _, metric := range first5 {
ro.AddMetric(metric)
}
for _, metric := range next5 {
ro.AddMetric(metric)
}
// flushed twice
assert.Len(t, m.Metrics(), 10)
}
func TestRunningOutputWriteFail(t *testing.T) {
conf := &OutputConfig{
Filter: Filter{
IsActive: false,
},
}
m := &mockOutput{}
m.failWrite = true
ro := NewRunningOutput("test", m, conf)
ro.FlushBufferWhenFull = true
ro.MetricBufferLimit = 4
// Fill buffer past limit twice
for _, metric := range first5 {
ro.AddMetric(metric)
}
for _, metric := range next5 {
ro.AddMetric(metric)
}
// no successful flush yet
assert.Len(t, m.Metrics(), 0)
// manual write fails
err := ro.Write()
require.Error(t, err)
// no successful flush yet
assert.Len(t, m.Metrics(), 0)
m.failWrite = false
err = ro.Write()
require.NoError(t, err)
assert.Len(t, m.Metrics(), 10)
}
type mockOutput struct {
sync.Mutex
metrics []telegraf.Metric
// if true, mock a write failure
failWrite bool
}
func (m *mockOutput) Connect() error {
return nil
}
func (m *mockOutput) Close() error {
return nil
}
func (m *mockOutput) Description() string {
return ""
}
func (m *mockOutput) SampleConfig() string {
return ""
}
func (m *mockOutput) Write(metrics []telegraf.Metric) error {
m.Lock()
defer m.Unlock()
if m.failWrite {
return fmt.Errorf("Failed Write!")
}
if m.metrics == nil {
m.metrics = []telegraf.Metric{}
}
for _, metric := range metrics {
m.metrics = append(m.metrics, metric)
}
return nil
}
func (m *mockOutput) Metrics() []telegraf.Metric {
m.Lock()
defer m.Unlock()
return m.metrics
}

metric.go (new file, 94 lines)

@@ -0,0 +1,94 @@
package telegraf
import (
"time"
"github.com/influxdata/influxdb/client/v2"
)
type Metric interface {
// Name returns the measurement name of the metric
Name() string
// Tags returns the tags associated with the metric
Tags() map[string]string
// Time returns the timestamp for the metric
Time() time.Time
// UnixNano returns the unix nano time of the metric
UnixNano() int64
// Fields returns the fields for the metric
Fields() map[string]interface{}
// String returns a line-protocol string of the metric
String() string
// PrecisionString returns a line-protocol string of the metric at the given precision
PrecisionString(precision string) string
// Point returns an influxdb client.Point object
Point() *client.Point
}
// metric is a wrapper of the influxdb client.Point struct
type metric struct {
pt *client.Point
}
// NewMetric returns a metric with the given timestamp. If a timestamp is not
// given, then data is sent to the database without a timestamp, in which case
// the server will assign local time upon reception. NOTE: it is recommended to
// send data with a timestamp.
func NewMetric(
name string,
tags map[string]string,
fields map[string]interface{},
t ...time.Time,
) (Metric, error) {
var T time.Time
if len(t) > 0 {
T = t[0]
}
pt, err := client.NewPoint(name, tags, fields, T)
if err != nil {
return nil, err
}
return &metric{
pt: pt,
}, nil
}
func (m *metric) Name() string {
return m.pt.Name()
}
func (m *metric) Tags() map[string]string {
return m.pt.Tags()
}
func (m *metric) Time() time.Time {
return m.pt.Time()
}
func (m *metric) UnixNano() int64 {
return m.pt.UnixNano()
}
func (m *metric) Fields() map[string]interface{} {
return m.pt.Fields()
}
func (m *metric) String() string {
return m.pt.String()
}
func (m *metric) PrecisionString(precision string) string {
return m.pt.PrecisionString(precision)
}
func (m *metric) Point() *client.Point {
return m.pt
}
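A minimal usage sketch of the new Metric constructor (illustrative only; it assumes the repository import path shown in the diffs):

```go
package main

import (
	"fmt"
	"time"

	"github.com/influxdata/telegraf"
)

func main() {
	tags := map[string]string{"host": "localhost"}
	fields := map[string]interface{}{"usage_idle": float64(99)}

	// With an explicit timestamp (recommended, per the doc comment above).
	m, err := telegraf.NewMetric("cpu", tags, fields, time.Now())
	if err != nil {
		panic(err)
	}
	fmt.Println(m.String())

	// Without a timestamp: the server assigns local time on reception.
	m2, _ := telegraf.NewMetric("cpu", tags, fields)
	fmt.Println(m2.PrecisionString("s"))
}
```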

metric_test.go (new file, 83 lines)

@@ -0,0 +1,83 @@
package telegraf
import (
"fmt"
"math"
"testing"
"time"
"github.com/stretchr/testify/assert"
)
func TestNewMetric(t *testing.T) {
now := time.Now()
tags := map[string]string{
"host": "localhost",
"datacenter": "us-east-1",
}
fields := map[string]interface{}{
"usage_idle": float64(99),
"usage_busy": float64(1),
}
m, err := NewMetric("cpu", tags, fields, now)
assert.NoError(t, err)
assert.Equal(t, tags, m.Tags())
assert.Equal(t, fields, m.Fields())
assert.Equal(t, "cpu", m.Name())
assert.Equal(t, now, m.Time())
assert.Equal(t, now.UnixNano(), m.UnixNano())
}
func TestNewMetricString(t *testing.T) {
now := time.Now()
tags := map[string]string{
"host": "localhost",
}
fields := map[string]interface{}{
"usage_idle": float64(99),
}
m, err := NewMetric("cpu", tags, fields, now)
assert.NoError(t, err)
lineProto := fmt.Sprintf("cpu,host=localhost usage_idle=99 %d",
now.UnixNano())
assert.Equal(t, lineProto, m.String())
lineProtoPrecision := fmt.Sprintf("cpu,host=localhost usage_idle=99 %d",
now.Unix())
assert.Equal(t, lineProtoPrecision, m.PrecisionString("s"))
}
func TestNewMetricStringNoTime(t *testing.T) {
tags := map[string]string{
"host": "localhost",
}
fields := map[string]interface{}{
"usage_idle": float64(99),
}
m, err := NewMetric("cpu", tags, fields)
assert.NoError(t, err)
lineProto := fmt.Sprintf("cpu,host=localhost usage_idle=99")
assert.Equal(t, lineProto, m.String())
lineProtoPrecision := fmt.Sprintf("cpu,host=localhost usage_idle=99")
assert.Equal(t, lineProtoPrecision, m.PrecisionString("s"))
}
func TestNewMetricFailNaN(t *testing.T) {
now := time.Now()
tags := map[string]string{
"host": "localhost",
}
fields := map[string]interface{}{
"usage_idle": math.NaN(),
}
_, err := NewMetric("cpu", tags, fields, now)
assert.Error(t, err)
}

output.go (new file, 31 lines)

@@ -0,0 +1,31 @@
package telegraf
type Output interface {
// Connect to the Output
Connect() error
// Close any connections to the Output
Close() error
// Description returns a one-sentence description of the Output
Description() string
// SampleConfig returns the default configuration of the Output
SampleConfig() string
// Write takes in a group of metrics to be written to the Output
Write(metrics []Metric) error
}
type ServiceOutput interface {
// Connect to the Output
Connect() error
// Close any connections to the Output
Close() error
// Description returns a one-sentence description of the Output
Description() string
// SampleConfig returns the default configuration of the Output
SampleConfig() string
// Write takes in a group of metrics to be written to the Output
Write(metrics []Metric) error
// Start the "service" that will provide an Output
Start() error
// Stop the "service" that will provide an Output
Stop()
}
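For illustration, a skeletal implementation of the new telegraf.Output interface (the type is hypothetical; real outputs also register themselves with the outputs registry, analogous to the inputs.Add calls in the plugin diffs below):

```go
package main

import (
	"fmt"

	"github.com/influxdata/telegraf"
)

// printerOutput is a hypothetical Output that writes metrics to stdout.
type printerOutput struct{}

func (p *printerOutput) Connect() error       { return nil }
func (p *printerOutput) Close() error         { return nil }
func (p *printerOutput) Description() string  { return "Print metrics to stdout" }
func (p *printerOutput) SampleConfig() string { return "" }

func (p *printerOutput) Write(metrics []telegraf.Metric) error {
	for _, m := range metrics {
		fmt.Println(m.String())
	}
	return nil
}

func main() {
	// Errors ignored for brevity in this sketch.
	m, _ := telegraf.NewMetric("cpu", nil, map[string]interface{}{"usage_idle": 99.0})
	var out telegraf.Output = &printerOutput{}
	out.Connect()
	out.Write([]telegraf.Metric{m})
	out.Close()
}
```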


@@ -4,7 +4,7 @@ The example plugin gathers metrics about example things
### Configuration:
```
```toml
# Description
[[inputs.example]]
# SampleConfig


@@ -4,6 +4,7 @@ import (
"bytes"
"encoding/binary"
"fmt"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"net"
"strconv"
@@ -103,11 +104,9 @@ type Aerospike struct {
}
var sampleConfig = `
# Aerospike servers to connect to (with port)
# Default: servers = ["localhost:3000"]
#
# This plugin will query all namespaces the aerospike
# server has configured and get stats for them.
## Aerospike servers to connect to (with port)
## This plugin will query all namespaces the aerospike
## server has configured and get stats for them.
servers = ["localhost:3000"]
`
@@ -119,7 +118,7 @@ func (a *Aerospike) Description() string {
return "Read stats from an aerospike server"
}
func (a *Aerospike) Gather(acc inputs.Accumulator) error {
func (a *Aerospike) Gather(acc telegraf.Accumulator) error {
if len(a.Servers) == 0 {
return a.gatherServer("127.0.0.1:3000", acc)
}
@@ -140,7 +139,7 @@ func (a *Aerospike) Gather(acc inputs.Accumulator) error {
return outerr
}
func (a *Aerospike) gatherServer(host string, acc inputs.Accumulator) error {
func (a *Aerospike) gatherServer(host string, acc telegraf.Accumulator) error {
aerospikeInfo, err := getMap(STATISTICS_COMMAND, host)
if err != nil {
return fmt.Errorf("Aerospike info failed: %s", err)
@@ -249,7 +248,7 @@ func get(key []byte, host string) (map[string]string, error) {
func readAerospikeStats(
stats map[string]string,
acc inputs.Accumulator,
acc telegraf.Accumulator,
host string,
namespace string,
) {
@@ -336,7 +335,7 @@ func msgLenFromBytes(buf [6]byte) int64 {
}
func init() {
inputs.Add("aerospike", func() inputs.Input {
inputs.Add("aerospike", func() telegraf.Input {
return &Aerospike{}
})
}


@@ -4,8 +4,11 @@ import (
_ "github.com/influxdata/telegraf/plugins/inputs/aerospike"
_ "github.com/influxdata/telegraf/plugins/inputs/apache"
_ "github.com/influxdata/telegraf/plugins/inputs/bcache"
_ "github.com/influxdata/telegraf/plugins/inputs/couchdb"
_ "github.com/influxdata/telegraf/plugins/inputs/disque"
_ "github.com/influxdata/telegraf/plugins/inputs/dns_query"
_ "github.com/influxdata/telegraf/plugins/inputs/docker"
_ "github.com/influxdata/telegraf/plugins/inputs/dovecot"
_ "github.com/influxdata/telegraf/plugins/inputs/elasticsearch"
_ "github.com/influxdata/telegraf/plugins/inputs/exec"
_ "github.com/influxdata/telegraf/plugins/inputs/github_webhooks"
@@ -18,20 +21,27 @@ import (
_ "github.com/influxdata/telegraf/plugins/inputs/lustre2"
_ "github.com/influxdata/telegraf/plugins/inputs/mailchimp"
_ "github.com/influxdata/telegraf/plugins/inputs/memcached"
_ "github.com/influxdata/telegraf/plugins/inputs/mesos"
_ "github.com/influxdata/telegraf/plugins/inputs/mongodb"
_ "github.com/influxdata/telegraf/plugins/inputs/mqtt_consumer"
_ "github.com/influxdata/telegraf/plugins/inputs/mysql"
_ "github.com/influxdata/telegraf/plugins/inputs/nats_consumer"
_ "github.com/influxdata/telegraf/plugins/inputs/net_response"
_ "github.com/influxdata/telegraf/plugins/inputs/nginx"
_ "github.com/influxdata/telegraf/plugins/inputs/nsq"
_ "github.com/influxdata/telegraf/plugins/inputs/passenger"
_ "github.com/influxdata/telegraf/plugins/inputs/phpfpm"
_ "github.com/influxdata/telegraf/plugins/inputs/ping"
_ "github.com/influxdata/telegraf/plugins/inputs/postgresql"
_ "github.com/influxdata/telegraf/plugins/inputs/powerdns"
_ "github.com/influxdata/telegraf/plugins/inputs/procstat"
_ "github.com/influxdata/telegraf/plugins/inputs/prometheus"
_ "github.com/influxdata/telegraf/plugins/inputs/puppetagent"
_ "github.com/influxdata/telegraf/plugins/inputs/rabbitmq"
_ "github.com/influxdata/telegraf/plugins/inputs/raindrops"
_ "github.com/influxdata/telegraf/plugins/inputs/redis"
_ "github.com/influxdata/telegraf/plugins/inputs/rethinkdb"
_ "github.com/influxdata/telegraf/plugins/inputs/riak"
_ "github.com/influxdata/telegraf/plugins/inputs/sensors"
_ "github.com/influxdata/telegraf/plugins/inputs/snmp"
_ "github.com/influxdata/telegraf/plugins/inputs/sqlserver"
@@ -39,6 +49,7 @@ import (
_ "github.com/influxdata/telegraf/plugins/inputs/system"
_ "github.com/influxdata/telegraf/plugins/inputs/trig"
_ "github.com/influxdata/telegraf/plugins/inputs/twemproxy"
_ "github.com/influxdata/telegraf/plugins/inputs/win_perf_counters"
_ "github.com/influxdata/telegraf/plugins/inputs/zfs"
_ "github.com/influxdata/telegraf/plugins/inputs/zookeeper"
)


@@ -11,6 +11,7 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -19,7 +20,7 @@ type Apache struct {
}
var sampleConfig = `
# An array of Apache status URI to gather stats.
## An array of Apache status URI to gather stats.
urls = ["http://localhost/server-status?auto"]
`
@@ -31,7 +32,7 @@ func (n *Apache) Description() string {
return "Read Apache status information (mod_status)"
}
func (n *Apache) Gather(acc inputs.Accumulator) error {
func (n *Apache) Gather(acc telegraf.Accumulator) error {
var wg sync.WaitGroup
var outerr error
@@ -59,7 +60,7 @@ var tr = &http.Transport{
var client = &http.Client{Transport: tr}
func (n *Apache) gatherUrl(addr *url.URL, acc inputs.Accumulator) error {
func (n *Apache) gatherUrl(addr *url.URL, acc telegraf.Accumulator) error {
resp, err := client.Get(addr.String())
if err != nil {
return fmt.Errorf("error making HTTP request to %s: %s", addr.String(), err)
@@ -164,7 +165,7 @@ func getTags(addr *url.URL) map[string]string {
}
func init() {
inputs.Add("apache", func() inputs.Input {
inputs.Add("apache", func() telegraf.Input {
return &Apache{}
})
}


@@ -8,6 +8,7 @@ import (
"strconv"
"strings"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -17,14 +18,14 @@ type Bcache struct {
}
var sampleConfig = `
# Bcache sets path
# If not specified, then default is:
# bcachePath = "/sys/fs/bcache"
#
# By default, telegraf gather stats for all bcache devices
# Setting devices will restrict the stats to the specified
# bcache devices.
# bcacheDevs = ["bcache0", ...]
## Bcache sets path
## If not specified, then default is:
bcachePath = "/sys/fs/bcache"
## By default, telegraf gathers stats for all bcache devices
## Setting devices will restrict the stats to the specified
## bcache devices.
bcacheDevs = ["bcache0"]
`
func (b *Bcache) SampleConfig() string {
@@ -69,7 +70,7 @@ func prettyToBytes(v string) uint64 {
return uint64(result)
}
func (b *Bcache) gatherBcache(bdev string, acc inputs.Accumulator) error {
func (b *Bcache) gatherBcache(bdev string, acc telegraf.Accumulator) error {
tags := getTags(bdev)
metrics, err := filepath.Glob(bdev + "/stats_total/*")
if len(metrics) < 0 {
@@ -104,7 +105,7 @@ func (b *Bcache) gatherBcache(bdev string, acc inputs.Accumulator) error {
return nil
}
func (b *Bcache) Gather(acc inputs.Accumulator) error {
func (b *Bcache) Gather(acc telegraf.Accumulator) error {
bcacheDevsChecked := make(map[string]bool)
var restrictDevs bool
if len(b.BcacheDevs) != 0 {
@@ -135,7 +136,7 @@ func (b *Bcache) Gather(acc inputs.Accumulator) error {
}
func init() {
inputs.Add("bcache", func() inputs.Input {
inputs.Add("bcache", func() telegraf.Input {
return &Bcache{}
})
}


@@ -0,0 +1,255 @@
# CouchDB Input Plugin
---
The CouchDB plugin gathers metrics from CouchDB using the [_stats](http://docs.couchdb.org/en/1.6.1/api/server/common.html?highlight=stats#get--_stats) endpoint.
### Configuration:
```
# Sample Config:
[[inputs.couchdb]]
hosts = ["http://localhost:5984/_stats"]
```
### Measurements & Fields:
Statistics specific to the internals of CouchDB:
- couchdb_auth_cache_misses
- couchdb_database_writes
- couchdb_open_databases
- couchdb_auth_cache_hits
- couchdb_request_time
- couchdb_database_reads
- couchdb_open_os_files
Statistics of HTTP requests by method:
- httpd_request_methods_put
- httpd_request_methods_get
- httpd_request_methods_copy
- httpd_request_methods_delete
- httpd_request_methods_post
- httpd_request_methods_head
Statistics of HTTP requests by response code:
- httpd_status_codes_200
- httpd_status_codes_201
- httpd_status_codes_202
- httpd_status_codes_301
- httpd_status_codes_304
- httpd_status_codes_400
- httpd_status_codes_401
- httpd_status_codes_403
- httpd_status_codes_404
- httpd_status_codes_405
- httpd_status_codes_409
- httpd_status_codes_412
- httpd_status_codes_500
httpd statistics:
- httpd_clients_requesting_changes
- httpd_temporary_view_reads
- httpd_requests
- httpd_bulk_requests
- httpd_view_reads
### Tags:
- server (url of the couchdb _stats endpoint)
### Example output:
```
➜ telegraf git:(master) ✗ ./telegraf -config ./config.conf -input-filter couchdb -test
* Plugin: couchdb,
Collection 1
> couchdb,server=http://localhost:5984/_stats couchdb_auth_cache_hits_current=0,
couchdb_auth_cache_hits_max=0,
couchdb_auth_cache_hits_mean=0,
couchdb_auth_cache_hits_min=0,
couchdb_auth_cache_hits_stddev=0,
couchdb_auth_cache_hits_sum=0,
couchdb_auth_cache_misses_current=0,
couchdb_auth_cache_misses_max=0,
couchdb_auth_cache_misses_mean=0,
couchdb_auth_cache_misses_min=0,
couchdb_auth_cache_misses_stddev=0,
couchdb_auth_cache_misses_sum=0,
couchdb_database_reads_current=0,
couchdb_database_reads_max=0,
couchdb_database_reads_mean=0,
couchdb_database_reads_min=0,
couchdb_database_reads_stddev=0,
couchdb_database_reads_sum=0,
couchdb_database_writes_current=1102,
couchdb_database_writes_max=131,
couchdb_database_writes_mean=0.116,
couchdb_database_writes_min=0,
couchdb_database_writes_stddev=3.536,
couchdb_database_writes_sum=1102,
couchdb_open_databases_current=1,
couchdb_open_databases_max=1,
couchdb_open_databases_mean=0,
couchdb_open_databases_min=0,
couchdb_open_databases_stddev=0.01,
couchdb_open_databases_sum=1,
couchdb_open_os_files_current=2,
couchdb_open_os_files_max=2,
couchdb_open_os_files_mean=0,
couchdb_open_os_files_min=0,
couchdb_open_os_files_stddev=0.02,
couchdb_open_os_files_sum=2,
couchdb_request_time_current=242.21,
couchdb_request_time_max=102,
couchdb_request_time_mean=5.767,
couchdb_request_time_min=1,
couchdb_request_time_stddev=17.369,
couchdb_request_time_sum=242.21,
httpd_bulk_requests_current=0,
httpd_bulk_requests_max=0,
httpd_bulk_requests_mean=0,
httpd_bulk_requests_min=0,
httpd_bulk_requests_stddev=0,
httpd_bulk_requests_sum=0,
httpd_clients_requesting_changes_current=0,
httpd_clients_requesting_changes_max=0,
httpd_clients_requesting_changes_mean=0,
httpd_clients_requesting_changes_min=0,
httpd_clients_requesting_changes_stddev=0,
httpd_clients_requesting_changes_sum=0,
httpd_request_methods_copy_current=0,
httpd_request_methods_copy_max=0,
httpd_request_methods_copy_mean=0,
httpd_request_methods_copy_min=0,
httpd_request_methods_copy_stddev=0,
httpd_request_methods_copy_sum=0,
httpd_request_methods_delete_current=0,
httpd_request_methods_delete_max=0,
httpd_request_methods_delete_mean=0,
httpd_request_methods_delete_min=0,
httpd_request_methods_delete_stddev=0,
httpd_request_methods_delete_sum=0,
httpd_request_methods_get_current=31,
httpd_request_methods_get_max=1,
httpd_request_methods_get_mean=0.003,
httpd_request_methods_get_min=0,
httpd_request_methods_get_stddev=0.057,
httpd_request_methods_get_sum=31,
httpd_request_methods_head_current=0,
httpd_request_methods_head_max=0,
httpd_request_methods_head_mean=0,
httpd_request_methods_head_min=0,
httpd_request_methods_head_stddev=0,
httpd_request_methods_head_sum=0,
httpd_request_methods_post_current=1102,
httpd_request_methods_post_max=131,
httpd_request_methods_post_mean=0.116,
httpd_request_methods_post_min=0,
httpd_request_methods_post_stddev=3.536,
httpd_request_methods_post_sum=1102,
httpd_request_methods_put_current=1,
httpd_request_methods_put_max=1,
httpd_request_methods_put_mean=0,
httpd_request_methods_put_min=0,
httpd_request_methods_put_stddev=0.01,
httpd_request_methods_put_sum=1,
httpd_requests_current=1133,
httpd_requests_max=130,
httpd_requests_mean=0.118,
httpd_requests_min=0,
httpd_requests_stddev=3.512,
httpd_requests_sum=1133,
httpd_status_codes_200_current=31,
httpd_status_codes_200_max=1,
httpd_status_codes_200_mean=0.003,
httpd_status_codes_200_min=0,
httpd_status_codes_200_stddev=0.057,
httpd_status_codes_200_sum=31,
httpd_status_codes_201_current=1103,
httpd_status_codes_201_max=130,
httpd_status_codes_201_mean=0.116,
httpd_status_codes_201_min=0,
httpd_status_codes_201_stddev=3.532,
httpd_status_codes_201_sum=1103,
httpd_status_codes_202_current=0,
httpd_status_codes_202_max=0,
httpd_status_codes_202_mean=0,
httpd_status_codes_202_min=0,
httpd_status_codes_202_stddev=0,
httpd_status_codes_202_sum=0,
httpd_status_codes_301_current=0,
httpd_status_codes_301_max=0,
httpd_status_codes_301_mean=0,
httpd_status_codes_301_min=0,
httpd_status_codes_301_stddev=0,
httpd_status_codes_301_sum=0,
httpd_status_codes_304_current=0,
httpd_status_codes_304_max=0,
httpd_status_codes_304_mean=0,
httpd_status_codes_304_min=0,
httpd_status_codes_304_stddev=0,
httpd_status_codes_304_sum=0,
httpd_status_codes_400_current=0,
httpd_status_codes_400_max=0,
httpd_status_codes_400_mean=0,
httpd_status_codes_400_min=0,
httpd_status_codes_400_stddev=0,
httpd_status_codes_400_sum=0,
httpd_status_codes_401_current=0,
httpd_status_codes_401_max=0,
httpd_status_codes_401_mean=0,
httpd_status_codes_401_min=0,
httpd_status_codes_401_stddev=0,
httpd_status_codes_401_sum=0,
httpd_status_codes_403_current=0,
httpd_status_codes_403_max=0,
httpd_status_codes_403_mean=0,
httpd_status_codes_403_min=0,
httpd_status_codes_403_stddev=0,
httpd_status_codes_403_sum=0,
httpd_status_codes_404_current=0,
httpd_status_codes_404_max=0,
httpd_status_codes_404_mean=0,
httpd_status_codes_404_min=0,
httpd_status_codes_404_stddev=0,
httpd_status_codes_404_sum=0,
httpd_status_codes_405_current=0,
httpd_status_codes_405_max=0,
httpd_status_codes_405_mean=0,
httpd_status_codes_405_min=0,
httpd_status_codes_405_stddev=0,
httpd_status_codes_405_sum=0,
httpd_status_codes_409_current=0,
httpd_status_codes_409_max=0,
httpd_status_codes_409_mean=0,
httpd_status_codes_409_min=0,
httpd_status_codes_409_stddev=0,
httpd_status_codes_409_sum=0,
httpd_status_codes_412_current=0,
httpd_status_codes_412_max=0,
httpd_status_codes_412_mean=0,
httpd_status_codes_412_min=0,
httpd_status_codes_412_stddev=0,
httpd_status_codes_412_sum=0,
httpd_status_codes_500_current=0,
httpd_status_codes_500_max=0,
httpd_status_codes_500_mean=0,
httpd_status_codes_500_min=0,
httpd_status_codes_500_stddev=0,
httpd_status_codes_500_sum=0,
httpd_temporary_view_reads_current=0,
httpd_temporary_view_reads_max=0,
httpd_temporary_view_reads_mean=0,
httpd_temporary_view_reads_min=0,
httpd_temporary_view_reads_stddev=0,
httpd_temporary_view_reads_sum=0,
httpd_view_reads_current=0,
httpd_view_reads_max=0,
httpd_view_reads_mean=0,
httpd_view_reads_min=0,
httpd_view_reads_stddev=0,
httpd_view_reads_sum=0 1454692257621938169
```


@@ -0,0 +1,205 @@
package couchdb
import (
"encoding/json"
"errors"
"fmt"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"net/http"
"reflect"
"strings"
"sync"
)
// Schema:
type metaData struct {
Description string `json:"description"`
Current float64 `json:"current"`
Sum float64 `json:"sum"`
Mean float64 `json:"mean"`
Stddev float64 `json:"stddev"`
Min float64 `json:"min"`
Max float64 `json:"max"`
}
type Stats struct {
Couchdb struct {
AuthCacheMisses metaData `json:"auth_cache_misses"`
DatabaseWrites metaData `json:"database_writes"`
OpenDatabases metaData `json:"open_databases"`
AuthCacheHits metaData `json:"auth_cache_hits"`
RequestTime metaData `json:"request_time"`
DatabaseReads metaData `json:"database_reads"`
OpenOsFiles metaData `json:"open_os_files"`
} `json:"couchdb"`
HttpdRequestMethods struct {
Put metaData `json:"PUT"`
Get metaData `json:"GET"`
Copy metaData `json:"COPY"`
Delete metaData `json:"DELETE"`
Post metaData `json:"POST"`
Head metaData `json:"HEAD"`
} `json:"httpd_request_methods"`
HttpdStatusCodes struct {
Status200 metaData `json:"200"`
Status201 metaData `json:"201"`
Status202 metaData `json:"202"`
Status301 metaData `json:"301"`
Status304 metaData `json:"304"`
Status400 metaData `json:"400"`
Status401 metaData `json:"401"`
Status403 metaData `json:"403"`
Status404 metaData `json:"404"`
Status405 metaData `json:"405"`
Status409 metaData `json:"409"`
Status412 metaData `json:"412"`
Status500 metaData `json:"500"`
} `json:"httpd_status_codes"`
Httpd struct {
ClientsRequestingChanges metaData `json:"clients_requesting_changes"`
TemporaryViewReads metaData `json:"temporary_view_reads"`
Requests metaData `json:"requests"`
BulkRequests metaData `json:"bulk_requests"`
ViewReads metaData `json:"view_reads"`
} `json:"httpd"`
}
type CouchDB struct {
HOSTs []string `toml:"hosts"`
}
func (*CouchDB) Description() string {
return "Read CouchDB Stats from one or more servers"
}
func (*CouchDB) SampleConfig() string {
return `
## Works with CouchDB stats endpoints out of the box
## Multiple HOSTs from which to read CouchDB stats:
hosts = ["http://localhost:5984/_stats"]
`
}
func (c *CouchDB) Gather(accumulator telegraf.Accumulator) error {
errorChannel := make(chan error, len(c.HOSTs))
var wg sync.WaitGroup
for _, u := range c.HOSTs {
wg.Add(1)
go func(host string) {
defer wg.Done()
if err := c.fetchAndInsertData(accumulator, host); err != nil {
errorChannel <- fmt.Errorf("[host=%s]: %s", host, err)
}
}(u)
}
wg.Wait()
close(errorChannel)
// If there weren't any errors, we can return nil now.
if len(errorChannel) == 0 {
return nil
}
// There were errors, so join them all together as one big error.
errorStrings := make([]string, 0, len(errorChannel))
for err := range errorChannel {
errorStrings = append(errorStrings, err.Error())
}
return errors.New(strings.Join(errorStrings, "\n"))
}
func (c *CouchDB) fetchAndInsertData(accumulator telegraf.Accumulator, host string) error {
response, err := http.Get(host)
if err != nil {
return err
}
defer response.Body.Close()
var stats Stats
decoder := json.NewDecoder(response.Body)
if err := decoder.Decode(&stats); err != nil {
return err
}
fields := map[string]interface{}{}
// CouchDB meta stats:
c.MapCopy(fields, c.generateFields("couchdb_auth_cache_misses", stats.Couchdb.AuthCacheMisses))
c.MapCopy(fields, c.generateFields("couchdb_database_writes", stats.Couchdb.DatabaseWrites))
c.MapCopy(fields, c.generateFields("couchdb_open_databases", stats.Couchdb.OpenDatabases))
c.MapCopy(fields, c.generateFields("couchdb_auth_cache_hits", stats.Couchdb.AuthCacheHits))
c.MapCopy(fields, c.generateFields("couchdb_request_time", stats.Couchdb.RequestTime))
c.MapCopy(fields, c.generateFields("couchdb_database_reads", stats.Couchdb.DatabaseReads))
c.MapCopy(fields, c.generateFields("couchdb_open_os_files", stats.Couchdb.OpenOsFiles))
// http request methods stats:
c.MapCopy(fields, c.generateFields("httpd_request_methods_put", stats.HttpdRequestMethods.Put))
c.MapCopy(fields, c.generateFields("httpd_request_methods_get", stats.HttpdRequestMethods.Get))
c.MapCopy(fields, c.generateFields("httpd_request_methods_copy", stats.HttpdRequestMethods.Copy))
c.MapCopy(fields, c.generateFields("httpd_request_methods_delete", stats.HttpdRequestMethods.Delete))
c.MapCopy(fields, c.generateFields("httpd_request_methods_post", stats.HttpdRequestMethods.Post))
c.MapCopy(fields, c.generateFields("httpd_request_methods_head", stats.HttpdRequestMethods.Head))
// status code stats:
c.MapCopy(fields, c.generateFields("httpd_status_codes_200", stats.HttpdStatusCodes.Status200))
c.MapCopy(fields, c.generateFields("httpd_status_codes_201", stats.HttpdStatusCodes.Status201))
c.MapCopy(fields, c.generateFields("httpd_status_codes_202", stats.HttpdStatusCodes.Status202))
c.MapCopy(fields, c.generateFields("httpd_status_codes_301", stats.HttpdStatusCodes.Status301))
c.MapCopy(fields, c.generateFields("httpd_status_codes_304", stats.HttpdStatusCodes.Status304))
c.MapCopy(fields, c.generateFields("httpd_status_codes_400", stats.HttpdStatusCodes.Status400))
c.MapCopy(fields, c.generateFields("httpd_status_codes_401", stats.HttpdStatusCodes.Status401))
c.MapCopy(fields, c.generateFields("httpd_status_codes_403", stats.HttpdStatusCodes.Status403))
c.MapCopy(fields, c.generateFields("httpd_status_codes_404", stats.HttpdStatusCodes.Status404))
c.MapCopy(fields, c.generateFields("httpd_status_codes_405", stats.HttpdStatusCodes.Status405))
c.MapCopy(fields, c.generateFields("httpd_status_codes_409", stats.HttpdStatusCodes.Status409))
c.MapCopy(fields, c.generateFields("httpd_status_codes_412", stats.HttpdStatusCodes.Status412))
c.MapCopy(fields, c.generateFields("httpd_status_codes_500", stats.HttpdStatusCodes.Status500))
// httpd stats:
c.MapCopy(fields, c.generateFields("httpd_clients_requesting_changes", stats.Httpd.ClientsRequestingChanges))
c.MapCopy(fields, c.generateFields("httpd_temporary_view_reads", stats.Httpd.TemporaryViewReads))
c.MapCopy(fields, c.generateFields("httpd_requests", stats.Httpd.Requests))
c.MapCopy(fields, c.generateFields("httpd_bulk_requests", stats.Httpd.BulkRequests))
c.MapCopy(fields, c.generateFields("httpd_view_reads", stats.Httpd.ViewReads))
tags := map[string]string{
"server": host,
}
accumulator.AddFields("couchdb", fields, tags)
return nil
}
func (*CouchDB) MapCopy(dst, src interface{}) {
dv, sv := reflect.ValueOf(dst), reflect.ValueOf(src)
for _, k := range sv.MapKeys() {
dv.SetMapIndex(k, sv.MapIndex(k))
}
}
func (*CouchDB) safeCheck(value interface{}) interface{} {
if value == nil {
return 0.0
}
return value
}
func (c *CouchDB) generateFields(prefix string, obj metaData) map[string]interface{} {
fields := map[string]interface{}{
prefix + "_current": c.safeCheck(obj.Current),
prefix + "_sum": c.safeCheck(obj.Sum),
prefix + "_mean": c.safeCheck(obj.Mean),
prefix + "_stddev": c.safeCheck(obj.Stddev),
prefix + "_min": c.safeCheck(obj.Min),
prefix + "_max": c.safeCheck(obj.Max),
}
return fields
}
func init() {
inputs.Add("couchdb", func() telegraf.Input {
return &CouchDB{}
})
}


@@ -0,0 +1,320 @@
package couchdb_test
import (
"github.com/influxdata/telegraf/plugins/inputs/couchdb"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/require"
"net/http"
"net/http/httptest"
"testing"
)
func TestBasic(t *testing.T) {
js := `
{
"couchdb": {
"auth_cache_misses": {
"description": "number of authentication cache misses",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
},
"database_writes": {
"description": "number of times a database was changed",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
},
"open_databases": {
"description": "number of open databases",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
},
"auth_cache_hits": {
"description": "number of authentication cache hits",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
},
"request_time": {
"description": "length of a request inside CouchDB without MochiWeb",
"current": 18.0,
"sum": 18.0,
"mean": 18.0,
"stddev": null,
"min": 18.0,
"max": 18.0
},
"database_reads": {
"description": "number of times a document was read from a database",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
},
"open_os_files": {
"description": "number of file descriptors CouchDB has open",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
}
},
"httpd_request_methods": {
"PUT": {
"description": "number of HTTP PUT requests",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
},
"GET": {
"description": "number of HTTP GET requests",
"current": 2.0,
"sum": 2.0,
"mean": 0.25,
"stddev": 0.70699999999999996181,
"min": 0,
"max": 2
},
"COPY": {
"description": "number of HTTP COPY requests",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
},
"DELETE": {
"description": "number of HTTP DELETE requests",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
},
"POST": {
"description": "number of HTTP POST requests",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
},
"HEAD": {
"description": "number of HTTP HEAD requests",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
}
},
"httpd_status_codes": {
"403": {
"description": "number of HTTP 403 Forbidden responses",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
},
"202": {
"description": "number of HTTP 202 Accepted responses",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
},
"401": {
"description": "number of HTTP 401 Unauthorized responses",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
},
"409": {
"description": "number of HTTP 409 Conflict responses",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
},
"200": {
"description": "number of HTTP 200 OK responses",
"current": 1.0,
"sum": 1.0,
"mean": 0.125,
"stddev": 0.35399999999999998135,
"min": 0,
"max": 1
},
"405": {
"description": "number of HTTP 405 Method Not Allowed responses",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
},
"400": {
"description": "number of HTTP 400 Bad Request responses",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
},
"201": {
"description": "number of HTTP 201 Created responses",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
},
"404": {
"description": "number of HTTP 404 Not Found responses",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
},
"500": {
"description": "number of HTTP 500 Internal Server Error responses",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
},
"412": {
"description": "number of HTTP 412 Precondition Failed responses",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
},
"301": {
"description": "number of HTTP 301 Moved Permanently responses",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
},
"304": {
"description": "number of HTTP 304 Not Modified responses",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
}
},
"httpd": {
"clients_requesting_changes": {
"description": "number of clients for continuous _changes",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
},
"temporary_view_reads": {
"description": "number of temporary view reads",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
},
"requests": {
"description": "number of HTTP requests",
"current": 2.0,
"sum": 2.0,
"mean": 0.25,
"stddev": 0.70699999999999996181,
"min": 0,
"max": 2
},
"bulk_requests": {
"description": "number of bulk requests",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
},
"view_reads": {
"description": "number of view reads",
"current": null,
"sum": null,
"mean": null,
"stddev": null,
"min": null,
"max": null
}
}
}
`
fakeServer := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.URL.Path == "/_stats" {
_, _ = w.Write([]byte(js))
} else {
w.WriteHeader(http.StatusNotFound)
}
}))
defer fakeServer.Close()
plugin := &couchdb.CouchDB{
HOSTs: []string{fakeServer.URL + "/_stats"},
}
var acc testutil.Accumulator
require.NoError(t, plugin.Gather(&acc))
}


@@ -10,6 +10,7 @@ import (
"strings"
"sync"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -21,11 +22,11 @@ type Disque struct {
}
var sampleConfig = `
# An array of URI to gather stats about. Specify an ip or hostname
# with optional port and password. ie disque://localhost, disque://10.10.3.33:18832,
# 10.0.0.1:10000, etc.
#
# If no servers are specified, then localhost is used as the host.
## An array of URI to gather stats about. Specify an ip or hostname
## with optional port and password, e.g. disque://localhost, disque://10.10.3.33:18832,
## 10.0.0.1:10000, etc.
## If no servers are specified, then localhost is used as the host.
servers = ["localhost"]
`
@@ -61,7 +62,7 @@ var ErrProtocolError = errors.New("disque protocol error")
// Reads stats from all configured servers accumulates stats.
// Returns one of the errors encountered while gather stats (if any).
func (g *Disque) Gather(acc inputs.Accumulator) error {
func (g *Disque) Gather(acc telegraf.Accumulator) error {
if len(g.Servers) == 0 {
url := &url.URL{
Host: ":7711",
@@ -98,7 +99,7 @@ func (g *Disque) Gather(acc inputs.Accumulator) error {
const defaultPort = "7711"
func (g *Disque) gatherServer(addr *url.URL, acc inputs.Accumulator) error {
func (g *Disque) gatherServer(addr *url.URL, acc telegraf.Accumulator) error {
if g.c == nil {
_, _, err := net.SplitHostPort(addr.Host)
@@ -198,7 +199,7 @@ func (g *Disque) gatherServer(addr *url.URL, acc inputs.Accumulator) error {
}
func init() {
inputs.Add("disque", func() inputs.Input {
inputs.Add("disque", func() telegraf.Input {
return &Disque{}
})
}


@@ -0,0 +1,51 @@
# DNS Query Input Plugin
The DNS plugin gathers DNS query times in milliseconds - like [Dig](https://en.wikipedia.org/wiki/Dig_\(command\))
### Configuration:
```
# Sample Config:
[[inputs.dns_query]]
## servers to query
servers = ["8.8.8.8"] # required
## Domains or subdomains to query. "." (root) is default
domains = ["."] # optional
## Query record type. Possible values: A, AAAA, ANY, CNAME, MX, NS, PTR, SOA, SPF, SRV, TXT. Default is "NS"
record_type = "A" # optional
## DNS server port. 53 is default
port = 53 # optional
## Query timeout in seconds. Default is 2 seconds
timeout = 2 # optional
```
To query more than one record type, define multiple plugin instances:
```
[[inputs.dns_query]]
domains = ["mjasion.pl"]
servers = ["8.8.8.8", "8.8.4.4"]
record_type = "A"
[[inputs.dns_query]]
domains = ["mjasion.pl"]
servers = ["8.8.8.8", "8.8.4.4"]
record_type = "MX"
```
### Tags:
- server
- domain
- record_type
### Example output:
```
./telegraf -config telegraf.conf -test -input-filter dns_query -test
> dns_query,domain=mjasion.pl,record_type=A,server=8.8.8.8 query_time_ms=67.189842 1456082743585760680
```


@@ -0,0 +1,159 @@
package dns_query
import (
"errors"
"fmt"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/miekg/dns"
"net"
"strconv"
"time"
)
type DnsQuery struct {
// Domains or subdomains to query
Domains []string
// Server to query
Servers []string
// Record type
RecordType string `toml:"record_type"`
// DNS server port number
Port int
// Dns query timeout in seconds. 0 means no timeout
Timeout int
}
var sampleConfig = `
## servers to query
servers = ["8.8.8.8"] # required
## Domains or subdomains to query. "."(root) is default
domains = ["."] # optional
## Query record type. Posible values: A, AAAA, CNAME, MX, NS, PTR, TXT, SOA, SPF, SRV. Default is "NS"
record_type = "A" # optional
## Dns server port. 53 is default
port = 53 # optional
## Query timeout in seconds. Default is 2 seconds
timeout = 2 # optional
`
func (d *DnsQuery) SampleConfig() string {
return sampleConfig
}
func (d *DnsQuery) Description() string {
return "Query given DNS server and gives statistics"
}
func (d *DnsQuery) Gather(acc telegraf.Accumulator) error {
d.setDefaultValues()
for _, domain := range d.Domains {
for _, server := range d.Servers {
dnsQueryTime, err := d.getDnsQueryTime(domain, server)
if err != nil {
return err
}
tags := map[string]string{
"server": server,
"domain": domain,
"record_type": d.RecordType,
}
fields := map[string]interface{}{"query_time_ms": dnsQueryTime}
acc.AddFields("dns_query", fields, tags)
}
}
return nil
}
func (d *DnsQuery) setDefaultValues() {
if len(d.RecordType) == 0 {
d.RecordType = "NS"
}
if len(d.Domains) == 0 {
d.Domains = []string{"."}
d.RecordType = "NS"
}
if d.Port == 0 {
d.Port = 53
}
if d.Timeout == 0 {
d.Timeout = 2
}
}
func (d *DnsQuery) getDnsQueryTime(domain string, server string) (float64, error) {
dnsQueryTime := float64(0)
c := new(dns.Client)
c.ReadTimeout = time.Duration(d.Timeout) * time.Second
m := new(dns.Msg)
recordType, err := d.parseRecordType()
if err != nil {
return dnsQueryTime, err
}
m.SetQuestion(dns.Fqdn(domain), recordType)
m.RecursionDesired = true
r, rtt, err := c.Exchange(m, net.JoinHostPort(server, strconv.Itoa(d.Port)))
if err != nil {
return dnsQueryTime, err
}
if r.Rcode != dns.RcodeSuccess {
return dnsQueryTime, fmt.Errorf("Invalid answer from %s after %s query for %s", server, d.RecordType, domain)
}
dnsQueryTime = float64(rtt.Nanoseconds()) / 1e6
return dnsQueryTime, nil
}
func (d *DnsQuery) parseRecordType() (uint16, error) {
var recordType uint16
var err error
switch d.RecordType {
case "A":
recordType = dns.TypeA
case "AAAA":
recordType = dns.TypeAAAA
case "ANY":
recordType = dns.TypeANY
case "CNAME":
recordType = dns.TypeCNAME
case "MX":
recordType = dns.TypeMX
case "NS":
recordType = dns.TypeNS
case "PTR":
recordType = dns.TypePTR
case "SOA":
recordType = dns.TypeSOA
case "SPF":
recordType = dns.TypeSPF
case "SRV":
recordType = dns.TypeSRV
case "TXT":
recordType = dns.TypeTXT
default:
err = fmt.Errorf("Record type %s not recognized", d.RecordType)
}
return recordType, err
}
func init() {
inputs.Add("dns_query", func() telegraf.Input {
return &DnsQuery{}
})
}


@@ -0,0 +1,184 @@
package dns_query
import (
"github.com/influxdata/telegraf/testutil"
"github.com/miekg/dns"
"github.com/stretchr/testify/assert"
"testing"
"time"
)
var servers = []string{"8.8.8.8"}
var domains = []string{"mjasion.pl"}
func TestGathering(t *testing.T) {
var dnsConfig = DnsQuery{
Servers: servers,
Domains: domains,
}
var acc testutil.Accumulator
dnsConfig.Gather(&acc)
metric, _ := acc.Get("dns_query")
queryTime, _ := metric.Fields["query_time_ms"].(float64)
assert.NotEqual(t, 0, queryTime)
}
func TestGatheringMxRecord(t *testing.T) {
var dnsConfig = DnsQuery{
Servers: servers,
Domains: domains,
}
var acc testutil.Accumulator
dnsConfig.RecordType = "MX"
dnsConfig.Gather(&acc)
metric, _ := acc.Get("dns_query")
queryTime, _ := metric.Fields["query_time_ms"].(float64)
assert.NotEqual(t, 0, queryTime)
}
func TestGatheringRootDomain(t *testing.T) {
var dnsConfig = DnsQuery{
Servers: servers,
Domains: []string{"."},
RecordType: "MX",
}
var acc testutil.Accumulator
tags := map[string]string{
"server": "8.8.8.8",
"domain": ".",
"record_type": "MX",
}
fields := map[string]interface{}{}
dnsConfig.Gather(&acc)
metric, _ := acc.Get("dns_query")
queryTime, _ := metric.Fields["query_time_ms"].(float64)
fields["query_time_ms"] = queryTime
acc.AssertContainsTaggedFields(t, "dns_query", fields, tags)
}
func TestMetricContainsServerAndDomainAndRecordTypeTags(t *testing.T) {
var dnsConfig = DnsQuery{
Servers: servers,
Domains: domains,
}
var acc testutil.Accumulator
tags := map[string]string{
"server": "8.8.8.8",
"domain": "mjasion.pl",
"record_type": "NS",
}
fields := map[string]interface{}{}
dnsConfig.Gather(&acc)
metric, _ := acc.Get("dns_query")
queryTime, _ := metric.Fields["query_time_ms"].(float64)
fields["query_time_ms"] = queryTime
acc.AssertContainsTaggedFields(t, "dns_query", fields, tags)
}
func TestGatheringTimeout(t *testing.T) {
var dnsConfig = DnsQuery{
Servers: servers,
Domains: domains,
}
var acc testutil.Accumulator
dnsConfig.Port = 60054
dnsConfig.Timeout = 1
var err error
channel := make(chan error, 1)
go func() {
channel <- dnsConfig.Gather(&acc)
}()
select {
case res := <-channel:
err = res
case <-time.After(time.Second * 2):
err = nil
}
assert.Error(t, err)
assert.Contains(t, err.Error(), "i/o timeout")
}
func TestSettingDefaultValues(t *testing.T) {
dnsConfig := DnsQuery{}
dnsConfig.setDefaultValues()
assert.Equal(t, []string{"."}, dnsConfig.Domains, "Default domain not equal \".\"")
assert.Equal(t, "NS", dnsConfig.RecordType, "Default record type not equal 'NS'")
assert.Equal(t, 53, dnsConfig.Port, "Default port number not equal 53")
assert.Equal(t, 2, dnsConfig.Timeout, "Default timeout not equal 2")
dnsConfig = DnsQuery{Domains: []string{"."}}
dnsConfig.setDefaultValues()
assert.Equal(t, "NS", dnsConfig.RecordType, "Default record type not equal 'NS'")
}
func TestRecordTypeParser(t *testing.T) {
var dnsConfig = DnsQuery{}
var recordType uint16
dnsConfig.RecordType = "A"
recordType, _ = dnsConfig.parseRecordType()
assert.Equal(t, dns.TypeA, recordType)
dnsConfig.RecordType = "AAAA"
recordType, _ = dnsConfig.parseRecordType()
assert.Equal(t, dns.TypeAAAA, recordType)
dnsConfig.RecordType = "ANY"
recordType, _ = dnsConfig.parseRecordType()
assert.Equal(t, dns.TypeANY, recordType)
dnsConfig.RecordType = "CNAME"
recordType, _ = dnsConfig.parseRecordType()
assert.Equal(t, dns.TypeCNAME, recordType)
dnsConfig.RecordType = "MX"
recordType, _ = dnsConfig.parseRecordType()
assert.Equal(t, dns.TypeMX, recordType)
dnsConfig.RecordType = "NS"
recordType, _ = dnsConfig.parseRecordType()
assert.Equal(t, dns.TypeNS, recordType)
dnsConfig.RecordType = "PTR"
recordType, _ = dnsConfig.parseRecordType()
assert.Equal(t, dns.TypePTR, recordType)
dnsConfig.RecordType = "SOA"
recordType, _ = dnsConfig.parseRecordType()
assert.Equal(t, dns.TypeSOA, recordType)
dnsConfig.RecordType = "SPF"
recordType, _ = dnsConfig.parseRecordType()
assert.Equal(t, dns.TypeSPF, recordType)
dnsConfig.RecordType = "SRV"
recordType, _ = dnsConfig.parseRecordType()
assert.Equal(t, dns.TypeSRV, recordType)
dnsConfig.RecordType = "TXT"
recordType, _ = dnsConfig.parseRecordType()
assert.Equal(t, dns.TypeTXT, recordType)
}
func TestRecordTypeParserError(t *testing.T) {
var dnsConfig = DnsQuery{}
var err error
dnsConfig.RecordType = "nil"
_, err = dnsConfig.parseRecordType()
assert.Error(t, err)
}


@@ -2,10 +2,12 @@ package system
import (
"fmt"
"log"
"strings"
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/fsouza/go-dockerclient"
@@ -19,11 +21,11 @@ type Docker struct {
}
var sampleConfig = `
# Docker Endpoint
# To use TCP, set endpoint = "tcp://[ip]:[port]"
# To use environment variables (ie, docker-machine), set endpoint = "ENV"
## Docker Endpoint
## To use TCP, set endpoint = "tcp://[ip]:[port]"
## To use environment variables (ie, docker-machine), set endpoint = "ENV"
endpoint = "unix:///var/run/docker.sock"
# Only collect metrics for these containers, collect all if empty
## Only collect metrics for these containers, collect all if empty
container_names = []
`
@@ -33,7 +35,7 @@ func (d *Docker) Description() string {
func (d *Docker) SampleConfig() string { return sampleConfig }
func (d *Docker) Gather(acc inputs.Accumulator) error {
func (d *Docker) Gather(acc telegraf.Accumulator) error {
if d.client == nil {
var c *docker.Client
var err error
@@ -65,6 +67,7 @@ func (d *Docker) Gather(acc inputs.Accumulator) error {
var wg sync.WaitGroup
wg.Add(len(containers))
for _, container := range containers {
go func(c docker.APIContainers) {
defer wg.Done()
err := d.gatherContainer(c, acc)
@@ -80,7 +83,7 @@ func (d *Docker) Gather(acc inputs.Accumulator) error {
func (d *Docker) gatherContainer(
container docker.APIContainers,
acc inputs.Accumulator,
acc telegraf.Accumulator,
) error {
// Parse container name
cname := "unknown"
@@ -111,12 +114,19 @@ func (d *Docker) gatherContainer(
}
go func() {
d.client.Stats(statOpts)
err := d.client.Stats(statOpts)
if err != nil {
log.Printf("Error getting docker stats: %s\n", err.Error())
}
}()
stat := <-statChan
close(done)
if stat == nil {
return nil
}
// Add labels to tags
for k, v := range container.Labels {
tags[k] = v
@@ -129,7 +139,7 @@ func (d *Docker) gatherContainer(
func gatherContainerStats(
stat *docker.Stats,
acc inputs.Accumulator,
acc telegraf.Accumulator,
tags map[string]string,
) {
now := stat.Read
@@ -168,6 +178,7 @@ func gatherContainerStats(
"pgfault": stat.MemoryStats.Stats.Pgfault,
"inactive_file": stat.MemoryStats.Stats.InactiveFile,
"total_pgpgin": stat.MemoryStats.Stats.TotalPgpgin,
"usage_percent": calculateMemPercent(stat),
}
acc.AddFields("docker_mem", memfields, tags, now)
@@ -179,6 +190,7 @@ func gatherContainerStats(
"throttling_periods": stat.CPUStats.ThrottlingData.Periods,
"throttling_throttled_periods": stat.CPUStats.ThrottlingData.ThrottledPeriods,
"throttling_throttled_time": stat.CPUStats.ThrottlingData.ThrottledTime,
"usage_percent": calculateCPUPercent(stat),
}
cputags := copyTags(tags)
cputags["cpu"] = "cpu-total"
@@ -210,9 +222,29 @@ func gatherContainerStats(
gatherBlockIOMetrics(stat, acc, tags, now)
}
func calculateMemPercent(stat *docker.Stats) float64 {
var memPercent = 0.0
if stat.MemoryStats.Limit > 0 {
memPercent = float64(stat.MemoryStats.Usage) / float64(stat.MemoryStats.Limit) * 100.0
}
return memPercent
}
func calculateCPUPercent(stat *docker.Stats) float64 {
var cpuPercent = 0.0
// calculate the change for the cpu and system usage of the container in between readings
cpuDelta := float64(stat.CPUStats.CPUUsage.TotalUsage) - float64(stat.PreCPUStats.CPUUsage.TotalUsage)
systemDelta := float64(stat.CPUStats.SystemCPUUsage) - float64(stat.PreCPUStats.SystemCPUUsage)
if systemDelta > 0.0 && cpuDelta > 0.0 {
cpuPercent = (cpuDelta / systemDelta) * float64(len(stat.CPUStats.CPUUsage.PercpuUsage)) * 100.0
}
return cpuPercent
}
func gatherBlockIOMetrics(
stat *docker.Stats,
acc inputs.Accumulator,
acc telegraf.Accumulator,
tags map[string]string,
now time.Time,
) {
@@ -303,7 +335,7 @@ func sliceContains(in string, sl []string) bool {
}
func init() {
inputs.Add("docker", func() inputs.Input {
inputs.Add("docker", func() telegraf.Input {
return &Docker{}
})
}
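A worked example of the calculateCPUPercent formula added above, using the same deltas as the updated test fixture (total usage 400→500, system usage 50→100; two per-CPU entries assumed, matching the expected usage_percent of 400.0):

```go
package main

import "fmt"

func main() {
	cpuDelta := 500.0 - 400.0   // container CPU usage delta between readings
	systemDelta := 100.0 - 50.0 // system CPU usage delta between readings
	numCPUs := 2.0              // len(PercpuUsage), assumed from the test fixture
	fmt.Println(cpuDelta / systemDelta * numCPUs * 100.0) // 400
}
```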


@@ -49,7 +49,7 @@ func TestDockerGatherContainerStats(t *testing.T) {
"max_usage": uint64(1001),
"usage": uint64(1111),
"fail_count": uint64(1),
"limit": uint64(20),
"limit": uint64(2000),
"total_pgmafault": uint64(0),
"cache": uint64(0),
"mapped_file": uint64(0),
@@ -79,7 +79,9 @@ func TestDockerGatherContainerStats(t *testing.T) {
"pgfault": uint64(2),
"inactive_file": uint64(3),
"total_pgpgin": uint64(4),
"usage_percent": float64(55.55),
}
acc.AssertContainsTaggedFields(t, "docker_mem", memfields, tags)
// test docker_cpu measurement
@@ -93,6 +95,7 @@ func TestDockerGatherContainerStats(t *testing.T) {
"throttling_periods": uint64(1),
"throttling_throttled_periods": uint64(0),
"throttling_throttled_time": uint64(0),
"usage_percent": float64(400.0),
}
acc.AssertContainsTaggedFields(t, "docker_cpu", cpufields, cputags)
@@ -122,6 +125,9 @@ func testStats() *docker.Stats {
stats.CPUStats.SystemCPUUsage = 100
stats.CPUStats.ThrottlingData.Periods = 1
stats.PreCPUStats.CPUUsage.TotalUsage = 400
stats.PreCPUStats.SystemCPUUsage = 50
stats.MemoryStats.Stats.TotalPgmafault = 0
stats.MemoryStats.Stats.Cache = 0
stats.MemoryStats.Stats.MappedFile = 0
@@ -155,7 +161,7 @@ func testStats() *docker.Stats {
stats.MemoryStats.MaxUsage = 1001
stats.MemoryStats.Usage = 1111
stats.MemoryStats.Failcnt = 1
stats.MemoryStats.Limit = 20
stats.MemoryStats.Limit = 2000
stats.Networks["eth0"] = docker.NetworkStats{
RxDropped: 1,


@@ -0,0 +1,67 @@
# Dovecot Input Plugin
The dovecot plugin uses the dovecot Stats protocol to gather metrics on configured
domains. You can read Dovecot's documentation
[here](http://wiki2.dovecot.org/Statistics)
### Configuration:
```
# Read metrics about dovecot servers
[[inputs.dovecot]]
# Dovecot servers
# specify dovecot servers via an address:port list
# e.g.
# localhost:24242
#
# If no servers are specified, then localhost is used as the host.
servers = ["localhost:24242"]
# Only collect metrics for these domains, collect all if empty
domains = []
```
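The plugin speaks Dovecot's line-based Stats protocol over TCP. A minimal Go sketch (assuming a local stats listener on the default port) of the exchange it performs:

```go
package main

import (
	"io"
	"net"
	"os"
)

func main() {
	// Dial the Dovecot stats listener (default port assumed).
	c, err := net.Dial("tcp", "localhost:24242")
	if err != nil {
		panic(err)
	}
	defer c.Close()
	// Request per-domain counters, as the plugin does.
	c.Write([]byte("EXPORT\tdomain\n\n"))
	// First line is a tab-separated header; each following line is one domain.
	io.Copy(os.Stdout, c)
}
```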
### Tags:
- server: hostname
- domain: domain name
### Fields:
- reset_timestamp (time.Time)
- last_update (time.Time)
- num_logins (int64)
- num_cmds (int64)
- num_connected_sessions (int64)
- user_cpu (float32)
- sys_cpu (float32)
- clock_time (float64)
- min_faults (int64)
- maj_faults (int64)
- vol_cs (int64)
- invol_cs (int64)
- disk_input (int64)
- disk_output (int64)
- read_count (int64)
- read_bytes (int64)
- write_count (int64)
- write_bytes (int64)
- mail_lookup_path (int64)
- mail_lookup_attr (int64)
- mail_read_count (int64)
- mail_read_bytes (int64)
- mail_cache_hits (int64)
### Example Output:
```
telegraf -config telegraf.cfg -input-filter dovecot -test
* Plugin: dovecot, Collection 1
> dovecot,domain=xxxxx.it,server=dovecot--1.mail.sys clock_time=12105746411632.5,disk_input=115285225472i,disk_output=4885067755520i,invol_cs=169701886i,last_update="2016-02-09 08:49:47.000014113 +0100 CET",mail_cache_hits=441828i,mail_lookup_attr=0i,mail_lookup_path=25323i,mail_read_bytes=241188145i,mail_read_count=11719i,maj_faults=3168i,min_faults=321438988i,num_cmds=51635i,num_connected_sessions=2i,num_logins=17149i,read_bytes=7939026951110i,read_count=3716991752i,reset_timestamp="2016-01-28 09:34:36 +0100 CET",sys_cpu=222595.288,user_cpu=267468.08,vol_cs=3288715920i,write_bytes=4483648967059i,write_count=1640646952i 1455004219924838345
> dovecot,domain=yyyyy.com,server=dovecot-1.mail.sys clock_time=6650794455331782,disk_input=61957695569920i,disk_output=2638244004487168i,invol_cs=2004805041i,last_update="2016-02-09 08:49:49.000251296 +0100 CET",mail_cache_hits=2499112513i,mail_lookup_attr=506730i,mail_lookup_path=39128227i,mail_read_bytes=1076496874501i,mail_read_count=32615262i,maj_faults=1643304i,min_faults=4216116325i,num_cmds=85785559i,num_connected_sessions=1177i,num_logins=11658255i,read_bytes=4289150974554145i,read_count=1112000703i,reset_timestamp="2016-01-28 09:31:26 +0100 CET",sys_cpu=121125923.032,user_cpu=145561336.428,vol_cs=205451885i,write_bytes=2420130526835796i,write_count=2991367252i 1455004219925152529
> dovecot,domain=xxxxx.it,server=dovecot-2.mail.sys clock_time=10710826586999.143,disk_input=79792410624i,disk_output=4496066158592i,invol_cs=150426876i,last_update="2016-02-09 08:48:19.000209134 +0100 CET",mail_cache_hits=5480869i,mail_lookup_attr=0i,mail_lookup_path=122563i,mail_read_bytes=340746273i,mail_read_count=44275i,maj_faults=1722i,min_faults=288071875i,num_cmds=50098i,num_connected_sessions=0i,num_logins=16389i,read_bytes=7259551999517i,read_count=3396625369i,reset_timestamp="2016-01-28 09:31:29 +0100 CET",sys_cpu=200762.792,user_cpu=242477.664,vol_cs=2996657358i,write_bytes=4133381575263i,write_count=1497242759i 1455004219924888283
> dovecot,domain=yyyyy.com,server=dovecot-2.mail.sys clock_time=6522131245483702,disk_input=48259150004224i,disk_output=2754333359087616i,invol_cs=2294595260i,last_update="2016-02-09 08:49:49.000251919 +0100 CET",mail_cache_hits=2139113611i,mail_lookup_attr=520276i,mail_lookup_path=37940318i,mail_read_bytes=1088002215022i,mail_read_count=31350271i,maj_faults=994420i,min_faults=1486260543i,num_cmds=40414997i,num_connected_sessions=978i,num_logins=11259672i,read_bytes=4445546612487315i,read_count=1763534543i,reset_timestamp="2016-01-28 09:31:24 +0100 CET",sys_cpu=123655962.668,user_cpu=149259327.032,vol_cs=4215130546i,write_bytes=2531186030222761i,write_count=2186579650i 1455004219925398372
```


@@ -0,0 +1,166 @@
package dovecot
import (
"bytes"
"fmt"
"io"
"net"
"strconv"
"strings"
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
type Dovecot struct {
Servers []string
Domains []string
}
func (d *Dovecot) Description() string {
return "Read statistics from one or many dovecot servers"
}
var sampleConfig = `
## specify dovecot servers via an address:port list
## e.g.
## localhost:24242
##
## If no servers are specified, then localhost is used as the host.
servers = ["localhost:24242"]
## Only collect metrics for these domains, collect all if empty
domains = []
`
func (d *Dovecot) SampleConfig() string { return sampleConfig }
const defaultPort = "24242"
// Reads stats from all configured servers.
func (d *Dovecot) Gather(acc telegraf.Accumulator) error {
if len(d.Servers) == 0 {
d.Servers = append(d.Servers, "127.0.0.1:24242")
}
var wg sync.WaitGroup
var outerr error
var domains = make(map[string]bool)
for _, dom := range d.Domains {
domains[dom] = true
}
for _, serv := range d.Servers {
wg.Add(1)
go func(serv string) {
defer wg.Done()
outerr = d.gatherServer(serv, acc, domains)
}(serv)
}
wg.Wait()
return outerr
}
func (d *Dovecot) gatherServer(addr string, acc telegraf.Accumulator, doms map[string]bool) error {
_, _, err := net.SplitHostPort(addr)
if err != nil {
return fmt.Errorf("Error: %s on url %s\n", err, addr)
}
c, err := net.Dial("tcp", addr)
if err != nil {
return fmt.Errorf("Unable to connect to dovecot server '%s': %s", addr, err)
}
defer c.Close()
c.Write([]byte("EXPORT\tdomain\n\n"))
var buf bytes.Buffer
io.Copy(&buf, c)
// buf := bufio.NewReader(c)
host, _, _ := net.SplitHostPort(addr)
return gatherStats(&buf, acc, doms, host)
}
func gatherStats(buf *bytes.Buffer, acc telegraf.Accumulator, doms map[string]bool, host string) error {
lines := strings.Split(buf.String(), "\n")
head := strings.Split(lines[0], "\t")
vals := lines[1:]
for i := range vals {
if vals[i] == "" {
continue
}
val := strings.Split(vals[i], "\t")
fields := make(map[string]interface{})
if len(doms) > 0 && !doms[val[0]] {
continue
}
tags := map[string]string{"server": host, "domain": val[0]}
for n := range val {
switch head[n] {
case "domain":
continue
// fields[head[n]] = val[n]
case "user_cpu", "sys_cpu", "clock_time":
fields[head[n]] = secParser(val[n])
case "reset_timestamp", "last_update":
fields[head[n]] = timeParser(val[n])
default:
ival, _ := splitSec(val[n])
fields[head[n]] = ival
}
}
acc.AddFields("dovecot", fields, tags)
}
return nil
}
func splitSec(tm string) (sec int64, msec int64) {
var err error
ss := strings.Split(tm, ".")
sec, err = strconv.ParseInt(ss[0], 10, 64)
if err != nil {
sec = 0
}
if len(ss) > 1 {
msec, err = strconv.ParseInt(ss[1], 10, 64)
if err != nil {
msec = 0
}
} else {
msec = 0
}
return sec, msec
}
func timeParser(tm string) time.Time {
sec, msec := splitSec(tm)
return time.Unix(sec, msec)
}
func secParser(tm string) float64 {
sec, msec := splitSec(tm)
return float64(sec) + (float64(msec) / 1000000.0)
}
func init() {
inputs.Add("dovecot", func() telegraf.Input {
return &Dovecot{}
})
}
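
For reference, the EXPORT query that gatherServer issues can be reproduced outside the plugin. A minimal standalone sketch, assuming a dovecot stats listener on localhost:24242:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net"
)

func main() {
	// Connect to the dovecot stats socket, as gatherServer does above.
	c, err := net.Dial("tcp", "localhost:24242")
	if err != nil {
		panic(err)
	}
	defer c.Close()

	// EXPORT <level>, terminated by a blank line; dovecot answers with a
	// tab-separated header row, one row per domain, then closes the
	// connection, which ends the io.Copy below.
	if _, err := c.Write([]byte("EXPORT\tdomain\n\n")); err != nil {
		panic(err)
	}

	var buf bytes.Buffer
	io.Copy(&buf, c)
	fmt.Print(buf.String())
}
```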


@@ -0,0 +1,61 @@
package dovecot
import (
"bytes"
"testing"
"time"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/require"
)
func TestDovecot(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
var acc testutil.Accumulator
tags := map[string]string{"server": "dovecot.test", "domain": "domain.test"}
buf := bytes.NewBufferString(sampleStats)
var doms = map[string]bool{
"domain.test": true,
}
err := gatherStats(buf, &acc, doms, "dovecot.test")
require.NoError(t, err)
fields := map[string]interface{}{
"reset_timestamp": time.Unix(1453969886, 0),
"last_update": time.Unix(1454603963, 39864),
"num_logins": int64(7503897),
"num_cmds": int64(52595715),
"num_connected_sessions": int64(1204),
"user_cpu": 1.00831175372e+08,
"sys_cpu": 8.3849071112e+07,
"clock_time": 4.3260019315281835e+15,
"min_faults": int64(763950011),
"maj_faults": int64(1112443),
"vol_cs": int64(4120386897),
"invol_cs": int64(3685239306),
"disk_input": int64(41679480946688),
"disk_output": int64(1819070669176832),
"read_count": int64(2368906465),
"read_bytes": int64(2957928122981169),
"write_count": int64(3545389615),
"write_bytes": int64(1666822498251286),
"mail_lookup_path": int64(24396105),
"mail_lookup_attr": int64(302845),
"mail_read_count": int64(20155768),
"mail_read_bytes": int64(669946617705),
"mail_cache_hits": int64(1557255080),
}
acc.AssertContainsTaggedFields(t, "dovecot", fields, tags)
}
const sampleStats = `domain reset_timestamp last_update num_logins num_cmds num_connected_sessions user_cpu sys_cpu clock_time min_faults maj_faults vol_cs invol_cs disk_input disk_output read_count read_bytes write_count write_bytes mail_lookup_path mail_lookup_attr mail_read_count mail_read_bytes mail_cache_hits
domain.bad 1453970076 1454603947.383029 10749 33828 0 177988.524000 148071.772000 7531838964717.193706 212491179 2125 2190386067 112779200 74487934976 3221808119808 2469948401 5237602841760 1091171292 2951966459802 15363 0 2922 136403379 334372
domain.test 1453969886 1454603963.039864 7503897 52595715 1204 100831175.372000 83849071.112000 4326001931528183.495762 763950011 1112443 4120386897 3685239306 41679480946688 1819070669176832 2368906465 2957928122981169 3545389615 1666822498251286 24396105 302845 20155768 669946617705 1557255080`


@@ -9,8 +9,9 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
jsonparser "github.com/influxdata/telegraf/plugins/parsers/json"
)
const statsPath = "/_nodes/stats"
@@ -58,14 +59,14 @@ type indexHealth struct {
}
const sampleConfig = `
# specify a list of one or more Elasticsearch servers
## specify a list of one or more Elasticsearch servers
servers = ["http://localhost:9200"]
# set local to false when you want to read the indices stats from all nodes
# within the cluster
## set local to false when you want to read the indices stats from all nodes
## within the cluster
local = true
# set cluster_health to true when you want to also obtain cluster level stats
## set cluster_health to true when you want to also obtain cluster level stats
cluster_health = false
`
@@ -95,13 +96,13 @@ func (e *Elasticsearch) Description() string {
// Gather reads the stats from Elasticsearch and writes it to the
// Accumulator.
func (e *Elasticsearch) Gather(acc inputs.Accumulator) error {
func (e *Elasticsearch) Gather(acc telegraf.Accumulator) error {
errChan := make(chan error, len(e.Servers))
var wg sync.WaitGroup
wg.Add(len(e.Servers))
for _, serv := range e.Servers {
go func(s string, acc inputs.Accumulator) {
go func(s string, acc telegraf.Accumulator) {
defer wg.Done()
var url string
if e.Local {
@@ -133,7 +134,7 @@ func (e *Elasticsearch) Gather(acc inputs.Accumulator) error {
return errors.New(strings.Join(errStrings, "\n"))
}
func (e *Elasticsearch) gatherNodeStats(url string, acc inputs.Accumulator) error {
func (e *Elasticsearch) gatherNodeStats(url string, acc telegraf.Accumulator) error {
nodeStats := &struct {
ClusterName string `json:"cluster_name"`
Nodes map[string]*node `json:"nodes"`
@@ -167,7 +168,7 @@ func (e *Elasticsearch) gatherNodeStats(url string, acc inputs.Accumulator) erro
now := time.Now()
for p, s := range stats {
f := internal.JSONFlattener{}
f := jsonparser.JSONFlattener{}
err := f.FlattenJSON("", s)
if err != nil {
return err
@@ -178,7 +179,7 @@ func (e *Elasticsearch) gatherNodeStats(url string, acc inputs.Accumulator) erro
return nil
}
func (e *Elasticsearch) gatherClusterStats(url string, acc inputs.Accumulator) error {
func (e *Elasticsearch) gatherClusterStats(url string, acc telegraf.Accumulator) error {
clusterStats := &clusterHealth{}
if err := e.gatherData(url, clusterStats); err != nil {
return err
@@ -243,7 +244,7 @@ func (e *Elasticsearch) gatherData(url string, v interface{}) error {
}
func init() {
inputs.Add("elasticsearch", func() inputs.Input {
inputs.Add("elasticsearch", func() telegraf.Input {
return NewElasticsearch()
})
}
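
The JSONFlattener swapped in above is the same flattener, relocated to the parsers/json package. A minimal sketch of what it does, with a made-up payload: nested keys are joined with "_", so heap.used becomes the field heap_used.

```go
package main

import (
	"encoding/json"
	"fmt"

	jsonparser "github.com/influxdata/telegraf/plugins/parsers/json"
)

func main() {
	var parsed interface{}
	raw := []byte(`{"heap": {"used": 1024, "max": 4096}, "uptime": 99}`)
	if err := json.Unmarshal(raw, &parsed); err != nil {
		panic(err)
	}

	f := jsonparser.JSONFlattener{}
	if err := f.FlattenJSON("", parsed); err != nil {
		panic(err)
	}
	// heap_used=1024, heap_max=4096, uptime=99 (JSON numbers become float64)
	fmt.Println(f.Fields)
}
```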


@@ -1,28 +1,74 @@
# Exec Plugin
# Exec Input Plugin
The exec plugin can execute arbitrary commands which output JSON. Then it flattens JSON and finds
all numeric values, treating them as floats.
The exec plugin can execute arbitrary commands which output:
For example, if you have a json-returning command called mycollector, you could
setup the exec plugin with:
* JSON
* InfluxDB [line-protocol](https://docs.influxdata.com/influxdb/v0.9/write_protocols/line/)
* Graphite [graphite-protocol](http://graphite.readthedocs.org/en/latest/feeding-carbon.html)
> Graphite understands messages with this format:
>
> ```
> metric_path value timestamp\n
> ```
>
> __metric_path__ is the metric namespace that you want to populate.
> __value__ is the value that you want to assign to the metric at this time.
> __timestamp__ is the unix epoch time.
If using JSON, only numeric values are parsed and turned into floats. Booleans
and strings will be ignored.
### Configuration
```
# Read flattened metrics from one or more commands that output JSON to stdout
[[inputs.exec]]
command = "/usr/bin/mycollector --output=json"
# Shell/commands array
commands = ["/tmp/test.sh", "/tmp/test2.sh"]
# Data format to consume. This can be "json", "influx" or "graphite" (line-protocol)
# NOTE: json only reads numerical measurements; strings and booleans are ignored.
data_format = "json"
# measurement name suffix (for separating different commands)
name_suffix = "_mycollector"
interval = "10s"
## The configuration below applies to data_format = "graphite" and can be ignored for other data formats.
## If matching multiple measurement files, this string will be used to join the matched values.
#separator = "."
## Each template line requires a template pattern. It can have an optional
## filter before the template, separated by spaces. It can also have optional extra
## tags following the template. Multiple tags should be separated by commas and no spaces,
## similar to the line protocol format. There can be only one default template.
## Templates support the following formats:
## 1. filter + template
## 2. filter + template + extra tag
## 3. filter + template with field key
## 4. default template
#templates = [
# "*.app env.service.resource.measurement",
# "stats.* .host.measurement* region=us-west,agent=sensu",
# "stats2.* .host.measurement.field",
# "measurement*"
#]
```
The name suffix is appended to exec as "exec_name_suffix" to identify the input stream.
Other options for modifying the measurement names are:
The interval determines how often a particular command should be run. Each
time the exec plugin runs, it will only run a particular command if at least
`interval` seconds have passed since the plugin last ran that command (see
the sketch just below).
```
name_prefix = "prefix_"
```
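
The per-command gate described above reduces to a few lines of Go. Illustrative only: the real scheduling lives in the telegraf agent, and the gate type here is hypothetical.

```go
package main

import (
	"fmt"
	"time"
)

type gate struct {
	interval time.Duration
	lastRun  map[string]time.Time
}

// shouldRun reports whether cmd may run again, and records the run if so.
func (g *gate) shouldRun(cmd string, now time.Time) bool {
	if now.Sub(g.lastRun[cmd]) < g.interval {
		return false // ran too recently, skip this collection cycle
	}
	g.lastRun[cmd] = now
	return true
}

func main() {
	g := &gate{interval: 10 * time.Second, lastRun: map[string]time.Time{}}
	now := time.Now()
	fmt.Println(g.shouldRun("/tmp/test.sh", now))                    // true
	fmt.Println(g.shouldRun("/tmp/test.sh", now.Add(5*time.Second))) // false: only 5s elapsed
}
```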
### Example 1
# Sample
Let's say that we have the above configuration, and mycollector outputs the
following JSON:
Let's say that we have a command with the name_suffix "_mycollector", which gives the following output:
```json
{
"a": 0.5,
@@ -33,13 +79,102 @@ Let's say that we have a command with the name_suffix "_mycollector", which give
}
```
The collected metrics will be stored as field values under the same measurement "exec_mycollector":
The collected metrics will be stored as fields under the measurement
"exec_mycollector":
```
exec_mycollector a=0.5,b_c=0.1,b_d=5 1452815002357578567
exec_mycollector a=0.5,b_c=0.1,b_d=5 1452815002357578567
```
Other options for modifying the measurement names are:
### Example 2
Now let's say we have the following configuration:
```
name_override = "newname"
name_prefix = "prefix_"
[[inputs.exec]]
# Shell/commands array
# compatible with old version
# we can still use the old command configuration
# command = "/usr/bin/line_protocol_collector"
commands = ["/usr/bin/line_protocol_collector","/tmp/test2.sh"]
# Data format to consume. This can be "json" or "influx" (line-protocol)
# NOTE: json only reads numerical measurements; strings and booleans are ignored.
data_format = "influx"
```
And line_protocol_collector outputs the following line protocol:
```
cpu,cpu=cpu0,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu1,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu2,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu3,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu4,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu5,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu6,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
```
You will get data in InfluxDB exactly as it is defined above:
tags are cpu=cpuN, host=foo, and datacenter=us-east, with fields usage_idle
and usage_busy. Each point receives a timestamp at collection time.
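
A line like the ones above can be run through telegraf's influx parser directly. A minimal sketch using the parsers package that this changeset wires into the plugin:

```go
package main

import (
	"fmt"

	"github.com/influxdata/telegraf/plugins/parsers"
)

func main() {
	parser, err := parsers.NewInfluxParser()
	if err != nil {
		panic(err)
	}
	metrics, err := parser.Parse([]byte(
		"cpu,cpu=cpu0,host=foo,datacenter=us-east usage_idle=99,usage_busy=1"))
	if err != nil {
		panic(err)
	}
	m := metrics[0]
	fmt.Println(m.Name())   // cpu
	fmt.Println(m.Tags())   // cpu=cpu0, host=foo, datacenter=us-east
	fmt.Println(m.Fields()) // usage_idle=99, usage_busy=1
}
```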
### Example 3
We can also change the data_format to "graphite" to collect metrics from graphite-compatible scripts, such as:
* Nagios [Metrics Plugins](https://exchange.nagios.org/directory/Plugins)
* Sensu [Metrics Plugins](https://github.com/sensu-plugins)
#### Configuration
```
# Read flattened metrics from one or more commands that output JSON to stdout
[[inputs.exec]]
# Shell/commands array
commands = ["/tmp/test.sh","/tmp/test2.sh"]
# Data format to consume. This can be "json", "influx" or "graphite" (line-protocol)
# NOTE: json only reads numerical measurements; strings and booleans are ignored.
data_format = "graphite"
# measurement name suffix (for separating different commands)
name_suffix = "_mycollector"
## The configuration below applies to data_format = "graphite" and can be ignored for other data formats.
## If matching multiple measurement files, this string will be used to join the matched values.
separator = "."
## Each template line requires a template pattern. It can have an optional
## filter before the template, separated by spaces. It can also have optional extra
## tags following the template. Multiple tags should be separated by commas and no spaces,
## similar to the line protocol format. There can be only one default template.
## Templates support the following formats:
## 1. filter + template
## 2. filter + template + extra tag
## 3. filter + template with field key
## 4. default template
templates = [
"*.app env.service.resource.measurement",
"stats.* .host.measurement* region=us-west,agent=sensu",
"stats2.* .host.measurement.field",
"measurement*"
]
```
And test.sh/test2.sh will output:
```
sensu.metric.net.server0.eth0.rx_packets 461295119435 1444234982
sensu.metric.net.server0.eth0.tx_bytes 1093086493388480 1444234982
sensu.metric.net.server0.eth0.rx_bytes 1015633926034834 1444234982
sensu.metric.net.server0.eth0.tx_errors 0 1444234982
sensu.metric.net.server0.eth0.rx_errors 0 1444234982
sensu.metric.net.server0.eth0.tx_dropped 0 1444234982
sensu.metric.net.server0.eth0.rx_dropped 0 1444234982
```
The templates configuration is used to parse graphite metrics into measurements and tags, supporting influxdb/opentsdb-style tagging store engines.
For more detailed information about templates, please refer to [the Graphite input](https://github.com/influxdata/influxdb/blob/master/services/graphite/README.md).
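
As a worked example of the second template above, here is a sketch using telegraf's graphite parser; the expected results in the comments follow the template rules from the linked README.

```go
package main

import (
	"fmt"

	"github.com/influxdata/telegraf/plugins/parsers"
)

func main() {
	// "stats.*" filters the metric, "." skips a node, "host" captures a tag,
	// "measurement*" folds the remaining nodes into the measurement name,
	// and the trailing pairs are added as extra tags.
	parser, err := parsers.NewGraphiteParser(".",
		[]string{"stats.* .host.measurement* region=us-west,agent=sensu"}, nil)
	if err != nil {
		panic(err)
	}
	metrics, err := parser.Parse([]byte(
		"stats.server0.eth0.rx_packets 461295119435 1444234982"))
	if err != nil {
		panic(err)
	}
	m := metrics[0]
	fmt.Println(m.Name()) // expected: eth0.rx_packets (joined with the separator)
	fmt.Println(m.Tags()) // expected: host=server0, region=us-west, agent=sensu
}
```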


@@ -2,58 +2,90 @@ package exec
import (
"bytes"
"encoding/json"
"fmt"
"os/exec"
"sync"
"github.com/gonuts/go-shellquote"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/parsers"
)
const sampleConfig = `
# NOTE This plugin only reads numerical measurements, strings and booleans
# will be ignored.
## Commands array
commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]
# the command to run
command = "/usr/bin/mycollector --foo=bar"
# measurement name suffix (for separating different commands)
## measurement name suffix (for separating different commands)
name_suffix = "_mycollector"
## Data format to consume. This can be "json", "influx" or "graphite"
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "influx"
`
type Exec struct {
Command string
Commands []string
Command string
runner Runner
parser parsers.Parser
wg sync.WaitGroup
runner Runner
errChan chan error
}
func NewExec() *Exec {
return &Exec{
runner: CommandRunner{},
}
}
type Runner interface {
Run(*Exec) ([]byte, error)
Run(*Exec, string) ([]byte, error)
}
type CommandRunner struct{}
func (c CommandRunner) Run(e *Exec) ([]byte, error) {
split_cmd, err := shellquote.Split(e.Command)
func (c CommandRunner) Run(e *Exec, command string) ([]byte, error) {
split_cmd, err := shellquote.Split(command)
if err != nil || len(split_cmd) == 0 {
return nil, fmt.Errorf("exec: unable to parse command, %s", err)
}
cmd := exec.Command(split_cmd[0], split_cmd[1:]...)
var out bytes.Buffer
cmd.Stdout = &out
if err := cmd.Run(); err != nil {
return nil, fmt.Errorf("exec: %s for command '%s'", err, e.Command)
return nil, fmt.Errorf("exec: %s for command '%s'", err, command)
}
return out.Bytes(), nil
}
func NewExec() *Exec {
return &Exec{runner: CommandRunner{}}
func (e *Exec) ProcessCommand(command string, acc telegraf.Accumulator) {
defer e.wg.Done()
out, err := e.runner.Run(e, command)
if err != nil {
e.errChan <- err
return
}
metrics, err := e.parser.Parse(out)
if err != nil {
e.errChan <- err
} else {
for _, metric := range metrics {
acc.AddFields(metric.Name(), metric.Fields(), metric.Tags(), metric.Time())
}
}
}
func (e *Exec) SampleConfig() string {
@@ -61,34 +93,41 @@ func (e *Exec) SampleConfig() string {
}
func (e *Exec) Description() string {
return "Read flattened metrics from one or more commands that output JSON to stdout"
return "Read metrics from one or more commands that can output to stdout"
}
func (e *Exec) Gather(acc inputs.Accumulator) error {
out, err := e.runner.Run(e)
if err != nil {
func (e *Exec) SetParser(parser parsers.Parser) {
e.parser = parser
}
func (e *Exec) Gather(acc telegraf.Accumulator) error {
// Legacy single command support
if e.Command != "" {
e.Commands = append(e.Commands, e.Command)
e.Command = ""
}
e.errChan = make(chan error, len(e.Commands))
e.wg.Add(len(e.Commands))
for _, command := range e.Commands {
go e.ProcessCommand(command, acc)
}
e.wg.Wait()
select {
default:
close(e.errChan)
return nil
case err := <-e.errChan:
close(e.errChan)
return err
}
var jsonOut interface{}
err = json.Unmarshal(out, &jsonOut)
if err != nil {
return fmt.Errorf("exec: unable to parse output of '%s' as JSON, %s",
e.Command, err)
}
f := internal.JSONFlattener{}
err = f.FlattenJSON("", jsonOut)
if err != nil {
return err
}
acc.AddFields("exec", f.Fields, nil)
return nil
}
func init() {
inputs.Add("exec", func() inputs.Input {
inputs.Add("exec", func() telegraf.Input {
return NewExec()
})
}
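
The rewritten Gather fans out one goroutine per command and funnels failures through a buffered error channel, returning the first error (if any) once every command has finished. The same pattern in isolation, stripped of plugin details:

```go
package main

import (
	"fmt"
	"sync"
)

// runAll mirrors the shape of Exec.Gather above: a buffered channel never
// blocks the workers, and receiving from it after close yields nil when
// no error was ever sent.
func runAll(commands []string, run func(string) error) error {
	errChan := make(chan error, len(commands))
	var wg sync.WaitGroup
	wg.Add(len(commands))
	for _, cmd := range commands {
		go func(cmd string) {
			defer wg.Done()
			if err := run(cmd); err != nil {
				errChan <- err
			}
		}(cmd)
	}
	wg.Wait()
	close(errChan)
	return <-errChan
}

func main() {
	err := runAll([]string{"/tmp/test.sh", "/tmp/test2.sh"}, func(cmd string) error {
		if cmd == "/tmp/test2.sh" {
			return fmt.Errorf("exec: exit status 1 for command '%s'", cmd)
		}
		return nil
	})
	fmt.Println(err) // exec: exit status 1 for command '/tmp/test2.sh'
}
```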


@@ -4,6 +4,8 @@ import (
"fmt"
"testing"
"github.com/influxdata/telegraf/plugins/parsers"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -31,6 +33,18 @@ const malformedJson = `
"status": "green",
`
const lineProtocol = "cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1"
const lineProtocolMulti = `
cpu,cpu=cpu0,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu1,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu2,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu3,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu4,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu5,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu6,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
`
type runnerMock struct {
out []byte
err error
@@ -43,7 +57,7 @@ func newRunnerMock(out []byte, err error) Runner {
}
}
func (r runnerMock) Run(e *Exec) ([]byte, error) {
func (r runnerMock) Run(e *Exec, command string) ([]byte, error) {
if r.err != nil {
return nil, r.err
}
@@ -51,9 +65,11 @@ func (r runnerMock) Run(e *Exec) ([]byte, error) {
}
func TestExec(t *testing.T) {
parser, _ := parsers.NewJSONParser("exec", []string{}, nil)
e := &Exec{
runner: newRunnerMock([]byte(validJson), nil),
Command: "testcommand arg1",
runner: newRunnerMock([]byte(validJson), nil),
Commands: []string{"testcommand arg1"},
parser: parser,
}
var acc testutil.Accumulator
@@ -75,9 +91,11 @@ func TestExec(t *testing.T) {
}
func TestExecMalformed(t *testing.T) {
parser, _ := parsers.NewJSONParser("exec", []string{}, nil)
e := &Exec{
runner: newRunnerMock([]byte(malformedJson), nil),
Command: "badcommand arg1",
runner: newRunnerMock([]byte(malformedJson), nil),
Commands: []string{"badcommand arg1"},
parser: parser,
}
var acc testutil.Accumulator
@@ -87,9 +105,11 @@ func TestExecMalformed(t *testing.T) {
}
func TestCommandError(t *testing.T) {
parser, _ := parsers.NewJSONParser("exec", []string{}, nil)
e := &Exec{
runner: newRunnerMock(nil, fmt.Errorf("exit status code 1")),
Command: "badcommand",
runner: newRunnerMock(nil, fmt.Errorf("exit status code 1")),
Commands: []string{"badcommand"},
parser: parser,
}
var acc testutil.Accumulator
@@ -97,3 +117,54 @@ func TestCommandError(t *testing.T) {
require.Error(t, err)
assert.Equal(t, acc.NFields(), 0, "No new points should have been added")
}
func TestLineProtocolParse(t *testing.T) {
parser, _ := parsers.NewInfluxParser()
e := &Exec{
runner: newRunnerMock([]byte(lineProtocol), nil),
Commands: []string{"line-protocol"},
parser: parser,
}
var acc testutil.Accumulator
err := e.Gather(&acc)
require.NoError(t, err)
fields := map[string]interface{}{
"usage_idle": float64(99),
"usage_busy": float64(1),
}
tags := map[string]string{
"host": "foo",
"datacenter": "us-east",
}
acc.AssertContainsTaggedFields(t, "cpu", fields, tags)
}
func TestLineProtocolParseMultiple(t *testing.T) {
parser, _ := parsers.NewInfluxParser()
e := &Exec{
runner: newRunnerMock([]byte(lineProtocolMulti), nil),
Commands: []string{"line-protocol"},
parser: parser,
}
var acc testutil.Accumulator
err := e.Gather(&acc)
require.NoError(t, err)
fields := map[string]interface{}{
"usage_idle": float64(99),
"usage_busy": float64(1),
}
tags := map[string]string{
"host": "foo",
"datacenter": "us-east",
}
cpuTags := []string{"cpu0", "cpu1", "cpu2", "cpu3", "cpu4", "cpu5", "cpu6"}
for _, cpu := range cpuTags {
tags["cpu"] = cpu
acc.AssertContainsTaggedFields(t, "cpu", fields, tags)
}
}


@@ -9,11 +9,12 @@ import (
"sync"
"github.com/gorilla/mux"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
func init() {
inputs.Add("github_webhooks", func() inputs.Input { return &GithubWebhooks{} })
inputs.Add("github_webhooks", func() telegraf.Input { return &GithubWebhooks{} })
}
type GithubWebhooks struct {
@@ -30,7 +31,7 @@ func NewGithubWebhooks() *GithubWebhooks {
func (gh *GithubWebhooks) SampleConfig() string {
return `
# Address and port to host Webhook listener on
## Address and port to host Webhook listener on
service_address = ":1618"
`
}
@@ -40,11 +41,11 @@ func (gh *GithubWebhooks) Description() string {
}
// Writes the points from <-gh.in to the Accumulator
func (gh *GithubWebhooks) Gather(acc inputs.Accumulator) error {
func (gh *GithubWebhooks) Gather(acc telegraf.Accumulator) error {
gh.Lock()
defer gh.Unlock()
for _, event := range gh.events {
p := event.NewPoint()
p := event.NewMetric()
acc.AddFields("github_webhooks", p.Fields(), p.Tags(), p.Time())
}
gh.events = make([]Event, 0)
@@ -60,7 +61,7 @@ func (gh *GithubWebhooks) Listen() {
}
}
func (gh *GithubWebhooks) Start() error {
func (gh *GithubWebhooks) Start(_ telegraf.Accumulator) error {
go gh.Listen()
log.Printf("Started the github_webhooks service on %s\n", gh.ServiceAddress)
return nil
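
With Start now receiving the accumulator, the plugin matches the general telegraf service-input shape. A condensed, hypothetical sketch of that lifecycle (the webhookSketch type and its payload are made up; the interface methods are the real ones):

```go
package main

import (
	"fmt"
	"sync"
	"time"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/testutil"
)

// Start launches a background producer, events pile up under a lock,
// and Gather drains whatever arrived since the last collection.
type webhookSketch struct {
	sync.Mutex
	events []event
}

type event struct {
	tags   map[string]string
	fields map[string]interface{}
	ts     time.Time
}

func (w *webhookSketch) Start(_ telegraf.Accumulator) error {
	go w.listen()
	return nil
}

// listen stands in for the HTTP handler that parses webhook payloads.
func (w *webhookSketch) listen() {
	w.Lock()
	defer w.Unlock()
	w.events = append(w.events, event{
		tags:   map[string]string{"event": "push"},
		fields: map[string]interface{}{"commits": 3},
		ts:     time.Now(),
	})
}

func (w *webhookSketch) Gather(acc telegraf.Accumulator) error {
	w.Lock()
	defer w.Unlock()
	for _, e := range w.events {
		acc.AddFields("github_webhooks", e.fields, e.tags, e.ts)
	}
	w.events = nil // drained, like gh.events = make([]Event, 0) above
	return nil
}

func main() {
	w := &webhookSketch{}
	var acc testutil.Accumulator
	w.Start(&acc)
	time.Sleep(10 * time.Millisecond) // give the producer a moment
	w.Gather(&acc)
	fmt.Println(acc.NFields()) // 1
}
```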


@@ -5,13 +5,13 @@ import (
"log"
"time"
"github.com/influxdata/influxdb/client/v2"
"github.com/influxdata/telegraf"
)
const meas = "github_webhooks"
type Event interface {
NewPoint() *client.Point
NewMetric() telegraf.Metric
}
type Repository struct {
@@ -90,7 +90,7 @@ type CommitCommentEvent struct {
Sender Sender `json:"sender"`
}
func (s CommitCommentEvent) NewPoint() *client.Point {
func (s CommitCommentEvent) NewMetric() telegraf.Metric {
event := "commit_comment"
t := map[string]string{
"event": event,
@@ -106,11 +106,11 @@ func (s CommitCommentEvent) NewPoint() *client.Point {
"commit": s.Comment.Commit,
"comment": s.Comment.Body,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type CreateEvent struct {
@@ -120,7 +120,7 @@ type CreateEvent struct {
Sender Sender `json:"sender"`
}
func (s CreateEvent) NewPoint() *client.Point {
func (s CreateEvent) NewMetric() telegraf.Metric {
event := "create"
t := map[string]string{
"event": event,
@@ -136,11 +136,11 @@ func (s CreateEvent) NewPoint() *client.Point {
"ref": s.Ref,
"refType": s.RefType,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type DeleteEvent struct {
@@ -150,7 +150,7 @@ type DeleteEvent struct {
Sender Sender `json:"sender"`
}
func (s DeleteEvent) NewPoint() *client.Point {
func (s DeleteEvent) NewMetric() telegraf.Metric {
event := "delete"
t := map[string]string{
"event": event,
@@ -166,11 +166,11 @@ func (s DeleteEvent) NewPoint() *client.Point {
"ref": s.Ref,
"refType": s.RefType,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type DeploymentEvent struct {
@@ -179,7 +179,7 @@ type DeploymentEvent struct {
Sender Sender `json:"sender"`
}
func (s DeploymentEvent) NewPoint() *client.Point {
func (s DeploymentEvent) NewMetric() telegraf.Metric {
event := "deployment"
t := map[string]string{
"event": event,
@@ -197,11 +197,11 @@ func (s DeploymentEvent) NewPoint() *client.Point {
"environment": s.Deployment.Environment,
"description": s.Deployment.Description,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type DeploymentStatusEvent struct {
@@ -211,7 +211,7 @@ type DeploymentStatusEvent struct {
Sender Sender `json:"sender"`
}
func (s DeploymentStatusEvent) NewPoint() *client.Point {
func (s DeploymentStatusEvent) NewMetric() telegraf.Metric {
event := "delete"
t := map[string]string{
"event": event,
@@ -231,11 +231,11 @@ func (s DeploymentStatusEvent) NewPoint() *client.Point {
"depState": s.DeploymentStatus.State,
"depDescription": s.DeploymentStatus.Description,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type ForkEvent struct {
@@ -244,7 +244,7 @@ type ForkEvent struct {
Sender Sender `json:"sender"`
}
func (s ForkEvent) NewPoint() *client.Point {
func (s ForkEvent) NewMetric() telegraf.Metric {
event := "fork"
t := map[string]string{
"event": event,
@@ -259,11 +259,11 @@ func (s ForkEvent) NewPoint() *client.Point {
"issues": s.Repository.Issues,
"fork": s.Forkee.Repository,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type GollumEvent struct {
@@ -273,7 +273,7 @@ type GollumEvent struct {
}
// REVIEW: Going to be lazy and not deal with the pages.
func (s GollumEvent) NewPoint() *client.Point {
func (s GollumEvent) NewMetric() telegraf.Metric {
event := "gollum"
t := map[string]string{
"event": event,
@@ -287,11 +287,11 @@ func (s GollumEvent) NewPoint() *client.Point {
"forks": s.Repository.Forks,
"issues": s.Repository.Issues,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type IssueCommentEvent struct {
@@ -301,7 +301,7 @@ type IssueCommentEvent struct {
Sender Sender `json:"sender"`
}
func (s IssueCommentEvent) NewPoint() *client.Point {
func (s IssueCommentEvent) NewMetric() telegraf.Metric {
event := "issue_comment"
t := map[string]string{
"event": event,
@@ -319,11 +319,11 @@ func (s IssueCommentEvent) NewPoint() *client.Point {
"comments": s.Issue.Comments,
"body": s.Comment.Body,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type IssuesEvent struct {
@@ -333,7 +333,7 @@ type IssuesEvent struct {
Sender Sender `json:"sender"`
}
func (s IssuesEvent) NewPoint() *client.Point {
func (s IssuesEvent) NewMetric() telegraf.Metric {
event := "issue"
t := map[string]string{
"event": event,
@@ -351,11 +351,11 @@ func (s IssuesEvent) NewPoint() *client.Point {
"title": s.Issue.Title,
"comments": s.Issue.Comments,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type MemberEvent struct {
@@ -364,7 +364,7 @@ type MemberEvent struct {
Sender Sender `json:"sender"`
}
func (s MemberEvent) NewPoint() *client.Point {
func (s MemberEvent) NewMetric() telegraf.Metric {
event := "member"
t := map[string]string{
"event": event,
@@ -380,11 +380,11 @@ func (s MemberEvent) NewPoint() *client.Point {
"newMember": s.Member.User,
"newMemberStatus": s.Member.Admin,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type MembershipEvent struct {
@@ -394,7 +394,7 @@ type MembershipEvent struct {
Team Team `json:"team"`
}
func (s MembershipEvent) NewPoint() *client.Point {
func (s MembershipEvent) NewMetric() telegraf.Metric {
event := "membership"
t := map[string]string{
"event": event,
@@ -406,11 +406,11 @@ func (s MembershipEvent) NewPoint() *client.Point {
"newMember": s.Member.User,
"newMemberStatus": s.Member.Admin,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type PageBuildEvent struct {
@@ -418,7 +418,7 @@ type PageBuildEvent struct {
Sender Sender `json:"sender"`
}
func (s PageBuildEvent) NewPoint() *client.Point {
func (s PageBuildEvent) NewMetric() telegraf.Metric {
event := "page_build"
t := map[string]string{
"event": event,
@@ -432,11 +432,11 @@ func (s PageBuildEvent) NewPoint() *client.Point {
"forks": s.Repository.Forks,
"issues": s.Repository.Issues,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type PublicEvent struct {
@@ -444,7 +444,7 @@ type PublicEvent struct {
Sender Sender `json:"sender"`
}
func (s PublicEvent) NewPoint() *client.Point {
func (s PublicEvent) NewMetric() telegraf.Metric {
event := "public"
t := map[string]string{
"event": event,
@@ -458,11 +458,11 @@ func (s PublicEvent) NewPoint() *client.Point {
"forks": s.Repository.Forks,
"issues": s.Repository.Issues,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type PullRequestEvent struct {
@@ -472,7 +472,7 @@ type PullRequestEvent struct {
Sender Sender `json:"sender"`
}
func (s PullRequestEvent) NewPoint() *client.Point {
func (s PullRequestEvent) NewMetric() telegraf.Metric {
event := "pull_request"
t := map[string]string{
"event": event,
@@ -495,11 +495,11 @@ func (s PullRequestEvent) NewPoint() *client.Point {
"deletions": s.PullRequest.Deletions,
"changedFiles": s.PullRequest.ChangedFiles,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type PullRequestReviewCommentEvent struct {
@@ -509,7 +509,7 @@ type PullRequestReviewCommentEvent struct {
Sender Sender `json:"sender"`
}
func (s PullRequestReviewCommentEvent) NewPoint() *client.Point {
func (s PullRequestReviewCommentEvent) NewMetric() telegraf.Metric {
event := "pull_request_review_comment"
t := map[string]string{
"event": event,
@@ -533,11 +533,11 @@ func (s PullRequestReviewCommentEvent) NewPoint() *client.Point {
"commentFile": s.Comment.File,
"comment": s.Comment.Comment,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type PushEvent struct {
@@ -548,7 +548,7 @@ type PushEvent struct {
Sender Sender `json:"sender"`
}
func (s PushEvent) NewPoint() *client.Point {
func (s PushEvent) NewMetric() telegraf.Metric {
event := "push"
t := map[string]string{
"event": event,
@@ -565,11 +565,11 @@ func (s PushEvent) NewPoint() *client.Point {
"before": s.Before,
"after": s.After,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type ReleaseEvent struct {
@@ -578,7 +578,7 @@ type ReleaseEvent struct {
Sender Sender `json:"sender"`
}
func (s ReleaseEvent) NewPoint() *client.Point {
func (s ReleaseEvent) NewMetric() telegraf.Metric {
event := "release"
t := map[string]string{
"event": event,
@@ -593,11 +593,11 @@ func (s ReleaseEvent) NewPoint() *client.Point {
"issues": s.Repository.Issues,
"tagName": s.Release.TagName,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type RepositoryEvent struct {
@@ -605,7 +605,7 @@ type RepositoryEvent struct {
Sender Sender `json:"sender"`
}
func (s RepositoryEvent) NewPoint() *client.Point {
func (s RepositoryEvent) NewMetric() telegraf.Metric {
event := "repository"
t := map[string]string{
"event": event,
@@ -619,11 +619,11 @@ func (s RepositoryEvent) NewPoint() *client.Point {
"forks": s.Repository.Forks,
"issues": s.Repository.Issues,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type StatusEvent struct {
@@ -633,7 +633,7 @@ type StatusEvent struct {
Sender Sender `json:"sender"`
}
func (s StatusEvent) NewPoint() *client.Point {
func (s StatusEvent) NewMetric() telegraf.Metric {
event := "status"
t := map[string]string{
"event": event,
@@ -649,11 +649,11 @@ func (s StatusEvent) NewPoint() *client.Point {
"commit": s.Commit,
"state": s.State,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type TeamAddEvent struct {
@@ -662,7 +662,7 @@ type TeamAddEvent struct {
Sender Sender `json:"sender"`
}
func (s TeamAddEvent) NewPoint() *client.Point {
func (s TeamAddEvent) NewMetric() telegraf.Metric {
event := "team_add"
t := map[string]string{
"event": event,
@@ -677,11 +677,11 @@ func (s TeamAddEvent) NewPoint() *client.Point {
"issues": s.Repository.Issues,
"teamName": s.Team.Name,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
type WatchEvent struct {
@@ -689,7 +689,7 @@ type WatchEvent struct {
Sender Sender `json:"sender"`
}
func (s WatchEvent) NewPoint() *client.Point {
func (s WatchEvent) NewMetric() telegraf.Metric {
event := "delete"
t := map[string]string{
"event": event,
@@ -703,9 +703,9 @@ func (s WatchEvent) NewPoint() *client.Point {
"forks": s.Repository.Forks,
"issues": s.Repository.Issues,
}
p, err := client.NewPoint(meas, t, f, time.Now())
m, err := telegraf.NewMetric(meas, t, f, time.Now())
if err != nil {
log.Fatalf("Failed to create %v event", event)
}
return p
return m
}
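
Every event type above performs the same client.Point to telegraf.Metric conversion, so in isolation it is just the new constructor plus the error check. A minimal sketch with made-up tag and field values:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/influxdata/telegraf"
)

func main() {
	t := map[string]string{"event": "push", "repository": "influxdata/telegraf"}
	f := map[string]interface{}{"commits": 3}

	// telegraf.NewMetric validates its inputs and returns an error, which
	// the handlers above surface via log.Fatalf.
	m, err := telegraf.NewMetric("github_webhooks", t, f, time.Now())
	if err != nil {
		log.Fatalf("Failed to create push event: %v", err)
	}
	fmt.Println(m.Name(), m.Tags(), m.Fields())
}
```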


@@ -3,6 +3,7 @@ package haproxy
import (
"encoding/csv"
"fmt"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"io"
"net/http"
@@ -85,13 +86,13 @@ type haproxy struct {
}
var sampleConfig = `
# An array of address to gather stats about. Specify an ip on hostname
# with optional port. ie localhost, 10.10.3.33:1936, etc.
#
# If no servers are specified, then default to 127.0.0.1:1936
## An array of addresses to gather stats about. Specify an ip or hostname
## with an optional port, e.g. localhost, 10.10.3.33:1936, etc.
## If no servers are specified, then default to 127.0.0.1:1936
servers = ["http://myhaproxy.com:1936", "http://anotherhaproxy.com:1936"]
# Or you can also use local socket(not work yet)
# servers = ["socket://run/haproxy/admin.sock"]
## Or you can also use a local socket (not working yet)
## servers = ["socket://run/haproxy/admin.sock"]
`
func (r *haproxy) SampleConfig() string {
@@ -104,7 +105,7 @@ func (r *haproxy) Description() string {
// Reads stats from all configured servers accumulates stats.
// Returns one of the errors encountered while gather stats (if any).
func (g *haproxy) Gather(acc inputs.Accumulator) error {
func (g *haproxy) Gather(acc telegraf.Accumulator) error {
if len(g.Servers) == 0 {
return g.gatherServer("http://127.0.0.1:1936", acc)
}
@@ -126,7 +127,7 @@ func (g *haproxy) Gather(acc inputs.Accumulator) error {
return outerr
}
func (g *haproxy) gatherServer(addr string, acc inputs.Accumulator) error {
func (g *haproxy) gatherServer(addr string, acc telegraf.Accumulator) error {
if g.client == nil {
client := &http.Client{}
@@ -156,7 +157,7 @@ func (g *haproxy) gatherServer(addr string, acc inputs.Accumulator) error {
return importCsvResult(res.Body, acc, u.Host)
}
func importCsvResult(r io.Reader, acc inputs.Accumulator, host string) error {
func importCsvResult(r io.Reader, acc telegraf.Accumulator, host string) error {
csv := csv.NewReader(r)
result, err := csv.ReadAll()
now := time.Now()
@@ -358,7 +359,7 @@ func importCsvResult(r io.Reader, acc inputs.Accumulator, host string) error {
}
func init() {
inputs.Add("haproxy", func() inputs.Input {
inputs.Add("haproxy", func() telegraf.Input {
return &haproxy{}
})
}
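
importCsvResult above walks HAProxy's CSV export, whose first row is the column header. The core row-to-fields mapping looks roughly like this; a sketch with a truncated stand-in for the real stats output, not the plugin's full field table:

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strconv"
	"strings"
)

func main() {
	// Truncated stand-in for the body returned by an HAProxy stats endpoint.
	body := "# pxname,svname,scur,smax\nfrontend,FRONTEND,2,15\n"
	rows, err := csv.NewReader(strings.NewReader(body)).ReadAll()
	if err != nil {
		panic(err)
	}
	header := rows[0]
	for _, row := range rows[1:] {
		fields := map[string]interface{}{}
		for i, col := range row {
			// Numeric columns become fields, keyed by the header name.
			if v, err := strconv.ParseUint(col, 10, 64); err == nil {
				fields[header[i]] = v
			}
		}
		fmt.Println(fields) // scur=2, smax=15
	}
}
```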


@@ -1,7 +1,7 @@
package httpjson
import (
"encoding/json"
"bytes"
"errors"
"fmt"
"io/ioutil"
@@ -11,8 +11,9 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/parsers"
)
type HttpJson struct {
@@ -46,37 +47,36 @@ func (c RealHTTPClient) MakeRequest(req *http.Request) (*http.Response, error) {
}
var sampleConfig = `
# NOTE This plugin only reads numerical measurements, strings and booleans
# will be ignored.
## NOTE: This plugin only reads numerical measurements; strings and booleans
## will be ignored.
# a name for the service being polled
## a name for the service being polled
name = "webserver_stats"
# URL of each server in the service's cluster
## URL of each server in the service's cluster
servers = [
"http://localhost:9999/stats/",
"http://localhost:9998/stats/",
]
# HTTP method to use (case-sensitive)
## HTTP method to use: GET or POST (case-sensitive)
method = "GET"
# List of tag names to extract from top-level of JSON server response
## List of tag names to extract from top-level of JSON server response
# tag_keys = [
# "my_tag_1",
# "my_tag_2"
# ]
# HTTP parameters (all values must be strings)
## HTTP parameters (all values must be strings)
[inputs.httpjson.parameters]
event_type = "cpu_spike"
threshold = "0.75"
# HTTP Header parameters (all values must be strings)
## HTTP Header parameters (all values must be strings)
# [inputs.httpjson.headers]
# X-Auth-Token = "my-xauth-token"
# apiVersion = "v1"
`
func (h *HttpJson) SampleConfig() string {
@@ -88,7 +88,7 @@ func (h *HttpJson) Description() string {
}
// Gathers data for all servers.
func (h *HttpJson) Gather(acc inputs.Accumulator) error {
func (h *HttpJson) Gather(acc telegraf.Accumulator) error {
var wg sync.WaitGroup
errorChannel := make(chan error, len(h.Servers))
@@ -127,7 +127,7 @@ func (h *HttpJson) Gather(acc inputs.Accumulator) error {
// Returns:
// error: Any error that may have occurred
func (h *HttpJson) gatherServer(
acc inputs.Accumulator,
acc telegraf.Accumulator,
serverURL string,
) error {
resp, responseTime, err := h.sendRequest(serverURL)
@@ -136,43 +136,39 @@ func (h *HttpJson) gatherServer(
return err
}
var jsonOut map[string]interface{}
if err = json.Unmarshal([]byte(resp), &jsonOut); err != nil {
return errors.New("Error decoding JSON response")
}
tags := map[string]string{
"server": serverURL,
}
for _, tag := range h.TagKeys {
switch v := jsonOut[tag].(type) {
case string:
tags[tag] = v
}
delete(jsonOut, tag)
}
if responseTime >= 0 {
jsonOut["response_time"] = responseTime
}
f := internal.JSONFlattener{}
err = f.FlattenJSON("", jsonOut)
if err != nil {
return err
}
var msrmnt_name string
if h.Name == "" {
msrmnt_name = "httpjson"
} else {
msrmnt_name = "httpjson_" + h.Name
}
acc.AddFields(msrmnt_name, f.Fields, tags)
tags := map[string]string{
"server": serverURL,
}
parser, err := parsers.NewJSONParser(msrmnt_name, h.TagKeys, tags)
if err != nil {
return err
}
metrics, err := parser.Parse([]byte(resp))
if err != nil {
return err
}
for _, metric := range metrics {
fields := make(map[string]interface{})
for k, v := range metric.Fields() {
fields[k] = v
}
fields["response_time"] = responseTime
acc.AddFields(metric.Name(), fields, metric.Tags())
}
return nil
}
// Sends an HTTP request to the server using the HttpJson object's HTTPClient
// Sends an HTTP request to the server using the HttpJson object's HTTPClient.
// This request can be either a GET or a POST.
// Parameters:
// serverURL: endpoint to send request to
//
@@ -187,20 +183,35 @@ func (h *HttpJson) sendRequest(serverURL string) (string, float64, error) {
}
params := url.Values{}
for k, v := range h.Parameters {
params.Add(k, v)
data := url.Values{}
switch {
case h.Method == "GET":
requestURL.RawQuery = params.Encode()
for k, v := range h.Parameters {
params.Add(k, v)
}
case h.Method == "POST":
requestURL.RawQuery = ""
for k, v := range h.Parameters {
data.Add(k, v)
}
}
requestURL.RawQuery = params.Encode()
// Create + send request
req, err := http.NewRequest(h.Method, requestURL.String(), nil)
req, err := http.NewRequest(h.Method, requestURL.String(), bytes.NewBufferString(data.Encode()))
if err != nil {
return "", -1, err
}
// Add header parameters
for k, v := range h.Headers {
req.Header.Add(k, v)
if strings.ToLower(k) == "host" {
req.Host = v
} else {
req.Header.Add(k, v)
}
}
start := time.Now()
@@ -232,7 +243,7 @@ func (h *HttpJson) sendRequest(serverURL string) (string, float64, error) {
}
func init() {
inputs.Add("httpjson", func() inputs.Input {
inputs.Add("httpjson", func() telegraf.Input {
return &HttpJson{client: RealHTTPClient{client: &http.Client{}}}
})
}
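
The switch added to sendRequest routes the configured parameters into either the query string (GET) or a form-encoded body (POST). The same logic as a standalone sketch, where buildRequest is a hypothetical helper:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"net/url"
)

func buildRequest(method, serverURL string, parameters map[string]string) (*http.Request, error) {
	u, err := url.Parse(serverURL)
	if err != nil {
		return nil, err
	}
	params := url.Values{}
	data := url.Values{}
	switch method {
	case "GET": // parameters travel in the query string
		for k, v := range parameters {
			params.Add(k, v)
		}
		u.RawQuery = params.Encode()
	case "POST": // parameters travel as a form-encoded body
		u.RawQuery = ""
		for k, v := range parameters {
			data.Add(k, v)
		}
	}
	return http.NewRequest(method, u.String(), bytes.NewBufferString(data.Encode()))
}

func main() {
	p := map[string]string{"event_type": "cpu_spike", "threshold": "0.75"}
	get, _ := buildRequest("GET", "http://localhost:9999/stats/", p)
	post, _ := buildRequest("POST", "http://localhost:9999/stats/", p)
	fmt.Println(get.URL)  // the query string carries the parameters
	fmt.Println(post.URL) // bare URL; the parameters are in the body
}
```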


@@ -136,7 +136,7 @@ func TestHttpJson200(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, 12, acc.NFields())
// Set responsetime
for _, p := range acc.Points {
for _, p := range acc.Metrics {
p.Fields["response_time"] = 1.0
}
@@ -203,7 +203,7 @@ func TestHttpJson200Tags(t *testing.T) {
var acc testutil.Accumulator
err := service.Gather(&acc)
// Set responsetime
for _, p := range acc.Points {
for _, p := range acc.Metrics {
p.Fields["response_time"] = 1.0
}
require.NoError(t, err)


@@ -8,6 +8,7 @@ import (
"strings"
"sync"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -21,18 +22,18 @@ func (*InfluxDB) Description() string {
func (*InfluxDB) SampleConfig() string {
return `
# Works with InfluxDB debug endpoints out of the box,
# but other services can use this format too.
# See the influxdb plugin's README for more details.
## Works with InfluxDB debug endpoints out of the box,
## but other services can use this format too.
## See the influxdb plugin's README for more details.
# Multiple URLs from which to read InfluxDB-formatted JSON
## Multiple URLs from which to read InfluxDB-formatted JSON
urls = [
"http://localhost:8086/debug/vars"
]
`
}
func (i *InfluxDB) Gather(acc inputs.Accumulator) error {
func (i *InfluxDB) Gather(acc telegraf.Accumulator) error {
errorChannel := make(chan error, len(i.URLs))
var wg sync.WaitGroup
@@ -77,7 +78,7 @@ type point struct {
// Returns:
// error: Any error that may have occurred
func (i *InfluxDB) gatherURL(
acc inputs.Accumulator,
acc telegraf.Accumulator,
url string,
) error {
resp, err := http.Get(url)
@@ -140,7 +141,7 @@ func (i *InfluxDB) gatherURL(
}
func init() {
inputs.Add("influxdb", func() inputs.Input {
inputs.Add("influxdb", func() telegraf.Input {
return &InfluxDB{}
})
}

View File

@@ -71,7 +71,7 @@ func TestBasic(t *testing.T) {
var acc testutil.Accumulator
require.NoError(t, plugin.Gather(&acc))
require.Len(t, acc.Points, 2)
require.Len(t, acc.Metrics, 2)
fields := map[string]interface{}{
// JSON will truncate floats to integer representations.
// Since there's no distinction in JSON, we can't assume it's an int.


@@ -8,6 +8,7 @@ import (
"net/http"
"net/url"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -45,10 +46,10 @@ type Jolokia struct {
func (j *Jolokia) SampleConfig() string {
return `
# This is the context root used to compose the jolokia url
## This is the context root used to compose the jolokia url
context = "/jolokia/read"
# List of servers exposing jolokia read service
## List of servers exposing jolokia read service
[[inputs.jolokia.servers]]
name = "stable"
host = "192.168.103.2"
@@ -56,9 +57,10 @@ func (j *Jolokia) SampleConfig() string {
# username = "myuser"
# password = "mypassword"
# List of metrics collected on above servers
# Each metric consists in a name, a jmx path and either a pass or drop slice attributes
# This collect all heap memory usage metrics
## List of metrics collected on the above servers
## Each metric consists of a name, a jmx path, and either
## a pass or drop slice attribute.
## This collects all heap memory usage metrics.
[[inputs.jolokia.metrics]]
name = "heap_memory_usage"
jmx = "/java.lang:type=Memory/HeapMemoryUsage"
@@ -108,7 +110,7 @@ func (j *Jolokia) getAttr(requestUrl *url.URL) (map[string]interface{}, error) {
return jsonOut, nil
}
func (j *Jolokia) Gather(acc inputs.Accumulator) error {
func (j *Jolokia) Gather(acc telegraf.Accumulator) error {
context := j.Context //"/jolokia/read"
servers := j.Servers
metrics := j.Metrics
@@ -157,7 +159,7 @@ func (j *Jolokia) Gather(acc inputs.Accumulator) error {
}
func init() {
inputs.Add("jolokia", func() inputs.Input {
inputs.Add("jolokia", func() telegraf.Input {
return &Jolokia{jClient: &JolokiaClientImpl{client: &http.Client{}}}
})
}


@@ -85,7 +85,7 @@ func TestHttpJsonMultiValue(t *testing.T) {
err := jolokia.Gather(&acc)
assert.Nil(t, err)
assert.Equal(t, 1, len(acc.Points))
assert.Equal(t, 1, len(acc.Metrics))
fields := map[string]interface{}{
"heap_memory_usage_init": 67108864.0,
@@ -112,5 +112,5 @@ func TestHttpJsonOn404(t *testing.T) {
err := jolokia.Gather(&acc)
assert.Nil(t, err)
assert.Equal(t, 0, len(acc.Points))
assert.Equal(t, 0, len(acc.Metrics))
}


@@ -1,4 +1,4 @@
# Kafka Consumer
# Kafka Consumer Input Plugin
The [Kafka](http://kafka.apache.org/) consumer plugin polls a specified Kafka
topic and adds messages to InfluxDB. The plugin assumes messages follow the
@@ -6,6 +6,29 @@ line protocol. [Consumer Group](http://godoc.org/github.com/wvanbergen/kafka/con
is used to talk to the Kafka cluster so multiple instances of telegraf can read
from the same topic in parallel.
## Configuration
```toml
# Read metrics from Kafka topic(s)
[[inputs.kafka_consumer]]
## topic(s) to consume
topics = ["telegraf"]
## an array of Zookeeper connection strings
zookeeper_peers = ["localhost:2181"]
## the name of the consumer group
consumer_group = "telegraf_metrics_consumers"
## Maximum number of metrics to buffer between collection intervals
metric_buffer = 100000
## Offset (must be either "oldest" or "newest")
offset = "oldest"
## Data format to consume. This can be "json", "influx" or "graphite"
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "influx"
```
## Testing
Running integration tests requires running Zookeeper & Kafka. The following
@@ -16,9 +39,3 @@ To start Kafka & Zookeeper:
```
docker run -d -p 2181:2181 -p 9092:9092 --env ADVERTISED_HOST=`boot2docker ip || docker-machine ip <your_machine_name>` --env ADVERTISED_PORT=9092 spotify/kafka
```
To run tests:
```
go test
```


@@ -5,8 +5,9 @@ import (
"strings"
"sync"
"github.com/influxdata/influxdb/models"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/parsers"
"github.com/Shopify/sarama"
"github.com/wvanbergen/kafka/consumergroup"
@@ -17,8 +18,14 @@ type Kafka struct {
Topics []string
ZookeeperPeers []string
Consumer *consumergroup.ConsumerGroup
PointBuffer int
Offset string
// Legacy metric buffer support
MetricBuffer int
// TODO remove PointBuffer, legacy support
PointBuffer int
Offset string
parser parsers.Parser
sync.Mutex
@@ -26,9 +33,10 @@ type Kafka struct {
in <-chan *sarama.ConsumerMessage
// channel for all kafka consumer errors
errs <-chan *sarama.ConsumerError
// channel for all incoming parsed kafka points
pointChan chan models.Point
done chan struct{}
done chan struct{}
// keep the accumulator internally:
acc telegraf.Accumulator
// doNotCommitMsgs tells the parser not to call CommitUpTo on the consumer
// this is mostly for test purposes, but there may be a use-case for it later.
@@ -36,16 +44,20 @@ type Kafka struct {
}
var sampleConfig = `
# topic(s) to consume
## topic(s) to consume
topics = ["telegraf"]
# an array of Zookeeper connection strings
## an array of Zookeeper connection strings
zookeeper_peers = ["localhost:2181"]
# the name of the consumer group
## the name of the consumer group
consumer_group = "telegraf_metrics_consumers"
# Maximum number of points to buffer between collection intervals
point_buffer = 100000
# Offset (must be either "oldest" or "newest")
## Offset (must be either "oldest" or "newest")
offset = "oldest"
## Data format to consume. This can be "json", "influx" or "graphite"
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "influx"
`
func (k *Kafka) SampleConfig() string {
@@ -53,14 +65,20 @@ func (k *Kafka) SampleConfig() string {
}
func (k *Kafka) Description() string {
return "Read line-protocol metrics from Kafka topic(s)"
return "Read metrics from Kafka topic(s)"
}
func (k *Kafka) Start() error {
func (k *Kafka) SetParser(parser parsers.Parser) {
k.parser = parser
}
func (k *Kafka) Start(acc telegraf.Accumulator) error {
k.Lock()
defer k.Unlock()
var consumerErr error
k.acc = acc
config := consumergroup.NewConfig()
switch strings.ToLower(k.Offset) {
case "oldest", "":
@@ -90,21 +108,17 @@ func (k *Kafka) Start() error {
}
k.done = make(chan struct{})
if k.PointBuffer == 0 {
k.PointBuffer = 100000
}
k.pointChan = make(chan models.Point, k.PointBuffer)
// Start the kafka message reader
go k.parser()
go k.receiver()
log.Printf("Started the kafka consumer service, peers: %v, topics: %v\n",
k.ZookeeperPeers, k.Topics)
return nil
}
// parser() reads all incoming messages from the consumer, and parses them into
// receiver() reads all incoming messages from the consumer, and parses them into
// influxdb metric points.
func (k *Kafka) parser() {
func (k *Kafka) receiver() {
for {
select {
case <-k.done:
@@ -112,20 +126,14 @@ func (k *Kafka) parser() {
case err := <-k.errs:
log.Printf("Kafka Consumer Error: %s\n", err.Error())
case msg := <-k.in:
points, err := models.ParsePoints(msg.Value)
metrics, err := k.parser.Parse(msg.Value)
if err != nil {
log.Printf("Could not parse kafka message: %s, error: %s",
log.Printf("KAFKA PARSE ERROR\nmessage: %s\nerror: %s",
string(msg.Value), err.Error())
}
for _, point := range points {
select {
case k.pointChan <- point:
continue
default:
log.Printf("Kafka Consumer buffer is full, dropping a point." +
" You may want to increase the point_buffer setting")
}
for _, metric := range metrics {
k.acc.AddFields(metric.Name(), metric.Fields(), metric.Tags(), metric.Time())
}
if !k.doNotCommitMsgs {
@@ -148,19 +156,12 @@ func (k *Kafka) Stop() {
}
}
func (k *Kafka) Gather(acc inputs.Accumulator) error {
k.Lock()
defer k.Unlock()
npoints := len(k.pointChan)
for i := 0; i < npoints; i++ {
point := <-k.pointChan
acc.AddFields(point.Name(), point.Fields(), point.Tags(), point.Time())
}
func (k *Kafka) Gather(acc telegraf.Accumulator) error {
return nil
}
func init() {
inputs.Add("kafka_consumer", func() inputs.Input {
inputs.Add("kafka_consumer", func() telegraf.Input {
return &Kafka{}
})
}
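
receiver() is the heart of this change: instead of buffering parsed points in pointChan for Gather to drain, every consumed message is parsed and pushed straight into the accumulator. A standalone sketch of that select loop, with sarama's channel types simplified to plain strings:

```go
package main

import (
	"fmt"
	"log"
)

func receive(done chan struct{}, errs chan error, in chan string,
	parse func(string) ([]string, error), add func(string)) {
	for {
		select {
		case <-done:
			return
		case err := <-errs:
			log.Printf("consumer error: %v", err)
		case msg := <-in:
			metrics, err := parse(msg)
			if err != nil {
				log.Printf("parse error: %v", err) // log it, keep consuming
				continue
			}
			for _, m := range metrics {
				add(m) // straight into the accumulator, no intermediate buffer
			}
		}
	}
}

func main() {
	done := make(chan struct{})
	in := make(chan string, 1)
	got := make(chan string, 1)
	in <- "cpu_load_short,host=server01 value=23422.0"
	go receive(done, make(chan error), in,
		func(s string) ([]string, error) { return []string{s}, nil },
		func(m string) { got <- m })
	fmt.Println(<-got)
	close(done)
}
```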


@@ -9,6 +9,8 @@ import (
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/influxdata/telegraf/plugins/parsers"
)
func TestReadsMetricsFromKafka(t *testing.T) {
@@ -40,24 +42,27 @@ func TestReadsMetricsFromKafka(t *testing.T) {
PointBuffer: 100000,
Offset: "oldest",
}
if err := k.Start(); err != nil {
p, _ := parsers.NewInfluxParser()
k.SetParser(p)
// Verify that we can now gather the sent message
var acc testutil.Accumulator
// Sanity check
assert.Equal(t, 0, len(acc.Metrics), "There should not be any points")
if err := k.Start(&acc); err != nil {
t.Fatal(err.Error())
} else {
defer k.Stop()
}
waitForPoint(k, t)
// Verify that we can now gather the sent message
var acc testutil.Accumulator
// Sanity check
assert.Equal(t, 0, len(acc.Points), "There should not be any points")
waitForPoint(&acc, t)
// Gather points
err = k.Gather(&acc)
require.NoError(t, err)
if len(acc.Points) == 1 {
point := acc.Points[0]
if len(acc.Metrics) == 1 {
point := acc.Metrics[0]
assert.Equal(t, "cpu_load_short", point.Measurement)
assert.Equal(t, map[string]interface{}{"value": 23422.0}, point.Fields)
assert.Equal(t, map[string]string{
@@ -73,7 +78,7 @@ func TestReadsMetricsFromKafka(t *testing.T) {
// Waits for the metric that was sent to the kafka broker to arrive at the kafka
// consumer
func waitForPoint(k *Kafka, t *testing.T) {
func waitForPoint(acc *testutil.Accumulator, t *testing.T) {
// Give the kafka container up to 2 seconds to get the point to the consumer
ticker := time.NewTicker(5 * time.Millisecond)
counter := 0
@@ -83,7 +88,7 @@ func waitForPoint(k *Kafka, t *testing.T) {
counter++
if counter > 1000 {
t.Fatal("Waited for 5s, point never arrived to consumer")
} else if len(k.pointChan) == 1 {
} else if acc.NFields() == 1 {
return
}
}


@@ -4,7 +4,7 @@ import (
"testing"
"time"
"github.com/influxdata/influxdb/models"
"github.com/influxdata/telegraf/plugins/parsers"
"github.com/influxdata/telegraf/testutil"
"github.com/Shopify/sarama"
@@ -12,83 +12,117 @@ import (
)
const (
testMsg = "cpu_load_short,host=server01 value=23422.0 1422568543702900257"
invalidMsg = "cpu_load_short,host=server01 1422568543702900257"
pointBuffer = 5
testMsg = "cpu_load_short,host=server01 value=23422.0 1422568543702900257"
testMsgGraphite = "cpu.load.short.graphite 23422 1454780029"
testMsgJSON = "{\"a\": 5, \"b\": {\"c\": 6}}\n"
invalidMsg = "cpu_load_short,host=server01 1422568543702900257"
)
func NewTestKafka() (*Kafka, chan *sarama.ConsumerMessage) {
in := make(chan *sarama.ConsumerMessage, pointBuffer)
func newTestKafka() (*Kafka, chan *sarama.ConsumerMessage) {
in := make(chan *sarama.ConsumerMessage, 1000)
k := Kafka{
ConsumerGroup: "test",
Topics: []string{"telegraf"},
ZookeeperPeers: []string{"localhost:2181"},
PointBuffer: pointBuffer,
Offset: "oldest",
in: in,
doNotCommitMsgs: true,
errs: make(chan *sarama.ConsumerError, pointBuffer),
errs: make(chan *sarama.ConsumerError, 1000),
done: make(chan struct{}),
pointChan: make(chan models.Point, pointBuffer),
}
return &k, in
}
// Test that the parser parses kafka messages into points
func TestRunParser(t *testing.T) {
k, in := NewTestKafka()
k, in := newTestKafka()
acc := testutil.Accumulator{}
k.acc = &acc
defer close(k.done)
go k.parser()
k.parser, _ = parsers.NewInfluxParser()
go k.receiver()
in <- saramaMsg(testMsg)
time.Sleep(time.Millisecond)
assert.Equal(t, len(k.pointChan), 1)
assert.Equal(t, acc.NFields(), 1)
}
// Test that the parser ignores invalid messages
func TestRunParserInvalidMsg(t *testing.T) {
k, in := NewTestKafka()
k, in := newTestKafka()
acc := testutil.Accumulator{}
k.acc = &acc
defer close(k.done)
go k.parser()
k.parser, _ = parsers.NewInfluxParser()
go k.receiver()
in <- saramaMsg(invalidMsg)
time.Sleep(time.Millisecond)
assert.Equal(t, len(k.pointChan), 0)
}
// Test that points are dropped when we hit the buffer limit
func TestRunParserRespectsBuffer(t *testing.T) {
k, in := NewTestKafka()
defer close(k.done)
go k.parser()
for i := 0; i < pointBuffer+1; i++ {
in <- saramaMsg(testMsg)
}
time.Sleep(time.Millisecond)
assert.Equal(t, len(k.pointChan), 5)
assert.Equal(t, acc.NFields(), 0)
}
// Test that the parser parses kafka messages into points
func TestRunParserAndGather(t *testing.T) {
k, in := NewTestKafka()
k, in := newTestKafka()
acc := testutil.Accumulator{}
k.acc = &acc
defer close(k.done)
go k.parser()
k.parser, _ = parsers.NewInfluxParser()
go k.receiver()
in <- saramaMsg(testMsg)
time.Sleep(time.Millisecond)
acc := testutil.Accumulator{}
k.Gather(&acc)
assert.Equal(t, len(acc.Points), 1)
assert.Equal(t, acc.NFields(), 1)
acc.AssertContainsFields(t, "cpu_load_short",
map[string]interface{}{"value": float64(23422)})
}
// Test that the parser parses kafka messages into points
func TestRunParserAndGatherGraphite(t *testing.T) {
k, in := newTestKafka()
acc := testutil.Accumulator{}
k.acc = &acc
defer close(k.done)
k.parser, _ = parsers.NewGraphiteParser("_", []string{}, nil)
go k.receiver()
in <- saramaMsg(testMsgGraphite)
time.Sleep(time.Millisecond)
k.Gather(&acc)
assert.Equal(t, acc.NFields(), 1)
acc.AssertContainsFields(t, "cpu_load_short_graphite",
map[string]interface{}{"value": float64(23422)})
}
// Test that the parser parses kafka messages into points
func TestRunParserAndGatherJSON(t *testing.T) {
k, in := newTestKafka()
acc := testutil.Accumulator{}
k.acc = &acc
defer close(k.done)
k.parser, _ = parsers.NewJSONParser("kafka_json_test", []string{}, nil)
go k.receiver()
in <- saramaMsg(testMsgJSON)
time.Sleep(time.Millisecond)
k.Gather(&acc)
assert.Equal(t, acc.NFields(), 2)
acc.AssertContainsFields(t, "kafka_json_test",
map[string]interface{}{
"a": float64(5),
"b_c": float64(6),
})
}
func saramaMsg(val string) *sarama.ConsumerMessage {
return &sarama.ConsumerMessage{
Key: nil,


@@ -3,6 +3,7 @@ package leofs
import (
"bufio"
"fmt"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"net/url"
"os/exec"
@@ -131,10 +132,8 @@ var serverTypeMapping = map[string]ServerType{
}
var sampleConfig = `
# An array of URI to gather stats about LeoFS.
# Specify an ip or hostname with port. ie 127.0.0.1:4020
#
# If no servers are specified, then 127.0.0.1 is used as the host and 4020 as the port.
## An array of URI to gather stats about LeoFS.
## Specify an ip or hostname with port. ie 127.0.0.1:4020
servers = ["127.0.0.1:4021"]
`
@@ -146,7 +145,7 @@ func (l *LeoFS) Description() string {
return "Read metrics from a LeoFS Server via SNMP"
}
func (l *LeoFS) Gather(acc inputs.Accumulator) error {
func (l *LeoFS) Gather(acc telegraf.Accumulator) error {
if len(l.Servers) == 0 {
l.gatherServer(defaultEndpoint, ServerTypeManagerMaster, acc)
return nil
@@ -176,7 +175,7 @@ func (l *LeoFS) Gather(acc inputs.Accumulator) error {
return outerr
}
func (l *LeoFS) gatherServer(endpoint string, serverType ServerType, acc inputs.Accumulator) error {
func (l *LeoFS) gatherServer(endpoint string, serverType ServerType, acc telegraf.Accumulator) error {
cmd := exec.Command("snmpwalk", "-v2c", "-cpublic", endpoint, oid)
stdout, err := cmd.StdoutPipe()
if err != nil {
@@ -225,7 +224,7 @@ func retrieveTokenAfterColon(line string) (string, error) {
}
func init() {
inputs.Add("leofs", func() inputs.Input {
inputs.Add("leofs", func() telegraf.Input {
return &LeoFS{}
})
}


@@ -13,6 +13,7 @@ import (
"strconv"
"strings"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -28,10 +29,13 @@ type Lustre2 struct {
}
var sampleConfig = `
# An array of /proc globs to search for Lustre stats
# If not specified, the default will work on Lustre 2.5.x
#
# ost_procfiles = ["/proc/fs/lustre/obdfilter/*/stats", "/proc/fs/lustre/osd-ldiskfs/*/stats"]
## An array of /proc globs to search for Lustre stats
## If not specified, the default will work on Lustre 2.5.x
##
# ost_procfiles = [
# "/proc/fs/lustre/obdfilter/*/stats",
# "/proc/fs/lustre/osd-ldiskfs/*/stats"
# ]
# mds_procfiles = ["/proc/fs/lustre/mdt/*/md_stats"]
`
@@ -129,7 +133,7 @@ var wanted_mds_fields = []*mapping{
},
}
func (l *Lustre2) GetLustreProcStats(fileglob string, wanted_fields []*mapping, acc inputs.Accumulator) error {
func (l *Lustre2) GetLustreProcStats(fileglob string, wanted_fields []*mapping, acc telegraf.Accumulator) error {
files, err := filepath.Glob(fileglob)
if err != nil {
return err
@@ -193,7 +197,7 @@ func (l *Lustre2) Description() string {
}
// Gather reads stats from all lustre targets
func (l *Lustre2) Gather(acc inputs.Accumulator) error {
func (l *Lustre2) Gather(acc telegraf.Accumulator) error {
l.allFields = make(map[string]map[string]interface{})
if len(l.Ost_procfiles) == 0 {
@@ -244,7 +248,7 @@ func (l *Lustre2) Gather(acc inputs.Accumulator) error {
}
func init() {
inputs.Add("lustre2", func() inputs.Input {
inputs.Add("lustre2", func() telegraf.Input {
return &Lustre2{}
})
}


@@ -4,6 +4,7 @@ import (
"fmt"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -16,13 +17,13 @@ type MailChimp struct {
}
var sampleConfig = `
# MailChimp API key
# get from https://admin.mailchimp.com/account/api/
## MailChimp API key
## get from https://admin.mailchimp.com/account/api/
api_key = "" # required
# Reports for campaigns sent more than days_old ago will not be collected.
# 0 means collect all.
## Reports for campaigns sent more than days_old ago will not be collected.
## 0 means collect all.
days_old = 0
# Campaign ID to get, if empty gets all campaigns, this option overrides days_old
## Campaign ID to get, if empty gets all campaigns, this option overrides days_old
# campaign_id = ""
`
@@ -34,7 +35,7 @@ func (m *MailChimp) Description() string {
return "Gathers metrics from the /3.0/reports MailChimp API"
}
func (m *MailChimp) Gather(acc inputs.Accumulator) error {
func (m *MailChimp) Gather(acc telegraf.Accumulator) error {
if m.api == nil {
m.api = NewChimpAPI(m.ApiKey)
}
@@ -71,7 +72,7 @@ func (m *MailChimp) Gather(acc inputs.Accumulator) error {
return nil
}
func gatherReport(acc inputs.Accumulator, report Report, now time.Time) {
func gatherReport(acc telegraf.Accumulator, report Report, now time.Time) {
tags := make(map[string]string)
tags["id"] = report.ID
tags["campaign_title"] = report.CampaignTitle
@@ -110,7 +111,7 @@ func gatherReport(acc inputs.Accumulator, report Report, now time.Time) {
}
func init() {
inputs.Add("mailchimp", func() inputs.Input {
inputs.Add("mailchimp", func() telegraf.Input {
return &MailChimp{}
})
}


@@ -8,6 +8,7 @@ import (
"strconv"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -18,10 +19,8 @@ type Memcached struct {
}
var sampleConfig = `
# An array of address to gather stats about. Specify an ip on hostname
# with optional port. ie localhost, 10.0.0.1:11211, etc.
#
# If no servers are specified, then localhost is used as the host.
## An array of addresses to gather stats about. Specify an ip or hostname
## with optional port. ie localhost, 10.0.0.1:11211, etc.
servers = ["localhost:11211"]
# unix_sockets = ["/var/run/memcached.sock"]
`
@@ -69,7 +68,7 @@ func (m *Memcached) Description() string {
}
// Gather reads stats from all configured servers accumulates stats
func (m *Memcached) Gather(acc inputs.Accumulator) error {
func (m *Memcached) Gather(acc telegraf.Accumulator) error {
if len(m.Servers) == 0 && len(m.UnixSockets) == 0 {
return m.gatherServer(":11211", false, acc)
}
@@ -92,7 +91,7 @@ func (m *Memcached) Gather(acc inputs.Accumulator) error {
func (m *Memcached) gatherServer(
address string,
unix bool,
acc inputs.Accumulator,
acc telegraf.Accumulator,
) error {
var conn net.Conn
if unix {
@@ -178,7 +177,7 @@ func parseResponse(r *bufio.Reader) (map[string]string, error) {
}
func init() {
inputs.Add("memcached", func() inputs.Input {
inputs.Add("memcached", func() telegraf.Input {
return &Memcached{}
})
}


@@ -0,0 +1,165 @@
# Mesos Input Plugin
This input plugin gathers metrics from Mesos (*currently only Mesos masters*).
For more information, please check the [Mesos Observability Metrics](http://mesos.apache.org/documentation/latest/monitoring/) page.
### Configuration:
```toml
# Telegraf plugin for gathering metrics from N Mesos masters
[[inputs.mesos]]
# Timeout, in ms.
timeout = 100
# A list of Mesos masters, default value is localhost:5050.
masters = ["localhost:5050"]
# Metrics groups to be collected, by default, all enabled.
master_collections = ["resources","master","system","slaves","frameworks","messages","evqueue","registrar"]
```
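Under the hood, the plugin polls each master's `/metrics/snapshot` endpoint and flattens the JSON reply into the fields listed below. A minimal standalone sketch of that request (the address is the default from the config above, and the `timeout=100ms` query parameter mirrors the `timeout` setting):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// One poll of a single master, the same request the plugin issues.
	resp, err := http.Get("http://localhost:5050/metrics/snapshot?timeout=100ms")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var snapshot map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&snapshot); err != nil {
		panic(err)
	}
	fmt.Println("uptime:", snapshot["master/uptime_secs"])
}
```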
### Measurements & Fields:
Mesos master metric groups
- resources
- master/cpus_percent
- master/cpus_used
- master/cpus_total
- master/cpus_revocable_percent
- master/cpus_revocable_total
- master/cpus_revocable_used
- master/disk_percent
- master/disk_used
- master/disk_total
- master/disk_revocable_percent
- master/disk_revocable_total
- master/disk_revocable_used
- master/mem_percent
- master/mem_used
- master/mem_total
- master/mem_revocable_percent
- master/mem_revocable_total
- master/mem_revocable_used
- master
- master/elected
- master/uptime_secs
- system
- system/cpus_total
- system/load_15min
- system/load_5min
- system/load_1min
- system/mem_free_bytes
- system/mem_total_bytes
- slaves
- master/slave_registrations
- master/slave_removals
- master/slave_reregistrations
- master/slave_shutdowns_scheduled
- master/slave_shutdowns_canceled
- master/slave_shutdowns_completed
- master/slaves_active
- master/slaves_connected
- master/slaves_disconnected
- master/slaves_inactive
- frameworks
- master/frameworks_active
- master/frameworks_connected
- master/frameworks_disconnected
- master/frameworks_inactive
- master/outstanding_offers
- tasks
- master/tasks_error
- master/tasks_failed
- master/tasks_finished
- master/tasks_killed
- master/tasks_lost
- master/tasks_running
- master/tasks_staging
- master/tasks_starting
- messages
- master/invalid_executor_to_framework_messages
- master/invalid_framework_to_executor_messages
- master/invalid_status_update_acknowledgements
- master/invalid_status_updates
- master/dropped_messages
- master/messages_authenticate
- master/messages_deactivate_framework
- master/messages_decline_offers
- master/messages_executor_to_framework
- master/messages_exited_executor
- master/messages_framework_to_executor
- master/messages_kill_task
- master/messages_launch_tasks
- master/messages_reconcile_tasks
- master/messages_register_framework
- master/messages_register_slave
- master/messages_reregister_framework
- master/messages_reregister_slave
- master/messages_resource_request
- master/messages_revive_offers
- master/messages_status_update
- master/messages_status_update_acknowledgement
- master/messages_unregister_framework
- master/messages_unregister_slave
- master/messages_update_slave
- master/recovery_slave_removals
- master/slave_removals/reason_registered
- master/slave_removals/reason_unhealthy
- master/slave_removals/reason_unregistered
- master/valid_framework_to_executor_messages
- master/valid_status_update_acknowledgements
- master/valid_status_updates
- master/task_lost/source_master/reason_invalid_offers
- master/task_lost/source_master/reason_slave_removed
- master/task_lost/source_slave/reason_executor_terminated
- master/valid_executor_to_framework_messages
- evqueue
- master/event_queue_dispatches
- master/event_queue_http_requests
- master/event_queue_messages
- registrar
- registrar/state_fetch_ms
- registrar/state_store_ms
- registrar/state_store_ms/max
- registrar/state_store_ms/min
- registrar/state_store_ms/p50
- registrar/state_store_ms/p90
- registrar/state_store_ms/p95
- registrar/state_store_ms/p99
- registrar/state_store_ms/p999
- registrar/state_store_ms/p9999
### Tags:
- All measurements have the following tags:
- server
### Example Output:
```
$ telegraf -config ~/mesos.conf -input-filter mesos -test
* Plugin: mesos, Collection 1
mesos,server=172.17.8.101 allocator/event_queue_dispatches=0,master/cpus_percent=0,
master/cpus_revocable_percent=0,master/cpus_revocable_total=0,
master/cpus_revocable_used=0,master/cpus_total=2,
master/cpus_used=0,master/disk_percent=0,master/disk_revocable_percent=0,
master/disk_revocable_total=0,master/disk_revocable_used=0,master/disk_total=10823,
master/disk_used=0,master/dropped_messages=2,master/elected=1,
master/event_queue_dispatches=10,master/event_queue_http_requests=0,
master/event_queue_messages=0,master/frameworks_active=2,master/frameworks_connected=2,
master/frameworks_disconnected=0,master/frameworks_inactive=0,
master/invalid_executor_to_framework_messages=0,
master/invalid_framework_to_executor_messages=0,
master/invalid_status_update_acknowledgements=0,master/invalid_status_updates=0,master/mem_percent=0,
master/mem_revocable_percent=0,master/mem_revocable_total=0,
master/mem_revocable_used=0,master/mem_total=1002,
master/mem_used=0,master/messages_authenticate=0,
master/messages_deactivate_framework=0 ...
```


@@ -0,0 +1,320 @@
package mesos
import (
"encoding/json"
"errors"
"io/ioutil"
"log"
"net"
"net/http"
"strconv"
"strings"
"sync"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
jsonparser "github.com/influxdata/telegraf/plugins/parsers/json"
)
type Mesos struct {
Timeout int
Masters []string
MasterCols []string `toml:"master_collections"`
}
var defaultMetrics = []string{
"resources", "master", "system", "slaves", "frameworks",
"tasks", "messages", "evqueue", "messages", "registrar",
}
var sampleConfig = `
# Timeout, in ms.
timeout = 100
# A list of Mesos masters, default value is localhost:5050.
masters = ["localhost:5050"]
# Metrics groups to be collected, by default, all enabled.
master_collections = ["resources","master","system","slaves","frameworks","messages","evqueue","registrar"]
`
// SampleConfig returns a sample configuration block
func (m *Mesos) SampleConfig() string {
return sampleConfig
}
// Description just returns a short description of the Mesos plugin
func (m *Mesos) Description() string {
return "Telegraf plugin for gathering metrics from N Mesos masters"
}
// Gather collects metrics from the given list of Mesos masters
func (m *Mesos) Gather(acc telegraf.Accumulator) error {
var wg sync.WaitGroup
var errorChannel chan error
if len(m.Masters) == 0 {
m.Masters = []string{"localhost:5050"}
}
errorChannel = make(chan error, len(m.Masters)*2)
for _, v := range m.Masters {
wg.Add(1)
go func(c string) {
errorChannel <- m.gatherMetrics(c, acc)
wg.Done()
return
}(v)
}
wg.Wait()
close(errorChannel)
errorStrings := []string{}
// Gather all errors for returning them at once
for err := range errorChannel {
if err != nil {
errorStrings = append(errorStrings, err.Error())
}
}
if len(errorStrings) > 0 {
return errors.New(strings.Join(errorStrings, "\n"))
}
return nil
}
// metricsDiff returns the names of the default metric sets that are not in the wanted list w, i.e. the sets to remove
func metricsDiff(w []string) []string {
b := []string{}
s := make(map[string]bool)
if len(w) == 0 {
return b
}
for _, v := range w {
s[v] = true
}
for _, d := range defaultMetrics {
if _, ok := s[d]; !ok {
b = append(b, d)
}
}
return b
}
// masterBlocks serves as a kind of metrics registry, grouping metrics into sets
func masterBlocks(g string) []string {
var m map[string][]string
m = make(map[string][]string)
m["resources"] = []string{
"master/cpus_percent",
"master/cpus_used",
"master/cpus_total",
"master/cpus_revocable_percent",
"master/cpus_revocable_total",
"master/cpus_revocable_used",
"master/disk_percent",
"master/disk_used",
"master/disk_total",
"master/disk_revocable_percent",
"master/disk_revocable_total",
"master/disk_revocable_used",
"master/mem_percent",
"master/mem_used",
"master/mem_total",
"master/mem_revocable_percent",
"master/mem_revocable_total",
"master/mem_revocable_used",
}
m["master"] = []string{
"master/elected",
"master/uptime_secs",
}
m["system"] = []string{
"system/cpus_total",
"system/load_15min",
"system/load_5min",
"system/load_1min",
"system/mem_free_bytes",
"system/mem_total_bytes",
}
m["slaves"] = []string{
"master/slave_registrations",
"master/slave_removals",
"master/slave_reregistrations",
"master/slave_shutdowns_scheduled",
"master/slave_shutdowns_canceled",
"master/slave_shutdowns_completed",
"master/slaves_active",
"master/slaves_connected",
"master/slaves_disconnected",
"master/slaves_inactive",
}
m["frameworks"] = []string{
"master/frameworks_active",
"master/frameworks_connected",
"master/frameworks_disconnected",
"master/frameworks_inactive",
"master/outstanding_offers",
}
m["tasks"] = []string{
"master/tasks_error",
"master/tasks_failed",
"master/tasks_finished",
"master/tasks_killed",
"master/tasks_lost",
"master/tasks_running",
"master/tasks_staging",
"master/tasks_starting",
}
m["messages"] = []string{
"master/invalid_executor_to_framework_messages",
"master/invalid_framework_to_executor_messages",
"master/invalid_status_update_acknowledgements",
"master/invalid_status_updates",
"master/dropped_messages",
"master/messages_authenticate",
"master/messages_deactivate_framework",
"master/messages_decline_offers",
"master/messages_executor_to_framework",
"master/messages_exited_executor",
"master/messages_framework_to_executor",
"master/messages_kill_task",
"master/messages_launch_tasks",
"master/messages_reconcile_tasks",
"master/messages_register_framework",
"master/messages_register_slave",
"master/messages_reregister_framework",
"master/messages_reregister_slave",
"master/messages_resource_request",
"master/messages_revive_offers",
"master/messages_status_update",
"master/messages_status_update_acknowledgement",
"master/messages_unregister_framework",
"master/messages_unregister_slave",
"master/messages_update_slave",
"master/recovery_slave_removals",
"master/slave_removals/reason_registered",
"master/slave_removals/reason_unhealthy",
"master/slave_removals/reason_unregistered",
"master/valid_framework_to_executor_messages",
"master/valid_status_update_acknowledgements",
"master/valid_status_updates",
"master/task_lost/source_master/reason_invalid_offers",
"master/task_lost/source_master/reason_slave_removed",
"master/task_lost/source_slave/reason_executor_terminated",
"master/valid_executor_to_framework_messages",
}
m["evqueue"] = []string{
"master/event_queue_dispatches",
"master/event_queue_http_requests",
"master/event_queue_messages",
}
m["registrar"] = []string{
"registrar/state_fetch_ms",
"registrar/state_store_ms",
"registrar/state_store_ms/max",
"registrar/state_store_ms/min",
"registrar/state_store_ms/p50",
"registrar/state_store_ms/p90",
"registrar/state_store_ms/p95",
"registrar/state_store_ms/p99",
"registrar/state_store_ms/p999",
"registrar/state_store_ms/p9999",
}
ret, ok := m[g]
if !ok {
log.Println("[mesos] Unkown metrics group: ", g)
return []string{}
}
return ret
}
// removeGroup removes unwanted metric sets from the snapshot
func (m *Mesos) removeGroup(j *map[string]interface{}) {
var ok bool
b := metricsDiff(m.MasterCols)
for _, k := range b {
for _, v := range masterBlocks(k) {
if _, ok = (*j)[v]; ok {
delete((*j), v)
}
}
}
}
// This should not belong to the object
func (m *Mesos) gatherMetrics(a string, acc telegraf.Accumulator) error {
var jsonOut map[string]interface{}
host, _, err := net.SplitHostPort(a)
if err != nil {
host = a
a = a + ":5050"
}
tags := map[string]string{
"server": host,
}
if m.Timeout == 0 {
log.Println("[mesos] Missing timeout value, setting default value (100ms)")
m.Timeout = 100
}
ts := strconv.Itoa(m.Timeout) + "ms"
resp, err := http.Get("http://" + a + "/metrics/snapshot?timeout=" + ts)
if err != nil {
return err
}
data, err := ioutil.ReadAll(resp.Body)
resp.Body.Close()
if err != nil {
return err
}
if err = json.Unmarshal(data, &jsonOut); err != nil {
return errors.New("Error decoding JSON response")
}
m.removeGroup(&jsonOut)
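// Flatten the JSON snapshot into telegraf fields; nested keys are
// joined with "_" and JSON numbers arrive as float64 values.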
jf := jsonparser.JSONFlattener{}
err = jf.FlattenJSON("", jsonOut)
if err != nil {
return err
}
acc.AddFields("mesos", jf.Fields, tags)
return nil
}
func init() {
inputs.Add("mesos", func() telegraf.Input {
return &Mesos{}
})
}


@@ -0,0 +1,118 @@
package mesos
import (
"encoding/json"
"math/rand"
"net/http"
"net/http/httptest"
"os"
"testing"
"github.com/influxdata/telegraf/testutil"
)
var mesosMetrics map[string]interface{}
var ts *httptest.Server
func generateMetrics() {
mesosMetrics = make(map[string]interface{})
metricNames := []string{"master/cpus_percent", "master/cpus_used", "master/cpus_total",
"master/cpus_revocable_percent", "master/cpus_revocable_total", "master/cpus_revocable_used",
"master/disk_percent", "master/disk_used", "master/disk_total", "master/disk_revocable_percent",
"master/disk_revocable_total", "master/disk_revocable_used", "master/mem_percent",
"master/mem_used", "master/mem_total", "master/mem_revocable_percent", "master/mem_revocable_total",
"master/mem_revocable_used", "master/elected", "master/uptime_secs", "system/cpus_total",
"system/load_15min", "system/load_5min", "system/load_1min", "system/mem_free_bytes",
"system/mem_total_bytes", "master/slave_registrations", "master/slave_removals",
"master/slave_reregistrations", "master/slave_shutdowns_scheduled", "master/slave_shutdowns_canceled",
"master/slave_shutdowns_completed", "master/slaves_active", "master/slaves_connected",
"master/slaves_disconnected", "master/slaves_inactive", "master/frameworks_active",
"master/frameworks_connected", "master/frameworks_disconnected", "master/frameworks_inactive",
"master/outstanding_offers", "master/tasks_error", "master/tasks_failed", "master/tasks_finished",
"master/tasks_killed", "master/tasks_lost", "master/tasks_running", "master/tasks_staging",
"master/tasks_starting", "master/invalid_executor_to_framework_messages", "master/invalid_framework_to_executor_messages",
"master/invalid_status_update_acknowledgements", "master/invalid_status_updates",
"master/dropped_messages", "master/messages_authenticate", "master/messages_deactivate_framework",
"master/messages_decline_offers", "master/messages_executor_to_framework", "master/messages_exited_executor",
"master/messages_framework_to_executor", "master/messages_kill_task", "master/messages_launch_tasks",
"master/messages_reconcile_tasks", "master/messages_register_framework", "master/messages_register_slave",
"master/messages_reregister_framework", "master/messages_reregister_slave", "master/messages_resource_request",
"master/messages_revive_offers", "master/messages_status_update", "master/messages_status_update_acknowledgement",
"master/messages_unregister_framework", "master/messages_unregister_slave", "master/messages_update_slave",
"master/recovery_slave_removals", "master/slave_removals/reason_registered", "master/slave_removals/reason_unhealthy",
"master/slave_removals/reason_unregistered", "master/valid_framework_to_executor_messages", "master/valid_status_update_acknowledgements",
"master/valid_status_updates", "master/task_lost/source_master/reason_invalid_offers",
"master/task_lost/source_master/reason_slave_removed", "master/task_lost/source_slave/reason_executor_terminated",
"master/valid_executor_to_framework_messages", "master/event_queue_dispatches",
"master/event_queue_http_requests", "master/event_queue_messages", "registrar/state_fetch_ms",
"registrar/state_store_ms", "registrar/state_store_ms/max", "registrar/state_store_ms/min",
"registrar/state_store_ms/p50", "registrar/state_store_ms/p90", "registrar/state_store_ms/p95",
"registrar/state_store_ms/p99", "registrar/state_store_ms/p999", "registrar/state_store_ms/p9999"}
for _, k := range metricNames {
mesosMetrics[k] = rand.Float64()
}
}
func TestMain(m *testing.M) {
generateMetrics()
r := http.NewServeMux()
r.HandleFunc("/metrics/snapshot", func(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(http.StatusOK)
json.NewEncoder(w).Encode(mesosMetrics)
})
ts = httptest.NewServer(r)
rc := m.Run()
ts.Close()
os.Exit(rc)
}
func TestMesosMaster(t *testing.T) {
var acc testutil.Accumulator
m := Mesos{
Masters: []string{ts.Listener.Addr().String()},
Timeout: 10,
}
err := m.Gather(&acc)
if err != nil {
t.Errorf(err.Error())
}
acc.AssertContainsFields(t, "mesos", mesosMetrics)
}
func TestRemoveGroup(t *testing.T) {
generateMetrics()
m := Mesos{
MasterCols: []string{
"resources", "master", "registrar",
},
}
b := []string{
"system", "slaves", "frameworks",
"messages", "evqueue",
}
m.removeGroup(&mesosMetrics)
for _, v := range b {
for _, x := range masterBlocks(v) {
if _, ok := mesosMetrics[x]; ok {
t.Errorf("Found key %s, it should be gone.", x)
}
}
}
for _, v := range m.MasterCols {
for _, x := range masterBlocks(v) {
if _, ok := mesosMetrics[x]; !ok {
t.Errorf("Didn't find key %s, it should present.", x)
}
}
}
}


@@ -1,12 +1,16 @@
package inputs
import "github.com/stretchr/testify/mock"
import (
"github.com/influxdata/telegraf"
"github.com/stretchr/testify/mock"
)
type MockPlugin struct {
mock.Mock
}
func (m *MockPlugin) Gather(_a0 Accumulator) error {
func (m *MockPlugin) Gather(_a0 telegraf.Accumulator) error {
ret := m.Called(_a0)
r0 := ret.Error(0)


@@ -9,6 +9,7 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"gopkg.in/mgo.v2"
)
@@ -25,11 +26,11 @@ type Ssl struct {
}
var sampleConfig = `
# An array of URI to gather stats about. Specify an ip or hostname
# with optional port add password. ie mongodb://user:auth_key@10.10.3.30:27017,
# mongodb://10.10.3.33:18832, 10.0.0.1:10000, etc.
#
# If no servers are specified, then 127.0.0.1 is used as the host and 27107 as the port.
## An array of URI to gather stats about. Specify an ip or hostname
## with optional port and password. ie,
## mongodb://user:auth_key@10.10.3.30:27017,
## mongodb://10.10.3.33:18832,
## 10.0.0.1:10000, etc.
servers = ["127.0.0.1:27017"]
`
@@ -45,7 +46,7 @@ var localhost = &url.URL{Host: "127.0.0.1:27017"}
// Reads stats from all configured servers accumulates stats.
// Returns one of the errors encountered while gather stats (if any).
func (m *MongoDB) Gather(acc inputs.Accumulator) error {
func (m *MongoDB) Gather(acc telegraf.Accumulator) error {
if len(m.Servers) == 0 {
m.gatherServer(m.getMongoServer(localhost), acc)
return nil
@@ -88,7 +89,7 @@ func (m *MongoDB) getMongoServer(url *url.URL) *Server {
return m.mongos[url.Host]
}
func (m *MongoDB) gatherServer(server *Server, acc inputs.Accumulator) error {
func (m *MongoDB) gatherServer(server *Server, acc telegraf.Accumulator) error {
if server.Session == nil {
var dialAddrs []string
if server.Url.User != nil {
@@ -138,7 +139,7 @@ func (m *MongoDB) gatherServer(server *Server, acc inputs.Accumulator) error {
}
func init() {
inputs.Add("mongodb", func() inputs.Input {
inputs.Add("mongodb", func() telegraf.Input {
return &MongoDB{
mongos: make(map[string]*Server),
}


@@ -5,7 +5,7 @@ import (
"reflect"
"strconv"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf"
)
type MongodbData struct {
@@ -97,7 +97,7 @@ func (d *MongodbData) add(key string, val interface{}) {
d.Fields[key] = val
}
func (d *MongodbData) flush(acc inputs.Accumulator) {
func (d *MongodbData) flush(acc telegraf.Accumulator) {
acc.AddFields(
"mongodb",
d.Fields,


@@ -4,7 +4,7 @@ import (
"net/url"
"time"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf"
"gopkg.in/mgo.v2"
"gopkg.in/mgo.v2/bson"
)
@@ -21,7 +21,7 @@ func (s *Server) getDefaultTags() map[string]string {
return tags
}
func (s *Server) gatherData(acc inputs.Accumulator) error {
func (s *Server) gatherData(acc telegraf.Accumulator) error {
s.Session.SetMode(mgo.Eventual, true)
s.Session.SetSocketTimeout(0)
result := &ServerStatus{}


@@ -0,0 +1,48 @@
# MQTT Consumer Input Plugin
The [MQTT](http://mqtt.org/) consumer plugin reads from
specified MQTT topics and adds messages to InfluxDB.
The plugin expects messages in the
[Telegraf Input Data Formats](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md).
### Configuration:
```toml
# Read metrics from MQTT topic(s)
[[inputs.mqtt_consumer]]
servers = ["localhost:1883"]
## MQTT QoS, must be 0, 1, or 2
qos = 0
## Topics to subscribe to
topics = [
"telegraf/host01/cpu",
"telegraf/+/mem",
"sensors/#",
]
## Maximum number of metrics to buffer between collection intervals
metric_buffer = 100000
## username and password to connect MQTT server.
# username = "telegraf"
# password = "metricsmetricsmetricsmetrics"
## Optional SSL Config
# ssl_ca = "/etc/telegraf/ca.pem"
# ssl_cert = "/etc/telegraf/cert.pem"
# ssl_key = "/etc/telegraf/key.pem"
## Use SSL but skip chain & host verification
# insecure_skip_verify = false
## Data format to consume. This can be "json", "influx" or "graphite"
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "influx"
```
### Tags:
- All measurements are tagged with the incoming topic, ie
`topic=telegraf/host01/cpu`
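For a quick end-to-end check, any MQTT client can publish a line-protocol message to one of the subscribed topics. A minimal sketch using the same paho client library the plugin imports (the broker address, topic, and sample metric are assumptions, not requirements):

```go
package main

import (
	mqtt "git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git"
)

func main() {
	opts := mqtt.NewClientOptions().AddBroker("tcp://localhost:1883")
	client := mqtt.NewClient(opts)
	if token := client.Connect(); token.Wait() && token.Error() != nil {
		panic(token.Error())
	}
	// One metric in influx line-protocol format; with data_format = "influx"
	// the consumer parses it and tags it with topic=telegraf/host01/cpu.
	token := client.Publish("telegraf/host01/cpu", 0, false,
		"cpu_load_short,host=server01 value=23422.0")
	token.Wait()
	client.Disconnect(250)
}
```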


@@ -0,0 +1,209 @@
package mqtt_consumer
import (
"fmt"
"log"
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/parsers"
"git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git"
)
type MQTTConsumer struct {
Servers []string
Topics []string
Username string
Password string
QoS int `toml:"qos"`
parser parsers.Parser
// Legacy metric buffer support
MetricBuffer int
// Path to CA file
SSLCA string `toml:"ssl_ca"`
// Path to host cert file
SSLCert string `toml:"ssl_cert"`
// Path to cert key file
SSLKey string `toml:"ssl_key"`
// Use SSL but skip chain & host verification
InsecureSkipVerify bool
sync.Mutex
client *mqtt.Client
// channel of all incoming raw mqtt messages
in chan mqtt.Message
done chan struct{}
// keep the accumulator internally:
acc telegraf.Accumulator
}
var sampleConfig = `
servers = ["localhost:1883"]
## MQTT QoS, must be 0, 1, or 2
qos = 0
## Topics to subscribe to
topics = [
"telegraf/host01/cpu",
"telegraf/+/mem",
"sensors/#",
]
## username and password to connect MQTT server.
# username = "telegraf"
# password = "metricsmetricsmetricsmetrics"
## Optional SSL Config
# ssl_ca = "/etc/telegraf/ca.pem"
# ssl_cert = "/etc/telegraf/cert.pem"
# ssl_key = "/etc/telegraf/key.pem"
## Use SSL but skip chain & host verification
# insecure_skip_verify = false
## Data format to consume. This can be "json", "influx" or "graphite"
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "influx"
`
func (m *MQTTConsumer) SampleConfig() string {
return sampleConfig
}
func (m *MQTTConsumer) Description() string {
return "Read metrics from MQTT topic(s)"
}
func (m *MQTTConsumer) SetParser(parser parsers.Parser) {
m.parser = parser
}
func (m *MQTTConsumer) Start(acc telegraf.Accumulator) error {
m.Lock()
defer m.Unlock()
m.acc = acc
if m.QoS > 2 || m.QoS < 0 {
return fmt.Errorf("MQTT Consumer, invalid QoS value: %d", m.QoS)
}
opts, err := m.createOpts()
if err != nil {
return err
}
m.client = mqtt.NewClient(opts)
if token := m.client.Connect(); token.Wait() && token.Error() != nil {
return token.Error()
}
m.in = make(chan mqtt.Message, 1000)
m.done = make(chan struct{})
topics := make(map[string]byte)
for _, topic := range m.Topics {
topics[topic] = byte(m.QoS)
}
subscribeToken := m.client.SubscribeMultiple(topics, m.recvMessage)
subscribeToken.Wait()
if subscribeToken.Error() != nil {
return subscribeToken.Error()
}
go m.receiver()
return nil
}
// receiver() reads all incoming messages from the consumer, and parses them into
// influxdb metric points.
func (m *MQTTConsumer) receiver() {
for {
select {
case <-m.done:
return
case msg := <-m.in:
topic := msg.Topic()
metrics, err := m.parser.Parse(msg.Payload())
if err != nil {
log.Printf("MQTT PARSE ERROR\nmessage: %s\nerror: %s",
string(msg.Payload()), err.Error())
}
for _, metric := range metrics {
tags := metric.Tags()
tags["topic"] = topic
m.acc.AddFields(metric.Name(), metric.Fields(), tags, metric.Time())
}
}
}
}
func (m *MQTTConsumer) recvMessage(_ *mqtt.Client, msg mqtt.Message) {
m.in <- msg
}
func (m *MQTTConsumer) Stop() {
m.Lock()
defer m.Unlock()
close(m.done)
m.client.Disconnect(200)
}
func (m *MQTTConsumer) Gather(acc telegraf.Accumulator) error {
return nil
}
func (m *MQTTConsumer) createOpts() (*mqtt.ClientOptions, error) {
opts := mqtt.NewClientOptions()
opts.SetClientID("Telegraf-Consumer-" + internal.RandomString(5))
tlsCfg, err := internal.GetTLSConfig(
m.SSLCert, m.SSLKey, m.SSLCA, m.InsecureSkipVerify)
if err != nil {
return nil, err
}
scheme := "tcp"
if tlsCfg != nil {
scheme = "ssl"
opts.SetTLSConfig(tlsCfg)
}
user := m.Username
if user != "" {
opts.SetUsername(user)
}
password := m.Password
if password != "" {
opts.SetPassword(password)
}
if len(m.Servers) == 0 {
return opts, fmt.Errorf("could not get host information")
}
for _, host := range m.Servers {
server := fmt.Sprintf("%s://%s", scheme, host)
opts.AddBroker(server)
}
opts.SetAutoReconnect(true)
opts.SetKeepAlive(time.Second * 60)
return opts, nil
}
func init() {
inputs.Add("mqtt_consumer", func() telegraf.Input {
return &MQTTConsumer{}
})
}


@@ -0,0 +1,162 @@
package mqtt_consumer
import (
"testing"
"time"
"github.com/influxdata/telegraf/plugins/parsers"
"github.com/influxdata/telegraf/testutil"
"git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git"
)
const (
testMsg = "cpu_load_short,host=server01 value=23422.0 1422568543702900257"
testMsgGraphite = "cpu.load.short.graphite 23422 1454780029"
testMsgJSON = "{\"a\": 5, \"b\": {\"c\": 6}}\n"
invalidMsg = "cpu_load_short,host=server01 1422568543702900257"
)
func newTestMQTTConsumer() (*MQTTConsumer, chan mqtt.Message) {
in := make(chan mqtt.Message, 100)
n := &MQTTConsumer{
Topics: []string{"telegraf"},
Servers: []string{"localhost:1883"},
in: in,
done: make(chan struct{}),
}
return n, in
}
// Test that the parser parses NATS messages into metrics
func TestRunParser(t *testing.T) {
n, in := newTestMQTTConsumer()
acc := testutil.Accumulator{}
n.acc = &acc
defer close(n.done)
n.parser, _ = parsers.NewInfluxParser()
go n.receiver()
in <- mqttMsg(testMsg)
time.Sleep(time.Millisecond * 25)
if a := acc.NFields(); a != 1 {
t.Errorf("got %v, expected %v", a, 1)
}
}
// Test that the parser ignores invalid messages
func TestRunParserInvalidMsg(t *testing.T) {
n, in := newTestMQTTConsumer()
acc := testutil.Accumulator{}
n.acc = &acc
defer close(n.done)
n.parser, _ = parsers.NewInfluxParser()
go n.receiver()
in <- mqttMsg(invalidMsg)
time.Sleep(time.Millisecond * 25)
if a := acc.NFields(); a != 0 {
t.Errorf("got %v, expected %v", a, 0)
}
}
// Test that the parser parses line format messages into metrics
func TestRunParserAndGather(t *testing.T) {
n, in := newTestMQTTConsumer()
acc := testutil.Accumulator{}
n.acc = &acc
defer close(n.done)
n.parser, _ = parsers.NewInfluxParser()
go n.receiver()
in <- mqttMsg(testMsg)
time.Sleep(time.Millisecond * 25)
n.Gather(&acc)
acc.AssertContainsFields(t, "cpu_load_short",
map[string]interface{}{"value": float64(23422)})
}
// Test that the parser parses graphite format messages into metrics
func TestRunParserAndGatherGraphite(t *testing.T) {
n, in := newTestMQTTConsumer()
acc := testutil.Accumulator{}
n.acc = &acc
defer close(n.done)
n.parser, _ = parsers.NewGraphiteParser("_", []string{}, nil)
go n.receiver()
in <- mqttMsg(testMsgGraphite)
time.Sleep(time.Millisecond * 25)
n.Gather(&acc)
acc.AssertContainsFields(t, "cpu_load_short_graphite",
map[string]interface{}{"value": float64(23422)})
}
// Test that the parser parses json format messages into metrics
func TestRunParserAndGatherJSON(t *testing.T) {
n, in := newTestMQTTConsumer()
acc := testutil.Accumulator{}
n.acc = &acc
defer close(n.done)
n.parser, _ = parsers.NewJSONParser("nats_json_test", []string{}, nil)
go n.receiver()
in <- mqttMsg(testMsgJSON)
time.Sleep(time.Millisecond * 25)
n.Gather(&acc)
acc.AssertContainsFields(t, "nats_json_test",
map[string]interface{}{
"a": float64(5),
"b_c": float64(6),
})
}
func mqttMsg(val string) mqtt.Message {
return &message{
topic: "telegraf/unit_test",
payload: []byte(val),
}
}
// message mirrors the struct from the paho mqtt client library, providing
// a test implementation of the mqtt.Message interface.
type message struct {
duplicate bool
qos byte
retained bool
topic string
messageID uint16
payload []byte
}
func (m *message) Duplicate() bool {
return m.duplicate
}
func (m *message) Qos() byte {
return m.qos
}
func (m *message) Retained() bool {
return m.retained
}
func (m *message) Topic() string {
return m.topic
}
func (m *message) MessageID() uint16 {
return m.messageID
}
func (m *message) Payload() []byte {
return m.payload
}


@@ -6,6 +6,7 @@ import (
"strings"
_ "github.com/go-sql-driver/mysql"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -14,14 +15,14 @@ type Mysql struct {
}
var sampleConfig = `
# specify servers via a url matching:
# [username[:password]@][protocol[(address)]]/[?tls=[true|false|skip-verify]]
# see https://github.com/go-sql-driver/mysql#dsn-data-source-name
# e.g.
# root:passwd@tcp(127.0.0.1:3306)/?tls=false
# root@tcp(127.0.0.1:3306)/?tls=false
#
# If no servers are specified, then localhost is used as the host.
## specify servers via a url matching:
## [username[:password]@][protocol[(address)]]/[?tls=[true|false|skip-verify]]
## see https://github.com/go-sql-driver/mysql#dsn-data-source-name
## e.g.
## root:passwd@tcp(127.0.0.1:3306)/?tls=false
## root@tcp(127.0.0.1:3306)/?tls=false
##
## If no servers are specified, then localhost is used as the host.
servers = ["tcp(127.0.0.1:3306)/"]
`
@@ -35,7 +36,7 @@ func (m *Mysql) Description() string {
var localhost = ""
func (m *Mysql) Gather(acc inputs.Accumulator) error {
func (m *Mysql) Gather(acc telegraf.Accumulator) error {
if len(m.Servers) == 0 {
// if we can't get stats in this case, thats fine, don't report
// an error.
@@ -113,7 +114,7 @@ var mappings = []*mapping{
},
}
func (m *Mysql) gatherServer(serv string, acc inputs.Accumulator) error {
func (m *Mysql) gatherServer(serv string, acc telegraf.Accumulator) error {
// If user forgot the '/', add it
if strings.HasSuffix(serv, ")") {
serv = serv + "/"
@@ -207,7 +208,7 @@ func (m *Mysql) gatherServer(serv string, acc inputs.Accumulator) error {
}
func init() {
inputs.Add("mysql", func() inputs.Input {
inputs.Add("mysql", func() telegraf.Input {
return &Mysql{}
})
}


@@ -0,0 +1,31 @@
# NATS Consumer Input Plugin
The [NATS](http://www.nats.io/about/) consumer plugin reads from
specified NATS subjects and adds messages to InfluxDB. The plugin expects messages
in the [Telegraf Input Data Formats](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md).
A [Queue Group](http://www.nats.io/documentation/concepts/nats-queueing/)
is used when subscribing to subjects so multiple instances of telegraf can read
from a NATS cluster in parallel.
## Configuration
```toml
# Read metrics from NATS subject(s)
[[inputs.nats_consumer]]
## urls of NATS servers
servers = ["nats://localhost:4222"]
## Use Transport Layer Security
secure = false
## subject(s) to consume
subjects = ["telegraf"]
## name a queue group
queue_group = "telegraf_consumers"
## Maximum number of metrics to buffer between collection intervals
metric_buffer = 100000
## Data format to consume. This can be "json", "influx" or "graphite"
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "influx"
```
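Because subscriptions share a queue group, NATS delivers each message to exactly one telegraf instance in the group. A minimal publisher sketch against the configuration above (the server URL, subject, and sample metric are just the defaults shown here):

```go
package main

import "github.com/nats-io/nats"

func main() {
	// Publish one line-protocol metric to the consumed subject; a single
	// member of the "telegraf_consumers" queue group will receive it.
	nc, err := nats.Connect("nats://localhost:4222")
	if err != nil {
		panic(err)
	}
	defer nc.Close()
	nc.Publish("telegraf", []byte("cpu_load_short,host=server01 value=23422.0"))
	nc.Flush()
}
```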


@@ -0,0 +1,184 @@
package natsconsumer
import (
"fmt"
"log"
"sync"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/parsers"
"github.com/nats-io/nats"
)
type natsError struct {
conn *nats.Conn
sub *nats.Subscription
err error
}
func (e natsError) Error() string {
return fmt.Sprintf("%s url:%s id:%s sub:%s queue:%s",
e.err.Error(), e.conn.ConnectedUrl(), e.conn.ConnectedServerId(), e.sub.Subject, e.sub.Queue)
}
type natsConsumer struct {
QueueGroup string
Subjects []string
Servers []string
Secure bool
// Legacy metric buffer support
MetricBuffer int
parser parsers.Parser
sync.Mutex
Conn *nats.Conn
Subs []*nats.Subscription
// channel for all incoming NATS messages
in chan *nats.Msg
// channel for all NATS read errors
errs chan error
done chan struct{}
acc telegraf.Accumulator
}
var sampleConfig = `
## urls of NATS servers
servers = ["nats://localhost:4222"]
## Use Transport Layer Security
secure = false
## subject(s) to consume
subjects = ["telegraf"]
## name a queue group
queue_group = "telegraf_consumers"
## Data format to consume. This can be "json", "influx" or "graphite"
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "influx"
`
func (n *natsConsumer) SampleConfig() string {
return sampleConfig
}
func (n *natsConsumer) Description() string {
return "Read metrics from NATS subject(s)"
}
func (n *natsConsumer) SetParser(parser parsers.Parser) {
n.parser = parser
}
func (n *natsConsumer) natsErrHandler(c *nats.Conn, s *nats.Subscription, e error) {
select {
case n.errs <- natsError{conn: c, sub: s, err: e}:
default:
return
}
}
// Start the nats consumer. Caller must call *natsConsumer.Stop() to clean up.
func (n *natsConsumer) Start(acc telegraf.Accumulator) error {
n.Lock()
defer n.Unlock()
n.acc = acc
var connectErr error
opts := nats.DefaultOptions
opts.Servers = n.Servers
opts.Secure = n.Secure
if n.Conn == nil || n.Conn.IsClosed() {
n.Conn, connectErr = opts.Connect()
if connectErr != nil {
return connectErr
}
// Setup message and error channels
n.errs = make(chan error)
n.Conn.SetErrorHandler(n.natsErrHandler)
n.in = make(chan *nats.Msg)
for _, subj := range n.Subjects {
sub, err := n.Conn.ChanQueueSubscribe(subj, n.QueueGroup, n.in)
if err != nil {
return err
}
n.Subs = append(n.Subs, sub)
}
}
n.done = make(chan struct{})
// Start the message reader
go n.receiver()
log.Printf("Started the NATS consumer service, nats: %v, subjects: %v, queue: %v\n",
n.Conn.ConnectedUrl(), n.Subjects, n.QueueGroup)
return nil
}
// receiver() reads all incoming messages from NATS, and parses them into
// telegraf metrics.
func (n *natsConsumer) receiver() {
defer n.clean()
for {
select {
case <-n.done:
return
case err := <-n.errs:
log.Printf("error reading from %s\n", err.Error())
case msg := <-n.in:
metrics, err := n.parser.Parse(msg.Data)
if err != nil {
log.Printf("subject: %s, error: %s", msg.Subject, err.Error())
}
for _, metric := range metrics {
n.acc.AddFields(metric.Name(), metric.Fields(), metric.Tags(), metric.Time())
}
}
}
}
func (n *natsConsumer) clean() {
n.Lock()
defer n.Unlock()
close(n.in)
close(n.errs)
for _, sub := range n.Subs {
if err := sub.Unsubscribe(); err != nil {
log.Printf("Error unsubscribing from subject %s in queue %s: %s\n",
sub.Subject, sub.Queue, err.Error())
}
}
if n.Conn != nil && !n.Conn.IsClosed() {
n.Conn.Close()
}
}
func (n *natsConsumer) Stop() {
n.Lock()
close(n.done)
n.Unlock()
}
func (n *natsConsumer) Gather(acc telegraf.Accumulator) error {
return nil
}
func init() {
inputs.Add("nats_consumer", func() telegraf.Input {
return &natsConsumer{}
})
}


@@ -0,0 +1,130 @@
package natsconsumer
import (
"testing"
"time"
"github.com/influxdata/telegraf/plugins/parsers"
"github.com/influxdata/telegraf/testutil"
"github.com/nats-io/nats"
)
const (
testMsg = "cpu_load_short,host=server01 value=23422.0 1422568543702900257"
testMsgGraphite = "cpu.load.short.graphite 23422 1454780029"
testMsgJSON = "{\"a\": 5, \"b\": {\"c\": 6}}\n"
invalidMsg = "cpu_load_short,host=server01 1422568543702900257"
metricBuffer = 5
)
func newTestNatsConsumer() (*natsConsumer, chan *nats.Msg) {
in := make(chan *nats.Msg, metricBuffer)
n := &natsConsumer{
QueueGroup: "test",
Subjects: []string{"telegraf"},
Servers: []string{"nats://localhost:4222"},
Secure: false,
in: in,
errs: make(chan error, metricBuffer),
done: make(chan struct{}),
}
return n, in
}
// Test that the parser parses NATS messages into metrics
func TestRunParser(t *testing.T) {
n, in := newTestNatsConsumer()
acc := testutil.Accumulator{}
n.acc = &acc
defer close(n.done)
n.parser, _ = parsers.NewInfluxParser()
go n.receiver()
in <- natsMsg(testMsg)
time.Sleep(time.Millisecond * 25)
if acc.NFields() != 1 {
t.Errorf("got %v, expected %v", acc.NFields(), 1)
}
}
// Test that the parser ignores invalid messages
func TestRunParserInvalidMsg(t *testing.T) {
n, in := newTestNatsConsumer()
acc := testutil.Accumulator{}
n.acc = &acc
defer close(n.done)
n.parser, _ = parsers.NewInfluxParser()
go n.receiver()
in <- natsMsg(invalidMsg)
time.Sleep(time.Millisecond * 25)
if acc.NFields() != 0 {
t.Errorf("got %v, expected %v", acc.NFields(), 0)
}
}
// Test that the parser parses line format messages into metrics
func TestRunParserAndGather(t *testing.T) {
n, in := newTestNatsConsumer()
acc := testutil.Accumulator{}
n.acc = &acc
defer close(n.done)
n.parser, _ = parsers.NewInfluxParser()
go n.receiver()
in <- natsMsg(testMsg)
time.Sleep(time.Millisecond * 25)
n.Gather(&acc)
acc.AssertContainsFields(t, "cpu_load_short",
map[string]interface{}{"value": float64(23422)})
}
// Test that the parser parses graphite format messages into metrics
func TestRunParserAndGatherGraphite(t *testing.T) {
n, in := newTestNatsConsumer()
acc := testutil.Accumulator{}
n.acc = &acc
defer close(n.done)
n.parser, _ = parsers.NewGraphiteParser("_", []string{}, nil)
go n.receiver()
in <- natsMsg(testMsgGraphite)
time.Sleep(time.Millisecond * 25)
n.Gather(&acc)
acc.AssertContainsFields(t, "cpu_load_short_graphite",
map[string]interface{}{"value": float64(23422)})
}
// Test that the parser parses json format messages into metrics
func TestRunParserAndGatherJSON(t *testing.T) {
n, in := newTestNatsConsumer()
acc := testutil.Accumulator{}
n.acc = &acc
defer close(n.done)
n.parser, _ = parsers.NewJSONParser("nats_json_test", []string{}, nil)
go n.receiver()
in <- natsMsg(testMsgJSON)
time.Sleep(time.Millisecond * 25)
n.Gather(&acc)
acc.AssertContainsFields(t, "nats_json_test",
map[string]interface{}{
"a": float64(5),
"b_c": float64(6),
})
}
func natsMsg(val string) *nats.Msg {
return &nats.Msg{
Subject: "telegraf",
Data: []byte(val),
}
}


@@ -0,0 +1,66 @@
# Net Response Input Plugin
This input plugin tests UDP/TCP connection response times.
It can also check the response text.
### Configuration:
```toml
# List of UDP/TCP connections you want to check
[[inputs.net_response]]
protocol = "tcp"
# Server address (default IP localhost)
address = "github.com:80"
# Set timeout (default 1.0)
timeout = 1.0
# Set read timeout (default 1.0)
read_timeout = 1.0
# String sent to the server
send = "ssh"
# Expected string in answer
expect = "ssh"
[[inputs.net_response]]
protocol = "tcp"
address = ":80"
[[inputs.net_response]]
protocol = "udp"
# Server address (default IP localhost)
address = "github.com:80"
# Set timeout (default 1.0)
timeout = 1.0
# Set read timeout (default 1.0)
read_timeout = 1.0
# String sent to the server
send = "ssh"
# Expected string in answer
expect = "ssh"
[[inputs.net_response]]
protocol = "udp"
address = "localhost:161"
timeout = 2.0
```
### Measurements & Fields:
- net_response
- response_time (float, seconds)
- string_found (bool) # Only if the "expect" option is set
### Tags:
- All measurements have the following tags:
- host
- port
- protocol
### Example Output:
```
$ ./telegraf -config telegraf.conf -input-filter net_response -test
net_response,host=127.0.0.1,port=22,protocol=tcp response_time=0.18070360500000002,string_found=true 1454785464182527094
net_response,host=127.0.0.1,port=2222,protocol=tcp response_time=1.090124776,string_found=false 1454784433658942325
```
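The TCP path boils down to a timed dial plus an optional send/expect round trip. A minimal standalone sketch of that flow, reusing the address, send, and expect values from the sample config (the plugin itself uses `net.DialTCP`; `net.DialTimeout` here is a simplification):

```go
package main

import (
	"bufio"
	"fmt"
	"net"
	"strings"
	"time"
)

func main() {
	start := time.Now()
	conn, err := net.DialTimeout("tcp", "github.com:80", 1*time.Second)
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	// Optional probe: send a string, then wait up to read_timeout for a reply.
	fmt.Fprint(conn, "ssh")
	conn.SetReadDeadline(time.Now().Add(1 * time.Second))
	line, _ := bufio.NewReader(conn).ReadString('\n')
	fmt.Printf("response_time=%.6fs string_found=%v\n",
		time.Since(start).Seconds(), strings.Contains(line, "ssh"))
}
```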


@@ -0,0 +1,196 @@
package net_response
import (
"bufio"
"errors"
"net"
"net/textproto"
"regexp"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
// NetResponse struct
type NetResponse struct {
Address string
Timeout float64
ReadTimeout float64
Send string
Expect string
Protocol string
}
func (_ *NetResponse) Description() string {
return "TCP or UDP 'ping' given url and collect response time in seconds"
}
var sampleConfig = `
## Protocol, must be "tcp" or "udp"
protocol = "tcp"
## Server address (default localhost)
address = "github.com:80"
## Set timeout (default 1.0 seconds)
timeout = 1.0
## Set read timeout (default 1.0 seconds)
read_timeout = 1.0
## Optional string sent to the server
# send = "ssh"
## Optional expected string in answer
# expect = "ssh"
`
func (_ *NetResponse) SampleConfig() string {
return sampleConfig
}
func (t *NetResponse) TcpGather() (map[string]interface{}, error) {
// Prepare fields
fields := make(map[string]interface{})
// Start Timer
start := time.Now()
// Resolving
tcpAddr, err := net.ResolveTCPAddr("tcp", t.Address)
// Connecting
conn, err := net.DialTCP("tcp", nil, tcpAddr)
// Stop timer
responseTime := time.Since(start).Seconds()
// Handle error
if err != nil {
return nil, err
}
defer conn.Close()
// Send string if needed
if t.Send != "" {
msg := []byte(t.Send)
conn.Write(msg)
conn.CloseWrite()
// Stop timer
responseTime = time.Since(start).Seconds()
}
// Read string if needed
if t.Expect != "" {
// Set read timeout
conn.SetReadDeadline(time.Now().Add(time.Duration(t.ReadTimeout) * time.Second))
// Prepare reader
reader := bufio.NewReader(conn)
tp := textproto.NewReader(reader)
// Read
data, err := tp.ReadLine()
// Stop timer
responseTime = time.Since(start).Seconds()
// Handle error
if err != nil {
fields["string_found"] = false
} else {
// Looking for string in answer
RegEx := regexp.MustCompile(`.*` + t.Expect + `.*`)
find := RegEx.FindString(string(data))
if find != "" {
fields["string_found"] = true
} else {
fields["string_found"] = false
}
}
}
fields["response_time"] = responseTime
return fields, nil
}
func (u *NetResponse) UdpGather() (map[string]interface{}, error) {
// Prepare fields
fields := make(map[string]interface{})
// Start Timer
start := time.Now()
// Resolving
udpAddr, err := net.ResolveUDPAddr("udp", u.Address)
LocalAddr, err := net.ResolveUDPAddr("udp", "127.0.0.1:0")
// Connecting
conn, err := net.DialUDP("udp", LocalAddr, udpAddr)
// Handle error
if err != nil {
return nil, err
}
defer conn.Close()
// Send string
msg := []byte(u.Send)
conn.Write(msg)
// Read string
// Set read timeout
conn.SetReadDeadline(time.Now().Add(time.Duration(u.ReadTimeout) * time.Second))
// Read
buf := make([]byte, 1024)
_, _, err = conn.ReadFromUDP(buf)
// Stop timer
responseTime := time.Since(start).Seconds()
// Handle error
if err != nil {
return nil, err
} else {
// Looking for string in answer
RegEx := regexp.MustCompile(`.*` + u.Expect + `.*`)
find := RegEx.FindString(string(buf))
if find != "" {
fields["string_found"] = true
} else {
fields["string_found"] = false
}
}
fields["response_time"] = responseTime
return fields, nil
}
func (c *NetResponse) Gather(acc telegraf.Accumulator) error {
// Set default values
if c.Timeout == 0 {
c.Timeout = 1.0
}
if c.ReadTimeout == 0 {
c.ReadTimeout = 1.0
}
// Check send and expected string
if c.Protocol == "udp" && c.Send == "" {
return errors.New("Send string cannot be empty")
}
if c.Protocol == "udp" && c.Expect == "" {
return errors.New("Expected string cannot be empty")
}
// Prepare host and port
host, port, err := net.SplitHostPort(c.Address)
if err != nil {
return err
}
if host == "" {
c.Address = "localhost:" + port
}
if port == "" {
return errors.New("Bad port")
}
// Prepare data
tags := map[string]string{"host": host, "port": port}
var fields map[string]interface{}
// Gather data
if c.Protocol == "tcp" {
fields, err = c.TcpGather()
tags["protocol"] = "tcp"
} else if c.Protocol == "udp" {
fields, err = c.UdpGather()
tags["protocol"] = "udp"
} else {
return errors.New("Bad protocol")
}
if err != nil {
return err
}
// Add metrics
acc.AddFields("net_response", fields, tags)
return nil
}
func init() {
inputs.Add("net_response", func() telegraf.Input {
return &NetResponse{}
})
}


@@ -0,0 +1,198 @@
package net_response
import (
"net"
"regexp"
"sync"
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestBadProtocol(t *testing.T) {
var acc testutil.Accumulator
// Init plugin
c := NetResponse{
Protocol: "unknownprotocol",
Address: ":9999",
}
// Error
err1 := c.Gather(&acc)
require.Error(t, err1)
assert.Equal(t, "Bad protocol", err1.Error())
}
func TestTCPError(t *testing.T) {
var acc testutil.Accumulator
// Init plugin
c := NetResponse{
Protocol: "tcp",
Address: ":9999",
}
// Error
err1 := c.Gather(&acc)
require.Error(t, err1)
assert.Equal(t, "dial tcp 127.0.0.1:9999: getsockopt: connection refused", err1.Error())
}
func TestTCPOK1(t *testing.T) {
var wg sync.WaitGroup
var acc testutil.Accumulator
// Init plugin
c := NetResponse{
Address: "127.0.0.1:2004",
Send: "test",
Expect: "test",
ReadTimeout: 3.0,
Timeout: 1.0,
Protocol: "tcp",
}
// Start TCP server
wg.Add(1)
go TCPServer(t, &wg)
wg.Wait()
// Connect
wg.Add(1)
err1 := c.Gather(&acc)
wg.Wait()
// Override response time
for _, p := range acc.Metrics {
p.Fields["response_time"] = 1.0
}
require.NoError(t, err1)
acc.AssertContainsTaggedFields(t,
"net_response",
map[string]interface{}{
"string_found": true,
"response_time": 1.0,
},
map[string]string{"host": "127.0.0.1",
"port": "2004",
"protocol": "tcp",
},
)
// Wait for the TCP server
wg.Wait()
}
func TestTCPOK2(t *testing.T) {
var wg sync.WaitGroup
var acc testutil.Accumulator
// Init plugin
c := NetResponse{
Address: "127.0.0.1:2004",
Send: "test",
Expect: "test2",
ReadTimeout: 3.0,
Timeout: 1.0,
Protocol: "tcp",
}
// Start TCP server
wg.Add(1)
go TCPServer(t, &wg)
wg.Wait()
// Connect
wg.Add(1)
err1 := c.Gather(&acc)
wg.Wait()
// Override response time
for _, p := range acc.Metrics {
p.Fields["response_time"] = 1.0
}
require.NoError(t, err1)
acc.AssertContainsTaggedFields(t,
"net_response",
map[string]interface{}{
"string_found": false,
"response_time": 1.0,
},
map[string]string{"host": "127.0.0.1",
"port": "2004",
"protocol": "tcp",
},
)
// Wait for the TCP server
wg.Wait()
}
func TestUDPError(t *testing.T) {
var acc testutil.Accumulator
// Init plugin
c := NetResponse{
Address: ":9999",
Send: "test",
Expect: "test",
Protocol: "udp",
}
// Error
err1 := c.Gather(&acc)
require.Error(t, err1)
assert.Regexp(t, regexp.MustCompile(`read udp 127.0.0.1:[0-9]*->127.0.0.1:9999: recvfrom: connection refused`), err1.Error())
}
func TestUDPOK1(t *testing.T) {
var wg sync.WaitGroup
var acc testutil.Accumulator
// Init plugin
c := NetResponse{
Address: "127.0.0.1:2004",
Send: "test",
Expect: "test",
ReadTimeout: 3.0,
Timeout: 1.0,
Protocol: "udp",
}
// Start UDP server
wg.Add(1)
go UDPServer(t, &wg)
wg.Wait()
// Connect
wg.Add(1)
err1 := c.Gather(&acc)
wg.Wait()
// Override response time
for _, p := range acc.Metrics {
p.Fields["response_time"] = 1.0
}
require.NoError(t, err1)
acc.AssertContainsTaggedFields(t,
"net_response",
map[string]interface{}{
"string_found": true,
"response_time": 1.0,
},
map[string]string{"host": "127.0.0.1",
"port": "2004",
"protocol": "udp",
},
)
// Wait for the UDP server
wg.Wait()
}
func UDPServer(t *testing.T, wg *sync.WaitGroup) {
udpAddr, _ := net.ResolveUDPAddr("udp", "127.0.0.1:2004")
conn, _ := net.ListenUDP("udp", udpAddr)
wg.Done()
buf := make([]byte, 1024)
_, remoteaddr, _ := conn.ReadFromUDP(buf)
conn.WriteToUDP(buf, remoteaddr)
conn.Close()
wg.Done()
}
func TCPServer(t *testing.T, wg *sync.WaitGroup) {
tcpAddr, _ := net.ResolveTCPAddr("tcp", "127.0.0.1:2004")
tcpServer, _ := net.ListenTCP("tcp", tcpAddr)
wg.Done()
conn, _ := tcpServer.AcceptTCP()
buf := make([]byte, 1024)
conn.Read(buf)
conn.Write(buf)
conn.CloseWrite()
tcpServer.Close()
wg.Done()
}

View File

@@ -11,6 +11,7 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -19,7 +20,7 @@ type Nginx struct {
}
var sampleConfig = `
# An array of Nginx stub_status URI to gather stats.
## An array of Nginx stub_status URIs to gather stats from.
urls = ["http://localhost/status"]
`
@@ -31,7 +32,7 @@ func (n *Nginx) Description() string {
return "Read Nginx's basic status information (ngx_http_stub_status_module)"
}
func (n *Nginx) Gather(acc inputs.Accumulator) error {
func (n *Nginx) Gather(acc telegraf.Accumulator) error {
var wg sync.WaitGroup
var outerr error
@@ -59,7 +60,7 @@ var tr = &http.Transport{
var client = &http.Client{Transport: tr}
func (n *Nginx) gatherUrl(addr *url.URL, acc inputs.Accumulator) error {
func (n *Nginx) gatherUrl(addr *url.URL, acc telegraf.Accumulator) error {
resp, err := client.Get(addr.String())
if err != nil {
return fmt.Errorf("error making HTTP request to %s: %s", addr.String(), err)
@@ -159,7 +160,7 @@ func getTags(addr *url.URL) map[string]string {
}
func init() {
inputs.Add("nginx", func() inputs.Input {
inputs.Add("nginx", func() telegraf.Input {
return &Nginx{}
})
}

View File

@@ -31,6 +31,7 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -40,7 +41,7 @@ type NSQ struct {
}
var sampleConfig = `
# An array of NSQD HTTP API endpoints
## An array of NSQD HTTP API endpoints
endpoints = ["http://localhost:4151"]
`
@@ -49,7 +50,7 @@ const (
)
func init() {
inputs.Add("nsq", func() inputs.Input {
inputs.Add("nsq", func() telegraf.Input {
return &NSQ{}
})
}
@@ -62,7 +63,7 @@ func (n *NSQ) Description() string {
return "Read NSQ topic and channel statistics."
}
func (n *NSQ) Gather(acc inputs.Accumulator) error {
func (n *NSQ) Gather(acc telegraf.Accumulator) error {
var wg sync.WaitGroup
var outerr error
@@ -85,7 +86,7 @@ var tr = &http.Transport{
var client = &http.Client{Transport: tr}
func (n *NSQ) gatherEndpoint(e string, acc inputs.Accumulator) error {
func (n *NSQ) gatherEndpoint(e string, acc telegraf.Accumulator) error {
u, err := buildURL(e)
if err != nil {
return err
@@ -136,7 +137,7 @@ func buildURL(e string) (*url.URL, error) {
return addr, nil
}
func topicStats(t TopicStats, acc inputs.Accumulator, host, version string) {
func topicStats(t TopicStats, acc telegraf.Accumulator, host, version string) {
// per topic overall (tag: name, paused, channel count)
tags := map[string]string{
"server_host": host,
@@ -157,7 +158,7 @@ func topicStats(t TopicStats, acc inputs.Accumulator, host, version string) {
}
}
func channelStats(c ChannelStats, acc inputs.Accumulator, host, version, topic string) {
func channelStats(c ChannelStats, acc telegraf.Accumulator, host, version, topic string) {
tags := map[string]string{
"server_host": host,
"server_version": version,
@@ -182,7 +183,7 @@ func channelStats(c ChannelStats, acc inputs.Accumulator, host, version, topic s
}
}
func clientStats(c ClientStats, acc inputs.Accumulator, host, version, topic, channel string) {
func clientStats(c ClientStats, acc telegraf.Accumulator, host, version, topic, channel string) {
tags := map[string]string{
"server_host": host,
"server_version": version,

View File

@@ -8,6 +8,7 @@ import (
"strconv"
"strings"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"golang.org/x/net/html/charset"
)
@@ -125,16 +126,15 @@ func (p *process) getUptime() int64 {
}
var sampleConfig = `
# Path of passenger-status.
#
# Plugin gather metric via parsing XML output of passenger-status
# More information about the tool:
# https://www.phusionpassenger.com/library/admin/apache/overall_status_report.html
#
#
# If no path is specified, then the plugin simply execute passenger-status
# hopefully it can be found in your PATH
command = "passenger-status -v --show=xml"
## Path of passenger-status.
##
## The plugin gathers metrics by parsing the XML output of passenger-status.
## More information about the tool:
## https://www.phusionpassenger.com/library/admin/apache/overall_status_report.html
##
## If no path is specified, the plugin simply executes passenger-status,
## which hopefully can be found in your PATH.
command = "passenger-status -v --show=xml"
`
func (r *passenger) SampleConfig() string {
@@ -145,7 +145,7 @@ func (r *passenger) Description() string {
return "Read metrics of passenger using passenger-status"
}
func (g *passenger) Gather(acc inputs.Accumulator) error {
func (g *passenger) Gather(acc telegraf.Accumulator) error {
if g.Command == "" {
g.Command = "passenger-status -v --show=xml"
}
@@ -164,7 +164,7 @@ func (g *passenger) Gather(acc inputs.Accumulator) error {
return nil
}
func importMetric(stat []byte, acc inputs.Accumulator) error {
func importMetric(stat []byte, acc telegraf.Accumulator) error {
var p info
decoder := xml.NewDecoder(bytes.NewReader(stat))
@@ -244,7 +244,7 @@ func importMetric(stat []byte, acc inputs.Accumulator) error {
}
func init() {
inputs.Add("passenger", func() inputs.Input {
inputs.Add("passenger", func() telegraf.Input {
return &passenger{}
})
}

View File

@@ -12,6 +12,7 @@ import (
"strings"
"sync"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -40,26 +41,25 @@ type phpfpm struct {
}
var sampleConfig = `
# An array of addresses to gather stats about. Specify an ip or hostname
# with optional port and path
#
# Plugin can be configured in three modes (either can be used):
# - http: the URL must start with http:// or https://, ie:
# "http://localhost/status"
# "http://192.168.130.1/status?full"
#
# - unixsocket: path to fpm socket, ie:
# "/var/run/php5-fpm.sock"
# or using a custom fpm status path:
# "/var/run/php5-fpm.sock:fpm-custom-status-path"
#
# - fcgi: the URL must start with fcgi:// or cgi://, and port must be present, ie:
# "fcgi://10.0.0.12:9000/status"
# "cgi://10.0.10.12:9001/status"
#
# Example of multiple gathering from local socket and remove host
# urls = ["http://192.168.1.20/status", "/tmp/fpm.sock"]
# If no servers are specified, then default to http://127.0.0.1/status
## An array of addresses to gather stats about. Specify an ip or hostname
## with optional port and path
##
## Plugin can be configured in three modes (either can be used):
## - http: the URL must start with http:// or https://, ie:
## "http://localhost/status"
## "http://192.168.130.1/status?full"
##
## - unixsocket: path to fpm socket, ie:
## "/var/run/php5-fpm.sock"
## or using a custom fpm status path:
## "/var/run/php5-fpm.sock:fpm-custom-status-path"
##
## - fcgi: the URL must start with fcgi:// or cgi://, and port must be present, ie:
## "fcgi://10.0.0.12:9000/status"
## "cgi://10.0.10.12:9001/status"
##
## Example of gathering from both a local socket and a remote host:
## urls = ["http://192.168.1.20/status", "/tmp/fpm.sock"]
urls = ["http://localhost/status"]
`
@@ -73,7 +73,7 @@ func (r *phpfpm) Description() string {
// Reads stats from all configured servers accumulates stats.
// Returns one of the errors encountered while gather stats (if any).
func (g *phpfpm) Gather(acc inputs.Accumulator) error {
func (g *phpfpm) Gather(acc telegraf.Accumulator) error {
if len(g.Urls) == 0 {
return g.gatherServer("http://127.0.0.1/status", acc)
}
@@ -96,7 +96,7 @@ func (g *phpfpm) Gather(acc inputs.Accumulator) error {
}
// Request status page to get stat raw data and import it
func (g *phpfpm) gatherServer(addr string, acc inputs.Accumulator) error {
func (g *phpfpm) gatherServer(addr string, acc telegraf.Accumulator) error {
if g.client == nil {
client := &http.Client{}
g.client = client
@@ -140,7 +140,7 @@ func (g *phpfpm) gatherServer(addr string, acc inputs.Accumulator) error {
}
// Gather stat using fcgi protocol
func (g *phpfpm) gatherFcgi(fcgi *conn, statusPath string, acc inputs.Accumulator) error {
func (g *phpfpm) gatherFcgi(fcgi *conn, statusPath string, acc telegraf.Accumulator) error {
fpmOutput, fpmErr, err := fcgi.Request(map[string]string{
"SCRIPT_NAME": "/" + statusPath,
"SCRIPT_FILENAME": statusPath,
@@ -160,7 +160,7 @@ func (g *phpfpm) gatherFcgi(fcgi *conn, statusPath string, acc inputs.Accumulato
}
// Gather stat using http protocol
func (g *phpfpm) gatherHttp(addr string, acc inputs.Accumulator) error {
func (g *phpfpm) gatherHttp(addr string, acc telegraf.Accumulator) error {
u, err := url.Parse(addr)
if err != nil {
return fmt.Errorf("Unable parse server address '%s': %s", addr, err)
@@ -184,7 +184,7 @@ func (g *phpfpm) gatherHttp(addr string, acc inputs.Accumulator) error {
}
// Import stat data into Telegraf system
func importMetric(r io.Reader, acc inputs.Accumulator) (poolStat, error) {
func importMetric(r io.Reader, acc telegraf.Accumulator) (poolStat, error) {
stats := make(poolStat)
var currentPool string
@@ -239,7 +239,7 @@ func importMetric(r io.Reader, acc inputs.Accumulator) (poolStat, error) {
}
func init() {
inputs.Add("phpfpm", func() inputs.Input {
inputs.Add("phpfpm", func() telegraf.Input {
return &phpfpm{}
})
}

View File

@@ -1,12 +1,16 @@
// +build !windows
package ping
import (
"errors"
"os/exec"
"runtime"
"strconv"
"strings"
"sync"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -40,15 +44,18 @@ func (_ *Ping) Description() string {
}
var sampleConfig = `
# urls to ping
## NOTE: this plugin forks the ping command. You may need to set capabilities
## via setcap cap_net_raw+p /bin/ping
## urls to ping
urls = ["www.google.com"] # required
# number of pings to send (ping -c <COUNT>)
## number of pings to send (ping -c <COUNT>)
count = 1 # required
# interval, in s, at which to ping. 0 == default (ping -i <PING_INTERVAL>)
## interval, in s, at which to ping. 0 == default (ping -i <PING_INTERVAL>)
ping_interval = 0.0
# ping timeout, in s. 0 == no timeout (ping -t <TIMEOUT>)
## ping timeout, in s. 0 == no timeout (ping -t <TIMEOUT>)
timeout = 0.0
# interface to send ping from (ping -I <INTERFACE>)
## interface to send ping from (ping -I <INTERFACE>)
interface = ""
`
@@ -56,7 +63,7 @@ func (_ *Ping) SampleConfig() string {
return sampleConfig
}
func (p *Ping) Gather(acc inputs.Accumulator) error {
func (p *Ping) Gather(acc telegraf.Accumulator) error {
var wg sync.WaitGroup
errorChannel := make(chan error, len(p.Urls)*2)
@@ -64,7 +71,7 @@ func (p *Ping) Gather(acc inputs.Accumulator) error {
// Spin off a go routine for each url to ping
for _, url := range p.Urls {
wg.Add(1)
go func(url string, acc inputs.Accumulator) {
go func(url string, acc telegraf.Accumulator) {
defer wg.Done()
args := p.args(url)
out, err := p.pingHost(args...)
@@ -110,7 +117,11 @@ func (p *Ping) Gather(acc inputs.Accumulator) error {
}
func hostPinger(args ...string) (string, error) {
c := exec.Command("ping", args...)
bin, err := exec.LookPath("ping")
if err != nil {
return "", err
}
c := exec.Command(bin, args...)
out, err := c.CombinedOutput()
return string(out), err
}
@@ -118,12 +129,20 @@ func hostPinger(args ...string) (string, error) {
// args returns the arguments for the 'ping' executable
func (p *Ping) args(url string) []string {
// Build the ping command args based on toml config
args := []string{"-c", strconv.Itoa(p.Count)}
args := []string{"-c", strconv.Itoa(p.Count), "-n", "-s", "16"}
if p.PingInterval > 0 {
args = append(args, "-i", strconv.FormatFloat(p.PingInterval, 'f', 1, 64))
}
if p.Timeout > 0 {
args = append(args, "-t", strconv.FormatFloat(p.Timeout, 'f', 1, 64))
switch runtime.GOOS {
case "darwin", "freebsd":
args = append(args, "-t", strconv.FormatFloat(p.Timeout, 'f', 1, 64))
case "linux":
args = append(args, "-W", strconv.FormatFloat(p.Timeout, 'f', 1, 64))
default:
// No platform-specific handling; default to the GNU ping flag.
args = append(args, "-W", strconv.FormatFloat(p.Timeout, 'f', 1, 64))
}
}
if p.Interface != "" {
args = append(args, "-I", p.Interface)
@@ -176,7 +195,7 @@ func processPingOutput(out string) (int, int, float64, error) {
}
func init() {
inputs.Add("ping", func() inputs.Input {
inputs.Add("ping", func() telegraf.Input {
return &Ping{pingHost: hostPinger}
})
}

View File

@@ -1,8 +1,11 @@
// +build !windows
package ping
import (
"errors"
"reflect"
"runtime"
"sort"
"testing"
@@ -74,7 +77,7 @@ func TestArgs(t *testing.T) {
// Actual and Expected arg lists must be sorted for reflect.DeepEqual
actual := p.args("www.google.com")
expected := []string{"-c", "2", "www.google.com"}
expected := []string{"-c", "2", "-n", "-s", "16", "www.google.com"}
sort.Strings(actual)
sort.Strings(expected)
assert.True(t, reflect.DeepEqual(expected, actual),
@@ -82,7 +85,8 @@ func TestArgs(t *testing.T) {
p.Interface = "eth0"
actual = p.args("www.google.com")
expected = []string{"-c", "2", "-I", "eth0", "www.google.com"}
expected = []string{"-c", "2", "-n", "-s", "16", "-I", "eth0",
"www.google.com"}
sort.Strings(actual)
sort.Strings(expected)
assert.True(t, reflect.DeepEqual(expected, actual),
@@ -90,7 +94,15 @@ func TestArgs(t *testing.T) {
p.Timeout = 12.0
actual = p.args("www.google.com")
expected = []string{"-c", "2", "-I", "eth0", "-t", "12.0", "www.google.com"}
switch runtime.GOOS {
case "darwin", "freebsd":
expected = []string{"-c", "2", "-n", "-s", "16", "-I", "eth0", "-t",
"12.0", "www.google.com"}
default:
expected = []string{"-c", "2", "-n", "-s", "16", "-I", "eth0", "-W",
"12.0", "www.google.com"}
}
sort.Strings(actual)
sort.Strings(expected)
assert.True(t, reflect.DeepEqual(expected, actual),
@@ -98,8 +110,14 @@ func TestArgs(t *testing.T) {
p.PingInterval = 1.2
actual = p.args("www.google.com")
expected = []string{"-c", "2", "-I", "eth0", "-t", "12.0", "-i", "1.2",
"www.google.com"}
switch runtime.GOOS {
case "darwin", "freebsd":
expected = []string{"-c", "2", "-n", "-s", "16", "-I", "eth0", "-t",
"12.0", "-i", "1.2", "www.google.com"}
default:
expected = []string{"-c", "2", "-n", "-s", "16", "-I", "eth0", "-W",
"12.0", "-i", "1.2", "www.google.com"}
}
sort.Strings(actual)
sort.Strings(expected)
assert.True(t, reflect.DeepEqual(expected, actual),

View File

@@ -0,0 +1,3 @@
// +build windows
package ping

View File

@@ -1,6 +1,6 @@
# PostgreSQL plugin
This postgresql plugin provides metrics for your postgres database. It currently works with postgres versions 8.1+. It uses data from the built in _pg_stat_database_ view. The metrics recorded depend on your version of postgres. See table:
This postgresql plugin provides metrics for your postgres database. It currently works with postgres versions 8.1+. It uses data from the built-in _pg_stat_database_ and _pg_stat_bgwriter_ views. The metrics recorded depend on your version of postgres. See table:
```
pg version 9.2+ 9.1 8.3-9.0 8.1-8.2 7.4-8.0(unsupported)
--- --- --- ------- ------- -------
@@ -27,4 +27,5 @@ stats_reset* x x
_* value ignored and therefore not recorded._
More information about the meaning of these metrics can be found in the [PostgreSQL Documentation](http://www.postgresql.org/docs/9.2/static/monitoring-stats.html#PG-STAT-DATABASE-VIEW)
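For reference, the views themselves can be inspected directly. Below is a minimal, hypothetical Go sketch (connection string is illustrative, error handling simplified) that queries the same two views the plugin reads:
```
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // same driver the plugin uses
)

func main() {
	// Illustrative connection string, matching the plugin's sample config.
	db, err := sql.Open("postgres", "host=localhost user=postgres sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// pg_stat_database yields one row per database; pg_stat_bgwriter
	// yields a single server-wide row.
	for _, query := range []string{
		"SELECT * FROM pg_stat_database",
		"SELECT * FROM pg_stat_bgwriter",
	} {
		rows, err := db.Query(query)
		if err != nil {
			log.Fatal(err)
		}
		cols, _ := rows.Columns()
		fmt.Println(query, "->", cols)
		rows.Close()
	}
}
```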

View File

@@ -4,8 +4,10 @@ import (
"bytes"
"database/sql"
"fmt"
"sort"
"strings"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
_ "github.com/lib/pq"
@@ -15,27 +17,28 @@ type Postgresql struct {
Address string
Databases []string
OrderedColumns []string
AllColumns []string
}
var ignoredColumns = map[string]bool{"datid": true, "datname": true, "stats_reset": true}
var sampleConfig = `
# specify address via a url matching:
# postgres://[pqgotest[:password]]@localhost[/dbname]?sslmode=[disable|verify-ca|verify-full]
# or a simple string:
# host=localhost user=pqotest password=... sslmode=... dbname=app_production
#
# All connection parameters are optional.
#
# Without the dbname parameter, the driver will default to a database
# with the same name as the user. This dbname is just for instantiating a
# connection with the server and doesn't restrict the databases we are trying
# to grab metrics for.
#
## specify address via a url matching:
## postgres://[pqgotest[:password]]@localhost[/dbname]?sslmode=[disable|verify-ca|verify-full]
## or a simple string:
## host=localhost user=pqotest password=... sslmode=... dbname=app_production
##
## All connection parameters are optional.
##
## Without the dbname parameter, the driver will default to a database
## with the same name as the user. This dbname is just for instantiating a
## connection with the server and doesn't restrict the databases we are trying
## to grab metrics for.
##
address = "host=localhost user=postgres sslmode=disable"
# A list of databases to pull metrics about. If not specified, metrics for all
# databases are gathered.
## A list of databases to pull metrics about. If not specified, metrics for all
## databases are gathered.
# databases = ["app_production", "testing"]
`
@@ -53,7 +56,7 @@ func (p *Postgresql) IgnoredColumns() map[string]bool {
var localhost = "host=localhost sslmode=disable"
func (p *Postgresql) Gather(acc inputs.Accumulator) error {
func (p *Postgresql) Gather(acc telegraf.Accumulator) error {
var query string
if p.Address == "" || p.Address == "localhost" {
@@ -85,6 +88,9 @@ func (p *Postgresql) Gather(acc inputs.Accumulator) error {
p.OrderedColumns, err = rows.Columns()
if err != nil {
return err
} else {
p.AllColumns = make([]string, len(p.OrderedColumns))
copy(p.AllColumns, p.OrderedColumns)
}
for rows.Next() {
@@ -93,15 +99,41 @@ func (p *Postgresql) Gather(acc inputs.Accumulator) error {
return err
}
}
//return rows.Err()
query = `SELECT * FROM pg_stat_bgwriter`
return rows.Err()
bg_writer_row, err := db.Query(query)
if err != nil {
return err
}
defer bg_writer_row.Close()
// grab the column information from the result
p.OrderedColumns, err = bg_writer_row.Columns()
if err != nil {
return err
} else {
for _, v := range p.OrderedColumns {
p.AllColumns = append(p.AllColumns, v)
}
}
for bg_writer_row.Next() {
err = p.accRow(bg_writer_row, acc)
if err != nil {
return err
}
}
sort.Strings(p.AllColumns)
return bg_writer_row.Err()
}
type scanner interface {
Scan(dest ...interface{}) error
}
func (p *Postgresql) accRow(row scanner, acc inputs.Accumulator) error {
func (p *Postgresql) accRow(row scanner, acc telegraf.Accumulator) error {
var columnVars []interface{}
var dbname bytes.Buffer
@@ -123,11 +155,14 @@ func (p *Postgresql) accRow(row scanner, acc inputs.Accumulator) error {
if err != nil {
return err
}
// extract the database name from the column map
dbnameChars := (*columnMap["datname"]).([]uint8)
for i := 0; i < len(dbnameChars); i++ {
dbname.WriteString(string(dbnameChars[i]))
if columnMap["datname"] != nil {
// extract the database name from the column map
dbnameChars := (*columnMap["datname"]).([]uint8)
for i := 0; i < len(dbnameChars); i++ {
dbname.WriteString(string(dbnameChars[i]))
}
} else {
dbname.WriteString("postgres")
}
tags := map[string]string{"server": p.Address, "db": dbname.String()}
@@ -145,7 +180,7 @@ func (p *Postgresql) accRow(row scanner, acc inputs.Accumulator) error {
}
func init() {
inputs.Add("postgresql", func() inputs.Input {
inputs.Add("postgresql", func() telegraf.Input {
return &Postgresql{}
})
}

View File

@@ -21,15 +21,13 @@ func TestPostgresqlGeneratesMetrics(t *testing.T) {
}
var acc testutil.Accumulator
err := p.Gather(&acc)
require.NoError(t, err)
availableColumns := make(map[string]bool)
for _, col := range p.OrderedColumns {
for _, col := range p.AllColumns {
availableColumns[col] = true
}
intMetrics := []string{
"xact_commit",
"xact_rollback",
@@ -45,6 +43,14 @@ func TestPostgresqlGeneratesMetrics(t *testing.T) {
"temp_bytes",
"deadlocks",
"numbackends",
"buffers_alloc",
"buffers_backend",
"buffers_backend_fsync",
"buffers_checkpoint",
"buffers_clean",
"checkpoints_req",
"checkpoints_timed",
"maxwritten_clean",
}
floatMetrics := []string{
@@ -71,7 +77,7 @@ func TestPostgresqlGeneratesMetrics(t *testing.T) {
}
assert.True(t, metricsCounted > 0)
assert.Equal(t, len(availableColumns)-len(p.IgnoredColumns()), metricsCounted)
//assert.Equal(t, len(availableColumns)-len(p.IgnoredColumns()), metricsCounted)
}
func TestPostgresqlTagsMetricsWithDatabaseName(t *testing.T) {
@@ -113,7 +119,7 @@ func TestPostgresqlDefaultsToAllDatabases(t *testing.T) {
var found bool
for _, pnt := range acc.Points {
for _, pnt := range acc.Metrics {
if pnt.Measurement == "postgresql" {
if pnt.Tags["db"] == "postgres" {
found = true

View File

@@ -0,0 +1,68 @@
# PowerDNS Input Plugin
The powerdns plugin gathers metrics from a PowerDNS server over its unix control socket.
### Configuration:
```
# Description
[[inputs.powerdns]]
# An array of sockets to gather stats about.
# Specify a path to unix socket.
#
# If no servers are specified, then '/var/run/pdns.controlsocket' is used as the path.
unix_sockets = ["/var/run/pdns.controlsocket"]
```
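Under the hood the plugin writes the command `show * \n` to the control socket and parses the comma-separated `key=value` reply. Below is a minimal, hypothetical Go sketch of that exchange (socket path and timeout are illustrative, error handling simplified):
```
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"net"
	"time"
)

func main() {
	// Default control socket path; adjust for your PowerDNS setup.
	conn, err := net.DialTimeout("unix", "/var/run/pdns.controlsocket", 5*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	conn.SetDeadline(time.Now().Add(5 * time.Second))

	// Same command the plugin sends; the reply looks like
	// "corrupt-packets=0,latency=26,...".
	fmt.Fprint(conn, "show * \n")
	reply, err := ioutil.ReadAll(conn)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(reply))
}
```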
### Measurements & Fields:
- powerdns
- corrupt-packets
- deferred-cache-inserts
- deferred-cache-lookup
- dnsupdate-answers
- dnsupdate-changes
- dnsupdate-queries
- dnsupdate-refused
- packetcache-hit
- packetcache-miss
- packetcache-size
- query-cache-hit
- query-cache-miss
- rd-queries
- recursing-answers
- recursing-questions
- recursion-unanswered
- security-status
- servfail-packets
- signatures
- tcp-answers
- tcp-queries
- timedout-packets
- udp-answers
- udp-answers-bytes
- udp-do-queries
- udp-queries
- udp4-answers
- udp4-queries
- udp6-answers
- udp6-queries
- key-cache-size
- latency
- meta-cache-size
- qsize-q
- signature-cache-size
- sys-msec
- uptime
- user-msec
### Tags:
- tags: `server=socket`
### Example Output:
```
$ ./telegraf -config telegraf.conf -input-filter powerdns -test
> powerdns,server=/var/run/pdns.controlsocket corrupt-packets=0i,deferred-cache-inserts=0i,deferred-cache-lookup=0i,dnsupdate-answers=0i,dnsupdate-changes=0i,dnsupdate-queries=0i,dnsupdate-refused=0i,key-cache-size=0i,latency=26i,meta-cache-size=0i,packetcache-hit=0i,packetcache-miss=1i,packetcache-size=0i,qsize-q=0i,query-cache-hit=0i,query-cache-miss=6i,rd-queries=1i,recursing-answers=0i,recursing-questions=0i,recursion-unanswered=0i,security-status=3i,servfail-packets=0i,signature-cache-size=0i,signatures=0i,sys-msec=4349i,tcp-answers=0i,tcp-queries=0i,timedout-packets=0i,udp-answers=1i,udp-answers-bytes=50i,udp-do-queries=0i,udp-queries=0i,udp4-answers=1i,udp4-queries=1i,udp6-answers=0i,udp6-queries=0i,uptime=166738i,user-msec=3036i 1454078624932715706
```

View File

@@ -0,0 +1,124 @@
package powerdns
import (
"bufio"
"fmt"
"io"
"net"
"strconv"
"strings"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
type Powerdns struct {
UnixSockets []string
}
var sampleConfig = `
## An array of sockets to gather stats about.
## Specify a path to unix socket.
unix_sockets = ["/var/run/pdns.controlsocket"]
`
var defaultTimeout = 5 * time.Second
func (p *Powerdns) SampleConfig() string {
return sampleConfig
}
func (p *Powerdns) Description() string {
return "Read metrics from one or many PowerDNS servers"
}
func (p *Powerdns) Gather(acc telegraf.Accumulator) error {
if len(p.UnixSockets) == 0 {
return p.gatherServer("/var/run/pdns.controlsocket", acc)
}
for _, serverSocket := range p.UnixSockets {
if err := p.gatherServer(serverSocket, acc); err != nil {
return err
}
}
return nil
}
func (p *Powerdns) gatherServer(address string, acc telegraf.Accumulator) error {
conn, err := net.DialTimeout("unix", address, defaultTimeout)
if err != nil {
return err
}
defer conn.Close()
conn.SetDeadline(time.Now().Add(defaultTimeout))
// Read and write buffer
rw := bufio.NewReadWriter(bufio.NewReader(conn), bufio.NewWriter(conn))
// Send command through the buffered writer so the Flush below applies
if _, err := fmt.Fprint(rw, "show * \n"); err != nil {
return err
}
if err := rw.Flush(); err != nil {
return err
}
// Read data
buf := make([]byte, 0, 4096)
tmp := make([]byte, 1024)
for {
n, err := rw.Read(tmp)
if err != nil {
if err != io.EOF {
return err
}
break
}
buf = append(buf, tmp[:n]...)
}
metrics := string(buf)
// Process data
fields, err := parseResponse(metrics)
if err != nil {
return err
}
// Add server socket as a tag
tags := map[string]string{"server": address}
acc.AddFields("powerdns", fields, tags)
return nil
}
func parseResponse(metrics string) (map[string]interface{}, error) {
values := make(map[string]interface{})
s := strings.Split(metrics, ",")
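// The response ends with a trailing comma, so the final (empty) element
// is dropped before parsing.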
for _, metric := range s[:len(s)-1] {
m := strings.SplitN(metric, "=", 2)
if len(m) < 2 {
continue
}
i, err := strconv.ParseInt(m[1], 10, 64)
if err != nil {
return values, err
}
values[m[0]] = i
}
return values, nil
}
func init() {
inputs.Add("powerdns", func() telegraf.Input {
return &Powerdns{}
})
}

View File

@@ -0,0 +1,147 @@
package powerdns
import (
"crypto/rand"
"encoding/binary"
"fmt"
"net"
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
type statServer struct{}
var metrics = "corrupt-packets=0,deferred-cache-inserts=0,deferred-cache-lookup=0," +
"dnsupdate-answers=0,dnsupdate-changes=0,dnsupdate-queries=0," +
"dnsupdate-refused=0,packetcache-hit=0,packetcache-miss=1,packetcache-size=0," +
"query-cache-hit=0,query-cache-miss=6,rd-queries=1,recursing-answers=0," +
"recursing-questions=0,recursion-unanswered=0,security-status=3," +
"servfail-packets=0,signatures=0,tcp-answers=0,tcp-queries=0," +
"timedout-packets=0,udp-answers=1,udp-answers-bytes=50,udp-do-queries=0," +
"udp-queries=0,udp4-answers=1,udp4-queries=1,udp6-answers=0,udp6-queries=0," +
"key-cache-size=0,latency=26,meta-cache-size=0,qsize-q=0," +
"signature-cache-size=0,sys-msec=2889,uptime=86317,user-msec=2167,"
func (s statServer) serverSocket(l net.Listener) {
for {
conn, err := l.Accept()
if err != nil {
return
}
go func(c net.Conn) {
buf := make([]byte, 1024)
n, _ := c.Read(buf)
data := buf[:n]
if string(data) == "show * \n" {
c.Write([]byte(metrics))
c.Close()
}
}(conn)
}
}
func TestPowerdnsGeneratesMetrics(t *testing.T) {
// We create a fake server to return test data
var randomNumber int64
binary.Read(rand.Reader, binary.LittleEndian, &randomNumber)
socket, err := net.Listen("unix", fmt.Sprintf("/tmp/pdns%d.controlsocket", randomNumber))
if err != nil {
t.Fatal("Cannot initalize server on port ")
}
defer socket.Close()
s := statServer{}
go s.serverSocket(socket)
p := &Powerdns{
UnixSockets: []string{fmt.Sprintf("/tmp/pdns%d.controlsocket", randomNumber)},
}
var acc testutil.Accumulator
err = p.Gather(&acc)
require.NoError(t, err)
intMetrics := []string{"corrupt-packets", "deferred-cache-inserts",
"deferred-cache-lookup", "dnsupdate-answers", "dnsupdate-changes",
"dnsupdate-queries", "dnsupdate-refused", "packetcache-hit",
"packetcache-miss", "packetcache-size", "query-cache-hit", "query-cache-miss",
"rd-queries", "recursing-answers", "recursing-questions",
"recursion-unanswered", "security-status", "servfail-packets", "signatures",
"tcp-answers", "tcp-queries", "timedout-packets", "udp-answers",
"udp-answers-bytes", "udp-do-queries", "udp-queries", "udp4-answers",
"udp4-queries", "udp6-answers", "udp6-queries", "key-cache-size", "latency",
"meta-cache-size", "qsize-q", "signature-cache-size", "sys-msec", "uptime", "user-msec"}
for _, metric := range intMetrics {
assert.True(t, acc.HasIntField("powerdns", metric), metric)
}
}
func TestPowerdnsParseMetrics(t *testing.T) {
values, err := parseResponse(metrics)
require.NoError(t, err, "Error parsing memcached response")
tests := []struct {
key string
value int64
}{
{"corrupt-packets", 0},
{"deferred-cache-inserts", 0},
{"deferred-cache-lookup", 0},
{"dnsupdate-answers", 0},
{"dnsupdate-changes", 0},
{"dnsupdate-queries", 0},
{"dnsupdate-refused", 0},
{"packetcache-hit", 0},
{"packetcache-miss", 1},
{"packetcache-size", 0},
{"query-cache-hit", 0},
{"query-cache-miss", 6},
{"rd-queries", 1},
{"recursing-answers", 0},
{"recursing-questions", 0},
{"recursion-unanswered", 0},
{"security-status", 3},
{"servfail-packets", 0},
{"signatures", 0},
{"tcp-answers", 0},
{"tcp-queries", 0},
{"timedout-packets", 0},
{"udp-answers", 1},
{"udp-answers-bytes", 50},
{"udp-do-queries", 0},
{"udp-queries", 0},
{"udp4-answers", 1},
{"udp4-queries", 1},
{"udp6-answers", 0},
{"udp6-queries", 0},
{"key-cache-size", 0},
{"latency", 26},
{"meta-cache-size", 0},
{"qsize-q", 0},
{"signature-cache-size", 0},
{"sys-msec", 2889},
{"uptime", 86317},
{"user-msec", 2167},
}
for _, test := range tests {
value, ok := values[test.key]
if !ok {
t.Errorf("Did not find key for metric %s in values", test.key)
continue
}
if value != test.value {
t.Errorf("Metric: %s, Expected: %d, actual: %d",
test.key, test.value, value)
}
}
}

Some files were not shown because too many files have changed in this diff.