Compare commits


66 Commits

Author SHA1 Message Date
Cameron Sparr
f9aef06a3c CircleCI script, do not explicitly set version tag 2016-11-03 17:07:25 +00:00
Cameron Sparr
105bb65f73 Add release 1.2 section to changelog 2016-11-03 17:01:53 +00:00
Cameron Sparr
16081b2d1a Update etc/telegraf.conf 2016-11-03 14:31:55 +00:00
Matteo Cerutti
e43cfc2fce fix leap_status value in chrony input plugin (#1983) 2016-11-03 10:46:54 +00:00
Prunar
137272afea Update README.md (#1963)
Typo
2016-11-02 14:25:09 +00:00
Cameron Sparr
2150510bd4 nats_consumer: buffer incoming messages
fixes #1956
2016-10-27 13:39:27 +01:00
albundy83
fc59757a1a Just fix typo (#1962) 2016-10-27 11:45:17 +01:00
Cameron Sparr
0cfa0d419a udp_listener & tcp_listener set default values
closes #1936
2016-10-27 10:25:24 +01:00
Paulo Pires
522658bd07 Fix NATS plug-ins reconnection logic (#1955)
* The NATS output plug-in now retries to reconnect indefinitely after a lost connection.

* The NATS input plug-in now retries to reconnect indefinitely after a lost connection.

* Fixes #1953
2016-10-26 15:45:33 +01:00
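For reference, the retry-forever behaviour described above corresponds to the NATS Go client's connection options. A hedged sketch assuming the nats.go Option helpers; this is not the plug-ins' exact code:

```go
package main

import (
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	// MaxReconnects(-1) tells the client to keep retrying forever
	// instead of giving up after the default number of attempts.
	nc, err := nats.Connect(nats.DefaultURL,
		nats.MaxReconnects(-1),            // never stop retrying
		nats.ReconnectWait(2*time.Second), // pause between attempts
		nats.DisconnectHandler(func(_ *nats.Conn) {
			log.Println("nats: connection lost, retrying")
		}),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()
}
```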
Jonathan Chauncey
b1a97e35b9 fix(kubernetes): Only initialize RoundTripper once (#1951)
fixes #1933
2016-10-26 13:47:35 +01:00
Cameron Sparr
c66363cba5 Update Go version: 1.7.1->1.7.3 2016-10-25 14:49:21 +01:00
Cameron Sparr
61269c3500 Update config generation docs
closes #1925
2016-10-25 14:46:50 +01:00
Priyank Trivedi
393d129982 Fix typo from 'Proctstas' to 'Procstat' in procstat plugin's README (#1945) 2016-10-25 13:57:55 +01:00
Cameron Sparr
80d4864844 Only install fpm,rpm,boto if we need them 2016-10-25 13:31:48 +01:00
Cameron Sparr
f729fa990d Unit testing for internal.Duration Unmarshal
closes #1926
2016-10-25 13:11:32 +01:00
Alex Zorin
662db7a944 Fix panic in internal.Duration UnmarshalTOML 2016-10-25 18:30:01 +11:00
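A duration that arrives as something other than a well-formed quoted string can crash naive unmarshalling code. A minimal sketch of a defensive UnmarshalTOML, mirroring the shape of Telegraf's internal.Duration but not the exact upstream fix:

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
	"strings"
	"time"
)

// Duration mirrors the shape of Telegraf's internal.Duration type.
type Duration struct {
	Duration time.Duration
}

// UnmarshalTOML accepts both bare integers (taken as seconds) and quoted
// duration strings ("10s"), returning an error instead of panicking on
// input it cannot handle.
func (d *Duration) UnmarshalTOML(b []byte) error {
	// bare integer, e.g. `interval = 10`
	if i, err := strconv.ParseInt(string(b), 10, 64); err == nil {
		d.Duration = time.Duration(i) * time.Second
		return nil
	}
	// quoted duration string, e.g. `interval = "10s"`
	s := strings.Trim(string(b), `'"`)
	dur, err := time.ParseDuration(s)
	if err != nil {
		return errors.New("invalid duration: " + string(b))
	}
	d.Duration = dur
	return nil
}

func main() {
	var d Duration
	for _, in := range []string{`"10s"`, `10`, `"oops`} {
		err := d.UnmarshalTOML([]byte(in))
		fmt.Println(in, "->", d.Duration, err)
	}
}
```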
Cameron Sparr
c849b58de9 http_listener input unit tests 2016-10-24 18:17:49 +01:00
Cameron Sparr
097b1e09db http listener refactor
in this commit:

- chunks out the http request body to avoid making very large
  allocations.
- establishes a limit for the maximum http request body size that the
  listener will accept.
- utilizes a pool of byte buffers to reduce GC pressure.
2016-10-24 18:17:49 +01:00
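The three bullet points map onto a standard Go pattern: cap the body with http.MaxBytesReader, read it in fixed-size chunks, and recycle the chunk buffers through a sync.Pool. A sketch of that pattern with illustrative sizes (512 MB cap, 64 KB chunks); it is not the listener's actual code:

```go
package main

import (
	"io"
	"net/http"
	"sync"
)

const maxBodySize = 512 * 1024 * 1024 // illustrative cap on accepted bodies

// bufPool recycles fixed-size read buffers across requests so that a
// burst of large writes does not translate into a burst of allocations.
var bufPool = sync.Pool{
	New: func() interface{} { return make([]byte, 64*1024) },
}

func handleWrite(w http.ResponseWriter, r *http.Request) {
	// Cap the body instead of buffering arbitrarily large payloads whole.
	body := http.MaxBytesReader(w, r.Body, maxBodySize)
	defer body.Close()

	buf := bufPool.Get().([]byte)
	defer bufPool.Put(buf)

	for {
		n, err := body.Read(buf)
		if n > 0 {
			_ = buf[:n] // hand this chunk to the line-protocol parser
		}
		if err == io.EOF {
			break
		}
		if err != nil {
			http.Error(w, err.Error(), http.StatusRequestEntityTooLarge)
			return
		}
	}
	w.WriteHeader(http.StatusNoContent)
}

func main() {
	http.HandleFunc("/write", handleWrite)
	http.ListenAndServe(":8186", nil)
}
```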
John Hu
babd37bf35 Typo (#1924) 2016-10-21 14:11:03 +01:00
David Norton
91f48e7ad5 Merge pull request #1847 from jchauncey/kubernetes-plugin
feat(kubernetes): Add kubernetes input plugin
2016-10-17 15:58:47 -04:00
Jonathan Chauncey
a12bd878e0 feat(kubernetes): Add kubernetes input plugin
closes #1774
2016-10-17 15:40:55 -04:00
Cameron Sparr
a4e8f24b16 Set reasonable defaults in ping plugin
closes #1742
2016-10-17 15:21:09 +01:00
Cameron Sparr
a65447d22e Use mysql.ParseDSN func instead of url.Parse
The MySQL DB driver has its own DSN parsing function. Previously we
were using the url.Parse function, but this causes problems because a
valid MySQL DSN can be an invalid http URL, namely when using some
special characters in the password.

This change uses the MySQL DB driver's builtin ParseDSN function and
applies a timeout parameter natively via that.

Another benefit of this change is that we fail earlier if given an
invalid MySQL DSN.

closes #870
closes #1842
2016-10-12 17:10:28 +01:00
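In go-sql-driver/mysql terms the change boils down to: parse with mysql.ParseDSN, set Config.Timeout, and re-serialise with FormatDSN. A minimal sketch under those assumptions (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/go-sql-driver/mysql"
)

// dsnWithTimeout parses a MySQL DSN with the driver's own grammar and
// applies a dial timeout natively, failing early on an invalid DSN.
func dsnWithTimeout(dsn string, timeout time.Duration) (string, error) {
	// ParseDSN copes with passwords that would make url.Parse choke.
	cfg, err := mysql.ParseDSN(dsn)
	if err != nil {
		return "", err
	}
	cfg.Timeout = timeout
	return cfg.FormatDSN(), nil
}

func main() {
	// the '@' inside the password is fine for ParseDSN but would break
	// a URL parser
	out, err := dsnWithTimeout("user:p@ss!word@tcp(127.0.0.1:3306)/db", 5*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```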
Cameron Sparr
b00ad65b08 Log config file parsing errors properly
closes #1344
2016-10-12 16:50:22 +01:00
Cameron Sparr
a84ce5d5cb drop metrics outside of the aggregator's period 2016-10-12 14:56:03 +01:00
Cameron Sparr
8ca4a50c18 delete nil fields in the metric maker.
closes #1771
2016-10-12 14:50:19 +01:00
Cameron Sparr
03b2984ac2 Fixup some code based on feedback from @dgnorton 2016-10-12 14:50:19 +01:00
Cameron Sparr
9540a6532f Update influxdb dependency for new models.Tags 2016-10-12 14:50:19 +01:00
Cameron Sparr
cace663bbf Processor & Aggregator Contrib doc 2016-10-12 14:50:19 +01:00
Cameron Sparr
acfdd15aa9 Processor & Aggregator configuration doccing 2016-10-12 14:50:19 +01:00
Cameron Sparr
78f544c0aa Support --aggregator-filter & --processor-filter 2016-10-12 14:50:19 +01:00
Cameron Sparr
2175a72fcc Rebase fixup 2016-10-12 14:50:19 +01:00
Cameron Sparr
b03c1d9691 Support ordering of processor plugins 2016-10-12 14:50:19 +01:00
Cameron Sparr
fead80844e Refactor handling of MinMax functionality into RunningAggregator
This allows for easier addition of a sliding window at a later time.

Also makes `period` be a generic argument for all aggregator plugins.
2016-10-12 14:50:19 +01:00
Cameron Sparr
ef885eda62 Change minmax aggregator to store float64 2016-10-12 14:50:19 +01:00
Cameron Sparr
64a71263a1 Support Processor & Aggregator Plugins
closes #1726
2016-10-12 14:50:19 +01:00
Cameron Sparr
974221f0cf Fix phpfpm fcgi client panic when URL doesn't exist
closes #1886
2016-10-12 11:58:38 +01:00
Ririsoft
bccef2856d Revert "Moving cgroup path name to field from tag to reduce cardinality (#1457)"
This introduced a regression with the influxdb output, leading to
collisions and missing points.
This reverts commit 53f40063b3.

closes #1724
closes #1796
2016-10-12 11:04:28 +01:00
Patrick Hemmer
80df3f7634 snmp: fix initialization of table fields in manual tables (#1836) 2016-10-12 11:00:39 +01:00
Cameron Sparr
e96f7a9b12 graphite parser: handle multiple templates with an empty filter
Previously, the graphite parser would simply overwrite any template that
had an identical filter to a previous template. This included the empty
filter.

Now we will still overwrite, but first we will sort to make sure that
the most "specific" template always matches.

closes #1731
2016-10-11 15:22:51 +01:00
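One way to picture the "most specific first" ordering: sort templates by how many dot-separated parts their filter has, with the empty catch-all filter always last. A sketch of the idea; the template struct and helper are illustrative, not the parser's actual types:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// template pairs a graphite filter with its field layout.
type template struct {
	filter string // e.g. "servers.localhost.*"; "" matches everything
	layout string // e.g. "host.measurement.field"
}

// moreSpecific reports whether filter a should be tried before filter b:
// filters with more dot-separated parts come first, and the empty
// catch-all filter always sorts last.
func moreSpecific(a, b string) bool {
	if a == "" {
		return false
	}
	if b == "" {
		return true
	}
	return len(strings.Split(a, ".")) > len(strings.Split(b, "."))
}

type bySpecificity []template

func (s bySpecificity) Len() int           { return len(s) }
func (s bySpecificity) Swap(i, j int)      { s[i], s[j] = s[j], s[i] }
func (s bySpecificity) Less(i, j int) bool { return moreSpecific(s[i].filter, s[j].filter) }

func main() {
	templates := []template{
		{"", "measurement"},
		{"servers.*", "host.measurement.field"},
		{"servers.localhost.*", "host.measurement.field.field"},
	}
	sort.Stable(bySpecificity(templates))
	for _, t := range templates {
		fmt.Printf("%q -> %s\n", t.filter, t.layout)
	}
}
```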
Cameron Sparr
2bbb6aa6f2 Add doc for SNMP debug tips (#1831) 2016-10-11 14:48:08 +01:00
Cameron Sparr
1ff721ad84 Add riemann output plugin deprecation message 2016-10-11 12:28:20 +01:00
Eric
3e3b094270 Only log warning on type when in debug mode.
closes #1793
2016-10-11 11:35:43 +01:00
Eric
1f7a8fceef Fixed JSON serialization to make sure only value types supported by OpenTSDB are sent, and made sure numbers are sent un-quoted even though the OpenTSDB API accepts them quoted, as that is not clean JSON. 2016-10-11 11:32:24 +01:00
Marko Crnic
b702a9758b haproxy/README: make quotes consistent
closes #1700
2016-10-11 11:30:22 +01:00
Marko Crnic
3b607aa8ae haproxy: add README covering basics of the plugin 2016-10-11 11:29:04 +01:00
Marko Crnic
4a4a6892f9 haproxy: update HAproxy docs URL 2016-10-11 11:29:04 +01:00
Marko Crnic
56b627dfe2 haproxy_test: extend tests to cover name globbing 2016-10-11 11:29:04 +01:00
Marko Crnic
5c87b92976 haproxy_test: define expected results in one place
The map holding expected results was defined in multiple places, making test
cases a bit hard to read. This way we can change our expectations of
good results in one place and have them affect multiple test cases.
2016-10-11 11:29:04 +01:00
Marko Crnic
dbcc312b0e haproxy: clarify handling of http and socket addresses
This behaviour was introduced along with socket support, but never got
documented properly.
2016-10-11 11:29:04 +01:00
Marko Crnic
2d842fefb8 haproxy: add support for socket name globbing 2016-10-11 11:29:04 +01:00
Marko Crnic
d63e3c8cc4 haproxy: move socket address detection to own function 2016-10-11 11:29:04 +01:00
Stian Øvrevåge
187a894fe9 Create CONFIG-EXAMPLES.md with a switch interface example
Added a standard example for collecting interface metrics from switches or routers and tagging them properly.

closes #1666
2016-10-11 11:00:25 +01:00
Cameron Sparr
ca55c4a55d Remove COMING SOON: multiple statsd fields 2016-10-11 10:57:34 +01:00
Cameron Sparr
d627bdbbdb logparser: allow numbers in ident & auth parameters
fixes #1810
2016-10-10 11:27:35 +01:00
Edie Zhang
4f06f6b3d8 Add tags in the graylog output plugin
closes #1861
2016-10-07 12:24:21 +01:00
Cameron Sparr
7f0fe78615 Changelog update for systemd log change 2016-10-06 17:48:23 +01:00
Ririsoft
5913f7cb36 Log to systemd journal
Align with the InfluxDB 1.0 logging policy and log to the systemd journal by
default.

closes #1732
2016-10-06 17:48:22 +01:00
James Carr
8dc42ad9f2 Add idle_since to emitted metrics (#1844) 2016-10-06 14:26:53 +01:00
Cameron Sparr
886bdd2ef2 changelog update 2016-10-06 14:25:28 +01:00
Patrick Hemmer
5a86a2ff26 snmp: return error on unknown conversion type (#1853) 2016-10-06 14:23:51 +01:00
zensqlmonitor
817d696628 SQL Server plugin: Fix WaitStats issue (#1859)
Issue #1854
2016-10-06 14:21:14 +01:00
Cameron Sparr
4ab0344ebf Update changelog & readme for 1.0.1 2016-10-05 08:41:58 +01:00
Patrick Hemmer
7b05170145 update to latest gosnmp (#1850) 2016-10-05 08:40:56 +01:00
Patrick Hemmer
b48ad4b737 fix snmp emitting empty fields
closes #1848
closes #1835
2016-10-04 16:25:16 +01:00
Patrick Hemmer
9feb639bbd fix translating snmp fields not in MIB (#1846) 2016-10-04 16:22:15 +01:00
96 changed files with 4836 additions and 1568 deletions


@@ -1,9 +1,29 @@
## v1.1 [unreleased]
## v1.2 [unreleased]
### Release Notes
### Features
### Bugfixes
## v1.1 [unreleased]
### Release Notes
- Telegraf now supports two new types of plugins: processors & aggregators.
- On systemd, Telegraf will no longer redirect its stdout to /var/log/telegraf/telegraf.log.
On most systems, the logs will be directed to the systemd journal and can be
accessed by `journalctl -u telegraf.service`. Consult the systemd journal
documentation for configuring journald. There is also a [`logfile` config option](https://github.com/influxdata/telegraf/blob/master/etc/telegraf.conf#L70)
available in 1.1, which will allow users to easily configure telegraf to
continue sending logs to /var/log/telegraf/telegraf.log.
### Features
- [#1726](https://github.com/influxdata/telegraf/issues/1726): Processor & Aggregator plugin support.
- [#1861](https://github.com/influxdata/telegraf/pull/1861): Add tags in the graylog output plugin.
- [#1732](https://github.com/influxdata/telegraf/pull/1732): Telegraf systemd service, log to journal.
- [#1782](https://github.com/influxdata/telegraf/pull/1782): Allow numeric and non-string values for tag_keys.
- [#1694](https://github.com/influxdata/telegraf/pull/1694): Adding Gauge and Counter metric types.
- [#1606](https://github.com/influxdata/telegraf/pull/1606): Remove carriage returns from exec plugin output on Windows
@@ -18,18 +38,22 @@
- [#1542](https://github.com/influxdata/telegraf/pull/1542): Add filestack webhook plugin.
- [#1599](https://github.com/influxdata/telegraf/pull/1599): Add server hostname for each docker measurement.
- [#1697](https://github.com/influxdata/telegraf/pull/1697): Add NATS output plugin.
- [#1407](https://github.com/influxdata/telegraf/pull/1407): HTTP service listener input plugin.
- [#1407](https://github.com/influxdata/telegraf/pull/1407) & [#1915](https://github.com/influxdata/telegraf/pull/1915): HTTP service listener input plugin.
- [#1699](https://github.com/influxdata/telegraf/pull/1699): Add database blacklist option for Postgresql
- [#1791](https://github.com/influxdata/telegraf/pull/1791): Add Docker container state metrics to Docker input plugin output
- [#1755](https://github.com/influxdata/telegraf/issues/1755): Add support to SNMP for IP & MAC address conversion.
- [#1729](https://github.com/influxdata/telegraf/issues/1729): Add support to SNMP for OID index suffixes.
- [#1813](https://github.com/influxdata/telegraf/pull/1813): Change default arguments for SNMP plugin.
- [#1686](https://github.com/influxdata/telegraf/pull/1686): Mesos input plugin: very high-cardinality mesos-task metrics removed.
- [#1839](https://github.com/influxdata/telegraf/pull/1839): Exact match with pgrep -x option in procstat
- [#1838](https://github.com/influxdata/telegraf/pull/1838): Logging overhaul to centralize the logger & log levels, & provide a logfile config option.
- [#1700](https://github.com/influxdata/telegraf/pull/1700): HAProxy plugin socket glob matching.
- [#1847](https://github.com/influxdata/telegraf/pull/1847): Add Kubernetes plugin for retrieving pod metrics.
### Bugfixes
- [#1955](https://github.com/influxdata/telegraf/issues/1955): Fix NATS plug-ins reconnection logic.
- [#1936](https://github.com/influxdata/telegraf/issues/1936): Set required default values in udp_listener & tcp_listener.
- [#1926](https://github.com/influxdata/telegraf/issues/1926): Fix toml unmarshal panic in Duration objects.
- [#1746](https://github.com/influxdata/telegraf/issues/1746): Fix handling of non-string values for JSON keys listed in tag_keys.
- [#1628](https://github.com/influxdata/telegraf/issues/1628): Fix mongodb input panic on version 2.2.
- [#1733](https://github.com/influxdata/telegraf/issues/1733): Fix statsd scientific notation parsing
@@ -44,8 +68,21 @@
- [#1772](https://github.com/influxdata/telegraf/pull/1772): Windows remote management interactive service fix.
- [#1702](https://github.com/influxdata/telegraf/issues/1702): sqlserver, fix issue when case sensitive collation is activated.
- [#1823](https://github.com/influxdata/telegraf/issues/1823): Fix huge allocations in http_listener when dealing with huge payloads.
- [#1833](https://github.com/influxdata/telegraf/issues/1833): Fix translating SNMP fields not in MIB.
- [#1835](https://github.com/influxdata/telegraf/issues/1835): Fix SNMP emitting empty fields.
- [#1854](https://github.com/influxdata/telegraf/issues/1854): SQL Server waitstats truncation bug.
- [#1810](https://github.com/influxdata/telegraf/issues/1810): Fix logparser common log format: numbers in ident.
- [#1793](https://github.com/influxdata/telegraf/pull/1793): Fix JSON Serialization in OpenTSDB output.
- [#1731](https://github.com/influxdata/telegraf/issues/1731): Fix Graphite template ordering, use most specific.
- [#1836](https://github.com/influxdata/telegraf/pull/1836): Fix snmp table field initialization for non-automatic table.
- [#1724](https://github.com/influxdata/telegraf/issues/1724): cgroups path being parsed as metric.
- [#1886](https://github.com/influxdata/telegraf/issues/1886): Fix phpfpm fcgi client panic when URL does not exist.
- [#1344](https://github.com/influxdata/telegraf/issues/1344): Fix config file parse error logging.
- [#1771](https://github.com/influxdata/telegraf/issues/1771): Delete nil fields in the metric maker.
- [#870](https://github.com/influxdata/telegraf/issues/870): Fix MySQL special characters in DSN parsing.
- [#1742](https://github.com/influxdata/telegraf/issues/1742): Ping input odd timeout behavior.
## v1.0.1 [unreleased]
## v1.0.1 [2016-09-26]
### Bugfixes


@@ -2,7 +2,7 @@
1. [Sign the CLA](http://influxdb.com/community/cla.html)
1. Make changes or write plugin (see below for details)
1. Add your plugin to `plugins/inputs/all/all.go` or `plugins/outputs/all/all.go`
1. Add your plugin to one of: `plugins/{inputs,outputs,aggregators,processors}/all/all.go`
1. If your plugin requires a new Go package,
[add it](https://github.com/influxdata/telegraf/blob/master/CONTRIBUTING.md#adding-a-dependency)
1. Write a README for your plugin; if it's an input plugin, it should be structured
@@ -16,8 +16,8 @@ for a good example.
## GoDoc
Public interfaces for inputs, outputs, metrics, and the accumulator can be found
on the GoDoc
Public interfaces for inputs, outputs, processors, aggregators, metrics,
and the accumulator can be found on the GoDoc
[![GoDoc](https://godoc.org/github.com/influxdata/telegraf?status.svg)](https://godoc.org/github.com/influxdata/telegraf)
@@ -46,7 +46,7 @@ and submit new inputs.
### Input Plugin Guidelines
* A plugin must conform to the `telegraf.Input` interface.
* A plugin must conform to the [`telegraf.Input`](https://godoc.org/github.com/influxdata/telegraf#Input) interface.
* Input Plugins should call `inputs.Add` in their `init` function to register themselves.
See below for a quick example.
* Input Plugins must be added to the
@@ -177,7 +177,7 @@ similar constructs.
### Output Plugin Guidelines
* An output must conform to the `outputs.Output` interface.
* An output must conform to the [`telegraf.Output`](https://godoc.org/github.com/influxdata/telegraf#Output) interface.
* Outputs should call `outputs.Add` in their `init` function to register themselves.
See below for a quick example.
* To be available within Telegraf itself, plugins must add themselves to the
@@ -275,6 +275,186 @@ and `Stop()` methods.
* Same as the `Output` guidelines, except that they must conform to the
`output.ServiceOutput` interface.
## Processor Plugins
This section is for developers who want to create a new processor plugin.
### Processor Plugin Guidelines
* A processor must conform to the [`telegraf.Processor`](https://godoc.org/github.com/influxdata/telegraf#Processor) interface.
* Processors should call `processors.Add` in their `init` function to register themselves.
See below for a quick example.
* To be available within Telegraf itself, plugins must add themselves to the
`github.com/influxdata/telegraf/plugins/processors/all/all.go` file.
* The `SampleConfig` function should return valid toml that describes how the
processor can be configured. This is included in `telegraf -sample-config`.
* The `Description` function should say in one line what this processor does.
### Processor Example
```go
package printer
// printer.go
import (
"fmt"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/processors"
)
type Printer struct {
}
var sampleConfig = `
`
func (p *Printer) SampleConfig() string {
return sampleConfig
}
func (p *Printer) Description() string {
return "Print all metrics that pass through this filter."
}
func (p *Printer) Apply(in ...telegraf.Metric) []telegraf.Metric {
for _, metric := range in {
fmt.Println(metric.String())
}
return in
}
func init() {
processors.Add("printer", func() telegraf.Processor {
return &Printer{}
})
}
```
## Aggregator Plugins
This section is for developers who want to create a new aggregator plugin.
### Aggregator Plugin Guidelines
* An aggregator must conform to the [`telegraf.Aggregator`](https://godoc.org/github.com/influxdata/telegraf#Aggregator) interface.
* Aggregators should call `aggregators.Add` in their `init` function to register themselves.
See below for a quick example.
* To be available within Telegraf itself, plugins must add themselves to the
`github.com/influxdata/telegraf/plugins/aggregators/all/all.go` file.
* The `SampleConfig` function should return valid toml that describes how the
aggregator can be configured. This is included in `telegraf -sample-config`.
* The `Description` function should say in one line what this aggregator does.
* The Aggregator plugin will need to keep caches of metrics that have passed
through it. This should be done using the builtin `HashID()` function of each
metric.
* When the `Reset()` function is called, all caches should be cleared.
### Aggregator Example
```go
package min
// min.go
import (
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/aggregators"
)
type Min struct {
// caches for metric fields, names, and tags
fieldCache map[uint64]map[string]float64
nameCache map[uint64]string
tagCache map[uint64]map[string]string
}
func NewMin() telegraf.Aggregator {
m := &Min{}
m.Reset()
return m
}
var sampleConfig = `
## period is the flush & clear interval of the aggregator.
period = "30s"
## If true drop_original will drop the original metrics and
## only send aggregates.
drop_original = false
`
func (m *Min) SampleConfig() string {
return sampleConfig
}
func (m *Min) Description() string {
return "Keep the aggregate min of each metric passing through."
}
func (m *Min) Add(in telegraf.Metric) {
id := in.HashID()
if _, ok := m.nameCache[id]; !ok {
// hit an uncached metric, create caches for first time:
m.nameCache[id] = in.Name()
m.tagCache[id] = in.Tags()
m.fieldCache[id] = make(map[string]float64)
for k, v := range in.Fields() {
if fv, ok := convert(v); ok {
m.fieldCache[id][k] = fv
}
}
} else {
for k, v := range in.Fields() {
if fv, ok := convert(v); ok {
if _, ok := m.fieldCache[id][k]; !ok {
// hit an uncached field of a cached metric
m.fieldCache[id][k] = fv
continue
}
if fv < m.fieldCache[id][k] {
// set new minimum
m.fieldCache[id][k] = fv
}
}
}
}
}
func (m *Min) Push(acc telegraf.Accumulator) {
for id, _ := range m.nameCache {
fields := map[string]interface{}{}
for k, v := range m.fieldCache[id] {
fields[k+"_min"] = v
}
acc.AddFields(m.nameCache[id], fields, m.tagCache[id])
}
}
func (m *Min) Reset() {
m.fieldCache = make(map[uint64]map[string]float64)
m.nameCache = make(map[uint64]string)
m.tagCache = make(map[uint64]map[string]string)
}
func convert(in interface{}) (float64, bool) {
switch v := in.(type) {
case float64:
return v, true
case int64:
return float64(v), true
default:
return 0, false
}
}
func init() {
aggregators.Add("min", func() telegraf.Aggregator {
return NewMin()
})
}
```
## Unit Tests
### Execute short tests

Godeps

@@ -19,7 +19,7 @@ github.com/eclipse/paho.mqtt.golang 0f7a459f04f13a41b7ed752d47944528d4bf9a86
github.com/go-sql-driver/mysql 1fca743146605a172a266e1654e01e5cd5669bee
github.com/gobwas/glob 49571a1557cd20e6a2410adc6421f85b66c730b5
github.com/golang/protobuf 552c7b9542c194800fd493123b3798ef0a832032
github.com/golang/snappy 427fb6fc07997f43afa32f35e850833760e489a7
github.com/golang/snappy d9eb7a3d35ec988b8585d4a0068e462c27d28380
github.com/gonuts/go-shellquote e842a11b24c6abfb3dd27af69a17f482e4b483c2
github.com/gorilla/context 1ea25387ff6f684839d82767c1733ff4d4d15d0a
github.com/gorilla/mux c9e326e2bdec29039a3761c07bece13133863e1e
@@ -27,7 +27,7 @@ github.com/hailocab/go-hostpool e80d13ce29ede4452c43dea11e79b9bc8a15b478
github.com/hashicorp/consul 5aa90455ce78d4d41578bafc86305e6e6b28d7d2
github.com/hpcloud/tail b2940955ab8b26e19d43a43c4da0475dd81bdb56
github.com/influxdata/config b79f6829346b8d6e78ba73544b1e1038f1f1c9da
github.com/influxdata/influxdb e094138084855d444195b252314dfee9eae34cab
github.com/influxdata/influxdb fc57c0f7c635df3873f3d64f0ed2100ddc94d5ae
github.com/influxdata/toml af4df43894b16e3fd2b788d01bd27ad0776ef2d0
github.com/influxdata/wlog 7c63b0a71ef8300adc255344d275e10e5c3a71ec
github.com/kardianos/osext 29ae4ffbc9a6fe9fb2bc5029050ce6996ea1d3bc
@@ -48,7 +48,7 @@ github.com/prometheus/common e8eabff8812b05acf522b45fdcd725a785188e37
github.com/prometheus/procfs 406e5b7bfd8201a36e2bb5f7bdae0b03380c2ce8
github.com/samuel/go-zookeeper 218e9c81c0dd8b3b18172b2bbfad92cc7d6db55f
github.com/shirou/gopsutil 4d0c402af66c78735c5ccf820dc2ca7de5e4ff08
github.com/soniah/gosnmp eb32571c2410868d85849ad67d1e51d01273eb84
github.com/soniah/gosnmp 3fe3beb30fa9700988893c56a63b1df8e1b68c26
github.com/streadway/amqp b4f3ceab0337f013208d31348b578d83c0064744
github.com/stretchr/testify 1f4a1643a57e798696635ea4c126e9127adb7d3c
github.com/vjeantet/grok 83bfdfdfd1a8146795b28e547a8e3c8b28a466c2
@@ -56,7 +56,7 @@ github.com/wvanbergen/kafka 46f9a1cf3f670edec492029fadded9c2d9e18866
github.com/wvanbergen/kazoo-go 0f768712ae6f76454f987c3356177e138df258f8
github.com/yuin/gopher-lua bf3808abd44b1e55143a2d7f08571aaa80db1808
github.com/zensqlmonitor/go-mssqldb ffe5510c6fa5e15e6d983210ab501c815b56b363
golang.org/x/crypto 5dc8cb4b8a8eb076cbb5a06bc3b8682c15bdbbd3
golang.org/x/crypto c197bcf24cde29d3f73c7b4ac6fd41f4384e8af6
golang.org/x/net 6acef71eb69611914f7a30939ea9f6e194c78172
golang.org/x/text a71fd10341b064c10f4a81ceac72bcf70f26ea34
gopkg.in/dancannon/gorethink.v1 7d1af5be49cb5ecc7b177bf387d232050299d6ef


@@ -20,12 +20,12 @@ new plugins.
### Linux deb and rpm Packages:
Latest:
* https://dl.influxdata.com/telegraf/releases/telegraf_1.0.0_amd64.deb
* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0.x86_64.rpm
* https://dl.influxdata.com/telegraf/releases/telegraf_1.0.1_amd64.deb
* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.1.x86_64.rpm
Latest (arm):
* https://dl.influxdata.com/telegraf/releases/telegraf_1.0.0_armhf.deb
* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0.armhf.rpm
* https://dl.influxdata.com/telegraf/releases/telegraf_1.0.1_armhf.deb
* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.1.armhf.rpm
##### Package Instructions:
@@ -46,14 +46,14 @@ to use this repo to install & update telegraf.
### Linux tarballs:
Latest:
* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0_linux_amd64.tar.gz
* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0_linux_i386.tar.gz
* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0_linux_armhf.tar.gz
* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.1_linux_amd64.tar.gz
* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.1_linux_i386.tar.gz
* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.1_linux_armhf.tar.gz
### FreeBSD tarball:
Latest:
* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0_freebsd_amd64.tar.gz
* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.1_freebsd_amd64.tar.gz
### Ansible Role:
@@ -69,7 +69,7 @@ brew install telegraf
### Windows Binaries (EXPERIMENTAL)
Latest:
* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.0_windows_amd64.zip
* https://dl.influxdata.com/telegraf/releases/telegraf-1.0.1_windows_amd64.zip
### From Source:
@@ -85,44 +85,42 @@ if you don't have it already. You also must build with golang version 1.5+.
## How to use it:
```console
$ telegraf -help
Telegraf, The plugin-driven server agent for collecting and reporting metrics.
See usage with:
Usage:
telegraf <flags>
The flags are:
-config <file> configuration file to load
-test gather metrics once, print them to stdout, and exit
-sample-config print out full sample configuration to stdout
-config-directory directory containing additional *.conf files
-input-filter filter the input plugins to enable, separator is :
-output-filter filter the output plugins to enable, separator is :
-usage print usage for a plugin, ie, 'telegraf -usage mysql'
-debug print metrics as they're generated to stdout
-quiet run in quiet mode
-version print the version to stdout
Examples:
# generate a telegraf config file:
telegraf -sample-config > telegraf.conf
# generate config with only cpu input & influxdb output plugins defined
telegraf -sample-config -input-filter cpu -output-filter influxdb
# run a single telegraf collection, outputting metrics to stdout
telegraf -config telegraf.conf -test
# run telegraf with all plugins defined in config file
telegraf -config telegraf.conf
# run telegraf, enabling the cpu & memory input, and influxdb output plugins
telegraf -config telegraf.conf -input-filter cpu:mem -output-filter influxdb
```
telegraf --help
```
### Generate a telegraf config file:
```
telegraf config > telegraf.conf
```
### Generate config with only cpu input & influxdb output plugins defined
```
telegraf --input-filter cpu --output-filter influxdb config
```
### Run a single telegraf collection, outputting metrics to stdout
```
telegraf --config telegraf.conf -test
```
### Run telegraf with all plugins defined in config file
```
telegraf --config telegraf.conf
```
### Run telegraf, enabling the cpu & memory input, and influxdb output plugins
```
telegraf --config telegraf.conf -input-filter cpu:mem -output-filter influxdb
```
## Configuration


@@ -2,9 +2,8 @@ package telegraf
import "time"
// Accumulator is an interface for "accumulating" metrics from input plugin(s).
// The metrics are sent down a channel shared between all input plugins and then
// flushed on the configured flush_interval.
// Accumulator is an interface for "accumulating" metrics from plugin(s).
// The metrics are sent down a channel shared between all plugins.
type Accumulator interface {
// AddFields adds a metric to the accumulator with the given measurement
// name, fields, and tags (and timestamp). If a timestamp is not provided,
@@ -29,12 +28,7 @@ type Accumulator interface {
tags map[string]string,
t ...time.Time)
AddError(err error)
Debug() bool
SetDebug(enabled bool)
SetPrecision(precision, interval time.Duration)
DisablePrecision()
AddError(err error)
}
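For orientation, an input plugin sits on the trimmed-down interface above by calling only the Add* methods and AddError. A minimal sketch of a Gather implementation; the Simple type is illustrative, not from the codebase:

```go
package simple

import "github.com/influxdata/telegraf"

// Simple is an illustrative input; SampleConfig and Description are
// omitted here for brevity but are required by telegraf.Input.
type Simple struct{}

// Gather emits one measurement per collection interval. The optional
// timestamp argument is omitted, so the accumulator stamps the metric
// itself at the configured precision.
func (s *Simple) Gather(acc telegraf.Accumulator) error {
	acc.AddFields("state",
		map[string]interface{}{"value": 42.0}, // fields
		map[string]string{"host": "example"},  // tags
	)
	return nil
}
```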


@@ -1,37 +1,40 @@
package agent
import (
"fmt"
"log"
"math"
"sync/atomic"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal/models"
)
type MetricMaker interface {
Name() string
MakeMetric(
measurement string,
fields map[string]interface{},
tags map[string]string,
mType telegraf.ValueType,
t time.Time,
) telegraf.Metric
}
func NewAccumulator(
inputConfig *models.InputConfig,
maker MetricMaker,
metrics chan telegraf.Metric,
) *accumulator {
acc := accumulator{}
acc.metrics = metrics
acc.inputConfig = inputConfig
acc.precision = time.Nanosecond
acc := accumulator{
maker: maker,
metrics: metrics,
precision: time.Nanosecond,
}
return &acc
}
type accumulator struct {
metrics chan telegraf.Metric
defaultTags map[string]string
debug bool
// print every point added to the accumulator
trace bool
inputConfig *models.InputConfig
maker MetricMaker
precision time.Duration
@@ -44,7 +47,7 @@ func (ac *accumulator) AddFields(
tags map[string]string,
t ...time.Time,
) {
if m := ac.makeMetric(measurement, fields, tags, telegraf.Untyped, t...); m != nil {
if m := ac.maker.MakeMetric(measurement, fields, tags, telegraf.Untyped, ac.getTime(t)); m != nil {
ac.metrics <- m
}
}
@@ -55,7 +58,7 @@ func (ac *accumulator) AddGauge(
tags map[string]string,
t ...time.Time,
) {
if m := ac.makeMetric(measurement, fields, tags, telegraf.Gauge, t...); m != nil {
if m := ac.maker.MakeMetric(measurement, fields, tags, telegraf.Gauge, ac.getTime(t)); m != nil {
ac.metrics <- m
}
}
@@ -66,114 +69,11 @@ func (ac *accumulator) AddCounter(
tags map[string]string,
t ...time.Time,
) {
if m := ac.makeMetric(measurement, fields, tags, telegraf.Counter, t...); m != nil {
if m := ac.maker.MakeMetric(measurement, fields, tags, telegraf.Counter, ac.getTime(t)); m != nil {
ac.metrics <- m
}
}
// makeMetric either returns a metric, or returns nil if the metric doesn't
// need to be created (because of filtering, an error, etc.)
func (ac *accumulator) makeMetric(
measurement string,
fields map[string]interface{},
tags map[string]string,
mType telegraf.ValueType,
t ...time.Time,
) telegraf.Metric {
if len(fields) == 0 || len(measurement) == 0 {
return nil
}
if tags == nil {
tags = make(map[string]string)
}
// Override measurement name if set
if len(ac.inputConfig.NameOverride) != 0 {
measurement = ac.inputConfig.NameOverride
}
// Apply measurement prefix and suffix if set
if len(ac.inputConfig.MeasurementPrefix) != 0 {
measurement = ac.inputConfig.MeasurementPrefix + measurement
}
if len(ac.inputConfig.MeasurementSuffix) != 0 {
measurement = measurement + ac.inputConfig.MeasurementSuffix
}
// Apply plugin-wide tags if set
for k, v := range ac.inputConfig.Tags {
if _, ok := tags[k]; !ok {
tags[k] = v
}
}
// Apply daemon-wide tags if set
for k, v := range ac.defaultTags {
if _, ok := tags[k]; !ok {
tags[k] = v
}
}
// Apply the metric filter(s)
if ok := ac.inputConfig.Filter.Apply(measurement, fields, tags); !ok {
return nil
}
for k, v := range fields {
// Validate uint64 and float64 fields
switch val := v.(type) {
case uint64:
// InfluxDB does not support writing uint64
if val < uint64(9223372036854775808) {
fields[k] = int64(val)
} else {
fields[k] = int64(9223372036854775807)
}
continue
case float64:
// NaNs are invalid values in influxdb, skip measurement
if math.IsNaN(val) || math.IsInf(val, 0) {
if ac.debug {
log.Printf("I! Measurement [%s] field [%s] has a NaN or Inf "+
"field, skipping",
measurement, k)
}
delete(fields, k)
continue
}
}
fields[k] = v
}
var timestamp time.Time
if len(t) > 0 {
timestamp = t[0]
} else {
timestamp = time.Now()
}
timestamp = timestamp.Round(ac.precision)
var m telegraf.Metric
var err error
switch mType {
case telegraf.Counter:
m, err = telegraf.NewCounterMetric(measurement, tags, fields, timestamp)
case telegraf.Gauge:
m, err = telegraf.NewGaugeMetric(measurement, tags, fields, timestamp)
default:
m, err = telegraf.NewMetric(measurement, tags, fields, timestamp)
}
if err != nil {
log.Printf("E! Error adding point [%s]: %s\n", measurement, err.Error())
return nil
}
if ac.trace {
fmt.Println("> " + m.String())
}
return m
}
// AddError passes a runtime error to the accumulator.
// The error will be tagged with the plugin name and written to the log.
func (ac *accumulator) AddError(err error) {
@@ -182,23 +82,7 @@ func (ac *accumulator) AddError(err error) {
}
atomic.AddUint64(&ac.errCount, 1)
//TODO suppress/throttle consecutive duplicate errors?
log.Printf("E! Error in input [%s]: %s", ac.inputConfig.Name, err)
}
func (ac *accumulator) Debug() bool {
return ac.debug
}
func (ac *accumulator) SetDebug(debug bool) {
ac.debug = debug
}
func (ac *accumulator) Trace() bool {
return ac.trace
}
func (ac *accumulator) SetTrace(trace bool) {
ac.trace = trace
log.Printf("E! Error in plugin [%s]: %s", ac.maker.Name(), err)
}
// SetPrecision takes two time.Duration objects. If the first is non-zero,
@@ -222,17 +106,12 @@ func (ac *accumulator) SetPrecision(precision, interval time.Duration) {
}
}
func (ac *accumulator) DisablePrecision() {
ac.precision = time.Nanosecond
}
func (ac *accumulator) setDefaultTags(tags map[string]string) {
ac.defaultTags = tags
}
func (ac *accumulator) addDefaultTag(key, value string) {
if ac.defaultTags == nil {
ac.defaultTags = make(map[string]string)
func (ac accumulator) getTime(t []time.Time) time.Time {
var timestamp time.Time
if len(t) > 0 {
timestamp = t[0]
} else {
timestamp = time.Now()
}
ac.defaultTags[key] = value
return timestamp.Round(ac.precision)
}


@@ -4,24 +4,21 @@ import (
"bytes"
"fmt"
"log"
"math"
"os"
"testing"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal/models"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestAdd(t *testing.T) {
a := accumulator{}
now := time.Now()
a.metrics = make(chan telegraf.Metric, 10)
defer close(a.metrics)
a.inputConfig = &models.InputConfig{}
metrics := make(chan telegraf.Metric, 10)
defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics)
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
@@ -33,97 +30,142 @@ func TestAdd(t *testing.T) {
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"}, now)
testm := <-a.metrics
testm := <-metrics
actual := testm.String()
assert.Contains(t, actual, "acctest value=101")
testm = <-a.metrics
testm = <-metrics
actual = testm.String()
assert.Contains(t, actual, "acctest,acc=test value=101")
testm = <-a.metrics
testm = <-metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test value=101 %d", now.UnixNano()),
actual)
}
func TestAddGauge(t *testing.T) {
a := accumulator{}
func TestAddFields(t *testing.T) {
now := time.Now()
a.metrics = make(chan telegraf.Metric, 10)
defer close(a.metrics)
a.inputConfig = &models.InputConfig{}
metrics := make(chan telegraf.Metric, 10)
defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics)
a.AddGauge("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{})
a.AddGauge("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"})
a.AddGauge("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"}, now)
fields := map[string]interface{}{
"usage": float64(99),
}
a.AddFields("acctest", fields, map[string]string{})
a.AddGauge("acctest", fields, map[string]string{"acc": "test"})
a.AddCounter("acctest", fields, map[string]string{"acc": "test"}, now)
testm := <-a.metrics
testm := <-metrics
actual := testm.String()
assert.Contains(t, actual, "acctest value=101")
assert.Equal(t, testm.Type(), telegraf.Gauge)
assert.Contains(t, actual, "acctest usage=99")
testm = <-a.metrics
testm = <-metrics
actual = testm.String()
assert.Contains(t, actual, "acctest,acc=test value=101")
assert.Equal(t, testm.Type(), telegraf.Gauge)
assert.Contains(t, actual, "acctest,acc=test usage=99")
testm = <-a.metrics
testm = <-metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test value=101 %d", now.UnixNano()),
fmt.Sprintf("acctest,acc=test usage=99 %d", now.UnixNano()),
actual)
assert.Equal(t, testm.Type(), telegraf.Gauge)
}
func TestAddCounter(t *testing.T) {
a := accumulator{}
now := time.Now()
a.metrics = make(chan telegraf.Metric, 10)
defer close(a.metrics)
a.inputConfig = &models.InputConfig{}
func TestAccAddError(t *testing.T) {
errBuf := bytes.NewBuffer(nil)
log.SetOutput(errBuf)
defer log.SetOutput(os.Stderr)
a.AddCounter("acctest",
metrics := make(chan telegraf.Metric, 10)
defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics)
a.AddError(fmt.Errorf("foo"))
a.AddError(fmt.Errorf("bar"))
a.AddError(fmt.Errorf("baz"))
errs := bytes.Split(errBuf.Bytes(), []byte{'\n'})
assert.EqualValues(t, 3, a.errCount)
require.Len(t, errs, 4) // 4 because of trailing newline
assert.Contains(t, string(errs[0]), "TestPlugin")
assert.Contains(t, string(errs[0]), "foo")
assert.Contains(t, string(errs[1]), "TestPlugin")
assert.Contains(t, string(errs[1]), "bar")
assert.Contains(t, string(errs[2]), "TestPlugin")
assert.Contains(t, string(errs[2]), "baz")
}
func TestAddNoIntervalWithPrecision(t *testing.T) {
now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC)
metrics := make(chan telegraf.Metric, 10)
defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics)
a.SetPrecision(0, time.Second)
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{})
a.AddCounter("acctest",
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"})
a.AddCounter("acctest",
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"}, now)
testm := <-a.metrics
actual := testm.String()
assert.Contains(t, actual, "acctest value=101")
assert.Equal(t, testm.Type(), telegraf.Counter)
testm = <-a.metrics
actual = testm.String()
assert.Contains(t, actual, "acctest,acc=test value=101")
assert.Equal(t, testm.Type(), telegraf.Counter)
testm = <-a.metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test value=101 %d", now.UnixNano()),
fmt.Sprintf("acctest,acc=test value=101 %d", int64(1139572800000000000)),
actual)
}
func TestAddDisablePrecision(t *testing.T) {
now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC)
metrics := make(chan telegraf.Metric, 10)
defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics)
a.SetPrecision(time.Nanosecond, 0)
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{})
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"})
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"}, now)
testm := <-a.metrics
actual := testm.String()
assert.Contains(t, actual, "acctest value=101")
testm = <-a.metrics
actual = testm.String()
assert.Contains(t, actual, "acctest,acc=test value=101")
testm = <-a.metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test value=101 %d", int64(1139572800082912748)),
actual)
assert.Equal(t, testm.Type(), telegraf.Counter)
}
func TestAddNoPrecisionWithInterval(t *testing.T) {
a := accumulator{}
now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC)
a.metrics = make(chan telegraf.Metric, 10)
defer close(a.metrics)
a.inputConfig = &models.InputConfig{}
metrics := make(chan telegraf.Metric, 10)
defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics)
a.SetPrecision(0, time.Second)
a.AddFields("acctest",
@@ -151,79 +193,11 @@ func TestAddNoPrecisionWithInterval(t *testing.T) {
actual)
}
func TestAddNoIntervalWithPrecision(t *testing.T) {
a := accumulator{}
now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC)
a.metrics = make(chan telegraf.Metric, 10)
defer close(a.metrics)
a.inputConfig = &models.InputConfig{}
a.SetPrecision(time.Second, time.Millisecond)
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{})
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"})
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"}, now)
testm := <-a.metrics
actual := testm.String()
assert.Contains(t, actual, "acctest value=101")
testm = <-a.metrics
actual = testm.String()
assert.Contains(t, actual, "acctest,acc=test value=101")
testm = <-a.metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test value=101 %d", int64(1139572800000000000)),
actual)
}
func TestAddDisablePrecision(t *testing.T) {
a := accumulator{}
now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC)
a.metrics = make(chan telegraf.Metric, 10)
defer close(a.metrics)
a.inputConfig = &models.InputConfig{}
a.SetPrecision(time.Second, time.Millisecond)
a.DisablePrecision()
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{})
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"})
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"}, now)
testm := <-a.metrics
actual := testm.String()
assert.Contains(t, actual, "acctest value=101")
testm = <-a.metrics
actual = testm.String()
assert.Contains(t, actual, "acctest,acc=test value=101")
testm = <-a.metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test value=101 %d", int64(1139572800082912748)),
actual)
}
func TestDifferentPrecisions(t *testing.T) {
a := accumulator{}
now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC)
a.metrics = make(chan telegraf.Metric, 10)
defer close(a.metrics)
a.inputConfig = &models.InputConfig{}
metrics := make(chan telegraf.Metric, 10)
defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics)
a.SetPrecision(0, time.Second)
a.AddFields("acctest",
@@ -266,349 +240,100 @@ func TestDifferentPrecisions(t *testing.T) {
actual)
}
func TestAddDefaultTags(t *testing.T) {
a := accumulator{}
a.addDefaultTag("default", "tag")
func TestAddGauge(t *testing.T) {
now := time.Now()
a.metrics = make(chan telegraf.Metric, 10)
defer close(a.metrics)
a.inputConfig = &models.InputConfig{}
metrics := make(chan telegraf.Metric, 10)
defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics)
a.AddFields("acctest",
a.AddGauge("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{})
a.AddFields("acctest",
a.AddGauge("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"})
a.AddFields("acctest",
a.AddGauge("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"}, now)
testm := <-a.metrics
actual := testm.String()
assert.Contains(t, actual, "acctest,default=tag value=101")
testm = <-a.metrics
actual = testm.String()
assert.Contains(t, actual, "acctest,acc=test,default=tag value=101")
testm = <-a.metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test,default=tag value=101 %d", now.UnixNano()),
actual)
}
func TestAddFields(t *testing.T) {
a := accumulator{}
now := time.Now()
a.metrics = make(chan telegraf.Metric, 10)
defer close(a.metrics)
a.inputConfig = &models.InputConfig{}
fields := map[string]interface{}{
"usage": float64(99),
}
a.AddFields("acctest", fields, map[string]string{})
a.AddFields("acctest", fields, map[string]string{"acc": "test"})
a.AddFields("acctest", fields, map[string]string{"acc": "test"}, now)
testm := <-a.metrics
actual := testm.String()
assert.Contains(t, actual, "acctest usage=99")
testm = <-a.metrics
actual = testm.String()
assert.Contains(t, actual, "acctest,acc=test usage=99")
testm = <-a.metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test usage=99 %d", now.UnixNano()),
actual)
}
// Test that all Inf fields get dropped, and not added to metrics channel
func TestAddInfFields(t *testing.T) {
inf := math.Inf(1)
ninf := math.Inf(-1)
a := accumulator{}
now := time.Now()
a.metrics = make(chan telegraf.Metric, 10)
defer close(a.metrics)
a.inputConfig = &models.InputConfig{}
fields := map[string]interface{}{
"usage": inf,
"nusage": ninf,
}
a.AddFields("acctest", fields, map[string]string{})
a.AddFields("acctest", fields, map[string]string{"acc": "test"})
a.AddFields("acctest", fields, map[string]string{"acc": "test"}, now)
assert.Len(t, a.metrics, 0)
// test that non-inf fields are kept and not dropped
fields["notinf"] = float64(100)
a.AddFields("acctest", fields, map[string]string{})
testm := <-a.metrics
actual := testm.String()
assert.Contains(t, actual, "acctest notinf=100")
}
// Test that nan fields are dropped and not added
func TestAddNaNFields(t *testing.T) {
nan := math.NaN()
a := accumulator{}
now := time.Now()
a.metrics = make(chan telegraf.Metric, 10)
defer close(a.metrics)
a.inputConfig = &models.InputConfig{}
fields := map[string]interface{}{
"usage": nan,
}
a.AddFields("acctest", fields, map[string]string{})
a.AddFields("acctest", fields, map[string]string{"acc": "test"})
a.AddFields("acctest", fields, map[string]string{"acc": "test"}, now)
assert.Len(t, a.metrics, 0)
// test that non-nan fields are kept and not dropped
fields["notnan"] = float64(100)
a.AddFields("acctest", fields, map[string]string{})
testm := <-a.metrics
actual := testm.String()
assert.Contains(t, actual, "acctest notnan=100")
}
func TestAddUint64Fields(t *testing.T) {
a := accumulator{}
now := time.Now()
a.metrics = make(chan telegraf.Metric, 10)
defer close(a.metrics)
a.inputConfig = &models.InputConfig{}
fields := map[string]interface{}{
"usage": uint64(99),
}
a.AddFields("acctest", fields, map[string]string{})
a.AddFields("acctest", fields, map[string]string{"acc": "test"})
a.AddFields("acctest", fields, map[string]string{"acc": "test"}, now)
testm := <-a.metrics
actual := testm.String()
assert.Contains(t, actual, "acctest usage=99i")
testm = <-a.metrics
actual = testm.String()
assert.Contains(t, actual, "acctest,acc=test usage=99i")
testm = <-a.metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test usage=99i %d", now.UnixNano()),
actual)
}
func TestAddUint64Overflow(t *testing.T) {
a := accumulator{}
now := time.Now()
a.metrics = make(chan telegraf.Metric, 10)
defer close(a.metrics)
a.inputConfig = &models.InputConfig{}
fields := map[string]interface{}{
"usage": uint64(9223372036854775808),
}
a.AddFields("acctest", fields, map[string]string{})
a.AddFields("acctest", fields, map[string]string{"acc": "test"})
a.AddFields("acctest", fields, map[string]string{"acc": "test"}, now)
testm := <-a.metrics
actual := testm.String()
assert.Contains(t, actual, "acctest usage=9223372036854775807i")
testm = <-a.metrics
actual = testm.String()
assert.Contains(t, actual, "acctest,acc=test usage=9223372036854775807i")
testm = <-a.metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test usage=9223372036854775807i %d", now.UnixNano()),
actual)
}
func TestAddInts(t *testing.T) {
a := accumulator{}
a.addDefaultTag("default", "tag")
now := time.Now()
a.metrics = make(chan telegraf.Metric, 10)
defer close(a.metrics)
a.inputConfig = &models.InputConfig{}
a.AddFields("acctest",
map[string]interface{}{"value": int(101)},
map[string]string{})
a.AddFields("acctest",
map[string]interface{}{"value": int32(101)},
map[string]string{"acc": "test"})
a.AddFields("acctest",
map[string]interface{}{"value": int64(101)},
map[string]string{"acc": "test"}, now)
testm := <-a.metrics
actual := testm.String()
assert.Contains(t, actual, "acctest,default=tag value=101i")
testm = <-a.metrics
actual = testm.String()
assert.Contains(t, actual, "acctest,acc=test,default=tag value=101i")
testm = <-a.metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test,default=tag value=101i %d", now.UnixNano()),
actual)
}
func TestAddFloats(t *testing.T) {
a := accumulator{}
a.addDefaultTag("default", "tag")
now := time.Now()
a.metrics = make(chan telegraf.Metric, 10)
defer close(a.metrics)
a.inputConfig = &models.InputConfig{}
a.AddFields("acctest",
map[string]interface{}{"value": float32(101)},
map[string]string{"acc": "test"})
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"}, now)
testm := <-a.metrics
actual := testm.String()
assert.Contains(t, actual, "acctest,acc=test,default=tag value=101")
testm = <-a.metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test,default=tag value=101 %d", now.UnixNano()),
actual)
}
func TestAddStrings(t *testing.T) {
a := accumulator{}
a.addDefaultTag("default", "tag")
now := time.Now()
a.metrics = make(chan telegraf.Metric, 10)
defer close(a.metrics)
a.inputConfig = &models.InputConfig{}
a.AddFields("acctest",
map[string]interface{}{"value": "test"},
map[string]string{"acc": "test"})
a.AddFields("acctest",
map[string]interface{}{"value": "foo"},
map[string]string{"acc": "test"}, now)
testm := <-a.metrics
actual := testm.String()
assert.Contains(t, actual, "acctest,acc=test,default=tag value=\"test\"")
testm = <-a.metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test,default=tag value=\"foo\" %d", now.UnixNano()),
actual)
}
func TestAddBools(t *testing.T) {
a := accumulator{}
a.addDefaultTag("default", "tag")
now := time.Now()
a.metrics = make(chan telegraf.Metric, 10)
defer close(a.metrics)
a.inputConfig = &models.InputConfig{}
a.AddFields("acctest",
map[string]interface{}{"value": true}, map[string]string{"acc": "test"})
a.AddFields("acctest",
map[string]interface{}{"value": false}, map[string]string{"acc": "test"}, now)
testm := <-a.metrics
actual := testm.String()
assert.Contains(t, actual, "acctest,acc=test,default=tag value=true")
testm = <-a.metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test,default=tag value=false %d", now.UnixNano()),
actual)
}
// Test that tag filters get applied to metrics.
func TestAccFilterTags(t *testing.T) {
a := accumulator{}
now := time.Now()
a.metrics = make(chan telegraf.Metric, 10)
defer close(a.metrics)
filter := models.Filter{
TagExclude: []string{"acc"},
}
assert.NoError(t, filter.Compile())
a.inputConfig = &models.InputConfig{}
a.inputConfig.Filter = filter
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{})
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"})
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"}, now)
testm := <-a.metrics
testm := <-metrics
actual := testm.String()
assert.Contains(t, actual, "acctest value=101")
assert.Equal(t, testm.Type(), telegraf.Gauge)
testm = <-a.metrics
testm = <-metrics
actual = testm.String()
assert.Contains(t, actual, "acctest value=101")
assert.Contains(t, actual, "acctest,acc=test value=101")
assert.Equal(t, testm.Type(), telegraf.Gauge)
testm = <-a.metrics
testm = <-metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest value=101 %d", now.UnixNano()),
fmt.Sprintf("acctest,acc=test value=101 %d", now.UnixNano()),
actual)
assert.Equal(t, testm.Type(), telegraf.Gauge)
}
func TestAccAddError(t *testing.T) {
errBuf := bytes.NewBuffer(nil)
log.SetOutput(errBuf)
defer log.SetOutput(os.Stderr)
func TestAddCounter(t *testing.T) {
now := time.Now()
metrics := make(chan telegraf.Metric, 10)
defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics)
a := accumulator{}
a.inputConfig = &models.InputConfig{}
a.inputConfig.Name = "mock_plugin"
a.AddCounter("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{})
a.AddCounter("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"})
a.AddCounter("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"}, now)
a.AddError(fmt.Errorf("foo"))
a.AddError(fmt.Errorf("bar"))
a.AddError(fmt.Errorf("baz"))
testm := <-metrics
actual := testm.String()
assert.Contains(t, actual, "acctest value=101")
assert.Equal(t, testm.Type(), telegraf.Counter)
errs := bytes.Split(errBuf.Bytes(), []byte{'\n'})
assert.EqualValues(t, 3, a.errCount)
require.Len(t, errs, 4) // 4 because of trailing newline
assert.Contains(t, string(errs[0]), "mock_plugin")
assert.Contains(t, string(errs[0]), "foo")
assert.Contains(t, string(errs[1]), "mock_plugin")
assert.Contains(t, string(errs[1]), "bar")
assert.Contains(t, string(errs[2]), "mock_plugin")
assert.Contains(t, string(errs[2]), "baz")
testm = <-metrics
actual = testm.String()
assert.Contains(t, actual, "acctest,acc=test value=101")
assert.Equal(t, testm.Type(), telegraf.Counter)
testm = <-metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test value=101 %d", now.UnixNano()),
actual)
assert.Equal(t, testm.Type(), telegraf.Counter)
}
type TestMetricMaker struct {
}
func (tm *TestMetricMaker) Name() string {
return "TestPlugin"
}
func (tm *TestMetricMaker) MakeMetric(
measurement string,
fields map[string]interface{},
tags map[string]string,
mType telegraf.ValueType,
t time.Time,
) telegraf.Metric {
switch mType {
case telegraf.Untyped:
if m, err := telegraf.NewMetric(measurement, tags, fields, t); err == nil {
return m
}
case telegraf.Counter:
if m, err := telegraf.NewCounterMetric(measurement, tags, fields, t); err == nil {
return m
}
case telegraf.Gauge:
if m, err := telegraf.NewGaugeMetric(measurement, tags, fields, t); err == nil {
return m
}
}
return nil
}


@@ -89,7 +89,7 @@ func panicRecover(input *models.RunningInput) {
trace := make([]byte, 2048)
runtime.Stack(trace, true)
log.Printf("E! FATAL: Input [%s] panicked: %s, Stack:\n%s\n",
input.Name, err, trace)
input.Name(), err, trace)
log.Println("E! PLEASE REPORT THIS PANIC ON GITHUB with " +
"stack trace, configuration, and OS information: " +
"https://github.com/influxdata/telegraf/issues/new")
@@ -103,19 +103,18 @@ func (a *Agent) gatherer(
input *models.RunningInput,
interval time.Duration,
metricC chan telegraf.Metric,
) error {
) {
defer panicRecover(input)
ticker := time.NewTicker(interval)
defer ticker.Stop()
for {
var outerr error
acc := NewAccumulator(input.Config, metricC)
acc := NewAccumulator(input, metricC)
acc.SetPrecision(a.Config.Agent.Precision.Duration,
a.Config.Agent.Interval.Duration)
acc.setDefaultTags(a.Config.Tags)
input.SetDebug(a.Config.Agent.Debug)
input.SetDefaultTags(a.Config.Tags)
internal.RandomSleep(a.Config.Agent.CollectionJitter.Duration, shutdown)
@@ -123,15 +122,12 @@ func (a *Agent) gatherer(
gatherWithTimeout(shutdown, input, acc, interval)
elapsed := time.Since(start)
if outerr != nil {
return outerr
}
log.Printf("D! Input [%s] gathered metrics, (%s interval) in %s\n",
input.Name, interval, elapsed)
input.Name(), interval, elapsed)
select {
case <-shutdown:
return nil
return
case <-ticker.C:
continue
}
@@ -160,13 +156,13 @@ func gatherWithTimeout(
select {
case err := <-done:
if err != nil {
log.Printf("E! ERROR in input [%s]: %s", input.Name, err)
log.Printf("E! ERROR in input [%s]: %s", input.Name(), err)
}
return
case <-ticker.C:
log.Printf("E! ERROR: input [%s] took longer to collect than "+
"collection interval (%s)",
input.Name, timeout)
input.Name(), timeout)
continue
case <-shutdown:
return
@@ -194,13 +190,13 @@ func (a *Agent) Test() error {
}()
for _, input := range a.Config.Inputs {
acc := NewAccumulator(input.Config, metricC)
acc.SetTrace(true)
acc := NewAccumulator(input, metricC)
acc.SetPrecision(a.Config.Agent.Precision.Duration,
a.Config.Agent.Interval.Duration)
acc.setDefaultTags(a.Config.Tags)
input.SetTrace(true)
input.SetDefaultTags(a.Config.Tags)
fmt.Printf("* Plugin: %s, Collection 1\n", input.Name)
fmt.Printf("* Plugin: %s, Collection 1\n", input.Name())
if input.Config.Interval != 0 {
fmt.Printf("* Internal: %s\n", input.Config.Interval)
}
@@ -214,10 +210,10 @@ func (a *Agent) Test() error {
// Special instructions for some inputs. cpu, for example, needs to be
// run twice in order to return cpu usage percentages.
switch input.Name {
switch input.Name() {
case "cpu", "mongodb", "procstat":
time.Sleep(500 * time.Millisecond)
fmt.Printf("* Plugin: %s, Collection 2\n", input.Name)
fmt.Printf("* Plugin: %s, Collection 2\n", input.Name())
if err := input.Input.Gather(acc); err != nil {
return err
}
@@ -250,47 +246,73 @@ func (a *Agent) flush() {
func (a *Agent) flusher(shutdown chan struct{}, metricC chan telegraf.Metric) error {
// Inelegant, but this sleep is to allow the Gather threads to run, so that
// the flusher will flush after metrics are collected.
time.Sleep(time.Millisecond * 200)
time.Sleep(time.Millisecond * 300)
// create an output metric channel and a goroutine that continuously passes
// each metric onto the output plugins & aggregators.
outMetricC := make(chan telegraf.Metric, 100)
var wg sync.WaitGroup
wg.Add(1)
go func() {
defer wg.Done()
for {
select {
case <-shutdown:
if len(outMetricC) > 0 {
// keep going until outMetricC is flushed
continue
}
return
case m := <-outMetricC:
// if dropOriginal is set to true, then we will only send this
// metric to the aggregators, not the outputs.
var dropOriginal bool
if !m.IsAggregate() {
for _, agg := range a.Config.Aggregators {
if ok := agg.Add(copyMetric(m)); ok {
dropOriginal = true
}
}
}
if !dropOriginal {
for i, o := range a.Config.Outputs {
if i == len(a.Config.Outputs)-1 {
o.AddMetric(m)
} else {
o.AddMetric(copyMetric(m))
}
}
}
}
}
}()
ticker := time.NewTicker(a.Config.Agent.FlushInterval.Duration)
for {
select {
case <-shutdown:
log.Println("I! Hang on, flushing any cached metrics before shutdown")
// wait for outMetricC to get flushed before flushing outputs
wg.Wait()
a.flush()
return nil
case <-ticker.C:
internal.RandomSleep(a.Config.Agent.FlushJitter.Duration, shutdown)
a.flush()
case m := <-metricC:
for i, o := range a.Config.Outputs {
if i == len(a.Config.Outputs)-1 {
o.AddMetric(m)
} else {
o.AddMetric(copyMetric(m))
}
case metric := <-metricC:
// NOTE potential bottleneck here as we put each metric through the
// processors serially.
mS := []telegraf.Metric{metric}
for _, processor := range a.Config.Processors {
mS = processor.Apply(mS...)
}
for _, m := range mS {
outMetricC <- m
}
}
}
}
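The processors referenced above are applied serially: each stage receives the current batch and returns the slice handed to the next stage. A minimal sketch of such a stage, assuming only the variadic Apply signature used in the loop above (the PassThrough type itself is hypothetical, not part of this changeset):

```go
package processors

import (
	"fmt"

	"github.com/influxdata/telegraf"
)

// PassThrough logs each metric in line protocol and forwards the
// batch unchanged to the next processor in the chain.
type PassThrough struct{}

func (p *PassThrough) Apply(in ...telegraf.Metric) []telegraf.Metric {
	for _, m := range in {
		fmt.Println(m.String())
	}
	return in
}
```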
func copyMetric(m telegraf.Metric) telegraf.Metric {
t := time.Time(m.Time())
tags := make(map[string]string)
fields := make(map[string]interface{})
for k, v := range m.Tags() {
tags[k] = v
}
for k, v := range m.Fields() {
fields[k] = v
}
out, _ := telegraf.NewMetric(m.Name(), tags, fields, t)
return out
}
// Run runs the agent daemon, gathering every Interval
func (a *Agent) Run(shutdown chan struct{}) error {
var wg sync.WaitGroup
@@ -301,20 +323,20 @@ func (a *Agent) Run(shutdown chan struct{}) error {
a.Config.Agent.Hostname, a.Config.Agent.FlushInterval.Duration)
// channel shared between all input threads for accumulating metrics
metricC := make(chan telegraf.Metric, 10000)
metricC := make(chan telegraf.Metric, 100)
// Start all ServicePlugins
for _, input := range a.Config.Inputs {
// Start service of any ServicePlugins
switch p := input.Input.(type) {
case telegraf.ServiceInput:
acc := NewAccumulator(input.Config, metricC)
acc := NewAccumulator(input, metricC)
// Service input plugins should set their own precision of their
// metrics.
acc.DisablePrecision()
acc.setDefaultTags(a.Config.Tags)
acc.SetPrecision(time.Nanosecond, 0)
input.SetDefaultTags(a.Config.Tags)
if err := p.Start(acc); err != nil {
log.Printf("E! Service for input %s failed to start, exiting\n%s\n",
input.Name, err.Error())
input.Name(), err.Error())
return err
}
defer p.Stop()
@@ -336,6 +358,17 @@ func (a *Agent) Run(shutdown chan struct{}) error {
}
}()
wg.Add(len(a.Config.Aggregators))
for _, aggregator := range a.Config.Aggregators {
go func(agg *models.RunningAggregator) {
defer wg.Done()
acc := NewAccumulator(agg, metricC)
acc.SetPrecision(a.Config.Agent.Precision.Duration,
a.Config.Agent.Interval.Duration)
agg.Run(acc, shutdown)
}(aggregator)
}
wg.Add(len(a.Config.Inputs))
for _, input := range a.Config.Inputs {
interval := a.Config.Agent.Interval.Duration
@@ -345,12 +378,26 @@ func (a *Agent) Run(shutdown chan struct{}) error {
}
go func(in *models.RunningInput, interv time.Duration) {
defer wg.Done()
if err := a.gatherer(shutdown, in, interv, metricC); err != nil {
log.Printf("E! " + err.Error())
}
a.gatherer(shutdown, in, interv, metricC)
}(input, interval)
}
wg.Wait()
return nil
}
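// copyMetric makes a deep copy of m (tags and fields maps included) so that
// each output can buffer or serialize its copy without racing the others.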
func copyMetric(m telegraf.Metric) telegraf.Metric {
t := time.Time(m.Time())
tags := make(map[string]string)
fields := make(map[string]interface{})
for k, v := range m.Tags() {
tags[k] = v
}
for k, v := range m.Fields() {
fields[k] = v
}
out, _ := telegraf.NewMetric(m.Name(), tags, fields, t)
return out
}

22
aggregator.go Normal file
View File

@@ -0,0 +1,22 @@
package telegraf
// Aggregator is an interface for implementing an Aggregator plugin.
// The RunningAggregator wraps this interface and guarantees that
// Add, Push, and Reset cannot be called concurrently, so locking is not
// required when implementing an Aggregator plugin.
type Aggregator interface {
// SampleConfig returns the default configuration of the Aggregator.
SampleConfig() string
// Description returns a one-sentence description of the Aggregator.
Description() string
// Add adds the metric to the aggregator.
Add(in Metric)
// Push pushes the current aggregates to the accumulator.
Push(acc Accumulator)
// Reset resets the aggregator's caches and aggregates.
Reset()
}
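As a concrete illustration (a minimal sketch, not part of this changeset; the TestAggregator in the models tests below follows the same shape), an aggregator that counts the metrics seen in each period:

```go
package count

import "github.com/influxdata/telegraf"

// Count is a hypothetical aggregator that emits how many metrics
// passed through during each period.
type Count struct {
	n int64
}

func (c *Count) SampleConfig() string { return "" }
func (c *Count) Description() string  { return "Count metrics seen each period." }

// Add is called once per incoming metric. No locking is needed because
// the RunningAggregator serializes Add, Push, and Reset.
func (c *Count) Add(in telegraf.Metric) { c.n++ }

// Push emits the aggregate at the end of the period.
func (c *Count) Push(acc telegraf.Accumulator) {
	acc.AddFields("count", map[string]interface{}{"n": c.n}, map[string]string{})
}

// Reset clears state so the next period starts from zero.
func (c *Count) Reset() { c.n = 0 }
```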

View File

@@ -4,17 +4,14 @@ machine:
post:
- sudo service zookeeper stop
- go version
- go version | grep 1.7.1 || sudo rm -rf /usr/local/go
- wget https://storage.googleapis.com/golang/go1.7.1.linux-amd64.tar.gz
- sudo tar -C /usr/local -xzf go1.7.1.linux-amd64.tar.gz
- go version | grep 1.7.3 || sudo rm -rf /usr/local/go
- wget https://storage.googleapis.com/golang/go1.7.3.linux-amd64.tar.gz
- sudo tar -C /usr/local -xzf go1.7.3.linux-amd64.tar.gz
- go version
dependencies:
override:
- docker info
post:
- gem install fpm
- sudo apt-get install -y rpm python-boto
test:
override:

View File

@@ -13,11 +13,12 @@ import (
"github.com/influxdata/telegraf/agent"
"github.com/influxdata/telegraf/internal/config"
"github.com/influxdata/telegraf/logger"
_ "github.com/influxdata/telegraf/plugins/aggregators/all"
"github.com/influxdata/telegraf/plugins/inputs"
_ "github.com/influxdata/telegraf/plugins/inputs/all"
"github.com/influxdata/telegraf/plugins/outputs"
_ "github.com/influxdata/telegraf/plugins/outputs/all"
_ "github.com/influxdata/telegraf/plugins/processors/all"
"github.com/kardianos/service"
)
@@ -41,6 +42,10 @@ var fOutputFilters = flag.String("output-filter", "",
"filter the outputs to enable, separator is :")
var fOutputList = flag.Bool("output-list", false,
"print available output plugins.")
var fAggregatorFilters = flag.String("aggregator-filter", "",
"filter the aggregators to enable, separator is :")
var fProcessorFilters = flag.String("processor-filter", "",
"filter the processors to enable, separator is :")
var fUsage = flag.String("usage", "",
"print usage for a plugin, ie, 'telegraf -usage mysql'")
var fService = flag.String("service", "",
@@ -68,47 +73,38 @@ const usage = `Telegraf, The plugin-driven server agent for collecting and repor
Usage:
telegraf <flags>
telegraf [commands|flags]
The flags are:
The commands & flags are:
-config <file> configuration file to load
-test gather metrics once, print them to stdout, and exit
-sample-config print out full sample configuration to stdout
-config-directory directory containing additional *.conf files
-input-filter filter the input plugins to enable, separator is :
-input-list print all the plugins inputs
-output-filter filter the output plugins to enable, separator is :
-output-list print all the available outputs
-usage print usage for a plugin, ie, 'telegraf -usage mysql'
-debug print metrics as they're generated to stdout
-quiet run in quiet mode
-version print the version to stdout
-service Control the service, ie, 'telegraf -service install (windows only)'
config print out full sample configuration to stdout
version print the version to stdout
In addition to the -config flag, telegraf will also load the config file from
an environment variable or default location. Precedence is:
1. -config flag
2. $TELEGRAF_CONFIG_PATH environment variable
3. $HOME/.telegraf/telegraf.conf
4. /etc/telegraf/telegraf.conf
--config <file> configuration file to load
--test gather metrics once, print them to stdout, and exit
--config-directory directory containing additional *.conf files
--input-filter filter the input plugins to enable, separator is :
--output-filter filter the output plugins to enable, separator is :
--usage print usage for a plugin, ie, 'telegraf --usage mysql'
--debug print metrics as they're generated to stdout
--quiet run in quiet mode
Examples:
# generate a telegraf config file:
telegraf -sample-config > telegraf.conf
telegraf config > telegraf.conf
# generate config with only cpu input & influxdb output plugins defined
telegraf -sample-config -input-filter cpu -output-filter influxdb
telegraf --input-filter cpu --output-filter influxdb config
# run a single telegraf collection, outputting metrics to stdout
telegraf -config telegraf.conf -test
telegraf --config telegraf.conf -test
# run telegraf with all plugins defined in config file
telegraf -config telegraf.conf
telegraf --config telegraf.conf
# run telegraf, enabling the cpu & memory input, and influxdb output plugins
telegraf -config telegraf.conf -input-filter cpu:mem -output-filter influxdb
telegraf --config telegraf.conf --input-filter cpu:mem --output-filter influxdb
`
var stop chan struct{}
@@ -128,7 +124,6 @@ func reloadLoop(stop chan struct{}, s service.Service) {
reload <- true
for <-reload {
reload <- false
flag.Usage = func() { usageExit(0) }
flag.Parse()
args := flag.Args()
@@ -142,6 +137,16 @@ func reloadLoop(stop chan struct{}, s service.Service) {
outputFilter := strings.TrimSpace(*fOutputFilters)
outputFilters = strings.Split(":"+outputFilter+":", ":")
}
var aggregatorFilters []string
if *fAggregatorFilters != "" {
aggregatorFilter := strings.TrimSpace(*fAggregatorFilters)
aggregatorFilters = strings.Split(":"+aggregatorFilter+":", ":")
}
var processorFilters []string
if *fProcessorFilters != "" {
processorFilter := strings.TrimSpace(*fProcessorFilters)
processorFilters = strings.Split(":"+processorFilter+":", ":")
}
if len(args) > 0 {
switch args[0] {
@@ -149,7 +154,12 @@ func reloadLoop(stop chan struct{}, s service.Service) {
fmt.Printf("Telegraf v%s (git: %s %s)\n", version, branch, commit)
return
case "config":
config.PrintSampleConfig(inputFilters, outputFilters)
config.PrintSampleConfig(
inputFilters,
outputFilters,
aggregatorFilters,
processorFilters,
)
return
}
}
@@ -172,12 +182,17 @@ func reloadLoop(stop chan struct{}, s service.Service) {
fmt.Printf("Telegraf v%s (git: %s %s)\n", version, branch, commit)
return
case *fSampleConfig:
config.PrintSampleConfig(inputFilters, outputFilters)
config.PrintSampleConfig(
inputFilters,
outputFilters,
aggregatorFilters,
processorFilters,
)
return
case *fUsage != "":
if err := config.PrintInputConfig(*fUsage); err != nil {
if err2 := config.PrintOutputConfig(*fUsage); err2 != nil {
log.Fatalf("%s and %s", err, err2)
log.Fatalf("E! %s and %s", err, err2)
}
}
return
@@ -189,26 +204,25 @@ func reloadLoop(stop chan struct{}, s service.Service) {
c.InputFilters = inputFilters
err := c.LoadConfig(*fConfig)
if err != nil {
fmt.Println(err)
os.Exit(1)
log.Fatal("E! " + err.Error())
}
if *fConfigDirectory != "" {
err = c.LoadDirectory(*fConfigDirectory)
if err != nil {
log.Fatal(err)
log.Fatal("E! " + err.Error())
}
}
if len(c.Outputs) == 0 {
log.Fatalf("Error: no outputs found, did you provide a valid config file?")
log.Fatalf("E! Error: no outputs found, did you provide a valid config file?")
}
if len(c.Inputs) == 0 {
log.Fatalf("Error: no inputs found, did you provide a valid config file?")
log.Fatalf("E! Error: no inputs found, did you provide a valid config file?")
}
ag, err := agent.NewAgent(c)
if err != nil {
log.Fatal(err)
log.Fatal("E! " + err.Error())
}
// Setup logging
@@ -221,14 +235,14 @@ func reloadLoop(stop chan struct{}, s service.Service) {
if *fTest {
err = ag.Test()
if err != nil {
log.Fatal(err)
log.Fatal("E! " + err.Error())
}
return
}
err = ag.Connect()
if err != nil {
log.Fatal(err)
log.Fatal("E! " + err.Error())
}
shutdown := make(chan struct{})
@@ -259,7 +273,7 @@ func reloadLoop(stop chan struct{}, s service.Service) {
if *fPidfile != "" {
f, err := os.Create(*fPidfile)
if err != nil {
log.Fatalf("Unable to create pidfile: %s", err)
log.Fatalf("E! Unable to create pidfile: %s", err)
}
fmt.Fprintf(f, "%d\n", os.Getpid())
@@ -291,6 +305,7 @@ func (p *program) Stop(s service.Service) error {
}
func main() {
flag.Usage = func() { usageExit(0) }
flag.Parse()
if runtime.GOOS == "windows" {
svcConfig := &service.Config{
@@ -304,7 +319,7 @@ func main() {
prg := &program{}
s, err := service.New(prg, svcConfig)
if err != nil {
log.Fatal(err)
log.Fatal("E! " + err.Error())
}
// Handle the -service flag here to prevent any issues with tooling that
// may not have an interactive session, e.g. installing from Ansible.
@@ -314,7 +329,7 @@ func main() {
}
err := service.Control(s, *fService)
if err != nil {
log.Fatal(err)
log.Fatal("E! " + err.Error())
}
} else {
err = s.Run()

View File

@@ -1,38 +1,38 @@
# Telegraf Configuration
## Generating a Configuration File
A default Telegraf config file can be generated using the -sample-config flag:
```
telegraf -sample-config > telegraf.conf
```
To generate a file with specific inputs and outputs, you can use the
-input-filter and -output-filter flags:
```
telegraf -sample-config -input-filter cpu:mem:net:swap -output-filter influxdb:kafka
```
You can see the latest config file with all available plugins here:
[telegraf.conf](https://github.com/influxdata/telegraf/blob/master/etc/telegraf.conf)
## Generating a Configuration File
A default Telegraf config file can be auto-generated by telegraf:
```
telegraf config > telegraf.conf
```
To generate a file with specific inputs and outputs, you can use the
--input-filter and --output-filter flags:
```
telegraf --input-filter cpu:mem:net:swap --output-filter influxdb:kafka config
```
## Environment Variables
Environment variables can be used anywhere in the config file, simply prepend
them with $. For strings the variable must be within quotes (ie, "$STR_VAR");
for numbers and booleans it should be left bare (ie, $INT_VAR, $BOOL_VAR):
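For example, assuming $BATCH_SIZE, $OMIT_HOSTNAME, and $INFLUX_URL are set in telegraf's environment (a sketch, not from this changeset):
```toml
[agent]
  ## integer and boolean values are left bare
  metric_batch_size = $BATCH_SIZE
  omit_hostname = $OMIT_HOSTNAME

[[outputs.influxdb]]
  ## string values must be quoted
  urls = ["$INFLUX_URL"]
```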
## `[global_tags]` Configuration
# Global Tags
Global tags can be specified in the `[global_tags]` section of the config file
in key="value" format. All metrics being gathered on this host will be tagged
with the tags specified here.
## `[agent]` Configuration
## Agent Configuration
Telegraf has a few options you can configure under the `agent` section of the
Telegraf has a few options you can configure under the `[agent]` section of the
config.
* **interval**: Default data collection interval for all inputs
@@ -56,13 +56,63 @@ interval. Maximum flush_interval will be flush_interval + flush_jitter
This is primarily to avoid
large write spikes for users running a large number of telegraf instances.
ie, a jitter of 5s and flush_interval 10s means flushes will happen every 10-15s.
* **precision**: By default, precision will be set to the same timestamp order
as the collection interval, with the maximum being 1s. Precision will NOT
be used for service inputs, such as logparser and statsd. Valid values are
"ns", "us" (or "µs"), "ms", "s".
* **logfile**: Specify the log file name. The empty string means to log to stderr.
* **debug**: Run telegraf in debug mode.
* **quiet**: Run telegraf in quiet mode.
* **quiet**: Run telegraf in quiet mode (error messages only).
* **hostname**: Override default hostname, if empty use os.Hostname().
* **omit_hostname**: If true, do not set the "host" tag in the telegraf agent.
## Input Configuration
The following config parameters are available for all inputs:
* **interval**: How often to gather this metric. Normal plugins use a single
global interval, but if one particular input should be run less or more often,
you can configure that here.
* **name_override**: Override the base name of the measurement.
(Default is the name of the input).
* **name_prefix**: Specifies a prefix to attach to the measurement name.
* **name_suffix**: Specifies a suffix to attach to the measurement name.
* **tags**: A map of tags to apply to a specific input's measurements.
## Output Configuration
There are no generic configuration options available for all outputs.
## Aggregator Configuration
The following config parameters are available for all aggregators:
* **period**: The period on which to flush & clear each aggregator. All metrics
that are sent with timestamps outside of this period will be ignored by the
aggregator.
* **delay**: The delay before each aggregator is flushed. This controls how
long aggregators wait to receive metrics from input plugins, in the case that
aggregators are flushing and inputs are gathering on the same interval.
* **drop_original**: If true, the original metric will be dropped by the
aggregator and will not get sent to the output plugins.
* **name_override**: Override the base name of the measurement.
(Default is the measurement name emitted by the aggregator).
* **name_prefix**: Specifies a prefix to attach to the measurement name.
* **name_suffix**: Specifies a suffix to attach to the measurement name.
* **tags**: A map of tags to apply to the aggregator's measurements.
## Processor Configuration
The following config parameters are available for all processors:
* **order**: This is the order in which the processors are executed. If this
is not specified, then processor execution order will be random (see the sketch below).
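A sketch using the printer processor added in this changeset; processors with explicit order values run in ascending order:
```toml
[[processors.printer]]
  order = 1
```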
#### Measurement Filtering
Filters can be configured per input or output, see below for examples.
Filters can be configured per input, output, processor, or aggregator,
see below for examples.
* **namepass**: An array of strings that is used to filter metrics generated by the
current plugin. Each string in the array is tested as a glob match against
@@ -90,19 +140,6 @@ the tag keys in the final measurement.
the plugin definition, otherwise subsequent plugin config options will be
interpreted as part of the tagpass/tagdrop map.
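For example (a sketch mirroring the influxdb output example later in this document):
```toml
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  ## tagpass is declared last: any option placed after this table would be
  ## read as part of the tagpass map itself.
  [outputs.influxdb.tagpass]
    cpu = ["cpu0"]
```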
## Input Configuration
Some configuration options are configurable per input:
* **name_override**: Override the base name of the measurement.
(Default is the name of the input).
* **name_prefix**: Specifies a prefix to attach to the measurement name.
* **name_suffix**: Specifies a suffix to attach to the measurement name.
* **tags**: A map of tags to apply to a specific input's measurements.
* **interval**: How often to gather this metric. Normal plugins use a single
global interval, but if one particular input should be run less or more often,
you can configure that here.
#### Input Configuration Examples
This is a full working config that will output CPU data to an InfluxDB instance
@@ -254,11 +291,7 @@ to avoid measurement collisions:
fielddrop = ["cpu_time*"]
```
## Output Configuration
Telegraf also supports specifying multiple output sinks to send data to;
configuring each output sink is different, but examples can be
found by running `telegraf -sample-config`.
#### Output Configuration Examples:
```toml
[[outputs.influxdb]]
@@ -283,3 +316,39 @@ found by running `telegraf -sample-config`.
[outputs.influxdb.tagpass]
cpu = ["cpu0"]
```
#### Aggregator Configuration Examples:
This will collect and emit the min/max of the system load1 metric every
30s, dropping the originals.
```toml
[[inputs.system]]
fieldpass = ["load1"] # collects system load1 metric.
[[aggregators.minmax]]
period = "30s" # send & clear the aggregate every 30s.
drop_original = true # drop the original metrics.
[[outputs.file]]
files = ["stdout"]
```
This will collect and emit the min/max of the swap metrics every
30s, dropping the originals. The aggregator will not be applied
to the system load metrics due to the `namepass` parameter.
```toml
[[inputs.swap]]
[[inputs.system]]
fieldpass = ["load1"] # collects system load1 metric.
[[aggregators.minmax]]
period = "30s" # send & clear the aggregate every 30s.
drop_original = true # drop the original metrics.
namepass = ["swap"] # only "pass" swap metrics through the aggregator.
[[outputs.file]]
files = ["stdout"]
```

View File

@@ -232,6 +232,16 @@ us.west.cpu.load 100
=> cpu.load,region=us.west value=100
```
Multiple templates can also be specified, but these should be differentiated
using _filters_ (see below for more details)
```toml
templates = [
"*.*.* region.region.measurement", # <- all 3-part measurements will match this one.
"*.*.*.* region.region.host.measurement", # <- all 4-part measurements will match this one.
]
```
#### Field Templates:
The field keyword tells Telegraf to give the metric that field name.

View File

@@ -66,7 +66,7 @@
debug = false
## Run telegraf in quiet mode (error log messages only).
quiet = false
## Specify the log file name. The empty string means to log to stdout.
## Specify the log file name. The empty string means to log to stderr.
logfile = ""
## Override default hostname, if empty use os.Hostname()
@@ -441,6 +441,30 @@
###############################################################################
# PROCESSOR PLUGINS #
###############################################################################
# # Print all metrics that pass through this filter.
# [[processors.printer]]
###############################################################################
# AGGREGATOR PLUGINS #
###############################################################################
# # Keep the aggregate min/max of each metric passing through.
# [[aggregators.minmax]]
# ## General Aggregator Arguments:
# ## The period on which to flush & clear the aggregator.
# period = "30s"
# ## If true, the original metric will be dropped by the
# ## aggregator and will not get sent to the output plugins.
# drop_original = false
###############################################################################
# INPUT PLUGINS #
###############################################################################
@@ -582,15 +606,18 @@
# # Read specific statistics per cgroup
# [[inputs.cgroup]]
# ## Directories in which to look for files, globs are supported.
# # paths = [
# # "/cgroup/memory",
# # "/cgroup/memory/child1",
# # "/cgroup/memory/child2/*",
# # ]
# ## cgroup stat fields, as file names, globs are supported.
# ## these file names are appended to each path from above.
# # files = ["memory.*usage*", "memory.limit_in_bytes"]
# ## Directories in which to look for files, globs are supported.
# ## Consider restricting paths to the set of cgroups you really
# ## want to monitor if you have a large number of cgroups, to avoid
# ## any cardinality issues.
# # paths = [
# # "/cgroup/memory",
# # "/cgroup/memory/child1",
# # "/cgroup/memory/child2/*",
# # ]
# ## cgroup stat fields, as file names, globs are supported.
# ## these file names are appended to each path from above.
# # files = ["memory.*usage*", "memory.limit_in_bytes"]
# # Pull Metric Statistics from Amazon CloudWatch
@@ -850,12 +877,15 @@
# ## An array of addresses to gather stats about. Specify an ip or hostname
# ## with optional port. ie localhost, 10.10.3.33:1936, etc.
# ## Make sure you specify the complete path to the stats endpoint
# ## ie 10.10.3.33:1936/haproxy?stats
# ## including the protocol, ie http://10.10.3.33:1936/haproxy?stats
# #
# ## If no servers are specified, then default to 127.0.0.1:1936/haproxy?stats
# servers = ["http://myhaproxy.com:1936/haproxy?stats"]
# ## Or you can also use local socket
# ## servers = ["socket:/run/haproxy/admin.sock"]
# ##
# ## You can also use local socket with standard wildcard globbing.
# ## Server address not starting with 'http' will be treated as a possible
# ## socket, so both examples below are valid.
# ## servers = ["socket:/run/haproxy/admin.sock", "/run/haproxy/*.sock"]
# # HTTP/HTTPS request given an address a method and a timeout
@@ -1000,6 +1030,22 @@
# attribute = "LoadedClassCount,UnloadedClassCount,TotalLoadedClassCount"
# # Read metrics from the kubernetes kubelet api
# [[inputs.kubernetes]]
# ## URL for the kubelet
# url = "http://1.1.1.1:10255"
#
# ## Use bearer token for authorization
# # bearer_token = /path/to/bearer/token
#
# ## Optional SSL Config
# # ssl_ca = /path/to/cafile
# # ssl_cert = /path/to/certfile
# # ssl_key = /path/to/keyfile
# ## Use SSL but skip chain & host verification
# # insecure_skip_verify = false
# # Read metrics from a LeoFS Server via SNMP
# [[inputs.leofs]]
# ## An array of URI to gather stats about LeoFS.
@@ -1119,13 +1165,13 @@
# ## gather metrics from SHOW BINARY LOGS command output
# gather_binary_logs = false
# #
# ## gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMART_BY_TABLE
# ## gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMARY_BY_TABLE
# gather_table_io_waits = false
# #
# ## gather metrics from PERFORMANCE_SCHEMA.TABLE_LOCK_WAITS
# gather_table_lock_waits = false
# #
# ## gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMART_BY_INDEX_USAGE
# ## gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMARY_BY_INDEX_USAGE
# gather_index_io_waits = false
# #
# ## gather metrics from PERFORMANCE_SCHEMA.EVENT_WAITS
@@ -1247,13 +1293,13 @@
# ## urls to ping
# urls = ["www.google.com"] # required
# ## number of pings to send per collection (ping -c <COUNT>)
# count = 1 # required
# # count = 1
# ## interval, in s, at which to ping. 0 == default (ping -i <PING_INTERVAL>)
# ping_interval = 0.0
# # ping_interval = 1.0
# ## per-ping timeout, in s. 0 == no timeout (ping -W <TIMEOUT>)
# timeout = 1.0
# # timeout = 1.0
# ## interface to send ping from (ping -I <INTERFACE>)
# interface = ""
# # interface = ""
# # Read metrics from one or many postgresql servers
@@ -1356,8 +1402,6 @@
# # exe = "nginx"
# ## pattern as argument for pgrep (ie, pgrep -f <pattern>)
# # pattern = "nginx"
# ## match the exact name of the process (ie, pgrep -xf <pattern>)
# # exact = false
# ## user as argument for pgrep (ie, pgrep -u <user>)
# # user = "nginx"
#
@@ -1683,9 +1727,18 @@
# ## Address and port to host HTTP listener on
# service_address = ":8186"
#
# ## timeouts
# ## maximum duration before timing out read of the request
# read_timeout = "10s"
# ## maximum duration before timing out write of the response
# write_timeout = "10s"
#
# ## Maximum allowed http request body size in bytes.
# ## 0 means to use the default of 536,870,912 bytes (512 mebibytes)
# max_body_size = 0
#
# ## Maximum line size allowed to be sent in bytes.
# ## 0 means to use the default of 65536 bytes (64 kibibytes)
# max_line_size = 0
# # Read metrics from Kafka topic(s)
@@ -1780,13 +1833,18 @@
# # Read metrics from NATS subject(s)
# [[inputs.nats_consumer]]
# ## urls of NATS servers
# servers = ["nats://localhost:4222"]
# # servers = ["nats://localhost:4222"]
# ## Use Transport Layer Security
# secure = false
# # secure = false
# ## subject(s) to consume
# subjects = ["telegraf"]
# # subjects = ["telegraf"]
# ## name a queue group
# queue_group = "telegraf_consumers"
# # queue_group = "telegraf_consumers"
#
# ## Sets the limits for pending msgs and bytes for each subscription
# ## These shouldn't need to be adjusted except in very high throughput scenarios
# # pending_message_limit = 65536
# # pending_bytes_limit = 67108864
#
# ## Data format to consume.
# ## Each data format has its own unique set of configuration options, read
@@ -1873,14 +1931,14 @@
# # Generic TCP listener
# [[inputs.tcp_listener]]
# ## Address and port to host TCP listener on
# service_address = ":8094"
# # service_address = ":8094"
#
# ## Number of TCP messages allowed to queue up. Once filled, the
# ## TCP listener will start dropping packets.
# allowed_pending_messages = 10000
# # allowed_pending_messages = 10000
#
# ## Maximum number of concurrent TCP connections to allow
# max_tcp_connections = 250
# # max_tcp_connections = 250
#
# ## Data format to consume.
# ## Each data format has its own unique set of configuration options, read
@@ -1892,11 +1950,11 @@
# # Generic UDP listener
# [[inputs.udp_listener]]
# ## Address and port to host UDP listener on
# service_address = ":8092"
# # service_address = ":8092"
#
# ## Number of UDP messages allowed to queue up. Once filled, the
# ## UDP listener will start dropping packets.
# allowed_pending_messages = 10000
# # allowed_pending_messages = 10000
#
# ## Data format to consume.
# ## Each data format has its own unique set of configuration options, read

View File

@@ -1,6 +1,8 @@
package buffer
import (
"sync"
"github.com/influxdata/telegraf"
)
@@ -11,6 +13,8 @@ type Buffer struct {
drops int
// total metrics added
total int
mu sync.Mutex
}
// NewBuffer returns a Buffer
@@ -61,11 +65,13 @@ func (b *Buffer) Add(metrics ...telegraf.Metric) {
// the batch will be of maximum length batchSize. It can be less than batchSize,
// if the length of Buffer is less than batchSize.
func (b *Buffer) Batch(batchSize int) []telegraf.Metric {
b.mu.Lock()
n := min(len(b.buf), batchSize)
out := make([]telegraf.Metric, n)
for i := 0; i < n; i++ {
out[i] = <-b.buf
}
b.mu.Unlock()
return out
}

View File

@@ -11,15 +11,18 @@ import (
"regexp"
"runtime"
"sort"
"strconv"
"strings"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/internal/models"
"github.com/influxdata/telegraf/plugins/aggregators"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/outputs"
"github.com/influxdata/telegraf/plugins/parsers"
"github.com/influxdata/telegraf/plugins/processors"
"github.com/influxdata/telegraf/plugins/serializers"
"github.com/influxdata/config"
@@ -47,9 +50,12 @@ type Config struct {
InputFilters []string
OutputFilters []string
Agent *AgentConfig
Inputs []*models.RunningInput
Outputs []*models.RunningOutput
Agent *AgentConfig
Inputs []*models.RunningInput
Outputs []*models.RunningOutput
Aggregators []*models.RunningAggregator
// Processors have a slice wrapper type because they need to be sorted
Processors models.RunningProcessors
}
func NewConfig() *Config {
@@ -64,6 +70,7 @@ func NewConfig() *Config {
Tags: make(map[string]string),
Inputs: make([]*models.RunningInput, 0),
Outputs: make([]*models.RunningOutput, 0),
Processors: make([]*models.RunningProcessor, 0),
InputFilters: make([]string, 0),
OutputFilters: make([]string, 0),
}
@@ -138,7 +145,7 @@ type AgentConfig struct {
func (c *Config) InputNames() []string {
var name []string
for _, input := range c.Inputs {
name = append(name, input.Name)
name = append(name, input.Name())
}
return name
}
@@ -234,7 +241,7 @@ var header = `# Telegraf Configuration
debug = false
## Run telegraf in quiet mode (error log messages only).
quiet = false
## Specify the log file name. The empty string means to log to stdout.
## Specify the log file name. The empty string means to log to stderr.
logfile = ""
## Override default hostname, if empty use os.Hostname()
@@ -248,6 +255,20 @@ var header = `# Telegraf Configuration
###############################################################################
`
var processorHeader = `
###############################################################################
# PROCESSOR PLUGINS #
###############################################################################
`
var aggregatorHeader = `
###############################################################################
# AGGREGATOR PLUGINS #
###############################################################################
`
var inputHeader = `
###############################################################################
@@ -263,9 +284,15 @@ var serviceInputHeader = `
`
// PrintSampleConfig prints the sample config
func PrintSampleConfig(inputFilters []string, outputFilters []string) {
func PrintSampleConfig(
inputFilters []string,
outputFilters []string,
aggregatorFilters []string,
processorFilters []string,
) {
fmt.Printf(header)
// print output plugins
if len(outputFilters) != 0 {
printFilteredOutputs(outputFilters, false)
} else {
@@ -281,6 +308,33 @@ func PrintSampleConfig(inputFilters []string, outputFilters []string) {
printFilteredOutputs(pnames, true)
}
// print processor plugins
fmt.Printf(processorHeader)
if len(processorFilters) != 0 {
printFilteredProcessors(processorFilters, false)
} else {
pnames := []string{}
for pname := range processors.Processors {
pnames = append(pnames, pname)
}
sort.Strings(pnames)
printFilteredProcessors(pnames, true)
}
// print aggregator plugins
fmt.Printf(aggregatorHeader)
if len(aggregatorFilters) != 0 {
printFilteredAggregators(aggregatorFilters, false)
} else {
pnames := []string{}
for pname := range aggregators.Aggregators {
pnames = append(pnames, pname)
}
sort.Strings(pnames)
printFilteredAggregators(pnames, true)
}
// print input plugins
fmt.Printf(inputHeader)
if len(inputFilters) != 0 {
printFilteredInputs(inputFilters, false)
@@ -298,6 +352,42 @@ func PrintSampleConfig(inputFilters []string, outputFilters []string) {
}
}
func printFilteredProcessors(processorFilters []string, commented bool) {
// Filter processors
var pnames []string
for pname := range processors.Processors {
if sliceContains(pname, processorFilters) {
pnames = append(pnames, pname)
}
}
sort.Strings(pnames)
// Print processors
for _, pname := range pnames {
creator := processors.Processors[pname]
processor := creator()
printConfig(pname, processor, "processors", commented)
}
}
func printFilteredAggregators(aggregatorFilters []string, commented bool) {
// Filter aggregators
var anames []string
for aname := range aggregators.Aggregators {
if sliceContains(aname, aggregatorFilters) {
anames = append(anames, aname)
}
}
sort.Strings(anames)
// Print aggregators
for _, aname := range anames {
creator := aggregators.Aggregators[aname]
aggregator := creator()
printConfig(aname, aggregator, "aggregators", commented)
}
}
func printFilteredInputs(inputFilters []string, commented bool) {
// Filter inputs
var pnames []string
@@ -507,6 +597,7 @@ func (c *Config) LoadConfig(path string) error {
case "outputs":
for pluginName, pluginVal := range subTable.Fields {
switch pluginSubTable := pluginVal.(type) {
// legacy [outputs.influxdb] support
case *ast.Table:
if err = c.addOutput(pluginName, pluginSubTable); err != nil {
return fmt.Errorf("Error parsing %s, %s", path, err)
@@ -525,6 +616,7 @@ func (c *Config) LoadConfig(path string) error {
case "inputs", "plugins":
for pluginName, pluginVal := range subTable.Fields {
switch pluginSubTable := pluginVal.(type) {
// legacy [inputs.cpu] support
case *ast.Table:
if err = c.addInput(pluginName, pluginSubTable); err != nil {
return fmt.Errorf("Error parsing %s, %s", path, err)
@@ -540,6 +632,34 @@ func (c *Config) LoadConfig(path string) error {
pluginName, path)
}
}
case "processors":
for pluginName, pluginVal := range subTable.Fields {
switch pluginSubTable := pluginVal.(type) {
case []*ast.Table:
for _, t := range pluginSubTable {
if err = c.addProcessor(pluginName, t); err != nil {
return fmt.Errorf("Error parsing %s, %s", path, err)
}
}
default:
return fmt.Errorf("Unsupported config format: %s, file %s",
pluginName, path)
}
}
case "aggregators":
for pluginName, pluginVal := range subTable.Fields {
switch pluginSubTable := pluginVal.(type) {
case []*ast.Table:
for _, t := range pluginSubTable {
if err = c.addAggregator(pluginName, t); err != nil {
return fmt.Errorf("Error parsing %s, %s", path, err)
}
}
default:
return fmt.Errorf("Unsupported config format: %s, file %s",
pluginName, path)
}
}
// Assume it's an input plugin for legacy config file support if no other
// identifiers are present
default:
@@ -548,6 +668,10 @@ func (c *Config) LoadConfig(path string) error {
}
}
}
if len(c.Processors) > 1 {
sort.Sort(c.Processors)
}
return nil
}
@@ -580,6 +704,52 @@ func parseFile(fpath string) (*ast.Table, error) {
return toml.Parse(contents)
}
func (c *Config) addAggregator(name string, table *ast.Table) error {
creator, ok := aggregators.Aggregators[name]
if !ok {
return fmt.Errorf("Undefined but requested aggregator: %s", name)
}
aggregator := creator()
conf, err := buildAggregator(name, table)
if err != nil {
return err
}
if err := config.UnmarshalTable(table, aggregator); err != nil {
return err
}
c.Aggregators = append(c.Aggregators, models.NewRunningAggregator(aggregator, conf))
return nil
}
func (c *Config) addProcessor(name string, table *ast.Table) error {
creator, ok := processors.Processors[name]
if !ok {
return fmt.Errorf("Undefined but requested processor: %s", name)
}
processor := creator()
processorConfig, err := buildProcessor(name, table)
if err != nil {
return err
}
if err := config.UnmarshalTable(table, processor); err != nil {
return err
}
rf := &models.RunningProcessor{
Name: name,
Processor: processor,
Config: processorConfig,
}
c.Processors = append(c.Processors, rf)
return nil
}
func (c *Config) addOutput(name string, table *ast.Table) error {
if len(c.OutputFilters) > 0 && !sliceContains(name, c.OutputFilters) {
return nil
@@ -652,7 +822,6 @@ func (c *Config) addInput(name string, table *ast.Table) error {
}
rp := &models.RunningInput{
Name: name,
Input: input,
Config: pluginConfig,
}
@@ -660,6 +829,144 @@ func (c *Config) addInput(name string, table *ast.Table) error {
return nil
}
// buildAggregator parses Aggregator specific items from the ast.Table,
// builds the filter and returns a
// models.AggregatorConfig to be inserted into models.RunningAggregator
func buildAggregator(name string, tbl *ast.Table) (*models.AggregatorConfig, error) {
unsupportedFields := []string{"tagexclude", "taginclude"}
for _, field := range unsupportedFields {
if _, ok := tbl.Fields[field]; ok {
return nil, fmt.Errorf("%s is not supported for aggregator plugins (%s).",
field, name)
}
}
conf := &models.AggregatorConfig{
Name: name,
Delay: time.Millisecond * 100,
Period: time.Second * 30,
}
if node, ok := tbl.Fields["period"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
dur, err := time.ParseDuration(str.Value)
if err != nil {
return nil, err
}
conf.Period = dur
}
}
}
if node, ok := tbl.Fields["delay"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
dur, err := time.ParseDuration(str.Value)
if err != nil {
return nil, err
}
conf.Delay = dur
}
}
}
if node, ok := tbl.Fields["drop_original"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if b, ok := kv.Value.(*ast.Boolean); ok {
var err error
conf.DropOriginal, err = strconv.ParseBool(b.Value)
if err != nil {
log.Printf("Error parsing boolean value for %s: %s\n", name, err)
}
}
}
}
if node, ok := tbl.Fields["name_prefix"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
conf.MeasurementPrefix = str.Value
}
}
}
if node, ok := tbl.Fields["name_suffix"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
conf.MeasurementSuffix = str.Value
}
}
}
if node, ok := tbl.Fields["name_override"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
conf.NameOverride = str.Value
}
}
}
conf.Tags = make(map[string]string)
if node, ok := tbl.Fields["tags"]; ok {
if subtbl, ok := node.(*ast.Table); ok {
if err := config.UnmarshalTable(subtbl, conf.Tags); err != nil {
log.Printf("Could not parse tags for input %s\n", name)
}
}
}
delete(tbl.Fields, "period")
delete(tbl.Fields, "delay")
delete(tbl.Fields, "drop_original")
delete(tbl.Fields, "name_prefix")
delete(tbl.Fields, "name_suffix")
delete(tbl.Fields, "name_override")
delete(tbl.Fields, "tags")
var err error
conf.Filter, err = buildFilter(tbl)
if err != nil {
return conf, err
}
return conf, nil
}
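The fields consumed above correspond to a config table of this shape (a sketch; the values are illustrative, using the minmax aggregator shown elsewhere in this document):

```toml
[[aggregators.minmax]]
  period = "30s"
  delay = "100ms"
  drop_original = false
  name_suffix = "_agg"
  [aggregators.minmax.tags]
    aggregated = "true"
```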
// buildProcessor parses Processor specific items from the ast.Table,
// builds the filter and returns a
// models.ProcessorConfig to be inserted into models.RunningProcessor
func buildProcessor(name string, tbl *ast.Table) (*models.ProcessorConfig, error) {
conf := &models.ProcessorConfig{Name: name}
unsupportedFields := []string{"tagexclude", "taginclude", "fielddrop", "fieldpass"}
for _, field := range unsupportedFields {
if _, ok := tbl.Fields[field]; ok {
return nil, fmt.Errorf("%s is not supported for processor plugins (%s).",
field, name)
}
}
if node, ok := tbl.Fields["order"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if b, ok := kv.Value.(*ast.Integer); ok {
var err error
conf.Order, err = strconv.ParseInt(b.Value, 10, 64)
if err != nil {
log.Printf("Error parsing int value for %s: %s\n", name, err)
}
}
}
}
delete(tbl.Fields, "order")
var err error
conf.Filter, err = buildFilter(tbl)
if err != nil {
return conf, err
}
return conf, nil
}
// buildFilter builds a Filter
// (tagpass/tagdrop/namepass/namedrop/fieldpass/fielddrop) to
// be inserted into the models.OutputConfig/models.InputConfig

View File

@@ -35,12 +35,21 @@ type Duration struct {
// UnmarshalTOML parses the duration from the TOML config file
func (d *Duration) UnmarshalTOML(b []byte) error {
var err error
// Parse string duration, ie, "1s"
d.Duration, err = time.ParseDuration(string(b[1 : len(b)-1]))
// see if we can straight convert it
d.Duration, err = time.ParseDuration(string(b))
if err == nil {
return nil
}
// Parse string duration, ie, "1s"
if uq, err := strconv.Unquote(string(b)); err == nil && len(uq) > 0 {
d.Duration, err = time.ParseDuration(uq)
if err == nil {
return nil
}
}
// Try parsing as integer seconds
sI, err := strconv.ParseInt(string(b), 10, 64)
if err == nil {

View File

@@ -131,3 +131,22 @@ func TestRandomSleep(t *testing.T) {
elapsed = time.Since(s)
assert.True(t, elapsed < time.Millisecond*150)
}
func TestDuration(t *testing.T) {
var d Duration
d.UnmarshalTOML([]byte(`"1s"`))
assert.Equal(t, time.Second, d.Duration)
d = Duration{}
d.UnmarshalTOML([]byte(`1s`))
assert.Equal(t, time.Second, d.Duration)
d = Duration{}
d.UnmarshalTOML([]byte(`10`))
assert.Equal(t, 10*time.Second, d.Duration)
d = Duration{}
d.UnmarshalTOML([]byte(`1.5`))
assert.Equal(t, time.Second, d.Duration)
}
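Given the branches above and these tests, internal.Duration accepts any of the following raw values (a sketch; whether a bare `10s` survives the surrounding TOML parser depends on that parser, so the quoted form is the safe one in a config file):

```toml
## all of these unmarshal to ten seconds:
flush_interval = "10s"  # quoted duration string
# flush_interval = 10s  # bare duration bytes, converted directly
# flush_interval = 10   # bare integer, interpreted as seconds
```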

View File

@@ -96,7 +96,7 @@ func (f *Filter) Compile() error {
// Apply applies the filter to the given measurement name, fields map, and
// tags map. It will return false if the metric should be "filtered out", and
// true if the metric should "pass".
// It will modify tags in-place if they need to be deleted.
// It will modify tags & fields in-place if they need to be deleted.
func (f *Filter) Apply(
measurement string,
fields map[string]interface{},

View File

@@ -0,0 +1,154 @@
package models
import (
"log"
"math"
"time"
"github.com/influxdata/telegraf"
)
// makemetric is used by both RunningAggregator & RunningInput
// to make metrics.
// nameOverride: override the name of the measurement being made.
// namePrefix: add this prefix to each measurement name.
// nameSuffix: add this suffix to each measurement name.
// pluginTags: these are tags that are specific to this plugin.
// daemonTags: these are daemon-wide global tags, and get applied after pluginTags.
// filter: this is a filter to apply to each metric being made.
// applyFilter: if false, the above filter is not applied to each metric.
// This is used by Aggregators, because aggregators use filters
// on incoming metrics instead of on created metrics.
// TODO refactor this to not have such a huge func signature.
func makemetric(
measurement string,
fields map[string]interface{},
tags map[string]string,
nameOverride string,
namePrefix string,
nameSuffix string,
pluginTags map[string]string,
daemonTags map[string]string,
filter Filter,
applyFilter bool,
debug bool,
mType telegraf.ValueType,
t time.Time,
) telegraf.Metric {
if len(fields) == 0 || len(measurement) == 0 {
return nil
}
if tags == nil {
tags = make(map[string]string)
}
// Override measurement name if set
if len(nameOverride) != 0 {
measurement = nameOverride
}
// Apply measurement prefix and suffix if set
if len(namePrefix) != 0 {
measurement = namePrefix + measurement
}
if len(nameSuffix) != 0 {
measurement = measurement + nameSuffix
}
// Apply plugin-wide tags if set
for k, v := range pluginTags {
if _, ok := tags[k]; !ok {
tags[k] = v
}
}
// Apply daemon-wide tags if set
for k, v := range daemonTags {
if _, ok := tags[k]; !ok {
tags[k] = v
}
}
// Apply the metric filter(s)
// for aggregators, the filter does not get applied when the metric is made.
// instead, the filter is applied to metric incoming into the plugin.
// ie, it gets applied in the RunningAggregator.Apply function.
if applyFilter {
if ok := filter.Apply(measurement, fields, tags); !ok {
return nil
}
}
for k, v := range fields {
// Validate uint64 and float64 fields
// convert all int & uint types to int64
switch val := v.(type) {
case nil:
// delete nil fields
delete(fields, k)
case uint:
fields[k] = int64(val)
continue
case uint8:
fields[k] = int64(val)
continue
case uint16:
fields[k] = int64(val)
continue
case uint32:
fields[k] = int64(val)
continue
case int:
fields[k] = int64(val)
continue
case int8:
fields[k] = int64(val)
continue
case int16:
fields[k] = int64(val)
continue
case int32:
fields[k] = int64(val)
continue
case uint64:
// InfluxDB does not support writing uint64
if val < uint64(9223372036854775808) {
fields[k] = int64(val)
} else {
fields[k] = int64(9223372036854775807)
}
continue
case float32:
fields[k] = float64(val)
continue
case float64:
// NaN and Inf are invalid values in influxdb, skip the field
if math.IsNaN(val) || math.IsInf(val, 0) {
if debug {
log.Printf("Measurement [%s] field [%s] has a NaN or Inf "+
"field, skipping",
measurement, k)
}
delete(fields, k)
continue
}
default:
fields[k] = v
}
}
var m telegraf.Metric
var err error
switch mType {
case telegraf.Counter:
m, err = telegraf.NewCounterMetric(measurement, tags, fields, t)
case telegraf.Gauge:
m, err = telegraf.NewGaugeMetric(measurement, tags, fields, t)
default:
m, err = telegraf.NewMetric(measurement, tags, fields, t)
}
if err != nil {
log.Printf("Error adding point [%s]: %s\n", measurement, err.Error())
return nil
}
return m
}

View File

@@ -0,0 +1,164 @@
package models
import (
"time"
"github.com/influxdata/telegraf"
)
type RunningAggregator struct {
a telegraf.Aggregator
Config *AggregatorConfig
metrics chan telegraf.Metric
periodStart time.Time
periodEnd time.Time
}
func NewRunningAggregator(
a telegraf.Aggregator,
conf *AggregatorConfig,
) *RunningAggregator {
return &RunningAggregator{
a: a,
Config: conf,
metrics: make(chan telegraf.Metric, 100),
}
}
// AggregatorConfig containing configuration parameters for the running
// aggregator plugin.
type AggregatorConfig struct {
Name string
DropOriginal bool
NameOverride string
MeasurementPrefix string
MeasurementSuffix string
Tags map[string]string
Filter Filter
Period time.Duration
Delay time.Duration
}
func (r *RunningAggregator) Name() string {
return "aggregators." + r.Config.Name
}
func (r *RunningAggregator) MakeMetric(
measurement string,
fields map[string]interface{},
tags map[string]string,
mType telegraf.ValueType,
t time.Time,
) telegraf.Metric {
m := makemetric(
measurement,
fields,
tags,
r.Config.NameOverride,
r.Config.MeasurementPrefix,
r.Config.MeasurementSuffix,
r.Config.Tags,
nil,
r.Config.Filter,
false,
false,
mType,
t,
)
m.SetAggregate(true)
return m
}
// Add applies the given metric to the aggregator.
// Before applying to the plugin, it will run any defined filters on the metric.
// Add returns true if the original metric should be dropped.
func (r *RunningAggregator) Add(in telegraf.Metric) bool {
if r.Config.Filter.IsActive() {
// check if the aggregator should apply this metric
name := in.Name()
fields := in.Fields()
tags := in.Tags()
t := in.Time()
if ok := r.Config.Filter.Apply(name, fields, tags); !ok {
// aggregator should not apply this metric
return false
}
in, _ = telegraf.NewMetric(name, tags, fields, t)
}
r.metrics <- in
return r.Config.DropOriginal
}
func (r *RunningAggregator) add(in telegraf.Metric) {
r.a.Add(in)
}
func (r *RunningAggregator) push(acc telegraf.Accumulator) {
r.a.Push(acc)
}
func (r *RunningAggregator) reset() {
r.a.Reset()
}
// Run runs the running aggregator, listens for incoming metrics, and waits
// for period ticks to tell it when to push and reset the aggregator.
func (r *RunningAggregator) Run(
acc telegraf.Accumulator,
shutdown chan struct{},
) {
// The start of the period is truncated to the nearest second.
//
// Every metric then gets its timestamp checked and is dropped if it
// is not within:
//
// start < t < end + truncation + delay
//
// So if we start at now = 00:00.2 with a 10s period and 0.3s delay:
// now = 00:00.2
// start = 00:00
// truncation = 00:00.2
// end = 00:10
// 1st interval: 00:00 - 00:10.5
// 2nd interval: 00:10 - 00:20.5
// etc.
//
now := time.Now()
r.periodStart = now.Truncate(time.Second)
truncation := now.Sub(r.periodStart)
r.periodEnd = r.periodStart.Add(r.Config.Period)
time.Sleep(r.Config.Delay)
periodT := time.NewTicker(r.Config.Period)
defer periodT.Stop()
for {
select {
case <-shutdown:
if len(r.metrics) > 0 {
// wait until metrics are flushed before exiting
continue
}
return
case m := <-r.metrics:
if m.Time().Before(r.periodStart) ||
m.Time().After(r.periodEnd.Add(truncation).Add(r.Config.Delay)) {
// the metric is outside the current aggregation period, so
// skip it.
continue
}
r.add(m)
case <-periodT.C:
r.periodStart = r.periodEnd
r.periodEnd = r.periodStart.Add(r.Config.Period)
r.push(acc)
r.reset()
}
}
}

View File

@@ -0,0 +1,256 @@
package models
import (
"fmt"
"sync"
"sync/atomic"
"testing"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
)
func TestAdd(t *testing.T) {
a := &TestAggregator{}
ra := NewRunningAggregator(a, &AggregatorConfig{
Name: "TestRunningAggregator",
Filter: Filter{
NamePass: []string{"*"},
},
Period: time.Millisecond * 500,
})
assert.NoError(t, ra.Config.Filter.Compile())
acc := testutil.Accumulator{}
go ra.Run(&acc, make(chan struct{}))
m := ra.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
time.Now().Add(time.Millisecond*150),
)
assert.False(t, ra.Add(m))
for {
time.Sleep(time.Millisecond)
if atomic.LoadInt64(&a.sum) > 0 {
break
}
}
assert.Equal(t, int64(101), atomic.LoadInt64(&a.sum))
}
func TestAddMetricsOutsideCurrentPeriod(t *testing.T) {
a := &TestAggregator{}
ra := NewRunningAggregator(a, &AggregatorConfig{
Name: "TestRunningAggregator",
Filter: Filter{
NamePass: []string{"*"},
},
Period: time.Millisecond * 500,
})
assert.NoError(t, ra.Config.Filter.Compile())
acc := testutil.Accumulator{}
go ra.Run(&acc, make(chan struct{}))
// metric before current period
m := ra.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
time.Now().Add(-time.Hour),
)
assert.False(t, ra.Add(m))
// metric after current period
m = ra.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
time.Now().Add(time.Hour),
)
assert.False(t, ra.Add(m))
// "now" metric
m = ra.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
time.Now().Add(time.Millisecond*50),
)
assert.False(t, ra.Add(m))
for {
time.Sleep(time.Millisecond)
if atomic.LoadInt64(&a.sum) > 0 {
break
}
}
assert.Equal(t, int64(101), atomic.LoadInt64(&a.sum))
}
func TestAddAndPushOnePeriod(t *testing.T) {
a := &TestAggregator{}
ra := NewRunningAggregator(a, &AggregatorConfig{
Name: "TestRunningAggregator",
Filter: Filter{
NamePass: []string{"*"},
},
Period: time.Millisecond * 500,
})
assert.NoError(t, ra.Config.Filter.Compile())
acc := testutil.Accumulator{}
shutdown := make(chan struct{})
var wg sync.WaitGroup
wg.Add(1)
go func() {
defer wg.Done()
ra.Run(&acc, shutdown)
}()
m := ra.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
time.Now().Add(time.Millisecond*100),
)
assert.False(t, ra.Add(m))
for {
time.Sleep(time.Millisecond)
if acc.NMetrics() > 0 {
break
}
}
acc.AssertContainsFields(t, "TestMetric", map[string]interface{}{"sum": int64(101)})
close(shutdown)
wg.Wait()
}
func TestAddDropOriginal(t *testing.T) {
ra := NewRunningAggregator(&TestAggregator{}, &AggregatorConfig{
Name: "TestRunningAggregator",
Filter: Filter{
NamePass: []string{"RI*"},
},
DropOriginal: true,
})
assert.NoError(t, ra.Config.Filter.Compile())
m := ra.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
time.Now(),
)
assert.True(t, ra.Add(m))
// this metric name doesn't match the filter, so Add will return false
m2 := ra.MakeMetric(
"foobar",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
time.Now(),
)
assert.False(t, ra.Add(m2))
}
// make an untyped, counter, & gauge metric
func TestMakeMetricA(t *testing.T) {
now := time.Now()
ra := NewRunningAggregator(&TestAggregator{}, &AggregatorConfig{
Name: "TestRunningAggregator",
})
assert.Equal(t, "aggregators.TestRunningAggregator", ra.Name())
m := ra.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
now,
)
assert.Equal(
t,
m.String(),
fmt.Sprintf("RITest value=101i %d", now.UnixNano()),
)
assert.Equal(
t,
m.Type(),
telegraf.Untyped,
)
m = ra.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Counter,
now,
)
assert.Equal(
t,
m.String(),
fmt.Sprintf("RITest value=101i %d", now.UnixNano()),
)
assert.Equal(
t,
m.Type(),
telegraf.Counter,
)
m = ra.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Gauge,
now,
)
assert.Equal(
t,
m.String(),
fmt.Sprintf("RITest value=101i %d", now.UnixNano()),
)
assert.Equal(
t,
m.Type(),
telegraf.Gauge,
)
}
type TestAggregator struct {
sum int64
}
func (t *TestAggregator) Description() string { return "" }
func (t *TestAggregator) SampleConfig() string { return "" }
func (t *TestAggregator) Reset() {
atomic.StoreInt64(&t.sum, 0)
}
func (t *TestAggregator) Push(acc telegraf.Accumulator) {
acc.AddFields("TestMetric",
map[string]interface{}{"sum": t.sum},
map[string]string{},
)
}
func (t *TestAggregator) Add(in telegraf.Metric) {
for _, v := range in.Fields() {
if vi, ok := v.(int64); ok {
atomic.AddInt64(&t.sum, vi)
}
}
}

View File

@@ -1,15 +1,19 @@
package models
import (
"fmt"
"time"
"github.com/influxdata/telegraf"
)
type RunningInput struct {
Name string
Input telegraf.Input
Config *InputConfig
trace bool
debug bool
defaultTags map[string]string
}
// InputConfig containing a name, interval, and filter
@@ -22,3 +26,59 @@ type InputConfig struct {
Filter Filter
Interval time.Duration
}
func (r *RunningInput) Name() string {
return "inputs." + r.Config.Name
}
// MakeMetric either returns a metric, or returns nil if the metric doesn't
// need to be created (because of filtering, an error, etc.)
func (r *RunningInput) MakeMetric(
measurement string,
fields map[string]interface{},
tags map[string]string,
mType telegraf.ValueType,
t time.Time,
) telegraf.Metric {
m := makemetric(
measurement,
fields,
tags,
r.Config.NameOverride,
r.Config.MeasurementPrefix,
r.Config.MeasurementSuffix,
r.Config.Tags,
r.defaultTags,
r.Config.Filter,
true,
r.debug,
mType,
t,
)
if r.trace && m != nil {
fmt.Println("> " + m.String())
}
return m
}
func (r *RunningInput) Debug() bool {
return r.debug
}
func (r *RunningInput) SetDebug(debug bool) {
r.debug = debug
}
func (r *RunningInput) Trace() bool {
return r.trace
}
func (r *RunningInput) SetTrace(trace bool) {
r.trace = trace
}
func (r *RunningInput) SetDefaultTags(tags map[string]string) {
r.defaultTags = tags
}

View File

@@ -0,0 +1,352 @@
package models
import (
"fmt"
"math"
"testing"
"time"
"github.com/influxdata/telegraf"
"github.com/stretchr/testify/assert"
)
func TestMakeMetricNoFields(t *testing.T) {
now := time.Now()
ri := RunningInput{
Config: &InputConfig{
Name: "TestRunningInput",
},
}
m := ri.MakeMetric(
"RITest",
map[string]interface{}{},
map[string]string{},
telegraf.Untyped,
now,
)
assert.Nil(t, m)
}
// nil fields should get dropped
func TestMakeMetricNilFields(t *testing.T) {
now := time.Now()
ri := RunningInput{
Config: &InputConfig{
Name: "TestRunningInput",
},
}
m := ri.MakeMetric(
"RITest",
map[string]interface{}{
"value": int(101),
"nil": nil,
},
map[string]string{},
telegraf.Untyped,
now,
)
assert.Equal(
t,
fmt.Sprintf("RITest value=101i %d", now.UnixNano()),
m.String(),
)
}
// make an untyped, counter, & gauge metric
func TestMakeMetric(t *testing.T) {
now := time.Now()
ri := RunningInput{
Config: &InputConfig{
Name: "TestRunningInput",
},
}
ri.SetDebug(true)
assert.Equal(t, true, ri.Debug())
ri.SetTrace(true)
assert.Equal(t, true, ri.Trace())
assert.Equal(t, "inputs.TestRunningInput", ri.Name())
m := ri.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
now,
)
assert.Equal(
t,
m.String(),
fmt.Sprintf("RITest value=101i %d", now.UnixNano()),
)
assert.Equal(
t,
m.Type(),
telegraf.Untyped,
)
m = ri.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Counter,
now,
)
assert.Equal(
t,
m.String(),
fmt.Sprintf("RITest value=101i %d", now.UnixNano()),
)
assert.Equal(
t,
m.Type(),
telegraf.Counter,
)
m = ri.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Gauge,
now,
)
assert.Equal(
t,
m.String(),
fmt.Sprintf("RITest value=101i %d", now.UnixNano()),
)
assert.Equal(
t,
m.Type(),
telegraf.Gauge,
)
}
func TestMakeMetricWithPluginTags(t *testing.T) {
now := time.Now()
ri := RunningInput{
Config: &InputConfig{
Name: "TestRunningInput",
Tags: map[string]string{
"foo": "bar",
},
},
}
ri.SetDebug(true)
assert.Equal(t, true, ri.Debug())
ri.SetTrace(true)
assert.Equal(t, true, ri.Trace())
m := ri.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
nil,
telegraf.Untyped,
now,
)
assert.Equal(
t,
m.String(),
fmt.Sprintf("RITest,foo=bar value=101i %d", now.UnixNano()),
)
}
func TestMakeMetricFilteredOut(t *testing.T) {
now := time.Now()
ri := RunningInput{
Config: &InputConfig{
Name: "TestRunningInput",
Tags: map[string]string{
"foo": "bar",
},
Filter: Filter{NamePass: []string{"foobar"}},
},
}
ri.SetDebug(true)
assert.Equal(t, true, ri.Debug())
ri.SetTrace(true)
assert.Equal(t, true, ri.Trace())
assert.NoError(t, ri.Config.Filter.Compile())
m := ri.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
nil,
telegraf.Untyped,
now,
)
assert.Nil(t, m)
}
func TestMakeMetricWithDaemonTags(t *testing.T) {
now := time.Now()
ri := RunningInput{
Config: &InputConfig{
Name: "TestRunningInput",
},
}
ri.SetDefaultTags(map[string]string{
"foo": "bar",
})
ri.SetDebug(true)
assert.Equal(t, true, ri.Debug())
ri.SetTrace(true)
assert.Equal(t, true, ri.Trace())
m := ri.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
now,
)
assert.Equal(
t,
m.String(),
fmt.Sprintf("RITest,foo=bar value=101i %d", now.UnixNano()),
)
}
// Inf and -Inf fields should get dropped
func TestMakeMetricInfFields(t *testing.T) {
inf := math.Inf(1)
ninf := math.Inf(-1)
now := time.Now()
ri := RunningInput{
Config: &InputConfig{
Name: "TestRunningInput",
},
}
ri.SetDebug(true)
assert.Equal(t, true, ri.Debug())
ri.SetTrace(true)
assert.Equal(t, true, ri.Trace())
m := ri.MakeMetric(
"RITest",
map[string]interface{}{
"value": int(101),
"inf": inf,
"ninf": ninf,
},
map[string]string{},
telegraf.Untyped,
now,
)
assert.Equal(
t,
m.String(),
fmt.Sprintf("RITest value=101i %d", now.UnixNano()),
)
}
func TestMakeMetricAllFieldTypes(t *testing.T) {
now := time.Now()
ri := RunningInput{
Config: &InputConfig{
Name: "TestRunningInput",
},
}
ri.SetDebug(true)
assert.Equal(t, true, ri.Debug())
ri.SetTrace(true)
assert.Equal(t, true, ri.Trace())
m := ri.MakeMetric(
"RITest",
map[string]interface{}{
"a": int(10),
"b": int8(10),
"c": int16(10),
"d": int32(10),
"e": uint(10),
"f": uint8(10),
"g": uint16(10),
"h": uint32(10),
"i": uint64(10),
"j": float32(10),
"k": uint64(9223372036854775810),
"l": "foobar",
"m": true,
},
map[string]string{},
telegraf.Untyped,
now,
)
assert.Equal(
t,
fmt.Sprintf("RITest a=10i,b=10i,c=10i,d=10i,e=10i,f=10i,g=10i,h=10i,i=10i,j=10,k=9223372036854775807i,l=\"foobar\",m=true %d", now.UnixNano()),
m.String(),
)
}
func TestMakeMetricNameOverride(t *testing.T) {
now := time.Now()
ri := RunningInput{
Config: &InputConfig{
Name: "TestRunningInput",
NameOverride: "foobar",
},
}
m := ri.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
now,
)
assert.Equal(
t,
m.String(),
fmt.Sprintf("foobar value=101i %d", now.UnixNano()),
)
}
func TestMakeMetricNamePrefix(t *testing.T) {
now := time.Now()
ri := RunningInput{
Config: &InputConfig{
Name: "TestRunningInput",
MeasurementPrefix: "foobar_",
},
}
m := ri.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
now,
)
assert.Equal(
t,
m.String(),
fmt.Sprintf("foobar_RITest value=101i %d", now.UnixNano()),
)
}
func TestMakeMetricNameSuffix(t *testing.T) {
now := time.Now()
ri := RunningInput{
Config: &InputConfig{
Name: "TestRunningInput",
MeasurementSuffix: "_foobar",
},
}
m := ri.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
now,
)
assert.Equal(
t,
m.String(),
fmt.Sprintf("RITest_foobar value=101i %d", now.UnixNano()),
)
}

View File

@@ -132,7 +132,6 @@ func TestRunningOutput_PassFilter(t *testing.T) {
func TestRunningOutput_TagIncludeNoMatch(t *testing.T) {
conf := &OutputConfig{
Filter: Filter{
TagInclude: []string{"nothing*"},
},
}
@@ -154,7 +153,6 @@ func TestRunningOutput_TagIncludeNoMatch(t *testing.T) {
func TestRunningOutput_TagExcludeMatch(t *testing.T) {
conf := &OutputConfig{
Filter: Filter{
TagExclude: []string{"tag*"},
},
}
@@ -176,7 +174,6 @@ func TestRunningOutput_TagExcludeMatch(t *testing.T) {
func TestRunningOutput_TagExcludeNoMatch(t *testing.T) {
conf := &OutputConfig{
Filter: Filter{
TagExclude: []string{"nothing*"},
},
}
@@ -198,7 +195,6 @@ func TestRunningOutput_TagExcludeNoMatch(t *testing.T) {
func TestRunningOutput_TagIncludeMatch(t *testing.T) {
conf := &OutputConfig{
Filter: Filter{
TagInclude: []string{"tag*"},
},
}

View File

@@ -0,0 +1,44 @@
package models
import (
"github.com/influxdata/telegraf"
)
type RunningProcessor struct {
Name string
Processor telegraf.Processor
Config *ProcessorConfig
}
type RunningProcessors []*RunningProcessor
func (rp RunningProcessors) Len() int { return len(rp) }
func (rp RunningProcessors) Swap(i, j int) { rp[i], rp[j] = rp[j], rp[i] }
func (rp RunningProcessors) Less(i, j int) bool { return rp[i].Config.Order < rp[j].Config.Order }
// ProcessorConfig contains the name, order, and filter of a processor
type ProcessorConfig struct {
Name string
Order int64
Filter Filter
}
func (rp *RunningProcessor) Apply(in ...telegraf.Metric) []telegraf.Metric {
ret := []telegraf.Metric{}
for _, metric := range in {
if rp.Config.Filter.IsActive() {
// check if the filter should be applied to this metric
if ok := rp.Config.Filter.Apply(metric.Name(), metric.Fields(), metric.Tags()); !ok {
// the metric did not match the filter, so pass it through unmodified
ret = append(ret, metric)
continue
}
}
// This metric passed the filter, so run it through the processor's Apply
// function and append the results to the output slice.
ret = append(ret, rp.Processor.Apply(metric)...)
}
return ret
}
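
RunningProcessors implements sort.Interface through the Len, Swap, and Less methods above, so callers can order the processor chain by its configured Order before applying it. A minimal same-package sketch of that (exampleSortProcessors is illustrative, not part of the plugin):

```go
package models

import "sort"

func exampleSortProcessors() RunningProcessors {
	// Two processors configured out of order; sort.Sort arranges them so
	// the one with the lower Config.Order runs first.
	procs := RunningProcessors{
		{Config: &ProcessorConfig{Name: "rename", Order: 2}},
		{Config: &ProcessorConfig{Name: "filter", Order: 1}},
	}
	sort.Sort(procs)
	// procs[0] is now "filter", procs[1] is "rename"
	return procs
}
```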

View File

@@ -0,0 +1,117 @@
package models
import (
"testing"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
)
type TestProcessor struct {
}
func (f *TestProcessor) SampleConfig() string { return "" }
func (f *TestProcessor) Description() string { return "" }
// Apply renames:
// "foo" to "fuz"
// "bar" to "baz"
// And it also drops measurements named "dropme"
func (f *TestProcessor) Apply(in ...telegraf.Metric) []telegraf.Metric {
out := make([]telegraf.Metric, 0)
for _, m := range in {
switch m.Name() {
case "foo":
out = append(out, testutil.TestMetric(1, "fuz"))
case "bar":
out = append(out, testutil.TestMetric(1, "baz"))
case "dropme":
// drop the metric!
default:
out = append(out, m)
}
}
return out
}
func NewTestRunningProcessor() *RunningProcessor {
out := &RunningProcessor{
Name: "test",
Processor: &TestProcessor{},
Config: &ProcessorConfig{Filter: Filter{}},
}
return out
}
func TestRunningProcessor(t *testing.T) {
inmetrics := []telegraf.Metric{
testutil.TestMetric(1, "foo"),
testutil.TestMetric(1, "bar"),
testutil.TestMetric(1, "baz"),
}
expectedNames := []string{
"fuz",
"baz",
"baz",
}
rfp := NewTestRunningProcessor()
filteredMetrics := rfp.Apply(inmetrics...)
actualNames := []string{
filteredMetrics[0].Name(),
filteredMetrics[1].Name(),
filteredMetrics[2].Name(),
}
assert.Equal(t, expectedNames, actualNames)
}
func TestRunningProcessor_WithNameDrop(t *testing.T) {
inmetrics := []telegraf.Metric{
testutil.TestMetric(1, "foo"),
testutil.TestMetric(1, "bar"),
testutil.TestMetric(1, "baz"),
}
expectedNames := []string{
"foo",
"baz",
"baz",
}
rfp := NewTestRunningProcessor()
rfp.Config.Filter.NameDrop = []string{"foo"}
assert.NoError(t, rfp.Config.Filter.Compile())
filteredMetrics := rfp.Apply(inmetrics...)
actualNames := []string{
filteredMetrics[0].Name(),
filteredMetrics[1].Name(),
filteredMetrics[2].Name(),
}
assert.Equal(t, expectedNames, actualNames)
}
func TestRunningProcessor_DroppedMetric(t *testing.T) {
inmetrics := []telegraf.Metric{
testutil.TestMetric(1, "dropme"),
testutil.TestMetric(1, "foo"),
testutil.TestMetric(1, "bar"),
}
expectedNames := []string{
"fuz",
"baz",
}
rfp := NewTestRunningProcessor()
filteredMetrics := rfp.Apply(inmetrics...)
actualNames := []string{
filteredMetrics[0].Name(),
filteredMetrics[1].Name(),
}
assert.Equal(t, expectedNames, actualNames)
}

View File

@@ -27,8 +27,8 @@ func (t *telegrafLog) Write(p []byte) (n int, err error) {
// debug will set the log level to DEBUG
// quiet will set the log level to ERROR
// logfile will direct the logging output to a file. Empty string is
// interpreted as stdout. If there is an error opening the file the
// logger will fallback to stdout.
// interpreted as stderr. If there is an error opening the file, the
// logger will fall back to stderr.
func SetupLogging(debug, quiet bool, logfile string) {
if debug {
wlog.SetLevel(wlog.DEBUG)
@@ -41,17 +41,17 @@ func SetupLogging(debug, quiet bool, logfile string) {
if logfile != "" {
if _, err := os.Stat(logfile); os.IsNotExist(err) {
if oFile, err = os.Create(logfile); err != nil {
log.Printf("E! Unable to create %s (%s), using stdout", logfile, err)
oFile = os.Stdout
log.Printf("E! Unable to create %s (%s), using stderr", logfile, err)
oFile = os.Stderr
}
} else {
if oFile, err = os.OpenFile(logfile, os.O_APPEND|os.O_WRONLY, os.ModeAppend); err != nil {
log.Printf("E! Unable to append to %s (%s), using stdout", logfile, err)
oFile = os.Stdout
log.Printf("E! Unable to append to %s (%s), using stderr", logfile, err)
oFile = os.Stderr
}
}
} else {
oFile = os.Stdout
oFile = os.Stderr
}
log.SetOutput(newTelegrafWriter(oFile))
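
To illustrate the behavior above: with debug enabled and an empty logfile, all output (including D! lines) goes to stderr. A hedged same-package sketch, assuming only the SetupLogging signature shown here:

```go
// exampleLogging is illustrative, not part of the package.
func exampleLogging() {
	// Empty logfile string: log to stderr. stderr is also the fallback
	// when a named file cannot be created or appended to.
	SetupLogging(true, false, "")
	log.Printf("D! debug messages are now emitted to stderr")
}
```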

View File

@@ -4,6 +4,7 @@ import (
"time"
"github.com/influxdata/influxdb/client/v2"
"github.com/influxdata/influxdb/models"
)
// ValueType is an enumeration of metric types that represent a simple value.
@@ -33,6 +34,10 @@ type Metric interface {
// UnixNano returns the unix nano time of the metric
UnixNano() int64
// HashID returns a non-cryptographic hash of the metric (name + tags)
// NOTE: do not persist this value to disk or depend on it across process restarts.
HashID() uint64
// Fields returns the fields for the metric
Fields() map[string]interface{}
@@ -44,13 +49,28 @@ type Metric interface {
// Point returns an influxdb client.Point object
Point() *client.Point
// SetAggregate sets the metric's aggregate status
// This is so that aggregate metrics don't get re-sent to aggregator plugins
SetAggregate(bool)
// IsAggregate returns true if the metric is an aggregate
IsAggregate() bool
}
// metric is a wrapper around the influxdb models.Point struct
type metric struct {
pt *client.Point
pt models.Point
mType ValueType
isaggregate bool
}
func NewMetricFromPoint(pt models.Point) Metric {
return &metric{
pt: pt,
mType: Untyped,
}
}
// NewMetric returns an untyped metric.
@@ -60,7 +80,7 @@ func NewMetric(
fields map[string]interface{},
t time.Time,
) (Metric, error) {
pt, err := client.NewPoint(name, tags, fields, t)
pt, err := models.NewPoint(name, models.NewTags(tags), fields, t)
if err != nil {
return nil, err
}
@@ -79,7 +99,7 @@ func NewGaugeMetric(
fields map[string]interface{},
t time.Time,
) (Metric, error) {
pt, err := client.NewPoint(name, tags, fields, t)
pt, err := models.NewPoint(name, models.NewTags(tags), fields, t)
if err != nil {
return nil, err
}
@@ -98,7 +118,7 @@ func NewCounterMetric(
fields map[string]interface{},
t time.Time,
) (Metric, error) {
pt, err := client.NewPoint(name, tags, fields, t)
pt, err := models.NewPoint(name, models.NewTags(tags), fields, t)
if err != nil {
return nil, err
}
@@ -113,7 +133,7 @@ func (m *metric) Name() string {
}
func (m *metric) Tags() map[string]string {
return m.pt.Tags()
return m.pt.Tags().Map()
}
func (m *metric) Time() time.Time {
@@ -124,6 +144,10 @@ func (m *metric) Type() ValueType {
return m.mType
}
func (m *metric) HashID() uint64 {
return m.pt.HashID()
}
func (m *metric) UnixNano() int64 {
return m.pt.UnixNano()
}
@@ -141,5 +165,13 @@ func (m *metric) PrecisionString(precison string) string {
}
func (m *metric) Point() *client.Point {
return m.pt
return client.NewPointFrom(m.pt)
}
func (m *metric) IsAggregate() bool {
return m.isaggregate
}
func (m *metric) SetAggregate(b bool) {
m.isaggregate = b
}
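
Taken together, the additions above give callers a stable per-series identity (HashID) and an aggregate flag. A small sketch of typical use, assuming only the NewMetric signature shown in this diff:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/influxdata/telegraf"
)

func main() {
	m, err := telegraf.NewMetric(
		"cpu",
		map[string]string{"host": "server01"},
		map[string]interface{}{"usage": 42.0},
		time.Now(),
	)
	if err != nil {
		log.Fatal(err)
	}
	// HashID is derived from name + tags, so two metrics belonging to the
	// same series hash to the same ID within a process.
	fmt.Println(m.HashID())

	// Mark the metric so aggregator plugins do not consume it again.
	m.SetAggregate(true)
	fmt.Println(m.IsAggregate()) // true
}
```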

View File

@@ -0,0 +1,5 @@
package all
import (
_ "github.com/influxdata/telegraf/plugins/aggregators/minmax"
)

View File

@@ -0,0 +1,119 @@
package minmax
import (
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/aggregators"
)
type MinMax struct {
cache map[uint64]aggregate
}
func NewMinMax() telegraf.Aggregator {
mm := &MinMax{}
mm.Reset()
return mm
}
type aggregate struct {
fields map[string]minmax
name string
tags map[string]string
}
type minmax struct {
min float64
max float64
}
var sampleConfig = `
## General Aggregator Arguments:
## The period on which to flush & clear the aggregator.
period = "30s"
## If true, the original metric will be dropped by the
## aggregator and will not get sent to the output plugins.
drop_original = false
`
func (m *MinMax) SampleConfig() string {
return sampleConfig
}
func (m *MinMax) Description() string {
return "Keep the aggregate min/max of each metric passing through."
}
func (m *MinMax) Add(in telegraf.Metric) {
id := in.HashID()
if _, ok := m.cache[id]; !ok {
// hit an uncached metric, create caches for first time:
a := aggregate{
name: in.Name(),
tags: in.Tags(),
fields: make(map[string]minmax),
}
for k, v := range in.Fields() {
if fv, ok := convert(v); ok {
a.fields[k] = minmax{
min: fv,
max: fv,
}
}
}
m.cache[id] = a
} else {
for k, v := range in.Fields() {
if fv, ok := convert(v); ok {
if _, ok := m.cache[id].fields[k]; !ok {
// hit an uncached field of a cached metric
m.cache[id].fields[k] = minmax{
min: fv,
max: fv,
}
continue
}
if fv < m.cache[id].fields[k].min {
tmp := m.cache[id].fields[k]
tmp.min = fv
m.cache[id].fields[k] = tmp
} else if fv > m.cache[id].fields[k].max {
tmp := m.cache[id].fields[k]
tmp.max = fv
m.cache[id].fields[k] = tmp
}
}
}
}
}
func (m *MinMax) Push(acc telegraf.Accumulator) {
for _, aggregate := range m.cache {
fields := map[string]interface{}{}
for k, v := range aggregate.fields {
fields[k+"_min"] = v.min
fields[k+"_max"] = v.max
}
acc.AddFields(aggregate.name, fields, aggregate.tags)
}
}
func (m *MinMax) Reset() {
m.cache = make(map[uint64]aggregate)
}
func convert(in interface{}) (float64, bool) {
switch v := in.(type) {
case float64:
return v, true
case int64:
return float64(v), true
default:
return 0, false
}
}
func init() {
aggregators.Add("minmax", func() telegraf.Aggregator {
return NewMinMax()
})
}

View File

@@ -0,0 +1,162 @@
package minmax
import (
"testing"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/testutil"
)
var m1, _ = telegraf.NewMetric("m1",
map[string]string{"foo": "bar"},
map[string]interface{}{
"a": int64(1),
"b": int64(1),
"c": int64(1),
"d": int64(1),
"e": int64(1),
"f": float64(2),
"g": float64(2),
"h": float64(2),
"i": float64(2),
"j": float64(3),
},
time.Now(),
)
var m2, _ = telegraf.NewMetric("m1",
map[string]string{"foo": "bar"},
map[string]interface{}{
"a": int64(1),
"b": int64(3),
"c": int64(3),
"d": int64(3),
"e": int64(3),
"f": float64(1),
"g": float64(1),
"h": float64(1),
"i": float64(1),
"j": float64(1),
"k": float64(200),
"ignoreme": "string",
"andme": true,
},
time.Now(),
)
func BenchmarkApply(b *testing.B) {
minmax := NewMinMax()
for n := 0; n < b.N; n++ {
minmax.Add(m1)
minmax.Add(m2)
}
}
// Test two metrics getting added.
func TestMinMaxWithPeriod(t *testing.T) {
acc := testutil.Accumulator{}
minmax := NewMinMax()
minmax.Add(m1)
minmax.Add(m2)
minmax.Push(&acc)
expectedFields := map[string]interface{}{
"a_max": float64(1),
"a_min": float64(1),
"b_max": float64(3),
"b_min": float64(1),
"c_max": float64(3),
"c_min": float64(1),
"d_max": float64(3),
"d_min": float64(1),
"e_max": float64(3),
"e_min": float64(1),
"f_max": float64(2),
"f_min": float64(1),
"g_max": float64(2),
"g_min": float64(1),
"h_max": float64(2),
"h_min": float64(1),
"i_max": float64(2),
"i_min": float64(1),
"j_max": float64(3),
"j_min": float64(1),
"k_max": float64(200),
"k_min": float64(200),
}
expectedTags := map[string]string{
"foo": "bar",
}
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
}
// Test two metrics getting added with a push/reset in between (simulates
// getting added in different periods).
func TestMinMaxDifferentPeriods(t *testing.T) {
acc := testutil.Accumulator{}
minmax := NewMinMax()
minmax.Add(m1)
minmax.Push(&acc)
expectedFields := map[string]interface{}{
"a_max": float64(1),
"a_min": float64(1),
"b_max": float64(1),
"b_min": float64(1),
"c_max": float64(1),
"c_min": float64(1),
"d_max": float64(1),
"d_min": float64(1),
"e_max": float64(1),
"e_min": float64(1),
"f_max": float64(2),
"f_min": float64(2),
"g_max": float64(2),
"g_min": float64(2),
"h_max": float64(2),
"h_min": float64(2),
"i_max": float64(2),
"i_min": float64(2),
"j_max": float64(3),
"j_min": float64(3),
}
expectedTags := map[string]string{
"foo": "bar",
}
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
acc.ClearMetrics()
minmax.Reset()
minmax.Add(m2)
minmax.Push(&acc)
expectedFields = map[string]interface{}{
"a_max": float64(1),
"a_min": float64(1),
"b_max": float64(3),
"b_min": float64(3),
"c_max": float64(3),
"c_min": float64(3),
"d_max": float64(3),
"d_min": float64(3),
"e_max": float64(3),
"e_min": float64(3),
"f_max": float64(1),
"f_min": float64(1),
"g_max": float64(1),
"g_min": float64(1),
"h_max": float64(1),
"h_min": float64(1),
"i_max": float64(1),
"i_min": float64(1),
"j_max": float64(1),
"j_min": float64(1),
"k_max": float64(200),
"k_min": float64(200),
}
expectedTags = map[string]string{
"foo": "bar",
}
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
}

View File

@@ -0,0 +1,11 @@
package aggregators
import "github.com/influxdata/telegraf"
type Creator func() telegraf.Aggregator
var Aggregators = map[string]Creator{}
func Add(name string, creator Creator) {
Aggregators[name] = creator
}
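
For context, a hedged sketch of how this registry is typically consumed when building a pipeline; newAggregator and its error handling are illustrative:

```go
package main

import (
	"log"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/aggregators"
	// registers minmax (and any future aggregators) via their init funcs
	_ "github.com/influxdata/telegraf/plugins/aggregators/all"
)

func newAggregator(name string) telegraf.Aggregator {
	creator, ok := aggregators.Aggregators[name]
	if !ok {
		log.Fatalf("undefined aggregator: %q", name)
	}
	return creator() // a fresh instance, ready for Add/Push/Reset
}

func main() {
	agg := newAggregator("minmax")
	_ = agg
}
```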

View File

@@ -31,6 +31,7 @@ import (
_ "github.com/influxdata/telegraf/plugins/inputs/iptables"
_ "github.com/influxdata/telegraf/plugins/inputs/jolokia"
_ "github.com/influxdata/telegraf/plugins/inputs/kafka_consumer"
_ "github.com/influxdata/telegraf/plugins/inputs/kubernetes"
_ "github.com/influxdata/telegraf/plugins/inputs/leofs"
_ "github.com/influxdata/telegraf/plugins/inputs/logparser"
_ "github.com/influxdata/telegraf/plugins/inputs/lustre2"

View File

@@ -7,7 +7,7 @@
#### Description
The Cassandra plugin collects Cassandra/JVM metrics exposed as MBean's attributes through jolokia REST endpoint. All metrics are collected for each server configured.
The Cassandra plugin collects Cassandra 3 / JVM metrics exposed as MBean attributes through the Jolokia REST endpoint. All metrics are collected for each server configured.
See: https://jolokia.org/ and [Cassandra Documentation](http://docs.datastax.com/en/cassandra/3.x/cassandra/operations/monitoringCassandraTOC.html)
@@ -38,9 +38,9 @@ Here is a list of metrics that might be useful to monitor your cassandra cluster
####measurement = javaGarbageCollector
- /java.lang:type=GarbageCollector,name=ConcurrentMarkSweep/CollectionTime
- /java.lang:type=GarbageCollector,name=ConcurrentMarkSweep/CollectionTime
- /java.lang:type=GarbageCollector,name=ConcurrentMarkSweep/CollectionCount
- /java.lang:type=GarbageCollector,name=ParNew/CollectionTime
- /java.lang:type=GarbageCollector,name=ParNew/CollectionTime
- /java.lang:type=GarbageCollector,name=ParNew/CollectionCount
####measurement = javaMemory
@@ -50,13 +50,13 @@ Here is a list of metrics that might be useful to monitor your cassandra cluster
####measurement = cassandraCache
- /org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=Hit
- /org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=Hits
- /org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=Requests
- /org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=Entries
- /org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=Size
- /org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=Capacity
- /org.apache.cassandra.metrics:type=Cache,scope=RowCache,name=Hit
- /org.apache.cassandra.metrics:type=Cache,scope=RowCache,name=Requests
- /org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=Size
- /org.apache.cassandra.metrics:type=Cache,scope=KeyCache,name=Capacity
- /org.apache.cassandra.metrics:type=Cache,scope=RowCache,name=Hits
- /org.apache.cassandra.metrics:type=Cache,scope=RowCache,name=Requests
- /org.apache.cassandra.metrics:type=Cache,scope=RowCache,name=Entries
- /org.apache.cassandra.metrics:type=Cache,scope=RowCache,name=Size
- /org.apache.cassandra.metrics:type=Cache,scope=RowCache,name=Capacity
@@ -67,33 +67,33 @@ Here is a list of metrics that might be useful to monitor your cassandra cluster
####measurement = cassandraClientRequest
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=TotalLatency
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=TotalLatency
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Latency
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Latency
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Timeouts
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=TotalLatency
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=TotalLatency
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Latency
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Latency
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Timeouts
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Timeouts
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Unavailables
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Unavailables
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Failures
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Failures
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Unavailables
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Unavailables
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Failures
- /org.apache.cassandra.metrics:type=ClientRequest,scope=Write,name=Failures
####measurement = cassandraCommitLog
- /org.apache.cassandra.metrics:type=CommitLog,name=PendingTasks
- /org.apache.cassandra.metrics:type=CommitLog,name=PendingTasks
- /org.apache.cassandra.metrics:type=CommitLog,name=TotalCommitLogSize
####measurement = cassandraCompaction
- /org.apache.cassandra.metrics:type=Compaction,name=CompletedTask
- /org.apache.cassandra.metrics:type=Compaction,name=PendingTasks
- /org.apache.cassandra.metrics:type=Compaction,name=CompletedTasks
- /org.apache.cassandra.metrics:type=Compaction,name=PendingTasks
- /org.apache.cassandra.metrics:type=Compaction,name=TotalCompactionsCompleted
- /org.apache.cassandra.metrics:type=Compaction,name=BytesCompacted
####measurement = cassandraStorage
- /org.apache.cassandra.metrics:type=Storage,name=Load
- /org.apache.cassandra.metrics:type=Storage,name=Exceptions
- /org.apache.cassandra.metrics:type=Storage,name=Exceptions
####measurement = cassandraTable
Using wildcards for "keyspace" and "scope" can create a lot of series, as metrics will be reported for every table and keyspace, including internal system tables. Specify a keyspace name and/or a table name to limit them.
@@ -101,25 +101,25 @@ Using wildcards for "keyspace" and "scope" can create a lot of series as metrics
- /org.apache.cassandra.metrics:type=Table,keyspace=\*,scope=\*,name=LiveDiskSpaceUsed
- /org.apache.cassandra.metrics:type=Table,keyspace=\*,scope=\*,name=TotalDiskSpaceUsed
- /org.apache.cassandra.metrics:type=Table,keyspace=\*,scope=\*,name=ReadLatency
- /org.apache.cassandra.metrics:type=Table,keyspace=\*,scope=\*,name=CoordinatorReadLatency
- /org.apache.cassandra.metrics:type=Table,keyspace=\*,scope=\*,name=WriteLatency
- /org.apache.cassandra.metrics:type=Table,keyspace=\*,scope=\*,name=ReadTotalLatency
- /org.apache.cassandra.metrics:type=Table,keyspace=\*,scope=\*,name=WriteTotalLatency
- /org.apache.cassandra.metrics:type=Table,keyspace=\*,scope=\*,name=CoordinatorReadLatency
- /org.apache.cassandra.metrics:type=Table,keyspace=\*,scope=\*,name=WriteLatency
- /org.apache.cassandra.metrics:type=Table,keyspace=\*,scope=\*,name=ReadTotalLatency
- /org.apache.cassandra.metrics:type=Table,keyspace=\*,scope=\*,name=WriteTotalLatency
####measurement = cassandraThreadPools
- /org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=CompactionExecutor,name=ActiveTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=AntiEntropyStage,name=ActiveTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=CompactionExecutor,name=ActiveTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=internal,scope=AntiEntropyStage,name=ActiveTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=CounterMutationStage,name=PendingTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=CounterMutationStage,name=CurrentlyBlockedTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=MutationStage,name=PendingTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=MutationStage,name=CurrentlyBlockedTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=MutationStage,name=CurrentlyBlockedTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadRepairStage,name=PendingTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadRepairStage,name=CurrentlyBlockedTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadRepairStage,name=CurrentlyBlockedTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadStage,name=PendingTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadStage,name=CurrentlyBlockedTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=RequestResponseStage,name=PendingTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=RequestResponseStage,name=CurrentlyBlockedTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=ReadStage,name=CurrentlyBlockedTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=RequestResponseStage,name=PendingTasks
- /org.apache.cassandra.metrics:type=ThreadPools,path=request,scope=RequestResponseStage,name=CurrentlyBlockedTasks

View File

@@ -2,6 +2,10 @@
This input plugin will capture specific statistics per cgroup.
Consider restricting paths to the set of cgroups you really
want to monitor if you have a large number of cgroups, to avoid
any cardinality issues.
The following file formats are supported:
* Single value
@@ -33,9 +37,8 @@ KEY1 VAL1\n
### Tags:
Measurements don't have any specific tags unless you define them at the telegraf level (defaults). We
used to have the path listed as a tag, but to keep cardinality in check it's easier to move this
value to a field. Thanks @sebito91!
All measurements have the following tags:
- path
### Configuration:

View File

@@ -11,15 +11,18 @@ type CGroup struct {
}
var sampleConfig = `
## Directories in which to look for files, globs are supported.
# paths = [
# "/cgroup/memory",
# "/cgroup/memory/child1",
# "/cgroup/memory/child2/*",
# ]
## cgroup stat fields, as file names, globs are supported.
## these file names are appended to each path from above.
# files = ["memory.*usage*", "memory.limit_in_bytes"]
## Directories in which to look for files, globs are supported.
## Consider restricting paths to the set of cgroups you really
## want to monitor if you have a large number of cgroups, to avoid
## any cardinality issues.
# paths = [
# "/cgroup/memory",
# "/cgroup/memory/child1",
# "/cgroup/memory/child2/*",
# ]
## cgroup stat fields, as file names, globs are supported.
## these file names are appended to each path from above.
# files = ["memory.*usage*", "memory.limit_in_bytes"]
`
func (g *CGroup) SampleConfig() string {

View File

@@ -56,9 +56,10 @@ func (g *CGroup) gatherDir(dir string, acc telegraf.Accumulator) error {
return err
}
}
fields["path"] = dir
acc.AddFields(metricName, fields, nil)
tags := map[string]string{"path": dir}
acc.AddFields(metricName, fields, tags)
return nil
}

View File

@@ -3,13 +3,10 @@
package cgroup
import (
"fmt"
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"reflect"
)
var cg1 = &CGroup{
@@ -24,32 +21,15 @@ var cg1 = &CGroup{
},
}
func assertContainsFields(a *testutil.Accumulator, t *testing.T, measurement string, fieldSet []map[string]interface{}) {
a.Lock()
defer a.Unlock()
numEquals := 0
for _, p := range a.Metrics {
if p.Measurement == measurement {
for _, fields := range fieldSet {
if reflect.DeepEqual(fields, p.Fields) {
numEquals++
}
}
}
}
if numEquals != len(fieldSet) {
assert.Fail(t, fmt.Sprintf("only %d of %d are equal", numEquals, len(fieldSet)))
}
}
func TestCgroupStatistics_1(t *testing.T) {
var acc testutil.Accumulator
err := cg1.Gather(&acc)
require.NoError(t, err)
tags := map[string]string{
"path": "testdata/memory",
}
fields := map[string]interface{}{
"memory.stat.cache": 1739362304123123123,
"memory.stat.rss": 1775325184,
@@ -62,9 +42,8 @@ func TestCgroupStatistics_1(t *testing.T) {
"memory.limit_in_bytes": 223372036854771712,
"memory.use_hierarchy": "12-781",
"notify_on_release": 0,
"path": "testdata/memory",
}
assertContainsFields(&acc, t, "cgroup", []map[string]interface{}{fields})
acc.AssertContainsTaggedFields(t, "cgroup", fields, tags)
}
// ======================================================================
@@ -80,14 +59,16 @@ func TestCgroupStatistics_2(t *testing.T) {
err := cg2.Gather(&acc)
require.NoError(t, err)
tags := map[string]string{
"path": "testdata/cpu",
}
fields := map[string]interface{}{
"cpuacct.usage_percpu.0": -1452543795404,
"cpuacct.usage_percpu.1": 1376681271659,
"cpuacct.usage_percpu.2": 1450950799997,
"cpuacct.usage_percpu.3": -1473113374257,
"path": "testdata/cpu",
}
assertContainsFields(&acc, t, "cgroup", []map[string]interface{}{fields})
acc.AssertContainsTaggedFields(t, "cgroup", fields, tags)
}
// ======================================================================
@@ -103,16 +84,18 @@ func TestCgroupStatistics_3(t *testing.T) {
err := cg3.Gather(&acc)
require.NoError(t, err)
tags := map[string]string{
"path": "testdata/memory/group_1",
}
fields := map[string]interface{}{
"memory.limit_in_bytes": 223372036854771712,
"path": "testdata/memory/group_1",
}
acc.AssertContainsTaggedFields(t, "cgroup", fields, tags)
fieldsTwo := map[string]interface{}{
"memory.limit_in_bytes": 223372036854771712,
"path": "testdata/memory/group_2",
tags = map[string]string{
"path": "testdata/memory/group_2",
}
assertContainsFields(&acc, t, "cgroup", []map[string]interface{}{fields, fieldsTwo})
acc.AssertContainsTaggedFields(t, "cgroup", fields, tags)
}
// ======================================================================
@@ -128,22 +111,23 @@ func TestCgroupStatistics_4(t *testing.T) {
err := cg4.Gather(&acc)
require.NoError(t, err)
tags := map[string]string{
"path": "testdata/memory/group_1/group_1_1",
}
fields := map[string]interface{}{
"memory.limit_in_bytes": 223372036854771712,
"path": "testdata/memory/group_1/group_1_1",
}
acc.AssertContainsTaggedFields(t, "cgroup", fields, tags)
fieldsTwo := map[string]interface{}{
"memory.limit_in_bytes": 223372036854771712,
"path": "testdata/memory/group_1/group_1_2",
tags = map[string]string{
"path": "testdata/memory/group_1/group_1_2",
}
acc.AssertContainsTaggedFields(t, "cgroup", fields, tags)
fieldsThree := map[string]interface{}{
"memory.limit_in_bytes": 223372036854771712,
"path": "testdata/memory/group_2",
tags = map[string]string{
"path": "testdata/memory/group_2",
}
assertContainsFields(&acc, t, "cgroup", []map[string]interface{}{fields, fieldsTwo, fieldsThree})
acc.AssertContainsTaggedFields(t, "cgroup", fields, tags)
}
// ======================================================================
@@ -159,16 +143,18 @@ func TestCgroupStatistics_5(t *testing.T) {
err := cg5.Gather(&acc)
require.NoError(t, err)
tags := map[string]string{
"path": "testdata/memory/group_1/group_1_1",
}
fields := map[string]interface{}{
"memory.limit_in_bytes": 223372036854771712,
"path": "testdata/memory/group_1/group_1_1",
}
acc.AssertContainsTaggedFields(t, "cgroup", fields, tags)
fieldsTwo := map[string]interface{}{
"memory.limit_in_bytes": 223372036854771712,
"path": "testdata/memory/group_2/group_1_1",
tags = map[string]string{
"path": "testdata/memory/group_2/group_1_1",
}
assertContainsFields(&acc, t, "cgroup", []map[string]interface{}{fields, fieldsTwo})
acc.AssertContainsTaggedFields(t, "cgroup", fields, tags)
}
// ======================================================================
@@ -184,11 +170,13 @@ func TestCgroupStatistics_6(t *testing.T) {
err := cg6.Gather(&acc)
require.NoError(t, err)
tags := map[string]string{
"path": "testdata/memory",
}
fields := map[string]interface{}{
"memory.usage_in_bytes": 3513667584,
"memory.use_hierarchy": "12-781",
"memory.kmem.limit_in_bytes": 9223372036854771712,
"path": "testdata/memory",
}
assertContainsFields(&acc, t, "cgroup", []map[string]interface{}{fields})
acc.AssertContainsTaggedFields(t, "cgroup", fields, tags)
}

View File

@@ -103,9 +103,13 @@ func processChronycOutput(out string) (map[string]interface{}, map[string]string
tags["stratum"] = valueFields[0]
continue
}
if strings.Contains(strings.ToLower(name), "reference_id") {
tags["reference_id"] = valueFields[0]
continue
}
value, err := strconv.ParseFloat(valueFields[0], 64)
if err != nil {
tags[name] = strings.ToLower(valueFields[0])
tags[name] = strings.ToLower(strings.Join(valueFields, " "))
continue
}
if strings.Contains(stats[1], "slow") {

View File

@@ -27,7 +27,7 @@ func TestGather(t *testing.T) {
tags := map[string]string{
"reference_id": "192.168.1.22",
"leap_status": "normal",
"leap_status": "not synchronized",
"stratum": "3",
}
fields := map[string]interface{}{
@@ -85,7 +85,7 @@ Skew : 0.006 ppm
Root delay : 0.001655 seconds
Root dispersion : 0.003307 seconds
Update interval : 507.2 seconds
Leap status : Normal
Leap status : Not synchronized
`
args := os.Args

View File

@@ -0,0 +1,37 @@
# HAproxy Input Plugin
The [HAproxy](http://www.haproxy.org/) input plugin gathers metrics directly from any running HAproxy instance, either from the CSV generated by the HAproxy status page or from admin socket(s).
### Configuration:
```toml
# SampleConfig
[[inputs.haproxy]]
servers = ["http://1.2.3.4/haproxy?stats", "/var/run/haproxy*.sock"]
```
Server addresses must explicitly start with 'http' if you wish to use the HAproxy status page. Otherwise, the address is assumed to be a UNIX socket and the protocol prefix (if present) is discarded.
The following examples all resolve to the same socket:
```
socket:/var/run/haproxy.sock
unix:/var/run/haproxy.sock
foo:/var/run/haproxy.sock
/var/run/haproxy.sock
```
When using socket names, wildcard expansion is supported, so the plugin can gather stats from multiple sockets at once; the sketch below shows the resolution logic.
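
Under the hood this resolution amounts to stripping anything before the first `:` and expanding the remainder with Go's `filepath.Glob`. A minimal equivalent sketch (not the plugin's exact code; `resolveSocket` is illustrative):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// resolveSocket drops an optional "scheme:" prefix and expands any glob.
// If the mask matches nothing, the bare path is returned as-is.
func resolveSocket(addr string) ([]string, error) {
	parts := strings.SplitN(addr, ":", 2)
	path := parts[len(parts)-1]
	matches, err := filepath.Glob(path)
	if err != nil {
		return nil, err
	}
	if len(matches) == 0 {
		return []string{path}, nil
	}
	return matches, nil
}

func main() {
	socks, _ := resolveSocket("socket:/var/run/haproxy*.sock")
	fmt.Println(socks)
}
```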
If no servers are specified, then the default address of `http://127.0.0.1:1936/haproxy?stats` will be used.
### Measurements & Fields:
The plugin gathers the measurements outlined in the [HAproxy CSV format documentation](https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#9.1).
### Tags:
- All measurements have the following tags:
- server - address of server data is gathered from
- proxy - proxy name as reported in `pxname`
- sv - service name as reported in `svname`

View File

@@ -7,6 +7,7 @@ import (
"net"
"net/http"
"net/url"
"path/filepath"
"strconv"
"strings"
"sync"
@@ -17,7 +18,7 @@ import (
"github.com/influxdata/telegraf/plugins/inputs"
)
//CSV format: https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.1
//CSV format: https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#9.1
const (
HF_PXNAME = 0 // 0. pxname [LFBS]: proxy name
HF_SVNAME = 1 // 1. svname [LFBS]: service name (FRONTEND for frontend, BACKEND for backend, any name for server/listener)
@@ -93,12 +94,15 @@ var sampleConfig = `
## An array of addresses to gather stats about. Specify an ip or hostname
## with optional port, ie localhost, 10.10.3.33:1936, etc.
## Make sure you specify the complete path to the stats endpoint
## ie 10.10.3.33:1936/haproxy?stats
## including the protocol, ie http://10.10.3.33:1936/haproxy?stats
#
## If no servers are specified, then default to 127.0.0.1:1936/haproxy?stats
servers = ["http://myhaproxy.com:1936/haproxy?stats"]
## Or you can also use local socket
## servers = ["socket:/run/haproxy/admin.sock"]
##
## You can also use local socket with standard wildcard globbing.
## Server addresses not starting with 'http' will be treated as possible
## sockets, so both examples below are valid.
## servers = ["socket:/run/haproxy/admin.sock", "/run/haproxy/*.sock"]
`
func (r *haproxy) SampleConfig() string {
@@ -116,10 +120,36 @@ func (g *haproxy) Gather(acc telegraf.Accumulator) error {
return g.gatherServer("http://127.0.0.1:1936/haproxy?stats", acc)
}
endpoints := make([]string, 0, len(g.Servers))
for _, endpoint := range g.Servers {
if strings.HasPrefix(endpoint, "http") {
endpoints = append(endpoints, endpoint)
continue
}
socketPath := getSocketAddr(endpoint)
matches, err := filepath.Glob(socketPath)
if err != nil {
return err
}
if len(matches) == 0 {
endpoints = append(endpoints, socketPath)
} else {
for _, match := range matches {
endpoints = append(endpoints, match)
}
}
}
var wg sync.WaitGroup
errChan := errchan.New(len(g.Servers))
wg.Add(len(g.Servers))
for _, server := range g.Servers {
errChan := errchan.New(len(endpoints))
wg.Add(len(endpoints))
for _, server := range endpoints {
go func(serv string) {
defer wg.Done()
errChan.C <- g.gatherServer(serv, acc)
@@ -131,14 +161,7 @@ func (g *haproxy) Gather(acc telegraf.Accumulator) error {
}
func (g *haproxy) gatherServerSocket(addr string, acc telegraf.Accumulator) error {
var socketPath string
socketAddr := strings.Split(addr, ":")
if len(socketAddr) >= 2 {
socketPath = socketAddr[1]
} else {
socketPath = socketAddr[0]
}
socketPath := getSocketAddr(addr)
c, err := net.Dial("unix", socketPath)
@@ -196,6 +219,16 @@ func (g *haproxy) gatherServer(addr string, acc telegraf.Accumulator) error {
return importCsvResult(res.Body, acc, u.Host)
}
func getSocketAddr(sock string) string {
socketAddr := strings.Split(sock, ":")
if len(socketAddr) >= 2 {
return socketAddr[1]
} else {
return socketAddr[0]
}
}
func importCsvResult(r io.Reader, acc telegraf.Accumulator, host string) error {
csv := csv.NewReader(r)
result, err := csv.ReadAll()

View File

@@ -72,38 +72,7 @@ func TestHaproxyGeneratesMetricsWithAuthentication(t *testing.T) {
"sv": "host0",
}
fields := map[string]interface{}{
"active_servers": uint64(1),
"backup_servers": uint64(0),
"bin": uint64(510913516),
"bout": uint64(2193856571),
"check_duration": uint64(10),
"cli_abort": uint64(73),
"ctime": uint64(2),
"downtime": uint64(0),
"dresp": uint64(0),
"econ": uint64(0),
"eresp": uint64(1),
"http_response.1xx": uint64(0),
"http_response.2xx": uint64(119534),
"http_response.3xx": uint64(48051),
"http_response.4xx": uint64(2345),
"http_response.5xx": uint64(1056),
"lbtot": uint64(171013),
"qcur": uint64(0),
"qmax": uint64(0),
"qtime": uint64(0),
"rate": uint64(3),
"rate_max": uint64(12),
"rtime": uint64(312),
"scur": uint64(1),
"smax": uint64(32),
"srv_abort": uint64(1),
"stot": uint64(171014),
"ttime": uint64(2341),
"wredis": uint64(0),
"wretr": uint64(1),
}
fields := HaproxyGetFieldValues()
acc.AssertContainsTaggedFields(t, "haproxy", fields, tags)
//Here, we should get error because we don't pass authentication data
@@ -136,102 +105,58 @@ func TestHaproxyGeneratesMetricsWithoutAuthentication(t *testing.T) {
"sv": "host0",
}
fields := map[string]interface{}{
"active_servers": uint64(1),
"backup_servers": uint64(0),
"bin": uint64(510913516),
"bout": uint64(2193856571),
"check_duration": uint64(10),
"cli_abort": uint64(73),
"ctime": uint64(2),
"downtime": uint64(0),
"dresp": uint64(0),
"econ": uint64(0),
"eresp": uint64(1),
"http_response.1xx": uint64(0),
"http_response.2xx": uint64(119534),
"http_response.3xx": uint64(48051),
"http_response.4xx": uint64(2345),
"http_response.5xx": uint64(1056),
"lbtot": uint64(171013),
"qcur": uint64(0),
"qmax": uint64(0),
"qtime": uint64(0),
"rate": uint64(3),
"rate_max": uint64(12),
"rtime": uint64(312),
"scur": uint64(1),
"smax": uint64(32),
"srv_abort": uint64(1),
"stot": uint64(171014),
"ttime": uint64(2341),
"wredis": uint64(0),
"wretr": uint64(1),
}
fields := HaproxyGetFieldValues()
acc.AssertContainsTaggedFields(t, "haproxy", fields, tags)
}
func TestHaproxyGeneratesMetricsUsingSocket(t *testing.T) {
var randomNumber int64
binary.Read(rand.Reader, binary.LittleEndian, &randomNumber)
sock, err := net.Listen("unix", fmt.Sprintf("/tmp/test-haproxy%d.sock", randomNumber))
if err != nil {
t.Fatal("Cannot initialize socket ")
var sockets [5]net.Listener
_globmask := "/tmp/test-haproxy*.sock"
_badmask := "/tmp/test-fail-haproxy*.sock"
for i := 0; i < 5; i++ {
binary.Read(rand.Reader, binary.LittleEndian, &randomNumber)
sockname := fmt.Sprintf("/tmp/test-haproxy%d.sock", randomNumber)
sock, err := net.Listen("unix", sockname)
if err != nil {
t.Fatal("Cannot initialize socket ")
}
sockets[i] = sock
defer sock.Close()
s := statServer{}
go s.serverSocket(sock)
}
defer sock.Close()
s := statServer{}
go s.serverSocket(sock)
r := &haproxy{
Servers: []string{sock.Addr().String()},
Servers: []string{_globmask},
}
var acc testutil.Accumulator
err = r.Gather(&acc)
err := r.Gather(&acc)
require.NoError(t, err)
tags := map[string]string{
"proxy": "be_app",
"server": sock.Addr().String(),
"sv": "host0",
fields := HaproxyGetFieldValues()
for _, sock := range sockets {
tags := map[string]string{
"proxy": "be_app",
"server": sock.Addr().String(),
"sv": "host0",
}
acc.AssertContainsTaggedFields(t, "haproxy", fields, tags)
}
fields := map[string]interface{}{
"active_servers": uint64(1),
"backup_servers": uint64(0),
"bin": uint64(510913516),
"bout": uint64(2193856571),
"check_duration": uint64(10),
"cli_abort": uint64(73),
"ctime": uint64(2),
"downtime": uint64(0),
"dresp": uint64(0),
"econ": uint64(0),
"eresp": uint64(1),
"http_response.1xx": uint64(0),
"http_response.2xx": uint64(119534),
"http_response.3xx": uint64(48051),
"http_response.4xx": uint64(2345),
"http_response.5xx": uint64(1056),
"lbtot": uint64(171013),
"qcur": uint64(0),
"qmax": uint64(0),
"qtime": uint64(0),
"rate": uint64(3),
"rate_max": uint64(12),
"rtime": uint64(312),
"scur": uint64(1),
"smax": uint64(32),
"srv_abort": uint64(1),
"stot": uint64(171014),
"ttime": uint64(2341),
"wredis": uint64(0),
"wretr": uint64(1),
}
acc.AssertContainsTaggedFields(t, "haproxy", fields, tags)
// This mask should not match any socket
r.Servers = []string{_badmask}
err = r.Gather(&acc)
require.Error(t, err)
}
//When not passing server config, we default to localhost
@@ -246,6 +171,42 @@ func TestHaproxyDefaultGetFromLocalhost(t *testing.T) {
assert.Contains(t, err.Error(), "127.0.0.1:1936/haproxy?stats/;csv")
}
func HaproxyGetFieldValues() map[string]interface{} {
fields := map[string]interface{}{
"active_servers": uint64(1),
"backup_servers": uint64(0),
"bin": uint64(510913516),
"bout": uint64(2193856571),
"check_duration": uint64(10),
"cli_abort": uint64(73),
"ctime": uint64(2),
"downtime": uint64(0),
"dresp": uint64(0),
"econ": uint64(0),
"eresp": uint64(1),
"http_response.1xx": uint64(0),
"http_response.2xx": uint64(119534),
"http_response.3xx": uint64(48051),
"http_response.4xx": uint64(2345),
"http_response.5xx": uint64(1056),
"lbtot": uint64(171013),
"qcur": uint64(0),
"qmax": uint64(0),
"qtime": uint64(0),
"rate": uint64(3),
"rate_max": uint64(12),
"rtime": uint64(312),
"scur": uint64(1),
"smax": uint64(32),
"srv_abort": uint64(1),
"stot": uint64(171014),
"ttime": uint64(2341),
"wredis": uint64(0),
"wretr": uint64(1),
}
return fields
}
const csvOutputSample = `
# pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,check_status,check_code,check_duration,hrsp_1xx,hrsp_2xx,hrsp_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,req_rate,req_rate_max,req_tot,cli_abrt,srv_abrt,comp_in,comp_out,comp_byp,comp_rsp,lastsess,last_chk,last_agt,qtime,ctime,rtime,ttime,
fe_app,FRONTEND,,81,288,713,2000,1094063,5557055817,24096715169,1102,80,95740,,,17,19,OPEN,,,,,,,,,2,16,113,13,114,,0,18,0,102,,,,0,1314093,537036,123452,11966,1360,,35,140,1987928,,,0,0,0,0,,,,,,,,

View File

@@ -0,0 +1,43 @@
package http_listener
import (
"sync/atomic"
)
type pool struct {
buffers chan []byte
size int
created int64
}
// NewPool returns a new pool object.
// n is the number of buffers
// bufSize is the size (in bytes) of each buffer
func NewPool(n, bufSize int) *pool {
return &pool{
buffers: make(chan []byte, n),
size: bufSize,
}
}
func (p *pool) get() []byte {
select {
case b := <-p.buffers:
return b
default:
atomic.AddInt64(&p.created, 1)
return make([]byte, p.size)
}
}
func (p *pool) put(b []byte) {
select {
case p.buffers <- b:
default:
}
}
func (p *pool) ncreated() int64 {
return atomic.LoadInt64(&p.created)
}
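
A sketch of the intended usage pattern: borrow a buffer for the lifetime of one request and return it afterwards. When the channel is empty, get allocates a fresh buffer (counted in created); when the channel is full, put simply drops the buffer for the GC to reclaim. examplePoolUsage is illustrative, not part of the plugin:

```go
func examplePoolUsage() {
	p := NewPool(200, 64*1024)
	buf := p.get()
	defer p.put(buf)
	// ... fill buf from a request body, parse it, etc. ...
	_ = buf
}
```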

View File

@@ -1,9 +1,9 @@
package http_listener
import (
"bufio"
"bytes"
"fmt"
"compress/gzip"
"io"
"log"
"net"
"net/http"
@@ -13,135 +13,137 @@ import (
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/inputs/http_listener/stoppableListener"
"github.com/influxdata/telegraf/plugins/parsers"
"github.com/influxdata/telegraf/plugins/parsers/influx"
)
type HttpListener struct {
const (
// DEFAULT_MAX_BODY_SIZE is the default maximum request body size, in bytes.
// If the request body exceeds this size, we return an HTTP 413 error.
// 500 MiB
DEFAULT_MAX_BODY_SIZE = 500 * 1024 * 1024
// DEFAULT_MAX_LINE_SIZE is the maximum size, in bytes, that can be allocated for
// a single InfluxDB point.
// 64 KiB
DEFAULT_MAX_LINE_SIZE = 64 * 1024
)
type HTTPListener struct {
ServiceAddress string
ReadTimeout internal.Duration
WriteTimeout internal.Duration
MaxBodySize int64
MaxLineSize int
sync.Mutex
mu sync.Mutex
wg sync.WaitGroup
listener *stoppableListener.StoppableListener
listener net.Listener
parser parsers.Parser
parser influx.InfluxParser
acc telegraf.Accumulator
pool *pool
}
const sampleConfig = `
## Address and port to host HTTP listener on
service_address = ":8186"
## timeouts
## maximum duration before timing out read of the request
read_timeout = "10s"
## maximum duration before timing out write of the response
write_timeout = "10s"
## Maximum allowed http request body size in bytes.
## 0 means to use the default of 524,288,000 bytes (500 mebibytes)
max_body_size = 0
## Maximum line size allowed to be sent in bytes.
## 0 means to use the default of 65536 bytes (64 kibibytes)
max_line_size = 0
`
func (t *HttpListener) SampleConfig() string {
func (h *HTTPListener) SampleConfig() string {
return sampleConfig
}
func (t *HttpListener) Description() string {
func (h *HTTPListener) Description() string {
return "Influx HTTP write listener"
}
func (t *HttpListener) Gather(_ telegraf.Accumulator) error {
func (h *HTTPListener) Gather(_ telegraf.Accumulator) error {
log.Printf("D! The http_listener has created %d buffers", h.pool.ncreated())
return nil
}
func (t *HttpListener) SetParser(parser parsers.Parser) {
t.parser = parser
}
// Start starts the http listener service.
func (t *HttpListener) Start(acc telegraf.Accumulator) error {
t.Lock()
defer t.Unlock()
func (h *HTTPListener) Start(acc telegraf.Accumulator) error {
h.mu.Lock()
defer h.mu.Unlock()
t.acc = acc
if h.MaxBodySize == 0 {
h.MaxBodySize = DEFAULT_MAX_BODY_SIZE
}
if h.MaxLineSize == 0 {
h.MaxLineSize = DEFAULT_MAX_LINE_SIZE
}
var rawListener, err = net.Listen("tcp", t.ServiceAddress)
if err != nil {
return err
}
t.listener, err = stoppableListener.New(rawListener)
h.acc = acc
h.pool = NewPool(200, h.MaxLineSize)
var listener, err = net.Listen("tcp", h.ServiceAddress)
if err != nil {
return err
}
h.listener = listener
go t.httpListen()
h.wg.Add(1)
go func() {
defer h.wg.Done()
h.httpListen()
}()
log.Printf("I! Started HTTP listener service on %s\n", t.ServiceAddress)
log.Printf("I! Started HTTP listener service on %s\n", h.ServiceAddress)
return nil
}
// Stop cleans up all resources
func (t *HttpListener) Stop() {
t.Lock()
defer t.Unlock()
func (h *HTTPListener) Stop() {
h.mu.Lock()
defer h.mu.Unlock()
t.listener.Stop()
t.listener.Close()
h.listener.Close()
h.wg.Wait()
t.wg.Wait()
log.Println("I! Stopped HTTP listener service on ", t.ServiceAddress)
log.Println("I! Stopped HTTP listener service on ", h.ServiceAddress)
}
// httpListen listens for HTTP requests.
func (t *HttpListener) httpListen() error {
if t.ReadTimeout.Duration < time.Second {
t.ReadTimeout.Duration = time.Second * 10
// httpListen sets up an http.Server and calls server.Serve.
// Like server.Serve, httpListen always returns a non-nil error; for this
// reason, the returned error can generally be ignored.
// see https://golang.org/pkg/net/http/#Server.Serve
func (h *HTTPListener) httpListen() error {
if h.ReadTimeout.Duration < time.Second {
h.ReadTimeout.Duration = time.Second * 10
}
if t.WriteTimeout.Duration < time.Second {
t.WriteTimeout.Duration = time.Second * 10
if h.WriteTimeout.Duration < time.Second {
h.WriteTimeout.Duration = time.Second * 10
}
var server = http.Server{
Handler: t,
ReadTimeout: t.ReadTimeout.Duration,
WriteTimeout: t.WriteTimeout.Duration,
Handler: h,
ReadTimeout: h.ReadTimeout.Duration,
WriteTimeout: h.WriteTimeout.Duration,
}
return server.Serve(t.listener)
return server.Serve(h.listener)
}
func (t *HttpListener) ServeHTTP(res http.ResponseWriter, req *http.Request) {
t.wg.Add(1)
defer t.wg.Done()
func (h *HTTPListener) ServeHTTP(res http.ResponseWriter, req *http.Request) {
switch req.URL.Path {
case "/write":
var http400msg bytes.Buffer
var partial string
scanner := bufio.NewScanner(req.Body)
scanner.Buffer([]byte(""), 128*1024)
for scanner.Scan() {
metrics, err := t.parser.Parse(scanner.Bytes())
if err == nil {
for _, m := range metrics {
t.acc.AddFields(m.Name(), m.Fields(), m.Tags(), m.Time())
}
partial = "partial write: "
} else {
http400msg.WriteString(err.Error() + " ")
}
}
if err := scanner.Err(); err != nil {
http.Error(res, "Internal server error: "+err.Error(), http.StatusInternalServerError)
} else if http400msg.Len() > 0 {
res.Header().Set("Content-Type", "application/json")
res.Header().Set("X-Influxdb-Version", "1.0")
res.WriteHeader(http.StatusBadRequest)
res.Write([]byte(fmt.Sprintf(`{"error":"%s%s"}`, partial, http400msg.String())))
} else {
res.WriteHeader(http.StatusNoContent)
}
h.serveWrite(res, req)
case "/query":
// Deliver a dummy response to the query endpoint, as some InfluxDB
// clients test endpoint availability with a query
@@ -158,8 +160,135 @@ func (t *HttpListener) ServeHTTP(res http.ResponseWriter, req *http.Request) {
}
}
func (h *HTTPListener) serveWrite(res http.ResponseWriter, req *http.Request) {
// Check that the content length is not too large for us to handle.
if req.ContentLength > h.MaxBodySize {
tooLarge(res)
return
}
now := time.Now()
// Handle gzip request bodies
body := req.Body
var err error
if req.Header.Get("Content-Encoding") == "gzip" {
body, err = gzip.NewReader(req.Body)
defer body.Close()
if err != nil {
log.Println("E! " + err.Error())
badRequest(res)
return
}
}
body = http.MaxBytesReader(res, body, h.MaxBodySize)
var return400 bool
var hangingBytes bool
buf := h.pool.get()
defer h.pool.put(buf)
bufStart := 0
for {
n, err := io.ReadFull(body, buf[bufStart:])
if err != nil && err != io.ErrUnexpectedEOF && err != io.EOF {
log.Println("E! " + err.Error())
// problem reading the request body
badRequest(res)
return
}
if err == io.EOF {
if return400 {
badRequest(res)
} else {
res.WriteHeader(http.StatusNoContent)
}
return
}
if hangingBytes {
i := bytes.IndexByte(buf, '\n')
if i == -1 {
// still didn't find a newline, keep scanning
continue
}
// rotate the bytes remaining after the first newline to the front of the buffer
i++ // start copying after the newline
bufStart = len(buf) - i
if bufStart > 0 {
copy(buf, buf[i:])
}
hangingBytes = false
continue
}
if err == io.ErrUnexpectedEOF {
// finished reading the request body
if err := h.parse(buf[:n+bufStart], now); err != nil {
log.Println("E! " + err.Error())
return400 = true
}
if return400 {
badRequest(res)
} else {
res.WriteHeader(http.StatusNoContent)
}
return
}
// if we got down here it means that we filled our buffer, and there
// are still bytes remaining to be read. So we will parse up until the
// final newline, then push the rest of the bytes into the next buffer.
i := bytes.LastIndexByte(buf, '\n')
if i == -1 {
// drop any line longer than the max buffer size
log.Printf("E! http_listener received a single line longer than the maximum of %d bytes",
len(buf))
hangingBytes = true
return400 = true
bufStart = 0
continue
}
if err := h.parse(buf[:i], now); err != nil {
log.Println("E! " + err.Error())
return400 = true
}
// rotate the bytes remaining after the last newline to the front of the buffer
i++ // start copying after the newline
bufStart = len(buf) - i
if bufStart > 0 {
copy(buf, buf[i:])
}
}
}
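
The loop above hinges on one trick: parse everything up to the last newline in the buffer, then rotate the unterminated tail to the front before the next read. A stand-alone sketch of that rotation (splitChunk is illustrative and assumes the standard bytes package):

```go
// splitChunk returns a copy of the complete-lines prefix of buf and the
// number of trailing bytes (an unterminated line) carried into the next
// read. It isolates the rotate-to-front logic used in serveWrite.
func splitChunk(buf []byte) (complete []byte, carry int) {
	i := bytes.LastIndexByte(buf, '\n')
	if i == -1 {
		return nil, 0 // no newline: the line is longer than the buffer
	}
	// Copy the complete lines out before rotating, since the rotate below
	// overwrites the front of buf.
	complete = append([]byte(nil), buf[:i]...)
	carry = len(buf) - (i + 1)
	copy(buf, buf[i+1:]) // move the unterminated tail to the front
	return complete, carry
}
```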
func (h *HTTPListener) parse(b []byte, t time.Time) error {
metrics, err := h.parser.ParseWithDefaultTime(b, t)
for _, m := range metrics {
h.acc.AddFields(m.Name(), m.Fields(), m.Tags(), m.Time())
}
return err
}
func tooLarge(res http.ResponseWriter) {
res.Header().Set("Content-Type", "application/json")
res.Header().Set("X-Influxdb-Version", "1.0")
res.WriteHeader(http.StatusRequestEntityTooLarge)
res.Write([]byte(`{"error":"http: request body too large"}`))
}
func badRequest(res http.ResponseWriter) {
res.Header().Set("Content-Type", "application/json")
res.Header().Set("X-Influxdb-Version", "1.0")
res.WriteHeader(http.StatusBadRequest)
res.Write([]byte(`{"error":"http: bad request"}`))
}
func init() {
inputs.Add("http_listener", func() telegraf.Input {
return &HttpListener{}
return &HTTPListener{
ServiceAddress: ":8186",
}
})
}
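
To exercise the gzip path above from the client side, a hedged sketch: compress a line-protocol payload in memory and POST it with Content-Encoding: gzip. The address assumes the default service_address of ":8186":

```go
package main

import (
	"bytes"
	"compress/gzip"
	"log"
	"net/http"
)

func main() {
	// Gzip a small line-protocol payload in memory.
	var body bytes.Buffer
	gz := gzip.NewWriter(&body)
	gz.Write([]byte("cpu_load_short,host=server01 value=12.0\n"))
	gz.Close()

	req, err := http.NewRequest("POST", "http://localhost:8186/write", &body)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Content-Encoding", "gzip")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	log.Println(resp.Status) // expect "204 No Content" on success
}
```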

View File

@@ -1,16 +1,16 @@
package http_listener
import (
"bytes"
"io/ioutil"
"net/http"
"sync"
"testing"
"time"
"github.com/influxdata/telegraf/plugins/parsers"
"github.com/influxdata/telegraf/testutil"
"bytes"
"github.com/stretchr/testify/require"
"net/http"
)
const (
@@ -27,17 +27,15 @@ cpu_load_short,host=server06 value=12.0 1422568543702900257
emptyMsg = ""
)
func newTestHttpListener() *HttpListener {
listener := &HttpListener{
func newTestHTTPListener() *HTTPListener {
listener := &HTTPListener{
ServiceAddress: ":8186",
}
return listener
}
func TestWriteHTTP(t *testing.T) {
listener := newTestHttpListener()
parser, _ := parsers.NewInfluxParser()
listener.SetParser(parser)
listener := newTestHTTPListener()
acc := &testutil.Accumulator{}
require.NoError(t, listener.Start(acc))
@@ -71,10 +69,10 @@ func TestWriteHTTP(t *testing.T) {
)
}
// Post a gigantic metric to the listener:
// Post a gigantic metric to the listener and verify that an error is returned:
resp, err = http.Post("http://localhost:8186/write?db=mydb", "", bytes.NewBuffer([]byte(hugeMetric)))
require.NoError(t, err)
require.EqualValues(t, 204, resp.StatusCode)
require.EqualValues(t, 400, resp.StatusCode)
time.Sleep(time.Millisecond * 15)
acc.AssertContainsTaggedFields(t, "cpu_load_short",
@@ -83,11 +81,133 @@ func TestWriteHTTP(t *testing.T) {
)
}
func TestWriteHTTPMaxLineSizeIncrease(t *testing.T) {
listener := &HTTPListener{
ServiceAddress: ":8296",
MaxLineSize: 128 * 1000,
}
acc := &testutil.Accumulator{}
require.NoError(t, listener.Start(acc))
defer listener.Stop()
time.Sleep(time.Millisecond * 25)
// Post a gigantic metric to the listener and verify that it writes OK this time:
resp, err := http.Post("http://localhost:8296/write?db=mydb", "", bytes.NewBuffer([]byte(hugeMetric)))
require.NoError(t, err)
require.EqualValues(t, 204, resp.StatusCode)
}
func TestWriteHTTPVerySmallMaxBody(t *testing.T) {
listener := &HTTPListener{
ServiceAddress: ":8297",
MaxBodySize: 4096,
}
acc := &testutil.Accumulator{}
require.NoError(t, listener.Start(acc))
defer listener.Stop()
time.Sleep(time.Millisecond * 25)
resp, err := http.Post("http://localhost:8297/write", "", bytes.NewBuffer([]byte(hugeMetric)))
require.NoError(t, err)
require.EqualValues(t, 413, resp.StatusCode)
}
func TestWriteHTTPVerySmallMaxLineSize(t *testing.T) {
listener := &HTTPListener{
ServiceAddress: ":8298",
MaxLineSize: 70,
}
acc := &testutil.Accumulator{}
require.NoError(t, listener.Start(acc))
defer listener.Stop()
time.Sleep(time.Millisecond * 25)
resp, err := http.Post("http://localhost:8298/write", "", bytes.NewBuffer([]byte(testMsgs)))
require.NoError(t, err)
require.EqualValues(t, 204, resp.StatusCode)
time.Sleep(time.Millisecond * 15)
hostTags := []string{"server02", "server03",
"server04", "server05", "server06"}
for _, hostTag := range hostTags {
acc.AssertContainsTaggedFields(t, "cpu_load_short",
map[string]interface{}{"value": float64(12)},
map[string]string{"host": hostTag},
)
}
}
func TestWriteHTTPLargeLinesSkipped(t *testing.T) {
listener := &HTTPListener{
ServiceAddress: ":8300",
MaxLineSize: 100,
}
acc := &testutil.Accumulator{}
require.NoError(t, listener.Start(acc))
defer listener.Stop()
time.Sleep(time.Millisecond * 25)
resp, err := http.Post("http://localhost:8300/write", "", bytes.NewBuffer([]byte(hugeMetric+testMsgs)))
require.NoError(t, err)
require.EqualValues(t, 400, resp.StatusCode)
time.Sleep(time.Millisecond * 15)
hostTags := []string{"server02", "server03",
"server04", "server05", "server06"}
for _, hostTag := range hostTags {
acc.AssertContainsTaggedFields(t, "cpu_load_short",
map[string]interface{}{"value": float64(12)},
map[string]string{"host": hostTag},
)
}
}
// test that writing gzipped data works
func TestWriteHTTPGzippedData(t *testing.T) {
listener := &HTTPListener{
ServiceAddress: ":8299",
}
acc := &testutil.Accumulator{}
require.NoError(t, listener.Start(acc))
defer listener.Stop()
time.Sleep(time.Millisecond * 25)
data, err := ioutil.ReadFile("./testdata/testmsgs.gz")
require.NoError(t, err)
req, err := http.NewRequest("POST", "http://localhost:8299/write", bytes.NewBuffer(data))
require.NoError(t, err)
req.Header.Set("Content-Encoding", "gzip")
client := &http.Client{}
resp, err := client.Do(req)
require.NoError(t, err)
require.EqualValues(t, 204, resp.StatusCode)
time.Sleep(time.Millisecond * 50)
hostTags := []string{"server02", "server03",
"server04", "server05", "server06"}
for _, hostTag := range hostTags {
acc.AssertContainsTaggedFields(t, "cpu_load_short",
map[string]interface{}{"value": float64(12)},
map[string]string{"host": hostTag},
)
}
}
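
The gzip fixture this test reads is checked in as a binary file (see "Binary file not shown" below), so the payload itself is not visible here. As a rough sketch of how such a fixture could be regenerated — assuming the same line-protocol payload the plain-text tests post — a standalone program like this would do:

```go
package main

import (
	"compress/gzip"
	"log"
	"os"
)

func main() {
	f, err := os.Create("testdata/testmsgs.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	zw := gzip.NewWriter(f)
	// same shape of payload as testMsgs in the tests above
	msg := "cpu_load_short,host=server02 value=12.0 1422568543702900257\n"
	if _, err := zw.Write([]byte(msg)); err != nil {
		log.Fatal(err)
	}
	// Close flushes remaining data and writes the gzip footer
	if err := zw.Close(); err != nil {
		log.Fatal(err)
	}
}
```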
// writes 25,000 metrics to the listener with 10 different writers
func TestWriteHTTPHighTraffic(t *testing.T) {
listener := &HttpListener{ServiceAddress: ":8286"}
parser, _ := parsers.NewInfluxParser()
listener.SetParser(parser)
listener := &HTTPListener{ServiceAddress: ":8286"}
acc := &testutil.Accumulator{}
require.NoError(t, listener.Start(acc))
@@ -99,26 +219,25 @@ func TestWriteHTTPHighTraffic(t *testing.T) {
var wg sync.WaitGroup
for i := 0; i < 10; i++ {
wg.Add(1)
go func() {
go func(innerwg *sync.WaitGroup) {
defer innerwg.Done()
for i := 0; i < 500; i++ {
resp, err := http.Post("http://localhost:8286/write?db=mydb", "", bytes.NewBuffer([]byte(testMsgs)))
require.NoError(t, err)
require.EqualValues(t, 204, resp.StatusCode)
}
wg.Done()
}()
}(&wg)
}
wg.Wait()
time.Sleep(time.Millisecond * 50)
time.Sleep(time.Millisecond * 250)
listener.Gather(acc)
require.Equal(t, int64(25000), int64(acc.NMetrics()))
}
func TestReceive404ForInvalidEndpoint(t *testing.T) {
listener := newTestHttpListener()
listener.parser, _ = parsers.NewInfluxParser()
listener := newTestHTTPListener()
acc := &testutil.Accumulator{}
require.NoError(t, listener.Start(acc))
@@ -135,8 +254,7 @@ func TestReceive404ForInvalidEndpoint(t *testing.T) {
func TestWriteHTTPInvalid(t *testing.T) {
time.Sleep(time.Millisecond * 250)
listener := newTestHttpListener()
listener.parser, _ = parsers.NewInfluxParser()
listener := newTestHTTPListener()
acc := &testutil.Accumulator{}
require.NoError(t, listener.Start(acc))
@@ -153,8 +271,7 @@ func TestWriteHTTPInvalid(t *testing.T) {
func TestWriteHTTPEmpty(t *testing.T) {
time.Sleep(time.Millisecond * 250)
listener := newTestHttpListener()
listener.parser, _ = parsers.NewInfluxParser()
listener := newTestHTTPListener()
acc := &testutil.Accumulator{}
require.NoError(t, listener.Start(acc))
@@ -171,8 +288,7 @@ func TestWriteHTTPEmpty(t *testing.T) {
func TestQueryAndPingHTTP(t *testing.T) {
time.Sleep(time.Millisecond * 250)
listener := newTestHttpListener()
listener.parser, _ = parsers.NewInfluxParser()
listener := newTestHTTPListener()
acc := &testutil.Accumulator{}
require.NoError(t, listener.Start(acc))

View File

@@ -1,10 +0,0 @@
Copyright (c) 2014, Eric Urban
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@@ -1,62 +0,0 @@
package stoppableListener
import (
"errors"
"net"
"time"
)
type StoppableListener struct {
*net.TCPListener //Wrapped listener
stop chan int //Channel used only to indicate listener should shutdown
}
func New(l net.Listener) (*StoppableListener, error) {
tcpL, ok := l.(*net.TCPListener)
if !ok {
return nil, errors.New("Cannot wrap listener")
}
retval := &StoppableListener{}
retval.TCPListener = tcpL
retval.stop = make(chan int)
return retval, nil
}
var StoppedError = errors.New("Listener stopped")
func (sl *StoppableListener) Accept() (net.Conn, error) {
for {
//Wait up to one second for a new connection
sl.SetDeadline(time.Now().Add(time.Second))
newConn, err := sl.TCPListener.Accept()
//Check for the channel being closed
select {
case <-sl.stop:
return nil, StoppedError
default:
//If the channel is still open, continue as normal
}
if err != nil {
netErr, ok := err.(net.Error)
//If this is a timeout, then continue to wait for
//new connections
if ok && netErr.Timeout() && netErr.Temporary() {
continue
}
}
return newConn, err
}
}
func (sl *StoppableListener) Stop() {
close(sl.stop)
}
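
This vendored wrapper polled `Accept` with a one-second deadline so that `Stop` could be noticed. For context, the usual alternative — which makes such a wrapper unnecessary — is to close the `net.Listener` itself, which unblocks `Accept` immediately; a hedged sketch of that pattern, not the plugin's actual replacement code:

```go
package listener

import (
	"log"
	"net"
)

// acceptLoop exits when shutdown closes both the done channel and the
// listener; closing the listener makes the blocked Accept return an error.
func acceptLoop(l net.Listener, done <-chan struct{}, handle func(net.Conn)) {
	for {
		conn, err := l.Accept()
		if err != nil {
			select {
			case <-done:
				return // expected: the listener was closed during shutdown
			default:
				log.Printf("accept error: %s", err)
				return
			}
		}
		go handle(conn)
	}
}
```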

Binary file not shown.

View File

@@ -0,0 +1,265 @@
# Kubernetes Input Plugin
**This plugin is experimental and may cause high cardinality issues with moderate to large Kubernetes deployments**
This input plugin talks to the kubelet API via the `/stats/summary` endpoint to gather metrics about the running pods and containers on a single host. It is assumed that this plugin runs as part of a `daemonset` within a Kubernetes installation, so that Telegraf runs on every node in the cluster. You should therefore configure this plugin to talk to its locally running kubelet.
To find the IP address of the host you are running on, you can issue a command like the following:
```
$ curl -s $API_URL/api/v1/namespaces/$POD_NAMESPACE/pods/$HOSTNAME --header "Authorization: Bearer $TOKEN" --insecure | jq -r '.status.hostIP'
```
In this case we use the downward API to pass in `$POD_NAMESPACE`; `$HOSTNAME` is the hostname of the pod, which is set by the Kubernetes API.
## Summary Data
```json
{
"node": {
"nodeName": "node1",
"systemContainers": [
{
"name": "kubelet",
"startTime": "2016-08-25T18:46:52Z",
"cpu": {
"time": "2016-09-27T16:57:31Z",
"usageNanoCores": 56652446,
"usageCoreNanoSeconds": 101437561712262
},
"memory": {
"time": "2016-09-27T16:57:31Z",
"usageBytes": 62529536,
"workingSetBytes": 62349312,
"rssBytes": 47509504,
"pageFaults": 4769397409,
"majorPageFaults": 13
},
"rootfs": {
"availableBytes": 84379979776,
"capacityBytes": 105553100800
},
"logs": {
"availableBytes": 84379979776,
"capacityBytes": 105553100800
},
"userDefinedMetrics": null
},
{
"name": "bar",
"startTime": "2016-08-25T18:46:52Z",
"cpu": {
"time": "2016-09-27T16:57:31Z",
"usageNanoCores": 56652446,
"usageCoreNanoSeconds": 101437561712262
},
"memory": {
"time": "2016-09-27T16:57:31Z",
"usageBytes": 62529536,
"workingSetBytes": 62349312,
"rssBytes": 47509504,
"pageFaults": 4769397409,
"majorPageFaults": 13
},
"rootfs": {
"availableBytes": 84379979776,
"capacityBytes": 105553100800
},
"logs": {
"availableBytes": 84379979776,
"capacityBytes": 105553100800
},
"userDefinedMetrics": null
}
],
"startTime": "2016-08-25T18:46:52Z",
"cpu": {
"time": "2016-09-27T16:57:41Z",
"usageNanoCores": 576996212,
"usageCoreNanoSeconds": 774129887054161
},
"memory": {
"time": "2016-09-27T16:57:41Z",
"availableBytes": 10726387712,
"usageBytes": 12313182208,
"workingSetBytes": 5081538560,
"rssBytes": 35586048,
"pageFaults": 351742,
"majorPageFaults": 1236
},
"network": {
"time": "2016-09-27T16:57:41Z",
"rxBytes": 213281337459,
"rxErrors": 0,
"txBytes": 292869995684,
"txErrors": 0
},
"fs": {
"availableBytes": 84379979776,
"capacityBytes": 105553100800,
"usedBytes": 16754286592
},
"runtime": {
"imageFs": {
"availableBytes": 84379979776,
"capacityBytes": 105553100800,
"usedBytes": 5809371475
}
}
},
"pods": [
{
"podRef": {
"name": "foopod",
"namespace": "foons",
"uid": "6d305b06-8419-11e6-825c-42010af000ae"
},
"startTime": "2016-09-26T18:45:42Z",
"containers": [
{
"name": "foocontainer",
"startTime": "2016-09-26T18:46:43Z",
"cpu": {
"time": "2016-09-27T16:57:32Z",
"usageNanoCores": 846503,
"usageCoreNanoSeconds": 56507553554
},
"memory": {
"time": "2016-09-27T16:57:32Z",
"usageBytes": 30789632,
"workingSetBytes": 30789632,
"rssBytes": 30695424,
"pageFaults": 10761,
"majorPageFaults": 0
},
"rootfs": {
"availableBytes": 84379979776,
"capacityBytes": 105553100800,
"usedBytes": 57344
},
"logs": {
"availableBytes": 84379979776,
"capacityBytes": 105553100800,
"usedBytes": 24576
},
"userDefinedMetrics": null
}
],
"network": {
"time": "2016-09-27T16:57:34Z",
"rxBytes": 70749124,
"rxErrors": 0,
"txBytes": 47813506,
"txErrors": 0
},
"volume": [
{
"availableBytes": 7903948800,
"capacityBytes": 7903961088,
"usedBytes": 12288,
"name": "volume1"
},
{
"availableBytes": 7903956992,
"capacityBytes": 7903961088,
"usedBytes": 4096,
"name": "volume2"
},
{
"availableBytes": 7903948800,
"capacityBytes": 7903961088,
"usedBytes": 12288,
"name": "volume3"
},
{
"availableBytes": 7903952896,
"capacityBytes": 7903961088,
"usedBytes": 8192,
"name": "volume4"
}
]
}
]
}
```
### Daemonset YAML
```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: telegraf
namespace: telegraf
spec:
template:
metadata:
labels:
app: telegraf
spec:
serviceAccount: telegraf
containers:
- name: telegraf
image: quay.io/org/image:latest
imagePullPolicy: IfNotPresent
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: "HOST_PROC"
value: "/rootfs/proc"
- name: "HOST_SYS"
value: "/rootfs/sys"
volumeMounts:
- name: sysro
mountPath: /rootfs/sys
readOnly: true
- name: procro
mountPath: /rootfs/proc
readOnly: true
- name: varrunutmpro
mountPath: /var/run/utmp
readOnly: true
- name: logger-redis-creds
mountPath: /var/run/secrets/deis/redis/creds
volumes:
- name: sysro
hostPath:
path: /sys
- name: procro
hostPath:
path: /proc
- name: varrunutmpro
hostPath:
path: /var/run/utmp
```
### Line Protocol
#### kubernetes_pod_container
```
kubernetes_pod_container,host=ip-10-0-0-0.ec2.internal,
container_name=deis-controller,namespace=deis,
node_name=ip-10-0-0-0.ec2.internal, pod_name=deis-controller-3058870187-xazsr, cpu_usage_core_nanoseconds=2432835i,cpu_usage_nanocores=0i,
logsfs_avaialble_bytes=121128271872i,logsfs_capacity_bytes=153567944704i,
logsfs_used_bytes=20787200i,memory_major_page_faults=0i,
memory_page_faults=175i,memory_rss_bytes=0i,
memory_usage_bytes=0i,memory_working_set_bytes=0i,
rootfs_available_bytes=121128271872i,rootfs_capacity_bytes=153567944704i,
rootfs_used_bytes=1110016i 1476477530000000000
```
#### kubernetes_pod_volume
```
kubernetes_pod_volume,host=ip-10-0-0-0.ec2.internal,name=default-token-f7wts,
namespace=kube-system,node_name=ip-10-0-0-0.ec2.internal,
pod_name=kubernetes-dashboard-v1.1.1-t4x4t, available_bytes=8415240192i,
capacity_bytes=8415252480i,used_bytes=12288i 1476477530000000000
```
#### kubernetes_pod_network
```
kubernetes_pod_network,host=ip-10-0-0-0.ec2.internal,namespace=deis,
node_name=ip-10-0-0-0.ec2.internal,pod_name=deis-controller-3058870187-xazsr,
rx_bytes=120671099i,rx_errors=0i,
tx_bytes=102451983i,tx_errors=0i 1476477530000000000
```

View File

@@ -0,0 +1,242 @@
package kubernetes
import (
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
"net/url"
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/plugins/inputs"
)
// Kubernetes represents the config object for the plugin
type Kubernetes struct {
URL string
// Bearer Token authorization file path
BearerToken string `toml:"bearer_token"`
// Path to CA file
SSLCA string `toml:"ssl_ca"`
// Path to host cert file
SSLCert string `toml:"ssl_cert"`
// Path to cert key file
SSLKey string `toml:"ssl_key"`
// Use SSL but skip chain & host verification
InsecureSkipVerify bool
RoundTripper http.RoundTripper
}
var sampleConfig = `
## URL for the kubelet
url = "http://1.1.1.1:10255"
## Use bearer token for authorization
# bearer_token = /path/to/bearer/token
## Optional SSL Config
# ssl_ca = /path/to/cafile
# ssl_cert = /path/to/certfile
# ssl_key = /path/to/keyfile
## Use SSL but skip chain & host verification
# insecure_skip_verify = false
`
const (
summaryEndpoint = `%s/stats/summary`
)
func init() {
inputs.Add("kubernetes", func() telegraf.Input {
return &Kubernetes{}
})
}
// SampleConfig returns a sample config
func (k *Kubernetes) SampleConfig() string {
return sampleConfig
}
// Description returns the description of this plugin
func (k *Kubernetes) Description() string {
return "Read metrics from the kubernetes kubelet api"
}
// Gather collects kubernetes metrics from a given URL
func (k *Kubernetes) Gather(acc telegraf.Accumulator) error {
var wg sync.WaitGroup
errChan := errchan.New(1)
wg.Add(1)
go func(k *Kubernetes) {
defer wg.Done()
errChan.C <- k.gatherSummary(k.URL, acc)
}(k)
wg.Wait()
return errChan.Error()
}
func buildURL(endpoint string, base string) (*url.URL, error) {
u := fmt.Sprintf(endpoint, base)
addr, err := url.Parse(u)
if err != nil {
return nil, fmt.Errorf("Unable to parse address '%s': %s", u, err)
}
return addr, nil
}
func (k *Kubernetes) gatherSummary(baseURL string, acc telegraf.Accumulator) error {
url := fmt.Sprintf("%s/stats/summary", baseURL)
var req, err = http.NewRequest("GET", url, nil)
var token []byte
var resp *http.Response
tlsCfg, err := internal.GetTLSConfig(k.SSLCert, k.SSLKey, k.SSLCA, k.InsecureSkipVerify)
if err != nil {
return err
}
if k.RoundTripper == nil {
k.RoundTripper = &http.Transport{
TLSHandshakeTimeout: 5 * time.Second,
TLSClientConfig: tlsCfg,
ResponseHeaderTimeout: time.Duration(3 * time.Second),
}
}
if k.BearerToken != "" {
token, err = ioutil.ReadFile(k.BearerToken)
if err != nil {
return err
}
req.Header.Set("Authorization", "Bearer "+string(token))
}
resp, err = k.RoundTripper.RoundTrip(req)
if err != nil {
return fmt.Errorf("error making HTTP request to %s: %s", url, err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("%s returned HTTP status %s", url, resp.Status)
}
summaryMetrics := &SummaryMetrics{}
err = json.NewDecoder(resp.Body).Decode(summaryMetrics)
if err != nil {
return fmt.Errorf(`Error parsing response: %s`, err)
}
buildSystemContainerMetrics(summaryMetrics, acc)
buildNodeMetrics(summaryMetrics, acc)
buildPodMetrics(summaryMetrics, acc)
return nil
}
func buildSystemContainerMetrics(summaryMetrics *SummaryMetrics, acc telegraf.Accumulator) {
for _, container := range summaryMetrics.Node.SystemContainers {
tags := map[string]string{
"node_name": summaryMetrics.Node.NodeName,
"container_name": container.Name,
}
fields := make(map[string]interface{})
fields["cpu_usage_nanocores"] = container.CPU.UsageNanoCores
fields["cpu_usage_core_nanoseconds"] = container.CPU.UsageCoreNanoSeconds
fields["memory_usage_bytes"] = container.Memory.UsageBytes
fields["memory_working_set_bytes"] = container.Memory.WorkingSetBytes
fields["memory_rss_bytes"] = container.Memory.RSSBytes
fields["memory_page_faults"] = container.Memory.PageFaults
fields["memory_major_page_faults"] = container.Memory.MajorPageFaults
fields["rootfs_available_bytes"] = container.RootFS.AvailableBytes
fields["rootfs_capacity_bytes"] = container.RootFS.CapacityBytes
fields["logsfs_avaialble_bytes"] = container.LogsFS.AvailableBytes
fields["logsfs_capacity_bytes"] = container.LogsFS.CapacityBytes
acc.AddFields("kubernetes_system_container", fields, tags)
}
}
func buildNodeMetrics(summaryMetrics *SummaryMetrics, acc telegraf.Accumulator) {
tags := map[string]string{
"node_name": summaryMetrics.Node.NodeName,
}
fields := make(map[string]interface{})
fields["cpu_usage_nanocores"] = summaryMetrics.Node.CPU.UsageNanoCores
fields["cpu_usage_core_nanoseconds"] = summaryMetrics.Node.CPU.UsageCoreNanoSeconds
fields["memory_available_bytes"] = summaryMetrics.Node.Memory.AvailableBytes
fields["memory_usage_bytes"] = summaryMetrics.Node.Memory.UsageBytes
fields["memory_working_set_bytes"] = summaryMetrics.Node.Memory.WorkingSetBytes
fields["memory_rss_bytes"] = summaryMetrics.Node.Memory.RSSBytes
fields["memory_page_faults"] = summaryMetrics.Node.Memory.PageFaults
fields["memory_major_page_faults"] = summaryMetrics.Node.Memory.MajorPageFaults
fields["network_rx_bytes"] = summaryMetrics.Node.Network.RXBytes
fields["network_rx_errors"] = summaryMetrics.Node.Network.RXErrors
fields["network_tx_bytes"] = summaryMetrics.Node.Network.TXBytes
fields["network_tx_errors"] = summaryMetrics.Node.Network.TXErrors
fields["fs_available_bytes"] = summaryMetrics.Node.FileSystem.AvailableBytes
fields["fs_capacity_bytes"] = summaryMetrics.Node.FileSystem.CapacityBytes
fields["fs_used_bytes"] = summaryMetrics.Node.FileSystem.UsedBytes
fields["runtime_image_fs_available_bytes"] = summaryMetrics.Node.Runtime.ImageFileSystem.AvailableBytes
fields["runtime_image_fs_capacity_bytes"] = summaryMetrics.Node.Runtime.ImageFileSystem.CapacityBytes
fields["runtime_image_fs_used_bytes"] = summaryMetrics.Node.Runtime.ImageFileSystem.UsedBytes
acc.AddFields("kubernetes_node", fields, tags)
}
func buildPodMetrics(summaryMetrics *SummaryMetrics, acc telegraf.Accumulator) {
for _, pod := range summaryMetrics.Pods {
for _, container := range pod.Containers {
tags := map[string]string{
"node_name": summaryMetrics.Node.NodeName,
"namespace": pod.PodRef.Namespace,
"container_name": container.Name,
"pod_name": pod.PodRef.Name,
}
fields := make(map[string]interface{})
fields["cpu_usage_nanocores"] = container.CPU.UsageNanoCores
fields["cpu_usage_core_nanoseconds"] = container.CPU.UsageCoreNanoSeconds
fields["memory_usage_bytes"] = container.Memory.UsageBytes
fields["memory_working_set_bytes"] = container.Memory.WorkingSetBytes
fields["memory_rss_bytes"] = container.Memory.RSSBytes
fields["memory_page_faults"] = container.Memory.PageFaults
fields["memory_major_page_faults"] = container.Memory.MajorPageFaults
fields["rootfs_available_bytes"] = container.RootFS.AvailableBytes
fields["rootfs_capacity_bytes"] = container.RootFS.CapacityBytes
fields["rootfs_used_bytes"] = container.RootFS.UsedBytes
fields["logsfs_avaialble_bytes"] = container.LogsFS.AvailableBytes
fields["logsfs_capacity_bytes"] = container.LogsFS.CapacityBytes
fields["logsfs_used_bytes"] = container.LogsFS.UsedBytes
acc.AddFields("kubernetes_pod_container", fields, tags)
}
for _, volume := range pod.Volumes {
tags := map[string]string{
"node_name": summaryMetrics.Node.NodeName,
"pod_name": pod.PodRef.Name,
"namespace": pod.PodRef.Namespace,
"volume_name": volume.Name,
}
fields := make(map[string]interface{})
fields["available_bytes"] = volume.AvailableBytes
fields["capacity_bytes"] = volume.CapacityBytes
fields["used_bytes"] = volume.UsedBytes
acc.AddFields("kubernetes_pod_volume", fields, tags)
}
tags := map[string]string{
"node_name": summaryMetrics.Node.NodeName,
"pod_name": pod.PodRef.Name,
"namespace": pod.PodRef.Namespace,
}
fields := make(map[string]interface{})
fields["rx_bytes"] = pod.Network.RXBytes
fields["rx_errors"] = pod.Network.RXErrors
fields["tx_bytes"] = pod.Network.TXBytes
fields["tx_errors"] = pod.Network.TXErrors
acc.AddFields("kubernetes_pod_network", fields, tags)
}
}

View File

@@ -0,0 +1,93 @@
package kubernetes
import "time"
// SummaryMetrics represents all the summary data about a particular node retrieved from a kubelet
type SummaryMetrics struct {
Node NodeMetrics `json:"node"`
Pods []PodMetrics `json:"pods"`
}
// NodeMetrics represents detailed information about a node
type NodeMetrics struct {
NodeName string `json:"nodeName"`
SystemContainers []ContainerMetrics `json:"systemContainers"`
StartTime time.Time `json:"startTime"`
CPU CPUMetrics `json:"cpu"`
Memory MemoryMetrics `json:"memory"`
Network NetworkMetrics `json:"network"`
FileSystem FileSystemMetrics `json:"fs"`
Runtime RuntimeMetrics `json:"runtime"`
}
// ContainerMetrics represents the metric data collected about a container from the kubelet
type ContainerMetrics struct {
Name string `json:"name"`
StartTime time.Time `json:"startTime"`
CPU CPUMetrics `json:"cpu"`
Memory MemoryMetrics `json:"memory"`
RootFS FileSystemMetrics `json:"rootfs"`
LogsFS FileSystemMetrics `json:"logs"`
}
// RuntimeMetrics contains metric data on the runtime of the system
type RuntimeMetrics struct {
ImageFileSystem FileSystemMetrics `json:"imageFs"`
}
// CPUMetrics represents the cpu usage data of a pod or node
type CPUMetrics struct {
Time time.Time `json:"time"`
UsageNanoCores int64 `json:"usageNanoCores"`
UsageCoreNanoSeconds int64 `json:"usageCoreNanoSeconds"`
}
// PodMetrics contains metric data on a given pod
type PodMetrics struct {
PodRef PodReference `json:"podRef"`
StartTime time.Time `json:"startTime"`
Containers []ContainerMetrics `json:"containers"`
Network NetworkMetrics `json:"network"`
Volumes []VolumeMetrics `json:"volume"`
}
// PodReference is how a pod is identified
type PodReference struct {
Name string `json:"name"`
Namespace string `json:"namespace"`
}
// MemoryMetrics represents the memory metrics for a pod or node
type MemoryMetrics struct {
Time time.Time `json:"time"`
AvailableBytes int64 `json:"availableBytes"`
UsageBytes int64 `json:"usageBytes"`
WorkingSetBytes int64 `json:"workingSetBytes"`
RSSBytes int64 `json:"rssBytes"`
PageFaults int64 `json:"pageFaults"`
MajorPageFaults int64 `json:"majorPageFaults"`
}
// FileSystemMetrics represents disk usage metrics for a pod or node
type FileSystemMetrics struct {
AvailableBytes int64 `json:"availableBytes"`
CapacityBytes int64 `json:"capacityBytes"`
UsedBytes int64 `json:"usedBytes"`
}
// NetworkMetrics represents network usage data for a pod or node
type NetworkMetrics struct {
Time time.Time `json:"time"`
RXBytes int64 `json:"rxBytes"`
RXErrors int64 `json:"rxErrors"`
TXBytes int64 `json:"txBytes"`
TXErrors int64 `json:"txErrors"`
}
// VolumeMetrics represents the disk usage data for a given volume
type VolumeMetrics struct {
Name string `json:"name"`
AvailableBytes int64 `json:"availableBytes"`
CapacityBytes int64 `json:"capacityBytes"`
UsedBytes int64 `json:"usedBytes"`
}
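
To see how these types line up with the kubelet response shown in the README, here is a minimal sketch — `fetchSummary` is hypothetical and not part of the plugin — that decodes `/stats/summary` straight into `SummaryMetrics`:

```go
package kubernetes

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// fetchSummary decodes the kubelet /stats/summary response into the
// types defined above (illustrative only).
func fetchSummary(kubeletURL string) (*SummaryMetrics, error) {
	resp, err := http.Get(kubeletURL + "/stats/summary")
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	summary := &SummaryMetrics{}
	if err := json.NewDecoder(resp.Body).Decode(summary); err != nil {
		return nil, fmt.Errorf("error parsing response: %s", err)
	}
	return summary, nil
}
```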

View File

@@ -0,0 +1,289 @@
package kubernetes
import (
"fmt"
"net/http"
"net/http/httptest"
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/require"
)
func TestKubernetesStats(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
fmt.Fprintln(w, response)
}))
defer ts.Close()
k := &Kubernetes{
URL: ts.URL,
}
var acc testutil.Accumulator
err := k.Gather(&acc)
require.NoError(t, err)
fields := map[string]interface{}{
"cpu_usage_nanocores": int64(56652446),
"cpu_usage_core_nanoseconds": int64(101437561712262),
"memory_usage_bytes": int64(62529536),
"memory_working_set_bytes": int64(62349312),
"memory_rss_bytes": int64(47509504),
"memory_page_faults": int64(4769397409),
"memory_major_page_faults": int64(13),
"rootfs_available_bytes": int64(84379979776),
"rootfs_capacity_bytes": int64(105553100800),
"logsfs_avaialble_bytes": int64(84379979776),
"logsfs_capacity_bytes": int64(105553100800),
}
tags := map[string]string{
"node_name": "node1",
"container_name": "kubelet",
}
acc.AssertContainsTaggedFields(t, "kubernetes_system_container", fields, tags)
fields = map[string]interface{}{
"cpu_usage_nanocores": int64(576996212),
"cpu_usage_core_nanoseconds": int64(774129887054161),
"memory_usage_bytes": int64(12313182208),
"memory_working_set_bytes": int64(5081538560),
"memory_rss_bytes": int64(35586048),
"memory_page_faults": int64(351742),
"memory_major_page_faults": int64(1236),
"memory_available_bytes": int64(10726387712),
"network_rx_bytes": int64(213281337459),
"network_rx_errors": int64(0),
"network_tx_bytes": int64(292869995684),
"network_tx_errors": int64(0),
"fs_available_bytes": int64(84379979776),
"fs_capacity_bytes": int64(105553100800),
"fs_used_bytes": int64(16754286592),
"runtime_image_fs_available_bytes": int64(84379979776),
"runtime_image_fs_capacity_bytes": int64(105553100800),
"runtime_image_fs_used_bytes": int64(5809371475),
}
tags = map[string]string{
"node_name": "node1",
}
acc.AssertContainsTaggedFields(t, "kubernetes_node", fields, tags)
fields = map[string]interface{}{
"cpu_usage_nanocores": int64(846503),
"cpu_usage_core_nanoseconds": int64(56507553554),
"memory_usage_bytes": int64(30789632),
"memory_working_set_bytes": int64(30789632),
"memory_rss_bytes": int64(30695424),
"memory_page_faults": int64(10761),
"memory_major_page_faults": int64(0),
"rootfs_available_bytes": int64(84379979776),
"rootfs_capacity_bytes": int64(105553100800),
"rootfs_used_bytes": int64(57344),
"logsfs_avaialble_bytes": int64(84379979776),
"logsfs_capacity_bytes": int64(105553100800),
"logsfs_used_bytes": int64(24576),
}
tags = map[string]string{
"node_name": "node1",
"container_name": "foocontainer",
"namespace": "foons",
"pod_name": "foopod",
}
acc.AssertContainsTaggedFields(t, "kubernetes_pod_container", fields, tags)
fields = map[string]interface{}{
"available_bytes": int64(7903948800),
"capacity_bytes": int64(7903961088),
"used_bytes": int64(12288),
}
tags = map[string]string{
"node_name": "node1",
"volume_name": "volume1",
"namespace": "foons",
"pod_name": "foopod",
}
acc.AssertContainsTaggedFields(t, "kubernetes_pod_volume", fields, tags)
fields = map[string]interface{}{
"rx_bytes": int64(70749124),
"rx_errors": int64(0),
"tx_bytes": int64(47813506),
"tx_errors": int64(0),
}
tags = map[string]string{
"node_name": "node1",
"namespace": "foons",
"pod_name": "foopod",
}
acc.AssertContainsTaggedFields(t, "kubernetes_pod_network", fields, tags)
}
var response = `
{
"node": {
"nodeName": "node1",
"systemContainers": [
{
"name": "kubelet",
"startTime": "2016-08-25T18:46:52Z",
"cpu": {
"time": "2016-09-27T16:57:31Z",
"usageNanoCores": 56652446,
"usageCoreNanoSeconds": 101437561712262
},
"memory": {
"time": "2016-09-27T16:57:31Z",
"usageBytes": 62529536,
"workingSetBytes": 62349312,
"rssBytes": 47509504,
"pageFaults": 4769397409,
"majorPageFaults": 13
},
"rootfs": {
"availableBytes": 84379979776,
"capacityBytes": 105553100800
},
"logs": {
"availableBytes": 84379979776,
"capacityBytes": 105553100800
},
"userDefinedMetrics": null
},
{
"name": "bar",
"startTime": "2016-08-25T18:46:52Z",
"cpu": {
"time": "2016-09-27T16:57:31Z",
"usageNanoCores": 56652446,
"usageCoreNanoSeconds": 101437561712262
},
"memory": {
"time": "2016-09-27T16:57:31Z",
"usageBytes": 62529536,
"workingSetBytes": 62349312,
"rssBytes": 47509504,
"pageFaults": 4769397409,
"majorPageFaults": 13
},
"rootfs": {
"availableBytes": 84379979776,
"capacityBytes": 105553100800
},
"logs": {
"availableBytes": 84379979776,
"capacityBytes": 105553100800
},
"userDefinedMetrics": null
}
],
"startTime": "2016-08-25T18:46:52Z",
"cpu": {
"time": "2016-09-27T16:57:41Z",
"usageNanoCores": 576996212,
"usageCoreNanoSeconds": 774129887054161
},
"memory": {
"time": "2016-09-27T16:57:41Z",
"availableBytes": 10726387712,
"usageBytes": 12313182208,
"workingSetBytes": 5081538560,
"rssBytes": 35586048,
"pageFaults": 351742,
"majorPageFaults": 1236
},
"network": {
"time": "2016-09-27T16:57:41Z",
"rxBytes": 213281337459,
"rxErrors": 0,
"txBytes": 292869995684,
"txErrors": 0
},
"fs": {
"availableBytes": 84379979776,
"capacityBytes": 105553100800,
"usedBytes": 16754286592
},
"runtime": {
"imageFs": {
"availableBytes": 84379979776,
"capacityBytes": 105553100800,
"usedBytes": 5809371475
}
}
},
"pods": [
{
"podRef": {
"name": "foopod",
"namespace": "foons",
"uid": "6d305b06-8419-11e6-825c-42010af000ae"
},
"startTime": "2016-09-26T18:45:42Z",
"containers": [
{
"name": "foocontainer",
"startTime": "2016-09-26T18:46:43Z",
"cpu": {
"time": "2016-09-27T16:57:32Z",
"usageNanoCores": 846503,
"usageCoreNanoSeconds": 56507553554
},
"memory": {
"time": "2016-09-27T16:57:32Z",
"usageBytes": 30789632,
"workingSetBytes": 30789632,
"rssBytes": 30695424,
"pageFaults": 10761,
"majorPageFaults": 0
},
"rootfs": {
"availableBytes": 84379979776,
"capacityBytes": 105553100800,
"usedBytes": 57344
},
"logs": {
"availableBytes": 84379979776,
"capacityBytes": 105553100800,
"usedBytes": 24576
},
"userDefinedMetrics": null
}
],
"network": {
"time": "2016-09-27T16:57:34Z",
"rxBytes": 70749124,
"rxErrors": 0,
"txBytes": 47813506,
"txErrors": 0
},
"volume": [
{
"availableBytes": 7903948800,
"capacityBytes": 7903961088,
"usedBytes": 12288,
"name": "volume1"
},
{
"availableBytes": 7903956992,
"capacityBytes": 7903961088,
"usedBytes": 4096,
"name": "volume2"
},
{
"availableBytes": 7903948800,
"capacityBytes": 7903961088,
"usedBytes": 12288,
"name": "volume3"
},
{
"availableBytes": 7903952896,
"capacityBytes": 7903961088,
"usedBytes": 8192,
"name": "volume4"
}
]
}
]
}`

View File

@@ -152,6 +152,31 @@ func TestBuiltinCommonLogFormat(t *testing.T) {
assert.Equal(t, map[string]string{"verb": "GET", "resp_code": "200"}, m.Tags())
}
// common log format
// 127.0.0.1 user1234 frank1234 [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326
func TestBuiltinCommonLogFormatWithNumbers(t *testing.T) {
p := &Parser{
Patterns: []string{"%{COMMON_LOG_FORMAT}"},
}
assert.NoError(t, p.Compile())
// Parse a common-log-format line whose ident and auth fields contain digits
m, err := p.ParseLine(`127.0.0.1 user1234 frank1234 [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326`)
require.NotNil(t, m)
assert.NoError(t, err)
assert.Equal(t,
map[string]interface{}{
"resp_bytes": int64(2326),
"auth": "frank1234",
"client_ip": "127.0.0.1",
"http_version": float64(1.0),
"ident": "user1234",
"request": "/apache_pb.gif",
},
m.Fields())
assert.Equal(t, map[string]string{"verb": "GET", "resp_code": "200"}, m.Tags())
}
// combined log format
// 127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326 "-" "Mozilla"
func TestBuiltinCombinedLogFormat(t *testing.T) {

View File

@@ -53,7 +53,7 @@ RESPONSE_TIME %{DURATION:response_time_ns:duration}
EXAMPLE_LOG \[%{HTTPDATE:ts:ts-httpd}\] %{NUMBER:myfloat:float} %{RESPONSE_CODE} %{IPORHOST:clientip} %{RESPONSE_TIME}
# Wider-ranging username matching vs. logstash built-in %{USER}
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSERNAME [a-zA-Z0-9\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
# Wider-ranging client IP matching
CLIENT (?:%{IPORHOST}|%{HOSTPORT}|::1)
@@ -64,7 +64,7 @@ CLIENT (?:%{IPORHOST}|%{HOSTPORT}|::1)
# apache & nginx logs, this is also known as the "common log format"
# see https://en.wikipedia.org/wiki/Common_Log_Format
COMMON_LOG_FORMAT %{CLIENT:client_ip} %{NGUSER:ident} %{NGUSER:auth} \[%{HTTPDATE:ts:ts-httpd}\] "(?:%{WORD:verb:tag} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http_version:float})?|%{DATA})" %{NUMBER:resp_code:tag} (?:%{NUMBER:resp_bytes:int}|-)
COMMON_LOG_FORMAT %{CLIENT:client_ip} %{NOTSPACE:ident} %{NOTSPACE:auth} \[%{HTTPDATE:ts:ts-httpd}\] "(?:%{WORD:verb:tag} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http_version:float})?|%{DATA})" %{NUMBER:resp_code:tag} (?:%{NUMBER:resp_bytes:int}|-)
# Combined log format is the same as the common log format but with the addition
# of two quoted strings at the end for "referrer" and "agent"

View File

@@ -49,7 +49,7 @@ RESPONSE_TIME %{DURATION:response_time_ns:duration}
EXAMPLE_LOG \[%{HTTPDATE:ts:ts-httpd}\] %{NUMBER:myfloat:float} %{RESPONSE_CODE} %{IPORHOST:clientip} %{RESPONSE_TIME}
# Wider-ranging username matching vs. logstash built-in %{USER}
NGUSERNAME [a-zA-Z\.\@\-\+_%]+
NGUSERNAME [a-zA-Z0-9\.\@\-\+_%]+
NGUSER %{NGUSERNAME}
# Wider-ranging client IP matching
CLIENT (?:%{IPORHOST}|%{HOSTPORT}|::1)
@@ -60,7 +60,7 @@ CLIENT (?:%{IPORHOST}|%{HOSTPORT}|::1)
# apache & nginx logs, this is also known as the "common log format"
# see https://en.wikipedia.org/wiki/Common_Log_Format
COMMON_LOG_FORMAT %{CLIENT:client_ip} %{NGUSER:ident} %{NGUSER:auth} \[%{HTTPDATE:ts:ts-httpd}\] "(?:%{WORD:verb:tag} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http_version:float})?|%{DATA})" %{NUMBER:resp_code:tag} (?:%{NUMBER:resp_bytes:int}|-)
COMMON_LOG_FORMAT %{CLIENT:client_ip} %{NOTSPACE:ident} %{NOTSPACE:auth} \[%{HTTPDATE:ts:ts-httpd}\] "(?:%{WORD:verb:tag} %{NOTSPACE:request}(?: HTTP/%{NUMBER:http_version:float})?|%{DATA})" %{NUMBER:resp_code:tag} (?:%{NUMBER:resp_bytes:int}|-)
# Combined log format is the same as the common log format but with the addition
# of two quoted strings at the end for "referrer" and "agent"

View File

@@ -4,16 +4,16 @@ import (
"bytes"
"database/sql"
"fmt"
"net/url"
"strconv"
"strings"
"sync"
"time"
_ "github.com/go-sql-driver/mysql"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/go-sql-driver/mysql"
)
type Mysql struct {
@@ -69,13 +69,13 @@ var sampleConfig = `
## gather metrics from SHOW BINARY LOGS command output
gather_binary_logs = false
#
## gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMART_BY_TABLE
## gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMARY_BY_TABLE
gather_table_io_waits = false
#
## gather metrics from PERFORMANCE_SCHEMA.TABLE_LOCK_WAITS
gather_table_lock_waits = false
#
## gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMART_BY_INDEX_USAGE
## gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMARY_BY_INDEX_USAGE
gather_index_io_waits = false
#
## gather metrics from PERFORMANCE_SCHEMA.EVENT_WAITS
@@ -398,27 +398,6 @@ var (
}
)
func dsnAddTimeout(dsn string) (string, error) {
// DSN "?timeout=5s" is not valid, but "/?timeout=5s" is valid ("" and "/"
// are the same DSN)
if dsn == "" {
dsn = "/"
}
u, err := url.Parse(dsn)
if err != nil {
return "", err
}
v := u.Query()
// Only override timeout if not already defined
if _, ok := v["timeout"]; ok == false {
v.Add("timeout", defaultTimeout.String())
u.RawQuery = v.Encode()
}
return u.String(), nil
}
// Math constants
const (
picoSeconds = 1e12
@@ -682,10 +661,7 @@ func (m *Mysql) gatherGlobalVariables(db *sql.DB, serv string, acc telegraf.Accu
var val sql.RawBytes
// parse DSN and save server tag
servtag, err := parseDSN(serv)
if err != nil {
servtag = "localhost"
}
servtag := getDSNTag(serv)
tags := map[string]string{"server": servtag}
fields := make(map[string]interface{})
for rows.Next() {
@@ -722,10 +698,7 @@ func (m *Mysql) gatherSlaveStatuses(db *sql.DB, serv string, acc telegraf.Accumu
}
defer rows.Close()
servtag, err := parseDSN(serv)
if err != nil {
servtag = "localhost"
}
servtag := getDSNTag(serv)
tags := map[string]string{"server": servtag}
fields := make(map[string]interface{})
@@ -770,11 +743,7 @@ func (m *Mysql) gatherBinaryLogs(db *sql.DB, serv string, acc telegraf.Accumulat
defer rows.Close()
// parse DSN and save host as a tag
var servtag string
servtag, err = parseDSN(serv)
if err != nil {
servtag = "localhost"
}
servtag := getDSNTag(serv)
tags := map[string]string{"server": servtag}
var (
size uint64 = 0
@@ -817,11 +786,7 @@ func (m *Mysql) gatherGlobalStatuses(db *sql.DB, serv string, acc telegraf.Accum
}
// parse the DSN and save host name as a tag
var servtag string
servtag, err = parseDSN(serv)
if err != nil {
servtag = "localhost"
}
servtag := getDSNTag(serv)
tags := map[string]string{"server": servtag}
fields := make(map[string]interface{})
for rows.Next() {
@@ -932,10 +897,7 @@ func (m *Mysql) GatherProcessListStatuses(db *sql.DB, serv string, acc telegraf.
var servtag string
fields := make(map[string]interface{})
servtag, err = parseDSN(serv)
if err != nil {
servtag = "localhost"
}
servtag = getDSNTag(serv)
// mapping of state with its counts
stateCounts := make(map[string]uint32, len(generalThreadStates))
@@ -978,10 +940,7 @@ func (m *Mysql) gatherPerfTableIOWaits(db *sql.DB, serv string, acc telegraf.Acc
timeFetch, timeInsert, timeUpdate, timeDelete float64
)
servtag, err = parseDSN(serv)
if err != nil {
servtag = "localhost"
}
servtag = getDSNTag(serv)
for rows.Next() {
err = rows.Scan(&objSchema, &objName,
@@ -1030,10 +989,7 @@ func (m *Mysql) gatherPerfIndexIOWaits(db *sql.DB, serv string, acc telegraf.Acc
timeFetch, timeInsert, timeUpdate, timeDelete float64
)
servtag, err = parseDSN(serv)
if err != nil {
servtag = "localhost"
}
servtag = getDSNTag(serv)
for rows.Next() {
err = rows.Scan(&objSchema, &objName, &indexName,
@@ -1085,10 +1041,7 @@ func (m *Mysql) gatherInfoSchemaAutoIncStatuses(db *sql.DB, serv string, acc tel
incValue, maxInt uint64
)
servtag, err := parseDSN(serv)
if err != nil {
servtag = "localhost"
}
servtag := getDSNTag(serv)
for rows.Next() {
if err := rows.Scan(&schema, &table, &column, &incValue, &maxInt); err != nil {
@@ -1132,10 +1085,7 @@ func (m *Mysql) gatherPerfTableLockWaits(db *sql.DB, serv string, acc telegraf.A
}
defer rows.Close()
servtag, err := parseDSN(serv)
if err != nil {
servtag = "localhost"
}
servtag := getDSNTag(serv)
var (
objectSchema string
@@ -1257,10 +1207,7 @@ func (m *Mysql) gatherPerfEventWaits(db *sql.DB, serv string, acc telegraf.Accum
starCount, timeWait float64
)
servtag, err := parseDSN(serv)
if err != nil {
servtag = "localhost"
}
servtag := getDSNTag(serv)
tags := map[string]string{
"server": servtag,
}
@@ -1295,10 +1242,7 @@ func (m *Mysql) gatherPerfFileEventsStatuses(db *sql.DB, serv string, acc telegr
sumNumBytesRead, sumNumBytesWrite float64
)
servtag, err := parseDSN(serv)
if err != nil {
servtag = "localhost"
}
servtag := getDSNTag(serv)
tags := map[string]string{
"server": servtag,
}
@@ -1365,10 +1309,7 @@ func (m *Mysql) gatherPerfEventsStatements(db *sql.DB, serv string, acc telegraf
noIndexUsed float64
)
servtag, err := parseDSN(serv)
if err != nil {
servtag = "localhost"
}
servtag := getDSNTag(serv)
tags := map[string]string{
"server": servtag,
}
@@ -1412,14 +1353,8 @@ func (m *Mysql) gatherPerfEventsStatements(db *sql.DB, serv string, acc telegraf
// gatherTableSchema can be used to gather stats on each schema
func (m *Mysql) gatherTableSchema(db *sql.DB, serv string, acc telegraf.Accumulator) error {
var (
dbList []string
servtag string
)
servtag, err := parseDSN(serv)
if err != nil {
servtag = "localhost"
}
var dbList []string
servtag := getDSNTag(serv)
// if the list of databases if empty, then get all databases
if len(m.TableSchemaDatabases) == 0 {
@@ -1575,6 +1510,27 @@ func copyTags(in map[string]string) map[string]string {
return out
}
func dsnAddTimeout(dsn string) (string, error) {
conf, err := mysql.ParseDSN(dsn)
if err != nil {
return "", err
}
if conf.Timeout == 0 {
conf.Timeout = time.Second * 5
}
return conf.FormatDSN(), nil
}
func getDSNTag(dsn string) string {
conf, err := mysql.ParseDSN(dsn)
if err != nil {
return "127.0.0.1:3306"
}
return conf.Addr
}
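
To illustrate why `mysql.ParseDSN` is used instead of `url.Parse`: a DSN whose password contains URL metacharacters (this one is taken from the tests below) does not make `url.Parse` error — it is silently misread, with everything after `#` treated as a URL fragment. A small standalone sketch:

```go
package main

import (
	"fmt"
	"net/url"

	"github.com/go-sql-driver/mysql"
)

func main() {
	dsn := "root:Test3a#@!@tcp(10.150.1.123:3306)/"

	// url.Parse succeeds, but the '#' in the password starts a URL
	// fragment, so the address would be lost during query manipulation.
	u, _ := url.Parse(dsn)
	fmt.Printf("fragment: %q\n", u.Fragment)

	// mysql.ParseDSN understands the DSN grammar and keeps the address.
	conf, err := mysql.ParseDSN(dsn)
	if err != nil {
		panic(err)
	}
	fmt.Println(conf.Addr) // 10.150.1.123:3306
}
```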
func init() {
inputs.Add("mysql", func() telegraf.Input {
return &Mysql{}

View File

@@ -26,7 +26,7 @@ func TestMysqlDefaultsToLocal(t *testing.T) {
assert.True(t, acc.HasMeasurement("mysql"))
}
func TestMysqlParseDSN(t *testing.T) {
func TestMysqlGetDSNTag(t *testing.T) {
tests := []struct {
input string
output string
@@ -78,9 +78,9 @@ func TestMysqlParseDSN(t *testing.T) {
}
for _, test := range tests {
output, _ := parseDSN(test.input)
output := getDSNTag(test.input)
if output != test.output {
t.Errorf("Expected %s, got %s\n", test.output, output)
t.Errorf("Input: %s Expected %s, got %s\n", test.input, test.output, output)
}
}
}
@@ -92,7 +92,7 @@ func TestMysqlDNSAddTimeout(t *testing.T) {
}{
{
"",
"/?timeout=5s",
"tcp(127.0.0.1:3306)/?timeout=5s",
},
{
"tcp(192.168.1.1:3306)/",
@@ -104,7 +104,19 @@ func TestMysqlDNSAddTimeout(t *testing.T) {
},
{
"root:passwd@tcp(192.168.1.1:3306)/?tls=false&timeout=10s",
"root:passwd@tcp(192.168.1.1:3306)/?tls=false&timeout=10s",
"root:passwd@tcp(192.168.1.1:3306)/?timeout=10s&tls=false",
},
{
"tcp(10.150.1.123:3306)/",
"tcp(10.150.1.123:3306)/?timeout=5s",
},
{
"root:@!~(*&$#%(&@#(@&#Password@tcp(10.150.1.123:3306)/",
"root:@!~(*&$#%(&@#(@&#Password@tcp(10.150.1.123:3306)/?timeout=5s",
},
{
"root:Test3a#@!@tcp(10.150.1.123:3306)/",
"root:Test3a#@!@tcp(10.150.1.123:3306)/?timeout=5s",
},
}

View File

@@ -1,85 +0,0 @@
// Copyright 2012 The Go-MySQL-Driver Authors. All rights reserved.
//
// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this file,
// You can obtain one at http://mozilla.org/MPL/2.0/.
package mysql
import (
"errors"
"strings"
)
// parseDSN parses the DSN string to a config
func parseDSN(dsn string) (string, error) {
//var user, passwd string
var addr, net string
// [user[:password]@][net[(addr)]]/dbname[?param1=value1&paramN=valueN]
// Find the last '/' (since the password or the net addr might contain a '/')
for i := len(dsn) - 1; i >= 0; i-- {
if dsn[i] == '/' {
var j, k int
// left part is empty if i <= 0
if i > 0 {
// [username[:password]@][protocol[(address)]]
// Find the last '@' in dsn[:i]
for j = i; j >= 0; j-- {
if dsn[j] == '@' {
// username[:password]
// Find the first ':' in dsn[:j]
for k = 0; k < j; k++ {
if dsn[k] == ':' {
//passwd = dsn[k+1 : j]
break
}
}
//user = dsn[:k]
break
}
}
// [protocol[(address)]]
// Find the first '(' in dsn[j+1:i]
for k = j + 1; k < i; k++ {
if dsn[k] == '(' {
// dsn[i-1] must be == ')' if an address is specified
if dsn[i-1] != ')' {
if strings.ContainsRune(dsn[k+1:i], ')') {
return "", errors.New("Invalid DSN unescaped")
}
return "", errors.New("Invalid DSN Addr")
}
addr = dsn[k+1 : i-1]
break
}
}
net = dsn[j+1 : k]
}
break
}
}
// Set default network if empty
if net == "" {
net = "tcp"
}
// Set default address if empty
if addr == "" {
switch net {
case "tcp":
addr = "127.0.0.1:3306"
case "unix":
addr = "/tmp/mysql.sock"
default:
return "", errors.New("Default addr for network '" + net + "' unknown")
}
}
return addr, nil
}

View File

@@ -28,12 +28,17 @@ type natsConsumer struct {
Servers []string
Secure bool
// Client pending limits:
PendingMessageLimit int
PendingBytesLimit int
// Legacy metric buffer support
MetricBuffer int
parser parsers.Parser
sync.Mutex
wg sync.WaitGroup
Conn *nats.Conn
Subs []*nats.Subscription
@@ -47,13 +52,18 @@ type natsConsumer struct {
var sampleConfig = `
## urls of NATS servers
servers = ["nats://localhost:4222"]
# servers = ["nats://localhost:4222"]
## Use Transport Layer Security
secure = false
# secure = false
## subject(s) to consume
subjects = ["telegraf"]
# subjects = ["telegraf"]
## name a queue group
queue_group = "telegraf_consumers"
# queue_group = "telegraf_consumers"
## Sets the limits for pending msgs and bytes for each subscription
## These shouldn't need to be adjusted except in very high throughput scenarios
# pending_message_limit = 65536
# pending_bytes_limit = 67108864
## Data format to consume.
## Each data format has its own unique set of configuration options, read
@@ -91,8 +101,15 @@ func (n *natsConsumer) Start(acc telegraf.Accumulator) error {
var connectErr error
// set default NATS connection options
opts := nats.DefaultOptions
// override max reconnection tries
opts.MaxReconnect = -1
// override servers if any were specified
opts.Servers = n.Servers
opts.Secure = n.Secure
if n.Conn == nil || n.Conn.IsClosed() {
@@ -105,12 +122,22 @@ func (n *natsConsumer) Start(acc telegraf.Accumulator) error {
n.errs = make(chan error)
n.Conn.SetErrorHandler(n.natsErrHandler)
n.in = make(chan *nats.Msg)
n.in = make(chan *nats.Msg, 1000)
for _, subj := range n.Subjects {
sub, err := n.Conn.ChanQueueSubscribe(subj, n.QueueGroup, n.in)
sub, err := n.Conn.QueueSubscribe(subj, n.QueueGroup, func(m *nats.Msg) {
n.in <- m
})
if err != nil {
return err
}
// ensure that the subscription has been processed by the server
if err = n.Conn.Flush(); err != nil {
return err
}
// set the subscription pending limits
if err = sub.SetPendingLimits(n.PendingMessageLimit, n.PendingBytesLimit); err != nil {
return err
}
n.Subs = append(n.Subs, sub)
}
}
@@ -118,6 +145,7 @@ func (n *natsConsumer) Start(acc telegraf.Accumulator) error {
n.done = make(chan struct{})
// Start the message reader
n.wg.Add(1)
go n.receiver()
log.Printf("I! Started the NATS consumer service, nats: %v, subjects: %v, queue: %v\n",
n.Conn.ConnectedUrl(), n.Subjects, n.QueueGroup)
@@ -128,7 +156,7 @@ func (n *natsConsumer) Start(acc telegraf.Accumulator) error {
// receiver() reads all incoming messages from NATS, and parses them into
// telegraf metrics.
func (n *natsConsumer) receiver() {
defer n.clean()
defer n.wg.Done()
for {
select {
case <-n.done:
@@ -144,17 +172,11 @@ func (n *natsConsumer) receiver() {
for _, metric := range metrics {
n.acc.AddFields(metric.Name(), metric.Fields(), metric.Tags(), metric.Time())
}
}
}
}
func (n *natsConsumer) clean() {
n.Lock()
defer n.Unlock()
close(n.in)
close(n.errs)
for _, sub := range n.Subs {
if err := sub.Unsubscribe(); err != nil {
log.Printf("E! Error unsubscribing from subject %s in queue %s: %s\n",
@@ -170,6 +192,8 @@ func (n *natsConsumer) clean() {
func (n *natsConsumer) Stop() {
n.Lock()
close(n.done)
n.wg.Wait()
n.clean()
n.Unlock()
}
@@ -179,6 +203,13 @@ func (n *natsConsumer) Gather(acc telegraf.Accumulator) error {
func init() {
inputs.Add("nats_consumer", func() telegraf.Input {
return &natsConsumer{}
return &natsConsumer{
Servers: []string{"nats://localhost:4222"},
Secure: false,
Subjects: []string{"telegraf"},
QueueGroup: "telegraf_consumers",
PendingBytesLimit: nats.DefaultSubPendingBytesLimit,
PendingMessageLimit: nats.DefaultSubPendingMsgsLimit,
}
})
}
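
Outside the plugin, the subscribe pattern introduced above can be sketched in isolation — a hedged example using the same defaults the sample config documents, not the plugin's actual code:

```go
package main

import (
	"log"

	"github.com/nats-io/nats"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	in := make(chan *nats.Msg, 1000) // buffered so short bursts are absorbed

	sub, err := nc.QueueSubscribe("telegraf", "telegraf_consumers", func(m *nats.Msg) {
		in <- m
	})
	if err != nil {
		log.Fatal(err)
	}
	// make sure the server has processed the subscription
	if err := nc.Flush(); err != nil {
		log.Fatal(err)
	}
	// raise the client-side pending limits for high-throughput subjects
	if err := sub.SetPendingLimits(65536, 67108864); err != nil {
		log.Fatal(err)
	}

	for m := range in {
		log.Printf("received %d bytes on %s", len(m.Data), m.Subject)
	}
}
```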

View File

@@ -39,6 +39,7 @@ func TestRunParser(t *testing.T) {
defer close(n.done)
n.parser, _ = parsers.NewInfluxParser()
n.wg.Add(1)
go n.receiver()
in <- natsMsg(testMsg)
time.Sleep(time.Millisecond * 25)
@@ -56,6 +57,7 @@ func TestRunParserInvalidMsg(t *testing.T) {
defer close(n.done)
n.parser, _ = parsers.NewInfluxParser()
n.wg.Add(1)
go n.receiver()
in <- natsMsg(invalidMsg)
time.Sleep(time.Millisecond * 25)
@@ -73,6 +75,7 @@ func TestRunParserAndGather(t *testing.T) {
defer close(n.done)
n.parser, _ = parsers.NewInfluxParser()
n.wg.Add(1)
go n.receiver()
in <- natsMsg(testMsg)
time.Sleep(time.Millisecond * 25)
@@ -91,6 +94,7 @@ func TestRunParserAndGatherGraphite(t *testing.T) {
defer close(n.done)
n.parser, _ = parsers.NewGraphiteParser("_", []string{}, nil)
n.wg.Add(1)
go n.receiver()
in <- natsMsg(testMsgGraphite)
time.Sleep(time.Millisecond * 25)
@@ -109,6 +113,7 @@ func TestRunParserAndGatherJSON(t *testing.T) {
defer close(n.done)
n.parser, _ = parsers.NewJSONParser("nats_json_test", []string{}, nil)
n.wg.Add(1)
go n.receiver()
in <- natsMsg(testMsgJSON)
time.Sleep(time.Millisecond * 25)

View File

@@ -122,6 +122,9 @@ func (g *phpfpm) gatherServer(addr string, acc telegraf.Accumulator) error {
fcgiIp := socketAddr[0]
fcgiPort, _ := strconv.Atoi(socketAddr[1])
fcgi, err = newFcgiClient(fcgiIp, fcgiPort)
if err != nil {
return err
}
if len(u.Path) > 1 {
statusPath = strings.Trim(u.Path, "/")
} else {

View File

@@ -52,13 +52,13 @@ const sampleConfig = `
## urls to ping
urls = ["www.google.com"] # required
## number of pings to send per collection (ping -c <COUNT>)
count = 1 # required
# count = 1
## interval, in s, at which to ping. 0 == default (ping -i <PING_INTERVAL>)
ping_interval = 0.0
# ping_interval = 1.0
## per-ping timeout, in s. 0 == no timeout (ping -W <TIMEOUT>)
timeout = 1.0
# timeout = 1.0
## interface to send ping from (ping -I <INTERFACE>)
interface = ""
# interface = ""
`
func (_ *Ping) SampleConfig() string {
@@ -200,6 +200,11 @@ func processPingOutput(out string) (int, int, float64, error) {
func init() {
inputs.Add("ping", func() telegraf.Input {
return &Ping{pingHost: hostPinger}
return &Ping{
pingHost: hostPinger,
PingInterval: 1.0,
Count: 1,
Timeout: 1.0,
}
})
}

View File

@@ -29,6 +29,7 @@ type Postgresql struct {
Tagvalue string
Measurement string
}
Debug bool
}
type query []struct {

View File

@@ -10,7 +10,7 @@ The plugin will tag processes by their PID and their process name.
Processes can be specified either by pid file, by executable name, by command
line pattern matching, or by username (in this order of priority). The Procstat
plugin will use `pgrep` to obtain the pid when an executable name is provided.
Proctstas plugin will transmit IO, memory, cpu, file descriptor related
Procstat plugin will transmit IO, memory, cpu, file descriptor related
measurements for every process specified. A prefix can be set to isolate
individual process specific measurements.

View File

@@ -16,7 +16,6 @@ import (
type Procstat struct {
PidFile string `toml:"pid_file"`
Exact bool
Exe string
Pattern string
Prefix string
@@ -44,8 +43,6 @@ var sampleConfig = `
# exe = "nginx"
## pattern as argument for pgrep (ie, pgrep -f <pattern>)
# pattern = "nginx"
## match the exact name of the process (ie, pgrep -xf <pattern>)
# exact = false
## user as argument for pgrep (ie, pgrep -u <user>)
# user = "nginx"
@@ -179,17 +176,11 @@ func (p *Procstat) pidsFromExe() ([]int32, error) {
func (p *Procstat) pidsFromPattern() ([]int32, error) {
var out []int32
var outerr error
var options string
bin, err := exec.LookPath("pgrep")
if err != nil {
return out, fmt.Errorf("Couldn't find pgrep binary: %s", err)
}
if p.Exact == true {
options = "-xf"
} else {
options = "-f"
}
pgrep, err := exec.Command(bin, options, p.Pattern).Output()
pgrep, err := exec.Command(bin, "-f", p.Pattern).Output()
if err != nil {
return out, fmt.Errorf("Failed to execute %s. Error: '%s'", bin, err)
} else {

View File

@@ -107,7 +107,8 @@ type Queue struct {
Node string
Vhost string
Durable bool
AutoDelete bool `json:"auto_delete"`
AutoDelete bool `json:"auto_delete"`
IdleSince string `json:"idle_since"`
}
// Node ...
@@ -328,6 +329,7 @@ func gatherQueues(r *RabbitMQ, acc telegraf.Accumulator, errChan chan error) {
// common information
"consumers": queue.Consumers,
"consumer_utilisation": queue.ConsumerUtilisation,
"idle_since": queue.IdleSince,
"memory": queue.Memory,
// messages information
"message_bytes": queue.MessageBytes,

View File

@@ -0,0 +1,65 @@
Here are a few configuration examples for different use cases.
### Switch/router interface metrics
This setup will collect data on all interfaces from three different tables, `IF-MIB::ifTable`, `IF-MIB::ifXTable` and `EtherLike-MIB::dot3StatsTable`. It will also add the name from `IF-MIB::ifDescr` and use that as a tag. Depending on your needs and preferences you can easily use `IF-MIB::ifName` or `IF-MIB::ifAlias` instead or in addition. The values of these are typically:
IF-MIB::ifName = Gi0/0/0
IF-MIB::ifDescr = GigabitEthernet0/0/0
IF-MIB::ifAlias = ### LAN ###
This configuration also collects the hostname from the device (`RFC1213-MIB::sysName.0`) and adds it as a tag. Each metric will then carry the configured host/IP as `agent_host`, the device's self-reported hostname as `hostname`, and the name of the host that collected these metrics as `host`.
Here is the configuration that you add to your `telegraf.conf`:
```
[[inputs.snmp]]
agents = [ "host.example.com" ]
version = 2
community = "public"
[[inputs.snmp.field]]
name = "hostname"
oid = "RFC1213-MIB::sysName.0"
is_tag = true
[[inputs.snmp.field]]
name = "uptime"
oid = "DISMAN-EXPRESSION-MIB::sysUpTimeInstance"
# IF-MIB::ifTable contains counters on input and output traffic as well as errors and discards.
[[inputs.snmp.table]]
name = "interface"
inherit_tags = [ "hostname" ]
oid = "IF-MIB::ifTable"
# Interface tag - used to identify interface in metrics database
[[inputs.snmp.table.field]]
name = "ifDescr"
oid = "IF-MIB::ifDescr"
is_tag = true
# IF-MIB::ifXTable contains newer High Capacity (HC) counters that do not overflow as fast for a few of the ifTable counters
[[inputs.snmp.table]]
name = "interface"
inherit_tags = [ "hostname" ]
oid = "IF-MIB::ifXTable"
# Interface tag - used to identify interface in metrics database
[[inputs.snmp.table.field]]
name = "ifDescr"
oid = "IF-MIB::ifDescr"
is_tag = true
# EtherLike-MIB::dot3StatsTable contains detailed ethernet-level information about what kind of errors have been logged on an interface (such as FCS error, frame too long, etc)
[[inputs.snmp.table]]
name = "interface"
inherit_tags = [ "hostname" ]
oid = "EtherLike-MIB::dot3StatsTable"
# Interface tag - used to identify interface in metrics database
[[inputs.snmp.table.field]]
name = "ifDescr"
oid = "IF-MIB::ifDescr"
is_tag = true
```

View File

@@ -0,0 +1,53 @@
# Debugging & Testing SNMP Issues
### Install net-snmp on your system:
Mac:
```
brew install net-snmp
```
### Run an SNMP simulator docker image to get a full MIB on port 161:
```
docker run -d -p 161:161/udp xeemetric/snmp-simulator
```
### snmpget:
snmpget corresponds to the `inputs.snmp.field` configuration.
```bash
$ # get an snmp field with fully-qualified MIB name.
$ snmpget -v2c -c public localhost:161 system.sysUpTime.0
DISMAN-EVENT-MIB::sysUpTimeInstance = Timeticks: (1643) 0:00:16.43
$ # get an snmp field, outputting the numeric OID.
$ snmpget -On -v2c -c public localhost:161 system.sysUpTime.0
.1.3.6.1.2.1.1.3.0 = Timeticks: (1638) 0:00:16.38
```
### snmptranslate:
snmptranslate can be used to translate an OID to a MIB name:
```bash
$ snmptranslate .1.3.6.1.2.1.1.3.0
DISMAN-EVENT-MIB::sysUpTimeInstance
```
And to convert a partial MIB name to a fully qualified one:
```bash
$ snmptranslate -IR sysUpTime.0
DISMAN-EVENT-MIB::sysUpTimeInstance
```
And to convert a MIB name to an OID:
```bash
$ snmptranslate -On -IR system.sysUpTime.0
.1.3.6.1.2.1.1.3.0
```

View File

@@ -4,6 +4,8 @@ The SNMP input plugin gathers metrics from SNMP agents.
## Configuration:
See additional SNMP plugin configuration examples [here](./CONFIG-EXAMPLES.md).
### Example:
SNMP data:
@@ -67,7 +69,7 @@ Resulting output:
#### Configuration via MIB:
This example uses the SNMP data above, but is configured via the MIB.
This example uses the SNMP data above, but is configured via the MIB.
The example MIB file can be found in the `testdata` directory. See the [MIB lookups](#mib-lookups) section for more information.
Telegraf config:
@@ -95,58 +97,58 @@ Resulting output:
### Config parameters
* `agents`: Default: `[]`
* `agents`: Default: `[]`
List of SNMP agents to connect to in the form of `IP[:PORT]`. If `:PORT` is unspecified, it defaults to `161`.
* `version`: Default: `2`
* `version`: Default: `2`
SNMP protocol version to use.
* `community`: Default: `"public"`
* `community`: Default: `"public"`
SNMP community to use.
* `max_repetitions`: Default: `50`
* `max_repetitions`: Default: `50`
Maximum number of iterations for repeating variables.
* `sec_name`:
* `sec_name`:
Security name for authenticated SNMPv3 requests.
* `auth_protocol`: Values: `"MD5"`,`"SHA"`,`""`. Default: `""`
* `auth_protocol`: Values: `"MD5"`,`"SHA"`,`""`. Default: `""`
Authentication protocol for authenticated SNMPv3 requests.
* `auth_password`:
* `auth_password`:
Authentication password for authenticated SNMPv3 requests.
* `sec_level`: Values: `"noAuthNoPriv"`,`"authNoPriv"`,`"authPriv"`. Default: `"noAuthNoPriv"`
* `sec_level`: Values: `"noAuthNoPriv"`,`"authNoPriv"`,`"authPriv"`. Default: `"noAuthNoPriv"`
Security level used for SNMPv3 messages.
* `context_name`:
* `context_name`:
Context name used for SNMPv3 requests.
* `priv_protocol`: Values: `"DES"`,`"AES"`,`""`. Default: `""`
* `priv_protocol`: Values: `"DES"`,`"AES"`,`""`. Default: `""`
Privacy protocol used for encrypted SNMPv3 messages.
* `priv_password`:
* `priv_password`:
Privacy password used for encrypted SNMPv3 messages.
* `name`:
* `name`:
Output measurement name.
#### Field parameters:
* `oid`:
* `oid`:
OID to get. May be a numeric or textual OID.
* `oid_index_suffix`:
The OID sub-identifier to strip off so that the index can be matched against other fields in the table.
* `name`:
* `name`:
Output field/tag name.
If not specified, it defaults to the value of `oid`. If `oid` is numeric, an attempt to translate the numeric OID into a textual OID will be made.
* `is_tag`:
* `is_tag`:
Output this field as a tag.
* `conversion`: Values: `"float(X)"`,`"float"`,`"int"`,`""`. Default: `""`
* `conversion`: Values: `"float(X)"`,`"float"`,`"int"`,`""`. Default: `""`
Converts the value according to the given specification.
- `float(X)`: Converts the input value into a float and divides by the Xth power of 10. Efficively just moves the decimal left X places. For example a value of `123` with `float(2)` will result in `1.23`.
@@ -156,14 +158,14 @@ Converts the value according to the given specification.
- `ipaddr`: Converts the value to an IP address.
#### Table parameters:
* `oid`:
Automatically populates the table's fields using data from the MIB.
* `name`:
Output measurement name.
If not specified, it defaults to the value of `oid`. If `oid` is numeric, an attempt to translate the numeric OID into a textual OID will be made.
* `inherit_tags`:
Which tags to inherit from the top-level config and to use in the output of this table's measurement.
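As an illustration of how these parameters fit together, a minimal sketch follows. It is not taken from the shipped sample config; the agent address is a placeholder, and the MIB names assume the standard IF-MIB and RFC1213-MIB modules are installed:
```toml
[[inputs.snmp]]
  agents = [ "10.0.0.1:161" ]
  version = 2
  community = "public"
  max_repetitions = 50

  # top-level field, emitted as a tag so tables can inherit it
  [[inputs.snmp.field]]
    oid = "RFC1213-MIB::sysName.0"
    name = "hostname"
    is_tag = true

  # table whose fields are auto-populated from the MIB
  [[inputs.snmp.table]]
    oid = "IF-MIB::ifTable"
    name = "interface"
    inherit_tags = [ "hostname" ]
```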
### MIB lookups

View File

@@ -109,7 +109,7 @@ type Snmp struct {
    Community string

    // Parameters for Version 2 & 3
    MaxRepetitions int
    MaxRepetitions uint8

    // Parameters for Version 3
    ContextName string
@@ -178,13 +178,30 @@ type Table struct {
    initialized bool
}

// init() populates Fields if a table OID is provided.
// init() builds & initializes the nested fields.
func (t *Table) init() error {
    if t.initialized {
        return nil
    }

    if err := t.initBuild(); err != nil {
        return err
    }

    // initialize all the nested fields
    for i := range t.Fields {
        if err := t.Fields[i].init(); err != nil {
            return err
        }
    }

    t.initialized = true
    return nil
}

// init() populates Fields if a table OID is provided.
func (t *Table) initBuild() error {
    if t.Oid == "" {
        t.initialized = true
        return nil
    }
@@ -242,14 +259,6 @@ func (t *Table) init() error {
        t.Fields = append(t.Fields, Field{Name: col, Oid: mibPrefix + col, IsTag: isTag})
    }

    // initialize all the nested fields
    for i := range t.Fields {
        if err := t.Fields[i].init(); err != nil {
            return err
        }
    }

    t.initialized = true
    return nil
}
@@ -460,13 +469,15 @@ func (t Table) Build(gs snmpConnection, walk bool) (*RTable, error) {
        // index, and being added on the same row.
        if pkt, err := gs.Get([]string{oid}); err != nil {
            return nil, Errorf(err, "performing get")
        } else if pkt != nil && len(pkt.Variables) > 0 && pkt.Variables[0].Type != gosnmp.NoSuchObject {
        } else if pkt != nil && len(pkt.Variables) > 0 && pkt.Variables[0].Type != gosnmp.NoSuchObject && pkt.Variables[0].Type != gosnmp.NoSuchInstance {
            ent := pkt.Variables[0]
            fv, err := fieldConvert(f.Conversion, ent.Value)
            if err != nil {
                return nil, Errorf(err, "converting %q", ent.Value)
            }
            ifv[""] = fv
            if fvs, ok := fv.(string); !ok || fvs != "" {
                ifv[""] = fv
            }
        }
    } else {
        err := gs.Walk(oid, func(ent gosnmp.SnmpPDU) error {
@@ -487,7 +498,9 @@ func (t Table) Build(gs snmpConnection, walk bool) (*RTable, error) {
            if err != nil {
                return Errorf(err, "converting %q", ent.Value)
            }
            ifv[idx] = fv
            if fvs, ok := fv.(string); !ok || fvs != "" {
                ifv[idx] = fv
            }
            return nil
        })
        if err != nil {
@@ -712,7 +725,6 @@ func (s *Snmp) getConnection(agent string) (snmpConnection, error) {
// "hwaddr" will convert the value into a MAC address.
// "ipaddr" will convert the value into into an IP address.
// "" will convert a byte slice into a string.
// Any other conv will return the input value unchanged.
func fieldConvert(conv string, v interface{}) (interface{}, error) {
if conv == "" {
if bs, ok := v.([]byte); ok {
@@ -801,6 +813,7 @@ func fieldConvert(conv string, v interface{}) (interface{}, error) {
        default:
            return nil, fmt.Errorf("invalid type (%T) for hwaddr conversion", v)
        }
        return v, nil
    }

    if conv == "ipaddr" {
@@ -825,7 +838,7 @@ func fieldConvert(conv string, v interface{}) (interface{}, error) {
        return v, nil
    }

    return v, nil
    return nil, fmt.Errorf("invalid conversion type '%s'", conv)
}
// snmpTranslate resolves the given OID.
@@ -850,11 +863,16 @@ func snmpTranslate(oid string) (mibName string, oidNum string, oidText string, c
    i := strings.Index(oidText, "::")
    if i == -1 {
        // was not found in MIB. Value is numeric
        return "", oidText, oidText, "", nil
        // was not found in MIB.
        if bytes.Index(bb.Bytes(), []byte(" [TRUNCATED]")) >= 0 {
            return "", oid, oid, "", nil
        }
        // not truncated, but not fully found. We still need to parse out numeric OID, so keep going
        oidText = oid
    } else {
        mibName = oidText[:i]
        oidText = oidText[i+2:]
    }

    mibName = oidText[:i]
    oidText = oidText[i+2:]

    if i := bytes.Index(bb.Bytes(), []byte(" -- TEXTUAL CONVENTION ")); i != -1 {
        bb.Next(i + len(" -- TEXTUAL CONVENTION "))

View File

@@ -21,6 +21,9 @@ var mockedCommands = [][]string{
{"snmptranslate", "-Td", "-Ob", "-m", "all", "1.0.0.1.1"},
{"snmptranslate", "-Td", "-Ob", "-m", "all", ".1.0.0.0.1.1"},
{"snmptranslate", "-Td", "-Ob", "-m", "all", ".1.0.0.0.1.1.0"},
{"snmptranslate", "-Td", "-Ob", "-m", "all", ".1.0.0.0.1.4"},
{"snmptranslate", "-Td", "-Ob", "-m", "all", ".1.2.3"},
{"snmptranslate", "-Td", "-Ob", ".iso.2.3"},
{"snmptranslate", "-Td", "-Ob", "-m", "all", ".999"},
{"snmptranslate", "-Td", "-Ob", "TEST::server"},
{"snmptranslate", "-Td", "-Ob", "TEST::server.0"},

View File

@@ -67,6 +67,9 @@ var mockedCommandResults = map[string]mockedCommandResult{
"snmptranslate\x00-Td\x00-Ob\x00-m\x00all\x001.0.0.1.1": mockedCommandResult{stdout: "TEST::hostname\nhostname OBJECT-TYPE\n -- FROM\tTEST\n SYNTAX\tOCTET STRING\n MAX-ACCESS\tread-only\n STATUS\tcurrent\n::= { iso(1) 0 testOID(0) 1 1 }\n", stderr: "", exitError: false},
"snmptranslate\x00-Td\x00-Ob\x00-m\x00all\x00.1.0.0.0.1.1": mockedCommandResult{stdout: "TEST::server\nserver OBJECT-TYPE\n -- FROM\tTEST\n SYNTAX\tOCTET STRING\n MAX-ACCESS\tread-only\n STATUS\tcurrent\n::= { iso(1) 0 testOID(0) testTable(0) testTableEntry(1) 1 }\n", stderr: "", exitError: false},
"snmptranslate\x00-Td\x00-Ob\x00-m\x00all\x00.1.0.0.0.1.1.0": mockedCommandResult{stdout: "TEST::server.0\nserver OBJECT-TYPE\n -- FROM\tTEST\n SYNTAX\tOCTET STRING\n MAX-ACCESS\tread-only\n STATUS\tcurrent\n::= { iso(1) 0 testOID(0) testTable(0) testTableEntry(1) server(1) 0 }\n", stderr: "", exitError: false},
"snmptranslate\x00-Td\x00-Ob\x00-m\x00all\x00.1.0.0.0.1.4": mockedCommandResult{stdout: "TEST::testTableEntry.4\ntestTableEntry OBJECT-TYPE\n -- FROM\tTEST\n MAX-ACCESS\tnot-accessible\n STATUS\tcurrent\n INDEX\t\t{ server }\n::= { iso(1) 0 testOID(0) testTable(0) testTableEntry(1) 4 }\n", stderr: "", exitError: false},
"snmptranslate\x00-Td\x00-Ob\x00-m\x00all\x00.1.2.3": mockedCommandResult{stdout: "iso.2.3\niso OBJECT-TYPE\n -- FROM\t#-1\n::= { iso(1) 2 3 }\n", stderr: "", exitError: false},
"snmptranslate\x00-Td\x00-Ob\x00.iso.2.3": mockedCommandResult{stdout: "iso.2.3\niso OBJECT-TYPE\n -- FROM\t#-1\n::= { iso(1) 2 3 }\n", stderr: "", exitError: false},
"snmptranslate\x00-Td\x00-Ob\x00-m\x00all\x00.999": mockedCommandResult{stdout: ".999\n [TRUNCATED]\n", stderr: "", exitError: false},
"snmptranslate\x00-Td\x00-Ob\x00TEST::server": mockedCommandResult{stdout: "TEST::server\nserver OBJECT-TYPE\n -- FROM\tTEST\n SYNTAX\tOCTET STRING\n MAX-ACCESS\tread-only\n STATUS\tcurrent\n::= { iso(1) 0 testOID(0) testTable(0) testTableEntry(1) 1 }\n", stderr: "", exitError: false},
"snmptranslate\x00-Td\x00-Ob\x00TEST::server.0": mockedCommandResult{stdout: "TEST::server.0\nserver OBJECT-TYPE\n -- FROM\tTEST\n SYNTAX\tOCTET STRING\n MAX-ACCESS\tread-only\n STATUS\tcurrent\n::= { iso(1) 0 testOID(0) testTable(0) testTableEntry(1) server(1) 0 }\n", stderr: "", exitError: false},

View File

@@ -62,12 +62,15 @@ var tsc = &testSNMPConnection{
    values: map[string]interface{}{
        ".1.0.0.0.1.1.0": "foo",
        ".1.0.0.0.1.1.1": []byte("bar"),
        ".1.0.0.0.1.1.2": []byte(""),
        ".1.0.0.0.1.102": "bad",
        ".1.0.0.0.1.2.0": 1,
        ".1.0.0.0.1.2.1": 2,
        ".1.0.0.0.1.2.2": 0,
        ".1.0.0.0.1.3.0": "0.123",
        ".1.0.0.0.1.3.1": "0.456",
        ".1.0.0.0.1.3.2": "9.999",
        ".1.0.0.0.1.3.2": "0.000",
        ".1.0.0.0.1.3.3": "9.999",
        ".1.0.0.0.1.4.0": 123456,
        ".1.0.0.1.1": "baz",
        ".1.0.0.1.2": 234,
@@ -128,6 +131,8 @@ func TestFieldInit(t *testing.T) {
        expectedName string
        expectedConversion string
    }{
        {".1.2.3", "foo", "", ".1.2.3", "foo", ""},
        {".iso.2.3", "foo", "", ".1.2.3", "foo", ""},
        {".1.0.0.0.1.1", "", "", ".1.0.0.0.1.1", "server", ""},
        {".1.0.0.0.1.1.0", "", "", ".1.0.0.0.1.1.0", "server.0", ""},
        {".999", "", "", ".999", ".999", ""},
@@ -424,6 +429,14 @@ func TestTableBuild_noWalk(t *testing.T) {
Oid: ".1.0.0.1.2",
IsTag: true,
},
{
Name: "empty",
Oid: ".1.0.0.0.1.1.2",
},
{
Name: "noexist",
Oid: ".1.2.3.4.5",
},
},
}

View File

@@ -1139,7 +1139,7 @@ DECLARE @w4 TABLE
)
DECLARE @w5 TABLE
(
    WaitCategory nvarchar(16) NOT NULL,
    WaitCategory nvarchar(64) NOT NULL,
    WaitTimeInMs bigint NOT NULL,
    WaitTaskCount bigint NOT NULL
)

View File

@@ -93,7 +93,6 @@ tags in a manner similar to the line-protocol, like this:
users.current,service=payroll,region=us-west:32|g
```
COMING SOON: there will be a way to specify multiple fields.
<!-- TODO Second, you can specify multiple fields within a measurement:
```

View File

@@ -52,14 +52,14 @@ var malformedwarn = "E! tcp_listener has received %d malformed packets" +
const sampleConfig = `
  ## Address and port to host TCP listener on
  service_address = ":8094"
  # service_address = ":8094"

  ## Number of TCP messages allowed to queue up. Once filled, the
  ## TCP listener will start dropping packets.
  allowed_pending_messages = 10000
  # allowed_pending_messages = 10000

  ## Maximum number of concurrent TCP connections to allow
  max_tcp_connections = 250
  # max_tcp_connections = 250

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
@@ -276,6 +276,10 @@ func (t *TcpListener) remember(id string, conn *net.TCPConn) {
func init() {
    inputs.Add("tcp_listener", func() telegraf.Input {
        return &TcpListener{}
        return &TcpListener{
            ServiceAddress:         ":8094",
            AllowedPendingMessages: 10000,
            MaxTCPConnections:      250,
        }
    })
}
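With these defaults now set in `init()`, a bare plugin section should be enough to run the listener on its standard port. A minimal sketch (the data format choice is illustrative):
```toml
[[inputs.tcp_listener]]
  ## service_address, allowed_pending_messages and max_tcp_connections
  ## fall back to ":8094", 10000 and 250 respectively
  data_format = "influx"
```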

View File

@@ -51,11 +51,11 @@ var malformedwarn = "E! udp_listener has received %d malformed packets" +
const sampleConfig = `
  ## Address and port to host UDP listener on
  service_address = ":8092"
  # service_address = ":8092"

  ## Number of UDP messages allowed to queue up. Once filled, the
  ## UDP listener will start dropping packets.
  allowed_pending_messages = 10000
  # allowed_pending_messages = 10000

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
@@ -178,6 +178,9 @@ func (u *UdpListener) udpParser() error {
func init() {
    inputs.Add("udp_listener", func() telegraf.Input {
        return &UdpListener{}
        return &UdpListener{
            ServiceAddress:         ":8092",
            AllowedPendingMessages: 10000,
        }
    })
}

View File

@@ -64,7 +64,7 @@ Instances (this is an array) is the instances of a counter you would like return
it can be one or more values.
For example, `Instances = ["C:","D:","E:"]` will return data only for the instances
C:, D: and E: where relevant. To get all instnaces of a Counter, use ["*"] only.
C:, D: and E: where relevant. To get all instances of a Counter, use ["*"] only.
By default any results containing _Total are stripped,
unless this is specified as the wanted instance.
Alternatively see the option IncludeTotal below.
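For illustration, a wildcard-instance object entry might look like the following sketch (the object, counter, and measurement names here are examples, not defaults):
```toml
[[inputs.win_perf_counters.object]]
  ObjectName = "LogicalDisk"
  Instances = ["*"]
  Counters = ["% Free Space"]
  Measurement = "win_disk"
  # set to true to keep the _Total instance as well
  IncludeTotal = false
```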

View File

@@ -226,9 +226,14 @@ func serialize(metric telegraf.Metric) ([]string, error) {
m["host"] = host
}
for key, value := range metric.Tags() {
if key != "host" {
m["_"+key] = value
}
}
for key, value := range metric.Fields() {
nkey := fmt.Sprintf("_%s", key)
m[nkey] = value
m["_"+key] = value
}
serialized, err := ejson.Marshal(m)

View File

@@ -62,14 +62,23 @@ func (n *NATS) SetSerializer(serializer serializers.Serializer) {
func (n *NATS) Connect() error {
    var err error

    // set NATS connection options
    // set default NATS connection options
    opts := nats_client.DefaultOptions

    // override max reconnection tries
    opts.MaxReconnect = -1

    // override servers, if any were specified
    opts.Servers = n.Servers

    // override authentication, if any was specified
    if n.Username != "" {
        opts.User = n.Username
        opts.Password = n.Password
    }

    // override TLS, if it was specified
    tlsConfig, err := internal.GetTLSConfig(
        n.SSLCert, n.SSLKey, n.SSLCA, n.InsecureSkipVerify)
    if err != nil {

View File

@@ -2,6 +2,7 @@ package opentsdb
import (
"fmt"
"log"
"net"
"net/url"
"sort"
@@ -109,9 +110,12 @@ func (o *OpenTSDB) WriteHttp(metrics []telegraf.Metric, u *url.URL) error {
        tags := cleanTags(m.Tags())
        for fieldName, value := range m.Fields() {
            metricValue, buildError := buildValue(value)
            if buildError != nil {
                fmt.Printf("OpenTSDB: %s\n", buildError.Error())
            switch value.(type) {
            case int64:
            case uint64:
            case float64:
            default:
                log.Printf("D! OpenTSDB does not support metric value: [%s] of type [%T].\n", value, value)
                continue
            }
@@ -120,7 +124,7 @@ func (o *OpenTSDB) WriteHttp(metrics []telegraf.Metric, u *url.URL) error {
                    o.Prefix, m.Name(), fieldName)),
                Tags:      tags,
                Timestamp: now,
                Value:     metricValue,
                Value:     value,
            }

            if err := http.sendDataPoint(metric); err != nil {
@@ -153,7 +157,7 @@ func (o *OpenTSDB) WriteTelnet(metrics []telegraf.Metric, u *url.URL) error {
        for fieldName, value := range m.Fields() {
            metricValue, buildError := buildValue(value)
            if buildError != nil {
                fmt.Printf("OpenTSDB: %s\n", buildError.Error())
                log.Printf("E! OpenTSDB: %s\n", buildError.Error())
                continue
            }
@@ -161,9 +165,6 @@ func (o *OpenTSDB) WriteTelnet(metrics []telegraf.Metric, u *url.URL) error {
                sanitizedChars.Replace(fmt.Sprintf("%s%s_%s", o.Prefix, m.Name(), fieldName)),
                now, metricValue, tags)

            if o.Debug {
                fmt.Print(messageLine)
            }

            _, err := connection.Write([]byte(messageLine))
            if err != nil {
                return fmt.Errorf("OpenTSDB: Telnet writing error %s", err.Error())

View File

@@ -16,7 +16,7 @@ import (
type HttpMetric struct {
    Metric    string            `json:"metric"`
    Timestamp int64             `json:"timestamp"`
    Value     string            `json:"value"`
    Value     interface{}       `json:"value"`
    Tags      map[string]string `json:"tags"`
}

View File

@@ -2,6 +2,7 @@ package riemann
import (
"fmt"
"log"
"os"
"sort"
"strings"
@@ -11,6 +12,8 @@ import (
"github.com/influxdata/telegraf/plugins/outputs"
)
const deprecationMsg = "I! WARNING: this Riemann output plugin will be deprecated in a future release, see https://github.com/influxdata/telegraf/issues/1878 for more details & discussion."
type Riemann struct {
URL string
Transport string
@@ -29,6 +32,7 @@ var sampleConfig = `
`
func (r *Riemann) Connect() error {
    log.Printf(deprecationMsg)

    c, err := raidman.Dial(r.Transport, r.URL)
    if err != nil {
@@ -58,6 +62,7 @@ func (r *Riemann) Description() string {
}

func (r *Riemann) Write(metrics []telegraf.Metric) error {
    log.Printf(deprecationMsg)

    if len(metrics) == 0 {
        return nil
    }

View File

@@ -57,38 +57,34 @@ func NewGraphiteParser(
    defaultTemplate, _ := NewTemplate("measurement*", nil, p.Separator)
    matcher.AddDefaultTemplate(defaultTemplate)

    tmplts := parsedTemplates{}
    for _, pattern := range p.Templates {
        template := pattern
        filter := ""
        tmplt := parsedTemplate{}
        tmplt.template = pattern

        // Format is [filter] <template> [tag1=value1,tag2=value2]
        parts := strings.Fields(pattern)
        if len(parts) < 1 {
            continue
        } else if len(parts) >= 2 {
            if strings.Contains(parts[1], "=") {
                template = parts[0]
                tmplt.template = parts[0]
                tmplt.tagstring = parts[1]
            } else {
                filter = parts[0]
                template = parts[1]
                tmplt.filter = parts[0]
                tmplt.template = parts[1]
                if len(parts) > 2 {
                    tmplt.tagstring = parts[2]
                }
            }
        }
        tmplts = append(tmplts, tmplt)
    }

        // Parse out the default tags specific to this template
        tags := map[string]string{}
        if strings.Contains(parts[len(parts)-1], "=") {
            tagStrs := strings.Split(parts[len(parts)-1], ",")
            for _, kv := range tagStrs {
                parts := strings.Split(kv, "=")
                tags[parts[0]] = parts[1]
            }

    sort.Sort(tmplts)
    for _, tmplt := range tmplts {
        if err := p.addToMatcher(tmplt); err != nil {
            return nil, err
        }

        tmpl, err1 := NewTemplate(template, tags, p.Separator)
        if err1 != nil {
            err = err1
            break
        }
        matcher.Add(filter, tmpl)
    }

    if err != nil {
@@ -98,6 +94,24 @@ func NewGraphiteParser(
    }
}

func (p *GraphiteParser) addToMatcher(tmplt parsedTemplate) error {
    // Parse out the default tags specific to this template
    tags := map[string]string{}
    if tmplt.tagstring != "" {
        for _, kv := range strings.Split(tmplt.tagstring, ",") {
            parts := strings.Split(kv, "=")
            tags[parts[0]] = parts[1]
        }
    }

    tmpl, err := NewTemplate(tmplt.template, tags, p.Separator)
    if err != nil {
        return err
    }
    p.matcher.Add(tmplt.filter, tmpl)
    return nil
}

func (p *GraphiteParser) Parse(buf []byte) ([]telegraf.Metric, error) {
    // parse even if the buffer begins with a newline
    buf = bytes.TrimPrefix(buf, []byte("\n"))
@@ -465,3 +479,30 @@ func (n *nodes) Less(j, k int) bool {
func (n *nodes) Swap(i, j int) { (*n)[i], (*n)[j] = (*n)[j], (*n)[i] }
func (n *nodes) Len() int { return len(*n) }
type parsedTemplate struct {
    template  string
    filter    string
    tagstring string
}

type parsedTemplates []parsedTemplate

func (e parsedTemplates) Less(j, k int) bool {
    if len(e[j].filter) == 0 && len(e[k].filter) == 0 {
        nj := len(strings.Split(e[j].template, "."))
        nk := len(strings.Split(e[k].template, "."))
        return nj < nk
    }

    if len(e[j].filter) == 0 {
        return true
    }

    if len(e[k].filter) == 0 {
        return false
    }

    nj := len(strings.Split(e[j].template, "."))
    nk := len(strings.Split(e[k].template, "."))
    return nj < nk
}

func (e parsedTemplates) Swap(i, j int) { e[i], e[j] = e[j], e[i] }
func (e parsedTemplates) Len() int      { return len(e) }

View File

@@ -747,6 +747,48 @@ func TestApplyTemplateGreedyField(t *testing.T) {
    }
}

func TestApplyTemplateOverSpecific(t *testing.T) {
    p, err := NewGraphiteParser(
        ".",
        []string{
            "measurement.host.metric.metric.metric",
        },
        nil,
    )
    assert.NoError(t, err)

    measurement, tags, _, err := p.ApplyTemplate("net.server001.a.b 2")
    assert.Equal(t, "net", measurement)
    assert.Equal(t,
        map[string]string{"host": "server001", "metric": "a.b"},
        tags)
}

func TestApplyTemplateMostSpecificTemplate(t *testing.T) {
    p, err := NewGraphiteParser(
        ".",
        []string{
            "measurement.host.metric",
            "measurement.host.metric.metric.metric",
            "measurement.host.metric.metric",
        },
        nil,
    )
    assert.NoError(t, err)

    measurement, tags, _, err := p.ApplyTemplate("net.server001.a.b.c 2")
    assert.Equal(t, "net", measurement)
    assert.Equal(t,
        map[string]string{"host": "server001", "metric": "a.b.c"},
        tags)

    measurement, tags, _, err = p.ApplyTemplate("net.server001.a.b 2")
    assert.Equal(t, "net", measurement)
    assert.Equal(t,
        map[string]string{"host": "server001", "metric": "a.b"},
        tags)
}

// Test Helpers
func errstr(err error) string {
    if err != nil {

View File

@@ -3,6 +3,7 @@ package influx
import (
    "bytes"
    "fmt"
    "time"

    "github.com/influxdata/telegraf"
@@ -15,30 +16,32 @@ type InfluxParser struct {
    DefaultTags map[string]string
}
func (p *InfluxParser) ParseWithDefaultTime(buf []byte, t time.Time) ([]telegraf.Metric, error) {
    // parse even if the buffer begins with a newline
    buf = bytes.TrimPrefix(buf, []byte("\n"))
    points, err := models.ParsePointsWithPrecision(buf, t, "n")
    metrics := make([]telegraf.Metric, len(points))
    for i, point := range points {
        for k, v := range p.DefaultTags {
            // only set the default tag if it doesn't already exist:
            if tmp := point.Tags().GetString(k); tmp == "" {
                point.AddTag(k, v)
            }
        }
        // Ignore error here because it's impossible that a model.Point
        // wouldn't parse into client.Point properly
        metrics[i] = telegraf.NewMetricFromPoint(point)
    }
    return metrics, err
}

// Parse returns a slice of Metrics from a text representation of a
// metric (in line-protocol format)
// with each metric separated by newlines. If any metrics fail to parse,
// a non-nil error will be returned in addition to the metrics that parsed
// successfully.
func (p *InfluxParser) Parse(buf []byte) ([]telegraf.Metric, error) {
    // parse even if the buffer begins with a newline
    buf = bytes.TrimPrefix(buf, []byte("\n"))
    points, err := models.ParsePoints(buf)
    metrics := make([]telegraf.Metric, len(points))
    for i, point := range points {
        tags := point.Tags()
        for k, v := range p.DefaultTags {
            // Only set tags not in parsed metric
            if _, ok := tags[k]; !ok {
                tags[k] = v
            }
        }
        // Ignore error here because it's impossible that a model.Point
        // wouldn't parse into client.Point properly
        metrics[i], _ = telegraf.NewMetric(point.Name(), tags,
            point.Fields(), point.Time())
    }
    return metrics, err
    return p.ParseWithDefaultTime(buf, time.Now())
}

func (p *InfluxParser) ParseLine(line string) (telegraf.Metric, error) {

View File

@@ -0,0 +1,5 @@
package all

import (
    _ "github.com/influxdata/telegraf/plugins/processors/printer"
)

View File

@@ -0,0 +1,35 @@
package printer

import (
    "fmt"

    "github.com/influxdata/telegraf"
    "github.com/influxdata/telegraf/plugins/processors"
)

type Printer struct {
}

var sampleConfig = `
`

func (p *Printer) SampleConfig() string {
    return sampleConfig
}

func (p *Printer) Description() string {
    return "Print all metrics that pass through this filter."
}

func (p *Printer) Apply(in ...telegraf.Metric) []telegraf.Metric {
    for _, metric := range in {
        fmt.Println(metric.String())
    }
    return in
}

func init() {
    processors.Add("printer", func() telegraf.Processor {
        return &Printer{}
    })
}
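Assuming processor plugins are declared in the Telegraf config file the same way as inputs and outputs, enabling this plugin should be a one-line section (a sketch, not shipped documentation):
```toml
# print every metric as it passes through the processor pipeline
[[processors.printer]]
```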

View File

@@ -0,0 +1 @@
package printer

View File

@@ -0,0 +1,11 @@
package processors

import "github.com/influxdata/telegraf"

type Creator func() telegraf.Processor

var Processors = map[string]Creator{}

func Add(name string, creator Creator) {
    Processors[name] = creator
}

processor.go
View File

@@ -0,0 +1,12 @@
package telegraf

type Processor interface {
    // SampleConfig returns the default configuration of the Processor
    SampleConfig() string

    // Description returns a one-sentence description of the Processor
    Description() string

    // Apply the filter to the given metric(s)
    Apply(in ...Metric) []Metric
}

View File

@@ -75,9 +75,13 @@ cat telegraf-race | gzip > $CIRCLE_ARTIFACTS/telegraf-race.gz
eval "git describe --exact-match HEAD"
if [ $? -eq 0 ]; then
# install fpm (packaging dependency)
exit_if_fail gem install fpm
# install boto & rpm (packaging & AWS dependencies)
exit_if_fail sudo apt-get install -y rpm python-boto
unset GOGC
tag=$(git describe --exact-match HEAD)
echo $tag
exit_if_fail ./scripts/build.py --release --package --version=$tag --platform=all --arch=all --upload --bucket=dl.influxdata.com/telegraf/releases
exit_if_fail ./scripts/build.py --release --package --platform=all --arch=all --upload --bucket=dl.influxdata.com/telegraf/releases
mv build $CIRCLE_ARTIFACTS
fi

View File

@@ -6,9 +6,7 @@ After=network.target
[Service]
EnvironmentFile=-/etc/default/telegraf
User=telegraf
Environment='STDOUT=/var/log/telegraf/telegraf.log'
Environment='STDERR=/var/log/telegraf/telegraf.log'
ExecStart=/bin/sh -c "exec /usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d ${TELEGRAF_OPTS} >>${STDOUT} 2>>${STDERR}"
ExecStart=/usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d ${TELEGRAF_OPTS}
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure
KillMode=control-group

View File

@@ -39,6 +39,13 @@ func (a *Accumulator) NMetrics() uint64 {
    return atomic.LoadUint64(&a.nMetrics)
}

func (a *Accumulator) ClearMetrics() {
    atomic.StoreUint64(&a.nMetrics, 0)
    a.Lock()
    defer a.Unlock()
    a.Metrics = make([]*Metric, 0)
}

// AddFields adds a measurement point with a specified timestamp.
func (a *Accumulator) AddFields(
    measurement string,