Merge remote-tracking branch 'upstream/master'

Miki 2016-02-17 15:20:31 +01:00
commit 398989d2de
52 changed files with 2860 additions and 293 deletions


@@ -1,16 +1,28 @@
## v0.10.3 [unreleased]

### Release Notes
- Users of the `exec` and `kafka_consumer` (and the new `nats_consumer`
and `mqtt_consumer` plugins) can now specify the incoming data
format that they would like to parse. Currently supports: "json", "influx", and
"graphite"
- Users of message broker and file output plugins can now choose what data format
they would like to output. Currently supports: "influx" and "graphite"
- More info on parsing _incoming_ data formats can be found
[here](https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS_INPUT.md)
- More info on serializing _outgoing_ data formats can be found
[here](https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS_OUTPUT.md)

### Features
- [#652](https://github.com/influxdata/telegraf/pull/652): CouchDB Input Plugin. Thanks @codehate!
- [#655](https://github.com/influxdata/telegraf/pull/655): Support parsing arbitrary data formats. Currently limited to kafka_consumer and exec inputs.
- [#671](https://github.com/influxdata/telegraf/pull/671): Dovecot input plugin. Thanks @mikif70!
- [#680](https://github.com/influxdata/telegraf/pull/680): NATS consumer input plugin. Thanks @netixen!
- [#676](https://github.com/influxdata/telegraf/pull/676): MQTT consumer input plugin.
- [#683](https://github.com/influxdata/telegraf/pull/683): Postgres input plugin: add pg_stat_bgwriter. Thanks @menardorama!
- [#679](https://github.com/influxdata/telegraf/pull/679): File/stdout output plugin.
- [#679](https://github.com/influxdata/telegraf/pull/679): Support for arbitrary output data formats.
- [#695](https://github.com/influxdata/telegraf/pull/695): raindrops input plugin. Thanks @burdandrei!
- [#650](https://github.com/influxdata/telegraf/pull/650): net_response input plugin. Thanks @titilambert!

### Bugfixes
- [#443](https://github.com/influxdata/telegraf/issues/443): Fix Ping command timeout parameter on Linux.


@@ -12,6 +12,13 @@ but any information you can provide on how the data will look is appreciated.
See the [OpenTSDB output](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/opentsdb)
for a good example.
## GoDoc
Public interfaces for inputs, outputs, metrics, and the accumulator can be found
on the GoDoc
[![GoDoc](https://godoc.org/github.com/influxdata/telegraf?status.svg)](https://godoc.org/github.com/influxdata/telegraf)
## Sign the CLA

Before we can merge a pull request, you will need to sign the CLA,
@@ -29,7 +36,7 @@ Assuming you can already build the project, run these in the telegraf directory:
This section is for developers who want to create new collection inputs.
Telegraf is entirely plugin driven. This interface allows for operators to
pick and choose what is gathered and makes it easy for developers
to create new ways of generating metrics.

Plugin authorship is kept as simple as possible to encourage people to develop
@@ -46,49 +53,8 @@ See below for a quick example.
plugin can be configured. This is included in `telegraf -sample-config`.
* The `Description` function should say in one line what this plugin does.

### Input interface
```go
type Input interface {
SampleConfig() string
Description() string
Gather(Accumulator) error
}
type Accumulator interface {
Add(measurement string,
value interface{},
tags map[string]string,
timestamp ...time.Time)
AddFields(measurement string,
fields map[string]interface{},
tags map[string]string,
timestamp ...time.Time)
}
```
### Accumulator

The way that a plugin emits metrics is by interacting with the Accumulator.

The `Add` function takes 3 arguments (plus an optional timestamp):
* **measurement**: A string description of the metric. For instance `bytes_read` or `faults`.
* **value**: A value for the metric. This accepts 5 different types of value:
    * **int**: The most common type. All int types are accepted, but favor using `int64`.
    Useful for counters, etc.
    * **float**: Favor `float64`, useful for gauges, percentages, etc.
    * **bool**: `true` or `false`, useful to indicate the presence of a state. `light_on`, etc.
    * **string**: Typically used to indicate a message, or some kind of freeform information.
    * **time.Time**: Useful for indicating when a state last occurred, for instance `light_on_since`.
* **tags**: This is a map of strings to strings to describe the where or who
of the metric. For instance, the `net` plugin adds a tag named `"interface"`
set to the name of the network interface, like `"eth0"`.
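
For example, here is a minimal sketch of emitting metrics through the
accumulator (`Process` is a hypothetical plugin struct, and the measurement,
field, and tag names are illustrative; assume the `time` package is imported):

```go
func (p *Process) Gather(acc telegraf.Accumulator) error {
	tags := map[string]string{"interface": "eth0"}

	// Add a single value; the trailing timestamp argument is optional.
	acc.Add("bytes_read", int64(4096), tags)

	// Or add several fields at once under one measurement.
	acc.AddFields("process", map[string]interface{}{
		"faults":   int64(3),
		"light_on": true,
	}, tags, time.Now())
	return nil
}
```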
Let's say you've written a plugin that emits metrics about processes on the current host.
### Input Plugin Example
@@ -194,18 +160,6 @@ and `Stop()` methods.
* Same as the `Plugin` guidelines, except that they must conform to the
`inputs.ServiceInput` interface.
### Service Plugin interface
```go
type ServicePlugin interface {
SampleConfig() string
Description() string
Gather(Accumulator) error
Start() error
Stop()
}
```
## Output Plugins

This section is for developers who want to create a new output sink. Outputs
@@ -223,18 +177,6 @@ See below for a quick example.
output can be configured. This is included in `telegraf -sample-config`.
* The `Description` function should say in one line what this output does.
### Output interface
```go
type Output interface {
Connect() error
Close() error
Description() string
SampleConfig() string
Write(metrics []telegraf.Metric) error
}
```
### Output Example

```go
@@ -282,6 +224,33 @@ func init() {
```
## Output Plugins Writing Arbitrary Data Formats
Some output plugins (such as
[file](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/file))
can write arbitrary output data formats. An overview of these data formats can
be found
[here](https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS_OUTPUT.md).
In order to enable this, you must define a
`SetSerializer(serializer serializers.Serializer)`
function on the plugin object (see the file plugin for an example) and declare
`serializer` as a field of the object.
You can then utilize the serializer internally in your plugin, serializing data
before it's written. Telegraf's configuration layer will take care of
instantiating the `Serializer` object.
You should also add the following to your `SampleConfig()` return:
```toml
### Data format to output. This can be "influx" or "graphite"
### Each data format has its own unique set of configuration options, read
### more about them here:
### https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS_OUTPUT.md
data_format = "influx"
```
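
As a rough sketch, the plugin side looks like the following (`MyOutput` is a
placeholder name, and the `Serialize` signature shown here is an assumption
based on the serializers package at this revision; check `plugins/serializers`
for the authoritative definition):

```go
type MyOutput struct {
	serializer serializers.Serializer
}

// SetSerializer is detected by Telegraf's configuration layer, which builds
// the serializer from the `data_format` option and hands it to the plugin.
func (m *MyOutput) SetSerializer(serializer serializers.Serializer) {
	m.serializer = serializer
}

func (m *MyOutput) Write(metrics []telegraf.Metric) error {
	for _, metric := range metrics {
		// Each metric serializes to one or more lines in the chosen format.
		lines, err := m.serializer.Serialize(metric)
		if err != nil {
			return err
		}
		for _, line := range lines {
			_ = line // write the line to the sink (file, broker, etc.)
		}
	}
	return nil
}
```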
## Service Output Plugins

This section is for developers who want to create new "service" output. A
@@ -297,20 +266,6 @@ and `Stop()` methods.
* Same as the `Output` guidelines, except that they must conform to the
`output.ServiceOutput` interface.
### Service Output interface
```go
type ServiceOutput interface {
Connect() error
Close() error
Description() string
SampleConfig() string
Write(metrics []telegraf.Metric) error
Start() error
Stop()
}
```
## Unit Tests

### Execute short tests
@@ -326,7 +281,7 @@ which would take some time to replicate.
To overcome this situation we've decided to use docker containers to provide a
fast and reproducible environment to test those services which require it.
For other situations
(e.g. https://github.com/influxdata/telegraf/blob/master/plugins/inputs/redis/redis_test.go)
a simple mock will suffice.

To execute Telegraf tests follow these simple steps:

DATA_FORMATS_OUTPUT.md (new file)

@@ -0,0 +1,97 @@
# Telegraf Output Data Formats
Telegraf metrics, like InfluxDB
[points](https://docs.influxdata.com/influxdb/v0.10/write_protocols/line/),
are a combination of four basic parts:
1. Measurement Name
1. Tags
1. Fields
1. Timestamp
In InfluxDB line protocol, these 4 parts are easily defined in textual form:
```
measurement_name[,tag1=val1,...] field1=val1[,field2=val2,...] [timestamp]
```
For Telegraf outputs that write textual data (such as `kafka`, `mqtt`, and `file`),
InfluxDB line protocol was originally the only available output format. But now
we are normalizing telegraf metric "serializers" into a
[plugin-like interface](https://github.com/influxdata/telegraf/tree/master/plugins/serializers)
across all output plugins that can support it.
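
At its core a serializer is a small interface, something like the following
sketch (an assumption about the exact shape; see the serializers package
linked above for the authoritative definition):

```go
// Serializer turns one telegraf metric into one or more serialized
// lines in the chosen output data format.
type Serializer interface {
	Serialize(metric telegraf.Metric) ([]string, error)
}
```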
You will be able to identify a plugin that supports different data formats
by the presence of a `data_format`
config option, for example, in the `file` output plugin:
```toml
[[outputs.file]]
### Files to write to, "stdout" is a specially handled file.
files = ["stdout"]
### Data format to output. This can be "influx" or "graphite"
### Each data format has its own unique set of configuration options, read
### more about them here:
### https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS_OUTPUT.md
data_format = "influx"
### Additional configuration options go here
```
Each data_format has an additional set of configuration options available, which
I'll go over below.
## Influx:
There are no additional configuration options for InfluxDB line-protocol. The
metrics are serialized directly into InfluxDB line-protocol.
#### Influx Configuration:
```toml
[[outputs.file]]
### Files to write to, "stdout" is a specially handled file.
files = ["stdout", "/tmp/metrics.out"]
### Data format to output. This can be "influx" or "graphite"
### Each data format has its own unique set of configuration options, read
### more about them here:
### https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS_OUTPUT.md
data_format = "influx"
```
## Graphite:
The Graphite data format translates Telegraf metrics into _dot_ buckets.
The format is:
```
[prefix].[host tag].[all tags (alphabetical)].[measurement name].[field name] value timestamp
```
Which means the following influx metric -> graphite conversion would happen:
```
cpu,cpu=cpu-total,dc=us-east-1,host=tars usage_idle=98.09,usage_user=0.89 1455320660004257758
=>
tars.cpu-total.us-east-1.cpu.usage_user 0.89 1455320660
tars.cpu-total.us-east-1.cpu.usage_idle 98.09 1455320660
```
`prefix` is a configuration option when using the graphite output data format.
#### Graphite Configuration:
```toml
[[outputs.file]]
### Files to write to, "stdout" is a specially handled file.
files = ["stdout", "/tmp/metrics.out"]
### Data format to output. This can be "influx" or "graphite"
### Each data format has its own unique set of configuration options, read
### more about them here:
### https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS_OUTPUT.md
data_format = "influx"
prefix = "telegraf"
```

Godeps

@@ -1,4 +1,4 @@
git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git 617c801af238c3af2d9e72c5d4a0f02edad03ce5
github.com/Shopify/sarama d37c73f2b2bce85f7fa16b6a550d26c5372892ef
github.com/Sirupsen/logrus f7f79f729e0fbe2fcc061db48a9ba0263f588252
github.com/amir/raidman 6a8e089bbe32e6b907feae5ba688841974b3c339
@@ -19,8 +19,7 @@ github.com/gorilla/context 1c83b3eabd45b6d76072b66b746c20815fb2872d
github.com/gorilla/mux 26a6070f849969ba72b72256e9f14cf519751690
github.com/hailocab/go-hostpool e80d13ce29ede4452c43dea11e79b9bc8a15b478
github.com/influxdata/config bae7cb98197d842374d3b8403905924094930f24
github.com/influxdata/influxdb ef571fc104dc24b77cd3710c156cd95e5cfd7aa5
github.com/jmespath/go-jmespath c01cf91b011868172fdcd9f41838e80c9d716264
github.com/klauspost/crc32 999f3125931f6557b991b2f8472172bdfa578d38
github.com/lib/pq 8ad2b298cadd691a77015666a5372eae5dbfac8f
@@ -28,8 +27,8 @@ github.com/matttproud/golang_protobuf_extensions d0c3fe89de86839aecf2e0579c40ba3
github.com/mreiferson/go-snappystream 028eae7ab5c4c9e2d1cb4c4ca1e53259bbe7e504
github.com/naoina/go-stringutil 6b638e95a32d0c1131db0e7fe83775cbea4a0d0b
github.com/naoina/toml 751171607256bb66e64c9f0220c00662420c38e9
github.com/nats-io/nats 6a83f1a633cfbfd90aa648ac99fb38c06a8b40df
github.com/nsqio/go-nsq 2118015c120962edc5d03325c680daf3163a8b5f
github.com/pmezard/go-difflib 792786c7400a136282c1664665ae0a8db921c6c2
github.com/prometheus/client_golang 67994f177195311c3ea3d4407ed0175e34a4256f
github.com/prometheus/client_model fa8ad6fec33561be4280a8f0514318c79d7f6cb6


@@ -66,7 +66,7 @@ Latest:
To install the full directory structure with config file, run:

```
sudo tar -C / -zxvf ./telegraf-0.10.2-1_linux_amd64.tar.gz
```

To extract only the binary, run:
@@ -171,6 +171,7 @@ Currently implemented sources:
* memcached
* mongodb
* mysql
* net_response
* nginx
* nsq
* phpfpm
@@ -182,6 +183,7 @@ Currently implemented sources:
* prometheus
* puppetagent
* rabbitmq
* raindrops
* redis
* rethinkdb
* sql server (microsoft)
@@ -203,7 +205,9 @@ Currently implemented sources:
Telegraf can also collect metrics via the following service plugins:

* statsd
* mqtt_consumer
* kafka_consumer
* nats_consumer
* github_webhooks

We'll be adding support for many more over the coming months. Read on if you


@@ -16,6 +16,7 @@ import (
	"github.com/influxdata/telegraf/plugins/inputs"
	"github.com/influxdata/telegraf/plugins/outputs"
	"github.com/influxdata/telegraf/plugins/parsers"
	"github.com/influxdata/telegraf/plugins/serializers"

	"github.com/influxdata/config"
	"github.com/naoina/toml/ast"
@@ -398,6 +399,17 @@ func (c *Config) addOutput(name string, table *ast.Table) error {
	}
	output := creator()

	// If the output has a SetSerializer function, then this means it can write
	// arbitrary types of output, so build the serializer and set it.
	switch t := output.(type) {
	case serializers.SerializerOutput:
		serializer, err := buildSerializer(name, table)
		if err != nil {
			return err
		}
		t.SetSerializer(serializer)
	}

	outputConfig, err := buildOutput(name, table)
	if err != nil {
		return err
@@ -660,6 +672,37 @@ func buildParser(name string, tbl *ast.Table) (parsers.Parser, error) {
	return parsers.NewParser(c)
}
// buildSerializer grabs the necessary entries from the ast.Table for creating
// a serializers.Serializer object, and creates it, which can then be added onto
// an Output object.
func buildSerializer(name string, tbl *ast.Table) (serializers.Serializer, error) {
	c := &serializers.Config{}

	if node, ok := tbl.Fields["data_format"]; ok {
		if kv, ok := node.(*ast.KeyValue); ok {
			if str, ok := kv.Value.(*ast.String); ok {
				c.DataFormat = str.Value
			}
		}
	}
	if c.DataFormat == "" {
		c.DataFormat = "influx"
	}

	if node, ok := tbl.Fields["prefix"]; ok {
		if kv, ok := node.(*ast.KeyValue); ok {
			if str, ok := kv.Value.(*ast.String); ok {
				c.Prefix = str.Value
			}
		}
	}

	delete(tbl.Fields, "data_format")
	delete(tbl.Fields, "prefix")
	return serializers.NewSerializer(c)
}
// buildOutput parses output specific items from the ast.Table, builds the filter and returns an
// internal_models.OutputConfig to be inserted into internal_models.RunningInput
// Note: error exists in the return for future calls that might require error


@@ -4,7 +4,7 @@ The example plugin gathers metrics about example things
### Configuration:

```toml
# Description
[[inputs.example]]
# SampleConfig


@@ -21,7 +21,10 @@ import (
	_ "github.com/influxdata/telegraf/plugins/inputs/mailchimp"
	_ "github.com/influxdata/telegraf/plugins/inputs/memcached"
	_ "github.com/influxdata/telegraf/plugins/inputs/mongodb"
	_ "github.com/influxdata/telegraf/plugins/inputs/mqtt_consumer"
	_ "github.com/influxdata/telegraf/plugins/inputs/mysql"
	_ "github.com/influxdata/telegraf/plugins/inputs/nats_consumer"
	_ "github.com/influxdata/telegraf/plugins/inputs/net_response"
	_ "github.com/influxdata/telegraf/plugins/inputs/nginx"
	_ "github.com/influxdata/telegraf/plugins/inputs/nsq"
	_ "github.com/influxdata/telegraf/plugins/inputs/passenger"
@@ -33,6 +36,7 @@ import (
	_ "github.com/influxdata/telegraf/plugins/inputs/prometheus"
	_ "github.com/influxdata/telegraf/plugins/inputs/puppetagent"
	_ "github.com/influxdata/telegraf/plugins/inputs/rabbitmq"
	_ "github.com/influxdata/telegraf/plugins/inputs/raindrops"
	_ "github.com/influxdata/telegraf/plugins/inputs/redis"
	_ "github.com/influxdata/telegraf/plugins/inputs/rethinkdb"
	_ "github.com/influxdata/telegraf/plugins/inputs/sensors"


@@ -67,6 +67,7 @@ func (d *Docker) Gather(acc telegraf.Accumulator) error {
	var wg sync.WaitGroup
	wg.Add(len(containers))
	for _, container := range containers {

		go func(c docker.APIContainers) {
			defer wg.Done()
			err := d.gatherContainer(c, acc)
@@ -177,6 +178,7 @@ func gatherContainerStats(
		"pgfault": stat.MemoryStats.Stats.Pgfault,
		"inactive_file": stat.MemoryStats.Stats.InactiveFile,
		"total_pgpgin": stat.MemoryStats.Stats.TotalPgpgin,
		"usage_percent": calculateMemPercent(stat),
	}
	acc.AddFields("docker_mem", memfields, tags, now)
@@ -188,6 +190,7 @@ func gatherContainerStats(
		"throttling_periods": stat.CPUStats.ThrottlingData.Periods,
		"throttling_throttled_periods": stat.CPUStats.ThrottlingData.ThrottledPeriods,
		"throttling_throttled_time": stat.CPUStats.ThrottlingData.ThrottledTime,
		"usage_percent": calculateCPUPercent(stat),
	}
	cputags := copyTags(tags)
	cputags["cpu"] = "cpu-total"
@@ -219,6 +222,26 @@ func gatherContainerStats(
	gatherBlockIOMetrics(stat, acc, tags, now)
}
func calculateMemPercent(stat *docker.Stats) float64 {
	var memPercent = 0.0
	if stat.MemoryStats.Limit > 0 {
		memPercent = float64(stat.MemoryStats.Usage) / float64(stat.MemoryStats.Limit) * 100.0
	}
	return memPercent
}

func calculateCPUPercent(stat *docker.Stats) float64 {
	var cpuPercent = 0.0
	// calculate the change for the cpu and system usage of the container in between readings
	cpuDelta := float64(stat.CPUStats.CPUUsage.TotalUsage) - float64(stat.PreCPUStats.CPUUsage.TotalUsage)
	systemDelta := float64(stat.CPUStats.SystemCPUUsage) - float64(stat.PreCPUStats.SystemCPUUsage)

	if systemDelta > 0.0 && cpuDelta > 0.0 {
		cpuPercent = (cpuDelta / systemDelta) * float64(len(stat.CPUStats.CPUUsage.PercpuUsage)) * 100.0
	}
	return cpuPercent
}
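
// Worked example with the fixture values from the test file below: a memory
// limit of 2000 with usage 1111 gives 1111/2000*100 = 55.55%. For CPU, assuming
// total usage moves from 400 to 500 while system usage moves from 50 to 100
// across two per-CPU entries, (100/50)*2*100 = 400%, which matches the
// usage_percent values asserted in the test.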
func gatherBlockIOMetrics(
	stat *docker.Stats,
	acc telegraf.Accumulator,


@@ -49,7 +49,7 @@ func TestDockerGatherContainerStats(t *testing.T) {
		"max_usage": uint64(1001),
		"usage": uint64(1111),
		"fail_count": uint64(1),
		"limit": uint64(2000),
		"total_pgmafault": uint64(0),
		"cache": uint64(0),
		"mapped_file": uint64(0),
@@ -79,7 +79,9 @@ func TestDockerGatherContainerStats(t *testing.T) {
		"pgfault": uint64(2),
		"inactive_file": uint64(3),
		"total_pgpgin": uint64(4),
		"usage_percent": float64(55.55),
	}

	acc.AssertContainsTaggedFields(t, "docker_mem", memfields, tags)

	// test docker_cpu measurement
@@ -93,6 +95,7 @@ func TestDockerGatherContainerStats(t *testing.T) {
		"throttling_periods": uint64(1),
		"throttling_throttled_periods": uint64(0),
		"throttling_throttled_time": uint64(0),
		"usage_percent": float64(400.0),
	}
	acc.AssertContainsTaggedFields(t, "docker_cpu", cpufields, cputags)
@@ -122,6 +125,9 @@ func testStats() *docker.Stats {
	stats.CPUStats.SystemCPUUsage = 100
	stats.CPUStats.ThrottlingData.Periods = 1

	stats.PreCPUStats.CPUUsage.TotalUsage = 400
	stats.PreCPUStats.SystemCPUUsage = 50

	stats.MemoryStats.Stats.TotalPgmafault = 0
	stats.MemoryStats.Stats.Cache = 0
	stats.MemoryStats.Stats.MappedFile = 0
@@ -155,7 +161,7 @@ func testStats() *docker.Stats {
	stats.MemoryStats.MaxUsage = 1001
	stats.MemoryStats.Usage = 1111
	stats.MemoryStats.Failcnt = 1
	stats.MemoryStats.Limit = 2000

	stats.Networks["eth0"] = docker.NetworkStats{
		RxDropped: 1,


@@ -1,4 +1,4 @@
# Kafka Consumer Input Plugin

The [Kafka](http://kafka.apache.org/) consumer plugin polls a specified Kafka
topic and adds messages to InfluxDB. The plugin assumes messages follow the
@@ -6,6 +6,29 @@ line protocol. [Consumer Group](http://godoc.org/github.com/wvanbergen/kafka/con
is used to talk to the Kafka cluster so multiple instances of telegraf can read
from the same topic in parallel.
## Configuration
```toml
# Read metrics from Kafka topic(s)
[[inputs.kafka_consumer]]
### topic(s) to consume
topics = ["telegraf"]
### an array of Zookeeper connection strings
zookeeper_peers = ["localhost:2181"]
### the name of the consumer group
consumer_group = "telegraf_metrics_consumers"
### Maximum number of metrics to buffer between collection intervals
metric_buffer = 100000
### Offset (must be either "oldest" or "newest")
offset = "oldest"
### Data format to consume. This can be "json", "influx" or "graphite"
### Each data format has its own unique set of configuration options, read
### more about them here:
### https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS_INPUT.md
data_format = "influx"
```
## Testing

Running integration tests requires running Zookeeper & Kafka. The following
@@ -16,9 +39,3 @@ To start Kafka & Zookeeper:

```
docker run -d -p 2181:2181 -p 9092:9092 --env ADVERTISED_HOST=`boot2docker ip || docker-machine ip <your_machine_name>` --env ADVERTISED_PORT=9092 spotify/kafka
```
To run tests:
```
go test
```


@@ -19,8 +19,10 @@ type Kafka struct {
	Topics []string
	ZookeeperPeers []string
	Consumer *consumergroup.ConsumerGroup
	MetricBuffer int
	// TODO remove PointBuffer, legacy support
	PointBuffer int
	Offset string

	parser parsers.Parser
@@ -30,7 +32,7 @@ type Kafka struct {
	in <-chan *sarama.ConsumerMessage
	// channel for all kafka consumer errors
	errs <-chan *sarama.ConsumerError
	// channel for all incoming parsed kafka metrics
	metricC chan telegraf.Metric
	done chan struct{}
@@ -46,8 +48,8 @@ var sampleConfig = `
  zookeeper_peers = ["localhost:2181"]
  ### the name of the consumer group
  consumer_group = "telegraf_metrics_consumers"
  ### Maximum number of metrics to buffer between collection intervals
  metric_buffer = 100000
  ### Offset (must be either "oldest" or "newest")
  offset = "oldest"
@@ -104,10 +106,13 @@ func (k *Kafka) Start() error {
	}
	k.done = make(chan struct{})

	if k.PointBuffer == 0 && k.MetricBuffer == 0 {
		k.MetricBuffer = 100000
	} else if k.PointBuffer > 0 {
		// Legacy support of PointBuffer field TODO remove
		k.MetricBuffer = k.PointBuffer
	}
	k.metricC = make(chan telegraf.Metric, k.MetricBuffer)

	// Start the kafka message reader
	go k.receiver()
@@ -128,7 +133,7 @@ func (k *Kafka) receiver() {
		case msg := <-k.in:
			metrics, err := k.parser.Parse(msg.Value)
			if err != nil {
				log.Printf("KAFKA PARSE ERROR\nmessage: %s\nerror: %s",
					string(msg.Value), err.Error())
			}
@@ -139,7 +144,7 @@ func (k *Kafka) receiver() {
				continue
			default:
				log.Printf("Kafka Consumer buffer is full, dropping a metric." +
					" You may want to increase the metric_buffer setting")
			}
		}
@@ -166,10 +171,10 @@ func (k *Kafka) Stop() {
func (k *Kafka) Gather(acc telegraf.Accumulator) error {
	k.Lock()
	defer k.Unlock()
	nmetrics := len(k.metricC)
	for i := 0; i < nmetrics; i++ {
		metric := <-k.metricC
		acc.AddFields(metric.Name(), metric.Fields(), metric.Tags(), metric.Time())
	}
	return nil
}


@@ -0,0 +1,48 @@
# MQTT Consumer Input Plugin
The [MQTT](http://mqtt.org/) consumer plugin reads from
specified MQTT topics and adds messages to InfluxDB.
The plugin expects messages in the
[Telegraf Input Data Formats](https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS_INPUT.md).
### Configuration:
```toml
# Read metrics from MQTT topic(s)
[[inputs.mqtt_consumer]]
servers = ["localhost:1883"]
### MQTT QoS, must be 0, 1, or 2
qos = 0
### Topics to subscribe to
topics = [
"telegraf/host01/cpu",
"telegraf/+/mem",
"sensors/#",
]
### Maximum number of metrics to buffer between collection intervals
metric_buffer = 100000
### username and password to connect to the MQTT server.
# username = "telegraf"
# password = "metricsmetricsmetricsmetrics"
### Optional SSL Config
# ssl_ca = "/etc/telegraf/ca.pem"
# ssl_cert = "/etc/telegraf/cert.pem"
# ssl_key = "/etc/telegraf/key.pem"
### Use SSL but skip chain & host verification
# insecure_skip_verify = false
### Data format to consume. This can be "json", "influx" or "graphite"
### Each data format has its own unique set of configuration options, read
### more about them here:
### https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS_INPUT.md
data_format = "influx"
```
### Tags:
- All measurements are tagged with the incoming topic, e.g.
`topic=telegraf/host01/cpu`


@@ -0,0 +1,228 @@
package mqtt_consumer
import (
"fmt"
"log"
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/parsers"
"git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git"
)
type MQTTConsumer struct {
Servers []string
Topics []string
Username string
Password string
MetricBuffer int
QoS int `toml:"qos"`
parser parsers.Parser
// Path to CA file
SSLCA string `toml:"ssl_ca"`
// Path to host cert file
SSLCert string `toml:"ssl_cert"`
// Path to cert key file
SSLKey string `toml:"ssl_key"`
// Use SSL but skip chain & host verification
InsecureSkipVerify bool
sync.Mutex
client *mqtt.Client
// channel for all incoming parsed mqtt metrics
metricC chan telegraf.Metric
// channel for the topics of all incoming metrics (for tagging metrics)
topicC chan string
// channel of all incoming raw mqtt messages
in chan mqtt.Message
done chan struct{}
}
var sampleConfig = `
servers = ["localhost:1883"]
### MQTT QoS, must be 0, 1, or 2
qos = 0
### Topics to subscribe to
topics = [
"telegraf/host01/cpu",
"telegraf/+/mem",
"sensors/#",
]
### Maximum number of metrics to buffer between collection intervals
metric_buffer = 100000
### username and password to connect to the MQTT server.
# username = "telegraf"
# password = "metricsmetricsmetricsmetrics"
### Optional SSL Config
# ssl_ca = "/etc/telegraf/ca.pem"
# ssl_cert = "/etc/telegraf/cert.pem"
# ssl_key = "/etc/telegraf/key.pem"
### Use SSL but skip chain & host verification
# insecure_skip_verify = false
### Data format to consume. This can be "json", "influx" or "graphite"
### Each data format has its own unique set of configuration options, read
### more about them here:
### https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS_INPUT.md
data_format = "influx"
`
func (m *MQTTConsumer) SampleConfig() string {
return sampleConfig
}
func (m *MQTTConsumer) Description() string {
return "Read metrics from MQTT topic(s)"
}
func (m *MQTTConsumer) SetParser(parser parsers.Parser) {
m.parser = parser
}
func (m *MQTTConsumer) Start() error {
m.Lock()
defer m.Unlock()
if m.QoS > 2 || m.QoS < 0 {
return fmt.Errorf("MQTT Consumer, invalid QoS value: %d", m.QoS)
}
opts, err := m.createOpts()
if err != nil {
return err
}
m.client = mqtt.NewClient(opts)
if token := m.client.Connect(); token.Wait() && token.Error() != nil {
return token.Error()
}
	m.done = make(chan struct{})
	if m.MetricBuffer == 0 {
		m.MetricBuffer = 100000
	}
	// Apply the default before sizing the channels, otherwise a zero
	// MetricBuffer would leave the incoming message channel unbuffered.
	m.in = make(chan mqtt.Message, m.MetricBuffer)
	m.metricC = make(chan telegraf.Metric, m.MetricBuffer)
	m.topicC = make(chan string, m.MetricBuffer)
topics := make(map[string]byte)
for _, topic := range m.Topics {
topics[topic] = byte(m.QoS)
}
subscribeToken := m.client.SubscribeMultiple(topics, m.recvMessage)
subscribeToken.Wait()
if subscribeToken.Error() != nil {
return subscribeToken.Error()
}
go m.receiver()
return nil
}
// receiver() reads all incoming messages from the consumer, and parses them into
// influxdb metric points.
func (m *MQTTConsumer) receiver() {
for {
select {
case <-m.done:
return
case msg := <-m.in:
topic := msg.Topic()
metrics, err := m.parser.Parse(msg.Payload())
if err != nil {
log.Printf("MQTT PARSE ERROR\nmessage: %s\nerror: %s",
string(msg.Payload()), err.Error())
}
for _, metric := range metrics {
select {
case m.metricC <- metric:
m.topicC <- topic
default:
log.Printf("MQTT Consumer buffer is full, dropping a metric." +
" You may want to increase the metric_buffer setting")
}
}
}
}
}
func (m *MQTTConsumer) recvMessage(_ *mqtt.Client, msg mqtt.Message) {
m.in <- msg
}
func (m *MQTTConsumer) Stop() {
m.Lock()
defer m.Unlock()
close(m.done)
m.client.Disconnect(200)
}
func (m *MQTTConsumer) Gather(acc telegraf.Accumulator) error {
m.Lock()
defer m.Unlock()
nmetrics := len(m.metricC)
for i := 0; i < nmetrics; i++ {
metric := <-m.metricC
topic := <-m.topicC
tags := metric.Tags()
tags["topic"] = topic
acc.AddFields(metric.Name(), metric.Fields(), tags, metric.Time())
}
return nil
}
func (m *MQTTConsumer) createOpts() (*mqtt.ClientOptions, error) {
opts := mqtt.NewClientOptions()
opts.SetClientID("Telegraf-Consumer-" + internal.RandomString(5))
tlsCfg, err := internal.GetTLSConfig(
m.SSLCert, m.SSLKey, m.SSLCA, m.InsecureSkipVerify)
if err != nil {
return nil, err
}
scheme := "tcp"
if tlsCfg != nil {
scheme = "ssl"
opts.SetTLSConfig(tlsCfg)
}
	user := m.Username
	if user != "" {
		opts.SetUsername(user)
	}
password := m.Password
if password != "" {
opts.SetPassword(password)
}
if len(m.Servers) == 0 {
		return opts, fmt.Errorf("could not get host information")
}
for _, host := range m.Servers {
server := fmt.Sprintf("%s://%s", scheme, host)
opts.AddBroker(server)
}
opts.SetAutoReconnect(true)
opts.SetKeepAlive(time.Second * 60)
return opts, nil
}
func init() {
inputs.Add("mqtt_consumer", func() telegraf.Input {
return &MQTTConsumer{}
})
}

View File

@@ -0,0 +1,186 @@
package mqtt_consumer
import (
"testing"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/parsers"
"github.com/influxdata/telegraf/testutil"
"git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git"
)
const (
testMsg = "cpu_load_short,host=server01 value=23422.0 1422568543702900257"
testMsgGraphite = "cpu.load.short.graphite 23422 1454780029"
testMsgJSON = "{\"a\": 5, \"b\": {\"c\": 6}}\n"
invalidMsg = "cpu_load_short,host=server01 1422568543702900257"
metricBuffer = 5
)
func newTestMQTTConsumer() (*MQTTConsumer, chan mqtt.Message) {
in := make(chan mqtt.Message, metricBuffer)
n := &MQTTConsumer{
Topics: []string{"telegraf"},
Servers: []string{"localhost:1883"},
MetricBuffer: metricBuffer,
in: in,
done: make(chan struct{}),
metricC: make(chan telegraf.Metric, metricBuffer),
topicC: make(chan string, metricBuffer),
}
return n, in
}
// Test that the parser parses MQTT messages into metrics
func TestRunParser(t *testing.T) {
n, in := newTestMQTTConsumer()
defer close(n.done)
n.parser, _ = parsers.NewInfluxParser()
go n.receiver()
in <- mqttMsg(testMsg)
time.Sleep(time.Millisecond)
if a := len(n.metricC); a != 1 {
t.Errorf("got %v, expected %v", a, 1)
}
}
// Test that the parser ignores invalid messages
func TestRunParserInvalidMsg(t *testing.T) {
n, in := newTestMQTTConsumer()
defer close(n.done)
n.parser, _ = parsers.NewInfluxParser()
go n.receiver()
in <- mqttMsg(invalidMsg)
time.Sleep(time.Millisecond)
if a := len(n.metricC); a != 0 {
t.Errorf("got %v, expected %v", a, 0)
}
}
// Test that metrics are dropped when we hit the buffer limit
func TestRunParserRespectsBuffer(t *testing.T) {
n, in := newTestMQTTConsumer()
defer close(n.done)
n.parser, _ = parsers.NewInfluxParser()
go n.receiver()
for i := 0; i < metricBuffer+1; i++ {
in <- mqttMsg(testMsg)
}
time.Sleep(time.Millisecond)
if a := len(n.metricC); a != metricBuffer {
t.Errorf("got %v, expected %v", a, metricBuffer)
}
}
// Test that the parser parses line format messages into metrics
func TestRunParserAndGather(t *testing.T) {
n, in := newTestMQTTConsumer()
defer close(n.done)
n.parser, _ = parsers.NewInfluxParser()
go n.receiver()
in <- mqttMsg(testMsg)
time.Sleep(time.Millisecond)
acc := testutil.Accumulator{}
n.Gather(&acc)
if a := len(acc.Metrics); a != 1 {
t.Errorf("got %v, expected %v", a, 1)
}
acc.AssertContainsFields(t, "cpu_load_short",
map[string]interface{}{"value": float64(23422)})
}
// Test that the parser parses graphite format messages into metrics
func TestRunParserAndGatherGraphite(t *testing.T) {
n, in := newTestMQTTConsumer()
defer close(n.done)
n.parser, _ = parsers.NewGraphiteParser("_", []string{}, nil)
go n.receiver()
in <- mqttMsg(testMsgGraphite)
time.Sleep(time.Millisecond)
acc := testutil.Accumulator{}
n.Gather(&acc)
if a := len(acc.Metrics); a != 1 {
t.Errorf("got %v, expected %v", a, 1)
}
acc.AssertContainsFields(t, "cpu_load_short_graphite",
map[string]interface{}{"value": float64(23422)})
}
// Test that the parser parses json format messages into metrics
func TestRunParserAndGatherJSON(t *testing.T) {
n, in := newTestMQTTConsumer()
defer close(n.done)
n.parser, _ = parsers.NewJSONParser("nats_json_test", []string{}, nil)
go n.receiver()
in <- mqttMsg(testMsgJSON)
time.Sleep(time.Millisecond)
acc := testutil.Accumulator{}
n.Gather(&acc)
if a := len(acc.Metrics); a != 1 {
t.Errorf("got %v, expected %v", a, 1)
}
acc.AssertContainsFields(t, "nats_json_test",
map[string]interface{}{
"a": float64(5),
"b_c": float64(6),
})
}
func mqttMsg(val string) mqtt.Message {
return &message{
topic: "telegraf/unit_test",
payload: []byte(val),
}
}
// message mimics the message struct from the paho mqtt client library,
// for returning a test message interface.
type message struct {
duplicate bool
qos byte
retained bool
topic string
messageID uint16
payload []byte
}
func (m *message) Duplicate() bool {
return m.duplicate
}
func (m *message) Qos() byte {
return m.qos
}
func (m *message) Retained() bool {
return m.retained
}
func (m *message) Topic() string {
return m.topic
}
func (m *message) MessageID() uint16 {
return m.messageID
}
func (m *message) Payload() []byte {
return m.payload
}


@@ -0,0 +1,31 @@
# NATS Consumer Input Plugin
The [NATS](http://www.nats.io/about/) consumer plugin reads from
specified NATS subjects and adds messages to InfluxDB. The plugin expects messages
in the [Telegraf Input Data Formats](https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS_INPUT.md).
A [Queue Group](http://www.nats.io/documentation/concepts/nats-queueing/)
is used when subscribing to subjects so multiple instances of telegraf can read
from a NATS cluster in parallel.
## Configuration
```toml
# Read metrics from NATS subject(s)
[[inputs.nats_consumer]]
### urls of NATS servers
servers = ["nats://localhost:4222"]
### Use Transport Layer Security
secure = false
### subject(s) to consume
subjects = ["telegraf"]
### name a queue group
queue_group = "telegraf_consumers"
### Maximum number of metrics to buffer between collection intervals
metric_buffer = 100000
### Data format to consume. This can be "json", "influx" or "graphite"
### Each data format has its own unique set of configuration options, read
### more about them here:
### https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS_INPUT.md
data_format = "influx"
```


@@ -0,0 +1,202 @@
package natsconsumer
import (
"fmt"
"log"
"sync"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/parsers"
"github.com/nats-io/nats"
)
type natsError struct {
conn *nats.Conn
sub *nats.Subscription
err error
}
func (e natsError) Error() string {
return fmt.Sprintf("%s url:%s id:%s sub:%s queue:%s",
e.err.Error(), e.conn.ConnectedUrl(), e.conn.ConnectedServerId(), e.sub.Subject, e.sub.Queue)
}
type natsConsumer struct {
QueueGroup string
Subjects []string
Servers []string
Secure bool
MetricBuffer int
parser parsers.Parser
sync.Mutex
Conn *nats.Conn
Subs []*nats.Subscription
// channel for all incoming NATS messages
in chan *nats.Msg
// channel for all NATS read errors
errs chan error
// channel for all incoming parsed metrics
metricC chan telegraf.Metric
done chan struct{}
}
var sampleConfig = `
### urls of NATS servers
servers = ["nats://localhost:4222"]
### Use Transport Layer Security
secure = false
### subject(s) to consume
subjects = ["telegraf"]
### name a queue group
queue_group = "telegraf_consumers"
### Maximum number of metrics to buffer between collection intervals
metric_buffer = 100000
### Data format to consume. This can be "json", "influx" or "graphite"
### Each data format has its own unique set of configuration options, read
### more about them here:
### https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS_INPUT.md
data_format = "influx"
`
func (n *natsConsumer) SampleConfig() string {
return sampleConfig
}
func (n *natsConsumer) Description() string {
return "Read metrics from NATS subject(s)"
}
func (n *natsConsumer) SetParser(parser parsers.Parser) {
n.parser = parser
}
func (n *natsConsumer) natsErrHandler(c *nats.Conn, s *nats.Subscription, e error) {
select {
case n.errs <- natsError{conn: c, sub: s, err: e}:
default:
return
}
}
// Start the nats consumer. Caller must call *natsConsumer.Stop() to clean up.
func (n *natsConsumer) Start() error {
n.Lock()
defer n.Unlock()
var connectErr error
opts := nats.DefaultOptions
opts.Servers = n.Servers
opts.Secure = n.Secure
if n.Conn == nil || n.Conn.IsClosed() {
n.Conn, connectErr = opts.Connect()
if connectErr != nil {
return connectErr
}
// Setup message and error channels
n.errs = make(chan error)
n.Conn.SetErrorHandler(n.natsErrHandler)
n.in = make(chan *nats.Msg)
for _, subj := range n.Subjects {
sub, err := n.Conn.ChanQueueSubscribe(subj, n.QueueGroup, n.in)
if err != nil {
return err
}
n.Subs = append(n.Subs, sub)
}
}
n.done = make(chan struct{})
if n.MetricBuffer == 0 {
n.MetricBuffer = 100000
}
n.metricC = make(chan telegraf.Metric, n.MetricBuffer)
// Start the message reader
go n.receiver()
log.Printf("Started the NATS consumer service, nats: %v, subjects: %v, queue: %v\n",
n.Conn.ConnectedUrl(), n.Subjects, n.QueueGroup)
return nil
}
// receiver() reads all incoming messages from NATS, and parses them into
// telegraf metrics.
func (n *natsConsumer) receiver() {
defer n.clean()
for {
select {
case <-n.done:
return
case err := <-n.errs:
log.Printf("error reading from %s\n", err.Error())
case msg := <-n.in:
metrics, err := n.parser.Parse(msg.Data)
if err != nil {
log.Printf("subject: %s, error: %s", msg.Subject, err.Error())
}
for _, metric := range metrics {
select {
case n.metricC <- metric:
continue
default:
log.Printf("NATS Consumer buffer is full, dropping a metric." +
" You may want to increase the metric_buffer setting")
}
}
}
}
}
func (n *natsConsumer) clean() {
n.Lock()
defer n.Unlock()
close(n.in)
close(n.metricC)
close(n.errs)
for _, sub := range n.Subs {
if err := sub.Unsubscribe(); err != nil {
log.Printf("Error unsubscribing from subject %s in queue %s: %s\n",
sub.Subject, sub.Queue, err.Error())
}
}
if n.Conn != nil && !n.Conn.IsClosed() {
n.Conn.Close()
}
}
func (n *natsConsumer) Stop() {
n.Lock()
close(n.done)
n.Unlock()
}
func (n *natsConsumer) Gather(acc telegraf.Accumulator) error {
n.Lock()
defer n.Unlock()
nmetrics := len(n.metricC)
for i := 0; i < nmetrics; i++ {
metric := <-n.metricC
acc.AddFields(metric.Name(), metric.Fields(), metric.Tags(), metric.Time())
}
return nil
}
func init() {
inputs.Add("nats_consumer", func() telegraf.Input {
return &natsConsumer{}
})
}


@@ -0,0 +1,152 @@
package natsconsumer
import (
"testing"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/parsers"
"github.com/influxdata/telegraf/testutil"
"github.com/nats-io/nats"
)
const (
testMsg = "cpu_load_short,host=server01 value=23422.0 1422568543702900257"
testMsgGraphite = "cpu.load.short.graphite 23422 1454780029"
testMsgJSON = "{\"a\": 5, \"b\": {\"c\": 6}}\n"
invalidMsg = "cpu_load_short,host=server01 1422568543702900257"
metricBuffer = 5
)
func newTestNatsConsumer() (*natsConsumer, chan *nats.Msg) {
in := make(chan *nats.Msg, metricBuffer)
n := &natsConsumer{
QueueGroup: "test",
Subjects: []string{"telegraf"},
Servers: []string{"nats://localhost:4222"},
Secure: false,
MetricBuffer: metricBuffer,
in: in,
errs: make(chan error, metricBuffer),
done: make(chan struct{}),
metricC: make(chan telegraf.Metric, metricBuffer),
}
return n, in
}
// Test that the parser parses NATS messages into metrics
func TestRunParser(t *testing.T) {
n, in := newTestNatsConsumer()
defer close(n.done)
n.parser, _ = parsers.NewInfluxParser()
go n.receiver()
in <- natsMsg(testMsg)
time.Sleep(time.Millisecond)
if a := len(n.metricC); a != 1 {
t.Errorf("got %v, expected %v", a, 1)
}
}
// Test that the parser ignores invalid messages
func TestRunParserInvalidMsg(t *testing.T) {
n, in := newTestNatsConsumer()
defer close(n.done)
n.parser, _ = parsers.NewInfluxParser()
go n.receiver()
in <- natsMsg(invalidMsg)
time.Sleep(time.Millisecond)
if a := len(n.metricC); a != 0 {
t.Errorf("got %v, expected %v", a, 0)
}
}
// Test that metrics are dropped when we hit the buffer limit
func TestRunParserRespectsBuffer(t *testing.T) {
n, in := newTestNatsConsumer()
defer close(n.done)
n.parser, _ = parsers.NewInfluxParser()
go n.receiver()
for i := 0; i < metricBuffer+1; i++ {
in <- natsMsg(testMsg)
}
time.Sleep(time.Millisecond)
if a := len(n.metricC); a != metricBuffer {
t.Errorf("got %v, expected %v", a, metricBuffer)
}
}
// Test that the parser parses line format messages into metrics
func TestRunParserAndGather(t *testing.T) {
n, in := newTestNatsConsumer()
defer close(n.done)
n.parser, _ = parsers.NewInfluxParser()
go n.receiver()
in <- natsMsg(testMsg)
time.Sleep(time.Millisecond)
acc := testutil.Accumulator{}
n.Gather(&acc)
if a := len(acc.Metrics); a != 1 {
t.Errorf("got %v, expected %v", a, 1)
}
acc.AssertContainsFields(t, "cpu_load_short",
map[string]interface{}{"value": float64(23422)})
}
// Test that the parser parses graphite format messages into metrics
func TestRunParserAndGatherGraphite(t *testing.T) {
n, in := newTestNatsConsumer()
defer close(n.done)
n.parser, _ = parsers.NewGraphiteParser("_", []string{}, nil)
go n.receiver()
in <- natsMsg(testMsgGraphite)
time.Sleep(time.Millisecond)
acc := testutil.Accumulator{}
n.Gather(&acc)
if a := len(acc.Metrics); a != 1 {
t.Errorf("got %v, expected %v", a, 1)
}
acc.AssertContainsFields(t, "cpu_load_short_graphite",
map[string]interface{}{"value": float64(23422)})
}
// Test that the parser parses json format messages into metrics
func TestRunParserAndGatherJSON(t *testing.T) {
n, in := newTestNatsConsumer()
defer close(n.done)
n.parser, _ = parsers.NewJSONParser("nats_json_test", []string{}, nil)
go n.receiver()
in <- natsMsg(testMsgJSON)
time.Sleep(time.Millisecond)
acc := testutil.Accumulator{}
n.Gather(&acc)
if a := len(acc.Metrics); a != 1 {
t.Errorf("got %v, expected %v", a, 1)
}
acc.AssertContainsFields(t, "nats_json_test",
map[string]interface{}{
"a": float64(5),
"b_c": float64(6),
})
}
func natsMsg(val string) *nats.Msg {
return &nats.Msg{
Subject: "telegraf",
Data: []byte(val),
}
}


@@ -0,0 +1,66 @@
# net_response Input Plugin

This input plugin tests UDP/TCP connection response time.
It can also check the response text.

### Configuration:

```toml
# List of UDP/TCP connections you want to check
[[inputs.net_response]]
protocol = "tcp"
# Server address (default IP localhost)
address = "github.com:80"
# Set timeout (default 1.0)
timeout = 1.0
# Set read timeout (default 1.0)
read_timeout = 1.0
# String sent to the server
send = "ssh"
# Expected string in answer
expect = "ssh"
[[inputs.net_response]]
protocol = "tcp"
address = ":80"
[[inputs.net_response]]
protocol = "udp"
# Server address (default IP localhost)
address = "github.com:80"
# Set timeout (default 1.0)
timeout = 1.0
# Set read timeout (default 1.0)
read_timeout = 1.0
# String sent to the server
send = "ssh"
# Expected string in answer
expect = "ssh"
[[inputs.net_response]]
protocol = "udp"
address = "localhost:161"
timeout = 2.0
```
### Measurements & Fields:
- net_response
- response_time (float, seconds)
- string_found (bool) # Only if "expect" option is set
### Tags:
- All measurements have the following tags:
- host
- port
- protocol
### Example Output:
```
$ ./telegraf -config telegraf.conf -input-filter net_response -test
net_response,host=127.0.0.1,port=22,protocol=tcp response_time=0.18070360500000002,string_found=true 1454785464182527094
net_response,host=127.0.0.1,port=2222,protocol=tcp response_time=1.090124776,string_found=false 1454784433658942325
```


@@ -0,0 +1,196 @@
package net_response
import (
"bufio"
"errors"
"net"
"net/textproto"
"regexp"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
// NetResponse struct
type NetResponse struct {
Address string
Timeout float64
ReadTimeout float64
Send string
Expect string
Protocol string
}
func (_ *NetResponse) Description() string {
return "TCP or UDP 'ping' given url and collect response time in seconds"
}
var sampleConfig = `
### Protocol, must be "tcp" or "udp"
protocol = "tcp"
### Server address (default localhost)
address = "github.com:80"
### Set timeout (default 1.0 seconds)
timeout = 1.0
### Set read timeout (default 1.0 seconds)
read_timeout = 1.0
### Optional string sent to the server
# send = "ssh"
### Optional expected string in answer
# expect = "ssh"
`
func (_ *NetResponse) SampleConfig() string {
return sampleConfig
}
func (t *NetResponse) TcpGather() (map[string]interface{}, error) {
// Prepare fields
fields := make(map[string]interface{})
// Start Timer
start := time.Now()
	// Resolving
	tcpAddr, err := net.ResolveTCPAddr("tcp", t.Address)
	if err != nil {
		return nil, err
	}
	// Connecting
	conn, err := net.DialTCP("tcp", nil, tcpAddr)
	// Stop timer
	responseTime := time.Since(start).Seconds()
	// Handle error
	if err != nil {
		return nil, err
	}
defer conn.Close()
// Send string if needed
if t.Send != "" {
msg := []byte(t.Send)
conn.Write(msg)
conn.CloseWrite()
// Stop timer
responseTime = time.Since(start).Seconds()
}
// Read string if needed
if t.Expect != "" {
// Set read timeout
conn.SetReadDeadline(time.Now().Add(time.Duration(t.ReadTimeout) * time.Second))
// Prepare reader
reader := bufio.NewReader(conn)
tp := textproto.NewReader(reader)
// Read
data, err := tp.ReadLine()
// Stop timer
responseTime = time.Since(start).Seconds()
// Handle error
if err != nil {
fields["string_found"] = false
} else {
// Looking for string in answer
RegEx := regexp.MustCompile(`.*` + t.Expect + `.*`)
find := RegEx.FindString(string(data))
if find != "" {
fields["string_found"] = true
} else {
fields["string_found"] = false
}
}
}
fields["response_time"] = responseTime
return fields, nil
}
func (u *NetResponse) UdpGather() (map[string]interface{}, error) {
// Prepare fields
fields := make(map[string]interface{})
// Start Timer
start := time.Now()
	// Resolving
	udpAddr, err := net.ResolveUDPAddr("udp", u.Address)
	if err != nil {
		return nil, err
	}
	LocalAddr, err := net.ResolveUDPAddr("udp", "127.0.0.1:0")
	if err != nil {
		return nil, err
	}
	// Connecting
	conn, err := net.DialUDP("udp", LocalAddr, udpAddr)
	// Handle error before deferring Close, otherwise a failed dial would
	// leave conn nil and panic on the deferred Close.
	if err != nil {
		return nil, err
	}
	defer conn.Close()
// Send string
msg := []byte(u.Send)
conn.Write(msg)
// Read string
// Set read timeout
conn.SetReadDeadline(time.Now().Add(time.Duration(u.ReadTimeout) * time.Second))
// Read
buf := make([]byte, 1024)
_, _, err = conn.ReadFromUDP(buf)
// Stop timer
responseTime := time.Since(start).Seconds()
// Handle error
if err != nil {
return nil, err
} else {
// Looking for string in answer
RegEx := regexp.MustCompile(`.*` + u.Expect + `.*`)
find := RegEx.FindString(string(buf))
if find != "" {
fields["string_found"] = true
} else {
fields["string_found"] = false
}
}
fields["response_time"] = responseTime
return fields, nil
}
func (c *NetResponse) Gather(acc telegraf.Accumulator) error {
// Set default values
if c.Timeout == 0 {
c.Timeout = 1.0
}
if c.ReadTimeout == 0 {
c.ReadTimeout = 1.0
}
// Check send and expected string
if c.Protocol == "udp" && c.Send == "" {
return errors.New("Send string cannot be empty")
}
if c.Protocol == "udp" && c.Expect == "" {
return errors.New("Expected string cannot be empty")
}
// Prepare host and port
host, port, err := net.SplitHostPort(c.Address)
if err != nil {
return err
}
if host == "" {
c.Address = "localhost:" + port
}
if port == "" {
return errors.New("Bad port")
}
// Prepare data
tags := map[string]string{"host": host, "port": port}
var fields map[string]interface{}
// Gather data
if c.Protocol == "tcp" {
fields, err = c.TcpGather()
tags["protocol"] = "tcp"
} else if c.Protocol == "udp" {
fields, err = c.UdpGather()
tags["protocol"] = "udp"
} else {
return errors.New("Bad protocol")
}
if err != nil {
return err
}
// Add metrics
acc.AddFields("net_response", fields, tags)
return nil
}
func init() {
inputs.Add("net_response", func() telegraf.Input {
return &NetResponse{}
})
}
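
For reference, a minimal stand-alone sketch of driving this input by hand, outside the Telegraf agent. The testutil accumulator and the plugin's import path are assumptions inferred from the tests below:

```go
package main

import (
	"fmt"

	"github.com/influxdata/telegraf/plugins/inputs/net_response"
	"github.com/influxdata/telegraf/testutil"
)

func main() {
	// Hypothetical stand-alone driver; the Telegraf agent normally
	// calls Gather on its configured interval.
	n := &net_response.NetResponse{
		Protocol: "tcp",
		Address:  "github.com:80",
		Timeout:  1.0,
	}
	var acc testutil.Accumulator
	if err := n.Gather(&acc); err != nil {
		fmt.Println("gather failed:", err)
		return
	}
	// Expect one "net_response" point with a response_time field
	// and host/port/protocol tags.
	for _, m := range acc.Metrics {
		fmt.Println(m.Measurement, m.Tags, m.Fields)
	}
}
```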

View File

@ -0,0 +1,198 @@
package net_response
import (
"net"
"regexp"
"sync"
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestBadProtocol(t *testing.T) {
var acc testutil.Accumulator
// Init plugin
c := NetResponse{
Protocol: "unknownprotocol",
Address: ":9999",
}
// Error
err1 := c.Gather(&acc)
require.Error(t, err1)
assert.Equal(t, "Bad protocol", err1.Error())
}
func TestTCPError(t *testing.T) {
var acc testutil.Accumulator
// Init plugin
c := NetResponse{
Protocol: "tcp",
Address: ":9999",
}
// Error
err1 := c.Gather(&acc)
require.Error(t, err1)
assert.Equal(t, "dial tcp 127.0.0.1:9999: getsockopt: connection refused", err1.Error())
}
func TestTCPOK1(t *testing.T) {
var wg sync.WaitGroup
var acc testutil.Accumulator
// Init plugin
c := NetResponse{
Address: "127.0.0.1:2004",
Send: "test",
Expect: "test",
ReadTimeout: 3.0,
Timeout: 1.0,
Protocol: "tcp",
}
// Start TCP server
wg.Add(1)
go TCPServer(t, &wg)
wg.Wait()
// Connect
wg.Add(1)
err1 := c.Gather(&acc)
wg.Wait()
// Override response time
for _, p := range acc.Metrics {
p.Fields["response_time"] = 1.0
}
require.NoError(t, err1)
acc.AssertContainsTaggedFields(t,
"net_response",
map[string]interface{}{
"string_found": true,
"response_time": 1.0,
},
map[string]string{"host": "127.0.0.1",
"port": "2004",
"protocol": "tcp",
},
)
// Waiting TCPserver
wg.Wait()
}
func TestTCPOK2(t *testing.T) {
var wg sync.WaitGroup
var acc testutil.Accumulator
// Init plugin
c := NetResponse{
Address: "127.0.0.1:2004",
Send: "test",
Expect: "test2",
ReadTimeout: 3.0,
Timeout: 1.0,
Protocol: "tcp",
}
// Start TCP server
wg.Add(1)
go TCPServer(t, &wg)
wg.Wait()
// Connect
wg.Add(1)
err1 := c.Gather(&acc)
wg.Wait()
// Override response time
for _, p := range acc.Metrics {
p.Fields["response_time"] = 1.0
}
require.NoError(t, err1)
acc.AssertContainsTaggedFields(t,
"net_response",
map[string]interface{}{
"string_found": false,
"response_time": 1.0,
},
map[string]string{"host": "127.0.0.1",
"port": "2004",
"protocol": "tcp",
},
)
// Waiting TCPserver
wg.Wait()
}
func TestUDPError(t *testing.T) {
var acc testutil.Accumulator
// Init plugin
c := NetResponse{
Address: ":9999",
Send: "test",
Expect: "test",
Protocol: "udp",
}
// Error
err1 := c.Gather(&acc)
require.Error(t, err1)
assert.Regexp(t, regexp.MustCompile(`read udp 127.0.0.1:[0-9]*->127.0.0.1:9999: recvfrom: connection refused`), err1.Error())
}
func TestUDPOK1(t *testing.T) {
var wg sync.WaitGroup
var acc testutil.Accumulator
// Init plugin
c := NetResponse{
Address: "127.0.0.1:2004",
Send: "test",
Expect: "test",
ReadTimeout: 3.0,
Timeout: 1.0,
Protocol: "udp",
}
// Start UDP server
wg.Add(1)
go UDPServer(t, &wg)
wg.Wait()
// Connect
wg.Add(1)
err1 := c.Gather(&acc)
wg.Wait()
// Override response time
for _, p := range acc.Metrics {
p.Fields["response_time"] = 1.0
}
require.NoError(t, err1)
acc.AssertContainsTaggedFields(t,
"net_response",
map[string]interface{}{
"string_found": true,
"response_time": 1.0,
},
map[string]string{"host": "127.0.0.1",
"port": "2004",
"protocol": "udp",
},
)
// Waiting UDPserver
wg.Wait()
}
func UDPServer(t *testing.T, wg *sync.WaitGroup) {
udpAddr, _ := net.ResolveUDPAddr("udp", "127.0.0.1:2004")
conn, _ := net.ListenUDP("udp", udpAddr)
wg.Done()
buf := make([]byte, 1024)
_, remoteaddr, _ := conn.ReadFromUDP(buf)
conn.WriteToUDP(buf, remoteaddr)
conn.Close()
wg.Done()
}
func TCPServer(t *testing.T, wg *sync.WaitGroup) {
tcpAddr, _ := net.ResolveTCPAddr("tcp", "127.0.0.1:2004")
tcpServer, _ := net.ListenTCP("tcp", tcpAddr)
wg.Done()
conn, _ := tcpServer.AcceptTCP()
buf := make([]byte, 1024)
conn.Read(buf)
conn.Write(buf)
conn.CloseWrite()
tcpServer.Close()
wg.Done()
}

View File

@@ -1,6 +1,6 @@
 # PostgreSQL plugin

-This postgresql plugin provides metrics for your postgres database. It currently works with postgres versions 8.1+. It uses data from the built in _pg_stat_database_ view. The metrics recorded depend on your version of postgres. See table:
+This postgresql plugin provides metrics for your postgres database. It currently works with postgres versions 8.1+. It uses data from the built in _pg_stat_database_ and pg_stat_bgwriter views. The metrics recorded depend on your version of postgres. See table:
 ```
 pg version      9.2+   9.1   8.3-9.0   8.1-8.2   7.4-8.0(unsupported)
 ---             ---    ---   -------   -------   -------
@@ -27,4 +27,5 @@ stats_reset* x x
 _* value ignored and therefore not recorded._

 More information about the meaning of these metrics can be found in the [PostgreSQL Documentation](http://www.postgresql.org/docs/9.2/static/monitoring-stats.html#PG-STAT-DATABASE-VIEW)

View File

@@ -4,6 +4,7 @@ import (
 	"bytes"
 	"database/sql"
 	"fmt"
+	"sort"
 	"strings"

 	"github.com/influxdata/telegraf"
@@ -16,6 +17,7 @@ type Postgresql struct {
 	Address        string
 	Databases      []string
 	OrderedColumns []string
+	AllColumns     []string
 }

 var ignoredColumns = map[string]bool{"datid": true, "datname": true, "stats_reset": true}
@@ -86,6 +88,9 @@ func (p *Postgresql) Gather(acc telegraf.Accumulator) error {
 	p.OrderedColumns, err = rows.Columns()
 	if err != nil {
 		return err
+	} else {
+		p.AllColumns = make([]string, len(p.OrderedColumns))
+		copy(p.AllColumns, p.OrderedColumns)
 	}

 	for rows.Next() {
@@ -94,8 +99,34 @@ func (p *Postgresql) Gather(acc telegraf.Accumulator) error {
 			return err
 		}
 	}
-
-	return rows.Err()
+	//return rows.Err()
+	query = `SELECT * FROM pg_stat_bgwriter`
+
+	bg_writer_row, err := db.Query(query)
+	if err != nil {
+		return err
+	}
+
+	defer bg_writer_row.Close()
+
+	// grab the column information from the result
+	p.OrderedColumns, err = bg_writer_row.Columns()
+	if err != nil {
+		return err
+	} else {
+		for _, v := range p.OrderedColumns {
+			p.AllColumns = append(p.AllColumns, v)
+		}
+	}
+
+	for bg_writer_row.Next() {
+		err = p.accRow(bg_writer_row, acc)
+		if err != nil {
+			return err
+		}
+	}
+	sort.Strings(p.AllColumns)
+	return bg_writer_row.Err()
 }

 type scanner interface {
@@ -124,11 +155,14 @@ func (p *Postgresql) accRow(row scanner, acc telegraf.Accumulator) error {
 	if err != nil {
 		return err
 	}
+	if columnMap["datname"] != nil {
 		// extract the database name from the column map
 		dbnameChars := (*columnMap["datname"]).([]uint8)
 		for i := 0; i < len(dbnameChars); i++ {
 			dbname.WriteString(string(dbnameChars[i]))
+		}
+	} else {
+		dbname.WriteString("postgres")
 	}

 	tags := map[string]string{"server": p.Address, "db": dbname.String()}

View File

@@ -21,15 +21,13 @@ func TestPostgresqlGeneratesMetrics(t *testing.T) {
 	}

 	var acc testutil.Accumulator
 	err := p.Gather(&acc)
 	require.NoError(t, err)

 	availableColumns := make(map[string]bool)
-	for _, col := range p.OrderedColumns {
+	for _, col := range p.AllColumns {
 		availableColumns[col] = true
 	}

 	intMetrics := []string{
 		"xact_commit",
 		"xact_rollback",
@@ -45,6 +43,14 @@ func TestPostgresqlGeneratesMetrics(t *testing.T) {
 		"temp_bytes",
 		"deadlocks",
 		"numbackends",
+		"buffers_alloc",
+		"buffers_backend",
+		"buffers_backend_fsync",
+		"buffers_checkpoint",
+		"buffers_clean",
+		"checkpoints_req",
+		"checkpoints_timed",
+		"maxwritten_clean",
 	}

 	floatMetrics := []string{
@@ -71,7 +77,7 @@ func TestPostgresqlGeneratesMetrics(t *testing.T) {
 	}

 	assert.True(t, metricsCounted > 0)
-	assert.Equal(t, len(availableColumns)-len(p.IgnoredColumns()), metricsCounted)
+	//assert.Equal(t, len(availableColumns)-len(p.IgnoredColumns()), metricsCounted)
 }

 func TestPostgresqlTagsMetricsWithDatabaseName(t *testing.T) {

View File

@ -0,0 +1,49 @@
# Raindrops Input Plugin
The [raindrops](http://raindrops.bogomips.org/) plugin reads from the
specified raindrops [middleware](http://raindrops.bogomips.org/Raindrops/Middleware.html) URI and adds the stats to InfluxDB.
### Configuration:
```toml
# Read raindrops stats
[[inputs.raindrops]]
urls = ["http://localhost:8080/_raindrops"]
```
### Measurements & Fields:
- raindrops
- calling (integer, count)
- writing (integer, count)
- raindrops_listen
- active (integer, count)
- queued (integer, count)
### Tags:
- Raindrops calling/writing of all the workers:
- server
- port
- raindrops_listen (ip:port):
- ip
- port
- raindrops_listen (Unix Socket):
- socket
### Example Output:
```
$ ./telegraf -config telegraf.conf -input-filter raindrops -test
* Plugin: raindrops, Collection 1
> raindrops,port=8080,server=localhost calling=0i,writing=0i 1455479896806238204
> raindrops_listen,ip=0.0.0.0,port=8080 active=0i,queued=0i 1455479896806561938
> raindrops_listen,ip=0.0.0.0,port=8081 active=1i,queued=0i 1455479896806605749
> raindrops_listen,ip=127.0.0.1,port=8082 active=0i,queued=0i 1455479896806646315
> raindrops_listen,ip=0.0.0.0,port=8083 active=0i,queued=0i 1455479896806683252
> raindrops_listen,ip=0.0.0.0,port=8084 active=0i,queued=0i 1455479896806712025
> raindrops_listen,ip=0.0.0.0,port=3000 active=0i,queued=0i 1455479896806779197
> raindrops_listen,socket=/tmp/listen.me active=0i,queued=0i 1455479896806813907
```

View File

@ -0,0 +1,184 @@
package raindrops
import (
"bufio"
"fmt"
"net"
"net/http"
"net/url"
"strconv"
"strings"
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
type Raindrops struct {
Urls []string
http_client *http.Client
}
var sampleConfig = `
### An array of raindrops middleware URIs to gather stats from.
urls = ["http://localhost:8080/_raindrops"]
`
func (r *Raindrops) SampleConfig() string {
return sampleConfig
}
func (r *Raindrops) Description() string {
return "Read raindrops stats (raindrops - real-time stats for preforking Rack servers)"
}
func (r *Raindrops) Gather(acc telegraf.Accumulator) error {
var wg sync.WaitGroup
var outerr error
for _, u := range r.Urls {
addr, err := url.Parse(u)
if err != nil {
return fmt.Errorf("Unable to parse address '%s': %s", u, err)
}
wg.Add(1)
go func(addr *url.URL) {
defer wg.Done()
outerr = r.gatherUrl(addr, acc)
}(addr)
}
wg.Wait()
return outerr
}
func (r *Raindrops) gatherUrl(addr *url.URL, acc telegraf.Accumulator) error {
resp, err := r.http_client.Get(addr.String())
if err != nil {
return fmt.Errorf("error making HTTP request to %s: %s", addr.String(), err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return fmt.Errorf("%s returned HTTP status %s", addr.String(), resp.Status)
}
buf := bufio.NewReader(resp.Body)
// Calling
_, err = buf.ReadString(':')
if err != nil {
return err
}
line, err := buf.ReadString('\n')
if err != nil {
return err
}
calling, err := strconv.ParseUint(strings.TrimSpace(line), 10, 64)
if err != nil {
return err
}
// Writing
_, err = buf.ReadString(':')
if err != nil {
return err
}
line, err = buf.ReadString('\n')
if err != nil {
return err
}
writing, err := strconv.ParseUint(strings.TrimSpace(line), 10, 64)
if err != nil {
return err
}
tags := r.getTags(addr)
fields := map[string]interface{}{
"calling": calling,
"writing": writing,
}
acc.AddFields("raindrops", fields, tags)
iterate := true
var queued_line_str string
var active_line_str string
var active_err error
var queued_err error
for iterate {
// Listen
var tags map[string]string
lis := map[string]interface{}{
"active": 0,
"queued": 0,
}
active_line_str, active_err = buf.ReadString('\n')
if active_err != nil {
iterate = false
break
}
if active_line_str == "\n" {
break
}
queued_line_str, queued_err = buf.ReadString('\n')
if queued_err != nil {
iterate = false
}
active_line := strings.Split(active_line_str, " ")
// Guard against malformed lines so the index below cannot panic
if len(active_line) < 3 {
break
}
listen_name := active_line[0]
active, err := strconv.ParseUint(strings.TrimSpace(active_line[2]), 10, 64)
if err != nil {
active = 0
}
lis["active"] = active
queued_line := strings.Split(queued_line_str, " ")
var queued uint64
if len(queued_line) >= 3 {
queued, err = strconv.ParseUint(strings.TrimSpace(queued_line[2]), 10, 64)
if err != nil {
queued = 0
}
}
lis["queued"] = queued
if strings.Contains(listen_name, ":") {
listener := strings.Split(listen_name, ":")
tags = map[string]string{
"ip": listener[0],
"port": listener[1],
}
} else {
tags = map[string]string{
"socket": listen_name,
}
}
acc.AddFields("raindrops_listen", lis, tags)
}
return nil
}
// Get tag(s) for the raindrops calling/writing plugin
func (r *Raindrops) getTags(addr *url.URL) map[string]string {
h := addr.Host
host, port, err := net.SplitHostPort(h)
if err != nil {
host = addr.Host
if addr.Scheme == "http" {
port = "80"
} else if addr.Scheme == "https" {
port = "443"
} else {
port = ""
}
}
return map[string]string{"server": host, "port": port}
}
func init() {
inputs.Add("raindrops", func() telegraf.Input {
return &Raindrops{http_client: &http.Client{Transport: &http.Transport{
ResponseHeaderTimeout: time.Duration(3 * time.Second),
}}}
})
}

View File

@ -0,0 +1,107 @@
package raindrops
import (
"fmt"
"net"
"net/http"
"net/http/httptest"
"net/url"
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"time"
)
const sampleResponse = `
calling: 100
writing: 200
0.0.0.0:8080 active: 1
0.0.0.0:8080 queued: 2
0.0.0.0:8081 active: 3
0.0.0.0:8081 queued: 4
127.0.0.1:8082 active: 5
127.0.0.1:8082 queued: 6
0.0.0.0:8083 active: 7
0.0.0.0:8083 queued: 8
0.0.0.0:8084 active: 9
0.0.0.0:8084 queued: 10
0.0.0.0:3000 active: 11
0.0.0.0:3000 queued: 12
/tmp/listen.me active: 13
/tmp/listen.me queued: 14
`
// Verify that raindrops tags are properly parsed based on the server
func TestRaindropsTags(t *testing.T) {
urls := []string{"http://localhost/_raindrops", "http://localhost:80/_raindrops"}
var addr *url.URL
r := &Raindrops{}
for _, url1 := range urls {
addr, _ = url.Parse(url1)
tagMap := r.getTags(addr)
assert.Contains(t, tagMap["server"], "localhost")
}
}
func TestRaindropsGeneratesMetrics(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
var rsp string
if r.URL.Path == "/_raindrops" {
rsp = sampleResponse
} else {
panic("Cannot handle request")
}
fmt.Fprintln(w, rsp)
}))
defer ts.Close()
n := &Raindrops{
Urls: []string{fmt.Sprintf("%s/_raindrops", ts.URL)},
http_client: &http.Client{Transport: &http.Transport{
ResponseHeaderTimeout: time.Duration(3 * time.Second),
}},
}
var acc testutil.Accumulator
err := n.Gather(&acc)
require.NoError(t, err)
fields := map[string]interface{}{
"calling": uint64(100),
"writing": uint64(200),
}
addr, err := url.Parse(ts.URL)
if err != nil {
panic(err)
}
host, port, err := net.SplitHostPort(addr.Host)
if err != nil {
host = addr.Host
if addr.Scheme == "http" {
port = "80"
} else if addr.Scheme == "https" {
port = "443"
} else {
port = ""
}
}
tags := map[string]string{"server": host, "port": port}
acc.AssertContainsTaggedFields(t, "raindrops", fields, tags)
tags = map[string]string{
"port": "8081",
"ip": "0.0.0.0",
}
fields = map[string]interface{}{
"active": uint64(3),
"queued": uint64(4),
}
acc.AssertContainsTaggedFields(t, "raindrops_listen", fields, tags)
}

View File

@@ -5,6 +5,7 @@ import (
 	_ "github.com/influxdata/telegraf/plugins/outputs/amqp"
 	_ "github.com/influxdata/telegraf/plugins/outputs/cloudwatch"
 	_ "github.com/influxdata/telegraf/plugins/outputs/datadog"
+	_ "github.com/influxdata/telegraf/plugins/outputs/file"
 	_ "github.com/influxdata/telegraf/plugins/outputs/graphite"
 	_ "github.com/influxdata/telegraf/plugins/outputs/influxdb"
 	_ "github.com/influxdata/telegraf/plugins/outputs/kafka"

View File

@@ -10,6 +10,8 @@ import (
 	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/internal"
 	"github.com/influxdata/telegraf/plugins/outputs"
+	"github.com/influxdata/telegraf/plugins/serializers"
+
 	"github.com/streadway/amqp"
 )
@@ -39,6 +41,8 @@ type AMQP struct {
 	channel *amqp.Channel
 	sync.Mutex
 	headers amqp.Table
+
+	serializer serializers.Serializer
 }

 const (
@@ -69,8 +73,18 @@ var sampleConfig = `
 	# ssl_key = "/etc/telegraf/key.pem"
 	### Use SSL but skip chain & host verification
 	# insecure_skip_verify = false
+
+	### Data format to output. This can be "influx" or "graphite"
+	### Each data format has its own unique set of configuration options, read
+	### more about them here:
+	### https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS_OUTPUT.md
+	data_format = "influx"
 `

+func (a *AMQP) SetSerializer(serializer serializers.Serializer) {
+	a.serializer = serializer
+}
+
 func (q *AMQP) Connect() error {
 	q.Lock()
 	defer q.Unlock()
@@ -147,18 +161,24 @@ func (q *AMQP) Write(metrics []telegraf.Metric) error {
 	}
 	var outbuf = make(map[string][][]byte)

-	for _, p := range metrics {
-		var value, key string
-		value = p.String()
+	for _, metric := range metrics {
+		var key string
 		if q.RoutingTag != "" {
-			if h, ok := p.Tags()[q.RoutingTag]; ok {
+			if h, ok := metric.Tags()[q.RoutingTag]; ok {
 				key = h
 			}
 		}
-		outbuf[key] = append(outbuf[key], []byte(value))
+
+		values, err := q.serializer.Serialize(metric)
+		if err != nil {
+			return err
+		}
+
+		for _, value := range values {
+			outbuf[key] = append(outbuf[key], []byte(value))
+		}
 	}

 	for key, buf := range outbuf {
 		err := q.channel.Publish(
 			q.Exchange, // exchange

View File

@@ -3,6 +3,7 @@ package amqp
 import (
 	"testing"

+	"github.com/influxdata/telegraf/plugins/serializers"
 	"github.com/influxdata/telegraf/testutil"
 	"github.com/stretchr/testify/require"
 )
@@ -13,9 +14,11 @@ func TestConnectAndWrite(t *testing.T) {
 	}

 	var url = "amqp://" + testutil.GetLocalHost() + ":5672/"
+	s, _ := serializers.NewInfluxSerializer()
 	q := &AMQP{
 		URL:        url,
 		Exchange:   "telegraf_test",
+		serializer: s,
 	}

 	// Verify that we can connect to the AMQP broker

View File

@ -0,0 +1 @@
# File Output Plugin

View File

@ -0,0 +1,109 @@
package file
import (
"fmt"
"io"
"os"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/outputs"
"github.com/influxdata/telegraf/plugins/serializers"
)
type File struct {
Files []string
writer io.Writer
closers []io.Closer
serializer serializers.Serializer
}
var sampleConfig = `
### Files to write to, "stdout" is a specially handled file.
files = ["stdout", "/tmp/metrics.out"]
### Data format to output. This can be "influx" or "graphite"
### Each data format has its own unique set of configuration options, read
### more about them here:
### https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS_OUTPUT.md
data_format = "influx"
`
func (f *File) SetSerializer(serializer serializers.Serializer) {
f.serializer = serializer
}
func (f *File) Connect() error {
writers := []io.Writer{}
for _, file := range f.Files {
if file == "stdout" {
writers = append(writers, os.Stdout)
f.closers = append(f.closers, os.Stdout)
} else {
var of *os.File
var err error
// Use a separately named error for the existence check so the
// Create/OpenFile error below is not shadowed by the if-scoped err
if _, statErr := os.Stat(file); os.IsNotExist(statErr) {
of, err = os.Create(file)
} else {
of, err = os.OpenFile(file, os.O_APPEND|os.O_WRONLY, os.ModeAppend)
}
if err != nil {
return err
}
writers = append(writers, of)
f.closers = append(f.closers, of)
}
}
f.writer = io.MultiWriter(writers...)
return nil
}
func (f *File) Close() error {
var errS string
for _, c := range f.closers {
if err := c.Close(); err != nil {
errS += err.Error() + "\n"
}
}
if errS != "" {
return fmt.Errorf(errS)
}
return nil
}
func (f *File) SampleConfig() string {
return sampleConfig
}
func (f *File) Description() string {
return "Send telegraf metrics to file(s)"
}
func (f *File) Write(metrics []telegraf.Metric) error {
if len(metrics) == 0 {
return nil
}
for _, metric := range metrics {
values, err := f.serializer.Serialize(metric)
if err != nil {
return err
}
for _, value := range values {
_, err = f.writer.Write([]byte(value + "\n"))
if err != nil {
return fmt.Errorf("FAILED to write message: %s, %s", value, err)
}
}
}
return nil
}
func init() {
outputs.Add("file", func() telegraf.Output {
return &File{}
})
}
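
As a usage sketch (not part of this commit), the File output can be wired to an influx serializer by hand; the import paths and the telegraf.NewMetric signature are assumptions taken from this tree's tests:

```go
package main

import (
	"time"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/outputs/file"
	"github.com/influxdata/telegraf/plugins/serializers"
)

func main() {
	// The serializer is selected explicitly here; the agent normally
	// builds it from the output's data_format config option.
	s, _ := serializers.NewInfluxSerializer()

	f := &file.File{Files: []string{"stdout"}}
	f.SetSerializer(s)
	if err := f.Connect(); err != nil {
		panic(err)
	}
	defer f.Close()

	m, _ := telegraf.NewMetric(
		"cpu",
		map[string]string{"cpu": "cpu0"},
		map[string]interface{}{"usage_idle": 91.5},
		time.Now(),
	)
	// Prints one line of line-protocol to stdout.
	if err := f.Write([]telegraf.Metric{m}); err != nil {
		panic(err)
	}
}
```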

View File

@ -0,0 +1,196 @@
package file
import (
"bytes"
"io"
"io/ioutil"
"os"
"testing"
"github.com/stretchr/testify/assert"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/serializers"
"github.com/influxdata/telegraf/testutil"
)
const (
expNewFile = "test1,tag1=value1 value=1 1257894000000000000\n"
expExistFile = "cpu,cpu=cpu0 value=100 1455312810012459582\n" +
"test1,tag1=value1 value=1 1257894000000000000\n"
)
func TestFileExistingFile(t *testing.T) {
fh := createFile()
s, _ := serializers.NewInfluxSerializer()
f := File{
Files: []string{fh.Name()},
serializer: s,
}
err := f.Connect()
assert.NoError(t, err)
err = f.Write(testutil.MockMetrics())
assert.NoError(t, err)
validateFile(fh.Name(), expExistFile, t)
err = f.Close()
assert.NoError(t, err)
}
func TestFileNewFile(t *testing.T) {
s, _ := serializers.NewInfluxSerializer()
fh := tmpFile()
f := File{
Files: []string{fh},
serializer: s,
}
err := f.Connect()
assert.NoError(t, err)
err = f.Write(testutil.MockMetrics())
assert.NoError(t, err)
validateFile(fh, expNewFile, t)
err = f.Close()
assert.NoError(t, err)
}
func TestFileExistingFiles(t *testing.T) {
fh1 := createFile()
fh2 := createFile()
fh3 := createFile()
s, _ := serializers.NewInfluxSerializer()
f := File{
Files: []string{fh1.Name(), fh2.Name(), fh3.Name()},
serializer: s,
}
err := f.Connect()
assert.NoError(t, err)
err = f.Write(testutil.MockMetrics())
assert.NoError(t, err)
validateFile(fh1.Name(), expExistFile, t)
validateFile(fh2.Name(), expExistFile, t)
validateFile(fh3.Name(), expExistFile, t)
err = f.Close()
assert.NoError(t, err)
}
func TestFileNewFiles(t *testing.T) {
s, _ := serializers.NewInfluxSerializer()
fh1 := tmpFile()
fh2 := tmpFile()
fh3 := tmpFile()
f := File{
Files: []string{fh1, fh2, fh3},
serializer: s,
}
err := f.Connect()
assert.NoError(t, err)
err = f.Write(testutil.MockMetrics())
assert.NoError(t, err)
validateFile(fh1, expNewFile, t)
validateFile(fh2, expNewFile, t)
validateFile(fh3, expNewFile, t)
err = f.Close()
assert.NoError(t, err)
}
func TestFileBoth(t *testing.T) {
fh1 := createFile()
fh2 := tmpFile()
s, _ := serializers.NewInfluxSerializer()
f := File{
Files: []string{fh1.Name(), fh2},
serializer: s,
}
err := f.Connect()
assert.NoError(t, err)
err = f.Write(testutil.MockMetrics())
assert.NoError(t, err)
validateFile(fh1.Name(), expExistFile, t)
validateFile(fh2, expNewFile, t)
err = f.Close()
assert.NoError(t, err)
}
func TestFileStdout(t *testing.T) {
// keep backup of the real stdout
old := os.Stdout
r, w, _ := os.Pipe()
os.Stdout = w
s, _ := serializers.NewInfluxSerializer()
f := File{
Files: []string{"stdout"},
serializer: s,
}
err := f.Connect()
assert.NoError(t, err)
err = f.Write(testutil.MockMetrics())
assert.NoError(t, err)
err = f.Close()
assert.NoError(t, err)
outC := make(chan string)
// copy the output in a separate goroutine so printing can't block indefinitely
go func() {
var buf bytes.Buffer
io.Copy(&buf, r)
outC <- buf.String()
}()
// back to normal state
w.Close()
// restoring the real stdout
os.Stdout = old
out := <-outC
assert.Equal(t, expNewFile, out)
}
func createFile() *os.File {
f, err := ioutil.TempFile("", "")
if err != nil {
panic(err)
}
f.WriteString("cpu,cpu=cpu0 value=100 1455312810012459582\n")
return f
}
func tmpFile() string {
d, err := ioutil.TempDir("", "")
if err != nil {
panic(err)
}
return d + "/" + internal.RandomString(10)
}
func validateFile(fname, expS string, t *testing.T) {
buf, err := ioutil.ReadFile(fname)
if err != nil {
panic(err)
}
assert.Equal(t, expS, string(buf))
}

View File

@@ -3,14 +3,15 @@ package graphite
 import (
 	"errors"
 	"fmt"
-	"github.com/influxdata/telegraf"
-	"github.com/influxdata/telegraf/plugins/outputs"
 	"log"
 	"math/rand"
 	"net"
-	"sort"
 	"strings"
 	"time"
+
+	"github.com/influxdata/telegraf"
+	"github.com/influxdata/telegraf/plugins/outputs"
+	"github.com/influxdata/telegraf/plugins/serializers"
 )

 type Graphite struct {
@@ -71,42 +72,22 @@ func (g *Graphite) Description() string {
 func (g *Graphite) Write(metrics []telegraf.Metric) error {
 	// Prepare data
 	var bp []string
-	for _, metric := range metrics {
-		// Get name
-		name := metric.Name()
-		// Convert UnixNano to Unix timestamps
-		timestamp := metric.UnixNano() / 1000000000
-		tag_str := buildTags(metric)
-		for field_name, value := range metric.Fields() {
-			// Convert value
-			value_str := fmt.Sprintf("%#v", value)
-			// Write graphite metric
-			var graphitePoint string
-			if name == field_name {
-				graphitePoint = fmt.Sprintf("%s.%s %s %d\n",
-					tag_str,
-					strings.Replace(name, ".", "_", -1),
-					value_str,
-					timestamp)
-			} else {
-				graphitePoint = fmt.Sprintf("%s.%s.%s %s %d\n",
-					tag_str,
-					strings.Replace(name, ".", "_", -1),
-					strings.Replace(field_name, ".", "_", -1),
-					value_str,
-					timestamp)
-			}
-			if g.Prefix != "" {
-				graphitePoint = fmt.Sprintf("%s.%s", g.Prefix, graphitePoint)
-			}
-			bp = append(bp, graphitePoint)
-		}
-	}
-	graphitePoints := strings.Join(bp, "")
+	s, err := serializers.NewGraphiteSerializer(g.Prefix)
+	if err != nil {
+		return err
+	}
+
+	for _, metric := range metrics {
+		gMetrics, err := s.Serialize(metric)
+		if err != nil {
+			log.Printf("Error serializing some metrics to graphite: %s", err.Error())
+		}
+		bp = append(bp, gMetrics...)
+	}
+	graphitePoints := strings.Join(bp, "\n") + "\n"

 	// This will get set to nil if a successful write occurs
-	err := errors.New("Could not write to any Graphite server in cluster\n")
+	err = errors.New("Could not write to any Graphite server in cluster\n")

 	// Send data to a random server
 	p := rand.Perm(len(g.conns))
@@ -128,37 +109,6 @@ func (g *Graphite) Write(metrics []telegraf.Metric) error {
 	return err
 }

-func buildTags(metric telegraf.Metric) string {
-	var keys []string
-	tags := metric.Tags()
-	for k := range tags {
-		if k == "host" {
-			continue
-		}
-		keys = append(keys, k)
-	}
-	sort.Strings(keys)
-	var tag_str string
-	if host, ok := tags["host"]; ok {
-		if len(keys) > 0 {
-			tag_str = strings.Replace(host, ".", "_", -1) + "."
-		} else {
-			tag_str = strings.Replace(host, ".", "_", -1)
-		}
-	}
-	for i, k := range keys {
-		tag_value := strings.Replace(tags[k], ".", "_", -1)
-		if i == 0 {
-			tag_str += tag_value
-		} else {
-			tag_str += "." + tag_value
-		}
-	}
-	return tag_str
-}
-
 func init() {
 	outputs.Add("graphite", func() telegraf.Output {
 		return &Graphite{}

View File

@@ -43,6 +43,8 @@ func TestGraphiteOK(t *testing.T) {
 	// Start TCP server
 	wg.Add(1)
 	go TCPServer(t, &wg)
+	// Give the fake graphite TCP server some time to start:
+	time.Sleep(time.Millisecond * 100)

 	// Init plugin
 	g := Graphite{
@@ -95,32 +97,3 @@ func TCPServer(t *testing.T, wg *sync.WaitGroup) {
 	assert.Equal(t, "my.prefix.192_168_0_1.my_measurement.value 3.14 1289430000", data3)
 	conn.Close()
 }
-
-func TestGraphiteTags(t *testing.T) {
-	m1, _ := telegraf.NewMetric(
-		"mymeasurement",
-		map[string]string{"host": "192.168.0.1"},
-		map[string]interface{}{"value": float64(3.14)},
-		time.Date(2010, time.November, 10, 23, 0, 0, 0, time.UTC),
-	)
-	m2, _ := telegraf.NewMetric(
-		"mymeasurement",
-		map[string]string{"host": "192.168.0.1", "afoo": "first", "bfoo": "second"},
-		map[string]interface{}{"value": float64(3.14)},
-		time.Date(2010, time.November, 10, 23, 0, 0, 0, time.UTC),
-	)
-	m3, _ := telegraf.NewMetric(
-		"mymeasurement",
-		map[string]string{"afoo": "first", "bfoo": "second"},
-		map[string]interface{}{"value": float64(3.14)},
-		time.Date(2010, time.November, 10, 23, 0, 0, 0, time.UTC),
-	)
-
-	tags1 := buildTags(m1)
-	tags2 := buildTags(m2)
-	tags3 := buildTags(m3)
-
-	assert.Equal(t, "192_168_0_1", tags1)
-	assert.Equal(t, "192_168_0_1.first.second", tags2)
-	assert.Equal(t, "first.second", tags3)
-}

View File

@@ -28,6 +28,15 @@ type InfluxDB struct {
 	Timeout    internal.Duration
 	UDPPayload int `toml:"udp_payload"`

+	// Path to CA file
+	SSLCA string `toml:"ssl_ca"`
+	// Path to host cert file
+	SSLCert string `toml:"ssl_cert"`
+	// Path to cert key file
+	SSLKey string `toml:"ssl_key"`
+	// Use SSL but skip chain & host verification
+	InsecureSkipVerify bool
+
 	conns []client.Client
 }

@@ -52,6 +61,13 @@ var sampleConfig = `
 	# user_agent = "telegraf"
 	### Set UDP payload size, defaults to InfluxDB UDP Client default (512 bytes)
 	# udp_payload = 512
+
+	### Optional SSL Config
+	# ssl_ca = "/etc/telegraf/ca.pem"
+	# ssl_cert = "/etc/telegraf/cert.pem"
+	# ssl_key = "/etc/telegraf/key.pem"
+	### Use SSL but skip chain & host verification
+	# insecure_skip_verify = false
 `

 func (i *InfluxDB) Connect() error {
@@ -66,6 +82,12 @@ func (i *InfluxDB) Connect() error {
 		urls = append(urls, i.URL)
 	}

+	tlsCfg, err := internal.GetTLSConfig(
+		i.SSLCert, i.SSLKey, i.SSLCA, i.InsecureSkipVerify)
+	if err != nil {
+		return err
+	}
+
 	var conns []client.Client
 	for _, u := range urls {
 		switch {
@@ -94,6 +116,7 @@ func (i *InfluxDB) Connect() error {
 			Password:  i.Password,
 			UserAgent: i.UserAgent,
 			Timeout:   i.Timeout.Duration,
+			TLSConfig: tlsCfg,
 		})
 		if err != nil {
 			return err

View File

@@ -2,12 +2,12 @@ package kafka
 import (
 	"crypto/tls"
-	"errors"
 	"fmt"

 	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/internal"
 	"github.com/influxdata/telegraf/plugins/outputs"
+	"github.com/influxdata/telegraf/plugins/serializers"

 	"github.com/Shopify/sarama"
 )
@@ -40,6 +40,8 @@ type Kafka struct {
 	tlsConfig tls.Config
 	producer  sarama.SyncProducer
+
+	serializer serializers.Serializer
 }

 var sampleConfig = `
@@ -57,8 +59,18 @@ var sampleConfig = `
 	# ssl_key = "/etc/telegraf/key.pem"
 	### Use SSL but skip chain & host verification
 	# insecure_skip_verify = false
+
+	### Data format to output. This can be "influx" or "graphite"
+	### Each data format has its own unique set of configuration options, read
+	### more about them here:
+	### https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS_OUTPUT.md
+	data_format = "influx"
 `

+func (k *Kafka) SetSerializer(serializer serializers.Serializer) {
+	k.serializer = serializer
+}
+
 func (k *Kafka) Connect() error {
 	config := sarama.NewConfig()
 	// Wait for all in-sync replicas to ack the message
@@ -109,21 +121,27 @@ func (k *Kafka) Write(metrics []telegraf.Metric) error {
 		return nil
 	}

-	for _, p := range metrics {
-		value := p.String()
-		m := &sarama.ProducerMessage{
-			Topic: k.Topic,
-			Value: sarama.StringEncoder(value),
-		}
-		if h, ok := p.Tags()[k.RoutingTag]; ok {
-			m.Key = sarama.StringEncoder(h)
-		}
-		_, _, err := k.producer.SendMessage(m)
+	for _, metric := range metrics {
+		values, err := k.serializer.Serialize(metric)
 		if err != nil {
-			return errors.New(fmt.Sprintf("FAILED to send kafka message: %s\n",
-				err))
+			return err
+		}
+
+		var pubErr error
+		for _, value := range values {
+			m := &sarama.ProducerMessage{
+				Topic: k.Topic,
+				Value: sarama.StringEncoder(value),
+			}
+			if h, ok := metric.Tags()[k.RoutingTag]; ok {
+				m.Key = sarama.StringEncoder(h)
+			}
+			_, _, pubErr = k.producer.SendMessage(m)
+		}
+
+		if pubErr != nil {
+			return fmt.Errorf("FAILED to send kafka message: %s\n", pubErr)
 		}
 	}
 	return nil

View File

@@ -3,6 +3,7 @@ package kafka
 import (
 	"testing"

+	"github.com/influxdata/telegraf/plugins/serializers"
 	"github.com/influxdata/telegraf/testutil"
 	"github.com/stretchr/testify/require"
 )
@@ -13,9 +14,11 @@ func TestConnectAndWrite(t *testing.T) {
 	}

 	brokers := []string{testutil.GetLocalHost() + ":9092"}
+	s, _ := serializers.NewInfluxSerializer()
 	k := &Kafka{
 		Brokers:    brokers,
 		Topic:      "Test",
+		serializer: s,
 	}

 	// Verify that we can connect to the Kafka broker

View File

@@ -5,38 +5,14 @@ import (
 	"strings"
 	"sync"

-	paho "git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git"
 	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/internal"
 	"github.com/influxdata/telegraf/plugins/outputs"
+	"github.com/influxdata/telegraf/plugins/serializers"
+
+	paho "git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git"
 )

 const MaxRetryCount = 3
 const ClientIdPrefix = "telegraf"

-type MQTT struct {
-	Servers     []string `toml:"servers"`
-	Username    string
-	Password    string
-	Database    string
-	Timeout     internal.Duration
-	TopicPrefix string
-
-	// Path to CA file
-	SSLCA string `toml:"ssl_ca"`
-	// Path to host cert file
-	SSLCert string `toml:"ssl_cert"`
-	// Path to cert key file
-	SSLKey string `toml:"ssl_key"`
-	// Use SSL but skip chain & host verification
-	InsecureSkipVerify bool
-
-	client *paho.Client
-	opts   *paho.ClientOptions
-
-	sync.Mutex
-}
-
 var sampleConfig = `
 	servers = ["localhost:1883"] # required.
@@ -55,12 +31,47 @@ var sampleConfig = `
 	# ssl_key = "/etc/telegraf/key.pem"
 	### Use SSL but skip chain & host verification
 	# insecure_skip_verify = false
+
+	### Data format to output. This can be "influx" or "graphite"
+	### Each data format has its own unique set of configuration options, read
+	### more about them here:
+	### https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS_OUTPUT.md
+	data_format = "influx"
 `

+type MQTT struct {
+	Servers     []string `toml:"servers"`
+	Username    string
+	Password    string
+	Database    string
+	Timeout     internal.Duration
+	TopicPrefix string
+	QoS         int `toml:"qos"`
+
+	// Path to CA file
+	SSLCA string `toml:"ssl_ca"`
+	// Path to host cert file
+	SSLCert string `toml:"ssl_cert"`
+	// Path to cert key file
+	SSLKey string `toml:"ssl_key"`
+	// Use SSL but skip chain & host verification
+	InsecureSkipVerify bool
+
+	client *paho.Client
+	opts   *paho.ClientOptions
+
+	serializer serializers.Serializer
+
+	sync.Mutex
+}
+
 func (m *MQTT) Connect() error {
 	var err error
 	m.Lock()
 	defer m.Unlock()
+	if m.QoS > 2 || m.QoS < 0 {
+		return fmt.Errorf("MQTT Output, invalid QoS value: %d", m.QoS)
+	}

 	m.opts, err = m.createOpts()
 	if err != nil {
@@ -75,6 +86,10 @@ func (m *MQTT) Connect() error {
 	return nil
 }

+func (m *MQTT) SetSerializer(serializer serializers.Serializer) {
+	m.serializer = serializer
+}
+
 func (m *MQTT) Close() error {
 	if m.client.IsConnected() {
 		m.client.Disconnect(20)
@@ -101,7 +116,7 @@ func (m *MQTT) Write(metrics []telegraf.Metric) error {
 		hostname = ""
 	}

-	for _, p := range metrics {
+	for _, metric := range metrics {
 		var t []string
 		if m.TopicPrefix != "" {
 			t = append(t, m.TopicPrefix)
@@ -110,13 +125,20 @@ func (m *MQTT) Write(metrics []telegraf.Metric) error {
 			t = append(t, hostname)
 		}

-		t = append(t, p.Name())
+		t = append(t, metric.Name())
 		topic := strings.Join(t, "/")

-		value := p.String()
-		err := m.publish(topic, value)
+		values, err := m.serializer.Serialize(metric)
 		if err != nil {
-			return fmt.Errorf("Could not write to MQTT server, %s", err)
+			return fmt.Errorf("MQTT Could not serialize metric: %s",
+				metric.String())
+		}
+
+		for _, value := range values {
+			err = m.publish(topic, value)
+			if err != nil {
+				return fmt.Errorf("Could not write to MQTT server, %s", err)
+			}
 		}
 	}

@@ -124,7 +146,7 @@ func (m *MQTT) Write(metrics []telegraf.Metric) error {
 }

 func (m *MQTT) publish(topic, body string) error {
-	token := m.client.Publish(topic, 0, false, body)
+	token := m.client.Publish(topic, byte(m.QoS), false, body)
 	token.Wait()
 	if token.Error() != nil {
 		return token.Error()

View File

@@ -3,7 +3,9 @@ package mqtt
 import (
 	"testing"

+	"github.com/influxdata/telegraf/plugins/serializers"
 	"github.com/influxdata/telegraf/testutil"
+
 	"github.com/stretchr/testify/require"
 )
@@ -13,8 +15,10 @@ func TestConnectAndWrite(t *testing.T) {
 	}

 	var url = testutil.GetLocalHost() + ":1883"
+	s, _ := serializers.NewInfluxSerializer()
 	m := &MQTT{
 		Servers:    []string{url},
+		serializer: s,
 	}

 	// Verify that we can connect to the MQTT broker

View File

@@ -2,15 +2,20 @@ package nsq
 import (
 	"fmt"
+
+	"github.com/nsqio/go-nsq"

 	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/plugins/outputs"
-	"github.com/nsqio/go-nsq"
+	"github.com/influxdata/telegraf/plugins/serializers"
 )

 type NSQ struct {
 	Server   string
 	Topic    string
 	producer *nsq.Producer
+
+	serializer serializers.Serializer
 }

 var sampleConfig = `
@@ -18,8 +23,18 @@ var sampleConfig = `
 	server = "localhost:4150"
 	### NSQ topic for producer messages
 	topic = "telegraf"
+
+	### Data format to output. This can be "influx" or "graphite"
+	### Each data format has its own unique set of configuration options, read
+	### more about them here:
+	### https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS_OUTPUT.md
+	data_format = "influx"
 `

+func (n *NSQ) SetSerializer(serializer serializers.Serializer) {
+	n.serializer = serializer
+}
+
 func (n *NSQ) Connect() error {
 	config := nsq.NewConfig()
 	producer, err := nsq.NewProducer(n.Server, config)
@@ -50,12 +65,21 @@ func (n *NSQ) Write(metrics []telegraf.Metric) error {
 		return nil
 	}

-	for _, p := range metrics {
-		value := p.String()
-		err := n.producer.Publish(n.Topic, []byte(value))
+	for _, metric := range metrics {
+		values, err := n.serializer.Serialize(metric)
 		if err != nil {
+			return err
+		}
+
+		var pubErr error
+		for _, value := range values {
+			err = n.producer.Publish(n.Topic, []byte(value))
+			if err != nil {
+				pubErr = err
+			}
+		}
+
+		if pubErr != nil {
+			return fmt.Errorf("FAILED to send NSQD message: %s", pubErr)
 		}
 	}

View File

@@ -3,6 +3,7 @@ package nsq
 import (
 	"testing"

+	"github.com/influxdata/telegraf/plugins/serializers"
 	"github.com/influxdata/telegraf/testutil"
 	"github.com/stretchr/testify/require"
 )
@@ -13,9 +14,11 @@ func TestConnectAndWrite(t *testing.T) {
 	}

 	server := []string{testutil.GetLocalHost() + ":4150"}
+	s, _ := serializers.NewInfluxSerializer()
 	n := &NSQ{
 		Server:     server[0],
 		Topic:      "telegraf",
+		serializer: s,
 	}

 	// Verify that we can connect to the NSQ daemon

View File

@@ -2,6 +2,7 @@ package prometheus_client

 import (
 	"testing"
+	"time"

 	"github.com/stretchr/testify/require"
@@ -18,6 +19,7 @@ func TestPrometheusWritePointEmptyTag(t *testing.T) {
 	}
 	pTesting = &PrometheusClient{Listen: "localhost:9127"}
 	err := pTesting.Start()
+	time.Sleep(time.Millisecond * 200)
 	require.NoError(t, err)
 	defer pTesting.Stop()

View File

@@ -29,6 +29,10 @@ type GraphiteParser struct {
 	matcher *matcher
 }

+func (p *GraphiteParser) SetDefaultTags(tags map[string]string) {
+	p.DefaultTags = tags
+}
+
 func NewGraphiteParser(
 	separator string,
 	templates []string,
@@ -104,13 +108,14 @@ func (p *GraphiteParser) Parse(buf []byte) ([]telegraf.Metric, error) {
 	metrics := make([]telegraf.Metric, 0)

+	var errStr string
 	buffer := bytes.NewBuffer(buf)
 	reader := bufio.NewReader(buffer)
 	for {
 		// Read up to the next newline.
 		buf, err := reader.ReadBytes('\n')
 		if err == io.EOF {
-			return metrics, nil
+			break
 		}
 		if err != nil && err != io.EOF {
 			return metrics, err
@@ -118,10 +123,19 @@ func (p *GraphiteParser) Parse(buf []byte) ([]telegraf.Metric, error) {
 		// Trim the buffer, even though there should be no padding
 		line := strings.TrimSpace(string(buf))
-		if metric, err := p.ParseLine(line); err == nil {
+		metric, err := p.ParseLine(line)
+		if err == nil {
 			metrics = append(metrics, metric)
+		} else {
+			errStr += err.Error() + "\n"
 		}
 	}
+
+	if errStr != "" {
+		return metrics, fmt.Errorf(errStr)
+	}
+	return metrics, nil
 }

 // Parse performs Graphite parsing of a single line.

View File

@@ -15,7 +15,7 @@ type InfluxParser struct {
 	DefaultTags map[string]string
 }

-// ParseMetrics returns a slice of Metrics from a text representation of a
+// Parse returns a slice of Metrics from a text representation of a
 // metric (in line-protocol format)
 // with each metric separated by newlines. If any metrics fail to parse,
 // a non-nil error will be returned in addition to the metrics that parsed
@@ -55,3 +55,7 @@ func (p *InfluxParser) ParseLine(line string) (telegraf.Metric, error) {
 	return metrics[0], nil
 }
+
+func (p *InfluxParser) SetDefaultTags(tags map[string]string) {
+	p.DefaultTags = tags
+}

View File

@@ -67,6 +67,10 @@ func (p *JSONParser) ParseLine(line string) (telegraf.Metric, error) {
 	return metrics[0], nil
 }

+func (p *JSONParser) SetDefaultTags(tags map[string]string) {
+	p.DefaultTags = tags
+}
+
 type JSONFlattener struct {
 	Fields map[string]interface{}
 }

View File

@@ -28,6 +28,11 @@ type Parser interface {
 	// ie, "cpu.usage.idle 90"
 	// and parses it into a telegraf metric.
 	ParseLine(line string) (telegraf.Metric, error)
+
+	// SetDefaultTags tells the parser to add all of the given tags
+	// to each parsed metric.
+	// NOTE: do _not_ modify the map after you've passed it here!!
+	SetDefaultTags(tags map[string]string)
 }

 // Config is a struct that covers the data types needed for all parser types,
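
A short sketch of the new SetDefaultTags contract. Constructing InfluxParser directly, and the assumption that Parse merges the default tags into each metric, are inferred from the struct and doc comments above rather than spelled out in this commit:

```go
package main

import (
	"fmt"

	"github.com/influxdata/telegraf/plugins/parsers/influx"
)

func main() {
	// Default tags are merged into every parsed metric; per the
	// interface comment, the map must not be mutated afterwards.
	p := &influx.InfluxParser{}
	p.SetDefaultTags(map[string]string{"dc": "us-east-1"})

	metrics, err := p.Parse([]byte("cpu,cpu=cpu0 usage_idle=91.5\n"))
	if err != nil {
		panic(err)
	}
	for _, m := range metrics {
		// Each metric should now carry the dc=us-east-1 tag.
		fmt.Println(m.String())
	}
}
```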

View File

@ -0,0 +1,79 @@
package graphite
import (
"fmt"
"sort"
"strings"
"github.com/influxdata/telegraf"
)
type GraphiteSerializer struct {
Prefix string
}
func (s *GraphiteSerializer) Serialize(metric telegraf.Metric) ([]string, error) {
out := []string{}
// Get name
name := metric.Name()
// Convert UnixNano to Unix timestamps
timestamp := metric.UnixNano() / 1000000000
tag_str := buildTags(metric)
for field_name, value := range metric.Fields() {
// Convert value
value_str := fmt.Sprintf("%#v", value)
// Write graphite metric
var graphitePoint string
if name == field_name {
graphitePoint = fmt.Sprintf("%s.%s %s %d",
tag_str,
strings.Replace(name, ".", "_", -1),
value_str,
timestamp)
} else {
graphitePoint = fmt.Sprintf("%s.%s.%s %s %d",
tag_str,
strings.Replace(name, ".", "_", -1),
strings.Replace(field_name, ".", "_", -1),
value_str,
timestamp)
}
if s.Prefix != "" {
graphitePoint = fmt.Sprintf("%s.%s", s.Prefix, graphitePoint)
}
out = append(out, graphitePoint)
}
return out, nil
}
func buildTags(metric telegraf.Metric) string {
var keys []string
tags := metric.Tags()
for k := range tags {
if k == "host" {
continue
}
keys = append(keys, k)
}
sort.Strings(keys)
var tag_str string
if host, ok := tags["host"]; ok {
if len(keys) > 0 {
tag_str = strings.Replace(host, ".", "_", -1) + "."
} else {
tag_str = strings.Replace(host, ".", "_", -1)
}
}
for i, k := range keys {
tag_value := strings.Replace(tags[k], ".", "_", -1)
if i == 0 {
tag_str += tag_value
} else {
tag_str += "." + tag_value
}
}
return tag_str
}

View File

@ -0,0 +1,121 @@
package graphite
import (
"fmt"
"sort"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/influxdata/telegraf"
)
func TestGraphiteTags(t *testing.T) {
m1, _ := telegraf.NewMetric(
"mymeasurement",
map[string]string{"host": "192.168.0.1"},
map[string]interface{}{"value": float64(3.14)},
time.Date(2010, time.November, 10, 23, 0, 0, 0, time.UTC),
)
m2, _ := telegraf.NewMetric(
"mymeasurement",
map[string]string{"host": "192.168.0.1", "afoo": "first", "bfoo": "second"},
map[string]interface{}{"value": float64(3.14)},
time.Date(2010, time.November, 10, 23, 0, 0, 0, time.UTC),
)
m3, _ := telegraf.NewMetric(
"mymeasurement",
map[string]string{"afoo": "first", "bfoo": "second"},
map[string]interface{}{"value": float64(3.14)},
time.Date(2010, time.November, 10, 23, 0, 0, 0, time.UTC),
)
tags1 := buildTags(m1)
tags2 := buildTags(m2)
tags3 := buildTags(m3)
assert.Equal(t, "192_168_0_1", tags1)
assert.Equal(t, "192_168_0_1.first.second", tags2)
assert.Equal(t, "first.second", tags3)
}
func TestSerializeMetricNoHost(t *testing.T) {
now := time.Now()
tags := map[string]string{
"cpu": "cpu0",
"datacenter": "us-west-2",
}
fields := map[string]interface{}{
"usage_idle": float64(91.5),
"usage_busy": float64(8.5),
}
m, err := telegraf.NewMetric("cpu", tags, fields, now)
assert.NoError(t, err)
s := GraphiteSerializer{}
mS, err := s.Serialize(m)
assert.NoError(t, err)
expS := []string{
fmt.Sprintf("cpu0.us-west-2.cpu.usage_idle 91.5 %d", now.Unix()),
fmt.Sprintf("cpu0.us-west-2.cpu.usage_busy 8.5 %d", now.Unix()),
}
sort.Strings(mS)
sort.Strings(expS)
assert.Equal(t, expS, mS)
}
func TestSerializeMetricHost(t *testing.T) {
now := time.Now()
tags := map[string]string{
"host": "localhost",
"cpu": "cpu0",
"datacenter": "us-west-2",
}
fields := map[string]interface{}{
"usage_idle": float64(91.5),
"usage_busy": float64(8.5),
}
m, err := telegraf.NewMetric("cpu", tags, fields, now)
assert.NoError(t, err)
s := GraphiteSerializer{}
mS, err := s.Serialize(m)
assert.NoError(t, err)
expS := []string{
fmt.Sprintf("localhost.cpu0.us-west-2.cpu.usage_idle 91.5 %d", now.Unix()),
fmt.Sprintf("localhost.cpu0.us-west-2.cpu.usage_busy 8.5 %d", now.Unix()),
}
sort.Strings(mS)
sort.Strings(expS)
assert.Equal(t, expS, mS)
}
func TestSerializeMetricPrefix(t *testing.T) {
now := time.Now()
tags := map[string]string{
"host": "localhost",
"cpu": "cpu0",
"datacenter": "us-west-2",
}
fields := map[string]interface{}{
"usage_idle": float64(91.5),
"usage_busy": float64(8.5),
}
m, err := telegraf.NewMetric("cpu", tags, fields, now)
assert.NoError(t, err)
s := GraphiteSerializer{Prefix: "prefix"}
mS, err := s.Serialize(m)
assert.NoError(t, err)
expS := []string{
fmt.Sprintf("prefix.localhost.cpu0.us-west-2.cpu.usage_idle 91.5 %d", now.Unix()),
fmt.Sprintf("prefix.localhost.cpu0.us-west-2.cpu.usage_busy 8.5 %d", now.Unix()),
}
sort.Strings(mS)
sort.Strings(expS)
assert.Equal(t, expS, mS)
}

View File

@ -0,0 +1,12 @@
package influx
import (
"github.com/influxdata/telegraf"
)
type InfluxSerializer struct {
}
func (s *InfluxSerializer) Serialize(metric telegraf.Metric) ([]string, error) {
return []string{metric.String()}, nil
}

View File

@ -0,0 +1,68 @@
package influx
import (
"fmt"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/influxdata/telegraf"
)
func TestSerializeMetricFloat(t *testing.T) {
now := time.Now()
tags := map[string]string{
"cpu": "cpu0",
}
fields := map[string]interface{}{
"usage_idle": float64(91.5),
}
m, err := telegraf.NewMetric("cpu", tags, fields, now)
assert.NoError(t, err)
s := InfluxSerializer{}
mS, err := s.Serialize(m)
assert.NoError(t, err)
expS := []string{fmt.Sprintf("cpu,cpu=cpu0 usage_idle=91.5 %d", now.UnixNano())}
assert.Equal(t, expS, mS)
}
func TestSerializeMetricInt(t *testing.T) {
now := time.Now()
tags := map[string]string{
"cpu": "cpu0",
}
fields := map[string]interface{}{
"usage_idle": int64(90),
}
m, err := telegraf.NewMetric("cpu", tags, fields, now)
assert.NoError(t, err)
s := InfluxSerializer{}
mS, err := s.Serialize(m)
assert.NoError(t, err)
expS := []string{fmt.Sprintf("cpu,cpu=cpu0 usage_idle=90i %d", now.UnixNano())}
assert.Equal(t, expS, mS)
}
func TestSerializeMetricString(t *testing.T) {
now := time.Now()
tags := map[string]string{
"cpu": "cpu0",
}
fields := map[string]interface{}{
"usage_idle": "foobar",
}
m, err := telegraf.NewMetric("cpu", tags, fields, now)
assert.NoError(t, err)
s := InfluxSerializer{}
mS, err := s.Serialize(m)
assert.NoError(t, err)
expS := []string{fmt.Sprintf("cpu,cpu=cpu0 usage_idle=\"foobar\" %d", now.UnixNano())}
assert.Equal(t, expS, mS)
}

View File

@ -0,0 +1,55 @@
package serializers
import (
"fmt"

"github.com/influxdata/telegraf"

"github.com/influxdata/telegraf/plugins/serializers/graphite"
"github.com/influxdata/telegraf/plugins/serializers/influx"
)
// SerializerOutput is an interface for output plugins that are able to
// serialize telegraf metrics into arbitrary data formats.
type SerializerOutput interface {
// SetSerializer sets the serializer function for the interface.
SetSerializer(serializer Serializer)
}
// Serializer is an interface defining functions that a serializer plugin must
// satisfy.
type Serializer interface {
// Serialize takes a single telegraf metric and turns it into one or more
// serialized strings.
Serialize(metric telegraf.Metric) ([]string, error)
}
// Config is a struct that covers the data types needed for all serializer types,
// and can be used to instantiate _any_ of the serializers.
type Config struct {
// Dataformat can be one of: influx, graphite
DataFormat string
// Prefix to add to all measurements, only supports Graphite
Prefix string
}
// NewSerializer returns a Serializer interface based on the given config.
func NewSerializer(config *Config) (Serializer, error) {
var err error
var serializer Serializer
switch config.DataFormat {
case "influx":
serializer, err = NewInfluxSerializer()
case "graphite":
serializer, err = NewGraphiteSerializer(config.Prefix)
default:
// Reject unknown formats instead of silently returning a nil serializer
err = fmt.Errorf("invalid data format: %s", config.DataFormat)
}
return serializer, err
}
func NewInfluxSerializer() (Serializer, error) {
return &influx.InfluxSerializer{}, nil
}
func NewGraphiteSerializer(prefix string) (Serializer, error) {
return &graphite.GraphiteSerializer{
Prefix: prefix,
}, nil
}
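
For completeness, a hedged sketch of how an output might obtain and use a serializer through this registry; the config values and the printed graphite line are illustrative assumptions:

```go
package main

import (
	"fmt"
	"time"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/serializers"
)

func main() {
	// The agent builds this Config from the output's data_format and
	// prefix settings; here it is filled in by hand.
	s, err := serializers.NewSerializer(&serializers.Config{
		DataFormat: "graphite",
		Prefix:     "telegraf",
	})
	if err != nil {
		panic(err)
	}

	m, _ := telegraf.NewMetric(
		"cpu",
		map[string]string{"host": "localhost"},
		map[string]interface{}{"usage_idle": 91.5},
		time.Now(),
	)
	lines, _ := s.Serialize(m)
	for _, line := range lines {
		// e.g. telegraf.localhost.cpu.usage_idle 91.5 <timestamp>
		fmt.Println(line)
	}
}
```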