Implementing generic parser plugins and documentation

This constitutes a large change in how we will parse different data
formats going forward (for the plugins that support it).

This is working off @henrypfhu's changes.
Cameron Sparr 2016-02-05 17:36:35 -07:00
parent 1449c8b887
commit e619493ece
32 changed files with 1971 additions and 522 deletions


@@ -129,6 +129,52 @@ func init() {
}
```
## Input Plugins Accepting Arbitrary Data Formats
Some input plugins (such as
[exec](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/exec))
accept arbitrary input data formats. An overview of these data formats can
be found
[here](https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS_INPUT.md).
In order to enable this, you must define a `SetParser(parser parsers.Parser)`
method on the plugin object (see the exec plugin for an example), as well as
a `parser` field on the object.
You can then utilize the parser internally in your plugin, parsing data as you
see fit. Telegraf's configuration layer will take care of instantiating and
configuring the `Parser` object.
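As a minimal sketch (the plugin name and the gathered bytes below are illustrative, not part of the codebase), a plugin accepting arbitrary data formats looks roughly like this:
```go
package myplugin

import (
	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/parsers"
)

type MyPlugin struct {
	parser parsers.Parser
}

// SetParser is called by Telegraf's configuration layer with a parser
// built from the plugin's data_format options.
func (p *MyPlugin) SetParser(parser parsers.Parser) {
	p.parser = parser
}

func (p *MyPlugin) Gather(acc telegraf.Accumulator) error {
	raw := []byte("cpu usage_idle=99\n") // stand-in for whatever the plugin collects
	metrics, err := p.parser.Parse(raw)
	if err != nil {
		return err
	}
	for _, metric := range metrics {
		acc.AddFields(metric.Name(), metric.Fields(), metric.Tags(), metric.Time())
	}
	return nil
}
```
(A real plugin would also implement `Description()` and `SampleConfig()` and register itself with the inputs registry.)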
You should also add the following to your `SampleConfig()` return:
```toml
### Data format to consume. This can be "json", "influx" or "graphite"
### Each data format has its own unique set of configuration options, read
### more about them here:
### https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS_INPUT.md
data_format = "influx"
```
Below is the `Parser` interface.
```go
// Parser is an interface defining functions that a parser plugin must satisfy.
type Parser interface {
// Parse takes a byte buffer separated by newlines
// ie, `cpu.usage.idle 90\ncpu.usage.busy 10`
// and parses it into telegraf metrics
Parse(buf []byte) ([]telegraf.Metric, error)
// ParseLine takes a single string metric
// ie, "cpu.usage.idle 90"
// and parses it into a telegraf metric.
ParseLine(line string) (telegraf.Metric, error)
}
```
You can view the parser registry code
[here](https://github.com/influxdata/telegraf/blob/henrypfhu-master/plugins/parsers/registry.go).
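To sketch what satisfying the interface looks like, here is a toy parser that stores each input line as a field. The `ParseLine`-delegates-to-`Parse` pattern mirrors the influx and JSON parsers in this commit, but the "echo" format itself is made up:
```go
package echo

import (
	"fmt"
	"strings"
	"time"

	"github.com/influxdata/telegraf"
)

// EchoParser is a toy parser that turns each input line into a metric
// with a single "line" field.
type EchoParser struct{}

func (p *EchoParser) Parse(buf []byte) ([]telegraf.Metric, error) {
	var metrics []telegraf.Metric
	for _, line := range strings.Split(strings.TrimSpace(string(buf)), "\n") {
		m, err := telegraf.NewMetric("echo", nil,
			map[string]interface{}{"line": line}, time.Now().UTC())
		if err != nil {
			return nil, err
		}
		metrics = append(metrics, m)
	}
	return metrics, nil
}

// ParseLine delegates to Parse and returns the single resulting metric.
func (p *EchoParser) ParseLine(line string) (telegraf.Metric, error) {
	metrics, err := p.Parse([]byte(line + "\n"))
	if err != nil {
		return nil, err
	}
	if len(metrics) < 1 {
		return nil, fmt.Errorf("can not parse the line: %s", line)
	}
	return metrics[0], nil
}
```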
## Service Input Plugins
This section is for developers who want to create new "service" collection

DATA_FORMATS_INPUT.md (new file, 274 lines)

@@ -0,0 +1,274 @@
# Telegraf Input Data Formats
Telegraf metrics, like InfluxDB
[points](https://docs.influxdata.com/influxdb/v0.10/write_protocols/line/),
are a combination of four basic parts:
1. Measurement Name
1. Tags
1. Fields
1. Timestamp
These four parts are easily defined when using InfluxDB line-protocol as a
data format. But there are other data formats that users may want to use which
require more advanced configuration to create usable Telegraf metrics.
Plugins such as `exec` and `kafka_consumer` parse textual data. Until now,
these plugins were statically configured to parse just a single data format:
`exec` supported (mostly) only JSON, and `kafka_consumer` only data in
InfluxDB line-protocol.
But now we are normalizing the parsing of various data formats across all
plugins that can support it. You will be able to identify a plugin that supports
different data formats by the presence of a `data_format` config option, for
example, in the exec plugin:
```toml
[[inputs.exec]]
### Commands array
commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]
### measurement name suffix (for separating different commands)
name_suffix = "_mycollector"
### Data format to consume. This can be "json", "influx" or "graphite"
### Each data format has its own unique set of configuration options, read
### more about them here:
### https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS_INPUT.md
data_format = "json"
### Additional configuration options go here
```
Each data_format has an additional set of configuration options available,
which are described below.
## Influx:
There are no additional configuration options for InfluxDB line-protocol. The
metrics are parsed directly into Telegraf metrics.
#### Influx Configuration:
```toml
[[inputs.exec]]
### Commands array
commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]
### measurement name suffix (for separating different commands)
name_suffix = "_mycollector"
### Data format to consume. This can be "json", "influx" or "graphite"
### Each data format has its own unique set of configuration options, read
### more about them here:
### https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS_INPUT.md
data_format = "influx"
```
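In Go, this corresponds to the zero-configuration constructor used throughout this commit's tests; a minimal sketch:
```go
package main

import (
	"fmt"

	"github.com/influxdata/telegraf/plugins/parsers"
)

func main() {
	// Line protocol already carries measurement, tags, fields, and
	// timestamp, so the parser needs no options.
	parser, err := parsers.NewInfluxParser()
	if err != nil {
		panic(err)
	}
	metrics, err := parser.Parse([]byte("cpu,host=foo usage_idle=99 1454105876344540456\n"))
	if err != nil {
		panic(err)
	}
	for _, m := range metrics {
		fmt.Println(m.Name(), m.Tags(), m.Fields())
	}
}
```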
## JSON:
The JSON data format flattens JSON into metric _fields_. For example, this JSON:
```json
{
"a": 5,
"b": {
"c": 6
}
}
```
Would get translated into _fields_ of a measurement:
```
myjsonmetric a=5,b_c=6
```
The _measurement name_ is usually the name of the plugin,
but can be overridden using the `name_override` config option.
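As a rough sketch using the JSON parser constructor this commit adds (the measurement name below is arbitrary):
```go
package main

import (
	"fmt"

	"github.com/influxdata/telegraf/plugins/parsers"
)

func main() {
	// Arguments: measurement name, tag keys, default tags.
	parser, err := parsers.NewJSONParser("myjsonmetric", []string{}, nil)
	if err != nil {
		panic(err)
	}
	metrics, err := parser.Parse([]byte(`{"a": 5, "b": {"c": 6}}`))
	if err != nil {
		panic(err)
	}
	for _, m := range metrics {
		fmt.Println(m.Name(), m.Fields()) // myjsonmetric map[a:5 b_c:6]
	}
}
```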
#### JSON Configuration:
The JSON data format supports specifying "tag keys". If specified, these keys
will be searched for at the root level of the JSON blob, and if present,
they will be applied as tags to the Telegraf metrics.
For example, if you had this configuration:
```toml
[[inputs.exec]]
### Commands array
commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]
### measurement name suffix (for separating different commands)
name_suffix = "_mycollector"
### Data format to consume. This can be "json", "influx" or "graphite"
### Each data format has its own unique set of configuration options, read
### more about them here:
### https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS_INPUT.md
data_format = "json"
### List of tag names to extract from top-level of JSON server response
tag_keys = [
"my_tag_1",
"my_tag_2"
]
```
with this JSON output from a command:
```json
{
"a": 5,
"b": {
"c": 6
},
"my_tag_1": "foo"
}
```
Your Telegraf metrics would get tagged with "my_tag_1":
```
exec_mycollector,my_tag_1=foo a=5,b_c=6
```
## Graphite:
The Graphite data format translates graphite _dot_ buckets directly into
Telegraf measurement names, with a single value field, and without any tags. For
more advanced options, Telegraf supports specifying "templates" to translate
graphite buckets into Telegraf metrics.
#### Separator:
You can specify a separator to use for the parsed metrics.
By default, it will leave the metrics with a "." separator.
Setting `separator = "_"` will translate:
```
cpu.usage.idle 99
=> cpu_usage_idle value=99
```
#### Measurement/Tag Templates:
The most basic template is to specify a single transformation to apply to all
incoming metrics. _measurement_ is a special keyword that tells Telegraf which
parts of the graphite bucket to combine into the measurement name. It can have a
trailing `*` to indicate that the remainder of the metric should be used.
Other words are considered tag keys. So the following template:
```toml
templates = [
"region.measurement*"
]
```
would result in the following Graphite -> Telegraf transformation.
```
us-west.cpu.load 100
=> cpu.load,region=us-west value=100
```
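The same transformation can be sketched in Go with the graphite parser constructor this commit adds (separator, template, and input line are taken from the example above):
```go
package main

import (
	"fmt"

	"github.com/influxdata/telegraf/plugins/parsers"
)

func main() {
	// Arguments: separator, templates, default tags.
	parser, err := parsers.NewGraphiteParser(".", []string{"region.measurement*"}, nil)
	if err != nil {
		panic(err)
	}
	metrics, err := parser.Parse([]byte("us-west.cpu.load 100 1454780029\n"))
	if err != nil {
		panic(err)
	}
	for _, m := range metrics {
		fmt.Println(m.Name(), m.Tags(), m.Fields())
		// cpu.load map[region:us-west] map[value:100]
	}
}
```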
#### Field Templates:
There is also a _field_ keyword, which can only be specified once.
The field keyword tells Telegraf to give the metric that field name.
So the following template:
```toml
templates = [
"measurement.measurement.field.region"
]
```
would result in the following Graphite -> Telegraf transformation.
```
cpu.usage.idle.us-west 100
=> cpu_usage,region=us-west idle=100
```
#### Filter Templates:
Users can also filter the template(s) to use based on the name of the bucket,
using glob matching, like so:
```toml
templates = [
"cpu.* measurement.measurement.region",
"mem.* measurement.measurement.host"
]
```
which would result in the following transformation:
```
cpu.load.us-west 100
=> cpu_load,region=us-west value=100
mem.cached.localhost 256
=> mem_cached,host=localhost value=256
```
#### Adding Tags:
Tags that don't exist on the received metric can be added by specifying them
after the template pattern. Tags have the same format as in line protocol,
and multiple tags are separated by commas.
```toml
templates = [
"measurement.measurement.field.region datacenter=1a"
]
```
would result in the following Graphite -> Telegraf transformation.
```
cpu.usage.idle.us-west 100
=> cpu_usage,region=us-west,datacenter=1a idle=100
```
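Again as a sketch, and assuming the same constructor as above, the field keyword and the extra tag combine like so:
```go
package main

import (
	"fmt"

	"github.com/influxdata/telegraf/plugins/parsers"
)

func main() {
	// One template with a field keyword and an extra "datacenter" tag.
	parser, err := parsers.NewGraphiteParser("_",
		[]string{"measurement.measurement.field.region datacenter=1a"}, nil)
	if err != nil {
		panic(err)
	}
	metrics, err := parser.Parse([]byte("cpu.usage.idle.us-west 100\n"))
	if err != nil {
		panic(err)
	}
	for _, m := range metrics {
		fmt.Println(m.Name(), m.Tags(), m.Fields())
		// cpu_usage map[region:us-west datacenter:1a] map[idle:100]
	}
}
```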
There are many more options available;
[more details can be found here](https://github.com/influxdata/influxdb/tree/master/services/graphite#templates).
#### Graphite Configuration:
```toml
[[inputs.exec]]
### Commands array
commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]
### measurement name suffix (for separating different commands)
name_suffix = "_mycollector"
### Data format to consume. This can be "json", "influx" or "graphite" (line-protocol)
### Each data format has its own unique set of configuration options, read
### more about them here:
### https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS_INPUT.md
data_format = "graphite"
### This string will be used to join the matched values.
separator = "_"
### Each template line requires a template pattern. It can have an optional
### filter before the template, separated by spaces. It can also have optional extra
### tags following the template. Multiple tags should be separated by commas, with no
### spaces, similar to the line protocol format. There can be only one default template.
### Templates support the below formats:
### 1. filter + template
### 2. filter + template + extra tag
### 3. filter + template with field key
### 4. default template
templates = [
"*.app env.service.resource.measurement",
"stats.* .host.measurement* region=us-west,agent=sensu",
"stats2.* .host.measurement.field",
"measurement*"
]
```

Godeps (6 changed lines)

@@ -2,10 +2,8 @@ git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git dbd8d5c40a582eb9ad
github.com/Shopify/sarama d37c73f2b2bce85f7fa16b6a550d26c5372892ef
github.com/Sirupsen/logrus f7f79f729e0fbe2fcc061db48a9ba0263f588252
github.com/amir/raidman 6a8e089bbe32e6b907feae5ba688841974b3c339
-github.com/armon/go-metrics 345426c77237ece5dab0e1605c3e4b35c3f54757
github.com/aws/aws-sdk-go 87b1e60a50b09e4812dee560b33a238f67305804
github.com/beorn7/perks b965b613227fddccbfffe13eae360ed3fa822f8d
-github.com/boltdb/bolt ee4a0888a9abe7eefe5a0992ca4cb06864839873
github.com/cenkalti/backoff 4dc77674aceaabba2c7e3da25d4c823edfb73f99
github.com/dancannon/gorethink 6f088135ff288deb9d5546f4c71919207f891a70
github.com/davecgh/go-spew 5215b55f46b2b919f50a1df0eaa5886afe4e3b3d
@@ -14,16 +12,12 @@ github.com/eapache/queue ded5959c0d4e360646dc9e9908cff48666781367
github.com/fsouza/go-dockerclient 7b651349f9479f5114913eefbfd3c4eeddd79ab4
github.com/go-ini/ini afbd495e5aaea13597b5e14fe514ddeaa4d76fc3
github.com/go-sql-driver/mysql 7c7f556282622f94213bc028b4d0a7b6151ba239
-github.com/gogo/protobuf e8904f58e872a473a5b91bc9bf3377d223555263
github.com/golang/protobuf 6aaa8d47701fa6cf07e914ec01fde3d4a1fe79c3
github.com/golang/snappy 723cc1e459b8eea2dea4583200fd60757d40097a
github.com/gonuts/go-shellquote e842a11b24c6abfb3dd27af69a17f482e4b483c2
github.com/gorilla/context 1c83b3eabd45b6d76072b66b746c20815fb2872d
github.com/gorilla/mux 26a6070f849969ba72b72256e9f14cf519751690
github.com/hailocab/go-hostpool e80d13ce29ede4452c43dea11e79b9bc8a15b478
-github.com/hashicorp/go-msgpack fa3f63826f7c23912c15263591e65d54d080b458
-github.com/hashicorp/raft 057b893fd996696719e98b6c44649ea14968c811
-github.com/hashicorp/raft-boltdb d1e82c1ec3f15ee991f7cc7ffd5b67ff6f5bbaee
github.com/influxdata/config bae7cb98197d842374d3b8403905924094930f24
github.com/influxdata/influxdb 697f48b4e62e514e701ffec39978b864a3c666e6
github.com/influxdb/influxdb 697f48b4e62e514e701ffec39978b864a3c666e6


@@ -1,34 +1,28 @@
git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git dbd8d5c40a582eb9adacde36b47932b3a3ad0034
-github.com/Shopify/sarama b1da1753dedcf77d053613b7eae907b98a2ddad5
+github.com/Shopify/sarama d37c73f2b2bce85f7fa16b6a550d26c5372892ef
github.com/Sirupsen/logrus f7f79f729e0fbe2fcc061db48a9ba0263f588252
github.com/StackExchange/wmi f3e2bae1e0cb5aef83e319133eabfee30013a4a5
github.com/amir/raidman 6a8e089bbe32e6b907feae5ba688841974b3c339
-github.com/armon/go-metrics 345426c77237ece5dab0e1605c3e4b35c3f54757
-github.com/aws/aws-sdk-go 2a34ea8812f32aae75b43400f9424a0559840659
+github.com/aws/aws-sdk-go 87b1e60a50b09e4812dee560b33a238f67305804
github.com/beorn7/perks b965b613227fddccbfffe13eae360ed3fa822f8d
-github.com/boltdb/bolt ee4a0888a9abe7eefe5a0992ca4cb06864839873
github.com/cenkalti/backoff 4dc77674aceaabba2c7e3da25d4c823edfb73f99
github.com/dancannon/gorethink 6f088135ff288deb9d5546f4c71919207f891a70
github.com/davecgh/go-spew 5215b55f46b2b919f50a1df0eaa5886afe4e3b3d
github.com/eapache/go-resiliency b86b1ec0dd4209a588dc1285cdd471e73525c0b3
github.com/eapache/queue ded5959c0d4e360646dc9e9908cff48666781367
-github.com/fsouza/go-dockerclient 02a8beb401b20e112cff3ea740545960b667eab1
+github.com/fsouza/go-dockerclient 7b651349f9479f5114913eefbfd3c4eeddd79ab4
github.com/go-ini/ini afbd495e5aaea13597b5e14fe514ddeaa4d76fc3
github.com/go-ole/go-ole 50055884d646dd9434f16bbb5c9801749b9bafe4
github.com/go-sql-driver/mysql 7c7f556282622f94213bc028b4d0a7b6151ba239
-github.com/gogo/protobuf e8904f58e872a473a5b91bc9bf3377d223555263
-github.com/golang/protobuf 45bba206dd5270d96bac4942dcfe515726613249
+github.com/golang/protobuf 6aaa8d47701fa6cf07e914ec01fde3d4a1fe79c3
-github.com/golang/snappy 1963d058044b19e16595f80d5050fa54e2070438
+github.com/golang/snappy 723cc1e459b8eea2dea4583200fd60757d40097a
github.com/gonuts/go-shellquote e842a11b24c6abfb3dd27af69a17f482e4b483c2
github.com/gorilla/context 1c83b3eabd45b6d76072b66b746c20815fb2872d
github.com/gorilla/mux 26a6070f849969ba72b72256e9f14cf519751690
github.com/hailocab/go-hostpool e80d13ce29ede4452c43dea11e79b9bc8a15b478
-github.com/hashicorp/go-msgpack fa3f63826f7c23912c15263591e65d54d080b458
-github.com/hashicorp/raft 057b893fd996696719e98b6c44649ea14968c811
-github.com/hashicorp/raft-boltdb d1e82c1ec3f15ee991f7cc7ffd5b67ff6f5bbaee
github.com/influxdata/config bae7cb98197d842374d3b8403905924094930f24
-github.com/influxdata/influxdb 60df13fb566d07ff2cdd07aa23a4796a02b0df3c
+github.com/influxdata/influxdb 697f48b4e62e514e701ffec39978b864a3c666e6
-github.com/influxdb/influxdb 60df13fb566d07ff2cdd07aa23a4796a02b0df3c
+github.com/influxdb/influxdb 697f48b4e62e514e701ffec39978b864a3c666e6
github.com/jmespath/go-jmespath c01cf91b011868172fdcd9f41838e80c9d716264
github.com/klauspost/crc32 999f3125931f6557b991b2f8472172bdfa578d38
github.com/lib/pq 8ad2b298cadd691a77015666a5372eae5dbfac8f
@@ -45,7 +39,7 @@ github.com/prometheus/client_model fa8ad6fec33561be4280a8f0514318c79d7f6cb6
github.com/prometheus/common 14ca1097bbe21584194c15e391a9dab95ad42a59
github.com/prometheus/procfs 406e5b7bfd8201a36e2bb5f7bdae0b03380c2ce8
github.com/samuel/go-zookeeper 218e9c81c0dd8b3b18172b2bbfad92cc7d6db55f
-github.com/shirou/gopsutil 9d8191d6a6e17dcf43b10a20084a11e8c1aa92e6
+github.com/shirou/gopsutil 85bf0974ed06e4e668595ae2b4de02e772a2819b
github.com/shirou/w32 ada3ba68f000aa1b58580e45c9d308fe0b7fc5c5
github.com/soniah/gosnmp b1b4f885b12c5dcbd021c5cee1c904110de6db7d
github.com/streadway/amqp b4f3ceab0337f013208d31348b578d83c0064744
@@ -54,9 +48,8 @@ github.com/stretchr/testify f390dcf405f7b83c997eac1b06768bb9f44dec18
github.com/wvanbergen/kafka 1a8639a45164fcc245d5c7b4bd3ccfbd1a0ffbf3
github.com/wvanbergen/kazoo-go 0f768712ae6f76454f987c3356177e138df258f8
github.com/zensqlmonitor/go-mssqldb ffe5510c6fa5e15e6d983210ab501c815b56b363
-golang.org/x/crypto 1f22c0103821b9390939b6776727195525381532
golang.org/x/net 04b9de9b512f58addf28c9853d50ebef61c3953e
-golang.org/x/text 6fc2e00a0d64b1f7fc1212dae5b0c939cf6d9ac4
+golang.org/x/text 6d3c22c4525a4da167968fa2479be5524d2e8bd0
gopkg.in/dancannon/gorethink.v1 6f088135ff288deb9d5546f4c71919207f891a70
gopkg.in/fatih/pool.v2 cba550ebf9bce999a02e963296d4bc7a486cb715
gopkg.in/mgo.v2 03c9f3ee4c14c8e51ee521a6a7d0425658dd6f64


@@ -11,6 +11,7 @@ import (
"github.com/influxdata/telegraf/agent"
"github.com/influxdata/telegraf/internal/config"
_ "github.com/influxdata/telegraf/plugins/inputs/all"
_ "github.com/influxdata/telegraf/plugins/outputs/all"
)


@@ -15,6 +15,7 @@ import (
"github.com/influxdata/telegraf/internal/models"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/outputs"
+"github.com/influxdata/telegraf/plugins/parsers"
"github.com/influxdata/config"
"github.com/naoina/toml/ast"
@@ -428,6 +429,17 @@ func (c *Config) addInput(name string, table *ast.Table) error {
}
input := creator()
// If the input has a SetParser function, then this means it can accept
// arbitrary types of input, so build the parser and set it.
switch t := input.(type) {
case parsers.ParserInput:
parser, err := buildParser(name, table)
if err != nil {
return err
}
t.SetParser(parser)
}
pluginConfig, err := buildInput(name, table)
if err != nil {
return err
@@ -583,6 +595,66 @@ func buildInput(name string, tbl *ast.Table) (*internal_models.InputConfig, erro
return cp, nil
}
// buildParser grabs the necessary entries from the ast.Table for creating
// a parsers.Parser object, and creates it, which can then be added onto
// an Input object.
func buildParser(name string, tbl *ast.Table) (parsers.Parser, error) {
c := &parsers.Config{}
if node, ok := tbl.Fields["data_format"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
c.DataFormat = str.Value
}
}
}
if c.DataFormat == "" {
c.DataFormat = "influx"
}
if node, ok := tbl.Fields["separator"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
c.Separator = str.Value
}
}
}
if node, ok := tbl.Fields["templates"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if ary, ok := kv.Value.(*ast.Array); ok {
for _, elem := range ary.Value {
if str, ok := elem.(*ast.String); ok {
c.Templates = append(c.Templates, str.Value)
}
}
}
}
}
if node, ok := tbl.Fields["tag_keys"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if ary, ok := kv.Value.(*ast.Array); ok {
for _, elem := range ary.Value {
if str, ok := elem.(*ast.String); ok {
c.TagKeys = append(c.TagKeys, str.Value)
}
}
}
}
}
c.MetricName = name
delete(tbl.Fields, "data_format")
delete(tbl.Fields, "separator")
delete(tbl.Fields, "templates")
delete(tbl.Fields, "tag_keys")
return parsers.NewParser(c)
}
// buildOutput parses output specific items from the ast.Table, builds the filter and returns an
// internal_models.OutputConfig to be inserted into internal_models.RunningInput
// Note: error exists in the return for future calls that might require error


@@ -9,6 +9,7 @@ import (
"github.com/influxdata/telegraf/plugins/inputs/exec"
"github.com/influxdata/telegraf/plugins/inputs/memcached"
"github.com/influxdata/telegraf/plugins/inputs/procstat"
+"github.com/influxdata/telegraf/plugins/parsers"
"github.com/stretchr/testify/assert"
)
@@ -91,6 +92,9 @@ func TestConfig_LoadDirectory(t *testing.T) {
"Testdata did not produce correct memcached metadata.")
ex := inputs.Inputs["exec"]().(*exec.Exec)
+p, err := parsers.NewInfluxParser()
+assert.NoError(t, err)
+ex.SetParser(p)
ex.Command = "/usr/bin/myothercollector --foo=bar"
eConfig := &internal_models.InputConfig{
Name: "exec",


@@ -1,31 +0,0 @@
package encoding
import (
"fmt"
"github.com/influxdata/telegraf"
)
type Parser interface {
InitConfig(configs map[string]interface{}) error
Parse(buf []byte) ([]telegraf.Metric, error)
ParseLine(line string) (telegraf.Metric, error)
}
type Creator func() Parser
var Parsers = map[string]Creator{}
func Add(name string, creator Creator) {
Parsers[name] = creator
}
func NewParser(dataFormat string, configs map[string]interface{}) (parser Parser, err error) {
creator := Parsers[dataFormat]
if creator == nil {
return nil, fmt.Errorf("Unsupported data format: %s. ", dataFormat)
}
parser = creator()
err = parser.InitConfig(configs)
return parser, err
}


@@ -1,48 +0,0 @@
package influx
import (
"fmt"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal/encoding"
)
type InfluxParser struct {
}
func (p *InfluxParser) Parse(buf []byte) ([]telegraf.Metric, error) {
metrics, err := telegraf.ParseMetrics(buf)
if err != nil {
return nil, err
}
return metrics, nil
}
func (p *InfluxParser) ParseLine(line string) (telegraf.Metric, error) {
metrics, err := p.Parse([]byte(line + "\n"))
if err != nil {
return nil, err
}
if len(metrics) < 1 {
return nil, fmt.Errorf("Can not parse the line: %s, for data format: influx ", line)
}
return metrics[0], nil
}
func NewParser() *InfluxParser {
return &InfluxParser{}
}
func (p *InfluxParser) InitConfig(configs map[string]interface{}) error {
return nil
}
func init() {
encoding.Add("influx", func() encoding.Parser {
return NewParser()
})
}


@@ -1,68 +0,0 @@
package json
import (
"encoding/json"
"fmt"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/internal/encoding"
)
type JsonParser struct {
}
func (p *JsonParser) Parse(buf []byte) ([]telegraf.Metric, error) {
metrics := make([]telegraf.Metric, 0)
var jsonOut interface{}
err := json.Unmarshal(buf, &jsonOut)
if err != nil {
err = fmt.Errorf("unable to parse out as JSON, %s", err)
return nil, err
}
f := internal.JSONFlattener{}
err = f.FlattenJSON("", jsonOut)
if err != nil {
return nil, err
}
metric, err := telegraf.NewMetric("exec", nil, f.Fields, time.Now().UTC())
if err != nil {
return nil, err
}
return append(metrics, metric), nil
}
func (p *JsonParser) ParseLine(line string) (telegraf.Metric, error) {
metrics, err := p.Parse([]byte(line + "\n"))
if err != nil {
return nil, err
}
if len(metrics) < 1 {
return nil, fmt.Errorf("Can not parse the line: %s, for data format: influx ", line)
}
return metrics[0], nil
}
func NewParser() *JsonParser {
return &JsonParser{}
}
func (p *JsonParser) InitConfig(configs map[string]interface{}) error {
return nil
}
func init() {
encoding.Add("json", func() encoding.Parser {
return NewParser()
})
}


@@ -9,7 +9,6 @@ import (
"fmt"
"io/ioutil"
"os"
-"strconv"
"strings"
"time"
)
@@ -35,47 +34,6 @@ func (d *Duration) UnmarshalTOML(b []byte) error {
var NotImplementedError = errors.New("not implemented yet")
type JSONFlattener struct {
Fields map[string]interface{}
}
// FlattenJSON flattens nested maps/interfaces into a fields map
func (f *JSONFlattener) FlattenJSON(
fieldname string,
v interface{},
) error {
if f.Fields == nil {
f.Fields = make(map[string]interface{})
}
fieldname = strings.Trim(fieldname, "_")
switch t := v.(type) {
case map[string]interface{}:
for k, v := range t {
err := f.FlattenJSON(fieldname+"_"+k+"_", v)
if err != nil {
return err
}
}
case []interface{}:
for i, v := range t {
k := strconv.Itoa(i)
err := f.FlattenJSON(fieldname+"_"+k+"_", v)
if err != nil {
return nil
}
}
case float64:
f.Fields[fieldname] = t
case bool, string, nil:
// ignored types
return nil
default:
return fmt.Errorf("JSON Flattener: got unexpected type %T with value %v (%s)",
t, t, fieldname)
}
return nil
}
// ReadLines reads contents from a file and splits them by new lines.
// A convenience wrapper to ReadLinesOffsetN(filename, 0, -1).
func ReadLines(filename string) ([]string, error) {


@@ -1,11 +1,9 @@
package telegraf
import (
-"bytes"
"time"
"github.com/influxdata/influxdb/client/v2"
-"github.com/influxdata/influxdb/models"
)
type Metric interface {
@@ -63,25 +61,6 @@ func NewMetric(
}, nil
}
// ParseMetrics returns a slice of Metrics from a text representation of a
// metric (in line-protocol format)
// with each metric separated by newlines. If any metrics fail to parse,
// a non-nil error will be returned in addition to the metrics that parsed
// successfully.
func ParseMetrics(buf []byte) ([]Metric, error) {
// parse even if the buffer begins with a newline
buf = bytes.TrimPrefix(buf, []byte("\n"))
points, err := models.ParsePoints(buf)
metrics := make([]Metric, len(points))
for i, point := range points {
// Ignore error here because it's impossible that a model.Point
// wouldn't parse into client.Point properly
metrics[i], _ = NewMetric(point.Name(), point.Tags(),
point.Fields(), point.Time())
}
return metrics, err
}
func (m *metric) Name() string {
return m.pt.Name()
}


@@ -9,58 +9,6 @@ import (
"github.com/stretchr/testify/assert"
)
const validMs = `
cpu,cpu=cpu0,host=foo,datacenter=us-east usage_idle=99,usage_busy=1 1454105876344540456
`
const invalidMs = `
cpu, cpu=cpu0,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,host=foo usage_idle
cpu,host usage_idle=99
cpu,host=foo usage_idle=99 very bad metric
`
const validInvalidMs = `
cpu,cpu=cpu0,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu1,host=foo,datacenter=us-east usage_idle=51,usage_busy=49
cpu,cpu=cpu2,host=foo,datacenter=us-east usage_idle=60,usage_busy=40
cpu,host usage_idle=99
`
func TestParseValidMetrics(t *testing.T) {
metrics, err := ParseMetrics([]byte(validMs))
assert.NoError(t, err)
assert.Len(t, metrics, 1)
m := metrics[0]
tags := map[string]string{
"host": "foo",
"datacenter": "us-east",
"cpu": "cpu0",
}
fields := map[string]interface{}{
"usage_idle": float64(99),
"usage_busy": float64(1),
}
assert.Equal(t, tags, m.Tags())
assert.Equal(t, fields, m.Fields())
assert.Equal(t, "cpu", m.Name())
assert.Equal(t, int64(1454105876344540456), m.UnixNano())
}
func TestParseInvalidMetrics(t *testing.T) {
metrics, err := ParseMetrics([]byte(invalidMs))
assert.Error(t, err)
assert.Len(t, metrics, 0)
}
func TestParseValidAndInvalidMetrics(t *testing.T) {
metrics, err := ParseMetrics([]byte(validInvalidMs))
assert.Error(t, err)
assert.Len(t, metrics, 3)
}
func TestNewMetric(t *testing.T) {
now := time.Now()


@@ -10,8 +10,8 @@ import (
"time"
"github.com/influxdata/telegraf"
-"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
+jsonparser "github.com/influxdata/telegraf/plugins/parsers/json"
)
const statsPath = "/_nodes/stats"
@@ -168,7 +168,7 @@ func (e *Elasticsearch) gatherNodeStats(url string, acc telegraf.Accumulator) er
now := time.Now()
for p, s := range stats {
-f := internal.JSONFlattener{}
+f := jsonparser.JSONFlattener{}
err := f.FlattenJSON("", s)
if err != nil {
return err


@@ -28,9 +28,6 @@ and strings will be ignored.
# Read flattened metrics from one or more commands that output JSON to stdout
[[inputs.exec]]
# Shell/commands array
-# compatible with old version
-# we can still use the old command configuration
-# command = "/usr/bin/mycollector --foo=bar"
commands = ["/tmp/test.sh", "/tmp/test2.sh"]
# Data format to consume. This can be "json", "influx" or "graphite" (line-protocol)


@@ -9,66 +9,40 @@ import (
"github.com/gonuts/go-shellquote"
"github.com/influxdata/telegraf"
-"github.com/influxdata/telegraf/internal/encoding"
"github.com/influxdata/telegraf/plugins/inputs"
+"github.com/influxdata/telegraf/plugins/parsers"
-_ "github.com/influxdata/telegraf/internal/encoding/graphite"
-_ "github.com/influxdata/telegraf/internal/encoding/influx"
-_ "github.com/influxdata/telegraf/internal/encoding/json"
)
const sampleConfig = `
-# Shell/commands array
-# compatible with old version
-# we can still use the old command configuration
-# command = "/usr/bin/mycollector --foo=bar"
-commands = ["/tmp/test.sh","/tmp/test2.sh"]
-# Data format to consume. This can be "json", "influx" or "graphite" (line-protocol)
-# NOTE json only reads numerical measurements, strings and booleans are ignored.
-data_format = "json"
-# measurement name suffix (for separating different commands)
+### Commands array
+commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]
+### measurement name suffix (for separating different commands)
name_suffix = "_mycollector"
-### Below configuration will be used for data_format = "graphite", can be ignored for other data_format
-### If matching multiple measurement files, this string will be used to join the matched values.
-separator = "."
-### Each template line requires a template pattern. It can have an optional
-### filter before the template and separated by spaces. It can also have optional extra
-### tags following the template. Multiple tags should be separated by commas and no spaces
-### similar to the line protocol format. The can be only one default template.
-### Templates support below format:
-### 1. filter + template
-### 2. filter + template + extra tag
-### 3. filter + template with field key
-### 4. default template
-templates = [
-"*.app env.service.resource.measurement",
-"stats.* .host.measurement* region=us-west,agent=sensu",
-"stats2.* .host.measurement.field",
-"measurement*"
-]
+### Data format to consume. This can be "json", "influx" or "graphite"
+### Each data format has it's own unique set of configuration options, read
+### more about them here:
+### https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS.md
+data_format = "influx"
`
type Exec struct {
Commands []string
Command string
-DataFormat string
-Separator string
-Templates []string
+parser parsers.Parser
-encodingParser encoding.Parser
-initedConfig bool
wg sync.WaitGroup
-sync.Mutex
runner Runner
-errc chan error
+errChan chan error
}
-func NewExec() *Exec {
-return &Exec{
-runner: CommandRunner{},
-}
-}
type Runner interface {
@@ -95,22 +69,18 @@ func (c CommandRunner) Run(e *Exec, command string) ([]byte, error) {
return out.Bytes(), nil
}
+func NewExec() *Exec {
+return &Exec{runner: CommandRunner{}}
+}
func (e *Exec) ProcessCommand(command string, acc telegraf.Accumulator) {
defer e.wg.Done()
out, err := e.runner.Run(e, command)
if err != nil {
-e.errc <- err
+e.errChan <- err
return
}
-metrics, err := e.encodingParser.Parse(out)
+metrics, err := e.parser.Parse(out)
if err != nil {
-e.errc <- err
+e.errChan <- err
} else {
for _, metric := range metrics {
acc.AddFields(metric.Name(), metric.Fields(), metric.Tags(), metric.Time())
@@ -118,66 +88,33 @@ func (e *Exec) ProcessCommand(command string, acc telegraf.Accumulator) {
}
}
-func (e *Exec) initConfig() error {
-e.Lock()
-defer e.Unlock()
-if e.Command != "" && len(e.Commands) < 1 {
-e.Commands = []string{e.Command}
-}
-if e.DataFormat == "" {
-e.DataFormat = "json"
-}
-var err error
-configs := make(map[string]interface{})
-configs["Separator"] = e.Separator
-configs["Templates"] = e.Templates
-e.encodingParser, err = encoding.NewParser(e.DataFormat, configs)
-if err != nil {
-return fmt.Errorf("exec configuration is error: %s ", err.Error())
-}
-return nil
-}
func (e *Exec) SampleConfig() string {
return sampleConfig
}
func (e *Exec) Description() string {
-return "Read metrics from one or more commands that can output JSON, influx or graphite line protocol to stdout"
+return "Read metrics from one or more commands that can output to stdout"
}
+func (e *Exec) SetParser(parser parsers.Parser) {
+e.parser = parser
+}
func (e *Exec) Gather(acc telegraf.Accumulator) error {
+e.errChan = make(chan error, len(e.Commands))
+e.wg.Add(len(e.Commands))
-if !e.initedConfig {
-if err := e.initConfig(); err != nil {
-return err
-}
-e.initedConfig = true
-}
-e.Lock()
-e.errc = make(chan error, 10)
-e.Unlock()
for _, command := range e.Commands {
-e.wg.Add(1)
go e.ProcessCommand(command, acc)
}
e.wg.Wait()
select {
default:
-close(e.errc)
+close(e.errChan)
return nil
-case err := <-e.errc:
+case err := <-e.errChan:
-close(e.errc)
+close(e.errChan)
return err
}


@@ -4,6 +4,8 @@ import (
"fmt"
"testing"
+"github.com/influxdata/telegraf/plugins/parsers"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -63,9 +65,11 @@ func (r runnerMock) Run(e *Exec, command string) ([]byte, error) {
}
func TestExec(t *testing.T) {
+parser, _ := parsers.NewJSONParser("exec", []string{}, nil)
e := &Exec{
runner: newRunnerMock([]byte(validJson), nil),
Commands: []string{"testcommand arg1"},
+parser: parser,
}
var acc testutil.Accumulator
@@ -87,9 +91,11 @@ func TestExec(t *testing.T) {
}
func TestExecMalformed(t *testing.T) {
+parser, _ := parsers.NewJSONParser("exec", []string{}, nil)
e := &Exec{
runner: newRunnerMock([]byte(malformedJson), nil),
Commands: []string{"badcommand arg1"},
+parser: parser,
}
var acc testutil.Accumulator
@@ -99,9 +105,11 @@ func TestExecMalformed(t *testing.T) {
}
func TestCommandError(t *testing.T) {
+parser, _ := parsers.NewJSONParser("exec", []string{}, nil)
e := &Exec{
runner: newRunnerMock(nil, fmt.Errorf("exit status code 1")),
Commands: []string{"badcommand"},
+parser: parser,
}
var acc testutil.Accumulator
@@ -111,10 +119,11 @@ func TestCommandError(t *testing.T) {
}
func TestLineProtocolParse(t *testing.T) {
+parser, _ := parsers.NewInfluxParser()
e := &Exec{
runner: newRunnerMock([]byte(lineProtocol), nil),
Commands: []string{"line-protocol"},
-DataFormat: "influx",
+parser: parser,
}
var acc testutil.Accumulator
@@ -133,10 +142,11 @@ func TestLineProtocolParse(t *testing.T) {
}
func TestLineProtocolParseMultiple(t *testing.T) {
+parser, _ := parsers.NewInfluxParser()
e := &Exec{
runner: newRunnerMock([]byte(lineProtocolMulti), nil),
Commands: []string{"line-protocol"},
-DataFormat: "influx",
+parser: parser,
}
var acc testutil.Accumulator
@@ -158,15 +168,3 @@ func TestLineProtocolParseMultiple(t *testing.T) {
acc.AssertContainsTaggedFields(t, "cpu", fields, tags)
}
}
-func TestInvalidDataFormat(t *testing.T) {
-e := &Exec{
-runner: newRunnerMock([]byte(lineProtocol), nil),
-Commands: []string{"bad data format"},
-DataFormat: "FooBar",
-}
-var acc testutil.Accumulator
-err := e.Gather(&acc)
-require.Error(t, err)
-}


@@ -1,7 +1,6 @@
package httpjson
import (
-"encoding/json"
"errors"
"fmt"
"io/ioutil"
@@ -12,8 +11,8 @@ import (
"time"
"github.com/influxdata/telegraf"
-"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
+"github.com/influxdata/telegraf/plugins/parsers"
)
type HttpJson struct {
@@ -137,39 +136,34 @@ func (h *HttpJson) gatherServer(
return err
}
-var jsonOut map[string]interface{}
-if err = json.Unmarshal([]byte(resp), &jsonOut); err != nil {
-return errors.New("Error decoding JSON response")
-}
-tags := map[string]string{
-"server": serverURL,
-}
-for _, tag := range h.TagKeys {
-switch v := jsonOut[tag].(type) {
-case string:
-tags[tag] = v
-}
-delete(jsonOut, tag)
-}
-if responseTime >= 0 {
-jsonOut["response_time"] = responseTime
-}
-f := internal.JSONFlattener{}
-err = f.FlattenJSON("", jsonOut)
-if err != nil {
-return err
-}
var msrmnt_name string
if h.Name == "" {
msrmnt_name = "httpjson"
} else {
msrmnt_name = "httpjson_" + h.Name
}
-acc.AddFields(msrmnt_name, f.Fields, tags)
+tags := map[string]string{
+"server": serverURL,
+}
+parser, err := parsers.NewJSONParser(msrmnt_name, h.TagKeys, tags)
+if err != nil {
+return err
+}
+metrics, err := parser.Parse([]byte(resp))
+if err != nil {
+return err
+}
+for _, metric := range metrics {
+fields := make(map[string]interface{})
+for k, v := range metric.Fields() {
+fields[k] = v
+}
+fields["response_time"] = responseTime
+acc.AddFields(metric.Name(), fields, metric.Tags())
+}
return nil
}


@@ -1,12 +1,14 @@
package kafka_consumer
import (
+"fmt"
"log"
"strings"
"sync"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
+"github.com/influxdata/telegraf/plugins/parsers"
"github.com/Shopify/sarama"
"github.com/wvanbergen/kafka/consumergroup"
@@ -20,6 +22,8 @@ type Kafka struct {
PointBuffer int
Offset string
+parser parsers.Parser
sync.Mutex
// channel for all incoming kafka messages
@@ -36,16 +40,22 @@ type Kafka struct {
}
var sampleConfig = `
-# topic(s) to consume
+### topic(s) to consume
topics = ["telegraf"]
-# an array of Zookeeper connection strings
+### an array of Zookeeper connection strings
zookeeper_peers = ["localhost:2181"]
-# the name of the consumer group
+### the name of the consumer group
consumer_group = "telegraf_metrics_consumers"
-# Maximum number of points to buffer between collection intervals
+### Maximum number of points to buffer between collection intervals
point_buffer = 100000
-# Offset (must be either "oldest" or "newest")
+### Offset (must be either "oldest" or "newest")
offset = "oldest"
+### Data format to consume. This can be "json", "influx" or "graphite"
+### Each data format has it's own unique set of configuration options, read
+### more about them here:
+### https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS.md
+data_format = "influx"
`
func (k *Kafka) SampleConfig() string {
@@ -53,7 +63,11 @@ func (k *Kafka) SampleConfig() string {
}
func (k *Kafka) Description() string {
-return "Read line-protocol metrics from Kafka topic(s)"
+return "Read metrics from Kafka topic(s)"
}
+func (k *Kafka) SetParser(parser parsers.Parser) {
+k.parser = parser
+}
func (k *Kafka) Start() error {
@@ -96,15 +110,15 @@ func (k *Kafka) Start() error {
k.metricC = make(chan telegraf.Metric, k.PointBuffer)
// Start the kafka message reader
-go k.parser()
+go k.receiver()
log.Printf("Started the kafka consumer service, peers: %v, topics: %v\n",
k.ZookeeperPeers, k.Topics)
return nil
}
-// parser() reads all incoming messages from the consumer, and parses them into
+// receiver() reads all incoming messages from the consumer, and parses them into
// influxdb metric points.
-func (k *Kafka) parser() {
+func (k *Kafka) receiver() {
for {
select {
case <-k.done:
@@ -112,13 +126,14 @@ func (k *Kafka) receiver() {
case err := <-k.errs:
log.Printf("Kafka Consumer Error: %s\n", err.Error())
case msg := <-k.in:
-metrics, err := telegraf.ParseMetrics(msg.Value)
+metrics, err := k.parser.Parse(msg.Value)
if err != nil {
log.Printf("Could not parse kafka message: %s, error: %s",
string(msg.Value), err.Error())
}
for _, metric := range metrics {
+fmt.Println(string(metric.Name()))
select {
case k.metricC <- metric:
continue


@@ -9,6 +9,8 @@ import (
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
+"github.com/influxdata/telegraf/plugins/parsers"
)
func TestReadsMetricsFromKafka(t *testing.T) {
@@ -40,6 +42,8 @@ func TestReadsMetricsFromKafka(t *testing.T) {
PointBuffer: 100000,
Offset: "oldest",
}
+p, _ := parsers.NewInfluxParser()
+k.SetParser(p)
if err := k.Start(); err != nil {
t.Fatal(err.Error())
} else {


@@ -5,6 +5,7 @@ import (
"time"
"github.com/influxdata/telegraf"
+"github.com/influxdata/telegraf/plugins/parsers"
"github.com/influxdata/telegraf/testutil"
"github.com/Shopify/sarama"
@@ -13,6 +14,8 @@ import (
const (
testMsg = "cpu_load_short,host=server01 value=23422.0 1422568543702900257"
+testMsgGraphite = "cpu.load.short.graphite 23422 1454780029"
+testMsgJSON = "{\"a\": 5, \"b\": {\"c\": 6}}\n"
invalidMsg = "cpu_load_short,host=server01 1422568543702900257"
pointBuffer = 5
)
@@ -39,7 +42,8 @@ func TestRunParser(t *testing.T) {
k, in := NewTestKafka()
defer close(k.done)
-go k.parser()
+k.parser, _ = parsers.NewInfluxParser()
+go k.receiver()
in <- saramaMsg(testMsg)
time.Sleep(time.Millisecond)
@@ -51,7 +55,8 @@ func TestRunParserInvalidMsg(t *testing.T) {
k, in := NewTestKafka()
defer close(k.done)
-go k.parser()
+k.parser, _ = parsers.NewInfluxParser()
+go k.receiver()
in <- saramaMsg(invalidMsg)
time.Sleep(time.Millisecond)
@@ -63,7 +68,8 @@ func TestRunParserRespectsBuffer(t *testing.T) {
k, in := NewTestKafka()
defer close(k.done)
-go k.parser()
+k.parser, _ = parsers.NewInfluxParser()
+go k.receiver()
for i := 0; i < pointBuffer+1; i++ {
in <- saramaMsg(testMsg)
}
@@ -77,7 +83,8 @@ func TestRunParserAndGather(t *testing.T) {
k, in := NewTestKafka()
defer close(k.done)
-go k.parser()
+k.parser, _ = parsers.NewInfluxParser()
+go k.receiver()
in <- saramaMsg(testMsg)
time.Sleep(time.Millisecond)
@@ -89,6 +96,45 @@ func TestRunParserAndGather(t *testing.T) {
map[string]interface{}{"value": float64(23422)})
}
// Test that the parser parses kafka messages into points
func TestRunParserAndGatherGraphite(t *testing.T) {
k, in := NewTestKafka()
defer close(k.done)
k.parser, _ = parsers.NewGraphiteParser("_", []string{}, nil)
go k.receiver()
in <- saramaMsg(testMsgGraphite)
time.Sleep(time.Millisecond)
acc := testutil.Accumulator{}
k.Gather(&acc)
assert.Equal(t, len(acc.Metrics), 1)
acc.AssertContainsFields(t, "cpu_load_short_graphite",
map[string]interface{}{"value": float64(23422)})
}
// Test that the parser parses kafka messages into points
func TestRunParserAndGatherJSON(t *testing.T) {
k, in := NewTestKafka()
defer close(k.done)
k.parser, _ = parsers.NewJSONParser("kafka_json_test", []string{}, nil)
go k.receiver()
in <- saramaMsg(testMsgJSON)
time.Sleep(time.Millisecond)
acc := testutil.Accumulator{}
k.Gather(&acc)
assert.Equal(t, len(acc.Metrics), 1)
acc.AssertContainsFields(t, "kafka_json_test",
map[string]interface{}{
"a": float64(5),
"b_c": float64(6),
})
}
func saramaMsg(val string) *sarama.ConsumerMessage {
return &sarama.ConsumerMessage{
Key: nil,


@@ -11,7 +11,7 @@ import (
"sync"
"time"
-"github.com/influxdata/influxdb/services/graphite"
+"github.com/influxdata/telegraf/plugins/parsers/graphite"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
@@ -123,37 +123,39 @@ func (_ *Statsd) Description() string {
}
const sampleConfig = `
-# Address and port to host UDP listener on
+### Address and port to host UDP listener on
service_address = ":8125"
-# Delete gauges every interval (default=false)
+### Delete gauges every interval (default=false)
delete_gauges = false
-# Delete counters every interval (default=false)
+### Delete counters every interval (default=false)
delete_counters = false
-# Delete sets every interval (default=false)
+### Delete sets every interval (default=false)
delete_sets = false
-# Delete timings & histograms every interval (default=true)
+### Delete timings & histograms every interval (default=true)
delete_timings = true
-# Percentiles to calculate for timing & histogram stats
+### Percentiles to calculate for timing & histogram stats
percentiles = [90]
-# convert measurement names, "." to "_" and "-" to "__"
+### convert measurement names, "." to "_" and "-" to "__"
convert_names = true
+### Statsd data translation templates, more info can be read here:
+### https://github.com/influxdata/telegraf/blob/master/DATA_FORMATS.md#graphite
# templates = [
# "cpu.* measurement*"
# ]
-# Number of UDP messages allowed to queue up, once filled,
-# the statsd server will start dropping packets
+### Number of UDP messages allowed to queue up, once filled,
+### the statsd server will start dropping packets
allowed_pending_messages = 10000
-# Number of timing/histogram values to track per-measurement in the
-# calculation of percentiles. Raising this limit increases the accuracy
-# of percentiles but also increases the memory usage and cpu time.
+### Number of timing/histogram values to track per-measurement in the
+### calculation of percentiles. Raising this limit increases the accuracy
+### of percentiles but also increases the memory usage and cpu time.
percentile_limit = 1000
-# UDP packet size for the server to listen for. This will depend on the size
-# of the packets that the client is sending, which is usually 1500 bytes.
+### UDP packet size for the server to listen for. This will depend on the size
+### of the packets that the client is sending, which is usually 1500 bytes.
udp_packet_size = 1500
`
@@ -418,18 +420,14 @@ func (s *Statsd) parseName(bucket string) (string, string, map[string]string) {
}
}
-o := graphite.Options{
-Separator: "_",
-Templates: s.Templates,
-DefaultTags: tags,
-}
var field string
name := bucketparts[0]
-p, err := graphite.NewParserWithOptions(o)
+p, err := graphite.NewGraphiteParser(".", s.Templates, nil)
if err == nil {
+p.DefaultTags = tags
name, tags, field, _ = p.ApplyTemplate(name)
}
if s.ConvertNames {
name = strings.Replace(name, ".", "_", -1)
name = strings.Replace(name, "-", "__", -1)


@@ -71,16 +71,11 @@ func TestGraphiteOK(t *testing.T) {
// Start TCP server
wg.Add(1)
go TCPServer(t, &wg)
-wg.Wait()
-// Connect
-wg.Add(1)
err1 := g.Connect()
+wg.Wait()
require.NoError(t, err1)
// Send Data
err2 := g.Write(metrics)
require.NoError(t, err2)
-wg.Add(1)
// Waiting TCPserver
wg.Wait()
g.Close()
@@ -88,9 +83,8 @@ func TestGraphiteOK(t *testing.T) {
func TCPServer(t *testing.T, wg *sync.WaitGroup) {
tcpServer, _ := net.Listen("tcp", "127.0.0.1:2003")
-wg.Done()
+defer wg.Done()
conn, _ := tcpServer.Accept()
-wg.Done()
reader := bufio.NewReader(conn)
tp := textproto.NewReader(reader)
data1, _ := tp.ReadLine()
@@ -100,7 +94,6 @@ func TCPServer(t *testing.T, wg *sync.WaitGroup) {
data3, _ := tp.ReadLine()
assert.Equal(t, "my.prefix.192_168_0_1.my_measurement.value 3.14 1289430000", data3)
conn.Close()
-wg.Done()
}
func TestGraphiteTags(t *testing.T) { func TestGraphiteTags(t *testing.T) {


@@ -1,6 +1,7 @@
package graphite
import (
+"bufio"
"bytes"
"fmt"
"io"
@@ -10,11 +11,7 @@ import (
"strings"
"time"
-"bufio"
-"github.com/influxdata/influxdb/models"
"github.com/influxdata/telegraf"
-"github.com/influxdata/telegraf/internal/encoding"
)
// Minimum and maximum supported dates for timestamps.
@@ -23,35 +20,40 @@ var (
MaxDate = time.Date(2038, 1, 19, 0, 0, 0, 0, time.UTC)
)
-// Options are configurable values that can be provided to a Parser
-type Options struct {
-Separator string
-Templates []string
-}
// Parser encapsulates a Graphite Parser.
type GraphiteParser struct {
+Separator string
+Templates []string
+DefaultTags map[string]string
matcher *matcher
}
-func NewParser() *GraphiteParser {
-return &GraphiteParser{}
-}
-func (p *GraphiteParser) InitConfig(configs map[string]interface{}) error {
-var err error
-options := Options{
-Templates: configs["Templates"].([]string),
-Separator: configs["Separator"].(string)}
+func NewGraphiteParser(
+separator string,
+templates []string,
+defaultTags map[string]string,
+) (*GraphiteParser, error) {
+var err error
+if separator == "" {
+separator = DefaultSeparator
+}
+p := &GraphiteParser{
+Separator: separator,
+Templates: templates,
+}
+if defaultTags != nil {
+p.DefaultTags = defaultTags
+}
matcher := newMatcher()
p.matcher = matcher
-defaultTemplate, _ := NewTemplate("measurement*", nil, DefaultSeparator)
+defaultTemplate, _ := NewTemplate("measurement*", nil, p.Separator)
matcher.AddDefaultTemplate(defaultTemplate)
-for _, pattern := range options.Templates {
+for _, pattern := range p.Templates {
template := pattern
filter := ""
// Format is [filter] <template> [tag1=value1,tag2=value2]
@@ -68,7 +70,7 @@ func (p *GraphiteParser) InitConfig(configs map[string]interface{}) error {
}
// Parse out the default tags specific to this template
-tags := models.Tags{}
+tags := map[string]string{}
if strings.Contains(parts[len(parts)-1], "=") {
tagStrs := strings.Split(parts[len(parts)-1], ",")
for _, kv := range tagStrs {
@@ -77,7 +79,7 @@ func (p *GraphiteParser) InitConfig(configs map[string]interface{}) error {
}
}
-tmpl, err1 := NewTemplate(template, tags, options.Separator)
+tmpl, err1 := NewTemplate(template, tags, p.Separator)
if err1 != nil {
err = err1
break
@@ -86,22 +88,19 @@ func (p *GraphiteParser) InitConfig(configs map[string]interface{}) error {
}
if err != nil {
-return fmt.Errorf("exec input parser config is error: %s ", err.Error())
+return p, fmt.Errorf("exec input parser config is error: %s ", err.Error())
} else {
-return nil
+return p, nil
}
}
-func init() {
-encoding.Add("graphite", func() encoding.Parser {
-return NewParser()
-})
-}
func (p *GraphiteParser) Parse(buf []byte) ([]telegraf.Metric, error) {
// parse even if the buffer begins with a newline
buf = bytes.TrimPrefix(buf, []byte("\n"))
+// add newline to end if not exists:
+if len(buf) > 0 && !bytes.HasSuffix(buf, []byte("\n")) {
+buf = append(buf, []byte("\n")...)
+}
metrics := make([]telegraf.Metric, 0)
@@ -123,7 +122,6 @@ func (p *GraphiteParser) Parse(buf []byte) ([]telegraf.Metric, error) {
metrics = append(metrics, metric)
}
}
-}
// Parse performs Graphite parsing of a single line. // Parse performs Graphite parsing of a single line.
@ -183,6 +181,12 @@ func (p *GraphiteParser) ParseLine(line string) (telegraf.Metric, error) {
} }
} }
} }
// Set the default tags on the point if they are not already set
for k, v := range p.DefaultTags {
if _, ok := tags[k]; !ok {
tags[k] = v
}
}
return telegraf.NewMetric(measurement, tags, fieldValues, timestamp) return telegraf.NewMetric(measurement, tags, fieldValues, timestamp)
} }
@ -199,20 +203,27 @@ func (p *GraphiteParser) ApplyTemplate(line string) (string, map[string]string,
template := p.matcher.Match(fields[0]) template := p.matcher.Match(fields[0])
name, tags, field, err := template.Apply(fields[0]) name, tags, field, err := template.Apply(fields[0])
// Set the default tags on the point if they are not already set
for k, v := range p.DefaultTags {
if _, ok := tags[k]; !ok {
tags[k] = v
}
}
return name, tags, field, err return name, tags, field, err
} }
// template represents a pattern and tags to map a graphite metric string to a influxdb Point // template represents a pattern and tags to map a graphite metric string to a influxdb Point
type template struct { type template struct {
tags []string tags []string
defaultTags models.Tags defaultTags map[string]string
greedyMeasurement bool greedyMeasurement bool
separator string separator string
} }
// NewTemplate returns a new template ensuring it has a measurement // NewTemplate returns a new template ensuring it has a measurement
// specified. // specified.
func NewTemplate(pattern string, defaultTags models.Tags, separator string) (*template, error) { func NewTemplate(pattern string, defaultTags map[string]string, separator string) (*template, error) {
tags := strings.Split(pattern, ".") tags := strings.Split(pattern, ".")
hasMeasurement := false hasMeasurement := false
template := &template{tags: tags, defaultTags: defaultTags, separator: separator} template := &template{tags: tags, defaultTags: defaultTags, separator: separator}
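The template mini-language above (`[filter] <template> [tag1=value1,tag2=value2]`) is easiest to see in action. Below is a minimal sketch, not part of the commit, assuming the `plugins/parsers/graphite` import path introduced here; the `main` wrapper is purely illustrative:

```go
package main

import (
	"fmt"

	"github.com/influxdata/telegraf/plugins/parsers/graphite"
)

func main() {
	// Filter "servers.*" selects matching metric names; the template
	// skips the first segment (leading "."), maps the second segment to
	// a "host" tag, and greedily folds the rest into the measurement.
	// "region=us-west" is a template-level default tag.
	p, err := graphite.NewGraphiteParser(
		"_", // separator used when joining measurement segments
		[]string{"servers.* .host.measurement* region=us-west"},
		nil, // no parser-level default tags
	)
	if err != nil {
		panic(err)
	}

	name, tags, _, _ := p.ApplyTemplate("servers.localhost.cpu.load")
	fmt.Println(name) // cpu_load
	fmt.Println(tags) // map[host:localhost region:us-west]
}
```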
View File
@@ -0,0 +1,595 @@
package graphite
import (
"reflect"
"strconv"
"testing"
"time"
"github.com/influxdata/telegraf"
"github.com/stretchr/testify/assert"
)
func BenchmarkParse(b *testing.B) {
p, err := NewGraphiteParser("_", []string{
"*.* .wrong.measurement*",
"servers.* .host.measurement*",
"servers.localhost .host.measurement*",
"*.localhost .host.measurement*",
"*.*.cpu .host.measurement*",
"a.b.c .host.measurement*",
"influxd.*.foo .host.measurement*",
"prod.*.mem .host.measurement*",
}, nil)
if err != nil {
b.Fatalf("unexpected error creating parser, got %v", err)
}
for i := 0; i < b.N; i++ {
p.Parse([]byte("servers.localhost.cpu.load 11 1435077219"))
}
}
func TestTemplateApply(t *testing.T) {
var tests = []struct {
test string
input string
template string
measurement string
tags map[string]string
err string
}{
{
test: "metric only",
input: "cpu",
template: "measurement",
measurement: "cpu",
},
{
test: "metric with single series",
input: "cpu.server01",
template: "measurement.hostname",
measurement: "cpu",
tags: map[string]string{"hostname": "server01"},
},
{
test: "metric with multiple series",
input: "cpu.us-west.server01",
template: "measurement.region.hostname",
measurement: "cpu",
tags: map[string]string{"hostname": "server01", "region": "us-west"},
},
{
test: "no metric",
tags: make(map[string]string),
err: `no measurement specified for template. ""`,
},
{
test: "ignore unnamed",
input: "foo.cpu",
template: "measurement",
measurement: "foo",
tags: make(map[string]string),
},
{
test: "name shorter than template",
input: "foo",
template: "measurement.A.B.C",
measurement: "foo",
tags: make(map[string]string),
},
{
test: "wildcard measurement at end",
input: "prod.us-west.server01.cpu.load",
template: "env.zone.host.measurement*",
measurement: "cpu.load",
tags: map[string]string{"env": "prod", "zone": "us-west", "host": "server01"},
},
{
test: "skip fields",
input: "ignore.us-west.ignore-this-too.cpu.load",
template: ".zone..measurement*",
measurement: "cpu.load",
tags: map[string]string{"zone": "us-west"},
},
}
for _, test := range tests {
tmpl, err := NewTemplate(test.template, nil, DefaultSeparator)
if errstr(err) != test.err {
t.Fatalf("err does not match. expected %v, got %v", test.err, err)
}
if err != nil {
// If we erred out, it was intended and the following tests won't work
continue
}
measurement, tags, _, _ := tmpl.Apply(test.input)
if measurement != test.measurement {
t.Fatalf("name parse failer. expected %v, got %v", test.measurement, measurement)
}
if len(tags) != len(test.tags) {
t.Fatalf("unexpected number of tags. expected %v, got %v", test.tags, tags)
}
for k, v := range test.tags {
if tags[k] != v {
t.Fatalf("unexpected tag value for tags[%s]. expected %q, got %q", k, v, tags[k])
}
}
}
}
func TestParseMissingMeasurement(t *testing.T) {
_, err := NewGraphiteParser("", []string{"a.b.c"}, nil)
if err == nil {
t.Fatalf("expected error creating parser, got nil")
}
}
func TestParse(t *testing.T) {
testTime := time.Now().Round(time.Second)
epochTime := testTime.Unix()
strTime := strconv.FormatInt(epochTime, 10)
var tests = []struct {
test string
input string
measurement string
tags map[string]string
value float64
time time.Time
template string
err string
}{
{
test: "normal case",
input: `cpu.foo.bar 50 ` + strTime,
template: "measurement.foo.bar",
measurement: "cpu",
tags: map[string]string{
"foo": "foo",
"bar": "bar",
},
value: 50,
time: testTime,
},
{
test: "metric only with float value",
input: `cpu 50.554 ` + strTime,
measurement: "cpu",
template: "measurement",
value: 50.554,
time: testTime,
},
{
test: "missing metric",
input: `1419972457825`,
template: "measurement",
err: `received "1419972457825" which doesn't have required fields`,
},
{
test: "should error parsing invalid float",
input: `cpu 50.554z 1419972457825`,
template: "measurement",
err: `field "cpu" value: strconv.ParseFloat: parsing "50.554z": invalid syntax`,
},
{
test: "should error parsing invalid int",
input: `cpu 50z 1419972457825`,
template: "measurement",
err: `field "cpu" value: strconv.ParseFloat: parsing "50z": invalid syntax`,
},
{
test: "should error parsing invalid time",
input: `cpu 50.554 14199724z57825`,
template: "measurement",
err: `field "cpu" time: strconv.ParseFloat: parsing "14199724z57825": invalid syntax`,
},
}
for _, test := range tests {
p, err := NewGraphiteParser("", []string{test.template}, nil)
if err != nil {
t.Fatalf("unexpected error creating graphite parser: %v", err)
}
metric, err := p.ParseLine(test.input)
if errstr(err) != test.err {
t.Fatalf("err does not match. expected %v, got %v", test.err, err)
}
if err != nil {
// If we erred out, it was intended and the following tests won't work
continue
}
if metric.Name() != test.measurement {
t.Fatalf("name parse failer. expected %v, got %v",
test.measurement, metric.Name())
}
if len(metric.Tags()) != len(test.tags) {
t.Fatalf("tags len mismatch. expected %d, got %d",
len(test.tags), len(metric.Tags()))
}
f := metric.Fields()["value"].(float64)
if metric.Fields()["value"] != f {
t.Fatalf("floatValue value mismatch. expected %v, got %v",
test.value, f)
}
if metric.Time().UnixNano()/1000000 != test.time.UnixNano()/1000000 {
t.Fatalf("time value mismatch. expected %v, got %v",
test.time.UnixNano(), metric.Time().UnixNano())
}
}
}
func TestParseNaN(t *testing.T) {
p, err := NewGraphiteParser("", []string{"measurement*"}, nil)
assert.NoError(t, err)
_, err = p.ParseLine("servers.localhost.cpu_load NaN 1435077219")
assert.Error(t, err)
if _, ok := err.(*UnsupposedValueError); !ok {
t.Fatalf("expected *ErrUnsupportedValue, got %v", reflect.TypeOf(err))
}
}
func TestFilterMatchDefault(t *testing.T) {
p, err := NewGraphiteParser("", []string{"servers.localhost .host.measurement*"}, nil)
if err != nil {
t.Fatalf("unexpected error creating parser, got %v", err)
}
exp, err := telegraf.NewMetric("miss.servers.localhost.cpu_load",
map[string]string{},
map[string]interface{}{"value": float64(11)},
time.Unix(1435077219, 0))
assert.NoError(t, err)
m, err := p.ParseLine("miss.servers.localhost.cpu_load 11 1435077219")
assert.NoError(t, err)
assert.Equal(t, exp.String(), m.String())
}
func TestFilterMatchMultipleMeasurement(t *testing.T) {
p, err := NewGraphiteParser("", []string{"servers.localhost .host.measurement.measurement*"}, nil)
if err != nil {
t.Fatalf("unexpected error creating parser, got %v", err)
}
exp, err := telegraf.NewMetric("cpu.cpu_load.10",
map[string]string{"host": "localhost"},
map[string]interface{}{"value": float64(11)},
time.Unix(1435077219, 0))
assert.NoError(t, err)
m, err := p.ParseLine("servers.localhost.cpu.cpu_load.10 11 1435077219")
assert.NoError(t, err)
assert.Equal(t, exp.String(), m.String())
}
func TestFilterMatchMultipleMeasurementSeparator(t *testing.T) {
p, err := NewGraphiteParser("_",
[]string{"servers.localhost .host.measurement.measurement*"},
nil,
)
assert.NoError(t, err)
exp, err := telegraf.NewMetric("cpu_cpu_load_10",
map[string]string{"host": "localhost"},
map[string]interface{}{"value": float64(11)},
time.Unix(1435077219, 0))
assert.NoError(t, err)
m, err := p.ParseLine("servers.localhost.cpu.cpu_load.10 11 1435077219")
assert.NoError(t, err)
assert.Equal(t, exp.String(), m.String())
}
func TestFilterMatchSingle(t *testing.T) {
p, err := NewGraphiteParser("", []string{"servers.localhost .host.measurement*"}, nil)
if err != nil {
t.Fatalf("unexpected error creating parser, got %v", err)
}
exp, err := telegraf.NewMetric("cpu_load",
map[string]string{"host": "localhost"},
map[string]interface{}{"value": float64(11)},
time.Unix(1435077219, 0))
m, err := p.ParseLine("servers.localhost.cpu_load 11 1435077219")
assert.NoError(t, err)
assert.Equal(t, exp.String(), m.String())
}
func TestParseNoMatch(t *testing.T) {
p, err := NewGraphiteParser("", []string{"servers.*.cpu .host.measurement.cpu.measurement"}, nil)
if err != nil {
t.Fatalf("unexpected error creating parser, got %v", err)
}
exp, err := telegraf.NewMetric("servers.localhost.memory.VmallocChunk",
map[string]string{},
map[string]interface{}{"value": float64(11)},
time.Unix(1435077219, 0))
assert.NoError(t, err)
m, err := p.ParseLine("servers.localhost.memory.VmallocChunk 11 1435077219")
assert.NoError(t, err)
assert.Equal(t, exp.String(), m.String())
}
func TestFilterMatchWildcard(t *testing.T) {
p, err := NewGraphiteParser("", []string{"servers.* .host.measurement*"}, nil)
if err != nil {
t.Fatalf("unexpected error creating parser, got %v", err)
}
exp, err := telegraf.NewMetric("cpu_load",
map[string]string{"host": "localhost"},
map[string]interface{}{"value": float64(11)},
time.Unix(1435077219, 0))
assert.NoError(t, err)
m, err := p.ParseLine("servers.localhost.cpu_load 11 1435077219")
assert.NoError(t, err)
assert.Equal(t, exp.String(), m.String())
}
func TestFilterMatchExactBeforeWildcard(t *testing.T) {
p, err := NewGraphiteParser("", []string{
"servers.* .wrong.measurement*",
"servers.localhost .host.measurement*"}, nil)
if err != nil {
t.Fatalf("unexpected error creating parser, got %v", err)
}
exp, err := telegraf.NewMetric("cpu_load",
map[string]string{"host": "localhost"},
map[string]interface{}{"value": float64(11)},
time.Unix(1435077219, 0))
assert.NoError(t, err)
m, err := p.ParseLine("servers.localhost.cpu_load 11 1435077219")
assert.NoError(t, err)
assert.Equal(t, exp.String(), m.String())
}
func TestFilterMatchMostLongestFilter(t *testing.T) {
p, err := NewGraphiteParser("", []string{
"*.* .wrong.measurement*",
"servers.* .wrong.measurement*",
"servers.localhost .wrong.measurement*",
"servers.localhost.cpu .host.resource.measurement*", // should match this
"*.localhost .wrong.measurement*",
}, nil)
if err != nil {
t.Fatalf("unexpected error creating parser, got %v", err)
}
exp, err := telegraf.NewMetric("cpu_load",
map[string]string{"host": "localhost", "resource": "cpu"},
map[string]interface{}{"value": float64(11)},
time.Unix(1435077219, 0))
assert.NoError(t, err)
m, err := p.ParseLine("servers.localhost.cpu.cpu_load 11 1435077219")
assert.NoError(t, err)
assert.Equal(t, exp.String(), m.String())
}
func TestFilterMatchMultipleWildcards(t *testing.T) {
p, err := NewGraphiteParser("", []string{
"*.* .wrong.measurement*",
"servers.* .host.measurement*", // should match this
"servers.localhost .wrong.measurement*",
"*.localhost .wrong.measurement*",
}, nil)
if err != nil {
t.Fatalf("unexpected error creating parser, got %v", err)
}
exp, err := telegraf.NewMetric("cpu_load",
map[string]string{"host": "server01"},
map[string]interface{}{"value": float64(11)},
time.Unix(1435077219, 0))
assert.NoError(t, err)
m, err := p.ParseLine("servers.server01.cpu_load 11 1435077219")
assert.NoError(t, err)
assert.Equal(t, exp.String(), m.String())
}
func TestParseDefaultTags(t *testing.T) {
p, err := NewGraphiteParser("", []string{"servers.localhost .host.measurement*"}, map[string]string{
"region": "us-east",
"zone": "1c",
"host": "should not set",
})
if err != nil {
t.Fatalf("unexpected error creating parser, got %v", err)
}
exp, err := telegraf.NewMetric("cpu_load",
map[string]string{"host": "localhost", "region": "us-east", "zone": "1c"},
map[string]interface{}{"value": float64(11)},
time.Unix(1435077219, 0))
assert.NoError(t, err)
m, err := p.ParseLine("servers.localhost.cpu_load 11 1435077219")
assert.NoError(t, err)
assert.Equal(t, exp.String(), m.String())
}
func TestParseDefaultTemplateTags(t *testing.T) {
p, err := NewGraphiteParser("", []string{"servers.localhost .host.measurement* zone=1c"}, map[string]string{
"region": "us-east",
"host": "should not set",
})
if err != nil {
t.Fatalf("unexpected error creating parser, got %v", err)
}
exp, err := telegraf.NewMetric("cpu_load",
map[string]string{"host": "localhost", "region": "us-east", "zone": "1c"},
map[string]interface{}{"value": float64(11)},
time.Unix(1435077219, 0))
assert.NoError(t, err)
m, err := p.ParseLine("servers.localhost.cpu_load 11 1435077219")
assert.NoError(t, err)
assert.Equal(t, exp.String(), m.String())
}
func TestParseDefaultTemplateTagsOverridGlobal(t *testing.T) {
p, err := NewGraphiteParser("", []string{"servers.localhost .host.measurement* zone=1c,region=us-east"}, map[string]string{
"region": "shot not be set",
"host": "should not set",
})
if err != nil {
t.Fatalf("unexpected error creating parser, got %v", err)
}
exp, err := telegraf.NewMetric("cpu_load",
map[string]string{"host": "localhost", "region": "us-east", "zone": "1c"},
map[string]interface{}{"value": float64(11)},
time.Unix(1435077219, 0))
assert.NoError(t, err)
m, err := p.ParseLine("servers.localhost.cpu_load 11 1435077219")
assert.NoError(t, err)
assert.Equal(t, exp.String(), m.String())
}
func TestParseTemplateWhitespace(t *testing.T) {
p, err := NewGraphiteParser("",
[]string{"servers.localhost .host.measurement* zone=1c"},
map[string]string{
"region": "us-east",
"host": "should not set",
})
if err != nil {
t.Fatalf("unexpected error creating parser, got %v", err)
}
exp, err := telegraf.NewMetric("cpu_load",
map[string]string{"host": "localhost", "region": "us-east", "zone": "1c"},
map[string]interface{}{"value": float64(11)},
time.Unix(1435077219, 0))
assert.NoError(t, err)
m, err := p.ParseLine("servers.localhost.cpu_load 11 1435077219")
assert.NoError(t, err)
assert.Equal(t, exp.String(), m.String())
}
// Test basic functionality of ApplyTemplate
func TestApplyTemplate(t *testing.T) {
p, err := NewGraphiteParser("_",
[]string{"current.* measurement.measurement"},
nil)
assert.NoError(t, err)
measurement, _, _, _ := p.ApplyTemplate("current.users")
assert.Equal(t, "current_users", measurement)
}
// Test basic functionality of ApplyTemplate
func TestApplyTemplateNoMatch(t *testing.T) {
p, err := NewGraphiteParser(".",
[]string{"foo.bar measurement.measurement"},
nil)
assert.NoError(t, err)
measurement, _, _, _ := p.ApplyTemplate("current.users")
assert.Equal(t, "current.users", measurement)
}
// Test that most specific template is chosen
func TestApplyTemplateSpecific(t *testing.T) {
p, err := NewGraphiteParser("_",
[]string{
"current.* measurement.measurement",
"current.*.* measurement.measurement.service",
}, nil)
assert.NoError(t, err)
measurement, tags, _, _ := p.ApplyTemplate("current.users.facebook")
assert.Equal(t, "current_users", measurement)
service, ok := tags["service"]
if !ok {
t.Error("Expected for template to apply a 'service' tag, but not found")
}
if service != "facebook" {
t.Errorf("Expected service='facebook' tag, got service='%s'", service)
}
}
func TestApplyTemplateTags(t *testing.T) {
p, err := NewGraphiteParser("_",
[]string{"current.* measurement.measurement region=us-west"}, nil)
assert.NoError(t, err)
measurement, tags, _, _ := p.ApplyTemplate("current.users")
assert.Equal(t, "current_users", measurement)
region, ok := tags["region"]
if !ok {
t.Error("Expected for template to apply a 'region' tag, but not found")
}
if region != "us-west" {
t.Errorf("Expected region='us-west' tag, got region='%s'", region)
}
}
func TestApplyTemplateField(t *testing.T) {
p, err := NewGraphiteParser("_",
[]string{"current.* measurement.measurement.field"}, nil)
assert.NoError(t, err)
measurement, _, field, err := p.ApplyTemplate("current.users.logged_in")
assert.Equal(t, "current_users", measurement)
if field != "logged_in" {
t.Errorf("Parser.ApplyTemplate unexpected result. got %s, exp %s",
field, "logged_in")
}
}
func TestApplyTemplateFieldError(t *testing.T) {
p, err := NewGraphiteParser("_",
[]string{"current.* measurement.field.field"}, nil)
assert.NoError(t, err)
_, _, _, err = p.ApplyTemplate("current.users.logged_in")
if err == nil {
t.Errorf("Parser.ApplyTemplate unexpected result. got %s, exp %s", err,
"'field' can only be used once in each template: current.users.logged_in")
}
}
// Test Helpers
func errstr(err error) string {
if err != nil {
return err.Error()
}
return ""
}
View File
@@ -0,0 +1,57 @@
package influx
import (
"bytes"
"fmt"
"github.com/influxdata/telegraf"
"github.com/influxdata/influxdb/models"
)
// InfluxParser is an object for Parsing incoming metrics.
type InfluxParser struct {
// DefaultTags will be added to every parsed metric
DefaultTags map[string]string
}
// ParseMetrics returns a slice of Metrics from a text representation of a
// metric (in line-protocol format)
// with each metric separated by newlines. If any metrics fail to parse,
// a non-nil error will be returned in addition to the metrics that parsed
// successfully.
func (p *InfluxParser) Parse(buf []byte) ([]telegraf.Metric, error) {
// parse even if the buffer begins with a newline
buf = bytes.TrimPrefix(buf, []byte("\n"))
points, err := models.ParsePoints(buf)
metrics := make([]telegraf.Metric, len(points))
for i, point := range points {
tags := point.Tags()
for k, v := range p.DefaultTags {
// Only set tags not in parsed metric
if _, ok := tags[k]; !ok {
tags[k] = v
}
}
// Ignore error here because it's impossible that a model.Point
// wouldn't parse into client.Point properly
metrics[i], _ = telegraf.NewMetric(point.Name(), tags,
point.Fields(), point.Time())
}
return metrics, err
}
func (p *InfluxParser) ParseLine(line string) (telegraf.Metric, error) {
metrics, err := p.Parse([]byte(line + "\n"))
if err != nil {
return nil, err
}
if len(metrics) < 1 {
return nil, fmt.Errorf(
"Can not parse the line: %s, for data format: influx ", line)
}
return metrics[0], nil
}
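For reference, here is a minimal sketch of using this parser with default tags. It is not part of the commit and assumes the `plugins/parsers/influx` import path added above; the `main` wrapper is purely illustrative:

```go
package main

import (
	"fmt"

	"github.com/influxdata/telegraf/plugins/parsers/influx"
)

func main() {
	parser := influx.InfluxParser{
		// Merged into every parsed metric unless the metric already
		// carries a tag with the same key.
		DefaultTags: map[string]string{"dc": "us-east"},
	}

	metrics, err := parser.Parse(
		[]byte("cpu,host=foo usage_idle=99 1257894000000000000\n"))
	if err != nil {
		panic(err)
	}
	for _, m := range metrics {
		// Tags() now contains both host=foo and dc=us-east.
		fmt.Println(m.Name(), m.Tags(), m.Fields())
	}
}
```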
View File
@@ -0,0 +1,194 @@
package influx
import (
"testing"
"time"
"github.com/stretchr/testify/assert"
)
var exptime = time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC)
const (
validInflux = "cpu_load_short,cpu=cpu0 value=10 1257894000000000000"
validInfluxNewline = "\ncpu_load_short,cpu=cpu0 value=10 1257894000000000000\n"
invalidInflux = "I don't think this is line protocol"
invalidInflux2 = "{\"a\": 5, \"b\": {\"c\": 6}}"
)
const influxMulti = `
cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
`
const influxMultiSomeInvalid = `
cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu3, host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu4 , usage_idle=99,usage_busy=1
cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
`
func TestParseValidInflux(t *testing.T) {
parser := InfluxParser{}
metrics, err := parser.Parse([]byte(validInflux))
assert.NoError(t, err)
assert.Len(t, metrics, 1)
assert.Equal(t, "cpu_load_short", metrics[0].Name())
assert.Equal(t, map[string]interface{}{
"value": float64(10),
}, metrics[0].Fields())
assert.Equal(t, map[string]string{
"cpu": "cpu0",
}, metrics[0].Tags())
assert.Equal(t, exptime, metrics[0].Time())
metrics, err = parser.Parse([]byte(validInfluxNewline))
assert.NoError(t, err)
assert.Len(t, metrics, 1)
assert.Equal(t, "cpu_load_short", metrics[0].Name())
assert.Equal(t, map[string]interface{}{
"value": float64(10),
}, metrics[0].Fields())
assert.Equal(t, map[string]string{
"cpu": "cpu0",
}, metrics[0].Tags())
assert.Equal(t, exptime, metrics[0].Time())
}
func TestParseLineValidInflux(t *testing.T) {
parser := InfluxParser{}
metric, err := parser.ParseLine(validInflux)
assert.NoError(t, err)
assert.Equal(t, "cpu_load_short", metric.Name())
assert.Equal(t, map[string]interface{}{
"value": float64(10),
}, metric.Fields())
assert.Equal(t, map[string]string{
"cpu": "cpu0",
}, metric.Tags())
assert.Equal(t, exptime, metric.Time())
metric, err = parser.ParseLine(validInfluxNewline)
assert.NoError(t, err)
assert.Equal(t, "cpu_load_short", metric.Name())
assert.Equal(t, map[string]interface{}{
"value": float64(10),
}, metric.Fields())
assert.Equal(t, map[string]string{
"cpu": "cpu0",
}, metric.Tags())
assert.Equal(t, exptime, metric.Time())
}
func TestParseMultipleValid(t *testing.T) {
parser := InfluxParser{}
metrics, err := parser.Parse([]byte(influxMulti))
assert.NoError(t, err)
assert.Len(t, metrics, 7)
for _, metric := range metrics {
assert.Equal(t, "cpu", metric.Name())
assert.Equal(t, map[string]string{
"datacenter": "us-east",
"host": "foo",
}, metric.Tags())
assert.Equal(t, map[string]interface{}{
"usage_idle": float64(99),
"usage_busy": float64(1),
}, metric.Fields())
}
}
func TestParseSomeValid(t *testing.T) {
parser := InfluxParser{}
metrics, err := parser.Parse([]byte(influxMultiSomeInvalid))
assert.Error(t, err)
assert.Len(t, metrics, 4)
for _, metric := range metrics {
assert.Equal(t, "cpu", metric.Name())
assert.Equal(t, map[string]string{
"datacenter": "us-east",
"host": "foo",
}, metric.Tags())
assert.Equal(t, map[string]interface{}{
"usage_idle": float64(99),
"usage_busy": float64(1),
}, metric.Fields())
}
}
// Test that default tags are applied.
func TestParseDefaultTags(t *testing.T) {
parser := InfluxParser{
DefaultTags: map[string]string{
"tag": "default",
},
}
metrics, err := parser.Parse([]byte(influxMultiSomeInvalid))
assert.Error(t, err)
assert.Len(t, metrics, 4)
for _, metric := range metrics {
assert.Equal(t, "cpu", metric.Name())
assert.Equal(t, map[string]string{
"datacenter": "us-east",
"host": "foo",
"tag": "default",
}, metric.Tags())
assert.Equal(t, map[string]interface{}{
"usage_idle": float64(99),
"usage_busy": float64(1),
}, metric.Fields())
}
}
// Verify that metric tags will override default tags
func TestParseDefaultTagsOverride(t *testing.T) {
parser := InfluxParser{
DefaultTags: map[string]string{
"host": "default",
},
}
metrics, err := parser.Parse([]byte(influxMultiSomeInvalid))
assert.Error(t, err)
assert.Len(t, metrics, 4)
for _, metric := range metrics {
assert.Equal(t, "cpu", metric.Name())
assert.Equal(t, map[string]string{
"datacenter": "us-east",
"host": "foo",
}, metric.Tags())
assert.Equal(t, map[string]interface{}{
"usage_idle": float64(99),
"usage_busy": float64(1),
}, metric.Fields())
}
}
func TestParseInvalidInflux(t *testing.T) {
parser := InfluxParser{}
_, err := parser.Parse([]byte(invalidInflux))
assert.Error(t, err)
_, err = parser.Parse([]byte(invalidInflux2))
assert.Error(t, err)
_, err = parser.ParseLine(invalidInflux)
assert.Error(t, err)
_, err = parser.ParseLine(invalidInflux2)
assert.Error(t, err)
}
View File
@@ -0,0 +1,109 @@
package json
import (
"encoding/json"
"fmt"
"strconv"
"strings"
"time"
"github.com/influxdata/telegraf"
)
type JSONParser struct {
MetricName string
TagKeys []string
DefaultTags map[string]string
}
func (p *JSONParser) Parse(buf []byte) ([]telegraf.Metric, error) {
metrics := make([]telegraf.Metric, 0)
var jsonOut map[string]interface{}
err := json.Unmarshal(buf, &jsonOut)
if err != nil {
err = fmt.Errorf("unable to parse out as JSON, %s", err)
return nil, err
}
tags := make(map[string]string)
for k, v := range p.DefaultTags {
tags[k] = v
}
for _, tag := range p.TagKeys {
switch v := jsonOut[tag].(type) {
case string:
tags[tag] = v
}
delete(jsonOut, tag)
}
f := JSONFlattener{}
err = f.FlattenJSON("", jsonOut)
if err != nil {
return nil, err
}
metric, err := telegraf.NewMetric(p.MetricName, tags, f.Fields, time.Now().UTC())
if err != nil {
return nil, err
}
return append(metrics, metric), nil
}
func (p *JSONParser) ParseLine(line string) (telegraf.Metric, error) {
metrics, err := p.Parse([]byte(line + "\n"))
if err != nil {
return nil, err
}
if len(metrics) < 1 {
return nil, fmt.Errorf("Can not parse the line: %s, for data format: influx ", line)
}
return metrics[0], nil
}
type JSONFlattener struct {
Fields map[string]interface{}
}
// FlattenJSON flattens nested maps/interfaces into a fields map
func (f *JSONFlattener) FlattenJSON(
fieldname string,
v interface{},
) error {
if f.Fields == nil {
f.Fields = make(map[string]interface{})
}
fieldname = strings.Trim(fieldname, "_")
switch t := v.(type) {
case map[string]interface{}:
for k, v := range t {
err := f.FlattenJSON(fieldname+"_"+k+"_", v)
if err != nil {
return err
}
}
case []interface{}:
for i, v := range t {
k := strconv.Itoa(i)
err := f.FlattenJSON(fieldname+"_"+k+"_", v)
if err != nil {
return err
}
}
case float64:
f.Fields[fieldname] = t
case bool, string, nil:
// ignored types
return nil
default:
return fmt.Errorf("JSON Flattener: got unexpected type %T with value %v (%s)",
t, t, fieldname)
}
return nil
}
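A minimal sketch of the flattening behavior follows; it is not part of the commit and assumes the `plugins/parsers/json` import path added above (the alias and `main` wrapper are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"

	// Aliased to avoid clashing with encoding/json.
	jsonparser "github.com/influxdata/telegraf/plugins/parsers/json"
)

func main() {
	var out map[string]interface{}
	if err := json.Unmarshal(
		[]byte(`{"a": 5, "b": {"c": 6, "ok": true}}`), &out); err != nil {
		panic(err)
	}

	// Nested keys are joined with "_"; booleans, strings and nulls are
	// skipped, so only numeric values become fields.
	f := jsonparser.JSONFlattener{}
	if err := f.FlattenJSON("", out); err != nil {
		panic(err)
	}
	fmt.Println(f.Fields) // map[a:5 b_c:6]
}
```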
View File
@@ -0,0 +1,284 @@
package json
import (
"testing"
"github.com/stretchr/testify/assert"
)
const (
validJSON = "{\"a\": 5, \"b\": {\"c\": 6}}"
validJSONNewline = "\n{\"d\": 7, \"b\": {\"d\": 8}}\n"
invalidJSON = "I don't think this is JSON"
invalidJSON2 = "{\"a\": 5, \"b\": \"c\": 6}}"
)
const validJSONTags = `
{
"a": 5,
"b": {
"c": 6
},
"mytag": "foobar",
"othertag": "baz"
}
`
func TestParseValidJSON(t *testing.T) {
parser := JSONParser{
MetricName: "json_test",
}
// Most basic vanilla test
metrics, err := parser.Parse([]byte(validJSON))
assert.NoError(t, err)
assert.Len(t, metrics, 1)
assert.Equal(t, "json_test", metrics[0].Name())
assert.Equal(t, map[string]interface{}{
"a": float64(5),
"b_c": float64(6),
}, metrics[0].Fields())
assert.Equal(t, map[string]string{}, metrics[0].Tags())
// Test that newlines are fine
metrics, err = parser.Parse([]byte(validJSONNewline))
assert.NoError(t, err)
assert.Len(t, metrics, 1)
assert.Equal(t, "json_test", metrics[0].Name())
assert.Equal(t, map[string]interface{}{
"d": float64(7),
"b_d": float64(8),
}, metrics[0].Fields())
assert.Equal(t, map[string]string{}, metrics[0].Tags())
// Test that strings without TagKeys defined are ignored
metrics, err = parser.Parse([]byte(validJSONTags))
assert.NoError(t, err)
assert.Len(t, metrics, 1)
assert.Equal(t, "json_test", metrics[0].Name())
assert.Equal(t, map[string]interface{}{
"a": float64(5),
"b_c": float64(6),
}, metrics[0].Fields())
assert.Equal(t, map[string]string{}, metrics[0].Tags())
}
func TestParseLineValidJSON(t *testing.T) {
parser := JSONParser{
MetricName: "json_test",
}
// Most basic vanilla test
metric, err := parser.ParseLine(validJSON)
assert.NoError(t, err)
assert.Equal(t, "json_test", metric.Name())
assert.Equal(t, map[string]interface{}{
"a": float64(5),
"b_c": float64(6),
}, metric.Fields())
assert.Equal(t, map[string]string{}, metric.Tags())
// Test that newlines are fine
metric, err = parser.ParseLine(validJSONNewline)
assert.NoError(t, err)
assert.Equal(t, "json_test", metric.Name())
assert.Equal(t, map[string]interface{}{
"d": float64(7),
"b_d": float64(8),
}, metric.Fields())
assert.Equal(t, map[string]string{}, metric.Tags())
// Test that strings without TagKeys defined are ignored
metric, err = parser.ParseLine(validJSONTags)
assert.NoError(t, err)
assert.Equal(t, "json_test", metric.Name())
assert.Equal(t, map[string]interface{}{
"a": float64(5),
"b_c": float64(6),
}, metric.Fields())
assert.Equal(t, map[string]string{}, metric.Tags())
}
func TestParseInvalidJSON(t *testing.T) {
parser := JSONParser{
MetricName: "json_test",
}
_, err := parser.Parse([]byte(invalidJSON))
assert.Error(t, err)
_, err = parser.Parse([]byte(invalidJSON2))
assert.Error(t, err)
_, err = parser.ParseLine(invalidJSON)
assert.Error(t, err)
}
func TestParseWithTagKeys(t *testing.T) {
// Test that strings not matching tag keys are ignored
parser := JSONParser{
MetricName: "json_test",
TagKeys: []string{"wrongtagkey"},
}
metrics, err := parser.Parse([]byte(validJSONTags))
assert.NoError(t, err)
assert.Len(t, metrics, 1)
assert.Equal(t, "json_test", metrics[0].Name())
assert.Equal(t, map[string]interface{}{
"a": float64(5),
"b_c": float64(6),
}, metrics[0].Fields())
assert.Equal(t, map[string]string{}, metrics[0].Tags())
// Test that single tag key is found and applied
parser = JSONParser{
MetricName: "json_test",
TagKeys: []string{"mytag"},
}
metrics, err = parser.Parse([]byte(validJSONTags))
assert.NoError(t, err)
assert.Len(t, metrics, 1)
assert.Equal(t, "json_test", metrics[0].Name())
assert.Equal(t, map[string]interface{}{
"a": float64(5),
"b_c": float64(6),
}, metrics[0].Fields())
assert.Equal(t, map[string]string{
"mytag": "foobar",
}, metrics[0].Tags())
// Test that both tag keys are found and applied
parser = JSONParser{
MetricName: "json_test",
TagKeys: []string{"mytag", "othertag"},
}
metrics, err = parser.Parse([]byte(validJSONTags))
assert.NoError(t, err)
assert.Len(t, metrics, 1)
assert.Equal(t, "json_test", metrics[0].Name())
assert.Equal(t, map[string]interface{}{
"a": float64(5),
"b_c": float64(6),
}, metrics[0].Fields())
assert.Equal(t, map[string]string{
"mytag": "foobar",
"othertag": "baz",
}, metrics[0].Tags())
}
func TestParseLineWithTagKeys(t *testing.T) {
// Test that strings not matching tag keys are ignored
parser := JSONParser{
MetricName: "json_test",
TagKeys: []string{"wrongtagkey"},
}
metric, err := parser.ParseLine(validJSONTags)
assert.NoError(t, err)
assert.Equal(t, "json_test", metric.Name())
assert.Equal(t, map[string]interface{}{
"a": float64(5),
"b_c": float64(6),
}, metric.Fields())
assert.Equal(t, map[string]string{}, metric.Tags())
// Test that single tag key is found and applied
parser = JSONParser{
MetricName: "json_test",
TagKeys: []string{"mytag"},
}
metric, err = parser.ParseLine(validJSONTags)
assert.NoError(t, err)
assert.Equal(t, "json_test", metric.Name())
assert.Equal(t, map[string]interface{}{
"a": float64(5),
"b_c": float64(6),
}, metric.Fields())
assert.Equal(t, map[string]string{
"mytag": "foobar",
}, metric.Tags())
// Test that both tag keys are found and applied
parser = JSONParser{
MetricName: "json_test",
TagKeys: []string{"mytag", "othertag"},
}
metric, err = parser.ParseLine(validJSONTags)
assert.NoError(t, err)
assert.Equal(t, "json_test", metric.Name())
assert.Equal(t, map[string]interface{}{
"a": float64(5),
"b_c": float64(6),
}, metric.Fields())
assert.Equal(t, map[string]string{
"mytag": "foobar",
"othertag": "baz",
}, metric.Tags())
}
func TestParseValidJSONDefaultTags(t *testing.T) {
parser := JSONParser{
MetricName: "json_test",
TagKeys: []string{"mytag"},
DefaultTags: map[string]string{
"t4g": "default",
},
}
// Most basic vanilla test
metrics, err := parser.Parse([]byte(validJSON))
assert.NoError(t, err)
assert.Len(t, metrics, 1)
assert.Equal(t, "json_test", metrics[0].Name())
assert.Equal(t, map[string]interface{}{
"a": float64(5),
"b_c": float64(6),
}, metrics[0].Fields())
assert.Equal(t, map[string]string{"t4g": "default"}, metrics[0].Tags())
// Test that tagkeys and default tags are applied
metrics, err = parser.Parse([]byte(validJSONTags))
assert.NoError(t, err)
assert.Len(t, metrics, 1)
assert.Equal(t, "json_test", metrics[0].Name())
assert.Equal(t, map[string]interface{}{
"a": float64(5),
"b_c": float64(6),
}, metrics[0].Fields())
assert.Equal(t, map[string]string{
"t4g": "default",
"mytag": "foobar",
}, metrics[0].Tags())
}
// Test that default tags are overridden by tag keys
func TestParseValidJSONDefaultTagsOverride(t *testing.T) {
parser := JSONParser{
MetricName: "json_test",
TagKeys: []string{"mytag"},
DefaultTags: map[string]string{
"mytag": "default",
},
}
// Most basic vanilla test
metrics, err := parser.Parse([]byte(validJSON))
assert.NoError(t, err)
assert.Len(t, metrics, 1)
assert.Equal(t, "json_test", metrics[0].Name())
assert.Equal(t, map[string]interface{}{
"a": float64(5),
"b_c": float64(6),
}, metrics[0].Fields())
assert.Equal(t, map[string]string{"mytag": "default"}, metrics[0].Tags())
// Test that tagkeys override default tags
metrics, err = parser.Parse([]byte(validJSONTags))
assert.NoError(t, err)
assert.Len(t, metrics, 1)
assert.Equal(t, "json_test", metrics[0].Name())
assert.Equal(t, map[string]interface{}{
"a": float64(5),
"b_c": float64(6),
}, metrics[0].Fields())
assert.Equal(t, map[string]string{
"mytag": "foobar",
}, metrics[0].Tags())
}
View File
@@ -0,0 +1,95 @@
package parsers
import (
"fmt"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/parsers/graphite"
"github.com/influxdata/telegraf/plugins/parsers/influx"
"github.com/influxdata/telegraf/plugins/parsers/json"
)
// ParserInput is an interface for input plugins that are able to parse
// arbitrary data formats.
type ParserInput interface {
// SetParser sets the parser function for the interface
SetParser(parser Parser)
}
// Parser is an interface defining functions that a parser plugin must satisfy.
type Parser interface {
// Parse takes a byte buffer separated by newlines
// ie, `cpu.usage.idle 90\ncpu.usage.busy 10`
// and parses it into telegraf metrics
Parse(buf []byte) ([]telegraf.Metric, error)
// ParseLine takes a single string metric
// ie, "cpu.usage.idle 90"
// and parses it into a telegraf metric.
ParseLine(line string) (telegraf.Metric, error)
}
// Config is a struct that covers the data types needed for all parser types,
// and can be used to instantiate _any_ of the parsers.
type Config struct {
// Dataformat can be one of: json, influx, graphite
DataFormat string
// Separator only applied to Graphite data.
Separator string
// Templates only apply to Graphite data.
Templates []string
// TagKeys only apply to JSON data
TagKeys []string
// MetricName only applies to JSON data. This will be the name of the measurement.
MetricName string
// DefaultTags are the default tags that will be added to all parsed metrics.
DefaultTags map[string]string
}
// NewParser returns a Parser interface based on the given config.
func NewParser(config *Config) (Parser, error) {
var err error
var parser Parser
switch config.DataFormat {
case "json":
parser, err = NewJSONParser(config.MetricName,
config.TagKeys, config.DefaultTags)
case "influx":
parser, err = NewInfluxParser()
case "graphite":
parser, err = NewGraphiteParser(config.Separator,
config.Templates, config.DefaultTags)
default:
err = fmt.Errorf("Invalid data format: %s", config.DataFormat)
}
return parser, err
}
func NewJSONParser(
metricName string,
tagKeys []string,
defaultTags map[string]string,
) (Parser, error) {
parser := &json.JSONParser{
MetricName: metricName,
TagKeys: tagKeys,
DefaultTags: defaultTags,
}
return parser, nil
}
func NewInfluxParser() (Parser, error) {
return &influx.InfluxParser{}, nil
}
func NewGraphiteParser(
separator string,
templates []string,
defaultTags map[string]string,
) (Parser, error) {
return graphite.NewGraphiteParser(separator, templates, defaultTags)
}
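To tie the registry together, here is a minimal sketch of the instantiation path the configuration layer would follow. It is not part of the commit; the `main` wrapper is purely illustrative:

```go
package main

import (
	"fmt"

	"github.com/influxdata/telegraf/plugins/parsers"
)

func main() {
	// One Config type drives all three parsers; only the fields relevant
	// to the chosen DataFormat are consulted.
	parser, err := parsers.NewParser(&parsers.Config{
		DataFormat: "graphite",
		Separator:  "_",
		Templates:  []string{"servers.* .host.measurement*"},
	})
	if err != nil {
		panic(err)
	}

	m, err := parser.ParseLine("servers.localhost.cpu_load 11 1435077219")
	if err != nil {
		panic(err)
	}
	fmt.Println(m.String())
}
```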