Merge branch 'master' into dn-particle-plugin

This commit is contained in:
David G. Simmons 2017-11-03 12:13:49 -04:00 committed by GitHub
commit 43b8f19dce
118 changed files with 3978 additions and 1064 deletions

.gitignore (vendored)

@@ -5,3 +5,4 @@ tivan
.idea
*~
*#
.DS_Store


@@ -1,9 +1,12 @@
## v1.5 [unreleased]
### New Plugins
- [basicstats](./plugins/aggregators/basicstats/README.md) - Thanks to @toni-moreno
- [jolokia2](./plugins/inputs/jolokia2/README.md) - Thanks to @dylanmei
- [nginx_plus](./plugins/inputs/nginx_plus/README.md) - Thanks to @mplonka & @poblahblahblah
- [smart](./plugins/inputs/smart/README.md) - Thanks to @rickard-von-essen
- [teamspeak](./plugins/inputs/teamspeak/README.md) - Thanks to @p4ddy1
- [wavefront](./plugins/outputs/wavefront/README.md) - Thanks to @puckpuck
### Release Notes
@@ -21,11 +24,9 @@
- [#3170](https://github.com/influxdata/telegraf/pull/3170): Add support for sharding based on metric name.
- [#3196](https://github.com/influxdata/telegraf/pull/3196): Add Kafka output plugin topic_suffix option.
- [#3027](https://github.com/influxdata/telegraf/pull/3027): Include mount mode option in disk metrics.
- [#3191](https://github.com/influxdata/telegraf/pull/3191): TLS and MTLS enhancements to HTTPListener input plugin.
- [#3213](https://github.com/influxdata/telegraf/pull/3213): Add polling method to logparser and tail inputs.
- [#3211](https://github.com/influxdata/telegraf/pull/3211): Add timeout option for kubernetes input.
- [#3234](https://github.com/influxdata/telegraf/pull/3234): Add support for timing sums in statsd input.
- [#2617](https://github.com/influxdata/telegraf/issues/2617): Add resource limit monitoring to procstat.
- [#3236](https://github.com/influxdata/telegraf/pull/3236): Add support for k8s service DNS discovery to prometheus input.
@@ -36,13 +37,50 @@
- [#3106](https://github.com/influxdata/telegraf/pull/3106): Add configurable separator for metrics and fields in opentsdb output.
- [#1692](https://github.com/influxdata/telegraf/pull/1692): Add support for the rollbar occurrence webhook event.
- [#3160](https://github.com/influxdata/telegraf/pull/3160): Add Wavefront output plugin.
- [#3281](https://github.com/influxdata/telegraf/pull/3281): Add extra wired tiger cache metrics to mongodb input.
- [#3141](https://github.com/influxdata/telegraf/pull/3141): Collect Docker Swarm service metrics in docker input plugin.
- [#2449](https://github.com/influxdata/telegraf/pull/2449): Add smart input plugin for collecting S.M.A.R.T. data.
- [#3269](https://github.com/influxdata/telegraf/pull/3269): Add cluster health level configuration to elasticsearch input.
- [#3304](https://github.com/influxdata/telegraf/pull/3304): Add ability to limit node stats in elasticsearch input.
- [#2167](https://github.com/influxdata/telegraf/pull/2167): Add new basicstats aggregator.
- [#3344](https://github.com/influxdata/telegraf/pull/3344): Add UDP IPv6 support to statsd input.
- [#3350](https://github.com/influxdata/telegraf/pull/3350): Use labels in prometheus output for string fields.
- [#3358](https://github.com/influxdata/telegraf/pull/3358): Add support for decimal timestamps to ts-epoch modifier.
- [#3337](https://github.com/influxdata/telegraf/pull/3337): Add histogram and summary types and use in prometheus plugins.
- [#3365](https://github.com/influxdata/telegraf/pull/3365): Gather concurrently from snmp agents.
- [#3333](https://github.com/influxdata/telegraf/issues/3333): Perform DNS lookup before ping and report result.
- [#3398](https://github.com/influxdata/telegraf/issues/3398): Add instance name option to varnish plugin.
- [#3406](https://github.com/influxdata/telegraf/pull/3406): Add support for SSL settings to ElasticSearch output plugin.
- [#3315](https://github.com/influxdata/telegraf/pull/3315): Add Teamspeak 3 input plugin.
### Bugfixes
- [#3136](https://github.com/influxdata/telegraf/issues/3136): Fix webhooks input address in use during reload.
- [#3258](https://github.com/influxdata/telegraf/issues/3258): Unlock Statsd when stopping to prevent deadlock.
- [#3319](https://github.com/influxdata/telegraf/issues/3319): Fix cloudwatch output requires unneeded permissions.
- [#3351](https://github.com/influxdata/telegraf/issues/3351): Fix prometheus passthrough for existing value types.
## v1.4.4 [unreleased]
- [#3401](https://github.com/influxdata/telegraf/pull/3401): Use schema specified in mqtt_consumer input.
## v1.4.3 [2017-10-25]
### Bugfixes
- [#3327](https://github.com/influxdata/telegraf/issues/3327): Fix container name filters in docker input.
- [#3321](https://github.com/influxdata/telegraf/issues/3321): Fix snmpwalk address format in leofs input.
- [#3329](https://github.com/influxdata/telegraf/issues/3329): Fix case sensitivity issue in sqlserver query.
- [#3342](https://github.com/influxdata/telegraf/pull/3342): Fix CPU input plugin stuck after suspend on Linux.
- [#3013](https://github.com/influxdata/telegraf/issues/3013): Fix mongodb input panic when restarting mongodb.
- [#3224](https://github.com/influxdata/telegraf/pull/3224): Preserve url path prefix in influx output.
- [#3354](https://github.com/influxdata/telegraf/pull/3354): Fix TELEGRAF_OPTS expansion in systemd service unit.
- [#3357](https://github.com/influxdata/telegraf/issues/3357): Remove warning when JSON contains null value.
- [#3375](https://github.com/influxdata/telegraf/issues/3375): Fix ACL token usage in consul input plugin.
- [#3369](https://github.com/influxdata/telegraf/issues/3369): Fix unquoting error with Tomcat 6.
- [#3373](https://github.com/influxdata/telegraf/issues/3373): Fix syscall panic in diskio on some Linux systems.
## v1.4.2 [2017-10-10]
### Bugfixes
@@ -50,6 +88,11 @@
- [#3265](https://github.com/influxdata/telegraf/issues/3265): Fix parsing of JSON with a UTF8 BOM in httpjson.
- [#2887](https://github.com/influxdata/telegraf/issues/2887): Allow JSON data format to contain zero metrics.
- [#3284](https://github.com/influxdata/telegraf/issues/3284): Fix format of connection_timeout in mqtt_consumer.
- [#3081](https://github.com/influxdata/telegraf/issues/3081): Fix case sensitivity error in sqlserver input.
- [#3297](https://github.com/influxdata/telegraf/issues/3297): Add support for proxy environment variables to http_response.
- [#1588](https://github.com/influxdata/telegraf/issues/1588): Add support for standard proxy env vars in outputs.
- [#3282](https://github.com/influxdata/telegraf/issues/3282): Fix panic in cpu input if number of cpus changes.
- [#2854](https://github.com/influxdata/telegraf/issues/2854): Use chunked transfer encoding in InfluxDB output.
## v1.4.1 [2017-09-26]

Godeps

@@ -40,6 +40,8 @@ github.com/kballard/go-shellquote d8ec1a69a250a17bb0e419c386eac1f3711dc142
github.com/matttproud/golang_protobuf_extensions c12348ce28de40eed0136aa2b644d0ee0650e56c
github.com/Microsoft/go-winio ce2922f643c8fd76b46cadc7f404a06282678b34
github.com/miekg/dns 99f84ae56e75126dd77e5de4fae2ea034a468ca1
github.com/mitchellh/mapstructure d0303fe809921458f417bcf828397a65db30a7e4
github.com/multiplay/go-ts3 07477f49b8dfa3ada231afc7b7b17617d42afe8e
github.com/naoina/go-stringutil 6b638e95a32d0c1131db0e7fe83775cbea4a0d0b
github.com/nats-io/go-nats ea9585611a4ab58a205b9b125ebd74c389a6b898
github.com/nats-io/nats ea9585611a4ab58a205b9b125ebd74c389a6b898


@@ -5,8 +5,7 @@ and writing metrics.
Design goals are to have a minimal memory footprint with a plugin system so
that developers in the community can easily add support for collecting metrics
from local or remote services.
Telegraf is plugin-driven and has the concept of 4 distinct plugins:
@@ -193,9 +192,11 @@ configuration options.
* [riak](./plugins/inputs/riak)
* [salesforce](./plugins/inputs/salesforce)
* [sensors](./plugins/inputs/sensors)
* [smart](./plugins/inputs/smart)
* [snmp](./plugins/inputs/snmp)
* [snmp_legacy](./plugins/inputs/snmp_legacy)
* [sql server](./plugins/inputs/sqlserver) (microsoft)
* [teamspeak](./plugins/inputs/teamspeak)
* [tomcat](./plugins/inputs/tomcat)
* [twemproxy](./plugins/inputs/twemproxy)
* [varnish](./plugins/inputs/varnish)
@@ -254,6 +255,7 @@ formats may be used with input plugins supporting the `data_format` option:
## Aggregator Plugins
* [basicstats](./plugins/aggregators/basicstats)
* [minmax](./plugins/aggregators/minmax)
* [histogram](./plugins/aggregators/histogram)


@@ -28,6 +28,18 @@ type Accumulator interface {
tags map[string]string,
t ...time.Time)
// AddSummary is the same as AddFields, but will add the metric as a "Summary" type
AddSummary(measurement string,
fields map[string]interface{},
tags map[string]string,
t ...time.Time)
// AddHistogram is the same as AddFields, but will add the metric as a "Histogram" type
AddHistogram(measurement string,
fields map[string]interface{},
tags map[string]string,
t ...time.Time)
SetPrecision(precision, interval time.Duration)
AddError(err error)
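For orientation, a minimal sketch of how an input plugin might emit a histogram-typed metric through this interface (the plugin, measurement, and field names are made up for illustration):

```go
package myplugin

import "github.com/influxdata/telegraf"

// MyPlugin is a hypothetical input used only to illustrate AddHistogram.
type MyPlugin struct{}

func (p *MyPlugin) Gather(acc telegraf.Accumulator) error {
	fields := map[string]interface{}{
		// Assumed field layout; any numeric fields work.
		"request_ms_sum":   1234.5,
		"request_ms_count": 42.0,
	}
	tags := map[string]string{"handler": "/api"}
	// Same arguments as AddFields, but the metric is marked with the
	// Histogram value type instead of Untyped.
	acc.AddHistogram("http_server", fields, tags)
	return nil
}
```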


@@ -76,6 +76,28 @@ func (ac *accumulator) AddCounter(
}
}
func (ac *accumulator) AddSummary(
measurement string,
fields map[string]interface{},
tags map[string]string,
t ...time.Time,
) {
if m := ac.maker.MakeMetric(measurement, fields, tags, telegraf.Summary, ac.getTime(t)); m != nil {
ac.metrics <- m
}
}
func (ac *accumulator) AddHistogram(
measurement string,
fields map[string]interface{},
tags map[string]string,
t ...time.Time,
) {
if m := ac.maker.MakeMetric(measurement, fields, tags, telegraf.Histogram, ac.getTime(t)); m != nil {
ac.metrics <- m
}
}
// AddError passes a runtime error to the accumulator.
// The error will be tagged with the plugin name and written to the log.
func (ac *accumulator) AddError(err error) {


@@ -252,7 +252,7 @@ func (a *Agent) flusher(shutdown chan struct{}, metricC chan telegraf.Metric, ag
// the flusher will flush after metrics are collected.
time.Sleep(time.Millisecond * 300)
// create an output metric channel and a goroutine that continuously passes
// each metric onto the output plugins & aggregators.
outMetricC := make(chan telegraf.Metric, 100)
var wg sync.WaitGroup


@@ -6,8 +6,8 @@ machine:
- rabbitmq-server
post:
- sudo rm -rf /usr/local/go
- wget https://storage.googleapis.com/golang/go1.9.1.linux-amd64.tar.gz
- sudo tar -C /usr/local -xzf go1.9.1.linux-amd64.tar.gz
- go version
dependencies:


@@ -55,9 +55,6 @@ var fUsage = flag.String("usage", "",
var fService = flag.String("service", "",
"operate on the service")
var (
nextVersion = "1.5.0"
version string


@@ -39,6 +39,11 @@ metrics as they pass through Telegraf:
Both Aggregators and Processors analyze metrics as they pass through Telegraf.
Use [measurement filtering](CONFIGURATION.md#measurement-filtering)
to control which metrics are passed through a processor or aggregator. If a
metric is filtered out, it bypasses the plugin and is passed downstream
to the next plugin, as in the sketch below.
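A sketch of this behavior (the plugin choice here is illustrative, not mandated by these docs):

```toml
# Only metrics named "cpu" pass through the printer processor; all
# other metrics bypass it and continue downstream to the outputs.
[[processors.printer]]
  namepass = ["cpu"]

[[outputs.file]]
  files = ["stdout"]
```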
**Processor** plugins process metrics as they pass through and immediately emit
results based on the values they process. For example, this could be printing
all metrics or adding a tag to all metrics that pass through.


@@ -24,6 +24,9 @@ Environment variables can be used anywhere in the config file, simply prepend
them with $. For strings the variable must be within quotes (ie, "$STR_VAR");
for numbers and booleans it should be plain (ie, $INT_VAR, $BOOL_VAR).
When using the `.deb` or `.rpm` packages, you can define environment variables
in the `/etc/default/telegraf` file.
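For example, a minimal sketch of environment-variable substitution in `telegraf.conf` (the variable names are hypothetical, e.g. exported from `/etc/default/telegraf`):

```toml
[agent]
  ## Strings must be quoted:
  interval = "$COLLECT_INTERVAL"   # e.g. COLLECT_INTERVAL=10s
  ## Numbers and booleans are plain:
  metric_batch_size = $BATCH_SIZE  # e.g. BATCH_SIZE=1000
  debug = $TELEGRAF_DEBUG          # e.g. TELEGRAF_DEBUG=false
```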
## Configuration file locations
The location of the configuration file can be set via the `--config` command
@@ -95,9 +98,13 @@ you can configure that here.
* **name_suffix**: Specifies a suffix to attach to the measurement name.
* **tags**: A map of tags to apply to a specific input's measurements.
The [measurement filtering](#measurement-filtering) parameters can be used to
limit what metrics are emitted from the input plugin.
## Output Configuration
The [measurement filtering](#measurement-filtering) parameters can be used to
limit what metrics are emitted from the output plugin.
## Aggregator Configuration
@@ -118,6 +125,10 @@ aggregator and will not get sent to the output plugins.
* **name_suffix**: Specifies a suffix to attach to the measurement name.
* **tags**: A map of tags to apply to a specific input's measurements.
The [measurement filtering](#measurement-filtering) parameters can be used to
limit what metrics are handled by the aggregator. Excluded metrics are passed
downstream to the next aggregator.
## Processor Configuration
The following config parameters are available for all processors:
@@ -125,6 +136,10 @@ The following config parameters are available for all processors:
* **order**: This is the order in which the processor(s) get executed. If this
is not specified then processor execution order will be random.
The [measurement filtering](#measurement-filtering) parameters may be used
to limit what metrics are handled by the processor. Excluded metrics are
passed downstream to the next processor.
#### Measurement Filtering
Filters can be configured per input, output, processor, or aggregator,
@@ -374,3 +389,15 @@ to the system load metrics due to the `namepass` parameter.
[[outputs.file]]
files = ["stdout"]
```
#### Processor Configuration Examples:
Print only the metrics with `cpu` as the measurement name; all metrics are
passed to the output:
```toml
[[processors.printer]]
namepass = ["cpu"]
[[outputs.file]]
files = ["/tmp/metrics.out"]
```


@@ -20,3 +20,8 @@ If running as a service add the environment variable to `/etc/default/telegraf`:
```
GODEBUG=netdns=cgo
```
### Q: When will the next version be released?
The latest release date estimate can be viewed on the
[milestones](https://github.com/influxdata/telegraf/milestones) page.


@@ -82,6 +82,8 @@ following works:
- github.com/streadway/amqp [BSD](https://github.com/streadway/amqp/blob/master/LICENSE)
- github.com/stretchr/objx [MIT](https://github.com/stretchr/objx/blob/master/LICENSE.md)
- github.com/stretchr/testify [MIT](https://github.com/stretchr/testify/blob/master/LICENCE.txt)
- github.com/mitchellh/mapstructure [MIT](https://github.com/mitchellh/mapstructure/blob/master/LICENSE)
- github.com/multiplay/go-ts3 [BSD](https://github.com/multiplay/go-ts3/blob/master/LICENSE)
- github.com/vjeantet/grok [APACHE](https://github.com/vjeantet/grok/blob/master/LICENSE)
- github.com/wvanbergen/kafka [MIT](https://github.com/wvanbergen/kafka/blob/master/LICENSE)
- github.com/wvanbergen/kazoo-go [MIT](https://github.com/wvanbergen/kazoo-go/blob/master/MIT-LICENSE)


@@ -38,7 +38,7 @@ Telegraf can manage its own service through the --service flag:
| `telegraf.exe --service stop` | Stop the telegraf service |
Troubleshooting common error #1067
When installing as a service in Windows, always double-check that you specify the full path of the config file; otherwise the Windows service will fail to start.


@@ -1586,8 +1586,8 @@
# # Read metrics from a LeoFS Server via SNMP
# [[inputs.leofs]]
# ## An array of URLs of the form:
# ## host [ ":" port]
# servers = ["127.0.0.1:4020"]
# # Provides Linux sysctl fs metrics


@@ -77,3 +77,40 @@ func compileFilterNoGlob(filters []string) Filter {
}
return &out
}
type IncludeExcludeFilter struct {
include Filter
exclude Filter
}
func NewIncludeExcludeFilter(
include []string,
exclude []string,
) (Filter, error) {
in, err := Compile(include)
if err != nil {
return nil, err
}
ex, err := Compile(exclude)
if err != nil {
return nil, err
}
return &IncludeExcludeFilter{in, ex}, nil
}
func (f *IncludeExcludeFilter) Match(s string) bool {
if f.include != nil {
if !f.include.Match(s) {
return false
}
}
if f.exclude != nil {
if f.exclude.Match(s) {
return false
}
}
return true
}
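A minimal usage sketch of the new filter (the patterns are hypothetical): a string matches only if it matches the include list (when one is given) and does not match the exclude list:

```go
package main

import (
	"fmt"

	"github.com/influxdata/telegraf/filter"
)

func main() {
	// Include everything starting with "etc", then drop "etcd2".
	f, err := filter.NewIncludeExcludeFilter([]string{"etc*"}, []string{"etcd2"})
	if err != nil {
		panic(err)
	}
	fmt.Println(f.Match("etcd"))  // true: included and not excluded
	fmt.Println(f.Match("etcd2")) // false: excluded
	fmt.Println(f.Match("redis")) // false: not in the include list
}
```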


@@ -126,7 +126,7 @@ type AgentConfig struct {
// TODO(cam): Remove UTC and parameter, they are no longer
// valid for the agent config. Leaving them here for now for backwards-
// compatibility
UTC bool `toml:"utc"`
// Debug is the option for running in debug mode
@@ -683,7 +683,7 @@ func (c *Config) LoadConfig(path string) error {
}
// trimBOM trims the Byte-Order-Marks from the beginning of the file.
// this is for Windows compatibility only.
// see https://github.com/influxdata/telegraf/issues/1378
func trimBOM(f []byte) []byte {
return bytes.TrimPrefix(f, []byte("\xef\xbb\xbf"))


@@ -13,6 +13,8 @@ const (
Counter
Gauge
Untyped
Summary
Histogram
)
type Metric interface {


@@ -647,7 +647,7 @@ func skipWhitespace(buf []byte, i int) int {
}
// makeError is a helper function for making a metric parsing error.
// reason is the reason why the error occurred.
// buf should be the current buffer we are parsing.
// i is the current index, to give some context on where in the buffer we are.
func makeError(reason string, buf []byte, i int) error {


@@ -181,7 +181,7 @@ func TestMetricReader_SplitWithExactLengthSplit(t *testing.T) {
}
}
// Regression test for when a metric requires to be split and one of the
// split metrics is larger than the buffer.
//
// Previously the metric index would be set incorrectly causing a panic.
@@ -218,7 +218,7 @@ func TestMetricReader_SplitOverflowOversized(t *testing.T) {
}
}
// Regression test for when a split metric exactly fits in the buffer.
//
// Previously the metric would be overflow split when not required.
func TestMetricReader_SplitOverflowUneeded(t *testing.T) {


@@ -1,6 +1,7 @@
package all
import (
_ "github.com/influxdata/telegraf/plugins/aggregators/basicstats"
_ "github.com/influxdata/telegraf/plugins/aggregators/histogram"
_ "github.com/influxdata/telegraf/plugins/aggregators/minmax"
)


@@ -0,0 +1,43 @@
# BasicStats Aggregator Plugin
The BasicStats aggregator plugin gives count, max, min, mean, s2 (variance), and
stdev (standard deviation) for a set of values, emitting the aggregate every
`period` seconds.
### Configuration:
```toml
# Keep the aggregate basicstats of each metric passing through.
[[aggregators.basicstats]]
## General Aggregator Arguments:
## The period on which to flush & clear the aggregator.
period = "30s"
## If true, the original metric will be dropped by the
## aggregator and will not get sent to the output plugins.
drop_original = false
```
### Measurements & Fields:
- measurement1
- field1_count
- field1_max
- field1_min
- field1_mean
- field1_s2 (variance)
- field1_stdev (standard deviation)
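The variance is computed incrementally; the plugin source follows the online algorithm it cites (Welford's method), updating its running state for each incoming value `x` as:

```latex
n \leftarrow n + 1, \qquad
\delta = x - \mu, \qquad
\mu \leftarrow \mu + \frac{\delta}{n}, \qquad
M_2 \leftarrow M_2 + \delta \, (x - \mu)
```

with `s2 = M2 / (n - 1)` and `stdev = sqrt(s2)`, emitted only once `count > 1`.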
### Tags:
No tags are applied by this aggregator.
### Example Output:
```
$ telegraf --config telegraf.conf --quiet
system,host=tars load1=1 1475583980000000000
system,host=tars load1=1 1475583990000000000
system,host=tars load1_count=2,load1_max=1,load1_min=1,load1_mean=1,load1_s2=0,load1_stdev=0 1475584010000000000
system,host=tars load1=1 1475584020000000000
system,host=tars load1=3 1475584030000000000
system,host=tars load1_count=2,load1_max=3,load1_min=1,load1_mean=2,load1_s2=2,load1_stdev=1.414214 1475584010000000000
```


@@ -0,0 +1,155 @@
package basicstats
import (
"math"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/aggregators"
)
type BasicStats struct {
cache map[uint64]aggregate
}
func NewBasicStats() telegraf.Aggregator {
mm := &BasicStats{}
mm.Reset()
return mm
}
type aggregate struct {
fields map[string]basicstats
name string
tags map[string]string
}
type basicstats struct {
count float64
min float64
max float64
mean float64
M2 float64 //intermediate value for variance/stdev
}
var sampleConfig = `
## General Aggregator Arguments:
## The period on which to flush & clear the aggregator.
period = "30s"
## If true, the original metric will be dropped by the
## aggregator and will not get sent to the output plugins.
drop_original = false
`
func (m *BasicStats) SampleConfig() string {
return sampleConfig
}
func (m *BasicStats) Description() string {
return "Keep the aggregate basicstats of each metric passing through."
}
func (m *BasicStats) Add(in telegraf.Metric) {
id := in.HashID()
if _, ok := m.cache[id]; !ok {
// hit an uncached metric, create caches for first time:
a := aggregate{
name: in.Name(),
tags: in.Tags(),
fields: make(map[string]basicstats),
}
for k, v := range in.Fields() {
if fv, ok := convert(v); ok {
a.fields[k] = basicstats{
count: 1,
min: fv,
max: fv,
mean: fv,
M2: 0.0,
}
}
}
m.cache[id] = a
} else {
for k, v := range in.Fields() {
if fv, ok := convert(v); ok {
if _, ok := m.cache[id].fields[k]; !ok {
// hit an uncached field of a cached metric
m.cache[id].fields[k] = basicstats{
count: 1,
min: fv,
max: fv,
mean: fv,
M2: 0.0,
}
continue
}
tmp := m.cache[id].fields[k]
//https://en.m.wikipedia.org/wiki/Algorithms_for_calculating_variance
//variable initialization
x := fv
mean := tmp.mean
M2 := tmp.M2
//counter compute
n := tmp.count + 1
tmp.count = n
//mean compute
delta := x - mean
mean = mean + delta/n
tmp.mean = mean
//variance/stdev compute
M2 = M2 + delta*(x-mean)
tmp.M2 = M2
//max/min compute
if fv < tmp.min {
tmp.min = fv
} else if fv > tmp.max {
tmp.max = fv
}
//store final data
m.cache[id].fields[k] = tmp
}
}
}
}
func (m *BasicStats) Push(acc telegraf.Accumulator) {
for _, aggregate := range m.cache {
fields := map[string]interface{}{}
for k, v := range aggregate.fields {
fields[k+"_count"] = v.count
fields[k+"_min"] = v.min
fields[k+"_max"] = v.max
fields[k+"_mean"] = v.mean
//v.count always >=1
if v.count > 1 {
variance := v.M2 / (v.count - 1)
fields[k+"_s2"] = variance
fields[k+"_stdev"] = math.Sqrt(variance)
}
//if count == 1, the sample variance is undefined (division by zero), so s2/stdev are not sent
}
acc.AddFields(aggregate.name, fields, aggregate.tags)
}
}
func (m *BasicStats) Reset() {
m.cache = make(map[uint64]aggregate)
}
func convert(in interface{}) (float64, bool) {
switch v := in.(type) {
case float64:
return v, true
case int64:
return float64(v), true
default:
return 0, false
}
}
func init() {
aggregators.Add("basicstats", func() telegraf.Aggregator {
return NewBasicStats()
})
}


@@ -0,0 +1,151 @@
package basicstats
import (
"math"
"testing"
"time"
"github.com/influxdata/telegraf/metric"
"github.com/influxdata/telegraf/testutil"
)
var m1, _ = metric.New("m1",
map[string]string{"foo": "bar"},
map[string]interface{}{
"a": int64(1),
"b": int64(1),
"c": float64(2),
"d": float64(2),
},
time.Now(),
)
var m2, _ = metric.New("m1",
map[string]string{"foo": "bar"},
map[string]interface{}{
"a": int64(1),
"b": int64(3),
"c": float64(4),
"d": float64(6),
"e": float64(200),
"ignoreme": "string",
"andme": true,
},
time.Now(),
)
func BenchmarkApply(b *testing.B) {
minmax := NewBasicStats()
for n := 0; n < b.N; n++ {
minmax.Add(m1)
minmax.Add(m2)
}
}
// Test two metrics getting added.
func TestBasicStatsWithPeriod(t *testing.T) {
acc := testutil.Accumulator{}
minmax := NewBasicStats()
minmax.Add(m1)
minmax.Add(m2)
minmax.Push(&acc)
expectedFields := map[string]interface{}{
"a_count": float64(2), //a
"a_max": float64(1),
"a_min": float64(1),
"a_mean": float64(1),
"a_stdev": float64(0),
"a_s2": float64(0),
"b_count": float64(2), //b
"b_max": float64(3),
"b_min": float64(1),
"b_mean": float64(2),
"b_s2": float64(2),
"b_stdev": math.Sqrt(2),
"c_count": float64(2), //c
"c_max": float64(4),
"c_min": float64(2),
"c_mean": float64(3),
"c_s2": float64(2),
"c_stdev": math.Sqrt(2),
"d_count": float64(2), //d
"d_max": float64(6),
"d_min": float64(2),
"d_mean": float64(4),
"d_s2": float64(8),
"d_stdev": math.Sqrt(8),
"e_count": float64(1), //e
"e_max": float64(200),
"e_min": float64(200),
"e_mean": float64(200),
}
expectedTags := map[string]string{
"foo": "bar",
}
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
}
// Test two metrics getting added with a push/reset in between (simulates
// getting added in different periods.)
func TestBasicStatsDifferentPeriods(t *testing.T) {
acc := testutil.Accumulator{}
minmax := NewBasicStats()
minmax.Add(m1)
minmax.Push(&acc)
expectedFields := map[string]interface{}{
"a_count": float64(1), //a
"a_max": float64(1),
"a_min": float64(1),
"a_mean": float64(1),
"b_count": float64(1), //b
"b_max": float64(1),
"b_min": float64(1),
"b_mean": float64(1),
"c_count": float64(1), //c
"c_max": float64(2),
"c_min": float64(2),
"c_mean": float64(2),
"d_count": float64(1), //d
"d_max": float64(2),
"d_min": float64(2),
"d_mean": float64(2),
}
expectedTags := map[string]string{
"foo": "bar",
}
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
acc.ClearMetrics()
minmax.Reset()
minmax.Add(m2)
minmax.Push(&acc)
expectedFields = map[string]interface{}{
"a_count": float64(1), //a
"a_max": float64(1),
"a_min": float64(1),
"a_mean": float64(1),
"b_count": float64(1), //b
"b_max": float64(3),
"b_min": float64(3),
"b_mean": float64(3),
"c_count": float64(1), //c
"c_max": float64(4),
"c_min": float64(4),
"c_mean": float64(4),
"d_count": float64(1), //d
"d_max": float64(6),
"d_min": float64(6),
"d_mean": float64(6),
"e_count": float64(1), //e
"e_max": float64(200),
"e_min": float64(200),
"e_mean": float64(200),
}
expectedTags = map[string]string{
"foo": "bar",
}
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
}


@@ -76,6 +76,7 @@ import (
_ "github.com/influxdata/telegraf/plugins/inputs/riak"
_ "github.com/influxdata/telegraf/plugins/inputs/salesforce"
_ "github.com/influxdata/telegraf/plugins/inputs/sensors"
_ "github.com/influxdata/telegraf/plugins/inputs/smart"
_ "github.com/influxdata/telegraf/plugins/inputs/snmp"
_ "github.com/influxdata/telegraf/plugins/inputs/snmp_legacy"
_ "github.com/influxdata/telegraf/plugins/inputs/socket_listener"
@@ -85,6 +86,7 @@ import (
_ "github.com/influxdata/telegraf/plugins/inputs/system"
_ "github.com/influxdata/telegraf/plugins/inputs/tail"
_ "github.com/influxdata/telegraf/plugins/inputs/tcp_listener"
_ "github.com/influxdata/telegraf/plugins/inputs/teamspeak"
_ "github.com/influxdata/telegraf/plugins/inputs/tomcat"
_ "github.com/influxdata/telegraf/plugins/inputs/trig"
_ "github.com/influxdata/telegraf/plugins/inputs/twemproxy"


@@ -92,7 +92,7 @@ func (c *CloudWatch) SampleConfig() string {
## Collection Delay (required - must account for metrics availability via CloudWatch API)
delay = "5m"
## Recommended: use metric 'interval' that is a multiple of 'period' to avoid
## gaps or overlap in pulled data
interval = "5m"


@@ -69,6 +69,10 @@ func (c *Consul) createAPIClient() (*api.Client, error) {
config.Datacenter = c.Datacentre
}
if c.Token != "" {
config.Token = c.Token
}
if c.Username != "" {
config.HttpAuth = &api.HttpBasicAuth{
Username: c.Username,


@@ -20,7 +20,7 @@ var sampleChecks = []*api.HealthCheck{
},
}
func TestGatherHealthCheck(t *testing.T) {
expectedFields := map[string]interface{}{
"check_name": "foo.health",
"status": "passing",


@@ -21,7 +21,7 @@ var sampleConfig = `
## http://admin:secret@couchbase-0.example.com:8091/
##
## If no servers are specified, then localhost is used as the host.
## If no protocol is specified, HTTP is used.
## If no port is specified, 8091 is used.
servers = ["http://localhost:8091"]
`


@@ -17,7 +17,7 @@ type DnsQuery struct {
// Domains or subdomains to query
Domains []string
// Network protocol name
Network string
// Server to query


@@ -17,6 +17,11 @@ to gather stats from the [Engine API](https://docs.docker.com/engine/api/v1.20/)
## To use environment variables (ie, docker-machine), set endpoint = "ENV"
endpoint = "unix:///var/run/docker.sock"
## Set to true to collect Swarm metrics (desired_replicas, running_replicas).
## Note: configure this on one of the manager nodes in a Swarm cluster;
## configuring it on multiple Swarm managers results in duplicated metrics.
gather_services = false
## Only collect metrics for these containers. Values will be appended to
## container_name_include.
## Deprecated (1.4.0), use container_name_include
@@ -161,6 +166,9 @@ based on the availability of per-cpu stats on your system.
- available
- total
- used
- docker_swarm
- tasks_desired
- tasks_running
### Tags:
@@ -191,6 +199,10 @@ based on the availability of per-cpu stats on your system.
- network
- docker_container_blkio specific:
- device
- docker_swarm specific:
- service_id
- service_name
- service_mode
### Example Output:
@@ -242,4 +254,7 @@ io_service_bytes_recursive_sync=77824i,io_service_bytes_recursive_total=80293888
io_service_bytes_recursive_write=368640i,io_serviced_recursive_async=6562i,\
io_serviced_recursive_read=6492i,io_serviced_recursive_sync=37i,\
io_serviced_recursive_total=6599i,io_serviced_recursive_write=107i 1453409536840126713
>docker_swarm,
service_id=xaup2o9krw36j2dy1mjx1arjw,service_mode=replicated,service_name=test,\
tasks_desired=3,tasks_running=3 1508968160000000000
```


@@ -6,6 +6,7 @@ import (
"net/http"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/swarm"
docker "github.com/docker/docker/client"
"github.com/docker/go-connections/sockets"
)
@@ -20,6 +21,9 @@ type Client interface {
ContainerList(ctx context.Context, options types.ContainerListOptions) ([]types.Container, error)
ContainerStats(ctx context.Context, containerID string, stream bool) (types.ContainerStats, error)
ContainerInspect(ctx context.Context, containerID string) (types.ContainerJSON, error)
ServiceList(ctx context.Context, options types.ServiceListOptions) ([]swarm.Service, error)
TaskList(ctx context.Context, options types.TaskListOptions) ([]swarm.Task, error)
NodeList(ctx context.Context, options types.NodeListOptions) ([]swarm.Node, error)
}
func NewEnvClient() (Client, error) {
func NewEnvClient() (Client, error) { func NewEnvClient() (Client, error) {
@@ -65,3 +69,12 @@ func (c *SocketClient) ContainerStats(ctx context.Context, containerID string, s
func (c *SocketClient) ContainerInspect(ctx context.Context, containerID string) (types.ContainerJSON, error) {
return c.client.ContainerInspect(ctx, containerID)
}
func (c *SocketClient) ServiceList(ctx context.Context, options types.ServiceListOptions) ([]swarm.Service, error) {
return c.client.ServiceList(ctx, options)
}
func (c *SocketClient) TaskList(ctx context.Context, options types.TaskListOptions) ([]swarm.Task, error) {
return c.client.TaskList(ctx, options)
}
func (c *SocketClient) NodeList(ctx context.Context, options types.NodeListOptions) ([]swarm.Node, error) {
return c.client.NodeList(ctx, options)
}


@@ -6,6 +6,7 @@ import (
"encoding/json"
"fmt"
"io"
"log"
"net/http"
"regexp"
"strconv"
@@ -14,38 +15,29 @@ import (
"time"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/swarm"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/filter"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
)
// Docker object
type Docker struct {
Endpoint string
ContainerNames []string
GatherServices bool `toml:"gather_services"`
Timeout internal.Duration
PerDevice bool `toml:"perdevice"`
Total bool `toml:"total"`
TagEnvironment []string `toml:"tag_env"`
LabelInclude []string `toml:"docker_label_include"`
LabelExclude []string `toml:"docker_label_exclude"`
ContainerInclude []string `toml:"container_name_include"`
ContainerExclude []string `toml:"container_name_exclude"`
SSLCA string `toml:"ssl_ca"`
SSLCert string `toml:"ssl_cert"`
@@ -55,10 +47,12 @@ type Docker struct {
newEnvClient func() (Client, error)
newClient func(string, *tls.Config) (Client, error)
client Client
httpClient *http.Client
engine_host string
filtersCreated bool
labelFilter filter.Filter
containerFilter filter.Filter
}
// KB, MB, GB, TB, PB...human friendly
@@ -82,6 +76,9 @@ var sampleConfig = `
## To use environment variables (ie, docker-machine), set endpoint = "ENV"
endpoint = "unix:///var/run/docker.sock"
## Set to true to collect Swarm metrics (desired_replicas, running_replicas)
gather_services = false
## Only collect metrics for these containers, collect all if empty
container_names = []
@@ -160,6 +157,13 @@ func (d *Docker) Gather(acc telegraf.Accumulator) error {
acc.AddError(err)
}
if d.GatherServices {
err := d.gatherSwarmInfo(acc)
if err != nil {
acc.AddError(err)
}
}
// List containers
opts := types.ContainerListOptions{}
ctx, cancel := context.WithTimeout(context.Background(), d.Timeout.Duration)
@@ -187,6 +191,75 @@ func (d *Docker) Gather(acc telegraf.Accumulator) error {
return nil
}
func (d *Docker) gatherSwarmInfo(acc telegraf.Accumulator) error {
ctx, cancel := context.WithTimeout(context.Background(), d.Timeout.Duration)
defer cancel()
services, err := d.client.ServiceList(ctx, types.ServiceListOptions{})
if err != nil {
return err
}
if len(services) > 0 {
tasks, err := d.client.TaskList(ctx, types.TaskListOptions{})
if err != nil {
return err
}
nodes, err := d.client.NodeList(ctx, types.NodeListOptions{})
if err != nil {
return err
}
running := map[string]int{}
tasksNoShutdown := map[string]int{}
activeNodes := make(map[string]struct{})
for _, n := range nodes {
if n.Status.State != swarm.NodeStateDown {
activeNodes[n.ID] = struct{}{}
}
}
for _, task := range tasks {
if task.DesiredState != swarm.TaskStateShutdown {
tasksNoShutdown[task.ServiceID]++
}
if task.Status.State == swarm.TaskStateRunning {
running[task.ServiceID]++
}
}
for _, service := range services {
tags := map[string]string{}
fields := make(map[string]interface{})
now := time.Now()
tags["service_id"] = service.ID
tags["service_name"] = service.Spec.Name
if service.Spec.Mode.Replicated != nil && service.Spec.Mode.Replicated.Replicas != nil {
tags["service_mode"] = "replicated"
fields["tasks_running"] = running[service.ID]
fields["tasks_desired"] = *service.Spec.Mode.Replicated.Replicas
} else if service.Spec.Mode.Global != nil {
tags["service_mode"] = "global"
fields["tasks_running"] = running[service.ID]
fields["tasks_desired"] = tasksNoShutdown[service.ID]
} else {
log.Printf("E! Unknow Replicas Mode")
}
// Add metrics
acc.AddFields("docker_swarm",
fields,
tags,
now)
}
}
return nil
}
func (d *Docker) gatherInfo(acc telegraf.Accumulator) error {
// Init vars
dataFields := make(map[string]interface{})
@@ -291,12 +364,8 @@ func (d *Docker) gatherContainer(
"container_version": imageVersion,
}
if !d.containerFilter.Match(cname) {
return nil
}
ctx, cancel := context.WithTimeout(context.Background(), d.Timeout.Duration)
@@ -317,10 +386,8 @@ func (d *Docker) gatherContainer(
// Add labels to tags
for k, label := range container.Labels {
if d.labelFilter.Match(k) {
tags[k] = label
}
}
@@ -666,46 +733,25 @@ func parseSize(sizeStr string) (int64, error) {
}
func (d *Docker) createContainerFilters() error {
// Backwards compatibility for deprecated `container_names` parameter.
if len(d.ContainerNames) > 0 {
d.ContainerInclude = append(d.ContainerInclude, d.ContainerNames...)
}
filter, err := filter.NewIncludeExcludeFilter(d.ContainerInclude, d.ContainerExclude)
if err != nil {
return err
}
d.containerFilter = filter
return nil
}
func (d *Docker) createLabelFilters() error {
filter, err := filter.NewIncludeExcludeFilter(d.LabelInclude, d.LabelExclude)
if err != nil {
return err
}
d.labelFilter = filter
return nil
}


@@ -8,6 +8,7 @@ import (
"github.com/influxdata/telegraf/testutil"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/swarm"
"github.com/stretchr/testify/require"
)
@@ -16,6 +17,9 @@ type MockClient struct {
ContainerListF func(ctx context.Context, options types.ContainerListOptions) ([]types.Container, error)
ContainerStatsF func(ctx context.Context, containerID string, stream bool) (types.ContainerStats, error)
ContainerInspectF func(ctx context.Context, containerID string) (types.ContainerJSON, error)
ServiceListF func(ctx context.Context, options types.ServiceListOptions) ([]swarm.Service, error)
TaskListF func(ctx context.Context, options types.TaskListOptions) ([]swarm.Task, error)
NodeListF func(ctx context.Context, options types.NodeListOptions) ([]swarm.Node, error)
}
func (c *MockClient) Info(ctx context.Context) (types.Info, error) {
@@ -44,21 +48,53 @@ func (c *MockClient) ContainerInspect(
return c.ContainerInspectF(ctx, containerID)
}
func (c *MockClient) ServiceList(
ctx context.Context,
options types.ServiceListOptions,
) ([]swarm.Service, error) {
return c.ServiceListF(ctx, options)
}
func (c *MockClient) TaskList(
ctx context.Context,
options types.TaskListOptions,
) ([]swarm.Task, error) {
return c.TaskListF(ctx, options)
}
func (c *MockClient) NodeList(
ctx context.Context,
options types.NodeListOptions,
) ([]swarm.Node, error) {
return c.NodeListF(ctx, options)
}
var baseClient = MockClient{
InfoF: func(context.Context) (types.Info, error) {
return info, nil
},
ContainerListF: func(context.Context, types.ContainerListOptions) ([]types.Container, error) {
return containerList, nil
},
ContainerStatsF: func(context.Context, string, bool) (types.ContainerStats, error) {
return containerStats(), nil
},
ContainerInspectF: func(context.Context, string) (types.ContainerJSON, error) {
return containerInspect, nil
},
ServiceListF: func(context.Context, types.ServiceListOptions) ([]swarm.Service, error) {
return ServiceList, nil
},
TaskListF: func(context.Context, types.TaskListOptions) ([]swarm.Task, error) {
return TaskList, nil
},
NodeListF: func(context.Context, types.NodeListOptions) ([]swarm.Node, error) {
return NodeList, nil
},
}
func newClient(host string, tlsConfig *tls.Config) (Client, error) {
return &baseClient, nil
}
func TestDockerGatherContainerStats(t *testing.T) {
@@ -227,6 +263,15 @@ func TestDocker_WindowsMemoryContainerStats(t *testing.T) {
ContainerInspectF: func(ctx context.Context, containerID string) (types.ContainerJSON, error) {
return containerInspect, nil
},
ServiceListF: func(context.Context, types.ServiceListOptions) ([]swarm.Service, error) {
return ServiceList, nil
},
TaskListF: func(context.Context, types.TaskListOptions) ([]swarm.Task, error) {
return TaskList, nil
},
NodeListF: func(context.Context, types.NodeListOptions) ([]swarm.Node, error) {
return NodeList, nil
},
}, nil
},
}
@@ -234,82 +279,291 @@ func TestDocker_WindowsMemoryContainerStats(t *testing.T) {
require.NoError(t, err)
}
func TestContainerLabels(t *testing.T) {
var tests = []struct {
name string
container types.Container
include []string
exclude []string
expected map[string]string
}{
{
name: "Nil filters matches all",
container: types.Container{
Labels: map[string]string{
"a": "x",
},
},
include: nil,
exclude: nil,
expected: map[string]string{
"a": "x",
},
},
{
name: "Empty filters matches all",
container: types.Container{
Labels: map[string]string{
"a": "x",
},
},
include: []string{},
exclude: []string{},
expected: map[string]string{
"a": "x",
},
},
{
name: "Must match include",
container: types.Container{
Labels: map[string]string{
"a": "x",
"b": "y",
},
},
include: []string{"a"},
exclude: []string{},
expected: map[string]string{
"a": "x",
},
},
{
name: "Must not match exclude",
container: types.Container{
Labels: map[string]string{
"a": "x",
"b": "y",
},
},
include: []string{},
exclude: []string{"b"},
expected: map[string]string{
"a": "x",
},
},
{
name: "Include Glob",
container: types.Container{
Labels: map[string]string{
"aa": "x",
"ab": "y",
"bb": "z",
},
},
include: []string{"a*"},
exclude: []string{},
expected: map[string]string{
"aa": "x",
"ab": "y",
},
},
{
name: "Exclude Glob",
container: types.Container{
Labels: map[string]string{
"aa": "x",
"ab": "y",
"bb": "z",
},
},
include: []string{},
exclude: []string{"a*"},
expected: map[string]string{
"bb": "z",
},
},
{
name: "Excluded Includes",
container: types.Container{
Labels: map[string]string{
"aa": "x",
"ab": "y",
"bb": "z",
},
},
include: []string{"a*"},
exclude: []string{"*b"},
expected: map[string]string{
"aa": "x",
},
},
} }
for _, tt := range tests {
for _, tt := range gatherLabelsTests { t.Run(tt.name, func(t *testing.T) {
t.Run("", func(t *testing.T) {
var acc testutil.Accumulator var acc testutil.Accumulator
d := Docker{
newClient: newClient, newClientFunc := func(host string, tlsConfig *tls.Config) (Client, error) {
client := baseClient
client.ContainerListF = func(context.Context, types.ContainerListOptions) ([]types.Container, error) {
return []types.Container{tt.container}, nil
}
return &client, nil
} }
for _, label := range tt.include { d := Docker{
d.LabelInclude = append(d.LabelInclude, label) newClient: newClientFunc,
} LabelInclude: tt.include,
for _, label := range tt.exclude { LabelExclude: tt.exclude,
d.LabelExclude = append(d.LabelExclude, label)
} }
err := d.Gather(&acc) err := d.Gather(&acc)
require.NoError(t, err) require.NoError(t, err)
for _, label := range tt.expected { // Grab tags from a container metric
if !acc.HasTag("docker_container_cpu", label) { var actual map[string]string
t.Errorf("Didn't get expected label of %s. Test was: Include: %s Exclude %s", for _, metric := range acc.Metrics {
label, tt.include, tt.exclude) if metric.Measurement == "docker_container_cpu" {
actual = metric.Tags
} }
} }
for _, label := range tt.notexpected { for k, v := range tt.expected {
if acc.HasTag("docker_container_cpu", label) { require.Equal(t, v, actual[k])
t.Errorf("Got unexpected label of %s. Test was: Include: %s Exclude %s",
label, tt.include, tt.exclude)
}
} }
}) })
} }
} }
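The cases above pin down the label-filter semantics: nil or empty include/exclude lists match everything, and an exclude pattern overrides a matching include. A minimal sketch of those rules, with stdlib `path.Match` standing in for whatever glob matcher the plugin actually uses:

```go
package main

import (
	"fmt"
	"path"
)

// matchAny reports whether name matches any of the glob patterns.
func matchAny(patterns []string, name string) bool {
	for _, p := range patterns {
		if ok, _ := path.Match(p, name); ok {
			return true
		}
	}
	return false
}

// keep mirrors the semantics encoded by the test table: empty filters
// match everything, and exclude takes precedence over include.
func keep(include, exclude []string, name string) bool {
	if matchAny(exclude, name) {
		return false
	}
	return len(include) == 0 || matchAny(include, name)
}

func main() {
	fmt.Println(keep([]string{"a*"}, []string{"*b"}, "aa")) // true: "Excluded Includes" keeps aa
	fmt.Println(keep([]string{"a*"}, []string{"*b"}, "ab")) // false: exclude wins
	fmt.Println(keep(nil, nil, "anything"))                 // true: nil filters match all
}
```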
func TestContainerNames(t *testing.T) { func TestContainerNames(t *testing.T) {
var gatherContainerNames = []struct { var tests = []struct {
include []string name string
exclude []string containers [][]string
expected []string include []string
notexpected []string exclude []string
expected []string
}{ }{
{[]string{}, []string{}, []string{"etcd", "etcd2"}, []string{}}, {
{[]string{"*"}, []string{}, []string{"etcd", "etcd2"}, []string{}}, name: "Nil filters matches all",
{[]string{"etc*"}, []string{}, []string{"etcd", "etcd2"}, []string{}}, containers: [][]string{
{[]string{"etcd"}, []string{}, []string{"etcd"}, []string{"etcd2"}}, {"/etcd"},
{[]string{"etcd2*"}, []string{}, []string{"etcd2"}, []string{"etcd"}}, {"/etcd2"},
{[]string{}, []string{"etc*"}, []string{}, []string{"etcd", "etcd2"}}, },
{[]string{}, []string{"etcd"}, []string{"etcd2"}, []string{"etcd"}}, include: nil,
{[]string{"*"}, []string{"*"}, []string{"etcd", "etcd2"}, []string{}}, exclude: nil,
{[]string{}, []string{"*"}, []string{""}, []string{"etcd", "etcd2"}}, expected: []string{"etcd", "etcd2"},
},
{
name: "Empty filters matches all",
containers: [][]string{
{"/etcd"},
{"/etcd2"},
},
include: []string{},
exclude: []string{},
expected: []string{"etcd", "etcd2"},
},
{
name: "Match all containers",
containers: [][]string{
{"/etcd"},
{"/etcd2"},
},
include: []string{"*"},
exclude: []string{},
expected: []string{"etcd", "etcd2"},
},
{
name: "Include prefix match",
containers: [][]string{
{"/etcd"},
{"/etcd2"},
},
include: []string{"etc*"},
exclude: []string{},
expected: []string{"etcd", "etcd2"},
},
{
name: "Exact match",
containers: [][]string{
{"/etcd"},
{"/etcd2"},
},
include: []string{"etcd"},
exclude: []string{},
expected: []string{"etcd"},
},
{
name: "Star matches zero length",
containers: [][]string{
{"/etcd"},
{"/etcd2"},
},
include: []string{"etcd2*"},
exclude: []string{},
expected: []string{"etcd2"},
},
{
name: "Exclude matches all",
containers: [][]string{
{"/etcd"},
{"/etcd2"},
},
include: []string{},
exclude: []string{"etc*"},
expected: []string{},
},
{
name: "Exclude single",
containers: [][]string{
{"/etcd"},
{"/etcd2"},
},
include: []string{},
exclude: []string{"etcd"},
expected: []string{"etcd2"},
},
{
name: "Exclude all",
containers: [][]string{
{"/etcd"},
{"/etcd2"},
},
include: []string{"*"},
exclude: []string{"*"},
expected: []string{},
},
{
name: "Exclude item matching include",
containers: [][]string{
{"acme"},
{"foo"},
{"acme-test"},
},
include: []string{"acme*"},
exclude: []string{"*test*"},
expected: []string{"acme"},
},
{
name: "Exclude item no wildcards",
containers: [][]string{
{"acme"},
{"acme-test"},
},
include: []string{"acme*"},
exclude: []string{"test"},
expected: []string{"acme", "acme-test"},
},
} }
for _, tt := range tests {
for _, tt := range gatherContainerNames { t.Run(tt.name, func(t *testing.T) {
t.Run("", func(t *testing.T) {
var acc testutil.Accumulator var acc testutil.Accumulator
newClientFunc := func(host string, tlsConfig *tls.Config) (Client, error) {
client := baseClient
client.ContainerListF = func(context.Context, types.ContainerListOptions) ([]types.Container, error) {
var containers []types.Container
for _, names := range tt.containers {
containers = append(containers, types.Container{
Names: names,
})
}
return containers, nil
}
return &client, nil
}
d := Docker{ d := Docker{
newClient: newClient, newClient: newClientFunc,
ContainerInclude: tt.include, ContainerInclude: tt.include,
ContainerExclude: tt.exclude, ContainerExclude: tt.exclude,
} }
@ -317,39 +571,21 @@ func TestContainerNames(t *testing.T) {
err := d.Gather(&acc) err := d.Gather(&acc)
require.NoError(t, err) require.NoError(t, err)
// Set of expected names
var expected = make(map[string]bool)
for _, v := range tt.expected {
expected[v] = true
}
// Set of actual names
var actual = make(map[string]bool)
for _, metric := range acc.Metrics { for _, metric := range acc.Metrics {
if metric.Measurement == "docker_container_cpu" { if name, ok := metric.Tags["container_name"]; ok {
if val, ok := metric.Tags["container_name"]; ok { actual[name] = true
var found bool = false
for _, cname := range tt.expected {
if val == cname {
found = true
break
}
}
if !found {
t.Errorf("Got unexpected container of %s. Test was -> Include: %s, Exclude: %s", val, tt.include, tt.exclude)
}
}
} }
} }
for _, metric := range acc.Metrics { require.Equal(t, expected, actual)
if metric.Measurement == "docker_container_cpu" {
if val, ok := metric.Tags["container_name"]; ok {
var found bool = false
for _, cname := range tt.notexpected {
if val == cname {
found = true
break
}
}
if found {
t.Errorf("Got unexpected container of %s. Test was -> Include: %s, Exclude: %s", val, tt.include, tt.exclude)
}
}
}
}
}) })
} }
} }
@ -436,3 +672,42 @@ func TestDockerGatherInfo(t *testing.T) {
}, },
) )
} }
func TestDockerGatherSwarmInfo(t *testing.T) {
var acc testutil.Accumulator
d := Docker{
newClient: newClient,
}
err := acc.GatherError(d.Gather)
require.NoError(t, err)
d.gatherSwarmInfo(&acc)
// test docker_swarm measurement
acc.AssertContainsTaggedFields(t,
"docker_swarm",
map[string]interface{}{
"tasks_running": int(2),
"tasks_desired": uint64(2),
},
map[string]string{
"service_id": "qolkls9g5iasdiuihcyz9rnx2",
"service_name": "test1",
"service_mode": "replicated",
},
)
acc.AssertContainsTaggedFields(t,
"docker_swarm",
map[string]interface{}{
"tasks_running": int(1),
"tasks_desired": int(1),
},
map[string]string{
"service_id": "qolkls9g5iasdiuihcyz9rn3",
"service_name": "test2",
"service_mode": "global",
},
)
}
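The mixed field types asserted above are deliberate: a replicated service's `tasks_desired` comes straight from `Spec.Mode.Replicated.Replicas` (a `uint64`), while a global service's is presumably a plain `int` count of its non-shutdown tasks. A hedged sketch of that aggregation, consistent with these assertions; the plugin's exact logic may differ:

```go
package main

import (
	"fmt"

	"github.com/docker/docker/api/types/swarm"
)

// swarmTaskCounts derives tasks_running and tasks_desired for one service.
// Assumption: a global service reports desired as the number of its tasks
// not scheduled for shutdown, which matches the expected values above.
func swarmTaskCounts(svc swarm.Service, tasks []swarm.Task) (running int, desired interface{}) {
	nonShutdown := 0
	for _, t := range tasks {
		if t.ServiceID != svc.ID {
			continue
		}
		if t.Status.State == swarm.TaskStateRunning {
			running++
		}
		if t.DesiredState != swarm.TaskStateShutdown {
			nonShutdown++
		}
	}
	if svc.Spec.Mode.Replicated != nil && svc.Spec.Mode.Replicated.Replicas != nil {
		return running, *svc.Spec.Mode.Replicated.Replicas // uint64, as asserted for "test1"
	}
	return running, nonShutdown // int, as asserted for the global "test2"
}

func main() {
	replicas := uint64(2)
	svc := swarm.Service{
		ID: "svc1",
		Spec: swarm.ServiceSpec{
			Mode: swarm.ServiceMode{Replicated: &swarm.ReplicatedService{Replicas: &replicas}},
		},
	}
	tasks := []swarm.Task{
		{ServiceID: "svc1", Status: swarm.TaskStatus{State: swarm.TaskStateRunning}, DesiredState: swarm.TaskStateRunning},
		{ServiceID: "svc1", Status: swarm.TaskStatus{State: swarm.TaskStateRunning}, DesiredState: swarm.TaskStateRunning},
	}
	running, desired := swarmTaskCounts(svc, tasks)
	fmt.Println(running, desired) // 2 2
}
```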


@ -8,6 +8,7 @@ import (
"github.com/docker/docker/api/types" "github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container" "github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/registry" "github.com/docker/docker/api/types/registry"
"github.com/docker/docker/api/types/swarm"
) )
var info = types.Info{ var info = types.Info{
@ -133,6 +134,79 @@ var containerList = []types.Container{
}, },
} }
var two = uint64(2)
var ServiceList = []swarm.Service{
swarm.Service{
ID: "qolkls9g5iasdiuihcyz9rnx2",
Spec: swarm.ServiceSpec{
Annotations: swarm.Annotations{
Name: "test1",
},
Mode: swarm.ServiceMode{
Replicated: &swarm.ReplicatedService{
Replicas: &two,
},
},
},
},
swarm.Service{
ID: "qolkls9g5iasdiuihcyz9rn3",
Spec: swarm.ServiceSpec{
Annotations: swarm.Annotations{
Name: "test2",
},
Mode: swarm.ServiceMode{
Global: &swarm.GlobalService{},
},
},
},
}
var TaskList = []swarm.Task{
swarm.Task{
ID: "kwh0lv7hwwbh",
ServiceID: "qolkls9g5iasdiuihcyz9rnx2",
NodeID: "0cl4jturcyd1ks3fwpd010kor",
Status: swarm.TaskStatus{
State: "running",
},
DesiredState: "running",
},
swarm.Task{
ID: "u78m5ojbivc3",
ServiceID: "qolkls9g5iasdiuihcyz9rnx2",
NodeID: "0cl4jturcyd1ks3fwpd010kor",
Status: swarm.TaskStatus{
State: "running",
},
DesiredState: "running",
},
swarm.Task{
ID: "1n1uilkhr98l",
ServiceID: "qolkls9g5iasdiuihcyz9rn3",
NodeID: "0cl4jturcyd1ks3fwpd010kor",
Status: swarm.TaskStatus{
State: "running",
},
DesiredState: "running",
},
}
var NodeList = []swarm.Node{
swarm.Node{
ID: "0cl4jturcyd1ks3fwpd010kor",
Status: swarm.NodeStatus{
State: "ready",
},
},
swarm.Node{
ID: "0cl4jturcyd1ks3fwpd010kor",
Status: swarm.NodeStatus{
State: "ready",
},
},
}
func containerStats() types.ContainerStats { func containerStats() types.ContainerStats {
var stat types.ContainerStats var stat types.ContainerStats
jsonStat := ` jsonStat := `


@ -23,10 +23,21 @@ or [cluster-stats](https://www.elastic.co/guide/en/elasticsearch/reference/curre
## Set cluster_health to true when you want to also obtain cluster health stats ## Set cluster_health to true when you want to also obtain cluster health stats
cluster_health = false cluster_health = false
## Set cluster_stats to true when you want to obtain cluster stats from the ## Adjust cluster_health_level when you want to also obtain detailed health stats
## Master node. ## The options are
## - indices (default)
## - cluster
# cluster_health_level = "indices"
## Set cluster_stats to true when you want to also obtain cluster stats from the
## Master node.
cluster_stats = false cluster_stats = false
## node_stats is a list of sub-stats that you want to have gathered. Valid options
## are "indices", "os", "process", "jvm", "thread_pool", "fs", "transport", "http",
## "breakers". Per default, all stats are gathered.
# node_stats = ["jvm", "http"]
## Optional SSL Config ## Optional SSL Config
# ssl_ca = "/etc/telegraf/ca.pem" # ssl_ca = "/etc/telegraf/ca.pem"
# ssl_cert = "/etc/telegraf/cert.pem" # ssl_cert = "/etc/telegraf/cert.pem"


@ -3,17 +3,16 @@ package elasticsearch
import ( import (
"encoding/json" "encoding/json"
"fmt" "fmt"
"net/http"
"regexp"
"sync"
"time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal" "github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/plugins/inputs"
jsonparser "github.com/influxdata/telegraf/plugins/parsers/json" jsonparser "github.com/influxdata/telegraf/plugins/parsers/json"
"io/ioutil" "io/ioutil"
"net/http"
"regexp"
"strings" "strings"
"sync"
"time"
) )
// mask for masking username/password from error messages // mask for masking username/password from error messages
@ -94,10 +93,21 @@ const sampleConfig = `
## Set cluster_health to true when you want to also obtain cluster health stats ## Set cluster_health to true when you want to also obtain cluster health stats
cluster_health = false cluster_health = false
## Adjust cluster_health_level when you want to also obtain detailed health stats
## The options are
## - indices (default)
## - cluster
# cluster_health_level = "indices"
## Set cluster_stats to true when you want to also obtain cluster stats from the ## Set cluster_stats to true when you want to also obtain cluster stats from the
## Master node. ## Master node.
cluster_stats = false cluster_stats = false
## node_stats is a list of sub-stats that you want to have gathered. Valid options
## are "indices", "os", "process", "jvm", "thread_pool", "fs", "transport", "http",
## "breakers". Per default, all stats are gathered.
# node_stats = ["jvm", "http"]
## Optional SSL Config ## Optional SSL Config
# ssl_ca = "/etc/telegraf/ca.pem" # ssl_ca = "/etc/telegraf/ca.pem"
# ssl_cert = "/etc/telegraf/cert.pem" # ssl_cert = "/etc/telegraf/cert.pem"
@ -113,7 +123,9 @@ type Elasticsearch struct {
Servers []string Servers []string
HttpTimeout internal.Duration HttpTimeout internal.Duration
ClusterHealth bool ClusterHealth bool
ClusterHealthLevel string
ClusterStats bool ClusterStats bool
NodeStats []string
SSLCA string `toml:"ssl_ca"` // Path to CA file SSLCA string `toml:"ssl_ca"` // Path to CA file
SSLCert string `toml:"ssl_cert"` // Path to host cert file SSLCert string `toml:"ssl_cert"` // Path to host cert file
SSLKey string `toml:"ssl_key"` // Path to cert key file SSLKey string `toml:"ssl_key"` // Path to cert key file
@ -126,7 +138,8 @@ type Elasticsearch struct {
// NewElasticsearch return a new instance of Elasticsearch // NewElasticsearch return a new instance of Elasticsearch
func NewElasticsearch() *Elasticsearch { func NewElasticsearch() *Elasticsearch {
return &Elasticsearch{ return &Elasticsearch{
HttpTimeout: internal.Duration{Duration: time.Second * 5}, HttpTimeout: internal.Duration{Duration: time.Second * 5},
ClusterHealthLevel: "indices",
} }
} }
@ -158,12 +171,7 @@ func (e *Elasticsearch) Gather(acc telegraf.Accumulator) error {
for _, serv := range e.Servers { for _, serv := range e.Servers {
go func(s string, acc telegraf.Accumulator) { go func(s string, acc telegraf.Accumulator) {
defer wg.Done() defer wg.Done()
var url string url := e.nodeStatsUrl(s)
if e.Local {
url = s + statsPathLocal
} else {
url = s + statsPath
}
e.isMaster = false e.isMaster = false
if e.ClusterStats { if e.ClusterStats {
@ -182,7 +190,10 @@ func (e *Elasticsearch) Gather(acc telegraf.Accumulator) error {
} }
if e.ClusterHealth { if e.ClusterHealth {
url = s + "/_cluster/health?level=indices" url = s + "/_cluster/health"
if e.ClusterHealthLevel != "" {
url = url + "?level=" + e.ClusterHealthLevel
}
if err := e.gatherClusterHealth(url, acc); err != nil { if err := e.gatherClusterHealth(url, acc); err != nil {
acc.AddError(fmt.Errorf(mask.ReplaceAllString(err.Error(), "http(s)://XXX:XXX@"))) acc.AddError(fmt.Errorf(mask.ReplaceAllString(err.Error(), "http(s)://XXX:XXX@")))
return return
@ -219,6 +230,22 @@ func (e *Elasticsearch) createHttpClient() (*http.Client, error) {
return client, nil return client, nil
} }
func (e *Elasticsearch) nodeStatsUrl(baseUrl string) string {
var url string
if e.Local {
url = baseUrl + statsPathLocal
} else {
url = baseUrl + statsPath
}
if len(e.NodeStats) == 0 {
return url
}
return fmt.Sprintf("%s/%s", url, strings.Join(e.NodeStats, ","))
}
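For reference, a sketch of the URLs this builds, assuming the usual `/_nodes/stats` and `/_nodes/_local/stats` values for `statsPath` and `statsPathLocal`:

```go
package main

import (
	"fmt"
	"strings"
)

// Assumed values; the real constants live elsewhere in the plugin.
const (
	statsPath      = "/_nodes/stats"
	statsPathLocal = "/_nodes/_local/stats"
)

// nodeStatsURL mirrors the construction above: choose the local or
// cluster-wide stats path, then narrow to the requested sub-stats.
func nodeStatsURL(baseURL string, local bool, nodeStats []string) string {
	url := baseURL + statsPath
	if local {
		url = baseURL + statsPathLocal
	}
	if len(nodeStats) == 0 {
		return url
	}
	return fmt.Sprintf("%s/%s", url, strings.Join(nodeStats, ","))
}

func main() {
	// http://localhost:9200/_nodes/_local/stats/jvm,http
	fmt.Println(nodeStatsURL("http://localhost:9200", true, []string{"jvm", "http"}))
	// http://localhost:9200/_nodes/stats
	fmt.Println(nodeStatsURL("http://localhost:9200", false, nil))
}
```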
func (e *Elasticsearch) gatherNodeStats(url string, acc telegraf.Accumulator) error { func (e *Elasticsearch) gatherNodeStats(url string, acc telegraf.Accumulator) error {
nodeStats := &struct { nodeStats := &struct {
ClusterName string `json:"cluster_name"` ClusterName string `json:"cluster_name"`
@ -259,6 +286,11 @@ func (e *Elasticsearch) gatherNodeStats(url string, acc telegraf.Accumulator) er
now := time.Now() now := time.Now()
for p, s := range stats { for p, s := range stats {
// skip this sub-stat if it is absent from the response
if s == nil {
continue
}
f := jsonparser.JSONFlattener{} f := jsonparser.JSONFlattener{}
// parse Json, ignoring strings and bools // parse Json, ignoring strings and bools
err := f.FlattenJSON("", s) err := f.FlattenJSON("", s)


@ -13,6 +13,16 @@ import (
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
func defaultTags() map[string]string {
return map[string]string{
"cluster_name": "es-testcluster",
"node_attribute_master": "true",
"node_id": "SDFsfSDFsdfFSDSDfSFDSDF",
"node_name": "test.host.com",
"node_host": "test",
}
}
type transportMock struct { type transportMock struct {
statusCode int statusCode int
body string body string
@ -45,15 +55,9 @@ func checkIsMaster(es *Elasticsearch, expected bool, t *testing.T) {
assert.Fail(t, msg) assert.Fail(t, msg)
} }
} }
func checkNodeStatsResult(t *testing.T, acc *testutil.Accumulator) {
tags := map[string]string{
"cluster_name": "es-testcluster",
"node_attribute_master": "true",
"node_id": "SDFsfSDFsdfFSDSDfSFDSDF",
"node_name": "test.host.com",
"node_host": "test",
}
func checkNodeStatsResult(t *testing.T, acc *testutil.Accumulator) {
tags := defaultTags()
acc.AssertContainsTaggedFields(t, "elasticsearch_indices", nodestatsIndicesExpected, tags) acc.AssertContainsTaggedFields(t, "elasticsearch_indices", nodestatsIndicesExpected, tags)
acc.AssertContainsTaggedFields(t, "elasticsearch_os", nodestatsOsExpected, tags) acc.AssertContainsTaggedFields(t, "elasticsearch_os", nodestatsOsExpected, tags)
acc.AssertContainsTaggedFields(t, "elasticsearch_process", nodestatsProcessExpected, tags) acc.AssertContainsTaggedFields(t, "elasticsearch_process", nodestatsProcessExpected, tags)
@ -79,6 +83,31 @@ func TestGather(t *testing.T) {
checkNodeStatsResult(t, &acc) checkNodeStatsResult(t, &acc)
} }
func TestGatherIndividualStats(t *testing.T) {
es := newElasticsearchWithClient()
es.Servers = []string{"http://example.com:9200"}
es.NodeStats = []string{"jvm", "process"}
es.client.Transport = newTransportMock(http.StatusOK, nodeStatsResponseJVMProcess)
var acc testutil.Accumulator
if err := acc.GatherError(es.Gather); err != nil {
t.Fatal(err)
}
checkIsMaster(es, false, t)
tags := defaultTags()
acc.AssertDoesNotContainsTaggedFields(t, "elasticsearch_indices", nodestatsIndicesExpected, tags)
acc.AssertDoesNotContainsTaggedFields(t, "elasticsearch_os", nodestatsOsExpected, tags)
acc.AssertContainsTaggedFields(t, "elasticsearch_process", nodestatsProcessExpected, tags)
acc.AssertContainsTaggedFields(t, "elasticsearch_jvm", nodestatsJvmExpected, tags)
acc.AssertDoesNotContainsTaggedFields(t, "elasticsearch_thread_pool", nodestatsThreadPoolExpected, tags)
acc.AssertDoesNotContainsTaggedFields(t, "elasticsearch_fs", nodestatsFsExpected, tags)
acc.AssertDoesNotContainsTaggedFields(t, "elasticsearch_transport", nodestatsTransportExpected, tags)
acc.AssertDoesNotContainsTaggedFields(t, "elasticsearch_http", nodestatsHttpExpected, tags)
acc.AssertDoesNotContainsTaggedFields(t, "elasticsearch_breakers", nodestatsBreakersExpected, tags)
}
func TestGatherNodeStats(t *testing.T) { func TestGatherNodeStats(t *testing.T) {
es := newElasticsearchWithClient() es := newElasticsearchWithClient()
es.Servers = []string{"http://example.com:9200"} es.Servers = []string{"http://example.com:9200"}
@ -93,10 +122,11 @@ func TestGatherNodeStats(t *testing.T) {
checkNodeStatsResult(t, &acc) checkNodeStatsResult(t, &acc)
} }
func TestGatherClusterHealth(t *testing.T) { func TestGatherClusterHealthEmptyClusterHealth(t *testing.T) {
es := newElasticsearchWithClient() es := newElasticsearchWithClient()
es.Servers = []string{"http://example.com:9200"} es.Servers = []string{"http://example.com:9200"}
es.ClusterHealth = true es.ClusterHealth = true
es.ClusterHealthLevel = ""
es.client.Transport = newTransportMock(http.StatusOK, clusterHealthResponse) es.client.Transport = newTransportMock(http.StatusOK, clusterHealthResponse)
var acc testutil.Accumulator var acc testutil.Accumulator
@ -104,6 +134,56 @@ func TestGatherClusterHealth(t *testing.T) {
checkIsMaster(es, false, t) checkIsMaster(es, false, t)
acc.AssertContainsTaggedFields(t, "elasticsearch_cluster_health",
clusterHealthExpected,
map[string]string{"name": "elasticsearch_telegraf"})
acc.AssertDoesNotContainsTaggedFields(t, "elasticsearch_indices",
v1IndexExpected,
map[string]string{"index": "v1"})
acc.AssertDoesNotContainsTaggedFields(t, "elasticsearch_indices",
v2IndexExpected,
map[string]string{"index": "v2"})
}
func TestGatherClusterHealthSpecificClusterHealth(t *testing.T) {
es := newElasticsearchWithClient()
es.Servers = []string{"http://example.com:9200"}
es.ClusterHealth = true
es.ClusterHealthLevel = "cluster"
es.client.Transport = newTransportMock(http.StatusOK, clusterHealthResponse)
var acc testutil.Accumulator
require.NoError(t, es.gatherClusterHealth("junk", &acc))
checkIsMaster(es, false, t)
acc.AssertContainsTaggedFields(t, "elasticsearch_cluster_health",
clusterHealthExpected,
map[string]string{"name": "elasticsearch_telegraf"})
acc.AssertDoesNotContainsTaggedFields(t, "elasticsearch_indices",
v1IndexExpected,
map[string]string{"index": "v1"})
acc.AssertDoesNotContainsTaggedFields(t, "elasticsearch_indices",
v2IndexExpected,
map[string]string{"index": "v2"})
}
func TestGatherClusterHealthAlsoIndicesHealth(t *testing.T) {
es := newElasticsearchWithClient()
es.Servers = []string{"http://example.com:9200"}
es.ClusterHealth = true
es.ClusterHealthLevel = "indices"
es.client.Transport = newTransportMock(http.StatusOK, clusterHealthResponseWithIndices)
var acc testutil.Accumulator
require.NoError(t, es.gatherClusterHealth("junk", &acc))
checkIsMaster(es, false, t)
acc.AssertContainsTaggedFields(t, "elasticsearch_cluster_health", acc.AssertContainsTaggedFields(t, "elasticsearch_cluster_health",
clusterHealthExpected, clusterHealthExpected,
map[string]string{"name": "elasticsearch_telegraf"}) map[string]string{"name": "elasticsearch_telegraf"})
@ -185,7 +265,6 @@ func TestGatherClusterStatsNonMaster(t *testing.T) {
// ensure flag is clear so Cluster Stats would not be done // ensure flag is clear so Cluster Stats would not be done
checkIsMaster(es, false, t) checkIsMaster(es, false, t)
checkNodeStatsResult(t, &acc) checkNodeStatsResult(t, &acc)
} }
func newElasticsearchWithClient() *Elasticsearch { func newElasticsearchWithClient() *Elasticsearch {


@ -1,6 +1,21 @@
package elasticsearch package elasticsearch
const clusterHealthResponse = ` const clusterHealthResponse = `
{
"cluster_name": "elasticsearch_telegraf",
"status": "green",
"timed_out": false,
"number_of_nodes": 3,
"number_of_data_nodes": 3,
"active_primary_shards": 5,
"active_shards": 15,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0
}
`
const clusterHealthResponseWithIndices = `
{ {
"cluster_name": "elasticsearch_telegraf", "cluster_name": "elasticsearch_telegraf",
"status": "green", "status": "green",
@ -489,6 +504,100 @@ const nodeStatsResponse = `
} }
` `
const nodeStatsResponseJVMProcess = `
{
"cluster_name": "es-testcluster",
"nodes": {
"SDFsfSDFsdfFSDSDfSFDSDF": {
"timestamp": 1436365550135,
"name": "test.host.com",
"transport_address": "inet[/127.0.0.1:9300]",
"host": "test",
"ip": [
"inet[/127.0.0.1:9300]",
"NONE"
],
"attributes": {
"master": "true"
},
"process": {
"timestamp": 1436460392945,
"open_file_descriptors": 160,
"cpu": {
"percent": 2,
"sys_in_millis": 1870,
"user_in_millis": 13610,
"total_in_millis": 15480
},
"mem": {
"total_virtual_in_bytes": 4747890688
}
},
"jvm": {
"timestamp": 1436460392945,
"uptime_in_millis": 202245,
"mem": {
"heap_used_in_bytes": 52709568,
"heap_used_percent": 5,
"heap_committed_in_bytes": 259522560,
"heap_max_in_bytes": 1038876672,
"non_heap_used_in_bytes": 39634576,
"non_heap_committed_in_bytes": 40841216,
"pools": {
"young": {
"used_in_bytes": 32685760,
"max_in_bytes": 279183360,
"peak_used_in_bytes": 71630848,
"peak_max_in_bytes": 279183360
},
"survivor": {
"used_in_bytes": 8912880,
"max_in_bytes": 34865152,
"peak_used_in_bytes": 8912888,
"peak_max_in_bytes": 34865152
},
"old": {
"used_in_bytes": 11110928,
"max_in_bytes": 724828160,
"peak_used_in_bytes": 14354608,
"peak_max_in_bytes": 724828160
}
}
},
"threads": {
"count": 44,
"peak_count": 45
},
"gc": {
"collectors": {
"young": {
"collection_count": 2,
"collection_time_in_millis": 98
},
"old": {
"collection_count": 1,
"collection_time_in_millis": 24
}
}
},
"buffer_pools": {
"direct": {
"count": 40,
"used_in_bytes": 6304239,
"total_capacity_in_bytes": 6304239
},
"mapped": {
"count": 0,
"used_in_bytes": 0,
"total_capacity_in_bytes": 0
}
}
}
}
}
}
`
var nodestatsIndicesExpected = map[string]interface{}{ var nodestatsIndicesExpected = map[string]interface{}{
"id_cache_memory_size_in_bytes": float64(0), "id_cache_memory_size_in_bytes": float64(0),
"completion_size_in_bytes": float64(0), "completion_size_in_bytes": float64(0),


@ -5,6 +5,8 @@ import (
"strings" "strings"
"testing" "testing"
"github.com/stretchr/testify/require"
"github.com/influxdata/telegraf/testutil" "github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
) )
@ -24,28 +26,24 @@ func TestGatherNoMd5(t *testing.T) {
tags1 := map[string]string{ tags1 := map[string]string{
"file": dir + "log1.log", "file": dir + "log1.log",
} }
fields1 := map[string]interface{}{ require.True(t, acc.HasPoint("filestat", tags1,
"size_bytes": int64(0), "size_bytes", int64(0)))
"exists": int64(1), require.True(t, acc.HasPoint("filestat", tags1,
} "exists", int64(1)))
acc.AssertContainsTaggedFields(t, "filestat", fields1, tags1)
tags2 := map[string]string{ tags2 := map[string]string{
"file": dir + "log2.log", "file": dir + "log2.log",
} }
fields2 := map[string]interface{}{ require.True(t, acc.HasPoint("filestat", tags2,
"size_bytes": int64(0), "size_bytes", int64(0)))
"exists": int64(1), require.True(t, acc.HasPoint("filestat", tags2,
} "exists", int64(1)))
acc.AssertContainsTaggedFields(t, "filestat", fields2, tags2)
tags3 := map[string]string{ tags3 := map[string]string{
"file": "/non/existant/file", "file": "/non/existant/file",
} }
fields3 := map[string]interface{}{ require.True(t, acc.HasPoint("filestat", tags3,
"exists": int64(0), "exists", int64(0)))
}
acc.AssertContainsTaggedFields(t, "filestat", fields3, tags3)
} }
func TestGatherExplicitFiles(t *testing.T) { func TestGatherExplicitFiles(t *testing.T) {
@ -64,30 +62,28 @@ func TestGatherExplicitFiles(t *testing.T) {
tags1 := map[string]string{ tags1 := map[string]string{
"file": dir + "log1.log", "file": dir + "log1.log",
} }
fields1 := map[string]interface{}{ require.True(t, acc.HasPoint("filestat", tags1,
"size_bytes": int64(0), "size_bytes", int64(0)))
"exists": int64(1), require.True(t, acc.HasPoint("filestat", tags1,
"md5_sum": "d41d8cd98f00b204e9800998ecf8427e", "exists", int64(1)))
} require.True(t, acc.HasPoint("filestat", tags1,
acc.AssertContainsTaggedFields(t, "filestat", fields1, tags1) "md5_sum", "d41d8cd98f00b204e9800998ecf8427e"))
tags2 := map[string]string{ tags2 := map[string]string{
"file": dir + "log2.log", "file": dir + "log2.log",
} }
fields2 := map[string]interface{}{ require.True(t, acc.HasPoint("filestat", tags2,
"size_bytes": int64(0), "size_bytes", int64(0)))
"exists": int64(1), require.True(t, acc.HasPoint("filestat", tags2,
"md5_sum": "d41d8cd98f00b204e9800998ecf8427e", "exists", int64(1)))
} require.True(t, acc.HasPoint("filestat", tags2,
acc.AssertContainsTaggedFields(t, "filestat", fields2, tags2) "md5_sum", "d41d8cd98f00b204e9800998ecf8427e"))
tags3 := map[string]string{ tags3 := map[string]string{
"file": "/non/existant/file", "file": "/non/existant/file",
} }
fields3 := map[string]interface{}{ require.True(t, acc.HasPoint("filestat", tags3,
"exists": int64(0), "exists", int64(0)))
}
acc.AssertContainsTaggedFields(t, "filestat", fields3, tags3)
} }
func TestGatherGlob(t *testing.T) { func TestGatherGlob(t *testing.T) {
@ -136,32 +132,32 @@ func TestGatherSuperAsterisk(t *testing.T) {
tags1 := map[string]string{ tags1 := map[string]string{
"file": dir + "log1.log", "file": dir + "log1.log",
} }
fields1 := map[string]interface{}{ require.True(t, acc.HasPoint("filestat", tags1,
"size_bytes": int64(0), "size_bytes", int64(0)))
"exists": int64(1), require.True(t, acc.HasPoint("filestat", tags1,
"md5_sum": "d41d8cd98f00b204e9800998ecf8427e", "exists", int64(1)))
} require.True(t, acc.HasPoint("filestat", tags1,
acc.AssertContainsTaggedFields(t, "filestat", fields1, tags1) "md5_sum", "d41d8cd98f00b204e9800998ecf8427e"))
tags2 := map[string]string{ tags2 := map[string]string{
"file": dir + "log2.log", "file": dir + "log2.log",
} }
fields2 := map[string]interface{}{ require.True(t, acc.HasPoint("filestat", tags2,
"size_bytes": int64(0), "size_bytes", int64(0)))
"exists": int64(1), require.True(t, acc.HasPoint("filestat", tags2,
"md5_sum": "d41d8cd98f00b204e9800998ecf8427e", "exists", int64(1)))
} require.True(t, acc.HasPoint("filestat", tags2,
acc.AssertContainsTaggedFields(t, "filestat", fields2, tags2) "md5_sum", "d41d8cd98f00b204e9800998ecf8427e"))
tags3 := map[string]string{ tags3 := map[string]string{
"file": dir + "test.conf", "file": dir + "test.conf",
} }
fields3 := map[string]interface{}{ require.True(t, acc.HasPoint("filestat", tags3,
"size_bytes": int64(104), "size_bytes", int64(104)))
"exists": int64(1), require.True(t, acc.HasPoint("filestat", tags3,
"md5_sum": "5a7e9b77fa25e7bb411dbd17cf403c1f", "exists", int64(1)))
} require.True(t, acc.HasPoint("filestat", tags3,
acc.AssertContainsTaggedFields(t, "filestat", fields3, tags3) "md5_sum", "5a7e9b77fa25e7bb411dbd17cf403c1f"))
} }
func TestGetMd5(t *testing.T) { func TestGetMd5(t *testing.T) {


@ -34,137 +34,82 @@ cpu_load_short,host=server06 value=12.0 1422568543702900257
emptyMsg = "" emptyMsg = ""
serviceRootPEM = `-----BEGIN CERTIFICATE----- serviceRootPEM = `-----BEGIN CERTIFICATE-----
MIIDRTCCAi2gAwIBAgIUenakcvMDj2URxBvUHBe0Mfhac0cwDQYJKoZIhvcNAQEL MIIBxzCCATCgAwIBAgIJAOLq2g9+9TVgMA0GCSqGSIb3DQEBCwUAMBYxFDASBgNV
BQAwGzEZMBcGA1UEAxMQdGVsZWdyYWYtdGVzdC1jYTAeFw0xNzA4MzEwNTE5NDNa BAMMC1RlbGVncmFmIENBMB4XDTE3MTAwMjIyNDMwOFoXDTE3MTEwMTIyNDMwOFow
Fw0yNzA4MjkwNTIwMTNaMBsxGTAXBgNVBAMTEHRlbGVncmFmLXRlc3QtY2EwggEi FjEUMBIGA1UEAwwLVGVsZWdyYWYgQ0EwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJ
MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDxpDlUEC6LNXQMhvTtlWKUekwa AoGBALHtGXLKZz3HUA4E1H0mR3gAtgNwUSRArxylCjQwO/7tFEYDFVCCPFzAF7G8
xh2OaiR16WvO8iA+sYmjlpFXOe+V6YWT+daOGujCqlGdrfDjj3C3pqFPJ6Q4VXaA hzHyBNgx5FwNrH3bMEol9iIxzoZNU0XTWS7DzN4S+89C2Tn+NaFko/SeFBMp4IK/
xQyd0Ena7kRtuQ/IUSpTWxyrpSIzKL3dAoV0NYpjFWznjVMP3Rq4l+4cHqviZSvK 55YAgcYGe2QbFnPITGYPT05VkbSBMD0PBITNSwsclGZGFVoHAgMBAAGjHTAbMAwG
bWUK5n0vBGpEw3A22V9urhlSNkSbECvzn9EFHyIeJX603zaKXYw5wiDwCp1swbXW A1UdEwQFMAMBAf8wCwYDVR0PBAQDAgEGMA0GCSqGSIb3DQEBCwUAA4GBAIJpAA+X
2WS2h45JeI5xrpKcFmLaqRNe0swi6bkGnmefyCv7nsbOLeKyEW9AExDSd6nSLdu9 QB57JhNxevUlFFLmGx7ASKrOeZLupzak4qUK718erafMAsXhydx1eKL/5Ne7ZcFa
TGzhAfnfodcajSmKiQ+7YL9JY1bQ9hlfXk1ULg4riSEMKF+trZFZUanaXeeBAgMB Tf6dRPzCjv89WzYK/kJ59AgATkXNPvADRUKd0ViQw4Q4EcfuQrTMEym+gl1W2qQl
AAGjgYAwfjAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4E U9/eBDE341pcrfdHHGhS5LKv6KTmjyYmDLxl
FgQUiPkCD8gEsSgIiV8jzACMoUZcHaIwHwYDVR0jBBgwFoAUiPkCD8gEsSgIiV8j
zACMoUZcHaIwGwYDVR0RBBQwEoIQdGVsZWdyYWYtdGVzdC1jYTANBgkqhkiG9w0B
AQsFAAOCAQEAXeadR7ZVkb2C0F8OEd2CQxVt2/JOqM4G2N2O8uTwf+hIn+qm+jbb
Q6JokGhr5Ybhvtv3U9JnI6RVI+TOYNkDzs5e2DtntFQmcKb2c+y5Z+OpvWd13ObK
GMCs4bho6O7h1qo1Z+Ftd6sYQ7JL0MuTGWCNbXv2c1iC4zPT54n1vGZC5so08RO0
r7bqLLEnkSawabvSAeTxtweCXJUw3D576e0sb8oU0AP/Hn/2IC9E1vFZdjDswEfs
ARE4Oc5XnN6sqjtp0q5CqPpW6tYFwxdtZFk0VYPXyRnETVgry7Dc/iX6mktIYUx+
qWSyPEDKALyxx6yUyVDqgcY2VUm0rM/1Iw==
-----END CERTIFICATE-----` -----END CERTIFICATE-----`
serviceCertPEM = `-----BEGIN CERTIFICATE----- serviceCertPEM = `-----BEGIN CERTIFICATE-----
MIIDKjCCAhKgAwIBAgIUVYjQKruuFavlMZvV7X6RRF4OyBowDQYJKoZIhvcNAQEL MIIBzzCCATigAwIBAgIBATANBgkqhkiG9w0BAQsFADAWMRQwEgYDVQQDDAtUZWxl
BQAwGzEZMBcGA1UEAxMQdGVsZWdyYWYtdGVzdC1jYTAeFw0xNzA4MzEwNTM3MjRa Z3JhZiBDQTAeFw0xNzEwMDIyMjQzMDhaFw0yNzA5MzAyMjQzMDhaMBQxEjAQBgNV
Fw0xNzA5MzAwNTM3NTRaMBQxEjAQBgNVBAMTCWxvY2FsaG9zdDCCASIwDQYJKoZI BAMMCWxvY2FsaG9zdDCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAoI/8ceps
hvcNAQEBBQADggEPADCCAQoCggEBANojLHm+4ttLfl8xo4orZ436/o36wdQ30sWz DvvA3KUDViYwZcB+RvfT6XCPCT35mEzuXWP42JHk1VPNA41215U8CGoJF7+OzRcZ
xE8eGejhARvCSNIR1Tau41Towq/MQVQQejQJRgqBSz7UEfzJNJGKKKc560j6fmTM an3b2WLfAph+bi4Vmpe8eolmPHjf57jJ2fdDeLtMA4T0WF8yR4fHxrrU2UFsgXod
FHpFNZcTrNrTb0r3blUWF1oswhTgg313OXbVsz+E9tHkT1p/s9uURy3TJ3O/CFHq kpQNqa/R5+iEKNMQVQgD2HjP5BE1u+H6fscCAwEAAaMvMC0wCQYDVR0TBAIwADAL
2vTiTQMTq31v0FEN1E/d6uzMhnGy5+QuRu/0A2iPpgXgPopYZwG5t4hN1KklM//l BgNVHQ8EBAMCBSAwEwYDVR0lBAwwCgYIKwYBBQUHAwEwDQYJKoZIhvcNAQELBQAD
j2gMlX6mAYalctFOkDbhIe4/4dQcfT0sWA49KInZmUeB1RdyiNfCoXnDRZHocPIj gYEAV5vx8FHNlD6Z3e01/MiNSmYXn93CwlixYMRyW1Ri2P6hMtJiMRp59fNFzroa
ltYAK/Igda0fdlMisoqh2ZMrCt8yhws7ycc12cFi7ZMv8zvi5p8CAwEAAaNtMGsw iv6djr30uuKYOiAvdKhNaYWERgrtjGVEuPoIMQfaAaKHQj6CKLBXeGZ5Gxhy+M6G
EwYDVR0lBAwwCgYIKwYBBQUHAwEwHQYDVR0OBBYEFCdE87Nz7vPpgRmj++6J8rQR OE6g0E4ufHOqr1h1GDIiAq88zyJ2AupgLLUCMFtkq0v0mr0=
0F/TMB8GA1UdIwQYMBaAFIj5Ag/IBLEoCIlfI8wAjKFGXB2iMBQGA1UdEQQNMAuC
CWxvY2FsaG9zdDANBgkqhkiG9w0BAQsFAAOCAQEAIPhMYCsCPvOcvLLkahaZVn2g
ZbzPDplFhEsH1cpc7vd3GCV2EYjNTbBTDs5NlovSbJLf1DFB+gwsfEjhlFVZB3UQ
6GtuA5CQh/Skv8ngCDiLP50BbKF0CLa4Ia0xrSTAyRsg2rt9APphbej0yKqJ7j8U
1KK6rjOSnuzrKseex26VVovjPFq0FgkghWRm0xrAeizGTBCSEStZEPhk3pBo2x95
a32VPpmhlQMDyiV6m1cc9/MfxMisnyeLqJl8E9nziNa4/BgwwN9DcOp63D9OOa6A
brtLz8OXqvV+7gKlq+nASFDimXwFKRyqRH6ECyHNTE2K14KZb7+JTa0AUm6Nlw==
-----END CERTIFICATE-----` -----END CERTIFICATE-----`
serviceKeyPEM = `-----BEGIN RSA PRIVATE KEY----- serviceKeyPEM = `-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEA2iMseb7i20t+XzGjiitnjfr+jfrB1DfSxbPETx4Z6OEBG8JI MIICXAIBAAKBgQCgj/xx6mwO+8DcpQNWJjBlwH5G99PpcI8JPfmYTO5dY/jYkeTV
0hHVNq7jVOjCr8xBVBB6NAlGCoFLPtQR/Mk0kYoopznrSPp+ZMwUekU1lxOs2tNv U80DjXbXlTwIagkXv47NFxlqfdvZYt8CmH5uLhWal7x6iWY8eN/nuMnZ90N4u0wD
SvduVRYXWizCFOCDfXc5dtWzP4T20eRPWn+z25RHLdMnc78IUera9OJNAxOrfW/Q hPRYXzJHh8fGutTZQWyBeh2SlA2pr9Hn6IQo0xBVCAPYeM/kETW74fp+xwIDAQAB
UQ3UT93q7MyGcbLn5C5G7/QDaI+mBeA+ilhnAbm3iE3UqSUz/+WPaAyVfqYBhqVy AoGABiRb6NOp3Ize3NHnJcWCNnI9omNalOR8ZEMdqCjROXtYiphSI6L4BbnEoQyR
0U6QNuEh7j/h1Bx9PSxYDj0oidmZR4HVF3KI18KhecNFkehw8iOW1gAr8iB1rR92 ZlUAEgt+3/ORQlScM12n4EaLF4Zi4CTGmibRHUff/ybUDGMg2Lp/AL/ghP/3U37l
UyKyiqHZkysK3zKHCzvJxzXZwWLtky/zO+LmnwIDAQABAoIBABD8MidcrK9kpndl C/oRjohK9Rqn28hf8xgL9Jz+KbQaVv5f+frLwL3EKreYtOkCQQDLe1s89rbxvTZr
FxXYIV0V0SJfBx6uJhRM1hlO/7d5ZauyqhbpWo/CeGMRKK+lmOShz9Ijcre4r5I5 PhtwYrnXC8KbBNPIzJbTXrphqr0H3xuDlTpd+4tvIlL6LoqANYXAmHHlKUuPcar6
0xi61gQLHPVAdkidcKAKoAGRSAX2ezwiwIS21Xl8md7ko0wa20I2uVu+chGdGdbo QCj9xNwTAkEAygDRac8qewqIWhZOs0u8phC37dxzwVXslrgjO+kTLxN/Q1srK45T
DyG91dRgLFauHWFO26f9QIVW5aY6ifyjg1fyxR/9n2YZfkqbjvASW4Mmfv5GR1aT gHDbJuCrBPkYrjAXWHd2rIkOWl0rk38A/QJADct4HQLw1iSous6EF7Npu+19LPs/
mffajgsquy78PKs86f879iG+cfCzPYdoK+h7fsm4EEqDwK8JCsUIY1qN+Tuj5RQY zF4qX3wNkK99jzoN6HbGdTandkpSa8mZ9CUswyjSl+Gb0Ma4+6w72zBsZwJBAKn+
zuIuD34+wywe7Jd1vwjQ40Cyilgtnu8Q8s8J05bXrD3mqer5nrqIGOX0vKgs+EXx Zj0VCjrhcj3d5/0bD3bxOtgBXaimFqP/8ibIzkwfrEmSv5G4BK1iTAs7prBYsFxm
1hV+6ZECgYEA+950L2u8oPzNXu9BAL8Y5Tl384qj1+Cj/g28MuZFoPf/KU0HRN6l PD9GyagI7vs8zR8jEkECQD51jhM8DDPah/ECC31we54Y9dqBOupy1a8y6os1YFkv
PBlXKaGP9iX+749tdiNPk5keIwOL8xCVXOpMLOA/jOlGODydG9rX67WCL/R1RcJR BV7zTVrpOzwUsrkMW+wFyQSX9eyyMfJHJihlobXA+QY=
+Pip8dxO1ZNpOKHud06XLMuuVz9qNq0Xwb1VCzNTOxnEDwtXNyDm6OkCgYEA3bcW
hMeDNn85UA4n0ljcdCmEu12WS7L//jaAOWuPPfM3GgKEIic6hqrPWEuZlZtQnybx
L6qQgaWyCfl/8z0+5WynQqkVPz1j1dSrSKnemxyeySrmUcOH5UJfAKaG5PUd7H3t
oPTCxkbW3Bi2QLlgd4nb7+OEk6w0V9Zzv4AFHkcCgYBL/aD2Ub4WoE9iLjNhg0aC
mmUrcI/gaSFxXDmE7d7iIxC0KE5iI/6cdFTM9bbWoD4bjx2KgDrZIGBsVfyaeE1o
PDSBcaMa46LRAtCv/8YXkqrVxx6+zlMnF/dGRp7uZ0xeztSA4JBR7p4KKtLj7jN1
u6b1+yVIdoylsVk+A8pHSQKBgQCUcsn5DTyleHl/SHsRM74naMUeToMbHDaalxMz
XvkBmZ8DIzwlQe7FzAgYLkYfDWblqMVEDQfERpT2aL9qtU8vfZhf4aYAObJmsYYd
mN8bLAaE2txrUmfi8JV7cgRPuG7YsVgxtK/U4glqRIGCxJv6bat86vERjvNc/JFz
XtwOcQKBgF83Ov+EA9pL0AFs+BMiO+0SX/OqLX0TDMSqUKg3jjVfgl+BSBEZIsOu
g5jqHBx3Om/UyrXdn+lnMhyEgCuNkeC6057B5iGcWucTlnINeejXk/pnbvMtGjD1
OGWmdXhgLtKg6Edqm+9fnH0UJN6DRxRRCUfzMfbY8TRmLzZG2W34
-----END RSA PRIVATE KEY-----` -----END RSA PRIVATE KEY-----`
clientRootPEM = `-----BEGIN CERTIFICATE----- clientRootPEM = `-----BEGIN CERTIFICATE-----
MIIDRTCCAi2gAwIBAgIUenakcvMDj2URxBvUHBe0Mfhac0cwDQYJKoZIhvcNAQEL MIIBxzCCATCgAwIBAgIJAOLq2g9+9TVgMA0GCSqGSIb3DQEBCwUAMBYxFDASBgNV
BQAwGzEZMBcGA1UEAxMQdGVsZWdyYWYtdGVzdC1jYTAeFw0xNzA4MzEwNTE5NDNa BAMMC1RlbGVncmFmIENBMB4XDTE3MTAwMjIyNDMwOFoXDTE3MTEwMTIyNDMwOFow
Fw0yNzA4MjkwNTIwMTNaMBsxGTAXBgNVBAMTEHRlbGVncmFmLXRlc3QtY2EwggEi FjEUMBIGA1UEAwwLVGVsZWdyYWYgQ0EwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJ
MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDxpDlUEC6LNXQMhvTtlWKUekwa AoGBALHtGXLKZz3HUA4E1H0mR3gAtgNwUSRArxylCjQwO/7tFEYDFVCCPFzAF7G8
xh2OaiR16WvO8iA+sYmjlpFXOe+V6YWT+daOGujCqlGdrfDjj3C3pqFPJ6Q4VXaA hzHyBNgx5FwNrH3bMEol9iIxzoZNU0XTWS7DzN4S+89C2Tn+NaFko/SeFBMp4IK/
xQyd0Ena7kRtuQ/IUSpTWxyrpSIzKL3dAoV0NYpjFWznjVMP3Rq4l+4cHqviZSvK 55YAgcYGe2QbFnPITGYPT05VkbSBMD0PBITNSwsclGZGFVoHAgMBAAGjHTAbMAwG
bWUK5n0vBGpEw3A22V9urhlSNkSbECvzn9EFHyIeJX603zaKXYw5wiDwCp1swbXW A1UdEwQFMAMBAf8wCwYDVR0PBAQDAgEGMA0GCSqGSIb3DQEBCwUAA4GBAIJpAA+X
2WS2h45JeI5xrpKcFmLaqRNe0swi6bkGnmefyCv7nsbOLeKyEW9AExDSd6nSLdu9 QB57JhNxevUlFFLmGx7ASKrOeZLupzak4qUK718erafMAsXhydx1eKL/5Ne7ZcFa
TGzhAfnfodcajSmKiQ+7YL9JY1bQ9hlfXk1ULg4riSEMKF+trZFZUanaXeeBAgMB Tf6dRPzCjv89WzYK/kJ59AgATkXNPvADRUKd0ViQw4Q4EcfuQrTMEym+gl1W2qQl
AAGjgYAwfjAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4E U9/eBDE341pcrfdHHGhS5LKv6KTmjyYmDLxl
FgQUiPkCD8gEsSgIiV8jzACMoUZcHaIwHwYDVR0jBBgwFoAUiPkCD8gEsSgIiV8j
zACMoUZcHaIwGwYDVR0RBBQwEoIQdGVsZWdyYWYtdGVzdC1jYTANBgkqhkiG9w0B
AQsFAAOCAQEAXeadR7ZVkb2C0F8OEd2CQxVt2/JOqM4G2N2O8uTwf+hIn+qm+jbb
Q6JokGhr5Ybhvtv3U9JnI6RVI+TOYNkDzs5e2DtntFQmcKb2c+y5Z+OpvWd13ObK
GMCs4bho6O7h1qo1Z+Ftd6sYQ7JL0MuTGWCNbXv2c1iC4zPT54n1vGZC5so08RO0
r7bqLLEnkSawabvSAeTxtweCXJUw3D576e0sb8oU0AP/Hn/2IC9E1vFZdjDswEfs
ARE4Oc5XnN6sqjtp0q5CqPpW6tYFwxdtZFk0VYPXyRnETVgry7Dc/iX6mktIYUx+
qWSyPEDKALyxx6yUyVDqgcY2VUm0rM/1Iw==
-----END CERTIFICATE-----` -----END CERTIFICATE-----`
clientCertPEM = `-----BEGIN CERTIFICATE----- clientCertPEM = `-----BEGIN CERTIFICATE-----
MIIDMDCCAhigAwIBAgIUIVOF5g2zH6+J/dbGdu4q18aSJoMwDQYJKoZIhvcNAQEL MIIBzjCCATegAwIBAgIBAjANBgkqhkiG9w0BAQsFADAWMRQwEgYDVQQDDAtUZWxl
BQAwGzEZMBcGA1UEAxMQdGVsZWdyYWYtdGVzdC1jYTAeFw0xNzA4MzEwNTQ1MzJa Z3JhZiBDQTAeFw0xNzEwMDIyMjQzMDhaFw0yNzA5MzAyMjQzMDhaMBMxETAPBgNV
Fw0yNzA4MjUwMTQ2MDJaMBcxFTATBgNVBAMTDGR1bW15LWNsaWVudDCCASIwDQYJ BAMMCHRlbGVncmFmMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDIrPGv8Sm1
KoZIhvcNAQEBBQADggEPADCCAQoCggEBAKok1HJ40buyjrS+DG9ORLzrWIJad2y/ 6tI+vlATzWGOK1D40iNTiGj4FpcS2Tm4SdaDSfa3VL9N5l8aeuN4E8O2YXK3QcR8
6X2Bg9MSENfpEUgaS7nK2ML3m1e2poHqBSR+V8VECNs+MDCLSOeQ4FC1TdBKMLfw NoeY87cWW06PtFc/ByS42VeWDKt28/DpGzbrzCVNOumS3X5QEyySYLpi0uqI9ZZ5
NxW88y5Gj6rTRcAXl092ba7stwbqJPBAZu1Eh1jXIp5nrFKh8Jq7kRxmMB5vC70V O2sOJ2yVua8F3cwqPTveVmU3LeQfVrh7QwIDAQABoy8wLTAJBgNVHRMEAjAAMAsG
fOSPS0RZtEd7D+QZ6jgkFJWsZzn4gJr8nc/kmLcntLw+g/tz9/8lfaV306tLlhMH A1UdDwQEAwIHgDATBgNVHSUEDDAKBggrBgEFBQcDAjANBgkqhkiG9w0BAQsFAAOB
dv3Ka6Nt86j6/muOwvoeAkAnCEFAgDcXg4F37PFAiEHRw9DyTeWDuZqvnMZ3gosL gQAVEfHePY9fumW8rkbbSbiuQ1dGIINbMGPO17eAjOxMT4Z1jDb8oTVHbaZM0rKo
kl15QhnP0yG2QCjSb1gaLcKB42cyxDnPc31WsVuuzQnajazcVf3lJW0CAwEAAaNw wKx4dDp5mnLK+NuMZ1sNxKOf6IMmQ022ANOYM0dkwfg13bpC3BGW8Z7nOFK0xXh6
MG4wEwYDVR0lBAwwCgYIKwYBBQUHAwIwHQYDVR0OBBYEFCemMO+Qlj+YCLQ3ScAQ 4KTcXktBUtubmn6w7szvWY2OajPVoiGgcapwwhCrBEa6rg==
8XYJJJ5ZMB8GA1UdIwQYMBaAFIj5Ag/IBLEoCIlfI8wAjKFGXB2iMBcGA1UdEQQQ
MA6CDGR1bW15LWNsaWVudDANBgkqhkiG9w0BAQsFAAOCAQEARThbApKvvGDp7uSc
mINaqDOHe69F9PepV0/3+B5+X1b3yd2sbzZL/ZoHl27kajSHVrUF+09gcTosfuY3
omnIPw+NseqTJG+qTMRb3AarLNO46EJZLOowAEhnJyVmhK5uU0YqhV1X9eN+g4/o
BuyOPvHj6UJWviZFy6fDIj2N+ygN/CNP5X3iLDBUoyCEHAehLiQr0aRgsqe4JLlS
P+0l0btTUpcqUhsQy+sD2lv3MO1tZ/P4zhzu0J0LUeLBDdOPf/FIvTgkCNxN9GGy
SLmeBeCzsKmWbzE3Yuahw3h4IblVyyGc7ZDGIobDrZgFqshcZylU8wrsjUnjNSPA
G+LOWQ==
-----END CERTIFICATE-----` -----END CERTIFICATE-----`
clientKeyPEM = `-----BEGIN RSA PRIVATE KEY----- clientKeyPEM = `-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAqiTUcnjRu7KOtL4Mb05EvOtYglp3bL/pfYGD0xIQ1+kRSBpL MIICXgIBAAKBgQDIrPGv8Sm16tI+vlATzWGOK1D40iNTiGj4FpcS2Tm4SdaDSfa3
ucrYwvebV7amgeoFJH5XxUQI2z4wMItI55DgULVN0Eowt/A3FbzzLkaPqtNFwBeX VL9N5l8aeuN4E8O2YXK3QcR8NoeY87cWW06PtFc/ByS42VeWDKt28/DpGzbrzCVN
T3Ztruy3Buok8EBm7USHWNcinmesUqHwmruRHGYwHm8LvRV85I9LRFm0R3sP5Bnq OumS3X5QEyySYLpi0uqI9ZZ5O2sOJ2yVua8F3cwqPTveVmU3LeQfVrh7QwIDAQAB
OCQUlaxnOfiAmvydz+SYtye0vD6D+3P3/yV9pXfTq0uWEwd2/cpro23zqPr+a47C AoGAHtvpdqLhRSZNGnTtn33vyIsEsp6t7ASID855gN6Cr8I7CIlxNRQFLxeD/HB1
+h4CQCcIQUCANxeDgXfs8UCIQdHD0PJN5YO5mq+cxneCiwuSXXlCGc/TIbZAKNJv VlvDtuIZX/DvJCLGi1C/EOMNm2nY7IT2gZgMpxvmfjfGhHKT1MWYu9cdyiOOacqD
WBotwoHjZzLEOc9zfVaxW67NCdqNrNxV/eUlbQIDAQABAoIBAAXZYEhTKPqn58oE yRDAcKpubIPEIV3aczglv9sVApXwZcgePzDwweTVfP/Nv5ECQQDthIv5Y5k3UO8h
4o6NBUXtXUyV6ZcefdtnsW13KIcTpxlwdfv8IjmJo5h/WfgLYIPhqAjLDvbii2uP Hht+27W8McFJ5eiF5OcLGOQ4nKGHkCOskfD4u/i+j+4dUeGBdpT8CzszgofBa6wh
zkDPtTZxFSy88DHSm0IvDbkgid3Yh4RUC0qbCqhB0QT21bBAtokfmvuN4c3KSJ1K dJevQerVAkEA2Ep8PUfXRjel8NiLNL8iK/SR26y8wPduKam31SMUPq71+GROKkFz
nefj3Ng6Fxtku+WTMIj2+CJwZwcyAH47ZUngYs/77gA0hAJcbdL/bj8Bpmd+lH6C yYYAbKORs+fS6LBT+M48cEu470o+g8eptwJBALzCEMeSOqp2XIRSPAG2NBiq5fSH
Ci22T2hrw+cpWMN6qwa3wxWIneCaqxkylSgpUzSNE0QO3mXkX+NYtL2BQ0w+wPqq jSIThvYPwxemisyEZYV4uivCnu06zz5n2zIa/k3L0zGdc6vomPRBh2aVmT0CQQCY
lww3QJOFAX1qRLflglL9K+ruTQofm49vxv6apsoqdkrxEBoPzkljlqiPRmzUxau4 /B5ibfUbqnLKJzBXb7Xo50Vf3w9nYdvexjfMHtLL/47lUXVkOAWBDjIwpYWCfb/V
cvbApQECgYEAy5m5O3mQt6DBrDRJWSwpZh6DRNd5USEqXOIFtp+Qze2Jx1pYQfZt bBsJCj7/ot+9CYOsTEaDAkEA4XAGFxx78JMVuJLjevkf0pGUPEocdoOAvpYWT5sR
NOXOrwy04o0+6yLzc4O4W5ta2KfTlALFzCa6Na3Ca4ZUAeteWprrdh8b1b2w/wUH 9FODrPEtW84ZevSmuByjzeqVzS3ElIxACopRJgSN20d9vg==
E3uQFkvH0zFdPsA3pTTZ0k/ydmHnu4zZqBnSeh0dIW8xFYgZZCgQusECgYEA1e7O
ujCUa8y49sY42D/Y/c8B96xVfJZO5hhY7eLgkzqUlmFl31Ld7AjlJcXpbMeW1vaa
0Mxbfx2qAVaZEkvdnXq3V8spe6qOGBdlKzey4DMEfmEXLFp5DRYCSwpXiqDZcGqc
jwI58wuzKoDgydN9bLdF8XYGtQXnHIE9WyTYMa0CgYBKYSBgb+rEir/2LyvUneOJ
4P/HuIgjcWBOimvX6bc2495/q6uufV4sAwBcxuGWGk+wCxaxTp+dJ8YqfDU5T0H/
cO56Cb6LFYm/IcNYilwWzQqYLTJqF+Yb4fojiw+3QcN01zf87K/eu0IyqVXFGJGz
bauM3PH1cu+VlCDijBiAgQKBgDOQ9YmRriTx2t+41fjiIvbC0BGYG58FSA1UbxMg
LcuvQiOhZIHZIp8DYeCh/Or4jRZRqO2NZLyWNOVPr2Pmn4uXCdyCnwQtD0UlVoB9
U4ORKJMh6gkJ4cXSuUjHPGSw8tiTChu6iKdZ+ZzUJdrgPIpY/uX98g3uV0/aoyR2
FBqdAoGAQIrcOsTpCe6l3ZDtQyNIeAj1s7sZyW8MBy95RXw3y/yzVEOAu4yWNobj
RReeHQEsrQq+sJ/cols8HfoOpGpL3U0IGDi5vr1JlOXmBhFX2xuFrfh3jvgXlUqb
fqxPcT3d7I/UEi0ueDh3osyTn46mDfRfF7HBLBNeyQbIFWBDDus=
-----END RSA PRIVATE KEY-----` -----END RSA PRIVATE KEY-----`
) )


@ -98,6 +98,7 @@ func (h *HTTPResponse) createHttpClient() (*http.Client, error) {
} }
client := &http.Client{ client := &http.Client{
Transport: &http.Transport{ Transport: &http.Transport{
Proxy: http.ProxyFromEnvironment,
DisableKeepAlives: true, DisableKeepAlives: true,
TLSClientConfig: tlsCfg, TLSClientConfig: tlsCfg,
}, },
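With `Proxy: http.ProxyFromEnvironment`, the transport now honors the standard `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY` environment variables. A minimal sketch of the resolution behavior (note the stdlib caches these variables on first use):

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Set before the first request: ProxyFromEnvironment caches these.
	os.Setenv("HTTPS_PROXY", "http://proxy.example.com:3128")
	os.Setenv("NO_PROXY", "internal.example.com")

	req, _ := http.NewRequest("GET", "https://api.example.com/", nil)
	proxyURL, err := http.ProxyFromEnvironment(req)
	if err != nil {
		panic(err)
	}
	fmt.Println(proxyURL) // http://proxy.example.com:3128

	// Hosts matched by NO_PROXY connect directly (nil proxy URL).
	req2, _ := http.NewRequest("GET", "https://internal.example.com/", nil)
	proxyURL2, _ := http.ProxyFromEnvironment(req2)
	fmt.Println(proxyURL2) // <nil>
}
```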


@ -1,8 +1,7 @@
# Telegraf ipmi plugin # IPMI Sensor Input Plugin
Get bare metal metrics using the command line utility `ipmitool` Get bare metal metrics using the command line utility
[`ipmitool`](https://sourceforge.net/projects/ipmitool/files/ipmitool/).
see ipmitool(https://sourceforge.net/projects/ipmitool/files/ipmitool/)
If no servers are specified, the plugin will query the local machine sensor stats via the following command: If no servers are specified, the plugin will query the local machine sensor stats via the following command:
@ -16,18 +15,7 @@ When one or more servers are specified, the plugin will use the following comman
ipmitool -I lan -H SERVER -U USERID -P PASSW0RD sdr ipmitool -I lan -H SERVER -U USERID -P PASSW0RD sdr
``` ```
## Measurements ### Configuration
- ipmi_sensor:
* Tags: `name`, `unit`
* Fields:
- status
- value
The `server` tag will be made available when retrieving stats from remote server(s).
## Configuration
```toml ```toml
# Read metrics from the bare metal servers via IPMI # Read metrics from the bare metal servers via IPMI
@ -52,26 +40,49 @@ The `server` tag will be made available when retrieving stats from remote server
timeout = "20s" timeout = "20s"
``` ```
## Output ### Measurements
- ipmi_sensor:
- tags:
- name
- unit
- server (only when retrieving stats from remote servers)
- fields:
- status (int)
- value (float)
#### Permissions
When gathering from the local system, Telegraf needs permission to access the
IPMI device node. When using udev, you can create the device node with `rw`
permissions for the `telegraf` user by adding the following rule to
`/etc/udev/rules.d/52-telegraf-ipmi.rules`:
```
KERNEL=="ipmi*", MODE="660", GROUP="telegraf"
```
### Example Output
When retrieving stats from a remote server: When retrieving stats from a remote server:
``` ```
> ipmi_sensor,server=10.20.2.203,unit=degrees_c,name=ambient_temp status=1i,value=20 1458488465012559455 ipmi_sensor,server=10.20.2.203,unit=degrees_c,name=ambient_temp status=1i,value=20 1458488465012559455
> ipmi_sensor,server=10.20.2.203,unit=feet,name=altitude status=1i,value=80 1458488465012688613 ipmi_sensor,server=10.20.2.203,unit=feet,name=altitude status=1i,value=80 1458488465012688613
> ipmi_sensor,server=10.20.2.203,unit=watts,name=avg_power status=1i,value=220 1458488465012776511 ipmi_sensor,server=10.20.2.203,unit=watts,name=avg_power status=1i,value=220 1458488465012776511
> ipmi_sensor,server=10.20.2.203,unit=volts,name=planar_3.3v status=1i,value=3.28 1458488465012861875 ipmi_sensor,server=10.20.2.203,unit=volts,name=planar_3.3v status=1i,value=3.28 1458488465012861875
> ipmi_sensor,server=10.20.2.203,unit=volts,name=planar_vbat status=1i,value=3.04 1458488465013072508 ipmi_sensor,server=10.20.2.203,unit=volts,name=planar_vbat status=1i,value=3.04 1458488465013072508
> ipmi_sensor,server=10.20.2.203,unit=rpm,name=fan_1a_tach status=1i,value=2610 1458488465013137932 ipmi_sensor,server=10.20.2.203,unit=rpm,name=fan_1a_tach status=1i,value=2610 1458488465013137932
> ipmi_sensor,server=10.20.2.203,unit=rpm,name=fan_1b_tach status=1i,value=1775 1458488465013279896 ipmi_sensor,server=10.20.2.203,unit=rpm,name=fan_1b_tach status=1i,value=1775 1458488465013279896
``` ```
When retrieving stats from the local machine (no server specified): When retrieving stats from the local machine (no server specified):
``` ```
> ipmi_sensor,unit=degrees_c,name=ambient_temp status=1i,value=20 1458488465012559455 ipmi_sensor,unit=degrees_c,name=ambient_temp status=1i,value=20 1458488465012559455
> ipmi_sensor,unit=feet,name=altitude status=1i,value=80 1458488465012688613 ipmi_sensor,unit=feet,name=altitude status=1i,value=80 1458488465012688613
> ipmi_sensor,unit=watts,name=avg_power status=1i,value=220 1458488465012776511 ipmi_sensor,unit=watts,name=avg_power status=1i,value=220 1458488465012776511
> ipmi_sensor,unit=volts,name=planar_3.3v status=1i,value=3.28 1458488465012861875 ipmi_sensor,unit=volts,name=planar_3.3v status=1i,value=3.28 1458488465012861875
> ipmi_sensor,unit=volts,name=planar_vbat status=1i,value=3.04 1458488465013072508 ipmi_sensor,unit=volts,name=planar_vbat status=1i,value=3.04 1458488465013072508
> ipmi_sensor,unit=rpm,name=fan_1a_tach status=1i,value=2610 1458488465013137932 ipmi_sensor,unit=rpm,name=fan_1a_tach status=1i,value=2610 1458488465013137932
> ipmi_sensor,unit=rpm,name=fan_1b_tach status=1i,value=1775 1458488465013279896 ipmi_sensor,unit=rpm,name=fan_1b_tach status=1i,value=1775 1458488465013279896
``` ```


@ -35,7 +35,7 @@ var sampleConfig = `
## ##
# servers = ["USERID:PASSW0RD@lan(192.168.1.1)"] # servers = ["USERID:PASSW0RD@lan(192.168.1.1)"]
## Recomended: use metric 'interval' that is a multiple of 'timeout' to avoid ## Recommended: use metric 'interval' that is a multiple of 'timeout' to avoid
## gaps or overlap in pulled data ## gaps or overlap in pulled data
interval = "30s" interval = "30s"


@ -81,7 +81,7 @@ func TestIptables_Gather(t *testing.T) {
K 4520 RETURN tcp -- * * 0.0.0.0/0 0.0.0.0/0 K 4520 RETURN tcp -- * * 0.0.0.0/0 0.0.0.0/0
`}, `},
}, },
{ // 8 - Multiple rows, multipe chains => no error { // 8 - Multiple rows, multiple chains => no error
table: "filter", table: "filter",
chains: []string{"INPUT", "FORWARD"}, chains: []string{"INPUT", "FORWARD"},
values: []string{ values: []string{


@ -3,8 +3,6 @@ package leofs
import ( import (
"bufio" "bufio"
"fmt" "fmt"
"log"
"net/url"
"os/exec" "os/exec"
"strconv" "strconv"
"strings" "strings"
@ -19,7 +17,7 @@ import (
const oid = ".1.3.6.1.4.1.35450" const oid = ".1.3.6.1.4.1.35450"
// For Manager Master // For Manager Master
const defaultEndpoint = "udp://127.0.0.1:4020" const defaultEndpoint = "127.0.0.1:4020"
type ServerType int type ServerType int
@ -137,8 +135,8 @@ var serverTypeMapping = map[string]ServerType{
var sampleConfig = ` var sampleConfig = `
## An array of URLs of the form: ## An array of URLs of the form:
## "udp://" host [ ":" port] ## host [ ":" port]
servers = ["udp://127.0.0.1:4020"] servers = ["127.0.0.1:4020"]
` `
func (l *LeoFS) SampleConfig() string { func (l *LeoFS) SampleConfig() string {
@ -155,28 +153,22 @@ func (l *LeoFS) Gather(acc telegraf.Accumulator) error {
return nil return nil
} }
var wg sync.WaitGroup var wg sync.WaitGroup
for i, endpoint := range l.Servers { for _, endpoint := range l.Servers {
if !strings.HasPrefix(endpoint, "udp://") { results := strings.Split(endpoint, ":")
// Preserve backwards compatibility for hostnames without a
// scheme, broken in go 1.8. Remove in Telegraf 2.0 port := "4020"
endpoint = "udp://" + endpoint if len(results) > 2 {
log.Printf("W! [inputs.mongodb] Using %q as connection URL; please update your configuration to use an URL", endpoint)
l.Servers[i] = endpoint
}
u, err := url.Parse(endpoint)
if err != nil {
acc.AddError(fmt.Errorf("Unable to parse address %q: %s", endpoint, err))
continue
}
if u.Host == "" {
acc.AddError(fmt.Errorf("Unable to parse address %q", endpoint)) acc.AddError(fmt.Errorf("Unable to parse address %q", endpoint))
continue continue
} else if len(results) == 2 {
if _, err := strconv.Atoi(results[1]); err == nil {
port = results[1]
} else {
acc.AddError(fmt.Errorf("Unable to parse port from %q", endpoint))
continue
}
} }
port := u.Port()
if port == "" {
port = "4020"
}
st, ok := serverTypeMapping[port] st, ok := serverTypeMapping[port]
if !ok { if !ok {
st = ServerTypeStorage st = ServerTypeStorage
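The new code parses a bare `host[:port]` string instead of a URL: the port defaults to `4020`, and more than one colon or a non-numeric port is an error. An isolated sketch of just the port extraction (`portOf` is a hypothetical helper, not part of the plugin):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// portOf extracts the port from a host[:port] endpoint, mirroring the
// plugin's rules: default to "4020", reject extra colons or a
// non-numeric port.
func portOf(endpoint string) (string, error) {
	parts := strings.Split(endpoint, ":")
	switch {
	case len(parts) > 2:
		return "", fmt.Errorf("unable to parse address %q", endpoint)
	case len(parts) == 2:
		if _, err := strconv.Atoi(parts[1]); err != nil {
			return "", fmt.Errorf("unable to parse port from %q", endpoint)
		}
		return parts[1], nil
	default:
		return "4020", nil
	}
}

func main() {
	for _, ep := range []string{"127.0.0.1:4021", "127.0.0.1", "udp://127.0.0.1:4020"} {
		fmt.Println(portOf(ep))
	}
	// 4021 <nil>; 4020 <nil>; the old udp:// form now fails, as it
	// splits into three parts on ":".
}
```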
@ -196,7 +188,7 @@ func (l *LeoFS) gatherServer(
serverType ServerType, serverType ServerType,
acc telegraf.Accumulator, acc telegraf.Accumulator,
) error { ) error {
cmd := exec.Command("snmpwalk", "-v2c", "-cpublic", endpoint, oid) cmd := exec.Command("snmpwalk", "-v2c", "-cpublic", "-On", endpoint, oid)
stdout, err := cmd.StdoutPipe() stdout, err := cmd.StdoutPipe()
if err != nil { if err != nil {
return err return err


@ -16,21 +16,21 @@ package main
import "fmt" import "fmt"
const output = ` + "`" + `iso.3.6.1.4.1.35450.15.1.0 = STRING: "manager_888@127.0.0.1" const output = ` + "`" + `.1.3.6.1.4.1.35450.15.1.0 = STRING: "manager_888@127.0.0.1"
iso.3.6.1.4.1.35450.15.2.0 = Gauge32: 186 .1.3.6.1.4.1.35450.15.2.0 = Gauge32: 186
iso.3.6.1.4.1.35450.15.3.0 = Gauge32: 46235519 .1.3.6.1.4.1.35450.15.3.0 = Gauge32: 46235519
iso.3.6.1.4.1.35450.15.4.0 = Gauge32: 32168525 .1.3.6.1.4.1.35450.15.4.0 = Gauge32: 32168525
iso.3.6.1.4.1.35450.15.5.0 = Gauge32: 14066068 .1.3.6.1.4.1.35450.15.5.0 = Gauge32: 14066068
iso.3.6.1.4.1.35450.15.6.0 = Gauge32: 5512968 .1.3.6.1.4.1.35450.15.6.0 = Gauge32: 5512968
iso.3.6.1.4.1.35450.15.7.0 = Gauge32: 186 .1.3.6.1.4.1.35450.15.7.0 = Gauge32: 186
iso.3.6.1.4.1.35450.15.8.0 = Gauge32: 46269006 .1.3.6.1.4.1.35450.15.8.0 = Gauge32: 46269006
iso.3.6.1.4.1.35450.15.9.0 = Gauge32: 32202867 .1.3.6.1.4.1.35450.15.9.0 = Gauge32: 32202867
iso.3.6.1.4.1.35450.15.10.0 = Gauge32: 14064995 .1.3.6.1.4.1.35450.15.10.0 = Gauge32: 14064995
iso.3.6.1.4.1.35450.15.11.0 = Gauge32: 5492634 .1.3.6.1.4.1.35450.15.11.0 = Gauge32: 5492634
iso.3.6.1.4.1.35450.15.12.0 = Gauge32: 60 .1.3.6.1.4.1.35450.15.12.0 = Gauge32: 60
iso.3.6.1.4.1.35450.15.13.0 = Gauge32: 43515904 .1.3.6.1.4.1.35450.15.13.0 = Gauge32: 43515904
iso.3.6.1.4.1.35450.15.14.0 = Gauge32: 60 .1.3.6.1.4.1.35450.15.14.0 = Gauge32: 60
iso.3.6.1.4.1.35450.15.15.0 = Gauge32: 43533983` + "`" + .1.3.6.1.4.1.35450.15.15.0 = Gauge32: 43533983` + "`" +
` `
func main() { func main() {
fmt.Println(output) fmt.Println(output)
@ -42,34 +42,34 @@ package main
import "fmt" import "fmt"
const output = ` + "`" + `iso.3.6.1.4.1.35450.34.1.0 = STRING: "storage_0@127.0.0.1" const output = ` + "`" + `.1.3.6.1.4.1.35450.34.1.0 = STRING: "storage_0@127.0.0.1"
iso.3.6.1.4.1.35450.34.2.0 = Gauge32: 512 .1.3.6.1.4.1.35450.34.2.0 = Gauge32: 512
iso.3.6.1.4.1.35450.34.3.0 = Gauge32: 38126307 .1.3.6.1.4.1.35450.34.3.0 = Gauge32: 38126307
iso.3.6.1.4.1.35450.34.4.0 = Gauge32: 22308716 .1.3.6.1.4.1.35450.34.4.0 = Gauge32: 22308716
iso.3.6.1.4.1.35450.34.5.0 = Gauge32: 15816448 .1.3.6.1.4.1.35450.34.5.0 = Gauge32: 15816448
iso.3.6.1.4.1.35450.34.6.0 = Gauge32: 5232008 .1.3.6.1.4.1.35450.34.6.0 = Gauge32: 5232008
iso.3.6.1.4.1.35450.34.7.0 = Gauge32: 512 .1.3.6.1.4.1.35450.34.7.0 = Gauge32: 512
iso.3.6.1.4.1.35450.34.8.0 = Gauge32: 38113176 .1.3.6.1.4.1.35450.34.8.0 = Gauge32: 38113176
iso.3.6.1.4.1.35450.34.9.0 = Gauge32: 22313398 .1.3.6.1.4.1.35450.34.9.0 = Gauge32: 22313398
iso.3.6.1.4.1.35450.34.10.0 = Gauge32: 15798779 .1.3.6.1.4.1.35450.34.10.0 = Gauge32: 15798779
iso.3.6.1.4.1.35450.34.11.0 = Gauge32: 5237315 .1.3.6.1.4.1.35450.34.11.0 = Gauge32: 5237315
iso.3.6.1.4.1.35450.34.12.0 = Gauge32: 191 .1.3.6.1.4.1.35450.34.12.0 = Gauge32: 191
iso.3.6.1.4.1.35450.34.13.0 = Gauge32: 824 .1.3.6.1.4.1.35450.34.13.0 = Gauge32: 824
iso.3.6.1.4.1.35450.34.14.0 = Gauge32: 0 .1.3.6.1.4.1.35450.34.14.0 = Gauge32: 0
iso.3.6.1.4.1.35450.34.15.0 = Gauge32: 50105 .1.3.6.1.4.1.35450.34.15.0 = Gauge32: 50105
iso.3.6.1.4.1.35450.34.16.0 = Gauge32: 196654 .1.3.6.1.4.1.35450.34.16.0 = Gauge32: 196654
iso.3.6.1.4.1.35450.34.17.0 = Gauge32: 0 .1.3.6.1.4.1.35450.34.17.0 = Gauge32: 0
iso.3.6.1.4.1.35450.34.18.0 = Gauge32: 2052 .1.3.6.1.4.1.35450.34.18.0 = Gauge32: 2052
iso.3.6.1.4.1.35450.34.19.0 = Gauge32: 50296 .1.3.6.1.4.1.35450.34.19.0 = Gauge32: 50296
iso.3.6.1.4.1.35450.34.20.0 = Gauge32: 35 .1.3.6.1.4.1.35450.34.20.0 = Gauge32: 35
iso.3.6.1.4.1.35450.34.21.0 = Gauge32: 898 .1.3.6.1.4.1.35450.34.21.0 = Gauge32: 898
iso.3.6.1.4.1.35450.34.22.0 = Gauge32: 0 .1.3.6.1.4.1.35450.34.22.0 = Gauge32: 0
iso.3.6.1.4.1.35450.34.23.0 = Gauge32: 0 .1.3.6.1.4.1.35450.34.23.0 = Gauge32: 0
iso.3.6.1.4.1.35450.34.24.0 = Gauge32: 0 .1.3.6.1.4.1.35450.34.24.0 = Gauge32: 0
iso.3.6.1.4.1.35450.34.31.0 = Gauge32: 51 .1.3.6.1.4.1.35450.34.31.0 = Gauge32: 51
iso.3.6.1.4.1.35450.34.32.0 = Gauge32: 53219328 .1.3.6.1.4.1.35450.34.32.0 = Gauge32: 53219328
iso.3.6.1.4.1.35450.34.33.0 = Gauge32: 51 .1.3.6.1.4.1.35450.34.33.0 = Gauge32: 51
iso.3.6.1.4.1.35450.34.34.0 = Gauge32: 53351083` + "`" + .1.3.6.1.4.1.35450.34.34.0 = Gauge32: 53351083` + "`" +
` `
func main() { func main() {
fmt.Println(output) fmt.Println(output)
@ -81,31 +81,31 @@ package main
import "fmt" import "fmt"
const output = ` + "`" + `iso.3.6.1.4.1.35450.34.1.0 = STRING: "gateway_0@127.0.0.1" const output = ` + "`" + `.1.3.6.1.4.1.35450.34.1.0 = STRING: "gateway_0@127.0.0.1"
iso.3.6.1.4.1.35450.34.2.0 = Gauge32: 465 .1.3.6.1.4.1.35450.34.2.0 = Gauge32: 465
iso.3.6.1.4.1.35450.34.3.0 = Gauge32: 61676335 .1.3.6.1.4.1.35450.34.3.0 = Gauge32: 61676335
iso.3.6.1.4.1.35450.34.4.0 = Gauge32: 46890415 .1.3.6.1.4.1.35450.34.4.0 = Gauge32: 46890415
iso.3.6.1.4.1.35450.34.5.0 = Gauge32: 14785011 .1.3.6.1.4.1.35450.34.5.0 = Gauge32: 14785011
iso.3.6.1.4.1.35450.34.6.0 = Gauge32: 5578855 .1.3.6.1.4.1.35450.34.6.0 = Gauge32: 5578855
iso.3.6.1.4.1.35450.34.7.0 = Gauge32: 465 .1.3.6.1.4.1.35450.34.7.0 = Gauge32: 465
iso.3.6.1.4.1.35450.34.8.0 = Gauge32: 61644426 .1.3.6.1.4.1.35450.34.8.0 = Gauge32: 61644426
iso.3.6.1.4.1.35450.34.9.0 = Gauge32: 46880358 .1.3.6.1.4.1.35450.34.9.0 = Gauge32: 46880358
iso.3.6.1.4.1.35450.34.10.0 = Gauge32: 14763002 .1.3.6.1.4.1.35450.34.10.0 = Gauge32: 14763002
iso.3.6.1.4.1.35450.34.11.0 = Gauge32: 5582125 .1.3.6.1.4.1.35450.34.11.0 = Gauge32: 5582125
iso.3.6.1.4.1.35450.34.12.0 = Gauge32: 191 .1.3.6.1.4.1.35450.34.12.0 = Gauge32: 191
iso.3.6.1.4.1.35450.34.13.0 = Gauge32: 827 .1.3.6.1.4.1.35450.34.13.0 = Gauge32: 827
iso.3.6.1.4.1.35450.34.14.0 = Gauge32: 0 .1.3.6.1.4.1.35450.34.14.0 = Gauge32: 0
iso.3.6.1.4.1.35450.34.15.0 = Gauge32: 50105 .1.3.6.1.4.1.35450.34.15.0 = Gauge32: 50105
iso.3.6.1.4.1.35450.34.16.0 = Gauge32: 196650 .1.3.6.1.4.1.35450.34.16.0 = Gauge32: 196650
iso.3.6.1.4.1.35450.34.17.0 = Gauge32: 0 .1.3.6.1.4.1.35450.34.17.0 = Gauge32: 0
iso.3.6.1.4.1.35450.34.18.0 = Gauge32: 30256 .1.3.6.1.4.1.35450.34.18.0 = Gauge32: 30256
iso.3.6.1.4.1.35450.34.19.0 = Gauge32: 532158 .1.3.6.1.4.1.35450.34.19.0 = Gauge32: 532158
iso.3.6.1.4.1.35450.34.20.0 = Gauge32: 34 .1.3.6.1.4.1.35450.34.20.0 = Gauge32: 34
iso.3.6.1.4.1.35450.34.21.0 = Gauge32: 1 .1.3.6.1.4.1.35450.34.21.0 = Gauge32: 1
iso.3.6.1.4.1.35450.34.31.0 = Gauge32: 53 .1.3.6.1.4.1.35450.34.31.0 = Gauge32: 53
iso.3.6.1.4.1.35450.34.32.0 = Gauge32: 55050240 .1.3.6.1.4.1.35450.34.32.0 = Gauge32: 55050240
iso.3.6.1.4.1.35450.34.33.0 = Gauge32: 53 .1.3.6.1.4.1.35450.34.33.0 = Gauge32: 53
iso.3.6.1.4.1.35450.34.34.0 = Gauge32: 55186538` + "`" + .1.3.6.1.4.1.35450.34.34.0 = Gauge32: 55186538` + "`" +
` `
func main() { func main() {
fmt.Println(output) fmt.Println(output)


@ -100,7 +100,7 @@ current time.
- ts-rfc3339 ("2006-01-02T15:04:05Z07:00") - ts-rfc3339 ("2006-01-02T15:04:05Z07:00")
- ts-rfc3339nano ("2006-01-02T15:04:05.999999999Z07:00") - ts-rfc3339nano ("2006-01-02T15:04:05.999999999Z07:00")
- ts-httpd ("02/Jan/2006:15:04:05 -0700") - ts-httpd ("02/Jan/2006:15:04:05 -0700")
- ts-epoch (seconds since unix epoch) - ts-epoch (seconds since unix epoch, may contain decimal)
- ts-epochnano (nanoseconds since unix epoch) - ts-epochnano (nanoseconds since unix epoch)
- ts-"CUSTOM" - ts-"CUSTOM"
@ -130,6 +130,19 @@ This example input and config parses a file using a custom timestamp conversion:
patterns = ['%{TIMESTAMP_ISO8601:timestamp:ts-"2006-01-02 15:04:05"} value=%{NUMBER:value:int}'] patterns = ['%{TIMESTAMP_ISO8601:timestamp:ts-"2006-01-02 15:04:05"} value=%{NUMBER:value:int}']
``` ```
This example input and config parses a file using a timestamp in unix time:
```
1466004605 value=42
1466004605.123456789 value=42
```
```toml
[[inputs.logparser]]
[inputs.logparser.grok]
patterns = ['%{NUMBER:timestamp:ts-epoch} value=%{NUMBER:value:int}']
```
This example parses a file using a built-in conversion and a custom pattern: This example parses a file using a built-in conversion and a custom pattern:
``` ```


@ -253,12 +253,30 @@ func (p *Parser) ParseLine(line string) (telegraf.Metric, error) {
case STRING: case STRING:
fields[k] = strings.Trim(v, `"`) fields[k] = strings.Trim(v, `"`)
case EPOCH: case EPOCH:
iv, err := strconv.ParseInt(v, 10, 64) parts := strings.SplitN(v, ".", 2)
if err != nil { if len(parts) == 0 {
log.Printf("E! Error parsing %s to int: %s", v, err) log.Printf("E! Error parsing %s to timestamp: %s", v, err)
} else { break
timestamp = time.Unix(iv, 0)
} }
sec, err := strconv.ParseInt(parts[0], 10, 64)
if err != nil {
log.Printf("E! Error parsing %s to timestamp: %s", v, err)
break
}
ts := time.Unix(sec, 0)
if len(parts) == 2 {
padded := fmt.Sprintf("%-9s", parts[1])
nsString := strings.Replace(padded[:9], " ", "0", -1)
nanosec, err := strconv.ParseInt(nsString, 10, 64)
if err != nil {
log.Printf("E! Error parsing %s to timestamp: %s", v, err)
break
}
ts = ts.Add(time.Duration(nanosec) * time.Nanosecond)
}
timestamp = ts
case EPOCH_NANO: case EPOCH_NANO:
iv, err := strconv.ParseInt(v, 10, 64) iv, err := strconv.ParseInt(v, 10, 64)
if err != nil { if err != nil {


@ -385,6 +385,77 @@ func TestParseEpoch(t *testing.T) {
assert.Equal(t, time.Unix(1466004605, 0), metricA.Time()) assert.Equal(t, time.Unix(1466004605, 0), metricA.Time())
} }
func TestParseEpochDecimal(t *testing.T) {
var tests = []struct {
name string
line string
noMatch bool
err error
tags map[string]string
fields map[string]interface{}
time time.Time
}{
{
name: "ns precision",
line: "1466004605.359052000 value=42",
tags: map[string]string{},
fields: map[string]interface{}{
"value": int64(42),
},
time: time.Unix(0, 1466004605359052000),
},
{
name: "ms precision",
line: "1466004605.359 value=42",
tags: map[string]string{},
fields: map[string]interface{}{
"value": int64(42),
},
time: time.Unix(0, 1466004605359000000),
},
{
name: "second precision",
line: "1466004605 value=42",
tags: map[string]string{},
fields: map[string]interface{}{
"value": int64(42),
},
time: time.Unix(0, 1466004605000000000),
},
{
name: "sub ns precision",
line: "1466004605.123456789123 value=42",
tags: map[string]string{},
fields: map[string]interface{}{
"value": int64(42),
},
time: time.Unix(0, 1466004605123456789),
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
parser := &Parser{
Patterns: []string{"%{NUMBER:ts:ts-epoch} value=%{NUMBER:value:int}"},
}
assert.NoError(t, parser.Compile())
m, err := parser.ParseLine(tt.line)
if tt.noMatch {
require.Nil(t, m)
require.Nil(t, err)
return
}
require.Equal(t, tt.err, err)
require.NotNil(t, m)
require.Equal(t, tt.tags, m.Tags())
require.Equal(t, tt.fields, m.Fields())
require.Equal(t, tt.time, m.Time())
})
}
}
func TestParseEpochErrors(t *testing.T) { func TestParseEpochErrors(t *testing.T) {
p := &Parser{ p := &Parser{
Patterns: []string{"%{MYAPP}"}, Patterns: []string{"%{MYAPP}"},


@ -367,7 +367,7 @@ func getMetrics(role Role, group string) []string {
ret, ok := m[group] ret, ok := m[group]
if !ok { if !ok {
log.Printf("I! [mesos] Unkown %s metrics group: %s\n", role, group) log.Printf("I! [mesos] Unknown %s metrics group: %s\n", role, group)
return []string{} return []string{}
} }


@ -197,7 +197,7 @@ func (c *Client) Send(typ int32, command string) (response *Packet, err error) {
} }
// NewClient creates a new Client type, creating the connection // NewClient creates a new Client type, creating the connection
// to the server specified by the host and port arguements. If // to the server specified by the host and port arguments. If
// the connection fails, an error is returned. // the connection fails, an error is returned.
func NewClient(host string, port int) (client *Client, err error) { func NewClient(host string, port int) (client *Client, err error) {
client = new(Client) client = new(Client)


@ -47,7 +47,7 @@ func (s *Minecraft) SampleConfig() string {
return sampleConfig return sampleConfig
} }
// Gather uses the RCON protocal to collect player and // Gather uses the RCON protocol to collect player and
// scoreboard stats from a minecraft server. // scoreboard stats from a minecraft server.
//var hasClient bool = false //var hasClient bool = false
func (s *Minecraft) Gather(acc telegraf.Accumulator) error { func (s *Minecraft) Gather(acc telegraf.Accumulator) error {


@ -76,7 +76,7 @@ func newClient(server, port string) (*rcon.Client, error) {
return client, nil return client, nil
} }
// Gather recieves all player scoreboard information and returns it per player. // Gather receives all player scoreboard information and returns it per player.
func (r *RCON) Gather(producer RCONClientProducer) ([]string, error) { func (r *RCON) Gather(producer RCONClientProducer) ([]string, error) {
if r.client == nil { if r.client == nil {
var err error var err error


@ -77,6 +77,21 @@ var WiredTigerStats = map[string]string{
"percent_cache_used": "CacheUsedPercent", "percent_cache_used": "CacheUsedPercent",
} }
var WiredTigerExtStats = map[string]string{
"wtcache_tracked_dirty_bytes": "TrackedDirtyBytes",
"wtcache_current_bytes": "CurrentCachedBytes",
"wtcache_max_bytes_configured": "MaxBytesConfigured",
"wtcache_app_threads_page_read_count": "AppThreadsPageReadCount",
"wtcache_app_threads_page_read_time": "AppThreadsPageReadTime",
"wtcache_app_threads_page_write_count": "AppThreadsPageWriteCount",
"wtcache_bytes_written_from": "BytesWrittenFrom",
"wtcache_bytes_read_into": "BytesReadInto",
"wtcache_pages_evicted_by_app_thread": "PagesEvictedByAppThread",
"wtcache_pages_queued_for_eviction": "PagesQueuedForEviction",
"wtcache_server_evicting_pages": "ServerEvictingPages",
"wtcache_worker_thread_evictingpages": "WorkerThreadEvictingPages",
}
var DbDataStats = map[string]string{ var DbDataStats = map[string]string{
"collections": "Collections", "collections": "Collections",
"objects": "Objects", "objects": "Objects",
@ -121,13 +136,11 @@ func (d *MongodbData) AddDefaultStats() {
floatVal, _ := strconv.ParseFloat(percentVal, 64) floatVal, _ := strconv.ParseFloat(percentVal, 64)
d.add(key, floatVal) d.add(key, floatVal)
} }
d.addStat(statLine, WiredTigerExtStats)
} }
} }
func (d *MongodbData) addStat( func (d *MongodbData) addStat(statLine reflect.Value, stats map[string]string) {
statLine reflect.Value,
stats map[string]string,
) {
for key, value := range stats { for key, value := range stats {
val := statLine.FieldByName(value).Interface() val := statLine.FieldByName(value).Interface()
d.add(key, val) d.add(key, val)


@ -70,9 +70,21 @@ func TestAddReplStats(t *testing.T) {
func TestAddWiredTigerStats(t *testing.T) { func TestAddWiredTigerStats(t *testing.T) {
d := NewMongodbData( d := NewMongodbData(
&StatLine{ &StatLine{
StorageEngine: "wiredTiger", StorageEngine: "wiredTiger",
CacheDirtyPercent: 0, CacheDirtyPercent: 0,
CacheUsedPercent: 0, CacheUsedPercent: 0,
TrackedDirtyBytes: 0,
CurrentCachedBytes: 0,
MaxBytesConfigured: 0,
AppThreadsPageReadCount: 0,
AppThreadsPageReadTime: 0,
AppThreadsPageWriteCount: 0,
BytesWrittenFrom: 0,
BytesReadInto: 0,
PagesEvictedByAppThread: 0,
PagesQueuedForEviction: 0,
ServerEvictingPages: 0,
WorkerThreadEvictingPages: 0,
}, },
tags, tags,
) )


@ -127,9 +127,19 @@ type ConcurrentTransStats struct {
// CacheStats stores cache statistics for WiredTiger. // CacheStats stores cache statistics for WiredTiger.
type CacheStats struct { type CacheStats struct {
TrackedDirtyBytes int64 `bson:"tracked dirty bytes in the cache"` TrackedDirtyBytes int64 `bson:"tracked dirty bytes in the cache"`
CurrentCachedBytes int64 `bson:"bytes currently in the cache"` CurrentCachedBytes int64 `bson:"bytes currently in the cache"`
MaxBytesConfigured int64 `bson:"maximum bytes configured"` MaxBytesConfigured int64 `bson:"maximum bytes configured"`
AppThreadsPageReadCount int64 `bson:"application threads page read from disk to cache count"`
AppThreadsPageReadTime int64 `bson:"application threads page read from disk to cache time (usecs)"`
AppThreadsPageWriteCount int64 `bson:"application threads page write from cache to disk count"`
AppThreadsPageWriteTime int64 `bson:"application threads page write from cache to disk time (usecs)"`
BytesWrittenFrom int64 `bson:"bytes written from cache"`
BytesReadInto int64 `bson:"bytes read into cache"`
PagesEvictedByAppThread int64 `bson:"pages evicted by application threads"`
PagesQueuedForEviction int64 `bson:"pages queued for eviction"`
ServerEvictingPages int64 `bson:"eviction server evicting pages"`
WorkerThreadEvictingPages int64 `bson:"eviction worker thread evicting pages"`
} }
// TransactionStats stores transaction checkpoints in WiredTiger. // TransactionStats stores transaction checkpoints in WiredTiger.
@ -406,6 +416,20 @@ type StatLine struct {
CacheDirtyPercent float64 CacheDirtyPercent float64
CacheUsedPercent float64 CacheUsedPercent float64
// Cache utilization extended (wiredtiger only)
TrackedDirtyBytes int64
CurrentCachedBytes int64
MaxBytesConfigured int64
AppThreadsPageReadCount int64
AppThreadsPageReadTime int64
AppThreadsPageWriteCount int64
BytesWrittenFrom int64
BytesReadInto int64
PagesEvictedByAppThread int64
PagesQueuedForEviction int64
ServerEvictingPages int64
WorkerThreadEvictingPages int64
// Replicated Opcounter fields // Replicated Opcounter fields
InsertR, QueryR, UpdateR, DeleteR, GetMoreR, CommandR int64 InsertR, QueryR, UpdateR, DeleteR, GetMoreR, CommandR int64
ReplLag int64 ReplLag int64
@ -514,7 +538,7 @@ func NewStatLine(oldMongo, newMongo MongoStatus, key string, all bool, sampleSec
returnVal.Command = diff(newStat.Opcounters.Command, oldStat.Opcounters.Command, sampleSecs) returnVal.Command = diff(newStat.Opcounters.Command, oldStat.Opcounters.Command, sampleSecs)
} }
if newStat.Metrics != nil && newStat.Metrics.TTL != nil && oldStat.Metrics.TTL != nil { if newStat.Metrics != nil && newStat.Metrics.TTL != nil && oldStat.Metrics != nil && oldStat.Metrics.TTL != nil {
returnVal.Passes = diff(newStat.Metrics.TTL.Passes, oldStat.Metrics.TTL.Passes, sampleSecs) returnVal.Passes = diff(newStat.Metrics.TTL.Passes, oldStat.Metrics.TTL.Passes, sampleSecs)
returnVal.DeletedDocuments = diff(newStat.Metrics.TTL.DeletedDocuments, oldStat.Metrics.TTL.DeletedDocuments, sampleSecs) returnVal.DeletedDocuments = diff(newStat.Metrics.TTL.DeletedDocuments, oldStat.Metrics.TTL.DeletedDocuments, sampleSecs)
} }
@ -534,6 +558,19 @@ func NewStatLine(oldMongo, newMongo MongoStatus, key string, all bool, sampleSec
returnVal.Flushes = newStat.WiredTiger.Transaction.TransCheckpoints - oldStat.WiredTiger.Transaction.TransCheckpoints returnVal.Flushes = newStat.WiredTiger.Transaction.TransCheckpoints - oldStat.WiredTiger.Transaction.TransCheckpoints
returnVal.CacheDirtyPercent = float64(newStat.WiredTiger.Cache.TrackedDirtyBytes) / float64(newStat.WiredTiger.Cache.MaxBytesConfigured) returnVal.CacheDirtyPercent = float64(newStat.WiredTiger.Cache.TrackedDirtyBytes) / float64(newStat.WiredTiger.Cache.MaxBytesConfigured)
returnVal.CacheUsedPercent = float64(newStat.WiredTiger.Cache.CurrentCachedBytes) / float64(newStat.WiredTiger.Cache.MaxBytesConfigured) returnVal.CacheUsedPercent = float64(newStat.WiredTiger.Cache.CurrentCachedBytes) / float64(newStat.WiredTiger.Cache.MaxBytesConfigured)
returnVal.TrackedDirtyBytes = newStat.WiredTiger.Cache.TrackedDirtyBytes
returnVal.CurrentCachedBytes = newStat.WiredTiger.Cache.CurrentCachedBytes
returnVal.MaxBytesConfigured = newStat.WiredTiger.Cache.MaxBytesConfigured
returnVal.AppThreadsPageReadCount = newStat.WiredTiger.Cache.AppThreadsPageReadCount
returnVal.AppThreadsPageReadTime = newStat.WiredTiger.Cache.AppThreadsPageReadTime
returnVal.AppThreadsPageWriteCount = newStat.WiredTiger.Cache.AppThreadsPageWriteCount
returnVal.BytesWrittenFrom = newStat.WiredTiger.Cache.BytesWrittenFrom
returnVal.BytesReadInto = newStat.WiredTiger.Cache.BytesReadInto
returnVal.PagesEvictedByAppThread = newStat.WiredTiger.Cache.PagesEvictedByAppThread
returnVal.PagesQueuedForEviction = newStat.WiredTiger.Cache.PagesQueuedForEviction
returnVal.ServerEvictingPages = newStat.WiredTiger.Cache.ServerEvictingPages
returnVal.WorkerThreadEvictingPages = newStat.WiredTiger.Cache.WorkerThreadEvictingPages
} else if newStat.BackgroundFlushing != nil && oldStat.BackgroundFlushing != nil { } else if newStat.BackgroundFlushing != nil && oldStat.BackgroundFlushing != nil {
returnVal.Flushes = newStat.BackgroundFlushing.Flushes - oldStat.BackgroundFlushing.Flushes returnVal.Flushes = newStat.BackgroundFlushing.Flushes - oldStat.BackgroundFlushing.Flushes
} }


@ -10,7 +10,9 @@ The plugin expects messages in the
```toml ```toml
# Read metrics from MQTT topic(s) # Read metrics from MQTT topic(s)
[[inputs.mqtt_consumer]] [[inputs.mqtt_consumer]]
servers = ["localhost:1883"] ## MQTT broker URLs to be used. The format should be scheme://host:port,
## scheme can be tcp, ssl, or ws.
servers = ["tcp://localhost:1883"]
## MQTT QoS, must be 0, 1, or 2 ## MQTT QoS, must be 0, 1, or 2
qos = 0 qos = 0
## Connection timeout for initial connection in seconds ## Connection timeout for initial connection in seconds


@ -56,7 +56,10 @@ type MQTTConsumer struct {
} }
var sampleConfig = ` var sampleConfig = `
servers = ["localhost:1883"] ## MQTT broker URLs to be used. The format should be scheme://host:port,
## scheme can be tcp, ssl, or ws.
servers = ["tcp://localhost:1883"]
## MQTT QoS, must be 0, 1, or 2 ## MQTT QoS, must be 0, 1, or 2
qos = 0 qos = 0
## Connection timeout for initial connection in seconds ## Connection timeout for initial connection in seconds
@ -239,9 +242,7 @@ func (m *MQTTConsumer) createOpts() (*mqtt.ClientOptions, error) {
return nil, err return nil, err
} }
scheme := "tcp"
if tlsCfg != nil { if tlsCfg != nil {
scheme = "ssl"
opts.SetTLSConfig(tlsCfg) opts.SetTLSConfig(tlsCfg)
} }
@ -257,8 +258,17 @@ func (m *MQTTConsumer) createOpts() (*mqtt.ClientOptions, error) {
if len(m.Servers) == 0 { if len(m.Servers) == 0 {
return opts, fmt.Errorf("could not get host information") return opts, fmt.Errorf("could not get host information")
} }
for _, host := range m.Servers {
server := fmt.Sprintf("%s://%s", scheme, host) for _, server := range m.Servers {
// Preserve support for host:port style servers; deprecated in Telegraf 1.4.4
if !strings.Contains(server, "://") {
log.Printf("W! mqtt_consumer server %q should be updated to use `scheme://host:port` format", server)
if tlsCfg == nil {
server = "tcp://" + server
} else {
server = "ssl://" + server
}
}
opts.AddBroker(server) opts.AddBroker(server)
} }
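The loop above keeps legacy `host:port` entries working by prepending a scheme based on the TLS setting. A self-contained sketch of that normalization (the helper name `normalizeServer` is hypothetical, not the plugin's API; the plugin additionally logs a deprecation warning):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeServer mirrors the deprecation fallback above: bare host:port
// entries get a scheme prepended depending on whether TLS is configured.
func normalizeServer(server string, useTLS bool) string {
	if strings.Contains(server, "://") {
		return server // already in scheme://host:port form
	}
	if useTLS {
		return "ssl://" + server
	}
	return "tcp://" + server
}

func main() {
	fmt.Println(normalizeServer("localhost:1883", false))    // tcp://localhost:1883
	fmt.Println(normalizeServer("localhost:8883", true))     // ssl://localhost:8883
	fmt.Println(normalizeServer("ws://localhost:80", false)) // unchanged
}
```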


@ -72,6 +72,7 @@ func (n *Nginx) Gather(acc telegraf.Accumulator) error {
addr, err := url.Parse(u) addr, err := url.Parse(u)
if err != nil { if err != nil {
acc.AddError(fmt.Errorf("Unable to parse address '%s': %s", u, err)) acc.AddError(fmt.Errorf("Unable to parse address '%s': %s", u, err))
continue
} }
wg.Add(1) wg.Add(1)


@ -59,6 +59,7 @@ func (n *NginxPlus) Gather(acc telegraf.Accumulator) error {
addr, err := url.Parse(u) addr, err := url.Parse(u)
if err != nil { if err != nil {
acc.AddError(fmt.Errorf("Unable to parse address '%s': %s", u, err)) acc.AddError(fmt.Errorf("Unable to parse address '%s': %s", u, err))
continue
} }
wg.Add(1) wg.Add(1)


@ -59,7 +59,7 @@ func (client *conn) Request(
rec := &record{} rec := &record{}
var err1 error var err1 error
// recive untill EOF or FCGI_END_REQUEST // receive until EOF or FCGI_END_REQUEST
READ_LOOP: READ_LOOP:
for { for {
err1 = rec.read(client.rwc) err1 = rec.read(client.rwc)


@ -62,7 +62,7 @@ func TestPhpFpmGeneratesMetrics_From_Fcgi(t *testing.T) {
// Let OS find an available port // Let OS find an available port
tcp, err := net.Listen("tcp", "127.0.0.1:0") tcp, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil { if err != nil {
t.Fatal("Cannot initalize test server") t.Fatal("Cannot initialize test server")
} }
defer tcp.Close() defer tcp.Close()
@ -106,7 +106,7 @@ func TestPhpFpmGeneratesMetrics_From_Socket(t *testing.T) {
binary.Read(rand.Reader, binary.LittleEndian, &randomNumber) binary.Read(rand.Reader, binary.LittleEndian, &randomNumber)
tcp, err := net.Listen("unix", fmt.Sprintf("/tmp/test-fpm%d.sock", randomNumber)) tcp, err := net.Listen("unix", fmt.Sprintf("/tmp/test-fpm%d.sock", randomNumber))
if err != nil { if err != nil {
t.Fatal("Cannot initalize server on port ") t.Fatal("Cannot initialize server on port ")
} }
defer tcp.Close() defer tcp.Close()
@ -150,7 +150,7 @@ func TestPhpFpmGeneratesMetrics_From_Socket_Custom_Status_Path(t *testing.T) {
binary.Read(rand.Reader, binary.LittleEndian, &randomNumber) binary.Read(rand.Reader, binary.LittleEndian, &randomNumber)
tcp, err := net.Listen("unix", fmt.Sprintf("/tmp/test-fpm%d.sock", randomNumber)) tcp, err := net.Listen("unix", fmt.Sprintf("/tmp/test-fpm%d.sock", randomNumber))
if err != nil { if err != nil {
t.Fatal("Cannot initalize server on port ") t.Fatal("Cannot initialize server on port ")
} }
defer tcp.Close() defer tcp.Close()


@ -28,11 +28,14 @@ urls = ["www.google.com"] # required
- packets_received ( from ping output ) - packets_received ( from ping output )
- percent_reply_loss ( compute from packets_transmitted and reply_received ) - percent_reply_loss ( compute from packets_transmitted and reply_received )
- percent_packets_loss ( compute from packets_transmitted and packets_received ) - percent_packets_loss ( compute from packets_transmitted and packets_received )
- errors ( when host can not be found or wrong prameters is passed to application ) - errors ( when the host cannot be found or wrong parameters are passed to the application )
- response time - response time
- average_response_ms ( compute from minimum_response_ms and maximum_response_ms ) - average_response_ms ( compute from minimum_response_ms and maximum_response_ms )
- minimum_response_ms ( from ping output ) - minimum_response_ms ( from ping output )
- maximum_response_ms ( from ping output ) - maximum_response_ms ( from ping output )
- result_code ( see the sketch after this list )
- 0: success
- 1: no such host
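A minimal sketch of how `result_code` is derived, mirroring the DNS pre-check added in this commit (the helper name `resultCode` is illustrative):

```go
package main

import (
	"fmt"
	"net"
)

// resultCode mirrors the lookup-first behaviour of the ping input:
// a failed DNS lookup yields result_code 1, success yields 0.
func resultCode(url string) int {
	if _, err := net.LookupHost(url); err != nil {
		return 1 // no such host
	}
	return 0 // success
}

func main() {
	fmt.Println(resultCode("localhost")) // 0
}
```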
### Tags: ### Tags:
@ -44,5 +47,5 @@ urls = ["www.google.com"] # required
``` ```
$ ./telegraf --config telegraf.conf --input-filter ping --test $ ./telegraf --config telegraf.conf --input-filter ping --test
* Plugin: ping, Collection 1 * Plugin: ping, Collection 1
ping,host=WIN-PBAPLP511R7,url=www.google.com average_response_ms=7i,maximum_response_ms=9i,minimum_response_ms=7i,packets_received=4i,packets_transmitted=4i,percent_packet_loss=0,percent_reply_loss=0,reply_received=4i 1469879119000000000 ping,host=WIN-PBAPLP511R7,url=www.google.com result_code=0i,average_response_ms=7i,maximum_response_ms=9i,minimum_response_ms=7i,packets_received=4i,packets_transmitted=4i,percent_packet_loss=0,percent_reply_loss=0,reply_received=4i 1469879119000000000
``` ```


@ -5,6 +5,7 @@ package ping
import ( import (
"errors" "errors"
"fmt" "fmt"
"net"
"os/exec" "os/exec"
"runtime" "runtime"
"strconv" "strconv"
@ -76,6 +77,17 @@ func (p *Ping) Gather(acc telegraf.Accumulator) error {
wg.Add(1) wg.Add(1)
go func(u string) { go func(u string) {
defer wg.Done() defer wg.Done()
tags := map[string]string{"url": u}
fields := map[string]interface{}{"result_code": 0}
_, err := net.LookupHost(u)
if err != nil {
acc.AddError(err)
fields["result_code"] = 1
acc.AddFields("ping", fields, tags)
return
}
args := p.args(u) args := p.args(u)
totalTimeout := float64(p.Count)*p.Timeout + float64(p.Count-1)*p.PingInterval totalTimeout := float64(p.Count)*p.Timeout + float64(p.Count-1)*p.PingInterval
@ -99,24 +111,23 @@ func (p *Ping) Gather(acc telegraf.Accumulator) error {
} else { } else {
acc.AddError(err) acc.AddError(err)
} }
acc.AddFields("ping", fields, tags)
return return
} }
} }
tags := map[string]string{"url": u}
trans, rec, min, avg, max, stddev, err := processPingOutput(out) trans, rec, min, avg, max, stddev, err := processPingOutput(out)
if err != nil { if err != nil {
// fatal error // fatal error
acc.AddError(fmt.Errorf("%s: %s", err, u)) acc.AddError(fmt.Errorf("%s: %s", err, u))
acc.AddFields("ping", fields, tags)
return return
} }
// Calculate packet loss percentage // Calculate packet loss percentage
loss := float64(trans-rec) / float64(trans) * 100.0 loss := float64(trans-rec) / float64(trans) * 100.0
fields := map[string]interface{}{ fields["packets_transmitted"] = trans
"packets_transmitted": trans, fields["packets_received"] = rec
"packets_received": rec, fields["percent_packet_loss"] = loss
"percent_packet_loss": loss,
}
if min > 0 { if min > 0 {
fields["minimum_response_ms"] = min fields["minimum_response_ms"] = min
} }
@ -145,7 +156,7 @@ func hostPinger(timeout float64, args ...string) (string, error) {
} }
c := exec.Command(bin, args...) c := exec.Command(bin, args...)
out, err := internal.CombinedOutputTimeout(c, out, err := internal.CombinedOutputTimeout(c,
time.Second*time.Duration(timeout+1)) time.Second*time.Duration(timeout+5))
return string(out), err return string(out), err
} }
@ -194,7 +205,6 @@ func processPingOutput(out string) (int, int, float64, float64, float64, float64
for _, line := range lines { for _, line := range lines {
if strings.Contains(line, "transmitted") && if strings.Contains(line, "transmitted") &&
strings.Contains(line, "received") { strings.Contains(line, "received") {
err = nil
stats := strings.Split(line, ", ") stats := strings.Split(line, ", ")
// Transmitted packets // Transmitted packets
trans, err = strconv.Atoi(strings.Split(stats[0], " ")[0]) trans, err = strconv.Atoi(strings.Split(stats[0], " ")[0])
@ -209,8 +219,17 @@ func processPingOutput(out string) (int, int, float64, float64, float64, float64
} else if strings.Contains(line, "min/avg/max") { } else if strings.Contains(line, "min/avg/max") {
stats := strings.Split(line, " ")[3] stats := strings.Split(line, " ")[3]
min, err = strconv.ParseFloat(strings.Split(stats, "/")[0], 64) min, err = strconv.ParseFloat(strings.Split(stats, "/")[0], 64)
if err != nil {
return trans, recv, min, avg, max, stddev, err
}
avg, err = strconv.ParseFloat(strings.Split(stats, "/")[1], 64) avg, err = strconv.ParseFloat(strings.Split(stats, "/")[1], 64)
if err != nil {
return trans, recv, min, avg, max, stddev, err
}
max, err = strconv.ParseFloat(strings.Split(stats, "/")[2], 64) max, err = strconv.ParseFloat(strings.Split(stats, "/")[2], 64)
if err != nil {
return trans, recv, min, avg, max, stddev, err
}
stddev, err = strconv.ParseFloat(strings.Split(stats, "/")[3], 64) stddev, err = strconv.ParseFloat(strings.Split(stats, "/")[3], 64)
if err != nil { if err != nil {
return trans, recv, min, avg, max, stddev, err return trans, recv, min, avg, max, stddev, err


@ -158,6 +158,7 @@ func TestPingGather(t *testing.T) {
"average_response_ms": 43.628, "average_response_ms": 43.628,
"maximum_response_ms": 51.806, "maximum_response_ms": 51.806,
"standard_deviation_ms": 5.325, "standard_deviation_ms": 5.325,
"result_code": 0,
} }
acc.AssertContainsTaggedFields(t, "ping", fields, tags) acc.AssertContainsTaggedFields(t, "ping", fields, tags)
@ -198,6 +199,7 @@ func TestLossyPingGather(t *testing.T) {
"average_response_ms": 44.033, "average_response_ms": 44.033,
"maximum_response_ms": 51.806, "maximum_response_ms": 51.806,
"standard_deviation_ms": 5.325, "standard_deviation_ms": 5.325,
"result_code": 0,
} }
acc.AssertContainsTaggedFields(t, "ping", fields, tags) acc.AssertContainsTaggedFields(t, "ping", fields, tags)
} }
@ -230,6 +232,7 @@ func TestBadPingGather(t *testing.T) {
"packets_transmitted": 2, "packets_transmitted": 2,
"packets_received": 0, "packets_received": 0,
"percent_packet_loss": 100.0, "percent_packet_loss": 100.0,
"result_code": 0,
} }
acc.AssertContainsTaggedFields(t, "ping", fields, tags) acc.AssertContainsTaggedFields(t, "ping", fields, tags)
} }


@ -4,6 +4,7 @@ package ping
import ( import (
"errors" "errors"
"net"
"os/exec" "os/exec"
"regexp" "regexp"
"strconv" "strconv"
@ -158,16 +159,27 @@ func (p *Ping) Gather(acc telegraf.Accumulator) error {
wg.Add(1) wg.Add(1)
go func(u string) { go func(u string) {
defer wg.Done() defer wg.Done()
tags := map[string]string{"url": u}
fields := map[string]interface{}{"result_code": 0}
_, err := net.LookupHost(u)
if err != nil {
errorChannel <- err
fields["result_code"] = 1
acc.AddFields("ping", fields, tags)
return
}
args := p.args(u) args := p.args(u)
totalTimeout := p.timeout() * float64(p.Count) totalTimeout := p.timeout() * float64(p.Count)
out, err := p.pingHost(totalTimeout, args...) out, err := p.pingHost(totalTimeout, args...)
// ping host return exitcode != 0 also when there was no response from host // ping host return exitcode != 0 also when there was no response from host
// but command was execute succesfully // but the command was executed successfully
if err != nil { if err != nil {
// Combine go err + stderr output // Combine go err + stderr output
pendingError = errors.New(strings.TrimSpace(out) + ", " + err.Error()) pendingError = errors.New(strings.TrimSpace(out) + ", " + err.Error())
} }
tags := map[string]string{"url": u}
trans, recReply, receivePacket, avg, min, max, err := processPingOutput(out) trans, recReply, receivePacket, avg, min, max, err := processPingOutput(out)
if err != nil { if err != nil {
// fatal error // fatal error
@ -175,24 +187,20 @@ func (p *Ping) Gather(acc telegraf.Accumulator) error {
errorChannel <- pendingError errorChannel <- pendingError
} }
errorChannel <- err errorChannel <- err
fields := map[string]interface{}{
"errors": 100.0,
}
fields["errors"] = 100.0
acc.AddFields("ping", fields, tags) acc.AddFields("ping", fields, tags)
return return
} }
// Calculate packet loss percentage // Calculate packet loss percentage
lossReply := float64(trans-recReply) / float64(trans) * 100.0 lossReply := float64(trans-recReply) / float64(trans) * 100.0
lossPackets := float64(trans-receivePacket) / float64(trans) * 100.0 lossPackets := float64(trans-receivePacket) / float64(trans) * 100.0
fields := map[string]interface{}{
"packets_transmitted": trans, fields["packets_transmitted"] = trans
"reply_received": recReply, fields["reply_received"] = recReply
"packets_received": receivePacket, fields["packets_received"] = receivePacket
"percent_packet_loss": lossPackets, fields["percent_packet_loss"] = lossPackets
"percent_reply_loss": lossReply, fields["percent_reply_loss"] = lossReply
}
if avg > 0 { if avg > 0 {
fields["average_response_ms"] = float64(avg) fields["average_response_ms"] = float64(avg)
} }

View File

@ -4,9 +4,10 @@ package ping
import ( import (
"errors" "errors"
"testing"
"github.com/influxdata/telegraf/testutil" "github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"testing"
) )
// Windows ping format ( should support multilanguage ?) // Windows ping format ( should support multilanguage ?)
@ -81,6 +82,7 @@ func TestPingGather(t *testing.T) {
"average_response_ms": 50.0, "average_response_ms": 50.0,
"minimum_response_ms": 50.0, "minimum_response_ms": 50.0,
"maximum_response_ms": 52.0, "maximum_response_ms": 52.0,
"result_code": 0,
} }
acc.AssertContainsTaggedFields(t, "ping", fields, tags) acc.AssertContainsTaggedFields(t, "ping", fields, tags)
@ -121,6 +123,7 @@ func TestBadPingGather(t *testing.T) {
"reply_received": 0, "reply_received": 0,
"percent_packet_loss": 100.0, "percent_packet_loss": 100.0,
"percent_reply_loss": 100.0, "percent_reply_loss": 100.0,
"result_code": 0,
} }
acc.AssertContainsTaggedFields(t, "ping", fields, tags) acc.AssertContainsTaggedFields(t, "ping", fields, tags)
} }
@ -167,6 +170,7 @@ func TestLossyPingGather(t *testing.T) {
"average_response_ms": 115.0, "average_response_ms": 115.0,
"minimum_response_ms": 114.0, "minimum_response_ms": 114.0,
"maximum_response_ms": 119.0, "maximum_response_ms": 119.0,
"result_code": 0,
} }
acc.AssertContainsTaggedFields(t, "ping", fields, tags) acc.AssertContainsTaggedFields(t, "ping", fields, tags)
} }
@ -269,6 +273,7 @@ func TestUnreachablePingGather(t *testing.T) {
"reply_received": 0, "reply_received": 0,
"percent_packet_loss": 75.0, "percent_packet_loss": 75.0,
"percent_reply_loss": 100.0, "percent_reply_loss": 100.0,
"result_code": 0,
} }
acc.AssertContainsTaggedFields(t, "ping", fields, tags) acc.AssertContainsTaggedFields(t, "ping", fields, tags)
@ -315,6 +320,7 @@ func TestTTLExpiredPingGather(t *testing.T) {
"reply_received": 0, "reply_received": 0,
"percent_packet_loss": 75.0, "percent_packet_loss": 75.0,
"percent_reply_loss": 100.0, "percent_reply_loss": 100.0,
"result_code": 0,
} }
acc.AssertContainsTaggedFields(t, "ping", fields, tags) acc.AssertContainsTaggedFields(t, "ping", fields, tags)


@ -76,7 +76,7 @@ func TestMemcachedGeneratesMetrics(t *testing.T) {
binary.Read(rand.Reader, binary.LittleEndian, &randomNumber) binary.Read(rand.Reader, binary.LittleEndian, &randomNumber)
socket, err := net.Listen("unix", fmt.Sprintf("/tmp/pdns%d.controlsocket", randomNumber)) socket, err := net.Listen("unix", fmt.Sprintf("/tmp/pdns%d.controlsocket", randomNumber))
if err != nil { if err != nil {
t.Fatal("Cannot initalize server on port ") t.Fatal("Cannot initialize server on port ")
} }
defer socket.Close() defer socket.Close()


@ -86,7 +86,7 @@ func Parse(buf []byte, header http.Header) ([]telegraf.Metric, error) {
} else { } else {
t = time.Now() t = time.Now()
} }
metric, err := metric.New(metricName, tags, fields, t) metric, err := metric.New(metricName, tags, fields, t, valueType(mf.GetType()))
if err == nil { if err == nil {
metrics = append(metrics, metric) metrics = append(metrics, metric)
} }
@ -97,6 +97,21 @@ func Parse(buf []byte, header http.Header) ([]telegraf.Metric, error) {
return metrics, err return metrics, err
} }
func valueType(mt dto.MetricType) telegraf.ValueType {
switch mt {
case dto.MetricType_COUNTER:
return telegraf.Counter
case dto.MetricType_GAUGE:
return telegraf.Gauge
case dto.MetricType_SUMMARY:
return telegraf.Summary
case dto.MetricType_HISTOGRAM:
return telegraf.Histogram
default:
return telegraf.Untyped
}
}
// Get Quantiles from summary metric // Get Quantiles from summary metric
func makeQuantiles(m *dto.Metric) map[string]interface{} { func makeQuantiles(m *dto.Metric) map[string]interface{} {
fields := make(map[string]interface{}) fields := make(map[string]interface{})
@ -134,11 +149,11 @@ func getNameAndValue(m *dto.Metric) map[string]interface{} {
fields["gauge"] = float64(m.GetGauge().GetValue()) fields["gauge"] = float64(m.GetGauge().GetValue())
} }
} else if m.Counter != nil { } else if m.Counter != nil {
if !math.IsNaN(m.GetGauge().GetValue()) { if !math.IsNaN(m.GetCounter().GetValue()) {
fields["counter"] = float64(m.GetCounter().GetValue()) fields["counter"] = float64(m.GetCounter().GetValue())
} }
} else if m.Untyped != nil { } else if m.Untyped != nil {
if !math.IsNaN(m.GetGauge().GetValue()) { if !math.IsNaN(m.GetUntyped().GetValue()) {
fields["value"] = float64(m.GetUntyped().GetValue()) fields["value"] = float64(m.GetUntyped().GetValue())
} }
} }


@ -218,7 +218,19 @@ func (p *Prometheus) gatherURL(url UrlAndAddress, acc telegraf.Accumulator) erro
if url.Address != "" { if url.Address != "" {
tags["address"] = url.Address tags["address"] = url.Address
} }
acc.AddFields(metric.Name(), metric.Fields(), tags, metric.Time())
switch metric.Type() {
case telegraf.Counter:
acc.AddCounter(metric.Name(), metric.Fields(), tags, metric.Time())
case telegraf.Gauge:
acc.AddGauge(metric.Name(), metric.Fields(), tags, metric.Time())
case telegraf.Summary:
acc.AddSummary(metric.Name(), metric.Fields(), tags, metric.Time())
case telegraf.Histogram:
acc.AddHistogram(metric.Name(), metric.Fields(), tags, metric.Time())
default:
acc.AddFields(metric.Name(), metric.Fields(), tags, metric.Time())
}
} }
return nil return nil


@ -0,0 +1,135 @@
# Telegraf S.M.A.R.T. plugin
Get metrics using the command line utility `smartctl` for S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology) storage devices. SMART is a monitoring system included in computer hard disk drives (HDDs) and solid-state drives (SSDs) that detects and reports on various indicators of drive reliability, with the aim of anticipating hardware failures.
See smartmontools (https://www.smartmontools.org/).
If no devices are specified, the plugin will scan for SMART devices via the following command:
```
smartctl --scan
```
Metrics will be reported from the following `smartctl` command:
```
smartctl --info --attributes --health -n <nocheck> --format=brief <device>
```
This plugin supports _smartmontools_ version 5.41 and above, but v. 5.41 and v. 5.42
might require setting `nocheck`; see the comment in the sample configuration.
To enable SMART on a storage device run:
```
smartctl -s on <device>
```
## Measurements
- smart_device:
* Tags:
- `capacity`
- `device`
- `device_model`
- `enabled`
- `health`
- `serial_no`
- `wwn`
* Fields:
- `exit_status`
- `health_ok`
- `read_error_rate`
- `seek_error`
- `temp_c`
- `udma_crc_errors`
- smart_attribute:
* Tags:
- `device`
- `fail`
- `flags`
- `id`
- `name`
- `serial_no`
- `wwn`
* Fields:
- `exit_status`
- `raw_value`
- `threshold`
- `value`
- `worst`
### Flags
The interpretation of the tag `flags` is (see the decoding sketch after this list):
- *K* auto-keep
- *C* event count
- *R* error rate
- *S* speed/performance
- *O* updated online
- *P* prefailure warning
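For illustration, a small sketch that decodes such a flags string (a hypothetical helper, not part of the plugin):

```go
package main

import "fmt"

// decodeFlags lists the meanings encoded in a smartctl --format=brief
// flags string such as "PO--CK" or "-O-RC-".
func decodeFlags(flags string) []string {
	meanings := map[byte]string{
		'P': "prefailure warning",
		'O': "updated online",
		'S': "speed/performance",
		'R': "error rate",
		'C': "event count",
		'K': "auto-keep",
	}
	var set []string
	for i := 0; i < len(flags); i++ {
		if m, ok := meanings[flags[i]]; ok {
			set = append(set, m)
		}
	}
	return set
}

func main() {
	fmt.Println(decodeFlags("-O-RC-")) // [updated online error rate event count]
}
```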
### Exit Status
The `exit_status` field captures the exit status of the smartctl command,
which is defined as a bitmask. For the interpretation of the bitmask see the
man page for smartctl; a small decoding sketch follows.
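As an illustration, here is a sketch that expands the bitmask into human-readable messages; the bit meanings are paraphrased from the smartctl man page, which remains the authoritative reference:

```go
package main

import "fmt"

// exitBits paraphrases the smartctl(8) exit status bits, lowest bit first.
var exitBits = []string{
	"command line did not parse",
	"device open failed, or device is in a low-power mode",
	"a SMART or other ATA command to the disk failed",
	"SMART status check returned DISK FAILING",
	"some prefail attributes are at or below threshold",
	"some attributes were at or below threshold in the past",
	"the device error log contains records of errors",
	"the device self-test log contains records of errors",
}

// decodeExitStatus returns a message for every bit set in status.
func decodeExitStatus(status int) []string {
	var msgs []string
	for i, msg := range exitBits {
		if status&(1<<uint(i)) != 0 {
			msgs = append(msgs, msg)
		}
	}
	return msgs
}

func main() {
	fmt.Println(decodeExitStatus(8)) // [SMART status check returned DISK FAILING]
}
```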
### Device Names
Device names, e.g., `/dev/sda`, are *not persistent* and may change
across reboots or system changes. Instead, you can use the
*World Wide Name* (WWN) or serial number to identify devices. On Linux,
block devices can be referenced by WWN under `/dev/disk/by-id/`.
## Configuration
```toml
# Read metrics from storage devices supporting S.M.A.R.T.
[[inputs.smart]]
## Optionally specify the path to the smartctl executable
# path = "/usr/bin/smartctl"
#
## On most platforms smartctl requires root access.
## Setting 'use_sudo' to true will make use of sudo to run smartctl.
## Sudo must be configured to allow the telegraf user to run smartctl
## without a password.
# use_sudo = false
#
## Skip checking disks in this power mode. Defaults to
## "standby" to not wake up disks that have stoped rotating.
## See --nockeck in the man pages for smartctl.
## smartctl version 5.41 and 5.42 have faulty detection of
## power mode and might require changing this value to
## "never" depending on your storage device.
# nocheck = "standby"
#
## Gather detailed metrics for each SMART Attribute.
## Defaults to "false"
##
# attributes = false
#
## Optionally specify devices to exclude from reporting.
# excludes = [ "/dev/pass6" ]
#
## Optionally specify devices and device type; if unset,
## a scan (smartctl --scan) for S.M.A.R.T. devices will be
## done and all devices found will be included, except
## those listed in excludes.
# devices = [ "/dev/ada0 -d atacam" ]
```
To run `smartctl` with `sudo`, create a wrapper script and point `path` in
the configuration at that script.
## Output
Example output from an _Apple SSD_:
```
> smart_attribute,serial_no=S1K5NYCD964433,wwn=5002538655584d30,id=199,name=UDMA_CRC_Error_Count,flags=-O-RC-,fail=-,host=mbpro.local,device=/dev/rdisk0 threshold=0i,raw_value=0i,exit_status=0i,value=200i,worst=200i 1502536854000000000
> smart_attribute,device=/dev/rdisk0,serial_no=S1K5NYCD964433,wwn=5002538655584d30,id=240,name=Unknown_SSD_Attribute,flags=-O---K,fail=-,host=mbpro.local exit_status=0i,value=100i,worst=100i,threshold=0i,raw_value=0i 1502536854000000000
> smart_device,enabled=Enabled,host=mbpro.local,device=/dev/rdisk0,model=APPLE\ SSD\ SM0512F,serial_no=S1K5NYCD964433,wwn=5002538655584d30,capacity=500277790720 udma_crc_errors=0i,exit_status=0i,health_ok=true,read_error_rate=0i,temp_c=40i 1502536854000000000
```


@ -0,0 +1,339 @@
package smart
import (
"fmt"
"os/exec"
"regexp"
"strconv"
"strings"
"sync"
"syscall"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
)
var (
execCommand = exec.Command // execCommand is used to mock commands in tests.
// Device Model: APPLE SSD SM256E
modelInInfo = regexp.MustCompile("^Device Model:\\s+(.*)$")
// Serial Number: S0X5NZBC422720
serialInInfo = regexp.MustCompile("^Serial Number:\\s+(.*)$")
// LU WWN Device Id: 5 002538 655584d30
wwnInInfo = regexp.MustCompile("^LU WWN Device Id:\\s+(.*)$")
// User Capacity: 251,000,193,024 bytes [251 GB]
usercapacityInInfo = regexp.MustCompile("^User Capacity:\\s+([0-9,]+)\\s+bytes.*$")
// SMART support is: Enabled
smartEnabledInInfo = regexp.MustCompile("^SMART support is:\\s+(\\w+)$")
// SMART overall-health self-assessment test result: PASSED
// PASSED, FAILED, UNKNOWN
smartOverallHealth = regexp.MustCompile("^SMART overall-health self-assessment test result:\\s+(\\w+).*$")
// ID# ATTRIBUTE_NAME FLAGS VALUE WORST THRESH FAIL RAW_VALUE
// 1 Raw_Read_Error_Rate -O-RC- 200 200 000 - 0
// 5 Reallocated_Sector_Ct PO--CK 100 100 000 - 0
// 192 Power-Off_Retract_Count -O--C- 097 097 000 - 14716
attribute = regexp.MustCompile("^\\s*([0-9]+)\\s(\\S+)\\s+([-P][-O][-S][-R][-C][-K])\\s+([0-9]+)\\s+([0-9]+)\\s+([0-9]+)\\s+([-\\w]+)\\s+([\\w\\+\\.]+).*$")
deviceFieldIds = map[string]string{
"1": "read_error_rate",
"7": "seek_error_rate",
"194": "temp_c",
"199": "udma_crc_errors",
}
)
type Smart struct {
Path string
Nocheck string
Attributes bool
Excludes []string
Devices []string
UseSudo bool
}
var sampleConfig = `
## Optionally specify the path to the smartctl executable
# path = "/usr/bin/smartctl"
#
## On most platforms smartctl requires root access.
## Setting 'use_sudo' to true will make use of sudo to run smartctl.
## Sudo must be configured to allow the telegraf user to run smartctl
## without a password.
# use_sudo = false
#
## Skip checking disks in this power mode. Defaults to
## "standby" to not wake up disks that have stoped rotating.
## See --nocheck in the man pages for smartctl.
## smartctl version 5.41 and 5.42 have faulty detection of
## power mode and might require changing this value to
## "never" depending on your disks.
# nocheck = "standby"
#
## Gather detailed metrics for each SMART Attribute.
## Defaults to "false"
##
# attributes = false
#
## Optionally specify devices to exclude from reporting.
# excludes = [ "/dev/pass6" ]
#
## Optionally specify devices and device type; if unset,
## a scan (smartctl --scan) for S.M.A.R.T. devices will be
## done and all devices found will be included, except
## those listed in excludes.
# devices = [ "/dev/ada0 -d atacam" ]
`
func (m *Smart) SampleConfig() string {
return sampleConfig
}
func (m *Smart) Description() string {
return "Read metrics from storage devices supporting S.M.A.R.T."
}
func (m *Smart) Gather(acc telegraf.Accumulator) error {
if len(m.Path) == 0 {
return fmt.Errorf("smartctl not found: verify that smartctl is installed and that smartctl is in your PATH")
}
devices := m.Devices
if len(devices) == 0 {
var err error
devices, err = m.scan()
if err != nil {
return err
}
}
m.getAttributes(acc, devices)
return nil
}
// Wrap with sudo
func sudo(sudo bool, command string, args ...string) *exec.Cmd {
if sudo {
return execCommand("sudo", append([]string{"-n", command}, args...)...)
}
return execCommand(command, args...)
}
// Scan for S.M.A.R.T. devices
func (m *Smart) scan() ([]string, error) {
cmd := sudo(m.UseSudo, m.Path, "--scan")
out, err := internal.CombinedOutputTimeout(cmd, time.Second*5)
if err != nil {
return []string{}, fmt.Errorf("failed to run command %s: %s - %s", strings.Join(cmd.Args, " "), err, string(out))
}
devices := []string{}
for _, line := range strings.Split(string(out), "\n") {
dev := strings.Split(line, "#")
if len(dev) > 1 && !excludedDev(m.Excludes, strings.TrimSpace(dev[0])) {
devices = append(devices, strings.TrimSpace(dev[0]))
}
}
return devices, nil
}
func excludedDev(excludes []string, deviceLine string) bool {
device := strings.Split(deviceLine, " ")
if len(device) != 0 {
for _, exclude := range excludes {
if device[0] == exclude {
return true
}
}
}
return false
}
// Get info and attributes for each S.M.A.R.T. device
func (m *Smart) getAttributes(acc telegraf.Accumulator, devices []string) {
var wg sync.WaitGroup
wg.Add(len(devices))
for _, device := range devices {
go gatherDisk(acc, m.UseSudo, m.Attributes, m.Path, m.Nocheck, device, &wg)
}
wg.Wait()
}
// Command line parse errors are denoted by the exit code having the 0 bit set.
// All other errors are drive/communication errors and should be ignored.
func exitStatus(err error) (int, error) {
if exiterr, ok := err.(*exec.ExitError); ok {
if status, ok := exiterr.Sys().(syscall.WaitStatus); ok {
return status.ExitStatus(), nil
}
}
return 0, err
}
func gatherDisk(acc telegraf.Accumulator, usesudo, attributes bool, path, nocheck, device string, wg *sync.WaitGroup) {
defer wg.Done()
// smartctl 5.41 & 5.42 are broken regarding handling of --nocheck/-n
args := []string{"--info", "--health", "--attributes", "--tolerance=verypermissive", "-n", nocheck, "--format=brief"}
args = append(args, strings.Split(device, " ")...)
cmd := sudo(usesudo, path, args...)
out, e := internal.CombinedOutputTimeout(cmd, time.Second*5)
outStr := string(out)
// Ignore all exit statuses except if it is a command line parse error
exitStatus, er := exitStatus(e)
if er != nil {
acc.AddError(fmt.Errorf("failed to run command %s: %s - %s", strings.Join(cmd.Args, " "), e, outStr))
return
}
device_tags := map[string]string{}
device_tags["device"] = strings.Split(device, " ")[0]
device_fields := make(map[string]interface{})
device_fields["exit_status"] = exitStatus
for _, line := range strings.Split(outStr, "\n") {
model := modelInInfo.FindStringSubmatch(line)
if len(model) > 1 {
device_tags["model"] = model[1]
}
serial := serialInInfo.FindStringSubmatch(line)
if len(serial) > 1 {
device_tags["serial_no"] = serial[1]
}
wwn := wwnInInfo.FindStringSubmatch(line)
if len(wwn) > 1 {
device_tags["wwn"] = strings.Replace(wwn[1], " ", "", -1)
}
capacity := usercapacityInInfo.FindStringSubmatch(line)
if len(capacity) > 1 {
device_tags["capacity"] = strings.Replace(capacity[1], ",", "", -1)
}
enabled := smartEnabledInInfo.FindStringSubmatch(line)
if len(enabled) > 1 {
device_tags["enabled"] = enabled[1]
}
health := smartOverallHealth.FindStringSubmatch(line)
if len(health) > 1 {
device_fields["health_ok"] = (health[1] == "PASSED")
}
attr := attribute.FindStringSubmatch(line)
if len(attr) > 1 {
if attributes {
tags := map[string]string{}
fields := make(map[string]interface{})
tags["device"] = strings.Split(device, " ")[0]
if serial, ok := device_tags["serial_no"]; ok {
tags["serial_no"] = serial
}
if wwn, ok := device_tags["wwn"]; ok {
tags["wwn"] = wwn
}
tags["id"] = attr[1]
tags["name"] = attr[2]
tags["flags"] = attr[3]
fields["exit_status"] = exitStatus
if i, err := strconv.ParseInt(attr[4], 10, 64); err == nil {
fields["value"] = i
}
if i, err := strconv.ParseInt(attr[5], 10, 64); err == nil {
fields["worst"] = i
}
if i, err := strconv.ParseInt(attr[6], 10, 64); err == nil {
fields["threshold"] = i
}
tags["fail"] = attr[7]
if val, err := parseRawValue(attr[8]); err == nil {
fields["raw_value"] = val
}
acc.AddFields("smart_attribute", fields, tags)
}
// If the attribute matches on the one in deviceFieldIds
// save the raw value to a field.
if field, ok := deviceFieldIds[attr[1]]; ok {
if val, err := parseRawValue(attr[8]); err == nil {
device_fields[field] = val
}
}
}
}
acc.AddFields("smart_device", device_fields, device_tags)
}
func parseRawValue(rawVal string) (int64, error) {
// Integer
if i, err := strconv.ParseInt(rawVal, 10, 64); err == nil {
return i, nil
}
// Duration: 65h+33m+09.259s
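// e.g. "6585h+55m+23.234s" -> 6585*3600 + 55*60 + 23 = 23709323 seconds
// (fractions of a second are dropped), matching the Head_Flying_Hours
// raw value asserted in the tests below.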
unit := regexp.MustCompile("^(.*)([hms])$")
parts := strings.Split(rawVal, "+")
if len(parts) == 0 {
return 0, fmt.Errorf("Couldn't parse RAW_VALUE '%s'", rawVal)
}
duration := int64(0)
for _, part := range parts {
timePart := unit.FindStringSubmatch(part)
if len(timePart) == 0 {
continue
}
switch timePart[2] {
case "h":
duration += parseInt(timePart[1]) * int64(3600)
case "m":
duration += parseInt(timePart[1]) * int64(60)
case "s":
// drop fractions of seconds
duration += parseInt(strings.Split(timePart[1], ".")[0])
default:
// Unknown, ignore
}
}
return duration, nil
}
func parseInt(str string) int64 {
if i, err := strconv.ParseInt(str, 10, 64); err == nil {
return i
}
return 0
}
func init() {
m := Smart{}
path, _ := exec.LookPath("smartctl")
if len(path) > 0 {
m.Path = path
}
m.Nocheck = "standby"
inputs.Add("smart", func() telegraf.Input {
return &m
})
}


@ -0,0 +1,426 @@
package smart
import (
"fmt"
"os"
"os/exec"
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
var (
mockScanData = `/dev/ada0 -d atacam # /dev/ada0, ATA device
`
mockInfoAttributeData = `smartctl 6.5 2016-05-07 r4318 [Darwin 16.4.0 x86_64] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
CHECK POWER MODE not implemented, ignoring -n option
=== START OF INFORMATION SECTION ===
Model Family: Apple SD/SM/TS...E/F SSDs
Device Model: APPLE SSD SM256E
Serial Number: S0X5NZBC422720
LU WWN Device Id: 5 002538 043584d30
Firmware Version: CXM09A1Q
User Capacity: 251,000,193,024 bytes [251 GB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: Solid State Device
Device is: In smartctl database [for details use: -P show]
ATA Version is: ATA8-ACS T13/1699-D revision 4c
SATA Version is: SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Thu Feb 9 16:48:45 2017 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 1
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAGS VALUE WORST THRESH FAIL RAW_VALUE
1 Raw_Read_Error_Rate -O-RC- 200 200 000 - 0
5 Reallocated_Sector_Ct PO--CK 100 100 000 - 0
9 Power_On_Hours -O--CK 099 099 000 - 2988
12 Power_Cycle_Count -O--CK 085 085 000 - 14879
169 Unknown_Attribute PO--C- 253 253 010 - 2044932921600
173 Wear_Leveling_Count -O--CK 185 185 100 - 957808640337
190 Airflow_Temperature_Cel -O---K 055 040 045 Past 45 (Min/Max 43/57 #2689)
192 Power-Off_Retract_Count -O--C- 097 097 000 - 14716
194 Temperature_Celsius -O---K 066 021 000 - 34 (Min/Max 14/79)
197 Current_Pending_Sector -O---K 100 100 000 - 0
199 UDMA_CRC_Error_Count -O-RC- 200 200 000 - 0
240 Head_Flying_Hours ------ 100 253 000 - 6585h+55m+23.234s
||||||_ K auto-keep
|||||__ C event count
||||___ R error rate
|||____ S speed/performance
||_____ O updated online
|______ P prefailure warning
`
)
func TestGatherAttributes(t *testing.T) {
s := &Smart{
Path: "smartctl",
Attributes: true,
}
// overwriting exec commands with mock commands
execCommand = fakeExecCommand
var acc testutil.Accumulator
err := s.Gather(&acc)
require.NoError(t, err)
assert.Equal(t, 65, acc.NFields(), "Wrong number of fields gathered")
var testsAda0Attributes = []struct {
fields map[string]interface{}
tags map[string]string
}{
{
map[string]interface{}{
"value": int64(200),
"worst": int64(200),
"threshold": int64(0),
"raw_value": int64(0),
"exit_status": int(0),
},
map[string]string{
"device": "/dev/ada0",
"serial_no": "S0X5NZBC422720",
"wwn": "5002538043584d30",
"id": "1",
"name": "Raw_Read_Error_Rate",
"flags": "-O-RC-",
"fail": "-",
},
},
{
map[string]interface{}{
"value": int64(100),
"worst": int64(100),
"threshold": int64(0),
"raw_value": int64(0),
"exit_status": int(0),
},
map[string]string{
"device": "/dev/ada0",
"serial_no": "S0X5NZBC422720",
"wwn": "5002538043584d30",
"id": "5",
"name": "Reallocated_Sector_Ct",
"flags": "PO--CK",
"fail": "-",
},
},
{
map[string]interface{}{
"value": int64(99),
"worst": int64(99),
"threshold": int64(0),
"raw_value": int64(2988),
"exit_status": int(0),
},
map[string]string{
"device": "/dev/ada0",
"serial_no": "S0X5NZBC422720",
"wwn": "5002538043584d30",
"id": "9",
"name": "Power_On_Hours",
"flags": "-O--CK",
"fail": "-",
},
},
{
map[string]interface{}{
"value": int64(85),
"worst": int64(85),
"threshold": int64(0),
"raw_value": int64(14879),
"exit_status": int(0),
},
map[string]string{
"device": "/dev/ada0",
"serial_no": "S0X5NZBC422720",
"wwn": "5002538043584d30",
"id": "12",
"name": "Power_Cycle_Count",
"flags": "-O--CK",
"fail": "-",
},
},
{
map[string]interface{}{
"value": int64(253),
"worst": int64(253),
"threshold": int64(10),
"raw_value": int64(2044932921600),
"exit_status": int(0),
},
map[string]string{
"device": "/dev/ada0",
"serial_no": "S0X5NZBC422720",
"wwn": "5002538043584d30",
"id": "169",
"name": "Unknown_Attribute",
"flags": "PO--C-",
"fail": "-",
},
},
{
map[string]interface{}{
"value": int64(185),
"worst": int64(185),
"threshold": int64(100),
"raw_value": int64(957808640337),
"exit_status": int(0),
},
map[string]string{
"device": "/dev/ada0",
"serial_no": "S0X5NZBC422720",
"wwn": "5002538043584d30",
"id": "173",
"name": "Wear_Leveling_Count",
"flags": "-O--CK",
"fail": "-",
},
},
{
map[string]interface{}{
"value": int64(55),
"worst": int64(40),
"threshold": int64(45),
"raw_value": int64(45),
"exit_status": int(0),
},
map[string]string{
"device": "/dev/ada0",
"serial_no": "S0X5NZBC422720",
"wwn": "5002538043584d30",
"id": "190",
"name": "Airflow_Temperature_Cel",
"flags": "-O---K",
"fail": "Past",
},
},
{
map[string]interface{}{
"value": int64(97),
"worst": int64(97),
"threshold": int64(0),
"raw_value": int64(14716),
"exit_status": int(0),
},
map[string]string{
"device": "/dev/ada0",
"serial_no": "S0X5NZBC422720",
"wwn": "5002538043584d30",
"id": "192",
"name": "Power-Off_Retract_Count",
"flags": "-O--C-",
"fail": "-",
},
},
{
map[string]interface{}{
"value": int64(66),
"worst": int64(21),
"threshold": int64(0),
"raw_value": int64(34),
"exit_status": int(0),
},
map[string]string{
"device": "/dev/ada0",
"serial_no": "S0X5NZBC422720",
"wwn": "5002538043584d30",
"id": "194",
"name": "Temperature_Celsius",
"flags": "-O---K",
"fail": "-",
},
},
{
map[string]interface{}{
"value": int64(100),
"worst": int64(100),
"threshold": int64(0),
"raw_value": int64(0),
"exit_status": int(0),
},
map[string]string{
"device": "/dev/ada0",
"serial_no": "S0X5NZBC422720",
"wwn": "5002538043584d30",
"id": "197",
"name": "Current_Pending_Sector",
"flags": "-O---K",
"fail": "-",
},
},
{
map[string]interface{}{
"value": int64(200),
"worst": int64(200),
"threshold": int64(0),
"raw_value": int64(0),
"exit_status": int(0),
},
map[string]string{
"device": "/dev/ada0",
"serial_no": "S0X5NZBC422720",
"wwn": "5002538043584d30",
"id": "199",
"name": "UDMA_CRC_Error_Count",
"flags": "-O-RC-",
"fail": "-",
},
},
{
map[string]interface{}{
"value": int64(100),
"worst": int64(253),
"threshold": int64(0),
"raw_value": int64(23709323),
"exit_status": int(0),
},
map[string]string{
"device": "/dev/ada0",
"serial_no": "S0X5NZBC422720",
"wwn": "5002538043584d30",
"id": "240",
"name": "Head_Flying_Hours",
"flags": "------",
"fail": "-",
},
},
}
for _, test := range testsAda0Attributes {
acc.AssertContainsTaggedFields(t, "smart_attribute", test.fields, test.tags)
}
// tags = map[string]string{}
var testsAda0Device = []struct {
fields map[string]interface{}
tags map[string]string
}{
{
map[string]interface{}{
"exit_status": int(0),
"health_ok": bool(true),
"read_error_rate": int64(0),
"temp_c": int64(34),
"udma_crc_errors": int64(0),
},
map[string]string{
"device": "/dev/ada0",
"model": "APPLE SSD SM256E",
"serial_no": "S0X5NZBC422720",
"wwn": "5002538043584d30",
"enabled": "Enabled",
"capacity": "251000193024",
},
},
}
for _, test := range testsAda0Device {
acc.AssertContainsTaggedFields(t, "smart_device", test.fields, test.tags)
}
}
func TestGatherNoAttributes(t *testing.T) {
s := &Smart{
Path: "smartctl",
Attributes: false,
}
// overwriting exec commands with mock commands
execCommand = fakeExecCommand
var acc testutil.Accumulator
err := s.Gather(&acc)
require.NoError(t, err)
assert.Equal(t, 5, acc.NFields(), "Wrong number of fields gathered")
acc.AssertDoesNotContainMeasurement(t, "smart_attribute")
// tags = map[string]string{}
var testsAda0Device = []struct {
fields map[string]interface{}
tags map[string]string
}{
{
map[string]interface{}{
"exit_status": int(0),
"health_ok": bool(true),
"read_error_rate": int64(0),
"temp_c": int64(34),
"udma_crc_errors": int64(0),
},
map[string]string{
"device": "/dev/ada0",
"model": "APPLE SSD SM256E",
"serial_no": "S0X5NZBC422720",
"wwn": "5002538043584d30",
"enabled": "Enabled",
"capacity": "251000193024",
},
},
}
for _, test := range testsAda0Device {
acc.AssertContainsTaggedFields(t, "smart_device", test.fields, test.tags)
}
}
func TestExcludedDev(t *testing.T) {
assert.Equal(t, true, excludedDev([]string{"/dev/pass6"}, "/dev/pass6 -d atacam"), "Should be excluded.")
assert.Equal(t, false, excludedDev([]string{}, "/dev/pass6 -d atacam"), "Shouldn't be excluded.")
assert.Equal(t, false, excludedDev([]string{"/dev/pass6"}, "/dev/pass1 -d atacam"), "Shouldn't be excluded.")
}
// fakeExecCommand is a helper function that mocks
// the exec.Command call (and calls the test binary)
func fakeExecCommand(command string, args ...string) *exec.Cmd {
cs := []string{"-test.run=TestHelperProcess", "--", command}
cs = append(cs, args...)
cmd := exec.Command(os.Args[0], cs...)
cmd.Env = []string{"GO_WANT_HELPER_PROCESS=1"}
return cmd
}
// TestHelperProcess isn't a real test. It's used to mock exec.Command
// For example, if you run:
// GO_WANT_HELPER_PROCESS=1 go test -test.run=TestHelperProcess -- smartctl --scan
// it prints the mockScanData defined above.
func TestHelperProcess(t *testing.T) {
if os.Getenv("GO_WANT_HELPER_PROCESS") != "1" {
return
}
args := os.Args
// The preceding arguments are test-framework arguments that look like:
// /tmp/go-build970079519/…/_test/integration.test -test.run=TestHelperProcess --
cmd, arg1, args := args[3], args[4], args[5:]
if cmd == "smartctl" {
if arg1 == "--scan" {
fmt.Fprint(os.Stdout, mockScanData)
}
if arg1 == "--info" {
fmt.Fprint(os.Stdout, mockInfoAttributeData)
}
} else {
fmt.Fprint(os.Stdout, "command not found")
os.Exit(1)
}
os.Exit(0)
}


@@ -135,7 +135,7 @@ type Snmp struct {
 	Name   string
 	Fields []Field `toml:"field"`

-	connectionCache map[string]snmpConnection
+	connectionCache []snmpConnection
 	initialized     bool
 }
@@ -144,6 +144,8 @@ func (s *Snmp) init() error {
 		return nil
 	}

+	s.connectionCache = make([]snmpConnection, len(s.Agents))
+
 	for i := range s.Tables {
 		if err := s.Tables[i].init(); err != nil {
 			return Errorf(err, "initializing table %s", s.Tables[i].Name)
@@ -342,30 +344,36 @@ func (s *Snmp) Gather(acc telegraf.Accumulator) error {
 		return err
 	}

-	for _, agent := range s.Agents {
-		gs, err := s.getConnection(agent)
-		if err != nil {
-			acc.AddError(Errorf(err, "agent %s", agent))
-			continue
-		}
-
-		// First is the top-level fields. We treat the fields as table prefixes with an empty index.
-		t := Table{
-			Name:   s.Name,
-			Fields: s.Fields,
-		}
-		topTags := map[string]string{}
-		if err := s.gatherTable(acc, gs, t, topTags, false); err != nil {
-			acc.AddError(Errorf(err, "agent %s", agent))
-		}
-
-		// Now is the real tables.
-		for _, t := range s.Tables {
-			if err := s.gatherTable(acc, gs, t, topTags, true); err != nil {
-				acc.AddError(Errorf(err, "agent %s: gathering table %s", agent, t.Name))
-			}
-		}
+	var wg sync.WaitGroup
+	for i, agent := range s.Agents {
+		wg.Add(1)
+		go func(i int, agent string) {
+			defer wg.Done()
+			gs, err := s.getConnection(i)
+			if err != nil {
+				acc.AddError(Errorf(err, "agent %s", agent))
+				return
+			}
+
+			// First is the top-level fields. We treat the fields as table prefixes with an empty index.
+			t := Table{
+				Name:   s.Name,
+				Fields: s.Fields,
+			}
+			topTags := map[string]string{}
+			if err := s.gatherTable(acc, gs, t, topTags, false); err != nil {
+				acc.AddError(Errorf(err, "agent %s", agent))
+			}
+
+			// Now is the real tables.
+			for _, t := range s.Tables {
+				if err := s.gatherTable(acc, gs, t, topTags, true); err != nil {
+					acc.AddError(Errorf(err, "agent %s: gathering table %s", agent, t.Name))
+				}
+			}
+		}(i, agent)
 	}
+	wg.Wait()

 	return nil
 }
@@ -568,16 +576,18 @@ func (gsw gosnmpWrapper) Get(oids []string) (*gosnmp.SnmpPacket, error) {
 }

 // getConnection creates a snmpConnection (*gosnmp.GoSNMP) object and caches the
-// result using `agent` as the cache key.
-func (s *Snmp) getConnection(agent string) (snmpConnection, error) {
-	if s.connectionCache == nil {
-		s.connectionCache = map[string]snmpConnection{}
-	}
-	if gs, ok := s.connectionCache[agent]; ok {
+// result using `agentIndex` as the cache key. This is done to allow multiple
+// connections to a single address. It is an error to use a connection in
+// more than one goroutine.
+func (s *Snmp) getConnection(idx int) (snmpConnection, error) {
+	if gs := s.connectionCache[idx]; gs != nil {
 		return gs, nil
 	}

+	agent := s.Agents[idx]
+
 	gs := gosnmpWrapper{&gosnmp.GoSNMP{}}
+	s.connectionCache[idx] = gs

 	host, portStr, err := net.SplitHostPort(agent)
 	if err != nil {
@@ -677,7 +687,6 @@ func (s *Snmp) getConnection(agent string) (snmpConnection, error) {
 		return nil, Errorf(err, "setting up connection")
 	}

-	s.connectionCache[agent] = gs
 	return gs, nil
 }
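To make the concurrency contract above concrete, here is a self-contained sketch of the pattern (not the plugin's actual code; `conn`, `dial`, and `poller` are stand-ins): one lazily filled cache slot per agent index, so each goroutine touches only its own slot and no connection is ever shared.

```go
package main

import (
	"fmt"
	"sync"
)

// conn stands in for the plugin's snmpConnection; dialing is stubbed out.
type conn struct{ addr string }

func dial(addr string) (*conn, error) { return &conn{addr: addr}, nil }

type poller struct {
	agents []string
	cache  []*conn // one slot per agent index, filled lazily
}

func (p *poller) getConnection(idx int) (*conn, error) {
	if c := p.cache[idx]; c != nil {
		return c, nil
	}
	c, err := dial(p.agents[idx])
	if err != nil {
		return nil, err
	}
	p.cache[idx] = c
	return c, nil
}

func main() {
	p := &poller{
		agents: []string{"1.2.3.4:161", "1.2.3.5:161"},
		cache:  make([]*conn, 2),
	}
	var wg sync.WaitGroup
	for i := range p.agents {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			c, _ := p.getConnection(i) // each goroutine owns cache[i]
			fmt.Println("polling", c.addr)
		}(i)
	}
	wg.Wait()
}
```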

View File

@@ -120,7 +120,7 @@ func TestSampleConfig(t *testing.T) {
 			},
 		},
 	}
-	assert.Equal(t, s, *conf.Inputs.Snmp[0])
+	assert.Equal(t, &s, conf.Inputs.Snmp[0])
 }

 func TestFieldInit(t *testing.T) {
@@ -251,13 +251,16 @@ func TestSnmpInit_noTranslate(t *testing.T) {
 func TestGetSNMPConnection_v2(t *testing.T) {
 	s := &Snmp{
+		Agents:    []string{"1.2.3.4:567", "1.2.3.4"},
 		Timeout:   internal.Duration{Duration: 3 * time.Second},
 		Retries:   4,
 		Version:   2,
 		Community: "foo",
 	}
+	err := s.init()
+	require.NoError(t, err)

-	gsc, err := s.getConnection("1.2.3.4:567")
+	gsc, err := s.getConnection(0)
 	require.NoError(t, err)
 	gs := gsc.(gosnmpWrapper)
 	assert.Equal(t, "1.2.3.4", gs.Target)
@@ -265,7 +268,7 @@ func TestGetSNMPConnection_v2(t *testing.T) {
 	assert.Equal(t, gosnmp.Version2c, gs.Version)
 	assert.Equal(t, "foo", gs.Community)

-	gsc, err = s.getConnection("1.2.3.4")
+	gsc, err = s.getConnection(1)
 	require.NoError(t, err)
 	gs = gsc.(gosnmpWrapper)
 	assert.Equal(t, "1.2.3.4", gs.Target)
@@ -274,6 +277,7 @@ func TestGetSNMPConnection_v2(t *testing.T) {
 func TestGetSNMPConnection_v3(t *testing.T) {
 	s := &Snmp{
+		Agents:         []string{"1.2.3.4"},
 		Version:        3,
 		MaxRepetitions: 20,
 		ContextName:    "mycontext",
@@ -287,8 +291,10 @@ func TestGetSNMPConnection_v3(t *testing.T) {
 		EngineBoots:    1,
 		EngineTime:     2,
 	}
+	err := s.init()
+	require.NoError(t, err)

-	gsc, err := s.getConnection("1.2.3.4")
+	gsc, err := s.getConnection(0)
 	require.NoError(t, err)
 	gs := gsc.(gosnmpWrapper)
 	assert.Equal(t, gs.Version, gosnmp.Version3)
@@ -308,15 +314,22 @@ func TestGetSNMPConnection_v3(t *testing.T) {
 }

 func TestGetSNMPConnection_caching(t *testing.T) {
-	s := &Snmp{}
-	gs1, err := s.getConnection("1.2.3.4")
+	s := &Snmp{
+		Agents: []string{"1.2.3.4", "1.2.3.5", "1.2.3.5"},
+	}
+	err := s.init()
 	require.NoError(t, err)
-	gs2, err := s.getConnection("1.2.3.4")
+	gs1, err := s.getConnection(0)
 	require.NoError(t, err)
-	gs3, err := s.getConnection("1.2.3.5")
+	gs2, err := s.getConnection(0)
+	require.NoError(t, err)
+	gs3, err := s.getConnection(1)
+	require.NoError(t, err)
+	gs4, err := s.getConnection(2)
 	require.NoError(t, err)
 	assert.True(t, gs1 == gs2)
 	assert.False(t, gs2 == gs3)
+	assert.False(t, gs3 == gs4)
 }

 func TestGosnmpWrapper_walk_retry(t *testing.T) {
@@ -560,11 +573,11 @@ func TestGather(t *testing.T) {
 			},
 		},

-		connectionCache: map[string]snmpConnection{
-			"TestGather": tsc,
+		connectionCache: []snmpConnection{
+			tsc,
 		},
 		initialized: true,
 	}
 	acc := &testutil.Accumulator{}

 	tstart := time.Now()
@@ -607,9 +620,10 @@ func TestGather_host(t *testing.T) {
 			},
 		},

-		connectionCache: map[string]snmpConnection{
-			"TestGather": tsc,
+		connectionCache: []snmpConnection{
+			tsc,
 		},
 		initialized: true,
 	}
 	acc := &testutil.Accumulator{}

View File

@@ -2,11 +2,12 @@ package sqlserver

 import (
 	"database/sql"
-	"github.com/influxdata/telegraf"
-	"github.com/influxdata/telegraf/plugins/inputs"
 	"sync"
 	"time"

+	"github.com/influxdata/telegraf"
+	"github.com/influxdata/telegraf/plugins/inputs"
+
 	// go-mssqldb initialization
 	_ "github.com/zensqlmonitor/go-mssqldb"
 )
@@ -244,10 +245,10 @@ UNION ALL
 SELECT 'Average pending disk IO', AveragePendingDiskIOCount = (SELECT AVG(pending_disk_io_count) FROM sys.dm_os_schedulers WITH (NOLOCK) WHERE scheduler_id < 255 )
 UNION ALL
 SELECT 'Buffer pool rate (bytes/sec)', BufferPoolRate = (1.0*cntr_value * 8 * 1024) /
-	(SELECT 1.0*cntr_value FROM sys.dm_os_performance_counters WHERE object_name like '%Buffer Manager%' AND lower(counter_name) = 'Page life expectancy')
+	(SELECT 1.0*cntr_value FROM sys.dm_os_performance_counters WHERE object_name like '%Buffer Manager%' AND counter_name = 'Page life expectancy')
 FROM sys.dm_os_performance_counters
 WHERE object_name like '%Buffer Manager%'
-AND counter_name = 'database pages'
+AND counter_name = 'Database pages'
 UNION ALL
 SELECT 'Memory grant pending', MemoryGrantPending = cntr_value
 FROM sys.dm_os_performance_counters
@@ -1436,16 +1437,16 @@ SELECT
 , type = 'Wait stats'
 ---- values
 , [I/O] = SUM([I/O])
-, [Latch] = SUM([Latch])
-, [Lock] = SUM([Lock])
-, [Network] = SUM([Network])
-, [Service broker] = SUM([Service broker])
-, [Memory] = SUM([Memory])
-, [Buffer] = SUM([Buffer])
+, [Latch] = SUM([LATCH])
+, [Lock] = SUM([LOCK])
+, [Network] = SUM([NETWORK])
+, [Service broker] = SUM([SERVICE BROKER])
+, [Memory] = SUM([MEMORY])
+, [Buffer] = SUM([BUFFER])
 , [CLR] = SUM([CLR])
 , [SQLOS] = SUM([SQLOS])
-, [XEvent] = SUM([XEvent])
-, [Other] = SUM([Other])
+, [XEvent] = SUM([XEVENT])
+, [Other] = SUM([OTHER])
 , [Total] = SUM([I/O]+[LATCH]+[LOCK]+[NETWORK]+[SERVICE BROKER]+[MEMORY]+[BUFFER]+[CLR]+[XEVENT]+[SQLOS]+[OTHER])
 FROM
 (
@@ -1479,16 +1480,16 @@ SELECT
 , type = 'Wait stats'
 ---- values
 , [I/O] = SUM([I/O])
-, [Latch] = SUM([Latch])
-, [Lock] = SUM([Lock])
-, [Network] = SUM([Network])
-, [Service broker] = SUM([Service broker])
-, [Memory] = SUM([Memory])
-, [Buffer] = SUM([Buffer])
+, [Latch] = SUM([LATCH])
+, [Lock] = SUM([LOCK])
+, [Network] = SUM([NETWORK])
+, [Service broker] = SUM([SERVICE BROKER])
+, [Memory] = SUM([MEMORY])
+, [Buffer] = SUM([BUFFER])
 , [CLR] = SUM([CLR])
 , [SQLOS] = SUM([SQLOS])
-, [XEvent] = SUM([XEvent])
-, [Other] = SUM([Other])
+, [XEvent] = SUM([XEVENT])
+, [Other] = SUM([OTHER])
 , [Total] = SUM([I/O]+[LATCH]+[LOCK]+[NETWORK]+[SERVICE BROKER]+[MEMORY]+[BUFFER]+[CLR]+[XEVENT]+[SQLOS]+[OTHER])
 FROM
 (

View File

@@ -5,7 +5,7 @@
 ```toml
 # Statsd Server
 [[inputs.statsd]]
-  ## Protocol, must be "tcp" or "udp" (default=udp)
+  ## Protocol, must be "tcp", "udp4", "udp6" or "udp" (default=udp)
   protocol = "udp"

   ## MaxTCPConnection - applicable when protocol is set to tcp (default=250)
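For reference, a standalone sketch (arbitrary port, not plugin code) showing that Go's net package accepts these protocol strings directly, which is why the plugin can pass its configured protocol straight through:

```go
package main

import (
	"log"
	"net"
)

func main() {
	proto := "udp4" // "udp", "udp4", and "udp6" are all valid here
	addr, err := net.ResolveUDPAddr(proto, ":8125")
	if err != nil {
		log.Fatal(err)
	}
	conn, err := net.ListenUDP(proto, addr)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	log.Println("listening on", conn.LocalAddr())
}
```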

View File

@@ -66,7 +66,7 @@ type Statsd struct {
 	// MetricSeparator is the separator between parts of the metric name.
 	MetricSeparator string

-	// This flag enables parsing of tags in the dogstatsd extention to the
+	// This flag enables parsing of tags in the dogstatsd extension to the
 	// statsd protocol (http://docs.datadoghq.com/guides/dogstatsd/)
 	ParseDataDogTags bool
@@ -171,7 +171,7 @@ func (_ *Statsd) Description() string {
 }

 const sampleConfig = `
-  ## Protocol, must be "tcp" or "udp" (default=udp)
+  ## Protocol, must be "tcp", "udp", "udp4" or "udp6" (default=udp)
   protocol = "udp"

   ## MaxTCPConnection - applicable when protocol is set to tcp (default=250)
@@ -327,10 +327,9 @@ func (s *Statsd) Start(_ telegraf.Accumulator) error {
 	s.wg.Add(2)
 	// Start the UDP listener
-	switch s.Protocol {
-	case "udp":
+	if s.isUDP() {
 		go s.udpListen()
-	case "tcp":
+	} else {
 		go s.tcpListen()
 	}
 	// Start the line parser
@@ -382,8 +381,8 @@ func (s *Statsd) tcpListen() error {
 func (s *Statsd) udpListen() error {
 	defer s.wg.Done()
 	var err error
-	address, _ := net.ResolveUDPAddr("udp", s.ServiceAddress)
-	s.UDPlistener, err = net.ListenUDP("udp", address)
+	address, _ := net.ResolveUDPAddr(s.Protocol, s.ServiceAddress)
+	s.UDPlistener, err = net.ListenUDP(s.Protocol, address)
 	if err != nil {
 		log.Fatalf("ERROR: ListenUDP - %s", err)
 	}
@@ -427,13 +426,13 @@ func (s *Statsd) parser() error {
 			return nil
 		case buf := <-s.in:
 			lines := strings.Split(buf.String(), "\n")
+			s.bufPool.Put(buf)
 			for _, line := range lines {
 				line = strings.TrimSpace(line)
 				if line != "" {
 					s.parseStatsdLine(line)
 				}
 			}
-			s.bufPool.Put(buf)
 		}
 	}
 }
@@ -825,10 +824,9 @@ func (s *Statsd) Stop() {
 	s.Lock()
 	log.Println("I! Stopping the statsd service")
 	close(s.done)
-	switch s.Protocol {
-	case "udp":
+	if s.isUDP() {
 		s.UDPlistener.Close()
-	case "tcp":
+	} else {
 		s.TCPlistener.Close()
 		// Close all open TCP connections
 		//  - get all conns from the s.conns map and put into slice
@@ -843,8 +841,6 @@ func (s *Statsd) Stop() {
 	for _, conn := range conns {
 		conn.Close()
 	}
-	default:
-		s.UDPlistener.Close()
 	}
 	s.Unlock()
@@ -856,6 +852,11 @@ func (s *Statsd) Stop() {
 	s.Unlock()
 }

+// isUDP returns true if the protocol is UDP, false otherwise.
+func (s *Statsd) isUDP() bool {
+	return strings.HasPrefix(s.Protocol, "udp")
+}
+
 func init() {
 	inputs.Add("statsd", func() telegraf.Input {
 		return &Statsd{

View File

@@ -100,7 +100,7 @@ var sampleConfig = `
 #
 #
 ## Options for the sadf command. The values on the left represent the sadf
-## options and the values on the right their description (wich are used for
+## options and the values on the right their description (which are used for
 ## grouping and prefixing metrics).
 ##
 ## Run 'sar -h' or 'man sar' to find out the supported options for your

View File

@@ -140,7 +140,7 @@ SELECT derivative(last("io_time"),1ms) FROM "diskio" WHERE time > now() - 30m GR
 #### Calculate average queue depth:
 `iops_in_progress` will give you an instantaneous value. This will give you the average between polling intervals.
 ```
-SELECT derivative(last("weighted_io_time",1ms))/1000 from "diskio" WHERE time > now() - 30m GROUP BY "host","name",time(60s)
+SELECT derivative(last("weighted_io_time",1ms)) from "diskio" WHERE time > now() - 30m GROUP BY "host","name",time(60s)
 ```

 ### Example Output:

View File

@@ -11,7 +11,7 @@ import (

 type CPUStats struct {
 	ps        PS
-	lastStats []cpu.TimesStat
+	lastStats map[string]cpu.TimesStat

 	PerCPU   bool `toml:"percpu"`
 	TotalCPU bool `toml:"totalcpu"`
@@ -53,7 +53,7 @@ func (s *CPUStats) Gather(acc telegraf.Accumulator) error {
 	}
 	now := time.Now()

-	for i, cts := range times {
+	for _, cts := range times {
 		tags := map[string]string{
 			"cpu": cts.CPU,
 		}
@@ -86,14 +86,18 @@ func (s *CPUStats) Gather(acc telegraf.Accumulator) error {
 			// If it's the 1st gather, can't get CPU Usage stats yet
 			continue
 		}
-		lastCts := s.lastStats[i]
+
+		lastCts, ok := s.lastStats[cts.CPU]
+		if !ok {
+			continue
+		}
 		lastTotal := totalCpuTime(lastCts)
 		lastActive := activeCpuTime(lastCts)
 		totalDelta := total - lastTotal

 		if totalDelta < 0 {
-			s.lastStats = times
-			return fmt.Errorf("Error: current total CPU time is less than previous total CPU time")
+			err = fmt.Errorf("Error: current total CPU time is less than previous total CPU time")
+			break
 		}

 		if totalDelta == 0 {
@@ -118,9 +122,12 @@ func (s *CPUStats) Gather(acc telegraf.Accumulator) error {
 		acc.AddGauge("cpu", fieldsG, tags, now)
 	}

-	s.lastStats = times
+	s.lastStats = make(map[string]cpu.TimesStat)
+	for _, cts := range times {
+		s.lastStats[cts.CPU] = cts
+	}

-	return nil
+	return err
 }

 func totalCpuTime(t cpu.TimesStat) float64 {
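Keying previous samples by CPU name rather than slice index is what tolerates a changing CPU count; a self-contained sketch (with a simplified stand-in for gopsutil's cpu.TimesStat) of the delta lookup:

```go
package main

import "fmt"

type timesStat struct {
	CPU  string
	User float64
}

// delta prints per-CPU usage deltas; CPUs with no previous sample
// (e.g. newly hot-plugged in an LXC container) are skipped for one interval.
func delta(prev map[string]timesStat, cur []timesStat) {
	for _, cts := range cur {
		last, ok := prev[cts.CPU]
		if !ok {
			continue // new CPU: nothing to diff against yet
		}
		fmt.Printf("%s user delta: %.1f\n", cts.CPU, cts.User-last.User)
	}
}

func main() {
	prev := map[string]timesStat{"cpu0": {CPU: "cpu0", User: 10}}
	cur := []timesStat{{CPU: "cpu0", User: 18}, {CPU: "cpu1", User: 3}}
	delta(prev, cur) // prints only cpu0; cpu1 is silently deferred
}
```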

View File

@@ -54,7 +54,7 @@ func TestCPUStats(t *testing.T) {
 	err := cs.Gather(&acc)
 	require.NoError(t, err)

-	// Computed values are checked with delta > 0 becasue of floating point arithmatic
+	// Computed values are checked with delta > 0 because of floating point arithmetic
 	// imprecision
 	assertContainsTaggedFloat(t, &acc, "cpu", "time_user", 8.8, 0, cputags)
 	assertContainsTaggedFloat(t, &acc, "cpu", "time_system", 8.2, 0, cputags)
@@ -105,7 +105,7 @@ func TestCPUStats(t *testing.T) {
 // specific tags within a certain distance of a given expected value. Asserts a failure
 // if the measurement is of the wrong type, or if no matching measurements are found
 //
-// Paramaters:
+// Parameters:
 //     t *testing.T            : Testing object to use
 //     acc testutil.Accumulator: Accumulator to examine
 //     measurement string      : Name of the measurement to examine
@@ -149,3 +149,107 @@ func assertContainsTaggedFloat(
 			measurement, delta, expectedValue, actualValue)
 		assert.Fail(t, msg)
 }
// TestCPUCountIncrease tests that no errors are encountered if the number of
// CPUs increases, as can be reported inside LXC containers.
func TestCPUCountIncrease(t *testing.T) {
var mps MockPS
var mps2 MockPS
var acc testutil.Accumulator
var err error
cs := NewCPUStats(&mps)
mps.On("CPUTimes").Return(
[]cpu.TimesStat{
cpu.TimesStat{
CPU: "cpu0",
},
}, nil)
err = cs.Gather(&acc)
require.NoError(t, err)
mps2.On("CPUTimes").Return(
[]cpu.TimesStat{
cpu.TimesStat{
CPU: "cpu0",
},
cpu.TimesStat{
CPU: "cpu1",
},
}, nil)
cs.ps = &mps2
err = cs.Gather(&acc)
require.NoError(t, err)
}
// TestCPUTimesDecrease tests that telegraf continues to work after
// CPU times decrease, which seems to occur when a Linux system is suspended.
func TestCPUTimesDecrease(t *testing.T) {
var mps MockPS
defer mps.AssertExpectations(t)
var acc testutil.Accumulator
cts := cpu.TimesStat{
CPU: "cpu0",
User: 18,
Idle: 80,
Iowait: 2,
}
cts2 := cpu.TimesStat{
CPU: "cpu0",
User: 38, // increased by 20
Idle: 40, // decreased by 40
Iowait: 1, // decreased by 1
}
cts3 := cpu.TimesStat{
CPU: "cpu0",
User: 56, // increased by 18
Idle: 120, // increased by 80
Iowait: 3, // increased by 2
}
mps.On("CPUTimes").Return([]cpu.TimesStat{cts}, nil)
cs := NewCPUStats(&mps)
cputags := map[string]string{
"cpu": "cpu0",
}
err := cs.Gather(&acc)
require.NoError(t, err)
	// Computed values are checked with delta > 0 because of floating point arithmetic
	// imprecision
assertContainsTaggedFloat(t, &acc, "cpu", "time_user", 18, 0, cputags)
assertContainsTaggedFloat(t, &acc, "cpu", "time_idle", 80, 0, cputags)
assertContainsTaggedFloat(t, &acc, "cpu", "time_iowait", 2, 0, cputags)
mps2 := MockPS{}
mps2.On("CPUTimes").Return([]cpu.TimesStat{cts2}, nil)
cs.ps = &mps2
// CPU times decreased. An error should be raised
err = cs.Gather(&acc)
require.Error(t, err)
mps3 := MockPS{}
mps3.On("CPUTimes").Return([]cpu.TimesStat{cts3}, nil)
cs.ps = &mps3
err = cs.Gather(&acc)
require.NoError(t, err)
assertContainsTaggedFloat(t, &acc, "cpu", "time_user", 56, 0, cputags)
assertContainsTaggedFloat(t, &acc, "cpu", "time_idle", 120, 0, cputags)
assertContainsTaggedFloat(t, &acc, "cpu", "time_iowait", 3, 0, cputags)
assertContainsTaggedFloat(t, &acc, "cpu", "usage_user", 18, 0.0005, cputags)
assertContainsTaggedFloat(t, &acc, "cpu", "usage_idle", 80, 0.0005, cputags)
assertContainsTaggedFloat(t, &acc, "cpu", "usage_iowait", 2, 0.0005, cputags)
}

View File

@@ -2,6 +2,7 @@ package system

 import (
 	"fmt"
+	"log"
 	"regexp"
 	"strings"
@@ -166,14 +167,13 @@ func (s *DiskIOStats) Gather(acc telegraf.Accumulator) error {
 var varRegex = regexp.MustCompile(`\$(?:\w+|\{\w+\})`)

 func (s *DiskIOStats) diskName(devName string) string {
-	di, err := s.diskInfo(devName)
-	if err != nil {
-		// discard error :-(
-		// We can't return error because it's non-fatal to the Gather().
-		// And we have no logger, so we can't log it.
+	if len(s.NameTemplates) == 0 {
 		return devName
 	}
-	if di == nil {
+
+	di, err := s.diskInfo(devName)
+	if err != nil {
+		log.Printf("W! Error gathering disk info: %s", err)
 		return devName
 	}
@@ -200,14 +200,13 @@ func (s *DiskIOStats) diskName(devName string) string {
 }

 func (s *DiskIOStats) diskTags(devName string) map[string]string {
-	di, err := s.diskInfo(devName)
-	if err != nil {
-		// discard error :-(
-		// We can't return error because it's non-fatal to the Gather().
-		// And we have no logger, so we can't log it.
+	if len(s.DeviceTags) == 0 {
 		return nil
 	}
-	if di == nil {
+
+	di, err := s.diskInfo(devName)
+	if err != nil {
+		log.Printf("W! Error gathering disk info: %s", err)
 		return nil
 	}

View File

@@ -5,25 +5,26 @@ import (
 	"fmt"
 	"os"
 	"strings"
-	"syscall"
+
+	"golang.org/x/sys/unix"
 )

 type diskInfoCache struct {
-	stat   syscall.Stat_t
-	values map[string]string
+	udevDataPath string
+	values       map[string]string
 }

 var udevPath = "/run/udev/data"

 func (s *DiskIOStats) diskInfo(devName string) (map[string]string, error) {
-	fi, err := os.Stat("/dev/" + devName)
+	var err error
+	var stat unix.Stat_t
+	path := "/dev/" + devName
+	err = unix.Stat(path, &stat)
 	if err != nil {
 		return nil, err
 	}
-	stat, ok := fi.Sys().(*syscall.Stat_t)
-	if !ok {
-		return nil, nil
-	}

 	if s.infoCache == nil {
 		s.infoCache = map[string]diskInfoCache{}
@@ -31,25 +32,26 @@ func (s *DiskIOStats) diskInfo(devName string) (map[string]string, error) {
 	ic, ok := s.infoCache[devName]
 	if ok {
 		return ic.values, nil
-	} else {
-		ic = diskInfoCache{
-			stat:   *stat,
-			values: map[string]string{},
-		}
-		s.infoCache[devName] = ic
 	}
-	di := ic.values

 	major := stat.Rdev >> 8 & 0xff
 	minor := stat.Rdev & 0xff
+	udevDataPath := fmt.Sprintf("%s/b%d:%d", udevPath, major, minor)

-	f, err := os.Open(fmt.Sprintf("%s/b%d:%d", udevPath, major, minor))
+	di := map[string]string{}
+	s.infoCache[devName] = diskInfoCache{
+		udevDataPath: udevDataPath,
+		values:       di,
+	}
+
+	f, err := os.Open(udevDataPath)
 	if err != nil {
 		return nil, err
 	}
 	defer f.Close()

 	scnr := bufio.NewScanner(f)
 	for scnr.Scan() {
 		l := scnr.Text()
 		if len(l) < 4 || l[:2] != "E:" {
View File

@@ -0,0 +1,45 @@
# Teamspeak 3 Input Plugin

This plugin uses the Teamspeak 3 ServerQuery interface of the Teamspeak server to collect statistics of one or more
virtual servers. If you are querying an external Teamspeak server, make sure to add the host which is running Telegraf
to query_ip_whitelist.txt in the Teamspeak Server directory. For information about how to configure the server, take a look at
the [Teamspeak 3 ServerQuery Manual](http://media.teamspeak.com/ts3_literature/TeamSpeak%203%20Server%20Query%20Manual.pdf).
### Configuration:
```
# Reads metrics from a Teamspeak 3 Server via ServerQuery
[[inputs.teamspeak]]
## Server address for Teamspeak 3 ServerQuery
# server = "127.0.0.1:10011"
## Username for ServerQuery
username = "serverqueryuser"
## Password for ServerQuery
password = "secret"
## Array of virtual servers
# virtual_servers = [1]
```
### Measurements:
- teamspeak
- uptime
- clients_online
- total_ping
- total_packet_loss
- packets_sent_total
- packets_received_total
- bytes_sent_total
- bytes_received_total
### Tags:
- The following tags are used:
- virtual_server
- name
### Example output:
```
teamspeak,virtual_server=1,name=LeopoldsServer,host=vm01 bytes_received_total=29638202639i,uptime=13567846i,total_ping=26.89,total_packet_loss=0,packets_sent_total=415821252i,packets_received_total=237069900i,bytes_sent_total=55309568252i,clients_online=11i 1507406561000000000
```

View File

@@ -0,0 +1,100 @@
package teamspeak
import (
	"strconv"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/inputs"
	"github.com/multiplay/go-ts3"
)
type Teamspeak struct {
Server string
Username string
Password string
VirtualServers []int `toml:"virtual_servers"`
client *ts3.Client
connected bool
}
func (ts *Teamspeak) Description() string {
return "Reads metrics from a Teamspeak 3 Server via ServerQuery"
}
const sampleConfig = `
## Server address for Teamspeak 3 ServerQuery
# server = "127.0.0.1:10011"
## Username for ServerQuery
username = "serverqueryuser"
## Password for ServerQuery
password = "secret"
## Array of virtual servers
# virtual_servers = [1]
`
func (ts *Teamspeak) SampleConfig() string {
return sampleConfig
}
func (ts *Teamspeak) Gather(acc telegraf.Accumulator) error {
var err error
if !ts.connected {
ts.client, err = ts3.NewClient(ts.Server)
if err != nil {
return err
}
err = ts.client.Login(ts.Username, ts.Password)
if err != nil {
return err
}
ts.connected = true
}
	for _, vserver := range ts.VirtualServers {
		if err := ts.client.Use(vserver); err != nil {
			ts.connected = false
			return err
		}
sm, err := ts.client.Server.Info()
if err != nil {
ts.connected = false
return err
}
sc, err := ts.client.Server.ServerConnectionInfo()
if err != nil {
ts.connected = false
return err
}
tags := map[string]string{
"virtual_server": strconv.Itoa(sm.ID),
"name": sm.Name,
}
fields := map[string]interface{}{
"uptime": sm.Uptime,
"clients_online": sm.ClientsOnline,
"total_ping": sm.TotalPing,
"total_packet_loss": sm.TotalPacketLossTotal,
"packets_sent_total": sc.PacketsSentTotal,
"packets_received_total": sc.PacketsReceivedTotal,
"bytes_sent_total": sc.BytesSentTotal,
"bytes_received_total": sc.BytesReceivedTotal,
}
acc.AddFields("teamspeak", fields, tags)
}
return nil
}
func init() {
inputs.Add("teamspeak", func() telegraf.Input {
return &Teamspeak{
Server: "127.0.0.1:10011",
VirtualServers: []int{1},
}
})
}

View File

@@ -0,0 +1,87 @@
package teamspeak
import (
"bufio"
"net"
"strings"
"testing"
"github.com/influxdata/telegraf/testutil"
)
const welcome = `Welcome to the TeamSpeak 3 ServerQuery interface, type "help" for a list of commands and "help <command>" for information on a specific command.`
const ok = `error id=0 msg=ok`
const errorMsg = `error id=256 msg=command\snot\sfound`
var cmd = map[string]string{
"login": "",
"use": "",
"serverinfo": `virtualserver_unique_identifier=a1vn9PLF8CMIU virtualserver_name=Testserver virtualserver_welcomemessage=Test virtualserver_platform=Linux virtualserver_version=3.0.13.8\s[Build:\s1500452811] virtualserver_maxclients=32 virtualserver_password virtualserver_clientsonline=2 virtualserver_channelsonline=1 virtualserver_created=1507400243 virtualserver_uptime=148 virtualserver_codec_encryption_mode=0 virtualserver_hostmessage virtualserver_hostmessage_mode=0 virtualserver_filebase=files\/virtualserver_1 virtualserver_default_server_group=8 virtualserver_default_channel_group=8 virtualserver_flag_password=0 virtualserver_default_channel_admin_group=5 virtualserver_max_download_total_bandwidth=18446744073709551615 virtualserver_max_upload_total_bandwidth=18446744073709551615 virtualserver_hostbanner_url virtualserver_hostbanner_gfx_url virtualserver_hostbanner_gfx_interval=0 virtualserver_complain_autoban_count=5 virtualserver_complain_autoban_time=1200 virtualserver_complain_remove_time=3600 virtualserver_min_clients_in_channel_before_forced_silence=100 virtualserver_priority_speaker_dimm_modificator=-18.0000 virtualserver_id=1 virtualserver_antiflood_points_tick_reduce=5 virtualserver_antiflood_points_needed_command_block=150 virtualserver_antiflood_points_needed_ip_block=250 virtualserver_client_connections=1 virtualserver_query_client_connections=1 virtualserver_hostbutton_tooltip virtualserver_hostbutton_url virtualserver_hostbutton_gfx_url virtualserver_queryclientsonline=1 virtualserver_download_quota=18446744073709551615 virtualserver_upload_quota=18446744073709551615 virtualserver_month_bytes_downloaded=0 virtualserver_month_bytes_uploaded=0 virtualserver_total_bytes_downloaded=0 virtualserver_total_bytes_uploaded=0 virtualserver_port=9987 virtualserver_autostart=1 virtualserver_machine_id virtualserver_needed_identity_security_level=8 virtualserver_log_client=0 virtualserver_log_query=0 virtualserver_log_channel=0 virtualserver_log_permissions=1 virtualserver_log_server=0 virtualserver_log_filetransfer=0 virtualserver_min_client_version=1445512488 virtualserver_name_phonetic virtualserver_icon_id=0 virtualserver_reserved_slots=0 virtualserver_total_packetloss_speech=0.0000 virtualserver_total_packetloss_keepalive=0.0000 virtualserver_total_packetloss_control=0.0000 virtualserver_total_packetloss_total=0.0000 virtualserver_total_ping=1.0000 virtualserver_ip=0.0.0.0,\s:: virtualserver_weblist_enabled=1 virtualserver_ask_for_privilegekey=0 virtualserver_hostbanner_mode=0 virtualserver_channel_temp_delete_delay_default=0 virtualserver_min_android_version=1407159763 virtualserver_min_ios_version=1407159763 virtualserver_status=online connection_filetransfer_bandwidth_sent=0 connection_filetransfer_bandwidth_received=0 connection_filetransfer_bytes_sent_total=0 connection_filetransfer_bytes_received_total=0 connection_packets_sent_speech=0 connection_bytes_sent_speech=0 connection_packets_received_speech=0 connection_bytes_received_speech=0 connection_packets_sent_keepalive=261 connection_bytes_sent_keepalive=10701 connection_packets_received_keepalive=261 connection_bytes_received_keepalive=10961 connection_packets_sent_control=54 connection_bytes_sent_control=15143 connection_packets_received_control=55 connection_bytes_received_control=4239 connection_packets_sent_total=315 connection_bytes_sent_total=25844 connection_packets_received_total=316 connection_bytes_received_total=15200 connection_bandwidth_sent_last_second_total=81 
connection_bandwidth_sent_last_minute_total=141 connection_bandwidth_received_last_second_total=83 connection_bandwidth_received_last_minute_total=98`,
"serverrequestconnectioninfo": `connection_filetransfer_bandwidth_sent=0 connection_filetransfer_bandwidth_received=0 connection_filetransfer_bytes_sent_total=0 connection_filetransfer_bytes_received_total=0 connection_packets_sent_total=369 connection_bytes_sent_total=28058 connection_packets_received_total=370 connection_bytes_received_total=17468 connection_bandwidth_sent_last_second_total=81 connection_bandwidth_sent_last_minute_total=109 connection_bandwidth_received_last_second_total=83 connection_bandwidth_received_last_minute_total=94 connection_connected_time=174 connection_packetloss_total=0.0000 connection_ping=1.0000`,
}
func TestGather(t *testing.T) {
l, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil {
t.Fatal("Initializing test server failed")
}
defer l.Close()
go handleRequest(l, t)
var acc testutil.Accumulator
testConfig := Teamspeak{
Server: l.Addr().String(),
Username: "serveradmin",
Password: "test",
VirtualServers: []int{1},
}
err = testConfig.Gather(&acc)
if err != nil {
t.Fatalf("Gather returned error. Error: %s\n", err)
}
fields := map[string]interface{}{
"uptime": int(148),
"clients_online": int(2),
"total_ping": float32(1.0000),
"total_packet_loss": float64(0.0000),
"packets_sent_total": uint64(369),
"packets_received_total": uint64(370),
"bytes_sent_total": uint64(28058),
"bytes_received_total": uint64(17468),
}
acc.AssertContainsFields(t, "teamspeak", fields)
}
func handleRequest(l net.Listener, t *testing.T) {
c, err := l.Accept()
if err != nil {
t.Fatal("Error accepting test connection")
}
c.Write([]byte("TS3\n\r" + welcome + "\n\r"))
	// Reuse one reader so bytes buffered between reads aren't discarded.
	reader := bufio.NewReader(c)
	for {
		msg, _, err := reader.ReadLine()
if err != nil {
return
}
r, exists := cmd[strings.Split(string(msg), " ")[0]]
if exists {
switch r {
case "":
c.Write([]byte(ok + "\n\r"))
case "quit":
c.Write([]byte(ok + "\n\r"))
c.Close()
return
default:
c.Write([]byte(r + "\n\r" + ok + "\n\r"))
}
} else {
c.Write([]byte(errorMsg + "\n\r"))
}
}
}

View File

@@ -165,7 +165,7 @@ func (s *Tomcat) Gather(acc telegraf.Accumulator) error {
 	for _, c := range status.TomcatConnectors {
 		name, err := strconv.Unquote(c.Name)
 		if err != nil {
-			return fmt.Errorf("Unable to unquote name '%s': %s", c.Name, err)
+			name = c.Name
 		}

 		tccTags := map[string]string{
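The new fallback matters for Tomcat 6, whose status XML reports connector names without surrounding quotes (see tomcatStatus6 in the test below). A standalone sketch of the behavior:

```go
package main

import (
	"fmt"
	"strconv"
)

// connectorName unquotes a connector name when it is a quoted string,
// and otherwise returns the raw value unchanged.
func connectorName(raw string) string {
	name, err := strconv.Unquote(raw)
	if err != nil {
		return raw // e.g. Tomcat 6 reports names without quotes
	}
	return name
}

func main() {
	fmt.Println(connectorName(`"http-apr-8080"`)) // quoted (Tomcat 8)
	fmt.Println(connectorName("http-8080"))       // unquoted (Tomcat 6)
}
```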

View File

@@ -11,7 +11,7 @@ import (
 	"github.com/stretchr/testify/require"
 )

-var tomcatStatus = `<?xml version="1.0" encoding="UTF-8"?>
+var tomcatStatus8 = `<?xml version="1.0" encoding="UTF-8"?>
 <?xml-stylesheet type="text/xsl" href="/manager/xform.xsl" ?>
 <status>
   <jvm>
@@ -37,10 +37,10 @@ var tomcatStatus = `<?xml version="1.0" encoding="UTF-8"?>
   </connector>
 </status>`

-func TestHTTPTomcat(t *testing.T) {
+func TestHTTPTomcat8(t *testing.T) {
 	ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
 		w.WriteHeader(http.StatusOK)
-		fmt.Fprintln(w, tomcatStatus)
+		fmt.Fprintln(w, tomcatStatus8)
 	}))
 	defer ts.Close()
@@ -91,5 +91,63 @@ func TestHTTPTomcat(t *testing.T) {
 		"name": "http-apr-8080",
 	}
 	acc.AssertContainsTaggedFields(t, "tomcat_connector", connectorFields, connectorTags)
}
var tomcatStatus6 = `<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="xform.xsl" ?>
<status>
<jvm>
<memory free="1942681600" total="2040070144" max="2040070144"/>
</jvm>
<connector name="http-8080">
<threadInfo maxThreads="150" currentThreadCount="2" currentThreadsBusy="2"/>
<requestInfo maxTime="1005" processingTime="2465" requestCount="436" errorCount="16" bytesReceived="0" bytesSent="550196"/>
<workers>
<worker stage="K" requestProcessingTime="526" requestBytesSent="0" requestBytesReceived="0" remoteAddr="127.0.0.1" virtualHost="?" method="?" currentUri="?" currentQueryString="?" protocol="?"/>
<worker stage="S" requestProcessingTime="1" requestBytesSent="0" requestBytesReceived="0" remoteAddr="127.0.0.1" virtualHost="127.0.0.1" method="GET" currentUri="/manager/status/all" currentQueryString="XML=true" protocol="HTTP/1.1"/>
</workers>
</connector>
</status>`
func TestHTTPTomcat6(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
fmt.Fprintln(w, tomcatStatus6)
}))
defer ts.Close()
tc := Tomcat{
URL: ts.URL,
Username: "tomcat",
Password: "s3cret",
}
var acc testutil.Accumulator
err := tc.Gather(&acc)
require.NoError(t, err)
// tomcat_jvm_memory
jvmMemoryFields := map[string]interface{}{
"free": int64(1942681600),
"total": int64(2040070144),
"max": int64(2040070144),
}
acc.AssertContainsFields(t, "tomcat_jvm_memory", jvmMemoryFields)
// tomcat_connector
connectorFields := map[string]interface{}{
"bytes_received": int64(0),
"bytes_sent": int64(550196),
"current_thread_count": int64(2),
"current_threads_busy": int64(2),
"error_count": int(16),
"max_threads": int64(150),
"max_time": int(1005),
"processing_time": int(2465),
"request_count": int(436),
}
connectorTags := map[string]string{
"name": "http-8080",
}
acc.AssertContainsTaggedFields(t, "tomcat_connector", connectorFields, connectorTags)
}

View File

@@ -17,6 +17,10 @@ This plugin gathers stats from [Varnish HTTP Cache](https://varnish-cache.org/)
   ## Setting stats will override the defaults shown below.
   ## stats may also be set to ["all"], which will collect all stats
   stats = ["MAIN.cache_hit", "MAIN.cache_miss", "MAIN.uptime"]
+
+  ## Optional name for the varnish instance (or working directory) to query
+  ## Usually appended after -n in the varnish cli
+  # name = instanceName
 ```
### Measurements & Fields: ### Measurements & Fields:

View File

@@ -17,13 +17,14 @@ import (
 	"github.com/influxdata/telegraf/plugins/inputs"
 )

-type runner func(cmdName string, UseSudo bool) (*bytes.Buffer, error)
+type runner func(cmdName string, UseSudo bool, InstanceName string) (*bytes.Buffer, error)

 // Varnish is used to store configuration values
 type Varnish struct {
-	Stats   []string
-	Binary  string
-	UseSudo bool
+	Stats        []string
+	Binary       string
+	UseSudo      bool
+	InstanceName string

 	filter filter.Filter
 	run    runner
@@ -44,6 +45,10 @@ var sampleConfig = `
   ## Glob matching can be used, ie, stats = ["MAIN.*"]
   ## stats may also be set to ["*"], which will collect all stats
   stats = ["MAIN.cache_hit", "MAIN.cache_miss", "MAIN.uptime"]
+
+  ## Optional name for the varnish instance (or working directory) to query
+  ## Usually appended after -n in the varnish cli
+  # name = instanceName
 `
@@ -56,8 +61,13 @@ func (s *Varnish) SampleConfig() string {
 }

 // Shell out to varnish_stat and return the output
-func varnishRunner(cmdName string, UseSudo bool) (*bytes.Buffer, error) {
+func varnishRunner(cmdName string, UseSudo bool, InstanceName string) (*bytes.Buffer, error) {
 	cmdArgs := []string{"-1"}

+	if InstanceName != "" {
+		cmdArgs = append(cmdArgs, []string{"-n", InstanceName}...)
+	}
+
 	cmd := exec.Command(cmdName, cmdArgs...)

 	if UseSudo {
@@ -99,7 +109,7 @@ func (s *Varnish) Gather(acc telegraf.Accumulator) error {
 		}
 	}

-	out, err := s.run(s.Binary, s.UseSudo)
+	out, err := s.run(s.Binary, s.UseSudo, s.InstanceName)
 	if err != nil {
 		return fmt.Errorf("error gathering metrics: %s", err)
 	}
@@ -155,10 +165,11 @@ func (s *Varnish) Gather(acc telegraf.Accumulator) error {
 func init() {
 	inputs.Add("varnish", func() telegraf.Input {
 		return &Varnish{
-			run:     varnishRunner,
-			Stats:   defaultStats,
-			Binary:  defaultBinary,
-			UseSudo: false,
+			run:          varnishRunner,
+			Stats:        defaultStats,
+			Binary:       defaultBinary,
+			UseSudo:      false,
+			InstanceName: "",
 		}
 	})
 }
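For illustration, a standalone sketch (not the plugin itself; `buildArgs` is a made-up helper) of the argument handling the new InstanceName option adds:

```go
package main

import (
	"fmt"
	"os/exec"
)

// buildArgs mirrors the varnishRunner change above: one-shot output (-1),
// plus "-n <name>" only when an instance name is configured.
func buildArgs(instanceName string) []string {
	args := []string{"-1"}
	if instanceName != "" {
		args = append(args, "-n", instanceName)
	}
	return args
}

func main() {
	cmd := exec.Command("varnishstat", buildArgs("myinstance")...)
	fmt.Println(cmd.Args) // [varnishstat -1 -n myinstance]
}
```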

View File

@@ -5,14 +5,15 @@ package varnish

 import (
 	"bytes"
 	"fmt"
-	"github.com/influxdata/telegraf/testutil"
-	"github.com/stretchr/testify/assert"
 	"strings"
 	"testing"
+
+	"github.com/influxdata/telegraf/testutil"
+	"github.com/stretchr/testify/assert"
 )

-func fakeVarnishStat(output string, useSudo bool) func(string, bool) (*bytes.Buffer, error) {
-	return func(string, bool) (*bytes.Buffer, error) {
+func fakeVarnishStat(output string, useSudo bool, InstanceName string) func(string, bool, string) (*bytes.Buffer, error) {
+	return func(string, bool, string) (*bytes.Buffer, error) {
 		return bytes.NewBuffer([]byte(output)), nil
 	}
 }
@@ -20,7 +21,7 @@ func fakeVarnishStat(output string, useSudo bool) func(string, bool) (*bytes.Buf
 func TestGather(t *testing.T) {
 	acc := &testutil.Accumulator{}
 	v := &Varnish{
-		run:   fakeVarnishStat(smOutput, false),
+		run:   fakeVarnishStat(smOutput, false, ""),
 		Stats: []string{"*"},
 	}
 	v.Gather(acc)
@@ -36,7 +37,7 @@ func TestGather(t *testing.T) {
 func TestParseFullOutput(t *testing.T) {
 	acc := &testutil.Accumulator{}
 	v := &Varnish{
-		run:   fakeVarnishStat(fullOutput, true),
+		run:   fakeVarnishStat(fullOutput, true, ""),
 		Stats: []string{"*"},
 	}
 	err := v.Gather(acc)
@@ -51,7 +52,7 @@ func TestParseFullOutput(t *testing.T) {
 func TestFilterSomeStats(t *testing.T) {
 	acc := &testutil.Accumulator{}
 	v := &Varnish{
-		run:   fakeVarnishStat(fullOutput, false),
+		run:   fakeVarnishStat(fullOutput, false, ""),
 		Stats: []string{"MGT.*", "VBE.*"},
 	}
 	err := v.Gather(acc)
@@ -74,7 +75,7 @@ func TestFieldConfig(t *testing.T) {
 	for fieldCfg, expected := range expect {
 		acc := &testutil.Accumulator{}
 		v := &Varnish{
-			run:   fakeVarnishStat(fullOutput, true),
+			run:   fakeVarnishStat(fullOutput, true, ""),
 			Stats: strings.Split(fieldCfg, ","),
 		}
 		err := v.Gather(acc)

View File

@@ -1,6 +1,17 @@
 # particle webhooks

-You should configure your Rollbar's Webhooks to point at the `webhooks` service. To do this go to `https://console.particle.io/` and click `Settings > Notifications > Webhook`. In the resulting page set `URL` to `http://<my_ip>:1619/particle`, and click on `Enable Webhook Integration`.
+You should configure your Particle.io webhooks to point at the `webhooks` service. To do this go to [https://console.particle.io/](https://console.particle.io/) and click `Integrations > New Integration > Webhook`. In the resulting page set `URL` to `http://<my_ip>:1619/particle`, and under `Advanced Settings` click on `JSON` and add:
+
+```
+{
+    "influx_db": "your_measurement_name"
+}
+```
+
+If required, enter your username and password, etc., and then click `Save`.

 ## Events
@@ -18,9 +29,11 @@ String data = String::format("{ \"tags\" : {
 );
 Particle.publish("event_name", data, PRIVATE);
 ```
+
 Escaping the "" is required in the source file.

 The number of tag and field values is not restricted, so you can send as many values per webhook call as you'd like.

 You will need to enable JSON messages in the Webhooks setup of Particle.io.

 See the [webhook doc](https://docs.particle.io/reference/webhooks/).

View File

@@ -347,7 +347,7 @@ func PdhGetFormattedCounterValueDouble(hCounter PDH_HCOUNTER, lpdwType *uint32,
 //
 //	okPath := "\\Process(*)\\% Processor Time" // notice the wildcard * character
 //
-//	// ommitted all necessary stuff ...
+//	// omitted all necessary stuff ...
 //
 //	var bufSize uint32
 //	var bufCount uint32

View File

@@ -110,7 +110,7 @@ func (m *Win_PerfCounters) AddItem(query string, objectName string, counter stri
 		ret = PdhAddEnglishCounter(handle, query, 0, &counterHandle)
 	}

-	// Call PdhCollectQueryData one time to check existance of the counter
+	// Call PdhCollectQueryData one time to check existence of the counter
 	ret = PdhCollectQueryData(handle)
 	if ret != ERROR_SUCCESS {
 		PdhCloseQuery(handle)

View File

@@ -13,6 +13,8 @@ API endpoint. In the following order the plugin will attempt to authenticate.
 5. [Shared Credentials](https://github.com/aws/aws-sdk-go/wiki/configuring-sdk#shared-credentials-file)
 6. [EC2 Instance Profile](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)

+The IAM user needs only the `cloudwatch:PutMetricData` permission.
+
 ## Config

 For this output plugin to function correctly the following variables

View File

@@ -9,6 +9,7 @@ import (

 	"github.com/aws/aws-sdk-go/aws"
 	"github.com/aws/aws-sdk-go/service/cloudwatch"
+	"github.com/aws/aws-sdk-go/service/sts"

 	"github.com/influxdata/telegraf"
 	internalaws "github.com/influxdata/telegraf/internal/config/aws"
@@ -71,21 +72,20 @@ func (c *CloudWatch) Connect() error {
 	}
 	configProvider := credentialConfig.Credentials()

-	svc := cloudwatch.New(configProvider)
+	stsService := sts.New(configProvider)

-	params := &cloudwatch.ListMetricsInput{
-		Namespace: aws.String(c.Namespace),
-	}
+	params := &sts.GetSessionTokenInput{}

-	_, err := svc.ListMetrics(params) // Try a read-only call to test connection.
+	_, err := stsService.GetSessionToken(params)

 	if err != nil {
-		log.Printf("E! cloudwatch: Error in ListMetrics API call : %+v \n", err.Error())
+		log.Printf("E! cloudwatch: Cannot use credentials to connect to AWS : %+v \n", err.Error())
+		return err
 	}

-	c.svc = svc
+	c.svc = cloudwatch.New(configProvider)

-	return err
+	return nil
 }

 func (c *CloudWatch) Close() error {
View File

@@ -174,6 +174,13 @@ This plugin will format the events in the following way:
   # %H - hour (00..23)
   index_name = "telegraf-%Y.%m.%d" # required.

+  ## Optional SSL Config
+  # ssl_ca = "/etc/telegraf/ca.pem"
+  # ssl_cert = "/etc/telegraf/cert.pem"
+  # ssl_key = "/etc/telegraf/key.pem"
+  ## Use SSL but skip chain & host verification
+  # insecure_skip_verify = false
+
   ## Template Config
   ## Set to true if you want telegraf to manage its index template.
   ## If enabled it will create a recommended index template for telegraf indexes

View File

@@ -3,15 +3,15 @@ package elasticsearch

 import (
 	"context"
 	"fmt"
+	"log"
+	"net/http"
+	"strconv"
+	"strings"
+	"time"

 	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/internal"
 	"github.com/influxdata/telegraf/plugins/outputs"
 	"gopkg.in/olivere/elastic.v5"
-	"log"
-	"strconv"
-	"strings"
-	"time"
 )
@@ -25,6 +25,10 @@ type Elasticsearch struct {
 	ManageTemplate      bool
 	TemplateName        string
 	OverwriteTemplate   bool
+	SSLCA               string `toml:"ssl_ca"`   // Path to CA file
+	SSLCert             string `toml:"ssl_cert"` // Path to host cert file
+	SSLKey              string `toml:"ssl_key"`  // Path to cert key file
+	InsecureSkipVerify  bool   // Use SSL but skip chain & host verification
 	Client              *elastic.Client
 }
@@ -56,6 +60,13 @@ var sampleConfig = `
   # %H - hour (00..23)
   index_name = "telegraf-%Y.%m.%d" # required.

+  ## Optional SSL Config
+  # ssl_ca = "/etc/telegraf/ca.pem"
+  # ssl_cert = "/etc/telegraf/cert.pem"
+  # ssl_key = "/etc/telegraf/key.pem"
+  ## Use SSL but skip chain & host verification
+  # insecure_skip_verify = false
+
   ## Template Config
   ## Set to true if you want telegraf to manage its index template.
   ## If enabled it will create a recommended index template for telegraf indexes
@@ -76,7 +87,21 @@ func (a *Elasticsearch) Connect() error {

 	var clientOptions []elastic.ClientOptionFunc

+	tlsCfg, err := internal.GetTLSConfig(a.SSLCert, a.SSLKey, a.SSLCA, a.InsecureSkipVerify)
+	if err != nil {
+		return err
+	}
+	tr := &http.Transport{
+		TLSClientConfig: tlsCfg,
+	}
+
+	httpclient := &http.Client{
+		Transport: tr,
+		Timeout:   a.Timeout.Duration,
+	}
+
 	clientOptions = append(clientOptions,
+		elastic.SetHttpClient(httpclient),
 		elastic.SetSniff(a.EnableSniffer),
 		elastic.SetURL(a.URLs...),
 		elastic.SetHealthcheckInterval(a.HealthCheckInterval.Duration),
Some files were not shown because too many files have changed in this diff.