Compare commits

1 Commit

Author: Daniel Nelson
SHA1: a4f5c6fbc3
Message: Update sample config
Date: 2017-08-16 16:48:10 -07:00
439 changed files with 12715 additions and 61325 deletions


@@ -1,50 +0,0 @@
---
defaults: &defaults
  docker:
    - image: 'circleci/golang:1.9.4'
  working_directory: '/go/src/github.com/influxdata/telegraf'
version: 2
jobs:
  build:
    <<: *defaults
    steps:
      - checkout
      - run: 'make deps'
      - run: 'make test-ci'
  release:
    <<: *defaults
    steps:
      - checkout
      - run: './scripts/release.sh'
      - store_artifacts:
          path: './artifacts'
          destination: '.'
  nightly:
    <<: *defaults
    steps:
      - checkout
      - run: './scripts/release.sh'
      - store_artifacts:
          path: './artifacts'
          destination: '.'
workflows:
  version: 2
  build_and_release:
    jobs:
      - 'build'
      - 'release':
          requires:
            - 'build'
  nightly:
    jobs:
      - 'build'
      - 'nightly':
          requires:
            - 'build'
    triggers:
      - schedule:
          cron: "0 18 * * *"
          filters:
            branches:
              only:
                - master

.gitignore

@@ -1,3 +1,7 @@
/build
build
tivan
.vagrant
/telegraf
/telegraf.gz
.idea
*~
*#


@@ -1,314 +1,4 @@
## v1.6 [unreleased]
### Release Notes
- The `mysql` input plugin has been updated to fix a number of type conversion
issues. This may cause a `field type error` when inserting into InfluxDB due
to the change of types.
To address this we have introduced a new `metric_version` option to control
enabling the new format. For in-depth recommendations on upgrading please
reference the [mysql plugin documentation](./plugins/inputs/mysql/README.md#metric-version).
It is encouraged to migrate to the new model when possible as the old version
is deprecated and will be removed in a future version.
- The `postgresql` plugin now defaults to using a persistent connection to the database.
In environments where TCP connections are terminated the `max_lifetime`
setting should be set less than the collection `interval` to prevent errors.
- The `sqlserver` input plugin has a new query and data model that can be enabled
by setting `query_version = 2`. It is encouraged to migrate to the new
model when possible as the old version is deprecated and will be removed in
a future version.
- An option has been added to the `openldap` input plugin that reverses metric
names to improve grouping. This change is enabled when `reverse_metric_names = true`
is set. It is encouraged to enable this option when possible as the old
ordering is deprecated.
- The new `http` input configured with `data_format = "json"` can perform the
same task as the now-deprecated `httpjson` input.
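
Taken together, the notes above introduce several new configuration options. As a minimal, hedged sketch (addresses and values are illustrative assumptions; each plugin's README remains the authoritative reference), they might appear in `telegraf.conf` as:

```
[[inputs.mysql]]
  servers = ["user:passwd@tcp(127.0.0.1:3306)/"]
  ## Opt in to the new data model described above.
  metric_version = 2

[[inputs.postgresql]]
  address = "host=localhost user=postgres sslmode=disable"
  ## Illustrative value: keep this below the agent's collection interval
  ## in environments that terminate idle TCP connections.
  max_lifetime = "5s"

[[inputs.sqlserver]]
  ## Enable the new query and data model.
  query_version = 2

[[inputs.openldap]]
  ## Reverse metric names to improve grouping.
  reverse_metric_names = true

[[inputs.http]]
  ## Illustrative URL; with data_format = "json" this replaces the
  ## now-deprecated httpjson input.
  urls = ["http://localhost:8080/metrics"]
  data_format = "json"
```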
### New Inputs
- [http](./plugins/inputs/http/README.md) - Thanks to @grange74
- [ipset](./plugins/inputs/ipset/README.md) - Thanks to @sajoupa
- [nats](./plugins/inputs/nats/README.md) - Thanks to @mjs & @levex
### New Processors
- [override](./plugins/processors/override/README.md) - Thanks to @KarstenSchnitter
### New Parsers
- [dropwizard](./docs/DATA_FORMATS_INPUT.md#dropwizard) - Thanks to @atzoum
### Features
- [#3551](https://github.com/influxdata/telegraf/pull/3551): Add health status mapping from string to int in elasticsearch input.
- [#3580](https://github.com/influxdata/telegraf/pull/3580): Add control over which stats to gather in basicstats aggregator.
- [#3596](https://github.com/influxdata/telegraf/pull/3596): Add messages_delivered_get to rabbitmq input.
- [#3632](https://github.com/influxdata/telegraf/pull/3632): Add wired field to mem input.
- [#3619](https://github.com/influxdata/telegraf/pull/3619): Add support for gathering exchange metrics to the rabbitmq input.
- [#3565](https://github.com/influxdata/telegraf/pull/3565): Add support for additional metrics on Linux in zfs input.
- [#3524](https://github.com/influxdata/telegraf/pull/3524): Add available_entropy field to kernel input plugin.
- [#3643](https://github.com/influxdata/telegraf/pull/3643): Add user privilege level setting to IPMI sensors.
- [#2701](https://github.com/influxdata/telegraf/pull/2701): Use persistent connection to postgresql database.
- [#2846](https://github.com/influxdata/telegraf/pull/2846): Add support for dropwizard input format.
- [#3666](https://github.com/influxdata/telegraf/pull/3666): Add container health metrics to docker input.
- [#3687](https://github.com/influxdata/telegraf/pull/3687): Add support for using globs in devices list of diskio input plugin.
- [#2754](https://github.com/influxdata/telegraf/pull/2754): Allow running as console application on Windows.
- [#3703](https://github.com/influxdata/telegraf/pull/3703): Add listener counts and node running status to rabbitmq input.
- [#3674](https://github.com/influxdata/telegraf/pull/3674): Add NATS Monitoring Input Plugin.
- [#3702](https://github.com/influxdata/telegraf/pull/3702): Add ability to select which queues will be gathered in rabbitmq input.
- [#3726](https://github.com/influxdata/telegraf/pull/3726): Add support for setting bsd source address to the ping input.
- [#3346](https://github.com/influxdata/telegraf/pull/3346): Add Ipset input plugin.
- [#3719](https://github.com/influxdata/telegraf/pull/3719): Add TLS and HTTP basic auth to prometheus_client output.
- [#3618](https://github.com/influxdata/telegraf/pull/3618): Add new sqlserver output data model.
- [#3559](https://github.com/influxdata/telegraf/pull/3559): Add native Go method for finding pids to procstat.
- [#3722](https://github.com/influxdata/telegraf/pull/3722): Add additional metrics and reverse metric names option to openldap.
- [#3769](https://github.com/influxdata/telegraf/pull/3769): Add TLS support to the mesos input plugin.
- [#3546](https://github.com/influxdata/telegraf/pull/3546): Add http input plugin.
- [#3781](https://github.com/influxdata/telegraf/pull/3781): Add keep alive support to the TCP mode of statsd.
- [#3783](https://github.com/influxdata/telegraf/pull/3783): Support deadline in ping plugin.
- [#3765](https://github.com/influxdata/telegraf/pull/3765): Add option to disable labels in prometheus output for string fields.
- [#3808](https://github.com/influxdata/telegraf/pull/3808): Add shard server stats to the mongodb input plugin.
- [#3713](https://github.com/influxdata/telegraf/pull/3713): Add server option to unbound plugin.
- [#3804](https://github.com/influxdata/telegraf/pull/3804): Convert boolean metric values to float in datadog output.
- [#3799](https://github.com/influxdata/telegraf/pull/3799): Add Solr 3 compatibility.
- [#3797](https://github.com/influxdata/telegraf/pull/3797): Add sum stat to basicstats aggregator.
- [#3626](https://github.com/influxdata/telegraf/pull/3626): Add ability to override proxy from environment in http response.
- [#3853](https://github.com/influxdata/telegraf/pull/3853): Add host to ping timeout log message.
- [#3773](https://github.com/influxdata/telegraf/pull/3773): Add override processor.
- [#3814](https://github.com/influxdata/telegraf/pull/3814): Add status_code and result tags and result_type field to http_response input.
- [#3880](https://github.com/influxdata/telegraf/pull/3880): Add config flag to skip collection of network protocol metrics.
- [#3927](https://github.com/influxdata/telegraf/pull/3927): Add TLS support to kapacitor input.
- [#3496](https://github.com/influxdata/telegraf/pull/3496): Add HTTP basic auth support to the http_listener input.
- [#3452](https://github.com/influxdata/telegraf/issues/3452): Tags in output InfluxDB Line Protocol are now sorted.
- [#3631](https://github.com/influxdata/telegraf/issues/3631): InfluxDB Line Protocol parser now accepts DOS line endings.
- [#2496](https://github.com/influxdata/telegraf/issues/2496): An option has been added to skip database creation in the InfluxDB output.
- [#3366](https://github.com/influxdata/telegraf/issues/3366): Add support for connecting to InfluxDB over a unix domain socket.
- [#3946](https://github.com/influxdata/telegraf/pull/3946): Add optional unsigned integer support to the influx data format.
- [#3811](https://github.com/influxdata/telegraf/pull/3811): Add TLS support to zookeeper input.
- [#2737](https://github.com/influxdata/telegraf/issues/2737): Add filters for container state to docker input.
### Bugfixes
- [#1896](https://github.com/influxdata/telegraf/issues/1896): Fix various mysql data type conversions.
- [#3810](https://github.com/influxdata/telegraf/issues/3810): Fix metric buffer limit in internal plugin after reload.
- [#3801](https://github.com/influxdata/telegraf/issues/3801): Fix panic in http_response on invalid regex.
- [#3873](https://github.com/influxdata/telegraf/issues/3873): Fix socket_listener setting ReadBufferSize on tcp sockets.
- [#1575](https://github.com/influxdata/telegraf/issues/1575): Add tag for target url to phpfpm input.
- [#3868](https://github.com/influxdata/telegraf/issues/3868): Fix cannot unmarshal object error in DC/OS input.
- [#3648](https://github.com/influxdata/telegraf/issues/3648): Fix InfluxDB output not able to reconnect when server address changes.
- [#3957](https://github.com/influxdata/telegraf/issues/3957): Fix parsing of dos line endings in the smart input.
- [#3754](https://github.com/influxdata/telegraf/issues/3754): Fix precision truncation when no timestamp included.
## v1.5.3 [2018-03-14]
### Bugfixes
- [#3729](https://github.com/influxdata/telegraf/issues/3729): Set path to / if HOST_MOUNT_PREFIX matches full path.
- [#3739](https://github.com/influxdata/telegraf/issues/3739): Remove userinfo from url tag in prometheus input.
- [#3778](https://github.com/influxdata/telegraf/issues/3778): Fix ping plugin not reporting zero durations.
- [#3697](https://github.com/influxdata/telegraf/issues/3697): Disable keepalive in mqtt output to prevent deadlock.
- [#3786](https://github.com/influxdata/telegraf/pull/3786): Fix collation difference in sqlserver input.
- [#3871](https://github.com/influxdata/telegraf/pull/3871): Fix uptime metric in passenger input plugin.
- [#3851](https://github.com/influxdata/telegraf/issues/3851): Add output of stderr in case of error to exec log message.
## v1.5.2 [2018-01-30]
### Bugfixes
- [#3684](https://github.com/influxdata/telegraf/pull/3684): Ignore empty lines in Graphite plaintext.
- [#3604](https://github.com/influxdata/telegraf/issues/3604): Fix index out of bounds error in solr input plugin.
- [#3680](https://github.com/influxdata/telegraf/pull/3680): Reconnect before sending graphite metrics if disconnected.
- [#3693](https://github.com/influxdata/telegraf/pull/3693): Align aggregator period with internal ticker to avoid skipping metrics.
- [#3629](https://github.com/influxdata/telegraf/issues/3629): Fix a potential deadlock when using aggregators.
- [#3697](https://github.com/influxdata/telegraf/issues/3697): Limit wait time for writes in mqtt output.
- [#3698](https://github.com/influxdata/telegraf/issues/3698): Revert change in graphite output where dot in field key was replaced by underscore.
- [#3710](https://github.com/influxdata/telegraf/issues/3710): Add timeout to wavefront output write.
- [#3725](https://github.com/influxdata/telegraf/issues/3725): Exclude master_replid fields from redis input.
## v1.5.1 [2018-01-10]
### Bugfixes
- [#3624](https://github.com/influxdata/telegraf/pull/3624): Fix name error in jolokia2_agent sample config.
- [#3625](https://github.com/influxdata/telegraf/pull/3625): Fix DC/OS login expiration time.
- [#3593](https://github.com/influxdata/telegraf/pull/3593): Set Content-Type charset in influxdb output and allow it be overridden.
- [#3594](https://github.com/influxdata/telegraf/pull/3594): Document permissions setup for postfix input.
- [#3633](https://github.com/influxdata/telegraf/pull/3633): Fix deliver_get field in rabbitmq input.
- [#3607](https://github.com/influxdata/telegraf/issues/3607): Escape environment variables during config toml parsing.
## v1.5 [2017-12-14]
### New Plugins
- [basicstats](./plugins/aggregators/basicstats/README.md) - Thanks to @toni-moreno
- [bond](./plugins/inputs/bond/README.md) - Thanks to @ildarsv
- [cratedb](./plugins/outputs/cratedb/README.md) - Thanks to @felixge
- [dcos](./plugins/inputs/dcos/README.md) - Thanks to @influxdata
- [jolokia2](./plugins/inputs/jolokia2/README.md) - Thanks to @dylanmei
- [nginx_plus](./plugins/inputs/nginx_plus/README.md) - Thanks to @mplonka & @poblahblahblah
- [opensmtpd](./plugins/inputs/opensmtpd/README.md) - Thanks to @aromeyer
- [particle](./plugins/inputs/webhooks/particle/README.md) - Thanks to @davidgs
- [pf](./plugins/inputs/pf/README.md) - Thanks to @nferch
- [postfix](./plugins/inputs/postfix/README.md) - Thanks to @phemmer
- [smart](./plugins/inputs/smart/README.md) - Thanks to @rickard-von-essen
- [solr](./plugins/inputs/solr/README.md) - Thanks to @ljagiello
- [teamspeak](./plugins/inputs/teamspeak/README.md) - Thanks to @p4ddy1
- [unbound](./plugins/inputs/unbound/README.md) - Thanks to @aromeyer
- [wavefront](./plugins/outputs/wavefront/README.md) - Thanks to @puckpuck
### Release Notes
- In the `kinesis` output, use of the `partition_key` and
`use_random_partitionkey` options has been deprecated in favor of the
`partition` subtable. This allows for more flexible methods to set the
partition key such as by metric name or by tag.
- With the release of the new improved `jolokia2` input, the legacy `jolokia`
plugin is deprecated and will be removed in a future release. Users of this
plugin are encouraged to update to the new `jolokia2` plugin.
- In the `postgresql` and `postgresql_extensible` plugins, the type of the oid
data type has changed from string to integer. It is recommended to drop
affected fields until a new shard is started. For details on how to
workaround this issue please see [#3622](https://github.com/influxdata/telegraf/issues/3622).
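
The `partition` subtable and the `jolokia2` migration mentioned above might look as follows; this is a hedged sketch (method, key, and URL are illustrative assumptions, not the only supported values):

```
[[outputs.kinesis]]
  region = "us-east-1"
  streamname = "telegraf"
  ## Replaces the deprecated partition_key / use_random_partitionkey options.
  [outputs.kinesis.partition]
    ## Illustrative: partition by the value of the host tag.
    method = "tag"
    key = "host"

[[inputs.jolokia2_agent]]
  ## Successor to the deprecated jolokia input.
  urls = ["http://localhost:8080/jolokia"]
```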
### Features
- [#3170](https://github.com/influxdata/telegraf/pull/3170): Add support for sharding based on metric name.
- [#3196](https://github.com/influxdata/telegraf/pull/3196): Add Kafka output plugin topic_suffix option.
- [#3027](https://github.com/influxdata/telegraf/pull/3027): Include mount mode option in disk metrics.
- [#3191](https://github.com/influxdata/telegraf/pull/3191): TLS and MTLS enhancements to HTTPListener input plugin.
- [#3213](https://github.com/influxdata/telegraf/pull/3213): Add polling method to logparser and tail inputs.
- [#3211](https://github.com/influxdata/telegraf/pull/3211): Add timeout option for kubernetes input.
- [#3234](https://github.com/influxdata/telegraf/pull/3234): Add support for timing sums in statsd input.
- [#2617](https://github.com/influxdata/telegraf/issues/2617): Add resource limit monitoring to procstat.
- [#3236](https://github.com/influxdata/telegraf/pull/3236): Add support for k8s service DNS discovery to prometheus input.
- [#3245](https://github.com/influxdata/telegraf/pull/3245): Add configurable metrics endpoint to prometheus output.
- [#3214](https://github.com/influxdata/telegraf/pull/3214): Add new nginx_plus input plugin.
- [#3215](https://github.com/influxdata/telegraf/pull/3215): Add support for NSQLookupd to nsq_consumer.
- [#2278](https://github.com/influxdata/telegraf/pull/2278): Add redesigned Jolokia input plugin.
- [#3106](https://github.com/influxdata/telegraf/pull/3106): Add configurable separator for metrics and fields in opentsdb output.
- [#1692](https://github.com/influxdata/telegraf/pull/1692): Add support for the rollbar occurrence webhook event.
- [#3160](https://github.com/influxdata/telegraf/pull/3160): Add Wavefront output plugin.
- [#3281](https://github.com/influxdata/telegraf/pull/3281): Add extra wired tiger cache metrics to mongodb input.
- [#3141](https://github.com/influxdata/telegraf/pull/3141): Collect Docker Swarm service metrics in docker input plugin.
- [#2449](https://github.com/influxdata/telegraf/pull/2449): Add smart input plugin for collecting S.M.A.R.T. data.
- [#3269](https://github.com/influxdata/telegraf/pull/3269): Add cluster health level configuration to elasticsearch input.
- [#3304](https://github.com/influxdata/telegraf/pull/3304): Add ability to limit node stats in elasticsearch input.
- [#2167](https://github.com/influxdata/telegraf/pull/2167): Add new basicstats aggregator.
- [#3344](https://github.com/influxdata/telegraf/pull/3344): Add UDP IPv6 support to statsd input.
- [#3350](https://github.com/influxdata/telegraf/pull/3350): Use labels in prometheus output for string fields.
- [#3358](https://github.com/influxdata/telegraf/pull/3358): Add support for decimal timestamps to ts-epoch modifier.
- [#3337](https://github.com/influxdata/telegraf/pull/3337): Add histogram and summary types and use in prometheus plugins.
- [#3365](https://github.com/influxdata/telegraf/pull/3365): Gather concurrently from snmp agents.
- [#3333](https://github.com/influxdata/telegraf/issues/3333): Perform DNS lookup before ping and report result.
- [#3398](https://github.com/influxdata/telegraf/issues/3398): Add instance name option to varnish plugin.
- [#3406](https://github.com/influxdata/telegraf/pull/3406): Add support for SSL settings to ElasticSearch output plugin.
- [#3315](https://github.com/influxdata/telegraf/pull/3315): Add Teamspeak 3 input plugin.
- [#3305](https://github.com/influxdata/telegraf/pull/3305): Add modification_time field to filestat input plugin.
- [#2019](https://github.com/influxdata/telegraf/pull/2019): Add Solr input plugin.
- [#3210](https://github.com/influxdata/telegraf/pull/3210): Add CrateDB output plugin.
- [#3459](https://github.com/influxdata/telegraf/pull/3459): Add systemd unit pid and cgroup matching to procstat.
- [#3477](https://github.com/influxdata/telegraf/pull/3477): Add Particle Webhook Plugin.
- [#3471](https://github.com/influxdata/telegraf/pull/3471): Use MAX() instead of SUM() for latency measurements in sqlserver.
- [#3490](https://github.com/influxdata/telegraf/pull/3490): Add index by week number to Elasticsearch output.
- [#3434](https://github.com/influxdata/telegraf/pull/3434): Add unbound input plugin.
- [#3449](https://github.com/influxdata/telegraf/pull/3449): Add opensmtpd input plugin.
- [#3470](https://github.com/influxdata/telegraf/pull/3470): Add support for tags in the index name in elasticsearch output.
- [#2553](https://github.com/influxdata/telegraf/pull/2553): Add postfix input plugin.
- [#3424](https://github.com/influxdata/telegraf/pull/3424): Add bond input plugin.
- [#3518](https://github.com/influxdata/telegraf/pull/3518): Add slab to mem plugin.
- [#3519](https://github.com/influxdata/telegraf/pull/3519): Add input plugin for DC/OS.
- [#3140](https://github.com/influxdata/telegraf/pull/3140): Add support for glob patterns in net input plugin.
- [#3405](https://github.com/influxdata/telegraf/pull/3405): Add input plugin for OpenBSD/FreeBSD pf.
- [#3528](https://github.com/influxdata/telegraf/pull/3528): Add option to amqp output to publish persistent messages.
- [#3530](https://github.com/influxdata/telegraf/pull/3530): Support I (idle) process state on procfs+Linux.
### Bugfixes
- [#3136](https://github.com/influxdata/telegraf/issues/3136): Fix webhooks input address in use during reload.
- [#3258](https://github.com/influxdata/telegraf/issues/3258): Unlock Statsd when stopping to prevent deadlock.
- [#3319](https://github.com/influxdata/telegraf/issues/3319): Fix cloudwatch output requires unneeded permissions.
- [#3351](https://github.com/influxdata/telegraf/issues/3351): Fix prometheus passthrough for existing value types.
- [#3430](https://github.com/influxdata/telegraf/issues/3430): Always ignore autofs filesystems in disk input.
- [#3326](https://github.com/influxdata/telegraf/issues/3326): Fail metrics parsing on unescaped quotes.
- [#3473](https://github.com/influxdata/telegraf/pull/3473): Whitelist allowed char classes for graphite output.
- [#3488](https://github.com/influxdata/telegraf/pull/3488): Use hexadecimal ids and lowercase names in zipkin input.
- [#3263](https://github.com/influxdata/telegraf/issues/3263): Fix snmp-tools output parsing with Windows EOLs.
- [#3447](https://github.com/influxdata/telegraf/issues/3447): Add shadow-utils dependency to rpm package.
- [#3448](https://github.com/influxdata/telegraf/issues/3448): Use deb-systemd-invoke to restart service.
- [#3553](https://github.com/influxdata/telegraf/issues/3553): Fix kafka_consumer outside range of offsets error.
- [#3568](https://github.com/influxdata/telegraf/issues/3568): Fix separation of multiple prometheus_client outputs.
- [#3577](https://github.com/influxdata/telegraf/issues/3577): Don't add system input uptime_format as a counter.
## v1.4.5 [2017-12-01]
### Bugfixes
- [#3500](https://github.com/influxdata/telegraf/issues/3500): Fix global variable collection when using interval_slow option in mysql input.
- [#3486](https://github.com/influxdata/telegraf/issues/3486): Fix error getting net connections info in netstat input.
- [#3529](https://github.com/influxdata/telegraf/issues/3529): Fix HOST_MOUNT_PREFIX in docker with disk input.
## v1.4.4 [2017-11-08]
### Bugfixes
- [#3401](https://github.com/influxdata/telegraf/pull/3401): Use schema specified in mqtt_consumer input.
- [#3419](https://github.com/influxdata/telegraf/issues/3419): Redact datadog API key in log output.
- [#3311](https://github.com/influxdata/telegraf/issues/3311): Fix error getting pids in netstat input.
- [#3339](https://github.com/influxdata/telegraf/issues/3339): Support HOST_VAR envvar to locate /var in system input.
- [#3383](https://github.com/influxdata/telegraf/issues/3383): Use current time if docker container read time is zero value.
## v1.4.3 [2017-10-25]
### Bugfixes
- [#3327](https://github.com/influxdata/telegraf/issues/3327): Fix container name filters in docker input.
- [#3321](https://github.com/influxdata/telegraf/issues/3321): Fix snmpwalk address format in leofs input.
- [#3329](https://github.com/influxdata/telegraf/issues/3329): Fix case sensitivity issue in sqlserver query.
- [#3342](https://github.com/influxdata/telegraf/pull/3342): Fix CPU input plugin stuck after suspend on Linux.
- [#3013](https://github.com/influxdata/telegraf/issues/3013): Fix mongodb input panic when restarting mongodb.
- [#3224](https://github.com/influxdata/telegraf/pull/3224): Preserve url path prefix in influx output.
- [#3354](https://github.com/influxdata/telegraf/pull/3354): Fix TELEGRAF_OPTS expansion in systemd service unit.
- [#3357](https://github.com/influxdata/telegraf/issues/3357): Remove warning when JSON contains null value.
- [#3375](https://github.com/influxdata/telegraf/issues/3375): Fix ACL token usage in consul input plugin.
- [#3369](https://github.com/influxdata/telegraf/issues/3369): Fix unquoting error with Tomcat 6.
- [#3373](https://github.com/influxdata/telegraf/issues/3373): Fix syscall panic in diskio on some Linux systems.
## v1.4.2 [2017-10-10]
### Bugfixes
- [#3259](https://github.com/influxdata/telegraf/issues/3259): Fix error if int larger than 32-bit in /proc/vmstat.
- [#3265](https://github.com/influxdata/telegraf/issues/3265): Fix parsing of JSON with a UTF8 BOM in httpjson.
- [#2887](https://github.com/influxdata/telegraf/issues/2887): Allow JSON data format to contain zero metrics.
- [#3284](https://github.com/influxdata/telegraf/issues/3284): Fix format of connection_timeout in mqtt_consumer.
- [#3081](https://github.com/influxdata/telegraf/issues/3081): Fix case sensitivity error in sqlserver input.
- [#3297](https://github.com/influxdata/telegraf/issues/3297): Add support for proxy environment variables to http_response.
- [#1588](https://github.com/influxdata/telegraf/issues/1588): Add support for standard proxy env vars in outputs.
- [#3282](https://github.com/influxdata/telegraf/issues/3282): Fix panic in cpu input if number of cpus changes.
- [#2854](https://github.com/influxdata/telegraf/issues/2854): Use chunked transfer encoding in InfluxDB output.
## v1.4.1 [2017-09-26]
### Bugfixes
- [#3167](https://github.com/influxdata/telegraf/issues/3167): Fix MQTT input exits if Broker is not available on startup.
- [#3217](https://github.com/influxdata/telegraf/issues/3217): Fix optional field value conversions in fluentd input.
- [#3227](https://github.com/influxdata/telegraf/issues/3227): Whitelist allowed char classes for opentsdb output.
- [#3232](https://github.com/influxdata/telegraf/issues/3232): Fix counter and gauge metric types.
- [#3235](https://github.com/influxdata/telegraf/issues/3235): Fix skipped line with empty target in iptables.
- [#3175](https://github.com/influxdata/telegraf/issues/3175): Fix duplicate keys in perf counters sqlserver query.
- [#3230](https://github.com/influxdata/telegraf/issues/3230): Fix panic in statsd p100 calculation.
- [#3242](https://github.com/influxdata/telegraf/issues/3242): Fix arm64 packages contain 32-bit executable.
## v1.4 [2017-09-05]
## v1.4 [unreleased]
### Release Notes
@@ -372,7 +62,6 @@
- [#2978](https://github.com/influxdata/telegraf/pull/2978): Add gzip content-encoding support to influxdb output.
- [#3127](https://github.com/influxdata/telegraf/pull/3127): Allow using system plugin in Windows.
- [#3112](https://github.com/influxdata/telegraf/pull/3112): Add tomcat input plugin.
- [#3182](https://github.com/influxdata/telegraf/pull/3182): HTTP headers can be added to InfluxDB output.
### Bugfixes
@@ -404,16 +93,6 @@
- [#2899](https://github.com/influxdata/telegraf/issues/2899): Skip compilation of logparser and tail on solaris.
- [#2951](https://github.com/influxdata/telegraf/issues/2951): Discard logging from tail library.
- [#3126](https://github.com/influxdata/telegraf/pull/3126): Remove log message on ping timeout.
- [#3144](https://github.com/influxdata/telegraf/issues/3144): Don't retry points beyond retention policy.
- [#3015](https://github.com/influxdata/telegraf/issues/3015): Don't start Telegraf on install in Amazon Linux.
- [#3053](https://github.com/influxdata/telegraf/issues/3053): Enable hddtemp input on all platforms.
- [#3142](https://github.com/influxdata/telegraf/issues/3142): Escape backslash within string fields.
- [#3162](https://github.com/influxdata/telegraf/issues/3162): Fix parsing of SHM remotes in ntpq input.
- [#3149](https://github.com/influxdata/telegraf/issues/3149): Don't fail parsing zpool stats if pool health is UNAVAIL on FreeBSD.
- [#2672](https://github.com/influxdata/telegraf/issues/2672): Fix NSQ input plugin when used with version 1.0.0-compat.
- [#2523](https://github.com/influxdata/telegraf/issues/2523): Added CloudWatch metric constraint validation.
- [#3179](https://github.com/influxdata/telegraf/issues/3179): Skip non-numerical values in graphite format.
- [#3187](https://github.com/influxdata/telegraf/issues/3187): Fix panic when handling string fields with escapes.
## v1.3.5 [2017-07-26]


@@ -12,7 +12,7 @@ but any information you can provide on how the data will look is appreciated.
See the [OpenTSDB output](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/opentsdb)
for a good example.
1. **Optional:** Help users of your plugin by including example queries for populating dashboards. Include these sample queries in the `README.md` for the plugin.
1. **Optional:** Write a [tickscript](https://docs.influxdata.com/kapacitor/v1.0/tick/syntax/) for your plugin and add it to [Kapacitor](https://github.com/influxdata/kapacitor/tree/master/examples/telegraf).
1. **Optional:** Write a [tickscript](https://docs.influxdata.com/kapacitor/v1.0/tick/syntax/) for your plugin and add it to [Kapacitor](https://github.com/influxdata/kapacitor/tree/master/examples/telegraf). Or mention @jackzampolin in a PR comment with some common queries that you would want to alert on and he will write one for you.
## GoDoc
@@ -52,7 +52,7 @@ See below for a quick example.
* Input Plugins must be added to the
`github.com/influxdata/telegraf/plugins/inputs/all/all.go` file.
* The `SampleConfig` function should return valid toml that describes how the
plugin can be configured. This is included in `telegraf config`.
plugin can be configured. This is included in `telegraf -sample-config`.
* The `Description` function should say in one line what this plugin does.
Let's say you've written a plugin that emits metrics about processes on the
@@ -79,10 +79,7 @@ func (s *Simple) Description() string {
}
func (s *Simple) SampleConfig() string {
	return `
## Indicate if everything is fine
ok = true
`
	return "ok = true # indicate if everything is fine"
}
func (s *Simple) Gather(acc telegraf.Accumulator) error {
@@ -170,7 +167,7 @@ and `Stop()` methods.
### Service Plugin Guidelines
* Same as the `Plugin` guidelines, except that they must conform to the
[`telegraf.ServiceInput`](https://godoc.org/github.com/influxdata/telegraf#ServiceInput) interface.
`inputs.ServiceInput` interface.
## Output Plugins
@@ -186,7 +183,7 @@ See below for a quick example.
* To be available within Telegraf itself, plugins must add themselves to the
`github.com/influxdata/telegraf/plugins/outputs/all/all.go` file.
* The `SampleConfig` function should return valid toml that describes how the
output can be configured. This is included in `telegraf config`.
output can be configured. This is included in `telegraf -sample-config`.
* The `Description` function should say in one line what this output does.
### Output Example
@@ -210,9 +207,7 @@ func (s *Simple) Description() string {
}
func (s *Simple) SampleConfig() string {
	return `
ok = true
`
	return "url = localhost"
}
func (s *Simple) Connect() error {
@@ -292,7 +287,7 @@ See below for a quick example.
* To be available within Telegraf itself, plugins must add themselves to the
`github.com/influxdata/telegraf/plugins/processors/all/all.go` file.
* The `SampleConfig` function should return valid toml that describes how the
processor can be configured. This is included in the output of `telegraf config`.
processor can be configured. This is included in `telegraf -sample-config`.
* The `Description` function should say in one line what this processor does.
### Processor Example
@@ -349,7 +344,7 @@ See below for a quick example.
* To be available within Telegraf itself, plugins must add themselves to the
`github.com/influxdata/telegraf/plugins/aggregators/all/all.go` file.
* The `SampleConfig` function should return valid toml that describes how the
aggregator can be configured. This is included in `telegraf config`.
aggregator can be configured. This is included in `telegraf -sample-config`.
* The `Description` function should say in one line what this aggregator does.
* The Aggregator plugin will need to keep caches of metrics that have passed
through it. This should be done using the builtin `HashID()` function of each
@@ -462,28 +457,29 @@ func init() {
## Unit Tests
Before opening a pull request you should run the linter checks and
the short tests.
### Execute linter
execute `make lint`
### Execute short tests
execute `make test`
execute `make test-short`
### Execute integration tests
### Execute long tests
Running the integration tests requires several docker containers to be
running. You can start the containers with:
```
make docker-run
```
As Telegraf collects metrics from several third-party services it becomes a
difficult task to mock each service as some of them have complicated protocols
which would take some time to replicate.
And run the full test suite with:
```
make test-all
```
To overcome this situation we've decided to use docker containers to provide a
fast and reproducible environment to test those services which require it.
For other situations
(e.g. https://github.com/influxdata/telegraf/blob/master/plugins/inputs/redis/redis_test.go)
a simple mock will suffice.
Use `make docker-kill` to stop the containers.
To execute Telegraf tests follow these simple steps:
- Install docker following [these](https://docs.docker.com/installation/)
instructions
- execute `make test`
### Unit test troubleshooting
Try cleaning up your test environment by executing `make docker-kill` and
re-running

Godeps

@@ -4,19 +4,18 @@ github.com/amir/raidman c74861fe6a7bb8ede0a010ce4485bdbb4fc4c985
github.com/apache/thrift 4aaa92ece8503a6da9bc6701604f69acf2b99d07
github.com/aws/aws-sdk-go c861d27d0304a79f727e9a8a4e2ac1e74602fdc0
github.com/beorn7/perks 4c0e84591b9aa9e6dcfdf3e020114cd81f89d5f9
github.com/bsm/sarama-cluster abf039439f66c1ce78017f560b490612552f6472
github.com/bsm/sarama-cluster ccdc0803695fbce22f1706d04ded46cd518fd832
github.com/cenkalti/backoff b02f2bbce11d7ea6b97f282ef1771b0fe2f65ef3
github.com/couchbase/go-couchbase bfe555a140d53dc1adf390f1a1d4b0fd4ceadb28
github.com/couchbase/gomemcached 4a25d2f4e1dea9ea7dd76dfd943407abf9b07d29
github.com/couchbase/goutils 5823a0cbaaa9008406021dc5daf80125ea30bba6
github.com/davecgh/go-spew 346938d642f2ec3594ed81d874461961cd0faa76
github.com/dgrijalva/jwt-go dbeaa9332f19a944acb5736b4456cfcc02140e29
github.com/docker/docker f5ec1e2936dcbe7b5001c2b817188b095c700c27
github.com/docker/go-connections 990a1a1a70b0da4c4cb70e117971a4f0babfbf1a
github.com/eapache/go-resiliency b86b1ec0dd4209a588dc1285cdd471e73525c0b3
github.com/eapache/go-xerial-snappy bb955e01b9346ac19dc29eb16586c90ded99a98c
github.com/eapache/queue 44cc805cf13205b55f69e14bcb69867d1ae92f98
github.com/eclipse/paho.mqtt.golang aff15770515e3c57fc6109da73d42b0d46f7f483
github.com/eclipse/paho.mqtt.golang d4f545eb108a2d19f9b1a735689dbfb719bc21fb
github.com/go-logfmt/logfmt 390ab7935ee28ec6b286364bba9b4dd6410cb3d5
github.com/go-sql-driver/mysql 2e00b5cd70399450106cec6431c2e2ce3cae5034
github.com/gobwas/glob bea32b9cd2d6f55753d94a28e959b13f0244797a
@@ -27,15 +26,13 @@ github.com/golang/snappy 7db9049039a047d955fe8c19b83c8ff5abd765c7
github.com/go-ole/go-ole be49f7c07711fcb603cff39e1de7c67926dc0ba7
github.com/google/go-cmp f94e52cad91c65a63acc1e75d4be223ea22e99bc
github.com/gorilla/mux 392c28fe23e1c45ddba891b0320b3b5df220beea
github.com/go-redis/redis 73b70592cdaa9e6abdfcfbf97b4a90d80728c836
github.com/go-sql-driver/mysql 2e00b5cd70399450106cec6431c2e2ce3cae5034
github.com/hailocab/go-hostpool e80d13ce29ede4452c43dea11e79b9bc8a15b478
github.com/hashicorp/consul 63d2fc68239b996096a1c55a0d4b400ea4c2583f
github.com/influxdata/tail c43482518d410361b6c383d7aebce33d0471d7bc
github.com/influxdata/tail a395bf99fe07c233f41fba0735fa2b13b58588ea
github.com/influxdata/toml 5d1d907f22ead1cd47adde17ceec5bda9cacaf8f
github.com/influxdata/wlog 7c63b0a71ef8300adc255344d275e10e5c3a71ec
github.com/fsnotify/fsnotify c2828203cd70a50dcccfb2761f8b1f8ceef9a8e9
github.com/jackc/pgx 63f58fd32edb5684b9e9f4cfaac847c6b42b3917
github.com/jackc/pgx b84338d7d62598f75859b2b146d830b22f1b9ec8
github.com/jmespath/go-jmespath bd40a432e4c76585ef6b72d3fd96fb9b6dc7b68d
github.com/kardianos/osext c2c54e542fb797ad986b31721e1baedf214ca413
github.com/kardianos/service 6d3a0ee7d3425d9d835debc51a0ca1ffa28f4893
@@ -43,14 +40,11 @@ github.com/kballard/go-shellquote d8ec1a69a250a17bb0e419c386eac1f3711dc142
github.com/matttproud/golang_protobuf_extensions c12348ce28de40eed0136aa2b644d0ee0650e56c
github.com/Microsoft/go-winio ce2922f643c8fd76b46cadc7f404a06282678b34
github.com/miekg/dns 99f84ae56e75126dd77e5de4fae2ea034a468ca1
github.com/mitchellh/mapstructure d0303fe809921458f417bcf828397a65db30a7e4
github.com/multiplay/go-ts3 07477f49b8dfa3ada231afc7b7b17617d42afe8e
github.com/naoina/go-stringutil 6b638e95a32d0c1131db0e7fe83775cbea4a0d0b
github.com/nats-io/gnatsd 393bbb7c031433e68707c8810fda0bfcfbe6ab9b
github.com/nats-io/go-nats ea9585611a4ab58a205b9b125ebd74c389a6b898
github.com/nats-io/nats ea9585611a4ab58a205b9b125ebd74c389a6b898
github.com/nats-io/nuid 289cccf02c178dc782430d534e3c1f5b72af807f
github.com/nsqio/go-nsq eee57a3ac4174c55924125bb15eeeda8cffb6e6f
github.com/nsqio/go-nsq a53d495e81424aaf7a7665a9d32a97715c40e953
github.com/opencontainers/runc 89ab7f2ccc1e45ddf6485eaa802c35dcf321dfc8
github.com/opentracing-contrib/go-observer a52f2342449246d5bcc273e65cbdcfa5f7d6c63c
github.com/opentracing/opentracing-go 06f47b42c792fef2796e9681353e1d908c417827
@@ -66,17 +60,15 @@ github.com/prometheus/procfs 1878d9fbb537119d24b21ca07effd591627cd160
github.com/rcrowley/go-metrics 1f30fe9094a513ce4c700b9a54458bbb0c96996c
github.com/samuel/go-zookeeper 1d7be4effb13d2d908342d349d71a284a7542693
github.com/satori/go.uuid 5bf94b69c6b68ee1b541973bb8e1144db23a194b
github.com/shirou/gopsutil fc04d2dd9a512906a2604242b35275179e250eda
github.com/shirou/gopsutil 9a4a9167ad3b4355dbf1c2c7a0f5f0d3fb1e9ab9
github.com/shirou/w32 3c9377fc6748f222729a8270fe2775d149a249ad
github.com/Shopify/sarama 3b1b38866a79f06deddf0487d5c27ba0697ccd65
github.com/Shopify/sarama c01858abb625b73a3af51d0798e4ad42c8147093
github.com/Sirupsen/logrus 61e43dc76f7ee59a82bdf3d71033dc12bea4c77d
github.com/soniah/gosnmp 5ad50dc75ab389f8a1c9f8a67d3a1cd85f67ed15
github.com/StackExchange/wmi f3e2bae1e0cb5aef83e319133eabfee30013a4a5
github.com/streadway/amqp 63795daa9a446c920826655f26ba31c81c860fd6
github.com/stretchr/objx facf9a85c22f48d2f52f2380e4efce1768749a89
github.com/stretchr/testify 12b6f73e6084dad08a7c6e575284b177ecafbc71
github.com/tidwall/gjson 0623bd8fbdbf97cc62b98d15108832851a658e59
github.com/tidwall/match 173748da739a410c5b0b813b956f89ff94730b4c
github.com/stretchr/objx 1a9d0bb9f541897e62256577b352fdbc1fb4fd94
github.com/stretchr/testify 4d4bfba8f1d1027c4fdbe371823030df51419987
github.com/vjeantet/grok d73e972b60935c7fec0b4ffbc904ed39ecaf7efe
github.com/wvanbergen/kafka bc265fedb9ff5b5c5d3c0fdcef4a819b3523d3ee
github.com/wvanbergen/kazoo-go 968957352185472eacb69215fa3dbfcfdbac1096
@@ -88,6 +80,7 @@ golang.org/x/sys 739734461d1c916b6c72a63d7efda2b27edb369f
golang.org/x/text 506f9d5c962f284575e88337e7d9296d27e729d3
gopkg.in/asn1-ber.v1 4e86f4367175e39f69d9358a5f17b4dda270378d
gopkg.in/fatih/pool.v2 6e328e67893eb46323ad06f0e92cb9536babbabc
gopkg.in/fsnotify.v1 a8a77c9133d2d6fd8334f3260d06f60e8d80a5fb
gopkg.in/gorethink/gorethink.v3 7ab832f7b65573104a555d84a27992ae9ea1f659
gopkg.in/ldap.v2 8168ee085ee43257585e50c6441aadf54ecb2c9f
gopkg.in/mgo.v2 3f83fa5005286a7fe593b055f0d7771a7dce4655

Makefile

@@ -2,10 +2,6 @@ PREFIX := /usr/local
VERSION := $(shell git describe --exact-match --tags 2>/dev/null)
BRANCH := $(shell git rev-parse --abbrev-ref HEAD)
COMMIT := $(shell git rev-parse --short HEAD)
GOFILES ?= $(shell git ls-files '*.go')
GOFMT ?= $(shell gofmt -l $(filter-out plugins/parsers/influx/machine.go, $(GOFILES)))
BUILDFLAGS ?=
ifdef GOBIN
PATH := $(GOBIN):$(PATH)
else
@@ -19,17 +15,17 @@ ifdef VERSION
LDFLAGS += -X main.version=$(VERSION)
endif

all:
	$(MAKE) deps
	$(MAKE) telegraf

deps:
	go get -u github.com/golang/lint/golint
	go get github.com/sparrc/gdm
	gdm restore

telegraf:
	go build -i -o $(TELEGRAF) -ldflags "$(LDFLAGS)" ./cmd/telegraf/telegraf.go
	go build -o $(TELEGRAF) -ldflags "$(LDFLAGS)" ./cmd/telegraf/telegraf.go

go-install:
	go install -ldflags "-w -s $(LDFLAGS)" ./cmd/telegraf

@@ -41,56 +37,81 @@ install: telegraf
test:
	go test -short ./...

fmt:
	@gofmt -w $(filter-out plugins/parsers/influx/machine.go, $(GOFILES))

fmtcheck:
	@echo '[INFO] running gofmt to identify incorrectly formatted code...'
	@if [ ! -z $(GOFMT) ]; then \
		echo "[ERROR] gofmt has found errors in the following files:" ; \
		echo "$(GOFMT)" ; \
		echo "" ;\
		echo "Run make fmt to fix them." ; \
		exit 1 ;\
	fi
	@echo '[INFO] done.'

test-windows:
	go test ./plugins/inputs/ping/...
	go test ./plugins/inputs/win_perf_counters/...
	go test ./plugins/inputs/win_services/...
	go test ./plugins/inputs/procstat/...

# vet runs the Go source code static analysis tool `vet` to find
# any common errors.
vet:
	@echo 'go vet $$(go list ./... | grep -v ./plugins/parsers/influx)'
	@go vet $$(go list ./... | grep -v ./plugins/parsers/influx) ; if [ $$? -eq 1 ]; then \
		echo ""; \
		echo "go vet has found suspicious constructs. Please remediate any reported errors"; \
		echo "to fix them before submitting code for review."; \
		exit 1; \
	fi

lint:
	go vet ./...

test-ci: fmtcheck vet
	go test -short ./...

test-all: fmtcheck vet
test-all: lint
	go test ./...

package:
	./scripts/build.py --package --platform=all --arch=all
	./scripts/build.py --package --version="$(VERSION)" --platform=linux --arch=all --upload

clean:
	rm -f telegraf
	rm -f telegraf.exe
	-rm -f telegraf
	-rm -f telegraf.exe

docker-image:
	./scripts/build.py --package --platform=linux --arch=amd64
	cp build/telegraf*$(COMMIT)*.deb .
	docker build -f scripts/dev.docker --build-arg "package=telegraf*$(COMMIT)*.deb" -t "telegraf-dev:$(COMMIT)" .

# Run all docker containers necessary for integration tests
docker-run:
	docker run --name aerospike -p "3000:3000" -d aerospike/aerospike-server:3.9.0
	docker run --name zookeeper -p "2181:2181" -d wurstmeister/zookeeper
	docker run --name kafka \
		--link zookeeper:zookeeper \
		-e KAFKA_ADVERTISED_HOST_NAME=localhost \
		-e KAFKA_ADVERTISED_PORT=9092 \
		-e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
		-e KAFKA_CREATE_TOPICS="test:1:1" \
		-p "9092:9092" \
		-d wurstmeister/kafka
	docker run --name elasticsearch -p "9200:9200" -p "9300:9300" -d elasticsearch:5
	docker run --name mysql -p "3306:3306" -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -d mysql
	docker run --name memcached -p "11211:11211" -d memcached
	docker run --name postgres -p "5432:5432" -d postgres
	docker run --name rabbitmq -p "15672:15672" -p "5672:5672" -d rabbitmq:3-management
	docker run --name redis -p "6379:6379" -d redis
	docker run --name nsq -p "4150:4150" -d nsqio/nsq /nsqd
	docker run --name mqtt -p "1883:1883" -d ncarlier/mqtt
	docker run --name riemann -p "5555:5555" -d stealthly/docker-riemann
	docker run --name nats -p "4222:4222" -d nats
	docker run --name openldap \
		-e SLAPD_CONFIG_ROOTDN="cn=manager,cn=config" \
		-e SLAPD_CONFIG_ROOTPW="secret" \
		-p "389:389" -p "636:636" \
		-d cobaugh/openldap-alpine

plugins/parsers/influx/machine.go: plugins/parsers/influx/machine.go.rl
	ragel -Z -G2 $^ -o $@

# Run docker containers necessary for integration tests; skipping services provided
# by CircleCI
docker-run-circle:
	docker run --name aerospike -p "3000:3000" -d aerospike/aerospike-server:3.9.0
	docker run --name zookeeper -p "2181:2181" -d wurstmeister/zookeeper
	docker run --name kafka \
		--link zookeeper:zookeeper \
		-e KAFKA_ADVERTISED_HOST_NAME=localhost \
		-e KAFKA_ADVERTISED_PORT=9092 \
		-e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
		-e KAFKA_CREATE_TOPICS="test:1:1" \
		-p "9092:9092" \
		-d wurstmeister/kafka
	docker run --name elasticsearch -p "9200:9200" -p "9300:9300" -d elasticsearch:5
	docker run --name nsq -p "4150:4150" -d nsqio/nsq /nsqd
	docker run --name mqtt -p "1883:1883" -d ncarlier/mqtt
	docker run --name riemann -p "5555:5555" -d stealthly/docker-riemann
	docker run --name nats -p "4222:4222" -d nats
	docker run --name openldap \
		-e SLAPD_CONFIG_ROOTDN="cn=manager,cn=config" \
		-e SLAPD_CONFIG_ROOTPW="secret" \
		-p "389:389" -p "636:636" \
		-d cobaugh/openldap-alpine

.PHONY: deps telegraf install test test-windows lint vet test-all package clean docker-image fmtcheck uint64

docker-kill:
	-docker kill aerospike elasticsearch kafka memcached mqtt mysql nats nsq \
		openldap postgres rabbitmq redis riemann zookeeper
	-docker rm aerospike elasticsearch kafka memcached mqtt mysql nats nsq \
		openldap postgres rabbitmq redis riemann zookeeper

.PHONY: deps telegraf telegraf.exe install test test-windows lint test-all \
	package clean docker-run docker-run-circle docker-kill


@@ -5,7 +5,8 @@ and writing metrics.
Design goals are to have a minimal memory footprint with a plugin system so
that developers in the community can easily add support for collecting metrics
from local or remote services.
from well known services (like Hadoop, Postgres, or Redis) and third party
APIs (like Mailchimp, AWS CloudWatch, or Google Analytics).
Telegraf is plugin-driven and has the concept of 4 distinct plugins:
@@ -51,33 +52,6 @@ which is installed by the Makefile if you don't have it already.
4. Run `cd $GOPATH/src/github.com/influxdata/telegraf`
5. Run `make`
### Nightly Builds
These builds are generated from the master branch:
- [telegraf_nightly_amd64.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_amd64.deb)
- [telegraf_nightly_arm64.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_arm64.deb)
- [telegraf-nightly.arm64.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.arm64.rpm)
- [telegraf_nightly_armel.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_armel.deb)
- [telegraf-nightly.armel.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.armel.rpm)
- [telegraf_nightly_armhf.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_armhf.deb)
- [telegraf-nightly.armv6hl.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.armv6hl.rpm)
- [telegraf-nightly_freebsd_amd64.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_freebsd_amd64.tar.gz)
- [telegraf-nightly_freebsd_i386.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_freebsd_i386.tar.gz)
- [telegraf_nightly_i386.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_i386.deb)
- [telegraf-nightly.i386.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.i386.rpm)
- [telegraf-nightly_linux_amd64.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_amd64.tar.gz)
- [telegraf-nightly_linux_arm64.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_arm64.tar.gz)
- [telegraf-nightly_linux_armel.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_armel.tar.gz)
- [telegraf-nightly_linux_armhf.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_armhf.tar.gz)
- [telegraf-nightly_linux_i386.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_i386.tar.gz)
- [telegraf-nightly_linux_s390x.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_linux_s390x.tar.gz)
- [telegraf_nightly_s390x.deb](https://dl.influxdata.com/telegraf/nightlies/telegraf_nightly_s390x.deb)
- [telegraf-nightly.s390x.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.s390x.rpm)
- [telegraf-nightly_windows_amd64.zip](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_windows_amd64.zip)
- [telegraf-nightly_windows_i386.zip](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly_windows_i386.zip)
- [telegraf-nightly.x86_64.rpm](https://dl.influxdata.com/telegraf/nightlies/telegraf-nightly.x86_64.rpm)
- [telegraf-static-nightly_linux_amd64.tar.gz](https://dl.influxdata.com/telegraf/nightlies/telegraf-static-nightly_linux_amd64.tar.gz)
## How to use it:
See usage with:
@@ -129,7 +103,6 @@ configuration options.
* [apache](./plugins/inputs/apache)
* [aws cloudwatch](./plugins/inputs/cloudwatch)
* [bcache](./plugins/inputs/bcache)
* [bond](./plugins/inputs/bond)
* [cassandra](./plugins/inputs/cassandra)
* [ceph](./plugins/inputs/ceph)
* [cgroup](./plugins/inputs/cgroup)
@@ -138,7 +111,6 @@ configuration options.
* [conntrack](./plugins/inputs/conntrack)
* [couchbase](./plugins/inputs/couchbase)
* [couchdb](./plugins/inputs/couchdb)
* [DC/OS](./plugins/inputs/dcos)
* [disque](./plugins/inputs/disque)
* [dmcache](./plugins/inputs/dmcache)
* [dns query time](./plugins/inputs/dns_query)
@@ -152,7 +124,6 @@ configuration options.
* [graylog](./plugins/inputs/graylog)
* [haproxy](./plugins/inputs/haproxy)
* [hddtemp](./plugins/inputs/hddtemp)
* [http](./plugins/inputs/http) (generic HTTP plugin, supports using input data formats)
* [http_response](./plugins/inputs/http_response)
* [httpjson](./plugins/inputs/httpjson) (generic JSON-emitting http service plugin)
* [internal](./plugins/inputs/internal)
@@ -160,9 +131,7 @@ configuration options.
* [interrupts](./plugins/inputs/interrupts)
* [ipmi_sensor](./plugins/inputs/ipmi_sensor)
* [iptables](./plugins/inputs/iptables)
* [ipset](./plugins/inputs/ipset)
* [jolokia](./plugins/inputs/jolokia) (deprecated, use [jolokia2](./plugins/inputs/jolokia2))
* [jolokia2](./plugins/inputs/jolokia2)
* [jolokia](./plugins/inputs/jolokia)
* [kapacitor](./plugins/inputs/kapacitor)
* [kubernetes](./plugins/inputs/kubernetes)
* [leofs](./plugins/inputs/leofs)
@@ -173,22 +142,17 @@ configuration options.
* [minecraft](./plugins/inputs/minecraft)
* [mongodb](./plugins/inputs/mongodb)
* [mysql](./plugins/inputs/mysql)
* [nats](./plugins/inputs/nats)
* [net_response](./plugins/inputs/net_response)
* [nginx](./plugins/inputs/nginx)
* [nginx_plus](./plugins/inputs/nginx_plus)
* [nsq](./plugins/inputs/nsq)
* [nstat](./plugins/inputs/nstat)
* [ntpq](./plugins/inputs/ntpq)
* [openldap](./plugins/inputs/openldap)
* [opensmtpd](./plugins/inputs/opensmtpd)
* [pf](./plugins/inputs/pf)
* [phpfpm](./plugins/inputs/phpfpm)
* [phusion passenger](./plugins/inputs/passenger)
* [ping](./plugins/inputs/ping)
* [postfix](./plugins/inputs/postfix)
* [postgresql_extensible](./plugins/inputs/postgresql_extensible)
* [postgresql](./plugins/inputs/postgresql)
* [postgresql_extensible](./plugins/inputs/postgresql_extensible)
* [powerdns](./plugins/inputs/powerdns)
* [procstat](./plugins/inputs/procstat)
* [prometheus](./plugins/inputs/prometheus) (can be used for [Caddy server](./plugins/inputs/prometheus/README.md#usage-for-caddy-http-server))
@@ -200,20 +164,15 @@ configuration options.
* [riak](./plugins/inputs/riak)
* [salesforce](./plugins/inputs/salesforce)
* [sensors](./plugins/inputs/sensors)
* [smart](./plugins/inputs/smart)
* [snmp](./plugins/inputs/snmp)
* [snmp_legacy](./plugins/inputs/snmp_legacy)
* [solr](./plugins/inputs/solr)
* [sql server](./plugins/inputs/sqlserver) (microsoft)
* [teamspeak](./plugins/inputs/teamspeak)
* [tomcat](./plugins/inputs/tomcat)
* [twemproxy](./plugins/inputs/twemproxy)
* [unbound](./plugins/inputs/unbound)
* [varnish](./plugins/inputs/varnish)
* [zfs](./plugins/inputs/zfs)
* [zookeeper](./plugins/inputs/zookeeper)
* [win_perf_counters](./plugins/inputs/win_perf_counters) (windows performance counters)
* [win_services](./plugins/inputs/win_services)
* [win_perf_counters ](./plugins/inputs/win_perf_counters) (windows performance counters)
* [sysstat](./plugins/inputs/sysstat)
* [system](./plugins/inputs/system)
* cpu
@@ -245,9 +204,8 @@ Telegraf can also collect metrics via the following service plugins:
* [filestack](./plugins/inputs/webhooks/filestack)
* [github](./plugins/inputs/webhooks/github)
* [mandrill](./plugins/inputs/webhooks/mandrill)
* [papertrail](./plugins/inputs/webhooks/papertrail)
* [particle](./plugins/inputs/webhooks/particle)
* [rollbar](./plugins/inputs/webhooks/rollbar)
* [papertrail](./plugins/inputs/webhooks/papertrail)
* [zipkin](./plugins/inputs/zipkin)
Telegraf is able to parse the following input data formats into metrics; these
@@ -259,16 +217,13 @@ formats may be used with input plugins supporting the `data_format` option:
* [Value](./docs/DATA_FORMATS_INPUT.md#value)
* [Nagios](./docs/DATA_FORMATS_INPUT.md#nagios)
* [Collectd](./docs/DATA_FORMATS_INPUT.md#collectd)
* [Dropwizard](./docs/DATA_FORMATS_INPUT.md#dropwizard)
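
As a brief, hedged illustration of the `data_format` option (the exec command shown is a placeholder), any of the formats above can be selected on a supporting input:

```
[[inputs.exec]]
  ## Placeholder command that emits metrics in one of the formats above.
  commands = ["/usr/local/bin/metrics.sh"]
  data_format = "influx"
```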
## Processor Plugins
* [printer](./plugins/processors/printer)
* [override](./plugins/processors/override)
## Aggregator Plugins
* [basicstats](./plugins/aggregators/basicstats)
* [minmax](./plugins/aggregators/minmax)
* [histogram](./plugins/aggregators/histogram)
@@ -279,7 +234,6 @@ formats may be used with input plugins supporting the `data_format` option:
* [amqp](./plugins/outputs/amqp) (rabbitmq)
* [aws kinesis](./plugins/outputs/kinesis)
* [aws cloudwatch](./plugins/outputs/cloudwatch)
* [cratedb](./plugins/outputs/cratedb)
* [datadog](./plugins/outputs/datadog)
* [discard](./plugins/outputs/discard)
* [elasticsearch](./plugins/outputs/elasticsearch)
@@ -299,4 +253,3 @@ formats may be used with input plugins supporting the `data_format` option:
* [socket_writer](./plugins/outputs/socket_writer)
* [tcp](./plugins/outputs/socket_writer)
* [udp](./plugins/outputs/socket_writer)
* [wavefront](./plugins/outputs/wavefront)


@@ -28,18 +28,6 @@ type Accumulator interface {
	tags map[string]string,
	t ...time.Time)

	// AddSummary is the same as AddFields, but will add the metric as a "Summary" type
	AddSummary(measurement string,
		fields map[string]interface{},
		tags map[string]string,
		t ...time.Time)

	// AddHistogram is the same as AddFields, but will add the metric as a "Histogram" type
	AddHistogram(measurement string,
		fields map[string]interface{},
		tags map[string]string,
		t ...time.Time)

	SetPrecision(precision, interval time.Duration)

	AddError(err error)


@@ -26,7 +26,7 @@ type MetricMaker interface {
func NewAccumulator(
	maker MetricMaker,
	metrics chan telegraf.Metric,
) telegraf.Accumulator {
) *accumulator {
	acc := accumulator{
		maker: maker,
		metrics: metrics,
@@ -76,28 +76,6 @@ func (ac *accumulator) AddCounter(
	}
}

func (ac *accumulator) AddSummary(
	measurement string,
	fields map[string]interface{},
	tags map[string]string,
	t ...time.Time,
) {
	if m := ac.maker.MakeMetric(measurement, fields, tags, telegraf.Summary, ac.getTime(t)); m != nil {
		ac.metrics <- m
	}
}

func (ac *accumulator) AddHistogram(
	measurement string,
	fields map[string]interface{},
	tags map[string]string,
	t ...time.Time,
) {
	if m := ac.maker.MakeMetric(measurement, fields, tags, telegraf.Histogram, ac.getTime(t)); m != nil {
		ac.metrics <- m
	}
}

// AddError passes a runtime error to the accumulator.
// The error will be tagged with the plugin name and written to the log.
func (ac *accumulator) AddError(err error) {


@@ -15,36 +15,63 @@ import (
"github.com/stretchr/testify/require"
)
func TestAddFields(t *testing.T) {
func TestAdd(t *testing.T) {
now := time.Now()
metrics := make(chan telegraf.Metric, 10)
defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics)
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{})
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"})
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"}, now)
testm := <-metrics
actual := testm.String()
assert.Contains(t, actual, "acctest value=101")
testm = <-metrics
actual = testm.String()
assert.Contains(t, actual, "acctest,acc=test value=101")
testm = <-metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test value=101 %d\n", now.UnixNano()),
actual)
}
func TestAddFields(t *testing.T) {
now := time.Now()
metrics := make(chan telegraf.Metric, 10)
defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics)
tags := map[string]string{"foo": "bar"}
fields := map[string]interface{}{
"usage": float64(99),
}
now := time.Now()
a.AddCounter("acctest", fields, tags, now)
a.AddFields("acctest", fields, map[string]string{})
a.AddGauge("acctest", fields, map[string]string{"acc": "test"})
a.AddCounter("acctest", fields, map[string]string{"acc": "test"}, now)
testm := <-metrics
actual := testm.String()
assert.Contains(t, actual, "acctest usage=99")
require.Equal(t, "acctest", testm.Name())
actual, ok := testm.GetField("usage")
testm = <-metrics
actual = testm.String()
assert.Contains(t, actual, "acctest,acc=test usage=99")
require.True(t, ok)
require.Equal(t, float64(99), actual)
actual, ok = testm.GetTag("foo")
require.True(t, ok)
require.Equal(t, "bar", actual)
tm := testm.Time()
// okay if monotonic clock differs
require.True(t, now.Equal(tm))
tp := testm.Type()
require.Equal(t, telegraf.Counter, tp)
testm = <-metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test usage=99 %d\n", now.UnixNano()),
actual)
}
func TestAccAddError(t *testing.T) {
@@ -71,61 +98,215 @@ func TestAccAddError(t *testing.T) {
assert.Contains(t, string(errs[2]), "baz")
}
func TestSetPrecision(t *testing.T) {
tests := []struct {
name string
unset bool
precision time.Duration
interval time.Duration
timestamp time.Time
expected time.Time
}{
{
name: "default precision is nanosecond",
unset: true,
timestamp: time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC),
expected: time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC),
},
{
name: "second interval",
interval: time.Second,
timestamp: time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC),
expected: time.Date(2006, time.February, 10, 12, 0, 0, 0, time.UTC),
},
{
name: "microsecond interval",
interval: time.Microsecond,
timestamp: time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC),
expected: time.Date(2006, time.February, 10, 12, 0, 0, 82913000, time.UTC),
},
{
name: "2 second precision",
precision: 2 * time.Second,
timestamp: time.Date(2006, time.February, 10, 12, 0, 2, 4, time.UTC),
expected: time.Date(2006, time.February, 10, 12, 0, 2, 0, time.UTC),
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
metrics := make(chan telegraf.Metric, 10)
func TestAddNoIntervalWithPrecision(t *testing.T) {
now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC)
metrics := make(chan telegraf.Metric, 10)
defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics)
a.SetPrecision(0, time.Second)
a := NewAccumulator(&TestMetricMaker{}, metrics)
if !tt.unset {
a.SetPrecision(tt.precision, tt.interval)
}
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{})
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"})
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"}, now)
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{},
tt.timestamp,
)
testm := <-a.metrics
actual := testm.String()
assert.Contains(t, actual, "acctest value=101")
testm := <-metrics
require.Equal(t, tt.expected, testm.Time())
testm = <-a.metrics
actual = testm.String()
assert.Contains(t, actual, "acctest,acc=test value=101")
close(metrics)
})
}
testm = <-a.metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test value=101 %d\n", int64(1139572800000000000)),
actual)
}
func TestAddDisablePrecision(t *testing.T) {
now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC)
metrics := make(chan telegraf.Metric, 10)
defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics)
a.SetPrecision(time.Nanosecond, 0)
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{})
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"})
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"}, now)
testm := <-a.metrics
actual := testm.String()
assert.Contains(t, actual, "acctest value=101")
testm = <-a.metrics
actual = testm.String()
assert.Contains(t, actual, "acctest,acc=test value=101")
testm = <-a.metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test value=101 %d\n", int64(1139572800082912748)),
actual)
}
func TestAddNoPrecisionWithInterval(t *testing.T) {
now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC)
metrics := make(chan telegraf.Metric, 10)
defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics)
a.SetPrecision(0, time.Second)
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{})
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"})
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"}, now)
testm := <-a.metrics
actual := testm.String()
assert.Contains(t, actual, "acctest value=101")
testm = <-a.metrics
actual = testm.String()
assert.Contains(t, actual, "acctest,acc=test value=101")
testm = <-a.metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test value=101 %d\n", int64(1139572800000000000)),
actual)
}
func TestDifferentPrecisions(t *testing.T) {
now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC)
metrics := make(chan telegraf.Metric, 10)
defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics)
a.SetPrecision(0, time.Second)
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"}, now)
testm := <-a.metrics
actual := testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test value=101 %d\n", int64(1139572800000000000)),
actual)
a.SetPrecision(0, time.Millisecond)
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"}, now)
testm = <-a.metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test value=101 %d\n", int64(1139572800083000000)),
actual)
a.SetPrecision(0, time.Microsecond)
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"}, now)
testm = <-a.metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test value=101 %d\n", int64(1139572800082913000)),
actual)
a.SetPrecision(0, time.Nanosecond)
a.AddFields("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"}, now)
testm = <-a.metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test value=101 %d\n", int64(1139572800082912748)),
actual)
}
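
The precision tests above all exercise the same rounding rule. A minimal sketch of that rule, assuming the accumulator keeps a single precision duration internally (an explicit precision wins over the agent interval, and timestamps are rounded to the chosen unit):

```go
// Sketch only; the helper name and structure are assumptions, not code from
// this diff. It reproduces the expectations in the tests above, e.g.
// 12:00:00.082912748 rounds to 12:00:00.082913 at microsecond precision.
func roundTime(t time.Time, precision, interval time.Duration) time.Time {
	switch {
	case precision > 0:
		return t.Round(precision)
	case interval >= time.Second:
		return t.Round(time.Second)
	case interval >= time.Millisecond:
		return t.Round(time.Millisecond)
	case interval >= time.Microsecond:
		return t.Round(time.Microsecond)
	default:
		return t // nanosecond precision: leave the timestamp untouched
	}
}
```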
func TestAddGauge(t *testing.T) {
now := time.Now()
metrics := make(chan telegraf.Metric, 10)
defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics)
a.AddGauge("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{})
a.AddGauge("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"})
a.AddGauge("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"}, now)
testm := <-metrics
actual := testm.String()
assert.Contains(t, actual, "acctest value=101")
assert.Equal(t, testm.Type(), telegraf.Gauge)
testm = <-metrics
actual = testm.String()
assert.Contains(t, actual, "acctest,acc=test value=101")
assert.Equal(t, testm.Type(), telegraf.Gauge)
testm = <-metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test value=101 %d\n", now.UnixNano()),
actual)
assert.Equal(t, testm.Type(), telegraf.Gauge)
}
func TestAddCounter(t *testing.T) {
now := time.Now()
metrics := make(chan telegraf.Metric, 10)
defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics)
a.AddCounter("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{})
a.AddCounter("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"})
a.AddCounter("acctest",
map[string]interface{}{"value": float64(101)},
map[string]string{"acc": "test"}, now)
testm := <-metrics
actual := testm.String()
assert.Contains(t, actual, "acctest value=101")
assert.Equal(t, testm.Type(), telegraf.Counter)
testm = <-metrics
actual = testm.String()
assert.Contains(t, actual, "acctest,acc=test value=101")
assert.Equal(t, testm.Type(), telegraf.Counter)
testm = <-metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test value=101 %d\n", now.UnixNano()),
actual)
assert.Equal(t, testm.Type(), telegraf.Counter)
}
type TestMetricMaker struct {


@@ -143,7 +143,7 @@ func (a *Agent) gatherer(
func gatherWithTimeout(
shutdown chan struct{},
input *models.RunningInput,
acc telegraf.Accumulator,
acc *accumulator,
timeout time.Duration,
) {
ticker := time.NewTicker(timeout)
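
Only the signature change is visible in this hunk. A sketch of the timeout pattern such a helper typically implements, under the assumption that the gather goroutine is never abandoned because it still holds the accumulator:

```go
// Assumed structure, not code from the diff: race Gather against a ticker
// and log on every elapsed timeout while continuing to wait for completion.
done := make(chan error)
go func() { done <- input.Input.Gather(acc) }()

t := time.NewTicker(timeout)
defer t.Stop()
for {
	select {
	case err := <-done:
		if err != nil {
			acc.AddError(err)
		}
		return
	case <-t.C:
		log.Printf("E! input [%s] took longer than %s to gather", input.Name(), timeout)
	}
}
```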
@@ -252,7 +252,7 @@ func (a *Agent) flusher(shutdown chan struct{}, metricC chan telegraf.Metric, ag
// the flusher will flush after metrics are collected.
time.Sleep(time.Millisecond * 300)
// create an output metric channel and a gorouting that continuously passes
// create an output metric channel and a gorouting that continously passes
// each metric onto the output plugins & aggregators.
outMetricC := make(chan telegraf.Metric, 100)
var wg sync.WaitGroup
@@ -271,9 +271,11 @@ func (a *Agent) flusher(shutdown chan struct{}, metricC chan telegraf.Metric, ag
// if dropOriginal is set to true, then we will only send this
// metric to the aggregators, not the outputs.
var dropOriginal bool
for _, agg := range a.Config.Aggregators {
if ok := agg.Add(m.Copy()); ok {
dropOriginal = true
if !m.IsAggregate() {
for _, agg := range a.Config.Aggregators {
if ok := agg.Add(m.Copy()); ok {
dropOriginal = true
}
}
}
if !dropOriginal {
@@ -306,13 +308,7 @@ func (a *Agent) flusher(shutdown chan struct{}, metricC chan telegraf.Metric, ag
metrics = processor.Apply(metrics...)
}
for _, m := range metrics {
for i, o := range a.Config.Outputs {
if i == len(a.Config.Outputs)-1 {
o.AddMetric(m)
} else {
o.AddMetric(m.Copy())
}
}
outMetricC <- m
}
}
}
@@ -368,6 +364,8 @@ func (a *Agent) Run(shutdown chan struct{}) error {
metricC := make(chan telegraf.Metric, 100)
aggC := make(chan telegraf.Metric, 100)
now := time.Now()
// Start all ServicePlugins
for _, input := range a.Config.Inputs {
input.SetDefaultTags(a.Config.Tags)
@@ -408,7 +406,7 @@ func (a *Agent) Run(shutdown chan struct{}) error {
acc := NewAccumulator(agg, aggC)
acc.SetPrecision(a.Config.Agent.Precision.Duration,
a.Config.Agent.Interval.Duration)
agg.Run(acc, shutdown)
agg.Run(acc, now, shutdown)
}(aggregator)
}


@@ -1,4 +1,3 @@
image: Previous Visual Studio 2015
version: "{build}"
cache:
@@ -13,19 +12,18 @@ platform: x64
install:
- IF NOT EXIST "C:\Cache" mkdir C:\Cache
- IF NOT EXIST "C:\Cache\go1.9.4.msi" curl -o "C:\Cache\go1.9.4.msi" https://storage.googleapis.com/golang/go1.9.4.windows-amd64.msi
- IF NOT EXIST "C:\Cache\go1.8.1.msi" curl -o "C:\Cache\go1.8.1.msi" https://storage.googleapis.com/golang/go1.8.1.windows-amd64.msi
- IF NOT EXIST "C:\Cache\gnuwin32-bin.zip" curl -o "C:\Cache\gnuwin32-bin.zip" https://dl.influxdata.com/telegraf/ci/make-3.81-bin.zip
- IF NOT EXIST "C:\Cache\gnuwin32-dep.zip" curl -o "C:\Cache\gnuwin32-dep.zip" https://dl.influxdata.com/telegraf/ci/make-3.81-dep.zip
- IF EXIST "C:\Go" rmdir /S /Q C:\Go
- msiexec.exe /i "C:\Cache\go1.9.4.msi" /quiet
- msiexec.exe /i "C:\Cache\go1.8.1.msi" /quiet
- 7z x "C:\Cache\gnuwin32-bin.zip" -oC:\GnuWin32 -y
- 7z x "C:\Cache\gnuwin32-dep.zip" -oC:\GnuWin32 -y
- go version
- go env
build_script:
- cmd: C:\GnuWin32\bin\make deps
- cmd: C:\GnuWin32\bin\make telegraf
- cmd: C:\GnuWin32\bin\make
test_script:
- cmd: C:\GnuWin32\bin\make test-windows

circle.yml (new file, 16 lines)

@@ -0,0 +1,16 @@
machine:
go:
version: 1.8.1
services:
- docker
- memcached
- redis
- rabbitmq-server
dependencies:
override:
- docker info
test:
override:
- bash scripts/circle-test.sh


@@ -54,10 +54,12 @@ var fUsage = flag.String("usage", "",
"print usage for a plugin, ie, 'telegraf --usage mysql'")
var fService = flag.String("service", "",
"operate on the service")
var fRunAsConsole = flag.Bool("console", false, "run as console application (windows only)")
// Telegraf version, populated by the linker.
// ie, -ldflags "-X main.version=`git describe --always --tags`"
var (
nextVersion = "1.6.0"
nextVersion = "1.4.0"
version string
commit string
branch string
@@ -266,7 +268,7 @@ func (p *program) Stop(s service.Service) error {
func displayVersion() string {
if version == "" {
return fmt.Sprintf("v%s~%s", nextVersion, commit)
return fmt.Sprintf("v%s~pre%s", nextVersion, commit)
}
return "v" + version
}
@@ -359,7 +361,7 @@ func main() {
return
}
if runtime.GOOS == "windows" && !(*fRunAsConsole) {
if runtime.GOOS == "windows" {
svcConfig := &service.Config{
Name: "telegraf",
DisplayName: "Telegraf Data Collector Service",


@@ -1,93 +0,0 @@
version: '3'
services:
aerospike:
image: aerospike/aerospike-server:3.9.0
ports:
- "3000:3000"
zookeeper:
image: wurstmeister/zookeeper
environment:
- JAVA_OPTS="-Xms256m -Xmx256m"
ports:
- "2181:2181"
kafka:
image: wurstmeister/kafka
environment:
- KAFKA_ADVERTISED_HOST_NAME=localhost
- KAFKA_ADVERTISED_PORT=9092
- KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
- KAFKA_CREATE_TOPICS="test:1:1"
- JAVA_OPTS="-Xms256m -Xmx256m"
ports:
- "9092:9092"
depends_on:
- zookeeper
elasticsearch:
image: elasticsearch:5
environment:
- JAVA_OPTS="-Xms256m -Xmx256m"
ports:
- "9200:9200"
- "9300:9300"
mysql:
image: mysql
environment:
- MYSQL_ALLOW_EMPTY_PASSWORD=yes
ports:
- "3306:3306"
memcached:
image: memcached
ports:
- "11211:11211"
postgres:
image: postgres:alpine
ports:
- "5432:5432"
rabbitmq:
image: rabbitmq:3-management
ports:
- "15672:15672"
- "5672:5672"
redis:
image: redis:alpine
ports:
- "6379:6379"
nsq:
image: nsqio/nsq
ports:
- "4150:4150"
command: "/nsqd"
mqtt:
image: ncarlier/mqtt
ports:
- "1883:1883"
riemann:
image: stealthly/docker-riemann
ports:
- "5555:5555"
nats:
image: nats
ports:
- "4222:4222"
openldap:
image: cobaugh/openldap-alpine
environment:
- SLAPD_CONFIG_ROOTDN="cn=manager,cn=config"
- SLAPD_CONFIG_ROOTPW="secret"
ports:
- "389:389"
- "636:636"
crate:
image: crate/crate
ports:
- "4200:4200"
- "4230:4230"
command:
- crate
- -Cnetwork.host=0.0.0.0
- -Ctransport.host=localhost
- -Clicense.enterprise=false
environment:
- CRATE_HEAP_SIZE=128m
- JAVA_OPTS='-Xms256m -Xmx256m'


@@ -39,11 +39,6 @@ metrics as they pass through Telegraf:
Both Aggregators and Processors analyze metrics as they pass through Telegraf.
Use [measurement filtering](CONFIGURATION.md#measurement-filtering)
to control which metrics are passed through a processor or aggregator. If a
metric is filtered out, it bypasses the plugin and is passed downstream to the
next plugin.
**Processor** plugins process metrics as they pass through and immediately emit
results based on the values they process. For example, this could be printing
all metrics or adding a tag to all metrics that pass through.
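
A minimal processor sketch matching that description (a hypothetical plugin, not one from this diff), assuming the `telegraf.Processor` interface of this era:

```go
type addTag struct{}

func (*addTag) SampleConfig() string { return "" }
func (*addTag) Description() string  { return "adds a static tag to every metric" }

// Apply emits its results immediately: each metric is tagged and passed on.
func (*addTag) Apply(in ...telegraf.Metric) []telegraf.Metric {
	for _, m := range in {
		m.AddTag("processed", "true")
	}
	return in
}
```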


@@ -24,17 +24,11 @@ Environment variables can be used anywhere in the config file, simply prepend
them with $. For strings the variable must be within quotes (ie, "$STR_VAR");
for numbers and booleans they should be plain (ie, $INT_VAR, $BOOL_VAR).
When using the `.deb` or `.rpm` packages, you can define environment variables
in the `/etc/default/telegraf` file.
## Configuration file locations
The location of the configuration file can be set via the `--config` command
line flag.
When the `--config-directory` command line flag is used files ending with
`.conf` in the specified directory will also be included in the Telegraf
configuration.
line flag. Telegraf will also pick up all files matching the pattern `*.conf` if
the `-config-directory` command line flag is used.
On most systems, the default locations are `/etc/telegraf/telegraf.conf` for
the main configuration file and `/etc/telegraf/telegraf.d` for the directory of
@@ -98,13 +92,9 @@ you can configure that here.
* **name_suffix**: Specifies a suffix to attach to the measurement name.
* **tags**: A map of tags to apply to a specific input's measurements.
The [measurement filtering](#measurement-filtering) parameters can be used to
limit what metrics are emitted from the input plugin.
## Output Configuration
The [measurement filtering](#measurement-filtering) parameters can be used to
limit what metrics are emitted from the output plugin.
There are no generic configuration options available for all outputs.
## Aggregator Configuration
@@ -125,10 +115,6 @@ aggregator and will not get sent to the output plugins.
* **name_suffix**: Specifies a suffix to attach to the measurement name.
* **tags**: A map of tags to apply to a specific input's measurements.
The [measurement filtering](#measurement-filtering) parameters can be used to
limit what metrics are handled by the aggregator. Excluded metrics are passed
downstream to the next aggregator.
## Processor Configuration
The following config parameters are available for all processors:
@@ -136,10 +122,6 @@ The following config parameters are available for all processors:
* **order**: This is the order in which the processor(s) get executed. If this
is not specified then processor execution order will be random.
The [measurement filtering](#measurement-filtering) parameters can be used
to limit what metrics are handled by the processor. Excluded metrics are
passed downstream to the next processor.
#### Measurement Filtering
Filters can be configured per input, output, processor, or aggregator,
@@ -389,15 +371,3 @@ to the system load metrics due to the `namepass` parameter.
[[outputs.file]]
files = ["stdout"]
```
#### Processor Configuration Examples:
Print only the metrics with `cpu` as the measurement name; all metrics are
passed to the output:
```toml
[[processors.printer]]
namepass = "cpu"
[[outputs.file]]
files = ["/tmp/metrics.out"]
```


@@ -8,7 +8,6 @@ Telegraf is able to parse the following input data formats into metrics:
1. [Value](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#value), ie: 45 or "booyah"
1. [Nagios](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#nagios) (exec input only)
1. [Collectd](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#collectd)
1. [Dropwizard](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#dropwizard)
Telegraf metrics, like InfluxDB
[points](https://docs.influxdata.com/influxdb/v0.10/write_protocols/line/),
@@ -480,176 +479,3 @@ You can also change the path to the typesdb or add additional typesdb using
## Path of to TypesDB specifications
collectd_typesdb = ["/usr/share/collectd/types.db"]
```
# Dropwizard:
The dropwizard format can parse the JSON representation of a single dropwizard metric registry. By default, tags are parsed from metric names as if they were actual influxdb line protocol keys (`measurement<,tag_set>`), which can be overridden by defining custom [measurement & tag templates](./DATA_FORMATS_INPUT.md#measurement--tag-templates). All field value types are supported: `string`, `number` and `boolean`.
A typical JSON of a dropwizard metric registry:
```json
{
"version": "3.0.0",
"counters" : {
"measurement,tag1=green" : {
"count" : 1
}
},
"meters" : {
"measurement" : {
"count" : 1,
"m15_rate" : 1.0,
"m1_rate" : 1.0,
"m5_rate" : 1.0,
"mean_rate" : 1.0,
"units" : "events/second"
}
},
"gauges" : {
"measurement" : {
"value" : 1
}
},
"histograms" : {
"measurement" : {
"count" : 1,
"max" : 1.0,
"mean" : 1.0,
"min" : 1.0,
"p50" : 1.0,
"p75" : 1.0,
"p95" : 1.0,
"p98" : 1.0,
"p99" : 1.0,
"p999" : 1.0,
"stddev" : 1.0
}
},
"timers" : {
"measurement" : {
"count" : 1,
"max" : 1.0,
"mean" : 1.0,
"min" : 1.0,
"p50" : 1.0,
"p75" : 1.0,
"p95" : 1.0,
"p98" : 1.0,
"p99" : 1.0,
"p999" : 1.0,
"stddev" : 1.0,
"m15_rate" : 1.0,
"m1_rate" : 1.0,
"m5_rate" : 1.0,
"mean_rate" : 1.0,
"duration_units" : "seconds",
"rate_units" : "calls/second"
}
}
}
```
Would get translated into 4 different measurements:
```
measurement,metric_type=counter,tag1=green count=1
measurement,metric_type=meter count=1,m15_rate=1.0,m1_rate=1.0,m5_rate=1.0,mean_rate=1.0
measurement,metric_type=gauge value=1
measurement,metric_type=histogram count=1,max=1.0,mean=1.0,min=1.0,p50=1.0,p75=1.0,p95=1.0,p98=1.0,p99=1.0,p999=1.0
measurement,metric_type=timer count=1,max=1.0,mean=1.0,min=1.0,p50=1.0,p75=1.0,p95=1.0,p98=1.0,p99=1.0,p999=1.0,stddev=1.0,m15_rate=1.0,m1_rate=1.0,m5_rate=1.0,mean_rate=1.0
```
You may also parse a dropwizard registry from any JSON document that contains one in some inner field.
For example, to parse the following JSON document:
```json
{
"time" : "2017-02-22T14:33:03.662+02:00",
"tags" : {
"tag1" : "green",
"tag2" : "yellow"
},
"metrics" : {
"counters" : {
"measurement" : {
"count" : 1
}
},
"meters" : {},
"gauges" : {},
"histograms" : {},
"timers" : {}
}
}
```
and translate it into:
```
measurement,metric_type=counter,tag1=green,tag2=yellow count=1 1487766783662000000
```
you simply need to use the following additional configuration properties:
```toml
dropwizard_metric_registry_path = "metrics"
dropwizard_time_path = "time"
dropwizard_time_format = "2006-01-02T15:04:05Z07:00"
dropwizard_tags_path = "tags"
## tag paths per tag are supported too, eg.
#[inputs.yourinput.dropwizard_tag_paths]
# tag1 = "tags.tag1"
# tag2 = "tags.tag2"
```
For more information about the dropwizard json format see
[here](http://metrics.dropwizard.io/3.1.0/manual/json/).
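
The `dropwizard_*_path` options are gjson paths. A sketch of how they resolve against the sample document above, assuming the parser evaluates them with the gjson library whose path syntax is linked here:

```go
// doc holds the JSON document above; the values in comments are what the
// sample would yield.
registry := gjson.GetBytes(doc, "metrics")        // dropwizard_metric_registry_path
ts := gjson.GetBytes(doc, "time").String()        // "2017-02-22T14:33:03.662+02:00"
tag1 := gjson.GetBytes(doc, "tags.tag1").String() // "green"
fmt.Println(registry.Exists(), ts, tag1)
```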
#### Dropwizard Configuration:
```toml
[[inputs.exec]]
## Commands array
commands = ["curl http://localhost:8080/sys/metrics"]
timeout = "5s"
## Data format to consume.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "dropwizard"
## Used by the templating engine to join matched values when cardinality is > 1
separator = "_"
## Each template line requires a template pattern. It can have an optional
## filter before the template and separated by spaces. It can also have optional extra
## tags following the template. Multiple tags should be separated by commas and no spaces
## similar to the line protocol format. There can be only one default template.
## Templates support below format:
## 1. filter + template
## 2. filter + template + extra tag(s)
## 3. filter + template with field key
## 4. default template
## By providing an empty template array, templating is disabled and measurements are parsed as influxdb line protocol keys (measurement<,tag_set>)
templates = []
## You may use an appropriate [gjson path](https://github.com/tidwall/gjson#path-syntax)
## to locate the metric registry within the JSON document
# dropwizard_metric_registry_path = "metrics"
## You may use an appropriate [gjson path](https://github.com/tidwall/gjson#path-syntax)
## to locate the default time of the measurements within the JSON document
# dropwizard_time_path = "time"
# dropwizard_time_format = "2006-01-02T15:04:05Z07:00"
## You may use an appropriate [gjson path](https://github.com/tidwall/gjson#path-syntax)
## to locate the tags map within the JSON document
# dropwizard_tags_path = "tags"
## You may even use tag paths per tag
# [inputs.exec.dropwizard_tag_paths]
# tag1 = "tags.tag1"
# tag2 = "tags.tag2"
```


@@ -2,12 +2,12 @@
Telegraf is able to serialize metrics into the following output data formats:
1. [InfluxDB Line Protocol](#influx)
1. [JSON](#json)
1. [Graphite](#graphite)
1. [InfluxDB Line Protocol](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md#influx)
1. [JSON](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md#json)
1. [Graphite](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md#graphite)
Telegraf metrics, like InfluxDB
[points](https://docs.influxdata.com/influxdb/latest/concepts/glossary/#point),
[points](https://docs.influxdata.com/influxdb/v0.10/write_protocols/line/),
are a combination of four basic parts:
1. Measurement Name
@@ -49,10 +49,8 @@ I'll go over below.
# Influx:
The `influx` format outputs data as
[InfluxDB Line Protocol](https://docs.influxdata.com/influxdb/latest/write_protocols/line_protocol_tutorial/).
This is the recommended format to use unless another format is required for
interoperability.
There are no additional configuration options for InfluxDB line-protocol. The
metrics are serialized directly into InfluxDB line-protocol.
### Influx Configuration:
@@ -66,20 +64,6 @@ interoperability.
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "influx"
## Maximum line length in bytes. Useful only for debugging.
# influx_max_line_bytes = 0
## When true, fields will be output in ascending lexical order. Enabling
## this option will result in decreased performance and is only recommended
## when you need predictable ordering while debugging.
# influx_sort_fields = false
## When true, Telegraf will output unsigned integers as unsigned values,
## i.e.: `42u`. You will need a version of InfluxDB supporting unsigned
## integer values. Enabling this option will result in field type errors if
## existing data has been written.
# influx_uint_support = false
```
# Graphite:
@@ -112,9 +96,6 @@ tars.cpu-total.us-east-1.cpu.usage_user 0.89 1455320690
tars.cpu-total.us-east-1.cpu.usage_idle 98.09 1455320690
```
Fields with string values will be skipped. Boolean fields will be converted
to 1 (true) or 0 (false).
### Graphite Configuration:
```toml


@@ -1,46 +0,0 @@
# Frequently Asked Questions
### Q: How can I monitor the Docker Engine Host from within a container?
You will need to set up several volume mounts as well as some environment
variables:
```
docker run --name telegraf \
  -v /:/hostfs:ro \
  -v /etc:/hostfs/etc:ro \
  -v /proc:/hostfs/proc:ro \
  -v /sys:/hostfs/sys:ro \
  -v /var/run/utmp:/var/run/utmp:ro \
  -e HOST_ETC=/hostfs/etc \
  -e HOST_PROC=/hostfs/proc \
  -e HOST_SYS=/hostfs/sys \
  -e HOST_MOUNT_PREFIX=/hostfs \
  telegraf
```
### Q: Why do I get a "no such host" error resolving hostnames that other programs can resolve?
Go uses a pure Go resolver by default for [name resolution](https://golang.org/pkg/net/#hdr-Name_Resolution).
This resolver behaves differently than the C library functions but is more
efficient when used with the Go runtime.
If you encounter problems or want to use more advanced name resolution methods
that are unsupported by the pure Go resolver, you can switch to the cgo
resolver.
If running manually set:
```
export GODEBUG=netdns=cgo
```
If running as a service add the environment variable to `/etc/default/telegraf`:
```
GODEBUG=netdns=cgo
```
### Q: When will the next version be released?
The latest release date estimate can be viewed on the
[milestones](https://github.com/influxdata/telegraf/milestones) page.


@@ -24,7 +24,6 @@ following works:
- github.com/eapache/go-xerial-snappy [MIT](https://github.com/eapache/go-xerial-snappy/blob/master/LICENSE)
- github.com/eapache/queue [MIT](https://github.com/eapache/queue/blob/master/LICENSE)
- github.com/eclipse/paho.mqtt.golang [ECLIPSE](https://github.com/eclipse/paho.mqtt.golang/blob/master/LICENSE)
- github.com/fsnotify/fsnotify [BSD](https://github.com/fsnotify/fsnotify/blob/master/LICENSE)
- github.com/fsouza/go-dockerclient [BSD](https://github.com/fsouza/go-dockerclient/blob/master/LICENSE)
- github.com/gobwas/glob [MIT](https://github.com/gobwas/glob/blob/master/LICENSE)
- github.com/google/go-cmp [BSD](https://github.com/google/go-cmp/blob/master/LICENSE)
@@ -55,7 +54,6 @@ following works:
- github.com/miekg/dns [BSD](https://github.com/miekg/dns/blob/master/LICENSE)
- github.com/naoina/go-stringutil [MIT](https://github.com/naoina/go-stringutil/blob/master/LICENSE)
- github.com/naoina/toml [MIT](https://github.com/naoina/toml/blob/master/LICENSE)
- github.com/nats-io/gnatsd [MIT](https://github.com/nats-io/gnatsd/blob/master/LICENSE)
- github.com/nats-io/go-nats [MIT](https://github.com/nats-io/go-nats/blob/master/LICENSE)
- github.com/nats-io/nats [MIT](https://github.com/nats-io/nats/blob/master/LICENSE)
- github.com/nats-io/nuid [MIT](https://github.com/nats-io/nuid/blob/master/LICENSE)
@@ -84,10 +82,6 @@ following works:
- github.com/streadway/amqp [BSD](https://github.com/streadway/amqp/blob/master/LICENSE)
- github.com/stretchr/objx [MIT](https://github.com/stretchr/objx/blob/master/LICENSE.md)
- github.com/stretchr/testify [MIT](https://github.com/stretchr/testify/blob/master/LICENCE.txt)
- github.com/tidwall/gjson [MIT](https://github.com/tidwall/gjson/blob/master/LICENSE)
- github.com/tidwall/match [MIT](https://github.com/tidwall/match/blob/master/LICENSE)
- github.com/mitchellh/mapstructure [MIT](https://github.com/mitchellh/mapstructure/blob/master/LICENSE)
- github.com/multiplay/go-ts3 [BSD](https://github.com/multiplay/go-ts3/blob/master/LICENSE)
- github.com/vjeantet/grok [APACHE](https://github.com/vjeantet/grok/blob/master/LICENSE)
- github.com/wvanbergen/kafka [MIT](https://github.com/wvanbergen/kafka/blob/master/LICENSE)
- github.com/wvanbergen/kazoo-go [MIT](https://github.com/wvanbergen/kazoo-go/blob/master/MIT-LICENSE)
@@ -100,6 +94,7 @@ following works:
- gopkg.in/asn1-ber.v1 [MIT](https://github.com/go-asn1-ber/asn1-ber/blob/v1.2/LICENSE)
- gopkg.in/dancannon/gorethink.v1 [APACHE](https://github.com/dancannon/gorethink/blob/v1.1.2/LICENSE)
- gopkg.in/fatih/pool.v2 [MIT](https://github.com/fatih/pool/blob/v2.0.0/LICENSE)
- gopkg.in/fsnotify.v1 [BSD](https://github.com/fsnotify/fsnotify/blob/v1.4.2/LICENSE)
- gopkg.in/ldap.v2 [MIT](https://github.com/go-ldap/ldap/blob/v2.5.0/LICENSE)
- gopkg.in/mgo.v2 [BSD](https://github.com/go-mgo/mgo/blob/v2/LICENSE)
- gopkg.in/olivere/elastic.v5 [MIT](https://github.com/olivere/elastic/blob/v5.0.38/LICENSE)


@@ -38,7 +38,7 @@ Telegraf can manage its own service through the --service flag:
| `telegraf.exe --service stop` | Stop the telegraf service |
Troubleshooting common error #1067
Trobleshooting common error #1067
When installing as a service in Windows, always double-check that you specify the full path of the config file; otherwise the Windows service will fail to start

File diff suppressed because it is too large


@@ -63,8 +63,8 @@
# The full HTTP or UDP endpoint URL for your InfluxDB instance.
# Multiple urls can be specified but it is assumed that they are part of the same
# cluster; this means that only ONE of the urls will be written to each interval.
# urls = ["udp://127.0.0.1:8089"] # UDP endpoint example
urls = ["http://127.0.0.1:8086"] # required
# urls = ["udp://localhost:8089"] # UDP endpoint example
urls = ["http://localhost:8086"] # required
# The target database for metrics (telegraf will create it if not exists)
database = "telegraf" # required
# Precision of writes, valid values are "ns", "us" (or "µs"), "ms", "s", "m", "h".


@@ -77,40 +77,3 @@ func compileFilterNoGlob(filters []string) Filter {
}
return &out
}
type IncludeExcludeFilter struct {
include Filter
exclude Filter
}
func NewIncludeExcludeFilter(
include []string,
exclude []string,
) (Filter, error) {
in, err := Compile(include)
if err != nil {
return nil, err
}
ex, err := Compile(exclude)
if err != nil {
return nil, err
}
return &IncludeExcludeFilter{in, ex}, nil
}
func (f *IncludeExcludeFilter) Match(s string) bool {
if f.include != nil {
if !f.include.Match(s) {
return false
}
}
if f.exclude != nil {
if f.exclude.Match(s) {
return false
}
}
return true
}
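
Hypothetical use of the `IncludeExcludeFilter` removed above: a string matches when it passes the include list and is absent from the exclude list.

```go
// Sketch assuming this internal filter package; the names match the code above.
f, err := NewIncludeExcludeFilter([]string{"cpu*"}, []string{"cpu_guest"})
if err != nil {
	log.Fatal(err)
}
fmt.Println(f.Match("cpu_user"))  // true
fmt.Println(f.Match("cpu_guest")) // false: excluded
fmt.Println(f.Match("mem_used"))  // false: not in the include list
```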


@@ -9,7 +9,6 @@ import (
"math"
"os"
"path/filepath"
"regexp"
"runtime"
"sort"
@@ -41,11 +40,6 @@ var (
// envVarRe is a regex to find environment variables in the config file
envVarRe = regexp.MustCompile(`\$\w+`)
envVarEscaper = strings.NewReplacer(
`"`, `\"`,
`\`, `\\`,
)
)
// Config specifies the URL/user/password for the database that telegraf
@@ -132,7 +126,7 @@ type AgentConfig struct {
// TODO(cam): Remove UTC and parameter, they are no longer
// valid for the agent config. Leaving them here for now for backwards-
// compatibility
// compatability
UTC bool `toml:"utc"`
// Debug is the option for running in debug mode
@@ -689,17 +683,12 @@ func (c *Config) LoadConfig(path string) error {
}
// trimBOM trims the Byte-Order-Marks from the beginning of the file.
// this is for Windows compatibility only.
// this is for Windows compatability only.
// see https://github.com/influxdata/telegraf/issues/1378
func trimBOM(f []byte) []byte {
return bytes.TrimPrefix(f, []byte("\xef\xbb\xbf"))
}
// escapeEnv escapes a value for inserting into a TOML string.
func escapeEnv(value string) string {
return envVarEscaper.Replace(value)
}
// parseFile loads a TOML configuration from a provided path and
// returns the AST produced from the TOML parser. When loading the file, it
// will find environment variables and replace them.
@@ -713,9 +702,8 @@ func parseFile(fpath string) (*ast.Table, error) {
env_vars := envVarRe.FindAll(contents, -1)
for _, env_var := range env_vars {
env_val, ok := os.LookupEnv(strings.TrimPrefix(string(env_var), "$"))
if ok {
env_val = escapeEnv(env_val)
env_val := os.Getenv(strings.TrimPrefix(string(env_var), "$"))
if env_val != "" {
contents = bytes.Replace(contents, env_var, []byte(env_val), 1)
}
}
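
A sketch of why the added escaping matters (the variable name and value here are illustrative, not from the diff): the environment value is spliced into a quoted TOML string, so quotes and backslashes must be escaped or the file no longer parses.

```go
os.Setenv("PASSWD", `p"ss\word`)
// config line:        password = "$PASSWD"
// without escaping -> password = "p"ss\word"    (invalid TOML)
// with escapeEnv   -> password = "p\"ss\\word"  (parses back to p"ss\word)
```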
@@ -1273,47 +1261,6 @@ func buildParser(name string, tbl *ast.Table) (parsers.Parser, error) {
}
}
if node, ok := tbl.Fields["dropwizard_metric_registry_path"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
c.DropwizardMetricRegistryPath = str.Value
}
}
}
if node, ok := tbl.Fields["dropwizard_time_path"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
c.DropwizardTimePath = str.Value
}
}
}
if node, ok := tbl.Fields["dropwizard_time_format"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
c.DropwizardTimeFormat = str.Value
}
}
}
if node, ok := tbl.Fields["dropwizard_tags_path"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
c.DropwizardTagsPath = str.Value
}
}
}
c.DropwizardTagPathsMap = make(map[string]string)
if node, ok := tbl.Fields["dropwizard_tag_paths"]; ok {
if subtbl, ok := node.(*ast.Table); ok {
for name, val := range subtbl.Fields {
if kv, ok := val.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
c.DropwizardTagPathsMap[name] = str.Value
}
}
}
}
}
c.MetricName = name
delete(tbl.Fields, "data_format")
@@ -1324,11 +1271,6 @@ func buildParser(name string, tbl *ast.Table) (parsers.Parser, error) {
delete(tbl.Fields, "collectd_auth_file")
delete(tbl.Fields, "collectd_security_level")
delete(tbl.Fields, "collectd_typesdb")
delete(tbl.Fields, "dropwizard_metric_registry_path")
delete(tbl.Fields, "dropwizard_time_path")
delete(tbl.Fields, "dropwizard_time_format")
delete(tbl.Fields, "dropwizard_tags_path")
delete(tbl.Fields, "dropwizard_tag_paths")
return parsers.NewParser(c)
}
@@ -1367,42 +1309,6 @@ func buildSerializer(name string, tbl *ast.Table) (serializers.Serializer, error
}
}
if node, ok := tbl.Fields["influx_max_line_bytes"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if integer, ok := kv.Value.(*ast.Integer); ok {
v, err := integer.Int()
if err != nil {
return nil, err
}
c.InfluxMaxLineBytes = int(v)
}
}
}
if node, ok := tbl.Fields["influx_sort_fields"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if b, ok := kv.Value.(*ast.Boolean); ok {
var err error
c.InfluxSortFields, err = b.Boolean()
if err != nil {
return nil, err
}
}
}
}
if node, ok := tbl.Fields["influx_uint_support"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if b, ok := kv.Value.(*ast.Boolean); ok {
var err error
c.InfluxUintSupport, err = b.Boolean()
if err != nil {
return nil, err
}
}
}
}
if node, ok := tbl.Fields["json_timestamp_units"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
@@ -1419,9 +1325,6 @@ func buildSerializer(name string, tbl *ast.Table) (serializers.Serializer, error
}
}
delete(tbl.Fields, "influx_max_line_bytes")
delete(tbl.Fields, "influx_sort_fields")
delete(tbl.Fields, "influx_uint_support")
delete(tbl.Fields, "data_format")
delete(tbl.Fields, "prefix")
delete(tbl.Fields, "template")


@@ -40,13 +40,9 @@ func TestSnakeCase(t *testing.T) {
var (
sleepbin, _ = exec.LookPath("sleep")
echobin, _ = exec.LookPath("echo")
shell, _ = exec.LookPath("sh")
)
func TestRunTimeout(t *testing.T) {
if testing.Short() {
t.Skip("Skipping test due to random failures.")
}
if sleepbin == "" {
t.Skip("'sleep' binary not available on OS, skipping.")
}
@@ -61,8 +57,6 @@ func TestRunTimeout(t *testing.T) {
}
func TestCombinedOutputTimeout(t *testing.T) {
// TODO: Fix this test
t.Skip("Test failing too often, skip for now and revisit later.")
if sleepbin == "" {
t.Skip("'sleep' binary not available on OS, skipping.")
}
@@ -90,13 +84,13 @@ func TestCombinedOutput(t *testing.T) {
// test that CombinedOutputTimeout and exec.Cmd.CombinedOutput return
// the same output from a failed command.
func TestCombinedOutputError(t *testing.T) {
if shell == "" {
t.Skip("'sh' binary not available on OS, skipping.")
if sleepbin == "" {
t.Skip("'sleep' binary not available on OS, skipping.")
}
cmd := exec.Command(shell, "-c", "false")
cmd := exec.Command(sleepbin, "foo")
expected, err := cmd.CombinedOutput()
cmd2 := exec.Command(shell, "-c", "false")
cmd2 := exec.Command(sleepbin, "foo")
actual, err := CombinedOutputTimeout(cmd2, time.Second)
assert.Error(t, err)
@@ -104,18 +98,16 @@ func TestCombinedOutputError(t *testing.T) {
}
func TestRunError(t *testing.T) {
if shell == "" {
t.Skip("'sh' binary not available on OS, skipping.")
if sleepbin == "" {
t.Skip("'sleep' binary not available on OS, skipping.")
}
cmd := exec.Command(shell, "-c", "false")
cmd := exec.Command(sleepbin, "foo")
err := RunTimeout(cmd, time.Second)
assert.Error(t, err)
}
func TestRandomSleep(t *testing.T) {
// TODO: Fix this test
t.Skip("Test failing too often, skip for now and revisit later.")
// test that zero max returns immediately
s := time.Now()
RandomSleep(time.Duration(0), make(chan struct{}))


@@ -2,6 +2,8 @@ package models
import (
"log"
"math"
"strings"
"time"
"github.com/influxdata/telegraf"
@@ -76,6 +78,90 @@ func makemetric(
}
}
for k, v := range tags {
if strings.HasSuffix(k, `\`) {
log.Printf("D! Measurement [%s] tag [%s] "+
"ends with a backslash, skipping", measurement, k)
delete(tags, k)
continue
} else if strings.HasSuffix(v, `\`) {
log.Printf("D! Measurement [%s] tag [%s] has a value "+
"ending with a backslash, skipping", measurement, k)
delete(tags, k)
continue
}
}
for k, v := range fields {
if strings.HasSuffix(k, `\`) {
log.Printf("D! Measurement [%s] field [%s] "+
"ends with a backslash, skipping", measurement, k)
delete(fields, k)
continue
}
// Validate uint64 and float64 fields
// convert all int & uint types to int64
switch val := v.(type) {
case nil:
// delete nil fields
delete(fields, k)
case uint:
fields[k] = int64(val)
continue
case uint8:
fields[k] = int64(val)
continue
case uint16:
fields[k] = int64(val)
continue
case uint32:
fields[k] = int64(val)
continue
case int:
fields[k] = int64(val)
continue
case int8:
fields[k] = int64(val)
continue
case int16:
fields[k] = int64(val)
continue
case int32:
fields[k] = int64(val)
continue
case uint64:
// InfluxDB does not support writing uint64
if val < uint64(9223372036854775808) {
fields[k] = int64(val)
} else {
fields[k] = int64(9223372036854775807)
}
continue
case float32:
fields[k] = float64(val)
continue
case float64:
// NaN and Inf are invalid values in influxdb, skip the field
if math.IsNaN(val) || math.IsInf(val, 0) {
log.Printf("D! Measurement [%s] field [%s] has a NaN or Inf "+
"field, skipping",
measurement, k)
delete(fields, k)
continue
}
case string:
if strings.HasSuffix(val, `\`) {
log.Printf("D! Measurement [%s] field [%s] has a value "+
"ending with a backslash, skipping", measurement, k)
delete(fields, k)
continue
}
fields[k] = v
default:
fields[k] = v
}
}
m, err := metric.New(measurement, tags, fields, t, mType)
if err != nil {
log.Printf("Error adding point [%s]: %s\n", measurement, err.Error())


@@ -114,6 +114,7 @@ func (r *RunningAggregator) reset() {
// for period ticks to tell it when to push and reset the aggregator.
func (r *RunningAggregator) Run(
acc telegraf.Accumulator,
now time.Time,
shutdown chan struct{},
) {
// The start of the period is truncated to the nearest second.
@@ -132,7 +133,6 @@ func (r *RunningAggregator) Run(
// 2nd interval: 00:10 - 00:20.5
// etc.
//
now := time.Now()
r.periodStart = now.Truncate(time.Second)
truncation := now.Sub(r.periodStart)
r.periodEnd = r.periodStart.Add(r.Config.Period)
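
A worked example of that bookkeeping, assuming `Period = 10s` and the aggregator starting at 00:00.5 as in the comment above:

```go
now := time.Date(2017, 8, 16, 0, 0, 0, int(500*time.Millisecond), time.UTC)
periodStart := now.Truncate(time.Second)       // 00:00.0
truncation := now.Sub(periodStart)             // 500ms
periodEnd := periodStart.Add(10 * time.Second) // 00:10.0; the first push is
_ = truncation                                 // later stretched by truncation
```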


@@ -1,6 +1,7 @@
package models
import (
"fmt"
"sync"
"sync/atomic"
"testing"
@@ -23,7 +24,7 @@ func TestAdd(t *testing.T) {
})
assert.NoError(t, ra.Config.Filter.Compile())
acc := testutil.Accumulator{}
go ra.Run(&acc, make(chan struct{}))
go ra.Run(&acc, time.Now(), make(chan struct{}))
m := ra.MakeMetric(
"RITest",
@@ -54,7 +55,7 @@ func TestAddMetricsOutsideCurrentPeriod(t *testing.T) {
})
assert.NoError(t, ra.Config.Filter.Compile())
acc := testutil.Accumulator{}
go ra.Run(&acc, make(chan struct{}))
go ra.Run(&acc, time.Now(), make(chan struct{}))
// metric before current period
m := ra.MakeMetric(
@@ -112,7 +113,7 @@ func TestAddAndPushOnePeriod(t *testing.T) {
wg.Add(1)
go func() {
defer wg.Done()
ra.Run(&acc, shutdown)
ra.Run(&acc, time.Now(), shutdown)
}()
m := ra.MakeMetric(
@@ -166,6 +167,69 @@ func TestAddDropOriginal(t *testing.T) {
assert.False(t, ra.Add(m2))
}
// make an untyped, counter, & gauge metric
func TestMakeMetricA(t *testing.T) {
now := time.Now()
ra := NewRunningAggregator(&TestAggregator{}, &AggregatorConfig{
Name: "TestRunningAggregator",
})
assert.Equal(t, "aggregators.TestRunningAggregator", ra.Name())
m := ra.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
now,
)
assert.Equal(
t,
fmt.Sprintf("RITest value=101i %d\n", now.UnixNano()),
m.String(),
)
assert.Equal(
t,
m.Type(),
telegraf.Untyped,
)
m = ra.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Counter,
now,
)
assert.Equal(
t,
fmt.Sprintf("RITest value=101i %d\n", now.UnixNano()),
m.String(),
)
assert.Equal(
t,
m.Type(),
telegraf.Counter,
)
m = ra.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Gauge,
now,
)
assert.Equal(
t,
fmt.Sprintf("RITest value=101i %d\n", now.UnixNano()),
m.String(),
)
assert.Equal(
t,
m.Type(),
telegraf.Gauge,
)
}
type TestAggregator struct {
sum int64
}


@@ -5,7 +5,6 @@ import (
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/serializers/influx"
"github.com/influxdata/telegraf/selfstat"
)
@@ -76,11 +75,7 @@ func (r *RunningInput) MakeMetric(
)
if r.trace && m != nil {
s := influx.NewSerializer()
octets, err := s.Serialize(m)
if err == nil {
fmt.Print("> " + string(octets))
}
fmt.Print("> " + m.String())
}
r.MetricsGathered.Incr(1)


@@ -1,11 +1,12 @@
package models
import (
"fmt"
"math"
"testing"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/metric"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -44,17 +45,77 @@ func TestMakeMetricNilFields(t *testing.T) {
telegraf.Untyped,
now,
)
assert.Equal(
t,
fmt.Sprintf("RITest value=101i %d\n", now.UnixNano()),
m.String(),
)
}
expected, err := metric.New("RITest",
// make an untyped, counter, & gauge metric
func TestMakeMetric(t *testing.T) {
now := time.Now()
ri := NewRunningInput(&testInput{}, &InputConfig{
Name: "TestRunningInput",
})
ri.SetTrace(true)
assert.Equal(t, true, ri.Trace())
assert.Equal(t, "inputs.TestRunningInput", ri.Name())
m := ri.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
map[string]interface{}{
"value": int(101),
},
telegraf.Untyped,
now,
)
require.NoError(t, err)
assert.Equal(
t,
fmt.Sprintf("RITest value=101i %d\n", now.UnixNano()),
m.String(),
)
assert.Equal(
t,
m.Type(),
telegraf.Untyped,
)
require.Equal(t, expected, m)
m = ri.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Counter,
now,
)
assert.Equal(
t,
fmt.Sprintf("RITest value=101i %d\n", now.UnixNano()),
m.String(),
)
assert.Equal(
t,
m.Type(),
telegraf.Counter,
)
m = ri.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Gauge,
now,
)
assert.Equal(
t,
fmt.Sprintf("RITest value=101i %d\n", now.UnixNano()),
m.String(),
)
assert.Equal(
t,
m.Type(),
telegraf.Gauge,
)
}
func TestMakeMetricWithPluginTags(t *testing.T) {
@@ -76,18 +137,11 @@ func TestMakeMetricWithPluginTags(t *testing.T) {
telegraf.Untyped,
now,
)
expected, err := metric.New("RITest",
map[string]string{
"foo": "bar",
},
map[string]interface{}{
"value": 101,
},
now,
assert.Equal(
t,
fmt.Sprintf("RITest,foo=bar value=101i %d\n", now.UnixNano()),
m.String(),
)
require.NoError(t, err)
require.Equal(t, expected, m)
}
func TestMakeMetricFilteredOut(t *testing.T) {
@@ -133,17 +187,87 @@ func TestMakeMetricWithDaemonTags(t *testing.T) {
telegraf.Untyped,
now,
)
expected, err := metric.New("RITest",
map[string]string{
"foo": "bar",
},
assert.Equal(
t,
fmt.Sprintf("RITest,foo=bar value=101i %d\n", now.UnixNano()),
m.String(),
)
}
// make an untyped, counter, & gauge metric
func TestMakeMetricInfFields(t *testing.T) {
inf := math.Inf(1)
ninf := math.Inf(-1)
now := time.Now()
ri := NewRunningInput(&testInput{}, &InputConfig{
Name: "TestRunningInput",
})
ri.SetTrace(true)
assert.Equal(t, true, ri.Trace())
m := ri.MakeMetric(
"RITest",
map[string]interface{}{
"value": 101,
"value": int(101),
"inf": inf,
"ninf": ninf,
},
map[string]string{},
telegraf.Untyped,
now,
)
require.NoError(t, err)
require.Equal(t, expected, m)
assert.Equal(
t,
fmt.Sprintf("RITest value=101i %d\n", now.UnixNano()),
m.String(),
)
}
func TestMakeMetricAllFieldTypes(t *testing.T) {
now := time.Now()
ri := NewRunningInput(&testInput{}, &InputConfig{
Name: "TestRunningInput",
})
ri.SetTrace(true)
assert.Equal(t, true, ri.Trace())
m := ri.MakeMetric(
"RITest",
map[string]interface{}{
"a": int(10),
"b": int8(10),
"c": int16(10),
"d": int32(10),
"e": uint(10),
"f": uint8(10),
"g": uint16(10),
"h": uint32(10),
"i": uint64(10),
"j": float32(10),
"k": uint64(9223372036854775810),
"l": "foobar",
"m": true,
},
map[string]string{},
telegraf.Untyped,
now,
)
assert.Contains(t, m.String(), "a=10i")
assert.Contains(t, m.String(), "b=10i")
assert.Contains(t, m.String(), "c=10i")
assert.Contains(t, m.String(), "d=10i")
assert.Contains(t, m.String(), "e=10i")
assert.Contains(t, m.String(), "f=10i")
assert.Contains(t, m.String(), "g=10i")
assert.Contains(t, m.String(), "h=10i")
assert.Contains(t, m.String(), "i=10i")
assert.Contains(t, m.String(), "j=10")
assert.NotContains(t, m.String(), "j=10i")
assert.Contains(t, m.String(), "k=9223372036854775807i")
assert.Contains(t, m.String(), "l=\"foobar\"")
assert.Contains(t, m.String(), "m=true")
}
func TestMakeMetricNameOverride(t *testing.T) {
@@ -160,15 +284,11 @@ func TestMakeMetricNameOverride(t *testing.T) {
telegraf.Untyped,
now,
)
expected, err := metric.New("foobar",
nil,
map[string]interface{}{
"value": 101,
},
now,
assert.Equal(
t,
fmt.Sprintf("foobar value=101i %d\n", now.UnixNano()),
m.String(),
)
require.NoError(t, err)
require.Equal(t, expected, m)
}
func TestMakeMetricNamePrefix(t *testing.T) {
@@ -185,15 +305,11 @@ func TestMakeMetricNamePrefix(t *testing.T) {
telegraf.Untyped,
now,
)
expected, err := metric.New("foobar_RITest",
nil,
map[string]interface{}{
"value": 101,
},
now,
assert.Equal(
t,
fmt.Sprintf("foobar_RITest value=101i %d\n", now.UnixNano()),
m.String(),
)
require.NoError(t, err)
require.Equal(t, expected, m)
}
func TestMakeMetricNameSuffix(t *testing.T) {
@@ -210,15 +326,133 @@ func TestMakeMetricNameSuffix(t *testing.T) {
telegraf.Untyped,
now,
)
expected, err := metric.New("RITest_foobar",
nil,
map[string]interface{}{
"value": 101,
},
now,
assert.Equal(
t,
fmt.Sprintf("RITest_foobar value=101i %d\n", now.UnixNano()),
m.String(),
)
require.NoError(t, err)
require.Equal(t, expected, m)
}
func TestMakeMetric_TrailingSlash(t *testing.T) {
now := time.Now()
tests := []struct {
name string
measurement string
fields map[string]interface{}
tags map[string]string
expectedNil bool
expectedMeasurement string
expectedFields map[string]interface{}
expectedTags map[string]string
}{
{
name: "Measurement cannot have trailing slash",
measurement: `cpu\`,
fields: map[string]interface{}{
"value": int64(42),
},
tags: map[string]string{},
expectedNil: true,
},
{
name: "Field key with trailing slash dropped",
measurement: `cpu`,
fields: map[string]interface{}{
"value": int64(42),
`bad\`: `xyzzy`,
},
tags: map[string]string{},
expectedMeasurement: `cpu`,
expectedFields: map[string]interface{}{
"value": int64(42),
},
expectedTags: map[string]string{},
},
{
name: "Field value with trailing slash dropped",
measurement: `cpu`,
fields: map[string]interface{}{
"value": int64(42),
"bad": `xyzzy\`,
},
tags: map[string]string{},
expectedMeasurement: `cpu`,
expectedFields: map[string]interface{}{
"value": int64(42),
},
expectedTags: map[string]string{},
},
{
name: "Must have one field after dropped",
measurement: `cpu`,
fields: map[string]interface{}{
"bad": `xyzzy\`,
},
tags: map[string]string{},
expectedNil: true,
},
{
name: "Tag key with trailing slash dropped",
measurement: `cpu`,
fields: map[string]interface{}{
"value": int64(42),
},
tags: map[string]string{
`host\`: "localhost",
"a": "x",
},
expectedMeasurement: `cpu`,
expectedFields: map[string]interface{}{
"value": int64(42),
},
expectedTags: map[string]string{
"a": "x",
},
},
{
name: "Tag value with trailing slash dropped",
measurement: `cpu`,
fields: map[string]interface{}{
"value": int64(42),
},
tags: map[string]string{
`host`: `localhost\`,
"a": "x",
},
expectedMeasurement: `cpu`,
expectedFields: map[string]interface{}{
"value": int64(42),
},
expectedTags: map[string]string{
"a": "x",
},
},
}
ri := NewRunningInput(&testInput{}, &InputConfig{
Name: "TestRunningInput",
})
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
m := ri.MakeMetric(
tc.measurement,
tc.fields,
tc.tags,
telegraf.Untyped,
now)
if tc.expectedNil {
require.Nil(t, m)
} else {
require.NotNil(t, m)
require.Equal(t, tc.expectedMeasurement, m.Name())
require.Equal(t, tc.expectedFields, m.Fields())
require.Equal(t, tc.expectedTags, m.Tags())
}
})
}
}
type testInput struct{}


@@ -87,7 +87,7 @@ func NewRunningOutput(
map[string]string{"output": name},
),
}
ro.BufferLimit.Set(int64(ro.MetricBufferLimit))
ro.BufferLimit.Incr(int64(ro.MetricBufferLimit))
return ro
}


@@ -1,86 +0,0 @@
package templating
import (
"sort"
"strings"
)
const (
// DefaultSeparator is the default separation character to use when separating template parts.
DefaultSeparator = "."
)
// Engine uses a Matcher to retrieve the appropriate template and applies the template
// to the input string
type Engine struct {
joiner string
matcher *matcher
}
// Apply extracts the template fields from the given line and returns the measurement
// name, tags and field name
func (e *Engine) Apply(line string) (string, map[string]string, string, error) {
return e.matcher.match(line).Apply(line, e.joiner)
}
// NewEngine creates a new templating engine
func NewEngine(joiner string, defaultTemplate *Template, templates []string) (*Engine, error) {
engine := Engine{
joiner: joiner,
matcher: newMatcher(defaultTemplate),
}
templateSpecs := parseTemplateSpecs(templates)
for _, templateSpec := range templateSpecs {
if err := engine.matcher.addSpec(templateSpec); err != nil {
return nil, err
}
}
return &engine, nil
}
func parseTemplateSpecs(templates []string) templateSpecs {
tmplts := templateSpecs{}
for _, pattern := range templates {
tmplt := templateSpec{
separator: DefaultSeparator,
}
// Format is [separator] [filter] <template> [tag1=value1,tag2=value2]
parts := strings.Fields(pattern)
partsLength := len(parts)
if partsLength < 1 {
// ignore
continue
}
if partsLength == 1 {
tmplt.template = pattern
} else if partsLength == 4 {
tmplt.separator = parts[0]
tmplt.filter = parts[1]
tmplt.template = parts[2]
tmplt.tagstring = parts[3]
} else {
hasTagstring := strings.Contains(parts[partsLength-1], "=")
if hasTagstring {
tmplt.tagstring = parts[partsLength-1]
tmplt.template = parts[partsLength-2]
if partsLength == 3 {
tmplt.filter = parts[0]
}
} else {
tmplt.template = parts[partsLength-1]
if partsLength == 2 {
tmplt.filter = parts[0]
} else { // length == 3
tmplt.separator = parts[0]
tmplt.filter = parts[1]
}
}
}
tmplts = append(tmplts, tmplt)
}
sort.Sort(tmplts)
return tmplts
}
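
Hypothetical use of the engine removed above: one default template plus a filtered template for metrics under `servers.`.

```go
// Sketch assuming this templating package; the expected results follow from
// parseTemplateSpecs and Template.Apply shown in this diff.
engine, err := templating.NewEngine("_", nil, []string{
	"servers.* .host.measurement.field", // filter + template
	"measurement*",                      // default template (no filter)
})
if err != nil {
	log.Fatal(err)
}
name, tags, field, _ := engine.Apply("servers.localhost.cpu.usage")
fmt.Println(name, tags, field) // cpu map[host:localhost] usage
```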


@@ -1,58 +0,0 @@
package templating
import (
"strings"
)
// matcher determines which template should be applied to a given metric
// based on a filter tree.
type matcher struct {
root *node
defaultTemplate *Template
}
// newMatcher creates a new matcher.
func newMatcher(defaultTemplate *Template) *matcher {
return &matcher{
root: &node{},
defaultTemplate: defaultTemplate,
}
}
func (m *matcher) addSpec(tmplt templateSpec) error {
// Parse out the default tags specific to this template
tags := map[string]string{}
if tmplt.tagstring != "" {
for _, kv := range strings.Split(tmplt.tagstring, ",") {
parts := strings.Split(kv, "=")
tags[parts[0]] = parts[1]
}
}
tmpl, err := NewTemplate(tmplt.separator, tmplt.template, tags)
if err != nil {
return err
}
m.add(tmplt.filter, tmpl)
return nil
}
// add inserts the template in the filter tree based the given filter
func (m *matcher) add(filter string, template *Template) {
if filter == "" {
m.defaultTemplate = template
m.root.separator = template.separator
return
}
m.root.insert(filter, template)
}
// match returns the template that matches the given measurement line.
// If no template matches, the default template is returned.
func (m *matcher) match(line string) *Template {
tmpl := m.root.search(line)
if tmpl != nil {
return tmpl
}
return m.defaultTemplate
}


@@ -1,122 +0,0 @@
package templating
import (
"sort"
"strings"
)
// node is an item in a sorted k-ary tree of filter parts. Each child is sorted by its part value.
// The special value of "*", is always sorted last.
type node struct {
separator string
value string
children nodes
template *Template
}
// insert inserts the given string template into the tree. The filter string is separated
// on the template separator and each part is used as the path in the tree.
func (n *node) insert(filter string, template *Template) {
n.separator = template.separator
n.recursiveInsert(strings.Split(filter, n.separator), template)
}
// recursiveInsert does the actual recursive insertion
func (n *node) recursiveInsert(values []string, template *Template) {
// Add the end, set the template
if len(values) == 0 {
n.template = template
return
}
// See if the current element already exists in the tree. If so, insert
// into that sub-tree
for _, v := range n.children {
if v.value == values[0] {
v.recursiveInsert(values[1:], template)
return
}
}
// New element, add it to the tree and sort the children
newNode := &node{value: values[0]}
n.children = append(n.children, newNode)
sort.Sort(&n.children)
// Now insert the rest of the tree into the new element
newNode.recursiveInsert(values[1:], template)
}
// search searches for a template matching the input string
func (n *node) search(line string) *Template {
separator := n.separator
return n.recursiveSearch(strings.Split(line, separator))
}
// recursiveSearch performs the actual recursive search
func (n *node) recursiveSearch(lineParts []string) *Template {
// Nothing to search
if len(lineParts) == 0 || len(n.children) == 0 {
return n.template
}
// If the last element is a wildcard, don't include it in this search: it is
// sorted to the end, but lexicographically it would not always be last, and
// sort.Search assumes the slice is sorted.
length := len(n.children)
if n.children[length-1].value == "*" {
length--
}
// Find the index of child with an exact match
i := sort.Search(length, func(i int) bool {
return n.children[i].value >= lineParts[0]
})
// Found an exact match, so search that child sub-tree
if i < len(n.children) && n.children[i].value == lineParts[0] {
return n.children[i].recursiveSearch(lineParts[1:])
}
// Not an exact match, see if we have a wildcard child to search
if n.children[len(n.children)-1].value == "*" {
return n.children[len(n.children)-1].recursiveSearch(lineParts[1:])
}
return n.template
}
// nodes is simply an array of nodes implementing the sorting interface.
type nodes []*node
// Less returns a boolean indicating whether the filter at position j
// is less than the filter at position k. Filters are ordered by string
// comparison of each component part. A wildcard value "*" is never
// less than a non-wildcard value.
//
// For example, the filters:
// "*.*"
// "servers.*"
// "servers.localhost"
// "*.localhost"
//
// Would be sorted as:
// "servers.localhost"
// "servers.*"
// "*.localhost"
// "*.*"
func (n *nodes) Less(j, k int) bool {
if (*n)[j].value == "*" && (*n)[k].value != "*" {
return false
}
if (*n)[j].value != "*" && (*n)[k].value == "*" {
return true
}
return (*n)[j].value < (*n)[k].value
}
// Swap swaps two elements of the array
func (n *nodes) Swap(i, j int) { (*n)[i], (*n)[j] = (*n)[j], (*n)[i] }
// Len returns the length of the array
func (n *nodes) Len() int { return len(*n) }


@@ -1,148 +0,0 @@
package templating
import (
"fmt"
"strings"
)
// Template represents a pattern and tags to map a metric string to an InfluxDB Point
type Template struct {
separator string
parts []string
defaultTags map[string]string
greedyField bool
greedyMeasurement bool
}
// apply extracts the template fields from the given line and returns the measurement
// name, tags and field name
func (t *Template) Apply(line string, joiner string) (string, map[string]string, string, error) {
fields := strings.Split(line, t.separator)
var (
measurement []string
tags = make(map[string][]string)
field []string
)
// Set any default tags
for k, v := range t.defaultTags {
tags[k] = append(tags[k], v)
}
// See if an invalid combination has been specified in the template:
for _, tag := range t.parts {
if tag == "measurement*" {
t.greedyMeasurement = true
} else if tag == "field*" {
t.greedyField = true
}
}
if t.greedyField && t.greedyMeasurement {
return "", nil, "",
fmt.Errorf("either 'field*' or 'measurement*' can be used in each "+
"template (but not both together): %q",
strings.Join(t.parts, joiner))
}
for i, tag := range t.parts {
if i >= len(fields) {
continue
}
if tag == "" {
continue
}
switch tag {
case "measurement":
measurement = append(measurement, fields[i])
case "field":
field = append(field, fields[i])
case "field*":
field = append(field, fields[i:]...)
break
case "measurement*":
measurement = append(measurement, fields[i:]...)
break
default:
tags[tag] = append(tags[tag], fields[i])
}
}
// Convert to map of strings.
outtags := make(map[string]string)
for k, values := range tags {
outtags[k] = strings.Join(values, joiner)
}
return strings.Join(measurement, joiner), outtags, strings.Join(field, joiner), nil
}
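// Illustrative usage sketch for Apply (hypothetical values): a three-part
// template maps a dotted graphite name to a measurement, one tag, and a
// field name.
func applySketch() {
	tmpl, err := NewTemplate(".", "measurement.host.field", nil)
	if err != nil {
		return
	}
	name, tags, field, _ := tmpl.Apply("cpu.server01.load", "_")
	_ = name  // "cpu"
	_ = tags  // map[string]string{"host": "server01"}
	_ = field // "load"
}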
// NewDefaultTemplateWithPattern creates a template from the given pattern
// using the package's default separator.
func NewDefaultTemplateWithPattern(pattern string) (*Template, error) {
return NewTemplate(DefaultSeparator, pattern, nil)
}
// NewTemplate returns a new template ensuring it has a measurement
// specified.
func NewTemplate(separator string, pattern string, defaultTags map[string]string) (*Template, error) {
parts := strings.Split(pattern, separator)
hasMeasurement := false
template := &Template{
separator: separator,
parts: parts,
defaultTags: defaultTags,
}
for _, part := range parts {
if strings.HasPrefix(part, "measurement") {
hasMeasurement = true
}
if part == "measurement*" {
template.greedyMeasurement = true
} else if part == "field*" {
template.greedyField = true
}
}
if !hasMeasurement {
return nil, fmt.Errorf("no measurement specified for template. %q", pattern)
}
return template, nil
}
// templateSpec is a template string split into its constituent parts
type templateSpec struct {
separator string
filter string
template string
tagstring string
}
// templateSpecs is a slice of template specs implementing sort.Interface
type templateSpecs []templateSpec
// Less reports whether the element with
// index j should sort before the element with index k.
func (e templateSpecs) Less(j, k int) bool {
// specs without a filter sort before specs with one
if len(e[j].filter) == 0 && len(e[k].filter) != 0 {
return true
}
if len(e[j].filter) != 0 && len(e[k].filter) == 0 {
return false
}
// otherwise, shorter templates sort first
jlength := len(strings.Split(e[j].template, e[j].separator))
klength := len(strings.Split(e[k].template, e[k].separator))
return jlength < klength
}
// Swap swaps the elements with indexes i and j.
func (e templateSpecs) Swap(i, j int) { e[i], e[j] = e[j], e[i] }
// Len is the number of elements in the collection.
func (e templateSpecs) Len() int { return len(e) }

View File

@@ -13,54 +13,50 @@ const (
Counter
Gauge
Untyped
Summary
Histogram
)
type Tag struct {
Key string
Value string
}
type Field struct {
Key string
Value interface{}
}
type Metric interface {
// Getting data structure functions
Name() string
Tags() map[string]string
TagList() []*Tag
Fields() map[string]interface{}
FieldList() []*Field
Time() time.Time
Type() ValueType
// Name functions
SetName(name string)
AddPrefix(prefix string)
AddSuffix(suffix string)
// Serialize serializes the metric into a line-protocol byte buffer,
// including a newline at the end.
Serialize() []byte
// SerializeTo is the same as Serialize, but avoids an allocation. It
// returns the number of bytes copied into dst.
SerializeTo(dst []byte) int
// String is the same as Serialize, but returns a string.
String() string
// Copy deep-copies the metric.
Copy() Metric
// Split will attempt to return multiple metrics with the same timestamp
// whose string representations are no longer than maxSize.
// Metrics with a single field may exceed the requested size.
Split(maxSize int) []Metric
// Tag functions
GetTag(key string) (string, bool)
HasTag(key string) bool
AddTag(key, value string)
RemoveTag(key string)
// Field functions
GetField(key string) (interface{}, bool)
HasField(key string) bool
AddField(key string, value interface{})
RemoveField(key string)
RemoveField(key string) error
// HashID returns a unique identifier for the series.
// Name functions
SetName(name string)
SetPrefix(prefix string)
SetSuffix(suffix string)
// Getting data structure functions
Name() string
Tags() map[string]string
Fields() map[string]interface{}
Time() time.Time
UnixNano() int64
Type() ValueType
Len() int // returns the length of the serialized metric, including newline
HashID() uint64
// Copy returns a deep copy of the Metric.
Copy() Metric
// Mark Metric as an aggregate
// aggregator things:
SetAggregate(bool)
IsAggregate() bool
}

View File

@@ -1,53 +0,0 @@
package metric
import (
"time"
"github.com/influxdata/telegraf"
)
type TimeFunc func() time.Time
type Builder struct {
TimeFunc
TimePrecision time.Duration
*metric
}
func NewBuilder() *Builder {
b := &Builder{
TimeFunc: time.Now,
TimePrecision: 1 * time.Nanosecond,
}
b.Reset()
return b
}
func (b *Builder) SetName(name string) {
b.name = name
}
func (b *Builder) AddTag(key string, value string) {
b.metric.AddTag(key, value)
}
func (b *Builder) AddField(key string, value interface{}) {
b.metric.AddField(key, value)
}
func (b *Builder) SetTime(tm time.Time) {
b.tm = tm
}
func (b *Builder) Reset() {
b.metric = &metric{}
}
func (b *Builder) Metric() (telegraf.Metric, error) {
if b.tm.IsZero() {
b.tm = b.TimeFunc().Truncate(b.TimePrecision)
}
return b.metric, nil
}

49
metric/escape.go Normal file
View File

@@ -0,0 +1,49 @@
package metric
import (
"strings"
)
var (
// escaper is for escaping:
// - tag keys
// - tag values
// - field keys
// see https://docs.influxdata.com/influxdb/v1.0/write_protocols/line_protocol_tutorial/#special-characters-and-keywords
escaper = strings.NewReplacer(`,`, `\,`, `"`, `\"`, ` `, `\ `, `=`, `\=`)
unEscaper = strings.NewReplacer(`\,`, `,`, `\"`, `"`, `\ `, ` `, `\=`, `=`)
// nameEscaper is for escaping measurement names only.
// see https://docs.influxdata.com/influxdb/v1.0/write_protocols/line_protocol_tutorial/#special-characters-and-keywords
nameEscaper = strings.NewReplacer(`,`, `\,`, ` `, `\ `)
nameUnEscaper = strings.NewReplacer(`\,`, `,`, `\ `, ` `)
// stringFieldEscaper is for escaping string field values only.
// see https://docs.influxdata.com/influxdb/v1.0/write_protocols/line_protocol_tutorial/#special-characters-and-keywords
stringFieldEscaper = strings.NewReplacer(`"`, `\"`)
stringFieldUnEscaper = strings.NewReplacer(`\"`, `"`)
)
func escape(s string, t string) string {
switch t {
case "fieldkey", "tagkey", "tagval":
return escaper.Replace(s)
case "name":
return nameEscaper.Replace(s)
case "fieldval":
return stringFieldEscaper.Replace(s)
}
return s
}
func unescape(s string, t string) string {
switch t {
case "fieldkey", "tagkey", "tagval":
return unEscaper.Replace(s)
case "name":
return nameUnEscaper.Replace(s)
case "fieldval":
return stringFieldUnEscaper.Replace(s)
}
return s
}
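// Illustrative round-trip sketch for the replacers above (hypothetical
// helper and values):
func escapeRoundTripSketch() string {
	s := escape(`region=us west`, "tagkey") // `region\=us\ west`
	return unescape(s, "tagkey")            // "region=us west"
}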

View File

@@ -0,0 +1,38 @@
package metric
import (
"reflect"
"strconv"
"unsafe"
)
// parseIntBytes is a zero-alloc wrapper around strconv.ParseInt.
func parseIntBytes(b []byte, base int, bitSize int) (i int64, err error) {
s := unsafeBytesToString(b)
return strconv.ParseInt(s, base, bitSize)
}
// parseFloatBytes is a zero-alloc wrapper around strconv.ParseFloat.
func parseFloatBytes(b []byte, bitSize int) (float64, error) {
s := unsafeBytesToString(b)
return strconv.ParseFloat(s, bitSize)
}
// parseBoolBytes is a zero-alloc wrapper around strconv.ParseBool.
func parseBoolBytes(b []byte) (bool, error) {
return strconv.ParseBool(unsafeBytesToString(b))
}
// unsafeBytesToString converts a []byte to a string without a heap allocation.
//
// It is unsafe, and is intended to prepare input to short-lived functions
// that require strings.
func unsafeBytesToString(in []byte) string {
src := *(*reflect.SliceHeader)(unsafe.Pointer(&in))
dst := reflect.StringHeader{
Data: src.Data,
Len: src.Len,
}
s := *(*string)(unsafe.Pointer(&dst))
return s
}
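// Illustrative usage sketch (hypothetical helper): the returned string
// aliases the input bytes, so the caller must not modify or retain the
// slice while the string is in use.
func parseIntSketch() (int64, error) {
	b := []byte("1480614053000000000")
	return parseIntBytes(b, 10, 64) // parses without allocating a string copy
}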

View File

@@ -0,0 +1,103 @@
package metric
import (
"strconv"
"testing"
"testing/quick"
)
func TestParseIntBytesEquivalenceFuzz(t *testing.T) {
f := func(b []byte, base int, bitSize int) bool {
exp, expErr := strconv.ParseInt(string(b), base, bitSize)
got, gotErr := parseIntBytes(b, base, bitSize)
return exp == got && checkErrs(expErr, gotErr)
}
cfg := &quick.Config{
MaxCount: 10000,
}
if err := quick.Check(f, cfg); err != nil {
t.Fatal(err)
}
}
func TestParseIntBytesValid64bitBase10EquivalenceFuzz(t *testing.T) {
buf := []byte{}
f := func(n int64) bool {
buf = strconv.AppendInt(buf[:0], n, 10)
exp, expErr := strconv.ParseInt(string(buf), 10, 64)
got, gotErr := parseIntBytes(buf, 10, 64)
return exp == got && checkErrs(expErr, gotErr)
}
cfg := &quick.Config{
MaxCount: 10000,
}
if err := quick.Check(f, cfg); err != nil {
t.Fatal(err)
}
}
func TestParseFloatBytesEquivalenceFuzz(t *testing.T) {
f := func(b []byte, bitSize int) bool {
exp, expErr := strconv.ParseFloat(string(b), bitSize)
got, gotErr := parseFloatBytes(b, bitSize)
return exp == got && checkErrs(expErr, gotErr)
}
cfg := &quick.Config{
MaxCount: 10000,
}
if err := quick.Check(f, cfg); err != nil {
t.Fatal(err)
}
}
func TestParseFloatBytesValid64bitEquivalenceFuzz(t *testing.T) {
buf := []byte{}
f := func(n float64) bool {
buf = strconv.AppendFloat(buf[:0], n, 'f', -1, 64)
exp, expErr := strconv.ParseFloat(string(buf), 64)
got, gotErr := parseFloatBytes(buf, 64)
return exp == got && checkErrs(expErr, gotErr)
}
cfg := &quick.Config{
MaxCount: 10000,
}
if err := quick.Check(f, cfg); err != nil {
t.Fatal(err)
}
}
func TestParseBoolBytesEquivalence(t *testing.T) {
var buf []byte
for _, s := range []string{"1", "t", "T", "TRUE", "true", "True", "0", "f", "F", "FALSE", "false", "False", "fail", "TrUe", "FAlSE", "numbers", ""} {
buf = append(buf[:0], s...)
exp, expErr := strconv.ParseBool(s)
got, gotErr := parseBoolBytes(buf)
if got != exp || !checkErrs(expErr, gotErr) {
t.Errorf("Failed to parse boolean value %q correctly: wanted (%t, %v), got (%t, %v)", s, exp, expErr, got, gotErr)
}
}
}
func checkErrs(a, b error) bool {
if (a == nil) != (b == nil) {
return false
}
return a == nil || a.Error() == b.Error()
}

View File

@@ -1,278 +1,588 @@
package metric
import (
"bytes"
"fmt"
"hash/fnv"
"sort"
"strconv"
"strings"
"time"
"github.com/influxdata/telegraf"
)
type metric struct {
name string
tags []*telegraf.Tag
fields []*telegraf.Field
tm time.Time
tp telegraf.ValueType
aggregate bool
}
const MaxInt = int(^uint(0) >> 1)
func New(
name string,
tags map[string]string,
fields map[string]interface{},
tm time.Time,
tp ...telegraf.ValueType,
t time.Time,
mType ...telegraf.ValueType,
) (telegraf.Metric, error) {
var vtype telegraf.ValueType
if len(tp) > 0 {
vtype = tp[0]
if len(fields) == 0 {
return nil, fmt.Errorf("Metric cannot be made without any fields")
}
if len(name) == 0 {
return nil, fmt.Errorf("Metric cannot be made with an empty name")
}
if strings.HasSuffix(name, `\`) {
return nil, fmt.Errorf("Metric cannot have measurement name ending with a backslash")
}
var thisType telegraf.ValueType
if len(mType) > 0 {
thisType = mType[0]
} else {
vtype = telegraf.Untyped
thisType = telegraf.Untyped
}
m := &metric{
name: name,
tags: nil,
fields: nil,
tm: tm,
tp: vtype,
name: []byte(escape(name, "name")),
t: []byte(fmt.Sprint(t.UnixNano())),
nsec: t.UnixNano(),
mType: thisType,
}
if len(tags) > 0 {
m.tags = make([]*telegraf.Tag, 0, len(tags))
for k, v := range tags {
m.tags = append(m.tags,
&telegraf.Tag{Key: k, Value: v})
// pre-allocate exact size of the tags slice
taglen := 0
for k, v := range tags {
if strings.HasSuffix(k, `\`) {
return nil, fmt.Errorf("Metric cannot have tag key ending with a backslash")
}
if strings.HasSuffix(v, `\`) {
return nil, fmt.Errorf("Metric cannot have tag value ending with a backslash")
}
sort.Slice(m.tags, func(i, j int) bool { return m.tags[i].Key < m.tags[j].Key })
}
m.fields = make([]*telegraf.Field, 0, len(fields))
for k, v := range fields {
v := convertField(v)
if v == nil {
if len(k) == 0 || len(v) == 0 {
continue
}
m.AddField(k, v)
taglen += 2 + len(escape(k, "tagkey")) + len(escape(v, "tagval"))
}
m.tags = make([]byte, taglen)
i := 0
for k, v := range tags {
if len(k) == 0 || len(v) == 0 {
continue
}
m.tags[i] = ','
i++
i += copy(m.tags[i:], escape(k, "tagkey"))
m.tags[i] = '='
i++
i += copy(m.tags[i:], escape(v, "tagval"))
}
// pre-allocate capacity of the fields slice
fieldlen := 0
for k, v := range fields {
if strings.HasSuffix(k, `\`) {
return nil, fmt.Errorf("Metric cannot have field key ending with a backslash")
}
switch val := v.(type) {
case string:
if strings.HasSuffix(val, `\`) {
return nil, fmt.Errorf("Metric cannot have field value ending with a backslash")
}
}
// 10 bytes is completely arbitrary, but will at least prevent some
// allocations. There's a small possibility this will cause slightly more
// allocations for a metric that has many short fields.
fieldlen += len(k) + 10
}
m.fields = make([]byte, 0, fieldlen)
i = 0
for k, v := range fields {
if i != 0 {
m.fields = append(m.fields, ',')
}
m.fields = appendField(m.fields, k, v)
i++
}
return m, nil
}
// indexUnescapedByte finds the index of the first byte equal to b in buf that
// is not escaped. Returns -1 if not found.
func indexUnescapedByte(buf []byte, b byte) int {
var keyi int
for {
i := bytes.IndexByte(buf[keyi:], b)
if i == -1 {
return -1
} else if i == 0 {
break
}
keyi += i
if buf[keyi-1] != '\\' {
break
} else {
keyi++
}
}
return keyi
}
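// Illustrative sketch of indexUnescapedByte semantics (hypothetical helper
// and values):
func indexSketch() (int, int) {
	i := indexUnescapedByte([]byte(`a\,b,c`), ',') // 4: the ',' at index 2 is escaped
	j := indexUnescapedByte([]byte(`abc`), ',')    // -1: not found
	return i, j
}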
type metric struct {
name []byte
tags []byte
fields []byte
t []byte
mType telegraf.ValueType
aggregate bool
// cached values for reuse in "get" functions
hashID uint64
nsec int64
}
func (m *metric) String() string {
return fmt.Sprintf("%s %v %v %d", m.name, m.Tags(), m.Fields(), m.tm.UnixNano())
}
func (m *metric) Name() string {
return m.name
}
func (m *metric) Tags() map[string]string {
tags := make(map[string]string, len(m.tags))
for _, tag := range m.tags {
tags[tag.Key] = tag.Value
}
return tags
}
func (m *metric) TagList() []*telegraf.Tag {
return m.tags
}
func (m *metric) Fields() map[string]interface{} {
fields := make(map[string]interface{}, len(m.fields))
for _, field := range m.fields {
fields[field.Key] = field.Value
}
return fields
}
func (m *metric) FieldList() []*telegraf.Field {
return m.fields
}
func (m *metric) Time() time.Time {
return m.tm
}
func (m *metric) Type() telegraf.ValueType {
return m.tp
}
func (m *metric) SetName(name string) {
m.name = name
}
func (m *metric) AddPrefix(prefix string) {
m.name = prefix + m.name
}
func (m *metric) AddSuffix(suffix string) {
m.name = m.name + suffix
}
func (m *metric) AddTag(key, value string) {
for i, tag := range m.tags {
if key > tag.Key {
continue
}
if key == tag.Key {
tag.Value = value
}
m.tags = append(m.tags, nil)
copy(m.tags[i+1:], m.tags[i:])
m.tags[i] = &telegraf.Tag{Key: key, Value: value}
return
}
m.tags = append(m.tags, &telegraf.Tag{Key: key, Value: value})
}
func (m *metric) HasTag(key string) bool {
for _, tag := range m.tags {
if tag.Key == key {
return true
}
}
return false
}
func (m *metric) GetTag(key string) (string, bool) {
for _, tag := range m.tags {
if tag.Key == key {
return tag.Value, true
}
}
return "", false
}
func (m *metric) RemoveTag(key string) {
for i, tag := range m.tags {
if tag.Key == key {
copy(m.tags[i:], m.tags[i+1:])
m.tags[len(m.tags)-1] = nil
m.tags = m.tags[:len(m.tags)-1]
return
}
}
}
func (m *metric) AddField(key string, value interface{}) {
for i, field := range m.fields {
if key == field.Key {
m.fields[i] = &telegraf.Field{Key: key, Value: convertField(value)}
}
}
m.fields = append(m.fields, &telegraf.Field{Key: key, Value: convertField(value)})
}
func (m *metric) HasField(key string) bool {
for _, field := range m.fields {
if field.Key == key {
return true
}
}
return false
}
func (m *metric) GetField(key string) (interface{}, bool) {
for _, field := range m.fields {
if field.Key == key {
return field.Value, true
}
}
return nil, false
}
func (m *metric) RemoveField(key string) {
for i, field := range m.fields {
if field.Key == key {
copy(m.fields[i:], m.fields[i+1:])
m.fields[len(m.fields)-1] = nil
m.fields = m.fields[:len(m.fields)-1]
return
}
}
}
func (m *metric) Copy() telegraf.Metric {
m2 := &metric{
name: m.name,
tags: make([]*telegraf.Tag, len(m.tags)),
fields: make([]*telegraf.Field, len(m.fields)),
tm: m.tm,
tp: m.tp,
aggregate: m.aggregate,
}
for i, tag := range m.tags {
m2.tags[i] = tag
}
for i, field := range m.fields {
m2.fields[i] = field
}
return m2
return string(m.name) + string(m.tags) + " " + string(m.fields) + " " + string(m.t) + "\n"
}
func (m *metric) SetAggregate(b bool) {
m.aggregate = true
m.aggregate = b
}
func (m *metric) IsAggregate() bool {
return m.aggregate
}
func (m *metric) HashID() uint64 {
h := fnv.New64a()
h.Write([]byte(m.name))
for _, tag := range m.tags {
h.Write([]byte(tag.Key))
h.Write([]byte(tag.Value))
}
return h.Sum64()
func (m *metric) Type() telegraf.ValueType {
return m.mType
}
// Convert field to a supported type or nil if unconvertible
func convertField(v interface{}) interface{} {
switch v := v.(type) {
case float64:
return v
case int64:
return v
case string:
return v
case bool:
return v
case int:
return int64(v)
case uint:
return uint64(v)
case uint64:
return uint64(v)
case []byte:
return string(v)
case int32:
return int64(v)
case int16:
return int64(v)
case int8:
return int64(v)
case uint32:
return uint64(v)
case uint16:
return uint64(v)
case uint8:
return uint64(v)
case float32:
return float64(v)
default:
func (m *metric) Len() int {
// 3 is for 2 spaces surrounding the fields array + newline at the end.
return len(m.name) + len(m.tags) + len(m.fields) + len(m.t) + 3
}
func (m *metric) Serialize() []byte {
tmp := make([]byte, m.Len())
i := 0
i += copy(tmp[i:], m.name)
i += copy(tmp[i:], m.tags)
tmp[i] = ' '
i++
i += copy(tmp[i:], m.fields)
tmp[i] = ' '
i++
i += copy(tmp[i:], m.t)
tmp[i] = '\n'
return tmp
}
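// Illustrative sketch of the serialized layout (hypothetical values; the
// literal byte slices mirror how New builds them):
func serializeSketch() []byte {
	m := &metric{
		name:   []byte("cpu"),
		tags:   []byte(",host=localhost"),
		fields: []byte("value=42"),
		t:      []byte("1480614053000000000"),
	}
	// returns "cpu,host=localhost value=42 1480614053000000000\n"
	return m.Serialize()
}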
func (m *metric) SerializeTo(dst []byte) int {
i := 0
if i >= len(dst) {
return i
}
i += copy(dst[i:], m.name)
if i >= len(dst) {
return i
}
i += copy(dst[i:], m.tags)
if i >= len(dst) {
return i
}
dst[i] = ' '
i++
if i >= len(dst) {
return i
}
i += copy(dst[i:], m.fields)
if i >= len(dst) {
return i
}
dst[i] = ' '
i++
if i >= len(dst) {
return i
}
i += copy(dst[i:], m.t)
if i >= len(dst) {
return i
}
dst[i] = '\n'
return i + 1
}
func (m *metric) Split(maxSize int) []telegraf.Metric {
if m.Len() <= maxSize {
return []telegraf.Metric{m}
}
var out []telegraf.Metric
// constant number of bytes for each metric (in addition to field bytes)
constant := len(m.name) + len(m.tags) + len(m.t) + 3
// currently selected fields
fields := make([]byte, 0, maxSize)
i := 0
for {
if i >= len(m.fields) {
// hit the end of the field byte slice
if len(fields) > 0 {
out = append(out, copyWith(m.name, m.tags, fields, m.t))
}
break
}
// find the end of the next field
j := indexUnescapedByte(m.fields[i:], ',')
if j == -1 {
j = len(m.fields)
} else {
j += i
}
// if adding the currently selected field would overflow maxSize, emit a
// metric containing the fields collected so far, _not_ including this field
if len(m.fields[i:j])+len(fields)+constant >= maxSize {
// if no fields have been collected yet, fall through and emit the current
// field on its own anyway: the given maxSize is too small for even a
// single field to fit
if len(fields) > 0 {
out = append(out, copyWith(m.name, m.tags, fields, m.t))
}
fields = make([]byte, 0, maxSize)
}
if len(fields) > 0 {
fields = append(fields, ',')
}
fields = append(fields, m.fields[i:j]...)
i = j + 1
}
return out
}
func (m *metric) Fields() map[string]interface{} {
fieldMap := map[string]interface{}{}
i := 0
for {
if i >= len(m.fields) {
break
}
// end index of field key
i1 := indexUnescapedByte(m.fields[i:], '=')
if i1 == -1 {
break
}
// start index of field value
i2 := i1 + 1
// end index of field value
var i3 int
if m.fields[i:][i2] == '"' {
i3 = indexUnescapedByte(m.fields[i:][i2+1:], '"')
if i3 == -1 {
i3 = len(m.fields[i:])
}
i3 += i2 + 2 // increment index to the comma
} else {
i3 = indexUnescapedByte(m.fields[i:], ',')
if i3 == -1 {
i3 = len(m.fields[i:])
}
}
switch m.fields[i:][i2] {
case '"':
// string field
fieldMap[unescape(string(m.fields[i:][0:i1]), "fieldkey")] = unescape(string(m.fields[i:][i2+1:i3-1]), "fieldval")
case '-', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':
// number field
switch m.fields[i:][i3-1] {
case 'i':
// integer field
n, err := parseIntBytes(m.fields[i:][i2:i3-1], 10, 64)
if err == nil {
fieldMap[unescape(string(m.fields[i:][0:i1]), "fieldkey")] = n
} else {
// TODO handle error or just ignore field silently?
}
default:
// float field
n, err := parseFloatBytes(m.fields[i:][i2:i3], 64)
if err == nil {
fieldMap[unescape(string(m.fields[i:][0:i1]), "fieldkey")] = n
} else {
// TODO handle error or just ignore field silently?
}
}
case 'T', 't':
fieldMap[unescape(string(m.fields[i:][0:i1]), "fieldkey")] = true
case 'F', 'f':
fieldMap[unescape(string(m.fields[i:][0:i1]), "fieldkey")] = false
default:
// TODO handle unsupported field type
}
i += i3 + 1
}
return fieldMap
}
func (m *metric) Tags() map[string]string {
tagMap := map[string]string{}
if len(m.tags) == 0 {
return tagMap
}
i := 0
for {
// start index of tag key
i0 := indexUnescapedByte(m.tags[i:], ',') + 1
if i0 == 0 {
// didn't find a tag start
break
}
// end index of tag key
i1 := indexUnescapedByte(m.tags[i:], '=')
// start index of tag value
i2 := i1 + 1
// end index of tag value (starting from i2)
i3 := indexUnescapedByte(m.tags[i+i2:], ',')
if i3 == -1 {
tagMap[unescape(string(m.tags[i:][i0:i1]), "tagkey")] = unescape(string(m.tags[i:][i2:]), "tagval")
break
}
tagMap[unescape(string(m.tags[i:][i0:i1]), "tagkey")] = unescape(string(m.tags[i:][i2:i2+i3]), "tagval")
// increment start index for the next tag
i += i2 + i3
}
return tagMap
}
func (m *metric) Name() string {
return unescape(string(m.name), "name")
}
func (m *metric) Time() time.Time {
// assume metric has been verified already and ignore error:
if m.nsec == 0 {
m.nsec, _ = parseIntBytes(m.t, 10, 64)
}
return time.Unix(0, m.nsec)
}
func (m *metric) UnixNano() int64 {
// assume metric has been verified already and ignore error:
if m.nsec == 0 {
m.nsec, _ = parseIntBytes(m.t, 10, 64)
}
return m.nsec
}
func (m *metric) SetName(name string) {
m.hashID = 0
m.name = []byte(nameEscaper.Replace(name))
}
func (m *metric) SetPrefix(prefix string) {
m.hashID = 0
m.name = append([]byte(nameEscaper.Replace(prefix)), m.name...)
}
func (m *metric) SetSuffix(suffix string) {
m.hashID = 0
m.name = append(m.name, []byte(nameEscaper.Replace(suffix))...)
}
func (m *metric) AddTag(key, value string) {
m.RemoveTag(key)
m.tags = append(m.tags, []byte(","+escape(key, "tagkey")+"="+escape(value, "tagval"))...)
}
func (m *metric) HasTag(key string) bool {
i := bytes.Index(m.tags, []byte(escape(key, "tagkey")+"="))
if i == -1 {
return false
}
return true
}
func (m *metric) RemoveTag(key string) {
m.hashID = 0
i := bytes.Index(m.tags, []byte(escape(key, "tagkey")+"="))
if i == -1 {
return
}
tmp := m.tags[0 : i-1]
j := indexUnescapedByte(m.tags[i:], ',')
if j != -1 {
tmp = append(tmp, m.tags[i+j:]...)
}
m.tags = tmp
return
}
func (m *metric) AddField(key string, value interface{}) {
m.fields = append(m.fields, ',')
m.fields = appendField(m.fields, key, value)
}
func (m *metric) HasField(key string) bool {
i := bytes.Index(m.fields, []byte(escape(key, "tagkey")+"="))
if i == -1 {
return false
}
return true
}
func (m *metric) RemoveField(key string) error {
i := bytes.Index(m.fields, []byte(escape(key, "tagkey")+"="))
if i == -1 {
return nil
}
var tmp []byte
if i != 0 {
tmp = m.fields[0 : i-1]
}
j := indexUnescapedByte(m.fields[i:], ',')
if j != -1 {
tmp = append(tmp, m.fields[i+j:]...)
}
if len(tmp) == 0 {
return fmt.Errorf("Metric cannot remove final field: %s", m.fields)
}
m.fields = tmp
return nil
}
func (m *metric) Copy() telegraf.Metric {
return copyWith(m.name, m.tags, m.fields, m.t)
}
func copyWith(name, tags, fields, t []byte) telegraf.Metric {
out := metric{
name: make([]byte, len(name)),
tags: make([]byte, len(tags)),
fields: make([]byte, len(fields)),
t: make([]byte, len(t)),
}
copy(out.name, name)
copy(out.tags, tags)
copy(out.fields, fields)
copy(out.t, t)
return &out
}
func (m *metric) HashID() uint64 {
if m.hashID == 0 {
h := fnv.New64a()
h.Write(m.name)
tags := m.Tags()
tmp := make([]string, len(tags))
i := 0
for k, v := range tags {
tmp[i] = k + v
i++
}
sort.Strings(tmp)
for _, s := range tmp {
h.Write([]byte(s))
}
m.hashID = h.Sum64()
}
return m.hashID
}
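// Illustrative sketch (hypothetical helper and values): HashID sorts the
// tag key/value pairs before hashing, so metrics that differ only in field
// values or tag map iteration order share a series hash.
func hashSketch() bool {
	now := time.Now()
	m1, _ := New("cpu", map[string]string{"a": "x", "b": "y"},
		map[string]interface{}{"v": 1.0}, now)
	m2, _ := New("cpu", map[string]string{"b": "y", "a": "x"},
		map[string]interface{}{"v": 2.0}, now)
	return m1.HashID() == m2.HashID() // true
}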
func appendField(b []byte, k string, v interface{}) []byte {
if v == nil {
return b
}
b = append(b, []byte(escape(k, "tagkey")+"=")...)
// check popular types first
switch v := v.(type) {
case float64:
b = strconv.AppendFloat(b, v, 'f', -1, 64)
case int64:
b = strconv.AppendInt(b, v, 10)
b = append(b, 'i')
case string:
b = append(b, '"')
b = append(b, []byte(escape(v, "fieldval"))...)
b = append(b, '"')
case bool:
b = strconv.AppendBool(b, v)
case int32:
b = strconv.AppendInt(b, int64(v), 10)
b = append(b, 'i')
case int16:
b = strconv.AppendInt(b, int64(v), 10)
b = append(b, 'i')
case int8:
b = strconv.AppendInt(b, int64(v), 10)
b = append(b, 'i')
case int:
b = strconv.AppendInt(b, int64(v), 10)
b = append(b, 'i')
case uint64:
// Cap uints above the maximum int value
var intv int64
if v <= uint64(MaxInt) {
intv = int64(v)
} else {
intv = int64(MaxInt)
}
b = strconv.AppendInt(b, intv, 10)
b = append(b, 'i')
case uint32:
b = strconv.AppendInt(b, int64(v), 10)
b = append(b, 'i')
case uint16:
b = strconv.AppendInt(b, int64(v), 10)
b = append(b, 'i')
case uint8:
b = strconv.AppendInt(b, int64(v), 10)
b = append(b, 'i')
case uint:
// Cap uints above the maximum int value
var intv int64
if v <= uint(MaxInt) {
intv = int64(v)
} else {
intv = int64(MaxInt)
}
b = strconv.AppendInt(b, intv, 10)
b = append(b, 'i')
case float32:
b = strconv.AppendFloat(b, float64(v), 'f', -1, 32)
case []byte:
b = append(b, v...)
default:
// Can't determine the type, so convert to string
b = append(b, '"')
b = append(b, []byte(escape(fmt.Sprintf("%v", v), "fieldval"))...)
b = append(b, '"')
}
return b
}
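// Illustrative sketch of appendField output (hypothetical keys and values):
// integer fields gain an 'i' suffix, floats do not, and out-of-range uints
// are capped at MaxInt.
func appendFieldSketch() string {
	b := appendField(nil, "count", int64(7)) // count=7i
	b = append(b, ',')
	b = appendField(b, "load", 0.5) // count=7i,load=0.5
	return string(b)
}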

View File

@@ -0,0 +1,148 @@
package metric
import (
"fmt"
"testing"
"time"
"github.com/influxdata/telegraf"
)
// vars for making sure that the compiler doesn't optimize out the benchmarks:
var (
s string
I interface{}
tags map[string]string
fields map[string]interface{}
)
func BenchmarkNewMetric(b *testing.B) {
var mt telegraf.Metric
for n := 0; n < b.N; n++ {
mt, _ = New("test_metric",
map[string]string{
"test_tag_1": "tag_value_1",
"test_tag_2": "tag_value_2",
"test_tag_3": "tag_value_3",
},
map[string]interface{}{
"string_field": "string",
"int_field": int64(1000),
"float_field": float64(2.1),
},
time.Now(),
)
}
s = mt.String()
}
func BenchmarkAddTag(b *testing.B) {
var mt telegraf.Metric
mt = &metric{
name: []byte("cpu"),
tags: []byte(",host=localhost"),
fields: []byte("a=101"),
t: []byte("1480614053000000000"),
}
for n := 0; n < b.N; n++ {
mt.AddTag("foo", "bar")
}
s = mt.String()
}
func BenchmarkSplit(b *testing.B) {
var mt telegraf.Metric
mt = &metric{
name: []byte("cpu"),
tags: []byte(",host=localhost"),
fields: []byte("a=101,b=10i,c=10101,d=101010,e=42"),
t: []byte("1480614053000000000"),
}
var metrics []telegraf.Metric
for n := 0; n < b.N; n++ {
metrics = mt.Split(60)
}
s = string(metrics[0].String())
}
func BenchmarkTags(b *testing.B) {
for n := 0; n < b.N; n++ {
var mt, _ = New("test_metric",
map[string]string{
"test_tag_1": "tag_value_1",
"test_tag_2": "tag_value_2",
"test_tag_3": "tag_value_3",
},
map[string]interface{}{
"string_field": "string",
"int_field": int64(1000),
"float_field": float64(2.1),
},
time.Now(),
)
tags = mt.Tags()
}
s = fmt.Sprint(tags)
}
func BenchmarkFields(b *testing.B) {
for n := 0; n < b.N; n++ {
var mt, _ = New("test_metric",
map[string]string{
"test_tag_1": "tag_value_1",
"test_tag_2": "tag_value_2",
"test_tag_3": "tag_value_3",
},
map[string]interface{}{
"string_field": "string",
"int_field": int64(1000),
"float_field": float64(2.1),
},
time.Now(),
)
fields = mt.Fields()
}
s = fmt.Sprint(fields)
}
func BenchmarkString(b *testing.B) {
mt, _ := New("test_metric",
map[string]string{
"test_tag_1": "tag_value_1",
"test_tag_2": "tag_value_2",
"test_tag_3": "tag_value_3",
},
map[string]interface{}{
"string_field": "string",
"int_field": int64(1000),
"float_field": float64(2.1),
},
time.Now(),
)
var S string
for n := 0; n < b.N; n++ {
S = mt.String()
}
s = S
}
func BenchmarkSerialize(b *testing.B) {
mt, _ := New("test_metric",
map[string]string{
"test_tag_1": "tag_value_1",
"test_tag_2": "tag_value_2",
"test_tag_3": "tag_value_3",
},
map[string]interface{}{
"string_field": "string",
"int_field": int64(1000),
"float_field": float64(2.1),
},
time.Now(),
)
var B []byte
for n := 0; n < b.N; n++ {
B = mt.Serialize()
}
s = string(B)
}

View File

@@ -1,10 +1,14 @@
package metric
import (
"fmt"
"math"
"regexp"
"testing"
"time"
"github.com/influxdata/telegraf"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -21,184 +25,102 @@ func TestNewMetric(t *testing.T) {
"usage_busy": float64(1),
}
m, err := New("cpu", tags, fields, now)
require.NoError(t, err)
assert.NoError(t, err)
require.Equal(t, "cpu", m.Name())
require.Equal(t, tags, m.Tags())
require.Equal(t, fields, m.Fields())
require.Equal(t, 2, len(m.FieldList()))
require.Equal(t, now, m.Time())
assert.Equal(t, telegraf.Untyped, m.Type())
assert.Equal(t, tags, m.Tags())
assert.Equal(t, fields, m.Fields())
assert.Equal(t, "cpu", m.Name())
assert.Equal(t, now, m.Time())
assert.Equal(t, now.UnixNano(), m.UnixNano())
}
func baseMetric() telegraf.Metric {
tags := map[string]string{}
func TestNewErrors(t *testing.T) {
// creating a metric with an empty name produces an error:
m, err := New(
"",
map[string]string{
"datacenter": "us-east-1",
"mytag": "foo",
"another": "tag",
},
map[string]interface{}{
"value": float64(1),
},
time.Now(),
)
assert.Error(t, err)
assert.Nil(t, m)
// creating a metric with empty fields produces an error:
m, err = New(
"foobar",
map[string]string{
"datacenter": "us-east-1",
"mytag": "foo",
"another": "tag",
},
map[string]interface{}{},
time.Now(),
)
assert.Error(t, err)
assert.Nil(t, m)
}
func TestNewMetric_Tags(t *testing.T) {
now := time.Now()
tags := map[string]string{
"host": "localhost",
"datacenter": "us-east-1",
}
fields := map[string]interface{}{
"value": float64(1),
}
now := time.Now()
m, err := New("cpu", tags, fields, now)
if err != nil {
panic(err)
}
return m
}
assert.NoError(t, err)
func TestHasTag(t *testing.T) {
m := baseMetric()
assert.True(t, m.HasTag("host"))
assert.True(t, m.HasTag("datacenter"))
require.False(t, m.HasTag("host"))
m.AddTag("host", "localhost")
require.True(t, m.HasTag("host"))
m.RemoveTag("host")
require.False(t, m.HasTag("host"))
}
func TestAddTagOverwrites(t *testing.T) {
m := baseMetric()
m.AddTag("host", "localhost")
m.AddTag("host", "example.org")
value, ok := m.GetTag("host")
require.True(t, ok)
require.Equal(t, "example.org", value)
}
func TestRemoveTagNoEffectOnMissingTags(t *testing.T) {
m := baseMetric()
m.RemoveTag("foo")
m.AddTag("a", "x")
m.RemoveTag("foo")
m.RemoveTag("bar")
value, ok := m.GetTag("a")
require.True(t, ok)
require.Equal(t, "x", value)
}
func TestGetTag(t *testing.T) {
m := baseMetric()
value, ok := m.GetTag("host")
require.False(t, ok)
m.AddTag("host", "localhost")
value, ok = m.GetTag("host")
require.True(t, ok)
require.Equal(t, "localhost", value)
m.AddTag("newtag", "foo")
assert.True(t, m.HasTag("newtag"))
m.RemoveTag("host")
value, ok = m.GetTag("host")
require.False(t, ok)
assert.False(t, m.HasTag("host"))
assert.True(t, m.HasTag("newtag"))
assert.True(t, m.HasTag("datacenter"))
m.RemoveTag("datacenter")
assert.False(t, m.HasTag("datacenter"))
assert.True(t, m.HasTag("newtag"))
assert.Equal(t, map[string]string{"newtag": "foo"}, m.Tags())
m.RemoveTag("newtag")
assert.False(t, m.HasTag("newtag"))
assert.Equal(t, map[string]string{}, m.Tags())
assert.Equal(t, "cpu value=1 "+fmt.Sprint(now.UnixNano())+"\n", m.String())
}
func TestHasField(t *testing.T) {
m := baseMetric()
require.False(t, m.HasField("x"))
m.AddField("x", 42.0)
require.True(t, m.HasField("x"))
m.RemoveTag("x")
require.False(t, m.HasTag("x"))
}
func TestAddFieldOverwrites(t *testing.T) {
m := baseMetric()
m.AddField("value", 1.0)
m.AddField("value", 42.0)
value, ok := m.GetField("value")
require.True(t, ok)
require.Equal(t, 42.0, value)
}
func TestAddFieldChangesType(t *testing.T) {
m := baseMetric()
m.AddField("value", 1.0)
m.AddField("value", "xyzzy")
value, ok := m.GetField("value")
require.True(t, ok)
require.Equal(t, "xyzzy", value)
}
func TestRemoveFieldNoEffectOnMissingFields(t *testing.T) {
m := baseMetric()
m.RemoveField("foo")
m.AddField("a", "x")
m.RemoveField("foo")
m.RemoveField("bar")
value, ok := m.GetField("a")
require.True(t, ok)
require.Equal(t, "x", value)
}
func TestGetField(t *testing.T) {
m := baseMetric()
value, ok := m.GetField("foo")
require.False(t, ok)
m.AddField("foo", "bar")
value, ok = m.GetField("foo")
require.True(t, ok)
require.Equal(t, "bar", value)
m.RemoveTag("foo")
value, ok = m.GetTag("foo")
require.False(t, ok)
}
func TestTagList_Sorted(t *testing.T) {
m := baseMetric()
m.AddTag("b", "y")
m.AddTag("c", "z")
m.AddTag("a", "x")
taglist := m.TagList()
require.Equal(t, "a", taglist[0].Key)
require.Equal(t, "b", taglist[1].Key)
require.Equal(t, "c", taglist[2].Key)
}
func TestEquals(t *testing.T) {
func TestSerialize(t *testing.T) {
now := time.Now()
m1, err := New("cpu",
map[string]string{
"host": "localhost",
},
map[string]interface{}{
"value": 42.0,
},
now,
)
require.NoError(t, err)
tags := map[string]string{
"datacenter": "us-east-1",
}
fields := map[string]interface{}{
"value": float64(1),
}
m, err := New("cpu", tags, fields, now)
assert.NoError(t, err)
m2, err := New("cpu",
map[string]string{
"host": "localhost",
},
map[string]interface{}{
"value": 42.0,
},
now,
)
require.NoError(t, err)
assert.Equal(t,
[]byte("cpu,datacenter=us-east-1 value=1 "+fmt.Sprint(now.UnixNano())+"\n"),
m.Serialize())
lhs := m1.(*metric)
require.Equal(t, lhs, m2)
m3 := m2.Copy()
require.Equal(t, lhs, m3)
m3.AddTag("a", "x")
require.NotEqual(t, lhs, m3)
m.RemoveTag("datacenter")
assert.Equal(t,
[]byte("cpu value=1 "+fmt.Sprint(now.UnixNano())+"\n"),
m.Serialize())
}
func TestHashID(t *testing.T) {
@@ -249,62 +171,571 @@ func TestHashID_Consistency(t *testing.T) {
)
hash := m.HashID()
m2, _ := New(
"cpu",
map[string]string{
"datacenter": "us-east-1",
"mytag": "foo",
"another": "tag",
},
map[string]interface{}{
"value": float64(1),
},
time.Now(),
)
assert.Equal(t, hash, m2.HashID())
m3 := m.Copy()
assert.Equal(t, m2.HashID(), m3.HashID())
for i := 0; i < 1000; i++ {
m2, _ := New(
"cpu",
map[string]string{
"datacenter": "us-east-1",
"mytag": "foo",
"another": "tag",
},
map[string]interface{}{
"value": float64(1),
},
time.Now(),
)
assert.Equal(t, hash, m2.HashID())
}
}
func TestSetName(t *testing.T) {
m := baseMetric()
m.SetName("foo")
require.Equal(t, "foo", m.Name())
}
func TestAddPrefix(t *testing.T) {
m := baseMetric()
m.AddPrefix("foo_")
require.Equal(t, "foo_cpu", m.Name())
m.AddPrefix("foo_")
require.Equal(t, "foo_foo_cpu", m.Name())
}
func TestAddSuffix(t *testing.T) {
m := baseMetric()
m.AddSuffix("_foo")
require.Equal(t, "cpu_foo", m.Name())
m.AddSuffix("_foo")
require.Equal(t, "cpu_foo_foo", m.Name())
}
func TestValueType(t *testing.T) {
func TestNewMetric_NameModifiers(t *testing.T) {
now := time.Now()
tags := map[string]string{}
fields := map[string]interface{}{
"value": float64(42),
"value": float64(1),
}
m, err := New("cpu", tags, fields, now)
assert.NoError(t, err)
hash := m.HashID()
suffix := fmt.Sprintf(" value=1 %d\n", now.UnixNano())
assert.Equal(t, "cpu"+suffix, m.String())
m.SetPrefix("pre_")
assert.NotEqual(t, hash, m.HashID())
hash = m.HashID()
assert.Equal(t, "pre_cpu"+suffix, m.String())
m.SetSuffix("_post")
assert.NotEqual(t, hash, m.HashID())
hash = m.HashID()
assert.Equal(t, "pre_cpu_post"+suffix, m.String())
m.SetName("mem")
assert.NotEqual(t, hash, m.HashID())
assert.Equal(t, "mem"+suffix, m.String())
}
func TestNewMetric_FieldModifiers(t *testing.T) {
now := time.Now()
tags := map[string]string{
"host": "localhost",
}
fields := map[string]interface{}{
"value": float64(1),
}
m, err := New("cpu", tags, fields, now)
assert.NoError(t, err)
assert.True(t, m.HasField("value"))
assert.False(t, m.HasField("foo"))
m.AddField("newfield", "foo")
assert.True(t, m.HasField("newfield"))
assert.NoError(t, m.RemoveField("newfield"))
assert.False(t, m.HasField("newfield"))
// don't allow user to remove all fields:
assert.Error(t, m.RemoveField("value"))
m.AddField("value2", int64(101))
assert.NoError(t, m.RemoveField("value"))
assert.False(t, m.HasField("value"))
}
func TestNewMetric_Fields(t *testing.T) {
now := time.Now()
tags := map[string]string{
"host": "localhost",
}
fields := map[string]interface{}{
"float": float64(1),
"int": int64(1),
"bool": true,
"false": false,
"string": "test",
"quote_string": `x"y`,
"backslash_quote_string": `x\"y`,
}
m, err := New("cpu", tags, fields, now)
assert.NoError(t, err)
assert.Equal(t, fields, m.Fields())
}
func TestNewMetric_Time(t *testing.T) {
now := time.Now()
tags := map[string]string{
"host": "localhost",
}
fields := map[string]interface{}{
"float": float64(1),
"int": int64(1),
"bool": true,
"false": false,
"string": "test",
}
m, err := New("cpu", tags, fields, now)
assert.NoError(t, err)
m = m.Copy()
m2 := m.Copy()
assert.Equal(t, now.UnixNano(), m.Time().UnixNano())
assert.Equal(t, now.UnixNano(), m2.UnixNano())
}
func TestNewMetric_Copy(t *testing.T) {
now := time.Now()
tags := map[string]string{}
fields := map[string]interface{}{
"float": float64(1),
}
m, err := New("cpu", tags, fields, now)
assert.NoError(t, err)
m2 := m.Copy()
assert.Equal(t,
fmt.Sprintf("cpu float=1 %d\n", now.UnixNano()),
m.String())
m.AddTag("host", "localhost")
assert.Equal(t,
fmt.Sprintf("cpu,host=localhost float=1 %d\n", now.UnixNano()),
m.String())
assert.Equal(t,
fmt.Sprintf("cpu float=1 %d\n", now.UnixNano()),
m2.String())
}
func TestNewMetric_AllTypes(t *testing.T) {
now := time.Now()
tags := map[string]string{}
fields := map[string]interface{}{
"float64": float64(1),
"float32": float32(1),
"int64": int64(1),
"int32": int32(1),
"int16": int16(1),
"int8": int8(1),
"int": int(1),
"uint64": uint64(1),
"uint32": uint32(1),
"uint16": uint16(1),
"uint8": uint8(1),
"uint": uint(1),
"bytes": []byte("foo"),
"nil": nil,
"maxuint64": uint64(MaxInt) + 10,
"maxuint": uint(MaxInt) + 10,
"unsupported": []int{1, 2},
}
m, err := New("cpu", tags, fields, now)
assert.NoError(t, err)
assert.Contains(t, m.String(), "float64=1")
assert.Contains(t, m.String(), "float32=1")
assert.Contains(t, m.String(), "int64=1i")
assert.Contains(t, m.String(), "int32=1i")
assert.Contains(t, m.String(), "int16=1i")
assert.Contains(t, m.String(), "int8=1i")
assert.Contains(t, m.String(), "int=1i")
assert.Contains(t, m.String(), "uint64=1i")
assert.Contains(t, m.String(), "uint32=1i")
assert.Contains(t, m.String(), "uint16=1i")
assert.Contains(t, m.String(), "uint8=1i")
assert.Contains(t, m.String(), "uint=1i")
assert.NotContains(t, m.String(), "nil")
assert.Contains(t, m.String(), fmt.Sprintf("maxuint64=%di", MaxInt))
assert.Contains(t, m.String(), fmt.Sprintf("maxuint=%di", MaxInt))
}
func TestIndexUnescapedByte(t *testing.T) {
tests := []struct {
in []byte
b byte
expected int
}{
{
in: []byte(`foobar`),
b: 'b',
expected: 3,
},
{
in: []byte(`foo\bar`),
b: 'b',
expected: -1,
},
{
in: []byte(`foo\\bar`),
b: 'b',
expected: -1,
},
{
in: []byte(`foobar`),
b: 'f',
expected: 0,
},
{
in: []byte(`foobar`),
b: 'r',
expected: 5,
},
{
in: []byte(`\foobar`),
b: 'f',
expected: -1,
},
}
for _, test := range tests {
got := indexUnescapedByte(test.in, test.b)
assert.Equal(t, test.expected, got)
}
}
func TestNewGaugeMetric(t *testing.T) {
now := time.Now()
tags := map[string]string{
"host": "localhost",
"datacenter": "us-east-1",
}
fields := map[string]interface{}{
"usage_idle": float64(99),
"usage_busy": float64(1),
}
m, err := New("cpu", tags, fields, now, telegraf.Gauge)
assert.NoError(t, err)
assert.Equal(t, telegraf.Gauge, m.Type())
assert.Equal(t, tags, m.Tags())
assert.Equal(t, fields, m.Fields())
assert.Equal(t, "cpu", m.Name())
assert.Equal(t, now, m.Time())
assert.Equal(t, now.UnixNano(), m.UnixNano())
}
func TestCopyAggreate(t *testing.T) {
m1 := baseMetric()
m1.SetAggregate(true)
m2 := m1.Copy()
assert.True(t, m2.IsAggregate())
func TestNewCounterMetric(t *testing.T) {
now := time.Now()
tags := map[string]string{
"host": "localhost",
"datacenter": "us-east-1",
}
fields := map[string]interface{}{
"usage_idle": float64(99),
"usage_busy": float64(1),
}
m, err := New("cpu", tags, fields, now, telegraf.Counter)
assert.NoError(t, err)
assert.Equal(t, telegraf.Counter, m.Type())
assert.Equal(t, tags, m.Tags())
assert.Equal(t, fields, m.Fields())
assert.Equal(t, "cpu", m.Name())
assert.Equal(t, now, m.Time())
assert.Equal(t, now.UnixNano(), m.UnixNano())
}
// test splitting metric into various max lengths
func TestSplitMetric(t *testing.T) {
now := time.Unix(0, 1480940990034083306)
tags := map[string]string{
"host": "localhost",
}
fields := map[string]interface{}{
"float": float64(100001),
"int": int64(100001),
"bool": true,
"false": false,
"string": "test",
}
m, err := New("cpu", tags, fields, now)
assert.NoError(t, err)
split80 := m.Split(80)
assert.Len(t, split80, 2)
split70 := m.Split(70)
assert.Len(t, split70, 3)
split60 := m.Split(60)
assert.Len(t, split60, 5)
}
// test splitting metric into various max lengths
// use a simple regex check to verify that the split metrics are valid
func TestSplitMetric_RegexVerify(t *testing.T) {
now := time.Unix(0, 1480940990034083306)
tags := map[string]string{
"host": "localhost",
}
fields := map[string]interface{}{
"foo": float64(98934259085),
"bar": float64(19385292),
"number": float64(19385292),
"another": float64(19385292),
"n": float64(19385292),
}
m, err := New("cpu", tags, fields, now)
assert.NoError(t, err)
// verification regex
re := regexp.MustCompile(`cpu,host=localhost \w+=\d+(,\w+=\d+)* 1480940990034083306`)
split90 := m.Split(90)
assert.Len(t, split90, 2)
for _, splitM := range split90 {
assert.True(t, re.Match(splitM.Serialize()), splitM.String())
}
split70 := m.Split(70)
assert.Len(t, split70, 3)
for _, splitM := range split70 {
assert.True(t, re.Match(splitM.Serialize()), splitM.String())
}
split20 := m.Split(20)
assert.Len(t, split20, 5)
for _, splitM := range split20 {
assert.True(t, re.Match(splitM.Serialize()), splitM.String())
}
}
// test splitting metric even when given length is shorter than
// shortest possible length
// Split should split metric as short as possible, ie, 1 field per metric
func TestSplitMetric_TooShort(t *testing.T) {
now := time.Unix(0, 1480940990034083306)
tags := map[string]string{
"host": "localhost",
}
fields := map[string]interface{}{
"float": float64(100001),
"int": int64(100001),
"bool": true,
"false": false,
"string": "test",
}
m, err := New("cpu", tags, fields, now)
assert.NoError(t, err)
split := m.Split(10)
assert.Len(t, split, 5)
strings := make([]string, 5)
for i, splitM := range split {
strings[i] = splitM.String()
}
assert.Contains(t, strings, "cpu,host=localhost float=100001 1480940990034083306\n")
assert.Contains(t, strings, "cpu,host=localhost int=100001i 1480940990034083306\n")
assert.Contains(t, strings, "cpu,host=localhost bool=true 1480940990034083306\n")
assert.Contains(t, strings, "cpu,host=localhost false=false 1480940990034083306\n")
assert.Contains(t, strings, "cpu,host=localhost string=\"test\" 1480940990034083306\n")
}
func TestSplitMetric_NoOp(t *testing.T) {
now := time.Unix(0, 1480940990034083306)
tags := map[string]string{
"host": "localhost",
}
fields := map[string]interface{}{
"float": float64(100001),
"int": int64(100001),
"bool": true,
"false": false,
"string": "test",
}
m, err := New("cpu", tags, fields, now)
assert.NoError(t, err)
split := m.Split(1000)
assert.Len(t, split, 1)
assert.Equal(t, m, split[0])
}
func TestSplitMetric_OneField(t *testing.T) {
now := time.Unix(0, 1480940990034083306)
tags := map[string]string{
"host": "localhost",
}
fields := map[string]interface{}{
"float": float64(100001),
}
m, err := New("cpu", tags, fields, now)
assert.NoError(t, err)
assert.Equal(t, "cpu,host=localhost float=100001 1480940990034083306\n", m.String())
split := m.Split(1000)
assert.Len(t, split, 1)
assert.Equal(t, "cpu,host=localhost float=100001 1480940990034083306\n", split[0].String())
split = m.Split(1)
assert.Len(t, split, 1)
assert.Equal(t, "cpu,host=localhost float=100001 1480940990034083306\n", split[0].String())
split = m.Split(40)
assert.Len(t, split, 1)
assert.Equal(t, "cpu,host=localhost float=100001 1480940990034083306\n", split[0].String())
}
func TestSplitMetric_ExactSize(t *testing.T) {
now := time.Unix(0, 1480940990034083306)
tags := map[string]string{
"host": "localhost",
}
fields := map[string]interface{}{
"float": float64(100001),
"int": int64(100001),
"bool": true,
"false": false,
"string": "test",
}
m, err := New("cpu", tags, fields, now)
assert.NoError(t, err)
actual := m.Split(m.Len())
// check that no copy was made
require.Equal(t, &m, &actual[0])
}
func TestSplitMetric_NoRoomForNewline(t *testing.T) {
now := time.Unix(0, 1480940990034083306)
tags := map[string]string{
"host": "localhost",
}
fields := map[string]interface{}{
"float": float64(100001),
"int": int64(100001),
"bool": true,
"false": false,
}
m, err := New("cpu", tags, fields, now)
assert.NoError(t, err)
actual := m.Split(m.Len() - 1)
require.Equal(t, 2, len(actual))
}
func TestNewMetricAggregate(t *testing.T) {
now := time.Now()
tags := map[string]string{
"host": "localhost",
}
fields := map[string]interface{}{
"usage_idle": float64(99),
}
m, err := New("cpu", tags, fields, now)
assert.NoError(t, err)
assert.False(t, m.IsAggregate())
m.SetAggregate(true)
assert.True(t, m.IsAggregate())
}
func TestNewMetricString(t *testing.T) {
now := time.Now()
tags := map[string]string{
"host": "localhost",
}
fields := map[string]interface{}{
"usage_idle": float64(99),
}
m, err := New("cpu", tags, fields, now)
assert.NoError(t, err)
lineProto := fmt.Sprintf("cpu,host=localhost usage_idle=99 %d\n",
now.UnixNano())
assert.Equal(t, lineProto, m.String())
}
func TestNewMetricFailNaN(t *testing.T) {
now := time.Now()
tags := map[string]string{
"host": "localhost",
}
fields := map[string]interface{}{
"usage_idle": math.NaN(),
}
_, err := New("cpu", tags, fields, now)
assert.NoError(t, err)
}
func TestEmptyTagValueOrKey(t *testing.T) {
now := time.Now()
tags := map[string]string{
"host": "localhost",
"emptytag": "",
"": "valuewithoutkey",
}
fields := map[string]interface{}{
"usage_idle": float64(99),
}
m, err := New("cpu", tags, fields, now)
assert.True(t, m.HasTag("host"))
assert.False(t, m.HasTag("emptytag"))
assert.Equal(t,
fmt.Sprintf("cpu,host=localhost usage_idle=99 %d\n", now.UnixNano()),
m.String())
assert.NoError(t, err)
}
func TestNewMetric_TrailingSlash(t *testing.T) {
now := time.Now()
tests := []struct {
name string
tags map[string]string
fields map[string]interface{}
}{
{
name: `cpu\`,
fields: map[string]interface{}{
"value": int64(42),
},
},
{
name: "cpu",
fields: map[string]interface{}{
`value\`: "x",
},
},
{
name: "cpu",
fields: map[string]interface{}{
"value": `x\`,
},
},
{
name: "cpu",
tags: map[string]string{
`host\`: "localhost",
},
fields: map[string]interface{}{
"value": int64(42),
},
},
{
name: "cpu",
tags: map[string]string{
"host": `localhost\`,
},
fields: map[string]interface{}{
"value": int64(42),
},
},
}
for _, tc := range tests {
_, err := New(tc.name, tc.tags, tc.fields, now)
assert.Error(t, err)
}
}

674
metric/parse.go Normal file
View File

@@ -0,0 +1,674 @@
package metric
import (
"bytes"
"errors"
"fmt"
"strconv"
"time"
"github.com/influxdata/telegraf"
)
var (
ErrInvalidNumber = errors.New("invalid number")
)
const (
// the number of characters for the largest possible int64 (9223372036854775807)
maxInt64Digits = 19
// the number of characters for the smallest possible int64 (-9223372036854775808)
minInt64Digits = 20
// the number of characters required for the largest float64 before a range check
// would occur during parsing
maxFloat64Digits = 25
// the number of characters required for the smallest float64 before a range check
// would occur during parsing
minFloat64Digits = 27
MaxKeyLength = 65535
)
// The following constants allow us to specify which state to move to
// next, when scanning sections of a Point.
const (
tagKeyState = iota
tagValueState
fieldsState
)
func Parse(buf []byte) ([]telegraf.Metric, error) {
return ParseWithDefaultTimePrecision(buf, time.Now(), "")
}
func ParseWithDefaultTime(buf []byte, t time.Time) ([]telegraf.Metric, error) {
return ParseWithDefaultTimePrecision(buf, t, "")
}
func ParseWithDefaultTimePrecision(
buf []byte,
t time.Time,
precision string,
) ([]telegraf.Metric, error) {
if len(buf) == 0 {
return []telegraf.Metric{}, nil
}
if len(buf) <= 6 {
return []telegraf.Metric{}, makeError("buffer too short", buf, 0)
}
metrics := make([]telegraf.Metric, 0, bytes.Count(buf, []byte("\n"))+1)
var errStr string
i := 0
for {
j := bytes.IndexByte(buf[i:], '\n')
if j == -1 {
break
}
if len(buf[i:i+j]) < 2 {
i += j + 1 // increment i past the previous newline
continue
}
m, err := parseMetric(buf[i:i+j], t, precision)
if err != nil {
i += j + 1 // increment i past the previous newline
errStr += " " + err.Error()
continue
}
i += j + 1 // increment i past the previous newline
metrics = append(metrics, m)
}
if len(errStr) > 0 {
return metrics, errors.New(errStr)
}
return metrics, nil
}
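// Illustrative usage sketch (hypothetical helper and values): every point
// must be newline-terminated, since the loop above only consumes complete
// lines.
func parseUsageSketch() ([]telegraf.Metric, error) {
	buf := []byte("cpu,host=localhost value=42 1480614053000000000\n" +
		"mem,host=localhost used=1024i 1480614053000000000\n")
	return Parse(buf)
}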
func parseMetric(buf []byte,
defaultTime time.Time,
precision string,
) (telegraf.Metric, error) {
var dTime string
// scan the first block, which is measurement[,tag1=value1,tag2=value2...]
pos, key, err := scanKey(buf, 0)
if err != nil {
return nil, err
}
// measurement name is required
if len(key) == 0 {
return nil, fmt.Errorf("missing measurement")
}
if len(key) > MaxKeyLength {
return nil, fmt.Errorf("max key length exceeded: %v > %v", len(key), MaxKeyLength)
}
// scan the second block, which is field1=value1[,field2=value2,...]
pos, fields, err := scanFields(buf, pos)
if err != nil {
return nil, err
}
// at least one field is required
if len(fields) == 0 {
return nil, fmt.Errorf("missing fields")
}
// scan the last block which is an optional integer timestamp
pos, ts, err := scanTime(buf, pos)
if err != nil {
return nil, err
}
// apply precision multiplier
var nsec int64
multiplier := getPrecisionMultiplier(precision)
if len(ts) > 0 && multiplier > 1 {
tsint, err := parseIntBytes(ts, 10, 64)
if err != nil {
return nil, err
}
nsec = multiplier * tsint
ts = []byte(strconv.FormatInt(nsec, 10))
}
m := &metric{
fields: fields,
t: ts,
nsec: nsec,
}
// parse out the measurement name
// namei is the index at which the "name" ends
namei := indexUnescapedByte(key, ',')
if namei < 1 {
// no tags
m.name = key
} else {
m.name = key[0:namei]
m.tags = key[namei:]
}
if len(m.t) == 0 {
if len(dTime) == 0 {
dTime = fmt.Sprint(defaultTime.UnixNano())
}
// use default time
m.t = []byte(dTime)
}
// here we copy on return because this allows us to later call
// AddTag, AddField, RemoveTag, RemoveField, etc. without worrying about
// modifying 'tag' bytes having an effect on 'field' bytes, for example.
return m.Copy(), nil
}
// scanKey scans buf starting at i for the measurement and tag portion of the point.
// It returns the ending position and the byte slice of key within buf. If there
// are tags, they will be sorted if they are not already.
func scanKey(buf []byte, i int) (int, []byte, error) {
start := skipWhitespace(buf, i)
i = start
// First scan the Point's measurement.
state, i, err := scanMeasurement(buf, i)
if err != nil {
return i, buf[start:i], err
}
// Optionally scan tags if needed.
if state == tagKeyState {
i, err = scanTags(buf, i)
if err != nil {
return i, buf[start:i], err
}
}
return i, buf[start:i], nil
}
// scanMeasurement examines the measurement part of a Point, returning
// the next state to move to, and the current location in the buffer.
func scanMeasurement(buf []byte, i int) (int, int, error) {
// Check first byte of measurement, anything except a comma is fine.
// It can't be a space, since whitespace is stripped prior to this
// function call.
if i >= len(buf) || buf[i] == ',' {
return -1, i, makeError("missing measurement", buf, i)
}
for {
i++
if i >= len(buf) {
// cpu
return -1, i, makeError("missing fields", buf, i)
}
if buf[i-1] == '\\' {
// Skip character (it's escaped).
continue
}
// Unescaped comma; move onto scanning the tags.
if buf[i] == ',' {
return tagKeyState, i + 1, nil
}
// Unescaped space; move onto scanning the fields.
if buf[i] == ' ' {
// cpu value=1.0
return fieldsState, i, nil
}
}
}
// scanTags examines all the tags in a Point, returning the position in buf
// where the Point's fields begin.
func scanTags(buf []byte, i int) (int, error) {
var (
err error
state = tagKeyState
)
for {
switch state {
case tagKeyState:
i, err = scanTagsKey(buf, i)
state = tagValueState // tag value always follows a tag key
case tagValueState:
state, i, err = scanTagsValue(buf, i)
case fieldsState:
return i, nil
}
if err != nil {
return i, err
}
}
}
// scanTagsKey scans each character in a tag key.
func scanTagsKey(buf []byte, i int) (int, error) {
// First character of the key.
if i >= len(buf) || buf[i] == ' ' || buf[i] == ',' || buf[i] == '=' {
// cpu,{'', ' ', ',', '='}
return i, makeError("missing tag key", buf, i)
}
// Examine each character in the tag key until we hit an unescaped
// equals (the tag value), or we hit an error (i.e., unescaped
// space or comma).
for {
i++
// Either we reached the end of the buffer or we hit an
// unescaped comma or space.
if i >= len(buf) ||
((buf[i] == ' ' || buf[i] == ',') && buf[i-1] != '\\') {
// cpu,tag{'', ' ', ','}
return i, makeError("missing tag value", buf, i)
}
if buf[i] == '=' && buf[i-1] != '\\' {
// cpu,tag=
return i + 1, nil
}
}
}
// scanTagsValue scans each character in a tag value.
func scanTagsValue(buf []byte, i int) (int, int, error) {
// Tag value cannot be empty.
if i >= len(buf) || buf[i] == ',' || buf[i] == ' ' {
// cpu,tag={',', ' '}
return -1, i, makeError("missing tag value", buf, i)
}
// Examine each character in the tag value until we hit an unescaped
// comma (move onto next tag key), an unescaped space (move onto
// fields), or we error out.
for {
i++
if i >= len(buf) {
// cpu,tag=value
return -1, i, makeError("missing fields", buf, i)
}
// An unescaped equals sign is an invalid tag value.
if buf[i] == '=' && buf[i-1] != '\\' {
// cpu,tag={'=', 'fo=o'}
return -1, i, makeError("invalid tag format", buf, i)
}
if buf[i] == ',' && buf[i-1] != '\\' {
// cpu,tag=foo,
return tagKeyState, i + 1, nil
}
// cpu,tag=foo value=1.0
// cpu,tag=foo\= value=1.0
if buf[i] == ' ' && buf[i-1] != '\\' {
return fieldsState, i, nil
}
}
}
// scanFields scans buf, starting at i for the fields section of a point. It returns
// the ending position and the byte slice of the fields within buf
func scanFields(buf []byte, i int) (int, []byte, error) {
start := skipWhitespace(buf, i)
i = start
quoted := false
// tracks how many '=' we've seen
equals := 0
// tracks how many commas we've seen
commas := 0
for {
// reached the end of buf?
if i >= len(buf) {
break
}
// escaped characters?
if buf[i] == '\\' && i+1 < len(buf) {
i += 2
continue
}
// If the value is quoted, scan until we get to the end quote
// Only quote values in the field value since quotes are not significant
// in the field key
if buf[i] == '"' && equals > commas {
quoted = !quoted
i++
continue
}
// If we see an =, ensure that there is at least one char before and after it
if buf[i] == '=' && !quoted {
equals++
// check for "... =123" but allow "a\ =123"
if buf[i-1] == ' ' && buf[i-2] != '\\' {
return i, buf[start:i], makeError("missing field key", buf, i)
}
// check for "...a=123,=456" but allow "a=123,a\,=456"
if buf[i-1] == ',' && buf[i-2] != '\\' {
return i, buf[start:i], makeError("missing field key", buf, i)
}
// check for "... value="
if i+1 >= len(buf) {
return i, buf[start:i], makeError("missing field value", buf, i)
}
// check for "... value=,value2=..."
if buf[i+1] == ',' || buf[i+1] == ' ' {
return i, buf[start:i], makeError("missing field value", buf, i)
}
if isNumeric(buf[i+1]) || buf[i+1] == '-' || buf[i+1] == 'N' || buf[i+1] == 'n' {
var err error
i, err = scanNumber(buf, i+1)
if err != nil {
return i, buf[start:i], err
}
continue
}
// If next byte is not a double-quote, the value must be a boolean
if buf[i+1] != '"' {
var err error
i, _, err = scanBoolean(buf, i+1)
if err != nil {
return i, buf[start:i], err
}
continue
}
}
if buf[i] == ',' && !quoted {
commas++
}
// reached end of block?
if buf[i] == ' ' && !quoted {
break
}
i++
}
if quoted {
return i, buf[start:i], makeError("unbalanced quotes", buf, i)
}
// check that all field sections had keys and values (e.g. prevent "a=1,b")
if equals == 0 || commas != equals-1 {
return i, buf[start:i], makeError("invalid field format", buf, i)
}
return i, buf[start:i], nil
}
// scanTime scans buf, starting at i, for the time section of a point. It
// returns the ending position and the byte slice of the timestamp within
// buf, and an error if the timestamp is not in the correct numeric format.
func scanTime(buf []byte, i int) (int, []byte, error) {
start := skipWhitespace(buf, i)
i = start
for {
// reached the end of buf?
if i >= len(buf) {
break
}
// Reached end of block or trailing whitespace?
if buf[i] == '\n' || buf[i] == ' ' {
break
}
// Handle negative timestamps
if i == start && buf[i] == '-' {
i++
continue
}
// Timestamps should be integers, make sure they are so we don't need
// to actually parse the timestamp until needed.
if buf[i] < '0' || buf[i] > '9' {
return i, buf[start:i], makeError("invalid timestamp", buf, i)
}
i++
}
return i, buf[start:i], nil
}
func isNumeric(b byte) bool {
return (b >= '0' && b <= '9') || b == '.'
}
// scanNumber returns the end position within buf, starting at i, after
// scanning over buf for an integer or float. It returns an
// error if an invalid number is scanned.
func scanNumber(buf []byte, i int) (int, error) {
start := i
var isInt bool
// Is negative number?
if i < len(buf) && buf[i] == '-' {
i++
// There must be more characters now, as just '-' is illegal.
if i == len(buf) {
return i, ErrInvalidNumber
}
}
// tracks whether we've seen a decimal point
decimal := false
// indicates the number is float in scientific notation
scientific := false
for {
if i >= len(buf) {
break
}
if buf[i] == ',' || buf[i] == ' ' {
break
}
if buf[i] == 'i' && i > start && !isInt {
isInt = true
i++
continue
}
if buf[i] == '.' {
// Can't have more than 1 decimal (e.g. 1.1.1 should fail)
if decimal {
return i, ErrInvalidNumber
}
decimal = true
}
// `e` is valid for floats but not as the first char
if i > start && (buf[i] == 'e' || buf[i] == 'E') {
scientific = true
i++
continue
}
// + and - are only valid at this point if they follow an e (scientific notation)
if (buf[i] == '+' || buf[i] == '-') && (buf[i-1] == 'e' || buf[i-1] == 'E') {
i++
continue
}
// NaN is an unsupported value
if i+2 < len(buf) && (buf[i] == 'N' || buf[i] == 'n') {
return i, ErrInvalidNumber
}
if !isNumeric(buf[i]) {
return i, ErrInvalidNumber
}
i++
}
if isInt && (decimal || scientific) {
return i, ErrInvalidNumber
}
numericDigits := i - start
if isInt {
numericDigits--
}
if decimal {
numericDigits--
}
if buf[start] == '-' {
numericDigits--
}
if numericDigits == 0 {
return i, ErrInvalidNumber
}
// It's more common that numbers will be within min/max range for their type but we need to prevent
// out-of-range numbers from being parsed successfully. This uses some simple heuristics to decide
// if we should parse the number to the actual type. It does not do it all the time because it incurs
// extra allocations and we end up converting the type again when writing points to disk.
if isInt {
// Make sure the last char is an 'i' for integers (e.g. 9i10 is not valid)
if buf[i-1] != 'i' {
return i, ErrInvalidNumber
}
// Parse the int to check bounds if the number of digits could be larger than the max range
// We subtract 1 from the index to remove the `i` from our tests
if len(buf[start:i-1]) >= maxInt64Digits || len(buf[start:i-1]) >= minInt64Digits {
if _, err := parseIntBytes(buf[start:i-1], 10, 64); err != nil {
return i, makeError(fmt.Sprintf("unable to parse integer %s: %s", buf[start:i-1], err), buf, i)
}
}
} else {
// Parse the float to check bounds if it's scientific or the number of digits could be larger than the max range
if scientific || len(buf[start:i]) >= maxFloat64Digits || len(buf[start:i]) >= minFloat64Digits {
if _, err := parseFloatBytes(buf[start:i], 10); err != nil {
return i, makeError("invalid float", buf, i)
}
}
}
return i, nil
}
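// A hedged sketch (not in the original file) of why the bounds check above
// rejects 9999999999999999999i: once the digit count reaches the int64 width,
// the parser validates the range, and a strconv-style parse (which
// parseIntBytes mirrors) fails because int64 max is 9223372036854775807.
// Assumes "strconv" is imported.
func exampleScanNumberBounds() {
    _, err := strconv.ParseInt("9999999999999999999", 10, 64)
    fmt.Println(err != nil) // true: value out of range
}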
// scanBoolean returns the end position within buf, starting at i, after
// scanning over buf for a boolean. Valid values for a boolean are
// t, T, true, True, TRUE, f, F, false, False, FALSE. It returns an error
// if an invalid boolean is scanned.
func scanBoolean(buf []byte, i int) (int, []byte, error) {
start := i
if i < len(buf) && (buf[i] != 't' && buf[i] != 'f' && buf[i] != 'T' && buf[i] != 'F') {
return i, buf[start:i], makeError("invalid value", buf, i)
}
i++
for {
if i >= len(buf) {
break
}
if buf[i] == ',' || buf[i] == ' ' {
break
}
i++
}
// Single char bool (t, T, f, F) is ok
if i-start == 1 {
return i, buf[start:i], nil
}
// length must be 4 for true or TRUE
if (buf[start] == 't' || buf[start] == 'T') && i-start != 4 {
return i, buf[start:i], makeError("invalid boolean", buf, i)
}
// length must be 5 for false or FALSE
if (buf[start] == 'f' || buf[start] == 'F') && i-start != 5 {
return i, buf[start:i], makeError("invalid boolean", buf, i)
}
// Otherwise
valid := false
switch buf[start] {
case 't':
valid = bytes.Equal(buf[start:i], []byte("true"))
case 'f':
valid = bytes.Equal(buf[start:i], []byte("false"))
case 'T':
valid = bytes.Equal(buf[start:i], []byte("TRUE")) || bytes.Equal(buf[start:i], []byte("True"))
case 'F':
valid = bytes.Equal(buf[start:i], []byte("FALSE")) || bytes.Equal(buf[start:i], []byte("False"))
}
if !valid {
return i, buf[start:i], makeError("invalid boolean", buf, i)
}
return i, buf[start:i], nil
}
// skipWhitespace returns the end position within buf, starting at i after
// scanning over spaces, tabs, and NUL bytes
func skipWhitespace(buf []byte, i int) int {
for i < len(buf) {
if buf[i] != ' ' && buf[i] != '\t' && buf[i] != 0 {
break
}
i++
}
return i
}
// makeError is a helper function for making a metric parsing error.
// reason is the reason that the error occurred.
// buf should be the current buffer we are parsing.
// i is the current index, to give some context on where in the buffer we are.
func makeError(reason string, buf []byte, i int) error {
return fmt.Errorf("metric parsing error, reason: [%s], buffer: [%s], index: [%d]",
reason, buf, i)
}
// getPrecisionMultiplier will return a multiplier for the precision specified.
func getPrecisionMultiplier(precision string) int64 {
d := time.Nanosecond
switch precision {
case "u":
d = time.Microsecond
case "ms":
d = time.Millisecond
case "s":
d = time.Second
case "m":
d = time.Minute
case "h":
d = time.Hour
}
return int64(d)
}
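// A small usage sketch (not in the original file): a caller presumably
// multiplies the raw parsed timestamp by this multiplier to normalize it to
// nanoseconds, which matches TestParsePrecision in the tests below.
func examplePrecisionMultiplier() {
    var parsed int64 = 1491847420             // raw value at "s" precision
    multiplier := getPrecisionMultiplier("s") // 1e9, nanoseconds per second
    fmt.Println(parsed * multiplier)          // 1491847420000000000
}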

metric/parse_test.go (new file, 413 lines)

@@ -0,0 +1,413 @@
package metric
import (
"testing"
"time"
"github.com/stretchr/testify/assert"
)
const trues = `booltest b=T
booltest b=t
booltest b=True
booltest b=TRUE
booltest b=true
`
const falses = `booltest b=F
booltest b=f
booltest b=False
booltest b=FALSE
booltest b=false
`
const withEscapes = `w\,\ eather,host=local temp=99 1465839830100400200
w\,eather,host=local temp=99 1465839830100400200
weather,location=us\,midwest temperature=82 1465839830100400200
weather,location=us-midwest temp\=rature=82 1465839830100400200
weather,location\ place=us-midwest temperature=82 1465839830100400200
weather,location=us-midwest temperature="too\"hot\"" 1465839830100400200
`
const withTimestamps = `cpu usage=99 1480595849000000000
cpu usage=99 1480595850000000000
cpu usage=99 1480595851700030000
cpu usage=99 1480595852000000300
`
const sevenMetrics = `cpu,host=foo,datacenter=us-east idle=99,busy=1i,b=true,s="string"
cpu,host=foo,datacenter=us-east idle=99,busy=1i,b=true,s="string"
cpu,host=foo,datacenter=us-east idle=99,busy=1i,b=true,s="string"
cpu,host=foo,datacenter=us-east idle=99,busy=1i,b=true,s="string"
cpu,host=foo,datacenter=us-east idle=99,busy=1i,b=true,s="string"
cpu,host=foo,datacenter=us-east idle=99,busy=1i,b=true,s="string"
cpu,host=foo,datacenter=us-east idle=99,busy=1i,b=true,s="string"
`
const negMetrics = `weather,host=local temp=-99i,temp_float=-99.4 1465839830100400200
`
// some metrics are invalid
const someInvalid = `cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu3, host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu4 , usage_idle=99,usage_busy=1
cpu 1480595852000000300
cpu usage=99 1480595852foobar300
cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
`
func TestParse(t *testing.T) {
start := time.Now()
metrics, err := Parse([]byte(sevenMetrics))
assert.NoError(t, err)
assert.Len(t, metrics, 7)
// all metrics parsed together w/o a timestamp should have the same time.
firstTime := metrics[0].Time()
for _, m := range metrics {
assert.Equal(t,
map[string]interface{}{
"idle": float64(99),
"busy": int64(1),
"b": true,
"s": "string",
},
m.Fields(),
)
assert.Equal(t,
map[string]string{
"host": "foo",
"datacenter": "us-east",
},
m.Tags(),
)
assert.True(t, m.Time().After(start))
assert.True(t, m.Time().Equal(firstTime))
}
}
func TestParseNegNumbers(t *testing.T) {
metrics, err := Parse([]byte(negMetrics))
assert.NoError(t, err)
assert.Len(t, metrics, 1)
assert.Equal(t,
map[string]interface{}{
"temp": int64(-99),
"temp_float": float64(-99.4),
},
metrics[0].Fields(),
)
assert.Equal(t,
map[string]string{
"host": "local",
},
metrics[0].Tags(),
)
}
func TestParseErrors(t *testing.T) {
start := time.Now()
metrics, err := Parse([]byte(someInvalid))
assert.Error(t, err)
assert.Len(t, metrics, 4)
// all metrics parsed together w/o a timestamp should have the same time.
firstTime := metrics[0].Time()
for _, m := range metrics {
assert.Equal(t,
map[string]interface{}{
"usage_idle": float64(99),
"usage_busy": float64(1),
},
m.Fields(),
)
assert.Equal(t,
map[string]string{
"host": "foo",
"datacenter": "us-east",
},
m.Tags(),
)
assert.True(t, m.Time().After(start))
assert.True(t, m.Time().Equal(firstTime))
}
}
func TestParseWithTimestamps(t *testing.T) {
metrics, err := Parse([]byte(withTimestamps))
assert.NoError(t, err)
assert.Len(t, metrics, 4)
expectedTimestamps := []time.Time{
time.Unix(0, 1480595849000000000),
time.Unix(0, 1480595850000000000),
time.Unix(0, 1480595851700030000),
time.Unix(0, 1480595852000000300),
}
// each metric should have the timestamp parsed from its own line.
for i, m := range metrics {
assert.Equal(t,
map[string]interface{}{
"usage": float64(99),
},
m.Fields(),
)
assert.True(t, m.Time().Equal(expectedTimestamps[i]))
}
}
func TestParseEscapes(t *testing.T) {
metrics, err := Parse([]byte(withEscapes))
assert.NoError(t, err)
assert.Len(t, metrics, 6)
tests := []struct {
name string
fields map[string]interface{}
tags map[string]string
}{
{
name: `w, eather`,
fields: map[string]interface{}{"temp": float64(99)},
tags: map[string]string{"host": "local"},
},
{
name: `w,eather`,
fields: map[string]interface{}{"temp": float64(99)},
tags: map[string]string{"host": "local"},
},
{
name: `weather`,
fields: map[string]interface{}{"temperature": float64(82)},
tags: map[string]string{"location": `us,midwest`},
},
{
name: `weather`,
fields: map[string]interface{}{`temp=rature`: float64(82)},
tags: map[string]string{"location": `us-midwest`},
},
{
name: `weather`,
fields: map[string]interface{}{"temperature": float64(82)},
tags: map[string]string{`location place`: `us-midwest`},
},
{
name: `weather`,
fields: map[string]interface{}{`temperature`: `too"hot"`},
tags: map[string]string{"location": `us-midwest`},
},
}
for i, test := range tests {
assert.Equal(t, test.name, metrics[i].Name())
assert.Equal(t, test.fields, metrics[i].Fields())
assert.Equal(t, test.tags, metrics[i].Tags())
}
}
func TestParseTrueBooleans(t *testing.T) {
metrics, err := Parse([]byte(trues))
assert.NoError(t, err)
assert.Len(t, metrics, 5)
for _, metric := range metrics {
assert.Equal(t, "booltest", metric.Name())
assert.Equal(t, true, metric.Fields()["b"])
}
}
func TestParseFalseBooleans(t *testing.T) {
metrics, err := Parse([]byte(falses))
assert.NoError(t, err)
assert.Len(t, metrics, 5)
for _, metric := range metrics {
assert.Equal(t, "booltest", metric.Name())
assert.Equal(t, false, metric.Fields()["b"])
}
}
func TestParsePointBadNumber(t *testing.T) {
for _, tt := range []string{
"cpu v=- ",
"cpu v=-i ",
"cpu v=-. ",
"cpu v=. ",
"cpu v=1.0i ",
"cpu v=1ii ",
"cpu v=1a ",
"cpu v=-e-e-e ",
"cpu v=42+3 ",
"cpu v= ",
} {
_, err := Parse([]byte(tt + "\n"))
assert.Error(t, err, tt)
}
}
func TestParseTagsMissingParts(t *testing.T) {
for _, tt := range []string{
`cpu,host`,
`cpu,host,`,
`cpu,host=`,
`cpu,f=oo=bar value=1`,
`cpu,host value=1i`,
`cpu,host=serverA,region value=1i`,
`cpu,host=serverA,region= value=1i`,
`cpu,host=serverA,region=,zone=us-west value=1i`,
`cpu, value=1`,
`cpu, ,,`,
`cpu,,,`,
`cpu,host=serverA,=us-east value=1i`,
`cpu,host=serverAa\,,=us-east value=1i`,
`cpu,host=serverA\,,=us-east value=1i`,
`cpu, =serverA value=1i`,
} {
_, err := Parse([]byte(tt + "\n"))
assert.Error(t, err, tt)
}
}
func TestParsePointWhitespace(t *testing.T) {
for _, tt := range []string{
`cpu value=1.0 1257894000000000000`,
`cpu value=1.0 1257894000000000000`,
`cpu value=1.0 1257894000000000000`,
`cpu value=1.0 1257894000000000000 `,
} {
m, err := Parse([]byte(tt + "\n"))
assert.NoError(t, err, tt)
assert.Equal(t, "cpu", m[0].Name())
assert.Equal(t, map[string]interface{}{"value": float64(1)}, m[0].Fields())
}
}
func TestParsePointInvalidFields(t *testing.T) {
for _, tt := range []string{
"test,foo=bar a=101,=value",
"test,foo=bar =value",
"test,foo=bar a=101,key=",
"test,foo=bar key=",
`test,foo=bar a=101,b="foo`,
} {
_, err := Parse([]byte(tt + "\n"))
assert.Error(t, err, tt)
}
}
func TestParsePointNoFields(t *testing.T) {
for _, tt := range []string{
"cpu_load_short,host=server01,region=us-west",
"very_long_measurement_name",
"cpu,host==",
"============",
"cpu",
"cpu\n\n\n\n\n\n\n",
" ",
} {
_, err := Parse([]byte(tt + "\n"))
assert.Error(t, err, tt)
}
}
// a b=1 << this is the shortest possible metric
// any shorter is just ignored
func TestParseBufTooShort(t *testing.T) {
for _, tt := range []string{
"",
"a",
"a ",
"a b=",
} {
_, err := Parse([]byte(tt + "\n"))
assert.Error(t, err, tt)
}
}
func TestParseInvalidBooleans(t *testing.T) {
for _, tt := range []string{
"test b=tru",
"test b=fals",
"test b=faLse",
"test q=foo",
"test b=lambchops",
} {
_, err := Parse([]byte(tt + "\n"))
assert.Error(t, err, tt)
}
}
func TestParseInvalidNumbers(t *testing.T) {
for _, tt := range []string{
"test b=-",
"test b=1.1.1",
"test b=nan",
"test b=9i10",
"test b=9999999999999999999i",
} {
_, err := Parse([]byte(tt + "\n"))
assert.Error(t, err, tt)
}
}
func TestParseNegativeTimestamps(t *testing.T) {
for _, tt := range []string{
"test foo=101 -1257894000000000000",
} {
metrics, err := Parse([]byte(tt + "\n"))
assert.NoError(t, err, tt)
assert.True(t, metrics[0].Time().Equal(time.Unix(0, -1257894000000000000)))
}
}
func TestParsePrecision(t *testing.T) {
for _, tt := range []struct {
line string
precision string
expected int64
}{
{"test v=42 1491847420", "s", 1491847420000000000},
{"test v=42 1491847420123", "ms", 1491847420123000000},
{"test v=42 1491847420123456", "u", 1491847420123456000},
{"test v=42 1491847420123456789", "ns", 1491847420123456789},
{"test v=42 1491847420123456789", "1s", 1491847420123456789},
{"test v=42 1491847420123456789", "asdf", 1491847420123456789},
} {
metrics, err := ParseWithDefaultTimePrecision(
[]byte(tt.line+"\n"), time.Now(), tt.precision)
assert.NoError(t, err)
assert.Equal(t, tt.expected, metrics[0].UnixNano())
}
}
func TestParsePrecisionUnsetTime(t *testing.T) {
for _, tt := range []struct {
line string
precision string
}{
{"test v=42", "s"},
{"test v=42", "ns"},
} {
_, err := ParseWithDefaultTimePrecision(
[]byte(tt.line+"\n"), time.Now(), tt.precision)
assert.NoError(t, err)
}
}
func TestParseMaxKeyLength(t *testing.T) {
key := ""
for {
if len(key) > MaxKeyLength {
break
}
key += "test"
}
_, err := Parse([]byte(key + " value=1\n"))
assert.Error(t, err)
}

metric/reader.go (new file, 159 lines)

@@ -0,0 +1,159 @@
package metric
import (
"io"
"github.com/influxdata/telegraf"
)
type state int
const (
_ state = iota
// normal state copies whole metrics into the given buffer until we can't
// fit the next metric.
normal
// split state means that we have a metric that we were able to split, so
// that we can fit it into multiple metrics (and calls to Read)
split
// overflow state means that we have a metric that didn't fit into a single
// buffer, and needs to be split across multiple calls to Read.
overflow
// splitOverflow state means that a split metric didn't fit into a single
// buffer, and needs to be split across multiple calls to Read.
splitOverflow
// done means we're done reading metrics, and now always return (0, io.EOF)
done
)
type reader struct {
metrics []telegraf.Metric
splitMetrics []telegraf.Metric
buf []byte
state state
// metric index
iM int
// split metric index
iSM int
// buffer index
iB int
}
func NewReader(metrics []telegraf.Metric) io.Reader {
return &reader{
metrics: metrics,
state: normal,
}
}
func (r *reader) Read(p []byte) (n int, err error) {
var i int
switch r.state {
case done:
return 0, io.EOF
case normal:
for {
// this for-loop is the sunny-day scenario, where we are given a
// buffer that is large enough to hold at least a single metric.
// all of the cases below it are edge-cases.
if r.metrics[r.iM].Len() <= len(p[i:]) {
i += r.metrics[r.iM].SerializeTo(p[i:])
} else {
break
}
r.iM++
if r.iM == len(r.metrics) {
r.state = done
return i, io.EOF
}
}
// if we haven't written any bytes, check if we can split the current
// metric into multiple full metrics at a smaller size.
if i == 0 {
tmp := r.metrics[r.iM].Split(len(p))
if len(tmp) > 1 {
r.splitMetrics = tmp
r.state = split
if r.splitMetrics[0].Len() <= len(p) {
i += r.splitMetrics[0].SerializeTo(p)
r.iSM = 1
} else {
// splitting didn't quite work, so we'll drop down and
// overflow the metric.
r.state = normal
r.iSM = 0
}
}
}
// if we haven't written any bytes and we're not at the end of the metrics
// slice, then it means we have a single metric that is larger than the
// provided buffer.
if i == 0 {
r.buf = r.metrics[r.iM].Serialize()
i += copy(p, r.buf[r.iB:])
r.iB += i
r.state = overflow
}
case split:
if r.splitMetrics[r.iSM].Len() <= len(p) {
// write the current split metric
i += r.splitMetrics[r.iSM].SerializeTo(p)
r.iSM++
if r.iSM >= len(r.splitMetrics) {
// done writing the current split metrics
r.iSM = 0
r.iM++
if r.iM == len(r.metrics) {
r.state = done
return i, io.EOF
}
r.state = normal
}
} else {
// This would only happen if we split the metric, and then a
// subsequent buffer was smaller than the initial one given,
// so that our split metric no longer fits.
r.buf = r.splitMetrics[r.iSM].Serialize()
i += copy(p, r.buf[r.iB:])
r.iB += i
r.state = splitOverflow
}
case splitOverflow:
i = copy(p, r.buf[r.iB:])
r.iB += i
if r.iB >= len(r.buf) {
r.iB = 0
r.iSM++
if r.iSM == len(r.splitMetrics) {
r.iM++
if r.iM == len(r.metrics) {
r.state = done
return i, io.EOF
}
r.state = normal
} else {
r.state = split
}
}
case overflow:
i = copy(p, r.buf[r.iB:])
r.iB += i
if r.iB >= len(r.buf) {
r.iB = 0
r.iM++
if r.iM == len(r.metrics) {
r.state = done
return i, io.EOF
}
r.state = normal
}
}
return i, nil
}
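// A usage sketch (not part of the original file): draining a reader through a
// deliberately small buffer exercises the overflow state above; each Read
// returns at most 10 bytes of serialized line protocol until io.EOF.
// Assumes "fmt" and "time" are imported alongside the imports above.
func exampleReaderOverflow() {
    m, _ := New("foo", map[string]string{},
        map[string]interface{}{"value": int64(10)},
        time.Unix(1481032190, 0))
    r := NewReader([]telegraf.Metric{m})
    buf := make([]byte, 10)
    for {
        n, err := r.Read(buf)
        fmt.Printf("%q\n", buf[:n]) // "foo value=", "10i 148103", ...
        if err == io.EOF {
            break
        }
    }
}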

metric/reader_test.go (new file, 635 lines)

@@ -0,0 +1,635 @@
package metric
import (
"io"
"io/ioutil"
"regexp"
"testing"
"time"
"github.com/influxdata/telegraf"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func BenchmarkMetricReader(b *testing.B) {
metrics := make([]telegraf.Metric, 10)
for i := 0; i < 10; i++ {
metrics[i], _ = New("foo", map[string]string{},
map[string]interface{}{"value": int64(1)}, time.Now())
}
for n := 0; n < b.N; n++ {
r := NewReader(metrics)
io.Copy(ioutil.Discard, r)
}
}
func TestMetricReader(t *testing.T) {
ts := time.Unix(1481032190, 0)
metrics := make([]telegraf.Metric, 10)
for i := 0; i < 10; i++ {
metrics[i], _ = New("foo", map[string]string{},
map[string]interface{}{"value": int64(1)}, ts)
}
r := NewReader(metrics)
buf := make([]byte, 35)
for i := 0; i < 10; i++ {
n, err := r.Read(buf)
if err != nil {
assert.True(t, err == io.EOF, err.Error())
}
assert.Equal(t, 33, n)
assert.Equal(t, "foo value=1i 1481032190000000000\n", string(buf[0:n]))
}
// reader should now be done, and always return 0, io.EOF
for i := 0; i < 10; i++ {
n, err := r.Read(buf)
assert.True(t, err == io.EOF, err.Error())
assert.Equal(t, 0, n)
}
}
func TestMetricReader_OverflowMetric(t *testing.T) {
ts := time.Unix(1481032190, 0)
m, _ := New("foo", map[string]string{},
map[string]interface{}{"value": int64(10)}, ts)
metrics := []telegraf.Metric{m}
r := NewReader(metrics)
buf := make([]byte, 5)
tests := []struct {
exp string
err error
n int
}{
{
"foo v",
nil,
5,
},
{
"alue=",
nil,
5,
},
{
"10i 1",
nil,
5,
},
{
"48103",
nil,
5,
},
{
"21900",
nil,
5,
},
{
"00000",
nil,
5,
},
{
"000\n",
io.EOF,
4,
},
{
"",
io.EOF,
0,
},
}
for _, test := range tests {
n, err := r.Read(buf)
assert.Equal(t, test.n, n)
assert.Equal(t, test.exp, string(buf[0:n]))
assert.Equal(t, test.err, err)
}
}
// Regression test for when a metric is the same size as the buffer.
//
// Previously EOF would not be set until the next call to Read.
func TestMetricReader_MetricSizeEqualsBufferSize(t *testing.T) {
ts := time.Unix(1481032190, 0)
m1, _ := New("foo", map[string]string{},
map[string]interface{}{"a": int64(1)}, ts)
metrics := []telegraf.Metric{m1}
r := NewReader(metrics)
buf := make([]byte, m1.Len())
for {
n, err := r.Read(buf)
// Should never read 0 bytes except at EOF, unless the input buffer is 0 length
if n == 0 {
require.Equal(t, io.EOF, err)
break
}
// Lines should be terminated with an LF
if err == io.EOF {
require.Equal(t, uint8('\n'), buf[n-1])
break
}
require.NoError(t, err)
}
}
// Regression test for when a metric needs to be split and one of the
// split metrics is exactly the size of the buffer.
//
// Previously an empty string would be returned on the next Read without error,
// and then the next Read call would panic.
func TestMetricReader_SplitWithExactLengthSplit(t *testing.T) {
ts := time.Unix(1481032190, 0)
m1, _ := New("foo", map[string]string{},
map[string]interface{}{"a": int64(1), "bb": int64(2)}, ts)
metrics := []telegraf.Metric{m1}
r := NewReader(metrics)
buf := make([]byte, 30)
// foo a=1i,bb=2i 1481032190000000000\n // len 35
//
// Requires this specific split order:
// foo a=1i 1481032190000000000\n // len 29
// foo bb=2i 1481032190000000000\n // len 30
for {
n, err := r.Read(buf)
// Should never read 0 bytes except at EOF, unless the input buffer is 0 length
if n == 0 {
require.Equal(t, io.EOF, err)
break
}
// Lines should be terminated with an LF
if err == io.EOF {
require.Equal(t, uint8('\n'), buf[n-1])
break
}
require.NoError(t, err)
}
}
// Regression test for when a metric needs to be split and one of the
// split metrics is larger than the buffer.
//
// Previously the metric index would be set incorrectly causing a panic.
func TestMetricReader_SplitOverflowOversized(t *testing.T) {
ts := time.Unix(1481032190, 0)
m1, _ := New("foo", map[string]string{},
map[string]interface{}{
"a": int64(1),
"bbb": int64(2),
}, ts)
metrics := []telegraf.Metric{m1}
r := NewReader(metrics)
buf := make([]byte, 30)
// foo a=1i,bbb=2i 1481032190000000000\n // len 36
//
// foo a=1i 1481032190000000000\n // len 29
// foo bbb=2i 1481032190000000000\n // len 31
for {
n, err := r.Read(buf)
// Should never read 0 bytes except at EOF, unless the input buffer is 0 length
if n == 0 {
require.Equal(t, io.EOF, err)
break
}
// Lines should be terminated with an LF
if err == io.EOF {
require.Equal(t, uint8('\n'), buf[n-1])
break
}
require.NoError(t, err)
}
}
// Regression test for when a split metric exactly fits in the buffer.
//
// Previously the metric would be overflow split when not required.
func TestMetricReader_SplitOverflowUneeded(t *testing.T) {
ts := time.Unix(1481032190, 0)
m1, _ := New("foo", map[string]string{},
map[string]interface{}{"a": int64(1), "b": int64(2)}, ts)
metrics := []telegraf.Metric{m1}
r := NewReader(metrics)
buf := make([]byte, 29)
// foo a=1i,b=2i 1481032190000000000\n // len 34
//
// foo a=1i 1481032190000000000\n // len 29
// foo b=2i 1481032190000000000\n // len 29
for {
n, err := r.Read(buf)
// Should never read 0 bytes except at EOF, unless the input buffer is 0 length
if n == 0 {
require.Equal(t, io.EOF, err)
break
}
// Lines should be terminated with an LF
if err == io.EOF {
require.Equal(t, uint8('\n'), buf[n-1])
break
}
require.NoError(t, err)
}
}
func TestMetricReader_OverflowMultipleMetrics(t *testing.T) {
ts := time.Unix(1481032190, 0)
m, _ := New("foo", map[string]string{},
map[string]interface{}{"value": int64(10)}, ts)
metrics := []telegraf.Metric{m, m.Copy()}
r := NewReader(metrics)
buf := make([]byte, 10)
tests := []struct {
exp string
err error
n int
}{
{
"foo value=",
nil,
10,
},
{
"10i 148103",
nil,
10,
},
{
"2190000000",
nil,
10,
},
{
"000\n",
nil,
4,
},
{
"foo value=",
nil,
10,
},
{
"10i 148103",
nil,
10,
},
{
"2190000000",
nil,
10,
},
{
"000\n",
io.EOF,
4,
},
{
"",
io.EOF,
0,
},
}
for _, test := range tests {
n, err := r.Read(buf)
assert.Equal(t, test.n, n)
assert.Equal(t, test.exp, string(buf[0:n]))
assert.Equal(t, test.err, err)
}
}
// test splitting a metric
func TestMetricReader_SplitMetric(t *testing.T) {
ts := time.Unix(1481032190, 0)
m1, _ := New("foo", map[string]string{},
map[string]interface{}{
"value1": int64(10),
"value2": int64(10),
"value3": int64(10),
"value4": int64(10),
"value5": int64(10),
"value6": int64(10),
},
ts,
)
metrics := []telegraf.Metric{m1}
r := NewReader(metrics)
buf := make([]byte, 60)
tests := []struct {
expRegex string
err error
n int
}{
{
`foo value\d=10i,value\d=10i,value\d=10i 1481032190000000000\n`,
nil,
57,
},
{
`foo value\d=10i,value\d=10i,value\d=10i 1481032190000000000\n`,
io.EOF,
57,
},
{
"",
io.EOF,
0,
},
}
for _, test := range tests {
n, err := r.Read(buf)
assert.Equal(t, test.n, n)
re := regexp.MustCompile(test.expRegex)
assert.True(t, re.MatchString(string(buf[0:n])), string(buf[0:n]))
assert.Equal(t, test.err, err)
}
}
// test an array with one split metric and one unsplit
func TestMetricReader_SplitMetric2(t *testing.T) {
ts := time.Unix(1481032190, 0)
m1, _ := New("foo", map[string]string{},
map[string]interface{}{
"value1": int64(10),
"value2": int64(10),
"value3": int64(10),
"value4": int64(10),
"value5": int64(10),
"value6": int64(10),
},
ts,
)
m2, _ := New("foo", map[string]string{},
map[string]interface{}{
"value1": int64(10),
},
ts,
)
metrics := []telegraf.Metric{m1, m2}
r := NewReader(metrics)
buf := make([]byte, 60)
tests := []struct {
expRegex string
err error
n int
}{
{
`foo value\d=10i,value\d=10i,value\d=10i 1481032190000000000\n`,
nil,
57,
},
{
`foo value\d=10i,value\d=10i,value\d=10i 1481032190000000000\n`,
nil,
57,
},
{
`foo value1=10i 1481032190000000000\n`,
io.EOF,
35,
},
{
"",
io.EOF,
0,
},
}
for _, test := range tests {
n, err := r.Read(buf)
assert.Equal(t, test.n, n)
re := regexp.MustCompile(test.expRegex)
assert.True(t, re.MatchString(string(buf[0:n])), string(buf[0:n]))
assert.Equal(t, test.err, err)
}
}
// test split that results in metrics that are still too long, which results in
// the reader falling back to regular overflow.
func TestMetricReader_SplitMetricTooLong(t *testing.T) {
ts := time.Unix(1481032190, 0)
m1, _ := New("foo", map[string]string{},
map[string]interface{}{
"value1": int64(10),
"value2": int64(10),
},
ts,
)
metrics := []telegraf.Metric{m1}
r := NewReader(metrics)
buf := make([]byte, 30)
tests := []struct {
expRegex string
err error
n int
}{
{
`foo value\d=10i,value\d=10i 1481`,
nil,
30,
},
{
`032190000000000\n`,
io.EOF,
16,
},
{
"",
io.EOF,
0,
},
}
for _, test := range tests {
n, err := r.Read(buf)
assert.Equal(t, test.n, n)
re := regexp.MustCompile(test.expRegex)
assert.True(t, re.MatchString(string(buf[0:n])), string(buf[0:n]))
assert.Equal(t, test.err, err)
}
}
// test split with a changing buffer size in the middle of subsequent calls
// to Read
func TestMetricReader_SplitMetricChangingBuffer(t *testing.T) {
ts := time.Unix(1481032190, 0)
m1, _ := New("foo", map[string]string{},
map[string]interface{}{
"value1": int64(10),
"value2": int64(10),
"value3": int64(10),
},
ts,
)
m2, _ := New("foo", map[string]string{},
map[string]interface{}{
"value1": int64(10),
},
ts,
)
metrics := []telegraf.Metric{m1, m2}
r := NewReader(metrics)
tests := []struct {
expRegex string
err error
n int
buf []byte
}{
{
`foo value\d=10i 1481032190000000000\n`,
nil,
35,
make([]byte, 36),
},
{
`foo value\d=10i 148103219000000`,
nil,
30,
make([]byte, 30),
},
{
`0000\n`,
nil,
5,
make([]byte, 30),
},
{
`foo value\d=10i 1481032190000000000\n`,
nil,
35,
make([]byte, 36),
},
{
`foo value1=10i 1481032190000000000\n`,
io.EOF,
35,
make([]byte, 36),
},
{
"",
io.EOF,
0,
make([]byte, 36),
},
}
for _, test := range tests {
n, err := r.Read(test.buf)
assert.Equal(t, test.n, n, test.expRegex)
re := regexp.MustCompile(test.expRegex)
assert.True(t, re.MatchString(string(test.buf[0:n])), string(test.buf[0:n]))
assert.Equal(t, test.err, err, test.expRegex)
}
}
// test split with a changing buffer size in the middle of subsequent calls
// to Read
func TestMetricReader_SplitMetricChangingBuffer2(t *testing.T) {
ts := time.Unix(1481032190, 0)
m1, _ := New("foo", map[string]string{},
map[string]interface{}{
"value1": int64(10),
"value2": int64(10),
},
ts,
)
m2, _ := New("foo", map[string]string{},
map[string]interface{}{
"value1": int64(10),
},
ts,
)
metrics := []telegraf.Metric{m1, m2}
r := NewReader(metrics)
tests := []struct {
expRegex string
err error
n int
buf []byte
}{
{
`foo value\d=10i 1481032190000000000\n`,
nil,
35,
make([]byte, 36),
},
{
`foo value\d=10i 148103219000000`,
nil,
30,
make([]byte, 30),
},
{
`0000\n`,
nil,
5,
make([]byte, 30),
},
{
`foo value1=10i 1481032190000000000\n`,
io.EOF,
35,
make([]byte, 36),
},
{
"",
io.EOF,
0,
make([]byte, 36),
},
}
for _, test := range tests {
n, err := r.Read(test.buf)
assert.Equal(t, test.n, n, test.expRegex)
re := regexp.MustCompile(test.expRegex)
assert.True(t, re.MatchString(string(test.buf[0:n])), string(test.buf[0:n]))
assert.Equal(t, test.err, err, test.expRegex)
}
}
func TestMetricRoundtrip(t *testing.T) {
const lp = `nstat,bu=linux,cls=server,dc=cer,env=production,host=hostname,name=netstat,sr=database IpExtInBcastOctets=12570626154i,IpExtInBcastPkts=95541226i,IpExtInCEPkts=0i,IpExtInCsumErrors=0i,IpExtInECT0Pkts=55674i,IpExtInECT1Pkts=0i,IpExtInMcastOctets=5928296i,IpExtInMcastPkts=174365i,IpExtInNoECTPkts=17965863529i,IpExtInNoRoutes=20i,IpExtInOctets=3334866321815i,IpExtInTruncatedPkts=0i,IpExtOutBcastOctets=0i,IpExtOutBcastPkts=0i,IpExtOutMcastOctets=0i,IpExtOutMcastPkts=0i,IpExtOutOctets=31397892391399i,TcpExtArpFilter=0i,TcpExtBusyPollRxPackets=0i,TcpExtDelayedACKLocked=14094i,TcpExtDelayedACKLost=302083i,TcpExtDelayedACKs=55486507i,TcpExtEmbryonicRsts=11879i,TcpExtIPReversePathFilter=0i,TcpExtListenDrops=1736i,TcpExtListenOverflows=0i,TcpExtLockDroppedIcmps=0i,TcpExtOfoPruned=0i,TcpExtOutOfWindowIcmps=8i,TcpExtPAWSActive=0i,TcpExtPAWSEstab=974i,TcpExtPAWSPassive=0i,TcpExtPruneCalled=0i,TcpExtRcvPruned=0i,TcpExtSyncookiesFailed=12593i,TcpExtSyncookiesRecv=0i,TcpExtSyncookiesSent=0i,TcpExtTCPACKSkippedChallenge=0i,TcpExtTCPACKSkippedFinWait2=0i,TcpExtTCPACKSkippedPAWS=806i,TcpExtTCPACKSkippedSeq=519i,TcpExtTCPACKSkippedSynRecv=0i,TcpExtTCPACKSkippedTimeWait=0i,TcpExtTCPAbortFailed=0i,TcpExtTCPAbortOnClose=22i,TcpExtTCPAbortOnData=36593i,TcpExtTCPAbortOnLinger=0i,TcpExtTCPAbortOnMemory=0i,TcpExtTCPAbortOnTimeout=674i,TcpExtTCPAutoCorking=494253233i,TcpExtTCPBacklogDrop=0i,TcpExtTCPChallengeACK=281i,TcpExtTCPDSACKIgnoredNoUndo=93354i,TcpExtTCPDSACKIgnoredOld=336i,TcpExtTCPDSACKOfoRecv=0i,TcpExtTCPDSACKOfoSent=7i,TcpExtTCPDSACKOldSent=302073i,TcpExtTCPDSACKRecv=215884i,TcpExtTCPDSACKUndo=7633i,TcpExtTCPDeferAcceptDrop=0i,TcpExtTCPDirectCopyFromBacklog=0i,TcpExtTCPDirectCopyFromPrequeue=0i,TcpExtTCPFACKReorder=1320i,TcpExtTCPFastOpenActive=0i,TcpExtTCPFastOpenActiveFail=0i,TcpExtTCPFastOpenCookieReqd=0i,TcpExtTCPFastOpenListenOverflow=0i,TcpExtTCPFastOpenPassive=0i,TcpExtTCPFastOpenPassiveFail=0i,TcpExtTCPFastRetrans=350681i,TcpExtTCPForwardRetrans=142168i,TcpExtTCPFromZeroWindowAdv=4317i,TcpExtTCPFullUndo=29502i,TcpExtTCPHPAcks=10267073000i,TcpExtTCPHPHits=5629837098i,TcpExtTCPHPHitsToUser=0i,TcpExtTCPHystartDelayCwnd=285127i,TcpExtTCPHystartDelayDetect=12318i,TcpExtTCPHystartTrainCwnd=69160570i,TcpExtTCPHystartTrainDetect=3315799i,TcpExtTCPLossFailures=109i,TcpExtTCPLossProbeRecovery=110819i,TcpExtTCPLossProbes=233995i,TcpExtTCPLossUndo=5276i,TcpExtTCPLostRetransmit=397i,TcpExtTCPMD5NotFound=0i,TcpExtTCPMD5Unexpected=0i,TcpExtTCPMemoryPressures=0i,TcpExtTCPMinTTLDrop=0i,TcpExtTCPOFODrop=0i,TcpExtTCPOFOMerge=7i,TcpExtTCPOFOQueue=15196i,TcpExtTCPOrigDataSent=29055119435i,TcpExtTCPPartialUndo=21320i,TcpExtTCPPrequeueDropped=0i,TcpExtTCPPrequeued=0i,TcpExtTCPPureAcks=1236441827i,TcpExtTCPRcvCoalesce=225590473i,TcpExtTCPRcvCollapsed=0i,TcpExtTCPRenoFailures=0i,TcpExtTCPRenoRecovery=0i,TcpExtTCPRenoRecoveryFail=0i,TcpExtTCPRenoReorder=0i,TcpExtTCPReqQFullDoCookies=0i,TcpExtTCPReqQFullDrop=0i,TcpExtTCPRetransFail=41i,TcpExtTCPSACKDiscard=0i,TcpExtTCPSACKReneging=0i,TcpExtTCPSACKReorder=4307i,TcpExtTCPSYNChallenge=244i,TcpExtTCPSackFailures=1698i,TcpExtTCPSackMerged=184668i,TcpExtTCPSackRecovery=97369i,TcpExtTCPSackRecoveryFail=381i,TcpExtTCPSackShiftFallback=2697079i,TcpExtTCPSackShifted=760299i,TcpExtTCPSchedulerFailed=0i,TcpExtTCPSlowStartRetrans=9276i,TcpExtTCPSpuriousRTOs=959i,TcpExtTCPSpuriousRtxHostQueues=2973i,TcpExtTCPSynRetrans=200970i,TcpExtTCPTSReorder=15221i,TcpExtTCPTimeWaitOverflow=0i,TcpExtTCPTimeouts=70127i,TcpExtTCPToZeroWindowAdv=4317i,TcpExtTCPWantZeroWindowAdv=2133i,TcpExtTW=
24809813i,TcpExtTWKilled=0i,TcpExtTWRecycled=0i 1496460785000000000
nstat,bu=linux,cls=server,dc=cer,env=production,host=hostname,name=snmp,sr=database IcmpInAddrMaskReps=0i,IcmpInAddrMasks=90i,IcmpInCsumErrors=0i,IcmpInDestUnreachs=284401i,IcmpInEchoReps=9i,IcmpInEchos=1761912i,IcmpInErrors=407i,IcmpInMsgs=2047767i,IcmpInParmProbs=0i,IcmpInRedirects=0i,IcmpInSrcQuenchs=0i,IcmpInTimeExcds=46i,IcmpInTimestampReps=0i,IcmpInTimestamps=1309i,IcmpMsgInType0=9i,IcmpMsgInType11=46i,IcmpMsgInType13=1309i,IcmpMsgInType17=90i,IcmpMsgInType3=284401i,IcmpMsgInType8=1761912i,IcmpMsgOutType0=1761912i,IcmpMsgOutType14=1248i,IcmpMsgOutType3=108709i,IcmpMsgOutType8=9i,IcmpOutAddrMaskReps=0i,IcmpOutAddrMasks=0i,IcmpOutDestUnreachs=108709i,IcmpOutEchoReps=1761912i,IcmpOutEchos=9i,IcmpOutErrors=0i,IcmpOutMsgs=1871878i,IcmpOutParmProbs=0i,IcmpOutRedirects=0i,IcmpOutSrcQuenchs=0i,IcmpOutTimeExcds=0i,IcmpOutTimestampReps=1248i,IcmpOutTimestamps=0i,IpDefaultTTL=64i,IpForwDatagrams=0i,IpForwarding=2i,IpFragCreates=0i,IpFragFails=0i,IpFragOKs=0i,IpInAddrErrors=0i,IpInDelivers=17658795773i,IpInDiscards=0i,IpInHdrErrors=0i,IpInReceives=17659269339i,IpInUnknownProtos=0i,IpOutDiscards=236976i,IpOutNoRoutes=1009i,IpOutRequests=23466783734i,IpReasmFails=0i,IpReasmOKs=0i,IpReasmReqds=0i,IpReasmTimeout=0i,TcpActiveOpens=23308977i,TcpAttemptFails=3757543i,TcpCurrEstab=280i,TcpEstabResets=184792i,TcpInCsumErrors=0i,TcpInErrs=232i,TcpInSegs=17536573089i,TcpMaxConn=-1i,TcpOutRsts=4051451i,TcpOutSegs=29836254873i,TcpPassiveOpens=176546974i,TcpRetransSegs=878085i,TcpRtoAlgorithm=1i,TcpRtoMax=120000i,TcpRtoMin=200i,UdpInCsumErrors=0i,UdpInDatagrams=24441661i,UdpInErrors=0i,UdpLiteInCsumErrors=0i,UdpLiteInDatagrams=0i,UdpLiteInErrors=0i,UdpLiteNoPorts=0i,UdpLiteOutDatagrams=0i,UdpLiteRcvbufErrors=0i,UdpLiteSndbufErrors=0i,UdpNoPorts=17660i,UdpOutDatagrams=51807896i,UdpRcvbufErrors=0i,UdpSndbufErrors=236922i 1496460785000000000
`
metrics, err := Parse([]byte(lp))
require.NoError(t, err)
r := NewReader(metrics)
buf := make([]byte, 128)
_, err = r.Read(buf)
require.NoError(t, err)
metrics, err = Parse(buf)
require.NoError(t, err)
}


@@ -1,7 +0,0 @@
// +build uint64
package metric
func init() {
EnableUintSupport()
}


@@ -1,7 +1,6 @@
package all
import (
_ "github.com/influxdata/telegraf/plugins/aggregators/basicstats"
_ "github.com/influxdata/telegraf/plugins/aggregators/histogram"
_ "github.com/influxdata/telegraf/plugins/aggregators/minmax"
)


@@ -1,56 +0,0 @@
# BasicStats Aggregator Plugin
The BasicStats aggregator plugin gives count, max, min, mean, sum, s2 (variance), and stdev for a set of values,
emitting the aggregate every `period` seconds.
### Configuration:
```toml
# Keep the aggregate basicstats of each metric passing through.
[[aggregators.basicstats]]
## General Aggregator Arguments:
## The period on which to flush & clear the aggregator.
period = "30s"
## If true, the original metric will be dropped by the
## aggregator and will not get sent to the output plugins.
drop_original = false
## BasicStats Arguments:
## Configures which basic stats to push as fields
stats = ["count","min","max","mean","stdev","s2","sum"]
```
- stats
- If not specified, then `count`, `min`, `max`, `mean`, `stdev`, and `s2` are aggregated and pushed as fields. `sum` is not aggregated by default to maintain backwards compatibility.
- If an empty array is given, no stats are aggregated
### Measurements & Fields:
- measurement1
- field1_count
- field1_max
- field1_min
- field1_mean
- field1_sum
- field1_s2 (variance)
- field1_stdev (standard deviation)
### Tags:
No tags are applied by this aggregator.
### Example Output:
```
$ telegraf --config telegraf.conf --quiet
system,host=tars load1=1 1475583980000000000
system,host=tars load1=1 1475583990000000000
system,host=tars load1_count=2,load1_max=1,load1_min=1,load1_mean=1,load1_sum=2,load1_s2=0,load1_stdev=0 1475584010000000000
system,host=tars load1=1 1475584020000000000
system,host=tars load1=3 1475584030000000000
system,host=tars load1_count=2,load1_max=3,load1_min=1,load1_mean=2,load1_sum=4,load1_s2=2,load1_stdev=1.414214 1475584010000000000
```


@@ -1,258 +0,0 @@
package basicstats
import (
"log"
"math"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/aggregators"
)
type BasicStats struct {
Stats []string `toml:"stats"`
cache map[uint64]aggregate
statsConfig *configuredStats
}
type configuredStats struct {
count bool
min bool
max bool
mean bool
variance bool
stdev bool
sum bool
}
func NewBasicStats() *BasicStats {
mm := &BasicStats{}
mm.Reset()
return mm
}
type aggregate struct {
fields map[string]basicstats
name string
tags map[string]string
}
type basicstats struct {
count float64
min float64
max float64
sum float64
mean float64
M2 float64 //intermediate value for variance/stdev
}
var sampleConfig = `
## General Aggregator Arguments:
## The period on which to flush & clear the aggregator.
period = "30s"
## If true, the original metric will be dropped by the
## aggregator and will not get sent to the output plugins.
drop_original = false
`
func (m *BasicStats) SampleConfig() string {
return sampleConfig
}
func (m *BasicStats) Description() string {
return "Keep the aggregate basicstats of each metric passing through."
}
func (m *BasicStats) Add(in telegraf.Metric) {
id := in.HashID()
if _, ok := m.cache[id]; !ok {
// hit an uncached metric, create caches for first time:
a := aggregate{
name: in.Name(),
tags: in.Tags(),
fields: make(map[string]basicstats),
}
for k, v := range in.Fields() {
if fv, ok := convert(v); ok {
a.fields[k] = basicstats{
count: 1,
min: fv,
max: fv,
mean: fv,
sum: fv,
M2: 0.0,
}
}
}
m.cache[id] = a
} else {
for k, v := range in.Fields() {
if fv, ok := convert(v); ok {
if _, ok := m.cache[id].fields[k]; !ok {
// hit an uncached field of a cached metric
m.cache[id].fields[k] = basicstats{
count: 1,
min: fv,
max: fv,
mean: fv,
sum: fv,
M2: 0.0,
}
continue
}
tmp := m.cache[id].fields[k]
//https://en.m.wikipedia.org/wiki/Algorithms_for_calculating_variance
//variable initialization
x := fv
mean := tmp.mean
M2 := tmp.M2
//counter compute
n := tmp.count + 1
tmp.count = n
//mean compute
delta := x - mean
mean = mean + delta/n
tmp.mean = mean
//variance/stdev compute
M2 = M2 + delta*(x-mean)
tmp.M2 = M2
//max/min compute
if fv < tmp.min {
tmp.min = fv
} else if fv > tmp.max {
tmp.max = fv
}
//sum compute
tmp.sum += fv
//store final data
m.cache[id].fields[k] = tmp
}
}
}
}
func (m *BasicStats) Push(acc telegraf.Accumulator) {
config := getConfiguredStats(m)
for _, aggregate := range m.cache {
fields := map[string]interface{}{}
for k, v := range aggregate.fields {
if config.count {
fields[k+"_count"] = v.count
}
if config.min {
fields[k+"_min"] = v.min
}
if config.max {
fields[k+"_max"] = v.max
}
if config.mean {
fields[k+"_mean"] = v.mean
}
if config.sum {
fields[k+"_sum"] = v.sum
}
//v.count always >=1
if v.count > 1 {
variance := v.M2 / (v.count - 1)
if config.variance {
fields[k+"_s2"] = variance
}
if config.stdev {
fields[k+"_stdev"] = math.Sqrt(variance)
}
}
//if count == 1, stdev is undefined, so don't send data
}
if len(fields) > 0 {
acc.AddFields(aggregate.name, fields, aggregate.tags)
}
}
}
func parseStats(names []string) *configuredStats {
parsed := &configuredStats{}
for _, name := range names {
switch name {
case "count":
parsed.count = true
case "min":
parsed.min = true
case "max":
parsed.max = true
case "mean":
parsed.mean = true
case "s2":
parsed.variance = true
case "stdev":
parsed.stdev = true
case "sum":
parsed.sum = true
default:
log.Printf("W! Unrecognized basic stat '%s', ignoring", name)
}
}
return parsed
}
func defaultStats() *configuredStats {
defaults := &configuredStats{}
defaults.count = true
defaults.min = true
defaults.max = true
defaults.mean = true
defaults.variance = true
defaults.stdev = true
defaults.sum = false
return defaults
}
func getConfiguredStats(m *BasicStats) *configuredStats {
if m.statsConfig == nil {
if m.Stats == nil {
m.statsConfig = defaultStats()
} else {
m.statsConfig = parseStats(m.Stats)
}
}
return m.statsConfig
}
func (m *BasicStats) Reset() {
m.cache = make(map[uint64]aggregate)
}
func convert(in interface{}) (float64, bool) {
switch v := in.(type) {
case float64:
return v, true
case int64:
return float64(v), true
default:
return 0, false
}
}
func init() {
aggregators.Add("basicstats", func() telegraf.Aggregator {
return NewBasicStats()
})
}
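// A worked sketch (not part of the original file) tracing the Welford update
// in Add() for two samples, 1 and 3 (field "b" in the tests below): mean
// moves 1 -> 2 and M2 accumulates 2, so s2 = 2 and stdev = sqrt(2).
// Assumes "fmt" is imported alongside "math".
func exampleWelfordUpdate() {
    mean, M2, n := 1.0, 0.0, 1.0 // state after the first sample (1)
    x := 3.0                     // the second sample arrives
    n++
    delta := x - mean
    mean += delta / n
    M2 += delta * (x - mean)
    s2 := M2 / (n - 1)
    fmt.Println(mean, s2, math.Sqrt(s2)) // 2 2 1.4142135623730951
}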


@@ -1,511 +0,0 @@
package basicstats
import (
"math"
"testing"
"time"
"github.com/influxdata/telegraf/metric"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
)
var m1, _ = metric.New("m1",
map[string]string{"foo": "bar"},
map[string]interface{}{
"a": int64(1),
"b": int64(1),
"c": float64(2),
"d": float64(2),
},
time.Now(),
)
var m2, _ = metric.New("m1",
map[string]string{"foo": "bar"},
map[string]interface{}{
"a": int64(1),
"b": int64(3),
"c": float64(4),
"d": float64(6),
"e": float64(200),
"ignoreme": "string",
"andme": true,
},
time.Now(),
)
func BenchmarkApply(b *testing.B) {
minmax := NewBasicStats()
for n := 0; n < b.N; n++ {
minmax.Add(m1)
minmax.Add(m2)
}
}
// Test two metrics getting added.
func TestBasicStatsWithPeriod(t *testing.T) {
acc := testutil.Accumulator{}
minmax := NewBasicStats()
minmax.Add(m1)
minmax.Add(m2)
minmax.Push(&acc)
expectedFields := map[string]interface{}{
"a_count": float64(2), //a
"a_max": float64(1),
"a_min": float64(1),
"a_mean": float64(1),
"a_stdev": float64(0),
"a_s2": float64(0),
"b_count": float64(2), //b
"b_max": float64(3),
"b_min": float64(1),
"b_mean": float64(2),
"b_s2": float64(2),
"b_stdev": math.Sqrt(2),
"c_count": float64(2), //c
"c_max": float64(4),
"c_min": float64(2),
"c_mean": float64(3),
"c_s2": float64(2),
"c_stdev": math.Sqrt(2),
"d_count": float64(2), //d
"d_max": float64(6),
"d_min": float64(2),
"d_mean": float64(4),
"d_s2": float64(8),
"d_stdev": math.Sqrt(8),
"e_count": float64(1), //e
"e_max": float64(200),
"e_min": float64(200),
"e_mean": float64(200),
}
expectedTags := map[string]string{
"foo": "bar",
}
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
}
// Test two metrics getting added with a push/reset in between (simulates
// getting added in different periods.)
func TestBasicStatsDifferentPeriods(t *testing.T) {
acc := testutil.Accumulator{}
minmax := NewBasicStats()
minmax.Add(m1)
minmax.Push(&acc)
expectedFields := map[string]interface{}{
"a_count": float64(1), //a
"a_max": float64(1),
"a_min": float64(1),
"a_mean": float64(1),
"b_count": float64(1), //b
"b_max": float64(1),
"b_min": float64(1),
"b_mean": float64(1),
"c_count": float64(1), //c
"c_max": float64(2),
"c_min": float64(2),
"c_mean": float64(2),
"d_count": float64(1), //d
"d_max": float64(2),
"d_min": float64(2),
"d_mean": float64(2),
}
expectedTags := map[string]string{
"foo": "bar",
}
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
acc.ClearMetrics()
minmax.Reset()
minmax.Add(m2)
minmax.Push(&acc)
expectedFields = map[string]interface{}{
"a_count": float64(1), //a
"a_max": float64(1),
"a_min": float64(1),
"a_mean": float64(1),
"b_count": float64(1), //b
"b_max": float64(3),
"b_min": float64(3),
"b_mean": float64(3),
"c_count": float64(1), //c
"c_max": float64(4),
"c_min": float64(4),
"c_mean": float64(4),
"d_count": float64(1), //d
"d_max": float64(6),
"d_min": float64(6),
"d_mean": float64(6),
"e_count": float64(1), //e
"e_max": float64(200),
"e_min": float64(200),
"e_mean": float64(200),
}
expectedTags = map[string]string{
"foo": "bar",
}
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
}
// Test only aggregating count
func TestBasicStatsWithOnlyCount(t *testing.T) {
aggregator := NewBasicStats()
aggregator.Stats = []string{"count"}
aggregator.Add(m1)
aggregator.Add(m2)
acc := testutil.Accumulator{}
aggregator.Push(&acc)
expectedFields := map[string]interface{}{
"a_count": float64(2),
"b_count": float64(2),
"c_count": float64(2),
"d_count": float64(2),
"e_count": float64(1),
}
expectedTags := map[string]string{
"foo": "bar",
}
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
}
// Test only aggregating minimum
func TestBasicStatsWithOnlyMin(t *testing.T) {
aggregator := NewBasicStats()
aggregator.Stats = []string{"min"}
aggregator.Add(m1)
aggregator.Add(m2)
acc := testutil.Accumulator{}
aggregator.Push(&acc)
expectedFields := map[string]interface{}{
"a_min": float64(1),
"b_min": float64(1),
"c_min": float64(2),
"d_min": float64(2),
"e_min": float64(200),
}
expectedTags := map[string]string{
"foo": "bar",
}
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
}
// Test only aggregating maximum
func TestBasicStatsWithOnlyMax(t *testing.T) {
aggregator := NewBasicStats()
aggregator.Stats = []string{"max"}
aggregator.Add(m1)
aggregator.Add(m2)
acc := testutil.Accumulator{}
aggregator.Push(&acc)
expectedFields := map[string]interface{}{
"a_max": float64(1),
"b_max": float64(3),
"c_max": float64(4),
"d_max": float64(6),
"e_max": float64(200),
}
expectedTags := map[string]string{
"foo": "bar",
}
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
}
// Test only aggregating mean
func TestBasicStatsWithOnlyMean(t *testing.T) {
aggregator := NewBasicStats()
aggregator.Stats = []string{"mean"}
aggregator.Add(m1)
aggregator.Add(m2)
acc := testutil.Accumulator{}
aggregator.Push(&acc)
expectedFields := map[string]interface{}{
"a_mean": float64(1),
"b_mean": float64(2),
"c_mean": float64(3),
"d_mean": float64(4),
"e_mean": float64(200),
}
expectedTags := map[string]string{
"foo": "bar",
}
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
}
// Test only aggregating sum
func TestBasicStatsWithOnlySum(t *testing.T) {
aggregator := NewBasicStats()
aggregator.Stats = []string{"sum"}
aggregator.Add(m1)
aggregator.Add(m2)
acc := testutil.Accumulator{}
aggregator.Push(&acc)
expectedFields := map[string]interface{}{
"a_sum": float64(2),
"b_sum": float64(4),
"c_sum": float64(6),
"d_sum": float64(8),
"e_sum": float64(200),
}
expectedTags := map[string]string{
"foo": "bar",
}
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
}
// Verify that sum doesn't suffer from floating point errors. Early
// implementations of sum were calculated from mean and count, which
// e.g. summed "1, 1, 5, 1" as "7.999999..." instead of 8.
func TestBasicStatsWithOnlySumFloatingPointErrata(t *testing.T) {
var sum1, _ = metric.New("m1",
map[string]string{},
map[string]interface{}{
"a": int64(1),
},
time.Now(),
)
var sum2, _ = metric.New("m1",
map[string]string{},
map[string]interface{}{
"a": int64(1),
},
time.Now(),
)
var sum3, _ = metric.New("m1",
map[string]string{},
map[string]interface{}{
"a": int64(5),
},
time.Now(),
)
var sum4, _ = metric.New("m1",
map[string]string{},
map[string]interface{}{
"a": int64(1),
},
time.Now(),
)
aggregator := NewBasicStats()
aggregator.Stats = []string{"sum"}
aggregator.Add(sum1)
aggregator.Add(sum2)
aggregator.Add(sum3)
aggregator.Add(sum4)
acc := testutil.Accumulator{}
aggregator.Push(&acc)
expectedFields := map[string]interface{}{
"a_sum": float64(8),
}
expectedTags := map[string]string{}
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
}
// Test only aggregating variance
func TestBasicStatsWithOnlyVariance(t *testing.T) {
aggregator := NewBasicStats()
aggregator.Stats = []string{"s2"}
aggregator.Add(m1)
aggregator.Add(m2)
acc := testutil.Accumulator{}
aggregator.Push(&acc)
expectedFields := map[string]interface{}{
"a_s2": float64(0),
"b_s2": float64(2),
"c_s2": float64(2),
"d_s2": float64(8),
}
expectedTags := map[string]string{
"foo": "bar",
}
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
}
// Test only aggregating standard deviation
func TestBasicStatsWithOnlyStandardDeviation(t *testing.T) {
aggregator := NewBasicStats()
aggregator.Stats = []string{"stdev"}
aggregator.Add(m1)
aggregator.Add(m2)
acc := testutil.Accumulator{}
aggregator.Push(&acc)
expectedFields := map[string]interface{}{
"a_stdev": float64(0),
"b_stdev": math.Sqrt(2),
"c_stdev": math.Sqrt(2),
"d_stdev": math.Sqrt(8),
}
expectedTags := map[string]string{
"foo": "bar",
}
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
}
// Test only aggregating minimum and maximum
func TestBasicStatsWithMinAndMax(t *testing.T) {
aggregator := NewBasicStats()
aggregator.Stats = []string{"min", "max"}
aggregator.Add(m1)
aggregator.Add(m2)
acc := testutil.Accumulator{}
aggregator.Push(&acc)
expectedFields := map[string]interface{}{
"a_max": float64(1), //a
"a_min": float64(1),
"b_max": float64(3), //b
"b_min": float64(1),
"c_max": float64(4), //c
"c_min": float64(2),
"d_max": float64(6), //d
"d_min": float64(2),
"e_max": float64(200), //e
"e_min": float64(200),
}
expectedTags := map[string]string{
"foo": "bar",
}
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
}
// Test aggregating with all stats
func TestBasicStatsWithAllStats(t *testing.T) {
acc := testutil.Accumulator{}
minmax := NewBasicStats()
minmax.Stats = []string{"count", "min", "max", "mean", "stdev", "s2", "sum"}
minmax.Add(m1)
minmax.Add(m2)
minmax.Push(&acc)
expectedFields := map[string]interface{}{
"a_count": float64(2), //a
"a_max": float64(1),
"a_min": float64(1),
"a_mean": float64(1),
"a_stdev": float64(0),
"a_s2": float64(0),
"a_sum": float64(2),
"b_count": float64(2), //b
"b_max": float64(3),
"b_min": float64(1),
"b_mean": float64(2),
"b_s2": float64(2),
"b_sum": float64(4),
"b_stdev": math.Sqrt(2),
"c_count": float64(2), //c
"c_max": float64(4),
"c_min": float64(2),
"c_mean": float64(3),
"c_s2": float64(2),
"c_stdev": math.Sqrt(2),
"c_sum": float64(6),
"d_count": float64(2), //d
"d_max": float64(6),
"d_min": float64(2),
"d_mean": float64(4),
"d_s2": float64(8),
"d_stdev": math.Sqrt(8),
"d_sum": float64(8),
"e_count": float64(1), //e
"e_max": float64(200),
"e_min": float64(200),
"e_mean": float64(200),
"e_sum": float64(200),
}
expectedTags := map[string]string{
"foo": "bar",
}
acc.AssertContainsTaggedFields(t, "m1", expectedFields, expectedTags)
}
// Test that if an empty array is passed, no points are pushed
func TestBasicStatsWithNoStats(t *testing.T) {
aggregator := NewBasicStats()
aggregator.Stats = []string{}
aggregator.Add(m1)
aggregator.Add(m2)
acc := testutil.Accumulator{}
aggregator.Push(&acc)
acc.AssertDoesNotContainMeasurement(t, "m1")
}
// Test that if an unknown stat is configured, it doesn't explode
func TestBasicStatsWithUnknownStat(t *testing.T) {
aggregator := NewBasicStats()
aggregator.Stats = []string{"crazy"}
aggregator.Add(m1)
aggregator.Add(m2)
acc := testutil.Accumulator{}
aggregator.Push(&acc)
acc.AssertDoesNotContainMeasurement(t, "m1")
}
// Test that if Stats isn't supplied, then we only do count, min, max, mean,
// stdev, and s2. We purposely exclude sum for backwards compatibility,
// otherwise users' working systems will suddenly (and surprisingly) start
// capturing sum without their input.
func TestBasicStatsWithDefaultStats(t *testing.T) {
aggregator := NewBasicStats()
aggregator.Add(m1)
aggregator.Add(m2)
acc := testutil.Accumulator{}
aggregator.Push(&acc)
assert.True(t, acc.HasField("m1", "a_count"))
assert.True(t, acc.HasField("m1", "a_min"))
assert.True(t, acc.HasField("m1", "a_max"))
assert.True(t, acc.HasField("m1", "a_mean"))
assert.True(t, acc.HasField("m1", "a_stdev"))
assert.True(t, acc.HasField("m1", "a_s2"))
assert.False(t, acc.HasField("m1", "a_sum"))
}


@@ -1,25 +1,38 @@
# Histogram Aggregator Plugin
The histogram aggregator plugin creates histograms containing the counts of
field values within a range.
#### Goal
Values added to a bucket are also added to the larger buckets in the
distribution. This creates a [cumulative histogram](https://en.wikipedia.org/wiki/Histogram#/media/File:Cumulative_vs_normal_histogram.svg).
This plugin was added to provide the ability to build histograms.
Like other Telegraf aggregators, the metric is emitted every `period` seconds.
Bucket counts however are not reset between periods and will be non-strictly
increasing while Telegraf is running.
#### Description
#### Design
The histogram aggregator plugin aggregates values of a specified metric's
fields. The metric is emitted every `period` seconds. All you need to do
is specify the borders of the histogram buckets and the fields for which
you want to aggregate the histogram.
Each metric is passed to the aggregator and this aggregator searches
#### How it works
Each metric is passed to the aggregator and this aggregator searches
histogram buckets for those fields, which have been specified in the
config. If buckets are found, the aggregator will increment +1 to the appropriate
bucket; otherwise it will be added to the `+Inf` bucket. Every `period`
seconds this data will be forwarded to the outputs.
config. If buckets are found, the aggregator will add +1 to the appropriate
bucket. Otherwise, nothing will happen. Every `period` seconds this data
will be pushed to the output.
The algorithm for counting hits into buckets is based
on the algorithm implemented in the Prometheus
Note that all hits of the current bucket are also added to all subsequent
buckets in the final distribution. Why does it work this way? In the
configuration you define the right borders for each bucket in an ascending
sequence. Internally, buckets are represented as ranges with borders
(0..bucketBorder]: 0..1, 0..10, 0..50, …, 0..+Inf. So the value "+1" will be
added to every bucket whose range contains the metric value.
This plugin creates cumulative histograms. This means that the hits in the
buckets will only increase from the moment Telegraf starts. If you
restart Telegraf, all hits in the buckets are reset to 0.
Also, the algorithm for counting hits into buckets is based
on the algorithm implemented in the Prometheus
[client](https://github.com/prometheus/client_golang/blob/master/prometheus/histogram.go).
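As a hypothetical illustration (not from the plugin source), the cumulative counting described above can be sketched in Go; a value increments every bucket whose right border is greater than or equal to it, plus the implicit `+Inf` bucket:
```go
package main

import "fmt"

func main() {
	buckets := []float64{5, 10, 30, 70, 100}
	counts := make([]int, len(buckets)+1) // the last slot is +Inf
	value := 10.0
	for i, border := range buckets {
		if value <= border {
			counts[i]++
		}
	}
	counts[len(buckets)]++ // +Inf always counts
	fmt.Println(counts)    // [0 1 1 1 1 1]: le=10 and every larger bucket grew
}
```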
### Configuration
@@ -27,44 +40,61 @@ of the algorithm which is implemented in the Prometheus
```toml
# Configuration for aggregate histogram metrics
[[aggregators.histogram]]
## The period in which to flush the aggregator.
## General Aggregator Arguments:
## The period on which to flush & clear the aggregator.
period = "30s"
## If true, the original metric will be dropped by the
## aggregator and will not get sent to the output plugins.
drop_original = false
## Example config that aggregates all fields of the metric.
# [[aggregators.histogram.config]]
# ## The set of buckets.
# buckets = [0.0, 15.6, 34.5, 49.1, 71.5, 80.5, 94.5, 100.0]
# ## The name of metric.
# measurement_name = "cpu"
## Example of a config to aggregate a histogram for all fields of the specified metric.
[[aggregators.histogram.config]]
## The set of buckets.
buckets = [0.0, 15.6, 34.5, 49.1, 71.5, 80.5, 94.5, 100.0]
## The name of metric.
metric_name = "cpu"
## Example config that aggregates only specific fields of the metric.
# [[aggregators.histogram.config]]
# ## The set of buckets.
# buckets = [0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0]
# ## The name of metric.
# measurement_name = "diskio"
# ## The concrete fields of metric
# fields = ["io_time", "read_time", "write_time"]
## Example of a config to aggregate a histogram for specific fields of the specified metric.
[[aggregators.histogram.config]]
## The set of buckets.
buckets = [0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0]
## The name of metric.
metric_name = "diskio"
## The concrete fields of metric.
metric_fields = ["io_time", "read_time", "write_time"]
```
The user is responsible for defining the bounds of the histogram bucket as
well as the measurement name and fields to aggregate.
#### Explanation
Each histogram config section must contain a `buckets` and `measurement_name`
option. Optionally, if `fields` is set, only the fields listed will be
aggregated. If `fields` is not set, all fields are aggregated.
The `metric_fields` option is the list of fields to aggregate. For example,
the `cpu` metric has the following fields: usage_user, usage_system,
usage_idle, usage_nice, usage_iowait, usage_irq, usage_softirq, usage_steal,
usage_guest, usage_guest_nice.
The `buckets` option contains a list of floats which specify the bucket
boundaries. Each float value defines the inclusive upper bound of the bucket.
The `+Inf` bucket is added automatically and does not need to be defined.
Note that histogram metrics are pushed every `period` seconds. Telegraf
calls each aggregator's `Reset()` function every `period` seconds, but the
histogram aggregator ignores `Reset()` and continues to count hits.
#### Use cases
You can specify fields in two ways:
1. Specify only the metric name. In this case all fields of the metric
will be aggregated.
2. Specify the metric name and the specific fields to aggregate.
#### Some rules
- The settings for each histogram must be in a separate section titled
`aggregators.histogram.config`.
- Each bucket value must be a float.
- Don't include the `+Inf` border bucket; it is added automatically.
### Measurements & Fields:
The postfix `bucket` will be added to each field key.
- measurement1
- field1_bucket
@@ -72,15 +102,16 @@ The postfix `bucket` will be added to each field key.
### Tags:
All measurements are given the tag `le`. This tag holds the right border
value of the bucket: the metric value is less than or equal to the value of
this tag. For example, assume we have the metric value 10 and the following
buckets: [5, 10, 30, 70, 100]. Then the tag `le` will have the value 10,
because the metric value falls into the bucket whose right border is `10`.
### Example Output:
The following output will be returned to the Prometheus client.
```
cpu,cpu=cpu1,host=localhost,le=0.0 usage_idle_bucket=0i 1486998330000000000
cpu,cpu=cpu1,host=localhost,le=10.0 usage_idle_bucket=0i 1486998330000000000

View File

@@ -24,8 +24,8 @@ type HistogramAggregator struct {
// config holds the metric name, the fields to aggregate, and the histogram buckets.
type config struct {
Metric string `toml:"measurement_name"`
Fields []string `toml:"fields"`
Metric string `toml:"metric_name"`
Fields []string `toml:"metric_fields"`
Buckets buckets `toml:"buckets"`
}
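// As a side note, the renamed keys map onto the struct fields through the
// toml tags above. A minimal standalone sketch of that mapping (assuming
// the new key names, and using the third-party github.com/BurntSushi/toml
// decoder for illustration rather than Telegraf's internal config loader):
//
//	var c struct {
//		Metric  string    `toml:"metric_name"`
//		Fields  []string  `toml:"metric_fields"`
//		Buckets []float64 `toml:"buckets"`
//	}
//	_, err := toml.Decode(`metric_name = "cpu"`+"\n"+`buckets = [0.0, 1.0]`, &c)
//	// on success: c.Metric == "cpu", c.Buckets == []float64{0, 1}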
@@ -65,28 +65,28 @@ func NewHistogramAggregator() telegraf.Aggregator {
}
var sampleConfig = `
## The period in which to flush the aggregator.
## General Aggregator Arguments:
## The period on which to flush & clear the aggregator.
period = "30s"
## If true, the original metric will be dropped by the
## aggregator and will not get sent to the output plugins.
drop_original = false
## Example config that aggregates all fields of the metric.
# [[aggregators.histogram.config]]
# ## The set of buckets.
# buckets = [0.0, 15.6, 34.5, 49.1, 71.5, 80.5, 94.5, 100.0]
# ## The name of metric.
# measurement_name = "cpu"
## Example config to aggregate a histogram for all fields of the specified metric.
[[aggregators.histogram.config]]
## The set of buckets.
buckets = [0.0, 15.6, 34.5, 49.1, 71.5, 80.5, 94.5, 100.0]
## The name of metric.
metric_name = "cpu"
## Example config that aggregates only specific fields of the metric.
# [[aggregators.histogram.config]]
# ## The set of buckets.
# buckets = [0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0]
# ## The name of metric.
# measurement_name = "diskio"
# ## The concrete fields of metric
# fields = ["io_time", "read_time", "write_time"]
## Example config to aggregate a histogram for specific fields of the specified metric.
[[aggregators.histogram.config]]
## The set of buckets.
buckets = [0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0]
## The name of metric.
metric_name = "diskio"
## The specific fields of the metric
metric_fields = ["io_time", "read_time", "write_time"]
`
// SampleConfig returns the sample configuration
@@ -96,7 +96,7 @@ func (h *HistogramAggregator) SampleConfig() string {
// Description returns the description of the aggregator plugin
func (h *HistogramAggregator) Description() string {
return "Create aggregate histograms."
return "Keep the aggregate histogram of each metric passing through."
}
// Add adds a new hit to the buckets

View File

@@ -6,38 +6,31 @@ additional information can be found.
### Configuration:
This section contains the default TOML to configure the plugin. You can
generate it using `telegraf --usage <plugin-name>`.
```toml
# Description
[[inputs.example]]
example_option = "example_value"
# SampleConfig
```
### Metrics:
### Measurements & Fields:
Here you should add an optional description and links to where the user can
get more information about the measurements.
If the output is determined dynamically based on the input source, or there
are more metrics than can reasonably be listed, describe how the input is
mapped to the output.
- measurement1
- tags:
- tag1 (optional description)
- tag2
- fields:
- field1 (type, unit)
- field2 (float, percent)
- measurement2
- tags:
- tag3
- fields:
- field3 (integer, bytes)
### Tags:
- All measurements have the following tags:
- tag1 (optional description)
- tag2
- measurement2 has the following tags:
- tag3
### Sample Queries:
This section should contain some useful InfluxDB queries that can be used to
@@ -51,10 +44,6 @@ SELECT max(field1), mean(field1), min(field1) FROM measurement1 WHERE tag1=bar A
### Example Output:
This section shows example output in Line Protocol format. You can often use
`telegraf --input-filter <plugin-name> --test` or use the `file` output to get
this information.
```
measurement1,tag1=foo,tag2=bar field1=1i,field2=2.1 1453831884664956455
measurement2,tag1=foo,tag2=bar,tag3=baz field3=1i 1453831884664956455

View File

@@ -5,7 +5,6 @@ import (
_ "github.com/influxdata/telegraf/plugins/inputs/amqp_consumer"
_ "github.com/influxdata/telegraf/plugins/inputs/apache"
_ "github.com/influxdata/telegraf/plugins/inputs/bcache"
_ "github.com/influxdata/telegraf/plugins/inputs/bond"
_ "github.com/influxdata/telegraf/plugins/inputs/cassandra"
_ "github.com/influxdata/telegraf/plugins/inputs/ceph"
_ "github.com/influxdata/telegraf/plugins/inputs/cgroup"
@@ -15,7 +14,6 @@ import (
_ "github.com/influxdata/telegraf/plugins/inputs/consul"
_ "github.com/influxdata/telegraf/plugins/inputs/couchbase"
_ "github.com/influxdata/telegraf/plugins/inputs/couchdb"
_ "github.com/influxdata/telegraf/plugins/inputs/dcos"
_ "github.com/influxdata/telegraf/plugins/inputs/disque"
_ "github.com/influxdata/telegraf/plugins/inputs/dmcache"
_ "github.com/influxdata/telegraf/plugins/inputs/dns_query"
@@ -29,7 +27,6 @@ import (
_ "github.com/influxdata/telegraf/plugins/inputs/graylog"
_ "github.com/influxdata/telegraf/plugins/inputs/haproxy"
_ "github.com/influxdata/telegraf/plugins/inputs/hddtemp"
_ "github.com/influxdata/telegraf/plugins/inputs/http"
_ "github.com/influxdata/telegraf/plugins/inputs/http_listener"
_ "github.com/influxdata/telegraf/plugins/inputs/http_response"
_ "github.com/influxdata/telegraf/plugins/inputs/httpjson"
@@ -37,10 +34,8 @@ import (
_ "github.com/influxdata/telegraf/plugins/inputs/internal"
_ "github.com/influxdata/telegraf/plugins/inputs/interrupts"
_ "github.com/influxdata/telegraf/plugins/inputs/ipmi_sensor"
_ "github.com/influxdata/telegraf/plugins/inputs/ipset"
_ "github.com/influxdata/telegraf/plugins/inputs/iptables"
_ "github.com/influxdata/telegraf/plugins/inputs/jolokia"
_ "github.com/influxdata/telegraf/plugins/inputs/jolokia2"
_ "github.com/influxdata/telegraf/plugins/inputs/kafka_consumer"
_ "github.com/influxdata/telegraf/plugins/inputs/kafka_consumer_legacy"
_ "github.com/influxdata/telegraf/plugins/inputs/kapacitor"
@@ -55,22 +50,17 @@ import (
_ "github.com/influxdata/telegraf/plugins/inputs/mongodb"
_ "github.com/influxdata/telegraf/plugins/inputs/mqtt_consumer"
_ "github.com/influxdata/telegraf/plugins/inputs/mysql"
_ "github.com/influxdata/telegraf/plugins/inputs/nats"
_ "github.com/influxdata/telegraf/plugins/inputs/nats_consumer"
_ "github.com/influxdata/telegraf/plugins/inputs/net_response"
_ "github.com/influxdata/telegraf/plugins/inputs/nginx"
_ "github.com/influxdata/telegraf/plugins/inputs/nginx_plus"
_ "github.com/influxdata/telegraf/plugins/inputs/nsq"
_ "github.com/influxdata/telegraf/plugins/inputs/nsq_consumer"
_ "github.com/influxdata/telegraf/plugins/inputs/nstat"
_ "github.com/influxdata/telegraf/plugins/inputs/ntpq"
_ "github.com/influxdata/telegraf/plugins/inputs/openldap"
_ "github.com/influxdata/telegraf/plugins/inputs/opensmtpd"
_ "github.com/influxdata/telegraf/plugins/inputs/passenger"
_ "github.com/influxdata/telegraf/plugins/inputs/pf"
_ "github.com/influxdata/telegraf/plugins/inputs/phpfpm"
_ "github.com/influxdata/telegraf/plugins/inputs/ping"
_ "github.com/influxdata/telegraf/plugins/inputs/postfix"
_ "github.com/influxdata/telegraf/plugins/inputs/postgresql"
_ "github.com/influxdata/telegraf/plugins/inputs/postgresql_extensible"
_ "github.com/influxdata/telegraf/plugins/inputs/powerdns"
@@ -84,23 +74,19 @@ import (
_ "github.com/influxdata/telegraf/plugins/inputs/riak"
_ "github.com/influxdata/telegraf/plugins/inputs/salesforce"
_ "github.com/influxdata/telegraf/plugins/inputs/sensors"
_ "github.com/influxdata/telegraf/plugins/inputs/smart"
_ "github.com/influxdata/telegraf/plugins/inputs/snmp"
_ "github.com/influxdata/telegraf/plugins/inputs/snmp_legacy"
_ "github.com/influxdata/telegraf/plugins/inputs/socket_listener"
_ "github.com/influxdata/telegraf/plugins/inputs/solr"
_ "github.com/influxdata/telegraf/plugins/inputs/sqlserver"
_ "github.com/influxdata/telegraf/plugins/inputs/statsd"
_ "github.com/influxdata/telegraf/plugins/inputs/sysstat"
_ "github.com/influxdata/telegraf/plugins/inputs/system"
_ "github.com/influxdata/telegraf/plugins/inputs/tail"
_ "github.com/influxdata/telegraf/plugins/inputs/tcp_listener"
_ "github.com/influxdata/telegraf/plugins/inputs/teamspeak"
_ "github.com/influxdata/telegraf/plugins/inputs/tomcat"
_ "github.com/influxdata/telegraf/plugins/inputs/trig"
_ "github.com/influxdata/telegraf/plugins/inputs/twemproxy"
_ "github.com/influxdata/telegraf/plugins/inputs/udp_listener"
_ "github.com/influxdata/telegraf/plugins/inputs/unbound"
_ "github.com/influxdata/telegraf/plugins/inputs/varnish"
_ "github.com/influxdata/telegraf/plugins/inputs/webhooks"
_ "github.com/influxdata/telegraf/plugins/inputs/win_perf_counters"

View File

@@ -39,9 +39,9 @@ The following defaults are known to work with RabbitMQ:
## Use SSL but skip chain & host verification
# insecure_skip_verify = false
## Data format to consume.
## Data format to output.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "influx"
```

View File

@@ -85,10 +85,10 @@ func (a *AMQPConsumer) SampleConfig() string {
## Use SSL but skip chain & host verification
# insecure_skip_verify = false
## Data format to consume.
## Data format to output.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "influx"
`
}
@@ -145,25 +145,23 @@ func (a *AMQPConsumer) Start(acc telegraf.Accumulator) error {
go a.process(msgs, acc)
// Variant A: wait once for a close notification, then reconnect.
go func() {
	err := <-a.conn.NotifyClose(make(chan *amqp.Error))
	if err == nil {
		return
	}
	log.Printf("I! AMQP consumer connection closed: %s; trying to reconnect", err)
	for {
		msgs, err := a.connect(amqpConf)
		if err != nil {
			log.Printf("E! AMQP connection failed: %s", err)
			time.Sleep(10 * time.Second)
			continue
		}
		a.wg.Add(1)
		go a.process(msgs, acc)
		break
	}
}()
// Variant B: keep waiting, so every close notification triggers a reconnect.
go func() {
	for {
		err := <-a.conn.NotifyClose(make(chan *amqp.Error))
		if err == nil {
			break
		}
		log.Printf("I! AMQP consumer connection closed: %s; trying to reconnect", err)
		for {
			msgs, err := a.connect(amqpConf)
			if err != nil {
				log.Printf("E! AMQP connection failed: %s", err)
				time.Sleep(10 * time.Second)
				continue
			}
			a.wg.Add(1)
			go a.process(msgs, acc)
			break
		}
	}
}()

View File

@@ -2,7 +2,7 @@
The Apache plugin collects server performance information using the [`mod_status`](https://httpd.apache.org/docs/2.4/mod/mod_status.html) module of the [Apache HTTP Server](https://httpd.apache.org/).
Typically, the `mod_status` module is configured to expose a page at the `/server-status?auto` location of the Apache server. The [ExtendedStatus](https://httpd.apache.org/docs/2.4/mod/core.html#extendedstatus) option must be enabled in order to collect all available fields. For information about how to configure your server, reference the [module documentation](https://httpd.apache.org/docs/2.4/mod/mod_status.html#enable).
### Configuration:

View File

@@ -1,85 +0,0 @@
# Bond Input Plugin
The Bond input plugin collects network bond interface status for both the
network bond interface and its slave interfaces.
The plugin collects these metrics from `/proc/net/bonding/*` files.
### Configuration:
```toml
[[inputs.bond]]
## Sets 'proc' directory path
## If not specified, then default is /proc
# host_proc = "/proc"
## By default, telegraf gathers stats for all bond interfaces
## Setting interfaces will restrict the stats to the specified
## bond interfaces.
# bond_interfaces = ["bond0"]
```
### Measurements & Fields:
- bond
- active_slave (for active-backup mode)
- status
- bond_slave
- failures
- status
### Description:
```
active_slave
Currently active slave interface for active-backup mode.
status
Status of the bond interface or the bond's slave interface (down = 0, up = 1).
failures
Number of failures for the bond's slave interface.
```
### Tags:
- bond
- bond
- bond_slave
- bond
- interface
### Example output:
Configuration:
```
[[inputs.bond]]
## Sets 'proc' directory path
## If not specified, then default is /proc
host_proc = "/proc"
## By default, telegraf gathers stats for all bond interfaces
## Setting interfaces will restrict the stats to the specified
## bond interfaces.
bond_interfaces = ["bond0", "bond1"]
```
Run:
```
telegraf --config telegraf.conf --input-filter bond --test
```
Output:
```
* Plugin: inputs.bond, Collection 1
> bond,bond=bond1,host=local active_slave="eth0",status=1i 1509704525000000000
> bond_slave,bond=bond1,interface=eth0,host=local status=1i,failures=0i 1509704525000000000
> bond_slave,host=local,bond=bond1,interface=eth1 status=1i,failures=0i 1509704525000000000
> bond,bond=bond0,host=isvetlov-mac.local status=1i 1509704525000000000
> bond_slave,bond=bond0,interface=eth1,host=local status=1i,failures=0i 1509704525000000000
> bond_slave,bond=bond0,interface=eth2,host=local status=1i,failures=0i 1509704525000000000
```

View File

@@ -1,204 +0,0 @@
package bond
import (
"bufio"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strconv"
"strings"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
// default host proc path
const defaultHostProc = "/proc"
// env host proc variable name
const envProc = "HOST_PROC"
type Bond struct {
HostProc string `toml:"host_proc"`
BondInterfaces []string `toml:"bond_interfaces"`
}
var sampleConfig = `
## Sets 'proc' directory path
## If not specified, then default is /proc
# host_proc = "/proc"
## By default, telegraf gathers stats for all bond interfaces
## Setting interfaces will restrict the stats to the specified
## bond interfaces.
# bond_interfaces = ["bond0"]
`
func (bond *Bond) Description() string {
return "Collect bond interface status, slaves statuses and failures count"
}
func (bond *Bond) SampleConfig() string {
return sampleConfig
}
func (bond *Bond) Gather(acc telegraf.Accumulator) error {
// load proc path, get default value if config value and env variable are empty
bond.loadPath()
// list bond interfaces from bonding directory or gather all interfaces.
bondNames, err := bond.listInterfaces()
if err != nil {
return err
}
for _, bondName := range bondNames {
bondAbsPath := bond.HostProc + "/net/bonding/" + bondName
file, err := ioutil.ReadFile(bondAbsPath)
if err != nil {
acc.AddError(fmt.Errorf("error inspecting '%s' interface: %v", bondAbsPath, err))
continue
}
rawFile := strings.TrimSpace(string(file))
err = bond.gatherBondInterface(bondName, rawFile, acc)
if err != nil {
acc.AddError(fmt.Errorf("error inspecting '%s' interface: %v", bondName, err))
}
}
return nil
}
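// gatherBondInterface splits the proc file contents at the first
// "Slave Interface:" marker and gathers the bond-level and slave-level
// parts separately.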
func (bond *Bond) gatherBondInterface(bondName string, rawFile string, acc telegraf.Accumulator) error {
splitIndex := strings.Index(rawFile, "Slave Interface:")
if splitIndex == -1 {
splitIndex = len(rawFile)
}
bondPart := rawFile[:splitIndex]
slavePart := rawFile[splitIndex:]
err := bond.gatherBondPart(bondName, bondPart, acc)
if err != nil {
return err
}
err = bond.gatherSlavePart(bondName, slavePart, acc)
if err != nil {
return err
}
return nil
}
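// gatherBondPart parses the bond-level section and emits a "bond"
// measurement with the active slave (when present) and the MII status.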
func (bond *Bond) gatherBondPart(bondName string, rawFile string, acc telegraf.Accumulator) error {
fields := make(map[string]interface{})
tags := map[string]string{
"bond": bondName,
}
scanner := bufio.NewScanner(strings.NewReader(rawFile))
for scanner.Scan() {
line := scanner.Text()
stats := strings.Split(line, ":")
if len(stats) < 2 {
continue
}
name := strings.TrimSpace(stats[0])
value := strings.TrimSpace(stats[1])
if strings.Contains(name, "Currently Active Slave") {
fields["active_slave"] = value
}
if strings.Contains(name, "MII Status") {
fields["status"] = 0
if value == "up" {
fields["status"] = 1
}
acc.AddFields("bond", fields, tags)
return nil
}
}
if err := scanner.Err(); err != nil {
return err
}
return fmt.Errorf("Couldn't find status info for '%s' ", bondName)
}
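// gatherSlavePart parses the per-slave sections and emits one "bond_slave"
// measurement per slave, with its status and link failure count.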
func (bond *Bond) gatherSlavePart(bondName string, rawFile string, acc telegraf.Accumulator) error {
var slave string
var status int
scanner := bufio.NewScanner(strings.NewReader(rawFile))
for scanner.Scan() {
line := scanner.Text()
stats := strings.Split(line, ":")
if len(stats) < 2 {
continue
}
name := strings.TrimSpace(stats[0])
value := strings.TrimSpace(stats[1])
if strings.Contains(name, "Slave Interface") {
slave = value
}
if strings.Contains(name, "MII Status") {
status = 0
if value == "up" {
status = 1
}
}
if strings.Contains(name, "Link Failure Count") {
count, err := strconv.Atoi(value)
if err != nil {
return err
}
fields := map[string]interface{}{
"status": status,
"failures": count,
}
tags := map[string]string{
"bond": bondName,
"interface": slave,
}
acc.AddFields("bond_slave", fields, tags)
}
}
if err := scanner.Err(); err != nil {
return err
}
return nil
}
// loadPath reads the proc path from the config first;
// if it is empty, it falls back to the env variable
func (bond *Bond) loadPath() {
if bond.HostProc == "" {
bond.HostProc = proc(envProc, defaultHostProc)
}
}
// proc can be used to read file paths from env
func proc(env, path string) string {
// try to read full file path
if p := os.Getenv(env); p != "" {
return p
}
// return default path
return path
}
func (bond *Bond) listInterfaces() ([]string, error) {
var interfaces []string
if len(bond.BondInterfaces) > 0 {
interfaces = bond.BondInterfaces
} else {
paths, err := filepath.Glob(bond.HostProc + "/net/bonding/*")
if err != nil {
return nil, err
}
for _, p := range paths {
interfaces = append(interfaces, filepath.Base(p))
}
}
return interfaces, nil
}
func init() {
inputs.Add("bond", func() telegraf.Input {
return &Bond{}
})
}

View File

@@ -1,77 +0,0 @@
package bond
import (
"testing"
"github.com/influxdata/telegraf/testutil"
)
var sampleTest802 = `
Ethernet Channel Bonding Driver: v3.5.0 (November 4, 2008)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: fast
Aggregator selection policy (ad_select): stable
bond bond0 has no active aggregator
Slave Interface: eth1
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:0c:29:f5:b7:11
Aggregator ID: N/A
Slave Interface: eth2
MII Status: up
Link Failure Count: 3
Permanent HW addr: 00:0c:29:f5:b7:1b
Aggregator ID: N/A
`
var sampleTestAB = `
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: eth2 (primary_reselect always)
Currently Active Slave: eth2
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth3
MII Status: down
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 2
Permanent HW addr:
Slave queue ID: 0
Slave Interface: eth2
MII Status: up
Speed: 100 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr:
`
func TestGatherBondInterface(t *testing.T) {
var acc testutil.Accumulator
bond := &Bond{}
bond.gatherBondInterface("bond802", sampleTest802, &acc)
acc.AssertContainsTaggedFields(t, "bond", map[string]interface{}{"status": 1}, map[string]string{"bond": "bond802"})
acc.AssertContainsTaggedFields(t, "bond_slave", map[string]interface{}{"failures": 0, "status": 1}, map[string]string{"bond": "bond802", "interface": "eth1"})
acc.AssertContainsTaggedFields(t, "bond_slave", map[string]interface{}{"failures": 3, "status": 1}, map[string]string{"bond": "bond802", "interface": "eth2"})
bond.gatherBondInterface("bondAB", sampleTestAB, &acc)
acc.AssertContainsTaggedFields(t, "bond", map[string]interface{}{"active_slave": "eth2", "status": 1}, map[string]string{"bond": "bondAB"})
acc.AssertContainsTaggedFields(t, "bond_slave", map[string]interface{}{"failures": 2, "status": 0}, map[string]string{"bond": "bondAB", "interface": "eth3"})
acc.AssertContainsTaggedFields(t, "bond_slave", map[string]interface{}{"failures": 0, "status": 1}, map[string]string{"bond": "bondAB", "interface": "eth2"})
}

View File

@@ -26,7 +26,7 @@ func TestParseSockId(t *testing.T) {
func TestParseMonDump(t *testing.T) {
dump, err := parseDump(monPerfDump)
assert.NoError(t, err)
assert.InEpsilon(t, int64(5678670180), dump["cluster"]["osd_kb_used"], epsilon)
assert.InEpsilon(t, 5678670180, dump["cluster"]["osd_kb_used"], epsilon)
assert.InEpsilon(t, 6866.540527000, dump["paxos"]["store_state_latency.sum"], epsilon)
}

View File

@@ -225,7 +225,7 @@ var fileFormats = [...]fileFormat{
}
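// numberOrString returns s parsed as an integer when possible,
// otherwise the original string.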
func numberOrString(s string) interface{} {
i, err := strconv.ParseInt(s, 10, 64)
i, err := strconv.Atoi(s)
if err == nil {
return i
}

View File

@@ -31,17 +31,17 @@ func TestCgroupStatistics_1(t *testing.T) {
"path": "testdata/memory",
}
fields := map[string]interface{}{
"memory.stat.cache": int64(1739362304123123123),
"memory.stat.rss": int64(1775325184),
"memory.stat.rss_huge": int64(778043392),
"memory.stat.mapped_file": int64(421036032),
"memory.stat.dirty": int64(-307200),
"memory.max_usage_in_bytes.0": int64(0),
"memory.max_usage_in_bytes.1": int64(-1),
"memory.max_usage_in_bytes.2": int64(2),
"memory.limit_in_bytes": int64(223372036854771712),
"memory.stat.cache": 1739362304123123123,
"memory.stat.rss": 1775325184,
"memory.stat.rss_huge": 778043392,
"memory.stat.mapped_file": 421036032,
"memory.stat.dirty": -307200,
"memory.max_usage_in_bytes.0": 0,
"memory.max_usage_in_bytes.1": -1,
"memory.max_usage_in_bytes.2": 2,
"memory.limit_in_bytes": 223372036854771712,
"memory.use_hierarchy": "12-781",
"notify_on_release": int64(0),
"notify_on_release": 0,
}
acc.AssertContainsTaggedFields(t, "cgroup", fields, tags)
}
@@ -63,10 +63,10 @@ func TestCgroupStatistics_2(t *testing.T) {
"path": "testdata/cpu",
}
fields := map[string]interface{}{
"cpuacct.usage_percpu.0": int64(-1452543795404),
"cpuacct.usage_percpu.1": int64(1376681271659),
"cpuacct.usage_percpu.2": int64(1450950799997),
"cpuacct.usage_percpu.3": int64(-1473113374257),
"cpuacct.usage_percpu.0": -1452543795404,
"cpuacct.usage_percpu.1": 1376681271659,
"cpuacct.usage_percpu.2": 1450950799997,
"cpuacct.usage_percpu.3": -1473113374257,
}
acc.AssertContainsTaggedFields(t, "cgroup", fields, tags)
}
@@ -88,7 +88,7 @@ func TestCgroupStatistics_3(t *testing.T) {
"path": "testdata/memory/group_1",
}
fields := map[string]interface{}{
"memory.limit_in_bytes": int64(223372036854771712),
"memory.limit_in_bytes": 223372036854771712,
}
acc.AssertContainsTaggedFields(t, "cgroup", fields, tags)
@@ -115,7 +115,7 @@ func TestCgroupStatistics_4(t *testing.T) {
"path": "testdata/memory/group_1/group_1_1",
}
fields := map[string]interface{}{
"memory.limit_in_bytes": int64(223372036854771712),
"memory.limit_in_bytes": 223372036854771712,
}
acc.AssertContainsTaggedFields(t, "cgroup", fields, tags)
@@ -147,7 +147,7 @@ func TestCgroupStatistics_5(t *testing.T) {
"path": "testdata/memory/group_1/group_1_1",
}
fields := map[string]interface{}{
"memory.limit_in_bytes": int64(223372036854771712),
"memory.limit_in_bytes": 223372036854771712,
}
acc.AssertContainsTaggedFields(t, "cgroup", fields, tags)
@@ -174,9 +174,9 @@ func TestCgroupStatistics_6(t *testing.T) {
"path": "testdata/memory",
}
fields := map[string]interface{}{
"memory.usage_in_bytes": int64(3513667584),
"memory.usage_in_bytes": 3513667584,
"memory.use_hierarchy": "12-781",
"memory.kmem.limit_in_bytes": int64(9223372036854771712),
"memory.kmem.limit_in_bytes": 9223372036854771712,
}
acc.AssertContainsTaggedFields(t, "cgroup", fields, tags)
}

View File

@@ -1,3 +1,5 @@
// +build linux
package chrony
import (

View File

@@ -0,0 +1,3 @@
// +build !linux
package chrony

View File

@@ -1,3 +1,5 @@
// +build linux
package chrony
import (

View File

@@ -92,7 +92,7 @@ func (c *CloudWatch) SampleConfig() string {
## Collection Delay (required - must account for metrics availability via CloudWatch API)
delay = "5m"
## Recommended: use metric 'interval' that is a multiple of 'period' to avoid
## gaps or overlap in pulled data
interval = "5m"

View File

@@ -1,54 +1,43 @@
# Consul Input Plugin
# Telegraf Input Plugin: Consul
This plugin will collect statistics about all health checks registered in
Consul. It uses the [Consul API](https://www.consul.io/docs/agent/http/health.html#health_state)
to query the data. It will not report the
[telemetry](https://www.consul.io/docs/agent/telemetry.html), but Consul can
already report those stats using the StatsD protocol if needed.
### Configuration:
## Configuration:
```toml
```
# Gather health check statuses from services registered in Consul
[[inputs.consul]]
## Consul server address
# address = "localhost"
## URI scheme for the Consul server, one of "http", "https"
# scheme = "http"
## ACL token used in every request
## Most of these values default to the ones configured at the Consul agent level.
## Optional Consul server address (default: "")
# address = ""
## Optional URI scheme for the Consul server (default: "")
# scheme = ""
## Optional ACL token used in every request (default: "")
# token = ""
## HTTP Basic Authentication username and password.
## Optional username used for request HTTP Basic Authentication (default: "")
# username = ""
## Optional password used for HTTP Basic Authentication (default: "")
# password = ""
## Data centre to query the health checks from
## Optional data centre to query the health checks from (default: "")
# datacentre = ""
## SSL Config
# ssl_ca = "/etc/telegraf/ca.pem"
# ssl_cert = "/etc/telegraf/cert.pem"
# ssl_key = "/etc/telegraf/key.pem"
## If false, skip chain & host verification
# insecure_skip_verify = true
```
### Metrics:
## Measurements:
- consul_health_checks
- tags:
- node (node that the check/service is registered on)
- service_name
- check_id
- fields:
- check_name
- service_id
- status
- passing (integer)
- critical (integer)
- warning (integer)
### Consul:
Tags:
- node: the node the check/service is registered on
- service_name: name of the service (this is the service name, not the service ID)
- check_id
Fields:
- check_name
- service_id
- status
- passing
- critical
- warning
`passing`, `critical`, and `warning` are integer representations of the health
check state. A value of `1` represents that the status was the state of the
@@ -57,6 +46,8 @@ the health check at this sample.
## Example output
```
consul_health_checks,host=wolfpit,node=consul-server-node,check_id="serfHealth" check_name="Serf Health Status",service_id="",status="passing",passing=1i,critical=0i,warning=0i 1464698464486439902
consul_health_checks,host=wolfpit,node=consul-server-node,service_name=www.example.com,check_id="service:www-example-com.test01" check_name="Service 'www.example.com' check",service_id="www-example-com.test01",status="critical",passing=0i,critical=1i,warning=0i 1464698464486519036
$ telegraf --config ./telegraf.conf --input-filter consul --test
* Plugin: consul, Collection 1
> consul_health_checks,host=wolfpit,node=consul-server-node,check_id="serfHealth" check_name="Serf Health Status",service_id="",status="passing",passing=1i,critical=0i,warning=0i 1464698464486439902
> consul_health_checks,host=wolfpit,node=consul-server-node,service_name=www.example.com,check_id="service:www-example-com.test01" check_name="Service 'www.example.com' check",service_id="www-example-com.test01",status="critical",passing=0i,critical=1i,warning=0i 1464698464486519036
```

View File

@@ -31,28 +31,19 @@ type Consul struct {
}
var sampleConfig = `
## Consul server address
## Most of these values default to the ones configured at the Consul agent level.
## Optional Consul server address (default: "localhost")
# address = "localhost"
## URI scheme for the Consul server, one of "http", "https"
## Optional URI scheme for the Consul server (default: "http")
# scheme = "http"
## ACL token used in every request
## Optional ACL token used in every request (default: "")
# token = ""
## HTTP Basic Authentication username and password.
## Optional username used for request HTTP Basic Authentication (default: "")
# username = ""
## Optional password used for HTTP Basic Authentication (default: "")
# password = ""
## Data centre to query the health checks from
## Optional data centre to query the health checks from (default: "")
# datacentre = ""
## SSL Config
# ssl_ca = "/etc/telegraf/ca.pem"
# ssl_cert = "/etc/telegraf/cert.pem"
# ssl_key = "/etc/telegraf/key.pem"
## If false, skip chain & host verification
# insecure_skip_verify = true
`
func (c *Consul) Description() string {
@@ -78,10 +69,6 @@ func (c *Consul) createAPIClient() (*api.Client, error) {
config.Datacenter = c.Datacentre
}
if c.Token != "" {
config.Token = c.Token
}
if c.Username != "" {
config.HttpAuth = &api.HttpBasicAuth{
Username: c.Username,

View File

@@ -20,7 +20,7 @@ var sampleChecks = []*api.HealthCheck{
},
}
func TestGatherHealthCheck(t *testing.T) {
expectedFields := map[string]interface{}{
"check_name": "foo.health",
"status": "passing",

View File

@@ -21,7 +21,7 @@ var sampleConfig = `
## http://admin:secret@couchbase-0.example.com:8091/
##
## If no servers are specified, then localhost is used as the host.
## If no protocol is specified, HTTP is used.
## If no port is specified, 8091 is used.
servers = ["http://localhost:8091"]
`

View File

@@ -1,216 +0,0 @@
# DC/OS Input Plugin
This input plugin gathers metrics from a DC/OS cluster's [metrics component](https://docs.mesosphere.com/1.10/metrics/).
**Series Cardinality Warning**
Depending on the workload of your DC/OS cluster, this plugin can quickly
create a high number of series which, when unchecked, can cause high load on
your database.
- Use the
[measurement filtering](https://docs.influxdata.com/telegraf/latest/administration/configuration/#measurement-filtering)
options to exclude unneeded tags.
- Write to a database with an appropriate
[retention policy](https://docs.influxdata.com/influxdb/latest/guides/downsampling_and_retention/).
- Limit series cardinality in your database using the
[`max-series-per-database`](https://docs.influxdata.com/influxdb/latest/administration/config/#max-series-per-database-1000000) and
[`max-values-per-tag`](https://docs.influxdata.com/influxdb/latest/administration/config/#max-values-per-tag-100000) settings.
- Consider using the
[Time Series Index](https://docs.influxdata.com/influxdb/latest/concepts/time-series-index/).
- Monitor your database's
[series cardinality](https://docs.influxdata.com/influxdb/latest/query_language/spec/#show-cardinality).
### Configuration:
```toml
[[inputs.dcos]]
## The DC/OS cluster URL.
cluster_url = "https://dcos-master-1"
## The ID of the service account.
service_account_id = "telegraf"
## The private key file for the service account.
service_account_private_key = "/etc/telegraf/telegraf-sa-key.pem"
## Path containing login token. If set, will read on every gather.
# token_file = "/home/dcos/.dcos/token"
## In all filter options, if both include and exclude are empty, all items
## will be collected. Arrays may contain glob patterns.
##
## Node IDs to collect metrics from. If a node is excluded, no metrics will
## be collected for its containers or apps.
# node_include = []
# node_exclude = []
## Container IDs to collect container metrics from.
# container_include = []
# container_exclude = []
## Container IDs to collect app metrics from.
# app_include = []
# app_exclude = []
## Maximum concurrent connections to the cluster.
# max_connections = 10
## Maximum time to receive a response from cluster.
# response_timeout = "20s"
## Optional SSL Config
# ssl_ca = "/etc/telegraf/ca.pem"
# ssl_cert = "/etc/telegraf/cert.pem"
# ssl_key = "/etc/telegraf/key.pem"
## If false, skip chain & host verification
# insecure_skip_verify = true
## Recommended filtering to reduce series cardinality.
# [inputs.dcos.tagdrop]
# path = ["/var/lib/mesos/slave/slaves/*"]
```
#### Enterprise Authentication
When using Enterprise DC/OS, it is recommended to use a service account to
authenticate with the cluster.
The plugin requires the following permissions:
```
dcos:adminrouter:ops:system-metrics full
dcos:adminrouter:ops:mesos full
```
Follow the directions to [create a service account and assign permissions](https://docs.mesosphere.com/1.10/security/service-auth/custom-service-auth/).
Quick configuration using the Enterprise CLI:
```
dcos security org service-accounts keypair telegraf-sa-key.pem telegraf-sa-cert.pem
dcos security org service-accounts create -p telegraf-sa-cert.pem -d "Telegraf DC/OS input plugin" telegraf
dcos security org users grant telegraf dcos:adminrouter:ops:system-metrics full
dcos security org users grant telegraf dcos:adminrouter:ops:mesos full
```
#### Open Source Authentication
Open Source DC/OS does not provide service accounts. Instead you can use one
of the following options:
1. [Disable authentication](https://dcos.io/docs/1.10/security/managing-authentication/#authentication-opt-out)
2. Use the `token_file` parameter to read an authentication token from a file.
The `token_file` can be kept current by using the [dcos cli] to log in
periodically. The cli can stay logged in for at most XXX days; you will need
to ensure the cli performs a new login before this time expires.
```
dcos auth login --username foo --password bar
dcos config show core.dcos_acs_token > ~/.dcos/token
```
Another option to create a `token_file` is to generate a token using the
cluster secret. This allows you to set the expiration date manually or even
create a never-expiring token. However, if the cluster secret or the token is
compromised, it cannot be revoked, and recovering may require a full
reinstall of the cluster. For more information on this technique, reference
[this blog post](https://medium.com/@richardgirges/authenticating-open-source-dc-os-with-third-party-services-125fa33a5add).
### Metrics:
Please consult the [Metrics Reference](https://docs.mesosphere.com/1.10/metrics/reference/)
for details about field interpretation.
- dcos_node
- tags:
- cluster
- hostname
- path (filesystem fields only)
- interface (network fields only)
- fields:
- system_uptime (float)
- cpu_cores (float)
- cpu_total (float)
- cpu_user (float)
- cpu_system (float)
- cpu_idle (float)
- cpu_wait (float)
- load_1min (float)
- load_5min (float)
- load_15min (float)
- filesystem_capacity_total_bytes (int)
- filesystem_capacity_used_bytes (int)
- filesystem_capacity_free_bytes (int)
- filesystem_inode_total (float)
- filesystem_inode_used (float)
- filesystem_inode_free (float)
- memory_total_bytes (int)
- memory_free_bytes (int)
- memory_buffers_bytes (int)
- memory_cached_bytes (int)
- swap_total_bytes (int)
- swap_free_bytes (int)
- swap_used_bytes (int)
- network_in_bytes (int)
- network_out_bytes (int)
- network_in_packets (float)
- network_out_packets (float)
- network_in_dropped (float)
- network_out_dropped (float)
- network_in_errors (float)
- network_out_errors (float)
- process_count (float)
- dcos_container
- tags:
- cluster
- hostname
- container_id
- task_name
- fields:
- cpus_limit (float)
- cpus_system_time (float)
- cpus_throttled_time (float)
- cpus_user_time (float)
- disk_limit_bytes (int)
- disk_used_bytes (int)
- mem_limit_bytes (int)
- mem_total_bytes (int)
- net_rx_bytes (int)
- net_rx_dropped (float)
- net_rx_errors (float)
- net_rx_packets (float)
- net_tx_bytes (int)
- net_tx_dropped (float)
- net_tx_errors (float)
- net_tx_packets (float)
- dcos_app
- tags:
- cluster
- hostname
- container_id
- task_name
- fields:
- fields are application specific
### Example Output:
```
dcos_node,cluster=enterprise,hostname=192.168.122.18,path=/boot filesystem_capacity_free_bytes=918188032i,filesystem_capacity_total_bytes=1063256064i,filesystem_capacity_used_bytes=145068032i,filesystem_inode_free=523958,filesystem_inode_total=524288,filesystem_inode_used=330 1511859222000000000
dcos_node,cluster=enterprise,hostname=192.168.122.18,interface=dummy0 network_in_bytes=0i,network_in_dropped=0,network_in_errors=0,network_in_packets=0,network_out_bytes=0i,network_out_dropped=0,network_out_errors=0,network_out_packets=0 1511859222000000000
dcos_node,cluster=enterprise,hostname=192.168.122.18,interface=docker0 network_in_bytes=0i,network_in_dropped=0,network_in_errors=0,network_in_packets=0,network_out_bytes=0i,network_out_dropped=0,network_out_errors=0,network_out_packets=0 1511859222000000000
dcos_node,cluster=enterprise,hostname=192.168.122.18 cpu_cores=2,cpu_idle=81.62,cpu_system=4.19,cpu_total=13.670000000000002,cpu_user=9.48,cpu_wait=0,load_15min=0.7,load_1min=0.22,load_5min=0.6,memory_buffers_bytes=970752i,memory_cached_bytes=1830473728i,memory_free_bytes=1178636288i,memory_total_bytes=3975073792i,process_count=198,swap_free_bytes=859828224i,swap_total_bytes=859828224i,swap_used_bytes=0i,system_uptime=18874 1511859222000000000
dcos_node,cluster=enterprise,hostname=192.168.122.18,interface=lo network_in_bytes=1090992450i,network_in_dropped=0,network_in_errors=0,network_in_packets=1546938,network_out_bytes=1090992450i,network_out_dropped=0,network_out_errors=0,network_out_packets=1546938 1511859222000000000
dcos_node,cluster=enterprise,hostname=192.168.122.18,path=/ filesystem_capacity_free_bytes=1668378624i,filesystem_capacity_total_bytes=6641680384i,filesystem_capacity_used_bytes=4973301760i,filesystem_inode_free=3107856,filesystem_inode_total=3248128,filesystem_inode_used=140272 1511859222000000000
dcos_node,cluster=enterprise,hostname=192.168.122.18,interface=minuteman network_in_bytes=0i,network_in_dropped=0,network_in_errors=0,network_in_packets=0,network_out_bytes=210i,network_out_dropped=0,network_out_errors=0,network_out_packets=3 1511859222000000000
dcos_node,cluster=enterprise,hostname=192.168.122.18,interface=eth0 network_in_bytes=539886216i,network_in_dropped=1,network_in_errors=0,network_in_packets=979808,network_out_bytes=112395836i,network_out_dropped=0,network_out_errors=0,network_out_packets=891239 1511859222000000000
dcos_node,cluster=enterprise,hostname=192.168.122.18,interface=spartan network_in_bytes=0i,network_in_dropped=0,network_in_errors=0,network_in_packets=0,network_out_bytes=210i,network_out_dropped=0,network_out_errors=0,network_out_packets=3 1511859222000000000
dcos_node,cluster=enterprise,hostname=192.168.122.18,path=/var/lib/docker/overlay filesystem_capacity_free_bytes=1668378624i,filesystem_capacity_total_bytes=6641680384i,filesystem_capacity_used_bytes=4973301760i,filesystem_inode_free=3107856,filesystem_inode_total=3248128,filesystem_inode_used=140272 1511859222000000000
dcos_node,cluster=enterprise,hostname=192.168.122.18,interface=vtep1024 network_in_bytes=0i,network_in_dropped=0,network_in_errors=0,network_in_packets=0,network_out_bytes=0i,network_out_dropped=0,network_out_errors=0,network_out_packets=0 1511859222000000000
dcos_node,cluster=enterprise,hostname=192.168.122.18,path=/var/lib/docker/plugins filesystem_capacity_free_bytes=1668378624i,filesystem_capacity_total_bytes=6641680384i,filesystem_capacity_used_bytes=4973301760i,filesystem_inode_free=3107856,filesystem_inode_total=3248128,filesystem_inode_used=140272 1511859222000000000
dcos_node,cluster=enterprise,hostname=192.168.122.18,interface=d-dcos network_in_bytes=0i,network_in_dropped=0,network_in_errors=0,network_in_packets=0,network_out_bytes=0i,network_out_dropped=0,network_out_errors=0,network_out_packets=0 1511859222000000000
dcos_app,cluster=enterprise,container_id=9a78d34a-3bbf-467e-81cf-a57737f154ee,hostname=192.168.122.18 container_received_bytes_per_sec=0,container_throttled_bytes_per_sec=0 1511859222000000000
dcos_container,cluster=enterprise,container_id=cbf19b77-3b8d-4bcf-b81f-824b67279629,hostname=192.168.122.18 cpus_limit=0.3,cpus_system_time=307.31,cpus_throttled_time=102.029930607,cpus_user_time=268.57,disk_limit_bytes=268435456i,disk_used_bytes=30953472i,mem_limit_bytes=570425344i,mem_total_bytes=13316096i,net_rx_bytes=0i,net_rx_dropped=0,net_rx_errors=0,net_rx_packets=0,net_tx_bytes=0i,net_tx_dropped=0,net_tx_errors=0,net_tx_packets=0 1511859222000000000
dcos_app,cluster=enterprise,container_id=cbf19b77-3b8d-4bcf-b81f-824b67279629,hostname=192.168.122.18 container_received_bytes_per_sec=0,container_throttled_bytes_per_sec=0 1511859222000000000
dcos_container,cluster=enterprise,container_id=5725e219-f66e-40a8-b3ab-519d85f4c4dc,hostname=192.168.122.18,task_name=hello-world cpus_limit=0.6,cpus_system_time=25.6,cpus_throttled_time=327.977109217,cpus_user_time=566.54,disk_limit_bytes=0i,disk_used_bytes=0i,mem_limit_bytes=1107296256i,mem_total_bytes=335941632i,net_rx_bytes=0i,net_rx_dropped=0,net_rx_errors=0,net_rx_packets=0,net_tx_bytes=0i,net_tx_dropped=0,net_tx_errors=0,net_tx_packets=0 1511859222000000000
dcos_app,cluster=enterprise,container_id=5725e219-f66e-40a8-b3ab-519d85f4c4dc,hostname=192.168.122.18 container_received_bytes_per_sec=0,container_throttled_bytes_per_sec=0 1511859222000000000
dcos_app,cluster=enterprise,container_id=c76e1488-4fb7-4010-a4cf-25725f8173f9,hostname=192.168.122.18 container_received_bytes_per_sec=0,container_throttled_bytes_per_sec=0 1511859222000000000
dcos_container,cluster=enterprise,container_id=cbe0b2f9-061f-44ac-8f15-4844229e8231,hostname=192.168.122.18,task_name=telegraf cpus_limit=0.2,cpus_system_time=8.109999999,cpus_throttled_time=93.183916045,cpus_user_time=17.97,disk_limit_bytes=0i,disk_used_bytes=0i,mem_limit_bytes=167772160i,mem_total_bytes=0i,net_rx_bytes=0i,net_rx_dropped=0,net_rx_errors=0,net_rx_packets=0,net_tx_bytes=0i,net_tx_dropped=0,net_tx_errors=0,net_tx_packets=0 1511859222000000000
dcos_container,cluster=enterprise,container_id=b64115de-3d2a-431d-a805-76e7c46453f1,hostname=192.168.122.18 cpus_limit=0.2,cpus_system_time=2.69,cpus_throttled_time=20.064861214,cpus_user_time=6.56,disk_limit_bytes=268435456i,disk_used_bytes=29360128i,mem_limit_bytes=297795584i,mem_total_bytes=13733888i,net_rx_bytes=0i,net_rx_dropped=0,net_rx_errors=0,net_rx_packets=0,net_tx_bytes=0i,net_tx_dropped=0,net_tx_errors=0,net_tx_packets=0 1511859222000000000
dcos_app,cluster=enterprise,container_id=b64115de-3d2a-431d-a805-76e7c46453f1,hostname=192.168.122.18 container_received_bytes_per_sec=0,container_throttled_bytes_per_sec=0 1511859222000000000
```

View File

@@ -1,337 +0,0 @@
package dcos
import (
"bytes"
"context"
"crypto/tls"
"encoding/json"
"fmt"
"net/http"
"net/url"
"time"
jwt "github.com/dgrijalva/jwt-go"
)
const (
// How long to stay logged in for
loginDuration = 65 * time.Minute
)
// Client is an interface for communicating with the DC/OS API.
type Client interface {
SetToken(token string)
Login(ctx context.Context, sa *ServiceAccount) (*AuthToken, error)
GetSummary(ctx context.Context) (*Summary, error)
GetContainers(ctx context.Context, node string) ([]Container, error)
GetNodeMetrics(ctx context.Context, node string) (*Metrics, error)
GetContainerMetrics(ctx context.Context, node, container string) (*Metrics, error)
GetAppMetrics(ctx context.Context, node, container string) (*Metrics, error)
}
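// APIError is an error created from a DC/OS API response.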
type APIError struct {
URL string
StatusCode int
Title string
Description string
}
// Login is request data for logging in.
type Login struct {
UID string `json:"uid"`
Exp int64 `json:"exp"`
Token string `json:"token"`
}
// LoginError is the response when login fails.
type LoginError struct {
Title string `json:"title"`
Description string `json:"description"`
}
// LoginAuth is the response to a successful login.
type LoginAuth struct {
Token string `json:"token"`
}
// Slave is a node in the cluster.
type Slave struct {
ID string `json:"id"`
}
// Summary provides high level cluster wide information.
type Summary struct {
Cluster string
Slaves []Slave
}
// Container is a container on a node.
type Container struct {
ID string
}
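// DataPoint is a single named measurement with tags, as returned by the
// metrics API.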
type DataPoint struct {
Name string `json:"name"`
Tags map[string]string `json:"tags"`
Unit string `json:"unit"`
Value float64 `json:"value"`
}
// Metrics are the DCOS metrics
type Metrics struct {
Datapoints []DataPoint `json:"datapoints"`
Dimensions map[string]interface{} `json:"dimensions"`
}
// AuthToken is the authentication token.
type AuthToken struct {
Text string
Expire time.Time
}
// ClusterClient is a Client that uses the cluster URL.
type ClusterClient struct {
clusterURL *url.URL
httpClient *http.Client
credentials *Credentials
token string
semaphore chan struct{}
}
type claims struct {
UID string `json:"uid"`
jwt.StandardClaims
}
func (e APIError) Error() string {
if e.Description != "" {
return fmt.Sprintf("[%s] %s: %s", e.URL, e.Title, e.Description)
}
return fmt.Sprintf("[%s] %s", e.URL, e.Title)
}
func NewClusterClient(
clusterURL *url.URL,
timeout time.Duration,
maxConns int,
tlsConfig *tls.Config,
) *ClusterClient {
httpClient := &http.Client{
Transport: &http.Transport{
MaxIdleConns: maxConns,
TLSClientConfig: tlsConfig,
},
Timeout: timeout,
}
semaphore := make(chan struct{}, maxConns)
c := &ClusterClient{
clusterURL: clusterURL,
httpClient: httpClient,
semaphore: semaphore,
}
return c
}
func (c *ClusterClient) SetToken(token string) {
c.token = token
}
func (c *ClusterClient) Login(ctx context.Context, sa *ServiceAccount) (*AuthToken, error) {
token, err := c.createLoginToken(sa)
if err != nil {
return nil, err
}
exp := time.Now().Add(loginDuration)
body := &Login{
UID: sa.AccountID,
Exp: exp.Unix(),
Token: token,
}
octets, err := json.Marshal(body)
if err != nil {
return nil, err
}
loc := c.url("/acs/api/v1/auth/login")
req, err := http.NewRequest("POST", loc, bytes.NewBuffer(octets))
if err != nil {
return nil, err
}
req.Header.Add("Content-Type", "application/json")
req = req.WithContext(ctx)
resp, err := c.httpClient.Do(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()
if resp.StatusCode == http.StatusOK {
auth := &LoginAuth{}
dec := json.NewDecoder(resp.Body)
err = dec.Decode(auth)
if err != nil {
return nil, err
}
token := &AuthToken{
Text: auth.Token,
Expire: exp,
}
return token, nil
}
loginError := &LoginError{}
dec := json.NewDecoder(resp.Body)
err = dec.Decode(loginError)
if err != nil {
err := &APIError{
URL: loc,
StatusCode: resp.StatusCode,
Title: resp.Status,
}
return nil, err
}
err = &APIError{
URL: loc,
StatusCode: resp.StatusCode,
Title: loginError.Title,
Description: loginError.Description,
}
return nil, err
}
func (c *ClusterClient) GetSummary(ctx context.Context) (*Summary, error) {
summary := &Summary{}
err := c.doGet(ctx, c.url("/mesos/master/state-summary"), summary)
if err != nil {
return nil, err
}
return summary, nil
}
func (c *ClusterClient) GetContainers(ctx context.Context, node string) ([]Container, error) {
list := []string{}
path := fmt.Sprintf("/system/v1/agent/%s/metrics/v0/containers", node)
err := c.doGet(ctx, c.url(path), &list)
if err != nil {
return nil, err
}
containers := make([]Container, 0, len(list))
for _, c := range list {
containers = append(containers, Container{ID: c})
}
return containers, nil
}
func (c *ClusterClient) getMetrics(ctx context.Context, url string) (*Metrics, error) {
metrics := &Metrics{}
err := c.doGet(ctx, url, metrics)
if err != nil {
return nil, err
}
return metrics, nil
}
func (c *ClusterClient) GetNodeMetrics(ctx context.Context, node string) (*Metrics, error) {
path := fmt.Sprintf("/system/v1/agent/%s/metrics/v0/node", node)
return c.getMetrics(ctx, c.url(path))
}
func (c *ClusterClient) GetContainerMetrics(ctx context.Context, node, container string) (*Metrics, error) {
path := fmt.Sprintf("/system/v1/agent/%s/metrics/v0/containers/%s", node, container)
return c.getMetrics(ctx, c.url(path))
}
func (c *ClusterClient) GetAppMetrics(ctx context.Context, node, container string) (*Metrics, error) {
path := fmt.Sprintf("/system/v1/agent/%s/metrics/v0/containers/%s/app", node, container)
return c.getMetrics(ctx, c.url(path))
}
func createGetRequest(url string, token string) (*http.Request, error) {
req, err := http.NewRequest("GET", url, nil)
if err != nil {
return nil, err
}
if token != "" {
req.Header.Add("Authorization", "token="+token)
}
req.Header.Add("Accept", "application/json")
return req, nil
}
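// doGet performs an authenticated GET request and decodes the JSON
// response into v, bounding concurrency with the client's semaphore.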
func (c *ClusterClient) doGet(ctx context.Context, url string, v interface{}) error {
req, err := createGetRequest(url, c.token)
if err != nil {
return err
}
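// Acquire a slot on the semaphore, or give up if the request is cancelled.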
select {
case c.semaphore <- struct{}{}:
break
case <-ctx.Done():
return ctx.Err()
}
resp, err := c.httpClient.Do(req.WithContext(ctx))
if err != nil {
<-c.semaphore
return err
}
defer func() {
resp.Body.Close()
<-c.semaphore
}()
// Clear invalid token if unauthorized
if resp.StatusCode == http.StatusUnauthorized {
c.token = ""
}
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
return &APIError{
URL: url,
StatusCode: resp.StatusCode,
Title: resp.Status,
}
}
if resp.StatusCode == http.StatusNoContent {
return nil
}
err = json.NewDecoder(resp.Body).Decode(v)
return err
}
func (c *ClusterClient) url(path string) string {
url := *c.clusterURL
url.Path = path
return url.String()
}
func (c *ClusterClient) createLoginToken(sa *ServiceAccount) (string, error) {
token := jwt.NewWithClaims(jwt.SigningMethodRS256, claims{
UID: sa.AccountID,
StandardClaims: jwt.StandardClaims{
// How long we have to login with this token
ExpiresAt: time.Now().Add(5 * time.Minute).Unix(),
},
})
return token.SignedString(sa.PrivateKey)
}

View File

@@ -1,245 +0,0 @@
package dcos
import (
"context"
"fmt"
"net/http"
"net/http/httptest"
"net/url"
"testing"
jwt "github.com/dgrijalva/jwt-go"
"github.com/stretchr/testify/require"
)
const (
privateKey = `-----BEGIN RSA PRIVATE KEY-----
MIICXQIBAAKBgQCwlGyzVp9cqtwiNCgCnaR0kilPZhr4xFBcnXxvQ8/uzOHaWKxj
XWR38cKR3gPh5+4iSmzMdo3HDJM5ks6imXGnp+LPOA5iNewnpLNs7UxA2arwKH/6
4qIaAXAtf5jE46wZIMgc2EW9wGL3dxC0JY8EXPpBFB/3J8gADkorFR8lwwIDAQAB
AoGBAJaFHxfMmjHK77U0UnrQWFSKFy64cftmlL4t/Nl3q7L68PdIKULWZIMeEWZ4
I0UZiFOwr4em83oejQ1ByGSwekEuiWaKUI85IaHfcbt+ogp9hY/XbOEo56OPQUAd
bEZv1JqJOqta9Ug1/E1P9LjEEyZ5F5ubx7813rxAE31qKtKJAkEA1zaMlCWIr+Rj
hGvzv5rlHH3wbOB4kQFXO4nqj3J/ttzR5QiJW24STMDcbNngFlVcDVju56LrNTiD
dPh9qvl7nwJBANILguR4u33OMksEZTYB7nQZSurqXsq6382zH7pTl29ANQTROHaM
PKC8dnDWq8RGTqKuvWblIzzGIKqIMovZo10CQC96T0UXirITFolOL3XjvAuvFO1Q
EAkdXJs77805m0dCK+P1IChVfiAEpBw3bKJArpAbQIlFfdI953JUp5SieU0CQEub
BSSEKMjh/cxu6peEHnb/262vayuCFKkQPu1sxWewLuVrAe36EKCy9dcsDmv5+rgo
Odjdxc9Madm4aKlaT6kCQQCpAgeblDrrxTrNQ+Typzo37PlnQrvI+0EceAUuJ72G
P0a+YZUeHNRqT2pPN9lMTAZGGi3CtcF2XScbLNEBeXge
-----END RSA PRIVATE KEY-----`
)
func TestLogin(t *testing.T) {
ts := httptest.NewServer(http.NotFoundHandler())
defer ts.Close()
var tests = []struct {
name string
responseCode int
responseBody string
expectedError error
expectedToken string
}{
{
name: "Login successful",
responseCode: http.StatusOK,
responseBody: `{"token": "XXX.YYY.ZZZ"}`,
expectedError: nil,
expectedToken: "XXX.YYY.ZZZ",
},
{
name: "Unauthorized Error",
responseCode: http.StatusUnauthorized,
responseBody: `{"title": "x", "description": "y"}`,
expectedError: &APIError{
URL: ts.URL + "/acs/api/v1/auth/login",
StatusCode: http.StatusUnauthorized,
Title: "x",
Description: "y",
},
expectedToken: "",
},
}
key, err := jwt.ParseRSAPrivateKeyFromPEM([]byte(privateKey))
require.NoError(t, err)
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ts.Config.Handler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(tt.responseCode)
fmt.Fprintln(w, tt.responseBody)
})
u, err := url.Parse(ts.URL)
require.NoError(t, err)
ctx := context.Background()
sa := &ServiceAccount{
AccountID: "telegraf",
PrivateKey: key,
}
client := NewClusterClient(u, defaultResponseTimeout, 1, nil)
auth, err := client.Login(ctx, sa)
require.Equal(t, tt.expectedError, err)
if tt.expectedToken != "" {
require.Equal(t, tt.expectedToken, auth.Text)
} else {
require.Nil(t, auth)
}
})
}
}
func TestGetSummary(t *testing.T) {
ts := httptest.NewServer(http.NotFoundHandler())
defer ts.Close()
var tests = []struct {
name string
responseCode int
responseBody string
expectedValue *Summary
expectedError error
}{
{
name: "No nodes",
responseCode: http.StatusOK,
responseBody: `{"cluster": "a", "slaves": []}`,
expectedValue: &Summary{Cluster: "a", Slaves: []Slave{}},
expectedError: nil,
},
{
name: "Unauthorized Error",
responseCode: http.StatusUnauthorized,
responseBody: `<html></html>`,
expectedValue: nil,
expectedError: &APIError{
URL: ts.URL + "/mesos/master/state-summary",
StatusCode: http.StatusUnauthorized,
Title: "401 Unauthorized",
},
},
{
name: "Has nodes",
responseCode: http.StatusOK,
responseBody: `{"cluster": "a", "slaves": [{"id": "a"}, {"id": "b"}]}`,
expectedValue: &Summary{
Cluster: "a",
Slaves: []Slave{
Slave{ID: "a"},
Slave{ID: "b"},
},
},
expectedError: nil,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ts.Config.Handler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// check the path
w.WriteHeader(tt.responseCode)
fmt.Fprintln(w, tt.responseBody)
})
u, err := url.Parse(ts.URL)
require.NoError(t, err)
ctx := context.Background()
client := NewClusterClient(u, defaultResponseTimeout, 1, nil)
summary, err := client.GetSummary(ctx)
require.Equal(t, tt.expectedError, err)
require.Equal(t, tt.expectedValue, summary)
})
}
}
func TestGetNodeMetrics(t *testing.T) {
ts := httptest.NewServer(http.NotFoundHandler())
defer ts.Close()
var tests = []struct {
name string
responseCode int
responseBody string
expectedValue *Metrics
expectedError error
}{
{
name: "Empty Body",
responseCode: http.StatusOK,
responseBody: `{}`,
expectedValue: &Metrics{},
expectedError: nil,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ts.Config.Handler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// check the path
w.WriteHeader(tt.responseCode)
fmt.Fprintln(w, tt.responseBody)
})
u, err := url.Parse(ts.URL)
require.NoError(t, err)
ctx := context.Background()
client := NewClusterClient(u, defaultResponseTimeout, 1, nil)
m, err := client.GetNodeMetrics(ctx, "foo")
require.Equal(t, tt.expectedError, err)
require.Equal(t, tt.expectedValue, m)
})
}
}
func TestGetContainerMetrics(t *testing.T) {
ts := httptest.NewServer(http.NotFoundHandler())
defer ts.Close()
var tests = []struct {
name string
responseCode int
responseBody string
expectedValue *Metrics
expectedError error
}{
{
name: "204 No Content",
responseCode: http.StatusNoContent,
responseBody: ``,
expectedValue: &Metrics{},
expectedError: nil,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
ts.Config.Handler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// check the path
w.WriteHeader(tt.responseCode)
fmt.Fprintln(w, tt.responseBody)
})
u, err := url.Parse(ts.URL)
require.NoError(t, err)
ctx := context.Background()
client := NewClusterClient(u, defaultResponseTimeout, 1, nil)
m, err := client.GetContainerMetrics(ctx, "foo", "bar")
require.Equal(t, tt.expectedError, err)
require.Equal(t, tt.expectedValue, m)
})
}
}

View File

@@ -1,72 +0,0 @@
package dcos
import (
"context"
"crypto/rsa"
"fmt"
"io/ioutil"
"strings"
"time"
"unicode/utf8"
)
const (
// How long before expiration to renew token
relogDuration = 5 * time.Minute
)
type Credentials interface {
Token(ctx context.Context, client Client) (string, error)
IsExpired() bool
}
type ServiceAccount struct {
AccountID string
PrivateKey *rsa.PrivateKey
auth *AuthToken
}
type TokenCreds struct {
Path string
}
type NullCreds struct {
}
func (c *ServiceAccount) Token(ctx context.Context, client Client) (string, error) {
auth, err := client.Login(ctx, c)
if err != nil {
return "", err
}
c.auth = auth
return auth.Text, nil
}
func (c *ServiceAccount) IsExpired() bool {
// Expired when no token has been cached yet, or when the token is within
// relogDuration of its expiration time.
return c.auth == nil || c.auth.Text == "" || time.Now().After(c.auth.Expire.Add(-relogDuration))
}
func (c *TokenCreds) Token(ctx context.Context, client Client) (string, error) {
octets, err := ioutil.ReadFile(c.Path)
if err != nil {
return "", fmt.Errorf("Error reading token file %q: %s", c.Path, err)
}
if !utf8.Valid(octets) {
return "", fmt.Errorf("Token file does not contain utf-8 encoded text: %s", c.Path)
}
token := strings.TrimSpace(string(octets))
return token, nil
}
func (c *TokenCreds) IsExpired() bool {
return true
}
func (c *NullCreds) Token(ctx context.Context, client Client) (string, error) {
return "", nil
}
func (c *NullCreds) IsExpired() bool {
return true
}
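
For illustration, a minimal usage sketch of the `TokenCreds` implementation above. This is not part of the plugin; the token path comes from the plugin's sample configuration, and `TokenCreds` ignores its client argument, so `nil` is safe here:

```
// Hedged example; would live in this dcos package.
func ExampleTokenCreds() {
	// Token re-reads the file on every call, trims surrounding
	// whitespace, and rejects non-UTF-8 content (see Token above).
	creds := &TokenCreds{Path: "/home/dcos/.dcos/token"}
	token, err := creds.Token(context.Background(), nil)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(token)
}
```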

View File

@@ -1,435 +0,0 @@
package dcos
import (
"context"
"io/ioutil"
"net/url"
"sort"
"strings"
"sync"
"time"
jwt "github.com/dgrijalva/jwt-go"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/filter"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
)
const (
defaultMaxConnections = 10
defaultResponseTimeout = 20 * time.Second
)
var (
nodeDimensions = []string{
"hostname",
"path",
"interface",
}
containerDimensions = []string{
"hostname",
"container_id",
"task_name",
}
appDimensions = []string{
"hostname",
"container_id",
"task_name",
}
)
type DCOS struct {
ClusterURL string `toml:"cluster_url"`
ServiceAccountID string `toml:"service_account_id"`
ServiceAccountPrivateKey string
TokenFile string
NodeInclude []string
NodeExclude []string
ContainerInclude []string
ContainerExclude []string
AppInclude []string
AppExclude []string
MaxConnections int
ResponseTimeout internal.Duration
SSLCA string `toml:"ssl_ca"`
SSLCert string `toml:"ssl_cert"`
SSLKey string `toml:"ssl_key"`
InsecureSkipVerify bool `toml:"insecure_skip_verify"`
client Client
creds Credentials
initialized bool
nodeFilter filter.Filter
containerFilter filter.Filter
appFilter filter.Filter
taskNameFilter filter.Filter
}
func (d *DCOS) Description() string {
return "Input plugin for DC/OS metrics"
}
var sampleConfig = `
## The DC/OS cluster URL.
cluster_url = "https://dcos-ee-master-1"
## The ID of the service account.
service_account_id = "telegraf"
## The private key file for the service account.
service_account_private_key = "/etc/telegraf/telegraf-sa-key.pem"
## Path containing login token. If set, will read on every gather.
# token_file = "/home/dcos/.dcos/token"
## In all filter options, if both include and exclude are empty, all items
## will be collected. Arrays may contain glob patterns.
##
## Node IDs to collect metrics from. If a node is excluded, no metrics will
## be collected for its containers or apps.
# node_include = []
# node_exclude = []
## Container IDs to collect container metrics from.
# container_include = []
# container_exclude = []
## Container IDs to collect app metrics from.
# app_include = []
# app_exclude = []
## Maximum concurrent connections to the cluster.
# max_connections = 10
## Maximum time to receive a response from the cluster.
# response_timeout = "20s"
## Optional SSL Config
# ssl_ca = "/etc/telegraf/ca.pem"
# ssl_cert = "/etc/telegraf/cert.pem"
# ssl_key = "/etc/telegraf/key.pem"
## If false, skip chain & host verification
# insecure_skip_verify = true
## Recommended filtering to reduce series cardinality.
# [inputs.dcos.tagdrop]
# path = ["/var/lib/mesos/slave/slaves/*"]
`
func (d *DCOS) SampleConfig() string {
return sampleConfig
}
func (d *DCOS) Gather(acc telegraf.Accumulator) error {
err := d.init()
if err != nil {
return err
}
ctx := context.Background()
token, err := d.creds.Token(ctx, d.client)
if err != nil {
return err
}
d.client.SetToken(token)
summary, err := d.client.GetSummary(ctx)
if err != nil {
return err
}
var wg sync.WaitGroup
for _, node := range summary.Slaves {
wg.Add(1)
go func(node string) {
defer wg.Done()
d.GatherNode(ctx, acc, summary.Cluster, node)
}(node.ID)
}
wg.Wait()
return nil
}
func (d *DCOS) GatherNode(ctx context.Context, acc telegraf.Accumulator, cluster, node string) {
if !d.nodeFilter.Match(node) {
return
}
var wg sync.WaitGroup
wg.Add(1)
go func() {
defer wg.Done()
m, err := d.client.GetNodeMetrics(ctx, node)
if err != nil {
acc.AddError(err)
return
}
d.addNodeMetrics(acc, cluster, m)
}()
d.GatherContainers(ctx, acc, cluster, node)
wg.Wait()
}
func (d *DCOS) GatherContainers(ctx context.Context, acc telegraf.Accumulator, cluster, node string) {
containers, err := d.client.GetContainers(ctx, node)
if err != nil {
acc.AddError(err)
return
}
var wg sync.WaitGroup
for _, container := range containers {
if d.containerFilter.Match(container.ID) {
wg.Add(1)
go func(container string) {
defer wg.Done()
m, err := d.client.GetContainerMetrics(ctx, node, container)
if err != nil {
if err, ok := err.(APIError); ok && err.StatusCode == 404 {
return
}
acc.AddError(err)
return
}
d.addContainerMetrics(acc, cluster, m)
}(container.ID)
}
if d.appFilter.Match(container.ID) {
wg.Add(1)
go func(container string) {
defer wg.Done()
m, err := d.client.GetAppMetrics(ctx, node, container)
if err != nil {
if err, ok := err.(APIError); ok && err.StatusCode == 404 {
return
}
acc.AddError(err)
return
}
d.addAppMetrics(acc, cluster, m)
}(container.ID)
}
}
wg.Wait()
}
type point struct {
tags map[string]string
labels map[string]string
fields map[string]interface{}
}
func (d *DCOS) createPoints(acc telegraf.Accumulator, m *Metrics) []*point {
points := make(map[string]*point)
for _, dp := range m.Datapoints {
fieldKey := strings.Replace(dp.Name, ".", "_", -1)
tags := dp.Tags
if tags == nil {
tags = make(map[string]string)
}
if dp.Unit == "bytes" && !strings.HasSuffix(fieldKey, "_bytes") {
fieldKey = fieldKey + "_bytes"
}
if strings.HasPrefix(fieldKey, "dcos_metrics_module_") {
fieldKey = strings.TrimPrefix(fieldKey, "dcos_metrics_module_")
}
tagset := make([]string, 0, len(tags))
for k, v := range tags {
tagset = append(tagset, k+"="+v)
}
sort.Strings(tagset)
seriesParts := make([]string, 0, len(tagset))
seriesParts = append(seriesParts, tagset...)
seriesKey := strings.Join(seriesParts, ",")
p, ok := points[seriesKey]
if !ok {
p = &point{}
p.tags = tags
p.labels = make(map[string]string)
p.fields = make(map[string]interface{})
points[seriesKey] = p
}
if dp.Unit == "bytes" {
p.fields[fieldKey] = int64(dp.Value)
} else {
p.fields[fieldKey] = dp.Value
}
}
results := make([]*point, 0, len(points))
for _, p := range points {
for k, v := range m.Dimensions {
switch v := v.(type) {
case string:
p.tags[k] = v
case map[string]string:
if k == "labels" {
for k, v := range v {
p.labels[k] = v
}
}
}
}
results = append(results, p)
}
return results
}
func (d *DCOS) addMetrics(acc telegraf.Accumulator, cluster, mname string, m *Metrics, tagDimensions []string) {
tm := time.Now()
points := d.createPoints(acc, m)
for _, p := range points {
tags := make(map[string]string)
tags["cluster"] = cluster
for _, tagkey := range tagDimensions {
v, ok := p.tags[tagkey]
if ok {
tags[tagkey] = v
}
}
for k, v := range p.labels {
tags[k] = v
}
acc.AddFields(mname, p.fields, tags, tm)
}
}
func (d *DCOS) addNodeMetrics(acc telegraf.Accumulator, cluster string, m *Metrics) {
d.addMetrics(acc, cluster, "dcos_node", m, nodeDimensions)
}
func (d *DCOS) addContainerMetrics(acc telegraf.Accumulator, cluster string, m *Metrics) {
d.addMetrics(acc, cluster, "dcos_container", m, containerDimensions)
}
func (d *DCOS) addAppMetrics(acc telegraf.Accumulator, cluster string, m *Metrics) {
d.addMetrics(acc, cluster, "dcos_app", m, appDimensions)
}
func (d *DCOS) init() error {
if !d.initialized {
err := d.createFilters()
if err != nil {
return err
}
if d.client == nil {
client, err := d.createClient()
if err != nil {
return err
}
d.client = client
}
if d.creds == nil {
creds, err := d.createCredentials()
if err != nil {
return err
}
d.creds = creds
}
d.initialized = true
}
return nil
}
func (d *DCOS) createClient() (Client, error) {
tlsCfg, err := internal.GetTLSConfig(
d.SSLCert, d.SSLKey, d.SSLCA, d.InsecureSkipVerify)
if err != nil {
return nil, err
}
url, err := url.Parse(d.ClusterURL)
if err != nil {
return nil, err
}
client := NewClusterClient(
url,
d.ResponseTimeout.Duration,
d.MaxConnections,
tlsCfg,
)
return client, nil
}
func (d *DCOS) createCredentials() (Credentials, error) {
if d.ServiceAccountID != "" && d.ServiceAccountPrivateKey != "" {
bs, err := ioutil.ReadFile(d.ServiceAccountPrivateKey)
if err != nil {
return nil, err
}
privateKey, err := jwt.ParseRSAPrivateKeyFromPEM(bs)
if err != nil {
return nil, err
}
creds := &ServiceAccount{
AccountID: d.ServiceAccountID,
PrivateKey: privateKey,
}
return creds, nil
} else if d.TokenFile != "" {
creds := &TokenCreds{
Path: d.TokenFile,
}
return creds, nil
} else {
creds := &NullCreds{}
return creds, nil
}
}
func (d *DCOS) createFilters() error {
var err error
d.nodeFilter, err = filter.NewIncludeExcludeFilter(
d.NodeInclude, d.NodeExclude)
if err != nil {
return err
}
d.containerFilter, err = filter.NewIncludeExcludeFilter(
d.ContainerInclude, d.ContainerExclude)
if err != nil {
return err
}
d.appFilter, err = filter.NewIncludeExcludeFilter(
d.AppInclude, d.AppExclude)
if err != nil {
return err
}
return nil
}
func init() {
inputs.Add("dcos", func() telegraf.Input {
return &DCOS{
MaxConnections: defaultMaxConnections,
ResponseTimeout: internal.Duration{
Duration: defaultResponseTimeout,
},
}
})
}
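
To make the field-key normalization in `createPoints` concrete, here is a small self-contained sketch (the helper name is hypothetical) of its three renaming rules: dots become underscores, byte-valued datapoints gain a `_bytes` suffix, and the `dcos_metrics_module_` prefix is trimmed. The expected outputs match the test cases in the plugin tests below.

```
package main

import (
	"fmt"
	"strings"
)

// normalizeFieldKey mirrors the renaming rules applied in createPoints.
func normalizeFieldKey(name, unit string) string {
	key := strings.Replace(name, ".", "_", -1)
	if unit == "bytes" && !strings.HasSuffix(key, "_bytes") {
		key += "_bytes"
	}
	return strings.TrimPrefix(key, "dcos_metrics_module_")
}

func main() {
	fmt.Println(normalizeFieldKey("network.in", "bytes"))
	// Output: network_in_bytes
	fmt.Println(normalizeFieldKey("dcos.metrics.module.container_throttled_bytes_per_sec", ""))
	// Output: container_throttled_bytes_per_sec
}
```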

View File

@@ -1,441 +0,0 @@
package dcos
import (
"context"
"fmt"
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/require"
)
type mockClient struct {
SetTokenF func(token string)
LoginF func(ctx context.Context, sa *ServiceAccount) (*AuthToken, error)
GetSummaryF func(ctx context.Context) (*Summary, error)
GetContainersF func(ctx context.Context, node string) ([]Container, error)
GetNodeMetricsF func(ctx context.Context, node string) (*Metrics, error)
GetContainerMetricsF func(ctx context.Context, node, container string) (*Metrics, error)
GetAppMetricsF func(ctx context.Context, node, container string) (*Metrics, error)
}
func (c *mockClient) SetToken(token string) {
c.SetTokenF(token)
}
func (c *mockClient) Login(ctx context.Context, sa *ServiceAccount) (*AuthToken, error) {
return c.LoginF(ctx, sa)
}
func (c *mockClient) GetSummary(ctx context.Context) (*Summary, error) {
return c.GetSummaryF(ctx)
}
func (c *mockClient) GetContainers(ctx context.Context, node string) ([]Container, error) {
return c.GetContainersF(ctx, node)
}
func (c *mockClient) GetNodeMetrics(ctx context.Context, node string) (*Metrics, error) {
return c.GetNodeMetricsF(ctx, node)
}
func (c *mockClient) GetContainerMetrics(ctx context.Context, node, container string) (*Metrics, error) {
return c.GetContainerMetricsF(ctx, node, container)
}
func (c *mockClient) GetAppMetrics(ctx context.Context, node, container string) (*Metrics, error) {
return c.GetAppMetricsF(ctx, node, container)
}
func TestAddNodeMetrics(t *testing.T) {
var tests = []struct {
name string
metrics *Metrics
check func(*testutil.Accumulator) []bool
}{
{
name: "basic datapoint conversion",
metrics: &Metrics{
Datapoints: []DataPoint{
{
Name: "process.count",
Unit: "count",
Value: 42.0,
},
},
},
check: func(acc *testutil.Accumulator) []bool {
return []bool{acc.HasPoint(
"dcos_node",
map[string]string{
"cluster": "a",
},
"process_count", 42.0,
)}
},
},
{
name: "path added as tag",
metrics: &Metrics{
Datapoints: []DataPoint{
{
Name: "filesystem.inode.free",
Tags: map[string]string{
"path": "/var/lib",
},
Unit: "count",
Value: 42.0,
},
},
},
check: func(acc *testutil.Accumulator) []bool {
return []bool{acc.HasPoint(
"dcos_node",
map[string]string{
"cluster": "a",
"path": "/var/lib",
},
"filesystem_inode_free", 42.0,
)}
},
},
{
name: "interface added as tag",
metrics: &Metrics{
Datapoints: []DataPoint{
{
Name: "network.out.dropped",
Tags: map[string]string{
"interface": "eth0",
},
Unit: "count",
Value: 42.0,
},
},
},
check: func(acc *testutil.Accumulator) []bool {
return []bool{acc.HasPoint(
"dcos_node",
map[string]string{
"cluster": "a",
"interface": "eth0",
},
"network_out_dropped", 42.0,
)}
},
},
{
name: "bytes unit appended to fieldkey",
metrics: &Metrics{
Datapoints: []DataPoint{
{
Name: "network.in",
Tags: map[string]string{
"interface": "eth0",
},
Unit: "bytes",
Value: 42.0,
},
},
},
check: func(acc *testutil.Accumulator) []bool {
return []bool{acc.HasPoint(
"dcos_node",
map[string]string{
"cluster": "a",
"interface": "eth0",
},
"network_in_bytes", int64(42),
)}
},
},
{
name: "dimensions added as tags",
metrics: &Metrics{
Datapoints: []DataPoint{
{
Name: "process.count",
Tags: map[string]string{},
Unit: "count",
Value: 42.0,
},
{
Name: "memory.total",
Tags: map[string]string{},
Unit: "bytes",
Value: 42,
},
},
Dimensions: map[string]interface{}{
"cluster_id": "c0760bbd-9e9d-434b-bd4a-39c7cdef8a63",
"hostname": "192.168.122.18",
"mesos_id": "2dfbbd28-29d2-411d-92c4-e2f84c38688e-S1",
},
},
check: func(acc *testutil.Accumulator) []bool {
return []bool{
acc.HasPoint(
"dcos_node",
map[string]string{
"cluster": "a",
"hostname": "192.168.122.18",
},
"process_count", 42.0),
acc.HasPoint(
"dcos_node",
map[string]string{
"cluster": "a",
"hostname": "192.168.122.18",
},
"memory_total_bytes", int64(42)),
}
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
var acc testutil.Accumulator
dcos := &DCOS{}
dcos.addNodeMetrics(&acc, "a", tt.metrics)
for i, ok := range tt.check(&acc) {
require.True(t, ok, fmt.Sprintf("Index was not true: %d", i))
}
})
}
}
func TestAddContainerMetrics(t *testing.T) {
var tests = []struct {
name string
metrics *Metrics
check func(*testutil.Accumulator) []bool
}{
{
name: "container",
metrics: &Metrics{
Datapoints: []DataPoint{
{
Name: "net.rx.errors",
Tags: map[string]string{
"container_id": "f25c457b-fceb-44f0-8f5b-38be34cbb6fb",
"executor_id": "telegraf.192fb45f-cc0c-11e7-af48-ea183c0b541a",
"executor_name": "Command Executor (Task: telegraf.192fb45f-cc0c-11e7-af48-ea183c0b541a) (Command: NO EXECUTABLE)",
"framework_id": "ab2f3a8b-06db-4e8c-95b6-fb1940874a30-0001",
"source": "telegraf.192fb45f-cc0c-11e7-af48-ea183c0b541a",
},
Unit: "count",
Value: 42.0,
},
},
Dimensions: map[string]interface{}{
"cluster_id": "c0760bbd-9e9d-434b-bd4a-39c7cdef8a63",
"container_id": "f25c457b-fceb-44f0-8f5b-38be34cbb6fb",
"executor_id": "telegraf.192fb45f-cc0c-11e7-af48-ea183c0b541a",
"framework_id": "ab2f3a8b-06db-4e8c-95b6-fb1940874a30-0001",
"framework_name": "marathon",
"framework_principal": "dcos_marathon",
"framework_role": "slave_public",
"hostname": "192.168.122.18",
"labels": map[string]string{
"DCOS_SPACE": "/telegraf",
},
"mesos_id": "2dfbbd28-29d2-411d-92c4-e2f84c38688e-S1",
"task_id": "telegraf.192fb45f-cc0c-11e7-af48-ea183c0b541a",
"task_name": "telegraf",
},
},
check: func(acc *testutil.Accumulator) []bool {
return []bool{
acc.HasPoint(
"dcos_container",
map[string]string{
"cluster": "a",
"container_id": "f25c457b-fceb-44f0-8f5b-38be34cbb6fb",
"hostname": "192.168.122.18",
"task_name": "telegraf",
"DCOS_SPACE": "/telegraf",
},
"net_rx_errors",
42.0,
),
}
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
var acc testutil.Accumulator
dcos := &DCOS{}
dcos.addContainerMetrics(&acc, "a", tt.metrics)
for i, ok := range tt.check(&acc) {
require.True(t, ok, fmt.Sprintf("Index was not true: %d", i))
}
})
}
}
func TestAddAppMetrics(t *testing.T) {
var tests = []struct {
name string
metrics *Metrics
check func(*testutil.Accumulator) []bool
}{
{
name: "tags are optional",
metrics: &Metrics{
Datapoints: []DataPoint{
{
Name: "dcos.metrics.module.container_throttled_bytes_per_sec",
Unit: "",
Value: 42.0,
},
},
},
check: func(acc *testutil.Accumulator) []bool {
return []bool{
acc.HasPoint(
"dcos_app",
map[string]string{
"cluster": "a",
},
"container_throttled_bytes_per_sec", 42.0,
),
}
},
},
{
name: "dimensions are tagged",
metrics: &Metrics{
Datapoints: []DataPoint{
{
Name: "dcos.metrics.module.container_throttled_bytes_per_sec",
Unit: "",
Value: 42.0,
},
},
Dimensions: map[string]interface{}{
"cluster_id": "c0760bbd-9e9d-434b-bd4a-39c7cdef8a63",
"container_id": "02d31175-1c01-4459-8520-ef8b1339bc52",
"hostname": "192.168.122.18",
"mesos_id": "2dfbbd28-29d2-411d-92c4-e2f84c38688e-S1",
},
},
check: func(acc *testutil.Accumulator) []bool {
return []bool{
acc.HasPoint(
"dcos_app",
map[string]string{
"cluster": "a",
"container_id": "02d31175-1c01-4459-8520-ef8b1339bc52",
"hostname": "192.168.122.18",
},
"container_throttled_bytes_per_sec", 42.0,
),
}
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
var acc testutil.Accumulator
dcos := &DCOS{}
dcos.addAppMetrics(&acc, "a", tt.metrics)
for i, ok := range tt.check(&acc) {
require.True(t, ok, fmt.Sprintf("Index was not true: %d", i))
}
})
}
}
func TestGatherFilterNode(t *testing.T) {
var tests = []struct {
name string
nodeInclude []string
nodeExclude []string
client Client
check func(*testutil.Accumulator) []bool
}{
{
name: "cluster without nodes has no metrics",
client: &mockClient{
SetTokenF: func(token string) {},
GetSummaryF: func(ctx context.Context) (*Summary, error) {
return &Summary{
Cluster: "a",
Slaves: []Slave{},
}, nil
},
},
check: func(acc *testutil.Accumulator) []bool {
return []bool{
acc.NMetrics() == 0,
}
},
},
{
name: "node include",
nodeInclude: []string{"x"},
client: &mockClient{
SetTokenF: func(token string) {},
GetSummaryF: func(ctx context.Context) (*Summary, error) {
return &Summary{
Cluster: "a",
Slaves: []Slave{
Slave{ID: "x"},
Slave{ID: "y"},
},
}, nil
},
GetContainersF: func(ctx context.Context, node string) ([]Container, error) {
return []Container{}, nil
},
GetNodeMetricsF: func(ctx context.Context, node string) (*Metrics, error) {
return &Metrics{
Datapoints: []DataPoint{
{
Name: "value",
Value: 42.0,
},
},
Dimensions: map[string]interface{}{
"hostname": "x",
},
}, nil
},
},
check: func(acc *testutil.Accumulator) []bool {
return []bool{
acc.HasPoint(
"dcos_node",
map[string]string{
"cluster": "a",
"hostname": "x",
},
"value", 42.0,
),
acc.NMetrics() == 1,
}
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
var acc testutil.Accumulator
dcos := &DCOS{
NodeInclude: tt.nodeInclude,
NodeExclude: tt.nodeExclude,
client: tt.client,
}
err := dcos.Gather(&acc)
require.NoError(t, err)
for i, ok := range tt.check(&acc) {
require.True(t, ok, fmt.Sprintf("Index was not true: %d", i))
}
})
}
}

View File

@@ -16,21 +16,21 @@ const metricName = "dmcache"
type cacheStatus struct {
device string
length int64
length int
target string
metadataBlocksize int64
metadataUsed int64
metadataTotal int64
cacheBlocksize int64
cacheUsed int64
cacheTotal int64
readHits int64
readMisses int64
writeHits int64
writeMisses int64
demotions int64
promotions int64
dirty int64
metadataBlocksize int
metadataUsed int
metadataTotal int
cacheBlocksize int
cacheUsed int
cacheTotal int
readHits int
readMisses int
writeHits int
writeMisses int
demotions int
promotions int
dirty int
}
func (c *DMCache) Gather(acc telegraf.Accumulator) error {
@@ -69,12 +69,12 @@ func parseDMSetupStatus(line string) (cacheStatus, error) {
}
status.device = strings.TrimRight(values[0], ":")
status.length, err = strconv.ParseInt(values[2], 10, 64)
status.length, err = strconv.Atoi(values[2])
if err != nil {
return cacheStatus{}, err
}
status.target = values[3]
status.metadataBlocksize, err = strconv.ParseInt(values[4], 10, 64)
status.metadataBlocksize, err = strconv.Atoi(values[4])
if err != nil {
return cacheStatus{}, err
}
@@ -82,15 +82,15 @@ func parseDMSetupStatus(line string) (cacheStatus, error) {
if len(metadata) != 2 {
return cacheStatus{}, parseError
}
status.metadataUsed, err = strconv.ParseInt(metadata[0], 10, 64)
status.metadataUsed, err = strconv.Atoi(metadata[0])
if err != nil {
return cacheStatus{}, err
}
status.metadataTotal, err = strconv.ParseInt(metadata[1], 10, 64)
status.metadataTotal, err = strconv.Atoi(metadata[1])
if err != nil {
return cacheStatus{}, err
}
status.cacheBlocksize, err = strconv.ParseInt(values[6], 10, 64)
status.cacheBlocksize, err = strconv.Atoi(values[6])
if err != nil {
return cacheStatus{}, err
}
@@ -98,39 +98,39 @@ func parseDMSetupStatus(line string) (cacheStatus, error) {
if len(cache) != 2 {
return cacheStatus{}, parseError
}
status.cacheUsed, err = strconv.ParseInt(cache[0], 10, 64)
status.cacheUsed, err = strconv.Atoi(cache[0])
if err != nil {
return cacheStatus{}, err
}
status.cacheTotal, err = strconv.ParseInt(cache[1], 10, 64)
status.cacheTotal, err = strconv.Atoi(cache[1])
if err != nil {
return cacheStatus{}, err
}
status.readHits, err = strconv.ParseInt(values[8], 10, 64)
status.readHits, err = strconv.Atoi(values[8])
if err != nil {
return cacheStatus{}, err
}
status.readMisses, err = strconv.ParseInt(values[9], 10, 64)
status.readMisses, err = strconv.Atoi(values[9])
if err != nil {
return cacheStatus{}, err
}
status.writeHits, err = strconv.ParseInt(values[10], 10, 64)
status.writeHits, err = strconv.Atoi(values[10])
if err != nil {
return cacheStatus{}, err
}
status.writeMisses, err = strconv.ParseInt(values[11], 10, 64)
status.writeMisses, err = strconv.Atoi(values[11])
if err != nil {
return cacheStatus{}, err
}
status.demotions, err = strconv.ParseInt(values[12], 10, 64)
status.demotions, err = strconv.Atoi(values[12])
if err != nil {
return cacheStatus{}, err
}
status.promotions, err = strconv.ParseInt(values[13], 10, 64)
status.promotions, err = strconv.Atoi(values[13])
if err != nil {
return cacheStatus{}, err
}
status.dirty, err = strconv.ParseInt(values[14], 10, 64)
status.dirty, err = strconv.Atoi(values[14])
if err != nil {
return cacheStatus{}, err
}
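
For orientation, a hedged reconstruction (assembled from the `cs-1` test fixture below) of the `dmsetup status` line that the field indices above assume:

```
// Reconstructed sample input for parseDMSetupStatus; fields are
// whitespace-separated: name, start, length, target, metadata block size,
// used/total metadata blocks, cache block size, used/total cache blocks,
// read hits, read misses, write hits, write misses, demotions,
// promotions, dirty.
const exampleStatus = "cs-1: 0 4883791872 cache 8 1018/1501122 512 7/464962 139 352643 15 46 0 7 0"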

View File

@@ -1,5 +1,3 @@
// +build linux
package dmcache
import (
@@ -35,20 +33,20 @@ func TestPerDeviceGoodOutput(t *testing.T) {
"device": "cs-1",
}
fields1 := map[string]interface{}{
"length": int64(4883791872),
"metadata_blocksize": int64(8),
"metadata_used": int64(1018),
"metadata_total": int64(1501122),
"cache_blocksize": int64(512),
"cache_used": int64(7),
"cache_total": int64(464962),
"read_hits": int64(139),
"read_misses": int64(352643),
"write_hits": int64(15),
"write_misses": int64(46),
"demotions": int64(0),
"promotions": int64(7),
"dirty": int64(0),
"length": 4883791872,
"metadata_blocksize": 8,
"metadata_used": 1018,
"metadata_total": 1501122,
"cache_blocksize": 512,
"cache_used": 7,
"cache_total": 464962,
"read_hits": 139,
"read_misses": 352643,
"write_hits": 15,
"write_misses": 46,
"demotions": 0,
"promotions": 7,
"dirty": 0,
}
acc.AssertContainsTaggedFields(t, measurement, fields1, tags1)
@@ -56,20 +54,20 @@ func TestPerDeviceGoodOutput(t *testing.T) {
"device": "cs-2",
}
fields2 := map[string]interface{}{
"length": int64(4294967296),
"metadata_blocksize": int64(8),
"metadata_used": int64(72352),
"metadata_total": int64(1310720),
"cache_blocksize": int64(128),
"cache_used": int64(26),
"cache_total": int64(24327168),
"read_hits": int64(2409),
"read_misses": int64(286),
"write_hits": int64(265),
"write_misses": int64(524682),
"demotions": int64(0),
"promotions": int64(0),
"dirty": int64(0),
"length": 4294967296,
"metadata_blocksize": 8,
"metadata_used": 72352,
"metadata_total": 1310720,
"cache_blocksize": 128,
"cache_used": 26,
"cache_total": 24327168,
"read_hits": 2409,
"read_misses": 286,
"write_hits": 265,
"write_misses": 524682,
"demotions": 0,
"promotions": 0,
"dirty": 0,
}
acc.AssertContainsTaggedFields(t, measurement, fields2, tags2)
@@ -78,20 +76,20 @@ func TestPerDeviceGoodOutput(t *testing.T) {
}
fields3 := map[string]interface{}{
"length": int64(9178759168),
"metadata_blocksize": int64(16),
"metadata_used": int64(73370),
"metadata_total": int64(2811842),
"cache_blocksize": int64(640),
"cache_used": int64(33),
"cache_total": int64(24792130),
"read_hits": int64(2548),
"read_misses": int64(352929),
"write_hits": int64(280),
"write_misses": int64(524728),
"demotions": int64(0),
"promotions": int64(7),
"dirty": int64(0),
"length": 9178759168,
"metadata_blocksize": 16,
"metadata_used": 73370,
"metadata_total": 2811842,
"cache_blocksize": 640,
"cache_used": 33,
"cache_total": 24792130,
"read_hits": 2548,
"read_misses": 352929,
"write_hits": 280,
"write_misses": 524728,
"demotions": 0,
"promotions": 7,
"dirty": 0,
}
acc.AssertContainsTaggedFields(t, measurement, fields3, tags3)
}
@@ -113,20 +111,20 @@ func TestNotPerDeviceGoodOutput(t *testing.T) {
}
fields := map[string]interface{}{
"length": int64(9178759168),
"metadata_blocksize": int64(16),
"metadata_used": int64(73370),
"metadata_total": int64(2811842),
"cache_blocksize": int64(640),
"cache_used": int64(33),
"cache_total": int64(24792130),
"read_hits": int64(2548),
"read_misses": int64(352929),
"write_hits": int64(280),
"write_misses": int64(524728),
"demotions": int64(0),
"promotions": int64(7),
"dirty": int64(0),
"length": 9178759168,
"metadata_blocksize": 16,
"metadata_used": 73370,
"metadata_total": 2811842,
"cache_blocksize": 640,
"cache_used": 33,
"cache_total": 24792130,
"read_hits": 2548,
"read_misses": 352929,
"write_hits": 280,
"write_misses": 524728,
"demotions": 0,
"promotions": 7,
"dirty": 0,
}
acc.AssertContainsTaggedFields(t, measurement, fields, tags)
}

View File

@@ -17,7 +17,7 @@ type DnsQuery struct {
// Domains or subdomains to query
Domains []string
// Network protocol name
// Network protocl name
Network string
// Server to query

View File

@@ -17,11 +17,6 @@ to gather stats from the [Engine API](https://docs.docker.com/engine/api/v1.20/)
## To use environment variables (i.e., docker-machine), set endpoint = "ENV"
endpoint = "unix:///var/run/docker.sock"
## Set to true to collect Swarm metrics (desired_replicas, running_replicas)
## Note: configure this in only one of the manager nodes in a Swarm cluster;
## configuring it in multiple Swarm managers results in duplicated metrics.
gather_services = false
## Only collect metrics for these containers. Values will be appended to
## container_name_include.
## Deprecated (1.4.0), use container_name_include
@@ -31,11 +26,6 @@ to gather stats from the [Engine API](https://docs.docker.com/engine/api/v1.20/)
container_name_include = []
container_name_exclude = []
## Container states to include and exclude. Globs accepted.
## When empty, only containers in the "running" state will be captured.
# container_state_include = []
# container_state_exclude = []
## Timeout for docker list, info, and stats commands
timeout = "5s"
@@ -67,15 +57,6 @@ to gather stats from the [Engine API](https://docs.docker.com/engine/api/v1.20/)
When using the `"ENV"` endpoint, the connection is configured from the standard
[Docker CLI environment variables](https://godoc.org/github.com/moby/moby/client#NewEnvClient).
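
As a rough sketch (assuming the moby client API linked above), `endpoint = "ENV"` amounts to building a client from those variables:

```
package main

import (
	"context"
	"fmt"
	"log"

	docker "github.com/docker/docker/client"
)

func main() {
	// NewEnvClient honors DOCKER_HOST, DOCKER_API_VERSION,
	// DOCKER_CERT_PATH and DOCKER_TLS_VERIFY.
	cli, err := docker.NewEnvClient()
	if err != nil {
		log.Fatal(err)
	}
	info, err := cli.Info(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("connected to", info.Name)
}
```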
#### Kubernetes Labels
Kubernetes may add many labels to your containers; if they are not needed, you
may prefer to exclude them:
```
docker_label_exclude = ["annotation.kubernetes*"]
```
### Measurements & Fields:
Every effort was made to preserve the names based on the JSON response from the
@@ -171,9 +152,6 @@ based on the availability of per-cpu stats on your system.
- available
- total
- used
- docker_swarm
- tasks_desired
- tasks_running
### Tags:
@@ -204,13 +182,6 @@ based on the availability of per-cpu stats on your system.
- network
- docker_container_blkio specific:
- device
- docker_container_health specific:
- health_status
- failing_streak
- docker_swarm specific:
- service_id
- service_name
- service_mode
### Example Output:
@@ -262,7 +233,4 @@ io_service_bytes_recursive_sync=77824i,io_service_bytes_recursive_total=80293888
io_service_bytes_recursive_write=368640i,io_serviced_recursive_async=6562i,\
io_serviced_recursive_read=6492i,io_serviced_recursive_sync=37i,\
io_serviced_recursive_total=6599i,io_serviced_recursive_write=107i 1453409536840126713
>docker_swarm,
service_id=xaup2o9krw36j2dy1mjx1arjw,service_mode=replicated,service_name=test,\
tasks_desired=3,tasks_running=3 1508968160000000000
```

View File

@@ -6,7 +6,6 @@ import (
"net/http"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/swarm"
docker "github.com/docker/docker/client"
"github.com/docker/go-connections/sockets"
)
@@ -21,9 +20,6 @@ type Client interface {
ContainerList(ctx context.Context, options types.ContainerListOptions) ([]types.Container, error)
ContainerStats(ctx context.Context, containerID string, stream bool) (types.ContainerStats, error)
ContainerInspect(ctx context.Context, containerID string) (types.ContainerJSON, error)
ServiceList(ctx context.Context, options types.ServiceListOptions) ([]swarm.Service, error)
TaskList(ctx context.Context, options types.TaskListOptions) ([]swarm.Task, error)
NodeList(ctx context.Context, options types.NodeListOptions) ([]swarm.Node, error)
}
func NewEnvClient() (Client, error) {
@@ -69,12 +65,3 @@ func (c *SocketClient) ContainerStats(ctx context.Context, containerID string, s
func (c *SocketClient) ContainerInspect(ctx context.Context, containerID string) (types.ContainerJSON, error) {
return c.client.ContainerInspect(ctx, containerID)
}
func (c *SocketClient) ServiceList(ctx context.Context, options types.ServiceListOptions) ([]swarm.Service, error) {
return c.client.ServiceList(ctx, options)
}
func (c *SocketClient) TaskList(ctx context.Context, options types.TaskListOptions) ([]swarm.Task, error) {
return c.client.TaskList(ctx, options)
}
func (c *SocketClient) NodeList(ctx context.Context, options types.NodeListOptions) ([]swarm.Node, error) {
return c.client.NodeList(ctx, options)
}

View File

@@ -6,7 +6,6 @@ import (
"encoding/json"
"fmt"
"io"
"log"
"net/http"
"regexp"
"strconv"
@@ -15,20 +14,26 @@ import (
"time"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/filters"
"github.com/docker/docker/api/types/swarm"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/filter"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
)
type DockerLabelFilter struct {
labelInclude filter.Filter
labelExclude filter.Filter
}
type DockerContainerFilter struct {
containerInclude filter.Filter
containerExclude filter.Filter
}
// Docker object
type Docker struct {
Endpoint string
ContainerNames []string // deprecated in 1.4; use container_name_include
GatherServices bool `toml:"gather_services"`
ContainerNames []string
Timeout internal.Duration
PerDevice bool `toml:"perdevice"`
@@ -36,12 +41,11 @@ type Docker struct {
TagEnvironment []string `toml:"tag_env"`
LabelInclude []string `toml:"docker_label_include"`
LabelExclude []string `toml:"docker_label_exclude"`
LabelFilter DockerLabelFilter
ContainerInclude []string `toml:"container_name_include"`
ContainerExclude []string `toml:"container_name_exclude"`
ContainerStateInclude []string `toml:"container_state_include"`
ContainerStateExclude []string `toml:"container_state_exclude"`
ContainerFilter DockerContainerFilter
SSLCA string `toml:"ssl_ca"`
SSLCert string `toml:"ssl_cert"`
@@ -51,13 +55,10 @@ type Docker struct {
newEnvClient func() (Client, error)
newClient func(string, *tls.Config) (Client, error)
client Client
httpClient *http.Client
engine_host string
filtersCreated bool
labelFilter filter.Filter
containerFilter filter.Filter
stateFilter filter.Filter
client Client
httpClient *http.Client
engine_host string
filtersCreated bool
}
// KB, MB, GB, TB, PB...human friendly
@@ -72,8 +73,7 @@ const (
)
var (
sizeRegex = regexp.MustCompile(`^(\d+(\.\d+)*) ?([kKmMgGtTpP])?[bB]?$`)
containerStates = []string{"created", "restarting", "running", "removing", "paused", "exited", "dead"}
sizeRegex = regexp.MustCompile(`^(\d+(\.\d+)*) ?([kKmMgGtTpP])?[bB]?$`)
)
var sampleConfig = `
@@ -82,9 +82,6 @@ var sampleConfig = `
## To use environment variables (i.e., docker-machine), set endpoint = "ENV"
endpoint = "unix:///var/run/docker.sock"
## Set to true to collect Swarm metrics (desired_replicas, running_replicas)
gather_services = false
## Only collect metrics for these containers; collect all if empty
container_names = []
@@ -93,11 +90,6 @@ var sampleConfig = `
container_name_include = []
container_name_exclude = []
## Container states to include and exclude. Globs accepted.
## When empty, only containers in the "running" state will be captured.
# container_state_include = []
# container_state_exclude = []
## Timeout for docker list, info, and stats commands
timeout = "5s"
@@ -159,10 +151,6 @@ func (d *Docker) Gather(acc telegraf.Accumulator) error {
if err != nil {
return err
}
err = d.createContainerStateFilters()
if err != nil {
return err
}
d.filtersCreated = true
}
@@ -172,29 +160,8 @@ func (d *Docker) Gather(acc telegraf.Accumulator) error {
acc.AddError(err)
}
if d.GatherServices {
err := d.gatherSwarmInfo(acc)
if err != nil {
acc.AddError(err)
}
}
filterArgs := filters.NewArgs()
for _, state := range containerStates {
if d.stateFilter.Match(state) {
filterArgs.Add("status", state)
}
}
// All container states were excluded
if filterArgs.Len() == 0 {
return nil
}
// List containers
opts := types.ContainerListOptions{
Filters: filterArgs,
}
opts := types.ContainerListOptions{}
ctx, cancel := context.WithTimeout(context.Background(), d.Timeout.Duration)
defer cancel()
containers, err := d.client.ContainerList(ctx, opts)
@@ -220,75 +187,6 @@ func (d *Docker) Gather(acc telegraf.Accumulator) error {
return nil
}
func (d *Docker) gatherSwarmInfo(acc telegraf.Accumulator) error {
ctx, cancel := context.WithTimeout(context.Background(), d.Timeout.Duration)
defer cancel()
services, err := d.client.ServiceList(ctx, types.ServiceListOptions{})
if err != nil {
return err
}
if len(services) > 0 {
tasks, err := d.client.TaskList(ctx, types.TaskListOptions{})
if err != nil {
return err
}
nodes, err := d.client.NodeList(ctx, types.NodeListOptions{})
if err != nil {
return err
}
running := map[string]int{}
tasksNoShutdown := map[string]int{}
activeNodes := make(map[string]struct{})
for _, n := range nodes {
if n.Status.State != swarm.NodeStateDown {
activeNodes[n.ID] = struct{}{}
}
}
for _, task := range tasks {
if task.DesiredState != swarm.TaskStateShutdown {
tasksNoShutdown[task.ServiceID]++
}
if task.Status.State == swarm.TaskStateRunning {
running[task.ServiceID]++
}
}
for _, service := range services {
tags := map[string]string{}
fields := make(map[string]interface{})
now := time.Now()
tags["service_id"] = service.ID
tags["service_name"] = service.Spec.Name
if service.Spec.Mode.Replicated != nil && service.Spec.Mode.Replicated.Replicas != nil {
tags["service_mode"] = "replicated"
fields["tasks_running"] = running[service.ID]
fields["tasks_desired"] = *service.Spec.Mode.Replicated.Replicas
} else if service.Spec.Mode.Global != nil {
tags["service_mode"] = "global"
fields["tasks_running"] = running[service.ID]
fields["tasks_desired"] = tasksNoShutdown[service.ID]
} else {
log.Printf("E! Unknow Replicas Mode")
}
// Add metrics
acc.AddFields("docker_swarm",
fields,
tags,
now)
}
}
return nil
}
func (d *Docker) gatherInfo(acc telegraf.Accumulator) error {
// Init vars
dataFields := make(map[string]interface{})
@@ -393,8 +291,12 @@ func (d *Docker) gatherContainer(
"container_version": imageVersion,
}
if !d.containerFilter.Match(cname) {
return nil
if len(d.ContainerInclude) > 0 || len(d.ContainerExclude) > 0 {
if len(d.ContainerInclude) == 0 || !d.ContainerFilter.containerInclude.Match(cname) {
if len(d.ContainerExclude) == 0 || d.ContainerFilter.containerExclude.Match(cname) {
return nil
}
}
}
ctx, cancel := context.WithTimeout(context.Background(), d.Timeout.Duration)
@@ -415,18 +317,19 @@ func (d *Docker) gatherContainer(
// Add labels to tags
for k, label := range container.Labels {
if d.labelFilter.Match(k) {
tags[k] = label
if len(d.LabelInclude) == 0 || d.LabelFilter.labelInclude.Match(k) {
if len(d.LabelExclude) == 0 || !d.LabelFilter.labelExclude.Match(k) {
tags[k] = label
}
}
}
info, err := d.client.ContainerInspect(ctx, container.ID)
if err != nil {
return fmt.Errorf("Error inspecting docker container: %s", err.Error())
}
// Add whitelisted environment variables to tags
if len(d.TagEnvironment) > 0 {
info, err := d.client.ContainerInspect(ctx, container.ID)
if err != nil {
return fmt.Errorf("Error inspecting docker container: %s", err.Error())
}
for _, envvar := range info.Config.Env {
for _, configvar := range d.TagEnvironment {
dock_env := strings.SplitN(envvar, "=", 2)
@@ -438,14 +341,6 @@ func (d *Docker) gatherContainer(
}
}
if info.State.Health != nil {
healthfields := map[string]interface{}{
"health_status": info.State.Health.Status,
"failing_streak": info.ContainerJSONBase.State.Health.FailingStreak,
}
acc.AddFields("docker_container_health", healthfields, tags, time.Now())
}
gatherContainerStats(v, acc, tags, container.ID, d.PerDevice, d.Total, daemonOSType)
return nil
@@ -460,11 +355,7 @@ func gatherContainerStats(
total bool,
daemonOSType string,
) {
tm := stat.Read
if tm.Before(time.Unix(0, 0)) {
tm = time.Now()
}
now := stat.Read
memfields := map[string]interface{}{
"container_id": id,
@@ -524,7 +415,7 @@ func gatherContainerStats(
memfields["private_working_set"] = stat.MemoryStats.PrivateWorkingSet
}
acc.AddFields("docker_container_mem", memfields, tags, tm)
acc.AddFields("docker_container_mem", memfields, tags, now)
cpufields := map[string]interface{}{
"usage_total": stat.CPUStats.CPUUsage.TotalUsage,
@@ -549,7 +440,7 @@ func gatherContainerStats(
cputags := copyTags(tags)
cputags["cpu"] = "cpu-total"
acc.AddFields("docker_container_cpu", cpufields, cputags, tm)
acc.AddFields("docker_container_cpu", cpufields, cputags, now)
// If we have OnlineCPUs field, then use it to restrict stats gathering to only Online CPUs
// (https://github.com/moby/moby/commit/115f91d7575d6de6c7781a96a082f144fd17e400)
@@ -567,7 +458,7 @@ func gatherContainerStats(
"usage_total": percpu,
"container_id": id,
}
acc.AddFields("docker_container_cpu", fields, percputags, tm)
acc.AddFields("docker_container_cpu", fields, percputags, now)
}
totalNetworkStatMap := make(map[string]interface{})
@@ -587,7 +478,7 @@ func gatherContainerStats(
if perDevice {
nettags := copyTags(tags)
nettags["network"] = network
acc.AddFields("docker_container_net", netfields, nettags, tm)
acc.AddFields("docker_container_net", netfields, nettags, now)
}
if total {
for field, value := range netfields {
@@ -620,17 +511,17 @@ func gatherContainerStats(
nettags := copyTags(tags)
nettags["network"] = "total"
totalNetworkStatMap["container_id"] = id
acc.AddFields("docker_container_net", totalNetworkStatMap, nettags, tm)
acc.AddFields("docker_container_net", totalNetworkStatMap, nettags, now)
}
gatherBlockIOMetrics(stat, acc, tags, tm, id, perDevice, total)
gatherBlockIOMetrics(stat, acc, tags, now, id, perDevice, total)
}
func gatherBlockIOMetrics(
stat *types.StatsJSON,
acc telegraf.Accumulator,
tags map[string]string,
tm time.Time,
now time.Time,
id string,
perDevice bool,
total bool,
@@ -701,7 +592,7 @@ func gatherBlockIOMetrics(
if perDevice {
iotags := copyTags(tags)
iotags["device"] = device
acc.AddFields("docker_container_blkio", fields, iotags, tm)
acc.AddFields("docker_container_blkio", fields, iotags, now)
}
if total {
for field, value := range fields {
@@ -732,7 +623,7 @@ func gatherBlockIOMetrics(
totalStatMap["container_id"] = id
iotags := copyTags(tags)
iotags["device"] = "total"
acc.AddFields("docker_container_blkio", totalStatMap, iotags, tm)
acc.AddFields("docker_container_blkio", totalStatMap, iotags, now)
}
}
@@ -775,37 +666,46 @@ func parseSize(sizeStr string) (int64, error) {
}
func (d *Docker) createContainerFilters() error {
// Backwards compatibility for deprecated `container_names` parameter.
if len(d.ContainerNames) > 0 {
d.ContainerInclude = append(d.ContainerInclude, d.ContainerNames...)
}
filter, err := filter.NewIncludeExcludeFilter(d.ContainerInclude, d.ContainerExclude)
if err != nil {
return err
if len(d.ContainerInclude) != 0 {
var err error
d.ContainerFilter.containerInclude, err = filter.Compile(d.ContainerInclude)
if err != nil {
return err
}
}
d.containerFilter = filter
if len(d.ContainerExclude) != 0 {
var err error
d.ContainerFilter.containerExclude, err = filter.Compile(d.ContainerExclude)
if err != nil {
return err
}
}
return nil
}
func (d *Docker) createLabelFilters() error {
filter, err := filter.NewIncludeExcludeFilter(d.LabelInclude, d.LabelExclude)
if err != nil {
return err
if len(d.LabelInclude) != 0 {
var err error
d.LabelFilter.labelInclude, err = filter.Compile(d.LabelInclude)
if err != nil {
return err
}
}
d.labelFilter = filter
return nil
}
func (d *Docker) createContainerStateFilters() error {
if len(d.ContainerStateInclude) == 0 && len(d.ContainerStateExclude) == 0 {
d.ContainerStateInclude = []string{"running"}
if len(d.LabelExclude) != 0 {
var err error
d.LabelFilter.labelExclude, err = filter.Compile(d.LabelExclude)
if err != nil {
return err
}
}
filter, err := filter.NewIncludeExcludeFilter(d.ContainerStateInclude, d.ContainerStateExclude)
if err != nil {
return err
}
d.stateFilter = filter
return nil
}

View File

@@ -3,13 +3,11 @@ package docker
import (
"context"
"crypto/tls"
"sort"
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/swarm"
"github.com/stretchr/testify/require"
)
@@ -18,9 +16,6 @@ type MockClient struct {
ContainerListF func(ctx context.Context, options types.ContainerListOptions) ([]types.Container, error)
ContainerStatsF func(ctx context.Context, containerID string, stream bool) (types.ContainerStats, error)
ContainerInspectF func(ctx context.Context, containerID string) (types.ContainerJSON, error)
ServiceListF func(ctx context.Context, options types.ServiceListOptions) ([]swarm.Service, error)
TaskListF func(ctx context.Context, options types.TaskListOptions) ([]swarm.Task, error)
NodeListF func(ctx context.Context, options types.NodeListOptions) ([]swarm.Node, error)
}
func (c *MockClient) Info(ctx context.Context) (types.Info, error) {
@@ -49,53 +44,21 @@ func (c *MockClient) ContainerInspect(
return c.ContainerInspectF(ctx, containerID)
}
func (c *MockClient) ServiceList(
ctx context.Context,
options types.ServiceListOptions,
) ([]swarm.Service, error) {
return c.ServiceListF(ctx, options)
}
func (c *MockClient) TaskList(
ctx context.Context,
options types.TaskListOptions,
) ([]swarm.Task, error) {
return c.TaskListF(ctx, options)
}
func (c *MockClient) NodeList(
ctx context.Context,
options types.NodeListOptions,
) ([]swarm.Node, error) {
return c.NodeListF(ctx, options)
}
var baseClient = MockClient{
InfoF: func(context.Context) (types.Info, error) {
return info, nil
},
ContainerListF: func(context.Context, types.ContainerListOptions) ([]types.Container, error) {
return containerList, nil
},
ContainerStatsF: func(context.Context, string, bool) (types.ContainerStats, error) {
return containerStats(), nil
},
ContainerInspectF: func(context.Context, string) (types.ContainerJSON, error) {
return containerInspect, nil
},
ServiceListF: func(context.Context, types.ServiceListOptions) ([]swarm.Service, error) {
return ServiceList, nil
},
TaskListF: func(context.Context, types.TaskListOptions) ([]swarm.Task, error) {
return TaskList, nil
},
NodeListF: func(context.Context, types.NodeListOptions) ([]swarm.Node, error) {
return NodeList, nil
},
}
func newClient(host string, tlsConfig *tls.Config) (Client, error) {
return &baseClient, nil
return &MockClient{
InfoF: func(context.Context) (types.Info, error) {
return info, nil
},
ContainerListF: func(context.Context, types.ContainerListOptions) ([]types.Container, error) {
return containerList, nil
},
ContainerStatsF: func(context.Context, string, bool) (types.ContainerStats, error) {
return containerStats(), nil
},
ContainerInspectF: func(context.Context, string) (types.ContainerJSON, error) {
return containerInspect, nil
},
}, nil
}
func TestDockerGatherContainerStats(t *testing.T) {
@@ -264,15 +227,6 @@ func TestDocker_WindowsMemoryContainerStats(t *testing.T) {
ContainerInspectF: func(ctx context.Context, containerID string) (types.ContainerJSON, error) {
return containerInspect, nil
},
ServiceListF: func(context.Context, types.ServiceListOptions) ([]swarm.Service, error) {
return ServiceList, nil
},
TaskListF: func(context.Context, types.TaskListOptions) ([]swarm.Task, error) {
return TaskList, nil
},
NodeListF: func(context.Context, types.NodeListOptions) ([]swarm.Node, error) {
return NodeList, nil
},
}, nil
},
}
@@ -280,291 +234,82 @@ func TestDocker_WindowsMemoryContainerStats(t *testing.T) {
require.NoError(t, err)
}
func TestContainerLabels(t *testing.T) {
var tests = []struct {
name string
container types.Container
include []string
exclude []string
expected map[string]string
func TestDockerGatherLabels(t *testing.T) {
var gatherLabelsTests = []struct {
include []string
exclude []string
expected []string
notexpected []string
}{
{
name: "Nil filters matches all",
container: types.Container{
Labels: map[string]string{
"a": "x",
},
},
include: nil,
exclude: nil,
expected: map[string]string{
"a": "x",
},
},
{
name: "Empty filters matches all",
container: types.Container{
Labels: map[string]string{
"a": "x",
},
},
include: []string{},
exclude: []string{},
expected: map[string]string{
"a": "x",
},
},
{
name: "Must match include",
container: types.Container{
Labels: map[string]string{
"a": "x",
"b": "y",
},
},
include: []string{"a"},
exclude: []string{},
expected: map[string]string{
"a": "x",
},
},
{
name: "Must not match exclude",
container: types.Container{
Labels: map[string]string{
"a": "x",
"b": "y",
},
},
include: []string{},
exclude: []string{"b"},
expected: map[string]string{
"a": "x",
},
},
{
name: "Include Glob",
container: types.Container{
Labels: map[string]string{
"aa": "x",
"ab": "y",
"bb": "z",
},
},
include: []string{"a*"},
exclude: []string{},
expected: map[string]string{
"aa": "x",
"ab": "y",
},
},
{
name: "Exclude Glob",
container: types.Container{
Labels: map[string]string{
"aa": "x",
"ab": "y",
"bb": "z",
},
},
include: []string{},
exclude: []string{"a*"},
expected: map[string]string{
"bb": "z",
},
},
{
name: "Excluded Includes",
container: types.Container{
Labels: map[string]string{
"aa": "x",
"ab": "y",
"bb": "z",
},
},
include: []string{"a*"},
exclude: []string{"*b"},
expected: map[string]string{
"aa": "x",
},
},
{[]string{}, []string{}, []string{"label1", "label2"}, []string{}},
{[]string{"*"}, []string{}, []string{"label1", "label2"}, []string{}},
{[]string{"lab*"}, []string{}, []string{"label1", "label2"}, []string{}},
{[]string{"label1"}, []string{}, []string{"label1"}, []string{"label2"}},
{[]string{"label1*"}, []string{}, []string{"label1"}, []string{"label2"}},
{[]string{}, []string{"*"}, []string{}, []string{"label1", "label2"}},
{[]string{}, []string{"lab*"}, []string{}, []string{"label1", "label2"}},
{[]string{}, []string{"label1"}, []string{"label2"}, []string{"label1"}},
{[]string{"*"}, []string{"*"}, []string{}, []string{"label1", "label2"}},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
var acc testutil.Accumulator
newClientFunc := func(host string, tlsConfig *tls.Config) (Client, error) {
client := baseClient
client.ContainerListF = func(context.Context, types.ContainerListOptions) ([]types.Container, error) {
return []types.Container{tt.container}, nil
}
return &client, nil
for _, tt := range gatherLabelsTests {
t.Run("", func(t *testing.T) {
var acc testutil.Accumulator
d := Docker{
newClient: newClient,
}
d := Docker{
newClient: newClientFunc,
LabelInclude: tt.include,
LabelExclude: tt.exclude,
for _, label := range tt.include {
d.LabelInclude = append(d.LabelInclude, label)
}
for _, label := range tt.exclude {
d.LabelExclude = append(d.LabelExclude, label)
}
err := d.Gather(&acc)
require.NoError(t, err)
// Grab tags from a container metric
var actual map[string]string
for _, metric := range acc.Metrics {
if metric.Measurement == "docker_container_cpu" {
actual = metric.Tags
for _, label := range tt.expected {
if !acc.HasTag("docker_container_cpu", label) {
t.Errorf("Didn't get expected label of %s. Test was: Include: %s Exclude %s",
label, tt.include, tt.exclude)
}
}
for k, v := range tt.expected {
require.Equal(t, v, actual[k])
for _, label := range tt.notexpected {
if acc.HasTag("docker_container_cpu", label) {
t.Errorf("Got unexpected label of %s. Test was: Include: %s Exclude %s",
label, tt.include, tt.exclude)
}
}
})
}
}
func TestContainerNames(t *testing.T) {
var tests = []struct {
name string
containers [][]string
include []string
exclude []string
expected []string
var gatherContainerNames = []struct {
include []string
exclude []string
expected []string
notexpected []string
}{
{
name: "Nil filters matches all",
containers: [][]string{
{"/etcd"},
{"/etcd2"},
},
include: nil,
exclude: nil,
expected: []string{"etcd", "etcd2"},
},
{
name: "Empty filters matches all",
containers: [][]string{
{"/etcd"},
{"/etcd2"},
},
include: []string{},
exclude: []string{},
expected: []string{"etcd", "etcd2"},
},
{
name: "Match all containers",
containers: [][]string{
{"/etcd"},
{"/etcd2"},
},
include: []string{"*"},
exclude: []string{},
expected: []string{"etcd", "etcd2"},
},
{
name: "Include prefix match",
containers: [][]string{
{"/etcd"},
{"/etcd2"},
},
include: []string{"etc*"},
exclude: []string{},
expected: []string{"etcd", "etcd2"},
},
{
name: "Exact match",
containers: [][]string{
{"/etcd"},
{"/etcd2"},
},
include: []string{"etcd"},
exclude: []string{},
expected: []string{"etcd"},
},
{
name: "Star matches zero length",
containers: [][]string{
{"/etcd"},
{"/etcd2"},
},
include: []string{"etcd2*"},
exclude: []string{},
expected: []string{"etcd2"},
},
{
name: "Exclude matches all",
containers: [][]string{
{"/etcd"},
{"/etcd2"},
},
include: []string{},
exclude: []string{"etc*"},
expected: []string{},
},
{
name: "Exclude single",
containers: [][]string{
{"/etcd"},
{"/etcd2"},
},
include: []string{},
exclude: []string{"etcd"},
expected: []string{"etcd2"},
},
{
name: "Exclude all",
containers: [][]string{
{"/etcd"},
{"/etcd2"},
},
include: []string{"*"},
exclude: []string{"*"},
expected: []string{},
},
{
name: "Exclude item matching include",
containers: [][]string{
{"acme"},
{"foo"},
{"acme-test"},
},
include: []string{"acme*"},
exclude: []string{"*test*"},
expected: []string{"acme"},
},
{
name: "Exclude item no wildcards",
containers: [][]string{
{"acme"},
{"acme-test"},
},
include: []string{"acme*"},
exclude: []string{"test"},
expected: []string{"acme", "acme-test"},
},
{[]string{}, []string{}, []string{"etcd", "etcd2"}, []string{}},
{[]string{"*"}, []string{}, []string{"etcd", "etcd2"}, []string{}},
{[]string{"etc*"}, []string{}, []string{"etcd", "etcd2"}, []string{}},
{[]string{"etcd"}, []string{}, []string{"etcd"}, []string{"etcd2"}},
{[]string{"etcd2*"}, []string{}, []string{"etcd2"}, []string{"etcd"}},
{[]string{}, []string{"etc*"}, []string{}, []string{"etcd", "etcd2"}},
{[]string{}, []string{"etcd"}, []string{"etcd2"}, []string{"etcd"}},
{[]string{"*"}, []string{"*"}, []string{"etcd", "etcd2"}, []string{}},
{[]string{}, []string{"*"}, []string{""}, []string{"etcd", "etcd2"}},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
for _, tt := range gatherContainerNames {
t.Run("", func(t *testing.T) {
var acc testutil.Accumulator
newClientFunc := func(host string, tlsConfig *tls.Config) (Client, error) {
client := baseClient
client.ContainerListF = func(context.Context, types.ContainerListOptions) ([]types.Container, error) {
var containers []types.Container
for _, names := range tt.containers {
containers = append(containers, types.Container{
Names: names,
})
}
return containers, nil
}
return &client, nil
}
d := Docker{
newClient: newClientFunc,
newClient: newClient,
ContainerInclude: tt.include,
ContainerExclude: tt.exclude,
}
@@ -572,21 +317,39 @@ func TestContainerNames(t *testing.T) {
err := d.Gather(&acc)
require.NoError(t, err)
// Set of expected names
var expected = make(map[string]bool)
for _, v := range tt.expected {
expected[v] = true
}
// Set of actual names
var actual = make(map[string]bool)
for _, metric := range acc.Metrics {
if name, ok := metric.Tags["container_name"]; ok {
actual[name] = true
if metric.Measurement == "docker_container_cpu" {
if val, ok := metric.Tags["container_name"]; ok {
var found bool = false
for _, cname := range tt.expected {
if val == cname {
found = true
break
}
}
if !found {
t.Errorf("Got unexpected container of %s. Test was -> Include: %s, Exclude: %s", val, tt.include, tt.exclude)
}
}
}
}
require.Equal(t, expected, actual)
for _, metric := range acc.Metrics {
if metric.Measurement == "docker_container_cpu" {
if val, ok := metric.Tags["container_name"]; ok {
var found bool = false
for _, cname := range tt.notexpected {
if val == cname {
found = true
break
}
}
if found {
t.Errorf("Got unexpected container of %s. Test was -> Include: %s, Exclude: %s", val, tt.include, tt.exclude)
}
}
}
}
})
}
}
@@ -673,124 +436,3 @@ func TestDockerGatherInfo(t *testing.T) {
},
)
}
func TestDockerGatherSwarmInfo(t *testing.T) {
var acc testutil.Accumulator
d := Docker{
newClient: newClient,
}
err := acc.GatherError(d.Gather)
require.NoError(t, err)
d.gatherSwarmInfo(&acc)
// test docker_container_net measurement
acc.AssertContainsTaggedFields(t,
"docker_swarm",
map[string]interface{}{
"tasks_running": int(2),
"tasks_desired": uint64(2),
},
map[string]string{
"service_id": "qolkls9g5iasdiuihcyz9rnx2",
"service_name": "test1",
"service_mode": "replicated",
},
)
acc.AssertContainsTaggedFields(t,
"docker_swarm",
map[string]interface{}{
"tasks_running": int(1),
"tasks_desired": int(1),
},
map[string]string{
"service_id": "qolkls9g5iasdiuihcyz9rn3",
"service_name": "test2",
"service_mode": "global",
},
)
}
func TestContainerStateFilter(t *testing.T) {
var tests = []struct {
name string
include []string
exclude []string
expected map[string][]string
}{
{
name: "default",
expected: map[string][]string{
"status": []string{"running"},
},
},
{
name: "include running",
include: []string{"running"},
expected: map[string][]string{
"status": []string{"running"},
},
},
{
name: "include glob",
include: []string{"r*"},
expected: map[string][]string{
"status": []string{"restarting", "running", "removing"},
},
},
{
name: "include all",
include: []string{"*"},
expected: map[string][]string{
"status": []string{"created", "restarting", "running", "removing", "paused", "exited", "dead"},
},
},
{
name: "exclude all",
exclude: []string{"*"},
expected: map[string][]string{
"status": []string{},
},
},
{
name: "exclude all",
include: []string{"*"},
exclude: []string{"exited"},
expected: map[string][]string{
"status": []string{"created", "restarting", "running", "removing", "paused", "dead"},
},
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
var acc testutil.Accumulator
newClientFunc := func(host string, tlsConfig *tls.Config) (Client, error) {
client := baseClient
client.ContainerListF = func(ctx context.Context, options types.ContainerListOptions) ([]types.Container, error) {
for k, v := range tt.expected {
actual := options.Filters.Get(k)
sort.Strings(actual)
sort.Strings(v)
require.Equal(t, v, actual)
}
return nil, nil
}
return &client, nil
}
d := Docker{
newClient: newClientFunc,
ContainerStateInclude: tt.include,
ContainerStateExclude: tt.exclude,
}
err := d.Gather(&acc)
require.NoError(t, err)
})
}
}

View File

@@ -8,7 +8,6 @@ import (
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/api/types/registry"
"github.com/docker/docker/api/types/swarm"
)
var info = types.Info{
@@ -134,79 +133,6 @@ var containerList = []types.Container{
},
}
var two = uint64(2)
var ServiceList = []swarm.Service{
swarm.Service{
ID: "qolkls9g5iasdiuihcyz9rnx2",
Spec: swarm.ServiceSpec{
Annotations: swarm.Annotations{
Name: "test1",
},
Mode: swarm.ServiceMode{
Replicated: &swarm.ReplicatedService{
Replicas: &two,
},
},
},
},
swarm.Service{
ID: "qolkls9g5iasdiuihcyz9rn3",
Spec: swarm.ServiceSpec{
Annotations: swarm.Annotations{
Name: "test2",
},
Mode: swarm.ServiceMode{
Global: &swarm.GlobalService{},
},
},
},
}
var TaskList = []swarm.Task{
swarm.Task{
ID: "kwh0lv7hwwbh",
ServiceID: "qolkls9g5iasdiuihcyz9rnx2",
NodeID: "0cl4jturcyd1ks3fwpd010kor",
Status: swarm.TaskStatus{
State: "running",
},
DesiredState: "running",
},
swarm.Task{
ID: "u78m5ojbivc3",
ServiceID: "qolkls9g5iasdiuihcyz9rnx2",
NodeID: "0cl4jturcyd1ks3fwpd010kor",
Status: swarm.TaskStatus{
State: "running",
},
DesiredState: "running",
},
swarm.Task{
ID: "1n1uilkhr98l",
ServiceID: "qolkls9g5iasdiuihcyz9rn3",
NodeID: "0cl4jturcyd1ks3fwpd010kor",
Status: swarm.TaskStatus{
State: "running",
},
DesiredState: "running",
},
}
var NodeList = []swarm.Node{
swarm.Node{
ID: "0cl4jturcyd1ks3fwpd010kor",
Status: swarm.NodeStatus{
State: "ready",
},
},
swarm.Node{
ID: "0cl4jturcyd1ks3fwpd010kor",
Status: swarm.NodeStatus{
State: "ready",
},
},
}
func containerStats() types.ContainerStats {
var stat types.ContainerStats
jsonStat := `
@@ -477,12 +403,4 @@ var containerInspect = types.ContainerJSON{
"PATH=/bin:/sbin",
},
},
ContainerJSONBase: &types.ContainerJSONBase{
State: &types.ContainerState{
Health: &types.Health{
FailingStreak: 1,
Status: "Unhealthy",
},
},
},
}

View File

@@ -23,21 +23,10 @@ or [cluster-stats](https://www.elastic.co/guide/en/elasticsearch/reference/curre
## Set cluster_health to true when you want to also obtain cluster health stats
cluster_health = false
## Adjust cluster_health_level when you want to also obtain detailed health stats
## The options are
## - indices (default)
## - cluster
# cluster_health_level = "indices"
## Set cluster_stats to true when you want to also obtain cluster stats from the
## Master node.
## Set cluster_stats to true when you want to obtain cluster stats from the
## Master node.
cluster_stats = false
## node_stats is a list of sub-stats that you want to have gathered. Valid options
## are "indices", "os", "process", "jvm", "thread_pool", "fs", "transport", "http",
## "breaker". Per default, all stats are gathered.
# node_stats = ["jvm", "http"]
## Optional SSL Config
# ssl_ca = "/etc/telegraf/ca.pem"
# ssl_cert = "/etc/telegraf/cert.pem"
@@ -46,17 +35,6 @@ or [cluster-stats](https://www.elastic.co/guide/en/elasticsearch/reference/curre
# insecure_skip_verify = false
```
### Status mappings
When reporting health (green/yellow/red), an additional field `status_code`
is reported. The field contains a mapping from status:string to status_code:int
with the following rules:
* `green` - 1
* `yellow` - 2
* `red` - 3
* `unknown` - 0
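For example (illustrative timestamp, other fields omitted), a green cluster would include:

```
elasticsearch_cluster_health,name=elasticsearch_telegraf status="green",status_code=1i
```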
### Measurements & Fields:
field data circuit breaker measurement names:

View File

@@ -3,16 +3,17 @@ package elasticsearch
import (
"encoding/json"
"fmt"
"net/http"
"regexp"
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
jsonparser "github.com/influxdata/telegraf/plugins/parsers/json"
"io/ioutil"
"net/http"
"regexp"
"strings"
"sync"
"time"
)
// mask for masking username/password from error messages
@@ -93,21 +94,10 @@ const sampleConfig = `
## Set cluster_health to true when you want to also obtain cluster health stats
cluster_health = false
## Adjust cluster_health_level when you want to also obtain detailed health stats
## The options are
## - indices (default)
## - cluster
# cluster_health_level = "indices"
## Set cluster_stats to true when you want to also obtain cluster stats from the
## Master node.
cluster_stats = false
## node_stats is a list of sub-stats that you want to have gathered. Valid options
## are "indices", "os", "process", "jvm", "thread_pool", "fs", "transport", "http",
## "breaker". Per default, all stats are gathered.
# node_stats = ["jvm", "http"]
## Optional SSL Config
# ssl_ca = "/etc/telegraf/ca.pem"
# ssl_cert = "/etc/telegraf/cert.pem"
@@ -123,9 +113,7 @@ type Elasticsearch struct {
Servers []string
HttpTimeout internal.Duration
ClusterHealth bool
ClusterHealthLevel string
ClusterStats bool
NodeStats []string
SSLCA string `toml:"ssl_ca"` // Path to CA file
SSLCert string `toml:"ssl_cert"` // Path to host cert file
SSLKey string `toml:"ssl_key"` // Path to cert key file
@@ -138,24 +126,10 @@ type Elasticsearch struct {
// NewElasticsearch returns a new instance of Elasticsearch
func NewElasticsearch() *Elasticsearch {
return &Elasticsearch{
HttpTimeout: internal.Duration{Duration: time.Second * 5},
ClusterHealthLevel: "indices",
HttpTimeout: internal.Duration{Duration: time.Second * 5},
}
}
// perform status mapping
func mapHealthStatusToCode(s string) int {
switch strings.ToLower(s) {
case "green":
return 1
case "yellow":
return 2
case "red":
return 3
}
return 0
}
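// Illustrative note: mapHealthStatusToCode("Red") returns 3 because the input
// is lower-cased first; any unrecognized status falls through to 0.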
// SampleConfig returns sample configuration for this plugin.
func (e *Elasticsearch) SampleConfig() string {
return sampleConfig
@@ -184,7 +158,12 @@ func (e *Elasticsearch) Gather(acc telegraf.Accumulator) error {
for _, serv := range e.Servers {
go func(s string, acc telegraf.Accumulator) {
defer wg.Done()
url := e.nodeStatsUrl(s)
var url string
if e.Local {
url = s + statsPathLocal
} else {
url = s + statsPath
}
e.isMaster = false
if e.ClusterStats {
@@ -203,10 +182,7 @@ func (e *Elasticsearch) Gather(acc telegraf.Accumulator) error {
}
if e.ClusterHealth {
url = s + "/_cluster/health"
if e.ClusterHealthLevel != "" {
url = url + "?level=" + e.ClusterHealthLevel
}
url = s + "/_cluster/health?level=indices"
if err := e.gatherClusterHealth(url, acc); err != nil {
acc.AddError(fmt.Errorf(mask.ReplaceAllString(err.Error(), "http(s)://XXX:XXX@")))
return
@@ -243,22 +219,6 @@ func (e *Elasticsearch) createHttpClient() (*http.Client, error) {
return client, nil
}
func (e *Elasticsearch) nodeStatsUrl(baseUrl string) string {
var url string
if e.Local {
url = baseUrl + statsPathLocal
} else {
url = baseUrl + statsPath
}
if len(e.NodeStats) == 0 {
return url
}
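// e.g. with NodeStats = []string{"jvm", "process"} (illustrative), the joined
// suffix "jvm,process" is appended to the stats URL below.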
return fmt.Sprintf("%s/%s", url, strings.Join(e.NodeStats, ","))
}
func (e *Elasticsearch) gatherNodeStats(url string, acc telegraf.Accumulator) error {
nodeStats := &struct {
ClusterName string `json:"cluster_name"`
@@ -299,11 +259,6 @@ func (e *Elasticsearch) gatherNodeStats(url string, acc telegraf.Accumulator) er
now := time.Now()
for p, s := range stats {
// if one of the individual node stats is missing from the
// original result, skip it
if s == nil {
continue
}
f := jsonparser.JSONFlattener{}
// parse JSON, ignoring strings and bools
err := f.FlattenJSON("", s)
@@ -324,7 +279,6 @@ func (e *Elasticsearch) gatherClusterHealth(url string, acc telegraf.Accumulator
measurementTime := time.Now()
clusterFields := map[string]interface{}{
"status": healthStats.Status,
"status_code": mapHealthStatusToCode(healthStats.Status),
"timed_out": healthStats.TimedOut,
"number_of_nodes": healthStats.NumberOfNodes,
"number_of_data_nodes": healthStats.NumberOfDataNodes,
@@ -344,7 +298,6 @@ func (e *Elasticsearch) gatherClusterHealth(url string, acc telegraf.Accumulator
for name, health := range healthStats.Indices {
indexFields := map[string]interface{}{
"status": health.Status,
"status_code": mapHealthStatusToCode(health.Status),
"number_of_shards": health.NumberOfShards,
"number_of_replicas": health.NumberOfReplicas,
"active_primary_shards": health.ActivePrimaryShards,

View File

@@ -13,16 +13,6 @@ import (
"github.com/stretchr/testify/require"
)
func defaultTags() map[string]string {
return map[string]string{
"cluster_name": "es-testcluster",
"node_attribute_master": "true",
"node_id": "SDFsfSDFsdfFSDSDfSFDSDF",
"node_name": "test.host.com",
"node_host": "test",
}
}
type transportMock struct {
statusCode int
body string
@@ -55,9 +45,15 @@ func checkIsMaster(es *Elasticsearch, expected bool, t *testing.T) {
assert.Fail(t, msg)
}
}
func checkNodeStatsResult(t *testing.T, acc *testutil.Accumulator) {
tags := defaultTags()
tags := map[string]string{
"cluster_name": "es-testcluster",
"node_attribute_master": "true",
"node_id": "SDFsfSDFsdfFSDSDfSFDSDF",
"node_name": "test.host.com",
"node_host": "test",
}
acc.AssertContainsTaggedFields(t, "elasticsearch_indices", nodestatsIndicesExpected, tags)
acc.AssertContainsTaggedFields(t, "elasticsearch_os", nodestatsOsExpected, tags)
acc.AssertContainsTaggedFields(t, "elasticsearch_process", nodestatsProcessExpected, tags)
@@ -83,31 +79,6 @@ func TestGather(t *testing.T) {
checkNodeStatsResult(t, &acc)
}
func TestGatherIndividualStats(t *testing.T) {
es := newElasticsearchWithClient()
es.Servers = []string{"http://example.com:9200"}
es.NodeStats = []string{"jvm", "process"}
es.client.Transport = newTransportMock(http.StatusOK, nodeStatsResponseJVMProcess)
var acc testutil.Accumulator
if err := acc.GatherError(es.Gather); err != nil {
t.Fatal(err)
}
checkIsMaster(es, false, t)
tags := defaultTags()
acc.AssertDoesNotContainsTaggedFields(t, "elasticsearch_indices", nodestatsIndicesExpected, tags)
acc.AssertDoesNotContainsTaggedFields(t, "elasticsearch_os", nodestatsOsExpected, tags)
acc.AssertContainsTaggedFields(t, "elasticsearch_process", nodestatsProcessExpected, tags)
acc.AssertContainsTaggedFields(t, "elasticsearch_jvm", nodestatsJvmExpected, tags)
acc.AssertDoesNotContainsTaggedFields(t, "elasticsearch_thread_pool", nodestatsThreadPoolExpected, tags)
acc.AssertDoesNotContainsTaggedFields(t, "elasticsearch_fs", nodestatsFsExpected, tags)
acc.AssertDoesNotContainsTaggedFields(t, "elasticsearch_transport", nodestatsTransportExpected, tags)
acc.AssertDoesNotContainsTaggedFields(t, "elasticsearch_http", nodestatsHttpExpected, tags)
acc.AssertDoesNotContainsTaggedFields(t, "elasticsearch_breakers", nodestatsBreakersExpected, tags)
}
func TestGatherNodeStats(t *testing.T) {
es := newElasticsearchWithClient()
es.Servers = []string{"http://example.com:9200"}
@@ -122,11 +93,10 @@ func TestGatherNodeStats(t *testing.T) {
checkNodeStatsResult(t, &acc)
}
func TestGatherClusterHealthEmptyClusterHealth(t *testing.T) {
func TestGatherClusterHealth(t *testing.T) {
es := newElasticsearchWithClient()
es.Servers = []string{"http://example.com:9200"}
es.ClusterHealth = true
es.ClusterHealthLevel = ""
es.client.Transport = newTransportMock(http.StatusOK, clusterHealthResponse)
var acc testutil.Accumulator
@@ -134,56 +104,6 @@ func TestGatherClusterHealthEmptyClusterHealth(t *testing.T) {
checkIsMaster(es, false, t)
acc.AssertContainsTaggedFields(t, "elasticsearch_cluster_health",
clusterHealthExpected,
map[string]string{"name": "elasticsearch_telegraf"})
acc.AssertDoesNotContainsTaggedFields(t, "elasticsearch_indices",
v1IndexExpected,
map[string]string{"index": "v1"})
acc.AssertDoesNotContainsTaggedFields(t, "elasticsearch_indices",
v2IndexExpected,
map[string]string{"index": "v2"})
}
func TestGatherClusterHealthSpecificClusterHealth(t *testing.T) {
es := newElasticsearchWithClient()
es.Servers = []string{"http://example.com:9200"}
es.ClusterHealth = true
es.ClusterHealthLevel = "cluster"
es.client.Transport = newTransportMock(http.StatusOK, clusterHealthResponse)
var acc testutil.Accumulator
require.NoError(t, es.gatherClusterHealth("junk", &acc))
checkIsMaster(es, false, t)
acc.AssertContainsTaggedFields(t, "elasticsearch_cluster_health",
clusterHealthExpected,
map[string]string{"name": "elasticsearch_telegraf"})
acc.AssertDoesNotContainsTaggedFields(t, "elasticsearch_indices",
v1IndexExpected,
map[string]string{"index": "v1"})
acc.AssertDoesNotContainsTaggedFields(t, "elasticsearch_indices",
v2IndexExpected,
map[string]string{"index": "v2"})
}
func TestGatherClusterHealthAlsoIndicesHealth(t *testing.T) {
es := newElasticsearchWithClient()
es.Servers = []string{"http://example.com:9200"}
es.ClusterHealth = true
es.ClusterHealthLevel = "indices"
es.client.Transport = newTransportMock(http.StatusOK, clusterHealthResponseWithIndices)
var acc testutil.Accumulator
require.NoError(t, es.gatherClusterHealth("junk", &acc))
checkIsMaster(es, false, t)
acc.AssertContainsTaggedFields(t, "elasticsearch_cluster_health",
clusterHealthExpected,
map[string]string{"name": "elasticsearch_telegraf"})
@@ -265,6 +185,7 @@ func TestGatherClusterStatsNonMaster(t *testing.T) {
// ensure flag is clear so Cluster Stats would not be done
checkIsMaster(es, false, t)
checkNodeStatsResult(t, &acc)
}
func newElasticsearchWithClient() *Elasticsearch {

View File

@@ -1,21 +1,6 @@
package elasticsearch
const clusterHealthResponse = `
{
"cluster_name": "elasticsearch_telegraf",
"status": "green",
"timed_out": false,
"number_of_nodes": 3,
"number_of_data_nodes": 3,
"active_primary_shards": 5,
"active_shards": 15,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0
}
`
const clusterHealthResponseWithIndices = `
{
"cluster_name": "elasticsearch_telegraf",
"status": "green",
@@ -54,7 +39,6 @@ const clusterHealthResponseWithIndices = `
var clusterHealthExpected = map[string]interface{}{
"status": "green",
"status_code": 1,
"timed_out": false,
"number_of_nodes": 3,
"number_of_data_nodes": 3,
@@ -67,7 +51,6 @@ var clusterHealthExpected = map[string]interface{}{
var v1IndexExpected = map[string]interface{}{
"status": "green",
"status_code": 1,
"number_of_shards": 10,
"number_of_replicas": 1,
"active_primary_shards": 10,
@@ -79,7 +62,6 @@ var v1IndexExpected = map[string]interface{}{
var v2IndexExpected = map[string]interface{}{
"status": "red",
"status_code": 3,
"number_of_shards": 10,
"number_of_replicas": 1,
"active_primary_shards": 0,
@@ -507,100 +489,6 @@ const nodeStatsResponse = `
}
`
const nodeStatsResponseJVMProcess = `
{
"cluster_name": "es-testcluster",
"nodes": {
"SDFsfSDFsdfFSDSDfSFDSDF": {
"timestamp": 1436365550135,
"name": "test.host.com",
"transport_address": "inet[/127.0.0.1:9300]",
"host": "test",
"ip": [
"inet[/127.0.0.1:9300]",
"NONE"
],
"attributes": {
"master": "true"
},
"process": {
"timestamp": 1436460392945,
"open_file_descriptors": 160,
"cpu": {
"percent": 2,
"sys_in_millis": 1870,
"user_in_millis": 13610,
"total_in_millis": 15480
},
"mem": {
"total_virtual_in_bytes": 4747890688
}
},
"jvm": {
"timestamp": 1436460392945,
"uptime_in_millis": 202245,
"mem": {
"heap_used_in_bytes": 52709568,
"heap_used_percent": 5,
"heap_committed_in_bytes": 259522560,
"heap_max_in_bytes": 1038876672,
"non_heap_used_in_bytes": 39634576,
"non_heap_committed_in_bytes": 40841216,
"pools": {
"young": {
"used_in_bytes": 32685760,
"max_in_bytes": 279183360,
"peak_used_in_bytes": 71630848,
"peak_max_in_bytes": 279183360
},
"survivor": {
"used_in_bytes": 8912880,
"max_in_bytes": 34865152,
"peak_used_in_bytes": 8912888,
"peak_max_in_bytes": 34865152
},
"old": {
"used_in_bytes": 11110928,
"max_in_bytes": 724828160,
"peak_used_in_bytes": 14354608,
"peak_max_in_bytes": 724828160
}
}
},
"threads": {
"count": 44,
"peak_count": 45
},
"gc": {
"collectors": {
"young": {
"collection_count": 2,
"collection_time_in_millis": 98
},
"old": {
"collection_count": 1,
"collection_time_in_millis": 24
}
}
},
"buffer_pools": {
"direct": {
"count": 40,
"used_in_bytes": 6304239,
"total_capacity_in_bytes": 6304239
},
"mapped": {
"count": 0,
"used_in_bytes": 0,
"total_capacity_in_bytes": 0
}
}
}
}
}
}
`
var nodestatsIndicesExpected = map[string]interface{}{
"id_cache_memory_size_in_bytes": float64(0),
"completion_size_in_bytes": float64(0),

View File

@@ -1,57 +1,175 @@
# Exec Input Plugin
The `exec` plugin executes the `commands` on every interval and parses metrics from
their output in any one of the accepted [Input Data Formats](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md).
This plugin can be used to poll for custom metrics from any source.
### Example 1 - JSON
### Configuration:
#### Configuration
In this example a script called ```/tmp/test.sh```, a script called ```/tmp/test2.sh```, and
all scripts matching the glob pattern ```/tmp/collect_*.sh``` are configured for ```[[inputs.exec]]```
in JSON format. Glob patterns are matched on every run, so adding new scripts that match the pattern
will cause them to be picked up immediately.
```toml
# Read flattened metrics from one or more commands that output JSON to stdout
[[inputs.exec]]
## Commands array
commands = [
"/tmp/test.sh",
"/usr/bin/mycollector --foo=bar",
"/tmp/collect_*.sh"
]
# Shell/commands array
# Full command line to executable with parameters, or a glob pattern to run all matching files.
commands = ["/tmp/test.sh", "/tmp/test2.sh", "/tmp/collect_*.sh"]
## Timeout for each command to complete.
timeout = "5s"
## measurement name suffix (for separating different commands)
# Data format to consume.
# NOTE json only reads numerical measurements, strings and booleans are ignored.
data_format = "json"
# measurement name suffix (for separating different commands)
name_suffix = "_mycollector"
## Data format to consume.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "influx"
```
Glob patterns in the `command` option are matched on every run, so adding new
scripts that match the pattern will cause them to be picked up immediately.
Other options for modifying the measurement names are:

```
name_prefix = "prefix_"
```

### Example:

This script produces static values; since no timestamp is specified, the values are recorded at the current time.

```sh
#!/bin/sh
echo 'example,tag1=a,tag2=b i=42i,j=43i,k=44i'
```

It can be paired with the following configuration and will be run at the `interval` of the agent.
Let's say that we have the above configuration, and mycollector outputs the
following JSON:
```json
{
"a": 0.5,
"b": {
"c": 0.1,
"d": 5
}
}
```
The collected metrics will be stored as fields under the measurement
"exec_mycollector":
```
exec_mycollector a=0.5,b_c=0.1,b_d=5 1452815002357578567
```
If using JSON, only numeric values are parsed and turned into floats. Booleans
and strings will be ignored.
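As an illustration (a hypothetical stand-in, not part of the plugin docs), mycollector could be mimicked by a shell script that prints the JSON above:

```sh
#!/bin/sh
# emit one JSON object per run; nested keys are flattened to b_c and b_d
echo '{"a": 0.5, "b": {"c": 0.1, "d": 5}}'
```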
### Example 2 - Influx Line-Protocol
In this example an application called ```/usr/bin/line_protocol_collector```
and a script called ```/tmp/test2.sh``` are configured for ```[[inputs.exec]]```
in influx line-protocol format.
#### Configuration
```toml
[[inputs.exec]]
commands = ["sh /tmp/test.sh"]
# Shell/commands array
# For backwards compatibility with the old version, the single `command` option can still be used:
# command = "/usr/bin/line_protocol_collector"
commands = ["/usr/bin/line_protocol_collector","/tmp/test2.sh"]
## Timeout for each command to complete.
timeout = "5s"
# Data format to consume.
# NOTE json only reads numerical measurements, strings and booleans are ignored.
data_format = "influx"
```
The line_protocol_collector application outputs the following line protocol:

```
cpu,cpu=cpu0,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu1,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu2,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu3,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu4,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu5,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu6,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
```

You will get data in InfluxDB exactly as it is defined above: the tags are
cpu=cpuN, host=foo, and datacenter=us-east, with fields usage_idle and
usage_busy. Each point receives a timestamp at collection time, and each line
must end in \n, just as the Influx line protocol requires.

### Common Issues:

#### Q: My script works when I run it by hand, but not when Telegraf is running as a service.

This may be related to the Telegraf service running as a different user. The
official packages run Telegraf as the `telegraf` user and group on Linux
systems.
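One quick way to check (a general suggestion, not from the original docs) is to run the command as the same user the service uses:

```sh
# reproduce the service environment by running the script as the telegraf user
sudo -u telegraf /tmp/test.sh
```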
### Example 3 - Graphite
We can also change the data_format to "graphite" to use metric-collecting scripts that are compatible with Graphite, such as:
* Nagios [Metrics Plugins](https://exchange.nagios.org/directory/Plugins)
* Sensu [Metrics Plugins](https://github.com/sensu-plugins)

In this example a script called /tmp/test.sh and a script called /tmp/test2.sh are configured for [[inputs.exec]] in Graphite format.
#### Configuration
```toml
# Read flattened metrics from one or more commands that output JSON to stdout
[[inputs.exec]]
# Shell/commands array
commands = ["/tmp/test.sh","/tmp/test2.sh"]
## Timeout for each command to complete.
timeout = "5s"
# Data format to consume.
# NOTE json only reads numerical measurements, strings and booleans are ignored.
data_format = "graphite"
# measurement name suffix (for separating different commands)
name_suffix = "_mycollector"
## Below configuration will be used for data_format = "graphite", can be ignored for other data_format
## If matching multiple measurement files, this string will be used to join the matched values.
separator = "."
## Each template line requires a template pattern. It can have an optional
## filter before the template, separated by spaces. It can also have optional extra
## tags following the template. Multiple tags should be separated by commas and no spaces,
## similar to the line protocol format. There can be only one default template.
## Templates support the following formats:
## 1. filter + template
## 2. filter + template + extra tag
## 3. filter + template with field key
## 4. default template
templates = [
"*.app env.service.resource.measurement",
"stats.* .host.measurement* region=us-west,agent=sensu",
"stats2.* .host.measurement.field",
"measurement*"
]
```
Graphite messages are in this format:
```
metric_path value timestamp\n
```
__metric_path__ is the metric namespace that you want to populate.
__value__ is the value that you want to assign to the metric at this time.
__timestamp__ is the unix epoch time.
And test.sh/test2.sh will output:
```
sensu.metric.net.server0.eth0.rx_packets 461295119435 1444234982
sensu.metric.net.server0.eth0.tx_bytes 1093086493388480 1444234982
sensu.metric.net.server0.eth0.rx_bytes 1015633926034834 1444234982
sensu.metric.net.server0.eth0.tx_errors 0 1444234982
sensu.metric.net.server0.eth0.rx_errors 0 1444234982
sensu.metric.net.server0.eth0.tx_dropped 0 1444234982
sensu.metric.net.server0.eth0.rx_dropped 0 1444234982
```
The templates configuration is used to parse the Graphite metrics so they can support the InfluxDB/OpenTSDB tagging store engines.
For more detailed information about templates, please refer to [The Graphite Input](https://github.com/influxdata/influxdb/blob/master/services/graphite/README.md).
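As a worked illustration (the metric path is made up here), the template `env.service.resource.measurement` would turn the Graphite line `prod.api.cpu.usage 42 1444234982` into a metric named `usage` with tags `env=prod`, `service=api`, and `resource=cpu`:

```
prod.api.cpu.usage 42 1444234982
usage,env=prod,service=api,resource=cpu value=42 1444234982000000000
```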

View File

@@ -41,8 +41,6 @@ const sampleConfig = `
data_format = "influx"
`
const MaxStderrBytes = 512
type Exec struct {
Commands []string
Command string
@@ -98,41 +96,15 @@ func (c CommandRunner) Run(
cmd := exec.Command(split_cmd[0], split_cmd[1:]...)
var (
out bytes.Buffer
stderr bytes.Buffer
)
var out bytes.Buffer
cmd.Stdout = &out
cmd.Stderr = &stderr
if err := internal.RunTimeout(cmd, e.Timeout.Duration); err != nil {
switch e.parser.(type) {
case *nagios.NagiosParser:
AddNagiosState(err, acc)
default:
var errMessage = ""
if stderr.Len() > 0 {
stderr = removeCarriageReturns(stderr)
// Limit the number of bytes.
didTruncate := false
if stderr.Len() > MaxStderrBytes {
stderr.Truncate(MaxStderrBytes)
didTruncate = true
}
if i := bytes.IndexByte(stderr.Bytes(), '\n'); i > 0 {
// Only show truncation if the newline wasn't the last character.
if i < stderr.Len()-1 {
didTruncate = true
}
stderr.Truncate(i)
}
if didTruncate {
stderr.WriteString("...")
}
errMessage = fmt.Sprintf(": %s", stderr.String())
}
return nil, fmt.Errorf("exec: %s for command '%s'%s", err, command, errMessage)
return nil, fmt.Errorf("exec: %s for command '%s'", err, command)
}
} else {
switch e.parser.(type) {

View File

@@ -144,6 +144,83 @@ func TestCommandError(t *testing.T) {
assert.Equal(t, acc.NFields(), 0, "No new points should have been added")
}
func TestLineProtocolParse(t *testing.T) {
parser, _ := parsers.NewInfluxParser()
e := &Exec{
runner: newRunnerMock([]byte(lineProtocol), nil),
Commands: []string{"line-protocol"},
parser: parser,
}
var acc testutil.Accumulator
require.NoError(t, acc.GatherError(e.Gather))
fields := map[string]interface{}{
"usage_idle": float64(99),
"usage_busy": float64(1),
}
tags := map[string]string{
"host": "foo",
"datacenter": "us-east",
}
acc.AssertContainsTaggedFields(t, "cpu", fields, tags)
}
func TestLineProtocolEmptyParse(t *testing.T) {
parser, _ := parsers.NewInfluxParser()
e := &Exec{
runner: newRunnerMock([]byte(lineProtocolEmpty), nil),
Commands: []string{"line-protocol"},
parser: parser,
}
var acc testutil.Accumulator
err := e.Gather(&acc)
require.NoError(t, err)
}
func TestLineProtocolShortParse(t *testing.T) {
parser, _ := parsers.NewInfluxParser()
e := &Exec{
runner: newRunnerMock([]byte(lineProtocolShort), nil),
Commands: []string{"line-protocol"},
parser: parser,
}
var acc testutil.Accumulator
err := acc.GatherError(e.Gather)
require.Error(t, err)
assert.Contains(t, err.Error(), "buffer too short", "A buffer too short error was expected")
}
func TestLineProtocolParseMultiple(t *testing.T) {
parser, _ := parsers.NewInfluxParser()
e := &Exec{
runner: newRunnerMock([]byte(lineProtocolMulti), nil),
Commands: []string{"line-protocol"},
parser: parser,
}
var acc testutil.Accumulator
err := acc.GatherError(e.Gather)
require.NoError(t, err)
fields := map[string]interface{}{
"usage_idle": float64(99),
"usage_busy": float64(1),
}
tags := map[string]string{
"host": "foo",
"datacenter": "us-east",
}
cpuTags := []string{"cpu0", "cpu1", "cpu2", "cpu3", "cpu4", "cpu5", "cpu6"}
for _, cpu := range cpuTags {
tags["cpu"] = cpu
acc.AssertContainsTaggedFields(t, "cpu", fields, tags)
}
}
func TestExecCommandWithGlob(t *testing.T) {
parser, _ := parsers.NewValueParser("metric", "string", nil)
e := NewExec()

View File

@@ -1,19 +1,19 @@
# Fail2ban Input Plugin
# Fail2ban Plugin
The fail2ban plugin gathers the count of failed and banned IP addresses using [fail2ban](https://www.fail2ban.org).
The fail2ban plugin gathers counts of failed and banned IP addresses from fail2ban.
This plugin runs the `fail2ban-client` command which generally requires root access.
Acquiring the required permissions can be done using several methods:
This plugin runs the fail2ban-client command, and fail2ban-client requires root access.
You have to grant telegraf permission to run fail2ban-client:
- Use sudo to run fail2ban-client.
- Run telegraf as root. (not recommended)
- Run telegraf as root. (deprecated)
- Configure sudo to grant telegraf access to fail2ban-client.
### Using sudo
You may edit your sudo configuration with the following:
``` sudo
telegraf ALL=(root) NOEXEC: NOPASSWD: /usr/bin/fail2ban-client status, /usr/bin/fail2ban-client status *
telegraf ALL=(root) NOPASSWD: /usr/bin/fail2ban-client status *
```
### Configuration:
@@ -21,7 +21,10 @@ telegraf ALL=(root) NOEXEC: NOPASSWD: /usr/bin/fail2ban-client status, /usr/bin/
``` toml
# Read metrics from fail2ban.
[[inputs.fail2ban]]
## Use sudo to run fail2ban-client
## fail2ban-client requires root access.
## Setting 'use_sudo' to true will make use of sudo to run fail2ban-client.
## Users must configure sudo to allow the telegraf user to run fail2ban-client without a password.
## This plugin runs only "fail2ban-client status".
use_sudo = false
```
@@ -35,7 +38,7 @@ telegraf ALL=(root) NOEXEC: NOPASSWD: /usr/bin/fail2ban-client status, /usr/bin/
- All measurements have the following tags:
- jail
### Example Output:
```
@@ -52,5 +55,6 @@ Status for the jail: sshd
```
```
$ ./telegraf --config telegraf.conf --input-filter fail2ban --test
fail2ban,jail=sshd failed=5i,banned=2i 1495868667000000000
```

View File

@@ -1,3 +1,5 @@
// +build linux
package fail2ban
import (
@@ -6,10 +8,9 @@ import (
"os/exec"
"strings"
"strconv"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"strconv"
)
var (
@@ -22,7 +23,10 @@ type Fail2ban struct {
}
var sampleConfig = `
## Use sudo to run fail2ban-client
## fail2ban-client requires root access.
## Setting 'use_sudo' to true will make use of sudo to run fail2ban-client.
## Users must configure sudo to allow the telegraf user to run fail2ban-client without a password.
## This plugin runs only "fail2ban-client status".
use_sudo = false
`

View File

@@ -0,0 +1,3 @@
// +build !linux
package fail2ban

View File

@@ -20,7 +20,6 @@ The filestat plugin gathers metrics about file existence, size, and other stats.
- filestat
- exists (int, 0 | 1)
- size_bytes (int, bytes)
- modification_time (int, unixtime)
- md5 (optional, string)
### Tags:
@@ -33,6 +32,6 @@ The filestat plugin gathers metrics about file existence, size, and other stats.
```
$ telegraf --config /etc/telegraf/telegraf.conf --input-filter filestat --test
* Plugin: filestat, Collection 1
> filestat,file=/tmp/foo/bar,host=tyrion exists=0i 1507218518192154351
> filestat,file=/Users/sparrc/ws/telegraf.conf,host=tyrion exists=1i,size=47894i,modification_time=1507152973123456789i 1507218518192154351
> filestat,file=/tmp/foo/bar,host=tyrion exists=0i 1461203374493128216
> filestat,file=/Users/sparrc/ws/telegraf.conf,host=tyrion exists=1i,size=47894i 1461203374493199335
```

Some files were not shown because too many files have changed in this diff.