Merging upstream.

amandahla 2016-11-08 15:23:59 -02:00
commit 28fdc1fff1
20 changed files with 513 additions and 172 deletions


@ -1,3 +1,15 @@
## v1.2 [unreleased]
### Release Notes
### Features
- [#1564](https://github.com/influxdata/telegraf/issues/1564): Use RFC3339 timestamps in log output.
### Bugfixes
- [#1949](https://github.com/influxdata/telegraf/issues/1949): Fix windows `net` plugin.
## v1.1 [unreleased]
### Release Notes

Godeps

@ -47,7 +47,7 @@ github.com/prometheus/client_model fa8ad6fec33561be4280a8f0514318c79d7f6cb6
github.com/prometheus/common e8eabff8812b05acf522b45fdcd725a785188e37
github.com/prometheus/procfs 406e5b7bfd8201a36e2bb5f7bdae0b03380c2ce8
github.com/samuel/go-zookeeper 218e9c81c0dd8b3b18172b2bbfad92cc7d6db55f
github.com/shirou/gopsutil 4d0c402af66c78735c5ccf820dc2ca7de5e4ff08
github.com/shirou/gopsutil 1516eb9ddc5e61ba58874047a98f8b44b5e585e8
github.com/soniah/gosnmp 3fe3beb30fa9700988893c56a63b1df8e1b68c26
github.com/streadway/amqp b4f3ceab0337f013208d31348b578d83c0064744
github.com/stretchr/testify 1f4a1643a57e798696635ea4c126e9127adb7d3c


@ -1,6 +1,6 @@
VERSION := $(shell sh -c 'git describe --always --tags')
BRANCH := $(shell sh -c 'git rev-parse --abbrev-ref HEAD')
COMMIT := $(shell sh -c 'git rev-parse HEAD')
COMMIT := $(shell sh -c 'git rev-parse --short HEAD')
ifdef GOBIN
PATH := $(GOBIN):$(PATH)
else

README.md

@ -1,15 +1,23 @@
# Telegraf [![Circle CI](https://circleci.com/gh/influxdata/telegraf.svg?style=svg)](https://circleci.com/gh/influxdata/telegraf) [![Docker pulls](https://img.shields.io/docker/pulls/library/telegraf.svg)](https://hub.docker.com/_/telegraf/)
Telegraf is an agent written in Go for collecting metrics from the system it's
running on, or from other services, and writing them into InfluxDB or other
[outputs](https://github.com/influxdata/telegraf#supported-output-plugins).
Telegraf is an agent written in Go for collecting, processing, aggregating,
and writing metrics.
Design goals are to have a minimal memory footprint with a plugin system so
that developers in the community can easily add support for collecting metrics
from well known services (like Hadoop, Postgres, or Redis) and third party
APIs (like Mailchimp, AWS CloudWatch, or Google Analytics).
New input and output plugins are designed to be easy to contribute,
Telegraf is plugin-driven and has the concept of 4 distinct plugins:
1. [Input Plugins](#input-plugins) collect metrics from the system, services, or 3rd party APIs
2. [Processor Plugins](#processor-plugins) transform, decorate, and/or filter metrics
3. [Aggregator Plugins](#aggregator-plugins) create aggregate metrics (e.g. mean, min, max, quantiles, etc.)
4. [Output Plugins](#output-plugins) write metrics to various destinations
For more information on Processor and Aggregator plugins please [read this](./docs/AGGREGATORS_AND_PROCESSORS.md).
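To make the four plugin types concrete, here is a minimal sketch of a `telegraf.conf` wiring one plugin of each type together (the specific plugins and settings are illustrative, not required):
```toml
# Input: collect CPU metrics from the host.
[[inputs.cpu]]

# Processor: print every metric as it passes through.
[[processors.printer]]

# Aggregator: emit the min/max of each field once per period.
[[aggregators.minmax]]
  period = "30s"
  drop_original = false

# Output: write everything to InfluxDB (assumed local endpoint).
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "telegraf"
```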
New plugins are designed to be easy to contribute; we'll eagerly accept pull
requests and will manage the set of plugins that Telegraf supports.
See the [contributing guide](CONTRIBUTING.md) for instructions on writing
@ -39,7 +47,7 @@ controlled via `systemctl [action] telegraf`
### yum/apt Repositories:
There is a yum/apt repo available for the whole InfluxData stack, see
[here](https://docs.influxdata.com/influxdb/v0.10/introduction/installation/#installation)
[here](https://docs.influxdata.com/influxdb/latest/introduction/installation/#installation)
for instructions on setting up the repo. Once it is configured, you will be able
to use this repo to install & update telegraf.
@ -127,77 +135,71 @@ telegraf --config telegraf.conf -input-filter cpu:mem -output-filter influxdb
See the [configuration guide](docs/CONFIGURATION.md) for a rundown of the more advanced
configuration options.
## Supported Input Plugins
## Input Plugins
Telegraf currently has support for collecting metrics from many sources. For
more information on each, please look at the directory of the same name in
`plugins/inputs`.
Currently implemented sources:
* [aws cloudwatch](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/cloudwatch)
* [aerospike](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/aerospike)
* [apache](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/apache)
* [bcache](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/bcache)
* [cassandra](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/cassandra)
* [ceph](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/ceph)
* [chrony](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/chrony)
* [consul](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/consul)
* [conntrack](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/conntrack)
* [couchbase](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/couchbase)
* [couchdb](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/couchdb)
* [disque](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/disque)
* [dns query time](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/dns_query)
* [docker](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/docker)
* [dovecot](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/dovecot)
* [elasticsearch](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/elasticsearch)
* [exec](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/exec) (generic executable plugin, support JSON, influx, graphite and nagios)
* [filestat](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/filestat)
* [haproxy](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/haproxy)
* [hddtemp](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/hddtemp)
* [http_response](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/http_response)
* [httpjson](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/httpjson) (generic JSON-emitting http service plugin)
* [influxdb](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/influxdb)
* [ipmi_sensor](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/ipmi_sensor)
* [iptables](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/iptables)
* [jolokia](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/jolokia)
* [leofs](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/leofs)
* [lustre2](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/lustre2)
* [mailchimp](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/mailchimp)
* [memcached](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/memcached)
* [mesos](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/mesos)
* [mongodb](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/mongodb)
* [mysql](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/mysql)
* [net_response](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/net_response)
* [nginx](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/nginx)
* [nsq](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/nsq)
* [nstat](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/nstat)
* [ntpq](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/ntpq)
* [phpfpm](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/phpfpm)
* [phusion passenger](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/passenger)
* [ping](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/ping)
* [postgresql](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/postgresql)
* [postgresql_extensible](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/postgresql_extensible)
* [powerdns](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/powerdns)
* [procstat](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/procstat)
* [prometheus](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/prometheus)
* [puppetagent](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/puppetagent)
* [rabbitmq](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/rabbitmq)
* [raindrops](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/raindrops)
* [redis](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/redis)
* [rethinkdb](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/rethinkdb)
* [riak](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/riak)
* [sensors](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/sensors)
* [snmp](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/snmp)
* [snmp_legacy](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/snmp_legacy)
* [sql server](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/sqlserver) (microsoft)
* [twemproxy](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/twemproxy)
* [varnish](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/varnish)
* [zfs](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/zfs)
* [zookeeper](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/zookeeper)
* [win_perf_counters ](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/win_perf_counters) (windows performance counters)
* [sysstat](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/sysstat)
* [system](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/system)
* [aws cloudwatch](./plugins/inputs/cloudwatch)
* [aerospike](./plugins/inputs/aerospike)
* [apache](./plugins/inputs/apache)
* [bcache](./plugins/inputs/bcache)
* [cassandra](./plugins/inputs/cassandra)
* [ceph](./plugins/inputs/ceph)
* [chrony](./plugins/inputs/chrony)
* [consul](./plugins/inputs/consul)
* [conntrack](./plugins/inputs/conntrack)
* [couchbase](./plugins/inputs/couchbase)
* [couchdb](./plugins/inputs/couchdb)
* [disque](./plugins/inputs/disque)
* [dns query time](./plugins/inputs/dns_query)
* [docker](./plugins/inputs/docker)
* [dovecot](./plugins/inputs/dovecot)
* [elasticsearch](./plugins/inputs/elasticsearch)
* [exec](./plugins/inputs/exec) (generic executable plugin, supports JSON, influx, graphite and nagios)
* [filestat](./plugins/inputs/filestat)
* [haproxy](./plugins/inputs/haproxy)
* [hddtemp](./plugins/inputs/hddtemp)
* [http_response](./plugins/inputs/http_response)
* [httpjson](./plugins/inputs/httpjson) (generic JSON-emitting http service plugin)
* [influxdb](./plugins/inputs/influxdb)
* [ipmi_sensor](./plugins/inputs/ipmi_sensor)
* [iptables](./plugins/inputs/iptables)
* [jolokia](./plugins/inputs/jolokia)
* [leofs](./plugins/inputs/leofs)
* [lustre2](./plugins/inputs/lustre2)
* [mailchimp](./plugins/inputs/mailchimp)
* [memcached](./plugins/inputs/memcached)
* [mesos](./plugins/inputs/mesos)
* [mongodb](./plugins/inputs/mongodb)
* [mysql](./plugins/inputs/mysql)
* [net_response](./plugins/inputs/net_response)
* [nginx](./plugins/inputs/nginx)
* [nsq](./plugins/inputs/nsq)
* [nstat](./plugins/inputs/nstat)
* [ntpq](./plugins/inputs/ntpq)
* [phpfpm](./plugins/inputs/phpfpm)
* [phusion passenger](./plugins/inputs/passenger)
* [ping](./plugins/inputs/ping)
* [postgresql](./plugins/inputs/postgresql)
* [postgresql_extensible](./plugins/inputs/postgresql_extensible)
* [powerdns](./plugins/inputs/powerdns)
* [procstat](./plugins/inputs/procstat)
* [prometheus](./plugins/inputs/prometheus)
* [puppetagent](./plugins/inputs/puppetagent)
* [rabbitmq](./plugins/inputs/rabbitmq)
* [raindrops](./plugins/inputs/raindrops)
* [redis](./plugins/inputs/redis)
* [rethinkdb](./plugins/inputs/rethinkdb)
* [riak](./plugins/inputs/riak)
* [sensors](./plugins/inputs/sensors)
* [snmp](./plugins/inputs/snmp)
* [snmp_legacy](./plugins/inputs/snmp_legacy)
* [sql server](./plugins/inputs/sqlserver) (microsoft)
* [twemproxy](./plugins/inputs/twemproxy)
* [varnish](./plugins/inputs/varnish)
* [zfs](./plugins/inputs/zfs)
* [zookeeper](./plugins/inputs/zookeeper)
* [win_perf_counters](./plugins/inputs/win_perf_counters) (windows performance counters)
* [sysstat](./plugins/inputs/sysstat)
* [system](./plugins/inputs/system)
* cpu
* mem
* net
@ -251,6 +253,50 @@ want to add support for another service or third-party API.
* [opentsdb](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/opentsdb)
* [prometheus](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/prometheus_client)
* [riemann](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/riemann)
* [http_listener](./plugins/inputs/http_listener)
* [kafka_consumer](./plugins/inputs/kafka_consumer)
* [mqtt_consumer](./plugins/inputs/mqtt_consumer)
* [nats_consumer](./plugins/inputs/nats_consumer)
* [nsq_consumer](./plugins/inputs/nsq_consumer)
* [logparser](./plugins/inputs/logparser)
* [statsd](./plugins/inputs/statsd)
* [tail](./plugins/inputs/tail)
* [tcp_listener](./plugins/inputs/tcp_listener)
* [udp_listener](./plugins/inputs/udp_listener)
* [webhooks](./plugins/inputs/webhooks)
* [filestack](./plugins/inputs/webhooks/filestack)
* [github](./plugins/inputs/webhooks/github)
* [mandrill](./plugins/inputs/webhooks/mandrill)
* [rollbar](./plugins/inputs/webhooks/rollbar)
## Processor Plugins
* [printer](./plugins/processors/printer)
## Aggregator Plugins
* [minmax](./plugins/aggregators/minmax)
## Output Plugins
* [influxdb](./plugins/outputs/influxdb)
* [amon](./plugins/outputs/amon)
* [amqp](./plugins/outputs/amqp)
* [aws kinesis](./plugins/outputs/kinesis)
* [aws cloudwatch](./plugins/outputs/cloudwatch)
* [datadog](./plugins/outputs/datadog)
* [file](./plugins/outputs/file)
* [graphite](./plugins/outputs/graphite)
* [graylog](./plugins/outputs/graylog)
* [instrumental](./plugins/outputs/instrumental)
* [kafka](./plugins/outputs/kafka)
* [librato](./plugins/outputs/librato)
* [mqtt](./plugins/outputs/mqtt)
* [nats](./plugins/outputs/nats)
* [nsq](./plugins/outputs/nsq)
* [opentsdb](./plugins/outputs/opentsdb)
* [prometheus](./plugins/outputs/prometheus_client)
* [riemann](./plugins/outputs/riemann)
## Contributing


@ -0,0 +1,59 @@
# Telegraf Aggregator & Processor Plugins
As of release 1.1.0, Telegraf has the concept of Aggregator and Processor Plugins.
These plugins sit in-between Input & Output plugins, aggregating and processing
metrics as they pass through Telegraf:
```
┌───────────┐
│ │
│ CPU │───┐
│ │ │
└───────────┘ │
┌───────────┐ │ ┌───────────┐
│ │ │ │ │
│ Memory │───┤ ┌──▶│ InfluxDB │
│ │ │ │ │ │
└───────────┘ │ ┌─────────────┐ ┌─────────────┐ │ └───────────┘
│ │ │ │Aggregate │ │
┌───────────┐ │ │Process │ │ - mean │ │ ┌───────────┐
│ │ │ │ - transform │ │ - quantiles │ │ │ │
│ MySQL │───┼───▶│ - decorate │────▶│ - min/max │───┼──▶│ File │
│ │ │ │ - filter │ │ - count │ │ │ │
└───────────┘ │ │ │ │ │ │ └───────────┘
│ └─────────────┘ └─────────────┘ │
┌───────────┐ │ │ ┌───────────┐
│ │ │ │ │ │
│ SNMP │───┤ └──▶│ Kafka │
│ │ │ │ │
└───────────┘ │ └───────────┘
┌───────────┐ │
│ │ │
│ Docker │───┘
│ │
└───────────┘
```
Both Aggregators and Processors analyze metrics as they pass through Telegraf.
**Processor** plugins process metrics as they pass through and immediately emit
results based on the values they process. For example, this could be printing
all metrics or adding a tag to all metrics that pass through.
**Aggregator** plugins, on the other hand, are a bit more complicated. Aggregators
are typically for emitting new _aggregate_ metrics, such as a running mean,
minimum, maximum, quantiles, or standard deviation. For this reason, all _aggregator_
plugins are configured with a `period`. The `period` is the size of the window
of metrics that each _aggregate_ represents. In other words, the emitted
_aggregate_ metric will be the aggregated value of the past `period` seconds.
Since many users will only care about their aggregates and not every single metric
gathered, there is also a `drop_original` argument, which tells Telegraf to only
emit the aggregates and not the original metrics.
**NOTE** Since aggregators only aggregate metrics within their period, historical
data is not supported. In other words, if your metric timestamp is more than
`now() - period` in the past, it will not be aggregated. If this is a feature
that you need, please comment on this [github issue](https://github.com/influxdata/telegraf/issues/1992).
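A self-contained Go sketch of this period-window behavior may help; the types and names below are illustrative stand-ins, not Telegraf's actual aggregator interface:
```go
package main

import (
	"fmt"
	"time"
)

// metric is a simplified stand-in for a Telegraf metric.
type metric struct {
	value float64
	ts    time.Time
}

// windowAggregator tracks the min/max of values seen during one period.
type windowAggregator struct {
	period   time.Duration
	min, max float64
	count    int
}

func (a *windowAggregator) add(m metric) {
	// Metrics older than now() - period fall outside the window
	// and are not aggregated, matching the note above.
	if m.ts.Before(time.Now().Add(-a.period)) {
		return
	}
	if a.count == 0 || m.value < a.min {
		a.min = m.value
	}
	if a.count == 0 || m.value > a.max {
		a.max = m.value
	}
	a.count++
}

// push emits the aggregate for the elapsed window and clears state,
// as happens once per period.
func (a *windowAggregator) push() {
	if a.count > 0 {
		fmt.Printf("min=%v max=%v over %v\n", a.min, a.max, a.period)
	}
	a.count = 0
}

func main() {
	agg := &windowAggregator{period: 30 * time.Second}
	agg.add(metric{1.72, time.Now()})
	agg.add(metric{1.6, time.Now()})
	agg.push() // min=1.6 max=1.72 over 30s
}
```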


@ -66,7 +66,7 @@
debug = false
## Run telegraf in quiet mode (error log messages only).
quiet = false
## Specify the log file name. The empty string means to log to stdout.
## Specify the log file name. The empty string means to log to stderr.
logfile = ""
## Override default hostname, if empty use os.Hostname()
@ -160,7 +160,7 @@
# # Configuration for AWS CloudWatch output.
# [[outputs.cloudwatch]]
# ## Amazon REGION
# region = 'us-east-1'
# region = "us-east-1"
#
# ## Amazon Credentials
# ## Credentials are loaded in the following order
@ -178,7 +178,7 @@
# #shared_credential_file = ""
#
# ## Namespace for the CloudWatch MetricDatums
# namespace = 'InfluxData/Telegraf'
# namespace = "InfluxData/Telegraf"
# # Configuration for DataDog API to send metrics to.
@ -441,6 +441,30 @@
###############################################################################
# PROCESSOR PLUGINS #
###############################################################################
# # Print all metrics that pass through this filter.
# [[processors.printer]]
###############################################################################
# AGGREGATOR PLUGINS #
###############################################################################
# # Keep the aggregate min/max of each metric passing through.
# [[aggregators.minmax]]
# ## General Aggregator Arguments:
# ## The period on which to flush & clear the aggregator.
# period = "30s"
# ## If true, the original metric will be dropped by the
# ## aggregator and will not get sent to the output plugins.
# drop_original = false
###############################################################################
# INPUT PLUGINS #
###############################################################################
@ -582,21 +606,24 @@
# # Read specific statistics per cgroup
# [[inputs.cgroup]]
# ## Directories in which to look for files, globs are supported.
# # paths = [
# # "/cgroup/memory",
# # "/cgroup/memory/child1",
# # "/cgroup/memory/child2/*",
# # ]
# ## cgroup stat fields, as file names, globs are supported.
# ## these file names are appended to each path from above.
# # files = ["memory.*usage*", "memory.limit_in_bytes"]
# ## Directories in which to look for files, globs are supported.
# ## Consider restricting paths to the set of cgroups you really
# ## want to monitor if you have a large number of cgroups, to avoid
# ## any cardinality issues.
# # paths = [
# # "/cgroup/memory",
# # "/cgroup/memory/child1",
# # "/cgroup/memory/child2/*",
# # ]
# ## cgroup stat fields, as file names, globs are supported.
# ## these file names are appended to each path from above.
# # files = ["memory.*usage*", "memory.limit_in_bytes"]
# # Pull Metric Statistics from Amazon CloudWatch
# [[inputs.cloudwatch]]
# ## Amazon Region
# region = 'us-east-1'
# region = "us-east-1"
#
# ## Amazon Credentials
# ## Credentials are loaded in the following order
@ -614,21 +641,21 @@
# #shared_credential_file = ""
#
# ## Requested CloudWatch aggregation Period (required - must be a multiple of 60s)
# period = '1m'
# period = "5m"
#
# ## Collection Delay (required - must account for metrics availability via CloudWatch API)
# delay = '1m'
# delay = "5m"
#
# ## Recommended: use metric 'interval' that is a multiple of 'period' to avoid
# ## gaps or overlap in pulled data
# interval = '1m'
# interval = "5m"
#
# ## Configure the TTL for the internal cache of metrics.
# ## Defaults to 1 hr if not specified
# #cache_ttl = '10m'
# #cache_ttl = "10m"
#
# ## Metric Statistic Namespace (required)
# namespace = 'AWS/ELB'
# namespace = "AWS/ELB"
#
# ## Maximum requests per second. Note that the global default AWS rate limit is
# ## 10 reqs/sec, so if you define multiple namespaces, these should add up to a
@ -639,12 +666,12 @@
# ## Defaults to all Metrics in Namespace if nothing is provided
# ## Refreshes Namespace available metrics every 1h
# #[[inputs.cloudwatch.metrics]]
# # names = ['Latency', 'RequestCount']
# # names = ["Latency", "RequestCount"]
# #
# # ## Dimension filters for Metric (optional)
# # [[inputs.cloudwatch.metrics.dimensions]]
# # name = 'LoadBalancerName'
# # value = 'p-example'
# # name = "LoadBalancerName"
# # value = "p-example"
# # Gather health check statuses from services registered in Consul
@ -850,12 +877,15 @@
# ## An array of addresses to gather stats about. Specify an ip or hostname
# ## with optional port. ie localhost, 10.10.3.33:1936, etc.
# ## Make sure you specify the complete path to the stats endpoint
# ## ie 10.10.3.33:1936/haproxy?stats
# ## including the protocol, ie http://10.10.3.33:1936/haproxy?stats
# #
# ## If no servers are specified, then default to 127.0.0.1:1936/haproxy?stats
# servers = ["http://myhaproxy.com:1936/haproxy?stats"]
# ## Or you can also use local socket
# ## servers = ["socket:/run/haproxy/admin.sock"]
# ##
# ## You can also use local socket with standard wildcard globbing.
# ## Server address not starting with 'http' will be treated as a possible
# ## socket, so both examples below are valid.
# ## servers = ["socket:/run/haproxy/admin.sock", "/run/haproxy/*.sock"]
# # HTTP/HTTPS request given an address a method and a timeout
@ -1000,6 +1030,22 @@
# attribute = "LoadedClassCount,UnloadedClassCount,TotalLoadedClassCount"
# # Read metrics from the kubernetes kubelet api
# [[inputs.kubernetes]]
# ## URL for the kubelet
# url = "http://1.1.1.1:10255"
#
# ## Use bearer token for authorization
# # bearer_token = /path/to/bearer/token
#
# ## Optional SSL Config
# # ssl_ca = /path/to/cafile
# # ssl_cert = /path/to/certfile
# # ssl_key = /path/to/keyfile
# ## Use SSL but skip chain & host verification
# # insecure_skip_verify = false
# # Read metrics from a LeoFS Server via SNMP
# [[inputs.leofs]]
# ## An array of URI to gather stats about LeoFS.
@ -1119,13 +1165,13 @@
# ## gather metrics from SHOW BINARY LOGS command output
# gather_binary_logs = false
# #
# ## gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMART_BY_TABLE
# ## gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMARY_BY_TABLE
# gather_table_io_waits = false
# #
# ## gather metrics from PERFORMANCE_SCHEMA.TABLE_LOCK_WAITS
# gather_table_lock_waits = false
# #
# ## gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMART_BY_INDEX_USAGE
# ## gather metrics from PERFORMANCE_SCHEMA.TABLE_IO_WAITS_SUMMARY_BY_INDEX_USAGE
# gather_index_io_waits = false
# #
# ## gather metrics from PERFORMANCE_SCHEMA.EVENT_WAITS
@ -1247,13 +1293,13 @@
# ## urls to ping
# urls = ["www.google.com"] # required
# ## number of pings to send per collection (ping -c <COUNT>)
# count = 1 # required
# # count = 1
# ## interval, in s, at which to ping. 0 == default (ping -i <PING_INTERVAL>)
# ping_interval = 0.0
# # ping_interval = 1.0
# ## per-ping timeout, in s. 0 == no timeout (ping -W <TIMEOUT>)
# timeout = 1.0
# # timeout = 1.0
# ## interface to send ping from (ping -I <INTERFACE>)
# interface = ""
# # interface = ""
# # Read metrics from one or many postgresql servers
@ -1681,9 +1727,18 @@
# ## Address and port to host HTTP listener on
# service_address = ":8186"
#
# ## timeouts
# ## maximum duration before timing out read of the request
# read_timeout = "10s"
# ## maximum duration before timing out write of the response
# write_timeout = "10s"
#
# ## Maximum allowed http request body size in bytes.
# ## 0 means to use the default of 536,870,912 bytes (512 mebibytes)
# max_body_size = 0
#
# ## Maximum line size allowed to be sent in bytes.
# ## 0 means to use the default of 65536 bytes (64 kibibytes)
# max_line_size = 0
# # Read metrics from Kafka topic(s)
@ -1778,13 +1833,18 @@
# # Read metrics from NATS subject(s)
# [[inputs.nats_consumer]]
# ## urls of NATS servers
# servers = ["nats://localhost:4222"]
# # servers = ["nats://localhost:4222"]
# ## Use Transport Layer Security
# secure = false
# # secure = false
# ## subject(s) to consume
# subjects = ["telegraf"]
# # subjects = ["telegraf"]
# ## name a queue group
# queue_group = "telegraf_consumers"
# # queue_group = "telegraf_consumers"
#
# ## Sets the limits for pending msgs and bytes for each subscription
# ## These shouldn't need to be adjusted except in very high throughput scenarios
# # pending_message_limit = 65536
# # pending_bytes_limit = 67108864
#
# ## Data format to consume.
# ## Each data format has its own unique set of configuration options, read
@ -1871,14 +1931,14 @@
# # Generic TCP listener
# [[inputs.tcp_listener]]
# ## Address and port to host TCP listener on
# service_address = ":8094"
# # service_address = ":8094"
#
# ## Number of TCP messages allowed to queue up. Once filled, the
# ## TCP listener will start dropping packets.
# allowed_pending_messages = 10000
# # allowed_pending_messages = 10000
#
# ## Maximum number of concurrent TCP connections to allow
# max_tcp_connections = 250
# # max_tcp_connections = 250
#
# ## Data format to consume.
# ## Each data format has its own unique set of configuration options, read
@ -1890,11 +1950,11 @@
# # Generic UDP listener
# [[inputs.udp_listener]]
# ## Address and port to host UDP listener on
# service_address = ":8092"
# # service_address = ":8092"
#
# ## Number of UDP messages allowed to queue up. Once filled, the
# ## UDP listener will start dropping packets.
# allowed_pending_messages = 10000
# # allowed_pending_messages = 10000
#
# ## Data format to consume.
# ## Each data format has its own unique set of configuration options, read
@ -1919,4 +1979,3 @@
#
# [inputs.webhooks.rollbar]
# path = "/rollbar"


@ -4,6 +4,7 @@ import (
"io"
"log"
"os"
"time"
"github.com/influxdata/wlog"
)
@ -19,8 +20,8 @@ type telegrafLog struct {
writer io.Writer
}
func (t *telegrafLog) Write(p []byte) (n int, err error) {
return t.writer.Write(p)
func (t *telegrafLog) Write(b []byte) (n int, err error) {
return t.writer.Write(append([]byte(time.Now().UTC().Format(time.RFC3339)+" "), b...))
}
// SetupLogging configures the logging output.
@ -30,6 +31,7 @@ func (t *telegrafLog) Write(p []byte) (n int, err error) {
// interpreted as stderr. If there is an error opening the file, the
// logger will fall back to stderr.
func SetupLogging(debug, quiet bool, logfile string) {
log.SetFlags(0)
if debug {
wlog.SetLevel(wlog.DEBUG)
}
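As an aside on why `log.SetFlags(0)` is needed: the standard logger would otherwise prepend its own date/time to each line, duplicating the RFC3339 prefix added by the writer. A standalone sketch of the same pattern, using only the standard library (type name illustrative):
```go
package main

import (
	"io"
	"log"
	"os"
	"time"
)

// rfc3339Writer prefixes each log line with an RFC3339 UTC timestamp,
// mirroring the telegrafLog.Write change above.
type rfc3339Writer struct{ w io.Writer }

func (t rfc3339Writer) Write(b []byte) (int, error) {
	return t.w.Write(append([]byte(time.Now().UTC().Format(time.RFC3339)+" "), b...))
}

func main() {
	log.SetFlags(0) // keep the timestamp from being written twice
	log.SetOutput(rfc3339Writer{os.Stderr})
	log.Printf("I! starting up") // e.g. 2016-11-08T17:23:59Z I! starting up
}
```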

logger/logger_test.go

@ -0,0 +1,62 @@
package logger
import (
"bytes"
"io/ioutil"
"log"
"os"
"testing"
"github.com/stretchr/testify/assert"
)
func TestWriteLogToFile(t *testing.T) {
tmpfile, err := ioutil.TempFile("", "")
assert.NoError(t, err)
defer func() { os.Remove(tmpfile.Name()) }()
SetupLogging(false, false, tmpfile.Name())
log.Printf("I! TEST")
log.Printf("D! TEST") // <- should be ignored
f, err := ioutil.ReadFile(tmpfile.Name())
assert.NoError(t, err)
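// The RFC3339 UTC timestamp prefix is 20 bytes ("2006-01-02T15:04:05Z"),
// so f[19:] begins at its trailing "Z" followed by the logged message.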
assert.Equal(t, f[19:], []byte("Z I! TEST\n"))
}
func TestDebugWriteLogToFile(t *testing.T) {
tmpfile, err := ioutil.TempFile("", "")
assert.NoError(t, err)
defer func() { os.Remove(tmpfile.Name()) }()
SetupLogging(true, false, tmpfile.Name())
log.Printf("D! TEST")
f, err := ioutil.ReadFile(tmpfile.Name())
assert.NoError(t, err)
assert.Equal(t, f[19:], []byte("Z D! TEST\n"))
}
func TestErrorWriteLogToFile(t *testing.T) {
tmpfile, err := ioutil.TempFile("", "")
assert.NoError(t, err)
defer func() { os.Remove(tmpfile.Name()) }()
SetupLogging(false, true, tmpfile.Name())
log.Printf("E! TEST")
log.Printf("I! TEST") // <- should be ignored
f, err := ioutil.ReadFile(tmpfile.Name())
assert.NoError(t, err)
assert.Equal(t, f[19:], []byte("Z E! TEST\n"))
}
func BenchmarkTelegrafLogWrite(b *testing.B) {
var msg = []byte("test")
var buf bytes.Buffer
w := newTelegrafWriter(&buf)
for i := 0; i < b.N; i++ {
buf.Reset()
w.Write(msg)
}
}


@ -0,0 +1,42 @@
# MinMax Aggregator Plugin
The minmax aggregator plugin aggregates min & max values of each field it sees,
emitting the aggregate every `period` seconds.
### Configuration:
```toml
# Keep the aggregate min/max of each metric passing through.
[[aggregators.minmax]]
## General Aggregator Arguments:
## The period on which to flush & clear the aggregator.
period = "30s"
## If true, the original metric will be dropped by the
## aggregator and will not get sent to the output plugins.
drop_original = false
```
### Measurements & Fields:
- measurement1
- field1_max
- field1_min
### Tags:
No tags are applied by this aggregator.
### Example Output:
```
$ telegraf --config telegraf.conf --quiet
system,host=tars load1=1.72 1475583980000000000
system,host=tars load1=1.6 1475583990000000000
system,host=tars load1=1.66 1475584000000000000
system,host=tars load1=1.63 1475584010000000000
system,host=tars load1_max=1.72,load1_min=1.6 1475584010000000000
system,host=tars load1=1.46 1475584020000000000
system,host=tars load1=1.39 1475584030000000000
system,host=tars load1=1.41 1475584040000000000
system,host=tars load1_max=1.46,load1_min=1.39 1475584040000000000
```


@ -103,9 +103,13 @@ func processChronycOutput(out string) (map[string]interface{}, map[string]string
tags["stratum"] = valueFields[0]
continue
}
if strings.Contains(strings.ToLower(name), "reference_id") {
tags["reference_id"] = valueFields[0]
continue
}
value, err := strconv.ParseFloat(valueFields[0], 64)
if err != nil {
tags[name] = strings.ToLower(valueFields[0])
tags[name] = strings.ToLower(strings.Join(valueFields, " "))
continue
}
if strings.Contains(stats[1], "slow") {


@ -27,7 +27,7 @@ func TestGather(t *testing.T) {
tags := map[string]string{
"reference_id": "192.168.1.22",
"leap_status": "normal",
"leap_status": "not synchronized",
"stratum": "3",
}
fields := map[string]interface{}{
@ -85,7 +85,7 @@ Skew : 0.006 ppm
Root delay : 0.001655 seconds
Root dispersion : 0.003307 seconds
Update interval : 507.2 seconds
Leap status : Normal
Leap status : Not synchronized
`
args := os.Args


@ -18,21 +18,28 @@ API endpoint. In the following order the plugin will attempt to authenticate.
```toml
[[inputs.cloudwatch]]
## Amazon Region (required)
region = 'us-east-1'
region = "us-east-1"
# The minimum period for Cloudwatch metrics is 1 minute (60s). However not all
# metrics are made available to the 1 minute period. Some are collected at
# 3 minute and 5 minute intervals. See https://aws.amazon.com/cloudwatch/faqs/#monitoring.
# Note that if a period is configured that is smaller than the minimum for a
# particular metric, that metric will not be returned by the Cloudwatch API
# and will not be collected by Telegraf.
#
## Requested CloudWatch aggregation Period (required - must be a multiple of 60s)
period = '1m'
period = "5m"
## Collection Delay (required - must account for metrics availability via CloudWatch API)
delay = '1m'
delay = "5m"
## Override global run interval (optional - defaults to global interval)
## Recommended: use metric 'interval' that is a multiple of 'period' to avoid
## gaps or overlap in pulled data
interval = '1m'
interval = "5m"
## Metric Statistic Namespace (required)
namespace = 'AWS/ELB'
namespace = "AWS/ELB"
## Maximum requests per second. Note that the global default AWS rate limit is
## 10 reqs/sec, so if you define multiple namespaces, these should add up to a
@ -43,16 +50,16 @@ API endpoint. In the following order the plugin will attempt to authenticate.
## Defaults to all Metrics in Namespace if nothing is provided
## Refreshes Namespace available metrics every 1h
[[inputs.cloudwatch.metrics]]
names = ['Latency', 'RequestCount']
names = ["Latency", "RequestCount"]
## Dimension filters for Metric (optional)
[[inputs.cloudwatch.metrics.dimensions]]
name = 'LoadBalancerName'
value = 'p-example'
name = "LoadBalancerName"
value = "p-example"
[[inputs.cloudwatch.metrics.dimensions]]
name = 'AvailabilityZone'
value = '*'
name = "AvailabilityZone"
value = "*"
```
#### Requirements and Terminology
@ -71,16 +78,16 @@ wildcard dimension is ignored.
Example:
```
[[inputs.cloudwatch.metrics]]
names = ['Latency']
names = ["Latency"]
## Dimension filters for Metric (optional)
[[inputs.cloudwatch.metrics.dimensions]]
name = 'LoadBalancerName'
value = 'p-example'
name = "LoadBalancerName"
value = "p-example"
[[inputs.cloudwatch.metrics.dimensions]]
name = 'AvailabilityZone'
value = '*'
name = "AvailabilityZone"
value = "*"
```
If the following ELBs are available:


@ -63,7 +63,7 @@ type (
func (c *CloudWatch) SampleConfig() string {
return `
## Amazon Region
region = 'us-east-1'
region = "us-east-1"
## Amazon Credentials
## Credentials are loaded in the following order
@ -80,22 +80,29 @@ func (c *CloudWatch) SampleConfig() string {
#profile = ""
#shared_credential_file = ""
# The minimum period for Cloudwatch metrics is 1 minute (60s). However not all
# metrics are made available to the 1 minute period. Some are collected at
# 3 minute and 5 minute intervals. See https://aws.amazon.com/cloudwatch/faqs/#monitoring.
# Note that if a period is configured that is smaller than the minimum for a
# particular metric, that metric will not be returned by the Cloudwatch API
# and will not be collected by Telegraf.
#
## Requested CloudWatch aggregation Period (required - must be a multiple of 60s)
period = '1m'
period = "5m"
## Collection Delay (required - must account for metrics availability via CloudWatch API)
delay = '1m'
delay = "5m"
## Recommended: use metric 'interval' that is a multiple of 'period' to avoid
## gaps or overlap in pulled data
interval = '1m'
interval = "5m"
## Configure the TTL for the internal cache of metrics.
## Defaults to 1 hr if not specified
#cache_ttl = '10m'
#cache_ttl = "10m"
## Metric Statistic Namespace (required)
namespace = 'AWS/ELB'
namespace = "AWS/ELB"
## Maximum requests per second. Note that the global default AWS rate limit is
## 10 reqs/sec, so if you define multiple namespaces, these should add up to a
@ -106,12 +113,12 @@ func (c *CloudWatch) SampleConfig() string {
## Defaults to all Metrics in Namespace if nothing is provided
## Refreshes Namespace available metrics every 1h
#[[inputs.cloudwatch.metrics]]
# names = ['Latency', 'RequestCount']
# names = ["Latency", "RequestCount"]
#
# ## Dimension filters for Metric (optional)
# [[inputs.cloudwatch.metrics.dimensions]]
# name = 'LoadBalancerName'
# value = 'p-example'
# name = "LoadBalancerName"
# value = "p-example"
`
}
@ -133,7 +140,6 @@ func (c *CloudWatch) Gather(acc telegraf.Accumulator) error {
if !hasWilcard(m.Dimensions) {
dimensions := make([]*cloudwatch.Dimension, len(m.Dimensions))
for k, d := range m.Dimensions {
fmt.Printf("Dimension [%s]:[%s]\n", d.Name, d.Value)
dimensions[k] = &cloudwatch.Dimension{
Name: aws.String(d.Name),
Value: aws.String(d.Value),
@ -229,13 +235,12 @@ func (c *CloudWatch) initializeCloudWatch() error {
/*
* Fetch available metrics for given CloudWatch Namespace
*/
func (c *CloudWatch) fetchNamespaceMetrics() (metrics []*cloudwatch.Metric, err error) {
func (c *CloudWatch) fetchNamespaceMetrics() ([]*cloudwatch.Metric, error) {
if c.metricCache != nil && c.metricCache.IsValid() {
metrics = c.metricCache.Metrics
return
return c.metricCache.Metrics, nil
}
metrics = []*cloudwatch.Metric{}
metrics := []*cloudwatch.Metric{}
var token *string
for more := true; more; {
@ -263,7 +268,7 @@ func (c *CloudWatch) fetchNamespaceMetrics() (metrics []*cloudwatch.Metric, err
TTL: c.CacheTTL.Duration,
}
return
return metrics, nil
}
/*


@ -133,7 +133,7 @@ The unit of fields varies by the tags.
* file_events_total(float,number)
* file_events_seconds_total(float, milliseconds)
* file_events_bytes_total(float, bytes)
* Perf file events statements - gathers attributes of each event
* Perf events statements - gathers attributes of each event
* events_statements_total(float, number)
* events_statements_seconds_total(float, milliseconds)
* events_statements_errors_total(float, number)


@ -28,12 +28,17 @@ type natsConsumer struct {
Servers []string
Secure bool
// Client pending limits:
PendingMessageLimit int
PendingBytesLimit int
// Legacy metric buffer support
MetricBuffer int
parser parsers.Parser
sync.Mutex
wg sync.WaitGroup
Conn *nats.Conn
Subs []*nats.Subscription
@ -47,13 +52,18 @@ type natsConsumer struct {
var sampleConfig = `
## urls of NATS servers
servers = ["nats://localhost:4222"]
# servers = ["nats://localhost:4222"]
## Use Transport Layer Security
secure = false
# secure = false
## subject(s) to consume
subjects = ["telegraf"]
# subjects = ["telegraf"]
## name a queue group
queue_group = "telegraf_consumers"
# queue_group = "telegraf_consumers"
## Sets the limits for pending msgs and bytes for each subscription
## These shouldn't need to be adjusted except in very high throughput scenarios
# pending_message_limit = 65536
# pending_bytes_limit = 67108864
## Data format to consume.
## Each data format has its own unique set of configuration options, read
@ -112,12 +122,22 @@ func (n *natsConsumer) Start(acc telegraf.Accumulator) error {
n.errs = make(chan error)
n.Conn.SetErrorHandler(n.natsErrHandler)
n.in = make(chan *nats.Msg)
n.in = make(chan *nats.Msg, 1000)
for _, subj := range n.Subjects {
sub, err := n.Conn.ChanQueueSubscribe(subj, n.QueueGroup, n.in)
sub, err := n.Conn.QueueSubscribe(subj, n.QueueGroup, func(m *nats.Msg) {
n.in <- m
})
if err != nil {
return err
}
// ensure that the subscription has been processed by the server
if err = n.Conn.Flush(); err != nil {
return err
}
// set the subscription pending limits
if err = sub.SetPendingLimits(n.PendingMessageLimit, n.PendingBytesLimit); err != nil {
return err
}
n.Subs = append(n.Subs, sub)
}
}
@ -125,6 +145,7 @@ func (n *natsConsumer) Start(acc telegraf.Accumulator) error {
n.done = make(chan struct{})
// Start the message reader
n.wg.Add(1)
go n.receiver()
log.Printf("I! Started the NATS consumer service, nats: %v, subjects: %v, queue: %v\n",
n.Conn.ConnectedUrl(), n.Subjects, n.QueueGroup)
@ -135,7 +156,7 @@ func (n *natsConsumer) Start(acc telegraf.Accumulator) error {
// receiver() reads all incoming messages from NATS, and parses them into
// telegraf metrics.
func (n *natsConsumer) receiver() {
defer n.clean()
defer n.wg.Done()
for {
select {
case <-n.done:
@ -151,17 +172,11 @@ func (n *natsConsumer) receiver() {
for _, metric := range metrics {
n.acc.AddFields(metric.Name(), metric.Fields(), metric.Tags(), metric.Time())
}
}
}
}
func (n *natsConsumer) clean() {
n.Lock()
defer n.Unlock()
close(n.in)
close(n.errs)
for _, sub := range n.Subs {
if err := sub.Unsubscribe(); err != nil {
log.Printf("E! Error unsubscribing from subject %s in queue %s: %s\n",
@ -177,6 +192,8 @@ func (n *natsConsumer) clean() {
func (n *natsConsumer) Stop() {
n.Lock()
close(n.done)
n.wg.Wait()
n.clean()
n.Unlock()
}
@ -186,6 +203,13 @@ func (n *natsConsumer) Gather(acc telegraf.Accumulator) error {
func init() {
inputs.Add("nats_consumer", func() telegraf.Input {
return &natsConsumer{}
return &natsConsumer{
Servers: []string{"nats://localhost:4222"},
Secure: false,
Subjects: []string{"telegraf"},
QueueGroup: "telegraf_consumers",
PendingBytesLimit: nats.DefaultSubPendingBytesLimit,
PendingMessageLimit: nats.DefaultSubPendingMsgsLimit,
}
})
}
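The subscribe pattern introduced above — callback delivery into a buffered channel, an explicit flush, then raising the pending limits — can also be seen in isolation. This hedged sketch assumes the era's `github.com/nats-io/nats` client and uses illustrative subject and queue names:
```go
package main

import (
	"log"

	"github.com/nats-io/nats"
)

func main() {
	nc, err := nats.Connect("nats://localhost:4222")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Buffered channel, as in the change above, so a slow consumer
	// doesn't immediately block the client's delivery callback.
	in := make(chan *nats.Msg, 1000)

	sub, err := nc.QueueSubscribe("telegraf", "telegraf_consumers", func(m *nats.Msg) {
		in <- m
	})
	if err != nil {
		log.Fatal(err)
	}

	// Flush ensures the server has processed the subscription
	// before we start depending on it.
	if err := nc.Flush(); err != nil {
		log.Fatal(err)
	}

	// Raise per-subscription pending limits for high-throughput subjects.
	if err := sub.SetPendingLimits(65536, 67108864); err != nil {
		log.Fatal(err)
	}

	for m := range in {
		log.Printf("I! received %d bytes", len(m.Data))
	}
}
```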


@ -39,6 +39,7 @@ func TestRunParser(t *testing.T) {
defer close(n.done)
n.parser, _ = parsers.NewInfluxParser()
n.wg.Add(1)
go n.receiver()
in <- natsMsg(testMsg)
time.Sleep(time.Millisecond * 25)
@ -56,6 +57,7 @@ func TestRunParserInvalidMsg(t *testing.T) {
defer close(n.done)
n.parser, _ = parsers.NewInfluxParser()
n.wg.Add(1)
go n.receiver()
in <- natsMsg(invalidMsg)
time.Sleep(time.Millisecond * 25)
@ -73,6 +75,7 @@ func TestRunParserAndGather(t *testing.T) {
defer close(n.done)
n.parser, _ = parsers.NewInfluxParser()
n.wg.Add(1)
go n.receiver()
in <- natsMsg(testMsg)
time.Sleep(time.Millisecond * 25)
@ -91,6 +94,7 @@ func TestRunParserAndGatherGraphite(t *testing.T) {
defer close(n.done)
n.parser, _ = parsers.NewGraphiteParser("_", []string{}, nil)
n.wg.Add(1)
go n.receiver()
in <- natsMsg(testMsgGraphite)
time.Sleep(time.Millisecond * 25)
@ -109,6 +113,7 @@ func TestRunParserAndGatherJSON(t *testing.T) {
defer close(n.done)
n.parser, _ = parsers.NewJSONParser("nats_json_test", []string{}, nil)
n.wg.Add(1)
go n.receiver()
in <- natsMsg(testMsgJSON)
time.Sleep(time.Millisecond * 25)


@ -64,7 +64,7 @@ Instances (this is an array) is the instances of a counter you would like return
it can be one or more values.
Example, `Instances = ["C:","D:","E:"]` will return only for the instances
C:, D: and E: where relevant. To get all instnaces of a Counter, use ["*"] only.
C:, D: and E: where relevant. To get all instances of a Counter, use ["*"] only.
By default any results containing _Total are stripped,
unless this is specified as the wanted instance.
Alternatively see the option IncludeTotal below.


@ -30,7 +30,7 @@ type CloudWatch struct {
var sampleConfig = `
## Amazon REGION
region = 'us-east-1'
region = "us-east-1"
## Amazon Credentials
## Credentials are loaded in the following order
@ -48,7 +48,7 @@ var sampleConfig = `
#shared_credential_file = ""
## Namespace for the CloudWatch MetricDatums
namespace = 'InfluxData/Telegraf'
namespace = "InfluxData/Telegraf"
`
func (c *CloudWatch) SampleConfig() string {


@ -0,0 +1,14 @@
# Printer Processor Plugin
The printer processor plugin simply prints every metric passing through it.
### Configuration:
```toml
# Print all metrics that pass through this filter.
[[processors.printer]]
```
### Tags:
No tags are applied by this processor.
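For reference, a processor of this shape is small; the sketch below assumes this era's `telegraf.Metric` type and `processors.Add` registry, and is meant as an illustration rather than the plugin's exact source:
```go
package printer

import (
	"fmt"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/processors"
)

type Printer struct{}

func (p *Printer) SampleConfig() string { return "" }

func (p *Printer) Description() string {
	return "Print all metrics that pass through this filter."
}

// Apply prints each metric in line protocol and passes it along unchanged.
func (p *Printer) Apply(in ...telegraf.Metric) []telegraf.Metric {
	for _, m := range in {
		fmt.Println(m.String())
	}
	return in
}

func init() {
	processors.Add("printer", func() telegraf.Processor {
		return &Printer{}
	})
}
```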


@ -82,6 +82,6 @@ if [ $? -eq 0 ]; then
unset GOGC
tag=$(git describe --exact-match HEAD)
echo $tag
exit_if_fail ./scripts/build.py --release --package --version=$tag --platform=all --arch=all --upload --bucket=dl.influxdata.com/telegraf/releases
exit_if_fail ./scripts/build.py --release --package --platform=all --arch=all --upload --bucket=dl.influxdata.com/telegraf/releases
mv build $CIRCLE_ARTIFACTS
fi