Fix spelling errors in comments and documentation (#7492)
parent c78045c13f
commit 2c56d6de81
@@ -23,7 +23,7 @@ section if available.
 
 ### Docker
 
-<!-- If your bug involves third party dependencies or services, it can be very helpful to provide a Dockerfile or docker-compose.yml that repoduces the environment you're testing against -->
+<!-- If your bug involves third party dependencies or services, it can be very helpful to provide a Dockerfile or docker-compose.yml that reproduces the environment you're testing against -->
 
 ### Steps to reproduce:
 
CHANGELOG.md (26 lines changed)
@@ -42,7 +42,7 @@
 
 - [#7371](https://github.com/influxdata/telegraf/issues/7371): Fix unable to write metrics to CloudWatch with IMDSv1 disabled.
 - [#7233](https://github.com/influxdata/telegraf/issues/7233): Fix vSphere 6.7 missing data issue.
-- [#7448](https://github.com/influxdata/telegraf/issues/7448): Remove debug fields from spunkmetric serializer.
+- [#7448](https://github.com/influxdata/telegraf/issues/7448): Remove debug fields from splunkmetric serializer.
 - [#7446](https://github.com/influxdata/telegraf/issues/7446): Fix gzip support in socket_listener with tcp sockets.
 - [#7390](https://github.com/influxdata/telegraf/issues/7390): Fix interval drift when round_interval is set in agent.
 
@@ -280,7 +280,7 @@
 - [#6695](https://github.com/influxdata/telegraf/pull/6695): Allow multiple certificates per file in x509_cert input.
 - [#6686](https://github.com/influxdata/telegraf/pull/6686): Add additional tags to the x509 input.
 - [#6703](https://github.com/influxdata/telegraf/pull/6703): Add batch data format support to file output.
-- [#6688](https://github.com/influxdata/telegraf/pull/6688): Support partition assignement strategy configuration in kafka_consumer.
+- [#6688](https://github.com/influxdata/telegraf/pull/6688): Support partition assignment strategy configuration in kafka_consumer.
 - [#6731](https://github.com/influxdata/telegraf/pull/6731): Add node type tag to mongodb input.
 - [#6669](https://github.com/influxdata/telegraf/pull/6669): Add uptime_ns field to mongodb input.
 - [#6735](https://github.com/influxdata/telegraf/pull/6735): Support resolution of symlinks in filecount input.
@@ -344,7 +344,7 @@
 
 - [#6445](https://github.com/influxdata/telegraf/issues/6445): Use batch serialization format in exec output.
 - [#6455](https://github.com/influxdata/telegraf/issues/6455): Build official packages with Go 1.12.10.
-- [#6464](https://github.com/influxdata/telegraf/pull/6464): Use case insensitive serial numer match in smart input.
+- [#6464](https://github.com/influxdata/telegraf/pull/6464): Use case insensitive serial number match in smart input.
 - [#6469](https://github.com/influxdata/telegraf/pull/6469): Add auth header only when env var is set.
 - [#6468](https://github.com/influxdata/telegraf/pull/6468): Fix running multiple mysql and sqlserver plugin instances.
 - [#6471](https://github.com/influxdata/telegraf/issues/6471): Fix database routing on retry with exclude_database_tag.
@@ -378,7 +378,7 @@
 #### Release Notes
 
 - The cluster health related fields in the elasticsearch input have been split
-  out from the `elasticsearch_indices` mesasurement into the new
+  out from the `elasticsearch_indices` measurement into the new
   `elasticsearch_cluster_health_indices` measurement as they were originally
   combined by error.
 
@@ -416,7 +416,7 @@
 - [#6006](https://github.com/influxdata/telegraf/pull/6006): Add support for interface field in http_response input plugin.
 - [#5996](https://github.com/influxdata/telegraf/pull/5996): Add container uptime_ns in docker input plugin.
 - [#6016](https://github.com/influxdata/telegraf/pull/6016): Add better user-facing errors for API timeouts in docker input.
-- [#6027](https://github.com/influxdata/telegraf/pull/6027): Add TLS mutal auth support to jti_openconfig_telemetry input.
+- [#6027](https://github.com/influxdata/telegraf/pull/6027): Add TLS mutual auth support to jti_openconfig_telemetry input.
 - [#6053](https://github.com/influxdata/telegraf/pull/6053): Add support for ES 7.x to elasticsearch output.
 - [#6062](https://github.com/influxdata/telegraf/pull/6062): Add basic auth to prometheus input plugin.
 - [#6064](https://github.com/influxdata/telegraf/pull/6064): Add node roles tag to elasticsearch input.
@@ -784,7 +784,7 @@
 
 - [#5261](https://github.com/influxdata/telegraf/pull/5261): Fix arithmetic overflow in sqlserver input.
 - [#5194](https://github.com/influxdata/telegraf/issues/5194): Fix latest metrics not sent first when output fails.
-- [#5285](https://github.com/influxdata/telegraf/issues/5285): Fix amqp_consumer stops consuming when it receives unparsable messages.
+- [#5285](https://github.com/influxdata/telegraf/issues/5285): Fix amqp_consumer stops consuming when it receives unparseable messages.
 - [#5281](https://github.com/influxdata/telegraf/issues/5281): Fix prometheus input not detecting added and removed pods.
 - [#5215](https://github.com/influxdata/telegraf/issues/5215): Remove userinfo from cluster tag in couchbase.
 - [#5298](https://github.com/influxdata/telegraf/issues/5298): Fix internal_write buffer_size not reset on timed writes.
@@ -1235,7 +1235,7 @@
 
 ### Release Notes
 
-- The `mysql` input plugin has been updated fix a number of type convertion
+- The `mysql` input plugin has been updated fix a number of type conversion
   issues. This may cause a `field type error` when inserting into InfluxDB due
   the change of types.
 
@@ -1637,7 +1637,7 @@
 - [#3058](https://github.com/influxdata/telegraf/issues/3058): Allow iptable entries with trailing text.
 - [#1680](https://github.com/influxdata/telegraf/issues/1680): Sanitize password from couchbase metric.
 - [#3104](https://github.com/influxdata/telegraf/issues/3104): Converge to typed value in prometheus output.
-- [#2899](https://github.com/influxdata/telegraf/issues/2899): Skip compilcation of logparser and tail on solaris.
+- [#2899](https://github.com/influxdata/telegraf/issues/2899): Skip compilation of logparser and tail on solaris.
 - [#2951](https://github.com/influxdata/telegraf/issues/2951): Discard logging from tail library.
 - [#3126](https://github.com/influxdata/telegraf/pull/3126): Remove log message on ping timeout.
 - [#3144](https://github.com/influxdata/telegraf/issues/3144): Don't retry points beyond retention policy.
@@ -2084,7 +2084,7 @@ consistent with the behavior of `collection_jitter`.
 - [#1390](https://github.com/influxdata/telegraf/pull/1390): Add support for Tengine
 - [#1320](https://github.com/influxdata/telegraf/pull/1320): Logparser input plugin for parsing grok-style log patterns.
 - [#1397](https://github.com/influxdata/telegraf/issues/1397): ElasticSearch: now supports connecting to ElasticSearch via SSL
-- [#1262](https://github.com/influxdata/telegraf/pull/1261): Add graylog input pluging.
+- [#1262](https://github.com/influxdata/telegraf/pull/1261): Add graylog input plugin.
 - [#1294](https://github.com/influxdata/telegraf/pull/1294): consul input plugin. Thanks @harnash
 - [#1164](https://github.com/influxdata/telegraf/pull/1164): conntrack input plugin. Thanks @robinpercy!
 - [#1165](https://github.com/influxdata/telegraf/pull/1165): vmstat input plugin. Thanks @jshim-xm!
@@ -2263,7 +2263,7 @@ It is not included on the report path. This is necessary for reporting host disk
 - [#1041](https://github.com/influxdata/telegraf/issues/1041): Add `n_cpus` field to the system plugin.
 - [#1072](https://github.com/influxdata/telegraf/pull/1072): New Input Plugin: filestat.
 - [#1066](https://github.com/influxdata/telegraf/pull/1066): Replication lag metrics for MongoDB input plugin
-- [#1086](https://github.com/influxdata/telegraf/pull/1086): Ability to specify AWS keys in config file. Thanks @johnrengleman!
+- [#1086](https://github.com/influxdata/telegraf/pull/1086): Ability to specify AWS keys in config file. Thanks @johnrengelman!
 - [#1096](https://github.com/influxdata/telegraf/pull/1096): Performance refactor of running output buffers.
 - [#967](https://github.com/influxdata/telegraf/issues/967): Buffer logging improvements.
 - [#1107](https://github.com/influxdata/telegraf/issues/1107): Support lustre2 job stats. Thanks @hanleyja!
@@ -2351,7 +2351,7 @@ because the `value` field is redundant in the graphite/librato context.
 - [#656](https://github.com/influxdata/telegraf/issues/656): No longer run `lsof` on linux to get netstat data, fixes permissions issue.
 - [#907](https://github.com/influxdata/telegraf/issues/907): Fix prometheus invalid label/measurement name key.
 - [#841](https://github.com/influxdata/telegraf/issues/841): Fix memcached unix socket panic.
-- [#873](https://github.com/influxdata/telegraf/issues/873): Fix SNMP plugin sometimes not returning metrics. Thanks @titiliambert!
+- [#873](https://github.com/influxdata/telegraf/issues/873): Fix SNMP plugin sometimes not returning metrics. Thanks @titilambert!
 - [#934](https://github.com/influxdata/telegraf/pull/934): phpfpm: Fix fcgi uri path. Thanks @rudenkovk!
 - [#805](https://github.com/influxdata/telegraf/issues/805): Kafka consumer stops gathering after i/o timeout.
 - [#959](https://github.com/influxdata/telegraf/pull/959): reduce mongodb & prometheus collection timeouts. Thanks @PierreF!
@@ -2362,7 +2362,7 @@ because the `value` field is redundant in the graphite/librato context.
 - Primarily this release was cut to fix [#859](https://github.com/influxdata/telegraf/issues/859)
 
 ### Features
-- [#747](https://github.com/influxdata/telegraf/pull/747): Start telegraf on install & remove on uninstall. Thanks @pierref!
+- [#747](https://github.com/influxdata/telegraf/pull/747): Start telegraf on install & remove on uninstall. Thanks @PierreF!
 - [#794](https://github.com/influxdata/telegraf/pull/794): Add service reload ability. Thanks @entertainyou!
 
 ### Bugfixes
@@ -2850,7 +2850,7 @@ and filtering when specifying a config file.
 - [#98](https://github.com/influxdata/telegraf/pull/98): LeoFS plugin. Thanks @mocchira!
 - [#103](https://github.com/influxdata/telegraf/pull/103): Filter by metric tags. Thanks @srfraser!
 - [#106](https://github.com/influxdata/telegraf/pull/106): Options to filter plugins on startup. Thanks @zepouet!
-- [#107](https://github.com/influxdata/telegraf/pull/107): Multiple outputs beyong influxdb. Thanks @jipperinbham!
+- [#107](https://github.com/influxdata/telegraf/pull/107): Multiple outputs beyond influxdb. Thanks @jipperinbham!
 - [#108](https://github.com/influxdata/telegraf/issues/108): Support setting per-CPU and total-CPU gathering.
 - [#111](https://github.com/influxdata/telegraf/pull/111): Report CPU Usage in cpu plugin. Thanks @jpalay!
 
@@ -117,7 +117,7 @@ telegraf config > telegraf.conf
 telegraf --section-filter agent:inputs:outputs --input-filter cpu --output-filter influxdb config
 ```
 
-#### Run a single telegraf collection, outputing metrics to stdout:
+#### Run a single telegraf collection, outputting metrics to stdout:
 
 ```
 telegraf --config telegraf.conf --test
@@ -256,7 +256,7 @@
 # specify address via a url matching:
 # postgres://[pqgotest[:password]]@localhost[/dbname]?sslmode=[disable|verify-ca|verify-full]
 # or a simple string:
-# host=localhost user=pqotest password=... sslmode=... dbname=app_production
+# host=localhost user=pqgotest password=... sslmode=... dbname=app_production
 #
 # All connection parameters are optional. By default, the host is localhost
 # and the user is the currently running user. For localhost, we default
@@ -178,7 +178,7 @@ Telegraf plugins are divided into 4 types: [inputs][], [outputs][],
 [processors][], and [aggregators][].
 
 Unlike the `global_tags` and `agent` tables, any plugin can be defined
-multiple times and each instance will run independantly. This allows you to
+multiple times and each instance will run independently. This allows you to
 have plugins defined with differing configurations as needed within a single
 Telegraf process.
 
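The hunk above documents that any plugin table may be defined more than once, with each instance running independently. As a minimal sketch of what that looks like in a config file (the `file` input, paths, and interval below are chosen purely for illustration):

```
# Two independent instances of the same input plugin; each runs on its own
# schedule and with its own settings (paths and interval are illustrative).
[[inputs.file]]
  files = ["/var/log/app-a.log"]
  data_format = "influx"

[[inputs.file]]
  files = ["/var/log/app-b.log"]
  data_format = "influx"
  interval = "60s"
```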
@@ -12,7 +12,7 @@ four main components:
 - **Timestamp**: Date and time associated with the fields.
 
 This metric type exists only in memory and must be converted to a concrete
-representation in order to be transmitted or viewed. To acheive this we
+representation in order to be transmitted or viewed. To achieve this we
 provide several [output data formats][] sometimes referred to as
 *serializers*. Our default serializer converts to [InfluxDB Line
 Protocol][line protocol] which provides a high performance and one-to-one
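For reference, a metric serialized with the default serializer comes out as one line of InfluxDB Line Protocol: measurement, tags, fields, then a nanosecond timestamp. The concrete names and values below are illustrative:

```
cpu,host=server01 usage_idle=98.3,usage_user=0.9 1465839830100400200
```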
@@ -68,7 +68,7 @@ func (n *node) recursiveSearch(lineParts []string) *Template {
    // exclude last child from search if it is a wildcard. sort.Search expects
    // a lexicographically sorted set of children and we have artificially sorted
    // wildcards to the end of the child set
-   // wildcards will be searched seperately if no exact match is found
+   // wildcards will be searched separately if no exact match is found
    if hasWildcard = n.children[length-1].value == "*"; hasWildcard {
        length--
    }
@@ -79,7 +79,7 @@ func (n *node) recursiveSearch(lineParts []string) *Template {
 
    // given an exact match is found within children set
    if i < length && n.children[i].value == lineParts[0] {
-       // decend into the matching node
+       // descend into the matching node
        if tmpl := n.children[i].recursiveSearch(lineParts[1:]); tmpl != nil {
            // given a template is found return it
            return tmpl
@@ -21,7 +21,7 @@ func ParseCiphers(ciphers []string) ([]uint16, error) {
 }
 
 // ParseTLSVersion returns a `uint16` by received version string key that represents tls version from crypto/tls.
-// If version isn't supportes ParseTLSVersion returns 0 with error
+// If version isn't supported ParseTLSVersion returns 0 with error
 func ParseTLSVersion(version string) (uint16, error) {
    if v, ok := tlsVersionMap[version]; ok {
        return v, nil
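The function above is a plain table lookup against `tlsVersionMap`. A self-contained sketch of the same pattern follows; the map contents and error text here are assumptions for illustration, not the plugin's actual table:

```
package main

import (
    "crypto/tls"
    "fmt"
)

// tlsVersionMap mirrors the kind of string-to-constant table the function
// consults; the entries here are illustrative.
var tlsVersionMap = map[string]uint16{
    "TLS10": tls.VersionTLS10,
    "TLS11": tls.VersionTLS11,
    "TLS12": tls.VersionTLS12,
}

// ParseTLSVersion returns 0 with an error when the version is not supported.
func ParseTLSVersion(version string) (uint16, error) {
    if v, ok := tlsVersionMap[version]; ok {
        return v, nil
    }
    return 0, fmt.Errorf("unsupported tls version: %q", version)
}

func main() {
    v, err := ParseTLSVersion("TLS12")
    fmt.Println(v, err) // 771 <nil>
}
```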
@@ -48,7 +48,7 @@ Examples:
 # generate config with only cpu input & influxdb output plugins defined
 telegraf --input-filter cpu --output-filter influxdb config
 
-# run a single telegraf collection, outputing metrics to stdout
+# run a single telegraf collection, outputting metrics to stdout
 telegraf --config telegraf.conf --test
 
 # run telegraf with all plugins defined in config file
@@ -50,7 +50,7 @@ Examples:
 # generate config with only cpu input & influxdb output plugins defined
 telegraf --input-filter cpu --output-filter influxdb config
 
-# run a single telegraf collection, outputing metrics to stdout
+# run a single telegraf collection, outputting metrics to stdout
 telegraf --config telegraf.conf --test
 
 # run telegraf with all plugins defined in config file
@@ -57,7 +57,7 @@ type Metric interface {
    Time() time.Time
 
    // Type returns a general type for the entire metric that describes how you
-   // might interprete, aggregate the values.
+   // might interpret, aggregate the values.
    //
    // This method may be removed in the future and its use is discouraged.
    Type() ValueType
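To make the comment concrete, a hedged sketch of how a consumer might branch on `Type()`; the aggregation hints in the strings are illustrative, and the sketch assumes the standard `telegraf.Counter`/`telegraf.Gauge` value types:

```
package example

import "github.com/influxdata/telegraf"

// describe shows one way a downstream component could use Metric.Type()
// to pick an aggregation strategy (the choices here are illustrative).
func describe(m telegraf.Metric) string {
    switch m.Type() {
    case telegraf.Counter:
        return "monotonic counter: aggregate by delta or rate"
    case telegraf.Gauge:
        return "point-in-time gauge: aggregate by last or mean"
    default:
        return "untyped: no aggregation hint"
    }
}
```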
@@ -11,7 +11,7 @@ import (
    "github.com/stretchr/testify/require"
 )
 
-// MockProcessor is a Processor with an overrideable Apply implementation.
+// MockProcessor is a Processor with an overridable Apply implementation.
 type MockProcessor struct {
    ApplyF func(in ...telegraf.Metric) []telegraf.Metric
 }
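A minimal usage sketch, assuming the mock's `Apply` delegates to `ApplyF` (that delegation is not shown in this hunk):

```
// Inside a test: a pass-through mock that returns its input unchanged.
p := &MockProcessor{
    ApplyF: func(in ...telegraf.Metric) []telegraf.Metric {
        return in
    },
}
```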
@@ -1,6 +1,6 @@
 # AMQP Consumer Input Plugin
 
-This plugin provides a consumer for use with AMQP 0-9-1, a promenent implementation of this protocol being [RabbitMQ](https://www.rabbitmq.com/).
+This plugin provides a consumer for use with AMQP 0-9-1, a prominent implementation of this protocol being [RabbitMQ](https://www.rabbitmq.com/).
 
 Metrics are read from a topic exchange using the configured queue and binding_key.
 
@@ -41,7 +41,7 @@ The following defaults are known to work with RabbitMQ:
 
 ## Additional exchange arguments.
 # exchange_arguments = { }
-# exchange_arguments = {"hash_propery" = "timestamp"}
+# exchange_arguments = {"hash_property" = "timestamp"}
 
 ## AMQP queue name
 queue = "telegraf"
@@ -116,7 +116,7 @@ func (a *AMQPConsumer) SampleConfig() string {
 
 ## Additional exchange arguments.
 # exchange_arguments = { }
-# exchange_arguments = {"hash_propery" = "timestamp"}
+# exchange_arguments = {"hash_property" = "timestamp"}
 
 ## AMQP queue name.
 queue = "telegraf"
@@ -49,7 +49,7 @@ It has been optimized to support GNMI telemetry as produced by Cisco IOS XR (64-
 ## See: https://github.com/openconfig/reference/blob/master/rpc/gnmi/gnmi-specification.md#222-paths
 ##
 ## origin usually refers to a (YANG) data model implemented by the device
-## and path to a specific substructe inside it that should be subscribed to (similar to an XPath)
+## and path to a specific substructure inside it that should be subscribed to (similar to an XPath)
 ## YANG models can be found e.g. here: https://github.com/YangModels/yang/tree/master/vendor/cisco/xr
 origin = "openconfig-interfaces"
 path = "/interfaces/interface/state/counters"
@@ -515,7 +515,7 @@ const sampleConfig = `
 ## See: https://github.com/openconfig/reference/blob/master/rpc/gnmi/gnmi-specification.md#222-paths
 ##
 ## origin usually refers to a (YANG) data model implemented by the device
-## and path to a specific substructe inside it that should be subscribed to (similar to an XPath)
+## and path to a specific substructure inside it that should be subscribed to (similar to an XPath)
 ## YANG models can be found e.g. here: https://github.com/YangModels/yang/tree/master/vendor/cisco/xr
 origin = "openconfig-interfaces"
 path = "/interfaces/interface/state/counters"
@@ -34,7 +34,7 @@ For more information on conntrack-tools, see the
    "nf_conntrack_count","nf_conntrack_max"]
 
 ## Directories to search within for the conntrack files above.
-## Missing directrories will be ignored.
+## Missing directories will be ignored.
 dirs = ["/proc/sys/net/ipv4/netfilter","/proc/sys/net/netfilter"]
 ```
 
@@ -61,7 +61,7 @@ var sampleConfig = `
    "nf_conntrack_count","nf_conntrack_max"]
 
 ## Directories to search within for the conntrack files above.
-## Missing directrories will be ignored.
+## Missing directories will be ignored.
 dirs = ["/proc/sys/net/ipv4/netfilter","/proc/sys/net/netfilter"]
 `
 
@@ -44,7 +44,7 @@ report those stats already using StatsD protocol if needed.
 
 - consul_health_checks
   - tags:
-    - node (node that check/service is registred on)
+    - node (node that check/service is registered on)
     - service_name
     - check_id
   - fields:
@@ -12,7 +12,7 @@
 ## http://admin:secret@couchbase-0.example.com:8091/
 ##
 ## If no servers are specified, then localhost is used as the host.
-## If no protocol is specifed, HTTP is used.
+## If no protocol is specified, HTTP is used.
 ## If no port is specified, 8091 is used.
 servers = ["http://localhost:8091"]
 ```
@@ -55,7 +55,7 @@ func TestCPUStats(t *testing.T) {
    err := cs.Gather(&acc)
    require.NoError(t, err)
 
-   // Computed values are checked with delta > 0 because of floating point arithmatic
+   // Computed values are checked with delta > 0 because of floating point arithmetic
    // imprecision
    assertContainsTaggedFloat(t, &acc, "cpu", "time_user", 8.8, 0, cputags)
    assertContainsTaggedFloat(t, &acc, "cpu", "time_system", 8.2, 0, cputags)
@@ -102,7 +102,7 @@ func TestCPUStats(t *testing.T) {
    assertContainsTaggedFloat(t, &acc, "cpu", "usage_guest_nice", 2.2, 0.0005, cputags)
 }
 
-// Asserts that a given accumulator contains a measurment of type float64 with
+// Asserts that a given accumulator contains a measurement of type float64 with
 // specific tags within a certain distance of a given expected value. Asserts a failure
 // if the measurement is of the wrong type, or if no matching measurements are found
 //
@@ -113,7 +113,7 @@ func TestCPUStats(t *testing.T) {
 // expectedValue float64 : Value to search for within the measurement
 // delta float64 : Maximum acceptable distance of an accumulated value
 //                 from the expectedValue parameter. Useful when
-//                 floating-point arithmatic imprecision makes looking
+//                 floating-point arithmetic imprecision makes looking
 //                 for an exact match impractical
 // tags map[string]string : Tag set the found measurement must have. Set to nil to
 //                          ignore the tag set.
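The `delta` parameter documented above is the standard tolerance trick for comparing floats. A tiny self-contained sketch of the check (the helper name is illustrative, not the test's actual implementation):

```
package main

import (
    "fmt"
    "math"
)

// withinDelta reports whether actual is within delta of expected,
// absorbing floating-point arithmetic imprecision.
func withinDelta(actual, expected, delta float64) bool {
    return math.Abs(actual-expected) <= delta
}

func main() {
    fmt.Println(withinDelta(2.2001, 2.2, 0.0005)) // true
    fmt.Println(withinDelta(2.21, 2.2, 0.0005))   // false
}
```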
@@ -225,7 +225,7 @@ func TestCPUTimesDecrease(t *testing.T) {
    err := cs.Gather(&acc)
    require.NoError(t, err)
 
-   // Computed values are checked with delta > 0 because of floating point arithmatic
+   // Computed values are checked with delta > 0 because of floating point arithmetic
    // imprecision
    assertContainsTaggedFloat(t, &acc, "cpu", "time_user", 18, 0, cputags)
    assertContainsTaggedFloat(t, &acc, "cpu", "time_idle", 80, 0, cputags)
@@ -16,7 +16,7 @@ The DNS plugin gathers dns query times in miliseconds - like [Dig](https://en.wi
 # domains = ["."]
 
 ## Query record type.
-## Posible values: A, AAAA, CNAME, MX, NS, PTR, TXT, SOA, SPF, SRV.
+## Possible values: A, AAAA, CNAME, MX, NS, PTR, TXT, SOA, SPF, SRV.
 # record_type = "A"
 
 ## Dns server port.
@@ -52,7 +52,7 @@ var sampleConfig = `
 # domains = ["."]
 
 ## Query record type.
-## Posible values: A, AAAA, CNAME, MX, NS, PTR, TXT, SOA, SPF, SRV.
+## Possible values: A, AAAA, CNAME, MX, NS, PTR, TXT, SOA, SPF, SRV.
 # record_type = "A"
 
 ## Dns server port.
@@ -184,7 +184,7 @@ The above measurements for the devicemapper storage driver can now be found in t
   - container_status
   - container_version
 + fields:
-  - total_pgmafault
+  - total_pgmajfault
   - cache
   - mapped_file
   - total_inactive_file
@@ -73,7 +73,7 @@ const (
 
 var (
    containerStates = []string{"created", "restarting", "running", "removing", "paused", "exited", "dead"}
-   // ensure *DockerLogs implements telegaf.ServiceInput
+   // ensure *DockerLogs implements telegraf.ServiceInput
    _ telegraf.ServiceInput = (*DockerLogs)(nil)
 )
 
@@ -18,7 +18,7 @@ Specific Elasticsearch endpoints that are queried:
 - Indices Stats: /_all/_stats
 - Shard Stats: /_all/_stats?level=shards
 
-Note that specific statistics information can change between Elassticsearch versions. In general, this plugin attempts to stay as version-generic as possible by tagging high-level categories only and using a generic json parser to make unique field names of whatever statistics names are provided at the mid-low level.
+Note that specific statistics information can change between Elasticsearch versions. In general, this plugin attempts to stay as version-generic as possible by tagging high-level categories only and using a generic json parser to make unique field names of whatever statistics names are provided at the mid-low level.
 
 ### Configuration
 
@@ -58,7 +58,7 @@ systems.
 
 #### With a PowerShell on Windows, the output of the script appears to be truncated.
 
-You may need to set a variable in your script to increase the numer of columns
+You may need to set a variable in your script to increase the number of columns
 available for output:
 ```
 $host.UI.RawUI.BufferSize = new-object System.Management.Automation.Host.Size(1024,50)
@@ -5,7 +5,7 @@ out to a stand-alone repo for the purpose of compiling it as a separate app and
 running it from the inputs.execd plugin.
 
 The execd-shim is still experimental and the interface may change in the future.
-Especially as the concept expands to prcoessors, aggregators, and outputs.
+Especially as the concept expands to processors, aggregators, and outputs.
 
 ## Steps to externalize a plugin
 
@@ -203,7 +203,7 @@ func getFakeFileSystem(basePath string) fakeFileSystem {
    mtime := time.Date(2015, time.December, 14, 18, 25, 5, 0, time.UTC)
    olderMtime := time.Date(2010, time.December, 14, 18, 25, 5, 0, time.UTC)
 
-   // set file permisions
+   // set file permissions
    var fmask uint32 = 0666
    var dmask uint32 = 0666
 
@@ -72,7 +72,7 @@ func getTestFileSystem() fakeFileSystem {
 
    mtime := time.Date(2015, time.December, 14, 18, 25, 5, 0, time.UTC)
 
-   // set file permisions
+   // set file permissions
    var fmask uint32 = 0666
    var dmask uint32 = 0666
 
@@ -53,7 +53,7 @@ type pluginData struct {
 
 // parse JSON from fluentd Endpoint
 // Parameters:
-//     data: unprocessed json recivied from endpoint
+//     data: unprocessed json received from endpoint
 //
 // Returns:
 //     pluginData: slice that contains parsed plugins
@@ -76,7 +76,7 @@ func parse(data []byte) (datapointArray []pluginData, err error) {
 // Description - display description
 func (h *Fluentd) Description() string { return description }
 
-// SampleConfig - generate configuretion
+// SampleConfig - generate configuration
 func (h *Fluentd) SampleConfig() string { return sampleConfig }
 
 // Gather - Main code responsible for gathering, processing and creating metrics
@@ -46,7 +46,7 @@ When the [internal][] input is enabled:
 
 + internal_github
   - tags:
-    - access_token - An obfusticated reference to the configured access token or "Unauthenticated"
+    - access_token - An obfuscated reference to the configured access token or "Unauthenticated"
   - fields:
     - limit - How many requests you are limited to (per hour)
     - remaining - How many requests you have remaining (per hour)
@@ -7,7 +7,7 @@ Plugin currently support two type of end points:-
 - multiple (Ex http://[graylog-server-ip]:12900/system/metrics/multiple)
 - namespace (Ex http://[graylog-server-ip]:12900/system/metrics/namespace/{namespace})
 
-End Point can be a mixe of one multiple end point and several namespaces end points
+End Point can be a mix of one multiple end point and several namespaces end points
 
 
 Note: if namespace end point specified metrics array will be ignored for that call.
@@ -47,7 +47,7 @@ type HTTPClient interface {
    // req: HTTP request object
    //
    // Returns:
-   //     http.Response: HTTP respons object
+   //     http.Response: HTTP response object
    //     error        : Any error that may have occurred
    MakeRequest(req *http.Request) (*http.Response, error)
 
@@ -274,7 +274,7 @@ func (h *HTTPResponse) httpGather(u string) (map[string]interface{}, map[string]
    // Get error details
    netErr := setError(err, fields, tags)
 
-   // If recognize the returnded error, get out
+   // If recognize the returned error, get out
    if netErr != nil {
        return fields, tags, nil
    }
@@ -722,7 +722,7 @@ func TestNetworkErrors(t *testing.T) {
    absentTags := []string{"status_code"}
    checkOutput(t, &acc, expectedFields, expectedTags, absentFields, absentTags)
 
-   // Connecton failed
+   // Connection failed
    h = &HTTPResponse{
        Log:     testutil.Logger{},
        Address: "https:/nonexistent.nonexistent", // Any non-routable IP works here
@@ -42,7 +42,7 @@ type HTTPClient interface {
    // req: HTTP request object
    //
    // Returns:
-   //     http.Response: HTTP respons object
+   //     http.Response: HTTP response object
    //     error        : Any error that may have occurred
    MakeRequest(req *http.Request) (*http.Response, error)
 
@@ -44,7 +44,7 @@ ipmitool -I lan -H SERVER -U USERID -P PASSW0RD sdr
 ##
 # servers = ["USERID:PASSW0RD@lan(192.168.1.1)"]
 
-## Recomended: use metric 'interval' that is a multiple of 'timeout' to avoid
+## Recommended: use metric 'interval' that is a multiple of 'timeout' to avoid
 ## gaps or overlap in pulled data
 interval = "30s"
 
@@ -137,7 +137,7 @@ func (j *Jenkins) newHTTPClient() (*http.Client, error) {
    }, nil
 }
 
-// seperate the client as dependency to use httptest Client for mocking
+// separate the client as dependency to use httptest Client for mocking
 func (j *Jenkins) initialize(client *http.Client) error {
    var err error
 
@@ -75,8 +75,8 @@ func TestResultCode(t *testing.T) {
 }
 
 type mockHandler struct {
-   // responseMap is the path to repsonse interface
-   // we will ouput the serialized response in json when serving http
+   // responseMap is the path to response interface
+   // we will output the serialized response in json when serving http
    // example '/computer/api/json': *gojenkins.
    responseMap map[string]interface{}
 }
@@ -43,7 +43,7 @@ func (g *Gatherer) Gather(client *Client, acc telegraf.Accumulator) error {
    return nil
 }
 
-// gatherReponses adds points to an accumulator from the ReadResponse objects
+// gatherResponses adds points to an accumulator from the ReadResponse objects
 // returned by a Jolokia agent.
 func (g *Gatherer) gatherResponses(responses []ReadResponse, tags map[string]string, acc telegraf.Accumulator) {
    series := make(map[string][]point, 0)
@@ -144,7 +144,7 @@ func metricMatchesResponse(metric Metric, response ReadResponse) bool {
    return false
 }
 
-// compactPoints attepts to remove points by compacting points
+// compactPoints attempts to remove points by compacting points
 // with matching tag sets. When a match is found, the fields from
 // one point are moved to another, and the empty point is removed.
 func compactPoints(points []point) []point {
@@ -980,7 +980,7 @@ type OpenConfigTelemetryClient interface {
    // The device should send telemetry data back on the same
    // connection as the subscription request.
    TelemetrySubscribe(ctx context.Context, in *SubscriptionRequest, opts ...grpc.CallOption) (OpenConfigTelemetry_TelemetrySubscribeClient, error)
-   // Terminates and removes an exisiting telemetry subscription
+   // Terminates and removes an existing telemetry subscription
    CancelTelemetrySubscription(ctx context.Context, in *CancelSubscriptionRequest, opts ...grpc.CallOption) (*CancelSubscriptionReply, error)
    // Get the list of current telemetry subscriptions from the
    // target. This command returns a list of existing subscriptions
@@ -1076,7 +1076,7 @@ type OpenConfigTelemetryServer interface {
    // The device should send telemetry data back on the same
    // connection as the subscription request.
    TelemetrySubscribe(*SubscriptionRequest, OpenConfigTelemetry_TelemetrySubscribeServer) error
-   // Terminates and removes an exisiting telemetry subscription
+   // Terminates and removes an existing telemetry subscription
    CancelTelemetrySubscription(context.Context, *CancelSubscriptionRequest) (*CancelSubscriptionReply, error)
    // Get the list of current telemetry subscriptions from the
    // target. This command returns a list of existing subscriptions
@@ -44,7 +44,7 @@ service OpenConfigTelemetry {
    // connection as the subscription request.
    rpc telemetrySubscribe(SubscriptionRequest) returns (stream OpenConfigData) {}
 
-   // Terminates and removes an exisiting telemetry subscription
+   // Terminates and removes an existing telemetry subscription
    rpc cancelTelemetrySubscription(CancelSubscriptionRequest) returns (CancelSubscriptionReply) {}
 
    // Get the list of current telemetry subscriptions from the
@@ -78,7 +78,7 @@ DynamoDB:
 #### DynamoDB Checkpoint
 
 The DynamoDB checkpoint stores the last processed record in a DynamoDB. To leverage
-this functionality, create a table with the folowing string type keys:
+this functionality, create a table with the following string type keys:
 
 ```
 Partition key: namespace
@@ -116,7 +116,7 @@ Architecture][k8s-telegraf] or view the Helm charts:
   - rootfs_available_bytes
   - rootfs_capacity_bytes
   - rootfs_used_bytes
-  - logsfs_avaialble_bytes
+  - logsfs_available_bytes
   - logsfs_capacity_bytes
   - logsfs_used_bytes
 
@@ -146,7 +146,7 @@ Architecture][k8s-telegraf] or view the Helm charts:
 
 ```
 kubernetes_node
-kubernetes_pod_container,container_name=deis-controller,namespace=deis,node_name=ip-10-0-0-0.ec2.internal,pod_name=deis-controller-3058870187-xazsr cpu_usage_core_nanoseconds=2432835i,cpu_usage_nanocores=0i,logsfs_avaialble_bytes=121128271872i,logsfs_capacity_bytes=153567944704i,logsfs_used_bytes=20787200i,memory_major_page_faults=0i,memory_page_faults=175i,memory_rss_bytes=0i,memory_usage_bytes=0i,memory_working_set_bytes=0i,rootfs_available_bytes=121128271872i,rootfs_capacity_bytes=153567944704i,rootfs_used_bytes=1110016i 1476477530000000000
+kubernetes_pod_container,container_name=deis-controller,namespace=deis,node_name=ip-10-0-0-0.ec2.internal,pod_name=deis-controller-3058870187-xazsr cpu_usage_core_nanoseconds=2432835i,cpu_usage_nanocores=0i,logsfs_available_bytes=121128271872i,logsfs_capacity_bytes=153567944704i,logsfs_used_bytes=20787200i,memory_major_page_faults=0i,memory_page_faults=175i,memory_rss_bytes=0i,memory_usage_bytes=0i,memory_working_set_bytes=0i,rootfs_available_bytes=121128271872i,rootfs_capacity_bytes=153567944704i,rootfs_used_bytes=1110016i 1476477530000000000
 kubernetes_pod_network,namespace=deis,node_name=ip-10-0-0-0.ec2.internal,pod_name=deis-controller-3058870187-xazsr rx_bytes=120671099i,rx_errors=0i,tx_bytes=102451983i,tx_errors=0i 1476477530000000000
 kubernetes_pod_volume,volume_name=default-token-f7wts,namespace=default,node_name=ip-172-17-0-1.internal,pod_name=storage-7 available_bytes=8415240192i,capacity_bytes=8415252480i,used_bytes=12288i 1546910783000000000
 kubernetes_system_container
@@ -2,7 +2,7 @@ package kubernetes
 
 import "time"
 
-// SummaryMetrics represents all the summary data about a paritcular node retrieved from a kubelet
+// SummaryMetrics represents all the summary data about a particular node retrieved from a kubelet
 type SummaryMetrics struct {
    Node NodeMetrics `json:"node"`
    Pods []PodMetrics `json:"pods"`
@@ -366,7 +366,7 @@ func (l *Lustre2) GetLustreProcStats(fileglob string, wantedFields []*mapping, a
    for _, file := range files {
        /* Turn /proc/fs/lustre/obdfilter/<ost_name>/stats and similar
         * into just the object store target name
-        * Assumpion: the target name is always second to last,
+        * Assumption: the target name is always second to last,
         * which is true in Lustre 2.1->2.8
         */
        path := strings.Split(file, "/")
@@ -242,7 +242,7 @@ func metricsDiff(role Role, w []string) []string {
    return b
 }
 
-// masterBlocks serves as kind of metrics registry groupping them in sets
+// masterBlocks serves as kind of metrics registry grouping them in sets
 func getMetrics(role Role, group string) []string {
    var m map[string][]string
 
@@ -78,7 +78,7 @@ minecraft,player=dinnerbone,source=127.0.0.1,port=25575 deaths=1i,jumps=1999i,co
 minecraft,player=jeb,source=127.0.0.1,port=25575 d_pickaxe=1i,damage_dealt=80i,d_sword=2i,hunger=20i,health=20i,kills=1i,level=33i,jumps=264i,armor=15i 1498261397000000000
 ```
 
-[server.properies]: https://minecraft.gamepedia.com/Server.properties
+[server.properties]: https://minecraft.gamepedia.com/Server.properties
 [scoreboard]: http://minecraft.gamepedia.com/Scoreboard
 [objectives]: https://minecraft.gamepedia.com/Scoreboard#Objectives
 [rcon]: http://wiki.vg/RCON
|
@@ -57,7 +57,7 @@ type Packet struct {
 Body string // Body of packet.
 }

-// Compile converts a packets header and body into its approriate
+// Compile converts a packets header and body into its appropriate
 // byte array payload, returning an error if the binary packages
 // Write method fails to write the header bytes in their little
 // endian byte order.
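The little-endian header serialization this comment describes can be sketched with Go's `encoding/binary` package. This is a minimal sketch only: the `header` struct and its field names are illustrative assumptions, not the plugin's actual types.

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// header is a hypothetical RCON-style packet header; the plugin's
// real layout may differ.
type header struct {
	Size      int32 // length of the rest of the packet
	Challenge int32 // id the server mirrors back
	Type      int32 // request/response type
}

func main() {
	h := header{Size: 10, Challenge: 42, Type: 2}
	buf := new(bytes.Buffer)
	// binary.Write serializes each field in little-endian byte
	// order, which is what a Compile-style method needs to do.
	if err := binary.Write(buf, binary.LittleEndian, h); err != nil {
		fmt.Println("write failed:", err)
		return
	}
	fmt.Printf("% x\n", buf.Bytes())
}
```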
@@ -112,7 +112,7 @@ func (c *Client) Execute(command string) (response *Packet, err error) {

 // Sends accepts the commands type and its string to execute to the clients server,
 // creating a packet with a random challenge id for the server to mirror,
-// and compiling its payload bytes in the appropriate order. The resonse is
+// and compiling its payload bytes in the appropriate order. The response is
 // decompiled from its bytes into a Packet type for return. An error is returned
 // if send fails.
 func (c *Client) Send(typ int32, command string) (response *Packet, err error) {
@@ -152,7 +152,7 @@ func (c *Client) Send(typ int32, command string) (response *Packet, err error) {
 }

 if packet.Header.Type == Auth && header.Type == ResponseValue {
-// Discard, empty SERVERDATA_RESPOSE_VALUE from authorization.
+// Discard, empty SERVERDATA_RESPONSE_VALUE from authorization.
 c.Connection.Read(make([]byte, header.Size-int32(PacketHeaderSize)))

 // Reread the packet header.
@@ -215,7 +215,7 @@ by running Telegraf with the `--debug` argument.
 - repl_inserts_per_sec (integer, deprecated in 1.10; use `repl_inserts`))
 - repl_queries_per_sec (integer, deprecated in 1.10; use `repl_queries`))
 - repl_updates_per_sec (integer, deprecated in 1.10; use `repl_updates`))
-- ttl_deletes_per_sec (integer, deprecated in 1.10; use `ttl_deltes`))
+- ttl_deletes_per_sec (integer, deprecated in 1.10; use `ttl_deletes`))
 - ttl_passes_per_sec (integer, deprecated in 1.10; use `ttl_passes`))
 - updates_per_sec (integer, deprecated in 1.10; use `updates`))
@@ -247,7 +247,7 @@ by running Telegraf with the `--debug` argument.
 - total_index_size (integer)
 - ok (integer)
 - count (integer)
-- type (tring)
+- type (string)

 - mongodb_shard_stats
 - tags:
@@ -1,7 +1,7 @@
 /***
 The code contained here came from https://github.com/mongodb/mongo-tools/blob/master/mongostat/stat_types.go
 and contains modifications so that no other dependency from that project is needed. Other modifications included
-removing uneccessary code specific to formatting the output and determine the current state of the database. It
+removing unnecessary code specific to formatting the output and determine the current state of the database. It
 is licensed under Apache Version 2.0, http://www.apache.org/licenses/LICENSE-2.0.html
 ***/
@@ -317,7 +317,7 @@ type NetworkStats struct {
 NumRequests int64 `bson:"numRequests"`
 }

-// OpcountStats stores information related to comamnds and basic CRUD operations.
+// OpcountStats stores information related to commands and basic CRUD operations.
 type OpcountStats struct {
 Insert int64 `bson:"insert"`
 Query int64 `bson:"query"`
@@ -691,7 +691,7 @@ type StatLine struct {
 CacheDirtyPercent float64
 CacheUsedPercent float64

-// Cache ultilization extended (wiredtiger only)
+// Cache utilization extended (wiredtiger only)
 TrackedDirtyBytes int64
 CurrentCachedBytes int64
 MaxBytesConfigured int64
@@ -41,7 +41,7 @@ Minimum Version of Monit tested with is 5.16.
 - address
 - version
 - service
-- paltform_name
+- platform_name
 - status
 - monitoring_status
 - monitoring_mode

@@ -62,7 +62,7 @@ Minimum Version of Monit tested with is 5.16.
 - address
 - version
 - service
-- paltform_name
+- platform_name
 - status
 - monitoring_status
 - monitoring_mode

@@ -77,7 +77,7 @@ Minimum Version of Monit tested with is 5.16.
 - address
 - version
 - service
-- paltform_name
+- platform_name
 - status
 - monitoring_status
 - monitoring_mode

@@ -93,7 +93,7 @@ Minimum Version of Monit tested with is 5.16.
 - address
 - version
 - service
-- paltform_name
+- platform_name
 - status
 - monitoring_status
 - monitoring_mode

@@ -117,7 +117,7 @@ Minimum Version of Monit tested with is 5.16.
 - address
 - version
 - service
-- paltform_name
+- platform_name
 - status
 - monitoring_status
 - monitoring_mode

@@ -136,7 +136,7 @@ Minimum Version of Monit tested with is 5.16.
 - address
 - version
 - service
-- paltform_name
+- platform_name
 - status
 - monitoring_status
 - monitoring_mode

@@ -160,7 +160,7 @@ Minimum Version of Monit tested with is 5.16.
 - address
 - version
 - service
-- paltform_name
+- platform_name
 - status
 - monitoring_status
 - monitoring_mode

@@ -175,7 +175,7 @@ Minimum Version of Monit tested with is 5.16.
 - address
 - version
 - service
-- paltform_name
+- platform_name
 - status
 - monitoring_status
 - monitoring_mode

@@ -189,7 +189,7 @@ Minimum Version of Monit tested with is 5.16.
 - address
 - version
 - service
-- paltform_name
+- platform_name
 - status
 - monitoring_status
 - monitoring_mode

@@ -203,7 +203,7 @@ Minimum Version of Monit tested with is 5.16.
 - address
 - version
 - service
-- paltform_name
+- platform_name
 - status
 - monitoring_status
 - monitoring_mode

@@ -217,7 +217,7 @@ Minimum Version of Monit tested with is 5.16.
 - address
 - version
 - service
-- paltform_name
+- platform_name
 - status
 - monitoring_status
 - monitoring_mode
@@ -194,7 +194,7 @@ func (m *MQTTConsumer) Start(acc telegraf.Accumulator) error {

 // AddRoute sets up the function for handling messages. These need to be
 // added in case we find a persistent session containing subscriptions so we
-// know where to dispatch presisted and new messages to. In the alternate
+// know where to dispatch persisted and new messages to. In the alternate
 // case that we need to create the subscriptions these will be replaced.
 for _, topic := range m.Topics {
 m.client.AddRoute(topic, m.recvMessage)
@@ -218,7 +218,7 @@ func (m *MQTTConsumer) connect() error {
 m.state = Connected
 m.messages = make(map[telegraf.TrackingID]bool)

-// Presistent sessions should skip subscription if a session is present, as
+// Persistent sessions should skip subscription if a session is present, as
 // the subscriptions are stored by the server.
 type sessionPresent interface {
 SessionPresent() bool
@@ -40,11 +40,11 @@ Path of the file to be parsed, relative to the `base_dir`.
 Name of the field/tag key, defaults to `$(basename file)`.
 * `conversion`:
 Data format used to parse the file contents:
-* `float(X)`: Converts the input value into a float and divides by the Xth power of 10. Efficively just moves the decimal left X places. For example a value of `123` with `float(2)` will result in `1.23`.
+* `float(X)`: Converts the input value into a float and divides by the Xth power of 10. Effectively just moves the decimal left X places. For example a value of `123` with `float(2)` will result in `1.23`.
 * `float`: Converts the value into a float with no adjustment. Same as `float(0)`.
-* `int`: Convertes the value into an integer.
+* `int`: Converts the value into an integer.
 * `string`, `""`: No conversion.
-* `bool`: Convertes the value into a boolean.
+* `bool`: Converts the value into a boolean.
 * `tag`: File content is used as a tag.
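The `float(X)` rule in the list above is simple enough to sketch in Go. This is a rough illustration only, using a hypothetical helper name rather than the plugin's actual implementation:

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
)

// convertFloat mimics a float(X)-style conversion: parse the file
// contents as a number, then divide by 10^x, which moves the
// decimal point left x places.
func convertFloat(raw string, x int) (float64, error) {
	v, err := strconv.ParseFloat(strings.TrimSpace(raw), 64)
	if err != nil {
		return 0, err
	}
	return v / math.Pow10(x), nil
}

func main() {
	v, _ := convertFloat("123", 2)
	fmt.Println(v) // prints 1.23, matching the float(2) example above
}
```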

 ### Example Output
@@ -45,7 +45,7 @@ This plugin gathers the statistic data from MySQL server
 ## <1.6: metric_version = 1 (or unset)
 metric_version = 2

-## if the list is empty, then metrics are gathered from all databasee tables
+## if the list is empty, then metrics are gathered from all database tables
 # table_schema_databases = []

 ## gather metrics from INFORMATION_SCHEMA.TABLES for databases provided above list
@@ -153,7 +153,7 @@ If you wish to remove the `name_suffix` you may use Kapacitor to copy the
 historical data to the default name. Do this only after retiring the old
 measurement name.

-1. Use the techinique described above to write to multiple locations:
+1. Use the technique described above to write to multiple locations:
 ```toml
 [[inputs.mysql]]
 servers = ["tcp(127.0.0.1:3306)/"]
@@ -283,7 +283,7 @@ The unit of fields varies by the tags.
 * events_statements_rows_examined_total(float, number)
 * events_statements_tmp_tables_total(float, number)
 * events_statements_tmp_disk_tables_total(float, number)
-* events_statements_sort_merge_passes_totales(float, number)
+* events_statements_sort_merge_passes_totals(float, number)
 * events_statements_sort_rows_total(float, number)
 * events_statements_no_index_used_total(float, number)
 * Table schema - gathers statistics of each schema. It has following measurements
@@ -71,7 +71,7 @@ const sampleConfig = `
 ## <1.6: metric_version = 1 (or unset)
 metric_version = 2

-## if the list is empty, then metrics are gathered from all databasee tables
+## if the list is empty, then metrics are gathered from all database tables
 # table_schema_databases = []

 ## gather metrics from INFORMATION_SCHEMA.TABLES for databases provided above list
@@ -151,7 +151,7 @@ func TestNSQStatsV1(t *testing.T) {
 }
 }

-// v1 version of localhost/stats?format=json reesponse body
+// v1 version of localhost/stats?format=json response body
 var responseV1 = `
 {
 "version": "1.0.0-compat",
@@ -12,7 +12,7 @@ This plugin gathers stats from [OpenSMTPD - a FREE implementation of the server-
 ## The default location of the smtpctl binary can be overridden with:
 binary = "/usr/sbin/smtpctl"

-# The default timeout of 1s can be overriden with:
+# The default timeout of 1s can be overridden with:
 #timeout = "1s"
 ```
@@ -37,7 +37,7 @@ var sampleConfig = `
 ## The default location of the smtpctl binary can be overridden with:
 binary = "/usr/sbin/smtpctl"

-## The default timeout of 1000ms can be overriden with (in milliseconds):
+## The default timeout of 1000ms can be overridden with (in milliseconds):
 timeout = 1000
 `
@@ -1,8 +1,8 @@
 # PF Plugin

-The pf plugin gathers information from the FreeBSD/OpenBSD pf firewall. Currently it can retrive information about the state table: the number of current entries in the table, and counters for the number of searches, inserts, and removals to the table.
+The pf plugin gathers information from the FreeBSD/OpenBSD pf firewall. Currently it can retrieve information about the state table: the number of current entries in the table, and counters for the number of searches, inserts, and removals to the table.

-The pf plugin retrives this information by invoking the `pfstat` command. The `pfstat` command requires read access to the device file `/dev/pf`. You have several options to permit telegraf to run `pfctl`:
+The pf plugin retrieves this information by invoking the `pfstat` command. The `pfstat` command requires read access to the device file `/dev/pf`. You have several options to permit telegraf to run `pfctl`:

 * Run telegraf as root. This is strongly discouraged.
 * Change the ownership and permissions for /dev/pf such that the user telegraf runs at can read the /dev/pf device file. This is probably not that good of an idea either.
@@ -15,7 +15,7 @@ More information about the meaning of these metrics can be found in the
 ## postgres://[pqgotest[:password]]@host:port[/dbname]\
 ## ?sslmode=[disable|verify-ca|verify-full]
 ## or a simple string:
-## host=localhost port=5432 user=pqotest password=... sslmode=... dbname=app_production
+## host=localhost port=5432 user=pqgotest password=... sslmode=... dbname=app_production
 ##
 ## All connection parameters are optional.
 ##
@@ -24,7 +24,7 @@ var sampleConfig = `
 ## postgres://[pqgotest[:password]]@localhost[/dbname]\
 ## ?sslmode=[disable|verify-ca|verify-full]
 ## or a simple string:
-## host=localhost user=pqotest password=... sslmode=... dbname=app_production
+## host=localhost user=pqgotest password=... sslmode=... dbname=app_production
 ##
 ## All connection parameters are optional.
 ##
@@ -59,7 +59,7 @@ func (client *conn) Request(
 rec := &record{}
 var err1 error

-// recive until EOF or FCGI_END_REQUEST
+// receive until EOF or FCGI_END_REQUEST
 READ_LOOP:
 for {
 err1 = rec.read(client.rwc)
@@ -26,7 +26,7 @@ var sampleConfig = `
 ## postgres://[pqgotest[:password]]@localhost[/dbname]\
 ## ?sslmode=[disable|verify-ca|verify-full]
 ## or a simple string:
-## host=localhost user=pqotest password=... sslmode=... dbname=app_production
+## host=localhost user=pqgotest password=... sslmode=... dbname=app_production
 ##
 ## All connection parameters are optional.
 ##
@@ -16,7 +16,7 @@ The example below has two queries are specified, with the following parameters:
 # specify address via a url matching:
 # postgres://[pqgotest[:password]]@host:port[/dbname]?sslmode=...
 # or a simple string:
-# host=localhost port=5432 user=pqotest password=... sslmode=... dbname=app_production
+# host=localhost port=5432 user=pqgotest password=... sslmode=... dbname=app_production
 #
 # All connection parameters are optional.
 # Without the dbname parameter, the driver will default to a database
@@ -71,7 +71,7 @@ The example below has two queries are specified, with the following parameters:
 ```

 The system can be easily extended using homemade metrics collection tools or
-using postgreql extensions ([pg_stat_statements](http://www.postgresql.org/docs/current/static/pgstatstatements.html), [pg_proctab](https://github.com/markwkm/pg_proctab) or [powa](http://dalibo.github.io/powa/))
+using postgresql extensions ([pg_stat_statements](http://www.postgresql.org/docs/current/static/pgstatstatements.html), [pg_proctab](https://github.com/markwkm/pg_proctab) or [powa](http://dalibo.github.io/powa/))

 # Sample Queries :
 - telegraf.conf postgresql_extensible queries (assuming that you have configured
@@ -41,7 +41,7 @@ var sampleConfig = `
 ## postgres://[pqgotest[:password]]@localhost[/dbname]\
 ## ?sslmode=[disable|verify-ca|verify-full]
 ## or a simple string:
-## host=localhost user=pqotest password=... sslmode=... dbname=app_production
+## host=localhost user=pqgotest password=... sslmode=... dbname=app_production
 #
 ## All connection parameters are optional. #
 ## Without the dbname parameter, the driver will default to a database
@@ -153,7 +153,7 @@ func (p *Postgresql) Gather(acc telegraf.Accumulator) error {
 columns []string
 )

-// Retreiving the database version
+// Retrieving the database version
 query = `SELECT setting::integer / 100 AS version FROM pg_settings WHERE name = 'server_version_num'`
 if err = p.DB.QueryRow(query).Scan(&db_version); err != nil {
 db_version = 0
@@ -10,7 +10,7 @@ import (
 "github.com/influxdata/telegraf/internal"
 )

-// Implemention of PIDGatherer that execs pgrep to find processes
+// Implementation of PIDGatherer that execs pgrep to find processes
 type Pgrep struct {
 path string
 }
@@ -53,7 +53,7 @@ For additional details reference the [RabbitMQ Management HTTP Stats][management
 # queue_name_include = []
 # queue_name_exclude = []

-## Federation upstreams to include and exlude specified as an array of glob
+## Federation upstreams to include and exclude specified as an array of glob
 ## pattern strings. Federation links can also be limited by the queue and
 ## exchange filters.
 # federation_upstream_include = []
@@ -15,15 +15,15 @@ import (
 "github.com/influxdata/telegraf/plugins/inputs"
 )

-// DefaultUsername will set a default value that corrasponds to the default
+// DefaultUsername will set a default value that corresponds to the default
 // value used by Rabbitmq
 const DefaultUsername = "guest"

-// DefaultPassword will set a default value that corrasponds to the default
+// DefaultPassword will set a default value that corresponds to the default
 // value used by Rabbitmq
 const DefaultPassword = "guest"

-// DefaultURL will set a default value that corrasponds to the default value
+// DefaultURL will set a default value that corresponds to the default value
 // used by Rabbitmq
 const DefaultURL = "http://localhost:15672"
@@ -21,7 +21,7 @@ It fetches its data from the [limits endpoint](https://developer.salesforce.com/

 ### Measurements & Fields:

-Salesforce provide one measurment named "salesforce".
+Salesforce provide one measurement named "salesforce".
 Each entry is converted to snake\_case and 2 fields are created.

 - \<key\>_max represents the limit threshold
@@ -166,7 +166,7 @@ func (s *Salesforce) getLoginEndpoint() (string, error) {
 }
 }

-// Authenticate with Salesfroce
+// Authenticate with Salesforce
 func (s *Salesforce) login() error {
 if s.Username == "" || s.Password == "" {
 return errors.New("missing username or password")
@@ -18,7 +18,7 @@ This plugin collects sensor metrics with the `sensors` executable from the lm-sensor
 ```

 ### Measurements & Fields:
-Fields are created dynamicaly depending on the sensors. All fields are float.
+Fields are created dynamically depending on the sensors. All fields are float.

 ### Tags:
@@ -962,9 +962,9 @@ func SnmpTranslate(oid string) (mibName string, oidNum string, oidText string, c
 // We could speed it up by putting a lock in snmpTranslateCache and then
 // returning it immediately, and multiple callers would then release the
 // snmpTranslateCachesLock and instead wait on the individual
-// snmpTranlsation.Lock to release. But I don't know that the extra complexity
+// snmpTranslation.Lock to release. But I don't know that the extra complexity
 // is worth it. Especially when it would slam the system pretty hard if lots
-// of lookups are being perfomed.
+// of lookups are being performed.

 stc.mibName, stc.oidNum, stc.oidText, stc.conversion, stc.err = snmpTranslateCall(oid)
 snmpTranslateCaches[oid] = stc
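The trade-off this comment weighs is between one coarse lock over the whole cache and finer per-entry locks other callers could wait on. A rough sketch of the coarse-lock version, with illustrative names only rather than the plugin's actual types:

```go
package main

import (
	"fmt"
	"sync"
)

// translationCache holds resolved OIDs behind a single mutex, so
// concurrent lookups of the same OID serialize on one lock, as the
// comment above describes.
type translationCache struct {
	mu      sync.Mutex
	entries map[string]string
}

func (c *translationCache) translate(oid string) string {
	c.mu.Lock()
	defer c.mu.Unlock()
	if v, ok := c.entries[oid]; ok {
		return v
	}
	v := "resolved:" + oid // stand-in for the expensive translation call
	c.entries[oid] = v
	return v
}

func main() {
	c := &translationCache{entries: map[string]string{}}
	fmt.Println(c.translate(".1.3.6.1.2.1.1.3.0"))
}
```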
@@ -254,7 +254,7 @@ func (s *SnmpTrap) lookup(oid string) (e mibEntry, err error) {
 defer s.cacheLock.Unlock()
 var ok bool
 if e, ok = s.cache[oid]; !ok {
-// cache miss. exec snmptranlate
+// cache miss. exec snmptranslate
 e, err = s.snmptranslate(oid)
 if err == nil {
 s.cache[oid] = e
@@ -84,7 +84,7 @@ func TestReceiveTrap(t *testing.T) {
 version gosnmp.SnmpVersion
 trap gosnmp.SnmpTrap // include pdus

-// recieve
+// receive
 entries []entry
 metrics []telegraf.Metric
 }{
@@ -82,7 +82,7 @@ setting.

 Instructions on how to adjust these OS settings are available below.

-Some OSes (most notably, Linux) place very restricive limits on the performance
+Some OSes (most notably, Linux) place very restrictive limits on the performance
 of UDP protocols. It is _highly_ recommended that you increase these OS limits to
 at least 8MB before trying to run large amounts of UDP traffic to your instance.
 8MB is just a recommendation, and can be adjusted higher.
@@ -544,7 +544,7 @@ func (s *Stackdriver) generatetimeSeriesConfs(
 for _, filter := range filters {
 // Add filter for list metric descriptors if
 // includeMetricTypePrefixes is specified,
-// this is more effecient than iterating over
+// this is more efficient than iterating over
 // all metric descriptors
 req.Filter = filter
 mdRespChan, err := s.client.ListMetricDescriptors(ctx, req)
@@ -35,7 +35,7 @@ func NewTestStatsd() *Statsd {
 return &s
 }

-// Test that MaxTCPConections is respected
+// Test that MaxTCPConnections is respected
 func TestConcurrentConns(t *testing.T) {
 listener := Statsd{
 Log: testutil.Logger{},
@@ -66,7 +66,7 @@ func TestConcurrentConns(t *testing.T) {
 assert.Zero(t, acc.NFields())
 }

-// Test that MaxTCPConections is respected when max==1
+// Test that MaxTCPConnections is respected when max==1
 func TestConcurrentConns1(t *testing.T) {
 listener := Statsd{
 Log: testutil.Logger{},
@@ -95,7 +95,7 @@ func TestConcurrentConns1(t *testing.T) {
 assert.Zero(t, acc.NFields())
 }

-// Test that MaxTCPConections is respected
+// Test that MaxTCPConnections is respected
 func TestCloseConcurrentConns(t *testing.T) {
 listener := Statsd{
 Log: testutil.Logger{},
@@ -47,7 +47,7 @@ Syslog messages should be formatted according to
 ## Must be one of "octect-counting", "non-transparent".
 # framing = "octet-counting"

-## The trailer to be expected in case of non-trasparent framing (default = "LF").
+## The trailer to be expected in case of non-transparent framing (default = "LF").
 ## Must be one of "LF", or "NUL".
 # trailer = "LF"
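For readers unfamiliar with the two framings named in this config, a small sketch of what each looks like on the wire (RFC 6587 style); the message content is illustrative only:

```go
package main

import "fmt"

// frameOctetCounting prepends the message length and a space, the
// octet-counting framing from RFC 6587.
func frameOctetCounting(msg string) string {
	return fmt.Sprintf("%d %s", len(msg), msg)
}

// frameNonTransparent appends a trailer instead; "\n" corresponds
// to the default LF trailer above.
func frameNonTransparent(msg string) string {
	return msg + "\n"
}

func main() {
	m := `<13>1 2020-05-13T12:00:00Z host app - - - an event`
	fmt.Println(frameOctetCounting(m)) // "50 <13>1 ..."
	fmt.Print(frameNonTransparent(m))
}
```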
@@ -280,7 +280,7 @@ func getTestCasesForOctetCounting() []testCaseStream {
 werr: 1,
 },
 // {
-// name: "1st/of/ko", // overflow (msglen greather then max allowed octets)
+// name: "1st/of/ko", // overflow (msglen greater than max allowed octets)
 // data: []byte(fmt.Sprintf("8193 <%d>%d %s %s %s %s %s 12 %s", maxP, maxV, maxTS, maxH, maxA, maxPID, maxMID, message7681)),
 // want: []testutil.Metric{},
 // },
@@ -87,7 +87,7 @@ var sampleConfig = `
 ## Must be one of "octet-counting", "non-transparent".
 # framing = "octet-counting"

-## The trailer to be expected in case of non-trasparent framing (default = "LF").
+## The trailer to be expected in case of non-transparent framing (default = "LF").
 ## Must be one of "LF", or "NUL".
 # trailer = "LF"
@@ -313,7 +313,7 @@ func (s *Syslog) handle(conn net.Conn, acc telegraf.Accumulator) {
 opts = append(opts, syslog.WithBestEffort())
 }

-// Select the parser to use depeding on transport framing
+// Select the parser to use depending on transport framing
 if s.Framing == framing.OctetCounting {
 // Octet counting transparent framing
 p = octetcounting.NewParser(opts...)
@@ -141,7 +141,7 @@ func TestConnectTCP(t *testing.T) {
 }
 }

-// Test that MaxTCPConections is respected
+// Test that MaxTCPConnections is respected
 func TestConcurrentConns(t *testing.T) {
 listener := TcpListener{
 Log: testutil.Logger{},
@@ -177,7 +177,7 @@ func TestConcurrentConns(t *testing.T) {
 assert.Equal(t, io.EOF, err)
 }

-// Test that MaxTCPConections is respected when max==1
+// Test that MaxTCPConnections is respected when max==1
 func TestConcurrentConns1(t *testing.T) {
 listener := TcpListener{
 Log: testutil.Logger{},
@@ -211,7 +211,7 @@ func TestConcurrentConns1(t *testing.T) {
 assert.Equal(t, io.EOF, err)
 }

-// Test that MaxTCPConections is respected
+// Test that MaxTCPConnections is respected
 func TestCloseConcurrentConns(t *testing.T) {
 listener := TcpListener{
 Log: testutil.Logger{},
@@ -21,7 +21,7 @@ a validating, recursive, and caching DNS resolver.
 ## The default location of the unbound config file can be overridden with:
 # config_file = "/etc/unbound/unbound.conf"

-## The default timeout of 1s can be overriden with:
+## The default timeout of 1s can be overridden with:
 # timeout = "1s"

 ## When set to true, thread metrics are tagged with the thread id.
@@ -49,7 +49,7 @@ var sampleConfig = `
 ## The default location of the unbound config file can be overridden with:
 # config_file = "/etc/unbound/unbound.conf"

-## The default timeout of 1s can be overriden with:
+## The default timeout of 1s can be overridden with:
 # timeout = "1s"

 ## When set to true, thread metrics are tagged with the thread id.
@@ -126,7 +126,7 @@ func unboundRunner(cmdName string, Timeout internal.Duration, UseSudo bool, Serv
 // All the dots in stat name will replaced by underscores. Histogram statistics will not be collected.
 func (s *Unbound) Gather(acc telegraf.Accumulator) error {

-// Always exclude histrogram statistics
+// Always exclude histogram statistics
 statExcluded := []string{"histogram.*"}
 filterExcluded, err := filter.Compile(statExcluded)
 if err != nil {
@@ -13,7 +13,7 @@ The uWSGI input plugin gathers metrics about uWSGI using its [Stats Server](http
 ## servers = ["tcp://localhost:5050", "http://localhost:1717", "unix:///tmp/statsock"]
 servers = ["tcp://127.0.0.1:1717"]

-## General connection timout
+## General connection timeout
 # timeout = "5s"
 ```
@@ -42,7 +42,7 @@ func (u *Uwsgi) SampleConfig() string {
 ## servers = ["tcp://localhost:5050", "http://localhost:1717", "unix:///tmp/statsock"]
 servers = ["tcp://127.0.0.1:1717"]

-## General connection timout
+## General connection timeout
 # timeout = "5s"
 `
 }
@@ -19,7 +19,7 @@ This plugin gathers stats from [Varnish HTTP Cache](https://varnish-cache.org/)
 stats = ["MAIN.cache_hit", "MAIN.cache_miss", "MAIN.uptime"]

 ## Optional name for the varnish instance (or working directory) to query
-## Usually appened after -n in varnish cli
+## Usually append after -n in varnish cli
 # instance_name = instanceName

 ## Timeout for varnishstat command
@@ -92,7 +92,7 @@ MEMPOOL, etc). In the output, the prefix will be used as a tag, and removed from
 - MAIN.s_pipe (uint64, count, Total pipe sessions)
 - MAIN.s_pass (uint64, count, Total pass- ed requests)
 - MAIN.s_fetch (uint64, count, Total backend fetches)
-- MAIN.s_synth (uint64, count, Total synthethic responses)
+- MAIN.s_synth (uint64, count, Total synthetic responses)
 - MAIN.s_req_hdrbytes (uint64, count, Request header bytes)
 - MAIN.s_req_bodybytes (uint64, count, Request body bytes)
 - MAIN.s_resp_hdrbytes (uint64, count, Response header bytes)
@@ -49,7 +49,7 @@ var sampleConfig = `
 stats = ["MAIN.cache_hit", "MAIN.cache_miss", "MAIN.uptime"]

 ## Optional name for the varnish instance (or working directory) to query
-## Usually appened after -n in varnish cli
+## Usually append after -n in varnish cli
 # instance_name = instanceName

 ## Timeout for varnishstat command
@@ -155,11 +155,11 @@ vm_metric_exclude = [ "*" ]
 ## separator character to use for measurement and field names (default: "_")
 # separator = "_"

-## number of objects to retreive per query for realtime resources (vms and hosts)
+## number of objects to retrieve per query for realtime resources (vms and hosts)
 ## set to 64 for vCenter 5.5 and 6.0 (default: 256)
 # max_query_objects = 256

-## number of metrics to retreive per query for non-realtime resources (clusters and datastores)
+## number of metrics to retrieve per query for non-realtime resources (clusters and datastores)
 ## set to 64 for vCenter 5.5 and 6.0 (default: 256)
 # max_query_metrics = 256
@@ -184,10 +184,10 @@ vm_metric_exclude = [ "*" ]
 ## Custom attributes from vCenter can be very useful for queries in order to slice the
 ## metrics along different dimension and for forming ad-hoc relationships. They are disabled
 ## by default, since they can add a considerable amount of tags to the resulting metrics. To
-## enable, simply set custom_attribute_exlude to [] (empty set) and use custom_attribute_include
+## enable, simply set custom_attribute_exclude to [] (empty set) and use custom_attribute_include
 ## to select the attributes you want to include.
 ## By default, since they can add a considerable amount of tags to the resulting metrics. To
-## enable, simply set custom_attribute_exlude to [] (empty set) and use custom_attribute_include
+## enable, simply set custom_attribute_exclude to [] (empty set) and use custom_attribute_include
 ## to select the attributes you want to include.
 # custom_attribute_include = []
 # custom_attribute_exclude = ["*"]
|
||||||
Any modification should be reflected in this plugin by modifying the parameter `max_query_objects`
|
Any modification should be reflected in this plugin by modifying the parameter `max_query_objects`
|
||||||
|
|
||||||
```
|
```
|
||||||
## number of objects to retreive per query for realtime resources (vms and hosts)
|
## number of objects to retrieve per query for realtime resources (vms and hosts)
|
||||||
## set to 64 for vCenter 5.5 and 6.0 (default: 256)
|
## set to 64 for vCenter 5.5 and 6.0 (default: 256)
|
||||||
# max_query_objects = 256
|
# max_query_objects = 256
|
||||||
```
|
```
|
||||||
|
@@ -275,12 +275,12 @@ We can extend this to looking at a cluster level: ```/DC0/host/Cluster1/*/hadoop

 vCenter keeps two different kinds of metrics, known as realtime and historical metrics.

-* Realtime metrics: Avaialable at a 20 second granularity. These metrics are stored in memory and are very fast and cheap to query. Our tests have shown that a complete set of realtime metrics for 7000 virtual machines can be obtained in less than 20 seconds. Realtime metrics are only available on **ESXi hosts** and **virtual machine** resources. Realtime metrics are only stored for 1 hour in vCenter.
+* Realtime metrics: Available at a 20 second granularity. These metrics are stored in memory and are very fast and cheap to query. Our tests have shown that a complete set of realtime metrics for 7000 virtual machines can be obtained in less than 20 seconds. Realtime metrics are only available on **ESXi hosts** and **virtual machine** resources. Realtime metrics are only stored for 1 hour in vCenter.
 * Historical metrics: Available at a 5 minute, 30 minutes, 2 hours and 24 hours rollup levels. The vSphere Telegraf plugin only uses the 5 minute rollup. These metrics are stored in the vCenter database and can be expensive and slow to query. Historical metrics are the only type of metrics available for **clusters**, **datastores** and **datacenters**.

 For more information, refer to the vSphere documentation here: https://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.wssdk.pg.doc_50%2FPG_Ch16_Performance.18.2.html

-This distinction has an impact on how Telegraf collects metrics. A single instance of an input plugin can have one and only one collection interval, which means that you typically set the collection interval based on the most frequently collected metric. Let's assume you set the collection interval to 1 minute. All realtime metrics will be collected every minute. Since the historical metrics are only available on a 5 minute interval, the vSphere Telegraf plugin automatically skips four out of five collection cycles for these metrics. This works fine in many cases. Problems arise when the collection of historical metrics takes longer than the collecition interval. This will cause error messages similar to this to appear in the Telegraf logs:
+This distinction has an impact on how Telegraf collects metrics. A single instance of an input plugin can have one and only one collection interval, which means that you typically set the collection interval based on the most frequently collected metric. Let's assume you set the collection interval to 1 minute. All realtime metrics will be collected every minute. Since the historical metrics are only available on a 5 minute interval, the vSphere Telegraf plugin automatically skips four out of five collection cycles for these metrics. This works fine in many cases. Problems arise when the collection of historical metrics takes longer than the collection interval. This will cause error messages similar to this to appear in the Telegraf logs:

 ```2019-01-16T13:41:10Z W! [agent] input "inputs.vsphere" did not complete within its interval```
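A common way to avoid that warning, assuming separate plugin instances are acceptable in your deployment, is to split realtime and historical collection into two `inputs.vsphere` instances with their own intervals. A sketch only: the exclude option names below follow the `vm_metric_exclude` pattern shown earlier and should be checked against the full sample config.

```toml
## Sketch: realtime metrics (hosts and VMs) on a 1 minute interval.
[[inputs.vsphere]]
  interval = "60s"
  vcenters = ["https://vcenter.example.com/sdk"]
  datastore_metric_exclude = ["*"]
  cluster_metric_exclude = ["*"]
  datacenter_metric_exclude = ["*"]

## Historical metrics (clusters, datastores, datacenters) every 5 minutes.
[[inputs.vsphere]]
  interval = "300s"
  vcenters = ["https://vcenter.example.com/sdk"]
  host_metric_exclude = ["*"]
  vm_metric_exclude = ["*"]
```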
@@ -36,7 +36,7 @@ type ClientFactory struct {
 parent *VSphere
 }

-// Client represents a connection to vSphere and is backed by a govmoni connection
+// Client represents a connection to vSphere and is backed by a govmomi connection
 type Client struct {
 Client *govmomi.Client
 Views *view.Manager
@@ -535,7 +535,7 @@ func (e *Endpoint) complexMetadataSelect(ctx context.Context, res *resourceKind,
 }
 n := len(sampledObjects)
 if n > maxMetadataSamples {
-// Shuffle samples into the maxMetadatSamples positions
+// Shuffle samples into the maxMetadataSamples positions
 for i := 0; i < maxMetadataSamples; i++ {
 j := int(rand.Int31n(int32(i + 1)))
 t := sampledObjects[i]
@@ -1159,7 +1159,7 @@ func (e *Endpoint) collectChunk(ctx context.Context, pqs queryChunk, res *resour
 }
 count++

-// Update highwater marks
+// Update hiwater marks
 e.hwMarks.Put(moid, name, ts)
 }
 if nValues == 0 {
@@ -200,11 +200,11 @@ var sampleConfig = `
 ## separator character to use for measurement and field names (default: "_")
 # separator = "_"

-## number of objects to retreive per query for realtime resources (vms and hosts)
+## number of objects to retrieve per query for realtime resources (vms and hosts)
 ## set to 64 for vCenter 5.5 and 6.0 (default: 256)
 # max_query_objects = 256

-## number of metrics to retreive per query for non-realtime resources (clusters and datastores)
+## number of metrics to retrieve per query for non-realtime resources (clusters and datastores)
 ## set to 64 for vCenter 5.5 and 6.0 (default: 256)
 # max_query_metrics = 256
@@ -229,10 +229,10 @@ var sampleConfig = `
 ## Custom attributes from vCenter can be very useful for queries in order to slice the
 ## metrics along different dimension and for forming ad-hoc relationships. They are disabled
 ## by default, since they can add a considerable amount of tags to the resulting metrics. To
-## enable, simply set custom_attribute_exlude to [] (empty set) and use custom_attribute_include
+## enable, simply set custom_attribute_exclude to [] (empty set) and use custom_attribute_include
 ## to select the attributes you want to include.
 ## By default, since they can add a considerable amount of tags to the resulting metrics. To
-## enable, simply set custom_attribute_exlude to [] (empty set) and use custom_attribute_include
+## enable, simply set custom_attribute_exclude to [] (empty set) and use custom_attribute_include
 ## to select the attributes you want to include.
 # custom_attribute_include = []
 # custom_attribute_exclude = ["*"]
@@ -78,7 +78,7 @@ The tag values and field values show the place on the incoming JSON object where
 * 'issues' = `event.repository.open_issues_count` int
 * 'commit' = `event.deployment.sha` string
 * 'task' = `event.deployment.task` string
-* 'environment' = `event.deployment.evnironment` string
+* 'environment' = `event.deployment.environment` string
 * 'description' = `event.deployment.description` string

 #### [`deployment_status` event](https://developer.github.com/v3/activity/events/types/#deploymentstatusevent)
@@ -96,7 +96,7 @@ The tag values and field values show the place on the incoming JSON object where
 * 'issues' = `event.repository.open_issues_count` int
 * 'commit' = `event.deployment.sha` string
 * 'task' = `event.deployment.task` string
-* 'environment' = `event.deployment.evnironment` string
+* 'environment' = `event.deployment.environment` string
 * 'description' = `event.deployment.description` string
 * 'depState' = `event.deployment_status.state` string
 * 'depDescription' = `event.deployment_status.description` string
@@ -31,7 +31,7 @@ String data = String::format("{ \"tags\" : {
 ```

 Escaping the "" is required in the source file.
-The number of tag values and field values is not restrictied so you can send as many values per webhook call as you'd like.
+The number of tag values and field values is not restricted so you can send as many values per webhook call as you'd like.

 You will need to enable JSON messages in the Webhooks setup of Particle.io, and make sure to check the "include default data" box as well.
@@ -214,7 +214,7 @@ func init() {
 //
 // To view all (internationalized...) counters on a system, there are three non-programmatic ways: perfmon utility,
 // the typeperf command, and the the registry editor. perfmon.exe is perhaps the easiest way, because it's basically a
-// full implemention of the pdh.dll API, except with a GUI and all that. The registry setting also provides an
+// full implementation of the pdh.dll API, except with a GUI and all that. The registry setting also provides an
 // interface to the available counters, and can be found at the following key:
 //
 // HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Perflib\CurrentLanguage

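As a programmatic complement to the three tools named in that comment, here is a minimal sketch (mine, not from the Telegraf codebase) of reading the counter name table from that registry key via golang.org/x/sys/windows/registry. Perflib conventionally stores names in a multi-string "Counter" value as alternating index/name pairs; treat that layout as an assumption to verify on your system.

```go
//go:build windows

package main

import (
	"fmt"

	"golang.org/x/sys/windows/registry"
)

func main() {
	// Open the key mentioned in the comment above, read-only.
	k, err := registry.OpenKey(
		registry.LOCAL_MACHINE,
		`SOFTWARE\Microsoft\Windows NT\CurrentVersion\Perflib\CurrentLanguage`,
		registry.QUERY_VALUE,
	)
	if err != nil {
		panic(err)
	}
	defer k.Close()

	// "Counter" is conventionally a REG_MULTI_SZ of alternating
	// index/name pairs (assumption; layout can vary by Windows version).
	entries, _, err := k.GetStringsValue("Counter")
	if err != nil {
		panic(err)
	}
	for i := 0; i+1 < len(entries); i += 2 {
		fmt.Printf("%s = %s\n", entries[i], entries[i+1])
	}
}
```
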
@@ -268,7 +268,7 @@ A short description for some of the metrics.
 `arcstats_evict_l2_ineligible` We evicted something which cannot be stored in the l2.
 Reasons could be:
-  - We have multiple pools, we evicted something from a pool whithout an l2 device.
+  - We have multiple pools, we evicted something from a pool without an l2 device.
   - The zfs property secondary cache.

 `arcstats_c` Arc target size, this is the size the system thinks the arc should have.

@@ -155,7 +155,7 @@ func TestZfsGeneratesMetrics(t *testing.T) {
 	err = z.Gather(&acc)
 	require.NoError(t, err)

-	//four pool, vdev_cache_stats and zfetchstatus metrics
+	//four pool, vdev_cache_stats and zfetchstats metrics
 	intMetrics = getKstatMetricsVdevAndZfetch()

 	acc.AssertContainsTaggedFields(t, "zfs", intMetrics, tags)

@@ -5,7 +5,7 @@ vice versa.
 To convert from json to thrift,
 the json is unmarshalled, converted to zipkincore.Span structures, and
 marshalled into thrift binary protocol. The json must be in an array format (even if it only has one object),
-because the tool automatically tries to unmarshall the json into an array of structs.
+because the tool automatically tries to unmarshal the json into an array of structs.

 To convert from thrift to json,
 the opposite process must happen. The thrift binary data must be read into an array of

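The "array of structs" requirement above is worth a concrete illustration. This is a hedged sketch, not the converter's actual code: it substitutes a simplified span shape for the real zipkincore.Span (an assumption) and only demonstrates why `[{...}]` unmarshals where a bare `{...}` fails.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-in for zipkincore.Span (assumption; the real
// struct is generated from the Zipkin thrift definitions).
type span struct {
	TraceID string `json:"traceId"`
	Name    string `json:"name"`
}

func main() {
	// An array is required even for a single span...
	good := []byte(`[{"traceId": "a1b2", "name": "get /api"}]`)
	// ...because a bare object cannot unmarshal into a slice.
	bad := []byte(`{"traceId": "a1b2", "name": "get /api"}`)

	var spans []span
	if err := json.Unmarshal(good, &spans); err != nil {
		panic(err)
	}
	fmt.Println("array form:", spans)

	if err := json.Unmarshal(bad, &spans); err != nil {
		fmt.Println("object form fails:", err)
	}
}
```
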
@@ -1,6 +1,6 @@
 # AMQP Output Plugin

-This plugin writes to a AMQP 0-9-1 Exchange, a promenent implementation of this protocol being [RabbitMQ](https://www.rabbitmq.com/).
+This plugin writes to a AMQP 0-9-1 Exchange, a prominent implementation of this protocol being [RabbitMQ](https://www.rabbitmq.com/).

 This plugin does not bind the exchange to a queue.

@@ -40,7 +40,7 @@ For an introduction to AMQP see:
   ## Additional exchange arguments.
   # exchange_arguments = { }
-  # exchange_arguments = {"hash_propery" = "timestamp"}
+  # exchange_arguments = {"hash_property" = "timestamp"}

   ## Authentication credentials for the PLAIN auth_method.
   # username = ""

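For context on what an exchange argument like this means on the broker side: below is a minimal sketch using the streadway/amqp client to declare an exchange with an arguments table. The `x-consistent-hash` type and `hash-property` argument name follow RabbitMQ's consistent-hash exchange plugin convention and are assumptions here; this commit only fixes the spelling of the config example.

```go
package main

import (
	"github.com/streadway/amqp"
)

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		panic(err)
	}
	defer ch.Close()

	// Declare an exchange with extra arguments, analogous to the
	// exchange_arguments table in the config sample above. The kind
	// and argument name assume RabbitMQ's consistent-hash plugin.
	err = ch.ExchangeDeclare(
		"telegraf",          // name (assumption)
		"x-consistent-hash", // kind (assumption)
		true,                // durable
		false,               // auto-delete
		false,               // internal
		false,               // no-wait
		amqp.Table{"hash-property": "timestamp"},
	)
	if err != nil {
		panic(err)
	}
}
```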