New snmp plugin and snmp_legacy rename

closes #1371
closes #808
closes #1361
closes #1151
closes #997
closes #1163
closes #856
closes #1043
closes #961
closes #1389
Cameron Sparr 2016-07-28 15:55:03 +01:00
parent 58f0259f0a
commit a3706a31e6
14 changed files with 3130 additions and 3114 deletions


@@ -1,137 +1,49 @@
## v1.0 [unreleased]
### Features
- [#1413](https://github.com/influxdata/telegraf/issues/1413): Separate container_version from container_image tag.
- [#1525](https://github.com/influxdata/telegraf/pull/1525): Support setting per-device and total metrics for Docker network and blockio.
- [#1466](https://github.com/influxdata/telegraf/pull/1466): MongoDB input plugin: adding per DB stats from db.stats()
### Bugfixes
- [#1519](https://github.com/influxdata/telegraf/pull/1519): Fix error race conditions and partial failures.
- [#1477](https://github.com/influxdata/telegraf/issues/1477): nstat: fix inaccurate config panic.
- [#1481](https://github.com/influxdata/telegraf/issues/1481): jolokia: fix handling multiple multi-dimensional attributes.
- [#1430](https://github.com/influxdata/telegraf/issues/1430): Fix prometheus character sanitizing. Sanitize more win_perf_counters characters.
- [#1534](https://github.com/influxdata/telegraf/pull/1534): Add diskio io_time to FreeBSD & report timing metrics as ms (as linux does).
- [#1379](https://github.com/influxdata/telegraf/issues/1379): Fix covering Amazon Linux for post remove flow.
## v1.0 beta 3 [2016-07-18]
### Release Notes

- **Breaking Change** The SNMP plugin is being deprecated in its current form.
There is a [new SNMP plugin](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/snmp)
which fixes many of the issues and confusions
of its predecessor. For users wanting to continue to use the deprecated SNMP
plugin, you will need to change your config file from `[[inputs.snmp]]` to
`[[inputs.snmp_legacy]]`. The configuration of the new SNMP plugin is _not_
backwards-compatible.
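For illustration, a minimal before/after sketch of the rename; the address, community, and `host` sub-section shown here are placeholder values in the legacy plugin's style, not a complete configuration:

```toml
# Before: the legacy behavior lived under the old plugin name
[[inputs.snmp]]
  [[inputs.snmp.host]]
    address = "127.0.0.1:161"
    community = "public"

# After: the same legacy configuration, moved under the new name
[[inputs.snmp_legacy]]
  [[inputs.snmp_legacy.host]]
    address = "127.0.0.1:161"
    community = "public"
```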
- **Breaking Change**: Aerospike main server node measurements have been renamed
aerospike_node. Aerospike namespace measurements have been renamed to
aerospike_namespace. They will also now be tagged with the node_name
that they correspond to. This has been done to differentiate measurements
that pertain to node vs. namespace statistics.

- **Breaking Change**: users of github_webhooks must change to the new
`[[inputs.webhooks]]` plugin.
This means that the default github_webhooks config:

```
# A Github Webhook Event collector
[[inputs.github_webhooks]]
  ## Address and port to host Webhook listener on
  service_address = ":1618"
```

should now look like:

```
# A Webhooks Event collector
[[inputs.webhooks]]
  ## Address and port to host Webhook listener on
  service_address = ":1618"

  [inputs.webhooks.github]
    path = "/"
```
### Features
- [#1503](https://github.com/influxdata/telegraf/pull/1503): Add tls support for certs to RabbitMQ input plugin
- [#1289](https://github.com/influxdata/telegraf/pull/1289): webhooks input plugin. Thanks @francois2metz and @cduez!
- [#1247](https://github.com/influxdata/telegraf/pull/1247): rollbar webhook plugin.
- [#1408](https://github.com/influxdata/telegraf/pull/1408): mandrill webhook plugin.
- [#1402](https://github.com/influxdata/telegraf/pull/1402): docker-machine/boot2docker no longer required for unit tests.
- [#1350](https://github.com/influxdata/telegraf/pull/1350): cgroup input plugin.
- [#1369](https://github.com/influxdata/telegraf/pull/1369): Add input plugin for consuming metrics from NSQD.
- [#1369](https://github.com/influxdata/telegraf/pull/1480): add ability to read redis from a socket.
- [#1387](https://github.com/influxdata/telegraf/pull/1387): **Breaking Change** - Redis `role` tag renamed to `replication_role` to avoid global_tags override
- [#1437](https://github.com/influxdata/telegraf/pull/1437): Fetching Galera status metrics in MySQL
- [#1500](https://github.com/influxdata/telegraf/pull/1500): Aerospike plugin refactored to use official client lib.
- [#1434](https://github.com/influxdata/telegraf/pull/1434): Add measurement name arg to logparser plugin.
- [#1479](https://github.com/influxdata/telegraf/pull/1479): logparser: change resp_code from a field to a tag.
- [#1411](https://github.com/influxdata/telegraf/pull/1411): Implement support for fetching hddtemp data
- [#1466](https://github.com/influxdata/telegraf/pull/1466): MongoDB input plugin: adding per DB stats from db.stats()
- [#1389](https://github.com/influxdata/telegraf/pull/1389): Add a new and improved SNMP plugin (snmp2).
### Bugfixes
- [#1472](https://github.com/influxdata/telegraf/pull/1472): diskio input plugin: set 'skip_serial_number = true' by default to avoid high cardinality.
- [#1426](https://github.com/influxdata/telegraf/pull/1426): nil metrics panic fix.
- [#1384](https://github.com/influxdata/telegraf/pull/1384): Fix datarace in apache input plugin.
- [#1399](https://github.com/influxdata/telegraf/issues/1399): Add `read_repairs` statistics to riak plugin.
- [#1405](https://github.com/influxdata/telegraf/issues/1405): Fix memory/connection leak in prometheus input plugin.
- [#1378](https://github.com/influxdata/telegraf/issues/1378): Trim BOM from config file for Windows support.
- [#1339](https://github.com/influxdata/telegraf/issues/1339): Prometheus client output panic on service reload.
- [#1461](https://github.com/influxdata/telegraf/pull/1461): Prometheus parser, protobuf format header fix.
- [#1334](https://github.com/influxdata/telegraf/issues/1334): Prometheus output, metric refresh and caching fixes.
- [#1432](https://github.com/influxdata/telegraf/issues/1432): Panic fix for multiple graphite outputs under very high load.
- [#1412](https://github.com/influxdata/telegraf/pull/1412): Instrumental output has better reconnect behavior
- [#1460](https://github.com/influxdata/telegraf/issues/1460): Remove PID from procstat plugin to fix cardinality issues.
- [#1427](https://github.com/influxdata/telegraf/issues/1427): Cassandra input: version 2.x "column family" fix.
- [#1463](https://github.com/influxdata/telegraf/issues/1463): Shared WaitGroup in Exec plugin
- [#1436](https://github.com/influxdata/telegraf/issues/1436): logparser: honor modifiers in "pattern" config.
- [#1418](https://github.com/influxdata/telegraf/issues/1418): logparser: error and exit on file permissions/missing errors.
- [#1499](https://github.com/influxdata/telegraf/pull/1499): Make the user able to specify full path for HAproxy stats
- [#1521](https://github.com/influxdata/telegraf/pull/1521): Fix Redis url, an extra "tcp://" was added.
## v1.0 beta 2 [2016-06-21]
### Features
- [#1340](https://github.com/influxdata/telegraf/issues/1340): statsd: do not log every dropped metric.
- [#1368](https://github.com/influxdata/telegraf/pull/1368): Add precision rounding to all metrics on collection.
- [#1390](https://github.com/influxdata/telegraf/pull/1390): Add support for Tengine
- [#1320](https://github.com/influxdata/telegraf/pull/1320): Logparser input plugin for parsing grok-style log patterns.
- [#1397](https://github.com/influxdata/telegraf/issues/1397): ElasticSearch: now supports connecting to ElasticSearch via SSL
### Bugfixes
- [#1330](https://github.com/influxdata/telegraf/issues/1330): Fix exec plugin panic when using single binary.
- [#1336](https://github.com/influxdata/telegraf/issues/1336): Fixed incorrect prometheus metrics source selection.
- [#1112](https://github.com/influxdata/telegraf/issues/1112): Set default Zookeeper chroot to empty string.
- [#1335](https://github.com/influxdata/telegraf/issues/1335): Fix overall ping timeout to be calculated based on per-ping timeout.
- [#1374](https://github.com/influxdata/telegraf/pull/1374): Change "default" retention policy to "".
- [#1377](https://github.com/influxdata/telegraf/issues/1377): Graphite output mangling '%' character.
- [#1396](https://github.com/influxdata/telegraf/pull/1396): Prometheus input plugin now supports x509 certs authentication
## v1.0 beta 1 [2016-06-07]
### Release Notes
- `flush_jitter` behavior has been changed. The random jitter will now be
evaluated at every flush interval, rather than once at startup. This makes it
consistent with the behavior of `collection_jitter`.
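As a rough sketch of the new behavior, with placeholder interval values:

```toml
[agent]
  interval = "10s"
  ## each collection is delayed by a random amount of up to 1s
  collection_jitter = "1s"
  flush_interval = "10s"
  ## each flush is now delayed by a random amount of up to 5s,
  ## re-evaluated at every flush rather than only once at startup
  flush_jitter = "5s"
```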
- All AWS plugins now utilize a standard mechanism for evaluating credentials.
This allows all AWS plugins to support environment variables, shared credential
files & profiles, and role assumptions. See the specific plugin README for
details.
- The AWS CloudWatch input plugin can now declare a wildcard value for a metric
dimension. This causes the plugin to read all metrics that contain the specified
dimension key regardless of value. This is used to export collections of metrics
without having to know the dimension values ahead of time.
- The AWS CloudWatch input plugin can now be configured with the `cache_ttl`
attribute. This configures the TTL of the internal metric cache. This is useful
in conjunction with wildcard dimension values as it will control the amount of
time before a new metric is included by the plugin.
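A hedged sketch of a CloudWatch input using both of these options; the namespace, metric, and dimension names are placeholders, and the exact option names should be checked against the CloudWatch input plugin README:

```toml
[[inputs.cloudwatch]]
  region = "us-east-1"
  namespace = "AWS/ELB"
  period = "5m"
  interval = "5m"
  ## internal metric cache TTL; controls how long before a newly-appearing
  ## metric is picked up
  cache_ttl = "10m"

  [[inputs.cloudwatch.metrics]]
    names = ["Latency"]
    ## wildcard dimension value: gather this metric for every LoadBalancerName
    [[inputs.cloudwatch.metrics.dimensions]]
      name = "LoadBalancerName"
      value = "*"
```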
### Features
- [#1262](https://github.com/influxdata/telegraf/pull/1261): Add graylog input plugin.
@@ -149,6 +61,30 @@ time before a new metric is included by the plugin.
- [#1278](https://github.com/influxdata/telegraf/pull/1278) & [#1288](https://github.com/influxdata/telegraf/pull/1288) & [#1295](https://github.com/influxdata/telegraf/pull/1295): RabbitMQ/Apache/InfluxDB inputs: made url(s) parameter optional by using reasonable input defaults if not specified
- [#1296](https://github.com/influxdata/telegraf/issues/1296): Refactor of flush_jitter argument.
- [#1213](https://github.com/influxdata/telegraf/issues/1213): Add inactive & active memory to mem plugin.
- [#1340](https://github.com/influxdata/telegraf/issues/1340): statsd: do not log every dropped metric.
- [#1368](https://github.com/influxdata/telegraf/pull/1368): Add precision rounding to all metrics on collection.
- [#1390](https://github.com/influxdata/telegraf/pull/1390): Add support for Tengine
- [#1320](https://github.com/influxdata/telegraf/pull/1320): Logparser input plugin for parsing grok-style log patterns.
- [#1397](https://github.com/influxdata/telegraf/issues/1397): ElasticSearch: now supports connecting to ElasticSearch via SSL
- [#1503](https://github.com/influxdata/telegraf/pull/1503): Add tls support for certs to RabbitMQ input plugin
- [#1289](https://github.com/influxdata/telegraf/pull/1289): webhooks input plugin. Thanks @francois2metz and @cduez!
- [#1247](https://github.com/influxdata/telegraf/pull/1247): rollbar webhook plugin.
- [#1408](https://github.com/influxdata/telegraf/pull/1408): mandrill webhook plugin.
- [#1402](https://github.com/influxdata/telegraf/pull/1402): docker-machine/boot2docker no longer required for unit tests.
- [#1350](https://github.com/influxdata/telegraf/pull/1350): cgroup input plugin.
- [#1369](https://github.com/influxdata/telegraf/pull/1369): Add input plugin for consuming metrics from NSQD.
- [#1369](https://github.com/influxdata/telegraf/pull/1480): add ability to read redis from a socket.
- [#1387](https://github.com/influxdata/telegraf/pull/1387): **Breaking Change** - Redis `role` tag renamed to `replication_role` to avoid global_tags override
- [#1437](https://github.com/influxdata/telegraf/pull/1437): Fetching Galera status metrics in MySQL
- [#1500](https://github.com/influxdata/telegraf/pull/1500): Aerospike plugin refactored to use official client lib.
- [#1434](https://github.com/influxdata/telegraf/pull/1434): Add measurement name arg to logparser plugin.
- [#1479](https://github.com/influxdata/telegraf/pull/1479): logparser: change resp_code from a field to a tag.
- [#1411](https://github.com/influxdata/telegraf/pull/1411): Implement support for fetching hddtemp data
- [#1466](https://github.com/influxdata/telegraf/pull/1466): MongoDB input plugin: adding per DB stats from db.stats()
- [#1389](https://github.com/influxdata/telegraf/pull/1389): Add a new and improved SNMP plugin (snmp2).
- [#1413](https://github.com/influxdata/telegraf/issues/1413): Separate container_version from container_image tag.
- [#1525](https://github.com/influxdata/telegraf/pull/1525): Support setting per-device and total metrics for Docker network and blockio.
- [#1466](https://github.com/influxdata/telegraf/pull/1466): MongoDB input plugin: adding per DB stats from db.stats()
### Bugfixes
@@ -161,6 +97,38 @@ time before a new metric is included by the plugin.
- [#1316](https://github.com/influxdata/telegraf/pull/1316): Removed leaked "database" tag on redis metrics. Thanks @PierreF!
- [#1323](https://github.com/influxdata/telegraf/issues/1323): Processes plugin: fix potential error with /proc/net/stat directory.
- [#1322](https://github.com/influxdata/telegraf/issues/1322): Fix rare RHEL 5.2 panic in gopsutil diskio gathering function.
- [#1330](https://github.com/influxdata/telegraf/issues/1330): Fix exec plugin panic when using single binary.
- [#1336](https://github.com/influxdata/telegraf/issues/1336): Fixed incorrect prometheus metrics source selection.
- [#1112](https://github.com/influxdata/telegraf/issues/1112): Set default Zookeeper chroot to empty string.
- [#1335](https://github.com/influxdata/telegraf/issues/1335): Fix overall ping timeout to be calculated based on per-ping timeout.
- [#1374](https://github.com/influxdata/telegraf/pull/1374): Change "default" retention policy to "".
- [#1377](https://github.com/influxdata/telegraf/issues/1377): Graphite output mangling '%' character.
- [#1396](https://github.com/influxdata/telegraf/pull/1396): Prometheus input plugin now supports x509 certs authentication
- [#1472](https://github.com/influxdata/telegraf/pull/1472): diskio input plugin: set 'skip_serial_number = true' by default to avoid high cardinality.
- [#1426](https://github.com/influxdata/telegraf/pull/1426): nil metrics panic fix.
- [#1384](https://github.com/influxdata/telegraf/pull/1384): Fix datarace in apache input plugin.
- [#1399](https://github.com/influxdata/telegraf/issues/1399): Add `read_repairs` statistics to riak plugin.
- [#1405](https://github.com/influxdata/telegraf/issues/1405): Fix memory/connection leak in prometheus input plugin.
- [#1378](https://github.com/influxdata/telegraf/issues/1378): Trim BOM from config file for Windows support.
- [#1339](https://github.com/influxdata/telegraf/issues/1339): Prometheus client output panic on service reload.
- [#1461](https://github.com/influxdata/telegraf/pull/1461): Prometheus parser, protobuf format header fix.
- [#1334](https://github.com/influxdata/telegraf/issues/1334): Prometheus output, metric refresh and caching fixes.
- [#1432](https://github.com/influxdata/telegraf/issues/1432): Panic fix for multiple graphite outputs under very high load.
- [#1412](https://github.com/influxdata/telegraf/pull/1412): Instrumental output has better reconnect behavior
- [#1460](https://github.com/influxdata/telegraf/issues/1460): Remove PID from procstat plugin to fix cardinality issues.
- [#1427](https://github.com/influxdata/telegraf/issues/1427): Cassandra input: version 2.x "column family" fix.
- [#1463](https://github.com/influxdata/telegraf/issues/1463): Shared WaitGroup in Exec plugin
- [#1436](https://github.com/influxdata/telegraf/issues/1436): logparser: honor modifiers in "pattern" config.
- [#1418](https://github.com/influxdata/telegraf/issues/1418): logparser: error and exit on file permissions/missing errors.
- [#1499](https://github.com/influxdata/telegraf/pull/1499): Make the user able to specify full path for HAproxy stats
- [#1521](https://github.com/influxdata/telegraf/pull/1521): Fix Redis url, an extra "tcp://" was added.
- [#1519](https://github.com/influxdata/telegraf/pull/1519): Fix error race conditions and partial failures.
- [#1477](https://github.com/influxdata/telegraf/issues/1477): nstat: fix inaccurate config panic.
- [#1481](https://github.com/influxdata/telegraf/issues/1481): jolokia: fix handling multiple multi-dimensional attributes.
- [#1430](https://github.com/influxdata/telegraf/issues/1430): Fix prometheus character sanitizing. Sanitize more win_perf_counters characters.
- [#1534](https://github.com/influxdata/telegraf/pull/1534): Add diskio io_time to FreeBSD & report timing metrics as ms (as linux does).
- [#1379](https://github.com/influxdata/telegraf/issues/1379): Fix covering Amazon Linux for post remove flow.
## v0.13.1 [2016-05-24]


@@ -190,7 +190,6 @@ Currently implemented sources:
* [riak](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/riak)
* [sensors ](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/sensors) (only available if built from source)
* [snmp](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/snmp)
* [snmp2](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/snmp2)
* [sql server](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/sqlserver) (microsoft)
* [twemproxy](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/twemproxy)
* [varnish](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/varnish)


@@ -1377,8 +1377,56 @@
# servers = ["http://localhost:8098"]
# # Retrieves SNMP values from remote agents
# [[inputs.snmp]]
# agents = [ "127.0.0.1:161" ]
# version = 2 # Values: 1, 2, or 3
#
# ## SNMPv1 & SNMPv2 parameters
# community = "public"
#
# ## SNMPv2 & SNMPv3 parameters
# max_repetitions = 50
#
# ## SNMPv3 parameters
# #sec_name = "myuser"
# #auth_protocol = "md5" # Values: "MD5", "SHA", ""
# #auth_password = "password123"
# #sec_level = "authNoPriv" # Values: "noAuthNoPriv", "authNoPriv", "authPriv"
# #context_name = ""
# #priv_protocol = "" # Values: "DES", "AES", ""
# #priv_password = ""
#
# ## measurement name
# name = "system"
# [[inputs.snmp.field]]
# name = "hostname"
# oid = ".1.2.3.0.1.1"
# [[inputs.snmp.field]]
# name = "uptime"
# oid = ".1.2.3.0.1.200"
# [[inputs.snmp.field]]
# name = "load"
# oid = ".1.2.3.0.1.201"
#
# [[inputs.snmp.table]]
# # measurement name
# name = "remote_servers"
# inherit_tags = [ "hostname" ]
# [[inputs.snmp.table.field]]
# name = "server"
# oid = ".1.2.3.0.0.0"
# is_tag = true
# [[inputs.snmp.table.field]]
# name = "connections"
# oid = ".1.2.3.0.0.1"
# [[inputs.snmp.table.field]]
# name = "latency"
# oid = ".1.2.3.0.0.2"
# # DEPRECATED: This will be removed in a future release.
# [[inputs.snmp_legacy]]
# ## Use 'oids.txt' file to translate oids to names
# ## To generate 'oids.txt' you need to run:
# ## snmptranslate -m all -Tz -On | sed -e 's/"//g' > /tmp/oids.txt
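As a supplement to the sample above, a sketch of what an SNMPv3 setup for the new plugin could look like, using only the options listed in that sample; the credentials and the sysUpTime OID are placeholder values:

```toml
[[inputs.snmp]]
  agents = [ "127.0.0.1:161" ]
  version = 3
  max_repetitions = 50

  ## SNMPv3 authentication and privacy ("authPriv")
  sec_name = "myuser"
  sec_level = "authPriv"
  auth_protocol = "SHA"
  auth_password = "password123"
  priv_protocol = "AES"
  priv_password = "supersecret"

  name = "system"
  [[inputs.snmp.field]]
    name = "uptime"
    oid = ".1.3.6.1.2.1.1.3.0"
```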


@@ -61,7 +61,7 @@ import (
_ "github.com/influxdata/telegraf/plugins/inputs/riak"
_ "github.com/influxdata/telegraf/plugins/inputs/sensors"
_ "github.com/influxdata/telegraf/plugins/inputs/snmp"
_ "github.com/influxdata/telegraf/plugins/inputs/snmp_legacy"
_ "github.com/influxdata/telegraf/plugins/inputs/sqlserver"
_ "github.com/influxdata/telegraf/plugins/inputs/statsd"
_ "github.com/influxdata/telegraf/plugins/inputs/sysstat"


@@ -1,549 +1,130 @@
# SNMP Plugin

The SNMP input plugin gathers metrics from SNMP agents.
It is an alternative to the SNMP plugin with significantly different configuration & behavior.

## Configuration:

### Example:

SNMP data:
```
.1.2.3.0.0.1.0 octet_str "foo"
.1.2.3.0.0.1.1 octet_str "bar"
.1.2.3.0.0.102 octet_str "bad"
.1.2.3.0.0.2.0 integer 1
.1.2.3.0.0.2.1 integer 2
.1.2.3.0.0.3.0 octet_str "0.123"
.1.2.3.0.0.3.1 octet_str "0.456"
.1.2.3.0.0.3.2 octet_str "9.999"
.1.2.3.0.1 octet_str "baz"
.1.2.3.0.2 uinteger 54321
.1.2.3.0.3 uinteger 234
```

Telegraf config:
```toml
[[inputs.snmp]]
  agents = [ "127.0.0.1:161" ]
  version = 2
  community = "public"

  name = "system"
  [[inputs.snmp.field]]
    name = "hostname"
    oid = ".1.2.3.0.1"
    is_tag = true
  [[inputs.snmp.field]]
    name = "uptime"
    oid = ".1.2.3.0.2"
  [[inputs.snmp.field]]
    name = "loadavg"
    oid = ".1.2.3.0.3"
    conversion = "float(2)"

  [[inputs.snmp.table]]
    name = "remote_servers"
    inherit_tags = [ "hostname" ]
    [[inputs.snmp.table.field]]
      name = "server"
      oid = ".1.2.3.0.0.1"
      is_tag = true
    [[inputs.snmp.table.field]]
      name = "connections"
      oid = ".1.2.3.0.0.2"
    [[inputs.snmp.table.field]]
      name = "latency"
      oid = ".1.2.3.0.0.3"
      conversion = "float"
```

Resulting output:
```
* Plugin: snmp, Collection 1
> system,agent_host=127.0.0.1,host=mylocalhost,hostname=baz loadavg=2.34,uptime=54321i 1468953135000000000
> remote_servers,agent_host=127.0.0.1,host=mylocalhost,hostname=baz,server=foo connections=1i,latency=0.123 1468953135000000000
> remote_servers,agent_host=127.0.0.1,host=mylocalhost,hostname=baz,server=bar connections=2i,latency=0.456 1468953135000000000
```

### Config parameters

* `agents`: Default: `[]`
List of SNMP agents to connect to in the form of `IP[:PORT]`. If `:PORT` is unspecified, it defaults to `161`.

* `version`: Default: `2`
SNMP protocol version to use.

* `community`: Default: `"public"`
SNMP community to use.

* `max_repetitions`: Default: `50`
Maximum number of iterations for repeating variables.

* `sec_name`:
Security name for authenticated SNMPv3 requests.

* `auth_protocol`: Values: `"MD5"`,`"SHA"`,`""`. Default: `""`
Authentication protocol for authenticated SNMPv3 requests.

* `auth_password`:
Authentication password for authenticated SNMPv3 requests.

* `sec_level`: Values: `"noAuthNoPriv"`,`"authNoPriv"`,`"authPriv"`. Default: `"noAuthNoPriv"`
Security level used for SNMPv3 messages.

* `context_name`:
Context name used for SNMPv3 requests.

* `priv_protocol`: Values: `"DES"`,`"AES"`,`""`. Default: `""`
Privacy protocol used for encrypted SNMPv3 messages.

* `priv_password`:
Privacy password used for encrypted SNMPv3 messages.

* `name`:
Output measurement name.

#### Field parameters:

* `name`:
Output field/tag name.

* `oid`:
OID to get. Must be in dotted notation, not textual.

* `is_tag`:
Output this field as a tag.

* `conversion`: Values: `"float(X)"`,`"float"`,`"int"`,`""`. Default: `""`
Converts the value according to the given specification.
  - `float(X)`: Converts the input value into a float and divides by the Xth power of 10. Effectively just moves the decimal left X places. For example a value of `123` with `float(2)` will result in `1.23`.
  - `float`: Converts the value into a float with no adjustment. Same as `float(0)`.
  - `int`: Converts the value into an integer.

#### Table parameters:

* `name`:
Output measurement name.

* `inherit_tags`:
Which tags to inherit from the top-level config and to use in the output of this table's measurement.

# SNMP Input Plugin

The SNMP input plugin gathers metrics from SNMP agents

### Configuration:

#### Very simple example

In this example, the plugin will gather value of OIDS:

- `.1.3.6.1.2.1.2.2.1.4.1`

```toml
# Very Simple Example
[[inputs.snmp]]

[[inputs.snmp.host]]
address = "127.0.0.1:161"
# SNMP community
community = "public" # default public
# SNMP version (1, 2 or 3)
# Version 3 not supported yet
version = 2 # default 2
# Simple list of OIDs to get, in addition to "collect"
get_oids = [".1.3.6.1.2.1.2.2.1.4.1"]
```
#### Simple example
In this example, Telegraf gathers value of OIDS:
- named **ifnumber**
- named **interface_speed**
With **inputs.snmp.get** section the plugin gets the oid number:
- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed*
As you can see *ifSpeed* is not a valid OID. In order to get
the valid OID, the plugin uses `snmptranslate_file` to match the OID:
- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed* => `.1.3.6.1.2.1.2.2.1.5`
Also, the plugin will append `instance` to the corresponding OID:
- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed* => `.1.3.6.1.2.1.2.2.1.5.1`
In this example, the plugin will gather value of OIDS:
- `.1.3.6.1.2.1.2.1.0`
- `.1.3.6.1.2.1.2.2.1.5.1`
```toml
# Simple example
[[inputs.snmp]]
## Use 'oids.txt' file to translate oids to names
## To generate 'oids.txt' you need to run:
## snmptranslate -m all -Tz -On | sed -e 's/"//g' > /tmp/oids.txt
## Or if you have an other MIB folder with custom MIBs
## snmptranslate -M /mycustommibfolder -Tz -On -m all | sed -e 's/"//g' > oids.txt
snmptranslate_file = "/tmp/oids.txt"
[[inputs.snmp.host]]
address = "127.0.0.1:161"
# SNMP community
community = "public" # default public
# SNMP version (1, 2 or 3)
# Version 3 not supported yet
version = 2 # default 2
# Which get/bulk do you want to collect for this host
collect = ["ifnumber", "interface_speed"]
[[inputs.snmp.get]]
name = "ifnumber"
oid = ".1.3.6.1.2.1.2.1.0"
[[inputs.snmp.get]]
name = "interface_speed"
oid = "ifSpeed"
instance = "1"
```
#### Simple bulk example
In this example, Telegraf gathers value of OIDS:
- named **ifnumber**
- named **interface_speed**
- named **if_out_octets**
With **inputs.snmp.get** section the plugin gets oid number:
- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed*
With **inputs.snmp.bulk** section the plugin gets the oid number:
- **if_out_octets** => *ifOutOctets*
As you can see, *ifSpeed* and *ifOutOctets* are not valid OIDs.
In order to get the valid OID, the plugin uses `snmptranslate_file`
to match the OID:
- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed* => `.1.3.6.1.2.1.2.2.1.5`
- **if_out_octets** => *ifOutOctets* => `.1.3.6.1.2.1.2.2.1.16`
Also, the plugin will append `instance` to the corresponding OID:
- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed* => `.1.3.6.1.2.1.2.2.1.5.1`
And **if_out_octets** is a bulk request, the plugin will gather all
OIDS in the table.
- `.1.3.6.1.2.1.2.2.1.16.1`
- `.1.3.6.1.2.1.2.2.1.16.2`
- `.1.3.6.1.2.1.2.2.1.16.3`
- `.1.3.6.1.2.1.2.2.1.16.4`
- `.1.3.6.1.2.1.2.2.1.16.5`
- `...`
In this example, the plugin will gather value of OIDS:
- `.1.3.6.1.2.1.2.1.0`
- `.1.3.6.1.2.1.2.2.1.5.1`
- `.1.3.6.1.2.1.2.2.1.16.1`
- `.1.3.6.1.2.1.2.2.1.16.2`
- `.1.3.6.1.2.1.2.2.1.16.3`
- `.1.3.6.1.2.1.2.2.1.16.4`
- `.1.3.6.1.2.1.2.2.1.16.5`
- `...`
```toml
# Simple bulk example
[[inputs.snmp]]
## Use 'oids.txt' file to translate oids to names
## To generate 'oids.txt' you need to run:
## snmptranslate -m all -Tz -On | sed -e 's/"//g' > /tmp/oids.txt
## Or if you have an other MIB folder with custom MIBs
## snmptranslate -M /mycustommibfolder -Tz -On -m all | sed -e 's/"//g' > oids.txt
snmptranslate_file = "/tmp/oids.txt"
[[inputs.snmp.host]]
address = "127.0.0.1:161"
# SNMP community
community = "public" # default public
# SNMP version (1, 2 or 3)
# Version 3 not supported yet
version = 2 # default 2
# Which get/bulk do you want to collect for this host
collect = ["interface_speed", "if_number", "if_out_octets"]
[[inputs.snmp.get]]
name = "interface_speed"
oid = "ifSpeed"
instance = "1"
[[inputs.snmp.get]]
name = "if_number"
oid = "ifNumber"
[[inputs.snmp.bulk]]
name = "if_out_octets"
oid = "ifOutOctets"
```
#### Table example
In this example, we remove the collect attribute from the host section,
but you can still use it in combination with the following part.
Note: This example is like a bulk request but using another
configuration.
Telegraf gathers value of OIDS of the table:
- named **iftable1**
With **inputs.snmp.table** section the plugin gets oid number:
- **iftable1** => `.1.3.6.1.2.1.31.1.1.1`
Also **iftable1** is a table, the plugin will gather all
OIDS in the table and in the subtables
- `.1.3.6.1.2.1.31.1.1.1.1`
- `.1.3.6.1.2.1.31.1.1.1.1.1`
- `.1.3.6.1.2.1.31.1.1.1.1.2`
- `.1.3.6.1.2.1.31.1.1.1.1.3`
- `.1.3.6.1.2.1.31.1.1.1.1.4`
- `.1.3.6.1.2.1.31.1.1.1.1....`
- `.1.3.6.1.2.1.31.1.1.1.2`
- `.1.3.6.1.2.1.31.1.1.1.2....`
- `.1.3.6.1.2.1.31.1.1.1.3`
- `.1.3.6.1.2.1.31.1.1.1.3....`
- `.1.3.6.1.2.1.31.1.1.1.4`
- `.1.3.6.1.2.1.31.1.1.1.4....`
- `.1.3.6.1.2.1.31.1.1.1.5`
- `.1.3.6.1.2.1.31.1.1.1.5....`
- `.1.3.6.1.2.1.31.1.1.1.6....`
- `...`
```toml
# Table example
[[inputs.snmp]]
## Use 'oids.txt' file to translate oids to names
## To generate 'oids.txt' you need to run:
## snmptranslate -m all -Tz -On | sed -e 's/"//g' > /tmp/oids.txt
## Or if you have an other MIB folder with custom MIBs
## snmptranslate -M /mycustommibfolder -Tz -On -m all | sed -e 's/"//g' > oids.txt
snmptranslate_file = "/tmp/oids.txt"
[[inputs.snmp.host]]
address = "127.0.0.1:161"
# SNMP community
community = "public" # default public
# SNMP version (1, 2 or 3)
# Version 3 not supported yet
version = 2 # default 2
# Which get/bulk do you want to collect for this host
# Which table do you want to collect
[[inputs.snmp.host.table]]
name = "iftable1"
# table without mapping neither subtables
# This is like bulk request
[[inputs.snmp.table]]
name = "iftable1"
oid = ".1.3.6.1.2.1.31.1.1.1"
```

#### Table with subtable example

In this example, we remove the collect attribute from the host section,
but you can still use it in combination with the following part.

Note: This example is like a bulk request but using another
configuration.
Telegraf gathers value of OIDS of the table:
- named **iftable2**
With **inputs.snmp.table** section *AND* **sub_tables** attribute,
the plugin will get OIDS from subtables:
- **iftable2** => `.1.3.6.1.2.1.2.2.1.13`
Also **iftable2** is a table, the plugin will gather all
OIDS in subtables:
- `.1.3.6.1.2.1.2.2.1.13.1`
- `.1.3.6.1.2.1.2.2.1.13.2`
- `.1.3.6.1.2.1.2.2.1.13.3`
- `.1.3.6.1.2.1.2.2.1.13.4`
- `.1.3.6.1.2.1.2.2.1.13....`
```toml
# Table with subtable example
[[inputs.snmp]]
## Use 'oids.txt' file to translate oids to names
## To generate 'oids.txt' you need to run:
## snmptranslate -m all -Tz -On | sed -e 's/"//g' > /tmp/oids.txt
## Or if you have an other MIB folder with custom MIBs
## snmptranslate -M /mycustommibfolder -Tz -On -m all | sed -e 's/"//g' > oids.txt
snmptranslate_file = "/tmp/oids.txt"
[[inputs.snmp.host]]
address = "127.0.0.1:161"
# SNMP community
community = "public" # default public
# SNMP version (1, 2 or 3)
# Version 3 not supported yet
version = 2 # default 2
# Which table do you want to collect
[[inputs.snmp.host.table]]
name = "iftable2"
# table without mapping but with subtables
[[inputs.snmp.table]]
name = "iftable2"
sub_tables = [".1.3.6.1.2.1.2.2.1.13"]
# note
# oid attribute is useless
``` ```
#### Table with mapping example

In this example, we remove the collect attribute from the host section,
but you can still use it in combination with the following part.

Telegraf gathers value of OIDS of the table:

- named **iftable3**

With **inputs.snmp.table** section the plugin gets oid number:

- **iftable3** => `.1.3.6.1.2.1.31.1.1.1`

Also **iftable2** is a table, the plugin will gather all
OIDS in the table and in the subtables
- `.1.3.6.1.2.1.31.1.1.1.1`
- `.1.3.6.1.2.1.31.1.1.1.1.1`
- `.1.3.6.1.2.1.31.1.1.1.1.2`
- `.1.3.6.1.2.1.31.1.1.1.1.3`
- `.1.3.6.1.2.1.31.1.1.1.1.4`
- `.1.3.6.1.2.1.31.1.1.1.1....`
- `.1.3.6.1.2.1.31.1.1.1.2`
- `.1.3.6.1.2.1.31.1.1.1.2....`
- `.1.3.6.1.2.1.31.1.1.1.3`
- `.1.3.6.1.2.1.31.1.1.1.3....`
- `.1.3.6.1.2.1.31.1.1.1.4`
- `.1.3.6.1.2.1.31.1.1.1.4....`
- `.1.3.6.1.2.1.31.1.1.1.5`
- `.1.3.6.1.2.1.31.1.1.1.5....`
- `.1.3.6.1.2.1.31.1.1.1.6....`
- `...`
But the **include_instances** attribute will filter which OIDS
will be gathered; as you can see, there is another attribute, `mapping_table`.
`include_instances` and `mapping_table` let you build a hash table
to filter only the OIDS you want.
Let's say, we have the following data on SNMP server:
- OID: `.1.3.6.1.2.1.31.1.1.1.1.1` has as value: `enp5s0`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.2` has as value: `enp5s1`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.3` has as value: `enp5s2`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.4` has as value: `eth0`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.5` has as value: `eth1`
The plugin will build the following hash table:

| instance name | instance id |
|---------------|-------------|
| `enp5s0` | `1` |
| `enp5s1` | `2` |
| `enp5s2` | `3` |
| `eth0` | `4` |
| `eth1` | `5` |
With the **include_instances** attribute, the plugin will gather
the following OIDS:
- `.1.3.6.1.2.1.31.1.1.1.1.1`
- `.1.3.6.1.2.1.31.1.1.1.1.5`
- `.1.3.6.1.2.1.31.1.1.1.2.1`
- `.1.3.6.1.2.1.31.1.1.1.2.5`
- `.1.3.6.1.2.1.31.1.1.1.3.1`
- `.1.3.6.1.2.1.31.1.1.1.3.5`
- `.1.3.6.1.2.1.31.1.1.1.4.1`
- `.1.3.6.1.2.1.31.1.1.1.4.5`
- `.1.3.6.1.2.1.31.1.1.1.5.1`
- `.1.3.6.1.2.1.31.1.1.1.5.5`
- `.1.3.6.1.2.1.31.1.1.1.6.1`
- `.1.3.6.1.2.1.31.1.1.1.6.5`
- `...`
Note: the plugin will add instance name as tag *instance*
```toml
# Simple table with mapping example
[[inputs.snmp]]
## Use 'oids.txt' file to translate oids to names
## To generate 'oids.txt' you need to run:
## snmptranslate -m all -Tz -On | sed -e 's/"//g' > /tmp/oids.txt
## Or if you have an other MIB folder with custom MIBs
## snmptranslate -M /mycustommibfolder -Tz -On -m all | sed -e 's/"//g' > oids.txt
snmptranslate_file = "/tmp/oids.txt"
[[inputs.snmp.host]]
address = "127.0.0.1:161"
# SNMP community
community = "public" # default public
# SNMP version (1, 2 or 3)
# Version 3 not supported yet
version = 2 # default 2
# Which table do you want to collect
[[inputs.snmp.host.table]]
name = "iftable3"
include_instances = ["enp5s0", "eth1"]
# table with mapping but without subtables
[[inputs.snmp.table]]
name = "iftable3"
oid = ".1.3.6.1.2.1.31.1.1.1"
# if empty. get all instances
mapping_table = ".1.3.6.1.2.1.31.1.1.1.1"
# if empty, get all subtables
```
#### Table with both mapping and subtable example

In this example, we remove the collect attribute from the host section,
but you can still use it in combination with the following part.

Telegraf gathers value of OIDS of the table:

- named **iftable4**

With **inputs.snmp.table** section *AND* **sub_tables** attribute,
the plugin will get OIDS from subtables:

- **iftable4** => `.1.3.6.1.2.1.31.1.1.1`

Also **iftable2** is a table, the plugin will gather all
OIDS in the table and in the subtables
- `.1.3.6.1.2.1.31.1.1.1.6.1`
- `.1.3.6.1.2.1.31.1.1.1.6.2`
- `.1.3.6.1.2.1.31.1.1.1.6.3`
- `.1.3.6.1.2.1.31.1.1.1.6.4`
- `.1.3.6.1.2.1.31.1.1.1.6....`
- `.1.3.6.1.2.1.31.1.1.1.10.1`
- `.1.3.6.1.2.1.31.1.1.1.10.2`
- `.1.3.6.1.2.1.31.1.1.1.10.3`
- `.1.3.6.1.2.1.31.1.1.1.10.4`
- `.1.3.6.1.2.1.31.1.1.1.10....`
But the **include_instances** attribute will filter which OIDS
will be gathered; as you can see, there is another attribute, `mapping_table`.
`include_instances` and `mapping_table` let you build a hash table
to filter only the OIDS you want.
Let's say, we have the following data on SNMP server:
- OID: `.1.3.6.1.2.1.31.1.1.1.1.1` has as value: `enp5s0`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.2` has as value: `enp5s1`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.3` has as value: `enp5s2`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.4` has as value: `eth0`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.5` has as value: `eth1`
The plugin will build the following hash table:
| instance name | instance id |
|---------------|-------------|
| `enp5s0` | `1` |
| `enp5s1` | `2` |
| `enp5s2` | `3` |
| `eth0` | `4` |
| `eth1` | `5` |
With the **include_instances** attribute, the plugin will gather
the following OIDS:
- `.1.3.6.1.2.1.31.1.1.1.6.1`
- `.1.3.6.1.2.1.31.1.1.1.6.5`
- `.1.3.6.1.2.1.31.1.1.1.10.1`
- `.1.3.6.1.2.1.31.1.1.1.10.5`
Note: the plugin will add instance name as tag *instance*
```toml
# Table with both mapping and subtable example
[[inputs.snmp]]
## Use 'oids.txt' file to translate oids to names
## To generate 'oids.txt' you need to run:
## snmptranslate -m all -Tz -On | sed -e 's/"//g' > /tmp/oids.txt
## Or if you have an other MIB folder with custom MIBs
## snmptranslate -M /mycustommibfolder -Tz -On -m all | sed -e 's/"//g' > oids.txt
snmptranslate_file = "/tmp/oids.txt"
[[inputs.snmp.host]]
address = "127.0.0.1:161"
# SNMP community
community = "public" # default public
# SNMP version (1, 2 or 3)
# Version 3 not supported yet
version = 2 # default 2
# Which table do you want to collect
[[inputs.snmp.host.table]]
name = "iftable4"
include_instances = ["enp5s0", "eth1"]
# table with both mapping and subtables
[[inputs.snmp.table]]
name = "iftable4"
# if empty get all instances
mapping_table = ".1.3.6.1.2.1.31.1.1.1.1"
# if empty get all subtables
# sub_tables could be not "real subtables"
sub_tables=[".1.3.6.1.2.1.2.2.1.13", "bytes_recv", "bytes_send"]
# note
# oid attribute is useless
# SNMP SUBTABLES
[[inputs.snmp.subtable]]
name = "bytes_recv"
oid = ".1.3.6.1.2.1.31.1.1.1.6"
unit = "octets"
[[inputs.snmp.subtable]]
name = "bytes_send"
oid = ".1.3.6.1.2.1.31.1.1.1.10"
unit = "octets"
```
#### Configuration notes
- In the **inputs.snmp.table** section, the `oid` attribute is useless if
the `sub_tables` attribute is defined
- In the **inputs.snmp.subtable** section, you can put a name from the `snmptranslate_file`
as the `oid` attribute instead of a valid OID (see the sketch below)
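For instance, a minimal sketch of that second note, reusing the subtable from the example above but with a translated name instead of the numeric OID (assuming `ifHCInOctets` is present in the generated `oids.txt`):

```toml
[[inputs.snmp.subtable]]
  name = "bytes_recv"
  # a name from the snmptranslate_file can be used here instead of
  # the numeric OID .1.3.6.1.2.1.31.1.1.1.6
  oid = "ifHCInOctets"
  unit = "octets"
```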
### Measurements & Fields:
With the last example (Table with both mapping and subtable example):
- ifHCOutOctets
    - ifHCOutOctets
- ifInDiscards
    - ifInDiscards
- ifHCInOctets
    - ifHCInOctets
### Tags:
With the last example (Table with both mapping and subtable example):
- ifHCOutOctets
    - host
    - instance
    - unit
- ifInDiscards
    - host
    - instance
- ifHCInOctets
    - host
    - instance
    - unit
### Example Output:
With the last example (Table with both mapping and subtable example):
```
ifHCOutOctets,host=127.0.0.1,instance=enp5s0,unit=octets ifHCOutOctets=10565628i 1456878706044462901
ifInDiscards,host=127.0.0.1,instance=enp5s0 ifInDiscards=0i 1456878706044510264
ifHCInOctets,host=127.0.0.1,instance=enp5s0,unit=octets ifHCInOctets=76351777i 1456878706044531312
```

File diff suppressed because it is too large


@@ -1,482 +1,494 @@
package snmp

import (
"fmt"
"net"
"sync"
"testing"
"time"

"github.com/influxdata/telegraf/testutil"
"github.com/influxdata/toml"
"github.com/soniah/gosnmp"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)

type testSNMPConnection struct {
host string
values map[string]interface{}
}

func (tsc *testSNMPConnection) Host() string {
return tsc.host
}

func (tsc *testSNMPConnection) Get(oids []string) (*gosnmp.SnmpPacket, error) {
sp := &gosnmp.SnmpPacket{}
for _, oid := range oids {
v, ok := tsc.values[oid]
if !ok {
sp.Variables = append(sp.Variables, gosnmp.SnmpPDU{
Name: oid,
Type: gosnmp.NoSuchObject,
})
continue
}
sp.Variables = append(sp.Variables, gosnmp.SnmpPDU{
Name: oid,
Value: v,
})
}
return sp, nil
}

func (tsc *testSNMPConnection) Walk(oid string, wf gosnmp.WalkFunc) error {
for void, v := range tsc.values {
if void == oid || (len(void) > len(oid) && void[:len(oid)+1] == oid+".") {
if err := wf(gosnmp.SnmpPDU{
Name: void,
Value: v,
}); err != nil {
return err
}
}
}
return nil
}
var tsc = &testSNMPConnection{
host: "tsc",
values: map[string]interface{}{
".1.2.3.0.0.1.0": "foo",
".1.2.3.0.0.1.1": []byte("bar"),
".1.2.3.0.0.102": "bad",
".1.2.3.0.0.2.0": 1,
".1.2.3.0.0.2.1": 2,
".1.2.3.0.0.3.0": "0.123",
".1.2.3.0.0.3.1": "0.456",
".1.2.3.0.0.3.2": "9.999",
".1.2.3.0.0.4.0": 123456,
".1.2.3.0.1": "baz",
".1.2.3.0.2": 234,
".1.2.3.0.3": []byte("byte slice"),
},
}
func TestSampleConfig(t *testing.T) {
conf := struct {
Inputs struct {
Snmp []*Snmp
}
}{}
err := toml.Unmarshal([]byte("[[inputs.snmp]]\n"+(*Snmp)(nil).SampleConfig()), &conf)
assert.NoError(t, err)
s := Snmp{
Agents: []string{"127.0.0.1:161"},
Version: 2,
Community: "public",
MaxRepetitions: 50,
Name: "system",
Fields: []Field{
{Name: "hostname", Oid: ".1.2.3.0.1.1"},
{Name: "uptime", Oid: ".1.2.3.0.1.200"},
{Name: "load", Oid: ".1.2.3.0.1.201"},
},
Tables: []Table{
{
Name: "remote_servers",
InheritTags: []string{"hostname"},
Fields: []Field{
{Name: "server", Oid: ".1.2.3.0.0.0", IsTag: true},
{Name: "connections", Oid: ".1.2.3.0.0.1"},
{Name: "latency", Oid: ".1.2.3.0.0.2"},
},
},
},
}
assert.Equal(t, s, *conf.Inputs.Snmp[0])
}
func TestGetSNMPConnection_v2(t *testing.T) {
s := &Snmp{
Timeout: "3s",
Retries: 4,
Version: 2,
Community: "foo",
} }
gsc, err := s.getConnection("1.2.3.4:567")
require.NoError(t, err)
gs := gsc.(gosnmpWrapper)
assert.Equal(t, "1.2.3.4", gs.Target)
assert.EqualValues(t, 567, gs.Port)
assert.Equal(t, gosnmp.Version2c, gs.Version)
assert.Equal(t, "foo", gs.Community)
gsc, err = s.getConnection("1.2.3.4")
require.NoError(t, err)
gs = gsc.(gosnmpWrapper)
assert.Equal(t, "1.2.3.4", gs.Target)
assert.EqualValues(t, 161, gs.Port)
}
func TestGetSNMPConnection_v3(t *testing.T) {
s := &Snmp{
Version: 3,
MaxRepetitions: 20,
ContextName: "mycontext",
SecLevel: "authPriv",
SecName: "myuser",
AuthProtocol: "md5",
AuthPassword: "password123",
PrivProtocol: "des",
PrivPassword: "321drowssap",
EngineID: "myengineid",
EngineBoots: 1,
EngineTime: 2,
}
gsc, err := s.getConnection("1.2.3.4")
require.NoError(t, err)
gs := gsc.(gosnmpWrapper)
assert.Equal(t, gs.Version, gosnmp.Version3)
sp := gs.SecurityParameters.(*gosnmp.UsmSecurityParameters)
assert.Equal(t, "1.2.3.4", gsc.Host())
assert.Equal(t, 20, gs.MaxRepetitions)
assert.Equal(t, "mycontext", gs.ContextName)
assert.Equal(t, gosnmp.AuthPriv, gs.MsgFlags&gosnmp.AuthPriv)
assert.Equal(t, "myuser", sp.UserName)
assert.Equal(t, gosnmp.MD5, sp.AuthenticationProtocol)
assert.Equal(t, "password123", sp.AuthenticationPassphrase)
assert.Equal(t, gosnmp.DES, sp.PrivacyProtocol)
assert.Equal(t, "321drowssap", sp.PrivacyPassphrase)
assert.Equal(t, "myengineid", sp.AuthoritativeEngineID)
assert.EqualValues(t, 1, sp.AuthoritativeEngineBoots)
assert.EqualValues(t, 2, sp.AuthoritativeEngineTime)
}
func TestGetSNMPConnection_caching(t *testing.T) {
s := &Snmp{}
gs1, err := s.getConnection("1.2.3.4")
require.NoError(t, err)
gs2, err := s.getConnection("1.2.3.4")
require.NoError(t, err)
gs3, err := s.getConnection("1.2.3.5")
require.NoError(t, err)
assert.True(t, gs1 == gs2)
assert.False(t, gs2 == gs3)
}
func TestGosnmpWrapper_walk_retry(t *testing.T) {
srvr, err := net.ListenUDP("udp4", &net.UDPAddr{})
defer srvr.Close()
require.NoError(t, err)
reqCount := 0
// Set up a WaitGroup to wait for the server goroutine to exit and protect
// reqCount.
// Even though simultaneous access is impossible because the server will be
// blocked on ReadFrom, without this the race detector gets unhappy.
wg := sync.WaitGroup{}
wg.Add(1)
go func() {
defer wg.Done()
buf := make([]byte, 256)
for {
_, addr, err := srvr.ReadFrom(buf)
if err != nil {
return
}
reqCount++
srvr.WriteTo([]byte{'X'}, addr) // will cause decoding error
}
}()
gs := &gosnmp.GoSNMP{
Target: srvr.LocalAddr().(*net.UDPAddr).IP.String(),
Port: uint16(srvr.LocalAddr().(*net.UDPAddr).Port),
Version: gosnmp.Version2c,
Community: "public",
Timeout: time.Millisecond * 10,
Retries: 1,
}
err = gs.Connect()
require.NoError(t, err)
conn := gs.Conn
gsw := gosnmpWrapper{gs}
err = gsw.Walk(".1.2.3", func(_ gosnmp.SnmpPDU) error { return nil })
srvr.Close()
wg.Wait()
assert.Error(t, err)
assert.False(t, gs.Conn == conn)
assert.Equal(t, (gs.Retries+1)*2, reqCount)
}
func TestGosnmpWrapper_get_retry(t *testing.T) {
srvr, err := net.ListenUDP("udp4", &net.UDPAddr{})
defer srvr.Close()
require.NoError(t, err)
reqCount := 0
// Set up a WaitGroup to wait for the server goroutine to exit and protect
// reqCount.
// Even though simultaneous access is impossible because the server will be
// blocked on ReadFrom, without this the race detector gets unhappy.
wg := sync.WaitGroup{}
wg.Add(1)
go func() {
defer wg.Done()
buf := make([]byte, 256)
for {
_, addr, err := srvr.ReadFrom(buf)
if err != nil {
return
}
reqCount++
srvr.WriteTo([]byte{'X'}, addr) // will cause decoding error
}
}()
gs := &gosnmp.GoSNMP{
Target: srvr.LocalAddr().(*net.UDPAddr).IP.String(),
Port: uint16(srvr.LocalAddr().(*net.UDPAddr).Port),
Version: gosnmp.Version2c,
Community: "public",
Timeout: time.Millisecond * 10,
Retries: 1,
}
err = gs.Connect()
require.NoError(t, err)
conn := gs.Conn
gsw := gosnmpWrapper{gs}
_, err = gsw.Get([]string{".1.2.3"})
srvr.Close()
wg.Wait()
assert.Error(t, err)
assert.False(t, gs.Conn == conn)
assert.Equal(t, (gs.Retries+1)*2, reqCount)
}
func TestTableBuild_walk(t *testing.T) {
tbl := Table{
Name: "mytable",
Fields: []Field{
{
Name: "myfield1",
Oid: ".1.2.3.0.0.1",
IsTag: true,
},
{
Name: "myfield2",
Oid: ".1.2.3.0.0.2",
},
{
Name: "myfield3",
Oid: ".1.2.3.0.0.3",
Conversion: "float",
},
},
}
tb, err := tbl.Build(tsc, true)
require.NoError(t, err)
assert.Equal(t, tb.Name, "mytable")
rtr1 := RTableRow{
Tags: map[string]string{"myfield1": "foo"},
Fields: map[string]interface{}{"myfield2": 1, "myfield3": float64(0.123)},
}
rtr2 := RTableRow{
Tags: map[string]string{"myfield1": "bar"},
Fields: map[string]interface{}{"myfield2": 2, "myfield3": float64(0.456)},
}
assert.Len(t, tb.Rows, 2)
assert.Contains(t, tb.Rows, rtr1)
assert.Contains(t, tb.Rows, rtr2)
}
func TestTableBuild_noWalk(t *testing.T) {
tbl := Table{
Name: "mytable",
Fields: []Field{
{
Name: "myfield1",
Oid: ".1.2.3.0.1",
IsTag: true,
},
{
Name: "myfield2",
Oid: ".1.2.3.0.2",
},
{
Name: "myfield3",
Oid: ".1.2.3.0.2",
IsTag: true,
},
},
}
tb, err := tbl.Build(tsc, false)
require.NoError(t, err)
rtr := RTableRow{
Tags: map[string]string{"myfield1": "baz", "myfield3": "234"},
Fields: map[string]interface{}{"myfield2": 234},
}
assert.Len(t, tb.Rows, 1)
assert.Contains(t, tb.Rows, rtr)
}
func TestGather(t *testing.T) {
s := &Snmp{
Agents: []string{"TestGather"},
Name: "mytable",
Fields: []Field{
{
Name: "myfield1",
Oid: ".1.2.3.0.1",
IsTag: true,
},
{
Name: "myfield2",
Oid: ".1.2.3.0.2",
},
{
Name: "myfield3",
Oid: "1.2.3.0.1",
},
},
Tables: []Table{
{
Name: "myOtherTable",
InheritTags: []string{"myfield1"},
Fields: []Field{
{
Name: "myOtherField",
Oid: ".1.2.3.0.0.4",
},
},
},
},
connectionCache: map[string]snmpConnection{
"TestGather": tsc,
},
}
acc := &testutil.Accumulator{}
tstart := time.Now()
s.Gather(acc)
tstop := time.Now()
require.Len(t, acc.Metrics, 2)
m := acc.Metrics[0]
assert.Equal(t, "mytable", m.Measurement)
assert.Equal(t, "tsc", m.Tags["agent_host"])
assert.Equal(t, "baz", m.Tags["myfield1"])
assert.Len(t, m.Fields, 2)
assert.Equal(t, 234, m.Fields["myfield2"])
assert.Equal(t, "baz", m.Fields["myfield3"])
assert.True(t, tstart.Before(m.Time))
assert.True(t, tstop.After(m.Time))
m2 := acc.Metrics[1]
assert.Equal(t, "myOtherTable", m2.Measurement)
assert.Equal(t, "tsc", m2.Tags["agent_host"])
assert.Equal(t, "baz", m2.Tags["myfield1"])
assert.Len(t, m2.Fields, 1)
assert.Equal(t, 123456, m2.Fields["myOtherField"])
}
func TestGather_host(t *testing.T) {
s := &Snmp{
Agents: []string{"TestGather"},
Name: "mytable",
Fields: []Field{
{
Name: "host",
Oid: ".1.2.3.0.1",
IsTag: true,
},
{
Name: "myfield2",
Oid: ".1.2.3.0.2",
},
},
connectionCache: map[string]snmpConnection{
"TestGather": tsc,
},
}
acc := &testutil.Accumulator{}
s.Gather(acc)
require.Len(t, acc.Metrics, 1)
m := acc.Metrics[0]
assert.Equal(t, "baz", m.Tags["host"])
}
func TestFieldConvert(t *testing.T) {
testTable := []struct {
input interface{}
conv string
expected interface{}
}{
{[]byte("foo"), "", string("foo")},
{"0.123", "float", float64(0.123)},
{[]byte("0.123"), "float", float64(0.123)},
{float32(0.123), "float", float64(float32(0.123))},
{float64(0.123), "float", float64(0.123)},
{123, "float", float64(123)},
{123, "float(0)", float64(123)},
{123, "float(4)", float64(0.0123)},
{int8(123), "float(3)", float64(0.123)},
{int16(123), "float(3)", float64(0.123)},
{int32(123), "float(3)", float64(0.123)},
{int64(123), "float(3)", float64(0.123)},
{uint(123), "float(3)", float64(0.123)},
{uint8(123), "float(3)", float64(0.123)},
{uint16(123), "float(3)", float64(0.123)},
{uint32(123), "float(3)", float64(0.123)},
{uint64(123), "float(3)", float64(0.123)},
{"123", "int", int64(123)},
{[]byte("123"), "int", int64(123)},
{float32(12.3), "int", int64(12)},
{float64(12.3), "int", int64(12)},
{int(123), "int", int64(123)},
{int8(123), "int", int64(123)},
{int16(123), "int", int64(123)},
{int32(123), "int", int64(123)},
{int64(123), "int", int64(123)},
{uint(123), "int", int64(123)},
{uint8(123), "int", int64(123)},
{uint16(123), "int", int64(123)},
{uint32(123), "int", int64(123)},
{uint64(123), "int", int64(123)},
}
for _, tc := range testTable {
act := fieldConvert(tc.conv, tc.input)
assert.EqualValues(t, tc.expected, act, "input=%T(%v) conv=%s expected=%T(%v)", tc.input, tc.input, tc.conv, tc.expected, tc.expected)
}
}
func TestError(t *testing.T) {
e := fmt.Errorf("nested error")
err := Errorf(e, "top error %d", 123)
require.Error(t, err)

ne, ok := err.(NestedError)
require.True(t, ok)
assert.Equal(t, e, ne.NestedErr)

assert.Contains(t, err.Error(), "top error 123")
assert.Contains(t, err.Error(), "nested error")
}

func TestSNMPErrorGet1(t *testing.T) {
get1 := Data{
Name: "oid1",
Unit: "octets",
Oid: ".1.3.6.1.2.1.2.2.1.16.1",
}
h := Host{
Collect: []string{"oid1"},
}
s := Snmp{
SnmptranslateFile: "bad_oid.txt",
Host: []Host{h},
Get: []Data{get1},
}
var acc testutil.Accumulator
err := s.Gather(&acc)
require.Error(t, err)
}

func TestSNMPErrorGet2(t *testing.T) {
get1 := Data{
Name: "oid1",
Unit: "octets",
Oid: ".1.3.6.1.2.1.2.2.1.16.1",
}
h := Host{
Collect: []string{"oid1"},
}
s := Snmp{
Host: []Host{h},
Get: []Data{get1},
}
var acc testutil.Accumulator
err := s.Gather(&acc)
require.NoError(t, err)
assert.Equal(t, 0, len(acc.Metrics))
}
func TestSNMPErrorBulk(t *testing.T) {
bulk1 := Data{
Name: "oid1",
Unit: "octets",
Oid: ".1.3.6.1.2.1.2.2.1.16",
}
h := Host{
Address: testutil.GetLocalHost(),
Collect: []string{"oid1"},
}
s := Snmp{
Host: []Host{h},
Bulk: []Data{bulk1},
}
var acc testutil.Accumulator
err := s.Gather(&acc)
require.NoError(t, err)
assert.Equal(t, 0, len(acc.Metrics))
}
func TestSNMPGet1(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
get1 := Data{
Name: "oid1",
Unit: "octets",
Oid: ".1.3.6.1.2.1.2.2.1.16.1",
}
h := Host{
Address: testutil.GetLocalHost() + ":31161",
Community: "telegraf",
Version: 2,
Timeout: 2.0,
Retries: 2,
Collect: []string{"oid1"},
}
s := Snmp{
Host: []Host{h},
Get: []Data{get1},
}
var acc testutil.Accumulator
err := s.Gather(&acc)
require.NoError(t, err)
acc.AssertContainsTaggedFields(t,
"oid1",
map[string]interface{}{
"oid1": uint(543846),
},
map[string]string{
"unit": "octets",
"snmp_host": testutil.GetLocalHost(),
},
)
}
func TestSNMPGet2(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
get1 := Data{
Name: "oid1",
Oid: "ifNumber",
}
h := Host{
Address: testutil.GetLocalHost() + ":31161",
Community: "telegraf",
Version: 2,
Timeout: 2.0,
Retries: 2,
Collect: []string{"oid1"},
}
s := Snmp{
SnmptranslateFile: "./testdata/oids.txt",
Host: []Host{h},
Get: []Data{get1},
}
var acc testutil.Accumulator
err := s.Gather(&acc)
require.NoError(t, err)
acc.AssertContainsTaggedFields(t,
"ifNumber",
map[string]interface{}{
"ifNumber": int(4),
},
map[string]string{
"instance": "0",
"snmp_host": testutil.GetLocalHost(),
},
)
}
func TestSNMPGet3(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
get1 := Data{
Name: "oid1",
Unit: "octets",
Oid: "ifSpeed",
Instance: "1",
}
h := Host{
Address: testutil.GetLocalHost() + ":31161",
Community: "telegraf",
Version: 2,
Timeout: 2.0,
Retries: 2,
Collect: []string{"oid1"},
}
s := Snmp{
SnmptranslateFile: "./testdata/oids.txt",
Host: []Host{h},
Get: []Data{get1},
}
var acc testutil.Accumulator
err := s.Gather(&acc)
require.NoError(t, err)
acc.AssertContainsTaggedFields(t,
"ifSpeed",
map[string]interface{}{
"ifSpeed": uint(10000000),
},
map[string]string{
"unit": "octets",
"instance": "1",
"snmp_host": testutil.GetLocalHost(),
},
)
}
func TestSNMPEasyGet4(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
get1 := Data{
Name: "oid1",
Unit: "octets",
Oid: "ifSpeed",
Instance: "1",
}
h := Host{
Address: testutil.GetLocalHost() + ":31161",
Community: "telegraf",
Version: 2,
Timeout: 2.0,
Retries: 2,
Collect: []string{"oid1"},
GetOids: []string{"ifNumber"},
}
s := Snmp{
SnmptranslateFile: "./testdata/oids.txt",
Host: []Host{h},
Get: []Data{get1},
}
var acc testutil.Accumulator
err := s.Gather(&acc)
require.NoError(t, err)
acc.AssertContainsTaggedFields(t,
"ifSpeed",
map[string]interface{}{
"ifSpeed": uint(10000000),
},
map[string]string{
"unit": "octets",
"instance": "1",
"snmp_host": testutil.GetLocalHost(),
},
)
acc.AssertContainsTaggedFields(t,
"ifNumber",
map[string]interface{}{
"ifNumber": int(4),
},
map[string]string{
"instance": "0",
"snmp_host": testutil.GetLocalHost(),
},
)
}
func TestSNMPEasyGet5(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
get1 := Data{
Name: "oid1",
Unit: "octets",
Oid: "ifSpeed",
Instance: "1",
}
h := Host{
Address: testutil.GetLocalHost() + ":31161",
Community: "telegraf",
Version: 2,
Timeout: 2.0,
Retries: 2,
Collect: []string{"oid1"},
GetOids: []string{".1.3.6.1.2.1.2.1.0"},
}
s := Snmp{
SnmptranslateFile: "./testdata/oids.txt",
Host: []Host{h},
Get: []Data{get1},
}
var acc testutil.Accumulator
err := s.Gather(&acc)
require.NoError(t, err)
acc.AssertContainsTaggedFields(t,
"ifSpeed",
map[string]interface{}{
"ifSpeed": uint(10000000),
},
map[string]string{
"unit": "octets",
"instance": "1",
"snmp_host": testutil.GetLocalHost(),
},
)
acc.AssertContainsTaggedFields(t,
"ifNumber",
map[string]interface{}{
"ifNumber": int(4),
},
map[string]string{
"instance": "0",
"snmp_host": testutil.GetLocalHost(),
},
)
}
func TestSNMPEasyGet6(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
h := Host{
Address: testutil.GetLocalHost() + ":31161",
Community: "telegraf",
Version: 2,
Timeout: 2.0,
Retries: 2,
GetOids: []string{"1.3.6.1.2.1.2.1.0"},
}
s := Snmp{
SnmptranslateFile: "./testdata/oids.txt",
Host: []Host{h},
}
var acc testutil.Accumulator
err := s.Gather(&acc)
require.NoError(t, err)
acc.AssertContainsTaggedFields(t,
"ifNumber",
map[string]interface{}{
"ifNumber": int(4),
},
map[string]string{
"instance": "0",
"snmp_host": testutil.GetLocalHost(),
},
)
}
func TestSNMPBulk1(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
bulk1 := Data{
Name: "oid1",
Unit: "octets",
Oid: ".1.3.6.1.2.1.2.2.1.16",
MaxRepetition: 2,
}
h := Host{
Address: testutil.GetLocalHost() + ":31161",
Community: "telegraf",
Version: 2,
Timeout: 2.0,
Retries: 2,
Collect: []string{"oid1"},
}
s := Snmp{
SnmptranslateFile: "./testdata/oids.txt",
Host: []Host{h},
Bulk: []Data{bulk1},
}
var acc testutil.Accumulator
err := s.Gather(&acc)
require.NoError(t, err)
acc.AssertContainsTaggedFields(t,
"ifOutOctets",
map[string]interface{}{
"ifOutOctets": uint(543846),
},
map[string]string{
"unit": "octets",
"instance": "1",
"snmp_host": testutil.GetLocalHost(),
},
)
acc.AssertContainsTaggedFields(t,
"ifOutOctets",
map[string]interface{}{
"ifOutOctets": uint(26475179),
},
map[string]string{
"unit": "octets",
"instance": "2",
"snmp_host": testutil.GetLocalHost(),
},
)
acc.AssertContainsTaggedFields(t,
"ifOutOctets",
map[string]interface{}{
"ifOutOctets": uint(108963968),
},
map[string]string{
"unit": "octets",
"instance": "3",
"snmp_host": testutil.GetLocalHost(),
},
)
acc.AssertContainsTaggedFields(t,
"ifOutOctets",
map[string]interface{}{
"ifOutOctets": uint(12991453),
},
map[string]string{
"unit": "octets",
"instance": "36",
"snmp_host": testutil.GetLocalHost(),
},
)
}
// TODO: find out why Circle CI aborts with the following error
// when this test is enabled:
//   bash scripts/circle-test.sh died unexpectedly
// Perhaps the test simply takes too long?
func dTestSNMPBulk2(t *testing.T) {
bulk1 := Data{
Name: "oid1",
Unit: "octets",
Oid: "ifOutOctets",
MaxRepetition: 2,
}
h := Host{
Address: testutil.GetLocalHost() + ":31161",
Community: "telegraf",
Version: 2,
Timeout: 2.0,
Retries: 2,
Collect: []string{"oid1"},
}
s := Snmp{
SnmptranslateFile: "./testdata/oids.txt",
Host: []Host{h},
Bulk: []Data{bulk1},
}
var acc testutil.Accumulator
err := s.Gather(&acc)
require.NoError(t, err)
acc.AssertContainsTaggedFields(t,
"ifOutOctets",
map[string]interface{}{
"ifOutOctets": uint(543846),
},
map[string]string{
"unit": "octets",
"instance": "1",
"snmp_host": testutil.GetLocalHost(),
},
)
acc.AssertContainsTaggedFields(t,
"ifOutOctets",
map[string]interface{}{
"ifOutOctets": uint(26475179),
},
map[string]string{
"unit": "octets",
"instance": "2",
"snmp_host": testutil.GetLocalHost(),
},
)
acc.AssertContainsTaggedFields(t,
"ifOutOctets",
map[string]interface{}{
"ifOutOctets": uint(108963968),
},
map[string]string{
"unit": "octets",
"instance": "3",
"snmp_host": testutil.GetLocalHost(),
},
)
acc.AssertContainsTaggedFields(t,
"ifOutOctets",
map[string]interface{}{
"ifOutOctets": uint(12991453),
},
map[string]string{
"unit": "octets",
"instance": "36",
"snmp_host": testutil.GetLocalHost(),
},
)
}


@ -1,130 +0,0 @@
# SNMP2 Plugin
The SNMP2 input plugin gathers metrics from SNMP agents.
It is an alternative to the SNMP plugin with significantly different configuration & behavior.
## Configuration:
### Example:
SNMP data:
```
.1.2.3.0.0.1.0 octet_str "foo"
.1.2.3.0.0.1.1 octet_str "bar"
.1.2.3.0.0.102 octet_str "bad"
.1.2.3.0.0.2.0 integer 1
.1.2.3.0.0.2.1 integer 2
.1.2.3.0.0.3.0 octet_str "0.123"
.1.2.3.0.0.3.1 octet_str "0.456"
.1.2.3.0.0.3.2 octet_str "9.999"
.1.2.3.0.1 octet_str "baz"
.1.2.3.0.2 uinteger 54321
.1.2.3.0.3 uinteger 234
```
Telegraf config:
```toml
[[inputs.snmp2]]
agents = [ "127.0.0.1:161" ]
version = 2
community = "public"
name = "system"
[[inputs.snmp2.field]]
name = "hostname"
oid = ".1.2.3.0.1"
is_tag = true
[[inputs.snmp2.field]]
name = "uptime"
oid = ".1.2.3.0.2"
[[inputs.snmp2.field]]
name = "loadavg"
oid = ".1.2.3.0.3"
conversion = "float(2)"
[[inputs.snmp2.table]]
name = "remote_servers"
inherit_tags = [ "hostname" ]
[[inputs.snmp2.table.field]]
name = "server"
oid = ".1.2.3.0.0.1"
is_tag = true
[[inputs.snmp2.table.field]]
name = "connections"
oid = ".1.2.3.0.0.2"
[[inputs.snmp2.table.field]]
name = "latency"
oid = ".1.2.3.0.0.3"
conversion = "float"
```
Resulting output:
```
* Plugin: snmp2, Collection 1
> system,agent_host=127.0.0.1,host=mylocalhost,hostname=baz loadavg=2.34,uptime=54321i 1468953135000000000
> remote_servers,agent_host=127.0.0.1,host=mylocalhost,hostname=baz,server=foo connections=1i,latency=0.123 1468953135000000000
> remote_servers,agent_host=127.0.0.1,host=mylocalhost,hostname=baz,server=bar connections=2i,latency=0.456 1468953135000000000
```
### Config parameters
* `agents`: Default: `[]`
List of SNMP agents to connect to in the form of `IP[:PORT]`. If `:PORT` is unspecified, it defaults to `161`.
* `version`: Default: `2`
SNMP protocol version to use.
* `community`: Default: `"public"`
SNMP community to use.
* `max_repetitions`: Default: `50`
Maximum number of iterations for repeating variables.
* `sec_name`:
Security name for authenticated SNMPv3 requests.
* `auth_protocol`: Values: `"MD5"`,`"SHA"`,`""`. Default: `""`
Authentication protocol for authenticated SNMPv3 requests.
* `auth_password`:
Authentication password for authenticated SNMPv3 requests.
* `sec_level`: Values: `"noAuthNoPriv"`,`"authNoPriv"`,`"authPriv"`. Default: `"noAuthNoPriv"`
Security level used for SNMPv3 messages. A combined SNMPv3 example follows this parameter list.
* `context_name`:
Context name used for SNMPv3 requests.
* `priv_protocol`: Values: `"DES"`,`"AES"`,`""`. Default: `""`
Privacy protocol used for encrypted SNMPv3 messages.
* `priv_password`:
Privacy password used for encrypted SNMPv3 messages.
* `name`:
Output measurement name.
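Putting the SNMPv3 parameters above together, a hypothetical authenticated-and-encrypted agent configuration might look like the following sketch; the agent address, user name, and passwords are placeholders:
```toml
[[inputs.snmp2]]
  agents = [ "10.0.0.1:161" ]      # placeholder agent address
  version = 3
  sec_name = "myuser"              # placeholder SNMPv3 user
  sec_level = "authPriv"
  auth_protocol = "SHA"
  auth_password = "authpassword"   # placeholder
  priv_protocol = "AES"
  priv_password = "privpassword"   # placeholder
```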
#### Field parameters:
* `name`:
Output field/tag name.
* `oid`:
OID to get. Must be in dotted notation, not textual.
* `is_tag`:
Output this field as a tag.
* `conversion`: Values: `"float(X)"`,`"float"`,`"int"`,`""`. Default: `""`
Converts the value according to the given specification.
- `float(X)`: Converts the input value into a float and divides by the Xth power of 10. Effectively this just moves the decimal point left X places. For example, a value of `123` with `float(2)` will result in `1.23`.
- `float`: Converts the value into a float with no adjustment. Same as `float(0)`.
- `int`: Converts the value into an integer.
#### Table parameters:
* `name`:
Output measurement name.
* `inherit_tags`:
Which tags to inherit from the top-level config and to use in the output of this table's measurement.


@ -1,624 +0,0 @@
package snmp2
import (
"fmt"
"math"
"net"
"strconv"
"strings"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/soniah/gosnmp"
)
const description = `Retrieves SNMP values from remote agents`
const sampleConfig = `
agents = [ "127.0.0.1:161" ]
version = 2
# SNMPv1 & SNMPv2 parameters
community = "public"
# SNMPv2 & SNMPv3 parameters
max_repetitions = 50
# SNMPv3 parameters
#sec_name = "myuser"
#auth_protocol = "md5" # Values: "MD5", "SHA", ""
#auth_password = "password123"
#sec_level = "authNoPriv" # Values: "noAuthNoPriv", "authNoPriv", "authPriv"
#context_name = ""
#priv_protocol = "" # Values: "DES", "AES", ""
#priv_password = ""
# measurement name
name = "system"
[[inputs.snmp2.field]]
name = "hostname"
oid = ".1.2.3.0.1.1"
[[inputs.snmp2.field]]
name = "uptime"
oid = ".1.2.3.0.1.200"
[[inputs.snmp2.field]]
name = "load"
oid = ".1.2.3.0.1.201"
[[inputs.snmp2.table]]
# measurement name
name = "remote_servers"
inherit_tags = [ "hostname" ]
[[inputs.snmp2.table.field]]
name = "server"
oid = ".1.2.3.0.0.0"
is_tag = true
[[inputs.snmp2.table.field]]
name = "connections"
oid = ".1.2.3.0.0.1"
[[inputs.snmp2.table.field]]
name = "latency"
oid = ".1.2.3.0.0.2"
`
// Snmp2 holds the configuration for the plugin.
type Snmp2 struct {
// The SNMP agent to query. Format is ADDR[:PORT] (e.g. 1.2.3.4:161).
Agents []string
// Timeout to wait for a response. Value is anything accepted by time.ParseDuration().
Timeout string
Retries int
// Values: 1, 2, 3
Version uint8
// Parameters for Version 1 & 2
Community string
// Parameters for Version 2 & 3
MaxRepetitions uint
// Parameters for Version 3
ContextName string
// Values: "noAuthNoPriv", "authNoPriv", "authPriv"
SecLevel string
SecName string
// Values: "MD5", "SHA", "". Default: ""
AuthProtocol string
AuthPassword string
// Values: "DES", "AES", "". Default: ""
PrivProtocol string
PrivPassword string
EngineID string
EngineBoots uint32
EngineTime uint32
Tables []Table `toml:"table"`
// Name & Fields are the elements of a Table.
// Telegraf chokes if we try to embed a Table. So instead we have to embed the
// fields of a Table, and construct a Table during runtime.
Name string
Fields []Field `toml:"field"`
connectionCache map[string]snmpConnection
}
// Table holds the configuration for a SNMP table.
type Table struct {
// Name will be the name of the measurement.
Name string
// Which tags to inherit from the top-level config.
InheritTags []string
// Fields is the tags and values to look up.
Fields []Field `toml:"field"`
}
// Field holds the configuration for a Field to look up.
type Field struct {
// Name will be the name of the field.
Name string
// OID is the prefix for this field. The plugin will perform a walk through all
// OIDs with this as their parent. For each value found, the plugin will strip
// off the OID prefix, and use the remainder as the index. For multiple fields
// to show up in the same row, they must share the same index.
Oid string
// IsTag controls whether this OID is output as a tag or a value.
IsTag bool
// Conversion controls any type conversion that is done on the value.
// "float"/"float(0)" will convert the value into a float.
// "float(X)" will convert the value into a float, and then move the decimal before Xth right-most digit.
// "int" will conver the value into an integer.
Conversion string
}
// RTable is the resulting table built from a Table.
type RTable struct {
// Name is the name of the field, copied from Table.Name.
Name string
// Time is the time the table was built.
Time time.Time
// Rows are the rows that were found, one row for each table OID index found.
Rows []RTableRow
}
// RTableRow is the resulting row containing all the OID values which shared
// the same index.
type RTableRow struct {
// Tags are all the Field values which had IsTag=true.
Tags map[string]string
// Fields are all the Field values which had IsTag=false.
Fields map[string]interface{}
}
// Errors is a list of errors accumulated during an interval.
type Errors []error
func (errs Errors) Error() string {
s := ""
for _, err := range errs {
if s == "" {
s = err.Error()
} else {
s = s + ". " + err.Error()
}
}
return s
}
// NestedError wraps an error returned from deeper in the code.
type NestedError struct {
// Err is the error from where the NestedError was constructed.
Err error
// NestedError is the error that was passed back from the called function.
NestedErr error
}
// Error returns a concatenated string of all the nested errors.
func (ne NestedError) Error() string {
return ne.Err.Error() + ": " + ne.NestedErr.Error()
}
// Errorf is a convenience function for constructing a NestedError.
func Errorf(err error, msg string, format ...interface{}) error {
return NestedError{
NestedErr: err,
Err: fmt.Errorf(msg, format...),
}
}
func init() {
inputs.Add("snmp2", func() telegraf.Input {
return &Snmp2{
Retries: 5,
MaxRepetitions: 50,
}
})
}
// SampleConfig returns the default configuration of the input.
func (s *Snmp2) SampleConfig() string {
return sampleConfig
}
// Description returns a one-sentence description on the input.
func (s *Snmp2) Description() string {
return description
}
// Gather retrieves all the configured fields and tables.
// Any error encountered does not halt the process. The errors are accumulated
// and returned at the end.
func (s *Snmp2) Gather(acc telegraf.Accumulator) error {
var errs Errors
for _, agent := range s.Agents {
gs, err := s.getConnection(agent)
if err != nil {
errs = append(errs, Errorf(err, "agent %s", agent))
continue
}
// First come the top-level fields. We treat them as a table with an empty index.
t := Table{
Name: s.Name,
Fields: s.Fields,
}
topTags := map[string]string{}
if err := s.gatherTable(acc, gs, t, topTags, false); err != nil {
errs = append(errs, Errorf(err, "agent %s", agent))
}
// Now gather the real tables.
for _, t := range s.Tables {
if err := s.gatherTable(acc, gs, t, topTags, true); err != nil {
errs = append(errs, Errorf(err, "agent %s", agent))
}
}
}
if errs == nil {
return nil
}
return errs
}
func (s *Snmp2) gatherTable(acc telegraf.Accumulator, gs snmpConnection, t Table, topTags map[string]string, walk bool) error {
rt, err := t.Build(gs, walk)
if err != nil {
return err
}
for _, tr := range rt.Rows {
if !walk {
// top-level table. Add tags to topTags.
for k, v := range tr.Tags {
topTags[k] = v
}
} else {
// real table. Inherit any specified tags.
for _, k := range t.InheritTags {
if v, ok := topTags[k]; ok {
tr.Tags[k] = v
}
}
}
if _, ok := tr.Tags["agent_host"]; !ok {
tr.Tags["agent_host"] = gs.Host()
}
acc.AddFields(rt.Name, tr.Fields, tr.Tags, rt.Time)
}
return nil
}
// Build retrieves all the fields specified in the table and constructs the RTable.
func (t Table) Build(gs snmpConnection, walk bool) (*RTable, error) {
rows := map[string]RTableRow{}
tagCount := 0
for _, f := range t.Fields {
if f.IsTag {
tagCount++
}
if len(f.Oid) == 0 {
return nil, fmt.Errorf("cannot have empty OID")
}
var oid string
if f.Oid[0] == '.' {
oid = f.Oid
} else {
// make sure the OID has a leading "." because the BulkWalkAll results do, and the prefix needs to match
oid = "." + f.Oid
}
// ifv contains a mapping of table OID index to field value
ifv := map[string]interface{}{}
if !walk {
// This is used when fetching non-table fields. Fields configured at the top
// scope of the plugin.
// We fetch the fields directly, and add them to ifv as if the index were an
// empty string. This results in all the non-table fields sharing the same
// index, and being added on the same row.
if pkt, err := gs.Get([]string{oid}); err != nil {
return nil, Errorf(err, "performing get")
} else if pkt != nil && len(pkt.Variables) > 0 && pkt.Variables[0].Type != gosnmp.NoSuchObject {
ent := pkt.Variables[0]
ifv[ent.Name[len(oid):]] = fieldConvert(f.Conversion, ent.Value)
}
} else {
err := gs.Walk(oid, func(ent gosnmp.SnmpPDU) error {
if len(ent.Name) <= len(oid) || ent.Name[:len(oid)+1] != oid+"." {
return NestedError{} // break the walk
}
ifv[ent.Name[len(oid):]] = fieldConvert(f.Conversion, ent.Value)
return nil
})
if err != nil {
if _, ok := err.(NestedError); !ok {
return nil, Errorf(err, "performing bulk walk")
}
}
}
for i, v := range ifv {
rtr, ok := rows[i]
if !ok {
rtr = RTableRow{}
rtr.Tags = map[string]string{}
rtr.Fields = map[string]interface{}{}
rows[i] = rtr
}
if f.IsTag {
if vs, ok := v.(string); ok {
rtr.Tags[f.Name] = vs
} else {
rtr.Tags[f.Name] = fmt.Sprintf("%v", v)
}
} else {
rtr.Fields[f.Name] = v
}
}
}
rt := RTable{
Name: t.Name,
Time: time.Now(), //TODO record time at start
Rows: make([]RTableRow, 0, len(rows)),
}
for _, r := range rows {
if len(r.Tags) < tagCount {
// don't add rows which are missing tags, as without tags you can't filter
continue
}
rt.Rows = append(rt.Rows, r)
}
return &rt, nil
}
// snmpConnection is an interface which wraps a *gosnmp.GoSNMP object.
// We interact through an interface so we can mock it out in tests.
type snmpConnection interface {
Host() string
//BulkWalkAll(string) ([]gosnmp.SnmpPDU, error)
Walk(string, gosnmp.WalkFunc) error
Get(oids []string) (*gosnmp.SnmpPacket, error)
}
// gosnmpWrapper wraps a *gosnmp.GoSNMP object so we can use it as a snmpConnection.
type gosnmpWrapper struct {
*gosnmp.GoSNMP
}
// Host returns the value of GoSNMP.Target.
func (gsw gosnmpWrapper) Host() string {
return gsw.Target
}
// Walk wraps GoSNMP.Walk() or GoSNMP.BulkWalk(), depending on whether the
// connection is using SNMPv1 or newer.
// Also, if any error is encountered, it will reconnect once and try again.
func (gsw gosnmpWrapper) Walk(oid string, fn gosnmp.WalkFunc) error {
var err error
// On error, retry once.
// Unfortunately we can't distinguish between an error returned by gosnmp, and one returned by the walk function.
for i := 0; i < 2; i++ {
if gsw.Version == gosnmp.Version1 {
err = gsw.GoSNMP.Walk(oid, fn)
} else {
err = gsw.GoSNMP.BulkWalk(oid, fn)
}
if err == nil {
return nil
}
if err := gsw.GoSNMP.Connect(); err != nil {
return Errorf(err, "reconnecting")
}
}
return err
}
// Get wraps GoSNMP.Get().
// If any error is encountered, it will reconnect once and try again.
func (gsw gosnmpWrapper) Get(oids []string) (*gosnmp.SnmpPacket, error) {
var err error
var pkt *gosnmp.SnmpPacket
for i := 0; i < 2; i++ {
pkt, err = gsw.GoSNMP.Get(oids)
if err == nil {
return pkt, nil
}
if err := gsw.GoSNMP.Connect(); err != nil {
return nil, Errorf(err, "reconnecting")
}
}
return nil, err
}
// getConnection creates a snmpConnection (*gosnmp.GoSNMP) object and caches the
// result using `agent` as the cache key.
func (s *Snmp2) getConnection(agent string) (snmpConnection, error) {
if s.connectionCache == nil {
s.connectionCache = map[string]snmpConnection{}
}
if gs, ok := s.connectionCache[agent]; ok {
return gs, nil
}
gs := gosnmpWrapper{&gosnmp.GoSNMP{}}
host, portStr, err := net.SplitHostPort(agent)
if err != nil {
if err, ok := err.(*net.AddrError); !ok || err.Err != "missing port in address" {
return nil, Errorf(err, "parsing host")
}
host = agent
portStr = "161"
}
gs.Target = host
port, err := strconv.ParseUint(portStr, 10, 16)
if err != nil {
return nil, Errorf(err, "parsing port")
}
gs.Port = uint16(port)
if s.Timeout != "" {
if gs.Timeout, err = time.ParseDuration(s.Timeout); err != nil {
return nil, Errorf(err, "parsing timeout")
}
} else {
gs.Timeout = time.Second * 1
}
gs.Retries = s.Retries
switch s.Version {
case 3:
gs.Version = gosnmp.Version3
case 2, 0:
gs.Version = gosnmp.Version2c
case 1:
gs.Version = gosnmp.Version1
default:
return nil, fmt.Errorf("invalid version")
}
if s.Version < 3 {
if s.Community == "" {
gs.Community = "public"
} else {
gs.Community = s.Community
}
}
gs.MaxRepetitions = int(s.MaxRepetitions)
if s.Version == 3 {
gs.ContextName = s.ContextName
sp := &gosnmp.UsmSecurityParameters{}
gs.SecurityParameters = sp
gs.SecurityModel = gosnmp.UserSecurityModel
switch strings.ToLower(s.SecLevel) {
case "noauthnopriv", "":
gs.MsgFlags = gosnmp.NoAuthNoPriv
case "authnopriv":
gs.MsgFlags = gosnmp.AuthNoPriv
case "authpriv":
gs.MsgFlags = gosnmp.AuthPriv
default:
return nil, fmt.Errorf("invalid secLevel")
}
sp.UserName = s.SecName
switch strings.ToLower(s.AuthProtocol) {
case "md5":
sp.AuthenticationProtocol = gosnmp.MD5
case "sha":
sp.AuthenticationProtocol = gosnmp.SHA
case "":
sp.AuthenticationProtocol = gosnmp.NoAuth
default:
return nil, fmt.Errorf("invalid authProtocol")
}
sp.AuthenticationPassphrase = s.AuthPassword
switch strings.ToLower(s.PrivProtocol) {
case "des":
sp.PrivacyProtocol = gosnmp.DES
case "aes":
sp.PrivacyProtocol = gosnmp.AES
case "":
sp.PrivacyProtocol = gosnmp.NoPriv
default:
return nil, fmt.Errorf("invalid privProtocol")
}
sp.PrivacyPassphrase = s.PrivPassword
sp.AuthoritativeEngineID = s.EngineID
sp.AuthoritativeEngineBoots = s.EngineBoots
sp.AuthoritativeEngineTime = s.EngineTime
}
if err := gs.Connect(); err != nil {
return nil, Errorf(err, "setting up connection")
}
s.connectionCache[agent] = gs
return gs, nil
}
// fieldConvert converts from any type according to the conv specification
// "float"/"float(0)" will convert the value into a float.
// "float(X)" will convert the value into a float, and then move the decimal before Xth right-most digit.
// "int" will convert the value into an integer.
// "" will convert a byte slice into a string.
// Any other conv will return the input value unchanged.
func fieldConvert(conv string, v interface{}) interface{} {
if conv == "" {
if bs, ok := v.([]byte); ok {
return string(bs)
}
return v
}
var d int
if _, err := fmt.Sscanf(conv, "float(%d)", &d); err == nil || conv == "float" {
switch vt := v.(type) {
case float32:
v = float64(vt) / math.Pow10(d)
case float64:
v = float64(vt) / math.Pow10(d)
case int:
v = float64(vt) / math.Pow10(d)
case int8:
v = float64(vt) / math.Pow10(d)
case int16:
v = float64(vt) / math.Pow10(d)
case int32:
v = float64(vt) / math.Pow10(d)
case int64:
v = float64(vt) / math.Pow10(d)
case uint:
v = float64(vt) / math.Pow10(d)
case uint8:
v = float64(vt) / math.Pow10(d)
case uint16:
v = float64(vt) / math.Pow10(d)
case uint32:
v = float64(vt) / math.Pow10(d)
case uint64:
v = float64(vt) / math.Pow10(d)
case []byte:
vf, _ := strconv.ParseFloat(string(vt), 64)
v = vf / math.Pow10(d)
case string:
vf, _ := strconv.ParseFloat(vt, 64)
v = vf / math.Pow10(d)
}
}
if conv == "int" {
switch vt := v.(type) {
case float32:
v = int64(vt)
case float64:
v = int64(vt)
case int:
v = int64(vt)
case int8:
v = int64(vt)
case int16:
v = int64(vt)
case int32:
v = int64(vt)
case int64:
v = int64(vt)
case uint:
v = int64(vt)
case uint8:
v = int64(vt)
case uint16:
v = int64(vt)
case uint32:
v = int64(vt)
case uint64:
v = int64(vt)
case []byte:
v, _ = strconv.Atoi(string(vt))
case string:
v, _ = strconv.Atoi(vt)
}
}
return v
}


@ -1,493 +0,0 @@
package snmp2
import (
"fmt"
"net"
"sync"
"testing"
"time"
"github.com/influxdata/telegraf/testutil"
"github.com/influxdata/toml"
"github.com/soniah/gosnmp"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
type testSNMPConnection struct {
host string
values map[string]interface{}
}
func (tsc *testSNMPConnection) Host() string {
return tsc.host
}
func (tsc *testSNMPConnection) Get(oids []string) (*gosnmp.SnmpPacket, error) {
sp := &gosnmp.SnmpPacket{}
for _, oid := range oids {
v, ok := tsc.values[oid]
if !ok {
sp.Variables = append(sp.Variables, gosnmp.SnmpPDU{
Name: oid,
Type: gosnmp.NoSuchObject,
})
continue
}
sp.Variables = append(sp.Variables, gosnmp.SnmpPDU{
Name: oid,
Value: v,
})
}
return sp, nil
}
func (tsc *testSNMPConnection) Walk(oid string, wf gosnmp.WalkFunc) error {
for void, v := range tsc.values {
if void == oid || (len(void) > len(oid) && void[:len(oid)+1] == oid+".") {
if err := wf(gosnmp.SnmpPDU{
Name: void,
Value: v,
}); err != nil {
return err
}
}
}
return nil
}
var tsc = &testSNMPConnection{
host: "tsc",
values: map[string]interface{}{
".1.2.3.0.0.1.0": "foo",
".1.2.3.0.0.1.1": []byte("bar"),
".1.2.3.0.0.102": "bad",
".1.2.3.0.0.2.0": 1,
".1.2.3.0.0.2.1": 2,
".1.2.3.0.0.3.0": "0.123",
".1.2.3.0.0.3.1": "0.456",
".1.2.3.0.0.3.2": "9.999",
".1.2.3.0.0.4.0": 123456,
".1.2.3.0.1": "baz",
".1.2.3.0.2": 234,
".1.2.3.0.3": []byte("byte slice"),
},
}
func TestSampleConfig(t *testing.T) {
conf := struct {
Inputs struct {
Snmp2 []*Snmp2
}
}{}
err := toml.Unmarshal([]byte("[[inputs.snmp2]]\n"+(*Snmp2)(nil).SampleConfig()), &conf)
assert.NoError(t, err)
s := Snmp2{
Agents: []string{"127.0.0.1:161"},
Version: 2,
Community: "public",
MaxRepetitions: 50,
Name: "system",
Fields: []Field{
{Name: "hostname", Oid: ".1.2.3.0.1.1"},
{Name: "uptime", Oid: ".1.2.3.0.1.200"},
{Name: "load", Oid: ".1.2.3.0.1.201"},
},
Tables: []Table{
{
Name: "remote_servers",
InheritTags: []string{"hostname"},
Fields: []Field{
{Name: "server", Oid: ".1.2.3.0.0.0", IsTag: true},
{Name: "connections", Oid: ".1.2.3.0.0.1"},
{Name: "latency", Oid: ".1.2.3.0.0.2"},
},
},
},
}
assert.Equal(t, s, *conf.Inputs.Snmp2[0])
}
func TestGetSNMPConnection_v2(t *testing.T) {
s := &Snmp2{
Timeout: "3s",
Retries: 4,
Version: 2,
Community: "foo",
}
gsc, err := s.getConnection("1.2.3.4:567")
require.NoError(t, err)
gs := gsc.(gosnmpWrapper)
assert.Equal(t, "1.2.3.4", gs.Target)
assert.EqualValues(t, 567, gs.Port)
assert.Equal(t, gosnmp.Version2c, gs.Version)
assert.Equal(t, "foo", gs.Community)
gsc, err = s.getConnection("1.2.3.4")
require.NoError(t, err)
gs = gsc.(gosnmpWrapper)
assert.Equal(t, "1.2.3.4", gs.Target)
assert.EqualValues(t, 161, gs.Port)
}
func TestGetSNMPConnection_v3(t *testing.T) {
s := &Snmp2{
Version: 3,
MaxRepetitions: 20,
ContextName: "mycontext",
SecLevel: "authPriv",
SecName: "myuser",
AuthProtocol: "md5",
AuthPassword: "password123",
PrivProtocol: "des",
PrivPassword: "321drowssap",
EngineID: "myengineid",
EngineBoots: 1,
EngineTime: 2,
}
gsc, err := s.getConnection("1.2.3.4")
require.NoError(t, err)
gs := gsc.(gosnmpWrapper)
assert.Equal(t, gs.Version, gosnmp.Version3)
sp := gs.SecurityParameters.(*gosnmp.UsmSecurityParameters)
assert.Equal(t, "1.2.3.4", gsc.Host())
assert.Equal(t, 20, gs.MaxRepetitions)
assert.Equal(t, "mycontext", gs.ContextName)
assert.Equal(t, gosnmp.AuthPriv, gs.MsgFlags&gosnmp.AuthPriv)
assert.Equal(t, "myuser", sp.UserName)
assert.Equal(t, gosnmp.MD5, sp.AuthenticationProtocol)
assert.Equal(t, "password123", sp.AuthenticationPassphrase)
assert.Equal(t, gosnmp.DES, sp.PrivacyProtocol)
assert.Equal(t, "321drowssap", sp.PrivacyPassphrase)
assert.Equal(t, "myengineid", sp.AuthoritativeEngineID)
assert.EqualValues(t, 1, sp.AuthoritativeEngineBoots)
assert.EqualValues(t, 2, sp.AuthoritativeEngineTime)
}
func TestGetSNMPConnection_caching(t *testing.T) {
s := &Snmp2{}
gs1, err := s.getConnection("1.2.3.4")
require.NoError(t, err)
gs2, err := s.getConnection("1.2.3.4")
require.NoError(t, err)
gs3, err := s.getConnection("1.2.3.5")
require.NoError(t, err)
assert.True(t, gs1 == gs2)
assert.False(t, gs2 == gs3)
}
func TestGosnmpWrapper_walk_retry(t *testing.T) {
srvr, err := net.ListenUDP("udp4", &net.UDPAddr{})
defer srvr.Close()
require.NoError(t, err)
reqCount := 0
// Set up a WaitGroup to wait for the server goroutine to exit and protect
// reqCount.
// Even though simultaneous access is impossible because the server will be
// blocked on ReadFrom, without this the race detector gets unhappy.
wg := sync.WaitGroup{}
wg.Add(1)
go func() {
defer wg.Done()
buf := make([]byte, 256)
for {
_, addr, err := srvr.ReadFrom(buf)
if err != nil {
return
}
reqCount++
srvr.WriteTo([]byte{'X'}, addr) // will cause decoding error
}
}()
gs := &gosnmp.GoSNMP{
Target: srvr.LocalAddr().(*net.UDPAddr).IP.String(),
Port: uint16(srvr.LocalAddr().(*net.UDPAddr).Port),
Version: gosnmp.Version2c,
Community: "public",
Timeout: time.Millisecond * 10,
Retries: 1,
}
err = gs.Connect()
require.NoError(t, err)
conn := gs.Conn
gsw := gosnmpWrapper{gs}
err = gsw.Walk(".1.2.3", func(_ gosnmp.SnmpPDU) error { return nil })
srvr.Close()
wg.Wait()
assert.Error(t, err)
assert.False(t, gs.Conn == conn)
assert.Equal(t, (gs.Retries+1)*2, reqCount)
}
func TestGosnmpWrapper_get_retry(t *testing.T) {
srvr, err := net.ListenUDP("udp4", &net.UDPAddr{})
defer srvr.Close()
require.NoError(t, err)
reqCount := 0
// Set up a WaitGroup to wait for the server goroutine to exit and protect
// reqCount.
// Even though simultaneous access is impossible because the server will be
// blocked on ReadFrom, without this the race detector gets unhappy.
wg := sync.WaitGroup{}
wg.Add(1)
go func() {
defer wg.Done()
buf := make([]byte, 256)
for {
_, addr, err := srvr.ReadFrom(buf)
if err != nil {
return
}
reqCount++
srvr.WriteTo([]byte{'X'}, addr) // will cause decoding error
}
}()
gs := &gosnmp.GoSNMP{
Target: srvr.LocalAddr().(*net.UDPAddr).IP.String(),
Port: uint16(srvr.LocalAddr().(*net.UDPAddr).Port),
Version: gosnmp.Version2c,
Community: "public",
Timeout: time.Millisecond * 10,
Retries: 1,
}
err = gs.Connect()
require.NoError(t, err)
conn := gs.Conn
gsw := gosnmpWrapper{gs}
_, err = gsw.Get([]string{".1.2.3"})
srvr.Close()
wg.Wait()
assert.Error(t, err)
assert.False(t, gs.Conn == conn)
assert.Equal(t, (gs.Retries+1)*2, reqCount)
}
func TestTableBuild_walk(t *testing.T) {
tbl := Table{
Name: "mytable",
Fields: []Field{
{
Name: "myfield1",
Oid: ".1.2.3.0.0.1",
IsTag: true,
},
{
Name: "myfield2",
Oid: ".1.2.3.0.0.2",
},
{
Name: "myfield3",
Oid: ".1.2.3.0.0.3",
Conversion: "float",
},
},
}
tb, err := tbl.Build(tsc, true)
require.NoError(t, err)
assert.Equal(t, tb.Name, "mytable")
rtr1 := RTableRow{
Tags: map[string]string{"myfield1": "foo"},
Fields: map[string]interface{}{"myfield2": 1, "myfield3": float64(0.123)},
}
rtr2 := RTableRow{
Tags: map[string]string{"myfield1": "bar"},
Fields: map[string]interface{}{"myfield2": 2, "myfield3": float64(0.456)},
}
assert.Len(t, tb.Rows, 2)
assert.Contains(t, tb.Rows, rtr1)
assert.Contains(t, tb.Rows, rtr2)
}
func TestTableBuild_noWalk(t *testing.T) {
tbl := Table{
Name: "mytable",
Fields: []Field{
{
Name: "myfield1",
Oid: ".1.2.3.0.1",
IsTag: true,
},
{
Name: "myfield2",
Oid: ".1.2.3.0.2",
},
{
Name: "myfield3",
Oid: ".1.2.3.0.2",
IsTag: true,
},
},
}
tb, err := tbl.Build(tsc, false)
require.NoError(t, err)
rtr := RTableRow{
Tags: map[string]string{"myfield1": "baz", "myfield3": "234"},
Fields: map[string]interface{}{"myfield2": 234},
}
assert.Len(t, tb.Rows, 1)
assert.Contains(t, tb.Rows, rtr)
}
func TestGather(t *testing.T) {
s := &Snmp2{
Agents: []string{"TestGather"},
Name: "mytable",
Fields: []Field{
{
Name: "myfield1",
Oid: ".1.2.3.0.1",
IsTag: true,
},
{
Name: "myfield2",
Oid: ".1.2.3.0.2",
},
{
Name: "myfield3",
Oid: "1.2.3.0.1",
},
},
Tables: []Table{
{
Name: "myOtherTable",
InheritTags: []string{"myfield1"},
Fields: []Field{
{
Name: "myOtherField",
Oid: ".1.2.3.0.0.4",
},
},
},
},
connectionCache: map[string]snmpConnection{
"TestGather": tsc,
},
}
acc := &testutil.Accumulator{}
tstart := time.Now()
s.Gather(acc)
tstop := time.Now()
require.Len(t, acc.Metrics, 2)
m := acc.Metrics[0]
assert.Equal(t, "mytable", m.Measurement)
assert.Equal(t, "tsc", m.Tags["agent_host"])
assert.Equal(t, "baz", m.Tags["myfield1"])
assert.Len(t, m.Fields, 2)
assert.Equal(t, 234, m.Fields["myfield2"])
assert.Equal(t, "baz", m.Fields["myfield3"])
assert.True(t, tstart.Before(m.Time))
assert.True(t, tstop.After(m.Time))
m2 := acc.Metrics[1]
assert.Equal(t, "myOtherTable", m2.Measurement)
assert.Equal(t, "tsc", m2.Tags["agent_host"])
assert.Equal(t, "baz", m2.Tags["myfield1"])
assert.Len(t, m2.Fields, 1)
assert.Equal(t, 123456, m2.Fields["myOtherField"])
}
func TestGather_host(t *testing.T) {
s := &Snmp2{
Agents: []string{"TestGather"},
Name: "mytable",
Fields: []Field{
{
Name: "host",
Oid: ".1.2.3.0.1",
IsTag: true,
},
{
Name: "myfield2",
Oid: ".1.2.3.0.2",
},
},
connectionCache: map[string]snmpConnection{
"TestGather": tsc,
},
}
acc := &testutil.Accumulator{}
s.Gather(acc)
require.Len(t, acc.Metrics, 1)
m := acc.Metrics[0]
assert.Equal(t, "baz", m.Tags["host"])
}
func TestFieldConvert(t *testing.T) {
testTable := []struct {
input interface{}
conv string
expected interface{}
}{
{[]byte("foo"), "", string("foo")},
{"0.123", "float", float64(0.123)},
{[]byte("0.123"), "float", float64(0.123)},
{float32(0.123), "float", float64(float32(0.123))},
{float64(0.123), "float", float64(0.123)},
{123, "float", float64(123)},
{123, "float(0)", float64(123)},
{123, "float(4)", float64(0.0123)},
{int8(123), "float(3)", float64(0.123)},
{int16(123), "float(3)", float64(0.123)},
{int32(123), "float(3)", float64(0.123)},
{int64(123), "float(3)", float64(0.123)},
{uint(123), "float(3)", float64(0.123)},
{uint8(123), "float(3)", float64(0.123)},
{uint16(123), "float(3)", float64(0.123)},
{uint32(123), "float(3)", float64(0.123)},
{uint64(123), "float(3)", float64(0.123)},
{"123", "int", int64(123)},
{[]byte("123"), "int", int64(123)},
{float32(12.3), "int", int64(12)},
{float64(12.3), "int", int64(12)},
{int(123), "int", int64(123)},
{int8(123), "int", int64(123)},
{int16(123), "int", int64(123)},
{int32(123), "int", int64(123)},
{int64(123), "int", int64(123)},
{uint(123), "int", int64(123)},
{uint8(123), "int", int64(123)},
{uint16(123), "int", int64(123)},
{uint32(123), "int", int64(123)},
{uint64(123), "int", int64(123)},
}
for _, tc := range testTable {
act := fieldConvert(tc.conv, tc.input)
assert.EqualValues(t, tc.expected, act, "input=%T(%v) conv=%s expected=%T(%v)", tc.input, tc.input, tc.conv, tc.expected, tc.expected)
}
}
func TestError(t *testing.T) {
e := fmt.Errorf("nested error")
err := Errorf(e, "top error %d", 123)
require.Error(t, err)
ne, ok := err.(NestedError)
require.True(t, ok)
assert.Equal(t, e, ne.NestedErr)
assert.Contains(t, err.Error(), "top error 123")
assert.Contains(t, err.Error(), "nested error")
}


@ -0,0 +1,549 @@
# SNMP Input Plugin
The SNMP input plugin gathers metrics from SNMP agents.
### Configuration:
#### Very simple example
In this example, the plugin will gather the value of the following OID:
- `.1.3.6.1.2.1.2.2.1.4.1`
```toml
# Very Simple Example
[[inputs.snmp]]
[[inputs.snmp.host]]
address = "127.0.0.1:161"
# SNMP community
community = "public" # default public
# SNMP version (1, 2 or 3)
# Version 3 not supported yet
version = 2 # default 2
# Simple list of OIDs to get, in addition to "collect"
get_oids = [".1.3.6.1.2.1.2.2.1.4.1"]
```
#### Simple example
In this example, Telegraf gathers the values of the OIDs:
- named **ifnumber**
- named **interface_speed**
From the **inputs.snmp.get** sections, the plugin gets the OID numbers:
- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed*
As you can see, *ifSpeed* is not a valid OID. In order to get
the valid OID, the plugin uses `snmptranslate_file` to resolve it:
- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed* => `.1.3.6.1.2.1.2.2.1.5`
The plugin will also append the `instance` to the corresponding OID:
- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed* => `.1.3.6.1.2.1.2.2.1.5.1`
So in this example, the plugin will gather the values of the OIDs:
- `.1.3.6.1.2.1.2.1.0`
- `.1.3.6.1.2.1.2.2.1.5.1`
```toml
# Simple example
[[inputs.snmp]]
## Use 'oids.txt' file to translate oids to names
## To generate 'oids.txt' you need to run:
## snmptranslate -m all -Tz -On | sed -e 's/"//g' > /tmp/oids.txt
## Or if you have an other MIB folder with custom MIBs
## snmptranslate -M /mycustommibfolder -Tz -On -m all | sed -e 's/"//g' > oids.txt
snmptranslate_file = "/tmp/oids.txt"
[[inputs.snmp.host]]
address = "127.0.0.1:161"
# SNMP community
community = "public" # default public
# SNMP version (1, 2 or 3)
# Version 3 not supported yet
version = 2 # default 2
# Which get/bulk do you want to collect for this host
collect = ["ifnumber", "interface_speed"]
[[inputs.snmp.get]]
name = "ifnumber"
oid = ".1.3.6.1.2.1.2.1.0"
[[inputs.snmp.get]]
name = "interface_speed"
oid = "ifSpeed"
instance = "1"
```
#### Simple bulk example
In this example, Telegraf gathers the values of the OIDs:
- named **ifnumber**
- named **interface_speed**
- named **if_out_octets**
From the **inputs.snmp.get** sections, the plugin gets the OID numbers:
- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed*
From the **inputs.snmp.bulk** section, the plugin gets the OID number:
- **if_out_octets** => *ifOutOctets*
As you can see, *ifSpeed* and *ifOutOctets* are not valid OIDs.
In order to get the valid OIDs, the plugin uses `snmptranslate_file`
to resolve them:
- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed* => `.1.3.6.1.2.1.2.2.1.5`
- **if_out_octets** => *ifOutOctets* => `.1.3.6.1.2.1.2.2.1.16`
The plugin will also append the `instance` to the corresponding OID:
- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed* => `.1.3.6.1.2.1.2.2.1.5.1`
Since **if_out_octets** is a bulk request, the plugin will gather all
OIDs in the table:
- `.1.3.6.1.2.1.2.2.1.16.1`
- `.1.3.6.1.2.1.2.2.1.16.2`
- `.1.3.6.1.2.1.2.2.1.16.3`
- `.1.3.6.1.2.1.2.2.1.16.4`
- `.1.3.6.1.2.1.2.2.1.16.5`
- `...`
So in this example, the plugin will gather the values of the OIDs:
- `.1.3.6.1.2.1.2.1.0`
- `.1.3.6.1.2.1.2.2.1.5.1`
- `.1.3.6.1.2.1.2.2.1.16.1`
- `.1.3.6.1.2.1.2.2.1.16.2`
- `.1.3.6.1.2.1.2.2.1.16.3`
- `.1.3.6.1.2.1.2.2.1.16.4`
- `.1.3.6.1.2.1.2.2.1.16.5`
- `...`
```toml
# Simple bulk example
[[inputs.snmp]]
## Use 'oids.txt' file to translate oids to names
## To generate 'oids.txt' you need to run:
## snmptranslate -m all -Tz -On | sed -e 's/"//g' > /tmp/oids.txt
## Or if you have an other MIB folder with custom MIBs
## snmptranslate -M /mycustommibfolder -Tz -On -m all | sed -e 's/"//g' > oids.txt
snmptranslate_file = "/tmp/oids.txt"
[[inputs.snmp.host]]
address = "127.0.0.1:161"
# SNMP community
community = "public" # default public
# SNMP version (1, 2 or 3)
# Version 3 not supported yet
version = 2 # default 2
# Which get/bulk do you want to collect for this host
collect = ["interface_speed", "if_number", "if_out_octets"]
[[inputs.snmp.get]]
name = "interface_speed"
oid = "ifSpeed"
instance = "1"
[[inputs.snmp.get]]
name = "if_number"
oid = "ifNumber"
[[inputs.snmp.bulk]]
name = "if_out_octets"
oid = "ifOutOctets"
```
#### Table example
In this example, we remove the `collect` attribute from the host section,
but you can still use it in combination with the following part.
Note: this example behaves like a bulk request, but uses a
different configuration.
Telegraf gathers the values of the OIDs of the table:
- named **iftable1**
From the **inputs.snmp.table** section, the plugin gets the OID number:
- **iftable1** => `.1.3.6.1.2.1.31.1.1.1`
Since **iftable1** is a table, the plugin will gather all
OIDs in the table and in its subtables:
- `.1.3.6.1.2.1.31.1.1.1.1`
- `.1.3.6.1.2.1.31.1.1.1.1.1`
- `.1.3.6.1.2.1.31.1.1.1.1.2`
- `.1.3.6.1.2.1.31.1.1.1.1.3`
- `.1.3.6.1.2.1.31.1.1.1.1.4`
- `.1.3.6.1.2.1.31.1.1.1.1....`
- `.1.3.6.1.2.1.31.1.1.1.2`
- `.1.3.6.1.2.1.31.1.1.1.2....`
- `.1.3.6.1.2.1.31.1.1.1.3`
- `.1.3.6.1.2.1.31.1.1.1.3....`
- `.1.3.6.1.2.1.31.1.1.1.4`
- `.1.3.6.1.2.1.31.1.1.1.4....`
- `.1.3.6.1.2.1.31.1.1.1.5`
- `.1.3.6.1.2.1.31.1.1.1.5....`
- `.1.3.6.1.2.1.31.1.1.1.6....`
- `...`
```toml
# Table example
[[inputs.snmp]]
## Use 'oids.txt' file to translate oids to names
## To generate 'oids.txt' you need to run:
## snmptranslate -m all -Tz -On | sed -e 's/"//g' > /tmp/oids.txt
## Or if you have an other MIB folder with custom MIBs
## snmptranslate -M /mycustommibfolder -Tz -On -m all | sed -e 's/"//g' > oids.txt
snmptranslate_file = "/tmp/oids.txt"
[[inputs.snmp.host]]
address = "127.0.0.1:161"
# SNMP community
community = "public" # default public
# SNMP version (1, 2 or 3)
# Version 3 not supported yet
version = 2 # default 2
# Which get/bulk do you want to collect for this host
# Which table do you want to collect
[[inputs.snmp.host.table]]
name = "iftable1"
# table without mapping neither subtables
# This is like bulk request
[[inputs.snmp.table]]
name = "iftable1"
oid = ".1.3.6.1.2.1.31.1.1.1"
```
#### Table with subtable example
In this example, we remove the `collect` attribute from the host section,
but you can still use it in combination with the following part.
Note: this example behaves like a bulk request, but uses a
different configuration.
Telegraf gathers the values of the OIDs of the table:
- named **iftable2**
From the **inputs.snmp.table** section *and* the **sub_tables** attribute,
the plugin will get OIDs from subtables:
- **iftable2** => `.1.3.6.1.2.1.2.2.1.13`
Since **iftable2** is a table, the plugin will gather all
OIDs in the subtables:
- `.1.3.6.1.2.1.2.2.1.13.1`
- `.1.3.6.1.2.1.2.2.1.13.2`
- `.1.3.6.1.2.1.2.2.1.13.3`
- `.1.3.6.1.2.1.2.2.1.13.4`
- `.1.3.6.1.2.1.2.2.1.13....`
```toml
# Table with subtable example
[[inputs.snmp]]
## Use 'oids.txt' file to translate oids to names
## To generate 'oids.txt' you need to run:
## snmptranslate -m all -Tz -On | sed -e 's/"//g' > /tmp/oids.txt
## Or if you have an other MIB folder with custom MIBs
## snmptranslate -M /mycustommibfolder -Tz -On -m all | sed -e 's/"//g' > oids.txt
snmptranslate_file = "/tmp/oids.txt"
[[inputs.snmp.host]]
address = "127.0.0.1:161"
# SNMP community
community = "public" # default public
# SNMP version (1, 2 or 3)
# Version 3 not supported yet
version = 2 # default 2
# Which table do you want to collect
[[inputs.snmp.host.table]]
name = "iftable2"
# table without mapping but with subtables
[[inputs.snmp.table]]
name = "iftable2"
sub_tables = [".1.3.6.1.2.1.2.2.1.13"]
# note
# oid attribute is useless
```
#### Table with mapping example
In this example, we remove the `collect` attribute from the host section,
but you can still use it in combination with the following part.
Telegraf gathers the values of the OIDs of the table:
- named **iftable3**
From the **inputs.snmp.table** section, the plugin gets the OID number:
- **iftable3** => `.1.3.6.1.2.1.31.1.1.1`
Since **iftable3** is a table, the plugin will gather all
OIDs in the table and in its subtables:
- `.1.3.6.1.2.1.31.1.1.1.1`
- `.1.3.6.1.2.1.31.1.1.1.1.1`
- `.1.3.6.1.2.1.31.1.1.1.1.2`
- `.1.3.6.1.2.1.31.1.1.1.1.3`
- `.1.3.6.1.2.1.31.1.1.1.1.4`
- `.1.3.6.1.2.1.31.1.1.1.1....`
- `.1.3.6.1.2.1.31.1.1.1.2`
- `.1.3.6.1.2.1.31.1.1.1.2....`
- `.1.3.6.1.2.1.31.1.1.1.3`
- `.1.3.6.1.2.1.31.1.1.1.3....`
- `.1.3.6.1.2.1.31.1.1.1.4`
- `.1.3.6.1.2.1.31.1.1.1.4....`
- `.1.3.6.1.2.1.31.1.1.1.5`
- `.1.3.6.1.2.1.31.1.1.1.5....`
- `.1.3.6.1.2.1.31.1.1.1.6....`
- `...`
But the **include_instances** attribute will filter which OIDs
will be gathered. As you can see, there is another attribute, `mapping_table`.
Together, `include_instances` and `mapping_table` build a hash table
used to gather only the OIDs you want.
Let's say we have the following data on the SNMP server:
- OID: `.1.3.6.1.2.1.31.1.1.1.1.1` has as value: `enp5s0`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.2` has as value: `enp5s1`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.3` has as value: `enp5s2`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.4` has as value: `eth0`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.5` has as value: `eth1`
The plugin will build the following hash table:
| instance name | instance id |
|---------------|-------------|
| `enp5s0` | `1` |
| `enp5s1` | `2` |
| `enp5s2` | `3` |
| `eth0` | `4` |
| `eth1` | `5` |
With the **include_instances** attribute, the plugin will gather
the following OIDs:
- `.1.3.6.1.2.1.31.1.1.1.1.1`
- `.1.3.6.1.2.1.31.1.1.1.1.5`
- `.1.3.6.1.2.1.31.1.1.1.2.1`
- `.1.3.6.1.2.1.31.1.1.1.2.5`
- `.1.3.6.1.2.1.31.1.1.1.3.1`
- `.1.3.6.1.2.1.31.1.1.1.3.5`
- `.1.3.6.1.2.1.31.1.1.1.4.1`
- `.1.3.6.1.2.1.31.1.1.1.4.5`
- `.1.3.6.1.2.1.31.1.1.1.5.1`
- `.1.3.6.1.2.1.31.1.1.1.5.5`
- `.1.3.6.1.2.1.31.1.1.1.6.1`
- `.1.3.6.1.2.1.31.1.1.1.6.5`
- `...`
Note: the plugin will add the instance name as the *instance* tag.
```toml
# Simple table with mapping example
[[inputs.snmp]]
## Use 'oids.txt' file to translate oids to names
## To generate 'oids.txt' you need to run:
## snmptranslate -m all -Tz -On | sed -e 's/"//g' > /tmp/oids.txt
## Or if you have an other MIB folder with custom MIBs
## snmptranslate -M /mycustommibfolder -Tz -On -m all | sed -e 's/"//g' > oids.txt
snmptranslate_file = "/tmp/oids.txt"
[[inputs.snmp.host]]
address = "127.0.0.1:161"
# SNMP community
community = "public" # default public
# SNMP version (1, 2 or 3)
# Version 3 not supported yet
version = 2 # default 2
# Which table do you want to collect
[[inputs.snmp.host.table]]
name = "iftable3"
include_instances = ["enp5s0", "eth1"]
# table with mapping but without subtables
[[inputs.snmp.table]]
name = "iftable3"
oid = ".1.3.6.1.2.1.31.1.1.1"
# if empty. get all instances
mapping_table = ".1.3.6.1.2.1.31.1.1.1.1"
# if empty, get all subtables
```
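The host table section also supports an `exclude_instances` attribute, the inverse filter of `include_instances`; a minimal sketch, assuming the same interface names as in the mapping table above:
```toml
  [[inputs.snmp.host.table]]
    name = "iftable3"
    # gather every instance except these two
    exclude_instances = ["enp5s1", "enp5s2"]
```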
#### Table with both mapping and subtable example
In this example, we remove the `collect` attribute from the host section,
but you can still use it in combination with the following part.
Telegraf gathers the values of the OIDs of the table:
- named **iftable4**
From the **inputs.snmp.table** section *and* the **sub_tables** attribute,
the plugin will get OIDs from subtables:
- **iftable4** => `.1.3.6.1.2.1.31.1.1.1`
Since **iftable4** is a table, the plugin will gather all
OIDs in the table and in its subtables:
- `.1.3.6.1.2.1.31.1.1.1.6.1`
- `.1.3.6.1.2.1.31.1.1.1.6.2`
- `.1.3.6.1.2.1.31.1.1.1.6.3`
- `.1.3.6.1.2.1.31.1.1.1.6.4`
- `.1.3.6.1.2.1.31.1.1.1.6....`
- `.1.3.6.1.2.1.31.1.1.1.10.1`
- `.1.3.6.1.2.1.31.1.1.1.10.2`
- `.1.3.6.1.2.1.31.1.1.1.10.3`
- `.1.3.6.1.2.1.31.1.1.1.10.4`
- `.1.3.6.1.2.1.31.1.1.1.10....`
But the **include_instances** attribute will filter which OIDs
will be gathered. As you can see, there is another attribute, `mapping_table`.
Together, `include_instances` and `mapping_table` build a hash table
used to gather only the OIDs you want.
Let's say we have the following data on the SNMP server:
- OID: `.1.3.6.1.2.1.31.1.1.1.1.1` has as value: `enp5s0`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.2` has as value: `enp5s1`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.3` has as value: `enp5s2`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.4` has as value: `eth0`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.5` has as value: `eth1`
The plugin will build the following hash table:
| instance name | instance id |
|---------------|-------------|
| `enp5s0` | `1` |
| `enp5s1` | `2` |
| `enp5s2` | `3` |
| `eth0` | `4` |
| `eth1` | `5` |
With the **include_instances** attribute, the plugin will gather
the following OIDs:
- `.1.3.6.1.2.1.31.1.1.1.6.1`
- `.1.3.6.1.2.1.31.1.1.1.6.5`
- `.1.3.6.1.2.1.31.1.1.1.10.1`
- `.1.3.6.1.2.1.31.1.1.1.10.5`
Note: the plugin will add the instance name as the *instance* tag.
```toml
# Table with both mapping and subtable example
[[inputs.snmp]]
## Use 'oids.txt' file to translate oids to names
## To generate 'oids.txt' you need to run:
## snmptranslate -m all -Tz -On | sed -e 's/"//g' > /tmp/oids.txt
## Or if you have an other MIB folder with custom MIBs
## snmptranslate -M /mycustommibfolder -Tz -On -m all | sed -e 's/"//g' > oids.txt
snmptranslate_file = "/tmp/oids.txt"
[[inputs.snmp.host]]
address = "127.0.0.1:161"
# SNMP community
community = "public" # default public
# SNMP version (1, 2 or 3)
# Version 3 not supported yet
version = 2 # default 2
# Which table do you want to collect
[[inputs.snmp.host.table]]
name = "iftable4"
include_instances = ["enp5s0", "eth1"]
# table with both mapping and subtables
[[inputs.snmp.table]]
name = "iftable4"
# if empty get all instances
mapping_table = ".1.3.6.1.2.1.31.1.1.1.1"
# if empty get all subtables
# sub_tables could be not "real subtables"
sub_tables=[".1.3.6.1.2.1.2.2.1.13", "bytes_recv", "bytes_send"]
# note
# oid attribute is useless
# SNMP SUBTABLES
[[inputs.snmp.subtable]]
name = "bytes_recv"
oid = ".1.3.6.1.2.1.31.1.1.1.6"
unit = "octets"
[[inputs.snmp.subtable]]
name = "bytes_send"
oid = ".1.3.6.1.2.1.31.1.1.1.10"
unit = "octets"
```
#### Configuration notes
- In the **inputs.snmp.table** section, the `oid` attribute is ignored if
the `sub_tables` attribute is defined
- In the **inputs.snmp.subtable** section, you can use a name from `snmptranslate_file`
as the `oid` attribute instead of a numeric OID, as shown in the sketch after this list
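For example, a hypothetical subtable that references a translated name instead of a numeric OID, assuming `ifHCInOctets` resolves through your `snmptranslate_file`:
```toml
  [[inputs.snmp.subtable]]
    name = "bytes_recv"
    # a name from snmptranslate_file instead of a numeric OID
    oid = "ifHCInOctets"
    unit = "octets"
```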
### Measurements & Fields:
With the last example (Table with both mapping and subtable example):
- ifHCOutOctets
    - ifHCOutOctets
- ifInDiscards
    - ifInDiscards
- ifHCInOctets
    - ifHCInOctets
### Tags:
With the last example (Table with both mapping and subtable example):
- ifHCOutOctets
    - host
    - instance
    - unit
- ifInDiscards
    - host
    - instance
- ifHCInOctets
    - host
    - instance
    - unit
### Example Output:
With the last example (Table with both mapping and subtable example):
```
ifHCOutOctets,host=127.0.0.1,instance=enp5s0,unit=octets ifHCOutOctets=10565628i 1456878706044462901
ifInDiscards,host=127.0.0.1,instance=enp5s0 ifInDiscards=0i 1456878706044510264
ifHCInOctets,host=127.0.0.1,instance=enp5s0,unit=octets ifHCInOctets=76351777i 1456878706044531312
```


@ -0,0 +1,818 @@
package snmp_legacy
import (
"io/ioutil"
"log"
"net"
"strconv"
"strings"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/soniah/gosnmp"
)
// Snmp is a snmp plugin
type Snmp struct {
Host []Host
Get []Data
Bulk []Data
Table []Table
Subtable []Subtable
SnmptranslateFile string
nameToOid map[string]string
initNode Node
subTableMap map[string]Subtable
}
type Host struct {
Address string
Community string
// SNMP version. Default 2
Version int
// SNMP timeout, in seconds. 0 means no timeout
Timeout float64
// SNMP retries
Retries int
// Data to collect (list of Data names)
Collect []string
// easy get oids
GetOids []string
// Table
Table []HostTable
// Oids
getOids []Data
bulkOids []Data
tables []HostTable
// array of processed oids
// to skip oid duplication
processedOids []string
OidInstanceMapping map[string]map[string]string
}
type Table struct {
// name = "iftable"
Name string
// oid = ".1.3.6.1.2.1.31.1.1.1"
Oid string
//if empty get all instances
//mapping_table = ".1.3.6.1.2.1.31.1.1.1.1"
MappingTable string
// if empty get all subtables
// sub_tables could be not "real subtables"
//sub_tables=[".1.3.6.1.2.1.2.2.1.13", "bytes_recv", "bytes_send"]
SubTables []string
}
type HostTable struct {
// name = "iftable"
Name string
// Includes only these instances
// include_instances = ["eth0", "eth1"]
IncludeInstances []string
// Excludes only these instances
// exclude_instances = ["eth20", "eth21"]
ExcludeInstances []string
// From Table struct
oid string
mappingTable string
subTables []string
}
// TODO find better names
type Subtable struct {
//name = "bytes_send"
Name string
//oid = ".1.3.6.1.2.1.31.1.1.1.10"
Oid string
//unit = "octets"
Unit string
}
type Data struct {
Name string
// OID (could be numbers or name)
Oid string
// Unit
Unit string
// SNMP getbulk max repetition
MaxRepetition uint8 `toml:"max_repetition"`
// SNMP Instance (default 0)
// (only used with GET request and if
// OID is a name from snmptranslate file)
Instance string
// OID (only number) (used for computation)
rawOid string
}
type Node struct {
id string
name string
subnodes map[string]Node
}
var sampleConfig = `
## Use 'oids.txt' file to translate oids to names
## To generate 'oids.txt' you need to run:
## snmptranslate -m all -Tz -On | sed -e 's/"//g' > /tmp/oids.txt
## Or if you have an other MIB folder with custom MIBs
## snmptranslate -M /mycustommibfolder -Tz -On -m all | sed -e 's/"//g' > oids.txt
snmptranslate_file = "/tmp/oids.txt"
[[inputs.snmp.host]]
address = "192.168.2.2:161"
# SNMP community
community = "public" # default public
# SNMP version (1, 2 or 3)
# Version 3 not supported yet
version = 2 # default 2
# SNMP response timeout
timeout = 2.0 # default 2.0
# SNMP request retries
retries = 2 # default 2
# Which get/bulk do you want to collect for this host
collect = ["mybulk", "sysservices", "sysdescr"]
# Simple list of OIDs to get, in addition to "collect"
get_oids = []
[[inputs.snmp.host]]
address = "192.168.2.3:161"
community = "public"
version = 2
timeout = 2.0
retries = 2
collect = ["mybulk"]
get_oids = [
"ifNumber",
".1.3.6.1.2.1.1.3.0",
]
[[inputs.snmp.get]]
name = "ifnumber"
oid = "ifNumber"
[[inputs.snmp.get]]
name = "interface_speed"
oid = "ifSpeed"
instance = "0"
[[inputs.snmp.get]]
name = "sysuptime"
oid = ".1.3.6.1.2.1.1.3.0"
unit = "second"
[[inputs.snmp.bulk]]
name = "mybulk"
max_repetition = 127
oid = ".1.3.6.1.2.1.1"
[[inputs.snmp.bulk]]
name = "ifoutoctets"
max_repetition = 127
oid = "ifOutOctets"
[[inputs.snmp.host]]
address = "192.168.2.13:161"
#address = "127.0.0.1:161"
community = "public"
version = 2
timeout = 2.0
retries = 2
#collect = ["mybulk", "sysservices", "sysdescr", "systype"]
collect = ["sysuptime" ]
[[inputs.snmp.host.table]]
name = "iftable3"
include_instances = ["enp5s0", "eth1"]
# SNMP TABLEs
# table without mapping neither subtables
[[inputs.snmp.table]]
name = "iftable1"
oid = ".1.3.6.1.2.1.31.1.1.1"
# table without mapping but with subtables
[[inputs.snmp.table]]
name = "iftable2"
oid = ".1.3.6.1.2.1.31.1.1.1"
sub_tables = [".1.3.6.1.2.1.2.2.1.13"]
# table with mapping but without subtables
[[inputs.snmp.table]]
name = "iftable3"
oid = ".1.3.6.1.2.1.31.1.1.1"
# if empty. get all instances
mapping_table = ".1.3.6.1.2.1.31.1.1.1.1"
# if empty, get all subtables
# table with both mapping and subtables
[[inputs.snmp.table]]
name = "iftable4"
oid = ".1.3.6.1.2.1.31.1.1.1"
# if empty get all instances
mapping_table = ".1.3.6.1.2.1.31.1.1.1.1"
# if empty get all subtables
# sub_tables could be not "real subtables"
sub_tables=[".1.3.6.1.2.1.2.2.1.13", "bytes_recv", "bytes_send"]
`
// SampleConfig returns sample configuration message
func (s *Snmp) SampleConfig() string {
return sampleConfig
}
// Description returns the description of the SNMP legacy plugin
func (s *Snmp) Description() string {
return `DEPRECATED: This will be removed in a future release.`
}
func fillnode(parentNode Node, oid_name string, ids []string) {
// ids = ["1", "3", "6", ...]
id, ids := ids[0], ids[1:]
node, ok := parentNode.subnodes[id]
if !ok {
node = Node{
id: id,
name: "",
subnodes: make(map[string]Node),
}
if len(ids) == 0 {
node.name = oid_name
}
parentNode.subnodes[id] = node
}
if len(ids) > 0 {
fillnode(node, oid_name, ids)
}
}
func findnodename(node Node, ids []string) (string, string) {
// ids = ["1", "3", "6", ...]
if len(ids) == 1 {
return node.name, ids[0]
}
id, ids := ids[0], ids[1:]
// Get node
subnode, ok := node.subnodes[id]
if ok {
return findnodename(subnode, ids)
}
// We got a node
// Get node name
if node.name != "" && len(ids) == 0 && id == "0" {
// node with instance 0
return node.name, "0"
} else if node.name != "" && len(ids) == 0 && id != "0" {
// node with an instance
return node.name, id
} else if node.name != "" && len(ids) > 0 {
// node with subinstances
return node.name, strings.Join(ids, ".")
}
// return an empty node name
return node.name, ""
}
func (s *Snmp) Gather(acc telegraf.Accumulator) error {
// TODO put this in cache on first run
// Create subtables mapping
if len(s.subTableMap) == 0 {
s.subTableMap = make(map[string]Subtable)
for _, sb := range s.Subtable {
s.subTableMap[sb.Name] = sb
}
}
// TODO put this in cache on first run
// Create oid tree
if s.SnmptranslateFile != "" && len(s.initNode.subnodes) == 0 {
s.nameToOid = make(map[string]string)
s.initNode = Node{
id: "1",
name: "",
subnodes: make(map[string]Node),
}
data, err := ioutil.ReadFile(s.SnmptranslateFile)
if err != nil {
log.Printf("Reading SNMPtranslate file error: %s", err)
return err
} else {
for _, line := range strings.Split(string(data), "\n") {
oids := strings.Fields(string(line))
if len(oids) == 2 && oids[1] != "" {
oid_name := oids[0]
oid := oids[1]
fillnode(s.initNode, oid_name, strings.Split(string(oid), "."))
s.nameToOid[oid_name] = oid
}
}
}
}
// Fetching data
for _, host := range s.Host {
// Set default args
if len(host.Address) == 0 {
host.Address = "127.0.0.1:161"
}
if host.Community == "" {
host.Community = "public"
}
if host.Timeout <= 0 {
host.Timeout = 2.0
}
if host.Retries <= 0 {
host.Retries = 2
}
// Prepare host
// Get Easy GET oids
for _, oidstring := range host.GetOids {
oid := Data{}
if val, ok := s.nameToOid[oidstring]; ok {
// TODO should we add the 0 instance ?
oid.Name = oidstring
oid.Oid = val
oid.rawOid = "." + val + ".0"
} else {
oid.Name = oidstring
oid.Oid = oidstring
if string(oidstring[:1]) != "." {
oid.rawOid = "." + oidstring
} else {
oid.rawOid = oidstring
}
}
host.getOids = append(host.getOids, oid)
}
for _, oid_name := range host.Collect {
// Get GET oids
for _, oid := range s.Get {
if oid.Name == oid_name {
if val, ok := s.nameToOid[oid.Oid]; ok {
// TODO should we add the 0 instance ?
if oid.Instance != "" {
oid.rawOid = "." + val + "." + oid.Instance
} else {
oid.rawOid = "." + val + ".0"
}
} else {
oid.rawOid = oid.Oid
}
host.getOids = append(host.getOids, oid)
}
}
// Get GETBULK oids
for _, oid := range s.Bulk {
if oid.Name == oid_name {
if val, ok := s.nameToOid[oid.Oid]; ok {
oid.rawOid = "." + val
} else {
oid.rawOid = oid.Oid
}
host.bulkOids = append(host.bulkOids, oid)
}
}
}
// Table
for _, hostTable := range host.Table {
for _, snmpTable := range s.Table {
if hostTable.Name == snmpTable.Name {
table := hostTable
table.oid = snmpTable.Oid
table.mappingTable = snmpTable.MappingTable
table.subTables = snmpTable.SubTables
host.tables = append(host.tables, table)
}
}
}
// Launch Mapping
// TODO put this in cache on first run
// TODO save mapping and computed oids
// to do it only the first time
// only if len(s.OidInstanceMapping) == 0
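// NOTE: len() is never negative, so the condition below is always true and the mapping currently runs on every Gather.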
if len(host.OidInstanceMapping) >= 0 {
if err := host.SNMPMap(acc, s.nameToOid, s.subTableMap); err != nil {
log.Printf("SNMP Mapping error for host '%s': %s", host.Address, err)
continue
}
}
// Launch Get requests
if err := host.SNMPGet(acc, s.initNode); err != nil {
log.Printf("SNMP Error for host '%s': %s", host.Address, err)
}
if err := host.SNMPBulk(acc, s.initNode); err != nil {
log.Printf("SNMP Error for host '%s': %s", host.Address, err)
}
}
return nil
}
func (h *Host) SNMPMap(
acc telegraf.Accumulator,
nameToOid map[string]string,
subTableMap map[string]Subtable,
) error {
if h.OidInstanceMapping == nil {
h.OidInstanceMapping = make(map[string]map[string]string)
}
// Get snmp client
snmpClient, err := h.GetSNMPClient()
if err != nil {
return err
}
// Close the connection when done
defer snmpClient.Conn.Close()
// Prepare OIDs
for _, table := range h.tables {
// We don't have mapping
if table.mappingTable == "" {
if len(table.subTables) == 0 {
// If we have neither a mapping table
// nor a subtables list,
// this is just a bulk request
oid := Data{}
oid.Oid = table.oid
if val, ok := nameToOid[oid.Oid]; ok {
oid.rawOid = "." + val
} else {
oid.rawOid = oid.Oid
}
h.bulkOids = append(h.bulkOids, oid)
} else {
// No mapping table, but subtables are configured:
// issue one bulk request per subtable
// For each subtable ...
for _, sb := range table.subTables {
// ... we create a new Data (oid) object
oid := Data{}
// Looking for more information about this subtable
ssb, exists := subTableMap[sb]
if exists {
// We found a subtable section in config files
oid.Oid = ssb.Oid
oid.rawOid = ssb.Oid
oid.Unit = ssb.Unit
} else {
// We did NOT find a subtable section in config files
oid.Oid = sb
oid.rawOid = sb
}
// TODO check oid validity
// Add the new oid to getOids list
h.bulkOids = append(h.bulkOids, oid)
}
}
} else {
// A mapping table is configured: query it first to build
// the mapping between instance ids and instance names
oid_asked := table.mappingTable
oid_next := oid_asked
need_more_requests := true
// Set max repetition
maxRepetition := uint8(32)
// Launch requests
for need_more_requests {
// Launch request
result, err3 := snmpClient.GetBulk([]string{oid_next}, 0, maxRepetition)
if err3 != nil {
return err3
}
lastOid := ""
for _, variable := range result.Variables {
lastOid = variable.Name
if strings.HasPrefix(variable.Name, oid_asked) {
switch variable.Type {
// handle instance names
case gosnmp.OctetString:
// Check whether the instance is in the include list
getInstances := true
if len(table.IncludeInstances) > 0 {
getInstances = false
for _, instance := range table.IncludeInstances {
if instance == string(variable.Value.([]byte)) {
getInstances = true
}
}
}
// Check whether the instance is in the exclude list
if len(table.ExcludeInstances) > 0 {
getInstances = true
for _, instance := range table.ExcludeInstances {
if instance == string(variable.Value.([]byte)) {
getInstances = false
}
}
}
// We don't want this instance
if !getInstances {
continue
}
// strip the table oid from the complete oid
// to get the current instance id
key := strings.Replace(variable.Name, oid_asked, "", 1)
if len(table.subTables) == 0 {
// Mapping table but no subtables:
// this stays a single bulk request on the table oid
// Record the instance id -> instance name mapping
mapping := map[string]string{strings.Trim(key, "."): string(variable.Value.([]byte))}
_, exists := h.OidInstanceMapping[table.oid]
if exists {
h.OidInstanceMapping[table.oid][strings.Trim(key, ".")] = string(variable.Value.([]byte))
} else {
h.OidInstanceMapping[table.oid] = mapping
}
// Add table oid in bulk oid list
oid := Data{}
oid.Oid = table.oid
if val, ok := nameToOid[oid.Oid]; ok {
oid.rawOid = "." + val
} else {
oid.rawOid = oid.Oid
}
h.bulkOids = append(h.bulkOids, oid)
} else {
// Mapping table plus subtables:
// issue one GET request per (subtable, instance) pair
// For each subtable ...
for _, sb := range table.subTables {
// ... we create a new Data (oid) object
oid := Data{}
// Looking for more information about this subtable
ssb, exists := subTableMap[sb]
if exists {
// We found a subtable section in config files
oid.Oid = ssb.Oid + key
oid.rawOid = ssb.Oid + key
oid.Unit = ssb.Unit
oid.Instance = string(variable.Value.([]byte))
} else {
// We did NOT find a subtable section in config files
oid.Oid = sb + key
oid.rawOid = sb + key
oid.Instance = string(variable.Value.([]byte))
}
// TODO check oid validity
// Add the new oid to getOids list
h.getOids = append(h.getOids, oid)
}
}
default:
}
} else {
break
}
}
// Determine if we need more requests
if strings.HasPrefix(lastOid, oid_asked) {
need_more_requests = true
oid_next = lastOid
} else {
need_more_requests = false
}
}
}
}
// Mapping finished; the computed oids have been appended
// to the host's getOids and bulkOids lists
return nil
}
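// SNMPGet sends GET requests for the host's getOids, in batches of at
// most 60 oids per request, and hands every response to HandleResponse.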
func (h *Host) SNMPGet(acc telegraf.Accumulator, initNode Node) error {
// Get snmp client
snmpClient, err := h.GetSNMPClient()
if err != nil {
return err
}
// Close the connection when done
defer snmpClient.Conn.Close()
// Prepare OIDs
oidsList := make(map[string]Data)
for _, oid := range h.getOids {
oidsList[oid.rawOid] = oid
}
oidsNameList := make([]string, 0, len(oidsList))
for _, oid := range oidsList {
oidsNameList = append(oidsNameList, oid.rawOid)
}
// gosnmp.MAX_OIDS == 60
// TODO: use gosnmp.MAX_OIDS instead of a hard-coded value
max_oids := 60
// Send at most max_oids oids per request
for i := 0; i < len(oidsList); i = i + max_oids {
// Launch request
max_index := i + max_oids
if i+max_oids > len(oidsList) {
max_index = len(oidsList)
}
result, err3 := snmpClient.Get(oidsNameList[i:max_index]) // Get() accepts up to gosnmp.MAX_OIDS oids
if err3 != nil {
return err3
}
// Handle response
_, err = h.HandleResponse(oidsList, result, acc, initNode)
if err != nil {
return err
}
}
return nil
}
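// SNMPBulk walks each of the host's bulkOids with repeated GETBULK
// requests, following the last returned oid until the response leaves
// the requested subtree, and hands every response to HandleResponse.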
func (h *Host) SNMPBulk(acc telegraf.Accumulator, initNode Node) error {
// Get snmp client
snmpClient, err := h.GetSNMPClient()
if err != nil {
return err
}
// Close the connection when done
defer snmpClient.Conn.Close()
// Prepare OIDs
oidsList := make(map[string]Data)
for _, oid := range h.bulkOids {
oidsList[oid.rawOid] = oid
}
oidsNameList := make([]string, 0, len(oidsList))
for _, oid := range oidsList {
oidsNameList = append(oidsNameList, oid.rawOid)
}
// TODO: batch several oids per GETBULK request
// to reduce the number of requests
for _, oid := range oidsNameList {
oid_asked := oid
need_more_requests := true
// Set max repetition
maxRepetition := oidsList[oid].MaxRepetition
if maxRepetition <= 0 {
maxRepetition = 32
}
// Launch requests
for need_more_requests {
// Launch request
result, err3 := snmpClient.GetBulk([]string{oid}, 0, maxRepetition)
if err3 != nil {
return err3
}
// Handle response
last_oid, err := h.HandleResponse(oidsList, result, acc, initNode)
if err != nil {
return err
}
// Determine if we need more requests
if strings.HasPrefix(last_oid, oid_asked) {
need_more_requests = true
oid = last_oid
} else {
need_more_requests = false
}
}
}
return nil
}
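// GetSNMPClient builds and connects a gosnmp client from the host's
// address (default port 161), community, version, timeout and retries.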
func (h *Host) GetSNMPClient() (*gosnmp.GoSNMP, error) {
// Prepare Version
var version gosnmp.SnmpVersion
if h.Version == 1 {
version = gosnmp.Version1
} else if h.Version == 3 {
version = gosnmp.Version3
} else {
version = gosnmp.Version2c
}
// Prepare host and port
host, port_str, err := net.SplitHostPort(h.Address)
if err != nil {
// Assume the address has no port; use it as-is with the default port
host = h.Address
port_str = "161"
}
// convert port_str to a uint16 port
port_64, err := strconv.ParseUint(port_str, 10, 16)
if err != nil {
return nil, err
}
port := uint16(port_64)
// Get SNMP client
snmpClient := &gosnmp.GoSNMP{
Target: host,
Port: port,
Community: h.Community,
Version: version,
Timeout: time.Duration(h.Timeout) * time.Second,
Retries: h.Retries,
}
// Connection
err2 := snmpClient.Connect()
if err2 != nil {
return nil, err2
}
// Return snmpClient
return snmpClient, nil
}
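// HandleResponse matches the returned variables against the requested
// oids, converts numeric and octet-string values into fields tagged with
// unit, instance and snmp_host, skips oids already processed, and returns
// the last oid seen so that bulk walks know where to resume.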
func (h *Host) HandleResponse(
oids map[string]Data,
result *gosnmp.SnmpPacket,
acc telegraf.Accumulator,
initNode Node,
) (string, error) {
var lastOid string
for _, variable := range result.Variables {
lastOid = variable.Name
nextresult:
// Only process the oids we asked for
for oid_key, oid := range oids {
// Skip oids already processed
for _, processedOid := range h.processedOids {
if variable.Name == processedOid {
break nextresult
}
}
// Match if variable.Name equals oid_key, or if the result belongs
// to an SNMP table, i.e. a "." follows oid_key directly.
// ex: oid_key: .1.3.6.1.2.1.2.2.1.16, variable.Name: .1.3.6.1.2.1.2.2.1.16.1
if variable.Name == oid_key || strings.HasPrefix(variable.Name, oid_key+".") {
switch variable.Type {
// handle Metrics
case gosnmp.Boolean, gosnmp.Integer, gosnmp.Counter32, gosnmp.Gauge32,
gosnmp.TimeTicks, gosnmp.Counter64, gosnmp.Uinteger32, gosnmp.OctetString:
// Prepare tags
tags := make(map[string]string)
if oid.Unit != "" {
tags["unit"] = oid.Unit
}
// Get name and instance
var oid_name string
var instance string
// Get oidname and instance from translate file
oid_name, instance = findnodename(initNode,
strings.Split(string(variable.Name[1:]), "."))
// Set instance tag
// From mapping table
mapping, inMappingNoSubTable := h.OidInstanceMapping[oid_key]
if inMappingNoSubTable {
// skip the result if its instance is not in the
// OidInstanceMapping map
if instance_name, exists := mapping[instance]; exists {
tags["instance"] = instance_name
} else {
continue
}
} else if oid.Instance != "" {
// From config files
tags["instance"] = oid.Instance
} else if instance != "" {
// Using last id of the current oid, ie:
// with .1.3.6.1.2.1.31.1.1.1.10.3
// instance is 3
tags["instance"] = instance
}
// Set name
var field_name string
if oid_name != "" {
// Set fieldname as oid name from translate file
field_name = oid_name
} else {
// Fall back to the name from the config's get/bulk section,
// since the result oid matches the configured oid exactly
field_name = oid.Name
}
tags["snmp_host"], _, _ = net.SplitHostPort(h.Address)
fields := make(map[string]interface{})
fields[string(field_name)] = variable.Value
h.processedOids = append(h.processedOids, variable.Name)
acc.AddFields(field_name, fields, tags)
case gosnmp.NoSuchObject, gosnmp.NoSuchInstance:
// oid not found on the agent
log.Printf("[snmp_legacy input] oid not found: %s", oid_key)
default:
// other variable types are ignored
}
break
}
}
}
return lastOid, nil
}
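// init registers this plugin under the "snmp_legacy" name.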
func init() {
inputs.Add("snmp_legacy", func() telegraf.Input {
return &Snmp{}
})
}

View File

@ -0,0 +1,482 @@
package snmp_legacy
import (
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
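// TestSNMPErrorGet1 expects Gather to fail when the configured
// SnmptranslateFile cannot be read.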
func TestSNMPErrorGet1(t *testing.T) {
get1 := Data{
Name: "oid1",
Unit: "octets",
Oid: ".1.3.6.1.2.1.2.2.1.16.1",
}
h := Host{
Collect: []string{"oid1"},
}
s := Snmp{
SnmptranslateFile: "bad_oid.txt",
Host: []Host{h},
Get: []Data{get1},
}
var acc testutil.Accumulator
err := s.Gather(&acc)
require.Error(t, err)
}
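// TestSNMPErrorGet2 expects GET request errors to be logged rather than
// returned: Gather succeeds but produces no metrics.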
func TestSNMPErrorGet2(t *testing.T) {
get1 := Data{
Name: "oid1",
Unit: "octets",
Oid: ".1.3.6.1.2.1.2.2.1.16.1",
}
h := Host{
Collect: []string{"oid1"},
}
s := Snmp{
Host: []Host{h},
Get: []Data{get1},
}
var acc testutil.Accumulator
err := s.Gather(&acc)
require.NoError(t, err)
assert.Equal(t, 0, len(acc.Metrics))
}
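// TestSNMPErrorBulk is the GETBULK counterpart of the test above:
// request errors are logged, not returned, and no metrics are produced.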
func TestSNMPErrorBulk(t *testing.T) {
bulk1 := Data{
Name: "oid1",
Unit: "octets",
Oid: ".1.3.6.1.2.1.2.2.1.16",
}
h := Host{
Address: testutil.GetLocalHost(),
Collect: []string{"oid1"},
}
s := Snmp{
Host: []Host{h},
Bulk: []Data{bulk1},
}
var acc testutil.Accumulator
err := s.Gather(&acc)
require.NoError(t, err)
assert.Equal(t, 0, len(acc.Metrics))
}
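// TestSNMPGet1 runs an integration GET against the test agent on port
// 31161 using a fully numeric oid and no translate file, so the
// measurement takes the configured name "oid1".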
func TestSNMPGet1(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
get1 := Data{
Name: "oid1",
Unit: "octets",
Oid: ".1.3.6.1.2.1.2.2.1.16.1",
}
h := Host{
Address: testutil.GetLocalHost() + ":31161",
Community: "telegraf",
Version: 2,
Timeout: 2.0,
Retries: 2,
Collect: []string{"oid1"},
}
s := Snmp{
Host: []Host{h},
Get: []Data{get1},
}
var acc testutil.Accumulator
err := s.Gather(&acc)
require.NoError(t, err)
acc.AssertContainsTaggedFields(t,
"oid1",
map[string]interface{}{
"oid1": uint(543846),
},
map[string]string{
"unit": "octets",
"snmp_host": testutil.GetLocalHost(),
},
)
}
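// TestSNMPGet2 resolves the symbolic oid "ifNumber" through the translate
// file; the instance tag "0" comes from the trailing .0 of the result oid.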
func TestSNMPGet2(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
get1 := Data{
Name: "oid1",
Oid: "ifNumber",
}
h := Host{
Address: testutil.GetLocalHost() + ":31161",
Community: "telegraf",
Version: 2,
Timeout: 2.0,
Retries: 2,
Collect: []string{"oid1"},
}
s := Snmp{
SnmptranslateFile: "./testdata/oids.txt",
Host: []Host{h},
Get: []Data{get1},
}
var acc testutil.Accumulator
err := s.Gather(&acc)
require.NoError(t, err)
acc.AssertContainsTaggedFields(t,
"ifNumber",
map[string]interface{}{
"ifNumber": int(4),
},
map[string]string{
"instance": "0",
"snmp_host": testutil.GetLocalHost(),
},
)
}
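// TestSNMPGet3 pins the instance tag via the Instance field of the
// get definition instead of deriving it from the result oid.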
func TestSNMPGet3(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
get1 := Data{
Name: "oid1",
Unit: "octets",
Oid: "ifSpeed",
Instance: "1",
}
h := Host{
Address: testutil.GetLocalHost() + ":31161",
Community: "telegraf",
Version: 2,
Timeout: 2.0,
Retries: 2,
Collect: []string{"oid1"},
}
s := Snmp{
SnmptranslateFile: "./testdata/oids.txt",
Host: []Host{h},
Get: []Data{get1},
}
var acc testutil.Accumulator
err := s.Gather(&acc)
require.NoError(t, err)
acc.AssertContainsTaggedFields(t,
"ifSpeed",
map[string]interface{}{
"ifSpeed": uint(10000000),
},
map[string]string{
"unit": "octets",
"instance": "1",
"snmp_host": testutil.GetLocalHost(),
},
)
}
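// TestSNMPEasyGet4 combines a Collect-based get with an easy GET declared
// by name in GetOids ("ifNumber"); both metrics are asserted.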
func TestSNMPEasyGet4(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
get1 := Data{
Name: "oid1",
Unit: "octets",
Oid: "ifSpeed",
Instance: "1",
}
h := Host{
Address: testutil.GetLocalHost() + ":31161",
Community: "telegraf",
Version: 2,
Timeout: 2.0,
Retries: 2,
Collect: []string{"oid1"},
GetOids: []string{"ifNumber"},
}
s := Snmp{
SnmptranslateFile: "./testdata/oids.txt",
Host: []Host{h},
Get: []Data{get1},
}
var acc testutil.Accumulator
err := s.Gather(&acc)
require.NoError(t, err)
acc.AssertContainsTaggedFields(t,
"ifSpeed",
map[string]interface{}{
"ifSpeed": uint(10000000),
},
map[string]string{
"unit": "octets",
"instance": "1",
"snmp_host": testutil.GetLocalHost(),
},
)
acc.AssertContainsTaggedFields(t,
"ifNumber",
map[string]interface{}{
"ifNumber": int(4),
},
map[string]string{
"instance": "0",
"snmp_host": testutil.GetLocalHost(),
},
)
}
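// TestSNMPEasyGet5 is the same as the test above, but the easy GET is
// given as a numeric oid instead of a name.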
func TestSNMPEasyGet5(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
get1 := Data{
Name: "oid1",
Unit: "octets",
Oid: "ifSpeed",
Instance: "1",
}
h := Host{
Address: testutil.GetLocalHost() + ":31161",
Community: "telegraf",
Version: 2,
Timeout: 2.0,
Retries: 2,
Collect: []string{"oid1"},
GetOids: []string{".1.3.6.1.2.1.2.1.0"},
}
s := Snmp{
SnmptranslateFile: "./testdata/oids.txt",
Host: []Host{h},
Get: []Data{get1},
}
var acc testutil.Accumulator
err := s.Gather(&acc)
require.NoError(t, err)
acc.AssertContainsTaggedFields(t,
"ifSpeed",
map[string]interface{}{
"ifSpeed": uint(10000000),
},
map[string]string{
"unit": "octets",
"instance": "1",
"snmp_host": testutil.GetLocalHost(),
},
)
acc.AssertContainsTaggedFields(t,
"ifNumber",
map[string]interface{}{
"ifNumber": int(4),
},
map[string]string{
"instance": "0",
"snmp_host": testutil.GetLocalHost(),
},
)
}
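// TestSNMPEasyGet6 uses only GetOids, with the oid written without a
// leading dot.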
func TestSNMPEasyGet6(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
h := Host{
Address: testutil.GetLocalHost() + ":31161",
Community: "telegraf",
Version: 2,
Timeout: 2.0,
Retries: 2,
GetOids: []string{"1.3.6.1.2.1.2.1.0"},
}
s := Snmp{
SnmptranslateFile: "./testdata/oids.txt",
Host: []Host{h},
}
var acc testutil.Accumulator
err := s.Gather(&acc)
require.NoError(t, err)
acc.AssertContainsTaggedFields(t,
"ifNumber",
map[string]interface{}{
"ifNumber": int(4),
},
map[string]string{
"instance": "0",
"snmp_host": testutil.GetLocalHost(),
},
)
}
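// TestSNMPBulk1 walks ifOutOctets (.1.3.6.1.2.1.2.2.1.16) with GETBULK and
// MaxRepetition = 2, expecting one point per interface instance.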
func TestSNMPBulk1(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
bulk1 := Data{
Name: "oid1",
Unit: "octets",
Oid: ".1.3.6.1.2.1.2.2.1.16",
MaxRepetition: 2,
}
h := Host{
Address: testutil.GetLocalHost() + ":31161",
Community: "telegraf",
Version: 2,
Timeout: 2.0,
Retries: 2,
Collect: []string{"oid1"},
}
s := Snmp{
SnmptranslateFile: "./testdata/oids.txt",
Host: []Host{h},
Bulk: []Data{bulk1},
}
var acc testutil.Accumulator
err := s.Gather(&acc)
require.NoError(t, err)
acc.AssertContainsTaggedFields(t,
"ifOutOctets",
map[string]interface{}{
"ifOutOctets": uint(543846),
},
map[string]string{
"unit": "octets",
"instance": "1",
"snmp_host": testutil.GetLocalHost(),
},
)
acc.AssertContainsTaggedFields(t,
"ifOutOctets",
map[string]interface{}{
"ifOutOctets": uint(26475179),
},
map[string]string{
"unit": "octets",
"instance": "2",
"snmp_host": testutil.GetLocalHost(),
},
)
acc.AssertContainsTaggedFields(t,
"ifOutOctets",
map[string]interface{}{
"ifOutOctets": uint(108963968),
},
map[string]string{
"unit": "octets",
"instance": "3",
"snmp_host": testutil.GetLocalHost(),
},
)
acc.AssertContainsTaggedFields(t,
"ifOutOctets",
map[string]interface{}{
"ifOutOctets": uint(12991453),
},
map[string]string{
"unit": "octets",
"instance": "36",
"snmp_host": testutil.GetLocalHost(),
},
)
}
// TODO: find out why Circle CI aborts when this test is enabled
// ("bash scripts/circle-test.sh died unexpectedly").
// Possibly the test simply takes too long.
// The leading 'd' in the function name keeps it disabled for now.
func dTestSNMPBulk2(t *testing.T) {
bulk1 := Data{
Name: "oid1",
Unit: "octets",
Oid: "ifOutOctets",
MaxRepetition: 2,
}
h := Host{
Address: testutil.GetLocalHost() + ":31161",
Community: "telegraf",
Version: 2,
Timeout: 2.0,
Retries: 2,
Collect: []string{"oid1"},
}
s := Snmp{
SnmptranslateFile: "./testdata/oids.txt",
Host: []Host{h},
Bulk: []Data{bulk1},
}
var acc testutil.Accumulator
err := s.Gather(&acc)
require.NoError(t, err)
acc.AssertContainsTaggedFields(t,
"ifOutOctets",
map[string]interface{}{
"ifOutOctets": uint(543846),
},
map[string]string{
"unit": "octets",
"instance": "1",
"snmp_host": testutil.GetLocalHost(),
},
)
acc.AssertContainsTaggedFields(t,
"ifOutOctets",
map[string]interface{}{
"ifOutOctets": uint(26475179),
},
map[string]string{
"unit": "octets",
"instance": "2",
"snmp_host": testutil.GetLocalHost(),
},
)
acc.AssertContainsTaggedFields(t,
"ifOutOctets",
map[string]interface{}{
"ifOutOctets": uint(108963968),
},
map[string]string{
"unit": "octets",
"instance": "3",
"snmp_host": testutil.GetLocalHost(),
},
)
acc.AssertContainsTaggedFields(t,
"ifOutOctets",
map[string]interface{}{
"ifOutOctets": uint(12991453),
},
map[string]string{
"unit": "octets",
"instance": "36",
"snmp_host": testutil.GetLocalHost(),
},
)
}