* Add configuration docs to PostgreSQL input plugin
Add configuration docs to the PostgreSQL input plugin README (mostly from the source code), though I've not included the configuration example that seems to use all the connections on the database[1].
[1] https://github.com/influxdata/telegraf/issues/2410
* Fix typo in readme and sampleConfig string.
* Export Ipmi.Path so it can be set by config.
Currently "path" is not exported, giving this error when users try to
override the variable via telegraf.conf as per the sample config:
`field corresponding to `path' is not defined in `*ipmi_sensor.Ipmi'`
Exporting the variable solves the problem.
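For context, a minimal sketch of why the export matters (struct simplified): Telegraf's reflection-based config decoding can only set exported struct fields.

```go
package ipmi_sensor

// Simplified sketch: reflection-based TOML decoding can only set exported
// fields, so a lowercase `path` field produces the "not defined" error above.
type Ipmi struct {
	// Path maps to the `path` option in telegraf.conf.
	Path string
}
```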
* Updating changelog.
During issue #2215 it was highlighted that the current behavior where
rules without a comment are ignored is confusing for several users.
This commit improves the documentation and adds a NOTE to the sample
config to clarify the behavior for new users.
* Procstat: don't cache PIDs
Changed the procstat input plugin to not cache PIDs. Solves #1636.
The logic of creating a process by pid was moved from `procstat.go` to
`spec_processor.go`.
* Procstat: go fmt
* procstat: modify changelog for #2206
* ceph: maps are already refs, no need to use a pointer
* ceph: pgmap_states are represented in a single metric "count", differentiated by tag
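A minimal sketch of the new shape (names assumed from the description): one series per state, each carrying a single count field keyed by a state tag.

```go
package ceph

// Accumulator is a trimmed-down stand-in for telegraf's accumulator.
type Accumulator interface {
	AddFields(measurement string, fields map[string]interface{}, tags map[string]string)
}

// reportPgmapStates emits one "count" per pgmap state, differentiated by a
// "state" tag, instead of one field per state name.
func reportPgmapStates(acc Accumulator, states map[string]float64) {
	for state, count := range states {
		acc.AddFields("ceph_pgmap_state",
			map[string]interface{}{"count": count},
			map[string]string{"state": state})
	}
}
```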
* Update CHANGELOG
* Add in support for looking for substring in response
* Add note to CHANGELOG.md
* Switch from substring match to regex match
* Requested code changes
* Make requested changes and refactor to avoid nested if-else.
* Convert tabs to space and compile regex once
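A small illustration of the final form (pattern and names are made up): the regex is compiled once at startup, not per response.

```go
package main

import (
	"fmt"
	"regexp"
)

// Compiled once rather than on every check.
var bodyMatch = regexp.MustCompile(`"status":\s*"ok"`)

// responseMatches replaces the earlier substring test with a regex match.
func responseMatches(body string) bool {
	return bodyMatch.MatchString(body)
}

func main() {
	fmt.Println(responseMatches(`{"status": "ok"}`))   // true
	fmt.Println(responseMatches(`{"status": "fail"}`)) // false
}
```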
* Make Logparser Plugin Check For New Files
Check in the Gather method to see if any new files matching the glob
have appeared. If so, start tailing them from the beginning.
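Roughly, the idea looks like this (tailing elided; names illustrative):

```go
package logparser

import "path/filepath"

type watcher struct {
	seen map[string]bool
}

// checkForNewFiles re-expands the glob each Gather interval and begins
// tailing, from the beginning, any file we have not seen before.
func (w *watcher) checkForNewFiles(pattern string) error {
	files, err := filepath.Glob(pattern)
	if err != nil {
		return err
	}
	for _, f := range files {
		if !w.seen[f] {
			w.seen[f] = true
			w.tailFromBeginning(f)
		}
	}
	return nil
}

func (w *watcher) tailFromBeginning(path string) {
	// start a tailer at offset 0 (omitted here)
}
```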
* changelog update for #2141
This changes the current use of the InfluxDB client to instead use a
baked-in client that uses the fasthttp library.
This allows for significantly smaller allocations, the re-use of http
body buffers, and the re-use of the actual bytes of the line-protocol
metric representations.
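The buffer-reuse part can be pictured like this (a sketch, not the client itself):

```go
package client

import (
	"bytes"
	"sync"
)

// Pool of reusable body buffers: each flush borrows a buffer, serializes the
// batch into it, sends it, and returns it, so steady-state writes allocate
// almost nothing.
var bodyPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

func withBodyBuffer(write func(*bytes.Buffer) error) error {
	buf := bodyPool.Get().(*bytes.Buffer)
	buf.Reset()
	defer bodyPool.Put(buf)
	return write(buf)
}
```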
We were having problems with telegraf talking to
carbon-relay-ng using the graphite output. When
the carbon-relay-ng server restarted the connection,
the telegraf side would go into CLOSE_WAIT, but telegraf
would continue to send statistics through the connection.
Reading around, it seems you need to do a read from the connection
and check for an EOF error. We've implemented this and added a test
that roughly replicates the error we were having.
Pair: @whpearson @joshmyers
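The check we implemented looks roughly like this (simplified):

```go
package graphite

import (
	"io"
	"net"
	"time"
)

// connClosed does a short read before writing; io.EOF means the peer hung up
// (the CLOSE_WAIT case above) and we must reconnect instead of writing on.
func connClosed(conn net.Conn) bool {
	conn.SetReadDeadline(time.Now().Add(time.Millisecond))
	_, err := conn.Read(make([]byte, 1))
	if err == io.EOF {
		return true
	}
	if nerr, ok := err.(net.Error); ok && nerr.Timeout() {
		return false // deadline hit: no data pending, connection still alive
	}
	return err != nil
}
```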
If we write a batch of points and get a "field type conflict" error
message in return, we should drop the entire batch of points, because
this indicates that one or more points have a type that doesn't match
the database.
These errors will never go away on their own, and InfluxDB will
successfully write the points that don't have a conflict.
closes #2245
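In sketch form (error handling simplified), the write path does something like:

```go
package influxdb

import (
	"log"
	"strings"
)

// handleWriteError drops the batch on a field type conflict: returning nil
// tells the agent the batch is done, since retrying can never succeed and
// InfluxDB has already written the non-conflicting points.
func handleWriteError(err error) error {
	if err == nil {
		return nil
	}
	if strings.Contains(err.Error(), "field type conflict") {
		log.Printf("E! field type conflict, dropping batch: %v", err)
		return nil
	}
	return err // other errors are retried as before
}
```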
* Added GatherUserStatistics, row Uptime in gatherGlobalStatuses, and version fields & tags
* Updated README file
* pulling in latest from master
* ran go fmt to fix formatting
* fix unreachable code
* few fixes
* cleaning up and applying suggestions from sparrc
I don't like this behavior, but it's what InfluxDB accepts, so the
telegraf listener should be consistent with that.
I accidentally reverted this behavior when I refactored the telegraf
metric representation earlier in this release cycle.
* Fix for broken librato output
These errors are delightful, but I'd rather avoid them:
```
Error parsing /etc/telegraf/telegraf.conf, line 2: field corresponding to `api_user' is not defined in `*librato.Librato'
```
* Fixed bad format from last commit
this basically reverts #887
at some point we might want to do some special handling of reloading
plugins and keeping their state intact, but that will need to be done at
a higher level, and in a way that is thread-safe for multiple input
plugins of the same type.
Unfortunately this is a rather large feature that will not have a quick
fix available for it.
fixes #1975
fixes #2102
* plugins/input/consul: moved check_id from regular fields to tags.
When a service has more than one check, sending data for both would overwrite each other,
resulting in only one check being written (the last one). Adding check_id as a tag
ensures we will get info for all unique checks per service.
* plugins/inputs/consul: updated tests
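A sketch of the change (type and field names illustrative): with check_id in the tag set, two checks on the same service become two distinct series instead of overwriting one another.

```go
package consul

type Accumulator interface {
	AddFields(measurement string, fields map[string]interface{}, tags map[string]string)
}

type HealthCheck struct {
	ServiceName string
	CheckID     string
	Status      string
}

func gatherCheck(acc Accumulator, check HealthCheck) {
	tags := map[string]string{
		"service_name": check.ServiceName,
		"check_id":     check.CheckID, // moved from fields to tags
	}
	fields := map[string]interface{}{
		"passing": check.Status == "passing",
	}
	acc.AddFields("consul_health_checks", fields, tags)
}
```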
* fixed parsing of docker image name/version
now accounts for custom docker repos which contain a colon for a non-default port
* 1978: modifying docker test case to have a custom repo with non-standard port
* using a temp var to store index, ran gofmt
* fixes #1987, renaming iterator to 'i'
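A sketch of the fix: split on the last colon, and only treat the suffix as a version if it contains no slash, so a registry port like "myrepo:5000" stays part of the name.

```go
package main

import (
	"fmt"
	"strings"
)

func parseImage(image string) (name, version string) {
	name, version = image, "unknown"
	if i := strings.LastIndex(image, ":"); i > -1 && !strings.Contains(image[i+1:], "/") {
		name, version = image[:i], image[i+1:]
	}
	return name, version
}

func main() {
	fmt.Println(parseImage("myrepo.example.com:5000/app:1.2")) // myrepo.example.com:5000/app 1.2
	fmt.Println(parseImage("myrepo.example.com:5000/app"))     // myrepo.example.com:5000/app unknown
}
```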
* MongoDB input plugin: Improve state data
Adds ARB as a "member_status" (replica set arbiter).
Uses MongoDB replica set state string for "state" value.
* MongoDB input plugin: Improve state data - changelog update
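For context, MongoDB's standard replica set member state codes map to strings like these (only a few shown; an illustrative subset, not the plugin's full table):

```go
package mongodb

// The plugin now reports the state string, and arbiters surface as
// member_status "ARB".
var replicaStates = map[int]string{
	1: "PRIMARY",
	2: "SECONDARY",
	7: "ARBITER",
}
```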
put Makefile back to normal
removed comment from puppetagent.go
changed config_version to config_version_string and fixed yaml for build
changed wording from branch to environment for config_string
fixed casing and Changelog
fixed test case
closes #1917
* Fix bug: too many cloudwatch metrics
Cloudwatch metrics were being added incorrectly. The most obvious
symptom of this was that too many metrics were being added. A simple
check against the name of the metric proved to be a sufficient fix. In
order to test the fix, a metric selection function was factored out.
* Go fmt cloudwatch
* Cloudwatch isSelected checks metric name
* Move cloudwatch line in changelog to 1.2 features
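A simplified sketch of the factored-out selection function (types trimmed down): the fix adds the name check, so a metric is only gathered when its name appears in the configured list, not merely when its dimensions happen to match.

```go
package cloudwatch

type Metric struct {
	MetricNames []string
}

func isSelected(name string, m *Metric) bool {
	for _, n := range m.MetricNames {
		if n == name {
			return true
		}
	}
	return false
}
```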
* return partition stat alongside disk stat from disk usage method, and report device name (minus /dev/) as a tag in disk stats
* update system/disk tests to include new partition stat return value from disk usage method calls
* update changelog for #1807 (use device name instead of path to report disk stats)
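In sketch form, the device tag is just the name with its "/dev/" prefix trimmed:

```go
package system

import "strings"

// deviceTag reports the device name, minus any "/dev/" prefix, so disk stats
// are tagged "sda1" rather than the full path.
func deviceTag(device string) string {
	return strings.TrimPrefix(device, "/dev/")
}
```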
main reasons behind this:
- make adding/removing tags cheap
- make adding/removing fields cheap
- make parsing cheaper
- make parse -> decorate -> write out bytes metric flow much faster
Refactor serializer to use byte buffer
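One way to picture the resulting interface (a hypothetical shape, not the exact API):

```go
package metric

import "io"

// A metric that keeps its line-protocol bytes around: tag and field edits
// patch those bytes in place, and writing out is a straight copy.
type Metric interface {
	AddTag(key, value string)
	RemoveTag(key string)
	AddField(key string, value interface{})
	RemoveField(key string)
	// SerializeTo writes the cached representation into w, avoiding a fresh
	// allocation per metric on the hot parse -> decorate -> write path.
	SerializeTo(w io.Writer) (int, error)
}
```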
The old gonuts fork has no license and has not seen any commits
differing from the original project, while the original has seen some
activity, even if low.
Having no license is a problem for distributors, as by default, such
projects are undistributable.
* Trim null characters in Value data format
Some producers (such as the paho embedded c mqtt client) add a null
character "\x00" to the end of a message. The Value parser would fail on
any message from such a producer.
* Trim whitespace and null in all Value data formats
* No unnecessary reassignments in Value data format parser
* Update change log for Value data format fix
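The fix, in sketch form:

```go
package main

import (
	"fmt"
	"strings"
)

// clean strips NUL bytes and surrounding whitespace before parsing, so
// payloads like "42\x00" from the paho embedded C client no longer fail.
func clean(payload string) string {
	return strings.TrimSpace(strings.Trim(payload, "\x00"))
}

func main() {
	fmt.Printf("%q\n", clean(" 42\x00")) // "42"
}
```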
* added connection timeout parameter, basic HTTP authentication, and HTTPS support with an SSL skip-verify option
* updated README.md
* added optional SSL config , changed timeout name and type , and other minor fixes
* added some code style improvements
* Update README.md
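The options combine roughly like this (variable names illustrative):

```go
package plugin

import (
	"crypto/tls"
	"net/http"
	"time"
)

// newClient wires the new options together: a connection timeout and TLS
// verification that can be skipped when the ssl_skip_verify option is set.
func newClient(timeout time.Duration, sslSkipVerify bool) *http.Client {
	return &http.Client{
		Timeout: timeout,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: sslSkipVerify},
		},
	}
}

// authedRequest applies basic HTTP authentication per request.
func authedRequest(url, username, password string) (*http.Request, error) {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	req.SetBasicAuth(username, password)
	return req, nil
}
```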
* NATS output plugin now retries forever to reconnect after a lost connection.
* NATS input plugin now retries forever to reconnect after a lost connection.
* Fixes #1953
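With the nats.go client, retry-forever is a one-line option (a sketch, assuming the standard client):

```go
package natsio

import "github.com/nats-io/nats.go"

// connect asks the client to keep reconnecting indefinitely after a lost
// connection; a negative MaxReconnects means no upper bound on attempts.
func connect(servers string) (*nats.Conn, error) {
	return nats.Connect(servers, nats.MaxReconnects(-1))
}
```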
in this commit:
- chunks out the http request body to avoid making very large
allocations.
- establishes a limit for the maximum http request body size that the
listener will accept.
- utilizes a pool of byte buffers to reduce GC pressure.
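Sketched out (sizes illustrative, not the plugin's actual defaults):

```go
package httplistener

import (
	"io"
	"net/http"
)

const (
	maxBodySize = 512 * 1024 * 1024 // reject request bodies beyond this
	chunkSize   = 64 * 1024         // fixed read size; pooled in practice
)

// handleWrite reads the body in chunks so a large write never forces one
// huge allocation, and rejects bodies above the configured limit.
func handleWrite(w http.ResponseWriter, r *http.Request) {
	body := io.LimitReader(r.Body, maxBodySize+1)
	chunk := make([]byte, chunkSize)
	var total int64
	for {
		n, err := body.Read(chunk)
		total += int64(n)
		if total > maxBodySize {
			http.Error(w, "request body too large", http.StatusRequestEntityTooLarge)
			return
		}
		// parse chunk[:n] as line protocol here
		if err == io.EOF {
			break
		}
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
	}
	w.WriteHeader(http.StatusNoContent)
}
```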
The MySQL DB driver has its own DSN parsing function. Previously we
were using the url.Parse function, but this causes problems because a
valid MySQL DSN can be an invalid http URL, namely when using some
special characters in the password.
This change uses the MySQL DB driver's builtin ParseDSN function and
applies a timeout parameter natively via that.
Another benefit of this change is that we fail earlier if given an
invalid MySQL DSN.
closes #870
closes #1842
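The new flow, in sketch form, using the driver's ParseDSN and FormatDSN:

```go
package mysql

import (
	"time"

	"github.com/go-sql-driver/mysql"
)

// dsnAddTimeout lets the driver parse the DSN (it tolerates passwords that
// would break url.Parse) and applies the timeout natively; an invalid DSN
// fails right here rather than at connect time.
func dsnAddTimeout(dsn string, timeout time.Duration) (string, error) {
	conf, err := mysql.ParseDSN(dsn)
	if err != nil {
		return "", err
	}
	if conf.Timeout == 0 {
		conf.Timeout = timeout
	}
	return conf.FormatDSN(), nil
}
```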