move tags to influxdb struct, update all sample configs

JP 2015-08-07 15:31:25 -05:00
parent 48c10f9454
commit d422574eaf
27 changed files with 1228 additions and 142 deletions

1
.gitignore vendored
View File

@ -1,3 +1,4 @@
pkg/
tivan
.vagrant
telegraf

View File

@ -1,3 +1,24 @@
## v0.1.5 [unreleased]
### Features
- [#54](https://github.com/influxdb/telegraf/pull/54): MongoDB plugin. Thanks @jipperinbham!
- [#55](https://github.com/influxdb/telegraf/pull/55): Elasticsearch plugin. Thanks @brocaar!
- [#71](https://github.com/influxdb/telegraf/pull/71): HAProxy plugin. Thanks @kureikain!
- [#72](https://github.com/influxdb/telegraf/pull/72): Adding TokuDB metrics to MySQL. Thanks @vadimtk!
- [#73](https://github.com/influxdb/telegraf/pull/73): RabbitMQ plugin. Thanks @ianunruh!
- [#77](https://github.com/influxdb/telegraf/issues/77): Automatically create database.
- [#79](https://github.com/influxdb/telegraf/pull/56): Nginx plugin. Thanks @codeb2cc!
- [#86](https://github.com/influxdb/telegraf/pull/86): Lustre2 plugin. Thanks @srfraser!
- [#91](https://github.com/influxdb/telegraf/pull/91): Unit testing
- [#92](https://github.com/influxdb/telegraf/pull/92): Exec plugin. Thanks @alvaromorales!
- [#98](https://github.com/influxdb/telegraf/pull/98): LeoFS plugin. Thanks @mocchira!
- [#103](https://github.com/influxdb/telegraf/pull/103): Filter by metric tags. Thanks @srfraser!
### Bugfixes
- [#85](https://github.com/influxdb/telegraf/pull/85): Fix GetLocalHost testutil function for mac users
- [#89](https://github.com/influxdb/telegraf/pull/89): go fmt fixes
- [#94](https://github.com/influxdb/telegraf/pull/94): Fix for issue #93, explicitly call sarama.v1 -> sarama
## v0.1.4 [2015-07-09]
### Features

146
README.md
View File

@ -1,10 +1,16 @@
# Telegraf - A native agent for InfluxDB [![Circle CI](https://circleci.com/gh/influxdb/telegraf.svg?style=svg)](https://circleci.com/gh/influxdb/telegraf)
Telegraf is an agent written in Go for collecting metrics from the system it's running on or from other services and writing them into InfluxDB.
Telegraf is an agent written in Go for collecting metrics from the system it's
running on or from other services and writing them into InfluxDB.
Design goals are to have a minimal memory footprint with a plugin system so that developers in the community can easily add support for collecting metrics from well known services (like Hadoop, or Postgres, or Redis) and third party APIs (like Mailchimp, AWS CloudWatch, or Google Analytics).
Design goals are to have a minimal memory footprint with a plugin system so
that developers in the community can easily add support for collecting metrics
from well known services (like Hadoop, or Postgres, or Redis) and third party
APIs (like Mailchimp, AWS CloudWatch, or Google Analytics).
We'll eagerly accept pull requests for new plugins and will manage the set of plugins that Telegraf supports. See the bottom of this doc for instructions on writing new plugins.
We'll eagerly accept pull requests for new plugins and will manage the set of
plugins that Telegraf supports. See the bottom of this doc for instructions on
writing new plugins.
## Quickstart
@ -34,11 +40,18 @@ brew install telegraf
## Telegraf Options
Telegraf has a few options you can configure under the `agent` section of the config. If you don't see an `agent` section, run `telegraf -sample-config > telegraf.toml` to create a valid initial configuration:
Telegraf has a few options you can configure under the `agent` section of the
config. If you don't see an `agent` section, run
`telegraf -sample-config > telegraf.toml` to create a valid initial
configuration:
* **hostname**: The hostname is passed as a tag. By default this will be the value returned by `hostname` on the machine running Telegraf. You can override that value here.
* **interval**: How often to gather metrics. Uses a simple number + unit parser, ie "10s" for 10 seconds or "5m" for 5 minutes.
* **debug**: Set to true to gather and send metrics to STDOUT as well as InfluxDB.
* **hostname**: The hostname is passed as a tag. By default this will be
the value returned by `hostname` on the machine running Telegraf.
You can override that value here.
* **interval**: How often to gather metrics. Uses a simple number +
unit parser, ie "10s" for 10 seconds or "5m" for 5 minutes.
* **debug**: Set to true to gather and send metrics to STDOUT as well as
InfluxDB.
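For reference, a minimal `agent` section exercising these options might look like the following sketch (values are illustrative, mirroring the sample config shipped in this commit):

```
[agent]
# gather metrics every 10 seconds
interval = "10s"
# do not echo gathered metrics to STDOUT
debug = false
# override the hostname tag
hostname = "prod3241"
```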
## Supported Plugins
@ -58,19 +71,58 @@ Telegraf currently has support for collecting metrics from:
* Lustre2
* Memcached
We'll be adding support for many more over the coming months. Read on if you want to add support for another service or third-party API.
We'll be adding support for many more over the coming months. Read on if you
want to add support for another service or third-party API.
## Plugin Options
There are 3 configuration options that are configurable per plugin:
* **pass**: An array of strings that is used to filter metrics generated by the current plugin. Each string in the array is tested as a prefix against metrics and if it matches, the metric is emitted.
* **drop**: The inverse of pass, if a metric matches, it is not emitted.
* **interval**: How often to gather this metric. Normal plugins use a single global interval, but if one particular plugin should be run less or more often, you can configure that here.
* **pass**: An array of strings that is used to filter metrics generated by the
current plugin. Each string in the array is tested as a prefix against metric names,
and if it matches, the metric is emitted.
* **drop**: The inverse of pass, if a metric name matches, it is not emitted.
* **tagpass**: tag names and arrays of strings that are used to filter metrics
generated by the current plugin. Each string in the array is tested as an exact
match against the value of the named tag, and if it matches, the metric is emitted.
* **tagdrop**: The inverse of tagpass. If a tag matches, the metric is not emitted.
This is tested on metrics that have passed the tagpass test.
* **interval**: How often to gather this metric. Normal plugins use a single
global interval, but if one particular plugin should be run less or more often,
you can configure that here.
### Plugin Configuration Examples
```
# Read metrics about disk usage by mount point
[disk]
interval = "1m" # Run at a 1 minute interval instead of the default
[disk.tagpass]
# These tag conditions are OR, not AND.
# If the filesystem is ext4 or xfs, or the path is /opt or /home, then the metric passes
fstype = [ "ext4", "xfs" ]
path = [ "/opt", "/home" ]
[postgresql]
[postgresql.tagdrop]
# Don't report stats about the database name 'testdatabase'
db = [ "testdatabase" ]
```
```
[disk]
# Don't report stats about the following filesystem types
[disk.tagdrop]
fstype = [ "nfs", "tmpfs", "ecryptfs" ]
```
## Plugins
This section is for developers that want to create new collection plugins. Telegraf is entirely plugin driven. This interface allows for operators to
This section is for developers that want to create new collection plugins.
Telegraf is entirely plugin driven. This interface allows for operators to
pick and choose what is gathered and makes it easy for developers
to create new ways of generating metrics.
@ -86,22 +138,27 @@ developers don't have to worry about thread safety within these functions.
it prepended. This is to keep plugins honest.
* Plugins should call `plugins.Add` in their `init` function to register themselves.
See below for a quick example.
* To be available within Telegraf itself, plugins must add themselves to the `github.com/influxdb/telegraf/plugins/all/all.go` file.
* The `SampleConfig` function should return valid toml that describes how the plugin can be configured. This is included in `telegraf -sample-config`.
* To be available within Telegraf itself, plugins must add themselves to the
`github.com/influxdb/telegraf/plugins/all/all.go` file.
* The `SampleConfig` function should return valid toml that describes how the
plugin can be configured. This is included in `telegraf -sample-config`.
* The `Description` function should say in one line what this plugin does.
### Plugin interface
```go
type Plugin interface {
SampleConfig() string
Description() string
Gather(Accumulator) error
SampleConfig() string
Description() string
Gather(Accumulator) error
}
type Accumulator interface {
Add(measurement string, value interface{}, tags map[string]string)
AddValuesWithTime(measurement string, values map[string]interface{}, tags map[string]string, timestamp time.Time)
Add(measurement string, value interface{}, tags map[string]string)
AddValuesWithTime(measurement string,
values map[string]interface{},
tags map[string]string,
timestamp time.Time)
}
```
@ -118,7 +175,9 @@ The `Add` function takes 3 arguments:
* **bool**: `true` or `false`, useful to indicate the presence of a state. `light_on`, etc.
* **string**: Typically used to indicate a message, or some kind of freeform information.
* **time.Time**: Useful for indicating when a state last occurred, for instance `light_on_since`.
* **tags**: This is a map of strings to strings to describe the where or who about the metric. For instance, the `net` plugin adds a tag named `"interface"` set to the name of the network interface, like `"eth0"`.
* **tags**: This is a map of strings to strings to describe the where or who
about the metric. For instance, the `net` plugin adds a tag named `"interface"`
set to the name of the network interface, like `"eth0"`.
The `AddValuesWithTime` allows multiple values for a point to be passed. The values
used are the same type profile as **value** above. The **timestamp** argument
sets the time for the point.
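As a sketch of how this looks in practice (the `MyPlugin` type and the measurement and value names here are hypothetical):

```go
func (p *MyPlugin) Gather(acc plugins.Accumulator) error {
	values := map[string]interface{}{
		"used": 8234,
		"free": 32,
	}
	tags := map[string]string{"host": "server01"}
	// both values are attached to the same point in time
	acc.AddValuesWithTime("memory", values, tags, time.Now())
	return nil
}
```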
@ -129,20 +188,20 @@ Let's say you've written a plugin that emits metrics about processes on the curr
```go
type Process struct {
CPUTime float64
MemoryBytes int64
PID int
CPUTime float64
MemoryBytes int64
PID int
}
func Gather(acc plugins.Accumulator) error {
for _, process := range system.Processes() {
tags := map[string]string {
"pid": fmt.Sprintf("%d", process.Pid),
}
for _, process := range system.Processes() {
tags := map[string]string {
"pid": fmt.Sprintf("%d", process.Pid),
}
acc.Add("cpu", process.CPUTime, tags)
acc.Add("memory", process.MemoryBytes, tags)
}
acc.Add("cpu", process.CPUTime, tags)
acc.Add("memory", process.MemoryBytes, tags)
}
}
```
@ -156,29 +215,29 @@ package simple
import "github.com/influxdb/telegraf/plugins"
type Simple struct {
Ok bool
Ok bool
}
func (s *Simple) Description() string {
return "a demo plugin"
return "a demo plugin"
}
func (s *Simple) SampleConfig() string {
return "ok = true # indicate if everything is fine"
return "ok = true # indicate if everything is fine"
}
func (s *Simple) Gather(acc plugins.Accumulator) error {
if s.Ok {
acc.Add("state", "pretty good", nil)
} else {
acc.Add("state", "not great", nil)
}
if s.Ok {
acc.Add("state", "pretty good", nil)
} else {
acc.Add("state", "not great", nil)
}
return nil
return nil
}
func init() {
plugins.Add("simple", func() plugins.Plugin { return &Simple{} })
plugins.Add("simple", func() plugins.Plugin { return &Simple{} })
}
```
@ -186,7 +245,7 @@ func init() {
### Execute short tests:
execute `make short-test`
execute `make test-short`
### Execute long tests:
@ -202,8 +261,9 @@ a simple mock will suffice.
To execute Telegraf tests follow these simple steps:
- Install docker compose following [these](https://docs.docker.com/compose/install/) instructions
- NOTE: Mac users should be able to simply do `brew install boot2docker`
- Install docker compose following [these](https://docs.docker.com/compose/install/)
instructions
- Mac users should be able to simply do `brew install boot2docker`
and `brew install docker-compose`
- execute `make test`

View File

@ -32,7 +32,7 @@ func (bp *BatchPoints) Add(measurement string, val interface{}, tags map[string]
measurement = bp.Prefix + measurement
if bp.Config != nil {
if !bp.Config.ShouldPass(measurement) {
if !bp.Config.ShouldPass(measurement, tags) {
return
}
}
@ -71,7 +71,7 @@ func (bp *BatchPoints) AddValuesWithTime(
measurement = bp.Prefix + measurement
if bp.Config != nil {
if !bp.Config.ShouldPass(measurement) {
if !bp.Config.ShouldPass(measurement, tags) {
return
}
}

View File

@ -3,8 +3,10 @@ package telegraf
import (
"fmt"
"log"
"net/url"
"os"
"sort"
"strings"
"sync"
"time"
@ -66,9 +68,10 @@ func NewAgent(config *Config) (*Agent, error) {
return agent, nil
}
// Connect connects to the agent's config URL
func (a *Agent) Connect() error {
for _, o := range a.outputs {
err := o.output.Connect()
err := o.output.Connect(a.Hostname)
if err != nil {
return err
}
@ -96,7 +99,19 @@ func (a *Agent) LoadOutputs() ([]string, error) {
names = append(names, name)
}
sort.Strings(names)
return names, nil
}
@ -157,7 +172,6 @@ func (a *Agent) crankParallel() error {
close(points)
var bp BatchPoints
bp.Tags = a.Config.Tags
bp.Time = time.Now()
for sub := range points {
@ -181,7 +195,6 @@ func (a *Agent) crank() error {
}
}
acc.Tags = a.Config.Tags
acc.Time = time.Now()
return a.flush(&acc)
@ -204,6 +217,7 @@ func (a *Agent) crankSeparate(shutdown chan struct{}, plugin *runningPlugin) err
acc.Tags = a.Config.Tags
acc.Time = time.Now()
acc.Database = a.Config.Database
err = a.flush(&acc)
if err != nil {

View File

@ -1,24 +1,29 @@
dependencies:
post:
# install rpm & fpm for packaging
- which rpmbuild || sudo apt-get install rpm
- gem install fpm
# install golint
- go get github.com/golang/lint/golint
# install binaries
- go install ./...
# install gox
- go get -u -f github.com/mitchellh/gox
test:
pre:
# Vet go code for any potential errors
- go vet ./...
# Verify that all files are properly go formatted
# install binaries
- go install ./...
# Go fmt should pass all files
- "[ `git ls-files | grep '.go$' | xargs gofmt -l 2>&1 | wc -l` -eq 0 ]"
# Only docker-compose up kafka, the other services are already running
# see: https://circleci.com/docs/environment#databases
# - docker-compose up -d kafka
override:
# Enforce that testutil, cmd, and main directory are fully linted
- go vet ./...
- golint .
- golint testutil/...
- golint cmd/...
# Run short unit tests
override:
- make test-short
# TODO run full unit test suite
post:
# Build linux binaries
- gox -os="linux" -arch="386 amd64" ./...
- mv telegraf* $CIRCLE_ARTIFACTS
# Build .deb and .rpm files
- "GOPATH=/home/ubuntu/.go_project ./package.sh `git rev-parse --short HEAD`"
- mv telegraf*{deb,rpm} $CIRCLE_ARTIFACTS

View File

@ -71,6 +71,10 @@ func main() {
if err != nil {
log.Fatal(err)
}
if len(plugins) == 0 {
log.Printf("Error: no plugins found, did you provide a config file?")
os.Exit(1)
}
if *fTest {
if *fConfig != "" {
@ -102,7 +106,7 @@ func main() {
close(shutdown)
}()
log.Print("InfluxDB Agent running")
log.Print("Telegraf Agent running")
log.Printf("Loaded outputs: %s", strings.Join(outputs, " "))
log.Printf("Loaded plugins: %s", strings.Join(plugins, " "))
if ag.Debug {
@ -111,10 +115,6 @@ func main() {
ag.Interval, ag.Debug, ag.Hostname)
}
if len(outputs) > 0 {
log.Printf("Tags enabled: %v", config.ListTags())
}
if *fPidfile != "" {
f, err := os.Create(*fPidfile)
if err != nil {

110
config.go
View File

@ -34,8 +34,6 @@ func (d *Duration) UnmarshalTOML(b []byte) error {
// will be logging to, as well as all the plugins that the user has
// specified
type Config struct {
Tags map[string]string
agent *ast.Table
plugins map[string]*ast.Table
outputs map[string]*ast.Table
@ -45,24 +43,41 @@ type Config struct {
func (c *Config) Plugins() map[string]*ast.Table {
return c.plugins
}
type TagFilter struct {
Name string
Filter []string
}
// Outputs returns the configured outputs as a map of name -> output toml
func (c *Config) Outputs() map[string]*ast.Table {
return c.outputs
}
// The name of a tag, and the values on which to filter
type TagFilter struct {
Name string
Filter []string
}
// ConfiguredPlugin containing a name, interval, and drop/pass prefix lists
// Also lists the tags to filter
type ConfiguredPlugin struct {
Name string
Drop []string
Pass []string
TagDrop []TagFilter
TagPass []TagFilter
TagDrop []TagFilter
TagPass []TagFilter
Interval time.Duration
}
// ShouldPass returns true if the metric should pass, false if should drop
func (cp *ConfiguredPlugin) ShouldPass(measurement string) bool {
func (cp *ConfiguredPlugin) ShouldPass(measurement string, tags map[string]string) bool {
if cp.Pass != nil {
for _, pat := range cp.Pass {
if strings.HasPrefix(measurement, pat) {
@ -83,16 +98,48 @@ func (cp *ConfiguredPlugin) ShouldPass(measurement string) bool {
return true
}
if cp.TagPass != nil {
for _, pat := range cp.TagPass {
if tagval, ok := tags[pat.Name]; ok {
for _, filter := range pat.Filter {
if filter == tagval {
return true
}
}
}
}
return false
}
if cp.TagDrop != nil {
for _, pat := range cp.TagDrop {
if tagval, ok := tags[pat.Name]; ok {
for _, filter := range pat.Filter {
if filter == tagval {
return false
}
}
}
}
return true
}
return true
}

// ApplyOutput loads the toml config into the given interface
func (c *Config) ApplyOutput(name string, v interface{}) error {
if c.outputs[name] != nil {
return toml.UnmarshalTable(c.outputs[name], v)
}
return nil
}
// ApplyAgent loads the toml config into the given interface
@ -149,9 +196,47 @@ func (c *Config) ApplyPlugin(name string, v interface{}) (*ConfiguredPlugin, err
}
}
if node, ok := tbl.Fields["tagpass"]; ok {
if subtbl, ok := node.(*ast.Table); ok {
for name, val := range subtbl.Fields {
if kv, ok := val.(*ast.KeyValue); ok {
tagfilter := &TagFilter{Name: name}
if ary, ok := kv.Value.(*ast.Array); ok {
for _, elem := range ary.Value {
if str, ok := elem.(*ast.String); ok {
tagfilter.Filter = append(tagfilter.Filter, str.Value)
}
}
}
cp.TagPass = append(cp.TagPass, *tagfilter)
}
}
}
}
if node, ok := tbl.Fields["tagdrop"]; ok {
if subtbl, ok := node.(*ast.Table); ok {
for name, val := range subtbl.Fields {
if kv, ok := val.(*ast.KeyValue); ok {
tagfilter := &TagFilter{Name: name}
if ary, ok := kv.Value.(*ast.Array); ok {
for _, elem := range ary.Value {
if str, ok := elem.(*ast.String); ok {
tagfilter.Filter = append(tagfilter.Filter, str.Value)
}
}
}
cp.TagDrop = append(cp.TagDrop, *tagfilter)
}
}
}
}
delete(tbl.Fields, "drop")
delete(tbl.Fields, "pass")
delete(tbl.Fields, "interval")
delete(tbl.Fields, "tagdrop")
delete(tbl.Fields, "tagpass")
return cp, toml.UnmarshalTable(tbl, v)
}
@ -200,7 +285,6 @@ func LoadConfig(path string) (*Config, error) {
}
c := &Config{
Tags: make(map[string]string),
plugins: make(map[string]*ast.Table),
outputs: make(map[string]*ast.Table),
}
@ -214,10 +298,6 @@ func LoadConfig(path string) (*Config, error) {
switch name {
case "agent":
c.agent = subtbl
case "tags":
if err := toml.UnmarshalTable(subtbl, c.Tags); err != nil {
return nil, errInvalidConfig
}
case "outputs":
for outputName, outputVal := range subtbl.Fields {
outputSubtbl, ok := outputVal.(*ast.Table)
@ -280,8 +360,11 @@ var header = `# Telegraf configuration
# NOTE: The configuration has a few required parameters. They are marked
# with 'required'. Be sure to edit those to make this configuration work.
# OUTPUTS
[outputs]
# Configuration for influxdb server to send metrics to
[influxdb]
[outputs.influxdb]
# The full HTTP endpoint URL for your InfluxDB instance
url = "http://localhost:8086" # required.
@ -298,12 +381,11 @@ database = "telegraf" # required.
# Set the user agent for the POSTs (can be useful for log differentiation)
# user_agent = "telegraf"
# tags = { "dc": "us-east-1" }
# Tags can also be specified via a normal map, but only one form at a time:
# [influxdb.tags]
# dc = "us-east-1"
# tags = { "dc" = "us-east-1" }
# Configuration for telegraf itself
# [agent]
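To make the new tag filtering in `ShouldPass` above concrete, here is a minimal sketch (not part of the diff) of how a configured plugin behaves, assuming the `ConfiguredPlugin` and `TagFilter` types from config.go:

```go
cp := &ConfiguredPlugin{
	Name:    "disk",
	TagPass: []TagFilter{{Name: "fstype", Filter: []string{"ext4", "xfs"}}},
}

tags := map[string]string{"fstype": "ext4", "path": "/opt"}
cp.ShouldPass("disk_total", tags) // true: the fstype tag matches a tagpass entry

tags["fstype"] = "tmpfs"
cp.ShouldPass("disk_total", tags) // false: tagpass is set and nothing matches
```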

View File

@ -39,9 +39,7 @@ database = "telegraf" # required.
# Set the user agent for the POSTs (can be useful for log differentiation)
# user_agent = "telegraf"
# Tags can also be specified via a normal map, but only one form at a time:
# [tags]
# dc = "us-east-1"
# tags = { "dc" = "us-east-1" }
# Configuration for telegraf itself
# [agent]

View File

@ -13,11 +13,12 @@ type InfluxDB struct {
Password string
Database string
UserAgent string
Tags map[string]string
conn *client.Client
}
func (i *InfluxDB) Connect() error {
func (i *InfluxDB) Connect(host string) error {
u, err := url.Parse(i.URL)
if err != nil {
return err
@ -34,12 +35,18 @@ func (i *InfluxDB) Connect() error {
return err
}
if i.Tags == nil {
i.Tags = make(map[string]string)
}
i.Tags["host"] = host
i.conn = c
return nil
}
func (i *InfluxDB) Write(bp client.BatchPoints) error {
bp.Database = i.Database
bp.Tags = i.Tags
if _, err := i.conn.Write(bp); err != nil {
return err
}
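This is the heart of the commit: global `[tags]` move onto the InfluxDB output itself, and `Connect` additionally injects a `host` tag from the agent's hostname. A sketch of the resulting output configuration, based on the sample config header above (the inline-table form is an assumption consistent with the `Tags map[string]string` field):

```
[outputs]

# Configuration for influxdb server to send metrics to
[outputs.influxdb]
url = "http://localhost:8086"
database = "telegraf"
# every point written through this output carries these tags,
# plus a "host" tag set automatically at connect time
tags = { "dc" = "us-east-1" }
```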

View File

@ -5,7 +5,7 @@ import (
)
type Output interface {
Connect() error
Connect(string) error
Write(client.BatchPoints) error
}
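With `Connect` now taking the agent's hostname, every registered output must accept it even if it has no use for it. A minimal sketch of a conforming output (the `NullOutput` type is hypothetical, for illustration only):

```go
type NullOutput struct{}

// Connect receives the agent's hostname; this output needs no setup.
func (n *NullOutput) Connect(host string) error { return nil }

// Write discards the batch.
func (n *NullOutput) Write(bp client.BatchPoints) error { return nil }
```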

View File

@ -151,6 +151,7 @@ make_dir_tree() {
echo "Failed to create configuration directory -- aborting."
cleanup_exit 1
fi
}
@ -195,11 +196,11 @@ if which update-rc.d > /dev/null 2>&1 ; then
update-rc.d -f telegraf remove
update-rc.d telegraf defaults
else
chkconfig --add telegraf
chkconfig --add telegraf
fi
if ! id telegraf >/dev/null 2>&1; then
useradd --system -U -M telegraf
useradd --system -U -M telegraf
fi
chown -R -L telegraf:telegraf $INSTALL_ROOT_DIR
chmod -R a+rX $INSTALL_ROOT_DIR
@ -223,10 +224,14 @@ fi
echo -e "\nStarting package process...\n"
check_gvm
if [ "$CIRCLE_BRANCH" == "" ]; then
check_gvm
fi
check_gopath
check_clean_tree
update_tree
if [ "$CIRCLE_BRANCH" == "" ]; then
check_clean_tree
update_tree
fi
check_tag_exists $VERSION
do_build $VERSION
make_dir_tree $TMP_WORK_DIR $VERSION
@ -258,7 +263,7 @@ if [ $? -ne 0 ]; then
cleanup_exit 1
fi
cp $LOGROTATE_CONFIGURATION $TMP_WORK_DIR/$LOGROTATE_DIR/telegraf.conf
cp $LOGROTATE_CONFIGURATION $TMP_WORK_DIR/$LOGROTATE_DIR/telegraf
if [ $? -ne 0 ]; then
echo "Failed to copy $LOGROTATE_CONFIGURATION to packaging directory -- aborting."
cleanup_exit 1
@ -269,12 +274,14 @@ generate_postinstall_script $VERSION
###########################################################################
# Create the actual packages.
echo -n "Commence creation of $ARCH packages, version $VERSION? [Y/n] "
read response
response=`echo $response | tr 'A-Z' 'a-z'`
if [ "x$response" == "xn" ]; then
echo "Packaging aborted."
cleanup_exit 1
if [ "$CIRCLE_BRANCH" == "" ]; then
echo -n "Commence creation of $ARCH packages, version $VERSION? [Y/n] "
read response
response=`echo $response | tr 'A-Z' 'a-z'`
if [ "x$response" == "xn" ]; then
echo "Packaging aborted."
cleanup_exit 1
fi
fi
if [ "$ARCH" == "i386" ]; then
@ -308,51 +315,54 @@ echo "Debian package created successfully."
###########################################################################
# Offer to tag the repo.
echo -n "Tag source tree with v$VERSION and push to repo? [y/N] "
read response
response=`echo $response | tr 'A-Z' 'a-z'`
if [ "x$response" == "xy" ]; then
echo "Creating tag v$VERSION and pushing to repo"
git tag v$VERSION
if [ $? -ne 0 ]; then
echo "Failed to create tag v$VERSION -- aborting"
cleanup_exit 1
if [ "$CIRCLE_BRANCH" == "" ]; then
echo -n "Tag source tree with v$VERSION and push to repo? [y/N] "
read response
response=`echo $response | tr 'A-Z' 'a-z'`
if [ "x$response" == "xy" ]; then
echo "Creating tag v$VERSION and pushing to repo"
git tag v$VERSION
if [ $? -ne 0 ]; then
echo "Failed to create tag v$VERSION -- aborting"
cleanup_exit 1
fi
git push origin v$VERSION
if [ $? -ne 0 ]; then
echo "Failed to push tag v$VERSION to repo -- aborting"
cleanup_exit 1
fi
else
echo "Not creating tag v$VERSION."
fi
git push origin v$VERSION
if [ $? -ne 0 ]; then
echo "Failed to push tag v$VERSION to repo -- aborting"
cleanup_exit 1
fi
else
echo "Not creating tag v$VERSION."
fi
###########################################################################
# Offer to publish the packages.
echo -n "Publish packages to S3? [y/N] "
read response
response=`echo $response | tr 'A-Z' 'a-z'`
if [ "x$response" == "xy" ]; then
echo "Publishing packages to S3."
if [ ! -e "$AWS_FILE" ]; then
echo "$AWS_FILE does not exist -- aborting."
cleanup_exit 1
fi
for filepath in `ls *.{deb,rpm}`; do
echo "Uploading $filepath to S3"
filename=`basename $filepath`
echo "Uploading $filename to s3://get.influxdb.org/telegraf/$filename"
AWS_CONFIG_FILE=$AWS_FILE aws s3 cp $filepath s3://get.influxdb.org/telegraf/$filename --acl public-read --region us-east-1
if [ $? -ne 0 ]; then
echo "Upload failed -- aborting".
if [ "$CIRCLE_BRANCH" == "" ]; then
echo -n "Publish packages to S3? [y/N] "
read response
response=`echo $response | tr 'A-Z' 'a-z'`
if [ "x$response" == "xy" ]; then
echo "Publishing packages to S3."
if [ ! -e "$AWS_FILE" ]; then
echo "$AWS_FILE does not exist -- aborting."
cleanup_exit 1
fi
done
else
echo "Not publishing packages to S3."
for filepath in `ls *.{deb,rpm}`; do
echo "Uploading $filepath to S3"
filename=`basename $filepath`
echo "Uploading $filename to s3://get.influxdb.org/telegraf/$filename"
AWS_CONFIG_FILE=$AWS_FILE aws s3 cp $filepath s3://get.influxdb.org/telegraf/$filename --acl public-read --region us-east-1
if [ $? -ne 0 ]; then
echo "Upload failed -- aborting".
cleanup_exit 1
fi
done
else
echo "Not publishing packages to S3."
fi
fi
###########################################################################

View File

@ -3,8 +3,10 @@ package all
import (
_ "github.com/influxdb/telegraf/plugins/disque"
_ "github.com/influxdb/telegraf/plugins/elasticsearch"
_ "github.com/influxdb/telegraf/plugins/exec"
_ "github.com/influxdb/telegraf/plugins/haproxy"
_ "github.com/influxdb/telegraf/plugins/kafka_consumer"
_ "github.com/influxdb/telegraf/plugins/leofs"
_ "github.com/influxdb/telegraf/plugins/lustre2"
_ "github.com/influxdb/telegraf/plugins/memcached"
_ "github.com/influxdb/telegraf/plugins/mongodb"

125
plugins/exec/exec.go Normal file
View File

@ -0,0 +1,125 @@
package exec
import (
"bytes"
"encoding/json"
"fmt"
"github.com/gonuts/go-shellquote"
"github.com/influxdb/telegraf/plugins"
"os/exec"
"sync"
)
const sampleConfig = `
# specify commands via an array of tables
[[exec.commands]]
# the command to run
command = "/usr/bin/mycollector --foo=bar"
# name of the command (used as a prefix for measurements)
name = "mycollector"
`
type Command struct {
Command string
Name string
}
type Exec struct {
Commands []*Command
runner Runner
}
type Runner interface {
Run(string, ...string) ([]byte, error)
}
type CommandRunner struct {
}
func NewExec() *Exec {
return &Exec{runner: CommandRunner{}}
}
func (c CommandRunner) Run(command string, args ...string) ([]byte, error) {
cmd := exec.Command(command, args...)
var out bytes.Buffer
cmd.Stdout = &out
if err := cmd.Run(); err != nil {
return nil, fmt.Errorf("exec: %s for command '%s'", err, command)
}
return out.Bytes(), nil
}
func (e *Exec) SampleConfig() string {
return sampleConfig
}
func (e *Exec) Description() string {
return "Read flattened metrics from one or more commands that output JSON to stdout"
}
func (e *Exec) Gather(acc plugins.Accumulator) error {
var wg sync.WaitGroup
var outerr error
for _, c := range e.Commands {
wg.Add(1)
go func(c *Command, acc plugins.Accumulator) {
defer wg.Done()
outerr = e.gatherCommand(c, acc)
}(c, acc)
}
wg.Wait()
return outerr
}
func (e *Exec) gatherCommand(c *Command, acc plugins.Accumulator) error {
words, err := shellquote.Split(c.Command)
if err != nil || len(words) == 0 {
return fmt.Errorf("exec: unable to parse command, %s", err)
}
out, err := e.runner.Run(words[0], words[1:]...)
if err != nil {
return err
}
var jsonOut interface{}
err = json.Unmarshal(out, &jsonOut)
if err != nil {
return fmt.Errorf("exec: unable to parse output of '%s' as JSON, %s", c.Command, err)
}
return processResponse(acc, c.Name, map[string]string{}, jsonOut)
}
func processResponse(acc plugins.Accumulator, prefix string, tags map[string]string, v interface{}) error {
switch t := v.(type) {
case map[string]interface{}:
for k, v := range t {
if err := processResponse(acc, prefix+"_"+k, tags, v); err != nil {
return err
}
}
case float64:
acc.Add(prefix, v, tags)
case bool, string, []interface{}:
// ignored types
return nil
default:
return fmt.Errorf("exec: got unexpected type %T with value %v (%s)", t, v, prefix)
}
return nil
}
func init() {
plugins.Add("exec", func() plugins.Plugin {
return NewExec()
})
}
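To see what `processResponse` does with nested JSON: each numeric leaf becomes a measurement named by joining the key path onto the command's `name` with underscores, while booleans, strings, and arrays are ignored. For example (drawn from the test fixture below):

```
{"status": "green", "num_processes": 82, "cpu": {"used": 8234, "free": 32}}

# with name = "mycollector", this emits:
#   mycollector_num_processes = 82
#   mycollector_cpu_used      = 8234
#   mycollector_cpu_free      = 32
```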

88
plugins/exec/exec_test.go Normal file
View File

@ -0,0 +1,88 @@
package exec
import (
"fmt"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"testing"
)
const validJson = `
{
"status": "green",
"num_processes": 82,
"cpu": {
"status": "red",
"used": 8234,
"free": 32
},
"percent": 0.81,
"users": [0, 1, 2, 3]
}`
const malformedJson = `
{
"status": "green",
`
type runnerMock struct {
out []byte
err error
}
func newRunnerMock(out []byte, err error) Runner {
return &runnerMock{out: out, err: err}
}
func (r runnerMock) Run(command string, args ...string) ([]byte, error) {
if r.err != nil {
return nil, r.err
}
return r.out, nil
}
func TestExec(t *testing.T) {
runner := newRunnerMock([]byte(validJson), nil)
command := Command{Command: "testcommand arg1", Name: "mycollector"}
e := &Exec{runner: runner, Commands: []*Command{&command}}
var acc testutil.Accumulator
err := e.Gather(&acc)
require.NoError(t, err)
checkFloat := []struct {
name string
value float64
}{
{"mycollector_num_processes", 82},
{"mycollector_cpu_used", 8234},
{"mycollector_cpu_free", 32},
{"mycollector_percent", 0.81},
}
for _, c := range checkFloat {
assert.True(t, acc.CheckValue(c.name, c.value))
}
assert.Equal(t, len(acc.Points), 4, "non-numeric measurements should be ignored")
}
func TestExecMalformed(t *testing.T) {
runner := newRunnerMock([]byte(malformedJson), nil)
command := Command{Command: "badcommand arg1", Name: "mycollector"}
e := &Exec{runner: runner, Commands: []*Command{&command}}
var acc testutil.Accumulator
err := e.Gather(&acc)
require.Error(t, err)
}
func TestCommandError(t *testing.T) {
runner := newRunnerMock(nil, fmt.Errorf("exit status code 1"))
command := Command{Command: "badcommand", Name: "mycollector"}
e := &Exec{runner: runner, Commands: []*Command{&command}}
var acc testutil.Accumulator
err := e.Gather(&acc)
require.Error(t, err)
}

View File

@ -5,10 +5,10 @@ import (
"os/signal"
"time"
"github.com/Shopify/sarama"
"github.com/influxdb/influxdb/tsdb"
"github.com/influxdb/telegraf/plugins"
"github.com/wvanbergen/kafka/consumergroup"
"gopkg.in/Shopify/sarama.v1"
)
type Kafka struct {

View File

@ -5,10 +5,10 @@ import (
"testing"
"time"
"github.com/Shopify/sarama"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gopkg.in/Shopify/sarama.v1"
)
const testMsg = "cpu_load_short,direction=in,host=server01,region=us-west value=23422.0 1422568543702900257"

228
plugins/leofs/leofs.go Normal file
View File

@ -0,0 +1,228 @@
package leofs
import (
"bufio"
"fmt"
"github.com/influxdb/telegraf/plugins"
"net/url"
"os/exec"
"strconv"
"strings"
"sync"
)
const oid = ".1.3.6.1.4.1.35450"
// For Manager Master
const defaultEndpoint = "127.0.0.1:4020"
type ServerType int
const (
ServerTypeManagerMaster ServerType = iota
ServerTypeManagerSlave
ServerTypeStorage
ServerTypeGateway
)
type LeoFS struct {
Servers []string
}
var KeyMapping = map[ServerType][]string{
ServerTypeManagerMaster: {
"num_of_processes",
"total_memory_usage",
"system_memory_usage",
"processes_memory_usage",
"ets_memory_usage",
"num_of_processes_5min",
"total_memory_usage_5min",
"system_memory_usage_5min",
"processes_memory_usage_5min",
"ets_memory_usage_5min",
"used_allocated_memory",
"allocated_memory",
"used_allocated_memory_5min",
"allocated_memory_5min",
},
ServerTypeManagerSlave: {
"num_of_processes",
"total_memory_usage",
"system_memory_usage",
"processes_memory_usage",
"ets_memory_usage",
"num_of_processes_5min",
"total_memory_usage_5min",
"system_memory_usage_5min",
"processes_memory_usage_5min",
"ets_memory_usage_5min",
"used_allocated_memory",
"allocated_memory",
"used_allocated_memory_5min",
"allocated_memory_5min",
},
ServerTypeStorage: {
"num_of_processes",
"total_memory_usage",
"system_memory_usage",
"processes_memory_usage",
"ets_memory_usage",
"num_of_processes_5min",
"total_memory_usage_5min",
"system_memory_usage_5min",
"processes_memory_usage_5min",
"ets_memory_usage_5min",
"num_of_writes",
"num_of_reads",
"num_of_deletes",
"num_of_writes_5min",
"num_of_reads_5min",
"num_of_deletes_5min",
"num_of_active_objects",
"total_objects",
"total_size_of_active_objects",
"total_size",
"num_of_replication_messages",
"num_of_sync-vnode_messages",
"num_of_rebalance_messages",
"used_allocated_memory",
"allocated_memory",
"used_allocated_memory_5min",
"allocated_memory_5min",
},
ServerTypeGateway: {
"num_of_processes",
"total_memory_usage",
"system_memory_usage",
"processes_memory_usage",
"ets_memory_usage",
"num_of_processes_5min",
"total_memory_usage_5min",
"system_memory_usage_5min",
"processes_memory_usage_5min",
"ets_memory_usage_5min",
"num_of_writes",
"num_of_reads",
"num_of_deletes",
"num_of_writes_5min",
"num_of_reads_5min",
"num_of_deletes_5min",
"count_of_cache-hit",
"count_of_cache-miss",
"total_of_files",
"total_cached_size",
"used_allocated_memory",
"allocated_memory",
"used_allocated_memory_5min",
"allocated_memory_5min",
},
}
var serverTypeMapping = map[string]ServerType{
"4020": ServerTypeManagerMaster,
"4021": ServerTypeManagerSlave,
"4010": ServerTypeStorage,
"4011": ServerTypeStorage,
"4012": ServerTypeStorage,
"4013": ServerTypeStorage,
"4000": ServerTypeGateway,
"4001": ServerTypeGateway,
}
var sampleConfig = `
# An array of URI to gather stats about LeoFS.
# Specify an ip or hostname with port. ie 127.0.0.1:4020
#
# If no servers are specified, then 127.0.0.1 is used as the host and 4020 as the port.
servers = ["127.0.0.1:4021"]
`
func (l *LeoFS) SampleConfig() string {
return sampleConfig
}
func (l *LeoFS) Description() string {
return "Read metrics from a LeoFS Server via SNMP"
}
func (l *LeoFS) Gather(acc plugins.Accumulator) error {
if len(l.Servers) == 0 {
l.gatherServer(defaultEndpoint, ServerTypeManagerMaster, acc)
return nil
}
var wg sync.WaitGroup
var outerr error
for _, endpoint := range l.Servers {
_, err := url.Parse(endpoint)
if err != nil {
return fmt.Errorf("Unable to parse the address:%s, err:%s", endpoint, err)
}
port, err := retrieveTokenAfterColon(endpoint)
if err != nil {
return err
}
st, ok := serverTypeMapping[port]
if !ok {
st = ServerTypeStorage
}
wg.Add(1)
go func(endpoint string, st ServerType) {
defer wg.Done()
outerr = l.gatherServer(endpoint, st, acc)
}(endpoint, st)
}
wg.Wait()
return outerr
}
func (l *LeoFS) gatherServer(endpoint string, serverType ServerType, acc plugins.Accumulator) error {
cmd := exec.Command("snmpwalk", "-v2c", "-cpublic", endpoint, oid)
stdout, err := cmd.StdoutPipe()
if err != nil {
return err
}
cmd.Start()
defer cmd.Wait()
scanner := bufio.NewScanner(stdout)
if !scanner.Scan() {
return fmt.Errorf("Unable to retrieve the node name")
}
nodeName, err := retrieveTokenAfterColon(scanner.Text())
if err != nil {
return err
}
nodeNameTrimmed := strings.Trim(nodeName, "\"")
tags := map[string]string{
"node": nodeNameTrimmed,
}
i := 0
for scanner.Scan() {
key := KeyMapping[serverType][i]
val, err := retrieveTokenAfterColon(scanner.Text())
if err != nil {
return err
}
fVal, err := strconv.ParseFloat(val, 64)
if err != nil {
return fmt.Errorf("Unable to parse the value:%s, err:%s", val, err)
}
acc.Add(key, fVal, tags)
i++
}
return nil
}
func retrieveTokenAfterColon(line string) (string, error) {
tokens := strings.Split(line, ":")
if len(tokens) != 2 {
return "", fmt.Errorf("':' not found in the line:%s", line)
}
return strings.TrimSpace(tokens[1]), nil
}
func init() {
plugins.Add("leofs", func() plugins.Plugin {
return &LeoFS{}
})
}

173
plugins/leofs/leofs_test.go Normal file
View File

@ -0,0 +1,173 @@
package leofs
import (
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"io/ioutil"
"log"
"os"
"os/exec"
"testing"
)
var fakeSNMP4Manager = `
package main
import "fmt"
const output = ` + "`" + `iso.3.6.1.4.1.35450.15.1.0 = STRING: "manager_888@127.0.0.1"
iso.3.6.1.4.1.35450.15.2.0 = Gauge32: 186
iso.3.6.1.4.1.35450.15.3.0 = Gauge32: 46235519
iso.3.6.1.4.1.35450.15.4.0 = Gauge32: 32168525
iso.3.6.1.4.1.35450.15.5.0 = Gauge32: 14066068
iso.3.6.1.4.1.35450.15.6.0 = Gauge32: 5512968
iso.3.6.1.4.1.35450.15.7.0 = Gauge32: 186
iso.3.6.1.4.1.35450.15.8.0 = Gauge32: 46269006
iso.3.6.1.4.1.35450.15.9.0 = Gauge32: 32202867
iso.3.6.1.4.1.35450.15.10.0 = Gauge32: 14064995
iso.3.6.1.4.1.35450.15.11.0 = Gauge32: 5492634
iso.3.6.1.4.1.35450.15.12.0 = Gauge32: 60
iso.3.6.1.4.1.35450.15.13.0 = Gauge32: 43515904
iso.3.6.1.4.1.35450.15.14.0 = Gauge32: 60
iso.3.6.1.4.1.35450.15.15.0 = Gauge32: 43533983` + "`" +
`
func main() {
fmt.Println(output)
}
`
var fakeSNMP4Storage = `
package main
import "fmt"
const output = ` + "`" + `iso.3.6.1.4.1.35450.34.1.0 = STRING: "storage_0@127.0.0.1"
iso.3.6.1.4.1.35450.34.2.0 = Gauge32: 512
iso.3.6.1.4.1.35450.34.3.0 = Gauge32: 38126307
iso.3.6.1.4.1.35450.34.4.0 = Gauge32: 22308716
iso.3.6.1.4.1.35450.34.5.0 = Gauge32: 15816448
iso.3.6.1.4.1.35450.34.6.0 = Gauge32: 5232008
iso.3.6.1.4.1.35450.34.7.0 = Gauge32: 512
iso.3.6.1.4.1.35450.34.8.0 = Gauge32: 38113176
iso.3.6.1.4.1.35450.34.9.0 = Gauge32: 22313398
iso.3.6.1.4.1.35450.34.10.0 = Gauge32: 15798779
iso.3.6.1.4.1.35450.34.11.0 = Gauge32: 5237315
iso.3.6.1.4.1.35450.34.12.0 = Gauge32: 191
iso.3.6.1.4.1.35450.34.13.0 = Gauge32: 824
iso.3.6.1.4.1.35450.34.14.0 = Gauge32: 0
iso.3.6.1.4.1.35450.34.15.0 = Gauge32: 50105
iso.3.6.1.4.1.35450.34.16.0 = Gauge32: 196654
iso.3.6.1.4.1.35450.34.17.0 = Gauge32: 0
iso.3.6.1.4.1.35450.34.18.0 = Gauge32: 2052
iso.3.6.1.4.1.35450.34.19.0 = Gauge32: 50296
iso.3.6.1.4.1.35450.34.20.0 = Gauge32: 35
iso.3.6.1.4.1.35450.34.21.0 = Gauge32: 898
iso.3.6.1.4.1.35450.34.22.0 = Gauge32: 0
iso.3.6.1.4.1.35450.34.23.0 = Gauge32: 0
iso.3.6.1.4.1.35450.34.24.0 = Gauge32: 0
iso.3.6.1.4.1.35450.34.31.0 = Gauge32: 51
iso.3.6.1.4.1.35450.34.32.0 = Gauge32: 53219328
iso.3.6.1.4.1.35450.34.33.0 = Gauge32: 51
iso.3.6.1.4.1.35450.34.34.0 = Gauge32: 53351083` + "`" +
`
func main() {
fmt.Println(output)
}
`
var fakeSNMP4Gateway = `
package main
import "fmt"
const output = ` + "`" + `iso.3.6.1.4.1.35450.34.1.0 = STRING: "gateway_0@127.0.0.1"
iso.3.6.1.4.1.35450.34.2.0 = Gauge32: 465
iso.3.6.1.4.1.35450.34.3.0 = Gauge32: 61676335
iso.3.6.1.4.1.35450.34.4.0 = Gauge32: 46890415
iso.3.6.1.4.1.35450.34.5.0 = Gauge32: 14785011
iso.3.6.1.4.1.35450.34.6.0 = Gauge32: 5578855
iso.3.6.1.4.1.35450.34.7.0 = Gauge32: 465
iso.3.6.1.4.1.35450.34.8.0 = Gauge32: 61644426
iso.3.6.1.4.1.35450.34.9.0 = Gauge32: 46880358
iso.3.6.1.4.1.35450.34.10.0 = Gauge32: 14763002
iso.3.6.1.4.1.35450.34.11.0 = Gauge32: 5582125
iso.3.6.1.4.1.35450.34.12.0 = Gauge32: 191
iso.3.6.1.4.1.35450.34.13.0 = Gauge32: 827
iso.3.6.1.4.1.35450.34.14.0 = Gauge32: 0
iso.3.6.1.4.1.35450.34.15.0 = Gauge32: 50105
iso.3.6.1.4.1.35450.34.16.0 = Gauge32: 196650
iso.3.6.1.4.1.35450.34.17.0 = Gauge32: 0
iso.3.6.1.4.1.35450.34.18.0 = Gauge32: 30256
iso.3.6.1.4.1.35450.34.19.0 = Gauge32: 532158
iso.3.6.1.4.1.35450.34.20.0 = Gauge32: 34
iso.3.6.1.4.1.35450.34.21.0 = Gauge32: 1
iso.3.6.1.4.1.35450.34.31.0 = Gauge32: 53
iso.3.6.1.4.1.35450.34.32.0 = Gauge32: 55050240
iso.3.6.1.4.1.35450.34.33.0 = Gauge32: 53
iso.3.6.1.4.1.35450.34.34.0 = Gauge32: 55186538` + "`" +
`
func main() {
fmt.Println(output)
}
`
func makeFakeSNMPSrc(code string) string {
path := os.TempDir() + "/test.go"
err := ioutil.WriteFile(path, []byte(code), 0600)
if err != nil {
log.Fatalln(err)
}
return path
}
func buildFakeSNMPCmd(src string) {
err := exec.Command("go", "build", "-o", "snmpwalk", src).Run()
if err != nil {
log.Fatalln(err)
}
}
func testMain(t *testing.T, code string, endpoint string, serverType ServerType) {
// Build the fake snmpwalk for test
src := makeFakeSNMPSrc(code)
defer os.Remove(src)
buildFakeSNMPCmd(src)
defer os.Remove("./snmpwalk")
envPathOrigin := os.Getenv("PATH")
// Refer to the fake snmpwalk
os.Setenv("PATH", ".")
defer os.Setenv("PATH", envPathOrigin)
l := &LeoFS{
Servers: []string{endpoint},
}
var acc testutil.Accumulator
err := l.Gather(&acc)
require.NoError(t, err)
floatMetrics := KeyMapping[serverType]
for _, metric := range floatMetrics {
assert.True(t, acc.HasFloatValue(metric), metric)
}
}
func TestLeoFSManagerMasterMetrics(t *testing.T) {
testMain(t, fakeSNMP4Manager, "localhost:4020", ServerTypeManagerMaster)
}
func TestLeoFSManagerSlaveMetrics(t *testing.T) {
testMain(t, fakeSNMP4Manager, "localhost:4021", ServerTypeManagerSlave)
}
func TestLeoFSStorageMetrics(t *testing.T) {
testMain(t, fakeSNMP4Storage, "localhost:4010", ServerTypeStorage)
}
func TestLeoFSGatewayMetrics(t *testing.T) {
testMain(t, fakeSNMP4Gateway, "localhost:4000", ServerTypeGateway)
}

View File

@ -24,9 +24,9 @@ func (s *DiskStats) Gather(acc plugins.Accumulator) error {
for _, du := range disks {
tags := map[string]string{
"path": du.Path,
"path": du.Path,
"fstype": du.Fstype,
}
acc.Add("total", du.Total, tags)
acc.Add("free", du.Free, tags)
acc.Add("used", du.Total-du.Free, tags)

View File

@ -63,6 +63,7 @@ func (s *systemPS) DiskUsage() ([]*disk.DiskUsageStat, error) {
for _, p := range parts {
du, err := disk.DiskUsage(p.Mountpoint)
if err != nil {
return nil, err
}
du.Fstype = p.Fstype

View File

@ -6,6 +6,7 @@ import (
type DiskUsageStat struct {
Path string `json:"path"`
Fstype string `json:"fstype"`
Total uint64 `json:"total"`
Free uint64 `json:"free"`
Used uint64 `json:"used"`

View File

@ -59,6 +59,7 @@ func TestDisk_io_counters(t *testing.T) {
func TestDiskUsageStat_String(t *testing.T) {
v := DiskUsageStat{
Path: "/",
Fstype: "ext4",
Total: 1000,
Free: 2000,
Used: 3000,
@ -68,7 +69,7 @@ func TestDiskUsageStat_String(t *testing.T) {
InodesFree: 6000,
InodesUsedPercent: 49.1,
}
e := `{"path":"/","total":1000,"free":2000,"used":3000,"used_percent":50.1,"inodes_total":4000,"inodes_used":5000,"inodes_free":6000,"inodes_used_percent":49.1}`
e := `{"path":"/","fstype":"ext4","total":1000,"free":2000,"used":3000,"used_percent":50.1,"inodes_total":4000,"inodes_used":5000,"inodes_free":6000,"inodes_used_percent":49.1}`
if e != fmt.Sprintf("%v", v) {
t.Errorf("DiskUsageStat string is invalid: %v", v)
}

View File

@ -48,6 +48,7 @@ func TestSystemStats_GenerateStats(t *testing.T) {
du := &disk.DiskUsageStat{
Path: "/",
Fstype: "ext4",
Total: 128,
Free: 23,
InodesTotal: 1234,
@ -195,7 +196,8 @@ func TestSystemStats_GenerateStats(t *testing.T) {
require.NoError(t, err)
tags := map[string]string{
"path": "/",
"path": "/",
"fstype": "ext4",
}
assert.True(t, acc.CheckTaggedValue("total", uint64(128), tags))

BIN
telegraf

Binary file not shown.

View File

@ -9,9 +9,7 @@ url = "http://localhost:8086"
username = "root"
password = "root"
database = "telegraf"
[tags]
dc = "us-phx-1"
tags = { "dc" = "us-phx-1" }
[redis]
address = ":6379"

269
testdata/telegraf-agent.toml vendored Normal file
View File

@ -0,0 +1,269 @@
# Telegraf configuration
# If this file is missing an [agent] section, you must first generate a
# valid config with 'telegraf -sample-config > telegraf.toml'
# Telegraf is entirely plugin driven. All metrics are gathered from the
# declared plugins.
# Even if a plugin has no configuration, it must be declared in here
# to be active. Declaring a plugin means just specifying the name
# as a section with no variables. To deactivate a plugin, comment
# out the name and any variables.
# Use 'telegraf -config telegraf.toml -test' to see what metrics a config
# file would generate.
# One rule that plugins conform to is wherever a connection string
# can be passed, the values '' and 'localhost' are treated specially.
# They indicate to the plugin to use their own builtin configuration to
# connect to the local system.
# NOTE: The configuration has a few required parameters. They are marked
# with 'required'. Be sure to edit those to make this configuration work.
# Configuration for influxdb server to send metrics to
[influxdb]
# The full HTTP endpoint URL for your InfluxDB instance
url = "http://localhost:8086" # required.
# The target database for metrics. This database must already exist
database = "telegraf" # required.
# Connection timeout (for the connection with InfluxDB), formatted as a string.
# Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
# If not provided, will default to 0 (no timeout)
# timeout = "5s"
# username = "telegraf"
# password = "metricsmetricsmetricsmetrics"
# Set the user agent for the POSTs (can be useful for log differentiation)
# user_agent = "telegraf"
# tags = { "dc": "us-east-1" }
# Tags can also be specified via a normal map, but only one form at a time:
# [influxdb.tags]
# dc = "us-east-1"
# Configuration for telegraf itself
# [agent]
# interval = "10s"
# debug = false
# hostname = "prod3241"
# PLUGINS
# Read metrics about cpu usage
[cpu]
# no configuration
# Read metrics about disk usage by mount point
[disk]
# no configuration
# Read metrics from one or many disque servers
[disque]
# An array of URI to gather stats about. Specify an ip or hostname
# with optional port and password. ie disque://localhost, disque://10.10.3.33:18832,
# 10.0.0.1:10000, etc.
#
# If no servers are specified, then localhost is used as the host.
servers = ["localhost"]
# Read metrics about docker containers
[docker]
# no configuration
# Read stats from one or more Elasticsearch servers or clusters
[elasticsearch]
# specify a list of one or more Elasticsearch servers
servers = ["http://localhost:9200"]
# set local to false when you want to read the indices stats from all nodes
# within the cluster
local = true
# Read flattened metrics from one or more commands that output JSON to stdout
[exec]
# specify commands via an array of tables
[[exec.commands]]
# the command to run
command = "/usr/bin/mycollector --foo=bar"
# name of the command (used as a prefix for measurements)
name = "mycollector"
# Read metrics of haproxy, via socket or csv stats page
[haproxy]
# An array of addresses to gather stats about. Specify an ip or hostname
# with optional port. ie localhost, 10.10.3.33:1936, etc.
#
# If no servers are specified, then default to 127.0.0.1:1936
servers = ["http://myhaproxy.com:1936", "http://anotherhaproxy.com:1936"]
# Or you can also use a local socket (not working yet)
# servers = ["socket:/run/haproxy/admin.sock"]
# Read metrics about disk IO by device
[io]
# no configuration
# read metrics from a Kafka topic
[kafka]
# topic to consume
topic = "topic_with_metrics"
# the name of the consumer group
consumerGroupName = "telegraf_metrics_consumers"
# an array of Zookeeper connection strings
zookeeperPeers = ["localhost:2181"]
# Batch size of points sent to InfluxDB
batchSize = 1000
# Read metrics from a LeoFS Server via SNMP
[leofs]
# An array of URI to gather stats about LeoFS.
# Specify an ip or hostname with port. ie 127.0.0.1:4020
#
# If no servers are specified, then 127.0.0.1 is used as the host and 4020 as the port.
servers = ["127.0.0.1:4021"]
# Read metrics from local Lustre service on OST, MDS
[lustre2]
# An array of /proc globs to search for Lustre stats
# If not specified, the default will work on Lustre 2.5.x
#
# ost_procfiles = ["/proc/fs/lustre/obdfilter/*/stats", "/proc/fs/lustre/osd-ldiskfs/*/stats"]
# mds_procfiles = ["/proc/fs/lustre/mdt/*/md_stats"]
# Read metrics about memory usage
[mem]
# no configuration
# Read metrics from one or many memcached servers
[memcached]
# An array of addresses to gather stats about. Specify an ip or hostname
# with optional port. ie localhost, 10.0.0.1:11211, etc.
#
# If no servers are specified, then localhost is used as the host.
servers = ["localhost"]
# Read metrics from one or many MongoDB servers
[mongodb]
# An array of URI to gather stats about. Specify an ip or hostname
# with optional port and password. ie mongodb://user:auth_key@10.10.3.30:27017,
# mongodb://10.10.3.33:18832, 10.0.0.1:10000, etc.
#
# If no servers are specified, then 127.0.0.1 is used as the host and 27017 as the port.
servers = ["127.0.0.1:27017"]
# Read metrics from one or many mysql servers
[mysql]
# specify servers via a url matching:
# [username[:password]@][protocol[(address)]]/[?tls=[true|false|skip-verify]]
# e.g. root:root@http://10.0.0.18/?tls=false
#
# If no servers are specified, then localhost is used as the host.
servers = ["localhost"]
# Read metrics about network interface usage
[net]
# By default, telegraf gathers stats from any up interface (excluding loopback)
# Setting interfaces will tell it to gather these explicit interfaces,
# regardless of status.
#
# interfaces = ["eth0", ... ]
# Read Nginx's basic status information (ngx_http_stub_status_module)
[nginx]
# An array of Nginx stub_status URI to gather stats.
urls = ["localhost/status"]
# Read metrics from one or many postgresql servers
[postgresql]
# specify servers via an array of tables
[[postgresql.servers]]
# specify address via a url matching:
# postgres://[pqgotest[:password]]@localhost?sslmode=[disable|verify-ca|verify-full]
# or a simple string:
# host=localhost user=pqotest password=... sslmode=...
#
# All connection parameters are optional. By default, the host is localhost
# and the user is the currently running user. For localhost, we default
# to sslmode=disable as well.
#
address = "sslmode=disable"
# A list of databases to pull metrics about. If not specified, metrics for all
# databases are gathered.
# databases = ["app_production", "blah_testing"]
# [[postgresql.servers]]
# address = "influx@remoteserver"
# Read metrics from one or many prometheus clients
[prometheus]
# An array of urls to scrape metrics from.
urls = ["http://localhost:9100/metrics"]
# Read metrics from one or many RabbitMQ servers via the management API
[rabbitmq]
# Specify servers via an array of tables
[[rabbitmq.servers]]
# url = "http://localhost:15672"
# username = "guest"
# password = "guest"
# A list of nodes to pull metrics about. If not specified, metrics for
# all nodes are gathered.
# nodes = ["rabbit@node1", "rabbit@node2"]
# Read metrics from one or many redis servers
[redis]
# An array of URI to gather stats about. Specify an ip or hostname
# with optional port and password. ie redis://localhost, redis://10.10.3.33:18832,
# 10.0.0.1:10000, etc.
#
# If no servers are specified, then localhost is used as the host.
servers = ["localhost"]
# Read metrics from one or many RethinkDB servers
[rethinkdb]
# An array of URI to gather stats about. Specify an ip or hostname
# with optional port and password. ie rethinkdb://user:auth_key@10.10.3.30:28105,
# rethinkdb://10.10.3.33:18832, 10.0.0.1:10000, etc.
#
# If no servers are specified, then 127.0.0.1 is used as the host and 28015 as the port.
servers = ["127.0.0.1:28015"]
# Read metrics about swap memory usage
[swap]
# no configuration
# Read metrics about system load
[system]
# no configuration