renaming plugins -> inputs

plugins/inputs/statsd/README.md (new file, 160 lines)
@@ -0,0 +1,160 @@
# Telegraf Service Plugin: statsd

#### Description

The statsd plugin is a special type of plugin which runs a background statsd
listener service while telegraf is running.

The format of the statsd messages is based on the format described in the
original [etsy statsd](https://github.com/etsy/statsd/blob/master/docs/metric_types.md)
implementation. In short, the telegraf statsd listener will accept:

- Gauges
    - `users.current.den001.myapp:32|g` <- standard
    - `users.current.den001.myapp:+10|g` <- additive
    - `users.current.den001.myapp:-10|g`
- Counters
    - `deploys.test.myservice:1|c` <- increments by 1
    - `deploys.test.myservice:101|c` <- increments by 101
    - `deploys.test.myservice:1|c|@0.1` <- with sample rate, increments by 10
- Sets
    - `users.unique:101|s`
    - `users.unique:101|s`
    - `users.unique:102|s` <- would result in a count of 2 for `users.unique`
- Timings & Histograms
    - `load.time:320|ms`
    - `load.time.nanoseconds:1|h`
    - `load.time:200|ms|@0.1` <- sampled 1/10 of the time

It is possible to omit repetitive names and merge individual stats into a
single line by separating them with additional colons:

- `users.current.den001.myapp:32|g:+10|g:-10|g`
- `deploys.test.myservice:1|c:101|c:1|c|@0.1`
- `users.unique:101|s:101|s:102|s`
- `load.time:320|ms:200|ms|@0.1`

This also allows for mixed types in a single line:

- `foo:1|c:200|ms`

The string `foo:1|c:200|ms` is internally split into the two individual metrics
`foo:1|c` and `foo:200|ms`, which are added to the aggregator separately.
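
A minimal, self-contained sketch of this split (standard library only;
`splitStatsdLine` is an illustrative helper, not a function in the plugin):

```go
package main

import (
	"fmt"
	"strings"
)

// splitStatsdLine expands a multi-metric statsd line such as
// "foo:1|c:200|ms" into its individual metrics "foo:1|c" and
// "foo:200|ms", mirroring the splitting described above.
func splitStatsdLine(line string) []string {
	bits := strings.Split(line, ":")
	if len(bits) < 2 {
		return nil // not a valid statsd line
	}
	bucket, values := bits[0], bits[1:]
	metrics := make([]string, 0, len(values))
	for _, v := range values {
		metrics = append(metrics, bucket+":"+v)
	}
	return metrics
}

func main() {
	fmt.Println(splitStatsdLine("foo:1|c:200|ms"))
	// Output: [foo:1|c foo:200|ms]
}
```
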
#### Influx Statsd

In order to take advantage of InfluxDB's tagging system, we have made a couple
of additions to the standard statsd protocol. First, you can specify
tags in a manner similar to the line-protocol, like this:

```
users.current,service=payroll,region=us-west:32|g
```
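
Assuming the default name parsing described under Measurements below (dots
become underscores, and a `metric_type` tag is added), this line would come
out roughly as the line-protocol point:

```
users_current,metric_type=gauge,region=us-west,service=payroll value=32
```
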
COMING SOON: there will be a way to specify multiple fields.
<!-- TODO Second, you can specify multiple fields within a measurement:

```
current.users,service=payroll,server=host01:west=10,east=10,central=2,south=10|g
``` -->

#### Measurements:

Meta:
- tags: `metric_type=<gauge|set|counter|timing|histogram>`

Which measurements are produced depends entirely on what the user sends, but
here is a brief rundown of what you can expect to find from each metric type:

- Gauges
    - Gauges are a constant data type. They are not subject to averaging, and
      they don't change unless you change them. That is, once you set a gauge
      value, it will be a flat line on the graph until you change it again.
- Counters
    - Counters are the most basic type. They are treated as a count of a type
      of event. They will continually increase unless you set
      `delete_counters=true`.
- Sets
    - Sets count the number of unique values passed to a key. For example, you
      could count the number of users accessing your system using
      `users:<user_id>|s`. No matter how many times the same user_id is sent,
      the count will only increase by 1.
- Timings & Histograms
    - Timers are meant to track how long something took. They are an
      invaluable tool for tracking application performance.
    - The following aggregate measurements are made for timers (a worked
      example follows this list):
        - `statsd_<name>_lower`: The lower bound is the lowest value statsd
          saw for that stat during that interval.
        - `statsd_<name>_upper`: The upper bound is the highest value statsd
          saw for that stat during that interval.
        - `statsd_<name>_mean`: The mean is the average of all values statsd
          saw for that stat during that interval.
        - `statsd_<name>_stddev`: The stddev is the sample standard deviation
          of all values statsd saw for that stat during that interval.
        - `statsd_<name>_count`: The count is the number of timings statsd saw
          for that stat during that interval. It is not averaged.
        - `statsd_<name>_percentile_<P>`: The `P`th percentile is a value x
          such that `P%` of all the values statsd saw for that stat during
          that time period are below x. The most common value used for `P` is
          `90`; it is a great number to try to optimize.
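
As a worked example, here is a sketch (an illustrative function inside
`package statsd`, not part of the plugin) of the aggregates produced for the
two example timings above, `load.time:320|ms` and `load.time:200|ms`:

```go
package statsd

import "fmt"

// ExampleTimingAggregates shows the aggregate values reported for two
// timings of 320 and 200.
func ExampleTimingAggregates() {
	rs := RunningStats{}
	for _, v := range []float64{320, 200} {
		rs.AddValue(v)
	}
	fmt.Println(rs.Lower())        // lower  -> 200
	fmt.Println(rs.Upper())        // upper  -> 320
	fmt.Println(rs.Mean())         // mean   -> 260
	fmt.Println(rs.Stddev())       // stddev -> 60
	fmt.Println(rs.Count())        // count  -> 2
	fmt.Println(rs.Percentile(90)) // percentile_90 -> 320
}
```
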
#### Plugin arguments

- **service_address** string: Address to listen for statsd UDP packets on
- **delete_gauges** boolean: Delete gauges on every collection interval
- **delete_counters** boolean: Delete counters on every collection interval
- **delete_sets** boolean: Delete set counters on every collection interval
- **delete_timings** boolean: Delete timings on every collection interval
- **percentiles** []int: Percentiles to calculate for timing & histogram stats
- **allowed_pending_messages** integer: Number of messages allowed to queue up
  waiting to be processed. When this fills, messages will be dropped and logged.
- **percentile_limit** integer: Number of timing/histogram values to track
  per-measurement in the calculation of percentiles. Raising this limit
  increases the accuracy of percentiles but also increases memory usage and
  CPU time.
- **templates** []string: Templates for transforming statsd buckets into influx
  measurements and tags.
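
For reference, these are the defaults from the plugin's own sample config (see
`sampleConfig` in statsd.go below):

```
service_address = ":8125"
delete_gauges = false
delete_counters = false
delete_sets = false
delete_timings = true
percentiles = [90]
allowed_pending_messages = 10000
percentile_limit = 1000
```
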
#### Statsd bucket -> InfluxDB line-protocol Templates

The plugin supports specifying templates for transforming statsd buckets into
InfluxDB measurement names and tags. The templates have a _measurement_ keyword,
which can be used to specify parts of the bucket that are to be used in the
measurement name. Other words in the template are used as tag names. For
example, the following template:

```
templates = [
    "measurement.measurement.region"
]
```

would result in the following transformation:

```
cpu.load.us-west:100|g
=> cpu_load,region=us-west 100
```

Users can also filter which template applies based on the name of the bucket,
using glob matching, like so:

```
templates = [
    "cpu.* measurement.measurement.region",
    "mem.* measurement.measurement.host"
]
```

which would result in the following transformation:

```
cpu.load.us-west:100|g
=> cpu_load,region=us-west 100

mem.cached.localhost:256|g
=> mem_cached,host=localhost 256
```

There are many more options available;
[more details can be found here](https://github.com/influxdb/influxdb/tree/master/services/graphite#templates).
108
plugins/inputs/statsd/running_stats.go
Normal file
108
plugins/inputs/statsd/running_stats.go
Normal file
@@ -0,0 +1,108 @@
package statsd

import (
	"math"
	"math/rand"
	"sort"
)

const defaultPercentileLimit = 1000

// RunningStats calculates a running mean, variance, standard deviation,
// lower bound, upper bound, count, and can calculate estimated percentiles.
// It is based on the incremental algorithm described here:
// https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance
type RunningStats struct {
	k   float64
	n   int64
	ex  float64
	ex2 float64

	// Array used to calculate estimated percentiles.
	// We will store a maximum of PercLimit values, at which point we will start
	// randomly replacing old values, hence it is an estimated percentile.
	perc      []float64
	PercLimit int

	upper float64
	lower float64

	// Cache whether we have sorted the list, so that we never re-sort an
	// already-sorted list, which can have very bad performance.
	sorted bool
}

func (rs *RunningStats) AddValue(v float64) {
	// Whenever a value is added, the list is no longer sorted.
	rs.sorted = false

	if rs.n == 0 {
		rs.k = v
		rs.upper = v
		rs.lower = v
		if rs.PercLimit == 0 {
			rs.PercLimit = defaultPercentileLimit
		}
		rs.perc = make([]float64, 0, rs.PercLimit)
	}

	// These are used for the running mean and variance
	rs.n += 1
	rs.ex += v - rs.k
	rs.ex2 += (v - rs.k) * (v - rs.k)

	// Track upper and lower bounds
	if v > rs.upper {
		rs.upper = v
	} else if v < rs.lower {
		rs.lower = v
	}

	if len(rs.perc) < rs.PercLimit {
		rs.perc = append(rs.perc, v)
	} else {
		// Reached limit, choose random index to overwrite in the percentile array
		rs.perc[rand.Intn(len(rs.perc))] = v
	}
}

func (rs *RunningStats) Mean() float64 {
	return rs.k + rs.ex/float64(rs.n)
}

func (rs *RunningStats) Variance() float64 {
	return (rs.ex2 - (rs.ex*rs.ex)/float64(rs.n)) / float64(rs.n)
}

func (rs *RunningStats) Stddev() float64 {
	return math.Sqrt(rs.Variance())
}

func (rs *RunningStats) Upper() float64 {
	return rs.upper
}

func (rs *RunningStats) Lower() float64 {
	return rs.lower
}

func (rs *RunningStats) Count() int64 {
	return rs.n
}

func (rs *RunningStats) Percentile(n int) float64 {
	if n > 100 {
		n = 100
	}

	if !rs.sorted {
		sort.Float64s(rs.perc)
		rs.sorted = true
	}

	i := int(float64(len(rs.perc)) * float64(n) / float64(100))
	if i < 0 {
		i = 0
	} else if i >= len(rs.perc) {
		// Clamp the index: n == 100 would otherwise index one past the end.
		i = len(rs.perc) - 1
	}
	return rs.perc[i]
}
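
For reference, the shifted-data formulas the struct implements (a restatement
of the Wikipedia algorithm linked above, not text from the commit; K is
`rs.k`, the first value seen, used as a numerical shift):

```
ex  = sum(x - K)        mean     = K + ex/n
ex2 = sum((x - K)^2)    variance = (ex2 - ex^2/n) / n
```
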
plugins/inputs/statsd/running_stats_test.go (new file, 136 lines)
@@ -0,0 +1,136 @@
package statsd

import (
	"math"
	"testing"
)

// Test that a single metric is handled correctly
func TestRunningStats_Single(t *testing.T) {
	rs := RunningStats{}
	values := []float64{10.1}

	for _, v := range values {
		rs.AddValue(v)
	}

	if rs.Mean() != 10.1 {
		t.Errorf("Expected %v, got %v", 10.1, rs.Mean())
	}
	if rs.Upper() != 10.1 {
		t.Errorf("Expected %v, got %v", 10.1, rs.Upper())
	}
	if rs.Lower() != 10.1 {
		t.Errorf("Expected %v, got %v", 10.1, rs.Lower())
	}
	if rs.Percentile(90) != 10.1 {
		t.Errorf("Expected %v, got %v", 10.1, rs.Percentile(90))
	}
	if rs.Percentile(50) != 10.1 {
		t.Errorf("Expected %v, got %v", 10.1, rs.Percentile(50))
	}
	if rs.Count() != 1 {
		t.Errorf("Expected %v, got %v", 1, rs.Count())
	}
	if rs.Variance() != 0 {
		t.Errorf("Expected %v, got %v", 0, rs.Variance())
	}
	if rs.Stddev() != 0 {
		t.Errorf("Expected %v, got %v", 0, rs.Stddev())
	}
}

// Test that duplicate values are handled correctly
func TestRunningStats_Duplicate(t *testing.T) {
	rs := RunningStats{}
	values := []float64{10.1, 10.1, 10.1, 10.1}

	for _, v := range values {
		rs.AddValue(v)
	}

	if rs.Mean() != 10.1 {
		t.Errorf("Expected %v, got %v", 10.1, rs.Mean())
	}
	if rs.Upper() != 10.1 {
		t.Errorf("Expected %v, got %v", 10.1, rs.Upper())
	}
	if rs.Lower() != 10.1 {
		t.Errorf("Expected %v, got %v", 10.1, rs.Lower())
	}
	if rs.Percentile(90) != 10.1 {
		t.Errorf("Expected %v, got %v", 10.1, rs.Percentile(90))
	}
	if rs.Percentile(50) != 10.1 {
		t.Errorf("Expected %v, got %v", 10.1, rs.Percentile(50))
	}
	if rs.Count() != 4 {
		t.Errorf("Expected %v, got %v", 4, rs.Count())
	}
	if rs.Variance() != 0 {
		t.Errorf("Expected %v, got %v", 0, rs.Variance())
	}
	if rs.Stddev() != 0 {
		t.Errorf("Expected %v, got %v", 0, rs.Stddev())
	}
}

// Test that a list of sample values returns all the correct aggregates
func TestRunningStats(t *testing.T) {
	rs := RunningStats{}
	values := []float64{10, 20, 10, 30, 20, 11, 12, 32, 45, 9, 5, 5, 5, 10, 23, 8}

	for _, v := range values {
		rs.AddValue(v)
	}

	if rs.Mean() != 15.9375 {
		t.Errorf("Expected %v, got %v", 15.9375, rs.Mean())
	}
	if rs.Upper() != 45 {
		t.Errorf("Expected %v, got %v", 45, rs.Upper())
	}
	if rs.Lower() != 5 {
		t.Errorf("Expected %v, got %v", 5, rs.Lower())
	}
	if rs.Percentile(90) != 32 {
		t.Errorf("Expected %v, got %v", 32, rs.Percentile(90))
	}
	if rs.Percentile(50) != 11 {
		t.Errorf("Expected %v, got %v", 11, rs.Percentile(50))
	}
	if rs.Count() != 16 {
		t.Errorf("Expected %v, got %v", 16, rs.Count())
	}
	if !fuzzyEqual(rs.Variance(), 124.93359, .00001) {
		t.Errorf("Expected %v, got %v", 124.93359, rs.Variance())
	}
	if !fuzzyEqual(rs.Stddev(), 11.17736, .00001) {
		t.Errorf("Expected %v, got %v", 11.17736, rs.Stddev())
	}
}

// Test that the percentile limit is respected.
func TestRunningStats_PercentileLimit(t *testing.T) {
	rs := RunningStats{}
	rs.PercLimit = 10
	values := []float64{1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}

	for _, v := range values {
		rs.AddValue(v)
	}

	if rs.Count() != 11 {
		t.Errorf("Expected %v, got %v", 11, rs.Count())
	}
	if len(rs.perc) != 10 {
		t.Errorf("Expected %v, got %v", 10, len(rs.perc))
	}
}

func fuzzyEqual(a, b, epsilon float64) bool {
	return math.Abs(a-b) <= epsilon
}

plugins/inputs/statsd/statsd.go (new file, 496 lines)
@@ -0,0 +1,496 @@
package statsd

import (
	"errors"
	"fmt"
	"log"
	"net"
	"sort"
	"strconv"
	"strings"
	"sync"

	"github.com/influxdb/influxdb/services/graphite"

	"github.com/influxdb/telegraf/plugins/inputs"
)

var dropwarn = "ERROR: Message queue full. Discarding line [%s]. " +
	"You may want to increase allowed_pending_messages in the config\n"

type Statsd struct {
	// Address & Port to serve from
	ServiceAddress string

	// Number of messages allowed to queue up in between calls to Gather. If this
	// fills up, packets will get dropped until the next Gather interval is run.
	AllowedPendingMessages int

	// Percentiles specifies the percentiles that will be calculated for timing
	// and histogram stats.
	Percentiles     []int
	PercentileLimit int

	DeleteGauges   bool
	DeleteCounters bool
	DeleteSets     bool
	DeleteTimings  bool

	sync.Mutex

	// Channel for all incoming statsd messages
	in   chan string
	done chan struct{}

	// Cache gauges, counters & sets so they can be aggregated as they arrive
	gauges   map[string]cachedgauge
	counters map[string]cachedcounter
	sets     map[string]cachedset
	timings  map[string]cachedtimings

	// bucket -> influx templates
	Templates []string
}

func NewStatsd() *Statsd {
	s := Statsd{}

	// Make data structures
	s.done = make(chan struct{})
	s.in = make(chan string, s.AllowedPendingMessages)
	s.gauges = make(map[string]cachedgauge)
	s.counters = make(map[string]cachedcounter)
	s.sets = make(map[string]cachedset)
	s.timings = make(map[string]cachedtimings)

	return &s
}
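// Note: the registry factory in init() at the bottom of this file returns a
// bare &Statsd{} rather than calling NewStatsd, so for config-driven
// instances it is Start() that (re)creates these structures once
// AllowedPendingMessages has been populated from the config.
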
// One statsd metric, form is <bucket>:<value>|<mtype>|@<samplerate>
type metric struct {
	name       string
	bucket     string
	hash       string
	intvalue   int64
	floatvalue float64
	mtype      string
	additive   bool
	samplerate float64
	tags       map[string]string
}

type cachedset struct {
	name string
	set  map[int64]bool
	tags map[string]string
}

type cachedgauge struct {
	name  string
	value float64
	tags  map[string]string
}

type cachedcounter struct {
	name  string
	value int64
	tags  map[string]string
}

type cachedtimings struct {
	name  string
	stats RunningStats
	tags  map[string]string
}

func (_ *Statsd) Description() string {
	return "Statsd Server"
}

const sampleConfig = `
# Address and port to host UDP listener on
service_address = ":8125"
# Delete gauges every interval (default=false)
delete_gauges = false
# Delete counters every interval (default=false)
delete_counters = false
# Delete sets every interval (default=false)
delete_sets = false
# Delete timings & histograms every interval (default=true)
delete_timings = true
# Percentiles to calculate for timing & histogram stats
percentiles = [90]

# templates = [
#     "cpu.* measurement*"
# ]

# Number of UDP messages allowed to queue up, once filled,
# the statsd server will start dropping packets
allowed_pending_messages = 10000

# Number of timing/histogram values to track per-measurement in the
# calculation of percentiles. Raising this limit increases the accuracy
# of percentiles but also increases the memory usage and cpu time.
percentile_limit = 1000
`

func (_ *Statsd) SampleConfig() string {
	return sampleConfig
}

func (s *Statsd) Gather(acc inputs.Accumulator) error {
	s.Lock()
	defer s.Unlock()

	for _, metric := range s.timings {
		acc.Add(metric.name+"_mean", metric.stats.Mean(), metric.tags)
		acc.Add(metric.name+"_stddev", metric.stats.Stddev(), metric.tags)
		acc.Add(metric.name+"_upper", metric.stats.Upper(), metric.tags)
		acc.Add(metric.name+"_lower", metric.stats.Lower(), metric.tags)
		acc.Add(metric.name+"_count", metric.stats.Count(), metric.tags)
		for _, percentile := range s.Percentiles {
			name := fmt.Sprintf("%s_percentile_%v", metric.name, percentile)
			acc.Add(name, metric.stats.Percentile(percentile), metric.tags)
		}
	}
	if s.DeleteTimings {
		s.timings = make(map[string]cachedtimings)
	}

	for _, metric := range s.gauges {
		acc.Add(metric.name, metric.value, metric.tags)
	}
	if s.DeleteGauges {
		s.gauges = make(map[string]cachedgauge)
	}

	for _, metric := range s.counters {
		acc.Add(metric.name, metric.value, metric.tags)
	}
	if s.DeleteCounters {
		s.counters = make(map[string]cachedcounter)
	}

	for _, metric := range s.sets {
		acc.Add(metric.name, int64(len(metric.set)), metric.tags)
	}
	if s.DeleteSets {
		s.sets = make(map[string]cachedset)
	}

	return nil
}

func (s *Statsd) Start() error {
	// Make data structures
	s.done = make(chan struct{})
	s.in = make(chan string, s.AllowedPendingMessages)
	s.gauges = make(map[string]cachedgauge)
	s.counters = make(map[string]cachedcounter)
	s.sets = make(map[string]cachedset)
	s.timings = make(map[string]cachedtimings)

	// Start the UDP listener
	go s.udpListen()
	// Start the line parser
	go s.parser()
	log.Printf("Started the statsd service on %s\n", s.ServiceAddress)
	return nil
}

// udpListen starts listening for udp packets on the configured port.
func (s *Statsd) udpListen() error {
	address, err := net.ResolveUDPAddr("udp", s.ServiceAddress)
	if err != nil {
		log.Fatalf("ERROR: ResolveUDPAddr - %s", err)
	}
	listener, err := net.ListenUDP("udp", address)
	if err != nil {
		log.Fatalf("ERROR: ListenUDP - %s", err)
	}
	defer listener.Close()
	log.Println("Statsd listener listening on: ", listener.LocalAddr().String())

	for {
		select {
		case <-s.done:
			return nil
		default:
			buf := make([]byte, 1024)
			n, _, err := listener.ReadFromUDP(buf)
			if err != nil {
				log.Printf("ERROR: %s\n", err.Error())
			}

			lines := strings.Split(string(buf[:n]), "\n")
			for _, line := range lines {
				line = strings.TrimSpace(line)
				if line != "" {
					select {
					case s.in <- line:
					default:
						log.Printf(dropwarn, line)
					}
				}
			}
		}
	}
}

// parser monitors the s.in channel; if there is a line ready, it parses the
// statsd string into a usable metric struct and aggregates the value
func (s *Statsd) parser() error {
	for {
		select {
		case <-s.done:
			return nil
		case line := <-s.in:
			s.parseStatsdLine(line)
		}
	}
}

// parseStatsdLine will parse the given statsd line, validating it as it goes.
// If the line is valid, it will be cached for the next call to Gather()
func (s *Statsd) parseStatsdLine(line string) error {
	s.Lock()
	defer s.Unlock()

	// Validate splitting the line on ":"
	bits := strings.Split(line, ":")
	if len(bits) < 2 {
		log.Printf("Error: splitting ':', unable to parse metric: %s\n", line)
		return errors.New("error parsing statsd line")
	}

	// Extract bucket name from individual metric bits
	bucketName, bits := bits[0], bits[1:]

	// Add a metric for each bit available
	for _, bit := range bits {
		m := metric{}

		m.bucket = bucketName

		// Validate splitting the bit on "|"
		pipesplit := strings.Split(bit, "|")
		if len(pipesplit) < 2 {
			log.Printf("Error: splitting '|', unable to parse metric: %s\n", line)
			return errors.New("error parsing statsd line")
		} else if len(pipesplit) > 2 {
			sr := pipesplit[2]
			errmsg := "Error: parsing sample rate, %s, it must be in format like: " +
				"@0.1, @0.5, etc. Ignoring sample rate for line: %s\n"
			if strings.Contains(sr, "@") && len(sr) > 1 {
				samplerate, err := strconv.ParseFloat(sr[1:], 64)
				if err != nil {
					log.Printf(errmsg, err.Error(), line)
				} else {
					// sample rate successfully parsed
					m.samplerate = samplerate
				}
			} else {
				log.Printf(errmsg, "", line)
			}
		}

		// Validate metric type
		switch pipesplit[1] {
		case "g", "c", "s", "ms", "h":
			m.mtype = pipesplit[1]
		default:
			log.Printf("Error: statsd metric type %s unsupported", pipesplit[1])
			return errors.New("error parsing statsd line")
		}

		// Parse the value
		if strings.ContainsAny(pipesplit[0], "-+") {
			if m.mtype != "g" {
				log.Printf("Error: +- values are only supported for gauges: %s\n", line)
				return errors.New("error parsing statsd line")
			}
			m.additive = true
		}

		switch m.mtype {
		case "g", "ms", "h":
			v, err := strconv.ParseFloat(pipesplit[0], 64)
			if err != nil {
				log.Printf("Error: parsing value to float64: %s\n", line)
				return errors.New("error parsing statsd line")
			}
			m.floatvalue = v
		case "c", "s":
			v, err := strconv.ParseInt(pipesplit[0], 10, 64)
			if err != nil {
				log.Printf("Error: parsing value to int64: %s\n", line)
				return errors.New("error parsing statsd line")
			}
			// If a sample rate is given with a counter, divide value by the rate
			if m.samplerate != 0 && m.mtype == "c" {
				v = int64(float64(v) / m.samplerate)
			}
			m.intvalue = v
		}

		// Parse the name & tags from bucket
		m.name, m.tags = s.parseName(m.bucket)
		switch m.mtype {
		case "c":
			m.tags["metric_type"] = "counter"
		case "g":
			m.tags["metric_type"] = "gauge"
		case "s":
			m.tags["metric_type"] = "set"
		case "ms":
			m.tags["metric_type"] = "timing"
		case "h":
			m.tags["metric_type"] = "histogram"
		}

		// Make a unique key for the measurement name/tags
		var tg []string
		for k, v := range m.tags {
			tg = append(tg, fmt.Sprintf("%s=%s", k, v))
		}
		sort.Strings(tg)
		m.hash = fmt.Sprintf("%s%s", strings.Join(tg, ""), m.name)

		s.aggregate(m)
	}

	return nil
}

// parseName parses the given bucket name with the list of bucket maps in the
// config file. If there is a match, it will parse the name of the metric and
// a map of tags.
// Return values are (<name>, <tags>)
func (s *Statsd) parseName(bucket string) (string, map[string]string) {
	tags := make(map[string]string)

	bucketparts := strings.Split(bucket, ",")
	// Parse out any tags in the bucket
	if len(bucketparts) > 1 {
		for _, btag := range bucketparts[1:] {
			k, v := parseKeyValue(btag)
			if k != "" {
				tags[k] = v
			}
		}
	}

	o := graphite.Options{
		Separator:   "_",
		Templates:   s.Templates,
		DefaultTags: tags,
	}

	name := bucketparts[0]
	p, err := graphite.NewParserWithOptions(o)
	if err == nil {
		name, tags, _, _ = p.ApplyTemplate(name)
	}
	name = strings.Replace(name, ".", "_", -1)
	name = strings.Replace(name, "-", "__", -1)

	return name, tags
}

// parseKeyValue parses the key and value out of a string that looks like "key=value"
func parseKeyValue(keyvalue string) (string, string) {
	var key, val string

	split := strings.Split(keyvalue, "=")
	// Must be exactly 2 to get anything meaningful out of them
	if len(split) == 2 {
		key = split[0]
		val = split[1]
	} else if len(split) == 1 {
		val = split[0]
	}

	return key, val
}

// aggregate takes in a metric. It then
// aggregates and caches the current value(s). It does not deal with the
// Delete* options, because those are dealt with in the Gather function.
func (s *Statsd) aggregate(m metric) {
	switch m.mtype {
	case "ms", "h":
		cached, ok := s.timings[m.hash]
		if !ok {
			cached = cachedtimings{
				name: m.name,
				tags: m.tags,
				stats: RunningStats{
					PercLimit: s.PercentileLimit,
				},
			}
		}

		if m.samplerate > 0 {
			// A sampled timing of rate r stands in for 1/r observations, so
			// add the value that many times.
			for i := 0; i < int(1.0/m.samplerate); i++ {
				cached.stats.AddValue(m.floatvalue)
			}
			s.timings[m.hash] = cached
		} else {
			cached.stats.AddValue(m.floatvalue)
			s.timings[m.hash] = cached
		}
	case "c":
		cached, ok := s.counters[m.hash]
		if !ok {
			s.counters[m.hash] = cachedcounter{
				name:  m.name,
				value: m.intvalue,
				tags:  m.tags,
			}
		} else {
			cached.value += m.intvalue
			s.counters[m.hash] = cached
		}
	case "g":
		cached, ok := s.gauges[m.hash]
		if !ok {
			s.gauges[m.hash] = cachedgauge{
				name:  m.name,
				value: m.floatvalue,
				tags:  m.tags,
			}
		} else {
			if m.additive {
				cached.value = cached.value + m.floatvalue
			} else {
				cached.value = m.floatvalue
			}
			s.gauges[m.hash] = cached
		}
	case "s":
		cached, ok := s.sets[m.hash]
		if !ok {
			// Completely new metric (initialize with count of 1)
			s.sets[m.hash] = cachedset{
				name: m.name,
				tags: m.tags,
				set:  map[int64]bool{m.intvalue: true},
			}
		} else {
			cached.set[m.intvalue] = true
			s.sets[m.hash] = cached
		}
	}
}

func (s *Statsd) Stop() {
	s.Lock()
	defer s.Unlock()
	log.Println("Stopping the statsd service")
	close(s.done)
	close(s.in)
}

func init() {
	inputs.Add("statsd", func() inputs.Input {
		return &Statsd{}
	})
}
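
A quick sketch of how `parseName` applies a template (an illustrative example
function inside `package statsd`; `ExampleParseName` is not part of the
commit, and the expected output follows the README's template examples):

```go
package statsd

import "fmt"

// ExampleParseName demonstrates bucket -> (name, tags) parsing.
func ExampleParseName() {
	s := NewStatsd()
	s.Templates = []string{"measurement.measurement.region"}

	name, tags := s.parseName("cpu.load.us-west")
	fmt.Println(name, tags["region"])
	// expected: cpu_load us-west
}
```
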
plugins/inputs/statsd/statsd_test.go (new file, 900 lines)
@@ -0,0 +1,900 @@
package statsd

import (
	"fmt"
	"testing"

	"github.com/influxdb/telegraf/testutil"
)

// Invalid lines should return an error
func TestParse_InvalidLines(t *testing.T) {
	s := NewStatsd()
	invalid_lines := []string{
		"i.dont.have.a.pipe:45g",
		"i.dont.have.a.colon45|c",
		"invalid.metric.type:45|e",
		"invalid.plus.minus.non.gauge:+10|c",
		"invalid.plus.minus.non.gauge:+10|s",
		"invalid.plus.minus.non.gauge:+10|ms",
		"invalid.plus.minus.non.gauge:+10|h",
		"invalid.plus.minus.non.gauge:-10|c",
		"invalid.value:foobar|c",
		"invalid.value:d11|c",
		"invalid.value:1d1|c",
	}
	for _, line := range invalid_lines {
		err := s.parseStatsdLine(line)
		if err == nil {
			t.Errorf("Parsing line %s should have resulted in an error\n", line)
		}
	}
}

// Invalid sample rates should be ignored and not applied
func TestParse_InvalidSampleRate(t *testing.T) {
	s := NewStatsd()
	invalid_lines := []string{
		"invalid.sample.rate:45|c|0.1",
		"invalid.sample.rate.2:45|c|@foo",
		"invalid.sample.rate:45|g|@0.1",
		"invalid.sample.rate:45|s|@0.1",
	}

	for _, line := range invalid_lines {
		err := s.parseStatsdLine(line)
		if err != nil {
			t.Errorf("Parsing line %s should not have resulted in an error\n", line)
		}
	}

	counter_validations := []struct {
		name  string
		value int64
		cache map[string]cachedcounter
	}{
		{
			"invalid_sample_rate",
			45,
			s.counters,
		},
		{
			"invalid_sample_rate_2",
			45,
			s.counters,
		},
	}

	for _, test := range counter_validations {
		err := test_validate_counter(test.name, test.value, test.cache)
		if err != nil {
			t.Error(err.Error())
		}
	}

	err := test_validate_gauge("invalid_sample_rate", 45, s.gauges)
	if err != nil {
		t.Error(err.Error())
	}

	err = test_validate_set("invalid_sample_rate", 1, s.sets)
	if err != nil {
		t.Error(err.Error())
	}
}

// Names should be parsed like . -> _ and - -> __
func TestParse_DefaultNameParsing(t *testing.T) {
	s := NewStatsd()
	valid_lines := []string{
		"valid:1|c",
		"valid.foo-bar:11|c",
	}

	for _, line := range valid_lines {
		err := s.parseStatsdLine(line)
		if err != nil {
			t.Errorf("Parsing line %s should not have resulted in an error\n", line)
		}
	}

	validations := []struct {
		name  string
		value int64
	}{
		{
			"valid",
			1,
		},
		{
			"valid_foo__bar",
			11,
		},
	}

	for _, test := range validations {
		err := test_validate_counter(test.name, test.value, s.counters)
		if err != nil {
			t.Error(err.Error())
		}
	}
}

// Test that template name transformation works
func TestParse_Template(t *testing.T) {
	s := NewStatsd()
	s.Templates = []string{
		"measurement.measurement.host.service",
	}

	lines := []string{
		"cpu.idle.localhost:1|c",
		"cpu.busy.host01.myservice:11|c",
	}

	for _, line := range lines {
		err := s.parseStatsdLine(line)
		if err != nil {
			t.Errorf("Parsing line %s should not have resulted in an error\n", line)
		}
	}

	validations := []struct {
		name  string
		value int64
	}{
		{
			"cpu_idle",
			1,
		},
		{
			"cpu_busy",
			11,
		},
	}

	// Validate counters
	for _, test := range validations {
		err := test_validate_counter(test.name, test.value, s.counters)
		if err != nil {
			t.Error(err.Error())
		}
	}
}

// Test that template filters properly
func TestParse_TemplateFilter(t *testing.T) {
	s := NewStatsd()
	s.Templates = []string{
		"cpu.idle.* measurement.measurement.host",
	}

	lines := []string{
		"cpu.idle.localhost:1|c",
		"cpu.busy.host01.myservice:11|c",
	}

	for _, line := range lines {
		err := s.parseStatsdLine(line)
		if err != nil {
			t.Errorf("Parsing line %s should not have resulted in an error\n", line)
		}
	}

	validations := []struct {
		name  string
		value int64
	}{
		{
			"cpu_idle",
			1,
		},
		{
			"cpu_busy_host01_myservice",
			11,
		},
	}

	// Validate counters
	for _, test := range validations {
		err := test_validate_counter(test.name, test.value, s.counters)
		if err != nil {
			t.Error(err.Error())
		}
	}
}

// Test that the most specific template is chosen
func TestParse_TemplateSpecificity(t *testing.T) {
	s := NewStatsd()
	s.Templates = []string{
		"cpu.* measurement.foo.host",
		"cpu.idle.* measurement.measurement.host",
	}

	lines := []string{
		"cpu.idle.localhost:1|c",
	}

	for _, line := range lines {
		err := s.parseStatsdLine(line)
		if err != nil {
			t.Errorf("Parsing line %s should not have resulted in an error\n", line)
		}
	}

	validations := []struct {
		name  string
		value int64
	}{
		{
			"cpu_idle",
			1,
		},
	}

	// Validate counters
	for _, test := range validations {
		err := test_validate_counter(test.name, test.value, s.counters)
		if err != nil {
			t.Error(err.Error())
		}
	}
}

// Test that fields are parsed correctly
func TestParse_Fields(t *testing.T) {
	if false {
		t.Errorf("TODO")
	}
}

// Test that tags within the bucket are parsed correctly
func TestParse_Tags(t *testing.T) {
	s := NewStatsd()

	tests := []struct {
		bucket string
		name   string
		tags   map[string]string
	}{
		{
			"cpu.idle,host=localhost",
			"cpu_idle",
			map[string]string{
				"host": "localhost",
			},
		},
		{
			"cpu.idle,host=localhost,region=west",
			"cpu_idle",
			map[string]string{
				"host":   "localhost",
				"region": "west",
			},
		},
		{
			"cpu.idle,host=localhost,color=red,region=west",
			"cpu_idle",
			map[string]string{
				"host":   "localhost",
				"region": "west",
				"color":  "red",
			},
		},
	}

	for _, test := range tests {
		name, tags := s.parseName(test.bucket)
		if name != test.name {
			t.Errorf("Expected: %s, got %s", test.name, name)
		}

		for k, v := range test.tags {
			actual, ok := tags[k]
			if !ok {
				t.Errorf("Expected key: %s not found", k)
			}
			if actual != v {
				t.Errorf("Expected %s, got %s", v, actual)
			}
		}
	}
}

// Test that measurements with the same name, but different tags, are treated
// as different outputs
func TestParse_MeasurementsWithSameName(t *testing.T) {
	s := NewStatsd()

	// Test that counters work
	valid_lines := []string{
		"test.counter,host=localhost:1|c",
		"test.counter,host=localhost,region=west:1|c",
	}

	for _, line := range valid_lines {
		err := s.parseStatsdLine(line)
		if err != nil {
			t.Errorf("Parsing line %s should not have resulted in an error\n", line)
		}
	}

	if len(s.counters) != 2 {
		t.Errorf("Expected 2 separate measurements, found %d", len(s.counters))
	}
}

// Test that measurements with multiple bits are treated as different outputs
// but are equal to their single-measurement representation
func TestParse_MeasurementsWithMultipleValues(t *testing.T) {
	single_lines := []string{
		"valid.multiple:0|ms|@0.1",
		"valid.multiple:0|ms|",
		"valid.multiple:1|ms",
		"valid.multiple.duplicate:1|c",
		"valid.multiple.duplicate:1|c",
		"valid.multiple.duplicate:2|c",
		"valid.multiple.duplicate:1|c",
		"valid.multiple.duplicate:1|h",
		"valid.multiple.duplicate:1|h",
		"valid.multiple.duplicate:2|h",
		"valid.multiple.duplicate:1|h",
		"valid.multiple.duplicate:1|s",
		"valid.multiple.duplicate:1|s",
		"valid.multiple.duplicate:2|s",
		"valid.multiple.duplicate:1|s",
		"valid.multiple.duplicate:1|g",
		"valid.multiple.duplicate:1|g",
		"valid.multiple.duplicate:2|g",
		"valid.multiple.duplicate:1|g",
		"valid.multiple.mixed:1|c",
		"valid.multiple.mixed:1|ms",
		"valid.multiple.mixed:2|s",
		"valid.multiple.mixed:1|g",
	}

	multiple_lines := []string{
		"valid.multiple:0|ms|@0.1:0|ms|:1|ms",
		"valid.multiple.duplicate:1|c:1|c:2|c:1|c",
		"valid.multiple.duplicate:1|h:1|h:2|h:1|h",
		"valid.multiple.duplicate:1|s:1|s:2|s:1|s",
		"valid.multiple.duplicate:1|g:1|g:2|g:1|g",
		"valid.multiple.mixed:1|c:1|ms:2|s:1|g",
	}

	s_single := NewStatsd()
	s_multiple := NewStatsd()

	for _, line := range single_lines {
		err := s_single.parseStatsdLine(line)
		if err != nil {
			t.Errorf("Parsing line %s should not have resulted in an error\n", line)
		}
	}

	for _, line := range multiple_lines {
		err := s_multiple.parseStatsdLine(line)
		if err != nil {
			t.Errorf("Parsing line %s should not have resulted in an error\n", line)
		}
	}

	if len(s_single.timings) != 3 {
		t.Errorf("Expected 3 measurements, found %d", len(s_single.timings))
	}

	if cachedtiming, ok := s_single.timings["metric_type=timingvalid_multiple"]; !ok {
		t.Errorf("Expected cached measurement with hash 'metric_type=timingvalid_multiple' not found")
	} else {
		if cachedtiming.name != "valid_multiple" {
			t.Errorf("Expected the name to be 'valid_multiple', got %s", cachedtiming.name)
		}

		// A 0 at samplerate 0.1 will add 10 values of 0,
		// a 0 with an invalid samplerate will add a single 0,
		// plus the last bit of value 1,
		// which adds up to 12 individual datapoints to be cached
		if cachedtiming.stats.n != 12 {
			t.Errorf("Expected 12 additions, got %d", cachedtiming.stats.n)
		}

		if cachedtiming.stats.upper != 1 {
			t.Errorf("Expected max input to be 1, got %f", cachedtiming.stats.upper)
		}
	}

	// Check that s_single and s_multiple computed the same stats for valid.multiple.duplicate
	if err := test_validate_set("valid_multiple_duplicate", 2, s_single.sets); err != nil {
		t.Error(err.Error())
	}

	if err := test_validate_set("valid_multiple_duplicate", 2, s_multiple.sets); err != nil {
		t.Error(err.Error())
	}

	if err := test_validate_counter("valid_multiple_duplicate", 5, s_single.counters); err != nil {
		t.Error(err.Error())
	}

	if err := test_validate_counter("valid_multiple_duplicate", 5, s_multiple.counters); err != nil {
		t.Error(err.Error())
	}

	if err := test_validate_gauge("valid_multiple_duplicate", 1, s_single.gauges); err != nil {
		t.Error(err.Error())
	}

	if err := test_validate_gauge("valid_multiple_duplicate", 1, s_multiple.gauges); err != nil {
		t.Error(err.Error())
	}

	// Check that s_single and s_multiple computed the same stats for valid.multiple.mixed
	if err := test_validate_set("valid_multiple_mixed", 1, s_single.sets); err != nil {
		t.Error(err.Error())
	}

	if err := test_validate_set("valid_multiple_mixed", 1, s_multiple.sets); err != nil {
		t.Error(err.Error())
	}

	if err := test_validate_counter("valid_multiple_mixed", 1, s_single.counters); err != nil {
		t.Error(err.Error())
	}

	if err := test_validate_counter("valid_multiple_mixed", 1, s_multiple.counters); err != nil {
		t.Error(err.Error())
	}

	if err := test_validate_gauge("valid_multiple_mixed", 1, s_single.gauges); err != nil {
		t.Error(err.Error())
	}

	if err := test_validate_gauge("valid_multiple_mixed", 1, s_multiple.gauges); err != nil {
		t.Error(err.Error())
	}
}

// Valid lines should be parsed and their values should be cached
func TestParse_ValidLines(t *testing.T) {
	s := NewStatsd()
	valid_lines := []string{
		"valid:45|c",
		"valid:45|s",
		"valid:45|g",
		"valid.timer:45|ms",
		"valid.timer:45|h",
	}

	for _, line := range valid_lines {
		err := s.parseStatsdLine(line)
		if err != nil {
			t.Errorf("Parsing line %s should not have resulted in an error\n", line)
		}
	}
}

// Tests low-level functionality of gauges
func TestParse_Gauges(t *testing.T) {
	s := NewStatsd()

	// Test that gauge +- values work
	valid_lines := []string{
		"plus.minus:100|g",
		"plus.minus:-10|g",
		"plus.minus:+30|g",
		"plus.plus:100|g",
		"plus.plus:+100|g",
		"plus.plus:+100|g",
		"minus.minus:100|g",
		"minus.minus:-100|g",
		"minus.minus:-100|g",
		"lone.plus:+100|g",
		"lone.minus:-100|g",
		"overwrite:100|g",
		"overwrite:300|g",
	}

	for _, line := range valid_lines {
		err := s.parseStatsdLine(line)
		if err != nil {
			t.Errorf("Parsing line %s should not have resulted in an error\n", line)
		}
	}

	validations := []struct {
		name  string
		value float64
	}{
		{
			"plus_minus",
			120,
		},
		{
			"plus_plus",
			300,
		},
		{
			"minus_minus",
			-100,
		},
		{
			"lone_plus",
			100,
		},
		{
			"lone_minus",
			-100,
		},
		{
			"overwrite",
			300,
		},
	}

	for _, test := range validations {
		err := test_validate_gauge(test.name, test.value, s.gauges)
		if err != nil {
			t.Error(err.Error())
		}
	}
}

// Tests low-level functionality of sets
func TestParse_Sets(t *testing.T) {
	s := NewStatsd()

	// Test that sets work
	valid_lines := []string{
		"unique.user.ids:100|s",
		"unique.user.ids:100|s",
		"unique.user.ids:100|s",
		"unique.user.ids:100|s",
		"unique.user.ids:100|s",
		"unique.user.ids:101|s",
		"unique.user.ids:102|s",
		"unique.user.ids:102|s",
		"unique.user.ids:123456789|s",
		"oneuser.id:100|s",
		"oneuser.id:100|s",
	}

	for _, line := range valid_lines {
		err := s.parseStatsdLine(line)
		if err != nil {
			t.Errorf("Parsing line %s should not have resulted in an error\n", line)
		}
	}

	validations := []struct {
		name  string
		value int64
	}{
		{
			"unique_user_ids",
			4,
		},
		{
			"oneuser_id",
			1,
		},
	}

	for _, test := range validations {
		err := test_validate_set(test.name, test.value, s.sets)
		if err != nil {
			t.Error(err.Error())
		}
	}
}

// Tests low-level functionality of counters
func TestParse_Counters(t *testing.T) {
	s := NewStatsd()

	// Test that counters work
	valid_lines := []string{
		"small.inc:1|c",
		"big.inc:100|c",
		"big.inc:1|c",
		"big.inc:100000|c",
		"big.inc:1000000|c",
		"small.inc:1|c",
		"zero.init:0|c",
		"sample.rate:1|c|@0.1",
		"sample.rate:1|c",
	}

	for _, line := range valid_lines {
		err := s.parseStatsdLine(line)
		if err != nil {
			t.Errorf("Parsing line %s should not have resulted in an error\n", line)
		}
	}

	validations := []struct {
		name  string
		value int64
	}{
		{
			"small_inc",
			2,
		},
		{
			"big_inc",
			1100101,
		},
		{
			"zero_init",
			0,
		},
		{
			"sample_rate",
			11,
		},
	}

	for _, test := range validations {
		err := test_validate_counter(test.name, test.value, s.counters)
		if err != nil {
			t.Error(err.Error())
		}
	}
}

// Tests low-level functionality of timings
func TestParse_Timings(t *testing.T) {
	s := NewStatsd()
	s.Percentiles = []int{90}
	acc := &testutil.Accumulator{}

	// Test that timings work
	valid_lines := []string{
		"test.timing:1|ms",
		"test.timing:1|ms",
		"test.timing:1|ms",
		"test.timing:1|ms",
		"test.timing:1|ms",
	}

	for _, line := range valid_lines {
		err := s.parseStatsdLine(line)
		if err != nil {
			t.Errorf("Parsing line %s should not have resulted in an error\n", line)
		}
	}

	s.Gather(acc)

	tests := []struct {
		name  string
		value interface{}
	}{
		{
			"test_timing_mean",
			float64(1),
		},
		{
			"test_timing_stddev",
			float64(0),
		},
		{
			"test_timing_upper",
			float64(1),
		},
		{
			"test_timing_lower",
			float64(1),
		},
		{
			"test_timing_count",
			int64(5),
		},
		{
			"test_timing_percentile_90",
			float64(1),
		},
	}

	for _, test := range tests {
		acc.AssertContainsFields(t, test.name,
			map[string]interface{}{"value": test.value})
	}
}

// Tests the delete_timings option
func TestParse_Timings_Delete(t *testing.T) {
	s := NewStatsd()
	s.DeleteTimings = true
	fakeacc := &testutil.Accumulator{}
	var err error

	line := "timing:100|ms"
	err = s.parseStatsdLine(line)
	if err != nil {
		t.Errorf("Parsing line %s should not have resulted in an error\n", line)
	}

	if len(s.timings) != 1 {
		t.Errorf("Should be 1 timing, found %d", len(s.timings))
	}

	s.Gather(fakeacc)

	if len(s.timings) != 0 {
		t.Errorf("All timings should have been deleted, found %d", len(s.timings))
	}
}

// Tests the delete_gauges option
func TestParse_Gauges_Delete(t *testing.T) {
	s := NewStatsd()
	s.DeleteGauges = true
	fakeacc := &testutil.Accumulator{}
	var err error

	line := "current.users:100|g"
	err = s.parseStatsdLine(line)
	if err != nil {
		t.Errorf("Parsing line %s should not have resulted in an error\n", line)
	}

	err = test_validate_gauge("current_users", 100, s.gauges)
	if err != nil {
		t.Error(err.Error())
	}

	s.Gather(fakeacc)

	err = test_validate_gauge("current_users", 100, s.gauges)
	if err == nil {
		t.Error("current_users_gauge metric should have been deleted")
	}
}

// Tests the delete_sets option
func TestParse_Sets_Delete(t *testing.T) {
	s := NewStatsd()
	s.DeleteSets = true
	fakeacc := &testutil.Accumulator{}
	var err error

	line := "unique.user.ids:100|s"
	err = s.parseStatsdLine(line)
	if err != nil {
		t.Errorf("Parsing line %s should not have resulted in an error\n", line)
	}

	err = test_validate_set("unique_user_ids", 1, s.sets)
	if err != nil {
		t.Error(err.Error())
	}

	s.Gather(fakeacc)

	err = test_validate_set("unique_user_ids", 1, s.sets)
	if err == nil {
		t.Error("unique_user_ids_set metric should have been deleted")
	}
}

// Tests the delete_counters option
func TestParse_Counters_Delete(t *testing.T) {
	s := NewStatsd()
	s.DeleteCounters = true
	fakeacc := &testutil.Accumulator{}
	var err error

	line := "total.users:100|c"
	err = s.parseStatsdLine(line)
	if err != nil {
		t.Errorf("Parsing line %s should not have resulted in an error\n", line)
	}

	err = test_validate_counter("total_users", 100, s.counters)
	if err != nil {
		t.Error(err.Error())
	}

	s.Gather(fakeacc)

	err = test_validate_counter("total_users", 100, s.counters)
	if err == nil {
		t.Error("total_users_counter metric should have been deleted")
	}
}

func TestParseKeyValue(t *testing.T) {
	k, v := parseKeyValue("foo=bar")
	if k != "foo" {
		t.Errorf("Expected %s, got %s", "foo", k)
	}
	if v != "bar" {
		t.Errorf("Expected %s, got %s", "bar", v)
	}

	k2, v2 := parseKeyValue("baz")
	if k2 != "" {
		t.Errorf("Expected %s, got %s", "", k2)
	}
	if v2 != "baz" {
		t.Errorf("Expected %s, got %s", "baz", v2)
	}
}

// Test utility functions

func test_validate_set(
	name string,
	value int64,
	cache map[string]cachedset,
) error {
	var metric cachedset
	var found bool
	for _, v := range cache {
		if v.name == name {
			metric = v
			found = true
			break
		}
	}
	if !found {
		return fmt.Errorf("Test Error: Metric name %s not found\n", name)
	}

	if value != int64(len(metric.set)) {
		return fmt.Errorf("Measurement: %s, expected %d, actual %d\n",
			name, value, len(metric.set))
	}
	return nil
}

func test_validate_counter(
	name string,
	value int64,
	cache map[string]cachedcounter,
) error {
	var metric cachedcounter
	var found bool
	for _, v := range cache {
		if v.name == name {
			metric = v
			found = true
			break
		}
	}
	if !found {
		return fmt.Errorf("Test Error: Metric name %s not found\n", name)
	}

	if value != metric.value {
		return fmt.Errorf("Measurement: %s, expected %d, actual %d\n",
			name, value, metric.value)
	}
	return nil
}

func test_validate_gauge(
	name string,
	value float64,
	cache map[string]cachedgauge,
) error {
	var metric cachedgauge
	var found bool
	for _, v := range cache {
		if v.name == name {
			metric = v
			found = true
			break
		}
	}
	if !found {
		return fmt.Errorf("Test Error: Metric name %s not found\n", name)
	}

	if value != metric.value {
		return fmt.Errorf("Measurement: %s, expected %f, actual %f\n",
			name, value, metric.value)
	}
	return nil
}
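
The tests above run with the standard Go tooling, e.g. `go test` from this
plugin's directory.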