Added plugin to read Windows performance counters

closes #575
Rune Darrud 2016-01-22 23:02:21 +01:00 committed by Cameron Sparr
parent 10c4e4f63f
commit f088dd7e00
8 changed files with 1122 additions and 0 deletions


@@ -3,6 +3,7 @@
### Release Notes
### Features
- [#575](https://github.com/influxdata/telegraf/pull/575): Support for collecting Windows Performance Counters. Thanks @TheFlyingCorpse!
- [#564](https://github.com/influxdata/telegraf/issues/564): Features to simplify plugin writing. Internal metric data type.
- [#603](https://github.com/influxdata/telegraf/pull/603): Aggregate statsd timing measurements into fields. Thanks @marcinbunsch!
- [#601](https://github.com/influxdata/telegraf/issues/601): Warn when overwriting cached metrics.

Godeps

@@ -30,6 +30,7 @@ github.com/influxdb/influxdb 697f48b4e62e514e701ffec39978b864a3c666e6
github.com/jmespath/go-jmespath c01cf91b011868172fdcd9f41838e80c9d716264
github.com/klauspost/crc32 999f3125931f6557b991b2f8472172bdfa578d38
github.com/lib/pq 8ad2b298cadd691a77015666a5372eae5dbfac8f
github.com/lxn/win 9a7734ea4db26bc593d52f6a8a957afdad39c5c1
github.com/matttproud/golang_protobuf_extensions d0c3fe89de86839aecf2e0579c40ba3bb336a453
github.com/mreiferson/go-snappystream 028eae7ab5c4c9e2d1cb4c4ca1e53259bbe7e504
github.com/naoina/go-stringutil 6b638e95a32d0c1131db0e7fe83775cbea4a0d0b


@@ -177,6 +177,7 @@ Currently implemented sources:
* zookeeper
* sensors
* snmp
* win_perf_counters (windows performance counters)
* system
* cpu
* mem


@@ -39,6 +39,7 @@ import (
_ "github.com/influxdata/telegraf/plugins/inputs/system"
_ "github.com/influxdata/telegraf/plugins/inputs/trig"
_ "github.com/influxdata/telegraf/plugins/inputs/twemproxy"
_ "github.com/influxdata/telegraf/plugins/inputs/win_perf_counters"
_ "github.com/influxdata/telegraf/plugins/inputs/zfs"
_ "github.com/influxdata/telegraf/plugins/inputs/zookeeper"
)


@@ -0,0 +1,303 @@
# win_perf_counters readme
When Telegraf starts, this plugin is handed its configuration section.
The configuration is parsed and then tested for validity, such as
whether the Object, Instance and Counter exist.
Combinations that do not match at startup will not be fetched.
The exception is when you query for all instances with "*".
By default the plugin does not return _Total
when querying for all instances (*), as this is redundant.
## Basics
The examples in this file have been collected from the internet
and cover counters commonly used when performance monitoring
Active Directory and IIS in particular.
There are a lot of other good objects to monitor, if you know what to look for.
This file is likely to be updated in the future with more example
configurations for separate scenarios.
### Entry
A new configuration entry starts with the TOML header
`[[inputs.win_perf_counters.object]]`.
This must come beneath the main win_perf_counters entry,
`[[inputs.win_perf_counters]]`, and before the configuration of other plugins.
Following this are three required key/value pairs and three optional parameters, described below.
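A minimal entry combining the main header with one object might look as follows (the counter and measurement names here are illustrative, not a recommendation):

```
[[inputs.win_perf_counters]]
  # Main plugin entry; object entries follow beneath it.

  [[inputs.win_perf_counters.object]]
    # Required keys:
    ObjectName = "Processor"
    Instances = ["*"]
    Counters = ["% Processor Time"]
    # Optional keys:
    Measurement = "win_cpu"
    IncludeTotal = false
    WarnOnMissing = true
```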
### ObjectName
**Required**
ObjectName is the Object to query for, like Processor, DirectoryServices, LogicalDisk or similar.
Example: `ObjectName = "LogicalDisk"`
### Instances
**Required**
Instances (an array) is the list of instances of a counter you would like returned;
it can be one or more values.
Example: `Instances = ["C:","D:","E:"]` will return results only for the instances
C:, D: and E: where relevant. To get all instances of a Counter, use `["*"]` only.
By default any results containing _Total are stripped,
unless _Total is explicitly specified as a wanted instance.
Alternatively, see the IncludeTotal option below.
Some Objects do not have instances to select from at all.
In that case, the only way to get data back is to specify
`Instances = ["------"]`.
### Counters
**Required**
Counters (an array) is the list of counters of the ObjectName
you would like returned; it can also be one or more values.
Example: `Counters = ["% Idle Time", "% Disk Read Time", "% Disk Write Time"]`
Every counter you want results for must be specified;
it is not possible to ask for all counters of an ObjectName.
### Measurement
*Optional*
This key is optional; if it is not set, it defaults to win_perf_counters.
In InfluxDB this is the measurement name under which the returned data is stored,
so to keep your data well organized,
this is a good key to set for storing your IIS and Disk results
separately from your Processor results.
Example: `Measurement = "win_disk"`
### IncludeTotal
*Optional*
This key is optional; it is a simple bool.
If it is not included or not set to true, it is treated as false.
This key only has an effect if Instances is set to "*"
and you would also like all instances containing _Total returned,
like "_Total", "0,_Total" and so on where applicable
(Processor Information is one example).
### WarnOnMissing
*Optional*
This key is optional; it is a simple bool.
If it is not included or not set to true, it is treated as false.
This only has an effect on the first execution of the plugin:
it will print out any ObjectName/Instance/Counter combinations
asked for that do not match. Useful when debugging new configurations.
### FailOnMissing
*Internal*
This key should not be used; it is for testing purposes only.
It is a simple bool; if it is not included or not set to true, it is treated as false.
If it is set to true, the plugin will abort and end prematurely
if any of the ObjectName/Instances/Counters combinations are invalid.
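Internally, each ObjectName/Instance/Counter combination is joined into a PDH counter path of the form `\Object(Instance)\Counter` before validation, with the `(Instance)` segment dropped when the instance is `"------"`. A minimal sketch of that path construction (the `counterPath` helper name is illustrative, not part of the plugin):

```go
package main

import "fmt"

// counterPath mirrors how the plugin builds a PDH query string:
// the "(Instance)" segment is omitted when the instance-less
// placeholder "------" is used.
func counterPath(object, instance, counter string) string {
	if instance == "------" {
		return "\\" + object + "\\" + counter
	}
	return "\\" + object + "(" + instance + ")\\" + counter
}

func main() {
	fmt.Println(counterPath("LogicalDisk", "C:", "% Idle Time"))         // \LogicalDisk(C:)\% Idle Time
	fmt.Println(counterPath("System", "------", "Context Switches/sec")) // \System\Context Switches/sec
}
```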
## Examples
### Generic Queries
```
[[inputs.win_perf_counters.object]]
# Processor usage, alternative to native, reports on a per core.
ObjectName = "Processor"
Instances = ["*"]
Counters = ["% Idle Time", "% Interrupt Time", "% Privileged Time", "% User Time", "% Processor Time"]
Measurement = "win_cpu"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
# Disk times and queues
ObjectName = "LogicalDisk"
Instances = ["*"]
Counters = ["% Idle Time", "% Disk Time","% Disk Read Time", "% Disk Write Time", "% User Time", "Current Disk Queue Length"]
Measurement = "win_disk"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
ObjectName = "System"
Counters = ["Context Switches/sec","System Calls/sec"]
Instances = ["------"]
Measurement = "win_system"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
# Example query where the Instance portion must be removed to get data back, such as from the Memory object.
ObjectName = "Memory"
Counters = ["Available Bytes","Cache Faults/sec","Demand Zero Faults/sec","Page Faults/sec","Pages/sec","Transition Faults/sec","Pool Nonpaged Bytes","Pool Paged Bytes"]
Instances = ["------"] # Use 6 x - to remove the Instance bit from the query.
Measurement = "win_mem"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
```
### Active Directory Domain Controller
```
[[inputs.win_perf_counters.object]]
ObjectName = "DirectoryServices"
Instances = ["*"]
Counters = ["Base Searches/sec","Database adds/sec","Database deletes/sec","Database modifys/sec","Database recycles/sec","LDAP Client Sessions","LDAP Searches/sec","LDAP Writes/sec"]
Measurement = "win_ad" # Set an alternative measurement to win_perf_counters if wanted.
#Instances = [""] # Gathers all instances by default, specify to only gather these
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
ObjectName = "Security System-Wide Statistics"
Instances = ["*"]
Counters = ["NTLM Authentications","Kerberos Authentications","Digest Authentications"]
Measurement = "win_ad"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
ObjectName = "Database"
Instances = ["*"]
Counters = ["Database Cache % Hit","Database Cache Page Fault Stalls/sec","Database Cache Page Faults/sec","Database Cache Size"]
Measurement = "win_db"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
```
### DFS Namespace + Domain Controllers
```
[[inputs.win_perf_counters.object]]
# AD, DFS N, Useful if the server hosts a DFS Namespace or is a Domain Controller
ObjectName = "DFS Namespace Service Referrals"
Instances = ["*"]
Counters = ["Requests Processed","Requests Failed","Avg. Response Time"]
Measurement = "win_dfsn"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
#WarnOnMissing = false # Print out when the performance counter is missing, either of object, counter or instance.
```
### DFS Replication + Domain Controllers
```
[[inputs.win_perf_counters.object]]
# AD, DFS R, Useful if the server hosts a DFS Replication folder or is a Domain Controller
ObjectName = "DFS Replication Service Volumes"
Instances = ["*"]
Counters = ["Data Lookups","Database Commits"]
Measurement = "win_dfsr"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
#WarnOnMissing = false # Print out when the performance counter is missing, either of object, counter or instance.
```
### DNS Server + Domain Controllers
```
[[inputs.win_perf_counters.object]]
ObjectName = "DNS"
Counters = ["Dynamic Update Received","Dynamic Update Rejected","Recursive Queries","Recursive Queries Failure","Secure Update Failure","Secure Update Received","TCP Query Received","TCP Response Sent","UDP Query Received","UDP Response Sent","Total Query Received","Total Response Sent"]
Instances = ["------"]
Measurement = "win_dns"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
```
### IIS / ASP.NET
```
[[inputs.win_perf_counters.object]]
# HTTP Service request queues in the Kernel before being handed over to User Mode.
ObjectName = "HTTP Service Request Queues"
Instances = ["*"]
Counters = ["CurrentQueueSize","RejectedRequests"]
Measurement = "win_http_queues"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
# IIS, ASP.NET Applications
ObjectName = "ASP.NET Applications"
Counters = ["Cache Total Entries","Cache Total Hit Ratio","Cache Total Turnover Rate","Output Cache Entries","Output Cache Hits","Output Cache Hit Ratio","Output Cache Turnover Rate","Compilations Total","Errors Total/Sec","Pipeline Instance Count","Requests Executing","Requests in Application Queue","Requests/Sec"]
Instances = ["*"]
Measurement = "win_aspnet_app"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
# IIS, ASP.NET
ObjectName = "ASP.NET"
Counters = ["Application Restarts","Request Wait Time","Requests Current","Requests Queued","Requests Rejected"]
Instances = ["*"]
Measurement = "win_aspnet"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
# IIS, Web Service
ObjectName = "Web Service"
Counters = ["Get Requests/sec","Post Requests/sec","Connection Attempts/sec","Current Connections","ISAPI Extension Requests/sec"]
Instances = ["*"]
Measurement = "win_websvc"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
# Web Service Cache / IIS
ObjectName = "Web Service Cache"
Counters = ["URI Cache Hits %","Kernel: URI Cache Hits %","File Cache Hits %"]
Instances = ["*"]
Measurement = "win_websvc_cache"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
```
### Process
```
[[inputs.win_perf_counters.object]]
# Process metrics, in this case for IIS only
ObjectName = "Process"
Counters = ["% Processor Time","Handle Count","Private Bytes","Thread Count","Virtual Bytes","Working Set"]
Instances = ["w3wp"]
Measurement = "win_proc"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
```
### .NET Monitoring
```
[[inputs.win_perf_counters.object]]
# .NET CLR Exceptions, in this case for IIS only
ObjectName = ".NET CLR Exceptions"
Counters = ["# of Exceps Thrown / sec"]
Instances = ["w3wp"]
Measurement = "win_dotnet_exceptions"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
# .NET CLR Jit, in this case for IIS only
ObjectName = ".NET CLR Jit"
Counters = ["% Time in Jit","IL Bytes Jitted / sec"]
Instances = ["w3wp"]
Measurement = "win_dotnet_jit"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
# .NET CLR Loading, in this case for IIS only
ObjectName = ".NET CLR Loading"
Counters = ["% Time Loading"]
Instances = ["w3wp"]
Measurement = "win_dotnet_loading"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
# .NET CLR LocksAndThreads, in this case for IIS only
ObjectName = ".NET CLR LocksAndThreads"
Counters = ["# of current logical Threads","# of current physical Threads","# of current recognized threads","# of total recognized threads","Queue Length / sec","Total # of Contentions","Current Queue Length"]
Instances = ["w3wp"]
Measurement = "win_dotnet_locks"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
# .NET CLR Memory, in this case for IIS only
ObjectName = ".NET CLR Memory"
Counters = ["% Time in GC","# Bytes in all Heaps","# Gen 0 Collections","# Gen 1 Collections","# Gen 2 Collections","# Induced GC","Allocated Bytes/sec","Finalization Survivors","Gen 0 heap size","Gen 1 heap size","Gen 2 heap size","Large Object Heap size","# of Pinned Objects"]
Instances = ["w3wp"]
Measurement = "win_dotnet_mem"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
# .NET CLR Security, in this case for IIS only
ObjectName = ".NET CLR Security"
Counters = ["% Time in RT checks","Stack Walk Depth","Total Runtime Checks"]
Instances = ["w3wp"]
Measurement = "win_dotnet_security"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
```


@@ -0,0 +1,335 @@
// +build windows
package win_perf_counters
import (
"errors"
"fmt"
"strings"
"syscall"
"unsafe"
"os"
"os/signal"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/lxn/win"
)
var sampleConfig string = `
# By default this plugin returns basic CPU and Disk statistics.
# See the README file for more examples.
# Uncomment examples below or write your own as you see fit. If the system
# being polled for data does not have the Object at startup of the Telegraf
# agent, it will not be gathered.
# Settings:
# PrintValid = false # Print All matching performance counters
[[inputs.win_perf_counters.object]]
# Processor usage, alternative to native, reports on a per core.
ObjectName = "Processor"
Instances = ["*"]
Counters = [
"% Idle Time", "% Interrupt Time",
"% Privileged Time", "% User Time",
"% Processor Time"
]
Measurement = "win_cpu"
# Set to true to include _Total instance when querying for all (*).
# IncludeTotal=false
# Print out when the performance counter is missing from object, counter or instance.
# WarnOnMissing = false
[[inputs.win_perf_counters.object]]
# Disk times and queues
ObjectName = "LogicalDisk"
Instances = ["*"]
Counters = [
"% Idle Time", "% Disk Time","% Disk Read Time",
"% Disk Write Time", "% User Time", "Current Disk Queue Length"
]
Measurement = "win_disk"
[[inputs.win_perf_counters.object]]
ObjectName = "System"
Counters = ["Context Switches/sec","System Calls/sec"]
Instances = ["------"]
Measurement = "win_system"
[[inputs.win_perf_counters.object]]
# Example query where the Instance portion must be removed to get data back,
# such as from the Memory object.
ObjectName = "Memory"
Counters = [
"Available Bytes", "Cache Faults/sec", "Demand Zero Faults/sec",
"Page Faults/sec", "Pages/sec", "Transition Faults/sec",
"Pool Nonpaged Bytes", "Pool Paged Bytes"
]
Instances = ["------"] # Use 6 x - to remove the Instance bit from the query.
Measurement = "win_mem"
`
// Valid queries end up in this map.
var gItemList = make(map[int]*item)
var configParsed bool
var testConfigParsed bool
var testObject string
type Win_PerfCounters struct {
PrintValid bool
TestName string
Object []perfobject
}
type perfobject struct {
ObjectName string
Counters []string
Instances []string
Measurement string
WarnOnMissing bool
FailOnMissing bool
IncludeTotal bool
}
// Parsed configuration ends up here after being validated as referring to
// valid Performance Counter paths.
type itemList struct {
items map[int]*item
}
type item struct {
query string
objectName string
counter string
instance string
measurement string
include_total bool
handle win.PDH_HQUERY
counterHandle win.PDH_HCOUNTER
}
func (m *Win_PerfCounters) AddItem(metrics *itemList, query string, objectName string, counter string, instance string,
measurement string, include_total bool) {
var handle win.PDH_HQUERY
var counterHandle win.PDH_HCOUNTER
ret := win.PdhOpenQuery(0, 0, &handle)
// Error codes are ignored here; the counter path was already validated
// with PdhValidatePath before AddItem is called.
ret = win.PdhAddEnglishCounter(handle, query, 0, &counterHandle)
_ = ret
temp := &item{query, objectName, counter, instance, measurement,
include_total, handle, counterHandle}
index := len(gItemList)
gItemList[index] = temp
if metrics.items == nil {
metrics.items = make(map[int]*item)
}
metrics.items[index] = temp
}
func (m *Win_PerfCounters) InvalidObject(exists uint32, query string, PerfObject perfobject, instance string, counter string) error {
if exists == 3221228472 { // win.PDH_CSTATUS_NO_OBJECT
if PerfObject.FailOnMissing {
err := errors.New("Performance object does not exist")
return err
} else if PerfObject.WarnOnMissing {
fmt.Printf("Performance Object '%s' does not exist in query: %s\n", PerfObject.ObjectName, query)
}
} else if exists == 3221228473 { //win.PDH_CSTATUS_NO_COUNTER
if PerfObject.FailOnMissing {
err := errors.New("Counter in Performance object does not exist")
return err
} else if PerfObject.WarnOnMissing {
fmt.Printf("Counter '%s' does not exist in query: %s\n", counter, query)
}
} else if exists == 2147485649 { //win.PDH_CSTATUS_NO_INSTANCE
if PerfObject.FailOnMissing {
err := errors.New("Instance in Performance object does not exist")
return err
} else if PerfObject.WarnOnMissing {
fmt.Printf("Instance '%s' does not exist in query: %s\n", instance, query)
}
} else {
fmt.Printf("Invalid result: %v, query: %s\n", exists, query)
if PerfObject.FailOnMissing {
err := errors.New("Invalid query for Performance Counters")
return err
}
}
return nil
}
func (m *Win_PerfCounters) Description() string {
return "Input plugin to query Performance Counters on Windows operating systems"
}
func (m *Win_PerfCounters) SampleConfig() string {
return sampleConfig
}
func (m *Win_PerfCounters) ParseConfig(metrics *itemList) error {
var query string
configParsed = true
if len(m.Object) > 0 {
for _, PerfObject := range m.Object {
for _, counter := range PerfObject.Counters {
for _, instance := range PerfObject.Instances {
objectname := PerfObject.ObjectName
if instance == "------" {
query = "\\" + objectname + "\\" + counter
} else {
query = "\\" + objectname + "(" + instance + ")\\" + counter
}
var exists uint32 = win.PdhValidatePath(query)
if exists == win.ERROR_SUCCESS {
if m.PrintValid {
fmt.Printf("Valid: %s\n", query)
}
m.AddItem(metrics, query, objectname, counter, instance,
PerfObject.Measurement, PerfObject.IncludeTotal)
} else {
err := m.InvalidObject(exists, query, PerfObject, instance, counter)
return err
}
}
}
}
return nil
} else {
err := errors.New("No performance objects configured!")
return err
}
}
func (m *Win_PerfCounters) Cleanup(metrics *itemList) {
// Cleanup
for _, metric := range metrics.items {
ret := win.PdhCloseQuery(metric.handle)
_ = ret
}
}
func (m *Win_PerfCounters) CleanupTestMode() {
// Cleanup for the testmode.
for _, metric := range gItemList {
ret := win.PdhCloseQuery(metric.handle)
_ = ret
}
}
func (m *Win_PerfCounters) Gather(acc telegraf.Accumulator) error {
metrics := itemList{}
// TestName and testObject are both empty in normal (non-test) use.
if m.TestName != testObject {
// Cleanup any handles before emptying the global variable containing valid queries.
m.CleanupTestMode()
gItemList = make(map[int]*item)
testObject = m.TestName
testConfigParsed = true
configParsed = false
}
// We only need to parse the config during the init, it uses the global variable after.
if !configParsed {
err := m.ParseConfig(&metrics)
if err != nil {
return err
}
}
// When interrupt or terminate is called.
c := make(chan os.Signal, 1)
signal.Notify(c, os.Interrupt)
signal.Notify(c, syscall.SIGTERM)
go func() {
<-c
m.Cleanup(&metrics)
}()
var bufSize uint32
var bufCount uint32
var size uint32 = uint32(unsafe.Sizeof(win.PDH_FMT_COUNTERVALUE_ITEM_DOUBLE{}))
var emptyBuf [1]win.PDH_FMT_COUNTERVALUE_ITEM_DOUBLE // need at least 1 addressable null ptr.
// Iterate over the known metrics and collect their samples.
for _, metric := range gItemList {
// collect
ret := win.PdhCollectQueryData(metric.handle)
if ret == win.ERROR_SUCCESS {
ret = win.PdhGetFormattedCounterArrayDouble(metric.counterHandle, &bufSize,
&bufCount, &emptyBuf[0]) // uses null ptr here according to MSDN.
if ret == win.PDH_MORE_DATA {
filledBuf := make([]win.PDH_FMT_COUNTERVALUE_ITEM_DOUBLE, bufCount*size)
ret = win.PdhGetFormattedCounterArrayDouble(metric.counterHandle,
&bufSize, &bufCount, &filledBuf[0])
for i := 0; i < int(bufCount); i++ {
c := filledBuf[i]
var s string = win.UTF16PtrToString(c.SzName)
var add bool
if metric.include_total {
// If IncludeTotal is set, include all.
add = true
} else if metric.instance == "*" && !strings.Contains(s, "_Total") {
// Catch if set to * and that it is not a '*_Total*' instance.
add = true
} else if metric.instance == s {
// Catch an exact match on the instance name.
add = true
} else if metric.instance == "------" {
add = true
}
if add {
fields := make(map[string]interface{})
tags := make(map[string]string)
if s != "" {
tags["instance"] = s
}
tags["objectname"] = metric.objectName
fields[metric.counter] = float32(c.FmtValue.DoubleValue)
var measurement string
if metric.measurement == "" {
measurement = "win_perf_counters"
} else {
measurement = metric.measurement
}
acc.AddFields(measurement, fields, tags)
}
}
filledBuf = nil
// Reset bufSize and bufCount to zero; otherwise the next call will not
// return PDH_MORE_DATA and will not set bufSize.
bufCount = 0
bufSize = 0
}
}
}
return nil
}
func init() {
inputs.Add("win_perf_counters", func() telegraf.Input { return &Win_PerfCounters{} })
}
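The Gather loop above decides, per returned instance, whether to emit a point: IncludeTotal admits everything, a `"*"` wildcard drops `_Total`-style instances, and otherwise an exact match or the `"------"` placeholder is required. That selection logic can be isolated as a pure function; a sketch with an illustrative name, not part of the plugin:

```go
package main

import (
	"fmt"
	"strings"
)

// shouldAdd mirrors the instance-filtering rules in Gather:
// include everything when includeTotal is set; for wildcard queries,
// drop any instance containing "_Total"; otherwise require an exact
// match or the instance-less placeholder "------".
func shouldAdd(configured, returned string, includeTotal bool) bool {
	switch {
	case includeTotal:
		return true
	case configured == "*" && !strings.Contains(returned, "_Total"):
		return true
	case configured == returned:
		return true
	case configured == "------":
		return true
	}
	return false
}

func main() {
	fmt.Println(shouldAdd("*", "0", false))      // true: wildcard, not a _Total instance
	fmt.Println(shouldAdd("*", "_Total", false)) // false: _Total stripped by default
	fmt.Println(shouldAdd("*", "_Total", true))  // true: IncludeTotal overrides
	fmt.Println(shouldAdd("C:", "C:", false))    // true: exact instance match
}
```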


@@ -0,0 +1,3 @@
// +build !windows
package win_perf_counters


@@ -0,0 +1,477 @@
// +build windows
package win_perf_counters
import (
"errors"
"testing"
"time"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/require"
)
func TestWinPerfcountersConfigGet1(t *testing.T) {
validmetrics := itemList{}
var instances = make([]string, 1)
var counters = make([]string, 1)
var perfobjects = make([]perfobject, 1)
objectname := "Processor Information"
instances[0] = "_Total"
counters[0] = "% Processor Time"
var measurement string = "test"
var warnonmissing bool = false
var failonmissing bool = true
var includetotal bool = false
PerfObject := perfobject{
ObjectName: objectname,
Instances: instances,
Counters: counters,
Measurement: measurement,
WarnOnMissing: warnonmissing,
FailOnMissing: failonmissing,
IncludeTotal: includetotal,
}
perfobjects[0] = PerfObject
m := Win_PerfCounters{PrintValid: false, TestName: "ConfigGet1", Object: perfobjects}
err := m.ParseConfig(&validmetrics)
require.NoError(t, err)
}
func TestWinPerfcountersConfigGet2(t *testing.T) {
metrics := itemList{}
var instances = make([]string, 1)
var counters = make([]string, 1)
var perfobjects = make([]perfobject, 1)
objectname := "Processor Information"
instances[0] = "_Total"
counters[0] = "% Processor Time"
var measurement string = "test"
var warnonmissing bool = false
var failonmissing bool = true
var includetotal bool = false
PerfObject := perfobject{
ObjectName: objectname,
Instances: instances,
Counters: counters,
Measurement: measurement,
WarnOnMissing: warnonmissing,
FailOnMissing: failonmissing,
IncludeTotal: includetotal,
}
perfobjects[0] = PerfObject
m := Win_PerfCounters{PrintValid: false, TestName: "ConfigGet2", Object: perfobjects}
err := m.ParseConfig(&metrics)
require.NoError(t, err)
require.Equal(t, 1, len(metrics.items),
"unexpected number of results returned from the query")
}
func TestWinPerfcountersConfigGet3(t *testing.T) {
metrics := itemList{}
var instances = make([]string, 1)
var counters = make([]string, 2)
var perfobjects = make([]perfobject, 1)
objectname := "Processor Information"
instances[0] = "_Total"
counters[0] = "% Processor Time"
counters[1] = "% Idle Time"
var measurement string = "test"
var warnonmissing bool = false
var failonmissing bool = true
var includetotal bool = false
PerfObject := perfobject{
ObjectName: objectname,
Instances: instances,
Counters: counters,
Measurement: measurement,
WarnOnMissing: warnonmissing,
FailOnMissing: failonmissing,
IncludeTotal: includetotal,
}
perfobjects[0] = PerfObject
m := Win_PerfCounters{PrintValid: false, TestName: "ConfigGet3", Object: perfobjects}
err := m.ParseConfig(&metrics)
require.NoError(t, err)
require.Equal(t, 2, len(metrics.items),
"unexpected number of results returned from the query")
}
func TestWinPerfcountersConfigGet4(t *testing.T) {
metrics := itemList{}
var instances = make([]string, 2)
var counters = make([]string, 1)
var perfobjects = make([]perfobject, 1)
objectname := "Processor Information"
instances[0] = "_Total"
instances[1] = "0"
counters[0] = "% Processor Time"
var measurement string = "test"
var warnonmissing bool = false
var failonmissing bool = true
var includetotal bool = false
PerfObject := perfobject{
ObjectName: objectname,
Instances: instances,
Counters: counters,
Measurement: measurement,
WarnOnMissing: warnonmissing,
FailOnMissing: failonmissing,
IncludeTotal: includetotal,
}
perfobjects[0] = PerfObject
m := Win_PerfCounters{PrintValid: false, TestName: "ConfigGet4", Object: perfobjects}
err := m.ParseConfig(&metrics)
require.NoError(t, err)
require.Equal(t, 2, len(metrics.items),
"unexpected number of results returned from the query")
}
func TestWinPerfcountersConfigGet5(t *testing.T) {
metrics := itemList{}
var instances = make([]string, 2)
var counters = make([]string, 2)
var perfobjects = make([]perfobject, 1)
objectname := "Processor Information"
instances[0] = "_Total"
instances[1] = "0"
counters[0] = "% Processor Time"
counters[1] = "% Idle Time"
var measurement string = "test"
var warnonmissing bool = false
var failonmissing bool = true
var includetotal bool = false
PerfObject := perfobject{
ObjectName: objectname,
Instances: instances,
Counters: counters,
Measurement: measurement,
WarnOnMissing: warnonmissing,
FailOnMissing: failonmissing,
IncludeTotal: includetotal,
}
perfobjects[0] = PerfObject
m := Win_PerfCounters{PrintValid: false, TestName: "ConfigGet5", Object: perfobjects}
err := m.ParseConfig(&metrics)
require.NoError(t, err)
require.Equal(t, 4, len(metrics.items),
"unexpected number of results returned from the query")
}
func TestWinPerfcountersConfigGet6(t *testing.T) {
validmetrics := itemList{}
var instances = make([]string, 1)
var counters = make([]string, 1)
var perfobjects = make([]perfobject, 1)
objectname := "System"
instances[0] = "------"
counters[0] = "Context Switches/sec"
var measurement string = "test"
var warnonmissing bool = false
var failonmissing bool = true
var includetotal bool = false
PerfObject := perfobject{
ObjectName: objectname,
Instances: instances,
Counters: counters,
Measurement: measurement,
WarnOnMissing: warnonmissing,
FailOnMissing: failonmissing,
IncludeTotal: includetotal,
}
perfobjects[0] = PerfObject
m := Win_PerfCounters{PrintValid: false, TestName: "ConfigGet6", Object: perfobjects}
err := m.ParseConfig(&validmetrics)
require.NoError(t, err)
}
func TestWinPerfcountersConfigError1(t *testing.T) {
metrics := itemList{}
var instances = make([]string, 1)
var counters = make([]string, 1)
var perfobjects = make([]perfobject, 1)
objectname := "Processor InformationERROR"
instances[0] = "_Total"
counters[0] = "% Processor Time"
var measurement string = "test"
var warnonmissing bool = false
var failonmissing bool = true
var includetotal bool = false
PerfObject := perfobject{
ObjectName: objectname,
Instances: instances,
Counters: counters,
Measurement: measurement,
WarnOnMissing: warnonmissing,
FailOnMissing: failonmissing,
IncludeTotal: includetotal,
}
perfobjects[0] = PerfObject
m := Win_PerfCounters{PrintValid: false, TestName: "ConfigError1", Object: perfobjects}
err := m.ParseConfig(&metrics)
require.Error(t, err)
}
func TestWinPerfcountersConfigError2(t *testing.T) {
metrics := itemList{}
var instances = make([]string, 1)
var counters = make([]string, 1)
var perfobjects = make([]perfobject, 1)
objectname := "Processor"
instances[0] = "SuperERROR"
counters[0] = "% C1 Time"
var measurement string = "test"
var warnonmissing bool = false
var failonmissing bool = true
var includetotal bool = false
PerfObject := perfobject{
ObjectName: objectname,
Instances: instances,
Counters: counters,
Measurement: measurement,
WarnOnMissing: warnonmissing,
FailOnMissing: failonmissing,
IncludeTotal: includetotal,
}
perfobjects[0] = PerfObject
m := Win_PerfCounters{PrintValid: false, TestName: "ConfigError2", Object: perfobjects}
err := m.ParseConfig(&metrics)
require.Error(t, err)
}
func TestWinPerfcountersConfigError3(t *testing.T) {
metrics := itemList{}
var instances = make([]string, 1)
var counters = make([]string, 1)
var perfobjects = make([]perfobject, 1)
objectname := "Processor Information"
instances[0] = "_Total"
counters[0] = "% Processor TimeERROR"
var measurement string = "test"
var warnonmissing bool = false
var failonmissing bool = true
var includetotal bool = false
PerfObject := perfobject{
ObjectName: objectname,
Instances: instances,
Counters: counters,
Measurement: measurement,
WarnOnMissing: warnonmissing,
FailOnMissing: failonmissing,
IncludeTotal: includetotal,
}
perfobjects[0] = PerfObject
m := Win_PerfCounters{PrintValid: false, TestName: "ConfigError3", Object: perfobjects}
err := m.ParseConfig(&metrics)
require.Error(t, err)
}
func TestWinPerfcountersCollect1(t *testing.T) {
var instances = make([]string, 1)
var counters = make([]string, 1)
var perfobjects = make([]perfobject, 1)
objectname := "Processor Information"
instances[0] = "_Total"
counters[0] = "Parking Status"
var measurement string = "test"
var warnonmissing bool = false
var failonmissing bool = true
var includetotal bool = false
PerfObject := perfobject{
ObjectName: objectname,
Instances: instances,
Counters: counters,
Measurement: measurement,
WarnOnMissing: warnonmissing,
FailOnMissing: failonmissing,
IncludeTotal: includetotal,
}
perfobjects[0] = PerfObject
m := Win_PerfCounters{PrintValid: false, TestName: "Collect1", Object: perfobjects}
var acc testutil.Accumulator
err := m.Gather(&acc)
require.NoError(t, err)
time.Sleep(2000 * time.Millisecond)
err = m.Gather(&acc)
require.NoError(t, err)
tags := map[string]string{
"instance": instances[0],
"objectname": objectname,
}
fields := map[string]interface{}{
counters[0]: float32(0),
}
acc.AssertContainsTaggedFields(t, measurement, fields, tags)
}
func TestWinPerfcountersCollect2(t *testing.T) {
var instances = make([]string, 2)
var counters = make([]string, 1)
var perfobjects = make([]perfobject, 1)
objectname := "Processor Information"
instances[0] = "_Total"
instances[1] = "0,0"
counters[0] = "Performance Limit Flags"
var measurement string = "test"
var warnonmissing bool = false
var failonmissing bool = true
var includetotal bool = false
PerfObject := perfobject{
ObjectName: objectname,
Instances: instances,
Counters: counters,
Measurement: measurement,
WarnOnMissing: warnonmissing,
FailOnMissing: failonmissing,
IncludeTotal: includetotal,
}
perfobjects[0] = PerfObject
m := Win_PerfCounters{PrintValid: false, TestName: "Collect2", Object: perfobjects}
var acc testutil.Accumulator
err := m.Gather(&acc)
require.NoError(t, err)
time.Sleep(2000 * time.Millisecond)
err = m.Gather(&acc)
require.NoError(t, err)
tags := map[string]string{
"instance": instances[0],
"objectname": objectname,
}
fields := map[string]interface{}{
counters[0]: float32(0),
}
acc.AssertContainsTaggedFields(t, measurement, fields, tags)
tags = map[string]string{
"instance": instances[1],
"objectname": objectname,
}
fields = map[string]interface{}{
counters[0]: float32(0),
}
acc.AssertContainsTaggedFields(t, measurement, fields, tags)
}