diff options
Diffstat (limited to 'src/collectors/proc.plugin')
82 files changed, 33640 insertions, 0 deletions
diff --git a/src/collectors/proc.plugin/README.md b/src/collectors/proc.plugin/README.md new file mode 100644 index 000000000..28c204f5c --- /dev/null +++ b/src/collectors/proc.plugin/README.md @@ -0,0 +1,639 @@ +# OS provided metrics (proc.plugin) + +`proc.plugin` gathers metrics from the /proc and /sys folders in Linux systems, along with a few other endpoints, and is responsible for the bulk of the system metrics collected and visualized by Netdata. + +This plugin is not an external plugin, but one of Netdata's threads. + +In detail, it collects metrics from: + +- `/proc/net/dev` (all network interfaces for all their values) +- `/proc/diskstats` (all disks for all their values) +- `/proc/mdstat` (status of RAID arrays) +- `/proc/net/snmp` (total IPv4, TCP and UDP usage) +- `/proc/net/snmp6` (total IPv6 usage) +- `/proc/net/netstat` (more IPv4 usage) +- `/proc/net/wireless` (wireless extension) +- `/proc/net/stat/nf_conntrack` (connection tracking performance) +- `/proc/net/stat/synproxy` (synproxy performance) +- `/proc/net/ip_vs/stats` (IPVS connection statistics) +- `/proc/stat` (CPU utilization and attributes) +- `/proc/meminfo` (memory information) +- `/proc/vmstat` (system performance) +- `/proc/net/rpc/nfsd` (NFS server statistics for both v3 and v4 NFS servers) +- `/sys/fs/cgroup` (Control Groups - Linux Containers) +- `/proc/self/mountinfo` (mount points) +- `/proc/interrupts` (total and per core hardware interrupts) +- `/proc/softirqs` (total and per core software interrupts) +- `/proc/loadavg` (system load and total processes running) +- `/proc/pressure/{cpu,memory,io}` (pressure stall information) +- `/proc/sys/kernel/random/entropy_avail` (random numbers pool availability - used in cryptography) +- `/proc/spl/kstat/zfs/arcstats` (status of ZFS adaptive replacement cache) +- `/proc/spl/kstat/zfs/pool/state` (state of ZFS pools) +- `/sys/class/power_supply` (power supply properties) +- `/sys/class/infiniband` (infiniband interconnect) +- 
`/sys/class/drm` (AMD GPUs) +- `ipc` (IPC semaphores and message queues) +- `ksm` Kernel Same-Page Merging performance (several files under `/sys/kernel/mm/ksm`). +- `netdata` (internal Netdata resources utilization) + +- - - + +## Monitoring Disks + +> Live demo of disk monitoring at: **[http://london.netdata.rocks](https://registry.my-netdata.io/#menu_disk)** + +Performance monitoring for Linux disks is quite complicated. The main reason is the plethora of disk technologies available. There are many different hardware disk technologies, but there are even more **virtual disk** technologies that can provide additional storage features. + +Fortunately, the Linux kernel provides many metrics that offer deep insight into what our disks are doing. The kernel measures all these metrics on all layers of storage: **virtual disks**, **physical disks** and **partitions of disks**. + +### Monitored disk metrics + +- **I/O bandwidth/s (kb/s)** + The amount of data transferred from and to the disk. +- **Amount of discarded data (kb/s)** +- **I/O operations/s** + The number of I/O operations completed. +- **Extended I/O operations/s** + The number of extended I/O operations completed. +- **Queued I/O operations** + The number of currently queued I/O operations. For traditional disks that execute commands one after another, one of them is being run by the disk and the rest are just waiting in a queue. +- **Backlog size (time in ms)** + The expected duration of the currently queued I/O operations. +- **Utilization (time percentage)** + The percentage of time the disk was busy with something. This is a very interesting metric, since for most disks, which execute commands sequentially, **this is the key indication of congestion**. A sequential disk that is busy 100% of the available time has no time to do anything more, so even if the bandwidth or the number of operations executed by the disk is low, its capacity has been reached.
+ Of course, for newer disk technologies (like fusion cards) that are capable of executing multiple commands in parallel, this metric is meaningless. +- **Average I/O operation time (ms)** + The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. +- **Average I/O operation time for extended operations (ms)** + The average time for extended I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them. +- **Average I/O operation size (kb)** + The average amount of data of the completed I/O operations. +- **Average amount of discarded data (kb)** + The average amount of data of the completed discard operations. +- **Average Service Time (ms)** + The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk executes multiple operations in parallel, the reported average service time will be misleading. +- **Average Service Time for extended I/O operations (ms)** + The average service time for completed extended I/O operations. +- **Merged I/O operations/s** + The Linux kernel is capable of merging I/O operations. So, if two requests to read data from the disk are adjacent, the Linux kernel may merge them into one before issuing them to the disk. This metric measures the number of operations that have been merged by the Linux kernel. +- **Merged discard operations/s** +- **Total I/O time** + The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute multiple I/O operations in parallel. +- **Space usage** + For mounted disks, Netdata will provide a chart for their space, with 3 dimensions: + 1. free + 2. used + 3.
reserved for root +- **inode usage** + For mounted disks, Netdata will provide a chart for their inodes (number of files and directories), with 3 dimensions: + 1. free + 2. used + 3. reserved for root + +### disk names + +Netdata will automatically set the name of disks on the dashboard, using the mount point where they are mounted (only while they are mounted). Changes in mount points are not currently detected (you will have to restart Netdata to change the name of the disk). To use disk IDs provided by `/dev/disk/by-id`, the `name disks by id` option should be enabled. The `preferred disk ids` simple pattern allows choosing which disk IDs should be used first. + +### performance metrics + +By default, Netdata will enable monitoring metrics only when they are not zero. If they are constantly zero they are ignored. Metrics that start having values after Netdata is started will be detected and charts will be automatically added to the dashboard (a refresh of the dashboard is needed for them to appear though). Set `yes` for a chart instead of `auto` to enable it permanently. You can also set the `enable zero metrics` option to `yes` in the `[global]` section which enables charts with zero metrics for all internal Netdata plugins. + +Netdata categorizes all block devices into 3 categories: + +1. physical disks (i.e. block devices that do not have child devices and are not partitions) +2. virtual disks (i.e. block devices that have child devices - like RAID devices) +3. disk partitions (i.e. block devices that are part of a physical disk) + +Performance metrics are enabled by default for all disk devices, except partitions and unmounted virtual disks. Of course, you can enable/disable monitoring of any block device by editing the Netdata configuration file.
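To make the raw counters behind these performance metrics concrete, here is a minimal illustrative sketch (in Python, not the plugin's actual C implementation) of how per-disk I/O bandwidth can be derived from two `/proc/diskstats` snapshots; the sample counter values are invented for the example:

```python
# Sketch: derive per-disk I/O bandwidth from two /proc/diskstats snapshots.
# Field layout (first columns): major minor name reads reads_merged
# sectors_read ms_reading writes writes_merged sectors_written ms_writing ...
# The kernel reports sectors in fixed 512-byte units.

def parse_diskstats(text):
    stats = {}
    for line in text.strip().splitlines():
        f = line.split()
        stats[f[2]] = {"sectors_read": int(f[5]), "sectors_written": int(f[9])}
    return stats

def bandwidth_kb_s(before, after, interval_s):
    rates = {}
    for name, b in before.items():
        a = after[name]
        rates[name] = {
            "read_kb_s": (a["sectors_read"] - b["sectors_read"]) * 512 / 1024 / interval_s,
            "write_kb_s": (a["sectors_written"] - b["sectors_written"]) * 512 / 1024 / interval_s,
        }
    return rates

# Two invented snapshots of the same device, one second apart.
SNAP1 = "8 0 sda 1000 0 20480 300 500 0 4096 200 0 400 500"
SNAP2 = "8 0 sda 1100 0 24576 330 520 0 6144 220 0 420 540"

rates = bandwidth_kb_s(parse_diskstats(SNAP1), parse_diskstats(SNAP2), interval_s=1)
print(rates["sda"])  # 4096 extra sectors read -> 2048 KB/s; 2048 written -> 1024 KB/s
```

The plugin performs this kind of delta computation every update interval for each enabled block device.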
+ +### Netdata configuration + +You can get the running Netdata configuration using this: + +```sh +cd /etc/netdata +curl "http://localhost:19999/netdata.conf" >netdata.conf.new +mv netdata.conf.new netdata.conf +``` + +Then edit `netdata.conf` and find the following section. This is the basic plugin configuration. + +``` +[plugin:proc:/proc/diskstats] + # enable new disks detected at runtime = yes + # performance metrics for physical disks = auto + # performance metrics for virtual disks = auto + # performance metrics for partitions = no + # bandwidth for all disks = auto + # operations for all disks = auto + # merged operations for all disks = auto + # i/o time for all disks = auto + # queued operations for all disks = auto + # utilization percentage for all disks = auto + # extended operations for all disks = auto + # backlog for all disks = auto + # bcache for all disks = auto + # bcache priority stats update every = 0 + # remove charts of removed disks = yes + # path to get block device = /sys/block/%s + # path to get block device bcache = /sys/block/%s/bcache + # path to get virtual block device = /sys/devices/virtual/block/%s + # path to get block device infos = /sys/dev/block/%lu:%lu/%s + # path to device mapper = /dev/mapper + # path to /dev/disk/by-label = /dev/disk/by-label + # path to /dev/disk/by-id = /dev/disk/by-id + # path to /dev/vx/dsk = /dev/vx/dsk + # name disks by id = no + # preferred disk ids = * + # exclude disks = loop* ram* + # filename to monitor = /proc/diskstats + # performance metrics for disks with major 8 = yes +``` + +For each virtual disk, physical disk and partition you will have a section like this: + +``` +[plugin:proc:/proc/diskstats:sda] + # enable = yes + # enable performance metrics = auto + # bandwidth = auto + # operations = auto + # merged operations = auto + # i/o time = auto + # queued operations = auto + # utilization percentage = auto + # extended operations = auto + # backlog = auto +``` + +For all configuration 
options: + +- `auto` = enable monitoring if the collected values are not zero +- `yes` = enable monitoring +- `no` = disable monitoring + +Of course, to set options, you will have to uncomment them. The comments show the internal defaults. + +After saving `/etc/netdata/netdata.conf`, restart Netdata to apply the changes. + +#### Disabling performance metrics for individual devices or by device type + +You can easily disable performance metrics for an individual device, for example: + +``` +[plugin:proc:/proc/diskstats:sda] + enable performance metrics = no +``` + +But sometimes you need to disable performance metrics for all devices of the same type. To do this, find the device major number in `/proc/diskstats`, for example: + +``` + 7 0 loop0 1651 0 3452 168 0 0 0 0 0 8 168 + 7 1 loop1 4955 0 11924 880 0 0 0 0 0 64 880 + 7 2 loop2 36 0 216 4 0 0 0 0 0 4 4 + 7 6 loop6 0 0 0 0 0 0 0 0 0 0 0 + 7 7 loop7 0 0 0 0 0 0 0 0 0 0 0 + 251 2 zram2 27487 0 219896 188 79953 0 639624 1640 0 1828 1828 + 251 3 zram3 27348 0 218784 152 79952 0 639616 1960 0 2060 2104 +``` + +All zram devices have major number `251` and all loop devices have major number `7`. +So, to disable performance metrics for all loop devices, you could add `performance metrics for disks with major 7 = no` to the `[plugin:proc:/proc/diskstats]` section. + +``` +[plugin:proc:/proc/diskstats] + performance metrics for disks with major 7 = no +``` + +## Monitoring RAID arrays + +### Monitored RAID array metrics + +1. **Health** Number of failed disks in every array (aggregate chart). + +2. **Disks stats** + +- total (number of devices the array would ideally have) +- inuse (number of devices currently in use) + +3. **Mismatch count** + +- unsynchronized blocks + +4. **Current status** + +- resync in percent +- recovery in percent +- reshape in percent +- check in percent + +5. **Operation status** (if resync/recovery/reshape/check is active) + +- finish in minutes +- speed in megabytes/s + +6.
**Nonredundant array availability** + +#### configuration + +``` +[plugin:proc:/proc/mdstat] + # faulty devices = yes + # nonredundant arrays availability = yes + # mismatch count = auto + # disk stats = yes + # operation status = yes + # make charts obsolete = yes + # filename to monitor = /proc/mdstat + # mismatch_cnt filename to monitor = /sys/block/%s/md/mismatch_cnt +``` + +## Monitoring CPUs + +The `/proc/stat` module monitors CPU utilization, interrupts, context switches, processes started/running, thermal +throttling, frequency, and idle states. It gathers this information from multiple files. + +If your system has more than 50 processors (`physical processors * cores per processor * threads per core`), the Agent +automatically disables CPU thermal throttling, frequency, and idle state charts. To override this default, see the next +section on configuration. + +### Configuration + +The settings for monitoring CPUs are in the `[plugin:proc:/proc/stat]` section of your `netdata.conf` file. + +The `keep per core files open` option lets you reduce the number of file operations on multiple files. + +If your system has more than 50 processors and you would like to see the CPU thermal throttling, frequency, and idle +state charts that are automatically disabled, you can set the following boolean options in the +`[plugin:proc:/proc/stat]` section. + +```conf + keep per core files open = yes + keep cpuidle files open = yes + core_throttle_count = yes + package_throttle_count = yes + cpu frequency = yes + cpu idle states = yes +``` + +### CPU frequency + +The module shows the current CPU frequency as set by the `cpufreq` kernel +module. + +**Requirement:** +You need to have `CONFIG_CPU_FREQ` and (optionally) `CONFIG_CPU_FREQ_STAT` +enabled in your kernel. + +The `cpufreq` interface provides two different ways of getting the information, through the `/sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq` and `/sys/devices/system/cpu/cpu*/cpufreq/stats/time_in_state` files.
The latter is more accurate so it is preferred in the module. `scaling_cur_freq` represents only the current CPU frequency, and doesn't account for any state changes which happen between updates. The module switches back and forth between these two methods if the governor is changed. + +It produces one chart with multiple lines (one line per core). + +#### configuration + +`scaling_cur_freq filename to monitor` and `time_in_state filename to monitor` in the `[plugin:proc:/proc/stat]` configuration section + +### CPU idle states + +The module monitors the usage of CPU idle states. + +**Requirement:** +Your kernel needs to have `CONFIG_CPU_IDLE` enabled. + +It produces one stacked chart per CPU, showing the percentage of time spent in +each state. + +#### configuration + +`schedstat filename to monitor`, `cpuidle name filename to monitor`, and `cpuidle time filename to monitor` in the `[plugin:proc:/proc/stat]` configuration section + +## Monitoring memory + +### Monitored memory metrics + +- Amount of memory swapped in/out +- Amount of memory paged from/to disk +- Number of memory page faults +- Number of out of memory kills +- Number of NUMA events + +### Configuration + +```conf +[plugin:proc:/proc/vmstat] + filename to monitor = /proc/vmstat + swap i/o = auto + disk i/o = yes + memory page faults = yes + out of memory kills = yes + system-wide numa metric summary = auto +``` + +## Monitoring Network Interfaces + +### Monitored network interface metrics + +- **Physical Network Interfaces Aggregated Bandwidth (kilobits/s)** + The amount of data received and sent through all physical interfaces in the system. This is the source of data for the Net Inbound and Net Outbound dials in the System Overview section. + +- **Bandwidth (kilobits/s)** + The amount of data received and sent through the interface. + +- **Packets (packets/s)** + The number of packets received, packets sent, and multicast packets transmitted through the interface.
+ +- **Interface Errors (errors/s)** + The number of errors for the inbound and outbound traffic on the interface. + +- **Interface Drops (drops/s)** + The number of packets dropped for the inbound and outbound traffic on the interface. + +- **Interface FIFO Buffer Errors (errors/s)** + The number of FIFO buffer errors encountered while receiving and transmitting data through the interface. + +- **Compressed Packets (packets/s)** + The number of compressed packets transmitted or received by the device driver. + +- **Network Interface Events (events/s)** + The number of packet framing errors, collisions detected on the interface, and carrier losses detected by the device driver. + +By default, Netdata will enable monitoring metrics only when they are not zero. If they are constantly zero they are ignored. Metrics that start having values after Netdata is started will be detected and charts will be automatically added to the dashboard (a refresh of the dashboard is needed for them to appear though). + +### Monitoring wireless network interfaces + +The settings for monitoring wireless interfaces are in the `[plugin:proc:/proc/net/wireless]` section of your `netdata.conf` file. + +```conf + status for all interfaces = yes + quality for all interfaces = yes + discarded packets for all interfaces = yes + missed beacon for all interface = yes +``` + +You can set the following values for each configuration option: + +- `auto` = enable monitoring if the collected values are not zero +- `yes` = enable monitoring +- `no` = disable monitoring + +#### Monitored wireless interface metrics + +- **Status** + The current state of the interface. This is a device-dependent option. + +- **Link** + Overall quality of the link. + +- **Level** + Received signal strength (RSSI), which indicates how strong the received signal is. + +- **Noise** + Background noise level.
+ +- **Discarded packets** + Discarded packets, by reason: packets received with a different NWID or ESSID (`nwid`), packets the hardware was unable to decrypt (`crypt`), link layer fragments the hardware could not properly re-assemble (`frag`), packets that failed to be delivered (`retry`), and packets lost in relation to specific wireless operations (`misc`). + +- **Missed beacon** + Number of periodic beacons from the cell or the access point the interface has missed. + +#### Wireless configuration + +#### alerts + +There are several alerts defined in `health.d/net.conf`. + +The tricky ones are `inbound packets dropped` and `inbound packets dropped ratio`. They have quite a strict policy so that they warn users about possible issues. These alerts can be annoying for some network configurations. This is especially true for some bonding configurations if an interface is a child or a bonding interface itself. If a certain number of drops is expected on an interface for a certain network configuration, a separate alert with different triggering thresholds can be created or the existing one can be disabled for this specific interface. It can be done with the help of the [families](https://github.com/netdata/netdata/blob/master/src/health/REFERENCE.md#alert-line-families) line in the alert configuration. For example, if you want to disable the `inbound packets dropped` alert for `eth0`, set `families: !eth0 *` in the alert definition for `template: inbound_packets_dropped`.
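As an illustration of that override, the relevant lines of the alert definition would look like this (a sketch — only the `template` and `families` lines are shown; the rest of the alert body should be copied unchanged from the stock `health.d/net.conf`):

```
template: inbound_packets_dropped
families: !eth0 *
```

With this pattern, the alert template is applied to every interface except `eth0`.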
+ +#### configuration + +Module configuration: + +``` +[plugin:proc:/proc/net/dev] + # filename to monitor = /proc/net/dev + # path to get virtual interfaces = /sys/devices/virtual/net/%s + # path to get net device speed = /sys/class/net/%s/speed + # enable new interfaces detected at runtime = auto + # bandwidth for all interfaces = auto + # packets for all interfaces = auto + # errors for all interfaces = auto + # drops for all interfaces = auto + # fifo for all interfaces = auto + # compressed packets for all interfaces = auto + # frames, collisions, carrier counters for all interfaces = auto + # disable by default interfaces matching = lo fireqos* *-ifb + # refresh interface speed every seconds = 10 +``` + +Per interface configuration: + +``` +[plugin:proc:/proc/net/dev:enp0s3] + # enabled = yes + # virtual = no + # bandwidth = auto + # packets = auto + # errors = auto + # drops = auto + # fifo = auto + # compressed = auto + # events = auto +``` + +## Linux Anti-DDoS + +![image6](https://cloud.githubusercontent.com/assets/2662304/14253733/53550b16-fa95-11e5-8d9d-4ed171df4735.gif) + +--- + +SYNPROXY is a TCP SYN packet proxy. It can be used to protect any TCP server (like a web server) from SYN floods and similar DDoS attacks. + +SYNPROXY is a netfilter module in the Linux kernel (since version 3.12). It is optimized to handle millions of packets per second utilizing all CPUs available without any concurrency locking between the connections. + +The net effect of this is that the real servers will not notice any change during the attack. The valid TCP connections will pass through and be served, while the attack will be stopped at the firewall. + +Netdata does not enable SYNPROXY. It just uses the SYNPROXY metrics exposed by your kernel, so you will first need to configure it. The hard way is to run iptables SYNPROXY commands directly on the console. An easier way is to use [FireHOL](https://firehol.org/), which is a firewall manager for iptables.
FireHOL can configure SYNPROXY using the following setup guides: + +- **[Working with SYNPROXY](https://github.com/firehol/firehol/wiki/Working-with-SYNPROXY)** +- **[Working with SYNPROXY and traps](https://github.com/firehol/firehol/wiki/Working-with-SYNPROXY-and-traps)** + +### Real-time monitoring of Linux Anti-DDoS + +Netdata can monitor, in real time (per-second updates), the operation of the Linux Anti-DDoS protection. + +It visualizes 4 charts: + +1. TCP SYN Packets received on ports operated by SYNPROXY +2. TCP Cookies (valid, invalid, retransmits) +3. Connections Reopened +4. Entries used + +Example image: + +![ddos](https://cloud.githubusercontent.com/assets/2662304/14398891/6016e3fc-fdf0-11e5-942b-55de6a52cb66.gif) + +See Linux Anti-DDoS in action at: **[Netdata demo site (with SYNPROXY enabled)](https://registry.my-netdata.io/#menu_netfilter_submenu_synproxy)** + +## Linux power supply + +This module monitors various metrics reported by power supply drivers +on Linux. This allows tracking and alerting on things like remaining +battery capacity. + +Depending on the underlying driver, it may provide the following charts +and metrics: + +1. Capacity: The power supply capacity expressed as a percentage. + + - capacity_now + +2. Charge: The charge for the power supply, expressed as amp-hours. + + - charge_full_design + - charge_full + - charge_now + - charge_empty + - charge_empty_design + +3. Energy: The energy for the power supply, expressed as watt-hours. + + - energy_full_design + - energy_full + - energy_now + - energy_empty + - energy_empty_design + +4. Voltage: The voltage for the power supply, expressed as volts.
+ + - voltage_max_design + - voltage_max + - voltage_now + - voltage_min + - voltage_min_design + +#### configuration + +``` +[plugin:proc:/sys/class/power_supply] + # battery capacity = yes + # battery charge = no + # battery energy = no + # power supply voltage = no + # keep files open = auto + # directory to monitor = /sys/class/power_supply +``` + +#### notes + +- Most drivers provide at least the first chart. Battery powered ACPI + compliant systems (like most laptops) provide all but the third, but do + not provide all of the metrics for each chart. + +- Current, energy, and voltages are reported with a *very* high precision + by the power_supply framework. Usually, this is far higher than the + actual hardware supports reporting, so expect values in these + charts to jump instead of scaling smoothly. + +- If a `max` or `full` attribute is defined by the driver, but not a + corresponding `min` or `empty` attribute, then Netdata will still provide + the corresponding `min` or `empty`, which will then always read as zero. + This way, alerts which match on these will still work. + +## Infiniband interconnect + +This module monitors every active InfiniBand port. It provides generic counter statistics, and per-vendor hardware counters (if the vendor is supported). + +### Monitored interface metrics + +Each port will have its counter metrics monitored, grouped in the following charts: + +- **Bandwidth usage** + Sent/Received data, in KB/s + +- **Packets Statistics** + Sent/Received packets, in 3 categories: total, unicast and multicast. + +- **Errors Statistics** + Many error counters are provided, presenting statistics for: + - Packets: malformed, sent/received discarded by card/switch, missing resource + - Link: downed, recovered, integrity error, minor error + - Other events: Tick Wait to send, buffer overrun + +If your vendor is supported, you'll also get HW-Counters statistics. These being vendor specific, please refer to their documentation.
+ +- Mellanox: [see statistics documentation](https://community.mellanox.com/s/article/understanding-mlx5-linux-counters-and-status-parameters) + +### configuration + +The default configuration monitors only enabled InfiniBand ports, and refreshes newly activated or created ports every 30 seconds. + +``` +[plugin:proc:/sys/class/infiniband] + # dirname to monitor = /sys/class/infiniband + # bandwidth counters = yes + # packets counters = yes + # errors counters = yes + # hardware packets counters = auto + # hardware errors counters = auto + # monitor only ports being active = auto + # disable by default interfaces matching = + # refresh ports state every seconds = 30 +``` + +## AMD GPUs + +This module monitors every AMD GPU card discovered at agent startup. + +### Monitored GPU metrics + +The following charts will be provided: + +- **GPU utilization** +- **GPU memory utilization** +- **GPU clock frequency** +- **GPU memory clock frequency** +- **VRAM memory usage percentage** +- **VRAM memory usage** +- **visible VRAM memory usage percentage** +- **visible VRAM memory usage** +- **GTT memory usage percentage** +- **GTT memory usage** + +### configuration + +The `drm` path can be configured if it differs from the default: + +``` +[plugin:proc:/sys/class/drm] + # directory to monitor = /sys/class/drm +``` + +> [!NOTE] +> Temperature, fan speed, voltage and power metrics for AMD GPUs can be monitored using the [Sensors](https://github.com/netdata/netdata/blob/master/src/collectors/charts.d.plugin/sensors/README.md) plugin. + +## IPC + +### Monitored IPC metrics + +- **number of messages in message queues** +- **amount of memory used by message queues** +- **number of semaphores** +- **number of semaphore arrays** +- **number of shared memory segments** +- **amount of memory used by shared memory segments** + +Since the message queue charts are dynamic, sane limits are applied for the number of dimensions per chart (the limit is configurable).
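To sketch where the message queue numbers come from, the following illustrative Python snippet (not the plugin's actual C code) summarizes SysV message queues from a truncated, invented `/proc/sysvipc/msg` sample, applying a dimension limit in the spirit of the `max dimensions in memory allowed` option:

```python
# Sketch: summarize SysV message queues from the /proc/sysvipc/msg table.
# Relevant columns (of the full table): key msqid perms cbytes qnum ...
# SAMPLE below is a truncated, invented excerpt for illustration only.

SAMPLE = """\
       key      msqid perms      cbytes       qnum ...
         0          0   600         128          2 ...
         0          1   600        4096          5 ...
"""

def summarize_msg_queues(text, max_dimensions=50):
    queues = {}
    for line in text.splitlines()[1:]:          # skip the header row
        f = line.split()
        if not f:
            continue
        msqid, cbytes, qnum = f[1], int(f[3]), int(f[4])
        queues[msqid] = {"bytes": cbytes, "messages": qnum}
    # cap the number of per-queue dimensions, like the configurable limit
    return dict(list(queues.items())[:max_dimensions])

q = summarize_msg_queues(SAMPLE)
print(sum(v["messages"] for v in q.values()))  # total queued messages across queues
```

Each remaining queue becomes one dimension of the dynamic message queue charts.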
+ +### configuration + +``` +[plugin:proc:ipc] + # message queues = yes + # semaphore totals = yes + # shared memory totals = yes + # msg filename to monitor = /proc/sysvipc/msg + # shm filename to monitor = /proc/sysvipc/shm + # max dimensions in memory allowed = 50 +``` + + diff --git a/src/collectors/proc.plugin/integrations/amd_gpu.md b/src/collectors/proc.plugin/integrations/amd_gpu.md new file mode 100644 index 000000000..24f480894 --- /dev/null +++ b/src/collectors/proc.plugin/integrations/amd_gpu.md @@ -0,0 +1,110 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/amd_gpu.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "AMD GPU" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Hardware Devices and Sensors" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# AMD GPU + + +<img src="https://netdata.cloud/img/amd.svg" width="150"/> + + +Plugin: proc.plugin +Module: /sys/class/drm + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +This integration monitors AMD GPU metrics, such as utilization, clock frequency and memory usage. + +It reads `/sys/class/drm` to collect metrics for every AMD GPU card instance it encounters. + +This collector is only supported on the following platforms: + +- Linux + +This collector supports collecting metrics from multiple instances of this integration, including remote instances. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. 
+ + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per gpu + +These metrics refer to the GPU. + +Labels: + +| Label | Description | +|:-----------|:----------------| +| product_name | GPU product name (e.g. AMD RX 6600) | + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| amdgpu.gpu_utilization | utilization | percentage | +| amdgpu.gpu_mem_utilization | utilization | percentage | +| amdgpu.gpu_clk_frequency | frequency | MHz | +| amdgpu.gpu_mem_clk_frequency | frequency | MHz | +| amdgpu.gpu_mem_vram_usage_perc | usage | percentage | +| amdgpu.gpu_mem_vram_usage | free, used | bytes | +| amdgpu.gpu_mem_vis_vram_usage_perc | usage | percentage | +| amdgpu.gpu_mem_vis_vram_usage | free, used | bytes | +| amdgpu.gpu_mem_gtt_usage_perc | usage | percentage | +| amdgpu.gpu_mem_gtt_usage | free, used | bytes | + + + +## Alerts + +There are no alerts configured by default for this integration. + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. 
+ + diff --git a/src/collectors/proc.plugin/integrations/btrfs.md b/src/collectors/proc.plugin/integrations/btrfs.md new file mode 100644 index 000000000..b7fc85220 --- /dev/null +++ b/src/collectors/proc.plugin/integrations/btrfs.md @@ -0,0 +1,137 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/btrfs.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "BTRFS" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/Filesystem/BTRFS" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# BTRFS + + +<img src="https://netdata.cloud/img/filesystem.svg" width="150"/> + + +Plugin: proc.plugin +Module: /sys/fs/btrfs + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +This integration provides usage and error statistics from the BTRFS filesystem. + + + +This collector is supported on all platforms. + +This collector supports collecting metrics from multiple instances of this integration, including remote instances. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. 
+ + + +### Per btrfs filesystem + + + +Labels: + +| Label | Description | +|:-----------|:----------------| +| filesystem_uuid | TBD | +| filesystem_label | TBD | + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| btrfs.disk | unallocated, data_free, data_used, meta_free, meta_used, sys_free, sys_used | MiB | +| btrfs.data | free, used | MiB | +| btrfs.metadata | free, used, reserved | MiB | +| btrfs.system | free, used | MiB | +| btrfs.commits | commits | commits | +| btrfs.commits_perc_time | commits | percentage | +| btrfs.commit_timings | last, max | ms | + +### Per btrfs device + + + +Labels: + +| Label | Description | +|:-----------|:----------------| +| device_id | TBD | +| filesystem_uuid | TBD | +| filesystem_label | TBD | + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| btrfs.device_errors | write_errs, read_errs, flush_errs, corruption_errs, generation_errs | errors | + + + +## Alerts + + +The following alerts are available: + +| Alert name | On metric | Description | +|:------------|:----------|:------------| +| [ btrfs_allocated ](https://github.com/netdata/netdata/blob/master/src/health/health.d/btrfs.conf) | btrfs.disk | percentage of allocated BTRFS physical disk space | +| [ btrfs_data ](https://github.com/netdata/netdata/blob/master/src/health/health.d/btrfs.conf) | btrfs.data | utilization of BTRFS data space | +| [ btrfs_metadata ](https://github.com/netdata/netdata/blob/master/src/health/health.d/btrfs.conf) | btrfs.metadata | utilization of BTRFS metadata space | +| [ btrfs_system ](https://github.com/netdata/netdata/blob/master/src/health/health.d/btrfs.conf) | btrfs.system | utilization of BTRFS system space | +| [ btrfs_device_read_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/btrfs.conf) | btrfs.device_errors | number of encountered BTRFS read errors | +| [ btrfs_device_write_errors 
](https://github.com/netdata/netdata/blob/master/src/health/health.d/btrfs.conf) | btrfs.device_errors | number of encountered BTRFS write errors | +| [ btrfs_device_flush_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/btrfs.conf) | btrfs.device_errors | number of encountered BTRFS flush errors | +| [ btrfs_device_corruption_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/btrfs.conf) | btrfs.device_errors | number of encountered BTRFS corruption errors | +| [ btrfs_device_generation_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/btrfs.conf) | btrfs.device_errors | number of encountered BTRFS generation errors | + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. + + diff --git a/src/collectors/proc.plugin/integrations/conntrack.md b/src/collectors/proc.plugin/integrations/conntrack.md new file mode 100644 index 000000000..33f11db24 --- /dev/null +++ b/src/collectors/proc.plugin/integrations/conntrack.md @@ -0,0 +1,105 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/conntrack.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "Conntrack" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/Firewall" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# Conntrack + + +<img src="https://netdata.cloud/img/firewall.svg" width="150"/> + + +Plugin: proc.plugin +Module: /proc/net/stat/nf_conntrack + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +This integration monitors the connection tracking mechanism of 
Netfilter in the Linux Kernel. + + + +This collector is supported on all platforms. + +This collector supports collecting metrics from multiple instances of this integration, including remote instances. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per Conntrack instance + + + +This scope has no labels. + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| netfilter.conntrack_sockets | connections | active connections | +| netfilter.conntrack_new | new, ignore, invalid | connections/s | +| netfilter.conntrack_changes | inserted, deleted, delete_list | changes/s | +| netfilter.conntrack_expect | created, deleted, new | expectations/s | +| netfilter.conntrack_search | searched, restarted, found | searches/s | +| netfilter.conntrack_errors | icmp_error, error_failed, drop, early_drop | events/s | + + + +## Alerts + + +The following alerts are available: + +| Alert name | On metric | Description | +|:------------|:----------|:------------| +| [ netfilter_conntrack_full ](https://github.com/netdata/netdata/blob/master/src/health/health.d/netfilter.conf) | netfilter.conntrack_sockets | netfilter connection tracker table size utilization | + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. 
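The counters behind these charts are the per-CPU hexadecimal values in `/proc/net/stat/nf_conntrack` itself. A rough sketch of summing them, assuming the usual layout of a header line of field names followed by one hex row per CPU (the exact field set varies by kernel):

```python
def parse_nf_conntrack(text: str) -> dict:
    """Sum per-CPU hex counters. Caveat: 'entries' repeats the same table
    size on every row, so its sum is only meaningful divided by CPU count."""
    lines = text.strip().splitlines()
    fields = lines[0].split()
    totals = dict.fromkeys(fields, 0)
    for row in lines[1:]:                   # one row per CPU
        for name, value in zip(fields, row.split()):
            totals[name] += int(value, 16)  # values are hexadecimal
    return totals

if __name__ == "__main__":
    try:
        with open("/proc/net/stat/nf_conntrack") as f:
            print(parse_nf_conntrack(f.read()))
    except FileNotFoundError:
        pass  # conntrack module not loaded on this kernel
```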
+ + diff --git a/src/collectors/proc.plugin/integrations/disk_statistics.md b/src/collectors/proc.plugin/integrations/disk_statistics.md new file mode 100644 index 000000000..9dcfa2ede --- /dev/null +++ b/src/collectors/proc.plugin/integrations/disk_statistics.md @@ -0,0 +1,149 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/disk_statistics.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "Disk Statistics" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/Disk" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# Disk Statistics + + +<img src="https://netdata.cloud/img/hard-drive.svg" width="150"/> + + +Plugin: proc.plugin +Module: /proc/diskstats + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +Detailed statistics for each of your system's disk devices and partitions. +The data is reported by the kernel and can be used to monitor disk activity on a Linux system. + +Get valuable insight into how your disks are performing and where potential bottlenecks might be. + + + + +This collector is supported on all platforms. + +This collector supports collecting metrics from multiple instances of this integration, including remote instances. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. 
+ + + +### Per Disk Statistics instance + + + +This scope has no labels. + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| system.io | in, out | KiB/s | + +### Per disk + + + +Labels: + +| Label | Description | +|:-----------|:----------------| +| device | TBD | +| mount_point | TBD | +| device_type | TBD | + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| disk.io | reads, writes | KiB/s | +| disk_ext.io | discards | KiB/s | +| disk.ops | reads, writes | operations/s | +| disk_ext.ops | discards, flushes | operations/s | +| disk.qops | operations | operations | +| disk.backlog | backlog | milliseconds | +| disk.busy | busy | milliseconds | +| disk.util | utilization | % of time working | +| disk.mops | reads, writes | merged operations/s | +| disk_ext.mops | discards | merged operations/s | +| disk.iotime | reads, writes | milliseconds/s | +| disk_ext.iotime | discards, flushes | milliseconds/s | +| disk.await | reads, writes | milliseconds/operation | +| disk_ext.await | discards, flushes | milliseconds/operation | +| disk.avgsz | reads, writes | KiB/operation | +| disk_ext.avgsz | discards | KiB/operation | +| disk.svctm | svctm | milliseconds/operation | +| disk.bcache_cache_alloc | ununsed, dirty, clean, metadata, undefined | percentage | +| disk.bcache_hit_ratio | 5min, 1hour, 1day, ever | percentage | +| disk.bcache_rates | congested, writeback | KiB/s | +| disk.bcache_size | dirty | MiB | +| disk.bcache_usage | avail | percentage | +| disk.bcache_cache_read_races | races, errors | operations/s | +| disk.bcache | hits, misses, collisions, readaheads | operations/s | +| disk.bcache_bypass | hits, misses | operations/s | + + + +## Alerts + + +The following alerts are available: + +| Alert name | On metric | Description | +|:------------|:----------|:------------| +| [ 10min_disk_backlog ](https://github.com/netdata/netdata/blob/master/src/health/health.d/disks.conf) | disk.backlog | average backlog size of 
the ${label:device} disk over the last 10 minutes | +| [ 10min_disk_utilization ](https://github.com/netdata/netdata/blob/master/src/health/health.d/disks.conf) | disk.util | average percentage of time ${label:device} disk was busy over the last 10 minutes | +| [ bcache_cache_dirty ](https://github.com/netdata/netdata/blob/master/src/health/health.d/bcache.conf) | disk.bcache_cache_alloc | percentage of cache space used for dirty data and metadata (this usually means your SSD cache is too small) | +| [ bcache_cache_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/bcache.conf) | disk.bcache_cache_read_races | number of times data was read from the cache, the bucket was reused and invalidated in the last 10 minutes (when this occurs the data is reread from the backing device) | + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. 
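As a concrete illustration of how the KiB/s charts above relate to the raw file, here is a sketch that parses `/proc/diskstats` and turns two snapshots into read/write rates. Sector counts in this file are always in 512-byte units, regardless of the device's real sector size:

```python
def parse_diskstats(text: str) -> dict:
    """Map device name -> selected counters from /proc/diskstats."""
    disks = {}
    for line in text.strip().splitlines():
        f = line.split()
        disks[f[2]] = {
            "reads": int(f[3]),            # reads completed
            "sectors_read": int(f[5]),     # 512-byte sectors
            "writes": int(f[7]),           # writes completed
            "sectors_written": int(f[9]),  # 512-byte sectors
        }
    return disks

def io_rate_kib(prev: dict, cur: dict, seconds: float) -> dict:
    """KiB/s read and written between two snapshots of one device."""
    return {
        "read": (cur["sectors_read"] - prev["sectors_read"]) * 512 / 1024 / seconds,
        "write": (cur["sectors_written"] - prev["sectors_written"]) * 512 / 1024 / seconds,
    }
```

Taking two snapshots one second apart and feeding them through `io_rate_kib` reproduces the shape of the `disk.io` chart.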
+ + diff --git a/src/collectors/proc.plugin/integrations/entropy.md b/src/collectors/proc.plugin/integrations/entropy.md new file mode 100644 index 000000000..03b51ecc8 --- /dev/null +++ b/src/collectors/proc.plugin/integrations/entropy.md @@ -0,0 +1,133 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/entropy.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "Entropy" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/System" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# Entropy + + +<img src="https://netdata.cloud/img/syslog.png" width="150"/> + + +Plugin: proc.plugin +Module: /proc/sys/kernel/random/entropy_avail + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +Entropy is a measure of the randomness or unpredictability of data. + +In the context of cryptography, entropy is used to generate random numbers or keys that are essential for +secure communication and encryption. Without a good source of entropy, cryptographic protocols can become +vulnerable to attacks that exploit the predictability of the generated keys. + +In most operating systems, entropy is generated by collecting random events from various sources, such as +hardware interrupts, mouse movements, keyboard presses, and disk activity. These events are fed into a pool +of entropy, which is then used to generate random numbers when needed. + +The `/dev/random` device in Linux is one such source of entropy, and it provides an interface for programs +to access the pool of entropy. When a program requests random numbers, it reads from the `/dev/random` device, +which blocks until enough entropy is available to generate the requested numbers.
This ensures that the +generated numbers are truly random and not predictable. + +However, if the pool of entropy gets depleted, the `/dev/random` device may block indefinitely, causing +programs that rely on random numbers to slow down or even freeze. This is especially problematic for +cryptographic protocols that require a continuous stream of random numbers, such as SSL/TLS and SSH. + +To avoid this issue, some systems use a hardware random number generator (RNG) to generate high-quality +entropy. A hardware RNG generates random numbers by measuring physical phenomena, such as thermal noise or +radioactive decay. These sources of randomness are considered to be more reliable and unpredictable than +software-based sources. + +One such hardware RNG is the Trusted Platform Module (TPM), which is a dedicated hardware chip that is used +for cryptographic operations and secure boot. The TPM contains a built-in hardware RNG that generates +high-quality entropy, which can be used to seed the pool of entropy in the operating system. + +Alternatively, software-based solutions such as `Haveged` can be used to generate additional entropy by +exploiting sources of randomness in the system, such as CPU utilization and network traffic. These solutions +can help to mitigate the risk of entropy depletion, but they may not be as reliable as hardware-based solutions. + + + + +This collector is only supported on the following platforms: + +- linux + +This collector only supports collecting metrics from a single instance of this integration. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. 
+ +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per Entropy instance + + + +This scope has no labels. + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| system.entropy | entropy | entropy | + + + +## Alerts + + +The following alerts are available: + +| Alert name | On metric | Description | +|:------------|:----------|:------------| +| [ lowest_entropy ](https://github.com/netdata/netdata/blob/master/src/health/health.d/entropy.conf) | system.entropy | minimum number of bits of entropy available for the kernel’s random number generator | + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. + + diff --git a/src/collectors/proc.plugin/integrations/infiniband.md b/src/collectors/proc.plugin/integrations/infiniband.md new file mode 100644 index 000000000..5a4f5d702 --- /dev/null +++ b/src/collectors/proc.plugin/integrations/infiniband.md @@ -0,0 +1,99 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/infiniband.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "InfiniBand" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/Network" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# InfiniBand + + +<img src="https://netdata.cloud/img/network-wired.svg" width="150"/> + + +Plugin: proc.plugin +Module: /sys/class/infiniband + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +This integration monitors InfiniBand network interface statistics.
+ + + +This collector is supported on all platforms. + +This collector supports collecting metrics from multiple instances of this integration, including remote instances. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per infiniband port + + + +This scope has no labels. + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| ib.bytes | Received, Sent | kilobits/s | +| ib.packets | Received, Sent, Mcast_rcvd, Mcast_sent, Ucast_rcvd, Ucast_sent | packets/s | +| ib.errors | Pkts_malformated, Pkts_rcvd_discarded, Pkts_sent_discarded, Tick_Wait_to_send, Pkts_missed_resource, Buffer_overrun, Link_Downed, Link_recovered, Link_integrity_err, Link_minor_errors, Pkts_rcvd_with_EBP, Pkts_rcvd_discarded_by_switch, Pkts_sent_discarded_by_switch | errors/s | +| ib.hwerrors | Duplicated_packets, Pkt_Seq_Num_gap, Ack_timer_expired, Drop_missing_buffer, Drop_out_of_sequence, NAK_sequence_rcvd, CQE_err_Req, CQE_err_Resp, CQE_Flushed_err_Req, CQE_Flushed_err_Resp, Remote_access_err_Req, Remote_access_err_Resp, Remote_invalid_req, Local_length_err_Resp, RNR_NAK_Packets, CNP_Pkts_ignored, RoCE_ICRC_Errors | errors/s | +| ib.hwpackets | RoCEv2_Congestion_sent, RoCEv2_Congestion_rcvd, IB_Congestion_handled, ATOMIC_req_rcvd, Connection_req_rcvd, Read_req_rcvd, Write_req_rcvd, RoCE_retrans_adaptive, RoCE_retrans_timeout, RoCE_slow_restart, RoCE_slow_restart_congestion, RoCE_slow_restart_count | packets/s | + + + +## Alerts + +There are no alerts configured by default for this 
integration. + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. + + diff --git a/src/collectors/proc.plugin/integrations/inter_process_communication.md b/src/collectors/proc.plugin/integrations/inter_process_communication.md new file mode 100644 index 000000000..363fbea41 --- /dev/null +++ b/src/collectors/proc.plugin/integrations/inter_process_communication.md @@ -0,0 +1,120 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/inter_process_communication.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "Inter Process Communication" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/IPC" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# Inter Process Communication + + +<img src="https://netdata.cloud/img/network-wired.svg" width="150"/> + + +Plugin: proc.plugin +Module: ipc + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +IPC stands for Inter-Process Communication. It is a mechanism which allows processes to communicate with each +other and synchronize their actions. + +This collector exposes information about: + +- Message Queues: This allows messages to be exchanged between processes. It's a more flexible method that + allows messages to be placed onto a queue and read at a later time. + +- Shared Memory: This method allows for the fastest form of IPC because processes can exchange data by + reading/writing into shared memory segments. + +- Semaphores: They are used to synchronize the operations performed by independent processes. 
So, if multiple + processes are trying to access a single shared resource, semaphores can ensure that only one process + accesses the resource at a given time. + + + + +This collector is supported on all platforms. + +This collector only supports collecting metrics from a single instance of this integration. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per Inter Process Communication instance + + + +This scope has no labels. + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| system.ipc_semaphores | semaphores | semaphores | +| system.ipc_semaphore_arrays | arrays | arrays | +| system.message_queue_message | a dimension per queue | messages | +| system.message_queue_bytes | a dimension per queue | bytes | +| system.shared_memory_segments | segments | segments | +| system.shared_memory_bytes | bytes | bytes | + + + +## Alerts + + +The following alerts are available: + +| Alert name | On metric | Description | +|:------------|:----------|:------------| +| [ semaphores_used ](https://github.com/netdata/netdata/blob/master/src/health/health.d/ipc.conf) | system.ipc_semaphores | IPC semaphore utilization | +| [ semaphore_arrays_used ](https://github.com/netdata/netdata/blob/master/src/health/health.d/ipc.conf) | system.ipc_semaphore_arrays | IPC semaphore arrays utilization | + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. 
+ +#### Examples +There are no configuration examples. + + diff --git a/src/collectors/proc.plugin/integrations/interrupts.md b/src/collectors/proc.plugin/integrations/interrupts.md new file mode 100644 index 000000000..b0d39dbbe --- /dev/null +++ b/src/collectors/proc.plugin/integrations/interrupts.md @@ -0,0 +1,141 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/interrupts.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "Interrupts" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/CPU" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# Interrupts + + +<img src="https://netdata.cloud/img/linuxserver.svg" width="150"/> + + +Plugin: proc.plugin +Module: /proc/interrupts + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +Monitors `/proc/interrupts`, a file organized by CPU and then by the type of interrupt. +The numbers reported are the counts of the interrupts that have occurred of each type. + +An interrupt is a signal to the processor emitted by hardware or software indicating an event that needs +immediate attention. The processor then interrupts its current activities and executes the interrupt handler +to deal with the event. This is part of the way a computer multitasks and handles concurrent processing. + +The types of interrupts include: + +- **I/O interrupts**: These are caused by I/O devices like the keyboard, mouse, printer, etc. For example, when + you type something on the keyboard, an interrupt is triggered so the processor can handle the new input. + +- **Timer interrupts**: These are generated at regular intervals by the system's timer circuit. It's primarily + used to switch the CPU among different tasks. 
+ +- **Software interrupts**: These are generated by a program requiring disk I/O operations, or other system resources. + +- **Hardware interrupts**: These are caused by hardware conditions such as power failure, overheating, etc. + +Monitoring `/proc/interrupts` can be used for: + +- **Performance tuning**: If an interrupt is happening very frequently, it could be a sign that a device is not + configured correctly, or there is a software bug causing unnecessary interrupts. This could lead to system + performance degradation. + +- **System troubleshooting**: If you're seeing a lot of unexpected interrupts, it could be a sign of a hardware problem. + +- **Understanding system behavior**: More generally, keeping an eye on what interrupts are occurring can help you + understand what your system is doing. It can provide insights into the system's interaction with hardware, + drivers, and other parts of the kernel. + + + + +This collector is supported on all platforms. + +This collector supports collecting metrics from multiple instances of this integration, including remote instances. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per Interrupts instance + + + +This scope has no labels. 
+ +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| system.interrupts | a dimension per device | interrupts/s | + +### Per cpu core + + + +Labels: + +| Label | Description | +|:-----------|:----------------| +| cpu | TBD | + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| cpu.interrupts | a dimension per device | interrupts/s | + + + +## Alerts + +There are no alerts configured by default for this integration. + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. + + diff --git a/src/collectors/proc.plugin/integrations/ip_virtual_server.md b/src/collectors/proc.plugin/integrations/ip_virtual_server.md new file mode 100644 index 000000000..974c2f60c --- /dev/null +++ b/src/collectors/proc.plugin/integrations/ip_virtual_server.md @@ -0,0 +1,97 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/ip_virtual_server.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "IP Virtual Server" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/Network" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# IP Virtual Server + + +<img src="https://netdata.cloud/img/network-wired.svg" width="150"/> + + +Plugin: proc.plugin +Module: /proc/net/ip_vs_stats + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +This integration monitors IP Virtual Server statistics + + + +This collector is supported on all platforms. + +This collector supports collecting metrics from multiple instances of this integration, including remote instances. 
+ + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per IP Virtual Server instance + + + +This scope has no labels. + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| ipvs.sockets | connections | connections/s | +| ipvs.packets | received, sent | packets/s | +| ipvs.net | received, sent | kilobits/s | + + + +## Alerts + +There are no alerts configured by default for this integration. + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. 
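For reference, the totals behind these charts come from `/proc/net/ip_vs_stats`. A small sketch of reading the cumulative counters, assuming the usual layout of two header lines followed by one row of hexadecimal totals (Conns, InPkts, OutPkts, InBytes, OutBytes):

```python
IPVS_FIELDS = ["conns", "in_packets", "out_packets", "in_bytes", "out_bytes"]

def parse_ipvs_totals(text: str) -> dict:
    """Parse the hex totals row (third line) of /proc/net/ip_vs_stats."""
    row = text.splitlines()[2].split()
    return {name: int(value, 16) for name, value in zip(IPVS_FIELDS, row)}

if __name__ == "__main__":
    try:
        with open("/proc/net/ip_vs_stats") as f:
            print(parse_ipvs_totals(f.read()))
    except FileNotFoundError:
        pass  # IPVS not enabled on this kernel
```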
+ + diff --git a/src/collectors/proc.plugin/integrations/ipv6_socket_statistics.md b/src/collectors/proc.plugin/integrations/ipv6_socket_statistics.md new file mode 100644 index 000000000..0840d3f0f --- /dev/null +++ b/src/collectors/proc.plugin/integrations/ipv6_socket_statistics.md @@ -0,0 +1,99 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/ipv6_socket_statistics.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "IPv6 Socket Statistics" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/Network" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# IPv6 Socket Statistics + + +<img src="https://netdata.cloud/img/network-wired.svg" width="150"/> + + +Plugin: proc.plugin +Module: /proc/net/sockstat6 + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +This integration provides IPv6 socket statistics. + + + +This collector is supported on all platforms. + +This collector supports collecting metrics from multiple instances of this integration, including remote instances. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per IPv6 Socket Statistics instance + + + +This scope has no labels. 
+ +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| ipv6.sockstat6_tcp_sockets | inuse | sockets | +| ipv6.sockstat6_udp_sockets | inuse | sockets | +| ipv6.sockstat6_udplite_sockets | inuse | sockets | +| ipv6.sockstat6_raw_sockets | inuse | sockets | +| ipv6.sockstat6_frag_sockets | inuse | fragments | + + + +## Alerts + +There are no alerts configured by default for this integration. + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. + + diff --git a/src/collectors/proc.plugin/integrations/kernel_same-page_merging.md b/src/collectors/proc.plugin/integrations/kernel_same-page_merging.md new file mode 100644 index 000000000..37b64d253 --- /dev/null +++ b/src/collectors/proc.plugin/integrations/kernel_same-page_merging.md @@ -0,0 +1,103 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/kernel_same-page_merging.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "Kernel Same-Page Merging" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/Memory" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# Kernel Same-Page Merging + + +<img src="https://netdata.cloud/img/microchip.svg" width="150"/> + + +Plugin: proc.plugin +Module: /sys/kernel/mm/ksm + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +Kernel Samepage Merging (KSM) is a memory-saving feature in Linux that enables the kernel to examine the +memory of different processes and identify identical pages. It then merges these identical pages into a +single page that the processes share. 
This is particularly useful for virtualization, where multiple virtual +machines might be running the same operating system or applications and have many identical pages. + +The collector provides information about the operation and effectiveness of KSM on your system. + + + + +This collector is supported on all platforms. + +This collector only supports collecting metrics from a single instance of this integration. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per Kernel Same-Page Merging instance + + + +This scope has no labels. + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| mem.ksm | shared, unshared, sharing, volatile | MiB | +| mem.ksm_savings | savings, offered | MiB | +| mem.ksm_ratios | savings | percentage | + + + +## Alerts + +There are no alerts configured by default for this integration. + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. 
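The savings numbers above are derived from the page counters under `/sys/kernel/mm/ksm`. A minimal sketch of the arithmetic: `pages_sharing` counts pages that map onto an already-shared page, so each is roughly one page of RAM saved. The exact formula used by the collector may differ from this assumption:

```python
import os
from pathlib import Path

KSM = Path("/sys/kernel/mm/ksm")

def ksm_savings_mib(pages_sharing: int, page_size: int) -> float:
    """Approximate MiB of RAM saved by KSM deduplication."""
    return pages_sharing * page_size / (1024 * 1024)

if __name__ == "__main__":
    if KSM.exists():  # Linux with KSM compiled in
        sharing = int((KSM / "pages_sharing").read_text())
        print(f"~{ksm_savings_mib(sharing, os.sysconf('SC_PAGE_SIZE')):.1f} MiB saved")
```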
+ + diff --git a/src/collectors/proc.plugin/integrations/md_raid.md b/src/collectors/proc.plugin/integrations/md_raid.md new file mode 100644 index 000000000..f96f4c5b1 --- /dev/null +++ b/src/collectors/proc.plugin/integrations/md_raid.md @@ -0,0 +1,125 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/md_raid.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "MD RAID" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/Disk" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# MD RAID + + +<img src="https://netdata.cloud/img/hard-drive.svg" width="150"/> + + +Plugin: proc.plugin +Module: /proc/mdstat + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +This integration monitors the status of MD RAID devices. + + + +This collector is supported on all platforms. + +This collector supports collecting metrics from multiple instances of this integration, including remote instances. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per MD RAID instance + + + +This scope has no labels. 
+ +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| md.health | a dimension per md array | failed disks | + +### Per md array + + + +Labels: + +| Label | Description | +|:-----------|:----------------| +| device | TBD | +| raid_level | TBD | + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| md.disks | inuse, down | disks | +| md.mismatch_cnt | count | unsynchronized blocks | +| md.status | check, resync, recovery, reshape | percent | +| md.expected_time_until_operation_finish | finish_in | seconds | +| md.operation_speed | speed | KiB/s | +| md.nonredundant | available | boolean | + + + +## Alerts + + +The following alerts are available: + +| Alert name | On metric | Description | +|:------------|:----------|:------------| +| [ mdstat_last_collected ](https://github.com/netdata/netdata/blob/master/src/health/health.d/mdstat.conf) | md.disks | number of seconds since the last successful data collection | +| [ mdstat_disks ](https://github.com/netdata/netdata/blob/master/src/health/health.d/mdstat.conf) | md.disks | number of devices in the down state for the ${label:device} ${label:raid_level} array. Any number > 0 indicates that the array is degraded. | +| [ mdstat_mismatch_cnt ](https://github.com/netdata/netdata/blob/master/src/health/health.d/mdstat.conf) | md.mismatch_cnt | number of unsynchronized blocks for the ${label:device} ${label:raid_level} array | +| [ mdstat_nonredundant_last_collected ](https://github.com/netdata/netdata/blob/master/src/health/health.d/mdstat.conf) | md.nonredundant | number of seconds since the last successful data collection | + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. 
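The `inuse`/`down` dimensions of `md.disks` come from the `[total/working]` status token that `/proc/mdstat` prints for each array. A minimal parsing sketch (an assumption about layout — it covers common `raidN` personalities only, not every mdstat variant):

```python
import re

def parse_mdstat(text):
    """Extract per-array disk counts from /proc/mdstat content.

    Returns {array: {"raid_level": ..., "inuse": ..., "down": ...}},
    reading the '[total/working]' status token of each array.
    """
    arrays = {}
    current = None
    for line in text.splitlines():
        m = re.match(r"^(md\d+)\s*:\s*\S+\s+(raid\d+|linear|multipath)", line)
        if m:
            current = m.group(1)
            arrays[current] = {"raid_level": m.group(2), "inuse": 0, "down": 0}
            continue
        m = re.search(r"\[(\d+)/(\d+)\]\s+\[[U_]+\]", line)
        if m and current:
            total, working = int(m.group(1)), int(m.group(2))
            arrays[current].update(inuse=working, down=total - working)
            current = None
    return arrays
```

A `[2/1] [U_]` status, for example, yields one in-use disk and one down disk — the condition the `mdstat_disks` alert fires on.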
+
+
diff --git a/src/collectors/proc.plugin/integrations/memory_modules_dimms.md b/src/collectors/proc.plugin/integrations/memory_modules_dimms.md
new file mode 100644
index 000000000..4f4d434fd
--- /dev/null
+++ b/src/collectors/proc.plugin/integrations/memory_modules_dimms.md
@@ -0,0 +1,146 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/memory_modules_dimms.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml"
+sidebar_label: "Memory modules (DIMMs)"
+learn_status: "Published"
+learn_rel_path: "Collecting Metrics/Linux Systems/Memory"
+most_popular: False
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
+endmeta-->
+
+# Memory modules (DIMMs)
+
+
+<img src="https://netdata.cloud/img/microchip.svg" width="150"/>
+
+
+Plugin: proc.plugin
+Module: /sys/devices/system/edac/mc
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Overview
+
+The Error Detection and Correction (EDAC) subsystem detects and reports errors in the system's memory,
+primarily ECC (Error-Correcting Code) memory errors.
+
+The collector provides data for:
+
+- Per memory controller (MC): correctable and uncorrectable errors. These can be of two kinds:
+  - errors related to a DIMM
+  - errors that cannot be associated with a DIMM
+
+- Per memory DIMM: correctable and uncorrectable errors. There are two kinds of memory controllers:
+  - memory controllers that can identify the physical DIMMs and report errors directly for them,
+  - memory controllers that report errors for memory address ranges that can be linked to DIMMs.
+    In this case the number of DIMMs reported may exceed the number of physical DIMMs installed.
+
+
+
+
+This collector is supported on all platforms.
+
+This collector supports collecting metrics from multiple instances of this integration, including remote instances.
+ + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per memory controller + +These metrics refer to the memory controller. + +Labels: + +| Label | Description | +|:-----------|:----------------| +| controller | [mcX](https://www.kernel.org/doc/html/v5.0/admin-guide/ras.html#mcx-directories) directory name of this memory controller. | +| mc_name | Memory controller type. | +| size_mb | The amount of memory in megabytes that this memory controller manages. | +| max_location | Last available memory slot in this memory controller. | + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| mem.edac_mc_errors | correctable, uncorrectable, correctable_noinfo, uncorrectable_noinfo | errors | + +### Per memory module + +These metrics refer to the memory module (or rank, [depends on the memory controller](https://www.kernel.org/doc/html/v5.0/admin-guide/ras.html#f5)). + +Labels: + +| Label | Description | +|:-----------|:----------------| +| controller | [mcX](https://www.kernel.org/doc/html/v5.0/admin-guide/ras.html#mcx-directories) directory name of this memory controller. | +| dimm | [dimmX or rankX](https://www.kernel.org/doc/html/v5.0/admin-guide/ras.html#dimmx-or-rankx-directories) directory name of this memory module. | +| dimm_dev_type | Type of DRAM device used in this memory module. For example, x1, x2, x4, x8. | +| dimm_edac_mode | Used type of error detection and correction. For example, S4ECD4ED would mean a Chipkill with x4 DRAM. 
| +| dimm_label | Label assigned to this memory module. | +| dimm_location | Location of the memory module. | +| dimm_mem_type | Type of the memory module. | +| size | The amount of memory in megabytes that this memory module manages. | + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| mem.edac_mc_errors | correctable, uncorrectable | errors | + + + +## Alerts + + +The following alerts are available: + +| Alert name | On metric | Description | +|:------------|:----------|:------------| +| [ ecc_memory_mc_noinfo_correctable ](https://github.com/netdata/netdata/blob/master/src/health/health.d/memory.conf) | mem.edac_mc_errors | memory controller ${label:controller} ECC correctable errors (unknown DIMM slot) | +| [ ecc_memory_mc_noinfo_uncorrectable ](https://github.com/netdata/netdata/blob/master/src/health/health.d/memory.conf) | mem.edac_mc_errors | memory controller ${label:controller} ECC uncorrectable errors (unknown DIMM slot) | +| [ ecc_memory_dimm_correctable ](https://github.com/netdata/netdata/blob/master/src/health/health.d/memory.conf) | mem.edac_mc_dimm_errors | DIMM ${label:dimm} controller ${label:controller} (location ${label:dimm_location}) ECC correctable errors | +| [ ecc_memory_dimm_uncorrectable ](https://github.com/netdata/netdata/blob/master/src/health/health.d/memory.conf) | mem.edac_mc_dimm_errors | DIMM ${label:dimm} controller ${label:controller} (location ${label:dimm_location}) ECC uncorrectable errors | + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. 
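The error totals above are read from the per-controller counter files in the EDAC sysfs tree (`ce_count`, `ue_count`, and the `*_noinfo` variants for errors that cannot be attributed to a DIMM, per the kernel RAS documentation). A minimal sketch of such a reader — the directory layout is the documented sysfs ABI, but treat the code as an illustration, not the collector's implementation:

```python
import glob
import os

def edac_mc_errors(base="/sys/devices/system/edac/mc"):
    """Collect per-controller error counters from the EDAC sysfs tree.

    Each mcX directory exposes ce_count/ue_count, plus ce_noinfo_count and
    ue_noinfo_count for errors that cannot be attributed to a DIMM slot.
    """
    results = {}
    for mc_dir in sorted(glob.glob(os.path.join(base, "mc[0-9]*"))):
        counts = {}
        for name in ("ce_count", "ue_count", "ce_noinfo_count", "ue_noinfo_count"):
            path = os.path.join(mc_dir, name)
            if os.path.isfile(path):
                with open(path) as f:
                    counts[name] = int(f.read().strip())
        results[os.path.basename(mc_dir)] = counts
    return results
```

Because the counters are cumulative since boot, a monitoring agent alerts on their increase rather than their absolute value.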
+ + diff --git a/src/collectors/proc.plugin/integrations/memory_statistics.md b/src/collectors/proc.plugin/integrations/memory_statistics.md new file mode 100644 index 000000000..a92df57a7 --- /dev/null +++ b/src/collectors/proc.plugin/integrations/memory_statistics.md @@ -0,0 +1,138 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/memory_statistics.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "Memory Statistics" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/Memory" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# Memory Statistics + + +<img src="https://netdata.cloud/img/linuxserver.svg" width="150"/> + + +Plugin: proc.plugin +Module: /proc/vmstat + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +Linux Virtual memory subsystem. + +Information about memory management, indicating how effectively the kernel allocates and frees +memory resources in response to system demands. + +Monitors page faults, which occur when a process requests a portion of its memory that isn't +immediately available. Monitoring these events can help diagnose inefficiencies in memory management and +provide insights into application behavior. + +Tracks swapping activity — a vital aspect of memory management where the kernel moves data from RAM to +swap space, and vice versa, based on memory demand and usage. It also monitors the utilization of zswap, +a compressed cache for swap pages, and provides insights into its usage and performance implications. + +In the context of virtualized environments, it tracks the ballooning mechanism which is used to balance +memory resources between host and guest systems. 
+ +For systems using NUMA architecture, it provides insights into the local and remote memory accesses, which +can impact the performance based on the memory access times. + +The collector also watches for 'Out of Memory' kills, a drastic measure taken by the system when it runs out +of memory resources. + + + + +This collector is only supported on the following platforms: + +- linux + +This collector only supports collecting metrics from a single instance of this integration. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per Memory Statistics instance + + + +This scope has no labels. 
+ +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| mem.swapio | in, out | KiB/s | +| system.pgpgio | in, out | KiB/s | +| system.pgfaults | minor, major | faults/s | +| mem.balloon | inflate, deflate, migrate | KiB/s | +| mem.zswapio | in, out | KiB/s | +| mem.ksm_cow | swapin, write | KiB/s | +| mem.thp_faults | alloc, fallback, fallback_charge | events/s | +| mem.thp_file | alloc, fallback, mapped, fallback_charge | events/s | +| mem.thp_zero | alloc, failed | events/s | +| mem.thp_collapse | alloc, failed | events/s | +| mem.thp_split | split, failed, split_pmd, split_deferred | events/s | +| mem.thp_swapout | swapout, fallback | events/s | +| mem.thp_compact | success, fail, stall | events/s | +| mem.oom_kill | kills | kills/s | +| mem.numa | local, foreign, interleave, other, pte_updates, huge_pte_updates, hint_faults, hint_faults_local, pages_migrated | events/s | + + + +## Alerts + + +The following alerts are available: + +| Alert name | On metric | Description | +|:------------|:----------|:------------| +| [ 30min_ram_swapped_out ](https://github.com/netdata/netdata/blob/master/src/health/health.d/swap.conf) | mem.swapio | percentage of the system RAM swapped in the last 30 minutes | +| [ oom_kill ](https://github.com/netdata/netdata/blob/master/src/health/health.d/ram.conf) | mem.oom_kill | number of out of memory kills in the last 30 minutes | + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. 
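`/proc/vmstat` is a plain list of `name value` pairs, and its counters are cumulative, so rate charts such as `mem.swapio` come from deltas between two reads. A sketch of that derivation, assuming the usual 4 KiB page size (pswpin/pswpout count pages, not bytes):

```python
def parse_vmstat(text):
    """Parse /proc/vmstat content ('name value' per line) into a dict of ints."""
    stats = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[1].isdigit():
            stats[parts[0]] = int(parts[1])
    return stats

def swap_io_kib_per_s(prev, curr, interval_s, page_kib=4):
    """Turn the cumulative pswpin/pswpout page counters into a KiB/s rate,
    the way a chart like mem.swapio presents them."""
    return {
        "in": (curr["pswpin"] - prev["pswpin"]) * page_kib / interval_s,
        "out": (curr["pswpout"] - prev["pswpout"]) * page_kib / interval_s,
    }
```

The same delta-over-interval pattern applies to the page fault, THP, and NUMA counters listed above.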
+ + diff --git a/src/collectors/proc.plugin/integrations/memory_usage.md b/src/collectors/proc.plugin/integrations/memory_usage.md new file mode 100644 index 000000000..6c5168967 --- /dev/null +++ b/src/collectors/proc.plugin/integrations/memory_usage.md @@ -0,0 +1,135 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/memory_usage.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "Memory Usage" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/Memory" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# Memory Usage + + +<img src="https://netdata.cloud/img/linuxserver.svg" width="150"/> + + +Plugin: proc.plugin +Module: /proc/meminfo + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +`/proc/meminfo` provides detailed information about the system's current memory usage. It includes information +about different types of memory, RAM, Swap, ZSwap, HugePages, Transparent HugePages (THP), Kernel memory, +SLAB memory, memory mappings, and more. + +Monitoring /proc/meminfo can be useful for: + +- **Performance Tuning**: Understanding your system's memory usage can help you make decisions about system + tuning and optimization. For example, if your system is frequently low on free memory, it might benefit + from more RAM. + +- **Troubleshooting**: If your system is experiencing problems, `/proc/meminfo` can provide clues about + whether memory usage is a factor. For example, if your system is slow and cached swap is high, it could + mean that your system is swapping out a lot of memory to disk, which can degrade performance. + +- **Capacity Planning**: By monitoring memory usage over time, you can understand trends and make informed + decisions about future capacity needs. 
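As a concrete illustration of reading this file, the sketch below parses the `Field:   value kB` lines and approximates a RAM breakdown like the `system.ram` chart. The `used = MemTotal - MemFree - Cached - Buffers` formula is a simplification (the actual collector also folds in fields such as `SReclaimable`), so treat it as an estimate:

```python
import re

def parse_meminfo(text):
    """Parse /proc/meminfo content into a dict of values in KiB."""
    stats = {}
    for line in text.splitlines():
        m = re.match(r"^(\S+):\s+(\d+)", line)
        if m:
            stats[m.group(1)] = int(m.group(2))
    return stats

def ram_breakdown_mib(mem):
    """Rough equivalent of the system.ram chart dimensions, in MiB.

    Simplified: the real collector accounts for additional fields.
    """
    free, cached, buffers = mem["MemFree"], mem["Cached"], mem["Buffers"]
    used = mem["MemTotal"] - free - cached - buffers
    kib = {"free": free, "used": used, "cached": cached, "buffers": buffers}
    return {name: value / 1024 for name, value in kib.items()}
```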
+ + + + +This collector is supported on all platforms. + +This collector only supports collecting metrics from a single instance of this integration. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per Memory Usage instance + + + +This scope has no labels. + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| system.ram | free, used, cached, buffers | MiB | +| mem.available | avail | MiB | +| mem.swap | free, used | MiB | +| mem.swap_cached | cached | MiB | +| mem.zswap | in-ram, on-disk | MiB | +| mem.hwcorrupt | HardwareCorrupted | MiB | +| mem.commited | Commited_AS | MiB | +| mem.writeback | Dirty, Writeback, FuseWriteback, NfsWriteback, Bounce | MiB | +| mem.kernel | Slab, KernelStack, PageTables, VmallocUsed, Percpu | MiB | +| mem.slab | reclaimable, unreclaimable | MiB | +| mem.hugepages | free, used, surplus, reserved | MiB | +| mem.thp | anonymous, shmem | MiB | +| mem.thp_details | ShmemPmdMapped, FileHugePages, FilePmdMapped | MiB | +| mem.reclaiming | Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked | MiB | +| mem.high_low | high_used, low_used, high_free, low_free | MiB | +| mem.cma | used, free | MiB | +| mem.directmaps | 4k, 2m, 4m, 1g | MiB | + + + +## Alerts + + +The following alerts are available: + +| Alert name | On metric | Description | +|:------------|:----------|:------------| +| [ ram_in_use ](https://github.com/netdata/netdata/blob/master/src/health/health.d/ram.conf) | system.ram | 
system memory utilization | +| [ ram_available ](https://github.com/netdata/netdata/blob/master/src/health/health.d/ram.conf) | mem.available | percentage of estimated amount of RAM available for userspace processes, without causing swapping | +| [ used_swap ](https://github.com/netdata/netdata/blob/master/src/health/health.d/swap.conf) | mem.swap | swap memory utilization | +| [ 1hour_memory_hw_corrupted ](https://github.com/netdata/netdata/blob/master/src/health/health.d/memory.conf) | mem.hwcorrupt | amount of memory corrupted due to a hardware failure | + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. + + diff --git a/src/collectors/proc.plugin/integrations/network_interfaces.md b/src/collectors/proc.plugin/integrations/network_interfaces.md new file mode 100644 index 000000000..dcf746596 --- /dev/null +++ b/src/collectors/proc.plugin/integrations/network_interfaces.md @@ -0,0 +1,137 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/network_interfaces.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "Network interfaces" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/Network" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# Network interfaces + + +<img src="https://netdata.cloud/img/network-wired.svg" width="150"/> + + +Plugin: proc.plugin +Module: /proc/net/dev + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +Monitor network interface metrics about bandwidth, state, errors and more. + + + +This collector is supported on all platforms. 
+ +This collector supports collecting metrics from multiple instances of this integration, including remote instances. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per Network interfaces instance + + + +This scope has no labels. + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| system.net | received, sent | kilobits/s | + +### Per network device + + + +Labels: + +| Label | Description | +|:-----------|:----------------| +| interface_type | TBD | +| device | TBD | + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| net.net | received, sent | kilobits/s | +| net.speed | speed | kilobits/s | +| net.duplex | full, half, unknown | state | +| net.operstate | up, down, notpresent, lowerlayerdown, testing, dormant, unknown | state | +| net.carrier | up, down | state | +| net.mtu | mtu | octets | +| net.packets | received, sent, multicast | packets/s | +| net.errors | inbound, outbound | errors/s | +| net.drops | inbound, outbound | drops/s | +| net.fifo | receive, transmit | errors | +| net.compressed | received, sent | packets/s | +| net.events | frames, collisions, carrier | events/s | + + + +## Alerts + + +The following alerts are available: + +| Alert name | On metric | Description | +|:------------|:----------|:------------| +| [ interface_speed ](https://github.com/netdata/netdata/blob/master/src/health/health.d/net.conf) | net.net | network interface ${label:device} current speed | +| [ 1m_received_traffic_overflow 
](https://github.com/netdata/netdata/blob/master/src/health/health.d/net.conf) | net.net | average inbound utilization for the network interface ${label:device} over the last minute | +| [ 1m_sent_traffic_overflow ](https://github.com/netdata/netdata/blob/master/src/health/health.d/net.conf) | net.net | average outbound utilization for the network interface ${label:device} over the last minute | +| [ inbound_packets_dropped_ratio ](https://github.com/netdata/netdata/blob/master/src/health/health.d/net.conf) | net.drops | ratio of inbound dropped packets for the network interface ${label:device} over the last 10 minutes | +| [ outbound_packets_dropped_ratio ](https://github.com/netdata/netdata/blob/master/src/health/health.d/net.conf) | net.drops | ratio of outbound dropped packets for the network interface ${label:device} over the last 10 minutes | +| [ wifi_inbound_packets_dropped_ratio ](https://github.com/netdata/netdata/blob/master/src/health/health.d/net.conf) | net.drops | ratio of inbound dropped packets for the network interface ${label:device} over the last 10 minutes | +| [ wifi_outbound_packets_dropped_ratio ](https://github.com/netdata/netdata/blob/master/src/health/health.d/net.conf) | net.drops | ratio of outbound dropped packets for the network interface ${label:device} over the last 10 minutes | +| [ 1m_received_packets_rate ](https://github.com/netdata/netdata/blob/master/src/health/health.d/net.conf) | net.packets | average number of packets received by the network interface ${label:device} over the last minute | +| [ 10s_received_packets_storm ](https://github.com/netdata/netdata/blob/master/src/health/health.d/net.conf) | net.packets | ratio of average number of received packets for the network interface ${label:device} over the last 10 seconds, compared to the rate over the last minute | +| [ 10min_fifo_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/net.conf) | net.fifo | number of FIFO errors for the network 
interface ${label:device} in the last 10 minutes | + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. + + diff --git a/src/collectors/proc.plugin/integrations/network_statistics.md b/src/collectors/proc.plugin/integrations/network_statistics.md new file mode 100644 index 000000000..460a8a60e --- /dev/null +++ b/src/collectors/proc.plugin/integrations/network_statistics.md @@ -0,0 +1,161 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/network_statistics.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "Network statistics" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/Network" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# Network statistics + + +<img src="https://netdata.cloud/img/network-wired.svg" width="150"/> + + +Plugin: proc.plugin +Module: /proc/net/netstat + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +This integration provides metrics from the `netstat`, `snmp` and `snmp6` modules. + + + +This collector is supported on all platforms. + +This collector supports collecting metrics from multiple instances of this integration, including remote instances. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. 
+
+The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
+
+
+
+### Per Network statistics instance
+
+
+
+This scope has no labels.
+
+Metrics:
+
+| Metric | Dimensions | Unit |
+|:------|:----------|:----|
+| system.ip | received, sent | kilobits/s |
+| ip.tcpmemorypressures | pressures | events/s |
+| ip.tcpconnaborts | baddata, userclosed, nomemory, timeout, linger, failed | connections/s |
+| ip.tcpreorders | timestamp, sack, fack, reno | packets/s |
+| ip.tcpofo | inqueue, dropped, merged, pruned | packets/s |
+| ip.tcpsyncookies | received, sent, failed | packets/s |
+| ip.tcp_syn_queue | drops, cookies | packets/s |
+| ip.tcp_accept_queue | overflows, drops | packets/s |
+| ip.tcpsock | connections | active connections |
+| ip.tcppackets | received, sent | packets/s |
+| ip.tcperrors | InErrs, InCsumErrors, RetransSegs | packets/s |
+| ip.tcpopens | active, passive | connections/s |
+| ip.tcphandshake | EstabResets, OutRsts, AttemptFails, SynRetrans | events/s |
+| ipv4.packets | received, sent, forwarded, delivered | packets/s |
+| ipv4.errors | InDiscards, OutDiscards, InNoRoutes, OutNoRoutes, InHdrErrors, InAddrErrors, InTruncatedPkts, InCsumErrors | packets/s |
+| ipv4.bcast | received, sent | kilobits/s |
+| ipv4.bcastpkts | received, sent | packets/s |
+| ipv4.mcast | received, sent | kilobits/s |
+| ipv4.mcastpkts | received, sent | packets/s |
+| ipv4.icmp | received, sent | packets/s |
+| ipv4.icmpmsg | InEchoReps, OutEchoReps, InDestUnreachs, OutDestUnreachs, InRedirects, OutRedirects, InEchos, OutEchos, InRouterAdvert, OutRouterAdvert, InRouterSelect, OutRouterSelect, InTimeExcds, OutTimeExcds, InParmProbs, OutParmProbs, InTimestamps, OutTimestamps, InTimestampReps, OutTimestampReps | packets/s |
+| ipv4.icmp_errors | InErrors, OutErrors, InCsumErrors | packets/s |
+| ipv4.udppackets | received, sent | packets/s |
+| ipv4.udperrors | RcvbufErrors, SndbufErrors, InErrors, NoPorts, 
InCsumErrors, IgnoredMulti | events/s | +| ipv4.udplite | received, sent | packets/s | +| ipv4.udplite_errors | RcvbufErrors, SndbufErrors, InErrors, NoPorts, InCsumErrors, IgnoredMulti | packets/s | +| ipv4.ecnpkts | CEP, NoECTP, ECTP0, ECTP1 | packets/s | +| ipv4.fragsin | ok, failed, all | packets/s | +| ipv4.fragsout | ok, failed, created | packets/s | +| system.ipv6 | received, sent | kilobits/s | +| ipv6.packets | received, sent, forwarded, delivers | packets/s | +| ipv6.errors | InDiscards, OutDiscards, InHdrErrors, InAddrErrors, InUnknownProtos, InTooBigErrors, InTruncatedPkts, InNoRoutes, OutNoRoutes | packets/s | +| ipv6.bcast | received, sent | kilobits/s | +| ipv6.mcast | received, sent | kilobits/s | +| ipv6.mcastpkts | received, sent | packets/s | +| ipv6.udppackets | received, sent | packets/s | +| ipv6.udperrors | RcvbufErrors, SndbufErrors, InErrors, NoPorts, InCsumErrors, IgnoredMulti | events/s | +| ipv6.udplitepackets | received, sent | packets/s | +| ipv6.udpliteerrors | RcvbufErrors, SndbufErrors, InErrors, NoPorts, InCsumErrors | events/s | +| ipv6.icmp | received, sent | messages/s | +| ipv6.icmpredir | received, sent | redirects/s | +| ipv6.icmperrors | InErrors, OutErrors, InCsumErrors, InDestUnreachs, InPktTooBigs, InTimeExcds, InParmProblems, OutDestUnreachs, OutPktTooBigs, OutTimeExcds, OutParmProblems | errors/s | +| ipv6.icmpechos | InEchos, OutEchos, InEchoReplies, OutEchoReplies | messages/s | +| ipv6.groupmemb | InQueries, OutQueries, InResponses, OutResponses, InReductions, OutReductions | messages/s | +| ipv6.icmprouter | InSolicits, OutSolicits, InAdvertisements, OutAdvertisements | messages/s | +| ipv6.icmpneighbor | InSolicits, OutSolicits, InAdvertisements, OutAdvertisements | messages/s | +| ipv6.icmpmldv2 | received, sent | reports/s | +| ipv6.icmptypes | InType1, InType128, InType129, InType136, OutType1, OutType128, OutType129, OutType133, OutType135, OutType143 | messages/s | +| ipv6.ect | InNoECTPkts, InECT1Pkts, 
InECT0Pkts, InCEPkts | packets/s |
+| ipv6.fragsin | ok, failed, timeout, all | packets/s |
+| ipv6.fragsout | ok, failed, all | packets/s |
+
+
+
+## Alerts
+
+
+The following alerts are available:
+
+| Alert name | On metric | Description |
+|:------------|:----------|:------------|
+| [ 1m_tcp_syn_queue_drops ](https://github.com/netdata/netdata/blob/master/src/health/health.d/tcp_listen.conf) | ip.tcp_syn_queue | average number of SYN requests dropped due to the full TCP SYN queue over the last minute (SYN cookies were not enabled) |
+| [ 1m_tcp_syn_queue_cookies ](https://github.com/netdata/netdata/blob/master/src/health/health.d/tcp_listen.conf) | ip.tcp_syn_queue | average number of sent SYN cookies due to the full TCP SYN queue over the last minute |
+| [ 1m_tcp_accept_queue_overflows ](https://github.com/netdata/netdata/blob/master/src/health/health.d/tcp_listen.conf) | ip.tcp_accept_queue | average number of overflows in the TCP accept queue over the last minute |
+| [ 1m_tcp_accept_queue_drops ](https://github.com/netdata/netdata/blob/master/src/health/health.d/tcp_listen.conf) | ip.tcp_accept_queue | average number of dropped packets in the TCP accept queue over the last minute |
+| [ tcp_connections ](https://github.com/netdata/netdata/blob/master/src/health/health.d/tcp_conn.conf) | ip.tcpsock | TCP connections utilization |
+| [ 1m_ip_tcp_resets_sent ](https://github.com/netdata/netdata/blob/master/src/health/health.d/tcp_resets.conf) | ip.tcphandshake | average number of sent TCP RESETS over the last minute |
+| [ 10s_ip_tcp_resets_sent ](https://github.com/netdata/netdata/blob/master/src/health/health.d/tcp_resets.conf) | ip.tcphandshake | average number of sent TCP RESETS over the last 10 seconds. This can indicate a port scan, or that a service running on this host has crashed. Netdata will not send a clear notification for this alarm. 
| +| [ 1m_ip_tcp_resets_received ](https://github.com/netdata/netdata/blob/master/src/health/health.d/tcp_resets.conf) | ip.tcphandshake | average number of received TCP RESETS over the last minute | +| [ 10s_ip_tcp_resets_received ](https://github.com/netdata/netdata/blob/master/src/health/health.d/tcp_resets.conf) | ip.tcphandshake | average number of received TCP RESETS over the last 10 seconds. This can be an indication that a service this host needs has crashed. Netdata will not send a clear notification for this alarm. | +| [ 1m_ipv4_udp_receive_buffer_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/udp_errors.conf) | ipv4.udperrors | average number of UDP receive buffer errors over the last minute | +| [ 1m_ipv4_udp_send_buffer_errors ](https://github.com/netdata/netdata/blob/master/src/health/health.d/udp_errors.conf) | ipv4.udperrors | average number of UDP send buffer errors over the last minute | + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. 
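Both `/proc/net/snmp` and `/proc/net/netstat` use the same layout: each protocol contributes two adjacent lines, a `Proto: name name ...` header followed by a `Proto: value value ...` line with matching columns. A minimal parsing sketch (assuming, as in the real files, that the header and value lines always appear as adjacent pairs):

```python
def parse_net_snmp(text):
    """Parse /proc/net/snmp or /proc/net/netstat content.

    Each protocol contributes two adjacent lines: a 'Proto: name name ...'
    header and a 'Proto: value value ...' line with matching columns.
    """
    stats = {}
    lines = [line for line in text.splitlines() if line.strip()]
    for header, values in zip(lines[::2], lines[1::2]):
        proto, names = header.split(":", 1)
        _, numbers = values.split(":", 1)
        stats[proto] = dict(zip(names.split(), (int(n) for n in numbers.split())))
    return stats
```

For example, the `ip.tcpsyncookies` dimensions above map to the `SyncookiesSent`, `SyncookiesRecv`, and `SyncookiesFailed` columns of the `TcpExt:` pair in `/proc/net/netstat`.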
+ + diff --git a/src/collectors/proc.plugin/integrations/nfs_client.md b/src/collectors/proc.plugin/integrations/nfs_client.md new file mode 100644 index 000000000..3d3370b80 --- /dev/null +++ b/src/collectors/proc.plugin/integrations/nfs_client.md @@ -0,0 +1,99 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/nfs_client.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "NFS Client" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/Filesystem/NFS" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# NFS Client + + +<img src="https://netdata.cloud/img/nfs.png" width="150"/> + + +Plugin: proc.plugin +Module: /proc/net/rpc/nfs + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +This integration provides statistics from the Linux kernel's NFS Client. + + + +This collector is supported on all platforms. + +This collector supports collecting metrics from multiple instances of this integration, including remote instances. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per NFS Client instance + + + +This scope has no labels. 
+ +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| nfs.net | udp, tcp | operations/s | +| nfs.rpc | calls, retransmits, auth_refresh | calls/s | +| nfs.proc2 | a dimension per proc2 call | calls/s | +| nfs.proc3 | a dimension per proc3 call | calls/s | +| nfs.proc4 | a dimension per proc4 call | calls/s | + + + +## Alerts + +There are no alerts configured by default for this integration. + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. + + diff --git a/src/collectors/proc.plugin/integrations/nfs_server.md b/src/collectors/proc.plugin/integrations/nfs_server.md new file mode 100644 index 000000000..693e681c5 --- /dev/null +++ b/src/collectors/proc.plugin/integrations/nfs_server.md @@ -0,0 +1,104 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/nfs_server.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "NFS Server" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/Filesystem/NFS" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# NFS Server + + +<img src="https://netdata.cloud/img/nfs.png" width="150"/> + + +Plugin: proc.plugin +Module: /proc/net/rpc/nfsd + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +This integration provides statistics from the Linux kernel's NFS Server. + + + +This collector is supported on all platforms. + +This collector supports collecting metrics from multiple instances of this integration, including remote instances. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. 
+ +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per NFS Server instance + + + +This scope has no labels. + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| nfsd.readcache | hits, misses, nocache | reads/s | +| nfsd.filehandles | stale | handles/s | +| nfsd.io | read, write | kilobytes/s | +| nfsd.threads | threads | threads | +| nfsd.net | udp, tcp | packets/s | +| nfsd.rpc | calls, bad_format, bad_auth | calls/s | +| nfsd.proc2 | a dimension per proc2 call | calls/s | +| nfsd.proc3 | a dimension per proc3 call | calls/s | +| nfsd.proc4 | a dimension per proc4 call | calls/s | +| nfsd.proc4ops | a dimension per proc4 operation | operations/s | + + + +## Alerts + +There are no alerts configured by default for this integration. + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. 
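The NFS server metrics above are read from `/proc/net/rpc/nfsd`, where each line starts with a tag (`rc`, `io`, `th`, `net`, `rpc`, ...) followed by integer counters. A minimal parsing sketch; the excerpt and its values are hypothetical, and the field meanings noted in the comments are the commonly documented ones:

```python
def parse_nfsd(text):
    """Split each line of /proc/net/rpc/nfsd into a tag and integer fields."""
    stats = {}
    for line in text.splitlines():
        tag, *fields = line.split()
        stats[tag] = [int(f) for f in fields]
    return stats

# Hypothetical excerpt: rc = reply cache (commonly hits, misses, nocache),
# io = bytes read/written, th = thread usage.
sample = "rc 100 20 5\nio 4096 1024\nth 8 0\n"
stats = parse_nfsd(sample)
hits, misses, nocache = stats["rc"]
```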
+ + diff --git a/src/collectors/proc.plugin/integrations/non-uniform_memory_access.md b/src/collectors/proc.plugin/integrations/non-uniform_memory_access.md new file mode 100644 index 000000000..3b55c65f1 --- /dev/null +++ b/src/collectors/proc.plugin/integrations/non-uniform_memory_access.md @@ -0,0 +1,111 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/non-uniform_memory_access.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "Non-Uniform Memory Access" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/Memory" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# Non-Uniform Memory Access + + +<img src="https://netdata.cloud/img/linuxserver.svg" width="150"/> + + +Plugin: proc.plugin +Module: /sys/devices/system/node + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +Information about NUMA (Non-Uniform Memory Access) nodes on the system. + +NUMA is a method of configuring a cluster of microprocessors in a multiprocessing system so that they can +share memory locally, improving performance and the ability of the system to be expanded. NUMA is used in a +symmetric multiprocessing (SMP) system. + +In a NUMA system, processors, memory, and I/O devices are grouped together into cells, also known as nodes. +Each node has its own memory and set of I/O devices, and one or more processors. While a processor can access +memory in any of the nodes, it does so faster when accessing memory within its own node. + +The collector provides statistics on memory allocations for processes running on the NUMA nodes, revealing the +efficiency of memory allocations in multi-node systems. + + + + +This collector is supported on all platforms. 
+ +This collector supports collecting metrics from multiple instances of this integration, including remote instances. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per numa node + + + +Labels: + +| Label | Description | +|:-----------|:----------------| +| numa_node | TBD | + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| mem.numa_nodes | hit, miss, local, foreign, interleave, other | events/s | + + + +## Alerts + +There are no alerts configured by default for this integration. + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. 
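The per-node counters charted above are exposed by the kernel in `/sys/devices/system/node/node<N>/numastat` as simple `name value` pairs. A minimal parsing sketch with hypothetical values:

```python
def parse_numastat(text):
    """Parse one /sys/devices/system/node/node<N>/numastat file."""
    return {name: int(value)
            for name, value in (line.split() for line in text.splitlines() if line.strip())}

# Hypothetical counters for a single node.
sample = (
    "numa_hit 1000\nnuma_miss 5\nnuma_foreign 5\n"
    "interleave_hit 10\nlocal_node 990\nother_node 15\n"
)
node0 = parse_numastat(sample)
```

A growing `numa_miss`/`numa_foreign` share relative to `numa_hit` is the kind of cross-node allocation inefficiency these charts surface.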
+ + diff --git a/src/collectors/proc.plugin/integrations/page_types.md b/src/collectors/proc.plugin/integrations/page_types.md new file mode 100644 index 000000000..7dcb0f82d --- /dev/null +++ b/src/collectors/proc.plugin/integrations/page_types.md @@ -0,0 +1,113 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/page_types.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "Page types" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/Memory" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# Page types + + +<img src="https://netdata.cloud/img/microchip.svg" width="150"/> + + +Plugin: proc.plugin +Module: /proc/pagetypeinfo + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +This integration provides metrics about the system's memory page types + + + +This collector is supported on all platforms. + +This collector only supports collecting metrics from a single instance of this integration. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per Page types instance + + + +This scope has no labels. 
+ +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| mem.pagetype_global | a dimension per pagesize | B | + +### Per node, zone, type + + + +Labels: + +| Label | Description | +|:-----------|:----------------| +| node_id | TBD | +| node_zone | TBD | +| node_type | TBD | + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| mem.pagetype | a dimension per pagesize | B | + + + +## Alerts + +There are no alerts configured by default for this integration. + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. + + diff --git a/src/collectors/proc.plugin/integrations/power_supply.md b/src/collectors/proc.plugin/integrations/power_supply.md new file mode 100644 index 000000000..53191dfff --- /dev/null +++ b/src/collectors/proc.plugin/integrations/power_supply.md @@ -0,0 +1,107 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/power_supply.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "Power Supply" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/Power Supply" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# Power Supply + + +<img src="https://netdata.cloud/img/powersupply.svg" width="150"/> + + +Plugin: proc.plugin +Module: /sys/class/power_supply + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +This integration monitors Power supply metrics, such as battery status, AC power status and more. + + + +This collector is supported on all platforms. 
+ +This collector supports collecting metrics from multiple instances of this integration, including remote instances. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per power device + + + +Labels: + +| Label | Description | +|:-----------|:----------------| +| device | TBD | + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| powersupply.capacity | capacity | percentage | +| powersupply.charge | empty_design, empty, now, full, full_design | Ah | +| powersupply.energy | empty_design, empty, now, full, full_design | Wh | +| powersupply.voltage | min_design, min, now, max, max_design | V | + + + +## Alerts + + +The following alerts are available: + +| Alert name | On metric | Description | +|:------------|:----------|:------------| +| [ linux_power_supply_capacity ](https://github.com/netdata/netdata/blob/master/src/health/health.d/linux_power_supply.conf) | powersupply.capacity | percentage of remaining power supply capacity | + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. 
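The charted values map to sysfs attributes such as `capacity`, `voltage_now`, `charge_now`, and `energy_now` under each `/sys/class/power_supply/<device>` directory; voltage, charge, and energy are reported there in micro-units (uV, uAh, uWh). A minimal sketch of reading one device (attribute availability varies by hardware, so missing files are simply skipped):

```python
from pathlib import Path

def read_supply(base):
    """Read a few attributes of one power supply device from sysfs.

    Scales the micro-unit attributes to the V/Ah units used by the charts.
    """
    scale = {"capacity": 1, "voltage_now": 1e-6, "charge_now": 1e-6, "energy_now": 1e-6}
    values = {}
    for name, factor in scale.items():
        attr = Path(base) / name
        if attr.exists():
            values[name] = int(attr.read_text()) * factor
    return values

# e.g. read_supply("/sys/class/power_supply/BAT0") on a laptop with a battery
```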
+ + diff --git a/src/collectors/proc.plugin/integrations/pressure_stall_information.md b/src/collectors/proc.plugin/integrations/pressure_stall_information.md new file mode 100644 index 000000000..3af4da4a0 --- /dev/null +++ b/src/collectors/proc.plugin/integrations/pressure_stall_information.md @@ -0,0 +1,129 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/pressure_stall_information.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "Pressure Stall Information" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/Pressure" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# Pressure Stall Information + + +<img src="https://netdata.cloud/img/linuxserver.svg" width="150"/> + + +Plugin: proc.plugin +Module: /proc/pressure + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +Introduced in Linux kernel 4.20, `/proc/pressure` provides pressure stall information +(PSI). PSI is a feature that allows the system to track the amount of time the system is stalled due to +resource contention, such as CPU, memory, or I/O. + +The collector monitors four separate files: + +- **cpu**: Tracks the amount of time tasks are stalled due to CPU contention. +- **memory**: Tracks the amount of time tasks are stalled due to memory contention. +- **io**: Tracks the amount of time tasks are stalled due to I/O contention. +- **irq**: Tracks the amount of time tasks are stalled due to IRQ contention (available on newer kernels). + +Each of them provides stall-time averages over the last 10 seconds, 1 minute, and 5 minutes, plus the total stall time. 
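The layout of each pressure file can be sketched as follows; the sample contents below are hypothetical:

```python
def parse_pressure(text):
    """Parse one /proc/pressure file (cpu, memory, io, or irq).

    Each line is: <some|full> avg10=<pct> avg60=<pct> avg300=<pct> total=<usec>
    """
    result = {}
    for line in text.splitlines():
        kind, *fields = line.split()
        result[kind] = {key: float(val)
                        for key, val in (field.split("=") for field in fields)}
    return result

# Hypothetical contents of /proc/pressure/memory.
sample = ("some avg10=0.12 avg60=0.08 avg300=0.05 total=123456\n"
          "full avg10=0.00 avg60=0.01 avg300=0.00 total=4567\n")
psi = parse_pressure(sample)
```

The `some` line covers time when at least one task was stalled; the `full` line covers time when all non-idle tasks were stalled simultaneously.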
+ +Monitoring the /proc/pressure files can provide important insights into system performance and capacity planning: + +- **Identifying resource contention**: If these metrics are consistently high, it indicates that tasks are + frequently being stalled due to lack of resources, which can significantly degrade system performance. + +- **Troubleshooting performance issues**: If a system is experiencing performance issues, these metrics can + help identify whether resource contention is the cause. + +- **Capacity planning**: By monitoring these metrics over time, you can understand trends in resource + utilization and make informed decisions about when to add more resources to your system. + + + + +This collector is supported on all platforms. + +This collector only supports collecting metrics from a single instance of this integration. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per Pressure Stall Information instance + + + +This scope has no labels. 
+ +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| system.cpu_some_pressure | some10, some60, some300 | percentage | +| system.cpu_some_pressure_stall_time | time | ms | +| system.cpu_full_pressure | some10, some60, some300 | percentage | +| system.cpu_full_pressure_stall_time | time | ms | +| system.memory_some_pressure | some10, some60, some300 | percentage | +| system.memory_some_pressure_stall_time | time | ms | +| system.memory_full_pressure | some10, some60, some300 | percentage | +| system.memory_full_pressure_stall_time | time | ms | +| system.io_some_pressure | some10, some60, some300 | percentage | +| system.io_some_pressure_stall_time | time | ms | +| system.io_full_pressure | some10, some60, some300 | percentage | +| system.io_full_pressure_stall_time | time | ms | + + + +## Alerts + +There are no alerts configured by default for this integration. + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. 
+ + diff --git a/src/collectors/proc.plugin/integrations/sctp_statistics.md b/src/collectors/proc.plugin/integrations/sctp_statistics.md new file mode 100644 index 000000000..3c1cb7559 --- /dev/null +++ b/src/collectors/proc.plugin/integrations/sctp_statistics.md @@ -0,0 +1,99 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/sctp_statistics.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "SCTP Statistics" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/Network" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# SCTP Statistics + + +<img src="https://netdata.cloud/img/network-wired.svg" width="150"/> + + +Plugin: proc.plugin +Module: /proc/net/sctp/snmp + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +This integration provides statistics about the Stream Control Transmission Protocol (SCTP). + + + +This collector is supported on all platforms. + +This collector supports collecting metrics from multiple instances of this integration, including remote instances. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per SCTP Statistics instance + + + +This scope has no labels. 
+ +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| sctp.established | established | associations | +| sctp.transitions | active, passive, aborted, shutdown | transitions/s | +| sctp.packets | received, sent | packets/s | +| sctp.packet_errors | invalid, checksum | packets/s | +| sctp.fragmentation | reassembled, fragmented | packets/s | + + + +## Alerts + +There are no alerts configured by default for this integration. + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. + + diff --git a/src/collectors/proc.plugin/integrations/socket_statistics.md b/src/collectors/proc.plugin/integrations/socket_statistics.md new file mode 100644 index 000000000..73fefc7c0 --- /dev/null +++ b/src/collectors/proc.plugin/integrations/socket_statistics.md @@ -0,0 +1,109 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/socket_statistics.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "Socket statistics" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/Network" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# Socket statistics + + +<img src="https://netdata.cloud/img/network-wired.svg" width="150"/> + + +Plugin: proc.plugin +Module: /proc/net/sockstat + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +This integration provides socket statistics. + + + +This collector is supported on all platforms. + +This collector supports collecting metrics from multiple instances of this integration, including remote instances. 
+ + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per Socket statistics instance + + + +This scope has no labels. + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| ip.sockstat_sockets | used | sockets | +| ipv4.sockstat_tcp_sockets | alloc, orphan, inuse, timewait | sockets | +| ipv4.sockstat_tcp_mem | mem | KiB | +| ipv4.sockstat_udp_sockets | inuse | sockets | +| ipv4.sockstat_udp_mem | mem | sockets | +| ipv4.sockstat_udplite_sockets | inuse | sockets | +| ipv4.sockstat_raw_sockets | inuse | sockets | +| ipv4.sockstat_frag_sockets | inuse | fragments | +| ipv4.sockstat_frag_mem | mem | KiB | + + + +## Alerts + + +The following alerts are available: + +| Alert name | On metric | Description | +|:------------|:----------|:------------| +| [ tcp_orphans ](https://github.com/netdata/netdata/blob/master/src/health/health.d/tcp_orphans.conf) | ipv4.sockstat_tcp_sockets | orphan IPv4 TCP sockets utilization | +| [ tcp_memory ](https://github.com/netdata/netdata/blob/master/src/health/health.d/tcp_mem.conf) | ipv4.sockstat_tcp_mem | TCP memory utilization | + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. 
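The socket counters above come from `/proc/net/sockstat`, where each line holds a protocol name followed by alternating field names and values. A minimal parsing sketch with a hypothetical snapshot:

```python
def parse_sockstat(text):
    """Parse /proc/net/sockstat into {protocol: {field: value}}."""
    out = {}
    for line in text.splitlines():
        proto, _, rest = line.partition(":")
        fields = rest.split()
        out[proto] = {fields[i]: int(fields[i + 1]) for i in range(0, len(fields) - 1, 2)}
    return out

# Hypothetical snapshot for illustration only.
sample = ("sockets: used 294\n"
          "TCP: inuse 7 orphan 0 tw 2 alloc 9 mem 1\n"
          "UDP: inuse 4 mem 2\n")
ss = parse_sockstat(sample)
```

The `orphan` and `mem` fields of the `TCP` line are the values behind the `tcp_orphans` and `tcp_memory` alerts listed above.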
+ + diff --git a/src/collectors/proc.plugin/integrations/softirq_statistics.md b/src/collectors/proc.plugin/integrations/softirq_statistics.md new file mode 100644 index 000000000..2a130dcce --- /dev/null +++ b/src/collectors/proc.plugin/integrations/softirq_statistics.md @@ -0,0 +1,133 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/softirq_statistics.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "SoftIRQ statistics" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/CPU" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# SoftIRQ statistics + + +<img src="https://netdata.cloud/img/linuxserver.svg" width="150"/> + + +Plugin: proc.plugin +Module: /proc/softirqs + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +In the Linux kernel, handling of hardware interrupts is split into two halves: the top half and the bottom half. +The top half is the routine that responds immediately to an interrupt, while the bottom half is deferred to be processed later. + +Softirqs are a mechanism in the Linux kernel used to handle the bottom halves of interrupts, which can be +deferred and processed later in a context where it's safe to enable interrupts. + +The actual work of handling the interrupt is offloaded to a softirq and executed later when the system +decides it's a good time to process them. This helps to keep the system responsive by not blocking the top +half for too long, which could lead to missed interrupts. + +Monitoring `/proc/softirqs` is useful for: + +- **Performance tuning**: A high rate of softirqs could indicate a performance issue. For instance, a high + rate of network softirqs (`NET_RX` and `NET_TX`) could indicate a network performance issue. 
+ +- **Troubleshooting**: If a system is behaving unexpectedly, checking the softirqs could provide clues about + what is going on. For example, a sudden increase in block device softirqs (BLOCK) might indicate a problem + with a disk. + +- **Understanding system behavior**: Knowing what types of softirqs are happening can help you understand what + your system is doing, particularly in terms of how it's interacting with hardware and how it's handling + interrupts. + + + + +This collector is supported on all platforms. + +This collector supports collecting metrics from multiple instances of this integration, including remote instances. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per SoftIRQ statistics instance + + + +This scope has no labels. + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| system.softirqs | a dimension per softirq | softirqs/s | + +### Per cpu core + + + +Labels: + +| Label | Description | +|:-----------|:----------------| +| cpu | TBD | + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| cpu.softirqs | a dimension per softirq | softirqs/s | + + + +## Alerts + +There are no alerts configured by default for this integration. + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. 
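`/proc/softirqs` is a table with a header row of CPU columns and one row per softirq type. A minimal sketch of turning it into per-softirq counts (the sample rows and values are hypothetical):

```python
def parse_softirqs(text):
    """Parse /proc/softirqs into {softirq_name: [per-cpu counts]}."""
    lines = text.splitlines()
    ncpu = len(lines[0].split())  # header row: CPU0 CPU1 ...
    out = {}
    for line in lines[1:]:
        name, *counts = line.split()
        out[name.rstrip(":")] = [int(c) for c in counts[:ncpu]]
    return out

# Hypothetical two-CPU excerpt.
sample = ("        CPU0       CPU1\n"
          "NET_RX:   1000       1200\n"
          "NET_TX:     10         12\n")
irqs = parse_softirqs(sample)
total_net_rx = sum(irqs["NET_RX"])
```

Summing a row across CPUs, as done for `NET_RX` here, gives the system-wide rate shown on the `system.softirqs` chart.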
+ + diff --git a/src/collectors/proc.plugin/integrations/softnet_statistics.md b/src/collectors/proc.plugin/integrations/softnet_statistics.md new file mode 100644 index 000000000..fbbe08036 --- /dev/null +++ b/src/collectors/proc.plugin/integrations/softnet_statistics.md @@ -0,0 +1,135 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/softnet_statistics.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "Softnet Statistics" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/Network" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# Softnet Statistics + + +<img src="https://netdata.cloud/img/linuxserver.svg" width="150"/> + + +Plugin: proc.plugin +Module: /proc/net/softnet_stat + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +`/proc/net/softnet_stat` provides statistics that relate to the handling of network packets by softirq. + +It provides information about: + +- Total number of processed packets (`processed`). +- Number of packets dropped because the backlog queue was full (`dropped`). +- Number of times `net_rx_action` stopped with work remaining because it ran out of budget or time (`squeezed`). +- Number of times `net_rx_action` was rescheduled for GRO (Generic Receive Offload) cells. +- Number of times GRO cells were processed. + +Monitoring the /proc/net/softnet_stat file can be useful for: + +- **Network performance monitoring**: By tracking the total number of processed packets and how many packets + were dropped, you can gain insights into your system's network performance. + +- **Troubleshooting**: If you're experiencing network-related issues, this collector can provide valuable clues. 
+ For instance, a high number of dropped packets may indicate a network problem. + +- **Capacity planning**: If your system is consistently processing near its maximum capacity of network + packets, it might be time to consider upgrading your network infrastructure. + + + + +This collector is supported on all platforms. + +This collector supports collecting metrics from multiple instances of this integration, including remote instances. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per Softnet Statistics instance + + + +This scope has no labels. + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| system.softnet_stat | processed, dropped, squeezed, received_rps, flow_limit_count | events/s | + +### Per cpu core + + + +This scope has no labels. 
+ +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| cpu.softnet_stat | processed, dropped, squeezed, received_rps, flow_limit_count | events/s | + + + +## Alerts + + +The following alerts are available: + +| Alert name | On metric | Description | +|:------------|:----------|:------------| +| [ 1min_netdev_backlog_exceeded ](https://github.com/netdata/netdata/blob/master/src/health/health.d/softnet.conf) | system.softnet_stat | average number of dropped packets in the last minute due to exceeded net.core.netdev_max_backlog | +| [ 1min_netdev_budget_ran_outs ](https://github.com/netdata/netdata/blob/master/src/health/health.d/softnet.conf) | system.softnet_stat | average number of times ksoftirq ran out of sysctl net.core.netdev_budget or net.core.netdev_budget_usecs with work remaining over the last minute (this can be a cause for dropped packets) | + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. 
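Each row of `/proc/net/softnet_stat` holds hexadecimal counters for one CPU; the first three columns are commonly interpreted as `processed`, `dropped`, and `squeezed`, though the exact column set varies with kernel version (later kernels append more columns). A parsing sketch with hypothetical values:

```python
def parse_softnet(text):
    """Parse /proc/net/softnet_stat: one row of hex counters per CPU.

    Assumes the common column order processed, dropped, squeezed for the
    first three fields; remaining columns are kernel-version dependent.
    """
    rows = []
    for line in text.splitlines():
        cols = [int(c, 16) for c in line.split()]
        rows.append({"processed": cols[0], "dropped": cols[1], "squeezed": cols[2]})
    return rows

# Hypothetical two-CPU snapshot.
sample = "0000a3c1 00000000 00000002 00000000\n0000991f 00000001 00000000 00000000\n"
cpus = parse_softnet(sample)
```

Non-zero `dropped` or `squeezed` deltas on a CPU map directly to the two softnet alerts listed above.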
+ + diff --git a/src/collectors/proc.plugin/integrations/synproxy.md b/src/collectors/proc.plugin/integrations/synproxy.md new file mode 100644 index 000000000..a63e6cbc0 --- /dev/null +++ b/src/collectors/proc.plugin/integrations/synproxy.md @@ -0,0 +1,97 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/synproxy.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "Synproxy" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/Firewall" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# Synproxy + + +<img src="https://netdata.cloud/img/firewall.svg" width="150"/> + + +Plugin: proc.plugin +Module: /proc/net/stat/synproxy + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +This integration provides statistics about the Synproxy netfilter module. + + + +This collector is supported on all platforms. + +This collector supports collecting metrics from multiple instances of this integration, including remote instances. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per Synproxy instance + + + +This scope has no labels. 
+ +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| netfilter.synproxy_syn_received | received | packets/s | +| netfilter.synproxy_conn_reopened | reopened | connections/s | +| netfilter.synproxy_cookies | valid, invalid, retransmits | cookies/s | + + + +## Alerts + +There are no alerts configured by default for this integration. + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. + + diff --git a/src/collectors/proc.plugin/integrations/system_load_average.md b/src/collectors/proc.plugin/integrations/system_load_average.md new file mode 100644 index 000000000..51f4f14ba --- /dev/null +++ b/src/collectors/proc.plugin/integrations/system_load_average.md @@ -0,0 +1,128 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/system_load_average.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "System Load Average" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/System" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# System Load Average + + +<img src="https://netdata.cloud/img/linuxserver.svg" width="150"/> + + +Plugin: proc.plugin +Module: /proc/loadavg + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +The `/proc/loadavg` file provides information about the system load average. + +The load average is a measure of the amount of computational work that a system performs. It is a +representation of the average system load over a period of time. 
+ +This file contains three numbers representing the system load averages for the last 1, 5, and 15 minutes, +respectively. It also includes the currently running processes and the total number of processes. + +Monitoring the load average can be used for: + +- **System performance**: If the load average is too high, it may indicate that your system is overloaded. + On a system with a single CPU, if the load average is 1, it means the single CPU is fully utilized. If the + load averages are consistently higher than the number of CPUs/cores, it may indicate that your system is + overloaded and tasks are waiting for CPU time. + +- **Troubleshooting**: If the load average is unexpectedly high, it can be a sign of a problem. This could be + due to a runaway process, a software bug, or a hardware issue. + +- **Capacity planning**: By monitoring the load average over time, you can understand the trends in your + system's workload. This can help with capacity planning and scaling decisions. + +Remember that load average not only considers CPU usage, but also includes processes waiting for disk I/O. +Therefore, high load averages could be due to I/O contention as well as CPU contention. + + + + +This collector is supported on all platforms. + +This collector only supports collecting metrics from a single instance of this integration. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per System Load Average instance + + + +This scope has no labels. 
+ +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| system.load | load1, load5, load15 | load | +| system.active_processes | active | processes | + + + +## Alerts + + +The following alerts are available: + +| Alert name | On metric | Description | +|:------------|:----------|:------------| +| [ load_cpu_number ](https://github.com/netdata/netdata/blob/master/src/health/health.d/load.conf) | system.load | number of active CPU cores in the system | +| [ load_average_15 ](https://github.com/netdata/netdata/blob/master/src/health/health.d/load.conf) | system.load | system fifteen-minute load average | +| [ load_average_5 ](https://github.com/netdata/netdata/blob/master/src/health/health.d/load.conf) | system.load | system five-minute load average | +| [ load_average_1 ](https://github.com/netdata/netdata/blob/master/src/health/health.d/load.conf) | system.load | system one-minute load average | +| [ active_processes ](https://github.com/netdata/netdata/blob/master/src/health/health.d/processes.conf) | system.active_processes | system process IDs (PID) space utilization | + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. 
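As a quick illustration of the `/proc/loadavg` layout described above (three load averages, a running/total process pair, and the most recently assigned PID), here is a parsing sketch with a made-up sample line:

```python
# Parse a /proc/loadavg-style line: three load averages, the number of
# currently runnable scheduling entities over the total number, and the
# PID of the most recently created process.
sample = "0.52 0.58 0.59 2/467 12345"

load1, load5, load15, running_total, last_pid = sample.split()
running, total = (int(n) for n in running_total.split("/"))

print(float(load1), float(load5), float(load15))  # system.load dimensions
print(running, total)                             # running vs. total processes
```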
+ + diff --git a/src/collectors/proc.plugin/integrations/system_statistics.md b/src/collectors/proc.plugin/integrations/system_statistics.md new file mode 100644 index 000000000..264e40bd5 --- /dev/null +++ b/src/collectors/proc.plugin/integrations/system_statistics.md @@ -0,0 +1,169 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/system_statistics.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "System statistics" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/System" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# System statistics + + +<img src="https://netdata.cloud/img/linuxserver.svg" width="150"/> + + +Plugin: proc.plugin +Module: /proc/stat + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +CPU utilization, states, frequencies, and key Linux system performance metrics. + +The `/proc/stat` file provides various types of system statistics: + +- The overall system CPU usage statistics +- Per CPU core statistics +- The total context switching of the system +- The total number of processes running +- The total CPU interrupts +- The total CPU softirqs + +The collector also reads: + +- `/proc/schedstat` for statistics about the process scheduler in the Linux kernel. +- `/sys/devices/system/cpu/[X]/thermal_throttle/core_throttle_count` to get the count of thermal throttling events for a specific CPU core on Linux systems. +- `/sys/devices/system/cpu/[X]/thermal_throttle/package_throttle_count` to get the count of thermal throttling events for a specific CPU package on a Linux system. +- `/sys/devices/system/cpu/[X]/cpufreq/scaling_cur_freq` to get the current operating frequency of a specific CPU core.
+- `/sys/devices/system/cpu/[X]/cpufreq/stats/time_in_state` to get the amount of time the CPU has spent in each of its available frequency states. +- `/sys/devices/system/cpu/[X]/cpuidle/state[X]/name` to get the names of the idle states for each CPU core in a Linux system. +- `/sys/devices/system/cpu/[X]/cpuidle/state[X]/time` to get the total time each specific CPU core has spent in each idle state since the system was started. + + + + +This collector is only supported on the following platforms: + +- linux + +This collector only supports collecting metrics from a single instance of this integration. + + +### Default Behavior + +#### Auto-Detection + +The collector auto-detects all metrics. No configuration is needed. + + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The collector disables cpu frequency and idle state monitoring when there are more than 128 CPU cores available. + + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per System statistics instance + + + +This scope has no labels. 
+ +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| system.cpu | guest_nice, guest, steal, softirq, irq, user, system, nice, iowait, idle | percentage | +| system.intr | interrupts | interrupts/s | +| system.ctxt | switches | context switches/s | +| system.forks | started | processes/s | +| system.processes | running, blocked | processes | +| cpu.core_throttling | a dimension per cpu core | events/s | +| cpu.package_throttling | a dimension per package | events/s | +| cpu.cpufreq | a dimension per cpu core | MHz | + +### Per cpu core + + + +Labels: + +| Label | Description | +|:-----------|:----------------| +| cpu | TBD | + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| cpu.cpu | guest_nice, guest, steal, softirq, irq, user, system, nice, iowait, idle | percentage | +| cpuidle.cpu_cstate_residency_time | a dimension per c-state | percentage | + + + +## Alerts + + +The following alerts are available: + +| Alert name | On metric | Description | +|:------------|:----------|:------------| +| [ 10min_cpu_usage ](https://github.com/netdata/netdata/blob/master/src/health/health.d/cpu.conf) | system.cpu | average CPU utilization over the last 10 minutes (excluding iowait, nice and steal) | +| [ 10min_cpu_iowait ](https://github.com/netdata/netdata/blob/master/src/health/health.d/cpu.conf) | system.cpu | average CPU iowait time over the last 10 minutes | +| [ 20min_steal_cpu ](https://github.com/netdata/netdata/blob/master/src/health/health.d/cpu.conf) | system.cpu | average CPU steal time over the last 20 minutes | + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +The configuration file name for this integration is `netdata.conf`. +Configuration for this specific integration is located in the `plugin:proc:/proc/stat` section within that file. + +The file format is a modified INI syntax. 
The general structure is: + +```ini +[section1] + option1 = some value + option2 = some other value + +[section2] + option3 = some third value +``` +You can edit the configuration file using the `edit-config` script from the +Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory). + +```bash +cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata +sudo ./edit-config netdata.conf +``` +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. + + diff --git a/src/collectors/proc.plugin/integrations/system_uptime.md b/src/collectors/proc.plugin/integrations/system_uptime.md new file mode 100644 index 000000000..4b7e21188 --- /dev/null +++ b/src/collectors/proc.plugin/integrations/system_uptime.md @@ -0,0 +1,108 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/system_uptime.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "System Uptime" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/System" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# System Uptime + + +<img src="https://netdata.cloud/img/linuxserver.svg" width="150"/> + + +Plugin: proc.plugin +Module: /proc/uptime + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +The amount of time the system has been up (running). + +Uptime is a critical aspect of overall system performance: + +- **Availability**: Uptime monitoring can show whether a server is consistently available or experiences frequent downtimes. 
+- **Performance Monitoring**: While server uptime alone doesn't provide detailed performance data, analyzing the duration and frequency of downtimes can help identify patterns or trends. +- **Proactive problem detection**: If server uptime monitoring reveals unexpected downtimes or a decreasing uptime trend, it can serve as an early warning sign of potential problems. +- **Root cause analysis**: When investigating server downtime, the uptime metric alone may not provide enough information to pinpoint the exact cause. +- **Load balancing**: Uptime data can indirectly indicate load balancing issues if certain servers have significantly lower uptimes than others. +- **Optimize maintenance efforts**: Servers with consistently low uptimes or frequent downtimes may require more attention. +- **Compliance requirements**: Server uptime data can be used to demonstrate compliance with regulatory requirements or SLAs that mandate a minimum level of server availability. + + + + +This collector is only supported on the following platforms: + +- linux + +This collector only supports collecting metrics from a single instance of this integration. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per System Uptime instance + + + +This scope has no labels. + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| system.uptime | uptime | seconds | + + + +## Alerts + +There are no alerts configured by default for this integration. 
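The `system.uptime` metric corresponds to the first field of `/proc/uptime`. A parsing sketch with an invented sample line (the second field, cumulative idle time summed across all CPUs, is read from the same file but not charted here):

```python
# Parse a /proc/uptime-style line. The first field is seconds since boot;
# the second is idle time summed over all CPUs, so it can exceed the first.
sample = "352735.47 1403555.60"

uptime_s, idle_s = (float(v) for v in sample.split())

days, rem = divmod(int(uptime_s), 86400)
hours, rem = divmod(rem, 3600)
minutes = rem // 60
print(f"up {days}d {hours}h {minutes}m")
```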
+ + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. + + diff --git a/src/collectors/proc.plugin/integrations/wireless_network_interfaces.md b/src/collectors/proc.plugin/integrations/wireless_network_interfaces.md new file mode 100644 index 000000000..4288a1ebd --- /dev/null +++ b/src/collectors/proc.plugin/integrations/wireless_network_interfaces.md @@ -0,0 +1,100 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/wireless_network_interfaces.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "Wireless network interfaces" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/Network" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# Wireless network interfaces + + +<img src="https://netdata.cloud/img/network-wired.svg" width="150"/> + + +Plugin: proc.plugin +Module: /proc/net/wireless + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +Monitor wireless devices with metrics about status, link quality, signal level, noise level and more. + + + +This collector is supported on all platforms. + +This collector supports collecting metrics from multiple instances of this integration, including remote instances. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. 
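Each data line of `/proc/net/wireless` describes one interface: status, link quality, signal and noise levels, the discarded-packet counters, and missed beacons. A parsing sketch with an invented sample line (note the trailing dots the kernel appends to the quality fields):

```python
# Parse one hypothetical data line of /proc/net/wireless.
# The link quality fields are printed by the kernel with a trailing '.',
# which must be stripped before converting.
sample = "wlan0: 0000   54.  -56.  -256.       0      0      0     12      0        0"

iface, rest = sample.split(":", 1)
parts = rest.split()

status = int(parts[0], 16)
link_quality, signal_dbm, noise_dbm = (int(p.rstrip(".")) for p in parts[1:4])
nwid, crypt, frag, retry, misc, missed_beacons = (int(p) for p in parts[4:10])

print(iface.strip(), link_quality, signal_dbm, retry)
```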
+ + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per wireless device + + + +This scope has no labels. + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| wireless.status | status | status | +| wireless.link_quality | link_quality | value | +| wireless.signal_level | signal_level | dBm | +| wireless.noise_level | noise_level | dBm | +| wireless.discarded_packets | nwid, crypt, frag, retry, misc | packets/s | +| wireless.missed_beacons | missed_beacons | frames/s | + + + +## Alerts + +There are no alerts configured by default for this integration. + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. + + diff --git a/src/collectors/proc.plugin/integrations/zfs_adaptive_replacement_cache.md b/src/collectors/proc.plugin/integrations/zfs_adaptive_replacement_cache.md new file mode 100644 index 000000000..d2d9378f4 --- /dev/null +++ b/src/collectors/proc.plugin/integrations/zfs_adaptive_replacement_cache.md @@ -0,0 +1,125 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/zfs_adaptive_replacement_cache.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "ZFS Adaptive Replacement Cache" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/Filesystem/ZFS" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# ZFS Adaptive Replacement Cache + + +<img src="https://netdata.cloud/img/filesystem.svg" width="150"/> + + +Plugin: proc.plugin +Module: /proc/spl/kstat/zfs/arcstats + +<img 
src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +This integration monitors ZFS Adaptive Replacement Cache (ARC) statistics. + + + +This collector is supported on all platforms. + +This collector supports collecting metrics from multiple instances of this integration, including remote instances. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per ZFS Adaptive Replacement Cache instance + + + +This scope has no labels. + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| zfs.arc_size | arcsz, target, min, max | MiB | +| zfs.l2_size | actual, size | MiB | +| zfs.reads | arc, demand, prefetch, metadata, l2 | reads/s | +| zfs.bytes | read, write | KiB/s | +| zfs.hits | hits, misses | percentage | +| zfs.hits_rate | hits, misses | events/s | +| zfs.dhits | hits, misses | percentage | +| zfs.dhits_rate | hits, misses | events/s | +| zfs.phits | hits, misses | percentage | +| zfs.phits_rate | hits, misses | events/s | +| zfs.mhits | hits, misses | percentage | +| zfs.mhits_rate | hits, misses | events/s | +| zfs.l2hits | hits, misses | percentage | +| zfs.l2hits_rate | hits, misses | events/s | +| zfs.list_hits | mfu, mfu_ghost, mru, mru_ghost | hits/s | +| zfs.arc_size_breakdown | recent, frequent | percentage | +| zfs.memory_ops | direct, throttled, indirect | operations/s | +| zfs.important_ops | evict_skip, deleted, mutex_miss, hash_collisions | operations/s | +| zfs.actual_hits | hits, misses | percentage | +| 
zfs.actual_hits_rate | hits, misses | events/s | +| zfs.demand_data_hits | hits, misses | percentage | +| zfs.demand_data_hits_rate | hits, misses | events/s | +| zfs.prefetch_data_hits | hits, misses | percentage | +| zfs.prefetch_data_hits_rate | hits, misses | events/s | +| zfs.hash_elements | current, max | elements | +| zfs.hash_chains | current, max | chains | + + + +## Alerts + + +The following alerts are available: + +| Alert name | On metric | Description | +|:------------|:----------|:------------| +| [ zfs_memory_throttle ](https://github.com/netdata/netdata/blob/master/src/health/health.d/zfs.conf) | zfs.memory_ops | number of times ZFS had to limit the ARC growth in the last 10 minutes | + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. + + diff --git a/src/collectors/proc.plugin/integrations/zfs_pools.md b/src/collectors/proc.plugin/integrations/zfs_pools.md new file mode 100644 index 000000000..f18c82baf --- /dev/null +++ b/src/collectors/proc.plugin/integrations/zfs_pools.md @@ -0,0 +1,105 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/zfs_pools.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "ZFS Pools" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/Filesystem/ZFS" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# ZFS Pools + + +<img src="https://netdata.cloud/img/filesystem.svg" width="150"/> + + +Plugin: proc.plugin +Module: /proc/spl/kstat/zfs + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +This integration provides metrics about the state of ZFS 
pools. + + + +This collector is supported on all platforms. + +This collector supports collecting metrics from multiple instances of this integration, including remote instances. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels. + + + +### Per zfs pool + + + +Labels: + +| Label | Description | +|:-----------|:----------------| +| pool | TBD | + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| zfspool.state | online, degraded, faulted, offline, removed, unavail, suspended | boolean | + + + +## Alerts + + +The following alerts are available: + +| Alert name | On metric | Description | +|:------------|:----------|:------------| +| [ zfs_pool_state_warn ](https://github.com/netdata/netdata/blob/master/src/health/health.d/zfs.conf) | zfspool.state | ZFS pool ${label:pool} state is degraded | +| [ zfs_pool_state_crit ](https://github.com/netdata/netdata/blob/master/src/health/health.d/zfs.conf) | zfspool.state | ZFS pool ${label:pool} state is faulted or unavail | + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. 
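The `zfspool.state` chart above is a set of boolean dimensions, exactly one of which is raised for the pool's current state. A sketch of that mapping; treating the state file's content as a single uppercase word is an assumption about the `/proc/spl/kstat/zfs/<pool>/state` format:

```python
# Map a ZFS pool state word to zfspool.state-style boolean dimensions,
# where exactly one dimension is 1 at any time.
STATES = ("online", "degraded", "faulted", "offline", "removed", "unavail", "suspended")

def pool_state_dimensions(state_text: str) -> dict:
    current = state_text.strip().lower()   # e.g. "ONLINE\n" -> "online"
    return {s: int(s == current) for s in STATES}

dims = pool_state_dimensions("ONLINE\n")
print(dims)
```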
+ + diff --git a/src/collectors/proc.plugin/integrations/zram.md b/src/collectors/proc.plugin/integrations/zram.md new file mode 100644 index 000000000..b80a72ab1 --- /dev/null +++ b/src/collectors/proc.plugin/integrations/zram.md @@ -0,0 +1,106 @@ +<!--startmeta +custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/integrations/zram.md" +meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/proc.plugin/metadata.yaml" +sidebar_label: "ZRAM" +learn_status: "Published" +learn_rel_path: "Collecting Metrics/Linux Systems/Memory" +most_popular: False +message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE" +endmeta--> + +# ZRAM + + +<img src="https://netdata.cloud/img/microchip.svg" width="150"/> + + +Plugin: proc.plugin +Module: /sys/block/zram + +<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" /> + +## Overview + +zRAM, or compressed RAM, is a block device backed by a portion of your system's RAM. +The data written to this block device is compressed and stored in memory. + +The collector provides information about the operation and the effectiveness of zRAM on your system. + + + + +This collector is supported on all platforms. + +This collector supports collecting metrics from multiple instances of this integration, including remote instances. + + +### Default Behavior + +#### Auto-Detection + +This integration doesn't support auto-detection. + +#### Limits + +The default configuration for this integration does not impose any limits on data collection. + +#### Performance Impact + +The default configuration for this integration is not expected to impose a significant performance impact on the system. + + +## Metrics + +Metrics grouped by *scope*. + +The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
+ + + +### Per zram device + + + +Labels: + +| Label | Description | +|:-----------|:----------------| +| device | TBD | + +Metrics: + +| Metric | Dimensions | Unit | +|:------|:----------|:----| +| mem.zram_usage | compressed, metadata | MiB | +| mem.zram_savings | savings, original | MiB | +| mem.zram_ratio | ratio | ratio | +| mem.zram_efficiency | percent | percentage | + + + +## Alerts + +There are no alerts configured by default for this integration. + + +## Setup + +### Prerequisites + +No action required. + +### Configuration + +#### File + +There is no configuration file. +#### Options + + + +There are no configuration options. + +#### Examples +There are no configuration examples. + + diff --git a/src/collectors/proc.plugin/ipc.c b/src/collectors/proc.plugin/ipc.c new file mode 100644 index 000000000..6d7d920f0 --- /dev/null +++ b/src/collectors/proc.plugin/ipc.c @@ -0,0 +1,556 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +#include <sys/sem.h> +#include <sys/msg.h> +#include <sys/shm.h> + + +#ifndef SEMVMX +#define SEMVMX 32767 /* <= 32767 semaphore maximum value */ +#endif + +/* Some versions of libc only define IPC_INFO when __USE_GNU is defined. 
*/ +#ifndef IPC_INFO +#define IPC_INFO 3 +#endif + +struct ipc_limits { + uint64_t shmmni; /* max number of segments */ + uint64_t shmmax; /* max segment size */ + uint64_t shmall; /* max total shared memory */ + uint64_t shmmin; /* min segment size */ + + int semmni; /* max number of arrays */ + int semmsl; /* max semaphores per array */ + int semmns; /* max semaphores system wide */ + int semopm; /* max ops per semop call */ + unsigned int semvmx; /* semaphore max value (constant) */ + + int msgmni; /* max queues system wide */ + size_t msgmax; /* max size of message */ + int msgmnb; /* default max size of queue */ +}; + +struct ipc_status { + int semusz; /* current number of arrays */ + int semaem; /* current semaphores system wide */ +}; + +/* + * The last arg of semctl is a union semun, but where is it defined? X/OPEN + * tells us to define it ourselves, but until recently Linux include files + * would also define it. + */ +#ifndef HAVE_UNION_SEMUN +/* according to X/OPEN we have to define it ourselves */ +union semun { + int val; + struct semid_ds *buf; + unsigned short int *array; + struct seminfo *__buf; +}; +#endif + +struct message_queue { + unsigned long long id; + int found; + + RRDDIM *rd_messages; + RRDDIM *rd_bytes; + unsigned long long messages; + unsigned long long bytes; + + struct message_queue * next; +}; + +struct shm_stats { + unsigned long long segments; + unsigned long long bytes; +}; + +static inline int ipc_sem_get_limits(struct ipc_limits *lim) { + static procfile *ff = NULL; + static int error_shown = 0; + static char filename[FILENAME_MAX + 1] = ""; + + if(unlikely(!filename[0])) + snprintfz(filename, FILENAME_MAX, "%s/proc/sys/kernel/sem", netdata_configured_host_prefix); + + if(unlikely(!ff)) { + ff = procfile_open(filename, NULL, PROCFILE_FLAG_DEFAULT); + if(unlikely(!ff)) { + if(unlikely(!error_shown)) { + collector_error("IPC: Cannot open file '%s'.", filename); + error_shown = 1; + } + goto ipc; + } + } + + ff = 
procfile_readall(ff); + if(unlikely(!ff)) { + if(unlikely(!error_shown)) { + collector_error("IPC: Cannot read file '%s'.", filename); + error_shown = 1; + } + goto ipc; + } + + if(procfile_lines(ff) >= 1 && procfile_linewords(ff, 0) >= 4) { + lim->semvmx = SEMVMX; + lim->semmsl = str2i(procfile_lineword(ff, 0, 0)); + lim->semmns = str2i(procfile_lineword(ff, 0, 1)); + lim->semopm = str2i(procfile_lineword(ff, 0, 2)); + lim->semmni = str2i(procfile_lineword(ff, 0, 3)); + return 0; + } + else { + if(unlikely(!error_shown)) { + collector_error("IPC: Invalid content in file '%s'.", filename); + error_shown = 1; + } + goto ipc; + } + +ipc: + // cannot do it from the file + // query IPC + { + struct seminfo seminfo = {.semmni = 0}; + union semun arg = {.array = (ushort *) &seminfo}; + + if(unlikely(semctl(0, 0, IPC_INFO, arg) < 0)) { + collector_error("IPC: Failed to read '%s' and request IPC_INFO with semctl().", filename); + goto error; + } + + lim->semvmx = SEMVMX; + lim->semmni = seminfo.semmni; + lim->semmsl = seminfo.semmsl; + lim->semmns = seminfo.semmns; + lim->semopm = seminfo.semopm; + return 0; + } + +error: + lim->semvmx = 0; + lim->semmni = 0; + lim->semmsl = 0; + lim->semmns = 0; + lim->semopm = 0; + return -1; +} + +/* +printf ("------ Semaphore Limits --------\n"); +printf ("max number of arrays = %d\n", limits.semmni); +printf ("max semaphores per array = %d\n", limits.semmsl); +printf ("max semaphores system wide = %d\n", limits.semmns); +printf ("max ops per semop call = %d\n", limits.semopm); +printf ("semaphore max value = %u\n", limits.semvmx); + +printf ("------ Semaphore Status --------\n"); +printf ("used arrays = %d\n", status.semusz); +printf ("allocated semaphores = %d\n", status.semaem); +*/ + +static inline int ipc_sem_get_status(struct ipc_status *st) { + struct seminfo seminfo; + union semun arg; + + arg.array = (ushort *) (void *) &seminfo; + + if(unlikely(semctl (0, 0, SEM_INFO, arg) < 0)) { + /* kernel not configured for semaphores */ 
+ static int error_shown = 0; + if(unlikely(!error_shown)) { + collector_error("IPC: kernel is not configured for semaphores"); + error_shown = 1; + } + st->semusz = 0; + st->semaem = 0; + return -1; + } + + st->semusz = seminfo.semusz; + st->semaem = seminfo.semaem; + return 0; +} + +int ipc_msq_get_info(char *msg_filename, struct message_queue **message_queue_root) { + static procfile *ff; + struct message_queue *msq; + + if(unlikely(!ff)) { + ff = procfile_open(msg_filename, " \t:", PROCFILE_FLAG_DEFAULT); + if(unlikely(!ff)) return 1; + } + + ff = procfile_readall(ff); + if(unlikely(!ff)) return 1; + + size_t lines = procfile_lines(ff); + size_t words = 0; + + if(unlikely(lines < 2)) { + collector_error("Cannot read %s. Expected 2 or more lines, read %zu.", procfile_filename(ff), lines); + return 1; + } + + // loop through all lines except the first and the last ones + size_t l; + for(l = 1; l < lines - 1; l++) { + words = procfile_linewords(ff, l); + if(unlikely(words < 2)) continue; + if(unlikely(words < 14)) { + collector_error("Cannot read %s line. 
Expected 14 params, read %zu.", procfile_filename(ff), words); + continue; + } + + // find the id in the linked list or create a new structure + int found = 0; + + unsigned long long id = str2ull(procfile_lineword(ff, l, 1), NULL); + for(msq = *message_queue_root; msq ; msq = msq->next) { + if(unlikely(id == msq->id)) { + found = 1; + break; + } + } + + if(unlikely(!found)) { + msq = callocz(1, sizeof(struct message_queue)); + msq->next = *message_queue_root; + *message_queue_root = msq; + msq->id = id; + } + + msq->messages = str2ull(procfile_lineword(ff, l, 4), NULL); + msq->bytes = str2ull(procfile_lineword(ff, l, 3), NULL); + msq->found = 1; + } + + return 0; +} + +int ipc_shm_get_info(char *shm_filename, struct shm_stats *shm) { + static procfile *ff; + + if(unlikely(!ff)) { + ff = procfile_open(shm_filename, " \t:", PROCFILE_FLAG_DEFAULT); + if(unlikely(!ff)) return 1; + } + + ff = procfile_readall(ff); + if(unlikely(!ff)) return 1; + + size_t lines = procfile_lines(ff); + size_t words = 0; + + if(unlikely(lines < 2)) { + collector_error("Cannot read %s. Expected 2 or more lines, read %zu.", procfile_filename(ff), lines); + return 1; + } + + shm->segments = 0; + shm->bytes = 0; + + // loop through all lines except the first and the last ones + size_t l; + for(l = 1; l < lines - 1; l++) { + words = procfile_linewords(ff, l); + if(unlikely(words < 2)) continue; + if(unlikely(words < 16)) { + collector_error("Cannot read %s line. 
Expected 16 params, read %zu.", procfile_filename(ff), words); + continue; + } + + shm->segments++; + shm->bytes += str2ull(procfile_lineword(ff, l, 3), NULL); + } + + return 0; +} + +int do_ipc(int update_every, usec_t dt) { + (void)dt; + + static int do_sem = -1, do_msg = -1, do_shm = -1; + static int read_limits_next = -1; + static struct ipc_limits limits; + static struct ipc_status status; + static const RRDVAR_ACQUIRED *arrays_max = NULL, *semaphores_max = NULL; + static RRDSET *st_semaphores = NULL, *st_arrays = NULL; + static RRDDIM *rd_semaphores = NULL, *rd_arrays = NULL; + static char *msg_filename = NULL; + static struct message_queue *message_queue_root = NULL; + static long long dimensions_limit; + static char *shm_filename = NULL; + + if(unlikely(do_sem == -1)) { + do_msg = config_get_boolean("plugin:proc:ipc", "message queues", CONFIG_BOOLEAN_YES); + do_sem = config_get_boolean("plugin:proc:ipc", "semaphore totals", CONFIG_BOOLEAN_YES); + do_shm = config_get_boolean("plugin:proc:ipc", "shared memory totals", CONFIG_BOOLEAN_YES); + + char filename[FILENAME_MAX + 1]; + + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/sysvipc/msg"); + msg_filename = config_get("plugin:proc:ipc", "msg filename to monitor", filename); + + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/sysvipc/shm"); + shm_filename = config_get("plugin:proc:ipc", "shm filename to monitor", filename); + + dimensions_limit = config_get_number("plugin:proc:ipc", "max dimensions in memory allowed", 50); + + // make sure it works + if(ipc_sem_get_limits(&limits) == -1) { + collector_error("unable to fetch semaphore limits"); + do_sem = CONFIG_BOOLEAN_NO; + } + else if(ipc_sem_get_status(&status) == -1) { + collector_error("unable to fetch semaphore statistics"); + do_sem = CONFIG_BOOLEAN_NO; + } + else { + // create the charts + if(unlikely(!st_semaphores)) { + st_semaphores = rrdset_create_localhost( + "system" + , 
"ipc_semaphores" + , NULL + , "ipc semaphores" + , NULL + , "IPC Semaphores" + , "semaphores" + , PLUGIN_PROC_NAME + , "ipc" + , NETDATA_CHART_PRIO_SYSTEM_IPC_SEMAPHORES + , localhost->rrd_update_every + , RRDSET_TYPE_AREA + ); + rd_semaphores = rrddim_add(st_semaphores, "semaphores", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + if(unlikely(!st_arrays)) { + st_arrays = rrdset_create_localhost( + "system" + , "ipc_semaphore_arrays" + , NULL + , "ipc semaphores" + , NULL + , "IPC Semaphore Arrays" + , "arrays" + , PLUGIN_PROC_NAME + , "ipc" + , NETDATA_CHART_PRIO_SYSTEM_IPC_SEM_ARRAYS + , localhost->rrd_update_every + , RRDSET_TYPE_AREA + ); + rd_arrays = rrddim_add(st_arrays, "arrays", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + // variables + semaphores_max = rrdvar_host_variable_add_and_acquire(localhost, "ipc_semaphores_max"); + arrays_max = rrdvar_host_variable_add_and_acquire(localhost, "ipc_semaphores_arrays_max"); + } + + struct stat stbuf; + if (stat(msg_filename, &stbuf)) { + do_msg = CONFIG_BOOLEAN_NO; + } + + if(unlikely(do_sem == CONFIG_BOOLEAN_NO && do_msg == CONFIG_BOOLEAN_NO)) { + collector_error("ipc module disabled"); + return 1; + } + } + + if(likely(do_sem != CONFIG_BOOLEAN_NO)) { + if(unlikely(read_limits_next < 0)) { + if(unlikely(ipc_sem_get_limits(&limits) == -1)) { + collector_error("Unable to fetch semaphore limits."); + } + else { + if(semaphores_max) + rrdvar_host_variable_set(localhost, semaphores_max, limits.semmns); + if(arrays_max) + rrdvar_host_variable_set(localhost, arrays_max, limits.semmni); + + st_arrays->red = limits.semmni; + st_semaphores->red = limits.semmns; + + read_limits_next = 60 / update_every; + } + } + else + read_limits_next--; + + if(unlikely(ipc_sem_get_status(&status) == -1)) { + collector_error("Unable to get semaphore statistics"); + return 0; + } + + rrddim_set_by_pointer(st_semaphores, rd_semaphores, status.semaem); + rrdset_done(st_semaphores); + + rrddim_set_by_pointer(st_arrays, rd_arrays, status.semusz); 
+ rrdset_done(st_arrays); + } + + if(likely(do_msg != CONFIG_BOOLEAN_NO)) { + static RRDSET *st_msq_messages = NULL, *st_msq_bytes = NULL; + + int ret = ipc_msq_get_info(msg_filename, &message_queue_root); + + if(!ret && message_queue_root) { + if(unlikely(!st_msq_messages)) + st_msq_messages = rrdset_create_localhost( + "system" + , "message_queue_messages" + , NULL + , "ipc message queues" + , NULL + , "IPC Message Queue Number of Messages" + , "messages" + , PLUGIN_PROC_NAME + , "ipc" + , NETDATA_CHART_PRIO_SYSTEM_IPC_MSQ_MESSAGES + , update_every + , RRDSET_TYPE_STACKED + ); + + if(unlikely(!st_msq_bytes)) + st_msq_bytes = rrdset_create_localhost( + "system" + , "message_queue_bytes" + , NULL + , "ipc message queues" + , NULL + , "IPC Message Queue Used Bytes" + , "bytes" + , PLUGIN_PROC_NAME + , "ipc" + , NETDATA_CHART_PRIO_SYSTEM_IPC_MSQ_SIZE + , update_every + , RRDSET_TYPE_STACKED + ); + + struct message_queue *msq = message_queue_root, *msq_prev = NULL; + while(likely(msq)){ + if(likely(msq->found)) { + if(unlikely(!msq->rd_messages || !msq->rd_bytes)) { + char id[RRD_ID_LENGTH_MAX + 1]; + snprintfz(id, RRD_ID_LENGTH_MAX, "%llu", msq->id); + if(likely(!msq->rd_messages)) msq->rd_messages = rrddim_add(st_msq_messages, id, NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + if(likely(!msq->rd_bytes)) msq->rd_bytes = rrddim_add(st_msq_bytes, id, NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st_msq_messages, msq->rd_messages, msq->messages); + rrddim_set_by_pointer(st_msq_bytes, msq->rd_bytes, msq->bytes); + + msq->found = 0; + } + else { + rrddim_is_obsolete___safe_from_collector_thread(st_msq_messages, msq->rd_messages); + rrddim_is_obsolete___safe_from_collector_thread(st_msq_bytes, msq->rd_bytes); + + // remove message queue from the linked list + if(!msq_prev) + message_queue_root = msq->next; + else + msq_prev->next = msq->next; + freez(msq); + msq = NULL; + } + if(likely(msq)) { + msq_prev = msq; + msq = msq->next; + } + else if(!msq_prev) + msq 
= message_queue_root; + else + msq = msq_prev->next; + } + + rrdset_done(st_msq_messages); + rrdset_done(st_msq_bytes); + + long long dimensions_num = rrdset_number_of_dimensions(st_msq_messages); + + if(unlikely(dimensions_num > dimensions_limit)) { + collector_info("Message queue statistics has been disabled"); + collector_info("There are %lld dimensions in memory but limit was set to %lld", dimensions_num, dimensions_limit); + rrdset_is_obsolete___safe_from_collector_thread(st_msq_messages); + rrdset_is_obsolete___safe_from_collector_thread(st_msq_bytes); + st_msq_messages = NULL; + st_msq_bytes = NULL; + do_msg = CONFIG_BOOLEAN_NO; + } + else if(unlikely(!message_queue_root)) { + collector_info("Making chart %s (%s) obsolete since it does not have any dimensions", rrdset_name(st_msq_messages), rrdset_id(st_msq_messages)); + rrdset_is_obsolete___safe_from_collector_thread(st_msq_messages); + st_msq_messages = NULL; + + collector_info("Making chart %s (%s) obsolete since it does not have any dimensions", rrdset_name(st_msq_bytes), rrdset_id(st_msq_bytes)); + rrdset_is_obsolete___safe_from_collector_thread(st_msq_bytes); + st_msq_bytes = NULL; + } + } + } + + if(likely(do_shm != CONFIG_BOOLEAN_NO)) { + static RRDSET *st_shm_segments = NULL, *st_shm_bytes = NULL; + static RRDDIM *rd_shm_segments = NULL, *rd_shm_bytes = NULL; + struct shm_stats shm; + + if(!ipc_shm_get_info(shm_filename, &shm)) { + if(unlikely(!st_shm_segments)) { + st_shm_segments = rrdset_create_localhost( + "system" + , "shared_memory_segments" + , NULL + , "ipc shared memory" + , NULL + , "IPC Shared Memory Number of Segments" + , "segments" + , PLUGIN_PROC_NAME + , "ipc" + , NETDATA_CHART_PRIO_SYSTEM_IPC_SHARED_MEM_SEGS + , update_every + , RRDSET_TYPE_STACKED + ); + + rd_shm_segments = rrddim_add(st_shm_segments, "segments", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st_shm_segments, rd_shm_segments, shm.segments); + rrdset_done(st_shm_segments); + + 
if(unlikely(!st_shm_bytes)) { + st_shm_bytes = rrdset_create_localhost( + "system" + , "shared_memory_bytes" + , NULL + , "ipc shared memory" + , NULL + , "IPC Shared Memory Used Bytes" + , "bytes" + , PLUGIN_PROC_NAME + , "ipc" + , NETDATA_CHART_PRIO_SYSTEM_IPC_SHARED_MEM_SIZE + , update_every + , RRDSET_TYPE_STACKED + ); + + rd_shm_bytes = rrddim_add(st_shm_bytes, "bytes", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st_shm_bytes, rd_shm_bytes, shm.bytes); + rrdset_done(st_shm_bytes); + } + } + + return 0; +} diff --git a/src/collectors/proc.plugin/metadata.yaml b/src/collectors/proc.plugin/metadata.yaml new file mode 100644 index 000000000..1ecec9b9e --- /dev/null +++ b/src/collectors/proc.plugin/metadata.yaml @@ -0,0 +1,5299 @@ +plugin_name: proc.plugin +modules: + - meta: + plugin_name: proc.plugin + module_name: /proc/stat + monitored_instance: + name: System statistics + link: "" + categories: + - data-collection.linux-systems.system-metrics + icon_filename: "linuxserver.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - cpu utilization + - process counts + most_popular: false + overview: + data_collection: + metrics_description: | + CPU utilization, states and frequencies and key Linux system performance metrics. + + The `/proc/stat` file provides various types of system statistics: + + - The overall system CPU usage statistics + - Per CPU core statistics + - The total context switching of the system + - The total number of processes running + - The total CPU interrupts + - The total CPU softirqs + + The collector also reads: + + - `/proc/schedstat` for statistics about the process scheduler in the Linux kernel. + - `/sys/devices/system/cpu/[X]/thermal_throttle/core_throttle_count` to get the count of thermal throttling events for a specific CPU core on Linux systems. 
+ - `/sys/devices/system/cpu/[X]/thermal_throttle/package_throttle_count` to get the count of thermal throttling events for a specific CPU package on a Linux system. + - `/sys/devices/system/cpu/[X]/cpufreq/scaling_cur_freq` to get the current operating frequency of a specific CPU core. + - `/sys/devices/system/cpu/[X]/cpufreq/stats/time_in_state` to get the amount of time the CPU has spent in each of its available frequency states. + - `/sys/devices/system/cpu/[X]/cpuidle/state[X]/name` to get the names of the idle states for each CPU core in a Linux system. + - `/sys/devices/system/cpu/[X]/cpuidle/state[X]/time` to get the total time each specific CPU core has spent in each idle state since the system was started. + method_description: "" + supported_platforms: + include: ["linux"] + exclude: [] + multi_instance: false + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: | + The collector auto-detects all metrics. No configuration is needed. + limits: + description: "" + performance_impact: + description: | + The collector disables CPU frequency and idle state monitoring when there are more than 128 CPU cores available. 
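The `/proc/stat` collection described above boils down to sampling the jiffy counters on the `cpu` line and diffing two samples. A minimal sketch in plain C — the helpers `parse_cpu_line` and `busy_pct` are illustrative names, not part of Netdata's procfile API, which the real collector uses:

```c
#include <assert.h>
#include <stdio.h>

/* Split the aggregate "cpu" line of /proc/stat into its jiffy counters.
 * Field order follows proc(5): user nice system idle iowait irq softirq steal.
 * Returns the number of fields parsed. */
static int parse_cpu_line(const char *line, unsigned long long v[8]) {
    return sscanf(line, "cpu %llu %llu %llu %llu %llu %llu %llu %llu",
                  &v[0], &v[1], &v[2], &v[3], &v[4], &v[5], &v[6], &v[7]);
}

/* CPU utilization between two samples: everything except idle and iowait,
 * as a percentage of the total jiffies elapsed. */
static double busy_pct(const unsigned long long a[8], const unsigned long long b[8]) {
    unsigned long long total = 0;
    for (int i = 0; i < 8; i++) total += b[i] - a[i];
    unsigned long long idle = (b[3] - a[3]) + (b[4] - a[4]);
    return total ? 100.0 * (double)(total - idle) / (double)total : 0.0;
}
```

Two samples taken `update_every` seconds apart fed through `busy_pct` give the stacked `system.cpu` percentage; per-core lines (`cpu0`, `cpu1`, …) work the same way.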
+ setup: + prerequisites: + list: [] + configuration: + file: + section_name: "plugin:proc:/proc/stat" + name: "netdata.conf" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: + - name: 10min_cpu_usage + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/cpu.conf + metric: system.cpu + info: average CPU utilization over the last 10 minutes (excluding iowait, nice and steal) + os: "linux" + - name: 10min_cpu_iowait + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/cpu.conf + metric: system.cpu + info: average CPU iowait time over the last 10 minutes + os: "linux" + - name: 20min_steal_cpu + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/cpu.conf + metric: system.cpu + info: average CPU steal time over the last 20 minutes + os: "linux" + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: global + description: "" + labels: [] + metrics: + - name: system.cpu + description: Total CPU utilization + unit: "percentage" + chart_type: stacked + dimensions: + - name: guest_nice + - name: guest + - name: steal + - name: softirq + - name: irq + - name: user + - name: system + - name: nice + - name: iowait + - name: idle + - name: system.intr + description: CPU Interrupts + unit: "interrupts/s" + chart_type: line + dimensions: + - name: interrupts + - name: system.ctxt + description: CPU Context Switches + unit: "context switches/s" + chart_type: line + dimensions: + - name: switches + - name: system.forks + description: Started Processes + unit: "processes/s" + chart_type: line + dimensions: + - name: started + - name: system.processes + description: System Processes + unit: "processes" + chart_type: line + dimensions: + - name: running + - name: blocked + - name: cpu.core_throttling + 
description: Core Thermal Throttling Events + unit: "events/s" + chart_type: line + dimensions: + - name: a dimension per cpu core + - name: cpu.package_throttling + description: Package Thermal Throttling Events + unit: "events/s" + chart_type: line + dimensions: + - name: a dimension per package + - name: cpu.cpufreq + description: Current CPU Frequency + unit: "MHz" + chart_type: line + dimensions: + - name: a dimension per cpu core + - name: cpu core + description: "" + labels: + - name: cpu + description: TBD + metrics: + - name: cpu.cpu + description: Core utilization + unit: "percentage" + chart_type: stacked + dimensions: + - name: guest_nice + - name: guest + - name: steal + - name: softirq + - name: irq + - name: user + - name: system + - name: nice + - name: iowait + - name: idle + - name: cpuidle.cpu_cstate_residency_time + description: C-state residency time + unit: "percentage" + chart_type: stacked + dimensions: + - name: a dimension per c-state + - meta: + plugin_name: proc.plugin + module_name: /proc/sys/kernel/random/entropy_avail + monitored_instance: + name: Entropy + link: "" + categories: + - data-collection.linux-systems.system-metrics + icon_filename: "syslog.png" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - entropy + most_popular: false + overview: + data_collection: + metrics_description: | + Entropy, a measure of the randomness or unpredictability of data. + + In the context of cryptography, entropy is used to generate random numbers or keys that are essential for + secure communication and encryption. Without a good source of entropy, cryptographic protocols can become + vulnerable to attacks that exploit the predictability of the generated keys. + + In most operating systems, entropy is generated by collecting random events from various sources, such as + hardware interrupts, mouse movements, keyboard presses, and disk activity. 
These events are fed into a pool + of entropy, which is then used to generate random numbers when needed. + + The `/dev/random` device in Linux is one such source of entropy, and it provides an interface for programs + to access the pool of entropy. When a program requests random numbers, it reads from the `/dev/random` device, + which blocks until enough entropy is available to generate the requested numbers. This ensures that the + generated numbers are truly random and not predictable. + + However, if the pool of entropy gets depleted, the `/dev/random` device may block indefinitely, causing + programs that rely on random numbers to slow down or even freeze. This is especially problematic for + cryptographic protocols that require a continuous stream of random numbers, such as SSL/TLS and SSH. + + To avoid this issue, some systems use a hardware random number generator (RNG) to generate high-quality + entropy. A hardware RNG generates random numbers by measuring physical phenomena, such as thermal noise or + radioactive decay. These sources of randomness are considered to be more reliable and unpredictable than + software-based sources. + + One such hardware RNG is the Trusted Platform Module (TPM), which is a dedicated hardware chip that is used + for cryptographic operations and secure boot. The TPM contains a built-in hardware RNG that generates + high-quality entropy, which can be used to seed the pool of entropy in the operating system. + + Alternatively, software-based solutions such as `Haveged` can be used to generate additional entropy by + exploiting sources of randomness in the system, such as CPU utilization and network traffic. These solutions + can help to mitigate the risk of entropy depletion, but they may not be as reliable as hardware-based solutions. 
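Collecting the metric itself is as simple as the description suggests: `/proc/sys/kernel/random/entropy_avail` holds a single decimal number. A hedged sketch using plain stdio rather than Netdata's procfile layer (`read_counter` and `read_counter_stream` are hypothetical helpers):

```c
#include <assert.h>
#include <stdio.h>

/* Read one decimal counter from an already-open stream. Returns -1 on
 * failure so callers can distinguish "missing/unreadable" from a value. */
static long read_counter_stream(FILE *f) {
    long v;
    if (!f || fscanf(f, "%ld", &v) != 1) return -1;
    return v;
}

/* Read a single-value /proc file such as
 * /proc/sys/kernel/random/entropy_avail. */
static long read_counter(const char *path) {
    FILE *f = fopen(path, "r");
    if (!f) return -1;
    long v = read_counter_stream(f);
    fclose(f);
    return v;
}
```

On recent kernels (5.18+) the file reports a constant 256, since the random subsystem no longer tracks a depletable pool; on older kernels the value moves with pool usage.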
+ method_description: "" + supported_platforms: + include: ["linux"] + exclude: [] + multi_instance: false + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: + - name: lowest_entropy + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/entropy.conf + metric: system.entropy + info: minimum number of bits of entropy available for the kernel’s random number generator + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: global + description: "" + labels: [] + metrics: + - name: system.entropy + description: Available Entropy + unit: "entropy" + chart_type: line + dimensions: + - name: entropy + - meta: + plugin_name: proc.plugin + module_name: /proc/uptime + monitored_instance: + name: System Uptime + link: "" + categories: + - data-collection.linux-systems.system-metrics + icon_filename: "linuxserver.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - uptime + most_popular: false + overview: + data_collection: + metrics_description: | + The amount of time the system has been up (running). + + Uptime is a critical aspect of overall system performance: + + - **Availability**: Uptime monitoring can show whether a server is consistently available or experiences frequent downtimes. + - **Performance Monitoring**: While server uptime alone doesn't provide detailed performance data, analyzing the duration and frequency of downtimes can help identify patterns or trends. 
+ - **Proactive problem detection**: If server uptime monitoring reveals unexpected downtimes or a decreasing uptime trend, it can serve as an early warning sign of potential problems. + - **Root cause analysis**: When investigating server downtime, the uptime metric alone may not provide enough information to pinpoint the exact cause. + - **Load balancing**: Uptime data can indirectly indicate load balancing issues if certain servers have significantly lower uptimes than others. + - **Optimize maintenance efforts**: Servers with consistently low uptimes or frequent downtimes may require more attention. + - **Compliance requirements**: Server uptime data can be used to demonstrate compliance with regulatory requirements or SLAs that mandate a minimum level of server availability. + method_description: "" + supported_platforms: + include: ["linux"] + exclude: [] + multi_instance: false + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: [] + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: global + description: "" + labels: [] + metrics: + - name: system.uptime + description: System Uptime + unit: "seconds" + chart_type: line + dimensions: + - name: uptime + - meta: + plugin_name: proc.plugin + module_name: /proc/vmstat + monitored_instance: + name: Memory Statistics + link: "" + categories: + - data-collection.linux-systems.memory-metrics + icon_filename: "linuxserver.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - swap + - page 
faults + - oom + - numa + most_popular: false + overview: + data_collection: + metrics_description: | + Linux Virtual memory subsystem. + + Information about memory management, indicating how effectively the kernel allocates and frees + memory resources in response to system demands. + + Monitors page faults, which occur when a process requests a portion of its memory that isn't + immediately available. Monitoring these events can help diagnose inefficiencies in memory management and + provide insights into application behavior. + + Tracks swapping activity — a vital aspect of memory management where the kernel moves data from RAM to + swap space, and vice versa, based on memory demand and usage. It also monitors the utilization of zswap, + a compressed cache for swap pages, and provides insights into its usage and performance implications. + + In the context of virtualized environments, it tracks the ballooning mechanism which is used to balance + memory resources between host and guest systems. + + For systems using NUMA architecture, it provides insights into the local and remote memory accesses, which + can impact the performance based on the memory access times. + + The collector also watches for 'Out of Memory' kills, a drastic measure taken by the system when it runs out + of memory resources. 
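The counters behind these charts come from `/proc/vmstat`, which is a flat list of `name value` lines. A minimal lookup sketch over a read-in buffer (`vmstat_value` is a hypothetical helper, not Netdata API; note that swap counters like `pswpin`/`pswpout` are in pages, so multiply by the page size for bytes):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Find "key value" in a /proc/vmstat-style buffer and return the value,
 * or -1 if the key is absent. Matches only at the start of a line and
 * requires a space after the key, so "pswpin" never matches "pswpout". */
static long long vmstat_value(const char *buf, const char *key) {
    size_t klen = strlen(key);
    const char *p = buf;
    while (p && *p) {
        if (!strncmp(p, key, klen) && p[klen] == ' ')
            return atoll(p + klen + 1);
        p = strchr(p, '\n');
        if (p) p++;
    }
    return -1;
}
```

Returning -1 for absent keys matters in practice: fields such as `oom_kill` or the `thp_*` counters only appear on kernels with the corresponding features enabled, and the collector skips charts whose counters are missing.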
+ method_description: "" + supported_platforms: + include: ["linux"] + exclude: [] + multi_instance: false + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: + - name: 30min_ram_swapped_out + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/swap.conf + metric: mem.swapio + info: percentage of the system RAM swapped in the last 30 minutes + os: "linux freebsd" + - name: oom_kill + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/ram.conf + metric: mem.oom_kill + info: number of out of memory kills in the last 30 minutes + os: "linux" + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: global + description: "" + labels: [] + metrics: + - name: mem.swapio + description: Swap I/O + unit: "KiB/s" + chart_type: area + dimensions: + - name: in + - name: out + - name: system.pgpgio + description: Memory Paged from/to disk + unit: "KiB/s" + chart_type: area + dimensions: + - name: in + - name: out + - name: system.pgfaults + description: Memory Page Faults + unit: "faults/s" + chart_type: line + dimensions: + - name: minor + - name: major + - name: mem.balloon + description: Memory Ballooning Operations + unit: "KiB/s" + chart_type: line + dimensions: + - name: inflate + - name: deflate + - name: migrate + - name: mem.zswapio + description: ZSwap I/O + unit: "KiB/s" + chart_type: area + dimensions: + - name: in + - name: out + - name: mem.ksm_cow + description: KSM Copy On Write Operations + unit: "KiB/s" + chart_type: line + dimensions: + - name: swapin + - name: write + - 
name: mem.thp_faults + description: Transparent Huge Page Fault Allocations + unit: "events/s" + chart_type: line + dimensions: + - name: alloc + - name: fallback + - name: fallback_charge + - name: mem.thp_file + description: Transparent Huge Page File Allocations + unit: "events/s" + chart_type: line + dimensions: + - name: alloc + - name: fallback + - name: mapped + - name: fallback_charge + - name: mem.thp_zero + description: Transparent Huge Zero Page Allocations + unit: "events/s" + chart_type: line + dimensions: + - name: alloc + - name: failed + - name: mem.thp_collapse + description: Transparent Huge Pages Collapsed by khugepaged + unit: "events/s" + chart_type: line + dimensions: + - name: alloc + - name: failed + - name: mem.thp_split + description: Transparent Huge Page Splits + unit: "events/s" + chart_type: line + dimensions: + - name: split + - name: failed + - name: split_pmd + - name: split_deferred + - name: mem.thp_swapout + description: Transparent Huge Pages Swap Out + unit: "events/s" + chart_type: line + dimensions: + - name: swapout + - name: fallback + - name: mem.thp_compact + description: Transparent Huge Pages Compaction + unit: "events/s" + chart_type: line + dimensions: + - name: success + - name: fail + - name: stall + - name: mem.oom_kill + description: Out of Memory Kills + unit: "kills/s" + chart_type: line + dimensions: + - name: kills + - name: mem.numa + description: NUMA events + unit: "events/s" + chart_type: line + dimensions: + - name: local + - name: foreign + - name: interleave + - name: other + - name: pte_updates + - name: huge_pte_updates + - name: hint_faults + - name: hint_faults_local + - name: pages_migrated + - meta: + plugin_name: proc.plugin + module_name: /proc/interrupts + monitored_instance: + name: Interrupts + link: "" + categories: + - data-collection.linux-systems.cpu-metrics + icon_filename: "linuxserver.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + 
description: "" + keywords: + - interrupts + most_popular: false + overview: + data_collection: + metrics_description: | + Monitors `/proc/interrupts`, a file organized by CPU and then by the type of interrupt. + The numbers reported are the counts of the interrupts that have occurred of each type. + + An interrupt is a signal to the processor emitted by hardware or software indicating an event that needs + immediate attention. The processor then interrupts its current activities and executes the interrupt handler + to deal with the event. This is part of the way a computer multitasks and handles concurrent processing. + + The types of interrupts include: + + - **I/O interrupts**: These are caused by I/O devices like the keyboard, mouse, printer, etc. For example, when + you type something on the keyboard, an interrupt is triggered so the processor can handle the new input. + + - **Timer interrupts**: These are generated at regular intervals by the system's timer circuit. It's primarily + used to switch the CPU among different tasks. + + - **Software interrupts**: These are generated by a program requiring disk I/O operations, or other system resources. + + - **Hardware interrupts**: These are caused by hardware conditions such as power failure, overheating, etc. + + Monitoring `/proc/interrupts` can be used for: + + - **Performance tuning**: If an interrupt is happening very frequently, it could be a sign that a device is not + configured correctly, or there is a software bug causing unnecessary interrupts. This could lead to system + performance degradation. + + - **System troubleshooting**: If you're seeing a lot of unexpected interrupts, it could be a sign of a hardware problem. + + - **Understanding system behavior**: More generally, keeping an eye on what interrupts are occurring can help you + understand what your system is doing. It can provide insights into the system's interaction with hardware, + drivers, and other parts of the kernel. 
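Each data row of `/proc/interrupts` looks like `" 18:  12345  678  IO-APIC  18-edge  eth0"`, with one counter column per online CPU followed by a textual description. Summing the columns yields the system-wide count behind `system.interrupts`; the per-column values feed the per-core charts. An illustrative sketch for one row (`sum_irq_row` is a hypothetical helper):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Sum the per-CPU counter columns of one /proc/interrupts row.
 * Stops early when a column fails to parse as a number, which happens
 * when we reach the textual description after the counters. */
static unsigned long long sum_irq_row(const char *line, int ncpus) {
    const char *p = strchr(line, ':');
    if (!p) return 0;          /* header line or malformed row */
    p++;
    unsigned long long total = 0;
    for (int i = 0; i < ncpus; i++) {
        char *end;
        unsigned long long v = strtoull(p, &end, 10);
        if (end == p) break;   /* hit the description column */
        total += v;
        p = end;
    }
    return total;
}
```

Rows such as `MIS:` or `ERR:` carry fewer columns than CPUs, which is why the parse-failure break is needed rather than assuming exactly `ncpus` numbers.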
+ method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: true + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: [] + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: global + description: "" + labels: [] + metrics: + - name: system.interrupts + description: System interrupts + unit: "interrupts/s" + chart_type: stacked + dimensions: + - name: a dimension per device + - name: cpu core + description: "" + labels: + - name: cpu + description: TBD + metrics: + - name: cpu.interrupts + description: CPU interrupts + unit: "interrupts/s" + chart_type: stacked + dimensions: + - name: a dimension per device + - meta: + plugin_name: proc.plugin + module_name: /proc/loadavg + monitored_instance: + name: System Load Average + link: "" + categories: + - data-collection.linux-systems.system-metrics + icon_filename: "linuxserver.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - load + - load average + most_popular: false + overview: + data_collection: + metrics_description: | + The `/proc/loadavg` file provides information about the system load average. + + The load average is a measure of the amount of computational work that a system performs. It is a + representation of the average system load over a period of time. + + This file contains three numbers representing the system load averages for the last 1, 5, and 15 minutes, + respectively. 
It also includes the number of currently runnable processes and the total number of processes. + + Monitoring the load average can be used for: + + - **System performance**: If the load average is too high, it may indicate that your system is overloaded. + On a system with a single CPU, if the load average is 1, it means the single CPU is fully utilized. If the + load averages are consistently higher than the number of CPUs/cores, it may indicate that your system is + overloaded and tasks are waiting for CPU time. + + - **Troubleshooting**: If the load average is unexpectedly high, it can be a sign of a problem. This could be + due to a runaway process, a software bug, or a hardware issue. + + - **Capacity planning**: By monitoring the load average over time, you can understand the trends in your + system's workload. This can help with capacity planning and scaling decisions. + + Remember that the load average not only considers CPU usage, but also includes processes in uninterruptible + sleep, typically waiting for disk I/O. + Therefore, high load averages could be due to I/O contention as well as CPU contention.
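For reference, a `/proc/loadavg` line carries five fields: the three load averages, a `runnable/total` pair of process counts, and the PID most recently assigned. A parsing sketch (the sample line below is invented):

```python
def parse_loadavg(line):
    """Split a /proc/loadavg line into its five fields."""
    load1, load5, load15, entities, last_pid = line.split()
    running, total = entities.split("/")
    return {
        "load1": float(load1),
        "load5": float(load5),
        "load15": float(load15),
        "running": int(running),  # currently runnable scheduling entities
        "total": int(total),      # total scheduling entities in the system
        "last_pid": int(last_pid),
    }
```

For example, `parse_loadavg("0.20 0.18 0.12 1/80 11206")` maps directly onto the `system.load` and `system.active_processes` charts above.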
+ method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: false + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: + - name: load_cpu_number + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/load.conf + metric: system.load + info: number of active CPU cores in the system + os: "linux" + - name: load_average_15 + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/load.conf + metric: system.load + info: system fifteen-minute load average + os: "linux" + - name: load_average_5 + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/load.conf + metric: system.load + info: system five-minute load average + os: "linux" + - name: load_average_1 + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/load.conf + metric: system.load + info: system one-minute load average + os: "linux" + - name: active_processes + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/processes.conf + metric: system.active_processes + info: system process IDs (PID) space utilization + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: global + description: "" + labels: [] + metrics: + - name: system.load + description: System Load Average + unit: "load" + chart_type: line + dimensions: + - name: load1 + - name: load5 + - name: load15 + - name: system.active_processes + description: System Active Processes + unit: "processes" + chart_type: line + dimensions: + - name: active + - meta: + plugin_name: 
proc.plugin + module_name: /proc/pressure + monitored_instance: + name: Pressure Stall Information + link: "" + categories: + - data-collection.linux-systems.pressure-metrics + icon_filename: "linuxserver.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - pressure + most_popular: false + overview: + data_collection: + metrics_description: | + Introduced in Linux kernel 4.20, `/proc/pressure` exposes Pressure Stall Information (PSI). PSI is a feature + that tracks the amount of time tasks are stalled due to resource contention, such as CPU, memory, or I/O. + + The collector monitors one file per resource: + + - **cpu**: Tracks the amount of time tasks are stalled due to CPU contention. + - **memory**: Tracks the amount of time tasks are stalled due to memory contention. + - **io**: Tracks the amount of time tasks are stalled due to I/O contention. + - **irq**: Tracks the amount of time tasks are stalled due to IRQ contention. + + Each of them provides stall-time averages over the last 10 seconds, 1 minute, and 5 minutes, as well as the + total stall time. + + Monitoring the `/proc/pressure` files can provide important insights into system performance and capacity planning: + + - **Identifying resource contention**: If these metrics are consistently high, it indicates that tasks are + frequently being stalled due to lack of resources, which can significantly degrade system performance. + + - **Troubleshooting performance issues**: If a system is experiencing performance issues, these metrics can + help identify whether resource contention is the cause. + + - **Capacity planning**: By monitoring these metrics over time, you can understand trends in resource + utilization and make informed decisions about when to add more resources to your system.
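Each line in a pressure file has the form `some avg10=… avg60=… avg300=… total=…` (plus a `full` line where applicable), with `total` being the cumulative stall time in microseconds. A minimal parsing sketch (sample values invented):

```python
def parse_psi_line(line):
    """Parse one 'some ...' or 'full ...' line of a /proc/pressure file."""
    kind, *pairs = line.split()
    values = dict(pair.split("=") for pair in pairs)
    return kind, {
        "avg10": float(values["avg10"]),
        "avg60": float(values["avg60"]),
        "avg300": float(values["avg300"]),
        "total_us": int(values["total"]),  # cumulative stall time, microseconds
    }
```

The `avg10`/`avg60`/`avg300` values feed the percentage charts below, while `total` feeds the stall-time charts.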
+ method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: false + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: [] + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: global + description: "" + labels: [] + metrics: + - name: system.cpu_some_pressure + description: CPU some pressure + unit: "percentage" + chart_type: line + dimensions: + - name: some10 + - name: some60 + - name: some300 + - name: system.cpu_some_pressure_stall_time + description: CPU some pressure stall time + unit: "ms" + chart_type: line + dimensions: + - name: time + - name: system.cpu_full_pressure + description: CPU full pressure + unit: "percentage" + chart_type: line + dimensions: + - name: some10 + - name: some60 + - name: some300 + - name: system.cpu_full_pressure_stall_time + description: CPU full pressure stall time + unit: "ms" + chart_type: line + dimensions: + - name: time + - name: system.memory_some_pressure + description: Memory some pressure + unit: "percentage" + chart_type: line + dimensions: + - name: some10 + - name: some60 + - name: some300 + - name: system.memory_some_pressure_stall_time + description: Memory some pressure stall time + unit: "ms" + chart_type: line + dimensions: + - name: time + - name: system.memory_full_pressure + description: Memory full pressure + unit: "percentage" + chart_type: line + dimensions: + - name: some10 + - name: some60 + - name: some300 + - name: system.memory_full_pressure_stall_time + description: Memory full pressure stall time + unit: "ms" + 
chart_type: line + dimensions: + - name: time + - name: system.io_some_pressure + description: I/O some pressure + unit: "percentage" + chart_type: line + dimensions: + - name: some10 + - name: some60 + - name: some300 + - name: system.io_some_pressure_stall_time + description: I/O some pressure stall time + unit: "ms" + chart_type: line + dimensions: + - name: time + - name: system.io_full_pressure + description: I/O full pressure + unit: "percentage" + chart_type: line + dimensions: + - name: some10 + - name: some60 + - name: some300 + - name: system.io_full_pressure_stall_time + description: I/O full pressure stall time + unit: "ms" + chart_type: line + dimensions: + - name: time + - meta: + plugin_name: proc.plugin + module_name: /proc/softirqs + monitored_instance: + name: SoftIRQ statistics + link: "" + categories: + - data-collection.linux-systems.cpu-metrics + icon_filename: "linuxserver.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - softirqs + - interrupts + most_popular: false + overview: + data_collection: + metrics_description: | + In the Linux kernel, handling of hardware interrupts is split into two halves: the top half and the bottom half. + The top half is the routine that responds immediately to an interrupt, while the bottom half is deferred to be processed later. + + Softirqs are a mechanism in the Linux kernel used to handle the bottom halves of interrupts, which can be + deferred and processed later in a context where it's safe to enable interrupts. + + The actual work of handling the interrupt is offloaded to a softirq and executed later when the system + decides it's a good time to process them. This helps to keep the system responsive by not blocking the top + half for too long, which could lead to missed interrupts. + + Monitoring `/proc/softirqs` is useful for: + + - **Performance tuning**: A high rate of softirqs could indicate a performance issue.
For instance, a high + rate of network softirqs (`NET_RX` and `NET_TX`) could indicate a network performance issue. + + - **Troubleshooting**: If a system is behaving unexpectedly, checking the softirqs could provide clues about + what is going on. For example, a sudden increase in block device softirqs (BLOCK) might indicate a problem + with a disk. + + - **Understanding system behavior**: Knowing what types of softirqs are happening can help you understand what + your system is doing, particularly in terms of how it's interacting with hardware and how it's handling + interrupts. + method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: true + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: [] + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: global + description: "" + labels: [] + metrics: + - name: system.softirqs + description: System softirqs + unit: "softirqs/s" + chart_type: stacked + dimensions: + - name: a dimension per softirq + - name: cpu core + description: "" + labels: + - name: cpu + description: TBD + metrics: + - name: cpu.softirqs + description: CPU softirqs + unit: "softirqs/s" + chart_type: stacked + dimensions: + - name: a dimension per softirq + - meta: + plugin_name: proc.plugin + module_name: /proc/net/softnet_stat + monitored_instance: + name: Softnet Statistics + link: "" + categories: + - data-collection.linux-systems.network-metrics + icon_filename: "linuxserver.svg" + related_resources: + integrations: + list: [] + 
info_provided_to_referring_integrations: + description: "" + keywords: + - softnet + most_popular: false + overview: + data_collection: + metrics_description: | + `/proc/net/softnet_stat` provides statistics that relate to the handling of network packets by softirq. + + It provides information about: + + - Total number of processed packets (`processed`). + - Packets dropped because the network device backlog (`net.core.netdev_max_backlog`) was full (`dropped`). + - Times `net_rx_action` ran out of quota or time and stopped with work remaining (`squeezed`). + - Times the CPU was woken up by an inter-processor interrupt to process Receive Packet Steering packets (`received_rps`). + - Times the flow limit was reached (`flow_limit_count`). + + Monitoring the `/proc/net/softnet_stat` file can be useful for: + + - **Network performance monitoring**: By tracking the total number of processed packets and how many packets + were dropped, you can gain insights into your system's network performance. + + - **Troubleshooting**: If you're experiencing network-related issues, this collector can provide valuable clues. + For instance, a high number of dropped packets may indicate a network problem. + + - **Capacity planning**: If your system is consistently processing near its maximum capacity of network + packets, it might be time to consider upgrading your network infrastructure.
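Each row of `/proc/net/softnet_stat` corresponds to one CPU and consists of hexadecimal counters; the first three columns are the processed, dropped, and squeezed counts. A parsing sketch (the sample row used in testing is invented):

```python
def parse_softnet_stat(text):
    """Parse /proc/net/softnet_stat: one row per CPU, hexadecimal fields.
    Only the first three columns are picked out here."""
    rows = []
    for line in text.splitlines():
        fields = [int(field, 16) for field in line.split()]
        rows.append({
            "processed": fields[0],  # packets handled by the softirq
            "dropped": fields[1],    # packets dropped because the backlog was full
            "squeezed": fields[2],   # times work remained when budget/time ran out
        })
    return rows
```

Note that the hexadecimal base is easy to miss: a row showing `0000272d` means 10029 packets, not 272 thousand.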
+ method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: true + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: + - name: 1min_netdev_backlog_exceeded + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/softnet.conf + metric: system.softnet_stat + info: average number of dropped packets in the last minute due to exceeded net.core.netdev_max_backlog + os: "linux" + - name: 1min_netdev_budget_ran_outs + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/softnet.conf + metric: system.softnet_stat + info: + average number of times ksoftirq ran out of sysctl net.core.netdev_budget or net.core.netdev_budget_usecs with work remaining over the last + minute (this can be a cause for dropped packets) + os: "linux" + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: global + description: "" + labels: [] + metrics: + - name: system.softnet_stat + description: System softnet_stat + unit: "events/s" + chart_type: line + dimensions: + - name: processed + - name: dropped + - name: squeezed + - name: received_rps + - name: flow_limit_count + - name: cpu core + description: "" + labels: [] + metrics: + - name: cpu.softnet_stat + description: CPU softnet_stat + unit: "events/s" + chart_type: line + dimensions: + - name: processed + - name: dropped + - name: squeezed + - name: received_rps + - name: flow_limit_count + - meta: + plugin_name: proc.plugin + module_name: /proc/meminfo + monitored_instance: + name: Memory Usage + link: "" + categories: + 
- data-collection.linux-systems.memory-metrics + icon_filename: "linuxserver.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - memory + - ram + - available + - committed + most_popular: false + overview: + data_collection: + metrics_description: | + `/proc/meminfo` provides detailed information about the system's current memory usage. It includes information + about different types of memory: RAM, Swap, ZSwap, HugePages, Transparent HugePages (THP), kernel memory, + SLAB memory, memory mappings, and more. + + Monitoring `/proc/meminfo` can be useful for: + + - **Performance Tuning**: Understanding your system's memory usage can help you make decisions about system + tuning and optimization. For example, if your system is frequently low on free memory, it might benefit + from more RAM. + + - **Troubleshooting**: If your system is experiencing problems, `/proc/meminfo` can provide clues about + whether memory usage is a factor. For example, if your system is slow and swap usage is high, it could + mean that your system is swapping out a lot of memory to disk, which can degrade performance. + + - **Capacity Planning**: By monitoring memory usage over time, you can understand trends and make informed + decisions about future capacity needs.
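The file is a list of `Name: value kB` pairs, so deriving a chart like `system.ram` is mostly subtraction. A sketch with invented sample values (the collector's exact accounting may differ, e.g. around reclaimable slab memory):

```python
# Invented excerpt of /proc/meminfo; real files have many more fields.
SAMPLE_MEMINFO = """\
MemTotal:       16326428 kB
MemFree:         8196992 kB
MemAvailable:   12103192 kB
Buffers:          517852 kB
Cached:          3749400 kB
"""

def parse_meminfo(text):
    """Return /proc/meminfo values keyed by field name (in KiB)."""
    values = {}
    for line in text.splitlines():
        key, rest = line.split(":", 1)
        values[key] = int(rest.split()[0])
    return values

info = parse_meminfo(SAMPLE_MEMINFO)
# A rough "used" figure in the spirit of the system.ram chart:
used_kib = info["MemTotal"] - info["MemFree"] - info["Buffers"] - info["Cached"]
```

`MemAvailable`, charted as `mem.available`, is the kernel's own estimate of memory usable by new workloads without swapping, and is usually more meaningful than "free".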
+ method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: false + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: + - name: ram_in_use + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/ram.conf + metric: system.ram + info: system memory utilization + os: "linux" + - name: ram_available + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/ram.conf + metric: mem.available + info: percentage of estimated amount of RAM available for userspace processes, without causing swapping + os: "linux" + - name: used_swap + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/swap.conf + metric: mem.swap + info: swap memory utilization + os: "linux freebsd" + - name: 1hour_memory_hw_corrupted + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/memory.conf + metric: mem.hwcorrupt + info: amount of memory corrupted due to a hardware failure + os: "linux" + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: global + description: "" + labels: [] + metrics: + - name: system.ram + description: System RAM + unit: "MiB" + chart_type: stacked + dimensions: + - name: free + - name: used + - name: cached + - name: buffers + - name: mem.available + description: Available RAM for applications + unit: "MiB" + chart_type: area + dimensions: + - name: avail + - name: mem.swap + description: System Swap + unit: "MiB" + chart_type: stacked + dimensions: + - name: free + - name: used + - name: mem.swap_cached + 
description: Swap Memory Cached in RAM + unit: "MiB" + chart_type: stacked + dimensions: + - name: cached + - name: mem.zswap + description: Zswap Usage + unit: "MiB" + chart_type: stacked + dimensions: + - name: in-ram + - name: on-disk + - name: mem.hwcorrupt + description: Corrupted Memory detected by ECC + unit: "MiB" + chart_type: line + dimensions: + - name: HardwareCorrupted + - name: mem.commited + description: Committed (Allocated) Memory + unit: "MiB" + chart_type: area + dimensions: + - name: Commited_AS + - name: mem.writeback + description: Writeback Memory + unit: "MiB" + chart_type: line + dimensions: + - name: Dirty + - name: Writeback + - name: FuseWriteback + - name: NfsWriteback + - name: Bounce + - name: mem.kernel + description: Memory Used by Kernel + unit: "MiB" + chart_type: stacked + dimensions: + - name: Slab + - name: KernelStack + - name: PageTables + - name: VmallocUsed + - name: Percpu + - name: mem.slab + description: Reclaimable Kernel Memory + unit: "MiB" + chart_type: stacked + dimensions: + - name: reclaimable + - name: unreclaimable + - name: mem.hugepages + description: Dedicated HugePages Memory + unit: "MiB" + chart_type: stacked + dimensions: + - name: free + - name: used + - name: surplus + - name: reserved + - name: mem.thp + description: Transparent HugePages Memory + unit: "MiB" + chart_type: stacked + dimensions: + - name: anonymous + - name: shmem + - name: mem.thp_details + description: Details of Transparent HugePages Usage + unit: "MiB" + chart_type: line + dimensions: + - name: ShmemPmdMapped + - name: FileHugePages + - name: FilePmdMapped + - name: mem.reclaiming + description: Memory Reclaiming + unit: "MiB" + chart_type: line + dimensions: + - name: Active + - name: Inactive + - name: Active(anon) + - name: Inactive(anon) + - name: Active(file) + - name: Inactive(file) + - name: Unevictable + - name: Mlocked + - name: mem.high_low + description: High and Low Used and Free Memory Areas + unit: "MiB" + chart_type: 
stacked + dimensions: + - name: high_used + - name: low_used + - name: high_free + - name: low_free + - name: mem.cma + description: Contiguous Memory Allocator (CMA) Memory + unit: "MiB" + chart_type: stacked + dimensions: + - name: used + - name: free + - name: mem.directmaps + description: Direct Memory Mappings + unit: "MiB" + chart_type: stacked + dimensions: + - name: 4k + - name: 2m + - name: 4m + - name: 1g + - meta: + plugin_name: proc.plugin + module_name: /proc/pagetypeinfo + monitored_instance: + name: Page types + link: "" + categories: + - data-collection.linux-systems.memory-metrics + icon_filename: "microchip.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - memory page types + most_popular: false + overview: + data_collection: + metrics_description: "This integration provides metrics about the system's memory page types" + method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: false + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: [] + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: global + description: "" + labels: [] + metrics: + - name: mem.pagetype_global + description: System orders available + unit: "B" + chart_type: stacked + dimensions: + - name: a dimension per pagesize + - name: node, zone, type + description: "" + labels: + - name: node_id + description: TBD + - name: node_zone + description: TBD + - name: node_type + description: TBD + metrics: + - name: 
mem.pagetype + description: pagetype_Node{node}_{zone}_{type} + unit: "B" + chart_type: stacked + dimensions: + - name: a dimension per pagesize + - meta: + plugin_name: proc.plugin + module_name: /sys/devices/system/edac/mc + monitored_instance: + name: Memory modules (DIMMs) + link: "" + categories: + - data-collection.linux-systems.memory-metrics + icon_filename: "microchip.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - edac + - ecc + - dimm + - ram + - hardware + most_popular: false + overview: + data_collection: + metrics_description: | + The Error Detection and Correction (EDAC) subsystem detects and reports errors in the system's memory, + primarily ECC (Error-Correcting Code) memory errors. + + The collector provides data for: + + - Per memory controller (MC): correctable and uncorrectable errors. These can be of two kinds: + - errors related to a DIMM + - errors that cannot be associated with a DIMM + + - Per memory DIMM: correctable and uncorrectable errors. There are two kinds of memory controllers: + - memory controllers that can identify the physical DIMMs and report errors directly for them, + - memory controllers that report errors for memory address ranges that can be linked to DIMMs. + In this case the DIMMs reported may be more than the physical DIMMs installed.
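These counters surface in sysfs as small one-value files under `/sys/devices/system/edac/mc/`. The sketch below stands in for that layout with a plain dict; all paths and numbers are illustrative:

```python
# Stand-in for the EDAC sysfs tree: one directory per memory controller
# (mc0, mc1, ...) containing one-value count files, plus per-DIMM
# subdirectories. All paths and values here are illustrative.
SYSFS_EDAC = {
    "mc0/ce_count": 3,         # correctable errors attributed to a DIMM
    "mc0/ue_count": 0,         # uncorrectable errors attributed to a DIMM
    "mc0/ce_noinfo_count": 1,  # correctable errors with no DIMM information
    "mc0/ue_noinfo_count": 0,  # uncorrectable errors with no DIMM information
    "mc0/dimm0/dimm_ce_count": 2,
    "mc0/dimm0/dimm_ue_count": 0,
}

def controller_errors(sysfs, mc):
    """Collect the four per-controller error counters for one controller."""
    kinds = ("ce_count", "ue_count", "ce_noinfo_count", "ue_noinfo_count")
    return {kind: sysfs[f"{mc}/{kind}"] for kind in kinds}
```

The four counters map directly onto the `correctable`, `uncorrectable`, `correctable_noinfo` and `uncorrectable_noinfo` dimensions of `mem.edac_mc_errors` below.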
+ method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: true + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: + - name: ecc_memory_mc_noinfo_correctable + metric: mem.edac_mc_errors + info: memory controller ${label:controller} ECC correctable errors (unknown DIMM slot) + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/memory.conf + - name: ecc_memory_mc_noinfo_uncorrectable + metric: mem.edac_mc_errors + info: memory controller ${label:controller} ECC uncorrectable errors (unknown DIMM slot) + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/memory.conf + - name: ecc_memory_dimm_correctable + metric: mem.edac_mc_dimm_errors + info: DIMM ${label:dimm} controller ${label:controller} (location ${label:dimm_location}) ECC correctable errors + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/memory.conf + - name: ecc_memory_dimm_uncorrectable + metric: mem.edac_mc_dimm_errors + info: DIMM ${label:dimm} controller ${label:controller} (location ${label:dimm_location}) ECC uncorrectable errors + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/memory.conf + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: memory controller + description: These metrics refer to the memory controller. + labels: + - name: controller + description: "[mcX](https://www.kernel.org/doc/html/v5.0/admin-guide/ras.html#mcx-directories) directory name of this memory controller." 
+ - name: mc_name + description: Memory controller type. + - name: size_mb + description: The amount of memory in megabytes that this memory controller manages. + - name: max_location + description: Last available memory slot in this memory controller. + metrics: + - name: mem.edac_mc_errors + description: Memory Controller (MC) Error Detection And Correction (EDAC) Errors + unit: errors + chart_type: line + dimensions: + - name: correctable + - name: uncorrectable + - name: correctable_noinfo + - name: uncorrectable_noinfo + - name: memory module + description: These metrics refer to the memory module (or rank, [depends on the memory controller](https://www.kernel.org/doc/html/v5.0/admin-guide/ras.html#f5)). + labels: + - name: controller + description: "[mcX](https://www.kernel.org/doc/html/v5.0/admin-guide/ras.html#mcx-directories) directory name of this memory controller." + - name: dimm + description: "[dimmX or rankX](https://www.kernel.org/doc/html/v5.0/admin-guide/ras.html#dimmx-or-rankx-directories) directory name of this memory module." + - name: dimm_dev_type + description: Type of DRAM device used in this memory module. For example, x1, x2, x4, x8. + - name: dimm_edac_mode + description: Used type of error detection and correction. For example, S4ECD4ED would mean a Chipkill with x4 DRAM. + - name: dimm_label + description: Label assigned to this memory module. + - name: dimm_location + description: Location of the memory module. + - name: dimm_mem_type + description: Type of the memory module. + - name: size + description: The amount of memory in megabytes that this memory module manages. 
+ metrics: + - name: mem.edac_mc_dimm_errors + description: DIMM Error Detection And Correction (EDAC) Errors + unit: errors + chart_type: line + dimensions: + - name: correctable + - name: uncorrectable + - meta: + plugin_name: proc.plugin + module_name: /sys/devices/system/node + monitored_instance: + name: Non-Uniform Memory Access + link: "" + categories: + - data-collection.linux-systems.memory-metrics + icon_filename: "linuxserver.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - numa + most_popular: false + overview: + data_collection: + metrics_description: | + Information about NUMA (Non-Uniform Memory Access) nodes on the system. + + NUMA is a method of configuring a cluster of microprocessors in a multiprocessing system so that they can + share memory locally, improving performance and the ability of the system to be expanded. NUMA is used in a + symmetric multiprocessing (SMP) system. + + In a NUMA system, processors, memory, and I/O devices are grouped together into cells, also known as nodes. + Each node has its own memory and set of I/O devices, and one or more processors. While a processor can access + memory in any of the nodes, it does so faster when accessing memory within its own node. + + The collector provides statistics on memory allocations for processes running on the NUMA nodes, revealing the + efficiency of memory allocations in multi-node systems.
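Per-node counters come from files such as `/sys/devices/system/node/node0/numastat`, which hold simple `counter value` pairs. A parsing sketch (sample values invented):

```python
# Invented contents of a nodeN/numastat file.
SAMPLE_NUMASTAT = """\
numa_hit 1520925
numa_miss 0
numa_foreign 0
interleave_hit 13570
local_node 1520925
other_node 0
"""

def parse_numastat(text):
    """Parse a nodeN/numastat file of 'counter value' pairs."""
    stats = {}
    for line in text.splitlines():
        name, value = line.split()
        stats[name] = int(value)
    return stats
```

A growing `numa_miss` or `other_node` relative to `numa_hit` suggests processes are frequently allocating memory on remote nodes, which is slower than node-local access.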
+ method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: true + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: [] + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: numa node + description: "" + labels: + - name: numa_node + description: TBD + metrics: + - name: mem.numa_nodes + description: NUMA events + unit: "events/s" + chart_type: line + dimensions: + - name: hit + - name: miss + - name: local + - name: foreign + - name: interleave + - name: other + - meta: + plugin_name: proc.plugin + module_name: /sys/kernel/mm/ksm + monitored_instance: + name: Kernel Same-Page Merging + link: "" + categories: + - data-collection.linux-systems.memory-metrics + icon_filename: "microchip.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - ksm + - samepage + - merging + most_popular: false + overview: + data_collection: + metrics_description: | + Kernel Samepage Merging (KSM) is a memory-saving feature in Linux that enables the kernel to examine the + memory of different processes and identify identical pages. It then merges these identical pages into a + single page that the processes share. This is particularly useful for virtualization, where multiple virtual + machines might be running the same operating system or applications and have many identical pages. + + The collector provides information about the operation and effectiveness of KSM on your system. 
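KSM's effectiveness can be estimated from the page counters it exposes under `/sys/kernel/mm/ksm/`: `pages_shared` counts the merged pages KSM keeps, while `pages_sharing` counts how many process pages map onto them. A back-of-the-envelope sketch (the 4 KiB page size is an assumption; real code would query the system):

```python
PAGE_SIZE = 4096  # assumed page size; real code would query the system

def ksm_savings_bytes(pages_shared, pages_sharing):
    """Estimate memory saved by KSM.

    pages_shared  -- merged pages KSM keeps in memory
    pages_sharing -- process pages that map onto those merged pages;
                     each of them no longer needs a frame of its own
    """
    return (pages_sharing - pages_shared) * PAGE_SIZE
```

For example, 100 shared pages referenced by 500 sharing pages means 400 page frames were freed. A high `pages_sharing`-to-`pages_shared` ratio indicates KSM is finding many duplicates.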
+ method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: false + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: [] + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: global + description: "" + labels: [] + metrics: + - name: mem.ksm + description: Kernel Same Page Merging + unit: "MiB" + chart_type: stacked + dimensions: + - name: shared + - name: unshared + - name: sharing + - name: volatile + - name: mem.ksm_savings + description: Kernel Same Page Merging Savings + unit: "MiB" + chart_type: area + dimensions: + - name: savings + - name: offered + - name: mem.ksm_ratios + description: Kernel Same Page Merging Effectiveness + unit: "percentage" + chart_type: line + dimensions: + - name: savings + - meta: + plugin_name: proc.plugin + module_name: /sys/block/zram + monitored_instance: + name: ZRAM + link: "" + categories: + - data-collection.linux-systems.memory-metrics + icon_filename: "microchip.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - zram + most_popular: false + overview: + data_collection: + metrics_description: | + zRAM, or compressed RAM, is a block device backed by a portion of your system's RAM. + The data written to this block device is compressed and stored in memory. + + The collector provides information about the operation and effectiveness of zRAM on your system.
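A device's compression effectiveness can be read from `/sys/block/zramN/mm_stat`, whose first three space-separated fields are the original data size, the compressed data size, and the total memory used, all in bytes. A parsing sketch (sample numbers invented):

```python
def parse_mm_stat(line):
    """Pick apart the first three fields of /sys/block/zramN/mm_stat (bytes)."""
    fields = [int(field) for field in line.split()]
    orig, compr, mem_used = fields[0], fields[1], fields[2]
    return {
        "orig": orig,          # uncompressed size of the stored data
        "compr": compr,        # compressed size of the stored data
        "mem_used": mem_used,  # total memory consumed by the device
        "ratio": orig / compr if compr else 0.0,
    }
```

The original-to-compressed ratio corresponds to the `mem.zram_ratio` chart; `mem_used` minus `compr` approximates the metadata overhead shown in `mem.zram_usage`.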
+ method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: true + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: [] + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: zram device + description: "" + labels: + - name: device + description: TBD + metrics: + - name: mem.zram_usage + description: ZRAM Memory Usage + unit: "MiB" + chart_type: area + dimensions: + - name: compressed + - name: metadata + - name: mem.zram_savings + description: ZRAM Memory Savings + unit: "MiB" + chart_type: area + dimensions: + - name: savings + - name: original + - name: mem.zram_ratio + description: ZRAM Compression Ratio (original to compressed) + unit: "ratio" + chart_type: line + dimensions: + - name: ratio + - name: mem.zram_efficiency + description: ZRAM Efficiency + unit: "percentage" + chart_type: line + dimensions: + - name: percent + - meta: + plugin_name: proc.plugin + module_name: ipc + monitored_instance: + name: Inter Process Communication + link: "" + categories: + - data-collection.linux-systems.ipc-metrics + icon_filename: "network-wired.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - ipc + - semaphores + - shared memory + most_popular: false + overview: + data_collection: + metrics_description: | + IPC stands for Inter-Process Communication. It is a mechanism which allows processes to communicate with each + other and synchronize their actions. 
+ + This collector exposes information about: + + - Message Queues: This allows messages to be exchanged between processes. It's a more flexible method that + allows messages to be placed onto a queue and read at a later time. + + - Shared Memory: This method allows for the fastest form of IPC because processes can exchange data by + reading/writing into shared memory segments. + + - Semaphores: They are used to synchronize the operations performed by independent processes. So, if multiple + processes are trying to access a single shared resource, semaphores can ensure that only one process + accesses the resource at a given time. + method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: false + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: + - name: semaphores_used + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/ipc.conf + metric: system.ipc_semaphores + info: IPC semaphore utilization + os: "linux" + - name: semaphore_arrays_used + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/ipc.conf + metric: system.ipc_semaphore_arrays + info: IPC semaphore arrays utilization + os: "linux" + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: global + description: "" + labels: [] + metrics: + - name: system.ipc_semaphores + description: IPC Semaphores + unit: "semaphores" + chart_type: area + dimensions: + - name: semaphores + - name: system.ipc_semaphore_arrays + description: IPC Semaphore Arrays + unit: "arrays" + chart_type: area 
+ dimensions: + - name: arrays + - name: system.message_queue_message + description: IPC Message Queue Number of Messages + unit: "messages" + chart_type: stacked + dimensions: + - name: a dimension per queue + - name: system.message_queue_bytes + description: IPC Message Queue Used Bytes + unit: "bytes" + chart_type: stacked + dimensions: + - name: a dimension per queue + - name: system.shared_memory_segments + description: IPC Shared Memory Number of Segments + unit: "segments" + chart_type: stacked + dimensions: + - name: segments + - name: system.shared_memory_bytes + description: IPC Shared Memory Used Bytes + unit: "bytes" + chart_type: stacked + dimensions: + - name: bytes + - meta: + plugin_name: proc.plugin + module_name: /proc/diskstats + monitored_instance: + name: Disk Statistics + link: "" + categories: + - data-collection.linux-systems.disk-metrics + icon_filename: "hard-drive.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - disk + - disks + - io + - bcache + - block devices + most_popular: false + overview: + data_collection: + metrics_description: | + Detailed statistics for each of your system's disk devices and partitions. + The data is reported by the kernel and can be used to monitor disk activity on a Linux system. + + Get valuable insight into how your disks are performing and where potential bottlenecks might be. 
+ method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: true + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: + - name: 10min_disk_backlog + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/disks.conf + metric: disk.backlog + info: average backlog size of the ${label:device} disk over the last 10 minutes + os: "linux" + - name: 10min_disk_utilization + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/disks.conf + metric: disk.util + info: average percentage of time ${label:device} disk was busy over the last 10 minutes + os: "linux freebsd" + - name: bcache_cache_dirty + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/bcache.conf + metric: disk.bcache_cache_alloc + info: percentage of cache space used for dirty data and metadata (this usually means your SSD cache is too small) + - name: bcache_cache_errors + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/bcache.conf + metric: disk.bcache_cache_read_races + info: + number of times data was read from the cache, the bucket was reused and invalidated in the last 10 minutes (when this occurs the data is + reread from the backing device) + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: global + description: "" + labels: [] + metrics: + - name: system.io + description: Disk I/O + unit: "KiB/s" + chart_type: area + dimensions: + - name: in + - name: out + - name: disk + description: "" + labels: + - name: device + description: TBD 
+ - name: mount_point + description: TBD + - name: device_type + description: TBD + metrics: + - name: disk.io + description: Disk I/O Bandwidth + unit: "KiB/s" + chart_type: area + dimensions: + - name: reads + - name: writes + - name: disk_ext.io + description: Amount of Discarded Data + unit: "KiB/s" + chart_type: area + dimensions: + - name: discards + - name: disk.ops + description: Disk Completed I/O Operations + unit: "operations/s" + chart_type: line + dimensions: + - name: reads + - name: writes + - name: disk_ext.ops + description: Disk Completed Extended I/O Operations + unit: "operations/s" + chart_type: line + dimensions: + - name: discards + - name: flushes + - name: disk.qops + description: Disk Current I/O Operations + unit: "operations" + chart_type: line + dimensions: + - name: operations + - name: disk.backlog + description: Disk Backlog + unit: "milliseconds" + chart_type: area + dimensions: + - name: backlog + - name: disk.busy + description: Disk Busy Time + unit: "milliseconds" + chart_type: area + dimensions: + - name: busy + - name: disk.util + description: Disk Utilization Time + unit: "% of time working" + chart_type: area + dimensions: + - name: utilization + - name: disk.mops + description: Disk Merged Operations + unit: "merged operations/s" + chart_type: line + dimensions: + - name: reads + - name: writes + - name: disk_ext.mops + description: Disk Merged Discard Operations + unit: "merged operations/s" + chart_type: line + dimensions: + - name: discards + - name: disk.iotime + description: Disk Total I/O Time + unit: "milliseconds/s" + chart_type: line + dimensions: + - name: reads + - name: writes + - name: disk_ext.iotime + description: Disk Total I/O Time for Extended Operations + unit: "milliseconds/s" + chart_type: line + dimensions: + - name: discards + - name: flushes + - name: disk.await + description: Average Completed I/O Operation Time + unit: "milliseconds/operation" + chart_type: line + dimensions: + - name: reads + - 
name: writes + - name: disk_ext.await + description: Average Completed Extended I/O Operation Time + unit: "milliseconds/operation" + chart_type: line + dimensions: + - name: discards + - name: flushes + - name: disk.avgsz + description: Average Completed I/O Operation Bandwidth + unit: "KiB/operation" + chart_type: area + dimensions: + - name: reads + - name: writes + - name: disk_ext.avgsz + description: Average Amount of Discarded Data + unit: "KiB/operation" + chart_type: area + dimensions: + - name: discards + - name: disk.svctm + description: Average Service Time + unit: "milliseconds/operation" + chart_type: line + dimensions: + - name: svctm + - name: disk.bcache_cache_alloc + description: BCache Cache Allocations + unit: "percentage" + chart_type: stacked + dimensions: + - name: ununsed + - name: dirty + - name: clean + - name: metadata + - name: undefined + - name: disk.bcache_hit_ratio + description: BCache Cache Hit Ratio + unit: "percentage" + chart_type: line + dimensions: + - name: 5min + - name: 1hour + - name: 1day + - name: ever + - name: disk.bcache_rates + description: BCache Rates + unit: "KiB/s" + chart_type: area + dimensions: + - name: congested + - name: writeback + - name: disk.bcache_size + description: BCache Cache Sizes + unit: "MiB" + chart_type: area + dimensions: + - name: dirty + - name: disk.bcache_usage + description: BCache Cache Usage + unit: "percentage" + chart_type: area + dimensions: + - name: avail + - name: disk.bcache_cache_read_races + description: BCache Cache Read Races + unit: "operations/s" + chart_type: line + dimensions: + - name: races + - name: errors + - name: disk.bcache + description: BCache Cache I/O Operations + unit: "operations/s" + chart_type: line + dimensions: + - name: hits + - name: misses + - name: collisions + - name: readaheads + - name: disk.bcache_bypass + description: BCache Cache Bypass I/O Operations + unit: "operations/s" + chart_type: line + dimensions: + - name: hits + - name: misses + - 
meta: + plugin_name: proc.plugin + module_name: /proc/mdstat + monitored_instance: + name: MD RAID + link: "" + categories: + - data-collection.linux-systems.disk-metrics + icon_filename: "hard-drive.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - raid + - mdadm + - mdstat + most_popular: false + overview: + data_collection: + metrics_description: "This integration monitors the status of MD RAID devices." + method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: true + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: + - name: mdstat_last_collected + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/mdstat.conf + metric: md.disks + info: number of seconds since the last successful data collection + - name: mdstat_disks + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/mdstat.conf + metric: md.disks + info: + number of devices in the down state for the ${label:device} ${label:raid_level} array. Any number > 0 indicates that the array is degraded. 
+ - name: mdstat_mismatch_cnt + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/mdstat.conf + metric: md.mismatch_cnt + info: number of unsynchronized blocks for the ${label:device} ${label:raid_level} array + - name: mdstat_nonredundant_last_collected + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/mdstat.conf + metric: md.nonredundant + info: number of seconds since the last successful data collection + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: global + description: "" + labels: [] + metrics: + - name: md.health + description: Faulty Devices In MD + unit: "failed disks" + chart_type: line + dimensions: + - name: a dimension per md array + - name: md array + description: "" + labels: + - name: device + description: TBD + - name: raid_level + description: TBD + metrics: + - name: md.disks + description: Disks Stats + unit: "disks" + chart_type: stacked + dimensions: + - name: inuse + - name: down + - name: md.mismatch_cnt + description: Mismatch Count + unit: "unsynchronized blocks" + chart_type: line + dimensions: + - name: count + - name: md.status + description: Current Status + unit: "percent" + chart_type: line + dimensions: + - name: check + - name: resync + - name: recovery + - name: reshape + - name: md.expected_time_until_operation_finish + description: Approximate Time Until Finish + unit: "seconds" + chart_type: line + dimensions: + - name: finish_in + - name: md.operation_speed + description: Operation Speed + unit: "KiB/s" + chart_type: line + dimensions: + - name: speed + - name: md.nonredundant + description: Nonredundant Array Availability + unit: "boolean" + chart_type: line + dimensions: + - name: available + - meta: + plugin_name: proc.plugin + module_name: /proc/net/dev + monitored_instance: + name: Network interfaces + link: "" + categories: + - data-collection.linux-systems.network-metrics + icon_filename: 
"network-wired.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - network interfaces + most_popular: false + overview: + data_collection: + metrics_description: "Monitor network interface metrics about bandwidth, state, errors and more." + method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: true + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: + - name: interface_speed + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/net.conf + metric: net.net + info: network interface ${label:device} current speed + os: "*" + - name: 1m_received_traffic_overflow + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/net.conf + metric: net.net + info: average inbound utilization for the network interface ${label:device} over the last minute + os: "linux" + - name: 1m_sent_traffic_overflow + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/net.conf + metric: net.net + info: average outbound utilization for the network interface ${label:device} over the last minute + os: "linux" + - name: inbound_packets_dropped_ratio + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/net.conf + metric: net.drops + info: ratio of inbound dropped packets for the network interface ${label:device} over the last 10 minutes + os: "linux" + - name: outbound_packets_dropped_ratio + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/net.conf + metric: net.drops + info: ratio of 
outbound dropped packets for the network interface ${label:device} over the last 10 minutes + os: "linux" + - name: wifi_inbound_packets_dropped_ratio + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/net.conf + metric: net.drops + info: ratio of inbound dropped packets for the network interface ${label:device} over the last 10 minutes + os: "linux" + - name: wifi_outbound_packets_dropped_ratio + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/net.conf + metric: net.drops + info: ratio of outbound dropped packets for the network interface ${label:device} over the last 10 minutes + os: "linux" + - name: 1m_received_packets_rate + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/net.conf + metric: net.packets + info: average number of packets received by the network interface ${label:device} over the last minute + os: "linux freebsd" + - name: 10s_received_packets_storm + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/net.conf + metric: net.packets + info: ratio of average number of received packets for the network interface ${label:device} over the last 10 seconds, compared to the rate over the last minute + os: "linux freebsd" + - name: 10min_fifo_errors + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/net.conf + metric: net.fifo + info: number of FIFO errors for the network interface ${label:device} in the last 10 minutes + os: "linux" + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: global + description: "" + labels: [] + metrics: + - name: system.net + description: Physical Network Interfaces Aggregated Bandwidth + unit: "kilobits/s" + chart_type: area + dimensions: + - name: received + - name: sent + - name: network device + description: "" + labels: + - name: interface_type + description: TBD + - name: device + description: TBD + metrics: + - name: net.net + description: 
Bandwidth + unit: "kilobits/s" + chart_type: area + dimensions: + - name: received + - name: sent + - name: net.speed + description: Interface Speed + unit: "kilobits/s" + chart_type: line + dimensions: + - name: speed + - name: net.duplex + description: Interface Duplex State + unit: "state" + chart_type: line + dimensions: + - name: full + - name: half + - name: unknown + - name: net.operstate + description: Interface Operational State + unit: "state" + chart_type: line + dimensions: + - name: up + - name: down + - name: notpresent + - name: lowerlayerdown + - name: testing + - name: dormant + - name: unknown + - name: net.carrier + description: Interface Physical Link State + unit: "state" + chart_type: line + dimensions: + - name: up + - name: down + - name: net.mtu + description: Interface MTU + unit: "octets" + chart_type: line + dimensions: + - name: mtu + - name: net.packets + description: Packets + unit: "packets/s" + chart_type: line + dimensions: + - name: received + - name: sent + - name: multicast + - name: net.errors + description: Interface Errors + unit: "errors/s" + chart_type: line + dimensions: + - name: inbound + - name: outbound + - name: net.drops + description: Interface Drops + unit: "drops/s" + chart_type: line + dimensions: + - name: inbound + - name: outbound + - name: net.fifo + description: Interface FIFO Buffer Errors + unit: "errors" + chart_type: line + dimensions: + - name: receive + - name: transmit + - name: net.compressed + description: Compressed Packets + unit: "packets/s" + chart_type: line + dimensions: + - name: received + - name: sent + - name: net.events + description: Network Interface Events + unit: "events/s" + chart_type: line + dimensions: + - name: frames + - name: collisions + - name: carrier + - meta: + plugin_name: proc.plugin + module_name: /proc/net/wireless + monitored_instance: + name: Wireless network interfaces + link: "" + categories: + - data-collection.linux-systems.network-metrics + icon_filename: 
"network-wired.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - wireless devices + most_popular: false + overview: + data_collection: + metrics_description: "Monitor wireless devices with metrics about status, link quality, signal level, noise level and more." + method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: true + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: [] + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: wireless device + description: "" + labels: [] + metrics: + - name: wireless.status + description: Internal status reported by the interface. + unit: "status" + chart_type: line + dimensions: + - name: status + - name: wireless.link_quality + description: Overall quality of the link. This is an aggregate value, and depends on the driver and hardware. + unit: "value" + chart_type: line + dimensions: + - name: link_quality + - name: wireless.signal_level + description: + The signal level is the wireless signal power level received by the wireless client. The closer the value is to 0, the stronger the + signal. + unit: "dBm" + chart_type: line + dimensions: + - name: signal_level + - name: wireless.noise_level + description: + The noise level indicates the amount of background noise in your environment. The closer the value is to 0, the greater the noise level. 
+ unit: "dBm" + chart_type: line + dimensions: + - name: noise_level + - name: wireless.discarded_packets + description: Packets discarded by the wireless adapter due to wireless-specific problems. + unit: "packets/s" + chart_type: line + dimensions: + - name: nwid + - name: crypt + - name: frag + - name: retry + - name: misc + - name: wireless.missed_beacons + description: Number of missed beacons. + unit: "frames/s" + chart_type: line + dimensions: + - name: missed_beacons + - meta: + plugin_name: proc.plugin + module_name: /sys/class/infiniband + monitored_instance: + name: InfiniBand + link: "" + categories: + - data-collection.linux-systems.network-metrics + icon_filename: "network-wired.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - infiniband + - rdma + most_popular: false + overview: + data_collection: + metrics_description: "This integration monitors InfiniBand network interface statistics." + method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: true + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: [] + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: infiniband port + description: "" + labels: [] + metrics: + - name: ib.bytes + description: Bandwidth usage + unit: "kilobits/s" + chart_type: area + dimensions: + - name: Received + - name: Sent + - name: ib.packets + description: Packets Statistics + unit: "packets/s" + chart_type: area + dimensions: + - name: Received + - name: Sent 
+ - name: Mcast_rcvd + - name: Mcast_sent + - name: Ucast_rcvd + - name: Ucast_sent + - name: ib.errors + description: Error Counters + unit: "errors/s" + chart_type: line + dimensions: + - name: Pkts_malformated + - name: Pkts_rcvd_discarded + - name: Pkts_sent_discarded + - name: Tick_Wait_to_send + - name: Pkts_missed_resource + - name: Buffer_overrun + - name: Link_Downed + - name: Link_recovered + - name: Link_integrity_err + - name: Link_minor_errors + - name: Pkts_rcvd_with_EBP + - name: Pkts_rcvd_discarded_by_switch + - name: Pkts_sent_discarded_by_switch + - name: ib.hwerrors + description: Hardware Errors + unit: "errors/s" + chart_type: line + dimensions: + - name: Duplicated_packets + - name: Pkt_Seq_Num_gap + - name: Ack_timer_expired + - name: Drop_missing_buffer + - name: Drop_out_of_sequence + - name: NAK_sequence_rcvd + - name: CQE_err_Req + - name: CQE_err_Resp + - name: CQE_Flushed_err_Req + - name: CQE_Flushed_err_Resp + - name: Remote_access_err_Req + - name: Remote_access_err_Resp + - name: Remote_invalid_req + - name: Local_length_err_Resp + - name: RNR_NAK_Packets + - name: CNP_Pkts_ignored + - name: RoCE_ICRC_Errors + - name: ib.hwpackets + description: Hardware Packets Statistics + unit: "packets/s" + chart_type: line + dimensions: + - name: RoCEv2_Congestion_sent + - name: RoCEv2_Congestion_rcvd + - name: IB_Congestion_handled + - name: ATOMIC_req_rcvd + - name: Connection_req_rcvd + - name: Read_req_rcvd + - name: Write_req_rcvd + - name: RoCE_retrans_adaptive + - name: RoCE_retrans_timeout + - name: RoCE_slow_restart + - name: RoCE_slow_restart_congestion + - name: RoCE_slow_restart_count + - meta: + plugin_name: proc.plugin + module_name: /proc/net/netstat + monitored_instance: + name: Network statistics + link: "" + categories: + - data-collection.linux-systems.network-metrics + icon_filename: "network-wired.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + 
- ip + - udp + - udplite + - icmp + - netstat + - snmp + most_popular: false + overview: + data_collection: + metrics_description: "This integration provides metrics from the `netstat`, `snmp` and `snmp6` modules." + method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: true + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: + - name: 1m_tcp_syn_queue_drops + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/tcp_listen.conf + metric: ip.tcp_syn_queue + info: average number of SYN requests dropped due to the full TCP SYN queue over the last minute (SYN cookies were not enabled) + os: "linux" + - name: 1m_tcp_syn_queue_cookies + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/tcp_listen.conf + metric: ip.tcp_syn_queue + info: average number of sent SYN cookies due to the full TCP SYN queue over the last minute + os: "linux" + - name: 1m_tcp_accept_queue_overflows + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/tcp_listen.conf + metric: ip.tcp_accept_queue + info: average number of overflows in the TCP accept queue over the last minute + os: "linux" + - name: 1m_tcp_accept_queue_drops + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/tcp_listen.conf + metric: ip.tcp_accept_queue + info: average number of dropped packets in the TCP accept queue over the last minute + os: "linux" + - name: tcp_connections + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/tcp_conn.conf + metric: ip.tcpsock + info: TCP connections 
utilization + os: "linux" + - name: 1m_ip_tcp_resets_sent + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/tcp_resets.conf + metric: ip.tcphandshake + info: average number of sent TCP RESETS over the last minute + os: "linux" + - name: 10s_ip_tcp_resets_sent + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/tcp_resets.conf + metric: ip.tcphandshake + info: + average number of sent TCP RESETS over the last 10 seconds. This can indicate a port scan, or that a service running on this host has + crashed. Netdata will not send a clear notification for this alarm. + os: "linux" + - name: 1m_ip_tcp_resets_received + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/tcp_resets.conf + metric: ip.tcphandshake + info: average number of received TCP RESETS over the last minute + os: "linux freebsd" + - name: 10s_ip_tcp_resets_received + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/tcp_resets.conf + metric: ip.tcphandshake + info: + average number of received TCP RESETS over the last 10 seconds. This can be an indication that a service this host needs has crashed. + Netdata will not send a clear notification for this alarm. 
+ os: "linux freebsd" + - name: 1m_ipv4_udp_receive_buffer_errors + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/udp_errors.conf + metric: ipv4.udperrors + info: average number of UDP receive buffer errors over the last minute + os: "linux freebsd" + - name: 1m_ipv4_udp_send_buffer_errors + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/udp_errors.conf + metric: ipv4.udperrors + info: average number of UDP send buffer errors over the last minute + os: "linux" + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: global + description: "" + labels: [] + metrics: + - name: system.ip + description: IPv4 Bandwidth + unit: "kilobits/s" + chart_type: area + dimensions: + - name: received + - name: sent + - name: ip.tcpmemorypressures + description: TCP Memory Pressures + unit: "events/s" + chart_type: line + dimensions: + - name: pressures + - name: ip.tcpconnaborts + description: TCP Connection Aborts + unit: "connections/s" + chart_type: line + dimensions: + - name: baddata + - name: userclosed + - name: nomemory + - name: timeout + - name: linger + - name: failed + - name: ip.tcpreorders + description: TCP Reordered Packets by Detection Method + unit: "packets/s" + chart_type: line + dimensions: + - name: timestamp + - name: sack + - name: fack + - name: reno + - name: ip.tcpofo + description: TCP Out-Of-Order Queue + unit: "packets/s" + chart_type: line + dimensions: + - name: inqueue + - name: dropped + - name: merged + - name: pruned + - name: ip.tcpsyncookies + description: TCP SYN Cookies + unit: "packets/s" + chart_type: line + dimensions: + - name: received + - name: sent + - name: failed + - name: ip.tcp_syn_queue + description: TCP SYN Queue Issues + unit: "packets/s" + chart_type: line + dimensions: + - name: drops + - name: cookies + - name: ip.tcp_accept_queue + description: TCP Accept Queue Issues + unit: "packets/s" + chart_type: line + 
dimensions: + - name: overflows + - name: drops + - name: ip.tcpsock + description: IPv4 TCP Connections + unit: "active connections" + chart_type: line + dimensions: + - name: connections + - name: ip.tcppackets + description: IPv4 TCP Packets + unit: "packets/s" + chart_type: line + dimensions: + - name: received + - name: sent + - name: ip.tcperrors + description: IPv4 TCP Errors + unit: "packets/s" + chart_type: line + dimensions: + - name: InErrs + - name: InCsumErrors + - name: RetransSegs + - name: ip.tcpopens + description: IPv4 TCP Opens + unit: "connections/s" + chart_type: line + dimensions: + - name: active + - name: passive + - name: ip.tcphandshake + description: IPv4 TCP Handshake Issues + unit: "events/s" + chart_type: line + dimensions: + - name: EstabResets + - name: OutRsts + - name: AttemptFails + - name: SynRetrans + - name: ipv4.packets + description: IPv4 Packets + unit: "packets/s" + chart_type: line + dimensions: + - name: received + - name: sent + - name: forwarded + - name: delivered + - name: ipv4.errors + description: IPv4 Errors + unit: "packets/s" + chart_type: line + dimensions: + - name: InDiscards + - name: OutDiscards + - name: InNoRoutes + - name: OutNoRoutes + - name: InHdrErrors + - name: InAddrErrors + - name: InTruncatedPkts + - name: InCsumErrors + - name: ipv4.bcast + description: IP Broadcast Bandwidth + unit: "kilobits/s" + chart_type: area + dimensions: + - name: received + - name: sent + - name: ipv4.bcastpkts + description: IP Broadcast Packets + unit: "packets/s" + chart_type: line + dimensions: + - name: received + - name: sent + - name: ipv4.mcast + description: IPv4 Multicast Bandwidth + unit: "kilobits/s" + chart_type: area + dimensions: + - name: received + - name: sent + - name: ipv4.mcastpkts + description: IP Multicast Packets + unit: "packets/s" + chart_type: line + dimensions: + - name: received + - name: sent + - name: ipv4.icmp + description: IPv4 ICMP Packets + unit: "packets/s" + chart_type: line + 
dimensions: + - name: received + - name: sent + - name: ipv4.icmpmsg + description: IPv4 ICMP Messages + unit: "packets/s" + chart_type: line + dimensions: + - name: InEchoReps + - name: OutEchoReps + - name: InDestUnreachs + - name: OutDestUnreachs + - name: InRedirects + - name: OutRedirects + - name: InEchos + - name: OutEchos + - name: InRouterAdvert + - name: OutRouterAdvert + - name: InRouterSelect + - name: OutRouterSelect + - name: InTimeExcds + - name: OutTimeExcds + - name: InParmProbs + - name: OutParmProbs + - name: InTimestamps + - name: OutTimestamps + - name: InTimestampReps + - name: OutTimestampReps + - name: ipv4.icmp_errors + description: IPv4 ICMP Errors + unit: "packets/s" + chart_type: line + dimensions: + - name: InErrors + - name: OutErrors + - name: InCsumErrors + - name: ipv4.udppackets + description: IPv4 UDP Packets + unit: "packets/s" + chart_type: line + dimensions: + - name: received + - name: sent + - name: ipv4.udperrors + description: IPv4 UDP Errors + unit: "events/s" + chart_type: line + dimensions: + - name: RcvbufErrors + - name: SndbufErrors + - name: InErrors + - name: NoPorts + - name: InCsumErrors + - name: IgnoredMulti + - name: ipv4.udplite + description: IPv4 UDPLite Packets + unit: "packets/s" + chart_type: line + dimensions: + - name: received + - name: sent + - name: ipv4.udplite_errors + description: IPv4 UDPLite Errors + unit: "packets/s" + chart_type: line + dimensions: + - name: RcvbufErrors + - name: SndbufErrors + - name: InErrors + - name: NoPorts + - name: InCsumErrors + - name: IgnoredMulti + - name: ipv4.ecnpkts + description: IP ECN Statistics + unit: "packets/s" + chart_type: line + dimensions: + - name: CEP + - name: NoECTP + - name: ECTP0 + - name: ECTP1 + - name: ipv4.fragsin + description: IPv4 Fragments Reassembly + unit: "packets/s" + chart_type: line + dimensions: + - name: ok + - name: failed + - name: all + - name: ipv4.fragsout + description: IPv4 Fragments Sent + unit: "packets/s" + chart_type: 
line + dimensions: + - name: ok + - name: failed + - name: created + - name: system.ipv6 + description: IPv6 Bandwidth + unit: "kilobits/s" + chart_type: area + dimensions: + - name: received + - name: sent + - name: ipv6.packets + description: IPv6 Packets + unit: "packets/s" + chart_type: line + dimensions: + - name: received + - name: sent + - name: forwarded + - name: delivers + - name: ipv6.errors + description: IPv6 Errors + unit: "packets/s" + chart_type: line + dimensions: + - name: InDiscards + - name: OutDiscards + - name: InHdrErrors + - name: InAddrErrors + - name: InUnknownProtos + - name: InTooBigErrors + - name: InTruncatedPkts + - name: InNoRoutes + - name: OutNoRoutes + - name: ipv6.bcast + description: IPv6 Broadcast Bandwidth + unit: "kilobits/s" + chart_type: area + dimensions: + - name: received + - name: sent + - name: ipv6.mcast + description: IPv6 Multicast Bandwidth + unit: "kilobits/s" + chart_type: area + dimensions: + - name: received + - name: sent + - name: ipv6.mcastpkts + description: IPv6 Multicast Packets + unit: "packets/s" + chart_type: line + dimensions: + - name: received + - name: sent + - name: ipv6.udppackets + description: IPv6 UDP Packets + unit: "packets/s" + chart_type: line + dimensions: + - name: received + - name: sent + - name: ipv6.udperrors + description: IPv6 UDP Errors + unit: "events/s" + chart_type: line + dimensions: + - name: RcvbufErrors + - name: SndbufErrors + - name: InErrors + - name: NoPorts + - name: InCsumErrors + - name: IgnoredMulti + - name: ipv6.udplitepackets + description: IPv6 UDPlite Packets + unit: "packets/s" + chart_type: line + dimensions: + - name: received + - name: sent + - name: ipv6.udpliteerrors + description: IPv6 UDP Lite Errors + unit: "events/s" + chart_type: line + dimensions: + - name: RcvbufErrors + - name: SndbufErrors + - name: InErrors + - name: NoPorts + - name: InCsumErrors + - name: ipv6.icmp + description: IPv6 ICMP Messages + unit: "messages/s" + chart_type: line + 
dimensions: + - name: received + - name: sent + - name: ipv6.icmpredir + description: IPv6 ICMP Redirects + unit: "redirects/s" + chart_type: line + dimensions: + - name: received + - name: sent + - name: ipv6.icmperrors + description: IPv6 ICMP Errors + unit: "errors/s" + chart_type: line + dimensions: + - name: InErrors + - name: OutErrors + - name: InCsumErrors + - name: InDestUnreachs + - name: InPktTooBigs + - name: InTimeExcds + - name: InParmProblems + - name: OutDestUnreachs + - name: OutPktTooBigs + - name: OutTimeExcds + - name: OutParmProblems + - name: ipv6.icmpechos + description: IPv6 ICMP Echo + unit: "messages/s" + chart_type: line + dimensions: + - name: InEchos + - name: OutEchos + - name: InEchoReplies + - name: OutEchoReplies + - name: ipv6.groupmemb + description: IPv6 ICMP Group Membership + unit: "messages/s" + chart_type: line + dimensions: + - name: InQueries + - name: OutQueries + - name: InResponses + - name: OutResponses + - name: InReductions + - name: OutReductions + - name: ipv6.icmprouter + description: IPv6 Router Messages + unit: "messages/s" + chart_type: line + dimensions: + - name: InSolicits + - name: OutSolicits + - name: InAdvertisements + - name: OutAdvertisements + - name: ipv6.icmpneighbor + description: IPv6 Neighbor Messages + unit: "messages/s" + chart_type: line + dimensions: + - name: InSolicits + - name: OutSolicits + - name: InAdvertisements + - name: OutAdvertisements + - name: ipv6.icmpmldv2 + description: IPv6 ICMP MLDv2 Reports + unit: "reports/s" + chart_type: line + dimensions: + - name: received + - name: sent + - name: ipv6.icmptypes + description: IPv6 ICMP Types + unit: "messages/s" + chart_type: line + dimensions: + - name: InType1 + - name: InType128 + - name: InType129 + - name: InType136 + - name: OutType1 + - name: OutType128 + - name: OutType129 + - name: OutType133 + - name: OutType135 + - name: OutType143 + - name: ipv6.ect + description: IPv6 ECT Packets + unit: "packets/s" + chart_type: line + 
dimensions: + - name: InNoECTPkts + - name: InECT1Pkts + - name: InECT0Pkts + - name: InCEPkts + - name: ipv6.fragsin + description: IPv6 Fragments Reassembly + unit: "packets/s" + chart_type: line + dimensions: + - name: ok + - name: failed + - name: timeout + - name: all + - name: ipv6.fragsout + description: IPv6 Fragments Sent + unit: "packets/s" + chart_type: line + dimensions: + - name: ok + - name: failed + - name: all + - meta: + plugin_name: proc.plugin + module_name: /proc/net/sockstat + monitored_instance: + name: Socket statistics + link: "" + categories: + - data-collection.linux-systems.network-metrics + icon_filename: "network-wired.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - sockets + most_popular: false + overview: + data_collection: + metrics_description: "This integration provides socket statistics."
+ method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: true + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: + - name: tcp_orphans + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/tcp_orphans.conf + metric: ipv4.sockstat_tcp_sockets + info: orphan IPv4 TCP sockets utilization + os: "linux" + - name: tcp_memory + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/tcp_mem.conf + metric: ipv4.sockstat_tcp_mem + info: TCP memory utilization + os: "linux" + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: global + description: "" + labels: [] + metrics: + - name: ip.sockstat_sockets + description: Sockets used for all address families + unit: "sockets" + chart_type: line + dimensions: + - name: used + - name: ipv4.sockstat_tcp_sockets + description: IPv4 TCP Sockets + unit: "sockets" + chart_type: line + dimensions: + - name: alloc + - name: orphan + - name: inuse + - name: timewait + - name: ipv4.sockstat_tcp_mem + description: IPv4 TCP Sockets Memory + unit: "KiB" + chart_type: area + dimensions: + - name: mem + - name: ipv4.sockstat_udp_sockets + description: IPv4 UDP Sockets + unit: "sockets" + chart_type: line + dimensions: + - name: inuse + - name: ipv4.sockstat_udp_mem + description: IPv4 UDP Sockets Memory + unit: "KiB" + chart_type: area + dimensions: + - name: mem + - name: ipv4.sockstat_udplite_sockets + description: IPv4 UDPLITE Sockets + unit: "sockets" + chart_type: line + dimensions: + - name: inuse + - name:
ipv4.sockstat_raw_sockets + description: IPv4 RAW Sockets + unit: "sockets" + chart_type: line + dimensions: + - name: inuse + - name: ipv4.sockstat_frag_sockets + description: IPv4 FRAG Sockets + unit: "fragments" + chart_type: line + dimensions: + - name: inuse + - name: ipv4.sockstat_frag_mem + description: IPv4 FRAG Sockets Memory + unit: "KiB" + chart_type: area + dimensions: + - name: mem + - meta: + plugin_name: proc.plugin + module_name: /proc/net/sockstat6 + monitored_instance: + name: IPv6 Socket Statistics + link: "" + categories: + - data-collection.linux-systems.network-metrics + icon_filename: "network-wired.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - ipv6 sockets + most_popular: false + overview: + data_collection: + metrics_description: "This integration provides IPv6 socket statistics." + method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: true + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: [] + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: global + description: "" + labels: [] + metrics: + - name: ipv6.sockstat6_tcp_sockets + description: IPv6 TCP Sockets + unit: "sockets" + chart_type: line + dimensions: + - name: inuse + - name: ipv6.sockstat6_udp_sockets + description: IPv6 UDP Sockets + unit: "sockets" + chart_type: line + dimensions: + - name: inuse + - name: ipv6.sockstat6_udplite_sockets + description: IPv6 UDPLITE Sockets + unit: "sockets" + chart_type: 
line + dimensions: + - name: inuse + - name: ipv6.sockstat6_raw_sockets + description: IPv6 RAW Sockets + unit: "sockets" + chart_type: line + dimensions: + - name: inuse + - name: ipv6.sockstat6_frag_sockets + description: IPv6 FRAG Sockets + unit: "fragments" + chart_type: line + dimensions: + - name: inuse + - meta: + plugin_name: proc.plugin + module_name: /proc/net/ip_vs_stats + monitored_instance: + name: IP Virtual Server + link: "" + categories: + - data-collection.linux-systems.network-metrics + icon_filename: "network-wired.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - ip virtual server + most_popular: false + overview: + data_collection: + metrics_description: "This integration monitors IP Virtual Server statistics" + method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: true + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: [] + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: global + description: "" + labels: [] + metrics: + - name: ipvs.sockets + description: IPVS New Connections + unit: "connections/s" + chart_type: line + dimensions: + - name: connections + - name: ipvs.packets + description: IPVS Packets + unit: "packets/s" + chart_type: line + dimensions: + - name: received + - name: sent + - name: ipvs.net + description: IPVS Bandwidth + unit: "kilobits/s" + chart_type: area + dimensions: + - name: received + - name: sent + - meta: + plugin_name: proc.plugin + 
module_name: /proc/net/rpc/nfs + monitored_instance: + name: NFS Client + link: "" + categories: + - data-collection.linux-systems.filesystem-metrics.nfs + icon_filename: "nfs.png" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - nfs client + - filesystem + most_popular: false + overview: + data_collection: + metrics_description: "This integration provides statistics from the Linux kernel's NFS Client." + method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: true + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: [] + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: global + description: "" + labels: [] + metrics: + - name: nfs.net + description: NFS Client Network + unit: "operations/s" + chart_type: stacked + dimensions: + - name: udp + - name: tcp + - name: nfs.rpc + description: NFS Client Remote Procedure Calls Statistics + unit: "calls/s" + chart_type: line + dimensions: + - name: calls + - name: retransmits + - name: auth_refresh + - name: nfs.proc2 + description: NFS v2 Client Remote Procedure Calls + unit: "calls/s" + chart_type: stacked + dimensions: + - name: a dimension per proc2 call + - name: nfs.proc3 + description: NFS v3 Client Remote Procedure Calls + unit: "calls/s" + chart_type: stacked + dimensions: + - name: a dimension per proc3 call + - name: nfs.proc4 + description: NFS v4 Client Remote Procedure Calls + unit: "calls/s" + chart_type: stacked + dimensions: + - name: a 
dimension per proc4 call + - meta: + plugin_name: proc.plugin + module_name: /proc/net/rpc/nfsd + monitored_instance: + name: NFS Server + link: "" + categories: + - data-collection.linux-systems.filesystem-metrics.nfs + icon_filename: "nfs.png" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - nfs server + - filesystem + most_popular: false + overview: + data_collection: + metrics_description: "This integration provides statistics from the Linux kernel's NFS Server." + method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: true + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: [] + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: global + description: "" + labels: [] + metrics: + - name: nfsd.readcache + description: NFS Server Read Cache + unit: "reads/s" + chart_type: stacked + dimensions: + - name: hits + - name: misses + - name: nocache + - name: nfsd.filehandles + description: NFS Server File Handles + unit: "handles/s" + chart_type: line + dimensions: + - name: stale + - name: nfsd.io + description: NFS Server I/O + unit: "kilobytes/s" + chart_type: area + dimensions: + - name: read + - name: write + - name: nfsd.threads + description: NFS Server Threads + unit: "threads" + chart_type: line + dimensions: + - name: threads + - name: nfsd.net + description: NFS Server Network Statistics + unit: "packets/s" + chart_type: line + dimensions: + - name: udp + - name: tcp + - name: nfsd.rpc + 
description: NFS Server Remote Procedure Calls Statistics + unit: "calls/s" + chart_type: line + dimensions: + - name: calls + - name: bad_format + - name: bad_auth + - name: nfsd.proc2 + description: NFS v2 Server Remote Procedure Calls + unit: "calls/s" + chart_type: stacked + dimensions: + - name: a dimension per proc2 call + - name: nfsd.proc3 + description: NFS v3 Server Remote Procedure Calls + unit: "calls/s" + chart_type: stacked + dimensions: + - name: a dimension per proc3 call + - name: nfsd.proc4 + description: NFS v4 Server Remote Procedure Calls + unit: "calls/s" + chart_type: stacked + dimensions: + - name: a dimension per proc4 call + - name: nfsd.proc4ops + description: NFS v4 Server Operations + unit: "operations/s" + chart_type: stacked + dimensions: + - name: a dimension per proc4 operation + - meta: + plugin_name: proc.plugin + module_name: /proc/net/sctp/snmp + monitored_instance: + name: SCTP Statistics + link: "" + categories: + - data-collection.linux-systems.network-metrics + icon_filename: "network-wired.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - sctp + - stream control transmission protocol + most_popular: false + overview: + data_collection: + metrics_description: "This integration provides statistics about the Stream Control Transmission Protocol (SCTP)." 
+ method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: true + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: [] + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: global + description: "" + labels: [] + metrics: + - name: sctp.established + description: SCTP current total number of established associations + unit: "associations" + chart_type: line + dimensions: + - name: established + - name: sctp.transitions + description: SCTP Association Transitions + unit: "transitions/s" + chart_type: line + dimensions: + - name: active + - name: passive + - name: aborted + - name: shutdown + - name: sctp.packets + description: SCTP Packets + unit: "packets/s" + chart_type: line + dimensions: + - name: received + - name: sent + - name: sctp.packet_errors + description: SCTP Packet Errors + unit: "packets/s" + chart_type: line + dimensions: + - name: invalid + - name: checksum + - name: sctp.fragmentation + description: SCTP Fragmentation + unit: "packets/s" + chart_type: line + dimensions: + - name: reassembled + - name: fragmented + - meta: + plugin_name: proc.plugin + module_name: /proc/net/stat/nf_conntrack + monitored_instance: + name: Conntrack + link: "" + categories: + - data-collection.linux-systems.firewall-metrics + icon_filename: "firewall.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - connection tracking mechanism + - netfilter + - conntrack + most_popular: false + overview: + 
data_collection: + metrics_description: "This integration monitors the connection tracking mechanism of Netfilter in the Linux Kernel." + method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: true + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: + - name: netfilter_conntrack_full + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/netfilter.conf + metric: netfilter.conntrack_sockets + info: netfilter connection tracker table size utilization + os: "linux" + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: global + description: "" + labels: [] + metrics: + - name: netfilter.conntrack_sockets + description: Connection Tracker Connections + unit: "active connections" + chart_type: line + dimensions: + - name: connections + - name: netfilter.conntrack_new + description: Connection Tracker New Connections + unit: "connections/s" + chart_type: line + dimensions: + - name: new + - name: ignore + - name: invalid + - name: netfilter.conntrack_changes + description: Connection Tracker Changes + unit: "changes/s" + chart_type: line + dimensions: + - name: inserted + - name: deleted + - name: delete_list + - name: netfilter.conntrack_expect + description: Connection Tracker Expectations + unit: "expectations/s" + chart_type: line + dimensions: + - name: created + - name: deleted + - name: new + - name: netfilter.conntrack_search + description: Connection Tracker Searches + unit: "searches/s" + chart_type: line + dimensions: + - name: searched + - name: 
restarted + - name: found + - name: netfilter.conntrack_errors + description: Connection Tracker Errors + unit: "events/s" + chart_type: line + dimensions: + - name: icmp_error + - name: error_failed + - name: drop + - name: early_drop + - meta: + plugin_name: proc.plugin + module_name: /proc/net/stat/synproxy + monitored_instance: + name: Synproxy + link: "" + categories: + - data-collection.linux-systems.firewall-metrics + icon_filename: "firewall.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - synproxy + most_popular: false + overview: + data_collection: + metrics_description: "This integration provides statistics about the Synproxy netfilter module." + method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: true + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: [] + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: global + description: "" + labels: [] + metrics: + - name: netfilter.synproxy_syn_received + description: SYNPROXY SYN Packets received + unit: "packets/s" + chart_type: line + dimensions: + - name: received + - name: netfilter.synproxy_conn_reopened + description: SYNPROXY Connections Reopened + unit: "connections/s" + chart_type: line + dimensions: + - name: reopened + - name: netfilter.synproxy_cookies + description: SYNPROXY TCP Cookies + unit: "cookies/s" + chart_type: line + dimensions: + - name: valid + - name: invalid + - name: retransmits + - meta: + plugin_name: 
proc.plugin + module_name: /proc/spl/kstat/zfs + monitored_instance: + name: ZFS Pools + link: "" + categories: + - data-collection.linux-systems.filesystem-metrics.zfs + icon_filename: "filesystem.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - zfs pools + - pools + - zfs + - filesystem + most_popular: false + overview: + data_collection: + metrics_description: "This integration provides metrics about the state of ZFS pools." + method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: true + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: + - name: zfs_pool_state_warn + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/zfs.conf + metric: zfspool.state + info: ZFS pool ${label:pool} state is degraded + - name: zfs_pool_state_crit + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/zfs.conf + metric: zfspool.state + info: ZFS pool ${label:pool} state is faulted or unavail + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: zfs pool + description: "" + labels: + - name: pool + description: TBD + metrics: + - name: zfspool.state + description: ZFS pool state + unit: "boolean" + chart_type: line + dimensions: + - name: online + - name: degraded + - name: faulted + - name: offline + - name: removed + - name: unavail + - name: suspended + - meta: + plugin_name: proc.plugin + module_name: /proc/spl/kstat/zfs/arcstats + monitored_instance: + name: ZFS Adaptive 
Replacement Cache + link: "" + categories: + - data-collection.linux-systems.filesystem-metrics.zfs + icon_filename: "filesystem.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - zfs arc + - arc + - zfs + - filesystem + most_popular: false + overview: + data_collection: + metrics_description: "This integration monitors ZFS Adaptive Replacement Cache (ARC) statistics." + method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: true + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: + - name: zfs_memory_throttle + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/zfs.conf + metric: zfs.memory_ops + info: number of times ZFS had to limit the ARC growth in the last 10 minutes + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: global + description: "" + labels: [] + metrics: + - name: zfs.arc_size + description: ZFS ARC Size + unit: "MiB" + chart_type: area + dimensions: + - name: arcsz + - name: target + - name: min + - name: max + - name: zfs.l2_size + description: ZFS L2 ARC Size + unit: "MiB" + chart_type: area + dimensions: + - name: actual + - name: size + - name: zfs.reads + description: ZFS Reads + unit: "reads/s" + chart_type: area + dimensions: + - name: arc + - name: demand + - name: prefetch + - name: metadata + - name: l2 + - name: zfs.bytes + description: ZFS ARC L2 Read/Write Rate + unit: "KiB/s" + chart_type: area + dimensions: + - name: read + - name: write + -
name: zfs.hits + description: ZFS ARC Hits + unit: "percentage" + chart_type: stacked + dimensions: + - name: hits + - name: misses + - name: zfs.hits_rate + description: ZFS ARC Hits Rate + unit: "events/s" + chart_type: stacked + dimensions: + - name: hits + - name: misses + - name: zfs.dhits + description: ZFS Demand Hits + unit: "percentage" + chart_type: stacked + dimensions: + - name: hits + - name: misses + - name: zfs.dhits_rate + description: ZFS Demand Hits Rate + unit: "events/s" + chart_type: stacked + dimensions: + - name: hits + - name: misses + - name: zfs.phits + description: ZFS Prefetch Hits + unit: "percentage" + chart_type: stacked + dimensions: + - name: hits + - name: misses + - name: zfs.phits_rate + description: ZFS Prefetch Hits Rate + unit: "events/s" + chart_type: stacked + dimensions: + - name: hits + - name: misses + - name: zfs.mhits + description: ZFS Metadata Hits + unit: "percentage" + chart_type: stacked + dimensions: + - name: hits + - name: misses + - name: zfs.mhits_rate + description: ZFS Metadata Hits Rate + unit: "events/s" + chart_type: stacked + dimensions: + - name: hits + - name: misses + - name: zfs.l2hits + description: ZFS L2 Hits + unit: "percentage" + chart_type: stacked + dimensions: + - name: hits + - name: misses + - name: zfs.l2hits_rate + description: ZFS L2 Hits Rate + unit: "events/s" + chart_type: stacked + dimensions: + - name: hits + - name: misses + - name: zfs.list_hits + description: ZFS List Hits + unit: "hits/s" + chart_type: area + dimensions: + - name: mfu + - name: mfu_ghost + - name: mru + - name: mru_ghost + - name: zfs.arc_size_breakdown + description: ZFS ARC Size Breakdown + unit: "percentage" + chart_type: stacked + dimensions: + - name: recent + - name: frequent + - name: zfs.memory_ops + description: ZFS Memory Operations + unit: "operations/s" + chart_type: line + dimensions: + - name: direct + - name: throttled + - name: indirect + - name: zfs.important_ops + description: ZFS Important 
Operations + unit: "operations/s" + chart_type: line + dimensions: + - name: evict_skip + - name: deleted + - name: mutex_miss + - name: hash_collisions + - name: zfs.actual_hits + description: ZFS Actual Cache Hits + unit: "percentage" + chart_type: stacked + dimensions: + - name: hits + - name: misses + - name: zfs.actual_hits_rate + description: ZFS Actual Cache Hits Rate + unit: "events/s" + chart_type: stacked + dimensions: + - name: hits + - name: misses + - name: zfs.demand_data_hits + description: ZFS Data Demand Efficiency + unit: "percentage" + chart_type: stacked + dimensions: + - name: hits + - name: misses + - name: zfs.demand_data_hits_rate + description: ZFS Data Demand Efficiency Rate + unit: "events/s" + chart_type: stacked + dimensions: + - name: hits + - name: misses + - name: zfs.prefetch_data_hits + description: ZFS Data Prefetch Efficiency + unit: "percentage" + chart_type: stacked + dimensions: + - name: hits + - name: misses + - name: zfs.prefetch_data_hits_rate + description: ZFS Data Prefetch Efficiency Rate + unit: "events/s" + chart_type: stacked + dimensions: + - name: hits + - name: misses + - name: zfs.hash_elements + description: ZFS ARC Hash Elements + unit: "elements" + chart_type: line + dimensions: + - name: current + - name: max + - name: zfs.hash_chains + description: ZFS ARC Hash Chains + unit: "chains" + chart_type: line + dimensions: + - name: current + - name: max + - meta: + plugin_name: proc.plugin + module_name: /sys/fs/btrfs + monitored_instance: + name: BTRFS + link: "" + categories: + - data-collection.linux-systems.filesystem-metrics.btrfs + icon_filename: "filesystem.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - btrfs + - filesystem + most_popular: false + overview: + data_collection: + metrics_description: "This integration provides usage and error statistics from the BTRFS filesystem." 
+ method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: true + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: + - name: btrfs_allocated + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/btrfs.conf + metric: btrfs.disk + info: percentage of allocated BTRFS physical disk space + os: "*" + - name: btrfs_data + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/btrfs.conf + metric: btrfs.data + info: utilization of BTRFS data space + os: "*" + - name: btrfs_metadata + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/btrfs.conf + metric: btrfs.metadata + info: utilization of BTRFS metadata space + os: "*" + - name: btrfs_system + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/btrfs.conf + metric: btrfs.system + info: utilization of BTRFS system space + os: "*" + - name: btrfs_device_read_errors + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/btrfs.conf + metric: btrfs.device_errors + info: number of encountered BTRFS read errors + os: "*" + - name: btrfs_device_write_errors + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/btrfs.conf + metric: btrfs.device_errors + info: number of encountered BTRFS write errors + os: "*" + - name: btrfs_device_flush_errors + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/btrfs.conf + metric: btrfs.device_errors + info: number of encountered BTRFS flush errors + os: "*" + - name: btrfs_device_corruption_errors + link: 
https://github.com/netdata/netdata/blob/master/src/health/health.d/btrfs.conf + metric: btrfs.device_errors + info: number of encountered BTRFS corruption errors + os: "*" + - name: btrfs_device_generation_errors + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/btrfs.conf + metric: btrfs.device_errors + info: number of encountered BTRFS generation errors + os: "*" + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: btrfs filesystem + description: "" + labels: + - name: filesystem_uuid + description: TBD + - name: filesystem_label + description: TBD + metrics: + - name: btrfs.disk + description: BTRFS Physical Disk Allocation + unit: "MiB" + chart_type: stacked + dimensions: + - name: unallocated + - name: data_free + - name: data_used + - name: meta_free + - name: meta_used + - name: sys_free + - name: sys_used + - name: btrfs.data + description: BTRFS Data Allocation + unit: "MiB" + chart_type: stacked + dimensions: + - name: free + - name: used + - name: btrfs.metadata + description: BTRFS Metadata Allocation + unit: "MiB" + chart_type: stacked + dimensions: + - name: free + - name: used + - name: reserved + - name: btrfs.system + description: BTRFS System Allocation + unit: "MiB" + chart_type: stacked + dimensions: + - name: free + - name: used + - name: btrfs.commits + description: BTRFS Commits + unit: "commits" + chart_type: line + dimensions: + - name: commits + - name: btrfs.commits_perc_time + description: BTRFS Commits Time Share + unit: "percentage" + chart_type: line + dimensions: + - name: commits + - name: btrfs.commit_timings + description: BTRFS Commit Timings + unit: "ms" + chart_type: line + dimensions: + - name: last + - name: max + - name: btrfs device + description: "" + labels: + - name: device_id + description: TBD + - name: filesystem_uuid + description: TBD + - name: filesystem_label + description: TBD + metrics: + - name: btrfs.device_errors + 
description: BTRFS Device Errors + unit: "errors" + chart_type: line + dimensions: + - name: write_errs + - name: read_errs + - name: flush_errs + - name: corruption_errs + - name: generation_errs + - meta: + plugin_name: proc.plugin + module_name: /sys/class/power_supply + monitored_instance: + name: Power Supply + link: "" + categories: + - data-collection.linux-systems.power-supply-metrics + icon_filename: "powersupply.svg" + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - psu + - power supply + most_popular: false + overview: + data_collection: + metrics_description: "This integration monitors Power supply metrics, such as battery status, AC power status and more." + method_description: "" + supported_platforms: + include: [] + exclude: [] + multi_instance: true + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: + - name: linux_power_supply_capacity + link: https://github.com/netdata/netdata/blob/master/src/health/health.d/linux_power_supply.conf + metric: powersupply.capacity + info: percentage of remaining power supply capacity + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: power device + description: "" + labels: + - name: device + description: TBD + metrics: + - name: powersupply.capacity + description: Battery capacity + unit: "percentage" + chart_type: line + dimensions: + - name: capacity + - name: powersupply.charge + description: Battery charge + unit: "Ah" + chart_type: line + dimensions: + - name: empty_design + - 
name: empty + - name: now + - name: full + - name: full_design + - name: powersupply.energy + description: Battery energy + unit: "Wh" + chart_type: line + dimensions: + - name: empty_design + - name: empty + - name: now + - name: full + - name: full_design + - name: powersupply.voltage + description: Power supply voltage + unit: "V" + chart_type: line + dimensions: + - name: min_design + - name: min + - name: now + - name: max + - name: max_design + - meta: + plugin_name: proc.plugin + module_name: /sys/class/drm + monitored_instance: + name: AMD GPU + link: "https://www.amd.com" + categories: + - data-collection.hardware-devices-and-sensors + icon_filename: amd.svg + related_resources: + integrations: + list: [] + info_provided_to_referring_integrations: + description: "" + keywords: + - amd + - gpu + - hardware + most_popular: false + overview: + data_collection: + metrics_description: "This integration monitors AMD GPU metrics, such as utilization, clock frequency and memory usage." + method_description: "It reads `/sys/class/drm` to collect metrics for every AMD GPU card instance it encounters." + supported_platforms: + include: + - Linux + exclude: [] + multi_instance: true + additional_permissions: + description: "" + default_behavior: + auto_detection: + description: "" + limits: + description: "" + performance_impact: + description: "" + setup: + prerequisites: + list: [] + configuration: + file: + name: "" + description: "" + options: + description: "" + folding: + title: "" + enabled: true + list: [] + examples: + folding: + enabled: true + title: "" + list: [] + troubleshooting: + problems: + list: [] + alerts: [] + metrics: + folding: + title: Metrics + enabled: false + description: "" + availability: [] + scopes: + - name: gpu + description: "These metrics refer to the GPU." + labels: + - name: product_name + description: GPU product name (e.g. 
AMD RX 6600) + metrics: + - name: amdgpu.gpu_utilization + description: GPU utilization + unit: "percentage" + chart_type: line + dimensions: + - name: utilization + - name: amdgpu.gpu_mem_utilization + description: GPU memory utilization + unit: "percentage" + chart_type: line + dimensions: + - name: utilization + - name: amdgpu.gpu_clk_frequency + description: GPU clock frequency + unit: "MHz" + chart_type: line + dimensions: + - name: frequency + - name: amdgpu.gpu_mem_clk_frequency + description: GPU memory clock frequency + unit: "MHz" + chart_type: line + dimensions: + - name: frequency + - name: amdgpu.gpu_mem_vram_usage_perc + description: VRAM memory usage percentage + unit: "percentage" + chart_type: line + dimensions: + - name: usage + - name: amdgpu.gpu_mem_vram_usage + description: VRAM memory usage + unit: "bytes" + chart_type: area + dimensions: + - name: free + - name: used + - name: amdgpu.gpu_mem_vis_vram_usage_perc + description: visible VRAM memory usage percentage + unit: "percentage" + chart_type: line + dimensions: + - name: usage + - name: amdgpu.gpu_mem_vis_vram_usage + description: visible VRAM memory usage + unit: "bytes" + chart_type: area + dimensions: + - name: free + - name: used + - name: amdgpu.gpu_mem_gtt_usage_perc + description: GTT memory usage percentage + unit: "percentage" + chart_type: line + dimensions: + - name: usage + - name: amdgpu.gpu_mem_gtt_usage + description: GTT memory usage + unit: "bytes" + chart_type: area + dimensions: + - name: free + - name: used diff --git a/src/collectors/proc.plugin/plugin_proc.c b/src/collectors/proc.plugin/plugin_proc.c new file mode 100644 index 000000000..7742b344f --- /dev/null +++ b/src/collectors/proc.plugin/plugin_proc.c @@ -0,0 +1,247 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +static struct proc_module { + const char *name; + const char *dim; + + int enabled; + + int (*func)(int update_every, usec_t dt); + + RRDDIM *rd; + +} proc_modules[] = { 
+ + // system metrics + {.name = "/proc/stat", .dim = "stat", .func = do_proc_stat}, + {.name = "/proc/uptime", .dim = "uptime", .func = do_proc_uptime}, + {.name = "/proc/loadavg", .dim = "loadavg", .func = do_proc_loadavg}, + {.name = "/proc/sys/fs/file-nr", .dim = "file-nr", .func = do_proc_sys_fs_file_nr}, + {.name = "/proc/sys/kernel/random/entropy_avail", .dim = "entropy", .func = do_proc_sys_kernel_random_entropy_avail}, + + // pressure metrics + {.name = "/proc/pressure", .dim = "pressure", .func = do_proc_pressure}, + + // CPU metrics + {.name = "/proc/interrupts", .dim = "interrupts", .func = do_proc_interrupts}, + {.name = "/proc/softirqs", .dim = "softirqs", .func = do_proc_softirqs}, + + // memory metrics + {.name = "/proc/vmstat", .dim = "vmstat", .func = do_proc_vmstat}, + {.name = "/proc/meminfo", .dim = "meminfo", .func = do_proc_meminfo}, + {.name = "/sys/kernel/mm/ksm", .dim = "ksm", .func = do_sys_kernel_mm_ksm}, + {.name = "/sys/block/zram", .dim = "zram", .func = do_sys_block_zram}, + {.name = "/sys/devices/system/edac/mc", .dim = "edac", .func = do_proc_sys_devices_system_edac_mc}, + {.name = "/sys/devices/pci/aer", .dim = "pci_aer", .func = do_proc_sys_devices_pci_aer}, + {.name = "/sys/devices/system/node", .dim = "numa", .func = do_proc_sys_devices_system_node}, + {.name = "/proc/pagetypeinfo", .dim = "pagetypeinfo", .func = do_proc_pagetypeinfo}, + + // network metrics + {.name = "/proc/net/wireless", .dim = "netwireless", .func = do_proc_net_wireless}, + {.name = "/proc/net/sockstat", .dim = "sockstat", .func = do_proc_net_sockstat}, + {.name = "/proc/net/sockstat6", .dim = "sockstat6", .func = do_proc_net_sockstat6}, + {.name = "/proc/net/netstat", .dim = "netstat", .func = do_proc_net_netstat}, + {.name = "/proc/net/sctp/snmp", .dim = "sctp", .func = do_proc_net_sctp_snmp}, + {.name = "/proc/net/softnet_stat", .dim = "softnet", .func = do_proc_net_softnet_stat}, + {.name = "/proc/net/ip_vs/stats", .dim = "ipvs", .func = 
do_proc_net_ip_vs_stats}, + {.name = "/sys/class/infiniband", .dim = "infiniband", .func = do_sys_class_infiniband}, + + // firewall metrics + {.name = "/proc/net/stat/conntrack", .dim = "conntrack", .func = do_proc_net_stat_conntrack}, + {.name = "/proc/net/stat/synproxy", .dim = "synproxy", .func = do_proc_net_stat_synproxy}, + + // disk metrics + {.name = "/proc/diskstats", .dim = "diskstats", .func = do_proc_diskstats}, + {.name = "/proc/mdstat", .dim = "mdstat", .func = do_proc_mdstat}, + + // NFS metrics + {.name = "/proc/net/rpc/nfsd", .dim = "nfsd", .func = do_proc_net_rpc_nfsd}, + {.name = "/proc/net/rpc/nfs", .dim = "nfs", .func = do_proc_net_rpc_nfs}, + + // ZFS metrics + {.name = "/proc/spl/kstat/zfs/arcstats", .dim = "zfs_arcstats", .func = do_proc_spl_kstat_zfs_arcstats}, + {.name = "/proc/spl/kstat/zfs/pool/state",.dim = "zfs_pool_state",.func = do_proc_spl_kstat_zfs_pool_state}, + + // BTRFS metrics + {.name = "/sys/fs/btrfs", .dim = "btrfs", .func = do_sys_fs_btrfs}, + + // IPC metrics + {.name = "ipc", .dim = "ipc", .func = do_ipc}, + + // linux power supply metrics + {.name = "/sys/class/power_supply", .dim = "power_supply", .func = do_sys_class_power_supply}, + + // GPU metrics + {.name = "/sys/class/drm", .dim = "drm", .func = do_sys_class_drm}, + + // the terminator of this array + {.name = NULL, .dim = NULL, .func = NULL} +}; + +#if WORKER_UTILIZATION_MAX_JOB_TYPES < 36 +#error WORKER_UTILIZATION_MAX_JOB_TYPES has to be at least 36 +#endif + +static netdata_thread_t *netdev_thread = NULL; + +static void proc_main_cleanup(void *ptr) +{ + struct netdata_static_thread *static_thread = (struct netdata_static_thread *)ptr; + static_thread->enabled = NETDATA_MAIN_THREAD_EXITING; + + collector_info("cleaning up..."); + + if (netdev_thread) { + netdata_thread_join(*netdev_thread, NULL); + freez(netdev_thread); + } + + static_thread->enabled = NETDATA_MAIN_THREAD_EXITED; + + worker_unregister(); +} + +bool inside_lxc_container = false; + +static bool 
is_lxcfs_proc_mounted() { + // ff is always NULL here, so open the file unconditionally + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "/proc/self/mounts"); + + procfile *ff = procfile_open(filename, " \t", PROCFILE_FLAG_DEFAULT); + if (unlikely(!ff)) + return false; + + ff = procfile_readall(ff); + if (unlikely(!ff)) + return false; + + unsigned long l, lines = procfile_lines(ff); + + for (l = 0; l < lines; l++) { + size_t words = procfile_linewords(ff, l); + if (words < 2) { + continue; + } + if (!strcmp(procfile_lineword(ff, l, 0), "lxcfs") && !strncmp(procfile_lineword(ff, l, 1), "/proc", 5)) { + procfile_close(ff); + return true; + } + } + + procfile_close(ff); + + return false; +} + +static bool log_proc_module(BUFFER *wb, void *data) { + struct proc_module *pm = data; + buffer_sprintf(wb, "proc.plugin[%s]", pm->name); + return true; +} + +void *proc_main(void *ptr) +{ + worker_register("PROC"); + + rrd_collector_started(); + + if (config_get_boolean("plugin:proc", "/proc/net/dev", CONFIG_BOOLEAN_YES)) { + netdev_thread = mallocz(sizeof(netdata_thread_t)); + netdata_log_debug(D_SYSTEM, "Starting thread %s.", THREAD_NETDEV_NAME); + netdata_thread_create( + netdev_thread, THREAD_NETDEV_NAME, NETDATA_THREAD_OPTION_JOINABLE, netdev_main, netdev_thread); + } + + netdata_thread_cleanup_push(proc_main_cleanup, ptr) + { + config_get_boolean("plugin:proc", "/proc/pagetypeinfo", CONFIG_BOOLEAN_NO); + + // check the enabled status for each module + int i; + for(i = 0; proc_modules[i].name; i++) { + struct proc_module *pm = &proc_modules[i]; + + pm->enabled = config_get_boolean("plugin:proc", pm->name, CONFIG_BOOLEAN_YES); + pm->rd = NULL; + + worker_register_job_name(i, proc_modules[i].dim); + } + + usec_t step = localhost->rrd_update_every * USEC_PER_SEC; + heartbeat_t hb; + heartbeat_init(&hb); + + inside_lxc_container = is_lxcfs_proc_mounted(); + +#define LGS_MODULE_ID 0 + + ND_LOG_STACK lgs[] = { + [LGS_MODULE_ID] = ND_LOG_FIELD_TXT(NDF_MODULE, "proc.plugin"), + 
ND_LOG_FIELD_END(), + }; + ND_LOG_STACK_PUSH(lgs); + + while(service_running(SERVICE_COLLECTORS)) { + worker_is_idle(); + usec_t hb_dt = heartbeat_next(&hb, step); + + if(unlikely(!service_running(SERVICE_COLLECTORS))) + break; + + for(i = 0; proc_modules[i].name; i++) { + if(unlikely(!service_running(SERVICE_COLLECTORS))) + break; + + struct proc_module *pm = &proc_modules[i]; + if(unlikely(!pm->enabled)) + continue; + + worker_is_busy(i); + lgs[LGS_MODULE_ID] = ND_LOG_FIELD_CB(NDF_MODULE, log_proc_module, pm); + pm->enabled = !pm->func(localhost->rrd_update_every, hb_dt); + lgs[LGS_MODULE_ID] = ND_LOG_FIELD_TXT(NDF_MODULE, "proc.plugin"); + } + } + } + netdata_thread_cleanup_pop(1); + return NULL; +} + +int get_numa_node_count(void) +{ + static int numa_node_count = -1; + + if (numa_node_count != -1) + return numa_node_count; + + numa_node_count = 0; + + char name[FILENAME_MAX + 1]; + snprintfz(name, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/devices/system/node"); + char *dirname = config_get("plugin:proc:/sys/devices/system/node", "directory to monitor", name); + + DIR *dir = opendir(dirname); + if (dir) { + struct dirent *de = NULL; + while ((de = readdir(dir))) { + if (de->d_type != DT_DIR) + continue; + + if (strncmp(de->d_name, "node", 4) != 0) + continue; + + if (!isdigit(de->d_name[4])) + continue; + + numa_node_count++; + } + closedir(dir); + } + + return numa_node_count; +} diff --git a/src/collectors/proc.plugin/plugin_proc.h b/src/collectors/proc.plugin/plugin_proc.h new file mode 100644 index 000000000..187e76a97 --- /dev/null +++ b/src/collectors/proc.plugin/plugin_proc.h @@ -0,0 +1,71 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#ifndef NETDATA_PLUGIN_PROC_H +#define NETDATA_PLUGIN_PROC_H 1 + +#include "daemon/common.h" + +#define PLUGIN_PROC_CONFIG_NAME "proc" +#define PLUGIN_PROC_NAME PLUGIN_PROC_CONFIG_NAME ".plugin" + +#define THREAD_NETDEV_NAME "P[proc netdev]" +void *netdev_main(void *ptr); + +int 
do_proc_net_wireless(int update_every, usec_t dt); +int do_proc_diskstats(int update_every, usec_t dt); +int do_proc_mdstat(int update_every, usec_t dt); +int do_proc_net_netstat(int update_every, usec_t dt); +int do_proc_net_stat_conntrack(int update_every, usec_t dt); +int do_proc_net_ip_vs_stats(int update_every, usec_t dt); +int do_proc_stat(int update_every, usec_t dt); +int do_proc_meminfo(int update_every, usec_t dt); +int do_proc_vmstat(int update_every, usec_t dt); +int do_proc_net_rpc_nfs(int update_every, usec_t dt); +int do_proc_net_rpc_nfsd(int update_every, usec_t dt); +int do_proc_sys_fs_file_nr(int update_every, usec_t dt); +int do_proc_sys_kernel_random_entropy_avail(int update_every, usec_t dt); +int do_proc_interrupts(int update_every, usec_t dt); +int do_proc_softirqs(int update_every, usec_t dt); +int do_proc_pressure(int update_every, usec_t dt); +int do_sys_kernel_mm_ksm(int update_every, usec_t dt); +int do_sys_block_zram(int update_every, usec_t dt); +int do_proc_loadavg(int update_every, usec_t dt); +int do_proc_net_stat_synproxy(int update_every, usec_t dt); +int do_proc_net_softnet_stat(int update_every, usec_t dt); +int do_proc_uptime(int update_every, usec_t dt); +int do_proc_sys_devices_system_edac_mc(int update_every, usec_t dt); +int do_proc_sys_devices_pci_aer(int update_every, usec_t dt); +int do_proc_sys_devices_system_node(int update_every, usec_t dt); +int do_proc_spl_kstat_zfs_arcstats(int update_every, usec_t dt); +int do_proc_spl_kstat_zfs_pool_state(int update_every, usec_t dt); +int do_sys_fs_btrfs(int update_every, usec_t dt); +int do_proc_net_sockstat(int update_every, usec_t dt); +int do_proc_net_sockstat6(int update_every, usec_t dt); +int do_proc_net_sctp_snmp(int update_every, usec_t dt); +int do_ipc(int update_every, usec_t dt); +int do_sys_class_power_supply(int update_every, usec_t dt); +int do_proc_pagetypeinfo(int update_every, usec_t dt); +int do_sys_class_infiniband(int update_every, usec_t dt); +int 
do_sys_class_drm(int update_every, usec_t dt); +int get_numa_node_count(void); + +// metrics that need to be shared among data collectors +extern unsigned long long zfs_arcstats_shrinkable_cache_size_bytes; +extern bool inside_lxc_container; + +// netdev renames +void cgroup_rename_task_add( + const char *host_device, + const char *container_device, + const char *container_name, + RRDLABELS *labels, + const char *ctx_prefix, + const DICTIONARY_ITEM *cgroup_netdev_link); + +void cgroup_rename_task_device_del(const char *host_device); + +#include "proc_self_mountinfo.h" +#include "proc_pressure.h" +#include "zfs_common.h" + +#endif /* NETDATA_PLUGIN_PROC_H */ diff --git a/src/collectors/proc.plugin/proc_diskstats.c b/src/collectors/proc.plugin/proc_diskstats.c new file mode 100644 index 000000000..4ff617ff9 --- /dev/null +++ b/src/collectors/proc.plugin/proc_diskstats.c @@ -0,0 +1,2500 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +#define RRD_TYPE_DISK "disk" +#define PLUGIN_PROC_MODULE_DISKSTATS_NAME "/proc/diskstats" +#define CONFIG_SECTION_PLUGIN_PROC_DISKSTATS "plugin:" PLUGIN_PROC_CONFIG_NAME ":" PLUGIN_PROC_MODULE_DISKSTATS_NAME + +#define RRDFUNCTIONS_DISKSTATS_HELP "View block device statistics" + +#define DISK_TYPE_UNKNOWN 0 +#define DISK_TYPE_PHYSICAL 1 +#define DISK_TYPE_PARTITION 2 +#define DISK_TYPE_VIRTUAL 3 + +#define DEFAULT_PREFERRED_IDS "*" +#define DEFAULT_EXCLUDED_DISKS "loop* ram*" + +static netdata_mutex_t diskstats_dev_mutex = NETDATA_MUTEX_INITIALIZER; + +static struct disk { + char *disk; // the name of the disk (sda, sdb, etc, after being looked up) + char *device; // the device of the disk (before being looked up) + char *disk_by_id; + char *model; + char *serial; +// bool rotational; +// bool removable; + uint32_t hash; + unsigned long major; + unsigned long minor; + int sector_size; + int type; + + bool excluded; + bool function_ready; + + char *mount_point; + + char *chart_id; + + // disk options caching 
+ int do_io; + int do_ops; + int do_mops; + int do_iotime; + int do_qops; + int do_util; + int do_ext; + int do_backlog; + int do_bcache; + + int updated; + + int device_is_bcache; + + char *bcache_filename_dirty_data; + char *bcache_filename_writeback_rate; + char *bcache_filename_cache_congested; + char *bcache_filename_cache_available_percent; + char *bcache_filename_stats_five_minute_cache_hit_ratio; + char *bcache_filename_stats_hour_cache_hit_ratio; + char *bcache_filename_stats_day_cache_hit_ratio; + char *bcache_filename_stats_total_cache_hit_ratio; + char *bcache_filename_stats_total_cache_hits; + char *bcache_filename_stats_total_cache_misses; + char *bcache_filename_stats_total_cache_miss_collisions; + char *bcache_filename_stats_total_cache_bypass_hits; + char *bcache_filename_stats_total_cache_bypass_misses; + char *bcache_filename_stats_total_cache_readaheads; + char *bcache_filename_cache_read_races; + char *bcache_filename_cache_io_errors; + char *bcache_filename_priority_stats; + + usec_t bcache_priority_stats_update_every_usec; + usec_t bcache_priority_stats_elapsed_usec; + + RRDSET *st_io; + RRDDIM *rd_io_reads; + RRDDIM *rd_io_writes; + + RRDSET *st_ext_io; + RRDDIM *rd_io_discards; + + RRDSET *st_ops; + RRDDIM *rd_ops_reads; + RRDDIM *rd_ops_writes; + + RRDSET *st_ext_ops; + RRDDIM *rd_ops_discards; + RRDDIM *rd_ops_flushes; + + RRDSET *st_qops; + RRDDIM *rd_qops_operations; + + RRDSET *st_backlog; + RRDDIM *rd_backlog_backlog; + + RRDSET *st_busy; + RRDDIM *rd_busy_busy; + + RRDSET *st_util; + RRDDIM *rd_util_utilization; + + RRDSET *st_mops; + RRDDIM *rd_mops_reads; + RRDDIM *rd_mops_writes; + + RRDSET *st_ext_mops; + RRDDIM *rd_mops_discards; + + RRDSET *st_iotime; + RRDDIM *rd_iotime_reads; + RRDDIM *rd_iotime_writes; + + RRDSET *st_ext_iotime; + RRDDIM *rd_iotime_discards; + RRDDIM *rd_iotime_flushes; + + RRDSET *st_await; + RRDDIM *rd_await_reads; + RRDDIM *rd_await_writes; + + RRDSET *st_ext_await; + RRDDIM *rd_await_discards; + RRDDIM 
*rd_await_flushes; + + RRDSET *st_avgsz; + RRDDIM *rd_avgsz_reads; + RRDDIM *rd_avgsz_writes; + + RRDSET *st_ext_avgsz; + RRDDIM *rd_avgsz_discards; + + RRDSET *st_svctm; + RRDDIM *rd_svctm_svctm; + + RRDSET *st_bcache_size; + RRDDIM *rd_bcache_dirty_size; + + RRDSET *st_bcache_usage; + RRDDIM *rd_bcache_available_percent; + + RRDSET *st_bcache_hit_ratio; + RRDDIM *rd_bcache_hit_ratio_5min; + RRDDIM *rd_bcache_hit_ratio_1hour; + RRDDIM *rd_bcache_hit_ratio_1day; + RRDDIM *rd_bcache_hit_ratio_total; + + RRDSET *st_bcache; + RRDDIM *rd_bcache_hits; + RRDDIM *rd_bcache_misses; + RRDDIM *rd_bcache_miss_collisions; + + RRDSET *st_bcache_bypass; + RRDDIM *rd_bcache_bypass_hits; + RRDDIM *rd_bcache_bypass_misses; + + RRDSET *st_bcache_rates; + RRDDIM *rd_bcache_rate_congested; + RRDDIM *rd_bcache_readaheads; + RRDDIM *rd_bcache_rate_writeback; + + RRDSET *st_bcache_cache_allocations; + RRDDIM *rd_bcache_cache_allocations_unused; + RRDDIM *rd_bcache_cache_allocations_clean; + RRDDIM *rd_bcache_cache_allocations_dirty; + RRDDIM *rd_bcache_cache_allocations_metadata; + RRDDIM *rd_bcache_cache_allocations_unknown; + + RRDSET *st_bcache_cache_read_races; + RRDDIM *rd_bcache_cache_read_races; + RRDDIM *rd_bcache_cache_io_errors; + + struct disk *next; +} *disk_root = NULL; + +#define rrdset_obsolete_and_pointer_null(st) do { if(st) { rrdset_is_obsolete___safe_from_collector_thread(st); (st) = NULL; } } while(st) + +// static char *path_to_get_hw_sector_size = NULL; +// static char *path_to_get_hw_sector_size_partitions = NULL; +static char *path_to_sys_dev_block_major_minor_string = NULL; +static char *path_to_sys_block_device = NULL; +static char *path_to_sys_block_device_bcache = NULL; +static char *path_to_sys_devices_virtual_block_device = NULL; +static char *path_to_device_mapper = NULL; +static char *path_to_dev_disk = NULL; +static char *path_to_sys_block = NULL; +static char *path_to_device_label = NULL; +static char *path_to_device_id = NULL; +static char 
*path_to_veritas_volume_groups = NULL; +static int name_disks_by_id = CONFIG_BOOLEAN_NO; +static int global_bcache_priority_stats_update_every = 0; // disabled by default + +static int global_enable_new_disks_detected_at_runtime = CONFIG_BOOLEAN_YES, + global_enable_performance_for_physical_disks = CONFIG_BOOLEAN_AUTO, + global_enable_performance_for_virtual_disks = CONFIG_BOOLEAN_AUTO, + global_enable_performance_for_partitions = CONFIG_BOOLEAN_NO, + global_do_io = CONFIG_BOOLEAN_AUTO, + global_do_ops = CONFIG_BOOLEAN_AUTO, + global_do_mops = CONFIG_BOOLEAN_AUTO, + global_do_iotime = CONFIG_BOOLEAN_AUTO, + global_do_qops = CONFIG_BOOLEAN_AUTO, + global_do_util = CONFIG_BOOLEAN_AUTO, + global_do_ext = CONFIG_BOOLEAN_AUTO, + global_do_backlog = CONFIG_BOOLEAN_AUTO, + global_do_bcache = CONFIG_BOOLEAN_AUTO, + globals_initialized = 0, + global_cleanup_removed_disks = 1; + +static SIMPLE_PATTERN *preferred_ids = NULL; +static SIMPLE_PATTERN *excluded_disks = NULL; + +static unsigned long long int bcache_read_number_with_units(const char *filename) { + char buffer[50 + 1]; + if(read_txt_file(filename, buffer, sizeof(buffer)) == 0) { + static int unknown_units_error = 10; + + char *end = NULL; + NETDATA_DOUBLE value = str2ndd(buffer, &end); + if(end && *end) { + if(*end == 'k') + return (unsigned long long int)(value * 1024.0); + else if(*end == 'M') + return (unsigned long long int)(value * 1024.0 * 1024.0); + else if(*end == 'G') + return (unsigned long long int)(value * 1024.0 * 1024.0 * 1024.0); + else if(*end == 'T') + return (unsigned long long int)(value * 1024.0 * 1024.0 * 1024.0 * 1024.0); + else if(unknown_units_error > 0) { + collector_error("bcache file '%s' provides value '%s' with unknown units '%s'", filename, buffer, end); + unknown_units_error--; + } + } + + return (unsigned long long int)value; + } + + return 0; +} + +void bcache_read_priority_stats(struct disk *d, const char *family, int update_every, usec_t dt) { + static procfile *ff = NULL; + static 
char *separators = " \t:%[]"; + + static ARL_BASE *arl_base = NULL; + + static unsigned long long unused; + static unsigned long long clean; + static unsigned long long dirty; + static unsigned long long metadata; + static unsigned long long unknown; + + // check if it is time to update this metric + d->bcache_priority_stats_elapsed_usec += dt; + if(likely(d->bcache_priority_stats_elapsed_usec < d->bcache_priority_stats_update_every_usec)) return; + d->bcache_priority_stats_elapsed_usec = 0; + + // initialize ARL + if(unlikely(!arl_base)) { + arl_base = arl_create("bcache/priority_stats", NULL, 60); + arl_expect(arl_base, "Unused", &unused); + arl_expect(arl_base, "Clean", &clean); + arl_expect(arl_base, "Dirty", &dirty); + arl_expect(arl_base, "Metadata", &metadata); + } + + ff = procfile_reopen(ff, d->bcache_filename_priority_stats, separators, PROCFILE_FLAG_DEFAULT); + if(likely(ff)) ff = procfile_readall(ff); + if(unlikely(!ff)) { + separators = " \t:%[]"; + return; + } + + // do not reset the separators on every iteration + separators = NULL; + + arl_begin(arl_base); + unused = clean = dirty = metadata = unknown = 0; + + size_t lines = procfile_lines(ff), l; + + for(l = 0; l < lines ;l++) { + size_t words = procfile_linewords(ff, l); + if(unlikely(words < 2)) { + if(unlikely(words)) collector_error("Cannot read '%s' line %zu. 
Expected 2 params, read %zu.", d->bcache_filename_priority_stats, l, words); + continue; + } + + if(unlikely(arl_check(arl_base, + procfile_lineword(ff, l, 0), + procfile_lineword(ff, l, 1)))) break; + } + + unknown = 100 - unused - clean - dirty - metadata; + + // create / update the cache allocations chart + { + if(unlikely(!d->st_bcache_cache_allocations)) { + d->st_bcache_cache_allocations = rrdset_create_localhost( + "disk_bcache_cache_alloc" + , d->chart_id + , d->disk + , family + , "disk.bcache_cache_alloc" + , "BCache Cache Allocations" + , "percentage" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DISKSTATS_NAME + , NETDATA_CHART_PRIO_BCACHE_CACHE_ALLOC + , update_every + , RRDSET_TYPE_STACKED + ); + + d->rd_bcache_cache_allocations_unused = rrddim_add(d->st_bcache_cache_allocations, "unused", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + d->rd_bcache_cache_allocations_dirty = rrddim_add(d->st_bcache_cache_allocations, "dirty", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + d->rd_bcache_cache_allocations_clean = rrddim_add(d->st_bcache_cache_allocations, "clean", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + d->rd_bcache_cache_allocations_metadata = rrddim_add(d->st_bcache_cache_allocations, "metadata", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + d->rd_bcache_cache_allocations_unknown = rrddim_add(d->st_bcache_cache_allocations, "undefined", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + d->bcache_priority_stats_update_every_usec = update_every * USEC_PER_SEC; + } + + rrddim_set_by_pointer(d->st_bcache_cache_allocations, d->rd_bcache_cache_allocations_unused, unused); + rrddim_set_by_pointer(d->st_bcache_cache_allocations, d->rd_bcache_cache_allocations_dirty, dirty); + rrddim_set_by_pointer(d->st_bcache_cache_allocations, d->rd_bcache_cache_allocations_clean, clean); + rrddim_set_by_pointer(d->st_bcache_cache_allocations, d->rd_bcache_cache_allocations_metadata, metadata); + rrddim_set_by_pointer(d->st_bcache_cache_allocations, d->rd_bcache_cache_allocations_unknown, unknown); + 
rrdset_done(d->st_bcache_cache_allocations); + } +} + +static inline int is_major_enabled(int major) { + static int8_t *major_configs = NULL; + static size_t major_size = 0; + + if(major < 0) return 1; + + size_t wanted_size = (size_t)major + 1; + + if(major_size < wanted_size) { + major_configs = reallocz(major_configs, wanted_size * sizeof(int8_t)); + + size_t i; + for(i = major_size; i < wanted_size ; i++) + major_configs[i] = -1; + + major_size = wanted_size; + } + + if(major_configs[major] == -1) { + char buffer[CONFIG_MAX_NAME + 1]; + snprintfz(buffer, CONFIG_MAX_NAME, "performance metrics for disks with major %d", major); + major_configs[major] = (char)config_get_boolean(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, buffer, 1); + } + + return (int)major_configs[major]; +} + +static inline int get_disk_name_from_path(const char *path, char *result, size_t result_size, unsigned long major, unsigned long minor, char *disk, char *prefix, int depth) { + //collector_info("DEVICE-MAPPER ('%s', %lu:%lu): examining directory '%s' (allowed depth %d).", disk, major, minor, path, depth); + + int found = 0, preferred = 0; + + char *first_result = mallocz(result_size + 1); + + DIR *dir = opendir(path); + if (!dir) { + if (errno == ENOENT) + nd_log_collector(NDLP_DEBUG, "DEVICE-MAPPER ('%s', %lu:%lu): Cannot open directory '%s': no such file or directory.", disk, major, minor, path); + else + collector_error("DEVICE-MAPPER ('%s', %lu:%lu): Cannot open directory '%s'.", disk, major, minor, path); + goto failed; + } + + struct dirent *de = NULL; + while ((de = readdir(dir))) { + if(de->d_type == DT_DIR) { + if((de->d_name[0] == '.' && de->d_name[1] == '\0') || (de->d_name[0] == '.' && de->d_name[1] == '.' && de->d_name[2] == '\0')) + continue; + + if(depth <= 0) { + collector_error("DEVICE-MAPPER ('%s', %lu:%lu): Depth limit reached for path '%s/%s'. 
Ignoring path.", disk, major, minor, path, de->d_name); + break; + } + else { + char *path_nested = NULL; + char *prefix_nested = NULL; + + { + char buffer[FILENAME_MAX + 1]; + snprintfz(buffer, FILENAME_MAX, "%s/%s", path, de->d_name); + path_nested = strdupz(buffer); + + snprintfz(buffer, FILENAME_MAX, "%s%s%s", (prefix)?prefix:"", (prefix)?"_":"", de->d_name); + prefix_nested = strdupz(buffer); + } + + found = get_disk_name_from_path(path_nested, result, result_size, major, minor, disk, prefix_nested, depth - 1); + freez(path_nested); + freez(prefix_nested); + + if(found) break; + } + } + else if(de->d_type == DT_LNK || de->d_type == DT_BLK) { + char filename[FILENAME_MAX + 1]; + + if(de->d_type == DT_LNK) { + snprintfz(filename, FILENAME_MAX, "%s/%s", path, de->d_name); + ssize_t len = readlink(filename, result, result_size - 1); + if(len <= 0) { + collector_error("DEVICE-MAPPER ('%s', %lu:%lu): Cannot read link '%s'.", disk, major, minor, filename); + continue; + } + + result[len] = '\0'; + if(result[0] != '/') + snprintfz(filename, FILENAME_MAX, "%s/%s", path, result); + else + strncpyz(filename, result, FILENAME_MAX); + } + else { + snprintfz(filename, FILENAME_MAX, "%s/%s", path, de->d_name); + } + + struct stat sb; + if(stat(filename, &sb) == -1) { + collector_error("DEVICE-MAPPER ('%s', %lu:%lu): Cannot stat() file '%s'.", disk, major, minor, filename); + continue; + } + + if((sb.st_mode & S_IFMT) != S_IFBLK) { + //collector_info("DEVICE-MAPPER ('%s', %lu:%lu): file '%s' is not a block device.", disk, major, minor, filename); + continue; + } + + if(major(sb.st_rdev) != major || minor(sb.st_rdev) != minor || strcmp(basename(filename), disk)) { + //collector_info("DEVICE-MAPPER ('%s', %lu:%lu): filename '%s' does not match %lu:%lu.", disk, major, minor, filename, (unsigned long)major(sb.st_rdev), (unsigned long)minor(sb.st_rdev)); + continue; + } + + //collector_info("DEVICE-MAPPER ('%s', %lu:%lu): filename '%s' matches.", disk, major, minor, filename); + + 
snprintfz(result, result_size - 1, "%s%s%s", (prefix)?prefix:"", (prefix)?"_":"", de->d_name); + + if(!found) { + strncpyz(first_result, result, result_size); + found = 1; + } + + if(simple_pattern_matches(preferred_ids, result)) { + preferred = 1; + break; + } + } + } + closedir(dir); + + +failed: + + if(!found) + result[0] = '\0'; + else if(!preferred) + strncpyz(result, first_result, result_size); + + freez(first_result); + + return found; +} + +static inline char *get_disk_name(unsigned long major, unsigned long minor, char *disk) { + char result[FILENAME_MAX + 2] = ""; + + if(!path_to_device_mapper || !*path_to_device_mapper || !get_disk_name_from_path(path_to_device_mapper, result, FILENAME_MAX + 1, major, minor, disk, NULL, 0)) + if(!path_to_device_label || !*path_to_device_label || !get_disk_name_from_path(path_to_device_label, result, FILENAME_MAX + 1, major, minor, disk, NULL, 0)) + if(!path_to_veritas_volume_groups || !*path_to_veritas_volume_groups || !get_disk_name_from_path(path_to_veritas_volume_groups, result, FILENAME_MAX + 1, major, minor, disk, "vx", 2)) + if(name_disks_by_id != CONFIG_BOOLEAN_YES || !path_to_device_id || !*path_to_device_id || !get_disk_name_from_path(path_to_device_id, result, FILENAME_MAX + 1, major, minor, disk, NULL, 0)) + strncpyz(result, disk, FILENAME_MAX); + + if(!result[0]) + strncpyz(result, disk, FILENAME_MAX); + + netdata_fix_chart_name(result); + return strdupz(result); +} + +static inline bool ends_with(const char *str, const char *suffix) { + if (!str || !suffix) + return false; + + size_t len_str = strlen(str); + size_t len_suffix = strlen(suffix); + if (len_suffix > len_str) + return false; + + return strncmp(str + len_str - len_suffix, suffix, len_suffix) == 0; +} + +static inline char *get_disk_by_id(char *device) { + char pathname[256 + 1]; + snprintfz(pathname, sizeof(pathname) - 1, "%s/by-id", path_to_dev_disk); + + struct dirent *entry; + DIR *dp = opendir(pathname); + if (dp == NULL) { + internal_error(true, 
"Cannot open '%s'", pathname); + return NULL; + } + + while ((entry = readdir(dp))) { + // We ignore the '.' and '..' entries + if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0) + continue; + + if(strncmp(entry->d_name, "md-uuid-", 8) == 0 || + strncmp(entry->d_name, "dm-uuid-", 8) == 0 || + strncmp(entry->d_name, "nvme-eui.", 9) == 0 || + strncmp(entry->d_name, "wwn-", 4) == 0 || + strncmp(entry->d_name, "lvm-pv-uuid-", 12) == 0) + continue; + + char link_target[256 + 1]; + char full_path[256 + 1]; + snprintfz(full_path, 256, "%s/%s", pathname, entry->d_name); + + ssize_t len = readlink(full_path, link_target, 256); + if (len == -1) + continue; + + link_target[len] = '\0'; + + if (ends_with(link_target, device)) { + char *s = strdupz(entry->d_name); + closedir(dp); + return s; + } + } + + closedir(dp); + return NULL; +} + +static inline char *get_disk_model(char *device) { + char path[256 + 1]; + char buffer[256 + 1]; + + snprintfz(path, sizeof(path) - 1, "%s/%s/device/model", path_to_sys_block, device); + if(read_txt_file(path, buffer, sizeof(buffer)) != 0) { + snprintfz(path, sizeof(path) - 1, "%s/%s/device/name", path_to_sys_block, device); + if(read_txt_file(path, buffer, sizeof(buffer)) != 0) + return NULL; + } + + char *clean = trim(buffer); + if (!clean) + return NULL; + + return strdupz(clean); +} + +static inline char *get_disk_serial(char *device) { + char path[256 + 1]; + char buffer[256 + 1]; + + snprintfz(path, sizeof(path) - 1, "%s/%s/device/serial", path_to_sys_block, device); + if(read_txt_file(path, buffer, sizeof(buffer)) != 0) + return NULL; + + return strdupz(buffer); +} + +//static inline bool get_disk_rotational(char *device) { +// char path[256 + 1]; +// char buffer[256 + 1]; +// +// snprintfz(path, 256, "%s/%s/queue/rotational", path_to_sys_block, device); +// if(read_file(path, buffer, 256) != 0) +// return false; +// +// return buffer[0] == '1'; +//} +// +//static inline bool get_disk_removable(char *device) { +// 
char path[256 + 1]; +// char buffer[256 + 1]; +// +// snprintfz(path, 256, "%s/%s/removable", path_to_sys_block, device); +// if(read_file(path, buffer, 256) != 0) +// return false; +// +// return buffer[0] == '1'; +//} + +static void get_disk_config(struct disk *d) { + int def_enable = global_enable_new_disks_detected_at_runtime; + + if(def_enable != CONFIG_BOOLEAN_NO && (simple_pattern_matches(excluded_disks, d->device) || simple_pattern_matches(excluded_disks, d->disk))) { + d->excluded = true; + def_enable = CONFIG_BOOLEAN_NO; + } + + char var_name[4096 + 1]; + snprintfz(var_name, 4096, CONFIG_SECTION_PLUGIN_PROC_DISKSTATS ":%s", d->disk); + + if (config_exists(var_name, "enable")) + def_enable = config_get_boolean_ondemand(var_name, "enable", def_enable); + + if(unlikely(def_enable == CONFIG_BOOLEAN_NO)) { + // the user does not want any metrics for this disk + d->do_io = CONFIG_BOOLEAN_NO; + d->do_ops = CONFIG_BOOLEAN_NO; + d->do_mops = CONFIG_BOOLEAN_NO; + d->do_iotime = CONFIG_BOOLEAN_NO; + d->do_qops = CONFIG_BOOLEAN_NO; + d->do_util = CONFIG_BOOLEAN_NO; + d->do_ext = CONFIG_BOOLEAN_NO; + d->do_backlog = CONFIG_BOOLEAN_NO; + d->do_bcache = CONFIG_BOOLEAN_NO; + } + else { + // this disk is enabled + // check its direct settings + + int def_performance = CONFIG_BOOLEAN_AUTO; + + // since this is 'on demand' we can figure the performance settings + // based on the type of disk + + if(!d->device_is_bcache) { + switch(d->type) { + default: + case DISK_TYPE_UNKNOWN: + break; + + case DISK_TYPE_PHYSICAL: + def_performance = global_enable_performance_for_physical_disks; + break; + + case DISK_TYPE_PARTITION: + def_performance = global_enable_performance_for_partitions; + break; + + case DISK_TYPE_VIRTUAL: + def_performance = global_enable_performance_for_virtual_disks; + break; + } + } + + // check if we have to disable performance for this disk + if(def_performance) + def_performance = is_major_enabled((int)d->major); + + // 
------------------------------------------------------------ + // now we have def_performance and def_space + // to work further + + // def_performance + // check the user configuration (this will also show our 'on demand' decision) + if (config_exists(var_name, "enable performance metrics")) + def_performance = config_get_boolean_ondemand(var_name, "enable performance metrics", def_performance); + + int ddo_io = CONFIG_BOOLEAN_NO, + ddo_ops = CONFIG_BOOLEAN_NO, + ddo_mops = CONFIG_BOOLEAN_NO, + ddo_iotime = CONFIG_BOOLEAN_NO, + ddo_qops = CONFIG_BOOLEAN_NO, + ddo_util = CONFIG_BOOLEAN_NO, + ddo_ext = CONFIG_BOOLEAN_NO, + ddo_backlog = CONFIG_BOOLEAN_NO, + ddo_bcache = CONFIG_BOOLEAN_NO; + + // we enable individual performance charts only when def_performance is not disabled + if(unlikely(def_performance != CONFIG_BOOLEAN_NO)) { + ddo_io = global_do_io, + ddo_ops = global_do_ops, + ddo_mops = global_do_mops, + ddo_iotime = global_do_iotime, + ddo_qops = global_do_qops, + ddo_util = global_do_util, + ddo_ext = global_do_ext, + ddo_backlog = global_do_backlog, + ddo_bcache = global_do_bcache; + } else { + d->excluded = true; + } + + d->do_io = ddo_io; + d->do_ops = ddo_ops; + d->do_mops = ddo_mops; + d->do_iotime = ddo_iotime; + d->do_qops = ddo_qops; + d->do_util = ddo_util; + d->do_ext = ddo_ext; + d->do_backlog = ddo_backlog; + + if (config_exists(var_name, "bandwidth")) + d->do_io = config_get_boolean_ondemand(var_name, "bandwidth", ddo_io); + if (config_exists(var_name, "operations")) + d->do_ops = config_get_boolean_ondemand(var_name, "operations", ddo_ops); + if (config_exists(var_name, "merged operations")) + d->do_mops = config_get_boolean_ondemand(var_name, "merged operations", ddo_mops); + if (config_exists(var_name, "i/o time")) + d->do_iotime = config_get_boolean_ondemand(var_name, "i/o time", ddo_iotime); + if (config_exists(var_name, "queued operations")) + d->do_qops = config_get_boolean_ondemand(var_name, "queued operations", ddo_qops); + if 
(config_exists(var_name, "utilization percentage")) + d->do_util = config_get_boolean_ondemand(var_name, "utilization percentage", ddo_util); + if (config_exists(var_name, "extended operations")) + d->do_ext = config_get_boolean_ondemand(var_name, "extended operations", ddo_ext); + if (config_exists(var_name, "backlog")) + d->do_backlog = config_get_boolean_ondemand(var_name, "backlog", ddo_backlog); + + d->do_bcache = ddo_bcache; + + if (d->device_is_bcache) { + if (config_exists(var_name, "bcache")) + d->do_bcache = config_get_boolean_ondemand(var_name, "bcache", ddo_bcache); + } else { + d->do_bcache = 0; + } + } +} + +static struct disk *get_disk(unsigned long major, unsigned long minor, char *disk) { + static struct mountinfo *disk_mountinfo_root = NULL; + + struct disk *d; + + uint32_t hash = simple_hash(disk); + + // search for it in our RAM list. + // this is sequential, but since we just walk through + // and the number of disks / partitions in a system + // should not be that many, it should be acceptable + for(d = disk_root; d ; d = d->next){ + if (unlikely( + d->major == major && d->minor == minor && d->hash == hash && !strcmp(d->device, disk))) + return d; + } + + // not found + // create a new disk structure + d = (struct disk *)callocz(1, sizeof(struct disk)); + + d->excluded = false; + d->function_ready = false; + d->disk = get_disk_name(major, minor, disk); + d->device = strdupz(disk); + d->disk_by_id = get_disk_by_id(disk); + d->model = get_disk_model(disk); + d->serial = get_disk_serial(disk); +// d->rotational = get_disk_rotational(disk); +// d->removable = get_disk_removable(disk); + d->hash = simple_hash(d->device); + d->major = major; + d->minor = minor; + d->type = DISK_TYPE_UNKNOWN; // Default type. Changed later if not correct. 
+ d->sector_size = 512; // the default, will be changed below + d->next = NULL; + + // append it to the list + if(unlikely(!disk_root)) + disk_root = d; + else { + struct disk *last; + for(last = disk_root; last->next ;last = last->next); + last->next = d; + } + + d->chart_id = strdupz(d->device); + + // read device uuid if it is an LVM volume + if (!strncmp(d->device, "dm-", 3)) { + char uuid_filename[FILENAME_MAX + 1]; + int size = snprintfz(uuid_filename, FILENAME_MAX, path_to_sys_devices_virtual_block_device, disk); + strncat(uuid_filename, "/dm/uuid", FILENAME_MAX - size); + + char device_uuid[RRD_ID_LENGTH_MAX + 1]; + if (!read_txt_file(uuid_filename, device_uuid, sizeof(device_uuid)) && !strncmp(device_uuid, "LVM-", 4)) { + trim(device_uuid); + + char chart_id[RRD_ID_LENGTH_MAX + 1]; + snprintf(chart_id, RRD_ID_LENGTH_MAX, "%s-%s", d->device, device_uuid + 4); + + freez(d->chart_id); + d->chart_id = strdupz(chart_id); + } + } + + char buffer[FILENAME_MAX + 1]; + + // find if it is a physical disk + // by checking if /sys/block/DISK is readable. + snprintfz(buffer, FILENAME_MAX, path_to_sys_block_device, disk); + if(likely(access(buffer, R_OK) == 0)) { + // assign it here, but it will be overwritten if it is not a physical disk + d->type = DISK_TYPE_PHYSICAL; + } + + // find if it is a partition + // by checking if /sys/dev/block/MAJOR:MINOR/partition is readable. + snprintfz(buffer, FILENAME_MAX, path_to_sys_dev_block_major_minor_string, major, minor, "partition"); + if(likely(access(buffer, R_OK) == 0)) { + d->type = DISK_TYPE_PARTITION; + } + else { + // find if it is a virtual disk + // by checking if /sys/devices/virtual/block/DISK is readable. 
+ snprintfz(buffer, FILENAME_MAX, path_to_sys_devices_virtual_block_device, disk); + if(likely(access(buffer, R_OK) == 0)) { + d->type = DISK_TYPE_VIRTUAL; + } + else { + // find if it is a virtual device + // by checking if /sys/dev/block/MAJOR:MINOR/slaves has entries + snprintfz(buffer, FILENAME_MAX, path_to_sys_dev_block_major_minor_string, major, minor, "slaves/"); + DIR *dirp = opendir(buffer); + if (likely(dirp != NULL)) { + struct dirent *dp; + while ((dp = readdir(dirp))) { + // . and .. are also files in empty folders. + if (unlikely(strcmp(dp->d_name, ".") == 0 || strcmp(dp->d_name, "..") == 0)) { + continue; + } + + d->type = DISK_TYPE_VIRTUAL; + + // Stop the loop after we found one file. + break; + } + if (unlikely(closedir(dirp) == -1)) + collector_error("Unable to close dir %s", buffer); + } + } + } + + // ------------------------------------------------------------------------ + // check if we can find its mount point + + // mountinfo_find() can be called with NULL disk_mountinfo_root + struct mountinfo *mi = mountinfo_find(disk_mountinfo_root, d->major, d->minor, d->device); + if(unlikely(!mi)) { + // mountinfo_free_all can be called with NULL + mountinfo_free_all(disk_mountinfo_root); + disk_mountinfo_root = mountinfo_read(0); + mi = mountinfo_find(disk_mountinfo_root, d->major, d->minor, d->device); + } + + if(unlikely(mi)) + d->mount_point = strdupz(mi->mount_point); + else + d->mount_point = NULL; + + // ------------------------------------------------------------------------ + // find the disk sector size + + /* + * sector size is always 512 bytes inside the kernel #3481 + * + { + char tf[FILENAME_MAX + 1], *t; + strncpyz(tf, d->device, FILENAME_MAX); + + // replace all / with ! 
+ for(t = tf; *t ;t++) + if(unlikely(*t == '/')) *t = '!'; + + if(likely(d->type == DISK_TYPE_PARTITION)) + snprintfz(buffer, FILENAME_MAX, path_to_get_hw_sector_size_partitions, d->major, d->minor, tf); + else + snprintfz(buffer, FILENAME_MAX, path_to_get_hw_sector_size, tf); + + FILE *fpss = fopen(buffer, "r"); + if(likely(fpss)) { + char buffer2[1024 + 1]; + char *tmp = fgets(buffer2, 1024, fpss); + + if(likely(tmp)) { + d->sector_size = str2i(tmp); + if(unlikely(d->sector_size <= 0)) { + collector_error("Invalid sector size %d for device %s in %s. Assuming 512.", d->sector_size, d->device, buffer); + d->sector_size = 512; + } + } + else collector_error("Cannot read data for sector size for device %s from %s. Assuming 512.", d->device, buffer); + + fclose(fpss); + } + else collector_error("Cannot read sector size for device %s from %s. Assuming 512.", d->device, buffer); + } + */ + + // ------------------------------------------------------------------------ + // check if the device is a bcache + + struct stat bcache; + snprintfz(buffer, FILENAME_MAX, path_to_sys_block_device_bcache, disk); + if(unlikely(stat(buffer, &bcache) == 0 && (bcache.st_mode & S_IFMT) == S_IFDIR)) { + // we have the 'bcache' directory + d->device_is_bcache = 1; + + char buffer2[FILENAME_MAX + 1]; + + snprintfz(buffer2, FILENAME_MAX, "%s/cache/congested", buffer); + if(access(buffer2, R_OK) == 0) + d->bcache_filename_cache_congested = strdupz(buffer2); + else + collector_error("bcache file '%s' cannot be read.", buffer2); + + snprintfz(buffer2, FILENAME_MAX, "%s/readahead", buffer); + if(access(buffer2, R_OK) == 0) + d->bcache_filename_stats_total_cache_readaheads = strdupz(buffer2); + else + collector_error("bcache file '%s' cannot be read.", buffer2); + + snprintfz(buffer2, FILENAME_MAX, "%s/cache/cache0/priority_stats", buffer); // only one cache is supported by bcache + if(access(buffer2, R_OK) == 0) + d->bcache_filename_priority_stats = strdupz(buffer2); + else + 
collector_error("bcache file '%s' cannot be read.", buffer2); + + snprintfz(buffer2, FILENAME_MAX, "%s/cache/internal/cache_read_races", buffer); + if(access(buffer2, R_OK) == 0) + d->bcache_filename_cache_read_races = strdupz(buffer2); + else + collector_error("bcache file '%s' cannot be read.", buffer2); + + snprintfz(buffer2, FILENAME_MAX, "%s/cache/cache0/io_errors", buffer); + if(access(buffer2, R_OK) == 0) + d->bcache_filename_cache_io_errors = strdupz(buffer2); + else + collector_error("bcache file '%s' cannot be read.", buffer2); + + snprintfz(buffer2, FILENAME_MAX, "%s/dirty_data", buffer); + if(access(buffer2, R_OK) == 0) + d->bcache_filename_dirty_data = strdupz(buffer2); + else + collector_error("bcache file '%s' cannot be read.", buffer2); + + snprintfz(buffer2, FILENAME_MAX, "%s/writeback_rate", buffer); + if(access(buffer2, R_OK) == 0) + d->bcache_filename_writeback_rate = strdupz(buffer2); + else + collector_error("bcache file '%s' cannot be read.", buffer2); + + snprintfz(buffer2, FILENAME_MAX, "%s/cache/cache_available_percent", buffer); + if(access(buffer2, R_OK) == 0) + d->bcache_filename_cache_available_percent = strdupz(buffer2); + else + collector_error("bcache file '%s' cannot be read.", buffer2); + + snprintfz(buffer2, FILENAME_MAX, "%s/stats_total/cache_hits", buffer); + if(access(buffer2, R_OK) == 0) + d->bcache_filename_stats_total_cache_hits = strdupz(buffer2); + else + collector_error("bcache file '%s' cannot be read.", buffer2); + + snprintfz(buffer2, FILENAME_MAX, "%s/stats_five_minute/cache_hit_ratio", buffer); + if(access(buffer2, R_OK) == 0) + d->bcache_filename_stats_five_minute_cache_hit_ratio = strdupz(buffer2); + else + collector_error("bcache file '%s' cannot be read.", buffer2); + + snprintfz(buffer2, FILENAME_MAX, "%s/stats_hour/cache_hit_ratio", buffer); + if(access(buffer2, R_OK) == 0) + d->bcache_filename_stats_hour_cache_hit_ratio = strdupz(buffer2); + else + collector_error("bcache file '%s' cannot be read.", buffer2); 
+ + snprintfz(buffer2, FILENAME_MAX, "%s/stats_day/cache_hit_ratio", buffer); + if(access(buffer2, R_OK) == 0) + d->bcache_filename_stats_day_cache_hit_ratio = strdupz(buffer2); + else + collector_error("bcache file '%s' cannot be read.", buffer2); + + snprintfz(buffer2, FILENAME_MAX, "%s/stats_total/cache_hit_ratio", buffer); + if(access(buffer2, R_OK) == 0) + d->bcache_filename_stats_total_cache_hit_ratio = strdupz(buffer2); + else + collector_error("bcache file '%s' cannot be read.", buffer2); + + snprintfz(buffer2, FILENAME_MAX, "%s/stats_total/cache_misses", buffer); + if(access(buffer2, R_OK) == 0) + d->bcache_filename_stats_total_cache_misses = strdupz(buffer2); + else + collector_error("bcache file '%s' cannot be read.", buffer2); + + snprintfz(buffer2, FILENAME_MAX, "%s/stats_total/cache_bypass_hits", buffer); + if(access(buffer2, R_OK) == 0) + d->bcache_filename_stats_total_cache_bypass_hits = strdupz(buffer2); + else + collector_error("bcache file '%s' cannot be read.", buffer2); + + snprintfz(buffer2, FILENAME_MAX, "%s/stats_total/cache_bypass_misses", buffer); + if(access(buffer2, R_OK) == 0) + d->bcache_filename_stats_total_cache_bypass_misses = strdupz(buffer2); + else + collector_error("bcache file '%s' cannot be read.", buffer2); + + snprintfz(buffer2, FILENAME_MAX, "%s/stats_total/cache_miss_collisions", buffer); + if(access(buffer2, R_OK) == 0) + d->bcache_filename_stats_total_cache_miss_collisions = strdupz(buffer2); + else + collector_error("bcache file '%s' cannot be read.", buffer2); + } + + get_disk_config(d); + + return d; +} + +static const char *get_disk_type_string(int disk_type) { + switch (disk_type) { + case DISK_TYPE_PHYSICAL: + return "physical"; + case DISK_TYPE_PARTITION: + return "partition"; + case DISK_TYPE_VIRTUAL: + return "virtual"; + default: + return "unknown"; + } +} + +static void add_labels_to_disk(struct disk *d, RRDSET *st) { + rrdlabels_add(st->rrdlabels, "device", d->disk, RRDLABEL_SRC_AUTO); + 
rrdlabels_add(st->rrdlabels, "mount_point", d->mount_point, RRDLABEL_SRC_AUTO); + rrdlabels_add(st->rrdlabels, "id", d->disk_by_id, RRDLABEL_SRC_AUTO); + rrdlabels_add(st->rrdlabels, "model", d->model, RRDLABEL_SRC_AUTO); + rrdlabels_add(st->rrdlabels, "serial", d->serial, RRDLABEL_SRC_AUTO); + rrdlabels_add(st->rrdlabels, "device_type", get_disk_type_string(d->type), RRDLABEL_SRC_AUTO); +} + +static int diskstats_function_block_devices(BUFFER *wb, const char *function __maybe_unused) { + buffer_flush(wb); + wb->content_type = CT_APPLICATION_JSON; + buffer_json_initialize(wb, "\"", "\"", 0, true, BUFFER_JSON_OPTIONS_DEFAULT); + + buffer_json_member_add_string(wb, "hostname", rrdhost_hostname(localhost)); + buffer_json_member_add_uint64(wb, "status", HTTP_RESP_OK); + buffer_json_member_add_string(wb, "type", "table"); + buffer_json_member_add_time_t(wb, "update_every", 1); + buffer_json_member_add_boolean(wb, "has_history", false); + buffer_json_member_add_string(wb, "help", RRDFUNCTIONS_DISKSTATS_HELP); + buffer_json_member_add_array(wb, "data"); + + double max_io_reads = 0.0; + double max_io_writes = 0.0; + double max_io = 0.0; + double max_backlog_time = 0.0; + double max_busy_time = 0.0; + double max_busy_perc = 0.0; + double max_iops_reads = 0.0; + double max_iops_writes = 0.0; + double max_iops_time_reads = 0.0; + double max_iops_time_writes = 0.0; + double max_iops_avg_time_read = 0.0; + double max_iops_avg_time_write = 0.0; + double max_iops_avg_size_read = 0.0; + double max_iops_avg_size_write = 0.0; + + netdata_mutex_lock(&diskstats_dev_mutex); + + for (struct disk *d = disk_root; d; d = d->next) { + if (unlikely(!d->function_ready)) + continue; + + buffer_json_add_array_item_array(wb); + + buffer_json_add_array_item_string(wb, d->device); + buffer_json_add_array_item_string(wb, get_disk_type_string(d->type)); + buffer_json_add_array_item_string(wb, d->disk_by_id); + buffer_json_add_array_item_string(wb, d->model); + buffer_json_add_array_item_string(wb, 
d->serial); + + // IO + double io_reads = rrddim_get_last_stored_value(d->rd_io_reads, &max_io_reads, 1024.0); + double io_writes = rrddim_get_last_stored_value(d->rd_io_writes, &max_io_writes, 1024.0); + double io_total = NAN; + if (!isnan(io_reads) && !isnan(io_writes)) { + io_total = io_reads + io_writes; + max_io = MAX(max_io, io_total); + } + // Backlog and Busy Time + double busy_perc = rrddim_get_last_stored_value(d->rd_util_utilization, &max_busy_perc, 1); + double busy_time = rrddim_get_last_stored_value(d->rd_busy_busy, &max_busy_time, 1); + double backlog_time = rrddim_get_last_stored_value(d->rd_backlog_backlog, &max_backlog_time, 1); + // IOPS + double iops_reads = rrddim_get_last_stored_value(d->rd_ops_reads, &max_iops_reads, 1); + double iops_writes = rrddim_get_last_stored_value(d->rd_ops_writes, &max_iops_writes, 1); + // IO Time + double iops_time_reads = rrddim_get_last_stored_value(d->rd_iotime_reads, &max_iops_time_reads, 1); + double iops_time_writes = rrddim_get_last_stored_value(d->rd_iotime_writes, &max_iops_time_writes, 1); + // Avg IO Time + double iops_avg_time_read = rrddim_get_last_stored_value(d->rd_await_reads, &max_iops_avg_time_read, 1); + double iops_avg_time_write = rrddim_get_last_stored_value(d->rd_await_writes, &max_iops_avg_time_write, 1); + // Avg IO Size + double iops_avg_size_read = rrddim_get_last_stored_value(d->rd_avgsz_reads, &max_iops_avg_size_read, 1); + double iops_avg_size_write = rrddim_get_last_stored_value(d->rd_avgsz_writes, &max_iops_avg_size_write, 1); + + + buffer_json_add_array_item_double(wb, io_reads); + buffer_json_add_array_item_double(wb, io_writes); + buffer_json_add_array_item_double(wb, io_total); + buffer_json_add_array_item_double(wb, busy_perc); + buffer_json_add_array_item_double(wb, busy_time); + buffer_json_add_array_item_double(wb, backlog_time); + buffer_json_add_array_item_double(wb, iops_reads); + buffer_json_add_array_item_double(wb, iops_writes); + buffer_json_add_array_item_double(wb, 
iops_time_reads); + buffer_json_add_array_item_double(wb, iops_time_writes); + buffer_json_add_array_item_double(wb, iops_avg_time_read); + buffer_json_add_array_item_double(wb, iops_avg_time_write); + buffer_json_add_array_item_double(wb, iops_avg_size_read); + buffer_json_add_array_item_double(wb, iops_avg_size_write); + + // End + buffer_json_array_close(wb); + } + + netdata_mutex_unlock(&diskstats_dev_mutex); + + buffer_json_array_close(wb); // data + buffer_json_member_add_object(wb, "columns"); + { + size_t field_id = 0; + + buffer_rrdf_table_add_field(wb, field_id++, "Device", "Device Name", + RRDF_FIELD_TYPE_STRING, RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, + 0, NULL, NAN, RRDF_FIELD_SORT_ASCENDING, NULL, + RRDF_FIELD_SUMMARY_COUNT, RRDF_FIELD_FILTER_MULTISELECT, + RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_UNIQUE_KEY | RRDF_FIELD_OPTS_STICKY, + NULL); + buffer_rrdf_table_add_field(wb, field_id++, "Type", "Device Type", + RRDF_FIELD_TYPE_STRING, RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, + 0, NULL, NAN, RRDF_FIELD_SORT_ASCENDING, NULL, + RRDF_FIELD_SUMMARY_COUNT, RRDF_FIELD_FILTER_MULTISELECT, + RRDF_FIELD_OPTS_UNIQUE_KEY, + NULL); + buffer_rrdf_table_add_field(wb, field_id++, "ID", "Device ID", + RRDF_FIELD_TYPE_STRING, RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, + 0, NULL, NAN, RRDF_FIELD_SORT_ASCENDING, NULL, + RRDF_FIELD_SUMMARY_COUNT, RRDF_FIELD_FILTER_MULTISELECT, + RRDF_FIELD_OPTS_UNIQUE_KEY, + NULL); + buffer_rrdf_table_add_field(wb, field_id++, "Model", "Device Model", + RRDF_FIELD_TYPE_STRING, RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, + 0, NULL, NAN, RRDF_FIELD_SORT_ASCENDING, NULL, + RRDF_FIELD_SUMMARY_COUNT, RRDF_FIELD_FILTER_MULTISELECT, + RRDF_FIELD_OPTS_UNIQUE_KEY, + NULL); + buffer_rrdf_table_add_field(wb, field_id++, "Serial", "Device Serial Number", + RRDF_FIELD_TYPE_STRING, RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, + 0, NULL, NAN, RRDF_FIELD_SORT_ASCENDING, NULL, + RRDF_FIELD_SUMMARY_COUNT, 
RRDF_FIELD_FILTER_MULTISELECT, + RRDF_FIELD_OPTS_UNIQUE_KEY, + NULL); + + buffer_rrdf_table_add_field(wb, field_id++, "Read", "Data Read from Device", + RRDF_FIELD_TYPE_BAR_WITH_INTEGER, RRDF_FIELD_VISUAL_BAR, RRDF_FIELD_TRANSFORM_NUMBER, + 2, "MiB", max_io_reads, RRDF_FIELD_SORT_DESCENDING, NULL, + RRDF_FIELD_SUMMARY_SUM, RRDF_FIELD_FILTER_NONE, + RRDF_FIELD_OPTS_VISIBLE, + NULL); + buffer_rrdf_table_add_field(wb, field_id++, "Written", "Data Written to Device", + RRDF_FIELD_TYPE_BAR_WITH_INTEGER, RRDF_FIELD_VISUAL_BAR, RRDF_FIELD_TRANSFORM_NUMBER, + 2, "MiB", max_io_writes, RRDF_FIELD_SORT_DESCENDING, NULL, + RRDF_FIELD_SUMMARY_SUM, RRDF_FIELD_FILTER_NONE, + RRDF_FIELD_OPTS_VISIBLE, + NULL); + buffer_rrdf_table_add_field(wb, field_id++, "Total", "Data Transferred to and from Device", + RRDF_FIELD_TYPE_BAR_WITH_INTEGER, RRDF_FIELD_VISUAL_BAR, RRDF_FIELD_TRANSFORM_NUMBER, + 2, "MiB", max_io, RRDF_FIELD_SORT_DESCENDING, NULL, + RRDF_FIELD_SUMMARY_SUM, RRDF_FIELD_FILTER_NONE, + RRDF_FIELD_OPTS_NONE, + NULL); + + buffer_rrdf_table_add_field(wb, field_id++, "Busy%", "Disk Busy Percentage", + RRDF_FIELD_TYPE_BAR_WITH_INTEGER, RRDF_FIELD_VISUAL_BAR, RRDF_FIELD_TRANSFORM_NUMBER, + 2, "%", max_busy_perc, RRDF_FIELD_SORT_DESCENDING, NULL, + RRDF_FIELD_SUMMARY_SUM, RRDF_FIELD_FILTER_NONE, + RRDF_FIELD_OPTS_VISIBLE, + NULL); + buffer_rrdf_table_add_field(wb, field_id++, "Busy", "Disk Busy Time", + RRDF_FIELD_TYPE_BAR_WITH_INTEGER, RRDF_FIELD_VISUAL_BAR, RRDF_FIELD_TRANSFORM_NUMBER, + 2, "milliseconds", max_busy_time, RRDF_FIELD_SORT_DESCENDING, NULL, + RRDF_FIELD_SUMMARY_SUM, RRDF_FIELD_FILTER_NONE, + RRDF_FIELD_OPTS_VISIBLE, + NULL); + buffer_rrdf_table_add_field(wb, field_id++, "Backlog", "Disk Backlog", + RRDF_FIELD_TYPE_BAR_WITH_INTEGER, RRDF_FIELD_VISUAL_BAR, RRDF_FIELD_TRANSFORM_NUMBER, + 2, "milliseconds", max_backlog_time, RRDF_FIELD_SORT_DESCENDING, NULL, + RRDF_FIELD_SUMMARY_SUM, RRDF_FIELD_FILTER_NONE, + RRDF_FIELD_OPTS_VISIBLE, + NULL); + + 
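Each bar column in this table is given a running maximum (`max_io_reads`, `max_busy_perc`, ...) that the data rows accumulate while scaling raw values, e.g. dividing KiB rates by 1024.0 for the MiB columns. The exact semantics of `rrddim_get_last_stored_value` are internal to Netdata; the sketch below is only an illustrative stand-in for the scale-and-track-max step:

```c
#include <assert.h>
#include <math.h>

// Scale a collected value by a divisor (e.g. KiB -> MiB with 1024.0) and
// track the per-column maximum, mirroring how the rows feed the "max"
// value later passed to each bar column definition. NAN marks "no data"
// and leaves the maximum untouched.
static double scaled_with_max(double collected, double divisor, double *max) {
    if (isnan(collected))
        return NAN;
    double v = collected / divisor;
    if (v > *max)
        *max = v;
    return v;
}
```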
buffer_rrdf_table_add_field(wb, field_id++, "Reads", "Completed Read Operations", + RRDF_FIELD_TYPE_BAR_WITH_INTEGER, RRDF_FIELD_VISUAL_BAR, RRDF_FIELD_TRANSFORM_NUMBER, + 2, "ops", max_iops_reads, RRDF_FIELD_SORT_DESCENDING, NULL, + RRDF_FIELD_SUMMARY_SUM, RRDF_FIELD_FILTER_NONE, + RRDF_FIELD_OPTS_VISIBLE, + NULL); + buffer_rrdf_table_add_field(wb, field_id++, "Writes", "Completed Write Operations", + RRDF_FIELD_TYPE_BAR_WITH_INTEGER, RRDF_FIELD_VISUAL_BAR, RRDF_FIELD_TRANSFORM_NUMBER, + 2, "ops", max_iops_writes, RRDF_FIELD_SORT_DESCENDING, NULL, + RRDF_FIELD_SUMMARY_SUM, RRDF_FIELD_FILTER_NONE, + RRDF_FIELD_OPTS_VISIBLE, + NULL); + + buffer_rrdf_table_add_field(wb, field_id++, "ReadsTime", "Read Operations Time", + RRDF_FIELD_TYPE_BAR_WITH_INTEGER, RRDF_FIELD_VISUAL_BAR, RRDF_FIELD_TRANSFORM_NUMBER, + 2, "milliseconds", max_iops_time_reads, RRDF_FIELD_SORT_DESCENDING, NULL, + RRDF_FIELD_SUMMARY_SUM, RRDF_FIELD_FILTER_NONE, + RRDF_FIELD_OPTS_VISIBLE, + NULL); + buffer_rrdf_table_add_field(wb, field_id++, "WritesTime", "Write Operations Time", + RRDF_FIELD_TYPE_BAR_WITH_INTEGER, RRDF_FIELD_VISUAL_BAR, RRDF_FIELD_TRANSFORM_NUMBER, + 2, "milliseconds", max_iops_time_writes, RRDF_FIELD_SORT_DESCENDING, NULL, + RRDF_FIELD_SUMMARY_SUM, RRDF_FIELD_FILTER_NONE, + RRDF_FIELD_OPTS_VISIBLE, + NULL); + + buffer_rrdf_table_add_field(wb, field_id++, "ReadAvgTime", "Average Read Operation Service Time", + RRDF_FIELD_TYPE_BAR_WITH_INTEGER, RRDF_FIELD_VISUAL_BAR, RRDF_FIELD_TRANSFORM_NUMBER, + 2, "milliseconds", max_iops_avg_time_read, RRDF_FIELD_SORT_DESCENDING, NULL, + RRDF_FIELD_SUMMARY_SUM, RRDF_FIELD_FILTER_NONE, + RRDF_FIELD_OPTS_VISIBLE, + NULL); + buffer_rrdf_table_add_field(wb, field_id++, "WriteAvgTime", "Average Write Operation Service Time", + RRDF_FIELD_TYPE_BAR_WITH_INTEGER, RRDF_FIELD_VISUAL_BAR, RRDF_FIELD_TRANSFORM_NUMBER, + 2, "milliseconds", max_iops_avg_time_write, RRDF_FIELD_SORT_DESCENDING, NULL, + RRDF_FIELD_SUMMARY_SUM, RRDF_FIELD_FILTER_NONE, + 
RRDF_FIELD_OPTS_VISIBLE, + NULL); + + buffer_rrdf_table_add_field(wb, field_id++, "ReadAvgSz", "Average Read Operation Size", + RRDF_FIELD_TYPE_BAR_WITH_INTEGER, RRDF_FIELD_VISUAL_BAR, RRDF_FIELD_TRANSFORM_NUMBER, + 2, "KiB", max_iops_avg_size_read, RRDF_FIELD_SORT_DESCENDING, NULL, + RRDF_FIELD_SUMMARY_SUM, RRDF_FIELD_FILTER_NONE, + RRDF_FIELD_OPTS_VISIBLE, + NULL); + buffer_rrdf_table_add_field(wb, field_id++, "WriteAvgSz", "Average Write Operation Size", + RRDF_FIELD_TYPE_BAR_WITH_INTEGER, RRDF_FIELD_VISUAL_BAR, RRDF_FIELD_TRANSFORM_NUMBER, + 2, "KiB", max_iops_avg_size_write, RRDF_FIELD_SORT_DESCENDING, NULL, + RRDF_FIELD_SUMMARY_SUM, RRDF_FIELD_FILTER_NONE, + RRDF_FIELD_OPTS_VISIBLE, + NULL); + } + + buffer_json_object_close(wb); // columns + buffer_json_member_add_string(wb, "default_sort_column", "Total"); + + buffer_json_member_add_object(wb, "charts"); + { + buffer_json_member_add_object(wb, "IO"); + { + buffer_json_member_add_string(wb, "name", "IO"); + buffer_json_member_add_string(wb, "type", "stacked-bar"); + buffer_json_member_add_array(wb, "columns"); + { + buffer_json_add_array_item_string(wb, "Read"); + buffer_json_add_array_item_string(wb, "Written"); + } + buffer_json_array_close(wb); + } + buffer_json_object_close(wb); + + buffer_json_member_add_object(wb, "Busy"); + { + buffer_json_member_add_string(wb, "name", "Busy"); + buffer_json_member_add_string(wb, "type", "stacked-bar"); + buffer_json_member_add_array(wb, "columns"); + { + buffer_json_add_array_item_string(wb, "Busy"); + } + buffer_json_array_close(wb); + } + buffer_json_object_close(wb); + } + buffer_json_object_close(wb); // charts + + buffer_json_member_add_array(wb, "default_charts"); + { + buffer_json_add_array_item_array(wb); + buffer_json_add_array_item_string(wb, "IO"); + buffer_json_add_array_item_string(wb, "Device"); + buffer_json_array_close(wb); + + buffer_json_add_array_item_array(wb); + buffer_json_add_array_item_string(wb, "Busy"); + buffer_json_add_array_item_string(wb, 
"Device"); + buffer_json_array_close(wb); + } + buffer_json_array_close(wb); + + buffer_json_member_add_object(wb, "group_by"); + { + buffer_json_member_add_object(wb, "Type"); + { + buffer_json_member_add_string(wb, "name", "Type"); + buffer_json_member_add_array(wb, "columns"); + { + buffer_json_add_array_item_string(wb, "Type"); + } + buffer_json_array_close(wb); + } + buffer_json_object_close(wb); + } + buffer_json_object_close(wb); // group_by + + buffer_json_member_add_time_t(wb, "expires", now_realtime_sec() + 1); + buffer_json_finalize(wb); + + return HTTP_RESP_OK; +} + +static void diskstats_cleanup_disks() { + struct disk *d = disk_root, *last = NULL; + while (d) { + if (unlikely(global_cleanup_removed_disks && !d->updated)) { + struct disk *t = d; + + rrdset_obsolete_and_pointer_null(d->st_avgsz); + rrdset_obsolete_and_pointer_null(d->st_ext_avgsz); + rrdset_obsolete_and_pointer_null(d->st_await); + rrdset_obsolete_and_pointer_null(d->st_ext_await); + rrdset_obsolete_and_pointer_null(d->st_backlog); + rrdset_obsolete_and_pointer_null(d->st_busy); + rrdset_obsolete_and_pointer_null(d->st_io); + rrdset_obsolete_and_pointer_null(d->st_ext_io); + rrdset_obsolete_and_pointer_null(d->st_iotime); + rrdset_obsolete_and_pointer_null(d->st_ext_iotime); + rrdset_obsolete_and_pointer_null(d->st_mops); + rrdset_obsolete_and_pointer_null(d->st_ext_mops); + rrdset_obsolete_and_pointer_null(d->st_ops); + rrdset_obsolete_and_pointer_null(d->st_ext_ops); + rrdset_obsolete_and_pointer_null(d->st_qops); + rrdset_obsolete_and_pointer_null(d->st_svctm); + rrdset_obsolete_and_pointer_null(d->st_util); + rrdset_obsolete_and_pointer_null(d->st_bcache); + rrdset_obsolete_and_pointer_null(d->st_bcache_bypass); + rrdset_obsolete_and_pointer_null(d->st_bcache_rates); + rrdset_obsolete_and_pointer_null(d->st_bcache_size); + rrdset_obsolete_and_pointer_null(d->st_bcache_usage); + rrdset_obsolete_and_pointer_null(d->st_bcache_hit_ratio); + 
rrdset_obsolete_and_pointer_null(d->st_bcache_cache_allocations); + rrdset_obsolete_and_pointer_null(d->st_bcache_cache_read_races); + + if (d == disk_root) { + disk_root = d = d->next; + last = NULL; + } else if (last) { + last->next = d = d->next; + } + + freez(t->bcache_filename_dirty_data); + freez(t->bcache_filename_writeback_rate); + freez(t->bcache_filename_cache_congested); + freez(t->bcache_filename_cache_available_percent); + freez(t->bcache_filename_stats_five_minute_cache_hit_ratio); + freez(t->bcache_filename_stats_hour_cache_hit_ratio); + freez(t->bcache_filename_stats_day_cache_hit_ratio); + freez(t->bcache_filename_stats_total_cache_hit_ratio); + freez(t->bcache_filename_stats_total_cache_hits); + freez(t->bcache_filename_stats_total_cache_misses); + freez(t->bcache_filename_stats_total_cache_miss_collisions); + freez(t->bcache_filename_stats_total_cache_bypass_hits); + freez(t->bcache_filename_stats_total_cache_bypass_misses); + freez(t->bcache_filename_stats_total_cache_readaheads); + freez(t->bcache_filename_cache_read_races); + freez(t->bcache_filename_cache_io_errors); + freez(t->bcache_filename_priority_stats); + + freez(t->disk); + freez(t->device); + freez(t->disk_by_id); + freez(t->model); + freez(t->serial); + freez(t->mount_point); + freez(t->chart_id); + freez(t); + } else { + d->updated = 0; + last = d; + d = d->next; + } + } +} + +int do_proc_diskstats(int update_every, usec_t dt) { + static procfile *ff = NULL; + + if(unlikely(!globals_initialized)) { + globals_initialized = 1; + + global_enable_new_disks_detected_at_runtime = config_get_boolean(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "enable new disks detected at runtime", global_enable_new_disks_detected_at_runtime); + global_enable_performance_for_physical_disks = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "performance metrics for physical disks", global_enable_performance_for_physical_disks); + global_enable_performance_for_virtual_disks = 
config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "performance metrics for virtual disks", global_enable_performance_for_virtual_disks); + global_enable_performance_for_partitions = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "performance metrics for partitions", global_enable_performance_for_partitions); + + global_do_io = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "bandwidth for all disks", global_do_io); + global_do_ops = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "operations for all disks", global_do_ops); + global_do_mops = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "merged operations for all disks", global_do_mops); + global_do_iotime = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "i/o time for all disks", global_do_iotime); + global_do_qops = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "queued operations for all disks", global_do_qops); + global_do_util = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "utilization percentage for all disks", global_do_util); + global_do_ext = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "extended operations for all disks", global_do_ext); + global_do_backlog = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "backlog for all disks", global_do_backlog); + global_do_bcache = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "bcache for all disks", global_do_bcache); + global_bcache_priority_stats_update_every = (int)config_get_number(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "bcache priority stats update every", global_bcache_priority_stats_update_every); + + global_cleanup_removed_disks = config_get_boolean(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "remove charts of removed disks" , global_cleanup_removed_disks); + + char buffer[FILENAME_MAX + 1]; + + snprintfz(buffer, FILENAME_MAX, "%s%s", 
netdata_configured_host_prefix, "/sys/block/%s"); + path_to_sys_block_device = config_get(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "path to get block device", buffer); + + snprintfz(buffer, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/block/%s/bcache"); + path_to_sys_block_device_bcache = config_get(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "path to get block device bcache", buffer); + + snprintfz(buffer, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/devices/virtual/block/%s"); + path_to_sys_devices_virtual_block_device = config_get(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "path to get virtual block device", buffer); + + snprintfz(buffer, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/dev/block/%lu:%lu/%s"); + path_to_sys_dev_block_major_minor_string = config_get(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "path to get block device infos", buffer); + + //snprintfz(buffer, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/block/%s/queue/hw_sector_size"); + //path_to_get_hw_sector_size = config_get(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "path to get h/w sector size", buffer); + + //snprintfz(buffer, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/dev/block/%lu:%lu/subsystem/%s/../queue/hw_sector_size"); + //path_to_get_hw_sector_size_partitions = config_get(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "path to get h/w sector size for partitions", buffer); + + snprintfz(buffer, FILENAME_MAX, "%s/dev/mapper", netdata_configured_host_prefix); + path_to_device_mapper = config_get(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "path to device mapper", buffer); + + snprintfz(buffer, FILENAME_MAX, "%s/dev/disk", netdata_configured_host_prefix); + path_to_dev_disk = config_get(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "path to /dev/disk", buffer); + + snprintfz(buffer, FILENAME_MAX, "%s/sys/block", netdata_configured_host_prefix); + path_to_sys_block = config_get(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "path to /sys/block", buffer); + + 
snprintfz(buffer, FILENAME_MAX, "%s/dev/disk/by-label", netdata_configured_host_prefix); + path_to_device_label = config_get(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "path to /dev/disk/by-label", buffer); + + snprintfz(buffer, FILENAME_MAX, "%s/dev/disk/by-id", netdata_configured_host_prefix); + path_to_device_id = config_get(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "path to /dev/disk/by-id", buffer); + + snprintfz(buffer, FILENAME_MAX, "%s/dev/vx/dsk", netdata_configured_host_prefix); + path_to_veritas_volume_groups = config_get(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "path to /dev/vx/dsk", buffer); + + name_disks_by_id = config_get_boolean(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "name disks by id", name_disks_by_id); + + preferred_ids = simple_pattern_create( + config_get(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "preferred disk ids", DEFAULT_PREFERRED_IDS), NULL, + SIMPLE_PATTERN_EXACT, true); + + excluded_disks = simple_pattern_create( + config_get(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "exclude disks", DEFAULT_EXCLUDED_DISKS), NULL, + SIMPLE_PATTERN_EXACT, true); + + rrd_function_add_inline(localhost, NULL, "block-devices", 10, + RRDFUNCTIONS_PRIORITY_DEFAULT, RRDFUNCTIONS_DISKSTATS_HELP, + "top", HTTP_ACCESS_ANONYMOUS_DATA, + diskstats_function_block_devices); + } + + // -------------------------------------------------------------------------- + + if(unlikely(!ff)) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/diskstats"); + ff = procfile_open(config_get(CONFIG_SECTION_PLUGIN_PROC_DISKSTATS, "filename to monitor", filename), " \t", PROCFILE_FLAG_DEFAULT); + } + if(unlikely(!ff)) return 0; + + ff = procfile_readall(ff); + if(unlikely(!ff)) return 0; // we return 0, so that we will retry to open it next time + + size_t lines = procfile_lines(ff), l; + + collected_number system_read_kb = 0, system_write_kb = 0; + + int do_dc_stats = 0, do_fl_stats = 0; + + netdata_mutex_lock(&diskstats_dev_mutex); + + 
for(l = 0; l < lines ;l++) { + // -------------------------------------------------------------------------- + // Read parameters + + char *disk; + unsigned long major = 0, minor = 0; + + collected_number reads = 0, mreads = 0, readsectors = 0, readms = 0, + writes = 0, mwrites = 0, writesectors = 0, writems = 0, + queued_ios = 0, busy_ms = 0, backlog_ms = 0, + discards = 0, mdiscards = 0, discardsectors = 0, discardms = 0, + flushes = 0, flushms = 0; + + + collected_number last_reads = 0, last_readsectors = 0, last_readms = 0, + last_writes = 0, last_writesectors = 0, last_writems = 0, + last_busy_ms = 0, + last_discards = 0, last_discardsectors = 0, last_discardms = 0, + last_flushes = 0, last_flushms = 0; + + size_t words = procfile_linewords(ff, l); + if(unlikely(words < 14)) continue; + + major = str2ul(procfile_lineword(ff, l, 0)); + minor = str2ul(procfile_lineword(ff, l, 1)); + disk = procfile_lineword(ff, l, 2); + + // # of reads completed # of writes completed + // This is the total number of reads or writes completed successfully. + reads = str2ull(procfile_lineword(ff, l, 3), NULL); // rd_ios + writes = str2ull(procfile_lineword(ff, l, 7), NULL); // wr_ios + + // # of reads merged # of writes merged + // Reads and writes which are adjacent to each other may be merged for + // efficiency. Thus two 4K reads may become one 8K read before it is + // ultimately handed to the disk, and so it will be counted (and queued) as only one I/O. + mreads = str2ull(procfile_lineword(ff, l, 4), NULL); // rd_merges_or_rd_sec + mwrites = str2ull(procfile_lineword(ff, l, 8), NULL); // wr_merges + + // # of sectors read # of sectors written + // This is the total number of sectors read or written successfully. 
+ readsectors = str2ull(procfile_lineword(ff, l, 5), NULL); // rd_sec_or_wr_ios + writesectors = str2ull(procfile_lineword(ff, l, 9), NULL); // wr_sec + + // # of milliseconds spent reading # of milliseconds spent writing + // This is the total number of milliseconds spent by all reads or writes (as + // measured from __make_request() to end_that_request_last()). + readms = str2ull(procfile_lineword(ff, l, 6), NULL); // rd_ticks_or_wr_sec + writems = str2ull(procfile_lineword(ff, l, 10), NULL); // wr_ticks + + // # of I/Os currently in progress + // The only field that should go to zero. Incremented as requests are + // given to appropriate struct request_queue and decremented as they finish. + queued_ios = str2ull(procfile_lineword(ff, l, 11), NULL); // ios_pgr + + // # of milliseconds spent doing I/Os + // This field increases so long as field queued_ios is nonzero. + busy_ms = str2ull(procfile_lineword(ff, l, 12), NULL); // tot_ticks + + // weighted # of milliseconds spent doing I/Os + // This field is incremented at each I/O start, I/O completion, I/O + // merge, or read of these stats by the number of I/Os in progress + // (field queued_ios) times the number of milliseconds spent doing I/O since the + // last update of this field. This can provide an easy measure of both + // I/O completion time and the backlog that may be accumulating. + backlog_ms = str2ull(procfile_lineword(ff, l, 13), NULL); // rq_ticks + + if (unlikely(words > 13)) { + do_dc_stats = 1; + + // # of discards completed + // This is the total number of discards completed successfully. + discards = str2ull(procfile_lineword(ff, l, 14), NULL); // dc_ios + + // # of discards merged + // See the description of mreads/mwrites + mdiscards = str2ull(procfile_lineword(ff, l, 15), NULL); // dc_merges + + // # of sectors discarded + // This is the total number of sectors discarded successfully. 
+ discardsectors = str2ull(procfile_lineword(ff, l, 16), NULL); // dc_sec + + // # of milliseconds spent discarding + // This is the total number of milliseconds spent by all discards (as + // measured from __make_request() to end_that_request_last()). + discardms = str2ull(procfile_lineword(ff, l, 17), NULL); // dc_ticks + } + + if (unlikely(words > 17)) { + do_fl_stats = 1; + + // number of flush I/Os processed + // These values increment when a flush I/O request completes. + // The block layer combines flush requests and executes at most one at a time. + // This counts flush requests executed by the disk. Not tracked for partitions. + flushes = str2ull(procfile_lineword(ff, l, 18), NULL); // fl_ios + + // total wait time for flush requests + flushms = str2ull(procfile_lineword(ff, l, 19), NULL); // fl_ticks + } + + // -------------------------------------------------------------------------- + // get a disk structure for the disk + + struct disk *d = get_disk(major, minor, disk); + d->updated = 1; + + // -------------------------------------------------------------------------- + // count the global system disk I/O of physical disks + + if(unlikely(d->type == DISK_TYPE_PHYSICAL)) { + system_read_kb += readsectors * d->sector_size / 1024; + system_write_kb += writesectors * d->sector_size / 1024; + } + + // -------------------------------------------------------------------------- + // Set its family based on mount point + + char *family = d->mount_point; + if(!family) family = d->disk; + + + // -------------------------------------------------------------------------- + // Do performance metrics + if(d->do_io == CONFIG_BOOLEAN_YES || (d->do_io == CONFIG_BOOLEAN_AUTO && + (readsectors || writesectors || discardsectors || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + d->do_io = CONFIG_BOOLEAN_YES; + + if(unlikely(!d->st_io)) { + d->st_io = rrdset_create_localhost( + RRD_TYPE_DISK + , d->chart_id + , d->disk + , family + , "disk.io" + , "Disk I/O Bandwidth" 
+ , "KiB/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DISKSTATS_NAME + , NETDATA_CHART_PRIO_DISK_IO + , update_every + , RRDSET_TYPE_AREA + ); + + d->rd_io_reads = rrddim_add(d->st_io, "reads", NULL, d->sector_size, 1024, RRD_ALGORITHM_INCREMENTAL); + d->rd_io_writes = rrddim_add(d->st_io, "writes", NULL, d->sector_size * -1, 1024, RRD_ALGORITHM_INCREMENTAL); + + add_labels_to_disk(d, d->st_io); + } + + last_readsectors = rrddim_set_by_pointer(d->st_io, d->rd_io_reads, readsectors); + last_writesectors = rrddim_set_by_pointer(d->st_io, d->rd_io_writes, writesectors); + rrdset_done(d->st_io); + } + + if (do_dc_stats && d->do_io == CONFIG_BOOLEAN_YES && d->do_ext != CONFIG_BOOLEAN_NO) { + if (unlikely(!d->st_ext_io)) { + d->st_ext_io = rrdset_create_localhost( + "disk_ext" + , d->chart_id + , d->disk + , family + , "disk_ext.io" + , "Amount of Discarded Data" + , "KiB/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DISKSTATS_NAME + , NETDATA_CHART_PRIO_DISK_IO + 1 + , update_every + , RRDSET_TYPE_AREA + ); + + d->rd_io_discards = rrddim_add(d->st_ext_io, "discards", NULL, d->sector_size, 1024, RRD_ALGORITHM_INCREMENTAL); + + add_labels_to_disk(d, d->st_ext_io); + } + + last_discardsectors = rrddim_set_by_pointer(d->st_ext_io, d->rd_io_discards, discardsectors); + rrdset_done(d->st_ext_io); + } + + if(d->do_ops == CONFIG_BOOLEAN_YES || (d->do_ops == CONFIG_BOOLEAN_AUTO && + (reads || writes || discards || flushes || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + d->do_ops = CONFIG_BOOLEAN_YES; + + if(unlikely(!d->st_ops)) { + d->st_ops = rrdset_create_localhost( + "disk_ops" + , d->chart_id + , d->disk + , family + , "disk.ops" + , "Disk Completed I/O Operations" + , "operations/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DISKSTATS_NAME + , NETDATA_CHART_PRIO_DISK_OPS + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(d->st_ops, RRDSET_FLAG_DETAIL); + + d->rd_ops_reads = rrddim_add(d->st_ops, "reads", NULL, 1, 1, 
RRD_ALGORITHM_INCREMENTAL); + d->rd_ops_writes = rrddim_add(d->st_ops, "writes", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + + add_labels_to_disk(d, d->st_ops); + } + + last_reads = rrddim_set_by_pointer(d->st_ops, d->rd_ops_reads, reads); + last_writes = rrddim_set_by_pointer(d->st_ops, d->rd_ops_writes, writes); + rrdset_done(d->st_ops); + } + + if (do_dc_stats && d->do_ops == CONFIG_BOOLEAN_YES && d->do_ext != CONFIG_BOOLEAN_NO) { + if (unlikely(!d->st_ext_ops)) { + d->st_ext_ops = rrdset_create_localhost( + "disk_ext_ops" + , d->chart_id + , d->disk + , family + , "disk_ext.ops" + , "Disk Completed Extended I/O Operations" + , "operations/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DISKSTATS_NAME + , NETDATA_CHART_PRIO_DISK_OPS + 1 + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(d->st_ext_ops, RRDSET_FLAG_DETAIL); + + d->rd_ops_discards = rrddim_add(d->st_ext_ops, "discards", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + if (do_fl_stats) + d->rd_ops_flushes = rrddim_add(d->st_ext_ops, "flushes", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + + add_labels_to_disk(d, d->st_ext_ops); + } + + last_discards = rrddim_set_by_pointer(d->st_ext_ops, d->rd_ops_discards, discards); + if (do_fl_stats) + last_flushes = rrddim_set_by_pointer(d->st_ext_ops, d->rd_ops_flushes, flushes); + rrdset_done(d->st_ext_ops); + } + + if(d->do_qops == CONFIG_BOOLEAN_YES || (d->do_qops == CONFIG_BOOLEAN_AUTO && + (queued_ios || netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + d->do_qops = CONFIG_BOOLEAN_YES; + + if(unlikely(!d->st_qops)) { + d->st_qops = rrdset_create_localhost( + "disk_qops" + , d->chart_id + , d->disk + , family + , "disk.qops" + , "Disk Current I/O Operations" + , "operations" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DISKSTATS_NAME + , NETDATA_CHART_PRIO_DISK_QOPS + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(d->st_qops, RRDSET_FLAG_DETAIL); + + d->rd_qops_operations = rrddim_add(d->st_qops, "operations", NULL, 1, 1, 
RRD_ALGORITHM_ABSOLUTE); + + add_labels_to_disk(d, d->st_qops); + } + + rrddim_set_by_pointer(d->st_qops, d->rd_qops_operations, queued_ios); + rrdset_done(d->st_qops); + } + + if(d->do_backlog == CONFIG_BOOLEAN_YES || (d->do_backlog == CONFIG_BOOLEAN_AUTO && + (backlog_ms || netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + d->do_backlog = CONFIG_BOOLEAN_YES; + + if(unlikely(!d->st_backlog)) { + d->st_backlog = rrdset_create_localhost( + "disk_backlog" + , d->chart_id + , d->disk + , family + , "disk.backlog" + , "Disk Backlog" + , "milliseconds" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DISKSTATS_NAME + , NETDATA_CHART_PRIO_DISK_BACKLOG + , update_every + , RRDSET_TYPE_AREA + ); + + rrdset_flag_set(d->st_backlog, RRDSET_FLAG_DETAIL); + + d->rd_backlog_backlog = rrddim_add(d->st_backlog, "backlog", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + + add_labels_to_disk(d, d->st_backlog); + } + + rrddim_set_by_pointer(d->st_backlog, d->rd_backlog_backlog, backlog_ms); + rrdset_done(d->st_backlog); + } + + if(d->do_util == CONFIG_BOOLEAN_YES || (d->do_util == CONFIG_BOOLEAN_AUTO && + (busy_ms || netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + d->do_util = CONFIG_BOOLEAN_YES; + + if(unlikely(!d->st_busy)) { + d->st_busy = rrdset_create_localhost( + "disk_busy" + , d->chart_id + , d->disk + , family + , "disk.busy" + , "Disk Busy Time" + , "milliseconds" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DISKSTATS_NAME + , NETDATA_CHART_PRIO_DISK_BUSY + , update_every + , RRDSET_TYPE_AREA + ); + + rrdset_flag_set(d->st_busy, RRDSET_FLAG_DETAIL); + + d->rd_busy_busy = rrddim_add(d->st_busy, "busy", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + + add_labels_to_disk(d, d->st_busy); + } + + last_busy_ms = rrddim_set_by_pointer(d->st_busy, d->rd_busy_busy, busy_ms); + rrdset_done(d->st_busy); + + if(unlikely(!d->st_util)) { + d->st_util = rrdset_create_localhost( + "disk_util" + , d->chart_id + , d->disk + , family + , "disk.util" + , "Disk Utilization Time" + , "% of time 
working" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DISKSTATS_NAME + , NETDATA_CHART_PRIO_DISK_UTIL + , update_every + , RRDSET_TYPE_AREA + ); + + rrdset_flag_set(d->st_util, RRDSET_FLAG_DETAIL); + + d->rd_util_utilization = rrddim_add(d->st_util, "utilization", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + add_labels_to_disk(d, d->st_util); + } + + collected_number disk_utilization = (busy_ms - last_busy_ms) / (10 * update_every); + if (disk_utilization > 100) + disk_utilization = 100; + + rrddim_set_by_pointer(d->st_util, d->rd_util_utilization, disk_utilization); + rrdset_done(d->st_util); + } + + if(d->do_mops == CONFIG_BOOLEAN_YES || (d->do_mops == CONFIG_BOOLEAN_AUTO && + (mreads || mwrites || mdiscards || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + d->do_mops = CONFIG_BOOLEAN_YES; + + if(unlikely(!d->st_mops)) { + d->st_mops = rrdset_create_localhost( + "disk_mops" + , d->chart_id + , d->disk + , family + , "disk.mops" + , "Disk Merged Operations" + , "merged operations/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DISKSTATS_NAME + , NETDATA_CHART_PRIO_DISK_MOPS + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(d->st_mops, RRDSET_FLAG_DETAIL); + + d->rd_mops_reads = rrddim_add(d->st_mops, "reads", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + d->rd_mops_writes = rrddim_add(d->st_mops, "writes", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + + add_labels_to_disk(d, d->st_mops); + } + + rrddim_set_by_pointer(d->st_mops, d->rd_mops_reads, mreads); + rrddim_set_by_pointer(d->st_mops, d->rd_mops_writes, mwrites); + rrdset_done(d->st_mops); + } + + if(do_dc_stats && d->do_mops == CONFIG_BOOLEAN_YES && d->do_ext != CONFIG_BOOLEAN_NO) { + d->do_mops = CONFIG_BOOLEAN_YES; + + if(unlikely(!d->st_ext_mops)) { + d->st_ext_mops = rrdset_create_localhost( + "disk_ext_mops" + , d->chart_id + , d->disk + , family + , "disk_ext.mops" + , "Disk Merged Discard Operations" + , "merged operations/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DISKSTATS_NAME + , 
NETDATA_CHART_PRIO_DISK_MOPS + 1 + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(d->st_ext_mops, RRDSET_FLAG_DETAIL); + + d->rd_mops_discards = rrddim_add(d->st_ext_mops, "discards", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + + add_labels_to_disk(d, d->st_ext_mops); + } + + rrddim_set_by_pointer(d->st_ext_mops, d->rd_mops_discards, mdiscards); + rrdset_done(d->st_ext_mops); + } + + if(d->do_iotime == CONFIG_BOOLEAN_YES || (d->do_iotime == CONFIG_BOOLEAN_AUTO && + (readms || writems || discardms || flushms || netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + d->do_iotime = CONFIG_BOOLEAN_YES; + + if(unlikely(!d->st_iotime)) { + d->st_iotime = rrdset_create_localhost( + "disk_iotime" + , d->chart_id + , d->disk + , family + , "disk.iotime" + , "Disk Total I/O Time" + , "milliseconds/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DISKSTATS_NAME + , NETDATA_CHART_PRIO_DISK_IOTIME + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(d->st_iotime, RRDSET_FLAG_DETAIL); + + d->rd_iotime_reads = rrddim_add(d->st_iotime, "reads", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + d->rd_iotime_writes = rrddim_add(d->st_iotime, "writes", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + + add_labels_to_disk(d, d->st_iotime); + } + + last_readms = rrddim_set_by_pointer(d->st_iotime, d->rd_iotime_reads, readms); + last_writems = rrddim_set_by_pointer(d->st_iotime, d->rd_iotime_writes, writems); + rrdset_done(d->st_iotime); + } + + if(do_dc_stats && d->do_iotime == CONFIG_BOOLEAN_YES && d->do_ext != CONFIG_BOOLEAN_NO) { + if(unlikely(!d->st_ext_iotime)) { + d->st_ext_iotime = rrdset_create_localhost( + "disk_ext_iotime" + , d->chart_id + , d->disk + , family + , "disk_ext.iotime" + , "Disk Total I/O Time for Extended Operations" + , "milliseconds/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DISKSTATS_NAME + , NETDATA_CHART_PRIO_DISK_IOTIME + 1 + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(d->st_ext_iotime, RRDSET_FLAG_DETAIL); + + 
d->rd_iotime_discards = rrddim_add(d->st_ext_iotime, "discards", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + if (do_fl_stats) + d->rd_iotime_flushes = rrddim_add(d->st_ext_iotime, "flushes", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + + add_labels_to_disk(d, d->st_ext_iotime); + } + + last_discardms = rrddim_set_by_pointer(d->st_ext_iotime, d->rd_iotime_discards, discardms); + if (do_fl_stats) + last_flushms = rrddim_set_by_pointer(d->st_ext_iotime, d->rd_iotime_flushes, flushms); + rrdset_done(d->st_ext_iotime); + } + + // calculate differential charts + // only if this is not the first time we run + + if(likely(dt)) { + if( (d->do_iotime == CONFIG_BOOLEAN_YES || (d->do_iotime == CONFIG_BOOLEAN_AUTO && + (readms || writems || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) && + (d->do_ops == CONFIG_BOOLEAN_YES || (d->do_ops == CONFIG_BOOLEAN_AUTO && + (reads || writes || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES)))) { + + if(unlikely(!d->st_await)) { + d->st_await = rrdset_create_localhost( + "disk_await" + , d->chart_id + , d->disk + , family + , "disk.await" + , "Average Completed I/O Operation Time" + , "milliseconds/operation" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DISKSTATS_NAME + , NETDATA_CHART_PRIO_DISK_AWAIT + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(d->st_await, RRDSET_FLAG_DETAIL); + + d->rd_await_reads = rrddim_add(d->st_await, "reads", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + d->rd_await_writes = rrddim_add(d->st_await, "writes", NULL, -1, 1, RRD_ALGORITHM_ABSOLUTE); + + add_labels_to_disk(d, d->st_await); + } + + rrddim_set_by_pointer(d->st_await, d->rd_await_reads, (reads - last_reads) ? (readms - last_readms) / (reads - last_reads) : 0); + rrddim_set_by_pointer(d->st_await, d->rd_await_writes, (writes - last_writes) ? 
(writems - last_writems) / (writes - last_writes) : 0); + rrdset_done(d->st_await); + } + + if (do_dc_stats && d->do_iotime == CONFIG_BOOLEAN_YES && d->do_ops == CONFIG_BOOLEAN_YES && d->do_ext != CONFIG_BOOLEAN_NO) { + if(unlikely(!d->st_ext_await)) { + d->st_ext_await = rrdset_create_localhost( + "disk_ext_await" + , d->chart_id + , d->disk + , family + , "disk_ext.await" + , "Average Completed Extended I/O Operation Time" + , "milliseconds/operation" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DISKSTATS_NAME + , NETDATA_CHART_PRIO_DISK_AWAIT + 1 + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(d->st_ext_await, RRDSET_FLAG_DETAIL); + + d->rd_await_discards = rrddim_add(d->st_ext_await, "discards", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + if (do_fl_stats) + d->rd_await_flushes = rrddim_add(d->st_ext_await, "flushes", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + add_labels_to_disk(d, d->st_ext_await); + } + + rrddim_set_by_pointer( + d->st_ext_await, d->rd_await_discards, + (discards - last_discards) ? (discardms - last_discardms) / (discards - last_discards) : 0); + + if (do_fl_stats) + rrddim_set_by_pointer( + d->st_ext_await, d->rd_await_flushes, + (flushes - last_flushes) ? 
(flushms - last_flushms) / (flushes - last_flushes) : 0); + + rrdset_done(d->st_ext_await); + } + + if( (d->do_io == CONFIG_BOOLEAN_YES || (d->do_io == CONFIG_BOOLEAN_AUTO && + (readsectors || writesectors || netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) && + (d->do_ops == CONFIG_BOOLEAN_YES || (d->do_ops == CONFIG_BOOLEAN_AUTO && + (reads || writes || netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES)))) { + + if(unlikely(!d->st_avgsz)) { + d->st_avgsz = rrdset_create_localhost( + "disk_avgsz" + , d->chart_id + , d->disk + , family + , "disk.avgsz" + , "Average Completed I/O Operation Bandwidth" + , "KiB/operation" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DISKSTATS_NAME + , NETDATA_CHART_PRIO_DISK_AVGSZ + , update_every + , RRDSET_TYPE_AREA + ); + + rrdset_flag_set(d->st_avgsz, RRDSET_FLAG_DETAIL); + + d->rd_avgsz_reads = rrddim_add(d->st_avgsz, "reads", NULL, d->sector_size, 1024, RRD_ALGORITHM_ABSOLUTE); + d->rd_avgsz_writes = rrddim_add(d->st_avgsz, "writes", NULL, d->sector_size * -1, 1024, RRD_ALGORITHM_ABSOLUTE); + + add_labels_to_disk(d, d->st_avgsz); + } + + rrddim_set_by_pointer(d->st_avgsz, d->rd_avgsz_reads, (reads - last_reads) ? (readsectors - last_readsectors) / (reads - last_reads) : 0); + rrddim_set_by_pointer(d->st_avgsz, d->rd_avgsz_writes, (writes - last_writes) ? 
(writesectors - last_writesectors) / (writes - last_writes) : 0); + rrdset_done(d->st_avgsz); + } + + if(do_dc_stats && d->do_io == CONFIG_BOOLEAN_YES && d->do_ops == CONFIG_BOOLEAN_YES && d->do_ext != CONFIG_BOOLEAN_NO) { + if(unlikely(!d->st_ext_avgsz)) { + d->st_ext_avgsz = rrdset_create_localhost( + "disk_ext_avgsz" + , d->chart_id + , d->disk + , family + , "disk_ext.avgsz" + , "Average Amount of Discarded Data" + , "KiB/operation" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DISKSTATS_NAME + , NETDATA_CHART_PRIO_DISK_AVGSZ + , update_every + , RRDSET_TYPE_AREA + ); + + rrdset_flag_set(d->st_ext_avgsz, RRDSET_FLAG_DETAIL); + + d->rd_avgsz_discards = rrddim_add(d->st_ext_avgsz, "discards", NULL, d->sector_size, 1024, RRD_ALGORITHM_ABSOLUTE); + + add_labels_to_disk(d, d->st_ext_avgsz); + } + + rrddim_set_by_pointer( + d->st_ext_avgsz, d->rd_avgsz_discards, + (discards - last_discards) ? (discardsectors - last_discardsectors) / (discards - last_discards) : + 0); + rrdset_done(d->st_ext_avgsz); + } + + if( (d->do_util == CONFIG_BOOLEAN_YES || (d->do_util == CONFIG_BOOLEAN_AUTO && + (busy_ms || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) && + (d->do_ops == CONFIG_BOOLEAN_YES || (d->do_ops == CONFIG_BOOLEAN_AUTO && + (reads || writes || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES)))) { + + if(unlikely(!d->st_svctm)) { + d->st_svctm = rrdset_create_localhost( + "disk_svctm" + , d->chart_id + , d->disk + , family + , "disk.svctm" + , "Average Service Time" + , "milliseconds/operation" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DISKSTATS_NAME + , NETDATA_CHART_PRIO_DISK_SVCTM + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(d->st_svctm, RRDSET_FLAG_DETAIL); + + d->rd_svctm_svctm = rrddim_add(d->st_svctm, "svctm", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + add_labels_to_disk(d, d->st_svctm); + } + + rrddim_set_by_pointer(d->st_svctm, d->rd_svctm_svctm, ((reads - last_reads) + (writes - last_writes)) ? 
(busy_ms - last_busy_ms) / ((reads - last_reads) + (writes - last_writes)) : 0); + rrdset_done(d->st_svctm); + } + } + + // read bcache metrics and generate the bcache charts + + if(d->device_is_bcache && d->do_bcache != CONFIG_BOOLEAN_NO) { + unsigned long long int + stats_total_cache_bypass_hits = 0, + stats_total_cache_bypass_misses = 0, + stats_total_cache_hits = 0, + stats_total_cache_miss_collisions = 0, + stats_total_cache_misses = 0, + stats_five_minute_cache_hit_ratio = 0, + stats_hour_cache_hit_ratio = 0, + stats_day_cache_hit_ratio = 0, + stats_total_cache_hit_ratio = 0, + cache_available_percent = 0, + cache_readaheads = 0, + cache_read_races = 0, + cache_io_errors = 0, + cache_congested = 0, + dirty_data = 0, + writeback_rate = 0; + + // read the bcache values + + if(d->bcache_filename_dirty_data) + dirty_data = bcache_read_number_with_units(d->bcache_filename_dirty_data); + + if(d->bcache_filename_writeback_rate) + writeback_rate = bcache_read_number_with_units(d->bcache_filename_writeback_rate); + + if(d->bcache_filename_cache_congested) + cache_congested = bcache_read_number_with_units(d->bcache_filename_cache_congested); + + if(d->bcache_filename_cache_available_percent) + read_single_number_file(d->bcache_filename_cache_available_percent, &cache_available_percent); + + if(d->bcache_filename_stats_five_minute_cache_hit_ratio) + read_single_number_file(d->bcache_filename_stats_five_minute_cache_hit_ratio, &stats_five_minute_cache_hit_ratio); + + if(d->bcache_filename_stats_hour_cache_hit_ratio) + read_single_number_file(d->bcache_filename_stats_hour_cache_hit_ratio, &stats_hour_cache_hit_ratio); + + if(d->bcache_filename_stats_day_cache_hit_ratio) + read_single_number_file(d->bcache_filename_stats_day_cache_hit_ratio, &stats_day_cache_hit_ratio); + + if(d->bcache_filename_stats_total_cache_hit_ratio) + read_single_number_file(d->bcache_filename_stats_total_cache_hit_ratio, &stats_total_cache_hit_ratio); + + 
if(d->bcache_filename_stats_total_cache_hits) + read_single_number_file(d->bcache_filename_stats_total_cache_hits, &stats_total_cache_hits); + + if(d->bcache_filename_stats_total_cache_misses) + read_single_number_file(d->bcache_filename_stats_total_cache_misses, &stats_total_cache_misses); + + if(d->bcache_filename_stats_total_cache_miss_collisions) + read_single_number_file(d->bcache_filename_stats_total_cache_miss_collisions, &stats_total_cache_miss_collisions); + + if(d->bcache_filename_stats_total_cache_bypass_hits) + read_single_number_file(d->bcache_filename_stats_total_cache_bypass_hits, &stats_total_cache_bypass_hits); + + if(d->bcache_filename_stats_total_cache_bypass_misses) + read_single_number_file(d->bcache_filename_stats_total_cache_bypass_misses, &stats_total_cache_bypass_misses); + + if(d->bcache_filename_stats_total_cache_readaheads) + cache_readaheads = bcache_read_number_with_units(d->bcache_filename_stats_total_cache_readaheads); + + if(d->bcache_filename_cache_read_races) + read_single_number_file(d->bcache_filename_cache_read_races, &cache_read_races); + + if(d->bcache_filename_cache_io_errors) + read_single_number_file(d->bcache_filename_cache_io_errors, &cache_io_errors); + + if(d->bcache_filename_priority_stats && global_bcache_priority_stats_update_every >= 1) + bcache_read_priority_stats(d, family, global_bcache_priority_stats_update_every, dt); + + // update the charts + + { + if(unlikely(!d->st_bcache_hit_ratio)) { + d->st_bcache_hit_ratio = rrdset_create_localhost( + "disk_bcache_hit_ratio" + , d->chart_id + , d->disk + , family + , "disk.bcache_hit_ratio" + , "BCache Cache Hit Ratio" + , "percentage" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DISKSTATS_NAME + , NETDATA_CHART_PRIO_BCACHE_HIT_RATIO + , update_every + , RRDSET_TYPE_LINE + ); + + d->rd_bcache_hit_ratio_5min = rrddim_add(d->st_bcache_hit_ratio, "5min", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + d->rd_bcache_hit_ratio_1hour = rrddim_add(d->st_bcache_hit_ratio, "1hour", NULL, 
1, 1, RRD_ALGORITHM_ABSOLUTE); + d->rd_bcache_hit_ratio_1day = rrddim_add(d->st_bcache_hit_ratio, "1day", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + d->rd_bcache_hit_ratio_total = rrddim_add(d->st_bcache_hit_ratio, "ever", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + add_labels_to_disk(d, d->st_bcache_hit_ratio); + } + + rrddim_set_by_pointer(d->st_bcache_hit_ratio, d->rd_bcache_hit_ratio_5min, stats_five_minute_cache_hit_ratio); + rrddim_set_by_pointer(d->st_bcache_hit_ratio, d->rd_bcache_hit_ratio_1hour, stats_hour_cache_hit_ratio); + rrddim_set_by_pointer(d->st_bcache_hit_ratio, d->rd_bcache_hit_ratio_1day, stats_day_cache_hit_ratio); + rrddim_set_by_pointer(d->st_bcache_hit_ratio, d->rd_bcache_hit_ratio_total, stats_total_cache_hit_ratio); + rrdset_done(d->st_bcache_hit_ratio); + } + + { + + if(unlikely(!d->st_bcache_rates)) { + d->st_bcache_rates = rrdset_create_localhost( + "disk_bcache_rates" + , d->chart_id + , d->disk + , family + , "disk.bcache_rates" + , "BCache Rates" + , "KiB/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DISKSTATS_NAME + , NETDATA_CHART_PRIO_BCACHE_RATES + , update_every + , RRDSET_TYPE_AREA + ); + + d->rd_bcache_rate_congested = rrddim_add(d->st_bcache_rates, "congested", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + d->rd_bcache_rate_writeback = rrddim_add(d->st_bcache_rates, "writeback", NULL, -1, 1024, RRD_ALGORITHM_ABSOLUTE); + + add_labels_to_disk(d, d->st_bcache_rates); + } + + rrddim_set_by_pointer(d->st_bcache_rates, d->rd_bcache_rate_writeback, writeback_rate); + rrddim_set_by_pointer(d->st_bcache_rates, d->rd_bcache_rate_congested, cache_congested); + rrdset_done(d->st_bcache_rates); + } + + { + if(unlikely(!d->st_bcache_size)) { + d->st_bcache_size = rrdset_create_localhost( + "disk_bcache_size" + , d->chart_id + , d->disk + , family + , "disk.bcache_size" + , "BCache Cache Sizes" + , "MiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DISKSTATS_NAME + , NETDATA_CHART_PRIO_BCACHE_SIZE + , update_every + , RRDSET_TYPE_AREA + ); + + 
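Throughout these chart definitions, the two numeric arguments of `rrddim_add()` form a multiplier/divisor pair that converts the raw collected value into the chart's display units: dirty bytes with divisor `1024 * 1024` render as MiB, and sector counts with multiplier `d->sector_size` and divisor 1024 render as KiB. A sketch of that scaling convention (hypothetical helper, for illustration only):

```c
#include <assert.h>

// The RRD convention as used by the rrddim_add() calls here: the stored
// value is raw, and display units are raw * multiplier / divisor.
// Hypothetical helper for illustration.
static double scale_collected(long long raw, long long multiplier, long long divisor) {
    return (double)raw * (double)multiplier / (double)divisor;
}
```

For example, 1048576 dirty bytes with multiplier 1 and divisor 1024 * 1024 display as 1 MiB, and 8 sectors of 512 bytes with divisor 1024 display as 4 KiB.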
d->rd_bcache_dirty_size = rrddim_add(d->st_bcache_size, "dirty", NULL, 1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE); + + add_labels_to_disk(d, d->st_bcache_size); + } + + rrddim_set_by_pointer(d->st_bcache_size, d->rd_bcache_dirty_size, dirty_data); + rrdset_done(d->st_bcache_size); + } + + { + if(unlikely(!d->st_bcache_usage)) { + d->st_bcache_usage = rrdset_create_localhost( + "disk_bcache_usage" + , d->chart_id + , d->disk + , family + , "disk.bcache_usage" + , "BCache Cache Usage" + , "percentage" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DISKSTATS_NAME + , NETDATA_CHART_PRIO_BCACHE_USAGE + , update_every + , RRDSET_TYPE_AREA + ); + + d->rd_bcache_available_percent = rrddim_add(d->st_bcache_usage, "avail", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + add_labels_to_disk(d, d->st_bcache_usage); + } + + rrddim_set_by_pointer(d->st_bcache_usage, d->rd_bcache_available_percent, cache_available_percent); + rrdset_done(d->st_bcache_usage); + } + + { + + if(unlikely(!d->st_bcache_cache_read_races)) { + d->st_bcache_cache_read_races = rrdset_create_localhost( + "disk_bcache_cache_read_races" + , d->chart_id + , d->disk + , family + , "disk.bcache_cache_read_races" + , "BCache Cache Read Races" + , "operations/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DISKSTATS_NAME + , NETDATA_CHART_PRIO_BCACHE_CACHE_READ_RACES + , update_every + , RRDSET_TYPE_LINE + ); + + d->rd_bcache_cache_read_races = rrddim_add(d->st_bcache_cache_read_races, "races", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + d->rd_bcache_cache_io_errors = rrddim_add(d->st_bcache_cache_read_races, "errors", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + + add_labels_to_disk(d, d->st_bcache_cache_read_races); + } + + rrddim_set_by_pointer(d->st_bcache_cache_read_races, d->rd_bcache_cache_read_races, cache_read_races); + rrddim_set_by_pointer(d->st_bcache_cache_read_races, d->rd_bcache_cache_io_errors, cache_io_errors); + rrdset_done(d->st_bcache_cache_read_races); + } + + if(d->do_bcache == CONFIG_BOOLEAN_YES || 
(d->do_bcache == CONFIG_BOOLEAN_AUTO && + (stats_total_cache_hits || + stats_total_cache_misses || + stats_total_cache_miss_collisions || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + + if(unlikely(!d->st_bcache)) { + d->st_bcache = rrdset_create_localhost( + "disk_bcache" + , d->chart_id + , d->disk + , family + , "disk.bcache" + , "BCache Cache I/O Operations" + , "operations/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DISKSTATS_NAME + , NETDATA_CHART_PRIO_BCACHE_OPS + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(d->st_bcache, RRDSET_FLAG_DETAIL); + + d->rd_bcache_hits = rrddim_add(d->st_bcache, "hits", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + d->rd_bcache_misses = rrddim_add(d->st_bcache, "misses", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + d->rd_bcache_miss_collisions = rrddim_add(d->st_bcache, "collisions", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + d->rd_bcache_readaheads = rrddim_add(d->st_bcache, "readaheads", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + + add_labels_to_disk(d, d->st_bcache); + } + + rrddim_set_by_pointer(d->st_bcache, d->rd_bcache_hits, stats_total_cache_hits); + rrddim_set_by_pointer(d->st_bcache, d->rd_bcache_misses, stats_total_cache_misses); + rrddim_set_by_pointer(d->st_bcache, d->rd_bcache_miss_collisions, stats_total_cache_miss_collisions); + rrddim_set_by_pointer(d->st_bcache, d->rd_bcache_readaheads, cache_readaheads); + rrdset_done(d->st_bcache); + } + + if(d->do_bcache == CONFIG_BOOLEAN_YES || (d->do_bcache == CONFIG_BOOLEAN_AUTO && + (stats_total_cache_bypass_hits || + stats_total_cache_bypass_misses || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + + if(unlikely(!d->st_bcache_bypass)) { + d->st_bcache_bypass = rrdset_create_localhost( + "disk_bcache_bypass" + , d->chart_id + , d->disk + , family + , "disk.bcache_bypass" + , "BCache Cache Bypass I/O Operations" + , "operations/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DISKSTATS_NAME + , NETDATA_CHART_PRIO_BCACHE_BYPASS + , 
update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(d->st_bcache_bypass, RRDSET_FLAG_DETAIL); + + d->rd_bcache_bypass_hits = rrddim_add(d->st_bcache_bypass, "hits", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + d->rd_bcache_bypass_misses = rrddim_add(d->st_bcache_bypass, "misses", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + + add_labels_to_disk(d, d->st_bcache_bypass); + } + + rrddim_set_by_pointer(d->st_bcache_bypass, d->rd_bcache_bypass_hits, stats_total_cache_bypass_hits); + rrddim_set_by_pointer(d->st_bcache_bypass, d->rd_bcache_bypass_misses, stats_total_cache_bypass_misses); + rrdset_done(d->st_bcache_bypass); + } + } + + d->function_ready = !d->excluded; + } + + diskstats_cleanup_disks(); + + netdata_mutex_unlock(&diskstats_dev_mutex); + // update the system total I/O + + if(global_do_io == CONFIG_BOOLEAN_YES || (global_do_io == CONFIG_BOOLEAN_AUTO && + (system_read_kb || system_write_kb || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + static RRDSET *st_io = NULL; + static RRDDIM *rd_in = NULL, *rd_out = NULL; + + if(unlikely(!st_io)) { + st_io = rrdset_create_localhost( + "system" + , "io" + , NULL + , "disk" + , NULL + , "Disk I/O" + , "KiB/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DISKSTATS_NAME + , NETDATA_CHART_PRIO_SYSTEM_IO + , update_every + , RRDSET_TYPE_AREA + ); + + rd_in = rrddim_add(st_io, "in", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_out = rrddim_add(st_io, "out", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_io, rd_in, system_read_kb); + rrddim_set_by_pointer(st_io, rd_out, system_write_kb); + rrdset_done(st_io); + } + + return 0; +} diff --git a/src/collectors/proc.plugin/proc_interrupts.c b/src/collectors/proc.plugin/proc_interrupts.c new file mode 100644 index 000000000..aa9bd0eb5 --- /dev/null +++ b/src/collectors/proc.plugin/proc_interrupts.c @@ -0,0 +1,245 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +#define PLUGIN_PROC_MODULE_INTERRUPTS_NAME 
"/proc/interrupts" +#define CONFIG_SECTION_PLUGIN_PROC_INTERRUPTS "plugin:" PLUGIN_PROC_CONFIG_NAME ":" PLUGIN_PROC_MODULE_INTERRUPTS_NAME + +#define MAX_INTERRUPT_NAME 50 + +struct cpu_interrupt { + unsigned long long value; + RRDDIM *rd; +}; + +struct interrupt { + int used; + char *id; + char name[MAX_INTERRUPT_NAME + 1]; + RRDDIM *rd; + unsigned long long total; + struct cpu_interrupt cpu[]; +}; + +// since each interrupt is variable in size +// we use this to calculate its record size +#define recordsize(cpus) (sizeof(struct interrupt) + ((cpus) * sizeof(struct cpu_interrupt))) + +// given a base, get a pointer to each record +#define irrindex(base, line, cpus) ((struct interrupt *)&((char *)(base))[(line) * recordsize(cpus)]) + +static inline struct interrupt *get_interrupts_array(size_t lines, int cpus) { + static struct interrupt *irrs = NULL; + static size_t allocated = 0; + + if(unlikely(lines != allocated)) { + size_t l; + int c; + + irrs = (struct interrupt *)reallocz(irrs, lines * recordsize(cpus)); + + // reset all interrupt RRDDIM pointers as any line could have shifted + for(l = 0; l < lines ;l++) { + struct interrupt *irr = irrindex(irrs, l, cpus); + irr->rd = NULL; + irr->name[0] = '\0'; + for(c = 0; c < cpus ;c++) + irr->cpu[c].rd = NULL; + } + + allocated = lines; + } + + return irrs; +} + +int do_proc_interrupts(int update_every, usec_t dt) { + (void)dt; + static procfile *ff = NULL; + static int cpus = -1, do_per_core = CONFIG_BOOLEAN_INVALID; + struct interrupt *irrs = NULL; + + if(unlikely(do_per_core == CONFIG_BOOLEAN_INVALID)) + do_per_core = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_INTERRUPTS, "interrupts per core", CONFIG_BOOLEAN_NO); + + if(unlikely(!ff)) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/interrupts"); + ff = procfile_open(config_get(CONFIG_SECTION_PLUGIN_PROC_INTERRUPTS, "filename to monitor", filename), " \t:", PROCFILE_FLAG_DEFAULT); + 
} + if(unlikely(!ff)) + return 1; + + ff = procfile_readall(ff); + if(unlikely(!ff)) + return 0; // we return 0, so that we will retry to open it next time + + size_t lines = procfile_lines(ff), l; + size_t words = procfile_linewords(ff, 0); + + if(unlikely(!lines)) { + collector_error("Cannot read /proc/interrupts, zero lines reported."); + return 1; + } + + // find how many CPUs are there + if(unlikely(cpus == -1)) { + uint32_t w; + cpus = 0; + for(w = 0; w < words ; w++) { + if(likely(strncmp(procfile_lineword(ff, 0, w), "CPU", 3) == 0)) + cpus++; + } + } + + if(unlikely(!cpus)) { + collector_error("PLUGIN: PROC_INTERRUPTS: Cannot find the number of CPUs in /proc/interrupts"); + return 1; + } + + // allocate the size we need; + irrs = get_interrupts_array(lines, cpus); + irrs[0].used = 0; + + // loop through all lines + for(l = 1; l < lines ;l++) { + struct interrupt *irr = irrindex(irrs, l, cpus); + irr->used = 0; + irr->total = 0; + + words = procfile_linewords(ff, l); + if(unlikely(!words)) continue; + + irr->id = procfile_lineword(ff, l, 0); + if(unlikely(!irr->id || !irr->id[0])) continue; + + size_t idlen = strlen(irr->id); + if(irr->id[idlen - 1] == ':') + irr->id[--idlen] = '\0'; + + int c; + for(c = 0; c < cpus ;c++) { + if(likely((c + 1) < (int)words)) + irr->cpu[c].value = str2ull(procfile_lineword(ff, l, (uint32_t) (c + 1)), NULL); + else + irr->cpu[c].value = 0; + + irr->total += irr->cpu[c].value; + } + + if(unlikely(isdigit(irr->id[0]) && (uint32_t)(cpus + 2) < words)) { + strncpyz(irr->name, procfile_lineword(ff, l, words - 1), MAX_INTERRUPT_NAME); + size_t nlen = strlen(irr->name); + if(likely(nlen + 1 + idlen <= MAX_INTERRUPT_NAME)) { + irr->name[nlen] = '_'; + strncpyz(&irr->name[nlen + 1], irr->id, MAX_INTERRUPT_NAME - nlen - 1); + } + else { + irr->name[MAX_INTERRUPT_NAME - idlen - 1] = '_'; + strncpyz(&irr->name[MAX_INTERRUPT_NAME - idlen], irr->id, idlen); + } + } + else { + strncpyz(irr->name, irr->id, MAX_INTERRUPT_NAME); + } + + 
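The `recordsize()`/`irrindex()` macros used in this loop exist because `struct interrupt` ends in a flexible array member (`cpu[]`): each record's true size depends on the CPU count, so plain `struct interrupt *` pointer arithmetic cannot step through the array and indexing must be done in bytes. A standalone sketch of the same technique, with hypothetical names:

```c
#include <assert.h>
#include <stdlib.h>

// Variable-size records: a fixed header plus one slot per CPU, laid out
// back to back in one allocation. Indexing is done in bytes, exactly as
// the recordsize()/irrindex() macros do. Names here are hypothetical.
struct slot { unsigned long long value; };
struct rec  { unsigned long long total; struct slot cpu[]; };

#define RECSIZE(cpus)  (sizeof(struct rec) + (size_t)(cpus) * sizeof(struct slot))
#define RECINDEX(base, line, cpus) \
    ((struct rec *)&((char *)(base))[(size_t)(line) * RECSIZE(cpus)])
```

This is also why `get_interrupts_array()` resets every `rd` pointer after a `reallocz()`: the records move as one block, and any cached dimension pointer inside them would be stale.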
irr->used = 1; + } + + static RRDSET *st_system_interrupts = NULL; + if(unlikely(!st_system_interrupts)) + st_system_interrupts = rrdset_create_localhost( + "system" + , "interrupts" + , NULL + , "interrupts" + , NULL + , "System interrupts" + , "interrupts/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_INTERRUPTS_NAME + , NETDATA_CHART_PRIO_SYSTEM_INTERRUPTS + , update_every + , RRDSET_TYPE_STACKED + ); + + for(l = 0; l < lines ;l++) { + struct interrupt *irr = irrindex(irrs, l, cpus); + if(irr->used && irr->total) { + // some interrupt may have changed without changing the total number of lines + // if the same number of interrupts have been added and removed between two + // calls of this function. + if(unlikely(!irr->rd || strncmp(rrddim_name(irr->rd), irr->name, MAX_INTERRUPT_NAME) != 0)) { + irr->rd = rrddim_add(st_system_interrupts, irr->id, irr->name, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rrddim_reset_name(st_system_interrupts, irr->rd, irr->name); + + // also reset per cpu RRDDIMs to avoid repeating strncmp() in the per core loop + if(likely(do_per_core != CONFIG_BOOLEAN_NO)) { + int c; + for(c = 0; c < cpus; c++) irr->cpu[c].rd = NULL; + } + } + + rrddim_set_by_pointer(st_system_interrupts, irr->rd, irr->total); + } + } + + rrdset_done(st_system_interrupts); + + if(likely(do_per_core != CONFIG_BOOLEAN_NO)) { + static RRDSET **core_st = NULL; + static int old_cpus = 0; + + if(old_cpus < cpus) { + core_st = reallocz(core_st, sizeof(RRDSET *) * cpus); + memset(&core_st[old_cpus], 0, sizeof(RRDSET *) * (cpus - old_cpus)); + old_cpus = cpus; + } + + int c; + + for(c = 0; c < cpus ;c++) { + if(unlikely(!core_st[c])) { + char id[50+1]; + snprintfz(id, sizeof(id) - 1, "cpu%d_interrupts", c); + + char title[100+1]; + snprintfz(title, sizeof(title) - 1, "CPU Interrupts"); + core_st[c] = rrdset_create_localhost( + "cpu" + , id + , NULL + , "interrupts" + , "cpu.interrupts" + , title + , "interrupts/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_INTERRUPTS_NAME + , 
NETDATA_CHART_PRIO_INTERRUPTS_PER_CORE + c + , update_every + , RRDSET_TYPE_STACKED + ); + + char core[50+1]; + snprintfz(core, sizeof(core) - 1, "cpu%d", c); + rrdlabels_add(core_st[c]->rrdlabels, "cpu", core, RRDLABEL_SRC_AUTO); + } + + for(l = 0; l < lines ;l++) { + struct interrupt *irr = irrindex(irrs, l, cpus); + if(irr->used && (do_per_core == CONFIG_BOOLEAN_YES || irr->cpu[c].value)) { + if(unlikely(!irr->cpu[c].rd)) { + irr->cpu[c].rd = rrddim_add(core_st[c], irr->id, irr->name, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rrddim_reset_name(core_st[c], irr->cpu[c].rd, irr->name); + } + + rrddim_set_by_pointer(core_st[c], irr->cpu[c].rd, irr->cpu[c].value); + } + } + + rrdset_done(core_st[c]); + } + } + + return 0; +} diff --git a/src/collectors/proc.plugin/proc_loadavg.c b/src/collectors/proc.plugin/proc_loadavg.c new file mode 100644 index 000000000..c9339525e --- /dev/null +++ b/src/collectors/proc.plugin/proc_loadavg.c @@ -0,0 +1,126 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +#define PLUGIN_PROC_MODULE_LOADAVG_NAME "/proc/loadavg" +#define CONFIG_SECTION_PLUGIN_PROC_LOADAVG "plugin:" PLUGIN_PROC_CONFIG_NAME ":" PLUGIN_PROC_MODULE_LOADAVG_NAME + +// linux calculates this once every 5 seconds +#define MIN_LOADAVG_UPDATE_EVERY 5 + +int do_proc_loadavg(int update_every, usec_t dt) { + static procfile *ff = NULL; + static int do_loadavg = -1, do_all_processes = -1; + static usec_t next_loadavg_dt = 0; + + if(unlikely(!ff)) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/loadavg"); + + ff = procfile_open(config_get(CONFIG_SECTION_PLUGIN_PROC_LOADAVG, "filename to monitor", filename), " \t,:|/", PROCFILE_FLAG_DEFAULT); + if(unlikely(!ff)) + return 1; + } + + ff = procfile_readall(ff); + if(unlikely(!ff)) + return 0; // we return 0, so that we will retry to open it next time + + if(unlikely(do_loadavg == -1)) { + do_loadavg = 
config_get_boolean(CONFIG_SECTION_PLUGIN_PROC_LOADAVG, "enable load average", 1); + do_all_processes = config_get_boolean(CONFIG_SECTION_PLUGIN_PROC_LOADAVG, "enable total processes", 1); + } + + if(unlikely(procfile_lines(ff) < 1)) { + collector_error("/proc/loadavg has no lines."); + return 1; + } + if(unlikely(procfile_linewords(ff, 0) < 6)) { + collector_error("/proc/loadavg has less than 6 words in it."); + return 1; + } + + double load1 = strtod(procfile_lineword(ff, 0, 0), NULL); + double load5 = strtod(procfile_lineword(ff, 0, 1), NULL); + double load15 = strtod(procfile_lineword(ff, 0, 2), NULL); + + //unsigned long long running_processes = str2ull(procfile_lineword(ff, 0, 3)); + unsigned long long active_processes = str2ull(procfile_lineword(ff, 0, 4), NULL); + + //get system pid_max + unsigned long long max_processes = get_system_pid_max(); + // + //unsigned long long next_pid = str2ull(procfile_lineword(ff, 0, 5)); + + if(next_loadavg_dt <= dt) { + if(likely(do_loadavg)) { + static RRDSET *load_chart = NULL; + static RRDDIM *rd_load1 = NULL, *rd_load5 = NULL, *rd_load15 = NULL; + + if(unlikely(!load_chart)) { + load_chart = rrdset_create_localhost( + "system" + , "load" + , NULL + , "load" + , NULL + , "System Load Average" + , "load" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_LOADAVG_NAME + , NETDATA_CHART_PRIO_SYSTEM_LOAD + , (update_every < MIN_LOADAVG_UPDATE_EVERY) ? 
MIN_LOADAVG_UPDATE_EVERY : update_every + , RRDSET_TYPE_LINE + ); + + rd_load1 = rrddim_add(load_chart, "load1", NULL, 1, 1000, RRD_ALGORITHM_ABSOLUTE); + rd_load5 = rrddim_add(load_chart, "load5", NULL, 1, 1000, RRD_ALGORITHM_ABSOLUTE); + rd_load15 = rrddim_add(load_chart, "load15", NULL, 1, 1000, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(load_chart, rd_load1, (collected_number) (load1 * 1000)); + rrddim_set_by_pointer(load_chart, rd_load5, (collected_number) (load5 * 1000)); + rrddim_set_by_pointer(load_chart, rd_load15, (collected_number) (load15 * 1000)); + rrdset_done(load_chart); + + next_loadavg_dt = load_chart->update_every * USEC_PER_SEC; + } + else + next_loadavg_dt = MIN_LOADAVG_UPDATE_EVERY * USEC_PER_SEC; + } + else + next_loadavg_dt -= dt; + + + if(likely(do_all_processes)) { + static RRDSET *processes_chart = NULL; + static RRDDIM *rd_active = NULL; + static const RRDVAR_ACQUIRED *rd_pidmax; + + if(unlikely(!processes_chart)) { + processes_chart = rrdset_create_localhost( + "system" + , "active_processes" + , NULL + , "processes" + , NULL + , "System Active Processes" + , "processes" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_LOADAVG_NAME + , NETDATA_CHART_PRIO_SYSTEM_ACTIVE_PROCESSES + , update_every + , RRDSET_TYPE_LINE + ); + + rd_active = rrddim_add(processes_chart, "active", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + rd_pidmax = rrdvar_chart_variable_add_and_acquire(processes_chart, "pidmax"); + } + + rrddim_set_by_pointer(processes_chart, rd_active, active_processes); + rrdvar_chart_variable_set(processes_chart, rd_pidmax, max_processes); + rrdset_done(processes_chart); + } + + return 0; +} diff --git a/src/collectors/proc.plugin/proc_mdstat.c b/src/collectors/proc.plugin/proc_mdstat.c new file mode 100644 index 000000000..3857d9ec4 --- /dev/null +++ b/src/collectors/proc.plugin/proc_mdstat.c @@ -0,0 +1,640 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +#define PLUGIN_PROC_MODULE_MDSTAT_NAME 
"/proc/mdstat" + +struct raid { + int redundant; + char *name; + uint32_t hash; + char *level; + + RRDDIM *rd_health; + unsigned long long failed_disks; + + RRDSET *st_disks; + RRDDIM *rd_down; + RRDDIM *rd_inuse; + unsigned long long total_disks; + unsigned long long inuse_disks; + + RRDSET *st_operation; + RRDDIM *rd_check; + RRDDIM *rd_resync; + RRDDIM *rd_recovery; + RRDDIM *rd_reshape; + unsigned long long check; + unsigned long long resync; + unsigned long long recovery; + unsigned long long reshape; + + RRDSET *st_finish; + RRDDIM *rd_finish_in; + unsigned long long finish_in; + + RRDSET *st_speed; + RRDDIM *rd_speed; + unsigned long long speed; + + char *mismatch_cnt_filename; + RRDSET *st_mismatch_cnt; + RRDDIM *rd_mismatch_cnt; + unsigned long long mismatch_cnt; + + RRDSET *st_nonredundant; + RRDDIM *rd_nonredundant; +}; + +struct old_raid { + int redundant; + char *name; + uint32_t hash; + int found; +}; + +static inline char *remove_trailing_chars(char *s, char c) +{ + while (*s) { + if (unlikely(*s == c)) { + *s = '\0'; + } + s++; + } + return s; +} + +static inline void make_chart_obsolete(char *name, const char *id_modifier) +{ + char id[50 + 1]; + RRDSET *st = NULL; + + if (likely(name && id_modifier)) { + snprintfz(id, sizeof(id) - 1, "mdstat.%s_%s", name, id_modifier); + st = rrdset_find_active_byname_localhost(id); + if (likely(st)) + rrdset_is_obsolete___safe_from_collector_thread(st); + } +} + +static void add_labels_to_mdstat(struct raid *raid, RRDSET *st) { + rrdlabels_add(st->rrdlabels, "device", raid->name, RRDLABEL_SRC_AUTO); + rrdlabels_add(st->rrdlabels, "raid_level", raid->level, RRDLABEL_SRC_AUTO); +} + +int do_proc_mdstat(int update_every, usec_t dt) +{ + (void)dt; + static procfile *ff = NULL; + static int do_health = -1, do_nonredundant = -1, do_disks = -1, do_operations = -1, do_mismatch = -1, + do_mismatch_config = -1; + static int make_charts_obsolete = -1; + static char *mdstat_filename = NULL, *mismatch_cnt_filename = NULL; + 
static struct raid *raids = NULL; + static size_t raids_allocated = 0; + size_t raids_num = 0, raid_idx = 0, redundant_num = 0; + static struct old_raid *old_raids = NULL; + static size_t old_raids_allocated = 0; + size_t old_raid_idx = 0; + + if (unlikely(do_health == -1)) { + do_health = + config_get_boolean("plugin:proc:/proc/mdstat", "faulty devices", CONFIG_BOOLEAN_YES); + do_nonredundant = + config_get_boolean("plugin:proc:/proc/mdstat", "nonredundant arrays availability", CONFIG_BOOLEAN_YES); + do_mismatch_config = + config_get_boolean_ondemand("plugin:proc:/proc/mdstat", "mismatch count", CONFIG_BOOLEAN_AUTO); + do_disks = + config_get_boolean("plugin:proc:/proc/mdstat", "disk stats", CONFIG_BOOLEAN_YES); + do_operations = + config_get_boolean("plugin:proc:/proc/mdstat", "operation status", CONFIG_BOOLEAN_YES); + + make_charts_obsolete = + config_get_boolean("plugin:proc:/proc/mdstat", "make charts obsolete", CONFIG_BOOLEAN_YES); + + char filename[FILENAME_MAX + 1]; + + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/mdstat"); + mdstat_filename = config_get("plugin:proc:/proc/mdstat", "filename to monitor", filename); + + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/block/%s/md/mismatch_cnt"); + mismatch_cnt_filename = config_get("plugin:proc:/proc/mdstat", "mismatch_cnt filename to monitor", filename); + } + + if (unlikely(!ff)) { + ff = procfile_open(mdstat_filename, " \t:", PROCFILE_FLAG_DEFAULT); + if (unlikely(!ff)) + return 1; + } + + ff = procfile_readall(ff); + if (unlikely(!ff)) + return 0; // we return 0, so that we will retry opening it next time + + size_t lines = procfile_lines(ff); + size_t words = 0; + + if (unlikely(lines < 2)) { + collector_error("Cannot read /proc/mdstat. 
Expected 2 or more lines, read %zu.", lines); + return 1; + } + + // find how many raids are there + size_t l; + raids_num = 0; + for (l = 1; l < lines - 2; l++) { + if (unlikely(procfile_lineword(ff, l, 1)[0] == 'a')) // check if the raid is active + raids_num++; + } + + if (unlikely(!raids_num && !old_raids_allocated)) + return 0; // we return 0, so that we will retry searching for raids next time + + // allocate the memory we need; + if (unlikely(raids_num != raids_allocated)) { + for (raid_idx = 0; raid_idx < raids_allocated; raid_idx++) { + struct raid *raid = &raids[raid_idx]; + freez(raid->name); + freez(raid->level); + freez(raid->mismatch_cnt_filename); + } + if (raids_num) { + raids = (struct raid *)reallocz(raids, raids_num * sizeof(struct raid)); + memset(raids, 0, raids_num * sizeof(struct raid)); + } else { + freez(raids); + raids = NULL; + } + raids_allocated = raids_num; + } + + // loop through all lines except the first and the last ones + for (l = 1, raid_idx = 0; l < (lines - 2) && raid_idx < raids_num; l++) { + struct raid *raid = &raids[raid_idx]; + raid->redundant = 0; + + words = procfile_linewords(ff, l); + + if (unlikely(words < 3)) + continue; + + if (unlikely(procfile_lineword(ff, l, 1)[0] != 'a')) + continue; + + if (unlikely(!raid->name)) { + raid->name = strdupz(procfile_lineword(ff, l, 0)); + raid->hash = simple_hash(raid->name); + raid->level = strdupz(procfile_lineword(ff, l, 2)); + } else if (unlikely(strcmp(raid->name, procfile_lineword(ff, l, 0)))) { + freez(raid->name); + freez(raid->mismatch_cnt_filename); + freez(raid->level); + memset(raid, 0, sizeof(struct raid)); + raid->name = strdupz(procfile_lineword(ff, l, 0)); + raid->hash = simple_hash(raid->name); + raid->level = strdupz(procfile_lineword(ff, l, 2)); + } + + if (unlikely(!raid->name || !raid->name[0])) + continue; + + raid_idx++; + + // check if raid has disk status + l++; + words = procfile_linewords(ff, l); + if (words < 2 || procfile_lineword(ff, l, words - 1)[0] 
!= '[') + continue; + + // split inuse and total number of disks + if (likely(do_health || do_disks)) { + char *s = NULL, *str_total = NULL, *str_inuse = NULL; + + s = procfile_lineword(ff, l, words - 2); + if (unlikely(s[0] != '[')) { + collector_error("Cannot read /proc/mdstat raid health status. Unexpected format: missing opening bracket."); + continue; + } + str_total = ++s; + while (*s) { + if (unlikely(*s == '/')) { + *s = '\0'; + str_inuse = s + 1; + } else if (unlikely(*s == ']')) { + *s = '\0'; + break; + } + s++; + } + if (unlikely(str_total[0] == '\0' || !str_inuse || str_inuse[0] == '\0')) { + collector_error("Cannot read /proc/mdstat raid health status. Unexpected format."); + continue; + } + + raid->inuse_disks = str2ull(str_inuse, NULL); + raid->total_disks = str2ull(str_total, NULL); + raid->failed_disks = raid->total_disks - raid->inuse_disks; + } + + raid->redundant = 1; + redundant_num++; + l++; + + // check if any operation is performed on the raid + if (likely(do_operations)) { + char *s = NULL; + + raid->check = 0; + raid->resync = 0; + raid->recovery = 0; + raid->reshape = 0; + raid->finish_in = 0; + raid->speed = 0; + + words = procfile_linewords(ff, l); + + if (likely(words < 2)) + continue; + + if (unlikely(procfile_lineword(ff, l, 0)[0] != '[')) + continue; + + if (unlikely(words < 7)) { + collector_error("Cannot read /proc/mdstat line. 
Expected 7 params, read %zu.", words); + continue; + } + + char *word; + word = procfile_lineword(ff, l, 3); + remove_trailing_chars(word, '%'); + + unsigned long long percentage = (unsigned long long)(str2ndd(word, NULL) * 100); + // possible operations: check, resync, recovery, reshape + // 4-th character is unique for each operation so it is checked + switch (procfile_lineword(ff, l, 1)[3]) { + case 'c': // check + raid->check = percentage; + break; + case 'y': // resync + raid->resync = percentage; + break; + case 'o': // recovery + raid->recovery = percentage; + break; + case 'h': // reshape + raid->reshape = percentage; + break; + } + + word = procfile_lineword(ff, l, 5); + s = remove_trailing_chars(word, 'm'); // remove trailing "min" + + word += 7; // skip leading "finish=" + + if (likely(s > word)) + raid->finish_in = (unsigned long long)(str2ndd(word, NULL) * 60); + + word = procfile_lineword(ff, l, 6); + s = remove_trailing_chars(word, 'K'); // remove trailing "K/sec" + + word += 6; // skip leading "speed=" + + if (likely(s > word)) + raid->speed = str2ull(word, NULL); + } + } + + // read mismatch_cnt files + if (do_mismatch == -1) { + if (do_mismatch_config == CONFIG_BOOLEAN_AUTO) { + if (raids_num > 50) + do_mismatch = CONFIG_BOOLEAN_NO; + else + do_mismatch = CONFIG_BOOLEAN_YES; + } else + do_mismatch = do_mismatch_config; + } + + if (likely(do_mismatch)) { + for (raid_idx = 0; raid_idx < raids_num; raid_idx++) { + char filename[FILENAME_MAX + 1]; + struct raid *raid = &raids[raid_idx]; + + if (likely(raid->redundant)) { + if (unlikely(!raid->mismatch_cnt_filename)) { + snprintfz(filename, FILENAME_MAX, mismatch_cnt_filename, raid->name); + raid->mismatch_cnt_filename = strdupz(filename); + } + if (unlikely(read_single_number_file(raid->mismatch_cnt_filename, &raid->mismatch_cnt))) { + collector_error("Cannot read file '%s'", raid->mismatch_cnt_filename); + do_mismatch = CONFIG_BOOLEAN_NO; + collector_error("Monitoring for mismatch count has been 
disabled"); + break; + } + } + } + } + + // check for disappeared raids + for (old_raid_idx = 0; old_raid_idx < old_raids_allocated; old_raid_idx++) { + struct old_raid *old_raid = &old_raids[old_raid_idx]; + int found = 0; + + for (raid_idx = 0; raid_idx < raids_num; raid_idx++) { + struct raid *raid = &raids[raid_idx]; + + if (unlikely( + raid->hash == old_raid->hash && !strcmp(raid->name, old_raid->name) && + raid->redundant == old_raid->redundant)) + found = 1; + } + + old_raid->found = found; + } + + int raid_disappeared = 0; + for (old_raid_idx = 0; old_raid_idx < old_raids_allocated; old_raid_idx++) { + struct old_raid *old_raid = &old_raids[old_raid_idx]; + + if (unlikely(!old_raid->found)) { + if (likely(make_charts_obsolete)) { + make_chart_obsolete(old_raid->name, "disks"); + make_chart_obsolete(old_raid->name, "mismatch"); + make_chart_obsolete(old_raid->name, "operation"); + make_chart_obsolete(old_raid->name, "finish"); + make_chart_obsolete(old_raid->name, "speed"); + make_chart_obsolete(old_raid->name, "availability"); + } + raid_disappeared = 1; + } + } + + // allocate memory for nonredundant arrays + if (unlikely(raid_disappeared || old_raids_allocated != raids_num)) { + for (old_raid_idx = 0; old_raid_idx < old_raids_allocated; old_raid_idx++) { + freez(old_raids[old_raid_idx].name); + } + if (likely(raids_num)) { + old_raids = reallocz(old_raids, sizeof(struct old_raid) * raids_num); + memset(old_raids, 0, sizeof(struct old_raid) * raids_num); + } else { + freez(old_raids); + old_raids = NULL; + } + old_raids_allocated = raids_num; + for (old_raid_idx = 0; old_raid_idx < old_raids_allocated; old_raid_idx++) { + struct old_raid *old_raid = &old_raids[old_raid_idx]; + struct raid *raid = &raids[old_raid_idx]; + + old_raid->name = strdupz(raid->name); + old_raid->hash = raid->hash; + old_raid->redundant = raid->redundant; + } + } + + if (likely(do_health && redundant_num)) { + static RRDSET *st_mdstat_health = NULL; + if 
(unlikely(!st_mdstat_health)) { + st_mdstat_health = rrdset_create_localhost( + "mdstat", + "mdstat_health", + NULL, + "health", + "md.health", + "Faulty Devices In MD", + "failed disks", + PLUGIN_PROC_NAME, + PLUGIN_PROC_MODULE_MDSTAT_NAME, + NETDATA_CHART_PRIO_MDSTAT_HEALTH, + update_every, + RRDSET_TYPE_LINE); + + rrdset_isnot_obsolete___safe_from_collector_thread(st_mdstat_health); + } + + if (!redundant_num) { + if (likely(make_charts_obsolete)) + make_chart_obsolete("mdstat", "health"); + } else { + for (raid_idx = 0; raid_idx < raids_num; raid_idx++) { + struct raid *raid = &raids[raid_idx]; + + if (likely(raid->redundant)) { + if (unlikely(!raid->rd_health && !(raid->rd_health = rrddim_find_active(st_mdstat_health, raid->name)))) + raid->rd_health = rrddim_add(st_mdstat_health, raid->name, NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + rrddim_set_by_pointer(st_mdstat_health, raid->rd_health, raid->failed_disks); + } + } + + rrdset_done(st_mdstat_health); + } + } + + for (raid_idx = 0; raid_idx < raids_num; raid_idx++) { + struct raid *raid = &raids[raid_idx]; + char id[50 + 1]; + char family[50 + 1]; + + if (likely(raid->redundant)) { + if (likely(do_disks)) { + snprintfz(id, sizeof(id) - 1, "%s_disks", raid->name); + + if (unlikely(!raid->st_disks && !(raid->st_disks = rrdset_find_active_byname_localhost(id)))) { + snprintfz(family, sizeof(family) - 1, "%s (%s)", raid->name, raid->level); + + raid->st_disks = rrdset_create_localhost( + "mdstat", + id, + NULL, + family, + "md.disks", + "Disks Stats", + "disks", + PLUGIN_PROC_NAME, + PLUGIN_PROC_MODULE_MDSTAT_NAME, + NETDATA_CHART_PRIO_MDSTAT_DISKS + raid_idx * 10, + update_every, + RRDSET_TYPE_STACKED); + + rrdset_isnot_obsolete___safe_from_collector_thread(raid->st_disks); + + add_labels_to_mdstat(raid, raid->st_disks); + } + + if (unlikely(!raid->rd_inuse && !(raid->rd_inuse = rrddim_find_active(raid->st_disks, "inuse")))) + raid->rd_inuse = rrddim_add(raid->st_disks, "inuse", NULL, 1, 1, 
RRD_ALGORITHM_ABSOLUTE); + if (unlikely(!raid->rd_down && !(raid->rd_down = rrddim_find_active(raid->st_disks, "down")))) + raid->rd_down = rrddim_add(raid->st_disks, "down", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + rrddim_set_by_pointer(raid->st_disks, raid->rd_inuse, raid->inuse_disks); + rrddim_set_by_pointer(raid->st_disks, raid->rd_down, raid->failed_disks); + rrdset_done(raid->st_disks); + } + + if (likely(do_mismatch)) { + snprintfz(id, sizeof(id) - 1, "%s_mismatch", raid->name); + + if (unlikely(!raid->st_mismatch_cnt && !(raid->st_mismatch_cnt = rrdset_find_active_byname_localhost(id)))) { + snprintfz(family, sizeof(family) - 1, "%s (%s)", raid->name, raid->level); + + raid->st_mismatch_cnt = rrdset_create_localhost( + "mdstat", + id, + NULL, + family, + "md.mismatch_cnt", + "Mismatch Count", + "unsynchronized blocks", + PLUGIN_PROC_NAME, + PLUGIN_PROC_MODULE_MDSTAT_NAME, + NETDATA_CHART_PRIO_MDSTAT_MISMATCH + raid_idx * 10, + update_every, + RRDSET_TYPE_LINE); + + rrdset_isnot_obsolete___safe_from_collector_thread(raid->st_mismatch_cnt); + + add_labels_to_mdstat(raid, raid->st_mismatch_cnt); + } + + if (unlikely(!raid->rd_mismatch_cnt && !(raid->rd_mismatch_cnt = rrddim_find_active(raid->st_mismatch_cnt, "count")))) + raid->rd_mismatch_cnt = rrddim_add(raid->st_mismatch_cnt, "count", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + rrddim_set_by_pointer(raid->st_mismatch_cnt, raid->rd_mismatch_cnt, raid->mismatch_cnt); + rrdset_done(raid->st_mismatch_cnt); + } + + if (likely(do_operations)) { + snprintfz(id, sizeof(id) - 1, "%s_operation", raid->name); + + if (unlikely(!raid->st_operation && !(raid->st_operation = rrdset_find_active_byname_localhost(id)))) { + snprintfz(family, sizeof(family) - 1, "%s (%s)", raid->name, raid->level); + + raid->st_operation = rrdset_create_localhost( + "mdstat", + id, + NULL, + family, + "md.status", + "Current Status", + "percent", + PLUGIN_PROC_NAME, + PLUGIN_PROC_MODULE_MDSTAT_NAME, + NETDATA_CHART_PRIO_MDSTAT_OPERATION + 
raid_idx * 10, + update_every, + RRDSET_TYPE_LINE); + + rrdset_isnot_obsolete___safe_from_collector_thread(raid->st_operation); + + add_labels_to_mdstat(raid, raid->st_operation); + } + + if(unlikely(!raid->rd_check && !(raid->rd_check = rrddim_find_active(raid->st_operation, "check")))) + raid->rd_check = rrddim_add(raid->st_operation, "check", NULL, 1, 100, RRD_ALGORITHM_ABSOLUTE); + if(unlikely(!raid->rd_resync && !(raid->rd_resync = rrddim_find_active(raid->st_operation, "resync")))) + raid->rd_resync = rrddim_add(raid->st_operation, "resync", NULL, 1, 100, RRD_ALGORITHM_ABSOLUTE); + if(unlikely(!raid->rd_recovery && !(raid->rd_recovery = rrddim_find_active(raid->st_operation, "recovery")))) + raid->rd_recovery = rrddim_add(raid->st_operation, "recovery", NULL, 1, 100, RRD_ALGORITHM_ABSOLUTE); + if(unlikely(!raid->rd_reshape && !(raid->rd_reshape = rrddim_find_active(raid->st_operation, "reshape")))) + raid->rd_reshape = rrddim_add(raid->st_operation, "reshape", NULL, 1, 100, RRD_ALGORITHM_ABSOLUTE); + + rrddim_set_by_pointer(raid->st_operation, raid->rd_check, raid->check); + rrddim_set_by_pointer(raid->st_operation, raid->rd_resync, raid->resync); + rrddim_set_by_pointer(raid->st_operation, raid->rd_recovery, raid->recovery); + rrddim_set_by_pointer(raid->st_operation, raid->rd_reshape, raid->reshape); + rrdset_done(raid->st_operation); + + snprintfz(id, sizeof(id) - 1, "%s_finish", raid->name); + if (unlikely(!raid->st_finish && !(raid->st_finish = rrdset_find_active_byname_localhost(id)))) { + snprintfz(family, sizeof(family) - 1, "%s (%s)", raid->name, raid->level); + + raid->st_finish = rrdset_create_localhost( + "mdstat", + id, + NULL, + family, + "md.expected_time_until_operation_finish", + "Approximate Time Until Finish", + "seconds", + PLUGIN_PROC_NAME, + PLUGIN_PROC_MODULE_MDSTAT_NAME, + NETDATA_CHART_PRIO_MDSTAT_FINISH + raid_idx * 10, + update_every, RRDSET_TYPE_LINE); + + rrdset_isnot_obsolete___safe_from_collector_thread(raid->st_finish); + + 
add_labels_to_mdstat(raid, raid->st_finish); + } + + if(unlikely(!raid->rd_finish_in && !(raid->rd_finish_in = rrddim_find_active(raid->st_finish, "finish_in")))) + raid->rd_finish_in = rrddim_add(raid->st_finish, "finish_in", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + rrddim_set_by_pointer(raid->st_finish, raid->rd_finish_in, raid->finish_in); + rrdset_done(raid->st_finish); + + snprintfz(id, sizeof(id) - 1, "%s_speed", raid->name); + if (unlikely(!raid->st_speed && !(raid->st_speed = rrdset_find_active_byname_localhost(id)))) { + snprintfz(family, sizeof(family) - 1, "%s (%s)", raid->name, raid->level); + + raid->st_speed = rrdset_create_localhost( + "mdstat", + id, + NULL, + family, + "md.operation_speed", + "Operation Speed", + "KiB/s", + PLUGIN_PROC_NAME, + PLUGIN_PROC_MODULE_MDSTAT_NAME, + NETDATA_CHART_PRIO_MDSTAT_SPEED + raid_idx * 10, + update_every, + RRDSET_TYPE_LINE); + + rrdset_isnot_obsolete___safe_from_collector_thread(raid->st_speed); + + add_labels_to_mdstat(raid, raid->st_speed); + } + + if (unlikely(!raid->rd_speed && !(raid->rd_speed = rrddim_find_active(raid->st_speed, "speed")))) + raid->rd_speed = rrddim_add(raid->st_speed, "speed", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + rrddim_set_by_pointer(raid->st_speed, raid->rd_speed, raid->speed); + rrdset_done(raid->st_speed); + } + } else { + if (likely(do_nonredundant)) { + snprintfz(id, sizeof(id) - 1, "%s_availability", raid->name); + + if (unlikely(!raid->st_nonredundant && !(raid->st_nonredundant = rrdset_find_active_localhost(id)))) { + snprintfz(family, sizeof(family) - 1, "%s (%s)", raid->name, raid->level); + + raid->st_nonredundant = rrdset_create_localhost( + "mdstat", + id, + NULL, + family, + "md.nonredundant", + "Nonredundant Array Availability", + "boolean", + PLUGIN_PROC_NAME, + PLUGIN_PROC_MODULE_MDSTAT_NAME, + NETDATA_CHART_PRIO_MDSTAT_NONREDUNDANT + raid_idx * 10, + update_every, + RRDSET_TYPE_LINE); + + rrdset_isnot_obsolete___safe_from_collector_thread(raid->st_nonredundant); + + 
add_labels_to_mdstat(raid, raid->st_nonredundant); + } + + if (unlikely(!raid->rd_nonredundant && !(raid->rd_nonredundant = rrddim_find_active(raid->st_nonredundant, "available")))) + raid->rd_nonredundant = rrddim_add(raid->st_nonredundant, "available", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + rrddim_set_by_pointer(raid->st_nonredundant, raid->rd_nonredundant, 1); + rrdset_done(raid->st_nonredundant); + } + } + } + + return 0; +} diff --git a/src/collectors/proc.plugin/proc_meminfo.c b/src/collectors/proc.plugin/proc_meminfo.c new file mode 100644 index 000000000..a357cc782 --- /dev/null +++ b/src/collectors/proc.plugin/proc_meminfo.c @@ -0,0 +1,849 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +#define PLUGIN_PROC_MODULE_MEMINFO_NAME "/proc/meminfo" +#define CONFIG_SECTION_PLUGIN_PROC_MEMINFO "plugin:" PLUGIN_PROC_CONFIG_NAME ":" PLUGIN_PROC_MODULE_MEMINFO_NAME + +int do_proc_meminfo(int update_every, usec_t dt) { + (void)dt; + + static procfile *ff = NULL; + static int do_ram = -1 + , do_swap = -1 + , do_hwcorrupt = -1 + , do_committed = -1 + , do_writeback = -1 + , do_kernel = -1 + , do_slab = -1 + , do_hugepages = -1 + , do_transparent_hugepages = -1 + , do_reclaiming = -1 + , do_high_low = -1 + , do_cma = -1 + , do_directmap = -1; + + static ARL_BASE *arl_base = NULL; + static ARL_ENTRY *arl_hwcorrupted = NULL, *arl_memavailable = NULL, *arl_hugepages_total = NULL, + *arl_zswapped = NULL, *arl_high_low = NULL, *arl_cma_total = NULL, + *arl_directmap4k = NULL, *arl_directmap2m = NULL, *arl_directmap4m = NULL, *arl_directmap1g = NULL; + + static unsigned long long + MemTotal = 0 + , MemFree = 0 + , MemAvailable = 0 + , Buffers = 0 + , Cached = 0 + , SwapCached = 0 + , Active = 0 + , Inactive = 0 + , ActiveAnon = 0 + , InactiveAnon = 0 + , ActiveFile = 0 + , InactiveFile = 0 + , Unevictable = 0 + , Mlocked = 0 + , HighTotal = 0 + , HighFree = 0 + , LowTotal = 0 + , LowFree = 0 + , MmapCopy = 0 + , SwapTotal = 0 + , SwapFree = 0 + 
, Zswap = 0 + , Zswapped = 0 + , Dirty = 0 + , Writeback = 0 + , AnonPages = 0 + , Mapped = 0 + , Shmem = 0 + , KReclaimable = 0 + , Slab = 0 + , SReclaimable = 0 + , SUnreclaim = 0 + , KernelStack = 0 + , ShadowCallStack = 0 + , PageTables = 0 + , SecPageTables = 0 + , NFS_Unstable = 0 + , Bounce = 0 + , WritebackTmp = 0 + , CommitLimit = 0 + , Committed_AS = 0 + , VmallocTotal = 0 + , VmallocUsed = 0 + , VmallocChunk = 0 + , Percpu = 0 + //, EarlyMemtestBad = 0 + , HardwareCorrupted = 0 + , AnonHugePages = 0 + , ShmemHugePages = 0 + , ShmemPmdMapped = 0 + , FileHugePages = 0 + , FilePmdMapped = 0 + , CmaTotal = 0 + , CmaFree = 0 + //, Unaccepted = 0 + , HugePages_Total = 0 + , HugePages_Free = 0 + , HugePages_Rsvd = 0 + , HugePages_Surp = 0 + , Hugepagesize = 0 + //, Hugetlb = 0 + , DirectMap4k = 0 + , DirectMap2M = 0 + , DirectMap4M = 0 + , DirectMap1G = 0 + ; + + if(unlikely(!arl_base)) { + do_ram = config_get_boolean(CONFIG_SECTION_PLUGIN_PROC_MEMINFO, "system ram", 1); + do_swap = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_MEMINFO, "system swap", CONFIG_BOOLEAN_AUTO); + do_hwcorrupt = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_MEMINFO, "hardware corrupted ECC", CONFIG_BOOLEAN_AUTO); + do_committed = config_get_boolean(CONFIG_SECTION_PLUGIN_PROC_MEMINFO, "committed memory", 1); + do_writeback = config_get_boolean(CONFIG_SECTION_PLUGIN_PROC_MEMINFO, "writeback memory", 1); + do_kernel = config_get_boolean(CONFIG_SECTION_PLUGIN_PROC_MEMINFO, "kernel memory", 1); + do_slab = config_get_boolean(CONFIG_SECTION_PLUGIN_PROC_MEMINFO, "slab memory", 1); + do_hugepages = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_MEMINFO, "hugepages", CONFIG_BOOLEAN_AUTO); + do_transparent_hugepages = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_MEMINFO, "transparent hugepages", CONFIG_BOOLEAN_AUTO); + do_reclaiming = config_get_boolean(CONFIG_SECTION_PLUGIN_PROC_MEMINFO, "memory reclaiming", CONFIG_BOOLEAN_AUTO); + do_high_low = 
config_get_boolean(CONFIG_SECTION_PLUGIN_PROC_MEMINFO, "high low memory", CONFIG_BOOLEAN_AUTO); + do_cma = config_get_boolean(CONFIG_SECTION_PLUGIN_PROC_MEMINFO, "cma memory", CONFIG_BOOLEAN_AUTO); + do_directmap = config_get_boolean(CONFIG_SECTION_PLUGIN_PROC_MEMINFO, "direct maps", CONFIG_BOOLEAN_AUTO); + + // https://github.com/torvalds/linux/blob/master/fs/proc/meminfo.c + + arl_base = arl_create("meminfo", NULL, 60); + arl_expect(arl_base, "MemTotal", &MemTotal); + arl_expect(arl_base, "MemFree", &MemFree); + arl_memavailable = arl_expect(arl_base, "MemAvailable", &MemAvailable); + arl_expect(arl_base, "Buffers", &Buffers); + arl_expect(arl_base, "Cached", &Cached); + arl_expect(arl_base, "SwapCached", &SwapCached); + arl_expect(arl_base, "Active", &Active); + arl_expect(arl_base, "Inactive", &Inactive); + arl_expect(arl_base, "Active(anon)", &ActiveAnon); + arl_expect(arl_base, "Inactive(anon)", &InactiveAnon); + arl_expect(arl_base, "Active(file)", &ActiveFile); + arl_expect(arl_base, "Inactive(file)", &InactiveFile); + arl_expect(arl_base, "Unevictable", &Unevictable); + arl_expect(arl_base, "Mlocked", &Mlocked); + + // CONFIG_HIGHMEM + arl_high_low = arl_expect(arl_base, "HighTotal", &HighTotal); + arl_expect(arl_base, "HighFree", &HighFree); + arl_expect(arl_base, "LowTotal", &LowTotal); + arl_expect(arl_base, "LowFree", &LowFree); + + // CONFIG_MMU + arl_expect(arl_base, "MmapCopy", &MmapCopy); + + arl_expect(arl_base, "SwapTotal", &SwapTotal); + arl_expect(arl_base, "SwapFree", &SwapFree); + + // CONFIG_ZSWAP + arl_zswapped = arl_expect(arl_base, "Zswap", &Zswap); + arl_expect(arl_base, "Zswapped", &Zswapped); + + arl_expect(arl_base, "Dirty", &Dirty); + arl_expect(arl_base, "Writeback", &Writeback); + arl_expect(arl_base, "AnonPages", &AnonPages); + arl_expect(arl_base, "Mapped", &Mapped); + arl_expect(arl_base, "Shmem", &Shmem); + arl_expect(arl_base, "KReclaimable", &KReclaimable); + arl_expect(arl_base, "Slab", &Slab); + arl_expect(arl_base, 
"SReclaimable", &SReclaimable); + arl_expect(arl_base, "SUnreclaim", &SUnreclaim); + arl_expect(arl_base, "KernelStack", &KernelStack); + + // CONFIG_SHADOW_CALL_STACK + arl_expect(arl_base, "ShadowCallStack", &ShadowCallStack); + + arl_expect(arl_base, "PageTables", &PageTables); + arl_expect(arl_base, "SecPageTables", &SecPageTables); + arl_expect(arl_base, "NFS_Unstable", &NFS_Unstable); + arl_expect(arl_base, "Bounce", &Bounce); + arl_expect(arl_base, "WritebackTmp", &WritebackTmp); + arl_expect(arl_base, "CommitLimit", &CommitLimit); + arl_expect(arl_base, "Committed_AS", &Committed_AS); + arl_expect(arl_base, "VmallocTotal", &VmallocTotal); + arl_expect(arl_base, "VmallocUsed", &VmallocUsed); + arl_expect(arl_base, "VmallocChunk", &VmallocChunk); + arl_expect(arl_base, "Percpu", &Percpu); + + // CONFIG_MEMTEST + //arl_expect(arl_base, "EarlyMemtestBad", &EarlyMemtestBad); + + // CONFIG_MEMORY_FAILURE + arl_hwcorrupted = arl_expect(arl_base, "HardwareCorrupted", &HardwareCorrupted); + + // CONFIG_TRANSPARENT_HUGEPAGE + arl_expect(arl_base, "AnonHugePages", &AnonHugePages); + arl_expect(arl_base, "ShmemHugePages", &ShmemHugePages); + arl_expect(arl_base, "ShmemPmdMapped", &ShmemPmdMapped); + arl_expect(arl_base, "FileHugePages", &FileHugePages); + arl_expect(arl_base, "FilePmdMapped", &FilePmdMapped); + + // CONFIG_CMA + arl_cma_total = arl_expect(arl_base, "CmaTotal", &CmaTotal); + arl_expect(arl_base, "CmaFree", &CmaFree); + + // CONFIG_UNACCEPTED_MEMORY + //arl_expect(arl_base, "Unaccepted", &Unaccepted); + + // these appear only when hugepages are supported + arl_hugepages_total = arl_expect(arl_base, "HugePages_Total", &HugePages_Total); + arl_expect(arl_base, "HugePages_Free", &HugePages_Free); + arl_expect(arl_base, "HugePages_Rsvd", &HugePages_Rsvd); + arl_expect(arl_base, "HugePages_Surp", &HugePages_Surp); + arl_expect(arl_base, "Hugepagesize", &Hugepagesize); + //arl_expect(arl_base, "Hugetlb", &Hugetlb); + + arl_directmap4k = arl_expect(arl_base, 
"DirectMap4k", &DirectMap4k); + arl_directmap2m = arl_expect(arl_base, "DirectMap2M", &DirectMap2M); + arl_directmap4m = arl_expect(arl_base, "DirectMap4M", &DirectMap4M); + arl_directmap1g = arl_expect(arl_base, "DirectMap1G", &DirectMap1G); + } + + if(unlikely(!ff)) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/meminfo"); + ff = procfile_open(config_get(CONFIG_SECTION_PLUGIN_PROC_MEMINFO, "filename to monitor", filename), " \t:", PROCFILE_FLAG_DEFAULT); + if(unlikely(!ff)) + return 1; + } + + ff = procfile_readall(ff); + if(unlikely(!ff)) + return 0; // we return 0, so that we will retry to open it next time + + size_t lines = procfile_lines(ff), l; + + arl_begin(arl_base); + for(l = 0; l < lines ;l++) { + size_t words = procfile_linewords(ff, l); + if(unlikely(words < 2)) continue; + + if(unlikely(arl_check(arl_base, + procfile_lineword(ff, l, 0), + procfile_lineword(ff, l, 1)))) break; + } + + // http://calimeroteknik.free.fr/blag/?article20/really-used-memory-on-gnu-linux + // KReclaimable includes SReclaimable, it was added in kernel v4.20 + unsigned long long reclaimable = KReclaimable > 0 ? 
KReclaimable : SReclaimable; + unsigned long long MemCached = Cached + reclaimable - Shmem; + unsigned long long MemUsed = MemTotal - MemFree - MemCached - Buffers; + // The Linux kernel doesn't report ZFS ARC usage as cache memory (the ARC is included in the total used system memory) + if (!inside_lxc_container) { + MemCached += (zfs_arcstats_shrinkable_cache_size_bytes / 1024); + MemUsed -= (zfs_arcstats_shrinkable_cache_size_bytes / 1024); + MemAvailable += (zfs_arcstats_shrinkable_cache_size_bytes / 1024); + } + + if(do_ram) { + { + static RRDSET *st_system_ram = NULL; + static RRDDIM *rd_free = NULL, *rd_used = NULL, *rd_cached = NULL, *rd_buffers = NULL; + + if(unlikely(!st_system_ram)) { + st_system_ram = rrdset_create_localhost( + "system" + , "ram" + , NULL + , "ram" + , NULL + , "System RAM" + , "MiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_MEMINFO_NAME + , NETDATA_CHART_PRIO_SYSTEM_RAM + , update_every + , RRDSET_TYPE_STACKED + ); + + rd_free = rrddim_add(st_system_ram, "free", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_used = rrddim_add(st_system_ram, "used", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_cached = rrddim_add(st_system_ram, "cached", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_buffers = rrddim_add(st_system_ram, "buffers", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st_system_ram, rd_free, MemFree); + rrddim_set_by_pointer(st_system_ram, rd_used, MemUsed); + rrddim_set_by_pointer(st_system_ram, rd_cached, MemCached); + rrddim_set_by_pointer(st_system_ram, rd_buffers, Buffers); + rrdset_done(st_system_ram); + } + + if(arl_memavailable->flags & ARL_ENTRY_FLAG_FOUND) { + static RRDSET *st_mem_available = NULL; + static RRDDIM *rd_avail = NULL; + + if(unlikely(!st_mem_available)) { + st_mem_available = rrdset_create_localhost( + "mem" + , "available" + , NULL + , "overview" + , NULL + , "Available RAM for applications" + , "MiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_MEMINFO_NAME + , 
NETDATA_CHART_PRIO_MEM_SYSTEM_AVAILABLE + , update_every + , RRDSET_TYPE_AREA + ); + + rd_avail = rrddim_add(st_mem_available, "MemAvailable", "avail", 1, 1024, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st_mem_available, rd_avail, MemAvailable); + rrdset_done(st_mem_available); + } + } + + unsigned long long SwapUsed = SwapTotal - SwapFree; + + if(do_swap == CONFIG_BOOLEAN_YES || (do_swap == CONFIG_BOOLEAN_AUTO && + (SwapTotal || SwapUsed || SwapFree || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_swap = CONFIG_BOOLEAN_YES; + + static RRDSET *st_system_swap = NULL; + static RRDDIM *rd_free = NULL, *rd_used = NULL; + + if(unlikely(!st_system_swap)) { + st_system_swap = rrdset_create_localhost( + "mem" + , "swap" + , NULL + , "swap" + , NULL + , "System Swap" + , "MiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_MEMINFO_NAME + , NETDATA_CHART_PRIO_MEM_SWAP + , update_every + , RRDSET_TYPE_STACKED + ); + + rrdset_flag_set(st_system_swap, RRDSET_FLAG_DETAIL); + + rd_free = rrddim_add(st_system_swap, "free", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_used = rrddim_add(st_system_swap, "used", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st_system_swap, rd_used, SwapUsed); + rrddim_set_by_pointer(st_system_swap, rd_free, SwapFree); + rrdset_done(st_system_swap); + + { + static RRDSET *st_mem_swap_cached = NULL; + static RRDDIM *rd_cached = NULL; + + if (unlikely(!st_mem_swap_cached)) { + st_mem_swap_cached = rrdset_create_localhost( + "mem" + , "swap_cached" + , NULL + , "swap" + , NULL + , "Swap Memory Cached in RAM" + , "MiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_MEMINFO_NAME + , NETDATA_CHART_PRIO_MEM_SWAP + 1 + , update_every + , RRDSET_TYPE_AREA + ); + + rd_cached = rrddim_add(st_mem_swap_cached, "cached", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st_mem_swap_cached, rd_cached, SwapCached); + rrdset_done(st_mem_swap_cached); + } + + if(arl_zswapped->flags & 
ARL_ENTRY_FLAG_FOUND) { + static RRDSET *st_mem_zswap = NULL; + static RRDDIM *rd_zswap = NULL, *rd_zswapped = NULL; + + if (unlikely(!st_mem_zswap)) { + st_mem_zswap = rrdset_create_localhost( + "mem" + , "zswap" + , NULL + , "zswap" + , NULL + , "Zswap Usage" + , "MiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_MEMINFO_NAME + , NETDATA_CHART_PRIO_MEM_ZSWAP + , update_every + , RRDSET_TYPE_STACKED + ); + + rd_zswap = rrddim_add(st_mem_zswap, "zswap", "in-ram", 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_zswapped = rrddim_add(st_mem_zswap, "zswapped", "on-disk", 1, 1024, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st_mem_zswap, rd_zswap, Zswap); + rrddim_set_by_pointer(st_mem_zswap, rd_zswapped, Zswapped); + rrdset_done(st_mem_zswap); + } + } + + if(arl_hwcorrupted->flags & ARL_ENTRY_FLAG_FOUND && + (do_hwcorrupt == CONFIG_BOOLEAN_YES || (do_hwcorrupt == CONFIG_BOOLEAN_AUTO && + (HardwareCorrupted > 0 || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES)))) { + do_hwcorrupt = CONFIG_BOOLEAN_YES; + + static RRDSET *st_mem_hwcorrupt = NULL; + static RRDDIM *rd_corrupted = NULL; + + if(unlikely(!st_mem_hwcorrupt)) { + st_mem_hwcorrupt = rrdset_create_localhost( + "mem" + , "hwcorrupt" + , NULL + , "ecc" + , NULL + , "Corrupted Memory, detected by ECC" + , "MiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_MEMINFO_NAME + , NETDATA_CHART_PRIO_MEM_HW + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(st_mem_hwcorrupt, RRDSET_FLAG_DETAIL); + + rd_corrupted = rrddim_add(st_mem_hwcorrupt, "HardwareCorrupted", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st_mem_hwcorrupt, rd_corrupted, HardwareCorrupted); + rrdset_done(st_mem_hwcorrupt); + } + + if(do_committed) { + static RRDSET *st_mem_committed = NULL; + static RRDDIM *rd_committed = NULL; + + if(unlikely(!st_mem_committed)) { + st_mem_committed = rrdset_create_localhost( + "mem" + , "committed" + , NULL + , "overview" + , NULL + , "Committed (Allocated) Memory" + , "MiB" + , 
PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_MEMINFO_NAME + , NETDATA_CHART_PRIO_MEM_SYSTEM_COMMITTED + , update_every + , RRDSET_TYPE_AREA + ); + + rrdset_flag_set(st_mem_committed, RRDSET_FLAG_DETAIL); + + rd_committed = rrddim_add(st_mem_committed, "Committed_AS", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st_mem_committed, rd_committed, Committed_AS); + rrdset_done(st_mem_committed); + } + + if(do_writeback) { + static RRDSET *st_mem_writeback = NULL; + static RRDDIM *rd_dirty = NULL, *rd_writeback = NULL, *rd_fusewriteback = NULL, *rd_nfs_writeback = NULL, *rd_bounce = NULL; + + if(unlikely(!st_mem_writeback)) { + st_mem_writeback = rrdset_create_localhost( + "mem" + , "writeback" + , NULL + , "writeback" + , NULL + , "Writeback Memory" + , "MiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_MEMINFO_NAME + , NETDATA_CHART_PRIO_MEM_KERNEL + , update_every + , RRDSET_TYPE_LINE + ); + rrdset_flag_set(st_mem_writeback, RRDSET_FLAG_DETAIL); + + rd_dirty = rrddim_add(st_mem_writeback, "Dirty", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_writeback = rrddim_add(st_mem_writeback, "Writeback", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_fusewriteback = rrddim_add(st_mem_writeback, "FuseWriteback", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_nfs_writeback = rrddim_add(st_mem_writeback, "NfsWriteback", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_bounce = rrddim_add(st_mem_writeback, "Bounce", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st_mem_writeback, rd_dirty, Dirty); + rrddim_set_by_pointer(st_mem_writeback, rd_writeback, Writeback); + rrddim_set_by_pointer(st_mem_writeback, rd_fusewriteback, WritebackTmp); + rrddim_set_by_pointer(st_mem_writeback, rd_nfs_writeback, NFS_Unstable); + rrddim_set_by_pointer(st_mem_writeback, rd_bounce, Bounce); + rrdset_done(st_mem_writeback); + } + + // -------------------------------------------------------------------- + + if(do_kernel) { + static RRDSET *st_mem_kernel = NULL; + static 
RRDDIM *rd_slab = NULL, *rd_kernelstack = NULL, *rd_pagetables = NULL, *rd_vmallocused = NULL, + *rd_percpu = NULL, *rd_kreclaimable = NULL; + + if(unlikely(!st_mem_kernel)) { + st_mem_kernel = rrdset_create_localhost( + "mem" + , "kernel" + , NULL + , "kernel" + , NULL + , "Memory Used by Kernel" + , "MiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_MEMINFO_NAME + , NETDATA_CHART_PRIO_MEM_KERNEL + 1 + , update_every + , RRDSET_TYPE_STACKED + ); + + rrdset_flag_set(st_mem_kernel, RRDSET_FLAG_DETAIL); + + rd_slab = rrddim_add(st_mem_kernel, "Slab", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_kernelstack = rrddim_add(st_mem_kernel, "KernelStack", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_pagetables = rrddim_add(st_mem_kernel, "PageTables", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_vmallocused = rrddim_add(st_mem_kernel, "VmallocUsed", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_percpu = rrddim_add(st_mem_kernel, "Percpu", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_kreclaimable = rrddim_add(st_mem_kernel, "KReclaimable", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st_mem_kernel, rd_slab, Slab); + rrddim_set_by_pointer(st_mem_kernel, rd_kernelstack, KernelStack); + rrddim_set_by_pointer(st_mem_kernel, rd_pagetables, PageTables); + rrddim_set_by_pointer(st_mem_kernel, rd_vmallocused, VmallocUsed); + rrddim_set_by_pointer(st_mem_kernel, rd_percpu, Percpu); + rrddim_set_by_pointer(st_mem_kernel, rd_kreclaimable, KReclaimable); + + rrdset_done(st_mem_kernel); + } + + if(do_slab) { + static RRDSET *st_mem_slab = NULL; + static RRDDIM *rd_reclaimable = NULL, *rd_unreclaimable = NULL; + + if(unlikely(!st_mem_slab)) { + st_mem_slab = rrdset_create_localhost( + "mem" + , "slab" + , NULL + , "slab" + , NULL + , "Reclaimable Kernel Memory" + , "MiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_MEMINFO_NAME + , NETDATA_CHART_PRIO_MEM_SLAB + , update_every + , RRDSET_TYPE_STACKED + ); + + rrdset_flag_set(st_mem_slab, RRDSET_FLAG_DETAIL); + + 
rd_reclaimable = rrddim_add(st_mem_slab, "reclaimable", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_unreclaimable = rrddim_add(st_mem_slab, "unreclaimable", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st_mem_slab, rd_reclaimable, SReclaimable); + rrddim_set_by_pointer(st_mem_slab, rd_unreclaimable, SUnreclaim); + rrdset_done(st_mem_slab); + } + + if(arl_hugepages_total->flags & ARL_ENTRY_FLAG_FOUND && + (do_hugepages == CONFIG_BOOLEAN_YES || (do_hugepages == CONFIG_BOOLEAN_AUTO && + ((Hugepagesize && HugePages_Total) || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES)))) { + do_hugepages = CONFIG_BOOLEAN_YES; + + static RRDSET *st_mem_hugepages = NULL; + static RRDDIM *rd_used = NULL, *rd_free = NULL, *rd_rsvd = NULL, *rd_surp = NULL; + + if(unlikely(!st_mem_hugepages)) { + st_mem_hugepages = rrdset_create_localhost( + "mem" + , "hugepages" + , NULL + , "hugepages" + , NULL + , "Dedicated HugePages Memory" + , "MiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_MEMINFO_NAME + , NETDATA_CHART_PRIO_MEM_HUGEPAGES + , update_every + , RRDSET_TYPE_STACKED + ); + + rrdset_flag_set(st_mem_hugepages, RRDSET_FLAG_DETAIL); + + rd_free = rrddim_add(st_mem_hugepages, "free", NULL, Hugepagesize, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_used = rrddim_add(st_mem_hugepages, "used", NULL, Hugepagesize, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_surp = rrddim_add(st_mem_hugepages, "surplus", NULL, Hugepagesize, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_rsvd = rrddim_add(st_mem_hugepages, "reserved", NULL, Hugepagesize, 1024, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st_mem_hugepages, rd_used, HugePages_Total - HugePages_Free - HugePages_Rsvd); + rrddim_set_by_pointer(st_mem_hugepages, rd_free, HugePages_Free); + rrddim_set_by_pointer(st_mem_hugepages, rd_rsvd, HugePages_Rsvd); + rrddim_set_by_pointer(st_mem_hugepages, rd_surp, HugePages_Surp); + rrdset_done(st_mem_hugepages); + } + + if(do_transparent_hugepages == CONFIG_BOOLEAN_YES || 
(do_transparent_hugepages == CONFIG_BOOLEAN_AUTO && + (AnonHugePages || + ShmemHugePages || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_transparent_hugepages = CONFIG_BOOLEAN_YES; + + static RRDSET *st_mem_transparent_hugepages = NULL; + static RRDDIM *rd_anonymous = NULL, *rd_shared = NULL; + + if(unlikely(!st_mem_transparent_hugepages)) { + st_mem_transparent_hugepages = rrdset_create_localhost( + "mem" + , "thp" + , NULL + , "hugepages" + , NULL + , "Transparent HugePages Memory" + , "MiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_MEMINFO_NAME + , NETDATA_CHART_PRIO_MEM_HUGEPAGES + 1 + , update_every + , RRDSET_TYPE_STACKED + ); + + rrdset_flag_set(st_mem_transparent_hugepages, RRDSET_FLAG_DETAIL); + + rd_anonymous = rrddim_add(st_mem_transparent_hugepages, "anonymous", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_shared = rrddim_add(st_mem_transparent_hugepages, "shmem", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st_mem_transparent_hugepages, rd_anonymous, AnonHugePages); + rrddim_set_by_pointer(st_mem_transparent_hugepages, rd_shared, ShmemHugePages); + rrdset_done(st_mem_transparent_hugepages); + + { + static RRDSET *st_mem_thp_details = NULL; + static RRDDIM *rd_shmem_pmd_mapped = NULL, *rd_file_huge_pages = NULL, *rd_file_pmd_mapped = NULL; + + if(unlikely(!st_mem_thp_details)) { + st_mem_thp_details = rrdset_create_localhost( + "mem" + , "thp_details" + , NULL + , "hugepages" + , NULL + , "Details of Transparent HugePages Usage" + , "MiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_MEMINFO_NAME + , NETDATA_CHART_PRIO_MEM_HUGEPAGES_DETAILS + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(st_mem_thp_details, RRDSET_FLAG_DETAIL); + + rd_shmem_pmd_mapped = rrddim_add(st_mem_thp_details, "shmem_pmd", "ShmemPmdMapped", 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_file_huge_pages = rrddim_add(st_mem_thp_details, "file", "FileHugePages", 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_file_pmd_mapped = 
rrddim_add(st_mem_thp_details, "file_pmd", "FilePmdMapped", 1, 1024, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st_mem_thp_details, rd_shmem_pmd_mapped, ShmemPmdMapped); + rrddim_set_by_pointer(st_mem_thp_details, rd_file_huge_pages, FileHugePages); + rrddim_set_by_pointer(st_mem_thp_details, rd_file_pmd_mapped, FilePmdMapped); + rrdset_done(st_mem_thp_details); + } + } + + if(do_reclaiming != CONFIG_BOOLEAN_NO) { + static RRDSET *st_mem_reclaiming = NULL; + static RRDDIM *rd_active = NULL, *rd_inactive = NULL, + *rd_active_anon = NULL, *rd_inactive_anon = NULL, + *rd_active_file = NULL, *rd_inactive_file = NULL, + *rd_unevictable = NULL, *rd_mlocked = NULL; + + if(unlikely(!st_mem_reclaiming)) { + st_mem_reclaiming = rrdset_create_localhost( + "mem" + , "reclaiming" + , NULL + , "reclaiming" + , NULL + , "Memory Reclaiming" + , "MiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_MEMINFO_NAME + , NETDATA_CHART_PRIO_MEM_RECLAIMING + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(st_mem_reclaiming, RRDSET_FLAG_DETAIL); + + rd_active = rrddim_add(st_mem_reclaiming, "active", "Active", 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_inactive = rrddim_add(st_mem_reclaiming, "inactive", "Inactive", 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_active_anon = rrddim_add(st_mem_reclaiming, "active_anon", "Active(anon)", 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_inactive_anon = rrddim_add(st_mem_reclaiming, "inactive_anon", "Inactive(anon)", 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_active_file = rrddim_add(st_mem_reclaiming, "active_file", "Active(file)", 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_inactive_file = rrddim_add(st_mem_reclaiming, "inactive_file", "Inactive(file)", 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_unevictable = rrddim_add(st_mem_reclaiming, "unevictable", "Unevictable", 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_mlocked = rrddim_add(st_mem_reclaiming, "mlocked", "Mlocked", 1, 1024, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st_mem_reclaiming, 
rd_active, Active); + rrddim_set_by_pointer(st_mem_reclaiming, rd_inactive, Inactive); + rrddim_set_by_pointer(st_mem_reclaiming, rd_active_anon, ActiveAnon); + rrddim_set_by_pointer(st_mem_reclaiming, rd_inactive_anon, InactiveAnon); + rrddim_set_by_pointer(st_mem_reclaiming, rd_active_file, ActiveFile); + rrddim_set_by_pointer(st_mem_reclaiming, rd_inactive_file, InactiveFile); + rrddim_set_by_pointer(st_mem_reclaiming, rd_unevictable, Unevictable); + rrddim_set_by_pointer(st_mem_reclaiming, rd_mlocked, Mlocked); + + rrdset_done(st_mem_reclaiming); + } + + if(do_high_low != CONFIG_BOOLEAN_NO && (arl_high_low->flags & ARL_ENTRY_FLAG_FOUND)) { + static RRDSET *st_mem_high_low = NULL; + static RRDDIM *rd_high_used = NULL, *rd_low_used = NULL; + static RRDDIM *rd_high_free = NULL, *rd_low_free = NULL; + + if(unlikely(!st_mem_high_low)) { + st_mem_high_low = rrdset_create_localhost( + "mem" + , "high_low" + , NULL + , "high_low" + , NULL + , "High and Low Used and Free Memory Areas" + , "MiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_MEMINFO_NAME + , NETDATA_CHART_PRIO_MEM_HIGH_LOW + , update_every + , RRDSET_TYPE_STACKED + ); + + rrdset_flag_set(st_mem_high_low, RRDSET_FLAG_DETAIL); + + rd_high_used = rrddim_add(st_mem_high_low, "high_used", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_low_used = rrddim_add(st_mem_high_low, "low_used", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_high_free = rrddim_add(st_mem_high_low, "high_free", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_low_free = rrddim_add(st_mem_high_low, "low_free", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st_mem_high_low, rd_high_used, HighTotal - HighFree); + rrddim_set_by_pointer(st_mem_high_low, rd_low_used, LowTotal - LowFree); + rrddim_set_by_pointer(st_mem_high_low, rd_high_free, HighFree); + rrddim_set_by_pointer(st_mem_high_low, rd_low_free, LowFree); + rrdset_done(st_mem_high_low); + } + + if(do_cma == CONFIG_BOOLEAN_YES || (do_cma == CONFIG_BOOLEAN_AUTO && 
(arl_cma_total->flags & ARL_ENTRY_FLAG_FOUND) && CmaTotal)) { + do_cma = CONFIG_BOOLEAN_YES; + + static RRDSET *st_mem_cma = NULL; + static RRDDIM *rd_used = NULL, *rd_free = NULL; + + if(unlikely(!st_mem_cma)) { + st_mem_cma = rrdset_create_localhost( + "mem" + , "cma" + , NULL + , "cma" + , NULL + , "Contiguous Memory Allocator (CMA) Memory" + , "MiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_MEMINFO_NAME + , NETDATA_CHART_PRIO_MEM_CMA + , update_every + , RRDSET_TYPE_STACKED + ); + + rd_used = rrddim_add(st_mem_cma, "used", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + rd_free = rrddim_add(st_mem_cma, "free", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st_mem_cma, rd_used, CmaTotal - CmaFree); + rrddim_set_by_pointer(st_mem_cma, rd_free, CmaFree); + rrdset_done(st_mem_cma); + } + + if(do_directmap != CONFIG_BOOLEAN_NO && + ((arl_directmap4k->flags & ARL_ENTRY_FLAG_FOUND) || + (arl_directmap2m->flags & ARL_ENTRY_FLAG_FOUND) || + (arl_directmap4m->flags & ARL_ENTRY_FLAG_FOUND) || + (arl_directmap1g->flags & ARL_ENTRY_FLAG_FOUND))) + { + static RRDSET *st_mem_directmap = NULL; + static RRDDIM *rd_4k = NULL, *rd_2m = NULL, *rd_1g = NULL, *rd_4m = NULL; + + if(unlikely(!st_mem_directmap)) { + st_mem_directmap = rrdset_create_localhost( + "mem" + , "directmaps" + , NULL + , "overview" + , NULL + , "Direct Memory Mappings" + , "MiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_MEMINFO_NAME + , NETDATA_CHART_PRIO_MEM_DIRECTMAP + , update_every + , RRDSET_TYPE_STACKED + ); + + if(arl_directmap4k->flags & ARL_ENTRY_FLAG_FOUND) + rd_4k = rrddim_add(st_mem_directmap, "4k", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + + if(arl_directmap2m->flags & ARL_ENTRY_FLAG_FOUND) + rd_2m = rrddim_add(st_mem_directmap, "2m", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + + if(arl_directmap4m->flags & ARL_ENTRY_FLAG_FOUND) + rd_4m = rrddim_add(st_mem_directmap, "4m", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + + if(arl_directmap1g->flags & ARL_ENTRY_FLAG_FOUND) + rd_1g = 
rrddim_add(st_mem_directmap, "1g", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE);
+        }
+
+        if(rd_4k)
+            rrddim_set_by_pointer(st_mem_directmap, rd_4k, DirectMap4k);
+
+        if(rd_2m)
+            rrddim_set_by_pointer(st_mem_directmap, rd_2m, DirectMap2M);
+
+        if(rd_4m)
+            rrddim_set_by_pointer(st_mem_directmap, rd_4m, DirectMap4M);
+
+        if(rd_1g)
+            rrddim_set_by_pointer(st_mem_directmap, rd_1g, DirectMap1G);
+
+        rrdset_done(st_mem_directmap);
+    }
+
+    return 0;
+}
diff --git a/src/collectors/proc.plugin/proc_net_dev.c b/src/collectors/proc.plugin/proc_net_dev.c
new file mode 100644
index 000000000..d29bb7a72
--- /dev/null
+++ b/src/collectors/proc.plugin/proc_net_dev.c
@@ -0,0 +1,1788 @@
+// SPDX-License-Identifier: GPL-3.0-or-later
+
+#include "plugin_proc.h"
+#include "proc_net_dev_renames.h"
+
+#define PLUGIN_PROC_MODULE_NETDEV_NAME "/proc/net/dev"
+#define CONFIG_SECTION_PLUGIN_PROC_NETDEV "plugin:" PLUGIN_PROC_CONFIG_NAME ":" PLUGIN_PROC_MODULE_NETDEV_NAME
+
+#define RRDFUNCTIONS_NETDEV_HELP "View network interface statistics"
+
+#define STATE_LENGTH_MAX 32
+
+#define READ_RETRY_PERIOD 60 // seconds
+
+time_t double_linked_device_collect_delay_secs = 120;
+
+enum {
+    NETDEV_DUPLEX_UNKNOWN,
+    NETDEV_DUPLEX_HALF,
+    NETDEV_DUPLEX_FULL
+};
+
+static const char *get_duplex_string(int duplex) {
+    switch (duplex) {
+        case NETDEV_DUPLEX_FULL:
+            return "full";
+        case NETDEV_DUPLEX_HALF:
+            return "half";
+        default:
+            return "unknown";
+    }
+}
+
+enum {
+    NETDEV_OPERSTATE_UNKNOWN,
+    NETDEV_OPERSTATE_NOTPRESENT,
+    NETDEV_OPERSTATE_DOWN,
+    NETDEV_OPERSTATE_LOWERLAYERDOWN,
+    NETDEV_OPERSTATE_TESTING,
+    NETDEV_OPERSTATE_DORMANT,
+    NETDEV_OPERSTATE_UP
+};
+
+static inline int get_operstate(char *operstate) {
+    // As defined in https://www.kernel.org/doc/Documentation/ABI/testing/sysfs-class-net
+    if (!strcmp(operstate, "up"))
+        return NETDEV_OPERSTATE_UP;
+    if (!strcmp(operstate, "down"))
+        return NETDEV_OPERSTATE_DOWN;
+    if (!strcmp(operstate, "notpresent"))
+        return NETDEV_OPERSTATE_NOTPRESENT;
+    if
(!strcmp(operstate, "lowerlayerdown")) + return NETDEV_OPERSTATE_LOWERLAYERDOWN; + if (!strcmp(operstate, "testing")) + return NETDEV_OPERSTATE_TESTING; + if (!strcmp(operstate, "dormant")) + return NETDEV_OPERSTATE_DORMANT; + + return NETDEV_OPERSTATE_UNKNOWN; +} + +static const char *get_operstate_string(int operstate) { + switch (operstate) { + case NETDEV_OPERSTATE_UP: + return "up"; + case NETDEV_OPERSTATE_DOWN: + return "down"; + case NETDEV_OPERSTATE_NOTPRESENT: + return "notpresent"; + case NETDEV_OPERSTATE_LOWERLAYERDOWN: + return "lowerlayerdown"; + case NETDEV_OPERSTATE_TESTING: + return "testing"; + case NETDEV_OPERSTATE_DORMANT: + return "dormant"; + default: + return "unknown"; + } +} + +// ---------------------------------------------------------------------------- +// netdev list + +static struct netdev { + char *name; + uint32_t hash; + size_t len; + + // flags + bool virtual; + bool configured; + int enabled; + bool updated; + bool function_ready; + bool double_linked; // iflink != ifindex + + time_t discover_time; + + int carrier_file_exists; + time_t carrier_file_lost_time; + + int duplex_file_exists; + time_t duplex_file_lost_time; + + int speed_file_exists; + time_t speed_file_lost_time; + + int do_bandwidth; + int do_packets; + int do_errors; + int do_drops; + int do_fifo; + int do_compressed; + int do_events; + int do_speed; + int do_duplex; + int do_operstate; + int do_carrier; + int do_mtu; + + const char *chart_type_net_bytes; + const char *chart_type_net_packets; + const char *chart_type_net_errors; + const char *chart_type_net_fifo; + const char *chart_type_net_events; + const char *chart_type_net_drops; + const char *chart_type_net_compressed; + const char *chart_type_net_speed; + const char *chart_type_net_duplex; + const char *chart_type_net_operstate; + const char *chart_type_net_carrier; + const char *chart_type_net_mtu; + + const char *chart_id_net_bytes; + const char *chart_id_net_packets; + const char *chart_id_net_errors; + 
const char *chart_id_net_fifo; + const char *chart_id_net_events; + const char *chart_id_net_drops; + const char *chart_id_net_compressed; + const char *chart_id_net_speed; + const char *chart_id_net_duplex; + const char *chart_id_net_operstate; + const char *chart_id_net_carrier; + const char *chart_id_net_mtu; + + const char *chart_ctx_net_bytes; + const char *chart_ctx_net_packets; + const char *chart_ctx_net_errors; + const char *chart_ctx_net_fifo; + const char *chart_ctx_net_events; + const char *chart_ctx_net_drops; + const char *chart_ctx_net_compressed; + const char *chart_ctx_net_speed; + const char *chart_ctx_net_duplex; + const char *chart_ctx_net_operstate; + const char *chart_ctx_net_carrier; + const char *chart_ctx_net_mtu; + + const char *chart_family; + + RRDLABELS *chart_labels; + + int flipped; + unsigned long priority; + + // data collected + kernel_uint_t rbytes; + kernel_uint_t rpackets; + kernel_uint_t rerrors; + kernel_uint_t rdrops; + kernel_uint_t rfifo; + kernel_uint_t rframe; + kernel_uint_t rcompressed; + kernel_uint_t rmulticast; + + kernel_uint_t tbytes; + kernel_uint_t tpackets; + kernel_uint_t terrors; + kernel_uint_t tdrops; + kernel_uint_t tfifo; + kernel_uint_t tcollisions; + kernel_uint_t tcarrier; + kernel_uint_t tcompressed; + kernel_uint_t speed; + kernel_uint_t duplex; + kernel_uint_t operstate; + unsigned long long carrier; + unsigned long long mtu; + + // charts + RRDSET *st_bandwidth; + RRDSET *st_packets; + RRDSET *st_errors; + RRDSET *st_drops; + RRDSET *st_fifo; + RRDSET *st_compressed; + RRDSET *st_events; + RRDSET *st_speed; + RRDSET *st_duplex; + RRDSET *st_operstate; + RRDSET *st_carrier; + RRDSET *st_mtu; + + // dimensions + RRDDIM *rd_rbytes; + RRDDIM *rd_rpackets; + RRDDIM *rd_rerrors; + RRDDIM *rd_rdrops; + RRDDIM *rd_rfifo; + RRDDIM *rd_rframe; + RRDDIM *rd_rcompressed; + RRDDIM *rd_rmulticast; + + RRDDIM *rd_tbytes; + RRDDIM *rd_tpackets; + RRDDIM *rd_terrors; + RRDDIM *rd_tdrops; + RRDDIM *rd_tfifo; + RRDDIM 
*rd_tcollisions; + RRDDIM *rd_tcarrier; + RRDDIM *rd_tcompressed; + + RRDDIM *rd_speed; + RRDDIM *rd_duplex_full; + RRDDIM *rd_duplex_half; + RRDDIM *rd_duplex_unknown; + RRDDIM *rd_operstate_unknown; + RRDDIM *rd_operstate_notpresent; + RRDDIM *rd_operstate_down; + RRDDIM *rd_operstate_lowerlayerdown; + RRDDIM *rd_operstate_testing; + RRDDIM *rd_operstate_dormant; + RRDDIM *rd_operstate_up; + RRDDIM *rd_carrier_up; + RRDDIM *rd_carrier_down; + RRDDIM *rd_mtu; + + char *filename_speed; + const RRDVAR_ACQUIRED *chart_var_speed; + + char *filename_duplex; + char *filename_operstate; + char *filename_carrier; + char *filename_mtu; + + const DICTIONARY_ITEM *cgroup_netdev_link; + + struct netdev *prev, *next; +} *netdev_root = NULL; + +// ---------------------------------------------------------------------------- + +static void netdev_charts_release(struct netdev *d) { + if(d->st_bandwidth) rrdset_is_obsolete___safe_from_collector_thread(d->st_bandwidth); + if(d->st_packets) rrdset_is_obsolete___safe_from_collector_thread(d->st_packets); + if(d->st_errors) rrdset_is_obsolete___safe_from_collector_thread(d->st_errors); + if(d->st_drops) rrdset_is_obsolete___safe_from_collector_thread(d->st_drops); + if(d->st_fifo) rrdset_is_obsolete___safe_from_collector_thread(d->st_fifo); + if(d->st_compressed) rrdset_is_obsolete___safe_from_collector_thread(d->st_compressed); + if(d->st_events) rrdset_is_obsolete___safe_from_collector_thread(d->st_events); + if(d->st_speed) rrdset_is_obsolete___safe_from_collector_thread(d->st_speed); + if(d->st_duplex) rrdset_is_obsolete___safe_from_collector_thread(d->st_duplex); + if(d->st_operstate) rrdset_is_obsolete___safe_from_collector_thread(d->st_operstate); + if(d->st_carrier) rrdset_is_obsolete___safe_from_collector_thread(d->st_carrier); + if(d->st_mtu) rrdset_is_obsolete___safe_from_collector_thread(d->st_mtu); + + d->st_bandwidth = NULL; + d->st_compressed = NULL; + d->st_drops = NULL; + d->st_errors = NULL; + d->st_events = NULL; + 
d->st_fifo = NULL; + d->st_packets = NULL; + d->st_speed = NULL; + d->st_duplex = NULL; + d->st_operstate = NULL; + d->st_carrier = NULL; + d->st_mtu = NULL; + + d->rd_rbytes = NULL; + d->rd_rpackets = NULL; + d->rd_rerrors = NULL; + d->rd_rdrops = NULL; + d->rd_rfifo = NULL; + d->rd_rframe = NULL; + d->rd_rcompressed = NULL; + d->rd_rmulticast = NULL; + + d->rd_tbytes = NULL; + d->rd_tpackets = NULL; + d->rd_terrors = NULL; + d->rd_tdrops = NULL; + d->rd_tfifo = NULL; + d->rd_tcollisions = NULL; + d->rd_tcarrier = NULL; + d->rd_tcompressed = NULL; + + d->rd_speed = NULL; + d->rd_duplex_full = NULL; + d->rd_duplex_half = NULL; + d->rd_duplex_unknown = NULL; + d->rd_carrier_up = NULL; + d->rd_carrier_down = NULL; + d->rd_mtu = NULL; + + d->rd_operstate_unknown = NULL; + d->rd_operstate_notpresent = NULL; + d->rd_operstate_down = NULL; + d->rd_operstate_lowerlayerdown = NULL; + d->rd_operstate_testing = NULL; + d->rd_operstate_dormant = NULL; + d->rd_operstate_up = NULL; + + d->chart_var_speed = NULL; +} + +static void netdev_free_chart_strings(struct netdev *d) { + freez((void *)d->chart_type_net_bytes); + freez((void *)d->chart_type_net_compressed); + freez((void *)d->chart_type_net_drops); + freez((void *)d->chart_type_net_errors); + freez((void *)d->chart_type_net_events); + freez((void *)d->chart_type_net_fifo); + freez((void *)d->chart_type_net_packets); + freez((void *)d->chart_type_net_speed); + freez((void *)d->chart_type_net_duplex); + freez((void *)d->chart_type_net_operstate); + freez((void *)d->chart_type_net_carrier); + freez((void *)d->chart_type_net_mtu); + + freez((void *)d->chart_id_net_bytes); + freez((void *)d->chart_id_net_compressed); + freez((void *)d->chart_id_net_drops); + freez((void *)d->chart_id_net_errors); + freez((void *)d->chart_id_net_events); + freez((void *)d->chart_id_net_fifo); + freez((void *)d->chart_id_net_packets); + freez((void *)d->chart_id_net_speed); + freez((void *)d->chart_id_net_duplex); + freez((void 
*)d->chart_id_net_operstate); + freez((void *)d->chart_id_net_carrier); + freez((void *)d->chart_id_net_mtu); + + freez((void *)d->chart_ctx_net_bytes); + freez((void *)d->chart_ctx_net_compressed); + freez((void *)d->chart_ctx_net_drops); + freez((void *)d->chart_ctx_net_errors); + freez((void *)d->chart_ctx_net_events); + freez((void *)d->chart_ctx_net_fifo); + freez((void *)d->chart_ctx_net_packets); + freez((void *)d->chart_ctx_net_speed); + freez((void *)d->chart_ctx_net_duplex); + freez((void *)d->chart_ctx_net_operstate); + freez((void *)d->chart_ctx_net_carrier); + freez((void *)d->chart_ctx_net_mtu); + + freez((void *)d->chart_family); +} + +static void netdev_free(struct netdev *d) { + netdev_charts_release(d); + netdev_free_chart_strings(d); + rrdlabels_destroy(d->chart_labels); + cgroup_netdev_release(d->cgroup_netdev_link); + + freez((void *)d->name); + freez((void *)d->filename_speed); + freez((void *)d->filename_duplex); + freez((void *)d->filename_operstate); + freez((void *)d->filename_carrier); + freez((void *)d->filename_mtu); + freez((void *)d); +} + +static netdata_mutex_t netdev_mutex = NETDATA_MUTEX_INITIALIZER; + +// ---------------------------------------------------------------------------- + +static inline void netdev_rename(struct netdev *d, struct rename_task *r) { + collector_info("CGROUP: renaming network interface '%s' as '%s' under '%s'", d->name, r->container_device, r->container_name); + + netdev_charts_release(d); + netdev_free_chart_strings(d); + + cgroup_netdev_release(d->cgroup_netdev_link); + d->cgroup_netdev_link = cgroup_netdev_dup(r->cgroup_netdev_link); + d->discover_time = 0; + + char buffer[RRD_ID_LENGTH_MAX + 1]; + + snprintfz(buffer, RRD_ID_LENGTH_MAX, "cgroup_%s", r->container_name); + d->chart_type_net_bytes = strdupz(buffer); + d->chart_type_net_compressed = strdupz(buffer); + d->chart_type_net_drops = strdupz(buffer); + d->chart_type_net_errors = strdupz(buffer); + d->chart_type_net_events = strdupz(buffer); + 
d->chart_type_net_fifo = strdupz(buffer); + d->chart_type_net_packets = strdupz(buffer); + d->chart_type_net_speed = strdupz(buffer); + d->chart_type_net_duplex = strdupz(buffer); + d->chart_type_net_operstate = strdupz(buffer); + d->chart_type_net_carrier = strdupz(buffer); + d->chart_type_net_mtu = strdupz(buffer); + + snprintfz(buffer, RRD_ID_LENGTH_MAX, "net_%s", r->container_device); + d->chart_id_net_bytes = strdupz(buffer); + snprintfz(buffer, RRD_ID_LENGTH_MAX, "net_compressed_%s", r->container_device); + d->chart_id_net_compressed = strdupz(buffer); + snprintfz(buffer, RRD_ID_LENGTH_MAX, "net_drops_%s", r->container_device); + d->chart_id_net_drops = strdupz(buffer); + snprintfz(buffer, RRD_ID_LENGTH_MAX, "net_errors_%s", r->container_device); + d->chart_id_net_errors = strdupz(buffer); + snprintfz(buffer, RRD_ID_LENGTH_MAX, "net_events_%s", r->container_device); + d->chart_id_net_events = strdupz(buffer); + snprintfz(buffer, RRD_ID_LENGTH_MAX, "net_fifo_%s", r->container_device); + d->chart_id_net_fifo = strdupz(buffer); + snprintfz(buffer, RRD_ID_LENGTH_MAX, "net_packets_%s", r->container_device); + d->chart_id_net_packets = strdupz(buffer); + snprintfz(buffer, RRD_ID_LENGTH_MAX, "net_speed_%s", r->container_device); + d->chart_id_net_speed = strdupz(buffer); + snprintfz(buffer, RRD_ID_LENGTH_MAX, "net_duplex_%s", r->container_device); + d->chart_id_net_duplex = strdupz(buffer); + snprintfz(buffer, RRD_ID_LENGTH_MAX, "net_operstate_%s", r->container_device); + d->chart_id_net_operstate = strdupz(buffer); + snprintfz(buffer, RRD_ID_LENGTH_MAX, "net_carrier_%s", r->container_device); + d->chart_id_net_carrier = strdupz(buffer); + snprintfz(buffer, RRD_ID_LENGTH_MAX, "net_mtu_%s", r->container_device); + d->chart_id_net_mtu = strdupz(buffer); + + snprintfz(buffer, RRD_ID_LENGTH_MAX, "%scgroup.net_net", r->ctx_prefix); + d->chart_ctx_net_bytes = strdupz(buffer); + snprintfz(buffer, RRD_ID_LENGTH_MAX, "%scgroup.net_compressed", r->ctx_prefix); + 
d->chart_ctx_net_compressed = strdupz(buffer); + snprintfz(buffer, RRD_ID_LENGTH_MAX, "%scgroup.net_drops", r->ctx_prefix); + d->chart_ctx_net_drops = strdupz(buffer); + snprintfz(buffer, RRD_ID_LENGTH_MAX, "%scgroup.net_errors", r->ctx_prefix); + d->chart_ctx_net_errors = strdupz(buffer); + snprintfz(buffer, RRD_ID_LENGTH_MAX, "%scgroup.net_events", r->ctx_prefix); + d->chart_ctx_net_events = strdupz(buffer); + snprintfz(buffer, RRD_ID_LENGTH_MAX, "%scgroup.net_fifo", r->ctx_prefix); + d->chart_ctx_net_fifo = strdupz(buffer); + snprintfz(buffer, RRD_ID_LENGTH_MAX, "%scgroup.net_packets", r->ctx_prefix); + d->chart_ctx_net_packets = strdupz(buffer); + snprintfz(buffer, RRD_ID_LENGTH_MAX, "%scgroup.net_speed", r->ctx_prefix); + d->chart_ctx_net_speed = strdupz(buffer); + snprintfz(buffer, RRD_ID_LENGTH_MAX, "%scgroup.net_duplex", r->ctx_prefix); + d->chart_ctx_net_duplex = strdupz(buffer); + snprintfz(buffer, RRD_ID_LENGTH_MAX, "%scgroup.net_operstate", r->ctx_prefix); + d->chart_ctx_net_operstate = strdupz(buffer); + snprintfz(buffer, RRD_ID_LENGTH_MAX, "%scgroup.net_carrier", r->ctx_prefix); + d->chart_ctx_net_carrier = strdupz(buffer); + snprintfz(buffer, RRD_ID_LENGTH_MAX, "%scgroup.net_mtu", r->ctx_prefix); + d->chart_ctx_net_mtu = strdupz(buffer); + + d->chart_family = strdupz("net"); + + rrdlabels_copy(d->chart_labels, r->chart_labels); + rrdlabels_add(d->chart_labels, "container_device", r->container_device, RRDLABEL_SRC_AUTO); + + d->priority = NETDATA_CHART_PRIO_CGROUP_NET_IFACE; + d->flipped = 1; +} + +static void netdev_rename_this_device(struct netdev *d) { + const DICTIONARY_ITEM *item = dictionary_get_and_acquire_item(netdev_renames, d->name); + if(item) { + struct rename_task *r = dictionary_acquired_item_value(item); + netdev_rename(d, r); + dictionary_acquired_item_release(netdev_renames, item); + } +} + +// ---------------------------------------------------------------------------- + +int netdev_function_net_interfaces(BUFFER *wb, const char 
*function __maybe_unused) { + buffer_flush(wb); + wb->content_type = CT_APPLICATION_JSON; + buffer_json_initialize(wb, "\"", "\"", 0, true, BUFFER_JSON_OPTIONS_DEFAULT); + + buffer_json_member_add_string(wb, "hostname", rrdhost_hostname(localhost)); + buffer_json_member_add_uint64(wb, "status", HTTP_RESP_OK); + buffer_json_member_add_string(wb, "type", "table"); + buffer_json_member_add_time_t(wb, "update_every", 1); + buffer_json_member_add_boolean(wb, "has_history", false); + buffer_json_member_add_string(wb, "help", RRDFUNCTIONS_NETDEV_HELP); + buffer_json_member_add_array(wb, "data"); + + double max_traffic_rx = 0.0; + double max_traffic_tx = 0.0; + double max_traffic = 0.0; + double max_packets_rx = 0.0; + double max_packets_tx = 0.0; + double max_mcast_rx = 0.0; + double max_drops_rx = 0.0; + double max_drops_tx = 0.0; + + netdata_mutex_lock(&netdev_mutex); + + RRDDIM *rd = NULL; + + for (struct netdev *d = netdev_root; d ; d = d->next) { + if (unlikely(!d->function_ready)) + continue; + + buffer_json_add_array_item_array(wb); + + buffer_json_add_array_item_string(wb, d->name); + + buffer_json_add_array_item_string(wb, d->virtual ? "virtual" : "physical"); + buffer_json_add_array_item_string(wb, d->flipped ? "cgroup" : "host"); + buffer_json_add_array_item_string(wb, d->carrier == 1 ? "up" : "down"); + buffer_json_add_array_item_string(wb, get_operstate_string(d->operstate)); + buffer_json_add_array_item_string(wb, get_duplex_string(d->duplex)); + buffer_json_add_array_item_double(wb, d->speed > 0 ? d->speed : NAN); + buffer_json_add_array_item_double(wb, d->mtu > 0 ? d->mtu : NAN); + + rd = d->flipped ? d->rd_tbytes : d->rd_rbytes; + double traffic_rx = rrddim_get_last_stored_value(rd, &max_traffic_rx, 1000.0); + rd = d->flipped ? d->rd_rbytes : d->rd_tbytes; + double traffic_tx = rrddim_get_last_stored_value(rd, &max_traffic_tx, 1000.0); + + rd = d->flipped ? 
d->rd_tpackets : d->rd_rpackets;
+        double packets_rx = rrddim_get_last_stored_value(rd, &max_packets_rx, 1000.0);
+        rd = d->flipped ? d->rd_rpackets : d->rd_tpackets;
+        double packets_tx = rrddim_get_last_stored_value(rd, &max_packets_tx, 1000.0);
+
+        double mcast_rx = rrddim_get_last_stored_value(d->rd_rmulticast, &max_mcast_rx, 1000.0);
+
+        rd = d->flipped ? d->rd_tdrops : d->rd_rdrops;
+        double drops_rx = rrddim_get_last_stored_value(rd, &max_drops_rx, 1.0);
+        rd = d->flipped ? d->rd_rdrops : d->rd_tdrops;
+        double drops_tx = rrddim_get_last_stored_value(rd, &max_drops_tx, 1.0);
+
+        // FIXME: "traffic" (total) is needed only for default_sorting
+        // and can be removed once default_sorting accepts multiple columns (sum)
+        double traffic = NAN;
+        if (!isnan(traffic_rx) && !isnan(traffic_tx)) {
+            traffic = traffic_rx + traffic_tx;
+            max_traffic = MAX(max_traffic, traffic);
+        }
+
+        buffer_json_add_array_item_double(wb, traffic_rx);
+        buffer_json_add_array_item_double(wb, traffic_tx);
+        buffer_json_add_array_item_double(wb, traffic);
+        buffer_json_add_array_item_double(wb, packets_rx);
+        buffer_json_add_array_item_double(wb, packets_tx);
+        buffer_json_add_array_item_double(wb, mcast_rx);
+        buffer_json_add_array_item_double(wb, drops_rx);
+        buffer_json_add_array_item_double(wb, drops_tx);
+
+        buffer_json_add_array_item_object(wb);
+        {
+            buffer_json_member_add_string(wb, "severity", drops_rx + drops_tx > 0 ?
"warning" : "normal"); + } + buffer_json_object_close(wb); + + buffer_json_array_close(wb); + } + + netdata_mutex_unlock(&netdev_mutex); + + buffer_json_array_close(wb); // data + buffer_json_member_add_object(wb, "columns"); + { + size_t field_id = 0; + + buffer_rrdf_table_add_field(wb, field_id++, "Interface", "Network Interface Name", + RRDF_FIELD_TYPE_STRING, RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, + 0, NULL, NAN, RRDF_FIELD_SORT_ASCENDING, NULL, + RRDF_FIELD_SUMMARY_COUNT, RRDF_FIELD_FILTER_MULTISELECT, + RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_UNIQUE_KEY | RRDF_FIELD_OPTS_STICKY, + NULL); + + buffer_rrdf_table_add_field(wb, field_id++, "Type", "Network Interface Type", + RRDF_FIELD_TYPE_STRING, RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, + 0, NULL, NAN, RRDF_FIELD_SORT_ASCENDING, NULL, + RRDF_FIELD_SUMMARY_COUNT, RRDF_FIELD_FILTER_MULTISELECT, + RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_UNIQUE_KEY, + NULL); + + buffer_rrdf_table_add_field(wb, field_id++, "UsedBy", "Indicates whether the network interface is used by a cgroup or by the host system", + RRDF_FIELD_TYPE_STRING, RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, + 0, NULL, NAN, RRDF_FIELD_SORT_ASCENDING, NULL, + RRDF_FIELD_SUMMARY_COUNT, RRDF_FIELD_FILTER_MULTISELECT, + RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_UNIQUE_KEY, + NULL); + + buffer_rrdf_table_add_field(wb, field_id++, "PhState", "Current Physical State", + RRDF_FIELD_TYPE_STRING, RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, + 0, NULL, NAN, RRDF_FIELD_SORT_ASCENDING, NULL, + RRDF_FIELD_SUMMARY_COUNT, RRDF_FIELD_FILTER_MULTISELECT, + RRDF_FIELD_OPTS_VISIBLE | RRDF_FIELD_OPTS_UNIQUE_KEY, + NULL); + + buffer_rrdf_table_add_field(wb, field_id++, "OpState", "Current Operational State", + RRDF_FIELD_TYPE_STRING, RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, + 0, NULL, NAN, RRDF_FIELD_SORT_ASCENDING, NULL, + RRDF_FIELD_SUMMARY_COUNT, RRDF_FIELD_FILTER_MULTISELECT, + RRDF_FIELD_OPTS_UNIQUE_KEY, + NULL); + + 
buffer_rrdf_table_add_field(wb, field_id++, "Duplex", "Current Duplex Mode", + RRDF_FIELD_TYPE_STRING, RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NONE, + 0, NULL, NAN, RRDF_FIELD_SORT_ASCENDING, NULL, + RRDF_FIELD_SUMMARY_COUNT, RRDF_FIELD_FILTER_MULTISELECT, + RRDF_FIELD_OPTS_UNIQUE_KEY, + NULL); + + buffer_rrdf_table_add_field(wb, field_id++, "Speed", "Current Link Speed", + RRDF_FIELD_TYPE_BAR_WITH_INTEGER, RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NUMBER, + 0, "Mbit", NAN, RRDF_FIELD_SORT_DESCENDING, NULL, + RRDF_FIELD_SUMMARY_COUNT, RRDF_FIELD_FILTER_MULTISELECT, + RRDF_FIELD_OPTS_UNIQUE_KEY, + NULL); + + buffer_rrdf_table_add_field(wb, field_id++, "MTU", "Maximum Transmission Unit", + RRDF_FIELD_TYPE_BAR_WITH_INTEGER, RRDF_FIELD_VISUAL_VALUE, RRDF_FIELD_TRANSFORM_NUMBER, + 0, "Octets", NAN, RRDF_FIELD_SORT_DESCENDING, NULL, + RRDF_FIELD_SUMMARY_COUNT, RRDF_FIELD_FILTER_MULTISELECT, + RRDF_FIELD_OPTS_UNIQUE_KEY, + NULL); + + buffer_rrdf_table_add_field(wb, field_id++, "In", "Traffic Received", + RRDF_FIELD_TYPE_BAR_WITH_INTEGER, RRDF_FIELD_VISUAL_BAR, RRDF_FIELD_TRANSFORM_NUMBER, + 2, "Mbit", max_traffic_rx, RRDF_FIELD_SORT_DESCENDING, NULL, + RRDF_FIELD_SUMMARY_SUM, RRDF_FIELD_FILTER_NONE, + RRDF_FIELD_OPTS_VISIBLE, + NULL); + + buffer_rrdf_table_add_field(wb, field_id++, "Out", "Traffic Sent", + RRDF_FIELD_TYPE_BAR_WITH_INTEGER, RRDF_FIELD_VISUAL_BAR, RRDF_FIELD_TRANSFORM_NUMBER, + 2, "Mbit", max_traffic_tx, RRDF_FIELD_SORT_DESCENDING, NULL, + RRDF_FIELD_SUMMARY_SUM, RRDF_FIELD_FILTER_NONE, + RRDF_FIELD_OPTS_VISIBLE, + NULL); + + buffer_rrdf_table_add_field(wb, field_id++, "Total", "Traffic Received and Sent", + RRDF_FIELD_TYPE_BAR_WITH_INTEGER, RRDF_FIELD_VISUAL_BAR, RRDF_FIELD_TRANSFORM_NUMBER, + 2, "Mbit", max_traffic, RRDF_FIELD_SORT_DESCENDING, NULL, + RRDF_FIELD_SUMMARY_SUM, RRDF_FIELD_FILTER_NONE, + RRDF_FIELD_OPTS_NONE, + NULL); + + buffer_rrdf_table_add_field(wb, field_id++, "PktsIn", "Received Packets", + RRDF_FIELD_TYPE_BAR_WITH_INTEGER, 
RRDF_FIELD_VISUAL_BAR, RRDF_FIELD_TRANSFORM_NUMBER, + 2, "Kpps", max_packets_rx, RRDF_FIELD_SORT_DESCENDING, NULL, + RRDF_FIELD_SUMMARY_SUM, RRDF_FIELD_FILTER_NONE, + RRDF_FIELD_OPTS_VISIBLE, + NULL); + + buffer_rrdf_table_add_field(wb, field_id++, "PktsOut", "Sent Packets", + RRDF_FIELD_TYPE_BAR_WITH_INTEGER, RRDF_FIELD_VISUAL_BAR, RRDF_FIELD_TRANSFORM_NUMBER, + 2, "Kpps", max_packets_tx, RRDF_FIELD_SORT_DESCENDING, NULL, + RRDF_FIELD_SUMMARY_SUM, RRDF_FIELD_FILTER_NONE, + RRDF_FIELD_OPTS_VISIBLE, + NULL); + + buffer_rrdf_table_add_field(wb, field_id++, "McastIn", "Multicast Received Packets", + RRDF_FIELD_TYPE_BAR_WITH_INTEGER, RRDF_FIELD_VISUAL_BAR, RRDF_FIELD_TRANSFORM_NUMBER, + 2, "Kpps", max_mcast_rx, RRDF_FIELD_SORT_DESCENDING, NULL, + RRDF_FIELD_SUMMARY_SUM, RRDF_FIELD_FILTER_NONE, + RRDF_FIELD_OPTS_NONE, + NULL); + + buffer_rrdf_table_add_field(wb, field_id++, "DropsIn", "Dropped Inbound Packets", + RRDF_FIELD_TYPE_BAR_WITH_INTEGER, RRDF_FIELD_VISUAL_BAR, RRDF_FIELD_TRANSFORM_NUMBER, + 2, "Drops", max_drops_rx, RRDF_FIELD_SORT_DESCENDING, NULL, + RRDF_FIELD_SUMMARY_SUM, RRDF_FIELD_FILTER_NONE, + RRDF_FIELD_OPTS_VISIBLE, + NULL); + + buffer_rrdf_table_add_field(wb, field_id++, "DropsOut", "Dropped Outbound Packets", + RRDF_FIELD_TYPE_BAR_WITH_INTEGER, RRDF_FIELD_VISUAL_BAR, RRDF_FIELD_TRANSFORM_NUMBER, + 2, "Drops", max_drops_tx, RRDF_FIELD_SORT_DESCENDING, NULL, + RRDF_FIELD_SUMMARY_SUM, RRDF_FIELD_FILTER_NONE, + RRDF_FIELD_OPTS_VISIBLE, + NULL); + + buffer_rrdf_table_add_field( + wb, field_id++, + "rowOptions", "rowOptions", + RRDF_FIELD_TYPE_NONE, + RRDR_FIELD_VISUAL_ROW_OPTIONS, + RRDF_FIELD_TRANSFORM_NONE, 0, NULL, NAN, + RRDF_FIELD_SORT_FIXED, + NULL, + RRDF_FIELD_SUMMARY_COUNT, + RRDF_FIELD_FILTER_NONE, + RRDF_FIELD_OPTS_DUMMY, + NULL); + } + + buffer_json_object_close(wb); // columns + buffer_json_member_add_string(wb, "default_sort_column", "Total"); + + buffer_json_member_add_object(wb, "charts"); + { + buffer_json_member_add_object(wb, 
"Traffic"); + { + buffer_json_member_add_string(wb, "name", "Traffic"); + buffer_json_member_add_string(wb, "type", "stacked-bar"); + buffer_json_member_add_array(wb, "columns"); + { + buffer_json_add_array_item_string(wb, "In"); + buffer_json_add_array_item_string(wb, "Out"); + } + buffer_json_array_close(wb); + } + buffer_json_object_close(wb); + + buffer_json_member_add_object(wb, "Packets"); + { + buffer_json_member_add_string(wb, "name", "Packets"); + buffer_json_member_add_string(wb, "type", "stacked-bar"); + buffer_json_member_add_array(wb, "columns"); + { + buffer_json_add_array_item_string(wb, "PktsIn"); + buffer_json_add_array_item_string(wb, "PktsOut"); + } + buffer_json_array_close(wb); + } + buffer_json_object_close(wb); + } + buffer_json_object_close(wb); // charts + + buffer_json_member_add_array(wb, "default_charts"); + { + buffer_json_add_array_item_array(wb); + buffer_json_add_array_item_string(wb, "Traffic"); + buffer_json_add_array_item_string(wb, "Interface"); + buffer_json_array_close(wb); + + buffer_json_add_array_item_array(wb); + buffer_json_add_array_item_string(wb, "Traffic"); + buffer_json_add_array_item_string(wb, "Type"); + buffer_json_array_close(wb); + } + buffer_json_array_close(wb); + + buffer_json_member_add_object(wb, "group_by"); + { + buffer_json_member_add_object(wb, "Type"); + { + buffer_json_member_add_string(wb, "name", "Type"); + buffer_json_member_add_array(wb, "columns"); + { + buffer_json_add_array_item_string(wb, "Type"); + } + buffer_json_array_close(wb); + } + buffer_json_object_close(wb); + + buffer_json_member_add_object(wb, "UsedBy"); + { + buffer_json_member_add_string(wb, "name", "UsedBy"); + buffer_json_member_add_array(wb, "columns"); + { + buffer_json_add_array_item_string(wb, "UsedBy"); + } + buffer_json_array_close(wb); + } + buffer_json_object_close(wb); + } + buffer_json_object_close(wb); // group_by + + buffer_json_member_add_time_t(wb, "expires", now_realtime_sec() + 1); + buffer_json_finalize(wb); + + 
return HTTP_RESP_OK;
+}
+
+// netdev data collection
+
+static void netdev_cleanup() {
+    struct netdev *d = netdev_root;
+    while(d) {
+        if(unlikely(!d->updated)) {
+            struct netdev *next = d->next; // remember the next device, since d is about to be freed
+
+            DOUBLE_LINKED_LIST_REMOVE_ITEM_UNSAFE(netdev_root, d, prev, next);
+
+            netdev_free(d);
+            d = next;
+            continue;
+        }
+
+        d->updated = false;
+        d = d->next;
+    }
+}
+
+static struct netdev *get_netdev(const char *name) {
+    struct netdev *d;
+
+    uint32_t hash = simple_hash(name);
+
+    // search the list for an existing device with this name
+    for(d = netdev_root ; d ; d = d->next) {
+        if(unlikely(hash == d->hash && !strcmp(name, d->name)))
+            return d;
+    }
+
+    // not found - create a new one
+    d = callocz(1, sizeof(struct netdev));
+    d->name = strdupz(name);
+    d->hash = simple_hash(d->name);
+    d->len = strlen(d->name);
+    d->chart_labels = rrdlabels_create();
+    d->function_ready = false;
+    d->double_linked = false;
+
+    d->chart_type_net_bytes = strdupz("net");
+    d->chart_type_net_compressed = strdupz("net_compressed");
+    d->chart_type_net_drops = strdupz("net_drops");
+    d->chart_type_net_errors = strdupz("net_errors");
+    d->chart_type_net_events = strdupz("net_events");
+    d->chart_type_net_fifo = strdupz("net_fifo");
+    d->chart_type_net_packets = strdupz("net_packets");
+    d->chart_type_net_speed = strdupz("net_speed");
+    d->chart_type_net_duplex = strdupz("net_duplex");
+    d->chart_type_net_operstate = strdupz("net_operstate");
+    d->chart_type_net_carrier = strdupz("net_carrier");
+    d->chart_type_net_mtu = strdupz("net_mtu");
+
+    d->chart_id_net_bytes = strdupz(d->name);
+    d->chart_id_net_compressed = strdupz(d->name);
+    d->chart_id_net_drops = strdupz(d->name);
+    d->chart_id_net_errors = strdupz(d->name);
+    d->chart_id_net_events = strdupz(d->name);
+    d->chart_id_net_fifo = strdupz(d->name);
+    d->chart_id_net_packets = strdupz(d->name);
+    d->chart_id_net_speed = strdupz(d->name);
+    d->chart_id_net_duplex = strdupz(d->name);
+    d->chart_id_net_operstate =
strdupz(d->name); + d->chart_id_net_carrier = strdupz(d->name); + d->chart_id_net_mtu = strdupz(d->name); + + d->chart_ctx_net_bytes = strdupz("net.net"); + d->chart_ctx_net_compressed = strdupz("net.compressed"); + d->chart_ctx_net_drops = strdupz("net.drops"); + d->chart_ctx_net_errors = strdupz("net.errors"); + d->chart_ctx_net_events = strdupz("net.events"); + d->chart_ctx_net_fifo = strdupz("net.fifo"); + d->chart_ctx_net_packets = strdupz("net.packets"); + d->chart_ctx_net_speed = strdupz("net.speed"); + d->chart_ctx_net_duplex = strdupz("net.duplex"); + d->chart_ctx_net_operstate = strdupz("net.operstate"); + d->chart_ctx_net_carrier = strdupz("net.carrier"); + d->chart_ctx_net_mtu = strdupz("net.mtu"); + + d->chart_family = strdupz(d->name); + d->priority = NETDATA_CHART_PRIO_FIRST_NET_IFACE; + + DOUBLE_LINKED_LIST_APPEND_ITEM_UNSAFE(netdev_root, d, prev, next); + + return d; +} + +static bool is_iface_double_linked(struct netdev *d) { + char filename[FILENAME_MAX + 1]; + unsigned long long iflink = 0; + unsigned long long ifindex = 0; + + snprintfz(filename, FILENAME_MAX, "%s/sys/class/net/%s/iflink", netdata_configured_host_prefix, d->name); + if (read_single_number_file(filename, &iflink)) + return false; + + snprintfz(filename, FILENAME_MAX, "%s/sys/class/net/%s/ifindex", netdata_configured_host_prefix, d->name); + if (read_single_number_file(filename, &ifindex)) + return false; + + return iflink != ifindex; +} + +int do_proc_net_dev(int update_every, usec_t dt) { + (void)dt; + static SIMPLE_PATTERN *disabled_list = NULL; + static procfile *ff = NULL; + static int enable_new_interfaces = -1; + static int do_bandwidth = -1, do_packets = -1, do_errors = -1, do_drops = -1, do_fifo = -1, do_compressed = -1, + do_events = -1, do_speed = -1, do_duplex = -1, do_operstate = -1, do_carrier = -1, do_mtu = -1; + static char *path_to_sys_devices_virtual_net = NULL, *path_to_sys_class_net_speed = NULL, + *proc_net_dev_filename = NULL; + static char 
*path_to_sys_class_net_duplex = NULL; + static char *path_to_sys_class_net_operstate = NULL; + static char *path_to_sys_class_net_carrier = NULL; + static char *path_to_sys_class_net_mtu = NULL; + + if(unlikely(enable_new_interfaces == -1)) { + char filename[FILENAME_MAX + 1]; + + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, (*netdata_configured_host_prefix)?"/proc/1/net/dev":"/proc/net/dev"); + proc_net_dev_filename = config_get(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "filename to monitor", filename); + + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/devices/virtual/net/%s"); + path_to_sys_devices_virtual_net = config_get(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "path to get virtual interfaces", filename); + + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/class/net/%s/speed"); + path_to_sys_class_net_speed = config_get(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "path to get net device speed", filename); + + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/class/net/%s/duplex"); + path_to_sys_class_net_duplex = config_get(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "path to get net device duplex", filename); + + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/class/net/%s/operstate"); + path_to_sys_class_net_operstate = config_get(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "path to get net device operstate", filename); + + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/class/net/%s/carrier"); + path_to_sys_class_net_carrier = config_get(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "path to get net device carrier", filename); + + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/class/net/%s/mtu"); + path_to_sys_class_net_mtu = config_get(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "path to get net device mtu", filename); + + + enable_new_interfaces = 
config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "enable new interfaces detected at runtime", CONFIG_BOOLEAN_AUTO); + + do_bandwidth = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "bandwidth for all interfaces", CONFIG_BOOLEAN_AUTO); + do_packets = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "packets for all interfaces", CONFIG_BOOLEAN_AUTO); + do_errors = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "errors for all interfaces", CONFIG_BOOLEAN_AUTO); + do_drops = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "drops for all interfaces", CONFIG_BOOLEAN_AUTO); + do_fifo = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "fifo for all interfaces", CONFIG_BOOLEAN_AUTO); + do_compressed = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "compressed packets for all interfaces", CONFIG_BOOLEAN_AUTO); + do_events = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "frames, collisions, carrier counters for all interfaces", CONFIG_BOOLEAN_AUTO); + do_speed = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "speed for all interfaces", CONFIG_BOOLEAN_AUTO); + do_duplex = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "duplex for all interfaces", CONFIG_BOOLEAN_AUTO); + do_operstate = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "operstate for all interfaces", CONFIG_BOOLEAN_AUTO); + do_carrier = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "carrier for all interfaces", CONFIG_BOOLEAN_AUTO); + do_mtu = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "mtu for all interfaces", CONFIG_BOOLEAN_AUTO); + + disabled_list = simple_pattern_create( + config_get(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "disable by default interfaces matching", + "lo fireqos* *-ifb fwpr* fwbr* fwln*"), NULL, SIMPLE_PATTERN_EXACT, true); + + netdev_renames_init(); + } + + if(unlikely(!ff)) { + ff = 
procfile_open(proc_net_dev_filename, " \t,|", PROCFILE_FLAG_DEFAULT); + if(unlikely(!ff)) return 1; + } + + ff = procfile_readall(ff); + if(unlikely(!ff)) return 0; // we return 0, so that we will retry to open it next time + + kernel_uint_t system_rbytes = 0; + kernel_uint_t system_tbytes = 0; + + time_t now = now_realtime_sec(); + + size_t lines = procfile_lines(ff), l; + for(l = 2; l < lines ;l++) { + // require 17 words on each line + if(unlikely(procfile_linewords(ff, l) < 17)) continue; + + char *name = procfile_lineword(ff, l, 0); + size_t len = strlen(name); + if(name[len - 1] == ':') name[len - 1] = '\0'; + + struct netdev *d = get_netdev(name); + d->updated = true; + + if(unlikely(!d->configured)) { + // the first time we see this interface + + // remember we configured it + d->configured = true; + d->discover_time = now; + + d->enabled = enable_new_interfaces; + + if(d->enabled) + d->enabled = !simple_pattern_matches(disabled_list, d->name); + + char buf[FILENAME_MAX + 1]; + snprintfz(buf, FILENAME_MAX, path_to_sys_devices_virtual_net, d->name); + + d->virtual = likely(access(buf, R_OK) == 0) ? true : false; + + // At least on Proxmox inside LXC: eth0 is virtual. 
+ // Virtual interfaces are not taken into account in system.net calculations + if (inside_lxc_container && d->virtual && strncmp(d->name, "eth", 3) == 0) + d->virtual = false; + + if (d->virtual) + rrdlabels_add(d->chart_labels, "interface_type", "virtual", RRDLABEL_SRC_AUTO); + else + rrdlabels_add(d->chart_labels, "interface_type", "real", RRDLABEL_SRC_AUTO); + + rrdlabels_add(d->chart_labels, "device", name, RRDLABEL_SRC_AUTO); + + if(likely(!d->virtual)) { + // set the filename to get the interface speed + snprintfz(buf, FILENAME_MAX, path_to_sys_class_net_speed, d->name); + d->filename_speed = strdupz(buf); + + snprintfz(buf, FILENAME_MAX, path_to_sys_class_net_duplex, d->name); + d->filename_duplex = strdupz(buf); + } + + snprintfz(buf, FILENAME_MAX, path_to_sys_class_net_operstate, d->name); + d->filename_operstate = strdupz(buf); + + snprintfz(buf, FILENAME_MAX, path_to_sys_class_net_carrier, d->name); + d->filename_carrier = strdupz(buf); + + snprintfz(buf, FILENAME_MAX, path_to_sys_class_net_mtu, d->name); + d->filename_mtu = strdupz(buf); + + snprintfz(buf, FILENAME_MAX, "plugin:proc:/proc/net/dev:%s", d->name); + + if (config_exists(buf, "enabled")) + d->enabled = config_get_boolean_ondemand(buf, "enabled", d->enabled); + if (config_exists(buf, "virtual")) + d->virtual = config_get_boolean(buf, "virtual", d->virtual); + + if(d->enabled == CONFIG_BOOLEAN_NO) + continue; + + d->double_linked = is_iface_double_linked(d); + + d->do_bandwidth = do_bandwidth; + d->do_packets = do_packets; + d->do_errors = do_errors; + d->do_drops = do_drops; + d->do_fifo = do_fifo; + d->do_compressed = do_compressed; + d->do_events = do_events; + d->do_speed = do_speed; + d->do_duplex = do_duplex; + d->do_operstate = do_operstate; + d->do_carrier = do_carrier; + d->do_mtu = do_mtu; + + if (config_exists(buf, "bandwidth")) + d->do_bandwidth = config_get_boolean_ondemand(buf, "bandwidth", do_bandwidth); + if (config_exists(buf, "packets")) + d->do_packets = 
config_get_boolean_ondemand(buf, "packets", do_packets); + if (config_exists(buf, "errors")) + d->do_errors = config_get_boolean_ondemand(buf, "errors", do_errors); + if (config_exists(buf, "drops")) + d->do_drops = config_get_boolean_ondemand(buf, "drops", do_drops); + if (config_exists(buf, "fifo")) + d->do_fifo = config_get_boolean_ondemand(buf, "fifo", do_fifo); + if (config_exists(buf, "compressed")) + d->do_compressed = config_get_boolean_ondemand(buf, "compressed", do_compressed); + if (config_exists(buf, "events")) + d->do_events = config_get_boolean_ondemand(buf, "events", do_events); + if (config_exists(buf, "speed")) + d->do_speed = config_get_boolean_ondemand(buf, "speed", do_speed); + if (config_exists(buf, "duplex")) + d->do_duplex = config_get_boolean_ondemand(buf, "duplex", do_duplex); + if (config_exists(buf, "operstate")) + d->do_operstate = config_get_boolean_ondemand(buf, "operstate", do_operstate); + if (config_exists(buf, "carrier")) + d->do_carrier = config_get_boolean_ondemand(buf, "carrier", do_carrier); + if (config_exists(buf, "mtu")) + d->do_mtu = config_get_boolean_ondemand(buf, "mtu", do_mtu); + } + + if(unlikely(!d->enabled)) + continue; + + if(!d->cgroup_netdev_link) + netdev_rename_this_device(d); + + // See https://github.com/netdata/netdata/issues/15206 + // This is necessary to prevent the creation of charts for virtual interfaces that will later be + // recreated as container interfaces (create container) or + // rediscovered and recreated only to be deleted almost immediately (stop/remove container) + if (d->double_linked && d->virtual && (now - d->discover_time < double_linked_device_collect_delay_secs)) + continue; + + if(likely(d->do_bandwidth != CONFIG_BOOLEAN_NO || !d->virtual)) { + d->rbytes = str2kernel_uint_t(procfile_lineword(ff, l, 1)); + d->tbytes = str2kernel_uint_t(procfile_lineword(ff, l, 9)); + + if(likely(!d->virtual)) { + system_rbytes += d->rbytes; + system_tbytes += d->tbytes; + } + } + + 
if(likely(d->do_packets != CONFIG_BOOLEAN_NO)) { + d->rpackets = str2kernel_uint_t(procfile_lineword(ff, l, 2)); + d->rmulticast = str2kernel_uint_t(procfile_lineword(ff, l, 8)); + d->tpackets = str2kernel_uint_t(procfile_lineword(ff, l, 10)); + } + + if(likely(d->do_errors != CONFIG_BOOLEAN_NO)) { + d->rerrors = str2kernel_uint_t(procfile_lineword(ff, l, 3)); + d->terrors = str2kernel_uint_t(procfile_lineword(ff, l, 11)); + } + + if(likely(d->do_drops != CONFIG_BOOLEAN_NO)) { + d->rdrops = str2kernel_uint_t(procfile_lineword(ff, l, 4)); + d->tdrops = str2kernel_uint_t(procfile_lineword(ff, l, 12)); + } + + if(likely(d->do_fifo != CONFIG_BOOLEAN_NO)) { + d->rfifo = str2kernel_uint_t(procfile_lineword(ff, l, 5)); + d->tfifo = str2kernel_uint_t(procfile_lineword(ff, l, 13)); + } + + if(likely(d->do_compressed != CONFIG_BOOLEAN_NO)) { + d->rcompressed = str2kernel_uint_t(procfile_lineword(ff, l, 7)); + d->tcompressed = str2kernel_uint_t(procfile_lineword(ff, l, 16)); + } + + if(likely(d->do_events != CONFIG_BOOLEAN_NO)) { + d->rframe = str2kernel_uint_t(procfile_lineword(ff, l, 6)); + d->tcollisions = str2kernel_uint_t(procfile_lineword(ff, l, 14)); + d->tcarrier = str2kernel_uint_t(procfile_lineword(ff, l, 15)); + } + + if ((d->do_carrier != CONFIG_BOOLEAN_NO || + d->do_duplex != CONFIG_BOOLEAN_NO || + d->do_speed != CONFIG_BOOLEAN_NO) && + d->filename_carrier && + (d->carrier_file_exists || + now_monotonic_sec() - d->carrier_file_lost_time > READ_RETRY_PERIOD)) { + if (read_single_number_file(d->filename_carrier, &d->carrier)) { + if (d->carrier_file_exists) + collector_error( + "Cannot refresh interface %s carrier state by reading '%s'. 
Next update is in %d seconds.", + d->name, + d->filename_carrier, + READ_RETRY_PERIOD); + d->carrier_file_exists = 0; + d->carrier_file_lost_time = now_monotonic_sec(); + } else { + d->carrier_file_exists = 1; + d->carrier_file_lost_time = 0; + } + } + + if (d->do_duplex != CONFIG_BOOLEAN_NO && + d->filename_duplex && + (d->carrier || d->carrier_file_exists) && + (d->duplex_file_exists || + now_monotonic_sec() - d->duplex_file_lost_time > READ_RETRY_PERIOD)) { + char buffer[STATE_LENGTH_MAX + 1]; + + if (read_txt_file(d->filename_duplex, buffer, sizeof(buffer))) { + if (d->duplex_file_exists) + collector_error("Cannot refresh interface %s duplex state by reading '%s'.", d->name, d->filename_duplex); + d->duplex_file_exists = 0; + d->duplex_file_lost_time = now_monotonic_sec(); + d->duplex = NETDEV_DUPLEX_UNKNOWN; + } else { + // values can be unknown, half or full -- just check the first letter for speed + if (buffer[0] == 'f') + d->duplex = NETDEV_DUPLEX_FULL; + else if (buffer[0] == 'h') + d->duplex = NETDEV_DUPLEX_HALF; + else + d->duplex = NETDEV_DUPLEX_UNKNOWN; + d->duplex_file_exists = 1; + d->duplex_file_lost_time = 0; + } + } else { + d->duplex = NETDEV_DUPLEX_UNKNOWN; + } + + if(d->do_operstate != CONFIG_BOOLEAN_NO && d->filename_operstate) { + char buffer[STATE_LENGTH_MAX + 1], *trimmed_buffer; + + if (read_txt_file(d->filename_operstate, buffer, sizeof(buffer))) { + collector_error( + "Cannot refresh %s operstate by reading '%s'. Will not update its status anymore.", + d->name, d->filename_operstate); + freez(d->filename_operstate); + d->filename_operstate = NULL; + } else { + trimmed_buffer = trim(buffer); + d->operstate = get_operstate(trimmed_buffer); + } + } + + if (d->do_mtu != CONFIG_BOOLEAN_NO && d->filename_mtu) { + if (read_single_number_file(d->filename_mtu, &d->mtu)) { + collector_error( + "Cannot refresh mtu for interface %s by reading '%s'. 
Stop updating it.", d->name, d->filename_mtu); + freez(d->filename_mtu); + d->filename_mtu = NULL; + } + } + + //collector_info("PROC_NET_DEV: %s speed %zu, bytes %zu/%zu, packets %zu/%zu/%zu, errors %zu/%zu, drops %zu/%zu, fifo %zu/%zu, compressed %zu/%zu, rframe %zu, tcollisions %zu, tcarrier %zu" + // , d->name, d->speed + // , d->rbytes, d->tbytes + // , d->rpackets, d->tpackets, d->rmulticast + // , d->rerrors, d->terrors + // , d->rdrops, d->tdrops + // , d->rfifo, d->tfifo + // , d->rcompressed, d->tcompressed + // , d->rframe, d->tcollisions, d->tcarrier + // ); + + if(unlikely(d->do_bandwidth == CONFIG_BOOLEAN_AUTO && + (d->rbytes || d->tbytes || netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) + d->do_bandwidth = CONFIG_BOOLEAN_YES; + + if(d->do_bandwidth == CONFIG_BOOLEAN_YES) { + if(unlikely(!d->st_bandwidth)) { + + d->st_bandwidth = rrdset_create_localhost( + d->chart_type_net_bytes + , d->chart_id_net_bytes + , NULL + , d->chart_family + , d->chart_ctx_net_bytes + , "Bandwidth" + , "kilobits/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETDEV_NAME + , d->priority + , update_every + , RRDSET_TYPE_AREA + ); + + rrdset_update_rrdlabels(d->st_bandwidth, d->chart_labels); + + d->rd_rbytes = rrddim_add(d->st_bandwidth, "received", NULL, 8, BITS_IN_A_KILOBIT, RRD_ALGORITHM_INCREMENTAL); + d->rd_tbytes = rrddim_add(d->st_bandwidth, "sent", NULL, -8, BITS_IN_A_KILOBIT, RRD_ALGORITHM_INCREMENTAL); + + if(d->flipped) { + // flip receive/transmit + + RRDDIM *td = d->rd_rbytes; + d->rd_rbytes = d->rd_tbytes; + d->rd_tbytes = td; + } + } + + rrddim_set_by_pointer(d->st_bandwidth, d->rd_rbytes, (collected_number)d->rbytes); + rrddim_set_by_pointer(d->st_bandwidth, d->rd_tbytes, (collected_number)d->tbytes); + rrdset_done(d->st_bandwidth); + + if(d->cgroup_netdev_link) + cgroup_netdev_add_bandwidth(d->cgroup_netdev_link, + d->flipped ? d->rd_tbytes->collector.last_stored_value : -d->rd_rbytes->collector.last_stored_value, + d->flipped ? 
-d->rd_rbytes->collector.last_stored_value : d->rd_tbytes->collector.last_stored_value); + + if(unlikely(!d->chart_var_speed)) { + d->chart_var_speed = rrdvar_chart_variable_add_and_acquire(d->st_bandwidth, "nic_speed_max"); + if(!d->chart_var_speed) { + collector_error( + "Cannot create interface %s chart variable 'nic_speed_max'. Will not update its speed anymore.", + d->name); + } + else { + rrdvar_chart_variable_set(d->st_bandwidth, d->chart_var_speed, NAN); + } + } + + // update the interface speed + if(d->filename_speed) { + if (d->filename_speed && d->chart_var_speed) { + int ret = 0; + + if ((d->carrier || d->carrier_file_exists) && + (d->speed_file_exists || now_monotonic_sec() - d->speed_file_lost_time > READ_RETRY_PERIOD)) { + ret = read_single_number_file(d->filename_speed, (unsigned long long *) &d->speed); + } else { + d->speed = 0; // TODO: this is wrong, shouldn't use 0 value, but NULL. + } + + if(ret) { + if (d->speed_file_exists) + collector_error("Cannot refresh interface %s speed by reading '%s'.", d->name, d->filename_speed); + d->speed_file_exists = 0; + d->speed_file_lost_time = now_monotonic_sec(); + } + else { + if(d->do_speed != CONFIG_BOOLEAN_NO) { + if(unlikely(!d->st_speed)) { + d->st_speed = rrdset_create_localhost( + d->chart_type_net_speed + , d->chart_id_net_speed + , NULL + , d->chart_family + , d->chart_ctx_net_speed + , "Interface Speed" + , "kilobits/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETDEV_NAME + , d->priority + 7 + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(d->st_speed, RRDSET_FLAG_DETAIL); + + rrdset_update_rrdlabels(d->st_speed, d->chart_labels); + + d->rd_speed = rrddim_add(d->st_speed, "speed", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(d->st_speed, d->rd_speed, (collected_number)d->speed * KILOBITS_IN_A_MEGABIT); + rrdset_done(d->st_speed); + } + + rrdvar_chart_variable_set( + d->st_bandwidth, d->chart_var_speed, (NETDATA_DOUBLE)d->speed * KILOBITS_IN_A_MEGABIT); + + 
if (d->speed) { + d->speed_file_exists = 1; + d->speed_file_lost_time = 0; + } + } + } + } + } + + if(d->do_duplex != CONFIG_BOOLEAN_NO && d->filename_duplex) { + if(unlikely(!d->st_duplex)) { + d->st_duplex = rrdset_create_localhost( + d->chart_type_net_duplex + , d->chart_id_net_duplex + , NULL + , d->chart_family + , d->chart_ctx_net_duplex + , "Interface Duplex State" + , "state" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETDEV_NAME + , d->priority + 8 + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(d->st_duplex, RRDSET_FLAG_DETAIL); + + rrdset_update_rrdlabels(d->st_duplex, d->chart_labels); + + d->rd_duplex_full = rrddim_add(d->st_duplex, "full", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + d->rd_duplex_half = rrddim_add(d->st_duplex, "half", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + d->rd_duplex_unknown = rrddim_add(d->st_duplex, "unknown", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(d->st_duplex, d->rd_duplex_full, (collected_number)(d->duplex == NETDEV_DUPLEX_FULL)); + rrddim_set_by_pointer(d->st_duplex, d->rd_duplex_half, (collected_number)(d->duplex == NETDEV_DUPLEX_HALF)); + rrddim_set_by_pointer(d->st_duplex, d->rd_duplex_unknown, (collected_number)(d->duplex == NETDEV_DUPLEX_UNKNOWN)); + rrdset_done(d->st_duplex); + } + + if(d->do_operstate != CONFIG_BOOLEAN_NO && d->filename_operstate) { + if(unlikely(!d->st_operstate)) { + d->st_operstate = rrdset_create_localhost( + d->chart_type_net_operstate + , d->chart_id_net_operstate + , NULL + , d->chart_family + , d->chart_ctx_net_operstate + , "Interface Operational State" + , "state" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETDEV_NAME + , d->priority + 9 + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(d->st_operstate, RRDSET_FLAG_DETAIL); + + rrdset_update_rrdlabels(d->st_operstate, d->chart_labels); + + d->rd_operstate_up = rrddim_add(d->st_operstate, "up", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + d->rd_operstate_down = rrddim_add(d->st_operstate, "down", 
NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + d->rd_operstate_notpresent = rrddim_add(d->st_operstate, "notpresent", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + d->rd_operstate_lowerlayerdown = rrddim_add(d->st_operstate, "lowerlayerdown", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + d->rd_operstate_testing = rrddim_add(d->st_operstate, "testing", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + d->rd_operstate_dormant = rrddim_add(d->st_operstate, "dormant", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + d->rd_operstate_unknown = rrddim_add(d->st_operstate, "unknown", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(d->st_operstate, d->rd_operstate_up, (collected_number)(d->operstate == NETDEV_OPERSTATE_UP)); + rrddim_set_by_pointer(d->st_operstate, d->rd_operstate_down, (collected_number)(d->operstate == NETDEV_OPERSTATE_DOWN)); + rrddim_set_by_pointer(d->st_operstate, d->rd_operstate_notpresent, (collected_number)(d->operstate == NETDEV_OPERSTATE_NOTPRESENT)); + rrddim_set_by_pointer(d->st_operstate, d->rd_operstate_lowerlayerdown, (collected_number)(d->operstate == NETDEV_OPERSTATE_LOWERLAYERDOWN)); + rrddim_set_by_pointer(d->st_operstate, d->rd_operstate_testing, (collected_number)(d->operstate == NETDEV_OPERSTATE_TESTING)); + rrddim_set_by_pointer(d->st_operstate, d->rd_operstate_dormant, (collected_number)(d->operstate == NETDEV_OPERSTATE_DORMANT)); + rrddim_set_by_pointer(d->st_operstate, d->rd_operstate_unknown, (collected_number)(d->operstate == NETDEV_OPERSTATE_UNKNOWN)); + rrdset_done(d->st_operstate); + } + + if(d->do_carrier != CONFIG_BOOLEAN_NO && d->carrier_file_exists) { + if(unlikely(!d->st_carrier)) { + d->st_carrier = rrdset_create_localhost( + d->chart_type_net_carrier + , d->chart_id_net_carrier + , NULL + , d->chart_family + , d->chart_ctx_net_carrier + , "Interface Physical Link State" + , "state" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETDEV_NAME + , d->priority + 10 + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(d->st_carrier, 
RRDSET_FLAG_DETAIL); + + rrdset_update_rrdlabels(d->st_carrier, d->chart_labels); + + d->rd_carrier_up = rrddim_add(d->st_carrier, "up", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + d->rd_carrier_down = rrddim_add(d->st_carrier, "down", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(d->st_carrier, d->rd_carrier_up, (collected_number)(d->carrier == 1)); + rrddim_set_by_pointer(d->st_carrier, d->rd_carrier_down, (collected_number)(d->carrier != 1)); + rrdset_done(d->st_carrier); + } + + if(d->do_mtu != CONFIG_BOOLEAN_NO && d->filename_mtu) { + if(unlikely(!d->st_mtu)) { + d->st_mtu = rrdset_create_localhost( + d->chart_type_net_mtu + , d->chart_id_net_mtu + , NULL + , d->chart_family + , d->chart_ctx_net_mtu + , "Interface MTU" + , "octets" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETDEV_NAME + , d->priority + 11 + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(d->st_mtu, RRDSET_FLAG_DETAIL); + + rrdset_update_rrdlabels(d->st_mtu, d->chart_labels); + + d->rd_mtu = rrddim_add(d->st_mtu, "mtu", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(d->st_mtu, d->rd_mtu, (collected_number)d->mtu); + rrdset_done(d->st_mtu); + } + + if(unlikely(d->do_packets == CONFIG_BOOLEAN_AUTO && + (d->rpackets || d->tpackets || d->rmulticast || netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) + d->do_packets = CONFIG_BOOLEAN_YES; + + if(d->do_packets == CONFIG_BOOLEAN_YES) { + if(unlikely(!d->st_packets)) { + + d->st_packets = rrdset_create_localhost( + d->chart_type_net_packets + , d->chart_id_net_packets + , NULL + , d->chart_family + , d->chart_ctx_net_packets + , "Packets" + , "packets/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETDEV_NAME + , d->priority + 1 + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(d->st_packets, RRDSET_FLAG_DETAIL); + + rrdset_update_rrdlabels(d->st_packets, d->chart_labels); + + d->rd_rpackets = rrddim_add(d->st_packets, "received", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + 
d->rd_tpackets = rrddim_add(d->st_packets, "sent", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + d->rd_rmulticast = rrddim_add(d->st_packets, "multicast", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + + if(d->flipped) { + // flip receive/transmit + + RRDDIM *td = d->rd_rpackets; + d->rd_rpackets = d->rd_tpackets; + d->rd_tpackets = td; + } + } + + rrddim_set_by_pointer(d->st_packets, d->rd_rpackets, (collected_number)d->rpackets); + rrddim_set_by_pointer(d->st_packets, d->rd_tpackets, (collected_number)d->tpackets); + rrddim_set_by_pointer(d->st_packets, d->rd_rmulticast, (collected_number)d->rmulticast); + rrdset_done(d->st_packets); + } + + if(unlikely(d->do_errors == CONFIG_BOOLEAN_AUTO && + (d->rerrors || d->terrors || netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) + d->do_errors = CONFIG_BOOLEAN_YES; + + if(d->do_errors == CONFIG_BOOLEAN_YES) { + if(unlikely(!d->st_errors)) { + + d->st_errors = rrdset_create_localhost( + d->chart_type_net_errors + , d->chart_id_net_errors + , NULL + , d->chart_family + , d->chart_ctx_net_errors + , "Interface Errors" + , "errors/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETDEV_NAME + , d->priority + 2 + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(d->st_errors, RRDSET_FLAG_DETAIL); + + rrdset_update_rrdlabels(d->st_errors, d->chart_labels); + + d->rd_rerrors = rrddim_add(d->st_errors, "inbound", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + d->rd_terrors = rrddim_add(d->st_errors, "outbound", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + + if(d->flipped) { + // flip receive/transmit + + RRDDIM *td = d->rd_rerrors; + d->rd_rerrors = d->rd_terrors; + d->rd_terrors = td; + } + } + + rrddim_set_by_pointer(d->st_errors, d->rd_rerrors, (collected_number)d->rerrors); + rrddim_set_by_pointer(d->st_errors, d->rd_terrors, (collected_number)d->terrors); + rrdset_done(d->st_errors); + } + + if(unlikely(d->do_drops == CONFIG_BOOLEAN_AUTO && + (d->rdrops || d->tdrops || netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) + 
d->do_drops = CONFIG_BOOLEAN_YES; + + if(d->do_drops == CONFIG_BOOLEAN_YES) { + if(unlikely(!d->st_drops)) { + + d->st_drops = rrdset_create_localhost( + d->chart_type_net_drops + , d->chart_id_net_drops + , NULL + , d->chart_family + , d->chart_ctx_net_drops + , "Interface Drops" + , "drops/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETDEV_NAME + , d->priority + 3 + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(d->st_drops, RRDSET_FLAG_DETAIL); + + rrdset_update_rrdlabels(d->st_drops, d->chart_labels); + + d->rd_rdrops = rrddim_add(d->st_drops, "inbound", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + d->rd_tdrops = rrddim_add(d->st_drops, "outbound", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + + if(d->flipped) { + // flip receive/transmit + + RRDDIM *td = d->rd_rdrops; + d->rd_rdrops = d->rd_tdrops; + d->rd_tdrops = td; + } + } + + rrddim_set_by_pointer(d->st_drops, d->rd_rdrops, (collected_number)d->rdrops); + rrddim_set_by_pointer(d->st_drops, d->rd_tdrops, (collected_number)d->tdrops); + rrdset_done(d->st_drops); + } + + if(unlikely(d->do_fifo == CONFIG_BOOLEAN_AUTO && + (d->rfifo || d->tfifo || netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) + d->do_fifo = CONFIG_BOOLEAN_YES; + + if(d->do_fifo == CONFIG_BOOLEAN_YES) { + if(unlikely(!d->st_fifo)) { + + d->st_fifo = rrdset_create_localhost( + d->chart_type_net_fifo + , d->chart_id_net_fifo + , NULL + , d->chart_family + , d->chart_ctx_net_fifo + , "Interface FIFO Buffer Errors" + , "errors" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETDEV_NAME + , d->priority + 4 + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(d->st_fifo, RRDSET_FLAG_DETAIL); + + rrdset_update_rrdlabels(d->st_fifo, d->chart_labels); + + d->rd_rfifo = rrddim_add(d->st_fifo, "receive", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + d->rd_tfifo = rrddim_add(d->st_fifo, "transmit", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + + if(d->flipped) { + // flip receive/transmit + + RRDDIM *td = d->rd_rfifo; + d->rd_rfifo = 
d->rd_tfifo; + d->rd_tfifo = td; + } + } + + rrddim_set_by_pointer(d->st_fifo, d->rd_rfifo, (collected_number)d->rfifo); + rrddim_set_by_pointer(d->st_fifo, d->rd_tfifo, (collected_number)d->tfifo); + rrdset_done(d->st_fifo); + } + + if(unlikely(d->do_compressed == CONFIG_BOOLEAN_AUTO && + (d->rcompressed || d->tcompressed || netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) + d->do_compressed = CONFIG_BOOLEAN_YES; + + if(d->do_compressed == CONFIG_BOOLEAN_YES) { + if(unlikely(!d->st_compressed)) { + + d->st_compressed = rrdset_create_localhost( + d->chart_type_net_compressed + , d->chart_id_net_compressed + , NULL + , d->chart_family + , d->chart_ctx_net_compressed + , "Compressed Packets" + , "packets/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETDEV_NAME + , d->priority + 5 + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(d->st_compressed, RRDSET_FLAG_DETAIL); + + rrdset_update_rrdlabels(d->st_compressed, d->chart_labels); + + d->rd_rcompressed = rrddim_add(d->st_compressed, "received", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + d->rd_tcompressed = rrddim_add(d->st_compressed, "sent", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + + if(d->flipped) { + // flip receive/transmit + + RRDDIM *td = d->rd_rcompressed; + d->rd_rcompressed = d->rd_tcompressed; + d->rd_tcompressed = td; + } + } + + rrddim_set_by_pointer(d->st_compressed, d->rd_rcompressed, (collected_number)d->rcompressed); + rrddim_set_by_pointer(d->st_compressed, d->rd_tcompressed, (collected_number)d->tcompressed); + rrdset_done(d->st_compressed); + } + + if(unlikely(d->do_events == CONFIG_BOOLEAN_AUTO && + (d->rframe || d->tcollisions || d->tcarrier || netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) + d->do_events = CONFIG_BOOLEAN_YES; + + if(d->do_events == CONFIG_BOOLEAN_YES) { + if(unlikely(!d->st_events)) { + + d->st_events = rrdset_create_localhost( + d->chart_type_net_events + , d->chart_id_net_events + , NULL + , d->chart_family + , d->chart_ctx_net_events + , "Network 
Interface Events" + , "events/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETDEV_NAME + , d->priority + 6 + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(d->st_events, RRDSET_FLAG_DETAIL); + + rrdset_update_rrdlabels(d->st_events, d->chart_labels); + + d->rd_rframe = rrddim_add(d->st_events, "frames", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + d->rd_tcollisions = rrddim_add(d->st_events, "collisions", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + d->rd_tcarrier = rrddim_add(d->st_events, "carrier", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(d->st_events, d->rd_rframe, (collected_number)d->rframe); + rrddim_set_by_pointer(d->st_events, d->rd_tcollisions, (collected_number)d->tcollisions); + rrddim_set_by_pointer(d->st_events, d->rd_tcarrier, (collected_number)d->tcarrier); + rrdset_done(d->st_events); + } + + d->function_ready = true; + } + + if(do_bandwidth == CONFIG_BOOLEAN_YES || (do_bandwidth == CONFIG_BOOLEAN_AUTO && + (system_rbytes || system_tbytes || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_bandwidth = CONFIG_BOOLEAN_YES; + static RRDSET *st_system_net = NULL; + static RRDDIM *rd_in = NULL, *rd_out = NULL; + + if(unlikely(!st_system_net)) { + st_system_net = rrdset_create_localhost( + "system" + , "net" + , NULL + , "network" + , NULL + , "Physical Network Interfaces Aggregated Bandwidth" + , "kilobits/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETDEV_NAME + , NETDATA_CHART_PRIO_SYSTEM_NET + , update_every + , RRDSET_TYPE_AREA + ); + + rd_in = rrddim_add(st_system_net, "InOctets", "received", 8, BITS_IN_A_KILOBIT, RRD_ALGORITHM_INCREMENTAL); + rd_out = rrddim_add(st_system_net, "OutOctets", "sent", -8, BITS_IN_A_KILOBIT, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_system_net, rd_in, (collected_number)system_rbytes); + rrddim_set_by_pointer(st_system_net, rd_out, (collected_number)system_tbytes); + + rrdset_done(st_system_net); + } + + netdev_cleanup(); + + return 0; +} + 
+static void netdev_main_cleanup(void *ptr) +{ + UNUSED(ptr); + + collector_info("cleaning up..."); + + worker_unregister(); +} + +void *netdev_main(void *ptr) +{ + worker_register("NETDEV"); + worker_register_job_name(0, "netdev"); + + if (getenv("KUBERNETES_SERVICE_HOST") != NULL && getenv("KUBERNETES_SERVICE_PORT") != NULL) + double_linked_device_collect_delay_secs = 300; + + rrd_function_add_inline(localhost, NULL, "network-interfaces", 10, + RRDFUNCTIONS_PRIORITY_DEFAULT, RRDFUNCTIONS_NETDEV_HELP, + "top", HTTP_ACCESS_ANONYMOUS_DATA, + netdev_function_net_interfaces); + + netdata_thread_cleanup_push(netdev_main_cleanup, ptr) { + usec_t step = localhost->rrd_update_every * USEC_PER_SEC; + heartbeat_t hb; + heartbeat_init(&hb); + + while (service_running(SERVICE_COLLECTORS)) { + worker_is_idle(); + usec_t hb_dt = heartbeat_next(&hb, step); + + if (unlikely(!service_running(SERVICE_COLLECTORS))) + break; + + cgroup_netdev_reset_all(); + + worker_is_busy(0); + + netdata_mutex_lock(&netdev_mutex); + int rc = do_proc_net_dev(localhost->rrd_update_every, hb_dt); + netdata_mutex_unlock(&netdev_mutex); + + // never break out of the loop while holding netdev_mutex + if (rc) + break; + } + } + netdata_thread_cleanup_pop(1); + + return NULL; +} diff --git a/src/collectors/proc.plugin/proc_net_dev_renames.c b/src/collectors/proc.plugin/proc_net_dev_renames.c new file mode 100644 index 000000000..fb50ce66c --- /dev/null +++ b/src/collectors/proc.plugin/proc_net_dev_renames.c @@ -0,0 +1,53 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "proc_net_dev_renames.h" + +DICTIONARY *netdev_renames = NULL; + +static void dictionary_netdev_rename_delete_cb(const DICTIONARY_ITEM *item __maybe_unused, void *value, void *data __maybe_unused) { + struct rename_task *r = value; + + cgroup_netdev_release(r->cgroup_netdev_link); + rrdlabels_destroy(r->chart_labels); + freez((void *) r->container_name); + freez((void *) r->container_device); + freez((void *) r->ctx_prefix); +} + +void netdev_renames_init(void) { + static SPINLOCK spinlock = 
NETDATA_SPINLOCK_INITIALIZER; + + spinlock_lock(&spinlock); + if(!netdev_renames) { + netdev_renames = dictionary_create_advanced(DICT_OPTION_FIXED_SIZE, NULL, sizeof(struct rename_task)); + dictionary_register_delete_callback(netdev_renames, dictionary_netdev_rename_delete_cb, NULL); + } + spinlock_unlock(&spinlock); +} + +void cgroup_rename_task_add( + const char *host_device, + const char *container_device, + const char *container_name, + RRDLABELS *labels, + const char *ctx_prefix, + const DICTIONARY_ITEM *cgroup_netdev_link) +{ + netdev_renames_init(); + + struct rename_task tmp = { + .container_device = strdupz(container_device), + .container_name = strdupz(container_name), + .ctx_prefix = strdupz(ctx_prefix), + .chart_labels = rrdlabels_create(), + .cgroup_netdev_link = cgroup_netdev_link, + }; + rrdlabels_migrate_to_these(tmp.chart_labels, labels); + + dictionary_set(netdev_renames, host_device, &tmp, sizeof(tmp)); +} + +// other threads can call this function to delete a rename to a netdev +void cgroup_rename_task_device_del(const char *host_device) { + dictionary_del(netdev_renames, host_device); +} diff --git a/src/collectors/proc.plugin/proc_net_dev_renames.h b/src/collectors/proc.plugin/proc_net_dev_renames.h new file mode 100644 index 000000000..51f3cfd94 --- /dev/null +++ b/src/collectors/proc.plugin/proc_net_dev_renames.h @@ -0,0 +1,26 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#ifndef NETDATA_PROC_NET_DEV_RENAMES_H +#define NETDATA_PROC_NET_DEV_RENAMES_H + +#include "plugin_proc.h" + +extern DICTIONARY *netdev_renames; + +struct rename_task { + const char *container_device; + const char *container_name; + const char *ctx_prefix; + RRDLABELS *chart_labels; + const DICTIONARY_ITEM *cgroup_netdev_link; +}; + +void netdev_renames_init(void); + +void cgroup_netdev_reset_all(void); +void cgroup_netdev_release(const DICTIONARY_ITEM *link); +const void *cgroup_netdev_dup(const DICTIONARY_ITEM *link); +void cgroup_netdev_add_bandwidth(const 
DICTIONARY_ITEM *link, NETDATA_DOUBLE received, NETDATA_DOUBLE sent); + + +#endif //NETDATA_PROC_NET_DEV_RENAMES_H diff --git a/src/collectors/proc.plugin/proc_net_ip_vs_stats.c b/src/collectors/proc.plugin/proc_net_ip_vs_stats.c new file mode 100644 index 000000000..2b9c9332e --- /dev/null +++ b/src/collectors/proc.plugin/proc_net_ip_vs_stats.c @@ -0,0 +1,123 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +#define RRD_TYPE_NET_IPVS "ipvs" +#define PLUGIN_PROC_MODULE_NET_IPVS_NAME "/proc/net/ip_vs_stats" +#define CONFIG_SECTION_PLUGIN_PROC_NET_IPVS "plugin:" PLUGIN_PROC_CONFIG_NAME ":" PLUGIN_PROC_MODULE_NET_IPVS_NAME + +int do_proc_net_ip_vs_stats(int update_every, usec_t dt) { + (void)dt; + static int do_bandwidth = -1, do_sockets = -1, do_packets = -1; + static procfile *ff = NULL; + + if(do_bandwidth == -1) do_bandwidth = config_get_boolean(CONFIG_SECTION_PLUGIN_PROC_NET_IPVS, "IPVS bandwidth", 1); + if(do_sockets == -1) do_sockets = config_get_boolean(CONFIG_SECTION_PLUGIN_PROC_NET_IPVS, "IPVS connections", 1); + if(do_packets == -1) do_packets = config_get_boolean(CONFIG_SECTION_PLUGIN_PROC_NET_IPVS, "IPVS packets", 1); + + if(!ff) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/net/ip_vs_stats"); + ff = procfile_open(config_get(CONFIG_SECTION_PLUGIN_PROC_NET_IPVS, "filename to monitor", filename), " \t,:|", PROCFILE_FLAG_DEFAULT); + } + if(!ff) return 1; + + ff = procfile_readall(ff); + if(!ff) return 0; // we return 0, so that we will retry to open it next time + + // make sure we have 3 lines + if(procfile_lines(ff) < 3) return 1; + + // make sure we have 5 words on the 3rd line + if(procfile_linewords(ff, 2) < 5) return 1; + + unsigned long long entries, InPackets, OutPackets, InBytes, OutBytes; + + entries = strtoull(procfile_lineword(ff, 2, 0), NULL, 16); + InPackets = strtoull(procfile_lineword(ff, 2, 1), NULL, 16); + OutPackets = 
strtoull(procfile_lineword(ff, 2, 2), NULL, 16); + InBytes = strtoull(procfile_lineword(ff, 2, 3), NULL, 16); + OutBytes = strtoull(procfile_lineword(ff, 2, 4), NULL, 16); + + if(do_sockets) { + static RRDSET *st = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + RRD_TYPE_NET_IPVS + , "sockets" + , NULL + , RRD_TYPE_NET_IPVS + , NULL + , "IPVS New Connections" + , "connections/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NET_IPVS_NAME + , NETDATA_CHART_PRIO_IPVS_SOCKETS + , update_every + , RRDSET_TYPE_LINE + ); + + rrddim_add(st, "connections", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set(st, "connections", entries); + rrdset_done(st); + } + + if(do_packets) { + static RRDSET *st = NULL; + if(unlikely(!st)) { + st = rrdset_create_localhost( + RRD_TYPE_NET_IPVS + , "packets" + , NULL + , RRD_TYPE_NET_IPVS + , NULL + , "IPVS Packets" + , "packets/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NET_IPVS_NAME + , NETDATA_CHART_PRIO_IPVS_PACKETS + , update_every + , RRDSET_TYPE_LINE + ); + + rrddim_add(st, "received", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rrddim_add(st, "sent", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set(st, "received", InPackets); + rrddim_set(st, "sent", OutPackets); + rrdset_done(st); + } + + if(do_bandwidth) { + static RRDSET *st = NULL; + if(unlikely(!st)) { + st = rrdset_create_localhost( + RRD_TYPE_NET_IPVS + , "net" + , NULL + , RRD_TYPE_NET_IPVS + , NULL + , "IPVS Bandwidth" + , "kilobits/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NET_IPVS_NAME + , NETDATA_CHART_PRIO_IPVS_NET + , update_every + , RRDSET_TYPE_AREA + ); + + rrddim_add(st, "received", NULL, 8, BITS_IN_A_KILOBIT, RRD_ALGORITHM_INCREMENTAL); + rrddim_add(st, "sent", NULL, -8, BITS_IN_A_KILOBIT, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set(st, "received", InBytes); + rrddim_set(st, "sent", OutBytes); + rrdset_done(st); + } + + return 0; +} diff --git a/src/collectors/proc.plugin/proc_net_netstat.c 
b/src/collectors/proc.plugin/proc_net_netstat.c new file mode 100644 index 000000000..4a999803f --- /dev/null +++ b/src/collectors/proc.plugin/proc_net_netstat.c @@ -0,0 +1,3087 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +#define RRD_TYPE_NET_IP "ip" +#define RRD_TYPE_NET_IP4 "ipv4" +#define RRD_TYPE_NET_IP6 "ipv6" +#define PLUGIN_PROC_MODULE_NETSTAT_NAME "/proc/net/netstat" +#define CONFIG_SECTION_PLUGIN_PROC_NETSTAT "plugin:" PLUGIN_PROC_CONFIG_NAME ":" PLUGIN_PROC_MODULE_NETSTAT_NAME + +static struct proc_net_snmp { + // kernel_uint_t ip_Forwarding; + kernel_uint_t ip_DefaultTTL; + kernel_uint_t ip_InReceives; + kernel_uint_t ip_InHdrErrors; + kernel_uint_t ip_InAddrErrors; + kernel_uint_t ip_ForwDatagrams; + kernel_uint_t ip_InUnknownProtos; + kernel_uint_t ip_InDiscards; + kernel_uint_t ip_InDelivers; + kernel_uint_t ip_OutRequests; + kernel_uint_t ip_OutDiscards; + kernel_uint_t ip_OutNoRoutes; + kernel_uint_t ip_ReasmTimeout; + kernel_uint_t ip_ReasmReqds; + kernel_uint_t ip_ReasmOKs; + kernel_uint_t ip_ReasmFails; + kernel_uint_t ip_FragOKs; + kernel_uint_t ip_FragFails; + kernel_uint_t ip_FragCreates; + + kernel_uint_t icmp_InMsgs; + kernel_uint_t icmp_OutMsgs; + kernel_uint_t icmp_InErrors; + kernel_uint_t icmp_OutErrors; + kernel_uint_t icmp_InCsumErrors; + + kernel_uint_t icmpmsg_InEchoReps; + kernel_uint_t icmpmsg_OutEchoReps; + kernel_uint_t icmpmsg_InDestUnreachs; + kernel_uint_t icmpmsg_OutDestUnreachs; + kernel_uint_t icmpmsg_InRedirects; + kernel_uint_t icmpmsg_OutRedirects; + kernel_uint_t icmpmsg_InEchos; + kernel_uint_t icmpmsg_OutEchos; + kernel_uint_t icmpmsg_InRouterAdvert; + kernel_uint_t icmpmsg_OutRouterAdvert; + kernel_uint_t icmpmsg_InRouterSelect; + kernel_uint_t icmpmsg_OutRouterSelect; + kernel_uint_t icmpmsg_InTimeExcds; + kernel_uint_t icmpmsg_OutTimeExcds; + kernel_uint_t icmpmsg_InParmProbs; + kernel_uint_t icmpmsg_OutParmProbs; + kernel_uint_t icmpmsg_InTimestamps; + kernel_uint_t 
icmpmsg_OutTimestamps; + kernel_uint_t icmpmsg_InTimestampReps; + kernel_uint_t icmpmsg_OutTimestampReps; + + //kernel_uint_t tcp_RtoAlgorithm; + //kernel_uint_t tcp_RtoMin; + //kernel_uint_t tcp_RtoMax; + ssize_t tcp_MaxConn; + kernel_uint_t tcp_ActiveOpens; + kernel_uint_t tcp_PassiveOpens; + kernel_uint_t tcp_AttemptFails; + kernel_uint_t tcp_EstabResets; + kernel_uint_t tcp_CurrEstab; + kernel_uint_t tcp_InSegs; + kernel_uint_t tcp_OutSegs; + kernel_uint_t tcp_RetransSegs; + kernel_uint_t tcp_InErrs; + kernel_uint_t tcp_OutRsts; + kernel_uint_t tcp_InCsumErrors; + + kernel_uint_t udp_InDatagrams; + kernel_uint_t udp_NoPorts; + kernel_uint_t udp_InErrors; + kernel_uint_t udp_OutDatagrams; + kernel_uint_t udp_RcvbufErrors; + kernel_uint_t udp_SndbufErrors; + kernel_uint_t udp_InCsumErrors; + kernel_uint_t udp_IgnoredMulti; + + kernel_uint_t udplite_InDatagrams; + kernel_uint_t udplite_NoPorts; + kernel_uint_t udplite_InErrors; + kernel_uint_t udplite_OutDatagrams; + kernel_uint_t udplite_RcvbufErrors; + kernel_uint_t udplite_SndbufErrors; + kernel_uint_t udplite_InCsumErrors; + kernel_uint_t udplite_IgnoredMulti; +} snmp_root = { 0 }; + +static void parse_line_pair(procfile *ff_netstat, ARL_BASE *base, size_t header_line, size_t values_line) { + size_t hwords = procfile_linewords(ff_netstat, header_line); + size_t vwords = procfile_linewords(ff_netstat, values_line); + size_t w; + + if(unlikely(vwords > hwords)) { + collector_error("File /proc/net/netstat on header line %zu has %zu words, but on value line %zu has %zu words.", header_line, hwords, values_line, vwords); + vwords = hwords; + } + + for(w = 1; w < vwords ;w++) { + if(unlikely(arl_check(base, procfile_lineword(ff_netstat, header_line, w), procfile_lineword(ff_netstat, values_line, w)))) + break; + } +} + +static void do_proc_net_snmp6(int update_every) { + static bool do_snmp6 = true; + + if (!do_snmp6) { + return; + } + + static int do_ip6_packets = -1, do_ip6_fragsout = -1, do_ip6_fragsin = -1, 
do_ip6_errors = -1, + do_ip6_udplite_packets = -1, do_ip6_udplite_errors = -1, do_ip6_udp_packets = -1, do_ip6_udp_errors = -1, + do_ip6_bandwidth = -1, do_ip6_mcast = -1, do_ip6_bcast = -1, do_ip6_mcast_p = -1, do_ip6_icmp = -1, + do_ip6_icmp_redir = -1, do_ip6_icmp_errors = -1, do_ip6_icmp_echos = -1, do_ip6_icmp_groupmemb = -1, + do_ip6_icmp_router = -1, do_ip6_icmp_neighbor = -1, do_ip6_icmp_mldv2 = -1, do_ip6_icmp_types = -1, + do_ip6_ect = -1; + + static procfile *ff_snmp6 = NULL; + + static ARL_BASE *arl_ipv6 = NULL; + + static unsigned long long Ip6InReceives = 0ULL; + static unsigned long long Ip6InHdrErrors = 0ULL; + static unsigned long long Ip6InTooBigErrors = 0ULL; + static unsigned long long Ip6InNoRoutes = 0ULL; + static unsigned long long Ip6InAddrErrors = 0ULL; + static unsigned long long Ip6InUnknownProtos = 0ULL; + static unsigned long long Ip6InTruncatedPkts = 0ULL; + static unsigned long long Ip6InDiscards = 0ULL; + static unsigned long long Ip6InDelivers = 0ULL; + static unsigned long long Ip6OutForwDatagrams = 0ULL; + static unsigned long long Ip6OutRequests = 0ULL; + static unsigned long long Ip6OutDiscards = 0ULL; + static unsigned long long Ip6OutNoRoutes = 0ULL; + static unsigned long long Ip6ReasmTimeout = 0ULL; + static unsigned long long Ip6ReasmReqds = 0ULL; + static unsigned long long Ip6ReasmOKs = 0ULL; + static unsigned long long Ip6ReasmFails = 0ULL; + static unsigned long long Ip6FragOKs = 0ULL; + static unsigned long long Ip6FragFails = 0ULL; + static unsigned long long Ip6FragCreates = 0ULL; + static unsigned long long Ip6InMcastPkts = 0ULL; + static unsigned long long Ip6OutMcastPkts = 0ULL; + static unsigned long long Ip6InOctets = 0ULL; + static unsigned long long Ip6OutOctets = 0ULL; + static unsigned long long Ip6InMcastOctets = 0ULL; + static unsigned long long Ip6OutMcastOctets = 0ULL; + static unsigned long long Ip6InBcastOctets = 0ULL; + static unsigned long long Ip6OutBcastOctets = 0ULL; + static unsigned long long 
Ip6InNoECTPkts = 0ULL; + static unsigned long long Ip6InECT1Pkts = 0ULL; + static unsigned long long Ip6InECT0Pkts = 0ULL; + static unsigned long long Ip6InCEPkts = 0ULL; + static unsigned long long Icmp6InMsgs = 0ULL; + static unsigned long long Icmp6InErrors = 0ULL; + static unsigned long long Icmp6OutMsgs = 0ULL; + static unsigned long long Icmp6OutErrors = 0ULL; + static unsigned long long Icmp6InCsumErrors = 0ULL; + static unsigned long long Icmp6InDestUnreachs = 0ULL; + static unsigned long long Icmp6InPktTooBigs = 0ULL; + static unsigned long long Icmp6InTimeExcds = 0ULL; + static unsigned long long Icmp6InParmProblems = 0ULL; + static unsigned long long Icmp6InEchos = 0ULL; + static unsigned long long Icmp6InEchoReplies = 0ULL; + static unsigned long long Icmp6InGroupMembQueries = 0ULL; + static unsigned long long Icmp6InGroupMembResponses = 0ULL; + static unsigned long long Icmp6InGroupMembReductions = 0ULL; + static unsigned long long Icmp6InRouterSolicits = 0ULL; + static unsigned long long Icmp6InRouterAdvertisements = 0ULL; + static unsigned long long Icmp6InNeighborSolicits = 0ULL; + static unsigned long long Icmp6InNeighborAdvertisements = 0ULL; + static unsigned long long Icmp6InRedirects = 0ULL; + static unsigned long long Icmp6InMLDv2Reports = 0ULL; + static unsigned long long Icmp6OutDestUnreachs = 0ULL; + static unsigned long long Icmp6OutPktTooBigs = 0ULL; + static unsigned long long Icmp6OutTimeExcds = 0ULL; + static unsigned long long Icmp6OutParmProblems = 0ULL; + static unsigned long long Icmp6OutEchos = 0ULL; + static unsigned long long Icmp6OutEchoReplies = 0ULL; + static unsigned long long Icmp6OutGroupMembQueries = 0ULL; + static unsigned long long Icmp6OutGroupMembResponses = 0ULL; + static unsigned long long Icmp6OutGroupMembReductions = 0ULL; + static unsigned long long Icmp6OutRouterSolicits = 0ULL; + static unsigned long long Icmp6OutRouterAdvertisements = 0ULL; + static unsigned long long Icmp6OutNeighborSolicits = 0ULL; + static 
unsigned long long Icmp6OutNeighborAdvertisements = 0ULL; + static unsigned long long Icmp6OutRedirects = 0ULL; + static unsigned long long Icmp6OutMLDv2Reports = 0ULL; + static unsigned long long Icmp6InType1 = 0ULL; + static unsigned long long Icmp6InType128 = 0ULL; + static unsigned long long Icmp6InType129 = 0ULL; + static unsigned long long Icmp6InType136 = 0ULL; + static unsigned long long Icmp6OutType1 = 0ULL; + static unsigned long long Icmp6OutType128 = 0ULL; + static unsigned long long Icmp6OutType129 = 0ULL; + static unsigned long long Icmp6OutType133 = 0ULL; + static unsigned long long Icmp6OutType135 = 0ULL; + static unsigned long long Icmp6OutType143 = 0ULL; + static unsigned long long Udp6InDatagrams = 0ULL; + static unsigned long long Udp6NoPorts = 0ULL; + static unsigned long long Udp6InErrors = 0ULL; + static unsigned long long Udp6OutDatagrams = 0ULL; + static unsigned long long Udp6RcvbufErrors = 0ULL; + static unsigned long long Udp6SndbufErrors = 0ULL; + static unsigned long long Udp6InCsumErrors = 0ULL; + static unsigned long long Udp6IgnoredMulti = 0ULL; + static unsigned long long UdpLite6InDatagrams = 0ULL; + static unsigned long long UdpLite6NoPorts = 0ULL; + static unsigned long long UdpLite6InErrors = 0ULL; + static unsigned long long UdpLite6OutDatagrams = 0ULL; + static unsigned long long UdpLite6RcvbufErrors = 0ULL; + static unsigned long long UdpLite6SndbufErrors = 0ULL; + static unsigned long long UdpLite6InCsumErrors = 0ULL; + + // prepare for /proc/net/snmp6 parsing + + if(unlikely(!arl_ipv6)) { + do_ip6_packets = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp6", "ipv6 packets", CONFIG_BOOLEAN_AUTO); + do_ip6_fragsout = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp6", "ipv6 fragments sent", CONFIG_BOOLEAN_AUTO); + do_ip6_fragsin = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp6", "ipv6 fragments assembly", CONFIG_BOOLEAN_AUTO); + do_ip6_errors = 
config_get_boolean_ondemand("plugin:proc:/proc/net/snmp6", "ipv6 errors", CONFIG_BOOLEAN_AUTO); + do_ip6_udp_packets = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp6", "ipv6 UDP packets", CONFIG_BOOLEAN_AUTO); + do_ip6_udp_errors = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp6", "ipv6 UDP errors", CONFIG_BOOLEAN_AUTO); + do_ip6_udplite_packets = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp6", "ipv6 UDPlite packets", CONFIG_BOOLEAN_AUTO); + do_ip6_udplite_errors = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp6", "ipv6 UDPlite errors", CONFIG_BOOLEAN_AUTO); + do_ip6_bandwidth = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp6", "bandwidth", CONFIG_BOOLEAN_AUTO); + do_ip6_mcast = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp6", "multicast bandwidth", CONFIG_BOOLEAN_AUTO); + do_ip6_bcast = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp6", "broadcast bandwidth", CONFIG_BOOLEAN_AUTO); + do_ip6_mcast_p = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp6", "multicast packets", CONFIG_BOOLEAN_AUTO); + do_ip6_icmp = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp6", "icmp", CONFIG_BOOLEAN_AUTO); + do_ip6_icmp_redir = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp6", "icmp redirects", CONFIG_BOOLEAN_AUTO); + do_ip6_icmp_errors = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp6", "icmp errors", CONFIG_BOOLEAN_AUTO); + do_ip6_icmp_echos = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp6", "icmp echos", CONFIG_BOOLEAN_AUTO); + do_ip6_icmp_groupmemb = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp6", "icmp group membership", CONFIG_BOOLEAN_AUTO); + do_ip6_icmp_router = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp6", "icmp router", CONFIG_BOOLEAN_AUTO); + do_ip6_icmp_neighbor = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp6", "icmp neighbor", CONFIG_BOOLEAN_AUTO); + do_ip6_icmp_mldv2 = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp6", 
"icmp mldv2", CONFIG_BOOLEAN_AUTO); + do_ip6_icmp_types = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp6", "icmp types", CONFIG_BOOLEAN_AUTO); + do_ip6_ect = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp6", "ect", CONFIG_BOOLEAN_AUTO); + + arl_ipv6 = arl_create("snmp6", NULL, 60); + arl_expect(arl_ipv6, "Ip6InReceives", &Ip6InReceives); + arl_expect(arl_ipv6, "Ip6InHdrErrors", &Ip6InHdrErrors); + arl_expect(arl_ipv6, "Ip6InTooBigErrors", &Ip6InTooBigErrors); + arl_expect(arl_ipv6, "Ip6InNoRoutes", &Ip6InNoRoutes); + arl_expect(arl_ipv6, "Ip6InAddrErrors", &Ip6InAddrErrors); + arl_expect(arl_ipv6, "Ip6InUnknownProtos", &Ip6InUnknownProtos); + arl_expect(arl_ipv6, "Ip6InTruncatedPkts", &Ip6InTruncatedPkts); + arl_expect(arl_ipv6, "Ip6InDiscards", &Ip6InDiscards); + arl_expect(arl_ipv6, "Ip6InDelivers", &Ip6InDelivers); + arl_expect(arl_ipv6, "Ip6OutForwDatagrams", &Ip6OutForwDatagrams); + arl_expect(arl_ipv6, "Ip6OutRequests", &Ip6OutRequests); + arl_expect(arl_ipv6, "Ip6OutDiscards", &Ip6OutDiscards); + arl_expect(arl_ipv6, "Ip6OutNoRoutes", &Ip6OutNoRoutes); + arl_expect(arl_ipv6, "Ip6ReasmTimeout", &Ip6ReasmTimeout); + arl_expect(arl_ipv6, "Ip6ReasmReqds", &Ip6ReasmReqds); + arl_expect(arl_ipv6, "Ip6ReasmOKs", &Ip6ReasmOKs); + arl_expect(arl_ipv6, "Ip6ReasmFails", &Ip6ReasmFails); + arl_expect(arl_ipv6, "Ip6FragOKs", &Ip6FragOKs); + arl_expect(arl_ipv6, "Ip6FragFails", &Ip6FragFails); + arl_expect(arl_ipv6, "Ip6FragCreates", &Ip6FragCreates); + arl_expect(arl_ipv6, "Ip6InMcastPkts", &Ip6InMcastPkts); + arl_expect(arl_ipv6, "Ip6OutMcastPkts", &Ip6OutMcastPkts); + arl_expect(arl_ipv6, "Ip6InOctets", &Ip6InOctets); + arl_expect(arl_ipv6, "Ip6OutOctets", &Ip6OutOctets); + arl_expect(arl_ipv6, "Ip6InMcastOctets", &Ip6InMcastOctets); + arl_expect(arl_ipv6, "Ip6OutMcastOctets", &Ip6OutMcastOctets); + arl_expect(arl_ipv6, "Ip6InBcastOctets", &Ip6InBcastOctets); + arl_expect(arl_ipv6, "Ip6OutBcastOctets", &Ip6OutBcastOctets); + arl_expect(arl_ipv6, 
"Ip6InNoECTPkts", &Ip6InNoECTPkts); + arl_expect(arl_ipv6, "Ip6InECT1Pkts", &Ip6InECT1Pkts); + arl_expect(arl_ipv6, "Ip6InECT0Pkts", &Ip6InECT0Pkts); + arl_expect(arl_ipv6, "Ip6InCEPkts", &Ip6InCEPkts); + arl_expect(arl_ipv6, "Icmp6InMsgs", &Icmp6InMsgs); + arl_expect(arl_ipv6, "Icmp6InErrors", &Icmp6InErrors); + arl_expect(arl_ipv6, "Icmp6OutMsgs", &Icmp6OutMsgs); + arl_expect(arl_ipv6, "Icmp6OutErrors", &Icmp6OutErrors); + arl_expect(arl_ipv6, "Icmp6InCsumErrors", &Icmp6InCsumErrors); + arl_expect(arl_ipv6, "Icmp6InDestUnreachs", &Icmp6InDestUnreachs); + arl_expect(arl_ipv6, "Icmp6InPktTooBigs", &Icmp6InPktTooBigs); + arl_expect(arl_ipv6, "Icmp6InTimeExcds", &Icmp6InTimeExcds); + arl_expect(arl_ipv6, "Icmp6InParmProblems", &Icmp6InParmProblems); + arl_expect(arl_ipv6, "Icmp6InEchos", &Icmp6InEchos); + arl_expect(arl_ipv6, "Icmp6InEchoReplies", &Icmp6InEchoReplies); + arl_expect(arl_ipv6, "Icmp6InGroupMembQueries", &Icmp6InGroupMembQueries); + arl_expect(arl_ipv6, "Icmp6InGroupMembResponses", &Icmp6InGroupMembResponses); + arl_expect(arl_ipv6, "Icmp6InGroupMembReductions", &Icmp6InGroupMembReductions); + arl_expect(arl_ipv6, "Icmp6InRouterSolicits", &Icmp6InRouterSolicits); + arl_expect(arl_ipv6, "Icmp6InRouterAdvertisements", &Icmp6InRouterAdvertisements); + arl_expect(arl_ipv6, "Icmp6InNeighborSolicits", &Icmp6InNeighborSolicits); + arl_expect(arl_ipv6, "Icmp6InNeighborAdvertisements", &Icmp6InNeighborAdvertisements); + arl_expect(arl_ipv6, "Icmp6InRedirects", &Icmp6InRedirects); + arl_expect(arl_ipv6, "Icmp6InMLDv2Reports", &Icmp6InMLDv2Reports); + arl_expect(arl_ipv6, "Icmp6OutDestUnreachs", &Icmp6OutDestUnreachs); + arl_expect(arl_ipv6, "Icmp6OutPktTooBigs", &Icmp6OutPktTooBigs); + arl_expect(arl_ipv6, "Icmp6OutTimeExcds", &Icmp6OutTimeExcds); + arl_expect(arl_ipv6, "Icmp6OutParmProblems", &Icmp6OutParmProblems); + arl_expect(arl_ipv6, "Icmp6OutEchos", &Icmp6OutEchos); + arl_expect(arl_ipv6, "Icmp6OutEchoReplies", &Icmp6OutEchoReplies); + arl_expect(arl_ipv6, 
"Icmp6OutGroupMembQueries", &Icmp6OutGroupMembQueries); + arl_expect(arl_ipv6, "Icmp6OutGroupMembResponses", &Icmp6OutGroupMembResponses); + arl_expect(arl_ipv6, "Icmp6OutGroupMembReductions", &Icmp6OutGroupMembReductions); + arl_expect(arl_ipv6, "Icmp6OutRouterSolicits", &Icmp6OutRouterSolicits); + arl_expect(arl_ipv6, "Icmp6OutRouterAdvertisements", &Icmp6OutRouterAdvertisements); + arl_expect(arl_ipv6, "Icmp6OutNeighborSolicits", &Icmp6OutNeighborSolicits); + arl_expect(arl_ipv6, "Icmp6OutNeighborAdvertisements", &Icmp6OutNeighborAdvertisements); + arl_expect(arl_ipv6, "Icmp6OutRedirects", &Icmp6OutRedirects); + arl_expect(arl_ipv6, "Icmp6OutMLDv2Reports", &Icmp6OutMLDv2Reports); + arl_expect(arl_ipv6, "Icmp6InType1", &Icmp6InType1); + arl_expect(arl_ipv6, "Icmp6InType128", &Icmp6InType128); + arl_expect(arl_ipv6, "Icmp6InType129", &Icmp6InType129); + arl_expect(arl_ipv6, "Icmp6InType136", &Icmp6InType136); + arl_expect(arl_ipv6, "Icmp6OutType1", &Icmp6OutType1); + arl_expect(arl_ipv6, "Icmp6OutType128", &Icmp6OutType128); + arl_expect(arl_ipv6, "Icmp6OutType129", &Icmp6OutType129); + arl_expect(arl_ipv6, "Icmp6OutType133", &Icmp6OutType133); + arl_expect(arl_ipv6, "Icmp6OutType135", &Icmp6OutType135); + arl_expect(arl_ipv6, "Icmp6OutType143", &Icmp6OutType143); + arl_expect(arl_ipv6, "Udp6InDatagrams", &Udp6InDatagrams); + arl_expect(arl_ipv6, "Udp6NoPorts", &Udp6NoPorts); + arl_expect(arl_ipv6, "Udp6InErrors", &Udp6InErrors); + arl_expect(arl_ipv6, "Udp6OutDatagrams", &Udp6OutDatagrams); + arl_expect(arl_ipv6, "Udp6RcvbufErrors", &Udp6RcvbufErrors); + arl_expect(arl_ipv6, "Udp6SndbufErrors", &Udp6SndbufErrors); + arl_expect(arl_ipv6, "Udp6InCsumErrors", &Udp6InCsumErrors); + arl_expect(arl_ipv6, "Udp6IgnoredMulti", &Udp6IgnoredMulti); + arl_expect(arl_ipv6, "UdpLite6InDatagrams", &UdpLite6InDatagrams); + arl_expect(arl_ipv6, "UdpLite6NoPorts", &UdpLite6NoPorts); + arl_expect(arl_ipv6, "UdpLite6InErrors", &UdpLite6InErrors); + arl_expect(arl_ipv6, 
"UdpLite6OutDatagrams", &UdpLite6OutDatagrams); + arl_expect(arl_ipv6, "UdpLite6RcvbufErrors", &UdpLite6RcvbufErrors); + arl_expect(arl_ipv6, "UdpLite6SndbufErrors", &UdpLite6SndbufErrors); + arl_expect(arl_ipv6, "UdpLite6InCsumErrors", &UdpLite6InCsumErrors); + } + + // parse /proc/net/snmp + + if (unlikely(!ff_snmp6)) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/net/snmp6"); + ff_snmp6 = procfile_open( + config_get("plugin:proc:/proc/net/snmp6", "filename to monitor", filename), " \t:", PROCFILE_FLAG_DEFAULT); + if (unlikely(!ff_snmp6)) { + do_snmp6 = false; + return; + } + } + + ff_snmp6 = procfile_readall(ff_snmp6); + if (unlikely(!ff_snmp6)) + return; + + size_t lines, l; + + lines = procfile_lines(ff_snmp6); + + arl_begin(arl_ipv6); + + for (l = 0; l < lines; l++) { + size_t words = procfile_linewords(ff_snmp6, l); + if (unlikely(words < 2)) { + if (unlikely(words)) { + collector_error("Cannot read /proc/net/snmp6 line %zu. 
Expected 2 params, read %zu.", l, words); + continue; + } + } + + if (unlikely(arl_check(arl_ipv6, procfile_lineword(ff_snmp6, l, 0), procfile_lineword(ff_snmp6, l, 1)))) + break; + } + + if(do_ip6_bandwidth == CONFIG_BOOLEAN_YES || (do_ip6_bandwidth == CONFIG_BOOLEAN_AUTO && + (Ip6InOctets || + Ip6OutOctets || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_ip6_bandwidth = CONFIG_BOOLEAN_YES; + static RRDSET *st = NULL; + static RRDDIM *rd_received = NULL, + *rd_sent = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + "system" + , "ipv6" + , NULL + , "network" + , NULL + , "IPv6 Bandwidth" + , "kilobits/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_SYSTEM_IPV6 + , update_every + , RRDSET_TYPE_AREA + ); + + rd_received = rrddim_add(st, "InOctets", "received", 8, BITS_IN_A_KILOBIT, RRD_ALGORITHM_INCREMENTAL); + rd_sent = rrddim_add(st, "OutOctets", "sent", -8, BITS_IN_A_KILOBIT, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st, rd_received, Ip6InOctets); + rrddim_set_by_pointer(st, rd_sent, Ip6OutOctets); + rrdset_done(st); + } + + if(do_ip6_packets == CONFIG_BOOLEAN_YES || (do_ip6_packets == CONFIG_BOOLEAN_AUTO && + (Ip6InReceives || + Ip6OutRequests || + Ip6InDelivers || + Ip6OutForwDatagrams || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_ip6_packets = CONFIG_BOOLEAN_YES; + static RRDSET *st = NULL; + static RRDDIM *rd_received = NULL, + *rd_sent = NULL, + *rd_forwarded = NULL, + *rd_delivers = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + RRD_TYPE_NET_IP6 + , "packets" + , NULL + , "packets" + , NULL + , "IPv6 Packets" + , "packets/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IPV6_PACKETS + , update_every + , RRDSET_TYPE_LINE + ); + + rd_received = rrddim_add(st, "InReceives", "received", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_sent = rrddim_add(st, "OutRequests", "sent", -1, 1, RRD_ALGORITHM_INCREMENTAL); + 
+            rd_forwarded = rrddim_add(st, "OutForwDatagrams", "forwarded", -1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_delivers = rrddim_add(st, "InDelivers", "delivers", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st, rd_received, Ip6InReceives);
+        rrddim_set_by_pointer(st, rd_sent, Ip6OutRequests);
+        rrddim_set_by_pointer(st, rd_forwarded, Ip6OutForwDatagrams);
+        rrddim_set_by_pointer(st, rd_delivers, Ip6InDelivers);
+        rrdset_done(st);
+    }
+
+    if(do_ip6_fragsout == CONFIG_BOOLEAN_YES || (do_ip6_fragsout == CONFIG_BOOLEAN_AUTO &&
+                                                 (Ip6FragOKs ||
+                                                  Ip6FragFails ||
+                                                  Ip6FragCreates ||
+                                                  netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        do_ip6_fragsout = CONFIG_BOOLEAN_YES;
+        static RRDSET *st = NULL;
+        static RRDDIM *rd_ok = NULL,
+                      *rd_failed = NULL,
+                      *rd_all = NULL;
+
+        if(unlikely(!st)) {
+            st = rrdset_create_localhost(
+                    RRD_TYPE_NET_IP6
+                    , "fragsout"
+                    , NULL
+                    , "fragments6"
+                    , NULL
+                    , "IPv6 Fragments Sent"
+                    , "packets/s"
+                    , PLUGIN_PROC_NAME
+                    , PLUGIN_PROC_MODULE_NETSTAT_NAME
+                    , NETDATA_CHART_PRIO_IPV6_FRAGSOUT
+                    , update_every
+                    , RRDSET_TYPE_LINE
+            );
+            rrdset_flag_set(st, RRDSET_FLAG_DETAIL);
+
+            rd_ok = rrddim_add(st, "FragOKs", "ok", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_failed = rrddim_add(st, "FragFails", "failed", -1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_all = rrddim_add(st, "FragCreates", "all", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st, rd_ok, Ip6FragOKs);
+        rrddim_set_by_pointer(st, rd_failed, Ip6FragFails);
+        rrddim_set_by_pointer(st, rd_all, Ip6FragCreates);
+        rrdset_done(st);
+    }
+
+    if(do_ip6_fragsin == CONFIG_BOOLEAN_YES || (do_ip6_fragsin == CONFIG_BOOLEAN_AUTO &&
+                                                (Ip6ReasmOKs ||
+                                                 Ip6ReasmFails ||
+                                                 Ip6ReasmTimeout ||
+                                                 Ip6ReasmReqds ||
+                                                 netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        do_ip6_fragsin = CONFIG_BOOLEAN_YES;
+
+        static RRDSET *st = NULL;
+        static RRDDIM *rd_ok = NULL,
+                      *rd_failed = NULL,
+                      *rd_timeout = NULL,
+                      *rd_all = NULL;
+
+        if(unlikely(!st)) {
+            st = rrdset_create_localhost(
+                    RRD_TYPE_NET_IP6
+                    , "fragsin"
+                    , NULL
+                    , "fragments6"
+                    , NULL
+                    , "IPv6 Fragments Reassembly"
+                    , "packets/s"
+                    , PLUGIN_PROC_NAME
+                    , PLUGIN_PROC_MODULE_NETSTAT_NAME
+                    , NETDATA_CHART_PRIO_IPV6_FRAGSIN
+                    , update_every
+                    , RRDSET_TYPE_LINE);
+            rrdset_flag_set(st, RRDSET_FLAG_DETAIL);
+
+            rd_ok = rrddim_add(st, "ReasmOKs", "ok", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_failed = rrddim_add(st, "ReasmFails", "failed", -1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_timeout = rrddim_add(st, "ReasmTimeout", "timeout", -1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_all = rrddim_add(st, "ReasmReqds", "all", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st, rd_ok, Ip6ReasmOKs);
+        rrddim_set_by_pointer(st, rd_failed, Ip6ReasmFails);
+        rrddim_set_by_pointer(st, rd_timeout, Ip6ReasmTimeout);
+        rrddim_set_by_pointer(st, rd_all, Ip6ReasmReqds);
+        rrdset_done(st);
+    }
+
+    if(do_ip6_errors == CONFIG_BOOLEAN_YES || (do_ip6_errors == CONFIG_BOOLEAN_AUTO &&
+                                               (Ip6InDiscards ||
+                                                Ip6OutDiscards ||
+                                                Ip6InHdrErrors ||
+                                                Ip6InAddrErrors ||
+                                                Ip6InUnknownProtos ||
+                                                Ip6InTooBigErrors ||
+                                                Ip6InTruncatedPkts ||
+                                                Ip6InNoRoutes ||
+                                                netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        do_ip6_errors = CONFIG_BOOLEAN_YES;
+        static RRDSET *st = NULL;
+        static RRDDIM *rd_InDiscards = NULL,
+                      *rd_OutDiscards = NULL,
+                      *rd_InHdrErrors = NULL,
+                      *rd_InAddrErrors = NULL,
+                      *rd_InUnknownProtos = NULL,
+                      *rd_InTooBigErrors = NULL,
+                      *rd_InTruncatedPkts = NULL,
+                      *rd_InNoRoutes = NULL,
+                      *rd_OutNoRoutes = NULL;
+
+        if(unlikely(!st)) {
+            st = rrdset_create_localhost(
+                    RRD_TYPE_NET_IP6
+                    , "errors"
+                    , NULL
+                    , "errors"
+                    , NULL
+                    , "IPv6 Errors"
+                    , "packets/s"
+                    , PLUGIN_PROC_NAME
+                    , PLUGIN_PROC_MODULE_NETSTAT_NAME
+                    , NETDATA_CHART_PRIO_IPV6_ERRORS
+                    , update_every
+                    , RRDSET_TYPE_LINE
+            );
+            rrdset_flag_set(st, RRDSET_FLAG_DETAIL);
+
+            rd_InDiscards = rrddim_add(st, "InDiscards", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_OutDiscards = rrddim_add(st, "OutDiscards", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InHdrErrors = rrddim_add(st, "InHdrErrors", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InAddrErrors = rrddim_add(st, "InAddrErrors", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InUnknownProtos = rrddim_add(st, "InUnknownProtos", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InTooBigErrors = rrddim_add(st, "InTooBigErrors", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InTruncatedPkts = rrddim_add(st, "InTruncatedPkts", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InNoRoutes = rrddim_add(st, "InNoRoutes", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_OutNoRoutes = rrddim_add(st, "OutNoRoutes", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st, rd_InDiscards, Ip6InDiscards);
+        rrddim_set_by_pointer(st, rd_OutDiscards, Ip6OutDiscards);
+        rrddim_set_by_pointer(st, rd_InHdrErrors, Ip6InHdrErrors);
+        rrddim_set_by_pointer(st, rd_InAddrErrors, Ip6InAddrErrors);
+        rrddim_set_by_pointer(st, rd_InUnknownProtos, Ip6InUnknownProtos);
+        rrddim_set_by_pointer(st, rd_InTooBigErrors, Ip6InTooBigErrors);
+        rrddim_set_by_pointer(st, rd_InTruncatedPkts, Ip6InTruncatedPkts);
+        rrddim_set_by_pointer(st, rd_InNoRoutes, Ip6InNoRoutes);
+        rrddim_set_by_pointer(st, rd_OutNoRoutes, Ip6OutNoRoutes);
+        rrdset_done(st);
+    }
+
+    if(do_ip6_udp_packets == CONFIG_BOOLEAN_YES || (do_ip6_udp_packets == CONFIG_BOOLEAN_AUTO &&
+                                                    (Udp6InDatagrams ||
+                                                     Udp6OutDatagrams ||
+                                                     netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        do_ip6_udp_packets = CONFIG_BOOLEAN_YES;
+        static RRDSET *st = NULL;
+        static RRDDIM *rd_received = NULL,
+                      *rd_sent = NULL;
+
+        if(unlikely(!st)) {
+            st = rrdset_create_localhost(
+                    RRD_TYPE_NET_IP6
+                    , "udppackets"
+                    , NULL
+                    , "udp6"
+                    , NULL
+                    , "IPv6 UDP Packets"
+                    , "packets/s"
+                    , PLUGIN_PROC_NAME
+                    , PLUGIN_PROC_MODULE_NETSTAT_NAME
+                    , NETDATA_CHART_PRIO_IPV6_UDP_PACKETS
+                    , update_every
+                    , RRDSET_TYPE_LINE
+            );
+
+            rd_received = rrddim_add(st, "InDatagrams", "received", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_sent = rrddim_add(st, "OutDatagrams", "sent", -1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st, rd_received, Udp6InDatagrams);
+        rrddim_set_by_pointer(st, rd_sent, Udp6OutDatagrams);
+        rrdset_done(st);
+    }
+
+    if(do_ip6_udp_errors == CONFIG_BOOLEAN_YES || (do_ip6_udp_errors == CONFIG_BOOLEAN_AUTO &&
+                                                   (Udp6InErrors ||
+                                                    Udp6NoPorts ||
+                                                    Udp6RcvbufErrors ||
+                                                    Udp6SndbufErrors ||
+                                                    Udp6InCsumErrors ||
+                                                    Udp6IgnoredMulti ||
+                                                    netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        do_ip6_udp_errors = CONFIG_BOOLEAN_YES;
+        static RRDSET *st = NULL;
+        static RRDDIM *rd_RcvbufErrors = NULL,
+                      *rd_SndbufErrors = NULL,
+                      *rd_InErrors = NULL,
+                      *rd_NoPorts = NULL,
+                      *rd_InCsumErrors = NULL,
+                      *rd_IgnoredMulti = NULL;
+
+        if(unlikely(!st)) {
+            st = rrdset_create_localhost(
+                    RRD_TYPE_NET_IP6
+                    , "udperrors"
+                    , NULL
+                    , "udp6"
+                    , NULL
+                    , "IPv6 UDP Errors"
+                    , "events/s"
+                    , PLUGIN_PROC_NAME
+                    , PLUGIN_PROC_MODULE_NETSTAT_NAME
+                    , NETDATA_CHART_PRIO_IPV6_UDP_ERRORS
+                    , update_every
+                    , RRDSET_TYPE_LINE
+            );
+            rrdset_flag_set(st, RRDSET_FLAG_DETAIL);
+
+            rd_RcvbufErrors = rrddim_add(st, "RcvbufErrors", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_SndbufErrors = rrddim_add(st, "SndbufErrors", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InErrors = rrddim_add(st, "InErrors", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_NoPorts = rrddim_add(st, "NoPorts", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InCsumErrors = rrddim_add(st, "InCsumErrors", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_IgnoredMulti = rrddim_add(st, "IgnoredMulti", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st, rd_RcvbufErrors, Udp6RcvbufErrors);
+        rrddim_set_by_pointer(st, rd_SndbufErrors, Udp6SndbufErrors);
+        rrddim_set_by_pointer(st, rd_InErrors, Udp6InErrors);
+        rrddim_set_by_pointer(st, rd_NoPorts, Udp6NoPorts);
+        rrddim_set_by_pointer(st, rd_InCsumErrors, Udp6InCsumErrors);
+        rrddim_set_by_pointer(st, rd_IgnoredMulti, Udp6IgnoredMulti);
+        rrdset_done(st);
+    }
+
+    if(do_ip6_udplite_packets == CONFIG_BOOLEAN_YES || (do_ip6_udplite_packets == CONFIG_BOOLEAN_AUTO &&
+                                                        (UdpLite6InDatagrams ||
+                                                         UdpLite6OutDatagrams ||
+                                                         netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        do_ip6_udplite_packets = CONFIG_BOOLEAN_YES;
+        static RRDSET *st = NULL;
+        static RRDDIM *rd_received = NULL,
+                      *rd_sent = NULL;
+
+        if(unlikely(!st)) {
+            st = rrdset_create_localhost(
+                    RRD_TYPE_NET_IP6
+                    , "udplitepackets"
+                    , NULL
+                    , "udplite6"
+                    , NULL
+                    , "IPv6 UDPlite Packets"
+                    , "packets/s"
+                    , PLUGIN_PROC_NAME
+                    , PLUGIN_PROC_MODULE_NETSTAT_NAME
+                    , NETDATA_CHART_PRIO_IPV6_UDPLITE_PACKETS
+                    , update_every
+                    , RRDSET_TYPE_LINE
+            );
+
+            rd_received = rrddim_add(st, "InDatagrams", "received", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_sent = rrddim_add(st, "OutDatagrams", "sent", -1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st, rd_received, UdpLite6InDatagrams);
+        rrddim_set_by_pointer(st, rd_sent, UdpLite6OutDatagrams);
+        rrdset_done(st);
+    }
+
+    if(do_ip6_udplite_errors == CONFIG_BOOLEAN_YES || (do_ip6_udplite_errors == CONFIG_BOOLEAN_AUTO &&
+                                                       (UdpLite6InErrors ||
+                                                        UdpLite6NoPorts ||
+                                                        UdpLite6RcvbufErrors ||
+                                                        UdpLite6SndbufErrors ||
+                                                        UdpLite6InCsumErrors ||
+                                                        netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        do_ip6_udplite_errors = CONFIG_BOOLEAN_YES;
+        static RRDSET *st = NULL;
+        static RRDDIM *rd_RcvbufErrors = NULL,
+                      *rd_SndbufErrors = NULL,
+                      *rd_InErrors = NULL,
+                      *rd_NoPorts = NULL,
+                      *rd_InCsumErrors = NULL;
+
+        if(unlikely(!st)) {
+            st = rrdset_create_localhost(
+                    RRD_TYPE_NET_IP6
+                    , "udpliteerrors"
+                    , NULL
+                    , "udplite6"
+                    , NULL
+                    , "IPv6 UDP Lite Errors"
+                    , "events/s"
+                    , PLUGIN_PROC_NAME
+                    , PLUGIN_PROC_MODULE_NETSTAT_NAME
+                    , NETDATA_CHART_PRIO_IPV6_UDPLITE_ERRORS
+                    , update_every
+                    , RRDSET_TYPE_LINE
+            );
+            rrdset_flag_set(st, RRDSET_FLAG_DETAIL);
+
+            rd_RcvbufErrors = rrddim_add(st, "RcvbufErrors", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_SndbufErrors = rrddim_add(st, "SndbufErrors", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InErrors = rrddim_add(st, "InErrors", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_NoPorts = rrddim_add(st, "NoPorts", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InCsumErrors = rrddim_add(st, "InCsumErrors", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st, rd_InErrors, UdpLite6InErrors);
+        rrddim_set_by_pointer(st, rd_NoPorts, UdpLite6NoPorts);
+        rrddim_set_by_pointer(st, rd_RcvbufErrors, UdpLite6RcvbufErrors);
+        rrddim_set_by_pointer(st, rd_SndbufErrors, UdpLite6SndbufErrors);
+        rrddim_set_by_pointer(st, rd_InCsumErrors, UdpLite6InCsumErrors);
+        rrdset_done(st);
+    }
+
+    if(do_ip6_mcast == CONFIG_BOOLEAN_YES || (do_ip6_mcast == CONFIG_BOOLEAN_AUTO &&
+                                              (Ip6OutMcastOctets ||
+                                               Ip6InMcastOctets ||
+                                               netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        do_ip6_mcast = CONFIG_BOOLEAN_YES;
+        static RRDSET *st = NULL;
+        static RRDDIM *rd_Ip6InMcastOctets = NULL,
+                      *rd_Ip6OutMcastOctets = NULL;
+
+        if(unlikely(!st)) {
+            st = rrdset_create_localhost(
+                    RRD_TYPE_NET_IP6
+                    , "mcast"
+                    , NULL
+                    , "multicast6"
+                    , NULL
+                    , "IPv6 Multicast Bandwidth"
+                    , "kilobits/s"
+                    , PLUGIN_PROC_NAME
+                    , PLUGIN_PROC_MODULE_NETSTAT_NAME
+                    , NETDATA_CHART_PRIO_IPV6_MCAST
+                    , update_every
+                    , RRDSET_TYPE_AREA
+            );
+            rrdset_flag_set(st, RRDSET_FLAG_DETAIL);
+
+            rd_Ip6InMcastOctets = rrddim_add(st, "InMcastOctets", "received", 8, BITS_IN_A_KILOBIT, RRD_ALGORITHM_INCREMENTAL);
+            rd_Ip6OutMcastOctets = rrddim_add(st, "OutMcastOctets", "sent", -8, BITS_IN_A_KILOBIT, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st, rd_Ip6InMcastOctets, Ip6InMcastOctets);
+        rrddim_set_by_pointer(st, rd_Ip6OutMcastOctets, Ip6OutMcastOctets);
+        rrdset_done(st);
+    }
+
+    if(do_ip6_bcast == CONFIG_BOOLEAN_YES || (do_ip6_bcast == CONFIG_BOOLEAN_AUTO &&
+                                              (Ip6OutBcastOctets ||
+                                               Ip6InBcastOctets ||
+                                               netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        do_ip6_bcast = CONFIG_BOOLEAN_YES;
+        static RRDSET *st = NULL;
+        static RRDDIM *rd_Ip6InBcastOctets = NULL,
+                      *rd_Ip6OutBcastOctets = NULL;
+
+        if(unlikely(!st)) {
+            st = rrdset_create_localhost(
+                    RRD_TYPE_NET_IP6
+                    , "bcast"
+                    , NULL
+                    , "broadcast6"
+                    , NULL
+                    , "IPv6 Broadcast Bandwidth"
+                    , "kilobits/s"
+                    , PLUGIN_PROC_NAME
+                    , PLUGIN_PROC_MODULE_NETSTAT_NAME
+                    , NETDATA_CHART_PRIO_IPV6_BCAST
+                    , update_every
+                    , RRDSET_TYPE_AREA
+            );
+            rrdset_flag_set(st, RRDSET_FLAG_DETAIL);
+
+            rd_Ip6InBcastOctets = rrddim_add(st, "InBcastOctets", "received", 8, BITS_IN_A_KILOBIT, RRD_ALGORITHM_INCREMENTAL);
+            rd_Ip6OutBcastOctets = rrddim_add(st, "OutBcastOctets", "sent", -8, BITS_IN_A_KILOBIT, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st, rd_Ip6InBcastOctets, Ip6InBcastOctets);
+        rrddim_set_by_pointer(st, rd_Ip6OutBcastOctets, Ip6OutBcastOctets);
+        rrdset_done(st);
+    }
+
+    if(do_ip6_mcast_p == CONFIG_BOOLEAN_YES || (do_ip6_mcast_p == CONFIG_BOOLEAN_AUTO &&
+                                                (Ip6OutMcastPkts ||
+                                                 Ip6InMcastPkts ||
+                                                 netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        do_ip6_mcast_p = CONFIG_BOOLEAN_YES;
+        static RRDSET *st = NULL;
+        static RRDDIM *rd_Ip6InMcastPkts = NULL,
+                      *rd_Ip6OutMcastPkts = NULL;
+
+        if(unlikely(!st)) {
+            st = rrdset_create_localhost(
+                    RRD_TYPE_NET_IP6
+                    , "mcastpkts"
+                    , NULL
+                    , "multicast6"
+                    , NULL
+                    , "IPv6 Multicast Packets"
+                    , "packets/s"
+                    , PLUGIN_PROC_NAME
+                    , PLUGIN_PROC_MODULE_NETSTAT_NAME
+                    , NETDATA_CHART_PRIO_IPV6_MCAST_PACKETS
+                    , update_every
+                    , RRDSET_TYPE_LINE
+            );
+            rrdset_flag_set(st, RRDSET_FLAG_DETAIL);
+
+            rd_Ip6InMcastPkts = rrddim_add(st, "InMcastPkts", "received", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_Ip6OutMcastPkts = rrddim_add(st, "OutMcastPkts", "sent", -1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st, rd_Ip6InMcastPkts, Ip6InMcastPkts);
+        rrddim_set_by_pointer(st, rd_Ip6OutMcastPkts, Ip6OutMcastPkts);
+        rrdset_done(st);
+    }
+
+    if(do_ip6_icmp == CONFIG_BOOLEAN_YES || (do_ip6_icmp == CONFIG_BOOLEAN_AUTO &&
+                                             (Icmp6InMsgs ||
+                                              Icmp6OutMsgs ||
+                                              netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        do_ip6_icmp = CONFIG_BOOLEAN_YES;
+        static RRDSET *st = NULL;
+        static RRDDIM *rd_Icmp6InMsgs = NULL,
+                      *rd_Icmp6OutMsgs = NULL;
+
+        if(unlikely(!st)) {
+            st = rrdset_create_localhost(
+                    RRD_TYPE_NET_IP6
+                    , "icmp"
+                    , NULL
+                    , "icmp6"
+                    , NULL
+                    , "IPv6 ICMP Messages"
+                    , "messages/s"
+                    , PLUGIN_PROC_NAME
+                    , PLUGIN_PROC_MODULE_NETSTAT_NAME
+                    , NETDATA_CHART_PRIO_IPV6_ICMP
+                    , update_every
+                    , RRDSET_TYPE_LINE
+            );
+
+            rd_Icmp6InMsgs = rrddim_add(st, "InMsgs", "received", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_Icmp6OutMsgs = rrddim_add(st, "OutMsgs", "sent", -1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st, rd_Icmp6InMsgs, Icmp6InMsgs);
+        rrddim_set_by_pointer(st, rd_Icmp6OutMsgs, Icmp6OutMsgs);
+        rrdset_done(st);
+    }
+
+    if(do_ip6_icmp_redir == CONFIG_BOOLEAN_YES || (do_ip6_icmp_redir == CONFIG_BOOLEAN_AUTO &&
+                                                   (Icmp6InRedirects ||
+                                                    Icmp6OutRedirects ||
+                                                    netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        do_ip6_icmp_redir = CONFIG_BOOLEAN_YES;
+        static RRDSET *st = NULL;
+        static RRDDIM *rd_Icmp6InRedirects = NULL,
+                      *rd_Icmp6OutRedirects = NULL;
+
+        if(unlikely(!st)) {
+            st = rrdset_create_localhost(
+                    RRD_TYPE_NET_IP6
+                    , "icmpredir"
+                    , NULL
+                    , "icmp6"
+                    , NULL
+                    , "IPv6 ICMP Redirects"
+                    , "redirects/s"
+                    , PLUGIN_PROC_NAME
+                    , PLUGIN_PROC_MODULE_NETSTAT_NAME
+                    , NETDATA_CHART_PRIO_IPV6_ICMP_REDIR
+                    , update_every
+                    , RRDSET_TYPE_LINE
+            );
+
+            rd_Icmp6InRedirects = rrddim_add(st, "InRedirects", "received", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_Icmp6OutRedirects = rrddim_add(st, "OutRedirects", "sent", -1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st, rd_Icmp6InRedirects, Icmp6InRedirects);
+        rrddim_set_by_pointer(st, rd_Icmp6OutRedirects, Icmp6OutRedirects);
+        rrdset_done(st);
+    }
+
+    if(do_ip6_icmp_errors == CONFIG_BOOLEAN_YES || (do_ip6_icmp_errors == CONFIG_BOOLEAN_AUTO &&
+                                                    (Icmp6InErrors ||
+                                                     Icmp6OutErrors ||
+                                                     Icmp6InCsumErrors ||
+                                                     Icmp6InDestUnreachs ||
+                                                     Icmp6InPktTooBigs ||
+                                                     Icmp6InTimeExcds ||
+                                                     Icmp6InParmProblems ||
+                                                     Icmp6OutDestUnreachs ||
+                                                     Icmp6OutPktTooBigs ||
+                                                     Icmp6OutTimeExcds ||
+                                                     Icmp6OutParmProblems ||
+                                                     netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        do_ip6_icmp_errors = CONFIG_BOOLEAN_YES;
+        static RRDSET *st = NULL;
+        static RRDDIM *rd_InErrors = NULL,
+                      *rd_OutErrors = NULL,
+                      *rd_InCsumErrors = NULL,
+                      *rd_InDestUnreachs = NULL,
+                      *rd_InPktTooBigs = NULL,
+                      *rd_InTimeExcds = NULL,
+                      *rd_InParmProblems = NULL,
+                      *rd_OutDestUnreachs = NULL,
+                      *rd_OutPktTooBigs = NULL,
+                      *rd_OutTimeExcds = NULL,
+                      *rd_OutParmProblems = NULL;
+
+        if(unlikely(!st)) {
+            st = rrdset_create_localhost(
+                    RRD_TYPE_NET_IP6
+                    , "icmperrors"
+                    , NULL
+                    , "icmp6"
+                    , NULL
+                    , "IPv6 ICMP Errors"
+                    , "errors/s"
+                    , PLUGIN_PROC_NAME
+                    , PLUGIN_PROC_MODULE_NETSTAT_NAME
+                    , NETDATA_CHART_PRIO_IPV6_ICMP_ERRORS
+                    , update_every
+                    , RRDSET_TYPE_LINE
+            );
+
+            rd_InErrors = rrddim_add(st, "InErrors", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_OutErrors = rrddim_add(st, "OutErrors", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InCsumErrors = rrddim_add(st, "InCsumErrors", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InDestUnreachs = rrddim_add(st, "InDestUnreachs", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InPktTooBigs = rrddim_add(st, "InPktTooBigs", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InTimeExcds = rrddim_add(st, "InTimeExcds", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InParmProblems = rrddim_add(st, "InParmProblems", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_OutDestUnreachs = rrddim_add(st, "OutDestUnreachs", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_OutPktTooBigs = rrddim_add(st, "OutPktTooBigs", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_OutTimeExcds = rrddim_add(st, "OutTimeExcds", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_OutParmProblems = rrddim_add(st, "OutParmProblems", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st, rd_InErrors,
+            Icmp6InErrors);
+        rrddim_set_by_pointer(st, rd_OutErrors, Icmp6OutErrors);
+        rrddim_set_by_pointer(st, rd_InCsumErrors, Icmp6InCsumErrors);
+        rrddim_set_by_pointer(st, rd_InDestUnreachs, Icmp6InDestUnreachs);
+        rrddim_set_by_pointer(st, rd_InPktTooBigs, Icmp6InPktTooBigs);
+        rrddim_set_by_pointer(st, rd_InTimeExcds, Icmp6InTimeExcds);
+        rrddim_set_by_pointer(st, rd_InParmProblems, Icmp6InParmProblems);
+        rrddim_set_by_pointer(st, rd_OutDestUnreachs, Icmp6OutDestUnreachs);
+        rrddim_set_by_pointer(st, rd_OutPktTooBigs, Icmp6OutPktTooBigs);
+        rrddim_set_by_pointer(st, rd_OutTimeExcds, Icmp6OutTimeExcds);
+        rrddim_set_by_pointer(st, rd_OutParmProblems, Icmp6OutParmProblems);
+        rrdset_done(st);
+    }
+
+    if(do_ip6_icmp_echos == CONFIG_BOOLEAN_YES || (do_ip6_icmp_echos == CONFIG_BOOLEAN_AUTO &&
+                                                   (Icmp6InEchos ||
+                                                    Icmp6OutEchos ||
+                                                    Icmp6InEchoReplies ||
+                                                    Icmp6OutEchoReplies ||
+                                                    netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        do_ip6_icmp_echos = CONFIG_BOOLEAN_YES;
+        static RRDSET *st = NULL;
+        static RRDDIM *rd_InEchos = NULL,
+                      *rd_OutEchos = NULL,
+                      *rd_InEchoReplies = NULL,
+                      *rd_OutEchoReplies = NULL;
+
+        if(unlikely(!st)) {
+            st = rrdset_create_localhost(
+                    RRD_TYPE_NET_IP6
+                    , "icmpechos"
+                    , NULL
+                    , "icmp6"
+                    , NULL
+                    , "IPv6 ICMP Echo"
+                    , "messages/s"
+                    , PLUGIN_PROC_NAME
+                    , PLUGIN_PROC_MODULE_NETSTAT_NAME
+                    , NETDATA_CHART_PRIO_IPV6_ICMP_ECHOS
+                    , update_every
+                    , RRDSET_TYPE_LINE
+            );
+
+            rd_InEchos = rrddim_add(st, "InEchos", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_OutEchos = rrddim_add(st, "OutEchos", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InEchoReplies = rrddim_add(st, "InEchoReplies", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_OutEchoReplies = rrddim_add(st, "OutEchoReplies", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st, rd_InEchos, Icmp6InEchos);
+        rrddim_set_by_pointer(st, rd_OutEchos, Icmp6OutEchos);
+        rrddim_set_by_pointer(st, rd_InEchoReplies, Icmp6InEchoReplies);
+        rrddim_set_by_pointer(st, rd_OutEchoReplies, Icmp6OutEchoReplies);
+        rrdset_done(st);
+    }
+
+    if(do_ip6_icmp_groupmemb == CONFIG_BOOLEAN_YES || (do_ip6_icmp_groupmemb == CONFIG_BOOLEAN_AUTO &&
+                                                       (Icmp6InGroupMembQueries ||
+                                                        Icmp6OutGroupMembQueries ||
+                                                        Icmp6InGroupMembResponses ||
+                                                        Icmp6OutGroupMembResponses ||
+                                                        Icmp6InGroupMembReductions ||
+                                                        Icmp6OutGroupMembReductions ||
+                                                        netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        do_ip6_icmp_groupmemb = CONFIG_BOOLEAN_YES;
+        static RRDSET *st = NULL;
+        static RRDDIM *rd_InQueries = NULL,
+                      *rd_OutQueries = NULL,
+                      *rd_InResponses = NULL,
+                      *rd_OutResponses = NULL,
+                      *rd_InReductions = NULL,
+                      *rd_OutReductions = NULL;
+
+        if(unlikely(!st)) {
+            st = rrdset_create_localhost(
+                    RRD_TYPE_NET_IP6
+                    , "groupmemb"
+                    , NULL
+                    , "icmp6"
+                    , NULL
+                    , "IPv6 ICMP Group Membership"
+                    , "messages/s"
+                    , PLUGIN_PROC_NAME
+                    , PLUGIN_PROC_MODULE_NETSTAT_NAME
+                    , NETDATA_CHART_PRIO_IPV6_ICMP_GROUPMEMB
+                    , update_every
+                    , RRDSET_TYPE_LINE);
+
+            rd_InQueries = rrddim_add(st, "InQueries", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_OutQueries = rrddim_add(st, "OutQueries", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InResponses = rrddim_add(st, "InResponses", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_OutResponses = rrddim_add(st, "OutResponses", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InReductions = rrddim_add(st, "InReductions", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_OutReductions = rrddim_add(st, "OutReductions", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st, rd_InQueries, Icmp6InGroupMembQueries);
+        rrddim_set_by_pointer(st, rd_OutQueries, Icmp6OutGroupMembQueries);
+        rrddim_set_by_pointer(st, rd_InResponses, Icmp6InGroupMembResponses);
+        rrddim_set_by_pointer(st, rd_OutResponses, Icmp6OutGroupMembResponses);
+        rrddim_set_by_pointer(st, rd_InReductions, Icmp6InGroupMembReductions);
+        rrddim_set_by_pointer(st, rd_OutReductions, Icmp6OutGroupMembReductions);
+        rrdset_done(st);
+    }
+
+    if(do_ip6_icmp_router == CONFIG_BOOLEAN_YES || (do_ip6_icmp_router == CONFIG_BOOLEAN_AUTO &&
+                                                    (Icmp6InRouterSolicits ||
+                                                     Icmp6OutRouterSolicits ||
+                                                     Icmp6InRouterAdvertisements ||
+                                                     Icmp6OutRouterAdvertisements ||
+                                                     netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        do_ip6_icmp_router = CONFIG_BOOLEAN_YES;
+        static RRDSET *st = NULL;
+        static RRDDIM *rd_InSolicits = NULL,
+                      *rd_OutSolicits = NULL,
+                      *rd_InAdvertisements = NULL,
+                      *rd_OutAdvertisements = NULL;
+
+        if(unlikely(!st)) {
+            st = rrdset_create_localhost(
+                    RRD_TYPE_NET_IP6
+                    , "icmprouter"
+                    , NULL
+                    , "icmp6"
+                    , NULL
+                    , "IPv6 Router Messages"
+                    , "messages/s"
+                    , PLUGIN_PROC_NAME
+                    , PLUGIN_PROC_MODULE_NETSTAT_NAME
+                    , NETDATA_CHART_PRIO_IPV6_ICMP_ROUTER
+                    , update_every
+                    , RRDSET_TYPE_LINE
+            );
+
+            rd_InSolicits = rrddim_add(st, "InSolicits", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_OutSolicits = rrddim_add(st, "OutSolicits", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InAdvertisements = rrddim_add(st, "InAdvertisements", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_OutAdvertisements = rrddim_add(st, "OutAdvertisements", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st, rd_InSolicits, Icmp6InRouterSolicits);
+        rrddim_set_by_pointer(st, rd_OutSolicits, Icmp6OutRouterSolicits);
+        rrddim_set_by_pointer(st, rd_InAdvertisements, Icmp6InRouterAdvertisements);
+        rrddim_set_by_pointer(st, rd_OutAdvertisements, Icmp6OutRouterAdvertisements);
+        rrdset_done(st);
+    }
+
+    if(do_ip6_icmp_neighbor == CONFIG_BOOLEAN_YES || (do_ip6_icmp_neighbor == CONFIG_BOOLEAN_AUTO &&
+                                                      (Icmp6InNeighborSolicits ||
+                                                       Icmp6OutNeighborSolicits ||
+                                                       Icmp6InNeighborAdvertisements ||
+                                                       Icmp6OutNeighborAdvertisements ||
+                                                       netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        do_ip6_icmp_neighbor = CONFIG_BOOLEAN_YES;
+        static RRDSET *st = NULL;
+        static RRDDIM *rd_InSolicits = NULL,
+                      *rd_OutSolicits = NULL,
+                      *rd_InAdvertisements = NULL,
+                      *rd_OutAdvertisements = NULL;
+
+        if(unlikely(!st)) {
+            st = rrdset_create_localhost(
+                    RRD_TYPE_NET_IP6
+                    , "icmpneighbor"
+                    , NULL
+                    , "icmp6"
+                    , NULL
+                    , "IPv6 Neighbor Messages"
+                    , "messages/s"
+                    , PLUGIN_PROC_NAME
+                    , PLUGIN_PROC_MODULE_NETSTAT_NAME
+                    , NETDATA_CHART_PRIO_IPV6_ICMP_NEIGHBOR
+                    , update_every
+                    , RRDSET_TYPE_LINE
+            );
+
+            rd_InSolicits = rrddim_add(st, "InSolicits", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_OutSolicits = rrddim_add(st, "OutSolicits", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InAdvertisements = rrddim_add(st, "InAdvertisements", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_OutAdvertisements = rrddim_add(st, "OutAdvertisements", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st, rd_InSolicits, Icmp6InNeighborSolicits);
+        rrddim_set_by_pointer(st, rd_OutSolicits, Icmp6OutNeighborSolicits);
+        rrddim_set_by_pointer(st, rd_InAdvertisements, Icmp6InNeighborAdvertisements);
+        rrddim_set_by_pointer(st, rd_OutAdvertisements, Icmp6OutNeighborAdvertisements);
+        rrdset_done(st);
+    }
+
+    if(do_ip6_icmp_mldv2 == CONFIG_BOOLEAN_YES || (do_ip6_icmp_mldv2 == CONFIG_BOOLEAN_AUTO &&
+                                                   (Icmp6InMLDv2Reports ||
+                                                    Icmp6OutMLDv2Reports ||
+                                                    netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        do_ip6_icmp_mldv2 = CONFIG_BOOLEAN_YES;
+        static RRDSET *st = NULL;
+        static RRDDIM *rd_InMLDv2Reports = NULL,
+                      *rd_OutMLDv2Reports = NULL;
+
+        if(unlikely(!st)) {
+            st = rrdset_create_localhost(
+                    RRD_TYPE_NET_IP6
+                    , "icmpmldv2"
+                    , NULL
+                    , "icmp6"
+                    , NULL
+                    , "IPv6 ICMP MLDv2 Reports"
+                    , "reports/s"
+                    , PLUGIN_PROC_NAME
+                    , PLUGIN_PROC_MODULE_NETSTAT_NAME
+                    , NETDATA_CHART_PRIO_IPV6_ICMP_LDV2
+                    , update_every
+                    , RRDSET_TYPE_LINE
+            );
+
+            rd_InMLDv2Reports = rrddim_add(st, "InMLDv2Reports", "received", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_OutMLDv2Reports = rrddim_add(st, "OutMLDv2Reports", "sent", -1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st, rd_InMLDv2Reports, Icmp6InMLDv2Reports);
+        rrddim_set_by_pointer(st, rd_OutMLDv2Reports, Icmp6OutMLDv2Reports);
+        rrdset_done(st);
+    }
+
+    if(do_ip6_icmp_types == CONFIG_BOOLEAN_YES || (do_ip6_icmp_types == CONFIG_BOOLEAN_AUTO &&
+                                                   (Icmp6InType1 ||
+                                                    Icmp6InType128 ||
+                                                    Icmp6InType129 ||
+                                                    Icmp6InType136 ||
+                                                    Icmp6OutType1 ||
+                                                    Icmp6OutType128 ||
+                                                    Icmp6OutType129 ||
+                                                    Icmp6OutType133 ||
+                                                    Icmp6OutType135 ||
+                                                    Icmp6OutType143 ||
+                                                    netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        do_ip6_icmp_types = CONFIG_BOOLEAN_YES;
+        static RRDSET *st = NULL;
+        static RRDDIM *rd_InType1 = NULL,
+                      *rd_InType128 = NULL,
+                      *rd_InType129 = NULL,
+                      *rd_InType136 = NULL,
+                      *rd_OutType1 = NULL,
+                      *rd_OutType128 = NULL,
+                      *rd_OutType129 = NULL,
+                      *rd_OutType133 = NULL,
+                      *rd_OutType135 = NULL,
+                      *rd_OutType143 = NULL;
+
+        if(unlikely(!st)) {
+            st = rrdset_create_localhost(
+                    RRD_TYPE_NET_IP6
+                    , "icmptypes"
+                    , NULL
+                    , "icmp6"
+                    , NULL
+                    , "IPv6 ICMP Types"
+                    , "messages/s"
+                    , PLUGIN_PROC_NAME
+                    , PLUGIN_PROC_MODULE_NETSTAT_NAME
+                    , NETDATA_CHART_PRIO_IPV6_ICMP_TYPES
+                    , update_every
+                    , RRDSET_TYPE_LINE
+            );
+
+            rd_InType1 = rrddim_add(st, "InType1", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InType128 = rrddim_add(st, "InType128", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InType129 = rrddim_add(st, "InType129", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InType136 = rrddim_add(st, "InType136", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_OutType1 = rrddim_add(st, "OutType1", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_OutType128 = rrddim_add(st, "OutType128", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_OutType129 = rrddim_add(st, "OutType129", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_OutType133 = rrddim_add(st, "OutType133", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_OutType135 = rrddim_add(st, "OutType135", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_OutType143 = rrddim_add(st, "OutType143", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st, rd_InType1, Icmp6InType1);
+        rrddim_set_by_pointer(st, rd_InType128, Icmp6InType128);
+        rrddim_set_by_pointer(st, rd_InType129, Icmp6InType129);
+        rrddim_set_by_pointer(st, rd_InType136, Icmp6InType136);
+        rrddim_set_by_pointer(st, rd_OutType1, Icmp6OutType1);
+        rrddim_set_by_pointer(st, rd_OutType128, Icmp6OutType128);
+        rrddim_set_by_pointer(st, rd_OutType129, Icmp6OutType129);
+        rrddim_set_by_pointer(st, rd_OutType133, Icmp6OutType133);
+        rrddim_set_by_pointer(st, rd_OutType135, Icmp6OutType135);
+        rrddim_set_by_pointer(st, rd_OutType143, Icmp6OutType143);
+        rrdset_done(st);
+    }
+
+    if (do_ip6_ect == CONFIG_BOOLEAN_YES ||
+        (do_ip6_ect == CONFIG_BOOLEAN_AUTO && (Ip6InNoECTPkts || Ip6InECT1Pkts || Ip6InECT0Pkts || Ip6InCEPkts ||
+                                               netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        do_ip6_ect = CONFIG_BOOLEAN_YES;
+        static RRDSET *st = NULL;
+        static RRDDIM *rd_InNoECTPkts = NULL, *rd_InECT1Pkts = NULL, *rd_InECT0Pkts = NULL, *rd_InCEPkts = NULL;
+
+        if (unlikely(!st)) {
+            st = rrdset_create_localhost(
+                RRD_TYPE_NET_IP6,
+                "ect",
+                NULL,
+                "packets",
+                NULL,
+                "IPv6 ECT Packets",
+                "packets/s",
+                PLUGIN_PROC_NAME,
+                PLUGIN_PROC_MODULE_NETSTAT_NAME,
+                NETDATA_CHART_PRIO_IPV6_ECT,
+                update_every,
+                RRDSET_TYPE_LINE);
+
+            rd_InNoECTPkts = rrddim_add(st, "InNoECTPkts", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InECT1Pkts = rrddim_add(st, "InECT1Pkts", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InECT0Pkts = rrddim_add(st, "InECT0Pkts", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InCEPkts = rrddim_add(st, "InCEPkts", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st, rd_InNoECTPkts, Ip6InNoECTPkts);
+        rrddim_set_by_pointer(st, rd_InECT1Pkts, Ip6InECT1Pkts);
+        rrddim_set_by_pointer(st, rd_InECT0Pkts, Ip6InECT0Pkts);
+        rrddim_set_by_pointer(st, rd_InCEPkts, Ip6InCEPkts);
+        rrdset_done(st);
+    }
+}
+
+int do_proc_net_netstat(int update_every, usec_t dt) {
+    (void)dt;
+
+    static int do_bandwidth = -1, do_inerrors = -1, do_mcast = -1, do_bcast = -1, do_mcast_p = -1, do_bcast_p = -1, do_ecn = -1,
+               do_tcpext_reorder = -1, do_tcpext_syscookies = -1, do_tcpext_ofo = -1, do_tcpext_connaborts = -1, do_tcpext_memory = -1,
+               do_tcpext_syn_queue = -1, do_tcpext_accept_queue = -1;
+
+    static int do_ip_packets = -1, do_ip_fragsout = -1, do_ip_fragsin = -1, do_ip_errors = -1,
+               do_tcp_sockets = -1, do_tcp_packets = -1, do_tcp_errors = -1, do_tcp_handshake = -1, do_tcp_opens = -1,
+               do_udp_packets = -1, do_udp_errors = -1, do_icmp_packets = -1, do_icmpmsg = -1, do_udplite_packets = -1;
+
+    static uint32_t hash_ipext = 0, hash_tcpext = 0;
+    static uint32_t hash_ip = 0, hash_icmp = 0, hash_tcp = 0, hash_udp = 0, hash_icmpmsg = 0, hash_udplite = 0;
+
+    static procfile *ff_netstat = NULL;
+    static procfile *ff_snmp = NULL;
+
+    static ARL_BASE *arl_tcpext = NULL;
+    static ARL_BASE *arl_ipext = NULL;
+
+    static ARL_BASE *arl_ip = NULL;
+    static ARL_BASE *arl_icmp = NULL;
+    static ARL_BASE *arl_icmpmsg = NULL;
+    static ARL_BASE *arl_tcp = NULL;
+    static ARL_BASE *arl_udp = NULL;
+    static ARL_BASE *arl_udplite = NULL;
+
+    static const RRDVAR_ACQUIRED *tcp_max_connections_var = NULL;
+
+    // --------------------------------------------------------------------
+    // IP
+
+    // IP bandwidth
+    static unsigned long long ipext_InOctets = 0;
+    static unsigned long long ipext_OutOctets = 0;
+
+    // IP input errors
+    static unsigned long long ipext_InNoRoutes = 0;
+    static unsigned long long ipext_InTruncatedPkts = 0;
+    static unsigned long long ipext_InCsumErrors = 0;
+
+    // IP multicast bandwidth
+    static unsigned long long ipext_InMcastOctets = 0;
+    static unsigned long long ipext_OutMcastOctets = 0;
+
+    // IP multicast packets
+    static unsigned long long ipext_InMcastPkts = 0;
+    static unsigned long long ipext_OutMcastPkts = 0;
+
+    // IP broadcast bandwidth
+    static unsigned long long ipext_InBcastOctets = 0;
+    static unsigned long long ipext_OutBcastOctets = 0;
+
+    // IP broadcast packets
+    static unsigned long long ipext_InBcastPkts = 0;
+    static unsigned long long ipext_OutBcastPkts = 0;
+
+    // IP ECN
+    static unsigned long long ipext_InNoECTPkts = 0;
+    static unsigned long long ipext_InECT1Pkts = 0;
+    static unsigned long long ipext_InECT0Pkts = 0;
+    static unsigned long long ipext_InCEPkts = 0;
+
+    // --------------------------------------------------------------------
+    // IP TCP
+
+    // IP TCP Reordering
+    static unsigned long long tcpext_TCPRenoReorder = 0;
+    static unsigned long long tcpext_TCPFACKReorder = 0;
+    static unsigned long long tcpext_TCPSACKReorder = 0;
+    static unsigned long long tcpext_TCPTSReorder = 0;
+
+    // IP TCP SYN Cookies
+    static unsigned long long tcpext_SyncookiesSent = 0;
+    static unsigned long long tcpext_SyncookiesRecv = 0;
+    static unsigned long long tcpext_SyncookiesFailed = 0;
+
+    // IP TCP Out Of Order Queue
+    // http://www.spinics.net/lists/netdev/msg204696.html
+    static unsigned long long tcpext_TCPOFOQueue = 0; // Number of packets queued in OFO queue
+    static unsigned long long tcpext_TCPOFODrop = 0;  // Number of packets meant to be queued in OFO but dropped because socket rcvbuf limit hit.
+    static unsigned long long tcpext_TCPOFOMerge = 0; // Number of packets in OFO that were merged with other packets.
+ static unsigned long long tcpext_OfoPruned = 0; // packets dropped from out-of-order queue because of socket buffer overrun + + // IP TCP connection resets + // https://github.com/ecki/net-tools/blob/bd8bceaed2311651710331a7f8990c3e31be9840/statistics.c + static unsigned long long tcpext_TCPAbortOnData = 0; // connections reset due to unexpected data + static unsigned long long tcpext_TCPAbortOnClose = 0; // connections reset due to early user close + static unsigned long long tcpext_TCPAbortOnMemory = 0; // connections aborted due to memory pressure + static unsigned long long tcpext_TCPAbortOnTimeout = 0; // connections aborted due to timeout + static unsigned long long tcpext_TCPAbortOnLinger = 0; // connections aborted after user close in linger timeout + static unsigned long long tcpext_TCPAbortFailed = 0; // times unable to send RST due to no memory + + // https://perfchron.com/2015/12/26/investigating-linux-network-issues-with-netstat-and-nstat/ + static unsigned long long tcpext_ListenOverflows = 0; // times the listen queue of a socket overflowed + static unsigned long long tcpext_ListenDrops = 0; // SYNs to LISTEN sockets ignored + + // IP TCP memory pressures + static unsigned long long tcpext_TCPMemoryPressures = 0; + + static unsigned long long tcpext_TCPReqQFullDrop = 0; + static unsigned long long tcpext_TCPReqQFullDoCookies = 0; + + static unsigned long long tcpext_TCPSynRetrans = 0; + + // prepare for /proc/net/netstat parsing + + if(unlikely(!arl_ipext)) { + hash_ipext = simple_hash("IpExt"); + hash_tcpext = simple_hash("TcpExt"); + + do_bandwidth = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETSTAT, "bandwidth", CONFIG_BOOLEAN_AUTO); + do_inerrors = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETSTAT, "input errors", CONFIG_BOOLEAN_AUTO); + do_mcast = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETSTAT, "multicast bandwidth", CONFIG_BOOLEAN_AUTO); + do_bcast = 
config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETSTAT, "broadcast bandwidth", CONFIG_BOOLEAN_AUTO); + do_mcast_p = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETSTAT, "multicast packets", CONFIG_BOOLEAN_AUTO); + do_bcast_p = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETSTAT, "broadcast packets", CONFIG_BOOLEAN_AUTO); + do_ecn = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETSTAT, "ECN packets", CONFIG_BOOLEAN_AUTO); + + do_tcpext_reorder = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETSTAT, "TCP reorders", CONFIG_BOOLEAN_AUTO); + do_tcpext_syscookies = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETSTAT, "TCP SYN cookies", CONFIG_BOOLEAN_AUTO); + do_tcpext_ofo = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETSTAT, "TCP out-of-order queue", CONFIG_BOOLEAN_AUTO); + do_tcpext_connaborts = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETSTAT, "TCP connection aborts", CONFIG_BOOLEAN_AUTO); + do_tcpext_memory = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETSTAT, "TCP memory pressures", CONFIG_BOOLEAN_AUTO); + + do_tcpext_syn_queue = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETSTAT, "TCP SYN queue", CONFIG_BOOLEAN_AUTO); + do_tcpext_accept_queue = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETSTAT, "TCP accept queue", CONFIG_BOOLEAN_AUTO); + + arl_ipext = arl_create("netstat/ipext", NULL, 60); + arl_tcpext = arl_create("netstat/tcpext", NULL, 60); + + // -------------------------------------------------------------------- + // IP + + if(do_bandwidth != CONFIG_BOOLEAN_NO) { + arl_expect(arl_ipext, "InOctets", &ipext_InOctets); + arl_expect(arl_ipext, "OutOctets", &ipext_OutOctets); + } + + if(do_inerrors != CONFIG_BOOLEAN_NO) { + arl_expect(arl_ipext, "InNoRoutes", &ipext_InNoRoutes); + arl_expect(arl_ipext, "InTruncatedPkts", &ipext_InTruncatedPkts); + arl_expect(arl_ipext, "InCsumErrors", &ipext_InCsumErrors); + } + + if(do_mcast != 
CONFIG_BOOLEAN_NO) { + arl_expect(arl_ipext, "InMcastOctets", &ipext_InMcastOctets); + arl_expect(arl_ipext, "OutMcastOctets", &ipext_OutMcastOctets); + } + + if(do_mcast_p != CONFIG_BOOLEAN_NO) { + arl_expect(arl_ipext, "InMcastPkts", &ipext_InMcastPkts); + arl_expect(arl_ipext, "OutMcastPkts", &ipext_OutMcastPkts); + } + + if(do_bcast != CONFIG_BOOLEAN_NO) { + arl_expect(arl_ipext, "InBcastOctets", &ipext_InBcastOctets); + arl_expect(arl_ipext, "OutBcastOctets", &ipext_OutBcastOctets); + } + + if(do_bcast_p != CONFIG_BOOLEAN_NO) { + arl_expect(arl_ipext, "InBcastPkts", &ipext_InBcastPkts); + arl_expect(arl_ipext, "OutBcastPkts", &ipext_OutBcastPkts); + } + + if(do_ecn != CONFIG_BOOLEAN_NO) { + arl_expect(arl_ipext, "InNoECTPkts", &ipext_InNoECTPkts); + arl_expect(arl_ipext, "InECT1Pkts", &ipext_InECT1Pkts); + arl_expect(arl_ipext, "InECT0Pkts", &ipext_InECT0Pkts); + arl_expect(arl_ipext, "InCEPkts", &ipext_InCEPkts); + } + + // -------------------------------------------------------------------- + // IP TCP + + if(do_tcpext_reorder != CONFIG_BOOLEAN_NO) { + arl_expect(arl_tcpext, "TCPFACKReorder", &tcpext_TCPFACKReorder); + arl_expect(arl_tcpext, "TCPSACKReorder", &tcpext_TCPSACKReorder); + arl_expect(arl_tcpext, "TCPRenoReorder", &tcpext_TCPRenoReorder); + arl_expect(arl_tcpext, "TCPTSReorder", &tcpext_TCPTSReorder); + } + + if(do_tcpext_syscookies != CONFIG_BOOLEAN_NO) { + arl_expect(arl_tcpext, "SyncookiesSent", &tcpext_SyncookiesSent); + arl_expect(arl_tcpext, "SyncookiesRecv", &tcpext_SyncookiesRecv); + arl_expect(arl_tcpext, "SyncookiesFailed", &tcpext_SyncookiesFailed); + } + + if(do_tcpext_ofo != CONFIG_BOOLEAN_NO) { + arl_expect(arl_tcpext, "TCPOFOQueue", &tcpext_TCPOFOQueue); + arl_expect(arl_tcpext, "TCPOFODrop", &tcpext_TCPOFODrop); + arl_expect(arl_tcpext, "TCPOFOMerge", &tcpext_TCPOFOMerge); + arl_expect(arl_tcpext, "OfoPruned", &tcpext_OfoPruned); + } + + if(do_tcpext_connaborts != CONFIG_BOOLEAN_NO) { + arl_expect(arl_tcpext, "TCPAbortOnData", 
&tcpext_TCPAbortOnData); + arl_expect(arl_tcpext, "TCPAbortOnClose", &tcpext_TCPAbortOnClose); + arl_expect(arl_tcpext, "TCPAbortOnMemory", &tcpext_TCPAbortOnMemory); + arl_expect(arl_tcpext, "TCPAbortOnTimeout", &tcpext_TCPAbortOnTimeout); + arl_expect(arl_tcpext, "TCPAbortOnLinger", &tcpext_TCPAbortOnLinger); + arl_expect(arl_tcpext, "TCPAbortFailed", &tcpext_TCPAbortFailed); + } + + if(do_tcpext_memory != CONFIG_BOOLEAN_NO) { + arl_expect(arl_tcpext, "TCPMemoryPressures", &tcpext_TCPMemoryPressures); + } + + if(do_tcpext_accept_queue != CONFIG_BOOLEAN_NO) { + arl_expect(arl_tcpext, "ListenOverflows", &tcpext_ListenOverflows); + arl_expect(arl_tcpext, "ListenDrops", &tcpext_ListenDrops); + } + + if(do_tcpext_syn_queue != CONFIG_BOOLEAN_NO) { + arl_expect(arl_tcpext, "TCPReqQFullDrop", &tcpext_TCPReqQFullDrop); + arl_expect(arl_tcpext, "TCPReqQFullDoCookies", &tcpext_TCPReqQFullDoCookies); + } + + arl_expect(arl_tcpext, "TCPSynRetrans", &tcpext_TCPSynRetrans); + } + + // prepare for /proc/net/snmp parsing + + if(unlikely(!arl_ip)) { + do_ip_packets = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp", "ipv4 packets", CONFIG_BOOLEAN_AUTO); + do_ip_fragsout = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp", "ipv4 fragments sent", CONFIG_BOOLEAN_AUTO); + do_ip_fragsin = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp", "ipv4 fragments assembly", CONFIG_BOOLEAN_AUTO); + do_ip_errors = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp", "ipv4 errors", CONFIG_BOOLEAN_AUTO); + do_tcp_sockets = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp", "ipv4 TCP connections", CONFIG_BOOLEAN_AUTO); + do_tcp_packets = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp", "ipv4 TCP packets", CONFIG_BOOLEAN_AUTO); + do_tcp_errors = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp", "ipv4 TCP errors", CONFIG_BOOLEAN_AUTO); + do_tcp_opens = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp", "ipv4 TCP opens", CONFIG_BOOLEAN_AUTO); + 
do_tcp_handshake = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp", "ipv4 TCP handshake issues", CONFIG_BOOLEAN_AUTO); + do_udp_packets = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp", "ipv4 UDP packets", CONFIG_BOOLEAN_AUTO); + do_udp_errors = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp", "ipv4 UDP errors", CONFIG_BOOLEAN_AUTO); + do_icmp_packets = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp", "ipv4 ICMP packets", CONFIG_BOOLEAN_AUTO); + do_icmpmsg = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp", "ipv4 ICMP messages", CONFIG_BOOLEAN_AUTO); + do_udplite_packets = config_get_boolean_ondemand("plugin:proc:/proc/net/snmp", "ipv4 UDPLite packets", CONFIG_BOOLEAN_AUTO); + + hash_ip = simple_hash("Ip"); + hash_tcp = simple_hash("Tcp"); + hash_udp = simple_hash("Udp"); + hash_icmp = simple_hash("Icmp"); + hash_icmpmsg = simple_hash("IcmpMsg"); + hash_udplite = simple_hash("UdpLite"); + + arl_ip = arl_create("snmp/Ip", arl_callback_str2kernel_uint_t, 60); + // arl_expect(arl_ip, "Forwarding", &snmp_root.ip_Forwarding); + arl_expect(arl_ip, "DefaultTTL", &snmp_root.ip_DefaultTTL); + arl_expect(arl_ip, "InReceives", &snmp_root.ip_InReceives); + arl_expect(arl_ip, "InHdrErrors", &snmp_root.ip_InHdrErrors); + arl_expect(arl_ip, "InAddrErrors", &snmp_root.ip_InAddrErrors); + arl_expect(arl_ip, "ForwDatagrams", &snmp_root.ip_ForwDatagrams); + arl_expect(arl_ip, "InUnknownProtos", &snmp_root.ip_InUnknownProtos); + arl_expect(arl_ip, "InDiscards", &snmp_root.ip_InDiscards); + arl_expect(arl_ip, "InDelivers", &snmp_root.ip_InDelivers); + arl_expect(arl_ip, "OutRequests", &snmp_root.ip_OutRequests); + arl_expect(arl_ip, "OutDiscards", &snmp_root.ip_OutDiscards); + arl_expect(arl_ip, "OutNoRoutes", &snmp_root.ip_OutNoRoutes); + arl_expect(arl_ip, "ReasmTimeout", &snmp_root.ip_ReasmTimeout); + arl_expect(arl_ip, "ReasmReqds", &snmp_root.ip_ReasmReqds); + arl_expect(arl_ip, "ReasmOKs", &snmp_root.ip_ReasmOKs); + arl_expect(arl_ip, 
"ReasmFails", &snmp_root.ip_ReasmFails); + arl_expect(arl_ip, "FragOKs", &snmp_root.ip_FragOKs); + arl_expect(arl_ip, "FragFails", &snmp_root.ip_FragFails); + arl_expect(arl_ip, "FragCreates", &snmp_root.ip_FragCreates); + + arl_icmp = arl_create("snmp/Icmp", arl_callback_str2kernel_uint_t, 60); + arl_expect(arl_icmp, "InMsgs", &snmp_root.icmp_InMsgs); + arl_expect(arl_icmp, "OutMsgs", &snmp_root.icmp_OutMsgs); + arl_expect(arl_icmp, "InErrors", &snmp_root.icmp_InErrors); + arl_expect(arl_icmp, "OutErrors", &snmp_root.icmp_OutErrors); + arl_expect(arl_icmp, "InCsumErrors", &snmp_root.icmp_InCsumErrors); + + arl_icmpmsg = arl_create("snmp/Icmpmsg", arl_callback_str2kernel_uint_t, 60); + arl_expect(arl_icmpmsg, "InType0", &snmp_root.icmpmsg_InEchoReps); + arl_expect(arl_icmpmsg, "OutType0", &snmp_root.icmpmsg_OutEchoReps); + arl_expect(arl_icmpmsg, "InType3", &snmp_root.icmpmsg_InDestUnreachs); + arl_expect(arl_icmpmsg, "OutType3", &snmp_root.icmpmsg_OutDestUnreachs); + arl_expect(arl_icmpmsg, "InType5", &snmp_root.icmpmsg_InRedirects); + arl_expect(arl_icmpmsg, "OutType5", &snmp_root.icmpmsg_OutRedirects); + arl_expect(arl_icmpmsg, "InType8", &snmp_root.icmpmsg_InEchos); + arl_expect(arl_icmpmsg, "OutType8", &snmp_root.icmpmsg_OutEchos); + arl_expect(arl_icmpmsg, "InType9", &snmp_root.icmpmsg_InRouterAdvert); + arl_expect(arl_icmpmsg, "OutType9", &snmp_root.icmpmsg_OutRouterAdvert); + arl_expect(arl_icmpmsg, "InType10", &snmp_root.icmpmsg_InRouterSelect); + arl_expect(arl_icmpmsg, "OutType10", &snmp_root.icmpmsg_OutRouterSelect); + arl_expect(arl_icmpmsg, "InType11", &snmp_root.icmpmsg_InTimeExcds); + arl_expect(arl_icmpmsg, "OutType11", &snmp_root.icmpmsg_OutTimeExcds); + arl_expect(arl_icmpmsg, "InType12", &snmp_root.icmpmsg_InParmProbs); + arl_expect(arl_icmpmsg, "OutType12", &snmp_root.icmpmsg_OutParmProbs); + arl_expect(arl_icmpmsg, "InType13", &snmp_root.icmpmsg_InTimestamps); + arl_expect(arl_icmpmsg, "OutType13", &snmp_root.icmpmsg_OutTimestamps); + 
arl_expect(arl_icmpmsg, "InType14", &snmp_root.icmpmsg_InTimestampReps); + arl_expect(arl_icmpmsg, "OutType14", &snmp_root.icmpmsg_OutTimestampReps); + + arl_tcp = arl_create("snmp/Tcp", arl_callback_str2kernel_uint_t, 60); + // arl_expect(arl_tcp, "RtoAlgorithm", &snmp_root.tcp_RtoAlgorithm); + // arl_expect(arl_tcp, "RtoMin", &snmp_root.tcp_RtoMin); + // arl_expect(arl_tcp, "RtoMax", &snmp_root.tcp_RtoMax); + arl_expect_custom(arl_tcp, "MaxConn", arl_callback_ssize_t, &snmp_root.tcp_MaxConn); + arl_expect(arl_tcp, "ActiveOpens", &snmp_root.tcp_ActiveOpens); + arl_expect(arl_tcp, "PassiveOpens", &snmp_root.tcp_PassiveOpens); + arl_expect(arl_tcp, "AttemptFails", &snmp_root.tcp_AttemptFails); + arl_expect(arl_tcp, "EstabResets", &snmp_root.tcp_EstabResets); + arl_expect(arl_tcp, "CurrEstab", &snmp_root.tcp_CurrEstab); + arl_expect(arl_tcp, "InSegs", &snmp_root.tcp_InSegs); + arl_expect(arl_tcp, "OutSegs", &snmp_root.tcp_OutSegs); + arl_expect(arl_tcp, "RetransSegs", &snmp_root.tcp_RetransSegs); + arl_expect(arl_tcp, "InErrs", &snmp_root.tcp_InErrs); + arl_expect(arl_tcp, "OutRsts", &snmp_root.tcp_OutRsts); + arl_expect(arl_tcp, "InCsumErrors", &snmp_root.tcp_InCsumErrors); + + arl_udp = arl_create("snmp/Udp", arl_callback_str2kernel_uint_t, 60); + arl_expect(arl_udp, "InDatagrams", &snmp_root.udp_InDatagrams); + arl_expect(arl_udp, "NoPorts", &snmp_root.udp_NoPorts); + arl_expect(arl_udp, "InErrors", &snmp_root.udp_InErrors); + arl_expect(arl_udp, "OutDatagrams", &snmp_root.udp_OutDatagrams); + arl_expect(arl_udp, "RcvbufErrors", &snmp_root.udp_RcvbufErrors); + arl_expect(arl_udp, "SndbufErrors", &snmp_root.udp_SndbufErrors); + arl_expect(arl_udp, "InCsumErrors", &snmp_root.udp_InCsumErrors); + arl_expect(arl_udp, "IgnoredMulti", &snmp_root.udp_IgnoredMulti); + + arl_udplite = arl_create("snmp/Udplite", arl_callback_str2kernel_uint_t, 60); + arl_expect(arl_udplite, "InDatagrams", &snmp_root.udplite_InDatagrams); + arl_expect(arl_udplite, "NoPorts", 
&snmp_root.udplite_NoPorts); + arl_expect(arl_udplite, "InErrors", &snmp_root.udplite_InErrors); + arl_expect(arl_udplite, "OutDatagrams", &snmp_root.udplite_OutDatagrams); + arl_expect(arl_udplite, "RcvbufErrors", &snmp_root.udplite_RcvbufErrors); + arl_expect(arl_udplite, "SndbufErrors", &snmp_root.udplite_SndbufErrors); + arl_expect(arl_udplite, "InCsumErrors", &snmp_root.udplite_InCsumErrors); + arl_expect(arl_udplite, "IgnoredMulti", &snmp_root.udplite_IgnoredMulti); + + tcp_max_connections_var = rrdvar_host_variable_add_and_acquire(localhost, "tcp_max_connections"); + } + + size_t lines, l, words; + + // parse /proc/net/netstat + + if(unlikely(!ff_netstat)) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/net/netstat"); + ff_netstat = procfile_open(config_get(CONFIG_SECTION_PLUGIN_PROC_NETSTAT, "filename to monitor", filename), " \t:", PROCFILE_FLAG_DEFAULT); + if(unlikely(!ff_netstat)) return 1; + } + + ff_netstat = procfile_readall(ff_netstat); + if(unlikely(!ff_netstat)) return 0; // we return 0, so that we will retry to open it next time + + lines = procfile_lines(ff_netstat); + + arl_begin(arl_ipext); + arl_begin(arl_tcpext); + + for(l = 0; l < lines ;l++) { + char *key = procfile_lineword(ff_netstat, l, 0); + uint32_t hash = simple_hash(key); + + if(unlikely(hash == hash_ipext && strcmp(key, "IpExt") == 0)) { + size_t h = l++; + + words = procfile_linewords(ff_netstat, l); + if(unlikely(words < 2)) { + collector_error("Cannot read /proc/net/netstat IpExt line. Expected 2+ params, read %zu.", words); + continue; + } + + parse_line_pair(ff_netstat, arl_ipext, h, l); + + } + else if(unlikely(hash == hash_tcpext && strcmp(key, "TcpExt") == 0)) { + size_t h = l++; + + words = procfile_linewords(ff_netstat, l); + if(unlikely(words < 2)) { + collector_error("Cannot read /proc/net/netstat TcpExt line. 
Expected 2+ params, read %zu.", words); + continue; + } + + parse_line_pair(ff_netstat, arl_tcpext, h, l); + } + } + + // parse /proc/net/snmp + + if(unlikely(!ff_snmp)) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/net/snmp"); + ff_snmp = procfile_open(config_get("plugin:proc:/proc/net/snmp", "filename to monitor", filename), " \t:", PROCFILE_FLAG_DEFAULT); + if(unlikely(!ff_snmp)) return 1; + } + + ff_snmp = procfile_readall(ff_snmp); + if(unlikely(!ff_snmp)) return 0; // we return 0, so that we will retry to open it next time + + lines = procfile_lines(ff_snmp); + size_t w; + + for(l = 0; l < lines ;l++) { + char *key = procfile_lineword(ff_snmp, l, 0); + uint32_t hash = simple_hash(key); + + if(unlikely(hash == hash_ip && strcmp(key, "Ip") == 0)) { + size_t h = l++; + + if(strcmp(procfile_lineword(ff_snmp, l, 0), "Ip") != 0) { + collector_error("Cannot read Ip line from /proc/net/snmp."); + break; + } + + words = procfile_linewords(ff_snmp, l); + if(words < 3) { + collector_error("Cannot read /proc/net/snmp Ip line. Expected 3+ params, read %zu.", words); + continue; + } + + arl_begin(arl_ip); + for(w = 1; w < words ; w++) { + if (unlikely(arl_check(arl_ip, procfile_lineword(ff_snmp, h, w), procfile_lineword(ff_snmp, l, w)) != 0)) + break; + } + } + else if(unlikely(hash == hash_icmp && strcmp(key, "Icmp") == 0)) { + size_t h = l++; + + if(strcmp(procfile_lineword(ff_snmp, l, 0), "Icmp") != 0) { + collector_error("Cannot read Icmp line from /proc/net/snmp."); + break; + } + + words = procfile_linewords(ff_snmp, l); + if(words < 3) { + collector_error("Cannot read /proc/net/snmp Icmp line. 
Expected 3+ params, read %zu.", words); + continue; + } + + arl_begin(arl_icmp); + for(w = 1; w < words ; w++) { + if (unlikely(arl_check(arl_icmp, procfile_lineword(ff_snmp, h, w), procfile_lineword(ff_snmp, l, w)) != 0)) + break; + } + } + else if(unlikely(hash == hash_icmpmsg && strcmp(key, "IcmpMsg") == 0)) { + size_t h = l++; + + if(strcmp(procfile_lineword(ff_snmp, l, 0), "IcmpMsg") != 0) { + collector_error("Cannot read IcmpMsg line from /proc/net/snmp."); + break; + } + + words = procfile_linewords(ff_snmp, l); + if(words < 2) { + collector_error("Cannot read /proc/net/snmp IcmpMsg line. Expected 2+ params, read %zu.", words); + continue; + } + + arl_begin(arl_icmpmsg); + for(w = 1; w < words ; w++) { + if (unlikely(arl_check(arl_icmpmsg, procfile_lineword(ff_snmp, h, w), procfile_lineword(ff_snmp, l, w)) != 0)) + break; + } + } + else if(unlikely(hash == hash_tcp && strcmp(key, "Tcp") == 0)) { + size_t h = l++; + + if(strcmp(procfile_lineword(ff_snmp, l, 0), "Tcp") != 0) { + collector_error("Cannot read Tcp line from /proc/net/snmp."); + break; + } + + words = procfile_linewords(ff_snmp, l); + if(words < 3) { + collector_error("Cannot read /proc/net/snmp Tcp line. Expected 3+ params, read %zu.", words); + continue; + } + + arl_begin(arl_tcp); + for(w = 1; w < words ; w++) { + if (unlikely(arl_check(arl_tcp, procfile_lineword(ff_snmp, h, w), procfile_lineword(ff_snmp, l, w)) != 0)) + break; + } + } + else if(unlikely(hash == hash_udp && strcmp(key, "Udp") == 0)) { + size_t h = l++; + + if(strcmp(procfile_lineword(ff_snmp, l, 0), "Udp") != 0) { + collector_error("Cannot read Udp line from /proc/net/snmp."); + break; + } + + words = procfile_linewords(ff_snmp, l); + if(words < 3) { + collector_error("Cannot read /proc/net/snmp Udp line. 
Expected 3+ params, read %zu.", words); + continue; + } + + arl_begin(arl_udp); + for(w = 1; w < words ; w++) { + if (unlikely(arl_check(arl_udp, procfile_lineword(ff_snmp, h, w), procfile_lineword(ff_snmp, l, w)) != 0)) + break; + } + } + else if(unlikely(hash == hash_udplite && strcmp(key, "UdpLite") == 0)) { + size_t h = l++; + + if(strcmp(procfile_lineword(ff_snmp, l, 0), "UdpLite") != 0) { + collector_error("Cannot read UdpLite line from /proc/net/snmp."); + break; + } + + words = procfile_linewords(ff_snmp, l); + if(words < 3) { + collector_error("Cannot read /proc/net/snmp UdpLite line. Expected 3+ params, read %zu.", words); + continue; + } + + arl_begin(arl_udplite); + for(w = 1; w < words ; w++) { + if (unlikely(arl_check(arl_udplite, procfile_lineword(ff_snmp, h, w), procfile_lineword(ff_snmp, l, w)) != 0)) + break; + } + } + } + + // netstat IpExt charts + + if(do_bandwidth == CONFIG_BOOLEAN_YES || (do_bandwidth == CONFIG_BOOLEAN_AUTO && + (ipext_InOctets || + ipext_OutOctets || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_bandwidth = CONFIG_BOOLEAN_YES; + static RRDSET *st_system_ip = NULL; + static RRDDIM *rd_in = NULL, *rd_out = NULL; + + if(unlikely(!st_system_ip)) { + st_system_ip = rrdset_create_localhost( + "system" + , "ip" // FIXME: this is ipv4. 
Not changing it because it will require to do changes in cloud-frontend too + , NULL + , "network" + , NULL + , "IPv4 Bandwidth" + , "kilobits/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_SYSTEM_IP + , update_every + , RRDSET_TYPE_AREA + ); + + rd_in = rrddim_add(st_system_ip, "InOctets", "received", 8, BITS_IN_A_KILOBIT, RRD_ALGORITHM_INCREMENTAL); + rd_out = rrddim_add(st_system_ip, "OutOctets", "sent", -8, BITS_IN_A_KILOBIT, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_system_ip, rd_in, ipext_InOctets); + rrddim_set_by_pointer(st_system_ip, rd_out, ipext_OutOctets); + rrdset_done(st_system_ip); + } + + if(do_mcast == CONFIG_BOOLEAN_YES || (do_mcast == CONFIG_BOOLEAN_AUTO && + (ipext_InMcastOctets || + ipext_OutMcastOctets || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_mcast = CONFIG_BOOLEAN_YES; + static RRDSET *st_ip_mcast = NULL; + static RRDDIM *rd_in = NULL, *rd_out = NULL; + + if(unlikely(!st_ip_mcast)) { + st_ip_mcast = rrdset_create_localhost( + RRD_TYPE_NET_IP4 + , "mcast" + , NULL + , "multicast" + , NULL + , "IPv4 Multicast Bandwidth" + , "kilobits/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IPV4_MCAST + , update_every + , RRDSET_TYPE_AREA + ); + + rrdset_flag_set(st_ip_mcast, RRDSET_FLAG_DETAIL); + + rd_in = rrddim_add(st_ip_mcast, "InMcastOctets", "received", 8, BITS_IN_A_KILOBIT, RRD_ALGORITHM_INCREMENTAL); + rd_out = rrddim_add(st_ip_mcast, "OutMcastOctets", "sent", -8, BITS_IN_A_KILOBIT, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_ip_mcast, rd_in, ipext_InMcastOctets); + rrddim_set_by_pointer(st_ip_mcast, rd_out, ipext_OutMcastOctets); + + rrdset_done(st_ip_mcast); + } + + // -------------------------------------------------------------------- + + if(do_bcast == CONFIG_BOOLEAN_YES || (do_bcast == CONFIG_BOOLEAN_AUTO && + (ipext_InBcastOctets || + ipext_OutBcastOctets || + netdata_zero_metrics_enabled == 
CONFIG_BOOLEAN_YES))) { + do_bcast = CONFIG_BOOLEAN_YES; + + static RRDSET *st_ip_bcast = NULL; + static RRDDIM *rd_in = NULL, *rd_out = NULL; + + if(unlikely(!st_ip_bcast)) { + st_ip_bcast = rrdset_create_localhost( + RRD_TYPE_NET_IP4 + , "bcast" + , NULL + , "broadcast" + , NULL + , "IPv4 Broadcast Bandwidth" + , "kilobits/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IPV4_BCAST + , update_every + , RRDSET_TYPE_AREA + ); + + rrdset_flag_set(st_ip_bcast, RRDSET_FLAG_DETAIL); + + rd_in = rrddim_add(st_ip_bcast, "InBcastOctets", "received", 8, BITS_IN_A_KILOBIT, RRD_ALGORITHM_INCREMENTAL); + rd_out = rrddim_add(st_ip_bcast, "OutBcastOctets", "sent", -8, BITS_IN_A_KILOBIT, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_ip_bcast, rd_in, ipext_InBcastOctets); + rrddim_set_by_pointer(st_ip_bcast, rd_out, ipext_OutBcastOctets); + + rrdset_done(st_ip_bcast); + } + + // -------------------------------------------------------------------- + + if(do_mcast_p == CONFIG_BOOLEAN_YES || (do_mcast_p == CONFIG_BOOLEAN_AUTO && + (ipext_InMcastPkts || + ipext_OutMcastPkts || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_mcast_p = CONFIG_BOOLEAN_YES; + + static RRDSET *st_ip_mcastpkts = NULL; + static RRDDIM *rd_in = NULL, *rd_out = NULL; + + if(unlikely(!st_ip_mcastpkts)) { + st_ip_mcastpkts = rrdset_create_localhost( + RRD_TYPE_NET_IP4 + , "mcastpkts" + , NULL + , "multicast" + , NULL + , "IPv4 Multicast Packets" + , "packets/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IPV4_MCAST_PACKETS + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(st_ip_mcastpkts, RRDSET_FLAG_DETAIL); + + rd_in = rrddim_add(st_ip_mcastpkts, "InMcastPkts", "received", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_out = rrddim_add(st_ip_mcastpkts, "OutMcastPkts", "sent", -1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_ip_mcastpkts, rd_in, ipext_InMcastPkts); + 
rrddim_set_by_pointer(st_ip_mcastpkts, rd_out, ipext_OutMcastPkts); + rrdset_done(st_ip_mcastpkts); + } + + if(do_bcast_p == CONFIG_BOOLEAN_YES || (do_bcast_p == CONFIG_BOOLEAN_AUTO && + (ipext_InBcastPkts || + ipext_OutBcastPkts || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_bcast_p = CONFIG_BOOLEAN_YES; + + static RRDSET *st_ip_bcastpkts = NULL; + static RRDDIM *rd_in = NULL, *rd_out = NULL; + + if(unlikely(!st_ip_bcastpkts)) { + st_ip_bcastpkts = rrdset_create_localhost( + RRD_TYPE_NET_IP4 + , "bcastpkts" + , NULL + , "broadcast" + , NULL + , "IPv4 Broadcast Packets" + , "packets/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IPV4_BCAST_PACKETS + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(st_ip_bcastpkts, RRDSET_FLAG_DETAIL); + + rd_in = rrddim_add(st_ip_bcastpkts, "InBcastPkts", "received", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_out = rrddim_add(st_ip_bcastpkts, "OutBcastPkts", "sent", -1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_ip_bcastpkts, rd_in, ipext_InBcastPkts); + rrddim_set_by_pointer(st_ip_bcastpkts, rd_out, ipext_OutBcastPkts); + rrdset_done(st_ip_bcastpkts); + } + + if(do_ecn == CONFIG_BOOLEAN_YES || (do_ecn == CONFIG_BOOLEAN_AUTO && + (ipext_InCEPkts || + ipext_InECT0Pkts || + ipext_InECT1Pkts || + ipext_InNoECTPkts || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_ecn = CONFIG_BOOLEAN_YES; + + static RRDSET *st_ecnpkts = NULL; + static RRDDIM *rd_cep = NULL, *rd_noectp = NULL, *rd_ectp0 = NULL, *rd_ectp1 = NULL; + + if(unlikely(!st_ecnpkts)) { + st_ecnpkts = rrdset_create_localhost( + RRD_TYPE_NET_IP4 + , "ecnpkts" + , NULL + , "ecn" + , NULL + , "IPv4 ECN Statistics" + , "packets/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IPV4_ECN + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(st_ecnpkts, RRDSET_FLAG_DETAIL); + + rd_cep = rrddim_add(st_ecnpkts, "InCEPkts", "CEP", 1, 1, 
RRD_ALGORITHM_INCREMENTAL); + rd_noectp = rrddim_add(st_ecnpkts, "InNoECTPkts", "NoECTP", -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_ectp0 = rrddim_add(st_ecnpkts, "InECT0Pkts", "ECTP0", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_ectp1 = rrddim_add(st_ecnpkts, "InECT1Pkts", "ECTP1", 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_ecnpkts, rd_cep, ipext_InCEPkts); + rrddim_set_by_pointer(st_ecnpkts, rd_noectp, ipext_InNoECTPkts); + rrddim_set_by_pointer(st_ecnpkts, rd_ectp0, ipext_InECT0Pkts); + rrddim_set_by_pointer(st_ecnpkts, rd_ectp1, ipext_InECT1Pkts); + rrdset_done(st_ecnpkts); + } + + // netstat TcpExt charts + + if(do_tcpext_memory == CONFIG_BOOLEAN_YES || (do_tcpext_memory == CONFIG_BOOLEAN_AUTO && + (tcpext_TCPMemoryPressures || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_tcpext_memory = CONFIG_BOOLEAN_YES; + + static RRDSET *st_tcpmemorypressures = NULL; + static RRDDIM *rd_pressures = NULL; + + if(unlikely(!st_tcpmemorypressures)) { + st_tcpmemorypressures = rrdset_create_localhost( + RRD_TYPE_NET_IP + , "tcpmemorypressures" + , NULL + , "tcp" + , NULL + , "TCP Memory Pressures" + , "events/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IP_TCP_MEM_PRESSURE + , update_every + , RRDSET_TYPE_LINE + ); + + rd_pressures = rrddim_add(st_tcpmemorypressures, "TCPMemoryPressures", "pressures", 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_tcpmemorypressures, rd_pressures, tcpext_TCPMemoryPressures); + rrdset_done(st_tcpmemorypressures); + } + + if(do_tcpext_connaborts == CONFIG_BOOLEAN_YES || (do_tcpext_connaborts == CONFIG_BOOLEAN_AUTO && + (tcpext_TCPAbortOnData || + tcpext_TCPAbortOnClose || + tcpext_TCPAbortOnMemory || + tcpext_TCPAbortOnTimeout || + tcpext_TCPAbortOnLinger || + tcpext_TCPAbortFailed || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_tcpext_connaborts = CONFIG_BOOLEAN_YES; + + static RRDSET *st_tcpconnaborts = NULL; + static RRDDIM 
*rd_baddata = NULL, *rd_userclosed = NULL, *rd_nomemory = NULL, *rd_timeout = NULL, *rd_linger = NULL, *rd_failed = NULL; + + if(unlikely(!st_tcpconnaborts)) { + st_tcpconnaborts = rrdset_create_localhost( + RRD_TYPE_NET_IP + , "tcpconnaborts" + , NULL + , "tcp" + , NULL + , "TCP Connection Aborts" + , "connections/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IP_TCP_CONNABORTS + , update_every + , RRDSET_TYPE_LINE + ); + + rd_baddata = rrddim_add(st_tcpconnaborts, "TCPAbortOnData", "baddata", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_userclosed = rrddim_add(st_tcpconnaborts, "TCPAbortOnClose", "userclosed", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_nomemory = rrddim_add(st_tcpconnaborts, "TCPAbortOnMemory", "nomemory", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_timeout = rrddim_add(st_tcpconnaborts, "TCPAbortOnTimeout", "timeout", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_linger = rrddim_add(st_tcpconnaborts, "TCPAbortOnLinger", "linger", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_failed = rrddim_add(st_tcpconnaborts, "TCPAbortFailed", "failed", -1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_tcpconnaborts, rd_baddata, tcpext_TCPAbortOnData); + rrddim_set_by_pointer(st_tcpconnaborts, rd_userclosed, tcpext_TCPAbortOnClose); + rrddim_set_by_pointer(st_tcpconnaborts, rd_nomemory, tcpext_TCPAbortOnMemory); + rrddim_set_by_pointer(st_tcpconnaborts, rd_timeout, tcpext_TCPAbortOnTimeout); + rrddim_set_by_pointer(st_tcpconnaborts, rd_linger, tcpext_TCPAbortOnLinger); + rrddim_set_by_pointer(st_tcpconnaborts, rd_failed, tcpext_TCPAbortFailed); + rrdset_done(st_tcpconnaborts); + } + + if(do_tcpext_reorder == CONFIG_BOOLEAN_YES || (do_tcpext_reorder == CONFIG_BOOLEAN_AUTO && + (tcpext_TCPRenoReorder || + tcpext_TCPFACKReorder || + tcpext_TCPSACKReorder || + tcpext_TCPTSReorder || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_tcpext_reorder = CONFIG_BOOLEAN_YES; + + static RRDSET *st_tcpreorders = NULL; + static RRDDIM 
*rd_timestamp = NULL, *rd_sack = NULL, *rd_fack = NULL, *rd_reno = NULL; + + if(unlikely(!st_tcpreorders)) { + st_tcpreorders = rrdset_create_localhost( + RRD_TYPE_NET_IP + , "tcpreorders" + , NULL + , "tcp" + , NULL + , "TCP Reordered Packets by Detection Method" + , "packets/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IP_TCP_REORDERS + , update_every + , RRDSET_TYPE_LINE + ); + + rd_timestamp = rrddim_add(st_tcpreorders, "TCPTSReorder", "timestamp", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_sack = rrddim_add(st_tcpreorders, "TCPSACKReorder", "sack", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_fack = rrddim_add(st_tcpreorders, "TCPFACKReorder", "fack", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_reno = rrddim_add(st_tcpreorders, "TCPRenoReorder", "reno", 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_tcpreorders, rd_timestamp, tcpext_TCPTSReorder); + rrddim_set_by_pointer(st_tcpreorders, rd_sack, tcpext_TCPSACKReorder); + rrddim_set_by_pointer(st_tcpreorders, rd_fack, tcpext_TCPFACKReorder); + rrddim_set_by_pointer(st_tcpreorders, rd_reno, tcpext_TCPRenoReorder); + rrdset_done(st_tcpreorders); + } + + // -------------------------------------------------------------------- + + if(do_tcpext_ofo == CONFIG_BOOLEAN_YES || (do_tcpext_ofo == CONFIG_BOOLEAN_AUTO && + (tcpext_TCPOFOQueue || + tcpext_TCPOFODrop || + tcpext_TCPOFOMerge || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_tcpext_ofo = CONFIG_BOOLEAN_YES; + + static RRDSET *st_ip_tcpofo = NULL; + static RRDDIM *rd_inqueue = NULL, *rd_dropped = NULL, *rd_merged = NULL, *rd_pruned = NULL; + + if(unlikely(!st_ip_tcpofo)) { + + st_ip_tcpofo = rrdset_create_localhost( + RRD_TYPE_NET_IP + , "tcpofo" + , NULL + , "tcp" + , NULL + , "TCP Out-Of-Order Queue" + , "packets/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IP_TCP_OFO + , update_every + , RRDSET_TYPE_LINE + ); + + rd_inqueue = rrddim_add(st_ip_tcpofo, "TCPOFOQueue", 
"inqueue", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_dropped = rrddim_add(st_ip_tcpofo, "TCPOFODrop", "dropped", -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_merged = rrddim_add(st_ip_tcpofo, "TCPOFOMerge", "merged", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_pruned = rrddim_add(st_ip_tcpofo, "OfoPruned", "pruned", -1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_ip_tcpofo, rd_inqueue, tcpext_TCPOFOQueue); + rrddim_set_by_pointer(st_ip_tcpofo, rd_dropped, tcpext_TCPOFODrop); + rrddim_set_by_pointer(st_ip_tcpofo, rd_merged, tcpext_TCPOFOMerge); + rrddim_set_by_pointer(st_ip_tcpofo, rd_pruned, tcpext_OfoPruned); + rrdset_done(st_ip_tcpofo); + } + + if(do_tcpext_syscookies == CONFIG_BOOLEAN_YES || (do_tcpext_syscookies == CONFIG_BOOLEAN_AUTO && + (tcpext_SyncookiesSent || + tcpext_SyncookiesRecv || + tcpext_SyncookiesFailed || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_tcpext_syscookies = CONFIG_BOOLEAN_YES; + + static RRDSET *st_syncookies = NULL; + static RRDDIM *rd_received = NULL, *rd_sent = NULL, *rd_failed = NULL; + + if(unlikely(!st_syncookies)) { + + st_syncookies = rrdset_create_localhost( + RRD_TYPE_NET_IP + , "tcpsyncookies" + , NULL + , "tcp" + , NULL + , "TCP SYN Cookies" + , "packets/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IP_TCP_SYNCOOKIES + , update_every + , RRDSET_TYPE_LINE + ); + + rd_received = rrddim_add(st_syncookies, "SyncookiesRecv", "received", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_sent = rrddim_add(st_syncookies, "SyncookiesSent", "sent", -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_failed = rrddim_add(st_syncookies, "SyncookiesFailed", "failed", -1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_syncookies, rd_received, tcpext_SyncookiesRecv); + rrddim_set_by_pointer(st_syncookies, rd_sent, tcpext_SyncookiesSent); + rrddim_set_by_pointer(st_syncookies, rd_failed, tcpext_SyncookiesFailed); + rrdset_done(st_syncookies); + } + + if(do_tcpext_syn_queue == 
CONFIG_BOOLEAN_YES || (do_tcpext_syn_queue == CONFIG_BOOLEAN_AUTO && + (tcpext_TCPReqQFullDrop || + tcpext_TCPReqQFullDoCookies || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_tcpext_syn_queue = CONFIG_BOOLEAN_YES; + + static RRDSET *st_syn_queue = NULL; + static RRDDIM + *rd_TCPReqQFullDrop = NULL, + *rd_TCPReqQFullDoCookies = NULL; + + if(unlikely(!st_syn_queue)) { + + st_syn_queue = rrdset_create_localhost( + RRD_TYPE_NET_IP + , "tcp_syn_queue" + , NULL + , "tcp" + , NULL + , "TCP SYN Queue Issues" + , "packets/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IP_TCP_SYN_QUEUE + , update_every + , RRDSET_TYPE_LINE + ); + + rd_TCPReqQFullDrop = rrddim_add(st_syn_queue, "TCPReqQFullDrop", "drops", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_TCPReqQFullDoCookies = rrddim_add(st_syn_queue, "TCPReqQFullDoCookies", "cookies", 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_syn_queue, rd_TCPReqQFullDrop, tcpext_TCPReqQFullDrop); + rrddim_set_by_pointer(st_syn_queue, rd_TCPReqQFullDoCookies, tcpext_TCPReqQFullDoCookies); + rrdset_done(st_syn_queue); + } + + if(do_tcpext_accept_queue == CONFIG_BOOLEAN_YES || (do_tcpext_accept_queue == CONFIG_BOOLEAN_AUTO && + (tcpext_ListenOverflows || + tcpext_ListenDrops || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_tcpext_accept_queue = CONFIG_BOOLEAN_YES; + + static RRDSET *st_accept_queue = NULL; + static RRDDIM *rd_overflows = NULL, + *rd_drops = NULL; + + if(unlikely(!st_accept_queue)) { + + st_accept_queue = rrdset_create_localhost( + RRD_TYPE_NET_IP + , "tcp_accept_queue" + , NULL + , "tcp" + , NULL + , "TCP Accept Queue Issues" + , "packets/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IP_TCP_ACCEPT_QUEUE + , update_every + , RRDSET_TYPE_LINE + ); + + rd_overflows = rrddim_add(st_accept_queue, "ListenOverflows", "overflows", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_drops = rrddim_add(st_accept_queue, 
"ListenDrops", "drops", 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_accept_queue, rd_overflows, tcpext_ListenOverflows); + rrddim_set_by_pointer(st_accept_queue, rd_drops, tcpext_ListenDrops); + rrdset_done(st_accept_queue); + } + + // snmp Ip charts + + if(do_ip_packets == CONFIG_BOOLEAN_YES || (do_ip_packets == CONFIG_BOOLEAN_AUTO && + (snmp_root.ip_OutRequests || + snmp_root.ip_InReceives || + snmp_root.ip_ForwDatagrams || + snmp_root.ip_InDelivers || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_ip_packets = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + static RRDDIM *rd_InReceives = NULL, + *rd_OutRequests = NULL, + *rd_ForwDatagrams = NULL, + *rd_InDelivers = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + RRD_TYPE_NET_IP4 + , "packets" + , NULL + , "packets" + , NULL + , "IPv4 Packets" + , "packets/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IPV4_PACKETS + , update_every + , RRDSET_TYPE_LINE + ); + + rd_InReceives = rrddim_add(st, "InReceives", "received", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_OutRequests = rrddim_add(st, "OutRequests", "sent", -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_ForwDatagrams = rrddim_add(st, "ForwDatagrams", "forwarded", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_InDelivers = rrddim_add(st, "InDelivers", "delivered", 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st, rd_OutRequests, (collected_number)snmp_root.ip_OutRequests); + rrddim_set_by_pointer(st, rd_InReceives, (collected_number)snmp_root.ip_InReceives); + rrddim_set_by_pointer(st, rd_ForwDatagrams, (collected_number)snmp_root.ip_ForwDatagrams); + rrddim_set_by_pointer(st, rd_InDelivers, (collected_number)snmp_root.ip_InDelivers); + rrdset_done(st); + } + + if(do_ip_fragsout == CONFIG_BOOLEAN_YES || (do_ip_fragsout == CONFIG_BOOLEAN_AUTO && + (snmp_root.ip_FragOKs || + snmp_root.ip_FragFails || + snmp_root.ip_FragCreates || + netdata_zero_metrics_enabled == 
CONFIG_BOOLEAN_YES))) { + do_ip_fragsout = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + static RRDDIM *rd_FragOKs = NULL, + *rd_FragFails = NULL, + *rd_FragCreates = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + RRD_TYPE_NET_IP4 + , "fragsout" + , NULL + , "fragments" + , NULL + , "IPv4 Fragments Sent" + , "packets/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IPV4_FRAGMENTS_OUT + , update_every + , RRDSET_TYPE_LINE + ); + rrdset_flag_set(st, RRDSET_FLAG_DETAIL); + + rd_FragOKs = rrddim_add(st, "FragOKs", "ok", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_FragFails = rrddim_add(st, "FragFails", "failed", -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_FragCreates = rrddim_add(st, "FragCreates", "created", 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st, rd_FragOKs, (collected_number)snmp_root.ip_FragOKs); + rrddim_set_by_pointer(st, rd_FragFails, (collected_number)snmp_root.ip_FragFails); + rrddim_set_by_pointer(st, rd_FragCreates, (collected_number)snmp_root.ip_FragCreates); + rrdset_done(st); + } + + if(do_ip_fragsin == CONFIG_BOOLEAN_YES || (do_ip_fragsin == CONFIG_BOOLEAN_AUTO && + (snmp_root.ip_ReasmOKs || + snmp_root.ip_ReasmFails || + snmp_root.ip_ReasmReqds || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_ip_fragsin = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + static RRDDIM *rd_ReasmOKs = NULL, + *rd_ReasmFails = NULL, + *rd_ReasmReqds = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + RRD_TYPE_NET_IP4 + , "fragsin" + , NULL + , "fragments" + , NULL + , "IPv4 Fragments Reassembly" + , "packets/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IPV4_FRAGMENTS_IN + , update_every + , RRDSET_TYPE_LINE + ); + rrdset_flag_set(st, RRDSET_FLAG_DETAIL); + + rd_ReasmOKs = rrddim_add(st, "ReasmOKs", "ok", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_ReasmFails = rrddim_add(st, "ReasmFails", "failed", -1, 1, RRD_ALGORITHM_INCREMENTAL); 
+ rd_ReasmReqds = rrddim_add(st, "ReasmReqds", "all", 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st, rd_ReasmOKs, (collected_number)snmp_root.ip_ReasmOKs); + rrddim_set_by_pointer(st, rd_ReasmFails, (collected_number)snmp_root.ip_ReasmFails); + rrddim_set_by_pointer(st, rd_ReasmReqds, (collected_number)snmp_root.ip_ReasmReqds); + rrdset_done(st); + } + + if(do_ip_errors == CONFIG_BOOLEAN_YES || (do_ip_errors == CONFIG_BOOLEAN_AUTO && + (snmp_root.ip_InDiscards || + snmp_root.ip_OutDiscards || + snmp_root.ip_InHdrErrors || + snmp_root.ip_InAddrErrors || + snmp_root.ip_InUnknownProtos || + snmp_root.ip_OutNoRoutes || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_ip_errors = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + static RRDDIM *rd_InDiscards = NULL, + *rd_OutDiscards = NULL, + *rd_InHdrErrors = NULL, + *rd_InNoRoutes = NULL, + *rd_OutNoRoutes = NULL, + *rd_InAddrErrors = NULL, + *rd_InTruncatedPkts = NULL, + *rd_InCsumErrors = NULL, + *rd_InUnknownProtos = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + RRD_TYPE_NET_IP4 + , "errors" + , NULL + , "errors" + , NULL + , "IPv4 Errors" + , "packets/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IPV4_ERRORS + , update_every + , RRDSET_TYPE_LINE + ); + rrdset_flag_set(st, RRDSET_FLAG_DETAIL); + + rd_InDiscards = rrddim_add(st, "InDiscards", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_OutDiscards = rrddim_add(st, "OutDiscards", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + + rd_InNoRoutes = rrddim_add(st, "InNoRoutes", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_OutNoRoutes = rrddim_add(st, "OutNoRoutes", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + + rd_InHdrErrors = rrddim_add(st, "InHdrErrors", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_InAddrErrors = rrddim_add(st, "InAddrErrors", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_InUnknownProtos = rrddim_add(st, "InUnknownProtos", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + 
rd_InTruncatedPkts = rrddim_add(st, "InTruncatedPkts", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_InCsumErrors = rrddim_add(st, "InCsumErrors", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st, rd_InDiscards, (collected_number)snmp_root.ip_InDiscards); + rrddim_set_by_pointer(st, rd_OutDiscards, (collected_number)snmp_root.ip_OutDiscards); + rrddim_set_by_pointer(st, rd_InHdrErrors, (collected_number)snmp_root.ip_InHdrErrors); + rrddim_set_by_pointer(st, rd_InAddrErrors, (collected_number)snmp_root.ip_InAddrErrors); + rrddim_set_by_pointer(st, rd_InUnknownProtos, (collected_number)snmp_root.ip_InUnknownProtos); + rrddim_set_by_pointer(st, rd_InNoRoutes, (collected_number)ipext_InNoRoutes); + rrddim_set_by_pointer(st, rd_OutNoRoutes, (collected_number)snmp_root.ip_OutNoRoutes); + rrddim_set_by_pointer(st, rd_InTruncatedPkts, (collected_number)ipext_InTruncatedPkts); + rrddim_set_by_pointer(st, rd_InCsumErrors, (collected_number)ipext_InCsumErrors); + rrdset_done(st); + } + + // snmp Icmp charts + + if(do_icmp_packets == CONFIG_BOOLEAN_YES || (do_icmp_packets == CONFIG_BOOLEAN_AUTO && + (snmp_root.icmp_InMsgs || + snmp_root.icmp_OutMsgs || + snmp_root.icmp_InErrors || + snmp_root.icmp_OutErrors || + snmp_root.icmp_InCsumErrors || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_icmp_packets = CONFIG_BOOLEAN_YES; + + { + static RRDSET *st_packets = NULL; + static RRDDIM *rd_InMsgs = NULL, + *rd_OutMsgs = NULL; + + if(unlikely(!st_packets)) { + st_packets = rrdset_create_localhost( + RRD_TYPE_NET_IP4 + , "icmp" + , NULL + , "icmp" + , NULL + , "IPv4 ICMP Packets" + , "packets/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IPV4_ICMP_PACKETS + , update_every + , RRDSET_TYPE_LINE + ); + + rd_InMsgs = rrddim_add(st_packets, "InMsgs", "received", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_OutMsgs = rrddim_add(st_packets, "OutMsgs", "sent", -1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + 
rrddim_set_by_pointer(st_packets, rd_InMsgs, (collected_number)snmp_root.icmp_InMsgs); + rrddim_set_by_pointer(st_packets, rd_OutMsgs, (collected_number)snmp_root.icmp_OutMsgs); + rrdset_done(st_packets); + } + + { + static RRDSET *st_errors = NULL; + static RRDDIM *rd_InErrors = NULL, + *rd_OutErrors = NULL, + *rd_InCsumErrors = NULL; + + if(unlikely(!st_errors)) { + st_errors = rrdset_create_localhost( + RRD_TYPE_NET_IP4 + , "icmp_errors" + , NULL + , "icmp" + , NULL + , "IPv4 ICMP Errors" + , "packets/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IPV4_ICMP_ERRORS + , update_every + , RRDSET_TYPE_LINE + ); + + rd_InErrors = rrddim_add(st_errors, "InErrors", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_OutErrors = rrddim_add(st_errors, "OutErrors", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_InCsumErrors = rrddim_add(st_errors, "InCsumErrors", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_errors, rd_InErrors, (collected_number)snmp_root.icmp_InErrors); + rrddim_set_by_pointer(st_errors, rd_OutErrors, (collected_number)snmp_root.icmp_OutErrors); + rrddim_set_by_pointer(st_errors, rd_InCsumErrors, (collected_number)snmp_root.icmp_InCsumErrors); + rrdset_done(st_errors); + } + } + + // snmp IcmpMsg charts + + if(do_icmpmsg == CONFIG_BOOLEAN_YES || (do_icmpmsg == CONFIG_BOOLEAN_AUTO && + (snmp_root.icmpmsg_InEchoReps || + snmp_root.icmpmsg_OutEchoReps || + snmp_root.icmpmsg_InDestUnreachs || + snmp_root.icmpmsg_OutDestUnreachs || + snmp_root.icmpmsg_InRedirects || + snmp_root.icmpmsg_OutRedirects || + snmp_root.icmpmsg_InEchos || + snmp_root.icmpmsg_OutEchos || + snmp_root.icmpmsg_InRouterAdvert || + snmp_root.icmpmsg_OutRouterAdvert || + snmp_root.icmpmsg_InRouterSelect || + snmp_root.icmpmsg_OutRouterSelect || + snmp_root.icmpmsg_InTimeExcds || + snmp_root.icmpmsg_OutTimeExcds || + snmp_root.icmpmsg_InParmProbs || + snmp_root.icmpmsg_OutParmProbs || + snmp_root.icmpmsg_InTimestamps || + 
snmp_root.icmpmsg_OutTimestamps || + snmp_root.icmpmsg_InTimestampReps || + snmp_root.icmpmsg_OutTimestampReps || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_icmpmsg = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + static RRDDIM *rd_InEchoReps = NULL, + *rd_OutEchoReps = NULL, + *rd_InDestUnreachs = NULL, + *rd_OutDestUnreachs = NULL, + *rd_InRedirects = NULL, + *rd_OutRedirects = NULL, + *rd_InEchos = NULL, + *rd_OutEchos = NULL, + *rd_InRouterAdvert = NULL, + *rd_OutRouterAdvert = NULL, + *rd_InRouterSelect = NULL, + *rd_OutRouterSelect = NULL, + *rd_InTimeExcds = NULL, + *rd_OutTimeExcds = NULL, + *rd_InParmProbs = NULL, + *rd_OutParmProbs = NULL, + *rd_InTimestamps = NULL, + *rd_OutTimestamps = NULL, + *rd_InTimestampReps = NULL, + *rd_OutTimestampReps = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + RRD_TYPE_NET_IP4 + , "icmpmsg" + , NULL + , "icmp" + , NULL + , "IPv4 ICMP Messages" + , "packets/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IPV4_ICMP_MESSAGES + , update_every + , RRDSET_TYPE_LINE + ); + + rd_InEchoReps = rrddim_add(st, "InType0", "InEchoReps", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_OutEchoReps = rrddim_add(st, "OutType0", "OutEchoReps", -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_InDestUnreachs = rrddim_add(st, "InType3", "InDestUnreachs", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_OutDestUnreachs = rrddim_add(st, "OutType3", "OutDestUnreachs", -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_InRedirects = rrddim_add(st, "InType5", "InRedirects", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_OutRedirects = rrddim_add(st, "OutType5", "OutRedirects", -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_InEchos = rrddim_add(st, "InType8", "InEchos", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_OutEchos = rrddim_add(st, "OutType8", "OutEchos", -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_InRouterAdvert = rrddim_add(st, "InType9", "InRouterAdvert", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_OutRouterAdvert = rrddim_add(st, 
"OutType9", "OutRouterAdvert", -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_InRouterSelect = rrddim_add(st, "InType10", "InRouterSelect", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_OutRouterSelect = rrddim_add(st, "OutType10", "OutRouterSelect", -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_InTimeExcds = rrddim_add(st, "InType11", "InTimeExcds", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_OutTimeExcds = rrddim_add(st, "OutType11", "OutTimeExcds", -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_InParmProbs = rrddim_add(st, "InType12", "InParmProbs", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_OutParmProbs = rrddim_add(st, "OutType12", "OutParmProbs", -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_InTimestamps = rrddim_add(st, "InType13", "InTimestamps", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_OutTimestamps = rrddim_add(st, "OutType13", "OutTimestamps", -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_InTimestampReps = rrddim_add(st, "InType14", "InTimestampReps", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_OutTimestampReps = rrddim_add(st, "OutType14", "OutTimestampReps", -1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st, rd_InEchoReps, (collected_number)snmp_root.icmpmsg_InEchoReps); + rrddim_set_by_pointer(st, rd_OutEchoReps, (collected_number)snmp_root.icmpmsg_OutEchoReps); + rrddim_set_by_pointer(st, rd_InDestUnreachs, (collected_number)snmp_root.icmpmsg_InDestUnreachs); + rrddim_set_by_pointer(st, rd_OutDestUnreachs, (collected_number)snmp_root.icmpmsg_OutDestUnreachs); + rrddim_set_by_pointer(st, rd_InRedirects, (collected_number)snmp_root.icmpmsg_InRedirects); + rrddim_set_by_pointer(st, rd_OutRedirects, (collected_number)snmp_root.icmpmsg_OutRedirects); + rrddim_set_by_pointer(st, rd_InEchos, (collected_number)snmp_root.icmpmsg_InEchos); + rrddim_set_by_pointer(st, rd_OutEchos, (collected_number)snmp_root.icmpmsg_OutEchos); + rrddim_set_by_pointer(st, rd_InRouterAdvert, (collected_number)snmp_root.icmpmsg_InRouterAdvert); + rrddim_set_by_pointer(st, rd_OutRouterAdvert, 
(collected_number)snmp_root.icmpmsg_OutRouterAdvert); + rrddim_set_by_pointer(st, rd_InRouterSelect, (collected_number)snmp_root.icmpmsg_InRouterSelect); + rrddim_set_by_pointer(st, rd_OutRouterSelect, (collected_number)snmp_root.icmpmsg_OutRouterSelect); + rrddim_set_by_pointer(st, rd_InTimeExcds, (collected_number)snmp_root.icmpmsg_InTimeExcds); + rrddim_set_by_pointer(st, rd_OutTimeExcds, (collected_number)snmp_root.icmpmsg_OutTimeExcds); + rrddim_set_by_pointer(st, rd_InParmProbs, (collected_number)snmp_root.icmpmsg_InParmProbs); + rrddim_set_by_pointer(st, rd_OutParmProbs, (collected_number)snmp_root.icmpmsg_OutParmProbs); + rrddim_set_by_pointer(st, rd_InTimestamps, (collected_number)snmp_root.icmpmsg_InTimestamps); + rrddim_set_by_pointer(st, rd_OutTimestamps, (collected_number)snmp_root.icmpmsg_OutTimestamps); + rrddim_set_by_pointer(st, rd_InTimestampReps, (collected_number)snmp_root.icmpmsg_InTimestampReps); + rrddim_set_by_pointer(st, rd_OutTimestampReps, (collected_number)snmp_root.icmpmsg_OutTimestampReps); + + rrdset_done(st); + } + + // snmp Tcp charts + + // this is smart enough to update the variable only when it has changed + rrdvar_host_variable_set(localhost, tcp_max_connections_var, snmp_root.tcp_MaxConn); + + // see http://net-snmp.sourceforge.net/docs/mibs/tcp.html + if(do_tcp_sockets == CONFIG_BOOLEAN_YES || (do_tcp_sockets == CONFIG_BOOLEAN_AUTO && + (snmp_root.tcp_CurrEstab || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_tcp_sockets = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + static RRDDIM *rd_CurrEstab = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + RRD_TYPE_NET_IP + , "tcpsock" + , NULL + , "tcp" + , NULL + , "TCP Connections" + , "active connections" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IP_TCP_ESTABLISHED_CONNS + , update_every + , RRDSET_TYPE_LINE + ); + + rd_CurrEstab = rrddim_add(st, "CurrEstab", "connections", 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + 
rrddim_set_by_pointer(st, rd_CurrEstab, (collected_number)snmp_root.tcp_CurrEstab); + rrdset_done(st); + } + + if(do_tcp_packets == CONFIG_BOOLEAN_YES || (do_tcp_packets == CONFIG_BOOLEAN_AUTO && + (snmp_root.tcp_InSegs || + snmp_root.tcp_OutSegs || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_tcp_packets = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + static RRDDIM *rd_InSegs = NULL, + *rd_OutSegs = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + RRD_TYPE_NET_IP + , "tcppackets" + , NULL + , "tcp" + , NULL + , "IPv4 TCP Packets" + , "packets/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IP_TCP_PACKETS + , update_every + , RRDSET_TYPE_LINE + ); + + rd_InSegs = rrddim_add(st, "InSegs", "received", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_OutSegs = rrddim_add(st, "OutSegs", "sent", -1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st, rd_InSegs, (collected_number)snmp_root.tcp_InSegs); + rrddim_set_by_pointer(st, rd_OutSegs, (collected_number)snmp_root.tcp_OutSegs); + rrdset_done(st); + } + + // -------------------------------------------------------------------- + + if(do_tcp_errors == CONFIG_BOOLEAN_YES || (do_tcp_errors == CONFIG_BOOLEAN_AUTO && + (snmp_root.tcp_InErrs || + snmp_root.tcp_InCsumErrors || + snmp_root.tcp_RetransSegs || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_tcp_errors = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + static RRDDIM *rd_InErrs = NULL, + *rd_InCsumErrors = NULL, + *rd_RetransSegs = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + RRD_TYPE_NET_IP + , "tcperrors" + , NULL + , "tcp" + , NULL + , "IPv4 TCP Errors" + , "packets/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IP_TCP_ERRORS + , update_every + , RRDSET_TYPE_LINE + ); + rrdset_flag_set(st, RRDSET_FLAG_DETAIL); + + rd_InErrs = rrddim_add(st, "InErrs", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_InCsumErrors = 
rrddim_add(st, "InCsumErrors", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_RetransSegs = rrddim_add(st, "RetransSegs", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st, rd_InErrs, (collected_number)snmp_root.tcp_InErrs); + rrddim_set_by_pointer(st, rd_InCsumErrors, (collected_number)snmp_root.tcp_InCsumErrors); + rrddim_set_by_pointer(st, rd_RetransSegs, (collected_number)snmp_root.tcp_RetransSegs); + rrdset_done(st); + } + + if(do_tcp_opens == CONFIG_BOOLEAN_YES || (do_tcp_opens == CONFIG_BOOLEAN_AUTO && + (snmp_root.tcp_ActiveOpens || + snmp_root.tcp_PassiveOpens || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_tcp_opens = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + static RRDDIM *rd_ActiveOpens = NULL, + *rd_PassiveOpens = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + RRD_TYPE_NET_IP + , "tcpopens" + , NULL + , "tcp" + , NULL + , "IPv4 TCP Opens" + , "connections/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IP_TCP_OPENS + , update_every + , RRDSET_TYPE_LINE + ); + rrdset_flag_set(st, RRDSET_FLAG_DETAIL); + + rd_ActiveOpens = rrddim_add(st, "ActiveOpens", "active", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_PassiveOpens = rrddim_add(st, "PassiveOpens", "passive", 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st, rd_ActiveOpens, (collected_number)snmp_root.tcp_ActiveOpens); + rrddim_set_by_pointer(st, rd_PassiveOpens, (collected_number)snmp_root.tcp_PassiveOpens); + rrdset_done(st); + } + + if(do_tcp_handshake == CONFIG_BOOLEAN_YES || (do_tcp_handshake == CONFIG_BOOLEAN_AUTO && + (snmp_root.tcp_EstabResets || + snmp_root.tcp_OutRsts || + snmp_root.tcp_AttemptFails || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_tcp_handshake = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + static RRDDIM *rd_EstabResets = NULL, + *rd_OutRsts = NULL, + *rd_AttemptFails = NULL, + *rd_TCPSynRetrans = NULL; + + if(unlikely(!st)) { + st = 
rrdset_create_localhost( + RRD_TYPE_NET_IP + , "tcphandshake" + , NULL + , "tcp" + , NULL + , "IPv4 TCP Handshake Issues" + , "events/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IP_TCP_HANDSHAKE + , update_every + , RRDSET_TYPE_LINE + ); + rrdset_flag_set(st, RRDSET_FLAG_DETAIL); + + rd_EstabResets = rrddim_add(st, "EstabResets", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_OutRsts = rrddim_add(st, "OutRsts", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_AttemptFails = rrddim_add(st, "AttemptFails", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_TCPSynRetrans = rrddim_add(st, "TCPSynRetrans", "SynRetrans", 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st, rd_EstabResets, (collected_number)snmp_root.tcp_EstabResets); + rrddim_set_by_pointer(st, rd_OutRsts, (collected_number)snmp_root.tcp_OutRsts); + rrddim_set_by_pointer(st, rd_AttemptFails, (collected_number)snmp_root.tcp_AttemptFails); + rrddim_set_by_pointer(st, rd_TCPSynRetrans, tcpext_TCPSynRetrans); + rrdset_done(st); + } + + // snmp Udp charts + + // see http://net-snmp.sourceforge.net/docs/mibs/udp.html + if(do_udp_packets == CONFIG_BOOLEAN_YES || (do_udp_packets == CONFIG_BOOLEAN_AUTO && + (snmp_root.udp_InDatagrams || + snmp_root.udp_OutDatagrams || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_udp_packets = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + static RRDDIM *rd_InDatagrams = NULL, + *rd_OutDatagrams = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + RRD_TYPE_NET_IP4 + , "udppackets" + , NULL + , "udp" + , NULL + , "IPv4 UDP Packets" + , "packets/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IPV4_UDP_PACKETS + , update_every + , RRDSET_TYPE_LINE + ); + + rd_InDatagrams = rrddim_add(st, "InDatagrams", "received", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_OutDatagrams = rrddim_add(st, "OutDatagrams", "sent", -1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st, 
rd_InDatagrams, (collected_number)snmp_root.udp_InDatagrams); + rrddim_set_by_pointer(st, rd_OutDatagrams, (collected_number)snmp_root.udp_OutDatagrams); + rrdset_done(st); + } + + // -------------------------------------------------------------------- + + if(do_udp_errors == CONFIG_BOOLEAN_YES || (do_udp_errors == CONFIG_BOOLEAN_AUTO && + (snmp_root.udp_InErrors || + snmp_root.udp_NoPorts || + snmp_root.udp_RcvbufErrors || + snmp_root.udp_SndbufErrors || + snmp_root.udp_InCsumErrors || + snmp_root.udp_IgnoredMulti || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_udp_errors = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + static RRDDIM *rd_RcvbufErrors = NULL, + *rd_SndbufErrors = NULL, + *rd_InErrors = NULL, + *rd_NoPorts = NULL, + *rd_InCsumErrors = NULL, + *rd_IgnoredMulti = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + RRD_TYPE_NET_IP4 + , "udperrors" + , NULL + , "udp" + , NULL + , "IPv4 UDP Errors" + , "events/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IPV4_UDP_ERRORS + , update_every + , RRDSET_TYPE_LINE + ); + rrdset_flag_set(st, RRDSET_FLAG_DETAIL); + + rd_RcvbufErrors = rrddim_add(st, "RcvbufErrors", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_SndbufErrors = rrddim_add(st, "SndbufErrors", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_InErrors = rrddim_add(st, "InErrors", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_NoPorts = rrddim_add(st, "NoPorts", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_InCsumErrors = rrddim_add(st, "InCsumErrors", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_IgnoredMulti = rrddim_add(st, "IgnoredMulti", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st, rd_InErrors, (collected_number)snmp_root.udp_InErrors); + rrddim_set_by_pointer(st, rd_NoPorts, (collected_number)snmp_root.udp_NoPorts); + rrddim_set_by_pointer(st, rd_RcvbufErrors, (collected_number)snmp_root.udp_RcvbufErrors); + rrddim_set_by_pointer(st, rd_SndbufErrors, 
(collected_number)snmp_root.udp_SndbufErrors); + rrddim_set_by_pointer(st, rd_InCsumErrors, (collected_number)snmp_root.udp_InCsumErrors); + rrddim_set_by_pointer(st, rd_IgnoredMulti, (collected_number)snmp_root.udp_IgnoredMulti); + rrdset_done(st); + } + + // snmp UdpLite charts + + if(do_udplite_packets == CONFIG_BOOLEAN_YES || (do_udplite_packets == CONFIG_BOOLEAN_AUTO && + (snmp_root.udplite_InDatagrams || + snmp_root.udplite_OutDatagrams || + snmp_root.udplite_NoPorts || + snmp_root.udplite_InErrors || + snmp_root.udplite_InCsumErrors || + snmp_root.udplite_RcvbufErrors || + snmp_root.udplite_SndbufErrors || + snmp_root.udplite_IgnoredMulti || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_udplite_packets = CONFIG_BOOLEAN_YES; + + { + static RRDSET *st = NULL; + static RRDDIM *rd_InDatagrams = NULL, + *rd_OutDatagrams = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + RRD_TYPE_NET_IP4 + , "udplite" + , NULL + , "udplite" + , NULL + , "IPv4 UDPLite Packets" + , "packets/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IPV4_UDPLITE_PACKETS + , update_every + , RRDSET_TYPE_LINE + ); + + rd_InDatagrams = rrddim_add(st, "InDatagrams", "received", 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_OutDatagrams = rrddim_add(st, "OutDatagrams", "sent", -1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st, rd_InDatagrams, (collected_number)snmp_root.udplite_InDatagrams); + rrddim_set_by_pointer(st, rd_OutDatagrams, (collected_number)snmp_root.udplite_OutDatagrams); + rrdset_done(st); + } + + { + static RRDSET *st = NULL; + static RRDDIM *rd_RcvbufErrors = NULL, + *rd_SndbufErrors = NULL, + *rd_InErrors = NULL, + *rd_NoPorts = NULL, + *rd_InCsumErrors = NULL, + *rd_IgnoredMulti = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + RRD_TYPE_NET_IP4 + , "udplite_errors" + , NULL + , "udplite" + , NULL + , "IPv4 UDPLite Errors" + , "packets/s" + , PLUGIN_PROC_NAME + , 
PLUGIN_PROC_MODULE_NETSTAT_NAME + , NETDATA_CHART_PRIO_IPV4_UDPLITE_ERRORS + , update_every + , RRDSET_TYPE_LINE); + + rd_RcvbufErrors = rrddim_add(st, "RcvbufErrors", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_SndbufErrors = rrddim_add(st, "SndbufErrors", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_InErrors = rrddim_add(st, "InErrors", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_NoPorts = rrddim_add(st, "NoPorts", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_InCsumErrors = rrddim_add(st, "InCsumErrors", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_IgnoredMulti = rrddim_add(st, "IgnoredMulti", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st, rd_NoPorts, (collected_number)snmp_root.udplite_NoPorts); + rrddim_set_by_pointer(st, rd_InErrors, (collected_number)snmp_root.udplite_InErrors); + rrddim_set_by_pointer(st, rd_InCsumErrors, (collected_number)snmp_root.udplite_InCsumErrors); + rrddim_set_by_pointer(st, rd_RcvbufErrors, (collected_number)snmp_root.udplite_RcvbufErrors); + rrddim_set_by_pointer(st, rd_SndbufErrors, (collected_number)snmp_root.udplite_SndbufErrors); + rrddim_set_by_pointer(st, rd_IgnoredMulti, (collected_number)snmp_root.udplite_IgnoredMulti); + rrdset_done(st); + } + } + + do_proc_net_snmp6(update_every); + + return 0; +} diff --git a/src/collectors/proc.plugin/proc_net_rpc_nfs.c b/src/collectors/proc.plugin/proc_net_rpc_nfs.c new file mode 100644 index 000000000..d6547636e --- /dev/null +++ b/src/collectors/proc.plugin/proc_net_rpc_nfs.c @@ -0,0 +1,439 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +#define PLUGIN_PROC_MODULE_NFS_NAME "/proc/net/rpc/nfs" +#define CONFIG_SECTION_PLUGIN_PROC_NFS "plugin:" PLUGIN_PROC_CONFIG_NAME ":" PLUGIN_PROC_MODULE_NFS_NAME + +struct nfs_procs { + char name[30]; + unsigned long long value; + int present; + RRDDIM *rd; +}; + +struct nfs_procs nfs_proc2_values[] = { + { "null" , 0ULL, 0, NULL} + , {"getattr" , 0ULL, 0, NULL} + , {"setattr" , 0ULL, 0, 
NULL} + , {"root" , 0ULL, 0, NULL} + , {"lookup" , 0ULL, 0, NULL} + , {"readlink", 0ULL, 0, NULL} + , {"read" , 0ULL, 0, NULL} + , {"wrcache" , 0ULL, 0, NULL} + , {"write" , 0ULL, 0, NULL} + , {"create" , 0ULL, 0, NULL} + , {"remove" , 0ULL, 0, NULL} + , {"rename" , 0ULL, 0, NULL} + , {"link" , 0ULL, 0, NULL} + , {"symlink" , 0ULL, 0, NULL} + , {"mkdir" , 0ULL, 0, NULL} + , {"rmdir" , 0ULL, 0, NULL} + , {"readdir" , 0ULL, 0, NULL} + , {"fsstat" , 0ULL, 0, NULL} + , + + /* termination */ + { "" , 0ULL, 0, NULL} +}; + +struct nfs_procs nfs_proc3_values[] = { + { "null" , 0ULL, 0, NULL} + , {"getattr" , 0ULL, 0, NULL} + , {"setattr" , 0ULL, 0, NULL} + , {"lookup" , 0ULL, 0, NULL} + , {"access" , 0ULL, 0, NULL} + , {"readlink" , 0ULL, 0, NULL} + , {"read" , 0ULL, 0, NULL} + , {"write" , 0ULL, 0, NULL} + , {"create" , 0ULL, 0, NULL} + , {"mkdir" , 0ULL, 0, NULL} + , {"symlink" , 0ULL, 0, NULL} + , {"mknod" , 0ULL, 0, NULL} + , {"remove" , 0ULL, 0, NULL} + , {"rmdir" , 0ULL, 0, NULL} + , {"rename" , 0ULL, 0, NULL} + , {"link" , 0ULL, 0, NULL} + , {"readdir" , 0ULL, 0, NULL} + , {"readdirplus", 0ULL, 0, NULL} + , {"fsstat" , 0ULL, 0, NULL} + , {"fsinfo" , 0ULL, 0, NULL} + , {"pathconf" , 0ULL, 0, NULL} + , {"commit" , 0ULL, 0, NULL} + , + + /* termination */ + { "" , 0ULL, 0, NULL} +}; + +struct nfs_procs nfs_proc4_values[] = { + { "null" , 0ULL, 0, NULL} + , {"read" , 0ULL, 0, NULL} + , {"write" , 0ULL, 0, NULL} + , {"commit" , 0ULL, 0, NULL} + , {"open" , 0ULL, 0, NULL} + , {"open_conf" , 0ULL, 0, NULL} + , {"open_noat" , 0ULL, 0, NULL} + , {"open_dgrd" , 0ULL, 0, NULL} + , {"close" , 0ULL, 0, NULL} + , {"setattr" , 0ULL, 0, NULL} + , {"fsinfo" , 0ULL, 0, NULL} + , {"renew" , 0ULL, 0, NULL} + , {"setclntid" , 0ULL, 0, NULL} + , {"confirm" , 0ULL, 0, NULL} + , {"lock" , 0ULL, 0, NULL} + , {"lockt" , 0ULL, 0, NULL} + , {"locku" , 0ULL, 0, NULL} + , {"access" , 0ULL, 0, NULL} + , {"getattr" , 0ULL, 0, NULL} + , {"lookup" , 0ULL, 0, NULL} + , {"lookup_root" , 0ULL, 0, NULL} 
+ , {"remove" , 0ULL, 0, NULL} + , {"rename" , 0ULL, 0, NULL} + , {"link" , 0ULL, 0, NULL} + , {"symlink" , 0ULL, 0, NULL} + , {"create" , 0ULL, 0, NULL} + , {"pathconf" , 0ULL, 0, NULL} + , {"statfs" , 0ULL, 0, NULL} + , {"readlink" , 0ULL, 0, NULL} + , {"readdir" , 0ULL, 0, NULL} + , {"server_caps" , 0ULL, 0, NULL} + , {"delegreturn" , 0ULL, 0, NULL} + , {"getacl" , 0ULL, 0, NULL} + , {"setacl" , 0ULL, 0, NULL} + , {"fs_locations" , 0ULL, 0, NULL} + , {"rel_lkowner" , 0ULL, 0, NULL} + , {"secinfo" , 0ULL, 0, NULL} + , {"fsid_present" , 0ULL, 0, NULL} + , + + /* nfsv4.1 client ops */ + { "exchange_id" , 0ULL, 0, NULL} + , {"create_session" , 0ULL, 0, NULL} + , {"destroy_session" , 0ULL, 0, NULL} + , {"sequence" , 0ULL, 0, NULL} + , {"get_lease_time" , 0ULL, 0, NULL} + , {"reclaim_comp" , 0ULL, 0, NULL} + , {"layoutget" , 0ULL, 0, NULL} + , {"getdevinfo" , 0ULL, 0, NULL} + , {"layoutcommit" , 0ULL, 0, NULL} + , {"layoutreturn" , 0ULL, 0, NULL} + , {"secinfo_no" , 0ULL, 0, NULL} + , {"test_stateid" , 0ULL, 0, NULL} + , {"free_stateid" , 0ULL, 0, NULL} + , {"getdevicelist" , 0ULL, 0, NULL} + , {"bind_conn_to_ses", 0ULL, 0, NULL} + , {"destroy_clientid", 0ULL, 0, NULL} + , + + /* nfsv4.2 client ops */ + { "seek" , 0ULL, 0, NULL} + , {"allocate" , 0ULL, 0, NULL} + , {"deallocate" , 0ULL, 0, NULL} + , {"layoutstats" , 0ULL, 0, NULL} + , {"clone" , 0ULL, 0, NULL} + , + + /* termination */ + { "" , 0ULL, 0, NULL} +}; + +int do_proc_net_rpc_nfs(int update_every, usec_t dt) { + (void)dt; + + static procfile *ff = NULL; + static int do_net = -1, do_rpc = -1, do_proc2 = -1, do_proc3 = -1, do_proc4 = -1; + static int proc2_warning = 0, proc3_warning = 0, proc4_warning = 0; + + if(!ff) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/net/rpc/nfs"); + ff = procfile_open(config_get(CONFIG_SECTION_PLUGIN_PROC_NFS, "filename to monitor", filename), " \t", PROCFILE_FLAG_DEFAULT); + } + if(!ff) return 1; + + ff = 
procfile_readall(ff); + if(!ff) return 0; // we return 0, so that we will retry to open it next time + + if(do_net == -1) do_net = config_get_boolean(CONFIG_SECTION_PLUGIN_PROC_NFS, "network", 1); + if(do_rpc == -1) do_rpc = config_get_boolean(CONFIG_SECTION_PLUGIN_PROC_NFS, "rpc", 1); + if(do_proc2 == -1) do_proc2 = config_get_boolean(CONFIG_SECTION_PLUGIN_PROC_NFS, "NFS v2 procedures", 1); + if(do_proc3 == -1) do_proc3 = config_get_boolean(CONFIG_SECTION_PLUGIN_PROC_NFS, "NFS v3 procedures", 1); + if(do_proc4 == -1) do_proc4 = config_get_boolean(CONFIG_SECTION_PLUGIN_PROC_NFS, "NFS v4 procedures", 1); + + // if they are enabled, reset them to 1 + // later we set them to 2 to avoid doing strcmp() for all lines + if(do_net) do_net = 1; + if(do_rpc) do_rpc = 1; + if(do_proc2) do_proc2 = 1; + if(do_proc3) do_proc3 = 1; + if(do_proc4) do_proc4 = 1; + + size_t lines = procfile_lines(ff), l; + + char *type; + unsigned long long net_count = 0, net_udp_count = 0, net_tcp_count = 0, net_tcp_connections = 0; + unsigned long long rpc_calls = 0, rpc_retransmits = 0, rpc_auth_refresh = 0; + + for(l = 0; l < lines ;l++) { + size_t words = procfile_linewords(ff, l); + if(!words) continue; + + type = procfile_lineword(ff, l, 0); + + if(do_net == 1 && strcmp(type, "net") == 0) { + if(words < 5) { + collector_error("%s line of /proc/net/rpc/nfs has %zu words, expected %d", type, words, 5); + continue; + } + + net_count = str2ull(procfile_lineword(ff, l, 1), NULL); + net_udp_count = str2ull(procfile_lineword(ff, l, 2), NULL); + net_tcp_count = str2ull(procfile_lineword(ff, l, 3), NULL); + net_tcp_connections = str2ull(procfile_lineword(ff, l, 4), NULL); + + unsigned long long sum = net_count + net_udp_count + net_tcp_count + net_tcp_connections; + if(sum == 0ULL) do_net = -1; + else do_net = 2; + } + else if(do_rpc == 1 && strcmp(type, "rpc") == 0) { + if(words < 4) { + collector_error("%s line of /proc/net/rpc/nfs has %zu words, expected %d", type, words, 4); + continue; + } + 
rpc_calls = str2ull(procfile_lineword(ff, l, 1), NULL); + rpc_retransmits = str2ull(procfile_lineword(ff, l, 2), NULL); + rpc_auth_refresh = str2ull(procfile_lineword(ff, l, 3), NULL); + + unsigned long long sum = rpc_calls + rpc_retransmits + rpc_auth_refresh; + if(sum == 0ULL) do_rpc = -1; + else do_rpc = 2; + } + else if(do_proc2 == 1 && strcmp(type, "proc2") == 0) { + // the first number is the count of numbers present + // so we start from word 2 + + unsigned long long sum = 0; + unsigned int i, j; + for(i = 0, j = 2; j < words && nfs_proc2_values[i].name[0] ; i++, j++) { + nfs_proc2_values[i].value = str2ull(procfile_lineword(ff, l, j), NULL); + nfs_proc2_values[i].present = 1; + sum += nfs_proc2_values[i].value; + } + + if(sum == 0ULL) { + if(!proc2_warning) { + collector_error("Disabling /proc/net/rpc/nfs v2 procedure calls chart. It seems unused on this machine. It will be enabled automatically when found with data in it."); + proc2_warning = 1; + } + do_proc2 = 0; + } + else do_proc2 = 2; + } + else if(do_proc3 == 1 && strcmp(type, "proc3") == 0) { + // the first number is the count of numbers present + // so we start from word 2 + + unsigned long long sum = 0; + unsigned int i, j; + for(i = 0, j = 2; j < words && nfs_proc3_values[i].name[0] ; i++, j++) { + nfs_proc3_values[i].value = str2ull(procfile_lineword(ff, l, j), NULL); + nfs_proc3_values[i].present = 1; + sum += nfs_proc3_values[i].value; + } + + if(sum == 0ULL) { + if(!proc3_warning) { + collector_info("Disabling /proc/net/rpc/nfs v3 procedure calls chart. It seems unused on this machine. 
It will be enabled automatically when found with data in it."); + proc3_warning = 1; + } + do_proc3 = 0; + } + else do_proc3 = 2; + } + else if(do_proc4 == 1 && strcmp(type, "proc4") == 0) { + // the first number is the count of numbers present + // so we start from word 2 + + unsigned long long sum = 0; + unsigned int i, j; + for(i = 0, j = 2; j < words && nfs_proc4_values[i].name[0] ; i++, j++) { + nfs_proc4_values[i].value = str2ull(procfile_lineword(ff, l, j), NULL); + nfs_proc4_values[i].present = 1; + sum += nfs_proc4_values[i].value; + } + + if(sum == 0ULL) { + if(!proc4_warning) { + collector_info("Disabling /proc/net/rpc/nfs v4 procedure calls chart. It seems unused on this machine. It will be enabled automatically when found with data in it."); + proc4_warning = 1; + } + do_proc4 = 0; + } + else do_proc4 = 2; + } + } + + if(do_net == 2) { + static RRDSET *st = NULL; + static RRDDIM *rd_udp = NULL, + *rd_tcp = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + "nfs" + , "net" + , NULL + , "network" + , NULL + , "NFS Client Network" + , "operations/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NFS_NAME + , NETDATA_CHART_PRIO_NFS_NET + , update_every + , RRDSET_TYPE_STACKED + ); + + rrdset_flag_set(st, RRDSET_FLAG_DETAIL); + + rd_udp = rrddim_add(st, "udp", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_tcp = rrddim_add(st, "tcp", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + // ignore net_count, net_tcp_connections + (void)net_count; + (void)net_tcp_connections; + + rrddim_set_by_pointer(st, rd_udp, net_udp_count); + rrddim_set_by_pointer(st, rd_tcp, net_tcp_count); + rrdset_done(st); + } + + if(do_rpc == 2) { + static RRDSET *st = NULL; + static RRDDIM *rd_calls = NULL, + *rd_retransmits = NULL, + *rd_auth_refresh = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + "nfs" + , "rpc" + , NULL + , "rpc" + , NULL + , "NFS Client Remote Procedure Calls Statistics" + , "calls/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NFS_NAME + , 
NETDATA_CHART_PRIO_NFS_RPC + , update_every + , RRDSET_TYPE_LINE + ); + rrdset_flag_set(st, RRDSET_FLAG_DETAIL); + + rd_calls = rrddim_add(st, "calls", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_retransmits = rrddim_add(st, "retransmits", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_auth_refresh = rrddim_add(st, "auth_refresh", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st, rd_calls, rpc_calls); + rrddim_set_by_pointer(st, rd_retransmits, rpc_retransmits); + rrddim_set_by_pointer(st, rd_auth_refresh, rpc_auth_refresh); + rrdset_done(st); + } + + if(do_proc2 == 2) { + static RRDSET *st = NULL; + if(unlikely(!st)) { + st = rrdset_create_localhost( + "nfs" + , "proc2" + , NULL + , "nfsv2rpc" + , NULL + , "NFS v2 Client Remote Procedure Calls" + , "calls/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NFS_NAME + , NETDATA_CHART_PRIO_NFS_PROC2 + , update_every + , RRDSET_TYPE_STACKED + ); + } + + size_t i; + for(i = 0; nfs_proc2_values[i].present ; i++) { + if(unlikely(!nfs_proc2_values[i].rd)) + nfs_proc2_values[i].rd = rrddim_add(st, nfs_proc2_values[i].name, NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + + rrddim_set_by_pointer(st, nfs_proc2_values[i].rd, nfs_proc2_values[i].value); + } + + rrdset_done(st); + } + + if(do_proc3 == 2) { + static RRDSET *st = NULL; + if(unlikely(!st)) { + st = rrdset_create_localhost( + "nfs" + , "proc3" + , NULL + , "nfsv3rpc" + , NULL + , "NFS v3 Client Remote Procedure Calls" + , "calls/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NFS_NAME + , NETDATA_CHART_PRIO_NFS_PROC3 + , update_every + , RRDSET_TYPE_STACKED + ); + } + + size_t i; + for(i = 0; nfs_proc3_values[i].present ; i++) { + if(unlikely(!nfs_proc3_values[i].rd)) + nfs_proc3_values[i].rd = rrddim_add(st, nfs_proc3_values[i].name, NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + + rrddim_set_by_pointer(st, nfs_proc3_values[i].rd, nfs_proc3_values[i].value); + } + + rrdset_done(st); + } + + if(do_proc4 == 2) { + static RRDSET *st = NULL; + 
if(unlikely(!st)) { + st = rrdset_create_localhost( + "nfs" + , "proc4" + , NULL + , "nfsv4rpc" + , NULL + , "NFS v4 Client Remote Procedure Calls" + , "calls/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NFS_NAME + , NETDATA_CHART_PRIO_NFS_PROC4 + , update_every + , RRDSET_TYPE_STACKED + ); + } + + size_t i; + for(i = 0; nfs_proc4_values[i].present ; i++) { + if(unlikely(!nfs_proc4_values[i].rd)) + nfs_proc4_values[i].rd = rrddim_add(st, nfs_proc4_values[i].name, NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + + rrddim_set_by_pointer(st, nfs_proc4_values[i].rd, nfs_proc4_values[i].value); + } + + rrdset_done(st); + } + + return 0; +} diff --git a/src/collectors/proc.plugin/proc_net_rpc_nfsd.c b/src/collectors/proc.plugin/proc_net_rpc_nfsd.c new file mode 100644 index 000000000..1d9127a03 --- /dev/null +++ b/src/collectors/proc.plugin/proc_net_rpc_nfsd.c @@ -0,0 +1,763 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +#define PLUGIN_PROC_MODULE_NFSD_NAME "/proc/net/rpc/nfsd" + +struct nfsd_procs { + char name[30]; + unsigned long long value; + int present; + RRDDIM *rd; +}; + +struct nfsd_procs nfsd_proc2_values[] = { + { "null" , 0ULL, 0, NULL} + , {"getattr" , 0ULL, 0, NULL} + , {"setattr" , 0ULL, 0, NULL} + , {"root" , 0ULL, 0, NULL} + , {"lookup" , 0ULL, 0, NULL} + , {"readlink", 0ULL, 0, NULL} + , {"read" , 0ULL, 0, NULL} + , {"wrcache" , 0ULL, 0, NULL} + , {"write" , 0ULL, 0, NULL} + , {"create" , 0ULL, 0, NULL} + , {"remove" , 0ULL, 0, NULL} + , {"rename" , 0ULL, 0, NULL} + , {"link" , 0ULL, 0, NULL} + , {"symlink" , 0ULL, 0, NULL} + , {"mkdir" , 0ULL, 0, NULL} + , {"rmdir" , 0ULL, 0, NULL} + , {"readdir" , 0ULL, 0, NULL} + , {"fsstat" , 0ULL, 0, NULL} + , + + /* termination */ + { "" , 0ULL, 0, NULL} +}; + +struct nfsd_procs nfsd_proc3_values[] = { + { "null" , 0ULL, 0, NULL} + , {"getattr" , 0ULL, 0, NULL} + , {"setattr" , 0ULL, 0, NULL} + , {"lookup" , 0ULL, 0, NULL} + , {"access" , 0ULL, 0, NULL} + , {"readlink" , 0ULL, 0, NULL} 
+ , {"read" , 0ULL, 0, NULL} + , {"write" , 0ULL, 0, NULL} + , {"create" , 0ULL, 0, NULL} + , {"mkdir" , 0ULL, 0, NULL} + , {"symlink" , 0ULL, 0, NULL} + , {"mknod" , 0ULL, 0, NULL} + , {"remove" , 0ULL, 0, NULL} + , {"rmdir" , 0ULL, 0, NULL} + , {"rename" , 0ULL, 0, NULL} + , {"link" , 0ULL, 0, NULL} + , {"readdir" , 0ULL, 0, NULL} + , {"readdirplus", 0ULL, 0, NULL} + , {"fsstat" , 0ULL, 0, NULL} + , {"fsinfo" , 0ULL, 0, NULL} + , {"pathconf" , 0ULL, 0, NULL} + , {"commit" , 0ULL, 0, NULL} + , + + /* termination */ + { "" , 0ULL, 0, NULL} +}; + +struct nfsd_procs nfsd_proc4_values[] = { + { "null" , 0ULL, 0, NULL} + , {"read" , 0ULL, 0, NULL} + , {"write" , 0ULL, 0, NULL} + , {"commit" , 0ULL, 0, NULL} + , {"open" , 0ULL, 0, NULL} + , {"open_conf" , 0ULL, 0, NULL} + , {"open_noat" , 0ULL, 0, NULL} + , {"open_dgrd" , 0ULL, 0, NULL} + , {"close" , 0ULL, 0, NULL} + , {"setattr" , 0ULL, 0, NULL} + , {"fsinfo" , 0ULL, 0, NULL} + , {"renew" , 0ULL, 0, NULL} + , {"setclntid" , 0ULL, 0, NULL} + , {"confirm" , 0ULL, 0, NULL} + , {"lock" , 0ULL, 0, NULL} + , {"lockt" , 0ULL, 0, NULL} + , {"locku" , 0ULL, 0, NULL} + , {"access" , 0ULL, 0, NULL} + , {"getattr" , 0ULL, 0, NULL} + , {"lookup" , 0ULL, 0, NULL} + , {"lookup_root" , 0ULL, 0, NULL} + , {"remove" , 0ULL, 0, NULL} + , {"rename" , 0ULL, 0, NULL} + , {"link" , 0ULL, 0, NULL} + , {"symlink" , 0ULL, 0, NULL} + , {"create" , 0ULL, 0, NULL} + , {"pathconf" , 0ULL, 0, NULL} + , {"statfs" , 0ULL, 0, NULL} + , {"readlink" , 0ULL, 0, NULL} + , {"readdir" , 0ULL, 0, NULL} + , {"server_caps" , 0ULL, 0, NULL} + , {"delegreturn" , 0ULL, 0, NULL} + , {"getacl" , 0ULL, 0, NULL} + , {"setacl" , 0ULL, 0, NULL} + , {"fs_locations" , 0ULL, 0, NULL} + , {"rel_lkowner" , 0ULL, 0, NULL} + , {"secinfo" , 0ULL, 0, NULL} + , {"fsid_present" , 0ULL, 0, NULL} + , + + /* nfsv4.1 client ops */ + { "exchange_id" , 0ULL, 0, NULL} + , {"create_session" , 0ULL, 0, NULL} + , {"destroy_session" , 0ULL, 0, NULL} + , {"sequence" , 0ULL, 0, NULL} + , 
{"get_lease_time" , 0ULL, 0, NULL} + , {"reclaim_comp" , 0ULL, 0, NULL} + , {"layoutget" , 0ULL, 0, NULL} + , {"getdevinfo" , 0ULL, 0, NULL} + , {"layoutcommit" , 0ULL, 0, NULL} + , {"layoutreturn" , 0ULL, 0, NULL} + , {"secinfo_no" , 0ULL, 0, NULL} + , {"test_stateid" , 0ULL, 0, NULL} + , {"free_stateid" , 0ULL, 0, NULL} + , {"getdevicelist" , 0ULL, 0, NULL} + , {"bind_conn_to_ses", 0ULL, 0, NULL} + , {"destroy_clientid", 0ULL, 0, NULL} + , + + /* nfsv4.2 client ops */ + { "seek" , 0ULL, 0, NULL} + , {"allocate" , 0ULL, 0, NULL} + , {"deallocate" , 0ULL, 0, NULL} + , {"layoutstats" , 0ULL, 0, NULL} + , {"clone" , 0ULL, 0, NULL} + , + + /* termination */ + { "" , 0ULL, 0, NULL} +}; + +struct nfsd_procs nfsd4_ops_values[] = { + { "unused_op0" , 0ULL, 0, NULL} + , {"unused_op1" , 0ULL, 0, NULL} + , {"future_op2" , 0ULL, 0, NULL} + , {"access" , 0ULL, 0, NULL} + , {"close" , 0ULL, 0, NULL} + , {"commit" , 0ULL, 0, NULL} + , {"create" , 0ULL, 0, NULL} + , {"delegpurge" , 0ULL, 0, NULL} + , {"delegreturn" , 0ULL, 0, NULL} + , {"getattr" , 0ULL, 0, NULL} + , {"getfh" , 0ULL, 0, NULL} + , {"link" , 0ULL, 0, NULL} + , {"lock" , 0ULL, 0, NULL} + , {"lockt" , 0ULL, 0, NULL} + , {"locku" , 0ULL, 0, NULL} + , {"lookup" , 0ULL, 0, NULL} + , {"lookup_root" , 0ULL, 0, NULL} + , {"nverify" , 0ULL, 0, NULL} + , {"open" , 0ULL, 0, NULL} + , {"openattr" , 0ULL, 0, NULL} + , {"open_confirm" , 0ULL, 0, NULL} + , {"open_downgrade" , 0ULL, 0, NULL} + , {"putfh" , 0ULL, 0, NULL} + , {"putpubfh" , 0ULL, 0, NULL} + , {"putrootfh" , 0ULL, 0, NULL} + , {"read" , 0ULL, 0, NULL} + , {"readdir" , 0ULL, 0, NULL} + , {"readlink" , 0ULL, 0, NULL} + , {"remove" , 0ULL, 0, NULL} + , {"rename" , 0ULL, 0, NULL} + , {"renew" , 0ULL, 0, NULL} + , {"restorefh" , 0ULL, 0, NULL} + , {"savefh" , 0ULL, 0, NULL} + , {"secinfo" , 0ULL, 0, NULL} + , {"setattr" , 0ULL, 0, NULL} + , {"setclientid" , 0ULL, 0, NULL} + , {"setclientid_confirm" , 0ULL, 0, NULL} + , {"verify" , 0ULL, 0, NULL} + , {"write" , 0ULL, 0, 
NULL} + , {"release_lockowner" , 0ULL, 0, NULL} + , + + /* nfs41 */ + { "backchannel_ctl" , 0ULL, 0, NULL} + , {"bind_conn_to_session", 0ULL, 0, NULL} + , {"exchange_id" , 0ULL, 0, NULL} + , {"create_session" , 0ULL, 0, NULL} + , {"destroy_session" , 0ULL, 0, NULL} + , {"free_stateid" , 0ULL, 0, NULL} + , {"get_dir_delegation" , 0ULL, 0, NULL} + , {"getdeviceinfo" , 0ULL, 0, NULL} + , {"getdevicelist" , 0ULL, 0, NULL} + , {"layoutcommit" , 0ULL, 0, NULL} + , {"layoutget" , 0ULL, 0, NULL} + , {"layoutreturn" , 0ULL, 0, NULL} + , {"secinfo_no_name" , 0ULL, 0, NULL} + , {"sequence" , 0ULL, 0, NULL} + , {"set_ssv" , 0ULL, 0, NULL} + , {"test_stateid" , 0ULL, 0, NULL} + , {"want_delegation" , 0ULL, 0, NULL} + , {"destroy_clientid" , 0ULL, 0, NULL} + , {"reclaim_complete" , 0ULL, 0, NULL} + , + + /* nfs42 */ + { "allocate" , 0ULL, 0, NULL} + , {"copy" , 0ULL, 0, NULL} + , {"copy_notify" , 0ULL, 0, NULL} + , {"deallocate" , 0ULL, 0, NULL} + , {"ioadvise" , 0ULL, 0, NULL} + , {"layouterror" , 0ULL, 0, NULL} + , {"layoutstats" , 0ULL, 0, NULL} + , {"offload_cancel" , 0ULL, 0, NULL} + , {"offload_status" , 0ULL, 0, NULL} + , {"read_plus" , 0ULL, 0, NULL} + , {"seek" , 0ULL, 0, NULL} + , {"write_same" , 0ULL, 0, NULL} + , + + /* termination */ + { "" , 0ULL, 0, NULL} +}; + + +int do_proc_net_rpc_nfsd(int update_every, usec_t dt) { + (void)dt; + static procfile *ff = NULL; + static int do_rc = -1, do_fh = -1, do_io = -1, do_th = -1, do_net = -1, do_rpc = -1, do_proc2 = -1, do_proc3 = -1, do_proc4 = -1, do_proc4ops = -1; + static int proc2_warning = 0, proc3_warning = 0, proc4_warning = 0, proc4ops_warning = 0; + + if(unlikely(!ff)) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/net/rpc/nfsd"); + ff = procfile_open(config_get("plugin:proc:/proc/net/rpc/nfsd", "filename to monitor", filename), " \t", PROCFILE_FLAG_DEFAULT); + if(unlikely(!ff)) return 1; + } + + ff = procfile_readall(ff); + if(unlikely(!ff)) 
return 0; // we return 0, so that we will retry to open it next time + + if(unlikely(do_rc == -1)) { + do_rc = config_get_boolean("plugin:proc:/proc/net/rpc/nfsd", "read cache", 1); + do_fh = config_get_boolean("plugin:proc:/proc/net/rpc/nfsd", "file handles", 1); + do_io = config_get_boolean("plugin:proc:/proc/net/rpc/nfsd", "I/O", 1); + do_th = config_get_boolean("plugin:proc:/proc/net/rpc/nfsd", "threads", 1); + do_net = config_get_boolean("plugin:proc:/proc/net/rpc/nfsd", "network", 1); + do_rpc = config_get_boolean("plugin:proc:/proc/net/rpc/nfsd", "rpc", 1); + do_proc2 = config_get_boolean("plugin:proc:/proc/net/rpc/nfsd", "NFS v2 procedures", 1); + do_proc3 = config_get_boolean("plugin:proc:/proc/net/rpc/nfsd", "NFS v3 procedures", 1); + do_proc4 = config_get_boolean("plugin:proc:/proc/net/rpc/nfsd", "NFS v4 procedures", 1); + do_proc4ops = config_get_boolean("plugin:proc:/proc/net/rpc/nfsd", "NFS v4 operations", 1); + } + + // if they are enabled, reset them to 1 + // later we do them = 2 to avoid doing strcmp() for all lines + if(do_rc) do_rc = 1; + if(do_fh) do_fh = 1; + if(do_io) do_io = 1; + if(do_th) do_th = 1; + if(do_net) do_net = 1; + if(do_rpc) do_rpc = 1; + if(do_proc2) do_proc2 = 1; + if(do_proc3) do_proc3 = 1; + if(do_proc4) do_proc4 = 1; + if(do_proc4ops) do_proc4ops = 1; + + size_t lines = procfile_lines(ff), l; + + char *type; + unsigned long long rc_hits = 0, rc_misses = 0, rc_nocache = 0; + unsigned long long fh_stale = 0; + unsigned long long io_read = 0, io_write = 0; + unsigned long long th_threads = 0; + unsigned long long net_count = 0, net_udp_count = 0, net_tcp_count = 0, net_tcp_connections = 0; + unsigned long long rpc_calls = 0, rpc_bad_format = 0, rpc_bad_auth = 0, rpc_bad_client = 0; + + for(l = 0; l < lines ;l++) { + size_t words = procfile_linewords(ff, l); + if(unlikely(!words)) continue; + + type = procfile_lineword(ff, l, 0); + + if(do_rc == 1 && strcmp(type, "rc") == 0) { + if(unlikely(words < 4)) { + collector_error("%s 
line of /proc/net/rpc/nfsd has %zu words, expected %d", type, words, 4); + continue; + } + + rc_hits = str2ull(procfile_lineword(ff, l, 1), NULL); + rc_misses = str2ull(procfile_lineword(ff, l, 2), NULL); + rc_nocache = str2ull(procfile_lineword(ff, l, 3), NULL); + + unsigned long long sum = rc_hits + rc_misses + rc_nocache; + if(sum == 0ULL) do_rc = -1; + else do_rc = 2; + } + else if(do_fh == 1 && strcmp(type, "fh") == 0) { + if(unlikely(words < 6)) { + collector_error("%s line of /proc/net/rpc/nfsd has %zu words, expected %d", type, words, 6); + continue; + } + + fh_stale = str2ull(procfile_lineword(ff, l, 1), NULL); + + // the other file handle metrics were never used and are always zero + + if(fh_stale == 0ULL) do_fh = -1; + else do_fh = 2; + } + else if(do_io == 1 && strcmp(type, "io") == 0) { + if(unlikely(words < 3)) { + collector_error("%s line of /proc/net/rpc/nfsd has %zu words, expected %d", type, words, 3); + continue; + } + + io_read = str2ull(procfile_lineword(ff, l, 1), NULL); + io_write = str2ull(procfile_lineword(ff, l, 2), NULL); + + unsigned long long sum = io_read + io_write; + if(sum == 0ULL) do_io = -1; + else do_io = 2; + } + else if(do_th == 1 && strcmp(type, "th") == 0) { + if(unlikely(words < 13)) { + collector_error("%s line of /proc/net/rpc/nfsd has %zu words, expected %d", type, words, 13); + continue; + } + + th_threads = str2ull(procfile_lineword(ff, l, 1), NULL); + + // thread histogram has been disabled since 2009 (kernel 2.6.30) + // https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=8bbfa9f3889b643fc7de82c0c761ef17097f8faf + + do_th = 2; + } + else if(do_net == 1 && strcmp(type, "net") == 0) { + if(unlikely(words < 5)) { + collector_error("%s line of /proc/net/rpc/nfsd has %zu words, expected %d", type, words, 5); + continue; + } + + net_count = str2ull(procfile_lineword(ff, l, 1), NULL); + net_udp_count = str2ull(procfile_lineword(ff, l, 2), NULL); + net_tcp_count = str2ull(procfile_lineword(ff, l, 3), 
NULL); + net_tcp_connections = str2ull(procfile_lineword(ff, l, 4), NULL); + + unsigned long long sum = net_count + net_udp_count + net_tcp_count + net_tcp_connections; + if(sum == 0ULL) do_net = -1; + else do_net = 2; + } + else if(do_rpc == 1 && strcmp(type, "rpc") == 0) { + if(unlikely(words < 6)) { + collector_error("%s line of /proc/net/rpc/nfsd has %zu words, expected %d", type, words, 6); + continue; + } + + rpc_calls = str2ull(procfile_lineword(ff, l, 1), NULL); + rpc_bad_format = str2ull(procfile_lineword(ff, l, 3), NULL); + rpc_bad_auth = str2ull(procfile_lineword(ff, l, 4), NULL); + rpc_bad_client = str2ull(procfile_lineword(ff, l, 5), NULL); + + unsigned long long sum = rpc_calls + rpc_bad_format + rpc_bad_auth + rpc_bad_client; + if(sum == 0ULL) do_rpc = -1; + else do_rpc = 2; + } + else if(do_proc2 == 1 && strcmp(type, "proc2") == 0) { + // the first number is the count of numbers present + // so we start from word 2 + + unsigned long long sum = 0; + unsigned int i, j; + for(i = 0, j = 2; j < words && nfsd_proc2_values[i].name[0] ; i++, j++) { + nfsd_proc2_values[i].value = str2ull(procfile_lineword(ff, l, j), NULL); + nfsd_proc2_values[i].present = 1; + sum += nfsd_proc2_values[i].value; + } + + if(sum == 0ULL) { + if(!proc2_warning) { + collector_error("Disabling /proc/net/rpc/nfsd v2 procedure calls chart. It seems unused on this machine. 
It will be enabled automatically when found with data in it."); + proc2_warning = 1; + } + do_proc2 = 0; + } + else do_proc2 = 2; + } + else if(do_proc3 == 1 && strcmp(type, "proc3") == 0) { + // the first number is the count of numbers present + // so we start from word 2 + + unsigned long long sum = 0; + unsigned int i, j; + for(i = 0, j = 2; j < words && nfsd_proc3_values[i].name[0] ; i++, j++) { + nfsd_proc3_values[i].value = str2ull(procfile_lineword(ff, l, j), NULL); + nfsd_proc3_values[i].present = 1; + sum += nfsd_proc3_values[i].value; + } + + if(sum == 0ULL) { + if(!proc3_warning) { + collector_info("Disabling /proc/net/rpc/nfsd v3 procedure calls chart. It seems unused on this machine. It will be enabled automatically when found with data in it."); + proc3_warning = 1; + } + do_proc3 = 0; + } + else do_proc3 = 2; + } + else if(do_proc4 == 1 && strcmp(type, "proc4") == 0) { + // the first number is the count of numbers present + // so we start from word 2 + + unsigned long long sum = 0; + unsigned int i, j; + for(i = 0, j = 2; j < words && nfsd_proc4_values[i].name[0] ; i++, j++) { + nfsd_proc4_values[i].value = str2ull(procfile_lineword(ff, l, j), NULL); + nfsd_proc4_values[i].present = 1; + sum += nfsd_proc4_values[i].value; + } + + if(sum == 0ULL) { + if(!proc4_warning) { + collector_info("Disabling /proc/net/rpc/nfsd v4 procedure calls chart. It seems unused on this machine. 
It will be enabled automatically when found with data in it."); + proc4_warning = 1; + } + do_proc4 = 0; + } + else do_proc4 = 2; + } + else if(do_proc4ops == 1 && strcmp(type, "proc4ops") == 0) { + // the first number is the count of numbers present + // so we start from word 2 + + unsigned long long sum = 0; + unsigned int i, j; + for(i = 0, j = 2; j < words && nfsd4_ops_values[i].name[0] ; i++, j++) { + nfsd4_ops_values[i].value = str2ull(procfile_lineword(ff, l, j), NULL); + nfsd4_ops_values[i].present = 1; + sum += nfsd4_ops_values[i].value; + } + + if(sum == 0ULL) { + if(!proc4ops_warning) { + collector_info("Disabling /proc/net/rpc/nfsd v4 operations chart. It seems unused on this machine. It will be enabled automatically when found with data in it."); + proc4ops_warning = 1; + } + do_proc4ops = 0; + } + else do_proc4ops = 2; + } + } + + if(do_rc == 2) { + static RRDSET *st = NULL; + static RRDDIM *rd_hits = NULL, + *rd_misses = NULL, + *rd_nocache = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + "nfsd" + , "readcache" + , NULL + , "cache" + , NULL + , "NFS Server Read Cache" + , "reads/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NFSD_NAME + , NETDATA_CHART_PRIO_NFSD_READCACHE + , update_every + , RRDSET_TYPE_STACKED + ); + + rd_hits = rrddim_add(st, "hits", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_misses = rrddim_add(st, "misses", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_nocache = rrddim_add(st, "nocache", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st, rd_hits, rc_hits); + rrddim_set_by_pointer(st, rd_misses, rc_misses); + rrddim_set_by_pointer(st, rd_nocache, rc_nocache); + rrdset_done(st); + } + + if(do_fh == 2) { + static RRDSET *st = NULL; + static RRDDIM *rd_stale = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + "nfsd" + , "filehandles" + , NULL + , "filehandles" + , NULL + , "NFS Server File Handles" + , "handles/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NFSD_NAME + , 
NETDATA_CHART_PRIO_NFSD_FILEHANDLES + , update_every + , RRDSET_TYPE_LINE + ); + rrdset_flag_set(st, RRDSET_FLAG_DETAIL); + + rd_stale = rrddim_add(st, "stale", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st, rd_stale, fh_stale); + rrdset_done(st); + } + + if(do_io == 2) { + static RRDSET *st = NULL; + static RRDDIM *rd_read = NULL, + *rd_write = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + "nfsd" + , "io" + , NULL + , "io" + , NULL + , "NFS Server I/O" + , "kilobytes/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NFSD_NAME + , NETDATA_CHART_PRIO_NFSD_IO + , update_every + , RRDSET_TYPE_AREA + ); + + rd_read = rrddim_add(st, "read", NULL, 1, 1000, RRD_ALGORITHM_INCREMENTAL); + rd_write = rrddim_add(st, "write", NULL, -1, 1000, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st, rd_read, io_read); + rrddim_set_by_pointer(st, rd_write, io_write); + rrdset_done(st); + } + + if(do_th == 2) { + static RRDSET *st = NULL; + static RRDDIM *rd_threads = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + "nfsd" + , "threads" + , NULL + , "threads" + , NULL + , "NFS Server Threads" + , "threads" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NFSD_NAME + , NETDATA_CHART_PRIO_NFSD_THREADS + , update_every + , RRDSET_TYPE_LINE + ); + + rd_threads = rrddim_add(st, "threads", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st, rd_threads, th_threads); + rrdset_done(st); + } + + if(do_net == 2) { + static RRDSET *st = NULL; + static RRDDIM *rd_udp = NULL, + *rd_tcp = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + "nfsd" + , "net" + , NULL + , "network" + , NULL + , "NFS Server Network Statistics" + , "packets/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NFSD_NAME + , NETDATA_CHART_PRIO_NFSD_NET + , update_every + , RRDSET_TYPE_STACKED + ); + rrdset_flag_set(st, RRDSET_FLAG_DETAIL); + + rd_udp = rrddim_add(st, "udp", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_tcp = rrddim_add(st, 
"tcp", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + // ignore net_count, net_tcp_connections + (void)net_count; + (void)net_tcp_connections; + + rrddim_set_by_pointer(st, rd_udp, net_udp_count); + rrddim_set_by_pointer(st, rd_tcp, net_tcp_count); + rrdset_done(st); + } + + if(do_rpc == 2) { + static RRDSET *st = NULL; + static RRDDIM *rd_calls = NULL, + *rd_bad_format = NULL, + *rd_bad_auth = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + "nfsd" + , "rpc" + , NULL + , "rpc" + , NULL + , "NFS Server Remote Procedure Calls Statistics" + , "calls/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NFSD_NAME + , NETDATA_CHART_PRIO_NFSD_RPC + , update_every + , RRDSET_TYPE_LINE + ); + rrdset_flag_set(st, RRDSET_FLAG_DETAIL); + + rd_calls = rrddim_add(st, "calls", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_bad_format = rrddim_add(st, "bad_format", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_bad_auth = rrddim_add(st, "bad_auth", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + // ignore rpc_bad_client + (void)rpc_bad_client; + + rrddim_set_by_pointer(st, rd_calls, rpc_calls); + rrddim_set_by_pointer(st, rd_bad_format, rpc_bad_format); + rrddim_set_by_pointer(st, rd_bad_auth, rpc_bad_auth); + rrdset_done(st); + } + + if(do_proc2 == 2) { + static RRDSET *st = NULL; + if(unlikely(!st)) { + st = rrdset_create_localhost( + "nfsd" + , "proc2" + , NULL + , "nfsv2rpc" + , NULL + , "NFS v2 Server Remote Procedure Calls" + , "calls/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NFSD_NAME + , NETDATA_CHART_PRIO_NFSD_PROC2 + , update_every + , RRDSET_TYPE_STACKED + ); + } + + size_t i; + for(i = 0; nfsd_proc2_values[i].present ; i++) { + if(unlikely(!nfsd_proc2_values[i].rd)) + nfsd_proc2_values[i].rd = rrddim_add(st, nfsd_proc2_values[i].name, NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + + rrddim_set_by_pointer(st, nfsd_proc2_values[i].rd, nfsd_proc2_values[i].value); + } + + rrdset_done(st); + } + + if(do_proc3 == 2) { + static RRDSET *st = NULL; + if(unlikely(!st)) 
{ + st = rrdset_create_localhost( + "nfsd" + , "proc3" + , NULL + , "nfsv3rpc" + , NULL + , "NFS v3 Server Remote Procedure Calls" + , "calls/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NFSD_NAME + , NETDATA_CHART_PRIO_NFSD_PROC3 + , update_every + , RRDSET_TYPE_STACKED + ); + } + + size_t i; + for(i = 0; nfsd_proc3_values[i].present ; i++) { + if(unlikely(!nfsd_proc3_values[i].rd)) + nfsd_proc3_values[i].rd = rrddim_add(st, nfsd_proc3_values[i].name, NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + + rrddim_set_by_pointer(st, nfsd_proc3_values[i].rd, nfsd_proc3_values[i].value); + } + + rrdset_done(st); + } + + if(do_proc4 == 2) { + static RRDSET *st = NULL; + if(unlikely(!st)) { + st = rrdset_create_localhost( + "nfsd" + , "proc4" + , NULL + , "nfsv4rpc" + , NULL + , "NFS v4 Server Remote Procedure Calls" + , "calls/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NFSD_NAME + , NETDATA_CHART_PRIO_NFSD_PROC4 + , update_every + , RRDSET_TYPE_STACKED + ); + } + + size_t i; + for(i = 0; nfsd_proc4_values[i].present ; i++) { + if(unlikely(!nfsd_proc4_values[i].rd)) + nfsd_proc4_values[i].rd = rrddim_add(st, nfsd_proc4_values[i].name, NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + + rrddim_set_by_pointer(st, nfsd_proc4_values[i].rd, nfsd_proc4_values[i].value); + } + + rrdset_done(st); + } + + if(do_proc4ops == 2) { + static RRDSET *st = NULL; + if(unlikely(!st)) { + st = rrdset_create_localhost( + "nfsd" + , "proc4ops" + , NULL + , "nfsv4ops" + , NULL + , "NFS v4 Server Operations" + , "operations/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NFSD_NAME + , NETDATA_CHART_PRIO_NFSD_PROC4OPS + , update_every + , RRDSET_TYPE_STACKED + ); + } + + size_t i; + for(i = 0; nfsd4_ops_values[i].present ; i++) { + if(unlikely(!nfsd4_ops_values[i].rd)) + nfsd4_ops_values[i].rd = rrddim_add(st, nfsd4_ops_values[i].name, NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + + rrddim_set_by_pointer(st, nfsd4_ops_values[i].rd, nfsd4_ops_values[i].value); + } + + rrdset_done(st); + } + + return 0; +} diff 
--git a/src/collectors/proc.plugin/proc_net_sctp_snmp.c b/src/collectors/proc.plugin/proc_net_sctp_snmp.c
new file mode 100644
index 000000000..e67143e69
--- /dev/null
+++ b/src/collectors/proc.plugin/proc_net_sctp_snmp.c
@@ -0,0 +1,367 @@
+// SPDX-License-Identifier: GPL-3.0-or-later
+
+#include "plugin_proc.h"
+#define PLUGIN_PROC_MODULE_NET_SCTP_SNMP_NAME "/proc/net/sctp/snmp"
+int do_proc_net_sctp_snmp(int update_every, usec_t dt) {
+    (void)dt;
+
+    static procfile *ff = NULL;
+
+    static int
+        do_associations = -1,
+        do_transitions = -1,
+        do_packet_errors = -1,
+        do_packets = -1,
+        do_fragmentation = -1,
+        do_chunk_types = -1;
+
+    static ARL_BASE *arl_base = NULL;
+
+    static unsigned long long SctpCurrEstab = 0ULL;
+    static unsigned long long SctpActiveEstabs = 0ULL;
+    static unsigned long long SctpPassiveEstabs = 0ULL;
+    static unsigned long long SctpAborteds = 0ULL;
+    static unsigned long long SctpShutdowns = 0ULL;
+    static unsigned long long SctpOutOfBlues = 0ULL;
+    static unsigned long long SctpChecksumErrors = 0ULL;
+    static unsigned long long SctpOutCtrlChunks = 0ULL;
+    static unsigned long long SctpOutOrderChunks = 0ULL;
+    static unsigned long long SctpOutUnorderChunks = 0ULL;
+    static unsigned long long SctpInCtrlChunks = 0ULL;
+    static unsigned long long SctpInOrderChunks = 0ULL;
+    static unsigned long long SctpInUnorderChunks = 0ULL;
+    static unsigned long long SctpFragUsrMsgs = 0ULL;
+    static unsigned long long SctpReasmUsrMsgs = 0ULL;
+    static unsigned long long SctpOutSCTPPacks = 0ULL;
+    static unsigned long long SctpInSCTPPacks = 0ULL;
+    static unsigned long long SctpT1InitExpireds = 0ULL;
+    static unsigned long long SctpT1CookieExpireds = 0ULL;
+    static unsigned long long SctpT2ShutdownExpireds = 0ULL;
+    static unsigned long long SctpT3RtxExpireds = 0ULL;
+    static unsigned long long SctpT4RtoExpireds = 0ULL;
+    static unsigned long long SctpT5ShutdownGuardExpireds = 0ULL;
+    static unsigned long long SctpDelaySackExpireds = 0ULL;
+    static unsigned long long SctpAutocloseExpireds = 0ULL;
+    static unsigned long long SctpT3Retransmits = 0ULL;
+    static unsigned long long SctpPmtudRetransmits = 0ULL;
+    static unsigned long long SctpFastRetransmits = 0ULL;
+    static unsigned long long SctpInPktSoftirq = 0ULL;
+    static unsigned long long SctpInPktBacklog = 0ULL;
+    static unsigned long long SctpInPktDiscards = 0ULL;
+    static unsigned long long SctpInDataChunkDiscards = 0ULL;
+
+    if(unlikely(!arl_base)) {
+        do_associations = config_get_boolean_ondemand("plugin:proc:/proc/net/sctp/snmp", "established associations", CONFIG_BOOLEAN_AUTO);
+        do_transitions = config_get_boolean_ondemand("plugin:proc:/proc/net/sctp/snmp", "association transitions", CONFIG_BOOLEAN_AUTO);
+        do_fragmentation = config_get_boolean_ondemand("plugin:proc:/proc/net/sctp/snmp", "fragmentation", CONFIG_BOOLEAN_AUTO);
+        do_packets = config_get_boolean_ondemand("plugin:proc:/proc/net/sctp/snmp", "packets", CONFIG_BOOLEAN_AUTO);
+        do_packet_errors = config_get_boolean_ondemand("plugin:proc:/proc/net/sctp/snmp", "packet errors", CONFIG_BOOLEAN_AUTO);
+        do_chunk_types = config_get_boolean_ondemand("plugin:proc:/proc/net/sctp/snmp", "chunk types", CONFIG_BOOLEAN_AUTO);
+
+        arl_base = arl_create("sctp", NULL, 60);
+        arl_expect(arl_base, "SctpCurrEstab", &SctpCurrEstab);
+        arl_expect(arl_base, "SctpActiveEstabs", &SctpActiveEstabs);
+        arl_expect(arl_base, "SctpPassiveEstabs", &SctpPassiveEstabs);
+        arl_expect(arl_base, "SctpAborteds", &SctpAborteds);
+        arl_expect(arl_base, "SctpShutdowns", &SctpShutdowns);
+        arl_expect(arl_base, "SctpOutOfBlues", &SctpOutOfBlues);
+        arl_expect(arl_base, "SctpChecksumErrors", &SctpChecksumErrors);
+        arl_expect(arl_base, "SctpOutCtrlChunks", &SctpOutCtrlChunks);
+        arl_expect(arl_base, "SctpOutOrderChunks", &SctpOutOrderChunks);
+        arl_expect(arl_base, "SctpOutUnorderChunks", &SctpOutUnorderChunks);
+        arl_expect(arl_base, "SctpInCtrlChunks", &SctpInCtrlChunks);
+        arl_expect(arl_base, "SctpInOrderChunks", &SctpInOrderChunks);
+        arl_expect(arl_base, "SctpInUnorderChunks", &SctpInUnorderChunks);
+        arl_expect(arl_base, "SctpFragUsrMsgs", &SctpFragUsrMsgs);
+        arl_expect(arl_base, "SctpReasmUsrMsgs", &SctpReasmUsrMsgs);
+        arl_expect(arl_base, "SctpOutSCTPPacks", &SctpOutSCTPPacks);
+        arl_expect(arl_base, "SctpInSCTPPacks", &SctpInSCTPPacks);
+        arl_expect(arl_base, "SctpT1InitExpireds", &SctpT1InitExpireds);
+        arl_expect(arl_base, "SctpT1CookieExpireds", &SctpT1CookieExpireds);
+        arl_expect(arl_base, "SctpT2ShutdownExpireds", &SctpT2ShutdownExpireds);
+        arl_expect(arl_base, "SctpT3RtxExpireds", &SctpT3RtxExpireds);
+        arl_expect(arl_base, "SctpT4RtoExpireds", &SctpT4RtoExpireds);
+        arl_expect(arl_base, "SctpT5ShutdownGuardExpireds", &SctpT5ShutdownGuardExpireds);
+        arl_expect(arl_base, "SctpDelaySackExpireds", &SctpDelaySackExpireds);
+        arl_expect(arl_base, "SctpAutocloseExpireds", &SctpAutocloseExpireds);
+        arl_expect(arl_base, "SctpT3Retransmits", &SctpT3Retransmits);
+        arl_expect(arl_base, "SctpPmtudRetransmits", &SctpPmtudRetransmits);
+        arl_expect(arl_base, "SctpFastRetransmits", &SctpFastRetransmits);
+        arl_expect(arl_base, "SctpInPktSoftirq", &SctpInPktSoftirq);
+        arl_expect(arl_base, "SctpInPktBacklog", &SctpInPktBacklog);
+        arl_expect(arl_base, "SctpInPktDiscards", &SctpInPktDiscards);
+        arl_expect(arl_base, "SctpInDataChunkDiscards", &SctpInDataChunkDiscards);
+    }
+
+    if(unlikely(!ff)) {
+        char filename[FILENAME_MAX + 1];
+        snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/net/sctp/snmp");
+        ff = procfile_open(config_get("plugin:proc:/proc/net/sctp/snmp", "filename to monitor", filename), " \t:", PROCFILE_FLAG_DEFAULT);
+        if(unlikely(!ff))
+            return 1;
+    }
+
+    ff = procfile_readall(ff);
+    if(unlikely(!ff))
+        return 0; // we return 0, so that we will retry to open it next time
+
+    size_t lines = procfile_lines(ff), l;
+
+    arl_begin(arl_base);
+
+    for(l = 0; l < lines ;l++) {
+        size_t words = procfile_linewords(ff, l);
+        if(unlikely(words < 2)) {
+            if(unlikely(words)) collector_error("Cannot read /proc/net/sctp/snmp line %zu. Expected 2 params, read %zu.", l, words);
+            continue;
+        }
+
+        if(unlikely(arl_check(arl_base,
+                procfile_lineword(ff, l, 0),
+                procfile_lineword(ff, l, 1)))) break;
+    }
+
+    // --------------------------------------------------------------------
+
+    if(do_associations == CONFIG_BOOLEAN_YES || (do_associations == CONFIG_BOOLEAN_AUTO &&
+        (SctpCurrEstab || netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        do_associations = CONFIG_BOOLEAN_YES;
+        static RRDSET *st = NULL;
+        static RRDDIM *rd_established = NULL;
+
+        if(unlikely(!st)) {
+            st = rrdset_create_localhost(
+                    "sctp"
+                    , "established"
+                    , NULL
+                    , "associations"
+                    , NULL
+                    , "SCTP current total number of established associations"
+                    , "associations"
+                    , PLUGIN_PROC_NAME
+                    , PLUGIN_PROC_MODULE_NET_SCTP_SNMP_NAME
+                    , NETDATA_CHART_PRIO_SCTP
+                    , update_every
+                    , RRDSET_TYPE_LINE
+            );
+
+            rd_established = rrddim_add(st, "SctpCurrEstab", "established", 1, 1, RRD_ALGORITHM_ABSOLUTE);
+        }
+
+        rrddim_set_by_pointer(st, rd_established, SctpCurrEstab);
+        rrdset_done(st);
+    }
+
+    // --------------------------------------------------------------------
+
+    if(do_transitions == CONFIG_BOOLEAN_YES || (do_transitions == CONFIG_BOOLEAN_AUTO &&
+        (SctpActiveEstabs ||
+         SctpPassiveEstabs ||
+         SctpAborteds ||
+         SctpShutdowns ||
+         netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        do_transitions = CONFIG_BOOLEAN_YES;
+        static RRDSET *st = NULL;
+        static RRDDIM *rd_active = NULL,
+                      *rd_passive = NULL,
+                      *rd_aborted = NULL,
+                      *rd_shutdown = NULL;
+
+        if(unlikely(!st)) {
+            st = rrdset_create_localhost(
+                    "sctp"
+                    , "transitions"
+                    , NULL
+                    , "transitions"
+                    , NULL
+                    , "SCTP Association Transitions"
+                    , "transitions/s"
+                    , PLUGIN_PROC_NAME
+                    , PLUGIN_PROC_MODULE_NET_SCTP_SNMP_NAME
+                    , NETDATA_CHART_PRIO_SCTP + 10
+                    , update_every
+                    , RRDSET_TYPE_LINE
+            );
+
+            rd_active = rrddim_add(st, "SctpActiveEstabs", "active", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_passive = rrddim_add(st, "SctpPassiveEstabs", "passive", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_aborted = rrddim_add(st, "SctpAborteds", "aborted", -1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_shutdown = rrddim_add(st, "SctpShutdowns", "shutdown", -1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st, rd_active, SctpActiveEstabs);
+        rrddim_set_by_pointer(st, rd_passive, SctpPassiveEstabs);
+        rrddim_set_by_pointer(st, rd_aborted, SctpAborteds);
+        rrddim_set_by_pointer(st, rd_shutdown, SctpShutdowns);
+        rrdset_done(st);
+    }
+
+    // --------------------------------------------------------------------
+
+    if(do_packets == CONFIG_BOOLEAN_YES || (do_packets == CONFIG_BOOLEAN_AUTO &&
+        (SctpInSCTPPacks ||
+         SctpOutSCTPPacks ||
+         netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        do_packets = CONFIG_BOOLEAN_YES;
+        static RRDSET *st = NULL;
+        static RRDDIM *rd_received = NULL,
+                      *rd_sent = NULL;
+
+        if(unlikely(!st)) {
+            st = rrdset_create_localhost(
+                    "sctp"
+                    , "packets"
+                    , NULL
+                    , "packets"
+                    , NULL
+                    , "SCTP Packets"
+                    , "packets/s"
+                    , PLUGIN_PROC_NAME
+                    , PLUGIN_PROC_MODULE_NET_SCTP_SNMP_NAME
+                    , NETDATA_CHART_PRIO_SCTP + 20
+                    , update_every
+                    , RRDSET_TYPE_LINE
+            );
+            rrdset_flag_set(st, RRDSET_FLAG_DETAIL);
+
+            rd_received = rrddim_add(st, "SctpInSCTPPacks", "received", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_sent = rrddim_add(st, "SctpOutSCTPPacks", "sent", -1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st, rd_received, SctpInSCTPPacks);
+        rrddim_set_by_pointer(st, rd_sent, SctpOutSCTPPacks);
+        rrdset_done(st);
+    }
+
+    // --------------------------------------------------------------------
+
+    if(do_packet_errors == CONFIG_BOOLEAN_YES || (do_packet_errors == CONFIG_BOOLEAN_AUTO &&
+        (SctpOutOfBlues ||
+         SctpChecksumErrors ||
+         netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        do_packet_errors = CONFIG_BOOLEAN_YES;
+        static RRDSET *st = NULL;
+        static RRDDIM *rd_invalid = NULL,
+                      *rd_csum = NULL;
+
+        if(unlikely(!st)) {
+            st = rrdset_create_localhost(
+                    "sctp"
+                    , "packet_errors"
+                    , NULL
+                    , "packets"
+                    , NULL
+                    , "SCTP Packet Errors"
+                    , "packets/s"
+                    , PLUGIN_PROC_NAME
+                    , PLUGIN_PROC_MODULE_NET_SCTP_SNMP_NAME
+                    , NETDATA_CHART_PRIO_SCTP + 30
+                    , update_every
+                    , RRDSET_TYPE_LINE
+            );
+            rrdset_flag_set(st, RRDSET_FLAG_DETAIL);
+
+            rd_invalid = rrddim_add(st, "SctpOutOfBlues", "invalid", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_csum = rrddim_add(st, "SctpChecksumErrors", "checksum", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st, rd_invalid, SctpOutOfBlues);
+        rrddim_set_by_pointer(st, rd_csum, SctpChecksumErrors);
+        rrdset_done(st);
+    }
+
+    // --------------------------------------------------------------------
+
+    if(do_fragmentation == CONFIG_BOOLEAN_YES || (do_fragmentation == CONFIG_BOOLEAN_AUTO &&
+        (SctpFragUsrMsgs ||
+         SctpReasmUsrMsgs ||
+         netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        do_fragmentation = CONFIG_BOOLEAN_YES;
+
+        static RRDSET *st = NULL;
+        static RRDDIM *rd_fragmented = NULL,
+                      *rd_reassembled = NULL;
+
+        if(unlikely(!st)) {
+            st = rrdset_create_localhost(
+                    "sctp"
+                    , "fragmentation"
+                    , NULL
+                    , "fragmentation"
+                    , NULL
+                    , "SCTP Fragmentation"
+                    , "packets/s"
+                    , PLUGIN_PROC_NAME
+                    , PLUGIN_PROC_MODULE_NET_SCTP_SNMP_NAME
+                    , NETDATA_CHART_PRIO_SCTP + 40
+                    , update_every
+                    , RRDSET_TYPE_LINE);
+            rrdset_flag_set(st, RRDSET_FLAG_DETAIL);
+
+            rd_reassembled = rrddim_add(st, "SctpReasmUsrMsgs", "reassembled", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_fragmented = rrddim_add(st, "SctpFragUsrMsgs", "fragmented", -1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st, rd_reassembled, SctpReasmUsrMsgs);
+        rrddim_set_by_pointer(st, rd_fragmented, SctpFragUsrMsgs);
+        rrdset_done(st);
+    }
+
+    // --------------------------------------------------------------------
+
+    if(do_chunk_types == CONFIG_BOOLEAN_YES || (do_chunk_types == CONFIG_BOOLEAN_AUTO &&
+        (SctpInCtrlChunks ||
+         SctpInOrderChunks ||
+         SctpInUnorderChunks ||
+         SctpOutCtrlChunks ||
+         SctpOutOrderChunks ||
+         SctpOutUnorderChunks ||
+         netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        do_chunk_types = CONFIG_BOOLEAN_YES;
+        static RRDSET *st = NULL;
+        static RRDDIM
+            *rd_InCtrl = NULL,
+            *rd_InOrder = NULL,
+            *rd_InUnorder = NULL,
+            *rd_OutCtrl = NULL,
+            *rd_OutOrder = NULL,
+            *rd_OutUnorder = NULL;
+
+        if(unlikely(!st)) {
+            st = rrdset_create_localhost(
+                    "sctp"
+                    , "chunks"
+                    , NULL
+                    , "chunks"
+                    , NULL
+                    , "SCTP Chunk Types"
+                    , "chunks/s"
+                    , PLUGIN_PROC_NAME
+                    , PLUGIN_PROC_MODULE_NET_SCTP_SNMP_NAME
+                    , NETDATA_CHART_PRIO_SCTP + 50
+                    , update_every
+                    , RRDSET_TYPE_LINE
+            );
+            rrdset_flag_set(st, RRDSET_FLAG_DETAIL);
+
+            rd_InCtrl = rrddim_add(st, "SctpInCtrlChunks", "InCtrl", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InOrder = rrddim_add(st, "SctpInOrderChunks", "InOrder", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_InUnorder = rrddim_add(st, "SctpInUnorderChunks", "InUnorder", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_OutCtrl = rrddim_add(st, "SctpOutCtrlChunks", "OutCtrl", -1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_OutOrder = rrddim_add(st, "SctpOutOrderChunks", "OutOrder", -1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_OutUnorder = rrddim_add(st, "SctpOutUnorderChunks", "OutUnorder", -1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st, rd_InCtrl, SctpInCtrlChunks);
+        rrddim_set_by_pointer(st, rd_InOrder, SctpInOrderChunks);
+        rrddim_set_by_pointer(st, rd_InUnorder, SctpInUnorderChunks);
+        rrddim_set_by_pointer(st, rd_OutCtrl, SctpOutCtrlChunks);
+        rrddim_set_by_pointer(st, rd_OutOrder, SctpOutOrderChunks);
+        rrddim_set_by_pointer(st, rd_OutUnorder, SctpOutUnorderChunks);
+        rrdset_done(st);
+    }
+
+    return 0;
+}
+
diff --git a/src/collectors/proc.plugin/proc_net_sockstat.c b/src/collectors/proc.plugin/proc_net_sockstat.c
new file mode 100644
index 000000000..4be67d61e
--- /dev/null
+++ b/src/collectors/proc.plugin/proc_net_sockstat.c
@@ -0,0 +1,529 @@
+// SPDX-License-Identifier: 
GPL-3.0-or-later + +#include "plugin_proc.h" + +#define PLUGIN_PROC_MODULE_NET_SOCKSTAT_NAME "/proc/net/sockstat" + +static struct proc_net_sockstat { + kernel_uint_t sockets_used; + + kernel_uint_t tcp_inuse; + kernel_uint_t tcp_orphan; + kernel_uint_t tcp_tw; + kernel_uint_t tcp_alloc; + kernel_uint_t tcp_mem; + + kernel_uint_t udp_inuse; + kernel_uint_t udp_mem; + + kernel_uint_t udplite_inuse; + + kernel_uint_t raw_inuse; + + kernel_uint_t frag_inuse; + kernel_uint_t frag_memory; +} sockstat_root = { 0 }; + + +static int read_tcp_mem(void) { + static char *filename = NULL; + static const RRDVAR_ACQUIRED *tcp_mem_low_threshold = NULL, + *tcp_mem_pressure_threshold = NULL, + *tcp_mem_high_threshold = NULL; + + if(unlikely(!tcp_mem_low_threshold)) { + tcp_mem_low_threshold = rrdvar_host_variable_add_and_acquire(localhost, "tcp_mem_low"); + tcp_mem_pressure_threshold = rrdvar_host_variable_add_and_acquire(localhost, "tcp_mem_pressure"); + tcp_mem_high_threshold = rrdvar_host_variable_add_and_acquire(localhost, "tcp_mem_high"); + } + + if(unlikely(!filename)) { + char buffer[FILENAME_MAX + 1]; + snprintfz(buffer, FILENAME_MAX, "%s/proc/sys/net/ipv4/tcp_mem", netdata_configured_host_prefix); + filename = strdupz(buffer); + } + + char buffer[200 + 1], *start, *end; + if(read_txt_file(filename, buffer, sizeof(buffer)) != 0) return 1; + buffer[200] = '\0'; + + unsigned long long low = 0, pressure = 0, high = 0; + + start = buffer; + low = strtoull(start, &end, 10); + + start = end; + pressure = strtoull(start, &end, 10); + + start = end; + high = strtoull(start, &end, 10); + + // fprintf(stderr, "TCP MEM low = %llu, pressure = %llu, high = %llu\n", low, pressure, high); + + rrdvar_host_variable_set(localhost, tcp_mem_low_threshold, low * sysconf(_SC_PAGESIZE) / 1024.0); + rrdvar_host_variable_set(localhost, tcp_mem_pressure_threshold, pressure * sysconf(_SC_PAGESIZE) / 1024.0); + rrdvar_host_variable_set(localhost, tcp_mem_high_threshold, high * sysconf(_SC_PAGESIZE) / 
1024.0); + + return 0; +} + +static kernel_uint_t read_tcp_max_orphans(void) { + static char *filename = NULL; + static const RRDVAR_ACQUIRED *tcp_max_orphans_var = NULL; + + if(unlikely(!filename)) { + char buffer[FILENAME_MAX + 1]; + snprintfz(buffer, FILENAME_MAX, "%s/proc/sys/net/ipv4/tcp_max_orphans", netdata_configured_host_prefix); + filename = strdupz(buffer); + } + + unsigned long long tcp_max_orphans = 0; + if(read_single_number_file(filename, &tcp_max_orphans) == 0) { + + if(unlikely(!tcp_max_orphans_var)) + tcp_max_orphans_var = rrdvar_host_variable_add_and_acquire(localhost, "tcp_max_orphans"); + + rrdvar_host_variable_set(localhost, tcp_max_orphans_var, tcp_max_orphans); + return tcp_max_orphans; + } + + return 0; +} + +int do_proc_net_sockstat(int update_every, usec_t dt) { + (void)dt; + + static procfile *ff = NULL; + + static uint32_t hash_sockets = 0, + hash_raw = 0, + hash_frag = 0, + hash_tcp = 0, + hash_udp = 0, + hash_udplite = 0; + + static long long update_constants_every = 60, update_constants_count = 0; + + static ARL_BASE *arl_sockets = NULL; + static ARL_BASE *arl_tcp = NULL; + static ARL_BASE *arl_udp = NULL; + static ARL_BASE *arl_udplite = NULL; + static ARL_BASE *arl_raw = NULL; + static ARL_BASE *arl_frag = NULL; + + static int do_sockets = -1, do_tcp_sockets = -1, do_tcp_mem = -1, do_udp_sockets = -1, do_udp_mem = -1, do_udplite_sockets = -1, do_raw_sockets = -1, do_frag_sockets = -1, do_frag_mem = -1; + + static char *keys[7] = { NULL }; + static uint32_t hashes[7] = { 0 }; + static ARL_BASE *bases[7] = { NULL }; + + if(unlikely(!arl_sockets)) { + do_sockets = config_get_boolean_ondemand("plugin:proc:/proc/net/sockstat", "ipv4 sockets", CONFIG_BOOLEAN_AUTO); + do_tcp_sockets = config_get_boolean_ondemand("plugin:proc:/proc/net/sockstat", "ipv4 TCP sockets", CONFIG_BOOLEAN_AUTO); + do_tcp_mem = config_get_boolean_ondemand("plugin:proc:/proc/net/sockstat", "ipv4 TCP memory", CONFIG_BOOLEAN_AUTO); + do_udp_sockets = 
config_get_boolean_ondemand("plugin:proc:/proc/net/sockstat", "ipv4 UDP sockets", CONFIG_BOOLEAN_AUTO); + do_udp_mem = config_get_boolean_ondemand("plugin:proc:/proc/net/sockstat", "ipv4 UDP memory", CONFIG_BOOLEAN_AUTO); + do_udplite_sockets = config_get_boolean_ondemand("plugin:proc:/proc/net/sockstat", "ipv4 UDPLITE sockets", CONFIG_BOOLEAN_AUTO); + do_raw_sockets = config_get_boolean_ondemand("plugin:proc:/proc/net/sockstat", "ipv4 RAW sockets", CONFIG_BOOLEAN_AUTO); + do_frag_sockets = config_get_boolean_ondemand("plugin:proc:/proc/net/sockstat", "ipv4 FRAG sockets", CONFIG_BOOLEAN_AUTO); + do_frag_mem = config_get_boolean_ondemand("plugin:proc:/proc/net/sockstat", "ipv4 FRAG memory", CONFIG_BOOLEAN_AUTO); + + update_constants_every = config_get_number("plugin:proc:/proc/net/sockstat", "update constants every", update_constants_every); + update_constants_count = update_constants_every; + + arl_sockets = arl_create("sockstat/sockets", arl_callback_str2kernel_uint_t, 60); + arl_expect(arl_sockets, "used", &sockstat_root.sockets_used); + + arl_tcp = arl_create("sockstat/TCP", arl_callback_str2kernel_uint_t, 60); + arl_expect(arl_tcp, "inuse", &sockstat_root.tcp_inuse); + arl_expect(arl_tcp, "orphan", &sockstat_root.tcp_orphan); + arl_expect(arl_tcp, "tw", &sockstat_root.tcp_tw); + arl_expect(arl_tcp, "alloc", &sockstat_root.tcp_alloc); + arl_expect(arl_tcp, "mem", &sockstat_root.tcp_mem); + + arl_udp = arl_create("sockstat/UDP", arl_callback_str2kernel_uint_t, 60); + arl_expect(arl_udp, "inuse", &sockstat_root.udp_inuse); + arl_expect(arl_udp, "mem", &sockstat_root.udp_mem); + + arl_udplite = arl_create("sockstat/UDPLITE", arl_callback_str2kernel_uint_t, 60); + arl_expect(arl_udplite, "inuse", &sockstat_root.udplite_inuse); + + arl_raw = arl_create("sockstat/RAW", arl_callback_str2kernel_uint_t, 60); + arl_expect(arl_raw, "inuse", &sockstat_root.raw_inuse); + + arl_frag = arl_create("sockstat/FRAG", arl_callback_str2kernel_uint_t, 60); + arl_expect(arl_frag, 
"inuse", &sockstat_root.frag_inuse); + arl_expect(arl_frag, "memory", &sockstat_root.frag_memory); + + hash_sockets = simple_hash("sockets"); + hash_tcp = simple_hash("TCP"); + hash_udp = simple_hash("UDP"); + hash_udplite = simple_hash("UDPLITE"); + hash_raw = simple_hash("RAW"); + hash_frag = simple_hash("FRAG"); + + keys[0] = "sockets"; hashes[0] = hash_sockets; bases[0] = arl_sockets; + keys[1] = "TCP"; hashes[1] = hash_tcp; bases[1] = arl_tcp; + keys[2] = "UDP"; hashes[2] = hash_udp; bases[2] = arl_udp; + keys[3] = "UDPLITE"; hashes[3] = hash_udplite; bases[3] = arl_udplite; + keys[4] = "RAW"; hashes[4] = hash_raw; bases[4] = arl_raw; + keys[5] = "FRAG"; hashes[5] = hash_frag; bases[5] = arl_frag; + keys[6] = NULL; // terminator + } + + update_constants_count += update_every; + if(unlikely(update_constants_count > update_constants_every)) { + read_tcp_max_orphans(); + read_tcp_mem(); + update_constants_count = 0; + } + + if(unlikely(!ff)) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/net/sockstat"); + ff = procfile_open(config_get("plugin:proc:/proc/net/sockstat", "filename to monitor", filename), " \t:", PROCFILE_FLAG_DEFAULT); + if(unlikely(!ff)) return 1; + } + + ff = procfile_readall(ff); + if(unlikely(!ff)) return 0; // we return 0, so that we will retry to open it next time + + size_t lines = procfile_lines(ff), l; + + for(l = 0; l < lines ;l++) { + size_t words = procfile_linewords(ff, l); + char *key = procfile_lineword(ff, l, 0); + uint32_t hash = simple_hash(key); + + int k; + for(k = 0; keys[k] ; k++) { + if(unlikely(hash == hashes[k] && strcmp(key, keys[k]) == 0)) { + // fprintf(stderr, "KEY: '%s', l=%zu, w=1, words=%zu\n", key, l, words); + ARL_BASE *arl = bases[k]; + arl_begin(arl); + size_t w = 1; + + while(w + 1 < words) { + char *name = procfile_lineword(ff, l, w); w++; + char *value = procfile_lineword(ff, l, w); w++; + // fprintf(stderr, " > NAME '%s', VALUE '%s', 
l=%zu, w=%zu, words=%zu\n", name, value, l, w, words); + if(unlikely(arl_check(arl, name, value) != 0)) + break; + } + + break; + } + } + } + + // ------------------------------------------------------------------------ + + if(do_sockets == CONFIG_BOOLEAN_YES || (do_sockets == CONFIG_BOOLEAN_AUTO && + (sockstat_root.sockets_used || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_sockets = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + static RRDDIM *rd_used = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + "ip" + , "sockstat_sockets" + , NULL + , "sockets" + , NULL + , "Sockets used for all address families" + , "sockets" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NET_SOCKSTAT_NAME + , NETDATA_CHART_PRIO_IP_SOCKETS + , update_every + , RRDSET_TYPE_LINE + ); + + rd_used = rrddim_add(st, "used", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st, rd_used, (collected_number)sockstat_root.sockets_used); + rrdset_done(st); + } + + // ------------------------------------------------------------------------ + + if(do_tcp_sockets == CONFIG_BOOLEAN_YES || (do_tcp_sockets == CONFIG_BOOLEAN_AUTO && + (sockstat_root.tcp_inuse || + sockstat_root.tcp_orphan || + sockstat_root.tcp_tw || + sockstat_root.tcp_alloc || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_tcp_sockets = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + static RRDDIM *rd_inuse = NULL, + *rd_orphan = NULL, + *rd_timewait = NULL, + *rd_alloc = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + "ipv4" + , "sockstat_tcp_sockets" + , NULL + , "tcp" + , NULL + , "TCP Sockets" + , "sockets" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NET_SOCKSTAT_NAME + , NETDATA_CHART_PRIO_IPV4_TCP_SOCKETS + , update_every + , RRDSET_TYPE_LINE + ); + + rd_alloc = rrddim_add(st, "alloc", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + rd_orphan = rrddim_add(st, "orphan", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + rd_inuse = rrddim_add(st, "inuse", NULL, 1, 1, 
RRD_ALGORITHM_ABSOLUTE); + rd_timewait = rrddim_add(st, "timewait", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st, rd_inuse, (collected_number)sockstat_root.tcp_inuse); + rrddim_set_by_pointer(st, rd_orphan, (collected_number)sockstat_root.tcp_orphan); + rrddim_set_by_pointer(st, rd_timewait, (collected_number)sockstat_root.tcp_tw); + rrddim_set_by_pointer(st, rd_alloc, (collected_number)sockstat_root.tcp_alloc); + rrdset_done(st); + } + + // ------------------------------------------------------------------------ + + if(do_tcp_mem == CONFIG_BOOLEAN_YES || (do_tcp_mem == CONFIG_BOOLEAN_AUTO && + (sockstat_root.tcp_mem || netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_tcp_mem = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + static RRDDIM *rd_mem = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + "ipv4" + , "sockstat_tcp_mem" + , NULL + , "tcp" + , NULL + , "TCP Sockets Memory" + , "KiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NET_SOCKSTAT_NAME + , NETDATA_CHART_PRIO_IPV4_TCP_SOCKETS_MEM + , update_every + , RRDSET_TYPE_AREA + ); + + rd_mem = rrddim_add(st, "mem", NULL, sysconf(_SC_PAGESIZE), 1024, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st, rd_mem, (collected_number)sockstat_root.tcp_mem); + rrdset_done(st); + } + + // ------------------------------------------------------------------------ + + if(do_udp_sockets == CONFIG_BOOLEAN_YES || (do_udp_sockets == CONFIG_BOOLEAN_AUTO && + (sockstat_root.udp_inuse || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_udp_sockets = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + static RRDDIM *rd_inuse = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + "ipv4" + , "sockstat_udp_sockets" + , NULL + , "udp" + , NULL + , "IPv4 UDP Sockets" + , "sockets" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NET_SOCKSTAT_NAME + , NETDATA_CHART_PRIO_IPV4_UDP_SOCKETS + , update_every + , RRDSET_TYPE_LINE + ); + + rd_inuse = rrddim_add(st, 
"inuse", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st, rd_inuse, (collected_number)sockstat_root.udp_inuse); + rrdset_done(st); + } + + // ------------------------------------------------------------------------ + + if(do_udp_mem == CONFIG_BOOLEAN_YES || (do_udp_mem == CONFIG_BOOLEAN_AUTO && + (sockstat_root.udp_mem || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_udp_mem = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + static RRDDIM *rd_mem = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + "ipv4" + , "sockstat_udp_mem" + , NULL + , "udp" + , NULL + , "IPv4 UDP Sockets Memory" + , "KiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NET_SOCKSTAT_NAME + , NETDATA_CHART_PRIO_IPV4_UDP_SOCKETS_MEM + , update_every + , RRDSET_TYPE_AREA + ); + + rd_mem = rrddim_add(st, "mem", NULL, sysconf(_SC_PAGESIZE), 1024, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st, rd_mem, (collected_number)sockstat_root.udp_mem); + rrdset_done(st); + } + + // ------------------------------------------------------------------------ + + if(do_udplite_sockets == CONFIG_BOOLEAN_YES || (do_udplite_sockets == CONFIG_BOOLEAN_AUTO && + (sockstat_root.udplite_inuse || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_udplite_sockets = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + static RRDDIM *rd_inuse = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + "ipv4" + , "sockstat_udplite_sockets" + , NULL + , "udplite" + , NULL + , "IPv4 UDPLITE Sockets" + , "sockets" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NET_SOCKSTAT_NAME + , NETDATA_CHART_PRIO_IPV4_UDPLITE_SOCKETS + , update_every + , RRDSET_TYPE_LINE + ); + + rd_inuse = rrddim_add(st, "inuse", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st, rd_inuse, (collected_number)sockstat_root.udplite_inuse); + rrdset_done(st); + } + + // ------------------------------------------------------------------------ + + if(do_raw_sockets == 
CONFIG_BOOLEAN_YES || (do_raw_sockets == CONFIG_BOOLEAN_AUTO && + (sockstat_root.raw_inuse || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_raw_sockets = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + static RRDDIM *rd_inuse = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + "ipv4" + , "sockstat_raw_sockets" + , NULL + , "raw" + , NULL + , "IPv4 RAW Sockets" + , "sockets" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NET_SOCKSTAT_NAME + , NETDATA_CHART_PRIO_IPV4_RAW + , update_every + , RRDSET_TYPE_LINE + ); + + rd_inuse = rrddim_add(st, "inuse", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st, rd_inuse, (collected_number)sockstat_root.raw_inuse); + rrdset_done(st); + } + + // ------------------------------------------------------------------------ + + if(do_frag_sockets == CONFIG_BOOLEAN_YES || (do_frag_sockets == CONFIG_BOOLEAN_AUTO && + (sockstat_root.frag_inuse || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_frag_sockets = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + static RRDDIM *rd_inuse = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + "ipv4" + , "sockstat_frag_sockets" + , NULL + , "fragments" + , NULL + , "IPv4 FRAG Sockets" + , "fragments" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NET_SOCKSTAT_NAME + , NETDATA_CHART_PRIO_IPV4_FRAGMENTS_SOCKETS + , update_every + , RRDSET_TYPE_LINE + ); + + rd_inuse = rrddim_add(st, "inuse", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st, rd_inuse, (collected_number)sockstat_root.frag_inuse); + rrdset_done(st); + } + + // ------------------------------------------------------------------------ + + if(do_frag_mem == CONFIG_BOOLEAN_YES || (do_frag_mem == CONFIG_BOOLEAN_AUTO && + (sockstat_root.frag_memory || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_frag_mem = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + static RRDDIM *rd_mem = NULL; + + if(unlikely(!st)) { + st = 
rrdset_create_localhost( + "ipv4" + , "sockstat_frag_mem" + , NULL + , "fragments" + , NULL + , "IPv4 FRAG Sockets Memory" + , "KiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NET_SOCKSTAT_NAME + , NETDATA_CHART_PRIO_IPV4_FRAGMENTS_SOCKETS_MEM + , update_every + , RRDSET_TYPE_AREA + ); + + rd_mem = rrddim_add(st, "mem", NULL, 1, 1024, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st, rd_mem, (collected_number)sockstat_root.frag_memory); + rrdset_done(st); + } + + return 0; +} + diff --git a/src/collectors/proc.plugin/proc_net_sockstat6.c b/src/collectors/proc.plugin/proc_net_sockstat6.c new file mode 100644 index 000000000..16e0248af --- /dev/null +++ b/src/collectors/proc.plugin/proc_net_sockstat6.c @@ -0,0 +1,278 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +#define PLUGIN_PROC_MODULE_NET_SOCKSTAT6_NAME "/proc/net/sockstat6" + +static struct proc_net_sockstat6 { + kernel_uint_t tcp6_inuse; + kernel_uint_t udp6_inuse; + kernel_uint_t udplite6_inuse; + kernel_uint_t raw6_inuse; + kernel_uint_t frag6_inuse; +} sockstat6_root = { 0 }; + +int do_proc_net_sockstat6(int update_every, usec_t dt) { + (void)dt; + + static procfile *ff = NULL; + + static uint32_t hash_raw = 0, + hash_frag = 0, + hash_tcp = 0, + hash_udp = 0, + hash_udplite = 0; + + static ARL_BASE *arl_tcp = NULL; + static ARL_BASE *arl_udp = NULL; + static ARL_BASE *arl_udplite = NULL; + static ARL_BASE *arl_raw = NULL; + static ARL_BASE *arl_frag = NULL; + + static int do_tcp_sockets = -1, do_udp_sockets = -1, do_udplite_sockets = -1, do_raw_sockets = -1, do_frag_sockets = -1; + + static char *keys[6] = { NULL }; + static uint32_t hashes[6] = { 0 }; + static ARL_BASE *bases[6] = { NULL }; + + if(unlikely(!arl_tcp)) { + do_tcp_sockets = config_get_boolean_ondemand("plugin:proc:/proc/net/sockstat6", "ipv6 TCP sockets", CONFIG_BOOLEAN_AUTO); + do_udp_sockets = config_get_boolean_ondemand("plugin:proc:/proc/net/sockstat6", "ipv6 UDP sockets", CONFIG_BOOLEAN_AUTO); + 
do_udplite_sockets = config_get_boolean_ondemand("plugin:proc:/proc/net/sockstat6", "ipv6 UDPLITE sockets", CONFIG_BOOLEAN_AUTO); + do_raw_sockets = config_get_boolean_ondemand("plugin:proc:/proc/net/sockstat6", "ipv6 RAW sockets", CONFIG_BOOLEAN_AUTO); + do_frag_sockets = config_get_boolean_ondemand("plugin:proc:/proc/net/sockstat6", "ipv6 FRAG sockets", CONFIG_BOOLEAN_AUTO); + + arl_tcp = arl_create("sockstat6/TCP6", arl_callback_str2kernel_uint_t, 60); + arl_expect(arl_tcp, "inuse", &sockstat6_root.tcp6_inuse); + + arl_udp = arl_create("sockstat6/UDP6", arl_callback_str2kernel_uint_t, 60); + arl_expect(arl_udp, "inuse", &sockstat6_root.udp6_inuse); + + arl_udplite = arl_create("sockstat6/UDPLITE6", arl_callback_str2kernel_uint_t, 60); + arl_expect(arl_udplite, "inuse", &sockstat6_root.udplite6_inuse); + + arl_raw = arl_create("sockstat6/RAW6", arl_callback_str2kernel_uint_t, 60); + arl_expect(arl_raw, "inuse", &sockstat6_root.raw6_inuse); + + arl_frag = arl_create("sockstat6/FRAG6", arl_callback_str2kernel_uint_t, 60); + arl_expect(arl_frag, "inuse", &sockstat6_root.frag6_inuse); + + hash_tcp = simple_hash("TCP6"); + hash_udp = simple_hash("UDP6"); + hash_udplite = simple_hash("UDPLITE6"); + hash_raw = simple_hash("RAW6"); + hash_frag = simple_hash("FRAG6"); + + keys[0] = "TCP6"; hashes[0] = hash_tcp; bases[0] = arl_tcp; + keys[1] = "UDP6"; hashes[1] = hash_udp; bases[1] = arl_udp; + keys[2] = "UDPLITE6"; hashes[2] = hash_udplite; bases[2] = arl_udplite; + keys[3] = "RAW6"; hashes[3] = hash_raw; bases[3] = arl_raw; + keys[4] = "FRAG6"; hashes[4] = hash_frag; bases[4] = arl_frag; + keys[5] = NULL; // terminator + } + + if(unlikely(!ff)) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/net/sockstat6"); + ff = procfile_open(config_get("plugin:proc:/proc/net/sockstat6", "filename to monitor", filename), " \t:", PROCFILE_FLAG_DEFAULT); + if(unlikely(!ff)) return 1; + } + + ff = 
procfile_readall(ff); + if(unlikely(!ff)) return 0; // we return 0, so that we will retry to open it next time + + size_t lines = procfile_lines(ff), l; + + for(l = 0; l < lines ;l++) { + size_t words = procfile_linewords(ff, l); + char *key = procfile_lineword(ff, l, 0); + uint32_t hash = simple_hash(key); + + int k; + for(k = 0; keys[k] ; k++) { + if(unlikely(hash == hashes[k] && strcmp(key, keys[k]) == 0)) { + // fprintf(stderr, "KEY: '%s', l=%zu, w=1, words=%zu\n", key, l, words); + ARL_BASE *arl = bases[k]; + arl_begin(arl); + size_t w = 1; + + while(w + 1 < words) { + char *name = procfile_lineword(ff, l, w); w++; + char *value = procfile_lineword(ff, l, w); w++; + // fprintf(stderr, " > NAME '%s', VALUE '%s', l=%zu, w=%zu, words=%zu\n", name, value, l, w, words); + if(unlikely(arl_check(arl, name, value) != 0)) + break; + } + + break; + } + } + } + + // ------------------------------------------------------------------------ + + if(do_tcp_sockets == CONFIG_BOOLEAN_YES || (do_tcp_sockets == CONFIG_BOOLEAN_AUTO && + (sockstat6_root.tcp6_inuse || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_tcp_sockets = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + static RRDDIM *rd_inuse = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + "ipv6" + , "sockstat6_tcp_sockets" + , NULL + , "tcp6" + , NULL + , "IPv6 TCP Sockets" + , "sockets" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NET_SOCKSTAT6_NAME + , NETDATA_CHART_PRIO_IPV6_TCP_SOCKETS + , update_every + , RRDSET_TYPE_LINE + ); + + rd_inuse = rrddim_add(st, "inuse", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st, rd_inuse, (collected_number)sockstat6_root.tcp6_inuse); + rrdset_done(st); + } + + // ------------------------------------------------------------------------ + + if(do_udp_sockets == CONFIG_BOOLEAN_YES || (do_udp_sockets == CONFIG_BOOLEAN_AUTO && + (sockstat6_root.udp6_inuse || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_udp_sockets = 
CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + static RRDDIM *rd_inuse = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + "ipv6" + , "sockstat6_udp_sockets" + , NULL + , "udp6" + , NULL + , "IPv6 UDP Sockets" + , "sockets" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NET_SOCKSTAT6_NAME + , NETDATA_CHART_PRIO_IPV6_UDP_SOCKETS + , update_every + , RRDSET_TYPE_LINE + ); + + rd_inuse = rrddim_add(st, "inuse", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st, rd_inuse, (collected_number)sockstat6_root.udp6_inuse); + rrdset_done(st); + } + + // ------------------------------------------------------------------------ + + if(do_udplite_sockets == CONFIG_BOOLEAN_YES || (do_udplite_sockets == CONFIG_BOOLEAN_AUTO && + (sockstat6_root.udplite6_inuse || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_udplite_sockets = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + static RRDDIM *rd_inuse = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + "ipv6" + , "sockstat6_udplite_sockets" + , NULL + , "udplite6" + , NULL + , "IPv6 UDPLITE Sockets" + , "sockets" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NET_SOCKSTAT6_NAME + , NETDATA_CHART_PRIO_IPV6_UDPLITE_SOCKETS + , update_every + , RRDSET_TYPE_LINE + ); + + rd_inuse = rrddim_add(st, "inuse", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st, rd_inuse, (collected_number)sockstat6_root.udplite6_inuse); + rrdset_done(st); + } + + // ------------------------------------------------------------------------ + + if(do_raw_sockets == CONFIG_BOOLEAN_YES || (do_raw_sockets == CONFIG_BOOLEAN_AUTO && + (sockstat6_root.raw6_inuse || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_raw_sockets = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + static RRDDIM *rd_inuse = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + "ipv6" + , "sockstat6_raw_sockets" + , NULL + , "raw6" + , NULL + , "IPv6 RAW Sockets" + , "sockets" + , 
PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NET_SOCKSTAT6_NAME + , NETDATA_CHART_PRIO_IPV6_RAW_SOCKETS + , update_every + , RRDSET_TYPE_LINE + ); + + rd_inuse = rrddim_add(st, "inuse", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st, rd_inuse, (collected_number)sockstat6_root.raw6_inuse); + rrdset_done(st); + } + + // ------------------------------------------------------------------------ + + if(do_frag_sockets == CONFIG_BOOLEAN_YES || (do_frag_sockets == CONFIG_BOOLEAN_AUTO && + (sockstat6_root.frag6_inuse || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_frag_sockets = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + static RRDDIM *rd_inuse = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + "ipv6" + , "sockstat6_frag_sockets" + , NULL + , "fragments6" + , NULL + , "IPv6 FRAG Sockets" + , "fragments" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NET_SOCKSTAT6_NAME + , NETDATA_CHART_PRIO_IPV6_FRAGMENTS_SOCKETS + , update_every + , RRDSET_TYPE_LINE + ); + + rd_inuse = rrddim_add(st, "inuse", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st, rd_inuse, (collected_number)sockstat6_root.frag6_inuse); + rrdset_done(st); + } + + return 0; +} diff --git a/src/collectors/proc.plugin/proc_net_softnet_stat.c b/src/collectors/proc.plugin/proc_net_softnet_stat.c new file mode 100644 index 000000000..a225a9f0d --- /dev/null +++ b/src/collectors/proc.plugin/proc_net_softnet_stat.c @@ -0,0 +1,152 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +#define PLUGIN_PROC_MODULE_NET_SOFTNET_NAME "/proc/net/softnet_stat" + +static inline char *softnet_column_name(size_t column) { + switch(column) { + // https://github.com/torvalds/linux/blob/a7fd20d1c476af4563e66865213474a2f9f473a4/net/core/net-procfs.c#L161-L166 + case 0: return "processed"; + case 1: return "dropped"; + case 2: return "squeezed"; + case 9: return "received_rps"; + case 10: return "flow_limit_count"; + default: return 
NULL; + } +} + +int do_proc_net_softnet_stat(int update_every, usec_t dt) { + (void)dt; + + static procfile *ff = NULL; + static int do_per_core = -1; + static size_t allocated_lines = 0, allocated_columns = 0; + static uint32_t *data = NULL; + + if (unlikely(do_per_core == -1)) { + do_per_core = + config_get_boolean("plugin:proc:/proc/net/softnet_stat", "softnet_stat per core", CONFIG_BOOLEAN_NO); + } + + if(unlikely(!ff)) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/net/softnet_stat"); + ff = procfile_open(config_get("plugin:proc:/proc/net/softnet_stat", "filename to monitor", filename), " \t", PROCFILE_FLAG_DEFAULT); + if(unlikely(!ff)) return 1; + } + + ff = procfile_readall(ff); + if(unlikely(!ff)) return 0; // we return 0, so that we will retry to open it next time + + size_t lines = procfile_lines(ff), l; + size_t words = procfile_linewords(ff, 0), w; + + if(unlikely(!lines || !words)) { + collector_error("Cannot read /proc/net/softnet_stat, %zu lines and %zu columns reported.", lines, words); + return 1; + } + + if(unlikely(lines > 200)) lines = 200; + if(unlikely(words > 50)) words = 50; + + if(unlikely(!data || lines > allocated_lines || words > allocated_columns)) { + freez(data); + allocated_lines = lines; + allocated_columns = words; + data = mallocz((allocated_lines + 1) * allocated_columns * sizeof(uint32_t)); + } + + // initialize to zero + memset(data, 0, (allocated_lines + 1) * allocated_columns * sizeof(uint32_t)); + + // parse the values + for(l = 0; l < lines ;l++) { + words = procfile_linewords(ff, l); + if(unlikely(!words)) continue; + + if(unlikely(words > allocated_columns)) + words = allocated_columns; + + for(w = 0; w < words ; w++) { + if(unlikely(softnet_column_name(w))) { + uint32_t t = (uint32_t)strtoul(procfile_lineword(ff, l, w), NULL, 16); + data[w] += t; + data[((l + 1) * allocated_columns) + w] = t; + } + } + } + + if(unlikely(data[(lines * 
allocated_columns)] == 0)) + lines--; + + RRDSET *st; + + // -------------------------------------------------------------------- + + st = rrdset_find_active_bytype_localhost("system", "softnet_stat"); + if(unlikely(!st)) { + st = rrdset_create_localhost( + "system" + , "softnet_stat" + , NULL + , "softnet_stat" + , "system.softnet_stat" + , "System softnet_stat" + , "events/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NET_SOFTNET_NAME + , NETDATA_CHART_PRIO_SYSTEM_SOFTNET_STAT + , update_every + , RRDSET_TYPE_LINE + ); + for(w = 0; w < allocated_columns ;w++) + if(unlikely(softnet_column_name(w))) + rrddim_add(st, softnet_column_name(w), NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + for(w = 0; w < allocated_columns ;w++) + if(unlikely(softnet_column_name(w))) + rrddim_set(st, softnet_column_name(w), data[w]); + + rrdset_done(st); + + if(do_per_core) { + for(l = 0; l < lines ;l++) { + char id[50+1]; + snprintfz(id, sizeof(id) - 1,"cpu%zu_softnet_stat", l); + + st = rrdset_find_active_bytype_localhost("cpu", id); + if(unlikely(!st)) { + char title[100+1]; + snprintfz(title, sizeof(title) - 1, "CPU softnet_stat"); + + st = rrdset_create_localhost( + "cpu" + , id + , NULL + , "softnet_stat" + , "cpu.softnet_stat" + , title + , "events/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_NET_SOFTNET_NAME + , NETDATA_CHART_PRIO_SOFTNET_PER_CORE + l + , update_every + , RRDSET_TYPE_LINE + ); + for(w = 0; w < allocated_columns ;w++) + if(unlikely(softnet_column_name(w))) + rrddim_add(st, softnet_column_name(w), NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + for(w = 0; w < allocated_columns ;w++) + if(unlikely(softnet_column_name(w))) + rrddim_set(st, softnet_column_name(w), data[((l + 1) * allocated_columns) + w]); + + rrdset_done(st); + } + } + + return 0; +} diff --git a/src/collectors/proc.plugin/proc_net_stat_conntrack.c b/src/collectors/proc.plugin/proc_net_stat_conntrack.c new file mode 100644 index 000000000..6951cba79 --- /dev/null +++ 
b/src/collectors/proc.plugin/proc_net_stat_conntrack.c @@ -0,0 +1,345 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +#define RRD_TYPE_NET_STAT_NETFILTER "netfilter" +#define RRD_TYPE_NET_STAT_CONNTRACK "conntrack" +#define PLUGIN_PROC_MODULE_CONNTRACK_NAME "/proc/net/stat/nf_conntrack" + +int do_proc_net_stat_conntrack(int update_every, usec_t dt) { + static procfile *ff = NULL; + static int do_sockets = -1, do_new = -1, do_changes = -1, do_expect = -1, do_search = -1, do_errors = -1; + static usec_t get_max_every = 10 * USEC_PER_SEC, usec_since_last_max = 0; + static int read_full = 1; + static char *nf_conntrack_filename, *nf_conntrack_count_filename, *nf_conntrack_max_filename; + static const RRDVAR_ACQUIRED *rrdvar_max = NULL; + + unsigned long long aentries = 0, asearched = 0, afound = 0, anew = 0, ainvalid = 0, aignore = 0, adelete = 0, adelete_list = 0, + ainsert = 0, ainsert_failed = 0, adrop = 0, aearly_drop = 0, aicmp_error = 0, aexpect_new = 0, aexpect_create = 0, aexpect_delete = 0, asearch_restart = 0; + + if(unlikely(do_sockets == -1)) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/net/stat/nf_conntrack"); + nf_conntrack_filename = config_get("plugin:proc:/proc/net/stat/nf_conntrack", "filename to monitor", filename); + + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/sys/net/netfilter/nf_conntrack_max"); + nf_conntrack_max_filename = config_get("plugin:proc:/proc/sys/net/netfilter/nf_conntrack_max", "filename to monitor", filename); + usec_since_last_max = get_max_every = config_get_number("plugin:proc:/proc/sys/net/netfilter/nf_conntrack_max", "read every seconds", 10) * USEC_PER_SEC; + + read_full = 1; + ff = procfile_open(nf_conntrack_filename, " \t:", PROCFILE_FLAG_DEFAULT); + if(!ff) read_full = 0; + + do_new = config_get_boolean("plugin:proc:/proc/net/stat/nf_conntrack", "netfilter new connections", 
read_full); + do_changes = config_get_boolean("plugin:proc:/proc/net/stat/nf_conntrack", "netfilter connection changes", read_full); + do_expect = config_get_boolean("plugin:proc:/proc/net/stat/nf_conntrack", "netfilter connection expectations", read_full); + do_search = config_get_boolean("plugin:proc:/proc/net/stat/nf_conntrack", "netfilter connection searches", read_full); + do_errors = config_get_boolean("plugin:proc:/proc/net/stat/nf_conntrack", "netfilter errors", read_full); + + do_sockets = 1; + if(!read_full) { + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/sys/net/netfilter/nf_conntrack_count"); + nf_conntrack_count_filename = config_get("plugin:proc:/proc/sys/net/netfilter/nf_conntrack_count", "filename to monitor", filename); + + if(read_single_number_file(nf_conntrack_count_filename, &aentries)) + do_sockets = 0; + } + + do_sockets = config_get_boolean("plugin:proc:/proc/net/stat/nf_conntrack", "netfilter connections", do_sockets); + + if(!do_sockets && !read_full) + return 1; + + rrdvar_max = rrdvar_host_variable_add_and_acquire(localhost, "netfilter_conntrack_max"); + } + + if(likely(read_full)) { + if(unlikely(!ff)) { + ff = procfile_open(nf_conntrack_filename, " \t:", PROCFILE_FLAG_DEFAULT); + if(unlikely(!ff)) + return 0; // we return 0, so that we will retry to open it next time + } + + ff = procfile_readall(ff); + if(unlikely(!ff)) + return 0; // we return 0, so that we will retry to open it next time + + size_t lines = procfile_lines(ff), l; + + for(l = 1; l < lines ;l++) { + size_t words = procfile_linewords(ff, l); + if(unlikely(words < 17)) { + if(unlikely(words)) collector_error("Cannot read /proc/net/stat/nf_conntrack line. 
Expected 17 params, read %zu.", words); + continue; + } + + unsigned long long tentries = 0, tsearched = 0, tfound = 0, tnew = 0, tinvalid = 0, tignore = 0, tdelete = 0, tdelete_list = 0, tinsert = 0, tinsert_failed = 0, tdrop = 0, tearly_drop = 0, ticmp_error = 0, texpect_new = 0, texpect_create = 0, texpect_delete = 0, tsearch_restart = 0; + + tentries = strtoull(procfile_lineword(ff, l, 0), NULL, 16); + tsearched = strtoull(procfile_lineword(ff, l, 1), NULL, 16); + tfound = strtoull(procfile_lineword(ff, l, 2), NULL, 16); + tnew = strtoull(procfile_lineword(ff, l, 3), NULL, 16); + tinvalid = strtoull(procfile_lineword(ff, l, 4), NULL, 16); + tignore = strtoull(procfile_lineword(ff, l, 5), NULL, 16); + tdelete = strtoull(procfile_lineword(ff, l, 6), NULL, 16); + tdelete_list = strtoull(procfile_lineword(ff, l, 7), NULL, 16); + tinsert = strtoull(procfile_lineword(ff, l, 8), NULL, 16); + tinsert_failed = strtoull(procfile_lineword(ff, l, 9), NULL, 16); + tdrop = strtoull(procfile_lineword(ff, l, 10), NULL, 16); + tearly_drop = strtoull(procfile_lineword(ff, l, 11), NULL, 16); + ticmp_error = strtoull(procfile_lineword(ff, l, 12), NULL, 16); + texpect_new = strtoull(procfile_lineword(ff, l, 13), NULL, 16); + texpect_create = strtoull(procfile_lineword(ff, l, 14), NULL, 16); + texpect_delete = strtoull(procfile_lineword(ff, l, 15), NULL, 16); + tsearch_restart = strtoull(procfile_lineword(ff, l, 16), NULL, 16); + + if(unlikely(!aentries)) aentries = tentries; + + // sum all the cpus together + asearched += tsearched; // conntrack.search + afound += tfound; // conntrack.search + anew += tnew; // conntrack.new + ainvalid += tinvalid; // conntrack.new + aignore += tignore; // conntrack.new + adelete += tdelete; // conntrack.changes + adelete_list += tdelete_list; // conntrack.changes + ainsert += tinsert; // conntrack.changes + ainsert_failed += tinsert_failed; // conntrack.errors + adrop += tdrop; // conntrack.errors + aearly_drop += tearly_drop; // conntrack.errors + 
aicmp_error += ticmp_error; // conntrack.errors + aexpect_new += texpect_new; // conntrack.expect + aexpect_create += texpect_create; // conntrack.expect + aexpect_delete += texpect_delete; // conntrack.expect + asearch_restart += tsearch_restart; // conntrack.search + } + } + else { + if(unlikely(read_single_number_file(nf_conntrack_count_filename, &aentries))) + return 0; // we return 0, so that we will retry to open it next time + } + + usec_since_last_max += dt; + if(unlikely(rrdvar_max && usec_since_last_max >= get_max_every)) { + usec_since_last_max = 0; + + unsigned long long max; + if(likely(!read_single_number_file(nf_conntrack_max_filename, &max))) + rrdvar_host_variable_set(localhost, rrdvar_max, max); + } + + // -------------------------------------------------------------------- + + if(do_sockets) { + static RRDSET *st = NULL; + static RRDDIM *rd_connections = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + RRD_TYPE_NET_STAT_NETFILTER + , RRD_TYPE_NET_STAT_CONNTRACK "_sockets" + , NULL + , RRD_TYPE_NET_STAT_CONNTRACK + , NULL + , "Connection Tracker Connections" + , "active connections" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_CONNTRACK_NAME + , NETDATA_CHART_PRIO_NETFILTER_SOCKETS + , update_every + , RRDSET_TYPE_LINE + ); + + rd_connections = rrddim_add(st, "connections", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st, rd_connections, aentries); + rrdset_done(st); + } + + // -------------------------------------------------------------------- + + if(do_new) { + static RRDSET *st = NULL; + static RRDDIM + *rd_new = NULL, + *rd_ignore = NULL, + *rd_invalid = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + RRD_TYPE_NET_STAT_NETFILTER + , RRD_TYPE_NET_STAT_CONNTRACK "_new" + , NULL + , RRD_TYPE_NET_STAT_CONNTRACK + , NULL + , "Connection Tracker New Connections" + , "connections/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_CONNTRACK_NAME + , NETDATA_CHART_PRIO_NETFILTER_NEW + , update_every + , 
RRDSET_TYPE_LINE + ); + + rd_new = rrddim_add(st, "new", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_ignore = rrddim_add(st, "ignore", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_invalid = rrddim_add(st, "invalid", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st, rd_new, anew); + rrddim_set_by_pointer(st, rd_ignore, aignore); + rrddim_set_by_pointer(st, rd_invalid, ainvalid); + rrdset_done(st); + } + + // -------------------------------------------------------------------- + + if(do_changes) { + static RRDSET *st = NULL; + static RRDDIM + *rd_inserted = NULL, + *rd_deleted = NULL, + *rd_delete_list = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + RRD_TYPE_NET_STAT_NETFILTER + , RRD_TYPE_NET_STAT_CONNTRACK "_changes" + , NULL + , RRD_TYPE_NET_STAT_CONNTRACK + , NULL + , "Connection Tracker Changes" + , "changes/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_CONNTRACK_NAME + , NETDATA_CHART_PRIO_NETFILTER_CHANGES + , update_every + , RRDSET_TYPE_LINE + ); + rrdset_flag_set(st, RRDSET_FLAG_DETAIL); + + rd_inserted = rrddim_add(st, "inserted", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_deleted = rrddim_add(st, "deleted", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_delete_list = rrddim_add(st, "delete_list", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st, rd_inserted, ainsert); + rrddim_set_by_pointer(st, rd_deleted, adelete); + rrddim_set_by_pointer(st, rd_delete_list, adelete_list); + rrdset_done(st); + } + + // -------------------------------------------------------------------- + + if(do_expect) { + static RRDSET *st = NULL; + static RRDDIM *rd_created = NULL, + *rd_deleted = NULL, + *rd_new = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + RRD_TYPE_NET_STAT_NETFILTER + , RRD_TYPE_NET_STAT_CONNTRACK "_expect" + , NULL + , RRD_TYPE_NET_STAT_CONNTRACK + , NULL + , "Connection Tracker Expectations" + , "expectations/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_CONNTRACK_NAME 
+ , NETDATA_CHART_PRIO_NETFILTER_EXPECT + , update_every + , RRDSET_TYPE_LINE + ); + rrdset_flag_set(st, RRDSET_FLAG_DETAIL); + + rd_created = rrddim_add(st, "created", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_deleted = rrddim_add(st, "deleted", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_new = rrddim_add(st, "new", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st, rd_created, aexpect_create); + rrddim_set_by_pointer(st, rd_deleted, aexpect_delete); + rrddim_set_by_pointer(st, rd_new, aexpect_new); + rrdset_done(st); + } + + // -------------------------------------------------------------------- + + if(do_search) { + static RRDSET *st = NULL; + static RRDDIM *rd_searched = NULL, + *rd_restarted = NULL, + *rd_found = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + RRD_TYPE_NET_STAT_NETFILTER + , RRD_TYPE_NET_STAT_CONNTRACK "_search" + , NULL + , RRD_TYPE_NET_STAT_CONNTRACK + , NULL + , "Connection Tracker Searches" + , "searches/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_CONNTRACK_NAME + , NETDATA_CHART_PRIO_NETFILTER_SEARCH + , update_every + , RRDSET_TYPE_LINE + ); + rrdset_flag_set(st, RRDSET_FLAG_DETAIL); + + rd_searched = rrddim_add(st, "searched", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_restarted = rrddim_add(st, "restarted", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_found = rrddim_add(st, "found", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st, rd_searched, asearched); + rrddim_set_by_pointer(st, rd_restarted, asearch_restart); + rrddim_set_by_pointer(st, rd_found, afound); + rrdset_done(st); + } + + // -------------------------------------------------------------------- + + if(do_errors) { + static RRDSET *st = NULL; + static RRDDIM *rd_icmp_error = NULL, + *rd_insert_failed = NULL, + *rd_drop = NULL, + *rd_early_drop = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + RRD_TYPE_NET_STAT_NETFILTER + , RRD_TYPE_NET_STAT_CONNTRACK "_errors" + , NULL + , 
RRD_TYPE_NET_STAT_CONNTRACK + , NULL + , "Connection Tracker Errors" + , "events/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_CONNTRACK_NAME + , NETDATA_CHART_PRIO_NETFILTER_ERRORS + , update_every + , RRDSET_TYPE_LINE + ); + rrdset_flag_set(st, RRDSET_FLAG_DETAIL); + + rd_icmp_error = rrddim_add(st, "icmp_error", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_insert_failed = rrddim_add(st, "insert_failed", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_drop = rrddim_add(st, "drop", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_early_drop = rrddim_add(st, "early_drop", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st, rd_icmp_error, aicmp_error); + rrddim_set_by_pointer(st, rd_insert_failed, ainsert_failed); + rrddim_set_by_pointer(st, rd_drop, adrop); + rrddim_set_by_pointer(st, rd_early_drop, aearly_drop); + rrdset_done(st); + } + + return 0; +} diff --git a/src/collectors/proc.plugin/proc_net_stat_synproxy.c b/src/collectors/proc.plugin/proc_net_stat_synproxy.c new file mode 100644 index 000000000..e23a0ab7b --- /dev/null +++ b/src/collectors/proc.plugin/proc_net_stat_synproxy.c @@ -0,0 +1,153 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +#define PLUGIN_PROC_MODULE_SYNPROXY_NAME "/proc/net/stat/synproxy" + +#define RRD_TYPE_NET_STAT_NETFILTER "netfilter" +#define RRD_TYPE_NET_STAT_SYNPROXY "synproxy" + +int do_proc_net_stat_synproxy(int update_every, usec_t dt) { + (void)dt; + + static int do_cookies = -1, do_syns = -1, do_reopened = -1; + static procfile *ff = NULL; + + if(unlikely(do_cookies == -1)) { + do_cookies = config_get_boolean_ondemand("plugin:proc:/proc/net/stat/synproxy", "SYNPROXY cookies", CONFIG_BOOLEAN_AUTO); + do_syns = config_get_boolean_ondemand("plugin:proc:/proc/net/stat/synproxy", "SYNPROXY SYN received", CONFIG_BOOLEAN_AUTO); + do_reopened = config_get_boolean_ondemand("plugin:proc:/proc/net/stat/synproxy", "SYNPROXY connections reopened", CONFIG_BOOLEAN_AUTO); + } + + 
if(unlikely(!ff)) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/net/stat/synproxy"); + ff = procfile_open(config_get("plugin:proc:/proc/net/stat/synproxy", "filename to monitor", filename), " \t,:|", PROCFILE_FLAG_DEFAULT); + if(unlikely(!ff)) + return 1; + } + + ff = procfile_readall(ff); + if(unlikely(!ff)) + return 0; // we return 0, so that we will retry to open it next time + + // make sure we have at least 2 lines (the header plus one per-CPU line) + size_t lines = procfile_lines(ff), l; + if(unlikely(lines < 2)) { + collector_error("/proc/net/stat/synproxy has %zu lines, expected no less than 2. Disabling it.", lines); + return 1; + } + + unsigned long long syn_received = 0, cookie_invalid = 0, cookie_valid = 0, cookie_retrans = 0, conn_reopened = 0; + + // synproxy gives its values per CPU + for(l = 1; l < lines ;l++) { + size_t words = procfile_linewords(ff, l); + if(unlikely(words < 6)) + continue; + + syn_received += strtoull(procfile_lineword(ff, l, 1), NULL, 16); + cookie_invalid += strtoull(procfile_lineword(ff, l, 2), NULL, 16); + cookie_valid += strtoull(procfile_lineword(ff, l, 3), NULL, 16); + cookie_retrans += strtoull(procfile_lineword(ff, l, 4), NULL, 16); + conn_reopened += strtoull(procfile_lineword(ff, l, 5), NULL, 16); + } + + unsigned long long events = syn_received + cookie_invalid + cookie_valid + cookie_retrans + conn_reopened; + + // -------------------------------------------------------------------- + + if(do_syns == CONFIG_BOOLEAN_YES || (do_syns == CONFIG_BOOLEAN_AUTO && + (events || netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_syns = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + if(unlikely(!st)) { + st = rrdset_create_localhost( + RRD_TYPE_NET_STAT_NETFILTER + , RRD_TYPE_NET_STAT_SYNPROXY "_syn_received" + , NULL + , RRD_TYPE_NET_STAT_SYNPROXY + , NULL + , "SYNPROXY SYN Packets received" + , "packets/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_SYNPROXY_NAME + , 
NETDATA_CHART_PRIO_SYNPROXY_SYN_RECEIVED + , update_every + , RRDSET_TYPE_LINE + ); + + rrddim_add(st, "received", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set(st, "received", syn_received); + rrdset_done(st); + } + + // -------------------------------------------------------------------- + + if(do_reopened == CONFIG_BOOLEAN_YES || (do_reopened == CONFIG_BOOLEAN_AUTO && + (events || netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_reopened = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + if(unlikely(!st)) { + st = rrdset_create_localhost( + RRD_TYPE_NET_STAT_NETFILTER + , RRD_TYPE_NET_STAT_SYNPROXY "_conn_reopened" + , NULL + , RRD_TYPE_NET_STAT_SYNPROXY + , NULL + , "SYNPROXY Connections Reopened" + , "connections/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_SYNPROXY_NAME + , NETDATA_CHART_PRIO_SYNPROXY_CONN_OPEN + , update_every + , RRDSET_TYPE_LINE + ); + + rrddim_add(st, "reopened", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set(st, "reopened", conn_reopened); + rrdset_done(st); + } + + // -------------------------------------------------------------------- + + if(do_cookies == CONFIG_BOOLEAN_YES || (do_cookies == CONFIG_BOOLEAN_AUTO && + (events || netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_cookies = CONFIG_BOOLEAN_YES; + + static RRDSET *st = NULL; + if(unlikely(!st)) { + st = rrdset_create_localhost( + RRD_TYPE_NET_STAT_NETFILTER + , RRD_TYPE_NET_STAT_SYNPROXY "_cookies" + , NULL + , RRD_TYPE_NET_STAT_SYNPROXY + , NULL + , "SYNPROXY TCP Cookies" + , "cookies/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_SYNPROXY_NAME + , NETDATA_CHART_PRIO_SYNPROXY_COOKIES + , update_every + , RRDSET_TYPE_LINE + ); + + rrddim_add(st, "valid", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rrddim_add(st, "invalid", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + rrddim_add(st, "retransmits", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set(st, "valid", cookie_valid); + rrddim_set(st, "invalid", cookie_invalid); + 
rrddim_set(st, "retransmits", cookie_retrans); + rrdset_done(st); + } + + return 0; +} diff --git a/src/collectors/proc.plugin/proc_net_wireless.c b/src/collectors/proc.plugin/proc_net_wireless.c new file mode 100644 index 000000000..c7efa3335 --- /dev/null +++ b/src/collectors/proc.plugin/proc_net_wireless.c @@ -0,0 +1,433 @@ +#include <stdbool.h> +#include "plugin_proc.h" + +#define PLUGIN_PROC_MODULE_NETWIRELESS_NAME "/proc/net/wireless" + +#define CONFIG_SECTION_PLUGIN_PROC_NETWIRELESS "plugin:" PLUGIN_PROC_CONFIG_NAME ":" PLUGIN_PROC_MODULE_NETWIRELESS_NAME + + +static struct netwireless { + char *name; + uint32_t hash; + + //flags + bool configured; + struct timeval updated; + + int do_status; + int do_quality; + int do_discarded_packets; + int do_missed_beacon; + + // Data collected + // status + kernel_uint_t status; + + // Quality + NETDATA_DOUBLE link; + NETDATA_DOUBLE level; + NETDATA_DOUBLE noise; + + // Discarded packets + kernel_uint_t nwid; + kernel_uint_t crypt; + kernel_uint_t frag; + kernel_uint_t retry; + kernel_uint_t misc; + + // missed beacon + kernel_uint_t missed_beacon; + + const char *chart_id_net_status; + const char *chart_id_net_link; + const char *chart_id_net_level; + const char *chart_id_net_noise; + const char *chart_id_net_discarded_packets; + const char *chart_id_net_missed_beacon; + + const char *chart_family; + + // charts + // status + RRDSET *st_status; + + // Quality + RRDSET *st_link; + RRDSET *st_level; + RRDSET *st_noise; + + // Discarded Packets + RRDSET *st_discarded_packets; + // Missed beacon + RRDSET *st_missed_beacon; + + // Dimensions + // status + RRDDIM *rd_status; + + // Quality + RRDDIM *rd_link; + RRDDIM *rd_level; + RRDDIM *rd_noise; + + // Discarded packets + RRDDIM *rd_nwid; + RRDDIM *rd_crypt; + RRDDIM *rd_frag; + RRDDIM *rd_retry; + RRDDIM *rd_misc; + + // missed beacon + RRDDIM *rd_missed_beacon; + + struct netwireless *next; +} *netwireless_root = NULL; + +static void netwireless_free_st(struct 
netwireless *wireless_dev) +{ + if (wireless_dev->st_status) rrdset_is_obsolete___safe_from_collector_thread(wireless_dev->st_status); + if (wireless_dev->st_link) rrdset_is_obsolete___safe_from_collector_thread(wireless_dev->st_link); + if (wireless_dev->st_level) rrdset_is_obsolete___safe_from_collector_thread(wireless_dev->st_level); + if (wireless_dev->st_noise) rrdset_is_obsolete___safe_from_collector_thread(wireless_dev->st_noise); + if (wireless_dev->st_discarded_packets) + rrdset_is_obsolete___safe_from_collector_thread(wireless_dev->st_discarded_packets); + if (wireless_dev->st_missed_beacon) rrdset_is_obsolete___safe_from_collector_thread(wireless_dev->st_missed_beacon); + + wireless_dev->st_status = NULL; + wireless_dev->st_link = NULL; + wireless_dev->st_level = NULL; + wireless_dev->st_noise = NULL; + wireless_dev->st_discarded_packets = NULL; + wireless_dev->st_missed_beacon = NULL; +} + +static void netwireless_free(struct netwireless *wireless_dev) +{ + wireless_dev->next = NULL; + freez((void *)wireless_dev->name); + netwireless_free_st(wireless_dev); + freez((void *)wireless_dev->chart_id_net_status); + freez((void *)wireless_dev->chart_id_net_link); + freez((void *)wireless_dev->chart_id_net_level); + freez((void *)wireless_dev->chart_id_net_noise); + freez((void *)wireless_dev->chart_id_net_discarded_packets); + freez((void *)wireless_dev->chart_id_net_missed_beacon); + + freez((void *)wireless_dev); +} + +static void netwireless_cleanup(struct timeval *timestamp) +{ + struct netwireless *previous = NULL; + struct netwireless *current; + // search it, from beginning to the end + for (current = netwireless_root; current;) { + + if (timercmp(&current->updated, timestamp, <)) { + struct netwireless *to_free = current; + current = current->next; + netwireless_free(to_free); + + if (previous) { + previous->next = current; + } else { + netwireless_root = current; + } + } else { + previous = current; + current = current->next; + } + } +} + +// finds an 
existing interface or creates a new entry +static struct netwireless *find_or_create_wireless(const char *name) +{ + struct netwireless *wireless; + uint32_t hash = simple_hash(name); + + // search it, from beginning to the end + for (wireless = netwireless_root ; wireless ; wireless = wireless->next) { + if (unlikely(hash == wireless->hash && !strcmp(name, wireless->name))) { + return wireless; + } + } + + // create a new one + wireless = callocz(1, sizeof(struct netwireless)); + wireless->name = strdupz(name); + wireless->hash = hash; + + // link it to the end + if (netwireless_root) { + struct netwireless *last_node; + for (last_node = netwireless_root; last_node->next ; last_node = last_node->next); + + last_node->next = wireless; + } else + netwireless_root = wireless; + + return wireless; +} + +static void configure_device(int do_status, int do_quality, int do_discarded_packets, int do_missed, + struct netwireless *wireless_dev) { + wireless_dev->do_status = do_status; + wireless_dev->do_quality = do_quality; + wireless_dev->do_discarded_packets = do_discarded_packets; + wireless_dev->do_missed_beacon = do_missed; + wireless_dev->configured = true; + + char buffer[RRD_ID_LENGTH_MAX + 1]; + + snprintfz(buffer, RRD_ID_LENGTH_MAX, "%s_status", wireless_dev->name); + wireless_dev->chart_id_net_status = strdupz(buffer); + + snprintfz(buffer, RRD_ID_LENGTH_MAX, "%s_link_quality", wireless_dev->name); + wireless_dev->chart_id_net_link = strdupz(buffer); + + snprintfz(buffer, RRD_ID_LENGTH_MAX, "%s_signal_level", wireless_dev->name); + wireless_dev->chart_id_net_level = strdupz(buffer); + + snprintfz(buffer, RRD_ID_LENGTH_MAX, "%s_noise_level", wireless_dev->name); + wireless_dev->chart_id_net_noise = strdupz(buffer); + + snprintfz(buffer, RRD_ID_LENGTH_MAX, "%s_discarded_packets", wireless_dev->name); + wireless_dev->chart_id_net_discarded_packets = strdupz(buffer); + + snprintfz(buffer, RRD_ID_LENGTH_MAX, "%s_missed_beacon", wireless_dev->name); + 
wireless_dev->chart_id_net_missed_beacon = strdupz(buffer); +} + +static void add_labels_to_wireless(struct netwireless *w, RRDSET *st) { + rrdlabels_add(st->rrdlabels, "device", w->name, RRDLABEL_SRC_AUTO); +} + +int do_proc_net_wireless(int update_every, usec_t dt) +{ + UNUSED(dt); + static procfile *ff = NULL; + static int do_status, do_quality = -1, do_discarded_packets, do_beacon; + static char *proc_net_wireless_filename = NULL; + + if (unlikely(do_quality == -1)) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/net/wireless"); + + proc_net_wireless_filename = config_get(CONFIG_SECTION_PLUGIN_PROC_NETWIRELESS,"filename to monitor", filename); + do_status = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETWIRELESS, "status for all interfaces", CONFIG_BOOLEAN_AUTO); + do_quality = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETWIRELESS, "quality for all interfaces", CONFIG_BOOLEAN_AUTO); + do_discarded_packets = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETWIRELESS, "discarded packets for all interfaces", CONFIG_BOOLEAN_AUTO); + do_beacon = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETWIRELESS, "missed beacon for all interface", CONFIG_BOOLEAN_AUTO); + } + + if (unlikely(!ff)) { + ff = procfile_open(proc_net_wireless_filename, " \t,|", PROCFILE_FLAG_DEFAULT); + if (unlikely(!ff)) return 1; + } + + ff = procfile_readall(ff); + if (unlikely(!ff)) return 1; + + size_t lines = procfile_lines(ff); + struct timeval timestamp; + size_t l; + gettimeofday(&timestamp, NULL); + for (l = 2; l < lines; l++) { + if (unlikely(procfile_linewords(ff, l) < 11)) continue; + + char *name = procfile_lineword(ff, l, 0); + size_t len = strlen(name); + if (name[len - 1] == ':') name[len - 1] = '\0'; + + struct netwireless *wireless_dev = find_or_create_wireless(name); + + if (unlikely(!wireless_dev->configured)) { + configure_device(do_status, do_quality, 
do_discarded_packets, do_beacon, wireless_dev); + } + + if (likely(do_status != CONFIG_BOOLEAN_NO)) { + wireless_dev->status = str2kernel_uint_t(procfile_lineword(ff, l, 1)); + + if (unlikely(!wireless_dev->st_status)) { + wireless_dev->st_status = rrdset_create_localhost( + "wireless", + wireless_dev->chart_id_net_status, + NULL, + wireless_dev->name, + "wireless.status", + "Internal status reported by interface.", + "status", + PLUGIN_PROC_NAME, + PLUGIN_PROC_MODULE_NETWIRELESS_NAME, + NETDATA_CHART_PRIO_WIRELESS_IFACE, + update_every, + RRDSET_TYPE_LINE); + + rrdset_flag_set(wireless_dev->st_status, RRDSET_FLAG_DETAIL); + + wireless_dev->rd_status = rrddim_add(wireless_dev->st_status, "status", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + add_labels_to_wireless(wireless_dev, wireless_dev->st_status); + } + + rrddim_set_by_pointer(wireless_dev->st_status, wireless_dev->rd_status, + (collected_number)wireless_dev->status); + rrdset_done(wireless_dev->st_status); + } + + if (likely(do_quality != CONFIG_BOOLEAN_NO)) { + wireless_dev->link = str2ndd(procfile_lineword(ff, l, 2), NULL); + wireless_dev->level = str2ndd(procfile_lineword(ff, l, 3), NULL); + wireless_dev->noise = str2ndd(procfile_lineword(ff, l, 4), NULL); + + if (unlikely(!wireless_dev->st_link)) { + wireless_dev->st_link = rrdset_create_localhost( + "wireless", + wireless_dev->chart_id_net_link, + NULL, + wireless_dev->name, + "wireless.link_quality", + "Overall quality of the link. 
This is an aggregate value, and depends on the driver and hardware.", + "value", + PLUGIN_PROC_NAME, + PLUGIN_PROC_MODULE_NETWIRELESS_NAME, + NETDATA_CHART_PRIO_WIRELESS_IFACE + 1, + update_every, + RRDSET_TYPE_LINE); + rrdset_flag_set(wireless_dev->st_link, RRDSET_FLAG_DETAIL); + + wireless_dev->rd_link = rrddim_add(wireless_dev->st_link, "link_quality", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + add_labels_to_wireless(wireless_dev, wireless_dev->st_link); + } + + if (unlikely(!wireless_dev->st_level)) { + wireless_dev->st_level = rrdset_create_localhost( + "wireless", + wireless_dev->chart_id_net_level, + NULL, + wireless_dev->name, + "wireless.signal_level", + "The signal level is the wireless signal power level received by the wireless client. The closer the value is to 0, the stronger the signal.", + "dBm", + PLUGIN_PROC_NAME, + PLUGIN_PROC_MODULE_NETWIRELESS_NAME, + NETDATA_CHART_PRIO_WIRELESS_IFACE + 2, + update_every, + RRDSET_TYPE_LINE); + rrdset_flag_set(wireless_dev->st_level, RRDSET_FLAG_DETAIL); + + wireless_dev->rd_level = rrddim_add(wireless_dev->st_level, "signal_level", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + add_labels_to_wireless(wireless_dev, wireless_dev->st_level); + } + + if (unlikely(!wireless_dev->st_noise)) { + wireless_dev->st_noise = rrdset_create_localhost( + "wireless", + wireless_dev->chart_id_net_noise, + NULL, + wireless_dev->name, + "wireless.noise_level", + "The noise level indicates the amount of background noise in your environment. 
The closer the value is to 0, the greater the noise level.", + "dBm", + PLUGIN_PROC_NAME, + PLUGIN_PROC_MODULE_NETWIRELESS_NAME, + NETDATA_CHART_PRIO_WIRELESS_IFACE + 3, + update_every, + RRDSET_TYPE_LINE); + rrdset_flag_set(wireless_dev->st_noise, RRDSET_FLAG_DETAIL); + + wireless_dev->rd_noise = rrddim_add(wireless_dev->st_noise, "noise_level", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + add_labels_to_wireless(wireless_dev, wireless_dev->st_noise); + } + + rrddim_set_by_pointer(wireless_dev->st_link, wireless_dev->rd_link, (collected_number)wireless_dev->link); + rrdset_done(wireless_dev->st_link); + + rrddim_set_by_pointer(wireless_dev->st_level, wireless_dev->rd_level, (collected_number)wireless_dev->level); + rrdset_done(wireless_dev->st_level); + + rrddim_set_by_pointer(wireless_dev->st_noise, wireless_dev->rd_noise, (collected_number)wireless_dev->noise); + rrdset_done(wireless_dev->st_noise); + } + + if (likely(do_discarded_packets)) { + wireless_dev->nwid = str2kernel_uint_t(procfile_lineword(ff, l, 5)); + wireless_dev->crypt = str2kernel_uint_t(procfile_lineword(ff, l, 6)); + wireless_dev->frag = str2kernel_uint_t(procfile_lineword(ff, l, 7)); + wireless_dev->retry = str2kernel_uint_t(procfile_lineword(ff, l, 8)); + wireless_dev->misc = str2kernel_uint_t(procfile_lineword(ff, l, 9)); + + if (unlikely(!wireless_dev->st_discarded_packets)) { + wireless_dev->st_discarded_packets = rrdset_create_localhost( + "wireless", + wireless_dev->chart_id_net_discarded_packets, + NULL, + wireless_dev->name, + "wireless.discarded_packets", + "Packets discarded in the wireless adapter due to \"wireless\" specific problems.", + "packets/s", + PLUGIN_PROC_NAME, + PLUGIN_PROC_MODULE_NETWIRELESS_NAME, + NETDATA_CHART_PRIO_WIRELESS_IFACE + 4, + update_every, + RRDSET_TYPE_LINE); + + rrdset_flag_set(wireless_dev->st_discarded_packets, RRDSET_FLAG_DETAIL); + + wireless_dev->rd_nwid = rrddim_add(wireless_dev->st_discarded_packets, "nwid", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + 
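The collector indexes each /proc/net/wireless data line by fixed word positions (0 = interface name, 1 = status, 2-4 = link/level/noise quality, 5-9 = discarded-packet counters, 10 = missed beacons). A minimal standalone sketch of that column layout using sscanf instead of Netdata's procfile helpers; the struct and function names here are illustrative, not part of the plugin:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* One parsed data line of /proc/net/wireless (illustrative sketch). */
struct wireless_line {
    char name[16];
    unsigned long status;
    double link, level, noise;                          /* quality columns */
    unsigned long nwid, crypt, frag, retry, misc;       /* discarded packets */
    unsigned long beacon;                               /* missed beacons */
};

/* Parse one line such as:
 *  " wlan0: 0000   54.  -56.  -256   0  0  0  0  1060  0"
 * Returns 0 on success, -1 if the line does not have all 11 fields. */
int parse_wireless_line(const char *line, struct wireless_line *w) {
    int n = sscanf(line, " %15[^:]: %lu %lf %lf %lf %lu %lu %lu %lu %lu %lu",
                   w->name, &w->status, &w->link, &w->level, &w->noise,
                   &w->nwid, &w->crypt, &w->frag, &w->retry, &w->misc,
                   &w->beacon);
    return n == 11 ? 0 : -1;
}
```

The trailing dots the kernel prints after the quality columns ("54.") are consumed by `%lf`, which mirrors why the collector can hand those words directly to str2ndd().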
wireless_dev->rd_crypt = rrddim_add(wireless_dev->st_discarded_packets, "crypt", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + wireless_dev->rd_frag = rrddim_add(wireless_dev->st_discarded_packets, "frag", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + wireless_dev->rd_retry = rrddim_add(wireless_dev->st_discarded_packets, "retry", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + wireless_dev->rd_misc = rrddim_add(wireless_dev->st_discarded_packets, "misc", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + + add_labels_to_wireless(wireless_dev, wireless_dev->st_discarded_packets); + } + + rrddim_set_by_pointer(wireless_dev->st_discarded_packets, wireless_dev->rd_nwid, (collected_number)wireless_dev->nwid); + rrddim_set_by_pointer(wireless_dev->st_discarded_packets, wireless_dev->rd_crypt, (collected_number)wireless_dev->crypt); + rrddim_set_by_pointer(wireless_dev->st_discarded_packets, wireless_dev->rd_frag, (collected_number)wireless_dev->frag); + rrddim_set_by_pointer(wireless_dev->st_discarded_packets, wireless_dev->rd_retry, (collected_number)wireless_dev->retry); + rrddim_set_by_pointer(wireless_dev->st_discarded_packets, wireless_dev->rd_misc, (collected_number)wireless_dev->misc); + + rrdset_done(wireless_dev->st_discarded_packets); + } + + if (likely(do_beacon)) { + wireless_dev->missed_beacon = str2kernel_uint_t(procfile_lineword(ff, l, 10)); + + if (unlikely(!wireless_dev->st_missed_beacon)) { + wireless_dev->st_missed_beacon = rrdset_create_localhost( + "wireless", + wireless_dev->chart_id_net_missed_beacon, + NULL, + wireless_dev->name, + "wireless.missed_beacons", + "Number of missed beacons.", + "frames/s", + PLUGIN_PROC_NAME, + PLUGIN_PROC_MODULE_NETWIRELESS_NAME, + NETDATA_CHART_PRIO_WIRELESS_IFACE + 5, + update_every, + RRDSET_TYPE_LINE); + + rrdset_flag_set(wireless_dev->st_missed_beacon, RRDSET_FLAG_DETAIL); + + wireless_dev->rd_missed_beacon = rrddim_add(wireless_dev->st_missed_beacon, "missed_beacons", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + + 
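The discarded-packet and missed-beacon dimensions are registered with RRD_ALGORITHM_INCREMENTAL: the raw value handed to rrddim_set_by_pointer() is a monotonically increasing counter, and the chart shows its per-second delta. A toy illustration of that semantic (not Netdata's actual rate code, which also handles sub-second intervals and counter wraps differently):

```c
#include <assert.h>

/* Per-second rate from two successive counter readings taken
 * `seconds` apart. Returns 0 on a counter reset/wrap or bad interval,
 * i.e. no rate is emitted rather than a bogus huge value. */
unsigned long long counter_rate(unsigned long long prev,
                                unsigned long long curr,
                                unsigned int seconds) {
    if (curr < prev || seconds == 0)
        return 0;
    return (curr - prev) / seconds;
}
```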
add_labels_to_wireless(wireless_dev, wireless_dev->st_missed_beacon); + } + + rrddim_set_by_pointer(wireless_dev->st_missed_beacon, wireless_dev->rd_missed_beacon, (collected_number)wireless_dev->missed_beacon); + rrdset_done(wireless_dev->st_missed_beacon); + } + + wireless_dev->updated = timestamp; + } + + netwireless_cleanup(&timestamp); + return 0; +} diff --git a/src/collectors/proc.plugin/proc_pagetypeinfo.c b/src/collectors/proc.plugin/proc_pagetypeinfo.c new file mode 100644 index 000000000..fc5496c63 --- /dev/null +++ b/src/collectors/proc.plugin/proc_pagetypeinfo.c @@ -0,0 +1,336 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +// For ULONG_MAX +#include <limits.h> + +#define PLUGIN_PROC_MODULE_PAGETYPEINFO_NAME "/proc/pagetypeinfo" +#define CONFIG_SECTION_PLUGIN_PROC_PAGETYPEINFO "plugin:" PLUGIN_PROC_CONFIG_NAME ":" PLUGIN_PROC_MODULE_PAGETYPEINFO_NAME + +// Zone struct is pglist_data, in include/linux/mmzone.h +// MAX_NR_ZONES is from __MAX_NR_ZONE, which is the last value of the enum. +#define MAX_PAGETYPE_ORDER 11 + +// Names are in mm/page_alloc.c :: migratetype_names. Max size = 10. 
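/proc/pagetypeinfo reports free-block counts per buddy-allocator order, where an order-o block spans (1 << o) contiguous pages; the dim_name() macro in this file labels each dimension with the resulting size in KB. A small sketch of that size calculation; the 4 KiB page size in the example is an assumption (the collector queries sysconf(_SC_PAGESIZE)):

```c
#include <assert.h>

/* Bytes covered by one free block of the given buddy-allocator order. */
unsigned long order_block_bytes(unsigned int order, unsigned long pagesize) {
    return (1UL << order) * pagesize;
}
```

With a 4 KiB page, order 0 is a single 4 KiB page and order 10 is a 4 MiB block, matching the "4KB (0)" through "4096KB (10)" dimension names the chart gets.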
+#define MAX_ZONETYPE_NAME 16 +#define MAX_PAGETYPE_NAME 16 + +// Defined in include/linux/mmzone.h as __MAX_NR_ZONE (last enum of zone_type) +#define MAX_ZONETYPE 6 +// Defined in include/linux/mmzone.h as MIGRATE_TYPES (last enum of migratetype) +#define MAX_PAGETYPE 7 + + +// +// /proc/pagetypeinfo is declared in mm/vmstat.c :: init_mm_internals +// + +// One line of /proc/pagetypeinfo +struct pageline { + int node; + char *zone; + char *type; + int line; + uint64_t free_pages_size[MAX_PAGETYPE_ORDER]; + RRDDIM *rd[MAX_PAGETYPE_ORDER]; +}; + +// Sum of all orders +struct systemorder { + uint64_t size; + RRDDIM *rd; +}; + + +static inline uint64_t pageline_total_count(struct pageline *p) { + uint64_t sum = 0, o; + for (o=0; o<MAX_PAGETYPE_ORDER; o++) + sum += p->free_pages_size[o]; + return sum; +} + +// Check if a line of /proc/pagetypeinfo is valid to use +// Free block lines start with "Node" && 4th col is "type" +#define pagetypeinfo_line_valid(ff, l) (strncmp(procfile_lineword(ff, l, 0), "Node", 4) == 0 && strncmp(procfile_lineword(ff, l, 4), "type", 4) == 0) + +// Dimension name from the order +#define dim_name(s, o, pagesize) (snprintfz(s, 16,"%ldKB (%lu)", (1 << o) * pagesize / 1024, o)) + +int do_proc_pagetypeinfo(int update_every, usec_t dt) { + (void)dt; + + // Config + static int do_global, do_detail; + static SIMPLE_PATTERN *filter_types = NULL; + + // Counters from parsing the file that don't change after boot + static struct systemorder systemorders[MAX_PAGETYPE_ORDER] = {}; + static struct pageline* pagelines = NULL; + static long pagesize = 0; + static size_t pageorders_cnt = 0, pagelines_cnt = 0, ff_lines = 0; + + // Handle + static procfile *ff = NULL; + static char ff_path[FILENAME_MAX + 1]; + + // RRD Sets + static RRDSET *st_order = NULL; + static RRDSET **st_nodezonetype = NULL; + + // Local temp variables + long unsigned int l, o, p; + struct pageline *pgl = NULL; + + // 
-------------------------------------------------------------------- + // Startup: Init arch and open /proc/pagetypeinfo + if (unlikely(!pagesize)) { + pagesize = sysconf(_SC_PAGESIZE); + } + + if(unlikely(!ff)) { + snprintfz(ff_path, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, PLUGIN_PROC_MODULE_PAGETYPEINFO_NAME); + ff = procfile_open(config_get(CONFIG_SECTION_PLUGIN_PROC_PAGETYPEINFO, "filename to monitor", ff_path), " \t:", PROCFILE_FLAG_DEFAULT); + + if(unlikely(!ff)) { + strncpyz(ff_path, PLUGIN_PROC_MODULE_PAGETYPEINFO_NAME, FILENAME_MAX); + ff = procfile_open(PLUGIN_PROC_MODULE_PAGETYPEINFO_NAME, " \t,", PROCFILE_FLAG_DEFAULT); + } + } + if(unlikely(!ff)) + return 1; + + ff = procfile_readall(ff); + if(unlikely(!ff)) + return 0; // we return 0, so that we will retry to open it next time + + // -------------------------------------------------------------------- + // Init: find how many Nodes, Zones and Types + if(unlikely(pagelines_cnt == 0)) { + size_t nodenumlast = -1; + char *zonenamelast = NULL; + + ff_lines = procfile_lines(ff); + if(unlikely(!ff_lines)) { + collector_error("PLUGIN: PROC_PAGETYPEINFO: Cannot read %s, zero lines reported.", ff_path); + return 1; + } + + // Configuration + do_global = config_get_boolean(CONFIG_SECTION_PLUGIN_PROC_PAGETYPEINFO, "enable system summary", CONFIG_BOOLEAN_YES); + do_detail = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_PAGETYPEINFO, "enable detail per-type", CONFIG_BOOLEAN_AUTO); + filter_types = simple_pattern_create( + config_get(CONFIG_SECTION_PLUGIN_PROC_PAGETYPEINFO, "hide charts id matching", ""), NULL, + SIMPLE_PATTERN_SUFFIX, true); + + pagelines_cnt = 0; + + // Pass 1: how many lines would be valid + for (l = 4; l < ff_lines; l++) { + if (!pagetypeinfo_line_valid(ff, l)) + continue; + + pagelines_cnt++; + } + if (pagelines_cnt == 0) { + collector_error("PLUGIN: PROC_PAGETYPEINFO: Unable to parse any valid line in %s", ff_path); + return 1; + } + + // 4th line is the "Free pages 
count per migrate type at order". Just subtract these 8 words. + pageorders_cnt = procfile_linewords(ff, 3); + if (pageorders_cnt < 9) { + collector_error("PLUGIN: PROC_PAGETYPEINFO: Unable to parse Line 4 of %s", ff_path); + return 1; + } + + pageorders_cnt -= 9; + + if (pageorders_cnt > MAX_PAGETYPE_ORDER) { + collector_error("PLUGIN: PROC_PAGETYPEINFO: pageorder found (%lu) is higher than max %d", + (long unsigned int) pageorders_cnt, MAX_PAGETYPE_ORDER); + return 1; + } + + // Init pagelines from scanned lines + if (!pagelines) { + pagelines = callocz(pagelines_cnt, sizeof(struct pageline)); + if (!pagelines) { + collector_error("PLUGIN: PROC_PAGETYPEINFO: Cannot allocate %lu pagelines of %lu B", + (long unsigned int) pagelines_cnt, (long unsigned int) sizeof(struct pageline)); + return 1; + } + } + + // Pass 2: Scan the file again, with details + p = 0; + for (l=4; l < ff_lines; l++) { + + if (!pagetypeinfo_line_valid(ff, l)) + continue; + + size_t nodenum = strtoul(procfile_lineword(ff, l, 1), NULL, 10); + char *zonename = procfile_lineword(ff, l, 3); + char *typename = procfile_lineword(ff, l, 5); + + // We changed node or zone + if (nodenum != nodenumlast || !zonenamelast || strncmp(zonename, zonenamelast, 6) != 0) { + zonenamelast = zonename; + } + + // Populate the line + pgl = &pagelines[p]; + + pgl->line = l; + pgl->node = nodenum; + pgl->type = typename; + pgl->zone = zonename; + for (o = 0; o < pageorders_cnt; o++) + pgl->free_pages_size[o] = str2uint64_t(procfile_lineword(ff, l, o + 6), NULL) * 1 << o; + + p++; + } + + // Init the RRD graphs + + // Per-Order: sum of all node, zone, type Grouped by order + if (do_global != CONFIG_BOOLEAN_NO) { + st_order = rrdset_create_localhost( + "mem" + , "pagetype_global" + , NULL + , "pagetype" + , NULL + , "System orders available" + , "B" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_PAGETYPEINFO_NAME + , NETDATA_CHART_PRIO_MEM_PAGEFRAG + , update_every + , RRDSET_TYPE_STACKED + ); + for (o = 0; o < 
pageorders_cnt; o++) { + char id[3+1]; + snprintfz(id, sizeof(id) - 1, "%lu", o); + + char name[20+1]; + dim_name(name, o, pagesize); + + systemorders[o].rd = rrddim_add(st_order, id, name, pagesize, 1, RRD_ALGORITHM_ABSOLUTE); + } + } + + + // Per-Numa Node & Zone & Type (full detail). Only if sum(line) > 0 + st_nodezonetype = callocz(pagelines_cnt, sizeof(RRDSET *)); + for (p = 0; p < pagelines_cnt; p++) { + pgl = &pagelines[p]; + + // Skip invalid, refused or empty pagelines if not explicitly requested + if (!pgl + || do_detail == CONFIG_BOOLEAN_NO + || (do_detail == CONFIG_BOOLEAN_AUTO && pageline_total_count(pgl) == 0 && netdata_zero_metrics_enabled != CONFIG_BOOLEAN_YES)) + continue; + + // "pagetype Node" + NUMA-NodeId + ZoneName + TypeName + char setid[13+1+2+1+MAX_ZONETYPE_NAME+1+MAX_PAGETYPE_NAME+1]; + snprintfz(setid, sizeof(setid) - 1, "pagetype_Node%d_%s_%s", pgl->node, pgl->zone, pgl->type); + + // Skip explicitly refused charts + if (simple_pattern_matches(filter_types, setid)) + continue; + + // "Node" + NUMA-NodeID + ZoneName + TypeName + char setname[4+1+MAX_ZONETYPE_NAME+1+MAX_PAGETYPE_NAME +1]; + snprintfz(setname, MAX_ZONETYPE_NAME + MAX_PAGETYPE_NAME, "Node %d %s %s", pgl->node, pgl->zone, pgl->type); + + st_nodezonetype[p] = rrdset_create_localhost( + "mem" + , setid + , NULL + , "pagetype" + , "mem.pagetype" + , setname + , "B" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_PAGETYPEINFO_NAME + , NETDATA_CHART_PRIO_MEM_PAGEFRAG + 1 + p + , update_every + , RRDSET_TYPE_STACKED + ); + + char node[50+1]; + snprintfz(node, sizeof(node) - 1, "node%d", pgl->node); + rrdlabels_add(st_nodezonetype[p]->rrdlabels, "node_id", node, RRDLABEL_SRC_AUTO); + rrdlabels_add(st_nodezonetype[p]->rrdlabels, "node_zone", pgl->zone, RRDLABEL_SRC_AUTO); + rrdlabels_add(st_nodezonetype[p]->rrdlabels, "node_type", pgl->type, RRDLABEL_SRC_AUTO); + + for (o = 0; o < pageorders_cnt; o++) { + char dimid[3+1]; + snprintfz(dimid, sizeof(dimid) - 1, "%lu", o); + char 
dimname[20+1]; + dim_name(dimname, o, pagesize); + + pgl->rd[o] = rrddim_add(st_nodezonetype[p], dimid, dimname, pagesize, 1, RRD_ALGORITHM_ABSOLUTE); + } + } + } + + // -------------------------------------------------------------------- + // Update pagelines + + // Process each line + p = 0; + for (l=4; l<ff_lines; l++) { + + if (!pagetypeinfo_line_valid(ff, l)) + continue; + + size_t words = procfile_linewords(ff, l); + + if (words != 7+pageorders_cnt) { + collector_error("PLUGIN: PROC_PAGETYPEINFO: Unable to read line %lu, %lu words found instead of %lu", + l+1, (long unsigned int) words, (long unsigned int) 7+pageorders_cnt); + break; + } + + for (o = 0; o < pageorders_cnt; o++) { + // Reset counter + if (p == 0) + systemorders[o].size = 0; + + // Update orders of the current line + pagelines[p].free_pages_size[o] = str2uint64_t(procfile_lineword(ff, l, o + 6), NULL) * 1 << o; + + // Update sum by order + systemorders[o].size += pagelines[p].free_pages_size[o]; + } + + p++; + } + + // -------------------------------------------------------------------- + // update RRD values + + // Global system per order + if (st_order) { + for (o = 0; o < pageorders_cnt; o++) + rrddim_set_by_pointer(st_order, systemorders[o].rd, systemorders[o].size); + rrdset_done(st_order); + } + + // Per Node-Zone-Type + if (do_detail) { + for (p = 0; p < pagelines_cnt; p++) { + // Skip empty graphs + if (!st_nodezonetype[p]) + continue; + + for (o = 0; o < pageorders_cnt; o++) + rrddim_set_by_pointer(st_nodezonetype[p], pagelines[p].rd[o], pagelines[p].free_pages_size[o]); + rrdset_done(st_nodezonetype[p]); + } + } + + return 0; +} diff --git a/src/collectors/proc.plugin/proc_pressure.c b/src/collectors/proc.plugin/proc_pressure.c new file mode 100644 index 000000000..4037e60ac --- /dev/null +++ b/src/collectors/proc.plugin/proc_pressure.c @@ -0,0 +1,257 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +#define PLUGIN_PROC_MODULE_PRESSURE_NAME 
"/proc/pressure" +#define CONFIG_SECTION_PLUGIN_PROC_PRESSURE "plugin:" PLUGIN_PROC_CONFIG_NAME ":" PLUGIN_PROC_MODULE_PRESSURE_NAME + +// linux calculates this every 2 seconds, see kernel/sched/psi.c PSI_FREQ +#define MIN_PRESSURE_UPDATE_EVERY 2 + +static int pressure_update_every = 0; + +static struct pressure resources[PRESSURE_NUM_RESOURCES] = { + { + .some = { + .available = true, + .share_time = {.id = "cpu_some_pressure", .title = "CPU some pressure"}, + .total_time = {.id = "cpu_some_pressure_stall_time", .title = "CPU some pressure stall time"} + }, + .full = { + // Disable CPU full pressure. + // See https://github.com/torvalds/linux/commit/890d550d7dbac7a31ecaa78732aa22be282bb6b8 + .available = false, + .share_time = {.id = "cpu_full_pressure", .title = "CPU full pressure"}, + .total_time = {.id = "cpu_full_pressure_stall_time", .title = "CPU full pressure stall time"} + }, + }, + { + .some = { + .available = true, + .share_time = {.id = "memory_some_pressure", .title = "Memory some pressure"}, + .total_time = {.id = "memory_some_pressure_stall_time", .title = "Memory some pressure stall time"} + }, + .full = { + .available = true, + .share_time = {.id = "memory_full_pressure", .title = "Memory full pressure"}, + .total_time = {.id = "memory_full_pressure_stall_time", .title = "Memory full pressure stall time"} + }, + }, + { + .some = { + .available = true, + .share_time = {.id = "io_some_pressure", .title = "I/O some pressure"}, + .total_time = {.id = "io_some_pressure_stall_time", .title = "I/O some pressure stall time"} + }, + .full = { + .available = true, + .share_time = {.id = "io_full_pressure", .title = "I/O full pressure"}, + .total_time = {.id = "io_full_pressure_stall_time", .title = "I/O full pressure stall time"} + }, + }, + { + .some = { + // this is not available + .available = false, + .share_time = {.id = "irq_some_pressure", .title = "IRQ some pressure"}, + .total_time = {.id = "irq_some_pressure_stall_time", .title = "IRQ some pressure 
stall time"} + }, + .full = { + .available = true, + .share_time = {.id = "irq_full_pressure", .title = "IRQ full pressure"}, + .total_time = {.id = "irq_full_pressure_stall_time", .title = "IRQ full pressure stall time"} + }, + }, +}; + +static struct resource_info { + procfile *pf; + const char *name; // metric file name + const char *family; // webui section name + int section_priority; +} resource_info[PRESSURE_NUM_RESOURCES] = { + { .name = "cpu", .family = "cpu", .section_priority = NETDATA_CHART_PRIO_SYSTEM_CPU }, + { .name = "memory", .family = "ram", .section_priority = NETDATA_CHART_PRIO_SYSTEM_RAM }, + { .name = "io", .family = "disk", .section_priority = NETDATA_CHART_PRIO_SYSTEM_IO }, + { .name = "irq", .family = "interrupts", .section_priority = NETDATA_CHART_PRIO_SYSTEM_INTERRUPTS }, +}; + +void update_pressure_charts(struct pressure_charts *pcs) { + if (pcs->share_time.st) { + rrddim_set_by_pointer( + pcs->share_time.st, pcs->share_time.rd10, (collected_number)(pcs->share_time.value10 * 100)); + rrddim_set_by_pointer( + pcs->share_time.st, pcs->share_time.rd60, (collected_number)(pcs->share_time.value60 * 100)); + rrddim_set_by_pointer( + pcs->share_time.st, pcs->share_time.rd300, (collected_number)(pcs->share_time.value300 * 100)); + rrdset_done(pcs->share_time.st); + } + if (pcs->total_time.st) { + rrddim_set_by_pointer( + pcs->total_time.st, pcs->total_time.rdtotal, (collected_number)(pcs->total_time.value_total)); + rrdset_done(pcs->total_time.st); + } +} + +static void proc_pressure_do_resource(procfile *ff, int res_idx, size_t line, bool some) { + struct pressure_charts *pcs; + struct resource_info ri; + pcs = some ? 
&resources[res_idx].some : &resources[res_idx].full; + ri = resource_info[res_idx]; + + if (unlikely(!pcs->share_time.st)) { + pcs->share_time.st = rrdset_create_localhost( + "system", + pcs->share_time.id, + NULL, + ri.family, + NULL, + pcs->share_time.title, + "percentage", + PLUGIN_PROC_NAME, + PLUGIN_PROC_MODULE_PRESSURE_NAME, + ri.section_priority + (some ? 40 : 50), + pressure_update_every, + RRDSET_TYPE_LINE); + pcs->share_time.rd10 = + rrddim_add(pcs->share_time.st, some ? "some 10" : "full 10", NULL, 1, 100, RRD_ALGORITHM_ABSOLUTE); + pcs->share_time.rd60 = + rrddim_add(pcs->share_time.st, some ? "some 60" : "full 60", NULL, 1, 100, RRD_ALGORITHM_ABSOLUTE); + pcs->share_time.rd300 = + rrddim_add(pcs->share_time.st, some ? "some 300" : "full 300", NULL, 1, 100, RRD_ALGORITHM_ABSOLUTE); + } + + pcs->share_time.value10 = strtod(procfile_lineword(ff, line, 2), NULL); + pcs->share_time.value60 = strtod(procfile_lineword(ff, line, 4), NULL); + pcs->share_time.value300 = strtod(procfile_lineword(ff, line, 6), NULL); + + if (unlikely(!pcs->total_time.st)) { + pcs->total_time.st = rrdset_create_localhost( + "system", + pcs->total_time.id, + NULL, + ri.family, + NULL, + pcs->total_time.title, + "ms", + PLUGIN_PROC_NAME, + PLUGIN_PROC_MODULE_PRESSURE_NAME, + ri.section_priority + (some ? 
45 : 55), + pressure_update_every, + RRDSET_TYPE_LINE); + pcs->total_time.rdtotal = rrddim_add(pcs->total_time.st, "time", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + pcs->total_time.value_total = str2ull(procfile_lineword(ff, line, 8), NULL) / 1000; +} + +static void proc_pressure_do_resource_some(procfile *ff, int res_idx, size_t line) { + proc_pressure_do_resource(ff, res_idx, line, true); +} + +static void proc_pressure_do_resource_full(procfile *ff, int res_idx, size_t line) { + proc_pressure_do_resource(ff, res_idx, line, false); +} + +int do_proc_pressure(int update_every, usec_t dt) { + int ok_count = 0; + int i; + + static usec_t next_pressure_dt = 0; + static char *base_path = NULL; + + update_every = (update_every < MIN_PRESSURE_UPDATE_EVERY) ? MIN_PRESSURE_UPDATE_EVERY : update_every; + pressure_update_every = update_every; + + if (next_pressure_dt <= dt) { + next_pressure_dt = update_every * USEC_PER_SEC; + } else { + next_pressure_dt -= dt; + return 0; + } + + if (unlikely(!base_path)) { + base_path = config_get(CONFIG_SECTION_PLUGIN_PROC_PRESSURE, "base path of pressure metrics", "/proc/pressure"); + } + + for (i = 0; i < PRESSURE_NUM_RESOURCES; i++) { + procfile *ff = resource_info[i].pf; + int do_some = resources[i].some.enabled, do_full = resources[i].full.enabled; + + if (!resources[i].some.available && !resources[i].full.available) + continue; + + if (unlikely(!ff)) { + char filename[FILENAME_MAX + 1]; + char config_key[CONFIG_MAX_NAME + 1]; + + snprintfz(filename + , FILENAME_MAX + , "%s%s/%s" + , netdata_configured_host_prefix + , base_path + , resource_info[i].name); + + do_some = resources[i].some.available ? CONFIG_BOOLEAN_YES : CONFIG_BOOLEAN_NO; + do_full = resources[i].full.available ? 
CONFIG_BOOLEAN_YES : CONFIG_BOOLEAN_NO; + + snprintfz(config_key, CONFIG_MAX_NAME, "enable %s some pressure", resource_info[i].name); + do_some = config_get_boolean(CONFIG_SECTION_PLUGIN_PROC_PRESSURE, config_key, do_some); + resources[i].some.enabled = do_some; + + snprintfz(config_key, CONFIG_MAX_NAME, "enable %s full pressure", resource_info[i].name); + do_full = config_get_boolean(CONFIG_SECTION_PLUGIN_PROC_PRESSURE, config_key, do_full); + resources[i].full.enabled = do_full; + + if (!do_full && !do_some) { + resources[i].some.available = false; + resources[i].full.available = false; + continue; + } + + ff = procfile_open(filename, " =", PROCFILE_FLAG_NO_ERROR_ON_FILE_IO); + if (unlikely(!ff)) { + // PSI IRQ was added recently (https://github.com/torvalds/linux/commit/52b1364ba0b105122d6de0e719b36db705011ac1) + if (strcmp(resource_info[i].name, "irq") != 0) + collector_error("Cannot read pressure information from %s.", filename); + resources[i].some.available = false; + resources[i].full.available = false; + continue; + } + } + + ff = procfile_readall(ff); + resource_info[i].pf = ff; + if (unlikely(!ff)) + continue; + + size_t lines = procfile_lines(ff); + if (unlikely(lines < 1)) { + collector_error("%s has no lines.", procfile_filename(ff)); + continue; + } + + for(size_t l = 0; l < lines ;l++) { + const char *key = procfile_lineword(ff, l, 0); + if(strcmp(key, "some") == 0) { + if(do_some) { + proc_pressure_do_resource_some(ff, i, l); + update_pressure_charts(&resources[i].some); + ok_count++; + } + } + else if(strcmp(key, "full") == 0) { + if(do_full) { + proc_pressure_do_resource_full(ff, i, l); + update_pressure_charts(&resources[i].full); + ok_count++; + } + } + } + } + + if(!ok_count) + return 1; + + return 0; +} diff --git a/src/collectors/proc.plugin/proc_pressure.h b/src/collectors/proc.plugin/proc_pressure.h new file mode 100644 index 000000000..2e5cab2cc --- /dev/null +++ b/src/collectors/proc.plugin/proc_pressure.h @@ -0,0 +1,44 @@ +// 
SPDX-License-Identifier: GPL-3.0-or-later + +#ifndef NETDATA_PROC_PRESSURE_H +#define NETDATA_PROC_PRESSURE_H + +#define PRESSURE_NUM_RESOURCES 4 + +struct pressure { + int updated; + char *filename; + + struct pressure_charts { + bool available; + int enabled; + + struct pressure_share_time_chart { + const char *id; + const char *title; + + double value10; + double value60; + double value300; + + RRDSET *st; + RRDDIM *rd10; + RRDDIM *rd60; + RRDDIM *rd300; + } share_time; + + struct pressure_total_time_chart { + const char *id; + const char *title; + + unsigned long long value_total; + + RRDSET *st; + RRDDIM *rdtotal; + } total_time; + } some, full; +}; + +void update_pressure_charts(struct pressure_charts *charts); + +#endif //NETDATA_PROC_PRESSURE_H diff --git a/src/collectors/proc.plugin/proc_self_mountinfo.c b/src/collectors/proc.plugin/proc_self_mountinfo.c new file mode 100644 index 000000000..194791603 --- /dev/null +++ b/src/collectors/proc.plugin/proc_self_mountinfo.c @@ -0,0 +1,471 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +// ---------------------------------------------------------------------------- +// taken from gnulib/mountlist.c + +#ifndef ME_REMOTE +/* A file system is "remote" if its Fs_name contains a ':' + or if (it is of type (smbfs or cifs) and its Fs_name starts with '//') + or Fs_name is equal to "-hosts" (used by autofs to mount remote fs). 
*/ +# define ME_REMOTE(Fs_name, Fs_type) \ + (strchr (Fs_name, ':') != NULL \ + || ((Fs_name)[0] == '/' \ + && (Fs_name)[1] == '/' \ + && (strcmp (Fs_type, "smbfs") == 0 \ + || strcmp (Fs_type, "cifs") == 0)) \ + || (strcmp("-hosts", Fs_name) == 0)) +#endif + +#define ME_DUMMY_0(Fs_name, Fs_type) \ + (strcmp (Fs_type, "autofs") == 0 \ + || strcmp (Fs_type, "proc") == 0 \ + || strcmp (Fs_type, "subfs") == 0 \ + /* for Linux 2.6/3.x */ \ + || strcmp (Fs_type, "debugfs") == 0 \ + || strcmp (Fs_type, "devpts") == 0 \ + || strcmp (Fs_type, "fusectl") == 0 \ + || strcmp (Fs_type, "mqueue") == 0 \ + || strcmp (Fs_type, "rpc_pipefs") == 0 \ + || strcmp (Fs_type, "sysfs") == 0 \ + /* FreeBSD, Linux 2.4 */ \ + || strcmp (Fs_type, "devfs") == 0 \ + /* for NetBSD 3.0 */ \ + || strcmp (Fs_type, "kernfs") == 0 \ + /* for Irix 6.5 */ \ + || strcmp (Fs_type, "ignore") == 0) + +/* Historically, we have marked as "dummy" any file system of type "none", + but now that programs like du need to know about bind-mounted directories, + we grant an exception to any with "bind" in its list of mount options. + I.e., those are *not* dummy entries. 
*/ +# define ME_DUMMY(Fs_name, Fs_type) \ + (ME_DUMMY_0 (Fs_name, Fs_type) || strcmp (Fs_type, "none") == 0) + +// ---------------------------------------------------------------------------- + +// find the mount info with the given major:minor +// in the supplied linked list of mountinfo structures +struct mountinfo *mountinfo_find(struct mountinfo *root, unsigned long major, unsigned long minor, char *device) { + struct mountinfo *mi; + + uint32_t hash = simple_hash(device); + + for(mi = root; mi ; mi = mi->next) + if (unlikely( + mi->major == major && + mi->minor == minor && + mi->mount_source_name_hash == hash && + !strcmp(mi->mount_source_name, device))) + return mi; + + return NULL; +} + +// find the mount info with the given filesystem and mount_source +// in the supplied linked list of mountinfo structures +struct mountinfo *mountinfo_find_by_filesystem_mount_source(struct mountinfo *root, const char *filesystem, const char *mount_source) { + struct mountinfo *mi; + uint32_t filesystem_hash = simple_hash(filesystem), mount_source_hash = simple_hash(mount_source); + + for(mi = root; mi ; mi = mi->next) + if(unlikely(mi->filesystem + && mi->mount_source + && mi->filesystem_hash == filesystem_hash + && mi->mount_source_hash == mount_source_hash + && !strcmp(mi->filesystem, filesystem) + && !strcmp(mi->mount_source, mount_source))) + return mi; + + return NULL; +} + +struct mountinfo *mountinfo_find_by_filesystem_super_option(struct mountinfo *root, const char *filesystem, const char *super_options) { + struct mountinfo *mi; + uint32_t filesystem_hash = simple_hash(filesystem); + + size_t solen = strlen(super_options); + + for(mi = root; mi ; mi = mi->next) + if(unlikely(mi->filesystem + && mi->super_options + && mi->filesystem_hash == filesystem_hash + && !strcmp(mi->filesystem, filesystem))) { + + // super_options is a comma separated list + char *s = mi->super_options, *e; + while(*s) { + e = s + 1; + while(*e && *e != ',') e++; + + size_t len = e - s; + 
if(unlikely(len == solen && !strncmp(s, super_options, len))) + return mi; + + if(*e == ',') s = ++e; + else s = e; + } + } + + return NULL; +} + +static void mountinfo_free(struct mountinfo *mi) { + freez(mi->root); + freez(mi->mount_point); + freez(mi->mount_options); + freez(mi->persistent_id); +/* + if(mi->optional_fields_count) { + int i; + for(i = 0; i < mi->optional_fields_count ; i++) + free(*mi->optional_fields[i]); + } + free(mi->optional_fields); +*/ + freez(mi->filesystem); + freez(mi->mount_source); + freez(mi->mount_source_name); + freez(mi->super_options); + freez(mi); +} + +// free a linked list of mountinfo structures +void mountinfo_free_all(struct mountinfo *mi) { + while(mi) { + struct mountinfo *t = mi; + mi = mi->next; + + mountinfo_free(t); + } +} + +static char *strdupz_decoding_octal(const char *string) { + char *buffer = strdupz(string); + + char *d = buffer; + const char *s = string; + + while(*s) { + if(unlikely(*s == '\\')) { + s++; + if(likely(isdigit(*s) && isdigit(s[1]) && isdigit(s[2]))) { + char c = *s++ - '0'; + c <<= 3; + c |= *s++ - '0'; + c <<= 3; + c |= *s++ - '0'; + *d++ = c; + } + else *d++ = '_'; + } + else *d++ = *s++; + } + *d = '\0'; + + return buffer; +} + +static inline int is_read_only(const char *s) { + if(!s) return 0; + + size_t len = strlen(s); + if(len < 2) return 0; + if(len == 2) { + if(!strcmp(s, "ro")) return 1; + return 0; + } + if(!strncmp(s, "ro,", 3)) return 1; + if(!strncmp(&s[len - 3], ",ro", 3)) return 1; + if(strstr(s, ",ro,")) return 1; + return 0; +} + +// for the full list of protected mount points look at +// https://github.com/systemd/systemd/blob/1eb3ef78b4df28a9e9f464714208f2682f957e36/src/core/namespace.c#L142-L149 +// https://github.com/systemd/systemd/blob/1eb3ef78b4df28a9e9f464714208f2682f957e36/src/core/namespace.c#L180-L194 +static const char *systemd_protected_mount_points[] = { + "/home", + "/root", + "/usr", + "/boot", + "/efi", + "/etc", + "/run/user", + "/lib", + "/lib64", + "/bin", 
+ "/sbin", + NULL +}; + +static inline int mount_point_is_protected(char *mount_point) +{ + for (size_t i = 0; systemd_protected_mount_points[i] != NULL; i++) + if (!strcmp(mount_point, systemd_protected_mount_points[i])) + return 1; + + return 0; +} + +// read the whole mountinfo into a linked list +struct mountinfo *mountinfo_read(int do_statvfs) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s/proc/self/mountinfo", netdata_configured_host_prefix); + procfile *ff = procfile_open(filename, " \t", PROCFILE_FLAG_DEFAULT); + if(unlikely(!ff)) { + snprintfz(filename, FILENAME_MAX, "%s/proc/1/mountinfo", netdata_configured_host_prefix); + ff = procfile_open(filename, " \t", PROCFILE_FLAG_DEFAULT); + if(unlikely(!ff)) return NULL; + } + + ff = procfile_readall(ff); + if(unlikely(!ff)) + return NULL; + + struct mountinfo *root = NULL, *last = NULL, *mi = NULL; + + // create a dictionary to track uniqueness + DICTIONARY *dict = dictionary_create_advanced( + DICT_OPTION_SINGLE_THREADED | DICT_OPTION_DONT_OVERWRITE_VALUE | DICT_OPTION_NAME_LINK_DONT_CLONE, + &dictionary_stats_category_collectors, 0); + + unsigned long l, lines = procfile_lines(ff); + for(l = 0; l < lines ;l++) { + if(unlikely(procfile_linewords(ff, l) < 5)) + continue; + + // make sure we don't add the same item twice + char *v = (char *)dictionary_set(dict, procfile_lineword(ff, l, 4), "N", 2); + if(v) { + if(*v == 'O') continue; + *v = 'O'; + } + + mi = mallocz(sizeof(struct mountinfo)); + + unsigned long w = 0; + mi->id = str2ul(procfile_lineword(ff, l, w)); w++; + mi->parentid = str2ul(procfile_lineword(ff, l, w)); w++; + + char *major = procfile_lineword(ff, l, w), *minor; w++; + for(minor = major; *minor && *minor != ':' ;minor++) ; + + if(unlikely(!*minor)) { + collector_error("Cannot parse major:minor on '%s' at line %lu of '%s'", major, l + 1, filename); + freez(mi); + continue; + } + + *minor = '\0'; + minor++; + + mi->flags = 0; + + mi->major = str2ul(major); + 
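The third field of each /proc/self/mountinfo line is the device number as "major:minor"; the code above splits it by overwriting the ':' in place before calling str2ul() on each half. An equivalent sketch using strtoul, with the helper name being illustrative:

```c
#include <assert.h>
#include <stdlib.h>

/* Split "major:minor" (e.g. "8:1") into its two numbers.
 * Returns 0 on success, -1 if the ':' separator is missing. */
int parse_major_minor(const char *s, unsigned long *major_out,
                      unsigned long *minor_out) {
    char *end;
    *major_out = strtoul(s, &end, 10);
    if (*end != ':')
        return -1;                 /* malformed field, mirror the error path */
    *minor_out = strtoul(end + 1, &end, 10);
    return 0;
}
```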
mi->minor = str2ul(minor); + + mi->root = strdupz(procfile_lineword(ff, l, w)); w++; + mi->root_hash = simple_hash(mi->root); + + mi->mount_point = strdupz_decoding_octal(procfile_lineword(ff, l, w)); w++; + mi->mount_point_hash = simple_hash(mi->mount_point); + + mi->persistent_id = strdupz(mi->mount_point); + netdata_fix_chart_id(mi->persistent_id); + mi->persistent_id_hash = simple_hash(mi->persistent_id); + + mi->mount_options = strdupz(procfile_lineword(ff, l, w)); w++; + + if(unlikely(is_read_only(mi->mount_options))) + mi->flags |= MOUNTINFO_READONLY; + + if(unlikely(mount_point_is_protected(mi->mount_point))) + mi->flags |= MOUNTINFO_IS_IN_SYSD_PROTECTED_LIST; + + // count the optional fields +/* + unsigned long wo = w; +*/ + mi->optional_fields_count = 0; + char *s = procfile_lineword(ff, l, w); + while(*s && *s != '-') { + w++; + s = procfile_lineword(ff, l, w); + mi->optional_fields_count++; + } + +/* + if(unlikely(mi->optional_fields_count)) { + // we have some optional fields + // read them into a new array of pointers; + + mi->optional_fields = mallocz(mi->optional_fields_count * sizeof(char *)); + + int i; + for(i = 0; i < mi->optional_fields_count ; i++) { + *mi->optional_fields[wo] = strdupz(procfile_lineword(ff, l, w)); + wo++; + } + } + else + mi->optional_fields = NULL; +*/ + + if(likely(*s == '-')) { + w++; + + mi->filesystem = strdupz(procfile_lineword(ff, l, w)); w++; + mi->filesystem_hash = simple_hash(mi->filesystem); + + mi->mount_source = strdupz_decoding_octal(procfile_lineword(ff, l, w)); w++; + mi->mount_source_hash = simple_hash(mi->mount_source); + + mi->mount_source_name = strdupz(basename(mi->mount_source)); + mi->mount_source_name_hash = simple_hash(mi->mount_source_name); + + mi->super_options = strdupz(procfile_lineword(ff, l, w)); w++; + + if(unlikely(is_read_only(mi->super_options))) + mi->flags |= MOUNTINFO_READONLY; + + if(unlikely(ME_DUMMY(mi->mount_source, mi->filesystem))) + mi->flags |= MOUNTINFO_IS_DUMMY; + + 
if(unlikely(ME_REMOTE(mi->mount_source, mi->filesystem))) + mi->flags |= MOUNTINFO_IS_REMOTE; + + // detect duplicates: among mount points sharing the same st_dev, the one with the longer mount point is flagged MOUNTINFO_IS_SAME_DEV + if(do_statvfs) { + struct stat buf; + if(unlikely(stat(mi->mount_point, &buf) == -1)) { + mi->st_dev = 0; + mi->flags |= MOUNTINFO_NO_STAT; + } + else { + mi->st_dev = buf.st_dev; + + struct mountinfo *mt; + for(mt = root; mt; mt = mt->next) { + if(unlikely(mt->st_dev == mi->st_dev && !(mt->flags & MOUNTINFO_IS_SAME_DEV))) { + if(strlen(mi->mount_point) < strlen(mt->mount_point)) + mt->flags |= MOUNTINFO_IS_SAME_DEV; + else + mi->flags |= MOUNTINFO_IS_SAME_DEV; + } + } + } + } + else { + mi->st_dev = 0; + } + + // detect devices with the same major:minor numbers; among these, + // the one with the longer root path is flagged as a bind mount + struct mountinfo *mt; + for(mt = root; mt; mt = mt->next) { + if(unlikely(mt->major == mi->major && mt->minor == mi->minor && !(mi->flags & MOUNTINFO_IS_BIND))) { + if(strlen(mi->root) < strlen(mt->root)) + mt->flags |= MOUNTINFO_IS_BIND; + else + mi->flags |= MOUNTINFO_IS_BIND; + } + } + } + else { + mi->filesystem = NULL; + mi->filesystem_hash = 0; + + mi->mount_source = NULL; + mi->mount_source_hash = 0; + + mi->mount_source_name = NULL; + mi->mount_source_name_hash = 0; + + mi->super_options = NULL; + + mi->st_dev = 0; + } + + // check if it has size + if(do_statvfs && !(mi->flags & MOUNTINFO_IS_DUMMY)) { + struct statvfs buff_statvfs; + if(unlikely(statvfs(mi->mount_point, &buff_statvfs) < 0)) { + mi->flags |= MOUNTINFO_NO_STAT; + } + else if(unlikely(!buff_statvfs.f_blocks /* || !buff_statvfs.f_files */)) { + mi->flags |= MOUNTINFO_NO_SIZE; + } + } + + // link it + if(unlikely(!root)) + root = mi; + else + last->next = mi; + + last = mi; + mi->next = NULL; + +/* +#ifdef NETDATA_INTERNAL_CHECKS + fprintf(stderr, "MOUNTINFO: %ld %ld %lu:%lu root '%s', persistent id '%s', mount point '%s', mount options '%s', filesystem '%s', mount source '%s', super options 
'%s'%s%s%s%s%s%s\n", + mi->id, + mi->parentid, + mi->major, + mi->minor, + mi->root, + mi->persistent_id, + (mi->mount_point)?mi->mount_point:"", + (mi->mount_options)?mi->mount_options:"", + (mi->filesystem)?mi->filesystem:"", + (mi->mount_source)?mi->mount_source:"", + (mi->super_options)?mi->super_options:"", + (mi->flags & MOUNTINFO_IS_DUMMY)?" DUMMY":"", + (mi->flags & MOUNTINFO_IS_BIND)?" BIND":"", + (mi->flags & MOUNTINFO_IS_REMOTE)?" REMOTE":"", + (mi->flags & MOUNTINFO_NO_STAT)?" NOSTAT":"", + (mi->flags & MOUNTINFO_NO_SIZE)?" NOSIZE":"", + (mi->flags & MOUNTINFO_IS_SAME_DEV)?" SAMEDEV":"" + ); +#endif +*/ + } + +/* find if the mount options have "bind" in them + { + FILE *fp = setmntent(MOUNTED, "r"); + if (fp != NULL) { + struct mntent mntbuf; + struct mntent *mnt; + char buf[4096 + 1]; + + while ((mnt = getmntent_r(fp, &mntbuf, buf, 4096))) { + char *bind = hasmntopt(mnt, "bind"); + if(unlikely(bind)) { + struct mountinfo *mi; + for(mi = root; mi ; mi = mi->next) { + if(unlikely(strcmp(mnt->mnt_dir, mi->mount_point) == 0)) { + fprintf(stderr, "Mount point '%s' is BIND\n", mi->mount_point); + mi->flags |= MOUNTINFO_IS_BIND; + break; + } + } + +#ifdef NETDATA_INTERNAL_CHECKS + if(unlikely(!mi)) { + collector_error("Mount point '%s' not found in /proc/self/mountinfo", mnt->mnt_dir); + } +#endif + } + } + endmntent(fp); + } + } +*/ + + dictionary_destroy(dict); + procfile_close(ff); + return root; +} diff --git a/src/collectors/proc.plugin/proc_self_mountinfo.h b/src/collectors/proc.plugin/proc_self_mountinfo.h new file mode 100644 index 000000000..4bd24d2d2 --- /dev/null +++ b/src/collectors/proc.plugin/proc_self_mountinfo.h @@ -0,0 +1,61 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#ifndef NETDATA_PROC_SELF_MOUNTINFO_H +#define NETDATA_PROC_SELF_MOUNTINFO_H 1 + +#define MOUNTINFO_IS_DUMMY 0x00000001 +#define MOUNTINFO_IS_REMOTE 0x00000002 +#define MOUNTINFO_IS_BIND 0x00000004 +#define MOUNTINFO_IS_SAME_DEV 0x00000008 +#define MOUNTINFO_NO_STAT 
0x00000010 +#define MOUNTINFO_NO_SIZE 0x00000020 +#define MOUNTINFO_READONLY 0x00000040 +#define MOUNTINFO_IS_IN_SYSD_PROTECTED_LIST 0x00000080 + +struct mountinfo { + long id; // mount ID: unique identifier of the mount (may be reused after umount(2)). + long parentid; // parent ID: ID of parent mount (or of self for the top of the mount tree). + unsigned long major; // major:minor: value of st_dev for files on filesystem (see stat(2)). + unsigned long minor; + + char *persistent_id; // a calculated persistent id for the mount point + uint32_t persistent_id_hash; + + char *root; // root: root of the mount within the filesystem. + uint32_t root_hash; + + char *mount_point; // mount point: mount point relative to the process's root. + uint32_t mount_point_hash; + + char *mount_options; // mount options: per-mount options. + + int optional_fields_count; +/* + char ***optional_fields; // optional fields: zero or more fields of the form "tag[:value]". +*/ + char *filesystem; // filesystem type: name of filesystem in the form "type[.subtype]". + uint32_t filesystem_hash; + + char *mount_source; // mount source: filesystem-specific information or "none". + uint32_t mount_source_hash; + + char *mount_source_name; + uint32_t mount_source_name_hash; + + char *super_options; // super options: per-superblock options. 
+ + uint32_t flags; + + dev_t st_dev; // id of device as given by stat() + + struct mountinfo *next; +}; + +struct mountinfo *mountinfo_find(struct mountinfo *root, unsigned long major, unsigned long minor, char *device); +struct mountinfo *mountinfo_find_by_filesystem_mount_source(struct mountinfo *root, const char *filesystem, const char *mount_source); +struct mountinfo *mountinfo_find_by_filesystem_super_option(struct mountinfo *root, const char *filesystem, const char *super_options); + +void mountinfo_free_all(struct mountinfo *mi); +struct mountinfo *mountinfo_read(int do_statvfs); + +#endif /* NETDATA_PROC_SELF_MOUNTINFO_H */ diff --git a/src/collectors/proc.plugin/proc_softirqs.c b/src/collectors/proc.plugin/proc_softirqs.c new file mode 100644 index 000000000..7968a2287 --- /dev/null +++ b/src/collectors/proc.plugin/proc_softirqs.c @@ -0,0 +1,243 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +#define PLUGIN_PROC_MODULE_SOFTIRQS_NAME "/proc/softirqs" + +#define MAX_INTERRUPT_NAME 50 + +struct cpu_interrupt { + unsigned long long value; + RRDDIM *rd; +}; + +struct interrupt { + int used; + char *id; + char name[MAX_INTERRUPT_NAME + 1]; + RRDDIM *rd; + unsigned long long total; + struct cpu_interrupt cpu[]; +}; + +// since each interrupt is variable in size +// we use this to calculate its record size +#define recordsize(cpus) (sizeof(struct interrupt) + ((cpus) * sizeof(struct cpu_interrupt))) + +// given a base, get a pointer to each record +#define irrindex(base, line, cpus) ((struct interrupt *)&((char *)(base))[(line) * recordsize(cpus)]) + +static inline struct interrupt *get_interrupts_array(size_t lines, int cpus) { + static struct interrupt *irrs = NULL; + static size_t allocated = 0; + + if(unlikely(lines != allocated)) { + uint32_t l; + int c; + + irrs = (struct interrupt *)reallocz(irrs, lines * recordsize(cpus)); + + // reset all interrupt RRDDIM pointers as any line could have shifted + for(l = 0; l < lines 
;l++) { + struct interrupt *irr = irrindex(irrs, l, cpus); + irr->rd = NULL; + irr->name[0] = '\0'; + for(c = 0; c < cpus ;c++) + irr->cpu[c].rd = NULL; + } + + allocated = lines; + } + + return irrs; +} + +int do_proc_softirqs(int update_every, usec_t dt) { + (void)dt; + static procfile *ff = NULL; + static int cpus = -1, do_per_core = CONFIG_BOOLEAN_INVALID; + struct interrupt *irrs = NULL; + + if(unlikely(do_per_core == CONFIG_BOOLEAN_INVALID)) + do_per_core = config_get_boolean_ondemand("plugin:proc:/proc/softirqs", "interrupts per core", CONFIG_BOOLEAN_NO); + + if(unlikely(!ff)) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/softirqs"); + ff = procfile_open(config_get("plugin:proc:/proc/softirqs", "filename to monitor", filename), " \t:", PROCFILE_FLAG_DEFAULT); + if(unlikely(!ff)) return 1; + } + + ff = procfile_readall(ff); + if(unlikely(!ff)) return 0; // we return 0, so that we will retry to open it next time + + size_t lines = procfile_lines(ff), l; + size_t words = procfile_linewords(ff, 0); + + if(unlikely(!lines)) { + collector_error("Cannot read /proc/softirqs, zero lines reported."); + return 1; + } + + // find how many CPUs are there + if(unlikely(cpus == -1)) { + uint32_t w; + cpus = 0; + for(w = 0; w < words ; w++) { + if(likely(strncmp(procfile_lineword(ff, 0, w), "CPU", 3) == 0)) + cpus++; + } + } + + if(unlikely(!cpus)) { + collector_error("PLUGIN: PROC_SOFTIRQS: Cannot find the number of CPUs in /proc/softirqs"); + return 1; + } + + // allocate the size we need; + irrs = get_interrupts_array(lines, cpus); + irrs[0].used = 0; + + // loop through all lines + for(l = 1; l < lines ;l++) { + struct interrupt *irr = irrindex(irrs, l, cpus); + irr->used = 0; + irr->total = 0; + + words = procfile_linewords(ff, l); + if(unlikely(!words)) continue; + + irr->id = procfile_lineword(ff, l, 0); + if(unlikely(!irr->id || !irr->id[0])) continue; + + int c; + for(c = 0; c < cpus ;c++) { 
+ if(likely((c + 1) < (int)words)) + irr->cpu[c].value = str2ull(procfile_lineword(ff, l, (uint32_t) (c + 1)), NULL); + else + irr->cpu[c].value = 0; + + irr->total += irr->cpu[c].value; + } + + strncpyz(irr->name, irr->id, MAX_INTERRUPT_NAME); + + irr->used = 1; + } + + // -------------------------------------------------------------------- + + static RRDSET *st_system_softirqs = NULL; + if(unlikely(!st_system_softirqs)) { + st_system_softirqs = rrdset_create_localhost( + "system" + , "softirqs" + , NULL + , "softirqs" + , NULL + , "System softirqs" + , "softirqs/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_SOFTIRQS_NAME + , NETDATA_CHART_PRIO_SYSTEM_SOFTIRQS + , update_every + , RRDSET_TYPE_STACKED + ); + } + + for(l = 0; l < lines ;l++) { + struct interrupt *irr = irrindex(irrs, l, cpus); + + if(irr->used && irr->total) { + // some interrupt may have changed without changing the total number of lines + // if the same number of interrupts have been added and removed between two + // calls of this function. 
+ if(unlikely(!irr->rd || strncmp(irr->name, rrddim_name(irr->rd), MAX_INTERRUPT_NAME) != 0)) { + irr->rd = rrddim_add(st_system_softirqs, irr->id, irr->name, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rrddim_reset_name(st_system_softirqs, irr->rd, irr->name); + + // also reset per cpu RRDDIMs to avoid repeating strncmp() in the per core loop + if(likely(do_per_core != CONFIG_BOOLEAN_NO)) { + int c; + for(c = 0; c < cpus; c++) irr->cpu[c].rd = NULL; + } + } + + rrddim_set_by_pointer(st_system_softirqs, irr->rd, irr->total); + } + } + + rrdset_done(st_system_softirqs); + + // -------------------------------------------------------------------- + + if(do_per_core != CONFIG_BOOLEAN_NO) { + static RRDSET **core_st = NULL; + static int old_cpus = 0; + + if(old_cpus < cpus) { + core_st = reallocz(core_st, sizeof(RRDSET *) * cpus); + memset(&core_st[old_cpus], 0, sizeof(RRDSET *) * (cpus - old_cpus)); + old_cpus = cpus; + } + + int c; + + for(c = 0; c < cpus ; c++) { + if(unlikely(!core_st[c])) { + // find if everything is just zero + unsigned long long core_sum = 0; + + for (l = 0; l < lines; l++) { + struct interrupt *irr = irrindex(irrs, l, cpus); + if (unlikely(!irr->used)) continue; + core_sum += irr->cpu[c].value; + } + + if (unlikely(core_sum == 0)) continue; // try next core + + char id[50 + 1]; + snprintfz(id, sizeof(id) - 1, "cpu%d_softirqs", c); + + char title[100 + 1]; + snprintfz(title, sizeof(title) - 1, "CPU softirqs"); + + core_st[c] = rrdset_create_localhost( + "cpu" + , id + , NULL + , "softirqs" + , "cpu.softirqs" + , title + , "softirqs/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_SOFTIRQS_NAME + , NETDATA_CHART_PRIO_SOFTIRQS_PER_CORE + c + , update_every + , RRDSET_TYPE_STACKED + ); + + char core[50+1]; + snprintfz(core, sizeof(core) - 1, "cpu%d", c); + rrdlabels_add(core_st[c]->rrdlabels, "cpu", core, RRDLABEL_SRC_AUTO); + } + + for(l = 0; l < lines ;l++) { + struct interrupt *irr = irrindex(irrs, l, cpus); + + if(irr->used && (do_per_core == 
CONFIG_BOOLEAN_YES || irr->cpu[c].value)) { + if(unlikely(!irr->cpu[c].rd)) { + irr->cpu[c].rd = rrddim_add(core_st[c], irr->id, irr->name, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rrddim_reset_name(core_st[c], irr->cpu[c].rd, irr->name); + } + + rrddim_set_by_pointer(core_st[c], irr->cpu[c].rd, irr->cpu[c].value); + } + } + + rrdset_done(core_st[c]); + } + } + + return 0; +} diff --git a/src/collectors/proc.plugin/proc_spl_kstat_zfs.c b/src/collectors/proc.plugin/proc_spl_kstat_zfs.c new file mode 100644 index 000000000..e6b12c31f --- /dev/null +++ b/src/collectors/proc.plugin/proc_spl_kstat_zfs.c @@ -0,0 +1,435 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" +#include "zfs_common.h" + +#define ZFS_PROC_ARCSTATS "/proc/spl/kstat/zfs/arcstats" +#define ZFS_PROC_POOLS "/proc/spl/kstat/zfs" + +#define STATE_SIZE 20 +#define MAX_CHART_ID 256 + +extern struct arcstats arcstats; + +unsigned long long zfs_arcstats_shrinkable_cache_size_bytes = 0; + +int do_proc_spl_kstat_zfs_arcstats(int update_every, usec_t dt) { + (void)dt; + + static int show_zero_charts = 0, do_zfs_stats = 0; + static procfile *ff = NULL; + static char *dirname = NULL; + static ARL_BASE *arl_base = NULL; + + arcstats.l2exist = -1; + + if(unlikely(!arl_base)) { + arl_base = arl_create("arcstats", NULL, 60); + + arl_expect(arl_base, "hits", &arcstats.hits); + arl_expect(arl_base, "misses", &arcstats.misses); + arl_expect(arl_base, "demand_data_hits", &arcstats.demand_data_hits); + arl_expect(arl_base, "demand_data_misses", &arcstats.demand_data_misses); + arl_expect(arl_base, "demand_metadata_hits", &arcstats.demand_metadata_hits); + arl_expect(arl_base, "demand_metadata_misses", &arcstats.demand_metadata_misses); + arl_expect(arl_base, "prefetch_data_hits", &arcstats.prefetch_data_hits); + arl_expect(arl_base, "prefetch_data_misses", &arcstats.prefetch_data_misses); + arl_expect(arl_base, "prefetch_metadata_hits", &arcstats.prefetch_metadata_hits); + arl_expect(arl_base, 
"prefetch_metadata_misses", &arcstats.prefetch_metadata_misses); + arl_expect(arl_base, "mru_hits", &arcstats.mru_hits); + arl_expect(arl_base, "mru_ghost_hits", &arcstats.mru_ghost_hits); + arl_expect(arl_base, "mfu_hits", &arcstats.mfu_hits); + arl_expect(arl_base, "mfu_ghost_hits", &arcstats.mfu_ghost_hits); + arl_expect(arl_base, "deleted", &arcstats.deleted); + arl_expect(arl_base, "mutex_miss", &arcstats.mutex_miss); + arl_expect(arl_base, "evict_skip", &arcstats.evict_skip); + arl_expect(arl_base, "evict_not_enough", &arcstats.evict_not_enough); + arl_expect(arl_base, "evict_l2_cached", &arcstats.evict_l2_cached); + arl_expect(arl_base, "evict_l2_eligible", &arcstats.evict_l2_eligible); + arl_expect(arl_base, "evict_l2_ineligible", &arcstats.evict_l2_ineligible); + arl_expect(arl_base, "evict_l2_skip", &arcstats.evict_l2_skip); + arl_expect(arl_base, "hash_elements", &arcstats.hash_elements); + arl_expect(arl_base, "hash_elements_max", &arcstats.hash_elements_max); + arl_expect(arl_base, "hash_collisions", &arcstats.hash_collisions); + arl_expect(arl_base, "hash_chains", &arcstats.hash_chains); + arl_expect(arl_base, "hash_chain_max", &arcstats.hash_chain_max); + arl_expect(arl_base, "p", &arcstats.p); + arl_expect(arl_base, "c", &arcstats.c); + arl_expect(arl_base, "c_min", &arcstats.c_min); + arl_expect(arl_base, "c_max", &arcstats.c_max); + arl_expect(arl_base, "size", &arcstats.size); + arl_expect(arl_base, "hdr_size", &arcstats.hdr_size); + arl_expect(arl_base, "data_size", &arcstats.data_size); + arl_expect(arl_base, "metadata_size", &arcstats.metadata_size); + arl_expect(arl_base, "other_size", &arcstats.other_size); + arl_expect(arl_base, "anon_size", &arcstats.anon_size); + arl_expect(arl_base, "anon_evictable_data", &arcstats.anon_evictable_data); + arl_expect(arl_base, "anon_evictable_metadata", &arcstats.anon_evictable_metadata); + arl_expect(arl_base, "mru_size", &arcstats.mru_size); + arl_expect(arl_base, "mru_evictable_data", 
&arcstats.mru_evictable_data); + arl_expect(arl_base, "mru_evictable_metadata", &arcstats.mru_evictable_metadata); + arl_expect(arl_base, "mru_ghost_size", &arcstats.mru_ghost_size); + arl_expect(arl_base, "mru_ghost_evictable_data", &arcstats.mru_ghost_evictable_data); + arl_expect(arl_base, "mru_ghost_evictable_metadata", &arcstats.mru_ghost_evictable_metadata); + arl_expect(arl_base, "mfu_size", &arcstats.mfu_size); + arl_expect(arl_base, "mfu_evictable_data", &arcstats.mfu_evictable_data); + arl_expect(arl_base, "mfu_evictable_metadata", &arcstats.mfu_evictable_metadata); + arl_expect(arl_base, "mfu_ghost_size", &arcstats.mfu_ghost_size); + arl_expect(arl_base, "mfu_ghost_evictable_data", &arcstats.mfu_ghost_evictable_data); + arl_expect(arl_base, "mfu_ghost_evictable_metadata", &arcstats.mfu_ghost_evictable_metadata); + arl_expect(arl_base, "l2_hits", &arcstats.l2_hits); + arl_expect(arl_base, "l2_misses", &arcstats.l2_misses); + arl_expect(arl_base, "l2_feeds", &arcstats.l2_feeds); + arl_expect(arl_base, "l2_rw_clash", &arcstats.l2_rw_clash); + arl_expect(arl_base, "l2_read_bytes", &arcstats.l2_read_bytes); + arl_expect(arl_base, "l2_write_bytes", &arcstats.l2_write_bytes); + arl_expect(arl_base, "l2_writes_sent", &arcstats.l2_writes_sent); + arl_expect(arl_base, "l2_writes_done", &arcstats.l2_writes_done); + arl_expect(arl_base, "l2_writes_error", &arcstats.l2_writes_error); + arl_expect(arl_base, "l2_writes_lock_retry", &arcstats.l2_writes_lock_retry); + arl_expect(arl_base, "l2_evict_lock_retry", &arcstats.l2_evict_lock_retry); + arl_expect(arl_base, "l2_evict_reading", &arcstats.l2_evict_reading); + arl_expect(arl_base, "l2_evict_l1cached", &arcstats.l2_evict_l1cached); + arl_expect(arl_base, "l2_free_on_write", &arcstats.l2_free_on_write); + arl_expect(arl_base, "l2_cdata_free_on_write", &arcstats.l2_cdata_free_on_write); + arl_expect(arl_base, "l2_abort_lowmem", &arcstats.l2_abort_lowmem); + arl_expect(arl_base, "l2_cksum_bad", &arcstats.l2_cksum_bad); 
+ arl_expect(arl_base, "l2_io_error", &arcstats.l2_io_error); + arl_expect(arl_base, "l2_size", &arcstats.l2_size); + arl_expect(arl_base, "l2_asize", &arcstats.l2_asize); + arl_expect(arl_base, "l2_hdr_size", &arcstats.l2_hdr_size); + arl_expect(arl_base, "l2_compress_successes", &arcstats.l2_compress_successes); + arl_expect(arl_base, "l2_compress_zeros", &arcstats.l2_compress_zeros); + arl_expect(arl_base, "l2_compress_failures", &arcstats.l2_compress_failures); + arl_expect(arl_base, "memory_throttle_count", &arcstats.memory_throttle_count); + arl_expect(arl_base, "duplicate_buffers", &arcstats.duplicate_buffers); + arl_expect(arl_base, "duplicate_buffers_size", &arcstats.duplicate_buffers_size); + arl_expect(arl_base, "duplicate_reads", &arcstats.duplicate_reads); + arl_expect(arl_base, "memory_direct_count", &arcstats.memory_direct_count); + arl_expect(arl_base, "memory_indirect_count", &arcstats.memory_indirect_count); + arl_expect(arl_base, "arc_no_grow", &arcstats.arc_no_grow); + arl_expect(arl_base, "arc_tempreserve", &arcstats.arc_tempreserve); + arl_expect(arl_base, "arc_loaned_bytes", &arcstats.arc_loaned_bytes); + arl_expect(arl_base, "arc_prune", &arcstats.arc_prune); + arl_expect(arl_base, "arc_meta_used", &arcstats.arc_meta_used); + arl_expect(arl_base, "arc_meta_limit", &arcstats.arc_meta_limit); + arl_expect(arl_base, "arc_meta_max", &arcstats.arc_meta_max); + arl_expect(arl_base, "arc_meta_min", &arcstats.arc_meta_min); + arl_expect(arl_base, "arc_need_free", &arcstats.arc_need_free); + arl_expect(arl_base, "arc_sys_free", &arcstats.arc_sys_free); + } + + if(unlikely(!ff)) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, ZFS_PROC_ARCSTATS); + ff = procfile_open(config_get("plugin:proc:" ZFS_PROC_ARCSTATS, "filename to monitor", filename), " \t:", PROCFILE_FLAG_DEFAULT); + if(unlikely(!ff)) + return 1; + + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, 
"/proc/spl/kstat/zfs"); + dirname = config_get("plugin:proc:" ZFS_PROC_ARCSTATS, "directory to monitor", filename); + + show_zero_charts = config_get_boolean_ondemand("plugin:proc:" ZFS_PROC_ARCSTATS, "show zero charts", CONFIG_BOOLEAN_NO); + if(show_zero_charts == CONFIG_BOOLEAN_AUTO && netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES) + show_zero_charts = CONFIG_BOOLEAN_YES; + if(unlikely(show_zero_charts == CONFIG_BOOLEAN_YES)) + do_zfs_stats = 1; + } + + // check if any pools exist + if(likely(!do_zfs_stats)) { + DIR *dir = opendir(dirname); + if(unlikely(!dir)) { + collector_error("Cannot read directory '%s'", dirname); + return 1; + } + + struct dirent *de = NULL; + while(likely(de = readdir(dir))) { + if(likely(de->d_type == DT_DIR + && ( + (de->d_name[0] == '.' && de->d_name[1] == '\0') + || (de->d_name[0] == '.' && de->d_name[1] == '.' && de->d_name[2] == '\0') + ))) + continue; + + if(unlikely(de->d_type == DT_LNK || de->d_type == DT_DIR)) { + do_zfs_stats = 1; + break; + } + } + + closedir(dir); + } + + // do not show ZFS filesystem metrics if there haven't been any pools in the system yet + if(unlikely(!do_zfs_stats)) + return 0; + + ff = procfile_readall(ff); + if(unlikely(!ff)) + return 0; // we return 0, so that we will retry to open it next time + + size_t lines = procfile_lines(ff), l; + + arl_begin(arl_base); + + for(l = 0; l < lines ;l++) { + size_t words = procfile_linewords(ff, l); + if(unlikely(words < 3)) { + if(unlikely(words)) collector_error("Cannot read " ZFS_PROC_ARCSTATS " line %zu. 
Expected 3 params, read %zu.", l, words); + continue; + } + + const char *key = procfile_lineword(ff, l, 0); + const char *value = procfile_lineword(ff, l, 2); + + if(unlikely(arcstats.l2exist == -1)) { + if(key[0] == 'l' && key[1] == '2' && key[2] == '_') + arcstats.l2exist = 1; + } + + if(unlikely(arl_check(arl_base, key, value))) break; + } + + if (arcstats.size > arcstats.c_min) { + zfs_arcstats_shrinkable_cache_size_bytes = arcstats.size - arcstats.c_min; + } else { + zfs_arcstats_shrinkable_cache_size_bytes = 0; + } + + if(unlikely(arcstats.l2exist == -1)) + arcstats.l2exist = 0; + + generate_charts_arcstats(PLUGIN_PROC_NAME, ZFS_PROC_ARCSTATS, show_zero_charts, update_every); + generate_charts_arc_summary(PLUGIN_PROC_NAME, ZFS_PROC_ARCSTATS, show_zero_charts, update_every); + + return 0; +} + +struct zfs_pool { + RRDSET *st; + + RRDDIM *rd_online; + RRDDIM *rd_degraded; + RRDDIM *rd_faulted; + RRDDIM *rd_offline; + RRDDIM *rd_removed; + RRDDIM *rd_unavail; + RRDDIM *rd_suspended; + + int updated; + int disabled; + + int online; + int degraded; + int faulted; + int offline; + int removed; + int unavail; + int suspended; +}; + +struct deleted_zfs_pool { + char *name; + struct deleted_zfs_pool *next; +} *deleted_zfs_pools = NULL; + +DICTIONARY *zfs_pools = NULL; + +void disable_zfs_pool_state(struct zfs_pool *pool) +{ + if (pool->st) + rrdset_is_obsolete___safe_from_collector_thread(pool->st); + + pool->st = NULL; + + pool->rd_online = NULL; + pool->rd_degraded = NULL; + pool->rd_faulted = NULL; + pool->rd_offline = NULL; + pool->rd_removed = NULL; + pool->rd_unavail = NULL; + pool->rd_suspended = NULL; + + pool->disabled = 1; +} + +int update_zfs_pool_state_chart(const DICTIONARY_ITEM *item, void *pool_p, void *update_every_p) { + const char *name = dictionary_acquired_item_name(item); + struct zfs_pool *pool = (struct zfs_pool *)pool_p; + int update_every = *(int *)update_every_p; + + if (pool->updated) { + pool->updated = 0; + + if (!pool->disabled) { + if 
(unlikely(!pool->st)) { + char chart_id[MAX_CHART_ID + 1]; + snprintf(chart_id, MAX_CHART_ID, "state_%s", name); + + pool->st = rrdset_create_localhost( + "zfspool", + chart_id, + NULL, + "state", + "zfspool.state", + "ZFS pool state", + "boolean", + PLUGIN_PROC_NAME, + ZFS_PROC_POOLS, + NETDATA_CHART_PRIO_ZFS_POOL_STATE, + update_every, + RRDSET_TYPE_LINE); + + pool->rd_online = rrddim_add(pool->st, "online", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + pool->rd_degraded = rrddim_add(pool->st, "degraded", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + pool->rd_faulted = rrddim_add(pool->st, "faulted", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + pool->rd_offline = rrddim_add(pool->st, "offline", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + pool->rd_removed = rrddim_add(pool->st, "removed", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + pool->rd_unavail = rrddim_add(pool->st, "unavail", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + pool->rd_suspended = rrddim_add(pool->st, "suspended", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + rrdlabels_add(pool->st->rrdlabels, "pool", name, RRDLABEL_SRC_AUTO); + } + + rrddim_set_by_pointer(pool->st, pool->rd_online, pool->online); + rrddim_set_by_pointer(pool->st, pool->rd_degraded, pool->degraded); + rrddim_set_by_pointer(pool->st, pool->rd_faulted, pool->faulted); + rrddim_set_by_pointer(pool->st, pool->rd_offline, pool->offline); + rrddim_set_by_pointer(pool->st, pool->rd_removed, pool->removed); + rrddim_set_by_pointer(pool->st, pool->rd_unavail, pool->unavail); + rrddim_set_by_pointer(pool->st, pool->rd_suspended, pool->suspended); + rrdset_done(pool->st); + } + } else { + disable_zfs_pool_state(pool); + struct deleted_zfs_pool *new = callocz(1, sizeof(struct deleted_zfs_pool)); + new->name = strdupz(name); + new->next = deleted_zfs_pools; + deleted_zfs_pools = new; + } + + return 0; +} + +int do_proc_spl_kstat_zfs_pool_state(int update_every, usec_t dt) +{ + (void)dt; + + static int do_zfs_pool_state = -1; + static char *dirname = NULL; + + int pool_found = 0, 
state_file_found = 0; + + if (unlikely(do_zfs_pool_state == -1)) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/spl/kstat/zfs"); + dirname = config_get("plugin:proc:" ZFS_PROC_POOLS, "directory to monitor", filename); + + zfs_pools = dictionary_create_advanced(DICT_OPTION_SINGLE_THREADED, &dictionary_stats_category_collectors, 0); + + do_zfs_pool_state = 1; + } + + if (likely(do_zfs_pool_state)) { + DIR *dir = opendir(dirname); + if (unlikely(!dir)) { + if (errno == ENOENT) + collector_info("Cannot read directory '%s'", dirname); + else + collector_error("Cannot read directory '%s'", dirname); + return 1; + } + + struct dirent *de = NULL; + while (likely(de = readdir(dir))) { + if (likely( + de->d_type == DT_DIR && ((de->d_name[0] == '.' && de->d_name[1] == '\0') || + (de->d_name[0] == '.' && de->d_name[1] == '.' && de->d_name[2] == '\0')))) + continue; + + if (unlikely(de->d_type == DT_LNK || de->d_type == DT_DIR)) { + pool_found = 1; + + struct zfs_pool *pool = dictionary_get(zfs_pools, de->d_name); + + if (unlikely(!pool)) { + struct zfs_pool new_zfs_pool = {}; + pool = dictionary_set(zfs_pools, de->d_name, &new_zfs_pool, sizeof(struct zfs_pool)); + } + + pool->updated = 1; + + if (pool->disabled) { + state_file_found = 1; + continue; + } + + pool->online = 0; + pool->degraded = 0; + pool->faulted = 0; + pool->offline = 0; + pool->removed = 0; + pool->unavail = 0; + pool->suspended = 0; + + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s/%s/state", dirname, de->d_name); + + char state[STATE_SIZE + 1]; + int ret = read_txt_file(filename, state, sizeof(state)); + + if (!ret) { + state_file_found = 1; + + // ZFS pool states are described at https://openzfs.github.io/openzfs-docs/man/8/zpoolconcepts.8.html?#Device_Failure_and_Recovery + if (!strcmp(state, "ONLINE\n")) { + pool->online = 1; + } else if (!strcmp(state, "DEGRADED\n")) { + pool->degraded = 1; + } else if 
(!strcmp(state, "FAULTED\n")) { + pool->faulted = 1; + } else if (!strcmp(state, "OFFLINE\n")) { + pool->offline = 1; + } else if (!strcmp(state, "REMOVED\n")) { + pool->removed = 1; + } else if (!strcmp(state, "UNAVAIL\n")) { + pool->unavail = 1; + } else if (!strcmp(state, "SUSPENDED\n")) { + pool->suspended = 1; + } else { + disable_zfs_pool_state(pool); + + char *c = strchr(state, '\n'); + if (c) + *c = '\0'; + collector_error("ZFS POOLS: Undefined state %s for zpool %s, disabling the chart", state, de->d_name); + } + } + } + } + + closedir(dir); + } + + if (do_zfs_pool_state && pool_found && !state_file_found) { + collector_info("ZFS POOLS: State files not found. Disabling the module."); + do_zfs_pool_state = 0; + } + + if (do_zfs_pool_state) + dictionary_walkthrough_read(zfs_pools, update_zfs_pool_state_chart, &update_every); + + while (deleted_zfs_pools) { + struct deleted_zfs_pool *current_pool = deleted_zfs_pools; + dictionary_del(zfs_pools, current_pool->name); + + deleted_zfs_pools = deleted_zfs_pools->next; + + freez(current_pool->name); + freez(current_pool); + } + + return 0; +} diff --git a/src/collectors/proc.plugin/proc_stat.c b/src/collectors/proc.plugin/proc_stat.c new file mode 100644 index 000000000..481cb906a --- /dev/null +++ b/src/collectors/proc.plugin/proc_stat.c @@ -0,0 +1,1081 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +#define PLUGIN_PROC_MODULE_STAT_NAME "/proc/stat" + +struct per_core_single_number_file { + unsigned char found:1; + const char *filename; + int fd; + collected_number value; + RRDDIM *rd; +}; + +struct last_ticks { + collected_number frequency; + collected_number ticks; +}; + +// This is an extension of struct per_core_single_number_file at CPU_FREQ_INDEX. +// Either scaling_cur_freq or time_in_state file is used at one time. 
+struct per_core_time_in_state_file { + const char *filename; + procfile *ff; + size_t last_ticks_len; + struct last_ticks *last_ticks; +}; + +#define CORE_THROTTLE_COUNT_INDEX 0 +#define PACKAGE_THROTTLE_COUNT_INDEX 1 +#define CPU_FREQ_INDEX 2 +#define PER_CORE_FILES 3 + +struct cpu_chart { + const char *id; + + RRDSET *st; + RRDDIM *rd_user; + RRDDIM *rd_nice; + RRDDIM *rd_system; + RRDDIM *rd_idle; + RRDDIM *rd_iowait; + RRDDIM *rd_irq; + RRDDIM *rd_softirq; + RRDDIM *rd_steal; + RRDDIM *rd_guest; + RRDDIM *rd_guest_nice; + + bool per_core_files_found; + struct per_core_single_number_file files[PER_CORE_FILES]; + + struct per_core_time_in_state_file time_in_state_files; +}; + +static int keep_per_core_fds_open = CONFIG_BOOLEAN_YES; +static int keep_cpuidle_fds_open = CONFIG_BOOLEAN_YES; + +static int read_per_core_files(struct cpu_chart *all_cpu_charts, size_t len, size_t index) { + char buf[50 + 1]; + size_t x, files_read = 0, files_nonzero = 0; + + for(x = 0; x < len ; x++) { + struct per_core_single_number_file *f = &all_cpu_charts[x].files[index]; + + f->found = 0; + + if(unlikely(!f->filename)) + continue; + + if(unlikely(f->fd == -1)) { + f->fd = open(f->filename, O_RDONLY | O_CLOEXEC); + if (unlikely(f->fd == -1)) { + collector_error("Cannot open file '%s'", f->filename); + continue; + } + } + + ssize_t ret = read(f->fd, buf, 50); + if(unlikely(ret < 0)) { + // cannot read that file + + collector_error("Cannot read file '%s'", f->filename); + close(f->fd); + f->fd = -1; + continue; + } + else { + // successful read + + // terminate the buffer + buf[ret] = '\0'; + + if(unlikely(keep_per_core_fds_open != CONFIG_BOOLEAN_YES)) { + close(f->fd); + f->fd = -1; + } + else if(lseek(f->fd, 0, SEEK_SET) == -1) { + collector_error("Cannot seek in file '%s'", f->filename); + close(f->fd); + f->fd = -1; + } + } + + files_read++; + f->found = 1; + + f->value = str2ll(buf, NULL); + if(likely(f->value != 0)) + files_nonzero++; + } + + if(files_read == 0) + return -1; + + 
if(files_nonzero == 0) + return 0; + + return (int)files_nonzero; +} + +static int read_per_core_time_in_state_files(struct cpu_chart *all_cpu_charts, size_t len, size_t index) { + size_t x, files_read = 0, files_nonzero = 0; + + for(x = 0; x < len ; x++) { + struct per_core_single_number_file *f = &all_cpu_charts[x].files[index]; + struct per_core_time_in_state_file *tsf = &all_cpu_charts[x].time_in_state_files; + + f->found = 0; + + if(unlikely(!tsf->filename)) + continue; + + if(unlikely(!tsf->ff)) { + tsf->ff = procfile_open(tsf->filename, " \t:", PROCFILE_FLAG_DEFAULT); + if(unlikely(!tsf->ff)) + { + collector_error("Cannot open file '%s'", tsf->filename); + continue; + } + } + + tsf->ff = procfile_readall(tsf->ff); + if(unlikely(!tsf->ff)) { + collector_error("Cannot read file '%s'", tsf->filename); + procfile_close(tsf->ff); + tsf->ff = NULL; + continue; + } + else { + // successful read + + size_t lines = procfile_lines(tsf->ff), l; + size_t words; + unsigned long long total_ticks_since_last = 0, avg_freq = 0; + + // Check if there is at least one frequency in time_in_state + if (procfile_word(tsf->ff, 0)[0] == '\0') { + if(unlikely(keep_per_core_fds_open != CONFIG_BOOLEAN_YES)) { + procfile_close(tsf->ff); + tsf->ff = NULL; + } + // TODO: Is there a better way to avoid spikes than calculating the average over + // the whole period under schedutil governor? + // freez(tsf->last_ticks); + // tsf->last_ticks = NULL; + // tsf->last_ticks_len = 0; + continue; + } + + if (unlikely(tsf->last_ticks_len < lines || tsf->last_ticks == NULL)) { + tsf->last_ticks = reallocz(tsf->last_ticks, sizeof(struct last_ticks) * lines); + memset(tsf->last_ticks, 0, sizeof(struct last_ticks) * lines); + tsf->last_ticks_len = lines; + } + + f->value = 0; + + for(l = 0; l < lines - 1 ;l++) { + unsigned long long frequency = 0, ticks = 0, ticks_since_last = 0; + + words = procfile_linewords(tsf->ff, l); + if(unlikely(words < 2)) { + collector_error("Cannot read time_in_state line. 
Expected 2 params, read %zu.", words); + continue; + } + frequency = str2ull(procfile_lineword(tsf->ff, l, 0), NULL); + ticks = str2ull(procfile_lineword(tsf->ff, l, 1), NULL); + + // It is assumed that frequencies are static and sorted + ticks_since_last = ticks - tsf->last_ticks[l].ticks; + tsf->last_ticks[l].frequency = frequency; + tsf->last_ticks[l].ticks = ticks; + + total_ticks_since_last += ticks_since_last; + avg_freq += frequency * ticks_since_last; + + } + + if (likely(total_ticks_since_last)) { + avg_freq /= total_ticks_since_last; + f->value = avg_freq; + } + + if(unlikely(keep_per_core_fds_open != CONFIG_BOOLEAN_YES)) { + procfile_close(tsf->ff); + tsf->ff = NULL; + } + } + + files_read++; + + f->found = 1; + + if(likely(f->value != 0)) + files_nonzero++; + } + + if(unlikely(files_read == 0)) + return -1; + + if(unlikely(files_nonzero == 0)) + return 0; + + return (int)files_nonzero; +} + +static void chart_per_core_files(struct cpu_chart *all_cpu_charts, size_t len, size_t index, RRDSET *st, collected_number multiplier, collected_number divisor, RRD_ALGORITHM algorithm) { + size_t x; + for(x = 0; x < len ; x++) { + struct per_core_single_number_file *f = &all_cpu_charts[x].files[index]; + + if(unlikely(!f->found)) + continue; + + if(unlikely(!f->rd)) + f->rd = rrddim_add(st, all_cpu_charts[x].id, NULL, multiplier, divisor, algorithm); + + rrddim_set_by_pointer(st, f->rd, f->value); + } +} + +struct cpuidle_state { + char *name; + + char *time_filename; + int time_fd; + + collected_number value; + + RRDDIM *rd; +}; + +struct per_core_cpuidle_chart { + RRDSET *st; + + RRDDIM *active_time_rd; + collected_number active_time; + collected_number last_active_time; + + struct cpuidle_state *cpuidle_state; + size_t cpuidle_state_len; + int rescan_cpu_states; +}; + +static void* wake_cpu_thread(void* core) { + pthread_t thread; + cpu_set_t cpu_set; + static size_t cpu_wakeups = 0; + static int errors = 0; + + CPU_ZERO(&cpu_set); + CPU_SET(*(int*)core, 
&cpu_set); + + thread = pthread_self(); + if(unlikely(pthread_setaffinity_np(thread, sizeof(cpu_set_t), &cpu_set))) { + if(unlikely(errors < 8)) { + collector_error("Cannot set CPU affinity for core %d", *(int*)core); + errors++; + } + else if(unlikely(errors < 9)) { + collector_error("CPU affinity errors are disabled"); + errors++; + } + } + + // Make the CPU core do something to force it to update its idle counters + cpu_wakeups++; + + return 0; +} + +static int read_schedstat(char *schedstat_filename, struct per_core_cpuidle_chart **cpuidle_charts_address, size_t *schedstat_cores_found) { + static size_t cpuidle_charts_len = 0; + static procfile *ff = NULL; + struct per_core_cpuidle_chart *cpuidle_charts = *cpuidle_charts_address; + size_t cores_found = 0; + + if(unlikely(!ff)) { + ff = procfile_open(schedstat_filename, " \t:", PROCFILE_FLAG_DEFAULT); + if(unlikely(!ff)) return 1; + } + + ff = procfile_readall(ff); + if(unlikely(!ff)) return 1; + + size_t lines = procfile_lines(ff), l; + size_t words; + + for(l = 0; l < lines ;l++) { + char *row_key = procfile_lineword(ff, l, 0); + + // faster strncmp(row_key, "cpu", 3) == 0 + if(likely(row_key[0] == 'c' && row_key[1] == 'p' && row_key[2] == 'u')) { + words = procfile_linewords(ff, l); + if(unlikely(words < 10)) { + collector_error("Cannot read /proc/schedstat cpu line. 
Expected 9 params, read %zu.", words); + return 1; + } + cores_found++; + + size_t core = str2ul(&row_key[3]); + if(unlikely(core >= cores_found)) { + collector_error("Core %zu found but no more than %zu cores were expected.", core, cores_found); + return 1; + } + + if(unlikely(cpuidle_charts_len < cores_found)) { + cpuidle_charts = reallocz(cpuidle_charts, sizeof(struct per_core_cpuidle_chart) * cores_found); + *cpuidle_charts_address = cpuidle_charts; + memset(cpuidle_charts + cpuidle_charts_len, 0, sizeof(struct per_core_cpuidle_chart) * (cores_found - cpuidle_charts_len)); + cpuidle_charts_len = cores_found; + } + + cpuidle_charts[core].active_time = str2ull(procfile_lineword(ff, l, 7), NULL) / 1000; + } + } + + *schedstat_cores_found = cores_found; + return 0; +} + +static int read_one_state(char *buf, const char *filename, int *fd) { + ssize_t ret = read(*fd, buf, 50); + + if(unlikely(ret <= 0)) { + // cannot read that file + collector_error("Cannot read file '%s'", filename); + close(*fd); + *fd = -1; + return 0; + } + else { + // successful read + + // terminate the buffer + buf[ret - 1] = '\0'; + + if(unlikely(keep_cpuidle_fds_open != CONFIG_BOOLEAN_YES)) { + close(*fd); + *fd = -1; + } + else if(lseek(*fd, 0, SEEK_SET) == -1) { + collector_error("Cannot seek in file '%s'", filename); + close(*fd); + *fd = -1; + } + } + + return 1; +} + +static int read_cpuidle_states(char *cpuidle_name_filename , char *cpuidle_time_filename, struct per_core_cpuidle_chart *cpuidle_charts, size_t core) { + char filename[FILENAME_MAX + 1]; + static char next_state_filename[FILENAME_MAX + 1]; + struct stat stbuf; + struct per_core_cpuidle_chart *cc = &cpuidle_charts[core]; + size_t state; + + if(unlikely(!cc->cpuidle_state_len || cc->rescan_cpu_states)) { + int state_file_found = 1; // check at least one state + + if(cc->cpuidle_state_len) { + for(state = 0; state < cc->cpuidle_state_len; state++) { + freez(cc->cpuidle_state[state].name); + + 
freez(cc->cpuidle_state[state].time_filename); + close(cc->cpuidle_state[state].time_fd); + cc->cpuidle_state[state].time_fd = -1; + } + + freez(cc->cpuidle_state); + cc->cpuidle_state = NULL; + cc->cpuidle_state_len = 0; + + cc->active_time_rd = NULL; + cc->st = NULL; + } + + while(likely(state_file_found)) { + snprintfz(filename, FILENAME_MAX, cpuidle_name_filename, core, cc->cpuidle_state_len); + if (stat(filename, &stbuf) == 0) + cc->cpuidle_state_len++; + else + state_file_found = 0; + } + snprintfz(next_state_filename, FILENAME_MAX, cpuidle_name_filename, core, cc->cpuidle_state_len); + + if(likely(cc->cpuidle_state_len)) + cc->cpuidle_state = callocz(cc->cpuidle_state_len, sizeof(struct cpuidle_state)); + + for(state = 0; state < cc->cpuidle_state_len; state++) { + char name_buf[50 + 1]; + snprintfz(filename, FILENAME_MAX, cpuidle_name_filename, core, state); + + int fd = open(filename, O_RDONLY | O_CLOEXEC, 0666); + if(unlikely(fd == -1)) { + collector_error("Cannot open file '%s'", filename); + cc->rescan_cpu_states = 1; + return 1; + } + + ssize_t r = read(fd, name_buf, 50); + if(unlikely(r < 1)) { + collector_error("Cannot read file '%s'", filename); + close(fd); + cc->rescan_cpu_states = 1; + return 1; + } + + name_buf[r - 1] = '\0'; // erase extra character + cc->cpuidle_state[state].name = strdupz(trim(name_buf)); + close(fd); + + snprintfz(filename, FILENAME_MAX, cpuidle_time_filename, core, state); + cc->cpuidle_state[state].time_filename = strdupz(filename); + cc->cpuidle_state[state].time_fd = -1; + } + + cc->rescan_cpu_states = 0; + } + + for(state = 0; state < cc->cpuidle_state_len; state++) { + + struct cpuidle_state *cs = &cc->cpuidle_state[state]; + + if(unlikely(cs->time_fd == -1)) { + cs->time_fd = open(cs->time_filename, O_RDONLY | O_CLOEXEC); + if (unlikely(cs->time_fd == -1)) { + collector_error("Cannot open file '%s'", cs->time_filename); + cc->rescan_cpu_states = 1; + return 1; + } + } + + char time_buf[50 + 1]; + 
if(likely(read_one_state(time_buf, cs->time_filename, &cs->time_fd))) { + cs->value = str2ll(time_buf, NULL); + } + else { + cc->rescan_cpu_states = 1; + return 1; + } + } + + // check if the number of states was increased + if(unlikely(stat(next_state_filename, &stbuf) == 0)) { + cc->rescan_cpu_states = 1; + return 1; + } + + return 0; +} + +int do_proc_stat(int update_every, usec_t dt) { + (void)dt; + + static struct cpu_chart *all_cpu_charts = NULL; + static size_t all_cpu_charts_size = 0; + static procfile *ff = NULL; + static int do_cpu = -1, do_cpu_cores = -1, do_interrupts = -1, do_context = -1, do_forks = -1, do_processes = -1, + do_core_throttle_count = -1, do_package_throttle_count = -1, do_cpu_freq = -1, do_cpuidle = -1; + static uint32_t hash_intr, hash_ctxt, hash_processes, hash_procs_running, hash_procs_blocked; + static char *core_throttle_count_filename = NULL, *package_throttle_count_filename = NULL, *scaling_cur_freq_filename = NULL, + *time_in_state_filename = NULL, *schedstat_filename = NULL, *cpuidle_name_filename = NULL, *cpuidle_time_filename = NULL; + static const RRDVAR_ACQUIRED *cpus_var = NULL; + static int accurate_freq_avail = 0, accurate_freq_is_used = 0; + size_t cores_found = (size_t)get_system_cpus(); + + if(unlikely(do_cpu == -1)) { + do_cpu = config_get_boolean("plugin:proc:/proc/stat", "cpu utilization", CONFIG_BOOLEAN_YES); + do_cpu_cores = config_get_boolean("plugin:proc:/proc/stat", "per cpu core utilization", CONFIG_BOOLEAN_NO); + do_interrupts = config_get_boolean("plugin:proc:/proc/stat", "cpu interrupts", CONFIG_BOOLEAN_YES); + do_context = config_get_boolean("plugin:proc:/proc/stat", "context switches", CONFIG_BOOLEAN_YES); + do_forks = config_get_boolean("plugin:proc:/proc/stat", "processes started", CONFIG_BOOLEAN_YES); + do_processes = config_get_boolean("plugin:proc:/proc/stat", "processes running", CONFIG_BOOLEAN_YES); + + // give sane defaults based on the number of processors + if(unlikely(get_system_cpus() > 128)) 
{ + // the system has too many processors + keep_per_core_fds_open = CONFIG_BOOLEAN_NO; + do_core_throttle_count = CONFIG_BOOLEAN_NO; + do_package_throttle_count = CONFIG_BOOLEAN_NO; + do_cpu_freq = CONFIG_BOOLEAN_NO; + do_cpuidle = CONFIG_BOOLEAN_NO; + } + else { + // the system has a reasonable number of processors + keep_per_core_fds_open = CONFIG_BOOLEAN_YES; + do_core_throttle_count = CONFIG_BOOLEAN_AUTO; + do_package_throttle_count = CONFIG_BOOLEAN_NO; + do_cpu_freq = CONFIG_BOOLEAN_YES; + do_cpuidle = CONFIG_BOOLEAN_NO; + } + if(unlikely(get_system_cpus() > 24)) { + // the system has too many processors + keep_cpuidle_fds_open = CONFIG_BOOLEAN_NO; + } + else { + // the system has a reasonable number of processors + keep_cpuidle_fds_open = CONFIG_BOOLEAN_YES; + } + + keep_per_core_fds_open = config_get_boolean("plugin:proc:/proc/stat", "keep per core files open", keep_per_core_fds_open); + keep_cpuidle_fds_open = config_get_boolean("plugin:proc:/proc/stat", "keep cpuidle files open", keep_cpuidle_fds_open); + do_core_throttle_count = config_get_boolean_ondemand("plugin:proc:/proc/stat", "core_throttle_count", do_core_throttle_count); + do_package_throttle_count = config_get_boolean_ondemand("plugin:proc:/proc/stat", "package_throttle_count", do_package_throttle_count); + do_cpu_freq = config_get_boolean_ondemand("plugin:proc:/proc/stat", "cpu frequency", do_cpu_freq); + do_cpuidle = config_get_boolean_ondemand("plugin:proc:/proc/stat", "cpu idle states", do_cpuidle); + + hash_intr = simple_hash("intr"); + hash_ctxt = simple_hash("ctxt"); + hash_processes = simple_hash("processes"); + hash_procs_running = simple_hash("procs_running"); + hash_procs_blocked = simple_hash("procs_blocked"); + + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/devices/system/cpu/%s/thermal_throttle/core_throttle_count"); + core_throttle_count_filename = config_get("plugin:proc:/proc/stat", "core_throttle_count 
filename to monitor", filename); + + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/devices/system/cpu/%s/thermal_throttle/package_throttle_count"); + package_throttle_count_filename = config_get("plugin:proc:/proc/stat", "package_throttle_count filename to monitor", filename); + + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/devices/system/cpu/%s/cpufreq/scaling_cur_freq"); + scaling_cur_freq_filename = config_get("plugin:proc:/proc/stat", "scaling_cur_freq filename to monitor", filename); + + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/devices/system/cpu/%s/cpufreq/stats/time_in_state"); + time_in_state_filename = config_get("plugin:proc:/proc/stat", "time_in_state filename to monitor", filename); + + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/schedstat"); + schedstat_filename = config_get("plugin:proc:/proc/stat", "schedstat filename to monitor", filename); + + if(do_cpuidle != CONFIG_BOOLEAN_NO) { + struct stat stbuf; + + if (stat(schedstat_filename, &stbuf)) + do_cpuidle = CONFIG_BOOLEAN_NO; + } + + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/devices/system/cpu/cpu%zu/cpuidle/state%zu/name"); + cpuidle_name_filename = config_get("plugin:proc:/proc/stat", "cpuidle name filename to monitor", filename); + + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/devices/system/cpu/cpu%zu/cpuidle/state%zu/time"); + cpuidle_time_filename = config_get("plugin:proc:/proc/stat", "cpuidle time filename to monitor", filename); + } + + if(unlikely(!ff)) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/stat"); + ff = procfile_open(config_get("plugin:proc:/proc/stat", "filename to monitor", filename), " \t:", PROCFILE_FLAG_DEFAULT); + if(unlikely(!ff)) return 1; + } + + ff = procfile_readall(ff); + 
if(unlikely(!ff)) return 0; // we return 0, so that we will retry to open it next time + + size_t lines = procfile_lines(ff), l; + size_t words; + + unsigned long long processes = 0, running = 0 , blocked = 0; + + for(l = 0; l < lines ;l++) { + char *row_key = procfile_lineword(ff, l, 0); + uint32_t hash = simple_hash(row_key); + + // faster strncmp(row_key, "cpu", 3) == 0 + if(likely(row_key[0] == 'c' && row_key[1] == 'p' && row_key[2] == 'u')) { + words = procfile_linewords(ff, l); + if(unlikely(words < 9)) { + collector_error("Cannot read /proc/stat cpu line. Expected 9 params, read %zu.", words); + continue; + } + + size_t core = (row_key[3] == '\0') ? 0 : str2ul(&row_key[3]) + 1; + if (likely(core > 0)) + cores_found = core; + + bool do_any_core_metric = do_cpu_cores || do_core_throttle_count || do_cpu_freq || do_cpuidle; + + if (likely((core == 0 && do_cpu) || (core > 0 && do_any_core_metric))) { + if (unlikely(core >= all_cpu_charts_size)) { + size_t old_cpu_charts_size = all_cpu_charts_size; + all_cpu_charts_size = core + 1; + all_cpu_charts = reallocz(all_cpu_charts, sizeof(struct cpu_chart) * all_cpu_charts_size); + memset(&all_cpu_charts[old_cpu_charts_size], 0, sizeof(struct cpu_chart) * (all_cpu_charts_size - old_cpu_charts_size)); + } + + struct cpu_chart *cpu_chart = &all_cpu_charts[core]; + + if (unlikely(!cpu_chart->id)) + cpu_chart->id = strdupz(row_key); + + if (core > 0 && !cpu_chart->per_core_files_found) { + cpu_chart->per_core_files_found = true; + + char filename[FILENAME_MAX + 1]; + struct stat stbuf; + + if (do_core_throttle_count != CONFIG_BOOLEAN_NO) { + snprintfz(filename, FILENAME_MAX, core_throttle_count_filename, cpu_chart->id); + if (stat(filename, &stbuf) == 0) { + cpu_chart->files[CORE_THROTTLE_COUNT_INDEX].filename = strdupz(filename); + cpu_chart->files[CORE_THROTTLE_COUNT_INDEX].fd = -1; + do_core_throttle_count = CONFIG_BOOLEAN_YES; + } + } + + if (do_package_throttle_count != CONFIG_BOOLEAN_NO) { + snprintfz(filename, 
FILENAME_MAX, package_throttle_count_filename, cpu_chart->id); + if (stat(filename, &stbuf) == 0) { + cpu_chart->files[PACKAGE_THROTTLE_COUNT_INDEX].filename = strdupz(filename); + cpu_chart->files[PACKAGE_THROTTLE_COUNT_INDEX].fd = -1; + do_package_throttle_count = CONFIG_BOOLEAN_YES; + } + } + + if (do_cpu_freq != CONFIG_BOOLEAN_NO) { + snprintfz(filename, FILENAME_MAX, scaling_cur_freq_filename, cpu_chart->id); + if (stat(filename, &stbuf) == 0) { + cpu_chart->files[CPU_FREQ_INDEX].filename = strdupz(filename); + cpu_chart->files[CPU_FREQ_INDEX].fd = -1; + do_cpu_freq = CONFIG_BOOLEAN_YES; + } + + snprintfz(filename, FILENAME_MAX, time_in_state_filename, cpu_chart->id); + if (stat(filename, &stbuf) == 0) { + cpu_chart->time_in_state_files.filename = strdupz(filename); + cpu_chart->time_in_state_files.ff = NULL; + do_cpu_freq = CONFIG_BOOLEAN_YES; + accurate_freq_avail = 1; + } + } + } + } + + if(likely((core == 0 && do_cpu) || (core > 0 && do_cpu_cores))) { + unsigned long long user = 0, nice = 0, system = 0, idle = 0, iowait = 0, irq = 0, softirq = 0, steal = 0, guest = 0, guest_nice = 0; + + user = str2ull(procfile_lineword(ff, l, 1), NULL); + nice = str2ull(procfile_lineword(ff, l, 2), NULL); + system = str2ull(procfile_lineword(ff, l, 3), NULL); + idle = str2ull(procfile_lineword(ff, l, 4), NULL); + iowait = str2ull(procfile_lineword(ff, l, 5), NULL); + irq = str2ull(procfile_lineword(ff, l, 6), NULL); + softirq = str2ull(procfile_lineword(ff, l, 7), NULL); + steal = str2ull(procfile_lineword(ff, l, 8), NULL); + + guest = str2ull(procfile_lineword(ff, l, 9), NULL); + user -= guest; + + guest_nice = str2ull(procfile_lineword(ff, l, 10), NULL); + nice -= guest_nice; + + char *title, *type, *context, *family; + long priority; + + struct cpu_chart *cpu_chart = &all_cpu_charts[core]; + + char *id = row_key; + + if(unlikely(!cpu_chart->st)) { + if(unlikely(core == 0)) { + title = "Total CPU utilization"; + type = "system"; + context = "system.cpu"; + family = id; 
+ priority = NETDATA_CHART_PRIO_SYSTEM_CPU; + } + else { + title = "Core utilization"; + type = "cpu"; + context = "cpu.cpu"; + family = "utilization"; + priority = NETDATA_CHART_PRIO_CPU_PER_CORE; + } + + cpu_chart->st = rrdset_create_localhost( + type + , id + , NULL + , family + , context + , title + , "percentage" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_STAT_NAME + , priority + core + , update_every + , RRDSET_TYPE_STACKED + ); + + long multiplier = 1; + long divisor = 1; // sysconf(_SC_CLK_TCK); + + cpu_chart->rd_guest_nice = rrddim_add(cpu_chart->st, "guest_nice", NULL, multiplier, divisor, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL); + cpu_chart->rd_guest = rrddim_add(cpu_chart->st, "guest", NULL, multiplier, divisor, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL); + cpu_chart->rd_steal = rrddim_add(cpu_chart->st, "steal", NULL, multiplier, divisor, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL); + cpu_chart->rd_softirq = rrddim_add(cpu_chart->st, "softirq", NULL, multiplier, divisor, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL); + cpu_chart->rd_irq = rrddim_add(cpu_chart->st, "irq", NULL, multiplier, divisor, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL); + cpu_chart->rd_user = rrddim_add(cpu_chart->st, "user", NULL, multiplier, divisor, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL); + cpu_chart->rd_system = rrddim_add(cpu_chart->st, "system", NULL, multiplier, divisor, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL); + cpu_chart->rd_nice = rrddim_add(cpu_chart->st, "nice", NULL, multiplier, divisor, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL); + cpu_chart->rd_iowait = rrddim_add(cpu_chart->st, "iowait", NULL, multiplier, divisor, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL); + cpu_chart->rd_idle = rrddim_add(cpu_chart->st, "idle", NULL, multiplier, divisor, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL); + rrddim_hide(cpu_chart->st, "idle"); + + if (core > 0) { + char cpu_core[50 + 1]; + snprintfz(cpu_core, 50, "cpu%lu", core - 1); + rrdlabels_add(cpu_chart->st->rrdlabels, "cpu", cpu_core, RRDLABEL_SRC_AUTO); + } + + if(unlikely(core == 0 
&& cpus_var == NULL)) + cpus_var = rrdvar_host_variable_add_and_acquire(localhost, "active_processors"); + } + + rrddim_set_by_pointer(cpu_chart->st, cpu_chart->rd_user, user); + rrddim_set_by_pointer(cpu_chart->st, cpu_chart->rd_nice, nice); + rrddim_set_by_pointer(cpu_chart->st, cpu_chart->rd_system, system); + rrddim_set_by_pointer(cpu_chart->st, cpu_chart->rd_idle, idle); + rrddim_set_by_pointer(cpu_chart->st, cpu_chart->rd_iowait, iowait); + rrddim_set_by_pointer(cpu_chart->st, cpu_chart->rd_irq, irq); + rrddim_set_by_pointer(cpu_chart->st, cpu_chart->rd_softirq, softirq); + rrddim_set_by_pointer(cpu_chart->st, cpu_chart->rd_steal, steal); + rrddim_set_by_pointer(cpu_chart->st, cpu_chart->rd_guest, guest); + rrddim_set_by_pointer(cpu_chart->st, cpu_chart->rd_guest_nice, guest_nice); + rrdset_done(cpu_chart->st); + } + } + else if(unlikely(hash == hash_intr && strcmp(row_key, "intr") == 0)) { + if(likely(do_interrupts)) { + static RRDSET *st_intr = NULL; + static RRDDIM *rd_interrupts = NULL; + unsigned long long value = str2ull(procfile_lineword(ff, l, 1), NULL); + + if(unlikely(!st_intr)) { + st_intr = rrdset_create_localhost( + "system" + , "intr" + , NULL + , "interrupts" + , NULL + , "CPU Interrupts" + , "interrupts/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_STAT_NAME + , NETDATA_CHART_PRIO_SYSTEM_INTR + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(st_intr, RRDSET_FLAG_DETAIL); + + rd_interrupts = rrddim_add(st_intr, "interrupts", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_intr, rd_interrupts, value); + rrdset_done(st_intr); + } + } + else if(unlikely(hash == hash_ctxt && strcmp(row_key, "ctxt") == 0)) { + if(likely(do_context)) { + static RRDSET *st_ctxt = NULL; + static RRDDIM *rd_switches = NULL; + unsigned long long value = str2ull(procfile_lineword(ff, l, 1), NULL); + + if(unlikely(!st_ctxt)) { + st_ctxt = rrdset_create_localhost( + "system" + , "ctxt" + , NULL + , "processes" + , NULL + , "CPU Context 
Switches" + , "context switches/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_STAT_NAME + , NETDATA_CHART_PRIO_SYSTEM_CTXT + , update_every + , RRDSET_TYPE_LINE + ); + + rd_switches = rrddim_add(st_ctxt, "switches", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_ctxt, rd_switches, value); + rrdset_done(st_ctxt); + } + } + else if(unlikely(hash == hash_processes && !processes && strcmp(row_key, "processes") == 0)) { + processes = str2ull(procfile_lineword(ff, l, 1), NULL); + } + else if(unlikely(hash == hash_procs_running && !running && strcmp(row_key, "procs_running") == 0)) { + running = str2ull(procfile_lineword(ff, l, 1), NULL); + } + else if(unlikely(hash == hash_procs_blocked && !blocked && strcmp(row_key, "procs_blocked") == 0)) { + blocked = str2ull(procfile_lineword(ff, l, 1), NULL); + } + } + + // -------------------------------------------------------------------- + + if(likely(do_forks)) { + static RRDSET *st_forks = NULL; + static RRDDIM *rd_started = NULL; + + if(unlikely(!st_forks)) { + st_forks = rrdset_create_localhost( + "system" + , "forks" + , NULL + , "processes" + , NULL + , "Started Processes" + , "processes/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_STAT_NAME + , NETDATA_CHART_PRIO_SYSTEM_FORKS + , update_every + , RRDSET_TYPE_LINE + ); + rrdset_flag_set(st_forks, RRDSET_FLAG_DETAIL); + + rd_started = rrddim_add(st_forks, "started", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_forks, rd_started, processes); + rrdset_done(st_forks); + } + + // -------------------------------------------------------------------- + + if(likely(do_processes)) { + static RRDSET *st_processes = NULL; + static RRDDIM *rd_running = NULL; + static RRDDIM *rd_blocked = NULL; + + if(unlikely(!st_processes)) { + st_processes = rrdset_create_localhost( + "system" + , "processes" + , NULL + , "processes" + , NULL + , "System Processes" + , "processes" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_STAT_NAME + , 
NETDATA_CHART_PRIO_SYSTEM_PROCESSES + , update_every + , RRDSET_TYPE_LINE + ); + + rd_running = rrddim_add(st_processes, "running", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + rd_blocked = rrddim_add(st_processes, "blocked", NULL, -1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st_processes, rd_running, running); + rrddim_set_by_pointer(st_processes, rd_blocked, blocked); + rrdset_done(st_processes); + } + + if(likely(all_cpu_charts_size > 1)) { + if(likely(do_core_throttle_count != CONFIG_BOOLEAN_NO)) { + int r = read_per_core_files(&all_cpu_charts[1], all_cpu_charts_size - 1, CORE_THROTTLE_COUNT_INDEX); + if(likely(r != -1 && (do_core_throttle_count == CONFIG_BOOLEAN_YES || r > 0))) { + do_core_throttle_count = CONFIG_BOOLEAN_YES; + + static RRDSET *st_core_throttle_count = NULL; + + if (unlikely(!st_core_throttle_count)) { + st_core_throttle_count = rrdset_create_localhost( + "cpu" + , "core_throttling" + , NULL + , "throttling" + , "cpu.core_throttling" + , "Core Thermal Throttling Events" + , "events/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_STAT_NAME + , NETDATA_CHART_PRIO_CORE_THROTTLING + , update_every + , RRDSET_TYPE_LINE + ); + } + + chart_per_core_files(&all_cpu_charts[1], all_cpu_charts_size - 1, CORE_THROTTLE_COUNT_INDEX, st_core_throttle_count, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rrdset_done(st_core_throttle_count); + } + } + + if(likely(do_package_throttle_count != CONFIG_BOOLEAN_NO)) { + int r = read_per_core_files(&all_cpu_charts[1], all_cpu_charts_size - 1, PACKAGE_THROTTLE_COUNT_INDEX); + if(likely(r != -1 && (do_package_throttle_count == CONFIG_BOOLEAN_YES || r > 0))) { + do_package_throttle_count = CONFIG_BOOLEAN_YES; + + static RRDSET *st_package_throttle_count = NULL; + + if(unlikely(!st_package_throttle_count)) { + st_package_throttle_count = rrdset_create_localhost( + "cpu" + , "package_throttling" + , NULL + , "throttling" + , "cpu.package_throttling" + , "Package Thermal Throttling Events" + , "events/s" + , 
PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_STAT_NAME + , NETDATA_CHART_PRIO_PACKAGE_THROTTLING + , update_every + , RRDSET_TYPE_LINE + ); + } + + chart_per_core_files(&all_cpu_charts[1], all_cpu_charts_size - 1, PACKAGE_THROTTLE_COUNT_INDEX, st_package_throttle_count, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rrdset_done(st_package_throttle_count); + } + } + + if(likely(do_cpu_freq != CONFIG_BOOLEAN_NO)) { + char filename[FILENAME_MAX + 1]; + int r = 0; + + if (accurate_freq_avail) { + r = read_per_core_time_in_state_files(&all_cpu_charts[1], all_cpu_charts_size - 1, CPU_FREQ_INDEX); + if(r > 0 && !accurate_freq_is_used) { + accurate_freq_is_used = 1; + snprintfz(filename, FILENAME_MAX, time_in_state_filename, "cpu*"); + collector_info("cpufreq is using %s", filename); + } + } + if (r < 1) { + r = read_per_core_files(&all_cpu_charts[1], all_cpu_charts_size - 1, CPU_FREQ_INDEX); + if(accurate_freq_is_used) { + accurate_freq_is_used = 0; + snprintfz(filename, FILENAME_MAX, scaling_cur_freq_filename, "cpu*"); + collector_info("cpufreq fell back to %s", filename); + } + } + + if(likely(r != -1 && (do_cpu_freq == CONFIG_BOOLEAN_YES || r > 0))) { + do_cpu_freq = CONFIG_BOOLEAN_YES; + + static RRDSET *st_scaling_cur_freq = NULL; + + if(unlikely(!st_scaling_cur_freq)) { + st_scaling_cur_freq = rrdset_create_localhost( + "cpu" + , "cpufreq" + , NULL + , "cpufreq" + , "cpufreq.cpufreq" + , "Current CPU Frequency" + , "MHz" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_STAT_NAME + , NETDATA_CHART_PRIO_CPUFREQ_SCALING_CUR_FREQ + , update_every + , RRDSET_TYPE_LINE + ); + } + + chart_per_core_files(&all_cpu_charts[1], all_cpu_charts_size - 1, CPU_FREQ_INDEX, st_scaling_cur_freq, 1, 1000, RRD_ALGORITHM_ABSOLUTE); + rrdset_done(st_scaling_cur_freq); + } + } + } + + // -------------------------------------------------------------------- + + static struct per_core_cpuidle_chart *cpuidle_charts = NULL; + size_t schedstat_cores_found = 0; + + if(likely(do_cpuidle != CONFIG_BOOLEAN_NO && 
!read_schedstat(schedstat_filename, &cpuidle_charts, &schedstat_cores_found))) { + int cpu_states_updated = 0; + size_t core, state; + + + // proc.plugin runs on Linux systems only. Multi-platform compatibility is not needed here, + // so bare pthread functions are used to avoid unneeded overheads. + for(core = 0; core < schedstat_cores_found; core++) { + if(unlikely(!(cpuidle_charts[core].active_time - cpuidle_charts[core].last_active_time))) { + pthread_t thread; + cpu_set_t global_cpu_set; + + if (likely(!pthread_getaffinity_np(pthread_self(), sizeof(cpu_set_t), &global_cpu_set))) { + if (unlikely(!CPU_ISSET(core, &global_cpu_set))) { + continue; + } + } + else + collector_error("Cannot read current process affinity"); + + // These threads are very ephemeral and don't need to have a specific name + if(unlikely(pthread_create(&thread, NULL, wake_cpu_thread, (void *)&core))) + collector_error("Cannot create wake_cpu_thread"); + else if(unlikely(pthread_join(thread, NULL))) + collector_error("Cannot join wake_cpu_thread"); + cpu_states_updated = 1; + } + } + + if(unlikely(!cpu_states_updated || !read_schedstat(schedstat_filename, &cpuidle_charts, &schedstat_cores_found))) { + for(core = 0; core < schedstat_cores_found; core++) { + cpuidle_charts[core].last_active_time = cpuidle_charts[core].active_time; + + int r = read_cpuidle_states(cpuidle_name_filename, cpuidle_time_filename, cpuidle_charts, core); + if(likely(r != -1 && (do_cpuidle == CONFIG_BOOLEAN_YES || r > 0))) { + do_cpuidle = CONFIG_BOOLEAN_YES; + + char cpuidle_chart_id[RRD_ID_LENGTH_MAX + 1]; + snprintfz(cpuidle_chart_id, RRD_ID_LENGTH_MAX, "cpu%zu_cpuidle", core); + + if(unlikely(!cpuidle_charts[core].st)) { + cpuidle_charts[core].st = rrdset_create_localhost( + "cpu" + , cpuidle_chart_id + , NULL + , "cpuidle" + , "cpuidle.cpu_cstate_residency_time" + , "C-state residency time" + , "percentage" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_STAT_NAME + , NETDATA_CHART_PRIO_CPUIDLE + core + , 
update_every + , RRDSET_TYPE_STACKED + ); + + char corebuf[50+1]; + snprintfz(corebuf, sizeof(corebuf) - 1, "cpu%zu", core); + rrdlabels_add(cpuidle_charts[core].st->rrdlabels, "cpu", corebuf, RRDLABEL_SRC_AUTO); + + char cpuidle_dim_id[RRD_ID_LENGTH_MAX + 1]; + cpuidle_charts[core].active_time_rd = rrddim_add(cpuidle_charts[core].st, "active", "C0 (active)", 1, 1, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL); + for(state = 0; state < cpuidle_charts[core].cpuidle_state_len; state++) { + strncpyz(cpuidle_dim_id, cpuidle_charts[core].cpuidle_state[state].name, RRD_ID_LENGTH_MAX); + for(int i = 0; cpuidle_dim_id[i]; i++) + cpuidle_dim_id[i] = tolower(cpuidle_dim_id[i]); + cpuidle_charts[core].cpuidle_state[state].rd = rrddim_add(cpuidle_charts[core].st, cpuidle_dim_id, + cpuidle_charts[core].cpuidle_state[state].name, + 1, 1, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL); + } + } + + rrddim_set_by_pointer(cpuidle_charts[core].st, cpuidle_charts[core].active_time_rd, cpuidle_charts[core].active_time); + for(state = 0; state < cpuidle_charts[core].cpuidle_state_len; state++) { + rrddim_set_by_pointer(cpuidle_charts[core].st, cpuidle_charts[core].cpuidle_state[state].rd, cpuidle_charts[core].cpuidle_state[state].value); + } + rrdset_done(cpuidle_charts[core].st); + } + } + } + } + + if(cpus_var) + rrdvar_host_variable_set(localhost, cpus_var, cores_found); + + return 0; +} diff --git a/src/collectors/proc.plugin/proc_sys_fs_file_nr.c b/src/collectors/proc.plugin/proc_sys_fs_file_nr.c new file mode 100644 index 000000000..570945d01 --- /dev/null +++ b/src/collectors/proc.plugin/proc_sys_fs_file_nr.c @@ -0,0 +1,81 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +int do_proc_sys_fs_file_nr(int update_every, usec_t dt) { + (void)dt; + + static procfile *ff = NULL; + + if(unlikely(!ff)) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/sys/fs/file-nr"); + ff = 
procfile_open(config_get("plugin:proc:/proc/sys/fs/file-nr", "filename to monitor", filename), "", PROCFILE_FLAG_DEFAULT); + if(unlikely(!ff)) return 1; + } + + ff = procfile_readall(ff); + if(unlikely(!ff)) return 0; // we return 0, so that we will retry to open it next time + + uint64_t allocated = str2ull(procfile_lineword(ff, 0, 0), NULL); + uint64_t unused = str2ull(procfile_lineword(ff, 0, 1), NULL); + uint64_t max = str2ull(procfile_lineword(ff, 0, 2), NULL); + + uint64_t used = allocated - unused; + + static RRDSET *st_files = NULL; + static RRDDIM *rd_used = NULL; + + if(unlikely(!st_files)) { + st_files = rrdset_create_localhost( + "system" + , "file_nr_used" + , NULL + , "files" + , NULL + , "File Descriptors" + , "files" + , PLUGIN_PROC_NAME + , "/proc/sys/fs/file-nr" + , NETDATA_CHART_PRIO_SYSTEM_FILES_NR + , update_every + , RRDSET_TYPE_LINE + ); + + rd_used = rrddim_add(st_files, "used", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st_files, rd_used, (collected_number )used); + rrdset_done(st_files); + + static RRDSET *st_files_utilization = NULL; + static RRDDIM *rd_utilization = NULL; + + if(unlikely(!st_files_utilization)) { + st_files_utilization = rrdset_create_localhost( + "system" + , "file_nr_utilization" + , NULL + , "files" + , NULL + , "File Descriptors Utilization" + , "percentage" + , PLUGIN_PROC_NAME + , "/proc/sys/fs/file-nr" + , NETDATA_CHART_PRIO_SYSTEM_FILES_NR + 1 + , update_every + , RRDSET_TYPE_LINE + ); + + rd_utilization = rrddim_add(st_files_utilization, "utilization", NULL, 1, 10000, RRD_ALGORITHM_ABSOLUTE); + } + + NETDATA_DOUBLE d_used = (NETDATA_DOUBLE)used; + NETDATA_DOUBLE d_max = (NETDATA_DOUBLE)max; + NETDATA_DOUBLE percent = d_used * 100.0 / d_max; + + rrddim_set_by_pointer(st_files_utilization, rd_utilization, (collected_number)(percent * 10000)); + rrdset_done(st_files_utilization); + + return 0; +} diff --git a/src/collectors/proc.plugin/proc_sys_kernel_random_entropy_avail.c 
b/src/collectors/proc.plugin/proc_sys_kernel_random_entropy_avail.c new file mode 100644 index 000000000..b32597bc4 --- /dev/null +++ b/src/collectors/proc.plugin/proc_sys_kernel_random_entropy_avail.c @@ -0,0 +1,47 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +int do_proc_sys_kernel_random_entropy_avail(int update_every, usec_t dt) { + (void)dt; + + static procfile *ff = NULL; + + if(unlikely(!ff)) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/sys/kernel/random/entropy_avail"); + ff = procfile_open(config_get("plugin:proc:/proc/sys/kernel/random/entropy_avail", "filename to monitor", filename), "", PROCFILE_FLAG_DEFAULT); + if(unlikely(!ff)) return 1; + } + + ff = procfile_readall(ff); + if(unlikely(!ff)) return 0; // we return 0, so that we will retry to open it next time + + unsigned long long entropy = str2ull(procfile_lineword(ff, 0, 0), NULL); + + static RRDSET *st = NULL; + static RRDDIM *rd = NULL; + + if(unlikely(!st)) { + st = rrdset_create_localhost( + "system" + , "entropy" + , NULL + , "entropy" + , NULL + , "Available Entropy" + , "entropy" + , PLUGIN_PROC_NAME + , "/proc/sys/kernel/random/entropy_avail" + , NETDATA_CHART_PRIO_SYSTEM_ENTROPY + , update_every + , RRDSET_TYPE_LINE + ); + + rd = rrddim_add(st, "entropy", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st, rd, entropy); + rrdset_done(st); + return 0; +} diff --git a/src/collectors/proc.plugin/proc_uptime.c b/src/collectors/proc.plugin/proc_uptime.c new file mode 100644 index 000000000..ddab7269b --- /dev/null +++ b/src/collectors/proc.plugin/proc_uptime.c @@ -0,0 +1,42 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +int do_proc_uptime(int update_every, usec_t dt) { + (void)dt; + + static char *uptime_filename = NULL; + if(!uptime_filename) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", 
netdata_configured_host_prefix, "/proc/uptime"); + + uptime_filename = config_get("plugin:proc:/proc/uptime", "filename to monitor", filename); + } + + static RRDSET *st = NULL; + static RRDDIM *rd = NULL; + + if(unlikely(!st)) { + + st = rrdset_create_localhost( + "system" + , "uptime" + , NULL + , "uptime" + , NULL + , "System Uptime" + , "seconds" + , PLUGIN_PROC_NAME + , "/proc/uptime" + , NETDATA_CHART_PRIO_SYSTEM_UPTIME + , update_every + , RRDSET_TYPE_LINE + ); + + rd = rrddim_add(st, "uptime", NULL, 1, 1000, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st, rd, uptime_msec(uptime_filename)); + rrdset_done(st); + return 0; +} diff --git a/src/collectors/proc.plugin/proc_vmstat.c b/src/collectors/proc.plugin/proc_vmstat.c new file mode 100644 index 000000000..b44733b6a --- /dev/null +++ b/src/collectors/proc.plugin/proc_vmstat.c @@ -0,0 +1,810 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +#define PLUGIN_PROC_MODULE_VMSTAT_NAME "/proc/vmstat" + +#define OOM_KILL_STRING "oom_kill" + +int do_proc_vmstat(int update_every, usec_t dt) { + (void)dt; + + static procfile *ff = NULL; + static int do_swapio = -1, do_io = -1, do_pgfaults = -1, do_oom_kill = -1, do_numa = -1, do_thp = -1, do_zswapio = -1, do_balloon = -1, do_ksm = -1; + static int has_numa = -1; + + static ARL_BASE *arl_base = NULL; + static unsigned long long numa_foreign = 0ULL; + static unsigned long long numa_hint_faults = 0ULL; + static unsigned long long numa_hint_faults_local = 0ULL; + static unsigned long long numa_huge_pte_updates = 0ULL; + static unsigned long long numa_interleave = 0ULL; + static unsigned long long numa_local = 0ULL; + static unsigned long long numa_other = 0ULL; + static unsigned long long numa_pages_migrated = 0ULL; + static unsigned long long numa_pte_updates = 0ULL; + static unsigned long long pgfault = 0ULL; + static unsigned long long pgmajfault = 0ULL; + static unsigned long long pgpgin = 0ULL; + static unsigned long long 
pgpgout = 0ULL; + static unsigned long long pswpin = 0ULL; + static unsigned long long pswpout = 0ULL; + static unsigned long long oom_kill = 0ULL; + + // THP page migration +// static unsigned long long pgmigrate_success = 0ULL; +// static unsigned long long pgmigrate_fail = 0ULL; +// static unsigned long long thp_migration_success = 0ULL; +// static unsigned long long thp_migration_fail = 0ULL; +// static unsigned long long thp_migration_split = 0ULL; + + // Compaction cost model + // https://lore.kernel.org/lkml/20121022080525.GB2198@suse.de/ +// static unsigned long long compact_migrate_scanned = 0ULL; +// static unsigned long long compact_free_scanned = 0ULL; +// static unsigned long long compact_isolated = 0ULL; + + // THP defragmentation + static unsigned long long compact_stall = 0ULL; // incremented when an application stalls allocating THP + static unsigned long long compact_fail = 0ULL; // defragmentation events that failed + static unsigned long long compact_success = 0ULL; // defragmentation events that succeeded + + // ? +// static unsigned long long compact_daemon_wake = 0ULL; +// static unsigned long long compact_daemon_migrate_scanned = 0ULL; +// static unsigned long long compact_daemon_free_scanned = 0ULL; + + // ? +// static unsigned long long htlb_buddy_alloc_success = 0ULL; +// static unsigned long long htlb_buddy_alloc_fail = 0ULL; + + // ? +// static unsigned long long cma_alloc_success = 0ULL; +// static unsigned long long cma_alloc_fail = 0ULL; + + // ? 
+// static unsigned long long unevictable_pgs_culled = 0ULL; +// static unsigned long long unevictable_pgs_scanned = 0ULL; +// static unsigned long long unevictable_pgs_rescued = 0ULL; +// static unsigned long long unevictable_pgs_mlocked = 0ULL; +// static unsigned long long unevictable_pgs_munlocked = 0ULL; +// static unsigned long long unevictable_pgs_cleared = 0ULL; +// static unsigned long long unevictable_pgs_stranded = 0ULL; + + // THP handling of page faults + static unsigned long long thp_fault_alloc = 0ULL; // is incremented every time a huge page is successfully allocated to handle a page fault. This applies to both the first time a page is faulted and for COW faults. + static unsigned long long thp_fault_fallback = 0ULL; // is incremented if a page fault fails to allocate a huge page and instead falls back to using small pages. + static unsigned long long thp_fault_fallback_charge = 0ULL; // is incremented if a page fault fails to charge a huge page and instead falls back to using small pages even though the allocation was successful. + + // khugepaged collapsing of small pages into huge pages + static unsigned long long thp_collapse_alloc = 0ULL; // is incremented by khugepaged when it has found a range of pages to collapse into one huge page and has successfully allocated a new huge page to store the data. + static unsigned long long thp_collapse_alloc_failed = 0ULL; // is incremented if khugepaged found a range of pages that should be collapsed into one huge page but failed the allocation. 
+ + // THP handling of file allocations + static unsigned long long thp_file_alloc = 0ULL; // is incremented every time a file huge page is successfully allocated + static unsigned long long thp_file_fallback = 0ULL; // is incremented if a file huge page is attempted to be allocated but fails and instead falls back to using small pages + static unsigned long long thp_file_fallback_charge = 0ULL; // is incremented if a file huge page cannot be charged and instead falls back to using small pages even though the allocation was successful + static unsigned long long thp_file_mapped = 0ULL; // is incremented every time a file huge page is mapped into user address space + + // THP splitting of huge pages into small pages + static unsigned long long thp_split_page = 0ULL; + static unsigned long long thp_split_page_failed = 0ULL; + static unsigned long long thp_deferred_split_page = 0ULL; // is incremented when a huge page is put onto split queue. This happens when a huge page is partially unmapped and splitting it would free up some memory. Pages on split queue are going to be split under memory pressure + static unsigned long long thp_split_pmd = 0ULL; // is incremented every time a PMD split into table of PTEs. This can happen, for instance, when application calls mprotect() or munmap() on part of huge page. It doesn’t split huge page, only page table entry + + // ? +// static unsigned long long thp_scan_exceed_none_pte = 0ULL; +// static unsigned long long thp_scan_exceed_swap_pte = 0ULL; +// static unsigned long long thp_scan_exceed_share_pte = 0ULL; +// static unsigned long long thp_split_pud = 0ULL; + + // THP Zero Huge Page + static unsigned long long thp_zero_page_alloc = 0ULL; // is incremented every time a huge zero page used for thp is successfully allocated. 
Note, it doesn’t count every map of the huge zero page, only its allocation + static unsigned long long thp_zero_page_alloc_failed = 0ULL; // is incremented if kernel fails to allocate huge zero page and falls back to using small pages + + // THP Swap Out + static unsigned long long thp_swpout = 0ULL; // is incremented every time a huge page is swapout in one piece without splitting + static unsigned long long thp_swpout_fallback = 0ULL; // is incremented if a huge page has to be split before swapout. Usually because failed to allocate some continuous swap space for the huge page + + // memory ballooning + // Current size of balloon is (balloon_inflate - balloon_deflate) pages + static unsigned long long balloon_inflate = 0ULL; + static unsigned long long balloon_deflate = 0ULL; + static unsigned long long balloon_migrate = 0ULL; + + // ? +// static unsigned long long swap_ra = 0ULL; +// static unsigned long long swap_ra_hit = 0ULL; + + static unsigned long long ksm_swpin_copy = 0ULL; // is incremented every time a KSM page is copied when swapping in + static unsigned long long cow_ksm = 0ULL; // is incremented every time a KSM page triggers copy on write (COW) when users try to write to a KSM page, we have to make a copy + + // zswap + static unsigned long long zswpin = 0ULL; + static unsigned long long zswpout = 0ULL; + + // ? 
+// static unsigned long long direct_map_level2_splits = 0ULL; +// static unsigned long long direct_map_level3_splits = 0ULL; +// static unsigned long long nr_unstable = 0ULL; + + if(unlikely(!ff)) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/vmstat"); + ff = procfile_open(config_get("plugin:proc:/proc/vmstat", "filename to monitor", filename), " \t:", PROCFILE_FLAG_DEFAULT); + if(unlikely(!ff)) return 1; + } + + ff = procfile_readall(ff); + if(unlikely(!ff)) return 0; // we return 0, so that we will retry to open it next time + + size_t lines = procfile_lines(ff), l; + + if(unlikely(!arl_base)) { + do_swapio = config_get_boolean_ondemand("plugin:proc:/proc/vmstat", "swap i/o", CONFIG_BOOLEAN_AUTO); + do_io = config_get_boolean("plugin:proc:/proc/vmstat", "disk i/o", CONFIG_BOOLEAN_YES); + do_pgfaults = config_get_boolean("plugin:proc:/proc/vmstat", "memory page faults", CONFIG_BOOLEAN_YES); + do_oom_kill = config_get_boolean("plugin:proc:/proc/vmstat", "out of memory kills", CONFIG_BOOLEAN_AUTO); + do_numa = config_get_boolean_ondemand("plugin:proc:/proc/vmstat", "system-wide numa metric summary", CONFIG_BOOLEAN_AUTO); + do_thp = config_get_boolean_ondemand("plugin:proc:/proc/vmstat", "transparent huge pages", CONFIG_BOOLEAN_AUTO); + do_zswapio = config_get_boolean_ondemand("plugin:proc:/proc/vmstat", "zswap i/o", CONFIG_BOOLEAN_AUTO); + do_balloon = config_get_boolean_ondemand("plugin:proc:/proc/vmstat", "memory ballooning", CONFIG_BOOLEAN_AUTO); + do_ksm = config_get_boolean_ondemand("plugin:proc:/proc/vmstat", "kernel same memory", CONFIG_BOOLEAN_AUTO); + + arl_base = arl_create("vmstat", NULL, 60); + arl_expect(arl_base, "pgfault", &pgfault); + arl_expect(arl_base, "pgmajfault", &pgmajfault); + arl_expect(arl_base, "pgpgin", &pgpgin); + arl_expect(arl_base, "pgpgout", &pgpgout); + arl_expect(arl_base, "pswpin", &pswpin); + arl_expect(arl_base, "pswpout", &pswpout); + + int has_oom_kill 
= 0; + + for (l = 0; l < lines; l++) { + if (!strcmp(procfile_lineword(ff, l, 0), OOM_KILL_STRING)) { + has_oom_kill = 1; + break; + } + } + + if (has_oom_kill) + arl_expect(arl_base, OOM_KILL_STRING, &oom_kill); + else + do_oom_kill = CONFIG_BOOLEAN_NO; + + if(do_numa == CONFIG_BOOLEAN_YES || (do_numa == CONFIG_BOOLEAN_AUTO && + (get_numa_node_count() >= 2 || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + arl_expect(arl_base, "numa_foreign", &numa_foreign); + arl_expect(arl_base, "numa_hint_faults_local", &numa_hint_faults_local); + arl_expect(arl_base, "numa_hint_faults", &numa_hint_faults); + arl_expect(arl_base, "numa_huge_pte_updates", &numa_huge_pte_updates); + arl_expect(arl_base, "numa_interleave", &numa_interleave); + arl_expect(arl_base, "numa_local", &numa_local); + arl_expect(arl_base, "numa_other", &numa_other); + arl_expect(arl_base, "numa_pages_migrated", &numa_pages_migrated); + arl_expect(arl_base, "numa_pte_updates", &numa_pte_updates); + } + else { + // Do not expect numa metrics when they are not needed. + // By not adding them, the ARL will stop processing the file + // when all the expected metrics are collected. + // Also ARL will not parse their values. 
+ has_numa = 0; + do_numa = CONFIG_BOOLEAN_NO; + } + + if(do_thp == CONFIG_BOOLEAN_YES || do_thp == CONFIG_BOOLEAN_AUTO) { +// arl_expect(arl_base, "pgmigrate_success", &pgmigrate_success); +// arl_expect(arl_base, "pgmigrate_fail", &pgmigrate_fail); +// arl_expect(arl_base, "thp_migration_success", &thp_migration_success); +// arl_expect(arl_base, "thp_migration_fail", &thp_migration_fail); +// arl_expect(arl_base, "thp_migration_split", &thp_migration_split); +// arl_expect(arl_base, "compact_migrate_scanned", &compact_migrate_scanned); +// arl_expect(arl_base, "compact_free_scanned", &compact_free_scanned); +// arl_expect(arl_base, "compact_isolated", &compact_isolated); + arl_expect(arl_base, "compact_stall", &compact_stall); + arl_expect(arl_base, "compact_fail", &compact_fail); + arl_expect(arl_base, "compact_success", &compact_success); +// arl_expect(arl_base, "compact_daemon_wake", &compact_daemon_wake); +// arl_expect(arl_base, "compact_daemon_migrate_scanned", &compact_daemon_migrate_scanned); +// arl_expect(arl_base, "compact_daemon_free_scanned", &compact_daemon_free_scanned); + arl_expect(arl_base, "thp_fault_alloc", &thp_fault_alloc); + arl_expect(arl_base, "thp_fault_fallback", &thp_fault_fallback); + arl_expect(arl_base, "thp_fault_fallback_charge", &thp_fault_fallback_charge); + arl_expect(arl_base, "thp_collapse_alloc", &thp_collapse_alloc); + arl_expect(arl_base, "thp_collapse_alloc_failed", &thp_collapse_alloc_failed); + arl_expect(arl_base, "thp_file_alloc", &thp_file_alloc); + arl_expect(arl_base, "thp_file_fallback", &thp_file_fallback); + arl_expect(arl_base, "thp_file_fallback_charge", &thp_file_fallback_charge); + arl_expect(arl_base, "thp_file_mapped", &thp_file_mapped); + arl_expect(arl_base, "thp_split_page", &thp_split_page); + arl_expect(arl_base, "thp_split_page_failed", &thp_split_page_failed); + arl_expect(arl_base, "thp_deferred_split_page", &thp_deferred_split_page); + arl_expect(arl_base, "thp_split_pmd", &thp_split_pmd); + 
arl_expect(arl_base, "thp_zero_page_alloc", &thp_zero_page_alloc); + arl_expect(arl_base, "thp_zero_page_alloc_failed", &thp_zero_page_alloc_failed); + arl_expect(arl_base, "thp_swpout", &thp_swpout); + arl_expect(arl_base, "thp_swpout_fallback", &thp_swpout_fallback); + } + + if(do_balloon == CONFIG_BOOLEAN_YES || do_balloon == CONFIG_BOOLEAN_AUTO) { + arl_expect(arl_base, "balloon_inflate", &balloon_inflate); + arl_expect(arl_base, "balloon_deflate", &balloon_deflate); + arl_expect(arl_base, "balloon_migrate", &balloon_migrate); + } + + if(do_ksm == CONFIG_BOOLEAN_YES || do_ksm == CONFIG_BOOLEAN_AUTO) { + arl_expect(arl_base, "ksm_swpin_copy", &ksm_swpin_copy); + arl_expect(arl_base, "cow_ksm", &cow_ksm); + } + + if(do_zswapio == CONFIG_BOOLEAN_YES || do_zswapio == CONFIG_BOOLEAN_AUTO) { + arl_expect(arl_base, "zswpin", &zswpin); + arl_expect(arl_base, "zswpout", &zswpout); + } + } + + arl_begin(arl_base); + for(l = 0; l < lines ;l++) { + size_t words = procfile_linewords(ff, l); + if(unlikely(words < 2)) { + if(unlikely(words)) collector_error("Cannot read /proc/vmstat line %zu. 
Expected 2 params, read %zu.", l, words); + continue; + } + + if(unlikely(arl_check(arl_base, + procfile_lineword(ff, l, 0), + procfile_lineword(ff, l, 1)))) break; + } + + // -------------------------------------------------------------------- + + if(do_swapio == CONFIG_BOOLEAN_YES || (do_swapio == CONFIG_BOOLEAN_AUTO && + (pswpin || pswpout || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_swapio = CONFIG_BOOLEAN_YES; + + static RRDSET *st_swapio = NULL; + static RRDDIM *rd_in = NULL, *rd_out = NULL; + + if(unlikely(!st_swapio)) { + st_swapio = rrdset_create_localhost( + "mem" + , "swapio" + , NULL + , "swap" + , NULL + , "Swap I/O" + , "KiB/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_VMSTAT_NAME + , NETDATA_CHART_PRIO_MEM_SWAPIO + , update_every + , RRDSET_TYPE_AREA + ); + + rd_in = rrddim_add(st_swapio, "in", NULL, sysconf(_SC_PAGESIZE), 1024, RRD_ALGORITHM_INCREMENTAL); + rd_out = rrddim_add(st_swapio, "out", NULL, -sysconf(_SC_PAGESIZE), 1024, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_swapio, rd_in, pswpin); + rrddim_set_by_pointer(st_swapio, rd_out, pswpout); + rrdset_done(st_swapio); + } + + // -------------------------------------------------------------------- + + if(do_io) { + static RRDSET *st_io = NULL; + static RRDDIM *rd_in = NULL, *rd_out = NULL; + + if(unlikely(!st_io)) { + st_io = rrdset_create_localhost( + "system" + , "pgpgio" + , NULL + , "disk" + , NULL + , "Memory Paged from/to disk" + , "KiB/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_VMSTAT_NAME + , NETDATA_CHART_PRIO_SYSTEM_PGPGIO + , update_every + , RRDSET_TYPE_AREA + ); + + rd_in = rrddim_add(st_io, "in", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_out = rrddim_add(st_io, "out", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_io, rd_in, pgpgin); + rrddim_set_by_pointer(st_io, rd_out, pgpgout); + rrdset_done(st_io); + } + + // -------------------------------------------------------------------- + + if(do_pgfaults) { + 
static RRDSET *st_pgfaults = NULL; + static RRDDIM *rd_minor = NULL, *rd_major = NULL; + + if(unlikely(!st_pgfaults)) { + st_pgfaults = rrdset_create_localhost( + "mem" + , "pgfaults" + , NULL + , "page faults" + , NULL + , "Memory Page Faults" + , "faults/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_VMSTAT_NAME + , NETDATA_CHART_PRIO_MEM_SYSTEM_PGFAULTS + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(st_pgfaults, RRDSET_FLAG_DETAIL); + + rd_minor = rrddim_add(st_pgfaults, "minor", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_major = rrddim_add(st_pgfaults, "major", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_pgfaults, rd_minor, pgfault); + rrddim_set_by_pointer(st_pgfaults, rd_major, pgmajfault); + rrdset_done(st_pgfaults); + } + + // -------------------------------------------------------------------- + + if (do_oom_kill == CONFIG_BOOLEAN_YES || + (do_oom_kill == CONFIG_BOOLEAN_AUTO && (oom_kill || netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + static RRDSET *st_oom_kill = NULL; + static RRDDIM *rd_oom_kill = NULL; + + do_oom_kill = CONFIG_BOOLEAN_YES; + + if(unlikely(!st_oom_kill)) { + st_oom_kill = rrdset_create_localhost( + "mem" + , "oom_kill" + , NULL + , "OOM kills" + , NULL + , "Out of Memory Kills" + , "kills/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_VMSTAT_NAME + , NETDATA_CHART_PRIO_MEM_SYSTEM_OOM_KILL + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(st_oom_kill, RRDSET_FLAG_DETAIL); + + rd_oom_kill = rrddim_add(st_oom_kill, "kills", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_oom_kill, rd_oom_kill, oom_kill); + rrdset_done(st_oom_kill); + } + + // -------------------------------------------------------------------- + + // Ondemand criteria for NUMA. Since this won't change at run time, we + // check it only once. 
We check whether the node count is >= 2 because + // single-node systems have uninteresting statistics (since all accesses + // are local). + if(unlikely(has_numa == -1)) + + has_numa = (numa_local || numa_foreign || numa_interleave || numa_other || numa_pte_updates || + numa_huge_pte_updates || numa_hint_faults || numa_hint_faults_local || numa_pages_migrated) ? 1 : 0; + + if(do_numa == CONFIG_BOOLEAN_YES || (do_numa == CONFIG_BOOLEAN_AUTO && has_numa)) { + do_numa = CONFIG_BOOLEAN_YES; + + static RRDSET *st_numa = NULL; + static RRDDIM *rd_local = NULL, *rd_foreign = NULL, *rd_interleave = NULL, *rd_other = NULL, *rd_pte_updates = NULL, *rd_huge_pte_updates = NULL, *rd_hint_faults = NULL, *rd_hint_faults_local = NULL, *rd_pages_migrated = NULL; + + if(unlikely(!st_numa)) { + st_numa = rrdset_create_localhost( + "mem" + , "numa" + , NULL + , "numa" + , NULL + , "NUMA events" + , "events/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_VMSTAT_NAME + , NETDATA_CHART_PRIO_MEM_NUMA + , update_every + , RRDSET_TYPE_LINE + ); + + rrdset_flag_set(st_numa, RRDSET_FLAG_DETAIL); + + // These depend on CONFIG_NUMA in the kernel. + rd_local = rrddim_add(st_numa, "local", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_foreign = rrddim_add(st_numa, "foreign", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_interleave = rrddim_add(st_numa, "interleave", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_other = rrddim_add(st_numa, "other", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + + // The following stats depend on CONFIG_NUMA_BALANCING in the + // kernel. 
+ rd_pte_updates = rrddim_add(st_numa, "pte_updates", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_huge_pte_updates = rrddim_add(st_numa, "huge_pte_updates", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_hint_faults = rrddim_add(st_numa, "hint_faults", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_hint_faults_local = rrddim_add(st_numa, "hint_faults_local", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_pages_migrated = rrddim_add(st_numa, "pages_migrated", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_numa, rd_local, numa_local); + rrddim_set_by_pointer(st_numa, rd_foreign, numa_foreign); + rrddim_set_by_pointer(st_numa, rd_interleave, numa_interleave); + rrddim_set_by_pointer(st_numa, rd_other, numa_other); + + rrddim_set_by_pointer(st_numa, rd_pte_updates, numa_pte_updates); + rrddim_set_by_pointer(st_numa, rd_huge_pte_updates, numa_huge_pte_updates); + rrddim_set_by_pointer(st_numa, rd_hint_faults, numa_hint_faults); + rrddim_set_by_pointer(st_numa, rd_hint_faults_local, numa_hint_faults_local); + rrddim_set_by_pointer(st_numa, rd_pages_migrated, numa_pages_migrated); + + rrdset_done(st_numa); + } + + // -------------------------------------------------------------------- + + if(do_balloon == CONFIG_BOOLEAN_YES || (do_balloon == CONFIG_BOOLEAN_AUTO && (balloon_inflate || balloon_deflate || + balloon_migrate || netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_balloon = CONFIG_BOOLEAN_YES; + + static RRDSET *st_balloon = NULL; + static RRDDIM *rd_inflate = NULL, *rd_deflate = NULL, *rd_migrate = NULL; + + if(unlikely(!st_balloon)) { + st_balloon = rrdset_create_localhost( + "mem" + , "balloon" + , NULL + , "balloon" + , NULL + , "Memory Ballooning Operations" + , "KiB/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_VMSTAT_NAME + , NETDATA_CHART_PRIO_MEM_BALLOON + , update_every + , RRDSET_TYPE_LINE + ); + + rd_inflate = rrddim_add(st_balloon, "inflate", NULL, sysconf(_SC_PAGESIZE), 1024, RRD_ALGORITHM_INCREMENTAL); + rd_deflate = 
rrddim_add(st_balloon, "deflate", NULL, -sysconf(_SC_PAGESIZE), 1024, RRD_ALGORITHM_INCREMENTAL); + rd_migrate = rrddim_add(st_balloon, "migrate", NULL, sysconf(_SC_PAGESIZE), 1024, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_balloon, rd_inflate, balloon_inflate); + rrddim_set_by_pointer(st_balloon, rd_deflate, balloon_deflate); + rrddim_set_by_pointer(st_balloon, rd_migrate, balloon_migrate); + + rrdset_done(st_balloon); + } + + // -------------------------------------------------------------------- + + if(do_zswapio == CONFIG_BOOLEAN_YES || (do_zswapio == CONFIG_BOOLEAN_AUTO && + (zswpin || zswpout || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_zswapio = CONFIG_BOOLEAN_YES; + + static RRDSET *st_zswapio = NULL; + static RRDDIM *rd_in = NULL, *rd_out = NULL; + + if(unlikely(!st_zswapio)) { + st_zswapio = rrdset_create_localhost( + "mem" + , "zswapio" + , NULL + , "zswap" + , NULL + , "ZSwap I/O" + , "KiB/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_VMSTAT_NAME + , NETDATA_CHART_PRIO_MEM_ZSWAPIO + , update_every + , RRDSET_TYPE_AREA + ); + + rd_in = rrddim_add(st_zswapio, "in", NULL, sysconf(_SC_PAGESIZE), 1024, RRD_ALGORITHM_INCREMENTAL); + rd_out = rrddim_add(st_zswapio, "out", NULL, -sysconf(_SC_PAGESIZE), 1024, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_zswapio, rd_in, zswpin); + rrddim_set_by_pointer(st_zswapio, rd_out, zswpout); + rrdset_done(st_zswapio); + } + + // -------------------------------------------------------------------- + + if(do_ksm == CONFIG_BOOLEAN_YES || (do_ksm == CONFIG_BOOLEAN_AUTO && + (cow_ksm || ksm_swpin_copy || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_ksm = CONFIG_BOOLEAN_YES; + + static RRDSET *st_ksm_cow = NULL; + static RRDDIM *rd_swapin = NULL, *rd_write = NULL; + + if(unlikely(!st_ksm_cow)) { + st_ksm_cow = rrdset_create_localhost( + "mem" + , "ksm_cow" + , NULL + , "ksm" + , NULL + , "KSM Copy On Write Operations" + , "KiB/s" + , PLUGIN_PROC_NAME + , 
PLUGIN_PROC_MODULE_VMSTAT_NAME + , NETDATA_CHART_PRIO_MEM_KSM_COW + , update_every + , RRDSET_TYPE_LINE + ); + + rd_swapin = rrddim_add(st_ksm_cow, "swapin", NULL, sysconf(_SC_PAGESIZE), 1024, RRD_ALGORITHM_INCREMENTAL); + rd_write = rrddim_add(st_ksm_cow, "write", NULL, sysconf(_SC_PAGESIZE), 1024, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_ksm_cow, rd_swapin, ksm_swpin_copy); + rrddim_set_by_pointer(st_ksm_cow, rd_write, cow_ksm); + + rrdset_done(st_ksm_cow); + } + + // -------------------------------------------------------------------- + + if(do_thp == CONFIG_BOOLEAN_YES || do_thp == CONFIG_BOOLEAN_AUTO) { + + if(do_thp == CONFIG_BOOLEAN_YES || (do_thp == CONFIG_BOOLEAN_AUTO && + (netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES || thp_fault_alloc || thp_fault_fallback || thp_fault_fallback_charge))) { + + static RRDSET *st_thp_fault = NULL; + static RRDDIM *rd_alloc = NULL, *rd_fallback = NULL, *rd_fallback_charge = NULL; + + if(unlikely(!st_thp_fault)) { + st_thp_fault = rrdset_create_localhost( + "mem" + , "thp_faults" + , NULL + , "hugepages" + , NULL + , "Transparent Huge Page Fault Allocations" + , "events/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_VMSTAT_NAME + , NETDATA_CHART_PRIO_MEM_HUGEPAGES_FAULTS + , update_every + , RRDSET_TYPE_LINE + ); + + rd_alloc = rrddim_add(st_thp_fault, "alloc", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_fallback = rrddim_add(st_thp_fault, "fallback", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_fallback_charge = rrddim_add(st_thp_fault, "fallback_charge", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_thp_fault, rd_alloc, thp_fault_alloc); + rrddim_set_by_pointer(st_thp_fault, rd_fallback, thp_fault_fallback); + rrddim_set_by_pointer(st_thp_fault, rd_fallback_charge, thp_fault_fallback_charge); + + rrdset_done(st_thp_fault); + } + + if(do_thp == CONFIG_BOOLEAN_YES || (do_thp == CONFIG_BOOLEAN_AUTO && + (netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES || 
thp_fault_alloc || thp_fault_fallback || thp_fault_fallback_charge || thp_file_mapped))) { + + static RRDSET *st_thp_file = NULL; + static RRDDIM *rd_alloc = NULL, *rd_fallback = NULL, *rd_fallback_charge = NULL, *rd_mapped = NULL; + + if(unlikely(!st_thp_file)) { + st_thp_file = rrdset_create_localhost( + "mem" + , "thp_file" + , NULL + , "hugepages" + , NULL + , "Transparent Huge Page File Allocations" + , "events/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_VMSTAT_NAME + , NETDATA_CHART_PRIO_MEM_HUGEPAGES_FILE + , update_every + , RRDSET_TYPE_LINE + ); + + rd_alloc = rrddim_add(st_thp_file, "alloc", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_fallback = rrddim_add(st_thp_file, "fallback", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_mapped = rrddim_add(st_thp_file, "mapped", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_fallback_charge = rrddim_add(st_thp_file, "fallback_charge", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_thp_file, rd_alloc, thp_file_alloc); + rrddim_set_by_pointer(st_thp_file, rd_fallback, thp_file_fallback); + rrddim_set_by_pointer(st_thp_file, rd_mapped, thp_file_mapped); + rrddim_set_by_pointer(st_thp_file, rd_fallback_charge, thp_file_fallback_charge); + + rrdset_done(st_thp_file); + } + + if(do_thp == CONFIG_BOOLEAN_YES || (do_thp == CONFIG_BOOLEAN_AUTO && + (netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES || thp_zero_page_alloc || thp_zero_page_alloc_failed))) { + + static RRDSET *st_thp_zero = NULL; + static RRDDIM *rd_alloc = NULL, *rd_failed = NULL; + + if(unlikely(!st_thp_zero)) { + st_thp_zero = rrdset_create_localhost( + "mem" + , "thp_zero" + , NULL + , "hugepages" + , NULL + , "Transparent Huge Zero Page Allocations" + , "events/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_VMSTAT_NAME + , NETDATA_CHART_PRIO_MEM_HUGEPAGES_ZERO + , update_every + , RRDSET_TYPE_LINE + ); + + rd_alloc = rrddim_add(st_thp_zero, "alloc", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_failed = 
rrddim_add(st_thp_zero, "failed", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_thp_zero, rd_alloc, thp_zero_page_alloc); + rrddim_set_by_pointer(st_thp_zero, rd_failed, thp_zero_page_alloc_failed); + + rrdset_done(st_thp_zero); + } + + if(do_thp == CONFIG_BOOLEAN_YES || (do_thp == CONFIG_BOOLEAN_AUTO && + (netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES || thp_collapse_alloc || thp_collapse_alloc_failed))) { + + static RRDSET *st_khugepaged = NULL; + static RRDDIM *rd_alloc = NULL, *rd_failed = NULL; + + if(unlikely(!st_khugepaged)) { + st_khugepaged = rrdset_create_localhost( + "mem" + , "thp_collapse" + , NULL + , "hugepages" + , NULL + , "Transparent Huge Pages Collapsed by khugepaged" + , "events/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_VMSTAT_NAME + , NETDATA_CHART_PRIO_MEM_HUGEPAGES_KHUGEPAGED + , update_every + , RRDSET_TYPE_LINE + ); + + rd_alloc = rrddim_add(st_khugepaged, "alloc", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_failed = rrddim_add(st_khugepaged, "failed", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_khugepaged, rd_alloc, thp_collapse_alloc); + rrddim_set_by_pointer(st_khugepaged, rd_failed, thp_collapse_alloc_failed); + + rrdset_done(st_khugepaged); + } + + if(do_thp == CONFIG_BOOLEAN_YES || (do_thp == CONFIG_BOOLEAN_AUTO && + (netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES || thp_split_page || thp_split_page_failed || thp_deferred_split_page || thp_split_pmd))) { + + static RRDSET *st_thp_split = NULL; + static RRDDIM *rd_split = NULL, *rd_failed = NULL, *rd_deferred_split = NULL, *rd_split_pmd = NULL; + + if(unlikely(!st_thp_split)) { + st_thp_split = rrdset_create_localhost( + "mem" + , "thp_split" + , NULL + , "hugepages" + , NULL + , "Transparent Huge Page Splits" + , "events/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_VMSTAT_NAME + , NETDATA_CHART_PRIO_MEM_HUGEPAGES_SPLITS + , update_every + , RRDSET_TYPE_LINE + ); + + rd_split = rrddim_add(st_thp_split, "split", 
NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_failed = rrddim_add(st_thp_split, "failed", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_split_pmd = rrddim_add(st_thp_split, "split_pmd", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_deferred_split = rrddim_add(st_thp_split, "split_deferred", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_thp_split, rd_split, thp_split_page); + rrddim_set_by_pointer(st_thp_split, rd_failed, thp_split_page_failed); + rrddim_set_by_pointer(st_thp_split, rd_split_pmd, thp_split_pmd); + rrddim_set_by_pointer(st_thp_split, rd_deferred_split, thp_deferred_split_page); + + rrdset_done(st_thp_split); + } + + if(do_thp == CONFIG_BOOLEAN_YES || (do_thp == CONFIG_BOOLEAN_AUTO && + (netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES || thp_swpout || thp_swpout_fallback))) { + + static RRDSET *st_thp_swapout = NULL; + static RRDDIM *rd_swapout = NULL, *rd_fallback = NULL; + + if(unlikely(!st_thp_swapout)) { + st_thp_swapout = rrdset_create_localhost( + "mem" + , "thp_swapout" + , NULL + , "hugepages" + , NULL + , "Transparent Huge Pages Swap Out" + , "events/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_VMSTAT_NAME + , NETDATA_CHART_PRIO_MEM_HUGEPAGES_SWAPOUT + , update_every + , RRDSET_TYPE_LINE + ); + + rd_swapout = rrddim_add(st_thp_swapout, "swapout", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_fallback = rrddim_add(st_thp_swapout, "fallback", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_thp_swapout, rd_swapout, thp_swpout); + rrddim_set_by_pointer(st_thp_swapout, rd_fallback, thp_swpout_fallback); + + rrdset_done(st_thp_swapout); + } + + if(do_thp == CONFIG_BOOLEAN_YES || (do_thp == CONFIG_BOOLEAN_AUTO && + (netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES || compact_stall || compact_fail || compact_success))) { + + static RRDSET *st_thp_compact = NULL; + static RRDDIM *rd_success = NULL, *rd_fail = NULL, *rd_stall = NULL; + + if(unlikely(!st_thp_compact)) { + st_thp_compact = 
rrdset_create_localhost( + "mem" + , "thp_compact" + , NULL + , "hugepages" + , NULL + , "Transparent Huge Pages Compaction" + , "events/s" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_VMSTAT_NAME + , NETDATA_CHART_PRIO_MEM_HUGEPAGES_COMPACT + , update_every + , RRDSET_TYPE_LINE + ); + + rd_success = rrddim_add(st_thp_compact, "success", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_fail = rrddim_add(st_thp_compact, "fail", NULL, -1, 1, RRD_ALGORITHM_INCREMENTAL); + rd_stall = rrddim_add(st_thp_compact, "stall", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL); + } + + rrddim_set_by_pointer(st_thp_compact, rd_success, compact_success); + rrddim_set_by_pointer(st_thp_compact, rd_fail, compact_fail); + rrddim_set_by_pointer(st_thp_compact, rd_stall, compact_stall); + + rrdset_done(st_thp_compact); + } + } + + return 0; +} + diff --git a/src/collectors/proc.plugin/sys_block_zram.c b/src/collectors/proc.plugin/sys_block_zram.c new file mode 100644 index 000000000..dac7cac0f --- /dev/null +++ b/src/collectors/proc.plugin/sys_block_zram.c @@ -0,0 +1,285 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +#define PLUGIN_PROC_MODULE_ZRAM_NAME "/sys/block/zram" +#define rrdset_obsolete_and_pointer_null(st) do { if(st) { rrdset_is_obsolete___safe_from_collector_thread(st); (st) = NULL; } } while(st) + +typedef struct mm_stat { + unsigned long long orig_data_size; + unsigned long long compr_data_size; + unsigned long long mem_used_total; + unsigned long long mem_limit; + unsigned long long mem_used_max; + unsigned long long same_pages; + unsigned long long pages_compacted; +} MM_STAT; + +typedef struct zram_device { + procfile *file; + + RRDSET *st_usage; + RRDDIM *rd_compr_data_size; + RRDDIM *rd_metadata_size; + + RRDSET *st_savings; + RRDDIM *rd_original_size; + RRDDIM *rd_savings_size; + + RRDSET *st_comp_ratio; + RRDDIM *rd_comp_ratio; + + RRDSET *st_alloc_efficiency; + RRDDIM *rd_alloc_efficiency; +} ZRAM_DEVICE; + +static int 
try_get_zram_major_number(procfile *file) { + size_t i; + unsigned int lines = procfile_lines(file); + int id = -1; + char *name = NULL; + for (i = 0; i < lines; i++) + { + if (procfile_linewords(file, i) < 2) + continue; + name = procfile_lineword(file, i, 1); + if (strcmp(name, "zram") == 0) + { + id = str2i(procfile_lineword(file, i, 0)); + if (id == 0) + return -1; + return id; + } + } + return -1; +} + +static inline void init_rrd(const char *name, ZRAM_DEVICE *d, int update_every) { + char chart_name[RRD_ID_LENGTH_MAX + 1]; + + snprintfz(chart_name, RRD_ID_LENGTH_MAX, "zram_usage.%s", name); + d->st_usage = rrdset_create_localhost( + "mem" + , chart_name + , chart_name + , name + , "mem.zram_usage" + , "ZRAM Memory Usage" + , "MiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_ZRAM_NAME + , NETDATA_CHART_PRIO_MEM_ZRAM + , update_every + , RRDSET_TYPE_AREA); + d->rd_compr_data_size = rrddim_add(d->st_usage, "compressed", NULL, 1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE); + d->rd_metadata_size = rrddim_add(d->st_usage, "metadata", NULL, 1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE); + rrdlabels_add(d->st_usage->rrdlabels, "device", name, RRDLABEL_SRC_AUTO); + + snprintfz(chart_name, RRD_ID_LENGTH_MAX, "zram_savings.%s", name); + d->st_savings = rrdset_create_localhost( + "mem" + , chart_name + , chart_name + , name + , "mem.zram_savings" + , "ZRAM Memory Savings" + , "MiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_ZRAM_NAME + , NETDATA_CHART_PRIO_MEM_ZRAM_SAVINGS + , update_every + , RRDSET_TYPE_AREA); + d->rd_savings_size = rrddim_add(d->st_savings, "savings", NULL, 1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE); + d->rd_original_size = rrddim_add(d->st_savings, "original", NULL, 1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE); + rrdlabels_add(d->st_savings->rrdlabels, "device", name, RRDLABEL_SRC_AUTO); + + snprintfz(chart_name, RRD_ID_LENGTH_MAX, "zram_ratio.%s", name); + d->st_comp_ratio = rrdset_create_localhost( + "mem" + , chart_name + , chart_name + , name + , "mem.zram_ratio" 
+ , "ZRAM Compression Ratio (original to compressed)" + , "ratio" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_ZRAM_NAME + , NETDATA_CHART_PRIO_MEM_ZRAM_RATIO + , update_every + , RRDSET_TYPE_LINE); + d->rd_comp_ratio = rrddim_add(d->st_comp_ratio, "ratio", NULL, 1, 100, RRD_ALGORITHM_ABSOLUTE); + rrdlabels_add(d->st_comp_ratio->rrdlabels, "device", name, RRDLABEL_SRC_AUTO); + + snprintfz(chart_name, RRD_ID_LENGTH_MAX, "zram_efficiency.%s", name); + d->st_alloc_efficiency = rrdset_create_localhost( + "mem" + , chart_name + , chart_name + , name + , "mem.zram_efficiency" + , "ZRAM Efficiency" + , "percentage" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_ZRAM_NAME + , NETDATA_CHART_PRIO_MEM_ZRAM_EFFICIENCY + , update_every + , RRDSET_TYPE_LINE); + d->rd_alloc_efficiency = rrddim_add(d->st_alloc_efficiency, "percent", NULL, 1, 10000, RRD_ALGORITHM_ABSOLUTE); + rrdlabels_add(d->st_alloc_efficiency->rrdlabels, "device", name, RRDLABEL_SRC_AUTO); +} + +static int init_devices(DICTIONARY *devices, unsigned int zram_id, int update_every) { + int count = 0; + struct dirent *de; + struct stat st; + procfile *ff = NULL; + ZRAM_DEVICE device; + char filename[FILENAME_MAX + 1]; + + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/dev"); + DIR *dir = opendir(filename); + + if (unlikely(!dir)) + return 0; + while ((de = readdir(dir))) + { + snprintfz(filename, FILENAME_MAX, "%s/dev/%s", netdata_configured_host_prefix, de->d_name); + if (unlikely(stat(filename, &st) != 0)) + { + collector_error("ZRAM : Unable to stat %s: %s", filename, strerror(errno)); + continue; + } + if (major(st.st_rdev) == zram_id) + { + collector_info("ZRAM : Found device %s", filename); + snprintfz(filename, FILENAME_MAX, "%s/sys/block/%s/mm_stat", netdata_configured_host_prefix, de->d_name); + ff = procfile_open(filename, " \t:", PROCFILE_FLAG_DEFAULT); + if (ff == NULL) + { + collector_error("ZRAM : Failed to open %s: %s", filename, strerror(errno)); + continue; + } + device.file 
= ff; + init_rrd(de->d_name, &device, update_every); + dictionary_set(devices, de->d_name, &device, sizeof(ZRAM_DEVICE)); + count++; + } + } + closedir(dir); + return count; +} + +static void free_device(DICTIONARY *dict, const char *name) +{ + ZRAM_DEVICE *d = (ZRAM_DEVICE*)dictionary_get(dict, name); + collector_info("ZRAM : Disabling monitoring of device %s", name); + rrdset_obsolete_and_pointer_null(d->st_usage); + rrdset_obsolete_and_pointer_null(d->st_savings); + rrdset_obsolete_and_pointer_null(d->st_alloc_efficiency); + rrdset_obsolete_and_pointer_null(d->st_comp_ratio); + dictionary_del(dict, name); +} + +static inline int read_mm_stat(procfile *ff, MM_STAT *stats) { + ff = procfile_readall(ff); + if (!ff) + return -1; + if (procfile_lines(ff) < 1) { + procfile_close(ff); + return -1; + } + if (procfile_linewords(ff, 0) < 7) { + procfile_close(ff); + return -1; + } + + stats->orig_data_size = str2ull(procfile_word(ff, 0), NULL); + stats->compr_data_size = str2ull(procfile_word(ff, 1), NULL); + stats->mem_used_total = str2ull(procfile_word(ff, 2), NULL); + stats->mem_limit = str2ull(procfile_word(ff, 3), NULL); + stats->mem_used_max = str2ull(procfile_word(ff, 4), NULL); + stats->same_pages = str2ull(procfile_word(ff, 5), NULL); + stats->pages_compacted = str2ull(procfile_word(ff, 6), NULL); + return 0; +} + +static int collect_zram_metrics(const DICTIONARY_ITEM *item, void *entry, void *data) { + const char *name = dictionary_acquired_item_name(item); + ZRAM_DEVICE *dev = entry; + DICTIONARY *dict = data; + + MM_STAT mm; + int value; + + if (unlikely(read_mm_stat(dev->file, &mm) < 0)) { + free_device(dict, name); + return -1; + } + + // zram_usage + rrddim_set_by_pointer(dev->st_usage, dev->rd_compr_data_size, mm.compr_data_size); + rrddim_set_by_pointer(dev->st_usage, dev->rd_metadata_size, mm.mem_used_total - mm.compr_data_size); + rrdset_done(dev->st_usage); + + // zram_savings + rrddim_set_by_pointer(dev->st_savings, dev->rd_savings_size, 
mm.compr_data_size - mm.orig_data_size); + rrddim_set_by_pointer(dev->st_savings, dev->rd_original_size, mm.orig_data_size); + rrdset_done(dev->st_savings); + + // zram_ratio + value = mm.compr_data_size == 0 ? 1 : mm.orig_data_size * 100 / mm.compr_data_size; + rrddim_set_by_pointer(dev->st_comp_ratio, dev->rd_comp_ratio, value); + rrdset_done(dev->st_comp_ratio); + + // zram_efficiency + value = mm.mem_used_total == 0 ? 100 : (mm.compr_data_size * 1000000 / mm.mem_used_total); + rrddim_set_by_pointer(dev->st_alloc_efficiency, dev->rd_alloc_efficiency, value); + rrdset_done(dev->st_alloc_efficiency); + + return 0; +} + +int do_sys_block_zram(int update_every, usec_t dt) { + static procfile *ff = NULL; + static DICTIONARY *devices = NULL; + static int initialized = 0; + static int device_count = 0; + int zram_id = -1; + + (void)dt; + + if (unlikely(!initialized)) + { + initialized = 1; + + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/proc/devices"); + + ff = procfile_open(filename, " \t:", PROCFILE_FLAG_DEFAULT); + if (ff == NULL) + { + collector_error("Cannot read %s", filename); + return 1; + } + ff = procfile_readall(ff); + if (!ff) + return 1; + zram_id = try_get_zram_major_number(ff); + if (zram_id == -1) + { + if (ff != NULL) + procfile_close(ff); + return 1; + } + procfile_close(ff); + + devices = dictionary_create_advanced(DICT_OPTION_SINGLE_THREADED, &dictionary_stats_category_collectors, 0); + device_count = init_devices(devices, (unsigned int)zram_id, update_every); + } + + if (unlikely(device_count < 1)) + return 1; + + dictionary_walkthrough_write(devices, collect_zram_metrics, devices); + return 0; +}
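For context on the arithmetic in `collect_zram_metrics()` above: `mm_stat` is a single line of seven counters, and the ratio/efficiency charts store fixed-point values (divisors 100 and 10000 respectively). A minimal standalone sketch of the same computations, assuming the field order used by `read_mm_stat()`; the `mm_stat_t` type and `parse_mm_stat` helper here are illustrative, not the collector's own code:

```c
#include <assert.h>
#include <stdio.h>

/* The seven space-separated fields of /sys/block/zram<N>/mm_stat,
 * in the order read_mm_stat() consumes them. */
typedef struct {
    unsigned long long orig_data_size;   /* uncompressed data, bytes */
    unsigned long long compr_data_size;  /* compressed data, bytes */
    unsigned long long mem_used_total;   /* total allocator memory, bytes */
    unsigned long long mem_limit;
    unsigned long long mem_used_max;
    unsigned long long same_pages;
    unsigned long long pages_compacted;
} mm_stat_t;

/* Hypothetical helper (not in the collector): parse one mm_stat line.
 * Returns 0 on success, -1 if fewer than seven fields are present. */
static int parse_mm_stat(const char *line, mm_stat_t *s) {
    return sscanf(line, "%llu %llu %llu %llu %llu %llu %llu",
                  &s->orig_data_size, &s->compr_data_size,
                  &s->mem_used_total, &s->mem_limit, &s->mem_used_max,
                  &s->same_pages, &s->pages_compacted) == 7 ? 0 : -1;
}

/* Compression ratio scaled by 100, matching the mem.zram_ratio chart
 * (dimension divisor 100, so 400 renders as 4.00). */
static unsigned long long ratio_x100(const mm_stat_t *s) {
    return s->compr_data_size == 0
               ? 1
               : s->orig_data_size * 100 / s->compr_data_size;
}

/* Allocator efficiency scaled by 10000, matching mem.zram_efficiency
 * (dimension divisor 10000, so 1000000 renders as 100%). */
static unsigned long long efficiency_x10000(const mm_stat_t *s) {
    return s->mem_used_total == 0
               ? 100
               : s->compr_data_size * 1000000 / s->mem_used_total;
}
```

With `orig=1000000`, `compr=250000`, `mem_used_total=300000`, the stored chart values come out as 400 (a 4:1 ratio) and 833333 (roughly 83% allocator efficiency) before the divisors are applied at render time.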
\ No newline at end of file diff --git a/src/collectors/proc.plugin/sys_class_drm.c b/src/collectors/proc.plugin/sys_class_drm.c new file mode 100644 index 000000000..ab4d98a72 --- /dev/null +++ b/src/collectors/proc.plugin/sys_class_drm.c @@ -0,0 +1,1181 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +#define PLUGIN_PROC_MODULE_DRM_NAME "/sys/class/drm" +#define CONFIG_SECTION_PLUGIN_PROC_DRM "plugin:proc:/sys/class/drm" +#define AMDGPU_CHART_TYPE "amdgpu" + +struct amdgpu_id_struct { + unsigned long long asic_id; + unsigned long long pci_rev_id; + const char *marketing_name; +}; + +/* + * About amdgpu_ids list: + * ------------------------------------------------------------------------ + * Copyright (C) 2023 Advanced Micro Devices, Inc. All rights reserved. + * + * The list is copied from: + * https://raw.githubusercontent.com/Syllo/nvtop/master/src/amdgpu_ids.h + * + * which is modified from libdrm (MIT License): + * + * URL: https://gitlab.freedesktop.org/mesa/drm/-/blob/main/data/amdgpu.ids + * ------------------------------------------------------------------------ + * **IMPORTANT**: The amdgpu_ids list has to be updated after new GPU releases. 
+ * ------------------------------------------------------------------------*/ + +static const struct amdgpu_id_struct amdgpu_ids[] = { + {0x1309, 0x00, "AMD Radeon R7 Graphics"}, + {0x130A, 0x00, "AMD Radeon R6 Graphics"}, + {0x130B, 0x00, "AMD Radeon R4 Graphics"}, + {0x130C, 0x00, "AMD Radeon R7 Graphics"}, + {0x130D, 0x00, "AMD Radeon R6 Graphics"}, + {0x130E, 0x00, "AMD Radeon R5 Graphics"}, + {0x130F, 0x00, "AMD Radeon R7 Graphics"}, + {0x130F, 0xD4, "AMD Radeon R7 Graphics"}, + {0x130F, 0xD5, "AMD Radeon R7 Graphics"}, + {0x130F, 0xD6, "AMD Radeon R7 Graphics"}, + {0x130F, 0xD7, "AMD Radeon R7 Graphics"}, + {0x1313, 0x00, "AMD Radeon R7 Graphics"}, + {0x1313, 0xD4, "AMD Radeon R7 Graphics"}, + {0x1313, 0xD5, "AMD Radeon R7 Graphics"}, + {0x1313, 0xD6, "AMD Radeon R7 Graphics"}, + {0x1315, 0x00, "AMD Radeon R5 Graphics"}, + {0x1315, 0xD4, "AMD Radeon R5 Graphics"}, + {0x1315, 0xD5, "AMD Radeon R5 Graphics"}, + {0x1315, 0xD6, "AMD Radeon R5 Graphics"}, + {0x1315, 0xD7, "AMD Radeon R5 Graphics"}, + {0x1316, 0x00, "AMD Radeon R5 Graphics"}, + {0x1318, 0x00, "AMD Radeon R5 Graphics"}, + {0x131B, 0x00, "AMD Radeon R4 Graphics"}, + {0x131C, 0x00, "AMD Radeon R7 Graphics"}, + {0x131D, 0x00, "AMD Radeon R6 Graphics"}, + {0x15D8, 0x00, "AMD Radeon RX Vega 8 Graphics WS"}, + {0x15D8, 0x91, "AMD Radeon Vega 3 Graphics"}, + {0x15D8, 0x91, "AMD Ryzen Embedded R1606G with Radeon Vega Gfx"}, + {0x15D8, 0x92, "AMD Radeon Vega 3 Graphics"}, + {0x15D8, 0x92, "AMD Ryzen Embedded R1505G with Radeon Vega Gfx"}, + {0x15D8, 0x93, "AMD Radeon Vega 1 Graphics"}, + {0x15D8, 0xA1, "AMD Radeon Vega 10 Graphics"}, + {0x15D8, 0xA2, "AMD Radeon Vega 8 Graphics"}, + {0x15D8, 0xA3, "AMD Radeon Vega 6 Graphics"}, + {0x15D8, 0xA4, "AMD Radeon Vega 3 Graphics"}, + {0x15D8, 0xB1, "AMD Radeon Vega 10 Graphics"}, + {0x15D8, 0xB2, "AMD Radeon Vega 8 Graphics"}, + {0x15D8, 0xB3, "AMD Radeon Vega 6 Graphics"}, + {0x15D8, 0xB4, "AMD Radeon Vega 3 Graphics"}, + {0x15D8, 0xC1, "AMD Radeon Vega 10 
Graphics"}, + {0x15D8, 0xC2, "AMD Radeon Vega 8 Graphics"}, + {0x15D8, 0xC3, "AMD Radeon Vega 6 Graphics"}, + {0x15D8, 0xC4, "AMD Radeon Vega 3 Graphics"}, + {0x15D8, 0xC5, "AMD Radeon Vega 3 Graphics"}, + {0x15D8, 0xC8, "AMD Radeon Vega 11 Graphics"}, + {0x15D8, 0xC9, "AMD Radeon Vega 8 Graphics"}, + {0x15D8, 0xCA, "AMD Radeon Vega 11 Graphics"}, + {0x15D8, 0xCB, "AMD Radeon Vega 8 Graphics"}, + {0x15D8, 0xCC, "AMD Radeon Vega 3 Graphics"}, + {0x15D8, 0xCE, "AMD Radeon Vega 3 Graphics"}, + {0x15D8, 0xCF, "AMD Ryzen Embedded R1305G with Radeon Vega Gfx"}, + {0x15D8, 0xD1, "AMD Radeon Vega 10 Graphics"}, + {0x15D8, 0xD2, "AMD Radeon Vega 8 Graphics"}, + {0x15D8, 0xD3, "AMD Radeon Vega 6 Graphics"}, + {0x15D8, 0xD4, "AMD Radeon Vega 3 Graphics"}, + {0x15D8, 0xD8, "AMD Radeon Vega 11 Graphics"}, + {0x15D8, 0xD9, "AMD Radeon Vega 8 Graphics"}, + {0x15D8, 0xDA, "AMD Radeon Vega 11 Graphics"}, + {0x15D8, 0xDB, "AMD Radeon Vega 3 Graphics"}, + {0x15D8, 0xDB, "AMD Radeon Vega 8 Graphics"}, + {0x15D8, 0xDC, "AMD Radeon Vega 3 Graphics"}, + {0x15D8, 0xDD, "AMD Radeon Vega 3 Graphics"}, + {0x15D8, 0xDE, "AMD Radeon Vega 3 Graphics"}, + {0x15D8, 0xDF, "AMD Radeon Vega 3 Graphics"}, + {0x15D8, 0xE3, "AMD Radeon Vega 3 Graphics"}, + {0x15D8, 0xE4, "AMD Ryzen Embedded R1102G with Radeon Vega Gfx"}, + {0x15DD, 0x81, "AMD Ryzen Embedded V1807B with Radeon Vega Gfx"}, + {0x15DD, 0x82, "AMD Ryzen Embedded V1756B with Radeon Vega Gfx"}, + {0x15DD, 0x83, "AMD Ryzen Embedded V1605B with Radeon Vega Gfx"}, + {0x15DD, 0x84, "AMD Radeon Vega 6 Graphics"}, + {0x15DD, 0x85, "AMD Ryzen Embedded V1202B with Radeon Vega Gfx"}, + {0x15DD, 0x86, "AMD Radeon Vega 11 Graphics"}, + {0x15DD, 0x88, "AMD Radeon Vega 8 Graphics"}, + {0x15DD, 0xC1, "AMD Radeon Vega 11 Graphics"}, + {0x15DD, 0xC2, "AMD Radeon Vega 8 Graphics"}, + {0x15DD, 0xC3, "AMD Radeon Vega 3 / 10 Graphics"}, + {0x15DD, 0xC4, "AMD Radeon Vega 8 Graphics"}, + {0x15DD, 0xC5, "AMD Radeon Vega 3 Graphics"}, + {0x15DD, 0xC6, "AMD Radeon 
Vega 11 Graphics"}, + {0x15DD, 0xC8, "AMD Radeon Vega 8 Graphics"}, + {0x15DD, 0xC9, "AMD Radeon Vega 11 Graphics"}, + {0x15DD, 0xCA, "AMD Radeon Vega 8 Graphics"}, + {0x15DD, 0xCB, "AMD Radeon Vega 3 Graphics"}, + {0x15DD, 0xCC, "AMD Radeon Vega 6 Graphics"}, + {0x15DD, 0xCE, "AMD Radeon Vega 3 Graphics"}, + {0x15DD, 0xCF, "AMD Radeon Vega 3 Graphics"}, + {0x15DD, 0xD0, "AMD Radeon Vega 10 Graphics"}, + {0x15DD, 0xD1, "AMD Radeon Vega 8 Graphics"}, + {0x15DD, 0xD3, "AMD Radeon Vega 11 Graphics"}, + {0x15DD, 0xD5, "AMD Radeon Vega 8 Graphics"}, + {0x15DD, 0xD6, "AMD Radeon Vega 11 Graphics"}, + {0x15DD, 0xD7, "AMD Radeon Vega 8 Graphics"}, + {0x15DD, 0xD8, "AMD Radeon Vega 3 Graphics"}, + {0x15DD, 0xD9, "AMD Radeon Vega 6 Graphics"}, + {0x15DD, 0xE1, "AMD Radeon Vega 3 Graphics"}, + {0x15DD, 0xE2, "AMD Radeon Vega 3 Graphics"}, + {0x163F, 0xAE, "AMD Custom GPU 0405"}, + {0x6600, 0x00, "AMD Radeon HD 8600 / 8700M"}, + {0x6600, 0x81, "AMD Radeon R7 M370"}, + {0x6601, 0x00, "AMD Radeon HD 8500M / 8700M"}, + {0x6604, 0x00, "AMD Radeon R7 M265 Series"}, + {0x6604, 0x81, "AMD Radeon R7 M350"}, + {0x6605, 0x00, "AMD Radeon R7 M260 Series"}, + {0x6605, 0x81, "AMD Radeon R7 M340"}, + {0x6606, 0x00, "AMD Radeon HD 8790M"}, + {0x6607, 0x00, "AMD Radeon R5 M240"}, + {0x6608, 0x00, "AMD FirePro W2100"}, + {0x6610, 0x00, "AMD Radeon R7 200 Series"}, + {0x6610, 0x81, "AMD Radeon R7 350"}, + {0x6610, 0x83, "AMD Radeon R5 340"}, + {0x6610, 0x87, "AMD Radeon R7 200 Series"}, + {0x6611, 0x00, "AMD Radeon R7 200 Series"}, + {0x6611, 0x87, "AMD Radeon R7 200 Series"}, + {0x6613, 0x00, "AMD Radeon R7 200 Series"}, + {0x6617, 0x00, "AMD Radeon R7 240 Series"}, + {0x6617, 0x87, "AMD Radeon R7 200 Series"}, + {0x6617, 0xC7, "AMD Radeon R7 240 Series"}, + {0x6640, 0x00, "AMD Radeon HD 8950"}, + {0x6640, 0x80, "AMD Radeon R9 M380"}, + {0x6646, 0x00, "AMD Radeon R9 M280X"}, + {0x6646, 0x80, "AMD Radeon R9 M385"}, + {0x6646, 0x80, "AMD Radeon R9 M470X"}, + {0x6647, 0x00, "AMD Radeon R9 M200X 
Series"}, + {0x6647, 0x80, "AMD Radeon R9 M380"}, + {0x6649, 0x00, "AMD FirePro W5100"}, + {0x6658, 0x00, "AMD Radeon R7 200 Series"}, + {0x665C, 0x00, "AMD Radeon HD 7700 Series"}, + {0x665D, 0x00, "AMD Radeon R7 200 Series"}, + {0x665F, 0x81, "AMD Radeon R7 360 Series"}, + {0x6660, 0x00, "AMD Radeon HD 8600M Series"}, + {0x6660, 0x81, "AMD Radeon R5 M335"}, + {0x6660, 0x83, "AMD Radeon R5 M330"}, + {0x6663, 0x00, "AMD Radeon HD 8500M Series"}, + {0x6663, 0x83, "AMD Radeon R5 M320"}, + {0x6664, 0x00, "AMD Radeon R5 M200 Series"}, + {0x6665, 0x00, "AMD Radeon R5 M230 Series"}, + {0x6665, 0x83, "AMD Radeon R5 M320"}, + {0x6665, 0xC3, "AMD Radeon R5 M435"}, + {0x6666, 0x00, "AMD Radeon R5 M200 Series"}, + {0x6667, 0x00, "AMD Radeon R5 M200 Series"}, + {0x666F, 0x00, "AMD Radeon HD 8500M"}, + {0x66A1, 0x02, "AMD Instinct MI60 / MI50"}, + {0x66A1, 0x06, "AMD Radeon Pro VII"}, + {0x66AF, 0xC1, "AMD Radeon VII"}, + {0x6780, 0x00, "AMD FirePro W9000"}, + {0x6784, 0x00, "ATI FirePro V (FireGL V) Graphics Adapter"}, + {0x6788, 0x00, "ATI FirePro V (FireGL V) Graphics Adapter"}, + {0x678A, 0x00, "AMD FirePro W8000"}, + {0x6798, 0x00, "AMD Radeon R9 200 / HD 7900 Series"}, + {0x6799, 0x00, "AMD Radeon HD 7900 Series"}, + {0x679A, 0x00, "AMD Radeon HD 7900 Series"}, + {0x679B, 0x00, "AMD Radeon HD 7900 Series"}, + {0x679E, 0x00, "AMD Radeon HD 7800 Series"}, + {0x67A0, 0x00, "AMD Radeon FirePro W9100"}, + {0x67A1, 0x00, "AMD Radeon FirePro W8100"}, + {0x67B0, 0x00, "AMD Radeon R9 200 Series"}, + {0x67B0, 0x80, "AMD Radeon R9 390 Series"}, + {0x67B1, 0x00, "AMD Radeon R9 200 Series"}, + {0x67B1, 0x80, "AMD Radeon R9 390 Series"}, + {0x67B9, 0x00, "AMD Radeon R9 200 Series"}, + {0x67C0, 0x00, "AMD Radeon Pro WX 7100 Graphics"}, + {0x67C0, 0x80, "AMD Radeon E9550"}, + {0x67C2, 0x01, "AMD Radeon Pro V7350x2"}, + {0x67C2, 0x02, "AMD Radeon Pro V7300X"}, + {0x67C4, 0x00, "AMD Radeon Pro WX 7100 Graphics"}, + {0x67C4, 0x80, "AMD Radeon E9560 / E9565 Graphics"}, + {0x67C7, 0x00, "AMD 
Radeon Pro WX 5100 Graphics"}, + {0x67C7, 0x80, "AMD Radeon E9390 Graphics"}, + {0x67D0, 0x01, "AMD Radeon Pro V7350x2"}, + {0x67D0, 0x02, "AMD Radeon Pro V7300X"}, + {0x67DF, 0xC0, "AMD Radeon Pro 580X"}, + {0x67DF, 0xC1, "AMD Radeon RX 580 Series"}, + {0x67DF, 0xC2, "AMD Radeon RX 570 Series"}, + {0x67DF, 0xC3, "AMD Radeon RX 580 Series"}, + {0x67DF, 0xC4, "AMD Radeon RX 480 Graphics"}, + {0x67DF, 0xC5, "AMD Radeon RX 470 Graphics"}, + {0x67DF, 0xC6, "AMD Radeon RX 570 Series"}, + {0x67DF, 0xC7, "AMD Radeon RX 480 Graphics"}, + {0x67DF, 0xCF, "AMD Radeon RX 470 Graphics"}, + {0x67DF, 0xD7, "AMD Radeon RX 470 Graphics"}, + {0x67DF, 0xE0, "AMD Radeon RX 470 Series"}, + {0x67DF, 0xE1, "AMD Radeon RX 590 Series"}, + {0x67DF, 0xE3, "AMD Radeon RX Series"}, + {0x67DF, 0xE7, "AMD Radeon RX 580 Series"}, + {0x67DF, 0xEB, "AMD Radeon Pro 580X"}, + {0x67DF, 0xEF, "AMD Radeon RX 570 Series"}, + {0x67DF, 0xF7, "AMD Radeon RX P30PH"}, + {0x67DF, 0xFF, "AMD Radeon RX 470 Series"}, + {0x67E0, 0x00, "AMD Radeon Pro WX Series"}, + {0x67E3, 0x00, "AMD Radeon Pro WX 4100"}, + {0x67E8, 0x00, "AMD Radeon Pro WX Series"}, + {0x67E8, 0x01, "AMD Radeon Pro WX Series"}, + {0x67E8, 0x80, "AMD Radeon E9260 Graphics"}, + {0x67EB, 0x00, "AMD Radeon Pro V5300X"}, + {0x67EF, 0xC0, "AMD Radeon RX Graphics"}, + {0x67EF, 0xC1, "AMD Radeon RX 460 Graphics"}, + {0x67EF, 0xC2, "AMD Radeon Pro Series"}, + {0x67EF, 0xC3, "AMD Radeon RX Series"}, + {0x67EF, 0xC5, "AMD Radeon RX 460 Graphics"}, + {0x67EF, 0xC7, "AMD Radeon RX Graphics"}, + {0x67EF, 0xCF, "AMD Radeon RX 460 Graphics"}, + {0x67EF, 0xE0, "AMD Radeon RX 560 Series"}, + {0x67EF, 0xE1, "AMD Radeon RX Series"}, + {0x67EF, 0xE2, "AMD Radeon RX 560X"}, + {0x67EF, 0xE3, "AMD Radeon RX Series"}, + {0x67EF, 0xE5, "AMD Radeon RX 560 Series"}, + {0x67EF, 0xE7, "AMD Radeon RX 560 Series"}, + {0x67EF, 0xEF, "AMD Radeon 550 Series"}, + {0x67EF, 0xFF, "AMD Radeon RX 460 Graphics"}, + {0x67FF, 0xC0, "AMD Radeon Pro 465"}, + {0x67FF, 0xC1, "AMD Radeon RX 
560 Series"}, + {0x67FF, 0xCF, "AMD Radeon RX 560 Series"}, + {0x67FF, 0xEF, "AMD Radeon RX 560 Series"}, + {0x67FF, 0xFF, "AMD Radeon RX 550 Series"}, + {0x6800, 0x00, "AMD Radeon HD 7970M"}, + {0x6801, 0x00, "AMD Radeon HD 8970M"}, + {0x6806, 0x00, "AMD Radeon R9 M290X"}, + {0x6808, 0x00, "AMD FirePro W7000"}, + {0x6808, 0x00, "ATI FirePro V (FireGL V) Graphics Adapter"}, + {0x6809, 0x00, "ATI FirePro W5000"}, + {0x6810, 0x00, "AMD Radeon R9 200 Series"}, + {0x6810, 0x81, "AMD Radeon R9 370 Series"}, + {0x6811, 0x00, "AMD Radeon R9 200 Series"}, + {0x6811, 0x81, "AMD Radeon R7 370 Series"}, + {0x6818, 0x00, "AMD Radeon HD 7800 Series"}, + {0x6819, 0x00, "AMD Radeon HD 7800 Series"}, + {0x6820, 0x00, "AMD Radeon R9 M275X"}, + {0x6820, 0x81, "AMD Radeon R9 M375"}, + {0x6820, 0x83, "AMD Radeon R9 M375X"}, + {0x6821, 0x00, "AMD Radeon R9 M200X Series"}, + {0x6821, 0x83, "AMD Radeon R9 M370X"}, + {0x6821, 0x87, "AMD Radeon R7 M380"}, + {0x6822, 0x00, "AMD Radeon E8860"}, + {0x6823, 0x00, "AMD Radeon R9 M200X Series"}, + {0x6825, 0x00, "AMD Radeon HD 7800M Series"}, + {0x6826, 0x00, "AMD Radeon HD 7700M Series"}, + {0x6827, 0x00, "AMD Radeon HD 7800M Series"}, + {0x6828, 0x00, "AMD FirePro W600"}, + {0x682B, 0x00, "AMD Radeon HD 8800M Series"}, + {0x682B, 0x87, "AMD Radeon R9 M360"}, + {0x682C, 0x00, "AMD FirePro W4100"}, + {0x682D, 0x00, "AMD Radeon HD 7700M Series"}, + {0x682F, 0x00, "AMD Radeon HD 7700M Series"}, + {0x6830, 0x00, "AMD Radeon 7800M Series"}, + {0x6831, 0x00, "AMD Radeon 7700M Series"}, + {0x6835, 0x00, "AMD Radeon R7 Series / HD 9000 Series"}, + {0x6837, 0x00, "AMD Radeon HD 7700 Series"}, + {0x683D, 0x00, "AMD Radeon HD 7700 Series"}, + {0x683F, 0x00, "AMD Radeon HD 7700 Series"}, + {0x684C, 0x00, "ATI FirePro V (FireGL V) Graphics Adapter"}, + {0x6860, 0x00, "AMD Radeon Instinct MI25"}, + {0x6860, 0x01, "AMD Radeon Instinct MI25"}, + {0x6860, 0x02, "AMD Radeon Instinct MI25"}, + {0x6860, 0x03, "AMD Radeon Pro V340"}, + {0x6860, 0x04, "AMD Radeon 
Instinct MI25x2"}, + {0x6860, 0x07, "AMD Radeon Pro V320"}, + {0x6861, 0x00, "AMD Radeon Pro WX 9100"}, + {0x6862, 0x00, "AMD Radeon Pro SSG"}, + {0x6863, 0x00, "AMD Radeon Vega Frontier Edition"}, + {0x6864, 0x03, "AMD Radeon Pro V340"}, + {0x6864, 0x04, "AMD Radeon Instinct MI25x2"}, + {0x6864, 0x05, "AMD Radeon Pro V340"}, + {0x6868, 0x00, "AMD Radeon Pro WX 8200"}, + {0x686C, 0x00, "AMD Radeon Instinct MI25 MxGPU"}, + {0x686C, 0x01, "AMD Radeon Instinct MI25 MxGPU"}, + {0x686C, 0x02, "AMD Radeon Instinct MI25 MxGPU"}, + {0x686C, 0x03, "AMD Radeon Pro V340 MxGPU"}, + {0x686C, 0x04, "AMD Radeon Instinct MI25x2 MxGPU"}, + {0x686C, 0x05, "AMD Radeon Pro V340L MxGPU"}, + {0x686C, 0x06, "AMD Radeon Instinct MI25 MxGPU"}, + {0x687F, 0x01, "AMD Radeon RX Vega"}, + {0x687F, 0xC0, "AMD Radeon RX Vega"}, + {0x687F, 0xC1, "AMD Radeon RX Vega"}, + {0x687F, 0xC3, "AMD Radeon RX Vega"}, + {0x687F, 0xC7, "AMD Radeon RX Vega"}, + {0x6900, 0x00, "AMD Radeon R7 M260"}, + {0x6900, 0x81, "AMD Radeon R7 M360"}, + {0x6900, 0x83, "AMD Radeon R7 M340"}, + {0x6900, 0xC1, "AMD Radeon R5 M465 Series"}, + {0x6900, 0xC3, "AMD Radeon R5 M445 Series"}, + {0x6900, 0xD1, "AMD Radeon 530 Series"}, + {0x6900, 0xD3, "AMD Radeon 530 Series"}, + {0x6901, 0x00, "AMD Radeon R5 M255"}, + {0x6902, 0x00, "AMD Radeon Series"}, + {0x6907, 0x00, "AMD Radeon R5 M255"}, + {0x6907, 0x87, "AMD Radeon R5 M315"}, + {0x6920, 0x00, "AMD Radeon R9 M395X"}, + {0x6920, 0x01, "AMD Radeon R9 M390X"}, + {0x6921, 0x00, "AMD Radeon R9 M390X"}, + {0x6929, 0x00, "AMD FirePro S7150"}, + {0x6929, 0x01, "AMD FirePro S7100X"}, + {0x692B, 0x00, "AMD FirePro W7100"}, + {0x6938, 0x00, "AMD Radeon R9 200 Series"}, + {0x6938, 0xF0, "AMD Radeon R9 200 Series"}, + {0x6938, 0xF1, "AMD Radeon R9 380 Series"}, + {0x6939, 0x00, "AMD Radeon R9 200 Series"}, + {0x6939, 0xF0, "AMD Radeon R9 200 Series"}, + {0x6939, 0xF1, "AMD Radeon R9 380 Series"}, + {0x694C, 0xC0, "AMD Radeon RX Vega M GH Graphics"}, + {0x694E, 0xC0, "AMD Radeon RX Vega M 
GL Graphics"}, + {0x6980, 0x00, "AMD Radeon Pro WX 3100"}, + {0x6981, 0x00, "AMD Radeon Pro WX 3200 Series"}, + {0x6981, 0x01, "AMD Radeon Pro WX 3200 Series"}, + {0x6981, 0x10, "AMD Radeon Pro WX 3200 Series"}, + {0x6985, 0x00, "AMD Radeon Pro WX 3100"}, + {0x6986, 0x00, "AMD Radeon Pro WX 2100"}, + {0x6987, 0x80, "AMD Embedded Radeon E9171"}, + {0x6987, 0xC0, "AMD Radeon 550X Series"}, + {0x6987, 0xC1, "AMD Radeon RX 640"}, + {0x6987, 0xC3, "AMD Radeon 540X Series"}, + {0x6987, 0xC7, "AMD Radeon 540"}, + {0x6995, 0x00, "AMD Radeon Pro WX 2100"}, + {0x6997, 0x00, "AMD Radeon Pro WX 2100"}, + {0x699F, 0x81, "AMD Embedded Radeon E9170 Series"}, + {0x699F, 0xC0, "AMD Radeon 500 Series"}, + {0x699F, 0xC1, "AMD Radeon 540 Series"}, + {0x699F, 0xC3, "AMD Radeon 500 Series"}, + {0x699F, 0xC7, "AMD Radeon RX 550 / 550 Series"}, + {0x699F, 0xC9, "AMD Radeon 540"}, + {0x6FDF, 0xE7, "AMD Radeon RX 590 GME"}, + {0x6FDF, 0xEF, "AMD Radeon RX 580 2048SP"}, + {0x7300, 0xC1, "AMD FirePro S9300 x2"}, + {0x7300, 0xC8, "AMD Radeon R9 Fury Series"}, + {0x7300, 0xC9, "AMD Radeon Pro Duo"}, + {0x7300, 0xCA, "AMD Radeon R9 Fury Series"}, + {0x7300, 0xCB, "AMD Radeon R9 Fury Series"}, + {0x7312, 0x00, "AMD Radeon Pro W5700"}, + {0x731E, 0xC6, "AMD Radeon RX 5700XTB"}, + {0x731E, 0xC7, "AMD Radeon RX 5700B"}, + {0x731F, 0xC0, "AMD Radeon RX 5700 XT 50th Anniversary"}, + {0x731F, 0xC1, "AMD Radeon RX 5700 XT"}, + {0x731F, 0xC2, "AMD Radeon RX 5600M"}, + {0x731F, 0xC3, "AMD Radeon RX 5700M"}, + {0x731F, 0xC4, "AMD Radeon RX 5700"}, + {0x731F, 0xC5, "AMD Radeon RX 5700 XT"}, + {0x731F, 0xCA, "AMD Radeon RX 5600 XT"}, + {0x731F, 0xCB, "AMD Radeon RX 5600 OEM"}, + {0x7340, 0xC1, "AMD Radeon RX 5500M"}, + {0x7340, 0xC3, "AMD Radeon RX 5300M"}, + {0x7340, 0xC5, "AMD Radeon RX 5500 XT"}, + {0x7340, 0xC7, "AMD Radeon RX 5500"}, + {0x7340, 0xC9, "AMD Radeon RX 5500XTB"}, + {0x7340, 0xCF, "AMD Radeon RX 5300"}, + {0x7341, 0x00, "AMD Radeon Pro W5500"}, + {0x7347, 0x00, "AMD Radeon Pro W5500M"}, + 
{0x7360, 0x41, "AMD Radeon Pro 5600M"}, + {0x7360, 0xC3, "AMD Radeon Pro V520"}, + {0x738C, 0x01, "AMD Instinct MI100"}, + {0x73A3, 0x00, "AMD Radeon Pro W6800"}, + {0x73A5, 0xC0, "AMD Radeon RX 6950 XT"}, + {0x73AF, 0xC0, "AMD Radeon RX 6900 XT"}, + {0x73BF, 0xC0, "AMD Radeon RX 6900 XT"}, + {0x73BF, 0xC1, "AMD Radeon RX 6800 XT"}, + {0x73BF, 0xC3, "AMD Radeon RX 6800"}, + {0x73DF, 0xC0, "AMD Radeon RX 6750 XT"}, + {0x73DF, 0xC1, "AMD Radeon RX 6700 XT"}, + {0x73DF, 0xC2, "AMD Radeon RX 6800M"}, + {0x73DF, 0xC3, "AMD Radeon RX 6800M"}, + {0x73DF, 0xC5, "AMD Radeon RX 6700 XT"}, + {0x73DF, 0xCF, "AMD Radeon RX 6700M"}, + {0x73DF, 0xD7, "AMD TDC-235"}, + {0x73E1, 0x00, "AMD Radeon Pro W6600M"}, + {0x73E3, 0x00, "AMD Radeon Pro W6600"}, + {0x73EF, 0xC0, "AMD Radeon RX 6800S"}, + {0x73EF, 0xC1, "AMD Radeon RX 6650 XT"}, + {0x73EF, 0xC2, "AMD Radeon RX 6700S"}, + {0x73EF, 0xC3, "AMD Radeon RX 6650M"}, + {0x73EF, 0xC4, "AMD Radeon RX 6650M XT"}, + {0x73FF, 0xC1, "AMD Radeon RX 6600 XT"}, + {0x73FF, 0xC3, "AMD Radeon RX 6600M"}, + {0x73FF, 0xC7, "AMD Radeon RX 6600"}, + {0x73FF, 0xCB, "AMD Radeon RX 6600S"}, + {0x7408, 0x00, "AMD Instinct MI250X"}, + {0x740C, 0x01, "AMD Instinct MI250X / MI250"}, + {0x740F, 0x02, "AMD Instinct MI210"}, + {0x7421, 0x00, "AMD Radeon Pro W6500M"}, + {0x7422, 0x00, "AMD Radeon Pro W6400"}, + {0x7423, 0x00, "AMD Radeon Pro W6300M"}, + {0x7423, 0x01, "AMD Radeon Pro W6300"}, + {0x7424, 0x00, "AMD Radeon RX 6300"}, + {0x743F, 0xC1, "AMD Radeon RX 6500 XT"}, + {0x743F, 0xC3, "AMD Radeon RX 6500"}, + {0x743F, 0xC3, "AMD Radeon RX 6500M"}, + {0x743F, 0xC7, "AMD Radeon RX 6400"}, + {0x743F, 0xCF, "AMD Radeon RX 6300M"}, + {0x744C, 0xC8, "AMD Radeon RX 7900 XTX"}, + {0x744C, 0xCC, "AMD Radeon RX 7900 XT"}, + {0x7480, 0xC1, "AMD Radeon RX 7700S"}, + {0x7480, 0xC3, "AMD Radeon RX 7600S"}, + {0x7480, 0xC7, "AMD Radeon RX 7600M XT"}, + {0x7483, 0xCF, "AMD Radeon RX 7600M"}, + {0x9830, 0x00, "AMD Radeon HD 8400 / R3 Series"}, + {0x9831, 0x00, "AMD Radeon 
HD 8400E"}, + {0x9832, 0x00, "AMD Radeon HD 8330"}, + {0x9833, 0x00, "AMD Radeon HD 8330E"}, + {0x9834, 0x00, "AMD Radeon HD 8210"}, + {0x9835, 0x00, "AMD Radeon HD 8210E"}, + {0x9836, 0x00, "AMD Radeon HD 8200 / R3 Series"}, + {0x9837, 0x00, "AMD Radeon HD 8280E"}, + {0x9838, 0x00, "AMD Radeon HD 8200 / R3 series"}, + {0x9839, 0x00, "AMD Radeon HD 8180"}, + {0x983D, 0x00, "AMD Radeon HD 8250"}, + {0x9850, 0x00, "AMD Radeon R3 Graphics"}, + {0x9850, 0x03, "AMD Radeon R3 Graphics"}, + {0x9850, 0x40, "AMD Radeon R2 Graphics"}, + {0x9850, 0x45, "AMD Radeon R3 Graphics"}, + {0x9851, 0x00, "AMD Radeon R4 Graphics"}, + {0x9851, 0x01, "AMD Radeon R5E Graphics"}, + {0x9851, 0x05, "AMD Radeon R5 Graphics"}, + {0x9851, 0x06, "AMD Radeon R5E Graphics"}, + {0x9851, 0x40, "AMD Radeon R4 Graphics"}, + {0x9851, 0x45, "AMD Radeon R5 Graphics"}, + {0x9852, 0x00, "AMD Radeon R2 Graphics"}, + {0x9852, 0x40, "AMD Radeon E1 Graphics"}, + {0x9853, 0x00, "AMD Radeon R2 Graphics"}, + {0x9853, 0x01, "AMD Radeon R4E Graphics"}, + {0x9853, 0x03, "AMD Radeon R2 Graphics"}, + {0x9853, 0x05, "AMD Radeon R1E Graphics"}, + {0x9853, 0x06, "AMD Radeon R1E Graphics"}, + {0x9853, 0x07, "AMD Radeon R1E Graphics"}, + {0x9853, 0x08, "AMD Radeon R1E Graphics"}, + {0x9853, 0x40, "AMD Radeon R2 Graphics"}, + {0x9854, 0x00, "AMD Radeon R3 Graphics"}, + {0x9854, 0x01, "AMD Radeon R3E Graphics"}, + {0x9854, 0x02, "AMD Radeon R3 Graphics"}, + {0x9854, 0x05, "AMD Radeon R2 Graphics"}, + {0x9854, 0x06, "AMD Radeon R4 Graphics"}, + {0x9854, 0x07, "AMD Radeon R3 Graphics"}, + {0x9855, 0x02, "AMD Radeon R6 Graphics"}, + {0x9855, 0x05, "AMD Radeon R4 Graphics"}, + {0x9856, 0x00, "AMD Radeon R2 Graphics"}, + {0x9856, 0x01, "AMD Radeon R2E Graphics"}, + {0x9856, 0x02, "AMD Radeon R2 Graphics"}, + {0x9856, 0x05, "AMD Radeon R1E Graphics"}, + {0x9856, 0x06, "AMD Radeon R2 Graphics"}, + {0x9856, 0x07, "AMD Radeon R1E Graphics"}, + {0x9856, 0x08, "AMD Radeon R1E Graphics"}, + {0x9856, 0x13, "AMD Radeon R1E Graphics"}, + 
{0x9874, 0x81, "AMD Radeon R6 Graphics"}, + {0x9874, 0x84, "AMD Radeon R7 Graphics"}, + {0x9874, 0x85, "AMD Radeon R6 Graphics"}, + {0x9874, 0x87, "AMD Radeon R5 Graphics"}, + {0x9874, 0x88, "AMD Radeon R7E Graphics"}, + {0x9874, 0x89, "AMD Radeon R6E Graphics"}, + {0x9874, 0xC4, "AMD Radeon R7 Graphics"}, + {0x9874, 0xC5, "AMD Radeon R6 Graphics"}, + {0x9874, 0xC6, "AMD Radeon R6 Graphics"}, + {0x9874, 0xC7, "AMD Radeon R5 Graphics"}, + {0x9874, 0xC8, "AMD Radeon R7 Graphics"}, + {0x9874, 0xC9, "AMD Radeon R7 Graphics"}, + {0x9874, 0xCA, "AMD Radeon R5 Graphics"}, + {0x9874, 0xCB, "AMD Radeon R5 Graphics"}, + {0x9874, 0xCC, "AMD Radeon R7 Graphics"}, + {0x9874, 0xCD, "AMD Radeon R7 Graphics"}, + {0x9874, 0xCE, "AMD Radeon R5 Graphics"}, + {0x9874, 0xE1, "AMD Radeon R7 Graphics"}, + {0x9874, 0xE2, "AMD Radeon R7 Graphics"}, + {0x9874, 0xE3, "AMD Radeon R7 Graphics"}, + {0x9874, 0xE4, "AMD Radeon R7 Graphics"}, + {0x9874, 0xE5, "AMD Radeon R5 Graphics"}, + {0x9874, 0xE6, "AMD Radeon R5 Graphics"}, + {0x98E4, 0x80, "AMD Radeon R5E Graphics"}, + {0x98E4, 0x81, "AMD Radeon R4E Graphics"}, + {0x98E4, 0x83, "AMD Radeon R2E Graphics"}, + {0x98E4, 0x84, "AMD Radeon R2E Graphics"}, + {0x98E4, 0x86, "AMD Radeon R1E Graphics"}, + {0x98E4, 0xC0, "AMD Radeon R4 Graphics"}, + {0x98E4, 0xC1, "AMD Radeon R5 Graphics"}, + {0x98E4, 0xC2, "AMD Radeon R4 Graphics"}, + {0x98E4, 0xC4, "AMD Radeon R5 Graphics"}, + {0x98E4, 0xC6, "AMD Radeon R5 Graphics"}, + {0x98E4, 0xC8, "AMD Radeon R4 Graphics"}, + {0x98E4, 0xC9, "AMD Radeon R4 Graphics"}, + {0x98E4, 0xCA, "AMD Radeon R5 Graphics"}, + {0x98E4, 0xD0, "AMD Radeon R2 Graphics"}, + {0x98E4, 0xD1, "AMD Radeon R2 Graphics"}, + {0x98E4, 0xD2, "AMD Radeon R2 Graphics"}, + {0x98E4, 0xD4, "AMD Radeon R2 Graphics"}, + {0x98E4, 0xD9, "AMD Radeon R5 Graphics"}, + {0x98E4, 0xDA, "AMD Radeon R5 Graphics"}, + {0x98E4, 0xDB, "AMD Radeon R3 Graphics"}, + {0x98E4, 0xE1, "AMD Radeon R3 Graphics"}, + {0x98E4, 0xE2, "AMD Radeon R3 Graphics"}, + {0x98E4, 
0xE9, "AMD Radeon R4 Graphics"}, + {0x98E4, 0xEA, "AMD Radeon R4 Graphics"}, + {0x98E4, 0xEB, "AMD Radeon R3 Graphics"}, + {0x98E4, 0xEC, "AMD Radeon R4 Graphics"}, + {0x0000, 0x00, "unknown AMD GPU"} // this must always be the last item +}; + +struct card { + const char *pathname; + struct amdgpu_id_struct id; + + /* GPU and VRAM utilizations */ + + const char *pathname_util_gpu; + RRDSET *st_util_gpu; + RRDDIM *rd_util_gpu; + collected_number util_gpu; + + const char *pathname_util_mem; + RRDSET *st_util_mem; + RRDDIM *rd_util_mem; + collected_number util_mem; + + + /* GPU and VRAM clock frequencies */ + + const char *pathname_clk_gpu; + procfile *ff_clk_gpu; + RRDSET *st_clk_gpu; + RRDDIM *rd_clk_gpu; + collected_number clk_gpu; + + const char *pathname_clk_mem; + procfile *ff_clk_mem; + RRDSET *st_clk_mem; + RRDDIM *rd_clk_mem; + collected_number clk_mem; + + + /* GPU memory usage */ + + const char *pathname_mem_used_vram; + const char *pathname_mem_total_vram; + + RRDSET *st_mem_usage_perc_vram; + RRDDIM *rd_mem_used_perc_vram; + + RRDSET *st_mem_usage_vram; + RRDDIM *rd_mem_used_vram; + RRDDIM *rd_mem_free_vram; + + collected_number used_vram; + collected_number total_vram; + + + const char *pathname_mem_used_vis_vram; + const char *pathname_mem_total_vis_vram; + + RRDSET *st_mem_usage_perc_vis_vram; + RRDDIM *rd_mem_used_perc_vis_vram; + + RRDSET *st_mem_usage_vis_vram; + RRDDIM *rd_mem_used_vis_vram; + RRDDIM *rd_mem_free_vis_vram; + + collected_number used_vis_vram; + collected_number total_vis_vram; + + + const char *pathname_mem_used_gtt; + const char *pathname_mem_total_gtt; + + RRDSET *st_mem_usage_perc_gtt; + RRDDIM *rd_mem_used_perc_gtt; + + RRDSET *st_mem_usage_gtt; + RRDDIM *rd_mem_used_gtt; + RRDDIM *rd_mem_free_gtt; + + collected_number used_gtt; + collected_number total_gtt; + + struct do_rrd_x *do_rrd_x_root; + + struct card *next; +}; +static struct card *card_root = NULL; + +static void card_free(struct card *c){ + if(c->pathname) freez((void 
*) c->pathname); + if(c->id.marketing_name) freez((void *) c->id.marketing_name); + + /* remove card from linked list */ + if(c == card_root) card_root = c->next; + else { + struct card *last; + for(last = card_root; last && last->next != c; last = last->next); + if(last) last->next = c->next; + } + + freez(c); +} + +static int check_card_is_amdgpu(const char *const pathname){ + int rc = -1; + + procfile *ff = procfile_open(pathname, " ", PROCFILE_FLAG_NO_ERROR_ON_FILE_IO); + if(unlikely(!ff)){ + rc = -1; + goto cleanup; + } + + ff = procfile_readall(ff); + if(unlikely(!ff || procfile_lines(ff) < 1 || procfile_linewords(ff, 0) < 1)){ + rc = -2; + goto cleanup; + } + + for(size_t l = 0; l < procfile_lines(ff); l++) { + if(!strcmp(procfile_lineword(ff, l, 0), "DRIVER=amdgpu")){ + rc = 0; + goto cleanup; + } + } + + rc = -3; // no match + +cleanup: + procfile_close(ff); + return rc; +} + +static int read_clk_freq_file(procfile **p_ff, const char *const pathname, collected_number *num){ + if(unlikely(!*p_ff)){ + *p_ff = procfile_open(pathname, NULL, PROCFILE_FLAG_NO_ERROR_ON_FILE_IO); + if(unlikely(!*p_ff)) return -2; + } + + if(unlikely(NULL == (*p_ff = procfile_readall(*p_ff)))) return -3; + + for(size_t l = 0; l < procfile_lines(*p_ff) ; l++) { + char *str_with_units = NULL; + if((*p_ff)->lines->lines[l].words >= 3 && !strcmp(procfile_lineword((*p_ff), l, 2), "*")) //format: X: collected_number * + str_with_units = procfile_lineword((*p_ff), l, 1); + else if ((*p_ff)->lines->lines[l].words == 2 && !strcmp(procfile_lineword((*p_ff), l, 1), "*")) //format: collected_number * + str_with_units = procfile_lineword((*p_ff), l, 0); + + if (str_with_units) { + char *delim = strchr(str_with_units, 'M'); // value is reported with units, e.g. "300Mhz" + char str_without_units[10]; + if(unlikely(!delim || (size_t)(delim - str_with_units) >= sizeof(str_without_units))) continue; + size_t len = (size_t)(delim - str_with_units); + memcpy(str_without_units, str_with_units, len); + str_without_units[len] = '\0'; // memcpy() does not NUL-terminate the buffer for str2ll() + *num = str2ll(str_without_units, NULL); + return 0; + } + } + + procfile_close((*p_ff)); + return -4; +} + +static char *set_id(const char *const suf_1, const char *const
suf_2, const char *const suf_3){ + static char id[RRD_ID_LENGTH_MAX + 1]; + snprintfz(id, RRD_ID_LENGTH_MAX, "%s_%s_%s", suf_1, suf_2, suf_3); + return id; +} + +typedef int (*do_rrd_x_func)(struct card *const c); + +struct do_rrd_x { + do_rrd_x_func func; + struct do_rrd_x *next; +}; + +static void add_do_rrd_x(struct card *const c, const do_rrd_x_func func){ + struct do_rrd_x *const drrd = callocz(1, sizeof(struct do_rrd_x)); + drrd->func = func; + drrd->next = c->do_rrd_x_root; + c->do_rrd_x_root = drrd; +} + +static void rm_do_rrd_x(struct card *const c, struct do_rrd_x *const drrd){ + if(drrd == c->do_rrd_x_root) c->do_rrd_x_root = drrd->next; + else { + struct do_rrd_x *last; + for(last = c->do_rrd_x_root; last && last->next != drrd; last = last->next); + if(last) last->next = drrd->next; + } + + freez(drrd); +} + +static int do_rrd_util_gpu(struct card *const c){ + if(likely(!read_single_number_file(c->pathname_util_gpu, (unsigned long long *) &c->util_gpu))){ + rrddim_set_by_pointer(c->st_util_gpu, c->rd_util_gpu, c->util_gpu); + rrdset_done(c->st_util_gpu); + return 0; + } + else { + collector_error("Cannot read util_gpu for %s: [%s]", c->pathname, c->id.marketing_name); + freez((void *) c->pathname_util_gpu); + rrdset_is_obsolete___safe_from_collector_thread(c->st_util_gpu); + return 1; + } +} + +static int do_rrd_util_mem(struct card *const c){ + if(likely(!read_single_number_file(c->pathname_util_mem, (unsigned long long *) &c->util_mem))){ + rrddim_set_by_pointer(c->st_util_mem, c->rd_util_mem, c->util_mem); + rrdset_done(c->st_util_mem); + return 0; + } + else { + collector_error("Cannot read util_mem for %s: [%s]", c->pathname, c->id.marketing_name); + freez((void *) c->pathname_util_mem); + rrdset_is_obsolete___safe_from_collector_thread(c->st_util_mem); + return 1; + } +} + +static int do_rrd_clk_gpu(struct card *const c){ + if(likely(!read_clk_freq_file(&c->ff_clk_gpu, (char *) c->pathname_clk_gpu, &c->clk_gpu))){ + 
rrddim_set_by_pointer(c->st_clk_gpu, c->rd_clk_gpu, c->clk_gpu); + rrdset_done(c->st_clk_gpu); + return 0; + } + else { + collector_error("Cannot read clk_gpu for %s: [%s]", c->pathname, c->id.marketing_name); + freez((void *) c->pathname_clk_gpu); + rrdset_is_obsolete___safe_from_collector_thread(c->st_clk_gpu); + return 1; + } +} + +static int do_rrd_clk_mem(struct card *const c){ + if(likely(!read_clk_freq_file(&c->ff_clk_mem, (char *) c->pathname_clk_mem, &c->clk_mem))){ + rrddim_set_by_pointer(c->st_clk_mem, c->rd_clk_mem, c->clk_mem); + rrdset_done(c->st_clk_mem); + return 0; + } + else { + collector_error("Cannot read clk_mem for %s: [%s]", c->pathname, c->id.marketing_name); + freez((void *) c->pathname_clk_mem); + rrdset_is_obsolete___safe_from_collector_thread(c->st_clk_mem); + return 1; + } +} + +static int do_rrd_vram(struct card *const c){ + if(likely(!read_single_number_file(c->pathname_mem_used_vram, (unsigned long long *) &c->used_vram) && + c->total_vram)){ + rrddim_set_by_pointer( c->st_mem_usage_perc_vram, + c->rd_mem_used_perc_vram, + c->used_vram * 10000 / c->total_vram); + rrdset_done(c->st_mem_usage_perc_vram); + + rrddim_set_by_pointer(c->st_mem_usage_vram, c->rd_mem_used_vram, c->used_vram); + rrddim_set_by_pointer(c->st_mem_usage_vram, c->rd_mem_free_vram, c->total_vram - c->used_vram); + rrdset_done(c->st_mem_usage_vram); + return 0; + } + else { + collector_error("Cannot read used_vram for %s: [%s]", c->pathname, c->id.marketing_name); + freez((void *) c->pathname_mem_used_vram); + freez((void *) c->pathname_mem_total_vram); + rrdset_is_obsolete___safe_from_collector_thread(c->st_mem_usage_perc_vram); + rrdset_is_obsolete___safe_from_collector_thread(c->st_mem_usage_vram); + return 1; + } +} + +static int do_rrd_vis_vram(struct card *const c){ + if(likely(!read_single_number_file(c->pathname_mem_used_vis_vram, (unsigned long long *) &c->used_vis_vram) && + c->total_vis_vram)){ + rrddim_set_by_pointer( c->st_mem_usage_perc_vis_vram, + 
c->rd_mem_used_perc_vis_vram, + c->used_vis_vram * 10000 / c->total_vis_vram); + rrdset_done(c->st_mem_usage_perc_vis_vram); + + rrddim_set_by_pointer(c->st_mem_usage_vis_vram, c->rd_mem_used_vis_vram, c->used_vis_vram); + rrddim_set_by_pointer(c->st_mem_usage_vis_vram, c->rd_mem_free_vis_vram, c->total_vis_vram - c->used_vis_vram); + rrdset_done(c->st_mem_usage_vis_vram); + return 0; + } + else { + collector_error("Cannot read used_vis_vram for %s: [%s]", c->pathname, c->id.marketing_name); + freez((void *) c->pathname_mem_used_vis_vram); + freez((void *) c->pathname_mem_total_vis_vram); + rrdset_is_obsolete___safe_from_collector_thread(c->st_mem_usage_perc_vis_vram); + rrdset_is_obsolete___safe_from_collector_thread(c->st_mem_usage_vis_vram); + return 1; + } +} + +static int do_rrd_gtt(struct card *const c){ + if(likely(!read_single_number_file(c->pathname_mem_used_gtt, (unsigned long long *) &c->used_gtt) && + c->total_gtt)){ + rrddim_set_by_pointer( c->st_mem_usage_perc_gtt, + c->rd_mem_used_perc_gtt, + c->used_gtt * 10000 / c->total_gtt); + rrdset_done(c->st_mem_usage_perc_gtt); + + rrddim_set_by_pointer(c->st_mem_usage_gtt, c->rd_mem_used_gtt, c->used_gtt); + rrddim_set_by_pointer(c->st_mem_usage_gtt, c->rd_mem_free_gtt, c->total_gtt - c->used_gtt); + rrdset_done(c->st_mem_usage_gtt); + return 0; + } + else { + collector_error("Cannot read used_gtt for %s: [%s]", c->pathname, c->id.marketing_name); + freez((void *) c->pathname_mem_used_gtt); + freez((void *) c->pathname_mem_total_gtt); + rrdset_is_obsolete___safe_from_collector_thread(c->st_mem_usage_perc_gtt); + rrdset_is_obsolete___safe_from_collector_thread(c->st_mem_usage_gtt); + return 1; + } +} + +int do_sys_class_drm(int update_every, usec_t dt) { + (void)dt; + + static DIR *drm_dir = NULL; + + int chart_prio = NETDATA_CHART_PRIO_DRM_AMDGPU; + + if(unlikely(!drm_dir)) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/class/drm"); + 
char *drm_dir_name = config_get(CONFIG_SECTION_PLUGIN_PROC_DRM, "directory to monitor", filename); + if(unlikely(NULL == (drm_dir = opendir(drm_dir_name)))){ + collector_error("Cannot read directory '%s'", drm_dir_name); + return 1; + } + + struct dirent *de = NULL; + while(likely(de = readdir(drm_dir))) { + if( de->d_type == DT_DIR && ((de->d_name[0] == '.' && de->d_name[1] == '\0') || + (de->d_name[0] == '.' && de->d_name[1] == '.' && de->d_name[2] == '\0'))) continue; + + if(de->d_type == DT_LNK && !strncmp(de->d_name, "card", 4) && !strchr(de->d_name, '-')) { + snprintfz(filename, FILENAME_MAX, "%s/%s/%s", drm_dir_name, de->d_name, "device/uevent"); + if(check_card_is_amdgpu(filename)) continue; + + /* Get static info */ + + struct card *const c = callocz(1, sizeof(struct card)); + snprintfz(filename, FILENAME_MAX, "%s/%s", drm_dir_name, de->d_name); + c->pathname = strdupz(filename); + + snprintfz(filename, FILENAME_MAX, "%s/%s", c->pathname, "device/device"); + if(read_single_base64_or_hex_number_file(filename, &c->id.asic_id)){ + collector_error("Cannot read asic_id from '%s'", filename); + card_free(c); + continue; + } + + snprintfz(filename, FILENAME_MAX, "%s/%s", c->pathname, "device/revision"); + if(read_single_base64_or_hex_number_file(filename, &c->id.pci_rev_id)){ + collector_error("Cannot read pci_rev_id from '%s'", filename); + card_free(c); + continue; + } + + for(int i = 0; amdgpu_ids[i].asic_id; i++){ + if(c->id.asic_id == amdgpu_ids[i].asic_id && c->id.pci_rev_id == amdgpu_ids[i].pci_rev_id){ + c->id.marketing_name = strdupz(amdgpu_ids[i].marketing_name); + break; + } + } + if(!c->id.marketing_name) + c->id.marketing_name = strdupz(amdgpu_ids[sizeof(amdgpu_ids)/sizeof(amdgpu_ids[0]) - 1].marketing_name); + + + collected_number tmp_val; + #define set_prop_pathname(prop_filename, prop_pathname, p_ff) do { \ + snprintfz(filename, FILENAME_MAX, "%s/%s", c->pathname, prop_filename); \ + if((p_ff && !read_clk_freq_file(p_ff, filename, &tmp_val)) || \ 
+ !read_single_number_file(filename, (unsigned long long *) &tmp_val)) \ + prop_pathname = strdupz(filename); \ + else \ + collector_info("Cannot read file '%s'", filename); \ + } while(0) + + /* Initialize GPU and VRAM utilization metrics */ + + set_prop_pathname("device/gpu_busy_percent", c->pathname_util_gpu, NULL); + + if(c->pathname_util_gpu){ + c->st_util_gpu = rrdset_create_localhost( + AMDGPU_CHART_TYPE + , set_id("gpu_utilization", c->id.marketing_name, de->d_name) + , NULL + , "utilization" + , AMDGPU_CHART_TYPE ".gpu_utilization" + , "GPU utilization" + , "percentage" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DRM_NAME + , chart_prio++ + , update_every + , RRDSET_TYPE_LINE + ); + + rrdlabels_add(c->st_util_gpu->rrdlabels, "product_name", c->id.marketing_name, RRDLABEL_SRC_AUTO); + + c->rd_util_gpu = rrddim_add(c->st_util_gpu, "utilization", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + add_do_rrd_x(c, do_rrd_util_gpu); + } + + set_prop_pathname("device/mem_busy_percent", c->pathname_util_mem, NULL); + + if(c->pathname_util_mem){ + c->st_util_mem = rrdset_create_localhost( + AMDGPU_CHART_TYPE + , set_id("gpu_mem_utilization", c->id.marketing_name, de->d_name) + , NULL + , "utilization" + , AMDGPU_CHART_TYPE ".gpu_mem_utilization" + , "GPU memory utilization" + , "percentage" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DRM_NAME + , chart_prio++ + , update_every + , RRDSET_TYPE_LINE + ); + + rrdlabels_add(c->st_util_mem->rrdlabels, "product_name", c->id.marketing_name, RRDLABEL_SRC_AUTO); + + c->rd_util_mem = rrddim_add(c->st_util_mem, "utilization", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + add_do_rrd_x(c, do_rrd_util_mem); + } + + + /* Initialize GPU and VRAM clock frequency metrics */ + + set_prop_pathname("device/pp_dpm_sclk", c->pathname_clk_gpu, &c->ff_clk_gpu); + + if(c->pathname_clk_gpu){ + c->st_clk_gpu = rrdset_create_localhost( + AMDGPU_CHART_TYPE + , set_id("gpu_clk_frequency", c->id.marketing_name, de->d_name) + , NULL + , "frequency" + , 
AMDGPU_CHART_TYPE ".gpu_clk_frequency" + , "GPU clock frequency" + , "MHz" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DRM_NAME + , chart_prio++ + , update_every + , RRDSET_TYPE_LINE + ); + + rrdlabels_add(c->st_clk_gpu->rrdlabels, "product_name", c->id.marketing_name, RRDLABEL_SRC_AUTO); + + c->rd_clk_gpu = rrddim_add(c->st_clk_gpu, "frequency", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + add_do_rrd_x(c, do_rrd_clk_gpu); + + } + + set_prop_pathname("device/pp_dpm_mclk", c->pathname_clk_mem, &c->ff_clk_mem); + + if(c->pathname_clk_mem){ + c->st_clk_mem = rrdset_create_localhost( + AMDGPU_CHART_TYPE + , set_id("gpu_mem_clk_frequency", c->id.marketing_name, de->d_name) + , NULL + , "frequency" + , AMDGPU_CHART_TYPE ".gpu_mem_clk_frequency" + , "GPU memory clock frequency" + , "MHz" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DRM_NAME + , chart_prio++ + , update_every + , RRDSET_TYPE_LINE + ); + + rrdlabels_add(c->st_clk_mem->rrdlabels, "product_name", c->id.marketing_name, RRDLABEL_SRC_AUTO); + + c->rd_clk_mem = rrddim_add(c->st_clk_mem, "frequency", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + add_do_rrd_x(c, do_rrd_clk_mem); + } + + + /* Initialize GPU memory usage metrics */ + + set_prop_pathname("device/mem_info_vram_used", c->pathname_mem_used_vram, NULL); + set_prop_pathname("device/mem_info_vram_total", c->pathname_mem_total_vram, NULL); + if(c->pathname_mem_total_vram) c->total_vram = tmp_val; + + if(c->pathname_mem_used_vram && c->pathname_mem_total_vram){ + c->st_mem_usage_perc_vram = rrdset_create_localhost( + AMDGPU_CHART_TYPE + , set_id("gpu_mem_vram_usage_perc", c->id.marketing_name, de->d_name) + , NULL + , "memory_usage" + , AMDGPU_CHART_TYPE ".gpu_mem_vram_usage_perc" + , "VRAM memory usage percentage" + , "percentage" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DRM_NAME + , chart_prio++ + , update_every + , RRDSET_TYPE_LINE + ); + + rrdlabels_add(c->st_mem_usage_perc_vram->rrdlabels, "product_name", c->id.marketing_name, RRDLABEL_SRC_AUTO); + + 
c->rd_mem_used_perc_vram = rrddim_add(c->st_mem_usage_perc_vram, "usage", NULL, 1, 100, RRD_ALGORITHM_ABSOLUTE); + + + c->st_mem_usage_vram = rrdset_create_localhost( + AMDGPU_CHART_TYPE + , set_id("gpu_mem_vram_usage", c->id.marketing_name, de->d_name) + , NULL + , "memory_usage" + , AMDGPU_CHART_TYPE ".gpu_mem_vram_usage" + , "VRAM memory usage" + , "bytes" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DRM_NAME + , chart_prio++ + , update_every + , RRDSET_TYPE_STACKED + ); + + rrdlabels_add(c->st_mem_usage_vram->rrdlabels, "product_name", c->id.marketing_name, RRDLABEL_SRC_AUTO); + + c->rd_mem_free_vram = rrddim_add(c->st_mem_usage_vram, "free", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + c->rd_mem_used_vram = rrddim_add(c->st_mem_usage_vram, "used", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + + add_do_rrd_x(c, do_rrd_vram); + } + + set_prop_pathname("device/mem_info_vis_vram_used", c->pathname_mem_used_vis_vram, NULL); + set_prop_pathname("device/mem_info_vis_vram_total", c->pathname_mem_total_vis_vram, NULL); + if(c->pathname_mem_total_vis_vram) c->total_vis_vram = tmp_val; + + if(c->pathname_mem_used_vis_vram && c->pathname_mem_total_vis_vram){ + c->st_mem_usage_perc_vis_vram = rrdset_create_localhost( + AMDGPU_CHART_TYPE + , set_id("gpu_mem_vis_vram_usage_perc", c->id.marketing_name, de->d_name) + , NULL + , "memory_usage" + , AMDGPU_CHART_TYPE ".gpu_mem_vis_vram_usage_perc" + , "visible VRAM memory usage percentage" + , "percentage" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DRM_NAME + , chart_prio++ + , update_every + , RRDSET_TYPE_LINE + ); + + rrdlabels_add(c->st_mem_usage_perc_vis_vram->rrdlabels, "product_name", c->id.marketing_name, RRDLABEL_SRC_AUTO); + + c->rd_mem_used_perc_vis_vram = rrddim_add(c->st_mem_usage_perc_vis_vram, "usage", NULL, 1, 100, RRD_ALGORITHM_ABSOLUTE); + + + c->st_mem_usage_vis_vram = rrdset_create_localhost( + AMDGPU_CHART_TYPE + , set_id("gpu_mem_vis_vram_usage", c->id.marketing_name, de->d_name) + , NULL + , "memory_usage" + , 
AMDGPU_CHART_TYPE ".gpu_mem_vis_vram_usage" + , "visible VRAM memory usage" + , "bytes" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DRM_NAME + , chart_prio++ + , update_every + , RRDSET_TYPE_STACKED + ); + + rrdlabels_add(c->st_mem_usage_vis_vram->rrdlabels, "product_name", c->id.marketing_name, RRDLABEL_SRC_AUTO); + + c->rd_mem_free_vis_vram = rrddim_add(c->st_mem_usage_vis_vram, "free", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + c->rd_mem_used_vis_vram = rrddim_add(c->st_mem_usage_vis_vram, "used", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + + add_do_rrd_x(c, do_rrd_vis_vram); + } + + set_prop_pathname("device/mem_info_gtt_used", c->pathname_mem_used_gtt, NULL); + set_prop_pathname("device/mem_info_gtt_total", c->pathname_mem_total_gtt, NULL); + if(c->pathname_mem_total_gtt) c->total_gtt = tmp_val; + + if(c->pathname_mem_used_gtt && c->pathname_mem_total_gtt){ + c->st_mem_usage_perc_gtt = rrdset_create_localhost( + AMDGPU_CHART_TYPE + , set_id("gpu_mem_gtt_usage_perc", c->id.marketing_name, de->d_name) + , NULL + , "memory_usage" + , AMDGPU_CHART_TYPE ".gpu_mem_gtt_usage_perc" + , "GTT memory usage percentage" + , "percentage" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DRM_NAME + , chart_prio++ + , update_every + , RRDSET_TYPE_LINE + ); + + rrdlabels_add(c->st_mem_usage_perc_gtt->rrdlabels, "product_name", c->id.marketing_name, RRDLABEL_SRC_AUTO); + + c->rd_mem_used_perc_gtt = rrddim_add(c->st_mem_usage_perc_gtt, "usage", NULL, 1, 100, RRD_ALGORITHM_ABSOLUTE); + + c->st_mem_usage_gtt = rrdset_create_localhost( + AMDGPU_CHART_TYPE + , set_id("gpu_mem_gtt_usage", c->id.marketing_name, de->d_name) + , NULL + , "memory_usage" + , AMDGPU_CHART_TYPE ".gpu_mem_gtt_usage" + , "GTT memory usage" + , "bytes" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_DRM_NAME + , chart_prio++ + , update_every + , RRDSET_TYPE_STACKED + ); + + rrdlabels_add(c->st_mem_usage_gtt->rrdlabels, "product_name", c->id.marketing_name, RRDLABEL_SRC_AUTO); + + c->rd_mem_free_gtt = 
rrddim_add(c->st_mem_usage_gtt, "free", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + c->rd_mem_used_gtt = rrddim_add(c->st_mem_usage_gtt, "used", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + + add_do_rrd_x(c, do_rrd_gtt); + } + + c->next = card_root; + card_root = c; + } + } + } + + + struct card *card_cur = card_root, + *card_next; + while(card_cur){ + + struct do_rrd_x *do_rrd_x_cur = card_cur->do_rrd_x_root, + *do_rrd_x_next; + while(do_rrd_x_cur){ + if(unlikely(do_rrd_x_cur->func(card_cur))) { + do_rrd_x_next = do_rrd_x_cur->next; + rm_do_rrd_x(card_cur, do_rrd_x_cur); + do_rrd_x_cur = do_rrd_x_next; + } + else do_rrd_x_cur = do_rrd_x_cur->next; + } + + if(unlikely(!card_cur->do_rrd_x_root)){ + card_next = card_cur->next; + card_free(card_cur); + card_cur = card_next; + } + else card_cur = card_cur->next; + } + + return card_root ? 0 : 1; +} diff --git a/src/collectors/proc.plugin/sys_class_infiniband.c b/src/collectors/proc.plugin/sys_class_infiniband.c new file mode 100644 index 000000000..ff1652ddf --- /dev/null +++ b/src/collectors/proc.plugin/sys_class_infiniband.c @@ -0,0 +1,705 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +// Heavily inspired by proc_net_dev.c +#include "plugin_proc.h" + +#define PLUGIN_PROC_MODULE_INFINIBAND_NAME "/sys/class/infiniband" +#define CONFIG_SECTION_PLUGIN_SYS_CLASS_INFINIBAND \ + "plugin:" PLUGIN_PROC_CONFIG_NAME ":" PLUGIN_PROC_MODULE_INFINIBAND_NAME + +// ib_device::name[IB_DEVICE_NAME_MAX(64)] + "-" + ib_device::phys_port_cnt[u8 = 3 chars] +#define IBNAME_MAX 68 + +// ---------------------------------------------------------------------------- +// infiniband & omnipath standard counters + +// We use macros because there is no single file acting as a summary: each counter lives in its own +// sysfs file, so helpers like procfile() cannot be used. Also, omnipath exposes additional counters +// that infiniband does not provide. +#define FOREACH_COUNTER(GEN, ...)
\ + FOREACH_COUNTER_BYTES(GEN, __VA_ARGS__) \ + FOREACH_COUNTER_PACKETS(GEN, __VA_ARGS__) \ + FOREACH_COUNTER_ERRORS(GEN, __VA_ARGS__) + +#define FOREACH_COUNTER_BYTES(GEN, ...) \ + GEN(port_rcv_data, bytes, "Received", 1, __VA_ARGS__) \ + GEN(port_xmit_data, bytes, "Sent", -1, __VA_ARGS__) + +#define FOREACH_COUNTER_PACKETS(GEN, ...) \ + GEN(port_rcv_packets, packets, "Received", 1, __VA_ARGS__) \ + GEN(port_xmit_packets, packets, "Sent", -1, __VA_ARGS__) \ + GEN(multicast_rcv_packets, packets, "Mcast rcvd", 1, __VA_ARGS__) \ + GEN(multicast_xmit_packets, packets, "Mcast sent", -1, __VA_ARGS__) \ + GEN(unicast_rcv_packets, packets, "Ucast rcvd", 1, __VA_ARGS__) \ + GEN(unicast_xmit_packets, packets, "Ucast sent", -1, __VA_ARGS__) + +#define FOREACH_COUNTER_ERRORS(GEN, ...) \ + GEN(port_rcv_errors, errors, "Pkts malformated", 1, __VA_ARGS__) \ + GEN(port_rcv_constraint_errors, errors, "Pkts rcvd discarded ", 1, __VA_ARGS__) \ + GEN(port_xmit_discards, errors, "Pkts sent discarded", 1, __VA_ARGS__) \ + GEN(port_xmit_wait, errors, "Tick Wait to send", 1, __VA_ARGS__) \ + GEN(VL15_dropped, errors, "Pkts missed resource", 1, __VA_ARGS__) \ + GEN(excessive_buffer_overrun_errors, errors, "Buffer overrun", 1, __VA_ARGS__) \ + GEN(link_downed, errors, "Link Downed", 1, __VA_ARGS__) \ + GEN(link_error_recovery, errors, "Link recovered", 1, __VA_ARGS__) \ + GEN(local_link_integrity_errors, errors, "Link integrity err", 1, __VA_ARGS__) \ + GEN(symbol_error, errors, "Link minor errors", 1, __VA_ARGS__) \ + GEN(port_rcv_remote_physical_errors, errors, "Pkts rcvd with EBP", 1, __VA_ARGS__) \ + GEN(port_rcv_switch_relay_errors, errors, "Pkts rcvd discarded by switch", 1, __VA_ARGS__) \ + GEN(port_xmit_constraint_errors, errors, "Pkts sent discarded by switch", 1, __VA_ARGS__) + +// +// Hardware Counters +// + +// IMPORTANT: These are vendor-specific fields. 
+// If you want to add a new vendor, search this file for the 'VENDORS:' keyword and +// add your definition as 'VENDOR-<key>:' where <key> is the string part that +// is shown in /sys/class/infiniband/<key>X_Y +// EG: for Mellanox, shown as mlx0_1, it's 'mlx' +// for Intel, shown as hfi1_1, it's 'hfi' + +// VENDORS: List of implemented hardware vendors +#define FOREACH_HWCOUNTER_NAME(GEN, ...) GEN(mlx, __VA_ARGS__) + +// VENDOR-MLX: HW Counters for Mellanox ConnectX Devices +#define FOREACH_HWCOUNTER_MLX(GEN, ...) \ + FOREACH_HWCOUNTER_MLX_PACKETS(GEN, __VA_ARGS__) \ + FOREACH_HWCOUNTER_MLX_ERRORS(GEN, __VA_ARGS__) + +#define FOREACH_HWCOUNTER_MLX_PACKETS(GEN, ...) \ + GEN(np_cnp_sent, hwpackets, "RoCEv2 Congestion sent", 1, __VA_ARGS__) \ + GEN(np_ecn_marked_roce_packets, hwpackets, "RoCEv2 Congestion rcvd", -1, __VA_ARGS__) \ + GEN(rp_cnp_handled, hwpackets, "IB Congestion handled", 1, __VA_ARGS__) \ + GEN(rx_atomic_requests, hwpackets, "ATOMIC req. rcvd", 1, __VA_ARGS__) \ + GEN(rx_dct_connect, hwpackets, "Connection req. rcvd", 1, __VA_ARGS__) \ + GEN(rx_read_requests, hwpackets, "Read req. rcvd", 1, __VA_ARGS__) \ + GEN(rx_write_requests, hwpackets, "Write req. rcvd", 1, __VA_ARGS__) \ + GEN(roce_adp_retrans, hwpackets, "RoCE retrans adaptive", 1, __VA_ARGS__) \ + GEN(roce_adp_retrans_to, hwpackets, "RoCE retrans timeout", 1, __VA_ARGS__) \ + GEN(roce_slow_restart, hwpackets, "RoCE slow restart", 1, __VA_ARGS__) \ + GEN(roce_slow_restart_cnps, hwpackets, "RoCE slow restart congestion", 1, __VA_ARGS__) \ + GEN(roce_slow_restart_trans, hwpackets, "RoCE slow restart count", 1, __VA_ARGS__) + +#define FOREACH_HWCOUNTER_MLX_ERRORS(GEN, ...)
\ + GEN(duplicate_request, hwerrors, "Duplicated packets", -1, __VA_ARGS__) \ + GEN(implied_nak_seq_err, hwerrors, "Pkt Seq Num gap", 1, __VA_ARGS__) \ + GEN(local_ack_timeout_err, hwerrors, "Ack timer expired", 1, __VA_ARGS__) \ + GEN(out_of_buffer, hwerrors, "Drop missing buffer", 1, __VA_ARGS__) \ + GEN(out_of_sequence, hwerrors, "Drop out of sequence", 1, __VA_ARGS__) \ + GEN(packet_seq_err, hwerrors, "NAK sequence rcvd", 1, __VA_ARGS__) \ + GEN(req_cqe_error, hwerrors, "CQE err Req", 1, __VA_ARGS__) \ + GEN(resp_cqe_error, hwerrors, "CQE err Resp", 1, __VA_ARGS__) \ + GEN(req_cqe_flush_error, hwerrors, "CQE Flushed err Req", 1, __VA_ARGS__) \ + GEN(resp_cqe_flush_error, hwerrors, "CQE Flushed err Resp", 1, __VA_ARGS__) \ + GEN(req_remote_access_errors, hwerrors, "Remote access err Req", 1, __VA_ARGS__) \ + GEN(resp_remote_access_errors, hwerrors, "Remote access err Resp", 1, __VA_ARGS__) \ + GEN(req_remote_invalid_request, hwerrors, "Remote invalid req", 1, __VA_ARGS__) \ + GEN(resp_local_length_error, hwerrors, "Local length err Resp", 1, __VA_ARGS__) \ + GEN(rnr_nak_retry_err, hwerrors, "RNR NAK Packets", 1, __VA_ARGS__) \ + GEN(rp_cnp_ignored, hwerrors, "CNP Pkts ignored", 1, __VA_ARGS__) \ + GEN(rx_icrc_encapsulated, hwerrors, "RoCE ICRC Errors", 1, __VA_ARGS__) + +// Common definitions used more than once +#define GEN_RRD_DIM_ADD(NAME, GRP, DESC, DIR, PORT) \ + GEN_RRD_DIM_ADD_CUSTOM(NAME, GRP, DESC, DIR, PORT, 1, 1, RRD_ALGORITHM_INCREMENTAL) + +#define GEN_RRD_DIM_ADD_CUSTOM(NAME, GRP, DESC, DIR, PORT, MULT, DIV, ALGO) \ + PORT->rd_##NAME = rrddim_add(PORT->st_##GRP, DESC, NULL, DIR * MULT, DIV, ALGO); + +#define GEN_RRD_DIM_ADD_HW(NAME, GRP, DESC, DIR, PORT, HW) \ + HW->rd_##NAME = rrddim_add(PORT->st_##GRP, DESC, NULL, DIR, 1, RRD_ALGORITHM_INCREMENTAL); + +#define GEN_RRD_DIM_SETP(NAME, GRP, DESC, DIR, PORT) \ + rrddim_set_by_pointer(PORT->st_##GRP, PORT->rd_##NAME, (collected_number)PORT->NAME); + +#define GEN_RRD_DIM_SETP_HW(NAME, GRP, DESC, DIR, 
PORT, HW) \ + rrddim_set_by_pointer(PORT->st_##GRP, HW->rd_##NAME, (collected_number)HW->NAME); + +// https://community.mellanox.com/s/article/understanding-mlx5-linux-counters-and-status-parameters +// https://community.mellanox.com/s/article/infiniband-port-counters +static struct ibport { + char *name; + char *counters_path; + char *hwcounters_path; + int len; + + // flags + int configured; + int enabled; + int discovered; + + int do_bytes; + int do_packets; + int do_errors; + int do_hwpackets; + int do_hwerrors; + + const char *chart_type_bytes; + const char *chart_type_packets; + const char *chart_type_errors; + const char *chart_type_hwpackets; + const char *chart_type_hwerrors; + + const char *chart_id_bytes; + const char *chart_id_packets; + const char *chart_id_errors; + const char *chart_id_hwpackets; + const char *chart_id_hwerrors; + + const char *chart_family; + + unsigned long priority; + + // Port details using drivers/infiniband/core/sysfs.c :: rate_show() + RRDDIM *rd_speed; + uint64_t speed; + uint64_t width; + +// Stats from /$device/ports/$portid/counters +// as drivers/infiniband/hw/qib/qib_verbs.h +// All uint64 except vl15_dropped, local_link_integrity_errors, excessive_buffer_overrun_errors uint32 +// Will generate 2 elements for each counter: +// - uint64_t to store the value +// - char* to store the filename path +// - RRDDIM* to store the RRD Dimension +#define GEN_DEF_COUNTER(NAME, ...) \ + uint64_t NAME; \ + char *file_##NAME; \ + RRDDIM *rd_##NAME; + FOREACH_COUNTER(GEN_DEF_COUNTER) + +// Vendor specific hwcounters from /$device/ports/$portid/hw_counters +// We will generate one struct pointer per vendor to avoid future casting +#define GEN_DEF_HWCOUNTER_PTR(VENDOR, ...) 
struct ibporthw_##VENDOR *hwcounters_##VENDOR; + FOREACH_HWCOUNTER_NAME(GEN_DEF_HWCOUNTER_PTR) + + // Function pointer to the "infiniband_hwcounters_parse_<vendor>" function + void (*hwcounters_parse)(struct ibport *); + void (*hwcounters_dorrd)(struct ibport *); + + // charts and dims + RRDSET *st_bytes; + RRDSET *st_packets; + RRDSET *st_errors; + RRDSET *st_hwpackets; + RRDSET *st_hwerrors; + + const RRDVAR_ACQUIRED *stv_speed; + + usec_t speed_last_collected_usec; + + struct ibport *next; +} *ibport_root = NULL, *ibport_last_used = NULL; + +// VENDORS: reading / calculation functions +#define GEN_DEF_HWCOUNTER(NAME, ...) \ + uint64_t NAME; \ + char *file_##NAME; \ + RRDDIM *rd_##NAME; + +#define GEN_DO_HWCOUNTER_READ(NAME, GRP, DESC, DIR, PORT, HW, ...) \ + if (HW->file_##NAME) { \ + if (read_single_number_file(HW->file_##NAME, (unsigned long long *)&HW->NAME)) { \ + collector_error("cannot read iface '%s' hwcounter '" #NAME "'", PORT->name); \ + HW->file_##NAME = NULL; \ + } \ + } + +// VENDOR-MLX: Mellanox +struct ibporthw_mlx { + FOREACH_HWCOUNTER_MLX(GEN_DEF_HWCOUNTER) +}; +void infiniband_hwcounters_parse_mlx(struct ibport *port) +{ + if (port->do_hwerrors != CONFIG_BOOLEAN_NO) + FOREACH_HWCOUNTER_MLX_ERRORS(GEN_DO_HWCOUNTER_READ, port, port->hwcounters_mlx) + if (port->do_hwpackets != CONFIG_BOOLEAN_NO) + FOREACH_HWCOUNTER_MLX_PACKETS(GEN_DO_HWCOUNTER_READ, port, port->hwcounters_mlx) +} +void infiniband_hwcounters_dorrd_mlx(struct ibport *port) +{ + if (port->do_hwerrors != CONFIG_BOOLEAN_NO) { + FOREACH_HWCOUNTER_MLX_ERRORS(GEN_RRD_DIM_SETP_HW, port, port->hwcounters_mlx) + rrdset_done(port->st_hwerrors); + } + if (port->do_hwpackets != CONFIG_BOOLEAN_NO) { + FOREACH_HWCOUNTER_MLX_PACKETS(GEN_RRD_DIM_SETP_HW, port, port->hwcounters_mlx) + rrdset_done(port->st_hwpackets); + } +} + +// ---------------------------------------------------------------------------- + +static struct ibport *get_ibport(const char *dev, const char *port) +{ + struct ibport *p; +
char name[IBNAME_MAX + 1]; + snprintfz(name, IBNAME_MAX, "%s-%s", dev, port); + + // search it, resuming from the last position in sequence + for (p = ibport_last_used; p; p = p->next) { + if (unlikely(!strcmp(name, p->name))) { + ibport_last_used = p->next; + return p; + } + } + + // new round, from the beginning to the last position used this time + for (p = ibport_root; p != ibport_last_used; p = p->next) { + if (unlikely(!strcmp(name, p->name))) { + ibport_last_used = p->next; + return p; + } + } + + // create a new one + p = callocz(1, sizeof(struct ibport)); + p->name = strdupz(name); + p->len = strlen(p->name); + + p->chart_type_bytes = strdupz("infiniband_cnt_bytes"); + p->chart_type_packets = strdupz("infiniband_cnt_packets"); + p->chart_type_errors = strdupz("infiniband_cnt_errors"); + p->chart_type_hwpackets = strdupz("infiniband_hwc_packets"); + p->chart_type_hwerrors = strdupz("infiniband_hwc_errors"); + + char buffer[RRD_ID_LENGTH_MAX + 1]; + snprintfz(buffer, RRD_ID_LENGTH_MAX, "ib_cntbytes_%s", p->name); + p->chart_id_bytes = strdupz(buffer); + + snprintfz(buffer, RRD_ID_LENGTH_MAX, "ib_cntpackets_%s", p->name); + p->chart_id_packets = strdupz(buffer); + + snprintfz(buffer, RRD_ID_LENGTH_MAX, "ib_cnterrors_%s", p->name); + p->chart_id_errors = strdupz(buffer); + + snprintfz(buffer, RRD_ID_LENGTH_MAX, "ib_hwcntpackets_%s", p->name); + p->chart_id_hwpackets = strdupz(buffer); + + snprintfz(buffer, RRD_ID_LENGTH_MAX, "ib_hwcnterrors_%s", p->name); + p->chart_id_hwerrors = strdupz(buffer); + + p->chart_family = strdupz(p->name); + p->priority = NETDATA_CHART_PRIO_INFINIBAND; + + // Link current ibport to last one in the list + if (ibport_root) { + struct ibport *t; + for (t = ibport_root; t->next; t = t->next) + ; + t->next = p; + } else + ibport_root = p; + + return p; +} + +int do_sys_class_infiniband(int update_every, usec_t dt) +{ + (void)dt; + static SIMPLE_PATTERN *disabled_list = NULL; + static int initialized = 0; + static int enable_new_ports = 
-1, enable_only_active = CONFIG_BOOLEAN_YES; + static int do_bytes = -1, do_packets = -1, do_errors = -1, do_hwpackets = -1, do_hwerrors = -1; + static char *sys_class_infiniband_dirname = NULL; + + static long long int dt_to_refresh_ports = 0, last_refresh_ports_usec = 0; + + if (unlikely(enable_new_ports == -1)) { + char dirname[FILENAME_MAX + 1]; + + snprintfz(dirname, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/class/infiniband"); + sys_class_infiniband_dirname = + config_get(CONFIG_SECTION_PLUGIN_SYS_CLASS_INFINIBAND, "dirname to monitor", dirname); + + do_bytes = config_get_boolean_ondemand( + CONFIG_SECTION_PLUGIN_SYS_CLASS_INFINIBAND, "bandwidth counters", CONFIG_BOOLEAN_YES); + do_packets = config_get_boolean_ondemand( + CONFIG_SECTION_PLUGIN_SYS_CLASS_INFINIBAND, "packets counters", CONFIG_BOOLEAN_YES); + do_errors = config_get_boolean_ondemand( + CONFIG_SECTION_PLUGIN_SYS_CLASS_INFINIBAND, "errors counters", CONFIG_BOOLEAN_YES); + do_hwpackets = config_get_boolean_ondemand( + CONFIG_SECTION_PLUGIN_SYS_CLASS_INFINIBAND, "hardware packets counters", CONFIG_BOOLEAN_AUTO); + do_hwerrors = config_get_boolean_ondemand( + CONFIG_SECTION_PLUGIN_SYS_CLASS_INFINIBAND, "hardware errors counters", CONFIG_BOOLEAN_AUTO); + + enable_only_active = config_get_boolean_ondemand( + CONFIG_SECTION_PLUGIN_SYS_CLASS_INFINIBAND, "monitor only active ports", CONFIG_BOOLEAN_AUTO); + disabled_list = simple_pattern_create( + config_get(CONFIG_SECTION_PLUGIN_SYS_CLASS_INFINIBAND, "disable by default interfaces matching", ""), + NULL, + SIMPLE_PATTERN_EXACT, true); + + dt_to_refresh_ports = + config_get_number(CONFIG_SECTION_PLUGIN_SYS_CLASS_INFINIBAND, "refresh ports state every seconds", 30) * + USEC_PER_SEC; + if (dt_to_refresh_ports < 0) + dt_to_refresh_ports = 0; + } + + // init listing of /sys/class/infiniband/ (or rediscovery) + if (unlikely(!initialized) || unlikely(last_refresh_ports_usec >= dt_to_refresh_ports)) { + // If the folder does not exist, return 1 to 
disable + DIR *devices_dir = opendir(sys_class_infiniband_dirname); + if (unlikely(!devices_dir)) + return 1; + + // Work on all available devices + struct dirent *dev_dent; + while ((dev_dent = readdir(devices_dir))) { + // Skip special folders + if (!strcmp(dev_dent->d_name, "..") || !strcmp(dev_dent->d_name, ".")) + continue; + + // /sys/class/infiniband/<dev>/ports + char ports_dirname[FILENAME_MAX + 1]; + snprintfz(ports_dirname, FILENAME_MAX, "%s/%s/%s", sys_class_infiniband_dirname, dev_dent->d_name, "ports"); + + DIR *ports_dir = opendir(ports_dirname); + if (unlikely(!ports_dir)) + continue; + + struct dirent *port_dent; + while ((port_dent = readdir(ports_dir))) { + // Skip special folders + if (!strcmp(port_dent->d_name, "..") || !strcmp(port_dent->d_name, ".")) + continue; + + char buffer[FILENAME_MAX + 1]; + + // Check if counters are available (mandatory) + // /sys/class/infiniband/<device>/ports/<port>/counters + char counters_dirname[FILENAME_MAX + 1]; + snprintfz(counters_dirname, FILENAME_MAX, "%s/%s/%s", ports_dirname, port_dent->d_name, "counters"); + DIR *counters_dir = opendir(counters_dirname); + // Standard counters are mandatory + if (!counters_dir) + continue; + closedir(counters_dir); + + // Hardware Counters are optional, used later + char hwcounters_dirname[FILENAME_MAX + 1]; + snprintfz( + hwcounters_dirname, FILENAME_MAX, "%s/%s/%s", ports_dirname, port_dent->d_name, "hw_counters"); + + // Get new or existing ibport + struct ibport *p = get_ibport(dev_dent->d_name, port_dent->d_name); + if (!p) + continue; + + // Prepare configuration + if (!p->configured) { + p->configured = 1; + + // Enable by default, will be filtered out later + p->enabled = 1; + + p->counters_path = strdupz(counters_dirname); + p->hwcounters_path = strdupz(hwcounters_dirname); + + snprintfz(buffer, FILENAME_MAX, "plugin:proc:/sys/class/infiniband:%s", p->name); + + // Standard counters + p->do_bytes = config_get_boolean_ondemand(buffer, "bytes", do_bytes); + 
p->do_packets = config_get_boolean_ondemand(buffer, "packets", do_packets); + p->do_errors = config_get_boolean_ondemand(buffer, "errors", do_errors); + +// Allocate and concatenate the counter filenames +#define GEN_DO_COUNTER_NAME(NAME, GRP, DESC, DIR, PORT, ...) \ + PORT->file_##NAME = callocz(1, strlen(PORT->counters_path) + sizeof(#NAME) + 3); \ + strcat(PORT->file_##NAME, PORT->counters_path); \ + strcat(PORT->file_##NAME, "/" #NAME); + FOREACH_COUNTER(GEN_DO_COUNTER_NAME, p) + + // Check HW counters (vendor dependent) + DIR *hwcounters_dir = opendir(hwcounters_dirname); + if (hwcounters_dir) { + // By default, use the standard settings + p->do_hwpackets = config_get_boolean_ondemand(buffer, "hwpackets", do_hwpackets); + p->do_hwerrors = config_get_boolean_ondemand(buffer, "hwerrors", do_hwerrors); + +// VENDORS: Set your own + +// Allocate the strings for the filenames +#define GEN_DO_HWCOUNTER_NAME(NAME, GRP, DESC, DIR, PORT, HW, ...) \ + HW->file_##NAME = callocz(1, strlen(PORT->hwcounters_path) + sizeof(#NAME) + 3); \ + strcat(HW->file_##NAME, PORT->hwcounters_path); \ + strcat(HW->file_##NAME, "/" #NAME); + + // VENDOR-MLX: Mellanox + if (strncmp(dev_dent->d_name, "mlx", 3) == 0) { + // Allocate the vendor specific struct + p->hwcounters_mlx = callocz(1, sizeof(struct ibporthw_mlx)); + + FOREACH_HWCOUNTER_MLX(GEN_DO_HWCOUNTER_NAME, p, p->hwcounters_mlx) + + // Set the function pointer for hwcounter parsing + p->hwcounters_parse = &infiniband_hwcounters_parse_mlx; + p->hwcounters_dorrd = &infiniband_hwcounters_dorrd_mlx; + } + + // VENDOR: Unknown + else { + p->do_hwpackets = CONFIG_BOOLEAN_NO; + p->do_hwerrors = CONFIG_BOOLEAN_NO; + } + closedir(hwcounters_dir); + } + } + + // Check the port state to decide whether to keep it enabled + if (enable_only_active) { + snprintfz(buffer, FILENAME_MAX, "%s/%s/%s", ports_dirname, port_dent->d_name, "state"); + unsigned long long active; + // File is "1: DOWN" or "4: ACTIVE", but str2ull will stop on first non-decimal char + read_single_number_file(buffer, 
&active); + + // Want "IB_PORT_ACTIVE" == "4", as defined by drivers/infiniband/core/sysfs.c::state_show() + if (active == 4) + p->enabled = 1; + else + p->enabled = 0; + } + + if (p->enabled) + p->enabled = !simple_pattern_matches(disabled_list, p->name); + + // Check / update the link speed & width from the "rate" file + // Sample output: "100 Gb/sec (4X EDR)" + snprintfz(buffer, FILENAME_MAX, "%s/%s/%s", ports_dirname, port_dent->d_name, "rate"); + char buffer_rate[65]; + p->width = 4; + if (read_txt_file(buffer, buffer_rate, sizeof(buffer_rate))) { + collector_error("Unable to read '%s'", buffer); + } else { + char *buffer_width = strstr(buffer_rate, "("); + if (buffer_width) { + buffer_width++; + // str2ull will stop on first non-decimal value + p->speed = str2ull(buffer_rate, NULL); + p->width = str2ull(buffer_width, NULL); + } + } + + if (!p->discovered) + collector_info("Infiniband card %s port %s at speed %" PRIu64 " width %" PRIu64 "", + dev_dent->d_name, + port_dent->d_name, + p->speed, + p->width); + + p->discovered = 1; + } + closedir(ports_dir); + } + closedir(devices_dir); + + initialized = 1; + last_refresh_ports_usec = 0; + } + last_refresh_ports_usec += dt; + + // Update all port values + struct ibport *port; + for (port = ibport_root; port; port = port->next) { + if (!port->enabled) + continue; + // + // Read values from system to struct + // + +// Read a counter from its file and place it in the ibport struct +#define GEN_DO_COUNTER_READ(NAME, GRP, DESC, DIR, PORT, ...) 
\ + if (PORT->file_##NAME) { \ + if (read_single_number_file(PORT->file_##NAME, (unsigned long long *)&PORT->NAME)) { \ + collector_error("cannot read iface '%s' counter '" #NAME "'", PORT->name); \ + PORT->file_##NAME = NULL; \ + } \ + } + + // Update charts + if (port->do_bytes != CONFIG_BOOLEAN_NO) { + // Read values from sysfs + FOREACH_COUNTER_BYTES(GEN_DO_COUNTER_READ, port) + + // First creation of RRD Set (charts) + if (unlikely(!port->st_bytes)) { + port->st_bytes = rrdset_create_localhost( + "Infiniband", + port->chart_id_bytes, + NULL, + port->chart_family, + "ib.bytes", + "Bandwidth usage", + "kilobits/s", + PLUGIN_PROC_NAME, + PLUGIN_PROC_MODULE_INFINIBAND_NAME, + port->priority + 1, + update_every, + RRDSET_TYPE_AREA); + // Create Dimensions + rrdset_flag_set(port->st_bytes, RRDSET_FLAG_DETAIL); + // On this chart, we want to have a KB/s so the dashboard will autoscale it + // The reported values are also per-lane, so we must multiply it by the width + // x4 lanes multiplier as per Documentation/ABI/stable/sysfs-class-infiniband + FOREACH_COUNTER_BYTES(GEN_RRD_DIM_ADD_CUSTOM, port, port->width * 8, 1000, RRD_ALGORITHM_INCREMENTAL) + + port->stv_speed = rrdvar_chart_variable_add_and_acquire(port->st_bytes, "link_speed"); + } + + // Link read values to dimensions + FOREACH_COUNTER_BYTES(GEN_RRD_DIM_SETP, port) + + // For link speed set only variable + rrdvar_chart_variable_set(port->st_bytes, port->stv_speed, port->speed); + + rrdset_done(port->st_bytes); + } + + if (port->do_packets != CONFIG_BOOLEAN_NO) { + // Read values from sysfs + FOREACH_COUNTER_PACKETS(GEN_DO_COUNTER_READ, port) + + // First creation of RRD Set (charts) + if (unlikely(!port->st_packets)) { + port->st_packets = rrdset_create_localhost( + "Infiniband", + port->chart_id_packets, + NULL, + port->chart_family, + "ib.packets", + "Packets Statistics", + "packets/s", + PLUGIN_PROC_NAME, + PLUGIN_PROC_MODULE_INFINIBAND_NAME, + port->priority + 2, + update_every, + RRDSET_TYPE_AREA); + // 
Create Dimensions + rrdset_flag_set(port->st_packets, RRDSET_FLAG_DETAIL); + FOREACH_COUNTER_PACKETS(GEN_RRD_DIM_ADD, port) + } + + // Link read values to dimensions + FOREACH_COUNTER_PACKETS(GEN_RRD_DIM_SETP, port) + rrdset_done(port->st_packets); + } + + if (port->do_errors != CONFIG_BOOLEAN_NO) { + // Read values from sysfs + FOREACH_COUNTER_ERRORS(GEN_DO_COUNTER_READ, port) + + // First creation of RRD Set (charts) + if (unlikely(!port->st_errors)) { + port->st_errors = rrdset_create_localhost( + "Infiniband", + port->chart_id_errors, + NULL, + port->chart_family, + "ib.errors", + "Error Counters", + "errors/s", + PLUGIN_PROC_NAME, + PLUGIN_PROC_MODULE_INFINIBAND_NAME, + port->priority + 3, + update_every, + RRDSET_TYPE_LINE); + // Create Dimensions + rrdset_flag_set(port->st_errors, RRDSET_FLAG_DETAIL); + FOREACH_COUNTER_ERRORS(GEN_RRD_DIM_ADD, port) + } + + // Link read values to dimensions + FOREACH_COUNTER_ERRORS(GEN_RRD_DIM_SETP, port) + rrdset_done(port->st_errors); + } + + // + // HW Counters + // + + // Call the function for parsing and setting hwcounters + if (port->hwcounters_parse && port->hwcounters_dorrd) { + // Read all values (done by vendor-specific function) + (*port->hwcounters_parse)(port); + + if (port->do_hwerrors != CONFIG_BOOLEAN_NO) { + // First creation of RRD Set (charts) + if (unlikely(!port->st_hwerrors)) { + port->st_hwerrors = rrdset_create_localhost( + "Infiniband", + port->chart_id_hwerrors, + NULL, + port->chart_family, + "ib.hwerrors", + "Hardware Errors", + "errors/s", + PLUGIN_PROC_NAME, + PLUGIN_PROC_MODULE_INFINIBAND_NAME, + port->priority + 4, + update_every, + RRDSET_TYPE_LINE); + + rrdset_flag_set(port->st_hwerrors, RRDSET_FLAG_DETAIL); + + // VENDORS: Set your selection + + // VENDOR: Mellanox + if (strncmp(port->name, "mlx", 3) == 0) { + FOREACH_HWCOUNTER_MLX_ERRORS(GEN_RRD_DIM_ADD_HW, port, port->hwcounters_mlx) + } + + // Unknown vendor, should not happen + else { + collector_error( + "Unmanaged vendor for '%s', 
do_hwerrors should have been set to no. Please report this bug", + port->name); + port->do_hwerrors = CONFIG_BOOLEAN_NO; + } + } + } + + if (port->do_hwpackets != CONFIG_BOOLEAN_NO) { + // First creation of RRD Set (charts) + if (unlikely(!port->st_hwpackets)) { + port->st_hwpackets = rrdset_create_localhost( + "Infiniband", + port->chart_id_hwpackets, + NULL, + port->chart_family, + "ib.hwpackets", + "Hardware Packets Statistics", + "packets/s", + PLUGIN_PROC_NAME, + PLUGIN_PROC_MODULE_INFINIBAND_NAME, + port->priority + 5, + update_every, + RRDSET_TYPE_LINE); + + rrdset_flag_set(port->st_hwpackets, RRDSET_FLAG_DETAIL); + + // VENDORS: Set your selection + + // VENDOR: Mellanox + if (strncmp(port->name, "mlx", 3) == 0) { + FOREACH_HWCOUNTER_MLX_PACKETS(GEN_RRD_DIM_ADD_HW, port, port->hwcounters_mlx) + } + + // Unknown vendor, should not happen + else { + collector_error( + "Unmanaged vendor for '%s', do_hwpackets should have been set to no. Please report this bug", + port->name); + port->do_hwpackets = CONFIG_BOOLEAN_NO; + } + } + } + + // Update values to rrd (done by vendor-specific function) + (*port->hwcounters_dorrd)(port); + } + } + + return 0; +} diff --git a/src/collectors/proc.plugin/sys_class_power_supply.c b/src/collectors/proc.plugin/sys_class_power_supply.c new file mode 100644 index 000000000..494a293bc --- /dev/null +++ b/src/collectors/proc.plugin/sys_class_power_supply.c @@ -0,0 +1,414 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +#define PLUGIN_PROC_MODULE_POWER_SUPPLY_NAME "/sys/class/power_supply" + +const char *ps_property_names[] = { "charge", "energy", "voltage"}; +const char *ps_property_titles[] = {"Battery charge", "Battery energy", "Power supply voltage"}; +const char *ps_property_units[] = { "Ah", "Wh", "V"}; + +const char *ps_property_dim_names[] = {"empty_design", "empty", "now", "full", "full_design", + "empty_design", "empty", "now", "full", "full_design", + "min_design", "min", "now", "max", 
"max_design"}; + +struct ps_property_dim { + char *name; + char *filename; + int fd; + + RRDDIM *rd; + unsigned long long value; + int always_zero; + + struct ps_property_dim *next; +}; + +struct ps_property { + char *name; + char *title; + char *units; + + RRDSET *st; + + struct ps_property_dim *property_dim_root; + + struct ps_property *next; +}; + +struct capacity { + char *filename; + int fd; + + RRDSET *st; + RRDDIM *rd; + unsigned long long value; +}; + +struct power_supply { + char *name; + uint32_t hash; + int found; + + struct capacity *capacity; + + struct ps_property *property_root; + + struct power_supply *next; +}; + +static struct power_supply *power_supply_root = NULL; +static int files_num = 0; + +void power_supply_free(struct power_supply *ps) { + if(likely(ps)) { + + // free capacity structure + if(likely(ps->capacity)) { + if(likely(ps->capacity->st)) rrdset_is_obsolete___safe_from_collector_thread(ps->capacity->st); + freez(ps->capacity->filename); + if(likely(ps->capacity->fd != -1)) close(ps->capacity->fd); + files_num--; + freez(ps->capacity); + } + freez(ps->name); + + struct ps_property *pr = ps->property_root; + while(likely(pr)) { + + // free dimensions + struct ps_property_dim *pd = pr->property_dim_root; + while(likely(pd)) { + freez(pd->name); + freez(pd->filename); + if(likely(pd->fd != -1)) close(pd->fd); + files_num--; + struct ps_property_dim *d = pd; + pd = pd->next; + freez(d); + } + + // free properties + if(likely(pr->st)) rrdset_is_obsolete___safe_from_collector_thread(pr->st); + freez(pr->name); + freez(pr->title); + freez(pr->units); + struct ps_property *p = pr; + pr = pr->next; + freez(p); + } + + // remove power supply from linked list + if(likely(ps == power_supply_root)) { + power_supply_root = ps->next; + } + else { + struct power_supply *last; + for(last = power_supply_root; last && last->next != ps; last = last->next); + if(likely(last)) last->next = ps->next; + } + + freez(ps); + } +} + +static void 
add_labels_to_power_supply(struct power_supply *ps, RRDSET *st) { + rrdlabels_add(st->rrdlabels, "device", ps->name, RRDLABEL_SRC_AUTO); +} + +int do_sys_class_power_supply(int update_every, usec_t dt) { + (void)dt; + static int do_capacity = -1, do_property[3] = {-1}; + static int keep_fds_open = CONFIG_BOOLEAN_NO, keep_fds_open_config = -1; + static char *dirname = NULL; + + if(unlikely(do_capacity == -1)) { + do_capacity = config_get_boolean("plugin:proc:/sys/class/power_supply", "battery capacity", CONFIG_BOOLEAN_YES); + do_property[0] = config_get_boolean("plugin:proc:/sys/class/power_supply", "battery charge", CONFIG_BOOLEAN_NO); + do_property[1] = config_get_boolean("plugin:proc:/sys/class/power_supply", "battery energy", CONFIG_BOOLEAN_NO); + do_property[2] = config_get_boolean("plugin:proc:/sys/class/power_supply", "power supply voltage", CONFIG_BOOLEAN_NO); + + keep_fds_open_config = config_get_boolean_ondemand("plugin:proc:/sys/class/power_supply", "keep files open", CONFIG_BOOLEAN_AUTO); + + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/class/power_supply"); + dirname = config_get("plugin:proc:/sys/class/power_supply", "directory to monitor", filename); + } + + DIR *dir = opendir(dirname); + if(unlikely(!dir)) { + collector_error("Cannot read directory '%s'", dirname); + return 1; + } + + struct dirent *de = NULL; + while(likely(de = readdir(dir))) { + if(likely(de->d_type == DT_DIR + && ( + (de->d_name[0] == '.' && de->d_name[1] == '\0') + || (de->d_name[0] == '.' && de->d_name[1] == '.' 
&& de->d_name[2] == '\0') + ))) + continue; + + if(likely(de->d_type == DT_LNK || de->d_type == DT_DIR)) { + uint32_t hash = simple_hash(de->d_name); + + struct power_supply *ps; + for(ps = power_supply_root; ps; ps = ps->next) { + if(unlikely(ps->hash == hash && !strcmp(ps->name, de->d_name))) { + ps->found = 1; + break; + } + } + + // allocate memory for power supply and initialize it + if(unlikely(!ps)) { + ps = callocz(sizeof(struct power_supply), 1); + ps->name = strdupz(de->d_name); + ps->hash = simple_hash(de->d_name); + ps->found = 1; + ps->next = power_supply_root; + power_supply_root = ps; + + struct stat stbuf; + if(likely(do_capacity != CONFIG_BOOLEAN_NO)) { + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s/%s/%s", dirname, de->d_name, "capacity"); + if (stat(filename, &stbuf) == 0) { + ps->capacity = callocz(sizeof(struct capacity), 1); + ps->capacity->filename = strdupz(filename); + ps->capacity->fd = -1; + files_num++; + } + } + + // allocate memory and initialize structures for every property and file found + size_t pr_idx, pd_idx; + size_t prev_idx = 3; // there is no property with this index + + for(pr_idx = 0; pr_idx < 3; pr_idx++) { + if(unlikely(do_property[pr_idx] != CONFIG_BOOLEAN_NO)) { + struct ps_property *pr = NULL; + int min_value_found = 0, max_value_found = 0; + + for(pd_idx = pr_idx * 5; pd_idx < pr_idx * 5 + 5; pd_idx++) { + + // check if file exists + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s/%s/%s_%s", dirname, de->d_name, + ps_property_names[pr_idx], ps_property_dim_names[pd_idx]); + if (stat(filename, &stbuf) == 0) { + + if(unlikely(pd_idx == pr_idx * 5 + 1)) + min_value_found = 1; + if(unlikely(pd_idx == pr_idx * 5 + 3)) + max_value_found = 1; + + // add chart + if(unlikely(prev_idx != pr_idx)) { + pr = callocz(sizeof(struct ps_property), 1); + pr->name = strdupz(ps_property_names[pr_idx]); + pr->title = strdupz(ps_property_titles[pr_idx]); + pr->units = 
strdupz(ps_property_units[pr_idx]); + prev_idx = pr_idx; + pr->next = ps->property_root; + ps->property_root = pr; + } + + // add dimension + struct ps_property_dim *pd; + pd = callocz(sizeof(struct ps_property_dim), 1); + pd->name = strdupz(ps_property_dim_names[pd_idx]); + pd->filename = strdupz(filename); + pd->fd = -1; + files_num++; + pd->next = pr->property_dim_root; + pr->property_dim_root = pd; + } + } + + // create a zero empty/min dimension + if(unlikely(max_value_found && !min_value_found)) { + struct ps_property_dim *pd; + pd = callocz(sizeof(struct ps_property_dim), 1); + pd->name = strdupz(ps_property_dim_names[pr_idx * 5 + 1]); + pd->always_zero = 1; + pd->next = pr->property_dim_root; + pr->property_dim_root = pd; + } + } + } + } + + // read capacity file + if(likely(ps->capacity)) { + char buffer[30 + 1]; + + if(unlikely(ps->capacity->fd == -1)) { + ps->capacity->fd = open(ps->capacity->filename, O_RDONLY | O_CLOEXEC, 0666); + if(unlikely(ps->capacity->fd == -1)) { + collector_error("Cannot open file '%s'", ps->capacity->filename); + power_supply_free(ps); + ps = NULL; + } + } + + if (ps) + { + ssize_t r = read(ps->capacity->fd, buffer, 30); + if(unlikely(r < 1)) { + collector_error("Cannot read file '%s'", ps->capacity->filename); + power_supply_free(ps); + ps = NULL; + } + else { + buffer[r] = '\0'; + ps->capacity->value = str2ull(buffer, NULL); + + if(unlikely(!keep_fds_open)) { + close(ps->capacity->fd); + ps->capacity->fd = -1; + } + else if(unlikely(lseek(ps->capacity->fd, 0, SEEK_SET) == -1)) { + collector_error("Cannot seek in file '%s'", ps->capacity->filename); + close(ps->capacity->fd); + ps->capacity->fd = -1; + } + } + } + } + + // read property files + int read_error = 0; + struct ps_property *pr; + if (ps) + { + for(pr = ps->property_root; pr && !read_error; pr = pr->next) { + struct ps_property_dim *pd; + for(pd = pr->property_dim_root; pd; pd = pd->next) { + if(likely(!pd->always_zero)) { + char buffer[30 + 1]; + + if(unlikely(pd->fd 
== -1)) { + pd->fd = open(pd->filename, O_RDONLY | O_CLOEXEC, 0666); + if(unlikely(pd->fd == -1)) { + collector_error("Cannot open file '%s'", pd->filename); + read_error = 1; + power_supply_free(ps); + break; + } + } + + ssize_t r = read(pd->fd, buffer, 30); + if(unlikely(r < 1)) { + collector_error("Cannot read file '%s'", pd->filename); + read_error = 1; + power_supply_free(ps); + break; + } + buffer[r] = '\0'; + pd->value = str2ull(buffer, NULL); + + if(unlikely(!keep_fds_open)) { + close(pd->fd); + pd->fd = -1; + } + else if(unlikely(lseek(pd->fd, 0, SEEK_SET) == -1)) { + collector_error("Cannot seek in file '%s'", pd->filename); + close(pd->fd); + pd->fd = -1; + } + } + } + } + } + } + } + + closedir(dir); + + keep_fds_open = keep_fds_open_config; + if(likely(keep_fds_open_config == CONFIG_BOOLEAN_AUTO)) { + if(unlikely(files_num > 32)) + keep_fds_open = CONFIG_BOOLEAN_NO; + else + keep_fds_open = CONFIG_BOOLEAN_YES; + } + + // -------------------------------------------------------------------- + + struct power_supply *ps = power_supply_root; + while(unlikely(ps)) { + if(unlikely(!ps->found)) { + struct power_supply *f = ps; + ps = ps->next; + power_supply_free(f); + continue; + } + + if(likely(ps->capacity)) { + if(unlikely(!ps->capacity->st)) { + ps->capacity->st = rrdset_create_localhost( + "powersupply_capacity" + , ps->name + , NULL + , ps->name + , "powersupply.capacity" + , "Battery capacity" + , "percentage" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_POWER_SUPPLY_NAME + , NETDATA_CHART_PRIO_POWER_SUPPLY_CAPACITY + , update_every + , RRDSET_TYPE_LINE + ); + + add_labels_to_power_supply(ps, ps->capacity->st); + } + + if(unlikely(!ps->capacity->rd)) ps->capacity->rd = rrddim_add(ps->capacity->st, "capacity", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + rrddim_set_by_pointer(ps->capacity->st, ps->capacity->rd, ps->capacity->value); + + rrdset_done(ps->capacity->st); + } + + struct ps_property *pr; + for(pr = ps->property_root; pr; pr = pr->next) { + 
if(unlikely(!pr->st)) { + char id[RRD_ID_LENGTH_MAX + 1], context[RRD_ID_LENGTH_MAX + 1]; + snprintfz(id, RRD_ID_LENGTH_MAX, "powersupply_%s", pr->name); + snprintfz(context, RRD_ID_LENGTH_MAX, "powersupply.%s", pr->name); + + pr->st = rrdset_create_localhost( + id + , ps->name + , NULL + , ps->name + , context + , pr->title + , pr->units + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_POWER_SUPPLY_NAME + , NETDATA_CHART_PRIO_POWER_SUPPLY_CAPACITY + , update_every + , RRDSET_TYPE_LINE + ); + + add_labels_to_power_supply(ps, pr->st); + } + + struct ps_property_dim *pd; + for(pd = pr->property_dim_root; pd; pd = pd->next) { + if(unlikely(!pd->rd)) pd->rd = rrddim_add(pr->st, pd->name, NULL, 1, 1000000, RRD_ALGORITHM_ABSOLUTE); + rrddim_set_by_pointer(pr->st, pd->rd, pd->value); + } + + rrdset_done(pr->st); + } + + ps->found = 0; + ps = ps->next; + } + + return 0; +} diff --git a/src/collectors/proc.plugin/sys_devices_pci_aer.c b/src/collectors/proc.plugin/sys_devices_pci_aer.c new file mode 100644 index 000000000..563ebf051 --- /dev/null +++ b/src/collectors/proc.plugin/sys_devices_pci_aer.c @@ -0,0 +1,340 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +static char *pci_aer_dirname = NULL; + +typedef enum __attribute__((packed)) { + AER_DEV_NONFATAL = (1 << 0), + AER_DEV_CORRECTABLE = (1 << 1), + AER_DEV_FATAL = (1 << 2), + AER_ROOTPORT_TOTAL_ERR_COR = (1 << 3), + AER_ROOTPORT_TOTAL_ERR_FATAL = (1 << 4), +} AER_TYPE; + +struct aer_value { + kernel_uint_t count; + RRDDIM *rd; +}; + +struct aer_entry { + bool updated; + + STRING *name; + AER_TYPE type; + + procfile *ff; + DICTIONARY *values; + + RRDSET *st; +}; + +DICTIONARY *aer_root = NULL; + +static bool aer_value_conflict_callback(const DICTIONARY_ITEM *item __maybe_unused, void *old_value, void *new_value, void *data __maybe_unused) { + struct aer_value *v = old_value; + struct aer_value *nv = new_value; + + v->count = nv->count; + + return false; +} + +static void 
aer_insert_callback(const DICTIONARY_ITEM *item __maybe_unused, void *value, void *data __maybe_unused) { + struct aer_entry *a = value; + a->values = dictionary_create(DICT_OPTION_SINGLE_THREADED|DICT_OPTION_DONT_OVERWRITE_VALUE); + dictionary_register_conflict_callback(a->values, aer_value_conflict_callback, NULL); +} + +static void add_pci_aer(const char *base_dir, const char *d_name, AER_TYPE type) { + char buffer[FILENAME_MAX + 1]; + snprintfz(buffer, FILENAME_MAX, "%s/%s", base_dir, d_name); + struct aer_entry *a = dictionary_set(aer_root, buffer, NULL, sizeof(struct aer_entry)); + + if(!a->name) + a->name = string_strdupz(d_name); + + a->type = type; +} + +static bool recursively_find_pci_aer(AER_TYPE types, const char *base_dir, const char *d_name, int depth) { + if(depth > 100) + return false; + + char buffer[FILENAME_MAX + 1]; + snprintfz(buffer, FILENAME_MAX, "%s/%s", base_dir, d_name); + DIR *dir = opendir(buffer); + if(unlikely(!dir)) { + collector_error("Cannot read PCI_AER directory '%s'", buffer); + return true; + } + + struct dirent *de = NULL; + while((de = readdir(dir))) { + if(de->d_type == DT_DIR) { + if(de->d_name[0] == '.') + continue; + + recursively_find_pci_aer(types, buffer, de->d_name, depth + 1); + } + else if(de->d_type == DT_REG) { + if((types & AER_DEV_NONFATAL) && strcmp(de->d_name, "aer_dev_nonfatal") == 0) { + add_pci_aer(buffer, de->d_name, AER_DEV_NONFATAL); + } + else if((types & AER_DEV_CORRECTABLE) && strcmp(de->d_name, "aer_dev_correctable") == 0) { + add_pci_aer(buffer, de->d_name, AER_DEV_CORRECTABLE); + } + else if((types & AER_DEV_FATAL) && strcmp(de->d_name, "aer_dev_fatal") == 0) { + add_pci_aer(buffer, de->d_name, AER_DEV_FATAL); + } + else if((types & AER_ROOTPORT_TOTAL_ERR_COR) && strcmp(de->d_name, "aer_rootport_total_err_cor") == 0) { + add_pci_aer(buffer, de->d_name, AER_ROOTPORT_TOTAL_ERR_COR); + } + else if((types & AER_ROOTPORT_TOTAL_ERR_FATAL) && strcmp(de->d_name, "aer_rootport_total_err_fatal") == 0) { + 
+                add_pci_aer(buffer, de->d_name, AER_ROOTPORT_TOTAL_ERR_FATAL);
+            }
+        }
+    }
+    closedir(dir);
+    return true;
+}
+
+static void find_all_pci_aer(AER_TYPE types) {
+    char name[FILENAME_MAX + 1];
+    snprintfz(name, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/devices");
+    pci_aer_dirname = config_get("plugin:proc:/sys/devices/pci/aer", "directory to monitor", name);
+
+    DIR *dir = opendir(pci_aer_dirname);
+    if(unlikely(!dir)) {
+        collector_error("Cannot read PCI_AER directory '%s'", pci_aer_dirname);
+        return;
+    }
+
+    struct dirent *de = NULL;
+    while((de = readdir(dir))) {
+        if(de->d_type == DT_DIR && de->d_name[0] == 'p' && de->d_name[1] == 'c' && de->d_name[2] == 'i' && isdigit(de->d_name[3]))
+            recursively_find_pci_aer(types, pci_aer_dirname, de->d_name, 1);
+    }
+    closedir(dir);
+}
+
+static void read_pci_aer_values(const char *filename, struct aer_entry *t) {
+    t->updated = false;
+
+    if(unlikely(!t->ff)) {
+        t->ff = procfile_open(filename, " \t", PROCFILE_FLAG_DEFAULT);
+        if(unlikely(!t->ff))
+            return;
+    }
+
+    t->ff = procfile_readall(t->ff);
+    if(unlikely(!t->ff || procfile_lines(t->ff) < 1 || procfile_linewords(t->ff, 0) < 1))
+        return;
+
+    size_t lines = procfile_lines(t->ff);
+    for(size_t l = 0; l < lines ; l++) {
+        if(procfile_linewords(t->ff, l) != 2)
+            continue;
+
+        struct aer_value v = {
+            .count = str2ull(procfile_lineword(t->ff, l, 1), NULL)
+        };
+
+        char *key = procfile_lineword(t->ff, l, 0);
+        if(!key || !*key || (key[0] == 'T' && key[1] == 'O' && key[2] == 'T' && key[3] == 'A' && key[4] == 'L' && key[5] == '_'))
+            continue;
+
+        dictionary_set(t->values, key, &v, sizeof(v));
+    }
+
+    t->updated = true;
+}
+
+static void read_pci_aer_count(const char *filename, struct aer_entry *t) {
+    t->updated = false;
+
+    if(unlikely(!t->ff)) {
+        t->ff = procfile_open(filename, " \t", PROCFILE_FLAG_DEFAULT);
+        if(unlikely(!t->ff))
+            return;
+    }
+
+    t->ff = procfile_readall(t->ff);
+    if(unlikely(!t->ff || procfile_lines(t->ff) < 1 || procfile_linewords(t->ff, 0) < 1))
+        return;
+
+    struct aer_value v = {
+        .count = str2ull(procfile_lineword(t->ff, 0, 0), NULL)
+    };
+    dictionary_set(t->values, "count", &v, sizeof(v));
+    t->updated = true;
+}
+
+static void add_label_from_link(struct aer_entry *a, const char *path, const char *link) {
+    char name[FILENAME_MAX + 1];
+    strncpyz(name, path, FILENAME_MAX);
+    char *slash = strrchr(name, '/');
+    if(slash)
+        *slash = '\0';
+
+    char name2[FILENAME_MAX + 1];
+    snprintfz(name2, FILENAME_MAX, "%s/%s", name, link);
+
+    ssize_t len = readlink(name2, name, FILENAME_MAX);
+    if(len != -1) {
+        name[len] = '\0'; // Null-terminate the string
+        slash = strrchr(name, '/');
+        if(slash) slash++;
+        else slash = name;
+        rrdlabels_add(a->st->rrdlabels, link, slash, RRDLABEL_SRC_AUTO);
+    }
+}
+
+int do_proc_sys_devices_pci_aer(int update_every, usec_t dt __maybe_unused) {
+    if(unlikely(!aer_root)) {
+        int do_root_ports = CONFIG_BOOLEAN_AUTO;
+        int do_pci_slots = CONFIG_BOOLEAN_NO;
+
+        char buffer[100 + 1] = "";
+        rrdlabels_get_value_strcpyz(localhost->rrdlabels, buffer, 100, "_virtualization");
+        if(strcmp(buffer, "none") != 0) {
+            // no need to run on virtualized environments
+            do_root_ports = CONFIG_BOOLEAN_NO;
+            do_pci_slots = CONFIG_BOOLEAN_NO;
+        }
+
+        do_root_ports = config_get_boolean("plugin:proc:/sys/class/pci/aer", "enable root ports", do_root_ports);
+        do_pci_slots = config_get_boolean("plugin:proc:/sys/class/pci/aer", "enable pci slots", do_pci_slots);
+
+        if(!do_root_ports && !do_pci_slots)
+            return 1;
+
+        aer_root = dictionary_create(DICT_OPTION_SINGLE_THREADED | DICT_OPTION_DONT_OVERWRITE_VALUE);
+        dictionary_register_insert_callback(aer_root, aer_insert_callback, NULL);
+
+        AER_TYPE types = ((do_root_ports) ? (AER_ROOTPORT_TOTAL_ERR_COR|AER_ROOTPORT_TOTAL_ERR_FATAL) : 0) |
+                         ((do_pci_slots) ? (AER_DEV_FATAL|AER_DEV_NONFATAL|AER_DEV_CORRECTABLE) : 0);
+
+        find_all_pci_aer(types);
+
+        if(!dictionary_entries(aer_root))
+            return 1;
+    }
+
+    struct aer_entry *a;
+    dfe_start_read(aer_root, a) {
+        switch(a->type) {
+            case AER_DEV_NONFATAL:
+            case AER_DEV_FATAL:
+            case AER_DEV_CORRECTABLE:
+                read_pci_aer_values(a_dfe.name, a);
+                break;
+
+            case AER_ROOTPORT_TOTAL_ERR_COR:
+            case AER_ROOTPORT_TOTAL_ERR_FATAL:
+                read_pci_aer_count(a_dfe.name, a);
+                break;
+        }
+
+        if(!a->updated)
+            continue;
+
+        if(!a->st) {
+            const char *title = "";
+            const char *context = "";
+
+            switch(a->type) {
+                case AER_DEV_NONFATAL:
+                    title = "PCI Advanced Error Reporting (AER) Non-Fatal Errors";
+                    context = "pci.aer_nonfatal";
+                    break;
+
+                case AER_DEV_FATAL:
+                    title = "PCI Advanced Error Reporting (AER) Fatal Errors";
+                    context = "pci.aer_fatal";
+                    break;
+
+                case AER_DEV_CORRECTABLE:
+                    title = "PCI Advanced Error Reporting (AER) Correctable Errors";
+                    context = "pci.aer_correctable";
+                    break;
+
+                case AER_ROOTPORT_TOTAL_ERR_COR:
+                    title = "PCI Root-Port Advanced Error Reporting (AER) Correctable Errors";
+                    context = "pci.rootport_aer_correctable";
+                    break;
+
+                case AER_ROOTPORT_TOTAL_ERR_FATAL:
+                    title = "PCI Root-Port Advanced Error Reporting (AER) Fatal Errors";
+                    context = "pci.rootport_aer_fatal";
+                    break;
+
+                default:
+                    title = "Unknown PCI Advanced Error Reporting";
+                    context = "pci.unknown_aer";
+                    break;
+            }
+
+            char id[RRD_ID_LENGTH_MAX + 1];
+            char nm[RRD_ID_LENGTH_MAX + 1];
+            size_t len = strlen(pci_aer_dirname);
+
+            const char *fname = a_dfe.name;
+            if(strncmp(a_dfe.name, pci_aer_dirname, len) == 0)
+                fname = &a_dfe.name[len];
+
+            if(*fname == '/')
+                fname++;
+
+            snprintfz(id, RRD_ID_LENGTH_MAX, "%s_%s", &context[4], fname);
+            char *slash = strrchr(id, '/');
+            if(slash)
+                *slash = '\0';
+
+            netdata_fix_chart_id(id);
+
+            snprintfz(nm, RRD_ID_LENGTH_MAX, "%s", fname);
+            slash = strrchr(nm, '/');
+            if(slash)
+                *slash = '\0';
+
+            a->st = rrdset_create_localhost(
+                    "pci"
+                    , id
+                    , NULL
+                    , "aer"
+                    , context
+                    , title
+                    , "errors/s"
+                    , PLUGIN_PROC_NAME
+                    , "/sys/devices/pci/aer"
+                    , NETDATA_CHART_PRIO_PCI_AER
+                    , update_every
+                    , RRDSET_TYPE_LINE
+            );
+
+            rrdlabels_add(a->st->rrdlabels, "device", nm, RRDLABEL_SRC_AUTO);
+            add_label_from_link(a, a_dfe.name, "driver");
+
+            struct aer_value *v;
+            dfe_start_read(a->values, v) {
+                v->rd = rrddim_add(a->st, v_dfe.name, NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            }
+            dfe_done(v);
+        }
+
+        struct aer_value *v;
+        dfe_start_read(a->values, v) {
+            if(unlikely(!v->rd))
+                v->rd = rrddim_add(a->st, v_dfe.name, NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+
+            rrddim_set_by_pointer(a->st, v->rd, (collected_number)v->count);
+        }
+        dfe_done(v);
+
+        rrdset_done(a->st);
+    }
+    dfe_done(a);
+
+    return 0;
+}
diff --git a/src/collectors/proc.plugin/sys_devices_system_edac_mc.c b/src/collectors/proc.plugin/sys_devices_system_edac_mc.c
new file mode 100644
index 000000000..d3db8c044
--- /dev/null
+++ b/src/collectors/proc.plugin/sys_devices_system_edac_mc.c
@@ -0,0 +1,298 @@
+// SPDX-License-Identifier: GPL-3.0-or-later
+
+#include "plugin_proc.h"
+
+struct edac_count {
+    bool updated;
+    char *filename;
+    procfile *ff;
+    kernel_uint_t count;
+    RRDDIM *rd;
+};
+
+struct edac_dimm {
+    char *name;
+
+    struct edac_count ce;
+    struct edac_count ue;
+
+    RRDSET *st;
+
+    struct edac_dimm *prev, *next;
+};
+
+struct mc {
+    char *name;
+
+    struct edac_count ce;
+    struct edac_count ue;
+    struct edac_count ce_noinfo;
+    struct edac_count ue_noinfo;
+
+    RRDSET *st;
+
+    struct edac_dimm *dimms;
+
+    struct mc *prev, *next;
+};
+
+static struct mc *mc_root = NULL;
+static char *mc_dirname = NULL;
+
+static void find_all_mc() {
+    char name[FILENAME_MAX + 1];
+    snprintfz(name, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/devices/system/edac/mc");
+    mc_dirname = config_get("plugin:proc:/sys/devices/system/edac/mc", "directory to monitor", name);
+
+    DIR *dir = opendir(mc_dirname);
+    if(unlikely(!dir)) {
+        collector_error("Cannot read EDAC memory errors directory '%s'", mc_dirname);
+        return;
+    }
+
+    struct dirent *de = NULL;
+    while((de = readdir(dir))) {
+        if(de->d_type == DT_DIR && de->d_name[0] == 'm' && de->d_name[1] == 'c' && isdigit(de->d_name[2])) {
+            struct mc *m = callocz(1, sizeof(struct mc));
+            m->name = strdupz(de->d_name);
+
+            struct stat st;
+
+            snprintfz(name, FILENAME_MAX, "%s/%s/ce_count", mc_dirname, de->d_name);
+            if(stat(name, &st) != -1)
+                m->ce.filename = strdupz(name);
+
+            snprintfz(name, FILENAME_MAX, "%s/%s/ue_count", mc_dirname, de->d_name);
+            if(stat(name, &st) != -1)
+                m->ue.filename = strdupz(name);
+
+            snprintfz(name, FILENAME_MAX, "%s/%s/ce_noinfo_count", mc_dirname, de->d_name);
+            if(stat(name, &st) != -1)
+                m->ce_noinfo.filename = strdupz(name);
+
+            snprintfz(name, FILENAME_MAX, "%s/%s/ue_noinfo_count", mc_dirname, de->d_name);
+            if(stat(name, &st) != -1)
+                m->ue_noinfo.filename = strdupz(name);
+
+            if(!m->ce.filename && !m->ue.filename && !m->ce_noinfo.filename && !m->ue_noinfo.filename) {
+                freez(m->name);
+                freez(m);
+            }
+            else
+                DOUBLE_LINKED_LIST_APPEND_ITEM_UNSAFE(mc_root, m, prev, next);
+        }
+    }
+    closedir(dir);
+
+    for(struct mc *m = mc_root; m ;m = m->next) {
+        snprintfz(name, FILENAME_MAX, "%s/%s", mc_dirname, m->name);
+        dir = opendir(name);
+        if(!dir) {
+            collector_error("Cannot read EDAC memory errors directory '%s'", name);
+            continue;
+        }
+
+        while((de = readdir(dir))) {
+            // it can be dimmX or rankX directory
+            // https://www.kernel.org/doc/html/v5.0/admin-guide/ras.html#f5
+
+            if (de->d_type == DT_DIR &&
+                ((strncmp(de->d_name, "rank", 4) == 0 || strncmp(de->d_name, "dimm", 4) == 0)) &&
+                isdigit(de->d_name[4])) {
+
+                struct edac_dimm *d = callocz(1, sizeof(struct edac_dimm));
+                d->name = strdupz(de->d_name);
+
+                struct stat st;
+
+                snprintfz(name, FILENAME_MAX, "%s/%s/%s/dimm_ce_count", mc_dirname, m->name, de->d_name);
+                if(stat(name, &st) != -1)
+                    d->ce.filename = strdupz(name);
+
+                snprintfz(name, FILENAME_MAX, "%s/%s/%s/dimm_ue_count", mc_dirname, m->name, de->d_name);
+                if(stat(name, &st) != -1)
+                    d->ue.filename = strdupz(name);
+
+                if(!d->ce.filename && !d->ue.filename) {
+                    freez(d->name);
+                    freez(d);
+                }
+                else
+                    DOUBLE_LINKED_LIST_APPEND_ITEM_UNSAFE(m->dimms, d, prev, next);
+            }
+        }
+        closedir(dir);
+    }
+}
+
+static kernel_uint_t read_edac_count(struct edac_count *t) {
+    t->updated = false;
+    t->count = 0;
+
+    if(t->filename) {
+        if(unlikely(!t->ff)) {
+            t->ff = procfile_open(t->filename, " \t", PROCFILE_FLAG_DEFAULT);
+            if(unlikely(!t->ff))
+                return 0;
+        }
+
+        t->ff = procfile_readall(t->ff);
+        if(unlikely(!t->ff || procfile_lines(t->ff) < 1 || procfile_linewords(t->ff, 0) < 1))
+            return 0;
+
+        t->count = str2ull(procfile_lineword(t->ff, 0, 0), NULL);
+        t->updated = true;
+    }
+
+    return t->count;
+}
+
+static bool read_edac_mc_file(const char *mc, const char *filename, char *out, size_t out_size) {
+    char f[FILENAME_MAX + 1];
+    snprintfz(f, FILENAME_MAX, "%s/%s/%s", mc_dirname, mc, filename);
+    if(read_txt_file(f, out, out_size) != 0) {
+        collector_error("EDAC: cannot read file '%s'", f);
+        return false;
+    }
+    return true;
+}
+
+static bool read_edac_mc_rank_file(const char *mc, const char *rank, const char *filename, char *out, size_t out_size) {
+    char f[FILENAME_MAX + 1];
+    snprintfz(f, FILENAME_MAX, "%s/%s/%s/%s", mc_dirname, mc, rank, filename);
+    if(read_txt_file(f, out, out_size) != 0) {
+        collector_error("EDAC: cannot read file '%s'", f);
+        return false;
+    }
+    return true;
+}
+
+int do_proc_sys_devices_system_edac_mc(int update_every, usec_t dt __maybe_unused) {
+    if(unlikely(!mc_root)) {
+        find_all_mc();
+
+        if(!mc_root)
+            // don't call this again
+            return 1;
+    }
+
+    for(struct mc *m = mc_root; m; m = m->next) {
+        read_edac_count(&m->ce);
+        read_edac_count(&m->ce_noinfo);
+        read_edac_count(&m->ue);
+        read_edac_count(&m->ue_noinfo);
+
+        for(struct edac_dimm *d = m->dimms; d ;d = d->next) {
+            read_edac_count(&d->ce);
+            read_edac_count(&d->ue);
+        }
+    }
+
+    // --------------------------------------------------------------------
+
+    for(struct mc *m = mc_root; m ; m = m->next) {
+        if(unlikely(!m->ce.updated && !m->ue.updated && !m->ce_noinfo.updated && !m->ue_noinfo.updated))
+            continue;
+
+        if(unlikely(!m->st)) {
+            char id[RRD_ID_LENGTH_MAX + 1];
+            snprintfz(id, RRD_ID_LENGTH_MAX, "edac_%s", m->name);
+            m->st = rrdset_create_localhost(
+                    "mem"
+                    , id
+                    , NULL
+                    , "edac"
+                    , "mem.edac_mc_errors"
+                    , "Memory Controller (MC) Error Detection And Correction (EDAC) Errors"
+                    , "errors"
+                    , PLUGIN_PROC_NAME
+                    , "/sys/devices/system/edac/mc"
+                    , NETDATA_CHART_PRIO_MEM_HW_ECC_CE
+                    , update_every
+                    , RRDSET_TYPE_LINE
+            );
+
+            rrdlabels_add(m->st->rrdlabels, "controller", m->name, RRDLABEL_SRC_AUTO);
+
+            char buffer[1024 + 1];
+
+            if(read_edac_mc_file(m->name, "mc_name", buffer, 1024))
+                rrdlabels_add(m->st->rrdlabels, "mc_name", buffer, RRDLABEL_SRC_AUTO);
+
+            if(read_edac_mc_file(m->name, "size_mb", buffer, 1024))
+                rrdlabels_add(m->st->rrdlabels, "size_mb", buffer, RRDLABEL_SRC_AUTO);
+
+            if(read_edac_mc_file(m->name, "max_location", buffer, 1024))
+                rrdlabels_add(m->st->rrdlabels, "max_location", buffer, RRDLABEL_SRC_AUTO);
+
+            m->ce.rd = rrddim_add(m->st, "correctable", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE);
+            m->ue.rd = rrddim_add(m->st, "uncorrectable", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE);
+            m->ce_noinfo.rd = rrddim_add(m->st, "correctable_noinfo", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE);
+            m->ue_noinfo.rd = rrddim_add(m->st, "uncorrectable_noinfo", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE);
+        }
+
+        rrddim_set_by_pointer(m->st, m->ce.rd, (collected_number)m->ce.count);
+        rrddim_set_by_pointer(m->st, m->ue.rd, (collected_number)m->ue.count);
+        rrddim_set_by_pointer(m->st, m->ce_noinfo.rd, (collected_number)m->ce_noinfo.count);
+        rrddim_set_by_pointer(m->st, m->ue_noinfo.rd, (collected_number)m->ue_noinfo.count);
+
+        rrdset_done(m->st);
+
+        for(struct edac_dimm *d = m->dimms; d ;d = d->next) {
+            if(unlikely(!d->ce.updated && !d->ue.updated))
+                continue;
+
+            if(unlikely(!d->st)) {
+                char id[RRD_ID_LENGTH_MAX + 1];
+                snprintfz(id, RRD_ID_LENGTH_MAX, "edac_%s_%s", m->name, d->name);
+                d->st = rrdset_create_localhost(
+                        "mem"
+                        , id
+                        , NULL
+                        , "edac"
+                        , "mem.edac_mc_dimm_errors"
+                        , "DIMM Error Detection And Correction (EDAC) Errors"
+                        , "errors"
+                        , PLUGIN_PROC_NAME
+                        , "/sys/devices/system/edac/mc"
+                        , NETDATA_CHART_PRIO_MEM_HW_ECC_CE + 1
+                        , update_every
+                        , RRDSET_TYPE_LINE
+                );
+
+                rrdlabels_add(d->st->rrdlabels, "controller", m->name, RRDLABEL_SRC_AUTO);
+                rrdlabels_add(d->st->rrdlabels, "dimm", d->name, RRDLABEL_SRC_AUTO);
+
+                char buffer[1024 + 1];
+
+                if (read_edac_mc_rank_file(m->name, d->name, "dimm_dev_type", buffer, 1024))
+                    rrdlabels_add(d->st->rrdlabels, "dimm_dev_type", buffer, RRDLABEL_SRC_AUTO);
+
+                if (read_edac_mc_rank_file(m->name, d->name, "dimm_edac_mode", buffer, 1024))
+                    rrdlabels_add(d->st->rrdlabels, "dimm_edac_mode", buffer, RRDLABEL_SRC_AUTO);
+
+                if (read_edac_mc_rank_file(m->name, d->name, "dimm_label", buffer, 1024))
+                    rrdlabels_add(d->st->rrdlabels, "dimm_label", buffer, RRDLABEL_SRC_AUTO);
+
+                if (read_edac_mc_rank_file(m->name, d->name, "dimm_location", buffer, 1024))
+                    rrdlabels_add(d->st->rrdlabels, "dimm_location", buffer, RRDLABEL_SRC_AUTO);
+
+                if (read_edac_mc_rank_file(m->name, d->name, "dimm_mem_type", buffer, 1024))
+                    rrdlabels_add(d->st->rrdlabels, "dimm_mem_type", buffer, RRDLABEL_SRC_AUTO);
+
+                if (read_edac_mc_rank_file(m->name, d->name, "size", buffer, 1024))
+                    rrdlabels_add(d->st->rrdlabels, "size", buffer, RRDLABEL_SRC_AUTO);
+
+                d->ce.rd = rrddim_add(d->st, "correctable", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE);
+                d->ue.rd = rrddim_add(d->st, "uncorrectable", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE);
+            }
+
+            rrddim_set_by_pointer(d->st, d->ce.rd, (collected_number)d->ce.count);
+            rrddim_set_by_pointer(d->st, d->ue.rd, (collected_number)d->ue.count);
+
+            rrdset_done(d->st);
+        }
+    }
+
+    return 0;
+}
diff --git a/src/collectors/proc.plugin/sys_devices_system_node.c b/src/collectors/proc.plugin/sys_devices_system_node.c
new file mode 100644
index 000000000..d6db94a27
--- /dev/null
+++ b/src/collectors/proc.plugin/sys_devices_system_node.c
@@ -0,0 +1,165 @@
+// SPDX-License-Identifier: GPL-3.0-or-later
+
+#include "plugin_proc.h"
+
+struct node {
+    char *name;
+    char *numastat_filename;
+    procfile *numastat_ff;
+    RRDSET *numastat_st;
+    struct node *next;
+};
+static struct node *numa_root = NULL;
+
+static int find_all_nodes() {
+    int numa_node_count = 0;
+    char name[FILENAME_MAX + 1];
+    snprintfz(name, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/devices/system/node");
+    char *dirname = config_get("plugin:proc:/sys/devices/system/node", "directory to monitor", name);
+
+    DIR *dir = opendir(dirname);
+    if(!dir) {
+        collector_error("Cannot read NUMA node directory '%s'", dirname);
+        return 0;
+    }
+
+    struct dirent *de = NULL;
+    while((de = readdir(dir))) {
+        if(de->d_type != DT_DIR)
+            continue;
+
+        if(strncmp(de->d_name, "node", 4) != 0)
+            continue;
+
+        if(!isdigit(de->d_name[4]))
+            continue;
+
+        numa_node_count++;
+
+        struct node *m = callocz(1, sizeof(struct node));
+        m->name = strdupz(de->d_name);
+
+        struct stat st;
+
+        snprintfz(name, FILENAME_MAX, "%s/%s/numastat", dirname, de->d_name);
+        if(stat(name, &st) == -1) {
+            freez(m->name);
+            freez(m);
+            continue;
+        }
+
+        m->numastat_filename = strdupz(name);
+
+        m->next = numa_root;
+        numa_root = m;
+    }
+
+    closedir(dir);
+
+    return numa_node_count;
+}
+
+int do_proc_sys_devices_system_node(int update_every, usec_t dt) {
+    (void)dt;
+
+    static uint32_t hash_local_node = 0, hash_numa_foreign = 0, hash_interleave_hit = 0, hash_other_node = 0, hash_numa_hit = 0, hash_numa_miss = 0;
+    static int do_numastat = -1, numa_node_count = 0;
+    struct node *m;
+
+    if(unlikely(numa_root == NULL)) {
+        numa_node_count = find_all_nodes();
+        if(unlikely(numa_root == NULL))
+            return 1;
+    }
+
+    if(unlikely(do_numastat == -1)) {
+        do_numastat = config_get_boolean_ondemand("plugin:proc:/sys/devices/system/node", "enable per-node numa metrics", CONFIG_BOOLEAN_AUTO);
+
+        hash_local_node = simple_hash("local_node");
+        hash_numa_foreign = simple_hash("numa_foreign");
+        hash_interleave_hit = simple_hash("interleave_hit");
+        hash_other_node = simple_hash("other_node");
+        hash_numa_hit = simple_hash("numa_hit");
+        hash_numa_miss = simple_hash("numa_miss");
+    }
+
+    if(do_numastat == CONFIG_BOOLEAN_YES || (do_numastat == CONFIG_BOOLEAN_AUTO &&
+        (numa_node_count >= 2 || netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) {
+        for(m = numa_root; m; m = m->next) {
+            if(m->numastat_filename) {
+
+                if(unlikely(!m->numastat_ff)) {
+                    m->numastat_ff = procfile_open(m->numastat_filename, " ", PROCFILE_FLAG_DEFAULT);
+
+                    if(unlikely(!m->numastat_ff))
+                        continue;
+                }
+
+                m->numastat_ff = procfile_readall(m->numastat_ff);
+                if(unlikely(!m->numastat_ff || procfile_lines(m->numastat_ff) < 1 || procfile_linewords(m->numastat_ff, 0) < 1))
+                    continue;
+
+                if(unlikely(!m->numastat_st)) {
+                    m->numastat_st = rrdset_create_localhost(
+                            "mem"
+                            , m->name
+                            , NULL
+                            , "numa"
+                            , "mem.numa_nodes"
+                            , "NUMA events"
+                            , "events/s"
+                            , PLUGIN_PROC_NAME
+                            , "/sys/devices/system/node"
+                            , NETDATA_CHART_PRIO_MEM_NUMA_NODES
+                            , update_every
+                            , RRDSET_TYPE_LINE
+                    );
+
+                    rrdlabels_add(m->numastat_st->rrdlabels, "numa_node", m->name, RRDLABEL_SRC_AUTO);
+
+                    rrdset_flag_set(m->numastat_st, RRDSET_FLAG_DETAIL);
+
+                    rrddim_add(m->numastat_st, "numa_hit", "hit", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+                    rrddim_add(m->numastat_st, "numa_miss", "miss", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+                    rrddim_add(m->numastat_st, "local_node", "local", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+                    rrddim_add(m->numastat_st, "numa_foreign", "foreign", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+                    rrddim_add(m->numastat_st, "interleave_hit", "interleave", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+                    rrddim_add(m->numastat_st, "other_node", "other", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+
+                }
+
+                size_t lines = procfile_lines(m->numastat_ff), l;
+                for(l = 0; l < lines; l++) {
+                    size_t words = procfile_linewords(m->numastat_ff, l);
+
+                    if(unlikely(words < 2)) {
+                        if(unlikely(words))
+                            collector_error("Cannot read %s numastat line %zu. Expected 2 params, read %zu.", m->name, l, words);
+                        continue;
+                    }
+
+                    char *name = procfile_lineword(m->numastat_ff, l, 0);
+                    char *value = procfile_lineword(m->numastat_ff, l, 1);
+
+                    if (unlikely(!name || !*name || !value || !*value))
+                        continue;
+
+                    uint32_t hash = simple_hash(name);
+                    if(likely(
+                               (hash == hash_numa_hit && !strcmp(name, "numa_hit"))
+                            || (hash == hash_numa_miss && !strcmp(name, "numa_miss"))
+                            || (hash == hash_local_node && !strcmp(name, "local_node"))
+                            || (hash == hash_numa_foreign && !strcmp(name, "numa_foreign"))
+                            || (hash == hash_interleave_hit && !strcmp(name, "interleave_hit"))
+                            || (hash == hash_other_node && !strcmp(name, "other_node"))
+                    ))
+                        rrddim_set(m->numastat_st, name, (collected_number)str2kernel_uint_t(value));
+                }
+
+                rrdset_done(m->numastat_st);
+            }
+        }
+    }
+
+    return 0;
+}
diff --git a/src/collectors/proc.plugin/sys_fs_btrfs.c b/src/collectors/proc.plugin/sys_fs_btrfs.c
new file mode 100644
index 000000000..7023484ca
--- /dev/null
+++ b/src/collectors/proc.plugin/sys_fs_btrfs.c
@@ -0,0 +1,1155 @@
+// SPDX-License-Identifier: GPL-3.0-or-later
+
+#include "plugin_proc.h"
+
+#define PLUGIN_PROC_MODULE_BTRFS_NAME "/sys/fs/btrfs"
+
+typedef struct btrfs_disk {
+    char *name;
+    uint32_t hash;
+    int exists;
+
+    char *size_filename;
+    unsigned long long size;
+
+    struct btrfs_disk *next;
+} BTRFS_DISK;
+
+typedef struct btrfs_device {
+    int id;
+    int exists;
+
+    char *error_stats_filename;
+    RRDSET *st_error_stats;
+    RRDDIM *rd_write_errs;
+    RRDDIM *rd_read_errs;
+    RRDDIM *rd_flush_errs;
+    RRDDIM *rd_corruption_errs;
+    RRDDIM *rd_generation_errs;
+    collected_number write_errs;
+    collected_number read_errs;
+    collected_number flush_errs;
+    collected_number corruption_errs;
+    collected_number generation_errs;
+
+    struct btrfs_device *next;
+} BTRFS_DEVICE;
+
+typedef struct btrfs_node {
+    int exists;
+    int logged_error;
+
+    char *id;
+    uint32_t hash;
+
+    char *label;
+
+    #define declare_btrfs_allocation_section_field(SECTION, FIELD) \
+        char *allocation_ ## SECTION ## _ ## FIELD ## _filename; \
+        unsigned long long int allocation_ ## SECTION ## _ ## FIELD;
+
+    #define declare_btrfs_allocation_field(FIELD) \
+        char *allocation_ ## FIELD ## _filename; \
+        unsigned long long int allocation_ ## FIELD;
+
+    RRDSET *st_allocation_disks;
+    RRDDIM *rd_allocation_disks_unallocated;
+    RRDDIM *rd_allocation_disks_data_used;
+    RRDDIM *rd_allocation_disks_data_free;
+    RRDDIM *rd_allocation_disks_metadata_used;
+    RRDDIM *rd_allocation_disks_metadata_free;
+    RRDDIM *rd_allocation_disks_system_used;
+    RRDDIM *rd_allocation_disks_system_free;
+    unsigned long long all_disks_total;
+
+    RRDSET *st_allocation_data;
+    RRDDIM *rd_allocation_data_free;
+    RRDDIM *rd_allocation_data_used;
+    declare_btrfs_allocation_section_field(data, total_bytes)
+    declare_btrfs_allocation_section_field(data, bytes_used)
+    declare_btrfs_allocation_section_field(data, disk_total)
+    declare_btrfs_allocation_section_field(data, disk_used)
+
+    RRDSET *st_allocation_metadata;
+    RRDDIM *rd_allocation_metadata_free;
+    RRDDIM *rd_allocation_metadata_used;
+    RRDDIM *rd_allocation_metadata_reserved;
+    declare_btrfs_allocation_section_field(metadata, total_bytes)
+    declare_btrfs_allocation_section_field(metadata, bytes_used)
+    declare_btrfs_allocation_section_field(metadata, disk_total)
+    declare_btrfs_allocation_section_field(metadata, disk_used)
+    //declare_btrfs_allocation_field(global_rsv_reserved)
+    declare_btrfs_allocation_field(global_rsv_size)
+
+    RRDSET *st_allocation_system;
+    RRDDIM *rd_allocation_system_free;
+    RRDDIM *rd_allocation_system_used;
+    declare_btrfs_allocation_section_field(system, total_bytes)
+    declare_btrfs_allocation_section_field(system, bytes_used)
+    declare_btrfs_allocation_section_field(system, disk_total)
+    declare_btrfs_allocation_section_field(system, disk_used)
+
+    // --------------------------------------------------------------------
+    // commit stats
+
+    char *commit_stats_filename;
+
+    RRDSET *st_commits;
+    RRDDIM *rd_commits;
+    long long commits_total;
+    collected_number commits_new;
+
+    RRDSET *st_commits_percentage_time;
+    RRDDIM *rd_commits_percentage_time;
+    long long commit_timings_total;
+    long long commits_percentage_time;
+
+    RRDSET *st_commit_timings;
+    RRDDIM *rd_commit_timings_last;
+    RRDDIM *rd_commit_timings_max;
+    collected_number commit_timings_last;
+    collected_number commit_timings_max;
+
+    BTRFS_DISK *disks;
+
+    BTRFS_DEVICE *devices;
+
+    struct btrfs_node *next;
+} BTRFS_NODE;
+
+static BTRFS_NODE *nodes = NULL;
+
+static inline int collect_btrfs_error_stats(BTRFS_DEVICE *device){
+    char buffer[120 + 1];
+
+    int ret = read_txt_file(device->error_stats_filename, buffer, sizeof(buffer));
+    if(unlikely(ret)) {
+        collector_error("BTRFS: failed to read '%s'", device->error_stats_filename);
+        device->write_errs = 0;
+        device->read_errs = 0;
+        device->flush_errs = 0;
+        device->corruption_errs = 0;
+        device->generation_errs = 0;
+        return ret;
+    }
+
+    char *p = buffer;
+    while(p){
+        char *val = strsep_skip_consecutive_separators(&p, "\n");
+        if(unlikely(!val || !*val)) break;
+        char *key = strsep_skip_consecutive_separators(&val, " ");
+
+        if(!strcmp(key, "write_errs")) device->write_errs = str2ull(val, NULL);
+        else if(!strcmp(key, "read_errs")) device->read_errs = str2ull(val, NULL);
+        else if(!strcmp(key, "flush_errs")) device->flush_errs = str2ull(val, NULL);
+        else if(!strcmp(key, "corruption_errs")) device->corruption_errs = str2ull(val, NULL);
+        else if(!strcmp(key, "generation_errs")) device->generation_errs = str2ull(val, NULL);
+    }
+    return 0;
+}
+
+static inline int collect_btrfs_commits_stats(BTRFS_NODE *node, int update_every){
+    char buffer[120 + 1];
+
+    int ret = read_txt_file(node->commit_stats_filename, buffer, sizeof(buffer));
+    if(unlikely(ret)) {
+        collector_error("BTRFS: failed to read '%s'", node->commit_stats_filename);
+        node->commits_total = 0;
+        node->commits_new = 0;
+        node->commit_timings_last = 0;
+        node->commit_timings_max = 0;
+        node->commit_timings_total = 0;
+        node->commits_percentage_time = 0;
+
+        return ret;
+    }
+
+    char *p = buffer;
+    while(p){
+        char *val = strsep_skip_consecutive_separators(&p, "\n");
+        if(unlikely(!val || !*val)) break;
+        char *key = strsep_skip_consecutive_separators(&val, " ");
+
+        if(!strcmp(key, "commits")){
+            long long commits_total_new = str2ull(val, NULL);
+            if(likely(node->commits_total)){
+                if((node->commits_new = commits_total_new - node->commits_total))
+                    node->commits_total = commits_total_new;
+            } else node->commits_total = commits_total_new;
+        }
+        else if(!strcmp(key, "last_commit_ms")) node->commit_timings_last = str2ull(val, NULL);
+        else if(!strcmp(key, "max_commit_ms")) node->commit_timings_max = str2ull(val, NULL);
+        else if(!strcmp(key, "total_commit_ms")) {
+            long long commit_timings_total_new = str2ull(val, NULL);
+            if(likely(node->commit_timings_total)){
+                long time_delta = commit_timings_total_new - node->commit_timings_total;
+                if(time_delta){
+                    node->commits_percentage_time = time_delta * 10 / update_every;
+                    node->commit_timings_total = commit_timings_total_new;
+                } else node->commits_percentage_time = 0;
+
+            } else node->commit_timings_total = commit_timings_total_new;
+        }
+    }
+    return 0;
+}
+
+static inline void btrfs_free_commits_stats(BTRFS_NODE *node){
+    if(node->st_commits){
+        rrdset_is_obsolete___safe_from_collector_thread(node->st_commits);
+        rrdset_is_obsolete___safe_from_collector_thread(node->st_commit_timings);
+    }
+    freez(node->commit_stats_filename);
+    node->commit_stats_filename = NULL;
+}
+
+static inline void btrfs_free_disk(BTRFS_DISK *d) {
+    freez(d->name);
+    freez(d->size_filename);
+    freez(d);
+}
+
+static inline void btrfs_free_device(BTRFS_DEVICE *d) {
+    if(d->st_error_stats)
+        rrdset_is_obsolete___safe_from_collector_thread(d->st_error_stats);
+    freez(d->error_stats_filename);
+    freez(d);
+}
+
+static inline void btrfs_free_node(BTRFS_NODE *node) {
+    // collector_info("BTRFS: destroying '%s'", node->id);
+
+    if(node->st_allocation_disks)
+        rrdset_is_obsolete___safe_from_collector_thread(node->st_allocation_disks);
+
+    if(node->st_allocation_data)
+        rrdset_is_obsolete___safe_from_collector_thread(node->st_allocation_data);
+
+    if(node->st_allocation_metadata)
+        rrdset_is_obsolete___safe_from_collector_thread(node->st_allocation_metadata);
+
+    if(node->st_allocation_system)
+        rrdset_is_obsolete___safe_from_collector_thread(node->st_allocation_system);
+
+    freez(node->allocation_data_bytes_used_filename);
+    freez(node->allocation_data_total_bytes_filename);
+
+    freez(node->allocation_metadata_bytes_used_filename);
+    freez(node->allocation_metadata_total_bytes_filename);
+
+    freez(node->allocation_system_bytes_used_filename);
+    freez(node->allocation_system_total_bytes_filename);
+
+    btrfs_free_commits_stats(node);
+
+    while(node->disks) {
+        BTRFS_DISK *d = node->disks;
+        node->disks = node->disks->next;
+        btrfs_free_disk(d);
+    }
+
+    while(node->devices) {
+        BTRFS_DEVICE *d = node->devices;
+        node->devices = node->devices->next;
+        btrfs_free_device(d);
+    }
+
+    freez(node->label);
+    freez(node->id);
+    freez(node);
+}
+
+static inline int find_btrfs_disks(BTRFS_NODE *node, const char *path) {
+    char filename[FILENAME_MAX + 1];
+
+    node->all_disks_total = 0;
+
+    BTRFS_DISK *d;
+    for(d = node->disks ; d ; d = d->next)
+        d->exists = 0;
+
+    DIR *dir = opendir(path);
+    if (!dir) {
+        if(!node->logged_error) {
+            collector_error("BTRFS: Cannot open directory '%s'.", path);
+            node->logged_error = 1;
+        }
+        return 1;
+    }
+    node->logged_error = 0;
+
+    struct dirent *de = NULL;
+    while ((de = readdir(dir))) {
+        if (de->d_type != DT_LNK
+            || !strcmp(de->d_name, ".")
+            || !strcmp(de->d_name, "..")
+        ) {
+            // collector_info("BTRFS: ignoring '%s'", de->d_name);
+            continue;
+        }
+
+        uint32_t hash = simple_hash(de->d_name);
+
+        // --------------------------------------------------------------------
+        // search for it
+
+        for(d = node->disks ; d ; d = d->next) {
+            if(hash == d->hash && !strcmp(de->d_name, d->name))
+                break;
+        }
+
+        // --------------------------------------------------------------------
+        // did we find it?
+
+        if(!d) {
+            d = callocz(sizeof(BTRFS_DISK), 1);
+
+            d->name = strdupz(de->d_name);
+            d->hash = simple_hash(d->name);
+
+            snprintfz(filename, FILENAME_MAX, "%s/%s/size", path, de->d_name);
+            d->size_filename = strdupz(filename);
+
+            // link it
+            d->next = node->disks;
+            node->disks = d;
+        }
+
+        d->exists = 1;
+
+
+        // --------------------------------------------------------------------
+        // update the values
+
+        if(read_single_number_file(d->size_filename, &d->size) != 0) {
+            collector_error("BTRFS: failed to read '%s'", d->size_filename);
+            d->exists = 0;
+            continue;
+        }
+
+        // /sys/block/<name>/size is in fixed-size sectors of 512 bytes
+        // https://github.com/torvalds/linux/blob/v6.2/block/genhd.c#L946-L950
+        // https://github.com/torvalds/linux/blob/v6.2/include/linux/types.h#L120-L121
+        // (also see #3481, #3483)
+        node->all_disks_total += d->size * 512;
+    }
+    closedir(dir);
+
+    // ------------------------------------------------------------------------
+    // cleanup
+
+    BTRFS_DISK *last = NULL;
+    d = node->disks;
+
+    while(d) {
+        if(unlikely(!d->exists)) {
+            if(unlikely(node->disks == d)) {
+                node->disks = d->next;
+                btrfs_free_disk(d);
+                d = node->disks;
+                last = NULL;
+            }
+            else {
+                last->next = d->next;
+                btrfs_free_disk(d);
+                d = last->next;
+            }
+
+            continue;
+        }
+
+        last = d;
+        d = d->next;
+    }
+
+    return 0;
+}
+
+static inline int find_btrfs_devices(BTRFS_NODE *node, const char *path) {
+    char filename[FILENAME_MAX + 1];
+
+    BTRFS_DEVICE *d;
+    for(d = node->devices ; d ; d = d->next)
+        d->exists = 0;
+
+    DIR *dir = opendir(path);
+    if (!dir) {
+        if(!node->logged_error) {
+            collector_error("BTRFS: Cannot open directory '%s'.", path);
+            node->logged_error = 1;
+        }
+        return 1;
+    }
+    node->logged_error = 0;
+
+    struct dirent *de = NULL;
+    while ((de = readdir(dir))) {
+        if (de->d_type != DT_DIR
+            || !strcmp(de->d_name, ".")
+            || !strcmp(de->d_name, "..")
+        ) {
+            // collector_info("BTRFS: ignoring '%s'", de->d_name);
+            continue;
+        }
+
+        // internal_error("BTRFS: device found '%s'", de->d_name);
+
+        // --------------------------------------------------------------------
+        // search for it
+
+        for(d = node->devices ; d ; d = d->next) {
+            if(str2ll(de->d_name, NULL) == d->id){
+                // collector_info("BTRFS: existing device id '%d'", d->id);
+                break;
+            }
+        }
+
+        // --------------------------------------------------------------------
+        // did we find it?
+
+        if(!d) {
+            d = callocz(sizeof(BTRFS_DEVICE), 1);
+
+            d->id = str2ll(de->d_name, NULL);
+            // collector_info("BTRFS: new device with id '%d'", d->id);
+
+            snprintfz(filename, FILENAME_MAX, "%s/%d/error_stats", path, d->id);
+            d->error_stats_filename = strdupz(filename);
+            // collector_info("BTRFS: error_stats_filename '%s'", filename);
+
+            // link it
+            d->next = node->devices;
+            node->devices = d;
+        }
+
+        d->exists = 1;
+
+
+        // --------------------------------------------------------------------
+        // update the values
+
+        if(unlikely(collect_btrfs_error_stats(d)))
+            d->exists = 0; // 'd' will be garbaged collected in loop below
+    }
+    closedir(dir);
+
+    // ------------------------------------------------------------------------
+    // cleanup
+
+    BTRFS_DEVICE *last = NULL;
+    d = node->devices;
+
+    while(d) {
+        if(unlikely(!d->exists)) {
+            if(unlikely(node->devices == d)) {
+                node->devices = d->next;
+                btrfs_free_device(d);
+                d = node->devices;
+                last = NULL;
+            }
+            else {
+                last->next = d->next;
+                btrfs_free_device(d);
+                d = last->next;
+            }
+
+            continue;
+        }
+
+        last = d;
+        d = d->next;
+    }
+
+    return 0;
+}
+
+
+static inline int find_all_btrfs_pools(const char *path, int update_every) {
+    static int logged_error = 0;
+    char filename[FILENAME_MAX + 1];
+
+    BTRFS_NODE *node;
+    for(node = nodes ; node ; node = node->next)
+        node->exists = 0;
+
+    DIR *dir = opendir(path);
+    if (!dir) {
+        if(!logged_error) {
+            collector_error("BTRFS: Cannot open directory '%s'.", path);
+            logged_error = 1;
+        }
+        return 1;
+    }
+    logged_error = 0;
+
+    struct dirent *de = NULL;
+    while ((de = readdir(dir))) {
+        if(de->d_type != DT_DIR
+            || !strcmp(de->d_name, ".")
+            || !strcmp(de->d_name, "..")
+            || !strcmp(de->d_name, "features")
+        ) {
+            // collector_info("BTRFS: ignoring '%s'", de->d_name);
+            continue;
+        }
+
+        uint32_t hash = simple_hash(de->d_name);
+
+        // search for it
+        for(node = nodes ; node ; node = node->next) {
+            if(hash == node->hash && !strcmp(de->d_name, node->id))
+                break;
+        }
+
+        // did we find it?
+        if(node) {
+            // collector_info("BTRFS: already exists '%s'", de->d_name);
+            node->exists = 1;
+
+            // update the disk sizes
+            snprintfz(filename, FILENAME_MAX, "%s/%s/devices", path, de->d_name);
+            find_btrfs_disks(node, filename);
+
+            // update devices
+            snprintfz(filename, FILENAME_MAX, "%s/%s/devinfo", path, de->d_name);
+            find_btrfs_devices(node, filename);
+
+            continue;
+        }
+
+        // collector_info("BTRFS: adding '%s'", de->d_name);
+
+        // not found, create it
+        node = callocz(sizeof(BTRFS_NODE), 1);
+
+        node->id = strdupz(de->d_name);
+        node->hash = simple_hash(node->id);
+        node->exists = 1;
+
+        {
+            char label[FILENAME_MAX + 1] = "";
+
+            snprintfz(filename, FILENAME_MAX, "%s/%s/label", path, de->d_name);
+            if(read_txt_file(filename, label, sizeof(label)) != 0) {
+                collector_error("BTRFS: failed to read '%s'", filename);
+                btrfs_free_node(node);
+                continue;
+            }
+
+            char *s = label;
+            if (s[0])
+                s = trim(label);
+
+            if(s && s[0])
+                node->label = strdupz(s);
+            else
+                node->label = strdupz(node->id);
+        }
+
+        // --------------------------------------------------------------------
+        // macros to simplify our life
+
+        #define init_btrfs_allocation_field(FIELD) {\
+            snprintfz(filename, FILENAME_MAX, "%s/%s/allocation/" #FIELD, path, de->d_name); \
+            if(read_single_number_file(filename, &node->allocation_ ## FIELD) != 0) {\
+                collector_error("BTRFS: failed to read '%s'", filename);\
+                btrfs_free_node(node);\
+                continue;\
+            }\
+            if(!node->allocation_ ## FIELD ## _filename)\
+                node->allocation_ ## FIELD ## _filename = strdupz(filename);\
+        }
+
+        #define init_btrfs_allocation_section_field(SECTION, FIELD) {\
+            snprintfz(filename, FILENAME_MAX, "%s/%s/allocation/" #SECTION "/" #FIELD, path, de->d_name); \
+            if(read_single_number_file(filename, &node->allocation_ ## SECTION ## _ ## FIELD) != 0) {\
+                collector_error("BTRFS: failed to read '%s'", filename);\
+                btrfs_free_node(node);\
+                continue;\
+            }\
+            if(!node->allocation_ ## SECTION ## _ ## FIELD ## _filename)\
+                node->allocation_ ## SECTION ## _ ## FIELD ## _filename = strdupz(filename);\
+        }
+
+        // --------------------------------------------------------------------
+        // allocation/data
+
+        init_btrfs_allocation_section_field(data, total_bytes);
+        init_btrfs_allocation_section_field(data, bytes_used);
+        init_btrfs_allocation_section_field(data, disk_total);
+        init_btrfs_allocation_section_field(data, disk_used);
+
+
+        // --------------------------------------------------------------------
+        // allocation/metadata
+
+        init_btrfs_allocation_section_field(metadata, total_bytes);
+        init_btrfs_allocation_section_field(metadata, bytes_used);
+        init_btrfs_allocation_section_field(metadata, disk_total);
+        init_btrfs_allocation_section_field(metadata, disk_used);
+
+        init_btrfs_allocation_field(global_rsv_size);
+        // init_btrfs_allocation_field(global_rsv_reserved);
+
+
+        // --------------------------------------------------------------------
+        // allocation/system
+
+        init_btrfs_allocation_section_field(system, total_bytes);
+        init_btrfs_allocation_section_field(system, bytes_used);
+        init_btrfs_allocation_section_field(system, disk_total);
+        init_btrfs_allocation_section_field(system, disk_used);
+
+        // --------------------------------------------------------------------
+        // commit stats
+
+        snprintfz(filename, FILENAME_MAX, "%s/%s/commit_stats", path, de->d_name);
+        if(!node->commit_stats_filename) node->commit_stats_filename = strdupz(filename);
+        if(unlikely(collect_btrfs_commits_stats(node, update_every))){
+            collector_error("BTRFS: failed to collect commit stats for '%s'", node->id);
+            btrfs_free_commits_stats(node);
+        }
+
+        // --------------------------------------------------------------------
+        // find all disks related to this node
+        // and collect their sizes
+
+        snprintfz(filename, FILENAME_MAX, "%s/%s/devices", path, de->d_name);
+        find_btrfs_disks(node, filename);
+
+        // --------------------------------------------------------------------
+        // find all devices related to this node
+
+        snprintfz(filename, FILENAME_MAX, "%s/%s/devinfo", path, de->d_name);
+        find_btrfs_devices(node, filename);
+
+        // --------------------------------------------------------------------
+        // link it
+
+        // collector_info("BTRFS: linking '%s'", node->id);
+        node->next = nodes;
+        nodes = node;
+    }
+    closedir(dir);
+
+
+    // ------------------------------------------------------------------------
+    // cleanup
+
+    BTRFS_NODE *last = NULL;
+    node = nodes;
+
+    while(node) {
+        if(unlikely(!node->exists)) {
+            if(unlikely(nodes == node)) {
+                nodes = node->next;
+                btrfs_free_node(node);
+                node = nodes;
+                last = NULL;
+            }
+            else {
+                last->next = node->next;
+                btrfs_free_node(node);
+                node = last->next;
+            }
+
+            continue;
+        }
+
+        last = node;
+        node = node->next;
+    }
+
+    return 0;
+}
+
+static void add_labels_to_btrfs(BTRFS_NODE *n, RRDSET *st) {
+    rrdlabels_add(st->rrdlabels, "filesystem_uuid", n->id, RRDLABEL_SRC_AUTO);
+    rrdlabels_add(st->rrdlabels, "filesystem_label", n->label, RRDLABEL_SRC_AUTO);
+}
+
+int do_sys_fs_btrfs(int update_every, usec_t dt) {
+    static int initialized = 0
+        , do_allocation_disks
= CONFIG_BOOLEAN_AUTO + , do_allocation_system = CONFIG_BOOLEAN_AUTO + , do_allocation_data = CONFIG_BOOLEAN_AUTO + , do_allocation_metadata = CONFIG_BOOLEAN_AUTO + , do_commit_stats = CONFIG_BOOLEAN_AUTO + , do_error_stats = CONFIG_BOOLEAN_AUTO; + + static usec_t refresh_delta = 0, refresh_every = 60 * USEC_PER_SEC; + static char *btrfs_path = NULL; + + (void)dt; + + if(unlikely(!initialized)) { + initialized = 1; + + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/fs/btrfs"); + btrfs_path = config_get("plugin:proc:/sys/fs/btrfs", "path to monitor", filename); + + refresh_every = config_get_number("plugin:proc:/sys/fs/btrfs", "check for btrfs changes every", refresh_every / USEC_PER_SEC) * USEC_PER_SEC; + refresh_delta = refresh_every; + + do_allocation_disks = config_get_boolean_ondemand("plugin:proc:/sys/fs/btrfs", "physical disks allocation", do_allocation_disks); + do_allocation_data = config_get_boolean_ondemand("plugin:proc:/sys/fs/btrfs", "data allocation", do_allocation_data); + do_allocation_metadata = config_get_boolean_ondemand("plugin:proc:/sys/fs/btrfs", "metadata allocation", do_allocation_metadata); + do_allocation_system = config_get_boolean_ondemand("plugin:proc:/sys/fs/btrfs", "system allocation", do_allocation_system); + do_commit_stats = config_get_boolean_ondemand("plugin:proc:/sys/fs/btrfs", "commit stats", do_commit_stats); + do_error_stats = config_get_boolean_ondemand("plugin:proc:/sys/fs/btrfs", "error stats", do_error_stats); + } + + refresh_delta += dt; + if(refresh_delta >= refresh_every) { + refresh_delta = 0; + find_all_btrfs_pools(btrfs_path, update_every); + } + + BTRFS_NODE *node; + for(node = nodes; node ; node = node->next) { + // -------------------------------------------------------------------- + // allocation/system + + #define collect_btrfs_allocation_field(FIELD) \ + read_single_number_file(node->allocation_ ## FIELD ## _filename, &node->allocation_ ## 
FIELD) + + #define collect_btrfs_allocation_section_field(SECTION, FIELD) \ + read_single_number_file(node->allocation_ ## SECTION ## _ ## FIELD ## _filename, &node->allocation_ ## SECTION ## _ ## FIELD) + + if(do_allocation_disks != CONFIG_BOOLEAN_NO) { + if( collect_btrfs_allocation_section_field(data, disk_total) != 0 + || collect_btrfs_allocation_section_field(data, disk_used) != 0 + || collect_btrfs_allocation_section_field(metadata, disk_total) != 0 + || collect_btrfs_allocation_section_field(metadata, disk_used) != 0 + || collect_btrfs_allocation_section_field(system, disk_total) != 0 + || collect_btrfs_allocation_section_field(system, disk_used) != 0) { + collector_error("BTRFS: failed to collect physical disks allocation for '%s'", node->id); + // make it refresh btrfs at the next iteration + refresh_delta = refresh_every; + continue; + } + } + + if(do_allocation_data != CONFIG_BOOLEAN_NO) { + if (collect_btrfs_allocation_section_field(data, total_bytes) != 0 + || collect_btrfs_allocation_section_field(data, bytes_used) != 0) { + collector_error("BTRFS: failed to collect allocation/data for '%s'", node->id); + // make it refresh btrfs at the next iteration + refresh_delta = refresh_every; + continue; + } + } + + if(do_allocation_metadata != CONFIG_BOOLEAN_NO) { + if (collect_btrfs_allocation_section_field(metadata, total_bytes) != 0 + || collect_btrfs_allocation_section_field(metadata, bytes_used) != 0 + || collect_btrfs_allocation_field(global_rsv_size) != 0 + ) { + collector_error("BTRFS: failed to collect allocation/metadata for '%s'", node->id); + // make it refresh btrfs at the next iteration + refresh_delta = refresh_every; + continue; + } + } + + if(do_allocation_system != CONFIG_BOOLEAN_NO) { + if (collect_btrfs_allocation_section_field(system, total_bytes) != 0 + || collect_btrfs_allocation_section_field(system, bytes_used) != 0) { + collector_error("BTRFS: failed to collect allocation/system for '%s'", node->id); + // make it refresh btrfs at the 
next iteration + refresh_delta = refresh_every; + continue; + } + } + + if(do_commit_stats != CONFIG_BOOLEAN_NO && node->commit_stats_filename) { + if (unlikely(collect_btrfs_commits_stats(node, update_every))) { + collector_error("BTRFS: failed to collect commit stats for '%s'", node->id); + btrfs_free_commits_stats(node); + } + } + + if(do_error_stats != CONFIG_BOOLEAN_NO) { + for(BTRFS_DEVICE *d = node->devices ; d ; d = d->next) { + if(unlikely(collect_btrfs_error_stats(d))){ + collector_error("BTRFS: failed to collect error stats for '%s', devid:'%d'", node->id, d->id); + /* make it refresh btrfs at the next iteration, + * btrfs_free_device(d) will be called in + * find_btrfs_devices() as part of the garbage collection */ + refresh_delta = refresh_every; + } + } + } + + // -------------------------------------------------------------------- + // allocation/disks + + if(do_allocation_disks == CONFIG_BOOLEAN_YES || (do_allocation_disks == CONFIG_BOOLEAN_AUTO && + ((node->all_disks_total && node->allocation_data_disk_total) || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_allocation_disks = CONFIG_BOOLEAN_YES; + + if(unlikely(!node->st_allocation_disks)) { + char id[RRD_ID_LENGTH_MAX + 1], name[RRD_ID_LENGTH_MAX + 1], title[200 + 1]; + + snprintfz(id, RRD_ID_LENGTH_MAX, "disk_%s", node->id); + snprintfz(name, RRD_ID_LENGTH_MAX, "disk_%s", node->label); + snprintfz(title, sizeof(title) - 1, "BTRFS Physical Disk Allocation"); + + netdata_fix_chart_id(id); + netdata_fix_chart_name(name); + + node->st_allocation_disks = rrdset_create_localhost( + "btrfs" + , id + , name + , node->label + , "btrfs.disk" + , title + , "MiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_BTRFS_NAME + , NETDATA_CHART_PRIO_BTRFS_DISK + , update_every + , RRDSET_TYPE_STACKED + ); + + node->rd_allocation_disks_unallocated = rrddim_add(node->st_allocation_disks, "unallocated", NULL, 1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE); + node->rd_allocation_disks_data_free = 
rrddim_add(node->st_allocation_disks, "data_free", "data free", 1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE); + node->rd_allocation_disks_data_used = rrddim_add(node->st_allocation_disks, "data_used", "data used", 1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE); + node->rd_allocation_disks_metadata_free = rrddim_add(node->st_allocation_disks, "meta_free", "meta free", 1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE); + node->rd_allocation_disks_metadata_used = rrddim_add(node->st_allocation_disks, "meta_used", "meta used", 1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE); + node->rd_allocation_disks_system_free = rrddim_add(node->st_allocation_disks, "sys_free", "sys free", 1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE); + node->rd_allocation_disks_system_used = rrddim_add(node->st_allocation_disks, "sys_used", "sys used", 1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE); + + add_labels_to_btrfs(node, node->st_allocation_disks); + } + + // unsigned long long disk_used = node->allocation_data_disk_used + node->allocation_metadata_disk_used + node->allocation_system_disk_used; + unsigned long long disk_total = node->allocation_data_disk_total + node->allocation_metadata_disk_total + node->allocation_system_disk_total; + unsigned long long disk_unallocated = node->all_disks_total - disk_total; + + rrddim_set_by_pointer(node->st_allocation_disks, node->rd_allocation_disks_unallocated, disk_unallocated); + rrddim_set_by_pointer(node->st_allocation_disks, node->rd_allocation_disks_data_used, node->allocation_data_disk_used); + rrddim_set_by_pointer(node->st_allocation_disks, node->rd_allocation_disks_data_free, node->allocation_data_disk_total - node->allocation_data_disk_used); + rrddim_set_by_pointer(node->st_allocation_disks, node->rd_allocation_disks_metadata_used, node->allocation_metadata_disk_used); + rrddim_set_by_pointer(node->st_allocation_disks, node->rd_allocation_disks_metadata_free, node->allocation_metadata_disk_total - node->allocation_metadata_disk_used); + 
rrddim_set_by_pointer(node->st_allocation_disks, node->rd_allocation_disks_system_used, node->allocation_system_disk_used); + rrddim_set_by_pointer(node->st_allocation_disks, node->rd_allocation_disks_system_free, node->allocation_system_disk_total - node->allocation_system_disk_used); + rrdset_done(node->st_allocation_disks); + } + + + // -------------------------------------------------------------------- + // allocation/data + + if(do_allocation_data == CONFIG_BOOLEAN_YES || (do_allocation_data == CONFIG_BOOLEAN_AUTO && + (node->allocation_data_total_bytes || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_allocation_data = CONFIG_BOOLEAN_YES; + + if(unlikely(!node->st_allocation_data)) { + char id[RRD_ID_LENGTH_MAX + 1], name[RRD_ID_LENGTH_MAX + 1], title[200 + 1]; + + snprintfz(id, RRD_ID_LENGTH_MAX, "data_%s", node->id); + snprintfz(name, RRD_ID_LENGTH_MAX, "data_%s", node->label); + snprintfz(title, sizeof(title) - 1, "BTRFS Data Allocation"); + + netdata_fix_chart_id(id); + netdata_fix_chart_name(name); + + node->st_allocation_data = rrdset_create_localhost( + "btrfs" + , id + , name + , node->label + , "btrfs.data" + , title + , "MiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_BTRFS_NAME + , NETDATA_CHART_PRIO_BTRFS_DATA + , update_every + , RRDSET_TYPE_STACKED + ); + + node->rd_allocation_data_free = rrddim_add(node->st_allocation_data, "free", NULL, 1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE); + node->rd_allocation_data_used = rrddim_add(node->st_allocation_data, "used", NULL, 1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE); + + add_labels_to_btrfs(node, node->st_allocation_data); + } + + rrddim_set_by_pointer(node->st_allocation_data, node->rd_allocation_data_free, node->allocation_data_total_bytes - node->allocation_data_bytes_used); + rrddim_set_by_pointer(node->st_allocation_data, node->rd_allocation_data_used, node->allocation_data_bytes_used); + rrdset_done(node->st_allocation_data); + } + + // 
-------------------------------------------------------------------- + // allocation/metadata + + if(do_allocation_metadata == CONFIG_BOOLEAN_YES || (do_allocation_metadata == CONFIG_BOOLEAN_AUTO && + (node->allocation_metadata_total_bytes || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_allocation_metadata = CONFIG_BOOLEAN_YES; + + if(unlikely(!node->st_allocation_metadata)) { + char id[RRD_ID_LENGTH_MAX + 1], name[RRD_ID_LENGTH_MAX + 1], title[200 + 1]; + + snprintfz(id, RRD_ID_LENGTH_MAX, "metadata_%s", node->id); + snprintfz(name, RRD_ID_LENGTH_MAX, "metadata_%s", node->label); + snprintfz(title, sizeof(title) - 1, "BTRFS Metadata Allocation"); + + netdata_fix_chart_id(id); + netdata_fix_chart_name(name); + + node->st_allocation_metadata = rrdset_create_localhost( + "btrfs" + , id + , name + , node->label + , "btrfs.metadata" + , title + , "MiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_BTRFS_NAME + , NETDATA_CHART_PRIO_BTRFS_METADATA + , update_every + , RRDSET_TYPE_STACKED + ); + + node->rd_allocation_metadata_free = rrddim_add(node->st_allocation_metadata, "free", NULL, 1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE); + node->rd_allocation_metadata_used = rrddim_add(node->st_allocation_metadata, "used", NULL, 1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE); + node->rd_allocation_metadata_reserved = rrddim_add(node->st_allocation_metadata, "reserved", NULL, 1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE); + + add_labels_to_btrfs(node, node->st_allocation_metadata); + } + + rrddim_set_by_pointer(node->st_allocation_metadata, node->rd_allocation_metadata_free, node->allocation_metadata_total_bytes - node->allocation_metadata_bytes_used - node->allocation_global_rsv_size); + rrddim_set_by_pointer(node->st_allocation_metadata, node->rd_allocation_metadata_used, node->allocation_metadata_bytes_used); + rrddim_set_by_pointer(node->st_allocation_metadata, node->rd_allocation_metadata_reserved, node->allocation_global_rsv_size); + 
rrdset_done(node->st_allocation_metadata); + } + + // -------------------------------------------------------------------- + // allocation/system + + if(do_allocation_system == CONFIG_BOOLEAN_YES || (do_allocation_system == CONFIG_BOOLEAN_AUTO && + (node->allocation_system_total_bytes || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_allocation_system = CONFIG_BOOLEAN_YES; + + if(unlikely(!node->st_allocation_system)) { + char id[RRD_ID_LENGTH_MAX + 1], name[RRD_ID_LENGTH_MAX + 1], title[200 + 1]; + + snprintfz(id, RRD_ID_LENGTH_MAX, "system_%s", node->id); + snprintfz(name, RRD_ID_LENGTH_MAX, "system_%s", node->label); + snprintfz(title, sizeof(title) - 1, "BTRFS System Allocation"); + + netdata_fix_chart_id(id); + netdata_fix_chart_name(name); + + node->st_allocation_system = rrdset_create_localhost( + "btrfs" + , id + , name + , node->label + , "btrfs.system" + , title + , "MiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_BTRFS_NAME + , NETDATA_CHART_PRIO_BTRFS_SYSTEM + , update_every + , RRDSET_TYPE_STACKED + ); + + node->rd_allocation_system_free = rrddim_add(node->st_allocation_system, "free", NULL, 1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE); + node->rd_allocation_system_used = rrddim_add(node->st_allocation_system, "used", NULL, 1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE); + + add_labels_to_btrfs(node, node->st_allocation_system); + } + + rrddim_set_by_pointer(node->st_allocation_system, node->rd_allocation_system_free, node->allocation_system_total_bytes - node->allocation_system_bytes_used); + rrddim_set_by_pointer(node->st_allocation_system, node->rd_allocation_system_used, node->allocation_system_bytes_used); + rrdset_done(node->st_allocation_system); + } + + // -------------------------------------------------------------------- + // commit_stats + + if(do_commit_stats == CONFIG_BOOLEAN_YES || (do_commit_stats == CONFIG_BOOLEAN_AUTO && + (node->commits_total || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_commit_stats = 
CONFIG_BOOLEAN_YES; + + if(unlikely(!node->st_commits)) { + char id[RRD_ID_LENGTH_MAX + 1], name[RRD_ID_LENGTH_MAX + 1], title[200 + 1]; + + snprintfz(id, RRD_ID_LENGTH_MAX, "commits_%s", node->id); + snprintfz(name, RRD_ID_LENGTH_MAX, "commits_%s", node->label); + snprintfz(title, sizeof(title) - 1, "BTRFS Commits"); + + netdata_fix_chart_id(id); + netdata_fix_chart_name(name); + + node->st_commits = rrdset_create_localhost( + "btrfs" + , id + , name + , node->label + , "btrfs.commits" + , title + , "commits" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_BTRFS_NAME + , NETDATA_CHART_PRIO_BTRFS_COMMITS + , update_every + , RRDSET_TYPE_LINE + ); + + node->rd_commits = rrddim_add(node->st_commits, "commits", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + add_labels_to_btrfs(node, node->st_commits); + } + + rrddim_set_by_pointer(node->st_commits, node->rd_commits, node->commits_new); + rrdset_done(node->st_commits); + + if(unlikely(!node->st_commits_percentage_time)) { + char id[RRD_ID_LENGTH_MAX + 1], name[RRD_ID_LENGTH_MAX + 1], title[200 + 1]; + + snprintfz(id, RRD_ID_LENGTH_MAX, "commits_perc_time_%s", node->id); + snprintfz(name, RRD_ID_LENGTH_MAX, "commits_perc_time_%s", node->label); + snprintfz(title, sizeof(title) - 1, "BTRFS Commits Time Share"); + + netdata_fix_chart_id(id); + netdata_fix_chart_name(name); + + node->st_commits_percentage_time = rrdset_create_localhost( + "btrfs" + , id + , name + , node->label + , "btrfs.commits_perc_time" + , title + , "percentage" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_BTRFS_NAME + , NETDATA_CHART_PRIO_BTRFS_COMMITS_PERC_TIME + , update_every + , RRDSET_TYPE_LINE + ); + + node->rd_commits_percentage_time = rrddim_add(node->st_commits_percentage_time, "commits", NULL, 1, 100, RRD_ALGORITHM_ABSOLUTE); + + add_labels_to_btrfs(node, node->st_commits_percentage_time); + } + + rrddim_set_by_pointer(node->st_commits_percentage_time, node->rd_commits_percentage_time, node->commits_percentage_time); + 
rrdset_done(node->st_commits_percentage_time); + + + if(unlikely(!node->st_commit_timings)) { + char id[RRD_ID_LENGTH_MAX + 1], name[RRD_ID_LENGTH_MAX + 1], title[200 + 1]; + + snprintfz(id, RRD_ID_LENGTH_MAX, "commit_timings_%s", node->id); + snprintfz(name, RRD_ID_LENGTH_MAX, "commit_timings_%s", node->label); + snprintfz(title, sizeof(title) - 1, "BTRFS Commit Timings"); + + netdata_fix_chart_id(id); + netdata_fix_chart_name(name); + + node->st_commit_timings = rrdset_create_localhost( + "btrfs" + , id + , name + , node->label + , "btrfs.commit_timings" + , title + , "ms" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_BTRFS_NAME + , NETDATA_CHART_PRIO_BTRFS_COMMIT_TIMINGS + , update_every + , RRDSET_TYPE_LINE + ); + + node->rd_commit_timings_last = rrddim_add(node->st_commit_timings, "last", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + node->rd_commit_timings_max = rrddim_add(node->st_commit_timings, "max", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + add_labels_to_btrfs(node, node->st_commit_timings); + } + + rrddim_set_by_pointer(node->st_commit_timings, node->rd_commit_timings_last, node->commit_timings_last); + rrddim_set_by_pointer(node->st_commit_timings, node->rd_commit_timings_max, node->commit_timings_max); + rrdset_done(node->st_commit_timings); + } + + // -------------------------------------------------------------------- + // error_stats per device + + if(do_error_stats == CONFIG_BOOLEAN_YES || (do_error_stats == CONFIG_BOOLEAN_AUTO && + (node->devices || + netdata_zero_metrics_enabled == CONFIG_BOOLEAN_YES))) { + do_error_stats = CONFIG_BOOLEAN_YES; + + for(BTRFS_DEVICE *d = node->devices ; d ; d = d->next) { + + if(unlikely(!d->st_error_stats)) { + char id[RRD_ID_LENGTH_MAX + 1], name[RRD_ID_LENGTH_MAX + 1], title[200 + 1]; + + snprintfz(id, RRD_ID_LENGTH_MAX, "device_errors_dev%d_%s", d->id, node->id); + snprintfz(name, RRD_ID_LENGTH_MAX, "device_errors_dev%d_%s", d->id, node->label); + snprintfz(title, sizeof(title) - 1, "BTRFS Device Errors"); + + 
netdata_fix_chart_id(id); + netdata_fix_chart_name(name); + + d->st_error_stats = rrdset_create_localhost( + "btrfs" + , id + , name + , node->label + , "btrfs.device_errors" + , title + , "errors" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_BTRFS_NAME + , NETDATA_CHART_PRIO_BTRFS_ERRORS + , update_every + , RRDSET_TYPE_LINE + ); + + char rd_id[RRD_ID_LENGTH_MAX + 1]; + snprintfz(rd_id, RRD_ID_LENGTH_MAX, "write_errs"); + d->rd_write_errs = rrddim_add(d->st_error_stats, rd_id, NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + snprintfz(rd_id, RRD_ID_LENGTH_MAX, "read_errs"); + d->rd_read_errs = rrddim_add(d->st_error_stats, rd_id, NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + snprintfz(rd_id, RRD_ID_LENGTH_MAX, "flush_errs"); + d->rd_flush_errs = rrddim_add(d->st_error_stats, rd_id, NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + snprintfz(rd_id, RRD_ID_LENGTH_MAX, "corruption_errs"); + d->rd_corruption_errs = rrddim_add(d->st_error_stats, rd_id, NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + snprintfz(rd_id, RRD_ID_LENGTH_MAX, "generation_errs"); + d->rd_generation_errs = rrddim_add(d->st_error_stats, rd_id, NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE); + + char dev_id[5]; + snprintfz(dev_id, 4, "%d", d->id); + rrdlabels_add(d->st_error_stats->rrdlabels, "device_id", dev_id, RRDLABEL_SRC_AUTO); + add_labels_to_btrfs(node, d->st_error_stats); + } + + rrddim_set_by_pointer(d->st_error_stats, d->rd_write_errs, d->write_errs); + rrddim_set_by_pointer(d->st_error_stats, d->rd_read_errs, d->read_errs); + rrddim_set_by_pointer(d->st_error_stats, d->rd_flush_errs, d->flush_errs); + rrddim_set_by_pointer(d->st_error_stats, d->rd_corruption_errs, d->corruption_errs); + rrddim_set_by_pointer(d->st_error_stats, d->rd_generation_errs, d->generation_errs); + + rrdset_done(d->st_error_stats); + } + } + } + + return 0; +} + diff --git a/src/collectors/proc.plugin/sys_kernel_mm_ksm.c b/src/collectors/proc.plugin/sys_kernel_mm_ksm.c new file mode 100644 index 000000000..8f43acc93 --- /dev/null +++ 
b/src/collectors/proc.plugin/sys_kernel_mm_ksm.c @@ -0,0 +1,194 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "plugin_proc.h" + +#define PLUGIN_PROC_MODULE_KSM_NAME "/sys/kernel/mm/ksm" + +typedef struct ksm_name_value { + char filename[FILENAME_MAX + 1]; + unsigned long long value; +} KSM_NAME_VALUE; + +#define PAGES_SHARED 0 +#define PAGES_SHARING 1 +#define PAGES_UNSHARED 2 +#define PAGES_VOLATILE 3 +// #define PAGES_TO_SCAN 4 + +KSM_NAME_VALUE values[] = { + [PAGES_SHARED] = { "/sys/kernel/mm/ksm/pages_shared", 0ULL }, + [PAGES_SHARING] = { "/sys/kernel/mm/ksm/pages_sharing", 0ULL }, + [PAGES_UNSHARED] = { "/sys/kernel/mm/ksm/pages_unshared", 0ULL }, + [PAGES_VOLATILE] = { "/sys/kernel/mm/ksm/pages_volatile", 0ULL }, + // [PAGES_TO_SCAN] = { "/sys/kernel/mm/ksm/pages_to_scan", 0ULL }, +}; + +int do_sys_kernel_mm_ksm(int update_every, usec_t dt) { + (void)dt; + static procfile *ff_pages_shared = NULL, *ff_pages_sharing = NULL, *ff_pages_unshared = NULL, *ff_pages_volatile = NULL/*, *ff_pages_to_scan = NULL*/; + static unsigned long page_size = 0; + + if(unlikely(page_size == 0)) + page_size = (unsigned long)sysconf(_SC_PAGESIZE); + + if(unlikely(!ff_pages_shared)) { + snprintfz(values[PAGES_SHARED].filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/kernel/mm/ksm/pages_shared"); + snprintfz(values[PAGES_SHARED].filename, FILENAME_MAX, "%s", config_get("plugin:proc:/sys/kernel/mm/ksm", "/sys/kernel/mm/ksm/pages_shared", values[PAGES_SHARED].filename)); + ff_pages_shared = procfile_open(values[PAGES_SHARED].filename, " \t:", PROCFILE_FLAG_DEFAULT); + } + + if(unlikely(!ff_pages_sharing)) { + snprintfz(values[PAGES_SHARING].filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/kernel/mm/ksm/pages_sharing"); + snprintfz(values[PAGES_SHARING].filename, FILENAME_MAX, "%s", config_get("plugin:proc:/sys/kernel/mm/ksm", "/sys/kernel/mm/ksm/pages_sharing", values[PAGES_SHARING].filename)); + ff_pages_sharing = 
procfile_open(values[PAGES_SHARING].filename, " \t:", PROCFILE_FLAG_DEFAULT); + } + + if(unlikely(!ff_pages_unshared)) { + snprintfz(values[PAGES_UNSHARED].filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/kernel/mm/ksm/pages_unshared"); + snprintfz(values[PAGES_UNSHARED].filename, FILENAME_MAX, "%s", config_get("plugin:proc:/sys/kernel/mm/ksm", "/sys/kernel/mm/ksm/pages_unshared", values[PAGES_UNSHARED].filename)); + ff_pages_unshared = procfile_open(values[PAGES_UNSHARED].filename, " \t:", PROCFILE_FLAG_DEFAULT); + } + + if(unlikely(!ff_pages_volatile)) { + snprintfz(values[PAGES_VOLATILE].filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/kernel/mm/ksm/pages_volatile"); + snprintfz(values[PAGES_VOLATILE].filename, FILENAME_MAX, "%s", config_get("plugin:proc:/sys/kernel/mm/ksm", "/sys/kernel/mm/ksm/pages_volatile", values[PAGES_VOLATILE].filename)); + ff_pages_volatile = procfile_open(values[PAGES_VOLATILE].filename, " \t:", PROCFILE_FLAG_DEFAULT); + } + + //if(unlikely(!ff_pages_to_scan)) { + // snprintfz(values[PAGES_TO_SCAN].filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/kernel/mm/ksm/pages_to_scan"); + // snprintfz(values[PAGES_TO_SCAN].filename, FILENAME_MAX, "%s", config_get("plugin:proc:/sys/kernel/mm/ksm", "/sys/kernel/mm/ksm/pages_to_scan", values[PAGES_TO_SCAN].filename)); + // ff_pages_to_scan = procfile_open(values[PAGES_TO_SCAN].filename, " \t:", PROCFILE_FLAG_DEFAULT); + //} + + if(unlikely(!ff_pages_shared || !ff_pages_sharing || !ff_pages_unshared || !ff_pages_volatile /*|| !ff_pages_to_scan */)) + return 1; + + unsigned long long pages_shared = 0, pages_sharing = 0, pages_unshared = 0, pages_volatile = 0, /*pages_to_scan = 0,*/ offered = 0, saved = 0; + + ff_pages_shared = procfile_readall(ff_pages_shared); + if(unlikely(!ff_pages_shared)) return 0; // we return 0, so that we will retry to open it next time + pages_shared = str2ull(procfile_lineword(ff_pages_shared, 0, 0), NULL); + + 
ff_pages_sharing = procfile_readall(ff_pages_sharing); + if(unlikely(!ff_pages_sharing)) return 0; // we return 0, so that we will retry to open it next time + pages_sharing = str2ull(procfile_lineword(ff_pages_sharing, 0, 0), NULL); + + ff_pages_unshared = procfile_readall(ff_pages_unshared); + if(unlikely(!ff_pages_unshared)) return 0; // we return 0, so that we will retry to open it next time + pages_unshared = str2ull(procfile_lineword(ff_pages_unshared, 0, 0), NULL); + + ff_pages_volatile = procfile_readall(ff_pages_volatile); + if(unlikely(!ff_pages_volatile)) return 0; // we return 0, so that we will retry to open it next time + pages_volatile = str2ull(procfile_lineword(ff_pages_volatile, 0, 0), NULL); + + //ff_pages_to_scan = procfile_readall(ff_pages_to_scan); + //if(unlikely(!ff_pages_to_scan)) return 0; // we return 0, so that we will retry to open it next time + //pages_to_scan = str2ull(procfile_lineword(ff_pages_to_scan, 0, 0)); + + offered = pages_sharing + pages_shared + pages_unshared + pages_volatile; + saved = pages_sharing; + + if(unlikely(!offered /*|| !pages_to_scan*/ && netdata_zero_metrics_enabled == CONFIG_BOOLEAN_NO)) return 0; + + // -------------------------------------------------------------------- + + { + static RRDSET *st_mem_ksm = NULL; + static RRDDIM *rd_shared = NULL, *rd_unshared = NULL, *rd_sharing = NULL, *rd_volatile = NULL/*, *rd_to_scan = NULL*/; + + if (unlikely(!st_mem_ksm)) { + st_mem_ksm = rrdset_create_localhost( + "mem" + , "ksm" + , NULL + , "ksm" + , NULL + , "Kernel Same Page Merging" + , "MiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_KSM_NAME + , NETDATA_CHART_PRIO_MEM_KSM + , update_every + , RRDSET_TYPE_STACKED + ); + + rd_shared = rrddim_add(st_mem_ksm, "shared", NULL, 1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE); + rd_unshared = rrddim_add(st_mem_ksm, "unshared", NULL, -1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE); + rd_sharing = rrddim_add(st_mem_ksm, "sharing", NULL, 1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE); + 
rd_volatile = rrddim_add(st_mem_ksm, "volatile", NULL, -1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE); + //rd_to_scan = rrddim_add(st_mem_ksm, "to_scan", "to scan", -1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st_mem_ksm, rd_shared, pages_shared * page_size); + rrddim_set_by_pointer(st_mem_ksm, rd_unshared, pages_unshared * page_size); + rrddim_set_by_pointer(st_mem_ksm, rd_sharing, pages_sharing * page_size); + rrddim_set_by_pointer(st_mem_ksm, rd_volatile, pages_volatile * page_size); + //rrddim_set_by_pointer(st_mem_ksm, rd_to_scan, pages_to_scan * page_size); + + rrdset_done(st_mem_ksm); + } + + // -------------------------------------------------------------------- + + { + static RRDSET *st_mem_ksm_savings = NULL; + static RRDDIM *rd_savings = NULL, *rd_offered = NULL; + + if (unlikely(!st_mem_ksm_savings)) { + st_mem_ksm_savings = rrdset_create_localhost( + "mem" + , "ksm_savings" + , NULL + , "ksm" + , NULL + , "Kernel Same Page Merging Savings" + , "MiB" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_KSM_NAME + , NETDATA_CHART_PRIO_MEM_KSM_SAVINGS + , update_every + , RRDSET_TYPE_AREA + ); + + rd_savings = rrddim_add(st_mem_ksm_savings, "savings", NULL, -1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE); + rd_offered = rrddim_add(st_mem_ksm_savings, "offered", NULL, 1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st_mem_ksm_savings, rd_savings, saved * page_size); + rrddim_set_by_pointer(st_mem_ksm_savings, rd_offered, offered * page_size); + + rrdset_done(st_mem_ksm_savings); + } + + // -------------------------------------------------------------------- + + { + static RRDSET *st_mem_ksm_ratios = NULL; + static RRDDIM *rd_savings = NULL; + + if (unlikely(!st_mem_ksm_ratios)) { + st_mem_ksm_ratios = rrdset_create_localhost( + "mem" + , "ksm_ratios" + , NULL + , "ksm" + , NULL + , "Kernel Same Page Merging Effectiveness" + , "percentage" + , PLUGIN_PROC_NAME + , PLUGIN_PROC_MODULE_KSM_NAME + , NETDATA_CHART_PRIO_MEM_KSM_RATIOS 
+ , update_every + , RRDSET_TYPE_LINE + ); + + rd_savings = rrddim_add(st_mem_ksm_ratios, "savings", NULL, 1, 10000, RRD_ALGORITHM_ABSOLUTE); + } + + rrddim_set_by_pointer(st_mem_ksm_ratios, rd_savings, offered ? (saved * 1000000) / offered : 0); + rrdset_done(st_mem_ksm_ratios); + } + + return 0; +} diff --git a/src/collectors/proc.plugin/zfs_common.c b/src/collectors/proc.plugin/zfs_common.c new file mode 100644 index 000000000..cca0ae0e6 --- /dev/null +++ b/src/collectors/proc.plugin/zfs_common.c @@ -0,0 +1,960 @@ +// SPDX-License-Identifier: GPL-3.0-or-later + +#include "zfs_common.h" + +struct arcstats arcstats = { 0 }; + +void generate_charts_arcstats(const char *plugin, const char *module, int show_zero_charts, int update_every) { + static int do_arc_size = -1, do_l2_size = -1, do_reads = -1, do_l2bytes = -1, do_ahits = -1, do_dhits = -1, \ + do_phits = -1, do_mhits = -1, do_l2hits = -1, do_list_hits = -1; + + if(unlikely(do_arc_size == -1)) + do_arc_size = do_l2_size = do_reads = do_l2bytes = do_ahits = do_dhits = do_phits = do_mhits \ + = do_l2hits = do_list_hits = show_zero_charts; + + // ARC reads + unsigned long long aread = arcstats.hits + arcstats.misses; + + // Demand reads + unsigned long long dhit = arcstats.demand_data_hits + arcstats.demand_metadata_hits; + unsigned long long dmiss = arcstats.demand_data_misses + arcstats.demand_metadata_misses; + unsigned long long dread = dhit + dmiss; + + // Prefetch reads + unsigned long long phit = arcstats.prefetch_data_hits + arcstats.prefetch_metadata_hits; + unsigned long long pmiss = arcstats.prefetch_data_misses + arcstats.prefetch_metadata_misses; + unsigned long long pread = phit + pmiss; + + // Metadata reads + unsigned long long mhit = arcstats.prefetch_metadata_hits + arcstats.demand_metadata_hits; + unsigned long long mmiss = arcstats.prefetch_metadata_misses + arcstats.demand_metadata_misses; + unsigned long long mread = mhit + mmiss; + + // l2 reads + unsigned long long l2hit = 
arcstats.l2_hits;
+    unsigned long long l2miss = arcstats.l2_misses;
+    unsigned long long l2read = l2hit + l2miss;
+
+    // --------------------------------------------------------------------
+
+    if(do_arc_size == CONFIG_BOOLEAN_YES || arcstats.size || arcstats.c || arcstats.c_min || arcstats.c_max) {
+        do_arc_size = CONFIG_BOOLEAN_YES;
+
+        static RRDSET *st_arc_size = NULL;
+        static RRDDIM *rd_arc_size = NULL;
+        static RRDDIM *rd_arc_target_size = NULL;
+        static RRDDIM *rd_arc_target_min_size = NULL;
+        static RRDDIM *rd_arc_target_max_size = NULL;
+
+        if (unlikely(!st_arc_size)) {
+            st_arc_size = rrdset_create_localhost(
+                    "zfs"
+                    , "arc_size"
+                    , NULL
+                    , ZFS_FAMILY_SIZE
+                    , NULL
+                    , "ZFS ARC Size"
+                    , "MiB"
+                    , plugin
+                    , module
+                    , NETDATA_CHART_PRIO_ZFS_ARC_SIZE
+                    , update_every
+                    , RRDSET_TYPE_AREA
+            );
+
+            rd_arc_size            = rrddim_add(st_arc_size, "size",   "arcsz",            1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE);
+            rd_arc_target_size     = rrddim_add(st_arc_size, "target", NULL,               1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE);
+            rd_arc_target_min_size = rrddim_add(st_arc_size, "min",    "min (hard limit)", 1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE);
+            rd_arc_target_max_size = rrddim_add(st_arc_size, "max",    "max (high water)", 1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE);
+        }
+
+        rrddim_set_by_pointer(st_arc_size, rd_arc_size, arcstats.size);
+        rrddim_set_by_pointer(st_arc_size, rd_arc_target_size, arcstats.c);
+        rrddim_set_by_pointer(st_arc_size, rd_arc_target_min_size, arcstats.c_min);
+        rrddim_set_by_pointer(st_arc_size, rd_arc_target_max_size, arcstats.c_max);
+        rrdset_done(st_arc_size);
+    }
+
+    // --------------------------------------------------------------------
+
+    if(likely(arcstats.l2exist) && (do_l2_size == CONFIG_BOOLEAN_YES || arcstats.l2_size || arcstats.l2_asize)) {
+        do_l2_size = CONFIG_BOOLEAN_YES;
+
+        static RRDSET *st_l2_size = NULL;
+        static RRDDIM *rd_l2_size = NULL;
+        static RRDDIM *rd_l2_asize = NULL;
+
+        if (unlikely(!st_l2_size)) {
+            st_l2_size = rrdset_create_localhost(
+                    "zfs"
+                    , "l2_size"
+                    , NULL
+                    , ZFS_FAMILY_SIZE
+                    , NULL
+                    , "ZFS L2 ARC Size"
+                    , "MiB"
+                    , plugin
+                    , module
+                    , NETDATA_CHART_PRIO_ZFS_L2_SIZE
+                    , update_every
+                    , RRDSET_TYPE_AREA
+            );
+
+            rd_l2_asize = rrddim_add(st_l2_size, "actual", NULL, 1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE);
+            rd_l2_size  = rrddim_add(st_l2_size, "size",   NULL, 1, 1024 * 1024, RRD_ALGORITHM_ABSOLUTE);
+        }
+
+        rrddim_set_by_pointer(st_l2_size, rd_l2_size, arcstats.l2_size);
+        rrddim_set_by_pointer(st_l2_size, rd_l2_asize, arcstats.l2_asize);
+        rrdset_done(st_l2_size);
+    }
+
+    // --------------------------------------------------------------------
+
+    if(likely(do_reads == CONFIG_BOOLEAN_YES || aread || dread || pread || mread || l2read)) {
+        do_reads = CONFIG_BOOLEAN_YES;
+
+        static RRDSET *st_reads = NULL;
+        static RRDDIM *rd_aread = NULL;
+        static RRDDIM *rd_dread = NULL;
+        static RRDDIM *rd_pread = NULL;
+        static RRDDIM *rd_mread = NULL;
+        static RRDDIM *rd_l2read = NULL;
+
+        if (unlikely(!st_reads)) {
+            st_reads = rrdset_create_localhost(
+                    "zfs"
+                    , "reads"
+                    , NULL
+                    , ZFS_FAMILY_ACCESSES
+                    , NULL
+                    , "ZFS Reads"
+                    , "reads/s"
+                    , plugin
+                    , module
+                    , NETDATA_CHART_PRIO_ZFS_READS
+                    , update_every
+                    , RRDSET_TYPE_AREA
+            );
+
+            rd_aread = rrddim_add(st_reads, "areads", "arc",      1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_dread = rrddim_add(st_reads, "dreads", "demand",   1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_pread = rrddim_add(st_reads, "preads", "prefetch", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_mread = rrddim_add(st_reads, "mreads", "metadata", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+
+            if(arcstats.l2exist)
+                rd_l2read = rrddim_add(st_reads, "l2reads", "l2", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st_reads, rd_aread, aread);
+        rrddim_set_by_pointer(st_reads, rd_dread, dread);
+        rrddim_set_by_pointer(st_reads, rd_pread, pread);
+        rrddim_set_by_pointer(st_reads, rd_mread, mread);
+
+        if(arcstats.l2exist)
+            rrddim_set_by_pointer(st_reads, rd_l2read, l2read);
+
+        rrdset_done(st_reads);
+    }
+
+    // --------------------------------------------------------------------
+
+    if(likely(arcstats.l2exist && (do_l2bytes == CONFIG_BOOLEAN_YES || arcstats.l2_read_bytes || arcstats.l2_write_bytes))) {
+        do_l2bytes = CONFIG_BOOLEAN_YES;
+
+        static RRDSET *st_l2bytes = NULL;
+        static RRDDIM *rd_l2_read_bytes = NULL;
+        static RRDDIM *rd_l2_write_bytes = NULL;
+
+        if (unlikely(!st_l2bytes)) {
+            st_l2bytes = rrdset_create_localhost(
+                    "zfs"
+                    , "bytes"
+                    , NULL
+                    , ZFS_FAMILY_ACCESSES
+                    , NULL
+                    , "ZFS ARC L2 Read/Write Rate"
+                    , "KiB/s"
+                    , plugin
+                    , module
+                    , NETDATA_CHART_PRIO_ZFS_IO
+                    , update_every
+                    , RRDSET_TYPE_AREA
+            );
+
+            rd_l2_read_bytes  = rrddim_add(st_l2bytes, "read",  NULL,  1, 1024, RRD_ALGORITHM_INCREMENTAL);
+            rd_l2_write_bytes = rrddim_add(st_l2bytes, "write", NULL, -1, 1024, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st_l2bytes, rd_l2_read_bytes, arcstats.l2_read_bytes);
+        rrddim_set_by_pointer(st_l2bytes, rd_l2_write_bytes, arcstats.l2_write_bytes);
+        rrdset_done(st_l2bytes);
+    }
+
+    // --------------------------------------------------------------------
+
+    if(likely(do_ahits == CONFIG_BOOLEAN_YES || arcstats.hits || arcstats.misses)) {
+        do_ahits = CONFIG_BOOLEAN_YES;
+
+        static RRDSET *st_ahits = NULL;
+        static RRDDIM *rd_ahits = NULL;
+        static RRDDIM *rd_amisses = NULL;
+
+        if (unlikely(!st_ahits)) {
+            st_ahits = rrdset_create_localhost(
+                    "zfs"
+                    , "hits"
+                    , NULL
+                    , ZFS_FAMILY_EFFICIENCY
+                    , NULL
+                    , "ZFS ARC Hits"
+                    , "percentage"
+                    , plugin
+                    , module
+                    , NETDATA_CHART_PRIO_ZFS_HITS
+                    , update_every
+                    , RRDSET_TYPE_STACKED
+            );
+
+            rd_ahits   = rrddim_add(st_ahits, "hits",   NULL, 1, 1, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL);
+            rd_amisses = rrddim_add(st_ahits, "misses", NULL, 1, 1, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL);
+        }
+
+        rrddim_set_by_pointer(st_ahits, rd_ahits, arcstats.hits);
+        rrddim_set_by_pointer(st_ahits, rd_amisses, arcstats.misses);
+        rrdset_done(st_ahits);
+
+        static RRDSET *st_ahits_rate = NULL;
+        static RRDDIM *rd_ahits_rate = NULL;
+        static RRDDIM *rd_amisses_rate = NULL;
+
+        if (unlikely(!st_ahits_rate)) {
+            st_ahits_rate = rrdset_create_localhost(
+                    "zfs"
+                    , "hits_rate"
+                    , NULL
+                    , ZFS_FAMILY_EFFICIENCY
+                    , NULL
+                    , "ZFS ARC Hits Rate"
+                    , "events/s"
+                    , plugin
+                    , module
+                    , NETDATA_CHART_PRIO_ZFS_HITS + 1
+                    , update_every
+                    , RRDSET_TYPE_STACKED
+            );
+
+            rd_ahits_rate   = rrddim_add(st_ahits_rate, "hits",   NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_amisses_rate = rrddim_add(st_ahits_rate, "misses", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st_ahits_rate, rd_ahits_rate, arcstats.hits);
+        rrddim_set_by_pointer(st_ahits_rate, rd_amisses_rate, arcstats.misses);
+        rrdset_done(st_ahits_rate);
+    }
+
+    // --------------------------------------------------------------------
+
+    if(likely(do_dhits == CONFIG_BOOLEAN_YES || dhit || dmiss)) {
+        do_dhits = CONFIG_BOOLEAN_YES;
+
+        static RRDSET *st_dhits = NULL;
+        static RRDDIM *rd_dhits = NULL;
+        static RRDDIM *rd_dmisses = NULL;
+
+        if (unlikely(!st_dhits)) {
+            st_dhits = rrdset_create_localhost(
+                    "zfs"
+                    , "dhits"
+                    , NULL
+                    , ZFS_FAMILY_EFFICIENCY
+                    , NULL
+                    , "ZFS Demand Hits"
+                    , "percentage"
+                    , plugin
+                    , module
+                    , NETDATA_CHART_PRIO_ZFS_DHITS
+                    , update_every
+                    , RRDSET_TYPE_STACKED
+            );
+
+            rd_dhits   = rrddim_add(st_dhits, "hits",   NULL, 1, 1, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL);
+            rd_dmisses = rrddim_add(st_dhits, "misses", NULL, 1, 1, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL);
+        }
+
+        rrddim_set_by_pointer(st_dhits, rd_dhits, dhit);
+        rrddim_set_by_pointer(st_dhits, rd_dmisses, dmiss);
+        rrdset_done(st_dhits);
+
+        static RRDSET *st_dhits_rate = NULL;
+        static RRDDIM *rd_dhits_rate = NULL;
+        static RRDDIM *rd_dmisses_rate = NULL;
+
+        if (unlikely(!st_dhits_rate)) {
+            st_dhits_rate = rrdset_create_localhost(
+                    "zfs"
+                    , "dhits_rate"
+                    , NULL
+                    , ZFS_FAMILY_EFFICIENCY
+                    , NULL
+                    , "ZFS Demand Hits Rate"
+                    , "events/s"
+                    , plugin
+                    , module
+                    , NETDATA_CHART_PRIO_ZFS_DHITS + 1
+                    , update_every
+                    , RRDSET_TYPE_STACKED
+            );
+
+            rd_dhits_rate   = rrddim_add(st_dhits_rate, "hits",   NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_dmisses_rate = rrddim_add(st_dhits_rate, "misses", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st_dhits_rate, rd_dhits_rate, dhit);
+        rrddim_set_by_pointer(st_dhits_rate, rd_dmisses_rate, dmiss);
+        rrdset_done(st_dhits_rate);
+    }
+
+    // --------------------------------------------------------------------
+
+    if(likely(do_phits == CONFIG_BOOLEAN_YES || phit || pmiss)) {
+        do_phits = CONFIG_BOOLEAN_YES;
+
+        static RRDSET *st_phits = NULL;
+        static RRDDIM *rd_phits = NULL;
+        static RRDDIM *rd_pmisses = NULL;
+
+        if (unlikely(!st_phits)) {
+            st_phits = rrdset_create_localhost(
+                    "zfs"
+                    , "phits"
+                    , NULL
+                    , ZFS_FAMILY_EFFICIENCY
+                    , NULL
+                    , "ZFS Prefetch Hits"
+                    , "percentage"
+                    , plugin
+                    , module
+                    , NETDATA_CHART_PRIO_ZFS_PHITS
+                    , update_every
+                    , RRDSET_TYPE_STACKED
+            );
+
+            rd_phits   = rrddim_add(st_phits, "hits",   NULL, 1, 1, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL);
+            rd_pmisses = rrddim_add(st_phits, "misses", NULL, 1, 1, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL);
+        }
+
+        rrddim_set_by_pointer(st_phits, rd_phits, phit);
+        rrddim_set_by_pointer(st_phits, rd_pmisses, pmiss);
+        rrdset_done(st_phits);
+
+        static RRDSET *st_phits_rate = NULL;
+        static RRDDIM *rd_phits_rate = NULL;
+        static RRDDIM *rd_pmisses_rate = NULL;
+
+        if (unlikely(!st_phits_rate)) {
+            st_phits_rate = rrdset_create_localhost(
+                    "zfs"
+                    , "phits_rate"
+                    , NULL
+                    , ZFS_FAMILY_EFFICIENCY
+                    , NULL
+                    , "ZFS Prefetch Hits Rate"
+                    , "events/s"
+                    , plugin
+                    , module
+                    , NETDATA_CHART_PRIO_ZFS_PHITS + 1
+                    , update_every
+                    , RRDSET_TYPE_STACKED
+            );
+
+            rd_phits_rate   = rrddim_add(st_phits_rate, "hits",   NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_pmisses_rate = rrddim_add(st_phits_rate, "misses", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st_phits_rate, rd_phits_rate, phit);
+        rrddim_set_by_pointer(st_phits_rate, rd_pmisses_rate, pmiss);
+        rrdset_done(st_phits_rate);
+    }
+
+    // --------------------------------------------------------------------
+
+    if(likely(do_mhits == CONFIG_BOOLEAN_YES || mhit || mmiss)) {
+        do_mhits = CONFIG_BOOLEAN_YES;
+
+        static RRDSET *st_mhits = NULL;
+        static RRDDIM *rd_mhits = NULL;
+        static RRDDIM *rd_mmisses = NULL;
+
+        if (unlikely(!st_mhits)) {
+            st_mhits = rrdset_create_localhost(
+                    "zfs"
+                    , "mhits"
+                    , NULL
+                    , ZFS_FAMILY_EFFICIENCY
+                    , NULL
+                    , "ZFS Metadata Hits"
+                    , "percentage"
+                    , plugin
+                    , module
+                    , NETDATA_CHART_PRIO_ZFS_MHITS
+                    , update_every
+                    , RRDSET_TYPE_STACKED
+            );
+
+            rd_mhits   = rrddim_add(st_mhits, "hits",   NULL, 1, 1, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL);
+            rd_mmisses = rrddim_add(st_mhits, "misses", NULL, 1, 1, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL);
+        }
+
+        rrddim_set_by_pointer(st_mhits, rd_mhits, mhit);
+        rrddim_set_by_pointer(st_mhits, rd_mmisses, mmiss);
+        rrdset_done(st_mhits);
+
+        static RRDSET *st_mhits_rate = NULL;
+        static RRDDIM *rd_mhits_rate = NULL;
+        static RRDDIM *rd_mmisses_rate = NULL;
+
+        if (unlikely(!st_mhits_rate)) {
+            st_mhits_rate = rrdset_create_localhost(
+                    "zfs"
+                    , "mhits_rate"
+                    , NULL
+                    , ZFS_FAMILY_EFFICIENCY
+                    , NULL
+                    , "ZFS Metadata Hits Rate"
+                    , "events/s"
+                    , plugin
+                    , module
+                    , NETDATA_CHART_PRIO_ZFS_MHITS + 1
+                    , update_every
+                    , RRDSET_TYPE_STACKED
+            );
+
+            rd_mhits_rate   = rrddim_add(st_mhits_rate, "hits",   NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_mmisses_rate = rrddim_add(st_mhits_rate, "misses", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st_mhits_rate, rd_mhits_rate, mhit);
+        rrddim_set_by_pointer(st_mhits_rate, rd_mmisses_rate, mmiss);
+        rrdset_done(st_mhits_rate);
+    }
+
+    // --------------------------------------------------------------------
+
+    if(likely(arcstats.l2exist && (do_l2hits == CONFIG_BOOLEAN_YES || l2hit || l2miss))) {
+        do_l2hits = CONFIG_BOOLEAN_YES;
+
+        static RRDSET *st_l2hits = NULL;
+        static RRDDIM *rd_l2hits = NULL;
+        static RRDDIM *rd_l2misses = NULL;
+
+        if (unlikely(!st_l2hits)) {
+            st_l2hits = rrdset_create_localhost(
+                    "zfs"
+                    , "l2hits"
+                    , NULL
+                    , ZFS_FAMILY_EFFICIENCY
+                    , NULL
+                    , "ZFS L2 Hits"
+                    , "percentage"
+                    , plugin
+                    , module
+                    , NETDATA_CHART_PRIO_ZFS_L2HITS
+                    , update_every
+                    , RRDSET_TYPE_STACKED
+            );
+
+            rd_l2hits   = rrddim_add(st_l2hits, "hits",   NULL, 1, 1, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL);
+            rd_l2misses = rrddim_add(st_l2hits, "misses", NULL, 1, 1, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL);
+        }
+
+        rrddim_set_by_pointer(st_l2hits, rd_l2hits, l2hit);
+        rrddim_set_by_pointer(st_l2hits, rd_l2misses, l2miss);
+        rrdset_done(st_l2hits);
+
+        static RRDSET *st_l2hits_rate = NULL;
+        static RRDDIM *rd_l2hits_rate = NULL;
+        static RRDDIM *rd_l2misses_rate = NULL;
+
+        if (unlikely(!st_l2hits_rate)) {
+            st_l2hits_rate = rrdset_create_localhost(
+                    "zfs"
+                    , "l2hits_rate"
+                    , NULL
+                    , ZFS_FAMILY_EFFICIENCY
+                    , NULL
+                    , "ZFS L2 Hits Rate"
+                    , "events/s"
+                    , plugin
+                    , module
+                    , NETDATA_CHART_PRIO_ZFS_L2HITS + 1
+                    , update_every
+                    , RRDSET_TYPE_STACKED
+            );
+
+            rd_l2hits_rate   = rrddim_add(st_l2hits_rate, "hits",   NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_l2misses_rate = rrddim_add(st_l2hits_rate, "misses", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st_l2hits_rate, rd_l2hits_rate, l2hit);
+        rrddim_set_by_pointer(st_l2hits_rate, rd_l2misses_rate, l2miss);
+        rrdset_done(st_l2hits_rate);
+    }
+
+    // --------------------------------------------------------------------
+
+    if(likely(do_list_hits == CONFIG_BOOLEAN_YES || arcstats.mfu_hits \
+              || arcstats.mru_hits \
+              || arcstats.mfu_ghost_hits \
+              || arcstats.mru_ghost_hits)) {
+        do_list_hits = CONFIG_BOOLEAN_YES;
+
+        static RRDSET *st_list_hits = NULL;
+        static RRDDIM *rd_mfu = NULL;
+        static RRDDIM *rd_mru = NULL;
+        static RRDDIM *rd_mfug = NULL;
+        static RRDDIM *rd_mrug = NULL;
+
+        if (unlikely(!st_list_hits)) {
+            st_list_hits = rrdset_create_localhost(
+                    "zfs"
+                    , "list_hits"
+                    , NULL
+                    , ZFS_FAMILY_EFFICIENCY
+                    , NULL
+                    , "ZFS List Hits"
+                    , "hits/s"
+                    , plugin
+                    , module
+                    , NETDATA_CHART_PRIO_ZFS_LIST_HITS
+                    , update_every
+                    , RRDSET_TYPE_AREA
+            );
+
+            rd_mfu  = rrddim_add(st_list_hits, "mfu",  NULL,        1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_mfug = rrddim_add(st_list_hits, "mfug", "mfu ghost", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_mru  = rrddim_add(st_list_hits, "mru",  NULL,        1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_mrug = rrddim_add(st_list_hits, "mrug", "mru ghost", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st_list_hits, rd_mfu, arcstats.mfu_hits);
+        rrddim_set_by_pointer(st_list_hits, rd_mru, arcstats.mru_hits);
+        rrddim_set_by_pointer(st_list_hits, rd_mfug, arcstats.mfu_ghost_hits);
+        rrddim_set_by_pointer(st_list_hits, rd_mrug, arcstats.mru_ghost_hits);
+        rrdset_done(st_list_hits);
+    }
+}
+
+void generate_charts_arc_summary(const char *plugin, const char *module, int show_zero_charts, int update_every) {
+    static int do_arc_size_breakdown = -1, do_memory = -1, do_important_ops = -1, do_actual_hits = -1, \
+               do_demand_data_hits = -1, do_prefetch_data_hits = -1, do_hash_elements = -1, do_hash_chains = -1;
+
+    if(unlikely(do_arc_size_breakdown == -1))
+        do_arc_size_breakdown = do_memory = do_important_ops = do_actual_hits = do_demand_data_hits \
+            = do_prefetch_data_hits = do_hash_elements = do_hash_chains = show_zero_charts;
+
+    unsigned long long arc_accesses_total = arcstats.hits + arcstats.misses;
+    unsigned long long real_hits = arcstats.mfu_hits + arcstats.mru_hits;
+    unsigned long long real_misses = arc_accesses_total - real_hits;
+
+    //unsigned long long anon_hits = arcstats.hits - (arcstats.mfu_hits + arcstats.mru_hits + arcstats.mfu_ghost_hits + arcstats.mru_ghost_hits);
+
+    unsigned long long arc_size = arcstats.size;
+    unsigned long long mru_size = arcstats.p;
+    //unsigned long long target_min_size = arcstats.c_min;
+    //unsigned long long target_max_size = arcstats.c_max;
+    unsigned long long target_size = arcstats.c;
+    //unsigned long long target_size_ratio = (target_max_size / target_min_size);
+
+    // `p` is the adaptive target size of the MRU list, so the MFU share is
+    // whatever remains of the ARC (or of the target size `c`, while the ARC
+    // is still below target).
+    unsigned long long mfu_size;
+    if(arc_size > target_size)
+        mfu_size = arc_size - mru_size;
+    else
+        mfu_size = target_size - mru_size;
+
+    // --------------------------------------------------------------------
+
+    if(likely(do_arc_size_breakdown == CONFIG_BOOLEAN_YES || mru_size || mfu_size)) {
+        do_arc_size_breakdown = CONFIG_BOOLEAN_YES;
+
+        static RRDSET *st_arc_size_breakdown = NULL;
+        static RRDDIM *rd_most_recent = NULL;
+        static RRDDIM *rd_most_frequent = NULL;
+
+        if (unlikely(!st_arc_size_breakdown)) {
+            st_arc_size_breakdown = rrdset_create_localhost(
+                    "zfs"
+                    , "arc_size_breakdown"
+                    , NULL
+                    , ZFS_FAMILY_EFFICIENCY
+                    , NULL
+                    , "ZFS ARC Size Breakdown"
+                    , "percentage"
+                    , plugin
+                    , module
+                    , NETDATA_CHART_PRIO_ZFS_ARC_SIZE_BREAKDOWN
+                    , update_every
+                    , RRDSET_TYPE_STACKED
+            );
+
+            rd_most_recent   = rrddim_add(st_arc_size_breakdown, "recent",   NULL, 1, 1, RRD_ALGORITHM_PCENT_OVER_ROW_TOTAL);
+            rd_most_frequent = rrddim_add(st_arc_size_breakdown, "frequent", NULL, 1, 1, RRD_ALGORITHM_PCENT_OVER_ROW_TOTAL);
+        }
+
+        rrddim_set_by_pointer(st_arc_size_breakdown, rd_most_recent, mru_size);
+        rrddim_set_by_pointer(st_arc_size_breakdown, rd_most_frequent, mfu_size);
+        rrdset_done(st_arc_size_breakdown);
+    }
+
+    // --------------------------------------------------------------------
+
+    if(likely(do_memory == CONFIG_BOOLEAN_YES || arcstats.memory_direct_count \
+              || arcstats.memory_throttle_count \
+              || arcstats.memory_indirect_count)) {
+        do_memory = CONFIG_BOOLEAN_YES;
+
+        static RRDSET *st_memory = NULL;
+#ifndef __FreeBSD__
+        static RRDDIM *rd_direct = NULL;
+#endif
+        static RRDDIM *rd_throttled = NULL;
+#ifndef __FreeBSD__
+        static RRDDIM *rd_indirect = NULL;
+#endif
+
+        if (unlikely(!st_memory)) {
+            st_memory = rrdset_create_localhost(
+                    "zfs"
+                    , "memory_ops"
+                    , NULL
+                    , ZFS_FAMILY_OPERATIONS
+                    , NULL
+                    , "ZFS Memory Operations"
+                    , "operations/s"
+                    , plugin
+                    , module
+                    , NETDATA_CHART_PRIO_ZFS_MEMORY_OPS
+                    , update_every
+                    , RRDSET_TYPE_LINE
+            );
+
+#ifndef __FreeBSD__
+            rd_direct = rrddim_add(st_memory, "direct", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+#endif
+            rd_throttled = rrddim_add(st_memory, "throttled", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+#ifndef __FreeBSD__
+            rd_indirect = rrddim_add(st_memory, "indirect", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+#endif
+        }
+
+#ifndef __FreeBSD__
+        rrddim_set_by_pointer(st_memory, rd_direct, arcstats.memory_direct_count);
+#endif
+        rrddim_set_by_pointer(st_memory, rd_throttled, arcstats.memory_throttle_count);
+#ifndef __FreeBSD__
+        rrddim_set_by_pointer(st_memory, rd_indirect, arcstats.memory_indirect_count);
+#endif
+        rrdset_done(st_memory);
+    }
+
+    // --------------------------------------------------------------------
+
+    if(likely(do_important_ops == CONFIG_BOOLEAN_YES || arcstats.deleted \
+              || arcstats.evict_skip \
+              || arcstats.mutex_miss \
+              || arcstats.hash_collisions)) {
+        do_important_ops = CONFIG_BOOLEAN_YES;
+
+        static RRDSET *st_important_ops = NULL;
+        static RRDDIM *rd_deleted = NULL;
+        static RRDDIM *rd_mutex_misses = NULL;
+        static RRDDIM *rd_evict_skips = NULL;
+        static RRDDIM *rd_hash_collisions = NULL;
+
+        if (unlikely(!st_important_ops)) {
+            st_important_ops = rrdset_create_localhost(
+                    "zfs"
+                    , "important_ops"
+                    , NULL
+                    , ZFS_FAMILY_OPERATIONS
+                    , NULL
+                    , "ZFS Important Operations"
+                    , "operations/s"
+                    , plugin
+                    , module
+                    , NETDATA_CHART_PRIO_ZFS_IMPORTANT_OPS
+                    , update_every
+                    , RRDSET_TYPE_LINE
+            );
+
+            rd_evict_skips     = rrddim_add(st_important_ops, "eskip",           "evict skip",      1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_deleted         = rrddim_add(st_important_ops, "deleted",         NULL,              1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_mutex_misses    = rrddim_add(st_important_ops, "mtxmis",          "mutex miss",      1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_hash_collisions = rrddim_add(st_important_ops, "hash_collisions", "hash collisions", 1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st_important_ops, rd_deleted, arcstats.deleted);
+        rrddim_set_by_pointer(st_important_ops, rd_evict_skips, arcstats.evict_skip);
+        rrddim_set_by_pointer(st_important_ops, rd_mutex_misses, arcstats.mutex_miss);
+        rrddim_set_by_pointer(st_important_ops, rd_hash_collisions, arcstats.hash_collisions);
+        rrdset_done(st_important_ops);
+    }
+
+    // --------------------------------------------------------------------
+
+    if(likely(do_actual_hits == CONFIG_BOOLEAN_YES || real_hits || real_misses)) {
+        do_actual_hits = CONFIG_BOOLEAN_YES;
+
+        static RRDSET *st_actual_hits = NULL;
+        static RRDDIM *rd_actual_hits = NULL;
+        static RRDDIM *rd_actual_misses = NULL;
+
+        if (unlikely(!st_actual_hits)) {
+            st_actual_hits = rrdset_create_localhost(
+                    "zfs"
+                    , "actual_hits"
+                    , NULL
+                    , ZFS_FAMILY_EFFICIENCY
+                    , NULL
+                    , "ZFS Actual Cache Hits"
+                    , "percentage"
+                    , plugin
+                    , module
+                    , NETDATA_CHART_PRIO_ZFS_ACTUAL_HITS
+                    , update_every
+                    , RRDSET_TYPE_STACKED
+            );
+
+            rd_actual_hits   = rrddim_add(st_actual_hits, "hits",   NULL, 1, 1, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL);
+            rd_actual_misses = rrddim_add(st_actual_hits, "misses", NULL, 1, 1, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL);
+        }
+
+        rrddim_set_by_pointer(st_actual_hits, rd_actual_hits, real_hits);
+        rrddim_set_by_pointer(st_actual_hits, rd_actual_misses, real_misses);
+        rrdset_done(st_actual_hits);
+
+        static RRDSET *st_actual_hits_rate = NULL;
+        static RRDDIM *rd_actual_hits_rate = NULL;
+        static RRDDIM *rd_actual_misses_rate = NULL;
+
+        if (unlikely(!st_actual_hits_rate)) {
+            st_actual_hits_rate = rrdset_create_localhost(
+                    "zfs"
+                    , "actual_hits_rate"
+                    , NULL
+                    , ZFS_FAMILY_EFFICIENCY
+                    , NULL
+                    , "ZFS Actual Cache Hits Rate"
+                    , "events/s"
+                    , plugin
+                    , module
+                    , NETDATA_CHART_PRIO_ZFS_ACTUAL_HITS + 1
+                    , update_every
+                    , RRDSET_TYPE_STACKED
+            );
+
+            rd_actual_hits_rate   = rrddim_add(st_actual_hits_rate, "hits",   NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_actual_misses_rate = rrddim_add(st_actual_hits_rate, "misses", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st_actual_hits_rate, rd_actual_hits_rate, real_hits);
+        rrddim_set_by_pointer(st_actual_hits_rate, rd_actual_misses_rate, real_misses);
+        rrdset_done(st_actual_hits_rate);
+    }
+
+    // --------------------------------------------------------------------
+
+    if(likely(do_demand_data_hits == CONFIG_BOOLEAN_YES || arcstats.demand_data_hits || arcstats.demand_data_misses)) {
+        do_demand_data_hits = CONFIG_BOOLEAN_YES;
+
+        static RRDSET *st_demand_data_hits = NULL;
+        static RRDDIM *rd_demand_data_hits = NULL;
+        static RRDDIM *rd_demand_data_misses = NULL;
+
+        if (unlikely(!st_demand_data_hits)) {
+            st_demand_data_hits = rrdset_create_localhost(
+                    "zfs"
+                    , "demand_data_hits"
+                    , NULL
+                    , ZFS_FAMILY_EFFICIENCY
+                    , NULL
+                    , "ZFS Data Demand Efficiency"
+                    , "percentage"
+                    , plugin
+                    , module
+                    , NETDATA_CHART_PRIO_ZFS_DEMAND_DATA_HITS
+                    , update_every
+                    , RRDSET_TYPE_STACKED
+            );
+
+            rd_demand_data_hits   = rrddim_add(st_demand_data_hits, "hits",   NULL, 1, 1, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL);
+            rd_demand_data_misses = rrddim_add(st_demand_data_hits, "misses", NULL, 1, 1, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL);
+        }
+
+        rrddim_set_by_pointer(st_demand_data_hits, rd_demand_data_hits, arcstats.demand_data_hits);
+        rrddim_set_by_pointer(st_demand_data_hits, rd_demand_data_misses, arcstats.demand_data_misses);
+        rrdset_done(st_demand_data_hits);
+
+        static RRDSET *st_demand_data_hits_rate = NULL;
+        static RRDDIM *rd_demand_data_hits_rate = NULL;
+        static RRDDIM *rd_demand_data_misses_rate = NULL;
+
+        if (unlikely(!st_demand_data_hits_rate)) {
+            st_demand_data_hits_rate = rrdset_create_localhost(
+                    "zfs"
+                    , "demand_data_hits_rate"
+                    , NULL
+                    , ZFS_FAMILY_EFFICIENCY
+                    , NULL
+                    , "ZFS Data Demand Efficiency Rate"
+                    , "events/s"
+                    , plugin
+                    , module
+                    , NETDATA_CHART_PRIO_ZFS_DEMAND_DATA_HITS + 1
+                    , update_every
+                    , RRDSET_TYPE_STACKED
+            );
+
+            rd_demand_data_hits_rate   = rrddim_add(st_demand_data_hits_rate, "hits",   NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_demand_data_misses_rate = rrddim_add(st_demand_data_hits_rate, "misses", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st_demand_data_hits_rate, rd_demand_data_hits_rate, arcstats.demand_data_hits);
+        rrddim_set_by_pointer(st_demand_data_hits_rate, rd_demand_data_misses_rate, arcstats.demand_data_misses);
+        rrdset_done(st_demand_data_hits_rate);
+    }
+
+    // --------------------------------------------------------------------
+
+    if(likely(do_prefetch_data_hits == CONFIG_BOOLEAN_YES || arcstats.prefetch_data_hits \
+              || arcstats.prefetch_data_misses)) {
+        do_prefetch_data_hits = CONFIG_BOOLEAN_YES;
+
+        static RRDSET *st_prefetch_data_hits = NULL;
+        static RRDDIM *rd_prefetch_data_hits = NULL;
+        static RRDDIM *rd_prefetch_data_misses = NULL;
+
+        if (unlikely(!st_prefetch_data_hits)) {
+            st_prefetch_data_hits = rrdset_create_localhost(
+                    "zfs"
+                    , "prefetch_data_hits"
+                    , NULL
+                    , ZFS_FAMILY_EFFICIENCY
+                    , NULL
+                    , "ZFS Data Prefetch Efficiency"
+                    , "percentage"
+                    , plugin
+                    , module
+                    , NETDATA_CHART_PRIO_ZFS_PREFETCH_DATA_HITS
+                    , update_every
+                    , RRDSET_TYPE_STACKED
+            );
+
+            rd_prefetch_data_hits   = rrddim_add(st_prefetch_data_hits, "hits",   NULL, 1, 1, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL);
+            rd_prefetch_data_misses = rrddim_add(st_prefetch_data_hits, "misses", NULL, 1, 1, RRD_ALGORITHM_PCENT_OVER_DIFF_TOTAL);
+        }
+
+        rrddim_set_by_pointer(st_prefetch_data_hits, rd_prefetch_data_hits, arcstats.prefetch_data_hits);
+        rrddim_set_by_pointer(st_prefetch_data_hits, rd_prefetch_data_misses, arcstats.prefetch_data_misses);
+        rrdset_done(st_prefetch_data_hits);
+
+        static RRDSET *st_prefetch_data_hits_rate = NULL;
+        static RRDDIM *rd_prefetch_data_hits_rate = NULL;
+        static RRDDIM *rd_prefetch_data_misses_rate = NULL;
+
+        if (unlikely(!st_prefetch_data_hits_rate)) {
+            st_prefetch_data_hits_rate = rrdset_create_localhost(
+                    "zfs"
+                    , "prefetch_data_hits_rate"
+                    , NULL
+                    , ZFS_FAMILY_EFFICIENCY
+                    , NULL
+                    , "ZFS Data Prefetch Efficiency Rate"
+                    , "events/s"
+                    , plugin
+                    , module
+                    , NETDATA_CHART_PRIO_ZFS_PREFETCH_DATA_HITS + 1
+                    , update_every
+                    , RRDSET_TYPE_STACKED
+            );
+
+            rd_prefetch_data_hits_rate   = rrddim_add(st_prefetch_data_hits_rate, "hits",   NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+            rd_prefetch_data_misses_rate = rrddim_add(st_prefetch_data_hits_rate, "misses", NULL, 1, 1, RRD_ALGORITHM_INCREMENTAL);
+        }
+
+        rrddim_set_by_pointer(st_prefetch_data_hits_rate, rd_prefetch_data_hits_rate, arcstats.prefetch_data_hits);
+        rrddim_set_by_pointer(st_prefetch_data_hits_rate, rd_prefetch_data_misses_rate, arcstats.prefetch_data_misses);
+        rrdset_done(st_prefetch_data_hits_rate);
+    }
+
+    // --------------------------------------------------------------------
+
+    if(likely(do_hash_elements == CONFIG_BOOLEAN_YES || arcstats.hash_elements || arcstats.hash_elements_max)) {
+        do_hash_elements = CONFIG_BOOLEAN_YES;
+
+        static RRDSET *st_hash_elements = NULL;
+        static RRDDIM *rd_hash_elements_current = NULL;
+        static RRDDIM *rd_hash_elements_max = NULL;
+
+        if (unlikely(!st_hash_elements)) {
+            st_hash_elements = rrdset_create_localhost(
+                    "zfs"
+                    , "hash_elements"
+                    , NULL
+                    , ZFS_FAMILY_HASH
+                    , NULL
+                    , "ZFS ARC Hash Elements"
+                    , "elements"
+                    , plugin
+                    , module
+                    , NETDATA_CHART_PRIO_ZFS_HASH_ELEMENTS
+                    , update_every
+                    , RRDSET_TYPE_LINE
+            );
+
+            rd_hash_elements_current = rrddim_add(st_hash_elements, "current", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE);
+            rd_hash_elements_max     = rrddim_add(st_hash_elements, "max",     NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE);
+        }
+
+        rrddim_set_by_pointer(st_hash_elements, rd_hash_elements_current, arcstats.hash_elements);
+        rrddim_set_by_pointer(st_hash_elements, rd_hash_elements_max, arcstats.hash_elements_max);
+        rrdset_done(st_hash_elements);
+    }
+
+    // --------------------------------------------------------------------
+
+    if(likely(do_hash_chains == CONFIG_BOOLEAN_YES || arcstats.hash_chains || arcstats.hash_chain_max)) {
+        do_hash_chains = CONFIG_BOOLEAN_YES;
+
+        static RRDSET *st_hash_chains = NULL;
+        static RRDDIM *rd_hash_chains_current = NULL;
+        static RRDDIM *rd_hash_chains_max = NULL;
+
+        if (unlikely(!st_hash_chains)) {
+            st_hash_chains = rrdset_create_localhost(
+                    "zfs"
+                    , "hash_chains"
+                    , NULL
+                    , ZFS_FAMILY_HASH
+                    , NULL
+                    , "ZFS ARC Hash Chains"
+                    , "chains"
+                    , plugin
+                    , module
+                    , NETDATA_CHART_PRIO_ZFS_HASH_CHAINS
+                    , update_every
+                    , RRDSET_TYPE_LINE
+            );
+
+            rd_hash_chains_current = rrddim_add(st_hash_chains, "current", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE);
+            rd_hash_chains_max     = rrddim_add(st_hash_chains, "max",     NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE);
+        }
+
+        rrddim_set_by_pointer(st_hash_chains, rd_hash_chains_current, arcstats.hash_chains);
+        rrddim_set_by_pointer(st_hash_chains, rd_hash_chains_max, arcstats.hash_chain_max);
+        rrdset_done(st_hash_chains);
+    }
+
+    // --------------------------------------------------------------------
+
+}
diff --git a/src/collectors/proc.plugin/zfs_common.h b/src/collectors/proc.plugin/zfs_common.h
new file mode 100644
index 000000000..9d61de2f3
--- /dev/null
+++ b/src/collectors/proc.plugin/zfs_common.h
@@ -0,0 +1,115 @@
+// SPDX-License-Identifier: GPL-3.0-or-later
+
+#ifndef NETDATA_ZFS_COMMON_H
+#define NETDATA_ZFS_COMMON_H 1
+
+#include "daemon/common.h"
+
+#define ZFS_FAMILY_SIZE "size"
+#define ZFS_FAMILY_EFFICIENCY "efficiency"
+#define ZFS_FAMILY_ACCESSES "accesses"
+#define ZFS_FAMILY_OPERATIONS "operations"
+#define ZFS_FAMILY_HASH "hashes"
+
+struct arcstats {
+    // values
+    unsigned long long hits;
+    unsigned long long misses;
+    unsigned long long demand_data_hits;
+    unsigned long long demand_data_misses;
+    unsigned long long demand_metadata_hits;
+    unsigned long long demand_metadata_misses;
+    unsigned long long prefetch_data_hits;
+    unsigned long long prefetch_data_misses;
+    unsigned long long prefetch_metadata_hits;
+    unsigned long long prefetch_metadata_misses;
+    unsigned long long mru_hits;
+    unsigned long long mru_ghost_hits;
+    unsigned long long mfu_hits;
+    unsigned long long mfu_ghost_hits;
+    unsigned long long deleted;
+    unsigned long long mutex_miss;
+    unsigned long long evict_skip;
+    unsigned long long evict_not_enough;
+    unsigned long long evict_l2_cached;
+    unsigned long long evict_l2_eligible;
+    unsigned long long evict_l2_ineligible;
+    unsigned long long evict_l2_skip;
+    unsigned long long hash_elements;
+    unsigned long long hash_elements_max;
+    unsigned long long hash_collisions;
+    unsigned long long hash_chains;
+    unsigned long long hash_chain_max;
+    unsigned long long p;
+    unsigned long long c;
+    unsigned long long c_min;
+    unsigned long long c_max;
+    unsigned long long size;
+    unsigned long long hdr_size;
+    unsigned long long data_size;
+    unsigned long long metadata_size;
+    unsigned long long other_size;
+    unsigned long long anon_size;
+    unsigned long long anon_evictable_data;
+    unsigned long long anon_evictable_metadata;
+    unsigned long long mru_size;
+    unsigned long long mru_evictable_data;
+    unsigned long long mru_evictable_metadata;
+    unsigned long long mru_ghost_size;
+    unsigned long long mru_ghost_evictable_data;
+    unsigned long long mru_ghost_evictable_metadata;
+    unsigned long long mfu_size;
+    unsigned long long mfu_evictable_data;
+    unsigned long long mfu_evictable_metadata;
+    unsigned long long mfu_ghost_size;
+    unsigned long long mfu_ghost_evictable_data;
+    unsigned long long mfu_ghost_evictable_metadata;
+    unsigned long long l2_hits;
+    unsigned long long l2_misses;
+    unsigned long long l2_feeds;
+    unsigned long long l2_rw_clash;
+    unsigned long long l2_read_bytes;
+    unsigned long long l2_write_bytes;
+    unsigned long long l2_writes_sent;
+    unsigned long long l2_writes_done;
+    unsigned long long l2_writes_error;
+    unsigned long long l2_writes_lock_retry;
+    unsigned long long l2_evict_lock_retry;
+    unsigned long long l2_evict_reading;
+    unsigned long long l2_evict_l1cached;
+    unsigned long long l2_free_on_write;
+    unsigned long long l2_cdata_free_on_write;
+    unsigned long long l2_abort_lowmem;
+    unsigned long long l2_cksum_bad;
+    unsigned long long l2_io_error;
+    unsigned long long l2_size;
+    unsigned long long l2_asize;
+    unsigned long long l2_hdr_size;
+    unsigned long long l2_compress_successes;
+    unsigned long long l2_compress_zeros;
+    unsigned long long l2_compress_failures;
+    unsigned long long memory_throttle_count;
+    unsigned long long duplicate_buffers;
+    unsigned long long duplicate_buffers_size;
+    unsigned long long duplicate_reads;
+    unsigned long long memory_direct_count;
+    unsigned long long memory_indirect_count;
+    unsigned long long arc_no_grow;
+    unsigned long long arc_tempreserve;
+    unsigned long long arc_loaned_bytes;
+    unsigned long long arc_prune;
+    unsigned long long arc_meta_used;
+    unsigned long long arc_meta_limit;
+    unsigned long long arc_meta_max;
+    unsigned long long arc_meta_min;
+    unsigned long long arc_need_free;
+    unsigned long long arc_sys_free;
+
+    // flags
+    int l2exist;
+};
+
+void generate_charts_arcstats(const char *plugin, const char *module, int show_zero_charts, int update_every);
+void generate_charts_arc_summary(const char *plugin, const char *module, int show_zero_charts, int update_every);
+
+#endif //NETDATA_ZFS_COMMON_H