author    Daniel Baumann <daniel.baumann@progress-linux.org>  2018-12-28 14:38:58 +0000
committer Daniel Baumann <daniel.baumann@progress-linux.org>  2018-12-28 14:38:58 +0000
commit    fa4ece01aed54c9a146af868be0d3db611ded229 (patch)
tree      319cffc5f6c2abd7cce514383716153469fc6295 /collectors/proc.plugin
parent    New upstream version 1.11.0+dfsg (diff)
download  netdata-fa4ece01aed54c9a146af868be0d3db611ded229.tar.xz
          netdata-fa4ece01aed54c9a146af868be0d3db611ded229.zip

New upstream version 1.11.1+dfsg (upstream/1.11.1+dfsg)
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'collectors/proc.plugin')
-rwxr-xr-x [-rw-r--r--]  collectors/proc.plugin/README.md                  180
-rw-r--r--               collectors/proc.plugin/proc_net_dev.c              87
-rw-r--r--               collectors/proc.plugin/proc_net_stat_conntrack.c    2
-rwxr-xr-x [-rw-r--r--]  collectors/proc.plugin/proc_stat.c                 204
4 files changed, 359 insertions, 114 deletions
diff --git a/collectors/proc.plugin/README.md b/collectors/proc.plugin/README.md
index 9d444f3d0..123065655 100644..100755
--- a/collectors/proc.plugin/README.md
+++ b/collectors/proc.plugin/README.md
@@ -1,4 +1,3 @@
-
# proc.plugin
- `/proc/net/dev` (all network interfaces for all their values)
@@ -9,7 +8,7 @@
- `/proc/net/stat/nf_conntrack` (connection tracking performance)
- `/proc/net/stat/synproxy` (synproxy performance)
- `/proc/net/ip_vs/stats` (IPVS connection statistics)
- - `/proc/stat` (CPU utilization)
+ - `/proc/stat` (CPU utilization and attributes)
- `/proc/meminfo` (memory information)
- `/proc/vmstat` (system performance)
- `/proc/net/rpc/nfsd` (NFS server statistics for both v3 and v4 NFS servers)
@@ -25,7 +24,7 @@
---
-# Monitoring Disks
+## Monitoring Disks
> Live demo of disk monitoring at: **[http://london.netdata.rocks](https://registry.my-netdata.io/#menu_disk)**
@@ -33,75 +32,45 @@ Performance monitoring for Linux disks is quite complicated. The main reason is
Fortunately, the Linux kernel provides many metrics that can give deep insights into what our disks are doing. The kernel measures all these metrics on all layers of storage: **virtual disks**, **physical disks** and **partitions of disks**.
-Let's see the list of metrics provided by netdata for each of the above:
-
-### I/O bandwidth/s (kb/s)
-
-The amount of data transferred from and to the disk.
-
-### I/O operations/s
-
-The number of I/O operations completed.
-
-### Queued I/O operations
-
-The number of currently queued I/O operations. For traditional disks that execute commands one after another, one of them is being run by the disk and the rest are just waiting in a queue.
-
-### Backlog size (time in ms)
-
-The expected duration of the currently queued I/O operations.
-
-### Utilization (time percentage)
-
-The percentage of time the disk was busy with something. This is a very interesting metric, since for most disks, that execute commands sequentially, **this is the key indication of congestion**. A sequential disk that is 100% of the available time busy, has no time to do anything more, so even if the bandwidth or the number of operations executed by the disk is low, its capacity has been reached.
-
-Of course, for newer disk technologies (like fusion cards) that are capable to execute multiple commands in parallel, this metric is just meaningless.
-
-### Average I/O operation time (ms)
-
-The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.
-
-### Average I/O operation size (kb)
-
-The average amount of data of the completed I/O operations.
-
-### Average Service Time (ms)
-
-The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations. If the disk is able to execute multiple parallel operations the reporting average service time will be misleading.
-
-### Merged I/O operations/s
-
-The Linux kernel is capable of merging I/O operations. So, if two requests to read data from the disk are adjacent, the Linux kernel may merge them to one before giving them to disk. This metric measures the number of operations that have been merged by the Linux kernel.
-
-### Total I/O time
-
-The sum of the duration of all completed I/O operations. This number can exceed the interval if the disk is able to execute multiple I/O operations in parallel.
-
-### Space usage
-
-For mounted disks, netdata will provide a chart for their space, with 3 dimensions:
-
-1. free
-2. used
-3. reserved for root
-
-### inode usage
-
-For mounted disks, netdata will provide a chart for their inodes (number of file and directories), with 3 dimensions:
-
-1. free
-2. used
-3. reserved for root
-
----
-
-## disk names
+### Monitored disk metrics
+
+- I/O bandwidth/s (kb/s)
+ The amount of data transferred from and to the disk.
+- I/O operations/s
+ The number of I/O operations completed.
+- Queued I/O operations
+ The number of currently queued I/O operations. For traditional disks that execute commands one after another, one of them is being run by the disk and the rest are just waiting in a queue.
+- Backlog size (time in ms)
+ The expected duration of the currently queued I/O operations.
+- Utilization (time percentage)
+ The percentage of time the disk was busy with something. This is a very interesting metric, since for most disks, which execute commands sequentially, **this is the key indication of congestion**. A sequential disk that is busy 100% of the available time has no time to do anything more, so even if the bandwidth or the number of operations it executes is low, its capacity has been reached.
+ Of course, for newer disk technologies (like fusion cards) that are capable of executing multiple commands in parallel, this metric is meaningless.
+- Average I/O operation time (ms)
+ The average time for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.
+- Average I/O operation size (kb)
+ The average amount of data of the completed I/O operations.
+- Average Service Time (ms)
+ The average service time for completed I/O operations. This metric is calculated using the total busy time of the disk and the number of completed operations (see the sketch after this list). If the disk is able to execute multiple operations in parallel, the reported average service time will be misleading.
+- Merged I/O operations/s
+ The Linux kernel is capable of merging I/O operations. So, if two requests to read data from the disk are adjacent, the Linux kernel may merge them into one before issuing them to the disk. This metric measures the number of operations that have been merged by the Linux kernel.
+- Total I/O time
+ The sum of the duration of all completed I/O operations. This number can exceed the collection interval if the disk is able to execute multiple I/O operations in parallel.
+- Space usage
+ For mounted disks, netdata will provide a chart for their space, with 3 dimensions:
+ 1. free
+ 2. used
+ 3. reserved for root
+- inode usage
+ For mounted disks, netdata will provide a chart for their inodes (number of files and directories), with 3 dimensions:
+ 1. free
+ 2. used
+ 3. reserved for root
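+
+A minimal sketch of the Average Service Time calculation referenced above (the names are illustrative, units in milliseconds):
+
+```
+average_service_time_ms = total_disk_busy_time_ms / completed_operations
+```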
+
+### disk names
netdata will automatically name disks on the dashboard after their mount points, of course only while they are mounted. Changes in mount points are not currently detected (you will have to restart netdata for a disk's name to change).
----
-
-## performance metrics
+### performance metrics
By default, netdata enables monitoring metrics only when they are not zero; metrics that are constantly zero are ignored. Metrics that start having values after netdata is started will be detected, and their charts will be added to the dashboard automatically (though a refresh of the dashboard is needed for them to appear).
@@ -198,3 +167,76 @@ So, to disable performance metrics for all loop devices you could add `performan
performance metrics for disks with major 7 = no
```
+## Monitoring CPUs
+
+The `/proc/stat` module monitors CPU utilization, interrupts, context switches, processes started/running, thermal throttling, frequency, and idle states. It gathers this information from multiple files.
+
+If more than 50 cores are present in a system, the CPU thermal throttling, frequency, and idle state charts are disabled by default.
+
+#### configuration
+
+The `keep per core files open` option in the `[plugin:proc:/proc/stat]` configuration section reduces the number of file operations by keeping the per-core files open between data collections.
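+
+For example, in `netdata.conf` (the value shown is illustrative):
+
+```
+[plugin:proc:/proc/stat]
+    keep per core files open = yes
+```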
+
+### CPU frequency
+
+The module shows the current CPU frequency as set by the `cpufreq` kernel
+module.
+
+**Requirement:**
+You need to have `CONFIG_CPU_FREQ` and (optionally) `CONFIG_CPU_FREQ_STAT`
+enabled in your kernel.
+
+The `cpufreq` interface provides two different ways of getting this information, through the `/sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq` and `/sys/devices/system/cpu/cpu*/cpufreq/stats/time_in_state` files. The latter is more accurate, so the module prefers it. `scaling_cur_freq` reports only the instantaneous CPU frequency and doesn't account for any state changes which happen between updates. The module switches back and forth between these two methods if the governor is changed.
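+
+A condensed sketch of the averaging the `time_in_state` method allows, mirroring the loop this diff adds to `proc_stat.c` below (each `time_in_state` line is a `frequency ticks` pair, with frequencies in kHz; `ff` is the parsed file and `last_ticks` holds the previous counters, as in the module):
+
+```c
+// tick-weighted mean of the frequencies seen since the last read
+unsigned long long total_ticks = 0, avg_freq = 0;
+for(size_t l = 0; l < lines - 1; l++) {
+    unsigned long long frequency = str2ull(procfile_lineword(ff, l, 0));
+    unsigned long long ticks     = str2ull(procfile_lineword(ff, l, 1));
+    unsigned long long delta     = ticks - last_ticks[l].ticks; // ticks since last read
+    last_ticks[l].ticks = ticks;
+    total_ticks += delta;
+    avg_freq    += frequency * delta;
+}
+if(total_ticks) avg_freq /= total_ticks; // average frequency over the interval
+```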
+
+It produces one chart with multiple lines (one line per core).
+
+#### configuration
+
+`scaling_cur_freq filename to monitor` and `time_in_state filename to monitor` in the `[plugin:proc:/proc/stat]` configuration section
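+
+Both options are path templates; the defaults from the source, where `%s` is replaced by the CPU id (e.g. `cpu0`), look like this in `netdata.conf`:
+
+```
+[plugin:proc:/proc/stat]
+    scaling_cur_freq filename to monitor = /sys/devices/system/cpu/%s/cpufreq/scaling_cur_freq
+    time_in_state filename to monitor = /sys/devices/system/cpu/%s/cpufreq/stats/time_in_state
+```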
+
+### CPU idle states
+
+The module monitors the usage of CPU idle states.
+
+**Requirement:**
+Your kernel needs to have `CONFIG_CPU_IDLE` enabled.
+
+It produces one stacked chart per CPU, showing the percentage of time spent in
+each state.
+
+#### configuration
+
+`schedstat filename to monitor`, `cpuidle name filename to monitor`, and `cpuidle time filename to monitor` in the `[plugin:proc:/proc/stat]` configuration section
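+
+A sketch of how these could be set in `netdata.conf` (the paths are illustrative assumptions; the defaults are not part of this diff):
+
+```
+[plugin:proc:/proc/stat]
+    schedstat filename to monitor = /proc/schedstat
+    cpuidle name filename to monitor = /sys/devices/system/cpu/cpu%zu/cpuidle/state%zu/name
+    cpuidle time filename to monitor = /sys/devices/system/cpu/cpu%zu/cpuidle/state%zu/time
+```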
+
+## Linux Anti-DDoS
+
+![image6](https://cloud.githubusercontent.com/assets/2662304/14253733/53550b16-fa95-11e5-8d9d-4ed171df4735.gif)
+
+---
+SYNPROXY is a TCP SYN packet proxy. It can be used to protect any TCP server (like a web server) from SYN floods and similar DDoS attacks.
+
+SYNPROXY is a netfilter module in the Linux kernel (since version 3.12). It is optimized to handle millions of packets per second, utilizing all available CPUs without any concurrency locking between connections.
+
+The net effect is that the real servers will not notice any change during the attack. Valid TCP connections will pass through and be served, while the attack is stopped at the firewall.
+
+To use SYNPROXY on your firewall, please follow our setup guides:
+
+ - **[Working with SYNPROXY](https://github.com/firehol/firehol/wiki/Working-with-SYNPROXY)**
+ - **[Working with SYNPROXY and traps](https://github.com/firehol/firehol/wiki/Working-with-SYNPROXY-and-traps)**
+
+### Real-time monitoring of Linux Anti-DDoS
+
+netdata is able to monitor the operation of the Linux Anti-DDoS protection in real time (per-second updates).
+
+It visualizes 4 charts:
+
+1. TCP SYN Packets received on ports operated by SYNPROXY
+2. TCP Cookies (valid, invalid, retransmits)
+3. Connections Reopened
+4. Entries used
+
+Example image:
+
+![ddos](https://cloud.githubusercontent.com/assets/2662304/14398891/6016e3fc-fdf0-11e5-942b-55de6a52cb66.gif)
+
+See Linux Anti-DDoS in action at: **[netdata demo site (with SYNPROXY enabled)](https://registry.my-netdata.io/#menu_netfilter_submenu_synproxy)**
diff --git a/collectors/proc.plugin/proc_net_dev.c b/collectors/proc.plugin/proc_net_dev.c
index 97cbc060a..1e426e977 100644
--- a/collectors/proc.plugin/proc_net_dev.c
+++ b/collectors/proc.plugin/proc_net_dev.c
@@ -66,7 +66,7 @@ static struct netdev {
kernel_uint_t tcollisions;
kernel_uint_t tcarrier;
kernel_uint_t tcompressed;
- kernel_uint_t speed_max;
+ kernel_uint_t speed;
// charts
RRDSET *st_bandwidth;
@@ -96,6 +96,10 @@ static struct netdev {
RRDDIM *rd_tcarrier;
RRDDIM *rd_tcompressed;
+ usec_t speed_last_collected_usec;
+ char *filename_speed;
+ RRDSETVAR *chart_var_speed;
+
struct netdev *next;
} *netdev_root = NULL, *netdev_last_used = NULL;
@@ -139,7 +143,7 @@ static void netdev_charts_release(struct netdev *d) {
d->rd_tcompressed = NULL;
}
-static void netdev_free_strings(struct netdev *d) {
+static void netdev_free_chart_strings(struct netdev *d) {
freez((void *)d->chart_type_net_bytes);
freez((void *)d->chart_type_net_compressed);
freez((void *)d->chart_type_net_drops);
@@ -161,9 +165,10 @@ static void netdev_free_strings(struct netdev *d) {
static void netdev_free(struct netdev *d) {
netdev_charts_release(d);
- netdev_free_strings(d);
+ netdev_free_chart_strings(d);
freez((void *)d->name);
+ freez((void *)d->filename_speed);
freez((void *)d);
netdev_added--;
}
@@ -265,7 +270,7 @@ static inline void netdev_rename_cgroup(struct netdev *d, struct netdev_rename *
info("CGROUP: renaming network interface '%s' as '%s' under '%s'", r->host_device, r->container_device, r->container_name);
netdev_charts_release(d);
- netdev_free_strings(d);
+ netdev_free_chart_strings(d);
char buffer[RRD_ID_LENGTH_MAX + 1];
@@ -435,15 +440,21 @@ int do_proc_net_dev(int update_every, usec_t dt) {
static procfile *ff = NULL;
static int enable_new_interfaces = -1;
static int do_bandwidth = -1, do_packets = -1, do_errors = -1, do_drops = -1, do_fifo = -1, do_compressed = -1, do_events = -1;
- static char *path_to_sys_devices_virtual_net = NULL;
- static char *path_to_sys_net_speed = NULL;
+ static char *path_to_sys_devices_virtual_net = NULL, *path_to_sys_class_net_speed = NULL, *proc_net_dev_filename = NULL;
+ static long long int dt_to_refresh_speed = 0;
if(unlikely(enable_new_interfaces == -1)) {
char filename[FILENAME_MAX + 1];
+ snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, (*netdata_configured_host_prefix)?"/proc/1/net/dev":"/proc/net/dev");
+ proc_net_dev_filename = config_get(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "filename to monitor", filename);
+
snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/devices/virtual/net/%s");
path_to_sys_devices_virtual_net = config_get(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "path to get virtual interfaces", filename);
+ snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/class/net/%s/speed");
+ path_to_sys_class_net_speed = config_get(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "path to get net device speed", filename);
+
enable_new_interfaces = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "enable new interfaces detected at runtime", CONFIG_BOOLEAN_AUTO);
do_bandwidth = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "bandwidth for all interfaces", CONFIG_BOOLEAN_AUTO);
@@ -455,12 +466,13 @@ int do_proc_net_dev(int update_every, usec_t dt) {
do_events = config_get_boolean_ondemand(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "frames, collisions, carrier counters for all interfaces", CONFIG_BOOLEAN_AUTO);
disabled_list = simple_pattern_create(config_get(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "disable by default interfaces matching", "lo fireqos* *-ifb"), NULL, SIMPLE_PATTERN_EXACT);
+
+ dt_to_refresh_speed = config_get_number(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "refresh interface speed every seconds", 10) * USEC_PER_SEC;
+ if(dt_to_refresh_speed < 0) dt_to_refresh_speed = 0;
}
if(unlikely(!ff)) {
- char filename[FILENAME_MAX + 1];
- snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, (*netdata_configured_host_prefix)?"/proc/1/net/dev":"/proc/net/dev");
- ff = procfile_open(config_get(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "filename to monitor", filename), " \t,:|", PROCFILE_FLAG_DEFAULT);
+ ff = procfile_open(proc_net_dev_filename, " \t,|", PROCFILE_FLAG_DEFAULT);
if(unlikely(!ff)) return 1;
}
@@ -481,7 +493,11 @@ int do_proc_net_dev(int update_every, usec_t dt) {
// require 17 words on each line
if(unlikely(procfile_linewords(ff, l) < 17)) continue;
- struct netdev *d = get_netdev(procfile_lineword(ff, l, 0));
+ char *name = procfile_lineword(ff, l, 0);
+ size_t len = strlen(name);
+ if(name[len - 1] == ':') name[len - 1] = '\0';
+
+ struct netdev *d = get_netdev(name);
d->updated = 1;
netdev_found++;
@@ -505,12 +521,10 @@ int do_proc_net_dev(int update_every, usec_t dt) {
else
d->virtual = 0;
- // set nic speed if present
if(likely(!d->virtual)) {
- snprintfz(buffer, FILENAME_MAX, "%s/sys/class/net/%s/speed", netdata_configured_host_prefix, d->name);
- path_to_sys_net_speed = config_get(CONFIG_SECTION_PLUGIN_PROC_NETDEV, "path to get net device speed", buffer);
- int ret = read_single_number_file(path_to_sys_net_speed, (unsigned long long*)&d->speed_max);
- if(ret) error("Cannot read '%s'.", path_to_sys_net_speed);
+ // set the filename to get the interface speed
+ snprintfz(buffer, FILENAME_MAX, path_to_sys_class_net_speed, d->name);
+ d->filename_speed = strdupz(buffer);
}
snprintfz(buffer, FILENAME_MAX, "plugin:proc:/proc/net/dev:%s", d->name);
@@ -574,6 +588,17 @@ int do_proc_net_dev(int update_every, usec_t dt) {
d->tcarrier = str2kernel_uint_t(procfile_lineword(ff, l, 15));
}
+ //info("PROC_NET_DEV: %s speed %zu, bytes %zu/%zu, packets %zu/%zu/%zu, errors %zu/%zu, drops %zu/%zu, fifo %zu/%zu, compressed %zu/%zu, rframe %zu, tcollisions %zu, tcarrier %zu"
+ // , d->name, d->speed
+ // , d->rbytes, d->tbytes
+ // , d->rpackets, d->tpackets, d->rmulticast
+ // , d->rerrors, d->terrors
+ // , d->rdrops, d->tdrops
+ // , d->rfifo, d->tfifo
+ // , d->rcompressed, d->tcompressed
+ // , d->rframe, d->tcollisions, d->tcarrier
+ // );
+
// --------------------------------------------------------------------
if(unlikely((d->do_bandwidth == CONFIG_BOOLEAN_AUTO && (d->rbytes || d->tbytes))))
@@ -597,9 +622,6 @@ int do_proc_net_dev(int update_every, usec_t dt) {
, RRDSET_TYPE_AREA
);
- RRDSETVAR *nic_speed_max = rrdsetvar_custom_chart_variable_create(d->st_bandwidth, "nic_speed_max");
- if(nic_speed_max) rrdsetvar_custom_chart_variable_set(nic_speed_max, (calculated_number)d->speed_max);
-
d->rd_rbytes = rrddim_add(d->st_bandwidth, "received", NULL, 8, BITS_IN_A_KILOBIT, RRD_ALGORITHM_INCREMENTAL);
d->rd_tbytes = rrddim_add(d->st_bandwidth, "sent", NULL, -8, BITS_IN_A_KILOBIT, RRD_ALGORITHM_INCREMENTAL);
@@ -616,6 +638,35 @@ int do_proc_net_dev(int update_every, usec_t dt) {
rrddim_set_by_pointer(d->st_bandwidth, d->rd_rbytes, (collected_number)d->rbytes);
rrddim_set_by_pointer(d->st_bandwidth, d->rd_tbytes, (collected_number)d->tbytes);
rrdset_done(d->st_bandwidth);
+
+ // update the interface speed
+ if(d->filename_speed) {
+ d->speed_last_collected_usec += dt;
+
+ if(unlikely(d->speed_last_collected_usec >= (usec_t)dt_to_refresh_speed)) {
+
+ if(unlikely(!d->chart_var_speed)) {
+ d->chart_var_speed = rrdsetvar_custom_chart_variable_create(d->st_bandwidth, "nic_speed_max");
+ if(!d->chart_var_speed) {
+ error("Cannot create interface %s chart variable 'nic_speed_max'. Will not update its speed anymore.", d->name);
+ freez(d->filename_speed);
+ d->filename_speed = NULL;
+ }
+ }
+
+ if(d->filename_speed && d->chart_var_speed) {
+ if(read_single_number_file(d->filename_speed, (unsigned long long *) &d->speed)) {
+ error("Cannot refresh interface %s speed by reading '%s'. Will not update its speed anymore.", d->name, d->filename_speed);
+ freez(d->filename_speed);
+ d->filename_speed = NULL;
+ }
+ else {
+ rrdsetvar_custom_chart_variable_set(d->chart_var_speed, (calculated_number) d->speed);
+ d->speed_last_collected_usec = 0;
+ }
+ }
+ }
+ }
}
// --------------------------------------------------------------------
diff --git a/collectors/proc.plugin/proc_net_stat_conntrack.c b/collectors/proc.plugin/proc_net_stat_conntrack.c
index f5257c0a0..642e33f8e 100644
--- a/collectors/proc.plugin/proc_net_stat_conntrack.c
+++ b/collectors/proc.plugin/proc_net_stat_conntrack.c
@@ -50,7 +50,7 @@ int do_proc_net_stat_conntrack(int update_every, usec_t dt) {
if(!do_sockets && !read_full)
return 1;
- rrdvar_max = rrdvar_custom_host_variable_create(localhost, "netfilter.conntrack.max");
+ rrdvar_max = rrdvar_custom_host_variable_create(localhost, "netfilter_conntrack_max");
}
if(likely(read_full)) {
diff --git a/collectors/proc.plugin/proc_stat.c b/collectors/proc.plugin/proc_stat.c
index fb77df647..931b415a5 100644..100755
--- a/collectors/proc.plugin/proc_stat.c
+++ b/collectors/proc.plugin/proc_stat.c
@@ -12,9 +12,23 @@ struct per_core_single_number_file {
RRDDIM *rd;
};
+struct last_ticks {
+ collected_number frequency;
+ collected_number ticks;
+};
+
+// This is an extension of struct per_core_single_number_file at CPU_FREQ_INDEX.
+// Either scaling_cur_freq or time_in_state file is used at one time.
+struct per_core_time_in_state_file {
+ const char *filename;
+ procfile *ff;
+ size_t last_ticks_len;
+ struct last_ticks *last_ticks;
+};
+
#define CORE_THROTTLE_COUNT_INDEX 0
#define PACKAGE_THROTTLE_COUNT_INDEX 1
-#define SCALING_CUR_FREQ_INDEX 2
+#define CPU_FREQ_INDEX 2
#define PER_CORE_FILES 3
struct cpu_chart {
@@ -33,6 +47,8 @@ struct cpu_chart {
RRDDIM *rd_guest_nice;
struct per_core_single_number_file files[PER_CORE_FILES];
+
+ struct per_core_time_in_state_file time_in_state_files;
};
static int keep_per_core_fds_open = CONFIG_BOOLEAN_YES;
@@ -87,7 +103,6 @@ static int read_per_core_files(struct cpu_chart *all_cpu_charts, size_t len, siz
f->found = 1;
f->value = str2ll(buf, NULL);
- // info("read '%s', parsed as " COLLECTED_NUMBER_FORMAT, buf, f->value);
if(likely(f->value != 0))
files_nonzero++;
}
@@ -101,6 +116,112 @@ static int read_per_core_files(struct cpu_chart *all_cpu_charts, size_t len, siz
return (int)files_nonzero;
}
+static int read_per_core_time_in_state_files(struct cpu_chart *all_cpu_charts, size_t len, size_t index) {
+ size_t x, files_read = 0, files_nonzero = 0;
+
+ for(x = 0; x < len ; x++) {
+ struct per_core_single_number_file *f = &all_cpu_charts[x].files[index];
+ struct per_core_time_in_state_file *tsf = &all_cpu_charts[x].time_in_state_files;
+
+ f->found = 0;
+
+ if(unlikely(!tsf->filename))
+ continue;
+
+ if(unlikely(!tsf->ff)) {
+ tsf->ff = procfile_open(tsf->filename, " \t:", PROCFILE_FLAG_DEFAULT);
+ if(unlikely(!tsf->ff))
+ {
+ error("Cannot open file '%s'", tsf->filename);
+ continue;
+ }
+ }
+
+ tsf->ff = procfile_readall(tsf->ff);
+ if(unlikely(!tsf->ff)) {
+ error("Cannot read file '%s'", tsf->filename);
+ procfile_close(tsf->ff);
+ tsf->ff = NULL;
+ continue;
+ }
+ else {
+ // successful read
+
+ size_t lines = procfile_lines(tsf->ff), l;
+ size_t words;
+ unsigned long long total_ticks_since_last = 0, avg_freq = 0;
+
+ // Check if there is at least one frequency in time_in_state
+ if (procfile_word(tsf->ff, 0)[0] == '\0') {
+ if(unlikely(keep_per_core_fds_open != CONFIG_BOOLEAN_YES)) {
+ procfile_close(tsf->ff);
+ tsf->ff = NULL;
+ }
+ // TODO: Is there a better way to avoid spikes than calculating the average over
+ // the whole period under schedutil governor?
+ // freez(tsf->last_ticks);
+ // tsf->last_ticks = NULL;
+ // tsf->last_ticks_len = 0;
+ continue;
+ }
+
+ if (unlikely(tsf->last_ticks_len < lines || tsf->last_ticks == NULL)) {
+ tsf->last_ticks = reallocz(tsf->last_ticks, sizeof(struct last_ticks) * lines);
+ memset(tsf->last_ticks, 0, sizeof(struct last_ticks) * lines);
+ tsf->last_ticks_len = lines;
+ }
+
+ f->value = 0;
+
+ for(l = 0; l < lines - 1 ;l++) {
+ unsigned long long frequency = 0, ticks = 0, ticks_since_last = 0;
+
+ words = procfile_linewords(tsf->ff, l);
+ if(unlikely(words < 2)) {
+ error("Cannot read time_in_state line. Expected 2 params, read %zu.", words);
+ continue;
+ }
+ frequency = str2ull(procfile_lineword(tsf->ff, l, 0));
+ ticks = str2ull(procfile_lineword(tsf->ff, l, 1));
+
+ // It is assumed that frequencies are static and sorted
+ ticks_since_last = ticks - tsf->last_ticks[l].ticks;
+ tsf->last_ticks[l].frequency = frequency;
+ tsf->last_ticks[l].ticks = ticks;
+
+ total_ticks_since_last += ticks_since_last;
+ avg_freq += frequency * ticks_since_last;
+
+ }
+
+ if (likely(total_ticks_since_last)) {
+ avg_freq /= total_ticks_since_last;
+ f->value = avg_freq;
+ }
+
+ if(unlikely(keep_per_core_fds_open != CONFIG_BOOLEAN_YES)) {
+ procfile_close(tsf->ff);
+ tsf->ff = NULL;
+ }
+ }
+
+ files_read++;
+
+ f->found = 1;
+
+ if(likely(f->value != 0))
+ files_nonzero++;
+ }
+
+ if(unlikely(files_read == 0))
+ return -1;
+
+ if(unlikely(files_nonzero == 0))
+ return 0;
+
+ return (int)files_nonzero;
+}
+
static void chart_per_core_files(struct cpu_chart *all_cpu_charts, size_t len, size_t index, RRDSET *st, collected_number multiplier, collected_number divisor, RRD_ALGORITHM algorithm) {
size_t x;
for(x = 0; x < len ; x++) {
@@ -122,10 +243,11 @@ int do_proc_stat(int update_every, usec_t dt) {
static struct cpu_chart *all_cpu_charts = NULL;
static size_t all_cpu_charts_size = 0;
static procfile *ff = NULL;
- static int do_cpu = -1, do_cpu_cores = -1, do_interrupts = -1, do_context = -1, do_forks = -1, do_processes = -1, do_core_throttle_count = -1, do_package_throttle_count = -1, do_scaling_cur_freq = -1;
+ static int do_cpu = -1, do_cpu_cores = -1, do_interrupts = -1, do_context = -1, do_forks = -1, do_processes = -1, do_core_throttle_count = -1, do_package_throttle_count = -1, do_cpu_freq = -1;
static uint32_t hash_intr, hash_ctxt, hash_processes, hash_procs_running, hash_procs_blocked;
- static char *core_throttle_count_filename = NULL, *package_throttle_count_filename = NULL, *scaling_cur_freq_filename = NULL;
+ static char *core_throttle_count_filename = NULL, *package_throttle_count_filename = NULL, *scaling_cur_freq_filename = NULL, *time_in_state_filename = NULL;
static RRDVAR *cpus_var = NULL;
+ static int accurate_freq_avail = 0, accurate_freq_is_used = 0;
size_t cores_found = (size_t)processors;
if(unlikely(do_cpu == -1)) {
@@ -137,25 +259,25 @@ int do_proc_stat(int update_every, usec_t dt) {
do_processes = config_get_boolean("plugin:proc:/proc/stat", "processes running", CONFIG_BOOLEAN_YES);
// give sane defaults based on the number of processors
- if(processors > 50) {
+ if(unlikely(processors > 50)) {
// the system has too many processors
keep_per_core_fds_open = CONFIG_BOOLEAN_NO;
do_core_throttle_count = CONFIG_BOOLEAN_NO;
do_package_throttle_count = CONFIG_BOOLEAN_NO;
- do_scaling_cur_freq = CONFIG_BOOLEAN_NO;
+ do_cpu_freq = CONFIG_BOOLEAN_NO;
}
else {
// the system has a reasonable number of processors
keep_per_core_fds_open = CONFIG_BOOLEAN_YES;
do_core_throttle_count = CONFIG_BOOLEAN_AUTO;
do_package_throttle_count = CONFIG_BOOLEAN_NO;
- do_scaling_cur_freq = CONFIG_BOOLEAN_NO;
+ do_cpu_freq = CONFIG_BOOLEAN_YES;
}
keep_per_core_fds_open = config_get_boolean("plugin:proc:/proc/stat", "keep per core files open", keep_per_core_fds_open);
do_core_throttle_count = config_get_boolean_ondemand("plugin:proc:/proc/stat", "core_throttle_count", do_core_throttle_count);
do_package_throttle_count = config_get_boolean_ondemand("plugin:proc:/proc/stat", "package_throttle_count", do_package_throttle_count);
- do_scaling_cur_freq = config_get_boolean_ondemand("plugin:proc:/proc/stat", "scaling_cur_freq", do_scaling_cur_freq);
+ do_cpu_freq = config_get_boolean_ondemand("plugin:proc:/proc/stat", "cpu frequency", do_cpu_freq);
hash_intr = simple_hash("intr");
hash_ctxt = simple_hash("ctxt");
@@ -172,6 +294,9 @@ int do_proc_stat(int update_every, usec_t dt) {
snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/devices/system/cpu/%s/cpufreq/scaling_cur_freq");
scaling_cur_freq_filename = config_get("plugin:proc:/proc/stat", "scaling_cur_freq filename to monitor", filename);
+
+ snprintfz(filename, FILENAME_MAX, "%s%s", netdata_configured_host_prefix, "/sys/devices/system/cpu/%s/cpufreq/stats/time_in_state");
+ time_in_state_filename = config_get("plugin:proc:/proc/stat", "time_in_state filename to monitor", filename);
}
if(unlikely(!ff)) {
@@ -202,7 +327,7 @@ int do_proc_stat(int update_every, usec_t dt) {
}
size_t core = (row_key[3] == '\0') ? 0 : str2ul(&row_key[3]) + 1;
- if(core > 0) cores_found = core;
+ if(likely(core > 0)) cores_found = core;
if(likely((core == 0 && do_cpu) || (core > 0 && do_cpu_cores))) {
char *id;
@@ -227,7 +352,7 @@ int do_proc_stat(int update_every, usec_t dt) {
char *title, *type, *context, *family;
long priority;
- if(core >= all_cpu_charts_size) {
+ if(unlikely(core >= all_cpu_charts_size)) {
size_t old_cpu_charts_size = all_cpu_charts_size;
all_cpu_charts_size = core + 1;
all_cpu_charts = reallocz(all_cpu_charts, sizeof(struct cpu_chart) * all_cpu_charts_size);
@@ -238,7 +363,7 @@ int do_proc_stat(int update_every, usec_t dt) {
if(unlikely(!cpu_chart->st)) {
cpu_chart->id = strdupz(id);
- if(core == 0) {
+ if(unlikely(core == 0)) {
title = "Total CPU utilization";
type = "system";
context = "system.cpu";
@@ -252,9 +377,6 @@ int do_proc_stat(int update_every, usec_t dt) {
family = "utilization";
priority = NETDATA_CHART_PRIO_CPU_PER_CORE;
- // TODO: check for /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq
- // TODO: check for /sys/devices/system/cpu/cpu*/cpufreq/stats/time_in_state
-
char filename[FILENAME_MAX + 1];
struct stat stbuf;
@@ -276,12 +398,23 @@ int do_proc_stat(int update_every, usec_t dt) {
}
}
- if(do_scaling_cur_freq != CONFIG_BOOLEAN_NO) {
+ if(do_cpu_freq != CONFIG_BOOLEAN_NO) {
+
snprintfz(filename, FILENAME_MAX, scaling_cur_freq_filename, id);
+
+ if (stat(filename, &stbuf) == 0) {
+ cpu_chart->files[CPU_FREQ_INDEX].filename = strdupz(filename);
+ cpu_chart->files[CPU_FREQ_INDEX].fd = -1;
+ do_cpu_freq = CONFIG_BOOLEAN_YES;
+ }
+
+ snprintfz(filename, FILENAME_MAX, time_in_state_filename, id);
+
if (stat(filename, &stbuf) == 0) {
- cpu_chart->files[SCALING_CUR_FREQ_INDEX].filename = strdupz(filename);
- cpu_chart->files[SCALING_CUR_FREQ_INDEX].fd = -1;
- do_scaling_cur_freq = CONFIG_BOOLEAN_YES;
+ cpu_chart->time_in_state_files.filename = strdupz(filename);
+ cpu_chart->time_in_state_files.ff = NULL;
+ do_cpu_freq = CONFIG_BOOLEAN_YES;
+ accurate_freq_avail = 1;
}
}
}
@@ -532,21 +665,40 @@ int do_proc_stat(int update_every, usec_t dt) {
}
}
- if(likely(do_scaling_cur_freq != CONFIG_BOOLEAN_NO)) {
- int r = read_per_core_files(&all_cpu_charts[1], all_cpu_charts_size - 1, SCALING_CUR_FREQ_INDEX);
- if(likely(r != -1 && (do_scaling_cur_freq == CONFIG_BOOLEAN_YES || r > 0))) {
- do_scaling_cur_freq = CONFIG_BOOLEAN_YES;
+ if(likely(do_cpu_freq != CONFIG_BOOLEAN_NO)) {
+ char filename[FILENAME_MAX + 1];
+ int r = 0;
+
+ if (accurate_freq_avail) {
+ r = read_per_core_time_in_state_files(&all_cpu_charts[1], all_cpu_charts_size - 1, CPU_FREQ_INDEX);
+ if(r > 0 && !accurate_freq_is_used) {
+ accurate_freq_is_used = 1;
+ snprintfz(filename, FILENAME_MAX, time_in_state_filename, "cpu*");
+ info("cpufreq is using %s", filename);
+ }
+ }
+ if (r < 1) {
+ r = read_per_core_files(&all_cpu_charts[1], all_cpu_charts_size - 1, CPU_FREQ_INDEX);
+ if(accurate_freq_is_used) {
+ accurate_freq_is_used = 0;
+ snprintfz(filename, FILENAME_MAX, scaling_cur_freq_filename, "cpu*");
+ info("cpufreq fell back to %s", filename);
+ }
+ }
+
+ if(likely(r != -1 && (do_cpu_freq == CONFIG_BOOLEAN_YES || r > 0))) {
+ do_cpu_freq = CONFIG_BOOLEAN_YES;
static RRDSET *st_scaling_cur_freq = NULL;
if(unlikely(!st_scaling_cur_freq))
st_scaling_cur_freq = rrdset_create_localhost(
"cpu"
- , "scaling_cur_freq"
+ , "cpufreq"
, NULL
, "cpufreq"
- , "cpu.scaling_cur_freq"
- , "Per CPU Core, Current CPU Scaling Frequency"
+ , "cpufreq.cpufreq"
+ , "Current CPU Frequency"
, "MHz"
, PLUGIN_PROC_NAME
, PLUGIN_PROC_MODULE_STAT_NAME
@@ -557,7 +709,7 @@ int do_proc_stat(int update_every, usec_t dt) {
else
rrdset_next(st_scaling_cur_freq);
- chart_per_core_files(&all_cpu_charts[1], all_cpu_charts_size - 1, SCALING_CUR_FREQ_INDEX, st_scaling_cur_freq, 1, 1000, RRD_ALGORITHM_ABSOLUTE);
+ chart_per_core_files(&all_cpu_charts[1], all_cpu_charts_size - 1, CPU_FREQ_INDEX, st_scaling_cur_freq, 1, 1000, RRD_ALGORITHM_ABSOLUTE);
rrdset_done(st_scaling_cur_freq);
}
}