author    | Lennart Weller <lhw@ring0.de> | 2016-05-25 10:36:24 +0000
committer | Lennart Weller <lhw@ring0.de> | 2016-05-25 10:36:24 +0000
commit    | b4f64f72a3e4bf590c60b0cbd6cd365aa1a58542 (patch)
tree      | e6706c727a1fedb44da614453ad3e429a7403a9b
parent    | Imported Upstream version 1.1.0 (diff)
download  | netdata-b4f64f72a3e4bf590c60b0cbd6cd365aa1a58542.tar.xz, netdata-b4f64f72a3e4bf590c60b0cbd6cd365aa1a58542.zip
Imported Upstream version 1.2.0 (upstream/1.2.0)
110 files changed, 8512 insertions, 2267 deletions
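The 1.2.0 changelog included in this import (see the ChangeLog hunk below) notes that netdata now requires libuuid at build time, and lists the header packages per distribution. A minimal sketch of installing that dependency before building; the package names come from the changelog, while the package-manager invocations are assumptions and not part of this commit:

    # Debian / Ubuntu
    sudo apt-get install uuid-dev

    # CentOS / Fedora / Red Hat
    sudo yum install libuuid-devel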
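A change repeated throughout the charts.d scripts in this diff replaces the Bash-only arithmetic form `$[ ... ]` with the POSIX arithmetic expansion `$(( ... ))`. A small sketch of the equivalence, using an illustrative priority variable like the ones in these scripts; both forms produce the same value, but `$[ ... ]` is deprecated and non-portable:

    #!/bin/bash

    ap_priority=6900

    old=$[ap_priority + 1]     # deprecated Bash-only syntax, removed in this diff
    new=$((ap_priority + 1))   # POSIX arithmetic expansion used instead

    echo "${old} ${new}"       # both print 6901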
diff --git a/.gitignore b/.gitignore
index 72c79d863..02801b778 100644
--- a/.gitignore
+++ b/.gitignore
@@ -72,3 +72,9 @@ apps.plugin-profiler.sh
 CMakeCache.txt
 CMakeFiles/
 cmake_install.cmake
+
+.jetbrains*
+
+contrib/debian/changelog
+profile/benchmark-dictionary
+profile/benchmark-registry
@@ -1,3 +1,70 @@
+netdata (1.2.0) - 2016-05-16
+
+ At a glance:
+
+ - netdata is now 30% faster
+ - netdata now has a registry (my-netdata dashboard menu)
+ - netdata now monitors Linux Containers (docker, lxc, etc)
+
+ IMPORTANT:
+ This version requires libuuid. The package you need is:
+
+ - uuid-dev (debian/ubuntu), or
+ - libuuid-devel (centos/fedora/redhat)
+
+ In detail:
+
+ * netdata is now 30% faster !
+
+   - Patches submitted by @fredericopissarra improved overall
+     netdata performance by 10%.
+
+   - A new improved search function in the internal indexes
+     made all searches faster by 50%, resulting in about
+     20% better performance for the core of netdata.
+
+   - More efficient threads locking in key components
+     contributed to the overal efficiency.
+
+ * netdata now has a CENTRAL REGISTRY !
+
+   The central registry tracks all your netdata servers
+   and bookmarks them for you at the 'my-netdata' menu
+   on all dashboards.
+
+   Every netdata can act as a registry, but there is also
+   a global registry provided for free for all netdata users!
+
+ * netdata now monitors CONTAINERS !
+
+   docker, lxc, or anything else. For each container it monitors
+   CPU, RAM, DISK I/O (network interfaces were already monitored)
+
+ * apps.plugin: now uses linux capabilities by default
+   without setuid to root
+
+ * netdata has now an improved signal handler
+   thanks to @simonnagl
+
+ * API: new improved CORS support
+
+ * SNMP: counter64 support fixed
+
+ * MYSQL: more charts, about QCache, MyISAM key cache,
+   InnoDB buffer pools, open files
+
+ * DISK charts now show mount point when available
+
+ * Dashboard: improved support for older web browsers
+   and mobile web browsers (thanks to @simonnagl)
+
+ * Multi-server dashboards now allow de-coupled refreshes for
+   each chart, so that if one netdata has a network latency
+   the other charts are not affected
+
+ * Several other minor improvements and bugfixes
+
 netdata (1.1.0) - 2016-04-20

 Dozens of commits that improve netdata in several ways:
diff --git a/LICENSE.md b/LICENSE.md
index 3221f2231..380e86eee 100644
--- a/LICENSE.md
+++ b/LICENSE.md
@@ -138,8 +138,4 @@ connectivity is not available.
Copyright 2015, Joseph Huckaby [MIT License](https://github.com/jhuckaby/pixl-xml) -- [node-int64](https://github.com/broofa/node-int64) - - Copyright 2014, Robert Kieffer - [MIT License](https://github.com/broofa/node-int64/blob/master/LICENSE) diff --git a/Makefile.in b/Makefile.in index 9399b66fd..9cfa9bead 100644 --- a/Makefile.in +++ b/Makefile.in @@ -241,6 +241,8 @@ OPTIONAL_MATH_CLFAGS = @OPTIONAL_MATH_CLFAGS@ OPTIONAL_MATH_LIBS = @OPTIONAL_MATH_LIBS@ OPTIONAL_NFACCT_CLFAGS = @OPTIONAL_NFACCT_CLFAGS@ OPTIONAL_NFACCT_LIBS = @OPTIONAL_NFACCT_LIBS@ +OPTIONAL_UUID_CLFAGS = @OPTIONAL_UUID_CLFAGS@ +OPTIONAL_UUID_LIBS = @OPTIONAL_UUID_LIBS@ OPTIONAL_ZLIB_CLFAGS = @OPTIONAL_ZLIB_CLFAGS@ OPTIONAL_ZLIB_LIBS = @OPTIONAL_ZLIB_LIBS@ PACKAGE = @PACKAGE@ @@ -262,6 +264,8 @@ PTHREAD_LIBS = @PTHREAD_LIBS@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ STRIP = @STRIP@ +UUID_CFLAGS = @UUID_CFLAGS@ +UUID_LIBS = @UUID_LIBS@ VERSION = @VERSION@ ZLIB_CFLAGS = @ZLIB_CFLAGS@ ZLIB_LIBS = @ZLIB_LIBS@ @@ -322,6 +326,7 @@ target_alias = @target_alias@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ +varlibdir = @varlibdir@ webdir = @webdir@ # @@ -1,8 +1,10 @@ +[![Build Status](https://travis-ci.org/firehol/netdata.svg?branch=master)](https://travis-ci.org/firehol/netdata) + # netdata -#### 230.000+ views, 62.000+ visitors, 18.500+ downloads, 9.500+ github stars, 500+ forks, 14 days! +##### 320.000+ views, 92.000+ visitors, 28.500+ downloads, 11.000+ github stars, 700+ forks, 1 month! -And it still runs with 700+ git downloads... per day! +And it still runs with 600+ git downloads... per day! **[Check what our users say about netdata](https://github.com/firehol/netdata/issues/148)**. @@ -65,7 +67,7 @@ This is what it currently monitors (most with zero configuration): - **netfilter / iptables Linux firewall** (connections, connection tracker events, errors, etc) -- **Linux anti-DDoS protection** (SYNPROXY metrics) +- **Linux DDoS protection** (SYNPROXY metrics) - **Processes** (running, blocked, forks, active, etc) @@ -77,6 +79,8 @@ This is what it currently monitors (most with zero configuration): ![qos-tc-classes](https://cloud.githubusercontent.com/assets/2662304/14093004/68966020-f553-11e5-98fe-ffee2086fafd.gif) +- **Linux Control Groups** (containers), systemd, lxc, docker, etc + - **Applications**, by grouping the process tree (CPU, memory, disk reads, disk writes, swap, threads, pipes, sockets, etc) ![apps](https://cloud.githubusercontent.com/assets/2662304/14093565/67c4002c-f557-11e5-86bd-0154f5135def.gif) @@ -99,6 +103,10 @@ This is what it currently monitors (most with zero configuration): - **NUT UPSes** (load, charge, battery voltage, temperature, utility metrics, output metrics) +- **Tomcat** (accesses, threads, free memory, volume) + +- **PHP-FPM** (multiple instances, each reporting connections, requests, performance) + - **SNMP devices** can be monitored too (although you will need to configure these) And you can extend it, by writing plugins that collect data from any source, using any computer language. @@ -118,7 +126,7 @@ Use our **[automatic installer](https://github.com/firehol/netdata/wiki/Installa It should run on **any Linux** system. It has been tested on: - Gentoo -- ArchLinux +- Arch Linux - Ubuntu / Debian - CentOS - Fedora @@ -132,3 +140,4 @@ It should run on **any Linux** system. It has been tested on: ## Documentation Check the **[netdata wiki](https://github.com/firehol/netdata/wiki)**. 
+ diff --git a/charts.d/Makefile.am b/charts.d/Makefile.am index 29b41efa9..ad11e972a 100644 --- a/charts.d/Makefile.am +++ b/charts.d/Makefile.am @@ -17,9 +17,11 @@ dist_charts_SCRIPTS = \ nginx.chart.sh \ nut.chart.sh \ opensips.chart.sh \ + phpfpm.chart.sh \ postfix.chart.sh \ sensors.chart.sh \ squid.chart.sh \ + tomcat.chart.sh \ $(NULL) dist_charts_DATA = \ diff --git a/charts.d/Makefile.in b/charts.d/Makefile.in index c14403fae..3aff5a94c 100644 --- a/charts.d/Makefile.in +++ b/charts.d/Makefile.in @@ -186,6 +186,8 @@ OPTIONAL_MATH_CLFAGS = @OPTIONAL_MATH_CLFAGS@ OPTIONAL_MATH_LIBS = @OPTIONAL_MATH_LIBS@ OPTIONAL_NFACCT_CLFAGS = @OPTIONAL_NFACCT_CLFAGS@ OPTIONAL_NFACCT_LIBS = @OPTIONAL_NFACCT_LIBS@ +OPTIONAL_UUID_CLFAGS = @OPTIONAL_UUID_CLFAGS@ +OPTIONAL_UUID_LIBS = @OPTIONAL_UUID_LIBS@ OPTIONAL_ZLIB_CLFAGS = @OPTIONAL_ZLIB_CLFAGS@ OPTIONAL_ZLIB_LIBS = @OPTIONAL_ZLIB_LIBS@ PACKAGE = @PACKAGE@ @@ -207,6 +209,8 @@ PTHREAD_LIBS = @PTHREAD_LIBS@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ STRIP = @STRIP@ +UUID_CFLAGS = @UUID_CFLAGS@ +UUID_LIBS = @UUID_LIBS@ VERSION = @VERSION@ ZLIB_CFLAGS = @ZLIB_CFLAGS@ ZLIB_LIBS = @ZLIB_LIBS@ @@ -267,6 +271,7 @@ target_alias = @target_alias@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ +varlibdir = @varlibdir@ webdir = @webdir@ # @@ -287,9 +292,11 @@ dist_charts_SCRIPTS = \ nginx.chart.sh \ nut.chart.sh \ opensips.chart.sh \ + phpfpm.chart.sh \ postfix.chart.sh \ sensors.chart.sh \ squid.chart.sh \ + tomcat.chart.sh \ $(NULL) dist_charts_DATA = \ diff --git a/charts.d/ap.chart.sh b/charts.d/ap.chart.sh index 10a65688c..aed51c1b6 100755 --- a/charts.d/ap.chart.sh +++ b/charts.d/ap.chart.sh @@ -54,25 +54,25 @@ ap_create() { # create the chart with 3 dimensions cat <<EOF -CHART ap_clients.${dev} '' "Connected clients to ${ssid} on ${dev}" "clients" ${dev} ap.clients line $[ap_priority + 1] $ap_update_every +CHART ap_clients.${dev} '' "Connected clients to ${ssid} on ${dev}" "clients" ${dev} ap.clients line $((ap_priority + 1)) $ap_update_every DIMENSION clients '' absolute 1 1 -CHART ap_bandwidth.${dev} '' "Bandwidth for ${ssid} on ${dev}" "kilobits/s" ${dev} ap.net area $[ap_priority + 2] $ap_update_every +CHART ap_bandwidth.${dev} '' "Bandwidth for ${ssid} on ${dev}" "kilobits/s" ${dev} ap.net area $((ap_priority + 2)) $ap_update_every DIMENSION received '' incremental 8 1024 DIMENSION sent '' incremental -8 1024 -CHART ap_packets.${dev} '' "Packets for ${ssid} on ${dev}" "packets/s" ${dev} ap.packets line $[ap_priority + 3] $ap_update_every +CHART ap_packets.${dev} '' "Packets for ${ssid} on ${dev}" "packets/s" ${dev} ap.packets line $((ap_priority + 3)) $ap_update_every DIMENSION received '' incremental 1 1 DIMENSION sent '' incremental -1 1 -CHART ap_issues.${dev} '' "Transmit Issues for ${ssid} on ${dev}" "issues/s" ${dev} ap.issues line $[ap_priority + 4] $ap_update_every +CHART ap_issues.${dev} '' "Transmit Issues for ${ssid} on ${dev}" "issues/s" ${dev} ap.issues line $((ap_priority + 4)) $ap_update_every DIMENSION retries 'tx retries' incremental 1 1 DIMENSION failures 'tx failures' incremental -1 1 -CHART ap_signal.${dev} '' "Average Signal for ${ssid} on ${dev}" "dBm" ${dev} ap.signal line $[ap_priority + 5] $ap_update_every +CHART ap_signal.${dev} '' "Average Signal for ${ssid} on ${dev}" "dBm" ${dev} ap.signal line $((ap_priority + 5)) $ap_update_every DIMENSION signal 'average signal' absolute 1 1 -CHART ap_bitrate.${dev} '' "Bitrate for ${ssid} on ${dev}" "Mbps" ${dev} ap.bitrate line 
$[ap_priority + 6] $ap_update_every +CHART ap_bitrate.${dev} '' "Bitrate for ${ssid} on ${dev}" "Mbps" ${dev} ap.bitrate line $((ap_priority + 6)) $ap_update_every DIMENSION receive '' absolute 1 1000 DIMENSION transmit '' absolute -1 1000 DIMENSION expected 'expected throughput' absolute 1 1000 diff --git a/charts.d/apache.chart.sh b/charts.d/apache.chart.sh index 9b6d53b53..dbf14a432 100755 --- a/charts.d/apache.chart.sh +++ b/charts.d/apache.chart.sh @@ -47,21 +47,21 @@ apache_detect() { for x in "${@}" do case "${x}" in - 'Total Accesses') apache_key_accesses=$[i + 1] ;; - 'Total kBytes') apache_key_kbytes=$[i + 1] ;; - 'ReqPerSec') apache_key_reqpersec=$[i + 1] ;; - 'BytesPerSec') apache_key_bytespersec=$[i + 1] ;; - 'BytesPerReq') apache_key_bytesperreq=$[i + 1] ;; - 'BusyWorkers') apache_key_busyworkers=$[i + 1] ;; - 'IdleWorkers') apache_key_idleworkers=$[i + 1];; - 'ConnsTotal') apache_key_connstotal=$[i + 1] ;; - 'ConnsAsyncWriting') apache_key_connsasyncwriting=$[i + 1] ;; - 'ConnsAsyncKeepAlive') apache_key_connsasynckeepalive=$[i + 1] ;; - 'ConnsAsyncClosing') apache_key_connsasyncclosing=$[i + 1] ;; - 'Scoreboard') apache_key_scoreboard=$[i] ;; + 'Total Accesses') apache_key_accesses=$((i + 1)) ;; + 'Total kBytes') apache_key_kbytes=$((i + 1)) ;; + 'ReqPerSec') apache_key_reqpersec=$((i + 1)) ;; + 'BytesPerSec') apache_key_bytespersec=$((i + 1)) ;; + 'BytesPerReq') apache_key_bytesperreq=$((i + 1)) ;; + 'BusyWorkers') apache_key_busyworkers=$((i + 1)) ;; + 'IdleWorkers') apache_key_idleworkers=$((i + 1));; + 'ConnsTotal') apache_key_connstotal=$((i + 1)) ;; + 'ConnsAsyncWriting') apache_key_connsasyncwriting=$((i + 1)) ;; + 'ConnsAsyncKeepAlive') apache_key_connsasynckeepalive=$((i + 1)) ;; + 'ConnsAsyncClosing') apache_key_connsasyncclosing=$((i + 1)) ;; + 'Scoreboard') apache_key_scoreboard=$((i)) ;; esac - i=$[i + 1] + i=$((i + 1)) done # we will not check of the Conns* @@ -94,7 +94,7 @@ apache_detect() { apache_get() { local oIFS="${IFS}" ret - IFS=$':\n' apache_response=($(curl -s "${apache_url}")) + IFS=$':\n' apache_response=($(curl -Ss "${apache_url}")) ret=$? IFS="${oIFS}" @@ -167,27 +167,27 @@ apache_check() { # _create is called once, to create the charts apache_create() { cat <<EOF -CHART apache.bytesperreq '' "apache Lifetime Avg. Response Size" "bytes/request" statistics apache.bytesperreq area $[apache_priority + 8] $apache_update_every +CHART apache.bytesperreq '' "apache Lifetime Avg. Response Size" "bytes/request" statistics apache.bytesperreq area $((apache_priority + 8)) $apache_update_every DIMENSION size '' absolute 1 ${apache_decimal_detail} -CHART apache.workers '' "apache Workers" "workers" workers apache.workers stacked $[apache_priority + 5] $apache_update_every +CHART apache.workers '' "apache Workers" "workers" workers apache.workers stacked $((apache_priority + 5)) $apache_update_every DIMENSION idle '' absolute 1 1 DIMENSION busy '' absolute 1 1 -CHART apache.reqpersec '' "apache Lifetime Avg. Requests/s" "requests/s" statistics apache.reqpersec line $[apache_priority + 6] $apache_update_every +CHART apache.reqpersec '' "apache Lifetime Avg. Requests/s" "requests/s" statistics apache.reqpersec line $((apache_priority + 6)) $apache_update_every DIMENSION requests '' absolute 1 ${apache_decimal_detail} -CHART apache.bytespersec '' "apache Lifetime Avg. 
Bandwidth/s" "kilobits/s" statistics apache.bytespersec area $[apache_priority + 7] $apache_update_every -DIMENSION sent '' absolute 8 $[apache_decimal_detail * 1000] -CHART apache.requests '' "apache Requests" "requests/s" requests apache.requests line $[apache_priority + 1] $apache_update_every +CHART apache.bytespersec '' "apache Lifetime Avg. Bandwidth/s" "kilobits/s" statistics apache.bytespersec area $((apache_priority + 7)) $apache_update_every +DIMENSION sent '' absolute 8 $((apache_decimal_detail * 1000)) +CHART apache.requests '' "apache Requests" "requests/s" requests apache.requests line $((apache_priority + 1)) $apache_update_every DIMENSION requests '' incremental 1 1 -CHART apache.net '' "apache Bandwidth" "kilobits/s" bandwidth apache.net area $[apache_priority + 3] $apache_update_every +CHART apache.net '' "apache Bandwidth" "kilobits/s" bandwidth apache.net area $((apache_priority + 3)) $apache_update_every DIMENSION sent '' incremental 8 1 EOF if [ ${apache_has_conns} -eq 1 ] then cat <<EOF2 -CHART apache.connections '' "apache Connections" "connections" connections apache.connections line $[apache_priority + 2] $apache_update_every +CHART apache.connections '' "apache Connections" "connections" connections apache.connections line $((apache_priority + 2)) $apache_update_every DIMENSION connections '' absolute 1 1 -CHART apache.conns_async '' "apache Async Connections" "connections" connections apache.conns_async stacked $[apache_priority + 4] $apache_update_every +CHART apache.conns_async '' "apache Async Connections" "connections" connections apache.conns_async stacked $((apache_priority + 4)) $apache_update_every DIMENSION keepalive '' absolute 1 1 DIMENSION closing '' absolute 1 1 DIMENSION writing '' absolute 1 1 @@ -212,23 +212,23 @@ apache_update() { # write the result of the work. 
cat <<VALUESEOF BEGIN apache.requests $1 -SET requests = $[apache_accesses] +SET requests = $((apache_accesses)) END BEGIN apache.net $1 -SET sent = $[apache_kbytes] +SET sent = $((apache_kbytes)) END BEGIN apache.reqpersec $1 -SET requests = $[apache_reqpersec] +SET requests = $((apache_reqpersec)) END BEGIN apache.bytespersec $1 -SET sent = $[apache_bytespersec] +SET sent = $((apache_bytespersec)) END BEGIN apache.bytesperreq $1 -SET size = $[apache_bytesperreq] +SET size = $((apache_bytesperreq)) END BEGIN apache.workers $1 -SET idle = $[apache_idleworkers] -SET busy = $[apache_busyworkers] +SET idle = $((apache_idleworkers)) +SET busy = $((apache_busyworkers)) END VALUESEOF @@ -236,12 +236,12 @@ VALUESEOF then cat <<VALUESEOF2 BEGIN apache.connections $1 -SET connections = $[apache_connstotal] +SET connections = $((apache_connstotal)) END BEGIN apache.conns_async $1 -SET keepalive = $[apache_connsasynckeepalive] -SET closing = $[apache_connsasyncwriting] -SET writing = $[apache_connsasyncwriting] +SET keepalive = $((apache_connsasynckeepalive)) +SET closing = $((apache_connsasyncwriting)) +SET writing = $((apache_connsasyncwriting)) END VALUESEOF2 fi diff --git a/charts.d/cpufreq.chart.sh b/charts.d/cpufreq.chart.sh index 6a968237d..008ffe1d7 100755 --- a/charts.d/cpufreq.chart.sh +++ b/charts.d/cpufreq.chart.sh @@ -36,7 +36,7 @@ cpufreq_create() { # - the highest speed we can achieve - [ $cpufreq_source_update -eq 1 ] && echo >$TMP_DIR/cpufreq.sh "cpufreq_update() {" - echo "CHART cpu.cpufreq '' 'CPU Clock' 'MHz' 'cpufreq' '' line $[cpufreq_priority + 1] $cpufreq_update_every" + echo "CHART cpu.cpufreq '' 'CPU Clock' 'MHz' 'cpufreq' '' line $((cpufreq_priority + 1)) $cpufreq_update_every" echo >>$TMP_DIR/cpufreq.sh "echo \"BEGIN cpu.cpufreq \$1\"" i=0 diff --git a/charts.d/example.chart.sh b/charts.d/example.chart.sh index 34a3bc1bf..ad2050462 100755 --- a/charts.d/example.chart.sh +++ b/charts.d/example.chart.sh @@ -22,11 +22,11 @@ example_check() { example_create() { # create the chart with 3 dimensions cat <<EOF -CHART example.random '' "Random Numbers Stacked Chart" "% of random numbers" random random stacked $[example_priority] $example_update_every +CHART example.random '' "Random Numbers Stacked Chart" "% of random numbers" random random stacked $((example_priority)) $example_update_every DIMENSION random1 '' percentage-of-absolute-row 1 1 DIMENSION random2 '' percentage-of-absolute-row 1 1 DIMENSION random3 '' percentage-of-absolute-row 1 1 -CHART example.random2 '' "A random number" "random number" random random area $[example_priority + 1] $example_update_every +CHART example.random2 '' "A random number" "random number" random random area $((example_priority + 1)) $example_update_every DIMENSION random '' absolute 1 1 EOF @@ -49,19 +49,19 @@ example_update() { value1=$RANDOM value2=$RANDOM value3=$RANDOM - value4=$[8192 + (RANDOM * 16383 / 32767) ] + value4=$((8192 + (RANDOM * 16383 / 32767) )) if [ $example_count -gt 0 ] then - example_count=$[example_count - 1] + example_count=$((example_count - 1)) - [ $example_last -gt 16383 ] && value4=$[example_last + (RANDOM * ( (32767 - example_last) / 2) / 32767)] - [ $example_last -le 16383 ] && value4=$[example_last - (RANDOM * (example_last / 2) / 32767)] + [ $example_last -gt 16383 ] && value4=$((example_last + (RANDOM * ( (32767 - example_last) / 2) / 32767))) + [ $example_last -le 16383 ] && value4=$((example_last - (RANDOM * (example_last / 2) / 32767))) else - example_count=$[1 + (RANDOM * 5 / 32767) ] + example_count=$((1 + 
(RANDOM * 5 / 32767) )) - [ $example_last -gt 16383 -a $value4 -gt 16383 ] && value4=$[value4 - 16383] - [ $example_last -le 16383 -a $value4 -lt 16383 ] && value4=$[value4 + 16383] + [ $example_last -gt 16383 -a $value4 -gt 16383 ] && value4=$((value4 - 16383)) + [ $example_last -le 16383 -a $value4 -lt 16383 ] && value4=$((value4 + 16383)) fi example_last=$value4 diff --git a/charts.d/load_average.chart.sh b/charts.d/load_average.chart.sh index 257ea7cad..4d86a8f4c 100755 --- a/charts.d/load_average.chart.sh +++ b/charts.d/load_average.chart.sh @@ -28,7 +28,7 @@ load_average_check() { load_average_create() { # create a chart with 3 dimensions cat <<EOF -CHART system.load '' "System Load Average" "load" load system.load line $[load_priority + 1] $load_average_update_every +CHART system.load '' "System Load Average" "load" load system.load line $((load_priority + 1)) $load_average_update_every DIMENSION load1 '1 min' absolute 1 100 DIMENSION load5 '5 mins' absolute 1 100 DIMENSION load15 '15 mins' absolute 1 100 diff --git a/charts.d/mysql.chart.sh b/charts.d/mysql.chart.sh index e037aed5d..56dce42d7 100755 --- a/charts.d/mysql.chart.sh +++ b/charts.d/mysql.chart.sh @@ -91,7 +91,40 @@ mysql_get() { mysql_Innodb_rows_inserted \ mysql_Innodb_rows_read \ mysql_Innodb_rows_updated \ - mysql_Innodb_rows_deleted + mysql_Innodb_rows_deleted \ + mysql_Innodb_buffer_pool_pages_data \ + mysql_Innodb_buffer_pool_pages_dirty \ + mysql_Innodb_buffer_pool_pages_flushed \ + mysql_Innodb_buffer_pool_pages_free \ + mysql_Innodb_buffer_pool_pages_misc \ + mysql_Innodb_buffer_pool_pages_total \ + mysql_Innodb_buffer_pool_bytes_data \ + mysql_Innodb_buffer_pool_bytes_dirty \ + mysql_Innodb_buffer_pool_read_ahead_rnd \ + mysql_Innodb_buffer_pool_read_ahead \ + mysql_Innodb_buffer_pool_read_ahead_evicted \ + mysql_Innodb_buffer_pool_read_requests \ + mysql_Innodb_buffer_pool_reads \ + mysql_Innodb_buffer_pool_wait_free \ + mysql_Innodb_buffer_pool_write_requests \ + mysql_Qcache_free_blocks \ + mysql_Qcache_free_memory \ + mysql_Qcache_hits \ + mysql_Qcache_inserts \ + mysql_Qcache_lowmem_prunes \ + mysql_Qcache_not_cached \ + mysql_Qcache_queries_in_cache \ + mysql_Qcache_total_blocks \ + mysql_Key_blocks_not_flushed \ + mysql_Key_blocks_unused \ + mysql_Key_blocks_used \ + mysql_Key_read_requests \ + mysql_Key_reads \ + mysql_Key_write_requests \ + mysql_Key_writes \ + mysql_Open_files \ + mysql_Opened_files + mysql_plugin_command_failure=0 @@ -116,14 +149,31 @@ mysql_check() { # - 0 to enable the chart # - 1 to disable the chart - local x m mysql_cmd + local x m mysql_cmd tryroot=0 unconfigured=0 + + if [ "${1}" = "tryroot" ] + then + tryroot=1 + shift + fi [ -z "${mysql_cmd}" ] && mysql_cmd="$(which mysql)" if [ ${#mysql_opts[@]} -eq 0 ] then + unconfigured=1 + mysql_cmds[local]="$mysql_cmd" - mysql_opts[local]= + + if [ $tryroot -eq 1 ] + then + # the user has not configured us for mysql access + # if the root user is passwordless in mysql, we can + # attempt to connect to mysql as root + mysql_opts[local]="-u root" + else + mysql_opts[local]= + fi fi # check once if the url works @@ -150,30 +200,36 @@ mysql_check() { if [ ${#mysql_opts[@]} -eq 0 ] then - echo >&2 "$PROGRAM_NAME: mysql: no mysql servers found. Please set mysql_opts[name]='options' to whatever needed to get connected to the mysql server, in $confd/mysql.conf" - return 1 + if [ ${unconfigured} -eq 1 && ${tryroot} -eq 0 ] + then + mysql_check tryroot "${@}" + return $? + else + echo >&2 "$PROGRAM_NAME: mysql: no mysql servers found. 
Please set mysql_opts[name]='options' to whatever needed to get connected to the mysql server, in $confd/mysql.conf" + return 1 + fi fi return 0 } mysql_create() { - local m + local x # create the charts - for m in "${mysql_ids[@]}" + for x in "${mysql_ids[@]}" do cat <<EOF -CHART mysql_$m.net '' "mysql Bandwidth" "kilobits/s" bandwidth mysql.net area $[mysql_priority + 1] $mysql_update_every +CHART mysql_$x.net '' "mysql Bandwidth" "kilobits/s" bandwidth mysql.net area $((mysql_priority + 1)) $mysql_update_every DIMENSION Bytes_received in incremental 8 1024 DIMENSION Bytes_sent out incremental -8 1024 -CHART mysql_$m.queries '' "mysql Queries" "queries/s" queries mysql.queries line $[mysql_priority + 2] $mysql_update_every +CHART mysql_$x.queries '' "mysql Queries" "queries/s" queries mysql.queries line $((mysql_priority + 2)) $mysql_update_every DIMENSION Queries queries incremental 1 1 DIMENSION Questions questions incremental 1 1 DIMENSION Slow_queries slow_queries incremental -1 1 -CHART mysql_$m.handlers '' "mysql Handlers" "handlers/s" handlers mysql.handlers line $[mysql_priority + 3] $mysql_update_every +CHART mysql_$x.handlers '' "mysql Handlers" "handlers/s" handlers mysql.handlers line $((mysql_priority + 3)) $mysql_update_every DIMENSION Handler_commit commit incremental 1 1 DIMENSION Handler_delete delete incremental 1 1 DIMENSION Handler_prepare prepare incremental 1 1 @@ -189,86 +245,145 @@ DIMENSION Handler_savepoint_rollback savepoint_rollback incremental 1 1 DIMENSION Handler_update update incremental 1 1 DIMENSION Handler_write write incremental 1 1 -CHART mysql_$m.table_locks '' "mysql Tables Locks" "locks/s" locks mysql.table_locks line $[mysql_priority + 4] $mysql_update_every +CHART mysql_$x.table_locks '' "mysql Tables Locks" "locks/s" locks mysql.table_locks line $((mysql_priority + 4)) $mysql_update_every DIMENSION Table_locks_immediate immediate incremental 1 1 DIMENSION Table_locks_waited waited incremental -1 1 -CHART mysql_$m.join_issues '' "mysql Select Join Issues" "joins/s" issues mysql.join_issues line $[mysql_priority + 5] $mysql_update_every +CHART mysql_$x.join_issues '' "mysql Select Join Issues" "joins/s" issues mysql.join_issues line $((mysql_priority + 5)) $mysql_update_every DIMENSION Select_full_join full_join incremental 1 1 DIMENSION Select_full_range_join full_range_join incremental 1 1 DIMENSION Select_range range incremental 1 1 DIMENSION Select_range_check range_check incremental 1 1 DIMENSION Select_scan scan incremental 1 1 -CHART mysql_$m.sort_issues '' "mysql Sort Issues" "issues/s" issues mysql.sort.issues line $[mysql_priority + 6] $mysql_update_every +CHART mysql_$x.sort_issues '' "mysql Sort Issues" "issues/s" issues mysql.sort.issues line $((mysql_priority + 6)) $mysql_update_every DIMENSION Sort_merge_passes merge_passes incremental 1 1 DIMENSION Sort_range range incremental 1 1 DIMENSION Sort_scan scan incremental 1 1 -CHART mysql_$m.tmp '' "mysql Tmp Operations" "counter" temporaries mysql.tmp line $[mysql_priority + 7] $mysql_update_every +CHART mysql_$x.tmp '' "mysql Tmp Operations" "counter" temporaries mysql.tmp line $((mysql_priority + 7)) $mysql_update_every DIMENSION Created_tmp_disk_tables disk_tables incremental 1 1 DIMENSION Created_tmp_files files incremental 1 1 DIMENSION Created_tmp_tables tables incremental 1 1 -CHART mysql_$m.connections '' "mysql Connections" "connections/s" connections mysql.connections line $[mysql_priority + 8] $mysql_update_every +CHART mysql_$x.connections '' "mysql Connections" 
"connections/s" connections mysql.connections line $((mysql_priority + 8)) $mysql_update_every DIMENSION Connections all incremental 1 1 DIMENSION Aborted_connects aborded incremental 1 1 -CHART mysql_$m.binlog_cache '' "mysql Binlog Cache" "transactions/s" binlog mysql.binlog_cache line $[mysql_priority + 9] $mysql_update_every +CHART mysql_$x.binlog_cache '' "mysql Binlog Cache" "transactions/s" binlog mysql.binlog_cache line $((mysql_priority + 9)) $mysql_update_every DIMENSION Binlog_cache_disk_use disk incremental 1 1 DIMENSION Binlog_cache_use all incremental 1 1 -CHART mysql_$m.threads '' "mysql Threads" "threads" threads mysql.threads line $[mysql_priority + 10] $mysql_update_every +CHART mysql_$x.threads '' "mysql Threads" "threads" threads mysql.threads line $((mysql_priority + 10)) $mysql_update_every DIMENSION Threads_connected connected absolute 1 1 DIMENSION Threads_created created incremental 1 1 DIMENSION Threads_cached cached absolute -1 1 DIMENSION Threads_running running absolute 1 1 -CHART mysql_$m.thread_cache_misses '' "mysql Threads Cache Misses" "misses" threads mysql.thread_cache_misses area $[mysql_priority + 11] $mysql_update_every +CHART mysql_$x.thread_cache_misses '' "mysql Threads Cache Misses" "misses" threads mysql.thread_cache_misses area $((mysql_priority + 11)) $mysql_update_every DIMENSION misses misses absolute 1 100 -CHART mysql_$m.innodb_io '' "mysql InnoDB I/O Bandwidth" "kilobytes/s" innodb mysql.innodb_io area $[mysql_priority + 12] $mysql_update_every +CHART mysql_$x.innodb_io '' "mysql InnoDB I/O Bandwidth" "kilobytes/s" innodb mysql.innodb_io area $((mysql_priority + 12)) $mysql_update_every DIMENSION Innodb_data_read read incremental 1 1024 DIMENSION Innodb_data_written write incremental -1 1024 -CHART mysql_$m.innodb_io_ops '' "mysql InnoDB I/O Operations" "operations/s" innodb mysql.innodb_io_ops line $[mysql_priority + 13] $mysql_update_every +CHART mysql_$x.innodb_io_ops '' "mysql InnoDB I/O Operations" "operations/s" innodb mysql.innodb_io_ops line $((mysql_priority + 13)) $mysql_update_every DIMENSION Innodb_data_reads reads incremental 1 1 DIMENSION Innodb_data_writes writes incremental -1 1 DIMENSION Innodb_data_fsyncs fsyncs incremental 1 1 -CHART mysql_$m.innodb_io_pending_ops '' "mysql InnoDB Pending I/O Operations" "operations" innodb mysql.innodb_io_pending_ops line $[mysql_priority + 14] $mysql_update_every +CHART mysql_$x.innodb_io_pending_ops '' "mysql InnoDB Pending I/O Operations" "operations" innodb mysql.innodb_io_pending_ops line $((mysql_priority + 14)) $mysql_update_every DIMENSION Innodb_data_pending_reads reads absolute 1 1 DIMENSION Innodb_data_pending_writes writes absolute -1 1 DIMENSION Innodb_data_pending_fsyncs fsyncs absolute 1 1 -CHART mysql_$m.innodb_log '' "mysql InnoDB Log Operations" "operations/s" innodb mysql.innodb_log line $[mysql_priority + 15] $mysql_update_every +CHART mysql_$x.innodb_log '' "mysql InnoDB Log Operations" "operations/s" innodb mysql.innodb_log line $((mysql_priority + 15)) $mysql_update_every DIMENSION Innodb_log_waits waits incremental 1 1 DIMENSION Innodb_log_write_requests write_requests incremental -1 1 DIMENSION Innodb_log_writes writes incremental -1 1 -CHART mysql_$m.innodb_os_log '' "mysql InnoDB OS Log Operations" "operations" innodb mysql.innodb_os_log line $[mysql_priority + 16] $mysql_update_every +CHART mysql_$x.innodb_os_log '' "mysql InnoDB OS Log Operations" "operations" innodb mysql.innodb_os_log line $((mysql_priority + 16)) $mysql_update_every DIMENSION 
Innodb_os_log_fsyncs fsyncs incremental 1 1 DIMENSION Innodb_os_log_pending_fsyncs pending_fsyncs absolute 1 1 DIMENSION Innodb_os_log_pending_writes pending_writes absolute -1 1 -CHART mysql_$m.innodb_os_log_io '' "mysql InnoDB OS Log Bandwidth" "kilobytes/s" innodb mysql.innodb_os_log_io area $[mysql_priority + 17] $mysql_update_every +CHART mysql_$x.innodb_os_log_io '' "mysql InnoDB OS Log Bandwidth" "kilobytes/s" innodb mysql.innodb_os_log_io area $((mysql_priority + 17)) $mysql_update_every DIMENSION Innodb_os_log_written write incremental -1 1024 -CHART mysql_$m.innodb_cur_row_lock '' "mysql InnoDB Current Row Locks" "operations" innodb mysql.innodb_cur_row_lock area $[mysql_priority + 18] $mysql_update_every +CHART mysql_$x.innodb_cur_row_lock '' "mysql InnoDB Current Row Locks" "operations" innodb mysql.innodb_cur_row_lock area $((mysql_priority + 18)) $mysql_update_every DIMENSION Innodb_row_lock_current_waits current_waits absolute 1 1 -CHART mysql_$m.innodb_rows '' "mysql InnoDB Row Operations" "operations/s" innodb mysql.innodb_rows area $[mysql_priority + 19] $mysql_update_every +CHART mysql_$x.innodb_rows '' "mysql InnoDB Row Operations" "operations/s" innodb mysql.innodb_rows area $((mysql_priority + 19)) $mysql_update_every DIMENSION Innodb_rows_read read incremental 1 1 DIMENSION Innodb_rows_deleted deleted incremental -1 1 DIMENSION Innodb_rows_inserted inserted incremental 1 1 DIMENSION Innodb_rows_updated updated incremental -1 1 +CHART mysql_$x.innodb_buffer_pool_pages '' "mysql InnoDB Buffer Pool Pages" "pages" innodb mysql.innodb_buffer_pool_pages line $((mysql_priority + 20)) $mysql_update_every +DIMENSION Innodb_buffer_pool_pages_data data absolute 1 1 +DIMENSION Innodb_buffer_pool_pages_dirty dirty absolute -1 1 +DIMENSION Innodb_buffer_pool_pages_free free absolute 1 1 +DIMENSION Innodb_buffer_pool_pages_flushed flushed incremental -1 1 +DIMENSION Innodb_buffer_pool_pages_misc misc absolute -1 1 +DIMENSION Innodb_buffer_pool_pages_total total absolute 1 1 + +CHART mysql_$x.innodb_buffer_pool_bytes '' "mysql InnoDB Buffer Pool Bytes" "MB" innodb mysql.innodb_buffer_pool_bytes area $((mysql_priority + 21)) $mysql_update_every +DIMENSION Innodb_buffer_pool_bytes_data data absolute 1 $((1024 * 1024)) +DIMENSION Innodb_buffer_pool_bytes_dirty dirty absolute -1 $((1024 * 1024)) + +CHART mysql_$x.innodb_buffer_pool_read_ahead '' "mysql InnoDB Buffer Pool Read Ahead" "operations/s" innodb mysql.innodb_buffer_pool_read_ahead area $((mysql_priority + 22)) $mysql_update_every +DIMENSION Innodb_buffer_pool_read_ahead all incremental 1 1 +DIMENSION Innodb_buffer_pool_read_ahead_evicted evicted incremental -1 1 +DIMENSION Innodb_buffer_pool_read_ahead_rnd random incremental 1 1 + +CHART mysql_$x.innodb_buffer_pool_reqs '' "mysql InnoDB Buffer Pool Requests" "requests/s" innodb mysql.innodb_buffer_pool_reqs area $((mysql_priority + 23)) $mysql_update_every +DIMENSION Innodb_buffer_pool_read_requests reads incremental 1 1 +DIMENSION Innodb_buffer_pool_write_requests writes incremental -1 1 + +CHART mysql_$x.innodb_buffer_pool_ops '' "mysql InnoDB Buffer Pool Operations" "operations/s" innodb mysql.innodb_buffer_pool_ops area $((mysql_priority + 24)) $mysql_update_every +DIMENSION Innodb_buffer_pool_reads 'disk reads' incremental 1 1 +DIMENSION Innodb_buffer_pool_wait_free 'wait free' incremental -1 1 + +CHART mysql_$x.qcache_ops '' "mysql QCache Operations" "queries/s" qcache mysql.qcache_ops line $((mysql_priority + 25)) $mysql_update_every +DIMENSION Qcache_hits hits 
incremental 1 1 +DIMENSION Qcache_lowmem_prunes 'lowmem prunes' incremental -1 1 +DIMENSION Qcache_inserts inserts incremental 1 1 +DIMENSION Qcache_not_cached 'not cached' incremental -1 1 + +CHART mysql_$x.qcache '' "mysql QCache Queries in Cache" "queries" qcache mysql.qcache line $((mysql_priority + 26)) $mysql_update_every +DIMENSION Qcache_queries_in_cache queries absolute 1 1 + +CHART mysql_$x.qcache_freemem '' "mysql QCache Free Memory" "MB" qcache mysql.qcache_freemem area $((mysql_priority + 27)) $mysql_update_every +DIMENSION Qcache_free_memory free absolute 1 $((1024 * 1024)) + +CHART mysql_$x.qcache_memblocks '' "mysql QCache Memory Blocks" "blocks" qcache mysql.qcache_memblocks line $((mysql_priority + 28)) $mysql_update_every +DIMENSION Qcache_free_blocks free absolute 1 1 +DIMENSION Qcache_total_blocks total absolute 1 1 + +CHART mysql_$x.key_blocks '' "mysql MyISAM Key Cache Blocks" "blocks" myisam mysql.key_blocks line $((mysql_priority + 29)) $mysql_update_every +DIMENSION Key_blocks_unused unused absolute 1 1 +DIMENSION Key_blocks_used used absolute -1 1 +DIMENSION Key_blocks_not_flushed 'not flushed' absolute 1 1 + +CHART mysql_$x.key_requests '' "mysql MyISAM Key Cache Requests" "requests/s" myisam mysql.key_requests area $((mysql_priority + 30)) $mysql_update_every +DIMENSION Key_read_requests reads incremental 1 1 +DIMENSION Key_write_requests writes incremental -1 1 + +CHART mysql_$x.key_disk_ops '' "mysql MyISAM Key Cache Disk Operations" "operations/s" myisam mysql.key_disk_ops area $((mysql_priority + 31)) $mysql_update_every +DIMENSION Key_reads reads incremental 1 1 +DIMENSION Key_writes writes incremental -1 1 + +CHART mysql_$x.files '' "mysql Open Files" "files" files mysql.files line $((mysql_priority + 32)) $mysql_update_every +DIMENSION Open_files files absolute 1 1 + +CHART mysql_$x.files_rate '' "mysql Opened Files Rate" "files/s" files mysql.files_rate line $((mysql_priority + 33)) $mysql_update_every +DIMENSION Opened_files files incremental 1 1 EOF if [ ! -z "$mysql_Binlog_stmt_cache_disk_use" ] then cat <<EOF -CHART mysql_$m.binlog_stmt_cache '' "mysql Binlog Statement Cache" "statements/s" binlog mysql.binlog_stmt_cache line $[mysql_priority + 20] $mysql_update_every +CHART mysql_$x.binlog_stmt_cache '' "mysql Binlog Statement Cache" "statements/s" binlog mysql.binlog_stmt_cache line $((mysql_priority + 50)) $mysql_update_every DIMENSION Binlog_stmt_cache_disk_use disk incremental 1 1 DIMENSION Binlog_stmt_cache_use all incremental 1 1 EOF @@ -277,7 +392,7 @@ EOF if [ ! 
-z "$mysql_Connection_errors_accept" ] then cat <<EOF -CHART mysql_$m.connection_errors '' "mysql Connection Errors" "connections/s" connections mysql.connection_errors line $[mysql_priority + 21] $mysql_update_every +CHART mysql_$x.connection_errors '' "mysql Connection Errors" "connections/s" connections mysql.connection_errors line $((mysql_priority + 51)) $mysql_update_every DIMENSION Connection_errors_accept accept incremental 1 1 DIMENSION Connection_errors_internal internal incremental 1 1 DIMENSION Connection_errors_max_connections max incremental 1 1 @@ -369,70 +484,130 @@ SET Sort_merge_passes = $mysql_Sort_merge_passes SET Sort_range = $mysql_Sort_range SET Sort_scan = $mysql_Sort_scan END -BEGIN mysql_$m.tmp $1 +BEGIN mysql_$x.tmp $1 SET Created_tmp_disk_tables = $mysql_Created_tmp_disk_tables SET Created_tmp_files = $mysql_Created_tmp_files SET Created_tmp_tables = $mysql_Created_tmp_tables END -BEGIN mysql_$m.connections $1 +BEGIN mysql_$x.connections $1 SET Connections = $mysql_Connections SET Aborted_connects = $mysql_Aborted_connects END -BEGIN mysql_$m.binlog_cache $1 +BEGIN mysql_$x.binlog_cache $1 SET Binlog_cache_disk_use = $mysql_Binlog_cache_disk_use SET Binlog_cache_use = $mysql_Binlog_cache_use END -BEGIN mysql_$m.threads $1 +BEGIN mysql_$x.threads $1 SET Threads_connected = $mysql_Threads_connected SET Threads_created = $mysql_Threads_created SET Threads_cached = $mysql_Threads_cached SET Threads_running = $mysql_Threads_running END -BEGIN mysql_$m.thread_cache_misses $1 +BEGIN mysql_$x.thread_cache_misses $1 SET misses = $mysql_Thread_cache_misses END -BEGIN mysql_$m.innodb_io $1 +BEGIN mysql_$x.innodb_io $1 SET Innodb_data_read = $mysql_Innodb_data_read SET Innodb_data_written = $mysql_Innodb_data_written END -BEGIN mysql_$m.innodb_io_ops $1 +BEGIN mysql_$x.innodb_io_ops $1 SET Innodb_data_reads = $mysql_Innodb_data_reads SET Innodb_data_writes = $mysql_Innodb_data_writes SET Innodb_data_fsyncs = $mysql_Innodb_data_fsyncs END -BEGIN mysql_$m.innodb_io_pending_ops $1 +BEGIN mysql_$x.innodb_io_pending_ops $1 SET Innodb_data_pending_reads = $mysql_Innodb_data_pending_reads SET Innodb_data_pending_writes = $mysql_Innodb_data_pending_writes SET Innodb_data_pending_fsyncs = $mysql_Innodb_data_pending_fsyncs END -BEGIN mysql_$m.innodb_log $1 +BEGIN mysql_$x.innodb_log $1 SET Innodb_log_waits = $mysql_Innodb_log_waits SET Innodb_log_write_requests = $mysql_Innodb_log_write_requests SET Innodb_log_writes = $mysql_Innodb_log_writes END -BEGIN mysql_$m.innodb_os_log $1 +BEGIN mysql_$x.innodb_os_log $1 SET Innodb_os_log_fsyncs = $mysql_Innodb_os_log_fsyncs SET Innodb_os_log_pending_fsyncs = $mysql_Innodb_os_log_pending_fsyncs SET Innodb_os_log_pending_writes = $mysql_Innodb_os_log_pending_writes END -BEGIN mysql_$m.innodb_os_log_io $1 +BEGIN mysql_$x.innodb_os_log_io $1 SET Innodb_os_log_written = $mysql_Innodb_os_log_written END -BEGIN mysql_$m.innodb_cur_row_lock $1 +BEGIN mysql_$x.innodb_cur_row_lock $1 SET Innodb_row_lock_current_waits = $mysql_Innodb_row_lock_current_waits END -BEGIN mysql_$m.innodb_rows $1 +BEGIN mysql_$x.innodb_rows $1 SET Innodb_rows_inserted = $mysql_Innodb_rows_inserted SET Innodb_rows_read = $mysql_Innodb_rows_read SET Innodb_rows_updated = $mysql_Innodb_rows_updated SET Innodb_rows_deleted = $mysql_Innodb_rows_deleted END +BEGIN mysql_$x.innodb_buffer_pool_pages $1 +SET Innodb_buffer_pool_pages_data = $mysql_Innodb_buffer_pool_pages_data +SET Innodb_buffer_pool_pages_dirty = $mysql_Innodb_buffer_pool_pages_dirty +SET 
Innodb_buffer_pool_pages_free = $mysql_Innodb_buffer_pool_pages_free +SET Innodb_buffer_pool_pages_flushed = $mysql_Innodb_buffer_pool_pages_flushed +SET Innodb_buffer_pool_pages_misc = $mysql_Innodb_buffer_pool_pages_misc +SET Innodb_buffer_pool_pages_total = $mysql_Innodb_buffer_pool_pages_total +END +BEGIN mysql_$x.innodb_buffer_pool_bytes $1 +SET Innodb_buffer_pool_bytes_data = $mysql_Innodb_buffer_pool_bytes_data +SET Innodb_buffer_pool_bytes_dirty = $mysql_Innodb_buffer_pool_bytes_dirty +END +BEGIN mysql_$x.innodb_buffer_pool_read_ahead $1 +SET Innodb_buffer_pool_read_ahead = $mysql_Innodb_buffer_pool_read_ahead +SET Innodb_buffer_pool_read_ahead_evicted = $mysql_Innodb_buffer_pool_read_ahead_evicted +SET Innodb_buffer_pool_read_ahead_rnd = $mysql_Innodb_buffer_pool_read_ahead_rnd +END +BEGIN mysql_$x.innodb_buffer_pool_reqs $1 +SET Innodb_buffer_pool_read_requests = $mysql_Innodb_buffer_pool_read_requests +SET Innodb_buffer_pool_write_requests = $mysql_Innodb_buffer_pool_write_requests +END +BEGIN mysql_$x.innodb_buffer_pool_ops $1 +SET Innodb_buffer_pool_reads = $mysql_Innodb_buffer_pool_reads +SET Innodb_buffer_pool_wait_free = $mysql_Innodb_buffer_pool_wait_free +END +BEGIN mysql_$x.qcache_ops $1 +SET Qcache_hits hits = $mysql_Qcache_hits +SET Qcache_lowmem_prunes = $mysql_Qcache_lowmem_prunes +SET Qcache_inserts inserts = $mysql_Qcache_inserts inserts +SET Qcache_not_cached = $mysql_Qcache_not_cached +END +BEGIN mysql_$x.qcache $1 +SET Qcache_queries_in_cache = $mysql_Qcache_queries_in_cache +END +BEGIN mysql_$x.qcache_freemem $1 +SET Qcache_free_memory = $mysql_Qcache_free_memory +END +BEGIN mysql_$x.qcache_memblocks $1 +SET Qcache_free_blocks = $mysql_Qcache_free_blocks +SET Qcache_total_blocks = $mysql_Qcache_total_blocks +END +BEGIN mysql_$x.key_blocks $1 +SET Key_blocks_unused = $mysql_Key_blocks_unused +SET Key_blocks_used = $mysql_Key_blocks_used +SET Key_blocks_not_flushed = $mysql_Key_blocks_not_flushed +END +BEGIN mysql_$x.key_requests $1 +SET Key_read_requests = $mysql_Key_read_requests +SET Key_write_requests = $mysql_Key_write_requests +END +BEGIN mysql_$x.key_disk_ops $1 +SET Key_reads = $mysql_Key_reads +SET Key_writes = $mysql_Key_writes +END +BEGIN mysql_$x.files $1 +SET Open_files = $mysql_Open_files +END +BEGIN mysql_$x.files_rate $1 +SET Opened_files = $mysql_Opened_files +END VALUESEOF if [ ! -z "$mysql_Binlog_stmt_cache_disk_use" ] then cat <<VALUESEOF -BEGIN mysql_$m.binlog_stmt_cache $1 +BEGIN mysql_$x.binlog_stmt_cache $1 SET Binlog_stmt_cache_disk_use = $mysql_Binlog_stmt_cache_disk_use SET Binlog_stmt_cache_use = $mysql_Binlog_stmt_cache_use END @@ -442,7 +617,7 @@ VALUESEOF if [ ! -z "$mysql_Connection_errors_accept" ] then cat <<VALUESEOF -BEGIN mysql_$m.connection_errors $1 +BEGIN mysql_$x.connection_errors $1 SET Connection_errors_accept = $mysql_Connection_errors_accept SET Connection_errors_internal = $mysql_Connection_errors_internal SET Connection_errors_max_connections = $mysql_Connection_errors_max_connections diff --git a/charts.d/nginx.chart.sh b/charts.d/nginx.chart.sh index a6795415b..450aa94b3 100755 --- a/charts.d/nginx.chart.sh +++ b/charts.d/nginx.chart.sh @@ -19,7 +19,7 @@ nginx_reading=0 nginx_writing=0 nginx_waiting=0 nginx_get() { - nginx_response=($(curl -s "${nginx_url}")) + nginx_response=($(curl -Ss "${nginx_url}")) [ $? 
-ne 0 -o "${#nginx_response[@]}" -eq 0 ] && return 1 if [ "${nginx_response[0]}" != "Active" \ @@ -81,18 +81,18 @@ nginx_check() { # _create is called once, to create the charts nginx_create() { cat <<EOF -CHART nginx.connections '' "nginx Active Connections" "connections" nginx nginx.connections line $[nginx_priority + 1] $nginx_update_every +CHART nginx.connections '' "nginx Active Connections" "connections" nginx nginx.connections line $((nginx_priority + 1)) $nginx_update_every DIMENSION active '' absolute 1 1 -CHART nginx.requests '' "nginx Requests" "requests/s" nginx nginx.requests line $[nginx_priority + 2] $nginx_update_every +CHART nginx.requests '' "nginx Requests" "requests/s" nginx nginx.requests line $((nginx_priority + 2)) $nginx_update_every DIMENSION requests '' incremental 1 1 -CHART nginx.connections_status '' "nginx Active Connections by Status" "connections" nginx nginx.connections.status line $[nginx_priority + 3] $nginx_update_every +CHART nginx.connections_status '' "nginx Active Connections by Status" "connections" nginx nginx.connections.status line $((nginx_priority + 3)) $nginx_update_every DIMENSION reading '' absolute 1 1 DIMENSION writing '' absolute 1 1 DIMENSION waiting idle absolute 1 1 -CHART nginx.connect_rate '' "nginx Connections Rate" "connections/s" nginx nginx.connections.rate line $[nginx_priority + 4] $nginx_update_every +CHART nginx.connect_rate '' "nginx Connections Rate" "connections/s" nginx nginx.connections.rate line $((nginx_priority + 4)) $nginx_update_every DIMENSION accepts accepted incremental 1 1 DIMENSION handled '' incremental 1 1 EOF @@ -114,19 +114,19 @@ nginx_update() { # write the result of the work. cat <<VALUESEOF BEGIN nginx.connections $1 -SET active = $[nginx_active_connections] +SET active = $((nginx_active_connections)) END BEGIN nginx.requests $1 -SET requests = $[nginx_requests] +SET requests = $((nginx_requests)) END BEGIN nginx.connections_status $1 -SET reading = $[nginx_reading] -SET writing = $[nginx_writing] -SET waiting = $[nginx_waiting] +SET reading = $((nginx_reading)) +SET writing = $((nginx_writing)) +SET waiting = $((nginx_waiting)) END BEGIN nginx.connect_rate $1 -SET accepts = $[nginx_accepts] -SET handled = $[nginx_handled] +SET accepts = $((nginx_accepts)) +SET handled = $((nginx_handled)) END VALUESEOF diff --git a/charts.d/nut.chart.sh b/charts.d/nut.chart.sh index 343c6d9cd..a47208451 100755 --- a/charts.d/nut.chart.sh +++ b/charts.d/nut.chart.sh @@ -61,34 +61,34 @@ nut_create() { for x in "${nut_ids[@]}" do cat <<EOF -CHART nut_$x.charge '' "UPS Charge" "percentage" ups nut.charge area $[nut_priority + 1] $nut_update_every +CHART nut_$x.charge '' "UPS Charge" "percentage" ups nut.charge area $((nut_priority + 1)) $nut_update_every DIMENSION battery_charge charge absolute 1 100 -CHART nut_$x.battery_voltage '' "UPS Battery Voltage" "Volts" ups nut.battery.voltage line $[nut_priority + 2] $nut_update_every +CHART nut_$x.battery_voltage '' "UPS Battery Voltage" "Volts" ups nut.battery.voltage line $((nut_priority + 2)) $nut_update_every DIMENSION battery_voltage voltage absolute 1 100 DIMENSION battery_voltage_high high absolute 1 100 DIMENSION battery_voltage_low low absolute 1 100 DIMENSION battery_voltage_nominal nominal absolute 1 100 -CHART nut_$x.input_voltage '' "UPS Input Voltage" "Volts" input nut.input.voltage line $[nut_priority + 3] $nut_update_every +CHART nut_$x.input_voltage '' "UPS Input Voltage" "Volts" input nut.input.voltage line $((nut_priority + 3)) $nut_update_every DIMENSION 
input_voltage voltage absolute 1 100 DIMENSION input_voltage_fault fault absolute 1 100 DIMENSION input_voltage_nominal nominal absolute 1 100 -CHART nut_$x.input_current '' "UPS Input Current" "Ampere" input nut.input.current line $[nut_priority + 4] $nut_update_every +CHART nut_$x.input_current '' "UPS Input Current" "Ampere" input nut.input.current line $((nut_priority + 4)) $nut_update_every DIMENSION input_current_nominal nominal absolute 1 100 -CHART nut_$x.input_frequency '' "UPS Input Frequency" "Hz" input nut.input.frequency line $[nut_priority + 5] $nut_update_every +CHART nut_$x.input_frequency '' "UPS Input Frequency" "Hz" input nut.input.frequency line $((nut_priority + 5)) $nut_update_every DIMENSION input_frequency frequency absolute 1 100 DIMENSION input_frequency_nominal nominal absolute 1 100 -CHART nut_$x.output_voltage '' "UPS Output Voltage" "Volts" output nut.output.voltage line $[nut_priority + 6] $nut_update_every +CHART nut_$x.output_voltage '' "UPS Output Voltage" "Volts" output nut.output.voltage line $((nut_priority + 6)) $nut_update_every DIMENSION output_voltage voltage absolute 1 100 -CHART nut_$x.load '' "UPS Load" "percentage" ups nut.load area $[nut_priority] $nut_update_every +CHART nut_$x.load '' "UPS Load" "percentage" ups nut.load area $((nut_priority)) $nut_update_every DIMENSION load load absolute 1 100 -CHART nut_$x.temp '' "UPS Temperature" "temperature" ups nut.temperature line $[nut_priority + 7] $nut_update_every +CHART nut_$x.temp '' "UPS Temperature" "temperature" ups nut.temperature line $((nut_priority + 7)) $nut_update_every DIMENSION temp temp absolute 1 100 EOF done diff --git a/charts.d/opensips.chart.sh b/charts.d/opensips.chart.sh index 4b60c811d..c7066ec05 100755 --- a/charts.d/opensips.chart.sh +++ b/charts.d/opensips.chart.sh @@ -45,61 +45,61 @@ opensips_check() { opensips_create() { # create the charts cat <<EOF -CHART opensips.dialogs_active '' "OpenSIPS Active Dialogs" "dialogs" dialogs '' area $[opensips_priority + 1] $opensips_update_every +CHART opensips.dialogs_active '' "OpenSIPS Active Dialogs" "dialogs" dialogs '' area $((opensips_priority + 1)) $opensips_update_every DIMENSION dialog_active_dialogs active absolute 1 1 DIMENSION dialog_early_dialogs early absolute -1 1 -CHART opensips.users '' "OpenSIPS Users" "users" users '' line $[opensips_priority + 2] $opensips_update_every +CHART opensips.users '' "OpenSIPS Users" "users" users '' line $((opensips_priority + 2)) $opensips_update_every DIMENSION usrloc_registered_users registered absolute 1 1 DIMENSION usrloc_location_users location absolute 1 1 DIMENSION usrloc_location_contacts contacts absolute 1 1 DIMENSION usrloc_location_expires expires incremental -1 1 -CHART opensips.registrar '' "OpenSIPS Registrar" "registrations/s" registrar '' line $[opensips_priority + 3] $opensips_update_every +CHART opensips.registrar '' "OpenSIPS Registrar" "registrations/s" registrar '' line $((opensips_priority + 3)) $opensips_update_every DIMENSION registrar_accepted_regs accepted incremental 1 1 DIMENSION registrar_rejected_regs rejected incremental -1 1 -CHART opensips.transactions '' "OpenSIPS Transactions" "transactions/s" transactions '' line $[opensips_priority + 4] $opensips_update_every +CHART opensips.transactions '' "OpenSIPS Transactions" "transactions/s" transactions '' line $((opensips_priority + 4)) $opensips_update_every DIMENSION tm_UAS_transactions UAS incremental 1 1 DIMENSION tm_UAC_transactions UAC incremental -1 1 -CHART opensips.core_rcv '' "OpenSIPS Core 
Receives" "queries/s" core '' line $[opensips_priority + 5] $opensips_update_every +CHART opensips.core_rcv '' "OpenSIPS Core Receives" "queries/s" core '' line $((opensips_priority + 5)) $opensips_update_every DIMENSION core_rcv_requests requests incremental 1 1 DIMENSION core_rcv_replies replies incremental -1 1 -CHART opensips.core_fwd '' "OpenSIPS Core Forwards" "queries/s" core '' line $[opensips_priority + 6] $opensips_update_every +CHART opensips.core_fwd '' "OpenSIPS Core Forwards" "queries/s" core '' line $((opensips_priority + 6)) $opensips_update_every DIMENSION core_fwd_requests requests incremental 1 1 DIMENSION core_fwd_replies replies incremental -1 1 -CHART opensips.core_drop '' "OpenSIPS Core Drops" "queries/s" core '' line $[opensips_priority + 7] $opensips_update_every +CHART opensips.core_drop '' "OpenSIPS Core Drops" "queries/s" core '' line $((opensips_priority + 7)) $opensips_update_every DIMENSION core_drop_requests requests incremental 1 1 DIMENSION core_drop_replies replies incremental -1 1 -CHART opensips.core_err '' "OpenSIPS Core Errors" "queries/s" core '' line $[opensips_priority + 8] $opensips_update_every +CHART opensips.core_err '' "OpenSIPS Core Errors" "queries/s" core '' line $((opensips_priority + 8)) $opensips_update_every DIMENSION core_err_requests requests incremental 1 1 DIMENSION core_err_replies replies incremental -1 1 -CHART opensips.core_bad '' "OpenSIPS Core Bad" "queries/s" core '' line $[opensips_priority + 9] $opensips_update_every +CHART opensips.core_bad '' "OpenSIPS Core Bad" "queries/s" core '' line $((opensips_priority + 9)) $opensips_update_every DIMENSION core_bad_URIs_rcvd bad_URIs_rcvd incremental 1 1 DIMENSION core_unsupported_methods unsupported_methods incremental 1 1 DIMENSION core_bad_msg_hdr bad_msg_hdr incremental 1 1 -CHART opensips.tm_replies '' "OpenSIPS TM Replies" "replies/s" transactions '' line $[opensips_priority + 10] $opensips_update_every +CHART opensips.tm_replies '' "OpenSIPS TM Replies" "replies/s" transactions '' line $((opensips_priority + 10)) $opensips_update_every DIMENSION tm_received_replies received incremental 1 1 DIMENSION tm_relayed_replies relayed incremental 1 1 DIMENSION tm_local_replies local incremental 1 1 -CHART opensips.transactions_status '' "OpenSIPS Transactions Status" "transactions/s" transactions '' line $[opensips_priority + 11] $opensips_update_every +CHART opensips.transactions_status '' "OpenSIPS Transactions Status" "transactions/s" transactions '' line $((opensips_priority + 11)) $opensips_update_every DIMENSION tm_2xx_transactions 2xx incremental 1 1 DIMENSION tm_3xx_transactions 3xx incremental 1 1 DIMENSION tm_4xx_transactions 4xx incremental 1 1 DIMENSION tm_5xx_transactions 5xx incremental 1 1 DIMENSION tm_6xx_transactions 6xx incremental 1 1 -CHART opensips.transactions_inuse '' "OpenSIPS InUse Transactions" "transactions" transactions '' line $[opensips_priority + 12] $opensips_update_every +CHART opensips.transactions_inuse '' "OpenSIPS InUse Transactions" "transactions" transactions '' line $((opensips_priority + 12)) $opensips_update_every DIMENSION tm_inuse_transactions inuse absolute 1 1 -CHART opensips.sl_replies '' "OpenSIPS SL Replies" "replies/s" core '' line $[opensips_priority + 13] $opensips_update_every +CHART opensips.sl_replies '' "OpenSIPS SL Replies" "replies/s" core '' line $((opensips_priority + 13)) $opensips_update_every DIMENSION sl_1xx_replies 1xx incremental 1 1 DIMENSION sl_2xx_replies 2xx incremental 1 1 DIMENSION sl_3xx_replies 3xx incremental 1 
1 @@ -110,31 +110,31 @@ DIMENSION sl_sent_replies sent incremental 1 1 DIMENSION sl_sent_err_replies error incremental 1 1 DIMENSION sl_received_ACKs ACKed incremental 1 1 -CHART opensips.dialogs '' "OpenSIPS Dialogs" "dialogs/s" dialogs '' line $[opensips_priority + 14] $opensips_update_every +CHART opensips.dialogs '' "OpenSIPS Dialogs" "dialogs/s" dialogs '' line $((opensips_priority + 14)) $opensips_update_every DIMENSION dialog_processed_dialogs processed incremental 1 1 DIMENSION dialog_expired_dialogs expired incremental 1 1 DIMENSION dialog_failed_dialogs failed incremental -1 1 -CHART opensips.net_waiting '' "OpenSIPS Network Waiting" "kilobytes" net '' line $[opensips_priority + 15] $opensips_update_every +CHART opensips.net_waiting '' "OpenSIPS Network Waiting" "kilobytes" net '' line $((opensips_priority + 15)) $opensips_update_every DIMENSION net_waiting_udp UDP absolute 1 1024 DIMENSION net_waiting_tcp TCP absolute 1 1024 -CHART opensips.uri_checks '' "OpenSIPS URI Checks" "checks / sec" uri '' line $[opensips_priority + 16] $opensips_update_every +CHART opensips.uri_checks '' "OpenSIPS URI Checks" "checks / sec" uri '' line $((opensips_priority + 16)) $opensips_update_every DIMENSION uri_positive_checks positive incremental 1 1 DIMENSION uri_negative_checks negative incremental -1 1 -CHART opensips.traces '' "OpenSIPS Traces" "traces / sec" traces '' line $[opensips_priority + 17] $opensips_update_every +CHART opensips.traces '' "OpenSIPS Traces" "traces / sec" traces '' line $((opensips_priority + 17)) $opensips_update_every DIMENSION siptrace_traced_requests requests incremental 1 1 DIMENSION siptrace_traced_replies replies incremental -1 1 -CHART opensips.shmem '' "OpenSIPS Shared Memory" "kilobytes" mem '' line $[opensips_priority + 18] $opensips_update_every +CHART opensips.shmem '' "OpenSIPS Shared Memory" "kilobytes" mem '' line $((opensips_priority + 18)) $opensips_update_every DIMENSION shmem_total_size total absolute 1 1024 DIMENSION shmem_used_size used absolute 1 1024 DIMENSION shmem_real_used_size real_used absolute 1 1024 DIMENSION shmem_max_used_size max_used absolute 1 1024 DIMENSION shmem_free_size free absolute 1 1024 -CHART opensips.shmem_fragments '' "OpenSIPS Shared Memory Fragmentation" "fragments" mem '' line $[opensips_priority + 19] $opensips_update_every +CHART opensips.shmem_fragments '' "OpenSIPS Shared Memory Fragmentation" "fragments" mem '' line $((opensips_priority + 19)) $opensips_update_every DIMENSION shmem_fragments fragments absolute 1 1 EOF diff --git a/charts.d/phpfpm.chart.sh b/charts.d/phpfpm.chart.sh new file mode 100755 index 000000000..c0532fab1 --- /dev/null +++ b/charts.d/phpfpm.chart.sh @@ -0,0 +1,175 @@ +#!/bin/bash + +# if this chart is called X.chart.sh, then all functions and global variables +# must start with X_ + +# first, you need open php-fpm status in php-fpm.conf +# second, you need add status location in nginx.conf +# you can see, https://easyengine.io/tutorials/php/fpm-status-page/ + +declare -A phpfpm_urls=() + +# _update_every is a special variable - it holds the number of seconds +# between the calls of the _update() function +phpfpm_update_every= +phpfpm_priority=60000 + +declare -a phpfpm_response=() +phpfpm_pool="" +phpfpm_start_time="" +phpfpm_start_since=0 +phpfpm_accepted_conn=0 +phpfpm_listen_queue=0 +phpfpm_max_listen_queue=0 +phpfpm_listen_queue_len=0 +phpfpm_idle_processes=0 +phpfpm_active_processes=0 +phpfpm_total_processes=0 +phpfpm_max_active_processes=0 +phpfpm_max_children_reached=0 
+phpfpm_slow_requests=0 +phpfpm_get() { + url=$1 + phpfpm_response=($(curl -Ss "${url}")) + [ $? -ne 0 -o "${#phpfpm_response[@]}" -eq 0 ] && return 1 + + if [[ "${phpfpm_response[0]}" != "pool:" \ + || "${phpfpm_response[2]}" != "process" \ + || "${phpfpm_response[5]}" != "start" \ + || "${phpfpm_response[12]}" != "accepted" \ + || "${phpfpm_response[15]}" != "listen" \ + || "${phpfpm_response[16]}" != "queue:" \ + || "${phpfpm_response[26]}" != "idle" \ + || "${phpfpm_response[29]}" != "active" \ + || "${phpfpm_response[32]}" != "total" \ + || "${phpfpm_response[43]}" != "slow" \ + ]] + then + echo >&2 "phpfpm: invalid response from phpfpm status server: ${phpfpm_response[*]}" + return 1 + fi + + phpfpm_pool="${phpfpm_response[1]}" + phpfpm_start_time="${phpfpm_response[7]} ${phpfpm_response[8]}" + phpfpm_start_since="${phpfpm_response[11]}" + phpfpm_accepted_conn="${phpfpm_response[14]}" + phpfpm_listen_queue="${phpfpm_response[17]}" + phpfpm_max_listen_queue="${phpfpm_response[21]}" + phpfpm_listen_queue_len="${phpfpm_response[25]}" + phpfpm_idle_processes="${phpfpm_response[28]}" + phpfpm_active_processes="${phpfpm_response[31]}" + phpfpm_total_processes="${phpfpm_response[34]}" + phpfpm_max_active_processes="${phpfpm_response[38]}" + phpfpm_max_children_reached="${phpfpm_response[42]}" + phpfpm_slow_requests="${phpfpm_response[45]}" + + if [[ -z "${phpfpm_pool}" \ + || -z "${phpfpm_start_time}" \ + || -z "${phpfpm_start_since}" \ + || -z "${phpfpm_accepted_conn}" \ + || -z "${phpfpm_listen_queue}" \ + || -z "${phpfpm_max_listen_queue}" \ + || -z "${phpfpm_listen_queue_len}" \ + || -z "${phpfpm_idle_processes}" \ + || -z "${phpfpm_active_processes}" \ + || -z "${phpfpm_total_processes}" \ + || -z "${phpfpm_max_active_processes}" \ + || -z "${phpfpm_max_children_reached}" \ + || -z "${phpfpm_slow_requests}" \ + ]] + then + echo >&2 "phpfpm: empty values got from phpfpm status server: ${phpfpm_response[*]}" + return 1 + fi + + return 0 +} + +# _check is called once, to find out if this chart should be enabled or not +phpfpm_check() { + if [ ${#phpfpm_urls[@]} -eq 0 ]; then + phpfpm_urls[local]="http://localhost/status" + fi + + local m + for m in "${!phpfpm_urls[@]}" + do + phpfpm_get "${phpfpm_urls[$m]}" + if [ $? -ne 0 ]; then + echo >&2 "phpfpm: cannot find status on URL '${phpfpm_url[$m]}'. Please set phpfpm_urls[$m]='http://localhost/status' in $confd/phpfpm.conf" + unset phpfpm_urls[$m] + continue + fi + done + + if [ ${#phpfpm_urls[@]} -eq 0 ]; then + echo >&2 "phpfpm: no phpfpm servers found. 
Please set phpfpm_urls[name]='url' to whatever needed to get status to the phpfpm server, in $confd/phpfpm.conf" + return 1 + fi + + # this should return: + # - 0 to enable the chart + # - 1 to disable the chart + + return 0 +} + +# _create is called once, to create the charts +phpfpm_create() { + local m + for m in "${!phpfpm_urls[@]}" + do + cat <<EOF +CHART phpfpm_$m.connections '' "PHP-FPM Active Connections" "connections" phpfpm phpfpm.connections line $((phpfpm_priority + 1)) $phpfpm_update_every +DIMENSION active '' absolute 1 1 +DIMENSION maxActive 'max active' absolute 1 1 +DIMENSION idle '' absolute 1 1 + +CHART phpfpm_$m.requests '' "PHP-FPM Requests" "requests/s" phpfpm phpfpm.requests line $((phpfpm_priority + 2)) $phpfpm_update_every +DIMENSION requests '' incremental 1 1 + +CHART phpfpm_$m.performance '' "PHP-FPM Performance" "status" phpfpm phpfpm.performance line $((phpfpm_priority + 3)) $phpfpm_update_every +DIMENSION reached 'max children reached' absolute 1 1 +DIMENSION slow 'slow requests' absolute 1 1 +EOF + done + + return 0 +} + +# _update is called continiously, to collect the values +phpfpm_update() { + # the first argument to this function is the microseconds since last update + # pass this parameter to the BEGIN statement (see bellow). + + # do all the work to collect / calculate the values + # for each dimension + # remember: KEEP IT SIMPLE AND SHORT + + local m + for m in "${!phpfpm_urls[@]}" + do + phpfpm_get "${phpfpm_urls[$m]}" + if [ $? -ne 0 ]; then + continue + fi + + # write the result of the work. + cat <<EOF +BEGIN phpfpm_$m.connections $1 +SET active = $((phpfpm_active_processes)) +SET maxActive = $((phpfpm_max_active_processes)) +SET idle = $((phpfpm_idle_processes)) +END +BEGIN phpfpm_$m.requests $1 +SET requests = $((phpfpm_accepted_conn)) +END +BEGIN phpfpm_$m.performance $1 +SET reached = $((phpfpm_max_children_reached)) +SET slow = $((phpfpm_slow_requests)) +END +EOF + done + + return 0 +} diff --git a/charts.d/postfix.chart.sh b/charts.d/postfix.chart.sh index d286f99f2..f4f710275 100755 --- a/charts.d/postfix.chart.sh +++ b/charts.d/postfix.chart.sh @@ -43,9 +43,9 @@ postfix_check() { postfix_create() { cat <<EOF -CHART postfix.qemails '' "Postfix Queue Emails" "emails" queue postfix.queued.emails line $[postfix_priority + 1] $postfix_update_every +CHART postfix.qemails '' "Postfix Queue Emails" "emails" queue postfix.queued.emails line $((postfix_priority + 1)) $postfix_update_every DIMENSION emails '' absolute 1 1 -CHART postfix.qsize '' "Postfix Queue Emails Size" "emails size in KB" queue postfix.queued.size area $[postfix_priority + 2] $postfix_update_every +CHART postfix.qsize '' "Postfix Queue Emails Size" "emails size in KB" queue postfix.queued.size area $((postfix_priority + 2)) $postfix_update_every DIMENSION size '' absolute 1 1 EOF diff --git a/charts.d/sensors.chart.sh b/charts.d/sensors.chart.sh index 946de5406..19e938586 100755 --- a/charts.d/sensors.chart.sh +++ b/charts.d/sensors.chart.sh @@ -129,7 +129,7 @@ sensors_create() { files="$( sensors_check_files $files )" files="$( sensors_check_temp_type $files )" [ -z "$files" ] && continue - echo "CHART sensors.temp_$id '' '$name Temperature' 'Celsius' 'temperature' 'sensors.temp' line $[sensors_priority + 1] $sensors_update_every" + echo "CHART sensors.temp_$id '' '$name Temperature' 'Celsius' 'temperature' 'sensors.temp' line $((sensors_priority + 1)) $sensors_update_every" echo >>$TMP_DIR/sensors.sh "echo \"BEGIN sensors.temp_$id \$1\"" divisor=1000 ;; @@ -138,7 +138,7 @@ 
sensors_create() { files="$( ls $path/in*_input 2>/dev/null )" files="$( sensors_check_files $files )" [ -z "$files" ] && continue - echo "CHART sensors.volt_$id '' '$name Voltage' 'Volts' 'voltage' 'sensors.volt' line $[sensors_priority + 2] $sensors_update_every" + echo "CHART sensors.volt_$id '' '$name Voltage' 'Volts' 'voltage' 'sensors.volt' line $((sensors_priority + 2)) $sensors_update_every" echo >>$TMP_DIR/sensors.sh "echo \"BEGIN sensors.volt_$id \$1\"" divisor=1000 ;; @@ -147,7 +147,7 @@ sensors_create() { files="$( ls $path/curr*_input 2>/dev/null )" files="$( sensors_check_files $files )" [ -z "$files" ] && continue - echo "CHART sensors.curr_$id '' '$name Current' 'Ampere' 'current' 'sensors.curr' line $[sensors_priority + 3] $sensors_update_every" + echo "CHART sensors.curr_$id '' '$name Current' 'Ampere' 'current' 'sensors.curr' line $((sensors_priority + 3)) $sensors_update_every" echo >>$TMP_DIR/sensors.sh "echo \"BEGIN sensors.curr_$id \$1\"" divisor=1000 ;; @@ -156,7 +156,7 @@ sensors_create() { files="$( ls $path/power*_input 2>/dev/null )" files="$( sensors_check_files $files )" [ -z "$files" ] && continue - echo "CHART sensors.power_$id '' '$name Power' 'Watt' 'power' 'sensors.power' line $[sensors_priority + 4] $sensors_update_every" + echo "CHART sensors.power_$id '' '$name Power' 'Watt' 'power' 'sensors.power' line $((sensors_priority + 4)) $sensors_update_every" echo >>$TMP_DIR/sensors.sh "echo \"BEGIN sensors.power_$id \$1\"" divisor=1000000 ;; @@ -165,7 +165,7 @@ sensors_create() { files="$( ls $path/fan*_input 2>/dev/null )" files="$( sensors_check_files $files )" [ -z "$files" ] && continue - echo "CHART sensors.fan_$id '' '$name Fans Speed' 'Rotations / Minute' 'fans' 'sensors.fans' line $[sensors_priority + 5] $sensors_update_every" + echo "CHART sensors.fan_$id '' '$name Fans Speed' 'Rotations / Minute' 'fans' 'sensors.fans' line $((sensors_priority + 5)) $sensors_update_every" echo >>$TMP_DIR/sensors.sh "echo \"BEGIN sensors.fan_$id \$1\"" ;; @@ -173,7 +173,7 @@ sensors_create() { files="$( ls $path/energy*_input 2>/dev/null )" files="$( sensors_check_files $files )" [ -z "$files" ] && continue - echo "CHART sensors.energy_$id '' '$name Energy' 'Joule' 'energy' 'sensors.energy' areastack $[sensors_priority + 6] $sensors_update_every" + echo "CHART sensors.energy_$id '' '$name Energy' 'Joule' 'energy' 'sensors.energy' areastack $((sensors_priority + 6)) $sensors_update_every" echo >>$TMP_DIR/sensors.sh "echo \"BEGIN sensors.energy_$id \$1\"" algorithm="incremental" divisor=1000000 @@ -183,7 +183,7 @@ sensors_create() { files="$( ls $path/humidity*_input 2>/dev/null )" files="$( sensors_check_files $files )" [ -z "$files" ] && continue - echo "CHART sensors.humidity_$id '' '$name Humidity' 'Percent' 'humidity' 'sensors.humidity' line $[sensors_priority + 7] $sensors_update_every" + echo "CHART sensors.humidity_$id '' '$name Humidity' 'Percent' 'humidity' 'sensors.humidity' line $((sensors_priority + 7)) $sensors_update_every" echo >>$TMP_DIR/sensors.sh "echo \"BEGIN sensors.humidity_$id \$1\"" divisor=1000 ;; diff --git a/charts.d/squid.chart.sh b/charts.d/squid.chart.sh index 5e1ebb062..f6154b256 100755 --- a/charts.d/squid.chart.sh +++ b/charts.d/squid.chart.sh @@ -63,21 +63,21 @@ squid_check() { squid_create() { # create the charts cat <<EOF -CHART squid.clients_net '' "Squid Client Bandwidth" "kilobits / sec" clients squid.clients.net area $[squid_priority + 1] $squid_update_every +CHART squid.clients_net '' "Squid Client Bandwidth" "kilobits / sec" 
clients squid.clients.net area $((squid_priority + 1)) $squid_update_every DIMENSION client_http_kbytes_in in incremental 8 1 DIMENSION client_http_kbytes_out out incremental -8 1 DIMENSION client_http_hit_kbytes_out hits incremental -8 1 -CHART squid.clients_requests '' "Squid Client Requests" "requests / sec" clients squid.clients.requests line $[squid_priority + 3] $squid_update_every +CHART squid.clients_requests '' "Squid Client Requests" "requests / sec" clients squid.clients.requests line $((squid_priority + 3)) $squid_update_every DIMENSION client_http_requests requests incremental 1 1 DIMENSION client_http_hits hits incremental 1 1 DIMENSION client_http_errors errors incremental -1 1 -CHART squid.servers_net '' "Squid Server Bandwidth" "kilobits / sec" servers squid.servers.net area $[squid_priority + 2] $squid_update_every +CHART squid.servers_net '' "Squid Server Bandwidth" "kilobits / sec" servers squid.servers.net area $((squid_priority + 2)) $squid_update_every DIMENSION server_all_kbytes_in in incremental 8 1 DIMENSION server_all_kbytes_out out incremental -8 1 -CHART squid.servers_requests '' "Squid Server Requests" "requests / sec" servers squid.servers.requests line $[squid_priority + 4] $squid_update_every +CHART squid.servers_requests '' "Squid Server Requests" "requests / sec" servers squid.servers.requests line $((squid_priority + 4)) $squid_update_every DIMENSION server_all_requests requests incremental 1 1 DIMENSION server_all_errors errors incremental -1 1 EOF diff --git a/charts.d/tomcat.chart.sh b/charts.d/tomcat.chart.sh new file mode 100755 index 000000000..4e10a9183 --- /dev/null +++ b/charts.d/tomcat.chart.sh @@ -0,0 +1,129 @@ +#!/bin/bash + +# Description: Tomcat netdata charts.d plugin +# Author: Jorge Romero + +# the URL to download tomcat status info +# usually http://localhost:8080/manager/status?XML=true +tomcat_url="" + +# set tomcat username/password here +tomcatUser="" +tomcatPassword="" + +# _update_every is a special variable - it holds the number of seconds +# between the calls of the _update() function +tomcat_update_every= + +tomcat_priority=60000 + +# convert tomcat floating point values +# to integer using this multiplier +# this only affects precision - the values +# will be in the proper units +tomcat_decimal_detail=1000000 + +# used by volume chart to convert bytes to KB +tomcat_decimal_KB_detail=1000 + +tomcat_check() { + + require_cmd xmlstarlet || return 1 + + + # check if url, username, passwords are set + if [ -z "${tomcat_url}" ]; then + echo >&2 "tomcat url is unset or set to the empty string" + return 1 + fi + if [ -z "${tomcatUser}" ]; then + echo >&2 "tomcat user is unset or set to the empty string" + return 1 + fi + if [ -z "${tomcatPassword}" ]; then + echo >&2 "tomcat password is unset or set to the empty string" + return 1 + fi + + # check if we can get to tomcat's status page + tomcat_get + if [ $? -ne 0 ] + then + echo >&2 "tomcat: couldn't get to status page on URL '${tomcat_url}'."\ + "Please make sure tomcat url, username and password are correct." 
+ return 1 + fi + + # this should return: + # - 0 to enable the chart + # - 1 to disable the chart + + return 0 +} + +tomcat_get() { + # Collect tomcat values + mapfile -t lines < <(curl -u "$tomcatUser":"$tomcatPassword" -Ss "$tomcat_url" |\ + xmlstarlet sel \ + -t -m "/status/jvm/memory" -v @free \ + -n -m "/status/connector[@name='\"http-bio-8080\"']/threadInfo" -v @currentThreadCount \ + -n -v @currentThreadsBusy \ + -n -m "/status/connector[@name='\"http-bio-8080\"']/requestInfo" -v @requestCount \ + -n -v @bytesSent -n -) + + tomcat_jvm_freememory="${lines[0]}" + tomcat_threads="${lines[1]}" + tomcat_threads_busy="${lines[2]}" + tomcat_accesses="${lines[3]}" + tomcat_volume="${lines[4]}" + + return 0 +} + +# _create is called once, to create the charts +tomcat_create() { + cat <<EOF +CHART tomcat.accesses '' "tomcat requests" "requests/s" statistics tomcat.accesses area $((tomcat_priority + 8)) $tomcat_update_every +DIMENSION accesses '' incremental +CHART tomcat.volume '' "tomcat volume" "KB/s" volume tomcat.volume area $((tomcat_priority + 5)) $tomcat_update_every +DIMENSION volume '' incremental divisor ${tomcat_decimal_KB_detail} +CHART tomcat.threads '' "tomcat threads" "current threads" statistics tomcat.threads line $((tomcat_priority + 6)) $tomcat_update_every +DIMENSION current '' absolute 1 +DIMENSION busy '' absolute 1 +CHART tomcat.jvm '' "JVM Free Memory" "MB" statistics tomcat.jvm area $((tomcat_priority + 8)) $tomcat_update_every +DIMENSION jvm '' absolute 1 ${tomcat_decimal_detail} +EOF + return 0 +} + +# _update is called continiously, to collect the values +tomcat_update() { + local reqs net + # the first argument to this function is the microseconds since last update + # pass this parameter to the BEGIN statement (see bellow). + + # do all the work to collect / calculate the values + # for each dimension + # remember: KEEP IT SIMPLE AND SHORT + + tomcat_get || return 1 + + # write the result of the work. 
+ cat <<VALUESEOF +BEGIN tomcat.accesses $1 +SET accesses = $((tomcat_accesses)) +END +BEGIN tomcat.volume $1 +SET volume = $((tomcat_volume)) +END +BEGIN tomcat.threads $1 +SET current = $((tomcat_threads)) +SET busy = $((tomcat_threads_busy)) +END +BEGIN tomcat.jvm $1 +SET jvm = $((tomcat_jvm_freememory)) +END +VALUESEOF + + return 0 +} diff --git a/conf.d/Makefile.in b/conf.d/Makefile.in index aaf39a760..1938bd940 100644 --- a/conf.d/Makefile.in +++ b/conf.d/Makefile.in @@ -184,6 +184,8 @@ OPTIONAL_MATH_CLFAGS = @OPTIONAL_MATH_CLFAGS@ OPTIONAL_MATH_LIBS = @OPTIONAL_MATH_LIBS@ OPTIONAL_NFACCT_CLFAGS = @OPTIONAL_NFACCT_CLFAGS@ OPTIONAL_NFACCT_LIBS = @OPTIONAL_NFACCT_LIBS@ +OPTIONAL_UUID_CLFAGS = @OPTIONAL_UUID_CLFAGS@ +OPTIONAL_UUID_LIBS = @OPTIONAL_UUID_LIBS@ OPTIONAL_ZLIB_CLFAGS = @OPTIONAL_ZLIB_CLFAGS@ OPTIONAL_ZLIB_LIBS = @OPTIONAL_ZLIB_LIBS@ PACKAGE = @PACKAGE@ @@ -205,6 +207,8 @@ PTHREAD_LIBS = @PTHREAD_LIBS@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ STRIP = @STRIP@ +UUID_CFLAGS = @UUID_CFLAGS@ +UUID_LIBS = @UUID_LIBS@ VERSION = @VERSION@ ZLIB_CFLAGS = @ZLIB_CFLAGS@ ZLIB_LIBS = @ZLIB_LIBS@ @@ -265,6 +269,7 @@ target_alias = @target_alias@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ +varlibdir = @varlibdir@ webdir = @webdir@ # diff --git a/conf.d/apps_groups.conf b/conf.d/apps_groups.conf index 995ee5d74..887563c44 100644 --- a/conf.d/apps_groups.conf +++ b/conf.d/apps_groups.conf @@ -49,7 +49,7 @@ compile: cc1 cc1plus as gcc* ld make automake autoconf git rsync: rsync media: mplayer vlc xine mediatomb omxplayer* kodi* xbmc* mediacenter eventlircd squid: squid* c-icap -apache: apache* +apache: apache* httpd mysql: mysql* asterisk: asterisk opensips: opensips* stund diff --git a/config.h.in b/config.h.in index d7de0448b..ce8cd7450 100644 --- a/config.h.in +++ b/config.h.in @@ -39,6 +39,9 @@ /* use this user to drop privileged */ #undef NETDATA_USER +/* uuid settings */ +#undef NETDATA_WITH_UUID + /* zlib settings */ #undef NETDATA_WITH_ZLIB @@ -1,6 +1,6 @@ #! /bin/sh # Guess values for system-dependent variables and create Makefiles. -# Generated by GNU Autoconf 2.69 for netdata 1.1.0. +# Generated by GNU Autoconf 2.69 for netdata 1.2.0. # # # Copyright (C) 1992-1996, 1998-2012 Free Software Foundation, Inc. @@ -577,8 +577,8 @@ MAKEFLAGS= # Identity of this package. PACKAGE_NAME='netdata' PACKAGE_TARNAME='netdata' -PACKAGE_VERSION='1.1.0' -PACKAGE_STRING='netdata 1.1.0' +PACKAGE_VERSION='1.2.0' +PACKAGE_STRING='netdata 1.2.0' PACKAGE_BUGREPORT='' PACKAGE_URL='' @@ -623,6 +623,8 @@ ac_subst_vars='am__EXEEXT_FALSE am__EXEEXT_TRUE LTLIBOBJS LIBOBJS +OPTIONAL_UUID_LIBS +OPTIONAL_UUID_CLFAGS OPTIONAL_ZLIB_LIBS OPTIONAL_ZLIB_CLFAGS OPTIONAL_NFACCT_LIBS @@ -636,12 +638,15 @@ configdir nodedir chartsdir cachedir +varlibdir ZLIB_LIBS ZLIB_CFLAGS LIBMNL_LIBS LIBMNL_CFLAGS NFACCT_LIBS NFACCT_CFLAGS +UUID_LIBS +UUID_CFLAGS MATH_LIBS MATH_CFLAGS PTHREAD_CFLAGS @@ -776,6 +781,8 @@ PKG_CONFIG_LIBDIR CPP MATH_CFLAGS MATH_LIBS +UUID_CFLAGS +UUID_LIBS NFACCT_CFLAGS NFACCT_LIBS LIBMNL_CFLAGS @@ -1322,7 +1329,7 @@ if test "$ac_init_help" = "long"; then # Omit some internal or obsolete options to make the list less imposing. # This message is too long to be a string in the A/UX 3.1 sh. cat <<_ACEOF -\`configure' configures netdata 1.1.0 to adapt to many kinds of systems. +\`configure' configures netdata 1.2.0 to adapt to many kinds of systems. Usage: $0 [OPTION]... [VAR=VALUE]... 
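The Tomcat collector above leans on xmlstarlet to pull individual attributes out of the manager's status XML. When it reports empty values, running a trimmed-down version of the same query by hand usually shows whether the connector name is the problem; the credentials and connector name below are assumptions and must match the local server.xml.

# sketch: fetch the status page once and extract the same attributes tomcat_get() reads
curl -u "user:password" -Ss "http://localhost:8080/manager/status?XML=true" > /tmp/tomcat-status.xml

xmlstarlet sel -t \
    -m "/status/jvm/memory" -v @free -n \
    -m "/status/connector[@name='\"http-bio-8080\"']/requestInfo" -v @requestCount -n \
    /tmp/tomcat-status.xml
# prints the JVM free memory and the request counter, one per line; no output
# almost always means the connector name (here "http-bio-8080") does not match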
@@ -1392,7 +1399,7 @@ fi if test -n "$ac_init_help"; then case $ac_init_help in - short | recursive ) echo "Configuration of netdata 1.1.0:";; + short | recursive ) echo "Configuration of netdata 1.2.0:";; esac cat <<\_ACEOF @@ -1436,6 +1443,8 @@ Some influential environment variables: CPP C preprocessor MATH_CFLAGS C compiler flags for math MATH_LIBS linker flags for math + UUID_CFLAGS C compiler flags for UUID, overriding pkg-config + UUID_LIBS linker flags for UUID, overriding pkg-config NFACCT_CFLAGS C compiler flags for NFACCT, overriding pkg-config NFACCT_LIBS linker flags for NFACCT, overriding pkg-config @@ -1511,7 +1520,7 @@ fi test -n "$ac_init_help" && exit $ac_status if $ac_init_version; then cat <<\_ACEOF -netdata configure 1.1.0 +netdata configure 1.2.0 generated by GNU Autoconf 2.69 Copyright (C) 2012 Free Software Foundation, Inc. @@ -1863,7 +1872,7 @@ cat >config.log <<_ACEOF This file contains any messages produced by compilers while running configure, to aid debugging if configure makes a mistake. -It was created by netdata $as_me 1.1.0, which was +It was created by netdata $as_me 1.2.0, which was generated by GNU Autoconf 2.69. Invocation command line was $ $0 $@ @@ -2241,7 +2250,7 @@ $as_echo "$as_me: ***************** MAINTAINER MODE *****************" >&6;} PACKAGE_BUILT_DATE=$(date '+%d %b %Y') fi -PACKAGE_RPM_VERSION="1.1.0" +PACKAGE_RPM_VERSION="1.2.0" @@ -2764,7 +2773,7 @@ fi # Define the identity of the package. PACKAGE='netdata' - VERSION='1.1.0' + VERSION='1.2.0' cat >>confdefs.h <<_ACEOF @@ -5175,6 +5184,104 @@ fi fi + +pkg_failed=no +{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for UUID" >&5 +$as_echo_n "checking for UUID... " >&6; } + +if test -n "$UUID_CFLAGS"; then + pkg_cv_UUID_CFLAGS="$UUID_CFLAGS" + elif test -n "$PKG_CONFIG"; then + if test -n "$PKG_CONFIG" && \ + { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"uuid\""; } >&5 + ($PKG_CONFIG --exists --print-errors "uuid") 2>&5 + ac_status=$? + $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 + test $ac_status = 0; }; then + pkg_cv_UUID_CFLAGS=`$PKG_CONFIG --cflags "uuid" 2>/dev/null` + test "x$?" != "x0" && pkg_failed=yes +else + pkg_failed=yes +fi + else + pkg_failed=untried +fi +if test -n "$UUID_LIBS"; then + pkg_cv_UUID_LIBS="$UUID_LIBS" + elif test -n "$PKG_CONFIG"; then + if test -n "$PKG_CONFIG" && \ + { { $as_echo "$as_me:${as_lineno-$LINENO}: \$PKG_CONFIG --exists --print-errors \"uuid\""; } >&5 + ($PKG_CONFIG --exists --print-errors "uuid") 2>&5 + ac_status=$? + $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 + test $ac_status = 0; }; then + pkg_cv_UUID_LIBS=`$PKG_CONFIG --libs "uuid" 2>/dev/null` + test "x$?" != "x0" && pkg_failed=yes +else + pkg_failed=yes +fi + else + pkg_failed=untried +fi + + + +if test $pkg_failed = yes; then + { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 +$as_echo "no" >&6; } + +if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then + _pkg_short_errors_supported=yes +else + _pkg_short_errors_supported=no +fi + if test $_pkg_short_errors_supported = yes; then + UUID_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "uuid" 2>&1` + else + UUID_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "uuid" 2>&1` + fi + # Put the nasty error message in config.log where it belongs + echo "$UUID_PKG_ERRORS" >&5 + + as_fn_error $? 
"Package requirements (uuid) were not met: + +$UUID_PKG_ERRORS + +Consider adjusting the PKG_CONFIG_PATH environment variable if you +installed software in a non-standard prefix. + +Alternatively, you may set the environment variables UUID_CFLAGS +and UUID_LIBS to avoid the need to call pkg-config. +See the pkg-config man page for more details." "$LINENO" 5 +elif test $pkg_failed = untried; then + { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 +$as_echo "no" >&6; } + { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 +$as_echo "$as_me: error: in \`$ac_pwd':" >&2;} +as_fn_error $? "The pkg-config script could not be found or is too old. Make sure it +is in your PATH or set the PKG_CONFIG environment variable to the full +path to pkg-config. + +Alternatively, you may set the environment variables UUID_CFLAGS +and UUID_LIBS to avoid the need to call pkg-config. +See the pkg-config man page for more details. + +To get pkg-config, see <http://pkg-config.freedesktop.org/>. +See \`config.log' for more details" "$LINENO" 5; } +else + UUID_CFLAGS=$pkg_cv_UUID_CFLAGS + UUID_LIBS=$pkg_cv_UUID_LIBS + { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 +$as_echo "yes" >&6; } + +fi +test -z "${UUID_LIBS}" && as_fn_error $? "libuuid required but not found. Try installing 'uuid-dev' or 'libuuid-devel'." "$LINENO" 5 + +$as_echo "#define NETDATA_WITH_UUID 1" >>confdefs.h + +OPTIONAL_UUID_CLFAGS="${UUID_CFLAGS}" +OPTIONAL_UUID_LIBS="${UUID_LIBS}" + if test "${enable_plugin_nfacct}" = "yes"; then pkg_failed=no @@ -5359,7 +5466,7 @@ $as_echo "yes" >&6; } fi test -z "${NFACCT_LIBS}" && as_fn_error $? "netfilter_acct required but not found" "$LINENO" 5 - test -z "${LIBMNL_LIBS}" && as_fn_error $? "libmnl required but not found" "$LINENO" 5 + test -z "${LIBMNL_LIBS}" && as_fn_error $? "libmnl required but not found. Try installing 'libmnl-dev' or 'libmnl-devel'" "$LINENO" 5 $as_echo "#define INTERNAL_PLUGIN_NFACCT 1" >>confdefs.h @@ -5458,7 +5565,7 @@ else $as_echo "yes" >&6; } fi - test -z "${ZLIB_LIBS}" && as_fn_error $? "zlib required but not found" "$LINENO" 5 + test -z "${ZLIB_LIBS}" && as_fn_error $? "zlib required but not found. Try installing 'zlib1g-dev' or 'zlib-devel'." "$LINENO" 5 $as_echo "#define NETDATA_WITH_ZLIB 1" >>confdefs.h @@ -5509,6 +5616,8 @@ cat >>confdefs.h <<_ACEOF _ACEOF +varlibdir="\$(localstatedir)/lib/netdata" + cachedir="\$(localstatedir)/cache/netdata" chartsdir="\$(libexecdir)/netdata/charts.d" @@ -5530,6 +5639,8 @@ pluginsdir="\$(libexecdir)/netdata/plugins.d" + + ac_config_files="$ac_config_files Makefile charts.d/Makefile conf.d/Makefile netdata.spec node.d/Makefile plugins.d/Makefile src/Makefile system/Makefile web/Makefile contrib/Makefile" cat >confcache <<\_ACEOF @@ -6066,7 +6177,7 @@ cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # report actual input values of CONFIG_FILES etc. instead of their # values after options handling. ac_log=" -This file was extended by netdata $as_me 1.1.0, which was +This file was extended by netdata $as_me 1.2.0, which was generated by GNU Autoconf 2.69. 
Invocation command line was CONFIG_FILES = $CONFIG_FILES @@ -6132,7 +6243,7 @@ _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 ac_cs_config="`$as_echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`" ac_cs_version="\\ -netdata config.status 1.1.0 +netdata config.status 1.2.0 configured by $0, generated by GNU Autoconf 2.69, with options \\"\$ac_cs_config\\" diff --git a/configure.ac b/configure.ac index 2f82979a8..3d85f6d1c 100644 --- a/configure.ac +++ b/configure.ac @@ -4,7 +4,7 @@ AC_PREREQ(2.60) define([VERSION_MAJOR], [1]) -define([VERSION_MINOR], [1]) +define([VERSION_MINOR], [2]) define([VERSION_FIX], [0]) define([VERSION_NUMBER], VERSION_MAJOR[.]VERSION_MINOR[.]VERSION_FIX) define([VERSION_SUFFIX], []) @@ -92,6 +92,15 @@ if test -z "${MATH_LIBS}"; then ) fi +PKG_CHECK_MODULES( + [UUID], + [uuid], +) +test -z "${UUID_LIBS}" && AC_MSG_ERROR([libuuid required but not found. Try installing 'uuid-dev' or 'libuuid-devel'.]) +AC_DEFINE([NETDATA_WITH_UUID], [1], [uuid settings]) +OPTIONAL_UUID_CLFAGS="${UUID_CFLAGS}" +OPTIONAL_UUID_LIBS="${UUID_LIBS}" + if test "${enable_plugin_nfacct}" = "yes"; then PKG_CHECK_MODULES( [NFACCT], @@ -102,7 +111,7 @@ if test "${enable_plugin_nfacct}" = "yes"; then [libmnl], ) test -z "${NFACCT_LIBS}" && AC_MSG_ERROR([netfilter_acct required but not found]) - test -z "${LIBMNL_LIBS}" && AC_MSG_ERROR([libmnl required but not found]) + test -z "${LIBMNL_LIBS}" && AC_MSG_ERROR([libmnl required but not found. Try installing 'libmnl-dev' or 'libmnl-devel']) AC_DEFINE([INTERNAL_PLUGIN_NFACCT], [1], [nfacct plugin settings]) OPTIONAL_NFACCT_CLFAGS="${NFACCT_CFLAGS} ${LIBMNL_CFLAGS}" OPTIONAL_NFACCT_LIBS="${NFACCT_LIBS} ${LIBMNL_LIBS}" @@ -112,7 +121,7 @@ if test "${with_zlib}" = "yes"; then [ZLIB], [zlib], ) - test -z "${ZLIB_LIBS}" && AC_MSG_ERROR([zlib required but not found]) + test -z "${ZLIB_LIBS}" && AC_MSG_ERROR([zlib required but not found. 
Try installing 'zlib1g-dev' or 'zlib-devel'.]) AC_DEFINE([NETDATA_WITH_ZLIB], [1], [zlib settings]) OPTIONAL_ZLIB_CLFAGS="${ZLIB_CFLAGS}" OPTIONAL_ZLIB_LIBS="${ZLIB_LIBS}" @@ -139,6 +148,7 @@ fi AC_DEFINE_UNQUOTED([NETDATA_USER], ["${with_user}"], [use this user to drop privileged]) +AC_SUBST([varlibdir], ["\$(localstatedir)/lib/netdata"]) AC_SUBST([cachedir], ["\$(localstatedir)/cache/netdata"]) AC_SUBST([chartsdir], ["\$(libexecdir)/netdata/charts.d"]) AC_SUBST([nodedir], ["\$(libexecdir)/netdata/node.d"]) @@ -153,6 +163,8 @@ AC_SUBST([OPTIONAL_NFACCT_CLFAGS]) AC_SUBST([OPTIONAL_NFACCT_LIBS]) AC_SUBST([OPTIONAL_ZLIB_CLFAGS]) AC_SUBST([OPTIONAL_ZLIB_LIBS]) +AC_SUBST([OPTIONAL_UUID_CLFAGS]) +AC_SUBST([OPTIONAL_UUID_LIBS]) AC_CONFIG_FILES([ Makefile diff --git a/contrib/Makefile.am b/contrib/Makefile.am index 5eef88470..19e5df77f 100644 --- a/contrib/Makefile.am +++ b/contrib/Makefile.am @@ -21,3 +21,9 @@ dist_noinst_DATA = \ dist_noinst_SCRIPTS = \ debian/netdata.init \ $(NULL) + +debian/changelog: + echo "netdata ($(PACKAGE_VERSION)) UNRELEASED; urgency=medium" | \ + tr '_' '~' > $@ + echo " * Latest release" >> $@ + echo " -- Netdata Team <> `date -R`" >> $@ diff --git a/contrib/Makefile.in b/contrib/Makefile.in index 084501db4..f2d5ffa25 100644 --- a/contrib/Makefile.in +++ b/contrib/Makefile.in @@ -158,6 +158,8 @@ OPTIONAL_MATH_CLFAGS = @OPTIONAL_MATH_CLFAGS@ OPTIONAL_MATH_LIBS = @OPTIONAL_MATH_LIBS@ OPTIONAL_NFACCT_CLFAGS = @OPTIONAL_NFACCT_CLFAGS@ OPTIONAL_NFACCT_LIBS = @OPTIONAL_NFACCT_LIBS@ +OPTIONAL_UUID_CLFAGS = @OPTIONAL_UUID_CLFAGS@ +OPTIONAL_UUID_LIBS = @OPTIONAL_UUID_LIBS@ OPTIONAL_ZLIB_CLFAGS = @OPTIONAL_ZLIB_CLFAGS@ OPTIONAL_ZLIB_LIBS = @OPTIONAL_ZLIB_LIBS@ PACKAGE = @PACKAGE@ @@ -179,6 +181,8 @@ PTHREAD_LIBS = @PTHREAD_LIBS@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ STRIP = @STRIP@ +UUID_CFLAGS = @UUID_CFLAGS@ +UUID_LIBS = @UUID_LIBS@ VERSION = @VERSION@ ZLIB_CFLAGS = @ZLIB_CFLAGS@ ZLIB_LIBS = @ZLIB_LIBS@ @@ -239,6 +243,7 @@ target_alias = @target_alias@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ +varlibdir = @varlibdir@ webdir = @webdir@ MAINTAINERCLEANFILES = $(srcdir)/Makefile.in dist_noinst_DATA = \ @@ -448,6 +453,12 @@ uninstall-am: pdf-am ps ps-am tags-am uninstall uninstall-am +debian/changelog: + echo "netdata ($(PACKAGE_VERSION)) UNRELEASED; urgency=medium" | \ + tr '_' '~' > $@ + echo " * Latest release" >> $@ + echo " -- Netdata Team <> `date -R`" >> $@ + # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: diff --git a/contrib/debian/changelog b/contrib/debian/changelog index 795eaadd8..ed818d4df 100644 --- a/contrib/debian/changelog +++ b/contrib/debian/changelog @@ -1,5 +1,3 @@ -netdata (1.0.0) UNRELEASED; urgency=medium - - * Initial release. - - -- Matthew Newton <mcn4@leicester.ac.uk> Fri, 01 Apr 2016 17:24:11 +0100 +netdata (1.2.0) UNRELEASED; urgency=medium + * Latest release + -- Netdata Team <> Mon, 16 May 2016 22:12:32 +0200 diff --git a/contrib/debian/netdata.init b/contrib/debian/netdata.init index 7403b459a..c1b2b74de 100755 --- a/contrib/debian/netdata.init +++ b/contrib/debian/netdata.init @@ -49,7 +49,7 @@ restart|force-reload) log_daemon_msg "Restarting real-time system monitoring" "n status) status_of_proc -p $PIDFILE $DAEMON $NAME && exit 0 || exit $? 
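libuuid becomes a hard build dependency with this release, and configure locates it through pkg-config, as the hunks above show. Before building, the dependency can be checked from the shell; the package names are the ones suggested by the new error messages.

# sketch: confirm pkg-config can resolve the mandatory uuid module
pkg-config --exists uuid && echo "uuid: found" || echo "uuid: missing"
pkg-config --cflags --libs uuid    # typically prints just: -luuid

# if it is missing, install the development package first, e.g.
#   apt-get install uuid-dev        # debian/ubuntu
#   yum install libuuid-devel       # centos/fedora/redhat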
;; -*) log_action_msg "Usage: /etc/init.d/cron {start|stop|status|restart|force-reload}" +*) log_action_msg "Usage: $SCRIPTNAME {start|stop|status|restart|force-reload}" exit 2 ;; esac diff --git a/netdata-installer.sh b/netdata-installer.sh index 06a283df9..42ab0e156 100755 --- a/netdata-installer.sh +++ b/netdata-installer.sh @@ -3,7 +3,15 @@ # reload the user profile [ -f /etc/profile ] && . /etc/profile +# fix PKG_CHECK_MODULES error +if [ -d /usr/share/aclocal ] +then + ACLOCAL_PATH=${ACLOCAL_PATH-/usr/share/aclocal} + export ACLOCAL_PATH +fi + LC_ALL=C +umask 022 # you can set CFLAGS before running installer CFLAGS="${CFLAGS--O3}" @@ -19,51 +27,53 @@ ME="$0" DONOTSTART=0 DONOTWAIT=0 NETDATA_PREFIX= -ZLIB_IS_HERE=0 +LIBS_ARE_HERE=0 usage() { - cat <<USAGE + cat <<-USAGE -${ME} <installer options> + ${ME} <installer options> -Valid <installer options> are: + Valid <installer options> are: - --install /PATH/TO/INSTALL + --install /PATH/TO/INSTALL - If your give: --install /opt - netdata will be installed in /opt/netdata + If your give: --install /opt + netdata will be installed in /opt/netdata - --dont-start-it + --dont-start-it - Do not (re)start netdata. - Just install it. + Do not (re)start netdata. + Just install it. - --dont-wait + --dont-wait - Do not wait for the user to press ENTER. - Start immediately building it. + Do not wait for the user to press ENTER. + Start immediately building it. - --zlib-is-really-here + --zlib-is-really-here + --libs-are-really-here - If you get errors about missing zlib, - but you know it is available, - you have a broken pkg-config. - Use this option to allow it continue - without checking pkg-config. + If you get errors about missing zlib, + or libuuid but you know it is available, + you have a broken pkg-config. + Use this option to allow it continue + without checking pkg-config. -Netdata will by default be compiled with gcc optimization -O3 -If you need to pass different CFLAGS, use something like this: + Netdata will by default be compiled with gcc optimization -O3 + If you need to pass different CFLAGS, use something like this: - CFLAGS="<gcc options>" $ME <installer options> + CFLAGS="<gcc options>" ${ME} <installer options> -For the installer to complete successfully, you will need -these packages installed: + For the installer to complete successfully, you will need + these packages installed: - gcc make autoconf automake pkg-config zlib1g-dev + gcc make autoconf automake pkg-config zlib1g-dev (or zlib-devel) + uuid-dev (or libuuid-devel) -For the plugins, you will at least need: + For the plugins, you will at least need: - curl node + curl nodejs USAGE } @@ -74,9 +84,9 @@ do then NETDATA_PREFIX="${2}/netdata" shift 2 - elif [ "$1" = "--zlib-is-really-here" ] + elif [ "$1" = "--zlib-is-really-here" -o "$1" = "--libs-are-really-here" ] then - ZLIB_IS_HERE=1 + LIBS_ARE_HERE=1 shift 1 elif [ "$1" = "--dont-start-it" ] then @@ -99,25 +109,26 @@ do fi done -cat <<BANNER +cat <<-BANNER -Welcome to netdata! -Nice to see you are giving it a try! + Welcome to netdata! + Nice to see you are giving it a try! -You are about to build and install netdata to your system. + You are about to build and install netdata to your system. 
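The installer messages in this hunk are being switched from <<WORD to <<-WORD here-documents: the dash tells the shell to strip leading tab characters from the body, so the text can be indented inside the script yet still print flush-left. A two-line illustration (the body indentation must be real tabs, not spaces):

# sketch: tab-stripping here-document, the mechanism behind the re-indented banners
cat <<-EOF
	indented with a tab inside the script
	printed without the indentation
EOF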
-It will be installed at these locations: + It will be installed at these locations: - - the daemon at ${NETDATA_PREFIX}/usr/sbin/netdata - - config files at ${NETDATA_PREFIX}/etc/netdata - - web files at ${NETDATA_PREFIX}/usr/share/netdata - - plugins at ${NETDATA_PREFIX}/usr/libexec/netdata - - cache files at ${NETDATA_PREFIX}/var/cache/netdata - - log files at ${NETDATA_PREFIX}/var/log/netdata - - pid file at ${NETDATA_PREFIX}/var/run + - the daemon at ${NETDATA_PREFIX}/usr/sbin/netdata + - config files at ${NETDATA_PREFIX}/etc/netdata + - web files at ${NETDATA_PREFIX}/usr/share/netdata + - plugins at ${NETDATA_PREFIX}/usr/libexec/netdata + - cache files at ${NETDATA_PREFIX}/var/cache/netdata + - db files at ${NETDATA_PREFIX}/var/lib/netdata + - log files at ${NETDATA_PREFIX}/var/log/netdata + - pid file at ${NETDATA_PREFIX}/var/run -This installer allows you to change the installation path. -Press Control-C and run the same command with --help for help. + This installer allows you to change the installation path. + Press Control-C and run the same command with --help for help. BANNER @@ -125,40 +136,40 @@ if [ "${UID}" -ne 0 ] then if [ -z "${NETDATA_PREFIX}" ] then - cat <<NONROOTNOPREFIX + cat <<-NONROOTNOPREFIX -Sorry! This will wrong! + Sorry! This will fail! -You are attempting to install netdata as non-root, but you plan to install it -in system paths. + You are attempting to install netdata as non-root, but you plan to install it + in system paths. -Please set an installation prefix, like this: + Please set an installation prefix, like this: - $0 ${@} --install /tmp + $0 ${@} --install /tmp -or, run the installer as root: + or, run the installer as root: - sudo $0 ${@} + sudo $0 ${@} -We suggest to install it as root, or certain data collectors will not be able -to work. Netdata drops root privileges when running. So, if you plan to keep -it, install it as root to get the full functionality. + We suggest to install it as root, or certain data collectors will not be able + to work. Netdata drops root privileges when running. So, if you plan to keep + it, install it as root to get the full functionality. NONROOTNOPREFIX exit 1 else - cat <<NONROOT + cat <<-NONROOT -IMPORTANT: -You are about to install netdata as a non-root user. -Netdata will work, but a few data collection modules that -require root access will fail. + IMPORTANT: + You are about to install netdata as a non-root user. + Netdata will work, but a few data collection modules that + require root access will fail. -If you installing permanently on your system, run the -installer like this: + If you installing permanently on your system, run the + installer like this: - sudo $0 ${@} + sudo $0 ${@} NONROOT fi @@ -199,22 +210,22 @@ then else cat <<-"EOF" - ------------------------------------------------------------------------------- - autotools 2.60 or later is required + ------------------------------------------------------------------------------- + autotools 2.60 or later is required - Sorry, you do not seem to have autotools 2.60 or later, which is - required to build from the git sources of netdata. + Sorry, you do not seem to have autotools 2.60 or later, which is + required to build from the git sources of netdata. - You can either install a suitable version of autotools and automake - or download a netdata package which does not have these dependencies. + You can either install a suitable version of autotools and automake + or download a netdata package which does not have these dependencies. 
- Source packages where autotools have already been run are available - here: - https://firehol.org/download/netdata/ + Source packages where autotools have already been run are available + here: + https://firehol.org/download/netdata/ - The unsigned/master folder tracks the head of the git tree and released - packages are also available. - EOF + The unsigned/master folder tracks the head of the git tree and released + packages are also available. +EOF exit 1 fi fi @@ -230,29 +241,33 @@ if [ ${DONOTWAIT} -eq 0 ] fi build_error() { - cat <<EOF + cat <<-EOF + + ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + Sorry! NetData failed to build... -Sorry! NetData failed to build... + You many need to check these: -You many need to check these: + 1. The package uuid-dev (or libuuid-devel) has to be installed. -1. The package zlib1g-dev has to be installed. + If your system cannot find libuuid, although it is installed + run me with the option: --libs-are-really-here -2. You need basic build tools installed, like: + 2. The package zlib1g-dev (or zlib-devel) has to be installed. - gcc make autoconf automake pkg-config + If your system cannot find zlib, although it is installed + run me with the option: --libs-are-really-here - Autoconf version 2.60 or higher is required + 3. You need basic build tools installed, like: -3. If your system cannot find ZLIB, although it is installed - run me with the option: --zlib-is-really-here + gcc make autoconf automake pkg-config + Autoconf version 2.60 or higher is required. -If you still cannot get it to build, ask for help at github: + If you still cannot get it to build, ask for help at github: - https://github.com/firehol/netdata/issues + https://github.com/firehol/netdata/issues EOF @@ -261,23 +276,38 @@ EOF } run() { + printf >>netdata-installer.log "# " + printf >>netdata-installer.log "%q " "${@}" + printf >>netdata-installer.log " ... " + printf >&2 "\n" printf >&2 ":-----------------------------------------------------------------------------\n" printf >&2 "Running command:\n" printf >&2 "\n" printf >&2 "%q " "${@}" printf >&2 "\n" - printf >&2 "\n" "${@}" + + local ret=$? + if [ ${ret} -ne 0 ] + then + printf >>netdata-installer.log "FAILED!\n" + else + printf >>netdata-installer.log "OK\n" + fi + + return ${ret} } -if [ ${ZLIB_IS_HERE} -eq 1 ] +if [ ${LIBS_ARE_HERE} -eq 1 ] then shift - echo >&2 "ok, assuming zlib is really installed." + echo >&2 "ok, assuming libs are really installed." export ZLIB_CFLAGS=" " export ZLIB_LIBS="-lz" + export UUID_CFLAGS=" " + export UUID_LIBS="-luuid" fi trap build_error EXIT @@ -307,11 +337,16 @@ echo >&2 "Compiling netdata ..." 
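With the new options and logging in place, a typical non-interactive run of the installer can be driven entirely from the command line; the /opt prefix below is only an example.

# sketch: build and install into /opt/netdata without prompts, trusting that
# zlib and libuuid are present even if pkg-config cannot see them
./netdata-installer.sh --install /opt --dont-wait --dont-start-it --libs-are-really-here

# every command the installer runs is now also appended to ./netdata-installer.log,
# which is the first place to look when a build step fails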
run make || exit 1 # backup user configurations +installer_backup_suffix="${PID}.${RANDOM}" for x in apps_groups.conf charts.d.conf do if [ -f "${NETDATA_PREFIX}/etc/netdata/${x}" ] then - cp -p "${NETDATA_PREFIX}/etc/netdata/${x}" "${NETDATA_PREFIX}/etc/netdata/${x}.installer_backup" + cp -p "${NETDATA_PREFIX}/etc/netdata/${x}" "${NETDATA_PREFIX}/etc/netdata/${x}.installer_backup.${installer_backup_suffix}" + + elif [ -f "${NETDATA_PREFIX}/etc/netdata/${x}.installer_backup.${installer_backup_suffix}" ] + then + rm -f "${NETDATA_PREFIX}/etc/netdata/${x}.installer_backup.${installer_backup_suffix}" fi done @@ -321,19 +356,52 @@ run make install || exit 1 # restore user configurations for x in apps_groups.conf charts.d.conf do - if [ -f "${NETDATA_PREFIX}/etc/netdata/${x}.installer_backup" ] + if [ -f "${NETDATA_PREFIX}/etc/netdata/${x}.installer_backup.${installer_backup_suffix}" ] then - cp -p "${NETDATA_PREFIX}/etc/netdata/${x}.installer_backup" "${NETDATA_PREFIX}/etc/netdata/${x}" + cp -p "${NETDATA_PREFIX}/etc/netdata/${x}.installer_backup.${installer_backup_suffix}" "${NETDATA_PREFIX}/etc/netdata/${x}" fi done +NETDATA_ADDED_TO_DOCKER=0 if [ ${UID} -eq 0 ] then - echo >&2 "Adding netdata user group ..." - getent group netdata > /dev/null || run groupadd -r netdata + getent group netdata > /dev/null + if [ $? -ne 0 ] + then + echo >&2 "Adding netdata user group ..." + run groupadd -r netdata + fi + + getent passwd netdata > /dev/null + if [ $? -ne 0 ] + then + echo >&2 "Adding netdata user account ..." + run useradd -r -g netdata -c netdata -s /sbin/nologin -d / netdata + fi + + getent group docker > /dev/null + if [ $? -eq 0 ] + then + # find the users in the docker group + docker=$(getent group docker | cut -d ':' -f 4) + if [[ ",${docker}," =~ ,netdata, ]] + then + # netdata is already there + : + else + # netdata is not in docker group + echo >&2 "Adding netdata user to the docker group (needed to get container names) ..." + run usermod -a -G docker netdata + fi + # let the uninstall script know + NETDATA_ADDED_TO_DOCKER=1 + fi - echo >&2 "Adding netdata user account ..." - getent passwd netdata > /dev/null || run useradd -r -g netdata -c netdata -s /sbin/nologin -d / netdata + if [ -d /etc/logrotate.d -a ! -f /etc/logrotate.d/netdata ] + then + echo >&2 "Adding netdata logrotate configuration ..." + run cp system/netdata.logrotate /etc/logrotate.d/netdata + fi fi @@ -373,10 +441,12 @@ defport=19999 NETDATA_PORT="$( config_option "port" ${defport} )" # directories +NETDATA_LIB_DIR="$( config_option "lib directory" "${NETDATA_PREFIX}/var/lib/netdata" )" NETDATA_CACHE_DIR="$( config_option "cache directory" "${NETDATA_PREFIX}/var/cache/netdata" )" NETDATA_WEB_DIR="$( config_option "web files directory" "${NETDATA_PREFIX}/usr/share/netdata/web" )" NETDATA_LOG_DIR="$( config_option "log directory" "${NETDATA_PREFIX}/var/log/netdata" )" NETDATA_CONF_DIR="$( config_option "config directory" "${NETDATA_PREFIX}/etc/netdata" )" +NETDATA_BIND="$( config_option "bind socket to IP" "*" )" NETDATA_RUN_DIR="${NETDATA_PREFIX}/var/run" @@ -389,8 +459,9 @@ if [ ! -d "${NETDATA_RUN_DIR}" ] mkdir -p "${NETDATA_RUN_DIR}" || exit 1 fi +echo >&2 echo >&2 "Fixing directories (user: ${NETDATA_USER})..." -for x in "${NETDATA_WEB_DIR}" "${NETDATA_CONF_DIR}" "${NETDATA_CACHE_DIR}" "${NETDATA_LOG_DIR}" +for x in "${NETDATA_WEB_DIR}" "${NETDATA_CONF_DIR}" "${NETDATA_CACHE_DIR}" "${NETDATA_LOG_DIR}" "${NETDATA_LIB_DIR}" do if [ ! 
-d "${x}" ] then @@ -408,14 +479,20 @@ do fi fi - run chmod 0775 "${x}" || echo >&2 "WARNING: Cannot change the permissions of the directory ${x} to 0755..." + run chmod 0755 "${x}" || echo >&2 "WARNING: Cannot change the permissions of the directory ${x} to 0755..." done if [ ${UID} -eq 0 ] then - # fix apps.plugin to be setuid to root run chown root "${NETDATA_PREFIX}/usr/libexec/netdata/plugins.d/apps.plugin" - run chmod 4755 "${NETDATA_PREFIX}/usr/libexec/netdata/plugins.d/apps.plugin" + run chmod 0755 "${NETDATA_PREFIX}/usr/libexec/netdata/plugins.d/apps.plugin" + run setcap cap_dac_read_search,cap_sys_ptrace+ep "${NETDATA_PREFIX}/usr/libexec/netdata/plugins.d/apps.plugin" + if [ $? -ne 0 ] + then + # fix apps.plugin to be setuid to root + run chown root "${NETDATA_PREFIX}/usr/libexec/netdata/plugins.d/apps.plugin" + run chmod 4755 "${NETDATA_PREFIX}/usr/libexec/netdata/plugins.d/apps.plugin" + fi fi # ----------------------------------------------------------------------------- @@ -448,6 +525,9 @@ isnetdata() { } +echo >&2 +echo >&2 "-------------------------------------------------------------------------------" +echo >&2 printf >&2 "Stopping a (possibly) running netdata..." ret=0 count=0 @@ -496,6 +576,7 @@ if [ $? -ne 0 ] else echo >&2 "OK. NetData Started!" fi +echo >&2 # ----------------------------------------------------------------------------- @@ -503,6 +584,9 @@ fi if [ ! -s "${NETDATA_PREFIX}/etc/netdata/netdata.conf" ] then + echo >&2 + echo >&2 "-------------------------------------------------------------------------------" + echo >&2 echo >&2 "Downloading default configuration from netdata..." sleep 5 @@ -544,36 +628,36 @@ fi # Check for KSM ksm_is_available_but_disabled() { - cat <<KSM1 + cat <<-KSM1 -------------------------------------------------------------------------------- -Memory de-duplication instructions + ------------------------------------------------------------------------------- + Memory de-duplication instructions -I see you have kernel memory de-duper (called Kernel Same-page Merging, -or KSM) available, but it is not currently enabled. + I see you have kernel memory de-duper (called Kernel Same-page Merging, + or KSM) available, but it is not currently enabled. -To enable it run: + To enable it run: -echo 1 >/sys/kernel/mm/ksm/run -echo 1000 >/sys/kernel/mm/ksm/sleep_millisecs + echo 1 >/sys/kernel/mm/ksm/run + echo 1000 >/sys/kernel/mm/ksm/sleep_millisecs -If you enable it, you will save 40-60% of netdata memory. + If you enable it, you will save 40-60% of netdata memory. KSM1 } ksm_is_not_available() { - cat <<KSM2 + cat <<-KSM2 -------------------------------------------------------------------------------- -Memory de-duplication not present in your kernel + ------------------------------------------------------------------------------- + Memory de-duplication not present in your kernel -It seems you do not have kernel memory de-duper (called Kernel Same-page -Merging, or KSM) available. + It seems you do not have kernel memory de-duper (called Kernel Same-page + Merging, or KSM) available. -To enable it, you need a kernel built with CONFIG_KSM=y + To enable it, you need a kernel built with CONFIG_KSM=y -If you can have it, you will save 40-60% of netdata memory. + If you can have it, you will save 40-60% of netdata memory. KSM2 } @@ -593,18 +677,18 @@ fi if [ ! 
-s web/version.txt ] then -cat <<VERMSG + cat <<-VERMSG -------------------------------------------------------------------------------- -Version update check warning + ------------------------------------------------------------------------------- + Version update check warning -The way you downloaded netdata, we cannot find its version. This means the -Update check on the dashboard, will not work. + The way you downloaded netdata, we cannot find its version. This means the + Update check on the dashboard, will not work. -If you want to have version update check, please re-install it -following the procedure in: + If you want to have version update check, please re-install it + following the procedure in: -https://github.com/firehol/netdata/wiki/Installation + https://github.com/firehol/netdata/wiki/Installation VERMSG fi @@ -614,23 +698,29 @@ fi if [ "${UID}" -ne 0 ] then -cat <<SETUID_WARNING + cat <<-SETUID_WARNING + + ------------------------------------------------------------------------------- + apps.plugin needs privileges + + Since you have installed netdata as a normal user, to have apps.plugin collect + all the needed data, you have to give it the access rights it needs, by running + either of the following sets of commands: -------------------------------------------------------------------------------- -apps.plugin needs privileges + To run apps.plugin with escalated capabilities: -Since you have installed netdata as a normal user, to have apps.plugin collect -all the needed data, you have to give it the access rights it needs, by running -these commands: + sudo chown root:${NETDATA_USER} "${NETDATA_PREFIX}/usr/libexec/netdata/plugins.d/apps.plugin" + sudo chmod 0750 "${NETDATA_PREFIX}/usr/libexec/netdata/plugins.d/apps.plugin" + sudo setcap cap_dac_read_search,cap_sys_ptrace+ep "${NETDATA_PREFIX}/usr/libexec/netdata/plugins.d/apps.plugin" - sudo chown root "${NETDATA_PREFIX}/usr/libexec/netdata/plugins.d/apps.plugin" - sudo chmod 4755 "${NETDATA_PREFIX}/usr/libexec/netdata/plugins.d/apps.plugin" + or, to run apps.plugin as root: -The commands allow apps.plugin to run as root. + sudo chown root "${NETDATA_PREFIX}/usr/libexec/netdata/plugins.d/apps.plugin" + sudo chmod 4755 "${NETDATA_PREFIX}/usr/libexec/netdata/plugins.d/apps.plugin" -apps.plugin is performing a hard-coded function of data collection for all -running processes. It cannot be instructed from the netdata daemon to perform -any task, so it is pretty safe to do this. + apps.plugin is performing a hard-coded function of data collection for all + running processes. It cannot be instructed from the netdata daemon to perform + any task, so it is pretty safe to do this. SETUID_WARNING fi @@ -638,103 +728,121 @@ fi # ----------------------------------------------------------------------------- # Keep un-install info -cat >netdata-uninstaller.sh <<UNINSTALL -#!/bin/bash +cat >netdata-uninstaller.sh <<-UNINSTALL + #!/bin/bash -# this script will uninstall netdata + # this script will uninstall netdata -if [ "\$1" != "--force" ] - then - echo >&2 "This script will REMOVE netdata from your system." - echo >&2 "Run it again with --force to do it." - exit 1 -fi + if [ "\$1" != "--force" ] + then + echo >&2 "This script will REMOVE netdata from your system." + echo >&2 "Run it again with --force to do it." + exit 1 + fi + + echo >&2 "Stopping a possibly running netdata..." + killall netdata + sleep 2 -echo >&2 "Stopping a possibly running netdata..." -killall netdata -sleep 2 + deletedir() { + if [ ! 
-z "\$1" -a -d "\$1" ] + then + echo + echo "Deleting directory '\$1' ..." + rm -I -R "\$1" + fi + } -deletedir() { - if [ ! -z "\$1" -a -d "\$1" ] + if [ ! -z "${NETDATA_PREFIX}" -a -d "${NETDATA_PREFIX}" ] then - echo - echo "Deleting directory '\$1' ..." - rm -I -R "\$1" - fi -} + # installation prefix was given -if [ ! -z "${NETDATA_PREFIX}" -a -d "${NETDATA_PREFIX}" ] - then - # installation prefix was given + deletedir "${NETDATA_PREFIX}" + + else + # installation prefix was NOT given - deletedir "${NETDATA_PREFIX}" + if [ -f "${NETDATA_PREFIX}/usr/sbin/netdata" ] + then + echo "Deleting ${NETDATA_PREFIX}/usr/sbin/netdata ..." + rm -i "${NETDATA_PREFIX}/usr/sbin/netdata" + fi -else - # installation prefix was NOT given + deletedir "${NETDATA_PREFIX}/etc/netdata" + deletedir "${NETDATA_PREFIX}/usr/share/netdata" + deletedir "${NETDATA_PREFIX}/usr/libexec/netdata" + deletedir "${NETDATA_PREFIX}/var/lib/netdata" + deletedir "${NETDATA_PREFIX}/var/cache/netdata" + deletedir "${NETDATA_PREFIX}/var/log/netdata" + fi - if [ -f "${NETDATA_PREFIX}/usr/sbin/netdata" ] + if [ -f /etc/logrotate.d/netdata ] then - echo "Deleting ${NETDATA_PREFIX}/usr/sbin/netdata ..." - rm -i "${NETDATA_PREFIX}/usr/sbin/netdata" + echo "Deleting /etc/logrotate.d/netdata ..." + rm -i /etc/logrotate.d/netdata fi - deletedir "${NETDATA_PREFIX}/etc/netdata" - deletedir "${NETDATA_PREFIX}/usr/share/netdata" - deletedir "${NETDATA_PREFIX}/usr/libexec/netdata" - deletedir "${NETDATA_PREFIX}/var/cache/netdata" - deletedir "${NETDATA_PREFIX}/var/log/netdata" -fi + getent passwd netdata > /dev/null + if [ $? -eq 0 ] + then + echo + echo "You may also want to remove the user netdata" + echo "by running:" + echo " userdel netdata" + fi -getent passwd netdata > /dev/null -if [ $? -eq 0 ] - then - echo - echo "You may also want to remove the user netdata" - echo "by running:" - echo " userdel netdata" -fi + getent group netdata > /dev/null + if [ $? -eq 0 ] + then + echo + echo "You may also want to remove the group netdata" + echo "by running:" + echo " groupdel netdata" + fi -getent group netdata > /dev/null -if [ $? -eq 0 ] - then - echo - echo "You may also want to remove the group netdata" - echo "by running:" - echo " groupdel netdata" -fi + getent group docker > /dev/null + if [ $? -eq 0 -a "${NETDATA_ADDED_TO_DOCKER}" = "1" ] + then + echo + echo "You may also want to remove the netdata user from the docker group" + echo "by running:" + echo " gpasswd -d netdata docker" + fi UNINSTALL chmod 750 netdata-uninstaller.sh # ----------------------------------------------------------------------------- -cat <<END - - -------------------------------------------------------------------------------- +if [ "${NETDATA_BIND}" = "*" ] + then + access="localhost" +else + access="${NETDATA_BIND}" +fi -OK. NetData is installed and it is running. +cat <<-END -------------------------------------------------------------------------------- + ------------------------------------------------------------------------------- -Hit http://localhost:${NETDATA_PORT}/ from your browser. + OK. NetData is installed and it is running (listening to ${NETDATA_BIND}:${NETDATA_PORT}). -To stop netdata, just kill it, with: + ------------------------------------------------------------------------------- - killall netdata -To start it, just run it: + Hit http://${access}:${NETDATA_PORT}/ from your browser. - ${NETDATA_PREFIX}/usr/sbin/netdata + To stop netdata, just kill it, with: + killall netdata -Enjoy! 
+ To start it, just run it: - Give netdata a Github Star, at: + ${NETDATA_PREFIX}/usr/sbin/netdata - https://github.com/firehol/netdata/wiki + Enjoy! END echo >&2 "Uninstall script generated: ./netdata-uninstaller.sh" diff --git a/netdata.spec b/netdata.spec index 21d028fe2..88a7c7641 100644 --- a/netdata.spec +++ b/netdata.spec @@ -10,11 +10,11 @@ Summary: Real-time performance monitoring, done right Name: netdata -Version: 1.1.0 +Version: 1.2.0 Release: %{?release_suffix}%{?dist} License: GPL v3+ Group: Applications/System -Source0: http://firehol.org/download/netdata/releases/v1.1.0/%{name}-1.1.0.tar.xz +Source0: http://firehol.org/download/netdata/releases/v1.2.0/%{name}-1.2.0.tar.xz URL: http://netdata.firehol.org/ BuildRequires: pkgconfig BuildRequires: xz @@ -42,7 +42,7 @@ so that you can get insights of what is happening now and what just happened, on your systems and applications. %prep -%setup -q -n %{name}-1.1.0 +%setup -q -n %{name}-1.2.0 %build %configure \ @@ -96,10 +96,15 @@ rm -rf $RPM_BUILD_ROOT %dir %{_datadir}/%{name} # override defattr for web files -%defattr(755,root,netdata,644) +%defattr(644,root,netdata,755) %{_datadir}/%{name}/web %changelog +* Mon May 16 2016 Costa Tsaousis <costa@tsaousis.gr> - 1.2.0-1 +- netdata is now 30% faster. +- netdata now has a registry (my-netdata menu on the dashboard). +- netdata now monitors Linux containers. +- Several more improvements, new features and bugfixes. * Wed Apr 20 2016 Costa Tsaousis <costa@tsaousis.gr> - 1.1.0-1 - Several new features (IPv6, SYNPROXY, Users, Users Groups). - A lot of bug fixes and optimizations. diff --git a/netdata.spec.in b/netdata.spec.in index 7d63650ec..339c5c63d 100644 --- a/netdata.spec.in +++ b/netdata.spec.in @@ -96,10 +96,15 @@ rm -rf $RPM_BUILD_ROOT %dir %{_datadir}/%{name} # override defattr for web files -%defattr(755,root,netdata,644) +%defattr(644,root,netdata,755) %{_datadir}/%{name}/web %changelog +* Mon May 16 2016 Costa Tsaousis <costa@tsaousis.gr> - 1.2.0-1 +- netdata is now 30% faster. +- netdata now has a registry (my-netdata menu on the dashboard). +- netdata now monitors Linux containers. +- Several more improvements, new features and bugfixes. * Wed Apr 20 2016 Costa Tsaousis <costa@tsaousis.gr> - 1.1.0-1 - Several new features (IPv6, SYNPROXY, Users, Users Groups). - A lot of bug fixes and optimizations. 
diff --git a/node.d/Makefile.am b/node.d/Makefile.am index b6892bb6c..c1caa4f0e 100644 --- a/node.d/Makefile.am +++ b/node.d/Makefile.am @@ -14,7 +14,6 @@ dist_nodemodules_DATA = \ node_modules/pixl-xml.js \ node_modules/net-snmp.js \ node_modules/asn1.js \ - node_modules/node-int64.js \ $(NULL) nodemodulesberdir=$(nodedir)/node_modules/ber diff --git a/node.d/Makefile.in b/node.d/Makefile.in index eb4a678ff..cb073117c 100644 --- a/node.d/Makefile.in +++ b/node.d/Makefile.in @@ -187,6 +187,8 @@ OPTIONAL_MATH_CLFAGS = @OPTIONAL_MATH_CLFAGS@ OPTIONAL_MATH_LIBS = @OPTIONAL_MATH_LIBS@ OPTIONAL_NFACCT_CLFAGS = @OPTIONAL_NFACCT_CLFAGS@ OPTIONAL_NFACCT_LIBS = @OPTIONAL_NFACCT_LIBS@ +OPTIONAL_UUID_CLFAGS = @OPTIONAL_UUID_CLFAGS@ +OPTIONAL_UUID_LIBS = @OPTIONAL_UUID_LIBS@ OPTIONAL_ZLIB_CLFAGS = @OPTIONAL_ZLIB_CLFAGS@ OPTIONAL_ZLIB_LIBS = @OPTIONAL_ZLIB_LIBS@ PACKAGE = @PACKAGE@ @@ -208,6 +210,8 @@ PTHREAD_LIBS = @PTHREAD_LIBS@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ STRIP = @STRIP@ +UUID_CFLAGS = @UUID_CFLAGS@ +UUID_LIBS = @UUID_LIBS@ VERSION = @VERSION@ ZLIB_CFLAGS = @ZLIB_CFLAGS@ ZLIB_LIBS = @ZLIB_LIBS@ @@ -268,6 +272,7 @@ target_alias = @target_alias@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ +varlibdir = @varlibdir@ webdir = @webdir@ MAINTAINERCLEANFILES = $(srcdir)/Makefile.in dist_node_DATA = \ @@ -284,7 +289,6 @@ dist_nodemodules_DATA = \ node_modules/pixl-xml.js \ node_modules/net-snmp.js \ node_modules/asn1.js \ - node_modules/node-int64.js \ $(NULL) nodemodulesberdir = $(nodedir)/node_modules/ber diff --git a/node.d/node_modules/netdata.js b/node.d/node_modules/netdata.js index f36a97b69..6183993b5 100644 --- a/node.d/node_modules/netdata.js +++ b/node.d/node_modules/netdata.js @@ -3,7 +3,6 @@ var url = require('url'); var http = require('http'); var util = require('util'); -var Int64 = require('node-int64'); /* var netdata = require('netdata'); @@ -341,10 +340,8 @@ var netdata = { return false; if(this._current_chart._dimensions_count !== 0) { - if (value instanceof Buffer) { - var value64 = new Int64(value); - this.queue('SET ' + dimension + ' = ' + value64.toString(10)); - } + if (value instanceof Buffer) + this.queue('SET ' + dimension + ' = 0x' + value.toString('hex')); else this.queue('SET ' + dimension + ' = ' + value.toString()); } diff --git a/node.d/node_modules/node-int64.js b/node.d/node_modules/node-int64.js deleted file mode 100644 index f870a2a94..000000000 --- a/node.d/node_modules/node-int64.js +++ /dev/null @@ -1,268 +0,0 @@ -// Int64.js -// -// Copyright (c) 2012 Robert Kieffer -// MIT License - http://opensource.org/licenses/mit-license.php - -/** - * Support for handling 64-bit int numbers in Javascript (node.js) - * - * JS Numbers are IEEE-754 binary double-precision floats, which limits the - * range of values that can be represented with integer precision to: - * - * 2^^53 <= N <= 2^53 - * - * Int64 objects wrap a node Buffer that holds the 8-bytes of int64 data. These - * objects operate directly on the buffer which means that if they are created - * using an existing buffer then setting the value will modify the Buffer, and - * vice-versa. - * - * Internal Representation - * - * The internal buffer format is Big Endian. I.e. the most-significant byte is - * at buffer[0], the least-significant at buffer[7]. For the purposes of - * converting to/from JS native numbers, the value is assumed to be a signed - * integer stored in 2's complement form. 
- * - * For details about IEEE-754 see: - * http://en.wikipedia.org/wiki/Double_precision_floating-point_format - */ - -// Useful masks and values for bit twiddling -var MASK31 = 0x7fffffff, VAL31 = 0x80000000; -var MASK32 = 0xffffffff, VAL32 = 0x100000000; - -// Map for converting hex octets to strings -var _HEX = []; -for (var i = 0; i < 256; i++) { - _HEX[i] = (i > 0xF ? '' : '0') + i.toString(16); -} - -// -// Int64 -// - -/** - * Constructor accepts any of the following argument types: - * - * new Int64(buffer[, offset=0]) - Existing Buffer with byte offset - * new Int64(Uint8Array[, offset=0]) - Existing Uint8Array with a byte offset - * new Int64(string) - Hex string (throws if n is outside int64 range) - * new Int64(number) - Number (throws if n is outside int64 range) - * new Int64(hi, lo) - Raw bits as two 32-bit values - */ -var Int64 = module.exports = function(a1, a2) { - if (a1 instanceof Buffer) { - this.buffer = a1; - this.offset = a2 || 0; - } else if (Object.prototype.toString.call(a1) == '[object Uint8Array]') { - // Under Browserify, Buffers can extend Uint8Arrays rather than an - // instance of Buffer. We could assume the passed in Uint8Array is actually - // a buffer but that won't handle the case where a raw Uint8Array is passed - // in. We construct a new Buffer just in case. - this.buffer = new Buffer(a1); - this.offset = a2 || 0; - } else { - this.buffer = this.buffer || new Buffer(8); - this.offset = 0; - this.setValue.apply(this, arguments); - } -}; - - -// Max integer value that JS can accurately represent -Int64.MAX_INT = Math.pow(2, 53); - -// Min integer value that JS can accurately represent -Int64.MIN_INT = -Math.pow(2, 53); - -Int64.prototype = { - - constructor: Int64, - - /** - * Do in-place 2's compliment. See - * http://en.wikipedia.org/wiki/Two's_complement - */ - _2scomp: function() { - var b = this.buffer, o = this.offset, carry = 1; - for (var i = o + 7; i >= o; i--) { - var v = (b[i] ^ 0xff) + carry; - b[i] = v & 0xff; - carry = v >> 8; - } - }, - - /** - * Set the value. Takes any of the following arguments: - * - * setValue(string) - A hexidecimal string - * setValue(number) - Number (throws if n is outside int64 range) - * setValue(hi, lo) - Raw bits as two 32-bit values - */ - setValue: function(hi, lo) { - var negate = false; - if (arguments.length == 1) { - if (typeof(hi) == 'number') { - // Simplify bitfield retrieval by using abs() value. We restore sign - // later - negate = hi < 0; - hi = Math.abs(hi); - lo = hi % VAL32; - hi = hi / VAL32; - if (hi > VAL32) throw new RangeError(hi + ' is outside Int64 range'); - hi = hi | 0; - } else if (typeof(hi) == 'string') { - hi = (hi + '').replace(/^0x/, ''); - lo = hi.substr(-8); - hi = hi.length > 8 ? hi.substr(0, hi.length - 8) : ''; - hi = parseInt(hi, 16); - lo = parseInt(lo, 16); - } else { - throw new Error(hi + ' must be a Number or String'); - } - } - - // Technically we should throw if hi or lo is outside int32 range here, but - // it's not worth the effort. Anything past the 32'nd bit is ignored. - - // Copy bytes to buffer - var b = this.buffer, o = this.offset; - for (var i = 7; i >= 0; i--) { - b[o+i] = lo & 0xff; - lo = i == 4 ? hi : lo >>> 8; - } - - // Restore sign of passed argument - if (negate) this._2scomp(); - }, - - /** - * Convert to a native JS number. - * - * WARNING: Do not expect this value to be accurate to integer precision for - * large (positive or negative) numbers! 
- * - * @param allowImprecise If true, no check is performed to verify the - * returned value is accurate to integer precision. If false, imprecise - * numbers (very large positive or negative numbers) will be forced to +/- - * Infinity. - */ - toNumber: function(allowImprecise) { - var b = this.buffer, o = this.offset; - - // Running sum of octets, doing a 2's complement - var negate = b[o] & 0x80, x = 0, carry = 1; - for (var i = 7, m = 1; i >= 0; i--, m *= 256) { - var v = b[o+i]; - - // 2's complement for negative numbers - if (negate) { - v = (v ^ 0xff) + carry; - carry = v >> 8; - v = v & 0xff; - } - - x += v * m; - } - - // Return Infinity if we've lost integer precision - if (!allowImprecise && x >= Int64.MAX_INT) { - return negate ? -Infinity : Infinity; - } - - return negate ? -x : x; - }, - - /** - * Convert to a JS Number. Returns +/-Infinity for values that can't be - * represented to integer precision. - */ - valueOf: function() { - return this.toNumber(false); - }, - - /** - * Return string value - * - * @param radix Just like Number#toString()'s radix - */ - toString: function(radix) { - return this.valueOf().toString(radix || 10); - }, - - /** - * Return a string showing the buffer octets, with MSB on the left. - * - * @param sep separator string. default is '' (empty string) - */ - toOctetString: function(sep) { - var out = new Array(8); - var b = this.buffer, o = this.offset; - for (var i = 0; i < 8; i++) { - out[i] = _HEX[b[o+i]]; - } - return out.join(sep || ''); - }, - - /** - * Returns the int64's 8 bytes in a buffer. - * - * @param {bool} [rawBuffer=false] If no offset and this is true, return the internal buffer. Should only be used if - * you're discarding the Int64 afterwards, as it breaks encapsulation. - */ - toBuffer: function(rawBuffer) { - if (rawBuffer && this.offset === 0) return this.buffer; - - var out = new Buffer(8); - this.buffer.copy(out, 0, this.offset, this.offset + 8); - return out; - }, - - /** - * Copy 8 bytes of int64 into target buffer at target offset. - * - * @param {Buffer} targetBuffer Buffer to copy into. - * @param {number} [targetOffset=0] Offset into target buffer. - */ - copy: function(targetBuffer, targetOffset) { - this.buffer.copy(targetBuffer, targetOffset || 0, this.offset, this.offset + 8); - }, - - /** - * Returns a number indicating whether this comes before or after or is the - * same as the other in sort order. - * - * @param {Int64} other Other Int64 to compare. - */ - compare: function(other) { - - // If sign bits differ ... - if ((this.buffer[this.offset] & 0x80) != (other.buffer[other.offset] & 0x80)) { - return other.buffer[other.offset] - this.buffer[this.offset]; - } - - // otherwise, compare bytes lexicographically - for (var i = 0; i < 8; i++) { - if (this.buffer[this.offset+i] !== other.buffer[other.offset+i]) { - return this.buffer[this.offset+i] - other.buffer[other.offset+i]; - } - } - return 0; - }, - - /** - * Returns a boolean indicating if this integer is equal to other. - * - * @param {Int64} other Other Int64 to compare. 
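[Editorial note, not part of the patch] The ordering rule documented for compare() above (check the sign bits first, then compare the bytes lexicographically) carries over to any big-endian two's-complement buffer. A hedged C equivalent, purely for illustration:

```c
#include <stdint.h>
#include <string.h>

/* Order two signed 64-bit values stored as 8 big-endian bytes, following
 * the rule in the comment above: if the sign bits differ, the negative
 * value sorts first; otherwise plain byte order equals numeric order for
 * big-endian two's-complement data. Illustrative only. */
static int be_int64_compare(const uint8_t a[8], const uint8_t b[8]) {
    if ((a[0] & 0x80) != (b[0] & 0x80))
        return (a[0] & 0x80) ? -1 : 1;  /* negative before positive */
    return memcmp(a, b, 8);             /* same sign: lexicographic == numeric */
}
```

An equality test is then just `be_int64_compare(a, b) == 0`, which mirrors how the removed module builds equals() on top of compare() in the hunk that follows.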
- */ - equals: function(other) { - return this.compare(other) === 0; - }, - - /** - * Pretty output in console.log - */ - inspect: function() { - return '[Int64 value:' + this + ' octets:' + this.toOctetString(' ') + ']'; - } -}; diff --git a/plugins.d/Makefile.am b/plugins.d/Makefile.am index a89ee4cdd..a717cbed1 100644 --- a/plugins.d/Makefile.am +++ b/plugins.d/Makefile.am @@ -8,6 +8,7 @@ dist_plugins_DATA = \ $(NULL) dist_plugins_SCRIPTS = \ + cgroup-name.sh \ charts.d.dryrun-helper.sh \ charts.d.plugin \ node.d.plugin \ diff --git a/plugins.d/Makefile.in b/plugins.d/Makefile.in index 3b4e5198f..c74997688 100644 --- a/plugins.d/Makefile.in +++ b/plugins.d/Makefile.in @@ -186,6 +186,8 @@ OPTIONAL_MATH_CLFAGS = @OPTIONAL_MATH_CLFAGS@ OPTIONAL_MATH_LIBS = @OPTIONAL_MATH_LIBS@ OPTIONAL_NFACCT_CLFAGS = @OPTIONAL_NFACCT_CLFAGS@ OPTIONAL_NFACCT_LIBS = @OPTIONAL_NFACCT_LIBS@ +OPTIONAL_UUID_CLFAGS = @OPTIONAL_UUID_CLFAGS@ +OPTIONAL_UUID_LIBS = @OPTIONAL_UUID_LIBS@ OPTIONAL_ZLIB_CLFAGS = @OPTIONAL_ZLIB_CLFAGS@ OPTIONAL_ZLIB_LIBS = @OPTIONAL_ZLIB_LIBS@ PACKAGE = @PACKAGE@ @@ -207,6 +209,8 @@ PTHREAD_LIBS = @PTHREAD_LIBS@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ STRIP = @STRIP@ +UUID_CFLAGS = @UUID_CFLAGS@ +UUID_LIBS = @UUID_LIBS@ VERSION = @VERSION@ ZLIB_CFLAGS = @ZLIB_CFLAGS@ ZLIB_LIBS = @ZLIB_LIBS@ @@ -267,6 +271,7 @@ target_alias = @target_alias@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ +varlibdir = @varlibdir@ webdir = @webdir@ # @@ -278,6 +283,7 @@ dist_plugins_DATA = \ $(NULL) dist_plugins_SCRIPTS = \ + cgroup-name.sh \ charts.d.dryrun-helper.sh \ charts.d.plugin \ node.d.plugin \ diff --git a/plugins.d/cgroup-name.sh b/plugins.d/cgroup-name.sh new file mode 100755 index 000000000..8ce64b3d7 --- /dev/null +++ b/plugins.d/cgroup-name.sh @@ -0,0 +1,75 @@ +#!/bin/bash + +export PATH="${PATH}:/sbin:/usr/sbin:/usr/local/sbin" +export LC_ALL=C + +NETDATA_CONFIG_DIR="${NETDATA_CONFIG_DIR-/etc/netdata}" +CONFIG="${NETDATA_CONFIG_DIR}/cgroups-names.conf" +CGROUP="${1}" +NAME= + +if [ -z "${CGROUP}" ] + then + echo >&2 "${0}: called without a cgroup name. Nothing to do." + exit 1 +fi + +if [ -f "${CONFIG}" ] + then + NAME="$(cat "${CONFIG}" | grep "^${CGROUP} " | sed "s/[[:space:]]\+/ /g" | cut -d ' ' -f 2)" + if [ -z "${NAME}" ] + then + echo >&2 "${0}: cannot find cgroup '${CGROUP}' in '${CONFIG}'." + fi +#else +# echo >&2 "${0}: configuration file '${CONFIG}' is not available." +fi + +function get_name_classic { + DOCKERID=$1 + echo >&2 "Running command: docker ps --filter=id=\"${DOCKERID}\" --format=\"{{.Names}}\"" + NAME="$( docker ps --filter=id="${DOCKERID}" --format="{{.Names}}" )" +} + +function get_name_api { + DOCKERID=$1 + if [ ! -S "/var/run/docker.sock" ] + then + echo >&2 "Can't find /var/run/docker.sock" + return + fi + echo >&2 "Running API command: /containers/${DOCKERID}/json" + JSON=$(echo -e "GET /containers/${DOCKERID}/json HTTP/1.0\r\n" | nc -U /var/run/docker.sock | egrep '^{.*') + NAME=$(echo $JSON | jq -r .Name,.Config.Hostname | grep -v null | head -n1 | sed 's|^/||') +} + +if [ -z "${NAME}" ] + then + if [[ "${CGROUP}" =~ ^.*docker[-/\.][a-fA-F0-9]+[-\.]?.*$ ]] + then + DOCKERID="$( echo "${CGROUP}" | sed "s|^.*docker[-/]\([a-fA-F0-9]\+\)[-\.]\?.*$|\1|" )" + + if [ ! 
-z "${DOCKERID}" -a \( ${#DOCKERID} -eq 64 -o ${#DOCKERID} -eq 12 \) ] + then + if hash docker 2>/dev/null + then + get_name_classic $DOCKERID + else + get_name_api $DOCKERID + fi + if [ -z "${NAME}" ] + then + echo >&2 "Cannot find the name of docker container '${DOCKERID}'" + NAME="${DOCKERID:0:12}" + else + echo >&2 "Docker container '${DOCKERID}' is named '${NAME}'" + fi + fi + fi + + [ -z "${NAME}" ] && NAME="${CGROUP}" + [ ${#NAME} -gt 50 ] && NAME="${NAME:0:50}" +fi + +echo >&2 "${0}: cgroup '${CGROUP}' is called '${NAME}'" +echo "${NAME}" diff --git a/plugins.d/charts.d.plugin b/plugins.d/charts.d.plugin index 27b294709..2824fa3c6 100755 --- a/plugins.d/charts.d.plugin +++ b/plugins.d/charts.d.plugin @@ -84,7 +84,7 @@ time_divisor=50 # number of seconds to run without restart # after this time, charts.d.plugin will exit # netdata will restart it -restart_timeout=$[3600 * 4] +restart_timeout=$((3600 * 4)) # check if the charts.d plugins are using global variables # they should not. @@ -247,7 +247,7 @@ float2int() { # echo >&2 "value='${1}' f='${f}', m='${m}'" # the length of the multiplier - 1 - l=$[ ${#m} - 1 ] + l=$(( ${#m} - 1 )) # check if the number is in scientific notation if [[ ${f} =~ ^[[:space:]]*(-)?[0-9.]+(e|E)(\+|-)[0-9]+ ]] @@ -269,7 +269,7 @@ float2int() { # strip leading zeros from the integer part # base 10 convertion - a=$[10#$a] + a=$((10#$a)) # check the length of the decimal part # against the length of the multiplier @@ -281,16 +281,16 @@ float2int() { elif [ ${#b} -lt ${l} ] then # too few digits - pad with zero on the right - local z="00000000000000000000000" r=$[l - ${#b}] + local z="00000000000000000000000" r=$((l - ${#b})) b="${b}${z:0:${r}}" fi # strip leading zeros from the decimal part # base 10 convertion - b=$[10#$b] + b=$((10#$b)) # store the result - FLOAT2INT_RESULT=$[ (a * m) + b ] + FLOAT2INT_RESULT=$(( (a * m) + b )) #echo >&2 "FLOAT2INT_RESULT='${FLOAT2INT_RESULT}'" } diff --git a/src/Makefile.am b/src/Makefile.am index a6808f424..e9fc8f332 100644 --- a/src/Makefile.am +++ b/src/Makefile.am @@ -4,6 +4,7 @@ MAINTAINERCLEANFILES= $(srcdir)/Makefile.in AM_CPPFLAGS = \ + -DVARLIB_DIR="\"$(varlibdir)\"" \ -DCACHE_DIR="\"$(cachedir)\"" \ -DCONFIG_DIR="\"$(configdir)\"" \ -DLOG_DIR="\"$(logdir)\"" \ @@ -15,6 +16,7 @@ AM_CFLAGS = \ $(OPTIONAL_MATH_CFLAGS) \ $(OPTIONAL_NFACCT_CLFAGS) \ $(OPTIONAL_ZLIB_CFLAGS) \ + $(OPTIONAL_UUID_CFLAGS) \ $(NULL) sbin_PROGRAMS = netdata @@ -52,10 +54,13 @@ netdata_SOURCES = \ proc_net_stat_conntrack.c \ proc_net_stat_synproxy.c \ proc_stat.c \ + proc_self_mountinfo.c proc_self_mountinfo.h \ proc_sys_kernel_random_entropy_avail.c \ proc_vmstat.c \ sys_kernel_mm_ksm.c \ + sys_fs_cgroup.c \ procfile.c procfile.h \ + registry.c registry.h \ rrd.c rrd.h \ rrd2json.c rrd2json.h \ storage_number.c storage_number.h \ @@ -69,6 +74,7 @@ netdata_LDADD = \ $(OPTIONAL_MATH_LIBS) \ $(OPTIONAL_NFACCT_LIBS) \ $(OPTIONAL_ZLIB_LIBS) \ + $(OPTIONAL_UUID_LIBS) \ $(NULL) apps_plugin_SOURCES = \ @@ -82,12 +88,16 @@ apps_plugin_SOURCES = \ install-data-hook: if [ `id -u` == 0 ]; then \ chown root '$(DESTDIR)$(pluginsdir)/apps.plugin' && \ - chmod 4755 '$(DESTDIR)$(pluginsdir)/apps.plugin'; \ + chmod 0755 '$(DESTDIR)$(pluginsdir)/apps.plugin' && \ + ( setcap cap_dac_read_search,cap_sys_ptrace+ep '$(DESTDIR)$(pluginsdir)/apps.plugin' || \ + chmod 4755 '$(DESTDIR)$(pluginsdir)/apps.plugin' ); \ else \ echo; \ echo "ATTENTION"; \ echo; \ - echo "setuid bit of $(pluginsdir)/apps.plugin must be set, please execute as root:"; \ - echo "chown root 
'$(pluginsdir)/apps.plugin' && chmod 4755 '$(pluginsdir)/apps.plugin'"; \ + echo "$(pluginsdir)/apps.plugin requires escalated capabilities:"; \ + echo "sudo chown root '$(DESTDIR)$(pluginsdir)/apps.plugin'"; \ + echo "sudo chmod 0755 '$(DESTDIR)$(pluginsdir)/apps.plugin'"; \ + echo "sudo setcap cap_dac_read_search,cap_sys_ptrace+ep '$(DESTDIR)$(pluginsdir)/apps.plugin'"; \ echo; \ fi diff --git a/src/Makefile.in b/src/Makefile.in index 20aa81ef0..11b68946e 100644 --- a/src/Makefile.in +++ b/src/Makefile.in @@ -113,15 +113,17 @@ am_netdata_OBJECTS = appconfig.$(OBJEXT) avl.$(OBJEXT) \ proc_net_rpc_nfsd.$(OBJEXT) proc_net_snmp.$(OBJEXT) \ proc_net_snmp6.$(OBJEXT) proc_net_stat_conntrack.$(OBJEXT) \ proc_net_stat_synproxy.$(OBJEXT) proc_stat.$(OBJEXT) \ + proc_self_mountinfo.$(OBJEXT) \ proc_sys_kernel_random_entropy_avail.$(OBJEXT) \ proc_vmstat.$(OBJEXT) sys_kernel_mm_ksm.$(OBJEXT) \ - procfile.$(OBJEXT) rrd.$(OBJEXT) rrd2json.$(OBJEXT) \ - storage_number.$(OBJEXT) unit_test.$(OBJEXT) url.$(OBJEXT) \ - web_buffer.$(OBJEXT) web_client.$(OBJEXT) web_server.$(OBJEXT) + sys_fs_cgroup.$(OBJEXT) procfile.$(OBJEXT) registry.$(OBJEXT) \ + rrd.$(OBJEXT) rrd2json.$(OBJEXT) storage_number.$(OBJEXT) \ + unit_test.$(OBJEXT) url.$(OBJEXT) web_buffer.$(OBJEXT) \ + web_client.$(OBJEXT) web_server.$(OBJEXT) netdata_OBJECTS = $(am_netdata_OBJECTS) am__DEPENDENCIES_1 = netdata_DEPENDENCIES = $(am__DEPENDENCIES_1) $(am__DEPENDENCIES_1) \ - $(am__DEPENDENCIES_1) + $(am__DEPENDENCIES_1) $(am__DEPENDENCIES_1) AM_V_P = $(am__v_P_@AM_V@) am__v_P_ = $(am__v_P_@AM_DEFAULT_V@) am__v_P_0 = false @@ -249,6 +251,8 @@ OPTIONAL_MATH_CLFAGS = @OPTIONAL_MATH_CLFAGS@ OPTIONAL_MATH_LIBS = @OPTIONAL_MATH_LIBS@ OPTIONAL_NFACCT_CLFAGS = @OPTIONAL_NFACCT_CLFAGS@ OPTIONAL_NFACCT_LIBS = @OPTIONAL_NFACCT_LIBS@ +OPTIONAL_UUID_CLFAGS = @OPTIONAL_UUID_CLFAGS@ +OPTIONAL_UUID_LIBS = @OPTIONAL_UUID_LIBS@ OPTIONAL_ZLIB_CLFAGS = @OPTIONAL_ZLIB_CLFAGS@ OPTIONAL_ZLIB_LIBS = @OPTIONAL_ZLIB_LIBS@ PACKAGE = @PACKAGE@ @@ -270,6 +274,8 @@ PTHREAD_LIBS = @PTHREAD_LIBS@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ STRIP = @STRIP@ +UUID_CFLAGS = @UUID_CFLAGS@ +UUID_LIBS = @UUID_LIBS@ VERSION = @VERSION@ ZLIB_CFLAGS = @ZLIB_CFLAGS@ ZLIB_LIBS = @ZLIB_LIBS@ @@ -330,6 +336,7 @@ target_alias = @target_alias@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ +varlibdir = @varlibdir@ webdir = @webdir@ # @@ -337,6 +344,7 @@ webdir = @webdir@ # MAINTAINERCLEANFILES = $(srcdir)/Makefile.in AM_CPPFLAGS = \ + -DVARLIB_DIR="\"$(varlibdir)\"" \ -DCACHE_DIR="\"$(cachedir)\"" \ -DCONFIG_DIR="\"$(configdir)\"" \ -DLOG_DIR="\"$(logdir)\"" \ @@ -349,6 +357,7 @@ AM_CFLAGS = \ $(OPTIONAL_MATH_CFLAGS) \ $(OPTIONAL_NFACCT_CLFAGS) \ $(OPTIONAL_ZLIB_CFLAGS) \ + $(OPTIONAL_UUID_CFLAGS) \ $(NULL) dist_cache_DATA = .keep @@ -383,10 +392,13 @@ netdata_SOURCES = \ proc_net_stat_conntrack.c \ proc_net_stat_synproxy.c \ proc_stat.c \ + proc_self_mountinfo.c proc_self_mountinfo.h \ proc_sys_kernel_random_entropy_avail.c \ proc_vmstat.c \ sys_kernel_mm_ksm.c \ + sys_fs_cgroup.c \ procfile.c procfile.h \ + registry.c registry.h \ rrd.c rrd.h \ rrd2json.c rrd2json.h \ storage_number.c storage_number.h \ @@ -401,6 +413,7 @@ netdata_LDADD = \ $(OPTIONAL_MATH_LIBS) \ $(OPTIONAL_NFACCT_LIBS) \ $(OPTIONAL_ZLIB_LIBS) \ + $(OPTIONAL_UUID_LIBS) \ $(NULL) apps_plugin_SOURCES = \ @@ -572,14 +585,17 @@ distclean-compile: @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/proc_net_snmp6.Po@am__quote@ @AMDEP_TRUE@@am__include@ 
@am__quote@./$(DEPDIR)/proc_net_stat_conntrack.Po@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/proc_net_stat_synproxy.Po@am__quote@ +@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/proc_self_mountinfo.Po@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/proc_softirqs.Po@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/proc_stat.Po@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/proc_sys_kernel_random_entropy_avail.Po@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/proc_vmstat.Po@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/procfile.Po@am__quote@ +@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/registry.Po@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/rrd.Po@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/rrd2json.Po@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/storage_number.Po@am__quote@ +@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/sys_fs_cgroup.Po@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/sys_kernel_mm_ksm.Po@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/unit_test.Po@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/url.Po@am__quote@ @@ -858,13 +874,17 @@ uninstall-am: uninstall-dist_cacheDATA uninstall-dist_logDATA \ install-data-hook: if [ `id -u` == 0 ]; then \ chown root '$(DESTDIR)$(pluginsdir)/apps.plugin' && \ - chmod 4755 '$(DESTDIR)$(pluginsdir)/apps.plugin'; \ + chmod 0755 '$(DESTDIR)$(pluginsdir)/apps.plugin' && \ + ( setcap cap_dac_read_search,cap_sys_ptrace+ep '$(DESTDIR)$(pluginsdir)/apps.plugin' || \ + chmod 4755 '$(DESTDIR)$(pluginsdir)/apps.plugin' ); \ else \ echo; \ echo "ATTENTION"; \ echo; \ - echo "setuid bit of $(pluginsdir)/apps.plugin must be set, please execute as root:"; \ - echo "chown root '$(pluginsdir)/apps.plugin' && chmod 4755 '$(pluginsdir)/apps.plugin'"; \ + echo "$(pluginsdir)/apps.plugin requires escalated capabilities:"; \ + echo "sudo chown root '$(DESTDIR)$(pluginsdir)/apps.plugin'"; \ + echo "sudo chmod 0755 '$(DESTDIR)$(pluginsdir)/apps.plugin'"; \ + echo "sudo setcap cap_dac_read_search,cap_sys_ptrace+ep '$(DESTDIR)$(pluginsdir)/apps.plugin'"; \ echo; \ fi diff --git a/src/appconfig.c b/src/appconfig.c index 73b946508..0ec4cad32 100644 --- a/src/appconfig.c +++ b/src/appconfig.c @@ -1,3 +1,11 @@ + +/* + * TODO + * + * 1. Re-write this using DICTIONARY + * + */ + #ifdef HAVE_CONFIG_H #include <config.h> #endif @@ -12,7 +20,7 @@ #define CONFIG_FILE_LINE_MAX ((CONFIG_MAX_NAME + CONFIG_MAX_VALUE + 1024) * 2) -pthread_rwlock_t config_rwlock = PTHREAD_RWLOCK_INITIALIZER; +pthread_mutex_t config_mutex = PTHREAD_MUTEX_INITIALIZER; // ---------------------------------------------------------------------------- // definitions @@ -25,15 +33,14 @@ pthread_rwlock_t config_rwlock = PTHREAD_RWLOCK_INITIALIZER; struct config_value { avl avl; // the index - this has to be first! 
+ uint8_t flags; uint32_t hash; // a simple hash to speed up searching // we first compare hashes, and only if the hashes are equal we do string comparisons char *name; char *value; - uint8_t flags; - - struct config_value *next; + struct config_value *next; // config->mutex protects just this }; struct config { @@ -44,19 +51,38 @@ struct config { char *name; - struct config_value *values; - avl_tree values_index; + struct config *next; // gloabl config_mutex protects just this - struct config *next; + struct config_value *values; + avl_tree_lock values_index; - pthread_rwlock_t rwlock; + pthread_mutex_t mutex; // this locks only the writers, to ensure atomic updates + // readers are protected using the rwlock in avl_tree_lock } *config_root = NULL; // ---------------------------------------------------------------------------- -// config value +// locking + +static inline void config_global_write_lock(void) { + pthread_mutex_lock(&config_mutex); +} + +static inline void config_global_unlock(void) { + pthread_mutex_unlock(&config_mutex); +} + +static inline void config_section_write_lock(struct config *co) { + pthread_mutex_lock(&co->mutex); +} + +static inline void config_section_unlock(struct config *co) { + pthread_mutex_unlock(&co->mutex); +} -static int config_value_iterator(avl *a) { if(a) {}; return 0; } + +// ---------------------------------------------------------------------------- +// config name-value index static int config_value_compare(void* a, void* b) { if(((struct config_value *)a)->hash < ((struct config_value *)b)->hash) return -1; @@ -64,22 +90,20 @@ static int config_value_compare(void* a, void* b) { else return strcmp(((struct config_value *)a)->name, ((struct config_value *)b)->name); } -#define config_value_index_add(co, cv) avl_insert(&((co)->values_index), (avl *)(cv)) -#define config_value_index_del(co, cv) avl_remove(&((co)->values_index), (avl *)(cv)) +#define config_value_index_add(co, cv) avl_insert_lock(&((co)->values_index), (avl *)(cv)) +#define config_value_index_del(co, cv) avl_remove_lock(&((co)->values_index), (avl *)(cv)) static struct config_value *config_value_index_find(struct config *co, const char *name, uint32_t hash) { - struct config_value *result = NULL, tmp; + struct config_value tmp; tmp.hash = (hash)?hash:simple_hash(name); tmp.name = (char *)name; - avl_search(&(co->values_index), (avl *)&tmp, config_value_iterator, (avl **)&result); - return result; + return (struct config_value *)avl_search_lock(&(co->values_index), (avl *) &tmp); } -// ---------------------------------------------------------------------------- -// config -static int config_iterator(avl *a) { if(a) {}; return 0; } +// ---------------------------------------------------------------------------- +// config sections index static int config_compare(void* a, void* b) { if(((struct config *)a)->hash < ((struct config *)b)->hash) return -1; @@ -87,61 +111,31 @@ static int config_compare(void* a, void* b) { else return strcmp(((struct config *)a)->name, ((struct config *)b)->name); } -avl_tree config_root_index = { - NULL, - config_compare, -#ifdef AVL_LOCK_WITH_MUTEX - PTHREAD_MUTEX_INITIALIZER -#else - PTHREAD_RWLOCK_INITIALIZER -#endif +avl_tree_lock config_root_index = { + { NULL, config_compare }, + AVL_LOCK_INITIALIZER }; -#define config_index_add(cfg) avl_insert(&config_root_index, (avl *)(cfg)) -#define config_index_del(cfg) avl_remove(&config_root_index, (avl *)(cfg)) +#define config_index_add(cfg) avl_insert_lock(&config_root_index, (avl *)(cfg)) +#define 
config_index_del(cfg) avl_remove_lock(&config_root_index, (avl *)(cfg)) static struct config *config_index_find(const char *name, uint32_t hash) { - struct config *result = NULL, tmp; + struct config tmp; tmp.hash = (hash)?hash:simple_hash(name); tmp.name = (char *)name; - avl_search(&config_root_index, (avl *)&tmp, config_iterator, (avl **)&result); - return result; + return (struct config *)avl_search_lock(&config_root_index, (avl *) &tmp); } -struct config_value *config_value_create(struct config *co, const char *name, const char *value) -{ - debug(D_CONFIG, "Creating config entry for name '%s', value '%s', in section '%s'.", name, value, co->name); - - struct config_value *cv = calloc(1, sizeof(struct config_value)); - if(!cv) fatal("Cannot allocate config_value"); - - cv->name = strdup(name); - if(!cv->name) fatal("Cannot allocate config.name"); - cv->hash = simple_hash(cv->name); - - cv->value = strdup(value); - if(!cv->value) fatal("Cannot allocate config.value"); - - config_value_index_add(co, cv); - - // no need for string termination, due to calloc() - pthread_rwlock_wrlock(&co->rwlock); - - struct config_value *cv2 = co->values; - if(cv2) { - while (cv2->next) cv2 = cv2->next; - cv2->next = cv; - } - else co->values = cv; - - pthread_rwlock_unlock(&co->rwlock); +// ---------------------------------------------------------------------------- +// config section methods - return cv; +static inline struct config *config_section_find(const char *section) { + return config_index_find(section, 0); } -struct config *config_create(const char *section) +static inline struct config *config_section_create(const char *section) { debug(D_CONFIG, "Creating section '%s'.", section); @@ -152,114 +146,52 @@ struct config *config_create(const char *section) if(!co->name) fatal("Cannot allocate config.name"); co->hash = simple_hash(co->name); - pthread_rwlock_init(&co->rwlock, NULL); - avl_init(&co->values_index, config_value_compare); + avl_init_lock(&co->values_index, config_value_compare); config_index_add(co); - // no need for string termination, due to calloc() - - pthread_rwlock_wrlock(&config_rwlock); - + config_global_write_lock(); struct config *co2 = config_root; if(co2) { while (co2->next) co2 = co2->next; co2->next = co; } else config_root = co; - - pthread_rwlock_unlock(&config_rwlock); + config_global_unlock(); return co; } -struct config *config_find_section(const char *section) -{ - return config_index_find(section, 0); -} - -int load_config(char *filename, int overwrite_used) -{ - int line = 0; - struct config *co = NULL; - char buffer[CONFIG_FILE_LINE_MAX + 1], *s; - - if(!filename) filename = CONFIG_DIR "/" CONFIG_FILENAME; - FILE *fp = fopen(filename, "r"); - if(!fp) { - error("Cannot open file '%s'", filename); - return 0; - } - - while(fgets(buffer, CONFIG_FILE_LINE_MAX, fp) != NULL) { - buffer[CONFIG_FILE_LINE_MAX] = '\0'; - line++; - - s = trim(buffer); - if(!s) { - debug(D_CONFIG, "Ignoring line %d, it is empty.", line); - continue; - } - - int len = (int) strlen(s); - if(*s == '[' && s[len - 1] == ']') { - // new section - s[len - 1] = '\0'; - s++; - - co = config_find_section(s); - if(!co) co = config_create(s); - - continue; - } +// ---------------------------------------------------------------------------- +// config name-value methods - if(!co) { - // line outside a section - error("Ignoring line %d ('%s'), it is outside all sections.", line, s); - continue; - } +static inline struct config_value *config_value_create(struct config *co, const char *name, const char 
*value) +{ + debug(D_CONFIG, "Creating config entry for name '%s', value '%s', in section '%s'.", name, value, co->name); - char *name = s; - char *value = strchr(s, '='); - if(!value) { - error("Ignoring line %d ('%s'), there is no = in it.", line, s); - continue; - } - *value = '\0'; - value++; + struct config_value *cv = calloc(1, sizeof(struct config_value)); + if(!cv) fatal("Cannot allocate config_value"); - name = trim(name); - value = trim(value); + cv->name = strdup(name); + if(!cv->name) fatal("Cannot allocate config.name"); + cv->hash = simple_hash(cv->name); - if(!name) { - error("Ignoring line %d, name is empty.", line); - continue; - } - if(!value) { - debug(D_CONFIG, "Ignoring line %d, value is empty.", line); - continue; - } + cv->value = strdup(value); + if(!cv->value) fatal("Cannot allocate config.value"); - struct config_value *cv = config_value_index_find(co, name, 0); + config_value_index_add(co, cv); - if(!cv) cv = config_value_create(co, name, value); - else { - if(((cv->flags & CONFIG_VALUE_USED) && overwrite_used) || !(cv->flags & CONFIG_VALUE_USED)) { - debug(D_CONFIG, "Overwriting '%s/%s'.", line, co->name, cv->name); - free(cv->value); - cv->value = strdup(value); - if(!cv->value) fatal("Cannot allocate config.value"); - } - else - debug(D_CONFIG, "Ignoring line %d, '%s/%s' is already present and used.", line, co->name, cv->name); - } - cv->flags |= CONFIG_VALUE_LOADED; + config_section_write_lock(co); + struct config_value *cv2 = co->values; + if(cv2) { + while (cv2->next) cv2 = cv2->next; + cv2->next = cv; } + else co->values = cv; + config_section_unlock(co); - fclose(fp); - - return 1; + return cv; } char *config_get(const char *section, const char *name, const char *default_value) @@ -268,8 +200,8 @@ char *config_get(const char *section, const char *name, const char *default_valu debug(D_CONFIG, "request to get config in section '%s', name '%s', default_value '%s'", section, name, default_value); - struct config *co = config_find_section(section); - if(!co) co = config_create(section); + struct config *co = config_section_find(section); + if(!co) co = config_section_create(section); cv = config_value_index_find(co, name, 0); if(!cv) { @@ -346,7 +278,7 @@ const char *config_set_default(const char *section, const char *name, const char debug(D_CONFIG, "request to set config in section '%s', name '%s', value '%s'", section, name, value); - struct config *co = config_find_section(section); + struct config *co = config_section_find(section); if(!co) return config_set(section, name, value); cv = config_value_index_find(co, name, 0); @@ -374,8 +306,8 @@ const char *config_set(const char *section, const char *name, const char *value) debug(D_CONFIG, "request to set config in section '%s', name '%s', value '%s'", section, name, value); - struct config *co = config_find_section(section); - if(!co) co = config_create(section); + struct config *co = config_section_find(section); + if(!co) co = config_section_create(section); cv = config_value_index_find(co, name, 0); if(!cv) cv = config_value_create(co, name, value); @@ -413,6 +345,94 @@ int config_set_boolean(const char *section, const char *name, int value) return value; } + +// ---------------------------------------------------------------------------- +// config load/save + +int load_config(char *filename, int overwrite_used) +{ + int line = 0; + struct config *co = NULL; + + char buffer[CONFIG_FILE_LINE_MAX + 1], *s; + + if(!filename) filename = CONFIG_DIR "/" CONFIG_FILENAME; + FILE *fp = fopen(filename, "r"); + 
if(!fp) { + error("Cannot open file '%s'", filename); + return 0; + } + + while(fgets(buffer, CONFIG_FILE_LINE_MAX, fp) != NULL) { + buffer[CONFIG_FILE_LINE_MAX] = '\0'; + line++; + + s = trim(buffer); + if(!s) { + debug(D_CONFIG, "Ignoring line %d, it is empty.", line); + continue; + } + + int len = (int) strlen(s); + if(*s == '[' && s[len - 1] == ']') { + // new section + s[len - 1] = '\0'; + s++; + + co = config_section_find(s); + if(!co) co = config_section_create(s); + + continue; + } + + if(!co) { + // line outside a section + error("Ignoring line %d ('%s'), it is outside all sections.", line, s); + continue; + } + + char *name = s; + char *value = strchr(s, '='); + if(!value) { + error("Ignoring line %d ('%s'), there is no = in it.", line, s); + continue; + } + *value = '\0'; + value++; + + name = trim(name); + value = trim(value); + + if(!name) { + error("Ignoring line %d, name is empty.", line); + continue; + } + if(!value) { + debug(D_CONFIG, "Ignoring line %d, value is empty.", line); + continue; + } + + struct config_value *cv = config_value_index_find(co, name, 0); + + if(!cv) cv = config_value_create(co, name, value); + else { + if(((cv->flags & CONFIG_VALUE_USED) && overwrite_used) || !(cv->flags & CONFIG_VALUE_USED)) { + debug(D_CONFIG, "Overwriting '%s/%s'.", line, co->name, cv->name); + free(cv->value); + cv->value = strdup(value); + if(!cv->value) fatal("Cannot allocate config.value"); + } + else + debug(D_CONFIG, "Ignoring line %d, '%s/%s' is already present and used.", line, co->name, cv->name); + } + cv->flags |= CONFIG_VALUE_LOADED; + } + + fclose(fp); + + return 1; +} + void generate_config(BUFFER *wb, int only_changed) { int i, pri; @@ -438,9 +458,9 @@ void generate_config(BUFFER *wb, int only_changed) break; } - pthread_rwlock_wrlock(&config_rwlock); + config_global_write_lock(); for(co = config_root; co ; co = co->next) { - if(strcmp(co->name, "global") == 0 || strcmp(co->name, "plugins") == 0) pri = 0; + if(strcmp(co->name, "global") == 0 || strcmp(co->name, "plugins") == 0 || strcmp(co->name, "registry") == 0) pri = 0; else if(strncmp(co->name, "plugin:", 7) == 0) pri = 1; else pri = 2; @@ -449,15 +469,13 @@ void generate_config(BUFFER *wb, int only_changed) int changed = 0; int count = 0; - pthread_rwlock_wrlock(&co->rwlock); - + config_section_write_lock(co); for(cv = co->values; cv ; cv = cv->next) { - used += (cv->flags && CONFIG_VALUE_USED)?1:0; + used += (cv->flags & CONFIG_VALUE_USED)?1:0; changed += (cv->flags & CONFIG_VALUE_CHANGED)?1:0; count++; } - - pthread_rwlock_unlock(&co->rwlock); + config_section_unlock(co); if(!count) continue; if(only_changed && !changed) continue; @@ -468,7 +486,7 @@ void generate_config(BUFFER *wb, int only_changed) buffer_sprintf(wb, "\n[%s]\n", co->name); - pthread_rwlock_wrlock(&co->rwlock); + config_section_write_lock(co); for(cv = co->values; cv ; cv = cv->next) { if(used && !(cv->flags & CONFIG_VALUE_USED)) { @@ -476,10 +494,9 @@ void generate_config(BUFFER *wb, int only_changed) } buffer_sprintf(wb, "\t%s%s = %s\n", ((!(cv->flags & CONFIG_VALUE_CHANGED)) && (cv->flags & CONFIG_VALUE_USED))?"# ":"", cv->name, cv->value); } - pthread_rwlock_unlock(&co->rwlock); + config_section_unlock(co); } } - pthread_rwlock_unlock(&config_rwlock); + config_global_unlock(); } } - diff --git a/src/apps_plugin.c b/src/apps_plugin.c index e8a6f43ae..0bcdfcf50 100644 --- a/src/apps_plugin.c +++ b/src/apps_plugin.c @@ -39,12 +39,14 @@ #include "procfile.h" #include "../config.h" +#ifdef NETDATA_INTERNAL_CHECKS +#include <sys/prctl.h> 
+#endif + #define MAX_COMPARE_NAME 100 #define MAX_NAME 100 #define MAX_CMDLINE 1024 -unsigned long long Hertz = 1; - long processors = 1; long pid_max = 32768; int debug = 0; @@ -221,7 +223,7 @@ long get_system_cpus(void) { int processors = 0; char filename[FILENAME_MAX + 1]; - snprintf(filename, FILENAME_MAX, "%s/proc/stat", host_prefix); + snprintfz(filename, FILENAME_MAX, "%s/proc/stat", host_prefix); ff = procfile_open(filename, NULL, PROCFILE_FLAG_DEFAULT); if(!ff) return 1; @@ -250,7 +252,7 @@ long get_system_pid_max(void) { long mpid = 32768; char filename[FILENAME_MAX + 1]; - snprintf(filename, FILENAME_MAX, "%s/proc/sys/kernel/pid_max", host_prefix); + snprintfz(filename, FILENAME_MAX, "%s/proc/sys/kernel/pid_max", host_prefix); ff = procfile_open(filename, NULL, PROCFILE_FLAG_DEFAULT); if(!ff) return mpid; @@ -267,28 +269,6 @@ long get_system_pid_max(void) { return mpid; } -unsigned long long get_system_hertz(void) -{ - unsigned long long myhz = 1; - -#ifdef _SC_CLK_TCK - if((myhz = (unsigned long long int) sysconf(_SC_CLK_TCK)) > 0) { - return myhz; - } -#endif - -#ifdef HZ - myhz = HZ; /* <asm/param.h> */ -#else /* HZ */ - /* If 32-bit or big-endian (not Alpha or ia64), assume HZ is 100. */ - hz = (sizeof(long)==sizeof(int) || htons(999)==999) ? 100UL : 1024UL; -#endif /* HZ */ - - error("Unknown HZ value. Assuming %llu.", myhz); - return myhz; -} - - // ---------------------------------------------------------------------------- // target // target is the structure that process data are aggregated @@ -397,20 +377,20 @@ struct target *get_users_target(uid_t uid) return NULL; } - snprintf(w->compare, MAX_COMPARE_NAME, "%d", uid); + snprintfz(w->compare, MAX_COMPARE_NAME, "%d", uid); w->comparehash = simple_hash(w->compare); w->comparelen = strlen(w->compare); - snprintf(w->id, MAX_NAME, "%d", uid); + snprintfz(w->id, MAX_NAME, "%d", uid); w->idhash = simple_hash(w->id); struct passwd *pw = getpwuid(uid); if(!pw) - snprintf(w->name, MAX_NAME, "%d", uid); + snprintfz(w->name, MAX_NAME, "%d", uid); else - snprintf(w->name, MAX_NAME, "%s", pw->pw_name); + snprintfz(w->name, MAX_NAME, "%s", pw->pw_name); - netdata_fix_id(w->name); + netdata_fix_chart_name(w->name); w->uid = uid; @@ -435,20 +415,20 @@ struct target *get_groups_target(gid_t gid) return NULL; } - snprintf(w->compare, MAX_COMPARE_NAME, "%d", gid); + snprintfz(w->compare, MAX_COMPARE_NAME, "%d", gid); w->comparehash = simple_hash(w->compare); w->comparelen = strlen(w->compare); - snprintf(w->id, MAX_NAME, "%d", gid); + snprintfz(w->id, MAX_NAME, "%d", gid); w->idhash = simple_hash(w->id); struct group *gr = getgrgid(gid); if(!gr) - snprintf(w->name, MAX_NAME, "%d", gid); + snprintfz(w->name, MAX_NAME, "%d", gid); else - snprintf(w->name, MAX_NAME, "%s", gr->gr_name); + snprintfz(w->name, MAX_NAME, "%s", gr->gr_name); - netdata_fix_id(w->name); + netdata_fix_chart_name(w->name); w->gid = gid; @@ -488,12 +468,12 @@ struct target *get_apps_groups_target(const char *id, struct target *target) return NULL; } - strncpy(w->id, nid, MAX_NAME); + strncpyz(w->id, nid, MAX_NAME); w->idhash = simple_hash(w->id); - strncpy(w->name, nid, MAX_NAME); + strncpyz(w->name, nid, MAX_NAME); - strncpy(w->compare, nid, MAX_COMPARE_NAME); + strncpyz(w->compare, nid, MAX_COMPARE_NAME); int len = strlen(w->compare); if(w->compare[len - 1] == '*') { w->compare[len - 1] = '\0'; @@ -531,7 +511,7 @@ int read_apps_groups_conf(const char *name) { char filename[FILENAME_MAX + 1]; - snprintf(filename, FILENAME_MAX, "%s/apps_%s.conf", config_dir, name); + 
snprintfz(filename, FILENAME_MAX, "%s/apps_%s.conf", config_dir, name); if(unlikely(debug)) fprintf(stderr, "apps.plugin: process groups file: '%s'\n", filename); @@ -583,8 +563,7 @@ int read_apps_groups_conf(const char *name) t++; } - strncpy(w->name, t, MAX_NAME); - w->name[MAX_NAME] = '\0'; + strncpyz(w->name, t, MAX_NAME); w->hidden = thidden; w->debug = tdebug; @@ -606,7 +585,7 @@ int read_apps_groups_conf(const char *name) if(!apps_groups_default_target) error("Cannot create default target"); else - strncpy(apps_groups_default_target->name, "other", MAX_NAME); + strncpyz(apps_groups_default_target->name, "other", MAX_NAME); return 0; } @@ -796,7 +775,7 @@ void del_pid_entry(pid_t pid) int read_proc_pid_cmdline(struct pid_stat *p) { char filename[FILENAME_MAX + 1]; - snprintf(filename, FILENAME_MAX, "%s/proc/%d/cmdline", host_prefix, p->pid); + snprintfz(filename, FILENAME_MAX, "%s/proc/%d/cmdline", host_prefix, p->pid); int fd = open(filename, O_RDONLY, 0666); if(unlikely(fd == -1)) return 1; @@ -806,8 +785,7 @@ int read_proc_pid_cmdline(struct pid_stat *p) { if(bytes <= 0) { // copy the command to the command line - strncpy(p->cmdline, p->comm, MAX_CMDLINE); - p->cmdline[MAX_CMDLINE] = '\0'; + strncpyz(p->cmdline, p->comm, MAX_CMDLINE); return 0; } @@ -824,7 +802,7 @@ int read_proc_pid_cmdline(struct pid_stat *p) { int read_proc_pid_ownership(struct pid_stat *p) { char filename[FILENAME_MAX + 1]; - snprintf(filename, FILENAME_MAX, "%s/proc/%d", host_prefix, p->pid); + snprintfz(filename, FILENAME_MAX, "%s/proc/%d", host_prefix, p->pid); // ---------------------------------------- // read uid and gid @@ -844,7 +822,7 @@ int read_proc_pid_stat(struct pid_stat *p) { char filename[FILENAME_MAX + 1]; - snprintf(filename, FILENAME_MAX, "%s/proc/%d/stat", host_prefix, p->pid); + snprintfz(filename, FILENAME_MAX, "%s/proc/%d/stat", host_prefix, p->pid); // ---------------------------------------- @@ -866,8 +844,7 @@ int read_proc_pid_stat(struct pid_stat *p) { // parse the process name unsigned int i = 0; - strncpy(p->comm, procfile_lineword(ff, 0, 1), MAX_COMPARE_NAME); - p->comm[MAX_COMPARE_NAME] = '\0'; + strncpyz(p->comm, procfile_lineword(ff, 0, 1), MAX_COMPARE_NAME); // p->pid = atol(procfile_lineword(ff, 0, 0+i)); // comm is at 1 @@ -926,7 +903,7 @@ int read_proc_pid_statm(struct pid_stat *p) { char filename[FILENAME_MAX + 1]; - snprintf(filename, FILENAME_MAX, "%s/proc/%d/statm", host_prefix, p->pid); + snprintfz(filename, FILENAME_MAX, "%s/proc/%d/statm", host_prefix, p->pid); ff = procfile_reopen(ff, filename, NULL, PROCFILE_FLAG_NO_ERROR_ON_FILE_IO); if(!ff) return 1; @@ -956,7 +933,7 @@ int read_proc_pid_io(struct pid_stat *p) { char filename[FILENAME_MAX + 1]; - snprintf(filename, FILENAME_MAX, "%s/proc/%d/io", host_prefix, p->pid); + snprintfz(filename, FILENAME_MAX, "%s/proc/%d/io", host_prefix, p->pid); ff = procfile_reopen(ff, filename, NULL, PROCFILE_FLAG_NO_ERROR_ON_FILE_IO); if(!ff) return 1; @@ -1024,18 +1001,11 @@ int file_descriptor_iterator(avl *a) { if(a) {}; return 0; } avl_tree all_files_index = { NULL, - file_descriptor_compare, -#ifndef AVL_WITHOUT_PTHREADS -#ifdef AVL_LOCK_WITH_MUTEX - PTHREAD_MUTEX_INITIALIZER -#else - PTHREAD_RWLOCK_INITIALIZER -#endif -#endif /* AVL_WITHOUT_PTHREADS */ + file_descriptor_compare }; static struct file_descriptor *file_descriptor_find(const char *name, uint32_t hash) { - struct file_descriptor *result = NULL, tmp; + struct file_descriptor tmp; tmp.hash = (hash)?hash:simple_hash(name); tmp.name = name; tmp.count = 0; @@ -1044,8 
+1014,7 @@ static struct file_descriptor *file_descriptor_find(const char *name, uint32_t h tmp.magic = 0x0BADCAFE; #endif /* NETDATA_INTERNAL_CHECKS */ - avl_search(&all_files_index, (avl *)&tmp, file_descriptor_iterator, (avl **)&result); - return result; + return (struct file_descriptor *)avl_search(&all_files_index, (avl *) &tmp); } #define file_descriptor_add(fd) avl_insert(&all_files_index, (avl *)(fd)) @@ -1211,7 +1180,7 @@ int file_descriptor_find_or_add(const char *name) int read_pid_file_descriptors(struct pid_stat *p) { char dirname[FILENAME_MAX+1]; - snprintf(dirname, FILENAME_MAX, "%s/proc/%d/fd", host_prefix, p->pid); + snprintfz(dirname, FILENAME_MAX, "%s/proc/%d/fd", host_prefix, p->pid); DIR *fds = opendir(dirname); if(fds) { int c; @@ -1235,7 +1204,7 @@ int read_pid_file_descriptors(struct pid_stat *p) { if(debug) fprintf(stderr, "apps.plugin: extending fd memory slots for %s from %d to %d\n", p->comm, p->fds_size, fdid + 100); p->fds = realloc(p->fds, (fdid + 100) * sizeof(int)); if(!p->fds) { - error("Cannot re-allocate fds for %s", p->comm); + fatal("Cannot re-allocate fds for %s", p->comm); break; } @@ -1304,7 +1273,7 @@ int collect_data_for_all_processes_from_proc(void) { char dirname[FILENAME_MAX + 1]; - snprintf(dirname, FILENAME_MAX, "%s/proc", host_prefix); + snprintfz(dirname, FILENAME_MAX, "%s/proc", host_prefix); DIR *dir = opendir(dirname); if(!dir) return 0; @@ -2267,7 +2236,7 @@ void send_charts_updates_to_netdata(struct target *root, const char *type, const for (w = root; w ; w = w->next) { if(w->target || (!w->processes && !w->exposed)) continue; - fprintf(stdout, "DIMENSION %s '' incremental 100 %llu %s\n", w->name, Hertz, w->hidden ? "hidden,noreset" : "noreset"); + fprintf(stdout, "DIMENSION %s '' incremental 100 %u %s\n", w->name, hz, w->hidden ? 
"hidden,noreset" : "noreset"); } fprintf(stdout, "CHART %s.mem '' '%s Dedicated Memory (w/o shared)' 'MB' mem %s.mem stacked 20003 %d\n", type, title, type, update_every); @@ -2295,14 +2264,14 @@ void send_charts_updates_to_netdata(struct target *root, const char *type, const for (w = root; w ; w = w->next) { if(w->target || (!w->processes && !w->exposed)) continue; - fprintf(stdout, "DIMENSION %s '' incremental 100 %llu noreset\n", w->name, Hertz * processors); + fprintf(stdout, "DIMENSION %s '' incremental 100 %ld noreset\n", w->name, hz * processors); } fprintf(stdout, "CHART %s.cpu_system '' '%s CPU System Time (%ld%% = %ld core%s)' 'cpu time %%' cpu %s.cpu_system stacked 20021 %d\n", type, title, (processors * 100), processors, (processors>1)?"s":"", type, update_every); for (w = root; w ; w = w->next) { if(w->target || (!w->processes && !w->exposed)) continue; - fprintf(stdout, "DIMENSION %s '' incremental 100 %llu noreset\n", w->name, Hertz * processors); + fprintf(stdout, "DIMENSION %s '' incremental 100 %ld noreset\n", w->name, hz * processors); } fprintf(stdout, "CHART %s.major_faults '' '%s Major Page Faults (swap read)' 'page faults/s' swap %s.major_faults stacked 20010 %d\n", type, title, type, update_every); @@ -2411,12 +2380,6 @@ void parse_args(int argc, char **argv) } } -unsigned long long sutime() { - struct timeval now; - gettimeofday(&now, NULL); - return now.tv_sec * 1000000ULL + now.tv_usec; -} - int main(int argc, char **argv) { // debug_flags = D_PROCFILE; @@ -2427,6 +2390,10 @@ int main(int argc, char **argv) // disable syslog for apps.plugin error_log_syslog = 0; + // set errors flood protection to 100 logs per hour + error_log_errors_per_period = 100; + error_log_throttle_period = 3600; + host_prefix = getenv("NETDATA_HOST_PREFIX"); if(host_prefix == NULL) { info("NETDATA_HOST_PREFIX is not passed from netdata"); @@ -2441,13 +2408,22 @@ int main(int argc, char **argv) } else info("Found NETDATA_CONFIG_DIR='%s'", config_dir); +#ifdef NETDATA_INTERNAL_CHECKS + if(debug_flags != 0) { + struct rlimit rl = { RLIM_INFINITY, RLIM_INFINITY }; + if(setrlimit(RLIMIT_CORE, &rl) != 0) + info("Cannot request unlimited core dumps for debugging... 
Proceeding anyway..."); + prctl(PR_SET_DUMPABLE, 1, 0, 0, 0); + } +#endif /* NETDATA_INTERNAL_CHECKS */ + info("starting..."); procfile_adaptive_initial_allocation = 1; time_t started_t = time(NULL); time_t current_t; - Hertz = get_system_hertz(); + get_HZ(); pid_max = get_system_pid_max(); processors = get_system_cpus(); @@ -2460,16 +2436,14 @@ int main(int argc, char **argv) exit(1); } - fprintf(stdout, "CHART netdata.apps_cpu '' 'Apps Plugin CPU' 'milliseconds/s' apps.plugin netdata.apps_cpu stacked 140000 %d\n", update_every); - fprintf(stdout, "DIMENSION user '' incremental 1 %d\n", 1000); - fprintf(stdout, "DIMENSION system '' incremental 1 %d\n", 1000); - - fprintf(stdout, "CHART netdata.apps_files '' 'Apps Plugin Files' 'files/s' apps.plugin netdata.apps_files line 140001 %d\n", update_every); - fprintf(stdout, "DIMENSION files '' incremental 1 1\n"); - fprintf(stdout, "DIMENSION pids '' absolute 1 1\n"); - fprintf(stdout, "DIMENSION fds '' absolute 1 1\n"); - fprintf(stdout, "DIMENSION targets '' absolute 1 1\n"); - + fprintf(stdout, "CHART netdata.apps_cpu '' 'Apps Plugin CPU' 'milliseconds/s' apps.plugin netdata.apps_cpu stacked 140000 %1$d\n" + "DIMENSION user '' incremental 1 1000\n" + "DIMENSION system '' incremental 1 1000\n" + "CHART netdata.apps_files '' 'Apps Plugin Files' 'files/s' apps.plugin netdata.apps_files line 140001 %1$d\n" + "DIMENSION files '' incremental 1 1\n" + "DIMENSION pids '' absolute 1 1\n" + "DIMENSION fds '' absolute 1 1\n" + "DIMENSION targets '' absolute 1 1\n", update_every); #ifndef PROFILING_MODE unsigned long long sunext = (time(NULL) - (time(NULL) % update_every) + update_every) * 1000000ULL; @@ -2480,11 +2454,11 @@ int main(int argc, char **argv) for(;1; counter++) { #ifndef PROFILING_MODE // delay until it is our time to run - while((sunow = sutime()) < sunext) + while((sunow = timems()) < sunext) usleep((useconds_t)(sunext - sunow)); // find the next time we need to run - while(sutime() > sunext) + while(timems() > sunext) sunext += update_every * 1000000ULL; #endif /* PROFILING_MODE */ @@ -2508,7 +2482,6 @@ int main(int argc, char **argv) send_collected_data_to_netdata(groups_root_target, "groups", dt); if(debug) fprintf(stderr, "apps.plugin: done Loop No %llu\n", counter); - fflush(NULL); current_t = time(NULL); @@ -19,9 +19,6 @@ #include "avl.h" #include "log.h" -/* Private methods */ -int _avl_removeroot(avl_tree* t); - /* Swing to the left * Warning: no balance maintainance */ @@ -69,7 +66,7 @@ void avl_nasty(avl* root) { * returns 1 if the depth of the tree has grown * Warning: do not insert elements already present */ -int _avl_insert(avl_tree* t, avl* a) { +int avl_insert(avl_tree* t, avl* a) { /* initialize */ a->left = 0; a->right = 0; @@ -86,7 +83,7 @@ int _avl_insert(avl_tree* t, avl* a) { avl_tree left_subtree; left_subtree.root = t->root->left; left_subtree.compar = t->compar; - if (_avl_insert(&left_subtree, a)) { + if (avl_insert(&left_subtree, a)) { switch (t->root->balance--) { case 1: return 0; @@ -117,7 +114,7 @@ int _avl_insert(avl_tree* t, avl* a) { avl_tree right_subtree; right_subtree.root = t->root->right; right_subtree.compar = t->compar; - if (_avl_insert(&right_subtree, a)) { + if (avl_insert(&right_subtree, a)) { switch (t->root->balance++) { case -1: return 0; @@ -144,36 +141,16 @@ int _avl_insert(avl_tree* t, avl* a) { } } } -int avl_insert(avl_tree* t, avl* a) { -#ifndef AVL_WITHOUT_PTHREADS -#ifdef AVL_LOCK_WITH_MUTEX - pthread_mutex_lock(&t->mutex); -#else - pthread_rwlock_wrlock(&t->rwlock); -#endif -#endif 
/* AVL_WITHOUT_PTHREADS */ - - int ret = _avl_insert(t, a); - -#ifndef AVL_WITHOUT_PTHREADS -#ifdef AVL_LOCK_WITH_MUTEX - pthread_mutex_unlock(&t->mutex); -#else - pthread_rwlock_unlock(&t->rwlock); -#endif -#endif /* AVL_WITHOUT_PTHREADS */ - return ret; -} /* Remove an element a from the AVL tree t * returns -1 if the depth of the tree has shrunk * Warning: if the element is not present in the tree, * returns 0 as if it had been removed succesfully. */ -int _avl_remove(avl_tree* t, avl* a) { +int avl_remove(avl_tree* t, avl* a) { int b; if (t->root == a) - return _avl_removeroot(t); + return avl_removeroot(t); b = t->compar(t->root, a); if (b >= 0) { /* remove from the left subtree */ @@ -181,7 +158,7 @@ int _avl_remove(avl_tree* t, avl* a) { avl_tree left_subtree; if ((left_subtree.root = t->root->left)) { left_subtree.compar = t->compar; - ch = _avl_remove(&left_subtree, a); + ch = avl_remove(&left_subtree, a); t->root->left = left_subtree.root; if (ch) { switch (t->root->balance++) { @@ -215,7 +192,7 @@ int _avl_remove(avl_tree* t, avl* a) { avl_tree right_subtree; if ((right_subtree.root = t->root->right)) { right_subtree.compar = t->compar; - ch = _avl_remove(&right_subtree, a); + ch = avl_remove(&right_subtree, a); t->root->right = right_subtree.root; if (ch) { switch (t->root->balance--) { @@ -246,31 +223,10 @@ int _avl_remove(avl_tree* t, avl* a) { return 0; } -int avl_remove(avl_tree* t, avl* a) { -#ifndef AVL_WITHOUT_PTHREADS -#ifdef AVL_LOCK_WITH_MUTEX - pthread_mutex_lock(&t->mutex); -#else - pthread_rwlock_wrlock(&t->rwlock); -#endif -#endif /* AVL_WITHOUT_PTHREADS */ - - int ret = _avl_remove(t, a); - -#ifndef AVL_WITHOUT_PTHREADS -#ifdef AVL_LOCK_WITH_MUTEX - pthread_mutex_unlock(&t->mutex); -#else - pthread_rwlock_unlock(&t->rwlock); -#endif -#endif /* AVL_WITHOUT_PTHREADS */ - return ret; -} - /* Remove the root of the AVL tree t * Warning: dumps core if t is empty */ -int _avl_removeroot(avl_tree* t) { +int avl_removeroot(avl_tree* t) { int ch; avl* a; if (!t->root->left) { @@ -296,7 +252,7 @@ int _avl_removeroot(avl_tree* t) { while (a->left) a = a->left; } - ch = _avl_remove(t, a); + ch = avl_remove(t, a); a->left = t->root->left; a->right = t->root->right; a->balance = t->root->balance; @@ -306,33 +262,12 @@ int _avl_removeroot(avl_tree* t) { return 0; } -int avl_removeroot(avl_tree* t) { -#ifndef AVL_WITHOUT_PTHREADS -#ifdef AVL_LOCK_WITH_MUTEX - pthread_mutex_lock(&t->mutex); -#else - pthread_rwlock_wrlock(&t->rwlock); -#endif -#endif /* AVL_WITHOUT_PTHREADS */ - - int ret = _avl_removeroot(t); - -#ifndef AVL_WITHOUT_PTHREADS -#ifdef AVL_LOCK_WITH_MUTEX - pthread_mutex_unlock(&t->mutex); -#else - pthread_rwlock_unlock(&t->rwlock); -#endif -#endif /* AVL_WITHOUT_PTHREADS */ - return ret; -} - /* Iterate through elements in t from a range between a and b (inclusive) * for each element calls iter(a) until it returns 0 * returns the last value returned by iterator or 0 if there were no calls * Warning: a<=b must hold */ -int _avl_range(avl_tree* t, avl* a, avl* b, int (*iter)(avl*), avl** ret) { +int avl_range(avl_tree* t, avl* a, avl* b, int (*iter)(avl*), avl** ret) { int x, c = 0; if (!t->root) return 0; @@ -349,7 +284,7 @@ int _avl_range(avl_tree* t, avl* a, avl* b, int (*iter)(avl*), avl** ret) { avl_tree left_subtree; if ((left_subtree.root = t->root->left)) { left_subtree.compar = t->compar; - if (!(c = _avl_range(&left_subtree, a, b, iter, ret))) + if (!(c = avl_range(&left_subtree, a, b, iter, ret))) if (x > 0) return 0; } @@ -366,7 +301,7 @@ int 
_avl_range(avl_tree* t, avl* a, avl* b, int (*iter)(avl*), avl** ret) { avl_tree right_subtree; if ((right_subtree.root = t->root->right)) { right_subtree.compar = t->compar; - if (!(c = _avl_range(&right_subtree, a, b, iter, ret))) + if (!(c = avl_range(&right_subtree, a, b, iter, ret))) if (x < 0) return 0; } @@ -374,17 +309,57 @@ int _avl_range(avl_tree* t, avl* a, avl* b, int (*iter)(avl*), avl** ret) { return c; } -int avl_range(avl_tree* t, avl* a, avl* b, int (*iter)(avl*), avl** ret) { +/* high performance searching - by ktsaou */ +avl *avl_search(avl_tree *t, avl *a) { + avl *root = t->root; + + while(root) { + int x = t->compar(root, a); + + if(x > 0) { + root = root->left; + continue; + } + + if(x < 0) { + root = root->right; + continue; + } + + return root; + } + + return NULL; +} + +void avl_init(avl_tree *t, int (*compar)(void *a, void *b)) { + t->root = NULL; + t->compar = compar; +} + +/* ------------------------------------------------------------------------- */ + +void avl_read_lock(avl_tree_lock *t) { #ifndef AVL_WITHOUT_PTHREADS #ifdef AVL_LOCK_WITH_MUTEX pthread_mutex_lock(&t->mutex); #else - pthread_rwlock_wrlock(&t->rwlock); + pthread_rwlock_rdlock(&t->rwlock); #endif #endif /* AVL_WITHOUT_PTHREADS */ +} - int ret2 = _avl_range(t, a, b, iter, ret); +void avl_write_lock(avl_tree_lock *t) { +#ifndef AVL_WITHOUT_PTHREADS +#ifdef AVL_LOCK_WITH_MUTEX + pthread_mutex_lock(&t->mutex); +#else + pthread_rwlock_wrlock(&t->rwlock); +#endif +#endif /* AVL_WITHOUT_PTHREADS */ +} +void avl_unlock(avl_tree_lock *t) { #ifndef AVL_WITHOUT_PTHREADS #ifdef AVL_LOCK_WITH_MUTEX pthread_mutex_unlock(&t->mutex); @@ -392,21 +367,12 @@ int avl_range(avl_tree* t, avl* a, avl* b, int (*iter)(avl*), avl** ret) { pthread_rwlock_unlock(&t->rwlock); #endif #endif /* AVL_WITHOUT_PTHREADS */ - - return ret2; } -/* Iterate through elements in t equal to a - * for each element calls iter(a) until it returns 0 - * returns the last value returned by iterator or 0 if there were no calls - */ -int avl_search(avl_tree* t, avl* a, int (*iter)(avl* a), avl** ret) { - return avl_range(t, a, a, iter, ret); -} +/* ------------------------------------------------------------------------- */ -void avl_init(avl_tree* t, int (*compar)(void* a, void* b)) { - t->root = NULL; - t->compar = compar; +void avl_init_lock(avl_tree_lock *t, int (*compar)(void *a, void *b)) { + avl_init(&t->avl_tree, compar); #ifndef AVL_WITHOUT_PTHREADS int lock; @@ -421,5 +387,39 @@ void avl_init(avl_tree* t, int (*compar)(void* a, void* b)) { fatal("Failed to initialize AVL mutex/rwlock, error: %d", lock); #endif /* AVL_WITHOUT_PTHREADS */ +} + +avl *avl_search_lock(avl_tree_lock *t, avl *a) { + avl_read_lock(t); + avl *ret = avl_search(&t->avl_tree, a); + avl_unlock(t); + return ret; +} + +int avl_range_lock(avl_tree_lock *t, avl *a, avl *b, int (*iter)(avl *), avl **ret) { + avl_read_lock(t); + int ret2 = avl_range(&t->avl_tree, a, b, iter, ret); + avl_unlock(t); + return ret2; +} +int avl_removeroot_lock(avl_tree_lock *t) { + avl_write_lock(t); + int ret = avl_removeroot(&t->avl_tree); + avl_unlock(t); + return ret; +} + +int avl_remove_lock(avl_tree_lock *t, avl *a) { + avl_write_lock(t); + int ret = avl_remove(&t->avl_tree, a); + avl_unlock(t); + return ret; +} + +int avl_insert_lock(avl_tree_lock *t, avl *a) { + avl_write_lock(t); + int ret = avl_insert(&t->avl_tree, a); + avl_unlock(t); + return ret; } @@ -17,10 +17,19 @@ #ifndef AVL_WITHOUT_PTHREADS #include <pthread.h> -#endif /* AVL_WITHOUT_PTHREADS */ // #define 
AVL_LOCK_WITH_MUTEX 1 +#ifdef AVL_LOCK_WITH_MUTEX +#define AVL_LOCK_INITIALIZER PTHREAD_MUTEX_INITIALIZER +#else /* AVL_LOCK_WITH_MUTEX */ +#define AVL_LOCK_INITIALIZER PTHREAD_RWLOCK_INITIALIZER +#endif /* AVL_LOCK_WITH_MUTEX */ + +#else /* AVL_WITHOUT_PTHREADS */ +#define AVL_LOCK_INITIALIZER +#endif /* AVL_WITHOUT_PTHREADS */ + /* Data structures */ /* One element of the AVL tree */ @@ -32,8 +41,13 @@ typedef struct avl { /* An AVL tree */ typedef struct avl_tree { - avl* root; - int (*compar)(void* a, void* b); + avl *root; + + int (*compar)(void *a, void *b); +} avl_tree; + +typedef struct avl_tree_lock { + avl_tree avl_tree; #ifndef AVL_WITHOUT_PTHREADS #ifdef AVL_LOCK_WITH_MUTEX @@ -42,7 +56,7 @@ typedef struct avl_tree { pthread_rwlock_t rwlock; #endif /* AVL_LOCK_WITH_MUTEX */ #endif /* AVL_WITHOUT_PTHREADS */ -} avl_tree; +} avl_tree_lock; /* Public methods */ @@ -50,35 +64,41 @@ typedef struct avl_tree { * returns 1 if the depth of the tree has grown * Warning: do not insert elements already present */ -int avl_insert(avl_tree* t, avl* a); +int avl_insert_lock(avl_tree_lock *t, avl *a); +int avl_insert(avl_tree *t, avl *a); /* Remove an element a from the AVL tree t * returns -1 if the depth of the tree has shrunk * Warning: if the element is not present in the tree, * returns 0 as if it had been removed succesfully. */ -int avl_remove(avl_tree* t, avl* a); +int avl_remove_lock(avl_tree_lock *t, avl *a); +int avl_remove(avl_tree *t, avl *a); /* Remove the root of the AVL tree t * Warning: dumps core if t is empty */ -int avl_removeroot(avl_tree* t); +int avl_removeroot_lock(avl_tree_lock *t); +int avl_removeroot(avl_tree *t); /* Iterate through elements in t from a range between a and b (inclusive) * for each element calls iter(a) until it returns 0 * returns the last value returned by iterator or 0 if there were no calls * Warning: a<=b must hold */ -int avl_range(avl_tree* t, avl* a, avl* b, int (*iter)(avl*), avl** ret); +int avl_range_lock(avl_tree_lock *t, avl *a, avl *b, int (*iter)(avl *), avl **ret); +int avl_range(avl_tree *t, avl *a, avl *b, int (*iter)(avl *), avl **ret); /* Iterate through elements in t equal to a * for each element calls iter(a) until it returns 0 * returns the last value returned by iterator or 0 if there were no calls */ -int avl_search(avl_tree* t, avl* a, int (*iter)(avl*), avl** ret); +avl *avl_search_lock(avl_tree_lock *t, avl *a); +avl *avl_search(avl_tree *t, avl *a); -/* Initialize the avl_tree +/* Initialize the avl_tree_lock */ -void avl_init(avl_tree* t, int (*compar)(void* a, void* b)); +void avl_init_lock(avl_tree_lock *t, int (*compar)(void *a, void *b)); +void avl_init(avl_tree *t, int (*compar)(void *a, void *b)); #endif /* avl.h */ diff --git a/src/common.c b/src/common.c index cb74b6335..a2b0d940f 100644 --- a/src/common.c +++ b/src/common.c @@ -20,7 +20,14 @@ char *global_host_prefix = ""; int enable_ksm = 1; -unsigned char netdata_keys_map[256] = { +// time(NULL) in milliseconds +unsigned long long timems(void) { + struct timeval now; + gettimeofday(&now, NULL); + return now.tv_sec * 1000000ULL + now.tv_usec; +} + +unsigned char netdata_map_chart_names[256] = { [0] = '\0', // [1] = '_', // [2] = '_', // @@ -281,8 +288,273 @@ unsigned char netdata_keys_map[256] = { // make sure the supplied string // is good for a netdata chart/dimension ID/NAME -void netdata_fix_id(char *s) { - while((*s = netdata_keys_map[(unsigned char)*s])) s++; +void netdata_fix_chart_name(char *s) { + while((*s = netdata_map_chart_names[(unsigned char)*s])) 
s++; +} + +unsigned char netdata_map_chart_ids[256] = { + [0] = '\0', // + [1] = '_', // + [2] = '_', // + [3] = '_', // + [4] = '_', // + [5] = '_', // + [6] = '_', // + [7] = '_', // + [8] = '_', // + [9] = '_', // + [10] = '_', // + [11] = '_', // + [12] = '_', // + [13] = '_', // + [14] = '_', // + [15] = '_', // + [16] = '_', // + [17] = '_', // + [18] = '_', // + [19] = '_', // + [20] = '_', // + [21] = '_', // + [22] = '_', // + [23] = '_', // + [24] = '_', // + [25] = '_', // + [26] = '_', // + [27] = '_', // + [28] = '_', // + [29] = '_', // + [30] = '_', // + [31] = '_', // + [32] = '_', // + [33] = '_', // ! + [34] = '_', // " + [35] = '_', // # + [36] = '_', // $ + [37] = '_', // % + [38] = '_', // & + [39] = '_', // ' + [40] = '_', // ( + [41] = '_', // ) + [42] = '_', // * + [43] = '_', // + + [44] = '.', // , + [45] = '-', // - + [46] = '.', // . + [47] = '_', // / + [48] = '0', // 0 + [49] = '1', // 1 + [50] = '2', // 2 + [51] = '3', // 3 + [52] = '4', // 4 + [53] = '5', // 5 + [54] = '6', // 6 + [55] = '7', // 7 + [56] = '8', // 8 + [57] = '9', // 9 + [58] = '_', // : + [59] = '_', // ; + [60] = '_', // < + [61] = '_', // = + [62] = '_', // > + [63] = '_', // ? + [64] = '_', // @ + [65] = 'a', // A + [66] = 'b', // B + [67] = 'c', // C + [68] = 'd', // D + [69] = 'e', // E + [70] = 'f', // F + [71] = 'g', // G + [72] = 'h', // H + [73] = 'i', // I + [74] = 'j', // J + [75] = 'k', // K + [76] = 'l', // L + [77] = 'm', // M + [78] = 'n', // N + [79] = 'o', // O + [80] = 'p', // P + [81] = 'q', // Q + [82] = 'r', // R + [83] = 's', // S + [84] = 't', // T + [85] = 'u', // U + [86] = 'v', // V + [87] = 'w', // W + [88] = 'x', // X + [89] = 'y', // Y + [90] = 'z', // Z + [91] = '_', // [ + [92] = '/', // backslash + [93] = '_', // ] + [94] = '_', // ^ + [95] = '_', // _ + [96] = '_', // ` + [97] = 'a', // a + [98] = 'b', // b + [99] = 'c', // c + [100] = 'd', // d + [101] = 'e', // e + [102] = 'f', // f + [103] = 'g', // g + [104] = 'h', // h + [105] = 'i', // i + [106] = 'j', // j + [107] = 'k', // k + [108] = 'l', // l + [109] = 'm', // m + [110] = 'n', // n + [111] = 'o', // o + [112] = 'p', // p + [113] = 'q', // q + [114] = 'r', // r + [115] = 's', // s + [116] = 't', // t + [117] = 'u', // u + [118] = 'v', // v + [119] = 'w', // w + [120] = 'x', // x + [121] = 'y', // y + [122] = 'z', // z + [123] = '_', // { + [124] = '_', // | + [125] = '_', // } + [126] = '_', // ~ + [127] = '_', // + [128] = '_', // + [129] = '_', // + [130] = '_', // + [131] = '_', // + [132] = '_', // + [133] = '_', // + [134] = '_', // + [135] = '_', // + [136] = '_', // + [137] = '_', // + [138] = '_', // + [139] = '_', // + [140] = '_', // + [141] = '_', // + [142] = '_', // + [143] = '_', // + [144] = '_', // + [145] = '_', // + [146] = '_', // + [147] = '_', // + [148] = '_', // + [149] = '_', // + [150] = '_', // + [151] = '_', // + [152] = '_', // + [153] = '_', // + [154] = '_', // + [155] = '_', // + [156] = '_', // + [157] = '_', // + [158] = '_', // + [159] = '_', // + [160] = '_', // + [161] = '_', // + [162] = '_', // + [163] = '_', // + [164] = '_', // + [165] = '_', // + [166] = '_', // + [167] = '_', // + [168] = '_', // + [169] = '_', // + [170] = '_', // + [171] = '_', // + [172] = '_', // + [173] = '_', // + [174] = '_', // + [175] = '_', // + [176] = '_', // + [177] = '_', // + [178] = '_', // + [179] = '_', // + [180] = '_', // + [181] = '_', // + [182] = '_', // + [183] = '_', // + [184] = '_', // + [185] = '_', // + [186] = '_', // + [187] = '_', // + [188] = '_', // + [189] = 
'_', // + [190] = '_', // + [191] = '_', // + [192] = '_', // + [193] = '_', // + [194] = '_', // + [195] = '_', // + [196] = '_', // + [197] = '_', // + [198] = '_', // + [199] = '_', // + [200] = '_', // + [201] = '_', // + [202] = '_', // + [203] = '_', // + [204] = '_', // + [205] = '_', // + [206] = '_', // + [207] = '_', // + [208] = '_', // + [209] = '_', // + [210] = '_', // + [211] = '_', // + [212] = '_', // + [213] = '_', // + [214] = '_', // + [215] = '_', // + [216] = '_', // + [217] = '_', // + [218] = '_', // + [219] = '_', // + [220] = '_', // + [221] = '_', // + [222] = '_', // + [223] = '_', // + [224] = '_', // + [225] = '_', // + [226] = '_', // + [227] = '_', // + [228] = '_', // + [229] = '_', // + [230] = '_', // + [231] = '_', // + [232] = '_', // + [233] = '_', // + [234] = '_', // + [235] = '_', // + [236] = '_', // + [237] = '_', // + [238] = '_', // + [239] = '_', // + [240] = '_', // + [241] = '_', // + [242] = '_', // + [243] = '_', // + [244] = '_', // + [245] = '_', // + [246] = '_', // + [247] = '_', // + [248] = '_', // + [249] = '_', // + [250] = '_', // + [251] = '_', // + [252] = '_', // + [253] = '_', // + [254] = '_', // + [255] = '_' // +}; + +// make sure the supplied string +// is good for a netdata chart/dimension ID/NAME +void netdata_fix_chart_id(char *s) { + while((*s = netdata_map_chart_ids[(unsigned char)*s])) s++; } /* @@ -310,8 +582,11 @@ uint32_t simple_hash(const char *name) { // FNV-1a algorithm while (*s) { // multiply by the 32 bit FNV magic prime mod 2^32 - // gcc optimized - hval += (hval<<1) + (hval<<4) + (hval<<7) + (hval<<8) + (hval<<24); + // NOTE: No need to optimize with left shifts. + // GCC will use imul instruction anyway. + // Tested with 'gcc -O3 -S' + //hval += (hval<<1) + (hval<<4) + (hval<<7) + (hval<<8) + (hval<<24); + hval *= 16777619; // xor the bottom with the current octet hval ^= (uint32_t)*s++; @@ -346,9 +621,15 @@ uint32_t simple_hash(const char *name) { void strreverse(char* begin, char* end) { - char aux; - while (end > begin) - aux = *end, *end-- = *begin, *begin++ = aux; + char aux; + + while (end > begin) + { + // clearer code. + aux = *end; + *end-- = *begin; + *begin++ = aux; + } } char *mystrsep(char **ptr, char *s) @@ -361,17 +642,22 @@ char *mystrsep(char **ptr, char *s) char *trim(char *s) { // skip leading spaces + // and 'comments' as well!? while(*s && isspace(*s)) s++; if(!*s || *s == '#') return NULL; // skip tailing spaces - long c = (long) strlen(s) - 1; - while(c >= 0 && isspace(s[c])) { - s[c] = '\0'; - c--; + // this way is way faster. Writes only one NUL char. 
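The 256-entry table above drives netdata_fix_chart_id(): every byte that is not a letter, digit, '.', '-' or '/' collapses to '_', uppercase folds to lowercase, ',' becomes '.' and '\' becomes '/', so chart and dimension IDs stay safe for URLs and config keys. simple_hash() drops the shift-and-add expansion in favour of a plain multiplication by the FNV prime, which current compilers emit as a single imul anyway. A minimal standalone sketch of the two pieces working together follows; the table is rebuilt at runtime only for brevity, and main() is illustrative, not part of the patch.

    #include <ctype.h>
    #include <stdint.h>
    #include <stdio.h>

    /* runtime-built equivalent of the static 256-entry map above (for brevity):
     * keep [0-9a-z.-], lower-case [A-Z], map ',' to '.', '\' to '/', and turn
     * every other byte into '_' */
    static unsigned char map[256];

    static void build_map(void) {
        for (int i = 1; i < 256; i++) {
            if (isalnum(i))                  map[i] = (unsigned char)tolower(i);
            else if (i == '.' || i == ',')   map[i] = '.';
            else if (i == '-')               map[i] = '-';
            else if (i == '\\')              map[i] = '/';
            else                             map[i] = '_';
        }
        map[0] = '\0';
    }

    /* same loop as netdata_fix_chart_id() in the patch */
    static void fix_chart_id(char *s) {
        while ((*s = map[(unsigned char)*s])) s++;
    }

    /* simple_hash() as in the patch: multiply by the 32 bit FNV prime,
     * then XOR in the next octet */
    static uint32_t simple_hash(const char *name) {
        uint32_t hval = 0x811c9dc5;                       /* FNV offset basis */
        for (const unsigned char *s = (const unsigned char *)name; *s; s++) {
            hval *= 16777619;
            hval ^= (uint32_t)*s;
        }
        return hval;
    }

    int main(void) {
        char id[] = "eth0: TX/RX (bytes)";
        build_map();
        fix_chart_id(id);                                 /* -> "eth0__tx_rx__bytes_" */
        printf("%s %08x\n", id, simple_hash(id));
        return 0;
    }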
+ ssize_t l = strlen(s); + if (--l >= 0) + { + char *p = s + l; + while (p > s && isspace(*p)) p--; + *++p = '\0'; } - if(c < 0) return NULL; + if(!*s) return NULL; + return s; } @@ -438,7 +724,7 @@ int savememory(const char *filename, void *mem, unsigned long size) { char tmpfilename[FILENAME_MAX + 1]; - snprintf(tmpfilename, FILENAME_MAX, "%s.%ld.tmp", filename, (long)getpid()); + snprintfz(tmpfilename, FILENAME_MAX, "%s.%ld.tmp", filename, (long)getpid()); int fd = open(tmpfilename, O_RDWR|O_CREAT|O_NOATIME, 0664); if(fd < 0) { @@ -490,3 +776,57 @@ pid_t gettid(void) return syscall(SYS_gettid); } +char *fgets_trim_len(char *buf, size_t buf_size, FILE *fp, size_t *len) { + char *s = fgets(buf, buf_size, fp); + if(!s) return NULL; + + char *t = s; + if(*t != '\0') { + // find the string end + while (*++t != '\0'); + + // trim trailing spaces/newlines/tabs + while (--t > s && *t == '\n') + *t = '\0'; + } + + if(len) + *len = t - s + 1; + + return s; +} + +char *strncpyz(char *dst, const char *src, size_t n) { + char *p = dst; + + while(*src && n--) + *dst++ = *src++; + + *dst = '\0'; + + return p; +} + +int vsnprintfz(char *dst, size_t n, const char *fmt, va_list args) { + int size; + + size = vsnprintf(dst, n, fmt, args); + + if(unlikely((size_t)size > n)) { + // there is bug in vsnprintf() and it returns + // a number higher to len, but it does not + // overflow the buffer. + size = n; + } + + dst[size] = '\0'; + return size; +} + +int snprintfz(char *dst, size_t n, const char *fmt, ...) { + va_list args; + + va_start(args, fmt); + return vsnprintfz(dst, n, fmt, args); + va_end(args); +} diff --git a/src/common.h b/src/common.h index e9987af72..c94f1cde5 100644 --- a/src/common.h +++ b/src/common.h @@ -1,5 +1,7 @@ +#include <stdarg.h> #include <sys/time.h> #include <sys/resource.h> +#include <stdio.h> #ifndef NETDATA_COMMON_H #define NETDATA_COMMON_H 1 @@ -15,13 +17,18 @@ #define abs(x) ((x < 0)? 
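The new *z string helpers exist so that every copy or format ends NUL-terminated: strncpyz() copies at most n characters and always writes the terminator, and vsnprintfz()/snprintfz() clamp the vsnprintf() return value before terminating (note that in snprintfz() as shown the va_end() after the return statement is never reached; capturing the result first would make it conforming). A short usage sketch under those assumptions, with a hypothetical caller:

    #include <stdio.h>
    #include <string.h>

    extern char *strncpyz(char *dst, const char *src, size_t n);
    extern int   snprintfz(char *dst, size_t n, const char *fmt, ...);

    /* hypothetical caller: 'n' is the number of characters excluding the
     * terminator, so buffers are declared one byte larger (FILENAME_MAX + 1),
     * exactly as savememory() and the plugin loader do above */
    void build_tmp_name(char *out /* FILENAME_MAX + 1 bytes */, const char *filename, long pid) {
        snprintfz(out, FILENAME_MAX, "%s.%ld.tmp", filename, pid);  /* truncates, never unterminated */

        char shortname[16];
        strncpyz(shortname, out, sizeof(shortname) - 1);            /* shortname[15] is always '\0' */
        fprintf(stderr, "using %s\n", shortname);
    }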
-x : x) #define usecdiff(now, last) (((((now)->tv_sec * 1000000ULL) + (now)->tv_usec) - (((last)->tv_sec * 1000000ULL) + (last)->tv_usec))) -extern void netdata_fix_id(char *s); +extern void netdata_fix_chart_id(char *s); +extern void netdata_fix_chart_name(char *s); extern uint32_t simple_hash(const char *name); extern void strreverse(char* begin, char* end); extern char *mystrsep(char **ptr, char *s); extern char *trim(char *s); +extern char *strncpyz(char *dst, const char *src, size_t n); +extern int vsnprintfz(char *dst, size_t n, const char *fmt, va_list args); +extern int snprintfz(char *dst, size_t n, const char *fmt, ...); + extern void *mymmap(const char *filename, size_t size, int flags, int ksm); extern int savememory(const char *filename, void *mem, unsigned long size); @@ -31,12 +38,15 @@ extern char *global_host_prefix; extern int enable_ksm; /* Number of ticks per second */ -#define HZ myhz extern unsigned int hz; extern void get_HZ(void); extern pid_t gettid(void); +extern unsigned long long timems(void); + +extern char *fgets_trim_len(char *buf, size_t buf_size, FILE *fp, size_t *len); + /* fix for alpine linux */ #ifndef RUSAGE_THREAD #ifdef RUSAGE_CHILDREN diff --git a/src/daemon.c b/src/daemon.c index 9dcf32f0b..2a56ae0cc 100644 --- a/src/daemon.c +++ b/src/daemon.c @@ -10,6 +10,7 @@ #include <string.h> #include <sys/types.h> #include <pwd.h> +#include <grp.h> #include <pthread.h> #include <sys/wait.h> #include <sys/stat.h> @@ -29,49 +30,8 @@ int pidfd = -1; void sig_handler(int signo) { - switch(signo) { - case SIGILL: - case SIGABRT: - case SIGFPE: - case SIGSEGV: - case SIGBUS: - case SIGSYS: - case SIGTRAP: - case SIGXCPU: - case SIGXFSZ: - infoerr("Death signaled exit (signal %d).", signo); - signal(signo, SIG_DFL); - break; - - case SIGKILL: - case SIGTERM: - case SIGQUIT: - case SIGINT: - case SIGHUP: - case SIGUSR1: - case SIGUSR2: - infoerr("Signaled exit (signal %d).", signo); - signal(SIGPIPE, SIG_IGN); - signal(SIGTERM, SIG_IGN); - signal(SIGQUIT, SIG_IGN); - signal(SIGHUP, SIG_IGN); - signal(SIGINT, SIG_IGN); - signal(SIGCHLD, SIG_IGN); - netdata_cleanup_and_exit(1); - break; - - case SIGPIPE: - infoerr("Signaled PIPE (signal %d).", signo); - // this is received when web clients send a reset - // no need to log it. - // infoerr("Ignoring signal %d.", signo); - break; - - default: - info("Signal %d received. 
Falling back to default action for it.", signo); - signal(signo, SIG_DFL); - break; - } + if(signo) + netdata_exit = 1; } int become_user(const char *username) @@ -85,6 +45,21 @@ int become_user(const char *username) uid_t uid = pw->pw_uid; gid_t gid = pw->pw_gid; + int ngroups = sysconf(_SC_NGROUPS_MAX); + gid_t *supplementary_groups = NULL; + if(ngroups) { + supplementary_groups = malloc(sizeof(gid_t) * ngroups); + if(supplementary_groups) { + if(getgrouplist(username, gid, supplementary_groups, &ngroups) == -1) { + error("Cannot get supplementary groups of user '%s'.", username); + free(supplementary_groups); + supplementary_groups = NULL; + ngroups = 0; + } + } + else fatal("Cannot allocate memory for %d supplementary groups", ngroups); + } + if(pidfile[0] && getuid() != uid) { // we are dropping privileges if(chown(pidfile, uid, gid) != 0) @@ -102,6 +77,15 @@ int become_user(const char *username) pidfd = -1; } + if(supplementary_groups && ngroups) { + if(setgroups(ngroups, supplementary_groups) == -1) + error("Cannot set supplementary groups for user '%s'", username); + + free(supplementary_groups); + supplementary_groups = NULL; + ngroups = 0; + } + if(setresgid(gid, gid, gid) != 0) { error("Cannot switch to user's %s group (gid: %d).", username, gid); return -1; @@ -183,6 +167,8 @@ int become_daemon(int dont_fork, int close_all_files, const char *user, const ch *access_fd = -1; return -1; } + if(setvbuf(*access_fp, NULL, _IOLBF, 0) != 0) + error("Cannot set line buffering on access.log"); } } @@ -222,10 +208,6 @@ int become_daemon(int dont_fork, int close_all_files, const char *user, const ch } } - signal(SIGCHLD, SIG_IGN); - signal(SIGHUP, SIG_IGN); - signal(SIGWINCH, SIG_IGN); - // fork() again if(!dont_fork) { int i = fork(); @@ -276,6 +258,10 @@ int become_daemon(int dont_fork, int close_all_files, const char *user, const ch dup2(output_fd, STDOUT_FILENO); close(output_fd); } + + if(setvbuf(stdout, NULL, _IOLBF, 0) != 0) + error("Cannot set line buffering on debug.log"); + output_fd = -1; } else dup2(dev_null, STDOUT_FILENO); @@ -285,6 +271,10 @@ int become_daemon(int dont_fork, int close_all_files, const char *user, const ch dup2(error_fd, STDERR_FILENO); close(error_fd); } + + if(setvbuf(stderr, NULL, _IOLBF, 0) != 0) + error("Cannot set line buffering on error.log"); + error_fd = -1; } else dup2(dev_null, STDERR_FILENO); diff --git a/src/dictionary.c b/src/dictionary.c index 31f4d52e1..1543f4d0e 100644 --- a/src/dictionary.c +++ b/src/dictionary.c @@ -1,6 +1,7 @@ #ifdef HAVE_CONFIG_H #include <config.h> #endif + #include <pthread.h> #include <stdlib.h> #include <string.h> @@ -12,9 +13,32 @@ #include "dictionary.h" // ---------------------------------------------------------------------------- -// name_value index +// dictionary locks -static int name_value_iterator(avl *a) { if(a) {}; return 0; } +static inline void dictionary_read_lock(DICTIONARY *dict) { + if(likely(!(dict->flags & DICTIONARY_FLAG_SINGLE_THREADED))) { + // debug(D_DICTIONARY, "Dictionary READ lock"); + pthread_rwlock_rdlock(&dict->rwlock); + } +} + +static inline void dictionary_write_lock(DICTIONARY *dict) { + if(likely(!(dict->flags & DICTIONARY_FLAG_SINGLE_THREADED))) { + // debug(D_DICTIONARY, "Dictionary WRITE lock"); + pthread_rwlock_wrlock(&dict->rwlock); + } +} + +static inline void dictionary_unlock(DICTIONARY *dict) { + if(likely(!(dict->flags & DICTIONARY_FLAG_SINGLE_THREADED))) { + // debug(D_DICTIONARY, "Dictionary UNLOCK lock"); + pthread_rwlock_unlock(&dict->rwlock); + } +} + + +// 
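become_user() now loads the account's supplementary groups with getgrouplist() and installs them with setgroups() before the gid switch, so a netdata dropped to an unprivileged user still gets the group memberships that user is entitled to. A reduced sketch of that ordering follows; the uid switch itself sits outside the hunk shown, error handling is trimmed to perror(), and the fallback group count is an assumption.

    #define _GNU_SOURCE              /* getgrouplist(), setgroups(), setresgid/uid on glibc */
    #include <grp.h>
    #include <pwd.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static int drop_to_user(const char *username) {
        struct passwd *pw = getpwnam(username);
        if (!pw) { fprintf(stderr, "user '%s' not found\n", username); return -1; }

        int ngroups = (int)sysconf(_SC_NGROUPS_MAX);
        if (ngroups < 1) ngroups = 64;                     /* assumed fallback if no limit is reported */

        gid_t *groups = malloc(sizeof(gid_t) * (size_t)ngroups);
        if (groups && getgrouplist(username, pw->pw_gid, groups, &ngroups) != -1) {
            if (setgroups((size_t)ngroups, groups) == -1)  /* needs privilege: do it before dropping */
                perror("setgroups");
        }
        free(groups);

        if (setresgid(pw->pw_gid, pw->pw_gid, pw->pw_gid) != 0) { perror("setresgid"); return -1; }
        if (setresuid(pw->pw_uid, pw->pw_uid, pw->pw_uid) != 0) { perror("setresuid"); return -1; }
        return 0;
    }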
---------------------------------------------------------------------------- +// avl index static int name_value_compare(void* a, void* b) { if(((NAME_VALUE *)a)->hash < ((NAME_VALUE *)b)->hash) return -1; @@ -22,93 +46,100 @@ static int name_value_compare(void* a, void* b) { else return strcmp(((NAME_VALUE *)a)->name, ((NAME_VALUE *)b)->name); } -#define name_value_index_add(dict, cv) avl_insert(&((dict)->values_index), (avl *)(cv)) -#define name_value_index_del(dict, cv) avl_remove(&((dict)->values_index), (avl *)(cv)) +#define dictionary_name_value_index_add_nolock(dict, nv) do { (dict)->inserts++; avl_insert(&((dict)->values_index), (avl *)(nv)); } while(0) +#define dictionary_name_value_index_del_nolock(dict, nv) do { (dict)->deletes++; avl_remove(&(dict->values_index), (avl *)(nv)); } while(0) -static NAME_VALUE *dictionary_name_value_index_find(DICTIONARY *dict, const char *name, uint32_t hash) { - NAME_VALUE *result = NULL, tmp; +static inline NAME_VALUE *dictionary_name_value_index_find_nolock(DICTIONARY *dict, const char *name, uint32_t hash) { + NAME_VALUE tmp; tmp.hash = (hash)?hash:simple_hash(name); tmp.name = (char *)name; - avl_search(&(dict->values_index), (avl *)&tmp, name_value_iterator, (avl **)&result); - return result; + dict->searches++; + return (NAME_VALUE *)avl_search(&(dict->values_index), (avl *) &tmp); } // ---------------------------------------------------------------------------- +// internal methods -static NAME_VALUE *dictionary_name_value_create(DICTIONARY *dict, const char *name, void *value, size_t value_len) { - debug(D_DICTIONARY, "Creating name value entry for name '%s', value '%s'.", name, value); +static NAME_VALUE *dictionary_name_value_create_nolock(DICTIONARY *dict, const char *name, void *value, size_t value_len, uint32_t hash) { + debug(D_DICTIONARY, "Creating name value entry for name '%s'.", name); NAME_VALUE *nv = calloc(1, sizeof(NAME_VALUE)); - if(!nv) { - fatal("Cannot allocate name_value of size %z", sizeof(NAME_VALUE)); - exit(1); + if(unlikely(!nv)) fatal("Cannot allocate name_value of size %z", sizeof(NAME_VALUE)); + + if(dict->flags & DICTIONARY_FLAG_NAME_LINK_DONT_CLONE) + nv->name = (char *)name; + else { + nv->name = strdup(name); + if (unlikely(!nv->name)) + fatal("Cannot allocate name_value.name of size %z", strlen(name)); } - nv->name = strdup(name); - if(!nv->name) fatal("Cannot allocate name_value.name of size %z", strlen(name)); - nv->hash = simple_hash(nv->name); + nv->hash = (hash)?hash:simple_hash(nv->name); - nv->value = malloc(value_len); - if(!nv->value) fatal("Cannot allocate name_value.value of size %z", value_len); - memcpy(nv->value, value, value_len); + if(dict->flags & DICTIONARY_FLAG_VALUE_LINK_DONT_CLONE) + nv->value = value; + else { + nv->value = malloc(value_len); + if (unlikely(!nv->value)) + fatal("Cannot allocate name_value.value of size %z", value_len); - // link it - pthread_rwlock_wrlock(&dict->rwlock); - nv->next = dict->values; - dict->values = nv; - pthread_rwlock_unlock(&dict->rwlock); + memcpy(nv->value, value, value_len); + } // index it - name_value_index_add(dict, nv); + dictionary_name_value_index_add_nolock(dict, nv); + dict->entries++; return nv; } -static void dictionary_name_value_destroy(DICTIONARY *dict, NAME_VALUE *nv) { +static void dictionary_name_value_destroy_nolock(DICTIONARY *dict, NAME_VALUE *nv) { debug(D_DICTIONARY, "Destroying name value entry for name '%s'.", nv->name); - pthread_rwlock_wrlock(&dict->rwlock); - if(dict->values == nv) dict->values = nv->next; - else { - 
NAME_VALUE *n = dict->values; - while(n && n->next && n->next != nv) nv = nv->next; - if(!n || n->next != nv) { - fatal("Cannot find name_value with name '%s' in dictionary.", nv->name); - exit(1); - } - n->next = nv->next; - nv->next = NULL; + dictionary_name_value_index_del_nolock(dict, nv); + + dict->entries--; + + if(!(dict->flags & DICTIONARY_FLAG_VALUE_LINK_DONT_CLONE)) { + debug(D_REGISTRY, "Dictionary freeing value of '%s'", nv->name); + free(nv->value); + } + + if(!(dict->flags & DICTIONARY_FLAG_NAME_LINK_DONT_CLONE)) { + debug(D_REGISTRY, "Dictionary freeing name '%s'", nv->name); + free(nv->name); } - pthread_rwlock_unlock(&dict->rwlock); - free(nv->value); free(nv); } // ---------------------------------------------------------------------------- +// API - basic methods -DICTIONARY *dictionary_create(void) { +DICTIONARY *dictionary_create(uint32_t flags) { debug(D_DICTIONARY, "Creating dictionary."); DICTIONARY *dict = calloc(1, sizeof(DICTIONARY)); - if(!dict) { - fatal("Cannot allocate DICTIONARY"); - exit(1); - } + if(unlikely(!dict)) fatal("Cannot allocate DICTIONARY"); avl_init(&dict->values_index, name_value_compare); pthread_rwlock_init(&dict->rwlock, NULL); + dict->flags = flags; + return dict; } void dictionary_destroy(DICTIONARY *dict) { debug(D_DICTIONARY, "Destroying dictionary."); - pthread_rwlock_wrlock(&dict->rwlock); - while(dict->values) dictionary_name_value_destroy(dict, dict->values); - pthread_rwlock_unlock(&dict->rwlock); + dictionary_write_lock(dict); + + while(dict->values_index.root) + dictionary_name_value_destroy_nolock(dict, (NAME_VALUE *)dict->values_index.root); + + dictionary_unlock(dict); free(dict); } @@ -118,39 +149,55 @@ void dictionary_destroy(DICTIONARY *dict) { void *dictionary_set(DICTIONARY *dict, const char *name, void *value, size_t value_len) { debug(D_DICTIONARY, "SET dictionary entry with name '%s'.", name); - pthread_rwlock_rdlock(&dict->rwlock); - NAME_VALUE *nv = dictionary_name_value_index_find(dict, name, 0); - pthread_rwlock_unlock(&dict->rwlock); - if(!nv) { + uint32_t hash = simple_hash(name); + + dictionary_write_lock(dict); + + NAME_VALUE *nv = dictionary_name_value_index_find_nolock(dict, name, hash); + if(unlikely(!nv)) { debug(D_DICTIONARY, "Dictionary entry with name '%s' not found. Creating a new one.", name); - nv = dictionary_name_value_create(dict, name, value, value_len); - if(!nv) { + + nv = dictionary_name_value_create_nolock(dict, name, value, value_len, hash); + if(unlikely(!nv)) fatal("Cannot create name_value."); - exit(1); - } - return nv->value; } else { debug(D_DICTIONARY, "Dictionary entry with name '%s' found. 
Changing its value.", name); - pthread_rwlock_wrlock(&dict->rwlock); - void *old = nv->value; - nv->value = malloc(value_len); - if(!nv->value) fatal("Cannot allocate value of size %z", value_len); - memcpy(nv->value, value, value_len); - pthread_rwlock_unlock(&dict->rwlock); - free(old); + + if(dict->flags & DICTIONARY_FLAG_VALUE_LINK_DONT_CLONE) { + debug(D_REGISTRY, "Dictionary: linking value to '%s'", name); + nv->value = value; + } + else { + debug(D_REGISTRY, "Dictionary: cloning value to '%s'", name); + + void *value = malloc(value_len), + *old = nv->value; + + if(unlikely(!nv->value)) + fatal("Cannot allocate value of size %z", value_len); + + memcpy(value, value, value_len); + nv->value = value; + + debug(D_REGISTRY, "Dictionary: freeing old value of '%s'", name); + free(old); + } } + dictionary_unlock(dict); + return nv->value; } void *dictionary_get(DICTIONARY *dict, const char *name) { debug(D_DICTIONARY, "GET dictionary entry with name '%s'.", name); - pthread_rwlock_rdlock(&dict->rwlock); - NAME_VALUE *nv = dictionary_name_value_index_find(dict, name, 0); - pthread_rwlock_unlock(&dict->rwlock); - if(!nv) { + dictionary_read_lock(dict); + NAME_VALUE *nv = dictionary_name_value_index_find_nolock(dict, name, 0); + dictionary_unlock(dict); + + if(unlikely(!nv)) { debug(D_DICTIONARY, "Not found dictionary entry with name '%s'.", name); return NULL; } @@ -158,3 +205,67 @@ void *dictionary_get(DICTIONARY *dict, const char *name) { debug(D_DICTIONARY, "Found dictionary entry with name '%s'.", name); return nv->value; } + +int dictionary_del(DICTIONARY *dict, const char *name) { + int ret; + + debug(D_DICTIONARY, "DEL dictionary entry with name '%s'.", name); + + dictionary_write_lock(dict); + + NAME_VALUE *nv = dictionary_name_value_index_find_nolock(dict, name, 0); + if(unlikely(!nv)) { + debug(D_DICTIONARY, "Not found dictionary entry with name '%s'.", name); + ret = -1; + } + else { + debug(D_DICTIONARY, "Found dictionary entry with name '%s'.", name); + dictionary_name_value_destroy_nolock(dict, nv); + ret = 0; + } + + dictionary_unlock(dict); + + return ret; +} + + +// ---------------------------------------------------------------------------- +// API - walk through the dictionary +// the dictionary is locked for reading while this happens +// do not user other dictionary calls while walking the dictionary - deadlock! 
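One caveat in the clone branch of dictionary_set() as it appears above: the freshly declared local 'value' shadows the incoming parameter, the NULL check inspects nv->value instead of the new buffer, and memcpy(value, value, value_len) copies the buffer onto itself (the '%z' in the fatal() message is also missing its conversion letter, e.g. '%zu'). A hedged, self-contained sketch of the intended semantics, using a hypothetical helper rather than the patch's exact code:

    #include <stdlib.h>
    #include <string.h>

    /* replace *stored with a private copy of 'value'; returns the new buffer
     * or NULL if allocation failed (hypothetical helper, illustration only) */
    static void *clone_replace(void **stored, const void *value, size_t value_len) {
        void *newval = malloc(value_len);
        if (!newval) return NULL;

        memcpy(newval, value, value_len);   /* copy the caller's value, not the buffer onto itself */

        free(*stored);                      /* release the previous clone */
        *stored = newval;
        return newval;
    }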
+ +static int dictionary_walker(avl *a, int (*callback)(void *entry, void *data), void *data) { + int total = 0, ret = 0; + + if(a->right) { + ret = dictionary_walker(a->right, callback, data); + if(ret < 0) return ret; + total += ret; + } + + ret = callback(((NAME_VALUE *)a)->value, data); + if(ret < 0) return ret; + total += ret; + + if(a->left) { + dictionary_walker(a->left, callback, data); + if (ret < 0) return ret; + total += ret; + } + + return total; +} + +int dictionary_get_all(DICTIONARY *dict, int (*callback)(void *entry, void *data), void *data) { + int ret = 0; + + dictionary_read_lock(dict); + + if(likely(dict->values_index.root)) + ret = dictionary_walker(dict->values_index.root, callback, data); + + dictionary_unlock(dict); + + return ret; +} diff --git a/src/dictionary.h b/src/dictionary.h index 9822b23c2..575f28271 100644 --- a/src/dictionary.h +++ b/src/dictionary.h @@ -1,4 +1,7 @@ +#include <pthread.h> + #include "web_buffer.h" +#include "avl.h" #ifndef NETDATA_DICTIONARY_H #define NETDATA_DICTIONARY_H 1 @@ -10,20 +13,33 @@ typedef struct name_value { // we first compare hashes, and only if the hashes are equal we do string comparisons char *name; - char *value; - - struct name_value *next; + void *value; } NAME_VALUE; typedef struct dictionary { - NAME_VALUE *values; avl_tree values_index; + + uint8_t flags; + + unsigned long long inserts; + unsigned long long deletes; + unsigned long long searches; + unsigned long long entries; + pthread_rwlock_t rwlock; } DICTIONARY; -extern DICTIONARY *dictionary_create(void); +#define DICTIONARY_FLAG_DEFAULT 0x00000000 +#define DICTIONARY_FLAG_SINGLE_THREADED 0x00000001 +#define DICTIONARY_FLAG_VALUE_LINK_DONT_CLONE 0x00000002 +#define DICTIONARY_FLAG_NAME_LINK_DONT_CLONE 0x00000004 + +extern DICTIONARY *dictionary_create(uint32_t flags); extern void dictionary_destroy(DICTIONARY *dict); extern void *dictionary_set(DICTIONARY *dict, const char *name, void *value, size_t value_len); extern void *dictionary_get(DICTIONARY *dict, const char *name); +extern int dictionary_del(DICTIONARY *dict, const char *name); + +extern int dictionary_get_all(DICTIONARY *dict, int (*callback)(void *entry, void *data), void *data); #endif /* NETDATA_DICTIONARY_H */ @@ -131,6 +131,7 @@ void debug_int( const char *file, const char *function, const unsigned long line vfprintf( stdout, fmt, args ); va_end( args ); fprintf(stdout, "\n"); + // fflush( stdout ); if(output_log_syslog) { va_start( args, fmt ); @@ -228,7 +229,7 @@ void log_access( const char *fmt, ... 
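dictionary_get_all() walks the AVL index in order under the read lock and aborts as soon as a callback returns a negative value; note that in dictionary_walker() as shown the left-subtree recursion does not assign its result back to 'ret', so totals and early aborts coming from that side are lost. A usage sketch of the extended API declared above (chart names and values are illustrative):

    #include <stdio.h>
    #include <string.h>
    #include "dictionary.h"

    static int print_entry(void *entry, void *data) {
        (void)data;
        printf("value: %s\n", (char *)entry);
        return 0;                                    /* >= 0 keeps walking, < 0 aborts the walk */
    }

    static void demo(void) {
        /* values are cloned by default; SINGLE_THREADED skips the rwlock */
        DICTIONARY *dict = dictionary_create(DICTIONARY_FLAG_SINGLE_THREADED);

        dictionary_set(dict, "mysql.queries", "enabled", strlen("enabled") + 1);
        dictionary_set(dict, "disk.sda",      "enabled", strlen("enabled") + 1);

        char *v = dictionary_get(dict, "disk.sda");  /* NULL when the name is missing */
        if (v) printf("disk.sda -> %s\n", v);

        dictionary_get_all(dict, print_entry, NULL); /* in-order, read-locked walk */
        dictionary_del(dict, "mysql.queries");       /* returns -1 when the name is missing */

        dictionary_destroy(dict);
    }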
) vfprintf( stdaccess, fmt, args ); va_end( args ); fprintf( stdaccess, "\n"); - fflush( stdaccess ); + // fflush( stdaccess ); } if(access_log_syslog) { @@ -1,5 +1,6 @@ #include <stdio.h> #include <stdarg.h> +#include <time.h> #ifndef NETDATA_LOG_H #define NETDATA_LOG_H 1 @@ -24,6 +25,8 @@ #define D_RRD_CALLS 0x00020000 #define D_DICTIONARY 0x00040000 #define D_MEMORY 0x00080000 +#define D_CGROUP 0x00100000 +#define D_REGISTRY 0x00200000 //#define DEBUG (D_WEB_CLIENT_ACCESS|D_LISTENER|D_RRD_STATS) //#define DEBUG 0xffffffff diff --git a/src/main.c b/src/main.c index ad24debfa..ec3c59ca7 100644 --- a/src/main.c +++ b/src/main.c @@ -34,11 +34,13 @@ #include "plugin_checks.h" #include "plugin_proc.h" #include "plugin_nfacct.h" +#include "registry.h" #include "main.h" -#include "../config.h" -int netdata_exit = 0; +extern void *cgroups_main(void *ptr); + +volatile sig_atomic_t netdata_exit = 0; void netdata_cleanup_and_exit(int ret) { @@ -84,6 +86,7 @@ struct netdata_static_thread static_threads[] = { {"tc", "plugins", "tc", 1, NULL, NULL, tc_main}, {"idlejitter", "plugins", "idlejitter", 1, NULL, NULL, cpuidlejitter_main}, {"proc", "plugins", "proc", 1, NULL, NULL, proc_main}, + {"cgroups", "plugins", "cgroups", 1, NULL, NULL, cgroups_main}, #ifdef INTERNAL_PLUGIN_NFACCT // nfacct requires root access @@ -120,19 +123,7 @@ int killpid(pid_t pid, int sig) } else { errno = 0; - - void (*old)(int); - old = signal(sig, SIG_IGN); - if(old == SIG_ERR) { - error("Cannot overwrite signal handler for signal %d", sig); - old = sig_handler; - } - ret = kill(pid, sig); - - if(signal(sig, old) == SIG_ERR) - error("Cannot restore signal handler for signal %d", sig); - if(ret == -1) { switch(errno) { case ESRCH: @@ -239,8 +230,7 @@ int main(int argc, char **argv) else if(strcmp(argv[i], "-nodaemon") == 0 || strcmp(argv[i], "-nd") == 0) dont_fork = 1; else if(strcmp(argv[i], "-pidfile") == 0 && (i+1) < argc) { i++; - strncpy(pidfile, argv[i], FILENAME_MAX); - pidfile[FILENAME_MAX] = '\0'; + strncpyz(pidfile, argv[i], FILENAME_MAX); } else if(strcmp(argv[i], "--unittest") == 0) { rrd_update_every = 1; @@ -273,8 +263,10 @@ int main(int argc, char **argv) setenv("NETDATA_PLUGINS_DIR", config_get("global", "plugins directory" , PLUGINS_DIR), 1); setenv("NETDATA_WEB_DIR" , config_get("global", "web files directory", WEB_DIR) , 1); setenv("NETDATA_CACHE_DIR" , config_get("global", "cache directory" , CACHE_DIR) , 1); + setenv("NETDATA_LIB_DIR" , config_get("global", "lib directory" , VARLIB_DIR) , 1); setenv("NETDATA_LOG_DIR" , config_get("global", "log directory" , LOG_DIR) , 1); setenv("NETDATA_HOST_PREFIX", config_get("global", "host access prefix" , "") , 1); + setenv("HOME" , config_get("global", "home directory" , CACHE_DIR) , 1); // avoid extended to stat(/etc/localtime) // http://stackoverflow.com/questions/4554271/how-to-avoid-excessive-stat-etc-localtime-calls-in-strftime-on-linux @@ -400,13 +392,49 @@ int main(int argc, char **argv) // let the plugins know the min update_every { - char buf[50]; - snprintf(buf, 50, "%d", rrd_update_every); + char buf[51]; + snprintfz(buf, 50, "%d", rrd_update_every); setenv("NETDATA_UPDATE_EVERY", buf, 1); } // -------------------------------------------------------------------- + // block signals while initializing threads. + // this causes the threads to block signals. 
+ sigset_t sigset; + sigfillset(&sigset); + + if(pthread_sigmask(SIG_BLOCK, &sigset, NULL) == -1) { + error("Could not block signals for threads"); + } + + // Catch signals which we want to use to quit savely + struct sigaction sa; + sigemptyset(&sa.sa_mask); + sigaddset(&sa.sa_mask, SIGHUP); + sigaddset(&sa.sa_mask, SIGINT); + sigaddset(&sa.sa_mask, SIGTERM); + sa.sa_handler = sig_handler; + sa.sa_flags = 0; + if(sigaction(SIGHUP, &sa, NULL) == -1) { + error("Failed to change signal handler for SIGHUP"); + } + if(sigaction(SIGINT, &sa, NULL) == -1) { + error("Failed to change signal handler for SIGINT"); + } + if(sigaction(SIGTERM, &sa, NULL) == -1) { + error("Failed to change signal handler for SIGTERM"); + } + // Ignore SIGPIPE completely. + // INFO: If we add signals here we have to unblock them + // at popen.c when running a external plugin. + sa.sa_handler = SIG_IGN; + if(sigaction(SIGPIPE, &sa, NULL) == -1) { + error("Failed to change signal handler for SIGTERM"); + } + + // -------------------------------------------------------------------- + i = pthread_attr_init(&attr); if(i != 0) fatal("pthread_attr_init() failed with code %d.", i); @@ -468,18 +496,17 @@ int main(int argc, char **argv) // never become a problem if(nice(20) == -1) error("Cannot lower my CPU priority."); - if(become_daemon(dont_fork, 0, user, input_log_file, output_log_file, error_log_file, access_log_file, &access_fd, &stdaccess) == -1) { + if(become_daemon(dont_fork, 0, user, input_log_file, output_log_file, error_log_file, access_log_file, &access_fd, &stdaccess) == -1) fatal("Cannot demonize myself."); - exit(1); - } +#ifdef NETDATA_INTERNAL_CHECKS if(debug_flags != 0) { struct rlimit rl = { RLIM_INFINITY, RLIM_INFINITY }; if(setrlimit(RLIMIT_CORE, &rl) != 0) info("Cannot request unlimited core dumps for debugging... Proceeding anyway..."); - prctl(PR_SET_DUMPABLE, 1, 0, 0, 0); } +#endif /* NETDATA_INTERNAL_CHECKS */ if(output_log_syslog || error_log_syslog || access_log_syslog) openlog("netdata", LOG_PID, LOG_DAEMON); @@ -487,25 +514,6 @@ int main(int argc, char **argv) info("NetData started on pid %d", getpid()); - // catch all signals - for (i = 1 ; i < 65 ;i++) { - switch(i) { - case SIGKILL: // not catchable - case SIGSTOP: // not catchable - break; - - case SIGSEGV: - case SIGFPE: - case SIGCHLD: - signal(i, SIG_DFL); - break; - - default: - signal(i, sig_handler); - break; - } - } - // ------------------------------------------------------------------------ // get default pthread stack size @@ -517,6 +525,11 @@ int main(int argc, char **argv) info("Successfully set pthread stacksize to %zu bytes", wanted_stacksize); } + // -------------------------------------------------------------------- + // initialize the registry + + registry_init(); + // ------------------------------------------------------------------------ // spawn the threads @@ -539,18 +552,22 @@ int main(int argc, char **argv) else info("Not starting thread %s.", st->name); } - // for future use - the main thread - while(1) { - if(netdata_exit != 0) { - netdata_exit++; + // ------------------------------------------------------------------------ + // block signals while initializing threads. + sigset_t sigset; + sigfillset(&sigset); - if(netdata_exit > 5) { - netdata_cleanup_and_exit(0); - exit(0); - } - } - sleep(2); + if(pthread_sigmask(SIG_UNBLOCK, &sigset, NULL) == -1) { + error("Could not unblock signals for threads"); } - exit(0); + // Handle flags set in the signal handler. 
+ while(1) { + pause(); + if(netdata_exit) { + info("Exit main loop of netdata."); + netdata_cleanup_and_exit(0); + exit(0); + } + } } diff --git a/src/main.h b/src/main.h index 6a90efd9d..d9edda58e 100644 --- a/src/main.h +++ b/src/main.h @@ -1,7 +1,9 @@ #ifndef NETDATA_MAIN_H #define NETDATA_MAIN_H 1 -extern int netdata_exit; +#include <signal.h> + +extern volatile sig_atomic_t netdata_exit; extern void kill_childs(void); extern int killpid(pid_t pid, int signal); diff --git a/src/plugin_proc.c b/src/plugin_proc.c index 4cd20afc5..a147d971f 100644 --- a/src/plugin_proc.c +++ b/src/plugin_proc.c @@ -14,12 +14,7 @@ #include "rrd.h" #include "plugin_proc.h" #include "main.h" - -unsigned long long sutime() { - struct timeval now; - gettimeofday(&now, NULL); - return now.tv_sec * 1000000ULL + now.tv_usec; -} +#include "registry.h" void *proc_main(void *ptr) { @@ -88,11 +83,11 @@ void *proc_main(void *ptr) if(unlikely(netdata_exit)) break; // delay until it is our time to run - while((sunow = sutime()) < sunext) + while((sunow = timems()) < sunext) usleep((useconds_t)(sunext - sunow)); // find the next time we need to run - while(sutime() > sunext) + while(timems() > sunext) sunext += rrd_update_every * 1000000ULL; if(unlikely(netdata_exit)) break; @@ -102,7 +97,7 @@ void *proc_main(void *ptr) if(!vdo_sys_kernel_mm_ksm) { debug(D_PROCNETDEV_LOOP, "PROCNETDEV: calling do_sys_kernel_mm_ksm()."); - sunow = sutime(); + sunow = timems(); vdo_sys_kernel_mm_ksm = do_sys_kernel_mm_ksm(rrd_update_every, (sutime_sys_kernel_mm_ksm > 0)?sunow - sutime_sys_kernel_mm_ksm:0ULL); sutime_sys_kernel_mm_ksm = sunow; } @@ -110,7 +105,7 @@ void *proc_main(void *ptr) if(!vdo_proc_loadavg) { debug(D_PROCNETDEV_LOOP, "PROCNETDEV: calling do_proc_loadavg()."); - sunow = sutime(); + sunow = timems(); vdo_proc_loadavg = do_proc_loadavg(rrd_update_every, (sutime_proc_loadavg > 0)?sunow - sutime_proc_loadavg:0ULL); sutime_proc_loadavg = sunow; } @@ -118,7 +113,7 @@ void *proc_main(void *ptr) if(!vdo_proc_interrupts) { debug(D_PROCNETDEV_LOOP, "PROCNETDEV: calling do_proc_interrupts()."); - sunow = sutime(); + sunow = timems(); vdo_proc_interrupts = do_proc_interrupts(rrd_update_every, (sutime_proc_interrupts > 0)?sunow - sutime_proc_interrupts:0ULL); sutime_proc_interrupts = sunow; } @@ -126,7 +121,7 @@ void *proc_main(void *ptr) if(!vdo_proc_softirqs) { debug(D_PROCNETDEV_LOOP, "PROCNETDEV: calling do_proc_softirqs()."); - sunow = sutime(); + sunow = timems(); vdo_proc_softirqs = do_proc_softirqs(rrd_update_every, (sutime_proc_softirqs > 0)?sunow - sutime_proc_softirqs:0ULL); sutime_proc_softirqs = sunow; } @@ -134,7 +129,7 @@ void *proc_main(void *ptr) if(!vdo_proc_sys_kernel_random_entropy_avail) { debug(D_PROCNETDEV_LOOP, "PROCNETDEV: calling do_proc_sys_kernel_random_entropy_avail()."); - sunow = sutime(); + sunow = timems(); vdo_proc_sys_kernel_random_entropy_avail = do_proc_sys_kernel_random_entropy_avail(rrd_update_every, (sutime_proc_sys_kernel_random_entropy_avail > 0)?sunow - sutime_proc_sys_kernel_random_entropy_avail:0ULL); sutime_proc_sys_kernel_random_entropy_avail = sunow; } @@ -142,7 +137,7 @@ void *proc_main(void *ptr) if(!vdo_proc_net_dev) { debug(D_PROCNETDEV_LOOP, "PROCNETDEV: calling do_proc_net_dev()."); - sunow = sutime(); + sunow = timems(); vdo_proc_net_dev = do_proc_net_dev(rrd_update_every, (sutime_proc_net_dev > 0)?sunow - sutime_proc_net_dev:0ULL); sutime_proc_net_dev = sunow; } @@ -150,7 +145,7 @@ void *proc_main(void *ptr) if(!vdo_proc_diskstats) { debug(D_PROCNETDEV_LOOP, "PROCNETDEV: 
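The shutdown path is now built around three pieces visible above: netdata_exit became a volatile sig_atomic_t so the handler only flips a flag, all signals are blocked before the worker threads are spawned (they inherit the blocked mask), and only the main thread unblocks them and sleeps in pause() until the flag is set; SIGPIPE is ignored because web clients resetting their sockets would otherwise kill the daemon. (The error message after the SIGPIPE sigaction() still says SIGTERM, a copy-paste slip.) A minimal, self-contained sketch of the same pattern with illustrative names:

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static volatile sig_atomic_t exit_requested = 0;

    static void on_signal(int signo) {
        (void)signo;
        exit_requested = 1;                        /* async-signal-safe: only set the flag */
    }

    int main(void) {                               /* build with -pthread */
        sigset_t all;
        sigfillset(&all);
        pthread_sigmask(SIG_BLOCK, &all, NULL);    /* threads created from here inherit the mask */

        struct sigaction sa;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sa.sa_handler = on_signal;
        sigaction(SIGHUP,  &sa, NULL);
        sigaction(SIGINT,  &sa, NULL);
        sigaction(SIGTERM, &sa, NULL);

        sa.sa_handler = SIG_IGN;                   /* web clients resetting sockets raise SIGPIPE */
        sigaction(SIGPIPE, &sa, NULL);

        /* ... spawn worker threads here ... */

        pthread_sigmask(SIG_UNBLOCK, &all, NULL);  /* only this thread will receive the signals */
        while (1) {
            pause();                               /* returns after a handler has run */
            if (exit_requested) {
                fprintf(stderr, "exit requested, cleaning up\n");
                break;
            }
        }
        return 0;
    }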
calling do_proc_diskstats()."); - sunow = sutime(); + sunow = timems(); vdo_proc_diskstats = do_proc_diskstats(rrd_update_every, (sutime_proc_diskstats > 0)?sunow - sutime_proc_diskstats:0ULL); sutime_proc_diskstats = sunow; } @@ -158,7 +153,7 @@ void *proc_main(void *ptr) if(!vdo_proc_net_snmp) { debug(D_PROCNETDEV_LOOP, "PROCNETDEV: calling do_proc_net_snmp()."); - sunow = sutime(); + sunow = timems(); vdo_proc_net_snmp = do_proc_net_snmp(rrd_update_every, (sutime_proc_net_snmp > 0)?sunow - sutime_proc_net_snmp:0ULL); sutime_proc_net_snmp = sunow; } @@ -166,7 +161,7 @@ void *proc_main(void *ptr) if(!vdo_proc_net_snmp6) { debug(D_PROCNETDEV_LOOP, "PROCNETDEV: calling do_proc_net_snmp6()."); - sunow = sutime(); + sunow = timems(); vdo_proc_net_snmp6 = do_proc_net_snmp6(rrd_update_every, (sutime_proc_net_snmp6 > 0)?sunow - sutime_proc_net_snmp6:0ULL); sutime_proc_net_snmp6 = sunow; } @@ -174,7 +169,7 @@ void *proc_main(void *ptr) if(!vdo_proc_net_netstat) { debug(D_PROCNETDEV_LOOP, "PROCNETDEV: calling do_proc_net_netstat()."); - sunow = sutime(); + sunow = timems(); vdo_proc_net_netstat = do_proc_net_netstat(rrd_update_every, (sutime_proc_net_netstat > 0)?sunow - sutime_proc_net_netstat:0ULL); sutime_proc_net_netstat = sunow; } @@ -182,7 +177,7 @@ void *proc_main(void *ptr) if(!vdo_proc_net_stat_conntrack) { debug(D_PROCNETDEV_LOOP, "PROCNETDEV: calling do_proc_net_stat_conntrack()."); - sunow = sutime(); + sunow = timems(); vdo_proc_net_stat_conntrack = do_proc_net_stat_conntrack(rrd_update_every, (sutime_proc_net_stat_conntrack > 0)?sunow - sutime_proc_net_stat_conntrack:0ULL); sutime_proc_net_stat_conntrack = sunow; } @@ -190,7 +185,7 @@ void *proc_main(void *ptr) if(!vdo_proc_net_ip_vs_stats) { debug(D_PROCNETDEV_LOOP, "PROCNETDEV: calling vdo_proc_net_ip_vs_stats()."); - sunow = sutime(); + sunow = timems(); vdo_proc_net_ip_vs_stats = do_proc_net_ip_vs_stats(rrd_update_every, (sutime_proc_net_ip_vs_stats > 0)?sunow - sutime_proc_net_ip_vs_stats:0ULL); sutime_proc_net_ip_vs_stats = sunow; } @@ -198,7 +193,7 @@ void *proc_main(void *ptr) if(!vdo_proc_net_stat_synproxy) { debug(D_PROCNETDEV_LOOP, "PROCNETDEV: calling vdo_proc_net_stat_synproxy()."); - sunow = sutime(); + sunow = timems(); vdo_proc_net_stat_synproxy = do_proc_net_stat_synproxy(rrd_update_every, (sutime_proc_net_stat_synproxy > 0)?sunow - sutime_proc_net_stat_synproxy:0ULL); sutime_proc_net_stat_synproxy = sunow; } @@ -206,7 +201,7 @@ void *proc_main(void *ptr) if(!vdo_proc_stat) { debug(D_PROCNETDEV_LOOP, "PROCNETDEV: calling do_proc_stat()."); - sunow = sutime(); + sunow = timems(); vdo_proc_stat = do_proc_stat(rrd_update_every, (sutime_proc_stat > 0)?sunow - sutime_proc_stat:0ULL); sutime_proc_stat = sunow; } @@ -214,7 +209,7 @@ void *proc_main(void *ptr) if(!vdo_proc_meminfo) { debug(D_PROCNETDEV_LOOP, "PROCNETDEV: calling vdo_proc_meminfo()."); - sunow = sutime(); + sunow = timems(); vdo_proc_meminfo = do_proc_meminfo(rrd_update_every, (sutime_proc_meminfo > 0)?sunow - sutime_proc_meminfo:0ULL); sutime_proc_meminfo = sunow; } @@ -222,7 +217,7 @@ void *proc_main(void *ptr) if(!vdo_proc_vmstat) { debug(D_PROCNETDEV_LOOP, "PROCNETDEV: calling vdo_proc_vmstat()."); - sunow = sutime(); + sunow = timems(); vdo_proc_vmstat = do_proc_vmstat(rrd_update_every, (sutime_proc_vmstat > 0)?sunow - sutime_proc_vmstat:0ULL); sutime_proc_vmstat = sunow; } @@ -230,7 +225,7 @@ void *proc_main(void *ptr) if(!vdo_proc_net_rpc_nfsd) { debug(D_PROCNETDEV_LOOP, "PROCNETDEV: calling do_proc_net_rpc_nfsd()."); - sunow = sutime(); + sunow = 
timems(); vdo_proc_net_rpc_nfsd = do_proc_net_rpc_nfsd(rrd_update_every, (sutime_proc_net_rpc_nfsd > 0)?sunow - sutime_proc_net_rpc_nfsd:0ULL); sutime_proc_net_rpc_nfsd = sunow; } @@ -246,7 +241,7 @@ void *proc_main(void *ptr) if(!stcpu_thread) stcpu_thread = rrdset_find("netdata.plugin_proc_cpu"); if(!stcpu_thread) { - stcpu_thread = rrdset_create("netdata", "plugin_proc_cpu", NULL, "proc.internal", NULL, "NetData Proc Plugin CPU usage", "milliseconds/s", 131000, rrd_update_every, RRDSET_TYPE_STACKED); + stcpu_thread = rrdset_create("netdata", "plugin_proc_cpu", NULL, "proc.internal", NULL, "NetData Proc Plugin CPU usage", "milliseconds/s", 132000, rrd_update_every, RRDSET_TYPE_STACKED); rrddim_add(stcpu_thread, "user", NULL, 1, 1000, RRDDIM_INCREMENTAL); rrddim_add(stcpu_thread, "system", NULL, 1, 1000, RRDDIM_INCREMENTAL); @@ -276,7 +271,7 @@ void *proc_main(void *ptr) if(!stclients) stclients = rrdset_find("netdata.clients"); if(!stclients) { - stclients = rrdset_create("netdata", "clients", NULL, "netdata", NULL, "NetData Web Clients", "connected clients", 131000, rrd_update_every, RRDSET_TYPE_LINE); + stclients = rrdset_create("netdata", "clients", NULL, "netdata", NULL, "NetData Web Clients", "connected clients", 130100, rrd_update_every, RRDSET_TYPE_LINE); rrddim_add(stclients, "clients", NULL, 1, 1, RRDDIM_ABSOLUTE); } @@ -289,7 +284,7 @@ void *proc_main(void *ptr) if(!streqs) streqs = rrdset_find("netdata.requests"); if(!streqs) { - streqs = rrdset_create("netdata", "requests", NULL, "netdata", NULL, "NetData Web Requests", "requests/s", 131100, rrd_update_every, RRDSET_TYPE_LINE); + streqs = rrdset_create("netdata", "requests", NULL, "netdata", NULL, "NetData Web Requests", "requests/s", 130200, rrd_update_every, RRDSET_TYPE_LINE); rrddim_add(streqs, "requests", NULL, 1, 1, RRDDIM_INCREMENTAL); } @@ -302,7 +297,7 @@ void *proc_main(void *ptr) if(!stbytes) stbytes = rrdset_find("netdata.net"); if(!stbytes) { - stbytes = rrdset_create("netdata", "net", NULL, "netdata", NULL, "NetData Network Traffic", "kilobits/s", 131200, rrd_update_every, RRDSET_TYPE_AREA); + stbytes = rrdset_create("netdata", "net", NULL, "netdata", NULL, "NetData Network Traffic", "kilobits/s", 130300, rrd_update_every, RRDSET_TYPE_AREA); rrddim_add(stbytes, "in", NULL, 8, 1024, RRDDIM_INCREMENTAL); rrddim_add(stbytes, "out", NULL, -8, 1024, RRDDIM_INCREMENTAL); @@ -312,6 +307,10 @@ void *proc_main(void *ptr) rrddim_set(stbytes, "in", global_statistics.bytes_received); rrddim_set(stbytes, "out", global_statistics.bytes_sent); rrdset_done(stbytes); + + // ---------------------------------------------------------------- + + registry_statistics(); } } diff --git a/src/plugin_tc.c b/src/plugin_tc.c index 2c7a55cee..3d3e35217 100644 --- a/src/plugin_tc.c +++ b/src/plugin_tc.c @@ -83,8 +83,6 @@ struct tc_device *tc_device_root = NULL; // ---------------------------------------------------------------------------- // tc_device index -static int tc_device_iterator(avl *a) { if(a) {}; return 0; } - static int tc_device_compare(void* a, void* b) { if(((struct tc_device *)a)->hash < ((struct tc_device *)b)->hash) return -1; else if(((struct tc_device *)a)->hash > ((struct tc_device *)b)->hash) return 1; @@ -93,32 +91,24 @@ static int tc_device_compare(void* a, void* b) { avl_tree tc_device_root_index = { NULL, - tc_device_compare, -#ifdef AVL_LOCK_WITH_MUTEX - PTHREAD_MUTEX_INITIALIZER -#else - PTHREAD_RWLOCK_INITIALIZER -#endif + tc_device_compare }; #define tc_device_index_add(st) avl_insert(&tc_device_root_index, (avl 
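All the sutime() call sites in proc_main() were switched to the shared timems() helper, which appears to be a drop-in for the removed function (every call site and the 1000000ULL step are unchanged), i.e. a gettimeofday()-based microsecond clock despite the name. A self-contained sketch of that pacing loop; the driver, its arguments and the starting tick are illustrative:

    #include <stdio.h>
    #include <sys/time.h>
    #include <unistd.h>

    /* microsecond clock with the same body as the removed sutime() shown above */
    static unsigned long long timems(void) {
        struct timeval now;
        gettimeofday(&now, NULL);
        return now.tv_sec * 1000000ULL + now.tv_usec;
    }

    static void pacing_demo(int update_every_seconds, int iterations) {
        /* start on the next whole second, as a stand-in for the plugin's first tick */
        unsigned long long sunow, sunext = (timems() / 1000000ULL + 1) * 1000000ULL;

        for (int i = 0; i < iterations; i++) {
            /* delay until it is our time to run */
            while ((sunow = timems()) < sunext)
                usleep((useconds_t)(sunext - sunow));

            /* find the next time we need to run, skipping ticks we are too late for */
            while (timems() > sunext)
                sunext += update_every_seconds * 1000000ULL;

            printf("collecting at %llu\n", sunow);
        }
    }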
*)(st)) #define tc_device_index_del(st) avl_remove(&tc_device_root_index, (avl *)(st)) -static struct tc_device *tc_device_index_find(const char *id, uint32_t hash) { - struct tc_device *result = NULL, tmp; +static inline struct tc_device *tc_device_index_find(const char *id, uint32_t hash) { + struct tc_device tmp; tmp.id = (char *)id; tmp.hash = (hash)?hash:simple_hash(tmp.id); - avl_search(&(tc_device_root_index), (avl *)&tmp, tc_device_iterator, (avl **)&result); - return result; + return (struct tc_device *)avl_search(&(tc_device_root_index), (avl *)&tmp); } // ---------------------------------------------------------------------------- // tc_class index -static int tc_class_iterator(avl *a) { if(a) {}; return 0; } - static int tc_class_compare(void* a, void* b) { if(((struct tc_class *)a)->hash < ((struct tc_class *)b)->hash) return -1; else if(((struct tc_class *)a)->hash > ((struct tc_class *)b)->hash) return 1; @@ -128,18 +118,17 @@ static int tc_class_compare(void* a, void* b) { #define tc_class_index_add(st, rd) avl_insert(&((st)->classes_index), (avl *)(rd)) #define tc_class_index_del(st, rd) avl_remove(&((st)->classes_index), (avl *)(rd)) -static struct tc_class *tc_class_index_find(struct tc_device *st, const char *id, uint32_t hash) { - struct tc_class *result = NULL, tmp; +static inline struct tc_class *tc_class_index_find(struct tc_device *st, const char *id, uint32_t hash) { + struct tc_class tmp; tmp.id = (char *)id; tmp.hash = (hash)?hash:simple_hash(tmp.id); - avl_search(&(st->classes_index), (avl *)&tmp, tc_class_iterator, (avl **)&result); - return result; + return (struct tc_class *)avl_search(&(st->classes_index), (avl *) &tmp); } // ---------------------------------------------------------------------------- -static void tc_class_free(struct tc_device *n, struct tc_class *c) { +static inline void tc_class_free(struct tc_device *n, struct tc_class *c) { debug(D_TC_LOOP, "Removing from device '%s' class '%s', parentid '%s', leafid '%s', seen=%d", n->id, c->id, c->parentid?c->parentid:"", c->leafid?c->leafid:"", c->seen); if(c->next) c->next->prev = c->prev; @@ -159,7 +148,7 @@ static void tc_class_free(struct tc_device *n, struct tc_class *c) { free(c); } -static void tc_device_classes_cleanup(struct tc_device *d) { +static inline void tc_device_classes_cleanup(struct tc_device *d) { static int cleanup_every = 999; if(cleanup_every > 0) { @@ -184,7 +173,7 @@ static void tc_device_classes_cleanup(struct tc_device *d) { } } -static void tc_device_commit(struct tc_device *d) +static inline void tc_device_commit(struct tc_device *d) { static int enable_new_interfaces = -1; @@ -239,7 +228,7 @@ static void tc_device_commit(struct tc_device *d) } char var_name[CONFIG_MAX_NAME + 1]; - snprintf(var_name, CONFIG_MAX_NAME, "qos for %s", d->id); + snprintfz(var_name, CONFIG_MAX_NAME, "qos for %s", d->id); if(config_get_boolean("plugin:tc", var_name, enable_new_interfaces)) { RRDSET *st = rrdset_find_bytype(RRD_TYPE_TC, d->id); if(!st) { @@ -294,7 +283,7 @@ static void tc_device_commit(struct tc_device *d) tc_device_classes_cleanup(d); } -static void tc_device_set_class_name(struct tc_device *d, char *id, char *name) +static inline void tc_device_set_class_name(struct tc_device *d, char *id, char *name) { struct tc_class *c = tc_class_index_find(d, id, 0); if(c) { @@ -308,7 +297,7 @@ static void tc_device_set_class_name(struct tc_device *d, char *id, char *name) } } -static void tc_device_set_device_name(struct tc_device *d, char *name) { +static inline void 
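The per-device and per-class lookups now call avl_search() directly with a throwaway key on the stack, instead of the old iterator-plus-result-pointer dance; the key only needs the fields the comparator reads (the hash first, then the id). A generic sketch of the pattern, under the assumption that, as with tc_device and tc_class, the avl member is the first field so the (avl *) casts stay valid ('my_node' and its fields are illustrative):

    #include <stdint.h>
    #include <string.h>
    #include "avl.h"

    extern uint32_t simple_hash(const char *name);

    struct my_node {
        avl avl;                      /* must be first: the tree links live at the start of the struct */
        uint32_t hash;
        char *id;
        /* ... payload ... */
    };

    /* registered with avl_init(&index, my_node_compare) when the tree is created */
    static int my_node_compare(void *a, void *b) {
        struct my_node *x = a, *y = b;
        if (x->hash < y->hash) return -1;
        if (x->hash > y->hash) return  1;
        return strcmp(x->id, y->id);  /* hashes equal: fall back to the string */
    }

    static inline struct my_node *my_node_find(avl_tree *index, const char *id, uint32_t hash) {
        struct my_node tmp;           /* stack key: only hash and id are ever read */
        tmp.id   = (char *)id;
        tmp.hash = hash ? hash : simple_hash(tmp.id);
        return (struct my_node *)avl_search(index, (avl *)&tmp);
    }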
tc_device_set_device_name(struct tc_device *d, char *name) { if(d->name) free(d->name); d->name = NULL; @@ -318,7 +307,7 @@ static void tc_device_set_device_name(struct tc_device *d, char *name) { } } -static void tc_device_set_device_family(struct tc_device *d, char *family) { +static inline void tc_device_set_device_family(struct tc_device *d, char *family) { if(d->family) free(d->family); d->family = NULL; @@ -329,7 +318,7 @@ static void tc_device_set_device_family(struct tc_device *d, char *family) { // no need for null termination - it is already null } -static struct tc_device *tc_device_create(char *id) +static inline struct tc_device *tc_device_create(char *id) { struct tc_device *d = tc_device_index_find(id, 0); @@ -345,18 +334,7 @@ static struct tc_device *tc_device_create(char *id) d->id = strdup(id); d->hash = simple_hash(d->id); - d->classes_index.root = NULL; - d->classes_index.compar = tc_class_compare; - - int lock; -#ifdef AVL_LOCK_WITH_MUTEX - lock = pthread_mutex_init(&d->classes_index.mutex, NULL); -#else - lock = pthread_rwlock_init(&d->classes_index.rwlock, NULL); -#endif - if(lock != 0) - fatal("Failed to initialize plugin_tc mutex/rwlock, return code %d.", lock); - + avl_init(&d->classes_index, tc_class_compare); tc_device_index_add(d); if(!tc_device_root) { @@ -372,7 +350,7 @@ static struct tc_device *tc_device_create(char *id) return(d); } -static struct tc_class *tc_class_add(struct tc_device *n, char *id, char *parentid, char *leafid) +static inline struct tc_class *tc_class_add(struct tc_device *n, char *id, char *parentid, char *leafid) { struct tc_class *c = tc_class_index_find(n, id, 0); @@ -414,7 +392,7 @@ static struct tc_class *tc_class_add(struct tc_device *n, char *id, char *parent return(c); } -static void tc_device_free(struct tc_device *n) +static inline void tc_device_free(struct tc_device *n) { if(n->next) n->next->prev = n->prev; if(n->prev) n->prev->next = n->next; @@ -434,7 +412,7 @@ static void tc_device_free(struct tc_device *n) free(n); } -static void tc_device_free_all() +static inline void tc_device_free_all() { while(tc_device_root) tc_device_free(tc_device_root); @@ -455,7 +433,7 @@ static inline int tc_space(char c) { } } -static void tc_split_words(char *str, char **words, int max_words) { +static inline void tc_split_words(char *str, char **words, int max_words) { char *s = str; int i = 0; @@ -531,7 +509,7 @@ void *tc_main(void *ptr) struct tc_device *device = NULL; struct tc_class *class = NULL; - snprintf(buffer, TC_LINE_MAX, "exec %s %d", config_get("plugin:tc", "script to run to get tc values", PLUGINS_DIR "/tc-qos-helper.sh"), rrd_update_every); + snprintfz(buffer, TC_LINE_MAX, "exec %s %d", config_get("plugin:tc", "script to run to get tc values", PLUGINS_DIR "/tc-qos-helper.sh"), rrd_update_every); debug(D_TC_LOOP, "executing '%s'", buffer); // fp = popen(buffer, "r"); fp = mypopen(buffer, &tc_child_pid); @@ -586,7 +564,7 @@ void *tc_main(void *ptr) char leafbuf[20 + 1] = ""; if(leafid && leafid[strlen(leafid) - 1] == ':') { - strncpy(leafbuf, leafid, 20 - 1); + strncpyz(leafbuf, leafid, 20 - 1); strcat(leafbuf, "1"); leafid = leafbuf; } diff --git a/src/plugins_d.c b/src/plugins_d.c index b8524d99c..0ccbd36e4 100644 --- a/src/plugins_d.c +++ b/src/plugins_d.c @@ -181,7 +181,7 @@ void *pluginsd_worker_thread(void *arg) if(unlikely(st->debug)) debug(D_PLUGINSD, "PLUGINSD: '%s' is setting dimension %s/%s to %s", cd->fullfilename, st->id, dimension, value?value:"<nothing>"); - if(value) rrddim_set(st, dimension, atoll(value)); + 
if(value) rrddim_set(st, dimension, strtoll(value, NULL, 0)); count++; } @@ -311,11 +311,11 @@ void *pluginsd_worker_thread(void *arg) } long multiplier = 1; - if(multiplier_s && *multiplier_s) multiplier = atol(multiplier_s); + if(multiplier_s && *multiplier_s) multiplier = strtol(multiplier_s, NULL, 0); if(unlikely(!multiplier)) multiplier = 1; long divisor = 1; - if(likely(divisor_s && *divisor_s)) divisor = atol(divisor_s); + if(likely(divisor_s && *divisor_s)) divisor = strtol(divisor_s, NULL, 0); if(unlikely(!divisor)) divisor = 1; if(unlikely(!algorithm || !*algorithm)) algorithm = "absolute"; @@ -351,7 +351,7 @@ void *pluginsd_worker_thread(void *arg) #ifdef DETACH_PLUGINS_FROM_NETDATA else if(likely(hash == MYPID_HASH && !strcmp(s, "MYPID"))) { char *pid_s = words[1]; - pid_t pid = atol(pid_s); + pid_t pid = strtod(pid_s, NULL, 0); if(likely(pid)) cd->pid = pid; debug(D_PLUGINSD, "PLUGINSD: %s is on pid %d", cd->id, cd->pid); @@ -470,7 +470,7 @@ void *pluginsd_main(void *ptr) } char pluginname[CONFIG_MAX_NAME + 1]; - snprintf(pluginname, CONFIG_MAX_NAME, "%.*s", (int)(len - PLUGINSD_FILE_SUFFIX_LEN), file->d_name); + snprintfz(pluginname, CONFIG_MAX_NAME, "%.*s", (int)(len - PLUGINSD_FILE_SUFFIX_LEN), file->d_name); int enabled = config_get_boolean("plugins", pluginname, automatic_run); if(unlikely(!enabled)) { @@ -493,17 +493,17 @@ void *pluginsd_main(void *ptr) cd = calloc(sizeof(struct plugind), 1); if(unlikely(!cd)) fatal("Cannot allocate memory for plugin."); - snprintf(cd->id, CONFIG_MAX_NAME, "plugin:%s", pluginname); + snprintfz(cd->id, CONFIG_MAX_NAME, "plugin:%s", pluginname); - strncpy(cd->filename, file->d_name, FILENAME_MAX); - snprintf(cd->fullfilename, FILENAME_MAX, "%s/%s", dir_name, cd->filename); + strncpyz(cd->filename, file->d_name, FILENAME_MAX); + snprintfz(cd->fullfilename, FILENAME_MAX, "%s/%s", dir_name, cd->filename); cd->enabled = enabled; cd->update_every = (int) config_get_number(cd->id, "update every", rrd_update_every); cd->started_t = time(NULL); char *def = ""; - snprintf(cd->cmd, PLUGINSD_CMD_MAX, "exec %s %d %s", cd->fullfilename, cd->update_every, config_get(cd->id, "command options", def)); + snprintfz(cd->cmd, PLUGINSD_CMD_MAX, "exec %s %d %s", cd->fullfilename, cd->update_every, config_get(cd->id, "command options", def)); // link it if(likely(pluginsd_root)) cd->next = pluginsd_root; diff --git a/src/popen.c b/src/popen.c index 882a4cc5a..06f27c0b7 100644 --- a/src/popen.c +++ b/src/popen.c @@ -114,10 +114,27 @@ FILE *mypopen(const char *command, pid_t *pidptr) #endif // reset all signals - for (i = 1 ; i < 65 ;i++) if(i != SIGSEGV) signal(i, SIG_DFL); + { + sigset_t sigset; + sigfillset(&sigset); + + if(pthread_sigmask(SIG_UNBLOCK, &sigset, NULL) == -1) { + error("Could not block signals for threads"); + } + // We only need to reset ignored signals. + // Signals with signal handlers are reset by default. 
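The protocol parser moves from atoll()/atol() to strtoll()/strtol() with base 0, so values sent by plugins may be decimal, hex or octal, and a failed parse is distinguishable from a literal zero; the patch itself passes NULL as the end pointer, and the strtod(pid_s, NULL, 0) call in the DETACH_PLUGINS_FROM_NETDATA block (normally compiled out) looks like it was meant to be strtol(). A small sketch of the error-checked form the strto* family makes possible (the wrapper is hypothetical):

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* hypothetical wrapper: parse a collected value, report whether it was valid */
    static long long parse_collected_value(const char *value, int *ok) {
        char *end = NULL;
        errno = 0;
        long long v = strtoll(value, &end, 0);     /* base 0: "123", "0x7b" and "0173" all work */
        *ok = (end != value && errno == 0);
        return v;
    }

    int main(void) {
        int ok;
        printf("%lld ok=%d\n", parse_collected_value("12345", &ok), ok);  /* 12345 ok=1 */
        printf("%lld ok=%d\n", parse_collected_value("junk",  &ok), ok);  /* 0 ok=0 */
        return 0;
    }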
+ struct sigaction sa; + sigemptyset(&sa.sa_mask); + sa.sa_handler = SIG_DFL; + sa.sa_flags = 0; + if(sigaction(SIGPIPE, &sa, NULL) == -1) { + error("Failed to change signal handler for SIGTERM"); + } + } + info("executing command: '%s' on pid %d.", command, getpid()); - execl("/bin/sh", "sh", "-c", command, NULL); + execl("/bin/sh", "sh", "-c", command, NULL); exit(1); } diff --git a/src/proc_diskstats.c b/src/proc_diskstats.c index c2b84aae1..c62a1351c 100644 --- a/src/proc_diskstats.c +++ b/src/proc_diskstats.c @@ -15,17 +15,21 @@ #include "rrd.h" #include "plugin_proc.h" +#include "proc_self_mountinfo.h" + #define RRD_TYPE_DISK "disk" struct disk { unsigned long major; unsigned long minor; int partition_id; // -1 = this is not a partition + char *family; struct disk *next; } *disk_root = NULL; struct disk *get_disk(unsigned long major, unsigned long minor) { static char path_find_block_device_partition[FILENAME_MAX + 1] = ""; + static struct mountinfo *mountinfo_root = NULL; struct disk *d; // search for it in our RAM list. @@ -42,8 +46,8 @@ struct disk *get_disk(unsigned long major, unsigned long minor) { if(unlikely(!path_find_block_device_partition[0])) { char filename[FILENAME_MAX + 1]; - snprintf(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/sys/dev/block/%lu:%lu/partition"); - snprintf(path_find_block_device_partition, FILENAME_MAX, "%s", config_get("plugin:proc:/proc/diskstats", "path to get block device partition", filename)); + snprintfz(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/sys/dev/block/%lu:%lu/partition"); + snprintfz(path_find_block_device_partition, FILENAME_MAX, "%s", config_get("plugin:proc:/proc/diskstats", "path to get block device partition", filename)); } // not found @@ -68,7 +72,7 @@ struct disk *get_disk(unsigned long major, unsigned long minor) { // find if it is a partition // by reading /sys/dev/block/MAJOR:MINOR/partition char buffer[FILENAME_MAX + 1]; - snprintf(buffer, FILENAME_MAX, path_find_block_device_partition, major, minor); + snprintfz(buffer, FILENAME_MAX, path_find_block_device_partition, major, minor); int fd = open(buffer, O_RDONLY, 0666); if(likely(fd != -1)) { @@ -81,6 +85,28 @@ struct disk *get_disk(unsigned long major, unsigned long minor) { } // if the /partition file does not exist, it is a disk, not a partition + // ------------------------------------------------------------------------ + // check if we can find its mount point + + // mountinfo_find() can be called with NULL mountinfo_root + struct mountinfo *mi = mountinfo_find(mountinfo_root, d->major, d->minor); + if(unlikely(!mi)) { + // mountinfo_free() can be called with NULL mountinfo_root + mountinfo_free(mountinfo_root); + + // re-read mountinfo in case something changed + mountinfo_root = mountinfo_read(); + + // search again for this disk + mi = mountinfo_find(mountinfo_root, d->major, d->minor); + } + + if(mi) + d->family = strdup(mi->mount_point); + // no need to check for NULL + else + d->family = NULL; + return d; } @@ -102,15 +128,15 @@ int do_proc_diskstats(int update_every, unsigned long long dt) { if(!ff) { char filename[FILENAME_MAX + 1]; - snprintf(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/diskstats"); + snprintfz(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/diskstats"); ff = procfile_open(config_get("plugin:proc:/proc/diskstats", "filename to monitor", filename), " \t", PROCFILE_FLAG_DEFAULT); } if(!ff) return 1; if(!path_to_get_hw_sector_size[0]) { char filename[FILENAME_MAX + 1]; - snprintf(filename, 
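get_disk() now resolves a chart 'family' for each block device from /proc/self/mountinfo: the parsed list is cached and only re-read when a major:minor pair is not found (covering mounts that appeared after the cache was built), and disks without a mount point keep the device name as the family. A condensed sketch of that lookup, reusing the helper names from the patch (the wrapper function itself is hypothetical):

    #include <stdlib.h>
    #include "proc_self_mountinfo.h"

    static struct mountinfo *mountinfo_cache = NULL;

    /* return the mount point for major:minor, or 'fallback' (the device name) */
    static const char *disk_family(unsigned long major, unsigned long minor, const char *fallback) {
        struct mountinfo *mi = mountinfo_find(mountinfo_cache, major, minor);
        if (!mi) {
            mountinfo_free(mountinfo_cache);       /* both helpers accept a NULL root */
            mountinfo_cache = mountinfo_read();    /* re-read: something may have been mounted */
            mi = mountinfo_find(mountinfo_cache, major, minor);
        }
        return mi ? mi->mount_point : fallback;
    }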
FILENAME_MAX, "%s%s", global_host_prefix, "/sys/block/%s/queue/hw_sector_size"); - snprintf(path_to_get_hw_sector_size, FILENAME_MAX, "%s", config_get("plugin:proc:/proc/diskstats", "path to get h/w sector size", filename)); + snprintfz(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/sys/block/%s/queue/hw_sector_size"); + snprintfz(path_to_get_hw_sector_size, FILENAME_MAX, "%s", config_get("plugin:proc:/proc/diskstats", "path to get h/w sector size", filename)); } ff = procfile_readall(ff); @@ -189,6 +215,9 @@ int do_proc_diskstats(int update_every, unsigned long long dt) { else def_enabled = 0; + char *family = d->family; + if(!family) family = disk; + /* switch(major) { case 9: // MDs @@ -316,7 +345,7 @@ int do_proc_diskstats(int update_every, unsigned long long dt) { // check which charts are enabled for this disk { char var_name[4096 + 1]; - snprintf(var_name, 4096, "plugin:proc:/proc/diskstats:%s", disk); + snprintfz(var_name, 4096, "plugin:proc:/proc/diskstats:%s", disk); def_enabled = config_get_boolean_ondemand(var_name, "enabled", def_enabled); if(def_enabled == CONFIG_ONDEMAND_NO) continue; if(def_enabled == CONFIG_ONDEMAND_ONDEMAND && !reads && !writes) continue; @@ -354,13 +383,12 @@ int do_proc_diskstats(int update_every, unsigned long long dt) { char tf[FILENAME_MAX + 1], *t; char ssfilename[FILENAME_MAX + 1]; - strncpy(tf, disk, FILENAME_MAX); - tf[FILENAME_MAX] = '\0'; + strncpyz(tf, disk, FILENAME_MAX); // replace all / with ! while((t = strchr(tf, '/'))) *t = '!'; - snprintf(ssfilename, FILENAME_MAX, path_to_get_hw_sector_size, tf); + snprintfz(ssfilename, FILENAME_MAX, path_to_get_hw_sector_size, tf); FILE *fpss = fopen(ssfilename, "r"); if(fpss) { char ssbuffer[1025]; @@ -379,7 +407,7 @@ int do_proc_diskstats(int update_every, unsigned long long dt) { } else error("Cannot read sector size for device %s from %s. 
Assuming 512.", disk, ssfilename); - st = rrdset_create(RRD_TYPE_DISK, disk, NULL, disk, "disk.io", "Disk I/O Bandwidth", "kilobytes/s", 2000, update_every, RRDSET_TYPE_AREA); + st = rrdset_create(RRD_TYPE_DISK, disk, NULL, family, "disk.io", "Disk I/O Bandwidth", "kilobytes/s", 2000, update_every, RRDSET_TYPE_AREA); rrddim_add(st, "reads", NULL, sector_size, 1024, RRDDIM_INCREMENTAL); rrddim_add(st, "writes", NULL, sector_size * -1, 1024, RRDDIM_INCREMENTAL); @@ -396,7 +424,7 @@ int do_proc_diskstats(int update_every, unsigned long long dt) { if(ddo_ops) { st = rrdset_find_bytype("disk_ops", disk); if(!st) { - st = rrdset_create("disk_ops", disk, NULL, disk, "disk.ops", "Disk Completed I/O Operations", "operations/s", 2001, update_every, RRDSET_TYPE_LINE); + st = rrdset_create("disk_ops", disk, NULL, family, "disk.ops", "Disk Completed I/O Operations", "operations/s", 2001, update_every, RRDSET_TYPE_LINE); st->isdetail = 1; rrddim_add(st, "reads", NULL, 1, 1, RRDDIM_INCREMENTAL); @@ -414,7 +442,7 @@ int do_proc_diskstats(int update_every, unsigned long long dt) { if(ddo_qops) { st = rrdset_find_bytype("disk_qops", disk); if(!st) { - st = rrdset_create("disk_qops", disk, NULL, disk, "disk.qops", "Disk Current I/O Operations", "operations", 2002, update_every, RRDSET_TYPE_LINE); + st = rrdset_create("disk_qops", disk, NULL, family, "disk.qops", "Disk Current I/O Operations", "operations", 2002, update_every, RRDSET_TYPE_LINE); st->isdetail = 1; rrddim_add(st, "operations", NULL, 1, 1, RRDDIM_ABSOLUTE); @@ -430,7 +458,7 @@ int do_proc_diskstats(int update_every, unsigned long long dt) { if(ddo_backlog) { st = rrdset_find_bytype("disk_backlog", disk); if(!st) { - st = rrdset_create("disk_backlog", disk, NULL, disk, "disk.backlog", "Disk Backlog", "backlog (ms)", 2003, update_every, RRDSET_TYPE_AREA); + st = rrdset_create("disk_backlog", disk, NULL, family, "disk.backlog", "Disk Backlog", "backlog (ms)", 2003, update_every, RRDSET_TYPE_AREA); st->isdetail = 1; rrddim_add(st, "backlog", NULL, 1, 10, RRDDIM_INCREMENTAL); @@ -446,7 +474,7 @@ int do_proc_diskstats(int update_every, unsigned long long dt) { if(ddo_util) { st = rrdset_find_bytype("disk_util", disk); if(!st) { - st = rrdset_create("disk_util", disk, NULL, disk, "disk.util", "Disk Utilization Time", "% of time working", 2004, update_every, RRDSET_TYPE_AREA); + st = rrdset_create("disk_util", disk, NULL, family, "disk.util", "Disk Utilization Time", "% of time working", 2004, update_every, RRDSET_TYPE_AREA); st->isdetail = 1; rrddim_add(st, "utilization", NULL, 1, 10, RRDDIM_INCREMENTAL); @@ -462,7 +490,7 @@ int do_proc_diskstats(int update_every, unsigned long long dt) { if(ddo_mops) { st = rrdset_find_bytype("disk_mops", disk); if(!st) { - st = rrdset_create("disk_mops", disk, NULL, disk, "disk.mops", "Disk Merged Operations", "merged operations/s", 2021, update_every, RRDSET_TYPE_LINE); + st = rrdset_create("disk_mops", disk, NULL, family, "disk.mops", "Disk Merged Operations", "merged operations/s", 2021, update_every, RRDSET_TYPE_LINE); st->isdetail = 1; rrddim_add(st, "reads", NULL, 1, 1, RRDDIM_INCREMENTAL); @@ -480,7 +508,7 @@ int do_proc_diskstats(int update_every, unsigned long long dt) { if(ddo_iotime) { st = rrdset_find_bytype("disk_iotime", disk); if(!st) { - st = rrdset_create("disk_iotime", disk, NULL, disk, "disk.iotime", "Disk Total I/O Time", "milliseconds/s", 2022, update_every, RRDSET_TYPE_LINE); + st = rrdset_create("disk_iotime", disk, NULL, family, "disk.iotime", "Disk Total I/O Time", "milliseconds/s", 2022, 
update_every, RRDSET_TYPE_LINE); st->isdetail = 1; rrddim_add(st, "reads", NULL, 1, 1, RRDDIM_INCREMENTAL); @@ -501,7 +529,7 @@ int do_proc_diskstats(int update_every, unsigned long long dt) { if(ddo_iotime && ddo_ops) { st = rrdset_find_bytype("disk_await", disk); if(!st) { - st = rrdset_create("disk_await", disk, NULL, disk, "disk.await", "Average Completed I/O Operation Time", "ms per operation", 2005, update_every, RRDSET_TYPE_LINE); + st = rrdset_create("disk_await", disk, NULL, family, "disk.await", "Average Completed I/O Operation Time", "ms per operation", 2005, update_every, RRDSET_TYPE_LINE); st->isdetail = 1; rrddim_add(st, "reads", NULL, 1, 1, RRDDIM_ABSOLUTE); @@ -517,7 +545,7 @@ int do_proc_diskstats(int update_every, unsigned long long dt) { if(ddo_io && ddo_ops) { st = rrdset_find_bytype("disk_avgsz", disk); if(!st) { - st = rrdset_create("disk_avgsz", disk, NULL, disk, "disk.avgsz", "Average Completed I/O Operation Bandwidth", "kilobytes per operation", 2006, update_every, RRDSET_TYPE_AREA); + st = rrdset_create("disk_avgsz", disk, NULL, family, "disk.avgsz", "Average Completed I/O Operation Bandwidth", "kilobytes per operation", 2006, update_every, RRDSET_TYPE_AREA); st->isdetail = 1; rrddim_add(st, "reads", NULL, sector_size, 1024, RRDDIM_ABSOLUTE); @@ -533,7 +561,7 @@ int do_proc_diskstats(int update_every, unsigned long long dt) { if(ddo_util && ddo_ops) { st = rrdset_find_bytype("disk_svctm", disk); if(!st) { - st = rrdset_create("disk_svctm", disk, NULL, disk, "disk.svctm", "Average Service Time", "ms per operation", 2007, update_every, RRDSET_TYPE_LINE); + st = rrdset_create("disk_svctm", disk, NULL, family, "disk.svctm", "Average Service Time", "ms per operation", 2007, update_every, RRDSET_TYPE_LINE); st->isdetail = 1; rrddim_add(st, "svctm", NULL, 1, 1, RRDDIM_ABSOLUTE); diff --git a/src/proc_interrupts.c b/src/proc_interrupts.c index 6704ef1a5..ad00c2022 100644 --- a/src/proc_interrupts.c +++ b/src/proc_interrupts.c @@ -13,29 +13,34 @@ #include "plugin_proc.h" #include "log.h" -#define MAX_INTERRUPTS 256 -#define MAX_INTERRUPT_CPUS 256 #define MAX_INTERRUPT_NAME 50 struct interrupt { int used; char *id; char name[MAX_INTERRUPT_NAME + 1]; - unsigned long long value[MAX_INTERRUPT_CPUS]; unsigned long long total; + unsigned long long value[]; }; -static struct interrupt *alloc_interrupts(int lines) { +// since each interrupt is variable in size +// we use this to calculate its record size +#define recordsize(cpus) (sizeof(struct interrupt) + (cpus * sizeof(unsigned long long))) + +// given a base, get a pointer to each record +#define irrindex(base, line, cpus) ((struct interrupt *)&((char *)(base))[line * recordsize(cpus)]) + +static inline struct interrupt *get_interrupts_array(int lines, int cpus) { static struct interrupt *irrs = NULL; - static int alloced = 0; + static int allocated = 0; - if(lines < alloced) return irrs; + if(lines < allocated) return irrs; else { - irrs = (struct interrupt *)realloc(irrs, lines * sizeof(struct interrupt)); + irrs = (struct interrupt *)realloc(irrs, lines * recordsize(cpus)); if(!irrs) fatal("Cannot allocate memory for %d interrupts", lines); - alloced = lines; + allocated = lines; } return irrs; @@ -44,7 +49,6 @@ static struct interrupt *alloc_interrupts(int lines) { int do_proc_interrupts(int update_every, unsigned long long dt) { static procfile *ff = NULL; static int cpus = -1, do_per_core = -1; - struct interrupt *irrs = NULL; if(dt) {}; @@ -53,7 +57,7 @@ int do_proc_interrupts(int update_every, unsigned long long dt) { 
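The proc_interrupts.c change above (and the identical one in proc_softirqs.c further down) drops the fixed MAX_INTERRUPTS / MAX_INTERRUPT_CPUS limits: each record now ends in a C99 flexible array member sized by the number of CPUs found at runtime, the whole table lives in one realloc()ed block, and records are addressed by byte offset through the recordsize()/irrindex() macros. A minimal self-contained sketch of the same layout, with illustrative names that are not part of the netdata source:

    #include <stdio.h>
    #include <stdlib.h>

    /* header plus a runtime-sized tail of per-CPU counters */
    struct rec {
        int used;
        unsigned long long total;
        unsigned long long value[];                 /* C99 flexible array member */
    };

    #define RECORDSIZE(cpus)  (sizeof(struct rec) + (cpus) * sizeof(unsigned long long))
    #define RECINDEX(base, line, cpus) \
        ((struct rec *)&((char *)(base))[(line) * RECORDSIZE(cpus)])

    int main(void) {
        int lines = 4, cpus = 8;

        /* one contiguous allocation holds every record, like get_interrupts_array() */
        struct rec *base = malloc(lines * RECORDSIZE(cpus));
        if(!base) return 1;

        for(int l = 0; l < lines; l++) {
            struct rec *r = RECINDEX(base, l, cpus); /* byte-offset addressing */
            r->used = 1;
            r->total = 0;
            for(int c = 0; c < cpus; c++) {
                r->value[c] = (unsigned long long)(l * 100 + c);
                r->total += r->value[c];
            }
        }

        printf("record size for %d cpus: %zu bytes\n", cpus, RECORDSIZE(cpus));
        printf("line 2 total: %llu\n", RECINDEX(base, 2, cpus)->total);
        free(base);
        return 0;
    }

The trade-off is the same as in the patch: index arithmetic replaces plain array indexing, but the per-CPU width no longer has to be guessed at compile time.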
if(!ff) { char filename[FILENAME_MAX + 1]; - snprintf(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/interrupts"); + snprintfz(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/interrupts"); ff = procfile_open(config_get("plugin:proc:/proc/interrupts", "filename to monitor", filename), " \t", PROCFILE_FLAG_DEFAULT); } if(!ff) return 1; @@ -76,8 +80,6 @@ int do_proc_interrupts(int update_every, unsigned long long dt) { if(strncmp(procfile_lineword(ff, 0, w), "CPU", 3) == 0) cpus++; } - - if(cpus > MAX_INTERRUPT_CPUS) cpus = MAX_INTERRUPT_CPUS; } if(!cpus) { @@ -86,12 +88,12 @@ int do_proc_interrupts(int update_every, unsigned long long dt) { } // allocate the size we need; - irrs = alloc_interrupts(lines); + irrs = get_interrupts_array(lines, cpus); irrs[0].used = 0; // loop through all lines for(l = 1; l < lines ;l++) { - struct interrupt *irr = &irrs[l]; + struct interrupt *irr = irrindex(irrs, l, cpus); irr->used = 0; irr->total = 0; @@ -116,18 +118,15 @@ int do_proc_interrupts(int update_every, unsigned long long dt) { } if(isdigit(irr->id[0]) && (uint32_t)(cpus + 2) < words) { - strncpy(irr->name, procfile_lineword(ff, l, words - 1), MAX_INTERRUPT_NAME); - irr->name[MAX_INTERRUPT_NAME] = '\0'; + strncpyz(irr->name, procfile_lineword(ff, l, words - 1), MAX_INTERRUPT_NAME); int nlen = strlen(irr->name); if(nlen < (MAX_INTERRUPT_NAME-1)) { irr->name[nlen] = '_'; - strncpy(&irr->name[nlen + 1], irr->id, MAX_INTERRUPT_NAME - nlen); - irr->name[MAX_INTERRUPT_NAME] = '\0'; + strncpyz(&irr->name[nlen + 1], irr->id, MAX_INTERRUPT_NAME - nlen); } } else { - strncpy(irr->name, irr->id, MAX_INTERRUPT_NAME); - irr->name[MAX_INTERRUPT_NAME] = '\0'; + strncpyz(irr->name, irr->id, MAX_INTERRUPT_NAME); } irr->used = 1; @@ -142,15 +141,17 @@ int do_proc_interrupts(int update_every, unsigned long long dt) { st = rrdset_create("system", "interrupts", NULL, "interrupts", NULL, "System interrupts", "interrupts/s", 1000, update_every, RRDSET_TYPE_STACKED); for(l = 0; l < lines ;l++) { - if(!irrs[l].used) continue; - rrddim_add(st, irrs[l].id, irrs[l].name, 1, 1, RRDDIM_INCREMENTAL); + struct interrupt *irr = irrindex(irrs, l, cpus); + if(!irr->used) continue; + rrddim_add(st, irr->id, irr->name, 1, 1, RRDDIM_INCREMENTAL); } } else rrdset_next(st); for(l = 0; l < lines ;l++) { - if(!irrs[l].used) continue; - rrddim_set(st, irrs[l].id, irrs[l].total); + struct interrupt *irr = irrindex(irrs, l, cpus); + if(!irr->used) continue; + rrddim_set(st, irr->id, irr->total); } rrdset_done(st); @@ -158,26 +159,28 @@ int do_proc_interrupts(int update_every, unsigned long long dt) { int c; for(c = 0; c < cpus ; c++) { - char id[256]; - snprintf(id, 256, "cpu%d_interrupts", c); + char id[256+1]; + snprintfz(id, 256, "cpu%d_interrupts", c); st = rrdset_find_bytype("cpu", id); if(!st) { - char name[256], title[256]; - snprintf(name, 256, "cpu%d_interrupts", c); - snprintf(title, 256, "CPU%d Interrupts", c); + char name[256+1], title[256+1]; + snprintfz(name, 256, "cpu%d_interrupts", c); + snprintfz(title, 256, "CPU%d Interrupts", c); st = rrdset_create("cpu", id, name, "interrupts", "cpu.interrupts", title, "interrupts/s", 2000 + c, update_every, RRDSET_TYPE_STACKED); for(l = 0; l < lines ;l++) { - if(!irrs[l].used) continue; - rrddim_add(st, irrs[l].id, irrs[l].name, 1, 1, RRDDIM_INCREMENTAL); + struct interrupt *irr = irrindex(irrs, l, cpus); + if(!irr->used) continue; + rrddim_add(st, irr->id, irr->name, 1, 1, RRDDIM_INCREMENTAL); } } else rrdset_next(st); for(l = 0; l < lines ;l++) { - 
if(!irrs[l].used) continue; - rrddim_set(st, irrs[l].id, irrs[l].value[c]); + struct interrupt *irr = irrindex(irrs, l, cpus); + if(!irr->used) continue; + rrddim_set(st, irr->id, irr->value[c]); } rrdset_done(st); } diff --git a/src/proc_loadavg.c b/src/proc_loadavg.c index cd7edc832..c8e893b99 100644 --- a/src/proc_loadavg.c +++ b/src/proc_loadavg.c @@ -20,7 +20,7 @@ int do_proc_loadavg(int update_every, unsigned long long dt) { if(!ff) { char filename[FILENAME_MAX + 1]; - snprintf(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/loadavg"); + snprintfz(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/loadavg"); ff = procfile_open(config_get("plugin:proc:/proc/loadavg", "filename to monitor", filename), " \t,:|/", PROCFILE_FLAG_DEFAULT); } if(!ff) return 1; diff --git a/src/proc_meminfo.c b/src/proc_meminfo.c index dbd43369f..611b4ed21 100644 --- a/src/proc_meminfo.c +++ b/src/proc_meminfo.c @@ -32,7 +32,7 @@ int do_proc_meminfo(int update_every, unsigned long long dt) { if(!ff) { char filename[FILENAME_MAX + 1]; - snprintf(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/meminfo"); + snprintfz(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/meminfo"); ff = procfile_open(config_get("plugin:proc:/proc/meminfo", "filename to monitor", filename), " \t:", PROCFILE_FLAG_DEFAULT); } if(!ff) return 1; diff --git a/src/proc_net_dev.c b/src/proc_net_dev.c index 5070ab817..12d8078c7 100644 --- a/src/proc_net_dev.c +++ b/src/proc_net_dev.c @@ -20,7 +20,7 @@ int do_proc_net_dev(int update_every, unsigned long long dt) { if(!ff) { char filename[FILENAME_MAX + 1]; - snprintf(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/net/dev"); + snprintfz(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/net/dev"); ff = procfile_open(config_get("plugin:proc:/proc/net/dev", "filename to monitor", filename), " \t,:|", PROCFILE_FLAG_DEFAULT); } if(!ff) return 1; @@ -86,7 +86,7 @@ int do_proc_net_dev(int update_every, unsigned long long dt) { // check if the user wants it { char var_name[512 + 1]; - snprintf(var_name, 512, "plugin:proc:/proc/net/dev:%s", iface); + snprintfz(var_name, 512, "plugin:proc:/proc/net/dev:%s", iface); default_enable = config_get_boolean_ondemand(var_name, "enabled", default_enable); if(default_enable == CONFIG_ONDEMAND_NO) continue; if(default_enable == CONFIG_ONDEMAND_ONDEMAND && !rbytes && !tbytes) continue; diff --git a/src/proc_net_ip_vs_stats.c b/src/proc_net_ip_vs_stats.c index 8c2ece7d3..ffb5da7b5 100644 --- a/src/proc_net_ip_vs_stats.c +++ b/src/proc_net_ip_vs_stats.c @@ -25,7 +25,7 @@ int do_proc_net_ip_vs_stats(int update_every, unsigned long long dt) { if(!ff) { char filename[FILENAME_MAX + 1]; - snprintf(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/net/ip_vs_stats"); + snprintfz(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/net/ip_vs_stats"); ff = procfile_open(config_get("plugin:proc:/proc/net/ip_vs_stats", "filename to monitor", filename), " \t,:|", PROCFILE_FLAG_DEFAULT); } if(!ff) return 1; diff --git a/src/proc_net_netstat.c b/src/proc_net_netstat.c index 859cf9053..c8c12c1db 100644 --- a/src/proc_net_netstat.c +++ b/src/proc_net_netstat.c @@ -27,7 +27,7 @@ int do_proc_net_netstat(int update_every, unsigned long long dt) { if(!ff) { char filename[FILENAME_MAX + 1]; - snprintf(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/net/netstat"); + snprintfz(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/net/netstat"); ff = 
procfile_open(config_get("plugin:proc:/proc/net/netstat", "filename to monitor", filename), " \t:", PROCFILE_FLAG_DEFAULT); } if(!ff) return 1; diff --git a/src/proc_net_rpc_nfsd.c b/src/proc_net_rpc_nfsd.c index 12949f5d2..6c6dd7066 100644 --- a/src/proc_net_rpc_nfsd.c +++ b/src/proc_net_rpc_nfsd.c @@ -142,7 +142,7 @@ int do_proc_net_rpc_nfsd(int update_every, unsigned long long dt) { if(!ff) { char filename[FILENAME_MAX + 1]; - snprintf(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/net/rpc/nfsd"); + snprintfz(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/net/rpc/nfsd"); ff = procfile_open(config_get("plugin:proc:/proc/net/rpc/nfsd", "filename to monitor", filename), " \t", PROCFILE_FLAG_DEFAULT); } if(!ff) return 1; diff --git a/src/proc_net_snmp.c b/src/proc_net_snmp.c index 742b4cfc7..e0ac6a263 100644 --- a/src/proc_net_snmp.c +++ b/src/proc_net_snmp.c @@ -36,7 +36,7 @@ int do_proc_net_snmp(int update_every, unsigned long long dt) { if(!ff) { char filename[FILENAME_MAX + 1]; - snprintf(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/net/snmp"); + snprintfz(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/net/snmp"); ff = procfile_open(config_get("plugin:proc:/proc/net/snmp", "filename to monitor", filename), " \t:", PROCFILE_FLAG_DEFAULT); } if(!ff) return 1; diff --git a/src/proc_net_snmp6.c b/src/proc_net_snmp6.c index e7fadf573..885835a8c 100644 --- a/src/proc_net_snmp6.c +++ b/src/proc_net_snmp6.c @@ -254,7 +254,7 @@ int do_proc_net_snmp6(int update_every, unsigned long long dt) { if(!ff) { char filename[FILENAME_MAX + 1]; - snprintf(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/net/snmp6"); + snprintfz(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/net/snmp6"); ff = procfile_open(config_get("plugin:proc:/proc/net/snmp6", "filename to monitor", filename), " \t:", PROCFILE_FLAG_DEFAULT); } if(!ff) return 1; diff --git a/src/proc_net_stat_conntrack.c b/src/proc_net_stat_conntrack.c index f7e5c45b2..7d754a1d9 100644 --- a/src/proc_net_stat_conntrack.c +++ b/src/proc_net_stat_conntrack.c @@ -31,7 +31,7 @@ int do_proc_net_stat_conntrack(int update_every, unsigned long long dt) { if(!ff) { char filename[FILENAME_MAX + 1]; - snprintf(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/net/stat/nf_conntrack"); + snprintfz(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/net/stat/nf_conntrack"); ff = procfile_open(config_get("plugin:proc:/proc/net/stat/nf_conntrack", "filename to monitor", filename), " \t:", PROCFILE_FLAG_DEFAULT); } if(!ff) return 1; diff --git a/src/proc_net_stat_synproxy.c b/src/proc_net_stat_synproxy.c index 62296d78a..508b7d3b4 100644 --- a/src/proc_net_stat_synproxy.c +++ b/src/proc_net_stat_synproxy.c @@ -28,7 +28,7 @@ int do_proc_net_stat_synproxy(int update_every, unsigned long long dt) { if(!ff) { char filename[FILENAME_MAX + 1]; - snprintf(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/net/stat/synproxy"); + snprintfz(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/net/stat/synproxy"); ff = procfile_open(config_get("plugin:proc:/proc/net/stat/synproxy", "filename to monitor", filename), " \t,:|", PROCFILE_FLAG_DEFAULT); } if(!ff) return 1; diff --git a/src/proc_self_mountinfo.c b/src/proc_self_mountinfo.c new file mode 100644 index 000000000..45630b4c0 --- /dev/null +++ b/src/proc_self_mountinfo.c @@ -0,0 +1,231 @@ +#ifdef HAVE_CONFIG_H +#include <config.h> +#endif +#include <stdio.h> +#include <stdlib.h> +#include <string.h> +#include <sys/types.h> 
+#include <sys/stat.h> +#include <fcntl.h> + +#include "common.h" +#include "log.h" +#include "appconfig.h" + +#include "proc_self_mountinfo.h" + +// find the mount info with the given major:minor +// in the supplied linked list of mountinfo structures +struct mountinfo *mountinfo_find(struct mountinfo *root, unsigned long major, unsigned long minor) { + struct mountinfo *mi; + + for(mi = root; mi ; mi = mi->next) + if(mi->major == major && mi->minor == minor) + return mi; + + return NULL; +} + +// find the mount info with the given filesystem and mount_source +// in the supplied linked list of mountinfo structures +struct mountinfo *mountinfo_find_by_filesystem_mount_source(struct mountinfo *root, const char *filesystem, const char *mount_source) { + struct mountinfo *mi; + uint32_t filesystem_hash = simple_hash(filesystem), mount_source_hash = simple_hash(mount_source); + + for(mi = root; mi ; mi = mi->next) + if(mi->filesystem + && mi->mount_source + && mi->filesystem_hash == filesystem_hash + && mi->mount_source_hash == mount_source_hash + && !strcmp(mi->filesystem, filesystem) + && !strcmp(mi->mount_source, mount_source)) + return mi; + + return NULL; +} + +struct mountinfo *mountinfo_find_by_filesystem_super_option(struct mountinfo *root, const char *filesystem, const char *super_options) { + struct mountinfo *mi; + uint32_t filesystem_hash = simple_hash(filesystem); + + size_t solen = strlen(super_options); + + for(mi = root; mi ; mi = mi->next) + if(mi->filesystem + && mi->super_options + && mi->filesystem_hash == filesystem_hash + && !strcmp(mi->filesystem, filesystem)) { + + // super_options is a comma separated list + char *s = mi->super_options, *e; + while(*s) { + e = ++s; + while(*e && *e != ',') e++; + + size_t len = e - s; + if(len == solen && !strncmp(s, super_options, len)) + return mi; + + if(*e == ',') s = ++e; + else s = e; + } + } + + return NULL; +} + + +// free a linked list of mountinfo structures +void mountinfo_free(struct mountinfo *mi) { + if(unlikely(!mi)) + return; + + if(likely(mi->next)) + mountinfo_free(mi->next); + + if(mi->root) free(mi->root); + if(mi->mount_point) free(mi->mount_point); + if(mi->mount_options) free(mi->mount_options); + +/* + if(mi->optional_fields_count) { + int i; + for(i = 0; i < mi->optional_fields_count ; i++) + free(*mi->optional_fields[i]); + } + free(mi->optional_fields); +*/ + free(mi->filesystem); + free(mi->mount_source); + free(mi->super_options); + free(mi); +} + +// read the whole mountinfo into a linked list +struct mountinfo *mountinfo_read() { + procfile *ff = NULL; + + char filename[FILENAME_MAX + 1]; + snprintfz(filename, FILENAME_MAX, "%s/proc/self/mountinfo", global_host_prefix); + ff = procfile_open(filename, " \t", PROCFILE_FLAG_DEFAULT); + if(!ff) { + snprintfz(filename, FILENAME_MAX, "%s/proc/1/mountinfo", global_host_prefix); + ff = procfile_open(filename, " \t", PROCFILE_FLAG_DEFAULT); + if(!ff) return NULL; + } + + ff = procfile_readall(ff); + if(!ff) return NULL; + + struct mountinfo *root = NULL, *last = NULL, *mi = NULL; + + unsigned long l, lines = procfile_lines(ff); + for(l = 0; l < lines ;l++) { + if(procfile_linewords(ff, l) < 5) + continue; + + mi = malloc(sizeof(struct mountinfo)); + if(unlikely(!mi)) fatal("Cannot allocate memory for mountinfo"); + + if(unlikely(!root)) + root = last = mi; + else + last->next = mi; + + last = mi; + mi->next = NULL; + + unsigned long w = 0; + mi->id = strtoul(procfile_lineword(ff, l, w), NULL, 10); w++; + mi->parentid = strtoul(procfile_lineword(ff, l, w), NULL, 
10); w++; + + char *major = procfile_lineword(ff, l, w), *minor; w++; + for(minor = major; *minor && *minor != ':' ;minor++) ; + *minor = '\0'; + minor++; + + mi->major = strtoul(major, NULL, 10); + mi->minor = strtoul(minor, NULL, 10); + + mi->root = strdup(procfile_lineword(ff, l, w)); w++; + if(unlikely(!mi->root)) fatal("Cannot allocate memory"); + mi->root_hash = simple_hash(mi->root); + + mi->mount_point = strdup(procfile_lineword(ff, l, w)); w++; + if(unlikely(!mi->mount_point)) fatal("Cannot allocate memory"); + mi->mount_point_hash = simple_hash(mi->mount_point); + + mi->mount_options = strdup(procfile_lineword(ff, l, w)); w++; + if(unlikely(!mi->mount_options)) fatal("Cannot allocate memory"); + + // count the optional fields +/* + unsigned long wo = w; +*/ + mi->optional_fields_count = 0; + char *s = procfile_lineword(ff, l, w); + while(*s && *s != '-') { + w++; + s = procfile_lineword(ff, l, w); + mi->optional_fields_count++; + } + +/* + if(unlikely(mi->optional_fields_count)) { + // we have some optional fields + // read them into a new array of pointers; + + mi->optional_fields = malloc(mi->optional_fields_count * sizeof(char *)); + if(unlikely(!mi->optional_fields)) + fatal("Cannot allocate memory for %d mountinfo optional fields", mi->optional_fields_count); + + int i; + for(i = 0; i < mi->optional_fields_count ; i++) { + *mi->optional_fields[wo] = strdup(procfile_lineword(ff, l, w)); + if(!mi->optional_fields[wo]) fatal("Cannot allocate memory"); + wo++; + } + } + else + mi->optional_fields = NULL; +*/ + + if(likely(*s == '-')) { + w++; + + mi->filesystem = strdup(procfile_lineword(ff, l, w)); w++; + if(!mi->filesystem) fatal("Cannot allocate memory"); + mi->filesystem_hash = simple_hash(mi->filesystem); + + mi->mount_source = strdup(procfile_lineword(ff, l, w)); w++; + if(!mi->mount_source) fatal("Cannot allocate memory"); + mi->mount_source_hash = simple_hash(mi->mount_source); + + mi->super_options = strdup(procfile_lineword(ff, l, w)); w++; + if(!mi->super_options) fatal("Cannot allocate memory"); + } + else { + mi->filesystem = NULL; + mi->mount_source = NULL; + mi->super_options = NULL; + } + +/* + info("MOUNTINFO: %u %u %u:%u root '%s', mount point '%s', mount options '%s', filesystem '%s', mount source '%s', super options '%s'", + mi->id, + mi->parentid, + mi->major, + mi->minor, + mi->root, + mi->mount_point, + mi->mount_options, + mi->filesystem, + mi->mount_source, + mi->super_options + ); +*/ + } + + procfile_close(ff); + return root; +} diff --git a/src/proc_self_mountinfo.h b/src/proc_self_mountinfo.h new file mode 100644 index 000000000..51712a58a --- /dev/null +++ b/src/proc_self_mountinfo.h @@ -0,0 +1,42 @@ +#include "procfile.h" + +#ifndef NETDATA_PROC_SELF_MOUNTINFO_H +#define NETDATA_PROC_SELF_MOUNTINFO_H 1 + +struct mountinfo { + long id; // mount ID: unique identifier of the mount (may be reused after umount(2)). + long parentid; // parent ID: ID of parent mount (or of self for the top of the mount tree). + unsigned long major; // major:minor: value of st_dev for files on filesystem (see stat(2)). + unsigned long minor; + + char *root; // root: root of the mount within the filesystem. + uint32_t root_hash; + + char *mount_point; // mount point: mount point relative to the process's root. + uint32_t mount_point_hash; + + char *mount_options; // mount options: per-mount options. + + int optional_fields_count; +/* + char ***optional_fields; // optional fields: zero or more fields of the form "tag[:value]". 
+*/ + char *filesystem; // filesystem type: name of filesystem in the form "type[.subtype]". + uint32_t filesystem_hash; + + char *mount_source; // mount source: filesystem-specific information or "none". + uint32_t mount_source_hash; + + char *super_options; // super options: per-superblock options. + + struct mountinfo *next; +}; + +extern struct mountinfo *mountinfo_find(struct mountinfo *root, unsigned long major, unsigned long minor); +extern struct mountinfo *mountinfo_find_by_filesystem_mount_source(struct mountinfo *root, const char *filesystem, const char *mount_source); +extern struct mountinfo *mountinfo_find_by_filesystem_super_option(struct mountinfo *root, const char *filesystem, const char *super_options); + +extern void mountinfo_free(struct mountinfo *mi); +extern struct mountinfo *mountinfo_read(); + +#endif /* NETDATA_PROC_SELF_MOUNTINFO_H */
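The new proc_self_mountinfo.{c,h} pair above parses /proc/self/mountinfo (falling back to /proc/1/mountinfo under global_host_prefix) into a singly linked list of struct mountinfo, keeping simple_hash() values next to the strings so lookups can reject most candidates before a strcmp(). A hedged usage sketch of the exported API as declared in the header; it assumes compilation inside the netdata tree (log.h available for info()), and the device numbers and filesystem names are only examples:

    #include "log.h"
    #include "proc_self_mountinfo.h"

    /* illustrative only: read the mount table once, look a few things up, free it */
    static void example_mountinfo_usage(void) {
        struct mountinfo *root = mountinfo_read();       /* NULL if the file cannot be read */
        if(!root) return;

        /* which mount carries device 8:0 (major:minor), if any? */
        struct mountinfo *mi = mountinfo_find(root, 8, 0);
        if(mi)
            info("device 8:0 is mounted at '%s' (filesystem '%s')",
                 mi->mount_point, mi->filesystem ? mi->filesystem : "unknown");

        /* a filesystem looked up by type and mount source */
        struct mountinfo *t = mountinfo_find_by_filesystem_mount_source(root, "tmpfs", "tmpfs");
        if(t)
            info("tmpfs mounted at '%s'", t->mount_point);

        mountinfo_free(root);                            /* walks ->next and frees every node */
    }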
\ No newline at end of file diff --git a/src/proc_softirqs.c b/src/proc_softirqs.c index c3a75f600..96b5d3d30 100644 --- a/src/proc_softirqs.c +++ b/src/proc_softirqs.c @@ -13,29 +13,34 @@ #include "plugin_proc.h" #include "log.h" -#define MAX_INTERRUPTS 256 -#define MAX_INTERRUPT_CPUS 256 #define MAX_INTERRUPT_NAME 50 struct interrupt { int used; char *id; char name[MAX_INTERRUPT_NAME + 1]; - unsigned long long value[MAX_INTERRUPT_CPUS]; unsigned long long total; + unsigned long long value[]; }; -static struct interrupt *alloc_interrupts(int lines) { +// since each interrupt is variable in size +// we use this to calculate its record size +#define recordsize(cpus) (sizeof(struct interrupt) + (cpus * sizeof(unsigned long long))) + +// given a base, get a pointer to each record +#define irrindex(base, line, cpus) ((struct interrupt *)&((char *)(base))[line * recordsize(cpus)]) + +static inline struct interrupt *get_interrupts_array(int lines, int cpus) { static struct interrupt *irrs = NULL; - static int alloced = 0; + static int allocated = 0; - if(lines < alloced) return irrs; + if(lines < allocated) return irrs; else { - irrs = (struct interrupt *)realloc(irrs, lines * sizeof(struct interrupt)); + irrs = (struct interrupt *)realloc(irrs, lines * recordsize(cpus)); if(!irrs) fatal("Cannot allocate memory for %d interrupts", lines); - alloced = lines; + allocated = lines; } return irrs; @@ -53,7 +58,7 @@ int do_proc_softirqs(int update_every, unsigned long long dt) { if(!ff) { char filename[FILENAME_MAX + 1]; - snprintf(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/softirqs"); + snprintfz(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/softirqs"); ff = procfile_open(config_get("plugin:proc:/proc/softirqs", "filename to monitor", filename), " \t", PROCFILE_FLAG_DEFAULT); } if(!ff) return 1; @@ -76,8 +81,6 @@ int do_proc_softirqs(int update_every, unsigned long long dt) { if(strncmp(procfile_lineword(ff, 0, w), "CPU", 3) == 0) cpus++; } - - if(cpus > MAX_INTERRUPT_CPUS) cpus = MAX_INTERRUPT_CPUS; } if(!cpus) { @@ -86,12 +89,12 @@ int do_proc_softirqs(int update_every, unsigned long long dt) { } // allocate the size we need; - irrs = alloc_interrupts(lines); + irrs = get_interrupts_array(lines, cpus); irrs[0].used = 0; // loop through all lines for(l = 1; l < lines ;l++) { - struct interrupt *irr = &irrs[l]; + struct interrupt *irr = irrindex(irrs, l, cpus); irr->used = 0; irr->total = 0; @@ -115,8 +118,7 @@ int do_proc_softirqs(int update_every, unsigned long long dt) { irr->total += irr->value[c]; } - strncpy(irr->name, irr->id, MAX_INTERRUPT_NAME); - irr->name[MAX_INTERRUPT_NAME] = '\0'; + strncpyz(irr->name, irr->id, MAX_INTERRUPT_NAME); irr->used = 1; } @@ -130,15 +132,17 @@ int do_proc_softirqs(int update_every, unsigned long long dt) { st = rrdset_create("system", "softirqs", NULL, "softirqs", NULL, "System softirqs", "softirqs/s", 950, update_every, RRDSET_TYPE_STACKED); for(l = 0; l < lines ;l++) { - if(!irrs[l].used) continue; - rrddim_add(st, irrs[l].id, irrs[l].name, 1, 1, RRDDIM_INCREMENTAL); + struct interrupt *irr = irrindex(irrs, l, cpus); + if(!irr->used) continue; + rrddim_add(st, irr->id, irr->name, 1, 1, RRDDIM_INCREMENTAL); } } else rrdset_next(st); for(l = 0; l < lines ;l++) { - if(!irrs[l].used) continue; - rrddim_set(st, irrs[l].id, irrs[l].total); + struct interrupt *irr = irrindex(irrs, l, cpus); + if(!irr->used) continue; + rrddim_set(st, irr->id, irr->total); } rrdset_done(st); @@ -146,34 +150,37 @@ int do_proc_softirqs(int update_every, 
unsigned long long dt) { int c; for(c = 0; c < cpus ; c++) { - char id[256]; - snprintf(id, 256, "cpu%d_softirqs", c); + char id[256+1]; + snprintfz(id, 256, "cpu%d_softirqs", c); st = rrdset_find_bytype("cpu", id); if(!st) { // find if everything is zero unsigned long long core_sum = 0 ; for(l = 0; l < lines ;l++) { - if(!irrs[l].used) continue; - core_sum += irrs[l].value[c]; + struct interrupt *irr = irrindex(irrs, l, cpus); + if(!irr->used) continue; + core_sum += irr->value[c]; } if(core_sum == 0) continue; // try next core - char name[256], title[256]; - snprintf(name, 256, "cpu%d_softirqs", c); - snprintf(title, 256, "CPU%d softirqs", c); + char name[256+1], title[256+1]; + snprintfz(name, 256, "cpu%d_softirqs", c); + snprintfz(title, 256, "CPU%d softirqs", c); st = rrdset_create("cpu", id, name, "softirqs", "cpu.softirqs", title, "softirqs/s", 3000 + c, update_every, RRDSET_TYPE_STACKED); for(l = 0; l < lines ;l++) { - if(!irrs[l].used) continue; - rrddim_add(st, irrs[l].id, irrs[l].name, 1, 1, RRDDIM_INCREMENTAL); + struct interrupt *irr = irrindex(irrs, l, cpus); + if(!irr->used) continue; + rrddim_add(st, irr->id, irr->name, 1, 1, RRDDIM_INCREMENTAL); } } else rrdset_next(st); for(l = 0; l < lines ;l++) { - if(!irrs[l].used) continue; - rrddim_set(st, irrs[l].id, irrs[l].value[c]); + struct interrupt *irr = irrindex(irrs, l, cpus); + if(!irr->used) continue; + rrddim_set(st, irr->id, irr->value[c]); } rrdset_done(st); } diff --git a/src/proc_stat.c b/src/proc_stat.c index 47f994b52..154ba167d 100644 --- a/src/proc_stat.c +++ b/src/proc_stat.c @@ -30,7 +30,7 @@ int do_proc_stat(int update_every, unsigned long long dt) { if(!ff) { char filename[FILENAME_MAX + 1]; - snprintf(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/stat"); + snprintfz(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/stat"); ff = procfile_open(config_get("plugin:proc:/proc/stat", "filename to monitor", filename), " \t:", PROCFILE_FLAG_DEFAULT); } if(!ff) return 1; diff --git a/src/proc_sys_kernel_random_entropy_avail.c b/src/proc_sys_kernel_random_entropy_avail.c index be9070aca..d7d1e8261 100644 --- a/src/proc_sys_kernel_random_entropy_avail.c +++ b/src/proc_sys_kernel_random_entropy_avail.c @@ -17,7 +17,7 @@ int do_proc_sys_kernel_random_entropy_avail(int update_every, unsigned long long if(!ff) { char filename[FILENAME_MAX + 1]; - snprintf(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/sys/kernel/random/entropy_avail"); + snprintfz(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/sys/kernel/random/entropy_avail"); ff = procfile_open(config_get("plugin:proc:/proc/sys/kernel/random/entropy_avail", "filename to monitor", filename), "", PROCFILE_FLAG_DEFAULT); } if(!ff) return 1; diff --git a/src/proc_vmstat.c b/src/proc_vmstat.c index c8222390c..7b20ed8cf 100644 --- a/src/proc_vmstat.c +++ b/src/proc_vmstat.c @@ -214,7 +214,7 @@ int do_proc_vmstat(int update_every, unsigned long long dt) { if(!ff) { char filename[FILENAME_MAX + 1]; - snprintf(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/vmstat"); + snprintfz(filename, FILENAME_MAX, "%s%s", global_host_prefix, "/proc/vmstat"); ff = procfile_open(config_get("plugin:proc:/proc/vmstat", "filename to monitor", filename), " \t:", PROCFILE_FLAG_DEFAULT); } if(!ff) return 1; diff --git a/src/procfile.c b/src/procfile.c index 1fc33ef5f..291f14519 100644 --- a/src/procfile.c +++ b/src/procfile.c @@ -445,8 +445,7 @@ procfile *procfile_open(const char *filename, const char *separators, uint32_t f return NULL; } - 
strncpy(ff->filename, filename, FILENAME_MAX); - ff->filename[FILENAME_MAX] = '\0'; + strncpyz(ff->filename, filename, FILENAME_MAX); ff->fd = fd; ff->size = size; @@ -479,8 +478,7 @@ procfile *procfile_reopen(procfile *ff, const char *filename, const char *separa return NULL; } - strncpy(ff->filename, filename, FILENAME_MAX); - ff->filename[FILENAME_MAX] = '\0'; + strncpyz(ff->filename, filename, FILENAME_MAX); ff->flags = flags; diff --git a/src/registry.c b/src/registry.c new file mode 100644 index 000000000..f39ce3e2e --- /dev/null +++ b/src/registry.c @@ -0,0 +1,1838 @@ +#ifdef HAVE_CONFIG_H +#include <config.h> +#endif + +#include <uuid/uuid.h> +#include <inttypes.h> +#include <stdlib.h> +#include <string.h> +#include <ctype.h> +#include <unistd.h> +#include <sys/stat.h> +#include <sys/types.h> +#include <errno.h> +#include <fcntl.h> + +#include "log.h" +#include "common.h" +#include "dictionary.h" +#include "appconfig.h" + +#include "web_client.h" +#include "rrd.h" +#include "rrd2json.h" +#include "registry.h" + + +// ---------------------------------------------------------------------------- +// TODO +// +// 1. the default tracking cookie expires in 1 year, but the persons are not +// removed from the db - this means the database only grows - ideally the +// database should be cleaned in registry_save() for both on-disk and +// on-memory entries. +// +// Cleanup: +// i. Find all the PERSONs that have expired cookie +// ii. For each of their PERSON_URLs: +// - decrement the linked MACHINE links +// - if the linked MACHINE has no other links, remove the linked MACHINE too +// - remove the PERSON_URL +// +// 2. add protection to prevent abusing the registry by flooding it with +// requests to fill the memory and crash it. +// +// Possible protections: +// - limit the number of URLs per person +// - limit the number of URLs per machine +// - limit the number of persons +// - limit the number of machines +// - [DONE] limit the size of URLs +// - [DONE] limit the size of PERSON_URL names +// - limit the number of requests that add data to the registry, +// per client IP per hour + + + +#define REGISTRY_URL_FLAGS_DEFAULT 0x00 +#define REGISTRY_URL_FLAGS_EXPIRED 0x01 + +#define DICTIONARY_FLAGS DICTIONARY_FLAG_VALUE_LINK_DONT_CLONE | DICTIONARY_FLAG_NAME_LINK_DONT_CLONE + +// ---------------------------------------------------------------------------- +// COMMON structures + +struct registry { + int enabled; + + char machine_guid[36 + 1]; + + // entries counters / statistics + unsigned long long persons_count; + unsigned long long machines_count; + unsigned long long usages_count; + unsigned long long urls_count; + unsigned long long persons_urls_count; + unsigned long long machines_urls_count; + unsigned long long log_count; + + // memory counters / statistics + unsigned long long persons_memory; + unsigned long long machines_memory; + unsigned long long urls_memory; + unsigned long long persons_urls_memory; + unsigned long long machines_urls_memory; + + // configuration + unsigned long long save_registry_every_entries; + char *registry_domain; + char *hostname; + char *registry_to_announce; + time_t persons_expiration; // seconds to expire idle persons + + size_t max_url_length; + size_t max_name_length; + + // file/path names + char *pathname; + char *db_filename; + char *log_filename; + char *machine_guid_filename; + + // open files + FILE *log_fp; + + // the database + DICTIONARY *persons; // dictionary of PERSON *, with key the PERSON.guid + DICTIONARY *machines; // dictionary of 
MACHINE *, with key the MACHINE.guid + DICTIONARY *urls; // dictionary of URL *, with key the URL.url + + // concurrency locking + // we keep different locks for different things + // so that many tasks can be completed in parallel + pthread_mutex_t persons_lock; + pthread_mutex_t machines_lock; + pthread_mutex_t urls_lock; + pthread_mutex_t person_urls_lock; + pthread_mutex_t machine_urls_lock; + pthread_mutex_t log_lock; +} registry; + + +// ---------------------------------------------------------------------------- +// URL structures +// Save memory by de-duplicating URLs +// so instead of storing URLs all over the place +// we store them here and we keep pointers elsewhere + +struct url { + uint32_t links; // the number of links to this URL - when none is left, we free it + uint16_t len; // the length of the URL in bytes + char url[1]; // the URL - dynamically allocated to more size +}; +typedef struct url URL; + + +// ---------------------------------------------------------------------------- +// MACHINE structures + +// For each MACHINE-URL pair we keep this +struct machine_url { + URL *url; // de-duplicated URL +// DICTIONARY *persons; // dictionary of PERSON * + + uint8_t flags; + uint32_t first_t; // the first time we saw this + uint32_t last_t; // the last time we saw this + uint32_t usages; // how many times this has been accessed +}; +typedef struct machine_url MACHINE_URL; + +// A machine +struct machine { + char guid[36 + 1]; // the GUID + + uint32_t links; // the number of PERSON_URLs linked to this machine + + DICTIONARY *urls; // MACHINE_URL * + + uint32_t first_t; // the first time we saw this + uint32_t last_t; // the last time we saw this + uint32_t usages; // how many times this has been accessed +}; +typedef struct machine MACHINE; + + +// ---------------------------------------------------------------------------- +// PERSON structures + +// for each PERSON-URL pair we keep this +struct person_url { + URL *url; // de-duplicated URL + MACHINE *machine; // link the MACHINE of this URL + + uint8_t flags; + uint32_t first_t; // the first time we saw this + uint32_t last_t; // the last time we saw this + uint32_t usages; // how many times this has been accessed + + char name[1]; // the name of the URL, as known by the user + // dynamically allocated to fit properly +}; +typedef struct person_url PERSON_URL; + +// A person +struct person { + char guid[36 + 1]; // the person GUID + + DICTIONARY *urls; // dictionary of PERSON_URL * + + uint32_t first_t; // the first time we saw this + uint32_t last_t; // the last time we saw this + uint32_t usages; // how many times this has been accessed +}; +typedef struct person PERSON; + + +// ---------------------------------------------------------------------------- +// REGISTRY concurrency locking + +static inline void registry_persons_lock(void) { + pthread_mutex_lock(®istry.persons_lock); +} + +static inline void registry_persons_unlock(void) { + pthread_mutex_unlock(®istry.persons_lock); +} + +static inline void registry_machines_lock(void) { + pthread_mutex_lock(®istry.machines_lock); +} + +static inline void registry_machines_unlock(void) { + pthread_mutex_unlock(®istry.machines_lock); +} + +static inline void registry_urls_lock(void) { + pthread_mutex_lock(®istry.urls_lock); +} + +static inline void registry_urls_unlock(void) { + pthread_mutex_unlock(®istry.urls_lock); +} + +// ideally, we should not lock the whole registry for +// updating a person's urls. 
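Worth noting in the structures above: URLs are stored once and shared. struct url keeps a link counter plus the string itself in a single allocation (url[1] is an old-style flexible tail, so the record is malloc()ed as sizeof(URL) + urllen), and PERSON_URL / MACHINE_URL entries only point at it, incrementing links when they attach and freeing the record when the last link goes away. A small sketch of that allocate/link/unlink pattern in isolation, without the dictionary index and with illustrative names:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* same shape as the registry's URL record: reference count + inline string */
    struct url {
        uint32_t links;
        uint16_t len;
        char url[1];                         /* allocated to sizeof(struct url) + len */
    };

    static struct url *url_allocate(const char *s) {
        size_t len = strlen(s);
        struct url *u = malloc(sizeof(struct url) + len);   /* url[1] already covers the '\0' */
        if(!u) return NULL;
        memcpy(u->url, s, len + 1);
        u->len = (uint16_t)len;
        u->links = 0;
        return u;
    }

    static void url_link(struct url *u)   { u->links++; }

    static void url_unlink(struct url *u) {
        if(--u->links == 0) free(u);         /* the last referrer releases the record */
    }

    int main(void) {
        struct url *u = url_allocate("http://localhost:19999/");
        if(!u) return 1;
        url_link(u);                         /* say, a PERSON_URL starts pointing here */
        url_link(u);                         /* and a MACHINE_URL as well */
        printf("'%s' has %u links\n", u->url, (unsigned)u->links);
        url_unlink(u);
        url_unlink(u);                       /* freed here */
        return 0;
    }

In the registry itself the record is additionally indexed in registry.urls, so registry_url_get() can hand back the existing record instead of allocating a duplicate.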
+// however, to save the memory required for keeping a +// mutex (40 bytes) per person, we do... +static inline void registry_person_urls_lock(PERSON *p) { + (void)p; + pthread_mutex_lock(®istry.person_urls_lock); +} + +static inline void registry_person_urls_unlock(PERSON *p) { + (void)p; + pthread_mutex_unlock(®istry.person_urls_lock); +} + +// ideally, we should not lock the whole registry for +// updating a machine's urls. +// however, to save the memory required for keeping a +// mutex (40 bytes) per machine, we do... +static inline void registry_machine_urls_lock(MACHINE *m) { + (void)m; + pthread_mutex_lock(®istry.machine_urls_lock); +} + +static inline void registry_machine_urls_unlock(MACHINE *m) { + (void)m; + pthread_mutex_unlock(®istry.machine_urls_lock); +} + +static inline void registry_log_lock(void) { + pthread_mutex_lock(®istry.log_lock); +} + +static inline void registry_log_unlock(void) { + pthread_mutex_unlock(®istry.log_lock); +} + + +// ---------------------------------------------------------------------------- +// common functions + +// parse a GUID and re-generated to be always lower case +// this is used as a protection against the variations of GUIDs +static inline int registry_regenerate_guid(const char *guid, char *result) { + uuid_t uuid; + if(unlikely(uuid_parse(guid, uuid) == -1)) { + info("Registry: GUID '%s' is not a valid GUID.", guid); + return -1; + } + else { + uuid_unparse_lower(uuid, result); + +#ifdef NETDATA_INTERNAL_CHECKS + if(strcmp(guid, result)) + info("Registry: source GUID '%s' and re-generated GUID '%s' differ!", guid, result); +#endif /* NETDATA_INTERNAL_CHECKS */ + } + + return 0; +} + +// make sure the names of the machines / URLs do not contain any tabs +// (which are used as our separator in the database files) +// and are properly trimmed (before and after) +static inline char *registry_fix_machine_name(char *name, size_t *len) { + char *s = name?name:""; + + // skip leading spaces + while(*s && isspace(*s)) s++; + + // make sure all spaces are a SPACE + char *t = s; + while(*t) { + if(unlikely(isspace(*t))) + *t = ' '; + + t++; + } + + // remove trailing spaces + while(--t >= s) { + if(*t == ' ') + *t = '\0'; + else + break; + } + t++; + + if(likely(len)) + *len = (t - s); + + return s; +} + +static inline char *registry_fix_url(char *url, size_t *len) { + return registry_fix_machine_name(url, len); +} + + +// ---------------------------------------------------------------------------- +// forward definition of functions + +extern PERSON *registry_request_access(char *person_guid, char *machine_guid, char *url, char *name, time_t when); +extern PERSON *registry_request_delete(char *person_guid, char *machine_guid, char *url, char *delete_url, time_t when); + + +// ---------------------------------------------------------------------------- +// URL + +static inline URL *registry_url_allocate_nolock(const char *url, size_t urllen) { + // protection from too big URLs + if(urllen > registry.max_url_length) + urllen = registry.max_url_length; + + debug(D_REGISTRY, "Registry: registry_url_allocate_nolock('%s'): allocating %zu bytes", url, sizeof(URL) + urllen); + URL *u = malloc(sizeof(URL) + urllen); + if(!u) fatal("Cannot allocate %zu bytes for URL '%s'", sizeof(URL) + urllen); + + // a simple strcpy() should do the job + // but I prefer to be safe, since the caller specified urllen + strncpyz(u->url, url, urllen); + + u->len = urllen; + u->links = 0; + + registry.urls_memory += sizeof(URL) + urllen; + + debug(D_REGISTRY, "Registry: 
registry_url_allocate_nolock('%s'): indexing it", url); + dictionary_set(registry.urls, u->url, u, sizeof(URL)); + + return u; +} + +static inline URL *registry_url_get(const char *url, size_t urllen) { + debug(D_REGISTRY, "Registry: registry_url_get('%s')", url); + + registry_urls_lock(); + + URL *u = dictionary_get(registry.urls, url); + if(!u) { + u = registry_url_allocate_nolock(url, urllen); + registry.urls_count++; + } + + registry_urls_unlock(); + + return u; +} + +static inline void registry_url_link_nolock(URL *u) { + u->links++; + debug(D_REGISTRY, "Registry: registry_url_link_nolock('%s'): URL has now %u links", u->url, u->links); +} + +static inline void registry_url_unlink_nolock(URL *u) { + u->links--; + if(!u->links) { + debug(D_REGISTRY, "Registry: registry_url_unlink_nolock('%s'): No more links for this URL", u->url); + dictionary_del(registry.urls, u->url); + free(u); + } + else + debug(D_REGISTRY, "Registry: registry_url_unlink_nolock('%s'): URL has %u links left", u->url, u->links); +} + + +// ---------------------------------------------------------------------------- +// MACHINE + +static inline MACHINE *registry_machine_find(const char *machine_guid) { + debug(D_REGISTRY, "Registry: registry_machine_find('%s')", machine_guid); + return dictionary_get(registry.machines, machine_guid); +} + +static inline MACHINE_URL *registry_machine_url_allocate(MACHINE *m, URL *u, time_t when) { + debug(D_REGISTRY, "registry_machine_link_to_url('%s', '%s'): allocating %zu bytes", m->guid, u->url, sizeof(MACHINE_URL)); + + MACHINE_URL *mu = malloc(sizeof(MACHINE_URL)); + if(!mu) fatal("registry_machine_link_to_url('%s', '%s'): cannot allocate %zu bytes.", m->guid, u->url, sizeof(MACHINE_URL)); + + // mu->persons = dictionary_create(DICTIONARY_FLAGS); + // dictionary_set(mu->persons, p->guid, p, sizeof(PERSON)); + + mu->first_t = mu->last_t = when; + mu->usages = 1; + mu->url = u; + mu->flags = REGISTRY_URL_FLAGS_DEFAULT; + + registry.machines_urls_memory += sizeof(MACHINE_URL); + + debug(D_REGISTRY, "registry_machine_link_to_url('%s', '%s'): indexing URL in machine", m->guid, u->url); + dictionary_set(m->urls, u->url, mu, sizeof(MACHINE_URL)); + registry_url_link_nolock(u); + + return mu; +} + +static inline MACHINE *registry_machine_allocate(const char *machine_guid, time_t when) { + debug(D_REGISTRY, "Registry: registry_machine_allocate('%s'): creating new machine, sizeof(MACHINE)=%zu", machine_guid, sizeof(MACHINE)); + + MACHINE *m = malloc(sizeof(MACHINE)); + if(!m) fatal("Registry: cannot allocate memory for new machine '%s'", machine_guid); + + strncpyz(m->guid, machine_guid, 36); + + debug(D_REGISTRY, "Registry: registry_machine_allocate('%s'): creating dictionary of urls", machine_guid); + m->urls = dictionary_create(DICTIONARY_FLAGS); + + m->first_t = m->last_t = when; + m->usages = 0; + + registry.machines_memory += sizeof(MACHINE); + + registry.machines_count++; + dictionary_set(registry.machines, m->guid, m, sizeof(MACHINE)); + + return m; +} + +// 1. validate machine GUID +// 2. if it is valid, find it or create it and return it +// 3. if it is not valid, return NULL +static inline MACHINE *registry_machine_get(const char *machine_guid, time_t when) { + MACHINE *m = NULL; + + registry_machines_lock(); + + if(likely(machine_guid && *machine_guid)) { + // validate it is a GUID + char buf[36 + 1]; + if(unlikely(registry_regenerate_guid(machine_guid, buf) == -1)) + info("Registry: machine guid '%s' is not a valid guid. 
Ignoring it.", machine_guid); + else { + machine_guid = buf; + m = registry_machine_find(machine_guid); + if(!m) m = registry_machine_allocate(machine_guid, when); + } + } + + registry_machines_unlock(); + + return m; +} + + +// ---------------------------------------------------------------------------- +// PERSON + +static inline PERSON *registry_person_find(const char *person_guid) { + debug(D_REGISTRY, "Registry: registry_person_find('%s')", person_guid); + return dictionary_get(registry.persons, person_guid); +} + +static inline PERSON_URL *registry_person_url_allocate(PERSON *p, MACHINE *m, URL *u, char *name, size_t namelen, time_t when) { + // protection from too big names + if(namelen > registry.max_name_length) + namelen = registry.max_name_length; + + debug(D_REGISTRY, "registry_person_url_allocate('%s', '%s', '%s'): allocating %zu bytes", p->guid, m->guid, u->url, + sizeof(PERSON_URL) + namelen); + + PERSON_URL *pu = malloc(sizeof(PERSON_URL) + namelen); + if(!pu) fatal("registry_person_url_allocate('%s', '%s', '%s'): cannot allocate %zu bytes.", p->guid, m->guid, u->url, sizeof(PERSON_URL) + namelen); + + // a simple strcpy() should do the job + // but I prefer to be safe, since the caller specified urllen + strncpyz(pu->name, name, namelen); + + pu->machine = m; + pu->first_t = pu->last_t = when; + pu->usages = 1; + pu->url = u; + pu->flags = REGISTRY_URL_FLAGS_DEFAULT; + m->links++; + + registry.persons_urls_memory += sizeof(PERSON_URL) + namelen; + + debug(D_REGISTRY, "registry_person_url_allocate('%s', '%s', '%s'): indexing URL in person", p->guid, m->guid, u->url); + dictionary_set(p->urls, u->url, pu, sizeof(PERSON_URL)); + registry_url_link_nolock(u); + + return pu; +} + +static inline PERSON_URL *registry_person_url_reallocate(PERSON *p, MACHINE *m, URL *u, char *name, size_t namelen, time_t when, PERSON_URL *pu) { + // this function is needed to change the name of a PERSON_URL + + debug(D_REGISTRY, "registry_person_url_reallocate('%s', '%s', '%s'): allocating %zu bytes", p->guid, m->guid, u->url, + sizeof(PERSON_URL) + namelen); + + PERSON_URL *tpu = registry_person_url_allocate(p, m, u, name, namelen, when); + tpu->first_t = pu->first_t; + tpu->last_t = pu->last_t; + tpu->usages = pu->usages; + + // ok, these are a hack - since the registry_person_url_allocate() is + // adding these, we have to subtract them + tpu->machine->links--; + registry.persons_urls_memory -= sizeof(PERSON_URL) + strlen(pu->name); + registry_url_unlink_nolock(u); + + free(pu); + + return tpu; +} + +static inline PERSON *registry_person_allocate(const char *person_guid, time_t when) { + PERSON *p = NULL; + + debug(D_REGISTRY, "Registry: registry_person_allocate('%s'): allocating new person, sizeof(PERSON)=%zu", (person_guid)?person_guid:"", sizeof(PERSON)); + + p = malloc(sizeof(PERSON)); + if(!p) fatal("Registry: cannot allocate memory for new person."); + + if(!person_guid) { + for (; ;) { + uuid_t uuid; + uuid_generate(uuid); + uuid_unparse_lower(uuid, p->guid); + + debug(D_REGISTRY, "Registry: Checking if the generated person guid '%s' is unique", p->guid); + if (!dictionary_get(registry.persons, p->guid)) { + debug(D_REGISTRY, "Registry: generated person guid '%s' is unique", p->guid); + break; + } + else + info("Registry: generated person guid '%s' found in the registry. 
Retrying...", p->guid); + } + } + else + strncpyz(p->guid, person_guid, 36); + + debug(D_REGISTRY, "Registry: registry_person_allocate('%s'): creating dictionary of urls", p->guid); + p->urls = dictionary_create(DICTIONARY_FLAGS); + + p->first_t = p->last_t = when; + p->usages = 0; + + registry.persons_memory += sizeof(PERSON); + + registry.persons_count++; + dictionary_set(registry.persons, p->guid, p, sizeof(PERSON)); + + return p; +} + + +// 1. validate person GUID +// 2. if it is valid, find it +// 3. if it is not valid, create a new one +// 4. return it +static inline PERSON *registry_person_get(const char *person_guid, time_t when) { + PERSON *p = NULL; + + registry_persons_lock(); + + if(person_guid && *person_guid) { + char buf[36 + 1]; + // validate it is a GUID + if(unlikely(registry_regenerate_guid(person_guid, buf) == -1)) + info("Registry: person guid '%s' is not a valid guid. Ignoring it.", person_guid); + else { + person_guid = buf; + p = registry_person_find(person_guid); + if(!p) person_guid = NULL; + } + } + + if(!p) p = registry_person_allocate(NULL, when); + + registry_persons_unlock(); + + return p; +} + +// ---------------------------------------------------------------------------- +// LINKING OF OBJECTS + +static inline PERSON_URL *registry_person_link_to_url(PERSON *p, MACHINE *m, URL *u, char *name, size_t namelen, time_t when) { + debug(D_REGISTRY, "registry_person_link_to_url('%s', '%s', '%s'): searching for URL in person", p->guid, m->guid, u->url); + + registry_person_urls_lock(p); + + PERSON_URL *pu = dictionary_get(p->urls, u->url); + if(!pu) { + debug(D_REGISTRY, "registry_person_link_to_url('%s', '%s', '%s'): not found", p->guid, m->guid, u->url); + pu = registry_person_url_allocate(p, m, u, name, namelen, when); + registry.persons_urls_count++; + } + else { + debug(D_REGISTRY, "registry_person_link_to_url('%s', '%s', '%s'): found", p->guid, m->guid, u->url); + pu->usages++; + if(likely(pu->last_t < when)) pu->last_t = when; + + if(pu->machine != m) { + MACHINE_URL *mu = dictionary_get(pu->machine->urls, u->url); + if(mu) { + info("registry_person_link_to_url('%s', '%s', '%s'): URL switched machines (old was '%s') - expiring it from previous machine.", + p->guid, m->guid, u->url, pu->machine->guid); + mu->flags |= REGISTRY_URL_FLAGS_EXPIRED; + } + else { + info("registry_person_link_to_url('%s', '%s', '%s'): URL switched machines (old was '%s') - but the URL is not linked to the old machine.", + p->guid, m->guid, u->url, pu->machine->guid); + } + + pu->machine->links--; + pu->machine = m; + } + + if(strcmp(pu->name, name)) { + // the name of the PERSON_URL has changed ! + pu = registry_person_url_reallocate(p, m, u, name, namelen, when, pu); + } + } + + p->usages++; + if(likely(p->last_t < when)) p->last_t = when; + + if(pu->flags & REGISTRY_URL_FLAGS_EXPIRED) { + info("registry_person_link_to_url('%s', '%s', '%s'): accessing an expired URL. 
Re-enabling URL.", p->guid, m->guid, u->url); + pu->flags &= ~REGISTRY_URL_FLAGS_EXPIRED; + } + + registry_person_urls_unlock(p); + + return pu; +} + +static inline MACHINE_URL *registry_machine_link_to_url(PERSON *p, MACHINE *m, URL *u, time_t when) { + debug(D_REGISTRY, "registry_machine_link_to_url('%s', '%s', '%s'): searching for URL in machine", p->guid, m->guid, u->url); + + registry_machine_urls_lock(m); + + MACHINE_URL *mu = dictionary_get(m->urls, u->url); + if(!mu) { + debug(D_REGISTRY, "registry_machine_link_to_url('%s', '%s', '%s'): not found", p->guid, m->guid, u->url); + mu = registry_machine_url_allocate(m, u, when); + registry.machines_urls_count++; + } + else { + debug(D_REGISTRY, "registry_machine_link_to_url('%s', '%s', '%s'): found", p->guid, m->guid, u->url); + mu->usages++; + if(likely(mu->last_t < when)) mu->last_t = when; + } + + //debug(D_REGISTRY, "registry_machine_link_to_url('%s', '%s', '%s'): indexing person in machine", p->guid, m->guid, u->url); + //dictionary_set(mu->persons, p->guid, p, sizeof(PERSON)); + + m->usages++; + if(likely(m->last_t < when)) m->last_t = when; + + if(mu->flags & REGISTRY_URL_FLAGS_EXPIRED) { + info("registry_machine_link_to_url('%s', '%s', '%s'): accessing an expired URL.", p->guid, m->guid, u->url); + mu->flags &= ~REGISTRY_URL_FLAGS_EXPIRED; + } + + registry_machine_urls_unlock(m); + + return mu; +} + +// ---------------------------------------------------------------------------- +// REGISTRY LOG LOAD/SAVE + +static inline int registry_should_save_db(void) { + debug(D_REGISTRY, "log entries %llu, max %llu", registry.log_count, registry.save_registry_every_entries); + return registry.log_count > registry.save_registry_every_entries; +} + +static inline void registry_log(const char action, PERSON *p, MACHINE *m, URL *u, char *name) { + if(likely(registry.log_fp)) { + // we lock only if the file is open + // to allow replaying the log at registry_log_load() + registry_log_lock(); + + if(unlikely(fprintf(registry.log_fp, "%c\t%08x\t%s\t%s\t%s\t%s\n", + action, + p->last_t, + p->guid, + m->guid, + name, + u->url) < 0)) + error("Registry: failed to save log. Registry data may be lost in case of abnormal restart."); + + // we increase the counter even on failures + // so that the registry will be saved periodically + registry.log_count++; + + registry_log_unlock(); + + // this must be outside the log_lock(), or a deadlock will happen. + // registry_save() checks the same inside the log_lock, so only + // one thread will save the db + if(unlikely(registry_should_save_db())) + registry_save(); + } +} + +static inline int registry_log_open_nolock(void) { + if(registry.log_fp) + fclose(registry.log_fp); + + registry.log_fp = fopen(registry.log_filename, "a"); + + if(registry.log_fp) { + if (setvbuf(registry.log_fp, NULL, _IOLBF, 0) != 0) + error("Cannot set line buffering on registry log file."); + return 0; + } + + error("Cannot open registry log file '%s'. 
Registry data will be lost in case of netdata or server crash.", registry.log_filename); + return -1; +} + +static inline void registry_log_close_nolock(void) { + if(registry.log_fp) { + fclose(registry.log_fp); + registry.log_fp = NULL; + } +} + +static inline void registry_log_recreate_nolock(void) { + if(registry.log_fp != NULL) { + registry_log_close_nolock(); + + // open it with truncate + registry.log_fp = fopen(registry.log_filename, "w"); + if(registry.log_fp) fclose(registry.log_fp); + else error("Cannot truncate registry log '%s'", registry.log_filename); + + registry.log_fp = NULL; + + registry_log_open_nolock(); + } +} + +int registry_log_load(void) { + char *s, buf[4096 + 1]; + size_t line = -1; + + // closing the log is required here + // otherwise we will append to it the values we read + registry_log_close_nolock(); + + debug(D_REGISTRY, "Registry: loading active db from: %s", registry.log_filename); + FILE *fp = fopen(registry.log_filename, "r"); + if(!fp) + error("Registry: cannot open registry file: %s", registry.log_filename); + else { + line = 0; + size_t len = 0; + while ((s = fgets_trim_len(buf, 4096, fp, &len))) { + line++; + + switch (s[0]) { + case 'A': // accesses + case 'D': // deletes + + // verify it is valid + if (unlikely(len < 85 || s[1] != '\t' || s[10] != '\t' || s[47] != '\t' || s[84] != '\t')) { + error("Registry: log line %u is wrong (len = %zu).", line, len); + continue; + } + s[1] = s[10] = s[47] = s[84] = '\0'; + + // get the variables + time_t when = strtoul(&s[2], NULL, 16); + char *person_guid = &s[11]; + char *machine_guid = &s[48]; + char *name = &s[85]; + + // skip the name to find the url + char *url = name; + while(*url && *url != '\t') url++; + if(!*url) { + error("Registry: log line %u does not have a url.", line); + continue; + } + *url++ = '\0'; + + // make sure the person exists + // without this, a new person guid will be created + PERSON *p = registry_person_find(person_guid); + if(!p) p = registry_person_allocate(person_guid, when); + + if(s[0] == 'A') + registry_request_access(p->guid, machine_guid, url, name, when); + else + registry_request_delete(p->guid, machine_guid, url, name, when); + + break; + + default: + error("Registry: ignoring line %zu of filename '%s': %s.", line, registry.log_filename, s); + break; + } + } + } + + // open the log again + registry_log_open_nolock(); + + return line; +} + + +// ---------------------------------------------------------------------------- +// REGISTRY REQUESTS + +PERSON *registry_request_access(char *person_guid, char *machine_guid, char *url, char *name, time_t when) { + debug(D_REGISTRY, "registry_request_access('%s', '%s', '%s'): NEW REQUEST", (person_guid)?person_guid:"", machine_guid, url); + + MACHINE *m = registry_machine_get(machine_guid, when); + if(!m) return NULL; + + // make sure the name is valid + size_t namelen; + name = registry_fix_machine_name(name, &namelen); + + size_t urllen; + url = registry_fix_url(url, &urllen); + + URL *u = registry_url_get(url, urllen); + PERSON *p = registry_person_get(person_guid, when); + + registry_person_link_to_url(p, m, u, name, namelen, when); + registry_machine_link_to_url(p, m, u, when); + + registry_log('A', p, m, u, name); + + registry.usages_count++; + return p; +} + +// verify the person, the machine and the URL exist in our DB +PERSON_URL *registry_verify_request(char *person_guid, char *machine_guid, char *url, PERSON **pp, MACHINE **mm) { + char pbuf[36 + 1], mbuf[36 + 1]; + + if(!person_guid || !*person_guid || !machine_guid || 
!*machine_guid || !url || !*url) { + info("Registry Request Verification: invalid request! person: '%s', machine '%s', url '%s'", person_guid?person_guid:"UNSET", machine_guid?machine_guid:"UNSET", url?url:"UNSET"); + return NULL; + } + + // normalize the url + url = registry_fix_url(url, NULL); + + // make sure the person GUID is valid + if(registry_regenerate_guid(person_guid, pbuf) == -1) { + info("Registry Request Verification: invalid person GUID, person: '%s', machine '%s', url '%s'", person_guid, machine_guid, url); + return NULL; + } + person_guid = pbuf; + + // make sure the machine GUID is valid + if(registry_regenerate_guid(machine_guid, mbuf) == -1) { + info("Registry Request Verification: invalid machine GUID, person: '%s', machine '%s', url '%s'", person_guid, machine_guid, url); + return NULL; + } + machine_guid = mbuf; + + // make sure the machine exists + MACHINE *m = registry_machine_find(machine_guid); + if(!m) { + info("Registry Request Verification: machine not found, person: '%s', machine '%s', url '%s'", person_guid, machine_guid, url); + return NULL; + } + if(mm) *mm = m; + + // make sure the person exist + PERSON *p = registry_person_find(person_guid); + if(!p) { + info("Registry Request Verification: person not found, person: '%s', machine '%s', url '%s'", person_guid, machine_guid, url); + return NULL; + } + if(pp) *pp = p; + + PERSON_URL *pu = dictionary_get(p->urls, url); + if(!pu) { + info("Registry Request Verification: URL not found for person, person: '%s', machine '%s', url '%s'", person_guid, machine_guid, url); + return NULL; + } + return pu; +} + +PERSON *registry_request_delete(char *person_guid, char *machine_guid, char *url, char *delete_url, time_t when) { + (void)when; + + PERSON *p = NULL; + MACHINE *m = NULL; + PERSON_URL *pu = registry_verify_request(person_guid, machine_guid, url, &p, &m); + if(!pu || !p || !m) return NULL; + + // normalize the url + delete_url = registry_fix_url(delete_url, NULL); + + // make sure the user is not deleting the url it uses + if(!strcmp(delete_url, pu->url->url)) { + info("Registry Delete Request: delete URL is the one currently accessed, person: '%s', machine '%s', url '%s', delete url '%s'", p->guid, m->guid, pu->url->url, delete_url); + return NULL; + } + + registry_person_urls_lock(p); + + PERSON_URL *dpu = dictionary_get(p->urls, delete_url); + if(!dpu) { + info("Registry Delete Request: URL not found for person: '%s', machine '%s', url '%s', delete url '%s'", p->guid, m->guid, pu->url->url, delete_url); + registry_person_urls_unlock(p); + return NULL; + } + + registry_log('D', p, m, pu->url, dpu->url->url); + + dictionary_del(p->urls, dpu->url->url); + registry_url_unlink_nolock(dpu->url); + free(dpu); + + registry_person_urls_unlock(p); + return p; +} + + +// a structure to pass to the dictionary_get_all() callback handler +struct machine_request_callback_data { + MACHINE *find_this_machine; + PERSON_URL *result; +}; + +// the callback function +// this will be run for every PERSON_URL of this PERSON +int machine_request_callback(void *entry, void *data) { + PERSON_URL *mypu = (PERSON_URL *)entry; + struct machine_request_callback_data *myrdata = (struct machine_request_callback_data *)data; + + if(mypu->machine == myrdata->find_this_machine) { + myrdata->result = mypu; + return -1; // this will also stop the walk through + } + + return 0; // continue +} + +MACHINE *registry_request_machine(char *person_guid, char *machine_guid, char *url, char *request_machine, time_t when) { + (void)when; + + char 
mbuf[36 + 1]; + + PERSON *p = NULL; + MACHINE *m = NULL; + PERSON_URL *pu = registry_verify_request(person_guid, machine_guid, url, &p, &m); + if(!pu || !p || !m) return NULL; + + // make sure the machine GUID is valid + if(registry_regenerate_guid(request_machine, mbuf) == -1) { + info("Registry Machine URLs request: invalid machine GUID, person: '%s', machine '%s', url '%s', request machine '%s'", p->guid, m->guid, pu->url->url, request_machine); + return NULL; + } + request_machine = mbuf; + + // make sure the machine exists + m = registry_machine_find(request_machine); + if(!m) { + info("Registry Machine URLs request: machine not found, person: '%s', machine '%s', url '%s', request machine '%s'", p->guid, m->guid, pu->url->url, request_machine); + return NULL; + } + + // Verify the user has in the past accessed this machine + // We will walk through the PERSON_URLs to find the machine + // linking to our machine + + // a structure to pass to the dictionary_get_all() callback handler + struct machine_request_callback_data rdata = { m, NULL }; + + // request a walk through on the dictionary + // no need for locking here, the underlying dictionary has its own + dictionary_get_all(p->urls, machine_request_callback, &rdata); + + if(rdata.result) + return m; + + return NULL; +} + + +// ---------------------------------------------------------------------------- +// REGISTRY JSON generation + +#define REGISTRY_STATUS_OK "ok" +#define REGISTRY_STATUS_FAILED "failed" +#define REGISTRY_STATUS_DISABLED "disabled" + +static inline void registry_set_person_cookie(struct web_client *w, PERSON *p) { + char edate[100]; + time_t et = time(NULL) + registry.persons_expiration; + struct tm etmbuf, *etm = gmtime_r(&et, &etmbuf); + strftime(edate, sizeof(edate), "%a, %d %b %Y %H:%M:%S %Z", etm); + + snprintfz(w->cookie1, COOKIE_MAX, NETDATA_REGISTRY_COOKIE_NAME "=%s; Expires=%s", p->guid, edate); + + if(registry.registry_domain && registry.registry_domain[0]) + snprintfz(w->cookie2, COOKIE_MAX, NETDATA_REGISTRY_COOKIE_NAME "=%s; Domain=%s; Expires=%s", p->guid, registry.registry_domain, edate); +} + +static inline void registry_json_header(struct web_client *w, const char *action, const char *status) { + w->response.data->contenttype = CT_APPLICATION_JSON; + buffer_flush(w->response.data); + buffer_sprintf(w->response.data, "{\n\t\"action\": \"%s\",\n\t\"status\": \"%s\",\n\t\"hostname\": \"%s\",\n\t\"machine_guid\": \"%s\"", + action, status, registry.hostname, registry.machine_guid); +} + +static inline void registry_json_footer(struct web_client *w) { + buffer_strcat(w->response.data, "\n}\n"); +} + +int registry_request_hello_json(struct web_client *w) { + registry_json_header(w, "hello", REGISTRY_STATUS_OK); + + buffer_sprintf(w->response.data, ",\n\t\"registry\": \"%s\"", + registry.registry_to_announce); + + registry_json_footer(w); + return 200; +} + +static inline int registry_json_disabled(struct web_client *w, const char *action) { + registry_json_header(w, action, REGISTRY_STATUS_DISABLED); + + buffer_sprintf(w->response.data, ",\n\t\"registry\": \"%s\"", + registry.registry_to_announce); + + registry_json_footer(w); + return 200; +} + +// structure used be the callbacks below +struct registry_json_walk_person_urls_callback { + PERSON *p; + MACHINE *m; + struct web_client *w; + int count; +}; + +// callback for rendering PERSON_URLs +static inline int registry_json_person_url_callback(void *entry, void *data) { + PERSON_URL *pu = (PERSON_URL *)entry; + struct 
registry_json_walk_person_urls_callback *c = (struct registry_json_walk_person_urls_callback *)data; + struct web_client *w = c->w; + + if(unlikely(c->count++)) + buffer_strcat(w->response.data, ","); + + buffer_sprintf(w->response.data, "\n\t\t[ \"%s\", \"%s\", %u000, %u, \"%s\" ]", + pu->machine->guid, pu->url->url, pu->last_t, pu->usages, pu->name); + + return 1; +} + +// callback for rendering MACHINE_URLs +static inline int registry_json_machine_url_callback(void *entry, void *data) { + MACHINE_URL *mu = (MACHINE_URL *)entry; + struct registry_json_walk_person_urls_callback *c = (struct registry_json_walk_person_urls_callback *)data; + struct web_client *w = c->w; + MACHINE *m = c->m; + + if(unlikely(c->count++)) + buffer_strcat(w->response.data, ","); + + buffer_sprintf(w->response.data, "\n\t\t[ \"%s\", \"%s\", %u000, %u ]", + m->guid, mu->url->url, mu->last_t, mu->usages); + + return 1; +} + + +// the main method for registering an access +int registry_request_access_json(struct web_client *w, char *person_guid, char *machine_guid, char *url, char *name, time_t when) { + if(!registry.enabled) + return registry_json_disabled(w, "access"); + + PERSON *p = registry_request_access(person_guid, machine_guid, url, name, when); + if(!p) { + registry_json_header(w, "access", REGISTRY_STATUS_FAILED); + registry_json_footer(w); + return 412; + } + + // set the cookie + registry_set_person_cookie(w, p); + + // generate the response + registry_json_header(w, "access", REGISTRY_STATUS_OK); + + buffer_sprintf(w->response.data, ",\n\t\"person_guid\": \"%s\",\n\t\"urls\": [", p->guid); + struct registry_json_walk_person_urls_callback c = { p, NULL, w, 0 }; + dictionary_get_all(p->urls, registry_json_person_url_callback, &c); + buffer_strcat(w->response.data, "\n\t]\n"); + + registry_json_footer(w); + return 200; +} + +// the main method for deleting a URL from a person +int registry_request_delete_json(struct web_client *w, char *person_guid, char *machine_guid, char *url, char *delete_url, time_t when) { + if(!registry.enabled) + return registry_json_disabled(w, "delete"); + + PERSON *p = registry_request_delete(person_guid, machine_guid, url, delete_url, when); + if(!p) { + registry_json_header(w, "delete", REGISTRY_STATUS_FAILED); + registry_json_footer(w); + return 412; + } + + // generate the response + registry_json_header(w, "delete", REGISTRY_STATUS_OK); + registry_json_footer(w); + return 200; +} + +// the main method for searching the URLs of a netdata +int registry_request_search_json(struct web_client *w, char *person_guid, char *machine_guid, char *url, char *request_machine, time_t when) { + if(!registry.enabled) + return registry_json_disabled(w, "search"); + + MACHINE *m = registry_request_machine(person_guid, machine_guid, url, request_machine, when); + if(!m) { + registry_json_header(w, "search", REGISTRY_STATUS_FAILED); + registry_json_footer(w); + return 404; + } + + registry_json_header(w, "search", REGISTRY_STATUS_OK); + + buffer_strcat(w->response.data, ",\n\t\"urls\": ["); + struct registry_json_walk_person_urls_callback c = { NULL, m, w, 0 }; + dictionary_get_all(m->urls, registry_json_machine_url_callback, &c); + buffer_strcat(w->response.data, "\n\t]\n"); + + registry_json_footer(w); + return 200; +} + +// structure used be the callbacks below +struct registry_person_url_callback_verify_machine_exists_data { + MACHINE *m; + int count; +}; + +int registry_person_url_callback_verify_machine_exists(void *entry, void *data) { + struct 
registry_person_url_callback_verify_machine_exists_data *d = (struct registry_person_url_callback_verify_machine_exists_data *)data; + PERSON_URL *pu = (PERSON_URL *)entry; + MACHINE *m = d->m; + + if(pu->machine == m) + d->count++; + + return 0; +} + +// the main method for switching user identity +int registry_request_switch_json(struct web_client *w, char *person_guid, char *machine_guid, char *url, char *new_person_guid, time_t when) { + (void)url; + (void)when; + + if(!registry.enabled) + return registry_json_disabled(w, "switch"); + + PERSON *op = registry_person_find(person_guid); + if(!op) { + registry_json_header(w, "switch", REGISTRY_STATUS_FAILED); + registry_json_footer(w); + return 430; + } + + PERSON *np = registry_person_find(new_person_guid); + if(!np) { + registry_json_header(w, "switch", REGISTRY_STATUS_FAILED); + registry_json_footer(w); + return 431; + } + + MACHINE *m = registry_machine_find(machine_guid); + if(!m) { + registry_json_header(w, "switch", REGISTRY_STATUS_FAILED); + registry_json_footer(w); + return 432; + } + + struct registry_person_url_callback_verify_machine_exists_data data = { m, 0 }; + + // verify the old person has access to this machine + dictionary_get_all(op->urls, registry_person_url_callback_verify_machine_exists, &data); + if(!data.count) { + registry_json_header(w, "switch", REGISTRY_STATUS_FAILED); + registry_json_footer(w); + return 433; + } + + // verify the new person has access to this machine + data.count = 0; + dictionary_get_all(np->urls, registry_person_url_callback_verify_machine_exists, &data); + if(!data.count) { + registry_json_header(w, "switch", REGISTRY_STATUS_FAILED); + registry_json_footer(w); + return 434; + } + + // set the cookie of the new person + // the user just switched identity + registry_set_person_cookie(w, np); + + // generate the response + registry_json_header(w, "switch", REGISTRY_STATUS_OK); + buffer_sprintf(w->response.data, ",\n\t\"person_guid\": \"%s\"", np->guid); + registry_json_footer(w); + return 200; +} + + +// ---------------------------------------------------------------------------- +// REGISTRY THIS MACHINE UNIQUE ID + +char *registry_get_this_machine_guid(void) { + if(likely(registry.machine_guid[0])) + return registry.machine_guid; + + // read it from disk + int fd = open(registry.machine_guid_filename, O_RDONLY); + if(fd != -1) { + char buf[36 + 1]; + if(read(fd, buf, 36) != 36) + error("Failed to read machine GUID from '%s'", registry.machine_guid_filename); + else { + buf[36] = '\0'; + if(registry_regenerate_guid(buf, registry.machine_guid) == -1) { + error("Failed to validate machine GUID '%s' from '%s'. Ignoring it - this might mean this netdata will appear as duplicate in the registry.", + buf, registry.machine_guid_filename); + + registry.machine_guid[0] = '\0'; + } + } + close(fd); + } + + // generate a new one? + if(!registry.machine_guid[0]) { + uuid_t uuid; + + uuid_generate_time(uuid); + uuid_unparse_lower(uuid, registry.machine_guid); + registry.machine_guid[36] = '\0'; + + // save it + fd = open(registry.machine_guid_filename, O_WRONLY|O_CREAT|O_TRUNC, 444); + if(fd == -1) + fatal("Cannot create unique machine id file '%s'. Please fix this.", registry.machine_guid_filename); + + if(write(fd, registry.machine_guid, 36) != 36) + fatal("Cannot write the unique machine id file '%s'. 
Please fix this.", registry.machine_guid_filename); + + close(fd); + } + + return registry.machine_guid; +} + + +// ---------------------------------------------------------------------------- +// REGISTRY LOAD/SAVE + +int registry_machine_save_url(void *entry, void *file) { + MACHINE_URL *mu = entry; + FILE *fp = file; + + debug(D_REGISTRY, "Registry: registry_machine_save_url('%s')", mu->url->url); + + int ret = fprintf(fp, "V\t%08x\t%08x\t%08x\t%02x\t%s\n", + mu->first_t, + mu->last_t, + mu->usages, + mu->flags, + mu->url->url + ); + + // error handling is done at registry_save() + + return ret; +} + +int registry_machine_save(void *entry, void *file) { + MACHINE *m = entry; + FILE *fp = file; + + debug(D_REGISTRY, "Registry: registry_machine_save('%s')", m->guid); + + int ret = fprintf(fp, "M\t%08x\t%08x\t%08x\t%s\n", + m->first_t, + m->last_t, + m->usages, + m->guid + ); + + if(ret >= 0) { + int ret2 = dictionary_get_all(m->urls, registry_machine_save_url, fp); + if(ret2 < 0) return ret2; + ret += ret2; + } + + // error handling is done at registry_save() + + return ret; +} + +static inline int registry_person_save_url(void *entry, void *file) { + PERSON_URL *pu = entry; + FILE *fp = file; + + debug(D_REGISTRY, "Registry: registry_person_save_url('%s')", pu->url->url); + + int ret = fprintf(fp, "U\t%08x\t%08x\t%08x\t%02x\t%s\t%s\t%s\n", + pu->first_t, + pu->last_t, + pu->usages, + pu->flags, + pu->machine->guid, + pu->name, + pu->url->url + ); + + // error handling is done at registry_save() + + return ret; +} + +static inline int registry_person_save(void *entry, void *file) { + PERSON *p = entry; + FILE *fp = file; + + debug(D_REGISTRY, "Registry: registry_person_save('%s')", p->guid); + + int ret = fprintf(fp, "P\t%08x\t%08x\t%08x\t%s\n", + p->first_t, + p->last_t, + p->usages, + p->guid + ); + + if(ret >= 0) { + int ret2 = dictionary_get_all(p->urls, registry_person_save_url, fp); + if (ret2 < 0) return ret2; + ret += ret2; + } + + // error handling is done at registry_save() + + return ret; +} + +int registry_save(void) { + if(!registry.enabled) return -1; + + // make sure the log is not updated + registry_log_lock(); + + if(unlikely(!registry_should_save_db())) { + registry_log_unlock(); + return -2; + } + + char tmp_filename[FILENAME_MAX + 1]; + char old_filename[FILENAME_MAX + 1]; + + snprintfz(old_filename, FILENAME_MAX, "%s.old", registry.db_filename); + snprintfz(tmp_filename, FILENAME_MAX, "%s.tmp", registry.db_filename); + + debug(D_REGISTRY, "Registry: Creating file '%s'", tmp_filename); + FILE *fp = fopen(tmp_filename, "w"); + if(!fp) { + error("Registry: Cannot create file: %s", tmp_filename); + registry_log_unlock(); + return -1; + } + + // dictionary_get_all() has its own locking, so this is safe to do + + debug(D_REGISTRY, "Saving all machines"); + int bytes1 = dictionary_get_all(registry.machines, registry_machine_save, fp); + if(bytes1 < 0) { + error("Registry: Cannot save registry machines - return value %d", bytes1); + fclose(fp); + registry_log_unlock(); + return bytes1; + } + debug(D_REGISTRY, "Registry: saving machines took %d bytes", bytes1); + + debug(D_REGISTRY, "Saving all persons"); + int bytes2 = dictionary_get_all(registry.persons, registry_person_save, fp); + if(bytes2 < 0) { + error("Registry: Cannot save registry persons - return value %d", bytes2); + fclose(fp); + registry_log_unlock(); + return bytes2; + } + debug(D_REGISTRY, "Registry: saving persons took %d bytes", bytes2); + + // save the totals + fprintf(fp, 
"T\t%016llx\t%016llx\t%016llx\t%016llx\t%016llx\t%016llx\n", + registry.persons_count, + registry.machines_count, + registry.usages_count + 1, // this is required - it is lost on db rotation + registry.urls_count, + registry.persons_urls_count, + registry.machines_urls_count + ); + + fclose(fp); + + errno = 0; + + // remove the .old db + debug(D_REGISTRY, "Registry: Removing old db '%s'", old_filename); + if(unlink(old_filename) == -1 && errno != ENOENT) + error("Registry: cannot remove old registry file '%s'", old_filename); + + // rename the db to .old + debug(D_REGISTRY, "Registry: Link current db '%s' to .old: '%s'", registry.db_filename, old_filename); + if(link(registry.db_filename, old_filename) == -1 && errno != ENOENT) + error("Registry: cannot move file '%s' to '%s'. Saving registry DB failed!", tmp_filename, registry.db_filename); + + else { + // remove the database (it is saved in .old) + debug(D_REGISTRY, "Registry: removing db '%s'", registry.db_filename); + if (unlink(registry.db_filename) == -1 && errno != ENOENT) + error("Registry: cannot remove old registry file '%s'", registry.db_filename); + + // move the .tmp to make it active + debug(D_REGISTRY, "Registry: linking tmp db '%s' to active db '%s'", tmp_filename, registry.db_filename); + if (link(tmp_filename, registry.db_filename) == -1) { + error("Registry: cannot move file '%s' to '%s'. Saving registry DB failed!", tmp_filename, + registry.db_filename); + + // move the .old back + debug(D_REGISTRY, "Registry: linking old db '%s' to active db '%s'", old_filename, registry.db_filename); + if(link(old_filename, registry.db_filename) == -1) + error("Registry: cannot move file '%s' to '%s'. Recovering the old registry DB failed!", old_filename, registry.db_filename); + } + else { + debug(D_REGISTRY, "Registry: removing tmp db '%s'", tmp_filename); + if(unlink(tmp_filename) == -1) + error("Registry: cannot remove tmp registry file '%s'", tmp_filename); + + // it has been moved successfully + // discard the current registry log + registry_log_recreate_nolock(); + + registry.log_count = 0; + } + } + + // continue operations + registry_log_unlock(); + + return -1; +} + +static inline size_t registry_load(void) { + char *s, buf[4096 + 1]; + PERSON *p = NULL; + MACHINE *m = NULL; + URL *u = NULL; + size_t line = 0; + + debug(D_REGISTRY, "Registry: loading active db from: '%s'", registry.db_filename); + FILE *fp = fopen(registry.db_filename, "r"); + if(!fp) { + error("Registry: cannot open registry file: '%s'", registry.db_filename); + return 0; + } + + size_t len = 0; + buf[4096] = '\0'; + while((s = fgets_trim_len(buf, 4096, fp, &len))) { + line++; + + debug(D_REGISTRY, "Registry: read line %zu to length %zu: %s", line, len, s); + switch(*s) { + case 'T': // totals + if(unlikely(len != 103 || s[1] != '\t' || s[18] != '\t' || s[35] != '\t' || s[52] != '\t' || s[69] != '\t' || s[86] != '\t' || s[103] != '\0')) { + error("Registry totals line %u is wrong (len = %zu).", line, len); + continue; + } + registry.persons_count = strtoull(&s[2], NULL, 16); + registry.machines_count = strtoull(&s[19], NULL, 16); + registry.usages_count = strtoull(&s[36], NULL, 16); + registry.urls_count = strtoull(&s[53], NULL, 16); + registry.persons_urls_count = strtoull(&s[70], NULL, 16); + registry.machines_urls_count = strtoull(&s[87], NULL, 16); + break; + + case 'P': // person + m = NULL; + // verify it is valid + if(unlikely(len != 65 || s[1] != '\t' || s[10] != '\t' || s[19] != '\t' || s[28] != '\t' || s[65] != '\0')) { + error("Registry person 
line %u is wrong (len = %zu).", line, len); + continue; + } + + s[1] = s[10] = s[19] = s[28] = '\0'; + p = registry_person_allocate(&s[29], strtoul(&s[2], NULL, 16)); + p->last_t = strtoul(&s[11], NULL, 16); + p->usages = strtoul(&s[20], NULL, 16); + debug(D_REGISTRY, "Registry loaded person '%s', first: %u, last: %u, usages: %u", p->guid, p->first_t, p->last_t, p->usages); + break; + + case 'M': // machine + p = NULL; + // verify it is valid + if(unlikely(len != 65 || s[1] != '\t' || s[10] != '\t' || s[19] != '\t' || s[28] != '\t' || s[65] != '\0')) { + error("Registry person line %u is wrong (len = %zu).", line, len); + continue; + } + + s[1] = s[10] = s[19] = s[28] = '\0'; + m = registry_machine_allocate(&s[29], strtoul(&s[2], NULL, 16)); + m->last_t = strtoul(&s[11], NULL, 16); + m->usages = strtoul(&s[20], NULL, 16); + debug(D_REGISTRY, "Registry loaded machine '%s', first: %u, last: %u, usages: %u", m->guid, m->first_t, m->last_t, m->usages); + break; + + case 'U': // person URL + if(unlikely(!p)) { + error("Registry: ignoring line %zu, no person loaded: %s", line, s); + continue; + } + + // verify it is valid + if(len < 69 || s[1] != '\t' || s[10] != '\t' || s[19] != '\t' || s[28] != '\t' || s[31] != '\t' || s[68] != '\t') { + error("Registry person URL line %u is wrong (len = %zu).", line, len); + continue; + } + + s[1] = s[10] = s[19] = s[28] = s[31] = s[68] = '\0'; + + // skip the name to find the url + char *url = &s[69]; + while(*url && *url != '\t') url++; + if(!*url) { + error("Registry person URL line %u does not have a url.", line); + continue; + } + *url++ = '\0'; + + u = registry_url_allocate_nolock(url, strlen(url)); + + time_t first_t = strtoul(&s[2], NULL, 16); + + m = registry_machine_find(&s[32]); + if(!m) m = registry_machine_allocate(&s[32], first_t); + + PERSON_URL *pu = registry_person_url_allocate(p, m, u, &s[69], strlen(&s[69]), first_t); + pu->last_t = strtoul(&s[11], NULL, 16); + pu->usages = strtoul(&s[20], NULL, 16); + pu->flags = strtoul(&s[29], NULL, 16); + debug(D_REGISTRY, "Registry loaded person URL '%s' with name '%s' of machine '%s', first: %u, last: %u, usages: %u, flags: %02x", u->url, pu->name, m->guid, pu->first_t, pu->last_t, pu->usages, pu->flags); + break; + + case 'V': // machine URL + if(unlikely(!m)) { + error("Registry: ignoring line %zu, no machine loaded: %s", line, s); + continue; + } + + // verify it is valid + if(len < 32 || s[1] != '\t' || s[10] != '\t' || s[19] != '\t' || s[28] != '\t' || s[31] != '\t') { + error("Registry person URL line %u is wrong (len = %zu).", line, len); + continue; + } + + s[1] = s[10] = s[19] = s[28] = s[31] = '\0'; + u = registry_url_allocate_nolock(&s[32], strlen(&s[32])); + + MACHINE_URL *mu = registry_machine_url_allocate(m, u, strtoul(&s[2], NULL, 16)); + mu->last_t = strtoul(&s[11], NULL, 16); + mu->usages = strtoul(&s[20], NULL, 16); + mu->flags = strtoul(&s[29], NULL, 16); + debug(D_REGISTRY, "Registry loaded machine URL '%s', machine '%s', first: %u, last: %u, usages: %u, flags: %02x", u->url, m->guid, mu->first_t, mu->last_t, mu->usages, mu->flags); + break; + + default: + error("Registry: ignoring line %zu of filename '%s': %s.", line, registry.db_filename, s); + break; + } + } + fclose(fp); + + return line; +} + +// ---------------------------------------------------------------------------- +// REGISTRY + +int registry_init(void) { + char filename[FILENAME_MAX + 1]; + + // registry enabled? 
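For readers following the load/save code above: registry_save() emits fixed-width, tab-separated records whose timestamps and counters are zero-padded hex, which is why registry_load() can validate each line by total length and tab positions, drop '\0' terminators at fixed offsets, and parse the fields with strtoul(..., 16). A minimal standalone sketch of that round trip for a "P" (person) record; the GUID value is illustrative, not a real netdata id:

    /* Sketch: write and re-parse one "P" record in the fixed-width format
     * used above: P \t %08x \t %08x \t %08x \t <36-char guid>
     * Tabs sit at offsets 1, 10, 19, 28; the guid starts at 29; length is 65. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        const char *guid = "11111111-2222-3333-4444-555555555555";  /* illustrative */
        unsigned long first  = (unsigned long)time(NULL);
        unsigned long last   = first + 60;
        unsigned long usages = 7;

        char line[128];
        int len = snprintf(line, sizeof(line), "P\t%08lx\t%08lx\t%08lx\t%s",
                           first, last, usages, guid);

        /* registry_load() style validation: exact length and tab positions */
        if (len != 65 || line[1] != '\t' || line[10] != '\t' ||
            line[19] != '\t' || line[28] != '\t') {
            fprintf(stderr, "unexpected record layout (len=%d)\n", len);
            return 1;
        }

        /* terminate the fields in place, then parse the hex counters */
        line[1] = line[10] = line[19] = line[28] = '\0';
        unsigned long p_first  = strtoul(&line[2],  NULL, 16);
        unsigned long p_last   = strtoul(&line[11], NULL, 16);
        unsigned long p_usages = strtoul(&line[20], NULL, 16);
        const char *p_guid     = &line[29];

        printf("guid=%s first=%lu last=%lu usages=%lu\n",
               p_guid, p_first, p_last, p_usages);
        return 0;
    }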
+ registry.enabled = config_get_boolean("registry", "enabled", 0); + + // pathnames + registry.pathname = config_get("registry", "registry db directory", VARLIB_DIR "/registry"); + if(mkdir(registry.pathname, 0755) == -1 && errno != EEXIST) { + error("Cannot create directory '%s'. Registry disabled.", registry.pathname); + registry.enabled = 0; + return -1; + } + + // filenames + snprintfz(filename, FILENAME_MAX, "%s/netdata.public.unique.id", registry.pathname); + registry.machine_guid_filename = config_get("registry", "netdata unique id file", filename); + registry_get_this_machine_guid(); + + snprintfz(filename, FILENAME_MAX, "%s/registry.db", registry.pathname); + registry.db_filename = config_get("registry", "registry db file", filename); + + snprintfz(filename, FILENAME_MAX, "%s/registry-log.db", registry.pathname); + registry.log_filename = config_get("registry", "registry log file", filename); + + // configuration options + registry.save_registry_every_entries = config_get_number("registry", "registry save db every new entries", 1000000); + registry.persons_expiration = config_get_number("registry", "registry expire idle persons days", 365) * 86400; + registry.registry_domain = config_get("registry", "registry domain", ""); + registry.registry_to_announce = config_get("registry", "registry to announce", "https://registry.my-netdata.io"); + registry.hostname = config_get("registry", "registry hostname", config_get("global", "hostname", hostname)); + + registry.max_url_length = config_get_number("registry", "max URL length", 1024); + registry.max_name_length = config_get_number("registry", "max URL name length", 50); + + + // initialize entries counters + registry.persons_count = 0; + registry.machines_count = 0; + registry.usages_count = 0; + registry.urls_count = 0; + registry.persons_urls_count = 0; + registry.machines_urls_count = 0; + + // initialize memory counters + registry.persons_memory = 0; + registry.machines_memory = 0; + registry.urls_memory = 0; + registry.persons_urls_memory = 0; + registry.machines_urls_memory = 0; + + // initialize locks + pthread_mutex_init(®istry.persons_lock, NULL); + pthread_mutex_init(®istry.machines_lock, NULL); + pthread_mutex_init(®istry.urls_lock, NULL); + pthread_mutex_init(®istry.person_urls_lock, NULL); + pthread_mutex_init(®istry.machine_urls_lock, NULL); + + // create dictionaries + registry.persons = dictionary_create(DICTIONARY_FLAGS); + registry.machines = dictionary_create(DICTIONARY_FLAGS); + registry.urls = dictionary_create(DICTIONARY_FLAGS); + + // load the registry database + if(registry.enabled) { + registry_log_open_nolock(); + registry_load(); + registry_log_load(); + } + + return 0; +} + +void registry_free(void) { + if(!registry.enabled) return; + + // we need to destroy the dictionaries ourselves + // since the dictionaries use memory we allocated + + while(registry.persons->values_index.root) { + PERSON *p = ((NAME_VALUE *)registry.persons->values_index.root)->value; + + // fprintf(stderr, "\nPERSON: '%s', first: %u, last: %u, usages: %u\n", p->guid, p->first_t, p->last_t, p->usages); + + while(p->urls->values_index.root) { + PERSON_URL *pu = ((NAME_VALUE *)p->urls->values_index.root)->value; + + // fprintf(stderr, "\tURL: '%s', first: %u, last: %u, usages: %u, flags: 0x%02x\n", pu->url->url, pu->first_t, pu->last_t, pu->usages, pu->flags); + + debug(D_REGISTRY, "Registry: deleting url '%s' from person '%s'", pu->url->url, p->guid); + dictionary_del(p->urls, pu->url->url); + + debug(D_REGISTRY, "Registry: unlinking url 
'%s' from person", pu->url->url); + registry_url_unlink_nolock(pu->url); + + debug(D_REGISTRY, "Registry: freeing person url"); + free(pu); + } + + debug(D_REGISTRY, "Registry: deleting person '%s' from persons registry", p->guid); + dictionary_del(registry.persons, p->guid); + + debug(D_REGISTRY, "Registry: destroying URL dictionary of person '%s'", p->guid); + dictionary_destroy(p->urls); + + debug(D_REGISTRY, "Registry: freeing person '%s'", p->guid); + free(p); + } + + while(registry.machines->values_index.root) { + MACHINE *m = ((NAME_VALUE *)registry.machines->values_index.root)->value; + + // fprintf(stderr, "\nMACHINE: '%s', first: %u, last: %u, usages: %u\n", m->guid, m->first_t, m->last_t, m->usages); + + while(m->urls->values_index.root) { + MACHINE_URL *mu = ((NAME_VALUE *)m->urls->values_index.root)->value; + + // fprintf(stderr, "\tURL: '%s', first: %u, last: %u, usages: %u, flags: 0x%02x\n", mu->url->url, mu->first_t, mu->last_t, mu->usages, mu->flags); + + //debug(D_REGISTRY, "Registry: destroying persons dictionary from url '%s'", mu->url->url); + //dictionary_destroy(mu->persons); + + debug(D_REGISTRY, "Registry: deleting url '%s' from person '%s'", mu->url->url, m->guid); + dictionary_del(m->urls, mu->url->url); + + debug(D_REGISTRY, "Registry: unlinking url '%s' from machine", mu->url->url); + registry_url_unlink_nolock(mu->url); + + debug(D_REGISTRY, "Registry: freeing machine url"); + free(mu); + } + + debug(D_REGISTRY, "Registry: deleting machine '%s' from machines registry", m->guid); + dictionary_del(registry.machines, m->guid); + + debug(D_REGISTRY, "Registry: destroying URL dictionary of machine '%s'", m->guid); + dictionary_destroy(m->urls); + + debug(D_REGISTRY, "Registry: freeing machine '%s'", m->guid); + free(m); + } + + // and free the memory of remaining dictionary structures + + debug(D_REGISTRY, "Registry: destroying persons dictionary"); + dictionary_destroy(registry.persons); + + debug(D_REGISTRY, "Registry: destroying machines dictionary"); + dictionary_destroy(registry.machines); + + debug(D_REGISTRY, "Registry: destroying urls dictionary"); + dictionary_destroy(registry.urls); +} + +// ---------------------------------------------------------------------------- +// STATISTICS + +void registry_statistics(void) { + if(!registry.enabled) return; + + static RRDSET *sts = NULL, *stc = NULL, *stm = NULL; + + if(!sts) sts = rrdset_find("netdata.registry_sessions"); + if(!sts) { + sts = rrdset_create("netdata", "registry_sessions", NULL, "registry", NULL, "NetData Registry Sessions", "session", 131000, rrd_update_every, RRDSET_TYPE_LINE); + + rrddim_add(sts, "sessions", NULL, 1, 1, RRDDIM_ABSOLUTE); + } + else rrdset_next(sts); + + rrddim_set(sts, "sessions", registry.usages_count); + rrdset_done(sts); + + // ------------------------------------------------------------------------ + + if(!stc) stc = rrdset_find("netdata.registry_entries"); + if(!stc) { + stc = rrdset_create("netdata", "registry_entries", NULL, "registry", NULL, "NetData Registry Entries", "entries", 131100, rrd_update_every, RRDSET_TYPE_LINE); + + rrddim_add(stc, "persons", NULL, 1, 1, RRDDIM_ABSOLUTE); + rrddim_add(stc, "machines", NULL, 1, 1, RRDDIM_ABSOLUTE); + rrddim_add(stc, "urls", NULL, 1, 1, RRDDIM_ABSOLUTE); + rrddim_add(stc, "persons_urls", NULL, 1, 1, RRDDIM_ABSOLUTE); + rrddim_add(stc, "machines_urls", NULL, 1, 1, RRDDIM_ABSOLUTE); + } + else rrdset_next(stc); + + rrddim_set(stc, "persons", registry.persons_count); + rrddim_set(stc, "machines", registry.machines_count); + 
rrddim_set(stc, "urls", registry.urls_count); + rrddim_set(stc, "persons_urls", registry.persons_urls_count); + rrddim_set(stc, "machines_urls", registry.machines_urls_count); + rrdset_done(stc); + + // ------------------------------------------------------------------------ + + if(!stm) stm = rrdset_find("netdata.registry_mem"); + if(!stm) { + stm = rrdset_create("netdata", "registry_mem", NULL, "registry", NULL, "NetData Registry Memory", "KB", 131300, rrd_update_every, RRDSET_TYPE_STACKED); + + rrddim_add(stm, "persons", NULL, 1, 1024, RRDDIM_ABSOLUTE); + rrddim_add(stm, "machines", NULL, 1, 1024, RRDDIM_ABSOLUTE); + rrddim_add(stm, "urls", NULL, 1, 1024, RRDDIM_ABSOLUTE); + rrddim_add(stm, "persons_urls", NULL, 1, 1024, RRDDIM_ABSOLUTE); + rrddim_add(stm, "machines_urls", NULL, 1, 1024, RRDDIM_ABSOLUTE); + } + else rrdset_next(stm); + + rrddim_set(stm, "persons", registry.persons_memory + registry.persons_count * sizeof(NAME_VALUE) + sizeof(DICTIONARY)); + rrddim_set(stm, "machines", registry.machines_memory + registry.machines_count * sizeof(NAME_VALUE) + sizeof(DICTIONARY)); + rrddim_set(stm, "urls", registry.urls_memory + registry.urls_count * sizeof(NAME_VALUE) + sizeof(DICTIONARY)); + rrddim_set(stm, "persons_urls", registry.persons_urls_memory + registry.persons_count * sizeof(DICTIONARY) + registry.persons_urls_count * sizeof(NAME_VALUE)); + rrddim_set(stm, "machines_urls", registry.machines_urls_memory + registry.machines_count * sizeof(DICTIONARY) + registry.machines_urls_count * sizeof(NAME_VALUE)); + rrdset_done(stm); +} diff --git a/src/registry.h b/src/registry.h new file mode 100644 index 000000000..d95383b5d --- /dev/null +++ b/src/registry.h @@ -0,0 +1,23 @@ +#include "web_client.h" + +#ifndef NETDATA_REGISTRY_H +#define NETDATA_REGISTRY_H 1 + +#define NETDATA_REGISTRY_COOKIE_NAME "netdata_registry_id" + +extern int registry_request_access_json(struct web_client *w, char *person_guid, char *machine_guid, char *url, char *name, time_t when); +extern int registry_request_delete_json(struct web_client *w, char *person_guid, char *machine_guid, char *url, char *delete_url, time_t when); +extern int registry_request_search_json(struct web_client *w, char *person_guid, char *machine_guid, char *url, char *request_machine, time_t when); +extern int registry_request_switch_json(struct web_client *w, char *person_guid, char *machine_guid, char *url, char *new_person_guid, time_t when); +extern int registry_request_hello_json(struct web_client *w); + +extern int registry_init(void); +extern void registry_free(void); +extern int registry_save(void); + +extern char *registry_get_this_machine_guid(void); + +extern void registry_statistics(void); + + +#endif /* NETDATA_REGISTRY_H */ @@ -42,35 +42,26 @@ int rrd_memory_mode = RRD_MEMORY_MODE_SAVE; // ---------------------------------------------------------------------------- // RRDSET index -static int rrdset_iterator(avl *a) { if(a) {}; return 0; } - static int rrdset_compare(void* a, void* b) { if(((RRDSET *)a)->hash < ((RRDSET *)b)->hash) return -1; else if(((RRDSET *)a)->hash > ((RRDSET *)b)->hash) return 1; else return strcmp(((RRDSET *)a)->id, ((RRDSET *)b)->id); } -avl_tree rrdset_root_index = { - NULL, - rrdset_compare, -#ifdef AVL_LOCK_WITH_MUTEX - PTHREAD_MUTEX_INITIALIZER -#else - PTHREAD_RWLOCK_INITIALIZER -#endif +avl_tree_lock rrdset_root_index = { + { NULL, rrdset_compare }, + AVL_LOCK_INITIALIZER }; -#define rrdset_index_add(st) avl_insert(&rrdset_root_index, (avl *)(st)) -#define rrdset_index_del(st) 
avl_remove(&rrdset_root_index, (avl *)(st)) +#define rrdset_index_add(st) avl_insert_lock(&rrdset_root_index, (avl *)(st)) +#define rrdset_index_del(st) avl_remove_lock(&rrdset_root_index, (avl *)(st)) static RRDSET *rrdset_index_find(const char *id, uint32_t hash) { - RRDSET *result = NULL, tmp; - strncpy(tmp.id, id, RRD_ID_LENGTH_MAX); - tmp.id[RRD_ID_LENGTH_MAX] = '\0'; + RRDSET tmp; + strncpyz(tmp.id, id, RRD_ID_LENGTH_MAX); tmp.hash = (hash)?hash:simple_hash(tmp.id); - avl_search(&(rrdset_root_index), (avl *)&tmp, rrdset_iterator, (avl **)&result); - return result; + return (RRDSET *)avl_search_lock(&(rrdset_root_index), (avl *) &tmp); } // ---------------------------------------------------------------------------- @@ -78,8 +69,6 @@ static RRDSET *rrdset_index_find(const char *id, uint32_t hash) { #define rrdset_from_avlname(avlname_ptr) ((RRDSET *)((avlname_ptr) - offsetof(RRDSET, avlname))) -static int rrdset_iterator_name(avl *a) { if(a) {}; return 0; } - static int rrdset_compare_name(void* a, void* b) { RRDSET *A = rrdset_from_avlname(a); RRDSET *B = rrdset_from_avlname(b); @@ -91,22 +80,17 @@ static int rrdset_compare_name(void* a, void* b) { else return strcmp(A->name, B->name); } -avl_tree rrdset_root_index_name = { - NULL, - rrdset_compare_name, -#ifdef AVL_LOCK_WITH_MUTEX - PTHREAD_MUTEX_INITIALIZER -#else - PTHREAD_RWLOCK_INITIALIZER -#endif +avl_tree_lock rrdset_root_index_name = { + { NULL, rrdset_compare_name }, + AVL_LOCK_INITIALIZER }; int rrdset_index_add_name(RRDSET *st) { // fprintf(stderr, "ADDING: %s (name: %s)\n", st->id, st->name); - return avl_insert(&rrdset_root_index_name, (avl *)(&st->avlname)); + return avl_insert_lock(&rrdset_root_index_name, (avl *) (&st->avlname)); } -#define rrdset_index_del_name(st) avl_remove(&rrdset_root_index_name, (avl *)(&st->avlname)) +#define rrdset_index_del_name(st) avl_remove_lock(&rrdset_root_index_name, (avl *)(&st->avlname)) static RRDSET *rrdset_index_find_name(const char *name, uint32_t hash) { void *result = NULL; @@ -115,7 +99,7 @@ static RRDSET *rrdset_index_find_name(const char *name, uint32_t hash) { tmp.hash_name = (hash)?hash:simple_hash(tmp.name); // fprintf(stderr, "SEARCHING: %s\n", name); - avl_search(&(rrdset_root_index_name), (avl *)(&(tmp.avlname)), rrdset_iterator_name, (avl **)&result); + result = avl_search_lock(&(rrdset_root_index_name), (avl *) (&(tmp.avlname))); if(result) { RRDSET *st = rrdset_from_avlname(result); if(strcmp(st->magic, RRDSET_MAGIC)) @@ -132,25 +116,21 @@ static RRDSET *rrdset_index_find_name(const char *name, uint32_t hash) { // ---------------------------------------------------------------------------- // RRDDIM index -static int rrddim_iterator(avl *a) { if(a) {}; return 0; } - static int rrddim_compare(void* a, void* b) { if(((RRDDIM *)a)->hash < ((RRDDIM *)b)->hash) return -1; else if(((RRDDIM *)a)->hash > ((RRDDIM *)b)->hash) return 1; else return strcmp(((RRDDIM *)a)->id, ((RRDDIM *)b)->id); } -#define rrddim_index_add(st, rd) avl_insert(&((st)->dimensions_index), (avl *)(rd)) -#define rrddim_index_del(st,rd ) avl_remove(&((st)->dimensions_index), (avl *)(rd)) +#define rrddim_index_add(st, rd) avl_insert_lock(&((st)->dimensions_index), (avl *)(rd)) +#define rrddim_index_del(st,rd ) avl_remove_lock(&((st)->dimensions_index), (avl *)(rd)) static RRDDIM *rrddim_index_find(RRDSET *st, const char *id, uint32_t hash) { - RRDDIM *result = NULL, tmp; - strncpy(tmp.id, id, RRD_ID_LENGTH_MAX); - tmp.id[RRD_ID_LENGTH_MAX] = '\0'; + RRDDIM tmp; + strncpyz(tmp.id, id, RRD_ID_LENGTH_MAX); 
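The comparators above (rrdset_compare, rrddim_compare) order index entries by a pre-computed 32-bit hash first and only fall back to strcmp() when the hashes collide, so most probes of the AVL index cost an integer comparison instead of a string comparison. A standalone sketch of the same ordering, usable with qsort()/bsearch(); the djb2 hash below is only a stand-in for netdata's simple_hash(), and the chart ids are illustrative:

    /* Sketch: compare by hash first, by id only on collision, mirroring
     * the rrdset/rrddim index comparators above. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct entry {
        uint32_t hash;
        char id[64];
    } ENTRY;

    static uint32_t djb2(const char *s) {            /* stand-in hash */
        uint32_t h = 5381;
        while (*s) h = ((h << 5) + h) + (uint32_t)*s++;   /* h * 33 + c */
        return h;
    }

    static int entry_compare(const void *a, const void *b) {
        const ENTRY *A = a, *B = b;
        if (A->hash < B->hash) return -1;
        if (A->hash > B->hash) return  1;
        return strcmp(A->id, B->id);     /* collisions are rare; strcmp only then */
    }

    int main(void) {
        const char *ids[3] = { "system.cpu", "system.ram", "disk.sda" };  /* illustrative */
        ENTRY e[3];
        for (int i = 0; i < 3; i++) {
            snprintf(e[i].id, sizeof(e[i].id), "%s", ids[i]);
            e[i].hash = djb2(e[i].id);
        }
        qsort(e, 3, sizeof(ENTRY), entry_compare);

        /* look an entry up the way the index does: hash the key, then search */
        ENTRY key;
        snprintf(key.id, sizeof(key.id), "%s", "system.ram");
        key.hash = djb2(key.id);
        ENTRY *found = bsearch(&key, e, 3, sizeof(ENTRY), entry_compare);
        printf("%s -> %s\n", key.id, found ? "found" : "not found");
        return 0;
    }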
tmp.hash = (hash)?hash:simple_hash(tmp.id); - avl_search(&(st->dimensions_index), (avl *)&tmp, rrddim_iterator, (avl **)&result); - return result; + return (RRDDIM *)avl_search_lock(&(st->dimensions_index), (avl *) &tmp); } // ---------------------------------------------------------------------------- @@ -222,8 +202,8 @@ int rrd_memory_mode_id(const char *name) int rrddim_algorithm_id(const char *name) { - if(strcmp(name, RRDDIM_ABSOLUTE_NAME) == 0) return RRDDIM_ABSOLUTE; if(strcmp(name, RRDDIM_INCREMENTAL_NAME) == 0) return RRDDIM_INCREMENTAL; + if(strcmp(name, RRDDIM_ABSOLUTE_NAME) == 0) return RRDDIM_ABSOLUTE; if(strcmp(name, RRDDIM_PCENT_OVER_ROW_TOTAL_NAME) == 0) return RRDDIM_PCENT_OVER_ROW_TOTAL; if(strcmp(name, RRDDIM_PCENT_OVER_DIFF_TOTAL_NAME) == 0) return RRDDIM_PCENT_OVER_DIFF_TOTAL; return RRDDIM_ABSOLUTE; @@ -277,7 +257,7 @@ void rrdset_set_name(RRDSET *st, const char *name) char b[CONFIG_MAX_VALUE + 1]; char n[RRD_ID_LENGTH_MAX + 1]; - snprintf(n, RRD_ID_LENGTH_MAX, "%s.%s", st->type, name); + snprintfz(n, RRD_ID_LENGTH_MAX, "%s.%s", st->type, name); rrdset_strncpy_name(b, n, CONFIG_MAX_VALUE); st->name = config_get(st->id, "name", b); st->hash_name = simple_hash(st->name); @@ -299,7 +279,7 @@ char *rrdset_cache_dir(const char *id) char n[FILENAME_MAX + 1]; rrdset_strncpy_name(b, id, FILENAME_MAX); - snprintf(n, FILENAME_MAX, "%s/%s", cache_dir, b); + snprintfz(n, FILENAME_MAX, "%s/%s", cache_dir, b); ret = config_get(id, "cache directory", n); if(rrd_memory_mode == RRD_MEMORY_MODE_MAP || rrd_memory_mode == RRD_MEMORY_MODE_SAVE) { @@ -351,7 +331,7 @@ RRDSET *rrdset_create(const char *type, const char *id, const char *name, const char fullfilename[FILENAME_MAX + 1]; RRDSET *st = NULL; - snprintf(fullid, RRD_ID_LENGTH_MAX, "%s.%s", type, id); + snprintfz(fullid, RRD_ID_LENGTH_MAX, "%s.%s", type, id); st = rrdset_find(fullid); if(st) { @@ -371,7 +351,7 @@ RRDSET *rrdset_create(const char *type, const char *id, const char *name, const debug(D_RRD_CALLS, "Creating RRD_STATS for '%s.%s'.", type, id); - snprintf(fullfilename, FILENAME_MAX, "%s/main.db", cache_dir); + snprintfz(fullfilename, FILENAME_MAX, "%s/main.db", cache_dir); if(rrd_memory_mode != RRD_MEMORY_MODE_RAM) st = (RRDSET *)mymmap(fullfilename, size, ((rrd_memory_mode == RRD_MEMORY_MODE_MAP)?MAP_SHARED:MAP_PRIVATE), 0); if(st) { if(strcmp(st->magic, RRDSET_MAGIC) != 0) { @@ -453,7 +433,7 @@ RRDSET *rrdset_create(const char *type, const char *id, const char *name, const st->gap_when_lost_iterations_above = (int) ( config_get_number(st->id, "gap when lost iterations above", RRD_DEFAULT_GAP_INTERPOLATIONS) + 2); - avl_init(&st->dimensions_index, rrddim_compare); + avl_init_lock(&st->dimensions_index, rrddim_compare); pthread_rwlock_init(&st->rwlock, NULL); pthread_rwlock_wrlock(&rrdset_root_rwlock); @@ -463,7 +443,7 @@ RRDSET *rrdset_create(const char *type, const char *id, const char *name, const { char varvalue[CONFIG_MAX_VALUE + 1]; - snprintf(varvalue, CONFIG_MAX_VALUE, "%s (%s)", title?title:"", st->name); + snprintfz(varvalue, CONFIG_MAX_VALUE, "%s (%s)", title?title:"", st->name); st->title = config_get(st->id, "title", varvalue); } @@ -489,7 +469,7 @@ RRDDIM *rrddim_add(RRDSET *st, const char *id, const char *name, long multiplier debug(D_RRD_CALLS, "Adding dimension '%s/%s'.", st->id, id); rrdset_strncpy_name(filename, id, FILENAME_MAX); - snprintf(fullfilename, FILENAME_MAX, "%s/%s.db", st->cache_dir, filename); + snprintfz(fullfilename, FILENAME_MAX, "%s/%s.db", st->cache_dir, filename); if(rrd_memory_mode != 
RRD_MEMORY_MODE_RAM) rd = (RRDDIM *)mymmap(fullfilename, size, ((rrd_memory_mode == RRD_MEMORY_MODE_MAP)?MAP_SHARED:MAP_PRIVATE), 1); if(rd) { struct timeval now; @@ -561,19 +541,19 @@ RRDDIM *rrddim_add(RRDSET *st, const char *id, const char *name, long multiplier strcpy(rd->magic, RRDDIMENSION_MAGIC); strcpy(rd->cache_filename, fullfilename); - strncpy(rd->id, id, RRD_ID_LENGTH_MAX); + strncpyz(rd->id, id, RRD_ID_LENGTH_MAX); rd->hash = simple_hash(rd->id); - snprintf(varname, CONFIG_MAX_NAME, "dim %s name", rd->id); + snprintfz(varname, CONFIG_MAX_NAME, "dim %s name", rd->id); rd->name = config_get(st->id, varname, (name && *name)?name:rd->id); - snprintf(varname, CONFIG_MAX_NAME, "dim %s algorithm", rd->id); + snprintfz(varname, CONFIG_MAX_NAME, "dim %s algorithm", rd->id); rd->algorithm = rrddim_algorithm_id(config_get(st->id, varname, rrddim_algorithm_name(algorithm))); - snprintf(varname, CONFIG_MAX_NAME, "dim %s multiplier", rd->id); + snprintfz(varname, CONFIG_MAX_NAME, "dim %s multiplier", rd->id); rd->multiplier = config_get_number(st->id, varname, multiplier); - snprintf(varname, CONFIG_MAX_NAME, "dim %s divisor", rd->id); + snprintfz(varname, CONFIG_MAX_NAME, "dim %s divisor", rd->id); rd->divisor = config_get_number(st->id, varname, divisor); if(!rd->divisor) rd->divisor = 1; @@ -604,7 +584,7 @@ void rrddim_set_name(RRDSET *st, RRDDIM *rd, const char *name) debug(D_RRD_CALLS, "rrddim_set_name() %s.%s", st->name, rd->name); char varname[CONFIG_MAX_NAME + 1]; - snprintf(varname, CONFIG_MAX_NAME, "dim %s name", rd->id); + snprintfz(varname, CONFIG_MAX_NAME, "dim %s name", rd->id); config_set_default(st->id, varname, name); } @@ -724,12 +704,10 @@ RRDSET *rrdset_find_bytype(const char *type, const char *id) char buf[RRD_ID_LENGTH_MAX + 1]; - strncpy(buf, type, RRD_ID_LENGTH_MAX - 1); - buf[RRD_ID_LENGTH_MAX - 1] = '\0'; + strncpyz(buf, type, RRD_ID_LENGTH_MAX - 1); strcat(buf, "."); int len = (int) strlen(buf); - strncpy(&buf[len], id, (size_t) (RRD_ID_LENGTH_MAX - len)); - buf[RRD_ID_LENGTH_MAX] = '\0'; + strncpyz(&buf[len], id, (size_t) (RRD_ID_LENGTH_MAX - len)); return(rrdset_find(buf)); } @@ -846,7 +824,8 @@ unsigned long long rrdset_done(RRDSET *st) pthread_rwlock_rdlock(&st->rwlock); // enable the chart, if it was disabled - st->enabled = 1; + if(unlikely(rrd_delete_unupdated_dimensions) && !st->enabled) + st->enabled = 1; // check if the chart has a long time to be updated if(unlikely(st->usec_since_last_update > st->entries * st->update_every * 1000000ULL)) { @@ -246,7 +246,7 @@ struct rrdset { // ------------------------------------------------------------------------ // the dimensions - avl_tree dimensions_index; // the root of the dimensions index + avl_tree_lock dimensions_index; // the root of the dimensions index RRDDIM *dimensions; // the actual data for every dimension }; typedef struct rrdset RRDSET; diff --git a/src/rrd2json.c b/src/rrd2json.c index 88a750443..e0bd06670 100644 --- a/src/rrd2json.c +++ b/src/rrd2json.c @@ -5,6 +5,7 @@ #include <stdlib.h> #include <string.h> #include <stdint.h> +#include <math.h> #include "log.h" #include "common.h" @@ -681,18 +682,18 @@ static void rrdr2json(RRDR *r, BUFFER *wb, uint32_t options, int datatable) sq[0] = '"'; } row_annotations = 1; - snprintf(pre_date, 100, " {%sc%s:[{%sv%s:%s", kq, kq, kq, kq, sq); - snprintf(post_date, 100, "%s}", sq); - snprintf(pre_label, 100, ",\n {%sid%s:%s%s,%slabel%s:%s", kq, kq, sq, sq, kq, kq, sq); - snprintf(post_label, 100, "%s,%spattern%s:%s%s,%stype%s:%snumber%s}", sq, kq, kq, sq, 
sq, kq, kq, sq, sq); - snprintf(pre_value, 100, ",{%sv%s:", kq, kq); - snprintf(post_value, 100, "}"); - snprintf(post_line, 100, "]}"); - snprintf(data_begin, 100, "\n ],\n %srows%s:\n [\n", kq, kq); - snprintf(finish, 100, "\n ]\n}"); - - snprintf(overflow_annotation, 200, ",{%sv%s:%sRESET OR OVERFLOW%s},{%sv%s:%sThe counters have been wrapped.%s}", kq, kq, sq, sq, kq, kq, sq, sq); - snprintf(normal_annotation, 200, ",{%sv%s:null},{%sv%s:null}", kq, kq, kq, kq); + snprintfz(pre_date, 100, " {%sc%s:[{%sv%s:%s", kq, kq, kq, kq, sq); + snprintfz(post_date, 100, "%s}", sq); + snprintfz(pre_label, 100, ",\n {%sid%s:%s%s,%slabel%s:%s", kq, kq, sq, sq, kq, kq, sq); + snprintfz(post_label, 100, "%s,%spattern%s:%s%s,%stype%s:%snumber%s}", sq, kq, kq, sq, sq, kq, kq, sq, sq); + snprintfz(pre_value, 100, ",{%sv%s:", kq, kq); + strcpy(post_value, "}"); + strcpy(post_line, "]}"); + snprintfz(data_begin, 100, "\n ],\n %srows%s:\n [\n", kq, kq); + strcpy(finish, "\n ]\n}"); + + snprintfz(overflow_annotation, 200, ",{%sv%s:%sRESET OR OVERFLOW%s},{%sv%s:%sThe counters have been wrapped.%s}", kq, kq, sq, sq, kq, kq, sq, sq); + snprintfz(normal_annotation, 200, ",{%sv%s:null},{%sv%s:null}", kq, kq, kq, kq); buffer_sprintf(wb, "{\n %scols%s:\n [\n", kq, kq, kq, kq); buffer_sprintf(wb, " {%sid%s:%s%s,%slabel%s:%stime%s,%spattern%s:%s%s,%stype%s:%sdatetime%s},\n", kq, kq, sq, sq, kq, kq, sq, sq, kq, kq, sq, sq, kq, kq, sq, sq); @@ -716,18 +717,18 @@ static void rrdr2json(RRDR *r, BUFFER *wb, uint32_t options, int datatable) dates_with_new = 1; } if( options & RRDR_OPTION_OBJECTSROWS ) - snprintf(pre_date, 100, " { "); + strcpy(pre_date, " { "); else - snprintf(pre_date, 100, " [ "); - snprintf(pre_label, 100, ", \""); - snprintf(post_label, 100, "\""); - snprintf(pre_value, 100, ", "); + strcpy(pre_date, " [ "); + strcpy(pre_label, ", \""); + strcpy(post_label, "\""); + strcpy(pre_value, ", "); if( options & RRDR_OPTION_OBJECTSROWS ) - snprintf(post_line, 100, "}"); + strcpy(post_line, "}"); else - snprintf(post_line, 100, "]"); - snprintf(data_begin, 100, "],\n %sdata%s:\n [\n", kq, kq); - snprintf(finish, 100, "\n ]\n}"); + strcpy(post_line, "]"); + snprintfz(data_begin, 100, "],\n %sdata%s:\n [\n", kq, kq); + strcpy(finish, "\n ]\n}"); buffer_sprintf(wb, "{\n %slabels%s: [", kq, kq); buffer_sprintf(wb, "%stime%s", sq, sq); @@ -1450,7 +1451,7 @@ RRDR *rrd2rrdr(RRDSET *st, long points, long long after, long long before, int g switch(group_method) { case GROUP_MAX: - if(unlikely(abs(value) > abs(group_values[c]))) + if(unlikely(fabsl(value) > fabsl(group_values[c]))) group_values[c] = value; break; @@ -1742,12 +1743,12 @@ time_t rrd_stats_json(int type, RRDSET *st, BUFFER *wb, long points, long group, // ------------------------------------------------------------------------- // prepare various strings, to speed up the loop - char overflow_annotation[201]; snprintf(overflow_annotation, 200, ",{%sv%s:%sRESET OR OVERFLOW%s},{%sv%s:%sThe counters have been wrapped.%s}", kq, kq, sq, sq, kq, kq, sq, sq); - char normal_annotation[201]; snprintf(normal_annotation, 200, ",{%sv%s:null},{%sv%s:null}", kq, kq, kq, kq); - char pre_date[51]; snprintf(pre_date, 50, " {%sc%s:[{%sv%s:%s", kq, kq, kq, kq, sq); - char post_date[21]; snprintf(post_date, 20, "%s}", sq); - char pre_value[21]; snprintf(pre_value, 20, ",{%sv%s:", kq, kq); - char post_value[21]; snprintf(post_value, 20, "}"); + char overflow_annotation[201]; snprintfz(overflow_annotation, 200, ",{%sv%s:%sRESET OR OVERFLOW%s},{%sv%s:%sThe counters have been 
wrapped.%s}", kq, kq, sq, sq, kq, kq, sq, sq); + char normal_annotation[201]; snprintfz(normal_annotation, 200, ",{%sv%s:null},{%sv%s:null}", kq, kq, kq, kq); + char pre_date[51]; snprintfz(pre_date, 50, " {%sc%s:[{%sv%s:%s", kq, kq, kq, kq, sq); + char post_date[21]; snprintfz(post_date, 20, "%s}", sq); + char pre_value[21]; snprintfz(pre_value, 20, ",{%sv%s:", kq, kq); + char post_value[21]; strcpy(post_value, "}"); // ------------------------------------------------------------------------- diff --git a/src/storage_number.c b/src/storage_number.c index 225cf0348..b5c5f4067 100644 --- a/src/storage_number.c +++ b/src/storage_number.c @@ -129,7 +129,7 @@ static char *print_calculated_number_lu_r(char *str, unsigned long uvalue) { char *wstr = str; // print each digit - do *wstr++ = (char)(48 + (uvalue % 10)); while(uvalue /= 10); + do *wstr++ = (char)('0' + (uvalue % 10)); while(uvalue /= 10); return wstr; } @@ -137,7 +137,7 @@ static char *print_calculated_number_llu_r(char *str, unsigned long long uvalue) char *wstr = str; // print each digit - do *wstr++ = (char)(48 + (uvalue % 10)); while((uvalue /= 10) && uvalue > (unsigned long long)0xffffffff); + do *wstr++ = (char)('0' + (uvalue % 10)); while((uvalue /= 10) && uvalue > (unsigned long long)0xffffffff); if(uvalue) return print_calculated_number_lu_r(wstr, uvalue); return wstr; } @@ -164,7 +164,7 @@ int print_calculated_number(char *str, calculated_number value) else wstr = print_calculated_number_lu_r(str, uvalue); #else - do *wstr++ = (char)(48 + (uvalue % 10)); while(uvalue /= 10); + do *wstr++ = (char)('0' + (uvalue % 10)); while(uvalue /= 10); #endif // make sure we have 6 bytes at least diff --git a/src/sys_fs_cgroup.c b/src/sys_fs_cgroup.c new file mode 100644 index 000000000..9f3d3f0fd --- /dev/null +++ b/src/sys_fs_cgroup.c @@ -0,0 +1,1310 @@ +#ifdef HAVE_CONFIG_H +#include <config.h> +#endif + +#include <stdio.h> +#include <stdlib.h> +#include <inttypes.h> +#include <sys/types.h> +#include <dirent.h> +#include <string.h> + +#include "common.h" +#include "appconfig.h" +#include "procfile.h" +#include "log.h" +#include "rrd.h" +#include "main.h" +#include "popen.h" +#include "proc_self_mountinfo.h" + +// ---------------------------------------------------------------------------- +// cgroup globals + +static int cgroup_enable_cpuacct_stat = CONFIG_ONDEMAND_ONDEMAND; +static int cgroup_enable_cpuacct_usage = CONFIG_ONDEMAND_ONDEMAND; +static int cgroup_enable_memory = CONFIG_ONDEMAND_ONDEMAND; +static int cgroup_enable_blkio = CONFIG_ONDEMAND_ONDEMAND; +static int cgroup_enable_new_cgroups_detected_at_runtime = 1; +static int cgroup_check_for_new_every = 10; +static char *cgroup_cpuacct_base = NULL; +static char *cgroup_blkio_base = NULL; +static char *cgroup_memory_base = NULL; + +static int cgroup_root_count = 0; +static int cgroup_root_max = 500; +static int cgroup_max_depth = 0; + +void read_cgroup_plugin_configuration() { + cgroup_check_for_new_every = config_get_number("plugin:cgroups", "check for new cgroups every", cgroup_check_for_new_every); + + cgroup_enable_cpuacct_stat = config_get_boolean_ondemand("plugin:cgroups", "enable cpuacct stat", cgroup_enable_cpuacct_stat); + cgroup_enable_cpuacct_usage = config_get_boolean_ondemand("plugin:cgroups", "enable cpuacct usage", cgroup_enable_cpuacct_usage); + cgroup_enable_memory = config_get_boolean_ondemand("plugin:cgroups", "enable memory", cgroup_enable_memory); + cgroup_enable_blkio = config_get_boolean_ondemand("plugin:cgroups", "enable blkio", cgroup_enable_blkio); + 
+ char filename[FILENAME_MAX + 1], *s; + struct mountinfo *mi, *root = mountinfo_read(); + + mi = mountinfo_find_by_filesystem_mount_source(root, "cgroup", "cpuacct"); + if(!mi) mi = mountinfo_find_by_filesystem_super_option(root, "cgroup", "cpuacct"); + if(!mi) s = "/sys/fs/cgroup/cpuacct"; + else s = mi->mount_point; + snprintfz(filename, FILENAME_MAX, "%s%s", global_host_prefix, s); + cgroup_cpuacct_base = config_get("plugin:cgroups", "path to /sys/fs/cgroup/cpuacct", filename); + + mi = mountinfo_find_by_filesystem_mount_source(root, "cgroup", "blkio"); + if(!mi) mi = mountinfo_find_by_filesystem_super_option(root, "cgroup", "blkio"); + if(!mi) s = "/sys/fs/cgroup/blkio"; + else s = mi->mount_point; + snprintfz(filename, FILENAME_MAX, "%s%s", global_host_prefix, s); + cgroup_blkio_base = config_get("plugin:cgroups", "path to /sys/fs/cgroup/blkio", filename); + + mi = mountinfo_find_by_filesystem_mount_source(root, "cgroup", "memory"); + if(!mi) mi = mountinfo_find_by_filesystem_super_option(root, "cgroup", "memory"); + if(!mi) s = "/sys/fs/cgroup/memory"; + else s = mi->mount_point; + snprintfz(filename, FILENAME_MAX, "%s%s", global_host_prefix, s); + cgroup_memory_base = config_get("plugin:cgroups", "path to /sys/fs/cgroup/memory", filename); + + cgroup_root_max = config_get_number("plugin:cgroups", "max cgroups to allow", cgroup_root_max); + cgroup_max_depth = config_get_number("plugin:cgroups", "max cgroups depth to monitor", cgroup_max_depth); + + cgroup_enable_new_cgroups_detected_at_runtime = config_get_boolean("plugin:cgroups", "enable new cgroups detected at run time", cgroup_enable_new_cgroups_detected_at_runtime); + + mountinfo_free(root); +} + +// ---------------------------------------------------------------------------- +// cgroup objects + +struct blkio { + int updated; + + char *filename; + + unsigned long long Read; + unsigned long long Write; +/* + unsigned long long Sync; + unsigned long long Async; + unsigned long long Total; +*/ +}; + +// https://www.kernel.org/doc/Documentation/cgroup-v1/memory.txt +struct memory { + int updated; + + char *filename; + + int has_dirty_swap; + + unsigned long long cache; + unsigned long long rss; + unsigned long long rss_huge; + unsigned long long mapped_file; + unsigned long long writeback; + unsigned long long dirty; + unsigned long long swap; + unsigned long long pgpgin; + unsigned long long pgpgout; + unsigned long long pgfault; + unsigned long long pgmajfault; +/* + unsigned long long inactive_anon; + unsigned long long active_anon; + unsigned long long inactive_file; + unsigned long long active_file; + unsigned long long unevictable; + unsigned long long hierarchical_memory_limit; + unsigned long long total_cache; + unsigned long long total_rss; + unsigned long long total_rss_huge; + unsigned long long total_mapped_file; + unsigned long long total_writeback; + unsigned long long total_dirty; + unsigned long long total_swap; + unsigned long long total_pgpgin; + unsigned long long total_pgpgout; + unsigned long long total_pgfault; + unsigned long long total_pgmajfault; + unsigned long long total_inactive_anon; + unsigned long long total_active_anon; + unsigned long long total_inactive_file; + unsigned long long total_active_file; + unsigned long long total_unevictable; +*/ +}; + +// https://www.kernel.org/doc/Documentation/cgroup-v1/cpuacct.txt +struct cpuacct_stat { + int updated; + + char *filename; + + unsigned long long user; + unsigned long long system; +}; + +// 
https://www.kernel.org/doc/Documentation/cgroup-v1/cpuacct.txt +struct cpuacct_usage { + int updated; + + char *filename; + + unsigned int cpus; + unsigned long long *cpu_percpu; +}; + +struct cgroup { + int available; // found in the filesystem + int enabled; // enabled in the config + + char *id; + uint32_t hash; + + char *chart_id; + char *chart_title; + + struct cpuacct_stat cpuacct_stat; + struct cpuacct_usage cpuacct_usage; + + struct memory memory; + + struct blkio io_service_bytes; // bytes + struct blkio io_serviced; // operations + + struct blkio throttle_io_service_bytes; // bytes + struct blkio throttle_io_serviced; // operations + + struct blkio io_merged; // operations + struct blkio io_queued; // operations + + struct cgroup *next; + +} *cgroup_root = NULL; + +// ---------------------------------------------------------------------------- +// read values from /sys + +void cgroup_read_cpuacct_stat(struct cpuacct_stat *cp) { + static procfile *ff = NULL; + + static uint32_t user_hash = 0; + static uint32_t system_hash = 0; + + if(unlikely(user_hash == 0)) { + user_hash = simple_hash("user"); + system_hash = simple_hash("system"); + } + + cp->updated = 0; + if(cp->filename) { + ff = procfile_reopen(ff, cp->filename, NULL, PROCFILE_FLAG_DEFAULT); + if(!ff) return; + + ff = procfile_readall(ff); + if(!ff) return; + + unsigned long i, lines = procfile_lines(ff); + + if(lines < 1) { + error("File '%s' should have 1+ lines.", cp->filename); + return; + } + + for(i = 0; i < lines ; i++) { + char *s = procfile_lineword(ff, i, 0); + uint32_t hash = simple_hash(s); + + if(hash == user_hash && !strcmp(s, "user")) + cp->user = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == system_hash && !strcmp(s, "system")) + cp->system = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + } + + cp->updated = 1; + + // fprintf(stderr, "READ '%s': user: %llu, system: %llu\n", cp->filename, cp->user, cp->system); + } +} + +void cgroup_read_cpuacct_usage(struct cpuacct_usage *ca) { + static procfile *ff = NULL; + + ca->updated = 0; + if(ca->filename) { + ff = procfile_reopen(ff, ca->filename, NULL, PROCFILE_FLAG_DEFAULT); + if(!ff) return; + + ff = procfile_readall(ff); + if(!ff) return; + + if(procfile_lines(ff) < 1) { + error("File '%s' should have 1+ lines but has %d.", ca->filename, procfile_lines(ff)); + return; + } + + unsigned long i = procfile_linewords(ff, 0); + if(i <= 0) return; + + // we may have 1 more CPU reported + while(i > 0) { + char *s = procfile_lineword(ff, 0, i - 1); + if(!*s) i--; + else break; + } + + if(i != ca->cpus) { + free(ca->cpu_percpu); + + ca->cpu_percpu = malloc(sizeof(unsigned long long) * i); + if(!ca->cpu_percpu) + fatal("Cannot allocate memory (%z bytes)", sizeof(unsigned long long) * i); + + ca->cpus = i; + } + + for(i = 0; i < ca->cpus ;i++) { + ca->cpu_percpu[i] = strtoull(procfile_lineword(ff, 0, i), NULL, 10); + // fprintf(stderr, "READ '%s': cpu%d/%d: %llu ('%s')\n", ca->filename, i, ca->cpus, ca->cpu_percpu[i], procfile_lineword(ff, 0, i)); + } + + ca->updated = 1; + } +} + +void cgroup_read_blkio(struct blkio *io) { + static procfile *ff = NULL; + + static uint32_t Read_hash = 0; + static uint32_t Write_hash = 0; +/* + static uint32_t Sync_hash = 0; + static uint32_t Async_hash = 0; + static uint32_t Total_hash = 0; +*/ + + if(unlikely(Read_hash == 0)) { + Read_hash = simple_hash("Read"); + Write_hash = simple_hash("Write"); +/* + Sync_hash = simple_hash("Sync"); + Async_hash = simple_hash("Async"); + Total_hash = simple_hash("Total"); +*/ 
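cgroup_read_cpuacct_stat() above reads the cgroup v1 cpuacct.stat file, which per the kernel's cpuacct documentation contains two "key value" lines (user and system, counted in USER_HZ ticks). A minimal standalone reader for the same file, without netdata's procfile and hash machinery; the path below is only the conventional default used above and may differ on your system:

    /* Sketch: read a cgroup v1 cpuacct.stat file ("user N\nsystem N\n").
     * The path is illustrative; values are in USER_HZ ticks. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        const char *path = "/sys/fs/cgroup/cpuacct/cpuacct.stat";  /* illustrative */
        FILE *fp = fopen(path, "r");
        if (!fp) { perror(path); return 1; }

        unsigned long long user = 0, system_ticks = 0, value;
        char key[32];

        while (fscanf(fp, "%31s %llu", key, &value) == 2) {
            if (!strcmp(key, "user"))        user = value;
            else if (!strcmp(key, "system")) system_ticks = value;
        }
        fclose(fp);

        long hz = sysconf(_SC_CLK_TCK);      /* ticks per second (USER_HZ) */
        printf("user: %llu ticks, system: %llu ticks (%ld ticks/sec)\n",
               user, system_ticks, hz);
        return 0;
    }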
+ } + + io->updated = 0; + if(io->filename) { + ff = procfile_reopen(ff, io->filename, NULL, PROCFILE_FLAG_DEFAULT); + if(!ff) return; + + ff = procfile_readall(ff); + if(!ff) return; + + unsigned long i, lines = procfile_lines(ff); + + if(lines < 1) { + error("File '%s' should have 1+ lines.", io->filename); + return; + } + + io->Read = 0; + io->Write = 0; +/* + io->Sync = 0; + io->Async = 0; + io->Total = 0; +*/ + + for(i = 0; i < lines ; i++) { + char *s = procfile_lineword(ff, i, 1); + uint32_t hash = simple_hash(s); + + if(hash == Read_hash && !strcmp(s, "Read")) + io->Read += strtoull(procfile_lineword(ff, i, 2), NULL, 10); + + else if(hash == Write_hash && !strcmp(s, "Write")) + io->Write += strtoull(procfile_lineword(ff, i, 2), NULL, 10); + +/* + else if(hash == Sync_hash && !strcmp(s, "Sync")) + io->Sync += strtoull(procfile_lineword(ff, i, 2), NULL, 10); + + else if(hash == Async_hash && !strcmp(s, "Async")) + io->Async += strtoull(procfile_lineword(ff, i, 2), NULL, 10); + + else if(hash == Total_hash && !strcmp(s, "Total")) + io->Total += strtoull(procfile_lineword(ff, i, 2), NULL, 10); +*/ + } + + io->updated = 1; + // fprintf(stderr, "READ '%s': Read: %llu, Write: %llu, Sync: %llu, Async: %llu, Total: %llu\n", io->filename, io->Read, io->Write, io->Sync, io->Async, io->Total); + } +} + +void cgroup_read_memory(struct memory *mem) { + static procfile *ff = NULL; + + static uint32_t cache_hash = 0; + static uint32_t rss_hash = 0; + static uint32_t rss_huge_hash = 0; + static uint32_t mapped_file_hash = 0; + static uint32_t writeback_hash = 0; + static uint32_t dirty_hash = 0; + static uint32_t swap_hash = 0; + static uint32_t pgpgin_hash = 0; + static uint32_t pgpgout_hash = 0; + static uint32_t pgfault_hash = 0; + static uint32_t pgmajfault_hash = 0; +/* + static uint32_t inactive_anon_hash = 0; + static uint32_t active_anon_hash = 0; + static uint32_t inactive_file_hash = 0; + static uint32_t active_file_hash = 0; + static uint32_t unevictable_hash = 0; + static uint32_t hierarchical_memory_limit_hash = 0; + static uint32_t total_cache_hash = 0; + static uint32_t total_rss_hash = 0; + static uint32_t total_rss_huge_hash = 0; + static uint32_t total_mapped_file_hash = 0; + static uint32_t total_writeback_hash = 0; + static uint32_t total_dirty_hash = 0; + static uint32_t total_swap_hash = 0; + static uint32_t total_pgpgin_hash = 0; + static uint32_t total_pgpgout_hash = 0; + static uint32_t total_pgfault_hash = 0; + static uint32_t total_pgmajfault_hash = 0; + static uint32_t total_inactive_anon_hash = 0; + static uint32_t total_active_anon_hash = 0; + static uint32_t total_inactive_file_hash = 0; + static uint32_t total_active_file_hash = 0; + static uint32_t total_unevictable_hash = 0; +*/ + if(unlikely(cache_hash == 0)) { + cache_hash = simple_hash("cache"); + rss_hash = simple_hash("rss"); + rss_huge_hash = simple_hash("rss_huge"); + mapped_file_hash = simple_hash("mapped_file"); + writeback_hash = simple_hash("writeback"); + dirty_hash = simple_hash("dirty"); + swap_hash = simple_hash("swap"); + pgpgin_hash = simple_hash("pgpgin"); + pgpgout_hash = simple_hash("pgpgout"); + pgfault_hash = simple_hash("pgfault"); + pgmajfault_hash = simple_hash("pgmajfault"); +/* + inactive_anon_hash = simple_hash("inactive_anon"); + active_anon_hash = simple_hash("active_anon"); + inactive_file_hash = simple_hash("inactive_file"); + active_file_hash = simple_hash("active_file"); + unevictable_hash = simple_hash("unevictable"); + hierarchical_memory_limit_hash = 
simple_hash("hierarchical_memory_limit"); + total_cache_hash = simple_hash("total_cache"); + total_rss_hash = simple_hash("total_rss"); + total_rss_huge_hash = simple_hash("total_rss_huge"); + total_mapped_file_hash = simple_hash("total_mapped_file"); + total_writeback_hash = simple_hash("total_writeback"); + total_dirty_hash = simple_hash("total_dirty"); + total_swap_hash = simple_hash("total_swap"); + total_pgpgin_hash = simple_hash("total_pgpgin"); + total_pgpgout_hash = simple_hash("total_pgpgout"); + total_pgfault_hash = simple_hash("total_pgfault"); + total_pgmajfault_hash = simple_hash("total_pgmajfault"); + total_inactive_anon_hash = simple_hash("total_inactive_anon"); + total_active_anon_hash = simple_hash("total_active_anon"); + total_inactive_file_hash = simple_hash("total_inactive_file"); + total_active_file_hash = simple_hash("total_active_file"); + total_unevictable_hash = simple_hash("total_unevictable"); +*/ + } + + mem->updated = 0; + if(mem->filename) { + ff = procfile_reopen(ff, mem->filename, NULL, PROCFILE_FLAG_DEFAULT); + if(!ff) return; + + ff = procfile_readall(ff); + if(!ff) return; + + unsigned long i, lines = procfile_lines(ff); + + if(lines < 1) { + error("File '%s' should have 1+ lines.", mem->filename); + return; + } + + for(i = 0; i < lines ; i++) { + char *s = procfile_lineword(ff, i, 0); + uint32_t hash = simple_hash(s); + + if(hash == cache_hash && !strcmp(s, "cache")) + mem->cache = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == rss_hash && !strcmp(s, "rss")) + mem->rss = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == rss_huge_hash && !strcmp(s, "rss_huge")) + mem->rss_huge = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == mapped_file_hash && !strcmp(s, "mapped_file")) + mem->mapped_file = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == writeback_hash && !strcmp(s, "writeback")) + mem->writeback = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == dirty_hash && !strcmp(s, "dirty")) { + mem->dirty = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + mem->has_dirty_swap = 1; + } + + else if(hash == swap_hash && !strcmp(s, "swap")) { + mem->swap = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + mem->has_dirty_swap = 1; + } + + else if(hash == pgpgin_hash && !strcmp(s, "pgpgin")) + mem->pgpgin = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == pgpgout_hash && !strcmp(s, "pgpgout")) + mem->pgpgout = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == pgfault_hash && !strcmp(s, "pgfault")) + mem->pgfault = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == pgmajfault_hash && !strcmp(s, "pgmajfault")) + mem->pgmajfault = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + +/* + else if(hash == inactive_anon_hash && !strcmp(s, "inactive_anon")) + mem->inactive_anon = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == active_anon_hash && !strcmp(s, "active_anon")) + mem->active_anon = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == inactive_file_hash && !strcmp(s, "inactive_file")) + mem->inactive_file = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == active_file_hash && !strcmp(s, "active_file")) + mem->active_file = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == unevictable_hash && !strcmp(s, "unevictable")) + mem->unevictable = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == 
hierarchical_memory_limit_hash && !strcmp(s, "hierarchical_memory_limit")) + mem->hierarchical_memory_limit = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == total_cache_hash && !strcmp(s, "total_cache")) + mem->total_cache = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == total_rss_hash && !strcmp(s, "total_rss")) + mem->total_rss = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == total_rss_huge_hash && !strcmp(s, "total_rss_huge")) + mem->total_rss_huge = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == total_mapped_file_hash && !strcmp(s, "total_mapped_file")) + mem->total_mapped_file = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == total_writeback_hash && !strcmp(s, "total_writeback")) + mem->total_writeback = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == total_dirty_hash && !strcmp(s, "total_dirty")) + mem->total_dirty = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == total_swap_hash && !strcmp(s, "total_swap")) + mem->total_swap = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == total_pgpgin_hash && !strcmp(s, "total_pgpgin")) + mem->total_pgpgin = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == total_pgpgout_hash && !strcmp(s, "total_pgpgout")) + mem->total_pgpgout = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == total_pgfault_hash && !strcmp(s, "total_pgfault")) + mem->total_pgfault = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == total_pgmajfault_hash && !strcmp(s, "total_pgmajfault")) + mem->total_pgmajfault = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == total_inactive_anon_hash && !strcmp(s, "total_inactive_anon")) + mem->total_inactive_anon = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == total_active_anon_hash && !strcmp(s, "total_active_anon")) + mem->total_active_anon = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == total_inactive_file_hash && !strcmp(s, "total_inactive_file")) + mem->total_inactive_file = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == total_active_file_hash && !strcmp(s, "total_active_file")) + mem->total_active_file = strtoull(procfile_lineword(ff, i, 1), NULL, 10); + + else if(hash == total_unevictable_hash && !strcmp(s, "total_unevictable")) + mem->total_unevictable = strtoull(procfile_lineword(ff, i, 1), NULL, 10); +*/ + } + + // fprintf(stderr, "READ: '%s', cache: %llu, rss: %llu, rss_huge: %llu, mapped_file: %llu, writeback: %llu, dirty: %llu, swap: %llu, pgpgin: %llu, pgpgout: %llu, pgfault: %llu, pgmajfault: %llu, inactive_anon: %llu, active_anon: %llu, inactive_file: %llu, active_file: %llu, unevictable: %llu, hierarchical_memory_limit: %llu, total_cache: %llu, total_rss: %llu, total_rss_huge: %llu, total_mapped_file: %llu, total_writeback: %llu, total_dirty: %llu, total_swap: %llu, total_pgpgin: %llu, total_pgpgout: %llu, total_pgfault: %llu, total_pgmajfault: %llu, total_inactive_anon: %llu, total_active_anon: %llu, total_inactive_file: %llu, total_active_file: %llu, total_unevictable: %llu\n", mem->filename, mem->cache, mem->rss, mem->rss_huge, mem->mapped_file, mem->writeback, mem->dirty, mem->swap, mem->pgpgin, mem->pgpgout, mem->pgfault, mem->pgmajfault, mem->inactive_anon, mem->active_anon, mem->inactive_file, mem->active_file, mem->unevictable, mem->hierarchical_memory_limit, mem->total_cache, mem->total_rss, 
mem->total_rss_huge, mem->total_mapped_file, mem->total_writeback, mem->total_dirty, mem->total_swap, mem->total_pgpgin, mem->total_pgpgout, mem->total_pgfault, mem->total_pgmajfault, mem->total_inactive_anon, mem->total_active_anon, mem->total_inactive_file, mem->total_active_file, mem->total_unevictable); + + mem->updated = 1; + } +} + +void cgroup_read(struct cgroup *cg) { + debug(D_CGROUP, "reading metrics for cgroups '%s'", cg->id); + + cgroup_read_cpuacct_stat(&cg->cpuacct_stat); + cgroup_read_cpuacct_usage(&cg->cpuacct_usage); + cgroup_read_memory(&cg->memory); + cgroup_read_blkio(&cg->io_service_bytes); + cgroup_read_blkio(&cg->io_serviced); + cgroup_read_blkio(&cg->throttle_io_service_bytes); + cgroup_read_blkio(&cg->throttle_io_serviced); + cgroup_read_blkio(&cg->io_merged); + cgroup_read_blkio(&cg->io_queued); +} + +void read_all_cgroups(struct cgroup *root) { + debug(D_CGROUP, "reading metrics for all cgroups"); + + struct cgroup *cg; + + for(cg = root; cg ; cg = cg->next) + if(cg->enabled && cg->available) + cgroup_read(cg); +} + +// ---------------------------------------------------------------------------- +// add/remove/find cgroup objects + +#define CGROUP_CHARTID_LINE_MAX 1024 + +void cgroup_get_chart_id(struct cgroup *cg) { + debug(D_CGROUP, "getting the name of cgroup '%s'", cg->id); + + pid_t cgroup_pid; + char buffer[CGROUP_CHARTID_LINE_MAX + 1]; + + snprintfz(buffer, CGROUP_CHARTID_LINE_MAX, "exec %s '%s'", + config_get("plugin:cgroups", "script to get cgroup names", PLUGINS_DIR "/cgroup-name.sh"), cg->chart_id); + + debug(D_CGROUP, "executing command '%s' for cgroup '%s'", buffer, cg->id); + FILE *fp = mypopen(buffer, &cgroup_pid); + if(!fp) { + error("CGROUP: Cannot popen(\"%s\", \"r\").", buffer); + return; + } + debug(D_CGROUP, "reading from command '%s' for cgroup '%s'", buffer, cg->id); + char *s = fgets(buffer, CGROUP_CHARTID_LINE_MAX, fp); + debug(D_CGROUP, "closing command for cgroup '%s'", cg->id); + mypclose(fp, cgroup_pid); + debug(D_CGROUP, "closed command for cgroup '%s'", cg->id); + + if(s && *s && *s != '\n') { + debug(D_CGROUP, "cgroup '%s' should be renamed to '%s'", cg->id, s); + + trim(s); + + free(cg->chart_title); + cg->chart_title = strdup(s); + if(!cg->chart_title) + fatal("CGROUP: Cannot allocate memory for chart name of cgroup '%s' chart name: '%s'", cg->id, s); + + netdata_fix_chart_name(cg->chart_title); + + free(cg->chart_id); + cg->chart_id = strdup(s); + if(!cg->chart_id) + fatal("CGROUP: Cannot allocate memory for chart id of cgroup '%s' chart id: '%s'", cg->id, s); + + netdata_fix_chart_id(cg->chart_id); + + debug(D_CGROUP, "cgroup '%s' renamed to '%s' (title: '%s')", cg->id, cg->chart_id, cg->chart_title); + } + else debug(D_CGROUP, "cgroup '%s' is not to be renamed (will be shown as '%s')", cg->id, cg->chart_id); +} + +struct cgroup *cgroup_add(const char *id) { + debug(D_CGROUP, "adding cgroup '%s'", id); + + if(cgroup_root_count >= cgroup_root_max) { + info("Maximum number of cgroups reached (%d). 
Not adding cgroup '%s'", cgroup_root_count, id); + return NULL; + } + + int def = cgroup_enable_new_cgroups_detected_at_runtime; + const char *chart_id = id; + if(!*chart_id) { + chart_id = "/"; + + // disable by default the root cgroup + def = 0; + debug(D_CGROUP, "cgroup '%s' is the root container (by default %s)", id, (def)?"enabled":"disabled"); + } + else { + if(*chart_id == '/') chart_id++; + + size_t len = strlen(chart_id); + + // disable by default the parent cgroup + // for known cgroup managers + if(!strcmp(chart_id, "lxc") || + !strcmp(chart_id, "docker") || + !strcmp(chart_id, "libvirt") || + !strcmp(chart_id, "qemu") || + !strcmp(chart_id, "systemd") || + !strcmp(chart_id, "system.slice") || + !strcmp(chart_id, "machine.slice") || + !strcmp(chart_id, "user") || + !strcmp(chart_id, "system") || + !strcmp(chart_id, "machine") || + // starts with them + (len > 6 && !strncmp(chart_id, "user/", 6)) || + (len > 11 && !strncmp(chart_id, "user.slice/", 11)) || + // ends with them + (len > 5 && !strncmp(&chart_id[len - 5], ".user", 5)) || + (len > 5 && !strncmp(&chart_id[len - 5], ".swap", 5)) || + (len > 6 && !strncmp(&chart_id[len - 6], ".slice", 6)) || + (len > 6 && !strncmp(&chart_id[len - 6], ".mount", 6)) || + (len > 8 && !strncmp(&chart_id[len - 8], ".session", 8)) || + (len > 8 && !strncmp(&chart_id[len - 8], ".service", 8)) || + (len > 10 && !strncmp(&chart_id[len - 10], ".partition", 10)) + ) { + def = 0; + debug(D_CGROUP, "cgroup '%s' is %s (by default)", id, (def)?"enabled":"disabled"); + } + } + + struct cgroup *cg = calloc(1, sizeof(struct cgroup)); + if(!cg) fatal("Cannot allocate memory for cgroup '%s'", id); + + debug(D_CGROUP, "adding cgroup '%s'", id); + + cg->id = strdup(id); + if(!cg->id) fatal("Cannot allocate memory for cgroup '%s'", id); + + cg->hash = simple_hash(cg->id); + + cg->chart_id = strdup(chart_id); + if(!cg->chart_id) fatal("Cannot allocate memory for cgroup '%s'", id); + + cg->chart_title = strdup(chart_id); + if(!cg->chart_title) fatal("Cannot allocate memory for cgroup '%s'", id); + + if(!cgroup_root) + cgroup_root = cg; + else { + // append it + struct cgroup *e; + for(e = cgroup_root; e->next ;e = e->next) ; + e->next = cg; + } + + cgroup_root_count++; + + // fix the name by calling the external script + cgroup_get_chart_id(cg); + + char option[FILENAME_MAX + 1]; + snprintfz(option, FILENAME_MAX, "enable cgroup %s", cg->chart_title); + cg->enabled = config_get_boolean("plugin:cgroups", option, def); + + debug(D_CGROUP, "Added cgroup '%s' with chart id '%s' and title '%s' as %s (default was %s)", cg->id, cg->chart_id, cg->chart_title, (cg->enabled)?"enabled":"disabled", (def)?"enabled":"disabled"); + + return cg; +} + +void cgroup_free(struct cgroup *cg) { + debug(D_CGROUP, "Removing cgroup '%s' with chart id '%s' (was %s and %s)", cg->id, cg->chart_id, (cg->enabled)?"enabled":"disabled", (cg->available)?"available":"not available"); + + free(cg->cpuacct_usage.cpu_percpu); + + free(cg->cpuacct_stat.filename); + free(cg->cpuacct_usage.filename); + free(cg->memory.filename); + free(cg->io_service_bytes.filename); + free(cg->io_serviced.filename); + free(cg->throttle_io_service_bytes.filename); + free(cg->throttle_io_serviced.filename); + free(cg->io_merged.filename); + free(cg->io_queued.filename); + + free(cg->id); + free(cg->chart_id); + free(cg->chart_title); + free(cg); + + cgroup_root_count--; +} + +// find if a given cgroup exists +struct cgroup *cgroup_find(const char *id) { + debug(D_CGROUP, "searching for cgroup '%s'", id); + + uint32_t 
hash = simple_hash(id); + + struct cgroup *cg; + for(cg = cgroup_root; cg ; cg = cg->next) { + if(hash == cg->hash && strcmp(id, cg->id) == 0) + break; + } + + debug(D_CGROUP, "cgroup_find('%s') %s", id, (cg)?"found":"not found"); + return cg; +} + +// ---------------------------------------------------------------------------- +// detect running cgroups + +// callback for find_file_in_subdirs() +void found_subdir_in_dir(const char *dir) { + debug(D_CGROUP, "examining cgroup dir '%s'", dir); + + struct cgroup *cg = cgroup_find(dir); + if(!cg) { + if(*dir && cgroup_max_depth > 0) { + int depth = 0; + const char *s; + + for(s = dir; *s ;s++) + if(unlikely(*s == '/')) + depth++; + + if(depth > cgroup_max_depth) { + info("cgroup '%s' is too deep (%d, while max is %d)", dir, depth, cgroup_max_depth); + return; + } + } + debug(D_CGROUP, "will add dir '%s' as cgroup", dir); + cg = cgroup_add(dir); + } + + if(cg) cg->available = 1; +} + +void find_dir_in_subdirs(const char *base, const char *this, void (*callback)(const char *)) { + int enabled = -1; + if(!this) this = base; + size_t dirlen = strlen(this), baselen = strlen(base); + const char *relative_path = &this[baselen]; + + DIR *dir = opendir(this); + if(!dir) return; + + callback(relative_path); + + struct dirent *de = NULL; + while((de = readdir(dir))) { + if(de->d_type == DT_DIR + && ( + (de->d_name[0] == '.' && de->d_name[1] == '\0') + || (de->d_name[0] == '.' && de->d_name[1] == '.' && de->d_name[2] == '\0') + )) + continue; + + debug(D_CGROUP, "examining '%s/%s'", this, de->d_name); + + if(de->d_type == DT_DIR) { + if(enabled == -1) { + const char *r = relative_path; + if(*r == '\0') r = "/"; + else if (*r == '/') r++; + + // we check for this option here + // so that the config will not have settings + // for leaf directories + char option[FILENAME_MAX + 1]; + snprintfz(option, FILENAME_MAX, "search for cgroups under %s", r); + option[FILENAME_MAX] = '\0'; + enabled = config_get_boolean("plugin:cgroups", option, 1); + } + + if(enabled) { + char *s = malloc(dirlen + strlen(de->d_name) + 2); + if(s) { + strcpy(s, this); + strcat(s, "/"); + strcat(s, de->d_name); + find_dir_in_subdirs(base, s, callback); + free(s); + } + } + } + } + + closedir(dir); +} + +void mark_all_cgroups_as_not_available() { + debug(D_CGROUP, "marking all cgroups as not available"); + + struct cgroup *cg; + + // mark all as not available + for(cg = cgroup_root; cg ; cg = cg->next) + cg->available = 0; +} + +void cleanup_all_cgroups() { + struct cgroup *cg = cgroup_root, *last = NULL; + + for(; cg ;) { + if(!cg->available) { + + if(!last) + cgroup_root = cg->next; + else + last->next = cg->next; + + cgroup_free(cg); + + if(!last) + cg = cgroup_root; + else + cg = last->next; + } + else { + last = cg; + cg = cg->next; + } + } +} + +void find_all_cgroups() { + debug(D_CGROUP, "searching for cgroups"); + + mark_all_cgroups_as_not_available(); + + if(cgroup_enable_cpuacct_stat || cgroup_enable_cpuacct_usage) + find_dir_in_subdirs(cgroup_cpuacct_base, NULL, found_subdir_in_dir); + + if(cgroup_enable_blkio) + find_dir_in_subdirs(cgroup_blkio_base, NULL, found_subdir_in_dir); + + if(cgroup_enable_memory) + find_dir_in_subdirs(cgroup_memory_base, NULL, found_subdir_in_dir); + + // remove any non-existing cgroups + cleanup_all_cgroups(); + + struct cgroup *cg; + for(cg = cgroup_root; cg ; cg = cg->next) { + // fprintf(stderr, " >>> CGROUP '%s' (%u - %s) with name '%s'\n", cg->id, cg->hash, cg->available?"available":"stopped", cg->name); + + if(unlikely(!cg->available)) + 
continue; + + debug(D_CGROUP, "checking paths for cgroup '%s'", cg->id); + + // check for newly added cgroups + // and update the filenames they read + char filename[FILENAME_MAX + 1]; + if(cgroup_enable_cpuacct_stat && !cg->cpuacct_stat.filename) { + snprintfz(filename, FILENAME_MAX, "%s%s/cpuacct.stat", cgroup_cpuacct_base, cg->id); + cg->cpuacct_stat.filename = strdup(filename); + debug(D_CGROUP, "cpuacct.stat filename for cgroup '%s': '%s'", cg->id, cg->cpuacct_stat.filename); + } + if(cgroup_enable_cpuacct_usage && !cg->cpuacct_usage.filename) { + snprintfz(filename, FILENAME_MAX, "%s%s/cpuacct.usage_percpu", cgroup_cpuacct_base, cg->id); + cg->cpuacct_usage.filename = strdup(filename); + debug(D_CGROUP, "cpuacct.usage_percpu filename for cgroup '%s': '%s'", cg->id, cg->cpuacct_usage.filename); + } + if(cgroup_enable_memory && !cg->memory.filename) { + snprintfz(filename, FILENAME_MAX, "%s%s/memory.stat", cgroup_memory_base, cg->id); + cg->memory.filename = strdup(filename); + debug(D_CGROUP, "memory.stat filename for cgroup '%s': '%s'", cg->id, cg->memory.filename); + } + if(cgroup_enable_blkio) { + if(!cg->io_service_bytes.filename) { + snprintfz(filename, FILENAME_MAX, "%s%s/blkio.io_service_bytes", cgroup_blkio_base, cg->id); + cg->io_service_bytes.filename = strdup(filename); + debug(D_CGROUP, "io_service_bytes filename for cgroup '%s': '%s'", cg->id, cg->io_service_bytes.filename); + } + if(!cg->io_serviced.filename) { + snprintfz(filename, FILENAME_MAX, "%s%s/blkio.io_serviced", cgroup_blkio_base, cg->id); + cg->io_serviced.filename = strdup(filename); + debug(D_CGROUP, "io_serviced filename for cgroup '%s': '%s'", cg->id, cg->io_serviced.filename); + } + if(!cg->throttle_io_service_bytes.filename) { + snprintfz(filename, FILENAME_MAX, "%s%s/blkio.throttle.io_service_bytes", cgroup_blkio_base, cg->id); + cg->throttle_io_service_bytes.filename = strdup(filename); + debug(D_CGROUP, "throttle_io_service_bytes filename for cgroup '%s': '%s'", cg->id, cg->throttle_io_service_bytes.filename); + } + if(!cg->throttle_io_serviced.filename) { + snprintfz(filename, FILENAME_MAX, "%s%s/blkio.throttle.io_serviced", cgroup_blkio_base, cg->id); + cg->throttle_io_serviced.filename = strdup(filename); + debug(D_CGROUP, "throttle_io_serviced filename for cgroup '%s': '%s'", cg->id, cg->throttle_io_serviced.filename); + } + if(!cg->io_merged.filename) { + snprintfz(filename, FILENAME_MAX, "%s%s/blkio.io_merged", cgroup_blkio_base, cg->id); + cg->io_merged.filename = strdup(filename); + debug(D_CGROUP, "io_merged filename for cgroup '%s': '%s'", cg->id, cg->io_merged.filename); + } + if(!cg->io_queued.filename) { + snprintfz(filename, FILENAME_MAX, "%s%s/blkio.io_queued", cgroup_blkio_base, cg->id); + cg->io_queued.filename = strdup(filename); + debug(D_CGROUP, "io_queued filename for cgroup '%s': '%s'", cg->id, cg->io_queued.filename); + } + } + } + + debug(D_CGROUP, "done searching for cgroups"); + return; +} + +// ---------------------------------------------------------------------------- +// generate charts + +#define CHART_TITLE_MAX 300 + +void update_cgroup_charts(int update_every) { + debug(D_CGROUP, "updating cgroups charts"); + + char type[RRD_ID_LENGTH_MAX + 1]; + char title[CHART_TITLE_MAX + 1]; + + struct cgroup *cg; + RRDSET *st; + + for(cg = cgroup_root; cg ; cg = cg->next) { + if(!cg->available || !cg->enabled) + continue; + + if(cg->id[0] == '\0') + strcpy(type, "cgroup_root"); + else if(cg->id[0] == '/') + snprintfz(type, RRD_ID_LENGTH_MAX, "cgroup_%s", cg->chart_id); + else + 
snprintfz(type, RRD_ID_LENGTH_MAX, "cgroup_%s", cg->chart_id); + + netdata_fix_chart_id(type); + + if(cg->cpuacct_stat.updated) { + st = rrdset_find_bytype(type, "cpu"); + if(!st) { + snprintfz(title, CHART_TITLE_MAX, "CPU Usage for cgroup %s", cg->chart_title); + st = rrdset_create(type, "cpu", NULL, "cpu", "cgroup.cpu", title, "%", 40000, update_every, RRDSET_TYPE_STACKED); + + rrddim_add(st, "user", NULL, 100, hz, RRDDIM_INCREMENTAL); + rrddim_add(st, "system", NULL, 100, hz, RRDDIM_INCREMENTAL); + } + else rrdset_next(st); + + rrddim_set(st, "user", cg->cpuacct_stat.user); + rrddim_set(st, "system", cg->cpuacct_stat.system); + rrdset_done(st); + } + + if(cg->cpuacct_usage.updated) { + char id[RRD_ID_LENGTH_MAX + 1]; + unsigned int i; + + st = rrdset_find_bytype(type, "cpu_per_core"); + if(!st) { + snprintfz(title, CHART_TITLE_MAX, "CPU Usage Per Core for cgroup %s", cg->chart_title); + st = rrdset_create(type, "cpu_per_core", NULL, "cpu", "cgroup.cpu_per_core", title, "%", 40100, update_every, RRDSET_TYPE_STACKED); + + for(i = 0; i < cg->cpuacct_usage.cpus ;i++) { + snprintfz(id, CHART_TITLE_MAX, "cpu%d", i); + rrddim_add(st, id, NULL, 100, 1000000, RRDDIM_INCREMENTAL); + } + } + else rrdset_next(st); + + for(i = 0; i < cg->cpuacct_usage.cpus ;i++) { + snprintfz(id, CHART_TITLE_MAX, "cpu%d", i); + rrddim_set(st, id, cg->cpuacct_usage.cpu_percpu[i]); + } + rrdset_done(st); + } + + if(cg->memory.updated) { + if(cg->memory.cache + cg->memory.rss + cg->memory.rss_huge + cg->memory.mapped_file > 0) { + st = rrdset_find_bytype(type, "mem"); + if(!st) { + snprintfz(title, CHART_TITLE_MAX, "Memory Usage for cgroup %s", cg->chart_title); + st = rrdset_create(type, "mem", NULL, "mem", "cgroup.mem", title, "MB", 40200, update_every, + RRDSET_TYPE_STACKED); + + rrddim_add(st, "cache", NULL, 1, 1024 * 1024, RRDDIM_ABSOLUTE); + rrddim_add(st, "rss", NULL, 1, 1024 * 1024, RRDDIM_ABSOLUTE); + if(cg->memory.has_dirty_swap) + rrddim_add(st, "swap", NULL, 1, 1024 * 1024, RRDDIM_ABSOLUTE); + rrddim_add(st, "rss_huge", NULL, 1, 1024 * 1024, RRDDIM_ABSOLUTE); + rrddim_add(st, "mapped_file", NULL, 1, 1024 * 1024, RRDDIM_ABSOLUTE); + } + else rrdset_next(st); + + rrddim_set(st, "cache", cg->memory.cache); + rrddim_set(st, "rss", cg->memory.rss); + if(cg->memory.has_dirty_swap) + rrddim_set(st, "swap", cg->memory.swap); + rrddim_set(st, "rss_huge", cg->memory.rss_huge); + rrddim_set(st, "mapped_file", cg->memory.mapped_file); + rrdset_done(st); + } + + st = rrdset_find_bytype(type, "writeback"); + if(!st) { + snprintfz(title, CHART_TITLE_MAX, "Writeback Memory for cgroup %s", cg->chart_title); + st = rrdset_create(type, "writeback", NULL, "mem", "cgroup.writeback", title, "MB", 40300, + update_every, RRDSET_TYPE_AREA); + + if(cg->memory.has_dirty_swap) + rrddim_add(st, "dirty", NULL, 1, 1024 * 1024, RRDDIM_ABSOLUTE); + rrddim_add(st, "writeback", NULL, 1, 1024 * 1024, RRDDIM_ABSOLUTE); + } + else rrdset_next(st); + + if(cg->memory.has_dirty_swap) + rrddim_set(st, "dirty", cg->memory.dirty); + rrddim_set(st, "writeback", cg->memory.writeback); + rrdset_done(st); + + if(cg->memory.pgpgin + cg->memory.pgpgout > 0) { + st = rrdset_find_bytype(type, "mem_activity"); + if(!st) { + snprintfz(title, CHART_TITLE_MAX, "Memory Activity for cgroup %s", cg->chart_title); + st = rrdset_create(type, "mem_activity", NULL, "mem", "cgroup.mem_activity", title, "MB/s", + 40400, update_every, RRDSET_TYPE_LINE); + + rrddim_add(st, "pgpgin", "in", sysconf(_SC_PAGESIZE), 1024 * 1024, RRDDIM_INCREMENTAL); + rrddim_add(st, "pgpgout", 
"out", -sysconf(_SC_PAGESIZE), 1024 * 1024, RRDDIM_INCREMENTAL); + } + else rrdset_next(st); + + rrddim_set(st, "pgpgin", cg->memory.pgpgin); + rrddim_set(st, "pgpgout", cg->memory.pgpgout); + rrdset_done(st); + } + + if(cg->memory.pgfault + cg->memory.pgmajfault > 0) { + st = rrdset_find_bytype(type, "pgfaults"); + if(!st) { + snprintfz(title, CHART_TITLE_MAX, "Memory Page Faults for cgroup %s", cg->chart_title); + st = rrdset_create(type, "pgfaults", NULL, "mem", "cgroup.pgfaults", title, "MB/s", 40500, + update_every, RRDSET_TYPE_LINE); + + rrddim_add(st, "pgfault", NULL, sysconf(_SC_PAGESIZE), 1024 * 1024, RRDDIM_INCREMENTAL); + rrddim_add(st, "pgmajfault", "swap", -sysconf(_SC_PAGESIZE), 1024 * 1024, RRDDIM_INCREMENTAL); + } + else rrdset_next(st); + + rrddim_set(st, "pgfault", cg->memory.pgfault); + rrddim_set(st, "pgmajfault", cg->memory.pgmajfault); + rrdset_done(st); + } + } + + if(cg->io_service_bytes.updated && cg->io_service_bytes.Read + cg->io_service_bytes.Write > 0) { + st = rrdset_find_bytype(type, "io"); + if(!st) { + snprintfz(title, CHART_TITLE_MAX, "I/O Bandwidth (all disks) for cgroup %s", cg->chart_title); + st = rrdset_create(type, "io", NULL, "disk", "cgroup.io", title, "KB/s", 41200, + update_every, RRDSET_TYPE_LINE); + + rrddim_add(st, "read", NULL, 1, 1024, RRDDIM_INCREMENTAL); + rrddim_add(st, "write", NULL, -1, 1024, RRDDIM_INCREMENTAL); + } + else rrdset_next(st); + + rrddim_set(st, "read", cg->io_service_bytes.Read); + rrddim_set(st, "write", cg->io_service_bytes.Write); + rrdset_done(st); + } + + if(cg->io_serviced.updated && cg->io_serviced.Read + cg->io_serviced.Write > 0) { + st = rrdset_find_bytype(type, "serviced_ops"); + if(!st) { + snprintfz(title, CHART_TITLE_MAX, "Serviced I/O Operations (all disks) for cgroup %s", cg->chart_title); + st = rrdset_create(type, "serviced_ops", NULL, "disk", "cgroup.serviced_ops", title, "operations/s", 41200, + update_every, RRDSET_TYPE_LINE); + + rrddim_add(st, "read", NULL, 1, 1, RRDDIM_INCREMENTAL); + rrddim_add(st, "write", NULL, -1, 1, RRDDIM_INCREMENTAL); + } + else rrdset_next(st); + + rrddim_set(st, "read", cg->io_serviced.Read); + rrddim_set(st, "write", cg->io_serviced.Write); + rrdset_done(st); + } + + if(cg->throttle_io_service_bytes.updated && cg->throttle_io_service_bytes.Read + cg->throttle_io_service_bytes.Write > 0) { + st = rrdset_find_bytype(type, "io"); + if(!st) { + snprintfz(title, CHART_TITLE_MAX, "Throttle I/O Bandwidth (all disks) for cgroup %s", cg->chart_title); + st = rrdset_create(type, "io", NULL, "disk", "cgroup.io", title, "KB/s", 41200, + update_every, RRDSET_TYPE_LINE); + + rrddim_add(st, "read", NULL, 1, 1024, RRDDIM_INCREMENTAL); + rrddim_add(st, "write", NULL, -1, 1024, RRDDIM_INCREMENTAL); + } + else rrdset_next(st); + + rrddim_set(st, "read", cg->throttle_io_service_bytes.Read); + rrddim_set(st, "write", cg->throttle_io_service_bytes.Write); + rrdset_done(st); + } + + + if(cg->throttle_io_serviced.updated && cg->throttle_io_serviced.Read + cg->throttle_io_serviced.Write > 0) { + st = rrdset_find_bytype(type, "throttle_serviced_ops"); + if(!st) { + snprintfz(title, CHART_TITLE_MAX, "Throttle Serviced I/O Operations (all disks) for cgroup %s", cg->chart_title); + st = rrdset_create(type, "throttle_serviced_ops", NULL, "disk", "cgroup.throttle_serviced_ops", title, "operations/s", 41200, + update_every, RRDSET_TYPE_LINE); + + rrddim_add(st, "read", NULL, 1, 1, RRDDIM_INCREMENTAL); + rrddim_add(st, "write", NULL, -1, 1, RRDDIM_INCREMENTAL); + } + else rrdset_next(st); + + 
rrddim_set(st, "read", cg->throttle_io_serviced.Read); + rrddim_set(st, "write", cg->throttle_io_serviced.Write); + rrdset_done(st); + } + + if(cg->io_queued.updated) { + st = rrdset_find_bytype(type, "queued_ops"); + if(!st) { + snprintfz(title, CHART_TITLE_MAX, "Queued I/O Operations (all disks) for cgroup %s", cg->chart_title); + st = rrdset_create(type, "queued_ops", NULL, "disk", "cgroup.queued_ops", title, "operations", 42000, + update_every, RRDSET_TYPE_LINE); + + rrddim_add(st, "read", NULL, 1, 1, RRDDIM_ABSOLUTE); + rrddim_add(st, "write", NULL, -1, 1, RRDDIM_ABSOLUTE); + } + else rrdset_next(st); + + rrddim_set(st, "read", cg->io_queued.Read); + rrddim_set(st, "write", cg->io_queued.Write); + rrdset_done(st); + } + + if(cg->io_merged.updated && cg->io_merged.Read + cg->io_merged.Write > 0) { + st = rrdset_find_bytype(type, "merged_ops"); + if(!st) { + snprintfz(title, CHART_TITLE_MAX, "Merged I/O Operations (all disks) for cgroup %s", cg->chart_title); + st = rrdset_create(type, "merged_ops", NULL, "disk", "cgroup.merged_ops", title, "operations/s", 42100, + update_every, RRDSET_TYPE_LINE); + + rrddim_add(st, "read", NULL, 1, 1024, RRDDIM_INCREMENTAL); + rrddim_add(st, "write", NULL, -1, 1024, RRDDIM_INCREMENTAL); + } + else rrdset_next(st); + + rrddim_set(st, "read", cg->io_merged.Read); + rrddim_set(st, "write", cg->io_merged.Write); + rrdset_done(st); + } + } + + debug(D_CGROUP, "done updating cgroups charts"); +} + +// ---------------------------------------------------------------------------- +// cgroups main + +int do_sys_fs_cgroup(int update_every, unsigned long long dt) { + static int cgroup_global_config_read = 0; + static time_t last_run = 0; + time_t now = time(NULL); + + if(dt) {}; + + if(unlikely(!cgroup_global_config_read)) { + read_cgroup_plugin_configuration(); + cgroup_global_config_read = 1; + } + + if(unlikely(cgroup_enable_new_cgroups_detected_at_runtime && now - last_run > cgroup_check_for_new_every)) { + find_all_cgroups(); + last_run = now; + } + + read_all_cgroups(cgroup_root); + update_cgroup_charts(update_every); + + return 0; +} + +void *cgroups_main(void *ptr) +{ + if(ptr) { ; } + + info("CGROUP Plugin thread created with task id %d", gettid()); + + if(pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, NULL) != 0) + error("Cannot set pthread cancel type to DEFERRED."); + + if(pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, NULL) != 0) + error("Cannot set pthread cancel state to ENABLE."); + + struct rusage thread; + + // when ZERO, attempt to do it + int vdo_sys_fs_cgroup = 0; + int vdo_cpu_netdata = !config_get_boolean("plugin:cgroups", "cgroups plugin resources", 1); + + // keep track of the time each module was called + unsigned long long sutime_sys_fs_cgroup = 0ULL; + + // the next time we will run - aligned properly + unsigned long long sunext = (time(NULL) - (time(NULL) % rrd_update_every) + rrd_update_every) * 1000000ULL; + unsigned long long sunow; + + RRDSET *stcpu_thread = NULL; + + for(;1;) { + if(unlikely(netdata_exit)) break; + + // delay until it is our time to run + while((sunow = timems()) < sunext) + usleep((useconds_t)(sunext - sunow)); + + // find the next time we need to run + while(timems() > sunext) + sunext += rrd_update_every * 1000000ULL; + + if(unlikely(netdata_exit)) break; + + // BEGIN -- the job to be done + + if(!vdo_sys_fs_cgroup) { + debug(D_PROCNETDEV_LOOP, "PROCNETDEV: calling do_sys_fs_cgroup()."); + sunow = timems(); + vdo_sys_fs_cgroup = do_sys_fs_cgroup(rrd_update_every, (sutime_sys_fs_cgroup > 0)?sunow - 
sutime_sys_fs_cgroup:0ULL); + sutime_sys_fs_cgroup = sunow; + } + if(unlikely(netdata_exit)) break; + + // END -- the job is done + + // -------------------------------------------------------------------- + + if(!vdo_cpu_netdata) { + getrusage(RUSAGE_THREAD, &thread); + + if(!stcpu_thread) stcpu_thread = rrdset_find("netdata.plugin_cgroups_cpu"); + if(!stcpu_thread) { + stcpu_thread = rrdset_create("netdata", "plugin_cgroups_cpu", NULL, "proc.internal", NULL, "NetData CGroups Plugin CPU usage", "milliseconds/s", 132000, rrd_update_every, RRDSET_TYPE_STACKED); + + rrddim_add(stcpu_thread, "user", NULL, 1, 1000, RRDDIM_INCREMENTAL); + rrddim_add(stcpu_thread, "system", NULL, 1, 1000, RRDDIM_INCREMENTAL); + } + else rrdset_next(stcpu_thread); + + rrddim_set(stcpu_thread, "user" , thread.ru_utime.tv_sec * 1000000ULL + thread.ru_utime.tv_usec); + rrddim_set(stcpu_thread, "system", thread.ru_stime.tv_sec * 1000000ULL + thread.ru_stime.tv_usec); + rrdset_done(stcpu_thread); + } + } + + pthread_exit(NULL); + return NULL; +} diff --git a/src/sys_kernel_mm_ksm.c b/src/sys_kernel_mm_ksm.c index 822e0d41a..928ac8c62 100644 --- a/src/sys_kernel_mm_ksm.c +++ b/src/sys_kernel_mm_ksm.c @@ -41,32 +41,32 @@ int do_sys_kernel_mm_ksm(int update_every, unsigned long long dt) { page_size = sysconf(_SC_PAGESIZE); if(!ff_pages_shared) { - snprintf(values[PAGES_SHARED].filename, FILENAME_MAX, "%s%s", global_host_prefix, "/sys/kernel/mm/ksm/pages_shared"); - snprintf(values[PAGES_SHARED].filename, FILENAME_MAX, "%s", config_get("plugin:proc:/sys/kernel/mm/ksm", "/sys/kernel/mm/ksm/pages_shared", values[PAGES_SHARED].filename)); + snprintfz(values[PAGES_SHARED].filename, FILENAME_MAX, "%s%s", global_host_prefix, "/sys/kernel/mm/ksm/pages_shared"); + snprintfz(values[PAGES_SHARED].filename, FILENAME_MAX, "%s", config_get("plugin:proc:/sys/kernel/mm/ksm", "/sys/kernel/mm/ksm/pages_shared", values[PAGES_SHARED].filename)); ff_pages_shared = procfile_open(values[PAGES_SHARED].filename, " \t:", PROCFILE_FLAG_DEFAULT); } if(!ff_pages_sharing) { - snprintf(values[PAGES_SHARING].filename, FILENAME_MAX, "%s%s", global_host_prefix, "/sys/kernel/mm/ksm/pages_sharing"); - snprintf(values[PAGES_SHARING].filename, FILENAME_MAX, "%s", config_get("plugin:proc:/sys/kernel/mm/ksm", "/sys/kernel/mm/ksm/pages_sharing", values[PAGES_SHARING].filename)); + snprintfz(values[PAGES_SHARING].filename, FILENAME_MAX, "%s%s", global_host_prefix, "/sys/kernel/mm/ksm/pages_sharing"); + snprintfz(values[PAGES_SHARING].filename, FILENAME_MAX, "%s", config_get("plugin:proc:/sys/kernel/mm/ksm", "/sys/kernel/mm/ksm/pages_sharing", values[PAGES_SHARING].filename)); ff_pages_sharing = procfile_open(values[PAGES_SHARING].filename, " \t:", PROCFILE_FLAG_DEFAULT); } if(!ff_pages_unshared) { - snprintf(values[PAGES_UNSHARED].filename, FILENAME_MAX, "%s%s", global_host_prefix, "/sys/kernel/mm/ksm/pages_unshared"); - snprintf(values[PAGES_UNSHARED].filename, FILENAME_MAX, "%s", config_get("plugin:proc:/sys/kernel/mm/ksm", "/sys/kernel/mm/ksm/pages_unshared", values[PAGES_UNSHARED].filename)); + snprintfz(values[PAGES_UNSHARED].filename, FILENAME_MAX, "%s%s", global_host_prefix, "/sys/kernel/mm/ksm/pages_unshared"); + snprintfz(values[PAGES_UNSHARED].filename, FILENAME_MAX, "%s", config_get("plugin:proc:/sys/kernel/mm/ksm", "/sys/kernel/mm/ksm/pages_unshared", values[PAGES_UNSHARED].filename)); ff_pages_unshared = procfile_open(values[PAGES_UNSHARED].filename, " \t:", PROCFILE_FLAG_DEFAULT); } if(!ff_pages_volatile) { - 
snprintf(values[PAGES_VOLATILE].filename, FILENAME_MAX, "%s%s", global_host_prefix, "/sys/kernel/mm/ksm/pages_volatile"); - snprintf(values[PAGES_VOLATILE].filename, FILENAME_MAX, "%s", config_get("plugin:proc:/sys/kernel/mm/ksm", "/sys/kernel/mm/ksm/pages_volatile", values[PAGES_VOLATILE].filename)); + snprintfz(values[PAGES_VOLATILE].filename, FILENAME_MAX, "%s%s", global_host_prefix, "/sys/kernel/mm/ksm/pages_volatile"); + snprintfz(values[PAGES_VOLATILE].filename, FILENAME_MAX, "%s", config_get("plugin:proc:/sys/kernel/mm/ksm", "/sys/kernel/mm/ksm/pages_volatile", values[PAGES_VOLATILE].filename)); ff_pages_volatile = procfile_open(values[PAGES_VOLATILE].filename, " \t:", PROCFILE_FLAG_DEFAULT); } if(!ff_pages_to_scan) { - snprintf(values[PAGES_TO_SCAN].filename, FILENAME_MAX, "%s%s", global_host_prefix, "/sys/kernel/mm/ksm/pages_to_scan"); - snprintf(values[PAGES_TO_SCAN].filename, FILENAME_MAX, "%s", config_get("plugin:proc:/sys/kernel/mm/ksm", "/sys/kernel/mm/ksm/pages_to_scan", values[PAGES_TO_SCAN].filename)); + snprintfz(values[PAGES_TO_SCAN].filename, FILENAME_MAX, "%s%s", global_host_prefix, "/sys/kernel/mm/ksm/pages_to_scan"); + snprintfz(values[PAGES_TO_SCAN].filename, FILENAME_MAX, "%s", config_get("plugin:proc:/sys/kernel/mm/ksm", "/sys/kernel/mm/ksm/pages_to_scan", values[PAGES_TO_SCAN].filename)); ff_pages_to_scan = procfile_open(values[PAGES_TO_SCAN].filename, " \t:", PROCFILE_FLAG_DEFAULT); } diff --git a/src/unit_test.c b/src/unit_test.c index 47aa5396c..06b7afacb 100644 --- a/src/unit_test.c +++ b/src/unit_test.c @@ -128,7 +128,7 @@ void benchmark_storage_number(int loop, int multiplier) { for(i = 0; i < loop ;i++) { n *= multiplier; if(n > STORAGE_NUMBER_POSITIVE_MAX) n = STORAGE_NUMBER_POSITIVE_MIN; - snprintf(buffer, 100, CALCULATED_NUMBER_FORMAT, n); + snprintfz(buffer, 100, CALCULATED_NUMBER_FORMAT, n); } } @@ -614,7 +614,7 @@ int run_test(struct test *test) rrd_update_every = test->update_every; char name[101]; - snprintf(name, 100, "unittest-%s", test->name); + snprintfz(name, 100, "unittest-%s", test->name); // create the chart RRDSET *st = rrdset_create("netdata", name, name, "netdata", NULL, "Unit Testing", "a value", 1, 1, RRDSET_TYPE_LINE); @@ -703,7 +703,7 @@ int unit_test(long delay, long shift) repeat++; char name[101]; - snprintf(name, 100, "unittest-%d-%ld-%ld", repeat, delay, shift); + snprintfz(name, 100, "unittest-%d-%ld-%ld", repeat, delay, shift); //debug_flags = 0xffffffff; rrd_memory_mode = RRD_MEMORY_MODE_RAM; @@ -27,28 +27,36 @@ char to_hex(char code) { /* Returns a url-encoded version of str */ /* IMPORTANT: be sure to free() the returned string after use */ char *url_encode(char *str) { - char *pstr = str, - *buf = malloc(strlen(str) * 3 + 1), - *pbuf = buf; + char *buf, *pbuf; + + pbuf = buf = malloc(strlen(str) * 3 + 1); if(!buf) fatal("Cannot allocate memory."); - while (*pstr) { - if (isalnum(*pstr) || *pstr == '-' || *pstr == '_' || *pstr == '.' || *pstr == '~') - *pbuf++ = *pstr; + while (*str) { + if (isalnum(*str) || *str == '-' || *str == '_' || *str == '.' || *str == '~') + *pbuf++ = *str; - else if (*pstr == ' ') + else if (*str == ' ') *pbuf++ = '+'; else - *pbuf++ = '%', *pbuf++ = to_hex(*pstr >> 4), *pbuf++ = to_hex(*pstr & 15); + *pbuf++ = '%', *pbuf++ = to_hex(*str >> 4), *pbuf++ = to_hex(*str & 15); - pstr++; + str++; } - *pbuf = '\0'; + // FIX: I think this is prudent. URLs can be as long as 2 KiB or more. + // We allocated 3 times more space to accomodate %NN encoding of + // non ASCII chars. 
If URL has none of these kind of chars we will + // end up with a big unused buffer. + // + // Try to shrink the buffer... + if (!!(pbuf = (char *)realloc(buf, strlen(buf)+1))) + buf = pbuf; + return buf; } diff --git a/src/web_buffer.c b/src/web_buffer.c index 482eb3900..a0f153721 100644 --- a/src/web_buffer.c +++ b/src/web_buffer.c @@ -53,7 +53,7 @@ void buffer_reset(BUFFER *wb) const char *buffer_tostring(BUFFER *wb) { - buffer_need_bytes(wb, (size_t)1); + buffer_need_bytes(wb, 1); wb->buffer[wb->len] = '\0'; buffer_overflow_check(wb); @@ -78,15 +78,16 @@ void buffer_strcat(BUFFER *wb, const char *txt) { if(unlikely(!txt || !*txt)) return; - buffer_need_bytes(wb, (size_t)(1)); + buffer_need_bytes(wb, 1); - char *s = &wb->buffer[wb->len], *end = &wb->buffer[wb->size]; + char *s = &wb->buffer[wb->len], *start, *end = &wb->buffer[wb->size]; long len = wb->len; - while(*txt && s != end) { + start = s; + while(*txt && s != end) *s++ = *txt++; - len++; - } + + len += s - start; wb->len = len; buffer_overflow_check(wb); @@ -110,44 +111,45 @@ void buffer_snprintf(BUFFER *wb, size_t len, const char *fmt, ...) { if(unlikely(!fmt || !*fmt)) return; - buffer_need_bytes(wb, len+1); + buffer_need_bytes(wb, len + 1); va_list args; va_start(args, fmt); - wb->len += vsnprintf(&wb->buffer[wb->len], len+1, fmt, args); + wb->len += vsnprintfz(&wb->buffer[wb->len], len, fmt, args); va_end(args); buffer_overflow_check(wb); - // the buffer is \0 terminated by vsnprintf + // the buffer is \0 terminated by vsnprintfz } void buffer_vsprintf(BUFFER *wb, const char *fmt, va_list args) { if(unlikely(!fmt || !*fmt)) return; - buffer_need_bytes(wb, 1); + buffer_need_bytes(wb, 2); - size_t len = wb->size - wb->len; + size_t len = wb->size - wb->len - 1; - wb->len += vsnprintf(&wb->buffer[wb->len], len, fmt, args); + wb->len += vsnprintfz(&wb->buffer[wb->len], len, fmt, args); buffer_overflow_check(wb); - // the buffer is \0 terminated by vsnprintf + // the buffer is \0 terminated by vsnprintfz } void buffer_sprintf(BUFFER *wb, const char *fmt, ...) { if(unlikely(!fmt || !*fmt)) return; - buffer_need_bytes(wb, 1); + buffer_need_bytes(wb, 2); - size_t len = wb->size - wb->len, wrote; + size_t len = wb->size - wb->len - 1; + size_t wrote; va_list args; va_start(args, fmt); - wrote = (size_t) vsnprintf(&wb->buffer[wb->len], len, fmt, args); + wrote = (size_t) vsnprintfz(&wb->buffer[wb->len], len, fmt, args); va_end(args); if(unlikely(wrote >= len)) { @@ -187,43 +189,52 @@ void buffer_rrd_value(BUFFER *wb, calculated_number value) // generate a javascript date, the fastest possible way... 
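// [Editor's note, not part of the patch] For reference, buffer_jsdate() emits the same
// text before and after this rewrite, e.g. for year 2016, month 5, day 16, 03:28:20
// it appends:
//
//     Date(2016,5,16,3,28,20)
//
// Fields smaller than 10 are written without a leading zero: the tens digit is kept
// only when it is not '0', which is what the "if (*p != '0') p++;" pattern does.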
void buffer_jsdate(BUFFER *wb, int year, int month, int day, int hours, int minutes, int seconds) { - // 10 20 30 = 35 + // 10 20 30 = 35 // 01234567890123456789012345678901234 // Date(2014,04,01,03,28,20) buffer_need_bytes(wb, 30); - char *b = &wb->buffer[wb->len]; - - int i = 0; - b[i++]='D'; - b[i++]='a'; - b[i++]='t'; - b[i++]='e'; - b[i++]='('; - b[i++]= (char) (48 + year / 1000); year -= (year / 1000) * 1000; - b[i++]= (char) (48 + year / 100); year -= (year / 100) * 100; - b[i++]= (char) (48 + year / 10); - b[i++]= (char) (48 + year % 10); - b[i++]=','; - b[i]= (char) (48 + month / 10); if(b[i] != '0') i++; - b[i++]= (char) (48 + month % 10); - b[i++]=','; - b[i]= (char) (48 + day / 10); if(b[i] != '0') i++; - b[i++]= (char) (48 + day % 10); - b[i++]=','; - b[i]= (char) (48 + hours / 10); if(b[i] != '0') i++; - b[i++]= (char) (48 + hours % 10); - b[i++]=','; - b[i]= (char) (48 + minutes / 10); if(b[i] != '0') i++; - b[i++]= (char) (48 + minutes % 10); - b[i++]=','; - b[i]= (char) (48 + seconds / 10); if(b[i] != '0') i++; - b[i++]= (char) (48 + seconds % 10); - b[i++]=')'; - b[i]='\0'; - - wb->len += i; + char *b = &wb->buffer[wb->len], *p; + unsigned int *q = (unsigned int *)b; + + #if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ + *q++ = 0x65746144; // "Date" backwards. + #else + *q++ = 0x44617465; // "Date" + #endif + p = (char *)q; + + *p++ = '('; + *p++ = '0' + year / 1000; year %= 1000; + *p++ = '0' + year / 100; year %= 100; + *p++ = '0' + year / 10; + *p++ = '0' + year % 10; + *p++ = ','; + *p = '0' + month / 10; if (*p != '0') p++; + *p++ = '0' + month % 10; + *p++ = ','; + *p = '0' + day / 10; if (*p != '0') p++; + *p++ = '0' + day % 10; + *p++ = ','; + *p = '0' + hours / 10; if (*p != '0') p++; + *p++ = '0' + hours % 10; + *p++ = ','; + *p = '0' + minutes / 10; if (*p != '0') p++; + *p++ = '0' + minutes % 10; + *p++ = ','; + *p = '0' + seconds / 10; if (*p != '0') p++; + *p++ = '0' + seconds % 10; + + unsigned short *r = (unsigned short *)p; + +#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__ + *r++ = 0x0029; // ")\0" backwards. 
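// [Editor's note, not part of the patch] The magic numbers above are just ASCII packed
// into machine words: on little-endian targets 0x65746144 is the byte sequence
// 0x44 0x61 0x74 0x65 ("Date") and 0x0029 is 0x29 0x00 (")" plus the NUL terminator),
// so the prefix and suffix are written with one 32-bit and one 16-bit store instead of
// byte-by-byte copies.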
+ #else + *r++ = 0x2900; // ")\0" + #endif + + wb->len += (size_t)((char *)r - b - 1); // terminate it wb->buffer[wb->len] = '\0'; @@ -240,30 +251,30 @@ void buffer_date(BUFFER *wb, int year, int month, int day, int hours, int minute buffer_need_bytes(wb, 36); char *b = &wb->buffer[wb->len]; - - int i = 0; - b[i++]= (char) (48 + year / 1000); year -= (year / 1000) * 1000; - b[i++]= (char) (48 + year / 100); year -= (year / 100) * 100; - b[i++]= (char) (48 + year / 10); - b[i++]= (char) (48 + year % 10); - b[i++]='-'; - b[i++]= (char) (48 + month / 10); - b[i++]= (char) (48 + month % 10); - b[i++]='-'; - b[i++]= (char) (48 + day / 10); - b[i++]= (char) (48 + day % 10); - b[i++]=' '; - b[i++]= (char) (48 + hours / 10); - b[i++]= (char) (48 + hours % 10); - b[i++]=':'; - b[i++]= (char) (48 + minutes / 10); - b[i++]= (char) (48 + minutes % 10); - b[i++]=':'; - b[i++]= (char) (48 + seconds / 10); - b[i++]= (char) (48 + seconds % 10); - b[i]='\0'; - - wb->len += i; + char *p = b; + + *p++ = '0' + year / 1000; year %= 1000; + *p++ = '0' + year / 100; year %= 100; + *p++ = '0' + year / 10; + *p++ = '0' + year % 10; + *p++ = '-'; + *p++ = '0' + month / 10; + *p++ = '0' + month % 10; + *p++ = '-'; + *p++ = '0' + day / 10; + *p++ = '0' + day % 10; + *p++ = ' '; + *p++ = '0' + hours / 10; + *p++ = '0' + hours % 10; + *p++ = ':'; + *p++ = '0' + minutes / 10; + *p++ = '0' + minutes % 10; + *p++ = ':'; + *p++ = '0' + seconds / 10; + *p++ = '0' + seconds % 10; + *p = '\0'; + + wb->len += (size_t)(p - b); // terminate it wb->buffer[wb->len] = '\0'; diff --git a/src/web_buffer.h b/src/web_buffer.h index 58dd9c094..73533f499 100644 --- a/src/web_buffer.h +++ b/src/web_buffer.h @@ -48,7 +48,7 @@ extern const char *buffer_tostring(BUFFER *wb); #define buffer_need_bytes(buffer, needed_free_size) do { if(unlikely((buffer)->size - (buffer)->len < (size_t)(needed_free_size))) buffer_increase((buffer), (size_t)(needed_free_size)); } while(0) -#define buffer_flush(wb) wb->buffer[wb->len = 0] = '\0' +#define buffer_flush(wb) wb->buffer[(wb)->len = 0] = '\0' extern void buffer_reset(BUFFER *wb); extern void buffer_strcat(BUFFER *wb, const char *txt); diff --git a/src/web_client.c b/src/web_client.c index 6500a59b2..601dda083 100644 --- a/src/web_client.c +++ b/src/web_client.c @@ -26,6 +26,7 @@ #include "global_statistics.h" #include "rrd.h" #include "rrd2json.h" +#include "registry.h" #include "web_client.h" #include "../config.h" @@ -72,8 +73,8 @@ struct web_client *web_client_create(int listener) if(getnameinfo(sadr, addrlen, w->client_ip, NI_MAXHOST, w->client_port, NI_MAXSERV, NI_NUMERICHOST | NI_NUMERICSERV) != 0) { error("Cannot getnameinfo() on received client connection."); - strncpy(w->client_ip, "UNKNOWN", NI_MAXHOST); - strncpy(w->client_port, "UNKNOWN", NI_MAXSERV); + strncpyz(w->client_ip, "UNKNOWN", NI_MAXHOST); + strncpyz(w->client_port, "UNKNOWN", NI_MAXSERV); } w->client_ip[NI_MAXHOST] = '\0'; w->client_port[NI_MAXSERV] = '\0'; @@ -128,6 +129,7 @@ struct web_client *web_client_create(int listener) return NULL; } + w->origin[0] = '*'; w->wait_receive = 1; if(web_clients) web_clients->prev = w; @@ -173,8 +175,18 @@ void web_client_reset(struct web_client *w) } w->last_url[0] = '\0'; + w->cookie1[0] = '\0'; + w->cookie2[0] = '\0'; + w->origin[0] = '*'; + w->origin[1] = '\0'; w->mode = WEB_CLIENT_MODE_NORMAL; + w->enable_gzip = 0; + w->keepalive = 0; + if(w->decoded_url) { + free(w->decoded_url); + w->decoded_url = NULL; + } buffer_reset(w->response.header_output); buffer_reset(w->response.header); @@ 
-316,7 +328,7 @@ int mysendfile(struct web_client *w, char *filename) // access the file char webfilename[FILENAME_MAX + 1]; - snprintf(webfilename, FILENAME_MAX, "%s/%s", web_dir, filename); + snprintfz(webfilename, FILENAME_MAX, "%s/%s", web_dir, filename); // check if the file exists struct stat stat; @@ -341,7 +353,7 @@ int mysendfile(struct web_client *w, char *filename) } if((stat.st_mode & S_IFMT) == S_IFDIR) { - snprintf(webfilename, FILENAME_MAX+1, "%s/index.html", filename); + snprintfz(webfilename, FILENAME_MAX, "%s/index.html", filename); return mysendfile(w, webfilename); } @@ -644,7 +656,7 @@ int web_client_api_request_v1_data(struct web_client *w, char *url) if(!name || !*name) continue; if(!value || !*value) continue; - debug(D_WEB_CLIENT, "%llu: API v1 query param '%s' with value '%s'", w->id, name, value); + debug(D_WEB_CLIENT, "%llu: API v1 data query param '%s' with value '%s'", w->id, name, value); // name and value are now the parameters // they are not null and not empty @@ -784,19 +796,149 @@ cleanup: return ret; } +int web_client_api_request_v1_registry(struct web_client *w, char *url) +{ + char person_guid[36 + 1] = ""; + + debug(D_WEB_CLIENT, "%llu: API v1 registry with URL '%s'", w->id, url); + + // FIXME + // The browser may send multiple cookies with our id + + char *cookie = strstr(w->response.data->buffer, " " NETDATA_REGISTRY_COOKIE_NAME "="); + if(cookie) + strncpyz(person_guid, &cookie[sizeof(NETDATA_REGISTRY_COOKIE_NAME) + 1], 36); + + char action = '\0'; + char *machine_guid = NULL, + *machine_url = NULL, + *url_name = NULL, + *search_machine_guid = NULL, + *delete_url = NULL, + *to_person_guid = NULL; + + while(url) { + char *value = mystrsep(&url, "?&[]"); + if (!value || !*value) continue; + + char *name = mystrsep(&value, "="); + if (!name || !*name) continue; + if (!value || !*value) continue; + + debug(D_WEB_CLIENT, "%llu: API v1 registry query param '%s' with value '%s'", w->id, name, value); + + if(!strcmp(name, "action")) { + if(!strcmp(value, "access")) action = 'A'; + else if(!strcmp(value, "hello")) action = 'H'; + else if(!strcmp(value, "delete")) action = 'D'; + else if(!strcmp(value, "search")) action = 'S'; + else if(!strcmp(value, "switch")) action = 'W'; + } + else if(!strcmp(name, "machine")) + machine_guid = value; + + else if(!strcmp(name, "url")) + machine_url = value; + + else if(action == 'A') { + if(!strcmp(name, "name")) + url_name = value; + } + else if(action == 'D') { + if(!strcmp(name, "delete_url")) + delete_url = value; + } + else if(action == 'S') { + if(!strcmp(name, "for")) + search_machine_guid = value; + } + else if(action == 'W') { + if(!strcmp(name, "to")) + to_person_guid = value; + } + } + + if(action == 'A' && (!machine_guid || !machine_url || !url_name)) { + buffer_flush(w->response.data); + buffer_sprintf(w->response.data, "Invalid registry request - access requires these parameters: machine ('%s'), url ('%s'), name ('%s')", + machine_guid?machine_guid:"UNSET", machine_url?machine_url:"UNSET", url_name?url_name:"UNSET"); + return 400; + } + else if(action == 'D' && (!machine_guid || !machine_url || !delete_url)) { + buffer_flush(w->response.data); + buffer_sprintf(w->response.data, "Invalid registry request - delete requires these parameters: machine ('%s'), url ('%s'), delete_url ('%s')", + machine_guid?machine_guid:"UNSET", machine_url?machine_url:"UNSET", delete_url?delete_url:"UNSET"); + return 400; + } + else if(action == 'S' && (!machine_guid || !machine_url || !search_machine_guid)) { + 
buffer_flush(w->response.data); + buffer_sprintf(w->response.data, "Invalid registry request - search requires these parameters: machine ('%s'), url ('%s'), for ('%s')", + machine_guid?machine_guid:"UNSET", machine_url?machine_url:"UNSET", search_machine_guid?search_machine_guid:"UNSET"); + return 400; + } + else if(action == 'W' && (!machine_guid || !machine_url || !to_person_guid)) { + buffer_flush(w->response.data); + buffer_sprintf(w->response.data, "Invalid registry request - switching identity requires these parameters: machine ('%s'), url ('%s'), to ('%s')", + machine_guid?machine_guid:"UNSET", machine_url?machine_url:"UNSET", to_person_guid?to_person_guid:"UNSET"); + return 400; + } + + switch(action) { + case 'A': + return registry_request_access_json(w, person_guid, machine_guid, machine_url, url_name, time(NULL)); + + case 'D': + return registry_request_delete_json(w, person_guid, machine_guid, machine_url, delete_url, time(NULL)); + + case 'S': + return registry_request_search_json(w, person_guid, machine_guid, machine_url, search_machine_guid, time(NULL)); + + case 'W': + return registry_request_switch_json(w, person_guid, machine_guid, machine_url, to_person_guid, time(NULL)); + + case 'H': + return registry_request_hello_json(w); + + default: + buffer_flush(w->response.data); + buffer_sprintf(w->response.data, "Invalid registry request - you need to set an action: hello, access, delete, search"); + return 400; + } + + buffer_flush(w->response.data); + buffer_sprintf(w->response.data, "Invalid or no registry action."); + return 400; +} + int web_client_api_request_v1(struct web_client *w, char *url) { + static uint32_t data_hash = 0, chart_hash = 0, charts_hash = 0, registry_hash = 0; + + if(unlikely(data_hash == 0)) { + data_hash = simple_hash("data"); + chart_hash = simple_hash("chart"); + charts_hash = simple_hash("charts"); + registry_hash = simple_hash("registry"); + } + // get the command char *tok = mystrsep(&url, "/?&"); if(tok && *tok) { debug(D_WEB_CLIENT, "%llu: Searching for API v1 command '%s'.", w->id, tok); + uint32_t hash = simple_hash(tok); - if(strcmp(tok, "data") == 0) + if(hash == data_hash && !strcmp(tok, "data")) return web_client_api_request_v1_data(w, url); - else if(strcmp(tok, "chart") == 0) + + else if(hash == chart_hash && !strcmp(tok, "chart")) return web_client_api_request_v1_chart(w, url); - else if(strcmp(tok, "charts") == 0) + + else if(hash == charts_hash && !strcmp(tok, "charts")) return web_client_api_request_v1_charts(w, url); + + else if(hash == registry_hash && !strcmp(tok, "registry")) + return web_client_api_request_v1_registry(w, url); + else { buffer_flush(w->response.data); buffer_sprintf(w->response.data, "Unsupported v1 API command: %s", tok); @@ -1043,110 +1185,209 @@ cleanup: } */ -void web_client_process(struct web_client *w) { - int code = 500; - ssize_t bytes; - int enable_gzip = 0; - w->wait_receive = 0; +static inline char *http_header_parse(struct web_client *w, char *s) { + char *e = s; - // check if we have an empty line (end of HTTP header) - if(strstr(w->response.data->buffer, "\r\n\r\n")) { - global_statistics_lock(); - global_statistics.web_requests++; - global_statistics_unlock(); + // find the : + while(*e && *e != ':') e++; + if(!*e || e[1] != ' ') return e; - gettimeofday(&w->tv_in, NULL); - debug(D_WEB_DATA, "%llu: Processing data buffer of %d bytes: '%s'.", w->id, w->response.data->len, w->response.data->buffer); + // get the name + *e = '\0'; + + // find the value + char *v, *ve; + v = ve = e + 2; + + // find 
the \r + while(*ve && *ve != '\r') ve++; + if(!*ve || ve[1] != '\n') { + *e = ':'; + return ve; + } + + // terminate the value + *ve = '\0'; - // check if the client requested keep-alive HTTP - if(strcasestr(w->response.data->buffer, "Connection: keep-alive")) w->keepalive = 1; - else w->keepalive = 0; + // fprintf(stderr, "HEADER: '%s' = '%s'\n", s, v); + if(!strcasecmp(s, "Origin")) + strncpyz(w->origin, v, ORIGIN_MAX); + + else if(!strcasecmp(s, "Connection")) { + if(strcasestr(v, "keep-alive")) + w->keepalive = 1; + } #ifdef NETDATA_WITH_ZLIB - // check if the client accepts deflate - if(web_enable_gzip && strstr(w->response.data->buffer, "gzip")) - enable_gzip = 1; -#endif // NETDATA_WITH_ZLIB + else if(!strcasecmp(s, "Accept-Encoding")) { + if(web_enable_gzip && strcasestr(v, "gzip")) { + w->enable_gzip = 1; + } + } +#endif /* NETDATA_WITH_ZLIB */ + + *e = ':'; + *ve = '\r'; + return ve; +} - int datasource_type = DATASOURCE_DATATABLE_JSONP; - //if(strstr(w->response.data->buffer, "X-DataSource-Auth")) - // datasource_type = DATASOURCE_GOOGLE_JSON; +// http_request_validate() +// returns: +// = 0 : all good, process the request +// > 0 : request is complete, but is not supported +// < 0 : request is incomplete - wait for more data - char *buf = (char *)buffer_tostring(w->response.data); - char *tok = strsep(&buf, " \r\n"); - char *url = NULL; - char *pointer_to_free = NULL; // keep url_decode() allocated buffer +static inline int http_request_validate(struct web_client *w) { + char *s = w->response.data->buffer, *encoded_url = NULL; + // is is a valid request? + if(!strncmp(s, "GET ", 4)) { + encoded_url = s = &s[4]; w->mode = WEB_CLIENT_MODE_NORMAL; + } + else if(!strncmp(s, "OPTIONS ", 8)) { + encoded_url = s = &s[8]; + w->mode = WEB_CLIENT_MODE_OPTIONS; + } + else { + w->wait_receive = 0; + return 1; + } + + // find the SPACE + "HTTP/" + while(*s) { + // find the space + while (*s && *s != ' ') s++; + + // is it SPACE + "HTTP/" ? + if(*s && !strncmp(s, " HTTP/", 6)) break; + else s++; + } + + // incomplete requests + if(!*s) { + w->wait_receive = 1; + return -2; + } + + // we have the end of encoded_url - remember it + char *ue = s; + + while(*s) { + // find a line feed + while (*s && *s != '\r') s++; + + // did we reach the end? + if(unlikely(!*s)) break; - if(buf && strcmp(tok, "GET") == 0) { - tok = strsep(&buf, " \r\n"); - pointer_to_free = url = url_decode(tok); - debug(D_WEB_CLIENT, "%llu: Processing HTTP GET on url '%s'.", w->id, url); + // is it \r\n ? + if (likely(s[1] == '\n')) { + + // is it again \r\n ? 
(header end) + if(unlikely(s[2] == '\r' && s[3] == '\n')) { + // a valid complete HTTP request found + + *ue = '\0'; + w->decoded_url = url_decode(encoded_url); + *ue = ' '; + + w->wait_receive = 0; + return 0; + } + + // another header line + s = http_header_parse(w, &s[2]); } - else if(buf && strcmp(tok, "OPTIONS") == 0) { - tok = strsep(&buf, " \r\n"); - pointer_to_free = url = url_decode(tok); - debug(D_WEB_CLIENT, "%llu: Processing HTTP OPTIONS on url '%s'.", w->id, url); - w->mode = WEB_CLIENT_MODE_OPTIONS; + else s++; + } + + // incomplete request + w->wait_receive = 1; + return -3; +} + +void web_client_process(struct web_client *w) { + int code = 500; + ssize_t bytes; + + int what_to_do = http_request_validate(w); + + // wait for more data + if(what_to_do < 0) { + if(w->response.data->len > TOO_BIG_REQUEST) { + strcpy(w->last_url, "too big request"); + + debug(D_WEB_CLIENT_ACCESS, "%llu: Received request is too big (%zd bytes).", w->id, w->response.data->len); + + code = 400; + buffer_flush(w->response.data); + buffer_sprintf(w->response.data, "Received request is too big (%zd bytes).\r\n", w->response.data->len); } - else if (buf && strcmp(tok, "POST") == 0) { - w->keepalive = 0; - tok = strsep(&buf, " \r\n"); - pointer_to_free = url = url_decode(tok); - debug(D_WEB_CLIENT, "%llu: I don't know how to handle POST with form data. Assuming it is a GET on url '%s'.", w->id, url); + else { + // wait for more data + return; } + } + else if(what_to_do > 0) { + strcpy(w->last_url, "not a valid response"); - w->last_url[0] = '\0'; + debug(D_WEB_CLIENT_ACCESS, "%llu: Cannot understand '%s'.", w->id, w->response.data->buffer); - if(w->mode == WEB_CLIENT_MODE_OPTIONS) { - strncpy(w->last_url, url, URL_MAX); - w->last_url[URL_MAX] = '\0'; + code = 500; + buffer_flush(w->response.data); + buffer_strcat(w->response.data, "I don't understand you...\r\n"); + } + else { // what_to_do == 0 + gettimeofday(&w->tv_in, NULL); + global_statistics_lock(); + global_statistics.web_requests++; + global_statistics_unlock(); + + // copy the URL - we are going to overwrite parts of it + // FIXME -- we should avoid it + strncpyz(w->last_url, w->decoded_url, URL_MAX); + + if(w->mode == WEB_CLIENT_MODE_OPTIONS) { code = 200; w->response.data->contenttype = CT_TEXT_PLAIN; buffer_flush(w->response.data); buffer_strcat(w->response.data, "OK"); } - else if(url) { + else { #ifdef NETDATA_WITH_ZLIB - if(enable_gzip) + if(w->enable_gzip) web_client_enable_deflate(w); #endif - strncpy(w->last_url, url, URL_MAX); - w->last_url[URL_MAX] = '\0'; - - tok = mystrsep(&url, "/?"); + char *url = w->decoded_url; + char *tok = mystrsep(&url, "/?"); if(tok && *tok) { debug(D_WEB_CLIENT, "%llu: Processing command '%s'.", w->id, tok); if(strcmp(tok, "api") == 0) { // the client is requesting api access - datasource_type = DATASOURCE_JSON; code = web_client_api_request(w, url); } -#ifdef NETDATA_INTERNAL_CHECKS - else if(strcmp(tok, "exit") == 0) { - netdata_exit = 1; + else if(strcmp(tok, "netdata.conf") == 0) { code = 200; + debug(D_WEB_CLIENT_ACCESS, "%llu: Sending netdata.conf ...", w->id); + w->response.data->contenttype = CT_TEXT_PLAIN; buffer_flush(w->response.data); - buffer_strcat(w->response.data, "will do"); + generate_config(w->response.data, 0); } -#endif else if(strcmp(tok, WEB_PATH_DATA) == 0) { // "data" - // the client is requesting rrd data - datasource_type = DATASOURCE_JSON; - code = web_client_data_request(w, url, datasource_type); + // the client is requesting rrd data -- OLD API + code = 
web_client_data_request(w, url, DATASOURCE_JSON); } else if(strcmp(tok, WEB_PATH_DATASOURCE) == 0) { // "datasource" - // the client is requesting google datasource - code = web_client_data_request(w, url, datasource_type); + // the client is requesting google datasource -- OLD API + code = web_client_data_request(w, url, DATASOURCE_DATATABLE_JSONP); } else if(strcmp(tok, WEB_PATH_GRAPH) == 0) { // "graph" - // the client is requesting an rrd graph + // the client is requesting an rrd graph -- OLD API // get the name of the data to show tok = mystrsep(&url, "/?&"); @@ -1176,7 +1417,40 @@ void web_client_process(struct web_client *w) { buffer_strcat(w->response.data, "Graph name?\r\n"); } } + else if(strcmp(tok, "list") == 0) { + // OLD API + code = 200; + + debug(D_WEB_CLIENT_ACCESS, "%llu: Sending list of RRD_STATS...", w->id); + + buffer_flush(w->response.data); + RRDSET *st = rrdset_root; + + for ( ; st ; st = st->next ) + buffer_sprintf(w->response.data, "%s\n", st->name); + } + else if(strcmp(tok, "all.json") == 0) { + // OLD API + code = 200; + debug(D_WEB_CLIENT_ACCESS, "%llu: Sending JSON list of all monitors of RRD_STATS...", w->id); + + w->response.data->contenttype = CT_APPLICATION_JSON; + buffer_flush(w->response.data); + rrd_stats_all_json(w->response.data); + } #ifdef NETDATA_INTERNAL_CHECKS + else if(strcmp(tok, "exit") == 0) { + code = 200; + w->response.data->contenttype = CT_TEXT_PLAIN; + buffer_flush(w->response.data); + + if(!netdata_exit) + buffer_strcat(w->response.data, "ok, will do..."); + else + buffer_strcat(w->response.data, "I am doing it already"); + + netdata_exit = 1; + } else if(strcmp(tok, "debug") == 0) { buffer_flush(w->response.data); @@ -1196,7 +1470,7 @@ void web_client_process(struct web_client *w) { else { code = 200; debug_flags |= D_RRD_STATS; - st->debug = st->debug?0:1; + st->debug = !st->debug; buffer_sprintf(w->response.data, "Chart %s has now debug %s.\r\n", tok, st->debug?"enabled":"disabled"); debug(D_WEB_CLIENT_ACCESS, "%llu: debug for %s is %s.", w->id, tok, st->debug?"enabled":"disabled"); } @@ -1218,39 +1492,11 @@ void web_client_process(struct web_client *w) { // just leave the buffer as is // it will be copied back to the client } -#endif - else if(strcmp(tok, "list") == 0) { - code = 200; - - debug(D_WEB_CLIENT_ACCESS, "%llu: Sending list of RRD_STATS...", w->id); - - buffer_flush(w->response.data); - RRDSET *st = rrdset_root; - - for ( ; st ; st = st->next ) - buffer_sprintf(w->response.data, "%s\n", st->name); - } - else if(strcmp(tok, "all.json") == 0) { - code = 200; - debug(D_WEB_CLIENT_ACCESS, "%llu: Sending JSON list of all monitors of RRD_STATS...", w->id); - - w->response.data->contenttype = CT_APPLICATION_JSON; - buffer_flush(w->response.data); - rrd_stats_all_json(w->response.data); - } - else if(strcmp(tok, "netdata.conf") == 0) { - code = 200; - debug(D_WEB_CLIENT_ACCESS, "%llu: Sending netdata.conf ...", w->id); - - w->response.data->contenttype = CT_TEXT_PLAIN; - buffer_flush(w->response.data); - generate_config(w->response.data, 0); - } +#endif /* NETDATA_INTERNAL_CHECKS */ else { char filename[FILENAME_MAX+1]; url = filename; - strncpy(filename, w->last_url, FILENAME_MAX); - filename[FILENAME_MAX] = '\0'; + strncpyz(filename, w->last_url, FILENAME_MAX); tok = mystrsep(&url, "?"); buffer_flush(w->response.data); code = mysendfile(w, (tok && *tok)?tok:"/"); @@ -1259,42 +1505,12 @@ void web_client_process(struct web_client *w) { else { char filename[FILENAME_MAX+1]; url = filename; - strncpy(filename, w->last_url, 
FILENAME_MAX); - filename[FILENAME_MAX] = '\0'; + strncpyz(filename, w->last_url, FILENAME_MAX); tok = mystrsep(&url, "?"); buffer_flush(w->response.data); code = mysendfile(w, (tok && *tok)?tok:"/"); } } - else { - strcpy(w->last_url, "not a valid response"); - - if(buf) debug(D_WEB_CLIENT_ACCESS, "%llu: Cannot understand '%s'.", w->id, buf); - - code = 500; - buffer_flush(w->response.data); - buffer_strcat(w->response.data, "I don't understand you...\r\n"); - } - - // free url_decode() buffer - if(pointer_to_free) { - free(pointer_to_free); - pointer_to_free = NULL; - } - } - else if(w->response.data->len > TOO_BIG_REQUEST) { - strcpy(w->last_url, "too big request"); - - debug(D_WEB_CLIENT_ACCESS, "%llu: Received request is too big (%zd bytes).", w->id, w->response.data->len); - - code = 400; - buffer_flush(w->response.data); - buffer_sprintf(w->response.data, "Received request is too big (%zd bytes).\r\n", w->response.data->len); - } - else { - // wait for more data - w->wait_receive = 1; - return; } gettimeofday(&w->tv_ready, NULL); @@ -1415,6 +1631,10 @@ void web_client_process(struct web_client *w) { code_msg = "Not Found"; break; + case 412: + code_msg = "Preconditions Failed"; + break; + default: code_msg = "Internal Server Error"; break; @@ -1428,18 +1648,37 @@ void web_client_process(struct web_client *w) { "HTTP/1.1 %d %s\r\n" "Connection: %s\r\n" "Server: NetData Embedded HTTP Server\r\n" + "Access-Control-Allow-Origin: %s\r\n" + "Access-Control-Allow-Credentials: true\r\n" "Content-Type: %s\r\n" - "Access-Control-Allow-Origin: *\r\n" - "Access-Control-Allow-Methods: GET, OPTIONS\r\n" - "Access-Control-Allow-Headers: accept, x-requested-with\r\n" - "Access-Control-Max-Age: 86400\r\n" "Date: %s\r\n" , code, code_msg , w->keepalive?"keep-alive":"close" + , w->origin , content_type_string , date ); + if(w->cookie1[0]) { + buffer_sprintf(w->response.header_output, + "Set-Cookie: %s\r\n", + w->cookie1); + } + + if(w->cookie2[0]) { + buffer_sprintf(w->response.header_output, + "Set-Cookie: %s\r\n", + w->cookie2); + } + + if(w->mode == WEB_CLIENT_MODE_OPTIONS) { + buffer_strcat(w->response.header_output, + "Access-Control-Allow-Methods: GET, OPTIONS\r\n" + "Access-Control-Allow-Headers: accept, x-requested-with, origin, content-type, cookie\r\n" + "Access-Control-Max-Age: 1209600\r\n" // 86400 * 14 + ); + } + if(buffer_strlen(w->response.header)) buffer_strcat(w->response.header_output, buffer_tostring(w->response.header)); @@ -1449,7 +1688,7 @@ void web_client_process(struct web_client *w) { "Cache-Control: no-cache\r\n" , date); } - else { + else if(w->mode != WEB_CLIENT_MODE_OPTIONS) { char edate[100]; time_t et = w->response.data->date + (86400 * 14); struct tm etmbuf, *etm = gmtime_r(&et, &etmbuf); diff --git a/src/web_client.h b/src/web_client.h index 3823dbc91..f663be4a1 100644 --- a/src/web_client.h +++ b/src/web_client.h @@ -11,6 +11,7 @@ #include <netdb.h> #include "web_buffer.h" +#include "dictionary.h" #define DEFAULT_DISCONNECT_IDLE_WEB_CLIENTS_AFTER_SECONDS 60 extern int web_client_timeout; @@ -26,6 +27,8 @@ extern int web_enable_gzip; #define URL_MAX 8192 #define ZLIB_CHUNK 16384 #define HTTP_RESPONSE_HEADER_SIZE 4096 +#define COOKIE_MAX 1024 +#define ORIGIN_MAX 1024 struct response { BUFFER *header; // our response header @@ -58,8 +61,14 @@ struct web_client { struct timeval tv_in, tv_ready; + char cookie1[COOKIE_MAX+1]; + char cookie2[COOKIE_MAX+1]; + char origin[ORIGIN_MAX+1]; + int mode; int keepalive; + int enable_gzip; + char *decoded_url; struct sockaddr_storage 
clientaddr; diff --git a/src/web_server.c b/src/web_server.c index 10bf39a78..0da72b5be 100644 --- a/src/web_server.c +++ b/src/web_server.c @@ -67,7 +67,6 @@ int create_listen_socket4(const char *ip, int port, int listen_backlog) { int sock; int sockopt = 1; - struct sockaddr_in name; debug(D_LISTENER, "IPv4 creating new listening socket on port %d", port); @@ -80,6 +79,7 @@ int create_listen_socket4(const char *ip, int port, int listen_backlog) /* avoid "address already in use" */ setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, (void*)&sockopt, sizeof(sockopt)); + struct sockaddr_in name; memset(&name, 0, sizeof(struct sockaddr_in)); name.sin_family = AF_INET; name.sin_port = htons (port); @@ -118,7 +118,6 @@ int create_listen_socket6(const char *ip, int port, int listen_backlog) { int sock = -1; int sockopt = 1; - struct sockaddr_in6 name; debug(D_LISTENER, "IPv6 creating new listening socket on port %d", port); @@ -131,6 +130,7 @@ int create_listen_socket6(const char *ip, int port, int listen_backlog) /* avoid "address already in use" */ setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, (void*)&sockopt, sizeof(sockopt)); + struct sockaddr_in6 name; memset(&name, 0, sizeof(struct sockaddr_in6)); name.sin6_family = AF_INET6; name.sin6_port = htons ((uint16_t) port); diff --git a/system/Makefile.in b/system/Makefile.in index b5acce0ba..e8f5eb3c8 100644 --- a/system/Makefile.in +++ b/system/Makefile.in @@ -156,6 +156,8 @@ OPTIONAL_MATH_CLFAGS = @OPTIONAL_MATH_CLFAGS@ OPTIONAL_MATH_LIBS = @OPTIONAL_MATH_LIBS@ OPTIONAL_NFACCT_CLFAGS = @OPTIONAL_NFACCT_CLFAGS@ OPTIONAL_NFACCT_LIBS = @OPTIONAL_NFACCT_LIBS@ +OPTIONAL_UUID_CLFAGS = @OPTIONAL_UUID_CLFAGS@ +OPTIONAL_UUID_LIBS = @OPTIONAL_UUID_LIBS@ OPTIONAL_ZLIB_CLFAGS = @OPTIONAL_ZLIB_CLFAGS@ OPTIONAL_ZLIB_LIBS = @OPTIONAL_ZLIB_LIBS@ PACKAGE = @PACKAGE@ @@ -177,6 +179,8 @@ PTHREAD_LIBS = @PTHREAD_LIBS@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ STRIP = @STRIP@ +UUID_CFLAGS = @UUID_CFLAGS@ +UUID_LIBS = @UUID_LIBS@ VERSION = @VERSION@ ZLIB_CFLAGS = @ZLIB_CFLAGS@ ZLIB_LIBS = @ZLIB_LIBS@ @@ -237,6 +241,7 @@ target_alias = @target_alias@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ +varlibdir = @varlibdir@ webdir = @webdir@ # diff --git a/system/netdata.logrotate.in b/system/netdata.logrotate.in index 763eb09c9..e77d5ff72 100644 --- a/system/netdata.logrotate.in +++ b/system/netdata.logrotate.in @@ -6,10 +6,16 @@ delaycompress notifempty sharedscripts - postrotate - if service netdata status > /dev/null ; then \ - service netdata restart > /dev/null; \ - fi; - endscript + # + # if you add netdata to your init.d/system.d + # comment su & copytruncate and uncomment postrotate + # to have netdata restart when logs are rotated + su netdata + copytruncate + # + #postrotate + # if service netdata status > /dev/null ; then \ + # service netdata restart > /dev/null; \ + # fi; + #endscript } - diff --git a/system/netdata.service.in b/system/netdata.service.in index bc26cc9dc..91db6122d 100644 --- a/system/netdata.service.in +++ b/system/netdata.service.in @@ -9,7 +9,8 @@ User=root Group=root PIDFile=@localstatedir_POST@/run/netdata.pid ExecStart=@sbindir_POST@/netdata -pidfile @localstatedir_POST@/run/netdata.pid -ExecStop=/bin/kill -SIGTERM $MAINPID +KillMode=mixed +KillSignal=SIGTERM TimeoutStopSec=30 [Install] diff --git a/tests/stress.sh b/tests/stress.sh index 572dc7d19..d09d69895 100755 --- a/tests/stress.sh +++ b/tests/stress.sh @@ -17,10 +17,10 @@ if [ "${#charts[@]}" -eq 0 ] fi update_every="$(curl "$host/netdata.conf" 
2>/dev/null | grep "update every = " | head -n 1 | cut -d '=' -f 2)" -[ $[ update_every + 1 - 1] -eq 0 ] && update_every=1 +[ $(( update_every + 1 - 1)) -eq 0 ] && update_every=1 entries="$(curl "$host/netdata.conf" 2>/dev/null | grep "history = " | head -n 1 | cut -d '=' -f 2)" -[ $[ entries + 1 - 1] -eq 0 ] && entries=3600 +[ $(( entries + 1 - 1)) -eq 0 ] && entries=3600 # to compare equal things, set the entries to 3600 max [ $entries -gt 3600 ] && entries=3600 @@ -35,8 +35,8 @@ formats=("jsonp" "json" "ssv" "csv" "datatable" "datasource" "tsv" "ssvcomma" "h options="flip|jsonwrap" now=$(date +%s) -first=$[now - (entries * update_every)] -duration=$[now - first] +first=$((now - (entries * update_every))) +duration=$((now - first)) file="$(mktemp /tmp/netdata-stress-XXXXXXXX)" cleanup() { @@ -50,22 +50,22 @@ do echo "curl --compressed --keepalive-time 120 --header \"Connection: keep-alive\" \\" >$file for x in {1..100} do - dt=$[RANDOM * duration / 32767] - st=$[RANDOM * duration / 32767] - et=$[ st + dt ] - [ $et -gt $now ] && st=$[ now - dt ] + dt=$((RANDOM * duration / 32767)) + st=$((RANDOM * duration / 32767)) + et=$(( st + dt )) + [ $et -gt $now ] && st=$(( now - dt )) - points=$[RANDOM * 2000 / 32767 + 2] - st=$[first + st] - et=$[first + et] + points=$((RANDOM * 2000 / 32767 + 2)) + st=$((first + st)) + et=$((first + et)) - mode=$[RANDOM * ${#modes[@]} / 32767] + mode=$((RANDOM * ${#modes[@]} / 32767)) mode="${modes[$mode]}" - chart=$[RANDOM * ${#charts[@]} / 32767] + chart=$((RANDOM * ${#charts[@]} / 32767)) chart="${charts[$chart]}" - format=$[RANDOM * ${#formats[@]} / 32767] + format=$((RANDOM * ${#formats[@]} / 32767)) format="${formats[$format]}" echo "--url \"$host/api/v1/data?chart=$chart&mode=$mode&format=$format&options=$options&after=$st&before=$et&points=$points\" \\" diff --git a/web/Makefile.am b/web/Makefile.am index 1b6b918be..174ef229b 100644 --- a/web/Makefile.am +++ b/web/Makefile.am @@ -4,18 +4,19 @@ MAINTAINERCLEANFILES= $(srcdir)/Makefile.in dist_web_DATA = \ - robots.txt \ - index.html \ demo.html \ demo2.html \ - tv.html \ + demosites.html \ dashboard.html \ dashboard.js \ dashboard.css \ dashboard.slate.css \ favicon.ico \ + index.html \ netdata-swagger.yaml \ netdata-swagger.json \ + robots.txt \ + tv.html \ version.txt \ $(NULL) diff --git a/web/Makefile.in b/web/Makefile.in index 98d5dcc76..e95290128 100644 --- a/web/Makefile.in +++ b/web/Makefile.in @@ -188,6 +188,8 @@ OPTIONAL_MATH_CLFAGS = @OPTIONAL_MATH_CLFAGS@ OPTIONAL_MATH_LIBS = @OPTIONAL_MATH_LIBS@ OPTIONAL_NFACCT_CLFAGS = @OPTIONAL_NFACCT_CLFAGS@ OPTIONAL_NFACCT_LIBS = @OPTIONAL_NFACCT_LIBS@ +OPTIONAL_UUID_CLFAGS = @OPTIONAL_UUID_CLFAGS@ +OPTIONAL_UUID_LIBS = @OPTIONAL_UUID_LIBS@ OPTIONAL_ZLIB_CLFAGS = @OPTIONAL_ZLIB_CLFAGS@ OPTIONAL_ZLIB_LIBS = @OPTIONAL_ZLIB_LIBS@ PACKAGE = @PACKAGE@ @@ -209,6 +211,8 @@ PTHREAD_LIBS = @PTHREAD_LIBS@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ STRIP = @STRIP@ +UUID_CFLAGS = @UUID_CFLAGS@ +UUID_LIBS = @UUID_LIBS@ VERSION = @VERSION@ ZLIB_CFLAGS = @ZLIB_CFLAGS@ ZLIB_LIBS = @ZLIB_LIBS@ @@ -269,6 +273,7 @@ target_alias = @target_alias@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ +varlibdir = @varlibdir@ webdir = @webdir@ # @@ -276,18 +281,19 @@ webdir = @webdir@ # MAINTAINERCLEANFILES = $(srcdir)/Makefile.in dist_web_DATA = \ - robots.txt \ - index.html \ demo.html \ demo2.html \ - tv.html \ + demosites.html \ dashboard.html \ dashboard.js \ dashboard.css \ dashboard.slate.css \ favicon.ico \ + index.html \ 
netdata-swagger.yaml \ netdata-swagger.json \ + robots.txt \ + tv.html \ version.txt \ $(NULL) diff --git a/web/dashboard.css b/web/dashboard.css index a7b090d66..63e2b905f 100644 --- a/web/dashboard.css +++ b/web/dashboard.css @@ -10,11 +10,22 @@ body { margin-left: 55px; } +.netdata-chart-row { + width: 100%; + text-align: center; + display: flex; + display: -webkit-flex; + display: -moz-flex; + align-items: baseline; + -moz-align-items: baseline; + -webkit-align-items: baseline; + justify-content: center; + -webkit-justify-content: center; + -moz-justify-content: center; +} + .netdata-container { - display: -webkit-flex; /* Safari */ - -webkit-flex-wrap: wrap; /* Safari 6.1+ */ display: inline-block; - flex-wrap: wrap; overflow: hidden; /* required for child elements to have absolute position */ @@ -31,10 +42,7 @@ body { } .netdata-container-with-legend { - display: -webkit-flex; /* Safari */ - -webkit-flex-wrap: wrap; /* Safari 6.1+ */ display: inline-block; - flex-wrap: wrap; overflow: hidden; /* fix minimum scrollbar issue in firefox */ diff --git a/web/dashboard.html b/web/dashboard.html index fd505078d..2b6c80684 100644 --- a/web/dashboard.html +++ b/web/dashboard.html @@ -646,4 +646,4 @@ So, to avoid flashing the charts, we destroy and re-create the charts on each up <!-- <script> netdataServer = "http://box:19999"; </script> --> <!-- load the dashboard manager - it will do the rest --> - <script type="text/javascript" src="dashboard.js"></script> + <script type="text/javascript" src="dashboard.js?v37"></script> diff --git a/web/dashboard.js b/web/dashboard.js index b6c62ae3c..27847a243 100644 --- a/web/dashboard.js +++ b/web/dashboard.js @@ -12,6 +12,9 @@ // var netdataNoBootstrap = true; // do not load bootstrap // var netdataDontStart = true; // do not start the thread to process the charts // var netdataErrorCallback = null; // Callback function that will be invoked upon error +// var netdataNoRegistry = true; // Don't update the registry for this access +// var netdataRegistryCallback = null; // Callback function that will be invoked with one param, +// the URLs from the registry // // You can also set the default netdata server, using the following. 
// When this variable is not set, we assume the page is hosted on your @@ -19,8 +22,28 @@ // var netdataServer = "http://yourhost:19999"; // set your NetData server //(function(window, document, undefined) { + + // ------------------------------------------------------------------------ + // compatibility fixes + // fix IE issue with console - if(!window.console){ window.console = {log: function(){} }; } + if(!window.console) { window.console = { log: function(){} }; } + + // if string.endsWith is not defined, define it + if(typeof String.prototype.endsWith !== 'function') { + String.prototype.endsWith = function(s) { + if(s.length > this.length) return false; + return this.slice(-s.length) === s; + }; + } + + // if string.startsWith is not defined, define it + if(typeof String.prototype.startsWith !== 'function') { + String.prototype.startsWith = function(s) { + if(s.length > this.length) return false; + return this.slice(s.length) === s; + }; + } // global namespace var NETDATA = window.NETDATA || {}; @@ -53,7 +76,11 @@ NETDATA.serverDefault = netdataServer; else { var s = NETDATA._scriptSource(); - NETDATA.serverDefault = s.replace(/\/dashboard.js(\?.*)*$/g, ""); + if(s) NETDATA.serverDefault = s.replace(/\/dashboard.js(\?.*)*$/g, ""); + else { + console.log('WARNING: Cannot detect the URL of the netdata server.'); + NETDATA.serverDefault = null; + } } if(NETDATA.serverDefault === null) @@ -80,7 +107,7 @@ NETDATA.google_js = 'https://www.google.com/jsapi'; NETDATA.themes = { - default: { + white: { bootstrap_css: NETDATA.serverDefault + 'css/bootstrap.min.css', dashboard_css: NETDATA.serverDefault + 'dashboard.css', background: '#FFFFFF', @@ -95,7 +122,7 @@ easypiechart_scale: '#dfe0e0', gauge_pointer: '#C0C0C0', gauge_stroke: '#F0F0F0', - gauge_gradient: true + gauge_gradient: false }, slate: { bootstrap_css: NETDATA.serverDefault + 'css/bootstrap.slate.min.css', @@ -124,7 +151,7 @@ if(typeof netdataTheme !== 'undefined' && typeof NETDATA.themes[netdataTheme] !== 'undefined') NETDATA.themes.current = NETDATA.themes[netdataTheme]; else - NETDATA.themes.current = NETDATA.themes.default; + NETDATA.themes.current = NETDATA.themes.white; NETDATA.colors = NETDATA.themes.current.colors; @@ -188,8 +215,6 @@ last_resized: new Date().getTime(), // the timestamp of the last resize request - crossDomainAjax: false, // enable this to request crossDomain AJAX - last_page_scroll: 0, // the timestamp the last time the page was scrolled // the current profile @@ -284,6 +309,12 @@ } }; + NETDATA.statistics = { + refreshes_total: 0, + refreshes_active: 0, + refreshes_active_max: 0 + }; + // ---------------------------------------------------------------------------------------------------------------- // local storage options @@ -461,7 +492,15 @@ 403: { message: "Chart library not enabled/is failed", alert: false }, 404: { message: "Chart not found", alert: false }, 405: { message: "Cannot download charts index from server", alert: true }, - 406: { message: "Invalid charts index downloaded from server", alert: true } + 406: { message: "Invalid charts index downloaded from server", alert: true }, + 407: { message: "Cannot HELLO netdata server", alert: false }, + 408: { message: "Netdata servers sent invalid response to HELLO", alert: false }, + 409: { message: "Cannot ACCESS netdata registry", alert: false }, + 410: { message: "Netdata registry ACCESS failed", alert: false }, + 411: { message: "Netdata registry server send invalid response to DELETE ", alert: false }, + 412: { message: "Netdata registry 
DELETE failed", alert: false }, + 413: { message: "Netdata registry server send invalid response to SWITCH ", alert: false }, + 414: { message: "Netdata registry SWITCH failed", alert: false } }; NETDATA.errorLast = { code: 0, @@ -541,7 +580,6 @@ $.ajax({ url: host + '/api/v1/charts', - crossDomain: NETDATA.options.crossDomainAjax, async: true, cache: false }) @@ -975,6 +1013,7 @@ this.units = self.data('units') || null; // the units of the chart dimensions this.append_options = self.data('append-options') || null; // the units of the chart dimensions + this.running = false; // boolean - true when the chart is being refreshed now this.validated = false; // boolean - has the chart been validated? this.enabled = true; // boolean - is the chart enabled for refresh? this.paused = false; // boolean - is the chart paused for any reason? @@ -1098,7 +1137,7 @@ var w = that.element.offsetWidth; if(w === null || w === 0) { // the div is hidden - // this is resize the chart when next viewed + // this will resize the chart when next viewed that.tm.last_resized = 0; } else @@ -1167,7 +1206,7 @@ var lost = Math.max(h * 0.2, 5); h -= lost; - // center the text, verically + // center the text, vertically var paddingTop = (lost - 5) / 2; // but check the width too @@ -1247,8 +1286,8 @@ if(isHidden() === true) return; if(that.chart_created === true) { - // we should destroy it if(NETDATA.options.current.destroy_on_hide === true) { + // we should destroy it init(); } else { @@ -1256,6 +1295,12 @@ that.element_chart.style.display = 'none'; if(that.element_legend !== null) that.element_legend.style.display = 'none'; that.tm.last_hidden = new Date().getTime(); + + // de-allocate data + // This works, but I not sure there are no corner cases somewhere + // so it is commented - if the user has memory issues he can + // set Destroy on Hide for all charts + // that.data = null; } } @@ -2580,8 +2625,6 @@ if(this.debug === true) this.log('updateChartWithData() called.'); - this._updating = false; - // this may force the chart to be re-created resizeChart(); @@ -2676,8 +2719,8 @@ if(NETDATA.globalPanAndZoom.isActive()) this.tm.last_autorefreshed = 0; else { - if(NETDATA.options.current.parallel_refresher === true && NETDATA.options.current.concurrent_refreshes) - this.tm.last_autorefreshed = Math.round(now / this.data_update_every) * this.data_update_every; + if(NETDATA.options.current.parallel_refresher === true && NETDATA.options.current.concurrent_refreshes === true) + this.tm.last_autorefreshed = now - (now % this.data_update_every); else this.tm.last_autorefreshed = now; } @@ -2739,11 +2782,16 @@ if(this.debug === true) this.log('updating from ' + this.data_url); + NETDATA.statistics.refreshes_total++; + NETDATA.statistics.refreshes_active++; + + if(NETDATA.statistics.refreshes_active > NETDATA.statistics.refreshes_active_max) + NETDATA.statistics.refreshes_active_max = NETDATA.statistics.refreshes_active; + this._updating = true; this.xhr = $.ajax( { url: this.data_url, - crossDomain: NETDATA.options.crossDomainAjax, cache: false, async: true }) @@ -2757,6 +2805,7 @@ error('data download failed for url: ' + that.data_url); }) .always(function() { + NETDATA.statistics.refreshes_active--; that._updating = false; if(typeof callback === 'function') callback(); }); @@ -2813,13 +2862,20 @@ } }; - this.isAutoRefreshed = function() { + this.isAutoRefreshable = function() { return (this.current.autorefresh); }; this.canBeAutoRefreshed = function() { var now = new Date().getTime(); + if(this.running === true) { + 
if(this.debug === true) + this.log('I am already running'); + + return false; + } + if(this.enabled === false) { if(this.debug === true) this.log('I am not enabled'); @@ -2850,7 +2906,7 @@ return true; } - if(this.isAutoRefreshed() === true) { + if(this.isAutoRefreshable() === true) { // allow the first update, even if the page is not visible if(this.updates_counter && this.updates_since_last_unhide && NETDATA.options.page_is_visible === false) { if(NETDATA.options.debug.focus === true || this.debug === true) @@ -2910,8 +2966,16 @@ }; this.autoRefresh = function(callback) { - if(this.canBeAutoRefreshed() === true) { - this.updateChart(callback); + if(this.canBeAutoRefreshed() === true && this.running === false) { + var state = this; + + state.running = true; + state.updateChart(function() { + state.running = false; + + if(typeof callback !== 'undefined') + callback(); + }); } else { if(typeof callback !== 'undefined') @@ -2948,7 +3012,6 @@ $.ajax( { url: this.host + this.chart_url, - crossDomain: NETDATA.options.crossDomainAjax, cache: false, async: true }) @@ -3234,11 +3297,12 @@ var parallel = new Array(); var targets = NETDATA.options.targets; var len = targets.length; + var state; while(len--) { - if(targets[len].isVisible() === false) + state = targets[len]; + if(state.isVisible() === false || state.running === true) continue; - var state = targets[len]; if(state.library.initialized === false) { if(state.library.enabled === true) { state.library.initialize(NETDATA.chartRefresher); @@ -3253,24 +3317,15 @@ } if(parallel.length > 0) { - var parallel_jobs = parallel.length; - // this will execute the jobs in parallel $(parallel).each(function() { - this.autoRefresh(function() { - parallel_jobs--; - - if(parallel_jobs === 0) { - setTimeout(NETDATA.chartRefresher, - NETDATA.chartRefresherWaitTime()); - } - }); + this.autoRefresh(); }) } - else { - setTimeout(NETDATA.chartRefresher, - NETDATA.chartRefresherWaitTime()); - } + + // run the next refresh iteration + setTimeout(NETDATA.chartRefresher, + NETDATA.chartRefresherWaitTime()); }; NETDATA.parseDom = function(callback) { @@ -3330,7 +3385,14 @@ $('.modal').on('hidden.bs.modal', NETDATA.onscroll); $('.modal').on('shown.bs.modal', NETDATA.onscroll); + // bootstrap collapse switching + $('.collapse').on('hidden.bs.collapse', NETDATA.onscroll); + $('.collapse').on('shown.bs.collapse', NETDATA.onscroll); + NETDATA.parseDom(NETDATA.chartRefresher); + + // Registry initialization + setTimeout(NETDATA.registry.init, 3000); }; // ---------------------------------------------------------------------------------------------------------------- @@ -4623,7 +4685,7 @@ state.easyPieChartEvent.timer = null; } - if(state.isAutoRefreshed() === true && state.data !== null) { + if(state.isAutoRefreshable() === true && state.data !== null) { NETDATA.easypiechartChartUpdate(state, state.data); } else { @@ -4674,7 +4736,7 @@ NETDATA.easypiechartChartUpdate = function(state, data) { var value, max, pcent; - if(NETDATA.globalPanAndZoom.isActive() === true || state.isAutoRefreshed() === false) { + if(NETDATA.globalPanAndZoom.isActive() === true || state.isAutoRefreshable() === false) { value = null; max = 0; pcent = 0; @@ -4877,7 +4939,7 @@ state.gaugeEvent.timer = null; } - if(state.isAutoRefreshed() === true && state.data !== null) { + if(state.isAutoRefreshable() === true && state.data !== null) { NETDATA.gaugeChartUpdate(state, state.data); } else { @@ -4931,7 +4993,7 @@ NETDATA.gaugeChartUpdate = function(state, data) { var value, min, max; - 
if(NETDATA.globalPanAndZoom.isActive() === true || state.isAutoRefreshed() === false) { + if(NETDATA.globalPanAndZoom.isActive() === true || state.isAutoRefreshable() === false) { value = 0; min = 0; max = 1; @@ -4993,11 +5055,33 @@ colorStop: stopColor, // just experiment with them strokeColor: strokeColor, // to see which ones work best for you limitMax: true, - generateGradient: generateGradient, + generateGradient: (generateGradient === true)?true:false, gradientType: 0 }; - if(generateGradient === false && NETDATA.themes.current.gauge_gradient === true) { + if (generateGradient.constructor === Array) { + // example options: + // data-gauge-generate-gradient="[0, 50, 100]" + // data-gauge-gradient-percent-color-0="#FFFFFF" + // data-gauge-gradient-percent-color-50="#999900" + // data-gauge-gradient-percent-color-100="#000000" + + options.percentColors = new Array(); + var len = generateGradient.length; + while(len--) { + var pcent = generateGradient[len]; + var color = self.data('gauge-gradient-percent-color-' + pcent.toString()) || false; + if(color !== false) { + var a = new Array(); + a[0] = pcent / 100; + a[1] = color; + options.percentColors.unshift(a); + } + } + if(options.percentColors.length === 0) + delete options.percentColors; + } + else if(generateGradient === false && NETDATA.themes.current.gauge_gradient === true) { options.percentColors = [ [0.0, NETDATA.colorLuminance(startColor, (lum_d * 10) - (lum_d * 0))], [0.1, NETDATA.colorLuminance(startColor, (lum_d * 10) - (lum_d * 1))], @@ -5304,12 +5388,13 @@ }; // ---------------------------------------------------------------------------------------------------------------- - // Start up + // Load required JS libraries and CSS NETDATA.requiredJs = [ { url: NETDATA.serverDefault + 'lib/bootstrap.min.js', isAlreadyLoaded: function() { + // check if bootstrap is loaded if(typeof $().emulateTransitionEnd == 'function') return true; else { @@ -5401,11 +5486,214 @@ NETDATA.loadRequiredCSS(++index); }; + + // ---------------------------------------------------------------------------------------------------------------- + // Registry of netdata hosts + + NETDATA.registry = { + server: null, // the netdata registry server + person_guid: null, // the unique ID of this browser / user + machine_guid: null, // the unique ID the netdata server that served dashboard.js + hostname: null, // the hostname of the netdata server that served dashboard.js + urls: null, // the user's other URLs + urls_array: null, // the user's other URLs in an array + + parsePersonUrls: function(person_urls) { + // console.log(person_urls); + + if(person_urls) { + NETDATA.registry.urls = {}; + NETDATA.registry.urls_array = new Array(); + + var now = new Date().getTime(); + var apu = person_urls; + var i = apu.length; + while(i--) { + if(typeof NETDATA.registry.urls[apu[i][0]] === 'undefined') { + // console.log('adding: ' + apu[i][4] + ', ' + ((now - apu[i][2]) / 1000).toString()); + + var obj = { + guid: apu[i][0], + url: apu[i][1], + last_t: apu[i][2], + accesses: apu[i][3], + name: apu[i][4], + alternate_urls: new Array() + }; + + NETDATA.registry.urls[apu[i][0]] = obj; + NETDATA.registry.urls_array.push(obj); + } + else { + // console.log('appending: ' + apu[i][4] + ', ' + ((now - apu[i][2]) / 1000).toString()); + + var pu = NETDATA.registry.urls[apu[i][0]]; + if(pu.last_t < apu[i][2]) { + pu.url = apu[i][1]; + pu.last_t = apu[i][2]; + pu.name = apu[i][4]; + } + pu.accesses += apu[i][3]; + pu.alternate_urls.push(apu[i][1]); + } + } + } + + if(typeof 
netdataRegistryCallback === 'function') + netdataRegistryCallback(NETDATA.registry.urls_array); + }, + + init: function() { + if(typeof netdataNoRegistry !== 'undefined' && netdataNoRegistry) + return; + + NETDATA.registry.hello(NETDATA.serverDefault, function(data) { + if(data) { + NETDATA.registry.server = data.registry; + NETDATA.registry.machine_guid = data.machine_guid; + NETDATA.registry.hostname = data.hostname; + + NETDATA.registry.access(10, function (person_urls) { + NETDATA.registry.parsePersonUrls(person_urls); + + }); + } + }); + }, + + hello: function(host, callback) { + // send HELLO to a netdata server: + // 1. verifies the server is reachable + // 2. responds with the registry URL, the machine GUID of this netdata server and its hostname + $.ajax({ + url: host + '/api/v1/registry?action=hello', + async: true, + cache: false, + xhrFields: { withCredentials: true } // required for the cookie + }) + .done(function(data) { + if(typeof data.status !== 'string' || data.status !== 'ok') { + NETDATA.error(408, host + ' response: ' + JSON.stringify(data)); + data = null; + } + + if(typeof callback === 'function') + callback(data); + }) + .fail(function() { + NETDATA.error(407, host); + + if(typeof callback === 'function') + callback(null); + }); + }, + + access: function(max_redirects, callback) { + // send ACCESS to a netdata registry: + // 1. it lets it know we are accessing a netdata server (its machine GUID and its URL) + // 2. it responds with a list of netdata servers we know + // the registry identifies us using a cookie it sets the first time we access it + // the registry may respond with a redirect URL to send us to another registry + $.ajax({ + url: NETDATA.registry.server + '/api/v1/registry?action=access&machine=' + NETDATA.registry.machine_guid + '&name=' + encodeURIComponent(NETDATA.registry.hostname) + '&url=' + encodeURIComponent(NETDATA.serverDefault), // + '&visible_url=' + encodeURIComponent(document.location), + async: true, + cache: false, + xhrFields: { withCredentials: true } // required for the cookie + }) + .done(function(data) { + var redirect = null; + if(typeof data.registry === 'string') + redirect = data.registry; + + if(typeof data.status !== 'string' || data.status !== 'ok') { + NETDATA.error(409, NETDATA.registry.server + ' responded with: ' + JSON.stringify(data)); + data = null; + } + + if(data === null && redirect !== null && max_redirects > 0) { + NETDATA.registry.server = redirect; + NETDATA.registry.access(max_redirects - 1, callback); + } + else { + if(typeof data.person_guid === 'string') + NETDATA.registry.person_guid = data.person_guid; + + if(typeof callback === 'function') + callback(data.urls); + } + }) + .fail(function() { + NETDATA.error(410, NETDATA.registry.server); + + if(typeof callback === 'function') + callback(null); + }); + }, + + delete: function(delete_url, callback) { + // send DELETE to a netdata registry: + $.ajax({ + url: NETDATA.registry.server + '/api/v1/registry?action=delete&machine=' + NETDATA.registry.machine_guid + '&name=' + encodeURIComponent(NETDATA.registry.hostname) + '&url=' + encodeURIComponent(NETDATA.serverDefault) + '&delete_url=' + encodeURIComponent(delete_url), + async: true, + cache: false, + xhrFields: { withCredentials: true } // required for the cookie + }) + .done(function(data) { + if(typeof data.status !== 'string' || data.status !== 'ok') { + NETDATA.error(411, NETDATA.registry.server + ' responded with: ' + JSON.stringify(data)); + data = null; + } + + if(typeof callback === 'function') + 
callback(data); + }) + .fail(function() { + NETDATA.error(412, NETDATA.registry.server); + + if(typeof callback === 'function') + callback(null); + }); + }, + + switch: function(new_person_guid, callback) { + // impersonate + $.ajax({ + url: NETDATA.registry.server + '/api/v1/registry?action=switch&machine=' + NETDATA.registry.machine_guid + '&name=' + encodeURIComponent(NETDATA.registry.hostname) + '&url=' + encodeURIComponent(NETDATA.serverDefault) + '&to=' + new_person_guid, + async: true, + cache: false, + xhrFields: { withCredentials: true } // required for the cookie + }) + .done(function(data) { + if(typeof data.status !== 'string' || data.status !== 'ok') { + NETDATA.error(413, NETDATA.registry.server + ' responded with: ' + JSON.stringify(data)); + data = null; + } + + if(typeof callback === 'function') + callback(data); + }) + .fail(function() { + NETDATA.error(414, NETDATA.registry.server); + + if(typeof callback === 'function') + callback(null); + }); + } + }; + + // ---------------------------------------------------------------------------------------------------------------- + // Boot it! + NETDATA.errorReset(); NETDATA.loadRequiredCSS(0); NETDATA._loadjQuery(function() { NETDATA.loadRequiredJs(0, function() { + if(typeof $().emulateTransitionEnd !== 'function') { + // bootstrap is not available + NETDATA.options.current.show_help = false; + } + if(typeof netdataDontStart === 'undefined' || !netdataDontStart) { if(NETDATA.options.debug.main_loop === true) console.log('starting chart refresh thread'); diff --git a/web/dashboard.slate.css b/web/dashboard.slate.css index 662731061..0536a3ed6 100644 --- a/web/dashboard.slate.css +++ b/web/dashboard.slate.css @@ -18,11 +18,22 @@ body { margin-left: 55px; } +.netdata-chart-row { + width: 100%; + text-align: center; + display: flex; + display: -webkit-flex; + display: -moz-flex; + align-items: flex-end; + -moz-align-items: flex-end; + -webkit-align-items: flex-end; + justify-content: center; + -moz--webkit-justify-content: center; + -moz-justify-content: center; +} + .netdata-container { - display: -webkit-flex; /* Safari */ - -webkit-flex-wrap: wrap; /* Safari 6.1+ */ display: inline-block; - flex-wrap: wrap; overflow: hidden; /* required for child elements to have absolute position */ @@ -39,10 +50,7 @@ body { } .netdata-container-with-legend { - display: -webkit-flex; /* Safari */ - -webkit-flex-wrap: wrap; /* Safari 6.1+ */ display: inline-block; - flex-wrap: wrap; overflow: hidden; /* fix minimum scrollbar issue in firefox */ diff --git a/web/demo.html b/web/demo.html index 8a6c0c129..4b91d8394 100644 --- a/web/demo.html +++ b/web/demo.html @@ -1,42 +1,42 @@ -<!DOCTYPE html>
-<html lang="en">
-<head>
- <title>NetData Dashboard</title>
-
- <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
- <meta charset="utf-8">
- <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
- <meta name="viewport" content="width=device-width, initial-scale=1">
- <meta name="apple-mobile-web-app-capable" content="yes">
- <meta name="apple-mobile-web-app-status-bar-style" content="black-translucent">
- <meta name="author" content="costa@tsaousis.gr">
-
- <script type="text/javascript" src="dashboard.js"></script>
-</head>
-<body>
-
-<div style="width: 100%; text-align: center;">
- <div data-netdata="netdata.server_cpu"
- data-dimensions="user"
- data-chart-library="gauge"
- data-width="150px"
- data-after="-60"
- data-points="60"
- data-title="Yes! Realtime!"
- data-units="I am alive!"
- data-colors="#FF5555"
- ></div>
- <br/>
- <div data-netdata="netdata.server_cpu"
- data-dimensions="user"
- data-chart-library="dygraph"
- data-dygraph-theme="sparkline"
- data-width="200px"
- data-height="30px"
- data-after="-60"
- data-points="60"
- data-colors="#FF5555"
- ></div>
-</div>
-</body>
-</html>
+<!DOCTYPE html> +<html lang="en"> +<head> + <title>NetData Dashboard</title> + + <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> + <meta charset="utf-8"> + <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"> + <meta name="viewport" content="width=device-width, initial-scale=1"> + <meta name="apple-mobile-web-app-capable" content="yes"> + <meta name="apple-mobile-web-app-status-bar-style" content="black-translucent"> + <meta name="author" content="costa@tsaousis.gr"> + + <script type="text/javascript" src="dashboard.js?v37"></script> +</head> +<body> + +<div style="width: 100%; text-align: center;"> + <div data-netdata="netdata.server_cpu" + data-dimensions="user" + data-chart-library="gauge" + data-width="150px" + data-after="-60" + data-points="60" + data-title="Yes! Realtime!" + data-units="I am alive!" + data-colors="#FF5555" + ></div> + <br/> + <div data-netdata="netdata.server_cpu" + data-dimensions="user" + data-chart-library="dygraph" + data-dygraph-theme="sparkline" + data-width="200px" + data-height="30px" + data-after="-60" + data-points="60" + data-colors="#FF5555" + ></div> +</div> +</body> +</html> diff --git a/web/demo2.html b/web/demo2.html index 7a8d75a54..9530d914e 100644 --- a/web/demo2.html +++ b/web/demo2.html @@ -1,134 +1,134 @@ -<!DOCTYPE html>
-<html lang="en">
-<head>
- <title>NetData Dashboard</title>
-
- <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
- <meta charset="utf-8">
- <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
- <meta name="viewport" content="width=device-width, initial-scale=1">
- <meta name="apple-mobile-web-app-capable" content="yes">
- <meta name="apple-mobile-web-app-status-bar-style" content="black-translucent">
- <meta name="author" content="costa@tsaousis.gr">
-
- <script>var netdataTheme = 'slate';</script>
- <script type="text/javascript" src="dashboard.js"></script>
-</head>
-<body>
-
-<div class="container" style="width: 90%; padding-top: 10px; text-align: center; color: #AAA">
- <div style="font-size: 7vw;">why netdata?</div>
- <br/>
- <div style="font-size: 2vw; color: white;">These charts visualize the same data...</div>
-
-
- <!-- Nav tabs -->
- <ul class="nav nav-tabs" role="tablist">
- <li role="presentation" class="active"><a href="#gauge" aria-controls="gauge" role="tab" data-toggle="tab">Gauge.js</a></li>
- <li role="presentation"><a href="#easypiechart" aria-controls="easypiechart" role="tab" data-toggle="tab">Easy Pie Chart</a></li>
- </ul>
-
- <!-- Tab panes -->
- <div class="tab-content">
- <div role="tabpanel" class="tab-pane active" id="gauge">
-
- <div style="display: inline-block; width: 35.8%">
- <div style="font-size: 1.2vw; color: #666; padding-top: 10px;"><i class="fa fa-comment"></i> I can trace an issue like this</div>
- <br/>
- <div data-netdata="example.random2"
- data-dimensions="random"
- data-chart-library="gauge"
- data-gauge-max-value="32767"
- data-width="100%"
- data-after="-600"
- data-points="600"
- data-title="1/second (netdata default)"
- data-units="important metric"
- data-colors="#5A5"
- ></div>
- </div>
- <div style="display: inline-block; width: 50%">
- <div style="font-size: 1.2vw; color: #666;"><i class="fa fa-comment"></i> Can you trace an issue like these?<br/> <br/></div>
- <div data-netdata="example.random2"
- data-dimensions="random"
- data-chart-library="gauge"
- data-gauge-max-value="32767"
- data-width="45%"
- data-after="-600"
- data-points="40"
- data-title="Updates Every 15 Sec"
- data-units="important metric"
- data-colors="#C55"
- ></div>
- <div data-netdata="example.random2"
- data-dimensions="random"
- data-chart-library="gauge"
- data-gauge-max-value="32767"
- data-width="45%"
- data-after="-600"
- data-points="2"
- data-title="Updates Every 5 Mins"
- data-units="important metric"
- data-colors="#C55"
- ></div>
- </div>
- </div>
- <div role="tabpanel" class="tab-pane" id="easypiechart">
-
- <div style="display: inline-block; width: 25%">
- <div style="font-size: 1.2vw; color: #666; padding-top: 10px;"><i class="fa fa-comment"></i> I can trace an issue like this</div>
- <br/>
- <div data-netdata="example.random2"
- data-dimensions="random"
- data-chart-library="easypiechart"
- data-easypiechart-max-value="32767"
- data-width="100%"
- data-after="-600"
- data-points="600"
- data-title="1/second (netdata default)"
- data-units="important metric"
- data-colors="#5A5"
- ></div>
- </div>
- <div style="display: inline-block; width: 40%">
- <div style="font-size: 1.2vw; color: #666;"><i class="fa fa-comment"></i> Can you trace an issue like these?<br/> <br/></div>
- <div data-netdata="example.random2"
- data-dimensions="random"
- data-chart-library="easypiechart"
- data-easypiechart-max-value="32767"
- data-width="45%"
- data-after="-600"
- data-points="40"
- data-title="Updates Every 15 Sec (<a href='https://github.com/OpenTSDB/opentsdb.net/blob/gh-pages/docs/source/user_guide/utilities/tcollector.rst#collecting-lots-of-metrics-with-tcollector' target='_blank'>OpenTSDB</a>)"
- data-units="important metric"
- data-colors="#C55"
- ></div>
- <div data-netdata="example.random2"
- data-dimensions="random"
- data-chart-library="easypiechart"
- data-easypiechart-max-value="32767"
- data-width="45%"
- data-after="-600"
- data-points="2"
- data-title="Updates Every 5 Mins (your NMS)"
- data-units="important metric"
- data-colors="#C55"
- ></div>
- </div>
- </div>
- </div>
- <div style="font-size: 1.5vw;">Hover on the chart below, to see the selected value on the charts above!</div>
- <div data-netdata="example.random2"
- data-dimensions="random"
- data-dygraph-theme="sparkline"
- data-width="100%"
- data-height="20vh"
- data-after="-600"
- data-points="600"
- data-title="1/second (netdata default)"
- data-units="something"
- data-colors="#888"
- ></div>
-</div>
-</body>
-</html>
+<!DOCTYPE html> +<html lang="en"> +<head> + <title>NetData Dashboard</title> + + <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> + <meta charset="utf-8"> + <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"> + <meta name="viewport" content="width=device-width, initial-scale=1"> + <meta name="apple-mobile-web-app-capable" content="yes"> + <meta name="apple-mobile-web-app-status-bar-style" content="black-translucent"> + <meta name="author" content="costa@tsaousis.gr"> + + <script>var netdataTheme = 'slate';</script> + <script type="text/javascript" src="dashboard.js?v37"></script> +</head> +<body> + +<div class="container" style="width: 90%; padding-top: 10px; text-align: center; color: #AAA"> + <div style="font-size: 7vw;">why netdata?</div> + <br/> + <div style="font-size: 2vw; color: white;">These charts visualize the same data...</div> + + + <!-- Nav tabs --> + <ul class="nav nav-tabs" role="tablist"> + <li role="presentation" class="active"><a href="#gauge" aria-controls="gauge" role="tab" data-toggle="tab">Gauge.js</a></li> + <li role="presentation"><a href="#easypiechart" aria-controls="easypiechart" role="tab" data-toggle="tab">Easy Pie Chart</a></li> + </ul> + + <!-- Tab panes --> + <div class="tab-content"> + <div role="tabpanel" class="tab-pane active" id="gauge"> + + <div style="display: inline-block; width: 35.8%"> + <div style="font-size: 1.2vw; color: #666; padding-top: 10px;"><i class="fa fa-comment"></i> I can trace an issue like this</div> + <br/> + <div data-netdata="example.random2" + data-dimensions="random" + data-chart-library="gauge" + data-gauge-max-value="32767" + data-width="100%" + data-after="-600" + data-points="600" + data-title="1/second (netdata default)" + data-units="important metric" + data-colors="#5A5" + ></div> + </div> + <div style="display: inline-block; width: 50%"> + <div style="font-size: 1.2vw; color: #666;"><i class="fa fa-comment"></i> Can you trace an issue like these?<br/> <br/></div> + <div data-netdata="example.random2" + data-dimensions="random" + data-chart-library="gauge" + data-gauge-max-value="32767" + data-width="45%" + data-after="-600" + data-points="40" + data-title="Updates Every 15 Sec" + data-units="important metric" + data-colors="#C55" + ></div> + <div data-netdata="example.random2" + data-dimensions="random" + data-chart-library="gauge" + data-gauge-max-value="32767" + data-width="45%" + data-after="-600" + data-points="2" + data-title="Updates Every 5 Mins" + data-units="important metric" + data-colors="#C55" + ></div> + </div> + </div> + <div role="tabpanel" class="tab-pane" id="easypiechart"> + + <div style="display: inline-block; width: 25%"> + <div style="font-size: 1.2vw; color: #666; padding-top: 10px;"><i class="fa fa-comment"></i> I can trace an issue like this</div> + <br/> + <div data-netdata="example.random2" + data-dimensions="random" + data-chart-library="easypiechart" + data-easypiechart-max-value="32767" + data-width="100%" + data-after="-600" + data-points="600" + data-title="1/second (netdata default)" + data-units="important metric" + data-colors="#5A5" + ></div> + </div> + <div style="display: inline-block; width: 40%"> + <div style="font-size: 1.2vw; color: #666;"><i class="fa fa-comment"></i> Can you trace an issue like these?<br/> <br/></div> + <div data-netdata="example.random2" + data-dimensions="random" + data-chart-library="easypiechart" + data-easypiechart-max-value="32767" + data-width="45%" + data-after="-600" + data-points="40" + data-title="Updates Every 15 Sec 
(<a href='https://github.com/OpenTSDB/opentsdb.net/blob/gh-pages/docs/source/user_guide/utilities/tcollector.rst#collecting-lots-of-metrics-with-tcollector' target='_blank'>OpenTSDB</a>)" + data-units="important metric" + data-colors="#C55" + ></div> + <div data-netdata="example.random2" + data-dimensions="random" + data-chart-library="easypiechart" + data-easypiechart-max-value="32767" + data-width="45%" + data-after="-600" + data-points="2" + data-title="Updates Every 5 Mins (your NMS)" + data-units="important metric" + data-colors="#C55" + ></div> + </div> + </div> + </div> + <div style="font-size: 1.5vw;">Hover on the chart below, to see the selected value on the charts above!</div> + <div data-netdata="example.random2" + data-dimensions="random" + data-dygraph-theme="sparkline" + data-width="100%" + data-height="20vh" + data-after="-600" + data-points="600" + data-title="1/second (netdata default)" + data-units="something" + data-colors="#888" + ></div> +</div> +</body> +</html> diff --git a/web/demosites.html b/web/demosites.html new file mode 100644 index 000000000..f5047f4b2 --- /dev/null +++ b/web/demosites.html @@ -0,0 +1,721 @@ +<!DOCTYPE html> +<html lang="en"> +<head> + <title>NetData - Real-time performance monitoring, done right!</title> + + <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> + <meta charset="utf-8"> + <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"> + <meta name="viewport" content="width=device-width, initial-scale=1"> + <meta name="apple-mobile-web-app-capable" content="yes"> + <meta name="apple-mobile-web-app-status-bar-style" content="black-translucent"> + + <script> + // --- OPTIONS FOR THE DASHBOARD -- + + // this section has to appear before loading dashboard.js + + // Select a theme. + // uncomment on of the two themes: + + // var netdataTheme = 'default'; // this is white + var netdataTheme = 'slate'; // this is dark + + + // Set the default netdata server. + // on charts without a 'data-host', this one will be used. + // the default is the server that dashboard.js is downloaded from. + + // var netdataServer = 'http://my.server:19999/'; + </script> + + <!-- + --- LOAD dashboard.js --- + + to host this HTML file on your web server, + you have to load dashboard.js from the netdata server. + + So, pick one the two below + If you pick the first, set the server name/IP. 
+ + The second assumes you host this file on /usr/share/netdata/web + and that you have chown it to be owned by netdata:netdata + --> + <!-- <script type="text/javascript" src="http://my.server:19999/dashboard.js"></script> --> + <script type="text/javascript" src="dashboard.js?v37"></script> + + <script> + // --- OPTIONS FOR THE CHARTS -- + + // destroy charts not shown (lowers memory on the browsers) + // set this to 'yes' to destroy, 'false' to hide the charts + NETDATA.options.current.destroy_on_hide = false; + + // set this to false, to always show all dimensions + NETDATA.options.current.eliminate_zero_dimensions = true; + + // set this to false, to lower the pressure on the browser + NETDATA.options.current.concurrent_refreshes = true; + + // if you need to support slow mobile phones, set this to false + NETDATA.options.current.parallel_refresher = true; + + // set this to false, to always update the charts, even if focus is lost + NETDATA.options.current.stop_updates_when_focus_is_lost = true; + </script> + + <style> + +body { + font-size: 1vw; +} + +.mysparkline { + position: relative; + display: inline-block; + min-height: 50px; + width: 100%; + height: 8vmax; + text-align: left; +} + +.mysparkline-overchart-label { + position: absolute; + display: block; + top: 0; + left: 10px; + bottom: 0; + right: 0; + font-size: 1vmax; + z-index: 1; +} + +.mysparkline-overchart-value { + position: absolute; + display: block; + top: 1.1vmax; + left: 10px; + bottom: 0; + right: 0; + font-size: 5vmax; + z-index: 2; + text-shadow: #333 0px 0px 2px; +} + +.myfullchart { + position: relative; + display: inline-block; + width: 100%; + height: 14vmax; + min-height: 150px; + text-align: left; +} + +.mygauge-combo { + display: inline-block; +} + +.mygauge { + position: relative; + display: block; + width: 20vw; + height: 12vw; +} + +.mygauge-button { + display: block; +} + +.mytitle { + padding-top: 6vw; + padding-bottom: 1vw; + text-align: center; + font-size: 2.4vw; +} + +.mysubtitle { + padding-top: 2vw; + padding-bottom: 1vw; + text-align: center; + font-size: 1.8vw; +} + +.mycontent { + text-align: center; + font-size: 1.5vw; +} + +@media only screen and (min-width : 992px) { + .container { + width: 90%; + } +} +@media only screen and (max-width : 992px) { + .container { + width: 100%; + } +} + </style> + +</head> +<body style="text-align: center;"> + +<div class="container"> + + <div style="text-align: center; font-size: 13vw; height: 14vw;"> + <b>netdata</b> + </div> + <div style="text-align: center; font-size: 2vw; height: 2.5vw;"> + real-time performance monitoring + </div> + <div style="width:80%; text-align: right; font-size: 2.7vw;"> + <strong>scaled out</strong>! 
+ </div> + <div class="mytitle"> + pick a <b>netdata</b> demo server + </div> + <div class="mycontent"> + these demo servers show what you will get by installing <b>netdata</b> + </div> + + <div style="width: 100%; text-align: center; padding-top: 2vw;"> + <div style="width: 100%; text-align: center;"> + + <div class="mygauge-combo"> + <div class="mygauge"> + <div data-netdata="netdata.requests" + data-host="//london.my-netdata.io" + data-title="EU - London" + data-chart-library="gauge" + data-width="100%" + data-after="-300" + data-points="300" + data-colors="#558855" + ></div> + </div> + <div class="mygauge-button"> + <br/> <br/> + <button type="button" class="btn btn-default" data-toggle="button" aria-pressed="false" autocomplete="off" onclick="window.location='//london.my-netdata.io/default.html'" style="font-size: 1.5vw;">Enter London!</button> + <div style="font-size: 1vw;"> + Donated by DigitalOcean.com + </div> + </div> + </div> + <div class="mygauge-combo"> + <div class="mygauge"> + <div data-netdata="netdata.requests" + data-host="//atlanta.my-netdata.io" + data-title="US - Atlanta" + data-chart-library="gauge" + data-width="100%" + data-after="-300" + data-points="300" + data-colors="#AA5555" + ></div> + </div> + <div class="mygauge-button"> + <br/> <br/> + <button type="button" class="btn btn-default" data-toggle="button" aria-pressed="false" autocomplete="off" onclick="window.location='//atlanta.my-netdata.io/default.html'" style="font-size: 1.5vw;">Enter Atlanta!</button> + <div style="font-size: 1vw;"> + Donated by CDN77.com + </div> + </div> + </div> + <div class="mygauge-combo"> + <div class="mygauge"> + <div data-netdata="netdata.requests" + data-host="//athens.my-netdata.io" + data-title="EU - Greece" + data-chart-library="gauge" + data-width="100%" + data-after="-300" + data-points="300" + data-colors="#5555AA" + ></div> + </div> + <div class="mygauge-button"> + <br/> <br/> + <button type="button" class="btn btn-default" data-toggle="button" aria-pressed="false" autocomplete="off" onclick="window.location='//athens.my-netdata.io/default.html'" style="font-size: 1.5vw;">Come to Greece!</button> + <div style="font-size: 0.8vw;"> + + </div> + </div> + </div> + </div> + </div> + + <div class="mytitle"> + this page is a custom <b>netdata</b> dashboard + </div> + <div class="mycontent"> + charts are coming from 3 servers, in parallel + <br/> + the servers are not aware of this multi-server dashboard, + <br/> + each server is not aware of the other 2 servers, + <br/> + but on this dashboard <b>they are one</b>! + </div> + <div style="padding-top: 1vw; width: 100%; text-align: center; font-size: 1.5vw;"> + <i class="fa fa-comment" aria-hidden="true"></i> + hover on a chart below, or drag it to show the past - <b>the others will follow</b>! 
+ <br/> + double click on a chart to reset them all + </div> + + <div class="mytitle"> + our <code>ngnix</code> performance + </div> + <div class="mycontent"> + (we proxy netdata through nginx, on the demo sites) + </div> + + <!-- Nav tabs --> + <ul class="nav nav-tabs" role="tablist" style="padding-top: 1vw;"> + <li role="presentation" class="active"><a href="#nginx_requests" aria-controls="nginx_requests" role="tab" data-toggle="tab">Requests</a></li> + <li role="presentation"><a href="#nginx_connections" aria-controls="nginx_connections" role="tab" data-toggle="tab">Connections</a></li> + </ul> + + <!-- Tab panes --> + <div class="tab-content"> + <div role="tabpanel" class="tab-pane active" id="nginx_requests"> + <div class="mysparkline"> + <div class="mysparkline-overchart-label"> + <b>EU - London</b> web requests/s + </div> + <div class="mysparkline-overchart-value" id="nginx.requests.netdata" > + </div> + <div data-netdata="nginx.requests" + data-dimensions="requests" + data-host="//london.my-netdata.io" + data-chart-library="dygraph" + data-dygraph-theme="sparkline" + data-dygraph-type="area" + data-width="100%" + data-height="100%" + data-after="-300" + data-colors="#558855" + data-show-value-of-requests-at="nginx.requests.netdata" + ></div> + </div> + + <div class="mysparkline"> + <div class="mysparkline-overchart-label"> + <b>US - Atlanta</b> web requests/s + </div> + <div class="mysparkline-overchart-value" id="nginx.requests.netdata2" > + </div> + <div data-netdata="nginx.requests" + data-dimensions="requests" + data-host="//atlanta.my-netdata.io" + data-chart-library="dygraph" + data-dygraph-theme="sparkline" + data-dygraph-type="area" + data-width="100%" + data-height="100%" + data-after="-300" + data-colors="#AA5555" + data-show-value-of-requests-at="nginx.requests.netdata2" + ></div> + </div> + + <div class="mysparkline"> + <div class="mysparkline-overchart-label"> + <b>EU - Greece</b> web requests/s + </div> + <div class="mysparkline-overchart-value" id="nginx.requests.netdata3" > + </div> + <div data-netdata="nginx.requests" + data-dimensions="requests" + data-host="//athens.my-netdata.io" + data-chart-library="dygraph" + data-dygraph-theme="sparkline" + data-dygraph-type="area" + data-width="100%" + data-height="100%" + data-after="-300" + data-colors="#5555AA" + data-show-value-of-requests-at="nginx.requests.netdata3" + ></div> + </div> + </div> + + <div role="tabpanel" class="tab-pane" id="nginx_connections"> + <div class="mysparkline"> + <div class="mysparkline-overchart-label"> + <b>EU - London</b> active connections + </div> + <div class="mysparkline-overchart-value" id="nginx.connections.netdata1" > + </div> + <div data-netdata="nginx.connections" + data-dimensions="active" + data-host="//london.my-netdata.io" + data-chart-library="dygraph" + data-dygraph-theme="sparkline" + data-dygraph-type="area" + data-width="100%" + data-height="100%" + data-after="-300" + data-colors="#558855" + data-show-value-of-active-at="nginx.connections.netdata1" + ></div> + </div> + + <div class="mysparkline"> + <div class="mysparkline-overchart-label"> + <b>US - Atlanta</b> active connections + </div> + <div class="mysparkline-overchart-value" id="nginx.connections.netdata2" > + </div> + <div data-netdata="nginx.connections" + data-dimensions="active" + data-host="//atlanta.my-netdata.io" + data-chart-library="dygraph" + data-dygraph-theme="sparkline" + data-dygraph-type="area" + data-width="100%" + data-height="100%" + data-after="-300" + data-colors="#AA5555" + 
data-show-value-of-active-at="nginx.connections.netdata2" + ></div> + </div> + + <div class="mysparkline"> + <div class="mysparkline-overchart-label"> + <b>EU - Greece</b> active connections + </div> + <div class="mysparkline-overchart-value" id="nginx.connections.netdata3" > + </div> + <div data-netdata="nginx.connections" + data-dimensions="active" + data-host="//athens.my-netdata.io" + data-chart-library="dygraph" + data-dygraph-theme="sparkline" + data-dygraph-type="area" + data-width="100%" + data-height="100%" + data-after="-300" + data-colors="#5555AA" + data-show-value-of-active-at="nginx.connections.netdata3" + ></div> + </div> + </div> + </div> + + <div style="width: 100%; text-align: right; font-size: 1vw;"> + <i class="fa fa-comment" aria-hidden="true"></i> these charts are draggable and touchable, double click them to reset them + </div> + + + <div class="mytitle"> + bandwidth consumption on the demo sites + </div> + <div class="mycontent"> + Linux QoS is configured by <a href="https://github.com/firehol/netdata/wiki/You-should-install-QoS-on-all-your-servers">FireQOS</a> + </div> + + <!-- Nav tabs --> + <ul class="nav nav-tabs" role="tablist" style="padding-top: 1vw;"> + <li role="presentation" class="active"><a href="#outbout" aria-controls="outbout" role="tab" data-toggle="tab">Outbound</a></li> + <li role="presentation"><a href="#inbound" aria-controls="inbound" role="tab" data-toggle="tab">Inbound</a></li> + </ul> + + <!-- Tab panes --> + <div class="tab-content"> + <div role="tabpanel" class="tab-pane active" id="outbout"> + <div class="myfullchart"> + <div data-netdata="tc.world_out" + data-host="//london.my-netdata.io" + data-chart-library="dygraph" + data-title="EU - London, traffic we send per service" + data-width="100%" + data-height="100%" + data-after="-300" + ></div> + </div> + + <div class="myfullchart"> + <div data-netdata="tc.world_out" + data-host="//atlanta.my-netdata.io" + data-chart-library="dygraph" + data-title="US - Atlanta, traffic we send per service" + data-width="100%" + data-height="100%" + data-after="-300" + ></div> + + </div> + + <div class="myfullchart"> + <div data-netdata="tc.world_out" + data-host="//athens.my-netdata.io" + data-chart-library="dygraph" + data-title="EU - Greece, traffic we send per service" + data-width="100%" + data-height="100%" + data-after="-300" + ></div> + </div> + </div> + <div role="tabpanel" class="tab-pane" id="inbound"> + <div class="myfullchart"> + <div data-netdata="tc.world_in" + data-host="//london.my-netdata.io" + data-chart-library="dygraph" + data-title="EU - London, traffic we receive per service" + data-width="100%" + data-height="100%" + data-after="-300" + ></div> + + </div> + + <div class="myfullchart"> + <div data-netdata="tc.world_in" + data-host="//atlanta.my-netdata.io" + data-chart-library="dygraph" + data-title="US - Atlanta, traffic we receive per service" + data-width="100%" + data-height="100%" + data-after="-300" + ></div> + + </div> + + <div class="myfullchart"> + <div data-netdata="tc.world_in" + data-host="//athens.my-netdata.io" + data-chart-library="dygraph" + data-title="EU - Greece, traffic we receive per service" + data-width="100%" + data-height="100%" + data-after="-300" + ></div> + </div> + </div> + </div> + <div style="width: 100%; text-align: right; font-size: 1vw;"> + <i class="fa fa-comment" aria-hidden="true"></i> <i>these legends are interactive and the charts are resizable here ^^^</i> + </div> + + <div class="mytitle"> + DDoS protection performance on the demo sites + 
</div> + <div class="mycontent"> + iptables SYNPROXY configured by <a href="https://github.com/firehol/netdata/wiki/Monitoring-SYNPROXY">FireHOL</a> + </div> + + <div style="padding-top: 4vw; width: 100%; text-align: center; font-size: 1.5vw;"> + + <div class="mysparkline"> + <div class="mysparkline-overchart-label"> + <b>EU - London</b>, TCP SYN packets/s received + </div> + <div class="mysparkline-overchart-value" id="netfilter.synproxy_syn_received.netdata1" > + </div> + <div data-netdata="netfilter.synproxy_syn_received" + data-dimensions="received" + data-host="//london.my-netdata.io" + data-chart-library="dygraph" + data-dygraph-theme="sparkline" + data-dygraph-type="area" + data-width="100%" + data-height="100%" + data-after="-300" + data-colors="#558855" + data-show-value-of-received-at="netfilter.synproxy_syn_received.netdata1" + ></div> + </div> + + <div class="mysparkline"> + <div class="mysparkline-overchart-label"> + <b>US - Atlanta</b>, TCP SYN packets/s received + </div> + <div class="mysparkline-overchart-value" id="netfilter.synproxy_syn_received.netdata2" > + </div> + <div data-netdata="netfilter.synproxy_syn_received" + data-dimensions="received" + data-host="//atlanta.my-netdata.io" + data-chart-library="dygraph" + data-dygraph-theme="sparkline" + data-dygraph-type="area" + data-width="100%" + data-height="100%" + data-after="-300" + data-colors="#885555" + data-show-value-of-received-at="netfilter.synproxy_syn_received.netdata2" + ></div> + </div> + + <div class="mysparkline"> + <div class="mysparkline-overchart-label"> + <b>EU - Greece</b>, TCP SYN packets/s received + </div> + <div class="mysparkline-overchart-value" id="netfilter.synproxy_syn_received.netdata3" > + </div> + <div data-netdata="netfilter.synproxy_syn_received" + data-dimensions="received" + data-host="//athens.my-netdata.io" + data-chart-library="dygraph" + data-dygraph-theme="sparkline" + data-dygraph-type="area" + data-width="100%" + data-height="100%" + data-after="-300" + data-colors="#555588" + data-show-value-of-received-at="netfilter.synproxy_syn_received.netdata3" + ></div> + </div> + </div> + <div style="width: 100%; text-align: right; font-size: 1vw;"> + <i class="fa fa-comment" aria-hidden="true"></i> <i>did you notice the decimal numbers? + <br/>netdata interpolates collected values at second boundaries, with nanosecond detail!</i> + </div> + + + <div class="mytitle"> + CPU Utilization of the demo sites + </div> + + <div style="padding-top: 1vw;"> + <div class="myfullchart"> + <div data-netdata="system.cpu" + data-host="//london.my-netdata.io" + data-chart-library="dygraph" + data-title="EU - London, CPU Usage" + data-width="100%" + data-height="100%" + data-after="-300" + ></div> + </div> + + <div class="myfullchart"> + <div data-netdata="system.cpu" + data-host="//atlanta.my-netdata.io" + data-chart-library="dygraph" + data-title="US - Atlanta, CPU Usage" + data-width="100%" + data-height="100%" + data-after="-300" + ></div> + </div> + + <div class="myfullchart"> + <div data-netdata="system.cpu" + data-host="//athens.my-netdata.io" + data-chart-library="dygraph" + data-title="EU - Greece, CPU Usage" + data-width="100%" + data-height="100%" + data-after="-300" + ></div> + </div> + </div> + <div style="width: 100%; text-align: right; font-size: 1vw;"> + <i class="fa fa-comment" aria-hidden="true"></i> <i>what is using so much CPU? 
+ <br/>The site <a href="//iplists.firehol.org/">iplists.firehol.org</a> is maintained by FireHOL - the CPU is used for comparing security IP Lists.</i> + </div> + + <div class="mytitle"> + CPU Usage of the netdata user + </div> + <div class="mycontent"> + netdata monitors <b>users</b>, <b>user groups</b>, <b>applications (process trees)</b> + <br/> + and <b>containers</b> (<code>lxc</code>, <code>docker</code>, etc.) + </div> + + <div style="padding-top: 4vw; width: 100%; text-align: center; font-size: 1.5vw;"> + + <div class="mysparkline"> + <div class="mysparkline-overchart-label"> + <b>EU - London</b>, CPU % of a single core + </div> + <div class="mysparkline-overchart-value" id="users.cpu.netdata1" > + </div> + <div data-netdata="users.cpu" + data-dimensions="netdata" + data-host="//london.my-netdata.io" + data-chart-library="dygraph" + data-dygraph-theme="sparkline" + data-dygraph-type="area" + data-width="100%" + data-height="100%" + data-after="-300" + data-colors="#558855" + data-show-value-of-netdata-at="users.cpu.netdata1" + ></div> + </div> + + <div class="mysparkline"> + <div class="mysparkline-overchart-label"> + <b>US - Atlanta</b>, CPU % of a single core + </div> + <div class="mysparkline-overchart-value" id="users.cpu.netdata2" > + </div> + <div data-netdata="users.cpu" + data-dimensions="netdata" + data-host="//atlanta.my-netdata.io" + data-chart-library="dygraph" + data-dygraph-theme="sparkline" + data-dygraph-type="area" + data-width="100%" + data-height="100%" + data-after="-300" + data-colors="#885555" + data-show-value-of-netdata-at="users.cpu.netdata2" + ></div> + </div> + + <div class="mysparkline"> + <div class="mysparkline-overchart-label"> + <b>EU - Greece</b>, CPU % of a single core + </div> + <div class="mysparkline-overchart-value" id="users.cpu.netdata3" > + </div> + <div data-netdata="users.cpu" + data-dimensions="netdata" + data-host="//athens.my-netdata.io" + data-chart-library="dygraph" + data-dygraph-theme="sparkline" + data-dygraph-type="area" + data-width="100%" + data-height="100%" + data-after="-300" + data-colors="#555588" + data-show-value-of-netdata-at="users.cpu.netdata3" + ></div> + </div> + </div> + <div style="width: 100%; text-align: right; font-size: 1vw;"> + <i class="fa fa-comment" aria-hidden="true"></i> <i>this utilization is about the whole netdata process tree and the percentage is of <b>a single core</b>! + <br/>including <b>BASH</b> plugins (it monitors <code>mysql</code> on the demo sites), <b>node.js</b> plugins (it monitors <code>bind9</code> on the demo sites), etc. + <br/>and including the chart refreshes for the dashboards of all viewers.</i> + </div> + + <div style="padding-top: 6vw; width: 100%; text-align: center; font-size: 2vw;"> + want to know more? + <br/> + jump to <a href="https://github.com/firehol/netdata/">the netdata page at github</a> + <br/> + it needs just 3 mins to be installed on your servers! 
+ <br/> + + </div> +</div> +</body> +<script> + // google analytics when this is used for the home page of the demo sites + // you don't need this if you customize this dashboard for your needs + setTimeout(function() { + (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ + (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), + m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) + })(window,document,'script','https://www.google-analytics.com/analytics.js','ga'); + + ga('create', 'UA-64295674-3', 'auto'); + ga('send', 'pageview'); + }, 2000); +</script> +</html> diff --git a/web/index.html b/web/index.html index 6f6013da1..9cc2b4bbe 100644 --- a/web/index.html +++ b/web/index.html @@ -191,6 +191,42 @@ font-weight: 500; } + .dropdown-menu { + min-width: 200px; + } + .dropdown-menu.columns-2 { + margin: 0; + padding: 0; + width: 400px; + } + .dropdown-menu li a { + padding: 5px 15px; + font-weight: 300; + } + .dropdown-menu.multi-column { + overflow-x: hidden; + } + .multi-column-dropdown { + list-style: none; + padding: 0; + } + .multi-column-dropdown li a { + display: block; + clear: both; + line-height: 1.428571429; + white-space: normal; + } + .multi-column-dropdown li a:hover { + text-decoration: none; + color: #f5f5f5; + background-color: #262626; + } + .scrollable-menu { + height: auto; + max-height: 80vh; + overflow-x: hidden; + } + /* Back to top (hidden on mobile) */ .back-to-top, .dashboard-theme-toggle { @@ -320,6 +356,7 @@ else return ret; } + var netdataTheme = getTheme('slate'); function setTheme(theme) { @@ -327,15 +364,94 @@ return saveLocalStorage('netdataTheme', theme); } + + var netdataRegistryCallback = function(urls_array) { + var el = ''; + var a1 = ''; + var found = 0; + + if(urls_array) { + function name_comparator_desc(a, b) { + if (a.name > b.name) return -1; + if (a.name < b.name) return 1; + return 0; + } + + var urls = urls_array.sort(name_comparator_desc); + var len = urls.length; + while(len--) { + var u = urls[len]; + + var status = "enabled"; + found++; + + if(u.guid === NETDATA.registry.machine_guid) + status = "disabled" + + el += '<li id="registry_server_' + u.guid + '" class="' + status + '"><a href="' + u.url + '">' + u.name + '</a></li>'; + a1 += '<li id="registry_action_' + u.guid + '"><a href="#" onclick="deleteRegistryModalHandler(\'' + u.guid + '\',\'' + u.name + '\',\'' + u.url + '\'); return false;"><i class="fa fa-trash-o" aria-hidden="true" style="color: #999;"></i></a></li>'; + } + } + + if(!found) { + if(urls) + el += '<li><a href="https://github.com/firehol/netdata/wiki/mynetdata-menu-item" style="color: #666;" target="_blank">your netdata server list is empty...</a></li>'; + else + el += '<li><a href="https://github.com/firehol/netdata/wiki/mynetdata-menu-item" style="color: #666;" target="_blank">failed to contact the registry...</a></li>'; + + a1 += '<li><a href="#"> </a></li>'; + + el += '<li role="separator" class="divider"></li>' + + '<li><a href="//london.netdata.rocks/default.html">EU - London (DigitalOcean.com)</a></li>' + + '<li><a href="//atlanta.netdata.rocks/default.html">US - Atlanta (CDN77.com)</a></li>' + + '<li><a href="//athens.netdata.rocks/default.html">EU - Athens</a></li>'; + a1 += '<li role="separator" class="divider"></li>' + + '<li><a href="#"> </a></li>' + + '<li><a href="#"> </a></li>'+ + '<li><a href="#"> </a></li>'; + } + + el += '<li role="separator" class="divider"></li>'; + a1 += '<li role="separator" class="divider"></li>'; + + el 
+= '<li><a href="https://github.com/firehol/netdata/wiki/mynetdata-menu-item" style="color: #999;" target="_blank">What is this?</a></li>'; + a1 += '<li><a href="#" style="color: #999;" onclick="switchRegistryModalHandler(); return false;"><i class="fa fa-cog" aria-hidden="true" style="color: #999;"></i></a></li>' + + document.getElementById('mynetdata_servers').innerHTML = el; + document.getElementById('mynetdata_servers2').innerHTML = el; + document.getElementById('mynetdata_actions1').innerHTML = a1; + }; + </script> <!-- load the dashboard manager - it will do the rest --> - <script type="text/javascript" src="dashboard.js?v32"></script> + <script type="text/javascript" src="dashboard.js?v37"></script> </head> <body data-spy="scroll" data-target="#sidebar"> <nav class="navbar navbar-default navbar-fixed-top" role="banner"> <div class="container"> + <nav id="mynetdata_nav" class="collapse navbar-collapse navbar-left hidden-sm hidden-xs" role="navigation" style="padding-right: 20px;"> + <ul class="nav navbar-nav"> + <li class="dropdown"> + <a href="#" class="dropdown-toggle" data-toggle="dropdown" id="current_view">my-netdata <strong class="caret"></strong></a> + <ul class="dropdown-menu scrollable-menu inpagemenu multi-column columns-2" role="menu"> + <div class="row"> + <div class="col-sm-6" style="width: 85%; padding-right: 0;"> + <ul id="mynetdata_servers" class="multi-column-dropdown"> + <li><a href="#" onclck="return false;" style="color: #999;">loading...</a></li> + </ul> + </div> + <div class="col-sm-3 hidden-xs" style="width: 15%; padding-left: 0;"> + <ul id="mynetdata_actions1" class="multi-column-dropdown"> + <li style="color: #999;"> </li> + </ul> + </div> + </div> + </ul> + </li> + </ul> + </nav> <div class="navbar-header"> <button class="navbar-toggle" type="button" data-toggle="collapse" data-target=".navbar-collapse"> <span class="sr-only">Toggle navigation</span> @@ -347,15 +463,27 @@ </div> <nav class="collapse navbar-collapse navbar-right" role="navigation"> <ul class="nav navbar-nav"> - <li><a href="#" class="btn" data-toggle="modal" data-target="#optionsModal"><i class="fa fa-cog"></i> settings</a></li> - <li><a href="https://github.com/firehol/netdata/wiki" class="btn" target="_blank"><i class="fa fa-github"></i> community</a></li> - <li id="updateButton"><a href="#" class="btn" data-toggle="modal" data-target="#updateModal"><i class="fa fa-cloud-download"></i> update</a></li> -<!-- <li><a href="old/" class="btn" target="_blank"><i class="fa fa-step-backward"></i> old dashboard</a></li> --> - <li><a href="#" class="btn" data-toggle="modal" data-target="#helpModal"><i class="fa fa-question-circle"></i> help</a></li> -<!-- <li><a href="#sec">Visualize</a></li> - <li><a href="#sec">Prototype</a></li> ---> </ul> + <li class="hidden-sm"><a href="#" class="btn" data-toggle="modal" data-target="#optionsModal"><i class="fa fa-cog"></i> settings</a></li> + <li class="hidden-sm"><a href="https://github.com/firehol/netdata/wiki" class="btn" target="_blank"><i class="fa fa-github"></i> community</a></li> + <li class="hidden-sm" id="updateButton"><a href="#" class="btn" data-toggle="modal" data-target="#updateModal"><i class="fa fa-cloud-download"></i> update</a></li> + <li class="hidden-sm"><a href="#" class="btn" data-toggle="modal" data-target="#helpModal"><i class="fa fa-question-circle"></i> help</a></li> + <li class="dropdown hidden-md hidden-lg hidden-xs"> + <a href="#" class="dropdown-toggle" data-toggle="dropdown" id="current_view">Menu <strong 
class="caret"></strong></a> + <ul class="dropdown-menu scrollable-menu inpagemenu" role="menu"> + <li><a href="#" class="btn" data-toggle="modal" data-target="#optionsModal"><i class="fa fa-cog"></i> settings</a></li> + <li><a href="https://github.com/firehol/netdata/wiki" class="btn" target="_blank"><i class="fa fa-github"></i> community</a></li> + <li><a href="#" class="btn" data-toggle="modal" data-target="#helpModal"><i class="fa fa-question-circle"></i> help</a></li> + </ul> + </li> + <li class="dropdown hidden-sm hidden-md hidden-lg"> + <a href="#" class="dropdown-toggle" data-toggle="dropdown" id="current_view">my-netdata <strong class="caret"></strong></a> + <ul id="mynetdata_servers2" class="dropdown-menu scrollable-menu inpagemenu" role="menu"> + <li><a href="#" onclck="return false;" style="color: #999;">loading...</a></li> + </ul> + </li> + </ul> </nav> + </nav> </div> </nav> @@ -456,9 +584,6 @@ <i class="fa fa-circle"></i> <a href="http://D3js.org/" target="_blank">D3</a>, <i class="fa fa-copyright"></i> Copyright 2015, Mike Bostock, <a href="http://opensource.org/licenses/BSD-3-Clause" target="_blank">BSD License</a> - <i class="fa fa-circle"></i> <a href="https://github.com/broofa/node-int64" target="_blank">node-int64</a>, - <i class="fa fa-copyright"></i> Copyright 2014, Robert Kieffer, <a href="https://github.com/broofa/node-int64/blob/master/LICENSE" target="_blank">MIT License</a> - </small> </div> </div> @@ -775,16 +900,86 @@ </div> </div> -<script> + <div class="modal fade" id="deleteRegistryModal" tabindex="-1" role="dialog" aria-labelledby="deleteRegistryModalLabel"> + <div class="modal-dialog" role="document"> + <div class="modal-content"> + <div class="modal-header"> + <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> + <h4 class="modal-title" id="deleteRegistryModalLabel">Delete <span id="deleteRegistryServerName"></span>?</h4> + </div> + <div class="modal-body"> + You are about to delete, from your personal list of netdata servers, the following server: + <p style="text-align: center; padding-top: 10px; padding-bottom: 10px; line-height: 2;"> + <b><span id="deleteRegistryServerName2"></span></b> + <br/> + <b><span id="deleteRegistryServerURL"></span></b> + </p> + Are you sure you want to do this? + <br/> + <div style="padding: 10px;"></div> + <small>Keep in mind, this server will be added back if and when you visit it again.</small> + <br/> + <div id="deleteRegistryResponse" style="display: block; width: 100%; text-align: center; padding-top: 20px;"></div> + </div> + <div class="modal-footer"> + <button type="button" class="btn btn-success" data-dismiss="modal">keep it</button> + <a href="#" onclick="notifyForDeleteRegistry(true); return false;" type="button" class="btn btn-danger">delete it</a> + </div> + </div> + </div> + </div> + + <div class="modal fade" id="switchRegistryModal" tabindex="-1" role="dialog" aria-labelledby="switchRegistryModalLabel"> + <div class="modal-dialog" role="document"> + <div class="modal-content"> + <div class="modal-header"> + <button type="button" class="close" data-dismiss="modal" aria-label="Close"><span aria-hidden="true">×</span></button> + <h4 class="modal-title" id="switchRegistryModalLabel">Switch Netdata Registry Identity</h4> + </div> + <div class="modal-body"> + You can copy and paste the following ID to all your browsers (e.g. work and home). + <br/> + All the browsers with the same ID will identify <b>you</b>, so please don't share this with others. 
+ <p style="text-align: center; padding-top: 10px; padding-bottom: 10px; line-height: 2;"> + <form action="#"> + <input type="text" class="form-control" id="switchRegistryPersonGUID" placeholder="your personal ID" maxlength="36" autocomplete="off" style="text-align: center; font-size: 1.4em;"> + </form> + </p> + Either copy this ID and paste it to another browser, or paste here the ID you have taken from another browser. + <p style="padding-top: 10px;"><small> + Keep in mind that: + <ul> + <li>when you switch ID, your previous ID will be lost forever - this is irreversible.</li> + <li>both IDs (your old and the new) must list this netdata at their personal lists.</li> + <li>both IDs have to be known by the registry: <b><span id="switchRegistryURL"></span></b>.</li> + <li>to get a new ID, just clear your browser cookies.</li> + </ul> + </small></p> + <div id="switchRegistryResponse" style="display: block; width: 100%; text-align: center; padding-top: 20px;"></div> + </div> + <div class="modal-footer"> + <button type="button" class="btn btn-success" data-dismiss="modal">cancel</button> + <a href="#" onclick="notifyForSwitchRegistry(true); return false;" type="button" class="btn btn-danger">impersonate</a> + </div> + </div> + </div> + </div> +<script> var this_is_demo = null; function isdemo() { if(this_is_demo !== null) return this_is_demo; this_is_demo = false; try { - if(typeof document.location.hostname === 'string') - this_is_demo = document.location.hostname.endsWith('.firehol.org'); + if(typeof document.location.hostname === 'string') { + if(document.location.hostname.endsWith('.my-netdata.io') || + document.location.hostname.endsWith('.mynetdata.io') || + document.location.hostname.endsWith('.netdata.rocks') || + document.location.hostname.endsWith('.firehol.org') || + document.location.hostname.endsWith('.netdata.online')) + this_is_demo = true; + } } catch(error) { ; @@ -797,6 +992,56 @@ if(isdemo()) { document.getElementById('masthead').style.display = 'block'; } +function switchRegistryModalHandler() { + document.getElementById('switchRegistryPersonGUID').value = NETDATA.registry.person_guid; + document.getElementById('switchRegistryURL').innerHTML = NETDATA.registry.server; + document.getElementById('switchRegistryResponse').innerHTML = ''; + $('#switchRegistryModal').modal('show'); +} + +function notifyForSwitchRegistry() { + var n = document.getElementById('switchRegistryPersonGUID').value; + + if(n !== '' && n.length === 36) { + NETDATA.registry.switch(n, function(result) { + if(result !== null) { + $('#switchRegistryModal').modal('hide'); + NETDATA.registry.init(); + } + else { + document.getElementById('switchRegistryResponse').innerHTML = "<b>Sorry! 
The registry rejected your request.</b>"; + } + }); + } + else + document.getElementById('switchRegistryResponse').innerHTML = "<b>The ID you have entered is not a GUID.</b>"; +} + +var deleteRegistryUrl = null; +function deleteRegistryModalHandler(guid, name, url) { + deleteRegistryUrl = url; + document.getElementById('deleteRegistryServerName').innerHTML = name; + document.getElementById('deleteRegistryServerName2').innerHTML = name; + document.getElementById('deleteRegistryServerURL').innerHTML = url; + document.getElementById('deleteRegistryResponse').innerHTML = ''; + $('#deleteRegistryModal').modal('show'); +} + +function notifyForDeleteRegistry() { + if(deleteRegistryUrl) { + NETDATA.registry.delete(deleteRegistryUrl, function(result) { + if(result !== null) { + deleteRegistryUrl = null; + $('#deleteRegistryModal').modal('hide'); + NETDATA.registry.init(); + } + else { + document.getElementById('deleteRegistryResponse').innerHTML = "<b>Sorry! this command was rejected by the registry server.</b>"; + } + }); + } +} + var options = { sparklines_registry: {}, submenu_names: {}, @@ -987,6 +1232,21 @@ var menuData = { title: 'Example Charts', info: undefined }, + + 'cgroup': { + title: 'Container', + info: undefined + }, + + 'mysql': { + title: 'MySQL', + info: undefined + }, + + 'named': { + title: 'named', + info: undefined + }, }; var submenuData = { @@ -1297,8 +1557,13 @@ function anyAttribute(obj, attr, key, def) { return def; } -function menuTitle(menu) { - return anyAttribute(menuData, 'title', menu, menu); +function menuTitle(chart) { + if(typeof chart.menu_pattern !== 'undefined') { + return anyAttribute(menuData, 'title', chart.menu_pattern, chart.menu_pattern).toString() + + ': ' + chart.type.slice(-(chart.type.length - chart.menu_pattern.length - 1)).toString(); + } + + return anyAttribute(menuData, 'title', chart.menu, chart.menu); } function menuInfo(menu) { @@ -1354,6 +1619,13 @@ function enrichChartData(chart) { chart.menu = tmp; break; + case 'mysql': + case 'named': + case 'cgroup': + chart.menu = chart.type; + chart.menu_pattern = tmp; + break; + case 'tc': chart.menu = tmp; @@ -1390,11 +1662,11 @@ function enrichChartData(chart) { function name2id(s) { return s - .replace(' ', '_') - .replace('(', '_') - .replace(')', '_') - .replace('.', '_') - .replace('/', '_'); + .replace(/ /g, '_') + .replace(/\(/g, '_') + .replace(/\)/g, '_') + .replace(/\./g, '_') + .replace(/\//g, '_'); } function headMain(charts, duration) { @@ -1549,8 +1821,9 @@ function renderPage(menus, data) { // generate an entry at the main menu - sidebar += '<li class=""><a href="#' + menu + '">' + menus[menu].title + '</a><ul class="nav">'; - html += '<div role="section"><div role="sectionhead"><h1 id="' + menu + '" role="heading">' + menus[menu].title + '</h1></div><div id="' + menu + '" role="document">'; + var menuid = name2id(menu); + sidebar += '<li class=""><a href="#' + menuid + '">' + menus[menu].title + '</a><ul class="nav">'; + html += '<div role="section"><div role="sectionhead"><h1 id="' + menuid + '" role="heading">' + menus[menu].title + '</h1></div><div id="menu_' + menuid + '" role="document">'; if(menus[menu].info !== null) html += menus[menu].info; @@ -1558,7 +1831,7 @@ function renderPage(menus, data) { // console.log(' >> ' + menu + ' (' + menus[menu].priority + '): ' + menus[menu].title); var shtml = ''; - var mhead = '<div style="width: 100%; text-align: center;">' + mainhead; + var mhead = '<div class="netdata-chart-row">' + mainhead; mainhead = ''; // sort the submenus of this 
menu @@ -1568,13 +1841,14 @@ function renderPage(menus, data) { var submenu = sub[si++]; // generate an entry at the submenu - sidebar += '<li class><a href="#' + name2id(menu + '_' + submenu) + '">' + menus[menu].submenus[submenu].title + '</a></li>'; - shtml += '<div class="netdata-group-container" id="submenu_' + name2id(menu + '_' + submenu) + '" style="display: inline-block; width: ' + pcent_width.toString() + '%"><h2 id="' + name2id(menu + '_' + submenu) + '" class="netdata-chart-alignment" role="heading">' + menus[menu].submenus[submenu].title + '</h2>'; + var submenuid = name2id(menu + '_' + submenu); + sidebar += '<li class><a href="#' + submenuid + '">' + menus[menu].submenus[submenu].title + '</a></li>'; + shtml += '<div class="netdata-group-container" id="submenu_' + submenuid + '" style="display: inline-block; width: ' + pcent_width.toString() + '%"><h2 id="' + submenuid + '" class="netdata-chart-alignment" role="heading">' + menus[menu].submenus[submenu].title + '</h2>'; if(menus[menu].submenus[submenu].info !== null) shtml += '<div class="chart-message netdata-chart-alignment" role="document">' + menus[menu].submenus[submenu].info + '</div>'; - var head = '<div style="width: 100%; text-align: center;">'; + var head = '<div class="netdata-chart-row">'; var chtml = ''; // console.log(' \------- ' + submenu + ' (' + menus[menu].submenus[submenu].priority + '): ' + menus[menu].submenus[submenu].title); @@ -1629,7 +1903,7 @@ function renderChartsAndMenu(data) { menus[charts[c].menu] = { priority: charts[c].priority, submenus: {}, - title: menuTitle(charts[c].menu), + title: menuTitle(charts[c]), info: menuInfo(charts[c].menu), height: menuHeight(charts[c].menu, options.chartsHeight) }; @@ -1656,6 +1930,8 @@ function renderChartsAndMenu(data) { menus[charts[c].menu].submenus[charts[c].submenu].charts.push(charts[c]); } + // propagate the descriptive subname given to QoS + // to all the other submenus with the same name for(var m in menus) { for(var s in menus[m].submenus) { // set the family using a name @@ -1841,12 +2117,15 @@ function finalizePage() { // the Dom elements are initially zero-sized NETDATA.parseDom(); - var before = 0, after = 0; + var before = 0, after = 0, nowelcome = 0; after = getUrlParameter('force_after_ms'); before = getUrlParameter('force_before_ms'); + nowelcome = (getUrlParameter('nowelcome') === true)?true:false; - if(before > 0 && after > 0) + if(before > 0 && after > 0) { + nowelcome = true; NETDATA.globalPanAndZoom.setMaster(NETDATA.options.targets[0], after, before); + } // let it run (update the charts) NETDATA.unpause(); @@ -1939,21 +2218,38 @@ function finalizePage() { // this has to be the last // it reloads the page $('#netdata_theme_control').change(function() { - if(setTheme($(this).prop('checked')?'slate':'default')) + if(setTheme($(this).prop('checked')?'slate':'white')) location.reload(); }); - if(isdemo()) { - setTimeout(function() { - $('#welcomeModal').modal(); - }, 1000); - } - else - notifyForUpdate(); - $('#updateModal').on('shown.bs.modal', function() { notifyForUpdate(true); }); + + $('#deleteRegistryModal').on('hidden.bs.modal', function() { + deleteRegistryGuid = null; + }); + + if(isdemo()) { + if(!nowelcome) { + setTimeout(function() { + $('#welcomeModal').modal(); + }, 1000); + } + + // google analytics when this is used for the home page of the demo sites + // this does not run on user's installations + setTimeout(function() { + (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){ + 
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o), + m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m) + })(window,document,'script','https://www.google-analytics.com/analytics.js','ga'); + + ga('create', 'UA-64295674-3', 'auto'); + ga('send', 'pageview'); + }, 2000); + } + else notifyForUpdate(); } function resetDashboardOptions() { diff --git a/web/tv.html b/web/tv.html index 2003a6060..58f08eb39 100644 --- a/web/tv.html +++ b/web/tv.html @@ -1,239 +1,244 @@ -<!DOCTYPE html>
-<html lang="en">
-<head>
- <title>NetData TV Dashboard</title>
-
- <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
- <meta charset="utf-8">
- <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1">
- <meta name="viewport" content="width=device-width, initial-scale=1">
- <meta name="apple-mobile-web-app-capable" content="yes">
- <meta name="apple-mobile-web-app-status-bar-style" content="black-translucent">
-
- <script>
- // this section has to appear before loading dashboard.js
-
- // Select a theme.
- // uncomment one of the two themes:
-
- // var netdataTheme = 'default'; // this is white
- var netdataTheme = 'slate'; // this is dark
-
-
- // Set the default netdata server.
- // on charts without a 'data-host', this one will be used.
- // the default is the server that dashboard.js is downloaded from.
-
- // var netdataServer = 'http://my.server:19999/';
- </script>
-
- <!--
- Load dashboard.js
-
- to host this HTML file on your web server,
- you have to load dashboard.js from the netdata server.
-
- So, pick one of the two below
- If you pick the first, set the server name/IP.
-
- The second assumes you host this file on /usr/share/netdata/web
- and that you have chown it to be owned by netdata:netdata
- -->
- <!-- <script type="text/javascript" src="http://my.server:19999/dashboard.js"></script> -->
- <script type="text/javascript" src="dashboard.js"></script>
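Putting the two pieces above together, a page hosted outside the netdata web directory only needs the two globals and an absolute dashboard.js URL before any charts are declared. A minimal sketch, reusing the placeholder host my.server:19999 from the comments above:

    <script>
    // must be defined before dashboard.js is loaded
    var netdataTheme  = 'slate';                    // or 'default' for the white theme
    var netdataServer = 'http://my.server:19999/';  // used by charts that have no data-host
    </script>
    <script type="text/javascript" src="http://my.server:19999/dashboard.js"></script>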
-
- <script>
- // Set options for TV operation
- // This has to be done, after dashboard.js is loaded
-
- // destroy charts not shown (lowers memory on the browser)
- NETDATA.options.current.destroy_on_hide = true;
-
- // set this to false, to always show all dimensions
- NETDATA.options.current.eliminate_zero_dimensions = true;
-
- // always update the charts, even if focus is lost
- NETDATA.options.current.stop_updates_when_focus_is_lost = false;
-
- // Since you may render charts from many servers and any of them may
- // become offline for some time, the charts will break.
- // This will reload the page every RELOAD_EVERY minutes
- var RELOAD_EVERY = 5;
- setTimeout(function(){
- location.reload();
- }, RELOAD_EVERY * 60 * 1000);
-
- </script>
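The 1.2.0 copy of this file further down adds two more knobs for slow TV browsers, concurrent_refreshes and parallel_refresher. A sketch of the whole TV tuning in one place, using only option names that appear in this file and in its 1.2.0 revision:

    <script>
    // run after dashboard.js has loaded
    NETDATA.options.current.destroy_on_hide = true;                   // free memory used by charts that are not shown
    NETDATA.options.current.eliminate_zero_dimensions = true;         // hide dimensions that are always zero
    NETDATA.options.current.concurrent_refreshes = false;             // lower the pressure on the browser
    NETDATA.options.current.parallel_refresher = true;                // set to false if the TV browser is too slow (a pi?)
    NETDATA.options.current.stop_updates_when_focus_is_lost = false;  // keep refreshing even when the window is not focused

    // reload periodically, in case a remote server went offline and its charts broke
    var RELOAD_EVERY = 5;  // minutes
    setTimeout(function() { location.reload(); }, RELOAD_EVERY * 60 * 1000);
    </script>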
-
-</head>
-<body>
-
-<div style="width: 100%; text-align: center; display: inline-block;">
-
- <b>PLEASE RESPECT OUR DEMO SITE RESOURCES - DON'T PUT THIS AS-IS ON TV - CLOSE IT WHEN YOU DON'T NEED IT !</b>
-
-
- <div style="width: 100%; height: 24vh; text-align: center; display: inline-block;">
- <div style="width: 100%; height: 15px; text-align: center; display: inline-block;">
- <b>CPU On both servers</b>
- </div>
- <div style="width: 100%; height: calc(100% - 15px); text-align: center; display: inline-block;">
- <br/>
- <div data-netdata="system.cpu"
- data-host="http://netdata.firehol.org"
- data-title="CPU usage of netdata.firehol.org"
- data-chart-library="dygraph"
- data-width="49%"
- data-height="100%"
- data-after="-300"
- ></div>
- <div data-netdata="system.cpu"
- data-title="CPU usage of your netdata server"
- data-chart-library="dygraph"
- data-width="49%"
- data-height="100%"
- data-after="-300"
- ></div>
- </div>
- </div>
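Each chart in the row above is a plain div: the first names a remote server with data-host, the second omits it and falls back to the default server. Adding a third column is one more div with its own data-host and a narrower data-width (the existing two would shrink to about 32% as well). A sketch, with another.server:19999 standing in as a purely hypothetical third host:

    <div data-netdata="system.cpu"
         data-host="http://another.server:19999"
         data-title="CPU usage of another.server"
         data-chart-library="dygraph"
         data-width="32%"
         data-height="100%"
         data-after="-300"
    ></div>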
-
-
- <div style="width: 100%; height: 24vh; text-align: center; display: inline-block;">
- <div style="width: 100%; height: 15px; text-align: center; display: inline-block;">
- <b>Disk I/O on both servers</b>
- </div>
- <div style="width: 100%; height: calc(100% - 15px); text-align: center; display: inline-block;">
- <div data-netdata="system.io"
- data-host="http://netdata.firehol.org"
- data-title="I/O on netdata.firehol.org"
- data-chart-library="dygraph"
- data-width="49%"
- data-height="100%"
- data-after="-300"
- ></div>
- <div data-netdata="system.io"
- data-title="I/O on your netdata server"
- data-chart-library="dygraph"
- data-width="49%"
- data-height="100%"
- data-after="-300"
- ></div>
- </div>
- </div>
-
-
- <div style="width: 100%; height: 24vh; text-align: center; display: inline-block;">
- <div style="width: 100%; height: 15px; text-align: center; display: inline-block;">
- <b>IPv4 traffic on both servers</b>
- </div>
- <div style="width: 100%; height: calc(100% - 15px); text-align: center; display: inline-block;">
- <div data-netdata="system.ipv4"
- data-host="http://netdata.firehol.org"
- data-title="IPv4 traffic on netdata.firehol.org"
- data-chart-library="dygraph"
- data-width="49%"
- data-height="100%"
- data-after="-300"
- ></div>
- <div data-netdata="system.ipv4"
- data-title="IPv4 traffic on your netdata server"
- data-chart-library="dygraph"
- data-width="49%"
- data-height="100%"
- data-after="-300"
- ></div>
- </div>
- </div>
-
- <div style="width: 100%; height: 23vh; text-align: center; display: inline-block;">
- <div style="width: 100%; height: 15px; text-align: center; display: inline-block;">
- <b>Netdata statistics on both servers</b>
- </div>
- <div style="width: 100%; max-height: calc(100% - 15px); text-align: center; display: inline-block;">
- <div style="width: 49%; height:100%; align: center; display: inline-block;">
- netdata.firehol.org
- <br/>
- <div data-netdata="netdata.requests"
- data-host="http://netdata.firehol.org"
- data-title="Chart Refreshes/s"
- data-chart-library="gauge"
- data-width="20%"
- data-height="100%"
- data-after="-300"
- data-points="300"
- ></div>
- <div data-netdata="netdata.clients"
- data-host="http://netdata.firehol.org"
- data-title="Sockets"
- data-chart-library="gauge"
- data-width="20%"
- data-height="100%"
- data-after="-300"
- data-points="300"
- data-colors="#AA5500"
- ></div>
- <div data-netdata="netdata.net"
- data-dimensions="in"
- data-host="http://netdata.firehol.org"
- data-title="Requests Traffic"
- data-chart-library="easypiechart"
- data-width="15%"
- data-height="100%"
- data-after="-300"
- data-points="300"
- ></div>
- <div data-netdata="netdata.net"
- data-dimensions="out"
- data-host="http://netdata.firehol.org"
- data-title="Chart Data Traffic"
- data-chart-library="easypiechart"
- data-width="15%"
- data-height="100%"
- data-after="-300"
- data-points="300"
- ></div>
- </div>
- <div style="width: 49%; height:100%; align: center; display: inline-block;">
- your netdata server
- <br/>
- <div data-netdata="netdata.requests"
- data-title="Chart Refreshes/s"
- data-chart-library="gauge"
- data-width="20%"
- data-height="100%"
- data-after="-300"
- data-points="300"
- ></div>
- <div data-netdata="netdata.clients"
- data-title="Sockets"
- data-chart-library="gauge"
- data-width="20%"
- data-height="100%"
- data-after="-300"
- data-points="300"
- data-colors="#AA5500"
- ></div>
- <div data-netdata="netdata.net"
- data-dimensions="in"
- data-title="Requests Traffic"
- data-chart-library="easypiechart"
- data-width="15%"
- data-height="100%"
- data-after="-300"
- data-points="300"
- ></div>
- <div data-netdata="netdata.net"
- data-dimensions="out"
- data-title="Chart Data Traffic"
- data-chart-library="easypiechart"
- data-width="15%"
- data-height="100%"
- data-after="-300"
- data-points="300"
- ></div>
- </div>
- </div>
- </div>
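The demo dashboards earlier in this commit pair charts like these with a live numeric readout: an attribute of the form data-show-value-of-<dimension>-at makes dashboard.js print the current value of that dimension into the element with the given id. A sketch built from the nginx.requests declaration used on the demo pages (requests_now is a hypothetical element id, and the target server must expose the nginx chart):

    <span id="requests_now"></span> requests/s
    <div data-netdata="nginx.requests"
         data-dimensions="requests"
         data-chart-library="dygraph"
         data-dygraph-theme="sparkline"
         data-dygraph-type="area"
         data-width="100%"
         data-height="100%"
         data-after="-300"
         data-show-value-of-requests-at="requests_now"
    ></div>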
-</div>
-</body>
-</html>
+<!DOCTYPE html> +<html lang="en"> +<head> + <title>NetData TV Dashboard</title> + + <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> + <meta charset="utf-8"> + <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"> + <meta name="viewport" content="width=device-width, initial-scale=1"> + <meta name="apple-mobile-web-app-capable" content="yes"> + <meta name="apple-mobile-web-app-status-bar-style" content="black-translucent"> + + <script> + // this section has to appear before loading dashboard.js + + // Select a theme. + // uncomment on of the two themes: + + // var netdataTheme = 'default'; // this is white + var netdataTheme = 'slate'; // this is dark + + + // Set the default netdata server. + // on charts without a 'data-host', this one will be used. + // the default is the server that dashboard.js is downloaded from. + + // var netdataServer = 'http://my.server:19999/'; + </script> + + <!-- + Load dashboard.js + + to host this HTML file on your web server, + you have to load dashboard.js from the netdata server. + + So, pick one the two below + If you pick the first, set the server name/IP. + + The second assumes you host this file on /usr/share/netdata/web + and that you have chown it to be owned by netdata:netdata + --> + <!-- <script type="text/javascript" src="http://my.server:19999/dashboard.js"></script> --> + <script type="text/javascript" src="dashboard.js?v37"></script> + + <script> + // Set options for TV operation + // This has to be done, after dashboard.js is loaded + + // destroy charts not shown (lowers memory on the browser) + NETDATA.options.current.destroy_on_hide = true; + + // set this to false, to always show all dimensions + NETDATA.options.current.eliminate_zero_dimensions = true; + + // lower the pressure on this browser + NETDATA.options.current.concurrent_refreshes = false; + + // if the tv browser is too slow (a pi?) + // set this to false + NETDATA.options.current.parallel_refresher = true; + + // always update the charts, even if focus is lost + // NETDATA.options.current.stop_updates_when_focus_is_lost = false; + + // Since you may render charts from many servers and any of them may + // become offline for some time, the charts will break. 
+ // This will reload the page every RELOAD_EVERY minutes + + var RELOAD_EVERY = 5; + setTimeout(function(){ + location.reload(); + }, RELOAD_EVERY * 60 * 1000); + + </script> + +</head> +<body> + +<div style="width: 100%; text-align: center; display: inline-block;"> + + <div style="width: 100%; height: 24vh; text-align: center; display: inline-block;"> + <div style="width: 100%; height: 15px; text-align: center; display: inline-block;"> + <b>CPU On both servers</b> + </div> + <div style="width: 100%; height: calc(100% - 15px); text-align: center; display: inline-block;"> + <br/> + <div data-netdata="system.cpu" + data-host="http://netdata.firehol.org" + data-title="CPU usage of netdata.firehol.org" + data-chart-library="dygraph" + data-width="49%" + data-height="100%" + data-after="-300" + ></div> + <div data-netdata="system.cpu" + data-title="CPU usage of your netdata server" + data-chart-library="dygraph" + data-width="49%" + data-height="100%" + data-after="-300" + ></div> + </div> + </div> + + + <div style="width: 100%; height: 24vh; text-align: center; display: inline-block;"> + <div style="width: 100%; height: 15px; text-align: center; display: inline-block;"> + <b>Disk I/O on both servers</b> + </div> + <div style="width: 100%; height: calc(100% - 15px); text-align: center; display: inline-block;"> + <div data-netdata="system.io" + data-host="http://netdata.firehol.org" + data-title="I/O on netdata.firehol.org" + data-chart-library="dygraph" + data-width="49%" + data-height="100%" + data-after="-300" + ></div> + <div data-netdata="system.io" + data-title="I/O on your netdata server" + data-chart-library="dygraph" + data-width="49%" + data-height="100%" + data-after="-300" + ></div> + </div> + </div> + + + <div style="width: 100%; height: 24vh; text-align: center; display: inline-block;"> + <div style="width: 100%; height: 15px; text-align: center; display: inline-block;"> + <b>IPv4 traffic on both servers</b> + </div> + <div style="width: 100%; height: calc(100% - 15px); text-align: center; display: inline-block;"> + <div data-netdata="system.ipv4" + data-host="http://netdata.firehol.org" + data-title="IPv4 traffic on netdata.firehol.org" + data-chart-library="dygraph" + data-width="49%" + data-height="100%" + data-after="-300" + ></div> + <div data-netdata="system.ipv4" + data-title="IPv4 traffic on your netdata server" + data-chart-library="dygraph" + data-width="49%" + data-height="100%" + data-after="-300" + ></div> + </div> + </div> + + <div style="width: 100%; height: 23vh; text-align: center; display: inline-block;"> + <div style="width: 100%; height: 15px; text-align: center; display: inline-block;"> + <b>Netdata statistics on both servers</b> + </div> + <div style="width: 100%; max-height: calc(100% - 15px); text-align: center; display: inline-block;"> + <div style="width: 49%; height:100%; align: center; display: inline-block;"> + netdata.firehol.org + <br/> + <div data-netdata="netdata.requests" + data-host="http://netdata.firehol.org" + data-title="Chart Refreshes/s" + data-chart-library="gauge" + data-width="20%" + data-height="100%" + data-after="-300" + data-points="300" + ></div> + <div data-netdata="netdata.clients" + data-host="http://netdata.firehol.org" + data-title="Sockets" + data-chart-library="gauge" + data-width="20%" + data-height="100%" + data-after="-300" + data-points="300" + data-colors="#AA5500" + ></div> + <div data-netdata="netdata.net" + data-dimensions="in" + data-host="http://netdata.firehol.org" + data-title="Requests Traffic" + 
data-chart-library="easypiechart" + data-width="15%" + data-height="100%" + data-after="-300" + data-points="300" + ></div> + <div data-netdata="netdata.net" + data-dimensions="out" + data-host="http://netdata.firehol.org" + data-title="Chart Data Traffic" + data-chart-library="easypiechart" + data-width="15%" + data-height="100%" + data-after="-300" + data-points="300" + ></div> + </div> + <div style="width: 49%; height:100%; align: center; display: inline-block;"> + your netdata server + <br/> + <div data-netdata="netdata.requests" + data-title="Chart Refreshes/s" + data-chart-library="gauge" + data-width="20%" + data-height="100%" + data-after="-300" + data-points="300" + ></div> + <div data-netdata="netdata.clients" + data-title="Sockets" + data-chart-library="gauge" + data-width="20%" + data-height="100%" + data-after="-300" + data-points="300" + data-colors="#AA5500" + ></div> + <div data-netdata="netdata.net" + data-dimensions="in" + data-title="Requests Traffic" + data-chart-library="easypiechart" + data-width="15%" + data-height="100%" + data-after="-300" + data-points="300" + ></div> + <div data-netdata="netdata.net" + data-dimensions="out" + data-title="Chart Data Traffic" + data-chart-library="easypiechart" + data-width="15%" + data-height="100%" + data-after="-300" + data-points="300" + ></div> + </div> + </div> + </div> +</div> +</body> +</html> diff --git a/web/version.txt b/web/version.txt index 9aef34725..afce0e539 100644 --- a/web/version.txt +++ b/web/version.txt @@ -1 +1 @@ -39c196708756fc8f85bfc70c931836479be3b9c2 +bb4aa949f5ac825253d8adc6070661299abc1c3b |