Diffstat (limited to 'charts.d')
-rw-r--r--   charts.d/Makefile.am             24
-rw-r--r--   charts.d/README.md              271
-rwxr-xr-x   charts.d/airsearches.chart.sh    91
-rwxr-xr-x   charts.d/ap.chart.sh            160
-rwxr-xr-x   charts.d/apache.chart.sh        250
-rwxr-xr-x   charts.d/cpu_apps.chart.sh       66
-rwxr-xr-x   charts.d/cpufreq.chart.sh        83
-rwxr-xr-x   charts.d/crsproxy.chart.sh      148
-rwxr-xr-x   charts.d/example.chart.sh        82
-rwxr-xr-x   charts.d/load_average.chart.sh   64
-rwxr-xr-x   charts.d/mem_apps.chart.sh       56
-rwxr-xr-x   charts.d/mysql.chart.sh         460
-rwxr-xr-x   charts.d/nginx.chart.sh         134
-rwxr-xr-x   charts.d/nut.chart.sh           187
-rwxr-xr-x   charts.d/opensips.chart.sh      320
-rwxr-xr-x   charts.d/postfix.chart.sh        92
-rwxr-xr-x   charts.d/sensors.chart.sh       238
-rwxr-xr-x   charts.d/squid.chart.sh         145
18 files changed, 2871 insertions, 0 deletions
diff --git a/charts.d/Makefile.am b/charts.d/Makefile.am
new file mode 100644
index 000000000..6c33ed24d
--- /dev/null
+++ b/charts.d/Makefile.am
@@ -0,0 +1,24 @@
+#
+# Copyright (C) 2015 Alon Bar-Lev <alon.barlev@gmail.com>
+#
+MAINTAINERCLEANFILES= $(srcdir)/Makefile.in
+
+dist_charts_SCRIPTS = \
+ README.md \
+ airsearches.chart.sh \
+ ap.chart.sh \
+ apache.chart.sh \
+ cpu_apps.chart.sh \
+ cpufreq.chart.sh \
+ crsproxy.chart.sh \
+ example.chart.sh \
+ load_average.chart.sh \
+ mem_apps.chart.sh \
+ mysql.chart.sh \
+ nginx.chart.sh \
+ nut.chart.sh \
+ opensips.chart.sh \
+ postfix.chart.sh \
+ sensors.chart.sh \
+ squid.chart.sh \
+ $(NULL)
diff --git a/charts.d/README.md b/charts.d/README.md
new file mode 100644
index 000000000..fd66c0d6a
--- /dev/null
+++ b/charts.d/README.md
@@ -0,0 +1,271 @@
+The following charts.d plugins are supported:
+
+# mysql
+
+The plugin will monitor one or more mysql servers
+
+It will produce the following charts:
+
+1. **Bandwidth** in kbps
+ * in
+ * out
+
+2. **Queries** in queries/sec
+ * queries
+ * questions
+ * slow queries
+
+3. **Operations** in operations/sec
+ * opened tables
+ * flush
+ * commit
+ * delete
+ * prepare
+ * read first
+ * read key
+ * read next
+ * read prev
+ * read random
+ * read random next
+ * rollback
+ * save point
+ * update
+ * write
+
+4. **Table Locks** in locks/sec
+ * immediate
+ * waited
+
+5. **Select Issues** in issues/sec
+ * full join
+ * full range join
+ * range
+ * range check
+ * scan
+
+6. **Sort Issues** in issues/sec
+ * merge passes
+ * range
+ * scan
+
+### configuration
+
+You can configure multiple database servers.
+
+For each server, you can provide the following:
+
+1. a name, anything you like, but keep it short
+2. the mysql command to connect to the server
+3. the mysql command line options to be used for connecting to the server
+
+Here is an example for 2 servers:
+
+```sh
+mysql_opts[server1]="-h server1.example.com"
+mysql_opts[server2]="-h server2.example.com --connect_timeout 2"
+```
+
+The above will use the `mysql` command found in the system path.
+You can also provide a custom mysql command per server, like this:
+
+```sh
+mysql_cmds[server2]="/opt/mysql/bin/mysql"
+```
+
+The above sets the mysql command only for server2. server1 will use the system default.
+
+If no configuration is given, the plugin will attempt to connect to a mysql server on localhost.
+
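+Putting it together, a complete `/etc/netdata/mysql.conf` for the two servers
+above could look like this (the names and paths are only examples):
+
+```sh
+mysql_opts[server1]="-h server1.example.com"
+
+mysql_cmds[server2]="/opt/mysql/bin/mysql"
+mysql_opts[server2]="-h server2.example.com --connect_timeout 2"
+```
+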
+---
+
+# squid
+
+The plugin will monitor a squid server.
+
+It will produce 4 charts:
+
+1. **Squid Client Bandwidth** in kbps
+
+ * in
+ * out
+ * hits
+
+2. **Squid Client Requests** in requests/sec
+
+ * requests
+ * hits
+ * errors
+
+3. **Squid Server Bandwidth** in kbps
+
+ * in
+ * out
+
+4. **Squid Server Requests** in requests/sec
+
+ * requests
+ * errors
+
+### autoconfig
+
+The plugin will automatically detect squid servers running on
+localhost, on ports 3128 or 8080.
+
+It will attempt to download URLs in the form:
+
+- `cache_object://HOST:PORT/counters`
+- `/squid-internal-mgr/counters`
+
+If any of them succeeds, it will use it.
+
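+To check by hand whether your squid exposes these counters (assuming it
+listens on 127.0.0.1:3128 and its manager ACLs allow it), you can try:
+
+```sh
+curl "http://127.0.0.1:3128/squid-internal-mgr/counters"
+```
+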
+### configuration
+
+If you need to configure it by hand, create the file
+`/etc/netdata/squid.conf` with the following variables:
+
+- `squid_host=IP` the IP of the squid host
+- `squid_port=PORT` the port squid is listening on
+- `squid_url="URL"` the URL with the statistics to be fetched from squid
+- `squid_timeout=SECONDS` how long to wait for squid to respond
+- `squid_update_every=SECONDS` the frequency of the data collection
+
+Example `/etc/netdata/squid.conf`:
+
+```sh
+squid_host=127.0.0.1
+squid_port=3128
+squid_url="cache_object://127.0.0.1:3128/counters"
+squid_timeout=2
+squid_update_every=5
+```
+
+---
+
+# sensors
+
+The plugin will provide charts for all configured system sensors
+
+> This plugin reads sensors directly from the kernel.
+> The `lm-sensors` package can perform calculations on the
+> kernel-provided values; this plugin does not.
+> So the values graphed are the raw hardware values of the sensors.
+
+The plugin will create netdata charts for:
+
+1. **Temperature**
+2. **Voltage**
+3. **Current**
+4. **Power**
+5. **Fans Speed**
+6. **Energy**
+7. **Humidity**
+
+One chart will be created for each of the above, for every sensor chip found.
+
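+To get an idea of what the plugin may find on your system, you can list the
+raw sensor value files the kernel exposes, for example:
+
+```sh
+find /sys/devices -maxdepth 10 -name '*_input' 2>/dev/null
+```
+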
+### configuration
+
+This is the internal default for `/etc/netdata/sensors.conf`
+
+```sh
+# the directory the kernel keeps sensor data
+sensors_sys_dir="${NETDATA_HOST_PREFIX}/sys/devices"
+
+# how deep in the tree to check for sensor data
+sensors_sys_depth=10
+
+# if set to 1, the script will overwrite internal
+# script functions with code generated ones
+# leave it set to 1, it is faster
+sensors_source_update=1
+
+# how frequently to collect sensor data
+# the default is to collect it at every iteration of charts.d
+sensors_update_every=
+```
+
+---
+
+# postfix
+
+The plugin will collect the postfix queue size.
+
+It will create two charts:
+
+1. **queue size in emails**
+2. **queue size in KB**
+
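+Both values come from the `postqueue` command (see the configuration below).
+To see the queue summary postfix reports on your system, you can run
+something like:
+
+```sh
+postqueue -p | tail -n 1
+# typically ends with a line like: -- 3 Kbytes in 5 Requests.
+```
+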
+### configuration
+
+This is the internal default for `/etc/netdata/postfix.conf`
+
+```sh
+# the postqueue command
+# if empty, it will use the one found in the system path
+postfix_postqueue=
+
+# how frequently to collect queue size
+postfix_update_every=15
+```
+
+---
+
+# nut
+
+The plugin will collect UPS data for all UPSes configured in the system.
+
+The following charts will be created:
+
+1. **UPS Charge**
+
+ * charge percentage
+
+2. **UPS Battery Voltage**
+
+ * current voltage
+ * high voltage
+ * low voltage
+ * nominal voltage
+
+3. **UPS Input Voltage**
+
+ * current voltage
+ * fault voltage
+ * nominal voltage
+
+4. **UPS Input Current**
+
+ * nominal current
+
+5. **UPS Input Frequency**
+
+ * current frequency
+ * nominal frequency
+
+6. **UPS Output Voltage**
+
+ * current voltage
+
+7. **UPS Load**
+
+ * current load
+
+8. **UPS Temperature**
+
+ * current temperature
+
+
+### configuration
+
+This is the internal default for `/etc/netdata/nut.conf`
+
+```sh
+# a space separated list of UPS names
+# if empty, the list returned by 'upsc -l' will be used
+nut_ups=
+
+# how frequently to collect UPS data
+nut_update_every=2
+```
+
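+To verify what the plugin will see, you can query NUT directly: `upsc -l`
+prints the UPS names (the same list used when `nut_ups` is empty) and
+`upsc <name>` prints the variables the charts are built from:
+
+```sh
+upsc -l
+upsc myups        # 'myups' is just an example UPS name
+```
+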
+---
+
diff --git a/charts.d/airsearches.chart.sh b/charts.d/airsearches.chart.sh
new file mode 100755
index 000000000..449b14255
--- /dev/null
+++ b/charts.d/airsearches.chart.sh
@@ -0,0 +1,91 @@
+#!/bin/sh
+
+airsearches_url=
+airsearches_cmds=
+airsearches_update_every=15
+
+airsearches_get() {
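+	# the status page is a list of "<br />"-separated "Name: value" pairs;
+	# the pipeline below turns e.g. "Searches: 123" into
+	# "airsearches_searches=123" and filters everything else out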
+ wget 2>/dev/null -O - "$airsearches_url" |\
+ sed -e "s|<br />|\n|g" -e "s|: |=|g" -e "s| \+|_|g" -e "s/^/airsearches_/g" |\
+ tr "[A-Z]\.\!@#\$%^&*()_+\-" "[a-z]_____________" |\
+ egrep "^airsearches_[a-z0-9_]+=[0-9]+$"
+}
+
+airsearches_check() {
+ # make sure we have all the commands we need
+ require_cmd wget || return 1
+
+ # make sure we are configured
+ if [ -z "$airsearches_url" ]
+ then
+ echo >&2 "$PROGRAM_NAME: airsearches: not configured. Please set airsearches_url='url' in $confd/airsearches.conf"
+ return 1
+ fi
+
+ # check once if the url works
+ wget 2>/dev/null -O /dev/null "$airsearches_url"
+ if [ ! $? -eq 0 ]
+ then
+ echo >&2 "$PROGRAM_NAME: airsearches: cannot fetch the url: $airsearches_url. Please set airsearches_url='url' in $confd/airsearches.conf"
+ return 1
+ fi
+
+ # if the admin did not give any commands
+ # find the available ones
+ if [ -z "$airsearches_cmds" ]
+ then
+ airsearches_cmds="$(airsearches_get | cut -d '=' -f 1 | sed "s/^airsearches_//g" | sort -u)"
+ echo
+ fi
+
+ # did we find any commands?
+ if [ -z "$airsearches_cmds" ]
+ then
+ echo >&2 "$PROGRAM_NAME: airsearches: cannot find command list automatically. Please set airsearches_cmds='...' in $confd/airsearches.conf"
+ return 1
+ fi
+
+ # ok we can do it
+ return 0
+}
+
+airsearches_create() {
+ [ -z "$airsearches_cmds" ] && return 1
+
+ # create the charts
+ local x=
+ echo "CHART airsearches.affiliates '' 'Air Searches per affiliate' 'requests / min' airsearches '' stacked 20000 $airsearches_update_every"
+ for x in $airsearches_cmds
+ do
+ echo "DIMENSION $x '' incremental 60 1"
+ done
+
+ return 0
+}
+
+airsearches_update() {
+ # the first argument to this function is the microseconds since last update
+	# pass this parameter to the BEGIN statement (see below).
+
+ # do all the work to collect / calculate the values
+ # for each dimension
+ # remember: KEEP IT SIMPLE AND SHORT
+
+ # get the values from airsearches
+ eval "$(airsearches_get)"
+
+ # write the result of the work.
+ local x=
+
+ echo "BEGIN airsearches.affiliates $1"
+ for x in $airsearches_cmds
+ do
+ eval "v=\$airsearches_$x"
+ echo "SET $x = $v"
+ done
+ echo "END"
+
+ airsearches_dt=0
+
+ return 0
+}
diff --git a/charts.d/ap.chart.sh b/charts.d/ap.chart.sh
new file mode 100755
index 000000000..4704b89de
--- /dev/null
+++ b/charts.d/ap.chart.sh
@@ -0,0 +1,160 @@
+#!/bin/bash
+
+# _update_every is a special variable - it holds the number of seconds
+# between the calls of the _update() function
+ap_update_every=
+ap_priority=6900
+
+declare -A ap_devs=()
+
+export PATH="${PATH}:/sbin:/usr/sbin:/usr/local/sbin"
+
+# _check is called once, to find out if this chart should be enabled or not
+ap_check() {
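+	# parse the output of 'iw dev', which contains blocks like:
+	#   Interface wlan0
+	#       ssid MyNet
+	#       type AP
+	# and collect ap_devs[interface]="ssid" for every interface in AP mode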
+ local ev=$(iw dev | awk '
+ BEGIN {
+ i = "";
+ ssid = "";
+ ap = 0;
+ }
+ /^[ \t]+Interface / {
+ if( ap == 1 ) {
+ print "ap_devs[" i "]=\"" ssid "\""
+ }
+
+ i = $2;
+ ssid = "";
+ ap = 0;
+ }
+ /^[ \t]+ssid / { ssid = $2; }
+ /^[ \t]+type AP$/ { ap = 1; }
+ END {
+ if( ap == 1 ) {
+ print "ap_devs[" i "]=\"" ssid "\""
+ }
+ }
+ ')
+ eval "${ev}"
+
+ # this should return:
+ # - 0 to enable the chart
+ # - 1 to disable the chart
+
+ [ ${#ap_devs[@]} -gt 0 ] && return 0
+ return 1
+}
+
+# _create is called once, to create the charts
+ap_create() {
+ local ssid dev
+
+ for dev in "${!ap_devs[@]}"
+ do
+ ssid="${ap_devs[${dev}]}"
+
+ # create the chart with 3 dimensions
+ cat <<EOF
+CHART ap_clients.${dev} '' "Connected clients to ${ssid} on ${dev}" "clients" ${dev} ap.clients line $[ap_priority + 1] $ap_update_every
+DIMENSION clients '' absolute 1 1
+
+CHART ap_bandwidth.${dev} '' "Bandwidth for ${ssid} on ${dev}" "kilobits/s" ${dev} ap.net area $[ap_priority + 2] $ap_update_every
+DIMENSION received '' incremental 8 1024
+DIMENSION sent '' incremental -8 1024
+
+CHART ap_packets.${dev} '' "Packets for ${ssid} on ${dev}" "packets/s" ${dev} ap.packets line $[ap_priority + 3] $ap_update_every
+DIMENSION received '' incremental 1 1
+DIMENSION sent '' incremental -1 1
+
+CHART ap_issues.${dev} '' "Transmit Issues for ${ssid} on ${dev}" "issues/s" ${dev} ap.issues line $[ap_priority + 4] $ap_update_every
+DIMENSION retries 'tx retries' incremental 1 1
+DIMENSION failures 'tx failures' incremental -1 1
+
+CHART ap_signal.${dev} '' "Average Signal for ${ssid} on ${dev}" "dBm" ${dev} ap.signal line $[ap_priority + 5] $ap_update_every
+DIMENSION signal 'average signal' absolute 1 1
+
+CHART ap_bitrate.${dev} '' "Bitrate for ${ssid} on ${dev}" "Mbps" ${dev} ap.bitrate line $[ap_priority + 6] $ap_update_every
+DIMENSION receive '' absolute 1 1000
+DIMENSION transmit '' absolute -1 1000
+DIMENSION expected 'expected throughput' absolute 1 1000
+EOF
+
+ done
+
+ return 0
+}
+
+# _update is called continuously, to collect the values
+ap_update() {
+	# the first argument to this function is the microseconds since last update
+	# pass this parameter to the BEGIN statement (see below).
+
+ # do all the work to collect / calculate the values
+ # for each dimension
+ # remember: KEEP IT SIMPLE AND SHORT
+
+ for dev in "${!ap_devs[@]}"
+ do
+ iw ${dev} station dump |\
+ awk "
+ BEGIN {
+ c = 0;
+ rb = 0;
+ tb = 0;
+ rp = 0;
+ tp = 0;
+ tr = 0;
+ tf = 0;
+ tt = 0;
+ rt = 0;
+ s = 0;
+ g = 0;
+ e = 0;
+ }
+ /^Station/ { c++; }
+ /^[ \\t]+rx bytes:/ { rb += \$3 }
+ /^[ \\t]+tx bytes:/ { tb += \$3 }
+ /^[ \\t]+rx packets:/ { rp += \$3 }
+ /^[ \\t]+tx packets:/ { tp += \$3 }
+ /^[ \\t]+tx retries:/ { tr += \$3 }
+ /^[ \\t]+tx failed:/ { tf += \$3 }
+ /^[ \\t]+signal:/ { s += \$2; }
+ /^[ \\t]+rx bitrate:/ { x = \$3; rt += x * 1000; }
+ /^[ \\t]+tx bitrate:/ { x = \$3; tt += x * 1000; }
+ /^[ \\t]+expected throughput:(.*)Mbps/ {
+ x=\$3;
+ sub(/Mbps/, \"\", x);
+ e += x * 1000;
+ }
+ END {
+ print \"BEGIN ap_clients.${dev}\"
+ print \"SET clients = \" c;
+ print \"END\"
+ print \"BEGIN ap_bandwidth.${dev}\"
+ print \"SET received = \" rb;
+ print \"SET sent = \" tb;
+ print \"END\"
+ print \"BEGIN ap_packets.${dev}\"
+ print \"SET received = \" rp;
+ print \"SET sent = \" tp;
+ print \"END\"
+ print \"BEGIN ap_issues.${dev}\"
+ print \"SET retries = \" tr;
+ print \"SET failures = \" tf;
+ print \"END\"
+			if( c == 0 ) c = 1; # guard against division by zero when no stations are connected
+
+			print \"BEGIN ap_signal.${dev}\"
+			print \"SET signal = \" s / c;
+			print \"END\"
+
+ print \"BEGIN ap_bitrate.${dev}\"
+ print \"SET receive = \" rt / c;
+ print \"SET transmit = \" tt / c;
+ print \"SET expected = \" e / c;
+ print \"END\"
+ }
+ "
+ done
+
+ return 0
+}
+
diff --git a/charts.d/apache.chart.sh b/charts.d/apache.chart.sh
new file mode 100755
index 000000000..efa559ddb
--- /dev/null
+++ b/charts.d/apache.chart.sh
@@ -0,0 +1,250 @@
+#!/bin/bash
+
+# the URL to download apache status info
+apache_url="http://127.0.0.1:80/server-status?auto"
+
+# _update_every is a special variable - it holds the number of seconds
+# between the calls of the _update() function
+apache_update_every=
+
+apache_priority=60000
+
+# convert apache floating point values
+# to integer using this multiplier
+# this only affects precision - the values
+# will be in the proper units
+apache_decimal_detail=1000000
+
+declare -a apache_response=()
+apache_accesses=0
+apache_kbytes=0
+apache_reqpersec=0
+apache_bytespersec=0
+apache_bytesperreq=0
+apache_busyworkers=0
+apache_idleworkers=0
+apache_connstotal=0
+apache_connsasyncwriting=0
+apache_connsasynckeepalive=0
+apache_connsasyncclosing=0
+
+apache_keys_detected=0
+apache_has_conns=0
+apache_key_accesses=
+apache_key_kbytes=
+apache_key_reqpersec=
+apache_key_bytespersec=
+apache_key_bytesperreq=
+apache_key_busyworkers=
+apache_key_idleworkers=
+apache_key_scoreboard=
+apache_key_connstotal=
+apache_key_connsasyncwriting=
+apache_key_connsasynckeepalive=
+apache_key_connsasyncclosing=
+apache_detect() {
+ local i=0
+ for x in "${@}"
+ do
+ case "${x}" in
+ 'Total Accesses') apache_key_accesses=$[i + 1] ;;
+ 'Total kBytes') apache_key_kbytes=$[i + 1] ;;
+ 'ReqPerSec') apache_key_reqpersec=$[i + 1] ;;
+ 'BytesPerSec') apache_key_bytespersec=$[i + 1] ;;
+ 'BytesPerReq') apache_key_bytesperreq=$[i + 1] ;;
+ 'BusyWorkers') apache_key_busyworkers=$[i + 1] ;;
+ 'IdleWorkers') apache_key_idleworkers=$[i + 1];;
+ 'ConnsTotal') apache_key_connstotal=$[i + 1] ;;
+ 'ConnsAsyncWriting') apache_key_connsasyncwriting=$[i + 1] ;;
+ 'ConnsAsyncKeepAlive') apache_key_connsasynckeepalive=$[i + 1] ;;
+ 'ConnsAsyncClosing') apache_key_connsasyncclosing=$[i + 1] ;;
+ 'Scoreboard') apache_key_scoreboard=$[i] ;;
+ esac
+
+ i=$[i + 1]
+ done
+
+	# we will not check for the Conns*
+	# keys, since these are apache 2.4 specific
+ if [ -z "${apache_key_accesses}" \
+ -o -z "${apache_key_kbytes}" \
+ -o -z "${apache_key_reqpersec}" \
+ -o -z "${apache_key_bytespersec}" \
+ -o -z "${apache_key_bytesperreq}" \
+ -o -z "${apache_key_busyworkers}" \
+ -o -z "${apache_key_idleworkers}" \
+ -o -z "${apache_key_scoreboard}" \
+ ]
+ then
+ echo >&2 "apache: Invalid response or missing keys from apache server: ${*}"
+ return 1
+ fi
+
+ if [ ! -z "${apache_key_connstotal}" \
+ -a ! -z "${apache_key_connsasyncwriting}" \
+ -a ! -z "${apache_key_connsasynckeepalive}" \
+ -a ! -z "${apache_key_connsasyncclosing}" \
+ ]
+ then
+ apache_has_conns=1
+ fi
+
+ return 0
+}
+
+apache_get() {
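+	# the ?auto status page is a list of "Key: value" lines;
+	# splitting it on ':' and newlines (the IFS below) gives an array where
+	# every key is followed by its value, e.g.
+	#   "Total Accesses" " 1234" "Total kBytes" " 5678" ...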
+ local oIFS="${IFS}" ret
+ IFS=$':\n' apache_response=($(curl -s "${apache_url}"))
+ ret=$?
+ IFS="${oIFS}"
+
+ [ $ret -ne 0 -o "${#apache_response[@]}" -eq 0 ] && return 1
+
+ # the last line on the apache output is "Scoreboard"
+ # we use this label to detect that the output has a new word count
+ if [ ${apache_keys_detected} -eq 0 -o "${apache_response[${apache_key_scoreboard}]}" != "Scoreboard" ]
+ then
+ apache_detect "${apache_response[@]}" || return 1
+ apache_keys_detected=1
+ fi
+
+ apache_accesses="${apache_response[${apache_key_accesses}]}"
+ apache_kbytes="${apache_response[${apache_key_kbytes}]}"
+
+ float2int "${apache_response[${apache_key_reqpersec}]}" ${apache_decimal_detail}
+ apache_reqpersec=${FLOAT2INT_RESULT}
+
+ float2int "${apache_response[${apache_key_bytespersec}]}" ${apache_decimal_detail}
+ apache_bytespersec=${FLOAT2INT_RESULT}
+
+ float2int "${apache_response[${apache_key_bytesperreq}]}" ${apache_decimal_detail}
+ apache_bytesperreq=${FLOAT2INT_RESULT}
+
+ apache_busyworkers="${apache_response[${apache_key_busyworkers}]}"
+ apache_idleworkers="${apache_response[${apache_key_idleworkers}]}"
+
+ if [ -z "${apache_accesses}" \
+ -o -z "${apache_kbytes}" \
+ -o -z "${apache_reqpersec}" \
+ -o -z "${apache_bytespersec}" \
+ -o -z "${apache_bytesperreq}" \
+ -o -z "${apache_busyworkers}" \
+ -o -z "${apache_idleworkers}" \
+ ]
+ then
+		echo >&2 "apache: got empty values from apache server: ${apache_response[*]}"
+ return 1
+ fi
+
+ if [ ${apache_has_conns} -eq 1 ]
+ then
+ apache_connstotal="${apache_response[${apache_key_connstotal}]}"
+ apache_connsasyncwriting="${apache_response[${apache_key_connsasyncwriting}]}"
+ apache_connsasynckeepalive="${apache_response[${apache_key_connsasynckeepalive}]}"
+ apache_connsasyncclosing="${apache_response[${apache_key_connsasyncclosing}]}"
+ fi
+
+ return 0
+}
+
+# _check is called once, to find out if this chart should be enabled or not
+apache_check() {
+
+ apache_get
+ if [ $? -ne 0 ]
+ then
+ echo >&2 "apache: cannot find stub_status on URL '${apache_url}'. Please set apache_url='http://apache.server:80/server-status?auto' in $confd/apache.conf"
+ return 1
+ fi
+
+ # this should return:
+ # - 0 to enable the chart
+ # - 1 to disable the chart
+
+ return 0
+}
+
+# _create is called once, to create the charts
+apache_create() {
+ cat <<EOF
+CHART apache.bytesperreq '' "apache Lifetime Avg. Response Size" "bytes/request" statistics apache.bytesperreq area $[apache_priority + 8] $apache_update_every
+DIMENSION size '' absolute 1 ${apache_decimal_detail}
+CHART apache.workers '' "apache Workers" "workers" workers apache.workers stacked $[apache_priority + 5] $apache_update_every
+DIMENSION idle '' absolute 1 1
+DIMENSION busy '' absolute 1 1
+CHART apache.reqpersec '' "apache Lifetime Avg. Requests/s" "requests/s" statistics apache.reqpersec line $[apache_priority + 6] $apache_update_every
+DIMENSION requests '' absolute 1 ${apache_decimal_detail}
+CHART apache.bytespersec '' "apache Lifetime Avg. Bandwidth/s" "kilobits/s" statistics apache.bytespersec area $[apache_priority + 7] $apache_update_every
+DIMENSION sent '' absolute 8 $[apache_decimal_detail * 1000]
+CHART apache.requests '' "apache Requests" "requests/s" requests apache.requests line $[apache_priority + 1] $apache_update_every
+DIMENSION requests '' incremental 1 1
+CHART apache.net '' "apache Bandwidth" "kilobits/s" bandwidth apache.net area $[apache_priority + 3] $apache_update_every
+DIMENSION sent '' incremental 8 1
+EOF
+
+ if [ ${apache_has_conns} -eq 1 ]
+ then
+ cat <<EOF2
+CHART apache.connections '' "apache Connections" "connections" connections apache.connections line $[apache_priority + 2] $apache_update_every
+DIMENSION connections '' absolute 1 1
+CHART apache.conns_async '' "apache Async Connections" "connections" connections apache.conns_async stacked $[apache_priority + 4] $apache_update_every
+DIMENSION keepalive '' absolute 1 1
+DIMENSION closing '' absolute 1 1
+DIMENSION writing '' absolute 1 1
+EOF2
+ fi
+
+ return 0
+}
+
+# _update is called continuously, to collect the values
+apache_update() {
+	local reqs net
+	# the first argument to this function is the microseconds since last update
+	# pass this parameter to the BEGIN statement (see below).
+
+ # do all the work to collect / calculate the values
+ # for each dimension
+ # remember: KEEP IT SIMPLE AND SHORT
+
+ apache_get || return 1
+
+ # write the result of the work.
+ cat <<VALUESEOF
+BEGIN apache.requests $1
+SET requests = $[apache_accesses]
+END
+BEGIN apache.net $1
+SET sent = $[apache_kbytes]
+END
+BEGIN apache.reqpersec $1
+SET requests = $[apache_reqpersec]
+END
+BEGIN apache.bytespersec $1
+SET sent = $[apache_bytespersec]
+END
+BEGIN apache.bytesperreq $1
+SET size = $[apache_bytesperreq]
+END
+BEGIN apache.workers $1
+SET idle = $[apache_idleworkers]
+SET busy = $[apache_busyworkers]
+END
+VALUESEOF
+
+ if [ ${apache_has_conns} -eq 1 ]
+ then
+ cat <<VALUESEOF2
+BEGIN apache.connections $1
+SET connections = $[apache_connstotal]
+END
+BEGIN apache.conns_async $1
+SET keepalive = $[apache_connsasynckeepalive]
+SET closing = $[apache_connsasyncclosing]
+SET writing = $[apache_connsasyncwriting]
+END
+VALUESEOF2
+ fi
+
+ return 0
+}
diff --git a/charts.d/cpu_apps.chart.sh b/charts.d/cpu_apps.chart.sh
new file mode 100755
index 000000000..5a25163e1
--- /dev/null
+++ b/charts.d/cpu_apps.chart.sh
@@ -0,0 +1,66 @@
+#!/bin/sh
+
+# THIS PLUGIN IS OBSOLETE
+# USE apps.plugin INSTEAD
+
+# a space separated list of commands to monitor
+cpu_apps_apps=
+
+# these are required for computing memory in bytes and cpu in seconds
+#cpu_apps_pagesize="`getconf PAGESIZE`"
+cpu_apps_clockticks="$(getconf CLK_TCK)"
+
+cpu_apps_update_every=60
+
+cpu_apps_check() {
+ # this should return:
+ # - 0 to enable the chart
+ # - 1 to disable the chart
+
+ if [ -z "$cpu_apps_apps" ]
+ then
+ echo >&2 "$PROGRAM_NAME: cpu_apps: Please set cpu_apps_apps='command1 command2 ...' in $confd/cpu_apps_apps.conf"
+ return 1
+ fi
+ return 0
+}
+
+cpu_apps_bc_finalze=
+
+cpu_apps_create() {
+
+ echo "CHART chartsd_apps.cpu '' 'Apps CPU' 'milliseconds / $cpu_apps_update_every sec' apps apps stacked 20001 $cpu_apps_update_every"
+
+ local x=
+ for x in $cpu_apps_apps
+ do
+ echo "DIMENSION $x $x incremental 1000 $cpu_apps_clockticks"
+
+ # this string is needed later in the update() function
+ # to finalize the instructions for the bc command
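+		# e.g. for an app named "foo" this appends:  "SET foo = "; foo;
+		# which makes bc print the value accumulated in its "foo" variable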
+ cpu_apps_bc_finalze="$cpu_apps_bc_finalze \"SET $x = \"; $x;"
+ done
+ return 0
+}
+
+cpu_apps_update() {
+ # do all the work to collect / calculate the values
+ # for each dimension
+ # remember: KEEP IT SIMPLE AND SHORT
+
+ echo "BEGIN chartsd_apps.cpu"
+ ps -o pid,comm -C "$cpu_apps_apps" |\
+ grep -v "COMMAND" |\
+ (
+ while read pid name
+ do
+ echo "$name+=`cat /proc/$pid/stat | cut -d ' ' -f 14-15`"
+ done
+ ) |\
+ ( sed -e "s/ \+/ /g" -e "s/ /+/g";
+ echo "$cpu_apps_bc_finalze"
+ ) | bc
+ echo "END"
+
+ return 0
+}
diff --git a/charts.d/cpufreq.chart.sh b/charts.d/cpufreq.chart.sh
new file mode 100755
index 000000000..6a968237d
--- /dev/null
+++ b/charts.d/cpufreq.chart.sh
@@ -0,0 +1,83 @@
+#!/bin/sh
+
+# if this chart is called X.chart.sh, then all functions and global variables
+# must start with X_
+
+cpufreq_sys_dir="/sys/devices"
+cpufreq_sys_depth=10
+cpufreq_source_update=1
+
+# _update_every is a special variable - it holds the number of seconds
+# between the calls of the _update() function
+cpufreq_update_every=
+cpufreq_priority=10000
+
+cpufreq_find_all_files() {
+ find $1 -maxdepth $cpufreq_sys_depth -name scaling_cur_freq 2>/dev/null
+}
+
+# _check is called once, to find out if this chart should be enabled or not
+cpufreq_check() {
+
+ # this should return:
+ # - 0 to enable the chart
+ # - 1 to disable the chart
+
+ [ -z "$( cpufreq_find_all_files $cpufreq_sys_dir )" ] && return 1
+ return 0
+}
+
+# _create is called once, to create the charts
+cpufreq_create() {
+ local dir= file= id= i=
+
+ # we create a script with the source of the
+ # cpufreq_update() function
+ # - the highest speed we can achieve -
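+	# the generated $TMP_DIR/cpufreq.sh will look something like:
+	#   cpufreq_update() {
+	#   echo "BEGIN cpu.cpufreq $1"
+	#   printf "SET cpu0 = "; cat .../cpufreq/scaling_cur_freq
+	#   echo END
+	#   }
+	# with one printf/cat line per cpu found ('cpu0' here is just an example)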
+ [ $cpufreq_source_update -eq 1 ] && echo >$TMP_DIR/cpufreq.sh "cpufreq_update() {"
+
+ echo "CHART cpu.cpufreq '' 'CPU Clock' 'MHz' 'cpufreq' '' line $[cpufreq_priority + 1] $cpufreq_update_every"
+ echo >>$TMP_DIR/cpufreq.sh "echo \"BEGIN cpu.cpufreq \$1\""
+
+ i=0
+ for file in $( cpufreq_find_all_files $cpufreq_sys_dir | sort -u )
+ do
+ i=$(( i + 1 ))
+ dir=$( dirname $file )
+ cpu=
+
+ [ -f $dir/affected_cpus ] && cpu=$( cat $dir/affected_cpus )
+ [ -z "$cpu" ] && cpu="$i.a"
+
+ id="$( fixid "cpu$cpu" )"
+
+ echo >&2 "charts.d: cpufreq: on file='$file', dir='$dir', cpu='$cpu', id='$id'"
+
+ echo "DIMENSION $id '$id' absolute 1 1000"
+ echo >>$TMP_DIR/cpufreq.sh "printf \"SET $id = \"; cat $file "
+ done
+ echo >>$TMP_DIR/cpufreq.sh "echo END"
+
+ [ $cpufreq_source_update -eq 1 ] && echo >>$TMP_DIR/cpufreq.sh "}"
+ # cat >&2 $TMP_DIR/cpufreq.sh
+
+ # ok, load the function cpufreq_update() we created
+ [ $cpufreq_source_update -eq 1 ] && . $TMP_DIR/cpufreq.sh
+
+ return 0
+}
+
+# _update is called continuously, to collect the values
+cpufreq_update() {
+	# the first argument to this function is the microseconds since last update
+	# pass this parameter to the BEGIN statement (see below).
+
+ # do all the work to collect / calculate the values
+ # for each dimension
+ # remember: KEEP IT SIMPLE AND SHORT
+
+ [ $cpufreq_source_update -eq 0 ] && . $TMP_DIR/cpufreq.sh $1
+
+ return 0
+}
+
diff --git a/charts.d/crsproxy.chart.sh b/charts.d/crsproxy.chart.sh
new file mode 100755
index 000000000..fc5358b43
--- /dev/null
+++ b/charts.d/crsproxy.chart.sh
@@ -0,0 +1,148 @@
+#!/bin/sh
+
+crsproxy_url=
+crsproxy_cmds=
+crsproxy_update_every=15
+
+crsproxy_get() {
+ wget 2>/dev/null -O - "$crsproxy_url" |\
+ sed \
+ -e "s/ \+/ /g" \
+ -e "s/\./_/g" \
+ -e "s/ =/=/g" \
+ -e "s/= /=/g" \
+ -e "s/^/crsproxy_/g" |\
+ egrep "^crsproxy_[a-zA-Z][a-zA-Z0-9_]*=[0-9]+$"
+}
+
+crsproxy_check() {
+ if [ -z "$crsproxy_url" ]
+ then
+ echo >&2 "$PROGRAM_NAME: crsproxy: not configured. Please set crsproxy_url='url' in $confd/crsproxy.conf"
+ return 1
+ fi
+
+ # check once if the url works
+ wget 2>/dev/null -O /dev/null "$crsproxy_url"
+ if [ ! $? -eq 0 ]
+ then
+ echo >&2 "$PROGRAM_NAME: crsproxy: cannot fetch the url: $crsproxy_url. Please set crsproxy_url='url' in $confd/crsproxy.conf"
+ return 1
+ fi
+
+ # if the user did not request specific commands
+ # find the commands available
+ if [ -z "$crsproxy_cmds" ]
+ then
+ crsproxy_cmds="$(crsproxy_get | cut -d '=' -f 1 | sed "s/^crsproxy_cmd_//g" | sort -u)"
+ fi
+
+ # if no commands are available
+ if [ -z "$crsproxy_cmds" ]
+ then
+ echo >&2 "$PROGRAM_NAME: crsproxy: cannot find command list automatically. Please set crsproxy_cmds='...' in $confd/crsproxy.conf"
+ return 1
+ fi
+ return 0
+}
+
+crsproxy_create() {
+ # create the charts
+ cat <<EOF
+CHART crsproxy.connected '' "CRS Proxy Connected Clients" "clients" crsproxy '' line 20000 $crsproxy_update_every
+DIMENSION web '' absolute 1 1
+DIMENSION native '' absolute 1 1
+DIMENSION virtual '' absolute 1 1
+CHART crsproxy.requests '' "CRS Proxy Requests Rate" "requests / min" crsproxy '' area 20001 $crsproxy_update_every
+DIMENSION web '' incremental 60 1
+DIMENSION native '' incremental -60 1
+CHART crsproxy.clients '' "CRS Proxy Clients Rate" "clients / min" crsproxy '' area 20010 $crsproxy_update_every
+DIMENSION web '' incremental 60 1
+DIMENSION native '' incremental -60 1
+DIMENSION virtual '' incremental 60 1
+CHART crsproxy.replies '' "CRS Replies Rate" "replies / min" crsproxy '' area 20020 $crsproxy_update_every
+DIMENSION ok '' incremental 60 1
+DIMENSION failed '' incremental -60 1
+CHART crsproxy.bconnections '' "Back-End Connections Rate" "connections / min" crsproxy '' area 20030 $crsproxy_update_every
+DIMENSION ok '' incremental 60 1
+DIMENSION failed '' incremental -60 1
+EOF
+
+ local x=
+ echo "CHART crsproxy.commands '' 'CRS Commands Requests' 'requests / min' crsproxy '' stacked 20100 $crsproxy_update_every"
+ for x in $crsproxy_cmds
+ do
+ echo "DIMENSION $x '' incremental 60 $crsproxy_update_every"
+ done
+
+ echo "CHART crsproxy.commands_failed '' 'CRS Failed Commands' 'replies / min' crsproxy '' stacked 20110 $crsproxy_update_every"
+ for x in $crsproxy_cmds
+ do
+ echo "DIMENSION $x '' incremental 60 $crsproxy_update_every"
+ done
+
+ return 0
+}
+
+
+crsproxy_update() {
+ # the first argument to this function is the microseconds since last update
+	# pass this parameter to the BEGIN statement (see below).
+
+ # do all the work to collect / calculate the values
+ # for each dimension
+ # remember: KEEP IT SIMPLE AND SHORT
+
+ # get the values from crsproxy
+ eval "$(crsproxy_get)"
+
+
+ # write the result of the work.
+ cat <<VALUESEOF
+BEGIN crsproxy.connected $1
+SET web = $((crsproxy_web_clients_opened - crsproxy_web_clients_closed))
+SET native = $((crsproxy_crs_clients_opened - crsproxy_crs_clients_closed))
+SET virtual = $((crsproxy_virtual_clients_opened - crsproxy_virtual_clients_closed))
+END
+BEGIN crsproxy.requests $1
+SET web = $crsproxy_web_requests
+SET native = $crsproxy_native_requests
+END
+BEGIN crsproxy.clients $1
+SET web = $crsproxy_web_clients_opened
+SET native = $crsproxy_crs_clients_opened
+SET virtual = $crsproxy_virtual_clients_opened
+END
+BEGIN crsproxy.replies $1
+SET ok = $crsproxy_replies_success
+SET failed = $crsproxy_replies_error
+END
+BEGIN crsproxy.bconnections $1
+SET ok = $crsproxy_connections_nonblocking_established
+SET failed = $crsproxy_connections_nonblocking_failed
+END
+VALUESEOF
+
+ local native_requests="_native_requests"
+ local web_requests="_web_requests"
+ local replies_error="_replies_error"
+ local x=
+
+ echo "BEGIN crsproxy.commands $1"
+ for x in $crsproxy_cmds
+ do
+ eval "v=\$(( crsproxy_cmd_$x$native_requests + crsproxy_cmd_$x$web_requests ))"
+ echo "SET $x = $v"
+ done
+ echo "END"
+
+ echo "BEGIN crsproxy.commands_failed $1"
+ for x in $crsproxy_cmds
+ do
+ eval "v=\$crsproxy_cmd_$x$replies_error"
+ echo "SET $x = $v"
+ done
+ echo "END"
+
+ return 0
+}
diff --git a/charts.d/example.chart.sh b/charts.d/example.chart.sh
new file mode 100755
index 000000000..641d03e5d
--- /dev/null
+++ b/charts.d/example.chart.sh
@@ -0,0 +1,82 @@
+#!/bin/sh
+
+# if this chart is called X.chart.sh, then all functions and global variables
+# must start with X_
+
+# _update_every is a special variable - it holds the number of seconds
+# between the calls of the _update() function
+example_update_every=
+
+example_priority=150000
+
+# _check is called once, to find out if this chart should be enabled or not
+example_check() {
+ # this should return:
+ # - 0 to enable the chart
+ # - 1 to disable the chart
+
+ return 0
+}
+
+# _create is called once, to create the charts
+example_create() {
+ # create the chart with 3 dimensions
+ cat <<EOF
+CHART example.random '' "Random Numbers Stacked Chart" "% of random numbers" random random stacked $[example_priority] $example_update_every
+DIMENSION random1 '' percentage-of-absolute-row 1 1
+DIMENSION random2 '' percentage-of-absolute-row 1 1
+DIMENSION random3 '' percentage-of-absolute-row 1 1
+CHART example.random2 '' "A random number" "random number" random random area $[example_priority + 1] $example_update_every
+DIMENSION random '' absolute 1 1
+EOF
+
+ return 0
+}
+
+# _update is called continuously, to collect the values
+example_last=0
+example_count=0
+example_update() {
+ local value1 value2 value3 value4 mode
+
+ # the first argument to this function is the microseconds since last update
+	# pass this parameter to the BEGIN statement (see below).
+
+ # do all the work to collect / calculate the values
+ # for each dimension
+ # remember: KEEP IT SIMPLE AND SHORT
+
+ value1=$RANDOM
+ value2=$RANDOM
+ value3=$RANDOM
+ value4=$[8192 + (RANDOM * 16383 / 32767) ]
+
+ if [ $example_count -gt 0 ]
+ then
+ example_count=$[example_count - 1]
+
+ [ $example_last -gt 16383 ] && value4=$[example_last + (RANDOM * ( (32767 - example_last) / 2) / 32767)]
+ [ $example_last -le 16383 ] && value4=$[example_last - (RANDOM * (example_last / 2) / 32767)]
+ else
+ example_count=$[1 + (RANDOM * 5 / 32767) ]
+
+ [ $example_last -gt 16383 -a $value4 -gt 16383 ] && value4=$[value4 - 16383]
+ [ $example_last -le 16383 -a $value4 -lt 16383 ] && value4=$[value4 + 16383]
+ fi
+ example_last=$value4
+
+ # write the result of the work.
+ cat <<VALUESEOF
+BEGIN example.random $1
+SET random1 = $value1
+SET random2 = $value2
+SET random3 = $value3
+END
+BEGIN example.random2 $1
+SET random = $value4
+END
+VALUESEOF
+ # echo >&2 "example_count = $example_count value = $value4"
+
+ return 0
+}
diff --git a/charts.d/load_average.chart.sh b/charts.d/load_average.chart.sh
new file mode 100755
index 000000000..257ea7cad
--- /dev/null
+++ b/charts.d/load_average.chart.sh
@@ -0,0 +1,64 @@
+#!/bin/sh
+
+load_average_update_every=5
+load_priority=100
+
+# this is an example charts.d collector
+# it is disabled by default.
+# there is no point in enabling it, since netdata already
+# collects this information using its internal plugins.
+load_average_enabled=0
+
+load_average_check() {
+ # this should return:
+ # - 0 to enable the chart
+ # - 1 to disable the chart
+
+ if [ ${load_average_update_every} -lt 5 ]
+ then
+		# there is no point in anything shorter than 5 seconds,
+		# since the kernel updates this value every 5 seconds
+ load_average_update_every=5
+ fi
+
+ [ ${load_average_enabled} -eq 0 ] && return 1
+ return 0
+}
+
+load_average_create() {
+ # create a chart with 3 dimensions
+cat <<EOF
+CHART system.load '' "System Load Average" "load" load system.load line $[load_priority + 1] $load_average_update_every
+DIMENSION load1 '1 min' absolute 1 100
+DIMENSION load5 '5 mins' absolute 1 100
+DIMENSION load15 '15 mins' absolute 1 100
+EOF
+
+ return 0
+}
+
+load_average_update() {
+ # do all the work to collect / calculate the values
+ # for each dimension
+ # remember: KEEP IT SIMPLE AND SHORT
+
+	# here we parse the system load average
+	# the values are decimal (with 2 decimal digits), so we remove the dot and
+	# use a divisor of 100 in the chart definition, so the graph shows the right values
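+	# e.g. a /proc/loadavg of "0.52 0.48 0.45 ..." becomes "052 048 045 ...",
+	# which the chart then renders as 0.52, 0.48 and 0.45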
+ loadavg="`cat /proc/loadavg | sed -e "s/\.//g"`"
+ load1=`echo $loadavg | cut -d ' ' -f 1`
+ load5=`echo $loadavg | cut -d ' ' -f 2`
+ load15=`echo $loadavg | cut -d ' ' -f 3`
+
+ # write the result of the work.
+ cat <<VALUESEOF
+BEGIN system.load
+SET load1 = $load1
+SET load5 = $load5
+SET load15 = $load15
+END
+VALUESEOF
+
+ return 0
+}
+
diff --git a/charts.d/mem_apps.chart.sh b/charts.d/mem_apps.chart.sh
new file mode 100755
index 000000000..f537ada48
--- /dev/null
+++ b/charts.d/mem_apps.chart.sh
@@ -0,0 +1,56 @@
+#!/bin/sh
+
+mem_apps_apps=
+
+# these are required for computing memory in bytes and cpu in seconds
+#mem_apps_pagesize="`getconf PAGESIZE`"
+#mem_apps_clockticks="`getconf CLK_TCK`"
+
+mem_apps_update_every=
+
+mem_apps_check() {
+ # this should return:
+ # - 0 to enable the chart
+ # - 1 to disable the chart
+
+ if [ -z "$mem_apps_apps" ]
+ then
+ echo >&2 "$PROGRAM_NAME: mem_apps: not configured. Please set mem_apps_apps='command1 command2 ...' in $confd/mem_apps_apps.conf"
+ return 1
+ fi
+ return 0
+}
+
+mem_apps_bc_finalze=
+
+mem_apps_create() {
+
+ echo "CHART chartsd_apps.mem '' 'Apps Memory' MB apps apps.mem stacked 20000 $mem_apps_update_every"
+
+ local x=
+ for x in $mem_apps_apps
+ do
+ echo "DIMENSION $x $x absolute 1 1024"
+
+ # this string is needed later in the update() function
+ # to finalize the instructions for the bc command
+ mem_apps_bc_finalze="$mem_apps_bc_finalze \"SET $x = \"; $x;"
+ done
+ return 0
+}
+
+mem_apps_update() {
+ # do all the work to collect / calculate the values
+ # for each dimension
+ # remember: KEEP IT SIMPLE AND SHORT
+
+ echo "BEGIN chartsd_apps.mem"
+ ps -o comm,rss -C "$mem_apps_apps" |\
+ grep -v "^COMMAND" |\
+ ( sed -e "s/ \+/ /g" -e "s/ /+=/g";
+ echo "$mem_apps_bc_finalze"
+ ) | bc
+ echo "END"
+
+ return 0
+}
diff --git a/charts.d/mysql.chart.sh b/charts.d/mysql.chart.sh
new file mode 100755
index 000000000..283905289
--- /dev/null
+++ b/charts.d/mysql.chart.sh
@@ -0,0 +1,460 @@
+#!/bin/bash
+
+# http://dev.mysql.com/doc/refman/5.0/en/server-status-variables.html
+#
+# https://dev.mysql.com/doc/refman/5.1/en/show-status.html
+# SHOW STATUS provides server status information (see Section 5.1.6, “Server Status Variables”).
+# This statement does not require any privilege.
+# It requires only the ability to connect to the server.
+
+mysql_update_every=5
+mysql_priority=60000
+
+declare -A mysql_cmds=() mysql_opts=() mysql_ids=()
+
+mysql_exec() {
+ local ret
+
+ "${@}" -s -e "show global status;"
+ ret=$?
+
+ [ $ret -ne 0 ] && echo "plugin_command_failure $ret"
+ return $ret
+}
+
+mysql_get() {
+ unset \
+ mysql_Bytes_received \
+ mysql_Bytes_sent \
+ mysql_Queries \
+ mysql_Questions \
+ mysql_Slow_queries \
+ mysql_Handler_commit \
+ mysql_Handler_delete \
+ mysql_Handler_prepare \
+ mysql_Handler_read_first \
+ mysql_Handler_read_key \
+ mysql_Handler_read_next \
+ mysql_Handler_read_prev \
+ mysql_Handler_read_rnd \
+ mysql_Handler_read_rnd_next \
+ mysql_Handler_rollback \
+ mysql_Handler_savepoint \
+ mysql_Handler_savepoint_rollback \
+ mysql_Handler_update \
+ mysql_Handler_write \
+ mysql_Table_locks_immediate \
+ mysql_Table_locks_waited \
+ mysql_Select_full_join \
+ mysql_Select_full_range_join \
+ mysql_Select_range \
+ mysql_Select_range_check \
+ mysql_Select_scan \
+ mysql_Sort_merge_passes \
+ mysql_Sort_range \
+ mysql_Sort_scan \
+ mysql_Created_tmp_disk_tables \
+ mysql_Created_tmp_files \
+ mysql_Created_tmp_tables \
+ mysql_Connection_errors_accept \
+ mysql_Connection_errors_internal \
+ mysql_Connection_errors_max_connections \
+ mysql_Connection_errors_peer_addr \
+ mysql_Connection_errors_select \
+ mysql_Connection_errors_tcpwrap \
+ mysql_Connections \
+ mysql_Aborted_connects \
+ mysql_Binlog_cache_disk_use \
+ mysql_Binlog_cache_use \
+ mysql_Binlog_stmt_cache_disk_use \
+ mysql_Binlog_stmt_cache_use \
+ mysql_Threads_connected \
+ mysql_Threads_created \
+ mysql_Threads_cached \
+ mysql_Threads_running \
+ mysql_Innodb_data_read \
+ mysql_Innodb_data_written \
+ mysql_Innodb_data_reads \
+ mysql_Innodb_data_writes \
+ mysql_Innodb_data_fsyncs \
+ mysql_Innodb_data_pending_reads \
+ mysql_Innodb_data_pending_writes \
+ mysql_Innodb_data_pending_fsyncs \
+ mysql_Innodb_log_waits \
+ mysql_Innodb_log_write_requests \
+ mysql_Innodb_log_writes \
+ mysql_Innodb_os_log_fsyncs \
+ mysql_Innodb_os_log_pending_fsyncs \
+ mysql_Innodb_os_log_pending_writes \
+ mysql_Innodb_os_log_written \
+ mysql_Innodb_row_lock_current_waits \
+ mysql_Innodb_rows_inserted \
+ mysql_Innodb_rows_read \
+ mysql_Innodb_rows_updated \
+ mysql_Innodb_rows_deleted
+
+ mysql_plugin_command_failure=0
+
+ eval "$(mysql_exec "${@}" |\
+ sed \
+ -e "s/[[:space:]]\+/ /g" \
+ -e "s/\./_/g" \
+ -e "s/^\([a-zA-Z0-9_]\+\)[[:space:]]\+\([0-9]\+\)$/mysql_\1=\2/g" |\
+ egrep "^mysql_[a-zA-Z0-9_]+=[[:digit:]]+$")"
+
+ [ $mysql_plugin_command_failure -eq 1 ] && return 1
+ [ -z "$mysql_Connections" ] && return 1
+
+ mysql_Thread_cache_misses=0
+ [ $(( mysql_Connections + 1 - 1 )) -gt 0 ] && mysql_Thread_cache_misses=$(( mysql_Threads_created * 10000 / mysql_Connections ))
+
+ return 0
+}
+
+mysql_check() {
+ # this should return:
+ # - 0 to enable the chart
+ # - 1 to disable the chart
+
+ local x m mysql_cmd
+
+ [ -z "${mysql_cmd}" ] && mysql_cmd="$(which mysql)"
+
+ if [ ${#mysql_opts[@]} -eq 0 ]
+ then
+ mysql_cmds[local]="$mysql_cmd"
+ mysql_opts[local]=
+ fi
+
+ # check once if the url works
+ for m in "${!mysql_opts[@]}"
+ do
+ [ -z "${mysql_cmds[$m]}" ] && mysql_cmds[$m]="$mysql_cmd"
+ if [ -z "${mysql_cmds[$m]}" ]
+ then
+ echo >&2 "$PROGRAM_NAME: mysql: cannot get mysql command for '$m'. Please set mysql_cmds[$m]='/path/to/mysql', in $confd/mysql.conf"
+ fi
+
+ mysql_get "${mysql_cmds[$m]}" ${mysql_opts[$m]}
+ if [ ! $? -eq 0 ]
+ then
+ echo >&2 "$PROGRAM_NAME: mysql: cannot get global status for '$m'. Please set mysql_opts[$m]='options' to whatever needed to get connected to the mysql server, in $confd/mysql.conf"
+ unset mysql_cmds[$m]
+ unset mysql_opts[$m]
+ unset mysql_ids[$m]
+ continue
+ fi
+
+ mysql_ids[$m]="$( fixid "$m" )"
+ done
+
+ if [ ${#mysql_opts[@]} -eq 0 ]
+ then
+ echo >&2 "$PROGRAM_NAME: mysql: no mysql servers found. Please set mysql_opts[name]='options' to whatever needed to get connected to the mysql server, in $confd/mysql.conf"
+ return 1
+ fi
+
+ return 0
+}
+
+mysql_create() {
+ local m
+
+ # create the charts
+ for m in "${mysql_ids[@]}"
+ do
+ cat <<EOF
+CHART mysql_$m.net '' "mysql Bandwidth" "kilobits/s" bandwidth mysql.net area $[mysql_priority + 1] $mysql_update_every
+DIMENSION Bytes_received in incremental 8 1024
+DIMENSION Bytes_sent out incremental -8 1024
+
+CHART mysql_$m.queries '' "mysql Queries" "queries/s" queries mysql.queries line $[mysql_priority + 2] $mysql_update_every
+DIMENSION Queries queries incremental 1 1
+DIMENSION Questions questions incremental 1 1
+DIMENSION Slow_queries slow_queries incremental -1 1
+
+CHART mysql_$m.handlers '' "mysql Handlers" "handlers/s" handlers mysql.handlers line $[mysql_priority + 3] $mysql_update_every
+DIMENSION Handler_commit commit incremental 1 1
+DIMENSION Handler_delete delete incremental 1 1
+DIMENSION Handler_prepare prepare incremental 1 1
+DIMENSION Handler_read_first read_first incremental 1 1
+DIMENSION Handler_read_key read_key incremental 1 1
+DIMENSION Handler_read_next read_next incremental 1 1
+DIMENSION Handler_read_prev read_prev incremental 1 1
+DIMENSION Handler_read_rnd read_rnd incremental 1 1
+DIMENSION Handler_read_rnd_next read_rnd_next incremental 1 1
+DIMENSION Handler_rollback rollback incremental 1 1
+DIMENSION Handler_savepoint savepoint incremental 1 1
+DIMENSION Handler_savepoint_rollback savepoint_rollback incremental 1 1
+DIMENSION Handler_update update incremental 1 1
+DIMENSION Handler_write write incremental 1 1
+
+CHART mysql_$m.table_locks '' "mysql Tables Locks" "locks/s" locks mysql.table_locks line $[mysql_priority + 4] $mysql_update_every
+DIMENSION Table_locks_immediate immediate incremental 1 1
+DIMENSION Table_locks_waited waited incremental -1 1
+
+CHART mysql_$m.join_issues '' "mysql Select Join Issues" "joins/s" issues mysql.join_issues line $[mysql_priority + 5] $mysql_update_every
+DIMENSION Select_full_join full_join incremental 1 1
+DIMENSION Select_full_range_join full_range_join incremental 1 1
+DIMENSION Select_range range incremental 1 1
+DIMENSION Select_range_check range_check incremental 1 1
+DIMENSION Select_scan scan incremental 1 1
+
+CHART mysql_$m.sort_issues '' "mysql Sort Issues" "issues/s" issues mysql.sort.issues line $[mysql_priority + 6] $mysql_update_every
+DIMENSION Sort_merge_passes merge_passes incremental 1 1
+DIMENSION Sort_range range incremental 1 1
+DIMENSION Sort_scan scan incremental 1 1
+
+CHART mysql_$m.tmp '' "mysql Tmp Operations" "counter" temporaries mysql.tmp line $[mysql_priority + 7] $mysql_update_every
+DIMENSION Created_tmp_disk_tables disk_tables incremental 1 1
+DIMENSION Created_tmp_files files incremental 1 1
+DIMENSION Created_tmp_tables tables incremental 1 1
+
+CHART mysql_$m.connections '' "mysql Connections" "connections/s" connections mysql.connections line $[mysql_priority + 8] $mysql_update_every
+DIMENSION Connections all incremental 1 1
+DIMENSION Aborted_connects aborted incremental 1 1
+
+CHART mysql_$m.binlog_cache '' "mysql Binlog Cache" "transactions/s" binlog mysql.binlog_cache line $[mysql_priority + 9] $mysql_update_every
+DIMENSION Binlog_cache_disk_use disk incremental 1 1
+DIMENSION Binlog_cache_use all incremental 1 1
+
+CHART mysql_$m.threads '' "mysql Threads" "threads" threads mysql.threads line $[mysql_priority + 10] $mysql_update_every
+DIMENSION Threads_connected connected absolute 1 1
+DIMENSION Threads_created created incremental 1 1
+DIMENSION Threads_cached cached absolute -1 1
+DIMENSION Threads_running running absolute 1 1
+
+CHART mysql_$m.thread_cache_misses '' "mysql Threads Cache Misses" "misses" threads mysql.thread_cache_misses area $[mysql_priority + 11] $mysql_update_every
+DIMENSION misses misses absolute 1 100
+
+CHART mysql_$m.innodb_io '' "mysql InnoDB I/O Bandwidth" "kilobytes/s" innodb mysql.innodb_io area $[mysql_priority + 12] $mysql_update_every
+DIMENSION Innodb_data_read read incremental 1 1024
+DIMENSION Innodb_data_written write incremental -1 1024
+
+CHART mysql_$m.innodb_io_ops '' "mysql InnoDB I/O Operations" "operations/s" innodb mysql.innodb_io_ops line $[mysql_priority + 13] $mysql_update_every
+DIMENSION Innodb_data_reads reads incremental 1 1
+DIMENSION Innodb_data_writes writes incremental -1 1
+DIMENSION Innodb_data_fsyncs fsyncs incremental 1 1
+
+CHART mysql_$m.innodb_io_pending_ops '' "mysql InnoDB Pending I/O Operations" "operations" innodb mysql.innodb_io_pending_ops line $[mysql_priority + 14] $mysql_update_every
+DIMENSION Innodb_data_pending_reads reads absolute 1 1
+DIMENSION Innodb_data_pending_writes writes absolute -1 1
+DIMENSION Innodb_data_pending_fsyncs fsyncs absolute 1 1
+
+CHART mysql_$m.innodb_log '' "mysql InnoDB Log Operations" "operations/s" innodb mysql.innodb_log line $[mysql_priority + 15] $mysql_update_every
+DIMENSION Innodb_log_waits waits incremental 1 1
+DIMENSION Innodb_log_write_requests write_requests incremental -1 1
+DIMENSION Innodb_log_writes writes incremental -1 1
+
+CHART mysql_$m.innodb_os_log '' "mysql InnoDB OS Log Operations" "operations" innodb mysql.innodb_os_log line $[mysql_priority + 16] $mysql_update_every
+DIMENSION Innodb_os_log_fsyncs fsyncs incremental 1 1
+DIMENSION Innodb_os_log_pending_fsyncs pending_fsyncs absolute 1 1
+DIMENSION Innodb_os_log_pending_writes pending_writes absolute -1 1
+
+CHART mysql_$m.innodb_os_log_io '' "mysql InnoDB OS Log Bandwidth" "kilobytes/s" innodb mysql.innodb_os_log_io area $[mysql_priority + 17] $mysql_update_every
+DIMENSION Innodb_os_log_written write incremental -1 1024
+
+CHART mysql_$m.innodb_cur_row_lock '' "mysql InnoDB Current Row Locks" "operations" innodb mysql.innodb_cur_row_lock area $[mysql_priority + 18] $mysql_update_every
+DIMENSION Innodb_row_lock_current_waits current_waits absolute 1 1
+
+CHART mysql_$m.innodb_rows '' "mysql InnoDB Row Operations" "operations/s" innodb mysql.innodb_rows area $[mysql_priority + 19] $mysql_update_every
+DIMENSION Innodb_rows_read read incremental 1 1
+DIMENSION Innodb_rows_deleted deleted incremental -1 1
+DIMENSION Innodb_rows_inserted inserted incremental 1 1
+DIMENSION Innodb_rows_updated updated incremental -1 1
+
+EOF
+
+ if [ ! -z "$mysql_Binlog_stmt_cache_disk_use" ]
+ then
+ cat <<EOF
+CHART mysql_$m.binlog_stmt_cache '' "mysql Binlog Statement Cache" "statements/s" binlog mysql.binlog_stmt_cache line $[mysql_priority + 20] $mysql_update_every
+DIMENSION Binlog_stmt_cache_disk_use disk incremental 1 1
+DIMENSION Binlog_stmt_cache_use all incremental 1 1
+EOF
+ fi
+
+ if [ ! -z "$mysql_Connection_errors_accept" ]
+ then
+ cat <<EOF
+CHART mysql_$m.connection_errors '' "mysql Connection Errors" "connections/s" connections mysql.connection_errors line $[mysql_priority + 21] $mysql_update_every
+DIMENSION Connection_errors_accept accept incremental 1 1
+DIMENSION Connection_errors_internal internal incremental 1 1
+DIMENSION Connection_errors_max_connections max incremental 1 1
+DIMENSION Connection_errors_peer_addr peer_addr incremental 1 1
+DIMENSION Connection_errors_select select incremental 1 1
+DIMENSION Connection_errors_tcpwrap tcpwrap incremental 1 1
+EOF
+ fi
+
+ done
+ return 0
+}
+
+
+mysql_update() {
+ # the first argument to this function is the microseconds since last update
+	# pass this parameter to the BEGIN statement (see below).
+
+ # do all the work to collect / calculate the values
+ # for each dimension
+ # remember: KEEP IT SIMPLE AND SHORT
+
+	# 1. get the counters from mysql with 'show global status'
+	# 2. sed to normalize spaces; replace . with _; turn "name value" lines into mysql_name=value
+	# 3. egrep only the lines of the form:
+	#    mysql_ then one or more of these a-z A-Z 0-9 _ then = and one or more of 0-9
+	# 4. then execute this as a script with the eval
+	#
+	# be very careful with eval:
+	# prepare the script and always grep at the end the lines that are useful, so that
+	# even if something goes wrong, no other code can be executed
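+	#
+	# for example, a status line like:
+	#     Bytes_received  1234567
+	# becomes, after the filtering in mysql_get():
+	#     mysql_Bytes_received=1234567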
+
+ local m x
+ for m in "${!mysql_ids[@]}"
+ do
+ x="${mysql_ids[$m]}"
+
+ mysql_get "${mysql_cmds[$m]}" ${mysql_opts[$m]}
+ if [ $? -ne 0 ]
+ then
+ unset mysql_ids[$m]
+ unset mysql_opts[$m]
+ unset mysql_cmds[$m]
+ echo >&2 "$PROGRAM_NAME: mysql: failed to get values for '$m', disabling it."
+ continue
+ fi
+
+ # write the result of the work.
+ cat <<VALUESEOF
+BEGIN mysql_$x.net $1
+SET Bytes_received = $mysql_Bytes_received
+SET Bytes_sent = $mysql_Bytes_sent
+END
+BEGIN mysql_$x.queries $1
+SET Queries = $mysql_Queries
+SET Questions = $mysql_Questions
+SET Slow_queries = $mysql_Slow_queries
+END
+BEGIN mysql_$x.handlers $1
+SET Handler_commit = $mysql_Handler_commit
+SET Handler_delete = $mysql_Handler_delete
+SET Handler_prepare = $mysql_Handler_prepare
+SET Handler_read_first = $mysql_Handler_read_first
+SET Handler_read_key = $mysql_Handler_read_key
+SET Handler_read_next = $mysql_Handler_read_next
+SET Handler_read_prev = $mysql_Handler_read_prev
+SET Handler_read_rnd = $mysql_Handler_read_rnd
+SET Handler_read_rnd_next = $mysql_Handler_read_rnd_next
+SET Handler_rollback = $mysql_Handler_rollback
+SET Handler_savepoint = $mysql_Handler_savepoint
+SET Handler_savepoint_rollback = $mysql_Handler_savepoint_rollback
+SET Handler_update = $mysql_Handler_update
+SET Handler_write = $mysql_Handler_write
+END
+BEGIN mysql_$x.table_locks $1
+SET Table_locks_immediate = $mysql_Table_locks_immediate
+SET Table_locks_waited = $mysql_Table_locks_waited
+END
+BEGIN mysql_$x.join_issues $1
+SET Select_full_join = $mysql_Select_full_join
+SET Select_full_range_join = $mysql_Select_full_range_join
+SET Select_range = $mysql_Select_range
+SET Select_range_check = $mysql_Select_range_check
+SET Select_scan = $mysql_Select_scan
+END
+BEGIN mysql_$x.sort_issues $1
+SET Sort_merge_passes = $mysql_Sort_merge_passes
+SET Sort_range = $mysql_Sort_range
+SET Sort_scan = $mysql_Sort_scan
+END
+BEGIN mysql_$x.tmp $1
+SET Created_tmp_disk_tables = $mysql_Created_tmp_disk_tables
+SET Created_tmp_files = $mysql_Created_tmp_files
+SET Created_tmp_tables = $mysql_Created_tmp_tables
+END
+BEGIN mysql_$x.connections $1
+SET Connections = $mysql_Connections
+SET Aborted_connects = $mysql_Aborted_connects
+END
+BEGIN mysql_$x.binlog_cache $1
+SET Binlog_cache_disk_use = $mysql_Binlog_cache_disk_use
+SET Binlog_cache_use = $mysql_Binlog_cache_use
+END
+BEGIN mysql_$x.threads $1
+SET Threads_connected = $mysql_Threads_connected
+SET Threads_created = $mysql_Threads_created
+SET Threads_cached = $mysql_Threads_cached
+SET Threads_running = $mysql_Threads_running
+END
+BEGIN mysql_$x.thread_cache_misses $1
+SET misses = $mysql_Thread_cache_misses
+END
+BEGIN mysql_$x.innodb_io $1
+SET Innodb_data_read = $mysql_Innodb_data_read
+SET Innodb_data_written = $mysql_Innodb_data_written
+END
+BEGIN mysql_$x.innodb_io_ops $1
+SET Innodb_data_reads = $mysql_Innodb_data_reads
+SET Innodb_data_writes = $mysql_Innodb_data_writes
+SET Innodb_data_fsyncs = $mysql_Innodb_data_fsyncs
+END
+BEGIN mysql_$x.innodb_io_pending_ops $1
+SET Innodb_data_pending_reads = $mysql_Innodb_data_pending_reads
+SET Innodb_data_pending_writes = $mysql_Innodb_data_pending_writes
+SET Innodb_data_pending_fsyncs = $mysql_Innodb_data_pending_fsyncs
+END
+BEGIN mysql_$x.innodb_log $1
+SET Innodb_log_waits = $mysql_Innodb_log_waits
+SET Innodb_log_write_requests = $mysql_Innodb_log_write_requests
+SET Innodb_log_writes = $mysql_Innodb_log_writes
+END
+BEGIN mysql_$x.innodb_os_log $1
+SET Innodb_os_log_fsyncs = $mysql_Innodb_os_log_fsyncs
+SET Innodb_os_log_pending_fsyncs = $mysql_Innodb_os_log_pending_fsyncs
+SET Innodb_os_log_pending_writes = $mysql_Innodb_os_log_pending_writes
+END
+BEGIN mysql_$x.innodb_os_log_io $1
+SET Innodb_os_log_written = $mysql_Innodb_os_log_written
+END
+BEGIN mysql_$x.innodb_cur_row_lock $1
+SET Innodb_row_lock_current_waits = $mysql_Innodb_row_lock_current_waits
+END
+BEGIN mysql_$x.innodb_rows $1
+SET Innodb_rows_inserted = $mysql_Innodb_rows_inserted
+SET Innodb_rows_read = $mysql_Innodb_rows_read
+SET Innodb_rows_updated = $mysql_Innodb_rows_updated
+SET Innodb_rows_deleted = $mysql_Innodb_rows_deleted
+END
+VALUESEOF
+
+ if [ ! -z "$mysql_Binlog_stmt_cache_disk_use" ]
+ then
+ cat <<VALUESEOF
+BEGIN mysql_$x.binlog_stmt_cache $1
+SET Binlog_stmt_cache_disk_use = $mysql_Binlog_stmt_cache_disk_use
+SET Binlog_stmt_cache_use = $mysql_Binlog_stmt_cache_use
+END
+VALUESEOF
+ fi
+
+ if [ ! -z "$mysql_Connection_errors_accept" ]
+ then
+ cat <<VALUESEOF
+BEGIN mysql_$x.connection_errors $1
+SET Connection_errors_accept = $mysql_Connection_errors_accept
+SET Connection_errors_internal = $mysql_Connection_errors_internal
+SET Connection_errors_max_connections = $mysql_Connection_errors_max_connections
+SET Connection_errors_peer_addr = $mysql_Connection_errors_peer_addr
+SET Connection_errors_select = $mysql_Connection_errors_select
+SET Connection_errors_tcpwrap = $mysql_Connection_errors_tcpwrap
+END
+VALUESEOF
+ fi
+ done
+
+ [ ${#mysql_ids[@]} -eq 0 ] && echo >&2 "$PROGRAM_NAME: mysql: no mysql servers left active." && return 1
+ return 0
+}
+
diff --git a/charts.d/nginx.chart.sh b/charts.d/nginx.chart.sh
new file mode 100755
index 000000000..bc8293c5d
--- /dev/null
+++ b/charts.d/nginx.chart.sh
@@ -0,0 +1,134 @@
+#!/bin/bash
+
+# if this chart is called X.chart.sh, then all functions and global variables
+# must start with X_
+
+nginx_url="http://127.0.0.1:80/stub_status"
+
+# _update_every is a special variable - it holds the number of seconds
+# between the calls of the _update() function
+nginx_update_every=
+nginx_priority=60000
+
+declare -a nginx_response=()
+nginx_active_connections=0
+nginx_accepts=0
+nginx_handled=0
+nginx_requests=0
+nginx_reading=0
+nginx_writing=0
+nginx_waiting=0
+nginx_get() {
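+	# the nginx stub_status page looks like this (the word positions below
+	# are what the array indices and checks rely on):
+	#
+	#   Active connections: 291
+	#   server accepts handled requests
+	#    16630948 16630948 31070465
+	#   Reading: 6 Writing: 179 Waiting: 106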
+ nginx_response=($(curl -s "${nginx_url}"))
+ [ $? -ne 0 -o "${#nginx_response[@]}" -eq 0 ] && return 1
+
+ if [ "${nginx_response[0]}" != "Active" \
+ -o "${nginx_response[1]}" != "connections:" \
+ -o "${nginx_response[3]}" != "server" \
+ -o "${nginx_response[4]}" != "accepts" \
+ -o "${nginx_response[5]}" != "handled" \
+ -o "${nginx_response[6]}" != "requests" \
+ -o "${nginx_response[10]}" != "Reading:" \
+ -o "${nginx_response[12]}" != "Writing:" \
+ -o "${nginx_response[14]}" != "Waiting:" \
+ ]
+ then
+ echo >&2 "nginx: Invalid response from nginx server: ${nginx_response[*]}"
+ return 1
+ fi
+
+ nginx_active_connections="${nginx_response[2]}"
+ nginx_accepts="${nginx_response[7]}"
+ nginx_handled="${nginx_response[8]}"
+ nginx_requests="${nginx_response[9]}"
+ nginx_reading="${nginx_response[11]}"
+ nginx_writing="${nginx_response[13]}"
+ nginx_waiting="${nginx_response[15]}"
+
+ if [ -z "${nginx_active_connections}" \
+ -o -z "${nginx_accepts}" \
+ -o -z "${nginx_handled}" \
+ -o -z "${nginx_requests}" \
+ -o -z "${nginx_reading}" \
+ -o -z "${nginx_writing}" \
+ -o -z "${nginx_waiting}" \
+ ]
+ then
+		echo >&2 "nginx: got empty values from nginx server: ${nginx_response[*]}"
+ return 1
+ fi
+
+ return 0
+}
+
+# _check is called once, to find out if this chart should be enabled or not
+nginx_check() {
+
+ nginx_get
+ if [ $? -ne 0 ]
+ then
+ echo >&2 "nginx: cannot find stub_status on URL '${nginx_url}'. Please set nginx_url='http://nginx.server/stub_status' in $confd/nginx.conf"
+ return 1
+ fi
+
+ # this should return:
+ # - 0 to enable the chart
+ # - 1 to disable the chart
+
+ return 0
+}
+
+# _create is called once, to create the charts
+nginx_create() {
+ cat <<EOF
+CHART nginx.connections '' "nginx Active Connections" "connections" nginx nginx.connections line $[nginx_priority + 1] $nginx_update_every
+DIMENSION active '' absolute 1 1
+
+CHART nginx.requests '' "nginx Requests" "requests/s" nginx nginx.requests line $[nginx_priority + 2] $nginx_update_every
+DIMENSION requests '' incremental 1 1
+
+CHART nginx.connections_status '' "nginx Active Connections by Status" "connections" nginx nginx.connections.status line $[nginx_priority + 3] $nginx_update_every
+DIMENSION reading '' absolute 1 1
+DIMENSION writing '' absolute 1 1
+DIMENSION waiting idle absolute 1 1
+
+CHART nginx.connect_rate '' "nginx Connections Rate" "connections/s" nginx nginx.connections.rate line $[nginx_priority + 4] $nginx_update_every
+DIMENSION accepts accepted incremental 1 1
+DIMENSION handled '' incremental 1 1
+EOF
+
+ return 0
+}
+
+# _update is called continuously, to collect the values
+nginx_update() {
+ # the first argument to this function is the microseconds since last update
+ # pass this parameter to the BEGIN statement (see below).
+
+ # do all the work to collect / calculate the values
+ # for each dimension
+ # remember: KEEP IT SIMPLE AND SHORT
+
+ nginx_get || return 1
+
+ # write the result of the work.
+ cat <<VALUESEOF
+BEGIN nginx.connections $1
+SET active = $[nginx_active_connections]
+END
+BEGIN nginx.requests $1
+SET requests = $[nginx_requests]
+END
+BEGIN nginx.connections_status $1
+SET reading = $[nginx_reading]
+SET writing = $[nginx_writing]
+SET waiting = $[nginx_waiting]
+END
+BEGIN nginx.connect_rate $1
+SET accepts = $[nginx_accepts]
+SET handled = $[nginx_handled]
+END
+VALUESEOF
+
+ return 0
+}
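The parser in nginx_get() above depends on the fixed word order of nginx's stub_status page. A minimal sketch (not part of the patch, sample numbers invented) of how the whitespace split maps page words to the values the plugin reads:

```sh
# same word-splitting as nginx_response=($(curl -s "${nginx_url}"))
sample='Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106'
words=($sample)

echo "active connections:       ${words[2]}"                             # 291
echo "accepts handled requests: ${words[7]} ${words[8]} ${words[9]}"     # 16630948 16630948 31070465
echo "reading writing waiting:  ${words[11]} ${words[13]} ${words[15]}"  # 6 179 106
```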
diff --git a/charts.d/nut.chart.sh b/charts.d/nut.chart.sh
new file mode 100755
index 000000000..343c6d9cd
--- /dev/null
+++ b/charts.d/nut.chart.sh
@@ -0,0 +1,187 @@
+#!/bin/bash
+
+# a space separated list of UPS names
+# if empty, the list returned by 'upsc -l' will be used
+nut_ups=
+
+# how frequently to collect UPS data
+nut_update_every=2
+
+nut_timeout=2
+
+# the priority of nut related to other charts
+nut_priority=90000
+
+declare -A nut_ids=()
+
+nut_get_all() {
+ timeout $nut_timeout upsc -l
+}
+
+nut_get() {
+ timeout $nut_timeout upsc "$1"
+}
+
+nut_check() {
+
+ # this should return:
+ # - 0 to enable the chart
+ # - 1 to disable the chart
+
+ local x
+
+ require_cmd upsc || return 1
+
+ [ -z "$nut_ups" ] && nut_ups="$( nut_get_all )"
+
+ for x in $nut_ups
+ do
+ nut_get "$x" >/dev/null
+ if [ $? -eq 0 ]
+ then
+ nut_ids[$x]="$( fixid "$x" )"
+ continue
+ fi
+ echo >&2 "nut: ERROR: Cannot get information for NUT UPS '$x'."
+ done
+
+ if [ ${#nut_ids[@]} -eq 0 ]
+ then
+ echo >&2 "nut: Please set nut_ups='ups_name' in $confd/nut.conf"
+ return 1
+ fi
+
+ return 0
+}
+
+nut_create() {
+ # create the charts
+ local x
+
+ for x in "${nut_ids[@]}"
+ do
+ cat <<EOF
+CHART nut_$x.charge '' "UPS Charge" "percentage" ups nut.charge area $[nut_priority + 1] $nut_update_every
+DIMENSION battery_charge charge absolute 1 100
+
+CHART nut_$x.battery_voltage '' "UPS Battery Voltage" "Volts" ups nut.battery.voltage line $[nut_priority + 2] $nut_update_every
+DIMENSION battery_voltage voltage absolute 1 100
+DIMENSION battery_voltage_high high absolute 1 100
+DIMENSION battery_voltage_low low absolute 1 100
+DIMENSION battery_voltage_nominal nominal absolute 1 100
+
+CHART nut_$x.input_voltage '' "UPS Input Voltage" "Volts" input nut.input.voltage line $[nut_priority + 3] $nut_update_every
+DIMENSION input_voltage voltage absolute 1 100
+DIMENSION input_voltage_fault fault absolute 1 100
+DIMENSION input_voltage_nominal nominal absolute 1 100
+
+CHART nut_$x.input_current '' "UPS Input Current" "Ampere" input nut.input.current line $[nut_priority + 4] $nut_update_every
+DIMENSION input_current_nominal nominal absolute 1 100
+
+CHART nut_$x.input_frequency '' "UPS Input Frequency" "Hz" input nut.input.frequency line $[nut_priority + 5] $nut_update_every
+DIMENSION input_frequency frequency absolute 1 100
+DIMENSION input_frequency_nominal nominal absolute 1 100
+
+CHART nut_$x.output_voltage '' "UPS Output Voltage" "Volts" output nut.output.voltage line $[nut_priority + 6] $nut_update_every
+DIMENSION output_voltage voltage absolute 1 100
+
+CHART nut_$x.load '' "UPS Load" "percentage" ups nut.load area $[nut_priority] $nut_update_every
+DIMENSION load load absolute 1 100
+
+CHART nut_$x.temp '' "UPS Temperature" "temperature" ups nut.temperature line $[nut_priority + 7] $nut_update_every
+DIMENSION temp temp absolute 1 100
+EOF
+ done
+
+ return 0
+}
+
+
+nut_update() {
+ # the first argument to this function is the microseconds since last update
+ # pass this parameter to the BEGIN statement (see below).
+
+ # do all the work to collect / calculate the values
+ # for each dimension
+ # remember: KEEP IT SIMPLE AND SHORT
+
+ local i x
+ for i in "${!nut_ids[@]}"
+ do
+ x="${nut_ids[$i]}"
+ nut_get "$i" | awk "
+BEGIN {
+ battery_charge = 0;
+ battery_voltage = 0;
+ battery_voltage_high = 0;
+ battery_voltage_low = 0;
+ battery_voltage_nominal = 0;
+ input_voltage = 0;
+ input_voltage_fault = 0;
+ input_voltage_nominal = 0;
+ input_current_nominal = 0;
+ input_frequency = 0;
+ input_frequency_nominal = 0;
+ output_voltage = 0;
+ load = 0;
+ temp = 0;
+}
+/^battery.charge: .*/ { battery_charge = \$2 * 100 };
+/^battery.voltage: .*/ { battery_voltage = \$2 * 100 };
+/^battery.voltage.high: .*/ { battery_voltage_high = \$2 * 100 };
+/^battery.voltage.low: .*/ { battery_voltage_low = \$2 * 100 };
+/^battery.voltage.nominal: .*/ { battery_voltage_nominal = \$2 * 100 };
+/^input.voltage: .*/ { input_voltage = \$2 * 100 };
+/^input.voltage.fault: .*/ { input_voltage_fault = \$2 * 100 };
+/^input.voltage.nominal: .*/ { input_voltage_nominal = \$2 * 100 };
+/^input.current.nominal: .*/ { input_current_nominal = \$2 * 100 };
+/^input.frequency: .*/ { input_frequency = \$2 * 100 };
+/^input.frequency.nominal: .*/ { input_frequency_nominal = \$2 * 100 };
+/^output.voltage: .*/ { output_voltage = \$2 * 100 };
+/^ups.load: .*/ { load = \$2 * 100 };
+/^ups.temperature: .*/ { temp = \$2 * 100 };
+END {
+ print \"BEGIN nut_$x.charge $1\";
+ print \"SET battery_charge = \" battery_charge;
+ print \"END\"
+
+ print \"BEGIN nut_$x.battery_voltage $1\";
+ print \"SET battery_voltage = \" battery_voltage;
+ print \"SET battery_voltage_high = \" battery_voltage_high;
+ print \"SET battery_voltage_low = \" battery_voltage_low;
+ print \"SET battery_voltage_nominal = \" battery_voltage_nominal;
+ print \"END\"
+
+ print \"BEGIN nut_$x.input_voltage $1\";
+ print \"SET input_voltage = \" input_voltage;
+ print \"SET input_voltage_fault = \" input_voltage_fault;
+ print \"SET input_voltage_nominal = \" input_voltage_nominal;
+ print \"END\"
+
+ print \"BEGIN nut_$x.input_current $1\";
+ print \"SET input_current_nominal = \" input_current_nominal;
+ print \"END\"
+
+ print \"BEGIN nut_$x.input_frequency $1\";
+ print \"SET input_frequency = \" input_frequency;
+ print \"SET input_frequency_nominal = \" input_frequency_nominal;
+ print \"END\"
+
+ print \"BEGIN nut_$x.output_voltage $1\";
+ print \"SET output_voltage = \" output_voltage;
+ print \"END\"
+
+ print \"BEGIN nut_$x.load $1\";
+ print \"SET load = \" load;
+ print \"END\"
+
+ print \"BEGIN nut_$x.temp $1\";
+ print \"SET temp = \" temp;
+ print \"END\"
+}"
+ [ $? -ne 0 ] && unset nut_ids[$i] && echo >&2 "nut: failed to get values for '$i', disabling it."
+ done
+
+ [ ${#nut_ids[@]} -eq 0 ] && echo >&2 "nut: no UPSes left active." && return 1
+ return 0
+}
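nut_update() above multiplies every reading by 100 in awk because the matching DIMENSION lines declare a divisor of 100: upsc values may be fractional, and this keeps two decimal places while netdata moves only integers. A minimal sketch of that scaling, assuming the usual "variable: value" output of upsc (sample lines invented):

```sh
# invented upsc output piped through the same kind of scaling as nut_update()
printf '%s\n' 'battery.charge: 100' 'battery.voltage: 27.3' 'ups.load: 12.5' |
awk '
/^battery\.charge: /  { print "battery_charge",  $2 * 100 }   # -> 10000
/^battery\.voltage: / { print "battery_voltage", $2 * 100 }   # -> 2730
/^ups\.load: /        { print "load",            $2 * 100 }   # -> 1250
'
```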
diff --git a/charts.d/opensips.chart.sh b/charts.d/opensips.chart.sh
new file mode 100755
index 000000000..4b60c811d
--- /dev/null
+++ b/charts.d/opensips.chart.sh
@@ -0,0 +1,320 @@
+#!/bin/bash
+
+opensips_opts="fifo get_statistics all"
+opensips_cmd=
+opensips_update_every=5
+opensips_timeout=2
+opensips_priority=80000
+
+opensips_get_stats() {
+ timeout $opensips_timeout "$opensips_cmd" $opensips_opts |\
+ grep "^\(core\|dialog\|net\|registrar\|shmem\|siptrace\|sl\|tm\|uri\|usrloc\):[a-zA-Z0-9_ -]\+[[:space:]]*=[[:space:]]*[0-9]\+[[:space:]]*$" |\
+ sed \
+ -e "s|-|_|g" \
+ -e "s|:|_|g" \
+ -e "s|[[:space:]]\+=[[:space:]]\+|=|g" \
+ -e "s|[[:space:]]\+$||" \
+ -e "s|^[[:space:]]\+||" \
+ -e "s|[[:space:]]\+|_|" \
+ -e "s|^|opensips_|g"
+
+ local ret=$?
+ [ $ret -ne 0 ] && echo "opensips_command_failed=1"
+ return $ret
+}
+
+opensips_check() {
+ # if the user did not provide an opensips_cmd
+ # try to find it in the system
+ if [ -z "$opensips_cmd" ]
+ then
+ require_cmd opensipsctl || return 1
+ opensips_cmd="opensipsctl"
+ fi
+
+ # check once if the command works
+ local x="$(opensips_get_stats | grep "^opensips_core_")"
+ if [ ! $? -eq 0 -o -z "$x" ]
+ then
+ echo >&2 "$PROGRAM_NAME: opensips: cannot get global status. Please set opensips_opts='options' as needed to connect to the opensips server, in $confd/opensips.conf"
+ return 1
+ fi
+
+ return 0
+}
+
+opensips_create() {
+ # create the charts
+ cat <<EOF
+CHART opensips.dialogs_active '' "OpenSIPS Active Dialogs" "dialogs" dialogs '' area $[opensips_priority + 1] $opensips_update_every
+DIMENSION dialog_active_dialogs active absolute 1 1
+DIMENSION dialog_early_dialogs early absolute -1 1
+
+CHART opensips.users '' "OpenSIPS Users" "users" users '' line $[opensips_priority + 2] $opensips_update_every
+DIMENSION usrloc_registered_users registered absolute 1 1
+DIMENSION usrloc_location_users location absolute 1 1
+DIMENSION usrloc_location_contacts contacts absolute 1 1
+DIMENSION usrloc_location_expires expires incremental -1 1
+
+CHART opensips.registrar '' "OpenSIPS Registrar" "registrations/s" registrar '' line $[opensips_priority + 3] $opensips_update_every
+DIMENSION registrar_accepted_regs accepted incremental 1 1
+DIMENSION registrar_rejected_regs rejected incremental -1 1
+
+CHART opensips.transactions '' "OpenSIPS Transactions" "transactions/s" transactions '' line $[opensips_priority + 4] $opensips_update_every
+DIMENSION tm_UAS_transactions UAS incremental 1 1
+DIMENSION tm_UAC_transactions UAC incremental -1 1
+
+CHART opensips.core_rcv '' "OpenSIPS Core Receives" "queries/s" core '' line $[opensips_priority + 5] $opensips_update_every
+DIMENSION core_rcv_requests requests incremental 1 1
+DIMENSION core_rcv_replies replies incremental -1 1
+
+CHART opensips.core_fwd '' "OpenSIPS Core Forwards" "queries/s" core '' line $[opensips_priority + 6] $opensips_update_every
+DIMENSION core_fwd_requests requests incremental 1 1
+DIMENSION core_fwd_replies replies incremental -1 1
+
+CHART opensips.core_drop '' "OpenSIPS Core Drops" "queries/s" core '' line $[opensips_priority + 7] $opensips_update_every
+DIMENSION core_drop_requests requests incremental 1 1
+DIMENSION core_drop_replies replies incremental -1 1
+
+CHART opensips.core_err '' "OpenSIPS Core Errors" "queries/s" core '' line $[opensips_priority + 8] $opensips_update_every
+DIMENSION core_err_requests requests incremental 1 1
+DIMENSION core_err_replies replies incremental -1 1
+
+CHART opensips.core_bad '' "OpenSIPS Core Bad" "queries/s" core '' line $[opensips_priority + 9] $opensips_update_every
+DIMENSION core_bad_URIs_rcvd bad_URIs_rcvd incremental 1 1
+DIMENSION core_unsupported_methods unsupported_methods incremental 1 1
+DIMENSION core_bad_msg_hdr bad_msg_hdr incremental 1 1
+
+CHART opensips.tm_replies '' "OpenSIPS TM Replies" "replies/s" transactions '' line $[opensips_priority + 10] $opensips_update_every
+DIMENSION tm_received_replies received incremental 1 1
+DIMENSION tm_relayed_replies relayed incremental 1 1
+DIMENSION tm_local_replies local incremental 1 1
+
+CHART opensips.transactions_status '' "OpenSIPS Transactions Status" "transactions/s" transactions '' line $[opensips_priority + 11] $opensips_update_every
+DIMENSION tm_2xx_transactions 2xx incremental 1 1
+DIMENSION tm_3xx_transactions 3xx incremental 1 1
+DIMENSION tm_4xx_transactions 4xx incremental 1 1
+DIMENSION tm_5xx_transactions 5xx incremental 1 1
+DIMENSION tm_6xx_transactions 6xx incremental 1 1
+
+CHART opensips.transactions_inuse '' "OpenSIPS InUse Transactions" "transactions" transactions '' line $[opensips_priority + 12] $opensips_update_every
+DIMENSION tm_inuse_transactions inuse absolute 1 1
+
+CHART opensips.sl_replies '' "OpenSIPS SL Replies" "replies/s" core '' line $[opensips_priority + 13] $opensips_update_every
+DIMENSION sl_1xx_replies 1xx incremental 1 1
+DIMENSION sl_2xx_replies 2xx incremental 1 1
+DIMENSION sl_3xx_replies 3xx incremental 1 1
+DIMENSION sl_4xx_replies 4xx incremental 1 1
+DIMENSION sl_5xx_replies 5xx incremental 1 1
+DIMENSION sl_6xx_replies 6xx incremental 1 1
+DIMENSION sl_sent_replies sent incremental 1 1
+DIMENSION sl_sent_err_replies error incremental 1 1
+DIMENSION sl_received_ACKs ACKed incremental 1 1
+
+CHART opensips.dialogs '' "OpenSIPS Dialogs" "dialogs/s" dialogs '' line $[opensips_priority + 14] $opensips_update_every
+DIMENSION dialog_processed_dialogs processed incremental 1 1
+DIMENSION dialog_expired_dialogs expired incremental 1 1
+DIMENSION dialog_failed_dialogs failed incremental -1 1
+
+CHART opensips.net_waiting '' "OpenSIPS Network Waiting" "kilobytes" net '' line $[opensips_priority + 15] $opensips_update_every
+DIMENSION net_waiting_udp UDP absolute 1 1024
+DIMENSION net_waiting_tcp TCP absolute 1 1024
+
+CHART opensips.uri_checks '' "OpenSIPS URI Checks" "checks / sec" uri '' line $[opensips_priority + 16] $opensips_update_every
+DIMENSION uri_positive_checks positive incremental 1 1
+DIMENSION uri_negative_checks negative incremental -1 1
+
+CHART opensips.traces '' "OpenSIPS Traces" "traces / sec" traces '' line $[opensips_priority + 17] $opensips_update_every
+DIMENSION siptrace_traced_requests requests incremental 1 1
+DIMENSION siptrace_traced_replies replies incremental -1 1
+
+CHART opensips.shmem '' "OpenSIPS Shared Memory" "kilobytes" mem '' line $[opensips_priority + 18] $opensips_update_every
+DIMENSION shmem_total_size total absolute 1 1024
+DIMENSION shmem_used_size used absolute 1 1024
+DIMENSION shmem_real_used_size real_used absolute 1 1024
+DIMENSION shmem_max_used_size max_used absolute 1 1024
+DIMENSION shmem_free_size free absolute 1 1024
+
+CHART opensips.shmem_fragments '' "OpenSIPS Shared Memory Fragmentation" "fragments" mem '' line $[opensips_priority + 19] $opensips_update_every
+DIMENSION shmem_fragments fragments absolute 1 1
+EOF
+
+ return 0
+}
+
+opensips_update() {
+ # the first argument to this function is the microseconds since last update
+ # pass this parameter to the BEGIN statement (see below).
+
+ # do all the work to collect / calculate the values
+ # for each dimension
+
+ # 1. get the statistics from opensips (e.g. via 'opensipsctl fifo get_statistics all')
+ # 2. grep only the lines of the form 'group:name = number', for the known statistics groups
+ # 3. sed to replace - and : with _, remove the spaces around =, and prepend each line with: opensips_
+ # 4. then execute this as a script with eval
+ #    be very careful with eval:
+ #    prepare the script and always grep at the end only the lines that are useful, so that
+ #    even if something goes wrong, no other code can be executed
+
+ unset \
+ opensips_dialog_active_dialogs \
+ opensips_dialog_early_dialogs \
+ opensips_usrloc_registered_users \
+ opensips_usrloc_location_users \
+ opensips_usrloc_location_contacts \
+ opensips_usrloc_location_expires \
+ opensips_registrar_accepted_regs \
+ opensips_registrar_rejected_regs \
+ opensips_tm_UAS_transactions \
+ opensips_tm_UAC_transactions \
+ opensips_core_rcv_requests \
+ opensips_core_rcv_replies \
+ opensips_core_fwd_requests \
+ opensips_core_fwd_replies \
+ opensips_core_drop_requests \
+ opensips_core_drop_replies \
+ opensips_core_err_requests \
+ opensips_core_err_replies \
+ opensips_core_bad_URIs_rcvd \
+ opensips_core_unsupported_methods \
+ opensips_core_bad_msg_hdr \
+ opensips_tm_received_replies \
+ opensips_tm_relayed_replies \
+ opensips_tm_local_replies \
+ opensips_tm_2xx_transactions \
+ opensips_tm_3xx_transactions \
+ opensips_tm_4xx_transactions \
+ opensips_tm_5xx_transactions \
+ opensips_tm_6xx_transactions \
+ opensips_tm_inuse_transactions \
+ opensips_sl_1xx_replies \
+ opensips_sl_2xx_replies \
+ opensips_sl_3xx_replies \
+ opensips_sl_4xx_replies \
+ opensips_sl_5xx_replies \
+ opensips_sl_6xx_replies \
+ opensips_sl_sent_replies \
+ opensips_sl_sent_err_replies \
+ opensips_sl_received_ACKs \
+ opensips_dialog_processed_dialogs \
+ opensips_dialog_expired_dialogs \
+ opensips_dialog_failed_dialogs \
+ opensips_net_waiting_udp \
+ opensips_net_waiting_tcp \
+ opensips_uri_positive_checks \
+ opensips_uri_negative_checks \
+ opensips_siptrace_traced_requests \
+ opensips_siptrace_traced_replies \
+ opensips_shmem_total_size \
+ opensips_shmem_used_size \
+ opensips_shmem_real_used_size \
+ opensips_shmem_max_used_size \
+ opensips_shmem_free_size \
+ opensips_shmem_fragments
+
+ opensips_command_failed=0
+ eval "local $(opensips_get_stats)"
+ [ $? -ne 0 ] && return 1
+
+ [ $opensips_command_failed -eq 1 ] && echo >&2 "$PROGRAM_NAME: opensips: failed to get values, disabling." && return 1
+
+ # write the result of the work.
+ cat <<VALUESEOF
+BEGIN opensips.dialogs_active $1
+SET dialog_active_dialogs = $opensips_dialog_active_dialogs
+SET dialog_early_dialogs = $opensips_dialog_early_dialogs
+END
+BEGIN opensips.users $1
+SET usrloc_registered_users = $opensips_usrloc_registered_users
+SET usrloc_location_users = $opensips_usrloc_location_users
+SET usrloc_location_contacts = $opensips_usrloc_location_contacts
+SET usrloc_location_expires = $opensips_usrloc_location_expires
+END
+BEGIN opensips.registrar $1
+SET registrar_accepted_regs = $opensips_registrar_accepted_regs
+SET registrar_rejected_regs = $opensips_registrar_rejected_regs
+END
+BEGIN opensips.transactions $1
+SET tm_UAS_transactions = $opensips_tm_UAS_transactions
+SET tm_UAC_transactions = $opensips_tm_UAC_transactions
+END
+BEGIN opensips.core_rcv $1
+SET core_rcv_requests = $opensips_core_rcv_requests
+SET core_rcv_replies = $opensips_core_rcv_replies
+END
+BEGIN opensips.core_fwd $1
+SET core_fwd_requests = $opensips_core_fwd_requests
+SET core_fwd_replies = $opensips_core_fwd_replies
+END
+BEGIN opensips.core_drop $1
+SET core_drop_requests = $opensips_core_drop_requests
+SET core_drop_replies = $opensips_core_drop_replies
+END
+BEGIN opensips.core_err $1
+SET core_err_requests = $opensips_core_err_requests
+SET core_err_replies = $opensips_core_err_replies
+END
+BEGIN opensips.core_bad $1
+SET core_bad_URIs_rcvd = $opensips_core_bad_URIs_rcvd
+SET core_unsupported_methods = $opensips_core_unsupported_methods
+SET core_bad_msg_hdr = $opensips_core_bad_msg_hdr
+END
+BEGIN opensips.tm_replies $1
+SET tm_received_replies = $opensips_tm_received_replies
+SET tm_relayed_replies = $opensips_tm_relayed_replies
+SET tm_local_replies = $opensips_tm_local_replies
+END
+BEGIN opensips.transactions_status $1
+SET tm_2xx_transactions = $opensips_tm_2xx_transactions
+SET tm_3xx_transactions = $opensips_tm_3xx_transactions
+SET tm_4xx_transactions = $opensips_tm_4xx_transactions
+SET tm_5xx_transactions = $opensips_tm_5xx_transactions
+SET tm_6xx_transactions = $opensips_tm_6xx_transactions
+END
+BEGIN opensips.transactions_inuse $1
+SET tm_inuse_transactions = $opensips_tm_inuse_transactions
+END
+BEGIN opensips.sl_replies $1
+SET sl_1xx_replies = $opensips_sl_1xx_replies
+SET sl_2xx_replies = $opensips_sl_2xx_replies
+SET sl_3xx_replies = $opensips_sl_3xx_replies
+SET sl_4xx_replies = $opensips_sl_4xx_replies
+SET sl_5xx_replies = $opensips_sl_5xx_replies
+SET sl_6xx_replies = $opensips_sl_6xx_replies
+SET sl_sent_replies = $opensips_sl_sent_replies
+SET sl_sent_err_replies = $opensips_sl_sent_err_replies
+SET sl_received_ACKs = $opensips_sl_received_ACKs
+END
+BEGIN opensips.dialogs $1
+SET dialog_processed_dialogs = $opensips_dialog_processed_dialogs
+SET dialog_expired_dialogs = $opensips_dialog_expired_dialogs
+SET dialog_failed_dialogs = $opensips_dialog_failed_dialogs
+END
+BEGIN opensips.net_waiting $1
+SET net_waiting_udp = $opensips_net_waiting_udp
+SET net_waiting_tcp = $opensips_net_waiting_tcp
+END
+BEGIN opensips.uri_checks $1
+SET uri_positive_checks = $opensips_uri_positive_checks
+SET uri_negative_checks = $opensips_uri_negative_checks
+END
+BEGIN opensips.traces $1
+SET siptrace_traced_requests = $opensips_siptrace_traced_requests
+SET siptrace_traced_replies = $opensips_siptrace_traced_replies
+END
+BEGIN opensips.shmem $1
+SET shmem_total_size = $opensips_shmem_total_size
+SET shmem_used_size = $opensips_shmem_used_size
+SET shmem_real_used_size = $opensips_shmem_real_used_size
+SET shmem_max_used_size = $opensips_shmem_max_used_size
+SET shmem_free_size = $opensips_shmem_free_size
+END
+BEGIN opensips.shmem_fragments $1
+SET shmem_fragments = $opensips_shmem_fragments
+END
+VALUESEOF
+
+ return 0
+}
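The eval in opensips_update() is only safe because opensips_get_stats() has already reduced the output to plain NAME=NUMBER assignments: the grep whitelists the known statistics groups and requires a trailing number, so nothing else can reach eval. A minimal sketch of that pipeline, with invented statistics lines:

```sh
# invented 'opensipsctl fifo get_statistics all' lines run through the same pipeline
printf '%s\n' 'core:rcv_requests = 2711' 'shmem:total_size = 33554432' |
grep "^\(core\|shmem\):[a-zA-Z0-9_ -]\+[[:space:]]*=[[:space:]]*[0-9]\+[[:space:]]*$" |
sed -e "s|-|_|g" -e "s|:|_|g" -e "s|[[:space:]]\+=[[:space:]]\+|=|g" -e "s|^|opensips_|g"
# prints:
#   opensips_core_rcv_requests=2711
#   opensips_shmem_total_size=33554432
```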
diff --git a/charts.d/postfix.chart.sh b/charts.d/postfix.chart.sh
new file mode 100755
index 000000000..d286f99f2
--- /dev/null
+++ b/charts.d/postfix.chart.sh
@@ -0,0 +1,92 @@
+#!/bin/bash
+
+# the postqueue command
+# if empty, it will use the one found in the system path
+postfix_postqueue=
+
+# how frequently to collect queue size
+postfix_update_every=15
+
+postfix_priority=60000
+
+postfix_check() {
+ # this should return:
+ # - 0 to enable the chart
+ # - 1 to disable the chart
+
+ # try to find the postqueue executable
+ if [ -z "$postfix_postqueue" -o ! -x "$postfix_postqueue" ]
+ then
+ postfix_postqueue="`which postqueue 2>/dev/null`"
+ if [ -z "$postfix_postqueue" -o ! -x "$postfix_postqueue" ]
+ then
+ local d=
+ for d in /sbin /usr/sbin /usr/local/sbin
+ do
+ if [ -x "$d/postqueue" ]
+ then
+ postfix_postqueue="$d/postqueue"
+ break
+ fi
+ done
+ fi
+ fi
+
+ if [ -z "$postfix_postqueue" -o ! -x "$postfix_postqueue" ]
+ then
+ echo >&2 "$PROGRAM_NAME: postfix: cannot find postqueue. Please set 'postfix_postqueue=/path/to/postqueue' in $confd/postfix.conf"
+ return 1
+ fi
+
+ return 0
+}
+
+postfix_create() {
+cat <<EOF
+CHART postfix.qemails '' "Postfix Queue Emails" "emails" queue postfix.queued.emails line $[postfix_priority + 1] $postfix_update_every
+DIMENSION emails '' absolute 1 1
+CHART postfix.qsize '' "Postfix Queue Emails Size" "emails size in KB" queue postfix.queued.size area $[postfix_priority + 2] $postfix_update_every
+DIMENSION size '' absolute 1 1
+EOF
+
+ return 0
+}
+
+postfix_update() {
+ # the first argument to this function is the microseconds since last update
+ # pass this parameter to the BEGIN statement (see below).
+
+ # do all the work to collect / calculate the values
+ # for each dimension
+ # remember: KEEP IT SIMPLE AND SHORT
+
+ # 1. execute postqueue -p
+ # 2. get the line that begins with --
+ # 3. match the 2 numbers on the line and output 2 lines like these:
+ # local postfix_q_size=NUMBER
+ # local postfix_q_emails=NUMBER
+ # 4. then execute this as a script with eval
+ #
+ # be very careful with eval:
+ # prepare the script and always egrep at the end the lines that are useful, so that
+ # even if something goes wrong, no other code can be executed
+ postfix_q_emails=0
+ postfix_q_size=0
+
+ eval "`$postfix_postqueue -p |\
+ grep "^--" |\
+ sed -e "s/-- \([0-9]\+\) Kbytes in \([0-9]\+\) Requests.$/local postfix_q_size=\1\nlocal postfix_q_emails=\2/g" |\
+ egrep "^local postfix_q_(emails|size)=[0-9]+$"`"
+
+ # write the result of the work.
+ cat <<VALUESEOF
+BEGIN postfix.qemails $1
+SET emails = $postfix_q_emails
+END
+BEGIN postfix.qsize $1
+SET size = $postfix_q_size
+END
+VALUESEOF
+
+ return 0
+}
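For reference, a minimal sketch of the postqueue -p parsing done in postfix_update(): only the summary line is kept, and the sed/egrep pair guarantees that eval sees nothing but the two numeric assignments (summary line invented):

```sh
# invented 'postqueue -p' summary line run through the same pipeline
echo '-- 1024 Kbytes in 52 Requests.' |
grep "^--" |
sed -e "s/-- \([0-9]\+\) Kbytes in \([0-9]\+\) Requests.$/local postfix_q_size=\1\nlocal postfix_q_emails=\2/g" |
egrep "^local postfix_q_(emails|size)=[0-9]+$"
# prints:
#   local postfix_q_size=1024
#   local postfix_q_emails=52
```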
diff --git a/charts.d/sensors.chart.sh b/charts.d/sensors.chart.sh
new file mode 100755
index 000000000..d14ddf0de
--- /dev/null
+++ b/charts.d/sensors.chart.sh
@@ -0,0 +1,238 @@
+#!/bin/bash
+
+# sensors docs
+# https://www.kernel.org/doc/Documentation/hwmon/sysfs-interface
+
+# if this chart is called X.chart.sh, then all functions and global variables
+# must start with X_
+
+# the directory the kernel keeps sensor data
+sensors_sys_dir="${NETDATA_HOST_PREFIX}/sys/devices"
+
+# how deep in the tree to check for sensor data
+sensors_sys_depth=10
+
+# if set to 1, the script will overwrite internal
+# script functions with code-generated ones
+# leave it set to 1, it is faster
+sensors_source_update=1
+
+# how frequently to collect sensor data
+# the default is to collect it at every iteration of charts.d
+sensors_update_every=
+
+sensors_priority=90000
+
+sensors_find_all_files() {
+ find $1 -maxdepth $sensors_sys_depth -name \*_input -o -name temp 2>/dev/null
+}
+
+sensors_find_all_dirs() {
+ sensors_find_all_files $1 | while read
+ do
+ dirname $REPLY
+ done | sort -u
+}
+
+# _check is called once, to find out if this chart should be enabled or not
+sensors_check() {
+
+ # this should return:
+ # - 0 to enable the chart
+ # - 1 to disable the chart
+
+ [ -z "$( sensors_find_all_files $sensors_sys_dir )" ] && echo >&2 "$PROGRAM_NAME: sensors: no sensors found in '$sensors_sys_dir'." && return 1
+ return 0
+}
+
+sensors_check_files() {
+ # we only need sensors that report a non-zero value
+
+ local f= v=
+ for f in $*
+ do
+ [ ! -f "$f" ] && continue
+
+ v="$( cat $f )"
+ v=$(( v + 1 - 1 ))
+ [ $v -ne 0 ] && echo "$f" && continue
+
+ echo >&2 "$PROGRAM_NAME: sensors: $f gives zero values"
+ done
+}
+
+sensors_check_temp_type() {
+ # valid temp types are 1 to 6
+ # disabled sensors have the value 0
+
+ local f= t= v=
+ for f in $*
+ do
+ t=$( echo $f | sed "s|_input$|_type|g" )
+ [ "$f" = "$t" ] && echo "$f" && continue
+ [ ! -f "$t" ] && echo "$f" && continue
+
+ v="$( cat $t )"
+ v=$(( v + 1 - 1 ))
+ [ $v -ne 0 ] && echo "$f" && continue
+
+ echo >&2 "$PROGRAM_NAME: sensors: $f is disabled"
+ done
+}
+
+# _create is called once, to create the charts
+sensors_create() {
+ local path= dir= name= x= file= lfile= labelname= labelid= device= subsystem= id= type= mode= files= multiplier= divisor=
+
+ # we create a script with the source of the
+ # sensors_update() function
+ # - the highest speed we can achieve -
+ [ $sensors_source_update -eq 1 ] && echo >$TMP_DIR/sensors.sh "sensors_update() {"
+
+ for path in $( sensors_find_all_dirs $sensors_sys_dir | sort -u )
+ do
+ dir=$( basename $path )
+ device=
+ subsystem=
+ id=
+ type=
+ name=
+
+ [ -h $path/device ] && device=$( readlink -f $path/device )
+ [ ! -z "$device" ] && device=$( basename $device )
+ [ -z "$device" ] && device="$dir"
+
+ [ -h $path/subsystem ] && subsystem=$( readlink -f $path/subsystem )
+ [ ! -z "$subsystem" ] && subsystem=$( basename $subsystem )
+ [ -z "$subsystem" ] && subsystem="$dir"
+
+ [ -f $path/name ] && name=$( cat $path/name )
+ [ -z "$name" ] && name="$dir"
+
+ [ -f $path/type ] && type=$( cat $path/type )
+ [ -z "$type" ] && type="$dir"
+
+ id="$( fixid "$device.$subsystem.$dir" )"
+
+ echo >&2 "charts.d: sensors: on path='$path', dir='$dir', device='$device', subsystem='$subsystem', id='$id', name='$name'"
+
+ for mode in temperature voltage fans power current energy humidity
+ do
+ files=
+ multiplier=1
+ divisor=1
+ algorithm="absolute"
+
+ case $mode in
+ temperature)
+ files="$( ls $path/temp*_input 2>/dev/null; ls $path/temp 2>/dev/null )"
+ files="$( sensors_check_files $files )"
+ files="$( sensors_check_temp_type $files )"
+ [ -z "$files" ] && continue
+ echo "CHART sensors.temp_$id '' '$name Temperature' 'Celsius' 'temperature' 'sensors.temp' line $[sensors_priority + 1] $sensors_update_every"
+ echo >>$TMP_DIR/sensors.sh "echo \"BEGIN sensors.temp_$id \$1\""
+ divisor=1000
+ ;;
+
+ voltage)
+ files="$( ls $path/in*_input 2>/dev/null )"
+ files="$( sensors_check_files $files )"
+ [ -z "$files" ] && continue
+ echo "CHART sensors.volt_$id '' '$name Voltage' 'Volts' 'voltage' 'sensors.volt' line $[sensors_priority + 2] $sensors_update_every"
+ echo >>$TMP_DIR/sensors.sh "echo \"BEGIN sensors.volt_$id \$1\""
+ divisor=1000
+ ;;
+
+ current)
+ files="$( ls $path/curr*_input 2>/dev/null )"
+ files="$( sensors_check_files $files )"
+ [ -z "$files" ] && continue
+ echo "CHART sensors.curr_$id '' '$name Current' 'Ampere' 'current' 'sensors.curr' line $[sensors_priority + 3] $sensors_update_every"
+ echo >>$TMP_DIR/sensors.sh "echo \"BEGIN sensors.curr_$id \$1\""
+ divisor=1000
+ ;;
+
+ power)
+ files="$( ls $path/power*_input 2>/dev/null )"
+ files="$( sensors_check_files $files )"
+ [ -z "$files" ] && continue
+ echo "CHART sensors.power_$id '' '$name Power' 'Watt' 'power' 'sensors.power' line $[sensors_priority + 4] $sensors_update_every"
+ echo >>$TMP_DIR/sensors.sh "echo \"BEGIN sensors.power_$id \$1\""
+ divisor=1000000
+ ;;
+
+ fans)
+ files="$( ls $path/fan*_input 2>/dev/null )"
+ files="$( sensors_check_files $files )"
+ [ -z "$files" ] && continue
+ echo "CHART sensors.fan_$id '' '$name Fans Speed' 'Rotations / Minute' 'fans' 'sensors.fans' line $[sensors_priority + 5] $sensors_update_every"
+ echo >>$TMP_DIR/sensors.sh "echo \"BEGIN sensors.fan_$id \$1\""
+ ;;
+
+ energy)
+ files="$( ls $path/energy*_input 2>/dev/null )"
+ files="$( sensors_check_files $files )"
+ [ -z "$files" ] && continue
+ echo "CHART sensors.energy_$id '' '$name Energy' 'Joule' 'energy' 'sensors.energy' areastack $[sensors_priority + 6] $sensors_update_every"
+ echo >>$TMP_DIR/sensors.sh "echo \"BEGIN sensors.energy_$id \$1\""
+ algorithm="incremental"
+ divisor=1000000
+ ;;
+
+ humidity)
+ files="$( ls $path/humidity*_input 2>/dev/null )"
+ files="$( sensors_check_files $files )"
+ [ -z "$files" ] && continue
+ echo "CHART sensors.humidity_$id '' '$name Humidity' 'Percent' 'humidity' 'sensors.humidity' line $[sensors_priority + 7] $sensors_update_every"
+ echo >>$TMP_DIR/sensors.sh "echo \"BEGIN sensors.humidity_$id \$1\""
+ divisor=1000
+ ;;
+
+ *)
+ continue
+ ;;
+ esac
+
+ for x in $files
+ do
+ file="$x"
+ fid="$( fixid "$file" )"
+ lfile="$( basename $file | sed "s|_input$|_label|g" )"
+ labelname="$( basename $file | sed "s|_input$||g" )"
+
+ if [ ! "$path/$lfile" = "$file" -a -f "$path/$lfile" ]
+ then
+ labelname="$( cat "$path/$lfile" )"
+ fi
+
+ echo "DIMENSION $fid '$labelname' $algorithm $multiplier $divisor"
+ echo >>$TMP_DIR/sensors.sh "printf \"SET $fid = \"; cat $file "
+ done
+
+ echo >>$TMP_DIR/sensors.sh "echo END"
+ done
+ done
+
+ [ $sensors_source_update -eq 1 ] && echo >>$TMP_DIR/sensors.sh "}"
+ # cat >&2 $TMP_DIR/sensors.sh
+
+ # ok, load the function sensors_update() we created
+ [ $sensors_source_update -eq 1 ] && . $TMP_DIR/sensors.sh
+
+ return 0
+}
+
+# _update is called continuously, to collect the values
+sensors_update() {
+ # the first argument to this function is the microseconds since last update
+ # pass this parameter to the BEGIN statement (see below).
+
+ # do all the work to collect / calculate the values
+ # for each dimension
+ # remember: KEEP IT SIMPLE AND SHORT
+
+ [ $sensors_source_update -eq 0 ] && . $TMP_DIR/sensors.sh $1
+
+ return 0
+}
+
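When sensors_source_update=1, sensors_create() writes a specialised sensors_update() into $TMP_DIR/sensors.sh and sources it, so each iteration only echoes BEGIN/SET/END lines and cats the sysfs files, with no discovery logic. A hypothetical example of such a generated file (the sysfs path and the chart/dimension ids are invented for illustration):

```sh
# hypothetical content of $TMP_DIR/sensors.sh after sensors_create() has run
sensors_update() {
echo "BEGIN sensors.temp_coretemp_0_hwmon0 $1"
printf "SET _sys_devices_platform_coretemp_0_hwmon_hwmon0_temp1_input = "; cat /sys/devices/platform/coretemp.0/hwmon/hwmon0/temp1_input
echo END
}
```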
diff --git a/charts.d/squid.chart.sh b/charts.d/squid.chart.sh
new file mode 100755
index 000000000..6260ce97f
--- /dev/null
+++ b/charts.d/squid.chart.sh
@@ -0,0 +1,145 @@
+#!/bin/bash
+
+squid_host=
+squid_port=
+squid_url=
+squid_timeout=2
+squid_update_every=5
+squid_priority=60000
+
+squid_get_stats_internal() {
+ local host="$1" port="$2" url="$3"
+
+ nc -w $squid_timeout $host $port <<EOF
+GET $url HTTP/1.0
+Host: $host:$port
+Accept: */*
+User-Agent: netdata (charts.d/squid.chart.sh)
+
+EOF
+}
+
+squid_get_stats() {
+ squid_get_stats_internal "$squid_host" "$squid_port" "$squid_url"
+}
+
+squid_autodetect() {
+ local host="127.0.0.1" port url x
+
+ for port in 3128 8080
+ do
+ for url in "cache_object://$host:$port/counters" "/squid-internal-mgr/counters"
+ do
+ x=$(squid_get_stats_internal "$host" "$port" "$url" | grep client_http.requests)
+ if [ ! -z "$x" ]
+ then
+ squid_host="$host"
+ squid_port="$port"
+ squid_url="$url"
+ echo >&2 "squid: found squid at '$host:$port' with url '$url'"
+ return 0
+ fi
+ done
+ done
+
+ echo >&2 "squid: cannot find squid running on localhost. Please set squid_url='url' and squid_host='IP' and squid_port='PORT' in $confd/squid.conf"
+ return 1
+}
+
+squid_check() {
+ require_cmd nc || return 1
+ require_cmd sed || return 1
+ require_cmd egrep || return 1
+
+ if [ -z "$squid_host" -o -z "$squid_port" -o -z "$squid_url" ]
+ then
+ squid_autodetect || return 1
+ fi
+
+ # check once if the url works
+ local x="$(squid_get_stats | grep client_http.requests)"
+ if [ ! $? -eq 0 -o -z "$x" ]
+ then
+ echo >&2 "squid: cannot fetch URL '$squid_url' by connecting to $squid_host:$squid_port. Please set squid_url='url' and squid_host='host' and squid_port='port' in $confd/squid.conf"
+ return 1
+ fi
+
+ return 0
+}
+
+squid_create() {
+ # create the charts
+ cat <<EOF
+CHART squid.clients_net '' "Squid Client Bandwidth" "kilobits / sec" clients squid.clients.net area $[squid_priority + 1] $squid_update_every
+DIMENSION client_http_kbytes_in in incremental 8 1
+DIMENSION client_http_kbytes_out out incremental -8 1
+DIMENSION client_http_hit_kbytes_out hits incremental -8 1
+
+CHART squid.clients_requests '' "Squid Client Requests" "requests / sec" clients squid.clients.requests line $[squid_priority + 3] $squid_update_every
+DIMENSION client_http_requests requests incremental 1 1
+DIMENSION client_http_hits hits incremental 1 1
+DIMENSION client_http_errors errors incremental -1 1
+
+CHART squid.servers_net '' "Squid Server Bandwidth" "kilobits / sec" servers squid.servers.net area $[squid_priority + 2] $squid_update_every
+DIMENSION server_all_kbytes_in in incremental 8 1
+DIMENSION server_all_kbytes_out out incremental -8 1
+
+CHART squid.servers_requests '' "Squid Server Requests" "requests / sec" servers squid.servers.requests line $[squid_priority + 4] $squid_update_every
+DIMENSION server_all_requests requests incremental 1 1
+DIMENSION server_all_errors errors incremental -1 1
+EOF
+
+ return 0
+}
+
+
+squid_update() {
+ # the first argument to this function is the microseconds since last update
+ # pass this parameter to the BEGIN statement (see below).
+
+ # do all the work to collect / calculate the values
+ # for each dimension
+ # remember: KEEP IT SIMPLE AND SHORT
+
+ # 1. get the counters page from squid
+ # 2. sed to remove spaces; replace . with _; remove spaces around =; prepend each line with: local squid_
+ # 3. egrep lines starting with:
+ #    local squid_client_http_ then one or more of these a-z 0-9 _ then = and one or more of 0-9
+ #    local squid_server_all_ then one or more of these a-z 0-9 _ then = and one or more of 0-9
+ # 4. then execute this as a script with eval
+ #
+ # be very careful with eval:
+ # prepare the script and always grep at the end the lines that are useful, so that
+ # even if something goes wrong, no other code can be executed
+
+ eval "$(squid_get_stats |\
+ sed -e "s/ \+/ /g" -e "s/\./_/g" -e "s/^\([a-z0-9_]\+\) *= *\([0-9]\+\)$/local squid_\1=\2/g" |\
+ egrep "^local squid_(client_http|server_all)_[a-z0-9_]+=[0-9]+$")"
+
+ # write the result of the work.
+ cat <<VALUESEOF
+BEGIN squid.clients_net $1
+SET client_http_kbytes_in = $squid_client_http_kbytes_in
+SET client_http_kbytes_out = $squid_client_http_kbytes_out
+SET client_http_hit_kbytes_out = $squid_client_http_hit_kbytes_out
+END
+
+BEGIN squid.clients_requests $1
+SET client_http_requests = $squid_client_http_requests
+SET client_http_hits = $squid_client_http_hits
+SET client_http_errors = $squid_client_http_errors
+END
+
+BEGIN squid.servers_net $1
+SET server_all_kbytes_in = $squid_server_all_kbytes_in
+SET server_all_kbytes_out = $squid_server_all_kbytes_out
+END
+
+BEGIN squid.servers_requests $1
+SET server_all_requests = $squid_server_all_requests
+SET server_all_errors = $squid_server_all_errors
+END
+VALUESEOF
+
+ return 0
+}
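As with the opensips and postfix collectors, the eval in squid_update() only ever sees whitelisted local squid_...=NUMBER lines produced by the sed/egrep pipeline. A minimal sketch of that transformation, with invented counter lines:

```sh
# invented squid counters run through the same pipeline as squid_update()
printf '%s\n' 'client_http.requests = 12345' 'server.all.kbytes_in = 999999' |
sed -e "s/ \+/ /g" -e "s/\./_/g" -e "s/^\([a-z0-9_]\+\) *= *\([0-9]\+\)$/local squid_\1=\2/g" |
egrep "^local squid_(client_http|server_all)_[a-z0-9_]+=[0-9]+$"
# prints:
#   local squid_client_http_requests=12345
#   local squid_server_all_kbytes_in=999999
```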