Diffstat (limited to 'collectors/python.d.plugin/rabbitmq')
-rw-r--r--  collectors/python.d.plugin/rabbitmq/Makefile.inc        13
-rw-r--r--  collectors/python.d.plugin/rabbitmq/README.md          141
-rw-r--r--  collectors/python.d.plugin/rabbitmq/rabbitmq.chart.py  443
-rw-r--r--  collectors/python.d.plugin/rabbitmq/rabbitmq.conf       86
4 files changed, 0 insertions, 683 deletions
diff --git a/collectors/python.d.plugin/rabbitmq/Makefile.inc b/collectors/python.d.plugin/rabbitmq/Makefile.inc
deleted file mode 100644
index 7e67ef512..000000000
--- a/collectors/python.d.plugin/rabbitmq/Makefile.inc
+++ /dev/null
@@ -1,13 +0,0 @@
-# SPDX-License-Identifier: GPL-3.0-or-later
-
-# THIS IS NOT A COMPLETE Makefile
-# IT IS INCLUDED BY ITS PARENT'S Makefile.am
-# IT IS REQUIRED TO REFERENCE ALL FILES RELATIVE TO THE PARENT
-
-# install these files
-dist_python_DATA += rabbitmq/rabbitmq.chart.py
-dist_pythonconfig_DATA += rabbitmq/rabbitmq.conf
-
-# do not install these files, but include them in the distribution
-dist_noinst_DATA += rabbitmq/README.md rabbitmq/Makefile.inc
-
diff --git a/collectors/python.d.plugin/rabbitmq/README.md b/collectors/python.d.plugin/rabbitmq/README.md
deleted file mode 100644
index 19df65694..000000000
--- a/collectors/python.d.plugin/rabbitmq/README.md
+++ /dev/null
@@ -1,141 +0,0 @@
-<!--
-title: "RabbitMQ monitoring with Netdata"
-custom_edit_url: "https://github.com/netdata/netdata/edit/master/collectors/python.d.plugin/rabbitmq/README.md"
-sidebar_label: "rabbitmq-python.d.plugin"
-learn_status: "Published"
-learn_topic_type: "References"
-learn_rel_path: "References/Collectors references/Message brokers"
--->
-
-# RabbitMQ monitoring with Netdata
-
-Collects message broker global and per virtual host metrics.
-
-
-The following charts are drawn:
-
-1. **Queued Messages**
-
- - ready
- - unacknowledged
-
-2. **Message Rates**
-
- - ack
- - redelivered
- - deliver
- - publish
-
-3. **Global Counts**
-
- - channels
- - consumers
- - connections
- - queues
- - exchanges
-
-4. **File Descriptors**
-
- - used descriptors
-
-5. **Socket Descriptors**
-
- - used descriptors
-
-6. **Erlang processes**
-
- - used processes
-
-7. **Erlang run queue**
-
- - Erlang run queue
-
-8. **Memory**
-
- - free memory in megabytes
-
-9. **Disk Space**
-
- - free disk space in gigabytes
-
-
-Per Vhost charts:
-
-1. **Vhost Messages**
-
- - ack
- - confirm
- - deliver
- - get
- - get_no_ack
- - publish
- - redeliver
- - return_unroutable
-
-2. Per Queue charts:
-
- 1. **Queued Messages**
-
- - messages
- - paged_out
- - persistent
- - ready
- - unacknowledged
-
- 2. **Queue Messages stats**
-
- - ack
- - confirm
- - deliver
- - get
- - get_no_ack
- - publish
- - redeliver
- - return_unroutable
-
-## Configuration
-
-Edit the `python.d/rabbitmq.conf` configuration file using `edit-config` from the Netdata [config
-directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md), which is typically at `/etc/netdata`.
-
-```bash
-cd /etc/netdata # Replace this path with your Netdata config directory, if different
-sudo ./edit-config python.d/rabbitmq.conf
-```
-
-When no configuration file is found, the module tries to connect to `localhost:15672`.
-
-```yaml
-socket:
- name : 'local'
- host : '127.0.0.1'
- port : 15672
- user : 'guest'
- pass : 'guest'
-```
-
----
-
-### Per-Queue Chart configuration
-
-RabbitMQ users with the "monitoring" tag cannot see all queue data. You'll need a user with read permissions.
-To create a dedicated user for netdata:
-
-```bash
-rabbitmqctl add_user netdata ChangeThisSuperSecretPassword
-rabbitmqctl set_permissions netdata "^$" "^$" ".*"
-```
-
-See [set_permissions](https://www.rabbitmq.com/rabbitmqctl.8.html#set_permissions) for details. The three patterns above grant the user no configure, no write, and full read permissions, respectively.
-
-Once the user is set up, add `collect_queues_metrics: yes` to your `rabbitmq.conf`:
-
-```yaml
-local:
- name : 'local'
- host : '127.0.0.1'
- port : 15672
- user : 'netdata'
- pass : 'ChangeThisSuperSecretPassword'
- collect_queues_metrics : 'yes'
-```
diff --git a/collectors/python.d.plugin/rabbitmq/rabbitmq.chart.py b/collectors/python.d.plugin/rabbitmq/rabbitmq.chart.py
deleted file mode 100644
index 866b777f7..000000000
--- a/collectors/python.d.plugin/rabbitmq/rabbitmq.chart.py
+++ /dev/null
@@ -1,443 +0,0 @@
-# -*- coding: utf-8 -*-
-# Description: rabbitmq netdata python.d module
-# Author: ilyam8
-# SPDX-License-Identifier: GPL-3.0-or-later
-
-from json import loads
-
-from bases.FrameworkServices.UrlService import UrlService
-
-API_NODE = 'api/nodes'
-API_OVERVIEW = 'api/overview'
-API_QUEUES = 'api/queues'
-API_VHOSTS = 'api/vhosts'
-
-NODE_STATS = [
- 'fd_used',
- 'mem_used',
- 'sockets_used',
- 'proc_used',
- 'disk_free',
- 'run_queue'
-]
-
-OVERVIEW_STATS = [
- 'object_totals.channels',
- 'object_totals.consumers',
- 'object_totals.connections',
- 'object_totals.queues',
- 'object_totals.exchanges',
- 'queue_totals.messages_ready',
- 'queue_totals.messages_unacknowledged',
- 'message_stats.ack',
- 'message_stats.redeliver',
- 'message_stats.deliver',
- 'message_stats.publish',
- 'churn_rates.connection_created_details.rate',
- 'churn_rates.connection_closed_details.rate',
- 'churn_rates.channel_created_details.rate',
- 'churn_rates.channel_closed_details.rate',
- 'churn_rates.queue_created_details.rate',
- 'churn_rates.queue_declared_details.rate',
- 'churn_rates.queue_deleted_details.rate'
-]
-
-QUEUE_STATS = [
- 'messages',
- 'messages_paged_out',
- 'messages_persistent',
- 'messages_ready',
- 'messages_unacknowledged',
- 'message_stats.ack',
- 'message_stats.confirm',
- 'message_stats.deliver',
- 'message_stats.get',
- 'message_stats.get_no_ack',
- 'message_stats.publish',
- 'message_stats.redeliver',
- 'message_stats.return_unroutable',
-]
-
-VHOST_MESSAGE_STATS = [
- 'message_stats.ack',
- 'message_stats.confirm',
- 'message_stats.deliver',
- 'message_stats.get',
- 'message_stats.get_no_ack',
- 'message_stats.publish',
- 'message_stats.redeliver',
- 'message_stats.return_unroutable',
-]
-
-ORDER = [
- 'queued_messages',
- 'connection_churn_rates',
- 'channel_churn_rates',
- 'queue_churn_rates',
- 'message_rates',
- 'global_counts',
- 'file_descriptors',
- 'socket_descriptors',
- 'erlang_processes',
- 'erlang_run_queue',
- 'memory',
- 'disk_space'
-]
-
-CHARTS = {
- 'file_descriptors': {
- 'options': [None, 'File Descriptors', 'descriptors', 'overview', 'rabbitmq.file_descriptors', 'line'],
- 'lines': [
- ['fd_used', 'used', 'absolute']
- ]
- },
- 'memory': {
- 'options': [None, 'Memory', 'MiB', 'overview', 'rabbitmq.memory', 'area'],
- 'lines': [
- ['mem_used', 'used', 'absolute', 1, 1 << 20]
- ]
- },
- 'disk_space': {
- 'options': [None, 'Disk Space', 'GiB', 'overview', 'rabbitmq.disk_space', 'area'],
- 'lines': [
- ['disk_free', 'free', 'absolute', 1, 1 << 30]
- ]
- },
- 'socket_descriptors': {
- 'options': [None, 'Socket Descriptors', 'descriptors', 'overview', 'rabbitmq.sockets', 'line'],
- 'lines': [
- ['sockets_used', 'used', 'absolute']
- ]
- },
- 'erlang_processes': {
- 'options': [None, 'Erlang Processes', 'processes', 'overview', 'rabbitmq.processes', 'line'],
- 'lines': [
- ['proc_used', 'used', 'absolute']
- ]
- },
- 'erlang_run_queue': {
- 'options': [None, 'Erlang Run Queue', 'processes', 'overview', 'rabbitmq.erlang_run_queue', 'line'],
- 'lines': [
- ['run_queue', 'length', 'absolute']
- ]
- },
- 'global_counts': {
- 'options': [None, 'Global Counts', 'counts', 'overview', 'rabbitmq.global_counts', 'line'],
- 'lines': [
- ['object_totals_channels', 'channels', 'absolute'],
- ['object_totals_consumers', 'consumers', 'absolute'],
- ['object_totals_connections', 'connections', 'absolute'],
- ['object_totals_queues', 'queues', 'absolute'],
- ['object_totals_exchanges', 'exchanges', 'absolute']
- ]
- },
- 'connection_churn_rates': {
- 'options': [None, 'Connection Churn Rates', 'operations/s', 'overview', 'rabbitmq.connection_churn_rates', 'line'],
- 'lines': [
- ['churn_rates_connection_created_details_rate', 'created', 'absolute'],
- ['churn_rates_connection_closed_details_rate', 'closed', 'absolute']
- ]
- },
- 'channel_churn_rates': {
- 'options': [None, 'Channel Churn Rates', 'operations/s', 'overview', 'rabbitmq.channel_churn_rates', 'line'],
- 'lines': [
- ['churn_rates_channel_created_details_rate', 'created', 'absolute'],
- ['churn_rates_channel_closed_details_rate', 'closed', 'absolute']
- ]
- },
- 'queue_churn_rates': {
- 'options': [None, 'Queue Churn Rates', 'operations/s', 'overview', 'rabbitmq.queue_churn_rates', 'line'],
- 'lines': [
- ['churn_rates_queue_created_details_rate', 'created', 'absolute'],
- ['churn_rates_queue_declared_details_rate', 'declared', 'absolute'],
- ['churn_rates_queue_deleted_details_rate', 'deleted', 'absolute']
- ]
- },
- 'queued_messages': {
- 'options': [None, 'Queued Messages', 'messages', 'overview', 'rabbitmq.queued_messages', 'stacked'],
- 'lines': [
- ['queue_totals_messages_ready', 'ready', 'absolute'],
- ['queue_totals_messages_unacknowledged', 'unacknowledged', 'absolute']
- ]
- },
- 'message_rates': {
- 'options': [None, 'Message Rates', 'messages/s', 'overview', 'rabbitmq.message_rates', 'line'],
- 'lines': [
- ['message_stats_ack', 'ack', 'incremental'],
- ['message_stats_redeliver', 'redeliver', 'incremental'],
- ['message_stats_deliver', 'deliver', 'incremental'],
- ['message_stats_publish', 'publish', 'incremental']
- ]
- }
-}
-
-
-def vhost_chart_template(name):
- order = [
- 'vhost_{0}_message_stats'.format(name),
- ]
- family = 'vhost {0}'.format(name)
-
- charts = {
- order[0]: {
- 'options': [
- None, 'Vhost "{0}" Messages'.format(name), 'messages/s', family, 'rabbitmq.vhost_messages', 'stacked'],
- 'lines': [
- ['vhost_{0}_message_stats_ack'.format(name), 'ack', 'incremental'],
- ['vhost_{0}_message_stats_confirm'.format(name), 'confirm', 'incremental'],
- ['vhost_{0}_message_stats_deliver'.format(name), 'deliver', 'incremental'],
- ['vhost_{0}_message_stats_get'.format(name), 'get', 'incremental'],
- ['vhost_{0}_message_stats_get_no_ack'.format(name), 'get_no_ack', 'incremental'],
- ['vhost_{0}_message_stats_publish'.format(name), 'publish', 'incremental'],
- ['vhost_{0}_message_stats_redeliver'.format(name), 'redeliver', 'incremental'],
- ['vhost_{0}_message_stats_return_unroutable'.format(name), 'return_unroutable', 'incremental'],
- ]
- },
- }
-
- return order, charts
-
-
-def queue_chart_template(queue_id):
- vhost, name = queue_id
- order = [
- 'vhost_{0}_queue_{1}_queued_message'.format(vhost, name),
- 'vhost_{0}_queue_{1}_messages_stats'.format(vhost, name),
- ]
- family = 'vhost {0}'.format(vhost)
-
- charts = {
- order[0]: {
- 'options': [
- None, 'Queue "{0}" in "{1}" queued messages'.format(name, vhost), 'messages', family, 'rabbitmq.queue_messages', 'line'],
- 'lines': [
- ['vhost_{0}_queue_{1}_messages'.format(vhost, name), 'messages', 'absolute'],
- ['vhost_{0}_queue_{1}_messages_paged_out'.format(vhost, name), 'paged_out', 'absolute'],
- ['vhost_{0}_queue_{1}_messages_persistent'.format(vhost, name), 'persistent', 'absolute'],
- ['vhost_{0}_queue_{1}_messages_ready'.format(vhost, name), 'ready', 'absolute'],
- ['vhost_{0}_queue_{1}_messages_unacknowledged'.format(vhost, name), 'unack', 'absolute'],
- ]
- },
- order[1]: {
- 'options': [
- None, 'Queue "{0}" in "{1}" messages stats'.format(name, vhost), 'messages/s', family, 'rabbitmq.queue_messages_stats', 'line'],
- 'lines': [
- ['vhost_{0}_queue_{1}_message_stats_ack'.format(vhost, name), 'ack', 'incremental'],
- ['vhost_{0}_queue_{1}_message_stats_confirm'.format(vhost, name), 'confirm', 'incremental'],
- ['vhost_{0}_queue_{1}_message_stats_deliver'.format(vhost, name), 'deliver', 'incremental'],
- ['vhost_{0}_queue_{1}_message_stats_get'.format(vhost, name), 'get', 'incremental'],
- ['vhost_{0}_queue_{1}_message_stats_get_no_ack'.format(vhost, name), 'get_no_ack', 'incremental'],
- ['vhost_{0}_queue_{1}_message_stats_publish'.format(vhost, name), 'publish', 'incremental'],
- ['vhost_{0}_queue_{1}_message_stats_redeliver'.format(vhost, name), 'redeliver', 'incremental'],
- ['vhost_{0}_queue_{1}_message_stats_return_unroutable'.format(vhost, name), 'return_unroutable', 'incremental'],
- ]
- },
- }
-
- return order, charts
-
-
-class VhostStatsBuilder:
- def __init__(self):
- self.stats = None
-
- def set(self, raw_stats):
- self.stats = raw_stats
-
- def name(self):
- return self.stats['name']
-
- def has_msg_stats(self):
- return bool(self.stats.get('message_stats'))
-
- def msg_stats(self):
- name = self.name()
- stats = fetch_data(raw_data=self.stats, metrics=VHOST_MESSAGE_STATS)
- return dict(('vhost_{0}_{1}'.format(name, k), v) for k, v in stats.items())
-
-
-class QueueStatsBuilder:
- def __init__(self):
- self.stats = None
-
- def set(self, raw_stats):
- self.stats = raw_stats
-
- def id(self):
- return self.stats['vhost'], self.stats['name']
-
- def queue_stats(self):
- vhost, name = self.id()
- stats = fetch_data(raw_data=self.stats, metrics=QUEUE_STATS)
- return dict(('vhost_{0}_queue_{1}_{2}'.format(vhost, name, k), v) for k, v in stats.items())
-
-
-class Service(UrlService):
- def __init__(self, configuration=None, name=None):
- UrlService.__init__(self, configuration=configuration, name=name)
- self.order = ORDER
- self.definitions = CHARTS
- self.url = '{0}://{1}:{2}'.format(
- configuration.get('scheme', 'http'),
- configuration.get('host', '127.0.0.1'),
- configuration.get('port', 15672),
- )
- self.node_name = str()
- self.vhost = VhostStatsBuilder()
- self.collected_vhosts = set()
- self.collect_queues_metrics = configuration.get('collect_queues_metrics', False)
- self.debug("collect_queues_metrics is {0}".format("enabled" if self.collect_queues_metrics else "disabled"))
- if self.collect_queues_metrics:
- self.queue = QueueStatsBuilder()
- self.collected_queues = set()
-
- def _get_data(self):
- data = dict()
-
- stats = self.get_overview_stats()
- if not stats:
- return None
-
- data.update(stats)
-
- stats = self.get_nodes_stats()
- if not stats:
- return None
-
- data.update(stats)
-
- stats = self.get_vhosts_stats()
- if stats:
- data.update(stats)
-
- if self.collect_queues_metrics:
- stats = self.get_queues_stats()
- if stats:
- data.update(stats)
-
- return data or None
-
- def get_overview_stats(self):
- url = '{0}/{1}'.format(self.url, API_OVERVIEW)
- self.debug("doing http request to '{0}'".format(url))
- raw = self._get_raw_data(url)
- if not raw:
- return None
-
- data = loads(raw)
- self.node_name = data['node']
- self.debug("found node name: '{0}'".format(self.node_name))
-
- stats = fetch_data(raw_data=data, metrics=OVERVIEW_STATS)
- self.debug("number of metrics: {0}".format(len(stats)))
- return stats
-
- def get_nodes_stats(self):
- if self.node_name == "":
- self.error("trying to get node stats, but node name is not set")
- return None
-
- url = '{0}/{1}/{2}'.format(self.url, API_NODE, self.node_name)
- self.debug("doing http request to '{0}'".format(url))
- raw = self._get_raw_data(url)
- if not raw:
- return None
-
- data = loads(raw)
- stats = fetch_data(raw_data=data, metrics=NODE_STATS)
- handle_disabled_disk_monitoring(stats)
- self.debug("number of metrics: {0}".format(len(stats)))
- return stats
-
- def get_vhosts_stats(self):
- url = '{0}/{1}'.format(self.url, API_VHOSTS)
- self.debug("doing http request to '{0}'".format(url))
- raw = self._get_raw_data(url)
- if not raw:
- return None
-
- data = dict()
- vhosts = loads(raw)
- charts_initialized = len(self.charts) > 0
-
- for vhost in vhosts:
- self.vhost.set(vhost)
- if not self.vhost.has_msg_stats():
- continue
-
- if charts_initialized and self.vhost.name() not in self.collected_vhosts:
- self.collected_vhosts.add(self.vhost.name())
- self.add_vhost_charts(self.vhost.name())
-
- data.update(self.vhost.msg_stats())
-
- self.debug("number of vhosts: {0}, metrics: {1}".format(len(vhosts), len(data)))
- return data
-
- def get_queues_stats(self):
- url = '{0}/{1}'.format(self.url, API_QUEUES)
- self.debug("doing http request to '{0}'".format(url))
- raw = self._get_raw_data(url)
- if not raw:
- return None
-
- data = dict()
- queues = loads(raw)
- charts_initialized = len(self.charts) > 0
-
- for queue in queues:
- self.queue.set(queue)
- if self.queue.id()[0] not in self.collected_vhosts:
- continue
-
- if charts_initialized and self.queue.id() not in self.collected_queues:
- self.collected_queues.add(self.queue.id())
- self.add_queue_charts(self.queue.id())
-
- data.update(self.queue.queue_stats())
-
- self.debug("number of queues: {0}, metrics: {1}".format(len(queues), len(data)))
- return data
-
- def add_vhost_charts(self, vhost_name):
- order, charts = vhost_chart_template(vhost_name)
-
- for chart_name in order:
- params = [chart_name] + charts[chart_name]['options']
- dimensions = charts[chart_name]['lines']
-
- new_chart = self.charts.add_chart(params)
- for dimension in dimensions:
- new_chart.add_dimension(dimension)
-
- def add_queue_charts(self, queue_id):
- order, charts = queue_chart_template(queue_id)
-
- for chart_name in order:
- params = [chart_name] + charts[chart_name]['options']
- dimensions = charts[chart_name]['lines']
-
- new_chart = self.charts.add_chart(params)
- for dimension in dimensions:
- new_chart.add_dimension(dimension)
-
-
-def fetch_data(raw_data, metrics):
- data = dict()
- for metric in metrics:
- value = raw_data
- metrics_list = metric.split('.')
- try:
- for m in metrics_list:
- value = value[m]
- except (KeyError, TypeError):
- continue
- data['_'.join(metrics_list)] = value
-
- return data
-
-
-def handle_disabled_disk_monitoring(node_stats):
- # https://github.com/netdata/netdata/issues/7218
- # can be "disk_free": "disk_free_monitoring_disabled"
- v = node_stats.get('disk_free')
- if v and not isinstance(v, int):
- del node_stats['disk_free']
diff --git a/collectors/python.d.plugin/rabbitmq/rabbitmq.conf b/collectors/python.d.plugin/rabbitmq/rabbitmq.conf
deleted file mode 100644
index 47d47a1bf..000000000
--- a/collectors/python.d.plugin/rabbitmq/rabbitmq.conf
+++ /dev/null
@@ -1,86 +0,0 @@
-# netdata python.d.plugin configuration for rabbitmq
-#
-# This file is in YaML format. Generally the format is:
-#
-# name: value
-#
-# There are 2 sections:
-# - global variables
-# - one or more JOBS
-#
-# JOBS allow you to collect values from multiple sources.
-# Each source will have its own set of charts.
-#
-# JOB parameters have to be indented (using spaces only, example below).
-
-# ----------------------------------------------------------------------
-# Global Variables
-# These variables set the defaults for all JOBs, however each JOB
-# may define its own, overriding the defaults.
-
-# update_every sets the default data collection frequency.
-# If unset, the python.d.plugin default is used.
-# update_every: 1
-
-# priority controls the order of charts at the netdata dashboard.
-# Lower numbers move the charts towards the top of the page.
-# If unset, the default for python.d.plugin is used.
-# priority: 60000
-
-# penalty indicates whether to apply penalty to update_every in case of failures.
-# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
-# penalty: yes
-
-# autodetection_retry sets the job re-check interval in seconds.
-# The job is not deleted if check fails.
-# Attempts to start the job are made once every autodetection_retry.
-# This feature is disabled by default.
-# autodetection_retry: 0
-
-# ----------------------------------------------------------------------
-# JOBS (data collection sources)
-#
-# The default JOBS share the same *name*. JOBS with the same name
-# are mutually exclusive. Only one of them will be allowed running at
-# any time. This allows autodetection to try several alternatives and
-# pick the one that works.
-#
-# Any number of jobs is supported.
-#
-# All python.d.plugin JOBS (for all its modules) support a set of
-# predefined parameters. These are:
-#
-# job_name:
-# name: myname # the JOB's name as it will appear at the
-# # dashboard (by default is the job_name)
-# # JOBs sharing a name are mutually exclusive
-# update_every: 1 # the JOB's data collection frequency
-# priority: 60000 # the JOB's order on the dashboard
-# penalty: yes # the JOB's penalty
-# autodetection_retry: 0 # the JOB's re-check interval in seconds
-#
-# In addition to the above, the rabbitmq plugin also supports the following:
-#
-# host: 'ipaddress' # Server ip address or hostname. Default: 127.0.0.1
-# port: 'port' # Rabbitmq port. Default: 15672
-# scheme: 'scheme' # http or https. Default: http
-#
-# if the URL is password protected, the following are supported:
-#
-# user: 'username'
-# pass: 'password'
-#
-# The rabbitmq plugin can also collect per-queue stats for each vhost. This is
-# disabled by default. Please note that enabling it can induce serious overhead
-# on both netdata and rabbitmq if a lot of queues are configured and used.
-#
-# collect_queues_metrics: 'yes/no'
-#
-# ----------------------------------------------------------------------
-# AUTO-DETECTION JOBS
-# only one of them will run (they have the same name)
-#
-local:
- host: '127.0.0.1'
- user: 'guest'
- pass: 'guest'
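The comments above note that jobs sharing a `name` are mutually exclusive, which lets autodetection try several alternatives and keep whichever connects first. A hypothetical sketch with two candidates for the same broker (the second job's scheme and credentials are illustrative, not defaults):

```yaml
local:
  name: 'local'
  host: '127.0.0.1'
  port: 15672
  user: 'guest'
  pass: 'guest'

local_tls:
  name: 'local'     # same name as above: only one of the two jobs will run
  scheme: 'https'
  host: '127.0.0.1'
  port: 15672
  user: 'netdata'
  pass: 'ChangeThisSuperSecretPassword'
```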