author    Daniel Baumann <daniel.baumann@progress-linux.org> 2024-05-05 12:08:03 +0000
committer Daniel Baumann <daniel.baumann@progress-linux.org> 2024-05-05 12:08:18 +0000
commit    5da14042f70711ea5cf66e034699730335462f66 (patch)
tree      0f6354ccac934ed87a2d555f45be4c831cf92f4a /src/collectors/python.d.plugin/ipfs
parent    Releasing debian version 1.44.3-2. (diff)
download  netdata-5da14042f70711ea5cf66e034699730335462f66.tar.xz
          netdata-5da14042f70711ea5cf66e034699730335462f66.zip

Merging upstream version 1.45.3+dfsg.

Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'src/collectors/python.d.plugin/ipfs')
 src/collectors/python.d.plugin/ipfs/README.md (symlink)  |   1 +
 src/collectors/python.d.plugin/ipfs/integrations/ipfs.md | 203 +++
 src/collectors/python.d.plugin/ipfs/ipfs.chart.py        | 149 +++
 src/collectors/python.d.plugin/ipfs/ipfs.conf            |  82 ++
 src/collectors/python.d.plugin/ipfs/metadata.yaml        | 172 +++
 5 files changed, 607 insertions(+), 0 deletions(-)
diff --git a/src/collectors/python.d.plugin/ipfs/README.md b/src/collectors/python.d.plugin/ipfs/README.md
new file mode 120000
index 000000000..eee6a07b2
--- /dev/null
+++ b/src/collectors/python.d.plugin/ipfs/README.md
@@ -0,0 +1 @@
+integrations/ipfs.md
\ No newline at end of file
diff --git a/src/collectors/python.d.plugin/ipfs/integrations/ipfs.md b/src/collectors/python.d.plugin/ipfs/integrations/ipfs.md
new file mode 100644
index 000000000..6b742d4d8
--- /dev/null
+++ b/src/collectors/python.d.plugin/ipfs/integrations/ipfs.md
@@ -0,0 +1,203 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/python.d.plugin/ipfs/README.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/src/collectors/python.d.plugin/ipfs/metadata.yaml"
+sidebar_label: "IPFS"
+learn_status: "Published"
+learn_rel_path: "Collecting Metrics/Storage, Mount Points and Filesystems"
+most_popular: False
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE COLLECTOR'S metadata.yaml FILE"
+endmeta-->
+
+# IPFS
+
+
+<img src="https://netdata.cloud/img/ipfs.svg" width="150"/>
+
+
+Plugin: python.d.plugin
+Module: ipfs
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Overview
+
+This collector monitors the quality and performance metrics of an IPFS server.
+
+It connects to an HTTP endpoint of the IPFS server to collect the metrics.
+
+This collector is supported on all platforms.
+
+This collector supports collecting metrics from multiple instances of this integration, including remote instances.
+
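+The probe itself is simple. As a minimal standalone sketch of the kind of request the module makes (a hypothetical helper, not the collector itself; the endpoint and field names come from the IPFS HTTP RPC API, which expects POST requests):
+
+```python
+import json
+import urllib.request
+
+def fetch_bandwidth(base_url='http://localhost:5001'):
+    # the IPFS RPC API only accepts POST requests
+    req = urllib.request.Request(base_url + '/api/v0/stats/bw', method='POST')
+    with urllib.request.urlopen(req, timeout=5) as resp:
+        stats = json.loads(resp.read())
+    # RateIn/RateOut are reported in bytes per second
+    return {'in': int(stats['RateIn']), 'out': int(stats['RateOut'])}
+```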
+
+### Default Behavior
+
+#### Auto-Detection
+
+If the endpoint is accessible by the Agent, Netdata will auto-detect it.
+
+#### Limits
+
+Calls to the following endpoints are disabled due to IPFS bugs:
+
+/api/v0/stats/repo (https://github.com/ipfs/go-ipfs/issues/3874)
+/api/v0/pin/ls (https://github.com/ipfs/go-ipfs/issues/7528)
+
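+If you choose to enable `repoapi` or `pinapi` anyway, it is worth timing one of these calls first (a minimal sketch, assuming a local daemon on the default port):
+
+```python
+import time
+import urllib.request
+
+# /api/v0/stats/repo can be very expensive on large repos (see the issues above)
+req = urllib.request.Request('http://localhost:5001/api/v0/stats/repo',
+                             method='POST')
+start = time.monotonic()
+urllib.request.urlopen(req, timeout=120).read()
+print('repo stats took %.1f s' % (time.monotonic() - start))
+```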
+
+#### Performance Impact
+
+The default configuration for this integration is not expected to impose a significant performance impact on the system.
+
+
+## Metrics
+
+Metrics grouped by *scope*.
+
+The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.
+
+
+
+### Per IPFS instance
+
+These metrics refer to the entire monitored application.
+
+This scope has no labels.
+
+Metrics:
+
+| Metric | Dimensions | Unit |
+|:------|:----------|:----|
+| ipfs.bandwidth | in, out | kilobits/s |
+| ipfs.peers | peers | peers |
+| ipfs.repo_size | avail, size | GiB |
+| ipfs.repo_objects | objects, pinned, recursive_pins | objects |
+
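+The bandwidth chart scales the raw byte rates returned by the API. A quick worked example of the conversion the chart applies:
+
+```python
+rate_in = 125000           # RateIn from /api/v0/stats/bw, in bytes per second
+kilobits = rate_in * 8 / 1000
+assert kilobits == 1000.0  # 125,000 B/s is 1,000 kilobits/s
+```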
+
+
+## Alerts
+
+
+The following alerts are available:
+
+| Alert name | On metric | Description |
+|:------------|:----------|:------------|
+| [ ipfs_datastore_usage ](https://github.com/netdata/netdata/blob/master/src/health/health.d/ipfs.conf) | ipfs.repo_size | IPFS datastore utilization |
+
+
+## Setup
+
+### Prerequisites
+
+No action required.
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `python.d/ipfs.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config python.d/ipfs.conf
+```
+
+#### Options
+
+There are 2 sections:
+
+* Global variables
+* One or more JOBS, each of which can define a different instance to monitor.
+
+The following options can be defined globally (priority, penalty, autodetection_retry, update_every), but they can also be set per JOB to override the global values.
+
+Additionally, the following collapsed table contains all the options that can be configured inside a JOB definition.
+
+Every configuration JOB starts with a `job_name` value, which will appear in the dashboard unless a `name` parameter is specified.
+
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| update_every | Sets the default data collection frequency. | 5 | no |
+| priority | Controls the order of charts on the Netdata dashboard. | 60000 | no |
+| autodetection_retry | Sets the job re-check interval in seconds. | 0 | no |
+| penalty | Indicates whether to apply penalty to update_every in case of failures. | yes | no |
+| name | The JOB's name as it will appear on the dashboard (by default it is the job_name) | job_name | no |
+| url | URL to the IPFS API | no | yes |
+| repoapi | Collect repo metrics. | no | no |
+| pinapi | Set status of IPFS pinned object polling. | no | no |
+
+</details>
+
+#### Examples
+
+##### Basic (default out-of-the-box)
+
+A basic example configuration; only one job will run at a time. The auto-detection mechanism uses it by default.
+
+```yaml
+localhost:
+ name: 'local'
+ url: 'http://localhost:5001'
+ repoapi: no
+ pinapi: no
+
+```
+
+##### Multi-instance
+
+> **Note**: When you define multiple jobs, their names must be unique.
+
+Collecting metrics from local and remote instances.
+
+
+<details><summary>Config</summary>
+
+```yaml
+localhost:
+ name: 'local'
+ url: 'http://localhost:5001'
+ repoapi: no
+ pinapi: no
+
+remote_host:
+ name: 'remote'
+ url: 'http://192.0.2.1:5001'
+ repoapi: no
+ pinapi: no
+
+```
+</details>
+
+
+
+## Troubleshooting
+
+### Debug Mode
+
+To troubleshoot issues with the `ipfs` collector, run the `python.d.plugin` with the debug option enabled. The output
+should give you clues as to why the collector isn't working.
+
+- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on
+ your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.
+
+ ```bash
+ cd /usr/libexec/netdata/plugins.d/
+ ```
+
+- Switch to the `netdata` user.
+
+ ```bash
+ sudo -u netdata -s
+ ```
+
+- Run the `python.d.plugin` to debug the collector:
+
+ ```bash
+ ./python.d.plugin ipfs debug trace
+ ```
+
+
diff --git a/src/collectors/python.d.plugin/ipfs/ipfs.chart.py b/src/collectors/python.d.plugin/ipfs/ipfs.chart.py
new file mode 100644
index 000000000..abfc9c492
--- /dev/null
+++ b/src/collectors/python.d.plugin/ipfs/ipfs.chart.py
@@ -0,0 +1,149 @@
+# -*- coding: utf-8 -*-
+# Description: IPFS netdata python.d module
+# Authors: davidak
+# SPDX-License-Identifier: GPL-3.0-or-later
+
+import json
+
+from bases.FrameworkServices.UrlService import UrlService
+
+ORDER = [
+ 'bandwidth',
+ 'peers',
+ 'repo_size',
+ 'repo_objects',
+]
+
+CHARTS = {
+ 'bandwidth': {
+ 'options': [None, 'IPFS Bandwidth', 'kilobits/s', 'Bandwidth', 'ipfs.bandwidth', 'line'],
+ 'lines': [
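+            # multiplier 8 and divisor 1000 turn bytes/s into kilobits/s;
+            # 'out' uses a negative multiplier so it is drawn below the zero axis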
+ ['in', None, 'absolute', 8, 1000],
+ ['out', None, 'absolute', -8, 1000]
+ ]
+ },
+ 'peers': {
+ 'options': [None, 'IPFS Peers', 'peers', 'Peers', 'ipfs.peers', 'line'],
+ 'lines': [
+ ['peers', None, 'absolute']
+ ]
+ },
+ 'repo_size': {
+ 'options': [None, 'IPFS Repo Size', 'GiB', 'Size', 'ipfs.repo_size', 'area'],
+ 'lines': [
+ ['avail', None, 'absolute', 1, 1 << 30],
+ ['size', None, 'absolute', 1, 1 << 30],
+ ]
+ },
+ 'repo_objects': {
+ 'options': [None, 'IPFS Repo Objects', 'objects', 'Objects', 'ipfs.repo_objects', 'line'],
+ 'lines': [
+ ['objects', None, 'absolute', 1, 1],
+ ['pinned', None, 'absolute', 1, 1],
+ ['recursive_pins', None, 'absolute', 1, 1]
+ ]
+ }
+}
+
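+# number of zeroes implied by each SI prefix; used by _dehumanize()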
+SI_zeroes = {
+ 'k': 3,
+ 'm': 6,
+ 'g': 9,
+ 't': 12,
+ 'p': 15,
+ 'e': 18,
+ 'z': 21,
+ 'y': 24
+}
+
+
+class Service(UrlService):
+ def __init__(self, configuration=None, name=None):
+ UrlService.__init__(self, configuration=configuration, name=name)
+ self.order = ORDER
+ self.definitions = CHARTS
+ self.baseurl = self.configuration.get('url', 'http://localhost:5001')
+ self.method = "POST"
+ self.do_pinapi = self.configuration.get('pinapi')
+ self.do_repoapi = self.configuration.get('repoapi')
+ self.__storage_max = None
+
+ def _get_json(self, sub_url):
+ """
+ :return: json decoding of the specified url
+ """
+ self.url = self.baseurl + sub_url
+ try:
+ return json.loads(self._get_raw_data())
+ except (TypeError, ValueError):
+ return dict()
+
+ @staticmethod
+ def _recursive_pins(keys):
+        # json.loads() yields str values, so compare against a str;
+        # a bytes literal here would never match under Python 3
+        return sum(1 for k in keys if keys[k]['Type'] == 'recursive')
+
+ @staticmethod
+ def _dehumanize(store_max):
+ # convert from '10Gb' to 10000000000
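+        # (expects a two-character unit suffix, e.g. 'kb', 'gb', 'tb')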
+ if not isinstance(store_max, int):
+ store_max = store_max.lower()
+ if store_max.endswith('b'):
+ val, units = store_max[:-2], store_max[-2]
+ if units in SI_zeroes:
+ val += '0' * SI_zeroes[units]
+ store_max = val
+ try:
+ store_max = int(store_max)
+ except (TypeError, ValueError):
+ store_max = None
+ return store_max
+
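+    # cache the parsed StorageMax value; it does not change while the daemon runs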
+ def _storagemax(self, store_cfg):
+ if self.__storage_max is None:
+ self.__storage_max = self._dehumanize(store_cfg)
+ return self.__storage_max
+
+ def _get_data(self):
+ """
+ Get data from API
+ :return: dict
+ """
+ # suburl : List of (result-key, original-key, transform-func)
+ cfg = {
+ '/api/v0/stats/bw':
+ [
+ ('in', 'RateIn', int),
+ ('out', 'RateOut', int),
+ ],
+ '/api/v0/swarm/peers':
+ [
+ ('peers', 'Peers', len),
+ ],
+ }
+ if self.do_repoapi:
+ cfg.update({
+ '/api/v0/stats/repo':
+ [
+ ('size', 'RepoSize', int),
+ ('objects', 'NumObjects', int),
+ ('avail', 'StorageMax', self._storagemax),
+ ],
+ })
+
+ if self.do_pinapi:
+ cfg.update({
+ '/api/v0/pin/ls':
+ [
+ ('pinned', 'Keys', len),
+ ('recursive_pins', 'Keys', self._recursive_pins),
+ ]
+ })
+ r = dict()
+ for suburl in cfg:
+ in_json = self._get_json(suburl)
+ for new_key, orig_key, xmute in cfg[suburl]:
+ try:
+ r[new_key] = xmute(in_json[orig_key])
+ except Exception as error:
+ self.debug(error)
+ return r or None
diff --git a/src/collectors/python.d.plugin/ipfs/ipfs.conf b/src/collectors/python.d.plugin/ipfs/ipfs.conf
new file mode 100644
index 000000000..8b167b399
--- /dev/null
+++ b/src/collectors/python.d.plugin/ipfs/ipfs.conf
@@ -0,0 +1,82 @@
+# netdata python.d.plugin configuration for ipfs
+#
+# This file is in YAML format. Generally the format is:
+#
+# name: value
+#
+# There are 2 sections:
+# - global variables
+# - one or more JOBS
+#
+# JOBS allow you to collect values from multiple sources.
+# Each source will have its own set of charts.
+#
+# JOB parameters have to be indented (using spaces only, example below).
+
+# ----------------------------------------------------------------------
+# Global Variables
+# These variables set the defaults for all JOBs, however each JOB
+# may define its own, overriding the defaults.
+
+# update_every sets the default data collection frequency.
+# If unset, the python.d.plugin default is used.
+# update_every: 1
+
+# priority controls the order of charts at the netdata dashboard.
+# Lower numbers move the charts towards the top of the page.
+# If unset, the default for python.d.plugin is used.
+# priority: 60000
+
+# penalty indicates whether to apply penalty to update_every in case of failures.
+# Penalty will increase every 5 failed updates in a row. Maximum penalty is 10 minutes.
+# penalty: yes
+
+# autodetection_retry sets the job re-check interval in seconds.
+# The job is not deleted if check fails.
+# Attempts to start the job are made once every autodetection_retry.
+# This feature is disabled by default.
+# autodetection_retry: 0
+
+# ----------------------------------------------------------------------
+# JOBS (data collection sources)
+#
+# The default JOBS share the same *name*. JOBS with the same name
+# are mutually exclusive. Only one of them will be allowed to run at
+# any time. This allows autodetection to try several alternatives and
+# pick the one that works.
+#
+# Any number of jobs is supported.
+#
+# All python.d.plugin JOBS (for all its modules) support a set of
+# predefined parameters. These are:
+#
+# job_name:
+#     name: myname            # the JOB's name as it will appear at the
+#                             # dashboard (by default is the job_name)
+#                             # JOBs sharing a name are mutually exclusive
+#     update_every: 1         # the JOB's data collection frequency
+#     priority: 60000         # the JOB's order on the dashboard
+#     penalty: yes            # the JOB's penalty
+#     autodetection_retry: 0  # the JOB's re-check interval in seconds
+#
+# In addition to the above, ipfs also supports the following:
+#
+#     url: 'URL'              # URL to the IPFS API
+#     repoapi: no             # Collect repo metrics
+#                             # Currently defaults to disabled due to IPFS Bug
+#                             # https://github.com/ipfs/go-ipfs/issues/7528
+#                             # resulting in very high CPU Usage
+#     pinapi: no              # Set status of IPFS pinned object polling
+#                             # Currently defaults to disabled due to IPFS Bug
+#                             # https://github.com/ipfs/go-ipfs/issues/3874
+#                             # resulting in very high CPU Usage
+#
+# ----------------------------------------------------------------------
+# AUTO-DETECTION JOBS
+# only one of them will run (they have the same name)
+
+localhost:
+ name: 'local'
+ url: 'http://localhost:5001'
+ repoapi: no
+ pinapi: no
diff --git a/src/collectors/python.d.plugin/ipfs/metadata.yaml b/src/collectors/python.d.plugin/ipfs/metadata.yaml
new file mode 100644
index 000000000..55c39e31e
--- /dev/null
+++ b/src/collectors/python.d.plugin/ipfs/metadata.yaml
@@ -0,0 +1,172 @@
+plugin_name: python.d.plugin
+modules:
+ - meta:
+ plugin_name: python.d.plugin
+ module_name: ipfs
+ monitored_instance:
+ name: IPFS
+ link: "https://ipfs.tech/"
+ categories:
+ - data-collection.storage-mount-points-and-filesystems
+ icon_filename: "ipfs.svg"
+ related_resources:
+ integrations:
+ list: []
+ info_provided_to_referring_integrations:
+ description: ""
+ keywords: []
+ most_popular: false
+ overview:
+ data_collection:
+        metrics_description: "This collector monitors the quality and performance metrics of an IPFS server."
+        method_description: "It connects to an HTTP endpoint of the IPFS server to collect the metrics."
+ supported_platforms:
+ include: []
+ exclude: []
+ multi_instance: true
+ additional_permissions:
+ description: ""
+ default_behavior:
+ auto_detection:
+ description: "If the endpoint is accessible by the Agent, netdata will autodetect it"
+ limits:
+ description: |
+ Calls to the following endpoints are disabled due to IPFS bugs:
+
+ /api/v0/stats/repo (https://github.com/ipfs/go-ipfs/issues/3874)
+ /api/v0/pin/ls (https://github.com/ipfs/go-ipfs/issues/7528)
+ performance_impact:
+ description: ""
+ setup:
+ prerequisites:
+ list: []
+ configuration:
+ file:
+ name: "python.d/ipfs.conf"
+ options:
+ description: |
+ There are 2 sections:
+
+ * Global variables
+          * One or more JOBS, each of which can define a different instance to monitor.
+
+          The following options can be defined globally (priority, penalty, autodetection_retry, update_every), but they can also be set per JOB to override the global values.
+
+ Additionally, the following collapsed table contains all the options that can be configured inside a JOB definition.
+
+          Every configuration JOB starts with a `job_name` value, which will appear in the dashboard unless a `name` parameter is specified.
+ folding:
+ title: ""
+ enabled: true
+ list:
+ - name: update_every
+ description: Sets the default data collection frequency.
+ default_value: 5
+ required: false
+ - name: priority
+          description: Controls the order of charts on the Netdata dashboard.
+ default_value: 60000
+ required: false
+ - name: autodetection_retry
+ description: Sets the job re-check interval in seconds.
+ default_value: 0
+ required: false
+ - name: penalty
+ description: Indicates whether to apply penalty to update_every in case of failures.
+ default_value: yes
+ required: false
+ - name: name
+          description: The JOB's name as it will appear on the dashboard (by default it is the job_name)
+ default_value: job_name
+ required: false
+ - name: url
+ description: URL to the IPFS API
+ default_value: no
+ required: true
+ - name: repoapi
+ description: Collect repo metrics.
+ default_value: no
+ required: false
+ - name: pinapi
+ description: Set status of IPFS pinned object polling.
+ default_value: no
+ required: false
+ examples:
+ folding:
+ enabled: true
+ title: "Config"
+ list:
+ - name: Basic (default out-of-the-box)
+          description: A basic example configuration; only one job will run at a time. The auto-detection mechanism uses it by default.
+ folding:
+ enabled: false
+ config: |
+ localhost:
+ name: 'local'
+ url: 'http://localhost:5001'
+ repoapi: no
+ pinapi: no
+ - name: Multi-instance
+ description: |
+ > **Note**: When you define multiple jobs, their names must be unique.
+
+ Collecting metrics from local and remote instances.
+ config: |
+ localhost:
+ name: 'local'
+ url: 'http://localhost:5001'
+ repoapi: no
+ pinapi: no
+
+ remote_host:
+ name: 'remote'
+ url: 'http://192.0.2.1:5001'
+ repoapi: no
+ pinapi: no
+ troubleshooting:
+ problems:
+ list: []
+ alerts:
+ - name: ipfs_datastore_usage
+ link: https://github.com/netdata/netdata/blob/master/src/health/health.d/ipfs.conf
+ metric: ipfs.repo_size
+ info: IPFS datastore utilization
+ metrics:
+ folding:
+ title: Metrics
+ enabled: false
+ description: ""
+ availability: []
+ scopes:
+ - name: global
+ description: "These metrics refer to the entire monitored application."
+ labels: []
+ metrics:
+ - name: ipfs.bandwidth
+ description: IPFS Bandwidth
+ unit: "kilobits/s"
+ chart_type: line
+ dimensions:
+ - name: in
+ - name: out
+ - name: ipfs.peers
+ description: IPFS Peers
+ unit: "peers"
+ chart_type: line
+ dimensions:
+ - name: peers
+ - name: ipfs.repo_size
+ description: IPFS Repo Size
+ unit: "GiB"
+ chart_type: area
+ dimensions:
+ - name: avail
+ - name: size
+ - name: ipfs.repo_objects
+ description: IPFS Repo Objects
+ unit: "objects"
+ chart_type: line
+ dimensions:
+ - name: objects
+ - name: pinned
+ - name: recursive_pins