From 386ccdd61e8256c8b21ee27ee2fc12438fc5ca98 Mon Sep 17 00:00:00 2001
From: Daniel Baumann
Date: Tue, 17 Oct 2023 11:30:20 +0200
Subject: Adding upstream version 1.43.0.

Signed-off-by: Daniel Baumann
---
 .../python.d.plugin/ipfs/integrations/ipfs.md | 202 +++++++++++++++++++++
 1 file changed, 202 insertions(+)
 create mode 100644 collectors/python.d.plugin/ipfs/integrations/ipfs.md

diff --git a/collectors/python.d.plugin/ipfs/integrations/ipfs.md b/collectors/python.d.plugin/ipfs/integrations/ipfs.md
new file mode 100644
index 000000000..c43c27b34
--- /dev/null
+++ b/collectors/python.d.plugin/ipfs/integrations/ipfs.md
@@ -0,0 +1,202 @@

# IPFS

Plugin: python.d.plugin
Module: ipfs

## Overview

This collector monitors an IPFS server, collecting metrics about its quality and performance.

It connects to an HTTP endpoint of the IPFS server to collect the metrics.

This collector is supported on all platforms.

This collector supports collecting metrics from multiple instances of this integration, including remote instances.

### Default Behavior

#### Auto-Detection

If the endpoint is accessible by the Agent, Netdata will auto-detect it.

#### Limits

Calls to the following endpoints are disabled due to IPFS bugs:

- `/api/v0/stats/repo` (https://github.com/ipfs/go-ipfs/issues/3874)
- `/api/v0/pin/ls` (https://github.com/ipfs/go-ipfs/issues/7528)

#### Performance Impact

The default configuration for this integration is not expected to impose a significant performance impact on the system.

## Metrics

Metrics are grouped by *scope*.

The scope defines the instance that the metric belongs to. An instance is uniquely identified by a set of labels.

### Per IPFS instance

These metrics refer to the entire monitored application.

This scope has no labels.
Metrics:

| Metric | Dimensions | Unit |
|:------|:----------|:----|
| ipfs.bandwidth | in, out | kilobits/s |
| ipfs.peers | peers | peers |
| ipfs.repo_size | avail, size | GiB |
| ipfs.repo_objects | objects, pinned, recursive_pins | objects |

## Alerts

The following alerts are available:

| Alert name | On metric | Description |
|:------------|:----------|:------------|
| [ ipfs_datastore_usage ](https://github.com/netdata/netdata/blob/master/health/health.d/ipfs.conf) | ipfs.repo_size | IPFS datastore utilization |

## Setup

### Prerequisites

No action required.

### Configuration

#### File

The configuration file name for this integration is `python.d/ipfs.conf`.

You can edit the configuration file using the `edit-config` script from the
Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).

```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
sudo ./edit-config python.d/ipfs.conf
```

#### Options

There are two sections:

* Global variables
* One or more JOBs, each defining a different instance to monitor.

The following options can be defined globally (priority, penalty, autodetection_retry, update_every), but can also be defined per JOB to override the global values.

Additionally, the following table contains all the options that can be configured inside a JOB definition.

Every configuration JOB starts with a `job_name` value, which will appear on the dashboard unless a `name` parameter is specified.
| Name | Description | Default | Required |
|:----|:-----------|:-------|:--------:|
| update_every | Sets the default data collection frequency. | 5 | False |
| priority | Controls the order of charts at the Netdata dashboard. | 60000 | False |
| autodetection_retry | Sets the job re-check interval in seconds. | 0 | False |
| penalty | Indicates whether to apply penalty to update_every in case of failures. | yes | False |
| name | The JOB's name as it will appear on the dashboard (by default, the job_name). | job_name | False |
| url | URL to the IPFS API. | no | True |
| repoapi | Collect repo metrics. | no | False |
| pinapi | Set status of IPFS pinned object polling. | no | False |
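As an illustration of the global-versus-JOB override behavior described above, a sketch of a `python.d/ipfs.conf` fragment (the job name, host, and values are hypothetical):

```yaml
# Global options: apply to every job unless overridden per JOB
update_every: 5
priority: 60000

# A JOB definition; the top-level key ("my_ipfs") is the job_name
my_ipfs:
  url: 'http://localhost:5001'   # required
  update_every: 10               # overrides the global value for this job only
```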
#### Examples

##### Basic (default out-of-the-box)

A basic example configuration; only one job will run at a time. The auto-detection mechanism uses it by default.

```yaml
localhost:
  name: 'local'
  url: 'http://localhost:5001'
  repoapi: no
  pinapi: no
```

##### Multi-instance

> **Note**: When you define multiple jobs, their names must be unique.

Collecting metrics from local and remote instances.
```yaml
localhost:
  name: 'local'
  url: 'http://localhost:5001'
  repoapi: no
  pinapi: no

remote_host:
  name: 'remote'
  url: 'http://192.0.2.1:5001'
  repoapi: no
  pinapi: no
```
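Note that the `ipfs.bandwidth` chart reports kilobits/s, while the `/api/v0/stats/bw` endpoint that go-ipfs exposes for bandwidth statistics reports byte rates. A minimal sketch of that conversion, using a made-up sample response (field names follow the go-ipfs HTTP API; the values are invented):

```python
import json

# Hypothetical sample of the JSON returned by the /api/v0/stats/bw
# endpoint (field names per the go-ipfs HTTP API; values are made up).
SAMPLE = '{"TotalIn": 52428800, "TotalOut": 10485760, "RateIn": 125000.0, "RateOut": 62500.0}'


def bandwidth_kilobits(payload: str) -> dict:
    """Convert the byte/s rates reported by IPFS into the kilobits/s
    unit used by the ipfs.bandwidth chart."""
    stats = json.loads(payload)

    def to_kilobits(bytes_per_s):
        # 1 byte/s = 8 bits/s; divide by 1000 for kilobits/s
        return bytes_per_s * 8 / 1000

    return {"in": to_kilobits(stats["RateIn"]), "out": to_kilobits(stats["RateOut"])}


print(bandwidth_kilobits(SAMPLE))  # {'in': 1000.0, 'out': 500.0}
```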
## Troubleshooting

### Debug Mode

To troubleshoot issues with the `ipfs` collector, run the `python.d.plugin` with the debug option enabled. The output
should give you clues as to why the collector isn't working.

- Navigate to the `plugins.d` directory, usually at `/usr/libexec/netdata/plugins.d/`. If that's not the case on
  your system, open `netdata.conf` and look for the `plugins` setting under `[directories]`.

  ```bash
  cd /usr/libexec/netdata/plugins.d/
  ```

- Switch to the `netdata` user.

  ```bash
  sudo -u netdata -s
  ```

- Run the `python.d.plugin` to debug the collector:

  ```bash
  ./python.d.plugin ipfs debug trace
  ```
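If the debug output points to connection problems, it can help to check the IPFS API endpoint independently of Netdata. A small standalone sketch, assuming the default `http://localhost:5001` URL from the examples above (`/api/v0/id` is a core go-ipfs endpoint; recent go-ipfs versions require POST for API calls):

```python
import json
import urllib.request


def check_ipfs_endpoint(url="http://localhost:5001"):
    """Return the node's peer ID if the IPFS API answers, else None."""
    req = urllib.request.Request(f"{url}/api/v0/id", method="POST")
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return json.loads(resp.read())["ID"]
    except OSError:
        # Covers connection refused, timeouts, and HTTP errors
        return None


if __name__ == "__main__":
    node_id = check_ipfs_endpoint()
    print(node_id if node_id else "IPFS API endpoint not reachable")
```

If this returns `None` while the daemon is running, check the `url` in your job definition and any firewall rules between the Agent and the IPFS server.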