author    Daniel Baumann <daniel.baumann@progress-linux.org> 2024-11-25 17:33:56 +0000
committer Daniel Baumann <daniel.baumann@progress-linux.org> 2024-11-25 17:34:10 +0000
commit    83ba6762cc43d9db581b979bb5e3445669e46cc2 (patch)
tree      2e69833b43f791ed253a7a20318b767ebe56cdb8 /src/exporting/README.md
parent    Releasing debian version 1.47.5-1. (diff)
Merging upstream version 2.0.3+dfsg (Closes: #923993, #1042533, #1045145).
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'src/exporting/README.md')
-rw-r--r--  src/exporting/README.md  102
1 file changed, 45 insertions, 57 deletions
diff --git a/src/exporting/README.md b/src/exporting/README.md
index 83b391f72..a626ee66b 100644
--- a/src/exporting/README.md
+++ b/src/exporting/README.md
@@ -1,13 +1,3 @@
-<!--
-title: "Exporting reference"
-description: "With the exporting engine, you can archive your Netdata metrics to multiple external databases for long-term storage or further analysis."
-sidebar_label: "Export"
-custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/exporting/README.md"
-learn_status: "Published"
-learn_rel_path: "Integrations/Export"
-learn_doc_purpose: "Explain the exporting engine options and all of our the exporting connectors options"
--->
-
# Exporting reference
Welcome to the exporting engine reference guide. This guide contains comprehensive information about enabling,
@@ -18,7 +8,7 @@ For a quick introduction to the exporting engine's features, read our doc on [ex
databases](/docs/exporting-metrics/README.md), or jump in to [enabling a connector](/docs/exporting-metrics/enable-an-exporting-connector.md).
The exporting engine has a modular structure and supports metric exporting via multiple exporting connector instances at
-the same time. You can have different update intervals and filters configured for every exporting connector instance.
+the same time. You can have different update intervals and filters configured for every exporting connector instance.
When you enable the exporting engine and a connector, the Netdata Agent exports metrics _beginning from the time you
restart its process_, not the entire [database of long-term metrics](/docs/netdata-agent/configuration/optimizing-metrics-database/change-metrics-storage.md).
@@ -37,24 +27,24 @@ The exporting engine uses a number of connectors to send Netdata metrics to exte
[list of supported databases](/docs/exporting-metrics/README.md#supported-databases) for information on which
connector to enable and configure for your database of choice.
-- [**AWS Kinesis Data Streams**](/src/exporting/aws_kinesis/README.md): Metrics are sent to the service in `JSON`
+- [**AWS Kinesis Data Streams**](/src/exporting/aws_kinesis/README.md): Metrics are sent to the service in `JSON`
format.
-- [**Google Cloud Pub/Sub Service**](/src/exporting/pubsub/README.md): Metrics are sent to the service in `JSON`
+- [**Google Cloud Pub/Sub Service**](/src/exporting/pubsub/README.md): Metrics are sent to the service in `JSON`
format.
-- [**Graphite**](/src/exporting/graphite/README.md): A plaintext interface. Metrics are sent to the database server as
+- [**Graphite**](/src/exporting/graphite/README.md): A plaintext interface. Metrics are sent to the database server as
`prefix.hostname.chart.dimension`. `prefix` is configured below, and `hostname` is the hostname of the machine (can
also be configured). Learn more in our guide to [export and visualize Netdata metrics in
Graphite](/src/exporting/graphite/README.md).
-- [**JSON** document databases](/src/exporting/json/README.md)
-- [**OpenTSDB**](/src/exporting/opentsdb/README.md): Use a plaintext or HTTP interfaces. Metrics are sent to
+- [**JSON** document databases](/src/exporting/json/README.md)
+- [**OpenTSDB**](/src/exporting/opentsdb/README.md): Use the plaintext or HTTP interface. Metrics are sent to
OpenTSDB as `prefix.chart.dimension` with tag `host=hostname`.
-- [**MongoDB**](/src/exporting/mongodb/README.md): Metrics are sent to the database in `JSON` format.
-- [**Prometheus**](/src/exporting/prometheus/README.md): Use an existing Prometheus installation to scrape metrics
+- [**MongoDB**](/src/exporting/mongodb/README.md): Metrics are sent to the database in `JSON` format.
+- [**Prometheus**](/src/exporting/prometheus/README.md): Use an existing Prometheus installation to scrape metrics
from the node using the Netdata API.
-- [**Prometheus remote write**](/src/exporting/prometheus/remote_write/README.md). A binary snappy-compressed protocol
+- [**Prometheus remote write**](/src/exporting/prometheus/remote_write/README.md): A binary snappy-compressed protocol
buffer encoding over HTTP. Supports many [storage
providers](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage).
-- [**TimescaleDB**](/src/exporting/TIMESCALE.md): Use a community-built connector that takes JSON streams from a
+- [**TimescaleDB**](/src/exporting/TIMESCALE.md): Use a community-built connector that takes JSON streams from a
Netdata client and writes them to a TimescaleDB table.
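
To make the naming schemes above concrete, here is a rough sketch of what exported samples could look like on the wire. The hostname (`myhost`), chart (`system.cpu`), dimension (`user`), value, timestamp, and the `netdata` prefix are all placeholders:

```text
# Graphite plaintext format: prefix.hostname.chart.dimension value timestamp
netdata.myhost.system.cpu.user 12.3 1700000000

# OpenTSDB telnet format: prefix.chart.dimension with a host tag
put netdata.system.cpu.user 1700000000 12.3 host=myhost
```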
### Chart filtering
@@ -62,14 +52,14 @@ connector to enable and configure for your database of choice.
Netdata can filter metrics to send only a subset of the collected metrics. You can use the
configuration file
-```txt
+```text
[prometheus:exporter]
send charts matching = system.*
```
or the URL parameter `filter` in the `allmetrics` API call.
-```txt
+```text
http://localhost:19999/api/v1/allmetrics?format=shell&filter=system.*
```
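
Patterns can also be negated with `!`, as described for `send charts matching` under [Options](#options). For instance, a hypothetical filter that exports every `system.*` chart except `system.uptime` could be written as:

```text
[prometheus:exporter]
send charts matching = !system.uptime system.*
```

The order matters: the first pattern that matches a chart is applied, whether it is positive or negative.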
@@ -77,17 +67,17 @@ http://localhost:19999/api/v1/allmetrics?format=shell&filter=system.*
Netdata supports three modes of operation for all exporting connectors:
-- `as-collected` sends to external databases the metrics as they are collected, in the units they are collected.
+- `as-collected` sends to external databases the metrics as they are collected, in the units they are collected.
So, counters are sent as counters and gauges are sent as gauges, much like all data collectors do. For example,
to calculate CPU utilization in this format, you need to know how to convert kernel ticks to percentage.
-- `average` sends to external databases normalized metrics from the Netdata database. In this mode, all metrics
+- `average` sends to external databases normalized metrics from the Netdata database. In this mode, all metrics
are sent as gauges, in the units Netdata uses. This abstracts data collection and simplifies visualization, but
you will not be able to copy and paste queries from other sources to convert units. For example, CPU utilization
percentage is calculated by Netdata, so Netdata will convert ticks to percentage and send the average percentage
to the external database.
-- `sum` or `volume`: the sum of the interpolated values shown on the Netdata graphs is sent to the external
+- `sum` or `volume`: the sum of the interpolated values shown on the Netdata graphs is sent to the external
database. So, if Netdata is configured to send data to the database every 10 seconds, the sum of the 10 values
shown on the Netdata charts will be used.
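
The mode is selected per exporting connector instance with the `data source` option described under [Options](#options). A minimal sketch, reusing the `[graphite:my_graphite_instance]` block name from the configuration example below:

```text
[graphite:my_graphite_instance]
data source = average
```

For the Prometheus exporter API, the equivalent is the `source` URL parameter shown in the `allmetrics` example under [Sections](#sections).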
@@ -102,7 +92,7 @@ see in Netdata, which is not necessarily true for the other modes of operation.
### Independent operation
-This code is smart enough, not to slow down Netdata, independently of the speed of the external database server.
+The exporting engine is designed not to slow down Netdata, regardless of the speed of the external database server.
> ❗ You should keep in mind though that many exporting connector instances can consume a lot of CPU resources if they
> run their batches at the same time. You can set different update intervals for every exporting connector instance,
@@ -111,12 +101,12 @@ This code is smart enough, not to slow down Netdata, independently of the speed
## Configuration
Here are the configuration blocks for every supported connector. Your current `exporting.conf` file may look a little
-different.
+different.
You can configure each connector individually using the available [options](#options). The
`[graphite:my_graphite_instance]` block contains examples of some of these additional options in action.
-```conf
+```text
[exporting:global]
enabled = yes
send configured labels = no
@@ -192,23 +182,23 @@ You can configure each connector individually using the available [options](#opt
### Sections
-- `[exporting:global]` is a section where you can set your defaults for all exporting connectors
-- `[prometheus:exporter]` defines settings for Prometheus exporter API queries (e.g.:
+- `[exporting:global]` is a section where you can set your defaults for all exporting connectors
+- `[prometheus:exporter]` defines settings for Prometheus exporter API queries (e.g.:
`http://NODE:19999/api/v1/allmetrics?format=prometheus&help=yes&source=as-collected`).
-- `[<type>:<name>]` keeps settings for a particular exporting connector instance, where:
- - `type` selects the exporting connector type: graphite | opentsdb:telnet | opentsdb:http |
+- `[<type>:<name>]` keeps settings for a particular exporting connector instance, where:
+  - `type` selects the exporting connector type: graphite | opentsdb:telnet | opentsdb:http |
prometheus_remote_write | json | kinesis | pubsub | mongodb. For graphite, opentsdb,
json, and prometheus_remote_write connectors you can also use `:http` or `:https` modifiers
(e.g.: `opentsdb:https`).
- - `name` can be arbitrary instance name you chose.
+  - `name` can be an arbitrary instance name of your choice.
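
As an illustration, the type, an optional `:http`/`:https` modifier, and the instance name combine into section headers like the following sketch (the instance names are hypothetical):

```text
[graphite:my_graphite_instance]
enabled = yes

[opentsdb:https:my_opentsdb_instance]
enabled = yes
```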
### Options
Configure individual connectors and override any global settings with the following options.
-- `enabled = yes | no`, enables or disables an exporting connector instance
+- `enabled = yes | no`, enables or disables an exporting connector instance
-- `destination = host1 host2 host3 ...`, accepts **a space separated list** of hostnames, IPs (IPv4 and IPv6) and
+- `destination = host1 host2 host3 ...`, accepts **a space separated list** of hostnames, IPs (IPv4 and IPv6) and
ports to connect to. Netdata will use the **first available** to send the metrics.
The format of each item in this list is: `[PROTOCOL:]IP[:PORT]`.
@@ -223,13 +213,13 @@ Configure individual connectors and override any global settings with the follow
Example IPv4:
-```conf
+```text
destination = 10.11.14.2:4242 10.11.14.3:4242 10.11.14.4:4242
```
Example IPv6 and IPv4 together:
-```conf
+```text
destination = [ffff:...:0001]:2003 10.11.12.1:2003
```
@@ -246,48 +236,48 @@ Configure individual connectors and override any global settings with the follow
For the Pub/Sub exporting connector, `destination` can be set to a specific service endpoint.
-- `data source = as collected`, or `data source = average`, or `data source = sum`, selects the kind of data that will
+- `data source = as collected`, or `data source = average`, or `data source = sum`, selects the kind of data that will
be sent to the external database.
-- `hostname = my-name`, is the hostname to be used for sending data to the external database server. By default this
+- `hostname = my-name`, is the hostname to be used for sending data to the external database server. By default this
is `[global].hostname`.
-- `prefix = Netdata`, is the prefix to add to all metrics.
+- `prefix = Netdata`, is the prefix to add to all metrics.
-- `update every = 10`, is the number of seconds between sending data to the external database. Netdata will add some
+- `update every = 10`, is the number of seconds between sending data to the external database. Netdata will add some
randomness to this number, to prevent stressing the external server when many Netdata servers send data to the same
database. This randomness does not affect the quality of the data, only the time they are sent.
-- `buffer on failures = 10`, is the number of iterations (each iteration is `update every` seconds) to buffer data,
+- `buffer on failures = 10`, is the number of iterations (each iteration is `update every` seconds) to buffer data,
when the external database server is not available. If the server fails to receive the data after that many
failures, data loss on the connector instance is expected (Netdata will also log it).
-- `timeout ms = 20000`, is the timeout in milliseconds to wait for the external database server to process the data.
+- `timeout ms = 20000`, is the timeout in milliseconds to wait for the external database server to process the data.
By default this is `2 * update_every * 1000`.
-- `send hosts matching = localhost *` includes one or more space separated patterns, using `*` as wildcard (any number
+- `send hosts matching = localhost *` includes one or more space separated patterns, using `*` as wildcard (any number
of times within each pattern). The patterns are checked against the hostname (the localhost is always checked as
`localhost`), allowing us to filter which hosts will be sent to the external database when this Netdata is a central
Netdata aggregating multiple hosts. A pattern starting with `!` gives a negative match. So to match all hosts named
`*db*` except hosts containing `*child*`, use `!*child* *db*` (so, the order is important: the first
pattern matching the hostname will be used - positive or negative).
-- `send charts matching = *` includes one or more space separated patterns, using `*` as wildcard (any number of times
+- `send charts matching = *` includes one or more space separated patterns, using `*` as wildcard (any number of times
within each pattern). The patterns are checked against both chart id and chart name. A pattern starting with `!`
gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`, use `!*reads
apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used -
positive or negative). There is also a URL parameter `filter` that can be used while querying `allmetrics`. The URL
parameter has a higher priority than the configuration option.
-- `send names instead of ids = yes | no` controls the metric names Netdata should send to the external database.
+- `send names instead of ids = yes | no` controls the metric names Netdata should send to the external database.
Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system
and names are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several
cases they are different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
-- `send configured labels = yes | no` controls if host labels defined in the `[host labels]` section in `netdata.conf`
+- `send configured labels = yes | no` controls if host labels defined in the `[host labels]` section in `netdata.conf`
should be sent to the external database.
-- `send automatic labels = yes | no` controls if automatically created labels, like `_os_name` or `_architecture`
+- `send automatic labels = yes | no` controls if automatically created labels, like `_os_name` or `_architecture`,
should be sent to the external database.
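
Putting several of these options together, a hypothetical Graphite instance that exports averaged `system.*` metrics for the local host only could look like the following sketch; the destination and patterns are placeholders to adapt to your setup:

```text
[graphite:my_graphite_instance]
enabled = yes
destination = localhost:2003
data source = average
hostname = my-name
prefix = netdata
update every = 10
buffer on failures = 10
timeout ms = 20000
send hosts matching = localhost
send charts matching = system.*
send names instead of ids = yes
send configured labels = no
send automatic labels = no
```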
## HTTPS
@@ -302,14 +292,14 @@ HTTPS communication between Netdata and an external database. You can set up a r
Netdata creates four charts in the dashboard, under the **Netdata Monitoring** section, to help you monitor the health
and performance of the exporting engine itself:
-1. **Buffered metrics**, the number of metrics Netdata added to the buffer for dispatching them to the
+1. **Buffered metrics**, the number of metrics Netdata added to the buffer for dispatching them to the
external database server.
-2. **Exporting data size**, the amount of data (in KB) Netdata added the buffer.
+2. **Exporting data size**, the amount of data (in KB) Netdata added to the buffer.
-3. **Exporting operations**, the number of operations performed by Netdata.
+3. **Exporting operations**, the number of operations performed by Netdata.
-4. **Exporting thread CPU usage**, the CPU resources consumed by the Netdata thread, that is responsible for sending
+4. **Exporting thread CPU usage**, the CPU resources consumed by the Netdata thread that is responsible for sending
the metrics to the external database server.
![image](https://cloud.githubusercontent.com/assets/2662304/20463536/eb196084-af3d-11e6-8ee5-ddbd3b4d8449.png)
@@ -318,10 +308,8 @@ and performance of the exporting engine itself:
Netdata adds 3 alerts:
-1. `exporting_last_buffering`, number of seconds since the last successful buffering of exported data
-2. `exporting_metrics_sent`, percentage of metrics sent to the external database server
-3. `exporting_metrics_lost`, number of metrics lost due to repeating failures to contact the external database server
+1. `exporting_last_buffering`, number of seconds since the last successful buffering of exported data
+2. `exporting_metrics_sent`, percentage of metrics sent to the external database server
+3. `exporting_metrics_lost`, number of metrics lost due to repeated failures to contact the external database server
![image](https://cloud.githubusercontent.com/assets/2662304/20463779/a46ed1c2-af43-11e6-91a5-07ca4533cac3.png)
-
-