author    Daniel Baumann <daniel.baumann@progress-linux.org>  2023-10-17 09:30:23 +0000
committer Daniel Baumann <daniel.baumann@progress-linux.org>  2023-10-17 09:30:23 +0000
commit    517a443636daa1e8085cb4e5325524a54e8a8fd7 (patch)
tree      5352109cc7cd5122274ab0cfc1f887b685f04edf /exporting/prometheus
parent    Releasing debian version 1.42.4-1. (diff)
download  netdata-517a443636daa1e8085cb4e5325524a54e8a8fd7.tar.xz
          netdata-517a443636daa1e8085cb4e5325524a54e8a8fd7.zip
Merging upstream version 1.43.0.
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'exporting/prometheus')
-rw-r--r--   exporting/prometheus/README.md | 2
-rw-r--r--   exporting/prometheus/integrations/appoptics.md | 158
-rw-r--r--   exporting/prometheus/integrations/azure_data_explorer.md | 158
-rw-r--r--   exporting/prometheus/integrations/azure_event_hub.md | 158
-rw-r--r--   exporting/prometheus/integrations/chronix.md | 158
-rw-r--r--   exporting/prometheus/integrations/cortex.md | 158
-rw-r--r--   exporting/prometheus/integrations/cratedb.md | 158
-rw-r--r--   exporting/prometheus/integrations/elasticsearch.md | 158
-rw-r--r--   exporting/prometheus/integrations/gnocchi.md | 158
-rw-r--r--   exporting/prometheus/integrations/google_bigquery.md | 158
-rw-r--r--   exporting/prometheus/integrations/irondb.md | 158
-rw-r--r--   exporting/prometheus/integrations/kafka.md | 158
-rw-r--r--   exporting/prometheus/integrations/m3db.md | 158
-rw-r--r--   exporting/prometheus/integrations/metricfire.md | 158
-rw-r--r--   exporting/prometheus/integrations/new_relic.md | 158
-rw-r--r--   exporting/prometheus/integrations/postgresql.md | 158
-rw-r--r--   exporting/prometheus/integrations/prometheus_remote_write.md | 158
-rw-r--r--   exporting/prometheus/integrations/quasardb.md | 158
-rw-r--r--   exporting/prometheus/integrations/splunk_signalfx.md | 158
-rw-r--r--   exporting/prometheus/integrations/thanos.md | 158
-rw-r--r--   exporting/prometheus/integrations/tikv.md | 158
-rw-r--r--   exporting/prometheus/integrations/timescaledb.md | 158
-rw-r--r--   exporting/prometheus/integrations/victoriametrics.md | 158
-rw-r--r--   exporting/prometheus/integrations/vmware_aria.md | 158
-rw-r--r--   exporting/prometheus/integrations/wavefront.md | 158
-rw-r--r--   exporting/prometheus/prometheus.c | 10
l--------- [-rw-r--r--]   exporting/prometheus/remote_write/README.md | 61
-rw-r--r--   exporting/prometheus/remote_write/remote_write.c | 4
28 files changed, 3800 insertions, 69 deletions
diff --git a/exporting/prometheus/README.md b/exporting/prometheus/README.md
index d3b37f12..abd81554 100644
--- a/exporting/prometheus/README.md
+++ b/exporting/prometheus/README.md
@@ -24,7 +24,7 @@ Each chart in Netdata has several properties (common to all its metrics):
 - `chart_name` - a more human friendly name for `chart_id`, also unique.
 - `context` - this is the template of the chart. All disk I/O charts have the same context, all mysql requests charts
-  have the same context, etc. This is used for alarm templates to match all the charts they should be attached to.
+  have the same context, etc. This is used for alert templates to match all the charts they should be attached to.
 - `family` groups a set of charts together. It is used as the submenu of the dashboard.
diff --git a/exporting/prometheus/integrations/appoptics.md b/exporting/prometheus/integrations/appoptics.md
new file mode 100644
index 00000000..29954b65
--- /dev/null
+++ b/exporting/prometheus/integrations/appoptics.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/appoptics.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "AppOptics"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# AppOptics
+
+
+<img src="https://netdata.cloud/img/solarwinds.svg" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries (an example install command is sketched below).
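+
+On Debian/Ubuntu systems, for example, the required libraries are typically provided by the packages below (package names are an assumption and vary by distribution; check your distribution's repositories):
+
+```bash
+# hypothetical Debian/Ubuntu package names for the protobuf and snappy development libraries
+sudo apt-get install protobuf-compiler libprotobuf-dev libsnappy-dev
+# rebuild/reinstall Netdata afterwards so the agent is built with remote write support
+```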
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | True |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | True |
+| username | Username for HTTP authentication | my_username | False |
+| password | Password for HTTP authentication | my_password | False |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | False |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | False |
+| prefix | The prefix to add to all metrics. | netdata | False |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | False |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | False |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | False |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | False |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | False |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | False |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | False |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | False |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+ ```yaml
+ destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+ ```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
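+
+As a further sketch (the hostname and ports are illustrative), the optional protocol prefix and a DNS hostname can be combined with an IP fallback:
+
+```yaml
+destination = tcp:prometheus.example.com:2003 10.11.14.2:2003
+```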
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
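+
+For example, a parent node aggregating several children could export only the database hosts while excluding staging replicas (the instance name and patterns below are illustrative):
+
+```yaml
+[prometheus_remote_write:my_instance]
+    send hosts matching = localhost !*staging* *db*
+```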
+
+
+##### send charts matching
+
+A pattern starting with ! gives a negative match. So to match all charts named apps.* except charts ending in *reads,
+use !*reads apps.* (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter filter that can be used while querying allmetrics. The URL parameter
+has a higher priority than the configuration option.
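+
+For example, to export all `apps.*` charts except those ending in `*reads` (the same simple-pattern rules as above apply):
+
+```yaml
+[prometheus_remote_write:my_instance]
+    send charts matching = !*reads apps.*
+```
+
+The `filter` URL parameter mentioned above can be passed when querying the `allmetrics` endpoint; the call below is a sketch, adjust the host, port, and pattern to your setup:
+
+```bash
+curl 'http://localhost:19999/api/v1/allmetrics?format=prometheus&filter=apps.*'
+```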
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/azure_data_explorer.md b/exporting/prometheus/integrations/azure_data_explorer.md
new file mode 100644
index 00000000..c2ff6f21
--- /dev/null
+++ b/exporting/prometheus/integrations/azure_data_explorer.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/azure_data_explorer.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "Azure Data Explorer"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# Azure Data Explorer
+
+
+<img src="https://netdata.cloud/img/azuredataex.jpg" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | True |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | True |
+| username | Username for HTTP authentication | my_username | False |
+| password | Password for HTTP authentication | my_password | False |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | False |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | False |
+| prefix | The prefix to add to all metrics. | netdata | False |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | False |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | False |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | False |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | False |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | False |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | False |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | False |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | False |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+ ```yaml
+ destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+ ```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with ! gives a negative match. So to match all charts named apps.* except charts ending in *reads,
+use !*reads apps.* (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter filter that can be used while querying allmetrics. The URL parameter
+has a higher priority than the configuration option.
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
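+
+After saving `exporting.conf`, restart the Netdata Agent so the connector instance is loaded. The command below assumes a systemd-based system; use the restart method appropriate for your installation:
+
+```bash
+sudo systemctl restart netdata
+```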
+
diff --git a/exporting/prometheus/integrations/azure_event_hub.md b/exporting/prometheus/integrations/azure_event_hub.md
new file mode 100644
index 00000000..0d6f97d8
--- /dev/null
+++ b/exporting/prometheus/integrations/azure_event_hub.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/azure_event_hub.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "Azure Event Hub"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# Azure Event Hub
+
+
+<img src="https://netdata.cloud/img/azureeventhub.png" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | True |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | True |
+| username | Username for HTTP authentication | my_username | False |
+| password | Password for HTTP authentication | my_password | False |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | False |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | False |
+| prefix | The prefix to add to all metrics. | netdata | False |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | False |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | False |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | False |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | False |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | False |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | False |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | False |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | False |
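+
+Put together, the options above map onto an `exporting.conf` section like the following sketch (the instance name, destination, and URL path are illustrative placeholders):
+
+```yaml
+[prometheus_remote_write:my_instance]
+    enabled = yes
+    destination = 10.11.14.2:2003
+    remote write URL path = /receive
+    data source = average
+    update every = 10
+    send charts matching = *
+    send names instead of ids = yes
+```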
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+ ```yaml
+ destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+ ```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with ! gives a negative match. So to match all charts named apps.* except charts ending in *reads,
+use !*reads apps.* (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter filter that can be used while querying allmetrics. The URL parameter
+has a higher priority than the configuration option.
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/chronix.md b/exporting/prometheus/integrations/chronix.md
new file mode 100644
index 00000000..5f00e6d1
--- /dev/null
+++ b/exporting/prometheus/integrations/chronix.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/chronix.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "Chronix"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# Chronix
+
+
+<img src="https://netdata.cloud/img/chronix.png" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | True |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | True |
+| username | Username for HTTP authentication | my_username | False |
+| password | Password for HTTP authentication | my_password | False |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | False |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | False |
+| prefix | The prefix to add to all metrics. | netdata | False |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | False |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | False |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | False |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | False |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | False |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | False |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | False |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | False |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+ ```yaml
+ destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+ ```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with ! gives a negative match. So to match all charts named apps.* except charts ending in *reads,
+use !*reads apps.* (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter filter that can be used while querying allmetrics. The URL parameter
+has a higher priority than the configuration option.
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/cortex.md b/exporting/prometheus/integrations/cortex.md
new file mode 100644
index 00000000..64e7aed1
--- /dev/null
+++ b/exporting/prometheus/integrations/cortex.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/cortex.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "Cortex"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# Cortex
+
+
+<img src="https://netdata.cloud/img/cortex.png" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | True |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | True |
+| username | Username for HTTP authentication | my_username | False |
+| password | Password for HTTP authentication | my_password | False |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | False |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | False |
+| prefix | The prefix to add to all metrics. | netdata | False |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | False |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | False |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | False |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | False |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | False |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | False |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | False |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | False |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+ ```yaml
+ destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+ ```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with ! gives a negative match. So to match all charts named apps.* except charts ending in *reads,
+use !*reads apps.* (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter filter that can be used while querying allmetrics. The URL parameter
+has a higher priority than the configuration option.
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/cratedb.md b/exporting/prometheus/integrations/cratedb.md
new file mode 100644
index 00000000..7e2ca3ff
--- /dev/null
+++ b/exporting/prometheus/integrations/cratedb.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/cratedb.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "CrateDB"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# CrateDB
+
+
+<img src="https://netdata.cloud/img/crate.svg" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | True |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | True |
+| username | Username for HTTP authentication | my_username | False |
+| password | Password for HTTP authentication | my_password | False |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | False |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | False |
+| prefix | The prefix to add to all metrics. | netdata | False |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | False |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | False |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | False |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | False |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | False |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | False |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | False |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | False |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+ ```yaml
+ destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+ ```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with ! gives a negative match. So to match all charts named apps.* except charts ending in *reads,
+use !*reads apps.* (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter filter that can be used while querying allmetrics. The URL parameter
+has a higher priority than the configuration option.
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/elasticsearch.md b/exporting/prometheus/integrations/elasticsearch.md
new file mode 100644
index 00000000..67bc9d0e
--- /dev/null
+++ b/exporting/prometheus/integrations/elasticsearch.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/elasticsearch.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "ElasticSearch"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# ElasticSearch
+
+
+<img src="https://netdata.cloud/img/elasticsearch.svg" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | True |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | True |
+| username | Username for HTTP authentication | my_username | False |
+| password | Password for HTTP authentication | my_password | False |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | False |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | False |
+| prefix | The prefix to add to all metrics. | netdata | False |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | False |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | False |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | False |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | False |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | False |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | False |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | False |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | False |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+ ```yaml
+ destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+ ```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with ! gives a negative match. So to match all charts named apps.* except charts ending in *reads,
+use !*reads apps.* (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter filter that can be used while querying allmetrics. The URL parameter
+has a higher priority than the configuration option.
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/gnocchi.md b/exporting/prometheus/integrations/gnocchi.md
new file mode 100644
index 00000000..c3b11c24
--- /dev/null
+++ b/exporting/prometheus/integrations/gnocchi.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/gnocchi.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "Gnocchi"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# Gnocchi
+
+
+<img src="https://netdata.cloud/img/gnocchi.svg" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | True |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | True |
+| username | Username for HTTP authentication | my_username | False |
+| password | Password for HTTP authentication | my_password | False |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | False |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | False |
+| prefix | The prefix to add to all metrics. | netdata | False |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | False |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | False |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | False |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | False |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | False |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | False |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | False |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | False |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+```yaml
+destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with ! gives a negative match. So to match all charts named apps.* except charts ending in *reads,
+use !*reads apps.* (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter filter that can be used while querying allmetrics. The URL parameter
+has a higher priority than the configuration option.
+
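+As an illustrative sketch (the instance name and patterns simply reuse the examples above), the `send hosts matching` and `send charts matching` filters can be combined in a single connector section of `exporting.conf`:
+
+```yaml
+[prometheus_remote_write:my_instance]
+    enabled = yes
+    send hosts matching = !*child* *db*
+    send charts matching = !*reads apps.*
+```
+
+As noted above, pattern order matters: the first pattern that matches a hostname or a chart decides whether it is included or excluded.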
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add the `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/google_bigquery.md b/exporting/prometheus/integrations/google_bigquery.md
new file mode 100644
index 00000000..3639fd48
--- /dev/null
+++ b/exporting/prometheus/integrations/google_bigquery.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/google_bigquery.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "Google BigQuery"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# Google BigQuery
+
+
+<img src="https://netdata.cloud/img/bigquery.png" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | True |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | True |
+| username | Username for HTTP authentication | my_username | False |
+| password | Password for HTTP authentication | my_password | False |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | False |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | False |
+| prefix | The prefix to add to all metrics. | netdata | False |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | False |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | False |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | False |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | False |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | False |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | False |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | False |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | False |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+```yaml
+destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with ! gives a negative match. So to match all charts named apps.* except charts ending in *reads,
+use !*reads apps.* (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter filter that can be used while querying allmetrics. The URL parameter
+has a higher priority than the configuration option.
+
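+As an illustrative sketch (the instance name and patterns simply reuse the examples above), the `send hosts matching` and `send charts matching` filters can be combined in a single connector section of `exporting.conf`:
+
+```yaml
+[prometheus_remote_write:my_instance]
+    enabled = yes
+    send hosts matching = !*child* *db*
+    send charts matching = !*reads apps.*
+```
+
+As noted above, pattern order matters: the first pattern that matches a hostname or a chart decides whether it is included or excluded.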
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add the `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/irondb.md b/exporting/prometheus/integrations/irondb.md
new file mode 100644
index 00000000..c2525848
--- /dev/null
+++ b/exporting/prometheus/integrations/irondb.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/irondb.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "IRONdb"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# IRONdb
+
+
+<img src="https://netdata.cloud/img/irondb.png" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | True |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | True |
+| username | Username for HTTP authentication | my_username | False |
+| password | Password for HTTP authentication | my_password | False |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | False |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | False |
+| prefix | The prefix to add to all metrics. | netdata | False |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | False |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | False |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | False |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | False |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | False |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | False |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | False |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | False |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+```yaml
+destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with ! gives a negative match. So to match all charts named apps.* except charts ending in *reads,
+use !*reads apps.* (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter filter that can be used while querying allmetrics. The URL parameter
+has a higher priority than the configuration option.
+
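+As an illustrative sketch (the instance name and patterns simply reuse the examples above), the `send hosts matching` and `send charts matching` filters can be combined in a single connector section of `exporting.conf`:
+
+```yaml
+[prometheus_remote_write:my_instance]
+    enabled = yes
+    send hosts matching = !*child* *db*
+    send charts matching = !*reads apps.*
+```
+
+As noted above, pattern order matters: the first pattern that matches a hostname or a chart decides whether it is included or excluded.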
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add the `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/kafka.md b/exporting/prometheus/integrations/kafka.md
new file mode 100644
index 00000000..de98992b
--- /dev/null
+++ b/exporting/prometheus/integrations/kafka.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/kafka.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "Kafka"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# Kafka
+
+
+<img src="https://netdata.cloud/img/kafka.svg" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | True |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | True |
+| username | Username for HTTP authentication | my_username | False |
+| password | Password for HTTP authentication | my_password | False |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | False |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | False |
+| prefix | The prefix to add to all metrics. | netdata | False |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | False |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | False |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | False |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | False |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | False |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | False |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | False |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | False |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+```yaml
+destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with ! gives a negative match. So to match all charts named apps.* except charts ending in *reads,
+use !*reads apps.* (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter filter that can be used while querying allmetrics. The URL parameter
+has a higher priority than the configuration option.
+
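+As an illustrative sketch (the instance name and patterns simply reuse the examples above), the `send hosts matching` and `send charts matching` filters can be combined in a single connector section of `exporting.conf`:
+
+```yaml
+[prometheus_remote_write:my_instance]
+    enabled = yes
+    send hosts matching = !*child* *db*
+    send charts matching = !*reads apps.*
+```
+
+As noted above, pattern order matters: the first pattern that matches a hostname or a chart decides whether it is included or excluded.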
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add the `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/m3db.md b/exporting/prometheus/integrations/m3db.md
new file mode 100644
index 00000000..38be54a6
--- /dev/null
+++ b/exporting/prometheus/integrations/m3db.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/m3db.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "M3DB"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# M3DB
+
+
+<img src="https://netdata.cloud/img/m3db.png" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | True |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | True |
+| username | Username for HTTP authentication | my_username | False |
+| password | Password for HTTP authentication | my_password | False |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | False |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | False |
+| prefix | The prefix to add to all metrics. | netdata | False |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | False |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | False |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | False |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | False |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | False |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | False |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | False |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | False |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+```yaml
+destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with ! gives a negative match. So to match all charts named apps.* except charts ending in *reads,
+use !*reads apps.* (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter filter that can be used while querying allmetrics. The URL parameter
+has a higher priority than the configuration option.
+
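+As an illustrative sketch (the instance name and patterns simply reuse the examples above), the `send hosts matching` and `send charts matching` filters can be combined in a single connector section of `exporting.conf`:
+
+```yaml
+[prometheus_remote_write:my_instance]
+    enabled = yes
+    send hosts matching = !*child* *db*
+    send charts matching = !*reads apps.*
+```
+
+As noted above, pattern order matters: the first pattern that matches a hostname or a chart decides whether it is included or excluded.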
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add the `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/metricfire.md b/exporting/prometheus/integrations/metricfire.md
new file mode 100644
index 00000000..e9c4f7ea
--- /dev/null
+++ b/exporting/prometheus/integrations/metricfire.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/metricfire.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "MetricFire"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# MetricFire
+
+
+<img src="https://netdata.cloud/img/metricfire.png" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | True |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | True |
+| username | Username for HTTP authentication | my_username | False |
+| password | Password for HTTP authentication | my_password | False |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | False |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | False |
+| prefix | The prefix to add to all metrics. | netdata | False |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | False |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | False |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | False |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | False |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | False |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | False |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | False |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | False |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+```yaml
+destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with ! gives a negative match. So to match all charts named apps.* except charts ending in *reads,
+use !*reads apps.* (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter filter that can be used while querying allmetrics. The URL parameter
+has a higher priority than the configuration option.
+
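+As an illustrative sketch (the instance name and patterns simply reuse the examples above), the `send hosts matching` and `send charts matching` filters can be combined in a single connector section of `exporting.conf`:
+
+```yaml
+[prometheus_remote_write:my_instance]
+    enabled = yes
+    send hosts matching = !*child* *db*
+    send charts matching = !*reads apps.*
+```
+
+As noted above, pattern order matters: the first pattern that matches a hostname or a chart decides whether it is included or excluded.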
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add the `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/new_relic.md b/exporting/prometheus/integrations/new_relic.md
new file mode 100644
index 00000000..6d541740
--- /dev/null
+++ b/exporting/prometheus/integrations/new_relic.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/new_relic.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "New Relic"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# New Relic
+
+
+<img src="https://netdata.cloud/img/newrelic.svg" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | True |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | True |
+| username | Username for HTTP authentication | my_username | False |
+| password | Password for HTTP authentication | my_password | False |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | False |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | False |
+| prefix | The prefix to add to all metrics. | netdata | False |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | False |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | False |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | False |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | False |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | False |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | False |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | False |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | False |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+```yaml
+destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with ! gives a negative match. So to match all charts named apps.* except charts ending in *reads,
+use !*reads apps.* (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter filter that can be used while querying allmetrics. The URL parameter
+has a higher priority than the configuration option.
+
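+As an illustrative sketch (the instance name and patterns simply reuse the examples above), the `send hosts matching` and `send charts matching` filters can be combined in a single connector section of `exporting.conf`:
+
+```yaml
+[prometheus_remote_write:my_instance]
+    enabled = yes
+    send hosts matching = !*child* *db*
+    send charts matching = !*reads apps.*
+```
+
+As noted above, pattern order matters: the first pattern that matches a hostname or a chart decides whether it is included or excluded.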
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add the `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/postgresql.md b/exporting/prometheus/integrations/postgresql.md
new file mode 100644
index 00000000..99865988
--- /dev/null
+++ b/exporting/prometheus/integrations/postgresql.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/postgresql.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "PostgreSQL"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# PostgreSQL
+
+
+<img src="https://netdata.cloud/img/postgres.svg" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | True |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | True |
+| username | Username for HTTP authentication | my_username | False |
+| password | Password for HTTP authentication | my_password | False |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | False |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | False |
+| prefix | The prefix to add to all metrics. | netdata | False |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | False |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | False |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | False |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | False |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | False |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | False |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | False |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | False |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+```yaml
+destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with ! gives a negative match. So to match all charts named apps.* except charts ending in *reads,
+use !*reads apps.* (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter filter that can be used while querying allmetrics. The URL parameter
+has a higher priority than the configuration option.
+
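+As an illustrative sketch (the instance name and patterns simply reuse the examples above), the `send hosts matching` and `send charts matching` filters can be combined in a single connector section of `exporting.conf`:
+
+```yaml
+[prometheus_remote_write:my_instance]
+    enabled = yes
+    send hosts matching = !*child* *db*
+    send charts matching = !*reads apps.*
+```
+
+As noted above, pattern order matters: the first pattern that matches a hostname or a chart decides whether it is included or excluded.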
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add the `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/prometheus_remote_write.md b/exporting/prometheus/integrations/prometheus_remote_write.md
new file mode 100644
index 00000000..213414d6
--- /dev/null
+++ b/exporting/prometheus/integrations/prometheus_remote_write.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/prometheus_remote_write.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "Prometheus Remote Write"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# Prometheus Remote Write
+
+
+<img src="https://netdata.cloud/img/prometheus.svg" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | True |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | True |
+| username | Username for HTTP authentication | my_username | False |
+| password | Password for HTTP authentication | my_password | False |
+| data source | Selects the kind of data that will be sent to the external database (as collected/average/sum). | | False |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | False |
+| prefix | The prefix to add to all metrics. | netdata | False |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | False |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | False |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | False |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | False |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | False |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | False |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | False |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | False |
+
+##### destination
+
+The format of each item in this list is: `[PROTOCOL:]IP[:PORT]`.
+- PROTOCOL can be `udp` or `tcp`. `tcp` is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in `[ ]` to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+```yaml
+destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time at which they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter filter that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers, as read by the system, and names
+are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
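+
+As a quick illustration (a sketch, not an official example; the instance name and pattern values are placeholders reused from the explanations above), these filtering options can be combined in a single connector section of `exporting.conf`:
+
+```yaml
+# illustrative sketch - adjust the instance name and patterns to your setup
+[prometheus_remote_write:my_instance]
+    # send only hosts matching *db* that do not match *child*
+    send hosts matching = !*child* *db*
+    # send only charts under apps.* that do not end in *reads
+    send charts matching = !*reads apps.*
+    # export human-friendly names instead of system ids
+    send names instead of ids = yes
+```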
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/quasardb.md b/exporting/prometheus/integrations/quasardb.md
new file mode 100644
index 00000000..66d65766
--- /dev/null
+++ b/exporting/prometheus/integrations/quasardb.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/quasardb.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "QuasarDB"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# QuasarDB
+
+
+<img src="https://netdata.cloud/img/quasar.jpeg" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | True |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | True |
+| username | Username for HTTP authentication | my_username | False |
+| password | Password for HTTP authentication | my_password | False |
+| data source | Selects the kind of data that will be sent to the external database (as collected/average/sum). | | False |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | False |
+| prefix | The prefix to add to all metrics. | netdata | False |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | False |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | False |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | False |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | False |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | False |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | False |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | False |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | False |
+
+##### destination
+
+The format of each item in this list is: `[PROTOCOL:]IP[:PORT]`.
+- PROTOCOL can be `udp` or `tcp`. `tcp` is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in `[ ]` to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+```yaml
+destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time at which they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter filter that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers, as read by the system, and names
+are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
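+
+As a quick illustration (a sketch, not an official example; the instance name and pattern values are placeholders reused from the explanations above), these filtering options can be combined in a single connector section of `exporting.conf`:
+
+```yaml
+# illustrative sketch - adjust the instance name and patterns to your setup
+[prometheus_remote_write:my_instance]
+    # send only hosts matching *db* that do not match *child*
+    send hosts matching = !*child* *db*
+    # send only charts under apps.* that do not end in *reads
+    send charts matching = !*reads apps.*
+    # export human-friendly names instead of system ids
+    send names instead of ids = yes
+```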
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/splunk_signalfx.md b/exporting/prometheus/integrations/splunk_signalfx.md
new file mode 100644
index 00000000..eba1cec5
--- /dev/null
+++ b/exporting/prometheus/integrations/splunk_signalfx.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/splunk_signalfx.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "Splunk SignalFx"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# Splunk SignalFx
+
+
+<img src="https://netdata.cloud/img/splunk.svg" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | True |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | True |
+| username | Username for HTTP authentication | my_username | False |
+| password | Password for HTTP authentication | my_password | False |
+| data source | Selects the kind of data that will be sent to the external database (as collected/average/sum). | | False |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | False |
+| prefix | The prefix to add to all metrics. | netdata | False |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | False |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | False |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | False |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | False |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | False |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | False |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | False |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | False |
+
+##### destination
+
+The format of each item in this list is: `[PROTOCOL:]IP[:PORT]`.
+- PROTOCOL can be `udp` or `tcp`. `tcp` is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in `[ ]` to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+```yaml
+destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time at which they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter filter that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers, as read by the system, and names
+are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
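+
+As a quick illustration (a sketch, not an official example; the instance name and pattern values are placeholders reused from the explanations above), these filtering options can be combined in a single connector section of `exporting.conf`:
+
+```yaml
+# illustrative sketch - adjust the instance name and patterns to your setup
+[prometheus_remote_write:my_instance]
+    # send only hosts matching *db* that do not match *child*
+    send hosts matching = !*child* *db*
+    # send only charts under apps.* that do not end in *reads
+    send charts matching = !*reads apps.*
+    # export human-friendly names instead of system ids
+    send names instead of ids = yes
+```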
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/thanos.md b/exporting/prometheus/integrations/thanos.md
new file mode 100644
index 00000000..09fa6d8a
--- /dev/null
+++ b/exporting/prometheus/integrations/thanos.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/thanos.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "Thanos"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# Thanos
+
+
+<img src="https://netdata.cloud/img/thanos.png" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | True |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | True |
+| username | Username for HTTP authentication | my_username | False |
+| password | Password for HTTP authentication | my_password | False |
+| data source | Selects the kind of data that will be sent to the external database (as collected/average/sum). | | False |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | False |
+| prefix | The prefix to add to all metrics. | netdata | False |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | False |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | False |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | False |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | False |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | False |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | False |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | False |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | False |
+
+##### destination
+
+The format of each item in this list is: `[PROTOCOL:]IP[:PORT]`.
+- PROTOCOL can be `udp` or `tcp`. `tcp` is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in `[ ]` to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+```yaml
+destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time at which they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter filter that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers, as read by the system, and names
+are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
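+
+As a quick illustration (a sketch, not an official example; the instance name and pattern values are placeholders reused from the explanations above), these filtering options can be combined in a single connector section of `exporting.conf`:
+
+```yaml
+# illustrative sketch - adjust the instance name and patterns to your setup
+[prometheus_remote_write:my_instance]
+    # send only hosts matching *db* that do not match *child*
+    send hosts matching = !*child* *db*
+    # send only charts under apps.* that do not end in *reads
+    send charts matching = !*reads apps.*
+    # export human-friendly names instead of system ids
+    send names instead of ids = yes
+```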
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/tikv.md b/exporting/prometheus/integrations/tikv.md
new file mode 100644
index 00000000..3735e52c
--- /dev/null
+++ b/exporting/prometheus/integrations/tikv.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/tikv.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "TiKV"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# TiKV
+
+
+<img src="https://netdata.cloud/img/tikv.png" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | True |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | True |
+| username | Username for HTTP authentication | my_username | False |
+| password | Password for HTTP authentication | my_password | False |
+| data source | Selects the kind of data that will be sent to the external database (as collected/average/sum). | | False |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | False |
+| prefix | The prefix to add to all metrics. | netdata | False |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | False |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | False |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | False |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | False |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | False |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | False |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | False |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | False |
+
+##### destination
+
+The format of each item in this list is: `[PROTOCOL:]IP[:PORT]`.
+- PROTOCOL can be `udp` or `tcp`. `tcp` is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in `[ ]` to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+```yaml
+destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time at which they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter filter that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers, as read by the system, and names
+are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
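+
+As a quick illustration (a sketch, not an official example; the instance name and pattern values are placeholders reused from the explanations above), these filtering options can be combined in a single connector section of `exporting.conf`:
+
+```yaml
+# illustrative sketch - adjust the instance name and patterns to your setup
+[prometheus_remote_write:my_instance]
+    # send only hosts matching *db* that do not match *child*
+    send hosts matching = !*child* *db*
+    # send only charts under apps.* that do not end in *reads
+    send charts matching = !*reads apps.*
+    # export human-friendly names instead of system ids
+    send names instead of ids = yes
+```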
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/timescaledb.md b/exporting/prometheus/integrations/timescaledb.md
new file mode 100644
index 00000000..41cfc193
--- /dev/null
+++ b/exporting/prometheus/integrations/timescaledb.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/timescaledb.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "TimescaleDB"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# TimescaleDB
+
+
+<img src="https://netdata.cloud/img/timescale.png" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | True |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | True |
+| username | Username for HTTP authentication | my_username | False |
+| password | Password for HTTP authentication | my_password | False |
+| data source | Selects the kind of data that will be sent to the external database (as collected/average/sum). | | False |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | False |
+| prefix | The prefix to add to all metrics. | netdata | False |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | False |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | False |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | False |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | False |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | False |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | False |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | False |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | False |
+
+##### destination
+
+The format of each item in this list is: `[PROTOCOL:]IP[:PORT]`.
+- PROTOCOL can be `udp` or `tcp`. `tcp` is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in `[ ]` to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+```yaml
+destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time at which they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter filter that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers, as read by the system, and names
+are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
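+
+As a quick illustration (a sketch, not an official example; the instance name and pattern values are placeholders reused from the explanations above), these filtering options can be combined in a single connector section of `exporting.conf`:
+
+```yaml
+# illustrative sketch - adjust the instance name and patterns to your setup
+[prometheus_remote_write:my_instance]
+    # send only hosts matching *db* that do not match *child*
+    send hosts matching = !*child* *db*
+    # send only charts under apps.* that do not end in *reads
+    send charts matching = !*reads apps.*
+    # export human-friendly names instead of system ids
+    send names instead of ids = yes
+```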
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/victoriametrics.md b/exporting/prometheus/integrations/victoriametrics.md
new file mode 100644
index 00000000..d51dd82f
--- /dev/null
+++ b/exporting/prometheus/integrations/victoriametrics.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/victoriametrics.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "VictoriaMetrics"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# VictoriaMetrics
+
+
+<img src="https://netdata.cloud/img/victoriametrics.png" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | True |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | True |
+| username | Username for HTTP authentication | my_username | False |
+| password | Password for HTTP authentication | my_password | False |
+| data source | Selects the kind of data that will be sent to the external database (as collected/average/sum). | | False |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | False |
+| prefix | The prefix to add to all metrics. | netdata | False |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | False |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | False |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | False |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | False |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | False |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | False |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | False |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | False |
+
+##### destination
+
+The format of each item in this list is: `[PROTOCOL:]IP[:PORT]`.
+- PROTOCOL can be `udp` or `tcp`. `tcp` is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in `[ ]` to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+```yaml
+destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time at which they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter filter that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers, as read by the system, and names
+are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
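+
+As a quick illustration (a sketch, not an official example; the instance name and pattern values are placeholders reused from the explanations above), these filtering options can be combined in a single connector section of `exporting.conf`:
+
+```yaml
+# illustrative sketch - adjust the instance name and patterns to your setup
+[prometheus_remote_write:my_instance]
+    # send only hosts matching *db* that do not match *child*
+    send hosts matching = !*child* *db*
+    # send only charts under apps.* that do not end in *reads
+    send charts matching = !*reads apps.*
+    # export human-friendly names instead of system ids
+    send names instead of ids = yes
+```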
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/vmware_aria.md b/exporting/prometheus/integrations/vmware_aria.md
new file mode 100644
index 00000000..9311f148
--- /dev/null
+++ b/exporting/prometheus/integrations/vmware_aria.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/vmware_aria.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "VMware Aria"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# VMware Aria
+
+
+<img src="https://netdata.cloud/img/aria.png" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | True |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | True |
+| username | Username for HTTP authentication | my_username | False |
+| password | Password for HTTP authentication | my_password | False |
+| data source | Selects the kind of data that will be sent to the external database (as collected/average/sum). | | False |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | False |
+| prefix | The prefix to add to all metrics. | netdata | False |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | False |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | False |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | False |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | False |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | False |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | False |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | False |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | False |
+
+##### destination
+
+The format of each item in this list is: `[PROTOCOL:]IP[:PORT]`.
+- PROTOCOL can be `udp` or `tcp`. `tcp` is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in `[ ]` to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+```yaml
+destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time at which they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter filter that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers, as read by the system, and names
+are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
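+
+As a quick illustration (a sketch, not an official example; the instance name and pattern values are placeholders reused from the explanations above), these filtering options can be combined in a single connector section of `exporting.conf`:
+
+```yaml
+# illustrative sketch - adjust the instance name and patterns to your setup
+[prometheus_remote_write:my_instance]
+    # send only hosts matching *db* that do not match *child*
+    send hosts matching = !*child* *db*
+    # send only charts under apps.* that do not end in *reads
+    send charts matching = !*reads apps.*
+    # export human-friendly names instead of system ids
+    send names instead of ids = yes
+```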
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/wavefront.md b/exporting/prometheus/integrations/wavefront.md
new file mode 100644
index 00000000..fd199dab
--- /dev/null
+++ b/exporting/prometheus/integrations/wavefront.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/wavefront.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "Wavefront"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# Wavefront
+
+
+<img src="https://netdata.cloud/img/wavefront.png" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | True |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | True |
+| username | Username for HTTP authentication | my_username | False |
+| password | Password for HTTP authentication | my_password | False |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | False |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | False |
+| prefix | The prefix to add to all metrics. | netdata | False |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | False |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | False |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | False |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | False |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | False |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | False |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | False |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | False |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only one supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+```yaml
+destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time at which they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data within that many iterations, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space-separated patterns, using `*` as a wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as `localhost`), allowing you to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. For example, to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (the order is important: the first pattern matching the hostname will be used - positive or negative).
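+
+A minimal sketch of this option in the connector instance section (the instance name is illustrative):
+
+```yaml
+[prometheus_remote_write:my_instance]
+    # send hosts whose names contain "db", but skip any host containing "child"
+    send hosts matching = !*child* *db*
+```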
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. For example, to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter `filter` that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
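+
+For example (a sketch, with an illustrative instance name):
+
+```yaml
+[prometheus_remote_write:my_instance]
+    # keep all apps.* charts except those ending in "reads"
+    send charts matching = !*reads apps.*
+```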
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers, as read by the system, and names
+are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
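+
+A minimal sketch that enables names (instance name is illustrative):
+
+```yaml
+[prometheus_remote_write:my_instance]
+    # export human-friendly names (e.g. device-mapper disk names) instead of system ids
+    send names instead of ids = yes
+```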
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add the `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/prometheus.c b/exporting/prometheus/prometheus.c
index 2d0611fd..9e3f4bbf 100644
--- a/exporting/prometheus/prometheus.c
+++ b/exporting/prometheus/prometheus.c
@@ -292,7 +292,7 @@ struct format_prometheus_label_callback {
size_t count;
};
-static int format_prometheus_label_callback(const char *name, const char *value, RRDLABEL_SRC ls, void *data) {
+static int format_prometheus_label_callback(const char *name, const char *value, RRDLABEL_SRC ls __maybe_unused, void *data) {
struct format_prometheus_label_callback *d = (struct format_prometheus_label_callback *)data;
if (!should_send_label(d->instance, ls)) return 0;
@@ -333,11 +333,9 @@ void format_host_labels_prometheus(struct instance *instance, RRDHOST *host)
* @param data is the buffer used to add labels.
*/
-static int format_prometheus_chart_label_callback(const char *name, const char *value, RRDLABEL_SRC ls, void *data) {
+static int format_prometheus_chart_label_callback(const char *name, const char *value, RRDLABEL_SRC ls __maybe_unused, void *data) {
BUFFER *wb = data;
- (void)ls;
-
if (name[0] == '_' )
return 1;
@@ -496,7 +494,7 @@ static void generate_as_collected_prom_metric(BUFFER *wb,
struct gen_parameters *p,
int homogeneous,
int prometheus_collector,
- DICTIONARY *chart_labels)
+ RRDLABELS *chart_labels)
{
buffer_sprintf(wb, "%s_%s", p->prefix, p->context);
@@ -524,7 +522,7 @@ static void generate_as_collected_prom_metric(BUFFER *wb,
buffer_sprintf(wb, COLLECTED_NUMBER_FORMAT, p->rd->collector.last_collected_value);
if (p->output_options & PROMETHEUS_OUTPUT_TIMESTAMPS)
- buffer_sprintf(wb, " %llu\n", timeval_msec(&p->rd->collector.last_collected_time));
+ buffer_sprintf(wb, " %"PRIu64"\n", timeval_msec(&p->rd->collector.last_collected_time));
else
buffer_sprintf(wb, "\n");
}
diff --git a/exporting/prometheus/remote_write/README.md b/exporting/prometheus/remote_write/README.md
index c2ad22a6..8ca4673a 100644..120000
--- a/exporting/prometheus/remote_write/README.md
+++ b/exporting/prometheus/remote_write/README.md
@@ -1,60 +1 @@
-<!--
-title: "Export metrics to Prometheus remote write providers"
-description: "Send Netdata metrics to your choice of more than 20 external storage providers for long-term archiving and further analysis."
-custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/remote_write/README.md"
-sidebar_label: "Prometheus remote write"
-learn_status: "Published"
-learn_rel_path: "Integrations/Export"
--->
-
-# Export metrics to Prometheus remote write providers
-
-The Prometheus remote write exporting connector uses the exporting engine to send Netdata metrics to your choice of more
-than 20 external storage providers for long-term archiving and further analysis.
-
-## Prerequisites
-
-To use the Prometheus remote write API with [storage
-providers](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage), install
-[protobuf](https://developers.google.com/protocol-buffers/) and [snappy](https://github.com/google/snappy) libraries.
-Next, [reinstall Netdata](https://github.com/netdata/netdata/blob/master/packaging/installer/REINSTALL.md), which detects that the required libraries and utilities
-are now available.
-
-## Configuration
-
-To enable data exporting to a storage provider using the Prometheus remote write API, run `./edit-config exporting.conf`
-in the Netdata configuration directory and set the following options:
-
-```conf
-[prometheus_remote_write:my_instance]
- enabled = yes
- destination = example.domain:example_port
- remote write URL path = /receive
-```
-
-You can also add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example:
-`remote_write:https:my_instance`.
-
-`remote write URL path` is used to set an endpoint path for the remote write protocol. The default value is `/receive`.
-For example, if your endpoint is `http://example.domain:example_port/storage/read`:
-
-```conf
- destination = example.domain:example_port
- remote write URL path = /storage/read
-```
-
-You can set basic HTTP authentication credentials using
-
-```conf
- username = my_username
- password = my_password
-```
-
-`buffered` and `lost` dimensions in the Netdata Exporting Connector Data Size operation monitoring chart estimate uncompressed
-buffer size on failures.
-
-## Notes
-
-The remote write exporting connector does not support `buffer on failures`
-
-
+../integrations/prometheus_remote_write.md
\ No newline at end of file
diff --git a/exporting/prometheus/remote_write/remote_write.c b/exporting/prometheus/remote_write/remote_write.c
index 2b53b1c2..ed431c9d 100644
--- a/exporting/prometheus/remote_write/remote_write.c
+++ b/exporting/prometheus/remote_write/remote_write.c
@@ -139,11 +139,11 @@ struct format_remote_write_label_callback {
void *write_request;
};
-static int format_remote_write_label_callback(const char *name, const char *value, RRDLABEL_SRC ls, void *data) {
+static int format_remote_write_label_callback(const char *name, const char *value, RRDLABEL_SRC ls __maybe_unused, void *data)
+{
struct format_remote_write_label_callback *d = (struct format_remote_write_label_callback *)data;
if (!should_send_label(d->instance, ls)) return 0;
-
char k[PROMETHEUS_ELEMENT_MAX + 1];
char v[PROMETHEUS_ELEMENT_MAX + 1];