Diffstat (limited to 'exporting/prometheus/integrations')
-rw-r--r--  exporting/prometheus/integrations/appoptics.md  158
-rw-r--r--  exporting/prometheus/integrations/azure_data_explorer.md  158
-rw-r--r--  exporting/prometheus/integrations/azure_event_hub.md  158
-rw-r--r--  exporting/prometheus/integrations/chronix.md  158
-rw-r--r--  exporting/prometheus/integrations/cortex.md  158
-rw-r--r--  exporting/prometheus/integrations/cratedb.md  158
-rw-r--r--  exporting/prometheus/integrations/elasticsearch.md  158
-rw-r--r--  exporting/prometheus/integrations/gnocchi.md  158
-rw-r--r--  exporting/prometheus/integrations/google_bigquery.md  158
-rw-r--r--  exporting/prometheus/integrations/irondb.md  158
-rw-r--r--  exporting/prometheus/integrations/kafka.md  158
-rw-r--r--  exporting/prometheus/integrations/m3db.md  158
-rw-r--r--  exporting/prometheus/integrations/metricfire.md  158
-rw-r--r--  exporting/prometheus/integrations/new_relic.md  158
-rw-r--r--  exporting/prometheus/integrations/postgresql.md  158
-rw-r--r--  exporting/prometheus/integrations/prometheus_remote_write.md  158
-rw-r--r--  exporting/prometheus/integrations/quasardb.md  158
-rw-r--r--  exporting/prometheus/integrations/splunk_signalfx.md  158
-rw-r--r--  exporting/prometheus/integrations/thanos.md  158
-rw-r--r--  exporting/prometheus/integrations/tikv.md  158
-rw-r--r--  exporting/prometheus/integrations/timescaledb.md  158
-rw-r--r--  exporting/prometheus/integrations/victoriametrics.md  158
-rw-r--r--  exporting/prometheus/integrations/vmware_aria.md  158
-rw-r--r--  exporting/prometheus/integrations/wavefront.md  158
24 files changed, 3792 insertions, 0 deletions
diff --git a/exporting/prometheus/integrations/appoptics.md b/exporting/prometheus/integrations/appoptics.md
new file mode 100644
index 00000000..29293320
--- /dev/null
+++ b/exporting/prometheus/integrations/appoptics.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/appoptics.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "AppOptics"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# AppOptics
+
+
+<img src="https://netdata.cloud/img/solarwinds.svg" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
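+On Debian/Ubuntu-style systems, installing the libraries before reinstalling Netdata might look like the sketch below; the package names are an assumption and may differ on your distribution.
+
+```bash
+# hypothetical package names - adjust for your distribution
+sudo apt-get install libprotobuf-dev protobuf-compiler libsnappy-dev
+```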
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | yes |
+| username | Username for HTTP authentication | my_username | no |
+| password | Password for HTTP authentication | my_password | no |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
+| prefix | The prefix to add to all metrics. | netdata | no |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | no |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | no |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | no |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |
+
+##### destination
+
+The format of each item in this list is: `[PROTOCOL:]IP[:PORT]`.
+- PROTOCOL can be `udp` or `tcp`. `tcp` is the default and the only one supported by the current exporting engine.
+- IP can be `XX.XX.XX.XX` (IPv4), or `[XX:XX...XX:XX]` (IPv6). For IPv6 you need to enclose the IP in `[]` to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+ ```yaml
+ destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+ ```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time at which they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the local host is always checked as `localhost`), allowing you to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
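+As a minimal `exporting.conf` sketch of the example above (the instance name is illustrative):
+
+```yaml
+[prometheus_remote_write:my_instance]
+    # send hosts whose name matches *db*, except hosts containing *child*
+    send hosts matching = !*child* *db*
+```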
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter `filter` that can be used while querying allmetrics. The URL parameter
+has a higher priority than the configuration option.
+
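+For example, a hedged `exporting.conf` sketch of the pattern described above (the instance name is illustrative):
+
+```yaml
+[prometheus_remote_write:my_instance]
+    # send apps.* charts, but skip any chart id or name ending in *reads
+    send charts matching = !*reads apps.*
+```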
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/azure_data_explorer.md b/exporting/prometheus/integrations/azure_data_explorer.md
new file mode 100644
index 00000000..aa8710aa
--- /dev/null
+++ b/exporting/prometheus/integrations/azure_data_explorer.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/azure_data_explorer.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "Azure Data Explorer"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# Azure Data Explorer
+
+
+<img src="https://netdata.cloud/img/azuredataex.jpg" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | yes |
+| username | Username for HTTP authentication | my_username | no |
+| password | Password for HTTP authentication | my_password | no |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
+| prefix | The prefix to add to all metrics. | netdata | no |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | no |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | no |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | no |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |
+
+##### destination
+
+The format of each item in this list is: `[PROTOCOL:]IP[:PORT]`.
+- PROTOCOL can be `udp` or `tcp`. `tcp` is the default and the only one supported by the current exporting engine.
+- IP can be `XX.XX.XX.XX` (IPv4), or `[XX:XX...XX:XX]` (IPv6). For IPv6 you need to enclose the IP in `[]` to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+ ```yaml
+ destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+ ```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time at which they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
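+As a rough illustration using the defaults from the table above, `buffer on failures = 10` with `update every = 10` buffers roughly 10 x 10 = 100 seconds of metrics before data loss begins.
+
+```yaml
+[prometheus_remote_write:my_instance]
+    # with these (default) values, about 100 seconds of data are buffered on failures
+    update every = 10
+    buffer on failures = 10
+```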
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the local host is always checked as `localhost`), allowing you to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter `filter` that can be used while querying allmetrics. The URL parameter
+has a higher priority than the configuration option.
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/azure_event_hub.md b/exporting/prometheus/integrations/azure_event_hub.md
new file mode 100644
index 00000000..bc8a0c9e
--- /dev/null
+++ b/exporting/prometheus/integrations/azure_event_hub.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/azure_event_hub.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "Azure Event Hub"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# Azure Event Hub
+
+
+<img src="https://netdata.cloud/img/azureeventhub.png" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | yes |
+| username | Username for HTTP authentication | my_username | no |
+| password | Password for HTTP authentication | my_password | no |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
+| prefix | The prefix to add to all metrics. | netdata | no |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | no |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | no |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | no |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |
+
+##### destination
+
+The format of each item in this list is: `[PROTOCOL:]IP[:PORT]`.
+- PROTOCOL can be `udp` or `tcp`. `tcp` is the default and the only one supported by the current exporting engine.
+- IP can be `XX.XX.XX.XX` (IPv4), or `[XX:XX...XX:XX]` (IPv6). For IPv6 you need to enclose the IP in `[]` to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+ ```yaml
+ destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+ ```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time at which they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the local host is always checked as `localhost`), allowing you to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter `filter` that can be used while querying allmetrics. The URL parameter
+has a higher priority than the configuration option.
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
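+A hedged sketch that enables this option together with the label options from the table above (the instance name is illustrative):
+
+```yaml
+[prometheus_remote_write:my_instance]
+    # prefer human-friendly names over system ids
+    send names instead of ids = yes
+    send configured labels = yes
+    send automatic labels = yes
+```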
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/chronix.md b/exporting/prometheus/integrations/chronix.md
new file mode 100644
index 00000000..9794a624
--- /dev/null
+++ b/exporting/prometheus/integrations/chronix.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/chronix.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "Chronix"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# Chronix
+
+
+<img src="https://netdata.cloud/img/chronix.png" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | yes |
+| username | Username for HTTP authentication | my_username | no |
+| password | Password for HTTP authentication | my_password | no |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
+| prefix | The prefix to add to all metrics. | netdata | no |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | no |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | no |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | no |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |
+
+##### destination
+
+The format of each item in this list is: `[PROTOCOL:]IP[:PORT]`.
+- PROTOCOL can be `udp` or `tcp`. `tcp` is the default and the only one supported by the current exporting engine.
+- IP can be `XX.XX.XX.XX` (IPv4), or `[XX:XX...XX:XX]` (IPv6). For IPv6 you need to enclose the IP in `[]` to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+ ```yaml
+ destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+ ```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
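+Hostnames and service names are accepted as well; a hedged example with hypothetical hosts, where the second entry falls back to the connector's default port:
+
+```yaml
+destination = tcp:prometheus.example.com:2003 backup-receiver.example.com
+```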
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time at which they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the local host is always checked as `localhost`), allowing you to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter `filter` that can be used while querying allmetrics. The URL parameter
+has a higher priority than the configuration option.
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/cortex.md b/exporting/prometheus/integrations/cortex.md
new file mode 100644
index 00000000..784c62ce
--- /dev/null
+++ b/exporting/prometheus/integrations/cortex.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/cortex.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "Cortex"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# Cortex
+
+
+<img src="https://netdata.cloud/img/cortex.png" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | yes |
+| username | Username for HTTP authentication | my_username | no |
+| password | Password for HTTP authentication | my_password | no |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
+| prefix | The prefix to add to all metrics. | netdata | no |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | no |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | no |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | no |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |
+
+##### destination
+
+The format of each item in this list is: `[PROTOCOL:]IP[:PORT]`.
+- PROTOCOL can be `udp` or `tcp`. `tcp` is the default and the only one supported by the current exporting engine.
+- IP can be `XX.XX.XX.XX` (IPv4), or `[XX:XX...XX:XX]` (IPv6). For IPv6 you need to enclose the IP in `[]` to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+ ```yaml
+ destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+ ```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time at which they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the local host is always checked as `localhost`), allowing you to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter `filter` that can be used while querying allmetrics. The URL parameter
+has a higher priority than the configuration option.
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/cratedb.md b/exporting/prometheus/integrations/cratedb.md
new file mode 100644
index 00000000..75a46391
--- /dev/null
+++ b/exporting/prometheus/integrations/cratedb.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/cratedb.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "CrateDB"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# CrateDB
+
+
+<img src="https://netdata.cloud/img/crate.svg" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | yes |
+| username | Username for HTTP authentication | my_username | no |
+| password | Password for HTTP authentication | my_password | no |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
+| prefix | The prefix to add to all metrics. | netdata | no |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | no |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | no |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | no |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |
+
+##### destination
+
+The format of each item in this list is: `[PROTOCOL:]IP[:PORT]`.
+- PROTOCOL can be `udp` or `tcp`. `tcp` is the default and the only one supported by the current exporting engine.
+- IP can be `XX.XX.XX.XX` (IPv4), or `[XX:XX...XX:XX]` (IPv6). For IPv6 you need to enclose the IP in `[]` to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+ ```yaml
+ destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+ ```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time at which they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the local host is always checked as `localhost`), allowing you to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter `filter` that can be used while querying allmetrics. The URL parameter
+has a higher priority than the configuration option.
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/elasticsearch.md b/exporting/prometheus/integrations/elasticsearch.md
new file mode 100644
index 00000000..94e8d916
--- /dev/null
+++ b/exporting/prometheus/integrations/elasticsearch.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/elasticsearch.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "ElasticSearch"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# ElasticSearch
+
+
+<img src="https://netdata.cloud/img/elasticsearch.svg" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | yes |
+| username | Username for HTTP authentication | my_username | no |
+| password | Password for HTTP authentication | my_password | no |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
+| prefix | The prefix to add to all metrics. | netdata | no |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | no |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | no |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | no |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |
+
+##### destination
+
+The format of each item in this list is: `[PROTOCOL:]IP[:PORT]`.
+- PROTOCOL can be `udp` or `tcp`. `tcp` is the default and the only one supported by the current exporting engine.
+- IP can be `XX.XX.XX.XX` (IPv4), or `[XX:XX...XX:XX]` (IPv6). For IPv6 you need to enclose the IP in `[]` to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+ ```yaml
+ destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+ ```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time at which they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the local host is always checked as `localhost`), allowing you to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
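+For instance, a parent node aggregating many children might keep only itself and its production web servers; a hedged sketch with hypothetical hostnames:
+
+```yaml
+[prometheus_remote_write:my_instance]
+    send hosts matching = localhost web*
+```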
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter `filter` that can be used while querying allmetrics. The URL parameter
+has a higher priority than the configuration option.
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/gnocchi.md b/exporting/prometheus/integrations/gnocchi.md
new file mode 100644
index 00000000..a61986c1
--- /dev/null
+++ b/exporting/prometheus/integrations/gnocchi.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/gnocchi.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "Gnocchi"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# Gnocchi
+
+
+<img src="https://netdata.cloud/img/gnocchi.svg" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | yes |
+| username | Username for HTTP authentication | my_username | no |
+| password | Password for HTTP authentication | my_password | no |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
+| prefix | The prefix to add to all metrics. | netdata | no |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | no |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | no |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | no |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |
+
+##### destination
+
+The format of each item in this list is: `[PROTOCOL:]IP[:PORT]`.
+- PROTOCOL can be `udp` or `tcp`. `tcp` is the default and the only one supported by the current exporting engine.
+- IP can be `XX.XX.XX.XX` (IPv4), or `[XX:XX...XX:XX]` (IPv6). For IPv6 you need to enclose the IP in `[]` to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+ ```yaml
+ destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+ ```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time at which they are sent.
+
+
+##### buffer on failures
+
+If the external database server stays unavailable for more than that many iterations, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter `filter` that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
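+
+For example, a minimal sketch combining both filters in a connector instance section (the instance name `my_instance` is only illustrative; the patterns are the ones discussed above):
+
+```yaml
+[prometheus_remote_write:my_instance]
+    send hosts matching = !*child* *db*
+    send charts matching = !*reads apps.*
+```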
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add the `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/google_bigquery.md b/exporting/prometheus/integrations/google_bigquery.md
new file mode 100644
index 00000000..aec0a9a5
--- /dev/null
+++ b/exporting/prometheus/integrations/google_bigquery.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/google_bigquery.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "Google BigQuery"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# Google BigQuery
+
+
+<img src="https://netdata.cloud/img/bigquery.png" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after the libraries are installed.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | yes |
+| username | Username for HTTP authentication | my_username | no |
+| password | Password for HTTP authentication | my_password | no |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
+| prefix | The prefix to add to all metrics. | netdata | no |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | no |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | no |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | no |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+```yaml
+destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the external database server stays unavailable for more than that many iterations, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter `filter` that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
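+
+For example, a minimal sketch combining both filters in a connector instance section (the instance name `my_instance` is only illustrative; the patterns are the ones discussed above):
+
+```yaml
+[prometheus_remote_write:my_instance]
+    send hosts matching = !*child* *db*
+    send charts matching = !*reads apps.*
+```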
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add the `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/irondb.md b/exporting/prometheus/integrations/irondb.md
new file mode 100644
index 00000000..450f8833
--- /dev/null
+++ b/exporting/prometheus/integrations/irondb.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/irondb.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "IRONdb"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# IRONdb
+
+
+<img src="https://netdata.cloud/img/irondb.png" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after the libraries are installed.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | yes |
+| username | Username for HTTP authentication | my_username | no |
+| password | Password for HTTP authentication | my_password | no |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
+| prefix | The prefix to add to all metrics. | netdata | no |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | no |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | no |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | no |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+```yaml
+destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the external database server stays unavailable for more than that many iterations, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter `filter` that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
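+
+For example, a minimal sketch combining both filters in a connector instance section (the instance name `my_instance` is only illustrative; the patterns are the ones discussed above):
+
+```yaml
+[prometheus_remote_write:my_instance]
+    send hosts matching = !*child* *db*
+    send charts matching = !*reads apps.*
+```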
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add the `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/kafka.md b/exporting/prometheus/integrations/kafka.md
new file mode 100644
index 00000000..e052620c
--- /dev/null
+++ b/exporting/prometheus/integrations/kafka.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/kafka.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "Kafka"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# Kafka
+
+
+<img src="https://netdata.cloud/img/kafka.svg" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after the libraries are installed.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | yes |
+| username | Username for HTTP authentication | my_username | no |
+| password | Password for HTTP authentication | my_password | no |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
+| prefix | The prefix to add to all metrics. | netdata | no |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | no |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | no |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | no |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+```yaml
+destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the external database server stays unavailable for more than that many iterations, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter `filter` that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
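+
+For example, a minimal sketch combining both filters in a connector instance section (the instance name `my_instance` is only illustrative; the patterns are the ones discussed above):
+
+```yaml
+[prometheus_remote_write:my_instance]
+    send hosts matching = !*child* *db*
+    send charts matching = !*reads apps.*
+```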
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add the `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/m3db.md b/exporting/prometheus/integrations/m3db.md
new file mode 100644
index 00000000..689e8e85
--- /dev/null
+++ b/exporting/prometheus/integrations/m3db.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/m3db.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "M3DB"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# M3DB
+
+
+<img src="https://netdata.cloud/img/m3db.png" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after the libraries are installed.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | yes |
+| username | Username for HTTP authentication | my_username | no |
+| password | Password for HTTP authentication | my_password | no |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
+| prefix | The prefix to add to all metrics. | netdata | no |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | no |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | no |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | no |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+```yaml
+destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the external database server stays unavailable for more than that many iterations, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter `filter` that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
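+
+For example, a minimal sketch combining both filters in a connector instance section (the instance name `my_instance` is only illustrative; the patterns are the ones discussed above):
+
+```yaml
+[prometheus_remote_write:my_instance]
+    send hosts matching = !*child* *db*
+    send charts matching = !*reads apps.*
+```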
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add the `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/metricfire.md b/exporting/prometheus/integrations/metricfire.md
new file mode 100644
index 00000000..2d69e33f
--- /dev/null
+++ b/exporting/prometheus/integrations/metricfire.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/metricfire.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "MetricFire"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# MetricFire
+
+
+<img src="https://netdata.cloud/img/metricfire.png" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after the libraries are installed.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | yes |
+| username | Username for HTTP authentication | my_username | no |
+| password | Password for HTTP authentication | my_password | no |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
+| prefix | The prefix to add to all metrics. | netdata | no |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | no |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | no |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | no |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+```yaml
+destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the external database server stays unavailable for more than that many iterations, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter `filter` that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
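+
+For example, a minimal sketch combining both filters in a connector instance section (the instance name `my_instance` is only illustrative; the patterns are the ones discussed above):
+
+```yaml
+[prometheus_remote_write:my_instance]
+    send hosts matching = !*child* *db*
+    send charts matching = !*reads apps.*
+```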
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add the `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/new_relic.md b/exporting/prometheus/integrations/new_relic.md
new file mode 100644
index 00000000..f488b620
--- /dev/null
+++ b/exporting/prometheus/integrations/new_relic.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/new_relic.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "New Relic"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# New Relic
+
+
+<img src="https://netdata.cloud/img/newrelic.svg" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after the libraries are installed.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | yes |
+| username | Username for HTTP authentication | my_username | no |
+| password | Password for HTTP authentication | my_password | no |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
+| prefix | The prefix to add to all metrics. | netdata | no |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | no |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | no |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | no |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+```yaml
+destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the external database server stays unavailable for more than that many iterations, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter `filter` that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
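+
+For example, a minimal sketch combining both filters in a connector instance section (the instance name `my_instance` is only illustrative; the patterns are the ones discussed above):
+
+```yaml
+[prometheus_remote_write:my_instance]
+    send hosts matching = !*child* *db*
+    send charts matching = !*reads apps.*
+```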
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add the `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/postgresql.md b/exporting/prometheus/integrations/postgresql.md
new file mode 100644
index 00000000..a1b81339
--- /dev/null
+++ b/exporting/prometheus/integrations/postgresql.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/postgresql.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "PostgreSQL"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# PostgreSQL
+
+
+<img src="https://netdata.cloud/img/postgres.svg" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after the libraries are installed.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | yes |
+| username | Username for HTTP authentication | my_username | no |
+| password | Password for HTTP authentication | my_password | no |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
+| prefix | The prefix to add to all metrics. | netdata | no |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | no |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | no |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | no |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+```yaml
+destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the external database server stays unavailable for more than that many iterations, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter `filter` that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
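+
+For example, a minimal sketch combining both filters in a connector instance section (the instance name `my_instance` is only illustrative; the patterns are the ones discussed above):
+
+```yaml
+[prometheus_remote_write:my_instance]
+    send hosts matching = !*child* *db*
+    send charts matching = !*reads apps.*
+```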
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add the `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/prometheus_remote_write.md b/exporting/prometheus/integrations/prometheus_remote_write.md
new file mode 100644
index 00000000..b9ce730e
--- /dev/null
+++ b/exporting/prometheus/integrations/prometheus_remote_write.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/prometheus_remote_write.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "Prometheus Remote Write"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# Prometheus Remote Write
+
+
+<img src="https://netdata.cloud/img/prometheus.svg" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support buffer on failures.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after the libraries are installed.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | yes |
+| username | Username for HTTP authentication | my_username | no |
+| password | Password for HTTP authentication | my_password | no |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
+| prefix | The prefix to add to all metrics. | netdata | no |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | no |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | no |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | no |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+ ```yaml
+ destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+ ```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
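+
+For example, a minimal sketch of the relevant `exporting.conf` lines for a parent Netdata that should export only hosts named `*db*` while excluding children (the instance name `my_instance` and the destination address are illustrative):
+
+```yaml
+[prometheus_remote_write:my_instance]
+    enabled = yes
+    destination = 10.11.14.2:2003
+    send hosts matching = !*child* *db*
+```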
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter `filter` that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
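+
+As a sketch, assuming a Netdata Agent listening on the default `localhost:19999`, the same pattern can be applied to a single query through the `filter` URL parameter of the `allmetrics` endpoint instead of the configuration option:
+
+```bash
+# keep only apps.* charts, excluding those ending in "reads", for this query only
+curl -G 'http://localhost:19999/api/v1/allmetrics' \
+     --data-urlencode 'format=prometheus' \
+     --data-urlencode 'filter=!*reads apps.*'
+```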
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/quasardb.md b/exporting/prometheus/integrations/quasardb.md
new file mode 100644
index 00000000..48d2419e
--- /dev/null
+++ b/exporting/prometheus/integrations/quasardb.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/quasardb.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "QuasarDB"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# QuasarDB
+
+
+<img src="https://netdata.cloud/img/quasar.jpeg" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support the `buffer on failures` option.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | yes |
+| username | Username for HTTP authentication | my_username | no |
+| password | Password for HTTP authentication | my_password | no |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
+| prefix | The prefix to add to all metrics. | netdata | no |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | no |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | no |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
+| send names instead of ids | Controls whether Netdata sends metric names or IDs to the external database (yes/no). | | no |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+ ```yaml
+ destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+ ```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
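+
+For example, a minimal sketch of the relevant `exporting.conf` lines for a parent Netdata that should export only hosts named `*db*` while excluding children (the instance name `my_instance` and the destination address are illustrative):
+
+```yaml
+[prometheus_remote_write:my_instance]
+    enabled = yes
+    destination = 10.11.14.2:2003
+    send hosts matching = !*child* *db*
+```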
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter `filter` that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
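+
+As a sketch, assuming a Netdata Agent listening on the default `localhost:19999`, the same pattern can be applied to a single query through the `filter` URL parameter of the `allmetrics` endpoint instead of the configuration option:
+
+```bash
+# keep only apps.* charts, excluding those ending in "reads", for this query only
+curl -G 'http://localhost:19999/api/v1/allmetrics' \
+     --data-urlencode 'format=prometheus' \
+     --data-urlencode 'filter=!*reads apps.*'
+```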
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/splunk_signalfx.md b/exporting/prometheus/integrations/splunk_signalfx.md
new file mode 100644
index 00000000..324101b2
--- /dev/null
+++ b/exporting/prometheus/integrations/splunk_signalfx.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/splunk_signalfx.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "Splunk SignalFx"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# Splunk SignalFx
+
+
+<img src="https://netdata.cloud/img/splunk.svg" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support the `buffer on failures` option.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | yes |
+| username | Username for HTTP authentication | my_username | no |
+| password | Password for HTTP authentication | my_password | no |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
+| prefix | The prefix to add to all metrics. | netdata | no |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | no |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | no |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
+| send names instead of ids | Controls whether Netdata sends metric names or IDs to the external database (yes/no). | | no |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+ ```yaml
+ destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+ ```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
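+
+For example, a minimal sketch of the relevant `exporting.conf` lines for a parent Netdata that should export only hosts named `*db*` while excluding children (the instance name `my_instance` and the destination address are illustrative):
+
+```yaml
+[prometheus_remote_write:my_instance]
+    enabled = yes
+    destination = 10.11.14.2:2003
+    send hosts matching = !*child* *db*
+```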
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter `filter` that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
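+
+As a sketch, assuming a Netdata Agent listening on the default `localhost:19999`, the same pattern can be applied to a single query through the `filter` URL parameter of the `allmetrics` endpoint instead of the configuration option:
+
+```bash
+# keep only apps.* charts, excluding those ending in "reads", for this query only
+curl -G 'http://localhost:19999/api/v1/allmetrics' \
+     --data-urlencode 'format=prometheus' \
+     --data-urlencode 'filter=!*reads apps.*'
+```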
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/thanos.md b/exporting/prometheus/integrations/thanos.md
new file mode 100644
index 00000000..77fe1159
--- /dev/null
+++ b/exporting/prometheus/integrations/thanos.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/thanos.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "Thanos"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# Thanos
+
+
+<img src="https://netdata.cloud/img/thanos.png" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support the `buffer on failures` option.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | yes |
+| username | Username for HTTP authentication | my_username | no |
+| password | Password for HTTP authentication | my_password | no |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
+| prefix | The prefix to add to all metrics. | netdata | no |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | no |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | no |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
+| send names instead of ids | Controls whether Netdata sends metric names or IDs to the external database (yes/no). | | no |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+ ```yaml
+ destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+ ```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
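+
+For example, a minimal sketch of the relevant `exporting.conf` lines for a parent Netdata that should export only hosts named `*db*` while excluding children (the instance name `my_instance` and the destination address are illustrative):
+
+```yaml
+[prometheus_remote_write:my_instance]
+    enabled = yes
+    destination = 10.11.14.2:2003
+    send hosts matching = !*child* *db*
+```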
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter `filter` that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
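+
+As a sketch, assuming a Netdata Agent listening on the default `localhost:19999`, the same pattern can be applied to a single query through the `filter` URL parameter of the `allmetrics` endpoint instead of the configuration option:
+
+```bash
+# keep only apps.* charts, excluding those ending in "reads", for this query only
+curl -G 'http://localhost:19999/api/v1/allmetrics' \
+     --data-urlencode 'format=prometheus' \
+     --data-urlencode 'filter=!*reads apps.*'
+```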
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/tikv.md b/exporting/prometheus/integrations/tikv.md
new file mode 100644
index 00000000..656ee695
--- /dev/null
+++ b/exporting/prometheus/integrations/tikv.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/tikv.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "TiKV"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# TiKV
+
+
+<img src="https://netdata.cloud/img/tikv.png" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support the `buffer on failures` option.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | yes |
+| username | Username for HTTP authentication | my_username | no |
+| password | Password for HTTP authentication | my_password | no |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
+| prefix | The prefix to add to all metrics. | netdata | no |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | no |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | no |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
+| send names instead of ids | Controls whether Netdata sends metric names or IDs to the external database (yes/no). | | no |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+ ```yaml
+ destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+ ```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
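+
+For example, a minimal sketch of the relevant `exporting.conf` lines for a parent Netdata that should export only hosts named `*db*` while excluding children (the instance name `my_instance` and the destination address are illustrative):
+
+```yaml
+[prometheus_remote_write:my_instance]
+    enabled = yes
+    destination = 10.11.14.2:2003
+    send hosts matching = !*child* *db*
+```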
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter `filter` that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
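+
+As a sketch, assuming a Netdata Agent listening on the default `localhost:19999`, the same pattern can be applied to a single query through the `filter` URL parameter of the `allmetrics` endpoint instead of the configuration option:
+
+```bash
+# keep only apps.* charts, excluding those ending in "reads", for this query only
+curl -G 'http://localhost:19999/api/v1/allmetrics' \
+     --data-urlencode 'format=prometheus' \
+     --data-urlencode 'filter=!*reads apps.*'
+```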
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/timescaledb.md b/exporting/prometheus/integrations/timescaledb.md
new file mode 100644
index 00000000..681a0a61
--- /dev/null
+++ b/exporting/prometheus/integrations/timescaledb.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/timescaledb.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "TimescaleDB"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# TimescaleDB
+
+
+<img src="https://netdata.cloud/img/timescale.png" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support the `buffer on failures` option.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | yes |
+| username | Username for HTTP authentication | my_username | no |
+| password | Password for HTTP authentication | my_password | no |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
+| prefix | The prefix to add to all metrics. | netdata | no |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | no |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | no |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
+| send names instead of ids | Controls whether Netdata sends metric names or IDs to the external database (yes/no). | | no |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+ ```yaml
+ destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+ ```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
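+
+For example, a minimal sketch of the relevant `exporting.conf` lines for a parent Netdata that should export only hosts named `*db*` while excluding children (the instance name `my_instance` and the destination address are illustrative):
+
+```yaml
+[prometheus_remote_write:my_instance]
+    enabled = yes
+    destination = 10.11.14.2:2003
+    send hosts matching = !*child* *db*
+```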
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter `filter` that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
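+
+As a sketch, assuming a Netdata Agent listening on the default `localhost:19999`, the same pattern can be applied to a single query through the `filter` URL parameter of the `allmetrics` endpoint instead of the configuration option:
+
+```bash
+# keep only apps.* charts, excluding those ending in "reads", for this query only
+curl -G 'http://localhost:19999/api/v1/allmetrics' \
+     --data-urlencode 'format=prometheus' \
+     --data-urlencode 'filter=!*reads apps.*'
+```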
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/victoriametrics.md b/exporting/prometheus/integrations/victoriametrics.md
new file mode 100644
index 00000000..114aefc8
--- /dev/null
+++ b/exporting/prometheus/integrations/victoriametrics.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/victoriametrics.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "VictoriaMetrics"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# VictoriaMetrics
+
+
+<img src="https://netdata.cloud/img/victoriametrics.png" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support the `buffer on failures` option.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | yes |
+| username | Username for HTTP authentication | my_username | no |
+| password | Password for HTTP authentication | my_password | no |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
+| prefix | The prefix to add to all metrics. | netdata | no |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | no |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | no |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
+| send names instead of ids | Controls whether Netdata sends metric names or IDs to the external database (yes/no). | | no |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+ ```yaml
+ destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+ ```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
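+
+For example, a minimal sketch of the relevant `exporting.conf` lines for a parent Netdata that should export only hosts named `*db*` while excluding children (the instance name `my_instance` and the destination address are illustrative):
+
+```yaml
+[prometheus_remote_write:my_instance]
+    enabled = yes
+    destination = 10.11.14.2:2003
+    send hosts matching = !*child* *db*
+```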
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter `filter` that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
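+
+As a sketch, assuming a Netdata Agent listening on the default `localhost:19999`, the same pattern can be applied to a single query through the `filter` URL parameter of the `allmetrics` endpoint instead of the configuration option:
+
+```bash
+# keep only apps.* charts, excluding those ending in "reads", for this query only
+curl -G 'http://localhost:19999/api/v1/allmetrics' \
+     --data-urlencode 'format=prometheus' \
+     --data-urlencode 'filter=!*reads apps.*'
+```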
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/vmware_aria.md b/exporting/prometheus/integrations/vmware_aria.md
new file mode 100644
index 00000000..493d3550
--- /dev/null
+++ b/exporting/prometheus/integrations/vmware_aria.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/vmware_aria.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "VMware Aria"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# VMware Aria
+
+
+<img src="https://netdata.cloud/img/aria.png" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support the `buffer on failures` option.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | yes |
+| username | Username for HTTP authentication | my_username | no |
+| password | Password for HTTP authentication | my_password | no |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
+| prefix | The prefix to add to all metrics. | netdata | no |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | no |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | no |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
+| send names instead of ids | Controls whether Netdata sends metric names or IDs to the external database (yes/no). | | no |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+ ```yaml
+ destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+ ```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
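+
+For example, a minimal sketch of the relevant `exporting.conf` lines for a parent Netdata that should export only hosts named `*db*` while excluding children (the instance name `my_instance` and the destination address are illustrative):
+
+```yaml
+[prometheus_remote_write:my_instance]
+    enabled = yes
+    destination = 10.11.14.2:2003
+    send hosts matching = !*child* *db*
+```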
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter `filter` that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
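+
+As a sketch, assuming a Netdata Agent listening on the default `localhost:19999`, the same pattern can be applied to a single query through the `filter` URL parameter of the `allmetrics` endpoint instead of the configuration option:
+
+```bash
+# keep only apps.* charts, excluding those ending in "reads", for this query only
+curl -G 'http://localhost:19999/api/v1/allmetrics' \
+     --data-urlencode 'format=prometheus' \
+     --data-urlencode 'filter=!*reads apps.*'
+```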
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/prometheus/integrations/wavefront.md b/exporting/prometheus/integrations/wavefront.md
new file mode 100644
index 00000000..a6bab056
--- /dev/null
+++ b/exporting/prometheus/integrations/wavefront.md
@@ -0,0 +1,158 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/integrations/wavefront.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/metadata.yaml"
+sidebar_label: "Wavefront"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# Wavefront
+
+
+<img src="https://netdata.cloud/img/wavefront.png" width="150"/>
+
+
+Use the Prometheus remote write exporting connector to archive your Netdata metrics to the external storage provider of your choice for long-term storage and further analysis.
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Limitations
+
+The remote write exporting connector does not support the `buffer on failures` option.
+
+
+## Setup
+
+### Prerequisites
+
+####
+
+- Netdata and the external storage provider of your choice, installed, configured and operational.
+- `protobuf` and `snappy` libraries installed.
+- Netdata reinstalled after installing the libraries.
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | yes |
+| username | Username for HTTP authentication | my_username | no |
+| password | Password for HTTP authentication | my_password | no |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
+| prefix | The prefix to add to all metrics. | netdata | no |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 20000 | no |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | no |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
+| send names instead of ids | Controls whether Netdata sends metric names or IDs to the external database (yes/no). | | no |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+ ```yaml
+ destination = 10.11.14.2:2003 10.11.14.3:4242 10.11.14.4:2003
+ ```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
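+
+For example, a minimal sketch of the relevant `exporting.conf` lines for a parent Netdata that should export only hosts named `*db*` while excluding children (the instance name `my_instance` and the destination address are illustrative):
+
+```yaml
+[prometheus_remote_write:my_instance]
+    enabled = yes
+    destination = 10.11.14.2:2003
+    send hosts matching = !*child* *db*
+```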
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter `filter` that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
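+
+As a sketch, assuming a Netdata Agent listening on the default `localhost:19999`, the same pattern can be applied to a single query through the `filter` URL parameter of the `allmetrics` endpoint instead of the configuration option:
+
+```bash
+# keep only apps.* charts, excluding those ending in "reads", for this query only
+curl -G 'http://localhost:19999/api/v1/allmetrics' \
+     --data-urlencode 'format=prometheus' \
+     --data-urlencode 'filter=!*reads apps.*'
+```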
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic example configuration for Prometheus remote write.
+
+```yaml
+[prometheus_remote_write:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+
+```
+##### Example configuration with HTTPS and HTTP authentication
+
+Add `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `remote_write:https:my_instance`.
+
+```yaml
+[prometheus_remote_write:https:my_instance]
+ enabled = yes
+ destination = 10.11.14.2:2003
+ remote write URL path = /receive
+ username = my_username
+ password = my_password
+
+```
+