path: root/exporting/json
authorDaniel Baumann <daniel.baumann@progress-linux.org>2024-04-19 02:57:58 +0000
committerDaniel Baumann <daniel.baumann@progress-linux.org>2024-04-19 02:57:58 +0000
commitbe1c7e50e1e8809ea56f2c9d472eccd8ffd73a97 (patch)
tree9754ff1ca740f6346cf8483ec915d4054bc5da2d /exporting/json
parentInitial commit. (diff)
Adding upstream version 1.44.3. (upstream/1.44.3, upstream)
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to '')
-rw-r--r--exporting/json/Makefile.am4
l---------exporting/json/README.md1
-rw-r--r--exporting/json/integrations/json.md147
-rw-r--r--exporting/json/json.c349
-rw-r--r--exporting/json/json.h21
-rw-r--r--exporting/json/metadata.yaml151
6 files changed, 673 insertions, 0 deletions
diff --git a/exporting/json/Makefile.am b/exporting/json/Makefile.am
new file mode 100644
index 00000000..babdcf0d
--- /dev/null
+++ b/exporting/json/Makefile.am
@@ -0,0 +1,4 @@
+# SPDX-License-Identifier: GPL-3.0-or-later
+
+AUTOMAKE_OPTIONS = subdir-objects
+MAINTAINERCLEANFILES = $(srcdir)/Makefile.in
diff --git a/exporting/json/README.md b/exporting/json/README.md
new file mode 120000
index 00000000..0a8793ca
--- /dev/null
+++ b/exporting/json/README.md
@@ -0,0 +1 @@
+integrations/json.md
\ No newline at end of file
diff --git a/exporting/json/integrations/json.md b/exporting/json/integrations/json.md
new file mode 100644
index 00000000..ab4699d9
--- /dev/null
+++ b/exporting/json/integrations/json.md
@@ -0,0 +1,147 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/json/README.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/json/metadata.yaml"
+sidebar_label: "JSON"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# JSON
+
+
+<img src="https://netdata.cloud/img/json.svg" width="150"/>
+
+
+Use the JSON connector for the exporting engine to archive your agent's metrics to JSON document databases for long-term storage,
+further analysis, or correlation with data from other sources.
+
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Setup
+
+### Prerequisites
+
+####
+
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+The following options can be defined for this exporter.
+
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | localhost:5448 | yes |
+| username | Username for HTTP authentication | my_username | no |
+| password | Password for HTTP authentication | my_password | no |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | no |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
+| prefix | The prefix to add to all metrics. | Netdata | no |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 2 * update_every * 1000 | no |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | no |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | no |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+ ```yaml
+ destination = localhost:5448
+ ```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
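+
+For example, a minimal sketch combining an IPv6 address with a fallback server (the address and hostname below are illustrative, not defaults):
+ ```yaml
+ destination = [::1]:5448 fallback.example.com:5448
+ ```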
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+
+##### buffer on failures
+
+If the external database server still fails to receive the data after that many failed iterations, data loss on the connector instance is expected (Netdata will also log it).
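+
+As a rough sizing sketch, the buffered window is `update every` multiplied by `buffer on failures`, in seconds (the values below are illustrative):
+ ```yaml
+ # tolerate roughly 5 minutes of downtime at a 10-second update interval
+ update every = 10
+ buffer on failures = 30
+ ```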
+
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
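+
+For example, a minimal sketch of the pattern discussed above (the host names it matches are illustrative):
+ ```yaml
+ send hosts matching = !*child* *db*
+ ```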
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter filter that can be used while querying allmetrics. The URL parameter
+has a higher priority than the configuration option.
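+
+A minimal sketch using the pattern discussed above:
+ ```yaml
+ send charts matching = !*reads apps.*
+ ```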
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
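+
+If you prefer the human-friendly names in the exported documents, a minimal sketch:
+ ```yaml
+ send names instead of ids = yes
+ ```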
+
+
+</details>
+
+#### Examples
+
+##### Basic configuration
+
+
+
+```yaml
+[json:my_json_instance]
+ enabled = yes
+ destination = localhost:5448
+
+```
+##### Configuration with HTTPS and HTTP authentication
+
+Add the `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `json:https:my_json_instance`.
+
+```yaml
+[json:my_json_instance]
+ enabled = yes
+ destination = localhost:5448
+ username = my_username
+ password = my_password
+
+```
+
diff --git a/exporting/json/json.c b/exporting/json/json.c
new file mode 100644
index 00000000..d916fe77
--- /dev/null
+++ b/exporting/json/json.c
@@ -0,0 +1,349 @@
+// SPDX-License-Identifier: GPL-3.0-or-later
+
+#include "json.h"
+
+/**
+ * Initialize JSON connector instance
+ *
+ * @param instance an instance data structure.
+ * @return Returns 0 on success, 1 on failure.
+ */
+int init_json_instance(struct instance *instance)
+{
+ instance->worker = simple_connector_worker;
+
+ struct simple_connector_config *connector_specific_config = callocz(1, sizeof(struct simple_connector_config));
+ instance->config.connector_specific_config = (void *)connector_specific_config;
+ connector_specific_config->default_port = 5448;
+
+ struct simple_connector_data *connector_specific_data = callocz(1, sizeof(struct simple_connector_data));
+ instance->connector_specific_data = connector_specific_data;
+
+ instance->start_batch_formatting = NULL;
+ instance->start_host_formatting = format_host_labels_json_plaintext;
+ instance->start_chart_formatting = NULL;
+
+ if (EXPORTING_OPTIONS_DATA_SOURCE(instance->config.options) == EXPORTING_SOURCE_DATA_AS_COLLECTED)
+ instance->metric_formatting = format_dimension_collected_json_plaintext;
+ else
+ instance->metric_formatting = format_dimension_stored_json_plaintext;
+
+ instance->end_chart_formatting = NULL;
+ instance->variables_formatting = NULL;
+ instance->end_host_formatting = flush_host_labels;
+ instance->end_batch_formatting = simple_connector_end_batch;
+
+ instance->prepare_header = NULL;
+
+ instance->check_response = exporting_discard_response;
+
+ instance->buffer = (void *)buffer_create(0, &netdata_buffers_statistics.buffers_exporters);
+ if (!instance->buffer) {
+ netdata_log_error("EXPORTING: cannot create buffer for json exporting connector instance %s", instance->config.name);
+ return 1;
+ }
+
+ simple_connector_init(instance);
+
+ if (uv_mutex_init(&instance->mutex))
+ return 1;
+ if (uv_cond_init(&instance->cond_var))
+ return 1;
+
+ return 0;
+}
+
+/**
+ * Initialize JSON connector instance for HTTP protocol
+ *
+ * @param instance an instance data structure.
+ * @return Returns 0 on success, 1 on failure.
+ */
+int init_json_http_instance(struct instance *instance)
+{
+ instance->worker = simple_connector_worker;
+
+ struct simple_connector_config *connector_specific_config = callocz(1, sizeof(struct simple_connector_config));
+ instance->config.connector_specific_config = (void *)connector_specific_config;
+ connector_specific_config->default_port = 5448;
+
+ struct simple_connector_data *connector_specific_data = callocz(1, sizeof(struct simple_connector_data));
+ instance->connector_specific_data = connector_specific_data;
+
+#ifdef ENABLE_HTTPS
+ connector_specific_data->ssl = NETDATA_SSL_UNSET_CONNECTION;
+ if (instance->config.options & EXPORTING_OPTION_USE_TLS) {
+ netdata_ssl_initialize_ctx(NETDATA_SSL_EXPORTING_CTX);
+ }
+#endif
+
+ instance->start_batch_formatting = open_batch_json_http;
+ instance->start_host_formatting = format_host_labels_json_plaintext;
+ instance->start_chart_formatting = NULL;
+
+ if (EXPORTING_OPTIONS_DATA_SOURCE(instance->config.options) == EXPORTING_SOURCE_DATA_AS_COLLECTED)
+ instance->metric_formatting = format_dimension_collected_json_plaintext;
+ else
+ instance->metric_formatting = format_dimension_stored_json_plaintext;
+
+ instance->end_chart_formatting = NULL;
+ instance->variables_formatting = NULL;
+ instance->end_host_formatting = flush_host_labels;
+ instance->end_batch_formatting = close_batch_json_http;
+
+ instance->prepare_header = json_http_prepare_header;
+
+ instance->check_response = exporting_discard_response;
+
+ instance->buffer = (void *)buffer_create(0, &netdata_buffers_statistics.buffers_exporters);
+
+ simple_connector_init(instance);
+
+ if (uv_mutex_init(&instance->mutex))
+ return 1;
+ if (uv_cond_init(&instance->cond_var))
+ return 1;
+
+ return 0;
+}
+
+/**
+ * Format host labels for JSON connector
+ *
+ * @param instance an instance data structure.
+ * @param host a data collecting host.
+ * @return Always returns 0.
+ */
+
+int format_host_labels_json_plaintext(struct instance *instance, RRDHOST *host)
+{
+ if (!instance->labels_buffer)
+ instance->labels_buffer = buffer_create(1024, &netdata_buffers_statistics.buffers_exporters);
+
+ if (unlikely(!sending_labels_configured(instance)))
+ return 0;
+
+ buffer_strcat(instance->labels_buffer, "\"labels\":{");
+ rrdlabels_to_buffer(host->rrdlabels, instance->labels_buffer, "", ":", "\"", ",",
+ exporting_labels_filter_callback, instance,
+ NULL, sanitize_json_string);
+ buffer_strcat(instance->labels_buffer, "},");
+
+ return 0;
+}
+
+/**
+ * Format dimension using collected data for JSON connector
+ *
+ * @param instance an instance data structure.
+ * @param rd a dimension.
+ * @return Always returns 0.
+ */
+int format_dimension_collected_json_plaintext(struct instance *instance, RRDDIM *rd)
+{
+ RRDSET *st = rd->rrdset;
+ RRDHOST *host = st->rrdhost;
+
+ const char *tags_pre = "", *tags_post = "", *tags = rrdhost_tags(host);
+ if (!tags)
+ tags = "";
+
+ if (*tags) {
+ if (*tags == '{' || *tags == '[' || *tags == '"') {
+ tags_pre = "\"host_tags\":";
+ tags_post = ",";
+ } else {
+ tags_pre = "\"host_tags\":\"";
+ tags_post = "\",";
+ }
+ }
+
+ if (instance->config.type == EXPORTING_CONNECTOR_TYPE_JSON_HTTP) {
+ if (buffer_strlen((BUFFER *)instance->buffer) > 2)
+ buffer_strcat(instance->buffer, ",\n");
+ }
+
+ buffer_sprintf(
+ instance->buffer,
+
+ "{"
+ "\"prefix\":\"%s\","
+ "\"hostname\":\"%s\","
+ "%s%s%s"
+ "%s"
+
+ "\"chart_id\":\"%s\","
+ "\"chart_name\":\"%s\","
+ "\"chart_family\":\"%s\","
+ "\"chart_context\":\"%s\","
+ "\"chart_type\":\"%s\","
+ "\"units\":\"%s\","
+
+ "\"id\":\"%s\","
+ "\"name\":\"%s\","
+ "\"value\":" COLLECTED_NUMBER_FORMAT ","
+
+ "\"timestamp\":%llu}",
+
+ instance->config.prefix,
+ (host == localhost) ? instance->config.hostname : rrdhost_hostname(host),
+ tags_pre,
+ tags,
+ tags_post,
+ instance->labels_buffer ? buffer_tostring(instance->labels_buffer) : "",
+
+ rrdset_id(st),
+ rrdset_name(st),
+ rrdset_family(st),
+ rrdset_context(st),
+ rrdset_parts_type(st),
+ rrdset_units(st),
+ rrddim_id(rd),
+ rrddim_name(rd),
+ rd->collector.last_collected_value,
+
+ (unsigned long long)rd->collector.last_collected_time.tv_sec);
+
+ if (instance->config.type != EXPORTING_CONNECTOR_TYPE_JSON_HTTP) {
+ buffer_strcat(instance->buffer, "\n");
+ }
+
+ return 0;
+}
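+
+/*
+ * Illustrative shape of one document produced by the formatter above; all
+ * field values are hypothetical, and the optional host_tags/labels fields
+ * (inserted before chart_id when configured) are omitted:
+ *
+ *   {"prefix":"netdata","hostname":"example-host",
+ *    "chart_id":"system.cpu","chart_name":"system.cpu","chart_family":"cpu",
+ *    "chart_context":"system.cpu","chart_type":"system","units":"percentage",
+ *    "id":"user","name":"user","value":1234,"timestamp":1700000000}
+ */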
+
+/**
+ * Format dimension using a calculated value from stored data for JSON connector
+ *
+ * @param instance an instance data structure.
+ * @param rd a dimension.
+ * @return Always returns 0.
+ */
+int format_dimension_stored_json_plaintext(struct instance *instance, RRDDIM *rd)
+{
+ RRDSET *st = rd->rrdset;
+ RRDHOST *host = st->rrdhost;
+
+ time_t last_t;
+ NETDATA_DOUBLE value = exporting_calculate_value_from_stored_data(instance, rd, &last_t);
+
+ if(isnan(value))
+ return 0;
+
+ const char *tags_pre = "", *tags_post = "", *tags = rrdhost_tags(host);
+ if (!tags)
+ tags = "";
+
+ if (*tags) {
+ if (*tags == '{' || *tags == '[' || *tags == '"') {
+ tags_pre = "\"host_tags\":";
+ tags_post = ",";
+ } else {
+ tags_pre = "\"host_tags\":\"";
+ tags_post = "\",";
+ }
+ }
+
+ if (instance->config.type == EXPORTING_CONNECTOR_TYPE_JSON_HTTP) {
+ if (buffer_strlen((BUFFER *)instance->buffer) > 2)
+ buffer_strcat(instance->buffer, ",\n");
+ }
+
+ buffer_sprintf(
+ instance->buffer,
+ "{"
+ "\"prefix\":\"%s\","
+ "\"hostname\":\"%s\","
+ "%s%s%s"
+ "%s"
+
+ "\"chart_id\":\"%s\","
+ "\"chart_name\":\"%s\","
+ "\"chart_family\":\"%s\","
+ "\"chart_context\": \"%s\","
+ "\"chart_type\":\"%s\","
+ "\"units\": \"%s\","
+
+ "\"id\":\"%s\","
+ "\"name\":\"%s\","
+ "\"value\":" NETDATA_DOUBLE_FORMAT ","
+
+ "\"timestamp\": %llu}",
+
+ instance->config.prefix,
+ (host == localhost) ? instance->config.hostname : rrdhost_hostname(host),
+ tags_pre,
+ tags,
+ tags_post,
+ instance->labels_buffer ? buffer_tostring(instance->labels_buffer) : "",
+
+ rrdset_id(st),
+ rrdset_name(st),
+ rrdset_family(st),
+ rrdset_context(st),
+ rrdset_parts_type(st),
+ rrdset_units(st),
+ rrddim_id(rd),
+ rrddim_name(rd),
+ value,
+
+ (unsigned long long)last_t);
+
+ if (instance->config.type != EXPORTING_CONNECTOR_TYPE_JSON_HTTP) {
+ buffer_strcat(instance->buffer, "\n");
+ }
+
+ return 0;
+}
+
+/**
+ * Open a JSON list for a batch
+ *
+ * @param instance an instance data structure.
+ * @return Always returns 0.
+ */
+int open_batch_json_http(struct instance *instance)
+{
+ buffer_strcat(instance->buffer, "[\n");
+
+ return 0;
+}
+
+/**
+ * Close a JSON list for a batch and update buffered bytes counter
+ *
+ * @param instance an instance data structure.
+ * @return Always returns 0.
+ */
+int close_batch_json_http(struct instance *instance)
+{
+ buffer_strcat(instance->buffer, "\n]\n");
+
+ simple_connector_end_batch(instance);
+
+ return 0;
+}
+
+/**
+ * Prepare HTTP header
+ *
+ * @param instance an instance data structure.
+ */
+void json_http_prepare_header(struct instance *instance)
+{
+ struct simple_connector_data *simple_connector_data = instance->connector_specific_data;
+
+ buffer_sprintf(
+ simple_connector_data->last_buffer->header,
+ "POST /api/put HTTP/1.1\r\n"
+ "Host: %s\r\n"
+ "%s"
+ "Content-Type: application/json\r\n"
+ "Content-Length: %lu\r\n"
+ "\r\n",
+ instance->config.destination,
+ simple_connector_data->auth_string ? simple_connector_data->auth_string : "",
+ (unsigned long int) buffer_strlen(simple_connector_data->last_buffer->buffer));
+
+ return;
+}
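+
+/*
+ * Illustrative request produced with the header above, assuming HTTP Basic
+ * authentication; all values are hypothetical (the Authorization value is
+ * the base64 of the documentation's my_username:my_password example and the
+ * line is present only when credentials are configured):
+ *
+ *   POST /api/put HTTP/1.1
+ *   Host: localhost:5448
+ *   Authorization: Basic bXlfdXNlcm5hbWU6bXlfcGFzc3dvcmQ=
+ *   Content-Type: application/json
+ *   Content-Length: 123
+ *
+ *   [
+ *   { ...one JSON document per metric... }
+ *   ]
+ */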
diff --git a/exporting/json/json.h b/exporting/json/json.h
new file mode 100644
index 00000000..d916263a
--- /dev/null
+++ b/exporting/json/json.h
@@ -0,0 +1,21 @@
+// SPDX-License-Identifier: GPL-3.0-or-later
+
+#ifndef NETDATA_EXPORTING_JSON_H
+#define NETDATA_EXPORTING_JSON_H
+
+#include "exporting/exporting_engine.h"
+
+int init_json_instance(struct instance *instance);
+int init_json_http_instance(struct instance *instance);
+
+int format_host_labels_json_plaintext(struct instance *instance, RRDHOST *host);
+
+int format_dimension_collected_json_plaintext(struct instance *instance, RRDDIM *rd);
+int format_dimension_stored_json_plaintext(struct instance *instance, RRDDIM *rd);
+
+int open_batch_json_http(struct instance *instance);
+int close_batch_json_http(struct instance *instance);
+
+void json_http_prepare_header(struct instance *instance);
+
+#endif //NETDATA_EXPORTING_JSON_H
diff --git a/exporting/json/metadata.yaml b/exporting/json/metadata.yaml
new file mode 100644
index 00000000..d9f93e4a
--- /dev/null
+++ b/exporting/json/metadata.yaml
@@ -0,0 +1,151 @@
+# yamllint disable rule:line-length
+---
+id: 'export-json'
+meta:
+ name: 'JSON'
+ link: 'https://learn.netdata.cloud/docs/exporting/json-document-databases'
+ categories:
+ - export
+ icon_filename: 'json.svg'
+keywords:
+ - exporter
+ - json
+overview:
+ exporter_description: |
+ Use the JSON connector for the exporting engine to archive your agent's metrics to JSON document databases for long-term storage,
+ further analysis, or correlation with data from other sources.
+ exporter_limitations: ''
+setup:
+ prerequisites:
+ list:
+ - title: ''
+ description: ''
+ configuration:
+ file:
+ name: 'exporting.conf'
+ options:
+ description: |
+ The following options can be defined for this exporter.
+ folding:
+ title: 'Config options'
+ enabled: true
+ list:
+ - name: 'enabled'
+ default_value: 'no'
+ description: 'Enables or disables an exporting connector instance (yes|no).'
+ required: true
+ - name: 'destination'
+ default_value: 'localhost:5448'
+ description: 'Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics.'
+ required: true
+ detailed_description: |
+ The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+ - PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+ - IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+ - PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+ Example IPv4:
+ ```yaml
+ destination = localhost:5448
+ ```
+ When multiple servers are defined, Netdata will try the next one when the previous one fails.
+ - name: 'username'
+ default_value: 'my_username'
+ description: 'Username for HTTP authentication'
+ required: false
+ - name: 'password'
+ default_value: 'my_password'
+ description: 'Password for HTTP authentication'
+ required: false
+ - name: 'data source'
+ default_value: ''
+ description: 'Selects the kind of data that will be sent to the external database. (as collected|average|sum)'
+ required: false
+ - name: 'hostname'
+ default_value: '[global].hostname'
+ description: 'The hostname to be used for sending data to the external database server.'
+ required: false
+ - name: 'prefix'
+ default_value: 'Netdata'
+ description: 'The prefix to add to all metrics.'
+ required: false
+ - name: 'update every'
+ default_value: '10'
+ description: |
+ Frequency of sending data to the external database, in seconds.
+ required: false
+ detailed_description: |
+ Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+ send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+ - name: 'buffer on failures'
+ default_value: '10'
+ description: |
+ The number of iterations (`update every` seconds) to buffer data, when the external database server is not available.
+ required: false
+ detailed_description: |
+ If the external database server still fails to receive the data after that many failed iterations, data loss on the connector instance is expected (Netdata will also log it).
+ - name: 'timeout ms'
+ default_value: '2 * update_every * 1000'
+ description: 'The timeout in milliseconds to wait for the external database server to process the data.'
+ required: false
+ - name: 'send hosts matching'
+ default_value: 'localhost *'
+ description: |
+ Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns).
+ required: false
+ detailed_description: |
+ Includes one or more space separated patterns, using * as wildcard (any number of times within each pattern).
+ The patterns are checked against the hostname (the localhost is always checked as localhost), allowing us to
+ filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+ A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+ use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
+ - name: 'send charts matching'
+ default_value: '*'
+ description: |
+ One or more space separated patterns (use * as wildcard) checked against both chart id and chart name.
+ required: false
+ detailed_description: |
+ A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+ use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+ positive or negative). There is also a URL parameter filter that can be used while querying allmetrics. The URL parameter
+ has a higher priority than the configuration option.
+ - name: 'send names instead of ids'
+ default_value: ''
+ description: 'Controls the metric names Netdata should send to the external database (yes|no).'
+ required: false
+ detailed_description: |
+ Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+ are human friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+ different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+ - name: 'send configured labels'
+ default_value: ''
+ description: 'Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes|no).'
+ required: false
+ - name: 'send automatic labels'
+ default_value: ''
+ description: 'Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes|no).'
+ required: false
+ examples:
+ folding:
+ enabled: true
+ title: ''
+ list:
+ - name: 'Basic configuration'
+ folding:
+ enabled: false
+ description: ''
+ config: |
+ [json:my_json_instance]
+ enabled = yes
+ destination = localhost:5448
+ - name: 'Configuration with HTTPS and HTTP authentication'
+ folding:
+ enabled: false
+ description: 'Add the `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `json:https:my_json_instance`.'
+ config: |
+ [json:my_json_instance]
+ enabled = yes
+ destination = localhost:5448
+ username = my_username
+ password = my_password