commit     b5321aff06d6ea8d730d62aec2ffd8e9271c1ffc (patch)
author     Daniel Baumann <daniel.baumann@progress-linux.org>  2022-04-14 18:12:10 +0000
committer  Daniel Baumann <daniel.baumann@progress-linux.org>  2022-04-14 18:12:10 +0000
tree       36c41e35994786456154f9d3bf88c324763aeea4 /exporting
parent     Adding upstream version 1.33.1. (diff)
Adding upstream version 1.34.0. (tag: upstream/1.34.0)

Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'exporting')
-rw-r--r--  exporting/README.md                               | 14
-rw-r--r--  exporting/TIMESCALE.md                            |  4
-rw-r--r--  exporting/WALKTHROUGH.md                          |  2
-rw-r--r--  exporting/aws_kinesis/README.md                   |  2
-rw-r--r--  exporting/aws_kinesis/aws_kinesis.c               |  4
-rw-r--r--  exporting/check_filters.c                         | 18
-rw-r--r--  exporting/exporting_engine.h                      |  9
-rw-r--r--  exporting/graphite/README.md                      |  2
-rw-r--r--  exporting/json/README.md                          |  2
-rw-r--r--  exporting/mongodb/README.md                       |  2
-rw-r--r--  exporting/mongodb/mongodb.c                       |  2
-rwxr-xr-x  exporting/nc-exporting.sh                         | 44
-rw-r--r--  exporting/opentsdb/README.md                      |  2
-rw-r--r--  exporting/process_data.c                          |  4
-rw-r--r--  exporting/prometheus/README.md                    |  2
-rw-r--r--  exporting/prometheus/prometheus.c                 | 10
-rw-r--r--  exporting/prometheus/remote_write/README.md       |  2
-rw-r--r--  exporting/prometheus/remote_write/remote_write.c  |  2
-rw-r--r--  exporting/pubsub/README.md                        |  2
-rw-r--r--  exporting/pubsub/pubsub.c                         |  2
-rw-r--r--  exporting/read_config.c                           | 39
-rw-r--r--  exporting/send_data.c                             |  2
-rw-r--r--  exporting/tests/test_exporting_engine.c           |  9
23 files changed, 86 insertions(+), 95 deletions(-)
diff --git a/exporting/README.md b/exporting/README.md
index 18f56fbb6..ae9c8ccf2 100644
--- a/exporting/README.md
+++ b/exporting/README.md
@@ -20,12 +20,6 @@ the same time. You can have different update intervals and filters configured fo
 When you enable the exporting engine and a connector, the Netdata Agent exports metrics _beginning from the time you
 restart its process_, not the entire [database of long-term metrics](/docs/store/change-metrics-storage.md).
 
-The exporting engine has its own configuration file `exporting.conf`. The configuration is almost similar to the
-deprecated [backends](/backends/README.md#configuration) system. The most important difference is that the type of a
-connector should be specified in a section name before a colon and an instance name after the colon. Also, you can't use
-`host tags` anymore. Set your labels using the [`[host labels]`](/docs/guides/using-host-labels.md) section in
-`netdata.conf`.
-
 Since Netdata collects thousands of metrics per server per second, which would easily congest any database server when
 several Netdata servers are sending data to it, Netdata allows sending metrics at a lower frequency, by resampling them.
 
@@ -271,12 +265,6 @@ Configure individual connectors and override any global settings with the follow
 - `send automatic labels = yes | no` controls if automatically created labels, like `_os_name` or `_architecture`
   should be sent to the external database
 
-> Starting from Netdata v1.20 the host tags (defined in the `[backend]` section of `netdata.conf`) are parsed in
-> accordance with a configured backend type and stored as host labels so that they can be reused in API responses and
-> exporting connectors. The parsing is supported for graphite, json, opentsdb, and prometheus (default) backend types.
-> You can check how the host tags were parsed using the /api/v1/info API call. But, keep in mind that backends subsystem
-> is deprecated and will be deleted soon. Please move your existing tags to the `[host labels]` section.
-
 ## HTTPS
 
 Netdata can send metrics to external databases using the TLS/SSL protocol. Unfortunately, some of
@@ -311,4 +299,4 @@ Netdata adds 3 alarms:
 
 ![image](https://cloud.githubusercontent.com/assets/2662304/20463779/a46ed1c2-af43-11e6-91a5-07ca4533cac3.png)
 
-[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fexporting%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)](<>)
+
diff --git a/exporting/TIMESCALE.md b/exporting/TIMESCALE.md
index c98003ed4..07aa1b7a2 100644
--- a/exporting/TIMESCALE.md
+++ b/exporting/TIMESCALE.md
@@ -20,7 +20,7 @@ What's TimescaleDB? Here's how their team defines the project on their [GitHub p
 To get started archiving metrics to TimescaleDB right away, check out Mahlon's [`netdata-timescale-relay`
 repository](https://github.com/mahlonsmith/netdata-timescale-relay) on GitHub. Please be aware that backends subsystem
-is deprecated and Netdata configuration should be moved to the new `exporting.conf` configuration file. Use
+was removed and Netdata configuration should be moved to the new `exporting.conf` configuration file. Use
 
 ```conf
 [json:my_instance]
 ```
@@ -66,4 +66,4 @@ blog](https://blog.timescale.com/blog/writing-it-metrics-from-netdata-to-timesca
 Thank you to Mahlon, Rune, TimescaleDB, and the members of the Netdata community that requested and then built this
 exporting connection between Netdata and TimescaleDB!
 
-[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fexporting%2FTIMESCALE&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)](<>)
+
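For reference, the `[connector:instance]` section convention mentioned in the two READMEs above looks like this in practice. A minimal sketch of `exporting.conf`, with the destination address, port, and instance name as illustrative placeholders:

```conf
[exporting:global]
    enabled = yes
    update every = 10

[json:my_instance]
    enabled = yes
    # placeholder address; point this at your relay or database endpoint
    destination = localhost:5448
    update every = 10
```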
"localhost" : host->hostname; if (!instance->config.hosts_pattern || simple_pattern_matches(instance->config.hosts_pattern, host_name)) { - *flags |= RRDHOST_FLAG_BACKEND_SEND; + *flags |= RRDHOST_FLAG_EXPORTING_SEND; info("enabled exporting of host '%s' for instance '%s'", host_name, instance->config.name); } else { - *flags |= RRDHOST_FLAG_BACKEND_DONT_SEND; + *flags |= RRDHOST_FLAG_EXPORTING_DONT_SEND; info("disabled exporting of host '%s' for instance '%s'", host_name, instance->config.name); } } - if (likely(*flags & RRDHOST_FLAG_BACKEND_SEND)) + if (likely(*flags & RRDHOST_FLAG_EXPORTING_SEND)) return 1; else return 0; @@ -47,6 +47,10 @@ int rrdset_is_exportable(struct instance *instance, RRDSET *st) RRDHOST *host = st->rrdhost; #endif + // Do not export anomaly rates charts. + if (st->state && st->state->is_ar_chart) + return 0; + if (st->exporting_flags == NULL) st->exporting_flags = callocz(instance->engine->instance_num, sizeof(size_t)); @@ -61,18 +65,18 @@ int rrdset_is_exportable(struct instance *instance, RRDSET *st) *flags |= RRDSET_FLAG_EXPORTING_SEND; else { *flags |= RRDSET_FLAG_EXPORTING_IGNORE; - debug(D_BACKEND, "BACKEND: not sending chart '%s' of host '%s', because it is disabled for backends.", st->id, host->hostname); + debug(D_EXPORTING, "EXPORTING: not sending chart '%s' of host '%s', because it is disabled for exporting.", st->id, host->hostname); return 0; } } if(unlikely(!rrdset_is_available_for_exporting_and_alarms(st))) { - debug(D_BACKEND, "BACKEND: not sending chart '%s' of host '%s', because it is not available for backends.", st->id, host->hostname); + debug(D_EXPORTING, "EXPORTING: not sending chart '%s' of host '%s', because it is not available for exporting.", st->id, host->hostname); return 0; } if(unlikely(st->rrd_memory_mode == RRD_MEMORY_MODE_NONE && !(EXPORTING_OPTIONS_DATA_SOURCE(instance->config.options) == EXPORTING_SOURCE_DATA_AS_COLLECTED))) { - debug(D_BACKEND, "BACKEND: not sending chart '%s' of host '%s' because its memory mode is '%s' and the backend requires database access.", st->id, host->hostname, rrd_memory_mode_name(host->rrd_memory_mode)); + debug(D_EXPORTING, "EXPORTING: not sending chart '%s' of host '%s' because its memory mode is '%s' and the exporting engine requires database access.", st->id, host->hostname, rrd_memory_mode_name(host->rrd_memory_mode)); return 0; } diff --git a/exporting/exporting_engine.h b/exporting/exporting_engine.h index f08583fb5..20f260c15 100644 --- a/exporting/exporting_engine.h +++ b/exporting/exporting_engine.h @@ -34,6 +34,9 @@ typedef enum exporting_options { (EXPORTING_SOURCE_DATA_AS_COLLECTED | EXPORTING_SOURCE_DATA_AVERAGE | EXPORTING_SOURCE_DATA_SUM) #define EXPORTING_OPTIONS_DATA_SOURCE(exporting_options) (exporting_options & EXPORTING_OPTIONS_SOURCE_BITS) +extern EXPORTING_OPTIONS global_exporting_options; +extern const char *global_exporting_prefix; + #define sending_labels_configured(instance) \ (instance->config.options & (EXPORTING_OPTION_SEND_CONFIGURED_LABELS | EXPORTING_OPTION_SEND_AUTOMATIC_LABELS)) @@ -51,11 +54,11 @@ typedef enum exporting_connector_types { EXPORTING_CONNECTOR_TYPE_JSON_HTTP, // Send data in JSON format using HTTP API EXPORTING_CONNECTOR_TYPE_OPENTSDB, // Send data to OpenTSDB using telnet API EXPORTING_CONNECTOR_TYPE_OPENTSDB_HTTP, // Send data to OpenTSDB using HTTP API - EXPORTING_CONNECTOR_TYPE_PROMETHEUS_REMOTE_WRITE, // User selected to use Prometheus backend + EXPORTING_CONNECTOR_TYPE_PROMETHEUS_REMOTE_WRITE, // Send data using Prometheus remote write 
diff --git a/exporting/exporting_engine.h b/exporting/exporting_engine.h
index f08583fb5..20f260c15 100644
--- a/exporting/exporting_engine.h
+++ b/exporting/exporting_engine.h
@@ -34,6 +34,9 @@ typedef enum exporting_options {
     (EXPORTING_SOURCE_DATA_AS_COLLECTED | EXPORTING_SOURCE_DATA_AVERAGE | EXPORTING_SOURCE_DATA_SUM)
 #define EXPORTING_OPTIONS_DATA_SOURCE(exporting_options) (exporting_options & EXPORTING_OPTIONS_SOURCE_BITS)
 
+extern EXPORTING_OPTIONS global_exporting_options;
+extern const char *global_exporting_prefix;
+
 #define sending_labels_configured(instance) \
     (instance->config.options & (EXPORTING_OPTION_SEND_CONFIGURED_LABELS | EXPORTING_OPTION_SEND_AUTOMATIC_LABELS))
 
@@ -51,11 +54,11 @@ typedef enum exporting_connector_types {
     EXPORTING_CONNECTOR_TYPE_JSON_HTTP,               // Send data in JSON format using HTTP API
     EXPORTING_CONNECTOR_TYPE_OPENTSDB,                // Send data to OpenTSDB using telnet API
     EXPORTING_CONNECTOR_TYPE_OPENTSDB_HTTP,           // Send data to OpenTSDB using HTTP API
-    EXPORTING_CONNECTOR_TYPE_PROMETHEUS_REMOTE_WRITE, // User selected to use Prometheus backend
+    EXPORTING_CONNECTOR_TYPE_PROMETHEUS_REMOTE_WRITE, // Send data using Prometheus remote write protocol
     EXPORTING_CONNECTOR_TYPE_KINESIS,                 // Send message to AWS Kinesis
     EXPORTING_CONNECTOR_TYPE_PUBSUB,                  // Send message to Google Cloud Pub/Sub
     EXPORTING_CONNECTOR_TYPE_MONGODB,                 // Send data to MongoDB collection
-    EXPORTING_CONNECTOR_TYPE_NUM                      // Number of backend types
+    EXPORTING_CONNECTOR_TYPE_NUM                      // Number of exporting connector types
 } EXPORTING_CONNECTOR_TYPE;
 
 struct engine;
@@ -265,6 +268,8 @@ size_t exporting_name_copy(char *dst, const char *src, size_t max_len);
 int rrdhost_is_exportable(struct instance *instance, RRDHOST *host);
 int rrdset_is_exportable(struct instance *instance, RRDSET *st);
 
+extern EXPORTING_OPTIONS exporting_parse_data_source(const char *source, EXPORTING_OPTIONS exporting_options);
+
 calculated_number exporting_calculate_value_from_stored_data(
     struct instance *instance,
     RRDDIM *rd,
diff --git a/exporting/graphite/README.md b/exporting/graphite/README.md
index d755e0934..6c96c78c9 100644
--- a/exporting/graphite/README.md
+++ b/exporting/graphite/README.md
@@ -32,4 +32,4 @@ Add `:http` or `:https` modifiers to the connector type if you need to use other
 
 The Graphite connector is further configurable using additional settings. See the [exporting reference doc](/exporting/README.md#options) for details.
 
-[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fexporting%2Fjson%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)](<>)
+
diff --git a/exporting/json/README.md b/exporting/json/README.md
index 7cce463e2..d129ffbd7 100644
--- a/exporting/json/README.md
+++ b/exporting/json/README.md
@@ -32,4 +32,4 @@ Add `:http` or `:https` modifiers to the connector type if you need to use other
 
 The JSON connector is further configurable using additional settings. See the [exporting reference doc](/exporting/README.md#options) for details.
 
-[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fexporting%2Fjson%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)](<>)
+
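Both READMEs above reference the `:http`/`:https` modifier convention, which combines with the `[connector:instance]` section names. A hedged sketch of how such a section name can be split, not Netdata's actual parser: splitting at the *last* colon keeps modifiers like `opentsdb:http` with the connector type, while the instance name follows.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative only: split "connector[:modifier]:instance" at the last colon. */
static int split_section(const char *section, char *type, size_t tlen,
                         char *instance, size_t ilen) {
    const char *colon = strrchr(section, ':');
    if (!colon)
        return -1; /* no colon: not a connector:instance section */
    snprintf(type, tlen, "%.*s", (int)(colon - section), section);
    snprintf(instance, ilen, "%s", colon + 1);
    return 0;
}

int main(void) {
    char type[64], instance[64];
    if (split_section("opentsdb:http:my_instance", type, sizeof type, instance, sizeof instance) == 0)
        printf("type='%s' instance='%s'\n", type, instance); /* opentsdb:http / my_instance */
    return 0;
}
```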
diff --git a/exporting/mongodb/README.md b/exporting/mongodb/README.md
index 2934f38c5..b10d54716 100644
--- a/exporting/mongodb/README.md
+++ b/exporting/mongodb/README.md
@@ -35,4 +35,4 @@ You can find more information about the `destination` string URI format in the M
 The default socket timeout depends on the exporting connector update interval. The timeout is 500 ms shorter than the
 interval (but not less than 1000 ms). You can alter the timeout using the `sockettimeoutms` MongoDB URI option.
 
-[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fexporting%2Fmongodb%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)](<>)
+
diff --git a/exporting/mongodb/mongodb.c b/exporting/mongodb/mongodb.c
index 49ce95269..aab1770d2 100644
--- a/exporting/mongodb/mongodb.c
+++ b/exporting/mongodb/mongodb.c
@@ -327,7 +327,7 @@ void mongodb_connector_worker(void *instance_p)
         }
 
         debug(
-            D_BACKEND,
+            D_EXPORTING,
            "EXPORTING: mongodb_insert(): destination = %s, database = %s, collection = %s, data size = %zu",
            instance->config.destination,
            connector_specific_config->database,
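The socket-timeout rule quoted in the MongoDB README above (500 ms shorter than the update interval, floored at 1000 ms) reduces to a one-liner. A minimal sketch, with the seconds-to-milliseconds conversion as the only assumption:

```c
#include <stdio.h>

/* Sketch of the rule from mongodb/README.md: timeout is the update interval
 * minus 500 ms, but never below 1000 ms. */
static long socket_timeout_ms(long update_every_seconds) {
    long timeout = update_every_seconds * 1000L - 500L;
    return timeout < 1000L ? 1000L : timeout;
}

int main(void) {
    printf("%ld\n", socket_timeout_ms(10)); /* 9500 */
    printf("%ld\n", socket_timeout_ms(1));  /* 1000: the floor applies */
    return 0;
}
```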
diff --git a/exporting/nc-exporting.sh b/exporting/nc-exporting.sh
index 168c0d4b7..740f65d18 100755
--- a/exporting/nc-exporting.sh
+++ b/exporting/nc-exporting.sh
@@ -2,17 +2,17 @@
 
 # SPDX-License-Identifier: GPL-3.0-or-later
 
-# This is a simple backend database proxy, written in BASH, using the nc command.
+# This is a simple exporting proxy, written in BASH, using the nc command.
 # Run the script without any parameters for help.
 
 MODE="${1}"
 MY_PORT="${2}"
-BACKEND_HOST="${3}"
-BACKEND_PORT="${4}"
-FILE="${NETDATA_NC_BACKEND_DIR-/tmp}/netdata-nc-backend-${MY_PORT}"
+EXPORTING_HOST="${3}"
+EXPORTING_PORT="${4}"
+FILE="${NETDATA_NC_EXPORTING_DIR-/tmp}/netdata-nc-exporting-${MY_PORT}"
 
 log() {
-    logger --stderr --id=$$ --tag "netdata-nc-backend" "${*}"
+    logger --stderr --id=$$ --tag "netdata-nc-exporting" "${*}"
 }
 
 mync() {
@@ -28,7 +28,7 @@ mync() {
 }
 
 listen_save_replay_forever() {
-    local file="${1}" port="${2}" real_backend_host="${3}" real_backend_port="${4}" ret delay=1 started ended
+    local file="${1}" port="${2}" real_exporting_host="${3}" real_exporting_port="${4}" ret delay=1 started ended
 
     while true
     do
@@ -40,23 +40,23 @@ listen_save_replay_forever() {
 
         if [ -s "${file}" ]
         then
-            if [ -n "${real_backend_host}" ] && [ -n "${real_backend_port}" ]
+            if [ -n "${real_exporting_host}" ] && [ -n "${real_exporting_port}" ]
             then
-                log "Attempting to send the metrics to the real backend at ${real_backend_host}:${real_backend_port}"
+                log "Attempting to send the metrics to the real external database at ${real_exporting_host}:${real_exporting_port}"
 
-                mync "${real_backend_host}" "${real_backend_port}" <"${file}"
+                mync "${real_exporting_host}" "${real_exporting_port}" <"${file}"
                 ret=$?
 
                 if [ ${ret} -eq 0 ]
                 then
-                    log "Successfully sent the metrics to ${real_backend_host}:${real_backend_port}"
+                    log "Successfully sent the metrics to ${real_exporting_host}:${real_exporting_port}"
                     mv "${file}" "${file}.old"
                     touch "${file}"
                 else
-                    log "Failed to send the metrics to ${real_backend_host}:${real_backend_port} (nc returned ${ret}) - appending more data to ${file}"
+                    log "Failed to send the metrics to ${real_exporting_host}:${real_exporting_port} (nc returned ${ret}) - appending more data to ${file}"
                 fi
             else
-                log "No backend configured - appending more data to ${file}"
+                log "No external database configured - appending more data to ${file}"
             fi
         fi
 
@@ -92,7 +92,7 @@ if [ "${MODE}" = "start" ]
 
     # save our PID to the lock file
     echo "$$" >"${FILE}.lock"
 
-    listen_save_replay_forever "${FILE}" "${MY_PORT}" "${BACKEND_HOST}" "${BACKEND_PORT}"
+    listen_save_replay_forever "${FILE}" "${MY_PORT}" "${EXPORTING_HOST}" "${EXPORTING_PORT}"
     ret=$?
 
     log "listener exited."
@@ -131,20 +131,20 @@ else
     cat <<EOF
 Usage:
 
-       "${0}" start|stop PORT [BACKEND_HOST BACKEND_PORT]
+       "${0}" start|stop PORT [EXPORTING_HOST EXPORTING_PORT]
 
        PORT                  The port this script will listen
-                             (configure netdata to use this as a second backend)
+                             (configure netdata to use this as an external database)
 
-       BACKEND_HOST          The real backend host
-       BACKEND_PORT          The real backend port
+       EXPORTING_HOST        The real host for the external database
+       EXPORTING_PORT        The real port for the external database
 
-       This script can act as fallback backend for netdata.
+       This script can act as fallback database for netdata.
        It will receive metrics from netdata, save them to
        ${FILE}
-       and once netdata reconnects to the real-backend, this script
-       will push all metrics collected to the real-backend too and
-       wait for a failure to happen again.
+       and once netdata reconnects to the real external database,
+       this script will push all metrics collected to the real
+       external database too and wait for a failure to happen again.
 
        Only one netdata can connect to this script at a time.
        If you need fallback for multiple netdata, run this script
@@ -152,7 +152,7 @@ Usage:
 
        You can run me in the background with this:
 
-       screen -d -m "${0}" start PORT [BACKEND_HOST BACKEND_PORT]
+       screen -d -m "${0}" start PORT [EXPORTING_HOST EXPORTING_PORT]
 EOF
     exit 1
 fi
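For context, a plausible invocation of the renamed proxy script above; the host and port numbers are examples only:

```sh
# Start the fallback proxy on local port 2003, relaying to a real Graphite
# server at graphite.example.com:2003 (hosts and ports are illustrative).
./nc-exporting.sh start 2003 graphite.example.com 2003

# In exporting.conf, point the connector at the proxy instead of the real
# database, e.g.:  destination = localhost:2003

# Stop the listener for that port when it is no longer needed.
./nc-exporting.sh stop 2003
```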
diff --git a/exporting/opentsdb/README.md b/exporting/opentsdb/README.md
index 0ca6d2449..c9b1ab95a 100644
--- a/exporting/opentsdb/README.md
+++ b/exporting/opentsdb/README.md
@@ -32,4 +32,4 @@ Add `:http` or `:https` modifiers to the connector type if you need to use other
 
 The OpenTSDB connector is further configurable using additional settings. See the [exporting reference doc](/exporting/README.md#options) for details.
 
-[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fexporting%2Fopentsdb%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)](<>)
+
diff --git a/exporting/process_data.c b/exporting/process_data.c
index 2c0c2d17c..c77b7ad4a 100644
--- a/exporting/process_data.c
+++ b/exporting/process_data.c
@@ -109,7 +109,7 @@ calculated_number exporting_calculate_value_from_stored_data(
     if (unlikely(before < first_t || after > last_t)) {
         // the chart has not been updated in the wanted timeframe
         debug(
-            D_BACKEND,
+            D_EXPORTING,
             "EXPORTING: %s.%s.%s: aligned timeframe %lu to %lu is outside the chart's database range %lu to %lu",
             host->hostname,
             st->id,
@@ -143,7 +143,7 @@ calculated_number exporting_calculate_value_from_stored_data(
     rd->state->query_ops.finalize(&handle);
     if (unlikely(!counter)) {
         debug(
-            D_BACKEND,
+            D_EXPORTING,
             "EXPORTING: %s.%s.%s: no values stored in database for range %lu to %lu",
             host->hostname,
             st->id,
diff --git a/exporting/prometheus/README.md b/exporting/prometheus/README.md
index ceb778a43..5c15ca580 100644
--- a/exporting/prometheus/README.md
+++ b/exporting/prometheus/README.md
@@ -458,4 +458,4 @@ through a web proxy, or when multiple Prometheus servers are NATed to a single I
 `&server=NAME` to the URL. This `NAME` is used by Netdata to uniquely identify each Prometheus server and keep track
 of its last access time.
 
-[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fexporting%2Fprometheus%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)](<>)
+
diff --git a/exporting/prometheus/prometheus.c b/exporting/prometheus/prometheus.c
index 0a3190074..97c3a29f0 100644
--- a/exporting/prometheus/prometheus.c
+++ b/exporting/prometheus/prometheus.c
@@ -20,6 +20,10 @@ inline int can_send_rrdset(struct instance *instance, RRDSET *st)
     RRDHOST *host = st->rrdhost;
 #endif
 
+    // Do not send anomaly rates charts.
+    if (st->state && st->state->is_ar_chart)
+        return 0;
+
     if (unlikely(rrdset_flag_check(st, RRDSET_FLAG_EXPORTING_IGNORE)))
         return 0;
 
@@ -31,7 +35,7 @@ inline int can_send_rrdset(struct instance *instance, RRDSET *st)
         else {
             rrdset_flag_set(st, RRDSET_FLAG_EXPORTING_IGNORE);
             debug(
-                D_BACKEND,
+                D_EXPORTING,
                 "EXPORTING: not sending chart '%s' of host '%s', because it is disabled for exporting.",
                 st->id,
                 host->hostname);
@@ -41,7 +45,7 @@ inline int can_send_rrdset(struct instance *instance, RRDSET *st)
 
     if (unlikely(!rrdset_is_available_for_exporting_and_alarms(st))) {
         debug(
-            D_BACKEND,
+            D_EXPORTING,
             "EXPORTING: not sending chart '%s' of host '%s', because it is not available for exporting.",
             st->id,
             host->hostname);
@@ -52,7 +56,7 @@ inline int can_send_rrdset(struct instance *instance, RRDSET *st)
         st->rrd_memory_mode == RRD_MEMORY_MODE_NONE &&
         !(EXPORTING_OPTIONS_DATA_SOURCE(instance->config.options) == EXPORTING_SOURCE_DATA_AS_COLLECTED))) {
         debug(
-            D_BACKEND,
+            D_EXPORTING,
             "EXPORTING: not sending chart '%s' of host '%s' because its memory mode is '%s' and the exporting connector requires database access.",
             st->id,
             host->hostname,
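The `&server=NAME` mechanism described in the Prometheus README above is exercised directly from the scrape URL. A hedged example against Netdata's `allmetrics` API endpoint; the host, default port 19999, and server names are illustrative:

```sh
# Two Prometheus servers behind the same NAT can identify themselves
# explicitly, so Netdata tracks each one's last access time separately.
curl "http://localhost:19999/api/v1/allmetrics?format=prometheus&server=prom-east"
curl "http://localhost:19999/api/v1/allmetrics?format=prometheus&server=prom-west"
```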
diff --git a/exporting/prometheus/remote_write/README.md b/exporting/prometheus/remote_write/README.md
index ce379063e..54c5d6588 100644
--- a/exporting/prometheus/remote_write/README.md
+++ b/exporting/prometheus/remote_write/README.md
@@ -55,4 +55,4 @@ buffer size on failures.
 
 The remote write exporting connector does not support `buffer on failures`
 
-[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fexporting%2Fprometheus%2Fremote_write%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)](<>)
+
diff --git a/exporting/prometheus/remote_write/remote_write.c b/exporting/prometheus/remote_write/remote_write.c
index 8339712eb..59a488e1b 100644
--- a/exporting/prometheus/remote_write/remote_write.c
+++ b/exporting/prometheus/remote_write/remote_write.c
@@ -236,7 +236,7 @@ int format_dimension_prometheus_remote_write(struct instance *instance, RRDDIM *
 
     if (unlikely(rd->last_collected_time.tv_sec < instance->after)) {
         debug(
-            D_BACKEND,
+            D_EXPORTING,
             "EXPORTING: not sending dimension '%s' of chart '%s' from host '%s', "
             "its last data collection (%lu) is not within our timeframe (%lu to %lu)",
             rd->id, rd->rrdset->id,
diff --git a/exporting/pubsub/README.md b/exporting/pubsub/README.md
index 73b6a2031..2f9ac83d4 100644
--- a/exporting/pubsub/README.md
+++ b/exporting/pubsub/README.md
@@ -35,4 +35,4 @@ Next, create the credentials JSON file by following Google Cloud's [authenticati
 `chmod 400 google_cloud_credentials.json; chown netdata google_cloud_credentials.json`. Set the `credentials file`
 option to the full path of the file.
 
-[![analytics](https://www.google-analytics.com/collect?v=1&aip=1&t=pageview&_s=1&ds=github&dr=https%3A%2F%2Fgithub.com%2Fnetdata%2Fnetdata&dl=https%3A%2F%2Fmy-netdata.io%2Fgithub%2Fexporting%2Fpubsub%2FREADME&_u=MAC~&cid=5792dfd7-8dc4-476b-af31-da2fdb9f93d2&tid=UA-64295674-3)](<>)
+
diff --git a/exporting/pubsub/pubsub.c b/exporting/pubsub/pubsub.c
index 336a096ab..5a5afbdc2 100644
--- a/exporting/pubsub/pubsub.c
+++ b/exporting/pubsub/pubsub.c
@@ -141,7 +141,7 @@ void pubsub_connector_worker(void *instance_p)
         }
 
         debug(
-            D_BACKEND, "EXPORTING: pubsub_publish(): project = %s, topic = %s, buffer = %zu",
+            D_EXPORTING, "EXPORTING: pubsub_publish(): project = %s, topic = %s, buffer = %zu",
             connector_specific_config->project_id, connector_specific_config->topic_id, buffer_len);
 
         if (pubsub_publish((void *)connector_specific_data, error_message, stats->buffered_metrics, buffer_len)) {
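A Pub/Sub instance wired to the credentials file described in the README above might look like the following sketch in `exporting.conf`; the option names follow the Pub/Sub connector's documentation as best recalled, and the project and topic IDs are placeholders:

```conf
[pubsub:my_pubsub_instance]
    enabled = yes
    destination = pubsub.googleapis.com
    # path created and permission-restricted per the README above
    credentials file = /etc/netdata/google_cloud_credentials.json
    project id = my-gcp-project
    topic id = netdata-metrics
```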
diff --git a/exporting/read_config.c b/exporting/read_config.c
index 77687d845..b834e867d 100644
--- a/exporting/read_config.c
+++ b/exporting/read_config.c
@@ -2,6 +2,9 @@
 
 #include "exporting_engine.h"
 
+EXPORTING_OPTIONS global_exporting_options = EXPORTING_SOURCE_DATA_AVERAGE | EXPORTING_OPTION_SEND_NAMES;
+const char *global_exporting_prefix = "netdata";
+
 struct config exporting_config = { .first_section = NULL,
                                    .last_section = NULL,
                                    .mutex = NETDATA_MUTEX_INITIALIZER,
@@ -160,7 +163,7 @@ EXPORTING_CONNECTOR_TYPE exporting_select_type(const char *type)
     return EXPORTING_CONNECTOR_TYPE_UNKNOWN;
 }
 
-EXPORTING_OPTIONS exporting_parse_data_source(const char *data_source, EXPORTING_OPTIONS exporting_options)
+inline EXPORTING_OPTIONS exporting_parse_data_source(const char *data_source, EXPORTING_OPTIONS exporting_options)
 {
     if (!strcmp(data_source, "raw") || !strcmp(data_source, "as collected") || !strcmp(data_source, "as-collected") ||
         !strcmp(data_source, "as_collected") || !strcmp(data_source, "ascollected")) {
@@ -194,7 +197,7 @@ struct engine *read_exporting_config()
     static struct engine *engine = NULL;
     struct connector_instance_list {
         struct connector_instance local_ci;
-        EXPORTING_CONNECTOR_TYPE backend_type;
+        EXPORTING_CONNECTOR_TYPE exporting_type;
 
         struct connector_instance_list *next;
     };
@@ -238,21 +241,14 @@ struct engine *read_exporting_config()
         prometheus_exporter_instance->config.update_every =
             prometheus_config_get_number(EXPORTING_UPDATE_EVERY_OPTION_NAME, EXPORTING_UPDATE_EVERY_DEFAULT);
 
-        // wait for backend subsystem to be initialized
-        for (int retries = 0; !global_backend_source && retries < 1000; retries++)
-            sleep_usec(10000);
-
-        if (!global_backend_source)
-            global_backend_source = "average";
-
-        prometheus_exporter_instance->config.options |= global_backend_options & EXPORTING_OPTIONS_SOURCE_BITS;
+        prometheus_exporter_instance->config.options |= global_exporting_options & EXPORTING_OPTIONS_SOURCE_BITS;
 
-        char *data_source = prometheus_config_get("data source", global_backend_source);
+        char *data_source = prometheus_config_get("data source", "average");
         prometheus_exporter_instance->config.options =
             exporting_parse_data_source(data_source, prometheus_exporter_instance->config.options);
 
         if (prometheus_config_get_boolean(
-                "send names instead of ids", global_backend_options & EXPORTING_OPTION_SEND_NAMES))
+                "send names instead of ids", global_exporting_options & EXPORTING_OPTION_SEND_NAMES))
             prometheus_exporter_instance->config.options |= EXPORTING_OPTION_SEND_NAMES;
         else
             prometheus_exporter_instance->config.options &= ~EXPORTING_OPTION_SEND_NAMES;
@@ -268,18 +264,17 @@ struct engine *read_exporting_config()
             prometheus_exporter_instance->config.options &= ~EXPORTING_OPTION_SEND_AUTOMATIC_LABELS;
 
         prometheus_exporter_instance->config.charts_pattern = simple_pattern_create(
-            prometheus_config_get("send charts matching", global_backend_send_charts_matching),
+            prometheus_config_get("send charts matching", "*"),
             NULL,
             SIMPLE_PATTERN_EXACT);
         prometheus_exporter_instance->config.hosts_pattern = simple_pattern_create(
             prometheus_config_get("send hosts matching", "localhost *"), NULL, SIMPLE_PATTERN_EXACT);
 
-        prometheus_exporter_instance->config.prefix = prometheus_config_get("prefix", global_backend_prefix);
+        prometheus_exporter_instance->config.prefix = prometheus_config_get("prefix", global_exporting_prefix);
 
         prometheus_exporter_instance->config.initialized = 1;
     }
 
-    // TODO: change BACKEND to EXPORTING
     while (get_connector_instance(&local_ci)) {
         info("Processing connector instance (%s)", local_ci.instance_name);
 
@@ -290,7 +285,7 @@ struct engine *read_exporting_config()
             tmp_ci_list = (struct connector_instance_list *)callocz(1, sizeof(struct connector_instance_list));
             memcpy(&tmp_ci_list->local_ci, &local_ci, sizeof(local_ci));
-            tmp_ci_list->backend_type = exporting_select_type(local_ci.connector_name);
+            tmp_ci_list->exporting_type = exporting_select_type(local_ci.connector_name);
             tmp_ci_list->next = tmp_ci_list_prev;
             tmp_ci_list_prev = tmp_ci_list;
             instances_to_activate++;
@@ -320,34 +315,34 @@ struct engine *read_exporting_config()
 
         info("Instance %s on %s", tmp_ci_list->local_ci.instance_name, tmp_ci_list->local_ci.connector_name);
 
-        if (tmp_ci_list->backend_type == EXPORTING_CONNECTOR_TYPE_UNKNOWN) {
+        if (tmp_ci_list->exporting_type == EXPORTING_CONNECTOR_TYPE_UNKNOWN) {
            error("Unknown exporting connector type");
            goto next_connector_instance;
         }
 
 #ifndef ENABLE_PROMETHEUS_REMOTE_WRITE
-        if (tmp_ci_list->backend_type == EXPORTING_CONNECTOR_TYPE_PROMETHEUS_REMOTE_WRITE) {
+        if (tmp_ci_list->exporting_type == EXPORTING_CONNECTOR_TYPE_PROMETHEUS_REMOTE_WRITE) {
            error("Prometheus Remote Write support isn't compiled");
            goto next_connector_instance;
         }
 #endif
 
 #ifndef HAVE_KINESIS
-        if (tmp_ci_list->backend_type == EXPORTING_CONNECTOR_TYPE_KINESIS) {
+        if (tmp_ci_list->exporting_type == EXPORTING_CONNECTOR_TYPE_KINESIS) {
            error("AWS Kinesis support isn't compiled");
            goto next_connector_instance;
         }
 #endif
 
 #ifndef ENABLE_EXPORTING_PUBSUB
-        if (tmp_ci_list->backend_type == EXPORTING_CONNECTOR_TYPE_PUBSUB) {
+        if (tmp_ci_list->exporting_type == EXPORTING_CONNECTOR_TYPE_PUBSUB) {
            error("Google Cloud Pub/Sub support isn't compiled");
            goto next_connector_instance;
         }
 #endif
 
 #ifndef HAVE_MONGOC
-        if (tmp_ci_list->backend_type == EXPORTING_CONNECTOR_TYPE_MONGODB) {
+        if (tmp_ci_list->exporting_type == EXPORTING_CONNECTOR_TYPE_MONGODB) {
            error("MongoDB support isn't compiled");
            goto next_connector_instance;
         }
@@ -358,7 +353,7 @@ struct engine *read_exporting_config()
             engine->instance_root = tmp_instance;
 
         tmp_instance->engine = engine;
-        tmp_instance->config.type = tmp_ci_list->backend_type;
+        tmp_instance->config.type = tmp_ci_list->exporting_type;
 
         instance_name = tmp_ci_list->local_ci.instance_name;
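The hunk above shows only the start of `exporting_parse_data_source()`. A standalone sketch of the same string-to-option mapping: the constants are simplified stand-ins for Netdata's `EXPORTING_SOURCE_DATA_*` bits, and the `sum`/`volume` aliases are an assumption based on the documented `data source` options.

```c
#include <stdio.h>
#include <string.h>

/* Simplified stand-ins, not Netdata's actual constants. */
enum { SRC_AS_COLLECTED = 1 << 0, SRC_AVERAGE = 1 << 1, SRC_SUM = 1 << 2,
       SRC_BITS = SRC_AS_COLLECTED | SRC_AVERAGE | SRC_SUM };

static unsigned parse_data_source(const char *s, unsigned options) {
    options &= ~SRC_BITS; /* clear old source bits before setting new ones */
    if (!strcmp(s, "raw") || !strcmp(s, "as collected") || !strcmp(s, "as-collected") ||
        !strcmp(s, "as_collected") || !strcmp(s, "ascollected"))
        options |= SRC_AS_COLLECTED;
    else if (!strcmp(s, "sum") || !strcmp(s, "volume")) /* assumed aliases */
        options |= SRC_SUM;
    else
        options |= SRC_AVERAGE; /* "average" is the documented default */
    return options;
}

int main(void) {
    printf("0x%x\n", parse_data_source("as-collected", 0));  /* 0x1 */
    printf("0x%x\n", parse_data_source("average", SRC_SUM)); /* 0x2: old bit cleared */
    return 0;
}
```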
Sample: '%s'", buffer_strlen(buffer), instance->config.name, diff --git a/exporting/tests/test_exporting_engine.c b/exporting/tests/test_exporting_engine.c index fb08ff43b..4e1addbb5 100644 --- a/exporting/tests/test_exporting_engine.c +++ b/exporting/tests/test_exporting_engine.c @@ -14,11 +14,6 @@ char *netdata_configured_hostname = "test_global_host"; char log_line[MAX_LOG_LINE + 1]; -BACKEND_OPTIONS global_backend_options = 0; -const char *global_backend_source = "average"; -const char *global_backend_prefix = "netdata"; -const char *global_backend_send_charts_matching = "*"; - void init_connectors_in_tests(struct engine *engine) { expect_function_call(__wrap_now_realtime_sec); @@ -235,7 +230,7 @@ static void test_rrdhost_is_exportable(void **state) assert_string_equal(log_line, "enabled exporting of host 'localhost' for instance 'instance_name'"); assert_ptr_not_equal(localhost->exporting_flags, NULL); - assert_int_equal(localhost->exporting_flags[0], RRDHOST_FLAG_BACKEND_SEND); + assert_int_equal(localhost->exporting_flags[0], RRDHOST_FLAG_EXPORTING_SEND); } static void test_false_rrdhost_is_exportable(void **state) @@ -255,7 +250,7 @@ static void test_false_rrdhost_is_exportable(void **state) assert_string_equal(log_line, "disabled exporting of host 'localhost' for instance 'instance_name'"); assert_ptr_not_equal(localhost->exporting_flags, NULL); - assert_int_equal(localhost->exporting_flags[0], RRDHOST_FLAG_BACKEND_DONT_SEND); + assert_int_equal(localhost->exporting_flags[0], RRDHOST_FLAG_EXPORTING_DONT_SEND); } static void test_rrdset_is_exportable(void **state) |