Diffstat:
 exporting/README.md                     | 134
 exporting/prometheus/prometheus.c       |  35
 exporting/prometheus/prometheus.h       |   6
 exporting/tests/exporting_fixtures.c    |   1
 exporting/tests/test_exporting_engine.c |  14
 5 files changed, 115 insertions(+), 75 deletions(-)
diff --git a/exporting/README.md b/exporting/README.md index ae9c8ccf2..60028a38a 100644 --- a/exporting/README.md +++ b/exporting/README.md @@ -28,61 +28,82 @@ X seconds (though, it can send them per second if you need it to). ## Features -1. The exporting engine uses a number of connectors to send Netdata metrics to external time-series databases. See our - [list of supported databases](/docs/export/external-databases.md#supported-databases) for information on which - connector to enable and configure for your database of choice. - - - [**AWS Kinesis Data Streams**](/exporting/aws_kinesis/README.md): Metrics are sent to the service in `JSON` - format. - - [**Google Cloud Pub/Sub Service**](/exporting/pubsub/README.md): Metrics are sent to the service in `JSON` - format. - - [**Graphite**](/exporting/graphite/README.md): A plaintext interface. Metrics are sent to the database server as - `prefix.hostname.chart.dimension`. `prefix` is configured below, `hostname` is the hostname of the machine (can - also be configured). Learn more in our guide to [export and visualize Netdata metrics in - Graphite](/docs/guides/export/export-netdata-metrics-graphite.md). - - [**JSON** document databases](/exporting/json/README.md) - - [**OpenTSDB**](/exporting/opentsdb/README.md): Use a plaintext or HTTP interfaces. Metrics are sent to - OpenTSDB as `prefix.chart.dimension` with tag `host=hostname`. - - [**MongoDB**](/exporting/mongodb/README.md): Metrics are sent to the database in `JSON` format. - - [**Prometheus**](/exporting/prometheus/README.md): Use an existing Prometheus installation to scrape metrics - from node using the Netdata API. - - [**Prometheus remote write**](/exporting/prometheus/remote_write/README.md). A binary snappy-compressed protocol - buffer encoding over HTTP. Supports many [storage - providers](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage). 
- - [**TimescaleDB**](/exporting/TIMESCALE.md): Use a community-built connector that takes JSON streams from a - Netdata client and writes them to a TimescaleDB table. - -2. Netdata can filter metrics (at the chart level), to send only a subset of the collected metrics. - -3. Netdata supports three modes of operation for all exporting connectors: - - - `as-collected` sends to external databases the metrics as they are collected, in the units they are collected. - So, counters are sent as counters and gauges are sent as gauges, much like all data collectors do. For example, - to calculate CPU utilization in this format, you need to know how to convert kernel ticks to percentage. - - - `average` sends to external databases normalized metrics from the Netdata database. In this mode, all metrics - are sent as gauges, in the units Netdata uses. This abstracts data collection and simplifies visualization, but - you will not be able to copy and paste queries from other sources to convert units. For example, CPU utilization - percentage is calculated by Netdata, so Netdata will convert ticks to percentage and send the average percentage - to the external database. - - - `sum` or `volume`: the sum of the interpolated values shown on the Netdata graphs is sent to the external - database. So, if Netdata is configured to send data to the database every 10 seconds, the sum of the 10 values - shown on the Netdata charts will be used. - - Time-series databases suggest to collect the raw values (`as-collected`). If you plan to invest on building your - monitoring around a time-series database and you already know (or you will invest in learning) how to convert units - and normalize the metrics in Grafana or other visualization tools, we suggest to use `as-collected`. - - If, on the other hand, you just need long term archiving of Netdata metrics and you plan to mainly work with - Netdata, we suggest to use `average`. 
It decouples visualization from data collection, so it will generally be a lot - simpler. Furthermore, if you use `average`, the charts shown in the external service will match exactly what you - see in Netdata, which is not necessarily true for the other modes of operation. - -4. This code is smart enough, not to slow down Netdata, independently of the speed of the external database server. You - should keep in mind though that many exporting connector instances can consume a lot of CPU resources if they run - their batches at the same time. You can set different update intervals for every exporting connector instance, but - even in that case they can occasionally synchronize their batches for a moment. +### Integration + +The exporting engine uses a number of connectors to send Netdata metrics to external time-series databases. See our +[list of supported databases](/docs/export/external-databases.md#supported-databases) for information on which +connector to enable and configure for your database of choice. + +- [**AWS Kinesis Data Streams**](/exporting/aws_kinesis/README.md): Metrics are sent to the service in `JSON` + format. +- [**Google Cloud Pub/Sub Service**](/exporting/pubsub/README.md): Metrics are sent to the service in `JSON` + format. +- [**Graphite**](/exporting/graphite/README.md): A plaintext interface. Metrics are sent to the database server as + `prefix.hostname.chart.dimension`. `prefix` is configured below, `hostname` is the hostname of the machine (can + also be configured). Learn more in our guide to [export and visualize Netdata metrics in + Graphite](/docs/guides/export/export-netdata-metrics-graphite.md). +- [**JSON** document databases](/exporting/json/README.md) +- [**OpenTSDB**](/exporting/opentsdb/README.md): Use a plaintext or HTTP interfaces. Metrics are sent to + OpenTSDB as `prefix.chart.dimension` with tag `host=hostname`. +- [**MongoDB**](/exporting/mongodb/README.md): Metrics are sent to the database in `JSON` format. 
+- [**Prometheus**](/exporting/prometheus/README.md): Use an existing Prometheus installation to scrape metrics + from node using the Netdata API. +- [**Prometheus remote write**](/exporting/prometheus/remote_write/README.md). A binary snappy-compressed protocol + buffer encoding over HTTP. Supports many [storage + providers](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage). +- [**TimescaleDB**](/exporting/TIMESCALE.md): Use a community-built connector that takes JSON streams from a + Netdata client and writes them to a TimescaleDB table. + +### Chart filtering + +Netdata can filter metrics, to send only a subset of the collected metrics. You can use the +configuration file + +```txt +[prometheus:exporter] + send charts matching = system.* +``` + +or the URL parameter `filter` in the `allmetrics` API call. + +```txt +http://localhost:19999/api/v1/allmetrics?format=shell&filter=system.* +``` + +### Operation modes + +Netdata supports three modes of operation for all exporting connectors: + +- `as-collected` sends to external databases the metrics as they are collected, in the units they are collected. + So, counters are sent as counters and gauges are sent as gauges, much like all data collectors do. For example, + to calculate CPU utilization in this format, you need to know how to convert kernel ticks to percentage. + +- `average` sends to external databases normalized metrics from the Netdata database. In this mode, all metrics + are sent as gauges, in the units Netdata uses. This abstracts data collection and simplifies visualization, but + you will not be able to copy and paste queries from other sources to convert units. For example, CPU utilization + percentage is calculated by Netdata, so Netdata will convert ticks to percentage and send the average percentage + to the external database. + +- `sum` or `volume`: the sum of the interpolated values shown on the Netdata graphs is sent to the external + database. 
So, if Netdata is configured to send data to the database every 10 seconds, the sum of the 10 values + shown on the Netdata charts will be used. + +Time-series databases suggest to collect the raw values (`as-collected`). If you plan to invest on building your +monitoring around a time-series database and you already know (or you will invest in learning) how to convert units +and normalize the metrics in Grafana or other visualization tools, we suggest to use `as-collected`. + +If, on the other hand, you just need long term archiving of Netdata metrics and you plan to mainly work with +Netdata, we suggest to use `average`. It decouples visualization from data collection, so it will generally be a lot +simpler. Furthermore, if you use `average`, the charts shown in the external service will match exactly what you +see in Netdata, which is not necessarily true for the other modes of operation. + +### Independent operation + +This code is smart enough, not to slow down Netdata, independently of the speed of the external database server. + +> ❗ You should keep in mind though that many exporting connector instances can consume a lot of CPU resources if they +> run their batches at the same time. You can set different update intervals for every exporting connector instance, +> but even in that case they can occasionally synchronize their batches for a moment. ## Configuration @@ -252,7 +273,8 @@ Configure individual connectors and override any global settings with the follow within each pattern). The patterns are checked against both chart id and chart name. A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`, use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used - - positive or negative). + positive or negative). There is also a URL parameter `filter` that can be used while querying `allmetrics`. 
The URL + parameter has a higher priority than the configuration option. - `send names instead of ids = yes | no` controls the metric names Netdata should send to the external database. Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system diff --git a/exporting/prometheus/prometheus.c b/exporting/prometheus/prometheus.c index 97c3a29f0..c7f3f1d38 100644 --- a/exporting/prometheus/prometheus.c +++ b/exporting/prometheus/prometheus.c @@ -7,14 +7,22 @@ // PROMETHEUS // /api/v1/allmetrics?format=prometheus and /api/v1/allmetrics?format=prometheus_all_hosts +static int is_matches_rrdset(struct instance *instance, RRDSET *st, SIMPLE_PATTERN *filter) { + if (instance->config.options & EXPORTING_OPTION_SEND_NAMES) { + return simple_pattern_matches(filter, st->name); + } + return simple_pattern_matches(filter, st->id); +} + /** * Check if a chart can be sent to Prometheus * * @param instance an instance data structure. * @param st a chart. + * @param filter a simple pattern to match against. * @return Returns 1 if the chart can be sent, 0 otherwise. 
*/ -inline int can_send_rrdset(struct instance *instance, RRDSET *st) +inline int can_send_rrdset(struct instance *instance, RRDSET *st, SIMPLE_PATTERN *filter) { #ifdef NETDATA_INTERNAL_CHECKS RRDHOST *host = st->rrdhost; @@ -27,12 +35,15 @@ inline int can_send_rrdset(struct instance *instance, RRDSET *st) if (unlikely(rrdset_flag_check(st, RRDSET_FLAG_EXPORTING_IGNORE))) return 0; - if (unlikely(!rrdset_flag_check(st, RRDSET_FLAG_EXPORTING_SEND))) { + if (filter) { + if (!is_matches_rrdset(instance, st, filter)) { + return 0; + } + } else if (unlikely(!rrdset_flag_check(st, RRDSET_FLAG_EXPORTING_SEND))) { // we have not checked this chart - if (simple_pattern_matches(instance->config.charts_pattern, st->id) || - simple_pattern_matches(instance->config.charts_pattern, st->name)) + if (is_matches_rrdset(instance, st, instance->config.charts_pattern)) { rrdset_flag_set(st, RRDSET_FLAG_EXPORTING_SEND); - else { + } else { rrdset_flag_set(st, RRDSET_FLAG_EXPORTING_IGNORE); debug( D_EXPORTING, @@ -480,6 +491,7 @@ static void generate_as_collected_prom_metric(BUFFER *wb, struct gen_parameters * * @param instance an instance data structure. * @param host a data collecting host. + * @param filter_string a simple pattern filter. * @param wb the buffer to fill with metrics. * @param prefix a prefix for every metric. * @param exporting_options options to configure what data is exported. 
@@ -489,12 +501,14 @@ static void generate_as_collected_prom_metric(BUFFER *wb, struct gen_parameters static void rrd_stats_api_v1_charts_allmetrics_prometheus( struct instance *instance, RRDHOST *host, + const char *filter_string, BUFFER *wb, const char *prefix, EXPORTING_OPTIONS exporting_options, int allhosts, PROMETHEUS_OUTPUT_OPTIONS output_options) { + SIMPLE_PATTERN *filter = simple_pattern_create(filter_string, NULL, SIMPLE_PATTERN_EXACT); rrdhost_rdlock(host); char hostname[PROMETHEUS_ELEMENT_MAX + 1]; @@ -592,7 +606,7 @@ static void rrd_stats_api_v1_charts_allmetrics_prometheus( rrdset_foreach_read(st, host) { - if (likely(can_send_rrdset(instance, st))) { + if (likely(can_send_rrdset(instance, st, filter))) { rrdset_rdlock(st); char chart[PROMETHEUS_ELEMENT_MAX + 1]; @@ -777,6 +791,7 @@ static void rrd_stats_api_v1_charts_allmetrics_prometheus( } rrdhost_unlock(host); + simple_pattern_free(filter); } /** @@ -850,6 +865,7 @@ static inline time_t prometheus_preparation( * Write metrics and auxiliary information for one host to a buffer. * * @param host a data collecting host. + * @param filter_string a simple pattern filter. * @param wb the buffer to write to. * @param server the name of a Prometheus server. * @param prefix a prefix for every metric. @@ -858,6 +874,7 @@ static inline time_t prometheus_preparation( */ void rrd_stats_api_v1_charts_allmetrics_prometheus_single_host( RRDHOST *host, + const char *filter_string, BUFFER *wb, const char *server, const char *prefix, @@ -880,13 +897,14 @@ void rrd_stats_api_v1_charts_allmetrics_prometheus_single_host( output_options); rrd_stats_api_v1_charts_allmetrics_prometheus( - prometheus_exporter_instance, host, wb, prefix, exporting_options, 0, output_options); + prometheus_exporter_instance, host, filter_string, wb, prefix, exporting_options, 0, output_options); } /** * Write metrics and auxiliary information for all hosts to a buffer. * * @param host a data collecting host. 
+ * @param filter_string a simple pattern filter. * @param wb the buffer to write to. * @param server the name of a Prometheus server. * @param prefix a prefix for every metric. @@ -895,6 +913,7 @@ void rrd_stats_api_v1_charts_allmetrics_prometheus_single_host( */ void rrd_stats_api_v1_charts_allmetrics_prometheus_all_hosts( RRDHOST *host, + const char *filter_string, BUFFER *wb, const char *server, const char *prefix, @@ -920,7 +939,7 @@ void rrd_stats_api_v1_charts_allmetrics_prometheus_all_hosts( rrdhost_foreach_read(host) { rrd_stats_api_v1_charts_allmetrics_prometheus( - prometheus_exporter_instance, host, wb, prefix, exporting_options, 1, output_options); + prometheus_exporter_instance, host, filter_string, wb, prefix, exporting_options, 1, output_options); } rrd_unlock(); } diff --git a/exporting/prometheus/prometheus.h b/exporting/prometheus/prometheus.h index 2f0845ce9..4b8860ded 100644 --- a/exporting/prometheus/prometheus.h +++ b/exporting/prometheus/prometheus.h @@ -23,13 +23,13 @@ typedef enum prometheus_output_flags { } PROMETHEUS_OUTPUT_OPTIONS; extern void rrd_stats_api_v1_charts_allmetrics_prometheus_single_host( - RRDHOST *host, BUFFER *wb, const char *server, const char *prefix, + RRDHOST *host, const char *filter_string, BUFFER *wb, const char *server, const char *prefix, EXPORTING_OPTIONS exporting_options, PROMETHEUS_OUTPUT_OPTIONS output_options); extern void rrd_stats_api_v1_charts_allmetrics_prometheus_all_hosts( - RRDHOST *host, BUFFER *wb, const char *server, const char *prefix, + RRDHOST *host, const char *filter_string, BUFFER *wb, const char *server, const char *prefix, EXPORTING_OPTIONS exporting_options, PROMETHEUS_OUTPUT_OPTIONS output_options); -int can_send_rrdset(struct instance *instance, RRDSET *st); +int can_send_rrdset(struct instance *instance, RRDSET *st, SIMPLE_PATTERN *filter); size_t prometheus_name_copy(char *d, const char *s, size_t usable); size_t prometheus_label_copy(char *d, const char *s, size_t usable); char 
*prometheus_units_copy(char *d, const char *s, size_t usable, int showoldunits); diff --git a/exporting/tests/exporting_fixtures.c b/exporting/tests/exporting_fixtures.c index b632761e7..501fc405c 100644 --- a/exporting/tests/exporting_fixtures.c +++ b/exporting/tests/exporting_fixtures.c @@ -58,7 +58,6 @@ int setup_rrdhost() st->rrdhost = localhost; strcpy(st->id, "chart_id"); st->name = strdupz("chart_name"); - st->flags |= RRDSET_FLAG_ENABLED; st->rrd_memory_mode |= RRD_MEMORY_MODE_SAVE; st->update_every = 1; diff --git a/exporting/tests/test_exporting_engine.c b/exporting/tests/test_exporting_engine.c index 4e1addbb5..6bb7d2efd 100644 --- a/exporting/tests/test_exporting_engine.c +++ b/exporting/tests/test_exporting_engine.c @@ -988,21 +988,21 @@ static void test_can_send_rrdset(void **state) { (void)*state; - assert_int_equal(can_send_rrdset(prometheus_exporter_instance, localhost->rrdset_root), 1); + assert_int_equal(can_send_rrdset(prometheus_exporter_instance, localhost->rrdset_root, NULL), 1); rrdset_flag_set(localhost->rrdset_root, RRDSET_FLAG_EXPORTING_IGNORE); - assert_int_equal(can_send_rrdset(prometheus_exporter_instance, localhost->rrdset_root), 0); + assert_int_equal(can_send_rrdset(prometheus_exporter_instance, localhost->rrdset_root, NULL), 0); rrdset_flag_clear(localhost->rrdset_root, RRDSET_FLAG_EXPORTING_IGNORE); // TODO: test with a denying simple pattern rrdset_flag_set(localhost->rrdset_root, RRDSET_FLAG_OBSOLETE); - assert_int_equal(can_send_rrdset(prometheus_exporter_instance, localhost->rrdset_root), 0); + assert_int_equal(can_send_rrdset(prometheus_exporter_instance, localhost->rrdset_root, NULL), 0); rrdset_flag_clear(localhost->rrdset_root, RRDSET_FLAG_OBSOLETE); localhost->rrdset_root->rrd_memory_mode = RRD_MEMORY_MODE_NONE; prometheus_exporter_instance->config.options |= EXPORTING_SOURCE_DATA_AVERAGE; - assert_int_equal(can_send_rrdset(prometheus_exporter_instance, localhost->rrdset_root), 0); + 
assert_int_equal(can_send_rrdset(prometheus_exporter_instance, localhost->rrdset_root, NULL), 0); } static void test_prometheus_name_copy(void **state) @@ -1067,7 +1067,7 @@ static void rrd_stats_api_v1_charts_allmetrics_prometheus(void **state) expect_function_call(__wrap_exporting_calculate_value_from_stored_data); will_return(__wrap_exporting_calculate_value_from_stored_data, pack_storage_number(27, SN_DEFAULT_FLAGS)); - rrd_stats_api_v1_charts_allmetrics_prometheus_single_host(localhost, buffer, "test_server", "test_prefix", 0, 0); + rrd_stats_api_v1_charts_allmetrics_prometheus_single_host(localhost, NULL, buffer, "test_server", "test_prefix", 0, 0); assert_string_equal( buffer_tostring(buffer), @@ -1085,7 +1085,7 @@ static void rrd_stats_api_v1_charts_allmetrics_prometheus(void **state) will_return(__wrap_exporting_calculate_value_from_stored_data, pack_storage_number(27, SN_DEFAULT_FLAGS)); rrd_stats_api_v1_charts_allmetrics_prometheus_single_host( - localhost, buffer, "test_server", "test_prefix", 0, PROMETHEUS_OUTPUT_NAMES | PROMETHEUS_OUTPUT_TYPES); + localhost, NULL, buffer, "test_server", "test_prefix", 0, PROMETHEUS_OUTPUT_NAMES | PROMETHEUS_OUTPUT_TYPES); assert_string_equal( buffer_tostring(buffer), @@ -1103,7 +1103,7 @@ static void rrd_stats_api_v1_charts_allmetrics_prometheus(void **state) expect_function_call(__wrap_exporting_calculate_value_from_stored_data); will_return(__wrap_exporting_calculate_value_from_stored_data, pack_storage_number(27, SN_DEFAULT_FLAGS)); - rrd_stats_api_v1_charts_allmetrics_prometheus_all_hosts(localhost, buffer, "test_server", "test_prefix", 0, 0); + rrd_stats_api_v1_charts_allmetrics_prometheus_all_hosts(localhost, NULL, buffer, "test_server", "test_prefix", 0, 0); assert_string_equal( buffer_tostring(buffer), |