| author | Daniel Baumann <daniel.baumann@progress-linux.org> | 2023-02-06 16:11:30 +0000 |
|---|---|---|
| committer | Daniel Baumann <daniel.baumann@progress-linux.org> | 2023-02-06 16:11:30 +0000 |
| commit | aa2fe8ccbfcb117efa207d10229eeeac5d0f97c7 (patch) | |
| tree | 941cbdd387b41c1a81587c20a6df9f0e5e0ff7ab /exporting | |
| parent | Adding upstream version 1.37.1. (diff) | |
| download | netdata-aa2fe8ccbfcb117efa207d10229eeeac5d0f97c7.tar.xz, netdata-aa2fe8ccbfcb117efa207d10229eeeac5d0f97c7.zip | |
Adding upstream version 1.38.0. (tag: upstream/1.38.0)
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'exporting')
26 files changed, 196 insertions, 146 deletions
diff --git a/exporting/README.md b/exporting/README.md
index 60028a38a..bc3ca1c7d 100644
--- a/exporting/README.md
+++ b/exporting/README.md
@@ -1,8 +1,12 @@
 <!--
 title: "Exporting reference"
 description: "With the exporting engine, you can archive your Netdata metrics to multiple external databases for long-term storage or further analysis."
-sidebar_label: Exporting reference
-custom_edit_url: https://github.com/netdata/netdata/edit/master/exporting/README.md
+sidebar_label: "Exporting reference"
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/README.md"
+learn_status: "Published"
+learn_topic_type: "References"
+learn_rel_path: "References/Configuration"
+learn_doc_purpose: "Explain the exporting engine options and all of our the exporting connectors options"
 -->
 # Exporting reference
@@ -12,13 +16,13 @@ configuring, and monitoring Netdata's exporting engine, which allows you to send
 databases.
 For a quick introduction to the exporting engine's features, read our doc on [exporting metrics to time-series
-databases](/docs/export/external-databases.md), or jump in to [enabling a connector](/docs/export/enable-connector.md).
+databases](https://github.com/netdata/netdata/blob/master/docs/export/external-databases.md), or jump in to [enabling a connector](https://github.com/netdata/netdata/blob/master/docs/export/enable-connector.md).
 The exporting engine has a modular structure and supports metric exporting via multiple exporting connector instances
 at the same time. You can have different update intervals and filters configured for every exporting connector instance.
 When you enable the exporting engine and a connector, the Netdata Agent exports metrics _beginning from the time you
-restart its process_, not the entire [database of long-term metrics](/docs/store/change-metrics-storage.md).
+restart its process_, not the entire [database of long-term metrics](https://github.com/netdata/netdata/blob/master/docs/store/change-metrics-storage.md).
 Since Netdata collects thousands of metrics per server per second, which would easily congest any database server when
 several Netdata servers are sending data to it, Netdata allows sending metrics at a lower frequency, by resampling them.
@@ -31,27 +35,27 @@ X seconds (though, it can send them per second if you need it to).
 ### Integration
 The exporting engine uses a number of connectors to send Netdata metrics to external time-series databases. See our
-[list of supported databases](/docs/export/external-databases.md#supported-databases) for information on which
+[list of supported databases](https://github.com/netdata/netdata/blob/master/docs/export/external-databases.md#supported-databases) for information on which
 connector to enable and configure for your database of choice.
-- [**AWS Kinesis Data Streams**](/exporting/aws_kinesis/README.md): Metrics are sent to the service in `JSON`
+- [**AWS Kinesis Data Streams**](https://github.com/netdata/netdata/blob/master/exporting/aws_kinesis/README.md): Metrics are sent to the service in `JSON`
   format.
-- [**Google Cloud Pub/Sub Service**](/exporting/pubsub/README.md): Metrics are sent to the service in `JSON`
+- [**Google Cloud Pub/Sub Service**](https://github.com/netdata/netdata/blob/master/exporting/pubsub/README.md): Metrics are sent to the service in `JSON`
   format.
-- [**Graphite**](/exporting/graphite/README.md): A plaintext interface. Metrics are sent to the database server as
+- [**Graphite**](https://github.com/netdata/netdata/blob/master/exporting/graphite/README.md): A plaintext interface. Metrics are sent to the database server as
   `prefix.hostname.chart.dimension`. `prefix` is configured below, `hostname` is the hostname of the machine (can
   also be configured). Learn more in our guide to [export and visualize Netdata metrics in
-  Graphite](/docs/guides/export/export-netdata-metrics-graphite.md).
-- [**JSON** document databases](/exporting/json/README.md)
-- [**OpenTSDB**](/exporting/opentsdb/README.md): Use a plaintext or HTTP interfaces. Metrics are sent to
+  Graphite](https://github.com/netdata/netdata/blob/master/docs/guides/export/export-netdata-metrics-graphite.md).
+- [**JSON** document databases](https://github.com/netdata/netdata/blob/master/exporting/json/README.md)
+- [**OpenTSDB**](https://github.com/netdata/netdata/blob/master/exporting/opentsdb/README.md): Use a plaintext or HTTP interfaces. Metrics are sent to
   OpenTSDB as `prefix.chart.dimension` with tag `host=hostname`.
-- [**MongoDB**](/exporting/mongodb/README.md): Metrics are sent to the database in `JSON` format.
-- [**Prometheus**](/exporting/prometheus/README.md): Use an existing Prometheus installation to scrape metrics
+- [**MongoDB**](https://github.com/netdata/netdata/blob/master/exporting/mongodb/README.md): Metrics are sent to the database in `JSON` format.
+- [**Prometheus**](https://github.com/netdata/netdata/blob/master/exporting/prometheus/README.md): Use an existing Prometheus installation to scrape metrics
   from node using the Netdata API.
-- [**Prometheus remote write**](/exporting/prometheus/remote_write/README.md). A binary snappy-compressed protocol
+- [**Prometheus remote write**](https://github.com/netdata/netdata/blob/master/exporting/prometheus/remote_write/README.md). A binary snappy-compressed protocol
   buffer encoding over HTTP. Supports many [storage providers](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage).
-- [**TimescaleDB**](/exporting/TIMESCALE.md): Use a community-built connector that takes JSON streams from a
+- [**TimescaleDB**](https://github.com/netdata/netdata/blob/master/exporting/TIMESCALE.md): Use a community-built connector that takes JSON streams from a
   Netdata client and writes them to a TimescaleDB table.
 ### Chart filtering
@@ -292,7 +296,7 @@ Configure individual connectors and override any global settings with the follow
 Netdata can send metrics to external databases using the TLS/SSL protocol. Unfortunately, some of them does not
 support encrypted connections, so you will have to configure a reverse proxy to enable HTTPS communication between
 Netdata and an external database. You can set up a reverse proxy with
-[Nginx](/docs/Running-behind-nginx.md).
+[Nginx](https://github.com/netdata/netdata/blob/master/docs/Running-behind-nginx.md).
 ## Exporting engine monitoring
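The README section above describes resampling: when a connector's `update every` is larger than the collection interval, Netdata sends one value per interval instead of every collected value. A rough sketch of that idea in plain C, using an in-memory array of per-second samples instead of Netdata's storage engine (the function and variable names are illustrative, not Agent APIs):

```c
#include <stddef.h>

/* Average each window of `update_every` per-second samples into one exported
 * value — a simplified model of the resampling the exporting engine performs. */
static size_t resample_average(const double *collected, size_t n,
                               size_t update_every, double *out, size_t out_max) {
    size_t emitted = 0;
    for (size_t start = 0; start + update_every <= n && emitted < out_max; start += update_every) {
        double sum = 0.0;
        for (size_t i = 0; i < update_every; i++)
            sum += collected[start + i];
        out[emitted++] = sum / (double)update_every;  /* one value per window */
    }
    return emitted;  /* number of resampled values produced */
}
```

With `update every` set to 10, for example, 600 per-second samples collapse to 60 exported values.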
diff --git a/exporting/TIMESCALE.md b/exporting/TIMESCALE.md
index 07aa1b7a2..2bd6db8c5 100644
--- a/exporting/TIMESCALE.md
+++ b/exporting/TIMESCALE.md
@@ -1,8 +1,12 @@
 <!--
 title: "Writing metrics to TimescaleDB"
 description: "Send Netdata metrics to TimescaleDB for long-term archiving and further analysis."
-custom_edit_url: https://github.com/netdata/netdata/edit/master/exporting/TIMESCALE.md
-sidebar_label: Writing metrics to TimescaleDB
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/TIMESCALE.md"
+sidebar_label: "Writing metrics to TimescaleDB"
+learn_status: "Published"
+learn_topic_type: "Tasks"
+learn_rel_path: "Setup/Exporting connectors"
+learn_autogeneration_metadata: "{'part_of_cloud': False, 'part_of_agent': True}"
 -->
 # Writing metrics to TimescaleDB
diff --git a/exporting/WALKTHROUGH.md b/exporting/WALKTHROUGH.md
index 0612b298a..5afd26045 100644
--- a/exporting/WALKTHROUGH.md
+++ b/exporting/WALKTHROUGH.md
@@ -1,8 +1,11 @@
 <!--
 title: "Exporting to Netdata, Prometheus, Grafana stack"
 description: "Using Netdata in conjunction with Prometheus and Grafana."
-custom_edit_url: https://github.com/netdata/netdata/edit/master/exporting/WALKTHROUGH.md
-sidebar_label: Netdata, Prometheus, Grafana stack
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/WALKTHROUGH.md"
+sidebar_label: "Netdata, Prometheus, Grafana stack"
+learn_status: "Published"
+learn_topic_type: "Tasks"
+learn_rel_path: "Setup/Exporting connectors"
 -->
 # Netdata, Prometheus, Grafana stack
@@ -64,7 +67,7 @@ command to run (`/bin/bash`) and then chooses the base container images (`centos
 be sitting inside the shell of the container.
 After we have entered the shell we can install Netdata. This process could not be easier. If you take a look at [this
-link](/packaging/installer/README.md), the Netdata devs give us several one-liners to install Netdata. I have not had
+link](https://github.com/netdata/netdata/blob/master/packaging/installer/README.md), the Netdata devs give us several one-liners to install Netdata. I have not had
 any issues with these one liners and their bootstrapping scripts so far (If you guys run into anything do share). Run
 the following command in your container.
@@ -223,7 +226,7 @@ the `chart` dimension. If you'd like you can combine the `chart` and `instance`
 Let's give this a try: `netdata_system_cpu_percentage_average{chart="system.cpu", instance="netdata:19999"}`
 This is the basics of using Prometheus to query Netdata. I'd advise everyone at this point to read [this
-page](/exporting/prometheus/README.md#using-netdata-with-prometheus). The key point here is that Netdata can export metrics from
+page](https://github.com/netdata/netdata/blob/master/exporting/prometheus/README.md#using-netdata-with-prometheus). The key point here is that Netdata can export metrics from
 its internal DB or can send metrics _as-collected_ by specifying the `source=as-collected` URL parameter like so.
 <http://localhost:19999/api/v1/allmetrics?format=prometheus&help=yes&types=yes&source=as-collected> If you choose to
 use this method you will need to use Prometheus's set of functions here: <https://prometheus.io/docs/querying/functions/> to
diff --git a/exporting/aws_kinesis/README.md b/exporting/aws_kinesis/README.md
index 29dd3438e..7921a2654 100644
--- a/exporting/aws_kinesis/README.md
+++ b/exporting/aws_kinesis/README.md
@@ -1,8 +1,12 @@
 <!--
 title: "Export metrics to AWS Kinesis Data Streams"
 description: "Archive your Agent's metrics to AWS Kinesis Data Streams for long-term storage, further analysis, or correlation with data from other sources."
-custom_edit_url: https://github.com/netdata/netdata/edit/master/exporting/aws_kinesis/README.md
-sidebar_label: AWS Kinesis Data Streams
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/aws_kinesis/README.md"
+sidebar_label: "AWS Kinesis Data Streams"
+learn_status: "Published"
+learn_topic_type: "Tasks"
+learn_rel_path: "Setup/Exporting connectors"
+learn_autogeneration_metadata: "{'part_of_cloud': False, 'part_of_agent': True}"
 -->
 # Export metrics to AWS Kinesis Data Streams
@@ -50,7 +54,8 @@ Set AWS credentials and stream name:
     stream name = your_stream_name
 ```
-Alternatively, you can set AWS credentials for the `netdata` user using AWS SDK for C++ [standard methods](https://docs.aws.amazon.com/sdk-for-cpp/v1/developer-guide/credentials.html).
+Alternatively, you can set AWS credentials for the `netdata` user using AWS SDK for
+C++ [standard methods](https://docs.aws.amazon.com/sdk-for-cpp/v1/developer-guide/credentials.html).
 Netdata automatically computes a partition key for every record with the purpose to distribute records across
 available shards evenly.
diff --git a/exporting/aws_kinesis/aws_kinesis.c b/exporting/aws_kinesis/aws_kinesis.c
index 1d89cc79a..c7d7a9d34 100644
--- a/exporting/aws_kinesis/aws_kinesis.c
+++ b/exporting/aws_kinesis/aws_kinesis.c
@@ -52,7 +52,7 @@ int init_aws_kinesis_instance(struct instance *instance)
     instance->prepare_header = NULL;
     instance->check_response = NULL;
-    instance->buffer = (void *)buffer_create(0);
+    instance->buffer = (void *)buffer_create(0, &netdata_buffers_statistics.buffers_exporters);
     if (!instance->buffer) {
         error("EXPORTING: cannot create buffer for AWS Kinesis exporting connector instance %s", instance->config.name);
         return 1;
diff --git a/exporting/exporting_engine.c b/exporting/exporting_engine.c
index fd16d982b..2ad8cdd96 100644
--- a/exporting/exporting_engine.c
+++ b/exporting/exporting_engine.c
@@ -197,7 +197,7 @@ void *exporting_main(void *ptr)
     heartbeat_t hb;
     heartbeat_init(&hb);
-    while (!netdata_exit) {
+    while (service_running(SERVICE_EXPORTERS)) {
         heartbeat_next(&hb, step_ut);
         engine->now = now_realtime_sec();
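The `exporting_engine.c` hunk above swaps the global `!netdata_exit` check for `service_running(SERVICE_EXPORTERS)`, so the exporting thread now stops when its own service is told to shut down rather than only at full Agent exit. A generic sketch of that loop shape, with a plain flag standing in for the Agent's service state (nothing below is the Agent's real API beyond the idea itself):

```c
#include <stdbool.h>
#include <unistd.h>

/* Stand-in for service_running(SERVICE_EXPORTERS): flipped to false on shutdown. */
static volatile bool exporters_running = true;

static void export_one_cycle(void) { /* format buffered metrics and send them */ }

static void exporting_worker(unsigned int step_seconds) {
    while (exporters_running) {      /* re-checked on every iteration */
        sleep(step_seconds);         /* stand-in for the heartbeat pacing */
        if (!exporters_running)
            break;                   /* stop promptly if shutdown began while sleeping */
        export_one_cycle();
    }
}
```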
diff --git a/exporting/graphite/README.md b/exporting/graphite/README.md
index 6c96c78c9..afcdf7984 100644
--- a/exporting/graphite/README.md
+++ b/exporting/graphite/README.md
@@ -1,14 +1,19 @@
 <!--
 title: "Export metrics to Graphite providers"
 description: "Archive your Agent's metrics to a any Graphite database provider for long-term storage, further analysis, or correlation with data from other sources."
-custom_edit_url: https://github.com/netdata/netdata/edit/master/exporting/graphite/README.md
-sidebar_label: Graphite
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/graphite/README.md"
+sidebar_label: "Graphite"
+learn_status: "Published"
+learn_topic_type: "Tasks"
+learn_rel_path: "Setup/Exporting connectors"
+learn_autogeneration_metadata: "{'part_of_cloud': False, 'part_of_agent': True}"
 -->
 # Export metrics to Graphite providers
-You can use the Graphite connector for the [exporting engine](/exporting/README.md) to archive your agent's metrics to
-Graphite providers for long-term storage, further analysis, or correlation with data from other sources.
+You can use the Graphite connector for
+the [exporting engine](https://github.com/netdata/netdata/blob/master/exporting/README.md) to archive your agent's
+metrics to Graphite providers for long-term storage, further analysis, or correlation with data from other sources.
 ## Configuration
@@ -21,7 +26,8 @@ directory and set the following options:
     destination = localhost:2003
 ```
-Add `:http` or `:https` modifiers to the connector type if you need to use other than a plaintext protocol. For example: `graphite:http:my_graphite_instance`,
+Add `:http` or `:https` modifiers to the connector type if you need to use other than a plaintext protocol. For
+example: `graphite:http:my_graphite_instance`,
 `graphite:https:my_graphite_instance`. You can set basic HTTP authentication credentials using
 ```conf
@@ -29,7 +35,7 @@ Add `:http` or `:https` modifiers to the connector type if you need to use other
     password = my_password
 ```
-The Graphite connector is further configurable using additional settings. See the [exporting reference
-doc](/exporting/README.md#options) for details.
+The Graphite connector is further configurable using additional settings. See
+the [exporting reference doc](https://github.com/netdata/netdata/blob/master/exporting/README.md#options) for details.
diff --git a/exporting/graphite/graphite.c b/exporting/graphite/graphite.c
index 0b33f6428..f1964f3e5 100644
--- a/exporting/graphite/graphite.c
+++ b/exporting/graphite/graphite.c
@@ -48,7 +48,7 @@ int init_graphite_instance(struct instance *instance)
     instance->check_response = exporting_discard_response;
-    instance->buffer = (void *)buffer_create(0);
+    instance->buffer = (void *)buffer_create(0, &netdata_buffers_statistics.buffers_exporters);
     if (!instance->buffer) {
         error("EXPORTING: cannot create buffer for graphite exporting connector instance %s", instance->config.name);
         return 1;
@@ -96,7 +96,7 @@ void sanitize_graphite_label_value(char *dst, const char *src, size_t len)
 int format_host_labels_graphite_plaintext(struct instance *instance, RRDHOST *host)
 {
     if (!instance->labels_buffer)
-        instance->labels_buffer = buffer_create(1024);
+        instance->labels_buffer = buffer_create(1024, &netdata_buffers_statistics.buffers_exporters);
     if (unlikely(!sending_labels_configured(instance)))
         return 0;
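The Graphite README above documents the plaintext naming scheme `prefix.hostname.chart.dimension`. For reference, this is roughly what one line of that protocol looks like when assembled; the helper below is a hypothetical illustration, not the connector's actual formatting code:

```c
#include <stdio.h>
#include <time.h>

/* Build one Graphite plaintext line:
 *   "<prefix>.<hostname>.<chart>.<dimension> <value> <timestamp>\n" */
static int graphite_plaintext_line(char *dst, size_t len,
                                   const char *prefix, const char *hostname,
                                   const char *chart, const char *dimension,
                                   double value, time_t when) {
    return snprintf(dst, len, "%s.%s.%s.%s %0.5f %lld\n",
                    prefix, hostname, chart, dimension, value, (long long)when);
}

/* e.g. "netdata.myhost.system.cpu.user 12.34000 1675694400\n" */
```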
diff --git a/exporting/init_connectors.c b/exporting/init_connectors.c
index bfb6525ea..15e1951f8 100644
--- a/exporting/init_connectors.c
+++ b/exporting/init_connectors.c
@@ -171,8 +171,8 @@ void simple_connector_init(struct instance *instance)
     if (connector_specific_data->first_buffer)
         return;
-    connector_specific_data->header = buffer_create(0);
-    connector_specific_data->buffer = buffer_create(0);
+    connector_specific_data->header = buffer_create(0, &netdata_buffers_statistics.buffers_exporters);
+    connector_specific_data->buffer = buffer_create(0, &netdata_buffers_statistics.buffers_exporters);
     // create a ring buffer
     struct simple_connector_buffer *first_buffer = NULL;
@@ -195,7 +195,7 @@ void simple_connector_init(struct instance *instance)
     connector_specific_data->last_buffer = connector_specific_data->first_buffer;
     if (*instance->config.username || *instance->config.password) {
-        BUFFER *auth_string = buffer_create(0);
+        BUFFER *auth_string = buffer_create(0, &netdata_buffers_statistics.buffers_exporters);
         buffer_sprintf(auth_string, "%s:%s", instance->config.username, instance->config.password);
diff --git a/exporting/json/README.md b/exporting/json/README.md
index d129ffbd7..23ff555cb 100644
--- a/exporting/json/README.md
+++ b/exporting/json/README.md
@@ -1,13 +1,17 @@
 <!--
 title: "Export metrics to JSON document databases"
 description: "Archive your Agent's metrics to a JSON document database for long-term storage, further analysis, or correlation with data from other sources."
-custom_edit_url: https://github.com/netdata/netdata/edit/master/exporting/json/README.md
-sidebar_label: JSON Document Databases
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/json/README.md"
+sidebar_label: "JSON Document Databases"
+learn_status: "Published"
+learn_topic_type: "Tasks"
+learn_rel_path: "Setup/Exporting connectors"
+learn_autogeneration_metadata: "{'part_of_cloud': False, 'part_of_agent': True}"
 -->
 # Export metrics to JSON document databases
-You can use the JSON connector for the [exporting engine](/exporting/README.md) to archive your agent's metrics to JSON
+You can use the JSON connector for the [exporting engine](https://github.com/netdata/netdata/blob/master/exporting/README.md) to archive your agent's metrics to JSON
 document databases for long-term storage, further analysis, or correlation with data from other sources.
 ## Configuration
@@ -29,7 +33,7 @@ Add `:http` or `:https` modifiers to the connector type if you need to use other
     password = my_password
 ```
-The JSON connector is further configurable using additional settings. See the [exporting reference
-doc](/exporting/README.md#options) for details.
+The JSON connector is further configurable using additional settings. See
+the [exporting reference doc](https://github.com/netdata/netdata/blob/master/exporting/README.md#options) for details.
diff --git a/exporting/json/json.c b/exporting/json/json.c
index dd53f6f0a..4cafd4c04 100644
--- a/exporting/json/json.c
+++ b/exporting/json/json.c
@@ -37,7 +37,7 @@ int init_json_instance(struct instance *instance)
     instance->check_response = exporting_discard_response;
-    instance->buffer = (void *)buffer_create(0);
+    instance->buffer = (void *)buffer_create(0, &netdata_buffers_statistics.buffers_exporters);
     if (!instance->buffer) {
         error("EXPORTING: cannot create buffer for json exporting connector instance %s", instance->config.name);
         return 1;
@@ -96,7 +96,7 @@ int init_json_http_instance(struct instance *instance)
     instance->check_response = exporting_discard_response;
-    instance->buffer = (void *)buffer_create(0);
+    instance->buffer = (void *)buffer_create(0, &netdata_buffers_statistics.buffers_exporters);
     simple_connector_init(instance);
@@ -119,7 +119,7 @@ int init_json_http_instance(struct instance *instance)
 int format_host_labels_json_plaintext(struct instance *instance, RRDHOST *host)
 {
     if (!instance->labels_buffer)
-        instance->labels_buffer = buffer_create(1024);
+        instance->labels_buffer = buffer_create(1024, &netdata_buffers_statistics.buffers_exporters);
     if (unlikely(!sending_labels_configured(instance)))
         return 0;
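The Graphite and JSON READMEs above both expose `username`/`password` options, and the `init_connectors.c` hunk shows the connector joining them as `username:password`. For HTTP connections such a pair is normally carried as an `Authorization: Basic` header containing the Base64 of `user:pass`; a self-contained sketch under that assumption (the helper names here are hypothetical, not Netdata functions):

```c
#include <stdio.h>
#include <string.h>

static const char b64_table[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

/* Minimal Base64 encoder; dst must hold at least 4 * ((len + 2) / 3) + 1 bytes. */
static void base64_encode(char *dst, const unsigned char *src, size_t len) {
    size_t o = 0;
    for (size_t i = 0; i < len; i += 3) {
        unsigned int v = (unsigned int)src[i] << 16;
        if (i + 1 < len) v |= (unsigned int)src[i + 1] << 8;
        if (i + 2 < len) v |= (unsigned int)src[i + 2];
        dst[o++] = b64_table[(v >> 18) & 63];
        dst[o++] = b64_table[(v >> 12) & 63];
        dst[o++] = (i + 1 < len) ? b64_table[(v >> 6) & 63] : '=';
        dst[o++] = (i + 2 < len) ? b64_table[v & 63] : '=';
    }
    dst[o] = '\0';
}

/* Turn the connector's credentials into an HTTP Basic auth request header. */
static void build_basic_auth_header(char *header, size_t len,
                                    const char *username, const char *password) {
    char plain[256], encoded[512];
    snprintf(plain, sizeof(plain), "%s:%s", username, password);
    base64_encode(encoded, (const unsigned char *)plain, strlen(plain));
    snprintf(header, len, "Authorization: Basic %s\r\n", encoded);
}
```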
diff --git a/exporting/mongodb/README.md b/exporting/mongodb/README.md
index b10d54716..0cbe8f059 100644
--- a/exporting/mongodb/README.md
+++ b/exporting/mongodb/README.md
@@ -1,14 +1,19 @@
 <!--
 title: "Export metrics to MongoDB"
 description: "Archive your Agent's metrics to a MongoDB database for long-term storage, further analysis, or correlation with data from other sources."
-custom_edit_url: https://github.com/netdata/netdata/edit/master/exporting/mongodb/README.md
-sidebar_label: MongoDB
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/mongodb/README.md"
+sidebar_label: "MongoDB"
+learn_status: "Published"
+learn_topic_type: "Tasks"
+learn_rel_path: "Setup/Exporting connectors"
+learn_autogeneration_metadata: "{'part_of_cloud': False, 'part_of_agent': True}"
 -->
 # Export metrics to MongoDB
-You can use the MongoDB connector for the [exporting engine](/exporting/README.md) to archive your agent's metrics to a
-MongoDB database for long-term storage, further analysis, or correlation with data from other sources.
+You can use the MongoDB connector for
+the [exporting engine](https://github.com/netdata/netdata/blob/master/exporting/README.md) to archive your agent's
+metrics to a MongoDB database for long-term storage, further analysis, or correlation with data from other sources.
 ## Prerequisites
diff --git a/exporting/mongodb/mongodb.c b/exporting/mongodb/mongodb.c
index 850d07fb3..186a7dcfd 100644
--- a/exporting/mongodb/mongodb.c
+++ b/exporting/mongodb/mongodb.c
@@ -106,7 +106,7 @@ int init_mongodb_instance(struct instance *instance)
     instance->prepare_header = NULL;
     instance->check_response = NULL;
-    instance->buffer = (void *)buffer_create(0);
+    instance->buffer = (void *)buffer_create(0, &netdata_buffers_statistics.buffers_exporters);
     if (!instance->buffer) {
         error("EXPORTING: cannot create buffer for MongoDB exporting connector instance %s", instance->config.name);
         return 1;
diff --git a/exporting/opentsdb/README.md b/exporting/opentsdb/README.md
index c9b1ab95a..c6069f372 100644
--- a/exporting/opentsdb/README.md
+++ b/exporting/opentsdb/README.md
@@ -1,14 +1,19 @@
 <!--
 title: "Export metrics to OpenTSDB"
 description: "Archive your Agent's metrics to an OpenTSDB database for long-term storage and further analysis."
-custom_edit_url: https://github.com/netdata/netdata/edit/master/exporting/opentsdb/README.md
-sidebar_label: OpenTSDB
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/opentsdb/README.md"
+sidebar_label: "OpenTSDB"
+learn_status: "Published"
+learn_topic_type: "Tasks"
+learn_rel_path: "Setup/Exporting connectors"
+learn_autogeneration_metadata: "{'part_of_cloud': False, 'part_of_agent': True}"
 -->
 # Export metrics to OpenTSDB
-You can use the OpenTSDB connector for the [exporting engine](/exporting/README.md) to archive your agent's metrics to OpenTSDB
-databases for long-term storage, further analysis, or correlation with data from other sources.
+You can use the OpenTSDB connector for
+the [exporting engine](https://github.com/netdata/netdata/blob/master/exporting/README.md) to archive your agent's
+metrics to OpenTSDB databases for long-term storage, further analysis, or correlation with data from other sources.
 ## Configuration
@@ -21,7 +26,8 @@ directory and set the following options:
     destination = localhost:4242
 ```
-Add `:http` or `:https` modifiers to the connector type if you need to use other than a plaintext protocol. For example: `opentsdb:http:my_opentsdb_instance`,
+Add `:http` or `:https` modifiers to the connector type if you need to use other than a plaintext protocol. For
+example: `opentsdb:http:my_opentsdb_instance`,
 `opentsdb:https:my_opentsdb_instance`.
 You can set basic HTTP authentication credentials using
 ```conf
@@ -29,7 +35,7 @@ Add `:http` or `:https` modifiers to the connector type if you need to use other
     password = my_password
 ```
-The OpenTSDB connector is further configurable using additional settings. See the [exporting reference
-doc](/exporting/README.md#options) for details.
+The OpenTSDB connector is further configurable using additional settings. See
+the [exporting reference doc](https://github.com/netdata/netdata/blob/master/exporting/README.md#options) for details.
diff --git a/exporting/opentsdb/opentsdb.c b/exporting/opentsdb/opentsdb.c
index a974c1264..fc01ae461 100644
--- a/exporting/opentsdb/opentsdb.c
+++ b/exporting/opentsdb/opentsdb.c
@@ -45,7 +45,7 @@ int init_opentsdb_telnet_instance(struct instance *instance)
     instance->prepare_header = NULL;
     instance->check_response = exporting_discard_response;
-    instance->buffer = (void *)buffer_create(0);
+    instance->buffer = (void *)buffer_create(0, &netdata_buffers_statistics.buffers_exporters);
     if (!instance->buffer) {
         error("EXPORTING: cannot create buffer for opentsdb telnet exporting connector instance %s", instance->config.name);
         return 1;
@@ -102,7 +102,7 @@ int init_opentsdb_http_instance(struct instance *instance)
     instance->prepare_header = opentsdb_http_prepare_header;
     instance->check_response = exporting_discard_response;
-    instance->buffer = (void *)buffer_create(0);
+    instance->buffer = (void *)buffer_create(0, &netdata_buffers_statistics.buffers_exporters);
     if (!instance->buffer) {
         error("EXPORTING: cannot create buffer for opentsdb HTTP exporting connector instance %s", instance->config.name);
         return 1;
@@ -150,7 +150,7 @@ void sanitize_opentsdb_label_value(char *dst, const char *src, size_t len)
 int format_host_labels_opentsdb_telnet(struct instance *instance, RRDHOST *host)
 {
     if(!instance->labels_buffer)
-        instance->labels_buffer = buffer_create(1024);
+        instance->labels_buffer = buffer_create(1024, &netdata_buffers_statistics.buffers_exporters);
     if (unlikely(!sending_labels_configured(instance)))
         return 0;
@@ -283,7 +283,7 @@ void opentsdb_http_prepare_header(struct instance *instance)
 int format_host_labels_opentsdb_http(struct instance *instance, RRDHOST *host)
 {
     if (!instance->labels_buffer)
-        instance->labels_buffer = buffer_create(1024);
+        instance->labels_buffer = buffer_create(1024, &netdata_buffers_statistics.buffers_exporters);
     if (unlikely(!sending_labels_configured(instance)))
         return 0;
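The OpenTSDB README above says metrics are sent as `prefix.chart.dimension` with a `host=hostname` tag. Over the plaintext (telnet) interface, OpenTSDB accepts `put` commands of that shape; the helper below is a hypothetical illustration of one such line, not the connector's actual code:

```c
#include <stdio.h>
#include <time.h>

/* One OpenTSDB telnet-style command:
 *   "put <prefix>.<chart>.<dimension> <timestamp> <value> host=<hostname>\n" */
static int opentsdb_put_line(char *dst, size_t len,
                             const char *prefix, const char *chart, const char *dimension,
                             time_t when, double value, const char *hostname) {
    return snprintf(dst, len, "put %s.%s.%s %lld %0.5f host=%s\n",
                    prefix, chart, dimension, (long long)when, value, hostname);
}

/* e.g. "put netdata.system.cpu.user 1675694400 12.34000 host=myhost\n" */
```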
diff --git a/exporting/process_data.c b/exporting/process_data.c
index fbcda0d9b..eb492535d 100644
--- a/exporting/process_data.c
+++ b/exporting/process_data.c
@@ -77,8 +77,8 @@ NETDATA_DOUBLE exporting_calculate_value_from_stored_data(
     time_t before = instance->before;
     // find the edges of the rrd database for this chart
-    time_t first_t = rd->tiers[0]->query_ops->oldest_time(rd->tiers[0]->db_metric_handle);
-    time_t last_t = rd->tiers[0]->query_ops->latest_time(rd->tiers[0]->db_metric_handle);
+    time_t first_t = rd->tiers[0].query_ops->oldest_time_s(rd->tiers[0].db_metric_handle);
+    time_t last_t = rd->tiers[0].query_ops->latest_time_s(rd->tiers[0].db_metric_handle);
     time_t update_every = st->update_every;
     struct storage_engine_query_handle handle;
@@ -126,11 +126,11 @@ NETDATA_DOUBLE exporting_calculate_value_from_stored_data(
     size_t counter = 0;
     NETDATA_DOUBLE sum = 0;
-    for (rd->tiers[0]->query_ops->init(rd->tiers[0]->db_metric_handle, &handle, after, before); !rd->tiers[0]->query_ops->is_finished(&handle);) {
-        STORAGE_POINT sp = rd->tiers[0]->query_ops->next_metric(&handle);
+    for (rd->tiers[0].query_ops->init(rd->tiers[0].db_metric_handle, &handle, after, before, STORAGE_PRIORITY_LOW); !rd->tiers[0].query_ops->is_finished(&handle);) {
+        STORAGE_POINT sp = rd->tiers[0].query_ops->next_metric(&handle);
         points_read++;
-        if (unlikely(storage_point_is_empty(sp))) {
+        if (unlikely(storage_point_is_gap(sp))) {
             // not collected
             continue;
         }
@@ -138,7 +138,7 @@ NETDATA_DOUBLE exporting_calculate_value_from_stored_data(
         sum += sp.sum;
         counter += sp.count;
     }
-    rd->tiers[0]->query_ops->finalize(&handle);
+    rd->tiers[0].query_ops->finalize(&handle);
     global_statistics_exporters_query_completed(points_read);
     if (unlikely(!counter)) {
@@ -397,7 +397,7 @@ int simple_connector_end_batch(struct instance *instance)
     struct simple_connector_buffer *last_buffer = simple_connector_data->last_buffer;
     if (!last_buffer->buffer) {
-        last_buffer->buffer = buffer_create(0);
+        last_buffer->buffer = buffer_create(0, &netdata_buffers_statistics.buffers_exporters);
     }
     if (last_buffer->used) {
@@ -419,7 +419,7 @@ int simple_connector_end_batch(struct instance *instance)
     if (last_buffer->header)
         buffer_flush(last_buffer->header);
     else
-        last_buffer->header = buffer_create(0);
+        last_buffer->header = buffer_create(0, &netdata_buffers_statistics.buffers_exporters);
     if (instance->prepare_header)
         instance->prepare_header(instance);
diff --git a/exporting/prometheus/README.md b/exporting/prometheus/README.md
index ae94867fa..97e9c632f 100644
--- a/exporting/prometheus/README.md
+++ b/exporting/prometheus/README.md
@@ -1,9 +1,14 @@
 <!--
 title: "Export metrics to Prometheus"
 description: "Export Netdata metrics to Prometheus for archiving and further analysis."
-custom_edit_url: https://github.com/netdata/netdata/edit/master/exporting/prometheus/README.md
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/README.md"
 sidebar_label: "Using Netdata with Prometheus"
+learn_status: "Published"
+learn_topic_type: "Tasks"
+learn_rel_path: "Setup/Exporting connectors"
+learn_autogeneration_metadata: "{'part_of_cloud': False, 'part_of_agent': True}"
 -->
+
 import { OneLineInstallWget, OneLineInstallCurl } from '@site/src/components/OneLineInstall/'
 # Using Netdata with Prometheus
@@ -17,7 +22,8 @@ are starting at a fresh ubuntu shell (whether you'd like to follow along in a VM
 ### Installing Netdata
-There are number of ways to install Netdata according to [Installation](/packaging/installer/README.md). The suggested way
+There are number of ways to install Netdata according to
+[Installation](https://github.com/netdata/netdata/blob/master/packaging/installer/README.md). The suggested way
 of installing the latest Netdata and keep it upgrade automatically.
 <!-- candidate for reuse -->
@@ -77,24 +83,24 @@ sudo tar -xvf /tmp/prometheus-*linux-amd64.tar.gz -C /opt/prometheus --strip=1
 We will use the following `prometheus.yml` file. Save it at `/opt/prometheus/prometheus.yml`.
-Make sure to replace `your.netdata.ip` with the IP or hostname of the host running Netdata.
+Make sure to replace `your.netdata.ip` with the IP or hostname of the host running Netdata.
 ```yaml
 # my global config
 global:
-  scrape_interval: 5s # Set the scrape interval to every 5 seconds. Default is every 1 minute.
+  scrape_interval: 5s # Set the scrape interval to every 5 seconds. Default is every 1 minute.
   evaluation_interval: 5s # Evaluate rules every 5 seconds. The default is every 1 minute.
   # scrape_timeout is set to the global default (10s).
   # Attach these labels to any time series or alerts when communicating with
   # external systems (federation, remote storage, Alertmanager).
   external_labels:
-    monitor: 'codelab-monitor'
+    monitor: 'codelab-monitor'
 # Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
 rule_files:
-  # - "first.rules"
-  # - "second.rules"
+# - "first.rules"
+# - "second.rules"
 # A scrape configuration containing exactly one endpoint to scrape:
 # Here it's Prometheus itself.
@@ -106,7 +112,7 @@ scrape_configs:
     # scheme defaults to 'http'.
     static_configs:
-      - targets: ['0.0.0.0:9090']
+      - targets: [ '0.0.0.0:9090' ]
   - job_name: 'netdata-scrape'
@@ -114,7 +120,7 @@ scrape_configs:
     params:
       # format: prometheus | prometheus_all_hosts
       # You can use `prometheus_all_hosts` if you want Prometheus to set the `instance` to your hostname instead of IP
-      format: [prometheus]
+      format: [ prometheus ]
       #
       # sources: as-collected | raw | average | sum | volume
       # default is: average
@@ -126,7 +132,7 @@ scrape_configs:
     honor_labels: true
     static_configs:
-      - targets: ['{your.netdata.ip}:19999']
+      - targets: [ '{your.netdata.ip}:19999' ]
 ```
 #### Install nodes.yml
@@ -202,7 +208,7 @@ sudo systemctl start prometheus
 sudo systemctl enable prometheus
 ```
-Prometheus should now start and listen on port 9090. Attempt to head there with your browser.
+Prometheus should now start and listen on port 9090. Attempt to head there with your browser.
 If everything is working correctly when you fetch `http://your.prometheus.ip:9090` you will see a 'Status' tab. Click
 this and click on 'targets' We should see the Netdata host as a scraped target.
@@ -219,16 +225,16 @@ Before explaining the changes, we have to understand the key differences between
 Each chart in Netdata has several properties (common to all its metrics):
-- `chart_id` - uniquely identifies a chart.
+- `chart_id` - uniquely identifies a chart.
-- `chart_name` - a more human friendly name for `chart_id`, also unique.
+- `chart_name` - a more human friendly name for `chart_id`, also unique.
-- `context` - this is the template of the chart. All disk I/O charts have the same context, all mysql requests charts
-  have the same context, etc. This is used for alarm templates to match all the charts they should be attached to.
+- `context` - this is the template of the chart. All disk I/O charts have the same context, all mysql requests charts
+  have the same context, etc. This is used for alarm templates to match all the charts they should be attached to.
-- `family` groups a set of charts together. It is used as the submenu of the dashboard.
+- `family` groups a set of charts together. It is used as the submenu of the dashboard.
-- `units` is the units for all the metrics attached to the chart.
+- `units` is the units for all the metrics attached to the chart.
 #### dimensions
@@ -240,44 +246,44 @@ they are both in the same chart).
 Netdata can send metrics to Prometheus from 3 data sources:
-- `as collected` or `raw` - this data source sends the metrics to Prometheus as they are collected. No conversion is
-  done by Netdata. The latest value for each metric is just given to Prometheus. This is the most preferred method by
+  Prometheus, but it is also the harder to work with. To work with this data source, you will need to understand how
+  to get meaningful values out of them.
+
+  The format of the metrics is: `CONTEXT{chart="CHART",family="FAMILY",dimension="DIMENSION"}`.
-  The format of the metrics is: `CONTEXT{chart="CHART",family="FAMILY",dimension="DIMENSION"}`.
+  If the metric is a counter (`incremental` in Netdata lingo), `_total` is appended the context.
-  If the metric is a counter (`incremental` in Netdata lingo), `_total` is appended the context.
+  Unlike Prometheus, Netdata allows each dimension of a chart to have a different algorithm and conversion constants
+  (`multiplier` and `divisor`). In this case, that the dimensions of a charts are heterogeneous, Netdata will use this
+  format: `CONTEXT_DIMENSION{chart="CHART",family="FAMILY"}`
-  Unlike Prometheus, Netdata allows each dimension of a chart to have a different algorithm and conversion constants
-  (`multiplier` and `divisor`). In this case, that the dimensions of a charts are heterogeneous, Netdata will use this
-  format: `CONTEXT_DIMENSION{chart="CHART",family="FAMILY"}`
+- `average` - this data source uses the Netdata database to send the metrics to Prometheus as they are presented on
+  the Netdata dashboard. So, all the metrics are sent as gauges, at the units they are presented in the Netdata
+  dashboard charts. This is the easiest to work with.
-- `average` - this data source uses the Netdata database to send the metrics to Prometheus as they are presented on
-  the Netdata dashboard. So, all the metrics are sent as gauges, at the units they are presented in the Netdata
-  dashboard charts. This is the easiest to work with.
+  The format of the metrics is: `CONTEXT_UNITS_average{chart="CHART",family="FAMILY",dimension="DIMENSION"}`.
-  The format of the metrics is: `CONTEXT_UNITS_average{chart="CHART",family="FAMILY",dimension="DIMENSION"}`.
+  When this source is used, Netdata keeps track of the last access time for each Prometheus server fetching the
+  metrics. This last access time is used at the subsequent queries of the same Prometheus server to identify the
+  time-frame the `average` will be calculated.
-  When this source is used, Netdata keeps track of the last access time for each Prometheus server fetching the
-  metrics. This last access time is used at the subsequent queries of the same Prometheus server to identify the
-  time-frame the `average` will be calculated.
+  So, no matter how frequently Prometheus scrapes Netdata, it will get all the database data.
+  To identify each Prometheus server, Netdata uses by default the IP of the client fetching the metrics.
-  So, no matter how frequently Prometheus scrapes Netdata, it will get all the database data.
-  To identify each Prometheus server, Netdata uses by default the IP of the client fetching the metrics.
-
-  If there are multiple Prometheus servers fetching data from the same Netdata, using the same IP, each Prometheus
-  server can append `server=NAME` to the URL. Netdata will use this `NAME` to uniquely identify the Prometheus server.
+  If there are multiple Prometheus servers fetching data from the same Netdata, using the same IP, each Prometheus
+  server can append `server=NAME` to the URL. Netdata will use this `NAME` to uniquely identify the Prometheus server.
-- `sum` or `volume`, is like `average` but instead of averaging the values, it sums them.
+- `sum` or `volume`, is like `average` but instead of averaging the values, it sums them.
-  The format of the metrics is: `CONTEXT_UNITS_sum{chart="CHART",family="FAMILY",dimension="DIMENSION"}`. All the
-  other operations are the same with `average`.
+  The format of the metrics is: `CONTEXT_UNITS_sum{chart="CHART",family="FAMILY",dimension="DIMENSION"}`. All the
+  other operations are the same with `average`.
-  To change the data source to `sum` or `as-collected` you need to provide the `source` parameter in the request URL.
-  e.g.: `http://your.netdata.ip:19999/api/v1/allmetrics?format=prometheus&help=yes&source=as-collected`
+  To change the data source to `sum` or `as-collected` you need to provide the `source` parameter in the request URL.
+  e.g.: `http://your.netdata.ip:19999/api/v1/allmetrics?format=prometheus&help=yes&source=as-collected`
-  Keep in mind that early versions of Netdata were sending the metrics as: `CHART_DIMENSION{}`.
+  Keep in mind that early versions of Netdata were sending the metrics as: `CHART_DIMENSION{}`.
 ### Querying Metrics
@@ -364,7 +370,7 @@ functionality of Netdata this ignores any upstream hosts - so you should conside
 ```yaml
 metrics_path: '/api/v1/allmetrics'
 params:
-  format: [prometheus_all_hosts]
+  format: [ prometheus_all_hosts ]
 honor_labels: true
 ```
@@ -389,7 +395,9 @@ To save bandwidth, and because Prometheus does not use them anyway, `# TYPE` and
 wanted they can be re-enabled via `types=yes` and `help=yes`, e.g.
 `/api/v1/allmetrics?format=prometheus&types=yes&help=yes`
-Note that if enabled, the `# TYPE` and `# HELP` lines are repeated for every occurrence of a metric, which goes against the Prometheus documentation's [specification for these lines](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#comments-help-text-and-type-information).
+Note that if enabled, the `# TYPE` and `# HELP` lines are repeated for every occurrence of a metric, which goes against
+the Prometheus
+documentation's [specification for these lines](https://github.com/prometheus/docs/blob/master/content/docs/instrumenting/exposition_formats.md#comments-help-text-and-type-information).
 ### Names and IDs
@@ -408,8 +416,8 @@ The default is controlled in `exporting.conf`:
 You can overwrite it from Prometheus, by appending to the URL:
-- `&names=no` to get IDs (the old behaviour)
-- `&names=yes` to get names
+- `&names=no` to get IDs (the old behaviour)
+- `&names=yes` to get names
 ### Filtering metrics sent to Prometheus
@@ -420,7 +428,8 @@ Netdata can filter the metrics it sends to Prometheus with this setting:
     send charts matching = *
 ```
-This settings accepts a space separated list of [simple patterns](/libnetdata/simple_pattern/README.md) to match the
+This settings accepts a space separated list
+of [simple patterns](https://github.com/netdata/netdata/blob/master/libnetdata/simple_pattern/README.md) to match the
 **charts** to be sent to Prometheus. Each pattern can use `*` as wildcard, any number of times (e.g `*a*b*c*` is valid).
 Patterns starting with `!` give a negative match (e.g `!*.bad users.* groups.*` will send all the users and groups
 except `bad` user and `bad` group).
 The order is important: the first match (positive or negative) left to right, is
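The data-source section of the Prometheus README above spells out the exposition naming for the `average` source: `CONTEXT_UNITS_average{chart="CHART",family="FAMILY",dimension="DIMENSION"}`. As a minimal sketch of emitting one such line (hypothetical helper; the optional `# HELP`/`# TYPE` comments are omitted):

```c
#include <stdio.h>

/* Emit one Prometheus exposition line in the shape described for the "average"
 * data source, e.g.
 *   netdata_system_cpu_percentage_average{chart="system.cpu",family="cpu",dimension="user"} 12.34 */
static int prometheus_average_line(char *dst, size_t len,
                                   const char *context, const char *units,
                                   const char *chart, const char *family,
                                   const char *dimension, double value) {
    return snprintf(dst, len,
                    "%s_%s_average{chart=\"%s\",family=\"%s\",dimension=\"%s\"} %0.5f\n",
                    context, units, chart, family, dimension, value);
}
```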
diff --git a/exporting/prometheus/prometheus.c b/exporting/prometheus/prometheus.c
index 294d8ec2c..dc675dd32 100644
--- a/exporting/prometheus/prometheus.c
+++ b/exporting/prometheus/prometheus.c
@@ -317,7 +317,7 @@ void format_host_labels_prometheus(struct instance *instance, RRDHOST *host)
         return;
     if (!instance->labels_buffer)
-        instance->labels_buffer = buffer_create(1024);
+        instance->labels_buffer = buffer_create(1024, &netdata_buffers_statistics.buffers_exporters);
     struct format_prometheus_label_callback tmp = {
         .instance = instance,
diff --git a/exporting/prometheus/remote_write/README.md b/exporting/prometheus/remote_write/README.md
index 54c5d6588..9bda02d49 100644
--- a/exporting/prometheus/remote_write/README.md
+++ b/exporting/prometheus/remote_write/README.md
@@ -1,8 +1,11 @@
 <!--
 title: "Export metrics to Prometheus remote write providers"
 description: "Send Netdata metrics to your choice of more than 20 external storage providers for long-term archiving and further analysis."
-custom_edit_url: https://github.com/netdata/netdata/edit/master/exporting/prometheus/remote_write/README.md
-sidebar_label: Prometheus remote write
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/prometheus/remote_write/README.md"
+sidebar_label: "Prometheus remote write"
+learn_status: "Published"
+learn_topic_type: "Tasks"
+learn_rel_path: "Setup/Exporting connectors"
 -->
 # Prometheus remote write exporting connector
@@ -15,7 +18,7 @@ than 20 external storage providers for long-term archiving and further analysis.
 To use the Prometheus remote write API with [storage providers](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage), install [protobuf](https://developers.google.com/protocol-buffers/) and [snappy](https://github.com/google/snappy) libraries.
-Next, [reinstall Netdata](/packaging/installer/REINSTALL.md), which detects that the required libraries and utilities
+Next, [reinstall Netdata](https://github.com/netdata/netdata/blob/master/packaging/installer/REINSTALL.md), which detects that the required libraries and utilities
 are now available.
 ## Configuration
diff --git a/exporting/prometheus/remote_write/remote_write.c b/exporting/prometheus/remote_write/remote_write.c
index 2e2fa3c12..1857ca333 100644
--- a/exporting/prometheus/remote_write/remote_write.c
+++ b/exporting/prometheus/remote_write/remote_write.c
@@ -104,7 +104,7 @@ int init_prometheus_remote_write_instance(struct instance *instance)
     instance->prepare_header = prometheus_remote_write_prepare_header;
     instance->check_response = process_prometheus_remote_write_response;
-    instance->buffer = (void *)buffer_create(0);
+    instance->buffer = (void *)buffer_create(0, &netdata_buffers_statistics.buffers_exporters);
     if (uv_mutex_init(&instance->mutex))
         return 1;
diff --git a/exporting/pubsub/README.md b/exporting/pubsub/README.md
index 2f9ac83d4..10252f167 100644
--- a/exporting/pubsub/README.md
+++ b/exporting/pubsub/README.md
@@ -1,8 +1,12 @@
 <!--
 title: "Export metrics to Google Cloud Pub/Sub Service"
 description: "Export Netdata metrics to the Google Cloud Pub/Sub Service for long-term archiving or analytical processing."
-custom_edit_url: https://github.com/netdata/netdata/edit/master/exporting/pubsub/README.md
-sidebar_label: Google Cloud Pub/Sub Service
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/pubsub/README.md"
+sidebar_label: "Google Cloud Pub/Sub Service"
+learn_status: "Published"
+learn_topic_type: "Tasks"
+learn_rel_path: "Setup/Exporting connectors"
+learn_autogeneration_metadata: "{'part_of_cloud': False, 'part_of_agent': True}"
 -->
 # Export metrics to Google Cloud Pub/Sub Service
diff --git a/exporting/pubsub/pubsub.c b/exporting/pubsub/pubsub.c
index b218338f1..d65fc2c40 100644
--- a/exporting/pubsub/pubsub.c
+++ b/exporting/pubsub/pubsub.c
@@ -30,7 +30,7 @@ int init_pubsub_instance(struct instance *instance)
     instance->prepare_header = NULL;
     instance->check_response = NULL;
-    instance->buffer = (void *)buffer_create(0);
+    instance->buffer = (void *)buffer_create(0, &netdata_buffers_statistics.buffers_exporters);
     if (!instance->buffer) {
         error("EXPORTING: cannot create buffer for Pub/Sub exporting connector instance %s", instance->config.name);
         return 1;
diff --git a/exporting/send_data.c b/exporting/send_data.c
index 1d20f3b74..045aab6ed 100644
--- a/exporting/send_data.c
+++ b/exporting/send_data.c
@@ -64,7 +64,7 @@ void simple_connector_receive_response(int *sock, struct instance *instance)
 {
     static BUFFER *response = NULL;
     if (!response)
-        response = buffer_create(4096);
+        response = buffer_create(4096, &netdata_buffers_statistics.buffers_exporters);
     struct stats *stats = &instance->stats;
 #ifdef ENABLE_HTTPS
diff --git a/exporting/send_internal_metrics.c b/exporting/send_internal_metrics.c
index 515cda3b2..e4347964f 100644
--- a/exporting/send_internal_metrics.c
+++ b/exporting/send_internal_metrics.c
@@ -65,7 +65,7 @@ void send_internal_metrics(struct instance *instance)
     if (!stats->initialized) {
         char id[RRD_ID_LENGTH_MAX + 1];
-        BUFFER *family = buffer_create(0);
+        BUFFER *family = buffer_create(0, &netdata_buffers_statistics.buffers_exporters);
         buffer_sprintf(family, "exporting_%s", instance->config.name);
diff --git a/exporting/tests/test_exporting_engine.c b/exporting/tests/test_exporting_engine.c
index 6ea6b1e5c..418be0b01 100644
--- a/exporting/tests/test_exporting_engine.c
+++ b/exporting/tests/test_exporting_engine.c
@@ -612,7 +612,7 @@ static void test_exporting_discard_response(void **state)
 {
     struct engine *engine = *state;
-    BUFFER *response = buffer_create(0);
+    BUFFER *response = buffer_create(0, NULL);
     buffer_sprintf(response, "Test response");
     assert_int_equal(exporting_discard_response(response, engine->instance_root), 0);
@@ -651,8 +651,8 @@ static void test_simple_connector_send_buffer(void **state)
     int sock = 1;
     int failures = 3;
     size_t buffered_metrics = 1;
-    BUFFER *header = buffer_create(0);
-    BUFFER *buffer = buffer_create(0);
+    BUFFER *header = buffer_create(0, NULL);
+    BUFFER *buffer = buffer_create(0, NULL);
     buffer_strcat(header, "test header\n");
     buffer_strcat(buffer, "test buffer\n");
@@ -695,10 +695,10 @@ static void test_simple_connector_worker(void **state)
     instance->connector_specific_data = simple_connector_data;
     simple_connector_data->last_buffer = callocz(1, sizeof(struct simple_connector_buffer));
     simple_connector_data->first_buffer = simple_connector_data->last_buffer;
-    simple_connector_data->header = buffer_create(0);
-    simple_connector_data->buffer = buffer_create(0);
-    simple_connector_data->last_buffer->header = buffer_create(0);
-    simple_connector_data->last_buffer->buffer = buffer_create(0);
+    simple_connector_data->header = buffer_create(0, NULL);
+    simple_connector_data->buffer = buffer_create(0, NULL);
+    simple_connector_data->last_buffer->header = buffer_create(0, NULL);
+    simple_connector_data->last_buffer->buffer = buffer_create(0, NULL);
     strcpy(simple_connector_data->connected_to, "localhost");
     buffer_sprintf(simple_connector_data->last_buffer->header, "test header");
@@ -822,7 +822,7 @@ static void test_flush_host_labels(void **state)
     struct engine *engine = *state;
     struct instance *instance = engine->instance_root;
-    instance->labels_buffer = buffer_create(12);
+    instance->labels_buffer = buffer_create(12, NULL);
     buffer_strcat(instance->labels_buffer, "check string");
     assert_int_equal(buffer_strlen(instance->labels_buffer), 12);
@@ -1133,7 +1133,7 @@ static void rrd_stats_api_v1_charts_allmetrics_prometheus(void **state)
 {
     (void)state;
-    BUFFER *buffer = buffer_create(0);
+    BUFFER *buffer = buffer_create(0, NULL);
     RRDSET *st;
     rrdset_foreach_read(st, localhost);
@@ -1241,8 +1241,8 @@ static void test_prometheus_remote_write_prepare_header(void **state)
     struct simple_connector_data *simple_connector_data = callocz(1, sizeof(struct simple_connector_data));
     instance->connector_specific_data = simple_connector_data;
     simple_connector_data->last_buffer = callocz(1, sizeof(struct simple_connector_buffer));
-    simple_connector_data->last_buffer->header = buffer_create(0);
-    simple_connector_data->last_buffer->buffer = buffer_create(0);
+    simple_connector_data->last_buffer->header = buffer_create(0, NULL);
+    simple_connector_data->last_buffer->buffer = buffer_create(0, NULL);
     strcpy(simple_connector_data->connected_to, "localhost");
     buffer_sprintf(simple_connector_data->last_buffer->buffer, "test buffer");
@@ -1269,7 +1269,7 @@ static void test_prometheus_remote_write_prepare_header(void **state)
 static void test_process_prometheus_remote_write_response(void **state)
 {
     (void)state;
-    BUFFER *buffer = buffer_create(0);
+    BUFFER *buffer = buffer_create(0, NULL);
     buffer_sprintf(buffer, "HTTP/1.1 200 OK\r\n");
     assert_int_equal(process_prometheus_remote_write_response(buffer, NULL), 0);
@@ -1834,7 +1834,7 @@ static void test_format_batch_mongodb(void **state)
     connector_specific_data->first_buffer->next = current_buffer;
     connector_specific_data->last_buffer = current_buffer;
-    BUFFER *buffer = buffer_create(0);
+    BUFFER *buffer = buffer_create(0, NULL);
     buffer_sprintf(buffer, "{ \"metric\": \"test_metric\" }\n");
     instance->buffer = buffer;
     stats->buffered_metrics = 1;
diff --git a/exporting/tests/test_exporting_engine.h b/exporting/tests/test_exporting_engine.h
index a9180a518..24dac8630 100644
--- a/exporting/tests/test_exporting_engine.h
+++ b/exporting/tests/test_exporting_engine.h
@@ -55,9 +55,6 @@ int __wrap_connect_to_one_of(
     size_t *reconnects_counter,
     char *connected_to,
     size_t connected_to_size);
-void __rrdhost_check_rdlock(RRDHOST *host, const char *file, const char *function, const unsigned long line);
-void __rrdset_check_rdlock(RRDSET *st, const char *file, const char *function, const unsigned long line);
-void __rrd_check_rdlock(const char *file, const char *function, const unsigned long line);
 time_t __mock_rrddim_query_oldest_time(STORAGE_METRIC_HANDLE *db_metric_handle);
 time_t __mock_rrddim_query_latest_time(STORAGE_METRIC_HANDLE *db_metric_handle);
 void __mock_rrddim_query_init(STORAGE_METRIC_HANDLE *db_metric_handle, struct rrddim_query_handle *handle, time_t start_time, time_t end_time);
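The test changes above pass `NULL` as the new second argument of `buffer_create()`, since the unit tests have no statistics structure to account the buffers against. A hedged sketch of what such a cmocka-style check could look like, assuming the buffer API names visible in the diff (`buffer_create`, `buffer_strcat`, `buffer_strlen`) plus a `buffer_free()` to release the buffer; the test itself is illustrative and not part of this patch:

```c
#include <stdarg.h>
#include <stddef.h>
#include <setjmp.h>
#include <cmocka.h>

static void test_buffer_without_statistics(void **state) {
    (void)state;
    BUFFER *b = buffer_create(0, NULL);      /* NULL: not tracked by any statistics counter */
    buffer_strcat(b, "test buffer\n");
    assert_int_equal(buffer_strlen(b), 12);  /* 12 bytes written, matching the string above */
    buffer_free(b);
}
```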