From 386ccdd61e8256c8b21ee27ee2fc12438fc5ca98 Mon Sep 17 00:00:00 2001
From: Daniel Baumann
Date: Tue, 17 Oct 2023 11:30:20 +0200
Subject: Adding upstream version 1.43.0.

Signed-off-by: Daniel Baumann
---
 exporting/aws_kinesis/README.md                   |  62 +-------
 exporting/aws_kinesis/integrations/aws_kinesis.md | 168 ++++++++++++++++++++++
 2 files changed, 169 insertions(+), 61 deletions(-)
 mode change 100644 => 120000 exporting/aws_kinesis/README.md
 create mode 100644 exporting/aws_kinesis/integrations/aws_kinesis.md

(limited to 'exporting/aws_kinesis')

diff --git a/exporting/aws_kinesis/README.md b/exporting/aws_kinesis/README.md
deleted file mode 100644
index 29b191b81..000000000
--- a/exporting/aws_kinesis/README.md
+++ /dev/null
@@ -1,61 +0,0 @@
-
-
-# Export metrics to AWS Kinesis Data Streams
-
-## Prerequisites
-
-To use AWS Kinesis for metric collection and processing, you should first
-[install](https://docs.aws.amazon.com/en_us/sdk-for-cpp/v1/developer-guide/setup.html) the AWS SDK for C++.
-`libcrypto`, `libssl`, and `libcurl` are also required to compile Netdata with Kinesis support enabled. Next, Netdata
-should be re-installed from source. The installer will detect that the required libraries are now available.
-
-If the AWS SDK for C++ is being installed from source, it is useful to set `-DBUILD_ONLY=kinesis`. Otherwise, the
-build process could take a very long time. Note that the default installation path for the libraries is
-`/usr/local/lib64`. Many Linux distributions don't include this path in their default library search paths, so it is
-advisable to use the following options to `cmake` while building the AWS SDK:
-
-```sh
-sudo cmake -DCMAKE_INSTALL_PREFIX=/usr -DBUILD_ONLY=kinesis <path to the SDK sources>
-```
-
-The `-DCMAKE_INSTALL_PREFIX=/usr` option also ensures that
-[third party dependencies](https://github.com/aws/aws-sdk-cpp#third-party-dependencies) are installed in your system
-during the SDK build process.
-
-## Configuration
-
-To enable data sending to the Kinesis service, run `./edit-config exporting.conf` in the Netdata configuration directory
-and set the following options:
-
-```conf
-[kinesis:my_instance]
-    enabled = yes
-    destination = us-east-1
-```
-
-Set the `destination` option to an AWS region.
-
-Set the AWS credentials and the stream name:
-
-```conf
-    # AWS credentials
-    aws_access_key_id = your_access_key_id
-    aws_secret_access_key = your_secret_access_key
-    # destination stream
-    stream name = your_stream_name
-```
-
-Alternatively, you can set AWS credentials for the `netdata` user using the AWS SDK for
-C++ [standard methods](https://docs.aws.amazon.com/sdk-for-cpp/v1/developer-guide/credentials.html) (see the sketch below).
-
-Netdata automatically computes a partition key for every record, in order to distribute records evenly across the
-available shards.
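-
-As a sketch of that alternative, one standard method is a shared credentials file in the home directory of the user
-running Netdata (the `[default]` profile and the values below are illustrative assumptions, not requirements; the
-exact home directory of the `netdata` user is install-specific):
-
-```conf
-# ~/.aws/credentials for the user running Netdata (hypothetical values)
-[default]
-aws_access_key_id = your_access_key_id
-aws_secret_access_key = your_secret_access_key
-```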
-
-
diff --git a/exporting/aws_kinesis/README.md b/exporting/aws_kinesis/README.md
new file mode 120000
index 000000000..dbc98ac13
--- /dev/null
+++ b/exporting/aws_kinesis/README.md
@@ -0,0 +1 @@
+integrations/aws_kinesis.md
\ No newline at end of file
diff --git a/exporting/aws_kinesis/integrations/aws_kinesis.md b/exporting/aws_kinesis/integrations/aws_kinesis.md
new file mode 100644
index 000000000..b9246d391
--- /dev/null
+++ b/exporting/aws_kinesis/integrations/aws_kinesis.md
@@ -0,0 +1,168 @@
+
+
+# AWS Kinesis
+
+
+Export metrics to AWS Kinesis Data Streams.
+
+
+## Setup
+
+### Prerequisites
+
+- First, [install](https://docs.aws.amazon.com/en_us/sdk-for-cpp/v1/developer-guide/setup.html) the AWS SDK for C++.
+- When building the SDK from source, the following steps ensure third-party dependencies are installed:
+  ```bash
+  git clone --recursive https://github.com/aws/aws-sdk-cpp.git
+  cd aws-sdk-cpp/
+  git submodule update --init --recursive
+  mkdir BUILT
+  cd BUILT
+  cmake -DCMAKE_INSTALL_PREFIX=/usr -DBUILD_ONLY=kinesis ..
+  make
+  make install
+  ```
+- `libcrypto`, `libssl`, and `libcurl` are also required to compile Netdata with Kinesis support enabled.
+- Next, Netdata should be re-installed from source. The installer will detect that the required libraries are now available.
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+
+#### Options
+
+Netdata automatically computes a partition key for every record, in order to distribute records evenly across the available shards.
+The following options can be defined for this exporter.
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | True |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | True |
+| username | Username for HTTP authentication. | my_username | False |
+| password | Password for HTTP authentication. | my_password | False |
+| data source | Selects the kind of data that will be sent to the external database (as collected/average/sum). | | False |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | False |
+| prefix | The prefix to add to all metrics. | Netdata | False |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | False |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data when the external database server is not available. | 10 | False |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 2 * update_every * 1000 | False |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | False |
+| send charts matching | One or more space separated patterns (use `*` as wildcard) checked against both chart id and chart name. | * | False |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | False |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | False |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture`, should be sent to the external database (yes/no). | | False |
+
+##### destination
+
+The format of each item in this list is: `[PROTOCOL:]IP[:PORT]`.
+- PROTOCOL can be `udp` or `tcp`. `tcp` is the default and the only protocol supported by the current exporting engine.
+- IP can be `XX.XX.XX.XX` (IPv4), or `[XX:XX...XX:XX]` (IPv6). For IPv6 you need to enclose the IP in `[]` to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Example IPv4:
+```conf
+destination = 10.11.14.2:4242 10.11.14.3:4242 10.11.14.4:4242
+```
+Example IPv6 and IPv4 together:
+```conf
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
+
+##### buffer on failures
+
+If the server fails to receive the data after that many failures, data loss on the connector instance is expected (Netdata will also log it).
+
+##### send hosts matching
+
+Includes one or more space separated patterns, using `*` as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (localhost is always checked as `localhost`), allowing us to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used, positive or negative).
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (so, the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a URL parameter `filter` that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option. A combined filtering sketch is shown after this options block.
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system, and names
+are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
+
+</details>
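+
+As a sketch of how the filtering options above combine (the instance name, hostnames, and patterns are illustrative
+assumptions, not defaults):
+
+```conf
+[kinesis:my_instance]
+    enabled = yes
+    destination = us-east-1
+    # send only hosts named prod-*, excluding any *child* hosts (order matters)
+    send hosts matching = !*child* prod-*
+    # send apps.* charts, except those ending in *reads
+    send charts matching = !*reads apps.*
+    # report human-friendly names instead of system ids
+    send names instead of ids = yes
+```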
+
+#### Examples
+
+##### Example configuration
+
+Basic configuration:
+
+```conf
+[kinesis:my_instance]
+    enabled = yes
+    destination = us-east-1
+```
+
+##### Configuration with AWS credentials
+
+Add the `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `kinesis:https:my_instance`.
+
+```conf
+[kinesis:my_instance]
+    enabled = yes
+    destination = us-east-1
+    # AWS credentials
+    aws_access_key_id = your_access_key_id
+    aws_secret_access_key = your_secret_access_key
+    # destination stream
+    stream name = your_stream_name
+```
--
cgit v1.2.3