Diffstat (limited to 'exporting/aws_kinesis')
l---------[-rw-r--r--]  exporting/aws_kinesis/README.md                     |  62
-rw-r--r--              exporting/aws_kinesis/integrations/aws_kinesis.md   | 168
2 files changed, 169 insertions, 61 deletions
diff --git a/exporting/aws_kinesis/README.md b/exporting/aws_kinesis/README.md
index 29b191b81..dbc98ac13 100644..120000
--- a/exporting/aws_kinesis/README.md
+++ b/exporting/aws_kinesis/README.md
@@ -1,61 +1 @@
-<!--
-title: "Export metrics to AWS Kinesis Data Streams"
-description: "Archive your Agent's metrics to AWS Kinesis Data Streams for long-term storage, further analysis, or correlation with data from other sources."
-custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/aws_kinesis/README.md"
-sidebar_label: "AWS Kinesis Data Streams"
-learn_status: "Published"
-learn_rel_path: "Integrations/Export"
--->
-
-# Export metrics to AWS Kinesis Data Streams
-
-## Prerequisites
-
-To use AWS Kinesis for metric collecting and processing, you should first
-[install](https://docs.aws.amazon.com/en_us/sdk-for-cpp/v1/developer-guide/setup.html) AWS SDK for C++.
-`libcrypto`, `libssl`, and `libcurl` are also required to compile Netdata with Kinesis support enabled. Next, Netdata
-should be re-installed from the source. The installer will detect that the required libraries are now available.
-
-If the AWS SDK for C++ is being installed from source, it is useful to set `-DBUILD_ONLY=kinesis`. Otherwise, the
-build process could take a very long time. Note, that the default installation path for the libraries is
-`/usr/local/lib64`. Many Linux distributions don't include this path as the default one for a library search, so it is
-advisable to use the following options to `cmake` while building the AWS SDK:
-
-```sh
-sudo cmake -DCMAKE_INSTALL_PREFIX=/usr -DBUILD_ONLY=kinesis <aws-sdk-cpp sources>
-```
-
-The `-DCMAKE_INSTALL_PREFIX=/usr` option also ensures that
-[third party dependencies](https://github.com/aws/aws-sdk-cpp#third-party-dependencies) are installed in your system
-during the SDK build process.
-
-## Configuration
-
-To enable data sending to the Kinesis service, run `./edit-config exporting.conf` in the Netdata configuration directory
-and set the following options:
-
-```conf
-[kinesis:my_instance]
- enabled = yes
- destination = us-east-1
-```
-
-Set the `destination` option to an AWS region.
-
-Set AWS credentials and stream name:
-
-```conf
- # AWS credentials
- aws_access_key_id = your_access_key_id
- aws_secret_access_key = your_secret_access_key
- # destination stream
- stream name = your_stream_name
-```
-
-Alternatively, you can set AWS credentials for the `netdata` user using AWS SDK for
-C++ [standard methods](https://docs.aws.amazon.com/sdk-for-cpp/v1/developer-guide/credentials.html).
-
-Netdata automatically computes a partition key for every record with the purpose to distribute records across
-available shards evenly.
-
-
+integrations/aws_kinesis.md
\ No newline at end of file
diff --git a/exporting/aws_kinesis/integrations/aws_kinesis.md b/exporting/aws_kinesis/integrations/aws_kinesis.md
new file mode 100644
index 000000000..b9246d391
--- /dev/null
+++ b/exporting/aws_kinesis/integrations/aws_kinesis.md
@@ -0,0 +1,168 @@
+<!--startmeta
+custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/aws_kinesis/README.md"
+meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/aws_kinesis/metadata.yaml"
+sidebar_label: "AWS Kinesis"
+learn_status: "Published"
+learn_rel_path: "Exporting"
+message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
+endmeta-->
+
+# AWS Kinesis
+
+
+<img src="https://netdata.cloud/img/aws-kinesis.svg" width="150"/>
+
+
+Export metrics to AWS Kinesis Data Streams
+
+
+
+<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
+
+## Setup
+
+### Prerequisites
+
+
+- First, [install](https://docs.aws.amazon.com/en_us/sdk-for-cpp/v1/developer-guide/setup.html) the AWS SDK for C++.
+- When building the SDK from source, the following steps ensure that its third-party dependencies are installed as well:
+  ```bash
+  git clone --recursive https://github.com/aws/aws-sdk-cpp.git
+  cd aws-sdk-cpp/
+  git submodule update --init --recursive
+  mkdir BUILT
+  cd BUILT
+  # Build only the Kinesis client to keep the build time down, and install
+  # under /usr so the libraries land in the default library search path
+  cmake -DCMAKE_INSTALL_PREFIX=/usr -DBUILD_ONLY=kinesis ..
+  make
+  make install
+  ```
+- `libcrypto`, `libssl`, and `libcurl` are also required to compile Netdata with Kinesis support enabled.
+- Next, reinstall Netdata from source; the installer will detect that the required libraries are now available (see the sketch after this list).
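+
+A minimal sketch of that reinstall step, assuming Netdata was originally installed from a source checkout at `~/netdata` (the path and the `--dont-wait` flag are illustrative):
+
+```bash
+# Rebuild Netdata so the installer detects the newly installed
+# AWS SDK for C++ and enables the Kinesis exporting connector.
+cd ~/netdata
+./netdata-installer.sh --dont-wait
+```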
+
+
+
+### Configuration
+
+#### File
+
+The configuration file name for this integration is `exporting.conf`.
+
+
+You can edit the configuration file using the `edit-config` script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
+
+```bash
+cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
+sudo ./edit-config exporting.conf
+```
+#### Options
+
+Netdata automatically computes a partition key for every record, so that records are distributed evenly across the available shards.
+The following options can be defined for this exporter.
+
+
+<details><summary>Config options</summary>
+
+| Name | Description | Default | Required |
+|:----|:-----------|:-------|:--------:|
+| enabled | Enables or disables an exporting connector instance (yes/no). | no | True |
+| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | True |
+| username | Username for HTTP authentication | my_username | False |
+| password | Password for HTTP authentication | my_password | False |
+| data source | Selects the kind of data that will be sent to the external database. (as collected/average/sum) | | False |
+| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | False |
+| prefix | The prefix to add to all metrics. | Netdata | False |
+| update every | Frequency of sending data to the external database, in seconds. | 10 | False |
+| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | False |
+| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 2 * update_every * 1000 | False |
+| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | False |
+| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | False |
+| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | False |
+| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | False |
+| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | False |
+
+##### destination
+
+The format of each item in this list is: [PROTOCOL:]IP[:PORT].
+- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
+- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in [] to separate it from the port.
+- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
+
+Note that the Kinesis connector is an exception: set `destination` to an AWS region instead (for example, `us-east-1`), as shown in the configuration examples further down.
+
+Example IPv4:
+```yaml
+destination = 10.11.14.2:4242 10.11.14.3:4242 10.11.14.4:4242
+```
+Example IPv6 and IPv4 together:
+```yaml
+destination = [ffff:...:0001]:2003 10.11.12.1:2003
+```
+When multiple servers are defined, Netdata will try the next one when the previous one fails.
+
+
+##### update every
+
+Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
+send data to the same database. This randomness does not affect the quality of the data, only the time at which they are sent.
+
+
+##### buffer on failures
+
+If the external server is still not available after that many iterations, the buffered data is lost and Netdata logs the loss for the connector instance.
+
+
+##### send hosts matching
+
+Includes one or more space-separated patterns, using `*` as wildcard (any number of times within each pattern).
+The patterns are checked against the hostname (the local host is always checked as `localhost`), allowing you to
+filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
+
+A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
+use `!*child* *db*` (the order is important: the first pattern matching the hostname will be used, positive or negative).
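+
+A minimal sketch of that example in `exporting.conf`, assuming a connector instance named `kinesis:my_instance`:
+
+```yaml
+[kinesis:my_instance]
+    # First matching pattern wins: exclude *child* hosts, then include *db* hosts.
+    send hosts matching = !*child* *db*
+```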
+
+
+##### send charts matching
+
+A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
+use `!*reads apps.*` (the order is important: the first pattern matching the chart id or the chart name will be used,
+positive or negative). There is also a `filter` URL parameter that can be used while querying `allmetrics`. The URL parameter
+has a higher priority than the configuration option.
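+
+A minimal sketch of that example in `exporting.conf` (the instance name is assumed); the same pattern syntax also works for the `filter` URL parameter of `allmetrics`:
+
+```yaml
+[kinesis:my_instance]
+    # First matching pattern wins: drop charts ending in *reads, keep the rest of apps.*.
+    send charts matching = !*reads apps.*
+```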
+
+
+##### send names instead of ids
+
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
+are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
+different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
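+
+For instance, a hedged snippet that switches the exported names to the human-friendly form:
+
+```yaml
+[kinesis:my_instance]
+    # Export chart and dimension names instead of system IDs.
+    send names instead of ids = yes
+```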
+
+
+</details>
+
+#### Examples
+
+##### Example configuration
+
+Basic configuration
+
+```yaml
+[kinesis:my_instance]
+ enabled = yes
+ destination = us-east-1
+
+```
+##### Configuration with AWS credentials
+
+Add the `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `kinesis:https:my_instance`.
+
+```yaml
+[kinesis:my_instance]
+ enabled = yes
+ destination = us-east-1
+ # AWS credentials
+ aws_access_key_id = your_access_key_id
+ aws_secret_access_key = your_secret_access_key
+ # destination stream
+ stream name = your_stream_name
+
+```
+