authorDaniel Baumann <daniel.baumann@progress-linux.org>2024-07-24 09:54:23 +0000
committerDaniel Baumann <daniel.baumann@progress-linux.org>2024-07-24 09:54:44 +0000
commit836b47cb7e99a977c5a23b059ca1d0b5065d310e (patch)
tree1604da8f482d02effa033c94a84be42bc0c848c3 /exporting/aws_kinesis/integrations
parentReleasing debian version 1.44.3-2. (diff)
Merging upstream version 1.46.3.
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'exporting/aws_kinesis/integrations')
-rw-r--r--  exporting/aws_kinesis/integrations/aws_kinesis.md | 168
1 file changed, 0 insertions, 168 deletions
diff --git a/exporting/aws_kinesis/integrations/aws_kinesis.md b/exporting/aws_kinesis/integrations/aws_kinesis.md
deleted file mode 100644
index deff55be..00000000
--- a/exporting/aws_kinesis/integrations/aws_kinesis.md
+++ /dev/null
@@ -1,168 +0,0 @@
-<!--startmeta
-custom_edit_url: "https://github.com/netdata/netdata/edit/master/exporting/aws_kinesis/README.md"
-meta_yaml: "https://github.com/netdata/netdata/edit/master/exporting/aws_kinesis/metadata.yaml"
-sidebar_label: "AWS Kinesis"
-learn_status: "Published"
-learn_rel_path: "Exporting"
-message: "DO NOT EDIT THIS FILE DIRECTLY, IT IS GENERATED BY THE EXPORTER'S metadata.yaml FILE"
-endmeta-->
-
-# AWS Kinesis
-
-
-<img src="https://netdata.cloud/img/aws-kinesis.svg" width="150"/>
-
-
-Export metrics to AWS Kinesis Data Streams
-
-
-
-<img src="https://img.shields.io/badge/maintained%20by-Netdata-%2300ab44" />
-
-## Setup
-
-### Prerequisites
-
-
-- First, [install](https://docs.aws.amazon.com/en_us/sdk-for-cpp/v1/developer-guide/setup.html) the AWS SDK for C++.
-- If you are building the SDK from source, use the following instructions to ensure third-party dependencies are installed:
- ```bash
- # clone the SDK together with its bundled third-party dependencies
- git clone --recursive https://github.com/aws/aws-sdk-cpp.git
- cd aws-sdk-cpp/
- git submodule update --init --recursive
- # build only the Kinesis client, in an out-of-tree build directory
- mkdir BUILT
- cd BUILT
- cmake -DCMAKE_INSTALL_PREFIX=/usr -DBUILD_ONLY=kinesis ..
- make
- make install
- ```
-- `libcrypto`, `libssl`, and `libcurl` are also required to compile Netdata with Kinesis support enabled.
-- Next, re-install Netdata from source. The installer will detect that the required libraries are now available; a sketch of this step follows below.
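-
-A minimal sketch of that rebuild step, assuming a fresh checkout of the Netdata sources (your checkout location and installer options may differ):
-
-```bash
-# fetch the Netdata sources (skip if you already have a checkout)
-git clone https://github.com/netdata/netdata.git
-cd netdata
-# re-run the installer; it probes for the AWS SDK and the
-# libcrypto/libssl/libcurl libraries installed above and enables
-# Kinesis support when it finds them
-./netdata-installer.sh
-```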
-
-
-
-### Configuration
-
-#### File
-
-The configuration file name for this integration is `exporting.conf`.
-
-
-You can edit the configuration file using the `edit-config` script from the
-Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md#the-netdata-config-directory).
-
-```bash
-cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
-sudo ./edit-config exporting.conf
-```
-#### Options
-
-Netdata automatically computes a partition key for every record, in order to distribute records evenly across the available shards.
-The following options can be defined for this exporter.
-
-
-<details><summary>Config options</summary>
-
-| Name | Description | Default | Required |
-|:----|:-----------|:-------|:--------:|
-| enabled | Enables or disables an exporting connector instance (yes/no). | no | yes |
-| destination | Accepts a space separated list of hostnames, IPs (IPv4 and IPv6) and ports to connect to. Netdata will use the first available to send the metrics. | no | yes |
-| username | Username for HTTP authentication | my_username | no |
-| password | Password for HTTP authentication | my_password | no |
-| data source | Selects the kind of data that will be sent to the external database (as collected/average/sum). | | no |
-| hostname | The hostname to be used for sending data to the external database server. | [global].hostname | no |
-| prefix | The prefix to add to all metrics. | Netdata | no |
-| update every | Frequency of sending data to the external database, in seconds. | 10 | no |
-| buffer on failures | The number of iterations (`update every` seconds) to buffer data, when the external database server is not available. | 10 | no |
-| timeout ms | The timeout in milliseconds to wait for the external database server to process the data. | 2 * update_every * 1000 | no |
-| send hosts matching | Hosts filter. Determines which hosts will be sent to the external database. The syntax is [simple patterns](https://github.com/netdata/netdata/tree/master/libnetdata/simple_pattern#simple-patterns). | localhost * | no |
-| send charts matching | One or more space separated patterns (use * as wildcard) checked against both chart id and chart name. | * | no |
-| send names instead of ids | Controls the metric names Netdata should send to the external database (yes/no). | | no |
-| send configured labels | Controls if host labels defined in the `[host labels]` section in `netdata.conf` should be sent to the external database (yes/no). | | no |
-| send automatic labels | Controls if automatically created labels, like `_os_name` or `_architecture` should be sent to the external database (yes/no). | | no |
-
-##### destination
-
-The format of each item in this list is: [PROTOCOL:]IP[:PORT].
-- PROTOCOL can be udp or tcp. tcp is the default and the only protocol supported by the current exporting engine.
-- IP can be XX.XX.XX.XX (IPv4), or [XX:XX...XX:XX] (IPv6). For IPv6 you need to enclose the IP in brackets to separate it from the port.
-- PORT can be a number or a service name. If omitted, the default port for the exporting connector will be used.
-
-Example IPv4:
-```yaml
-destination = 10.11.14.2:4242 10.11.14.3:4242 10.11.14.4:4242
-```
-Example IPv6 and IPv4 together:
-```yaml
-destination = [ffff:...:0001]:2003 10.11.12.1:2003
-```
-When multiple servers are defined, Netdata will try the next one when the previous one fails.
-
-
-##### update every
-
-Netdata will add some randomness to this number, to prevent stressing the external server when many Netdata servers
-send data to the same database. This randomness does not affect the quality of the data, only the time they are sent.
-
-
-##### buffer on failures
-
-If the external database server still fails to receive the data after that many failed iterations, data loss on the connector instance is expected (Netdata will also log it).
-
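-As a worked example, with the defaults `update every = 10` and `buffer on failures = 10`, Netdata buffers up to 10 × 10 = 100 seconds of data before older iterations start being dropped. A minimal snippet (the instance name `my_instance` is just a placeholder):
-
-```yaml
-[kinesis:my_instance]
-    # 10 buffered iterations x 10 seconds each = up to 100 seconds of data
-    update every = 10
-    buffer on failures = 10
-```
-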
-
-##### send hosts matching
-
-Includes one or more space-separated patterns, using `*` as wildcard (any number of times within each pattern).
-The patterns are checked against the hostname (the local host is always checked as `localhost`), allowing us to
-filter which hosts will be sent to the external database when this Netdata is a central Netdata aggregating multiple hosts.
-
-A pattern starting with `!` gives a negative match. So to match all hosts named `*db*` except hosts containing `*child*`,
-use `!*child* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive or negative).
-
-
-##### send charts matching
-
-A pattern starting with `!` gives a negative match. So to match all charts named `apps.*` except charts ending in `*reads`,
-use `!*reads apps.*` (the order is important: the first pattern matching the chart id or the chart name will be used,
-positive or negative). There is also a `filter` URL parameter that can be used while querying `allmetrics`. The URL parameter
-has a higher priority than the configuration option.
-
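-Putting both host and chart filters together in one hypothetical instance (the patterns are the examples above):
-
-```yaml
-[kinesis:my_instance]
-    # all hosts named *db* except those containing *child*
-    send hosts matching = !*child* *db*
-    # all apps.* charts except those ending in *reads
-    send charts matching = !*reads apps.*
-```
-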
-
-##### send names instead of ids
-
-Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names
-are human-friendly labels (also unique). Most charts and metrics have the same ID and name, but in several cases they are
-different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
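-
-For instance, a device-mapper disk may be collected with an ID like `dm-0` while its name reflects the logical volume label (an illustrative case, not Netdata's full mapping). To send the names:
-
-```yaml
-[kinesis:my_instance]
-    send names instead of ids = yes
-```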
-
-
-</details>
-
-#### Examples
-
-##### Example configuration
-
-Basic configuration
-
-```yaml
-[kinesis:my_instance]
- enabled = yes
- destination = us-east-1
-
-```
-##### Configuration with AWS credentials
-
-Add a `:https` modifier to the connector type if you need to use the TLS/SSL protocol. For example: `kinesis:https:my_instance`.
-
-```yaml
-[kinesis:my_instance]
- enabled = yes
- destination = us-east-1
- # AWS credentials
- aws_access_key_id = your_access_key_id
- aws_secret_access_key = your_secret_access_key
- # destination stream
- stream name = your_stream_name
-
-```
-