author | Daniel Baumann <daniel.baumann@progress-linux.org> | 2022-08-12 07:26:17 +0000
committer | Daniel Baumann <daniel.baumann@progress-linux.org> | 2022-08-12 07:26:17 +0000
commit | 7877a98bd9c00db5e81dd2f8c734cba2bab20be7 (patch)
tree | d18b767250f7c7ced9b8abe2ece784ac1fe24d3e | /docs
parent | Releasing debian version 1.35.1-2. (diff)
download | netdata-7877a98bd9c00db5e81dd2f8c734cba2bab20be7.tar.xz, netdata-7877a98bd9c00db5e81dd2f8c734cba2bab20be7.zip
Merging upstream version 1.36.0.
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to '')
-rw-r--r-- | docs/Demo-Sites.md | 1
-rw-r--r-- | docs/Running-behind-h2o.md | 183
-rw-r--r-- | docs/anonymous-statistics.md | 4
-rw-r--r-- | docs/collect/application-metrics.md | 39
-rw-r--r-- | docs/configure/nodes.md | 4
-rw-r--r-- | docs/guides/deploy/ansible.md | 2
-rw-r--r-- | docs/guides/longer-metrics-storage.md | 218
-rw-r--r-- | docs/guides/monitor/anomaly-detection.md | 271
-rw-r--r-- | docs/guides/troubleshoot/troubleshooting-agent-with-cloud-connection.md | 117
-rw-r--r-- | docs/store/change-metrics-storage.md | 95
-rw-r--r-- | docs/store/distributed-data-architecture.md | 67
11 files changed, 572 insertions, 429 deletions
diff --git a/docs/Demo-Sites.md b/docs/Demo-Sites.md index e6e8eb3dc..80c98df3d 100644 --- a/docs/Demo-Sites.md +++ b/docs/Demo-Sites.md @@ -12,7 +12,6 @@ You can also view live demos of Netdata at **[https://www.netdata.cloud](https:/ | :------------------ | :-------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| :------------------------------------------------- | | London (UK) | **[london3.my-netdata.io](https://london3.my-netdata.io)**<br/>(this is the global Netdata **registry** and has **named** and **mysql** charts) | [![Requests Per Second](https://london3.my-netdata.io/api/v1/badge.svg?chart=netdata.requests&dimensions=requests&after=-3600&options=unaligned&group=sum&label=reqs&units=empty&value_color=blue&precision=0&v42)](https://london3.my-netdata.io) | [DigitalOcean.com](https://m.do.co/c/83dc9f941745) | | Atlanta (USA) | **[cdn77.my-netdata.io](https://cdn77.my-netdata.io)**<br/>(with **named** and **mysql** charts) | [![Requests Per Second](https://cdn77.my-netdata.io/api/v1/badge.svg?chart=netdata.requests&dimensions=requests&after=-3600&options=unaligned&group=sum&label=reqs&units=empty&value_color=blue&precision=0&v42)](https://cdn77.my-netdata.io) | [CDN77.com](https://www.cdn77.com/) | -| Israel | **[octopuscs.my-netdata.io](https://octopuscs.my-netdata.io)** | [![Requests Per Second](https://octopuscs.my-netdata.io/api/v1/badge.svg?chart=netdata.requests&dimensions=requests&after=-3600&options=unaligned&group=sum&label=reqs&units=empty&value_color=blue&precision=0&v42)](https://octopuscs.my-netdata.io) | [OctopusCS.com](https://www.octopuscs.com) | | Bangalore (India) | **[bangalore.my-netdata.io](https://bangalore.my-netdata.io)** | [![Requests Per Second](https://bangalore.my-netdata.io/api/v1/badge.svg?chart=netdata.requests&dimensions=requests&after=-3600&options=unaligned&group=sum&label=reqs&units=empty&value_color=blue&precision=0&v42)](https://bangalore.my-netdata.io) | [DigitalOcean.com](https://m.do.co/c/83dc9f941745) | | Frankfurt (Germany) | **[frankfurt.my-netdata.io](https://frankfurt.my-netdata.io)** | [![Requests Per Second](https://frankfurt.my-netdata.io/api/v1/badge.svg?chart=netdata.requests&dimensions=requests&after=-3600&options=unaligned&group=sum&label=reqs&units=empty&value_color=blue&precision=0&v42)](https://frankfurt.my-netdata.io) | [DigitalOcean.com](https://m.do.co/c/83dc9f941745) | | New York (USA) | **[newyork.my-netdata.io](https://newyork.my-netdata.io)** | [![Requests Per Second](https://newyork.my-netdata.io/api/v1/badge.svg?chart=netdata.requests&dimensions=requests&after=-3600&options=unaligned&group=sum&label=reqs&units=empty&value_color=blue&precision=0&v42)](https://newyork.my-netdata.io) | [DigitalOcean.com](https://m.do.co/c/83dc9f941745) | diff --git a/docs/Running-behind-h2o.md b/docs/Running-behind-h2o.md new file mode 100644 index 000000000..c49e4e16f --- /dev/null +++ b/docs/Running-behind-h2o.md @@ -0,0 +1,183 @@ +<!-- +title: "Running Netdata behind H2O" +custom_edit_url: https://github.com/netdata/netdata/edit/master/docs/Running-behind-h2o.md +--> + +# Running Netdata behind H2O + +[H2O](https://h2o.examp1e.net/) is a new generation HTTP server that provides quicker response to 
users with less CPU utilization when compared to older generation of web servers. + +It is notable for having much simpler configuration than many popular HTTP servers, low resource requirements, and integrated native support for many things that other HTTP servers may need special setup to use. + +## Why H2O + +- Sane configuration defaults mean that typical configurations are very minimalistic and easy to work with. + +- Native support for HTTP/2 provides improved performance when accessing the Netdata dashboard remotely. + +- Password protect access to the Netdata dashboard without requiring Netdata Cloud. + +## H2O configuration file. + +On most systems, the H2O configuration is found under `/etc/h2o`. H2O uses [YAML 1.1](https://yaml.org/spec/1.1/), with a few special extensions, for it’s configuration files, with the main configuration file being `/etc/h2o/h2o.conf`. + +You can edit the H2O configuration file with Nano, Vim or any other text editors with which you are comfortable. + +After making changes to the configuration files, perform the following: + +- Test the configuration with `h2o -m test -c /etc/h2o/h2o.conf` + +- Restart H2O to apply tha changes with `/etc/init.d/h2o restart` or `service h2o restart` + +## Ways to access Netdata via H2O + +### As a virtual host + +With this method instead of `SERVER_IP_ADDRESS:19999`, the Netdata dashboard can be accessed via a human-readable URL such as `netdata.example.com` used in the configuration below. + +```yaml +hosts: + netdata.example.com: + listen: + port: 80 + paths: + /: + proxy.preserve-host: ON + proxy.reverse.url: http://127.0.0.1:19999 +``` + +### As a subfolder of an existing virtual host + +This method is recommended when Netdata is to be served from a subfolder (or directory). +In this case, the virtual host `netdata.example.com` already exists and Netdata has to be accessed via `netdata.example.com/netdata/`. + +```yaml +hosts: + netdata.example.com: + listen: + port: 80 + paths: + /netdata: + redirect: + status: 301 + url: /netdata/ + /netdata/: + proxy.preserve-host: ON + proxy.reverse.url: http://127.0.0.1:19999 +``` + +### As a subfolder for multiple Netdata servers, via one H2O instance + +This is the recommended configuration when one H2O instance will be used to manage multiple Netdata servers via subfolders. + +```yaml +hosts: + netdata.example.com: + listen: + port: 80 + paths: + /netdata/server1: + redirect: + status: 301 + url: /netdata/server1/ + /netdata/server1/: + proxy.preserve-host: ON + proxy.reverse.url: http://198.51.100.1:19999 + /netdata/server2: + redirect: + status: 301 + url: /netdata/server2/ + /netdata/server2/: + proxy.preserve-host: ON + proxy.reverse.url: http://198.51.100.2:19999 +``` + +Of course you can add as many backend servers as you like. + +Using the above, you access Netdata on the backend servers, like this: + +- `http://netdata.example.com/netdata/server1/` to reach Netdata on `198.51.100.1:19999` +- `http://netdata.example.com/netdata/server2/` to reach Netdata on `198.51.100.2:19999` + +### Encrypt the communication between H2O and Netdata + +In case Netdata's web server has been [configured to use TLS](/web/server/README.md#enabling-tls-support), it is +necessary to specify inside the H2O configuration that the final destination is using TLS. 
To do this, change the +`http://` on the `proxy.reverse.url` line in your H2O configuration with `https://` + +### Enable authentication + +Create an authentication file to enable basic authentication via H2O, this secures your Netdata dashboard. + +If you don't have an authentication file, you can use the following command: + +```sh +printf "yourusername:$(openssl passwd -apr1)" > /etc/h2o/passwords +``` + +And then add a basic authentication handler to each path definition: + +```yaml +hosts: + netdata.example.com: + listen: + port: 80 + paths: + /: + mruby.handler: | + require "htpasswd.rb" + Htpasswd.new("/etc/h2o/passwords", "netdata.example.com") + proxy.preserve-host: ON + proxy.reverse.url: http://127.0.0.1:19999 +``` + +For more information on using basic authentication with H2O, see [their official documentation](https://h2o.examp1e.net/configure/basic_auth.html). + +## Limit direct access to Netdata + +If your H2O server is on `localhost`, you can use this to ensure external access is only possible through H2O: + +``` +[web] + bind to = 127.0.0.1 ::1 +``` + +--- + +You can also use a unix domain socket. This will provide faster communication between H2O and Netdata as well: + +``` +[web] + bind to = unix:/run/netdata/netdata.sock +``` + +In the H2O configuration, use a line like the following to connect to Netdata via the unix socket: + +```yaml +proxy.reverse.url http://[unix:/run/netdata/netdata.sock] +``` + +--- + +If your H2O server is not on localhost, you can set: + +``` +[web] + bind to = * + allow connections from = IP_OF_H2O_SERVER +``` + +*note: Netdata v1.9+ support `allow connections from`* + +`allow connections from` accepts [Netdata simple patterns](/libnetdata/simple_pattern/README.md) to match against +the connection IP address. + +## Prevent the double access.log + +H2O logs accesses and Netdata logs them too. You can prevent Netdata from generating its access log, by setting +this in `/etc/netdata/netdata.conf`: + +``` +[global] + access log = none +``` diff --git a/docs/anonymous-statistics.md b/docs/anonymous-statistics.md index 73ac8e978..99bd3dc7f 100644 --- a/docs/anonymous-statistics.md +++ b/docs/anonymous-statistics.md @@ -1,8 +1,6 @@ <!-- ---- title: "Anonymous statistics" custom_edit_url: https://github.com/netdata/netdata/edit/master/docs/anonymous-statistics.md ---- --> # Anonymous statistics @@ -10,7 +8,7 @@ custom_edit_url: https://github.com/netdata/netdata/edit/master/docs/anonymous-s By default, Netdata collects anonymous usage information from the open-source monitoring agent using the open-source product analytics platform [PostHog](https://github.com/PostHog/posthog). We use their [cloud enterprise platform](https://posthog.com/product). -We are strongly committed to your [data privacy](https://netdata.cloud/data-privacy/). +We are strongly committed to your [data privacy](https://netdata.cloud/privacy/). We use the statistics gathered from this information for two purposes: diff --git a/docs/collect/application-metrics.md b/docs/collect/application-metrics.md index ffd6b0ac0..c9bc4e2c8 100644 --- a/docs/collect/application-metrics.md +++ b/docs/collect/application-metrics.md @@ -21,25 +21,26 @@ charts under **Users**, and per-user group charts under **User Groups**. Our most popular application collectors: -- [Prometheus endpoints](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/prometheus): Gathers - metrics from one or more Prometheus endpoints that use the OpenMetrics exposition format. 
Autodetects more than 600 - endpoints. -- [Web server logs (Apache, NGINX)](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/weblog/): - Tail access logs and provide very detailed web server performance statistics. This module is able to parse 200k+ - rows in less than half a second. -- [MySQL](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/mysql/): Collect database global, - replication, and per-user statistics. -- [Redis](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/redis): Monitor database status by reading the server's response to the `INFO` - command. -- [Apache](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/apache/): Collect Apache web - server performance metrics via the `server-status?auto` endpoint. -- [Nginx](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/nginx/): Monitor web server - status information by gathering metrics via `ngx_http_stub_status_module`. -- [Postgres](/collectors/python.d.plugin/postgres/README.md): Collect database health and performance metrics. -- [ElasticSearch](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/elasticsearch): Collect search engine performance and health - statistics. Optionally collects per-index metrics. -- [PHP-FPM](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/phpfpm/): Collect application - summary and processes health metrics by scraping the status page (`/status?full`). +- [Prometheus endpoints](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/prometheus): Gathers + metrics from one or more Prometheus endpoints that use the OpenMetrics exposition format. Auto-detects more than 600 + endpoints. +- [Web server logs (Apache, NGINX)](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/weblog/): + Tail access logs and provide very detailed web server performance statistics. This module is able to parse 200k+ + rows in less than half a second. +- [MySQL](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/mysql/): Collect database global, + replication, and per-user statistics. +- [Redis](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/redis): Monitor database status by + reading the server's response to the `INFO` command. +- [Apache](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/apache/): Collect Apache web server + performance metrics via the `server-status?auto` endpoint. +- [Nginx](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/nginx/): Monitor web server status + information by gathering metrics via `ngx_http_stub_status_module`. +- [Postgres](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/postgres): Collect database health + and performance metrics. +- [ElasticSearch](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/elasticsearch): Collect search + engine performance and health statistics. Optionally collects per-index metrics. +- [PHP-FPM](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/phpfpm/): Collect application summary + and processes health metrics by scraping the status page (`/status?full`). Our [supported collectors list](/collectors/COLLECTORS.md#service-and-application-collectors) shows all Netdata's application metrics collectors, including those for containers/k8s clusters. 
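Most of the Go-based collectors listed above are configured the same way: a short YAML job definition placed in the Netdata config directory. As a rough sketch (the job name and the exporter URL below are placeholders, not part of the upstream docs), a minimal Prometheus-endpoint job could look like this:

```yaml
# opened with: sudo ./edit-config go.d/prometheus.conf
jobs:
  - name: my_exporter                    # placeholder job name
    url: http://127.0.0.1:9090/metrics   # placeholder OpenMetrics endpoint
```

After saving the file and restarting the Agent, the new charts appear under the corresponding application section of the dashboard.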
diff --git a/docs/configure/nodes.md b/docs/configure/nodes.md index 02cf6f052..841419a72 100644 --- a/docs/configure/nodes.md +++ b/docs/configure/nodes.md @@ -61,6 +61,10 @@ The Netdata config directory also contains one symlink: versions are copied into the config directory when opened with `edit-config`. _Do not edit the files in `/usr/lib/netdata/conf.d`, as they are overwritten by updates to the Netdata Agent._ +## Configure a Netdata docker container + +See [configure agent containers](/packaging/docker/README.md#configure-agent-containers). + ## Use `edit-config` to edit configuration files The **recommended way to easily and safely edit Netdata's configuration** is with the `edit-config` script. This script diff --git a/docs/guides/deploy/ansible.md b/docs/guides/deploy/ansible.md index 7e5532b62..35c946021 100644 --- a/docs/guides/deploy/ansible.md +++ b/docs/guides/deploy/ansible.md @@ -56,6 +56,8 @@ mv community/netdata-agent-deployment/ansible-quickstart . rm -rf community ``` +Or if you don't want to clone the entire repository, use the [gitzip browser extension](https://gitzip.org/) to get the netdata-agent-deployment directory as a zip file. + Next, `cd` into the Ansible directory. ```bash diff --git a/docs/guides/longer-metrics-storage.md b/docs/guides/longer-metrics-storage.md index 85edb55ee..8ccd9585f 100644 --- a/docs/guides/longer-metrics-storage.md +++ b/docs/guides/longer-metrics-storage.md @@ -1,160 +1,158 @@ <!-- -title: "Change how long Netdata stores metrics" -description: "With a single configuration change, the Netdata Agent can store days, weeks, or months of metrics at its famous per-second granularity." +title: "Netdata Longer Metrics Retention" +description: "" custom_edit_url: https://github.com/netdata/netdata/edit/master/docs/guides/longer-metrics-storage.md --> -# Change how long Netdata stores metrics +# Netdata Longer Metrics Retention -Netdata helps you collect thousands of system and application metrics every second, but what about storing them for the -long term? +Metrics retention affects 3 parameters on the operation of a Netdata Agent: -Many people think Netdata can only store about an hour's worth of real-time metrics, but that's simply not true any -more. With the right settings, Netdata is quite capable of efficiently storing hours or days worth of historical, -per-second metrics without having to rely on an [exporting engine](/docs/export/external-databases.md). +1. The disk space required to store the metrics. +2. The memory the Netdata Agent will require to have that retention available for queries. +3. The CPU resources that will be required to query longer time-frames. -This guide gives two options for configuring Netdata to store more metrics. **We recommend the default [database -engine](#using-the-database-engine)**, but you can stick with or switch to the round-robin database if you prefer. +As retention increases, the resources required to support that retention increase too. -Let's get started. +Since Netdata Agents usually run at the edge, inside production systems, Netdata Agent **parents** should be considered. When having a **parent - child** setup, the child (the Netdata Agent running on a production system) delegates all its functions, including longer metrics retention and querying, to the parent node that can dedicate more resources to this task. A single Netdata Agent parent can centralize multiple children Netdata Agents (dozens, hundreds, or even thousands depending on its available resources). 
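As a sketch of such a **parent - child** setup, streaming is enabled in `stream.conf` on both nodes; the parent hostname and the API key below are placeholders (any UUID, for example one produced by `uuidgen`, can be used):

```
# stream.conf on the child (the production system)
[stream]
    enabled = yes
    destination = parent.example.com:19999
    api key = 11111111-2222-3333-4444-555555555555

# stream.conf on the parent (accept children that present this key)
[11111111-2222-3333-4444-555555555555]
    enabled = yes
```

With this in place, the child can keep a minimal local database while the parent holds the longer retention discussed below.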
-## Using the database engine -The database engine uses RAM to store recent metrics while also using a "spill to disk" feature that takes advantage of -available disk space for long-term metrics storage. This feature of the database engine allows you to store a much -larger dataset than your system's available RAM. +## Ephemerality of metrics -The database engine is currently the default method of storing metrics, but if you're not sure which database you're -using, check out your `netdata.conf` file and look for the `memory mode` setting: +The ephemerality of metrics plays an important role in retention. In environments where metrics stop being collected and new metrics are constantly being generated, we are interested about 2 parameters: -```conf -[global] - memory mode = dbengine -``` - -If `memory mode` is set to anything but `dbengine`, change it and restart Netdata using the standard command for -restarting services on your system. You're now using the database engine! +1. The **expected concurrent number of metrics** as an average for the lifetime of the database. + This affects mainly the storage requirements. -What makes the database engine efficient? While it's structured like a traditional database, the database engine splits -data between RAM and disk. The database engine caches and indexes data on RAM to keep memory usage low, and then -compresses older metrics onto disk for long-term storage. +2. The **expected total number of unique metrics** for the lifetime of the database. + This affects mainly the memory requirements for having all these metrics indexed and available to be queried. -When the Netdata dashboard queries for historical metrics, the database engine will use its cache, stored in RAM, to -return relevant metrics for visualization in charts. +## Granularity of metrics -Now, given that the database engine uses _both_ RAM and disk, there are two other settings to consider: `page cache -size` and `dbengine multihost disk space`. +The granularity of metrics (the frequency they are collected and stored, i.e. their resolution) is significantly affecting retention. -```conf -[global] - page cache size = 32 - dbengine multihost disk space = 256 -``` +Lowering the granularity from per second to every two seconds, will double their retention and half the CPU requirements of the Netdata Agent, without affecting disk space or memory requirements. -`page cache size` sets the maximum amount of RAM (in MiB) the database engine will use for caching and indexing. -`dbengine multihost disk space` sets the maximum disk space (again, in MiB) the database engine will use for storing -compressed metrics. The default settings retain about two day's worth of metrics on a system collecting 2,000 metrics -every second. +## Which database mode to use -[**See our database engine -calculator**](/docs/store/change-metrics-storage.md#calculate-the-system-resources-ram-disk-space-needed-to-store-metrics) -to help you correctly set `dbengine multihost disk space` based on your needs. The calculator gives an accurate estimate -based on how many child nodes you have, how many metrics your Agent collects, and more. +Netdata Agents support multiple database modes. -With the database engine active, you can back up your `/var/cache/netdata/dbengine/` folder to another location for -redundancy. +The default mode `[db].mode = dbengine` has been designed to scale for longer retentions. 
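One quick way to see which mode (and the other `[db]` settings) a running Agent is actually using is to ask the Agent for its effective configuration — a sketch, assuming the default dashboard port:

```
curl -s http://localhost:19999/netdata.conf | grep -A 6 '^\[db\]'
```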
-Now that you know how to switch to the database engine, let's cover the default round-robin database for those who -aren't ready to make the move. +The other available database modes are designed to minimize resource utilization and should usually be considered on **parent - child** setups at the children side. -## Using the round-robin database +So, -In previous versions, Netdata used a round-robin database to store 1 hour of per-second metrics. +* On a single node setup, use `[db].mode = dbengine` to increase retention. +* On a **parent - child** setup, use `[db].mode = dbengine` on the parent to increase retention and a more resource efficient mode (like `save`, `ram` or `none`) for the child to minimize resources utilization. -To see if you're still using this database, or if you would like to switch to it, open your `netdata.conf` file and see -if `memory mode` option is set to `save`. +To use `dbengine`, set this in `netdata.conf` (it is the default): -```conf -[global] - memory mode = save ``` +[db] + mode = dbengine +``` + +## Tiering -If `memory mode` is set to `save`, then you're using the round-robin database. If so, the `history` option is set to -`3600`, which is the equivalent to 3,600 seconds, or one hour. +`dbengine` supports tiering. Tiering allows having up to 3 versions of the data: -To increase your historical metrics, you can increase `history` to the number of seconds you'd like to store: +1. Tier 0 is the high resolution data. +2. Tier 1 is the first tier that samples data every 60 data collections of Tier 0. +3. Tier 2 is the second tier that samples data every 3600 data collections of Tier 0 (60 of Tier 1). -```conf -[global] - # 2 hours = 2 * 60 * 60 = 7200 seconds - history = 7200 - # 4 hours = 4 * 60 * 60 = 14440 seconds - history = 14440 - # 24 hours = 24 * 60 * 60 = 86400 seconds - history = 86400 +To enable tiering set `[db].storage tiers` in `netdata.conf` (the default is 1, to enable only Tier 0): + +``` +[db] + mode = dbengine + storage tiers = 3 ``` -And so on. +## Disk space requirements -Next, check to see how many metrics Netdata collects on your system, and how much RAM that uses. Visit the Netdata -dashboard and look at the bottom-right corner of the interface. You'll find a sentence similar to the following: +Netdata Agents require about 1 bytes on disk per database point on Tier 0 and 4 times more on higher tiers (Tier 1 and 2). They require 4 times more storage per point compared to Tier 0, because for every point higher tiers store `min`, `max`, `sum`, `count` and `anomaly rate` (the values are 5, but they require 4 times the storage because `count` and `anomaly rate` are 16-bit integers). The `average` is calculated on the fly at query time using `sum / count`. -> Every second, Netdata collects 1,938 metrics, presents them in 299 charts and monitors them with 81 alarms. Netdata is -> using 25 MB of memory on **netdata-linux** for 1 hour, 6 minutes and 36 seconds of real-time history. +### Tier 0 - per second for a week -On this desktop system, using a Ryzen 5 1600 and 16GB of RAM, the round-robin databases uses 25 MB of RAM to store just -over an hour's worth of data for nearly 2,000 metrics. +For 2000 metrics, collected every second and retained for a week, Tier 0 needs: 1 byte x 2000 metrics x 3600 secs per hour x 24 hours per day x 7 days per week = 1100MB. -To increase the `history` option, you need to edit your `netdata.conf` file and increase the `history` setting. 
In most -installations, you'll find it at `/etc/netdata/netdata.conf`, but some operating systems place it at -`/opt/netdata/etc/netdata/netdata.conf`. +The setting to control this is in `netdata.conf`: -Use `/etc/netdata/edit-config netdata.conf`, or your favorite text editor, to replace `3600` with the number of seconds -you'd like to store. +``` +[db] + mode = dbengine + + # per second data collection + update every = 1 + + # enable only Tier 0 + storage tiers = 1 + + # Tier 0, per second data for a week + dbengine multihost disk space MB = 1100 +``` -You should base this number on two things: How much history you need for your use case, and how much RAM you're willing -to dedicate to Netdata. +By setting it to `1100` and restarting the Netdata Agent, this node will start maintaining about a week of data. But pay attention to the number of metrics. If you have more than 2000 metrics on a node, or you need more that a week of high resolution metrics, you may need to adjust this setting accordingly. -> Take care when you change the `history` option on production systems. Netdata is configured to stop its process if -> your system starts running out of RAM, but you can never be too careful. Out of memory situations are very bad. +### Tier 1 - per minute for a month -How much RAM will a longer history use? Let's use a little math. +Tier 1 is by default sampling the data every 60 points of Tier 0. If Tier 0 is per second, then Tier 1 is per minute. -The round-robin database needs 4 bytes for every value Netdata collects. If Netdata collects metrics every second, -that's 4 bytes, per second, per metric. +Tier 1 needs 4 times more storage per point compared to Tier 0. So, for 2000 metrics, with per minute resolution, retained for a month, Tier 1 needs: 4 bytes x 2000 metrics x 60 minutes per hour x 24 hours per day x 30 days per month = 330MB. -```text -4 bytes * X seconds * Y metrics = RAM usage in bytes +Do this in `netdata.conf`: + +``` +[db] + mode = dbengine + + # per second data collection + update every = 1 + + # enable only Tier 0 and Tier 1 + storage tiers = 2 + + # Tier 0, per second data for a week + dbengine multihost disk space MB = 1100 + + # Tier 1, per minute data for a month + dbengine tier 1 multihost disk space MB = 330 ``` -Let's assume your system collects 1,000 metrics per second. +Once `netdata.conf` is edited, the Netdata Agent needs to be restarted for the changes to take effect. -```text -4 bytes * 3600 seconds * 1,000 metrics = 14400000 bytes = 14.4 MB RAM -``` +### Tier 2 - per hour for a year -With that formula, you can calculate the RAM usage for much larger history settings. - -```conf -# 2 hours at 1,000 metrics per second -4 bytes * 7200 seconds * 1,000 metrics = 28800000 bytes = 28.8 MB RAM -# 2 hours at 2,000 metrics per second -4 bytes * 7200 seconds * 2,000 metrics = 57600000 bytes = 57.6 MB RAM -# 4 hours at 2,000 metrics per second -4 bytes * 14440 seconds * 2,000 metrics = 115520000 bytes = 115.52 MB RAM -# 24 hours at 1,000 metrics per second -4 bytes * 86400 seconds * 1,000 metrics = 345600000 bytes = 345.6 MB RAM -``` +Tier 2 is by default sampling data every 3600 points of Tier 0 (60 of Tier 1). If Tier 0 is per second, then Tier 2 is per hour. -## What's next? +The storage requirements are the same to Tier 1. -Now that you have either configured database engine or round-robin database engine to store more metrics, you'll -probably want to see it in action! 
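If you are adjusting the `[db]` settings shown above, a typical way to apply them is the `edit-config` helper followed by a restart — a sketch, assuming a systemd-based install with the configuration in `/etc/netdata`:

```
cd /etc/netdata            # or /opt/netdata/etc/netdata on static installs
sudo ./edit-config netdata.conf
sudo systemctl restart netdata
```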
+For 2000 metrics, with per hour resolution, retained for a year, Tier 2 needs: 4 bytes x 2000 metrics x 24 hours per day x 365 days per year = 67MB. + +Do this in `netdata.conf`: + +``` +[db] + mode = dbengine + + # per second data collection + update every = 1 + + # enable only Tier 0 and Tier 1 + storage tiers = 3 + + # Tier 0, per second data for a week + dbengine multihost disk space MB = 1100 + + # Tier 1, per minute data for a month + dbengine tier 1 multihost disk space MB = 330 + + # Tier 2, per hour data for a year + dbengine tier 2 multihost disk space MB = 67 +``` -For more information about how to pan charts to view historical metrics, see our documentation on [using -charts](/web/README.md#using-charts). +Once `netdata.conf` is edited, the Netdata Agent needs to be restarted for the changes to take effect. -And if you'd now like to reduce Netdata's resource usage, view our [performance -guide](/docs/guides/configure/performance.md) for our best practices on optimization. diff --git a/docs/guides/monitor/anomaly-detection.md b/docs/guides/monitor/anomaly-detection.md index 7b4388a05..e98c5c02e 100644 --- a/docs/guides/monitor/anomaly-detection.md +++ b/docs/guides/monitor/anomaly-detection.md @@ -5,270 +5,71 @@ image: /img/seo/guides/monitor/anomaly-detection.png author: "Andrew Maguire" author_title: "Analytics & ML Lead" author_img: "/img/authors/andy-maguire.jpg" -custom_edit_url: https://github.com/netdata/netdata/edit/master/ml/README.md +custom_edit_url: https://github.com/netdata/netdata/edit/master/docs/guides/monitor/anomaly-detection.md --> -# Machine learning (ML) powered anomaly detection -## Overview - -As of [`v1.32.0`](https://github.com/netdata/netdata/releases/tag/v1.32.0), Netdata comes with some ML powered [anomaly detection](https://en.wikipedia.org/wiki/Anomaly_detection) capabilities built into it and available to use out of the box, with minimal configuration required. - -🚧 **Note**: This functionality is still under active development and considered experimental. Changes might cause the feature to break. We dogfood it internally and among early adopters within the Netdata community to build the feature. If you would like to get involved and help us with some feedback, email us at analytics-ml-team@netdata.cloud or come join us in the [🤖-ml-powered-monitoring](https://discord.gg/4eRSEUpJnc) channel of the Netdata discord. - -Once ML is enabled, Netdata will begin training a model for each dimension. By default this model is a [k-means clustering](https://en.wikipedia.org/wiki/K-means_clustering) model trained on the most recent 4 hours of data. Rather than just using the most recent value of each raw metric, the model works on a preprocessed ["feature vector"](#feature-vector) of recent smoothed and differenced values. This should enable the model to detect a wider range of potentially anomalous patterns in recent observations as opposed to just point anomalies like big spikes or drops. ([This infographic](https://user-images.githubusercontent.com/2178292/144414415-275a3477-5b47-43d6-8959-509eb48ebb20.png) shows some different types of anomalies.) - -The sections below will introduce some of the main concepts: -- anomaly bit -- anomaly score -- anomaly rate -- anomaly detector - -Additional explanations and details can be found in the [Glossary](#glossary) and [Notes](#notes) at the bottom of the page. 
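For reference, the change described in the Configuration section further down is a small stanza in `netdata.conf`, followed by an Agent restart — a sketch, assuming the default configuration location:

```
[ml]
  # enable machine learning based anomaly detection
  enabled = yes
```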
- -### Anomaly Bit - (100 = Anomalous, 0 = Normal) - -Once each model is trained, Netdata will begin producing an ["anomaly score"](#anomaly-score) at each time step for each dimension. This ["anomaly score"](#anomaly-score) is essentially a distance measure to the trained cluster centers of the model (by default each model has k=2, so two cluster centers are learned). More anomalous looking data should be more distant to those cluster centers. If this ["anomaly score"](#anomaly-score) is sufficiently large, this is a sign that the recent raw values of the dimension could potentially be anomalous. By default, "sufficiently large" means that the distance is in the 99th percentile or above of all distances observed during training or, put another way, it has to be further away than the furthest 1% of the data used during training. Once this threshold is passed, the ["anomaly bit"](#anomaly-bit) corresponding to that dimension is set to 100 to flag it as anomalous, otherwise it would be left at 0 to signal normal data. - -What this means is that in addition to the raw value of each metric, Netdata now also stores an ["anomaly bit"](#anomaly-bit) that is either 100 (anomalous) or 0 (normal). Importantly, this is achieved without additional storage overhead due to how the anomaly bit has been implemented within the existing internal Netdata storage representation. - -This ["anomaly bit"](#anomaly-bit) is exposed via the `anomaly-bit` key that can be passed to the `options` param of the `/api/v1/data` REST API. - -For example, here are some recent raw dimension values for `system.ip` on our [london](http://london.my-netdata.io/) demo server: - -[`https://london.my-netdata.io/api/v1/data?chart=system.ip`](https://london.my-netdata.io/api/v1/data?chart=system.ip) - -``` -{ - "labels": ["time", "received", "sent"], - "data": - [ - [ 1638365672, 54.84098, -76.70201], - [ 1638365671, 124.4328, -309.7543], - [ 1638365670, 123.73152, -167.9056], - ... - ] -} -``` - -And if we add the `&options=anomaly-bit` params, we can see the "anomaly bit" value corresponding to each raw dimension value: - -[`https://london.my-netdata.io/api/v1/data?chart=system.ip&options=anomaly-bit`](https://london.my-netdata.io/api/v1/data?chart=system.ip&options=anomaly-bit) - -``` -{ - "labels": ["time", "received", "sent"], - "data": - [ - [ 1638365672, 0, 0], - [ 1638365671, 0, 0], - [ 1638365670, 0, 0], - ... - ] -} -``` -In this example, the dimensions "received" and "sent" didn't show any abnormal behavior, so the anomaly bit is zero. -Under normal circumstances, the anomaly bit will mostly be 0. However, there can be random fluctuations setting the anomaly to 100, although this very much depends on the nature of the dimension in question. - -### Anomaly Rate - average(anomaly bit) - -Once all models have been trained, we can think of the Netdata dashboard as essentially a big matrix or table of 0's and 100's. If we consider this "anomaly bit"-based representation of the state of the node, we can now think about how we might detect overall node level anomalies. The figure below illustrates the main ideas. 
- -``` - dimensions -time d1 d2 d3 d4 d5 NAR - 1 0 0 0 0 0 0% - 2 0 0 0 0 100 20% - 3 0 0 0 0 0 0% - 4 0 100 0 0 0 20% - 5 100 0 0 0 0 20% - 6 0 100 100 0 100 60% - 7 0 100 0 100 0 40% - 8 0 0 0 0 100 20% - 9 0 0 100 100 0 40% - 10 0 0 0 0 0 0% - -DAR 10% 30% 20% 20% 30% 22% NAR_t1-t10 - -DAR = Dimension Anomaly Rate -NAR = Node Anomaly Rate -NAR_t1-t10 = Node Anomaly Rate over t1 to t10 -``` - -To work out an ["anomaly rate"](#anomaly-rate), we can just average a row or a column in any direction. For example, if we were to just average along a row then this would be the ["node anomaly rate"](#node-anomaly-rate) (all dimensions) at time t. Likewise if we averaged a column then we would have the ["dimension anomaly rate"](#dimension-anomaly-rate) for each dimension over the time window t=1-10. Extending this idea, we can work out an overall ["anomaly rate"](#anomaly-rate) for the whole matrix or any subset of it we might be interested in. - -### Anomaly Detector - Node level anomaly events - -An ["anomaly detector"](#anomaly-detector) looks at all anomaly bits of a node. Netdata's anomaly detector produces an ["anomaly event"](#anomaly-event) when a the percentage of anomaly bits is high enough for a persistent amount of time. This anomaly event signals that there was sufficient evidence among all the anomaly bits that some strange behavior might have been detected in a more global sense across the node. -Essentially if the ["Node Anomaly Rate"](#node-anomaly-rate) (NAR) passes a defined threshold and stays above that threshold for a persistent amount of time, a "Node [Anomaly Event](#anomaly-event)" will be triggered. - -These anomaly events are currently exposed via `/api/v1/anomaly_events` - -**Note**: Clicking the link below will likely return an empty list of `[]`. This is the response when no anomaly events exist in the specified range. The example response below is illustrative of what the response would be when one or more anomaly events exist within the range of `after` to `before`. - -https://london.my-netdata.io/api/v1/anomaly_events?after=1638365182000&before=1638365602000 - -If an event exists within the window, the result would be a list of start and end times. - -``` -[ - [ - 1638367788, - 1638367851 - ] -] -``` +## Overview -Information about each anomaly event can then be found at the `/api/v1/anomaly_event_info` endpoint (making sure to pass the `after` and `before` params): +As of [`v1.32.0`](https://github.com/netdata/netdata/releases/tag/v1.32.0), Netdata comes with some ML powered [anomaly detection](https://en.wikipedia.org/wiki/Anomaly_detection) capabilities built into it and available to use out of the box, with zero configuration required (ML was enabled by default in `v1.35.0-29-nightly` in [this PR](https://github.com/netdata/netdata/pull/13158), previously it required a one line config change). -**Note**: If you click the below url you will get a `null` since no such anomaly event exists as the response is just an illustrative example taken from a node that did have such an anomaly event. +This means that in addition to collecting raw value metrics, the Netdata agent will also produce an [`anomaly-bit`](https://learn.netdata.cloud/docs/agent/ml#anomaly-bit---100--anomalous-0--normal) every second which will be `100` when recent raw metric values are considered anomalous by Netdata and `0` when they look normal. 
Once we aggregate beyond one second intervals this aggregated `anomaly-bit` becomes an ["anomaly rate"](https://learn.netdata.cloud/docs/agent/ml#anomaly-rate---averageanomaly-bit). -https://london.my-netdata.io/api/v1/anomaly_event_info?after=1638367788&before=1638367851 +To be as concrete as possible, the below api call shows how to access the raw anomaly bit of the `system.cpu` chart from the [london.my-netdata.io](https://london.my-netdata.io) Netdata demo server. Passing `options=anomaly-bit` returns the anomay bit instead of the raw metric value. ``` -[ - [ - 0.66, - "netdata.response_time|max" - ], - [ - 0.63, - "netdata.response_time|average" - ], - [ - 0.54, - "netdata.requests|requests" - ], - ... +https://london.my-netdata.io/api/v1/data?chart=system.cpu&options=anomaly-bit ``` -The query returns a list of dimension anomaly rates for all dimensions that were considered part of the detected anomaly event. - -**Note**: We plan to build additional anomaly detection and exploration features into both Netdata Agent and Netdata Cloud. The current endpoints are still under active development to power the upcoming features. - -## Configuration - -To enable anomaly detection: -1. Find and open the Netdata configuration file `netdata.conf`. -2. In the `[ml]` section, set `enabled = yes`. -3. Restart netdata (typically `sudo systemctl restart netdata`). - -**Note**: If you would like to learn more about configuring Netdata please see [the configuration guide](https://learn.netdata.cloud/guides/step-by-step/step-04). - -Below is a list of all the available configuration params and their default values. +If we aggregate the above to just 1 point by adding `points=1` we get an "[Anomaly Rate](https://learn.netdata.cloud/docs/agent/ml#anomaly-rate---averageanomaly-bit)": ``` -[ml] - # enabled = no - # maximum num samples to train = 14400 - # minimum num samples to train = 3600 - # train every = 3600 - # num samples to diff = 1 - # num samples to smooth = 3 - # num samples to lag = 5 - # maximum number of k-means iterations = 1000 - # dimension anomaly score threshold = 0.99 - # host anomaly rate threshold = 0.01000 - # minimum window size = 30.00000 - # maximum window size = 600.00000 - # idle window size = 30.00000 - # window minimum anomaly rate = 0.25000 - # anomaly event min dimension rate threshold = 0.05000 - # hosts to skip from training = !* - # charts to skip from training = !system.* !cpu.* !mem.* !disk.* !disk_* !ip.* !ipv4.* !ipv6.* !net.* !net_* !netfilter.* !services.* !apps.* !groups.* !user.* !ebpf.* !netdata.* * +https://london.my-netdata.io/api/v1/data?chart=system.cpu&options=anomaly-bit&points=1 ``` -### Descriptions (min/max) - -- `enabled`: `yes` to enable, `no` to disable. -- `maximum num samples to train`: (`3600`/`21600`) This is the maximum amount of time you would like to train each model on. For example, the default of `14400` trains on the preceding 4 hours of data, assuming an `update every` of 1 second. -- `minimum num samples to train`: (`900`/`21600`) This is the minimum amount of data required to be able to train a model. For example, the default of `3600` implies that once at least 1 hour of data is available for training, a model is trained, otherwise it is skipped and checked again at the next training run. -- `train every`: (`1800`/`21600`) This is how often each model will be retrained. For example, the default of `3600` means that each model is retrained every hour. 
Note: The training of all models is spread out across the `train every` period for efficiency, so in reality, it means that each model will be trained in a staggered manner within each `train every` period. -- `num samples to diff`: (`0`/`1`) This is a `0` or `1` to determine if you want the model to operate on differences of the raw data or just the raw data. For example, the default of `1` means that we take differences of the raw values. Using differences is more general and works on dimensions that might naturally tend to have some trends or cycles in them that is normal behavior to which we don't want to be too sensitive. -- `num samples to smooth`: (`0`/`5`) This is a small integer that controls the amount of smoothing applied as part of the feature processing used by the model. For example, the default of `3` means that the rolling average of the last 3 values is used. Smoothing like this helps the model be a little more robust to spiky types of dimensions that naturally "jump" up or down as part of their normal behavior. -- `num samples to lag`: (`0`/`5`) This is a small integer that determines how many lagged values of the dimension to include in the feature vector. For example, the default of `5` means that in addition to the most recent (by default, differenced and smoothed) value of the dimension, the feature vector will also include the 5 previous values too. Using lagged values in our feature representation allows the model to work over strange patterns over recent values of a dimension as opposed to just focusing on if the most recent value itself is big or small enough to be anomalous. -- `maximum number of k-means iterations`: This is a parameter that can be passed to the model to limit the number of iterations in training the k-means model. Vast majority of cases can ignore and leave as default. -- `dimension anomaly score threshold`: (`0.01`/`5.00`) This is the threshold at which an individual dimension at a specific timestep is considered anomalous or not. For example, the default of `0.99` means that a dimension with an anomaly score of 99% or higher is flagged as anomalous. This is a normalized probability based on the training data, so the default of 99% means that anything that is as strange (based on distance measure) or more strange as the most strange 1% of data observed during training will be flagged as anomalous. If you wanted to make the anomaly detection on individual dimensions more sensitive you could try a value like `0.90` (90%) or to make it less sensitive you could try `1.5` (150%). -- `host anomaly rate threshold`: (`0.0`/`1.0`) This is the percentage of dimensions (based on all those enabled for anomaly detection) that need to be considered anomalous at specific timestep for the host itself to be considered anomalous. For example, the default value of `0.01` means that if more than 1% of dimensions are anomalous at the same time then the host itself is considered in an anomalous state. -- `minimum window size`: The Netdata "Anomaly Detector" logic works over a rolling window of data. This parameter defines the minimum length of window to consider. If over this window the host is in an anomalous state then an anomaly detection event will be triggered. For example, the default of `30` means that the detector will initially work over a rolling window of 30 seconds. 
Note: The length of this window will be dynamic once an anomaly event has been triggered such that it will expand as needed until either the max length of an anomaly event is hit or the host settles back into a normal state with sufficiently decreased host level anomaly states in the rolling window. Note: If you wanted to adjust the higher level anomaly detector behavior then this is one parameter you might adjust to see the impact of on anomaly detection events. -- `maximum window size`: This parameter defines the maximum length of window to consider. If an anomaly event reaches this size, it will be closed. This is to provide an upper bound on the length of an anomaly event and cost of the anomaly detector logic for that event. -- `window minimum anomaly rate`: (`0.0`/`1.0`) This parameter corresponds to a threshold on the percentage of time in the rolling window that the host was considered in an anomalous state. For example, the default of `0.25` means that if the host is in an anomalous state for 25% of more of the rolling window then and anomaly event will be triggered or extended if one is already active. Note: If you want to make the anomaly detector itself less sensitive, you can adjust this value to something like `0.75` which would mean the host needs to be much more consistently in an anomalous state to trigger an anomaly detection event. Likewise, a lower value like `0.1` would make the anomaly detector more sensitive. -- `anomaly event min dimension rate threshold`: (`0.0`/`1.0`) This is a parameter that helps filter out irrelevant dimensions from anomaly events. For example, the default of `0.05` means that only dimensions that were considered anomalous for at least 5% of the anomaly event itself will be included in that anomaly event. The idea here is to just include dimensions that were consistently anomalous as opposed to those that may have just randomly happened to be anomalous at the same time. -- `hosts to skip from training`: This parameter allows you to turn off anomaly detection for any child hosts on a parent host by defining those you would like to skip from training here. For example, a value like `dev-*` skips all hosts on a parent that begin with the "dev-" prefix. The default value of `!*` means "don't skip any". -- `charts to skip from training`: This parameter allows you to exclude certain charts from anomaly detection by defining them here. By default, all charts, apart from a specific allow list of the typical basic Netdata charts, are excluded. If you have additional charts you would like to include for anomaly detection, you can add them here. **Note**: It is recommended to add charts in small groups and then measure any impact on performance before adding additional ones. - -## Charts - -Once enabled, the "Anomaly Detection" menu and charts will be available on the dashboard. - -![anomaly_detection_menu](https://user-images.githubusercontent.com/2178292/144255721-4568aabf-39c7-4855-bf1c-31b1d60e28e6.png) - -In terms of anomaly detection, the most interesting charts would be the `anomaly_detection.dimensions` and `anomaly_detection.anomaly_rate` ones, which hold the `anomalous` and `anomaly_rate` dimensions that show the overall number of dimensions considered anomalous at any time and the corresponding anomaly rate. - -- `anomaly_detection.dimensions`: Total count of dimensions considered anomalous or normal. -- `anomaly_detection.dimensions`: Percentage of anomalous dimensions. 
-- `anomaly_detection.detector_window`: The length of the active window used by the detector. -- `anomaly_detection.detector_events`: Flags (0 or 1) to show when an anomaly event has been triggered by the detector. -- `anomaly_detection.prediction_stats`: Diagnostic metrics relating to prediction time of anomaly detection. -- `anomaly_detection.training_stats`: Diagnostic metrics relating to training time of anomaly detection. - -Below is an example of how these charts may look in the presence of an anomaly event. - -Initially we see a jump in `anomalous` dimensions: - -![anomalous](https://user-images.githubusercontent.com/2178292/144256036-c89fa768-5e5f-4278-9725-c67521c0d95e.png) - -And a corresponding jump in the `anomaly_rate`: - -![anomaly_rate](https://user-images.githubusercontent.com/2178292/144256071-7d157438-31f3-4b23-a795-0fd3b2e2e85c.png) - -After a short while the rolling node anomaly rate goes `above_threshold`, and once it stays above threshold for long enough a `new_anomaly_event` is created: - -![anomaly_event](https://user-images.githubusercontent.com/2178292/144256152-910b06ec-26b8-45b4-bcb7-4c2acdf9af15.png) - -## Glossary - -#### _feature vector_ +The fundamentals of Netdata's anomaly detection approach and implmentation are covered in lots more detail in the [agent ML documentation](https://learn.netdata.cloud/docs/agent/ml). -A [feature vector](https://en.wikipedia.org/wiki/Feature_(machine_learning)) is what the ML model is trained on and uses for prediction. The most simple feature vector would be just the latest raw dimension value itself [x]. By default Netdata will use a feature vector consisting of the 6 latest differences and smoothed values of the dimension so conceptually something like `[avg3(diff1(x-5)), avg3(diff1(x-4)), avg3(diff1(x-3)), avg3(diff1(x-2)), avg3(diff1(x-1)), avg3(diff1(x))]` which ends up being just 6 floating point numbers that try and represent the "shape" of recent data. +This guide will explain how to get started using these ML based anomaly detection capabilities within Netdata. -#### _anomaly score_ +## Anomaly Advisor -At prediction time the anomaly score is just the distance of the most recent feature vector to the trained cluster centers of the model, which are themselves just feature vectors, albeit supposedly the best most representative feature vectors that could be "learned" from the training data. So if the most recent feature vector is very far away in terms of [euclidean distance](https://en.wikipedia.org/wiki/Euclidean_distance#:~:text=In%20mathematics%2C%20the%20Euclidean%20distance,being%20called%20the%20Pythagorean%20distance.) it's more likely that the recent data it represents consists of some strange pattern not commonly found in the training data. +The [Anomaly Advisor](https://learn.netdata.cloud/docs/cloud/insights/anomaly-advisor) is the flagship anomaly detection feature within Netdata. In the "Anomalies" tab of Netdata you will see an overall "Anomaly Rate" chart that aggregates node level anomaly rate for all nodes in a space. The aim of this chart is to make it easy to quickly spot periods of time where the overall "[node anomaly rate](https://learn.netdata.cloud/docs/agent/ml#node-anomaly-rate)" is evelated in some unusual way and for what node or nodes this relates to. 
-#### _anomaly bit_ +![image](https://user-images.githubusercontent.com/2178292/175928290-490dd8b9-9c55-4724-927e-e145cb1cc837.png) -If the anomaly score is greater than a specified threshold then the most recent feature vector, and hence most recent raw data, is considered anomalous. Since storing the raw anomaly score would essentially double amount of storage space Netdata would need, we instead efficiently store just the anomaly bit in the existing internal Netdata data representation without any additional storage overhead. +Once an area on the Anomaly Rate chart is highlighted netdata will append a "heatmap" to the bottom of the screen that shows which metrics were more anomalous in the highlighted timeframe. Each row in the heatmap consists of an anomaly rate sparkline graph that can be expanded to reveal the raw underlying metric chart for that dimension. -#### _anomaly rate_ +![image](https://user-images.githubusercontent.com/2178292/175929162-02c8fe69-cc4f-4cf4-9b3a-a5e559a6feca.png) -An anomaly rate is really just an average over one or more anomaly bits. An anomaly rate can be calculated over time for one or more dimensions or at a point in time across multiple dimensions, or some combination of the two. Its just an average of some collection of anomaly bits. +## Embedded Anomaly Rate Charts -#### _anomaly detector_ +Charts in both the [Overview](https://learn.netdata.cloud/docs/cloud/visualize/overview) and [single node dashboard](https://learn.netdata.cloud/docs/cloud/visualize/overview#jump-to-single-node-dashboards) tabs also expose the underlying anomaly rates for each dimension so users can easily see if the raw metrics are considered anomalous or not by Netdata. -The is essentially business logic that just tries to process a collection of anomaly bits to determine if there is enough active anomaly bits to merit investigation or declaration of a node level anomaly event. +Pressing the anomalies icon (next to the information icon in the chart header) will expand the anomaly rate chart to make it easy to see how the anomaly rate for any individual dimension corresponds to the raw underlying data. In the example below we can see that the spike in `system.pgpgio|in` corresponded in the anomaly rate for that dimension jumping to 100% for a small period of time until the spike passed. -#### _anomaly event_ +![image](https://user-images.githubusercontent.com/2178292/175933078-5dd951ff-7709-4bb9-b4be-34199afb3945.png) -Anomaly events are triggered by the anomaly detector and represent a window of time on the node with sufficiently elevated anomaly rates across all dimensions. +## Anomaly Rate Based Alerts -#### _dimension anomaly rate_ +It is possible to use the `anomaly-bit` when defining traditional Alerts within netdata. The `anomaly-bit` is just another `options` parameter that can be passed as part of an [alarm line lookup](https://learn.netdata.cloud/docs/agent/health/reference#alarm-line-lookup). -The anomaly rate of a specific dimension over some window of time. 
+You can see some example ML based alert configurations below: -#### _node anomaly rate_ +- [Anomaly rate based CPU dimensions alarm](https://learn.netdata.cloud/docs/agent/health/reference#example-8---anomaly-rate-based-cpu-dimensions-alarm) +- [Anomaly rate based CPU chart alarm](https://learn.netdata.cloud/docs/agent/health/reference#example-9---anomaly-rate-based-cpu-chart-alarm) +- [Anomaly rate based node level alarm](https://learn.netdata.cloud/docs/agent/health/reference#example-10---anomaly-rate-based-node-level-alarm) +- More examples in the [`/health/health.d/ml.conf`](https://github.com/netdata/netdata/blob/master/health/health.d/ml.conf) file that ships with the agent. -The anomaly rate across all dimensions of a node. +## Learn More -## Notes +Check out the resources below to learn more about how Netdata is approaching ML: -- We would love to hear any feedback relating to this functionality, please email us at analytics-ml-team@netdata.cloud or come join us in the [🤖-ml-powered-monitoring](https://discord.gg/4eRSEUpJnc) channel of the Netdata discord. -- We are working on additional UI/UX based features that build on these core components to make them as useful as possible out of the box. -- Although not yet a core focus of this work, users could leverage the `anomaly_detection` chart dimensions and/or `anomaly-bit` options in defining alarms based on ML driven anomaly detection models. -- [This presentation](https://docs.google.com/presentation/d/18zkCvU3nKP-Bw_nQZuXTEa4PIVM6wppH3VUnAauq-RU/edit?usp=sharing) walks through some of the main concepts covered above in a more informal way. -- After restart Netdata will wait until `minimum num samples to train` observations of data are available before starting training and prediction. -- Netdata uses [dlib](https://github.com/davisking/dlib) under the hood for its core ML features. -- You should benchmark Netdata resource usage before and after enabling ML. Typical overhead ranges from 1-2% additional CPU at most. -- The "anomaly bit" has been implemented to be a building block to underpin many more ML based use cases that we plan to deliver soon. -- At its core Netdata uses an approach and problem formulation very similar to the Netdata python [anomalies collector](https://learn.netdata.cloud/docs/agent/collectors/python.d.plugin/anomalies), just implemented in a much much more efficient and scalable way in the agent in c++. So if you would like to learn more about the approach and are familiar with Python that is a useful resource to explore, as is the corresponding [deep dive tutorial](https://nbviewer.org/github/netdata/community/blob/main/netdata-agent-api/netdata-pandas/anomalies_collector_deepdive.ipynb) where the default model used is PCA instead of K-Means but the overall approach and formulation is similar.
\ No newline at end of file
+- [Agent ML documentation](https://learn.netdata.cloud/docs/agent/ml).
+- [Anomaly Advisor documentation](https://learn.netdata.cloud/docs/cloud/insights/anomaly-advisor).
+- [Metric Correlations documentation](https://learn.netdata.cloud/docs/cloud/insights/metric-correlations).
+- Anomaly Advisor [launch blog post](https://www.netdata.cloud/blog/introducing-anomaly-advisor-unsupervised-anomaly-detection-in-netdata/).
+- Netdata Approach to ML [blog post](https://www.netdata.cloud/blog/our-approach-to-machine-learning/).
+- `area/ml` related [GitHub Discussions](https://github.com/netdata/netdata/discussions?discussions_q=label%3Aarea%2Fml).
+- Netdata Machine Learning Meetup [deck](https://docs.google.com/presentation/d/1rfSxktg2av2k-eMwMbjN0tXeo76KC33iBaxerYinovs/edit?usp=sharing) and [YouTube recording](https://www.youtube.com/watch?v=eJGWZHVQdNU).
+- Netdata Anomaly Advisor [YouTube Playlist](https://youtube.com/playlist?list=PL-P-gAHfL2KPeUcCKmNHXC-LX-FfdO43j).
diff --git a/docs/guides/troubleshoot/troubleshooting-agent-with-cloud-connection.md b/docs/guides/troubleshoot/troubleshooting-agent-with-cloud-connection.md
new file mode 100644
index 000000000..ee214c814
--- /dev/null
+++ b/docs/guides/troubleshoot/troubleshooting-agent-with-cloud-connection.md
@@ -0,0 +1,117 @@
+<!--
+title: "Troubleshoot Agent-Cloud connectivity issues"
+description: "A simple guide to troubleshoot occurrences where the Agent is showing as offline after claiming."
+custom_edit_url: https://github.com/netdata/netdata/edit/master/guides/troubleshoot/troubleshooting-agent-with-cloud-connection.md
+-->
+
+# Troubleshoot Agent-Cloud connectivity issues
+
+When you claim a node, you might not be able to immediately see it online in Netdata Cloud.
+This could be due to an error in the claiming process or a temporary outage of some services.
+
+We have identified some scenarios that might cause this delay, along with the actions you can take to overcome each situation.
+
+The most common explanation for the delay usually falls into one of the following three categories:
+
+- [The claiming process of the kickstart script was unsuccessful](#the-claiming-process-of-the-kickstart-script-was-unsuccessful)
+- [Claiming on an older, deprecated version of the Agent](#claiming-on-an-older-deprecated-version-of-the-agent)
+- [Network issues while connecting to the Cloud](#network-issues-while-connecting-to-the-cloud)
+
+## The claiming process of the kickstart script was unsuccessful
+
+Here, we will try to define some edge cases you might encounter when claiming a node.
+
+### The kickstart script auto-claimed the Agent but there was no error message displayed
+
+The kickstart script will install/update your Agent and then try to claim the node to the Cloud (if tokens are provided). To
+complete the second part, the Agent must be running. On some platforms, the Netdata service is not enabled by default
+and you must enable it manually, using the following steps:
+
+1. Check if the Agent is running:
+
+   ```bash
+   systemctl status netdata
+   ```
+
+   The expected output should contain info like this:
+
+   ```bash
+   Active: active (running) since Wed 2022-07-06 12:25:02 EEST; 1h 40min ago
+   ```
+
+2. Enable and start the Netdata service.
+
+   ```bash
+   systemctl enable netdata
+   systemctl start netdata
+   ```
+
+3. Retry the kickstart claiming process.
+
+:::note
+
+In some cases a simple restart of the Agent can fix the issue.
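+For example, on systemd-based systems (adjust the command if your node uses a different init system):
+
+```bash
+sudo systemctl restart netdata
+```
+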
+Read more about [Starting, Stopping and Restarting the Agent](/docs/configure/start-stop-restart.md).
+
+:::
+
+## Claiming on an older, deprecated version of the Agent
+
+Make sure that you are using the latest version of Netdata if you are using the [Claiming script](https://learn.netdata.cloud/docs/agent/claim#claiming-script).
+
+With the introduction of our new architecture, Agents running versions lower than `v1.32.0` can face claiming problems, so we recommend you [update the Netdata Agent](https://learn.netdata.cloud/docs/agent/packaging/installer/update) to the latest stable version.
+
+## Network issues while connecting to the Cloud
+
+### Verify that your IP is whitelisted from Netdata Cloud
+
+Most nodes change IPs dynamically. It is possible that your current IP has been restricted from accessing `app.netdata.cloud` due to security concerns.
+
+To verify this:
+
+1. Check the Agent's `aclk-state`.
+
+   ```bash
+   sudo netdatacli aclk-state | grep "Banned By Cloud"
+   ```
+
+   The output will contain a line indicating if the IP is banned from `app.netdata.cloud`:
+
+   ```bash
+   Banned By Cloud: yes
+   ```
+
+2. If your node's IP is banned, you can:
+
+   - Contact our team to whitelist your IP by submitting a ticket in the [Netdata forum](https://community.netdata.cloud/)
+   - Change your node's IP
+
+### Make sure that your node has internet connectivity and can resolve network domains
+
+1. Try to reach a well known host:
+
+   ```bash
+   ping 8.8.8.8
+   ```
+
+2. If you can reach external IPs, then check your domain resolution.
+
+   ```bash
+   host app.netdata.cloud
+   ```
+
+   The expected output should be something like this:
+
+   ```bash
+   app.netdata.cloud is an alias for main-ingress-545609a41fcaf5d6.elb.us-east-1.amazonaws.com.
+   main-ingress-545609a41fcaf5d6.elb.us-east-1.amazonaws.com has address 54.198.178.11
+   main-ingress-545609a41fcaf5d6.elb.us-east-1.amazonaws.com has address 44.207.131.212
+   main-ingress-545609a41fcaf5d6.elb.us-east-1.amazonaws.com has address 44.196.50.41
+   ```
+
+   :::info
+
+   There will be cases in which the firewall restricts network access. In those cases, you need to whitelist the `app.netdata.cloud` domain to be able to see your nodes in Netdata Cloud.
+   If you can't whitelist domains in your firewall, you can whitelist the IPs that the above command will produce, but keep in mind that they can change without any notice.
+
+   :::
diff --git a/docs/store/change-metrics-storage.md b/docs/store/change-metrics-storage.md
index 99760e8d3..437b45fc2 100644
--- a/docs/store/change-metrics-storage.md
+++ b/docs/store/change-metrics-storage.md
@@ -6,72 +6,95 @@ custom_edit_url: https://github.com/netdata/netdata/edit/master/docs/store/chang
# Change how long Netdata stores metrics
-import { Calculator } from '../../src/components/agent/dbCalc/'
+The Netdata Agent uses a custom-made time-series database (TSDB), named the [`dbengine`](/database/engine/README.md), to store metrics.
-The Netdata Agent uses a time-series database (TSDB), named the [database engine
-(`dbengine`)](/database/engine/README.md), to store metrics data. The most recently-collected metrics are stored in RAM,
-and when metrics reach a certain age, and based on how much system RAM you allocate toward storing metrics in memory,
-they are compressed and "spilled" to disk for long-term storage.
+The default settings retain approximately two days' worth of metrics on a system collecting 2,000 metrics every second,
+but the Netdata Agent is highly configurable if you want your nodes to store days, weeks, or months' worth of per-second
+data.
-The default settings retain about two day's worth of metrics on a system collecting 2,000 metrics every second, but the
-Netdata Agent is highly configurable if you want your nodes to store days, weeks, or months worth of per-second data.
-
-The Netdata Agent uses two settings in `netdata.conf` to change the behavior of the database engine:
+The Netdata Agent uses the following three fundamental settings in `netdata.conf` to change the behavior of the database engine:
```conf
[global]
-    page cache size = 32
+    dbengine page cache size = 32
    dbengine multihost disk space = 256
+    storage tiers = 1
```
-`page cache size` sets the maximum amount of RAM (in MiB) the database engine uses to cache and index recent metrics.
+`dbengine page cache size` sets the maximum amount of RAM (in MiB) the database engine uses to cache and index recent
+metrics.
`dbengine multihost disk space` sets the maximum disk space (again, in MiB) the database engine uses to store
-historical, compressed metrics. When the size of stored metrics exceeds the allocated disk space, the database engine
-removes the oldest metrics on a rolling basis.
+historical, compressed metrics, and `storage tiers` specifies the number of storage tiers you want to have in
+your `dbengine`. When the size of stored metrics exceeds the allocated disk space, the database engine removes the
+oldest metrics on a rolling basis.
## Calculate the system resources (RAM, disk space) needed to store metrics
You can store more or less metrics using the database engine by changing the allocated disk space. Use the calculator
-below to find an appropriate value for `dbengine multihost disk space` based on how many metrics your node(s) collect,
-whether you are streaming metrics to a parent node, and more.
+below to find the appropriate value for the `dbengine` based on how many metrics your node(s) collect, whether you are
+streaming metrics to a parent node, and more.
+
+You do not need to edit the `dbengine page cache size` setting to store more metrics using the database engine. However,
+if you want to store more metrics _specifically in memory_, you can increase the cache size.
+
+:::tip
+
+We advise you to visit the [tiering mechanism](/database/engine/README.md#tiering) reference. This will help you
+configure the Agent to retain metrics for longer periods.
-You do not need to edit the `page cache size` setting to store more metrics using the database engine. However, if you
-want to store more metrics _specifically in memory_, you can increase the cache size.
+
+:::
-> ⚠️ This calculator provides an estimate of disk and RAM usage for **metrics storage**, along with its best
-> recommendation for the `dbengine multihost disk space` setting. Real-life usage may vary based on the accuracy of the
-> values you enter below, changes in the compression ratio, and the types of metrics stored.
+:::caution
-<Calculator />
+
+This calculator provides an estimate of disk and RAM usage for **metrics storage**. Real-life usage may vary based on
+the accuracy of the values you enter below, changes in the compression ratio, and the types of metrics stored.
+
+:::
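+
+As a rough, back-of-the-envelope illustration derived from the defaults quoted above (2,000 metrics collected every second fitting roughly two days in 256 MiB), and not a substitute for the calculator since compression ratios vary:
+
+```
+2,000 metrics/s × 86,400 s/day × 2 days  ≈ 345.6 million samples in 256 MiB
+256 MiB ÷ 345.6 million samples          ≈ 0.8 bytes per sample on disk (compressed)
+30 days at the same collection rate      ≈ 2,000 × 86,400 × 30 × 0.8 bytes ≈ 4 GiB
+```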
+
+Download
+the [calculator](https://docs.google.com/spreadsheets/d/e/2PACX-1vTYMhUU90aOnIQ7qF6iIk6tXps57wmY9lxS6qDXznNJrzCKMDzxU3zkgh8Uv0xj_XqwFl3U6aHDZ6ag/pub?output=xlsx)
+to optimize data retention to your preferences. Use the "Front" sheet and experiment with the variables highlighted
+in yellow to find the best settings for your use case.
## Edit `netdata.conf` with recommended database engine settings
-Now that you have a recommended setting for `dbengine multihost disk space`, open `netdata.conf` with
-[`edit-config`](/docs/configure/nodes.md#use-edit-config-to-edit-configuration-files) and look for the `dbengine
-multihost disk space` setting. Change it to the value recommended above. For example:
+Now that you have a recommended setting for your Agent's `dbengine`, open `netdata.conf` with
+[`edit-config`](/docs/configure/nodes.md#use-edit-config-to-edit-configuration-files) and look for the `[db]`
+subsection. Change it to the recommended values you calculated from the calculator. For example:
```conf
-[global]
-    dbengine multihost disk space = 1024
+[db]
+    mode = dbengine
+    storage tiers = 3
+    update every = 1
+    dbengine multihost disk space MB = 1024
+    dbengine page cache size MB = 32
+    dbengine tier 1 update every iterations = 60
+    dbengine tier 1 multihost disk space MB = 384
+    dbengine tier 1 page cache size MB = 32
+    dbengine tier 2 update every iterations = 60
+    dbengine tier 2 multihost disk space MB = 16
+    dbengine tier 2 page cache size MB = 32
```
-Save the file and restart the Agent with `sudo systemctl restart netdata`, or the [appropriate
-method](/docs/configure/start-stop-restart.md) for your system, to change the database engine's size.
+Save the file and restart the Agent with `sudo systemctl restart netdata`, or
+the [appropriate method](/docs/configure/start-stop-restart.md) for your system, to change the database engine's size.
## What's next?
-If you have multiple nodes with the Netdata Agent installed, you can [stream
-metrics](/docs/metrics-storage-management/how-streaming-works.mdx) from any number of _child_ nodes to a _parent_ node
-and store metrics using a centralized time-series database. Streaming allows you to centralize your data, run Agents as
-headless collectors, replicate data, and more.
+If you have multiple nodes with the Netdata Agent installed, you
+can [stream metrics](/docs/metrics-storage-management/how-streaming-works.mdx) from any number of _child_ nodes to a
+_parent_ node and store metrics using a centralized time-series database. Streaming allows you to centralize your data,
+run Agents as headless collectors, replicate data, and more.
-Storing metrics with the database engine is completely interoperable with [exporting to other time-series
-databases](/docs/export/external-databases.md). With exporting, you can use the node's resources to surface metrics
-when [viewing dashboards](/docs/visualize/interact-dashboards-charts.md), while also archiving metrics elsewhere for
-further analysis, visualization, or correlation with other tools.
+Storing metrics with the database engine is completely interoperable
+with [exporting to other time-series databases](/docs/export/external-databases.md). With exporting, you can use the
+node's resources to surface metrics when [viewing dashboards](/docs/visualize/interact-dashboards-charts.md), while also
+archiving metrics elsewhere for further analysis, visualization, or correlation with other tools.
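+
+As a minimal, hypothetical sketch of the child/parent streaming setup mentioned above (the API key and `PARENT_IP` are placeholders, the default port `19999` is assumed, and the [streaming reference](/streaming/README.md) documents the full set of options), each child points its `stream.conf` at the parent:
+
+```conf
+# stream.conf on each child node
+[stream]
+    enabled = yes
+    destination = PARENT_IP:19999
+    api key = 11111111-2222-3333-4444-555555555555
+```
+
+```conf
+# stream.conf on the parent node: accept children that use this API key
+[11111111-2222-3333-4444-555555555555]
+    enabled = yes
+```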
### Related reference documentation
- [Netdata Agent · Database engine](/database/engine/README.md)
+- [Netdata Agent · Database engine configuration option](/daemon/config/README.md#[db]-section-options)
diff --git a/docs/store/distributed-data-architecture.md b/docs/store/distributed-data-architecture.md
index c834d710a..62933cfe5 100644
--- a/docs/store/distributed-data-architecture.md
+++ b/docs/store/distributed-data-architecture.md
@@ -10,34 +10,43 @@ Netdata uses a distributed data architecture to help you collect and store per-s
Every node in your infrastructure, whether it's one or a thousand, stores the metrics it collects.
Netdata Cloud bridges the gap between many distributed databases by _centralizing the interface_ you use to query and
-visualize your nodes' metrics. When you [look at charts in Netdata
-Cloud](/docs/visualize/interact-dashboards-charts.md), the metrics values are queried directly from that node's database
-and securely streamed to Netdata Cloud, which proxies them to your browser.
+visualize your nodes' metrics. When you [look at charts in Netdata Cloud](/docs/visualize/interact-dashboards-charts.md),
+the metrics values are queried directly from that node's database and securely streamed to Netdata Cloud, which
+proxies them to your browser.
Netdata's distributed data architecture has a number of benefits:
-- **Performance**: Every query to a node's database takes only a few milliseconds to complete for responsiveness when
-  viewing dashboards or using features like [Metric
-  Correlations](https://learn.netdata.cloud/docs/cloud/insights/metric-correlations).
-- **Scalability**: As your infrastructure scales, install the Netdata Agent on every new node to immediately add it to
-  your monitoring solution without adding cost or complexity.
-- **1-second granularity**: Without an expensive centralized data lake, you can store all of your nodes' per-second
-  metrics, for any period of time, while keeping costs down.
-- **No filtering or selecting of metrics**: Because Netdata's distributed data architecture allows you to store all
-  metrics, you don't have to configure which metrics you retain. Keep everything for full visibility during
-  troubleshooting and root cause analysis.
-- **Easy maintenance**: There is no centralized data lake to purchase, allocate, monitor, and update, removing
-  complexity from your monitoring infrastructure.
+- **Performance**: Every query to a node's database takes only a few milliseconds to complete for responsiveness when
+  viewing dashboards or using features
+  like [Metric Correlations](https://learn.netdata.cloud/docs/cloud/insights/metric-correlations).
+- **Scalability**: As your infrastructure scales, install the Netdata Agent on every new node to immediately add it to
+  your monitoring solution without adding cost or complexity.
+- **1-second granularity**: Without an expensive centralized data lake, you can store all of your nodes' per-second
+  metrics, for any period of time, while keeping costs down.
+- **No filtering or selecting of metrics**: Because Netdata's distributed data architecture allows you to store all
+  metrics, you don't have to configure which metrics you retain. Keep everything for full visibility during
+  troubleshooting and root cause analysis.
+- **Easy maintenance**: There is no centralized data lake to purchase, allocate, monitor, and update, removing
+  complexity from your monitoring infrastructure.
-## Does Netdata Cloud store my metrics?
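+As an illustration of how these local queries work, you can hit any node's own REST API directly (a hypothetical example assuming the default port `19999`; replace `NODE_IP` with the node's address):
+
+```bash
+# Fetch the last 60 seconds of the system.cpu chart straight from the node's local database
+curl "http://NODE_IP:19999/api/v1/data?chart=system.cpu&after=-60&format=json"
+```
+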
+## Ephemerality of metrics
-Netdata Cloud does not store metric values.
+The ephemerality of metrics plays an important role in retention. In environments where metrics collection is dynamic and
+new metrics are constantly being generated, we are interested in two parameters:
-To enable certain features, such as [viewing active alarms](/docs/monitor/view-active-alarms.md) or [filtering by
-hostname/service](https://learn.netdata.cloud/docs/cloud/war-rooms#node-filter), Netdata Cloud does store configured
-alarms, their status, and a list of active collectors.
+1. The **expected concurrent number of metrics** as an average for the lifetime of the database. This affects mainly the
+   storage requirements.
-Netdata does not and never will sell your personal data or data about your deployment.
+2. The **expected total number of unique metrics** for the lifetime of the database. This affects mainly the memory
+   requirements for having all these metrics indexed and available to be queried.
+
+## Granularity of metrics
+
+The granularity of metrics (the frequency at which they are collected and stored, i.e. their resolution) significantly
+affects retention.
+
+Lowering the granularity from per-second to every two seconds will double their retention and halve the CPU requirements
+of the Netdata Agent, without affecting disk space or memory requirements.
## Long-term metrics storage with Netdata
@@ -47,7 +56,8 @@ appropriate amount of RAM and disk space.
Read our document on changing [how long Netdata stores metrics](/docs/store/change-metrics-storage.md) on your nodes
for details.
-## Other options for your metrics data
+You can also stream between nodes using [streaming](/streaming/README.md), allowing you to replicate databases and create
+your own centralized data lake of metrics, if you choose to do so.
While a distributed data architecture is the default when monitoring infrastructure with Netdata, you can also configure
its behavior based on your needs or the type of infrastructure you manage.
@@ -55,12 +65,19 @@ its behavior based on your needs or the type of infrastructure you manage.
To archive metrics to an external time-series database, such as InfluxDB, Graphite, OpenTSDB, Elasticsearch,
TimescaleDB, and many others, see details on [integrating Netdata via exporting](/docs/export/external-databases.md).
-You can also stream between nodes using [streaming](/streaming/README.md), allowing to replicate databases and create
-your own centralized data lake of metrics, if you choose to do so.
-
When you use the database engine to store your metrics, you can always perform a quick backup of a node's
`/var/cache/netdata/dbengine/` folder using the tool of your choice.
+## Does Netdata Cloud store my metrics?
+
+Netdata Cloud does not store metric values.
+
+To enable certain features, such as [viewing active alarms](/docs/monitor/view-active-alarms.md)
+or [filtering by hostname/service](https://learn.netdata.cloud/docs/cloud/war-rooms#node-filter), Netdata Cloud does
+store configured alarms, their status, and a list of active collectors.
+
+Netdata does not and never will sell your personal data or data about your deployment.
+
+## What's next?
+
+You can configure the Netdata Agent to store days, weeks, or months' worth of distributed, per-second data by