Diffstat
-rw-r--r--  docs/guides/monitor-cockroachdb.md                       136
-rw-r--r--  docs/guides/monitor-hadoop-cluster.md                    203
-rw-r--r--  docs/guides/monitor/anomaly-detection-python.md          189
-rw-r--r--  docs/guides/monitor/anomaly-detection.md                  75
-rw-r--r--  docs/guides/monitor/dimension-templates.md               176
-rw-r--r--  docs/guides/monitor/kubernetes-k8s-netdata.md            254
-rw-r--r--  docs/guides/monitor/lamp-stack.md                        246
-rw-r--r--  docs/guides/monitor/pi-hole-raspberry-pi.md              162
-rw-r--r--  docs/guides/monitor/process.md                           301
-rw-r--r--  docs/guides/monitor/raspberry-pi-anomaly-detection.md    125
-rw-r--r--  docs/guides/monitor/statsd.md                            298
-rw-r--r--  docs/guides/monitor/stop-notifications-alarms.md          92
-rw-r--r--  docs/guides/monitor/visualize-monitor-anomalies.md       142
13 files changed, 2399 insertions, 0 deletions
diff --git a/docs/guides/monitor-cockroachdb.md b/docs/guides/monitor-cockroachdb.md
new file mode 100644
index 0000000..46dd253
--- /dev/null
+++ b/docs/guides/monitor-cockroachdb.md
@@ -0,0 +1,136 @@
+<!--
+title: "Monitor CockroachDB metrics with Netdata"
+custom_edit_url: https://github.com/netdata/netdata/edit/master/docs/guides/monitor-cockroachdb.md
+-->
+
+# Monitor CockroachDB metrics with Netdata
+
+[CockroachDB](https://github.com/cockroachdb/cockroach) is an open-source project that brings SQL databases into
+scalable, disaster-resilient cloud deployments. Thanks to a [new CockroachDB
+collector](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/cockroachdb/) released in
+[v1.20](https://blog.netdata.cloud/posts/release-1.20/), you can now monitor any number of CockroachDB databases with
+maximum granularity using Netdata. Collect more than 50 unique metrics and put them on interactive visualizations
+designed for better visual anomaly detection.
+
+Netdata itself uses CockroachDB as part of its Netdata Cloud infrastructure, so we're happy to introduce this new
+collector and help others get started with it straight away.
+
+Let's dive in and walk through the process of monitoring CockroachDB metrics with Netdata.
+
+## What's in this guide
+
+- [Configure the CockroachDB collector](#configure-the-cockroachdb-collector)
+ - [Manual setup for a local CockroachDB database](#manual-setup-for-a-local-cockroachdb-database)
+- [Tweak CockroachDB alarms](#tweak-cockroachdb-alarms)
+
+## Configure the CockroachDB collector
+
+Because _all_ of Netdata's collectors can auto-detect the services they monitor, you _shouldn't_ need to worry about
+configuring CockroachDB. Netdata only needs to regularly query the database's `_status/vars` page to gather metrics and
+display them on the dashboard.
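+
+If you want to verify that the endpoint is reachable before relying on auto-detection, you can query it directly from
+the node running Netdata (the host and port below are illustrative; adjust them to your deployment):
+
+```bash
+# Query CockroachDB's Prometheus-style metrics endpoint
+curl http://localhost:8080/_status/vars
+```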
+
+If your CockroachDB instance is accessible through `http://localhost:8080/` or `http://127.0.0.1:8080`, your setup is
+complete. Restart Netdata with `sudo systemctl restart netdata`, or the [appropriate
+method](/docs/configure/start-stop-restart.md) for your system, and refresh your browser. You should see CockroachDB
+metrics in your Netdata dashboard!
+
+<figure>
+ <img src="https://user-images.githubusercontent.com/1153921/73564467-d7e36b00-441c-11ea-9ec9-b5d5ea7277d4.png" alt="CPU utilization charts from a CockroachDB database monitored by Netdata" />
+ <figcaption>CPU utilization charts from a CockroachDB database monitored by Netdata</figcaption>
+</figure>
+
+> Note: Netdata collects metrics from CockroachDB every 10 seconds, instead of our usual 1 second, because CockroachDB
+> only updates `_status/vars` every 10 seconds. You can't change this setting in CockroachDB.
+
+If you don't see CockroachDB charts, you may need to configure the collector manually.
+
+### Manual setup for a local CockroachDB database
+
+To configure Netdata's CockroachDB collector, navigate to your Netdata configuration directory (typically at
+`/etc/netdata/`) and use `edit-config` to initialize and edit your CockroachDB configuration file.
+
+```bash
+cd /etc/netdata/ # Replace with your Netdata configuration directory, if not /etc/netdata/
+./edit-config go.d/cockroachdb.conf
+```
+
+Scroll down to the `[JOBS]` section at the bottom of the file. You will see the two default jobs there, which you can
+edit, or you can create a new job using any of the parameters documented earlier in the file. Both the `name` and `url`
+values are required, and everything else is optional.
+
+For a production cluster, you'll use either an IP address or the system's hostname. Be sure that your remote system
+allows TCP communication on port 8080, or whichever port you have configured CockroachDB's [Admin
+UI](https://www.cockroachlabs.com/docs/stable/monitoring-and-alerting.html#prometheus-endpoint) to listen on.
+
+```yaml
+# [ JOBS ]
+jobs:
+ - name: remote
+ url: http://203.0.113.0:8080/_status/vars
+
+ - name: remote_hostname
+ url: http://cockroachdb.example.com:8080/_status/vars
+```
+
+For a secure cluster, use `https` in the `url` field instead.
+
+```yaml
+# [ JOBS ]
+jobs:
+ - name: remote
+ url: https://203.0.113.0:8080/_status/vars
+ tls_skip_verify: yes # If your certificate is self-signed
+
+ - name: remote_hostname
+ url: https://cockroachdb.example.com:8080/_status/vars
+ tls_skip_verify: yes # If your certificate is self-signed
+```
+
+You can add as many jobs as you'd like based on how many CockroachDB databases you have—Netdata will create separate
+charts for each job. Once you've edited `cockroachdb.conf` according to the needs of your infrastructure, restart
+Netdata to see your new charts.
+
+<figure>
+ <img src="https://user-images.githubusercontent.com/1153921/73564469-d7e36b00-441c-11ea-8333-02ba0e1c294c.png" alt="Charts showing a node failure during a simulated test" />
+ <figcaption>Charts showing a node failure during a simulated test</figcaption>
+</figure>
+
+## Tweak CockroachDB alarms
+
+This release also includes eight pre-configured alarms covering node liveness, storage capacity, replication issues,
+and the number of SQL connections/statements. See [health.d/cockroachdb.conf on
+GitHub](https://raw.githubusercontent.com/netdata/netdata/master/health/health.d/cockroachdb.conf) for details.
+
+You can also edit these files directly with `edit-config`:
+
+```bash
+cd /etc/netdata/ # Replace with your Netdata configuration directory, if not /etc/netdata/
+./edit-config health.d/cockroachdb.conf # You may need to use `sudo` for write privileges
+```
+
+For more information about editing the defaults or writing new alarm entities, see our health monitoring [quickstart
+guide](/health/QUICKSTART.md).
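+
+For example, if the default storage-capacity thresholds are too aggressive for your environment, you could adjust them
+in `health.d/cockroachdb.conf`. The entity below is only a sketch: the template name, context, dimension, and thresholds
+are illustrative, so check the shipped file for the exact names before copying anything.
+
+```conf
+# Hypothetical example: warn earlier on used storage capacity
+ template: cockroachdb_used_storage_capacity
+       on: cockroachdb.storage_used_capacity_percentage
+   lookup: average -1m unaligned of total
+    units: %
+    every: 10s
+     warn: $this > 75
+     crit: $this > 90
+```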
+
+## What's next?
+
+Now that you're collecting metrics from your CockroachDB databases, let us know how it's working for you! There's always
+room for improvement or refinement based on real-world use cases. Feel free to [file an
+issue](https://github.com/netdata/netdata/issues/new?assignees=&labels=bug%2Cneeds+triage&template=BUG_REPORT.yml) with your
+thoughts.
+
+Also, be sure to check out these useful resources:
+
+- [Netdata's CockroachDB
+ documentation](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/cockroachdb/)
+- [Netdata's CockroachDB
+ configuration](https://github.com/netdata/go.d.plugin/blob/master/config/go.d/cockroachdb.conf)
+- [Netdata's CockroachDB
+ alarms](https://github.com/netdata/netdata/blob/29d9b5e51603792ee27ef5a21f1de0ba8e130158/health/health.d/cockroachdb.conf)
+- [CockroachDB homepage](https://www.cockroachlabs.com/product/)
+- [CockroachDB documentation](https://www.cockroachlabs.com/docs/stable/)
+- [`_status/vars` endpoint
+ docs](https://www.cockroachlabs.com/docs/stable/monitoring-and-alerting.html#prometheus-endpoint)
+- [Monitor CockroachDB with
+ Prometheus](https://www.cockroachlabs.com/docs/stable/monitor-cockroachdb-with-prometheus.html)
+
+
diff --git a/docs/guides/monitor-hadoop-cluster.md b/docs/guides/monitor-hadoop-cluster.md
new file mode 100644
index 0000000..62403f8
--- /dev/null
+++ b/docs/guides/monitor-hadoop-cluster.md
@@ -0,0 +1,203 @@
+<!--
+title: "Monitor a Hadoop cluster with Netdata"
+custom_edit_url: https://github.com/netdata/netdata/edit/master/docs/guides/monitor-hadoop-cluster.md
+-->
+
+# Monitor a Hadoop cluster with Netdata
+
+Hadoop is an [Apache project](https://hadoop.apache.org/) that provides a framework for processing large sets of data
+across a distributed cluster of systems.
+
+And while Hadoop is designed to be a highly-available and fault-tolerant service, those who operate a Hadoop cluster
+will want to monitor the health and performance of their [Hadoop Distributed File System
+(HDFS)](https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html) and [Zookeeper](https://zookeeper.apache.org/)
+implementations.
+
+Netdata comes with built-in and pre-configured support for monitoring both HDFS and Zookeeper.
+
+This guide assumes you have a Hadoop cluster, with HDFS and Zookeeper, running already. If you don't, please follow
+the [official Hadoop
+instructions](http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html) or an
+alternative, like the guide available from
+[DigitalOcean](https://www.digitalocean.com/community/tutorials/how-to-install-hadoop-in-stand-alone-mode-on-ubuntu-18-04).
+
+For more specifics on the collection modules used in this guide, read the respective pages in our documentation:
+
+- [HDFS](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/hdfs)
+- [Zookeeper](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/zookeeper)
+
+## Set up your HDFS and Zookeeper installations
+
+As with all data sources, Netdata can auto-detect HDFS and Zookeeper nodes if you installed them using the standard
+installation procedure.
+
+For Netdata to collect HDFS metrics, it needs to be able to access the node's `/jmx` endpoint. You can test whether a
+JMX endpoint is accessible by using `curl HDFS-IP:PORT/jmx`. For a NameNode, you should see output similar to the
+following:
+
+```json
+{
+ "beans" : [ {
+ "name" : "Hadoop:service=NameNode,name=JvmMetrics",
+ "modelerType" : "JvmMetrics",
+ "MemNonHeapUsedM" : 65.67851,
+ "MemNonHeapCommittedM" : 67.3125,
+ "MemNonHeapMaxM" : -1.0,
+ "MemHeapUsedM" : 154.46341,
+ "MemHeapCommittedM" : 215.0,
+ "MemHeapMaxM" : 843.0,
+ "MemMaxM" : 843.0,
+ "GcCount" : 15,
+ "GcTimeMillis" : 305,
+ "GcNumWarnThresholdExceeded" : 0,
+ "GcNumInfoThresholdExceeded" : 0,
+ "GcTotalExtraSleepTime" : 92,
+ "ThreadsNew" : 0,
+ "ThreadsRunnable" : 6,
+ "ThreadsBlocked" : 0,
+ "ThreadsWaiting" : 7,
+ "ThreadsTimedWaiting" : 34,
+ "ThreadsTerminated" : 0,
+ "LogFatal" : 0,
+ "LogError" : 0,
+ "LogWarn" : 2,
+ "LogInfo" : 348
+ },
+ { ... }
+ ]
+}
+```
+
+The JSON result for a DataNode's `/jmx` endpoint is slightly different:
+
+```json
+{
+ "beans" : [ {
+ "name" : "Hadoop:service=DataNode,name=DataNodeActivity-dev-slave-01.dev.local-9866",
+ "modelerType" : "DataNodeActivity-dev-slave-01.dev.local-9866",
+ "tag.SessionId" : null,
+ "tag.Context" : "dfs",
+ "tag.Hostname" : "dev-slave-01.dev.local",
+ "BytesWritten" : 500960407,
+ "TotalWriteTime" : 463,
+ "BytesRead" : 80689178,
+ "TotalReadTime" : 41203,
+ "BlocksWritten" : 16,
+ "BlocksRead" : 16,
+ "BlocksReplicated" : 4,
+ ...
+ },
+ { ... }
+ ]
+}
+```
+
+If Netdata can't access the `/jmx` endpoint for either a NameNode or DataNode, it will not be able to auto-detect and
+collect metrics from your HDFS implementation.
+
+Zookeeper auto-detection relies on an accessible client port and an allow-listed `mntr` command. For more details on
+`mntr`, see Zookeeper's documentation on [cluster
+options](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_clusterOptions) and [Zookeeper
+commands](https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_zkCommands).
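+
+A quick way to check both conditions is to send the `mntr` four-letter word to the client port (2181 by default) and
+confirm you get statistics back rather than an error about the command not being whitelisted:
+
+```bash
+# Send the mntr command to a local Zookeeper instance; adjust the host and port for your setup
+echo mntr | nc 127.0.0.1 2181
+```
+
+If the command is rejected, add `mntr` to the `4lw.commands.whitelist` setting in your `zoo.cfg` and restart Zookeeper.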
+
+## Configure the HDFS and Zookeeper modules
+
+To configure Netdata's HDFS module, navigate to your Netdata directory (typically at `/etc/netdata/`) and use
+`edit-config` to initialize and edit your HDFS configuration file.
+
+```bash
+cd /etc/netdata/
+sudo ./edit-config go.d/hdfs.conf
+```
+
+At the bottom of the file, you will see two example jobs, both of which are commented out:
+
+```yaml
+# [ JOBS ]
+#jobs:
+# - name: namenode
+# url: http://127.0.0.1:9870/jmx
+#
+# - name: datanode
+# url: http://127.0.0.1:9864/jmx
+```
+
+Uncomment these lines and edit the `url` value(s) according to your setup. Now's the time to add any other configuration
+details, which are documented inside the `hdfs.conf` file itself. Most production implementations will require TLS
+certificates, as sketched after the next example.
+
+The result for a simple HDFS setup, running entirely on `localhost` and without certificate authentication, might look
+like this:
+
+```yaml
+# [ JOBS ]
+jobs:
+ - name: namenode
+ url: http://127.0.0.1:9870/jmx
+
+ - name: datanode
+ url: http://127.0.0.1:9864/jmx
+```
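+
+If your cluster does require TLS, the same job definition accepts the go.d client options for certificates. The option
+names below are assumptions based on the shared go.d HTTP client settings, so check the comments in `hdfs.conf` for the
+exact keys before using them:
+
+```yaml
+# [ JOBS ]
+jobs:
+  - name: namenode
+    url: https://namenode.example.com:9871/jmx
+    tls_ca: /path/to/ca.pem
+    tls_cert: /path/to/client-cert.pem
+    tls_key: /path/to/client-key.pem
+```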
+
+At this point, Netdata should be configured to collect metrics from your HDFS servers. Let's move on to Zookeeper.
+
+Next, use `edit-config` again to initialize/edit your `zookeeper.conf` file.
+
+```bash
+cd /etc/netdata/
+sudo ./edit-config go.d/zookeeper.conf
+```
+
+As with the `hdfs.conf` file, head to the bottom, uncomment the example jobs, and tweak the `address` values according
+to your setup. Again, you may need to add additional configuration options, like TLS certificates.
+
+```yaml
+jobs:
+ - name : local
+ address : 127.0.0.1:2181
+
+ - name : remote
+ address : 203.0.113.10:2182
+```
+
+Finally, [restart Netdata](/docs/configure/start-stop-restart.md).
+
+```sh
+sudo systemctl restart netdata
+```
+
+Upon restart, Netdata should recognize your HDFS/Zookeeper servers, enable the HDFS and Zookeeper modules, and begin
+showing real-time metrics for both in your Netdata dashboard. 🎉
+
+## Configure HDFS and Zookeeper alarms
+
+The Netdata community helped us create sane defaults for alarms related to both HDFS and Zookeeper. You may want to
+investigate these to ensure they work well with your Hadoop implementation.
+
+- [HDFS alarms](https://raw.githubusercontent.com/netdata/netdata/master/health/health.d/hdfs.conf)
+- [Zookeeper alarms](https://raw.githubusercontent.com/netdata/netdata/master/health/health.d/zookeeper.conf)
+
+You can also access/edit these files directly with `edit-config`:
+
+```bash
+sudo /etc/netdata/edit-config health.d/hdfs.conf
+sudo /etc/netdata/edit-config health.d/zookeeper.conf
+```
+
+For more information about editing the defaults or writing new alarm entities, see our [health monitoring
+documentation](/health/README.md).
+
+## What's next?
+
+If you're having issues with Netdata auto-detecting your HDFS/Zookeeper servers, or want to help improve how Netdata
+collects or presents metrics from these services, feel free to [file an
+issue](https://github.com/netdata/netdata/issues/new?assignees=&labels=bug%2Cneeds+triage&template=BUG_REPORT.yml).
+
+- Read up on the [HDFS configuration
+ file](https://github.com/netdata/go.d.plugin/blob/master/config/go.d/hdfs.conf) to understand how to configure
+ global options or per-job options, such as username/password, TLS certificates, timeouts, and more.
+- Read up on the [Zookeeper configuration
+ file](https://github.com/netdata/go.d.plugin/blob/master/config/go.d/zookeeper.conf) to understand how to configure
+ global options or per-job options, timeouts, TLS certificates, and more.
+
+
diff --git a/docs/guides/monitor/anomaly-detection-python.md b/docs/guides/monitor/anomaly-detection-python.md
new file mode 100644
index 0000000..ad8398c
--- /dev/null
+++ b/docs/guides/monitor/anomaly-detection-python.md
@@ -0,0 +1,189 @@
+<!--
+title: "Detect anomalies in systems and applications"
+description: "Detect anomalies in any system, container, or application in your infrastructure with machine learning and the open-source Netdata Agent."
+image: /img/seo/guides/monitor/anomaly-detection.png
+author: "Joel Hans"
+author_title: "Editorial Director, Technical & Educational Resources"
+author_img: "/img/authors/joel-hans.jpg"
+custom_edit_url: https://github.com/netdata/netdata/edit/master/docs/guides/monitor/anomaly-detection-python.md
+-->
+
+# Detect anomalies in systems and applications
+
+Beginning with v1.27, the [open-source Netdata Agent](https://github.com/netdata/netdata) is capable of unsupervised
+[anomaly detection](https://en.wikipedia.org/wiki/Anomaly_detection) with machine learning (ML). As with all things
+Netdata, the anomalies collector comes with preconfigured alarms and instant visualizations that require no query
+languages or organizing metrics. You configure the collector to look at specific charts, and it handles the rest.
+
+Netdata's implementation uses a handful of functions in the [Python Outlier Detection (PyOD)
+library](https://github.com/yzhao062/pyod/tree/master), which periodically runs a `train` function that learns what
+"normal" looks like on your node and creates an ML model for each chart, then utilizes the
+[`predict_proba()`](https://pyod.readthedocs.io/en/latest/api_cc.html#pyod.models.base.BaseDetector.predict_proba) and
+[`predict()`](https://pyod.readthedocs.io/en/latest/api_cc.html#pyod.models.base.BaseDetector.predict) PyOD functions to
+quantify how anomalous certain charts are.
+
+All these metrics and alarms are available for centralized monitoring in [Netdata Cloud](https://app.netdata.cloud). If
+you choose to sign up for Netdata Cloud and [connect your nodes](/claim/README.md), you will have the ability to run
+tailored anomaly detection on every node in your infrastructure, regardless of its purpose or workload.
+
+In this guide, you'll learn how to set up the anomalies collector to instantly detect anomalies in an Nginx web server
+and/or the node that hosts it, which will give you the tools to configure parallel unsupervised monitors for any
+application in your infrastructure. Let's get started.
+
+![Example anomaly detection with an Nginx web
+server](https://user-images.githubusercontent.com/1153921/103586700-da5b0a00-4ea2-11eb-944e-46edd3f83e3a.png)
+
+## Prerequisites
+
+- A node running the Netdata Agent. If you don't yet have that, [get Netdata](/docs/get-started.mdx).
+- A Netdata Cloud account. [Sign up](https://app.netdata.cloud) if you don't have one already.
+- Familiarity with configuring the Netdata Agent with [`edit-config`](/docs/configure/nodes.md).
+- _Optional_: An Nginx web server running on the same node to follow the example configuration steps.
+
+## Install required Python packages
+
+The anomalies collector uses a few Python packages, available with `pip3`, to run ML training. It requires
+[`numba`](http://numba.pydata.org/), [`scikit-learn`](https://scikit-learn.org/stable/),
+[`pyod`](https://pyod.readthedocs.io/en/latest/), in addition to
+[`netdata-pandas`](https://github.com/netdata/netdata-pandas), which is a package built by the Netdata team to pull data
+from a Netdata Agent's API into a [Pandas](https://pandas.pydata.org/) DataFrame. Read more about `netdata-pandas` on
+its [package
+repo](https://github.com/netdata/netdata-pandas) or in Netdata's [community
+repo](https://github.com/netdata/community/tree/main/netdata-agent-api/netdata-pandas).
+
+```bash
+# Become the netdata user
+sudo su -s /bin/bash netdata
+
+# Install required packages for the netdata user
+pip3 install --user netdata-pandas==0.0.38 numba==0.50.1 scikit-learn==0.23.2 pyod==0.8.3
+```
+
+> If the `pip3` command fails, you need to install it. For example, on an Ubuntu system, use `sudo apt install
+> python3-pip`.
+
+Use `exit` to become your normal user again.
+
+## Enable the anomalies collector
+
+Navigate to your [Netdata config directory](/docs/configure/nodes.md#the-netdata-config-directory) and use `edit-config`
+to open the `python.d.conf` file.
+
+```bash
+sudo ./edit-config python.d.conf
+```
+
+In the `python.d.conf` file, search for the `anomalies` line. If the line exists, set the value to `yes`. Add the line
+yourself if it doesn't already exist. Either way, the final result should look like:
+
+```conf
+anomalies: yes
+```
+
+[Restart the Agent](/docs/configure/start-stop-restart.md) with `sudo systemctl restart netdata`, or the [appropriate
+method](/docs/configure/start-stop-restart.md) for your system, to start up the anomalies collector. By default, the
+model training process runs every 30 minutes, and uses the previous 4 hours of metrics to establish a baseline for
+health and performance across the default included charts.
+
+> 💡 The anomalies collector may need 30-60 seconds to finish its initial training and have enough data to start
+> generating anomaly scores. You may need to refresh your browser tab for the **Anomalies** section to appear in menus
+> on both the local Agent dashboard and Netdata Cloud.
+
+## Configure the anomalies collector
+
+Open `python.d/anomalies.conf` with `edit-config`.
+
+```bash
+sudo ./edit-config python.d/anomalies.conf
+```
+
+The file contains many user-configurable settings with sane defaults. Here are some important settings that don't
+involve tweaking the behavior of the ML training itself.
+
+- `charts_regex`: Which charts to train models for and run anomaly detection on, with each chart getting a separate
+ model.
+- `charts_to_exclude`: Specific charts, selected by the regex in `charts_regex`, to exclude.
+- `train_every_n`: How often to train the ML models.
+- `train_n_secs`: The number of historical observations to train each model on. The default is 4 hours, but if your node
+ doesn't have historical metrics going back that far, consider [changing the metrics retention
+ policy](/docs/store/change-metrics-storage.md) or reducing this window.
+- `custom_models`: A way to define custom models that you want anomaly probabilities for, including multi-node or
+ streaming setups.
+
+> ⚠️ Setting `charts_regex` with many charts or `train_n_secs` to a very large number will have an impact on the
+> resources and time required to train a model for every chart. The actual performance implications depend on the
+> resources available on your node. If you plan on changing these settings beyond the default, or what's mentioned in
+> this guide, make incremental changes to observe the performance impact. Consider using `train_max_n` to cap the
+> number of observations actually used for training.
+
+### Run anomaly detection on Nginx and log file metrics
+
+As mentioned above, this guide uses an Nginx web server to demonstrate how the anomalies collector works. You must
+configure the collector to monitor charts from the
+[Nginx](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/nginx) and [web
+log](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/weblog) collectors.
+
+`charts_regex` allows for some basic regex, such as wildcards (`*`) to match all contexts with a certain pattern. For
+example, `system\..*` matches any chart with a context that begins with `system.` and ends in any number of other
+characters (`.*`). Note the escape character (`\`) before the first period, which matches a literal period rather than
+any character.
+
+Change `charts_regex` in `anomalies.conf` to the following:
+
+```conf
+ charts_regex: 'system\..*|nginx_local\..*|web_log_nginx\..*|apps.cpu|apps.mem'
+```
+
+This value tells the anomaly collector to train against every `system.` chart, every `nginx_local` chart, every
+`web_log_nginx` chart, and specifically the `apps.cpu` and `apps.mem` charts.
+
+![The anomalies collector chart with many
+dimensions](https://user-images.githubusercontent.com/1153921/102813877-db5e4880-4386-11eb-8040-d7a1d7a476bb.png)
+
+### Remove some metrics from anomaly detection
+
+As you can see in the above screenshot, this node is now looking for anomalies in many places. The result is a single
+`anomalies_local.probability` chart with more than twenty dimensions, some of which the dashboard hides at the bottom of
+a scrollable area. In addition, training models and running anomaly detection on many charts might require more CPU
+utilization than you're willing to give.
+
+First, explicitly declare which `system.` charts to monitor rather than matching all of them with the regex (`system\..*`).
+
+```conf
+ charts_regex: 'system\.cpu|system\.load|system\.io|system\.net|system\.ram|nginx_local\..*|web_log_nginx\..*|apps.cpu|apps.mem'
+```
+
+Next, remove some charts with the `charts_to_exclude` setting. For this example, using an Nginx web server, focus on the
+volume of requests/responses, not, for example, which type of 4xx response a user might receive.
+
+```conf
+ charts_to_exclude: 'web_log_nginx.excluded_requests,web_log_nginx.responses_by_status_code_class,web_log_nginx.status_code_class_2xx_responses,web_log_nginx.status_code_class_4xx_responses,web_log_nginx.current_poll_uniq_clients,web_log_nginx.requests_by_http_method,web_log_nginx.requests_by_http_version,web_log_nginx.requests_by_ip_proto'
+```
+
+![The anomalies collector with fewer
+dimensions](https://user-images.githubusercontent.com/1153921/102820642-d69f9180-4392-11eb-91c5-d3d166d40105.png)
+
+Apply the ideas behind the collector's regex and exclude settings to any other
+[system](/docs/collect/system-metrics.md), [container](/docs/collect/container-metrics.md), or
+[application](/docs/collect/application-metrics.md) metrics you want to detect anomalies for.
+
+## What's next?
+
+Now that you know how to set up unsupervised anomaly detection in the Netdata Agent, using an Nginx web server as an
+example, it's time to apply that knowledge to other mission-critical parts of your infrastructure. If you're not sure
+what to monitor next, check out our list of [collectors](/collectors/COLLECTORS.md) to see what kind of metrics Netdata
+can collect from your systems, containers, and applications.
+
+Keep on moving to [part 2](/docs/guides/monitor/visualize-monitor-anomalies.md), which covers the charts and alarms
+Netdata creates for unsupervised anomaly detection.
+
+For a different troubleshooting experience, try out the [Metric
+Correlations](https://learn.netdata.cloud/docs/cloud/insights/metric-correlations) feature in Netdata Cloud. Metric
+Correlations helps you perform faster root cause analysis by narrowing a dashboard to only the charts most likely to be
+related to an anomaly.
+
+### Related reference documentation
+
+- [Netdata Agent · Anomalies collector](/collectors/python.d.plugin/anomalies/README.md)
+- [Netdata Agent · Nginx collector](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/nginx)
+- [Netdata Agent · web log collector](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/weblog)
+- [Netdata Cloud · Metric Correlations](https://learn.netdata.cloud/docs/cloud/insights/metric-correlations)
diff --git a/docs/guides/monitor/anomaly-detection.md b/docs/guides/monitor/anomaly-detection.md
new file mode 100644
index 0000000..e98c5c0
--- /dev/null
+++ b/docs/guides/monitor/anomaly-detection.md
@@ -0,0 +1,75 @@
+<!--
+title: "Machine learning (ML) powered anomaly detection"
+description: "Detect anomalies in any system, container, or application in your infrastructure with machine learning and the open-source Netdata Agent."
+image: /img/seo/guides/monitor/anomaly-detection.png
+author: "Andrew Maguire"
+author_title: "Analytics & ML Lead"
+author_img: "/img/authors/andy-maguire.jpg"
+custom_edit_url: https://github.com/netdata/netdata/edit/master/docs/guides/monitor/anomaly-detection.md
+-->
+
+
+# Machine learning (ML) powered anomaly detection
+
+## Overview
+
+As of [`v1.32.0`](https://github.com/netdata/netdata/releases/tag/v1.32.0), Netdata comes with ML-powered [anomaly detection](https://en.wikipedia.org/wiki/Anomaly_detection) capabilities built in and available to use out of the box, with zero configuration required (ML was enabled by default in `v1.35.0-29-nightly` in [this PR](https://github.com/netdata/netdata/pull/13158); previously it required a one-line config change).
+
+This means that in addition to collecting raw value metrics, the Netdata Agent will also produce an [`anomaly-bit`](https://learn.netdata.cloud/docs/agent/ml#anomaly-bit---100--anomalous-0--normal) every second, which will be `100` when recent raw metric values are considered anomalous by Netdata and `0` when they look normal. Once we aggregate beyond one-second intervals, this aggregated `anomaly-bit` becomes an ["anomaly rate"](https://learn.netdata.cloud/docs/agent/ml#anomaly-rate---averageanomaly-bit).
+
+To be as concrete as possible, the API call below shows how to access the raw anomaly bit of the `system.cpu` chart from the [london.my-netdata.io](https://london.my-netdata.io) Netdata demo server. Passing `options=anomaly-bit` returns the anomaly bit instead of the raw metric value.
+
+```
+https://london.my-netdata.io/api/v1/data?chart=system.cpu&options=anomaly-bit
+```
+
+If we aggregate the above to just 1 point by adding `points=1` we get an "[Anomaly Rate](https://learn.netdata.cloud/docs/agent/ml#anomaly-rate---averageanomaly-bit)":
+
+```
+https://london.my-netdata.io/api/v1/data?chart=system.cpu&options=anomaly-bit&points=1
+```
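+
+From the command line, the same query can be wrapped in `curl`. The parameter values below are illustrative; `after=-600`
+limits the window to the last 10 minutes, and you can swap in your own host and chart:
+
+```bash
+# Average anomaly rate for system.cpu over the last 10 minutes, returned as a single point
+curl "https://london.my-netdata.io/api/v1/data?chart=system.cpu&options=anomaly-bit&after=-600&points=1"
+```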
+
+The fundamentals of Netdata's anomaly detection approach and implementation are covered in much more detail in the [agent ML documentation](https://learn.netdata.cloud/docs/agent/ml).
+
+This guide will explain how to get started using these ML-based anomaly detection capabilities within Netdata.
+
+## Anomaly Advisor
+
+The [Anomaly Advisor](https://learn.netdata.cloud/docs/cloud/insights/anomaly-advisor) is the flagship anomaly detection feature within Netdata. In the "Anomalies" tab of Netdata you will see an overall "Anomaly Rate" chart that aggregates the node-level anomaly rate for all nodes in a space. The aim of this chart is to make it easy to quickly spot periods of time where the overall "[node anomaly rate](https://learn.netdata.cloud/docs/agent/ml#node-anomaly-rate)" is elevated in some unusual way, and which node or nodes are involved.
+
+![image](https://user-images.githubusercontent.com/2178292/175928290-490dd8b9-9c55-4724-927e-e145cb1cc837.png)
+
+Once an area on the Anomaly Rate chart is highlighted, Netdata will append a "heatmap" to the bottom of the screen that shows which metrics were more anomalous in the highlighted timeframe. Each row in the heatmap consists of an anomaly rate sparkline graph that can be expanded to reveal the raw underlying metric chart for that dimension.
+
+![image](https://user-images.githubusercontent.com/2178292/175929162-02c8fe69-cc4f-4cf4-9b3a-a5e559a6feca.png)
+
+## Embedded Anomaly Rate Charts
+
+Charts in both the [Overview](https://learn.netdata.cloud/docs/cloud/visualize/overview) and [single node dashboard](https://learn.netdata.cloud/docs/cloud/visualize/overview#jump-to-single-node-dashboards) tabs also expose the underlying anomaly rates for each dimension so users can easily see if the raw metrics are considered anomalous or not by Netdata.
+
+Pressing the anomalies icon (next to the information icon in the chart header) will expand the anomaly rate chart to make it easy to see how the anomaly rate for any individual dimension corresponds to the raw underlying data. In the example below we can see that the spike in `system.pgpgio|in` corresponded with the anomaly rate for that dimension jumping to 100% for a short period of time until the spike passed.
+
+![image](https://user-images.githubusercontent.com/2178292/175933078-5dd951ff-7709-4bb9-b4be-34199afb3945.png)
+
+## Anomaly Rate Based Alerts
+
+It is possible to use the `anomaly-bit` when defining traditional alerts within Netdata. The `anomaly-bit` is just another `options` parameter that can be passed as part of an [alarm line lookup](https://learn.netdata.cloud/docs/agent/health/reference#alarm-line-lookup).
+
+You can see some example ML based alert configurations below:
+
+- [Anomaly rate based CPU dimensions alarm](https://learn.netdata.cloud/docs/agent/health/reference#example-8---anomaly-rate-based-cpu-dimensions-alarm)
+- [Anomaly rate based CPU chart alarm](https://learn.netdata.cloud/docs/agent/health/reference#example-9---anomaly-rate-based-cpu-chart-alarm)
+- [Anomaly rate based node level alarm](https://learn.netdata.cloud/docs/agent/health/reference#example-10---anomaly-rate-based-node-level-alarm)
+- More examples in the [`/health/health.d/ml.conf`](https://github.com/netdata/netdata/blob/master/health/health.d/ml.conf) file that ships with the agent.
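+
+For orientation, a minimal sketch of such an alarm is shown below. It uses the `anomaly-bit` option in the `lookup` line
+with `foreach *` to create one alert per dimension; the name and thresholds are illustrative, so treat the linked
+reference examples as the canonical versions:
+
+```conf
+ template: cpu_dim_anomaly_rate
+       on: system.cpu
+   lookup: average -5m anomaly-bit foreach *
+    units: %
+    every: 30s
+     warn: $this > 5
+     crit: $this > 20
+```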
+
+## Learn More
+
+Check out the resources below to learn more about how Netdata is approaching ML:
+
+- [Agent ML documentation](https://learn.netdata.cloud/docs/agent/ml).
+- [Anomaly Advisor documentation](https://learn.netdata.cloud/docs/cloud/insights/anomaly-advisor).
+- [Metric Correlations documentation](https://learn.netdata.cloud/docs/cloud/insights/metric-correlations).
+- Anomaly Advisor [launch blog post](https://www.netdata.cloud/blog/introducing-anomaly-advisor-unsupervised-anomaly-detection-in-netdata/).
+- Netdata Approach to ML [blog post](https://www.netdata.cloud/blog/our-approach-to-machine-learning/).
+- `area/ml` related [GitHub Discussions](https://github.com/netdata/netdata/discussions?discussions_q=label%3Aarea%2Fml).
+- Netdata Machine Learning Meetup [deck](https://docs.google.com/presentation/d/1rfSxktg2av2k-eMwMbjN0tXeo76KC33iBaxerYinovs/edit?usp=sharing) and [YouTube recording](https://www.youtube.com/watch?v=eJGWZHVQdNU).
+- Netdata Anomaly Advisor [YouTube Playlist](https://youtube.com/playlist?list=PL-P-gAHfL2KPeUcCKmNHXC-LX-FfdO43j).
diff --git a/docs/guides/monitor/dimension-templates.md b/docs/guides/monitor/dimension-templates.md
new file mode 100644
index 0000000..5391273
--- /dev/null
+++ b/docs/guides/monitor/dimension-templates.md
@@ -0,0 +1,176 @@
+<!--
+title: "Use dimension templates to create dynamic alarms"
+custom_edit_url: https://github.com/netdata/netdata/edit/master/docs/guides/monitor/dimension-templates.md
+-->
+
+# Use dimension templates to create dynamic alarms
+
+Your ability to monitor the health of your systems and applications relies on your ability to create and maintain
+the best set of alarms for your particular needs.
+
+In v1.18 of Netdata, we introduced **dimension templates** for alarms, which simplify the process of writing [alarm
+entities](/health/REFERENCE.md#health-entity-reference) for charts with many dimensions.
+
+Dimension templates can condense many individual entities into one—no more copy-pasting one entity and changing the
+`alarm`/`template` and `lookup` lines for each dimension you'd like to monitor.
+
+They are, however, an advanced health monitoring feature. For more basic instructions on creating your first alarm,
+check out our [health monitoring documentation](/health/README.md), which also includes
+[examples](/health/REFERENCE.md#example-alarms).
+
+## The fundamentals of `foreach`
+
+Dimension templates add a new `foreach` parameter to the existing [`lookup`
+line](/health/REFERENCE.md#alarm-line-lookup). This is where the magic happens.
+
+You use the `foreach` parameter to specify which dimensions you want to monitor with this single alarm. You can separate
+them with a comma (`,`) or a pipe (`|`). You can also use a [Netdata simple pattern](/libnetdata/simple_pattern/README.md)
+to create many alarms with a regex-like syntax.
+
+The `foreach` parameter _has_ to be the last parameter in your `lookup` line, and if you have both `of` and `foreach` in
+the same `lookup` line, Netdata will ignore the `of` parameter and use `foreach` instead.
+
+Let's get into some examples so you can see how the new parameter works.
+
+> ⚠️ The following entities are examples to showcase the functionality and syntax of dimension templates. They are not
+> meant to be run as-is on production systems.
+
+## Condensing entities with `foreach`
+
+Let's say you want to monitor the `system`, `user`, and `nice` dimensions in your system's overall CPU utilization.
+Before dimension templates, you would need the following three entities:
+
+```yaml
+ alarm: cpu_system
+ on: system.cpu
+lookup: average -10m percentage of system
+ every: 1m
+ warn: $this > 50
+ crit: $this > 80
+
+ alarm: cpu_user
+ on: system.cpu
+lookup: average -10m percentage of user
+ every: 1m
+ warn: $this > 50
+ crit: $this > 80
+
+ alarm: cpu_nice
+ on: system.cpu
+lookup: average -10m percentage of nice
+ every: 1m
+ warn: $this > 50
+ crit: $this > 80
+```
+
+With dimension templates, you can condense these into a single alarm. Take note of the `alarm` and `lookup` lines.
+
+```yaml
+ alarm: cpu_template
+ on: system.cpu
+lookup: average -10m percentage foreach system,user,nice
+ every: 1m
+ warn: $this > 50
+ crit: $this > 80
+```
+
+The `alarm` line specifies the naming scheme Netdata will use. You can use whatever naming scheme you'd like, with `.`
+and `_` being the only allowed symbols.
+
+The `lookup` line has changed from `of` to `foreach`, and we're now passing three dimensions.
+
+In this example, Netdata will create three alarms with the names `cpu_template_system`, `cpu_template_user`, and
+`cpu_template_nice`. Every minute, each alarm will use the same database query to calculate the average CPU usage for
+the `system`, `user`, and `nice` dimensions over the last 10 minutes and send out alarms if necessary.
+
+You can find these three alarms active by clicking on the **Alarms** button in the top navigation, and then clicking on
+the **All** tab and scrolling to the **system - cpu** collapsible section.
+
+![Three new alarms created from the dimension template](https://user-images.githubusercontent.com/1153921/66218994-29523800-e67f-11e9-9bcb-9bca23e2c554.png)
+
+Let's look at some other examples of how `foreach` works so you can best apply it in your configurations.
+
+### Using a Netdata simple pattern in `foreach`
+
+In the last example, we used `foreach system,user,nice` to create three distinct alarms using dimension templates. But
+what if you want to quickly create alarms for _all_ the dimensions of a given chart?
+
+Use a [simple pattern](/libnetdata/simple_pattern/README.md)! One example of a simple pattern is a single wildcard
+(`*`).
+
+Instead of monitoring system CPU usage, let's monitor per-application CPU usage using the `apps.cpu` chart. Passing a
+wildcard as the simple pattern tells Netdata to create a separate alarm for _every_ process on your system:
+
+```yaml
+ alarm: app_cpu
+ on: apps.cpu
+lookup: average -10m percentage foreach *
+ every: 1m
+ warn: $this > 50
+ crit: $this > 80
+```
+
+This entity will now create alarms for every dimension in the `apps.cpu` chart. Given that most `apps.cpu` charts have
+10 or more dimensions, using the wildcard ensures you catch every CPU-hogging process.
+
+To learn more about how to use simple patterns with dimension templates, see our [simple patterns
+documentation](/libnetdata/simple_pattern/README.md).
+
+## Using `foreach` with alarm templates
+
+Dimension templates also work with [alarm templates](/health/REFERENCE.md#alarm-line-alarm-or-template). Alarm
+templates help you create alarms for all the charts with a given context—for example, all the cores of your system's
+CPU.
+
+By combining the two, you can create dozens of individual alarms with a single template entity. Here's how you would
+create alarms for the `system`, `user`, and `nice` dimensions for every chart in the `cpu.cpu` context—or, in other
+words, every CPU core.
+
+```yaml
+template: cpu_template
+ on: cpu.cpu
+ lookup: average -10m percentage foreach system,user,nice
+ every: 1m
+ warn: $this > 50
+ crit: $this > 80
+```
+
+On a system with a 6-core, 12-thread Ryzen 5 1600 CPU, this one entity creates alarms on the following charts and
+dimensions:
+
+- `cpu.cpu0`
+ - `cpu_template_user`
+ - `cpu_template_system`
+ - `cpu_template_nice`
+- `cpu.cpu1`
+ - `cpu_template_user`
+ - `cpu_template_system`
+ - `cpu_template_nice`
+- `cpu.cpu2`
+ - `cpu_template_user`
+ - `cpu_template_system`
+ - `cpu_template_nice`
+- ...
+- `cpu.cpu11`
+ - `cpu_template_user`
+ - `cpu_template_system`
+ - `cpu_template_nice`
+
+And here's how just a few of those dimension template-generated alarms look in the Netdata dashboard.
+
+![A few of the created alarms in the Netdata dashboard](https://user-images.githubusercontent.com/1153921/66219669-708cf880-e680-11e9-8b3a-7bfe178fa28b.png)
+
+All in all, this single entity creates 36 individual alarms. Much easier than writing 36 separate entities in your
+health configuration files!
+
+## What's next?
+
+We hope you're excited about the possibilities of using dimension templates! Maybe they'll inspire you to build new
+alarms that will help you better monitor the health of your systems.
+
+Or, at the very least, simplify your configuration files.
+
+For information about other advanced features in Netdata's health monitoring toolkit, check out our [health
+documentation](/health/README.md). And if you have some cool alarms you built using dimension templates, feel free to
+share them with the Netdata community.
+
+
diff --git a/docs/guides/monitor/kubernetes-k8s-netdata.md b/docs/guides/monitor/kubernetes-k8s-netdata.md
new file mode 100644
index 0000000..5cfefe8
--- /dev/null
+++ b/docs/guides/monitor/kubernetes-k8s-netdata.md
@@ -0,0 +1,254 @@
+<!--
+title: "Kubernetes monitoring with Netdata: Overview and visualizations"
+description: "Learn how to navigate Netdata's Kubernetes monitoring features for visualizing the health and performance of a Kubernetes cluster with per-second granularity."
+image: /img/seo/guides/monitor/kubernetes-k8s-netdata.png
+author: "Joel Hans"
+author_title: "Editorial Director, Technical & Educational Resources"
+author_img: "/img/authors/joel-hans.jpg"
+custom_edit_url: https://github.com/netdata/netdata/edit/master/docs/guides/monitor/kubernetes-k8s-netdata.md
+-->
+
+# Kubernetes monitoring with Netdata: Overview and visualizations
+
+At Netdata, we've built Kubernetes monitoring tools that add visibility without complexity while also helping you
+actively troubleshoot anomalies or outages. This guide walks you through each of the visualizations and offers best
+practices on how to use them to start Kubernetes monitoring in a matter of minutes, not hours or days.
+
+Netdata's Kubernetes monitoring solution uses a handful of [complementary tools and
+collectors](#related-reference-documentation) for peeling back the many complex layers of a Kubernetes cluster,
+_entirely for free_. These methods work together to give you every metric you need to troubleshoot performance or
+availability issues across your Kubernetes infrastructure.
+
+## Challenge
+
+While Kubernetes (k8s) might simplify the way you deploy, scale, and load-balance your applications, not all clusters
+come with "batteries included" when it comes to monitoring. Doubly so for a monitoring stack that helps you actively
+troubleshoot issues with your cluster.
+
+Some k8s providers, like GKE (Google Kubernetes Engine), do deploy clusters bundled with monitoring capabilities, such
+as Google Stackdriver Monitoring. However, these pre-configured solutions might not offer the depth of metrics,
+customization, or integration with your preferred alerting methods.
+
+Without this visibility, it's like you built an entire house and _then_ smashed your way through the finished walls to
+add windows.
+
+## Solution
+
+In this tutorial, you'll learn how to navigate Netdata's Kubernetes monitoring features, using
+[robot-shop](https://github.com/instana/robot-shop) as an example deployment. Deploying robot-shop is purely optional.
+You can also follow along with your own Kubernetes deployment if you choose. While the metrics might be different, the
+navigation and best practices are the same for every cluster.
+
+## What you need to get started
+
+To follow this tutorial, you need:
+
+- A free Netdata Cloud account. [Sign up](https://app.netdata.cloud/sign-up?cloudRoute=/spaces) if you don't have one
+ already.
+- A working cluster running Kubernetes v1.9 or newer, with a Netdata deployment and connected parent/child nodes. See
+ our [Kubernetes deployment process](/packaging/installer/methods/kubernetes.md) for details on deployment and
+ connecting to Cloud.
+- The [`kubectl`](https://kubernetes.io/docs/reference/kubectl/overview/) command line tool, within [one minor version
+ difference](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin) of your cluster, on an
+ administrative system.
+- The [Helm package manager](https://helm.sh/) v3.0.0 or newer on the same administrative system.
+
+### Install the `robot-shop` demo (optional)
+
+Begin by downloading the robot-shop code and using `helm` to create a new deployment.
+
+```bash
+git clone git@github.com:instana/robot-shop.git
+cd robot-shop/K8s/helm
+kubectl create ns robot-shop
+helm install robot-shop --namespace robot-shop .
+```
+
+Running `kubectl get pods` shows both the Netdata and robot-shop deployments.
+
+```bash
+kubectl get pods --all-namespaces
+NAMESPACE NAME READY STATUS RESTARTS AGE
+default netdata-child-29f9c 2/2 Running 0 10m
+default netdata-child-8xphf 2/2 Running 0 10m
+default netdata-child-jdvds 2/2 Running 0 11m
+default netdata-parent-554c755b7d-qzrx4 1/1 Running 0 11m
+kube-system aws-node-jnjv8 1/1 Running 0 17m
+kube-system aws-node-svzdb 1/1 Running 0 17m
+kube-system aws-node-ts6n2 1/1 Running 0 17m
+kube-system coredns-559b5db75d-f58hp 1/1 Running 0 22h
+kube-system coredns-559b5db75d-tkzj2 1/1 Running 0 22h
+kube-system kube-proxy-9p9cd 1/1 Running 0 17m
+kube-system kube-proxy-lt9ss 1/1 Running 0 17m
+kube-system kube-proxy-n75t9 1/1 Running 0 17m
+robot-shop cart-b4bbc8fff-t57js 1/1 Running 0 14m
+robot-shop catalogue-8b5f66c98-mr85z 1/1 Running 0 14m
+robot-shop dispatch-67d955c7d8-lnr44 1/1 Running 0 14m
+robot-shop mongodb-7f65d86c-dsslc 1/1 Running 0 14m
+robot-shop mysql-764c4c5fc7-kkbnf 1/1 Running 0 14m
+robot-shop payment-67c87cb7d-5krxv 1/1 Running 0 14m
+robot-shop rabbitmq-5bb66bb6c9-6xr5b 1/1 Running 0 14m
+robot-shop ratings-94fd9c75b-42wvh 1/1 Running 0 14m
+robot-shop redis-0 0/1 Pending 0 14m
+robot-shop shipping-7d69cb88b-w7hpj 1/1 Running 0 14m
+robot-shop user-79c445b44b-hwnm9 1/1 Running 0 14m
+robot-shop web-8bb887476-lkcjx 1/1 Running 0 14m
+```
+
+## Explore Netdata's Kubernetes monitoring charts
+
+The Netdata Helm chart deploys and enables everything you need for monitoring Kubernetes on every layer. Once you deploy
+Netdata and connect your cluster's nodes, you're ready to check out the visualizations **with zero configuration**.
+
+To get started, [sign in](https://app.netdata.cloud/sign-in?cloudRoute=/spaces) to your Netdata Cloud account. Head over
+to the War Room you connected your cluster to, or the default **General** War Room if you didn't choose another.
+
+Netdata Cloud is already visualizing your Kubernetes metrics, streamed in real-time from each node, in the
+[Overview](https://learn.netdata.cloud/docs/cloud/visualize/overview):
+
+![Netdata's Kubernetes monitoring
+dashboard](https://user-images.githubusercontent.com/1153921/109037415-eafc5500-7687-11eb-8773-9b95941e3328.png)
+
+Let's walk through monitoring each layer of a Kubernetes cluster using the Overview as our framework.
+
+## Cluster and node metrics
+
+The gauges and time-series charts you see right away in the Overview show aggregated metrics from every node in your
+cluster.
+
+For example, the `apps.cpu` chart (in the **Applications** menu item) visualizes the CPU utilization of various
+applications/services running on each of the nodes in your cluster. The **X Nodes** dropdown shows which nodes
+contribute to the chart and links to jump to a single-node dashboard for further investigation.
+
+![Per-application monitoring in a Kubernetes
+cluster](https://user-images.githubusercontent.com/1153921/109042169-19c8fa00-768d-11eb-91a7-1a7afc41fea2.png)
+
+For example, the chart above shows a spike in the CPU utilization from `rabbitmq` every minute or so, along with a
+baseline CPU utilization of 10-15% across the cluster.
+
+Read about the [Overview](https://learn.netdata.cloud/docs/cloud/visualize/overview) and some best practices on [viewing
+an overview of your infrastructure](/docs/visualize/overview-infrastructure.md) for details on using composite charts to
+drill down into per-node performance metrics.
+
+## Pod and container metrics
+
+Click on the **Kubernetes xxxxxxx...** section to jump down to Netdata Cloud's unique Kubernetes visualizations for
+viewing real-time resource utilization metrics from your Kubernetes pods and containers.
+
+![Navigating to the Kubernetes monitoring
+visualizations](https://user-images.githubusercontent.com/1153921/109049195-349f6c80-7695-11eb-8902-52a029dca77f.png)
+
+### Health map
+
+The first visualization is the [health map](https://learn.netdata.cloud/docs/cloud/visualize/kubernetes#health-map),
+which places each container into its own box, then varies the intensity of its color to visualize the resource
+utilization. By default, the health map shows the **average CPU utilization as a percentage of the configured limit**
+for every container in your cluster.
+
+![The Kubernetes health map in Netdata
+Cloud](https://user-images.githubusercontent.com/1153921/109050085-3f0e3600-7696-11eb-988f-52cb187f53ea.png)
+
+Let's explore the most colorful box by hovering over it.
+
+![Hovering over a
+container](https://user-images.githubusercontent.com/1153921/109049544-a8417980-7695-11eb-80a7-109b4a645a27.png)
+
+The **Context** tab shows `rabbitmq-5bb66bb6c9-6xr5b` as the container's image name, which means this container is
+running a [RabbitMQ](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/rabbitmq) workload.
+
+Click the **Metrics** tab to see real-time metrics from that container. Unsurprisingly, it shows a spike in CPU
+utilization at regular intervals.
+
+![Viewing real-time container
+metrics](https://user-images.githubusercontent.com/1153921/109050482-aa580800-7696-11eb-9e3e-d3bdf0f3eff7.png)
+
+### Time-series charts
+
+Beneath the health map is a variety of time-series charts that help you visualize resource utilization over time, which
+is useful for targeted troubleshooting.
+
+The default is to display metrics grouped by the `k8s_namespace` label, which shows resource utilization based on your
+different namespaces.
+
+![Time-series Kubernetes monitoring in Netdata
+Cloud](https://user-images.githubusercontent.com/1153921/109075210-126a1680-76b6-11eb-918d-5acdcdac152d.png)
+
+Each composite chart has a [definition bar](https://learn.netdata.cloud/docs/cloud/visualize/overview#definition-bar)
+for complete customization. For example, grouping the top chart by `k8s_container_name` reveals new information.
+
+![Changing time-series charts](https://user-images.githubusercontent.com/1153921/109075212-139b4380-76b6-11eb-836f-939482ae55fc.png)
+
+## Service metrics
+
+Netdata has a [service discovery plugin](https://github.com/netdata/agent-service-discovery), which discovers and
+creates configuration files for [compatible
+services](https://github.com/netdata/helmchart#service-discovery-and-supported-services) and any endpoints covered by
+our [generic Prometheus collector](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/prometheus).
+Netdata uses these files to collect metrics from any compatible application as they run _inside_ of a pod. Service
+discovery happens without manual intervention as pods are created, destroyed, or moved between nodes.
+
+Service metrics show up on the Overview as well, beneath the **Kubernetes** section, and are labeled according to the
+service in question. For example, the **RabbitMQ** section has numerous charts from the [`rabbitmq`
+collector](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/rabbitmq):
+
+![Finding service discovery
+metrics](https://user-images.githubusercontent.com/1153921/109054511-2eac8a00-769b-11eb-97f1-da93acb4b5fe.png)
+
+> The robot-shop cluster has more supported services, such as MySQL, which are not visible with zero configuration. This
+> is usually because of services running on non-default ports, using non-default names, or requiring passwords. Read up
+> on [configuring service discovery](/packaging/installer/methods/kubernetes.md#configure-service-discovery) to collect
+> more service metrics.
+
+Service metrics are essential to infrastructure monitoring, as they're the best indicator of the end-user experience,
+and key signals for troubleshooting anomalies or issues.
+
+## Kubernetes components
+
+Netdata also automatically collects metrics from two essential Kubernetes processes.
+
+### kubelet
+
+The **k8s kubelet** section visualizes metrics from the Kubernetes agent responsible for managing every pod on a given
+node. This also happens without any configuration thanks to the [kubelet
+collector](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/k8s_kubelet).
+
+Monitoring each node's kubelet can be invaluable when diagnosing issues with your Kubernetes cluster. For example, you
+can see if the number of running containers/pods has dropped, which could signal a fault or crash in a particular
+Kubernetes service or deployment (see `kubectl get services` or `kubectl get deployments` for more details). If the
+number of pods increases, it may be because of something more benign, like another team member scaling up a
+service with `kubectl scale`.
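+
+For example, to cross-check what the kubelet charts are showing against the cluster's own view of the robot-shop demo
+(substitute your own namespaces and deployment names):
+
+```bash
+# Compare desired vs. ready replicas for every deployment
+kubectl get deployments --all-namespaces
+
+# Scale the robot-shop web deployment if its replica count has dropped
+kubectl scale deployment web --replicas=2 -n robot-shop
+```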
+
+You can also view charts for the Kubelet API server, the volume of runtime/Docker operations by type,
+configuration-related errors, and the actual vs. desired numbers of volumes, plus a lot more.
+
+### kube-proxy
+
+The **k8s kube-proxy** section displays metrics about the network proxy that runs on each node in your Kubernetes
+cluster. kube-proxy lets pods communicate with each other and accept sessions from outside your cluster. Its metrics are
+collected by the [kube-proxy
+collector](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/k8s_kubeproxy).
+
+With Netdata, you can monitor how often your k8s proxies are syncing proxy rules between nodes. Dramatic changes in
+these figures could indicate an anomaly in your cluster that's worthy of further investigation.
+
+## What's next?
+
+After reading this guide, you should now be able to monitor any Kubernetes cluster with Netdata, including nodes, pods,
+containers, services, and more.
+
+With the health map, time-series charts, and the ability to drill down into individual nodes, you can see hundreds of
+per-second metrics with zero configuration and less time remembering all the `kubectl` options. Netdata moves with your
+cluster, automatically picking up new nodes or services as your infrastructure scales. And it's entirely free for
+clusters of all sizes.
+
+### Related reference documentation
+
+- [Netdata Helm chart](https://github.com/netdata/helmchart)
+- [Netdata service discovery](https://github.com/netdata/agent-service-discovery)
+- [Netdata Agent · `kubelet`
+ collector](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/k8s_kubelet)
+- [Netdata Agent · `kube-proxy`
+ collector](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/k8s_kubeproxy)
+- [Netdata Agent · `cgroups.plugin`](/collectors/cgroups.plugin/README.md)
+
+
diff --git a/docs/guides/monitor/lamp-stack.md b/docs/guides/monitor/lamp-stack.md
new file mode 100644
index 0000000..29b35e1
--- /dev/null
+++ b/docs/guides/monitor/lamp-stack.md
@@ -0,0 +1,246 @@
+<!--
+title: "LAMP stack monitoring (Linux, Apache, MySQL, PHP) with Netdata"
+description: "Set up robust LAMP stack monitoring (Linux, Apache, MySQL, PHP) in just a few minutes using a free, open-source monitoring tool that collects metrics every second."
+image: /img/seo/guides/monitor/lamp-stack.png
+author: "Joel Hans"
+author_title: "Editorial Director, Technical & Educational Resources"
+author_img: "/img/authors/joel-hans.jpg"
+custom_edit_url: https://github.com/netdata/netdata/edit/master/docs/guides/monitor/lamp-stack.md
+-->
+import { OneLineInstallWget } from '@site/src/components/OneLineInstall/'
+
+# LAMP stack monitoring (Linux, Apache, MySQL, PHP) with Netdata
+
+The LAMP stack is the "hello world" for deploying dynamic web applications. It's fast, flexible, and reliable, which
+means a developer or sysadmin won't go far in their career without interacting with the stack and its services.
+
+_LAMP_ is an acronym of the core services that make up the web application: **L**inux, **A**pache, **M**ySQL, and
+**P**HP.
+
+- [Linux](https://en.wikipedia.org/wiki/Linux) is the operating system running the whole stack.
+- [Apache](https://httpd.apache.org/) is a web server that responds to HTTP requests from users and returns web pages.
+- [MySQL](https://www.mysql.com/) is a database that stores and returns information based on queries from the web
+ application.
+- [PHP](https://www.php.net/) is a scripting language used to query the MySQL database and build new pages.
+
+LAMP stacks are the foundation for tons of end-user applications, with [Wordpress](https://wordpress.org/) being the
+most popular.
+
+## Challenge
+
+You've already deployed a LAMP stack, either in testing or production. You want to monitor every service's performance
+and availability to ensure the best possible experience for your end-users. You might also be particularly interested in
+using a free, open-source monitoring tool.
+
+Depending on your monitoring experience, you may not even know what metrics you're looking for, much less how to build
+dashboards using a query language. You need a robust monitoring experience that has the metrics you need without a ton
+of required setup.
+
+## Solution
+
+In this tutorial, you'll set up robust LAMP stack monitoring with Netdata in just a few minutes. When you're done,
+you'll have one dashboard to monitor every part of your web application, including each essential LAMP stack service.
+
+This dashboard updates every second with new metrics, and pairs those metrics up with preconfigured alarms to keep you
+informed of any errors or odd behavior.
+
+## What you need to get started
+
+To follow this tutorial, you need:
+
+- A physical or virtual Linux system, which we'll call a _node_.
+- A functional LAMP stack. There are plenty of tutorials for installing a LAMP stack, like [this
+ one](https://www.digitalocean.com/community/tutorials/how-to-install-linux-apache-mysql-php-lamp-stack-ubuntu-18-04)
+ from Digital Ocean.
+- Optionally, a [Netdata Cloud](https://app.netdata.cloud/sign-up?cloudRoute=/spaces) account, which you can use to view
+ metrics from multiple nodes in one dashboard, and a whole lot more, for free.
+
+## Install the Netdata Agent
+
+If you don't have the free, open-source Netdata monitoring agent installed on your node yet, get started with a [single
+kickstart command](/docs/get-started.mdx):
+
+<OneLineInstallWget/>
+
+The Netdata Agent is now collecting metrics from your node every second. You don't need to jump into the dashboard yet,
+but if you're curious, open your favorite browser and navigate to `http://localhost:19999` or `http://NODE:19999`,
+replacing `NODE` with the hostname or IP address of your system.
+
+## Enable hardware and Linux system monitoring
+
+There's nothing you need to do to enable [system monitoring](/docs/collect/system-metrics.md) and Linux monitoring with
+the Netdata Agent, which autodetects metrics from CPUs, memory, disks, networking devices, and Linux processes like
+systemd without any configuration. If you're using containers, Netdata automatically collects resource utilization
+metrics from each using the [cgroups data collector](/collectors/cgroups.plugin/README.md).
+
+## Enable Apache monitoring
+
+Let's begin by configuring Apache to work with Netdata's [Apache data
+collector](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/apache).
+
+Actually, there's nothing for you to do to enable Apache monitoring with Netdata.
+
+Apache comes with `mod_status` enabled by default these days, and Netdata is smart enough to look for metrics at that
+endpoint without you configuring it. Netdata is already collecting [`mod_status`
+metrics](https://httpd.apache.org/docs/2.4/mod/mod_status.html), which is just _part_ of your web server monitoring.
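+
+If you want to double-check the endpoint Netdata relies on, you can query `mod_status` by hand. This is an optional
+sanity check that assumes the default `/server-status` location and that you run it on the LAMP node itself:
+
+```bash
+# The ?auto suffix returns the machine-readable report that monitoring tools parse.
+curl "http://localhost/server-status?auto"
+```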
+
+## Enable web log monitoring
+
+The Netdata Agent also comes with a [web log
+collector](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/weblog), which reads Apache's access
+log file, processes each line, and converts them into per-second metrics. On Debian systems, it reads the file at
+`/var/log/apache2/access.log`.
+
+At installation, the Netdata Agent adds itself to the [`adm`
+group](https://wiki.debian.org/SystemGroups#Groups_without_an_associated_user), which gives the `netdata` process the
+right privileges to read Apache's log files. In other words, you don't need to do anything to enable Apache web log
+monitoring.
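+
+If you ever want to confirm those permissions, try reading the log as the `netdata` user. This optional check assumes
+the default Debian log path mentioned above:
+
+```bash
+# Prints the most recent access log line; a "Permission denied" error would mean
+# the netdata user can't read the file and web log metrics won't appear.
+sudo -u netdata tail -n 1 /var/log/apache2/access.log
+```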
+
+## Enable MySQL monitoring
+
+Because your MySQL database is password-protected, you do need to tell MySQL to allow the `netdata` user to connect
+without a password. Netdata's [MySQL data
+collector](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/mysql) collects metrics in _read-only_
+mode, without being able to alter or affect operations in any way.
+
+First, log into the MySQL shell. Then, run the following three commands, one at a time:
+
+```mysql
+CREATE USER 'netdata'@'localhost';
+GRANT USAGE, REPLICATION CLIENT, PROCESS ON *.* TO 'netdata'@'localhost';
+FLUSH PRIVILEGES;
+```
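+
+If you'd like to verify the grants before restarting Netdata, you can list them from the command line. This optional
+check assumes your MySQL root account uses socket authentication (the default on recent Debian/Ubuntu packages);
+otherwise, add `-u root -p`:
+
+```bash
+# Should list USAGE, REPLICATION CLIENT, and PROCESS for the netdata user.
+sudo mysql -e "SHOW GRANTS FOR 'netdata'@'localhost';"
+```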
+
+Run `sudo systemctl restart netdata`, or the [appropriate alternative for your
+system](/docs/configure/start-stop-restart.md), to collect dozens of metrics every second for robust MySQL monitoring.
+
+## Enable PHP monitoring
+
+Unlike Apache or MySQL, PHP isn't a service that you can monitor directly, unless you instrument a PHP-based application
+with [StatsD](/collectors/statsd.plugin/README.md).
+
+However, if you use [PHP-FPM](https://php-fpm.org/) in your LAMP stack, you can monitor that process with our [PHP-FPM
+data collector](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/phpfpm).
+
+Open your PHP-FPM configuration for editing, replacing `7.4` with your version of PHP:
+
+```bash
+sudo nano /etc/php/7.4/fpm/pool.d/www.conf
+```
+
+> Not sure what version of PHP you're using? Run `php -v`.
+
+Find the line that reads `;pm.status_path = /status` and remove the `;` so it looks like this:
+
+```conf
+pm.status_path = /status
+```
+
+Next, add a new `/status` endpoint to Apache. Open the Apache configuration file you're using for your LAMP stack.
+
+```bash
+sudo nano /etc/apache2/sites-available/your_lamp_stack.conf
+```
+
+Add the following to the end of the file, again replacing `7.4` with your version of PHP:
+
+```apache
+ProxyPass "/status" "unix:/run/php/php7.4-fpm.sock|fcgi://localhost"
+```
+
+Save and close the file. Finally, restart the PHP-FPM, Apache, and Netdata processes.
+
+```bash
+sudo systemctl restart php7.4-fpm.service
+sudo systemctl restart apache2
+sudo systemctl restart netdata
+```
+
+As the Netdata Agent starts up again, it automatically connects to the new `127.0.0.1/status` page and collects
+per-second PHP-FPM metrics to get you started with PHP monitoring.
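+
+If the PHP-FPM charts don't appear, it's worth confirming that the `/status` endpoint itself responds. A minimal
+check, assuming the proxy configuration you added above is in place:
+
+```bash
+# A short plain-text report (pool name, process manager, active processes, and so on)
+# means Apache is proxying /status to PHP-FPM correctly.
+curl "http://localhost/status"
+```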
+
+## View LAMP stack metrics
+
+If the Netdata Agent isn't already open in your browser, open a new tab and navigate to `http://localhost:19999` or
+`http://NODE:19999`, replacing `NODE` with the hostname or IP address of your system.
+
+> If you [signed up](https://app.netdata.cloud/sign-up?cloudRoute=/spaces) for Netdata Cloud earlier, you can also view
+> the exact same LAMP stack metrics there, plus additional features, like drag-and-drop custom dashboards. Be sure to
+> [connect your node](/claim/README.md) to start streaming metrics to your browser through Netdata Cloud.
+
+Netdata automatically organizes all metrics and charts onto a single page for easy navigation. Peek at gauges to see
+overall system performance, then scroll down to see more. Click-and-drag with your mouse to pan _all_ charts back and
+forth through different time intervals, or hold `SHIFT` and use the scrollwheel (or two-finger scroll) to zoom in and
+out. Check out our doc on [interacting with charts](/docs/visualize/interact-dashboards-charts.md) for all the details.
+
+![The Netdata
+dashboard](https://user-images.githubusercontent.com/1153921/109520555-98e17800-7a69-11eb-86ec-16f689da4527.png)
+
+The **System Overview** section, which you can also see in the right-hand menu, contains key hardware monitoring charts,
+including CPU utilization, memory page faults, network monitoring, and much more. The **Applications** section shows you
+exactly which Linux processes are using the most system resources.
+
+Next, let's check out LAMP-specific metrics. You should see four relevant sections: **Apache local**, **MySQL local**,
+**PHP-FPM local**, and **web log apache**. Click on any of these to see metrics from each service in your LAMP stack.
+
+![LAMP stack monitoring in
+Netdata](https://user-images.githubusercontent.com/1153921/109516332-49994880-7a65-11eb-807c-3cba045582e6.png)
+
+### Key LAMP stack monitoring charts
+
+Here's a quick reference for what charts you might want to focus on after setting up Netdata.
+
+| Chart name / context | Type | Why? |
+|-------------------------------------------------------|---------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| System Load Average (`system.load`) | Hardware monitoring | A good baseline load average is `0.7`, while `1` (on a 1-core system, `2` on a 2-core system, and so on) means resources are "perfectly" utilized. Higher load indicates a bottleneck somewhere in your system. |
+| System RAM (`system.ram`) | Hardware monitoring | Look at the `free` dimension. If that drops to `0`, your system will use swap memory and slow down. |
+| Uptime (`apache_local.uptime`) | Apache monitoring | This chart should always be "climbing," indicating a continuous uptime. Investigate any drops back to `0`. |
+| Requests By Type (`web_log_apache.requests_by_type`) | Apache monitoring | Check for increases in the `error` or `bad` dimensions, which could indicate users arriving at broken pages or PHP returning errors. |
+| Queries (`mysql_local.queries`) | MySQL monitoring | Queries is the total number of queries (queries per second, QPS). Check this chart for sudden spikes or drops, which indicate either increases in traffic/demand or bottlenecks in hardware performance. |
+| Active Connections (`mysql_local.connections_active`) | MySQL monitoring | If the `active` dimension nears the `limit`, your MySQL database will bottleneck responses. |
+| Performance (`phpfpm_local.performance`)              | PHP monitoring      | The `slow requests` dimension lets you know if any requests exceed the configured `request_slowlog_timeout`. If so, users might be having a less-than-ideal experience.                                          |
+
+## Get alarms for LAMP stack errors
+
+The Netdata Agent comes with hundreds of pre-configured alarms to help you keep tabs on your system, including 19 alarms
+designed for smarter LAMP stack monitoring.
+
+Click the 🔔 icon in the top navigation to [see active alarms](/docs/monitor/view-active-alarms.md). The **Active** tab
+shows any alarms currently triggered, while the **All** tab displays a list of _every_ pre-configured alarm.
+
+![An example of LAMP stack
+alarms](https://user-images.githubusercontent.com/1153921/109524120-5883f900-7a6d-11eb-830e-0e7baaa28163.png)
+
+[Tweak alarms](/docs/monitor/configure-alarms.md) based on your infrastructure monitoring needs, and to see these alarms
+in other places, like your inbox or a Slack channel, [enable a notification
+method](/docs/monitor/enable-notifications.md).
+
+## What's next?
+
+You've now set up robust monitoring for your entire LAMP stack: Linux, Apache, MySQL, and PHP (-FPM, to be exact). These
+metrics will help you keep tabs on the performance and availability of your web application and all its essential
+services. The per-second metrics granularity means you have the most accurate information possible for troubleshooting
+any LAMP-related issues.
+
+Another powerful way to monitor the availability of a LAMP stack is the [`httpcheck`
+collector](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/httpcheck), which pings a web server at
+a regular interval and tells you whether, and how quickly, it's responding. The `response_match` option also lets you
+monitor when the web server's response isn't what you expect it to be, which might happen if PHP-FPM crashes, for
+example.
+
+The best way to use the `httpcheck` collector is from a node other than the one running your LAMP stack, which is why
+we're not covering it here, but it _does_ work in a single-node setup. Just don't expect it to tell you if your whole
+node crashed.
+
+If you're planning on managing more than one node, or want to take advantage of advanced features, like finding the
+source of issues faster with [Metric Correlations](https://learn.netdata.cloud/docs/cloud/insights/metric-correlations),
+[sign up](https://app.netdata.cloud/sign-up?cloudRoute=/spaces) for a free Netdata Cloud account.
+
+### Related reference documentation
+
+- [Netdata Agent · Get started](/docs/get-started.mdx)
+- [Netdata Agent · Apache data collector](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/apache)
+- [Netdata Agent · Web log collector](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/weblog)
+- [Netdata Agent · MySQL data collector](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/mysql)
+- [Netdata Agent · PHP-FPM data collector](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/phpfpm)
+
diff --git a/docs/guides/monitor/pi-hole-raspberry-pi.md b/docs/guides/monitor/pi-hole-raspberry-pi.md
new file mode 100644
index 0000000..1246d8b
--- /dev/null
+++ b/docs/guides/monitor/pi-hole-raspberry-pi.md
@@ -0,0 +1,162 @@
+<!--
+title: "Monitor Pi-hole (and a Raspberry Pi) with Netdata"
+description: "Monitor Pi-hole metrics, plus Raspberry Pi system metrics, in minutes and completely for free with Netdata's open-source monitoring agent."
+image: /img/seo/guides/monitor/netdata-pi-hole-raspberry-pi.png
+custom_edit_url: https://github.com/netdata/netdata/edit/master/docs/guides/monitor/pi-hole-raspberry-pi.md
+-->
+import { OneLineInstallWget } from '@site/src/components/OneLineInstall/'
+
+# Monitor Pi-hole (and a Raspberry Pi) with Netdata
+
+Between intrusive ads, invasive trackers, and vicious malware, many techies and homelab enthusiasts are advancing their
+networks' security and speed with a tiny computer and a powerful piece of software: [Pi-hole](https://pi-hole.net/).
+
+Pi-hole is a DNS sinkhole that prevents unwanted content from even reaching devices on your home network. It blocks ads
+and malware at the network, instead of using extensions/add-ons for individual browsers, so you'll stop seeing ads in
+some of the most intrusive places, like your smart TV. Pi-hole can even [improve your network's speed and reduce
+bandwidth](https://discourse.pi-hole.net/t/will-pi-hole-slow-down-my-network/2048).
+
+Most Pi-hole users run it on a [Raspberry Pi](https://www.raspberrypi.org/products/raspberry-pi-4-model-b/) (hence the
+name), a credit card-sized, super-capable computer that costs about $35.
+
+And to keep tabs on how both Pi-hole and the Raspberry Pi are working to protect your network, you can use the
+open-source [Netdata monitoring agent](https://github.com/netdata/netdata).
+
+To get started, all you need is a [Raspberry Pi](https://www.raspberrypi.org/products/raspberry-pi-4-model-b/) with
+Raspbian installed. This guide uses a Raspberry Pi 4 Model B and Raspbian GNU/Linux 10 (buster). This guide assumes
+you're connecting to a Raspberry Pi remotely over SSH, but you could also complete all these steps on the system
+directly using a keyboard, mouse, and monitor.
+
+## Why monitor Pi-hole and a Raspberry Pi with Netdata?
+
+Netdata helps you monitor and troubleshoot all kinds of devices and the applications they run, including IoT devices
+like the Raspberry Pi and applications like Pi-hole.
+
+After a two-minute installation and with zero configuration, you'll be able to see all of Pi-hole's metrics, including
+the volume of queries, connected clients, DNS queries per type, top clients, top blocked domains, and more.
+
+With Netdata installed, you can also monitor system metrics and any other applications you might be running. By default,
+Netdata collects metrics on CPU usage, disk IO, bandwidth, per-application resource usage, and a ton more. With the
+Raspberry Pi used for this guide, Netdata automatically collects about 1,500 metrics every second!
+
+![Real-time Pi-hole monitoring with
+Netdata](https://user-images.githubusercontent.com/1153921/90447745-c8fe9600-e098-11ea-8a57-4f07339f002b.png)
+
+## Install Netdata
+
+Let's start by installing Netdata first so that it can begin collecting system metrics as soon as possible and build
+up as much historical data as it can.
+
+> ⚠️ Don't install Netdata using `apt` and the default package available in Raspbian. The Netdata team does not maintain
+> this package, and can't guarantee it works properly.
+
+On Raspberry Pis running Raspbian, the best way to install Netdata is our one-line kickstart script. This script asks
+you to install dependencies, then compiles Netdata from source via [GitHub](https://github.com/netdata/netdata).
+
+<OneLineInstallWget/>
+
+Once installed on a Raspberry Pi 4 with no accessories, Netdata starts collecting roughly 1,500 metrics every second and
+populates its dashboard with more than 250 charts.
+
+Open your browser of choice and navigate to `http://NODE:19999/`, replacing `NODE` with the IP address of your Raspberry
+Pi. Not sure what that IP is? Try running `hostname -I | awk '{print $1}'` from the Pi itself.
+
+You'll see Netdata's dashboard and a few hundred real-time,
+[interactive](https://learn.netdata.cloud/guides/step-by-step/step-02#interact-with-charts) charts. Feel free to
+explore, but let's turn our attention to installing Pi-hole.
+
+## Install Pi-Hole
+
+Like Netdata, Pi-hole has a one-line script for simple installation. From your Raspberry Pi, run the following:
+
+```bash
+curl -sSL https://install.pi-hole.net | bash
+```
+
+The installer will help you set up Pi-hole based on the topology of your network. Once finished, you should set up your
+devices—or your router for system-wide sinkhole protection—to [use Pi-hole as their DNS
+service](https://discourse.pi-hole.net/t/how-do-i-configure-my-devices-to-use-pi-hole-as-their-dns-server/245). You've
+finished setting up Pi-hole at this point.
+
+As far as configuring Netdata to monitor Pi-hole metrics goes, there's nothing you actually need to do. Netdata's [Pi-hole
+collector](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/pihole) will autodetect the new service
+running on your Raspberry Pi and immediately start collecting metrics every second.
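+
+If you're curious what the collector reads, you can query Pi-hole's local API yourself. This optional check assumes a
+default Pi-hole (v5) install serving its admin interface on the same host:
+
+```bash
+# Returns a short JSON summary (queries today, ads blocked, and so on) — the same
+# endpoint the Netdata collector polls every second.
+curl "http://localhost/admin/api.php?summaryRaw"
+```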
+
+Restart Netdata with `sudo systemctl restart netdata`, which will then recognize that Pi-hole is running and start a
+per-second collection job. When you refresh your Netdata dashboard or load it up again in a new tab, you'll see a new
+entry in the menu for **Pi-hole** metrics.
+
+## Use Netdata to explore and monitor your Raspberry Pi and Pi-hole
+
+By the time you've reached this point in the guide, Netdata has already collected a ton of valuable data about your
+Raspberry Pi, Pi-hole, and any other apps/services you might be running. Even a few minutes of collecting 1,500 metrics
+per second adds up quickly.
+
+You can now use Netdata's synchronized charts to zoom, highlight, scrub through time, and discern how an anomaly in one
+part of your system might affect another.
+
+![The Netdata dashboard in
+action](https://user-images.githubusercontent.com/1153921/80827388-b9fee100-8b98-11ea-8f60-0d7824667cd3.gif)
+
+If you're completely new to Netdata, look at our [step-by-step guide](/docs/guides/step-by-step/step-00.md) for a
+walkthrough of all its features. For a more expedited tour, see the [get started guide](/docs/get-started.mdx).
+
+### Enable temperature sensor monitoring
+
+You need to manually enable Netdata's built-in [temperature sensor
+collector](https://learn.netdata.cloud/docs/agent/collectors/charts.d.plugin/sensors) to start collecting metrics.
+
+> Netdata uses a few plugins to manage its [collectors](/collectors/REFERENCE.md), each using a different language: Go,
+> Python, Node.js, and Bash. While our Go collectors are undergoing the most active development, we still support the
+> other languages. In this case, you need to enable a temperature sensor collector that's written in Bash.
+
+First, open the `charts.d.conf` file for editing. You should always use the `edit-config` script to edit Netdata's
+configuration files, as it ensures your settings persist across updates to the Netdata Agent.
+
+```bash
+cd /etc/netdata
+sudo ./edit-config charts.d.conf
+```
+
+Uncomment the `sensors=force` line and save the file. Restart Netdata with `sudo systemctl restart netdata` to enable
+Raspberry Pi temperature sensor monitoring.
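+
+If you want to confirm the edit took effect, a quick check of the file should show the setting without a leading `#`.
+Replace `/etc/netdata` with your Netdata config directory if it lives elsewhere:
+
+```bash
+# Prints the line if it has been uncommented; no output means the collector is still disabled.
+grep "^sensors=force" /etc/netdata/charts.d.conf
+```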
+
+### Storing historical metrics on your Raspberry Pi
+
+By default, Netdata allocates 256 MiB in disk space to store historical metrics inside the [database
+engine](/database/engine/README.md). On the Raspberry Pi used for this guide, Netdata collects 1,500 metrics every
+second, which equates to storing 3.5 days worth of historical metrics.
+
+You can increase this allocation by editing `netdata.conf` and increasing the `dbengine multihost disk space` setting to
+more than 256.
+
+```conf
+[global]
+ dbengine multihost disk space = 512
+```
+
+Use our [database sizing
+calculator](/docs/store/change-metrics-storage.md#calculate-the-system-resources-ram-disk-space-needed-to-store-metrics)
+and [guide on storing historical metrics](/docs/guides/longer-metrics-storage.md) to help you determine the right
+setting for your Raspberry Pi.
+
+## What's next?
+
+Now that you're monitoring Pi-hole and your Raspberry Pi with Netdata, you can extend its capabilities even further, or
+configure Netdata to meet more specific goals.
+
+Most importantly, you can always install additional services and instantly collect metrics from many of them with our
+[300+ integrations](/collectors/COLLECTORS.md).
+
+- [Optimize performance](/docs/guides/configure/performance.md) using tweaks developed for IoT devices.
+- [Stream Raspberry Pi metrics](/streaming/README.md) to a parent host for easy access or longer-term storage.
+- [Tweak alarms](/health/QUICKSTART.md) for either Pi-hole or the health of your Raspberry Pi.
+- [Export metrics to external databases](/exporting/README.md) with the exporting engine.
+
+Or, head over to [our guides](https://learn.netdata.cloud/guides/) for even more experiments and insights into
+troubleshooting the health of your systems and services.
+
+If you have any questions about using Netdata to monitor your Raspberry Pi, Pi-hole, or any other applications, head on
+over to our [community forum](https://community.netdata.cloud/).
+
+
diff --git a/docs/guides/monitor/process.md b/docs/guides/monitor/process.md
new file mode 100644
index 0000000..2f46d7a
--- /dev/null
+++ b/docs/guides/monitor/process.md
@@ -0,0 +1,301 @@
+<!--
+title: Monitor any process in real-time with Netdata
+description: "Tap into Netdata's powerful collectors, with per-second utilization metrics for every process, to troubleshoot faster and make data-informed decisions."
+image: /img/seo/guides/monitor/process.png
+custom_edit_url: https://github.com/netdata/netdata/edit/master/docs/guides/monitor/process.md
+-->
+
+# Monitor any process in real-time with Netdata
+
+Netdata is more than a multitude of generic system-level metrics and visualizations. Instead of providing only a bird's
+eye view of your system, leaving you to wonder exactly _what_ is taking up 99% CPU, Netdata also gives you visibility
+into _every layer_ of your node. These additional layers give you context, and meaningful insights, into the true health
+and performance of your infrastructure.
+
+One of these layers is the _process_. Every time a Linux system runs a program, it creates an independent process that
+executes the program's instructions in parallel with anything else happening on the system. Linux systems track the
+state and resource utilization of processes using the [`/proc` filesystem](https://en.wikipedia.org/wiki/Procfs), and
+Netdata is designed to hook into those metrics to create meaningful visualizations out of the box.
+
+While there are a lot of existing command-line tools for tracking processes on Linux systems, such as `ps` or `top`,
+only Netdata provides dozens of real-time charts, at both per-second and event frequency, without you having to write
+SQL queries or know a bunch of arbitrary command-line flags.
+
+With Netdata's process monitoring, you can:
+
+- Benchmark/optimize performance of standard applications, like web servers or databases
+- Benchmark/optimize performance of custom applications
+- Troubleshoot CPU/memory/disk utilization issues (why is my system's CPU spiking right now?)
+- Perform granular capacity planning based on the specific needs of your infrastructure
+- Search for leaking file descriptors
+- Investigate zombie processes
+
+... and much more. Let's get started.
+
+## Prerequisites
+
+- One or more Linux nodes running [Netdata](/docs/get-started.mdx). If you need more time to understand Netdata before
+ following this guide, see the [infrastructure](/docs/quickstart/infrastructure.md) or
+ [single-node](/docs/quickstart/single-node.md) monitoring quickstarts.
+- A general understanding of how to [configure the Netdata Agent](/docs/configure/nodes.md) using `edit-config`.
+- A Netdata Cloud account. [Sign up](https://app.netdata.cloud) if you don't have one already.
+
+## How does Netdata do process monitoring?
+
+The Netdata Agent already knows to look for hundreds of [standard applications that we support via
+collectors](/collectors/COLLECTORS.md), and groups them based on their purpose. Let's say you want to monitor a MySQL
+database using its process. The Netdata Agent already knows to look for processes with the string `mysqld` in their
+name, along with a few others, and puts them into the `sql` group. This `sql` group then becomes a dimension in all
+process-specific charts.
+
+The process and groups settings are used by two unique and powerful collectors.
+
+[**`apps.plugin`**](/collectors/apps.plugin/README.md) looks at the Linux process tree every second, much like `top` or
+`ps fax`, and collects resource utilization information on every running process. It then automatically adds a layer of
+meaningful visualization on top of these metrics, and creates per-process/application charts.
+
+[**`ebpf.plugin`**](/collectors/ebpf.plugin/README.md): Netdata's extended Berkeley Packet Filter (eBPF) collector
+monitors Linux kernel-level metrics for file descriptors, virtual filesystem IO, and process management, and then hands
+process-specific metrics over to `apps.plugin` for visualization. The eBPF collector also collects and visualizes
+metrics on an _event frequency_, which means it captures every kernel interaction, and not just the volume of
+interaction at every second in time. That's even more precise than Netdata's standard per-second granularity.
+
+### Per-process metrics and charts in Netdata
+
+With these collectors working in parallel, Netdata visualizes the following per-second metrics for _any_ process on your
+Linux systems:
+
+- CPU utilization (`apps.cpu`)
+ - Total CPU usage
+ - User/system CPU usage (`apps.cpu_user`/`apps.cpu_system`)
+- Disk I/O
+ - Physical reads/writes (`apps.preads`/`apps.pwrites`)
+ - Logical reads/writes (`apps.lreads`/`apps.lwrites`)
+ - Open unique files (if a file is found open multiple times, it is counted just once, `apps.files`)
+- Memory
+ - Real Memory Used (non-shared, `apps.mem`)
+ - Virtual Memory Allocated (`apps.vmem`)
+ - Minor page faults (i.e. memory activity, `apps.minor_faults`)
+- Processes
+ - Threads running (`apps.threads`)
+ - Processes running (`apps.processes`)
+ - Carried over uptime (since the last Netdata Agent restart, `apps.uptime`)
+ - Minimum uptime (`apps.uptime_min`)
+ - Average uptime (`apps.uptime_average`)
+ - Maximum uptime (`apps.uptime_max`)
+ - Pipes open (`apps.pipes`)
+- Swap memory
+ - Swap memory used (`apps.swap`)
+ - Major page faults (i.e. swap activity, `apps.major_faults`)
+- Network
+ - Sockets open (`apps.sockets`)
+- eBPF file
+ - Number of calls to open files. (`apps.file_open`)
+ - Number of files closed. (`apps.file_closed`)
+ - Number of calls to open files that returned errors.
+ - Number of calls to close files that returned errors.
+- eBPF syscall
+ - Number of calls to delete files. (`apps.file_deleted`)
+ - Number of calls to `vfs_write`. (`apps.vfs_write_call`)
+ - Number of calls to `vfs_read`. (`apps.vfs_read_call`)
+ - Number of bytes written with `vfs_write`. (`apps.vfs_write_bytes`)
+ - Number of bytes read with `vfs_read`. (`apps.vfs_read_bytes`)
+ - Number of calls to write a file that returned errors.
+ - Number of calls to read a file that returned errors.
+- eBPF process
+ - Number of process created with `do_fork`. (`apps.process_create`)
+ - Number of threads created with `do_fork` or `__x86_64_sys_clone`, depending on your system's kernel version. (`apps.thread_create`)
+ - Number of times that a process called `do_exit`. (`apps.task_close`)
+- eBPF net
+ - Number of bytes sent. (`apps.bandwidth_sent`)
+ - Number of bytes received. (`apps.bandwidth_recv`)
+
+As an example, here's the per-process CPU utilization chart, including a `sql` group/dimension.
+
+![A per-process CPU utilization chart in Netdata
+Cloud](https://user-images.githubusercontent.com/1153921/101217226-3a5d5700-363e-11eb-8610-aa1640aefb5d.png)
+
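+If you want to look at the raw numbers behind any of these charts, you can also query the Agent's REST API directly. A
+minimal sketch using the per-process CPU chart (`apps.cpu`); the dimensions you get back depend on which groups exist
+on your node:
+
+```bash
+# Return the last 10 seconds of per-group CPU utilization as JSON.
+curl "http://localhost:19999/api/v1/data?chart=apps.cpu&after=-10&format=json"
+```
+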
+## Configure the Netdata Agent to recognize a specific process
+
+To monitor any process, you need to make sure the Netdata Agent is aware of it. As mentioned above, the Agent is already
+aware of hundreds of processes, and collects metrics from them automatically.
+
+But, if you want to change the grouping behavior, add an application that isn't yet supported in the Netdata Agent, or
+monitor a custom application, you need to edit the `apps_groups.conf` configuration file.
+
+Navigate to your [Netdata config directory](/docs/configure/nodes.md) and use `edit-config` to edit the file.
+
+```bash
+cd /etc/netdata # Replace this with your Netdata config directory if not at /etc/netdata.
+sudo ./edit-config apps_groups.conf
+```
+
+Inside the file are lists of process names, oftentimes using wildcards (`*`), that the Netdata Agent looks for and
+groups together. For example, the Netdata Agent looks for processes starting with `mysqld`, `mariad`, `postgres`, and
+others, and groups them into `sql`. That makes sense, since all these processes are for SQL databases.
+
+```conf
+sql: mysqld* mariad* postgres* postmaster* oracle_* ora_* sqlservr
+```
+
+These groups are then reflected as [dimensions](/web/README.md#dimensions) within Netdata's charts.
+
+![An example per-process CPU utilization chart in Netdata
+Cloud](https://user-images.githubusercontent.com/1153921/101369156-352e2100-3865-11eb-9f0d-b8fac162e034.png)
+
+See the following two sections for details based on your needs. If you don't need to configure `apps_groups.conf`, jump
+down to [visualizing process metrics](#visualize-process-metrics).
+
+### Standard applications (web servers, databases, containers, and more)
+
+As explained above, the Netdata Agent is already aware of most standard applications you run on Linux nodes, and you
+shouldn't need to configure it to discover them.
+
+However, if you're using multiple applications that the Netdata Agent groups together, you may want to separate them
+for more precise monitoring. If you're not running any other types of SQL databases on that node, you don't need to
+change the grouping, since you know that MySQL is the only process contributing to the `sql` group.
+
+Let's say you're using both MySQL and PostgreSQL databases on a single node, and want to monitor their processes
+independently. Open the `apps_groups.conf` file as explained in the [section
+above](#configure-the-netdata-agent-to-recognize-a-specific-process) and scroll down until you find the `database
+servers` section. Create new groups for MySQL and PostgreSQL, and move their process queries into the unique groups.
+
+```conf
+# -----------------------------------------------------------------------------
+# database servers
+
+mysql: mysqld*
+postgres: postgres*
+sql: mariad* postmaster* oracle_* ora_* sqlservr
+```
+
+Restart Netdata with `sudo systemctl restart netdata`, or the [appropriate
+method](/docs/configure/start-stop-restart.md) for your system, to start collecting utilization metrics from your
+application. Time to [visualize your process metrics](#visualize-process-metrics).
+
+### Custom applications
+
+Let's assume you have an application that runs on the process `custom-app`. To monitor eBPF metrics for that application
+separate from any others, you need to create a new group in `apps_groups.conf` and associate that process name with it.
+
+Open the `apps_groups.conf` file as explained in the [section
+above](#configure-the-netdata-agent-to-recognize-a-specific-process). Scroll down to `# NETDATA processes accounting`.
+Above that, paste in the following text, which creates a new `custom-app` group with the `custom-app` process. Replace
+`custom-app` with the name of your application's Linux process. `apps_groups.conf` should now look like this:
+
+```conf
+...
+# -----------------------------------------------------------------------------
+# Custom applications to monitor with apps.plugin and ebpf.plugin
+
+custom-app: custom-app
+
+# -----------------------------------------------------------------------------
+# NETDATA processes accounting
+...
+```
+
+Restart Netdata with `sudo systemctl restart netdata`, or the [appropriate
+method](/docs/configure/start-stop-restart.md) for your system, to start collecting utilization metrics from your
+application.
+
+## Visualize process metrics
+
+Now that you're collecting metrics for your process, you'll want to visualize them using Netdata's real-time,
+interactive charts. Find these visualizations in the same section regardless of whether you use [Netdata
+Cloud](https://app.netdata.cloud) for infrastructure monitoring, or single-node monitoring with the local Agent's
+dashboard at `http://localhost:19999`.
+
+If you need a refresher on all the available per-process charts, see the [above
+list](#per-process-metrics-and-charts-in-netdata).
+
+### Using Netdata's application collector (`apps.plugin`)
+
+`apps.plugin` puts all of its charts under the **Applications** section of any Netdata dashboard.
+
+![Screenshot of the Applications section on a Netdata
+dashboard](https://user-images.githubusercontent.com/1153921/101401172-2ceadb80-388f-11eb-9e9a-88443894c272.png)
+
+Let's continue with the MySQL example. We can create a [test
+database](https://www.digitalocean.com/community/tutorials/how-to-measure-mysql-query-performance-with-mysqlslap) in
+MySQL to generate load on the `mysql` process.
+
+`apps.plugin` immediately collects and visualizes this activity in the `apps.cpu` chart, which shows an increase in CPU
+utilization from the `sql` group. There is a parallel increase in `apps.pwrites`, which visualizes writes to disk.
+
+![Per-application CPU utilization
+metrics](https://user-images.githubusercontent.com/1153921/101409725-8527da80-389b-11eb-96e9-9f401535aafc.png)
+
+![Per-application disk writing
+metrics](https://user-images.githubusercontent.com/1153921/101409728-85c07100-389b-11eb-83fd-d79dd1545b5a.png)
+
+Next, the `mysqlslap` utility queries the database to provide some benchmarking load on the MySQL database. It won't
+look exactly like a production database executing lots of user queries, but it gives you an idea of what these
+visualizations can show.
+
+```bash
+sudo mysqlslap --user=sysadmin --password --host=localhost --concurrency=50 --iterations=10 --create-schema=employees --query="SELECT * FROM dept_emp;" --verbose
+```
+
+The following per-process disk utilization charts show spikes under the `sql` group at the same time `mysqlslap` was run
+numerous times, with slightly different concurrency and query options.
+
+![Per-application disk
+metrics](https://user-images.githubusercontent.com/1153921/101411810-d08fb800-389e-11eb-85b3-f3fa41f1f887.png)
+
+> 💡 Click on any dimension below a chart in Netdata Cloud (or to the right of a chart on a local Agent dashboard), to
+> visualize only that dimension. This can be particularly useful in process monitoring to separate one process'
+> utilization from the rest of the system.
+
+### Using Netdata's eBPF collector (`ebpf.plugin`)
+
+Netdata's eBPF collector puts its charts in two places. Of most importance to process monitoring are the **ebpf file**,
+**ebpf syscall**, **ebpf process**, and **ebpf net** sub-sections under **Applications**, shown in the above screenshot.
+
+For example, running the above workload shows the entire "story" of how MySQL interacts with the Linux kernel to open
+processes/threads to handle a large number of SQL queries, then subsequently close the tasks as each query returns the
+relevant data.
+
+![Per-process eBPF
+charts](https://user-images.githubusercontent.com/1153921/101412395-c8844800-389f-11eb-86d2-20c8a0f7b3c0.png)
+
+`ebpf.plugin` visualizes additional eBPF metrics, which are system-wide and not per-process, under the **eBPF** section.
+
+## What's next?
+
+Now that you have `apps_groups.conf` configured correctly, and know where to find per-process visualizations throughout
+Netdata's ecosystem, you can precisely monitor the health and performance of any process on your node using per-second
+metrics.
+
+For even more in-depth troubleshooting, see our guide on [monitoring and debugging applications with
+eBPF](/docs/guides/troubleshoot/monitor-debug-applications-ebpf.md).
+
+If the process you're monitoring also has a [supported collector](/collectors/COLLECTORS.md), now is a great time to set
+that up if it wasn't autodetected. With both process utilization and application-specific metrics, you should have every
+piece of data needed to discover the root cause of an incident. See our [collector
+setup](/docs/collect/enable-configure.md) doc for details.
+
+[Create new dashboards](/docs/visualize/create-dashboards.md) in Netdata Cloud using charts from `apps.plugin`,
+`ebpf.plugin`, and application-specific collectors to build targeted dashboards for monitoring key processes across your
+infrastructure.
+
+Try running [Metric Correlations](https://learn.netdata.cloud/docs/cloud/insights/metric-correlations) on a node that's
+running the process(es) you're monitoring. Even if nothing is going wrong at the moment, Netdata Cloud's embedded
+intelligence helps you better understand how a MySQL database, for example, might influence a system's volume of memory
+page faults. And when an incident is afoot, use Metric Correlations to reduce mean time to resolution (MTTR) and
+cognitive load.
+
+If you want more specific metrics from your custom application, check out Netdata's [statsd
+support](/collectors/statsd.plugin/README.md). With statd, you can send detailed metrics from your application to
+Netdata and visualize them with per-second granularity. Netdata's statsd collector works with dozens of [statsd server
+implementations](https://github.com/etsy/statsd/wiki#client-implementations), which work with most application
+frameworks.
+
+### Related reference documentation
+
+- [Netdata Agent · `apps.plugin`](/collectors/apps.plugin/README.md)
+- [Netdata Agent · `ebpf.plugin`](/collectors/ebpf.plugin/README.md)
+- [Netdata Agent · Dashboards](/web/README.md#dimensions)
+- [Netdata Agent · MySQL collector](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/mysql)
+
+
diff --git a/docs/guides/monitor/raspberry-pi-anomaly-detection.md b/docs/guides/monitor/raspberry-pi-anomaly-detection.md
new file mode 100644
index 0000000..73f57cd
--- /dev/null
+++ b/docs/guides/monitor/raspberry-pi-anomaly-detection.md
@@ -0,0 +1,125 @@
+---
+title: "Unsupervised anomaly detection for Raspberry Pi monitoring"
+description: "Use a low-overhead machine learning algorithm and an open-source monitoring tool to detect anomalous metrics on a Raspberry Pi."
+image: /img/seo/guides/monitor/raspberry-pi-anomaly-detection.png
+author: "Andy Maguire"
+author_title: "Senior Machine Learning Engineer"
+author_img: "/img/authors/andy-maguire.jpg"
+custom_edit_url: https://github.com/netdata/netdata/edit/master/docs/guides/monitor/raspberry-pi-anomaly-detection.md
+---
+
+We love IoT and edge at Netdata, and we also love machine learning. Even better if we can combine the two to ease the pain
+of monitoring increasingly complex systems.
+
+We recently explored what might be involved in enabling our Python-based [anomalies
+collector](/collectors/python.d.plugin/anomalies/README.md) on a Raspberry Pi. To our delight, it's actually quite
+straightforward!
+
+Read on to learn all the steps and enable unsupervised anomaly detection on your own Raspberry Pi(s).
+
+> Spoiler: It's just a couple of extra commands that will make you feel like a pro.
+
+## What you need to get started
+
+- A Raspberry Pi running Raspbian, which we'll call a _node_.
+- The [open-source Netdata](https://github.com/netdata/netdata) monitoring agent. If you don't have it installed on your
+ node yet, [get started now](/docs/get-started.mdx).
+
+## Install dependencies
+
+First, make sure Netdata is using Python 3 when it runs Python-based data collectors.
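+
+Since the steps below also rely on `pip3` to install Python packages, it's worth confirming both are present on the Pi
+first:
+
+```bash
+# Both commands should print a version; install the python3 and python3-pip
+# packages with apt if either is missing.
+python3 --version && pip3 --version
+```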
+
+Next, open `netdata.conf` using [`edit-config`](/docs/configure/nodes.md#use-edit-config-to-edit-configuration-files)
+from within the [Netdata config directory](/docs/configure/nodes.md#the-netdata-config-directory). Scroll down to the
+`[plugin:python.d]` section to pass in the `-ppython3` command option.
+
+```conf
+[plugin:python.d]
+ # update every = 1
+ command options = -ppython3
+```
+
+Next, install some of the underlying libraries used by the Python packages the collector depends upon.
+
+```bash
+sudo apt install llvm-9 libatlas3-base libgfortran5 libatlas-base-dev
+```
+
+Now you're ready to install the Python packages used by the collector itself. First, become the `netdata` user.
+
+```bash
+sudo su -s /bin/bash netdata
+```
+
+Then pass in the location to find `llvm` as an environment variable for `pip3`.
+
+```bash
+LLVM_CONFIG=llvm-config-9 pip3 install --user llvmlite numpy==1.20.1 netdata-pandas==0.0.38 numba==0.50.1 scikit-learn==0.23.2 pyod==0.8.3
+```
+
+## Enable the anomalies collector
+
+Now you're ready to enable the collector and [restart Netdata](/docs/configure/start-stop-restart.md).
+
+```bash
+sudo ./edit-config python.d.conf
+# set `anomalies: no` to `anomalies: yes`
+
+# restart netdata
+sudo systemctl restart netdata
+```
+
+And that should be it! Wait a minute or two, refresh your Netdata dashboard, and you should see the default anomalies
+charts under the **Anomalies** section in the dashboard's menu.
+
+![Anomaly detection on the Raspberry
+Pi](https://user-images.githubusercontent.com/1153921/110149717-9d749c00-7d9b-11eb-853c-e041a36f0a41.png)
+
+## Overhead on system
+
+Of course, one of the most important considerations when trying to do anomaly detection at the edge (as opposed to in a
+centralized cloud somewhere) is the resource utilization impact of running a monitoring tool.
+
+With the default configuration, the anomalies collector uses about 6.5% of CPU at each run. During the retraining step,
+CPU utilization jumps to 20-30% for a few seconds, but you can [configure
+retraining](/collectors/python.d.plugin/anomalies/README.md#configuration) to happen less often if you wish.
+
+![CPU utilization of anomaly detection on the Raspberry
+Pi](https://user-images.githubusercontent.com/1153921/110149718-9d749c00-7d9b-11eb-9af8-46e2032cd1d0.png)
+
+The collector's runtime averages around 250ms during each prediction step, jumping to about 8-10 seconds during a
+retraining step. This jump equates only to a small gap in the anomaly charts for a few seconds.
+
+![Execution time of anomaly detection on the Raspberry
+Pi](https://user-images.githubusercontent.com/1153921/110149715-9cdc0580-7d9b-11eb-826d-faf6f620621a.png)
+
+The last consideration then is the amount of RAM the collector needs to store both the models and some of the data
+during training. By default, the anomalies collector, along with all other running Python-based collectors, uses about
+100MB of system memory.
+
+![RAM utilization of anomaly detection on the Raspberry
+Pi](https://user-images.githubusercontent.com/1153921/110149720-9e0d3280-7d9b-11eb-883d-b1d4d9b9b5e1.png)
+
+## What's next?
+
+So, all in all, with a little bit of extra setup and a small overhead on the Pi itself, the anomalies collector
+looks like a potentially useful addition to enable unsupervised anomaly detection on your Pi.
+
+See our two-part guide series for a more complete picture of configuring the anomalies collector, plus some best
+practices on using the charts it automatically generates:
+
+- [_Detect anomalies in systems and applications_](/docs/guides/monitor/anomaly-detection-python.md)
+- [_Monitor and visualize anomalies with Netdata_](/docs/guides/monitor/visualize-monitor-anomalies.md)
+
+If you're using your Raspberry Pi for other purposes, like blocking ads/trackers with Pi-hole, check out our companion
+Pi guide: [_Monitor Pi-hole (and a Raspberry Pi) with Netdata_](/docs/guides/monitor/pi-hole-raspberry-pi.md).
+
+Once you've had a chance to give unsupervised anomaly detection a go, share your use cases and let us know of any
+feedback on our [community forum](https://community.netdata.cloud/t/anomalies-collector-feedback-megathread/767).
+
+### Related reference documentation
+
+- [Netdata Agent · Get Netdata](/docs/get-started.mdx)
+- [Netdata Agent · Anomalies collector](/collectors/python.d.plugin/anomalies/README.md)
+
+
diff --git a/docs/guides/monitor/statsd.md b/docs/guides/monitor/statsd.md
new file mode 100644
index 0000000..3e2f0f8
--- /dev/null
+++ b/docs/guides/monitor/statsd.md
@@ -0,0 +1,298 @@
+<!--
+title: How to use any StatsD data source with Netdata
+description: "Learn how to monitor any custom application instrumented with StatsD with per-second metrics and fully customizable, interactive charts."
+image: /img/seo/guides/monitor/statsd.png
+author: "Odysseas Lamtzidis"
+author_title: "Developer Advocate"
+author_img: "/img/authors/odysseas-lamtzidis.jpg"
+custom_edit_url: https://github.com/netdata/netdata/edit/master/docs/guides/monitor/statsd.md
+-->
+
+# StatsD Guide
+
+StatsD is a protocol and server implementation, first introduced at Etsy, to aggregate and summarize application metrics. With StatsD, applications are instrumented by developers using the libraries that already exist for the language, without caring about managing the data. The StatsD server is in charge of receiving the metrics, performing some simple processing on them, and then pushing them to the time-series database (TSDB) for long-term storage and visualization.
+
+Netdata is a fully-functional StatsD server and TSDB implementation, so you can instantly visualize metrics by simply sending them to Netdata using the built-in StatsD server.
+
+In this guide, we'll go through a scenario of visualizing our data in Netdata in a matter of seconds using [k6](https://k6.io), an open-source tool for automating load testing that outputs metrics to the StatsD format.
+
+Although we'll use k6 as the use-case, the same principles can be applied to every application that supports the StatsD protocol. Simply enable the StatsD output and point it to the node that runs Netdata, which is `localhost` in this case.
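+
+For k6 specifically, that means selecting the StatsD output when you start a test. A rough sketch — the exact flag and
+environment variable depend on your k6 version (newer releases moved StatsD output into an extension), and `script.js`
+stands in for whatever load-test script you already use:
+
+```bash
+# Point k6's StatsD output at the local Netdata Agent (default StatsD port 8125).
+K6_STATSD_ADDR=localhost:8125 k6 run --out statsd script.js
+```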
+
+In general, the process for creating a StatsD collector can be summarized in 2 steps:
+
+- Run an experiment by sending StatsD metrics to Netdata, without any prior configuration. This will create a chart per metric (called private charts) and will help you verify that everything works as expected from the application side of things.
+ - Make sure to reload the dashboard tab **after** you start sending data to Netdata.
+- Create a configuration file for your app using [edit-config](/docs/configure/nodes.md): `sudo ./edit-config
+ statsd.d/myapp.conf`
+  - Each app will have its own section in the right-hand menu.
+
+Now, let's see the above process in detail.
+
+## Prerequisites
+
+- A node with the [Netdata Agent](/docs/get-started.mdx) installed.
+- An application to instrument. For this guide, that will be [k6](https://k6.io/docs/getting-started/installation).
+
+## Understanding the metrics
+
+The real work in instrumenting an application with StatsD is to decide what metrics you want to visualize and how you want them grouped. In other words, you need to decide which metrics will be grouped into the same charts and how the charts will be grouped on Netdata's dashboard.
+
+Start with documentation for the particular application that you want to monitor (or the technological stack that you are using). In our case, the [k6 documentation](https://k6.io/docs/using-k6/metrics/) has a whole page dedicated to the metrics output by k6, along with descriptions.
+
+If you are using StatsD to monitor an existing application, you don't have much control over these metrics. For example, k6 has a type called `trend`, which is identical to timers and histograms. Thus, _k6 is clearly dictating_ which metrics can be used as histograms and simple gauges.
+
+On the other hand, if you are instrumenting your own code, you will need to not only decide what are the "things" that you want to measure, but also decide which StatsD metric type is the appropriate for each.
+
+## Use private charts to see all available metrics
+
+In Netdata, every metric will receive its own chart, called a `private chart`. Although in the final implementation this is something that we will disable, since it can create considerable noise (imagine having 100s of metrics), it’s very handy while building the configuration file.
+
+You can get a quick visual representation of the metrics and their types (e.g. a gauge, a timer, etc.).
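+
+If you'd like to see a private chart appear before wiring up k6, you can send a throwaway metric by hand. A minimal
+sketch, assuming the built-in StatsD server is listening on its default UDP port 8125 and using an arbitrary metric
+name chosen just for this test (the exact `nc` flags vary slightly between netcat variants):
+
+```bash
+# Send a single gauge value; a private chart for testapp.load should appear after a dashboard refresh.
+echo "testapp.load:42|g" | nc -u -w 1 localhost 8125
+```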
+
+An important thing to notice is that StatsD has different types of metrics, as illustrated in the [Netdata documentation](https://learn.netdata.cloud/docs/agent/collectors/statsd.plugin#metrics-supported-by-netdata). Histograms and timers support mathematical operations to be performed on top of the baseline metric, like reporting the `average` of the value.
+
+Here are some examples of default private charts. You can see that the histogram private charts will visualize all the available operations.
+
+**Gauge private chart**
+
+![Gauge metric example](https://i.imgur.com/Sr5nJEV.png)
+
+**Histogram private chart**
+
+![Timer metric example](https://i.imgur.com/P4p0hvq.png)
+
+## Create a new StatsD configuration file
+
+Start by creating a new configuration file under the `statsd.d/` folder in the [Netdata config directory](/docs/configure/nodes.md#the-netdata-config-directory). Use [`edit-config`](/docs/configure/nodes.md#use-edit-config-to-edit-configuration-files) to create a new file called `k6.conf`.
+
+```bash
+sudo ./edit-config statsd.d/k6.conf
+```
+
+Copy the following configuration into your file as a starting point.
+
+```conf
+[app]
+ name = k6
+ metrics = k6*
+ private charts = yes
+ gaps when not collected = no
+ memory mode = dbengine
+```
+
+Next, you need to understand how to organize metrics in Netdata’s StatsD.
+
+### Synthetic charts
+
+Netdata lets you group the metrics exposed by your instrumented application with _synthetic charts_.
+
+First, create a `[dictionary]` section to transform the names of the metrics into human-readable equivalents. `http_req_blocked`, `http_req_connecting`, `http_req_receiving`, and `http_reqs` are all metrics exposed by k6.
+
+```conf
+[dictionary]
+ http_req_blocked = Blocked HTTP Requests
+ http_req_connecting = Connecting HTTP Requests
+ http_req_receiving = Receiving HTTP Requests
+ http_reqs = Total HTTP requests
+```
+
+Continue this dictionary process with any other metrics you want to collect with Netdata.
+
+### Families and context
+
+Families and contexts are additional ways to group metrics. A family controls the submenu in the right-hand menu, and is a subcategory of a section. Given the metrics exposed by k6, we are organizing them in 2 major groups, or `families`: `k6 native metrics` and `http metrics`.
+
+Context is a second way to group metrics, when the metrics are of the same nature but have a different origin. In our case, if we ran several different load testing experiments side-by-side, we could define the same app, but different contexts (e.g. `http_requests.experiment1`, `http_requests.experiment2`).
+
+Find more details about family and context in our [documentation](/web/README.md#families).
+
+### Dimension
+
+Now, having decided on how we are going to group the charts, we need to define how we are going to group metrics into different charts. This is particularly important, since we decide:
+
+- What metrics **not** to show, since they are not useful for our use-case.
+- What metrics to consolidate into the same charts, so as to reduce noise and increase visual correlation.
+
+The dimension option has this syntax: `dimension = [pattern] METRIC NAME TYPE MULTIPLIER DIVIDER OPTIONS`
+
+- **pattern**: A keyword that tells the StatsD server the `METRIC` string is actually a [simple pattern](/libnetdata/simple_pattern/README.md). We don't use simple patterns in this example, but if we wanted to visualize all the `http_req` metrics, we could have a single dimension: `dimension = pattern 'k6.http_req*' last 1 1`. Find detailed examples with patterns in our [documentation](/collectors/statsd.plugin/README.md#dimension-patterns).
+- **METRIC**: The id of the metric as it comes from the client. You can easily find this in the private charts above, for example: `k6.http_req_connecting`.
+- **NAME**: The name of the dimension. You can use the dictionary to expand this to something more human-readable.
+- **TYPE**:
+ - For all charts:
+ - `events`: The number of events (data points) received by the StatsD server
+ - `last`: The last value that the server received
+ - For histograms and timers:
+    - `min`, `max`, `sum`, `average`, `percentile`, `median`, `stddev`: This is helpful if you want to see different representations of the same value. You can find an example in the `[iteration_duration]` chart of the finished configuration below. Note that the baseline `metric` is the same, but the `name` of each dimension is different, since we take the same baseline and perform a different computation on it, creating a different final metric for visualization (dimension).
+- **MULTIPLIER DIVIDER**: Handy if you want to convert kilobytes to megabytes, or to assign a negative value, which is useful for visualizing send/receive traffic in opposite directions. You can find an example in the **packets** submenu of the **IPv4 Networking** section.
+
+> ❕ If you define a chart, run Netdata to visualize metrics, and then add or remove a dimension from that chart, this will result in a new chart with the same name, confusing Netdata. If you change the dimensions of the chart, please make sure to also change the `name` of that chart, since it serves as the `id` of that chart in Netdata's storage. (e.g http_req --> http_req_1).
+
+### Finalize your StatsD configuration file
+
+It's time to assemble all the pieces and create the synthetic charts that will make up our application dashboard in Netdata. We can do it in a few simple steps:
+
+- Decide which metrics we want to use (we have viewed all of them as private charts). For example, we want to use `k6.http_requests`, `k6.vus`, etc.
+- Decide how we want to organize them into different synthetic charts. For example, we want `k6.http_requests`, `k6.vus` on their own, but `k6.http_req_blocked` and `k6.http_req_connecting` on the same chart.
+- For each synthetic chart, we define a **unique** name and a human readable title.
+- We decide which `family` (submenu section) each synthetic chart belongs to. For example, here we have defined 2 families: `http requests`, `k6_metrics`.
+- Optionally, if we have multiple instances of the same metric, we can define different contexts.
+- We define a dimension according to the syntax we highlighted above.
+- We define a type for each synthetic chart (line, area, stacked)
+- We define the units for each synthetic chart.
+
+Following the above steps, we append the following configuration to the `k6.conf` file we created above:
+
+```conf
+[http_req_total]
+ name = http_req_total
+ title = Total HTTP Requests
+ family = http requests
+ context = k6.http_requests
+ dimension = k6.http_reqs http_reqs last 1 1 sum
+ type = line
+ units = requests/s
+
+[vus]
+ name = vus
+ title = Virtual Active Users
+ family = k6_metrics
+ dimension = k6.vus vus last 1 1
+ dimension = k6.vus_max vus_max last 1 1
+ type = line
+        units = vus
+
+[iteration_duration]
+ name = iteration_duration_2
+ title = Iteration duration
+ family = k6_metrics
+ dimension = k6.iteration_duration iteration_duration last 1 1
+ dimension = k6.iteration_duration iteration_duration_max max 1 1
+ dimension = k6.iteration_duration iteration_duration_min min 1 1
+ dimension = k6.iteration_duration iteration_duration_avg avg 1 1
+ type = line
+        units = s
+
+[dropped_iterations]
+ name = dropped_iterations
+ title = Dropped Iterations
+ family = k6_metrics
+ dimension = k6.dropped_iterations dropped_iterations last 1 1
+ units = iterations
+ type = line
+
+[data]
+ name = data
+ title = K6 Data
+ family = k6_metrics
+ dimension = k6.data_received data_received last 1 1
+ dimension = k6.data_sent data_sent last -1 1
+ units = kb/s
+ type = area
+
+[http_req_status]
+ name = http_req_status
+ title = HTTP Requests Status
+ family = http requests
+ dimension = k6.http_req_blocked http_req_blocked last 1 1
+ dimension = k6.http_req_connecting http_req_connecting last 1 1
+ units = ms
+ type = line
+
+[http_req_duration]
+ name = http_req_duration
+ title = HTTP requests duration
+ family = http requests
+ dimension = k6.http_req_sending http_req_sending last 1 1
+ dimension = k6.http_req_waiting http_req_waiting last 1 1
+ dimension = k6.http_req_receiving http_req_receiving last 1 1
+ units = ms
+ type = stacked
+```
+
+> Take note that Netdata will report the rate for metrics and counters, even if k6 or another application sends an _absolute_ number. For example, k6 sends absolute HTTP requests with `http_reqs`, but Netdata visualizes that in `requests/second`.
+
+To enable this StatsD configuration, [restart Netdata](/docs/configure/start-stop-restart.md).
+
+## Final touches
+
+At this point, you have used StatsD to gather metrics for k6, creating a whole new section in your Netdata dashboard in the process. Moreover, you can further customize the icon of the particular section, as well as the description for each chart.
+
+To edit the section, please follow the Netdata [documentation](https://learn.netdata.cloud/docs/agent/web/gui#customizing-the-local-dashboard).
+
+While the following configuration will live in a new file, as the documentation suggests, it uses the existing `dashboard_info.js` as a template. Open that file to see how the rest of the sections and collectors are defined.
+
+```javascript
+netdataDashboard.menu = {
+    'k6': {
+        title: 'K6 Load Testing',
+        icon: '<i class="fas fa-cogs"></i>',
+        info: 'k6 is an open-source load testing tool and cloud service providing the best developer experience for API performance testing.'
+    },
+    // ... other sections ...
+};
+```
+
+We can then add a description for each chart. Simply find the following section in `dashboard_info.js` to understand how chart definitions are used:
+
+```javascript
+netdataDashboard.context = {
+ 'system.cpu': {
+ info: function (os) {
+ void (os);
+ return 'Total CPU utilization (all cores). 100% here means there is no CPU idle time at all. You can get per core usage at the <a href="#menu_cpu">CPUs</a> section and per application usage at the <a href="#menu_apps">Applications Monitoring</a> section.'
+ + netdataDashboard.sparkline('<br/>Keep an eye on <b>iowait</b> ', 'system.cpu', 'iowait', '%', '. If it is constantly high, your disks are a bottleneck and they slow your system down.')
+ + netdataDashboard.sparkline('<br/>An important metric worth monitoring, is <b>softirq</b> ', 'system.cpu', 'softirq', '%', '. A constantly high percentage of softirq may indicate network driver issues.');
+ },
+ valueRange: "[0, 100]"
+ },
+```
+
+Afterwards, you can open your `custom_dashboard_info.js`, as suggested in the documentation linked above, and add something like the following example:
+
+```javascript
+netdataDashboard.context = {
+    'k6.http_req_duration': {
+        info: "Total time for the request. It's equal to http_req_sending + http_req_waiting + http_req_receiving (i.e. how long did the remote server take to process the request and respond, without the initial DNS lookup/connection times)"
+    },
+};
+```
+
+The chart is identified as `<section_name>.<chart_name>`.
+
+These descriptions can greatly help the Netdata user who is monitoring your application in the midst of an incident.
+
+The `info` field supports HTML, so you can embed useful links and instructions in the description.
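+For example, a description can link out to an application's own documentation. A small, hypothetical entry (the URL and wording are illustrative) might look like this:
+
+```javascript
+netdataDashboard.context = {
+    'k6.http_req_status': {
+        // The info string is injected as HTML underneath the chart.
+        info: 'Time spent blocked and establishing connections. See the ' +
+            '<a href="https://k6.io/docs/using-k6/metrics/">k6 metrics documentation</a> for details on each built-in metric.'
+    },
+};
+```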
+
+## Vendoring a new collector
+
+In learning how to visualize any data source in Netdata using the StatsD protocol, we have also effectively created a new collector.
+
+As long as you use the same underlying collector, every new `myapp.conf` file will create a new data source and dashboard section for Netdata. Netdata loads all the configuration files by default, but it will **not** create dashboard sections or charts unless it starts receiving data for that particular data source. This means that we can now share our collector with the rest of the Netdata community.
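+As a quick sketch, assuming your Netdata configuration directory is `/etc/netdata` and that, as with `k6.conf`, you keep user StatsD configurations under `statsd.d/`, adding another data source could look like this:
+
+```bash
+# Create a synthetic chart configuration for another application, then
+# restart Netdata so the StatsD plugin picks it up.
+cd /etc/netdata                            # your Netdata configuration directory
+sudo ./edit-config statsd.d/myapp.conf
+sudo systemctl restart netdata             # or the appropriate restart method for your system
+```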
+
+If you want to contribute or you need any help in developing your collector, we have a whole [Forum Category](https://community.netdata.cloud/c/agent-development/9) dedicated to contributing to the Netdata Agent.
+
+### Making a PR to the netdata/netdata repository
+
+- Make sure you follow the contributing guide and read our Code of Conduct
+- Fork the netdata/netdata repository
+- Place the configuration file inside `netdata/collectors/statsd.plugin`
+- Add a reference in `netdata/collectors/statsd.plugin/Makefile.am`. For example, if we contribute the `k6.conf` file:
+```Makefile
+dist_statsdconfig_DATA = \
+ example.conf \
+ k6.conf \
+ $(NULL)
+```
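+A possible command sequence for this workflow, assuming you have already forked the repository on GitHub and your `k6.conf` lives under your Netdata configuration directory, might look like the following sketch:
+
+```bash
+# Clone your fork and create a branch for the new configuration.
+git clone https://github.com/<your-username>/netdata.git
+cd netdata
+git checkout -b add-k6-statsd-config
+
+# Copy the configuration into the repository; edit Makefile.am as shown above.
+cp /etc/netdata/statsd.d/k6.conf collectors/statsd.plugin/k6.conf
+
+# Commit, push, and open a pull request against netdata/netdata.
+git add collectors/statsd.plugin/k6.conf collectors/statsd.plugin/Makefile.am
+git commit -m "Add k6 StatsD configuration"
+git push origin add-k6-statsd-config
+```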
+
+## What's next?
+
+In this tutorial, you learned how to monitor an application using Netdata's StatsD implementation.
+
+Netdata allows you to easily visualize any StatsD metric without any configuration, since it creates a private chart per metric by default. But to make your implementation more robust, you also learned how to group metrics by family and context, and create multiple dimensions. With these tools, you can quickly instrument any application with StatsD to monitor its performance and availability with per-second metrics.
+
+### Related reference documentation
+
+- [Netdata Agent · StatsD](/collectors/statsd.plugin/README.md)
+
+
diff --git a/docs/guides/monitor/stop-notifications-alarms.md b/docs/guides/monitor/stop-notifications-alarms.md
new file mode 100644
index 0000000..a8b73a8
--- /dev/null
+++ b/docs/guides/monitor/stop-notifications-alarms.md
@@ -0,0 +1,92 @@
+<!--
+title: "Stop notifications for individual alarms"
+custom_edit_url: https://github.com/netdata/netdata/edit/master/docs/guides/monitor/stop-notifications-alarms.md
+-->
+
+# Stop notifications for individual alarms
+
+In this short tutorial, you'll learn how to stop notifications for individual alarms in Netdata's health
+monitoring system. We also refer to this process as _silencing_ the alarm.
+
+Why silence alarms? We designed Netdata's pre-configured alarms for production systems, so they might not be
+relevant if you run Netdata on your laptop or a small virtual server. If they're not helpful, they can be a distraction
+to real issues with health and performance.
+
+Silencing individual alarms is an excellent solution for situations where you're not interested in seeing a specific
+alarm but don't want to disable a [notification system](/health/notifications/README.md) entirely.
+
+## Find the alarm configuration file
+
+To silence an alarm, you need to know where to find its configuration file.
+
+Let's use the `system.cpu` chart as an example. It's the first chart you'll see on most Netdata dashboards.
+
+To figure out which file you need to edit, open up Netdata's dashboard, click the **Alarms** button at the top
+of the dashboard, and then click on the **All** tab.
+
+In this example, we're looking for the `system - cpu` entity, which, when opened, looks like this:
+
+![The system - cpu alarm
+entity](https://user-images.githubusercontent.com/1153921/67034648-ebb4cc80-f0cc-11e9-9d49-1023629924f5.png)
+
+In the `source` row, you see that this chart is getting its configuration from
+`4@/usr/lib/netdata/conf.d/health.d/cpu.conf`. The relevant part begins at `health.d`: `health.d/cpu.conf`. That's
+the file you need to edit if you want to silence this alarm.
+
+For more information about editing or referencing health configuration files on your system, see the [health
+quickstart](/health/QUICKSTART.md#edit-health-configuration-files).
+
+## Edit the file to enable silencing
+
+To edit `health.d/cpu.conf`, use `edit-config` from inside of your Netdata configuration directory.
+
+```bash
+cd /etc/netdata/ # Replace with your Netdata configuration directory, if not /etc/netdata/
+./edit-config health.d/cpu.conf
+```
+
+> You may need to use `sudo` or another method of elevating your privileges.
+
+The beginning of the file looks like this:
+
+```yaml
+template: 10min_cpu_usage
+ on: system.cpu
+ os: linux
+ hosts: *
+ lookup: average -10m unaligned of user,system,softirq,irq,guest
+ units: %
+ every: 1m
+ warn: $this > (($status >= $WARNING) ? (75) : (85))
+ crit: $this > (($status == $CRITICAL) ? (85) : (95))
+ delay: down 15m multiplier 1.5 max 1h
+ info: average cpu utilization for the last 10 minutes (excluding iowait, nice and steal)
+ to: sysadmin
+```
+
+To silence this alarm, change `sysadmin` to `silent`.
+
+```yaml
+ to: silent
+```
+
+Use one of the available [methods](/health/QUICKSTART.md#reload-health-configuration) to reload your health configuration
+and ensure you get no more notifications for that alarm.
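+For example, depending on your Netdata version, either of the following should reload the health configuration without restarting the Agent:
+
+```bash
+# Reload only the health configuration via the Netdata CLI.
+sudo netdatacli reload-health
+
+# Or send SIGUSR2 to the running netdata process, which also reloads health settings.
+sudo killall -USR2 netdata
+```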
+
+You can add `to: silent` to any alarm whose notifications you'd rather not receive.
+
+## What's next?
+
+You should now know the fundamentals behind silencing any individual alarm in Netdata.
+
+To learn about _all_ of Netdata's health configuration possibilities, visit the [health reference
+guide](/health/REFERENCE.md), or check out other [tutorials on health monitoring](/health/README.md#guides).
+
+Or, take better control over how you get notified about alarms via the [notification
+system](/health/notifications/README.md).
+
+You can also use Netdata's [Health Management API](/web/api/health/README.md#health-management-api) to control health
+checks and notifications while Netdata runs. With this API, you can disable health checks during a maintenance window or
+backup process, for example.
+
+
diff --git a/docs/guides/monitor/visualize-monitor-anomalies.md b/docs/guides/monitor/visualize-monitor-anomalies.md
new file mode 100644
index 0000000..1f8c2c8
--- /dev/null
+++ b/docs/guides/monitor/visualize-monitor-anomalies.md
@@ -0,0 +1,142 @@
+---
+title: "Monitor and visualize anomalies with Netdata (part 2)"
+description: "Using unsupervised anomaly detection and machine learning, get notified "
+image: /img/seo/guides/monitor/visualize-monitor-anomalies.png
+author: "Joel Hans"
+author_title: "Editorial Director, Technical & Educational Resources"
+author_img: "/img/authors/joel-hans.jpg"
+custom_edit_url: https://github.com/netdata/netdata/edit/master/docs/guides/monitor/visualize-monitor-anomalies.md
+---
+
+Welcome to part 2 of our series of guides on using _unsupervised anomaly detection_ to detect issues with your systems,
+containers, and applications using the open-source Netdata Agent. For an introduction to detecting anomalies and
+monitoring associated metrics, see [part 1](/docs/guides/monitor/anomaly-detection-python.md), which covers prerequisites and
+configuration basics.
+
+With anomaly detection in the Netdata Agent set up, you will now want to visualize and monitor which charts have
+anomalous data, when, and where to look next.
+
+> 💡 In certain cases, the anomalies collector doesn't start immediately after restarting the Netdata Agent. If this
+> happens, you won't see the dashboard section or the relevant [charts](#visualize-anomalies-in-charts) right away. Wait
+> a minute or two, refresh, and look again. If the anomalies charts and alarms are still not present, investigate the
+> error log with `less /var/log/netdata/error.log | grep anomalies`.
+
+## Test anomaly detection
+
+Time to see the Netdata Agent's unsupervised anomaly detection in action. To trigger anomalies on the Nginx web server,
+use `ab`, otherwise known as [Apache Bench](https://httpd.apache.org/docs/2.4/programs/ab.html). Despite its name, it
+works just as well with Nginx web servers. Install it on Ubuntu/Debian systems with `sudo apt install apache2-utils`.
+
+> 💡 If you haven't followed the guide's example of using Nginx, an easy way to test anomaly detection on your node is
+> to use the `stress-ng` command, which is available on most Linux distributions. Run `stress-ng --cpu 0` to create CPU
+> stress or `stress-ng --vm 0` for RAM stress. Each test will cause some "collateral damage," in that you may see CPU
+> utilization rise when running the RAM test, and vice versa.
+
+The following test runs for up to 60 seconds, sending at most 10 concurrent requests and up to 10,000,000 requests in
+total. The request limit is set high enough that, in practice, the 60-second timer is what ends the test, so `ab`
+keeps sending requests for the full minute.
+
+```bash
+ab -k -c 10 -t 60 -n 10000000 http://127.0.0.1/
+```
+
+Let's see how Netdata detects this anomalous behavior and propagates information to you through preconfigured alarms and
+dashboards that automatically organize anomaly detection metrics into meaningful charts to help you begin root cause
+analysis (RCA).
+
+## Monitor anomalies with alarms
+
+The anomalies collector creates two "classes" of alarms for each chart captured by the `charts_regex` setting. All these
+alarms are preconfigured based on your [configuration in
+`anomalies.conf`](/docs/guides/monitor/anomaly-detection-python.md#configure-the-anomalies-collector). With the `charts_regex`
+and `charts_to_exclude` settings from [part 1](/docs/guides/monitor/anomaly-detection-python.md) of this guide series, the
+Netdata Agent creates 32 alarms driven by unsupervised anomaly detection.
+
+The first class triggers warning alarms when the average anomaly probability for a given chart has stayed above 50% for
+at least the last two minutes.
+
+![An example anomaly probability
+alarm](https://user-images.githubusercontent.com/1153921/104225767-0a0a9480-5404-11eb-9bfd-e29592397203.png)
+
+The second class triggers warning alarms when the number of anomalies in the last two minutes hits 10 or higher.
+
+![An example anomaly count
+alarm](https://user-images.githubusercontent.com/1153921/104225769-0aa32b00-5404-11eb-95f3-7309f9429fe1.png)
+
+If you see either of these alarms in Netdata Cloud, the local Agent dashboard, or on your preferred notification
+platform, it's a safe bet that the node's current metrics have deviated from normal. That doesn't necessarily mean
+there's a full-blown incident, depending on what application/service you're using anomaly detection on, but it's worth
+further investigation.
+
+As you use the anomalies collector, you may find that the default settings provide too many or too few genuine alarms.
+In this case, [configure the alarm](/docs/monitor/configure-alarms.md) with `sudo ./edit-config
+health.d/anomalies.conf`. Take a look at the `lookup` line syntax in the [health
+reference](/health/REFERENCE.md#alarm-line-lookup) to understand how the anomalies collector automatically creates
+alarms for any dimension on the `anomalies_local.probability` and `anomalies_local.anomaly` charts.
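+For reference, the shipped configuration looks roughly like the following sketch; the exact template names, thresholds, and `info` lines may differ between versions, so treat your local `health.d/anomalies.conf` as the source of truth. The `foreach *` clause in each `lookup` line is what creates one alarm per dimension:
+
+```yaml
+# A sketch of the two alarm classes; check health.d/anomalies.conf for the
+# exact values shipped with your version.
+
+# Warn when the average anomaly probability stays above 50% for two minutes.
+template: anomalies_anomaly_probabilities
+      on: anomalies.probability
+  lookup: average -2m foreach *
+   every: 1m
+    warn: $this > 50
+
+# Warn when 10 or more anomaly flags fire within two minutes.
+template: anomalies_anomaly_flags
+      on: anomalies.anomaly
+  lookup: sum -2m foreach *
+   every: 1m
+    warn: $this > 10
+```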
+
+## Visualize anomalies in charts
+
+In either [Netdata Cloud](https://app.netdata.cloud) or the local Agent dashboard at `http://NODE:19999`, click on the
+**Anomalies** [section](/web/gui/README.md#sections) to see the pair of anomaly detection charts, which are
+preconfigured to visualize per-second anomaly metrics based on your [configuration in
+`anomalies.conf`](/docs/guides/monitor/anomaly-detection-python.md#configure-the-anomalies-collector).
+
+These charts have the contexts `anomalies.probability` and `anomalies.anomaly`. Together, they not only help you
+immediately recognize that something is going wrong on your node, but also give you context as to where to look next.
+
+The `anomalies_local.probability` chart shows the probability that the latest observed data is anomalous, based on the
+trained model. The `anomalies_local.anomaly` chart visualizes 0&rarr;1 predictions of whether the latest observed
+data is anomalous, based on the same model. Both charts share the same dimensions, which you configured via
+`charts_regex` and `charts_to_exclude` in [part 1](/docs/guides/monitor/anomaly-detection-python.md).
+
+In other words, the `probability` chart shows the amplitude of the anomaly, whereas the `anomaly` chart provides quick
+yes/no context.
+
+![Two charts created by the anomalies
+collector](https://user-images.githubusercontent.com/1153921/104226380-ef84eb00-5404-11eb-9faf-9e64c43b95ff.png)
+
+Before `08:32:00`, both charts show little in the way of verified anomalies. Based on the metrics the anomalies
+collector has trained on, some baseline level of anomaly probability is normal, as seen in the
+`web_log_nginx_requests_prob` dimension and a few others. What you're looking for is large deviations from the "noise"
+in the `anomalies.probability` chart, or any increments to the `anomalies.anomaly` chart.
+
+Unsurprisingly, the stress test that began at `08:32:00` caused significant changes to these charts. The three
+dimensions that immediately shot to 100% anomaly probability, and remained there during the test, were
+`web_log_nginx.requests_prob`, `nginx_local.connections_accepted_handled_prob`, and `system.cpu_pressure_prob`.
+
+## Build an anomaly detection dashboard
+
+[Netdata Cloud](https://app.netdata.cloud) features a drag-and-drop [dashboard
+editor](/docs/visualize/create-dashboards.md) that helps you create entirely new dashboards with charts targeted for
+your specific applications.
+
+For example, here's a dashboard designed for visualizing anomalies present in an Nginx web server, including
+documentation about why the dashboard exists and where to look next based on what you're seeing:
+
+![An example anomaly detection
+dashboard](https://user-images.githubusercontent.com/1153921/104226915-c6188f00-5405-11eb-9bb4-559a18016fa7.png)
+
+Use the anomaly charts for instant visual identification of potential anomalies, then use the Nginx-specific charts
+in the right column to validate whether the probability and anomaly counters reflect a real incident worth further
+investigation. From there, use [Metric Correlations](https://learn.netdata.cloud/docs/cloud/insights/metric-correlations)
+to narrow the dashboard down to only the charts relevant to what you're seeing from the anomalies collector.
+
+## What's next?
+
+Between this guide and [part 1](/docs/guides/monitor/anomaly-detection-python.md), which covered setup and configuration, you
+now have a fundamental understanding of how unsupervised anomaly detection in Netdata works, from root cause to alarms
+to preconfigured or custom dashboards.
+
+We'd love to hear your feedback on the anomalies collector. Hop over to the [community
+forum](https://community.netdata.cloud/t/anomalies-collector-feedback-megathread/767), and let us know if you're already getting value from
+unsupervised anomaly detection, or would like to see something added to it. You might even post a custom configuration
+that works well for monitoring some other popular application, like MySQL, PostgreSQL, Redis, or anything else we
+[support through collectors](/collectors/COLLECTORS.md).
+
+### Related reference documentation
+
+- [Netdata Agent · Anomalies collector](/collectors/python.d.plugin/anomalies/README.md)
+- [Netdata Cloud · Build new dashboards](https://learn.netdata.cloud/docs/cloud/visualize/dashboards)
+
+