Diffstat (limited to 'health/guides/anomalies/anomalies_anomaly_probabilities.md')
-rw-r--r--  health/guides/anomalies/anomalies_anomaly_probabilities.md  30
1 files changed, 30 insertions, 0 deletions
diff --git a/health/guides/anomalies/anomalies_anomaly_probabilities.md b/health/guides/anomalies/anomalies_anomaly_probabilities.md
new file mode 100644
index 000000000..cea04a43e
--- /dev/null
+++ b/health/guides/anomalies/anomalies_anomaly_probabilities.md
@@ -0,0 +1,30 @@
+### Understand the alert
+
+This alert, `anomalies_anomaly_probabilities`, is generated by the Netdata agent when the average anomaly probability over the last 2 minutes reaches 50%. The anomaly probability is a value calculated by Netdata's machine learning (ML) component, which aims to detect unusual events or behavior in system metrics.
+
+### What is anomaly probability?
+
+Anomaly probability is a percentage calculated by Netdata's ML feature that represents how likely it is that an observed metric value is an anomaly. A higher anomaly probability means a higher chance that the system is deviating from its historical patterns or expected behavior.
+
+### What does an average anomaly probability of 50 mean?
+
+An average anomaly probability of 50 indicates that there may be unusual events, metrics, or behavior in your monitored system. This does not necessarily point to a problem, but it does flag deviations in the system metrics that are worth investigating.
+
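+To see the raw values behind the alert, you can query the Netdata agent's REST API. The request below is a minimal sketch: the chart name `anomalies_local.probability`, the host, and the port are assumptions, so adjust them to match the chart this alert is attached to on your node.
+
+```bash
+# Average anomaly probability over the last 2 minutes (the window the alert evaluates).
+# Chart name, host, and port are assumptions; adjust to your setup.
+curl -s "http://localhost:19999/api/v1/data?chart=anomalies_local.probability&after=-120&points=1&group=average&format=json"
+```
+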
+### Troubleshoot the alert
+
+1. Investigate the unusual events or behavior
+
+   The first step is to identify the metric or group of metrics that is causing the alert. Look for metrics, or combinations of metrics, whose values deviate significantly from their historical patterns.
+
+2. Check system performance and resource usage
+
+   Use the Overview and Anomalies tabs in the Netdata dashboard to explore which metrics could be contributing to the anomalies and to check overall system performance and resource usage.
+
+3. Inspect system logs
+
+   System logs can provide valuable information about unusual events or behavior. Use tools like `journalctl`, `dmesg`, or `tail` to look for error messages, warnings, or critical events that might be related to the anomaly; example commands are sketched after this list.
+
+4. Review the alert settings
+
+   In some cases, the alert may be caused by overly strict or sensitive settings, leading to false positives. Review the alert's configuration and consider adjusting the anomaly probability threshold if necessary; a configuration sketch follows this list.
+
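+For step 3, the commands below are a minimal sketch of how to scan recent system logs for errors and warnings; the log file path used with `tail` is an assumption and varies by distribution.
+
+```bash
+# Recent warning-and-above messages from the systemd journal.
+journalctl --since "10 minutes ago" -p warning
+
+# Kernel ring buffer, limited to error and warning messages.
+dmesg --level=err,warn
+
+# Tail the main system log; the path differs per distribution
+# (e.g. /var/log/messages on RHEL-based systems).
+tail -n 100 /var/log/syslog
+```
+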
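+For step 4, alert thresholds are changed through Netdata's health configuration. The sketch below assumes the alert is defined in `health.d/anomalies.conf` and that the Netdata configuration directory is `/etc/netdata`; both can differ per installation.
+
+```bash
+# Open the health configuration file that defines this alert.
+# The file name is an assumption; check where the alert is defined on your node.
+cd /etc/netdata
+sudo ./edit-config health.d/anomalies.conf
+
+# After adjusting the warning/critical thresholds, reload the health
+# configuration without restarting the agent.
+sudo netdatacli reload-health
+```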