Diffstat (limited to 'docs/guides')
-rw-r--r--  docs/guides/deploy/ansible.md                                | 18
-rw-r--r--  docs/guides/monitor-cockroachdb.md                           |  2
-rw-r--r--  docs/guides/monitor/anomaly-detection.md                     |  4
-rw-r--r--  docs/guides/monitor/kubernetes-k8s-netdata.md                |  8
-rw-r--r--  docs/guides/monitor/lamp-stack.md                            |  2
-rw-r--r--  docs/guides/monitor/statsd.md                                |  7
-rw-r--r--  docs/guides/python-collector.md                              | 12
-rw-r--r--  docs/guides/step-by-step/step-00.md                          |  1
-rw-r--r--  docs/guides/step-by-step/step-03.md                          |  4
-rw-r--r--  docs/guides/step-by-step/step-05.md                          |  7
-rw-r--r--  docs/guides/troubleshoot/monitor-debug-applications-ebpf.md  |  4
-rw-r--r--  docs/guides/using-host-labels.md                             |  2
12 files changed, 39 insertions, 32 deletions
diff --git a/docs/guides/deploy/ansible.md b/docs/guides/deploy/ansible.md
index 8298fd00..f7bf514e 100644
--- a/docs/guides/deploy/ansible.md
+++ b/docs/guides/deploy/ansible.md
@@ -7,11 +7,11 @@ custom_edit_url: https://github.com/netdata/netdata/edit/master/docs/guides/depl
# Deploy Netdata with Ansible
-Netdata's [one-line kickstart](https://learn.netdata.cloud/docs/get) is zero-configuration, highly adaptable, and
-compatible with tons of different operating systems and Linux distributions. You can use it on bare metal, VMs,
-containers, and everything in-between.
+Netdata's [one-line kickstart](/docs/get-started.mdx) is zero-configuration, highly adaptable, and compatible with tons
+of different operating systems and Linux distributions. You can use it on bare metal, VMs, containers, and everything
+in-between.
-But what if you're trying to bootstrap an infrastructure monitoring solution as quickly as possible. What if you need to
+But what if you're trying to bootstrap an infrastructure monitoring solution as quickly as possible? What if you need to
deploy Netdata across an entire infrastructure with many nodes? What if you want to make this deployment reliable,
repeatable, and idempotent? What if you want to write and deploy your infrastructure or cloud monitoring system like
code?
@@ -22,7 +22,7 @@ those operations over standard and secure SSH connections. There's no agent to i
have to worry about is your application and your monitoring software.
Ansible has some competition from the likes of [Puppet](https://puppet.com/) or [Chef](https://www.chef.io/), but the
-most valuable feature about Ansible is that every is **idempotent**. From the [Ansible
+most valuable feature of Ansible is that it's **idempotent**. From the [Ansible
glossary](https://docs.ansible.com/ansible/latest/reference_appendices/glossary.html)
> An operation is idempotent if the result of performing it once is exactly the same as the result of performing it
@@ -33,7 +33,7 @@ operate. When you deploy Netdata with Ansible, you're also deploying _monitoring
In this guide, we'll walk through the process of using an [Ansible
playbook](https://github.com/netdata/community/tree/main/netdata-agent-deployment/ansible-quickstart) to automatically
-deploy the Netdata Agent to any number of distributed nodes, manage the configuration of each node, and claim them to
+deploy the Netdata Agent to any number of distributed nodes, manage the configuration of each node, and connect them to
your Netdata Cloud account. You'll go from some unmonitored nodes to an infrastructure monitoring solution in a matter of
minutes.
@@ -98,7 +98,7 @@ two different SSH keys supplied by AWS.
### Edit the `vars/main.yml` file
-In order to claim your node(s) to your Space in Netdata Cloud, and see all their metrics in real-time in [composite
+In order to connect your node(s) to your Space in Netdata Cloud, and see all their metrics in real-time in [composite
charts](/docs/visualize/overview-infrastructure.md) or perform [Metric
Correlations](https://learn.netdata.cloud/docs/cloud/insights/metric-correlations), you need to set the `claim_token`
and `claim_rooms` variables.
@@ -120,7 +120,7 @@ claim_rooms: XXXXX
Change the `dbengine_multihost_disk_space` if you want to change the metrics retention policy by allocating more or less
disk space for storing metrics. The default is 2048 MiB, or 2 GiB.
-Because we're claiming this node to Netdata Cloud, and will view its dashboards there instead of via the IP address or
+Because we're connecting this node to Netdata Cloud, and will view its dashboards there instead of via the IP address or
hostname of the node, the playbook disables that local dashboard by setting `web_mode` to `none`. This gives a small
security boost by not allowing any unwanted access to the local dashboard.
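For reference, here is a minimal sketch of how the variables discussed above could look in `vars/main.yml`. The keys (`claim_token`, `claim_rooms`, `dbengine_multihost_disk_space`, `web_mode`) come from the text above; the placeholder values, comments, and the assumption that they all live in this one file are illustrative only:

```yaml
# vars/main.yml — illustrative sketch, not the playbook's literal file
claim_token: XXXXX                      # token from your Netdata Cloud Space
claim_rooms: XXXXX                      # one or more War Room IDs
dbengine_multihost_disk_space: 2048     # metrics retention, in MiB (2 GiB by default)
web_mode: none                          # disable the local dashboard
```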
@@ -147,7 +147,7 @@ Next, Ansible makes changes to each node according to the `tasks` defined in the
[returns](https://docs.ansible.com/ansible/latest/reference_appendices/common_return_values.html#changed) whether each
task resulted in a change, a failure, or was skipped entirely.
-The task to install Netdata will take a few minutes per node, so be patient! Once the playbook reaches the claiming
+The task to install Netdata will take a few minutes per node, so be patient! Once the playbook reaches the Netdata Cloud connection
task, your nodes start populating your Space in Netdata Cloud.
## What's next?
diff --git a/docs/guides/monitor-cockroachdb.md b/docs/guides/monitor-cockroachdb.md
index 0ff9f3c7..0307381e 100644
--- a/docs/guides/monitor-cockroachdb.md
+++ b/docs/guides/monitor-cockroachdb.md
@@ -13,7 +13,7 @@ maximum granularity using Netdata. Collect more than 50 unique metrics and put t
designed for better visual anomaly detection.
Netdata itself uses CockroachDB as part of its Netdata Cloud infrastructure, so we're happy to introduce this new
-collector and help others get started with it straightaway.
+collector and help others get started with it straight away.
Let's dive in and walk through the process of monitoring CockroachDB metrics with Netdata.
diff --git a/docs/guides/monitor/anomaly-detection.md b/docs/guides/monitor/anomaly-detection.md
index f680f5f2..2d8b6d1d 100644
--- a/docs/guides/monitor/anomaly-detection.md
+++ b/docs/guides/monitor/anomaly-detection.md
@@ -23,7 +23,7 @@ library](https://github.com/yzhao062/pyod/tree/master), which periodically runs
quantify how anomalous certain charts are.
All these metrics and alarms are available for centralized monitoring in [Netdata Cloud](https://app.netdata.cloud). If
-you choose to sign up for Netdata Cloud and [claim your nodes](/claim/README.md), you will have the ability to run
+you choose to sign up for Netdata Cloud and [connect your nodes](/claim/README.md), you will have the ability to run
tailored anomaly detection on every node in your infrastructure, regardless of its purpose or workload.
In this guide, you'll learn how to set up the anomalies collector to instantly detect anomalies in an Nginx web server
@@ -123,7 +123,7 @@ configure the collector to monitor charts from the
log](https://learn.netdata.cloud/docs/agent/collectors/go.d.plugin/modules/weblog) collectors.
`charts_regex` allows for some basic regex, such as wildcards (`*`) to match all contexts with a certain pattern. For
-example, `system\..*` matches with any chart wit ha context that begins with `system.`, and ends in any number of other
+example, `system\..*` matches any chart with a context that begins with `system.`, and ends in any number of other
characters (`.*`). Note the escape character (`\`) around the first period to capture a period character exactly, and
not any character.
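To see that in context, here is a hedged excerpt of how the collector configuration might combine several such patterns; the `charts_regex` key matches the text above, but the `nginx_local` context used for the Nginx charts is an assumption for illustration:

```yaml
# python.d/anomalies.conf — illustrative excerpt
charts_regex: 'system\..*|nginx_local\..*'
```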
diff --git a/docs/guides/monitor/kubernetes-k8s-netdata.md b/docs/guides/monitor/kubernetes-k8s-netdata.md
index c5cb2c1b..5d4886e6 100644
--- a/docs/guides/monitor/kubernetes-k8s-netdata.md
+++ b/docs/guides/monitor/kubernetes-k8s-netdata.md
@@ -45,9 +45,9 @@ To follow this tutorial, you need:
- A free Netdata Cloud account. [Sign up](https://app.netdata.cloud/sign-up?cloudRoute=/spaces) if you don't have one
already.
-- A working cluster running Kubernetes v1.9 or newer, with a Netdata deployment and claimed parent/child nodes. See
+- A working cluster running Kubernetes v1.9 or newer, with a Netdata deployment and connected parent/child nodes. See
our [Kubernetes deployment process](/packaging/installer/methods/kubernetes.md) for details on deployment and
- claiming.
+ connecting to Cloud.
- The [`kubectl`](https://kubernetes.io/docs/reference/kubectl/overview/) command line tool, within [one minor version
difference](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin) of your cluster, on an
administrative system.
@@ -98,10 +98,10 @@ robot-shop web-8bb887476-lkcjx 1/1 Running 0 14m
## Explore Netdata's Kubernetes monitoring charts
The Netdata Helm chart deploys and enables everything you need for monitoring Kubernetes on every layer. Once you deploy
-Netdata and claim your cluster's nodes, you're ready to check out the visualizations **with zero configuration**.
+Netdata and connect your cluster's nodes, you're ready to check out the visualizations **with zero configuration**.
To get started, [sign in](https://app.netdata.cloud/sign-in?cloudRoute=/spaces) to your Netdata Cloud account. Head over
-to the War Room you claimed your cluster to, if not **General**.
+to the War Room you connected your cluster to, if not **General**.
Netdata Cloud is already visualizing your Kubernetes metrics, streamed in real-time from each node, in the
[Overview](https://learn.netdata.cloud/docs/cloud/visualize/overview):
diff --git a/docs/guides/monitor/lamp-stack.md b/docs/guides/monitor/lamp-stack.md
index 95aa03f0..38b9d0be 100644
--- a/docs/guides/monitor/lamp-stack.md
+++ b/docs/guides/monitor/lamp-stack.md
@@ -167,7 +167,7 @@ If the Netdata Agent isn't already open in your browser, open a new tab and navi
> If you [signed up](https://app.netdata.cloud/sign-up?cloudRoute=/spaces) for Netdata Cloud earlier, you can also view
> the exact same LAMP stack metrics there, plus additional features, like drag-and-drop custom dashboards. Be sure to
-> [claim your node](/claim/README.md) to start streaming metrics to your browser through Netdata Cloud.
+> [connect your node](/claim/README.md) to start streaming metrics to your browser through Netdata Cloud.
Netdata automatically organizes all metrics and charts onto a single page for easy navigation. Peek at gauges to see
overall system performance, then scroll down to see more. Click-and-drag with your mouse to pan _all_ charts back and
diff --git a/docs/guides/monitor/statsd.md b/docs/guides/monitor/statsd.md
index 120715b1..e4f04c57 100644
--- a/docs/guides/monitor/statsd.md
+++ b/docs/guides/monitor/statsd.md
@@ -22,14 +22,15 @@ In general, the process for creating a StatsD collector can be summarized in 2 s
- Run an experiment by sending StatsD metrics to Netdata, without any prior configuration. This will create a chart per metric (called private charts) and will help you verify that everything works as expected from the application side of things.
- Make sure to reload the dashboard tab **after** you start sending data to Netdata.
-- Create a configuration file for your app using [edit-config](https://learn.netdata.cloud/guides/step-by-step/step-04): `sudo ./edit-config statsd.d/myapp.conf`
+- Create a configuration file for your app using [edit-config](/docs/configure/nodes.md): `sudo ./edit-config
+ statsd.d/myapp.conf`
- Each app will have its own section in the right-hand menu.
Now, let's see the above process in detail.
## Prerequisites
-- A node with the [Netdata Agent](https://learn.netdata.cloud/docs/get#install-the-netdata-agent) installed.
+- A node with the [Netdata Agent](/docs/get-started.mdx) installed.
- An application to instrument. For this guide, that will be [k6](https://k6.io/docs/getting-started/installation).
## Understanding the metrics
@@ -110,7 +111,7 @@ Find more details about family and context in our [documentation](/web/README.md
Now, having decided on how we are going to group the charts, we need to define how we are going to group metrics into different charts. This is particularly important, since we decide:
- What metrics **not** to show, since they are not useful for our use-case.
-- What metrics to consolidate into the same charts, so as to reduce noice and increase visual correlation.
+- What metrics to consolidate into the same charts, so as to reduce noise and increase visual correlation.
The dimension option has this syntax: `dimension = [pattern] METRIC NAME TYPE MULTIPLIER DIVIDER OPTIONS`
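As a concrete illustration of that syntax, here is a hedged sketch of one chart section from a `statsd.d/myapp.conf`; the k6 metric names, chart options, and values are assumptions used only to show where each field of the `dimension` line goes:

```conf
# statsd.d/myapp.conf — illustrative sketch of a synthetic chart
[http_requests]
    name = http_requests
    title = HTTP requests
    family = http
    context = k6.http_requests
    units = requests/s
    type = line
    # dimension = [pattern] METRIC NAME TYPE MULTIPLIER DIVIDER OPTIONS
    dimension = k6.http_reqs requests last 1 1
    dimension = pattern k6.http_req_duration* duration average 1 1
```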
diff --git a/docs/guides/python-collector.md b/docs/guides/python-collector.md
index 0478bffe..b8facd9f 100644
--- a/docs/guides/python-collector.md
+++ b/docs/guides/python-collector.md
@@ -24,7 +24,7 @@ prebuilt method for collecting your required metric data.
In this tutorial, you'll learn how to leverage the [Python programming language](https://www.python.org/) to build a
custom data collector for the Netdata Agent. Follow along with your own dataset, using the techniques and best practices
-covered here, or use the included examples for collecting and organizing eithre random or weather data.
+covered here, or use the included examples for collecting and organizing either random or weather data.
## What you need to get started
@@ -48,7 +48,7 @@ The basic elements of a Netdata collector are:
- `ORDER[]`: A list containing the charts to be displayed.
- `CHARTS{}`: A dictionary containing the details for the charts to be displayed.
- `data{}`: A dictionary containing the values to be displayed.
-- `get_data()`: The basic function of the plugin which will retrun to Netdata the correct values.
+- `get_data()`: The basic function of the plugin which will return to Netdata the correct values.
Let's walk through these jobs and elements as independent elements first, then apply them to example Python code.
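Before going element by element, here is a hedged, condensed sketch of how those pieces typically sit together in a `python.d` collector file; the chart and dimension names are placeholders, and real collectors differ depending on the framework class they inherit:

```python
# example.chart.py — illustrative sketch of the structure described above
import random

from bases.FrameworkServices.SimpleService import SimpleService

ORDER = [
    'random',  # charts, in the order they should be displayed
]

CHARTS = {
    'random': {
        'options': [None, 'A random number', 'value', 'random', 'example.random', 'line'],
        'lines': [
            ['random1'],  # one dimension belonging to this chart
        ]
    }
}


class Service(SimpleService):
    def __init__(self, configuration=None, name=None):
        SimpleService.__init__(self, configuration=configuration, name=name)
        self.order = ORDER
        self.definitions = CHARTS

    def get_data(self):
        # Return a {dimension: value} dict; every dimension should belong to a chart.
        data = dict()
        data['random1'] = random.randint(0, 100)
        return data
```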
@@ -138,7 +138,7 @@ correct values.
The `python.d` plugin has a number of framework classes that can be used to speed up the development of your python
collector. Your class can inherit one of these framework classes, which have preconfigured methods.
-For example, the snippet bellow is from the [RabbitMQ
+For example, the snippet below is from the [RabbitMQ
collector](https://github.com/netdata/netdata/blob/91f3268e9615edd393bd43de4ad8068111024cc9/collectors/python.d.plugin/rabbitmq/rabbitmq.chart.py#L273).
This collector uses an HTTP endpoint and uses the `UrlService` framework class, which only needs to define an HTTP
endpoint for data collection.
@@ -298,7 +298,7 @@ class Service(SimpleService):
def get_data(self):
#The data dict is basically all the values to be represented
# The entries are in the format: { "dimension": value}
- #And each "dimension" shoudl belong to a chart.
+ #And each "dimension" should belong to a chart.
data = dict()
self.populate_data()
@@ -356,7 +356,7 @@ chart:
Next, time to add one more chart that visualizes the average, minimum, and maximum temperature values.
Add a new entry in the `CHARTS` dictionary with the definition for the new chart. Since you want three values
-represented in this this chart, add three dimensions. You shoudl also use the same `FAMILY` value in the charts (`TEMP`)
+represented in this chart, add three dimensions. You should also use the same `FAMILY` value in the charts (`TEMP`)
so that those two charts are grouped together.
```python
@@ -418,7 +418,7 @@ configuration in [YAML](https://www.tutorialspoint.com/yaml/yaml_basics.htm) for
- Create a configuration file in the same directory as the `<plugin_name>.chart.py`. Name it `<plugin_name>.conf`.
- Define a `job`, which is an instance of the collector. It is useful when you want to collect data from different
sources with different attributes. For example, we could gather data from 2 different weather stations, which use
- different temperature measures: Fahrenheit and Celcius.
+ different temperature measures: Fahrenheit and Celsius.
- You can define many different jobs with the same name, but with different attributes. Netdata will try each job
serially and will stop at the first job that returns data. If multiple jobs have the same name, only one of them can
run. This enables you to define different "ways" to fetch data from a particular data source so that the collector has
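To make the job idea concrete, here is a hedged sketch of a `<plugin_name>.conf` defining two jobs with the same `name`; the `station` and `units` keys are hypothetical collector options, not taken from this diff:

```yaml
# weather.conf — illustrative sketch of two same-named jobs
station_local:
    name: 'my_station'
    station: 'https://example.com/station-1'   # hypothetical endpoint
    units: 'celsius'

station_backup:
    name: 'my_station'                         # same name: only the first job that returns data runs
    station: 'https://example.com/station-2'
    units: 'fahrenheit'
```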
diff --git a/docs/guides/step-by-step/step-00.md b/docs/guides/step-by-step/step-00.md
index 79436664..10657191 100644
--- a/docs/guides/step-by-step/step-00.md
+++ b/docs/guides/step-by-step/step-00.md
@@ -32,7 +32,6 @@ Click on the **issues** tab to see all the conversations we're having with Netda
previously-written advice for your specific problem, and if you don't see any results, hit the **New issue** button to
send us a question.
-Or, if that's too complicated, feel free to send this guide's author [an email](mailto:joel@netdata.cloud).
## Before we get started
diff --git a/docs/guides/step-by-step/step-03.md b/docs/guides/step-by-step/step-03.md
index 2319adb4..a2f37bee 100644
--- a/docs/guides/step-by-step/step-03.md
+++ b/docs/guides/step-by-step/step-03.md
@@ -43,7 +43,7 @@ features, new collectors for more applications, and improved UI, so will Cloud.
## Get started with Netdata Cloud
-Signing in, onboarding, and claiming your first nodes only takes a few minutes, and we have a [Get started with
+Signing in, onboarding, and connecting your first nodes only takes a few minutes, and we have a [Get started with
Cloud](https://learn.netdata.cloud/docs/cloud/get-started) guide to help you walk through every step.
Or, if you're feeling confident, dive right in.
@@ -82,7 +82,7 @@ nodes](https://user-images.githubusercontent.com/1153921/80831018-e158ac80-8b9e-
## What's next?
-Now that you have a Netdata Cloud account with a claimed node (or a few!) and can navigate between your dashboards with
+Now that you have a Netdata Cloud account with a connected node (or a few!) and can navigate between your dashboards with
Visited nodes, it's time to learn more about how you can configure Netdata to your liking. From there, you'll be able to
customize your Netdata experience to your exact infrastructure and the information you need.
diff --git a/docs/guides/step-by-step/step-05.md b/docs/guides/step-by-step/step-05.md
index 30ab329c..8a4d084e 100644
--- a/docs/guides/step-by-step/step-05.md
+++ b/docs/guides/step-by-step/step-05.md
@@ -110,6 +110,13 @@ bother you with notifications.
The best way to understand how health entities work is building your own and experimenting with the options. To start,
let's build a health entity that triggers an alarm when system RAM usage goes above 80%.
+We will first create a new file inside of the `health.d/` directory. We'll name our file
+`example.conf` for now.
+
+```bash
+./edit-config health.d/example.conf
+```
+
The first line in a health entity will be `alarm:`. This is how you name your entity. You can give it any name you
choose, but the only symbols allowed are `.` and `_`. Let's call the alarm `ram_usage`.
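For orientation, here is a hedged sketch of what the finished entity in `health.d/example.conf` could end up looking like once the remaining lines described in this step are added; the `lookup` expression and the exact thresholds are illustrative assumptions:

```conf
# health.d/example.conf — illustrative sketch of the finished entity
 alarm: ram_usage
    on: system.ram
lookup: average -1m percentage of used
 units: %
 every: 1m
  warn: $this > 80
  crit: $this > 90
  info: The percentage of RAM being used by the system.
```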
diff --git a/docs/guides/troubleshoot/monitor-debug-applications-ebpf.md b/docs/guides/troubleshoot/monitor-debug-applications-ebpf.md
index d6c4b069..688e7d29 100644
--- a/docs/guides/troubleshoot/monitor-debug-applications-ebpf.md
+++ b/docs/guides/troubleshoot/monitor-debug-applications-ebpf.md
@@ -236,8 +236,8 @@ same application on multiple systems and want to correlate how it performs on ea
findings with someone else on your team.
If you don't already have a Netdata Cloud account, go [sign in](https://app.netdata.cloud) and get started for free.
-Read the [get started with Cloud guide](https://learn.netdata.cloud/docs/cloud/get-started) for a walkthrough of node
-claiming and other fundamentals.
+Read the [get started with Cloud guide](https://learn.netdata.cloud/docs/cloud/get-started) for a walkthrough of
+connecting nodes to Cloud and other fundamentals.
Once you've added one or more nodes to a Space in Netdata Cloud, you can see aggregated eBPF metrics in the [Overview
dashboard](/docs/visualize/overview-infrastructure.md) under the same **Applications** or **eBPF** sections that you
diff --git a/docs/guides/using-host-labels.md b/docs/guides/using-host-labels.md
index 6d4af2e5..79558dd1 100644
--- a/docs/guides/using-host-labels.md
+++ b/docs/guides/using-host-labels.md
@@ -27,7 +27,7 @@ sudo ./edit-config netdata.conf
```
Create a new `[host labels]` section defining a new host label and its value for the system in question. Make sure not
-to violate any of the [host label naming rules](/docs/configuration-guide.md#netdata-labels).
+to violate any of the [host label naming rules](/docs/configure/common-changes.md#organize-nodes-with-host-labels).
```conf
[host labels]