author    Daniel Baumann <daniel.baumann@progress-linux.org>  2024-11-25 17:33:56 +0000
committer Daniel Baumann <daniel.baumann@progress-linux.org>  2024-11-25 17:34:10 +0000
commit    83ba6762cc43d9db581b979bb5e3445669e46cc2 (patch)
tree      2e69833b43f791ed253a7a20318b767ebe56cdb8 /src/collectors/ebpf.plugin
parent    Releasing debian version 1.47.5-1. (diff)
Merging upstream version 2.0.3+dfsg (Closes: #923993, #1042533, #1045145).
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'src/collectors/ebpf.plugin')
-rw-r--r--  src/collectors/ebpf.plugin/README.md                               | 468
-rw-r--r--  src/collectors/ebpf.plugin/ebpf.c                                  |  59
-rw-r--r--  src/collectors/ebpf.plugin/ebpf_apps.c                             |  11
-rw-r--r--  src/collectors/ebpf.plugin/ebpf_apps.h                             |   2
-rw-r--r--  src/collectors/ebpf.plugin/ebpf_cachestat.c                        |  24
-rw-r--r--  src/collectors/ebpf.plugin/ebpf_cgroup.c                           |   5
-rw-r--r--  src/collectors/ebpf.plugin/ebpf_dcstat.c                           |  24
-rw-r--r--  src/collectors/ebpf.plugin/ebpf_disk.c                             |  10
-rw-r--r--  src/collectors/ebpf.plugin/ebpf_fd.c                               |  25
-rw-r--r--  src/collectors/ebpf.plugin/ebpf_filesystem.c                       |  12
-rw-r--r--  src/collectors/ebpf.plugin/ebpf_functions.c                        |   6
-rw-r--r--  src/collectors/ebpf.plugin/ebpf_hardirq.c                          |  12
-rw-r--r--  src/collectors/ebpf.plugin/ebpf_mdflush.c                          |  12
-rw-r--r--  src/collectors/ebpf.plugin/ebpf_mount.c                            |  10
-rw-r--r--  src/collectors/ebpf.plugin/ebpf_oomkill.c                          |  12
-rw-r--r--  src/collectors/ebpf.plugin/ebpf_process.c                          |  14
-rw-r--r--  src/collectors/ebpf.plugin/ebpf_shm.c                              |  27
-rw-r--r--  src/collectors/ebpf.plugin/ebpf_socket.c                           |  37
-rw-r--r--  src/collectors/ebpf.plugin/ebpf_socket.h                           |   4
-rw-r--r--  src/collectors/ebpf.plugin/ebpf_softirq.c                          |  10
-rw-r--r--  src/collectors/ebpf.plugin/ebpf_swap.c                             |  27
-rw-r--r--  src/collectors/ebpf.plugin/ebpf_sync.c                             |  12
-rw-r--r--  src/collectors/ebpf.plugin/ebpf_vfs.c                              |  20
-rw-r--r--  src/collectors/ebpf.plugin/integrations/ebpf_cachestat.md          |   4
-rw-r--r--  src/collectors/ebpf.plugin/integrations/ebpf_dcstat.md             |   4
-rw-r--r--  src/collectors/ebpf.plugin/integrations/ebpf_disk.md               |   4
-rw-r--r--  src/collectors/ebpf.plugin/integrations/ebpf_filedescriptor.md    |   4
-rw-r--r--  src/collectors/ebpf.plugin/integrations/ebpf_filesystem.md         |   4
-rw-r--r--  src/collectors/ebpf.plugin/integrations/ebpf_hardirq.md            |   4
-rw-r--r--  src/collectors/ebpf.plugin/integrations/ebpf_mdflush.md            |   4
-rw-r--r--  src/collectors/ebpf.plugin/integrations/ebpf_mount.md              |   4
-rw-r--r--  src/collectors/ebpf.plugin/integrations/ebpf_oomkill.md            |   4
-rw-r--r--  src/collectors/ebpf.plugin/integrations/ebpf_processes.md          |   4
-rw-r--r--  src/collectors/ebpf.plugin/integrations/ebpf_shm.md                |   4
-rw-r--r--  src/collectors/ebpf.plugin/integrations/ebpf_socket.md             |   4
-rw-r--r--  src/collectors/ebpf.plugin/integrations/ebpf_softirq.md            |   4
-rw-r--r--  src/collectors/ebpf.plugin/integrations/ebpf_swap.md               |   4
-rw-r--r--  src/collectors/ebpf.plugin/integrations/ebpf_sync.md               |   4
-rw-r--r--  src/collectors/ebpf.plugin/integrations/ebpf_vfs.md                |   4
39 files changed, 407 insertions, 500 deletions
diff --git a/src/collectors/ebpf.plugin/README.md b/src/collectors/ebpf.plugin/README.md
index e9243966b..1246fec04 100644
--- a/src/collectors/ebpf.plugin/README.md
+++ b/src/collectors/ebpf.plugin/README.md
@@ -1,16 +1,6 @@
-<!--
-title: "Kernel traces/metrics (eBPF) monitoring with Netdata"
-description: "Use Netdata's extended Berkeley Packet Filter (eBPF) collector to monitor kernel-level metrics about yourcomplex applications with per-second granularity."
-custom_edit_url: "https://github.com/netdata/netdata/edit/master/src/collectors/ebpf.plugin/README.md"
-sidebar_label: "Kernel traces/metrics (eBPF)"
-learn_status: "Published"
-learn_topic_type: "References"
-learn_rel_path: "Integrations/Monitor/System metrics"
--->
-
# Kernel traces/metrics (eBPF) collector
-The Netdata Agent provides many [eBPF](https://ebpf.io/what-is-ebpf/) programs to help you troubleshoot and debug how applications interact with the Linux kernel. The `ebpf.plugin` uses [tracepoints, trampoline, and2 kprobes](#how-netdata-collects-data-using-probes-and-tracepoints) to collect a wide array of high value data about the host that would otherwise be impossible to capture.
+The Netdata Agent provides many [eBPF](https://ebpf.io/what-is-ebpf/) programs to help you troubleshoot and debug how applications interact with the Linux kernel. The `ebpf.plugin` uses [tracepoints, trampolines, and kprobes](#how-netdata-collects-data-using-probes-and-tracepoints) to collect a wide array of high-value data about the host that would otherwise be impossible to capture.
> ❗ eBPF monitoring only works on Linux systems and with specific Linux kernels, including all kernels newer than `4.11.0`, and all kernels on CentOS 7.6 or later. For kernels older than `4.11.0`, improved support is in active development.
@@ -26,10 +16,10 @@ For hands-on configuration and troubleshooting tips see our [tutorial on trouble
Netdata uses the following features from the Linux kernel to run eBPF programs:
-- Tracepoints are hooks to call specific functions. Tracepoints are more stable than `kprobes` and are preferred when
+- Tracepoints are hooks to call specific functions. Tracepoints are more stable than `kprobes` and are preferred when
both options are available.
-- Trampolines are bridges between kernel functions, and BPF programs. Netdata uses them by default whenever available.
-- Kprobes and return probes (`kretprobe`): Probes can insert virtually into any kernel instruction. When eBPF runs in `entry` mode, it attaches only `kprobes` for internal functions monitoring calls and some arguments every time a function is called. The user can also change configuration to use [`return`](#global-configuration-options) mode, and this will allow users to monitor return from these functions and detect possible failures.
+- Trampolines are bridges between kernel functions and BPF programs. Netdata uses them by default whenever available.
+- Kprobes and return probes (`kretprobe`): Probes can be inserted into virtually any kernel instruction. When eBPF runs in `entry` mode, it attaches only `kprobes` for internal functions, monitoring calls and some arguments every time a function is called. The user can also change the configuration to use [`return`](#global-configuration-options) mode, which allows monitoring of the return of these functions to detect possible failures.
In each case, wherever a normal kprobe, kretprobe, or tracepoint would have run its hook function, an eBPF program is run instead, performing various collection logic before letting the kernel continue its normal control flow.
@@ -38,42 +28,45 @@ There are more methods to trigger eBPF programs, such as uprobes, but currently
## Configuring ebpf.plugin
The eBPF collector is installed and enabled by default on most new installations of the Agent.
-If your Agent is v1.22 or older, you may to enable the collector yourself.
+If your Agent is v1.22 or older, you may need to enable the collector yourself.
### Enable the eBPF collector
-To enable or disable the entire eBPF collector:
+To enable or disable the entire eBPF collector:
+
+1. Navigate to the [Netdata config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
-1. Navigate to the [Netdata config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
```bash
cd /etc/netdata
```
-2. Use the [`edit-config`](/docs/netdata-agent/configuration/README.md#edit-netdataconf) script to edit `netdata.conf`.
+2. Use the [`edit-config`](/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script to edit `netdata.conf`.
```bash
./edit-config netdata.conf
```
-3. Enable the collector by scrolling down to the `[plugins]` section. Uncomment the line `ebpf` (not
+3. Enable the collector by scrolling down to the `[plugins]` section. Uncomment the line `ebpf` (not
`ebpf_process`) and set it to `yes`.
- ```conf
+ ```text
[plugins]
ebpf = yes
```
### Configure the eBPF collector
-You can configure the eBPF collector's behavior to fine-tune which metrics you receive and [optimize performance]\(#performance opimization).
+You can configure the eBPF collector's behavior to fine-tune which metrics you receive and [optimize performance](#performance-opimization).
To edit the `ebpf.d.conf`:
-1. Navigate to the [Netdata config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
+1. Navigate to the [Netdata config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
+
```bash
cd /etc/netdata
```
-2. Use the [`edit-config`](/docs/netdata-agent/configuration/README.md#edit-netdataconf) script to edit [`ebpf.d.conf`](https://github.com/netdata/netdata/blob/master/src/collectors/ebpf.plugin/ebpf.d.conf).
+
+2. Use the [`edit-config`](/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script to edit [`ebpf.d.conf`](https://github.com/netdata/netdata/blob/master/src/collectors/ebpf.plugin/ebpf.d.conf).
```bash
./edit-config ebpf.d.conf
@@ -94,9 +87,9 @@ By default, this plugin uses the `entry` mode. Changing this mode can create sig
system, but also offer valuable information if you are developing or debugging software. The `ebpf load mode` option
accepts the following values:
-- `entry`: This is the default mode. In this mode, the eBPF collector only monitors calls for the functions described in
+- `entry`: This is the default mode. In this mode, the eBPF collector only monitors calls for the functions described in
the sections above, and does not show charts related to errors.
-- `return`: In the `return` mode, the eBPF collector monitors the same kernel functions as `entry`, but also creates new
+- `return`: In the `return` mode, the eBPF collector monitors the same kernel functions as `entry`, but also creates new
charts for the return of these functions, such as errors. Monitoring function returns can help in debugging software,
such as failing to close file descriptors or creating zombie processes.
@@ -108,7 +101,7 @@ interact with the Linux kernel.
If you want to enable `apps.plugin` integration, change the "apps" setting to "yes".
-```conf
+```text
[global]
apps = yes
```
@@ -122,7 +115,7 @@ interacts with the Linux kernel.
The integration with `cgroups.plugin` is disabled by default to avoid creating overhead on your system. If you want to
_enable_ the integration with `cgroups.plugin`, change the `cgroups` setting to `yes`.
-```conf
+```text
[global]
cgroups = yes
```
@@ -133,10 +126,7 @@ If you do not need to monitor specific metrics for your `cgroups`, you can enabl
#### Maps per Core
-When netdata is running on kernels newer than `4.6` users are allowed to modify how the `ebpf.plugin` creates maps (hash or
-array). When `maps per core` is defined as `yes`, plugin will create a map per core on host, on the other hand,
-when the value is set as `no` only one hash table will be created, this option will use less memory, but it also can
-increase overhead for processes.
+When netdata is running on kernels newer than `4.6`, users can modify how `ebpf.plugin` creates maps (hash or array). When `maps per core` is set to `yes`, the plugin creates one map per core on the host; when it is set to `no`, only one hash table is created. The single-table option uses less memory, but it can also increase overhead for processes.
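A minimal sketch of that trade-off in `ebpf.d.conf` (assuming the option is spelled `maps per core` under `[global]`, as described above):

```text
[global]
    maps per core = no
```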
#### Collect PID
@@ -146,10 +136,10 @@ process group for which it needs to plot data.
There are different ways to collect PID, and you can select the way `ebpf.plugin` collects data with the following
values:
-- `real parent`: This is the default mode. Collection will aggregate data for the real parent, the thread that creates
+- `real parent`: This is the default mode. Collection will aggregate data for the real parent, the thread that creates
child threads.
-- `parent`: Parent and real parent are the same when a process starts, but this value can be changed during run time.
-- `all`: This option will store all PIDs that run on the host. Note, this method can be expensive for the host,
+- `parent`: Parent and real parent are the same when a process starts, but this value can be changed during run time.
+- `all`: This option will store all PIDs that run on the host. Note, this method can be expensive for the host,
because more memory needs to be allocated and parsed.
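As a sketch, selecting the most expensive mode would look like the following in `ebpf.d.conf` (assuming the setting is named `collect pid` under `[global]`, matching this section's title):

```text
[global]
    collect pid = all
```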
The threads that have integration with other collectors have an internal clean up wherein they attach either a
@@ -174,97 +164,97 @@ Linux metrics:
> Note: The parenthetical accompanying each bulleted item provides the chart name.
-- mem
- - Number of processes killed due out of memory. (`oomkills`)
-- process
- - Number of processes created with `do_fork`. (`process_create`)
- - Number of threads created with `do_fork` or `clone (2)`, depending on your system's kernel
+- mem
+ - Number of processes killed due to out-of-memory conditions. (`oomkills`)
+- process
+ - Number of processes created with `do_fork`. (`process_create`)
+ - Number of threads created with `do_fork` or `clone (2)`, depending on your system's kernel
version. (`thread_create`)
- - Number of times that a process called `do_exit`. (`task_exit`)
- - Number of times that a process called `release_task`. (`task_close`)
- - Number of times that an error happened to create thread or process. (`task_error`)
-- swap
- - Number of calls to `swap_readpage`. (`swap_read_call`)
- - Number of calls to `swap_writepage`. (`swap_write_call`)
-- network
- - Number of outbound connections using TCP/IPv4. (`outbound_conn_ipv4`)
- - Number of outbound connections using TCP/IPv6. (`outbound_conn_ipv6`)
- - Number of bytes sent. (`total_bandwidth_sent`)
- - Number of bytes received. (`total_bandwidth_recv`)
- - Number of calls to `tcp_sendmsg`. (`bandwidth_tcp_send`)
- - Number of calls to `tcp_cleanup_rbuf`. (`bandwidth_tcp_recv`)
- - Number of calls to `tcp_retransmit_skb`. (`bandwidth_tcp_retransmit`)
- - Number of calls to `udp_sendmsg`. (`bandwidth_udp_send`)
- - Number of calls to `udp_recvmsg`. (`bandwidth_udp_recv`)
-- file access
- - Number of calls to open files. (`file_open`)
- - Number of calls to open files that returned errors. (`open_error`)
- - Number of files closed. (`file_closed`)
- - Number of calls to close files that returned errors. (`file_error_closed`)
-- vfs
- - Number of calls to `vfs_unlink`. (`file_deleted`)
- - Number of calls to `vfs_write`. (`vfs_write_call`)
- - Number of calls to write a file that returned errors. (`vfs_write_error`)
- - Number of calls to `vfs_read`. (`vfs_read_call`)
- - - Number of calls to read a file that returned errors. (`vfs_read_error`)
- - Number of bytes written with `vfs_write`. (`vfs_write_bytes`)
- - Number of bytes read with `vfs_read`. (`vfs_read_bytes`)
- - Number of calls to `vfs_fsync`. (`vfs_fsync`)
- - Number of calls to sync file that returned errors. (`vfs_fsync_error`)
- - Number of calls to `vfs_open`. (`vfs_open`)
- - Number of calls to open file that returned errors. (`vfs_open_error`)
- - Number of calls to `vfs_create`. (`vfs_create`)
- - Number of calls to open file that returned errors. (`vfs_create_error`)
-- page cache
- - Ratio of pages accessed. (`cachestat_ratio`)
- - Number of modified pages ("dirty"). (`cachestat_dirties`)
- - Number of accessed pages. (`cachestat_hits`)
- - Number of pages brought from disk. (`cachestat_misses`)
-- directory cache
- - Ratio of files available in directory cache. (`dc_hit_ratio`)
- - Number of files accessed. (`dc_reference`)
- - Number of files accessed that were not in cache. (`dc_not_cache`)
- - Number of files not found. (`dc_not_found`)
-- ipc shm
- - Number of calls to `shm_get`. (`shmget_call`)
- - Number of calls to `shm_at`. (`shmat_call`)
- - Number of calls to `shm_dt`. (`shmdt_call`)
- - Number of calls to `shm_ctl`. (`shmctl_call`)
+ - Number of times that a process called `do_exit`. (`task_exit`)
+ - Number of times that a process called `release_task`. (`task_close`)
+ - Number of times that an error happened while creating a thread or process. (`task_error`)
+- swap
+ - Number of calls to `swap_readpage`. (`swap_read_call`)
+ - Number of calls to `swap_writepage`. (`swap_write_call`)
+- network
+ - Number of outbound connections using TCP/IPv4. (`outbound_conn_ipv4`)
+ - Number of outbound connections using TCP/IPv6. (`outbound_conn_ipv6`)
+ - Number of bytes sent. (`total_bandwidth_sent`)
+ - Number of bytes received. (`total_bandwidth_recv`)
+ - Number of calls to `tcp_sendmsg`. (`bandwidth_tcp_send`)
+ - Number of calls to `tcp_cleanup_rbuf`. (`bandwidth_tcp_recv`)
+ - Number of calls to `tcp_retransmit_skb`. (`bandwidth_tcp_retransmit`)
+ - Number of calls to `udp_sendmsg`. (`bandwidth_udp_send`)
+ - Number of calls to `udp_recvmsg`. (`bandwidth_udp_recv`)
+- file access
+ - Number of calls to open files. (`file_open`)
+ - Number of calls to open files that returned errors. (`open_error`)
+ - Number of files closed. (`file_closed`)
+ - Number of calls to close files that returned errors. (`file_error_closed`)
+- vfs
+ - Number of calls to `vfs_unlink`. (`file_deleted`)
+ - Number of calls to `vfs_write`. (`vfs_write_call`)
+ - Number of calls to write a file that returned errors. (`vfs_write_error`)
+ - Number of calls to `vfs_read`. (`vfs_read_call`)
+ - Number of calls to read a file that returned errors. (`vfs_read_error`)
+ - Number of bytes written with `vfs_write`. (`vfs_write_bytes`)
+ - Number of bytes read with `vfs_read`. (`vfs_read_bytes`)
+ - Number of calls to `vfs_fsync`. (`vfs_fsync`)
+ - Number of calls to sync file that returned errors. (`vfs_fsync_error`)
+ - Number of calls to `vfs_open`. (`vfs_open`)
+ - Number of calls to open file that returned errors. (`vfs_open_error`)
+ - Number of calls to `vfs_create`. (`vfs_create`)
+ - Number of calls to create a file that returned errors. (`vfs_create_error`)
+- page cache
+ - Ratio of pages accessed. (`cachestat_ratio`)
+ - Number of modified pages ("dirty"). (`cachestat_dirties`)
+ - Number of accessed pages. (`cachestat_hits`)
+ - Number of pages brought from disk. (`cachestat_misses`)
+- directory cache
+ - Ratio of files available in directory cache. (`dc_hit_ratio`)
+ - Number of files accessed. (`dc_reference`)
+ - Number of files accessed that were not in cache. (`dc_not_cache`)
+ - Number of files not found. (`dc_not_found`)
+- ipc shm
+ - Number of calls to `shm_get`. (`shmget_call`)
+ - Number of calls to `shm_at`. (`shmat_call`)
+ - Number of calls to `shm_dt`. (`shmdt_call`)
+ - Number of calls to `shm_ctl`. (`shmctl_call`)
### `[ebpf programs]` configuration options
The eBPF collector enables and runs the following eBPF programs by default:
-- `cachestat`: Netdata's eBPF data collector creates charts about the memory page cache. When the integration with
+- `cachestat`: Netdata's eBPF data collector creates charts about the memory page cache. When the integration with
[`apps.plugin`](/src/collectors/apps.plugin/README.md) is enabled, this collector creates charts for the whole host _and_
for each application.
-- `fd` : This eBPF program creates charts that show information about calls to open files.
-- `mount`: This eBPF program creates charts that show calls to syscalls mount(2) and umount(2).
-- `shm`: This eBPF program creates charts that show calls to syscalls shmget(2), shmat(2), shmdt(2) and shmctl(2).
-- `process`: This eBPF program creates charts that show information about process life. When in `return` mode, it also
+- `fd`: This eBPF program creates charts that show information about calls to open files.
+- `mount`: This eBPF program creates charts that show calls to syscalls mount(2) and umount(2).
+- `shm`: This eBPF program creates charts that show calls to syscalls shmget(2), shmat(2), shmdt(2) and shmctl(2).
+- `process`: This eBPF program creates charts that show information about process life. When in `return` mode, it also
creates charts showing errors when these operations are executed.
-- `hardirq`: This eBPF program creates charts that show information about time spent servicing individual hardware
+- `hardirq`: This eBPF program creates charts that show information about time spent servicing individual hardware
interrupt requests (hard IRQs).
-- `softirq`: This eBPF program creates charts that show information about time spent servicing individual software
+- `softirq`: This eBPF program creates charts that show information about time spent servicing individual software
interrupt requests (soft IRQs).
-- `oomkill`: This eBPF program creates a chart that shows OOM kills for all applications recognized via
+- `oomkill`: This eBPF program creates a chart that shows OOM kills for all applications recognized via
the `apps.plugin` integration. Note that this program will show application charts regardless of whether apps
integration is turned on or off.
You can also enable the following eBPF programs:
-- `dcstat` : This eBPF program creates charts that show information about file access using directory cache. It appends
+- `dcstat`: This eBPF program creates charts that show information about file access using the directory cache. It appends
  `kprobes` for `lookup_fast()` and `d_lookup()` to identify whether files are inside the directory cache, outside it, or
  not found at all.
-- `disk` : This eBPF program creates charts that show information about disk latency independent of filesystem.
-- `filesystem` : This eBPF program creates charts that show information about some filesystem latency.
-- `swap` : This eBPF program creates charts that show information about swap access.
-- `mdflush`: This eBPF program creates charts that show information about
-- `sync`: Monitor calls to syscalls sync(2), fsync(2), fdatasync(2), syncfs(2), msync(2), and sync_file_range(2).
-- `socket`: This eBPF program creates charts with information about `TCP` and `UDP` functions, including the
+- `disk`: This eBPF program creates charts that show information about disk latency, independent of filesystem.
+- `filesystem`: This eBPF program creates charts that show information about some filesystem latency.
+- `swap`: This eBPF program creates charts that show information about swap access.
+- `mdflush`: This eBPF program creates charts that show information about multi-device software flushes.
+- `sync`: Monitor calls to syscalls sync(2), fsync(2), fdatasync(2), syncfs(2), msync(2), and sync_file_range(2).
+- `socket`: This eBPF program creates charts with information about `TCP` and `UDP` functions, including the
  bandwidth consumed by each.
-- `vfs`: This eBPF program creates charts that show information about VFS (Virtual File System) functions.
+- `vfs`: This eBPF program creates charts that show information about VFS (Virtual File System) functions.
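Optional programs are toggled in the `[ebpf programs]` section of `ebpf.d.conf`. A minimal sketch, assuming each thread is keyed by the name listed above:

```text
[ebpf programs]
    disk = yes
    swap = yes
    mdflush = no
```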
### Configuring eBPF threads
@@ -272,24 +262,26 @@ You can configure each thread of the eBPF data collector. This allows you to ove
To configure an eBPF thread:
-1. Navigate to the [Netdata config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
+1. Navigate to the [Netdata config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
+
```bash
cd /etc/netdata
```
-2. Use the [`edit-config`](/docs/netdata-agent/configuration/README.md#edit-netdataconf) script to edit a thread configuration file. The following configuration files are available:
- - `network.conf`: Configuration for the [`network` thread](#network-configuration). This config file overwrites the global options and also
+2. Use the [`edit-config`](/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script to edit a thread configuration file. The following configuration files are available:
+
+ - `network.conf`: Configuration for the [`network` thread](#network-configuration). This config file overwrites the global options and also
lets you specify which network the eBPF collector monitors.
- - `process.conf`: Configuration for the [`process` thread](#sync-configuration).
- - `cachestat.conf`: Configuration for the `cachestat` thread(#filesystem-configuration).
- - `dcstat.conf`: Configuration for the `dcstat` thread.
- - `disk.conf`: Configuration for the `disk` thread.
- - `fd.conf`: Configuration for the `file descriptor` thread.
- - `filesystem.conf`: Configuration for the `filesystem` thread.
- - `hardirq.conf`: Configuration for the `hardirq` thread.
- - `softirq.conf`: Configuration for the `softirq` thread.
- - `sync.conf`: Configuration for the `sync` thread.
- - `vfs.conf`: Configuration for the `vfs` thread.
+ - `process.conf`: Configuration for the [`process` thread](#sync-configuration).
+ - `cachestat.conf`: Configuration for the [`cachestat` thread](#filesystem-configuration).
+ - `dcstat.conf`: Configuration for the `dcstat` thread.
+ - `disk.conf`: Configuration for the `disk` thread.
+ - `fd.conf`: Configuration for the `file descriptor` thread.
+ - `filesystem.conf`: Configuration for the `filesystem` thread.
+ - `hardirq.conf`: Configuration for the `hardirq` thread.
+ - `softirq.conf`: Configuration for the `softirq` thread.
+ - `sync.conf`: Configuration for the `sync` thread.
+ - `vfs.conf`: Configuration for the `vfs` thread.
```bash
./edit-config FILE.conf
@@ -304,7 +296,7 @@ are divided in the following sections:
You can configure the information shown with function `ebpf_socket` using the settings in this section.
-```conf
+```text
[network connections]
enabled = yes
resolve hostname ips = no
@@ -324,13 +316,13 @@ and `145`.
The following options are available:
-- `enabled`: Disable network connections monitoring. This can affect directly some funcion output.
-- `resolve hostname ips`: Enable resolving IPs to hostnames. It is disabled by default because it can be too slow.
-- `resolve service names`: Convert destination ports into service names, for example, port `53` protocol `UDP` becomes `domain`.
+- `enabled`: Enable or disable network connections monitoring. This can directly affect some function output.
+- `resolve hostname ips`: Enable resolving IPs to hostnames. It is disabled by default because it can be too slow.
+- `resolve service names`: Convert destination ports into service names, for example, port `53` protocol `UDP` becomes `domain`.
All names are read from `/etc/services`.
-- `ports`: Define the destination ports for Netdata to monitor.
-- `hostnames`: The list of hostnames that can be resolved to an IP address.
-- `ips`: The IP or range of IPs that you want to monitor. You can use IPv4 or IPv6 addresses, use dashes to define a
+- `ports`: Define the destination ports for Netdata to monitor.
+- `hostnames`: The list of hostnames that can be resolved to an IP address.
+- `ips`: The IP or range of IPs that you want to monitor. You can use IPv4 or IPv6 addresses, use dashes to define a
range of IPs, or use CIDR values.
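For illustration, a hedged example combining these options (the ports, addresses, and hostname below are placeholder values, not recommendations):

```text
[network connections]
    ports = 80 443-445
    ips = 192.168.1.0/24
    hostnames = my-service.example.com
```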
By default the traffic table is created using the destination IPs and ports of the sockets. This can be
@@ -346,7 +338,7 @@ section.
For example, Netdata's default port (`19999`) is not listed in `/etc/services`. To associate that port with the Netdata
service in network connection charts, and thus see the name of the service instead of its port, define it:
-```conf
+```text
[service name]
19999 = Netdata
```
@@ -355,7 +347,7 @@ service in network connection charts, and thus see the name of the service inste
The sync configuration has specific options to disable monitoring for syscalls. All syscalls are monitored by default.
-```conf
+```text
[syscalls]
sync = yes
msync = yes
@@ -370,7 +362,7 @@ The sync configuration has specific options to disable monitoring for syscalls.
The filesystem configuration has specific options to disable monitoring for filesystems; by default, all filesystems are
monitored.
-```conf
+```text
[filesystem]
btrfsdist = yes
ext4dist = yes
@@ -408,19 +400,18 @@ You can run our helper script to determine whether your system can support eBPF
curl -sSL https://raw.githubusercontent.com/netdata/kernel-collector/master/tools/check-kernel-config.sh | sudo bash
```
-
If you see a warning about a missing kernel
configuration (`KPROBES KPROBES_ON_FTRACE HAVE_KPROBES BPF BPF_SYSCALL BPF_JIT`), you will need to recompile your kernel
to support this configuration. The process of recompiling Linux kernels varies based on your distribution and version.
Read the documentation for your system's distribution to learn more about the specific workflow for recompiling the
kernel, ensuring that you set all the necessary options:
-- [Ubuntu](https://wiki.ubuntu.com/Kernel/BuildYourOwnKernel)
-- [Debian](https://kernel-team.pages.debian.net/kernel-handbook/ch-common-tasks.html#s-common-official)
-- [Fedora](https://fedoraproject.org/wiki/Building_a_custom_kernel)
-- [CentOS](https://wiki.centos.org/HowTos/Custom_Kernel)
-- [Arch Linux](https://wiki.archlinux.org/index.php/Kernel/Traditional_compilation)
-- [Slackware](https://docs.slackware.com/howtos:slackware_admin:kernelbuilding)
+- [Ubuntu](https://wiki.ubuntu.com/Kernel/BuildYourOwnKernel)
+- [Debian](https://kernel-team.pages.debian.net/kernel-handbook/ch-common-tasks.html#s-common-official)
+- [Fedora](https://fedoraproject.org/wiki/Building_a_custom_kernel)
+- [CentOS](https://wiki.centos.org/HowTos/Custom_Kernel)
+- [Arch Linux](https://wiki.archlinux.org/index.php/Kernel/Traditional_compilation)
+- [Slackware](https://docs.slackware.com/howtos:slackware_admin:kernelbuilding)
### Mount `debugfs` and `tracefs`
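If these filesystems are not mounted automatically by your distribution, a typical way to mount them manually is (a sketch using the standard mount points):

```bash
sudo mount -t debugfs nodev /sys/kernel/debug
sudo mount -t tracefs nodev /sys/kernel/tracing
```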
@@ -455,12 +446,12 @@ Internally, the Linux kernel treats both processes and threads as `tasks`. To cr
system calls: `fork(2)`, `vfork(2)`, and `clone(2)`. To generate this chart, the eBPF
collector uses the following `tracepoints` and `kprobe`:
-- `sched/sched_process_fork`: Tracepoint called after a call for `fork (2)`, `vfork (2)` and `clone (2)`.
-- `sched/sched_process_exec`: Tracepoint called after a exec-family syscall.
-- `kprobe/kernel_clone`: This is the main [`fork()`](https://elixir.bootlin.com/linux/v5.10/source/kernel/fork.c#L2415)
+- `sched/sched_process_fork`: Tracepoint called after a call to `fork(2)`, `vfork(2)`, or `clone(2)`.
+- `sched/sched_process_exec`: Tracepoint called after an exec-family syscall.
+- `kprobe/kernel_clone`: This is the main [`fork()`](https://elixir.bootlin.com/linux/v5.10/source/kernel/fork.c#L2415)
routine since kernel `5.10.0` was released.
-- `kprobe/_do_fork`: Like `kernel_clone`, but this was the main function between kernels `4.2.0` and `5.9.16`
-- `kprobe/do_fork`: This was the main function before kernel `4.2.0`.
+- `kprobe/_do_fork`: Like `kernel_clone`, but this was the main function between kernels `4.2.0` and `5.9.16`.
+- `kprobe/do_fork`: This was the main function before kernel `4.2.0`.
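To check which of these symbols your running kernel actually exposes (and therefore which `kprobe` can be attached), you can search the kernel symbol table; a sketch:

```bash
# Prints the matching fork entry point, e.g. "... T kernel_clone" on 5.10+
grep -E ' (kernel_clone|_do_fork|do_fork)$' /proc/kallsyms
```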
#### Process Exit
@@ -469,8 +460,8 @@ system that the task is finishing its work. The second step is to release the ke
function `release_task`. The difference between the two dimensions can help you discover
[zombie processes](https://en.wikipedia.org/wiki/Zombie_process). To get the metrics, the collector uses:
-- `sched/sched_process_exit`: Tracepoint called after a task exits.
-- `kprobe/release_task`: This function is called when a process exits, as the kernel still needs to remove the process
+- `sched/sched_process_exit`: Tracepoint called after a task exits.
+- `kprobe/release_task`: This function is called when a process exits, as the kernel still needs to remove the process
descriptor.
#### Task error
@@ -489,9 +480,9 @@ the collector attaches `kprobes` for cited functions.
The following `tracepoints` are used to measure time usage for soft IRQs:
-- [`irq/softirq_entry`](https://www.kernel.org/doc/html/latest/core-api/tracepoint.html#c.trace_softirq_entry): Called
+- [`irq/softirq_entry`](https://www.kernel.org/doc/html/latest/core-api/tracepoint.html#c.trace_softirq_entry): Called
before the softirq handler runs.
-- [`irq/softirq_exit`](https://www.kernel.org/doc/html/latest/core-api/tracepoint.html#c.trace_softirq_exit): Called when
+- [`irq/softirq_exit`](https://www.kernel.org/doc/html/latest/core-api/tracepoint.html#c.trace_softirq_exit): Called when
the softirq handler returns.
#### Hard IRQ
@@ -499,60 +490,60 @@ The following `tracepoints` are used to measure time usage for soft IRQs:
The following tracepoints are used to measure the latency of servicing a
hardware interrupt request (hard IRQ).
-- [`irq/irq_handler_entry`](https://www.kernel.org/doc/html/latest/core-api/tracepoint.html#c.trace_irq_handler_entry):
+- [`irq/irq_handler_entry`](https://www.kernel.org/doc/html/latest/core-api/tracepoint.html#c.trace_irq_handler_entry):
Called immediately before the IRQ action handler.
-- [`irq/irq_handler_exit`](https://www.kernel.org/doc/html/latest/core-api/tracepoint.html#c.trace_irq_handler_exit):
+- [`irq/irq_handler_exit`](https://www.kernel.org/doc/html/latest/core-api/tracepoint.html#c.trace_irq_handler_exit):
Called immediately after the IRQ action handler returns.
-- `irq_vectors`: These are traces from `irq_handler_entry` and
+- `irq_vectors`: These are traces from `irq_handler_entry` and
`irq_handler_exit` when an IRQ is handled. The following elements from the vector
are triggered:
- - `irq_vectors/local_timer_entry`
- - `irq_vectors/local_timer_exit`
- - `irq_vectors/reschedule_entry`
- - `irq_vectors/reschedule_exit`
- - `irq_vectors/call_function_entry`
- - `irq_vectors/call_function_exit`
- - `irq_vectors/call_function_single_entry`
- - `irq_vectors/call_function_single_xit`
- - `irq_vectors/irq_work_entry`
- - `irq_vectors/irq_work_exit`
- - `irq_vectors/error_apic_entry`
- - `irq_vectors/error_apic_exit`
- - `irq_vectors/thermal_apic_entry`
- - `irq_vectors/thermal_apic_exit`
- - `irq_vectors/threshold_apic_entry`
- - `irq_vectors/threshold_apic_exit`
- - `irq_vectors/deferred_error_entry`
- - `irq_vectors/deferred_error_exit`
- - `irq_vectors/spurious_apic_entry`
- - `irq_vectors/spurious_apic_exit`
- - `irq_vectors/x86_platform_ipi_entry`
- - `irq_vectors/x86_platform_ipi_exit`
+ - `irq_vectors/local_timer_entry`
+ - `irq_vectors/local_timer_exit`
+ - `irq_vectors/reschedule_entry`
+ - `irq_vectors/reschedule_exit`
+ - `irq_vectors/call_function_entry`
+ - `irq_vectors/call_function_exit`
+ - `irq_vectors/call_function_single_entry`
+ - `irq_vectors/call_function_single_exit`
+ - `irq_vectors/irq_work_entry`
+ - `irq_vectors/irq_work_exit`
+ - `irq_vectors/error_apic_entry`
+ - `irq_vectors/error_apic_exit`
+ - `irq_vectors/thermal_apic_entry`
+ - `irq_vectors/thermal_apic_exit`
+ - `irq_vectors/threshold_apic_entry`
+ - `irq_vectors/threshold_apic_exit`
+ - `irq_vectors/deferred_error_entry`
+ - `irq_vectors/deferred_error_exit`
+ - `irq_vectors/spurious_apic_entry`
+ - `irq_vectors/spurious_apic_exit`
+ - `irq_vectors/x86_platform_ipi_entry`
+ - `irq_vectors/x86_platform_ipi_exit`
#### IPC shared memory
To monitor shared memory system call counts, Netdata attaches tracing to the following functions:
-- `shmget`: Runs when [`shmget`](https://man7.org/linux/man-pages/man2/shmget.2.html) is called.
-- `shmat`: Runs when [`shmat`](https://man7.org/linux/man-pages/man2/shmat.2.html) is called.
-- `shmdt`: Runs when [`shmdt`](https://man7.org/linux/man-pages/man2/shmat.2.html) is called.
-- `shmctl`: Runs when [`shmctl`](https://man7.org/linux/man-pages/man2/shmctl.2.html) is called.
+- `shmget`: Runs when [`shmget`](https://man7.org/linux/man-pages/man2/shmget.2.html) is called.
+- `shmat`: Runs when [`shmat`](https://man7.org/linux/man-pages/man2/shmat.2.html) is called.
+- `shmdt`: Runs when [`shmdt`](https://man7.org/linux/man-pages/man2/shmat.2.html) is called.
+- `shmctl`: Runs when [`shmctl`](https://man7.org/linux/man-pages/man2/shmctl.2.html) is called.
### Memory
In the memory submenu the eBPF plugin creates two submenus **page cache** and **synchronization** with the following
organization:
-- Page Cache
- - Page cache ratio
- - Dirty pages
- - Page cache hits
- - Page cache misses
-- Synchronization
- - File sync
- - Memory map sync
- - File system sync
- - File range sync
+- Page Cache
+ - Page cache ratio
+ - Dirty pages
+ - Page cache hits
+ - Page cache misses
+- Synchronization
+ - File sync
+ - Memory map sync
+ - File system sync
+ - File range sync
#### Page cache hits
@@ -587,10 +578,10 @@ The chart `cachestat_ratio` shows how processes are accessing page cache. In a n
100%, which means that the majority of the work on the machine is processed in memory. To calculate the ratio, Netdata
attaches `kprobes` for kernel functions:
-- `add_to_page_cache_lru`: Page addition.
-- `mark_page_accessed`: Access to cache.
-- `account_page_dirtied`: Dirty (modified) pages.
-- `mark_buffer_dirty`: Writes to page cache.
+- `add_to_page_cache_lru`: Page addition.
+- `mark_page_accessed`: Access to cache.
+- `account_page_dirtied`: Dirty (modified) pages.
+- `mark_buffer_dirty`: Writes to page cache.
#### Page cache misses
@@ -629,7 +620,7 @@ in [disk latency](#disk) charts.
By default, MD flush is disabled. To enable it, configure your
`/etc/netdata/ebpf.d.conf` file as:
-```conf
+```text
[global]
mdflush = yes
```
@@ -638,7 +629,7 @@ By default, MD flush is disabled. To enable it, configure your
To collect data related to Linux multi-device (MD) flushing, the following kprobe is used:
-- `kprobe/md_flush_request`: called whenever a request for flushing multi-device data is made.
+- `kprobe/md_flush_request`: Called whenever a request for flushing multi-device data is made.
### Disk
@@ -648,9 +639,9 @@ The eBPF plugin also shows a chart in the Disk section when the `disk` thread is
This will create the chart `disk_latency_io` for each disk on the host. The following tracepoints are used:
-- [`block/block_rq_issue`](https://www.kernel.org/doc/html/latest/core-api/tracepoint.html#c.trace_block_rq_issue):
+- [`block/block_rq_issue`](https://www.kernel.org/doc/html/latest/core-api/tracepoint.html#c.trace_block_rq_issue):
I/O request operation issued to the device driver.
-- [`block/block_rq_complete`](https://www.kernel.org/doc/html/latest/core-api/tracepoint.html#c.trace_block_rq_complete):
+- [`block/block_rq_complete`](https://www.kernel.org/doc/html/latest/core-api/tracepoint.html#c.trace_block_rq_complete):
I/O operation completed by the device.
Disk Latency is the single most important metric to focus on when it comes to storage performance, under most circumstances.
@@ -675,10 +666,10 @@ To measure the latency of executing some actions in an
collector needs to attach `kprobes` and `kretprobes` for each of the following
functions:
-- `ext4_file_read_iter`: Function used to measure read latency.
-- `ext4_file_write_iter`: Function used to measure write latency.
-- `ext4_file_open`: Function used to measure open latency.
-- `ext4_sync_file`: Function used to measure sync latency.
+- `ext4_file_read_iter`: Function used to measure read latency.
+- `ext4_file_write_iter`: Function used to measure write latency.
+- `ext4_file_open`: Function used to measure open latency.
+- `ext4_sync_file`: Function used to measure sync latency.
#### ZFS
@@ -686,10 +677,10 @@ To measure the latency of executing some actions in a zfs filesystem, the
collector needs to attach `kprobes` and `kretprobes` for each of the following
functions:
-- `zpl_iter_read`: Function used to measure read latency.
-- `zpl_iter_write`: Function used to measure write latency.
-- `zpl_open`: Function used to measure open latency.
-- `zpl_fsync`: Function used to measure sync latency.
+- `zpl_iter_read`: Function used to measure read latency.
+- `zpl_iter_write`: Function used to measure write latency.
+- `zpl_open`: Function used to measure open latency.
+- `zpl_fsync`: Function used to measure sync latency.
#### XFS
@@ -698,10 +689,10 @@ To measure the latency of executing some actions in an
collector needs to attach `kprobes` and `kretprobes` for each of the following
functions:
-- `xfs_file_read_iter`: Function used to measure read latency.
-- `xfs_file_write_iter`: Function used to measure write latency.
-- `xfs_file_open`: Function used to measure open latency.
-- `xfs_file_fsync`: Function used to measure sync latency.
+- `xfs_file_read_iter`: Function used to measure read latency.
+- `xfs_file_write_iter`: Function used to measure write latency.
+- `xfs_file_open`: Function used to measure open latency.
+- `xfs_file_fsync`: Function used to measure sync latency.
#### NFS
@@ -710,11 +701,11 @@ To measure the latency of executing some actions in an
collector needs to attach `kprobes` and `kretprobes` for each of the following
functions:
-- `nfs_file_read`: Function used to measure read latency.
-- `nfs_file_write`: Function used to measure write latency.
-- `nfs_file_open`: Functions used to measure open latency.
-- `nfs4_file_open`: Functions used to measure open latency for NFS v4.
-- `nfs_getattr`: Function used to measure sync latency.
+- `nfs_file_read`: Function used to measure read latency.
+- `nfs_file_write`: Function used to measure write latency.
+- `nfs_file_open`: Function used to measure open latency.
+- `nfs4_file_open`: Function used to measure open latency for NFS v4.
+- `nfs_getattr`: Function used to measure sync latency.
#### btrfs
@@ -724,24 +715,24 @@ filesystem, the collector needs to attach `kprobes` and `kretprobes` for each of
> Note: We are listing two functions used to measure `read` latency, but we use either `btrfs_file_read_iter` or
> `generic_file_read_iter`, depending on kernel version.
-- `btrfs_file_read_iter`: Function used to measure read latency since kernel `5.10.0`.
-- `generic_file_read_iter`: Like `btrfs_file_read_iter`, but this function was used before kernel `5.10.0`.
-- `btrfs_file_write_iter`: Function used to write data.
-- `btrfs_file_open`: Function used to open files.
-- `btrfs_sync_file`: Function used to synchronize data to filesystem.
+- `btrfs_file_read_iter`: Function used to measure read latency since kernel `5.10.0`.
+- `generic_file_read_iter`: Like `btrfs_file_read_iter`, but this function was used before kernel `5.10.0`.
+- `btrfs_file_write_iter`: Function used to write data.
+- `btrfs_file_open`: Function used to open files.
+- `btrfs_sync_file`: Function used to synchronize data to filesystem.
#### File descriptor
To give metrics related to `open` and `close` events, instead of attaching kprobes for each syscall used to do these
events, the collector attaches `kprobes` for the common function used for syscalls:
-- [`do_sys_open`](https://0xax.gitbooks.io/linux-insides/content/SysCall/linux-syscall-5.html): Internal function used to
+- [`do_sys_open`](https://0xax.gitbooks.io/linux-insides/content/SysCall/linux-syscall-5.html): Internal function used to
open files.
-- [`do_sys_openat2`](https://elixir.bootlin.com/linux/v5.6/source/fs/open.c#L1162):
+- [`do_sys_openat2`](https://elixir.bootlin.com/linux/v5.6/source/fs/open.c#L1162):
Function called from `do_sys_open` since version `5.6.0`.
-- [`close_fd`](https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg2271761.html): Function used to close file
+- [`close_fd`](https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg2271761.html): Function used to close file
descriptor since kernel `5.11.0`.
-- `__close_fd`: Function used to close files before version `5.11.0`.
+- `__close_fd`: Function used to close files before version `5.11.0`.
#### File error
@@ -761,21 +752,21 @@ To measure the latency and total quantity of executing some VFS-level
functions, ebpf.plugin needs to attach kprobes and kretprobes for each of the
following functions:
-- `vfs_write`: Function used monitoring the number of successful & failed
+- `vfs_write`: Function used for monitoring the number of successful & failed
filesystem write calls, as well as the total number of written bytes.
-- `vfs_writev`: Same function as `vfs_write` but for vector writes (i.e. a
+- `vfs_writev`: Same function as `vfs_write` but for vector writes (i.e. a
single write operation using a group of buffers rather than 1).
-- `vfs_read`: Function used for monitoring the number of successful & failed
+- `vfs_read`: Function used for monitoring the number of successful & failed
filesystem read calls, as well as the total number of read bytes.
-- `vfs_readv` Same function as `vfs_read` but for vector reads (i.e. a single
+- `vfs_readv`: Same function as `vfs_read` but for vector reads (i.e. a single
read operation using a group of buffers rather than 1).
-- `vfs_unlink`: Function used for monitoring the number of successful & failed
+- `vfs_unlink`: Function used for monitoring the number of successful & failed
filesystem unlink calls.
-- `vfs_fsync`: Function used for monitoring the number of successful & failed
+- `vfs_fsync`: Function used for monitoring the number of successful & failed
filesystem fsync calls.
-- `vfs_open`: Function used for monitoring the number of successful & failed
+- `vfs_open`: Function used for monitoring the number of successful & failed
filesystem open calls.
-- `vfs_create`: Function used for monitoring the number of successful & failed
+- `vfs_create`: Function used for monitoring the number of successful & failed
filesystem create calls.
##### VFS Deleted objects
@@ -816,8 +807,8 @@ Metrics for directory cache are collected using kprobe for `lookup_fast`, becaus
times this function is accessed. On the other hand, for `d_lookup` we are not only interested in the number of times it
is accessed, but also in possible errors, so we need to attach a `kretprobe`. For this reason, the following is used:
-- [`lookup_fast`](https://lwn.net/Articles/649115/): Called to look at data inside the directory cache.
-- [`d_lookup`](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/fs/dcache.c?id=052b398a43a7de8c68c13e7fa05d6b3d16ce6801#n2223):
+- [`lookup_fast`](https://lwn.net/Articles/649115/): Called to look at data inside the directory cache.
+- [`d_lookup`](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/fs/dcache.c?id=052b398a43a7de8c68c13e7fa05d6b3d16ce6801#n2223):
Called when the desired file is not inside the directory cache.
##### Directory Cache Interpretation
@@ -830,8 +821,8 @@ accessed before.
The following tracing points are used to collect `mount` & `unmount` call counts:
-- [`mount`](https://man7.org/linux/man-pages/man2/mount.2.html): mount filesystem on host.
-- [`umount`](https://man7.org/linux/man-pages/man2/umount.2.html): umount filesystem on host.
+- [`mount`](https://man7.org/linux/man-pages/man2/mount.2.html): mount filesystem on host.
+- [`umount`](https://man7.org/linux/man-pages/man2/umount.2.html): umount filesystem on host.
### Networking Stack
@@ -855,10 +846,10 @@ to send & receive data and to close connections when `TCP` protocol is used.
This chart demonstrates calls to functions:
-- `tcp_sendmsg`: Function responsible to send data for a specified destination.
-- `tcp_cleanup_rbuf`: We use this function instead of `tcp_recvmsg`, because the last one misses `tcp_read_sock` traffic
+- `tcp_sendmsg`: Function responsible for sending data to a specified destination.
+- `tcp_cleanup_rbuf`: We use this function instead of `tcp_recvmsg`, because the latter misses `tcp_read_sock` traffic
and we would also need to add more `tracing` to get the socket and package size.
-- `tcp_close`: Function responsible to close connection.
+- `tcp_close`: Function responsible for closing the connection.
#### TCP retransmit
@@ -881,7 +872,7 @@ calls, it monitors the number of bytes sent and received.
These are tracepoints related to [OOM](https://en.wikipedia.org/wiki/Out_of_memory) killing processes.
-- `oom/mark_victim`: Monitors when an oomkill event happens.
+- `oom/mark_victim`: Monitors when an oomkill event happens.
## Known issues
@@ -897,15 +888,14 @@ node is experiencing high memory usage and there is no obvious culprit to be fou
- Disable [integration with apps](#integration-with-appsplugin).
- Disable [integration with cgroup](#integration-with-cgroupsplugin).
-If with these changes you still suspect eBPF using too much memory, and there is no obvious culprit to be found
+If, after these changes, you still suspect eBPF is using too much memory, and there is no obvious culprit to be found
in the `apps.mem` chart, consider testing for high kernel memory usage by [disabling eBPF monitoring](#configuring-ebpfplugin).
-Next, [restart Netdata](/packaging/installer/README.md#maintaining-a-netdata-agent-installation) with
-`sudo systemctl restart netdata` to see if system memory usage (see the `system.ram` chart) has dropped significantly.
+Next, [restart Netdata](/docs/netdata-agent/start-stop-restart.md) to see if system memory usage (see the `system.ram` chart) has dropped significantly.
Beginning with `v1.31`, kernel memory usage is configurable via the [`pid table size` setting](#pid-table-size)
in `ebpf.conf`.
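A sketch of that setting in the `[global]` section (the value shown is illustrative only, not a recommended default):

```text
[global]
    pid table size = 16384
```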
-The total memory usage is a well known [issue](https://lore.kernel.org/all/167821082315.1693.6957546778534183486.git-patchwork-notify@kernel.org/)
+The total memory usage is a well-known [issue](https://lore.kernel.org/all/167821082315.1693.6957546778534183486.git-patchwork-notify@kernel.org/)
for eBPF; it is not a bug in the plugin.
### SELinux
@@ -950,7 +940,7 @@ This will create two new files: `netdata_ebpf.te` and `netdata_ebpf.mod`.
Edit the `netdata_ebpf.te` file to change the options `class` and `allow`. You should have the following at the end of
the `netdata_ebpf.te` file.
-```conf
+```text
module netdata_ebpf 1.0;
require {
type unconfined_service_t;
@@ -981,7 +971,7 @@ a feature called "lockdown," which may affect `ebpf.plugin` depending how the ke
shows how the lockdown module impacts `ebpf.plugin` based on the selected options:
| Enforcing kernel lockdown | Enable lockdown LSM early in init | Default lockdown mode | Can `ebpf.plugin` run with this? |
-| :------------------------ | :-------------------------------- | :-------------------- | :------------------------------- |
+|:--------------------------|:----------------------------------|:----------------------|:---------------------------------|
| YES | NO | NO | YES |
| YES | Yes | None | YES |
| YES | Yes | Integrity | YES |
diff --git a/src/collectors/ebpf.plugin/ebpf.c b/src/collectors/ebpf.plugin/ebpf.c
index 5424ea8f0..4cc263e73 100644
--- a/src/collectors/ebpf.plugin/ebpf.c
+++ b/src/collectors/ebpf.plugin/ebpf.c
@@ -19,11 +19,7 @@ char *ebpf_plugin_dir = PLUGINS_DIR;
static char *ebpf_configured_log_dir = LOG_DIR;
char *ebpf_algorithms[] = { EBPF_CHART_ALGORITHM_ABSOLUTE, EBPF_CHART_ALGORITHM_INCREMENTAL};
-struct config collector_config = { .first_section = NULL,
- .last_section = NULL,
- .mutex = NETDATA_MUTEX_INITIALIZER,
- .index = { .avl_tree = { .root = NULL, .compar = appconfig_section_compare },
- .rwlock = AVL_LOCK_INITIALIZER } };
+struct config collector_config = APPCONFIG_INITIALIZER;
int running_on_kernel = 0;
int ebpf_nprocs;
@@ -661,7 +657,7 @@ struct vfs_bpf *vfs_bpf_obj = NULL;
#else
void *default_btf = NULL;
#endif
-char *btf_path = NULL;
+const char *btf_path = NULL;
/*****************************************************************
*
@@ -1415,7 +1411,7 @@ void ebpf_send_data_aral_chart(ARAL *memory, ebpf_module_t *em)
char *mem = { NETDATA_EBPF_STAT_DIMENSION_MEMORY };
char *aral = { NETDATA_EBPF_STAT_DIMENSION_ARAL };
- struct aral_statistics *stats = aral_statistics(memory);
+ struct aral_statistics *stats = aral_get_statistics(memory);
ebpf_write_begin_chart(NETDATA_MONITORING_FAMILY, em->memory_usage, "");
write_chart_dimension(mem, (long long)stats->structures.allocated_bytes);
@@ -1608,7 +1604,7 @@ static void get_ipv6_last_addr(union netdata_ip_t *out, union netdata_ip_t *in,
*
* @return it returns 0 on success and -1 otherwise.
*/
-static inline int ebpf_ip2nl(uint8_t *dst, char *ip, int domain, char *source)
+static inline int ebpf_ip2nl(uint8_t *dst, const char *ip, int domain, char *source)
{
if (inet_pton(domain, ip, dst) <= 0) {
netdata_log_error("The address specified (%s) is invalid ", source);
@@ -1666,14 +1662,14 @@ void ebpf_clean_ip_structure(ebpf_network_viewer_ip_list_t **clean)
* @param out a pointer to store the link list
* @param ip the value given as parameter
*/
-static void ebpf_parse_ip_list_unsafe(void **out, char *ip)
+static void ebpf_parse_ip_list_unsafe(void **out, const char *ip)
{
ebpf_network_viewer_ip_list_t **list = (ebpf_network_viewer_ip_list_t **)out;
char *ipdup = strdupz(ip);
union netdata_ip_t first = { };
union netdata_ip_t last = { };
- char *is_ipv6;
+ const char *is_ipv6;
if (*ip == '*' && *(ip+1) == '\0') {
memset(first.addr8, 0, sizeof(first.addr8));
memset(last.addr8, 0xFF, sizeof(last.addr8));
@@ -1684,7 +1680,8 @@ static void ebpf_parse_ip_list_unsafe(void **out, char *ip)
goto storethisip;
}
- char *end = ip;
+ char *enddup = strdupz(ip);
+ char *end = enddup;
// Move while I cannot find a separator
while (*end && *end != '/' && *end != '-') end++;
@@ -1814,7 +1811,7 @@ static void ebpf_parse_ip_list_unsafe(void **out, char *ip)
ebpf_network_viewer_ip_list_t *store;
- storethisip:
+storethisip:
store = callocz(1, sizeof(ebpf_network_viewer_ip_list_t));
store->value = ipdup;
store->hash = simple_hash(ipdup);
@@ -1825,8 +1822,9 @@ static void ebpf_parse_ip_list_unsafe(void **out, char *ip)
ebpf_fill_ip_list_unsafe(list, store, "socket");
return;
- cleanipdup:
+cleanipdup:
freez(ipdup);
+ freez(enddup);
}
/**
@@ -1836,7 +1834,7 @@ static void ebpf_parse_ip_list_unsafe(void **out, char *ip)
*
* @param ptr is a pointer with the text to parse.
*/
-void ebpf_parse_ips_unsafe(char *ptr)
+void ebpf_parse_ips_unsafe(const char *ptr)
{
// No value
if (unlikely(!ptr))
@@ -1927,7 +1925,7 @@ static inline void fill_port_list(ebpf_network_viewer_port_list_t **out, ebpf_ne
* @param out a pointer to store the link list
* @param service the service used to create the structure that will be linked.
*/
-static void ebpf_parse_service_list(void **out, char *service)
+static void ebpf_parse_service_list(void **out, const char *service)
{
ebpf_network_viewer_port_list_t **list = (ebpf_network_viewer_port_list_t **)out;
struct servent *serv = getservbyname((const char *)service, "tcp");
@@ -1956,8 +1954,10 @@ static void ebpf_parse_service_list(void **out, char *service)
* @param out a pointer to store the link list
* @param range the informed range for the user.
*/
-static void ebpf_parse_port_list(void **out, char *range)
-{
+static void ebpf_parse_port_list(void **out, const char *range_param) {
+ char range[strlen(range_param) + 1];
+ strncpyz(range, range_param, strlen(range_param));
+
int first, last;
ebpf_network_viewer_port_list_t **list = (ebpf_network_viewer_port_list_t **)out;
@@ -2029,7 +2029,7 @@ static void ebpf_parse_port_list(void **out, char *range)
*
* @param ptr is a pointer with the text to parse.
*/
-void ebpf_parse_ports(char *ptr)
+void ebpf_parse_ports(const char *ptr)
{
// No value
if (unlikely(!ptr))
@@ -2480,7 +2480,7 @@ static void ebpf_link_hostname(ebpf_network_viewer_hostname_list_t **out, ebpf_n
* @param out is the output link list
* @param parse is a pointer with the text to parser.
*/
-static void ebpf_link_hostnames(char *parse)
+static void ebpf_link_hostnames(const char *parse)
{
// No value
if (unlikely(!parse))
@@ -2536,7 +2536,7 @@ void parse_network_viewer_section(struct config *cfg)
EBPF_CONFIG_RESOLVE_SERVICE,
CONFIG_BOOLEAN_YES);
- char *value = appconfig_get(cfg, EBPF_NETWORK_VIEWER_SECTION, EBPF_CONFIG_PORTS, NULL);
+ const char *value = appconfig_get(cfg, EBPF_NETWORK_VIEWER_SECTION, EBPF_CONFIG_PORTS, NULL);
ebpf_parse_ports(value);
if (network_viewer_opt.hostname_resolution_enabled) {
@@ -2684,7 +2684,7 @@ static void ebpf_allocate_common_vectors()
*
* @param ptr the option given by users
*/
-static inline void ebpf_how_to_load(char *ptr)
+static inline void ebpf_how_to_load(const char *ptr)
{
if (!strcasecmp(ptr, EBPF_CFG_LOAD_MODE_RETURN))
ebpf_set_thread_mode(MODE_RETURN);
@@ -2775,7 +2775,7 @@ static inline void ebpf_set_load_mode(netdata_ebpf_load_mode_t load, netdata_ebp
* @param str value read from configuration file.
* @param origin specify the configuration file loaded
*/
-static inline void epbf_update_load_mode(char *str, netdata_ebpf_load_mode_t origin)
+static inline void epbf_update_load_mode(const char *str, netdata_ebpf_load_mode_t origin)
{
netdata_ebpf_load_mode_t load = epbf_convert_string_to_load_mode(str);
@@ -2808,7 +2808,7 @@ static void read_collector_values(int *disable_cgroups,
int update_every, netdata_ebpf_load_mode_t origin)
{
// Read global section
- char *value;
+ const char *value;
if (appconfig_exists(&collector_config, EBPF_GLOBAL_SECTION, "load")) // Backward compatibility
value = appconfig_get(&collector_config, EBPF_GLOBAL_SECTION, "load",
EBPF_CFG_LOAD_MODE_DEFAULT);
@@ -4005,7 +4005,6 @@ static void ebpf_manage_pid(pid_t pid)
*/
int main(int argc, char **argv)
{
- clocks_init();
nd_log_initialize_for_external_plugins(NETDATA_EBPF_PLUGIN_NAME);
ebpf_set_global_variables();
@@ -4034,6 +4033,10 @@ int main(int argc, char **argv)
#ifdef LIBBPF_MAJOR_VERSION
libbpf_set_strict_mode(LIBBPF_STRICT_ALL);
+
+#ifndef NETDATA_INTERNAL_CHECKS
+ libbpf_set_print(netdata_silent_libbpf_vfprintf);
+#endif
#endif
ebpf_read_local_addresses_unsafe();
@@ -4072,16 +4075,14 @@ int main(int argc, char **argv)
}
}
- usec_t step = USEC_PER_SEC;
heartbeat_t hb;
- heartbeat_init(&hb);
+ heartbeat_init(&hb, USEC_PER_SEC);
int update_apps_every = (int) EBPF_CFG_UPDATE_APPS_EVERY_DEFAULT;
- uint32_t max_period = EBPF_CLEANUP_FACTOR;
int update_apps_list = update_apps_every - 1;
int process_maps_per_core = ebpf_modules[EBPF_MODULE_PROCESS_IDX].maps_per_core;
//Plugin will be killed when it receives a signal
for ( ; !ebpf_plugin_stop(); global_iterations_counter++) {
- (void)heartbeat_next(&hb, step);
+ (void)heartbeat_next(&hb);
if (global_iterations_counter % EBPF_DEFAULT_UPDATE_EVERY == 0) {
pthread_mutex_lock(&lock);
@@ -4099,7 +4100,7 @@ int main(int argc, char **argv)
pthread_mutex_lock(&collect_data_mutex);
ebpf_parse_proc_files();
if (collect_pids & (1<<EBPF_MODULE_PROCESS_IDX)) {
- collect_data_for_all_processes(process_pid_fd, process_maps_per_core, max_period);
+ collect_data_for_all_processes(process_pid_fd, process_maps_per_core);
}
ebpf_create_apps_charts(apps_groups_root_target);
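Two things change in `main()` beyond the dropped `clocks_init()` call: the heartbeat now receives its tick period at init time (a migration repeated in every collector below; a sketch of the new contract follows the cachestat hunks), and non-debug builds silence libbpf's logger. Only the callback's name appears in the hunk; a plausible minimal body, using libbpf's stable `libbpf_set_print()` hook, could look like this (the helper wrapping it is hypothetical):

```c
#include <stdarg.h>
#include <bpf/libbpf.h>

/* Assumed body for the netdata_silent_libbpf_vfprintf named in the hunk:
 * swallow all libbpf log output. libbpf_set_print() returns the previous
 * callback, ignored here. */
static int netdata_silent_libbpf_vfprintf(enum libbpf_print_level level,
                                          const char *format, va_list args)
{
    (void)level;
    (void)format;
    (void)args;
    return 0; /* nothing printed */
}

/* hypothetical helper mirroring the setup sequence in main() */
static void ebpf_setup_libbpf_logging(void)
{
    libbpf_set_strict_mode(LIBBPF_STRICT_ALL);
#ifndef NETDATA_INTERNAL_CHECKS
    libbpf_set_print(netdata_silent_libbpf_vfprintf); /* debug builds keep logs */
#endif
}
```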
diff --git a/src/collectors/ebpf.plugin/ebpf_apps.c b/src/collectors/ebpf.plugin/ebpf_apps.c
index d90c5f128..dc66cf774 100644
--- a/src/collectors/ebpf.plugin/ebpf_apps.c
+++ b/src/collectors/ebpf.plugin/ebpf_apps.c
@@ -327,7 +327,7 @@ int pids_fd[EBPF_PIDS_END_IDX];
static size_t
// global_iterations_counter = 1,
- calls_counter = 0,
+ //calls_counter = 0,
// file_counter = 0,
// filenames_allocated_counter = 0,
// inodes_changed_counter = 0,
@@ -426,7 +426,7 @@ static inline void assign_target_to_pid(ebpf_pid_data_t *p)
static inline int read_proc_pid_cmdline(ebpf_pid_data_t *p, char *cmdline)
{
char filename[FILENAME_MAX + 1];
- snprintfz(filename, FILENAME_MAX, "%s/proc/%d/cmdline", netdata_configured_host_prefix, p->pid);
+ snprintfz(filename, FILENAME_MAX, "%s/proc/%u/cmdline", netdata_configured_host_prefix, p->pid);
int ret = 0;
@@ -490,7 +490,7 @@ static inline int read_proc_pid_stat(ebpf_pid_data_t *p)
char *comm = procfile_lineword(ff, 0, 1);
int32_t ppid = (int32_t)str2pid_t(procfile_lineword(ff, 0, 3));
- if (p->ppid == ppid && p->target)
+ if (p->ppid == (uint32_t)ppid && p->target)
goto without_cmdline_target;
p->ppid = ppid;
@@ -546,7 +546,7 @@ static inline int ebpf_collect_data_for_pid(pid_t pid)
read_proc_pid_stat(p);
// check its parent pid
- if (unlikely( p->ppid > pid_max)) {
+ if (unlikely( p->ppid > (uint32_t)pid_max)) {
netdata_log_error("Pid %d (command '%s') states invalid parent pid %u. Using 0.", pid, p->comm, p->ppid);
p->ppid = 0;
}
@@ -906,9 +906,8 @@ void ebpf_process_sum_values_for_pids(ebpf_process_stat_t *process, struct ebpf_
*
* @param tbl_pid_stats_fd The mapped file descriptor for the hash table.
* @param maps_per_core do I have hash maps per core?
- * @param max_period max period to wait before remove from hash table.
*/
-void collect_data_for_all_processes(int tbl_pid_stats_fd, int maps_per_core, uint32_t max_period)
+void collect_data_for_all_processes(int tbl_pid_stats_fd, int maps_per_core)
{
if (tbl_pid_stats_fd == -1)
return;
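`collect_data_for_all_processes()` loses its `max_period` argument here, as do the per-module apps-table readers later in the patch: stale-entry expiry is no longer decided during the table walk. The walk itself is the usual libbpf key-chasing loop; a hedged, self-contained sketch (the struct and the accumulation are illustrative, not the plugin's real layout):

```c
#include <stdint.h>
#include <bpf/bpf.h>

struct example_stat {
    uint64_t create_process;
    uint64_t exit_call;
};

/* Walk a PID-keyed hash map. With per-core tables a single lookup fills one
 * slot per CPU, which the caller accumulates into slot 0 before publishing;
 * nslots is 1 when maps_per_core is disabled. */
static void read_apps_table(int map_fd, struct example_stat *slots, int nslots)
{
    uint32_t key = 0, next_key = 0;

    /* passing a key that is not in the map yields the first key */
    while (bpf_map_get_next_key(map_fd, &key, &next_key) == 0) {
        if (bpf_map_lookup_elem(map_fd, &next_key, slots) == 0) {
            for (int i = 1; i < nslots; i++) {
                slots[0].create_process += slots[i].create_process;
                slots[0].exit_call      += slots[i].exit_call;
            }
            /* ...store slots[0] under PID next_key in the local table... */
        }
        key = next_key;
    }
}
```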
diff --git a/src/collectors/ebpf.plugin/ebpf_apps.h b/src/collectors/ebpf.plugin/ebpf_apps.h
index 98c9995da..5bf8953ad 100644
--- a/src/collectors/ebpf.plugin/ebpf_apps.h
+++ b/src/collectors/ebpf.plugin/ebpf_apps.h
@@ -495,7 +495,7 @@ int ebpf_read_hash_table(void *ep, int fd, uint32_t pid);
int get_pid_comm(pid_t pid, size_t n, char *dest);
-void collect_data_for_all_processes(int tbl_pid_stats_fd, int maps_per_core, uint32_t max_period);
+void collect_data_for_all_processes(int tbl_pid_stats_fd, int maps_per_core);
void ebpf_process_apps_accumulator(ebpf_process_stat_t *out, int maps_per_core);
// The default value is at least 32 times smaller than maximum number of PIDs allowed on system,
diff --git a/src/collectors/ebpf.plugin/ebpf_cachestat.c b/src/collectors/ebpf.plugin/ebpf_cachestat.c
index 8c0260d51..49a5d98a1 100644
--- a/src/collectors/ebpf.plugin/ebpf_cachestat.c
+++ b/src/collectors/ebpf.plugin/ebpf_cachestat.c
@@ -43,11 +43,7 @@ ebpf_local_maps_t cachestat_maps[] = {{.name = "cstat_global", .internal_input =
#endif
}};
-struct config cachestat_config = { .first_section = NULL,
- .last_section = NULL,
- .mutex = NETDATA_MUTEX_INITIALIZER,
- .index = { .avl_tree = { .root = NULL, .compar = appconfig_section_compare },
- .rwlock = AVL_LOCK_INITIALIZER } };
+struct config cachestat_config = APPCONFIG_INITIALIZER;
netdata_ebpf_targets_t cachestat_targets[] = { {.name = "add_to_page_cache_lru", .mode = EBPF_LOAD_TRAMPOLINE},
{.name = "mark_page_accessed", .mode = EBPF_LOAD_TRAMPOLINE},
@@ -716,9 +712,8 @@ static inline void cachestat_save_pid_values(netdata_publish_cachestat_t *out, n
* Read the apps table and store data inside the structure.
*
* @param maps_per_core do I need to read all cores?
- * @param max_period limit of iterations without updates before remove data from hash table
*/
-static void ebpf_read_cachestat_apps_table(int maps_per_core, uint32_t max_period)
+static void ebpf_read_cachestat_apps_table(int maps_per_core)
{
netdata_cachestat_pid_t *cv = cachestat_vector;
int fd = cachestat_maps[NETDATA_CACHESTAT_PID_STATS].map_fd;
@@ -842,28 +837,25 @@ void ebpf_resume_apps_data()
*/
void *ebpf_read_cachestat_thread(void *ptr)
{
- heartbeat_t hb;
- heartbeat_init(&hb);
-
ebpf_module_t *em = (ebpf_module_t *)ptr;
int maps_per_core = em->maps_per_core;
int update_every = em->update_every;
- uint32_t max_period = EBPF_CLEANUP_FACTOR;
int counter = update_every - 1;
uint32_t lifetime = em->lifetime;
uint32_t running_time = 0;
- usec_t period = update_every * USEC_PER_SEC;
pids_fd[EBPF_PIDS_CACHESTAT_IDX] = cachestat_maps[NETDATA_CACHESTAT_PID_STATS].map_fd;
+ heartbeat_t hb;
+ heartbeat_init(&hb, update_every * USEC_PER_SEC);
while (!ebpf_plugin_stop() && running_time < lifetime) {
- (void)heartbeat_next(&hb, period);
+ (void)heartbeat_next(&hb);
if (ebpf_plugin_stop() || ++counter != update_every)
continue;
pthread_mutex_lock(&collect_data_mutex);
- ebpf_read_cachestat_apps_table(maps_per_core, max_period);
+ ebpf_read_cachestat_apps_table(maps_per_core);
ebpf_resume_apps_data();
pthread_mutex_unlock(&collect_data_mutex);
@@ -1407,7 +1399,7 @@ static void cachestat_collector(ebpf_module_t *em)
int update_every = em->update_every;
int maps_per_core = em->maps_per_core;
heartbeat_t hb;
- heartbeat_init(&hb);
+ heartbeat_init(&hb, USEC_PER_SEC);
int counter = update_every - 1;
//This will be cancelled by its parent
uint32_t running_time = 0;
@@ -1415,7 +1407,7 @@ static void cachestat_collector(ebpf_module_t *em)
netdata_idx_t *stats = em->hash_table_stats;
memset(stats, 0, sizeof(em->hash_table_stats));
while (!ebpf_plugin_stop() && running_time < lifetime) {
- (void)heartbeat_next(&hb, USEC_PER_SEC);
+ (void)heartbeat_next(&hb);
if (ebpf_plugin_stop() || ++counter != update_every)
continue;
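The cachestat hunks show the full shape of the heartbeat migration used throughout this patch: the period moves from every `heartbeat_next(&hb, period)` call into `heartbeat_init()`, so reader threads pass `update_every * USEC_PER_SEC` once and collectors pass `USEC_PER_SEC` once. A self-contained approximation of the new contract (the sleeping body is an assumption; the real libnetdata heartbeat aligns wakeups to clock boundaries and reports the time actually slept):

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

typedef uint64_t usec_t;
#define USEC_PER_SEC 1000000ULL

typedef struct heartbeat {
    usec_t period; /* fixed once, at init time */
} heartbeat_t;

static void heartbeat_init(heartbeat_t *hb, usec_t period)
{
    hb->period = period;
}

static usec_t heartbeat_next(heartbeat_t *hb)
{
    /* assumption: plain sleep; the real heartbeat aligns to tick boundaries */
    struct timespec ts = {
        .tv_sec  = (time_t)(hb->period / USEC_PER_SEC),
        .tv_nsec = (long)((hb->period % USEC_PER_SEC) * 1000ULL),
    };
    nanosleep(&ts, NULL);
    return hb->period;
}

int main(void)
{
    heartbeat_t hb;
    heartbeat_init(&hb, USEC_PER_SEC);   /* old API: heartbeat_init(&hb) */
    for (int tick = 0; tick < 3; tick++) {
        heartbeat_next(&hb);             /* old API: heartbeat_next(&hb, USEC_PER_SEC) */
        printf("tick %d\n", tick);
    }
    return 0;
}
```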
diff --git a/src/collectors/ebpf.plugin/ebpf_cgroup.c b/src/collectors/ebpf.plugin/ebpf_cgroup.c
index 9e1fa8231..0bc5989e1 100644
--- a/src/collectors/ebpf.plugin/ebpf_cgroup.c
+++ b/src/collectors/ebpf.plugin/ebpf_cgroup.c
@@ -373,13 +373,12 @@ void ebpf_create_charts_on_systemd(ebpf_systemd_args_t *chart)
*/
void *ebpf_cgroup_integration(void *ptr __maybe_unused)
{
- usec_t step = USEC_PER_SEC;
int counter = NETDATA_EBPF_CGROUP_UPDATE - 1;
heartbeat_t hb;
- heartbeat_init(&hb);
+ heartbeat_init(&hb, USEC_PER_SEC);
//Plugin will be killed when it receives a signal
while (!ebpf_plugin_stop()) {
- (void)heartbeat_next(&hb, step);
+ heartbeat_next(&hb);
// We are using a small heartbeat time to wake up the thread,
// but we should not update the shared memory data so frequently
diff --git a/src/collectors/ebpf.plugin/ebpf_dcstat.c b/src/collectors/ebpf.plugin/ebpf_dcstat.c
index e6053cb4a..e84517686 100644
--- a/src/collectors/ebpf.plugin/ebpf_dcstat.c
+++ b/src/collectors/ebpf.plugin/ebpf_dcstat.c
@@ -12,11 +12,7 @@ netdata_dcstat_pid_t *dcstat_vector = NULL;
static netdata_idx_t dcstat_hash_values[NETDATA_DCSTAT_IDX_END];
static netdata_idx_t *dcstat_values = NULL;
-struct config dcstat_config = { .first_section = NULL,
- .last_section = NULL,
- .mutex = NETDATA_MUTEX_INITIALIZER,
- .index = { .avl_tree = { .root = NULL, .compar = appconfig_section_compare },
- .rwlock = AVL_LOCK_INITIALIZER } };
+struct config dcstat_config = APPCONFIG_INITIALIZER;
ebpf_local_maps_t dcstat_maps[] = {{.name = "dcstat_global", .internal_input = NETDATA_DIRECTORY_CACHE_END,
.user_input = 0, .type = NETDATA_EBPF_MAP_STATIC,
@@ -542,9 +538,8 @@ static void ebpf_dcstat_apps_accumulator(netdata_dcstat_pid_t *out, int maps_per
* Read the apps table and store data inside the structure.
*
* @param maps_per_core do I need to read all cores?
- * @param max_period limit of iterations without updates before remove data from hash table
*/
-static void ebpf_read_dc_apps_table(int maps_per_core, uint32_t max_period)
+static void ebpf_read_dc_apps_table(int maps_per_core)
{
netdata_dcstat_pid_t *cv = dcstat_vector;
int fd = dcstat_maps[NETDATA_DCSTAT_PID_STATS].map_fd;
@@ -644,9 +639,6 @@ void ebpf_dc_resume_apps_data()
*/
void *ebpf_read_dcstat_thread(void *ptr)
{
- heartbeat_t hb;
- heartbeat_init(&hb);
-
ebpf_module_t *em = (ebpf_module_t *)ptr;
int maps_per_core = em->maps_per_core;
@@ -659,16 +651,16 @@ void *ebpf_read_dcstat_thread(void *ptr)
uint32_t lifetime = em->lifetime;
uint32_t running_time = 0;
- usec_t period = update_every * USEC_PER_SEC;
- uint32_t max_period = EBPF_CLEANUP_FACTOR;
pids_fd[EBPF_PIDS_DCSTAT_IDX] = dcstat_maps[NETDATA_DCSTAT_PID_STATS].map_fd;
+ heartbeat_t hb;
+ heartbeat_init(&hb, update_every * USEC_PER_SEC);
while (!ebpf_plugin_stop() && running_time < lifetime) {
- (void)heartbeat_next(&hb, period);
+ (void)heartbeat_next(&hb);
if (ebpf_plugin_stop() || ++counter != update_every)
continue;
pthread_mutex_lock(&collect_data_mutex);
- ebpf_read_dc_apps_table(maps_per_core, max_period);
+ ebpf_read_dc_apps_table(maps_per_core);
ebpf_dc_resume_apps_data();
pthread_mutex_unlock(&collect_data_mutex);
@@ -1271,7 +1263,7 @@ static void dcstat_collector(ebpf_module_t *em)
int cgroups = em->cgroup_charts;
int update_every = em->update_every;
heartbeat_t hb;
- heartbeat_init(&hb);
+ heartbeat_init(&hb, USEC_PER_SEC);
int counter = update_every - 1;
int maps_per_core = em->maps_per_core;
uint32_t running_time = 0;
@@ -1279,7 +1271,7 @@ static void dcstat_collector(ebpf_module_t *em)
netdata_idx_t *stats = em->hash_table_stats;
memset(stats, 0, sizeof(em->hash_table_stats));
while (!ebpf_plugin_stop() && running_time < lifetime) {
- (void)heartbeat_next(&hb, USEC_PER_SEC);
+ heartbeat_next(&hb);
if (ebpf_plugin_stop() || ++counter != update_every)
continue;
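As in cachestat above, `dcstat_config` switches to `APPCONFIG_INITIALIZER`; every module in this patch gets the same treatment. The macro itself is not part of the diff, but judging from the initializers it replaces, it expands to the same designated-initializer list. A simplified, compilable illustration of the idiom, with stand-in field types for the libnetdata originals:

```c
#include <stddef.h>
#include <pthread.h>

/* stand-ins for the libnetdata types; only the shape matters here */
struct config_index {
    void *avl_root;
    pthread_rwlock_t rwlock;
};

struct config {
    void *first_section;
    void *last_section;
    pthread_mutex_t mutex;
    struct config_index index;
};

/* one shared initializer instead of five repeated lines per module */
#define APPCONFIG_INITIALIZER {                            \
        .first_section = NULL,                             \
        .last_section  = NULL,                             \
        .mutex = PTHREAD_MUTEX_INITIALIZER,                \
        .index = { .avl_root = NULL,                       \
                   .rwlock = PTHREAD_RWLOCK_INITIALIZER }, \
    }

struct config cachestat_config = APPCONFIG_INITIALIZER;
struct config dcstat_config    = APPCONFIG_INITIALIZER;
```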
diff --git a/src/collectors/ebpf.plugin/ebpf_disk.c b/src/collectors/ebpf.plugin/ebpf_disk.c
index 246f98702..3d9c5789c 100644
--- a/src/collectors/ebpf.plugin/ebpf_disk.c
+++ b/src/collectors/ebpf.plugin/ebpf_disk.c
@@ -6,11 +6,7 @@
#include "ebpf.h"
#include "ebpf_disk.h"
-struct config disk_config = { .first_section = NULL,
- .last_section = NULL,
- .mutex = NETDATA_MUTEX_INITIALIZER,
- .index = { .avl_tree = { .root = NULL, .compar = appconfig_section_compare },
- .rwlock = AVL_LOCK_INITIALIZER } };
+struct config disk_config = APPCONFIG_INITIALIZER;
static ebpf_local_maps_t disk_maps[] = {{.name = "tbl_disk_iocall", .internal_input = NETDATA_DISK_HISTOGRAM_LENGTH,
.user_input = 0, .type = NETDATA_EBPF_MAP_STATIC,
@@ -775,13 +771,13 @@ static void disk_collector(ebpf_module_t *em)
int update_every = em->update_every;
heartbeat_t hb;
- heartbeat_init(&hb);
+ heartbeat_init(&hb, USEC_PER_SEC);
int counter = update_every - 1;
int maps_per_core = em->maps_per_core;
uint32_t running_time = 0;
uint32_t lifetime = em->lifetime;
while (!ebpf_plugin_stop() && running_time < lifetime) {
- (void)heartbeat_next(&hb, USEC_PER_SEC);
+ heartbeat_next(&hb);
if (ebpf_plugin_stop() || ++counter != update_every)
continue;
diff --git a/src/collectors/ebpf.plugin/ebpf_fd.c b/src/collectors/ebpf.plugin/ebpf_fd.c
index 61a9595cc..256efa4fe 100644
--- a/src/collectors/ebpf.plugin/ebpf_fd.c
+++ b/src/collectors/ebpf.plugin/ebpf_fd.c
@@ -46,9 +46,7 @@ static ebpf_local_maps_t fd_maps[] = {{.name = "tbl_fd_pid", .internal_input = N
}};
-struct config fd_config = { .first_section = NULL, .last_section = NULL, .mutex = NETDATA_MUTEX_INITIALIZER,
- .index = {.avl_tree = { .root = NULL, .compar = appconfig_section_compare },
- .rwlock = AVL_LOCK_INITIALIZER } };
+struct config fd_config = APPCONFIG_INITIALIZER;
static netdata_idx_t fd_hash_values[NETDATA_FD_COUNTER];
static netdata_idx_t *fd_values = NULL;
@@ -683,9 +681,8 @@ static void fd_apps_accumulator(netdata_fd_stat_t *out, int maps_per_core)
* Read the apps table and store data inside the structure.
*
* @param maps_per_core do I need to read all cores?
- * @param max_period limit of iterations without updates before remove data from hash table
*/
-static void ebpf_read_fd_apps_table(int maps_per_core, uint32_t max_period)
+static void ebpf_read_fd_apps_table(int maps_per_core)
{
netdata_fd_stat_t *fv = fd_vector;
int fd = fd_maps[NETDATA_FD_PID_STATS].map_fd;
@@ -783,9 +780,6 @@ void ebpf_fd_resume_apps_data()
*/
void *ebpf_read_fd_thread(void *ptr)
{
- heartbeat_t hb;
- heartbeat_init(&hb);
-
ebpf_module_t *em = (ebpf_module_t *)ptr;
int maps_per_core = em->maps_per_core;
@@ -798,16 +792,17 @@ void *ebpf_read_fd_thread(void *ptr)
uint32_t lifetime = em->lifetime;
uint32_t running_time = 0;
- int period = USEC_PER_SEC;
- uint32_t max_period = EBPF_CLEANUP_FACTOR;
pids_fd[EBPF_PIDS_FD_IDX] = fd_maps[NETDATA_FD_PID_STATS].map_fd;
+
+ heartbeat_t hb;
+ heartbeat_init(&hb, USEC_PER_SEC);
while (!ebpf_plugin_stop() && running_time < lifetime) {
- (void)heartbeat_next(&hb, period);
+ heartbeat_next(&hb);
if (ebpf_plugin_stop() || ++counter != update_every)
continue;
pthread_mutex_lock(&collect_data_mutex);
- ebpf_read_fd_apps_table(maps_per_core, max_period);
+ ebpf_read_fd_apps_table(maps_per_core);
ebpf_fd_resume_apps_data();
pthread_mutex_unlock(&collect_data_mutex);
@@ -1217,8 +1212,6 @@ static void ebpf_fd_send_cgroup_data(ebpf_module_t *em)
static void fd_collector(ebpf_module_t *em)
{
int cgroups = em->cgroup_charts;
- heartbeat_t hb;
- heartbeat_init(&hb);
int update_every = em->update_every;
int counter = update_every - 1;
int maps_per_core = em->maps_per_core;
@@ -1226,8 +1219,10 @@ static void fd_collector(ebpf_module_t *em)
uint32_t lifetime = em->lifetime;
netdata_idx_t *stats = em->hash_table_stats;
memset(stats, 0, sizeof(em->hash_table_stats));
+ heartbeat_t hb;
+ heartbeat_init(&hb, USEC_PER_SEC);
while (!ebpf_plugin_stop() && running_time < lifetime) {
- (void)heartbeat_next(&hb, USEC_PER_SEC);
+ heartbeat_next(&hb);
if (ebpf_plugin_stop() || ++counter != update_every)
continue;
diff --git a/src/collectors/ebpf.plugin/ebpf_filesystem.c b/src/collectors/ebpf.plugin/ebpf_filesystem.c
index 1187b03e9..30f3c7460 100644
--- a/src/collectors/ebpf.plugin/ebpf_filesystem.c
+++ b/src/collectors/ebpf.plugin/ebpf_filesystem.c
@@ -2,11 +2,7 @@
#include "ebpf_filesystem.h"
-struct config fs_config = { .first_section = NULL,
- .last_section = NULL,
- .mutex = NETDATA_MUTEX_INITIALIZER,
- .index = { .avl_tree = { .root = NULL, .compar = appconfig_section_compare },
- .rwlock = AVL_LOCK_INITIALIZER } };
+struct config fs_config = APPCONFIG_INITIALIZER;
ebpf_local_maps_t ext4_maps[] = {{.name = "tbl_ext4", .internal_input = NETDATA_KEY_CALLS_SYNC,
.user_input = 0, .type = NETDATA_EBPF_MAP_STATIC,
@@ -984,13 +980,13 @@ static void ebpf_histogram_send_data()
static void filesystem_collector(ebpf_module_t *em)
{
int update_every = em->update_every;
- heartbeat_t hb;
- heartbeat_init(&hb);
int counter = update_every - 1;
uint32_t running_time = 0;
uint32_t lifetime = em->lifetime;
+ heartbeat_t hb;
+ heartbeat_init(&hb, USEC_PER_SEC);
while (!ebpf_plugin_stop() && running_time < lifetime) {
- (void)heartbeat_next(&hb, USEC_PER_SEC);
+ heartbeat_next(&hb);
if (ebpf_plugin_stop() || ++counter != update_every)
continue;
diff --git a/src/collectors/ebpf.plugin/ebpf_functions.c b/src/collectors/ebpf.plugin/ebpf_functions.c
index 8e9fb01ed..267159a40 100644
--- a/src/collectors/ebpf.plugin/ebpf_functions.c
+++ b/src/collectors/ebpf.plugin/ebpf_functions.c
@@ -287,7 +287,7 @@ static void ebpf_function_socket_manipulation(const char *transaction,
ebpf_module_t *em = &ebpf_modules[EBPF_MODULE_SOCKET_IDX];
char *words[PLUGINSD_MAX_WORDS] = {NULL};
- size_t num_words = quoted_strings_splitter_pluginsd(function, words, PLUGINSD_MAX_WORDS);
+ size_t num_words = quoted_strings_splitter_whitespace(function, words, PLUGINSD_MAX_WORDS);
const char *name;
int period = -1;
rw_spinlock_write_lock(&ebpf_judy_pid.index.rw_spinlock);
@@ -712,9 +712,9 @@ void *ebpf_function_thread(void *ptr)
pthread_mutex_unlock(&lock);
heartbeat_t hb;
- heartbeat_init(&hb);
+ heartbeat_init(&hb, USEC_PER_SEC);
while(!ebpf_plugin_stop()) {
- (void)heartbeat_next(&hb, USEC_PER_SEC);
+ heartbeat_next(&hb);
if (ebpf_plugin_stop()) {
break;
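The function handler now tokenizes with `quoted_strings_splitter_whitespace()` instead of the pluginsd variant. Both helpers live in libnetdata; the behavior sketched below (split in place on whitespace, keep double-quoted arguments together) is inferred from the call site, not copied from the real implementation:

```c
#include <ctype.h>
#include <stdio.h>
#include <stddef.h>

/* Inferred behavior of a whitespace splitter that respects double quotes:
 * writes NULs into buf and fills words[] with pointers into it. */
static size_t split_whitespace(char *buf, char **words, size_t max_words)
{
    size_t n = 0;
    while (*buf && n < max_words) {
        while (*buf && isspace((unsigned char)*buf))
            *buf++ = '\0';
        if (!*buf)
            break;
        if (*buf == '"') {                    /* quoted: up to the closing quote */
            words[n++] = ++buf;
            while (*buf && *buf != '"')
                buf++;
        } else {                              /* bare: up to the next blank */
            words[n++] = buf;
            while (*buf && !isspace((unsigned char)*buf))
                buf++;
        }
        if (*buf)
            *buf++ = '\0';
    }
    return n;
}

int main(void)
{
    char line[] = "ebpf_socket period:60 \"resolve hostnames\"";
    char *words[8];
    size_t n = split_whitespace(line, words, 8);
    for (size_t i = 0; i < n; i++)
        printf("word[%zu] = %s\n", i, words[i]);
    return 0;
}
```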
diff --git a/src/collectors/ebpf.plugin/ebpf_hardirq.c b/src/collectors/ebpf.plugin/ebpf_hardirq.c
index 911425e54..e7974ac05 100644
--- a/src/collectors/ebpf.plugin/ebpf_hardirq.c
+++ b/src/collectors/ebpf.plugin/ebpf_hardirq.c
@@ -3,11 +3,7 @@
#include "ebpf.h"
#include "ebpf_hardirq.h"
-struct config hardirq_config = { .first_section = NULL,
- .last_section = NULL,
- .mutex = NETDATA_MUTEX_INITIALIZER,
- .index = { .avl_tree = { .root = NULL, .compar = appconfig_section_compare },
- .rwlock = AVL_LOCK_INITIALIZER } };
+struct config hardirq_config = APPCONFIG_INITIALIZER;
static ebpf_local_maps_t hardirq_maps[] = {
{
@@ -575,15 +571,15 @@ static void hardirq_collector(ebpf_module_t *em)
pthread_mutex_unlock(&lock);
// loop and read from published data until ebpf plugin is closed.
- heartbeat_t hb;
- heartbeat_init(&hb);
int update_every = em->update_every;
int counter = update_every - 1;
//This will be cancelled by its parent
uint32_t running_time = 0;
uint32_t lifetime = em->lifetime;
+ heartbeat_t hb;
+ heartbeat_init(&hb, USEC_PER_SEC);
while (!ebpf_plugin_stop() && running_time < lifetime) {
- (void)heartbeat_next(&hb, USEC_PER_SEC);
+ heartbeat_next(&hb);
if (ebpf_plugin_stop() || ++counter != update_every)
continue;
diff --git a/src/collectors/ebpf.plugin/ebpf_mdflush.c b/src/collectors/ebpf.plugin/ebpf_mdflush.c
index 77c109bff..3d70b7792 100644
--- a/src/collectors/ebpf.plugin/ebpf_mdflush.c
+++ b/src/collectors/ebpf.plugin/ebpf_mdflush.c
@@ -3,11 +3,7 @@
#include "ebpf.h"
#include "ebpf_mdflush.h"
-struct config mdflush_config = { .first_section = NULL,
- .last_section = NULL,
- .mutex = NETDATA_MUTEX_INITIALIZER,
- .index = { .avl_tree = { .root = NULL, .compar = appconfig_section_compare },
- .rwlock = AVL_LOCK_INITIALIZER } };
+struct config mdflush_config = APPCONFIG_INITIALIZER;
#define MDFLUSH_MAP_COUNT 0
static ebpf_local_maps_t mdflush_maps[] = {
@@ -341,14 +337,14 @@ static void mdflush_collector(ebpf_module_t *em)
pthread_mutex_unlock(&lock);
// loop and read from published data until ebpf plugin is closed.
- heartbeat_t hb;
- heartbeat_init(&hb);
int counter = update_every - 1;
int maps_per_core = em->maps_per_core;
uint32_t running_time = 0;
uint32_t lifetime = em->lifetime;
+ heartbeat_t hb;
+ heartbeat_init(&hb, USEC_PER_SEC);
while (!ebpf_plugin_stop() && running_time < lifetime) {
- (void)heartbeat_next(&hb, USEC_PER_SEC);
+ heartbeat_next(&hb);
if (ebpf_plugin_stop() || ++counter != update_every)
continue;
diff --git a/src/collectors/ebpf.plugin/ebpf_mount.c b/src/collectors/ebpf.plugin/ebpf_mount.c
index 7441cc6e2..4e310c8a6 100644
--- a/src/collectors/ebpf.plugin/ebpf_mount.c
+++ b/src/collectors/ebpf.plugin/ebpf_mount.c
@@ -22,9 +22,7 @@ static char *mount_dimension_name[NETDATA_EBPF_MOUNT_SYSCALL] = { "mount", "umou
static netdata_syscall_stat_t mount_aggregated_data[NETDATA_EBPF_MOUNT_SYSCALL];
static netdata_publish_syscall_t mount_publish_aggregated[NETDATA_EBPF_MOUNT_SYSCALL];
-struct config mount_config = { .first_section = NULL, .last_section = NULL, .mutex = NETDATA_MUTEX_INITIALIZER,
- .index = {.avl_tree = { .root = NULL, .compar = appconfig_section_compare },
- .rwlock = AVL_LOCK_INITIALIZER } };
+struct config mount_config = APPCONFIG_INITIALIZER;
static netdata_idx_t mount_hash_values[NETDATA_MOUNT_END];
@@ -363,15 +361,15 @@ static void mount_collector(ebpf_module_t *em)
{
memset(mount_hash_values, 0, sizeof(mount_hash_values));
- heartbeat_t hb;
- heartbeat_init(&hb);
int update_every = em->update_every;
int counter = update_every - 1;
int maps_per_core = em->maps_per_core;
uint32_t running_time = 0;
uint32_t lifetime = em->lifetime;
+ heartbeat_t hb;
+ heartbeat_init(&hb, USEC_PER_SEC);
while (!ebpf_plugin_stop() && running_time < lifetime) {
- (void)heartbeat_next(&hb, USEC_PER_SEC);
+ heartbeat_next(&hb);
if (ebpf_plugin_stop() || ++counter != update_every)
continue;
diff --git a/src/collectors/ebpf.plugin/ebpf_oomkill.c b/src/collectors/ebpf.plugin/ebpf_oomkill.c
index 34361550b..d32095abc 100644
--- a/src/collectors/ebpf.plugin/ebpf_oomkill.c
+++ b/src/collectors/ebpf.plugin/ebpf_oomkill.c
@@ -3,11 +3,7 @@
#include "ebpf.h"
#include "ebpf_oomkill.h"
-struct config oomkill_config = { .first_section = NULL,
- .last_section = NULL,
- .mutex = NETDATA_MUTEX_INITIALIZER,
- .index = { .avl_tree = { .root = NULL, .compar = appconfig_section_compare },
- .rwlock = AVL_LOCK_INITIALIZER } };
+struct config oomkill_config = APPCONFIG_INITIALIZER;
#define OOMKILL_MAP_KILLCNT 0
static ebpf_local_maps_t oomkill_maps[] = {
@@ -463,14 +459,14 @@ static void oomkill_collector(ebpf_module_t *em)
memset(keys, 0, sizeof(keys));
// loop and read until ebpf plugin is closed.
- heartbeat_t hb;
- heartbeat_init(&hb);
int counter = update_every - 1;
uint32_t running_time = 0;
uint32_t lifetime = em->lifetime;
netdata_idx_t *stats = em->hash_table_stats;
+ heartbeat_t hb;
+ heartbeat_init(&hb, USEC_PER_SEC);
while (!ebpf_plugin_stop() && running_time < lifetime) {
- (void)heartbeat_next(&hb, USEC_PER_SEC);
+ (void)heartbeat_next(&hb);
if (ebpf_plugin_stop() || ++counter != update_every)
continue;
diff --git a/src/collectors/ebpf.plugin/ebpf_process.c b/src/collectors/ebpf.plugin/ebpf_process.c
index d2810f899..d80f7a3e8 100644
--- a/src/collectors/ebpf.plugin/ebpf_process.c
+++ b/src/collectors/ebpf.plugin/ebpf_process.c
@@ -57,11 +57,7 @@ ebpf_process_stat_t *process_stat_vector = NULL;
static netdata_syscall_stat_t process_aggregated_data[NETDATA_KEY_PUBLISH_PROCESS_END];
static netdata_publish_syscall_t process_publish_aggregated[NETDATA_KEY_PUBLISH_PROCESS_END];
-struct config process_config = { .first_section = NULL,
- .last_section = NULL,
- .mutex = NETDATA_MUTEX_INITIALIZER,
- .index = { .avl_tree = { .root = NULL, .compar = appconfig_section_compare },
- .rwlock = AVL_LOCK_INITIALIZER } };
+struct config process_config = APPCONFIG_INITIALIZER;
/*****************************************************************
*
@@ -1124,8 +1120,6 @@ void ebpf_process_update_cgroup_algorithm()
*/
static void process_collector(ebpf_module_t *em)
{
- heartbeat_t hb;
- heartbeat_init(&hb);
int publish_global = em->global_charts;
int cgroups = em->cgroup_charts;
pthread_mutex_lock(&ebpf_exit_cleanup);
@@ -1141,9 +1135,11 @@ static void process_collector(ebpf_module_t *em)
uint32_t lifetime = em->lifetime;
netdata_idx_t *stats = em->hash_table_stats;
memset(stats, 0, sizeof(em->hash_table_stats));
+ heartbeat_t hb;
+ heartbeat_init(&hb, USEC_PER_SEC);
while (!ebpf_plugin_stop() && running_time < lifetime) {
- usec_t dt = heartbeat_next(&hb, USEC_PER_SEC);
- (void)dt;
+ heartbeat_next(&hb);
+
if (ebpf_plugin_stop())
break;
diff --git a/src/collectors/ebpf.plugin/ebpf_shm.c b/src/collectors/ebpf.plugin/ebpf_shm.c
index ac44549b2..6282a2547 100644
--- a/src/collectors/ebpf.plugin/ebpf_shm.c
+++ b/src/collectors/ebpf.plugin/ebpf_shm.c
@@ -12,11 +12,7 @@ netdata_ebpf_shm_t *shm_vector = NULL;
static netdata_idx_t shm_hash_values[NETDATA_SHM_END];
static netdata_idx_t *shm_values = NULL;
-struct config shm_config = { .first_section = NULL,
- .last_section = NULL,
- .mutex = NETDATA_MUTEX_INITIALIZER,
- .index = { .avl_tree = { .root = NULL, .compar = appconfig_section_compare },
- .rwlock = AVL_LOCK_INITIALIZER } };
+struct config shm_config = APPCONFIG_INITIALIZER;
static ebpf_local_maps_t shm_maps[] = {{.name = "tbl_pid_shm", .internal_input = ND_EBPF_DEFAULT_PID_SIZE,
.user_input = 0,
@@ -569,9 +565,8 @@ static void ebpf_update_shm_cgroup()
* Read the apps table and store data inside the structure.
*
* @param maps_per_core do I need to read all cores?
- * @param max_period limit of iterations without updates before remove data from hash table
*/
-static void ebpf_read_shm_apps_table(int maps_per_core, uint32_t max_period)
+static void ebpf_read_shm_apps_table(int maps_per_core)
{
netdata_ebpf_shm_t *cv = shm_vector;
int fd = shm_maps[NETDATA_PID_SHM_TABLE].map_fd;
@@ -1063,9 +1058,6 @@ void ebpf_shm_resume_apps_data() {
*/
void *ebpf_read_shm_thread(void *ptr)
{
- heartbeat_t hb;
- heartbeat_init(&hb);
-
ebpf_module_t *em = (ebpf_module_t *)ptr;
int maps_per_core = em->maps_per_core;
@@ -1078,16 +1070,16 @@ void *ebpf_read_shm_thread(void *ptr)
uint32_t lifetime = em->lifetime;
uint32_t running_time = 0;
- usec_t period = update_every * USEC_PER_SEC;
- uint32_t max_period = EBPF_CLEANUP_FACTOR;
pids_fd[EBPF_PIDS_SHM_IDX] = shm_maps[NETDATA_PID_SHM_TABLE].map_fd;
+ heartbeat_t hb;
+ heartbeat_init(&hb, update_every * USEC_PER_SEC);
while (!ebpf_plugin_stop() && running_time < lifetime) {
- (void)heartbeat_next(&hb, period);
+ (void)heartbeat_next(&hb);
if (ebpf_plugin_stop() || ++counter != update_every)
continue;
pthread_mutex_lock(&collect_data_mutex);
- ebpf_read_shm_apps_table(maps_per_core, max_period);
+ ebpf_read_shm_apps_table(maps_per_core);
ebpf_shm_resume_apps_data();
pthread_mutex_unlock(&collect_data_mutex);
@@ -1113,16 +1105,17 @@ static void shm_collector(ebpf_module_t *em)
{
int cgroups = em->cgroup_charts;
int update_every = em->update_every;
- heartbeat_t hb;
- heartbeat_init(&hb);
int counter = update_every - 1;
int maps_per_core = em->maps_per_core;
uint32_t running_time = 0;
uint32_t lifetime = em->lifetime;
netdata_idx_t *stats = em->hash_table_stats;
memset(stats, 0, sizeof(em->hash_table_stats));
+ heartbeat_t hb;
+ heartbeat_init(&hb, USEC_PER_SEC);
while (!ebpf_plugin_stop() && running_time < lifetime) {
- (void)heartbeat_next(&hb, USEC_PER_SEC);
+ heartbeat_next(&hb);
+
if (ebpf_plugin_stop() || ++counter != update_every)
continue;
diff --git a/src/collectors/ebpf.plugin/ebpf_socket.c b/src/collectors/ebpf.plugin/ebpf_socket.c
index 5b87a3256..f0d376f43 100644
--- a/src/collectors/ebpf.plugin/ebpf_socket.c
+++ b/src/collectors/ebpf.plugin/ebpf_socket.c
@@ -77,11 +77,7 @@ netdata_socket_t *socket_values;
ebpf_network_viewer_port_list_t *listen_ports = NULL;
ebpf_addresses_t tcp_v6_connect_address = {.function = "tcp_v6_connect", .hash = 0, .addr = 0, .type = 0};
-struct config socket_config = { .first_section = NULL,
- .last_section = NULL,
- .mutex = NETDATA_MUTEX_INITIALIZER,
- .index = { .avl_tree = { .root = NULL, .compar = appconfig_section_compare },
- .rwlock = AVL_LOCK_INITIALIZER } };
+struct config socket_config = APPCONFIG_INITIALIZER;
netdata_ebpf_targets_t socket_targets[] = { {.name = "inet_csk_accept", .mode = EBPF_LOAD_PROBE},
{.name = "tcp_retransmit_skb", .mode = EBPF_LOAD_PROBE},
@@ -1815,9 +1811,6 @@ void ebpf_socket_resume_apps_data()
*/
void *ebpf_read_socket_thread(void *ptr)
{
- heartbeat_t hb;
- heartbeat_init(&hb);
-
ebpf_module_t *em = (ebpf_module_t *)ptr;
ebpf_update_array_vectors(em);
@@ -1830,9 +1823,10 @@ void *ebpf_read_socket_thread(void *ptr)
uint32_t running_time = 0;
uint32_t lifetime = em->lifetime;
- usec_t period = update_every * USEC_PER_SEC;
+ heartbeat_t hb;
+ heartbeat_init(&hb, update_every * USEC_PER_SEC);
while (!ebpf_plugin_stop() && running_time < lifetime) {
- (void)heartbeat_next(&hb, period);
+ heartbeat_next(&hb);
if (ebpf_plugin_stop() || ++counter != update_every)
continue;
@@ -2612,9 +2606,6 @@ static void ebpf_socket_send_cgroup_data(int update_every)
*/
static void socket_collector(ebpf_module_t *em)
{
- heartbeat_t hb;
- heartbeat_init(&hb);
-
int cgroups = em->cgroup_charts;
if (cgroups)
ebpf_socket_update_cgroup_algorithm();
@@ -2627,8 +2618,10 @@ static void socket_collector(ebpf_module_t *em)
uint32_t lifetime = em->lifetime;
netdata_idx_t *stats = em->hash_table_stats;
memset(stats, 0, sizeof(em->hash_table_stats));
+ heartbeat_t hb;
+ heartbeat_init(&hb, USEC_PER_SEC);
while (!ebpf_plugin_stop() && running_time < lifetime) {
- (void)heartbeat_next(&hb, USEC_PER_SEC);
+ heartbeat_next(&hb);
if (ebpf_plugin_stop() || ++counter != update_every)
continue;
@@ -2708,7 +2701,7 @@ static void ebpf_socket_initialize_global_vectors()
* @param hash the calculated hash for the dimension name.
* @param name the dimension name.
*/
-static void ebpf_link_dimension_name(char *port, uint32_t hash, char *value)
+static void ebpf_link_dimension_name(const char *port, uint32_t hash, const char *value)
{
int test = str2i(port);
if (test < NETDATA_MINIMUM_PORT_VALUE || test > NETDATA_MAXIMUM_PORT_VALUE){
@@ -2753,15 +2746,15 @@ static void ebpf_link_dimension_name(char *port, uint32_t hash, char *value)
*
* @param cfg the configuration structure
*/
+
+static bool config_service_value_cb(void *data __maybe_unused, const char *name, const char *value) {
+ ebpf_link_dimension_name(name, simple_hash(name), value);
+ return true;
+}
+
void ebpf_parse_service_name_section(struct config *cfg)
{
- struct section *co = appconfig_get_section(cfg, EBPF_SERVICE_NAME_SECTION);
- if (co) {
- struct config_option *cv;
- for (cv = co->values; cv ; cv = cv->next) {
- ebpf_link_dimension_name(cv->name, cv->hash, cv->value);
- }
- }
+ appconfig_foreach_value_in_section(cfg, EBPF_SERVICE_NAME_SECTION, config_service_value_cb, NULL);
// Always associate the default port to Netdata
ebpf_network_viewer_dim_name_t *names = network_viewer_opt.names;
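`ebpf_parse_service_name_section()` stops walking appconfig's internal section/option lists and instead passes `appconfig_foreach_value_in_section()` a callback. The inversion looks like this; the iterator below is a stub so the example stands alone (the real one takes `struct config` plus a section name and handles locking internally):

```c
#include <stdbool.h>
#include <stdio.h>

typedef bool (*appconfig_value_cb)(void *data, const char *name, const char *value);

/* stub standing in for appconfig_foreach_value_in_section(cfg, section, cb, data) */
static void foreach_value_in_section(const char *(*options)[2], int n,
                                     appconfig_value_cb cb, void *data)
{
    for (int i = 0; i < n; i++)
        if (!cb(data, options[i][0], options[i][1])) /* false stops the walk */
            break;
}

static bool service_value_cb(void *data, const char *name, const char *value)
{
    (void)data;
    printf("dimension %s for port %s\n", value, name); /* real cb: ebpf_link_dimension_name() */
    return true;
}

int main(void)
{
    const char *services[][2] = { { "53", "dns" }, { "19999", "netdata" } };
    foreach_value_in_section(services, 2, service_value_cb, NULL);
    return 0;
}
```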
diff --git a/src/collectors/ebpf.plugin/ebpf_socket.h b/src/collectors/ebpf.plugin/ebpf_socket.h
index e01126035..a236985eb 100644
--- a/src/collectors/ebpf.plugin/ebpf_socket.h
+++ b/src/collectors/ebpf.plugin/ebpf_socket.h
@@ -339,8 +339,8 @@ extern ebpf_network_viewer_port_list_t *listen_ports;
void update_listen_table(uint16_t value, uint16_t proto, netdata_passive_connection_t *values);
void ebpf_fill_ip_list_unsafe(ebpf_network_viewer_ip_list_t **out, ebpf_network_viewer_ip_list_t *in, char *table);
void ebpf_parse_service_name_section(struct config *cfg);
-void ebpf_parse_ips_unsafe(char *ptr);
-void ebpf_parse_ports(char *ptr);
+void ebpf_parse_ips_unsafe(const char *ptr);
+void ebpf_parse_ports(const char *ptr);
void ebpf_socket_read_open_connections(BUFFER *buf, struct ebpf_module *em);
void ebpf_socket_fill_publish_apps(ebpf_socket_publish_apps_t *curr, netdata_socket_t *ns);
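With these declarations the parsing entry points take `const char *`, matching what `appconfig_get()` now returns. Parsers that must tokenize in place follow the `ebpf_parse_port_list()` pattern from the ebpf.c hunk: copy the const input to a stack buffer first. A minimal sketch of that pattern, with `strtok_r()` standing in for the plugin's own scanning:

```c
#include <stdio.h>
#include <string.h>

static void parse_ports(const char *ptr)
{
    if (!ptr)
        return;

    char copy[strlen(ptr) + 1];      /* mutable VLA copy of the const input */
    memcpy(copy, ptr, sizeof(copy)); /* includes the terminating NUL */

    char *save = NULL;
    for (char *tok = strtok_r(copy, " ", &save); tok; tok = strtok_r(NULL, " ", &save))
        printf("port token: %s\n", tok); /* real code resolves ranges/services */
}

int main(void)
{
    parse_ports("53 8080 19999-20000");
    return 0;
}
```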
diff --git a/src/collectors/ebpf.plugin/ebpf_softirq.c b/src/collectors/ebpf.plugin/ebpf_softirq.c
index 21bd83a3e..19c495eea 100644
--- a/src/collectors/ebpf.plugin/ebpf_softirq.c
+++ b/src/collectors/ebpf.plugin/ebpf_softirq.c
@@ -3,11 +3,7 @@
#include "ebpf.h"
#include "ebpf_softirq.h"
-struct config softirq_config = { .first_section = NULL,
- .last_section = NULL,
- .mutex = NETDATA_MUTEX_INITIALIZER,
- .index = { .avl_tree = { .root = NULL, .compar = appconfig_section_compare },
- .rwlock = AVL_LOCK_INITIALIZER } };
+struct config softirq_config = APPCONFIG_INITIALIZER;
#define SOFTIRQ_MAP_LATENCY 0
static ebpf_local_maps_t softirq_maps[] = {
@@ -213,7 +209,7 @@ static void softirq_collector(ebpf_module_t *em)
// loop and read from published data until ebpf plugin is closed.
heartbeat_t hb;
- heartbeat_init(&hb);
+ heartbeat_init(&hb, USEC_PER_SEC);
int update_every = em->update_every;
int counter = update_every - 1;
int maps_per_core = em->maps_per_core;
@@ -221,7 +217,7 @@ static void softirq_collector(ebpf_module_t *em)
uint32_t running_time = 0;
uint32_t lifetime = em->lifetime;
while (!ebpf_plugin_stop() && running_time < lifetime) {
- (void)heartbeat_next(&hb, USEC_PER_SEC);
+ heartbeat_next(&hb);
if (ebpf_plugin_stop() || ++counter != update_every)
continue;
diff --git a/src/collectors/ebpf.plugin/ebpf_swap.c b/src/collectors/ebpf.plugin/ebpf_swap.c
index 933353178..3be56cfa4 100644
--- a/src/collectors/ebpf.plugin/ebpf_swap.c
+++ b/src/collectors/ebpf.plugin/ebpf_swap.c
@@ -12,11 +12,7 @@ static netdata_idx_t *swap_values = NULL;
netdata_ebpf_swap_t *swap_vector = NULL;
-struct config swap_config = { .first_section = NULL,
- .last_section = NULL,
- .mutex = NETDATA_MUTEX_INITIALIZER,
- .index = { .avl_tree = { .root = NULL, .compar = appconfig_section_compare },
- .rwlock = AVL_LOCK_INITIALIZER } };
+struct config swap_config = APPCONFIG_INITIALIZER;
static ebpf_local_maps_t swap_maps[] = {{.name = "tbl_pid_swap", .internal_input = ND_EBPF_DEFAULT_PID_SIZE,
.user_input = 0,
@@ -543,9 +539,8 @@ void ebpf_swap_resume_apps_data() {
* Read the apps table and store data inside the structure.
*
* @param maps_per_core do I need to read all cores?
- * @param max_period limit of iterations without updates before remove data from hash table
*/
-static void ebpf_read_swap_apps_table(int maps_per_core, uint32_t max_period)
+static void ebpf_read_swap_apps_table(int maps_per_core)
{
netdata_ebpf_swap_t *cv = swap_vector;
int fd = swap_maps[NETDATA_PID_SWAP_TABLE].map_fd;
@@ -597,9 +592,6 @@ end_swap_loop:
*/
void *ebpf_read_swap_thread(void *ptr)
{
- heartbeat_t hb;
- heartbeat_init(&hb);
-
ebpf_module_t *em = (ebpf_module_t *)ptr;
int maps_per_core = em->maps_per_core;
@@ -612,17 +604,17 @@ void *ebpf_read_swap_thread(void *ptr)
uint32_t lifetime = em->lifetime;
uint32_t running_time = 0;
- usec_t period = update_every * USEC_PER_SEC;
- uint32_t max_period = EBPF_CLEANUP_FACTOR;
pids_fd[EBPF_PIDS_SWAP_IDX] = swap_maps[NETDATA_PID_SWAP_TABLE].map_fd;
+ heartbeat_t hb;
+ heartbeat_init(&hb, update_every * USEC_PER_SEC);
while (!ebpf_plugin_stop() && running_time < lifetime) {
- (void)heartbeat_next(&hb, period);
+ heartbeat_next(&hb);
if (ebpf_plugin_stop() || ++counter != update_every)
continue;
pthread_mutex_lock(&collect_data_mutex);
- ebpf_read_swap_apps_table(maps_per_core, max_period);
+ ebpf_read_swap_apps_table(maps_per_core);
ebpf_swap_resume_apps_data();
pthread_mutex_unlock(&collect_data_mutex);
@@ -930,16 +922,17 @@ static void swap_collector(ebpf_module_t *em)
{
int cgroup = em->cgroup_charts;
int update_every = em->update_every;
- heartbeat_t hb;
- heartbeat_init(&hb);
int counter = update_every - 1;
int maps_per_core = em->maps_per_core;
uint32_t running_time = 0;
uint32_t lifetime = em->lifetime;
netdata_idx_t *stats = em->hash_table_stats;
memset(stats, 0, sizeof(em->hash_table_stats));
+
+ heartbeat_t hb;
+ heartbeat_init(&hb, USEC_PER_SEC);
while (!ebpf_plugin_stop() && running_time < lifetime) {
- (void)heartbeat_next(&hb, USEC_PER_SEC);
+ (void)heartbeat_next(&hb);
if (ebpf_plugin_stop() || ++counter != update_every)
continue;
diff --git a/src/collectors/ebpf.plugin/ebpf_sync.c b/src/collectors/ebpf.plugin/ebpf_sync.c
index 2be9192c5..094de7019 100644
--- a/src/collectors/ebpf.plugin/ebpf_sync.c
+++ b/src/collectors/ebpf.plugin/ebpf_sync.c
@@ -100,11 +100,7 @@ ebpf_local_maps_t sync_file_range_maps[] = {{.name = "tbl_syncfr", .internal_inp
#endif
}};
-struct config sync_config = { .first_section = NULL,
- .last_section = NULL,
- .mutex = NETDATA_MUTEX_INITIALIZER,
- .index = { .avl_tree = { .root = NULL, .compar = appconfig_section_compare },
- .rwlock = AVL_LOCK_INITIALIZER } };
+struct config sync_config = APPCONFIG_INITIALIZER;
netdata_ebpf_targets_t sync_targets[] = { {.name = NETDATA_SYSCALLS_SYNC, .mode = EBPF_LOAD_TRAMPOLINE},
{.name = NETDATA_SYSCALLS_SYNCFS, .mode = EBPF_LOAD_TRAMPOLINE},
@@ -558,15 +554,15 @@ static void sync_send_data()
*/
static void sync_collector(ebpf_module_t *em)
{
- heartbeat_t hb;
- heartbeat_init(&hb);
int update_every = em->update_every;
int counter = update_every - 1;
int maps_per_core = em->maps_per_core;
uint32_t running_time = 0;
uint32_t lifetime = em->lifetime;
+ heartbeat_t hb;
+ heartbeat_init(&hb, USEC_PER_SEC);
while (!ebpf_plugin_stop() && running_time < lifetime) {
- (void)heartbeat_next(&hb, USEC_PER_SEC);
+ heartbeat_next(&hb);
if (ebpf_plugin_stop() || ++counter != update_every)
continue;
diff --git a/src/collectors/ebpf.plugin/ebpf_vfs.c b/src/collectors/ebpf.plugin/ebpf_vfs.c
index cf1f50e99..c0c1bee38 100644
--- a/src/collectors/ebpf.plugin/ebpf_vfs.c
+++ b/src/collectors/ebpf.plugin/ebpf_vfs.c
@@ -52,11 +52,7 @@ struct netdata_static_thread ebpf_read_vfs = {
.start_routine = NULL
};
-struct config vfs_config = { .first_section = NULL,
- .last_section = NULL,
- .mutex = NETDATA_MUTEX_INITIALIZER,
- .index = { .avl_tree = { .root = NULL, .compar = appconfig_section_compare },
- .rwlock = AVL_LOCK_INITIALIZER } };
+struct config vfs_config = APPCONFIG_INITIALIZER;
netdata_ebpf_targets_t vfs_targets[] = { {.name = "vfs_write", .mode = EBPF_LOAD_TRAMPOLINE},
{.name = "vfs_writev", .mode = EBPF_LOAD_TRAMPOLINE},
@@ -2064,9 +2060,6 @@ void ebpf_vfs_resume_apps_data() {
*/
void *ebpf_read_vfs_thread(void *ptr)
{
- heartbeat_t hb;
- heartbeat_init(&hb);
-
ebpf_module_t *em = (ebpf_module_t *)ptr;
int maps_per_core = em->maps_per_core;
@@ -2079,11 +2072,12 @@ void *ebpf_read_vfs_thread(void *ptr)
uint32_t lifetime = em->lifetime;
uint32_t running_time = 0;
- usec_t period = update_every * USEC_PER_SEC;
uint32_t max_period = EBPF_CLEANUP_FACTOR;
pids_fd[EBPF_PIDS_VFS_IDX] = vfs_maps[NETDATA_VFS_PID].map_fd;
+ heartbeat_t hb;
+ heartbeat_init(&hb, update_every * USEC_PER_SEC);
while (!ebpf_plugin_stop() && running_time < lifetime) {
- (void)heartbeat_next(&hb, period);
+ heartbeat_next(&hb);
if (ebpf_plugin_stop() || ++counter != update_every)
continue;
@@ -2116,8 +2110,6 @@ void *ebpf_read_vfs_thread(void *ptr)
static void vfs_collector(ebpf_module_t *em)
{
int cgroups = em->cgroup_charts;
- heartbeat_t hb;
- heartbeat_init(&hb);
int update_every = em->update_every;
int counter = update_every - 1;
int maps_per_core = em->maps_per_core;
@@ -2125,8 +2117,10 @@ static void vfs_collector(ebpf_module_t *em)
uint32_t lifetime = em->lifetime;
netdata_idx_t *stats = em->hash_table_stats;
memset(stats, 0, sizeof(em->hash_table_stats));
+ heartbeat_t hb;
+ heartbeat_init(&hb, USEC_PER_SEC);
while (!ebpf_plugin_stop() && running_time < lifetime) {
- (void)heartbeat_next(&hb, USEC_PER_SEC);
+ heartbeat_next(&hb);
if (ebpf_plugin_stop() || ++counter != update_every)
continue;
diff --git a/src/collectors/ebpf.plugin/integrations/ebpf_cachestat.md b/src/collectors/ebpf.plugin/integrations/ebpf_cachestat.md
index 352bc0721..4bfb238ba 100644
--- a/src/collectors/ebpf.plugin/integrations/ebpf_cachestat.md
+++ b/src/collectors/ebpf.plugin/integrations/ebpf_cachestat.md
@@ -145,8 +145,8 @@ Now follow steps:
The configuration file name for this integration is `ebpf.d/cachestat.conf`.
-You can edit the configuration file using the `edit-config` script from the
-Netdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
+You can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
diff --git a/src/collectors/ebpf.plugin/integrations/ebpf_dcstat.md b/src/collectors/ebpf.plugin/integrations/ebpf_dcstat.md
index 5ca7a6a68..9e6f8ef32 100644
--- a/src/collectors/ebpf.plugin/integrations/ebpf_dcstat.md
+++ b/src/collectors/ebpf.plugin/integrations/ebpf_dcstat.md
@@ -143,8 +143,8 @@ Now follow steps:
The configuration file name for this integration is `ebpf.d/dcstat.conf`.
-You can edit the configuration file using the `edit-config` script from the
-Netdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
+You can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
diff --git a/src/collectors/ebpf.plugin/integrations/ebpf_disk.md b/src/collectors/ebpf.plugin/integrations/ebpf_disk.md
index 4fc3dc700..7dccc51c4 100644
--- a/src/collectors/ebpf.plugin/integrations/ebpf_disk.md
+++ b/src/collectors/ebpf.plugin/integrations/ebpf_disk.md
@@ -109,8 +109,8 @@ This thread needs to attach a tracepoint to monitor when a process schedule an e
The configuration file name for this integration is `ebpf.d/disk.conf`.
-You can edit the configuration file using the `edit-config` script from the
-Netdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
+You can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
diff --git a/src/collectors/ebpf.plugin/integrations/ebpf_filedescriptor.md b/src/collectors/ebpf.plugin/integrations/ebpf_filedescriptor.md
index 2f917d183..f9c9aa1a6 100644
--- a/src/collectors/ebpf.plugin/integrations/ebpf_filedescriptor.md
+++ b/src/collectors/ebpf.plugin/integrations/ebpf_filedescriptor.md
@@ -143,8 +143,8 @@ Now follow steps:
The configuration file name for this integration is `ebpf.d/fd.conf`.
-You can edit the configuration file using the `edit-config` script from the
-Netdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
+You can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
diff --git a/src/collectors/ebpf.plugin/integrations/ebpf_filesystem.md b/src/collectors/ebpf.plugin/integrations/ebpf_filesystem.md
index ea55a6c04..b4b8e490c 100644
--- a/src/collectors/ebpf.plugin/integrations/ebpf_filesystem.md
+++ b/src/collectors/ebpf.plugin/integrations/ebpf_filesystem.md
@@ -130,8 +130,8 @@ Now follow steps:
The configuration file name for this integration is `ebpf.d/filesystem.conf`.
-You can edit the configuration file using the `edit-config` script from the
-Netdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
+You can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
diff --git a/src/collectors/ebpf.plugin/integrations/ebpf_hardirq.md b/src/collectors/ebpf.plugin/integrations/ebpf_hardirq.md
index d5f79353f..8d77f9ee3 100644
--- a/src/collectors/ebpf.plugin/integrations/ebpf_hardirq.md
+++ b/src/collectors/ebpf.plugin/integrations/ebpf_hardirq.md
@@ -109,8 +109,8 @@ This thread needs to attach a tracepoint to monitor when a process schedule an e
The configuration file name for this integration is `ebpf.d/hardirq.conf`.
-You can edit the configuration file using the `edit-config` script from the
-Netdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
+You can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
diff --git a/src/collectors/ebpf.plugin/integrations/ebpf_mdflush.md b/src/collectors/ebpf.plugin/integrations/ebpf_mdflush.md
index 369e8958f..663557eca 100644
--- a/src/collectors/ebpf.plugin/integrations/ebpf_mdflush.md
+++ b/src/collectors/ebpf.plugin/integrations/ebpf_mdflush.md
@@ -104,8 +104,8 @@ Now follow steps:
The configuration file name for this integration is `ebpf.d/mdflush.conf`.
-You can edit the configuration file using the `edit-config` script from the
-Netdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
+You can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
diff --git a/src/collectors/ebpf.plugin/integrations/ebpf_mount.md b/src/collectors/ebpf.plugin/integrations/ebpf_mount.md
index 5e6738e2c..64dcaeacd 100644
--- a/src/collectors/ebpf.plugin/integrations/ebpf_mount.md
+++ b/src/collectors/ebpf.plugin/integrations/ebpf_mount.md
@@ -110,8 +110,8 @@ This thread needs to attach a tracepoint to monitor when a process schedule an e
The configuration file name for this integration is `ebpf.d/mount.conf`.
-You can edit the configuration file using the `edit-config` script from the
-Netdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
+You can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
diff --git a/src/collectors/ebpf.plugin/integrations/ebpf_oomkill.md b/src/collectors/ebpf.plugin/integrations/ebpf_oomkill.md
index d9e14f4fb..bc40c883b 100644
--- a/src/collectors/ebpf.plugin/integrations/ebpf_oomkill.md
+++ b/src/collectors/ebpf.plugin/integrations/ebpf_oomkill.md
@@ -126,8 +126,8 @@ This thread needs to attach a tracepoint to monitor when a process schedule an e
The configuration file name for this integration is `ebpf.d/oomkill.conf`.
-You can edit the configuration file using the `edit-config` script from the
-Netdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
+You can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
diff --git a/src/collectors/ebpf.plugin/integrations/ebpf_processes.md b/src/collectors/ebpf.plugin/integrations/ebpf_processes.md
index 8ff091da0..f3bc209d0 100644
--- a/src/collectors/ebpf.plugin/integrations/ebpf_processes.md
+++ b/src/collectors/ebpf.plugin/integrations/ebpf_processes.md
@@ -153,8 +153,8 @@ This thread needs to attach a tracepoint to monitor when a process schedule an e
The configuration file name for this integration is `ebpf.d/process.conf`.
-You can edit the configuration file using the `edit-config` script from the
-Netdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
+You can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
diff --git a/src/collectors/ebpf.plugin/integrations/ebpf_shm.md b/src/collectors/ebpf.plugin/integrations/ebpf_shm.md
index c65d3a85e..2e037ea30 100644
--- a/src/collectors/ebpf.plugin/integrations/ebpf_shm.md
+++ b/src/collectors/ebpf.plugin/integrations/ebpf_shm.md
@@ -147,8 +147,8 @@ This thread needs to attach a tracepoint to monitor when a process schedule an e
The configuration file name for this integration is `ebpf.d/shm.conf`.
-You can edit the configuration file using the `edit-config` script from the
-Netdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
+You can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
diff --git a/src/collectors/ebpf.plugin/integrations/ebpf_socket.md b/src/collectors/ebpf.plugin/integrations/ebpf_socket.md
index 917dcaba6..441e72963 100644
--- a/src/collectors/ebpf.plugin/integrations/ebpf_socket.md
+++ b/src/collectors/ebpf.plugin/integrations/ebpf_socket.md
@@ -162,8 +162,8 @@ Now follow steps:
The configuration file name for this integration is `ebpf.d/network.conf`.
-You can edit the configuration file using the `edit-config` script from the
-Netdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
+You can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
diff --git a/src/collectors/ebpf.plugin/integrations/ebpf_softirq.md b/src/collectors/ebpf.plugin/integrations/ebpf_softirq.md
index 1571dd4b5..e8214cff6 100644
--- a/src/collectors/ebpf.plugin/integrations/ebpf_softirq.md
+++ b/src/collectors/ebpf.plugin/integrations/ebpf_softirq.md
@@ -109,8 +109,8 @@ This thread needs to attach a tracepoint to monitor when a process schedule an e
The configuration file name for this integration is `ebpf.d/softirq.conf`.
-You can edit the configuration file using the `edit-config` script from the
-Netdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
+You can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
diff --git a/src/collectors/ebpf.plugin/integrations/ebpf_swap.md b/src/collectors/ebpf.plugin/integrations/ebpf_swap.md
index 4358ac71b..0fe6cd6ca 100644
--- a/src/collectors/ebpf.plugin/integrations/ebpf_swap.md
+++ b/src/collectors/ebpf.plugin/integrations/ebpf_swap.md
@@ -136,8 +136,8 @@ Now follow steps:
The configuration file name for this integration is `ebpf.d/swap.conf`.
-You can edit the configuration file using the `edit-config` script from the
-Netdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
+You can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
diff --git a/src/collectors/ebpf.plugin/integrations/ebpf_sync.md b/src/collectors/ebpf.plugin/integrations/ebpf_sync.md
index 08d69fada..237f340ed 100644
--- a/src/collectors/ebpf.plugin/integrations/ebpf_sync.md
+++ b/src/collectors/ebpf.plugin/integrations/ebpf_sync.md
@@ -117,8 +117,8 @@ This thread needs to attach a tracepoint to monitor when a process schedule an e
The configuration file name for this integration is `ebpf.d/sync.conf`.
-You can edit the configuration file using the `edit-config` script from the
-Netdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
+You can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata
diff --git a/src/collectors/ebpf.plugin/integrations/ebpf_vfs.md b/src/collectors/ebpf.plugin/integrations/ebpf_vfs.md
index 3adb00e9b..bf45d3858 100644
--- a/src/collectors/ebpf.plugin/integrations/ebpf_vfs.md
+++ b/src/collectors/ebpf.plugin/integrations/ebpf_vfs.md
@@ -178,8 +178,8 @@ Now follow steps:
The configuration file name for this integration is `ebpf.d/vfs.conf`.
-You can edit the configuration file using the `edit-config` script from the
-Netdata [config directory](/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
+You can edit the configuration file using the [`edit-config`](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#edit-a-configuration-file-using-edit-config) script from the
+Netdata [config directory](https://github.com/netdata/netdata/blob/master/docs/netdata-agent/configuration/README.md#the-netdata-config-directory).
```bash
cd /etc/netdata 2>/dev/null || cd /opt/netdata/etc/netdata