path: root/libnetdata/log
author     Daniel Baumann <daniel.baumann@progress-linux.org>  2024-04-19 02:57:58 +0000
committer  Daniel Baumann <daniel.baumann@progress-linux.org>  2024-04-19 02:57:58 +0000
commit     be1c7e50e1e8809ea56f2c9d472eccd8ffd73a97 (patch)
tree       9754ff1ca740f6346cf8483ec915d4054bc5da2d /libnetdata/log
parent     Initial commit. (diff)
Adding upstream version 1.44.3.
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat
-rw-r--r--  libnetdata/log/Makefile.am            |    9
-rw-r--r--  libnetdata/log/README.md              |  204
-rw-r--r--  libnetdata/log/journal.c              |  138
-rw-r--r--  libnetdata/log/journal.h              |   18
-rw-r--r--  libnetdata/log/log.c                  | 2431
-rw-r--r--  libnetdata/log/log.h                  |  301
-rw-r--r--  libnetdata/log/systemd-cat-native.c   |  820
-rw-r--r--  libnetdata/log/systemd-cat-native.h   |    8
-rw-r--r--  libnetdata/log/systemd-cat-native.md  |  209
9 files changed, 4138 insertions, 0 deletions
diff --git a/libnetdata/log/Makefile.am b/libnetdata/log/Makefile.am
new file mode 100644
index 00000000..a02b8ebd
--- /dev/null
+++ b/libnetdata/log/Makefile.am
@@ -0,0 +1,9 @@
+# SPDX-License-Identifier: GPL-3.0-or-later
+
+AUTOMAKE_OPTIONS = subdir-objects
+MAINTAINERCLEANFILES = $(srcdir)/Makefile.in
+
+dist_noinst_DATA = \
+ README.md \
+ systemd-cat-native.md \
+ $(NULL)
diff --git a/libnetdata/log/README.md b/libnetdata/log/README.md
new file mode 100644
index 00000000..d9ed6437
--- /dev/null
+++ b/libnetdata/log/README.md
@@ -0,0 +1,204 @@
+<!--
+title: "Log"
+custom_edit_url: https://github.com/netdata/netdata/edit/master/libnetdata/log/README.md
+sidebar_label: "Log"
+learn_status: "Published"
+learn_topic_type: "Tasks"
+learn_rel_path: "Developers/libnetdata"
+-->
+
+# Netdata Logging
+
+This document describes how Netdata generates its own logs, not how Netdata manages and queries log databases.
+
+## Log sources
+
+Netdata supports the following log sources:
+
+1. **daemon**, logs generated by the Netdata daemon.
+2. **collector**, logs generated by Netdata collectors, including internal and external ones.
+3. **access**, API requests received by Netdata.
+4. **health**, all alert transitions and notifications.
+
+## Log outputs
+
+For each log source, Netdata supports the following output methods:
+
+- **off**, to disable this log source
+- **journal**, to send the logs to systemd-journal.
+- **syslog**, to send the logs to syslog.
+- **system**, to send the output to `stderr` or `stdout` depending on the log source.
+- **stdout**, to write the logs to Netdata's `stdout`.
+- **stderr**, to write the logs to Netdata's `stderr`.
+- **filename**, to send the logs to a file.
+
+For `daemon` and `collector` the default is `journal` when systemd-journal is available.
+To decide if systemd-journal is available, Netdata checks:
+
+1. `stderr` is connected to systemd-journald
+2. `/run/systemd/journal/socket` exists
+3. `/host/run/systemd/journal/socket` exists (`/host` is configurable in containers)
+
+If any of the above is detected, Netdata will select `journal` for `daemon` and `collector` sources.
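+
+For example, you can check for these sockets manually (illustrative commands):
+
+```bash
+# check for the systemd-journal socket on the host
+test -S /run/systemd/journal/socket && echo "systemd-journal socket found"
+
+# in containers, Netdata also checks the path under the configured host prefix
+test -S /host/run/systemd/journal/socket && echo "host systemd-journal socket found"
+```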
+
+All other sources default to a file.
+
+## Log formats
+
+| Format | Description |
+|---------|--------------------------------------------------------------------------------------------------------|
+| journal | journald-specific log format. Automatically selected when logging to systemd-journal. |
+| logfmt | logs data as a series of key/value pairs. The default when logging to any output other than `journal`. |
+| json | logs data in JSON format. |
+
+## Log levels
+
+Each time Netdata logs, it assigns a priority to the log entry. It can be one of these (in order of importance):
+
+| Level | Description |
+|-----------|----------------------------------------------------------------------------------------|
+| emergency | a fatal condition; Netdata will most likely exit immediately afterwards.                  |
+| alert     | a very important issue that may affect how Netdata operates.                              |
+| critical  | a very important issue the user should know about, which Netdata believes it can survive. |
+| error     | an error condition; Netdata tried to do something, but it failed.                         |
+| warning   | something unexpected has happened that may or may not affect the operation of Netdata.    |
+| notice    | something that does not affect the operation of Netdata, but the user should notice.      |
+| info      | the default log level; information the user should know.                                  |
+| debug     | verbose logs that can usually be ignored.                                                  |
+
+## Logs Configuration
+
+In `netdata.conf`, there are the following settings:
+
+```
+[logs]
+ # logs to trigger flood protection = 1000
+ # logs flood protection period = 60
+ # facility = daemon
+ # level = info
+ # daemon = journal
+ # collector = journal
+ # access = /var/log/netdata/access.log
+ # health = /var/log/netdata/health.log
+```
+
+- `logs to trigger flood protection` and `logs flood protection period` enable log flood protection for the `daemon` and `collector` sources. Flood protection can also be configured per log source.
+- `facility` is used only when Netdata logs to syslog.
+- `level` defines the minimum [log level](#log-levels) that will be logged. This setting applies only to the `daemon` and `collector` sources. It can also be configured per source.
+
+### Configuring log sources
+
+Each of the sources (`daemon`, `collector`, `access`, `health`) accepts the following:
+
+```
+source = {FORMAT},level={LEVEL},protection={LOGS}/{PERIOD}@{OUTPUT}
+```
+
+Where:
+
+- `{FORMAT}` is one of the [log formats](#log-formats),
+- `{LEVEL}` is the minimum [log level](#log-levels) to be logged,
+- `{LOGS}` is the number of `logs to trigger flood protection` configured per output,
+- `{PERIOD}` is the equivalent of `logs flood protection period` configured per output,
+- `{OUTPUT}` is one of the [log outputs](#log-outputs).
+
+All parameters can be omitted, except `{OUTPUT}`. If `{OUTPUT}` is the only given parameter, `@` can be omitted, as shown in the examples below.
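+
+For example (illustrative values):
+
+```
+[logs]
+    # 'logfmt' format, 'info' level and above, written to a file
+    access = logfmt,level=info@/var/log/netdata/access.log
+
+    # 'json' format, at most 1000 log entries every 60 seconds, written to stderr
+    daemon = json,protection=1000/60@stderr
+
+    # only the output is given, so '@' is omitted
+    collector = journal
+```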
+
+### Logs rotation
+
+Netdata comes with a `logrotate` configuration to rotate its log files periodically.
+
+It is usually installed at `/etc/logrotate.d/netdata`.
+
+Sending a `SIGHUP` to Netdata will instruct it to re-open all its log files.
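+
+For example (illustrative; adapt to how Netdata runs on your system):
+
+```bash
+# ask a running Netdata to re-open its log files
+kill -HUP $(pidof netdata)
+```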
+
+## Log Fields
+
+Netdata exposes the following fields to its logs:
+
+| journal | logfmt | json | Description |
+|:--------------------------------------:|:------------------------------:|:------------------------------:|:---------------------------------------------------------------------------------------------------------:|
+| `_SOURCE_REALTIME_TIMESTAMP` | `time` | `time` | the timestamp of the event |
+| `SYSLOG_IDENTIFIER` | `comm` | `comm` | the program logging the event |
+| `ND_LOG_SOURCE` | `source` | `source` | one of the [log sources](#log-sources) |
+| `PRIORITY`<br/>numeric | `level`<br/>text | `level`<br/>numeric | one of the [log levels](#log-levels) |
+| `ERRNO` | `errno` | `errno` | the numeric value of `errno` |
+| `INVOCATION_ID`                         | -                              | -                              | a unique UUID of the Netdata session, reset on every Netdata restart, inherited from systemd when available |
+| `CODE_LINE`                             | -                              | -                              | the line number of the source code logging this event                                                      |
+| `CODE_FILE`                             | -                              | -                              | the filename of the source code logging this event                                                         |
+| `CODE_FUNC`                             | -                              | -                              | the function name of the source code logging this event                                                    |
+| `TID` | `tid` | `tid` | the thread id of the thread logging this event |
+| `THREAD_TAG` | `thread` | `thread` | the name of the thread logging this event |
+| `MESSAGE_ID` | `msg_id` | `msg_id` | see [message IDs](#message-ids) |
+| `ND_MODULE` | `module` | `module` | the Netdata module logging this event |
+| `ND_NIDL_NODE` | `node` | `node` | the hostname of the node the event is related to |
+| `ND_NIDL_INSTANCE` | `instance` | `instance` | the instance of the node the event is related to |
+| `ND_NIDL_CONTEXT`                       | `context`                      | `context`                      | the context the event is related to (this is usually the chart name, as shown on netdata dashboards)       |
+| `ND_NIDL_DIMENSION` | `dimension` | `dimension` | the dimension the event is related to |
+| `ND_SRC_TRANSPORT` | `src_transport` | `src_transport` | when the event happened during a request, this is the request transport |
+| `ND_SRC_IP` | `src_ip` | `src_ip` | when the event happened during an inbound request, this is the IP the request came from |
+| `ND_SRC_PORT` | `src_port` | `src_port` | when the event happened during an inbound request, this is the port the request came from |
+| `ND_SRC_CAPABILITIES` | `src_capabilities` | `src_capabilities` | when the request came from a child, this is the communication capabilities of the child |
+| `ND_DST_TRANSPORT` | `dst_transport` | `dst_transport` | when the event happened during an outbound request, this is the outbound request transport |
+| `ND_DST_IP`                             | `dst_ip`                       | `dst_ip`                       | when the event happened during an outbound request, this is the IP of the request destination              |
+| `ND_DST_PORT`                           | `dst_port`                     | `dst_port`                     | when the event happened during an outbound request, this is the port of the request destination            |
+| `ND_DST_CAPABILITIES`                   | `dst_capabilities`             | `dst_capabilities`             | when the request goes to a parent, this is the communication capabilities of the parent                    |
+| `ND_REQUEST_METHOD`                     | `req_method`                   | `req_method`                   | when the event happened during an inbound request, this is the method with which the request was received  |
+| `ND_RESPONSE_CODE`                      | `code`                         | `code`                         | when responding to a request, this is the response code                                                    |
+| `ND_CONNECTION_ID` | `conn` | `conn` | when there is a connection id for an inbound connection, this is the connection id |
+| `ND_TRANSACTION_ID` | `transaction` | `transaction` | the transaction id (UUID) of all API requests |
+| `ND_RESPONSE_SENT_BYTES`                | `sent_bytes`                   | `sent_bytes`                   | the bytes sent in API responses                                                                             |
+| `ND_RESPONSE_SIZE_BYTES` | `size_bytes` | `size_bytes` | the uncompressed bytes of the API responses |
+| `ND_RESPONSE_PREP_TIME_USEC` | `prep_ut` | `prep_ut` | the time needed to prepare a response |
+| `ND_RESPONSE_SENT_TIME_USEC` | `sent_ut` | `sent_ut` | the time needed to send a response |
+| `ND_RESPONSE_TOTAL_TIME_USEC` | `total_ut` | `total_ut` | the total time needed to complete a response |
+| `ND_ALERT_ID` | `alert_id` | `alert_id` | the alert id this event is related to |
+| `ND_ALERT_EVENT_ID` | `alert_event_id` | `alert_event_id` | a sequential number of the alert transition (per host) |
+| `ND_ALERT_UNIQUE_ID` | `alert_unique_id` | `alert_unique_id` | a sequential number of the alert transition (per alert) |
+| `ND_ALERT_TRANSITION_ID` | `alert_transition_id` | `alert_transition_id` | the unique UUID of this alert transition |
+| `ND_ALERT_CONFIG` | `alert_config` | `alert_config` | the alert configuration hash (UUID) |
+| `ND_ALERT_NAME` | `alert` | `alert` | the alert name |
+| `ND_ALERT_CLASS` | `alert_class` | `alert_class` | the alert classification |
+| `ND_ALERT_COMPONENT` | `alert_component` | `alert_component` | the alert component |
+| `ND_ALERT_TYPE` | `alert_type` | `alert_type` | the alert type |
+| `ND_ALERT_EXEC` | `alert_exec` | `alert_exec` | the alert notification program |
+| `ND_ALERT_RECIPIENT` | `alert_recipient` | `alert_recipient` | the alert recipient(s) |
+| `ND_ALERT_VALUE` | `alert_value` | `alert_value` | the current alert value |
+| `ND_ALERT_VALUE_OLD` | `alert_value_old` | `alert_value_old` | the previous alert value |
+| `ND_ALERT_STATUS` | `alert_status` | `alert_status` | the current alert status |
+| `ND_ALERT_STATUS_OLD`                   | `alert_value_old`              | `alert_value_old`              | the previous alert status                                                                                   |
+| `ND_ALERT_UNITS` | `alert_units` | `alert_units` | the units of the alert |
+| `ND_ALERT_SUMMARY` | `alert_summary` | `alert_summary` | the summary text of the alert |
+| `ND_ALERT_INFO` | `alert_info` | `alert_info` | the info text of the alert |
+| `ND_ALERT_DURATION` | `alert_duration` | `alert_duration` | the duration the alert was in its previous state |
+| `ND_ALERT_NOTIFICATION_TIMESTAMP_USEC`  | `alert_notification_timestamp` | `alert_notification_timestamp` | the timestamp at which the notification delivery is scheduled                                               |
+| `ND_REQUEST` | `request` | `request` | the full request during which the event happened |
+| `MESSAGE` | `msg` | `msg` | the event message |
+
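+An illustrative `logfmt` line carrying a few of these fields could look like this (hypothetical values, not verbatim output):
+
+```
+time=2023-12-01T12:34:56.789+02:00 comm=netdata source=daemon level=info tid=12345 thread=MAIN msg="example message"
+```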
+
+### Message IDs
+
+Netdata assigns specific message IDs to certain events:
+
+- `ed4cdb8f1beb4ad3b57cb3cae2d162fa` when a Netdata child connects to this Netdata
+- `6e2e3839067648968b646045dbf28d66` when this Netdata connects to a Netdata parent
+- `9ce0cb58ab8b44df82c4bf1ad9ee22de` when alerts change state
+- `6db0018e83e34320ae2a659d78019fb7` when notifications are sent
+
+You can view these events using the `MESSAGE_ID` filter of the Netdata systemd-journal.plugin,
+or using `journalctl` like this:
+
+```bash
+# query children connection
+journalctl MESSAGE_ID=ed4cdb8f1beb4ad3b57cb3cae2d162fa
+
+# query parent connection
+journalctl MESSAGE_ID=6e2e3839067648968b646045dbf28d66
+
+# query alert transitions
+journalctl MESSAGE_ID=9ce0cb58ab8b44df82c4bf1ad9ee22de
+
+# query alert notifications
+journalctl MESSAGE_ID=6db0018e83e34320ae2a659d78019fb7
+```
+
diff --git a/libnetdata/log/journal.c b/libnetdata/log/journal.c
new file mode 100644
index 00000000..21978cf5
--- /dev/null
+++ b/libnetdata/log/journal.c
@@ -0,0 +1,138 @@
+// SPDX-License-Identifier: GPL-3.0-or-later
+
+#include "journal.h"
+
+bool is_path_unix_socket(const char *path) {
+ if(!path || !*path)
+ return false;
+
+ struct stat statbuf;
+
+ // Use stat to check if the file exists and is a socket
+ if (stat(path, &statbuf) == -1)
+ // The file does not exist or cannot be accessed
+ return false;
+
+ // Check if the file is a socket
+ if (S_ISSOCK(statbuf.st_mode))
+ return true;
+
+ return false;
+}
+
+bool is_stderr_connected_to_journal(void) {
+ const char *journal_stream = getenv("JOURNAL_STREAM");
+ if (!journal_stream)
+ return false; // JOURNAL_STREAM is not set
+
+ struct stat stderr_stat;
+ if (fstat(STDERR_FILENO, &stderr_stat) < 0)
+ return false; // Error in getting stderr info
+
+ // Parse device and inode from JOURNAL_STREAM
+ char *endptr;
+ long journal_dev = strtol(journal_stream, &endptr, 10);
+ if (*endptr != ':')
+ return false; // Format error in JOURNAL_STREAM
+
+ long journal_ino = strtol(endptr + 1, NULL, 10);
+
+ return (stderr_stat.st_dev == (dev_t)journal_dev) && (stderr_stat.st_ino == (ino_t)journal_ino);
+}
+
+int journal_direct_fd(const char *path) {
+ if(!path || !*path)
+ path = JOURNAL_DIRECT_SOCKET;
+
+ if(!is_path_unix_socket(path))
+ return -1;
+
+ int fd = socket(AF_UNIX, SOCK_DGRAM, 0);
+ if (fd < 0) return -1;
+
+ struct sockaddr_un addr;
+ memset(&addr, 0, sizeof(struct sockaddr_un));
+ addr.sun_family = AF_UNIX;
+ strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
+
+ // Connect the socket (optional, but can simplify send operations)
+ if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
+ close(fd);
+ return -1;
+ }
+
+ return fd;
+}
+
+static inline bool journal_send_with_memfd(int fd, const char *msg, size_t msg_len) {
+#if defined(__NR_memfd_create) && defined(MFD_ALLOW_SEALING) && defined(F_ADD_SEALS) && defined(F_SEAL_SHRINK) && defined(F_SEAL_GROW) && defined(F_SEAL_WRITE)
+ // Create a memory file descriptor
+ int memfd = (int)syscall(__NR_memfd_create, "journald", MFD_ALLOW_SEALING);
+ if (memfd < 0) return false;
+
+ // Write data to the memfd
+ if (write(memfd, msg, msg_len) != (ssize_t)msg_len) {
+ close(memfd);
+ return false;
+ }
+
+ // Seal the memfd to make it immutable
+ if (fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_WRITE) < 0) {
+ close(memfd);
+ return false;
+ }
+
+ struct iovec iov = {0};
+ struct msghdr msghdr = {0};
+ struct cmsghdr *cmsghdr;
+ char cmsgbuf[CMSG_SPACE(sizeof(int))];
+
+ msghdr.msg_iov = &iov;
+ msghdr.msg_iovlen = 1;
+ msghdr.msg_control = cmsgbuf;
+ msghdr.msg_controllen = sizeof(cmsgbuf);
+
+ cmsghdr = CMSG_FIRSTHDR(&msghdr);
+ cmsghdr->cmsg_level = SOL_SOCKET;
+ cmsghdr->cmsg_type = SCM_RIGHTS;
+ cmsghdr->cmsg_len = CMSG_LEN(sizeof(int));
+ memcpy(CMSG_DATA(cmsghdr), &memfd, sizeof(int));
+
+ ssize_t r = sendmsg(fd, &msghdr, 0);
+
+ close(memfd);
+ return r >= 0;
+#else
+ return false;
+#endif
+}
+
+bool journal_direct_send(int fd, const char *msg, size_t msg_len) {
+ // Send the datagram
+ if (send(fd, msg, msg_len, 0) < 0) {
+ if(errno != EMSGSIZE)
+ return false;
+
+ // datagram is too large, fallback to memfd
+ if(!journal_send_with_memfd(fd, msg, msg_len))
+ return false;
+ }
+
+ return true;
+}
+
+void journal_construct_path(char *dst, size_t dst_len, const char *host_prefix, const char *namespace_str) {
+ if(!host_prefix)
+ host_prefix = "";
+
+ if(namespace_str)
+ snprintfz(dst, dst_len, "%s/run/systemd/journal.%s/socket",
+ host_prefix, namespace_str);
+ else
+ snprintfz(dst, dst_len, "%s" JOURNAL_DIRECT_SOCKET,
+ host_prefix);
+}
diff --git a/libnetdata/log/journal.h b/libnetdata/log/journal.h
new file mode 100644
index 00000000..df8ece18
--- /dev/null
+++ b/libnetdata/log/journal.h
@@ -0,0 +1,18 @@
+// SPDX-License-Identifier: GPL-3.0-or-later
+
+#include "../libnetdata.h"
+
+#ifndef NETDATA_LOG_JOURNAL_H
+#define NETDATA_LOG_JOURNAL_H
+
+#define JOURNAL_DIRECT_SOCKET "/run/systemd/journal/socket"
+
+void journal_construct_path(char *dst, size_t dst_len, const char *host_prefix, const char *namespace_str);
+
+int journal_direct_fd(const char *path);
+bool journal_direct_send(int fd, const char *msg, size_t msg_len);
+
+bool is_path_unix_socket(const char *path);
+bool is_stderr_connected_to_journal(void);
+
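+/*
+ * Illustrative usage of the functions above (a sketch; the environment variable,
+ * message content and error handling are hypothetical):
+ *
+ *     char path[FILENAME_MAX + 1];
+ *     journal_construct_path(path, sizeof(path), getenv("NETDATA_HOST_PREFIX"), NULL);
+ *
+ *     int fd = journal_direct_fd(path);          // returns -1 when the socket is not usable
+ *     if (fd != -1) {
+ *         // journald native protocol: one FIELD=value entry per line
+ *         const char *msg = "MESSAGE=hello from netdata\nPRIORITY=6\n";
+ *         journal_direct_send(fd, msg, strlen(msg));
+ *         close(fd);
+ *     }
+ */
+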
+#endif //NETDATA_LOG_JOURNAL_H
diff --git a/libnetdata/log/log.c b/libnetdata/log/log.c
new file mode 100644
index 00000000..c805716c
--- /dev/null
+++ b/libnetdata/log/log.c
@@ -0,0 +1,2431 @@
+// SPDX-License-Identifier: GPL-3.0-or-later
+
+#define SD_JOURNAL_SUPPRESS_LOCATION
+
+#include "../libnetdata.h"
+#include <daemon/main.h>
+
+#ifdef __FreeBSD__
+#include <sys/endian.h>
+#endif
+
+#ifdef __APPLE__
+#include <machine/endian.h>
+#endif
+
+#ifdef HAVE_BACKTRACE
+#include <execinfo.h>
+#endif
+
+#ifdef HAVE_SYSTEMD
+#include <systemd/sd-journal.h>
+#endif
+
+#include <syslog.h>
+
+const char *program_name = "";
+
+uint64_t debug_flags = 0;
+
+#ifdef ENABLE_ACLK
+int aclklog_enabled = 0;
+#endif
+
+// ----------------------------------------------------------------------------
+
+struct nd_log_source;
+static bool nd_log_limit_reached(struct nd_log_source *source);
+
+// ----------------------------------------------------------------------------
+// logging method
+
+typedef enum __attribute__((__packed__)) {
+ NDLM_DISABLED = 0,
+ NDLM_DEVNULL,
+ NDLM_DEFAULT,
+ NDLM_JOURNAL,
+ NDLM_SYSLOG,
+ NDLM_STDOUT,
+ NDLM_STDERR,
+ NDLM_FILE,
+} ND_LOG_METHOD;
+
+static struct {
+ ND_LOG_METHOD method;
+ const char *name;
+} nd_log_methods[] = {
+ { .method = NDLM_DISABLED, .name = "none" },
+ { .method = NDLM_DEVNULL, .name = "/dev/null" },
+ { .method = NDLM_DEFAULT, .name = "default" },
+ { .method = NDLM_JOURNAL, .name = "journal" },
+ { .method = NDLM_SYSLOG, .name = "syslog" },
+ { .method = NDLM_STDOUT, .name = "stdout" },
+ { .method = NDLM_STDERR, .name = "stderr" },
+ { .method = NDLM_FILE, .name = "file" },
+};
+
+static ND_LOG_METHOD nd_log_method2id(const char *method) {
+ if(!method || !*method)
+ return NDLM_DEFAULT;
+
+ size_t entries = sizeof(nd_log_methods) / sizeof(nd_log_methods[0]);
+ for(size_t i = 0; i < entries ;i++) {
+ if(strcmp(nd_log_methods[i].name, method) == 0)
+ return nd_log_methods[i].method;
+ }
+
+ return NDLM_FILE;
+}
+
+static const char *nd_log_id2method(ND_LOG_METHOD method) {
+ size_t entries = sizeof(nd_log_methods) / sizeof(nd_log_methods[0]);
+ for(size_t i = 0; i < entries ;i++) {
+ if(method == nd_log_methods[i].method)
+ return nd_log_methods[i].name;
+ }
+
+ return "unknown";
+}
+
+#define IS_VALID_LOG_METHOD_FOR_EXTERNAL_PLUGINS(ndlo) ((ndlo) == NDLM_JOURNAL || (ndlo) == NDLM_SYSLOG || (ndlo) == NDLM_STDERR)
+
+const char *nd_log_method_for_external_plugins(const char *s) {
+ if(s && *s) {
+ ND_LOG_METHOD method = nd_log_method2id(s);
+ if(IS_VALID_LOG_METHOD_FOR_EXTERNAL_PLUGINS(method))
+ return nd_log_id2method(method);
+ }
+
+ return nd_log_id2method(NDLM_STDERR);
+}
+
+// ----------------------------------------------------------------------------
+// workaround strerror_r()
+
+#if defined(STRERROR_R_CHAR_P)
+// GLIBC version of strerror_r
+static const char *strerror_result(const char *a, const char *b) { (void)b; return a; }
+#elif defined(HAVE_STRERROR_R)
+// POSIX version of strerror_r
+static const char *strerror_result(int a, const char *b) { (void)a; return b; }
+#elif defined(HAVE_C__GENERIC)
+
+// what a trick!
+// http://stackoverflow.com/questions/479207/function-overloading-in-c
+static const char *strerror_result_int(int a, const char *b) { (void)a; return b; }
+static const char *strerror_result_string(const char *a, const char *b) { (void)b; return a; }
+
+#define strerror_result(a, b) _Generic((a), \
+ int: strerror_result_int, \
+ char *: strerror_result_string \
+ )(a, b)
+
+#else
+#error "cannot detect the format of function strerror_r()"
+#endif
+
+static const char *errno2str(int errnum, char *buf, size_t size) {
+ return strerror_result(strerror_r(errnum, buf, size), buf);
+}
+
+// ----------------------------------------------------------------------------
+// facilities
+//
+// sys/syslog.h (Linux)
+// sys/sys/syslog.h (FreeBSD)
+// bsd/sys/syslog.h (darwin-xnu)
+
+static struct {
+ int facility;
+ const char *name;
+} nd_log_facilities[] = {
+ { LOG_AUTH, "auth" },
+ { LOG_AUTHPRIV, "authpriv" },
+ { LOG_CRON, "cron" },
+ { LOG_DAEMON, "daemon" },
+ { LOG_FTP, "ftp" },
+ { LOG_KERN, "kern" },
+ { LOG_LPR, "lpr" },
+ { LOG_MAIL, "mail" },
+ { LOG_NEWS, "news" },
+ { LOG_SYSLOG, "syslog" },
+ { LOG_USER, "user" },
+ { LOG_UUCP, "uucp" },
+ { LOG_LOCAL0, "local0" },
+ { LOG_LOCAL1, "local1" },
+ { LOG_LOCAL2, "local2" },
+ { LOG_LOCAL3, "local3" },
+ { LOG_LOCAL4, "local4" },
+ { LOG_LOCAL5, "local5" },
+ { LOG_LOCAL6, "local6" },
+ { LOG_LOCAL7, "local7" },
+
+#ifdef __FreeBSD__
+ { LOG_CONSOLE, "console" },
+ { LOG_NTP, "ntp" },
+
+ // FreeBSD does not consider 'security' as deprecated.
+ { LOG_SECURITY, "security" },
+#else
+ // For all other O/S 'security' is mapped to 'auth'.
+ { LOG_AUTH, "security" },
+#endif
+
+#ifdef __APPLE__
+ { LOG_INSTALL, "install" },
+ { LOG_NETINFO, "netinfo" },
+ { LOG_RAS, "ras" },
+ { LOG_REMOTEAUTH, "remoteauth" },
+ { LOG_LAUNCHD, "launchd" },
+
+#endif
+};
+
+static int nd_log_facility2id(const char *facility) {
+ size_t entries = sizeof(nd_log_facilities) / sizeof(nd_log_facilities[0]);
+ for(size_t i = 0; i < entries ;i++) {
+ if(strcmp(nd_log_facilities[i].name, facility) == 0)
+ return nd_log_facilities[i].facility;
+ }
+
+ return LOG_DAEMON;
+}
+
+static const char *nd_log_id2facility(int facility) {
+ size_t entries = sizeof(nd_log_facilities) / sizeof(nd_log_facilities[0]);
+ for(size_t i = 0; i < entries ;i++) {
+ if(nd_log_facilities[i].facility == facility)
+ return nd_log_facilities[i].name;
+ }
+
+ return "daemon";
+}
+
+// ----------------------------------------------------------------------------
+// priorities
+
+static struct {
+ ND_LOG_FIELD_PRIORITY priority;
+ const char *name;
+} nd_log_priorities[] = {
+ { .priority = NDLP_EMERG, .name = "emergency" },
+ { .priority = NDLP_EMERG, .name = "emerg" },
+ { .priority = NDLP_ALERT, .name = "alert" },
+ { .priority = NDLP_CRIT, .name = "critical" },
+ { .priority = NDLP_CRIT, .name = "crit" },
+ { .priority = NDLP_ERR, .name = "error" },
+ { .priority = NDLP_ERR, .name = "err" },
+ { .priority = NDLP_WARNING, .name = "warning" },
+ { .priority = NDLP_WARNING, .name = "warn" },
+ { .priority = NDLP_NOTICE, .name = "notice" },
+ { .priority = NDLP_INFO, .name = NDLP_INFO_STR },
+ { .priority = NDLP_DEBUG, .name = "debug" },
+};
+
+int nd_log_priority2id(const char *priority) {
+ size_t entries = sizeof(nd_log_priorities) / sizeof(nd_log_priorities[0]);
+ for(size_t i = 0; i < entries ;i++) {
+ if(strcmp(nd_log_priorities[i].name, priority) == 0)
+ return nd_log_priorities[i].priority;
+ }
+
+ return NDLP_INFO;
+}
+
+const char *nd_log_id2priority(ND_LOG_FIELD_PRIORITY priority) {
+ size_t entries = sizeof(nd_log_priorities) / sizeof(nd_log_priorities[0]);
+ for(size_t i = 0; i < entries ;i++) {
+ if(priority == nd_log_priorities[i].priority)
+ return nd_log_priorities[i].name;
+ }
+
+ return NDLP_INFO_STR;
+}
+
+// ----------------------------------------------------------------------------
+// log sources
+
+const char *nd_log_sources[] = {
+ [NDLS_UNSET] = "UNSET",
+ [NDLS_ACCESS] = "access",
+ [NDLS_ACLK] = "aclk",
+ [NDLS_COLLECTORS] = "collector",
+ [NDLS_DAEMON] = "daemon",
+ [NDLS_HEALTH] = "health",
+ [NDLS_DEBUG] = "debug",
+};
+
+size_t nd_log_source2id(const char *source, ND_LOG_SOURCES def) {
+ size_t entries = sizeof(nd_log_sources) / sizeof(nd_log_sources[0]);
+ for(size_t i = 0; i < entries ;i++) {
+ if(strcmp(nd_log_sources[i], source) == 0)
+ return i;
+ }
+
+ return def;
+}
+
+
+static const char *nd_log_id2source(ND_LOG_SOURCES source) {
+ size_t entries = sizeof(nd_log_sources) / sizeof(nd_log_sources[0]);
+ if(source < entries)
+ return nd_log_sources[source];
+
+ return nd_log_sources[NDLS_COLLECTORS];
+}
+
+// ----------------------------------------------------------------------------
+// log output formats
+
+typedef enum __attribute__((__packed__)) {
+ NDLF_JOURNAL,
+ NDLF_LOGFMT,
+ NDLF_JSON,
+} ND_LOG_FORMAT;
+
+static struct {
+ ND_LOG_FORMAT format;
+ const char *name;
+} nd_log_formats[] = {
+ { .format = NDLF_JOURNAL, .name = "journal" },
+ { .format = NDLF_LOGFMT, .name = "logfmt" },
+ { .format = NDLF_JSON, .name = "json" },
+};
+
+static ND_LOG_FORMAT nd_log_format2id(const char *format) {
+ if(!format || !*format)
+ return NDLF_LOGFMT;
+
+ size_t entries = sizeof(nd_log_formats) / sizeof(nd_log_formats[0]);
+ for(size_t i = 0; i < entries ;i++) {
+ if(strcmp(nd_log_formats[i].name, format) == 0)
+ return nd_log_formats[i].format;
+ }
+
+ return NDLF_LOGFMT;
+}
+
+static const char *nd_log_id2format(ND_LOG_FORMAT format) {
+ size_t entries = sizeof(nd_log_formats) / sizeof(nd_log_formats[0]);
+ for(size_t i = 0; i < entries ;i++) {
+ if(format == nd_log_formats[i].format)
+ return nd_log_formats[i].name;
+ }
+
+ return "logfmt";
+}
+
+// ----------------------------------------------------------------------------
+// format dates
+
+void log_date(char *buffer, size_t len, time_t now) {
+ if(unlikely(!buffer || !len))
+ return;
+
+ time_t t = now;
+ struct tm *tmp, tmbuf;
+
+ tmp = localtime_r(&t, &tmbuf);
+
+ if (unlikely(!tmp)) {
+ buffer[0] = '\0';
+ return;
+ }
+
+ if (unlikely(strftime(buffer, len, "%Y-%m-%d %H:%M:%S", tmp) == 0))
+ buffer[0] = '\0';
+
+ buffer[len - 1] = '\0';
+}
+
+// ----------------------------------------------------------------------------
+
+struct nd_log_limit {
+ usec_t started_monotonic_ut;
+ uint32_t counter;
+ uint32_t prevented;
+
+ uint32_t throttle_period;
+ uint32_t logs_per_period;
+ uint32_t logs_per_period_backup;
+};
+
+#define ND_LOG_LIMITS_DEFAULT (struct nd_log_limit){ .logs_per_period = ND_LOG_DEFAULT_THROTTLE_LOGS, .logs_per_period_backup = ND_LOG_DEFAULT_THROTTLE_LOGS, .throttle_period = ND_LOG_DEFAULT_THROTTLE_PERIOD, }
+#define ND_LOG_LIMITS_UNLIMITED (struct nd_log_limit){ .logs_per_period = 0, .logs_per_period_backup = 0, .throttle_period = 0, }
+
+struct nd_log_source {
+ SPINLOCK spinlock;
+ ND_LOG_METHOD method;
+ ND_LOG_FORMAT format;
+ const char *filename;
+ int fd;
+ FILE *fp;
+
+ ND_LOG_FIELD_PRIORITY min_priority;
+ const char *pending_msg;
+ struct nd_log_limit limits;
+};
+
+static __thread ND_LOG_SOURCES overwrite_thread_source = 0;
+
+void nd_log_set_thread_source(ND_LOG_SOURCES source) {
+ overwrite_thread_source = source;
+}
+
+static struct {
+ uuid_t invocation_id;
+
+ ND_LOG_SOURCES overwrite_process_source;
+
+ struct nd_log_source sources[_NDLS_MAX];
+
+ struct {
+ bool initialized;
+ } journal;
+
+ struct {
+ bool initialized;
+ int fd;
+ char filename[FILENAME_MAX + 1];
+ } journal_direct;
+
+ struct {
+ bool initialized;
+ int facility;
+ } syslog;
+
+ struct {
+ SPINLOCK spinlock;
+ bool initialized;
+ } std_output;
+
+ struct {
+ SPINLOCK spinlock;
+ bool initialized;
+ } std_error;
+
+} nd_log = {
+ .overwrite_process_source = 0,
+ .journal = {
+ .initialized = false,
+ },
+ .journal_direct = {
+ .initialized = false,
+ .fd = -1,
+ },
+ .syslog = {
+ .initialized = false,
+ .facility = LOG_DAEMON,
+ },
+ .std_output = {
+ .spinlock = NETDATA_SPINLOCK_INITIALIZER,
+ .initialized = false,
+ },
+ .std_error = {
+ .spinlock = NETDATA_SPINLOCK_INITIALIZER,
+ .initialized = false,
+ },
+ .sources = {
+ [NDLS_UNSET] = {
+ .spinlock = NETDATA_SPINLOCK_INITIALIZER,
+ .method = NDLM_DISABLED,
+ .format = NDLF_JOURNAL,
+ .filename = NULL,
+ .fd = -1,
+ .fp = NULL,
+ .min_priority = NDLP_EMERG,
+ .limits = ND_LOG_LIMITS_UNLIMITED,
+ },
+ [NDLS_ACCESS] = {
+ .spinlock = NETDATA_SPINLOCK_INITIALIZER,
+ .method = NDLM_DEFAULT,
+ .format = NDLF_LOGFMT,
+ .filename = LOG_DIR "/access.log",
+ .fd = -1,
+ .fp = NULL,
+ .min_priority = NDLP_DEBUG,
+ .limits = ND_LOG_LIMITS_UNLIMITED,
+ },
+ [NDLS_ACLK] = {
+ .spinlock = NETDATA_SPINLOCK_INITIALIZER,
+ .method = NDLM_FILE,
+ .format = NDLF_LOGFMT,
+ .filename = LOG_DIR "/aclk.log",
+ .fd = -1,
+ .fp = NULL,
+ .min_priority = NDLP_DEBUG,
+ .limits = ND_LOG_LIMITS_UNLIMITED,
+ },
+ [NDLS_COLLECTORS] = {
+ .spinlock = NETDATA_SPINLOCK_INITIALIZER,
+ .method = NDLM_DEFAULT,
+ .format = NDLF_LOGFMT,
+ .filename = LOG_DIR "/collectors.log",
+ .fd = STDERR_FILENO,
+ .fp = NULL,
+ .min_priority = NDLP_INFO,
+ .limits = ND_LOG_LIMITS_DEFAULT,
+ },
+ [NDLS_DEBUG] = {
+ .spinlock = NETDATA_SPINLOCK_INITIALIZER,
+ .method = NDLM_DISABLED,
+ .format = NDLF_LOGFMT,
+ .filename = LOG_DIR "/debug.log",
+ .fd = STDOUT_FILENO,
+ .fp = NULL,
+ .min_priority = NDLP_DEBUG,
+ .limits = ND_LOG_LIMITS_UNLIMITED,
+ },
+ [NDLS_DAEMON] = {
+ .spinlock = NETDATA_SPINLOCK_INITIALIZER,
+ .method = NDLM_DEFAULT,
+ .filename = LOG_DIR "/daemon.log",
+ .format = NDLF_LOGFMT,
+ .fd = -1,
+ .fp = NULL,
+ .min_priority = NDLP_INFO,
+ .limits = ND_LOG_LIMITS_DEFAULT,
+ },
+ [NDLS_HEALTH] = {
+ .spinlock = NETDATA_SPINLOCK_INITIALIZER,
+ .method = NDLM_DEFAULT,
+ .format = NDLF_LOGFMT,
+ .filename = LOG_DIR "/health.log",
+ .fd = -1,
+ .fp = NULL,
+ .min_priority = NDLP_DEBUG,
+ .limits = ND_LOG_LIMITS_UNLIMITED,
+ },
+ },
+};
+
+__attribute__((constructor)) void initialize_invocation_id(void) {
+ // check for a NETDATA_INVOCATION_ID
+ if(uuid_parse_flexi(getenv("NETDATA_INVOCATION_ID"), nd_log.invocation_id) != 0) {
+ // not found, check for systemd set INVOCATION_ID
+ if(uuid_parse_flexi(getenv("INVOCATION_ID"), nd_log.invocation_id) != 0) {
+ // not found, generate a new one
+ uuid_generate_random(nd_log.invocation_id);
+ }
+ }
+
+ char uuid[UUID_COMPACT_STR_LEN];
+ uuid_unparse_lower_compact(nd_log.invocation_id, uuid);
+ setenv("NETDATA_INVOCATION_ID", uuid, 1);
+}
+
+int nd_log_health_fd(void) {
+ if(nd_log.sources[NDLS_HEALTH].method == NDLM_FILE && nd_log.sources[NDLS_HEALTH].fd != -1)
+ return nd_log.sources[NDLS_HEALTH].fd;
+
+ return STDERR_FILENO;
+}
+
+void nd_log_set_user_settings(ND_LOG_SOURCES source, const char *setting) {
+ char buf[FILENAME_MAX + 100];
+ if(setting && *setting)
+ strncpyz(buf, setting, sizeof(buf) - 1);
+ else
+ buf[0] = '\0';
+
+ struct nd_log_source *ls = &nd_log.sources[source];
+ char *output = strrchr(buf, '@');
+
+ if(!output)
+ // all of it is the output
+ output = buf;
+ else {
+ // we found an '@', the next char is the output
+ *output = '\0';
+ output++;
+
+ // parse the other params
+ char *remaining = buf;
+ while(remaining) {
+ char *value = strsep_skip_consecutive_separators(&remaining, ",");
+ if (!value || !*value) continue;
+
+ char *name = strsep_skip_consecutive_separators(&value, "=");
+ if (!name || !*name) continue;
+
+ if(strcmp(name, "logfmt") == 0)
+ ls->format = NDLF_LOGFMT;
+ else if(strcmp(name, "json") == 0)
+ ls->format = NDLF_JSON;
+ else if(strcmp(name, "journal") == 0)
+ ls->format = NDLF_JOURNAL;
+ else if(strcmp(name, "level") == 0 && value && *value)
+ ls->min_priority = nd_log_priority2id(value);
+ else if(strcmp(name, "protection") == 0 && value && *value) {
+ if(strcmp(value, "off") == 0 || strcmp(value, "none") == 0) {
+ ls->limits = ND_LOG_LIMITS_UNLIMITED;
+ ls->limits.counter = 0;
+ ls->limits.prevented = 0;
+ }
+ else {
+ ls->limits = ND_LOG_LIMITS_DEFAULT;
+
+ char *slash = strchr(value, '/');
+ if(slash) {
+ *slash = '\0';
+ slash++;
+ ls->limits.logs_per_period = ls->limits.logs_per_period_backup = str2u(value);
+ ls->limits.throttle_period = str2u(slash);
+ }
+ else {
+ ls->limits.logs_per_period = ls->limits.logs_per_period_backup = str2u(value);
+ ls->limits.throttle_period = ND_LOG_DEFAULT_THROTTLE_PERIOD;
+ }
+ }
+ }
+ else
+ nd_log(NDLS_DAEMON, NDLP_ERR, "Error while parsing configuration of log source '%s'. "
+ "In config '%s', '%s' is not understood.",
+ nd_log_id2source(source), setting, name);
+ }
+ }
+
+ if(!output || !*output || strcmp(output, "none") == 0 || strcmp(output, "off") == 0) {
+ ls->method = NDLM_DISABLED;
+ ls->filename = "/dev/null";
+ }
+ else if(strcmp(output, "journal") == 0) {
+ ls->method = NDLM_JOURNAL;
+ ls->filename = NULL;
+ }
+ else if(strcmp(output, "syslog") == 0) {
+ ls->method = NDLM_SYSLOG;
+ ls->filename = NULL;
+ }
+ else if(strcmp(output, "/dev/null") == 0) {
+ ls->method = NDLM_DEVNULL;
+ ls->filename = "/dev/null";
+ }
+ else if(strcmp(output, "system") == 0) {
+ if(ls->fd == STDERR_FILENO) {
+ ls->method = NDLM_STDERR;
+ ls->filename = NULL;
+ ls->fd = STDERR_FILENO;
+ }
+ else {
+ ls->method = NDLM_STDOUT;
+ ls->filename = NULL;
+ ls->fd = STDOUT_FILENO;
+ }
+ }
+ else if(strcmp(output, "stderr") == 0) {
+ ls->method = NDLM_STDERR;
+ ls->filename = NULL;
+ ls->fd = STDERR_FILENO;
+ }
+ else if(strcmp(output, "stdout") == 0) {
+ ls->method = NDLM_STDOUT;
+ ls->filename = NULL;
+ ls->fd = STDOUT_FILENO;
+ }
+ else {
+ ls->method = NDLM_FILE;
+ ls->filename = strdupz(output);
+ }
+
+#if defined(NETDATA_INTERNAL_CHECKS) || defined(NETDATA_DEV_MODE)
+ ls->min_priority = NDLP_DEBUG;
+#endif
+
+ if(source == NDLS_COLLECTORS) {
+ // set the method for the collector processes we will spawn
+
+ ND_LOG_METHOD method;
+ ND_LOG_FORMAT format = ls->format;
+ ND_LOG_FIELD_PRIORITY priority = ls->min_priority;
+
+ if(ls->method == NDLM_SYSLOG || ls->method == NDLM_JOURNAL)
+ method = ls->method;
+ else
+ method = NDLM_STDERR;
+
+ setenv("NETDATA_LOG_METHOD", nd_log_id2method(method), 1);
+ setenv("NETDATA_LOG_FORMAT", nd_log_id2format(format), 1);
+ setenv("NETDATA_LOG_LEVEL", nd_log_id2priority(priority), 1);
+ }
+}
+
+void nd_log_set_priority_level(const char *setting) {
+ if(!setting || !*setting)
+ setting = "info";
+
+ ND_LOG_FIELD_PRIORITY priority = nd_log_priority2id(setting);
+
+#if defined(NETDATA_INTERNAL_CHECKS) || defined(NETDATA_DEV_MODE)
+ priority = NDLP_DEBUG;
+#endif
+
+ for (size_t i = 0; i < _NDLS_MAX; i++) {
+ if (i != NDLS_DEBUG)
+ nd_log.sources[i].min_priority = priority;
+ }
+
+ // the right one
+ setenv("NETDATA_LOG_LEVEL", nd_log_id2priority(priority), 1);
+}
+
+void nd_log_set_facility(const char *facility) {
+ if(!facility || !*facility)
+ facility = "daemon";
+
+ nd_log.syslog.facility = nd_log_facility2id(facility);
+ setenv("NETDATA_SYSLOG_FACILITY", nd_log_id2facility(nd_log.syslog.facility), 1);
+}
+
+void nd_log_set_flood_protection(size_t logs, time_t period) {
+ nd_log.sources[NDLS_DAEMON].limits.logs_per_period =
+ nd_log.sources[NDLS_DAEMON].limits.logs_per_period_backup;
+ nd_log.sources[NDLS_COLLECTORS].limits.logs_per_period =
+ nd_log.sources[NDLS_COLLECTORS].limits.logs_per_period_backup = logs;
+
+ nd_log.sources[NDLS_DAEMON].limits.throttle_period =
+ nd_log.sources[NDLS_COLLECTORS].limits.throttle_period = period;
+
+ char buf[100];
+ snprintfz(buf, sizeof(buf), "%" PRIu64, (uint64_t )period);
+ setenv("NETDATA_ERRORS_THROTTLE_PERIOD", buf, 1);
+ snprintfz(buf, sizeof(buf), "%" PRIu64, (uint64_t )logs);
+ setenv("NETDATA_ERRORS_PER_PERIOD", buf, 1);
+}
+
+static bool nd_log_journal_systemd_init(void) {
+#ifdef HAVE_SYSTEMD
+ nd_log.journal.initialized = true;
+#else
+ nd_log.journal.initialized = false;
+#endif
+
+ return nd_log.journal.initialized;
+}
+
+static void nd_log_journal_direct_set_env(void) {
+ if(nd_log.sources[NDLS_COLLECTORS].method == NDLM_JOURNAL)
+ setenv("NETDATA_SYSTEMD_JOURNAL_PATH", nd_log.journal_direct.filename, 1);
+}
+
+static bool nd_log_journal_direct_init(const char *path) {
+ if(nd_log.journal_direct.initialized) {
+ nd_log_journal_direct_set_env();
+ return true;
+ }
+
+ int fd;
+ char filename[FILENAME_MAX + 1];
+ if(!is_path_unix_socket(path)) {
+
+ journal_construct_path(filename, sizeof(filename), netdata_configured_host_prefix, "netdata");
+ if (!is_path_unix_socket(filename) || (fd = journal_direct_fd(filename)) == -1) {
+
+ journal_construct_path(filename, sizeof(filename), netdata_configured_host_prefix, NULL);
+ if (!is_path_unix_socket(filename) || (fd = journal_direct_fd(filename)) == -1) {
+
+ journal_construct_path(filename, sizeof(filename), NULL, "netdata");
+ if (!is_path_unix_socket(filename) || (fd = journal_direct_fd(filename)) == -1) {
+
+ journal_construct_path(filename, sizeof(filename), NULL, NULL);
+ if (!is_path_unix_socket(filename) || (fd = journal_direct_fd(filename)) == -1)
+ return false;
+ }
+ }
+ }
+ }
+ else {
+ snprintfz(filename, sizeof(filename), "%s", path);
+ fd = journal_direct_fd(filename);
+ }
+
+ if(fd < 0)
+ return false;
+
+ nd_log.journal_direct.fd = fd;
+ nd_log.journal_direct.initialized = true;
+
+ strncpyz(nd_log.journal_direct.filename, filename, sizeof(nd_log.journal_direct.filename) - 1);
+ nd_log_journal_direct_set_env();
+
+ return true;
+}
+
+static void nd_log_syslog_init() {
+ if(nd_log.syslog.initialized)
+ return;
+
+ openlog(program_name, LOG_PID, nd_log.syslog.facility);
+ nd_log.syslog.initialized = true;
+}
+
+void nd_log_initialize_for_external_plugins(const char *name) {
+ // if we don't run under Netdata, log to stderr,
+ // otherwise, use the logging method Netdata wants us to use.
+ setenv("NETDATA_LOG_METHOD", "stderr", 0);
+ setenv("NETDATA_LOG_FORMAT", "logfmt", 0);
+
+ nd_log.overwrite_process_source = NDLS_COLLECTORS;
+ program_name = name;
+
+ for(size_t i = 0; i < _NDLS_MAX ;i++) {
+ nd_log.sources[i].method = NDLM_STDERR;
+ nd_log.sources[i].fd = -1;
+ nd_log.sources[i].fp = NULL;
+ }
+
+ nd_log_set_priority_level(getenv("NETDATA_LOG_LEVEL"));
+ nd_log_set_facility(getenv("NETDATA_SYSLOG_FACILITY"));
+
+ time_t period = 1200;
+ size_t logs = 200;
+ const char *s = getenv("NETDATA_ERRORS_THROTTLE_PERIOD");
+ if(s && *s >= '0' && *s <= '9') {
+ period = str2l(s);
+ if(period < 0) period = 0;
+ }
+
+ s = getenv("NETDATA_ERRORS_PER_PERIOD");
+ if(s && *s >= '0' && *s <= '9')
+ logs = str2u(s);
+
+ nd_log_set_flood_protection(logs, period);
+
+ if(!netdata_configured_host_prefix) {
+ s = getenv("NETDATA_HOST_PREFIX");
+ if(s && *s)
+ netdata_configured_host_prefix = (char *)s;
+ }
+
+ ND_LOG_METHOD method = nd_log_method2id(getenv("NETDATA_LOG_METHOD"));
+ ND_LOG_FORMAT format = nd_log_format2id(getenv("NETDATA_LOG_FORMAT"));
+
+ if(!IS_VALID_LOG_METHOD_FOR_EXTERNAL_PLUGINS(method)) {
+ if(is_stderr_connected_to_journal()) {
+ nd_log(NDLS_COLLECTORS, NDLP_WARNING, "NETDATA_LOG_METHOD is not set. Using journal.");
+ method = NDLM_JOURNAL;
+ }
+ else {
+ nd_log(NDLS_COLLECTORS, NDLP_WARNING, "NETDATA_LOG_METHOD is not set. Using stderr.");
+ method = NDLM_STDERR;
+ }
+ }
+
+ switch(method) {
+ case NDLM_JOURNAL:
+ if(!nd_log_journal_direct_init(getenv("NETDATA_SYSTEMD_JOURNAL_PATH")) ||
+ !nd_log_journal_direct_init(NULL) || !nd_log_journal_systemd_init()) {
+ nd_log(NDLS_COLLECTORS, NDLP_WARNING, "Failed to initialize journal. Using stderr.");
+ method = NDLM_STDERR;
+ }
+ break;
+
+ case NDLM_SYSLOG:
+ nd_log_syslog_init();
+ break;
+
+ default:
+ method = NDLM_STDERR;
+ break;
+ }
+
+ for(size_t i = 0; i < _NDLS_MAX ;i++) {
+ nd_log.sources[i].method = method;
+ nd_log.sources[i].format = format;
+ nd_log.sources[i].fd = -1;
+ nd_log.sources[i].fp = NULL;
+ }
+
+// nd_log(NDLS_COLLECTORS, NDLP_NOTICE, "FINAL_LOG_METHOD: %s", nd_log_id2method(method));
+}
+
+static bool nd_log_replace_existing_fd(struct nd_log_source *e, int new_fd) {
+ if(new_fd == -1 || e->fd == -1 ||
+ (e->fd == STDOUT_FILENO && nd_log.std_output.initialized) ||
+ (e->fd == STDERR_FILENO && nd_log.std_error.initialized))
+ return false;
+
+ if(new_fd != e->fd) {
+ int t = dup2(new_fd, e->fd);
+
+ bool ret = true;
+ if (t == -1) {
+ netdata_log_error("Cannot dup2() new fd %d to old fd %d for '%s'", new_fd, e->fd, e->filename);
+ ret = false;
+ }
+ else
+ close(new_fd);
+
+ if(e->fd == STDOUT_FILENO)
+ nd_log.std_output.initialized = true;
+ else if(e->fd == STDERR_FILENO)
+ nd_log.std_error.initialized = true;
+
+ return ret;
+ }
+
+ return false;
+}
+
+static void nd_log_open(struct nd_log_source *e, ND_LOG_SOURCES source) {
+ if(e->method == NDLM_DEFAULT)
+ nd_log_set_user_settings(source, e->filename);
+
+ if((e->method == NDLM_FILE && !e->filename) ||
+ (e->method == NDLM_DEVNULL && e->fd == -1))
+ e->method = NDLM_DISABLED;
+
+ if(e->fp)
+ fflush(e->fp);
+
+ switch(e->method) {
+ case NDLM_SYSLOG:
+ nd_log_syslog_init();
+ break;
+
+ case NDLM_JOURNAL:
+ nd_log_journal_direct_init(NULL);
+ nd_log_journal_systemd_init();
+ break;
+
+ case NDLM_STDOUT:
+ e->fp = stdout;
+ e->fd = STDOUT_FILENO;
+ break;
+
+ case NDLM_DISABLED:
+ break;
+
+ case NDLM_DEFAULT:
+ case NDLM_STDERR:
+ e->method = NDLM_STDERR;
+ e->fp = stderr;
+ e->fd = STDERR_FILENO;
+ break;
+
+ case NDLM_DEVNULL:
+ case NDLM_FILE: {
+ int fd = open(e->filename, O_WRONLY | O_APPEND | O_CREAT, 0664);
+ if(fd == -1) {
+ if(e->fd != STDOUT_FILENO && e->fd != STDERR_FILENO) {
+ e->fd = STDERR_FILENO;
+ e->method = NDLM_STDERR;
+ netdata_log_error("Cannot open log file '%s'. Falling back to stderr.", e->filename);
+ }
+ else
+ netdata_log_error("Cannot open log file '%s'. Leaving fd %d as-is.", e->filename, e->fd);
+ }
+ else {
+ if (!nd_log_replace_existing_fd(e, fd)) {
+ if(e->fd == STDOUT_FILENO || e->fd == STDERR_FILENO) {
+ if(e->fd == STDOUT_FILENO)
+ e->method = NDLM_STDOUT;
+ else if(e->fd == STDERR_FILENO)
+ e->method = NDLM_STDERR;
+
+ // we have dup2() fd, so we can close the one we opened
+ if(fd != STDOUT_FILENO && fd != STDERR_FILENO)
+ close(fd);
+ }
+ else
+ e->fd = fd;
+ }
+ }
+
+ // at this point we have e->fd set properly
+
+ if(e->fd == STDOUT_FILENO)
+ e->fp = stdout;
+ else if(e->fd == STDERR_FILENO)
+ e->fp = stderr;
+
+ if(!e->fp) {
+ e->fp = fdopen(e->fd, "a");
+ if (!e->fp) {
+ netdata_log_error("Cannot fdopen() fd %d ('%s')", e->fd, e->filename);
+
+ if(e->fd != STDOUT_FILENO && e->fd != STDERR_FILENO)
+ close(e->fd);
+
+ e->fp = stderr;
+ e->fd = STDERR_FILENO;
+ }
+ }
+ else {
+ if (setvbuf(e->fp, NULL, _IOLBF, 0) != 0)
+ netdata_log_error("Cannot set line buffering on fd %d ('%s')", e->fd, e->filename);
+ }
+ }
+ break;
+ }
+}
+
+static void nd_log_stdin_init(int fd, const char *filename) {
+ int f = open(filename, O_WRONLY | O_APPEND | O_CREAT, 0664);
+ if(f == -1)
+ return;
+
+ if(f != fd) {
+ dup2(f, fd);
+ close(f);
+ }
+}
+
+void nd_log_initialize(void) {
+ nd_log_stdin_init(STDIN_FILENO, "/dev/null");
+
+ for(size_t i = 0 ; i < _NDLS_MAX ; i++)
+ nd_log_open(&nd_log.sources[i], i);
+}
+
+void nd_log_reopen_log_files(void) {
+ netdata_log_info("Reopening all log files.");
+
+ nd_log.std_output.initialized = false;
+ nd_log.std_error.initialized = false;
+ nd_log_initialize();
+
+ netdata_log_info("Log files re-opened.");
+}
+
+void chown_open_file(int fd, uid_t uid, gid_t gid) {
+ if(fd == -1) return;
+
+ struct stat buf;
+
+ if(fstat(fd, &buf) == -1) {
+ netdata_log_error("Cannot fstat() fd %d", fd);
+ return;
+ }
+
+ if((buf.st_uid != uid || buf.st_gid != gid) && S_ISREG(buf.st_mode)) {
+ if(fchown(fd, uid, gid) == -1)
+ netdata_log_error("Cannot fchown() fd %d.", fd);
+ }
+}
+
+void nd_log_chown_log_files(uid_t uid, gid_t gid) {
+ for(size_t i = 0 ; i < _NDLS_MAX ; i++) {
+ if(nd_log.sources[i].fd != -1 && nd_log.sources[i].fd != STDIN_FILENO)
+ chown_open_file(nd_log.sources[i].fd, uid, gid);
+ }
+}
+
+// ----------------------------------------------------------------------------
+// annotators
+struct log_field;
+static void errno_annotator(BUFFER *wb, const char *key, struct log_field *lf);
+static void priority_annotator(BUFFER *wb, const char *key, struct log_field *lf);
+static void timestamp_usec_annotator(BUFFER *wb, const char *key, struct log_field *lf);
+
+// ----------------------------------------------------------------------------
+
+typedef void (*annotator_t)(BUFFER *wb, const char *key, struct log_field *lf);
+
+struct log_field {
+ const char *journal;
+ const char *logfmt;
+ annotator_t logfmt_annotator;
+ struct log_stack_entry entry;
+};
+
+#define THREAD_LOG_STACK_MAX 50
+
+static __thread struct log_stack_entry *thread_log_stack_base[THREAD_LOG_STACK_MAX];
+static __thread size_t thread_log_stack_next = 0;
+
+static __thread struct log_field thread_log_fields[_NDF_MAX] = {
+ // THE ORDER DEFINES THE ORDER FIELDS WILL APPEAR IN logfmt
+
+ [NDF_STOP] = { // processing will not stop on this - so it is ok to be first
+ .journal = NULL,
+ .logfmt = NULL,
+ .logfmt_annotator = NULL,
+ },
+ [NDF_TIMESTAMP_REALTIME_USEC] = {
+ .journal = NULL,
+ .logfmt = "time",
+ .logfmt_annotator = timestamp_usec_annotator,
+ },
+ [NDF_SYSLOG_IDENTIFIER] = {
+ .journal = "SYSLOG_IDENTIFIER", // standard journald field
+ .logfmt = "comm",
+ },
+ [NDF_LOG_SOURCE] = {
+ .journal = "ND_LOG_SOURCE",
+ .logfmt = "source",
+ },
+ [NDF_PRIORITY] = {
+ .journal = "PRIORITY", // standard journald field
+ .logfmt = "level",
+ .logfmt_annotator = priority_annotator,
+ },
+ [NDF_ERRNO] = {
+ .journal = "ERRNO", // standard journald field
+ .logfmt = "errno",
+ .logfmt_annotator = errno_annotator,
+ },
+ [NDF_INVOCATION_ID] = {
+ .journal = "INVOCATION_ID", // standard journald field
+ .logfmt = NULL,
+ },
+ [NDF_LINE] = {
+ .journal = "CODE_LINE", // standard journald field
+ .logfmt = NULL,
+ },
+ [NDF_FILE] = {
+ .journal = "CODE_FILE", // standard journald field
+ .logfmt = NULL,
+ },
+ [NDF_FUNC] = {
+ .journal = "CODE_FUNC", // standard journald field
+ .logfmt = NULL,
+ },
+ [NDF_TID] = {
+ .journal = "TID", // standard journald field
+ .logfmt = "tid",
+ },
+ [NDF_THREAD_TAG] = {
+ .journal = "THREAD_TAG",
+ .logfmt = "thread",
+ },
+ [NDF_MESSAGE_ID] = {
+ .journal = "MESSAGE_ID",
+ .logfmt = "msg_id",
+ },
+ [NDF_MODULE] = {
+ .journal = "ND_MODULE",
+ .logfmt = "module",
+ },
+ [NDF_NIDL_NODE] = {
+ .journal = "ND_NIDL_NODE",
+ .logfmt = "node",
+ },
+ [NDF_NIDL_INSTANCE] = {
+ .journal = "ND_NIDL_INSTANCE",
+ .logfmt = "instance",
+ },
+ [NDF_NIDL_CONTEXT] = {
+ .journal = "ND_NIDL_CONTEXT",
+ .logfmt = "context",
+ },
+ [NDF_NIDL_DIMENSION] = {
+ .journal = "ND_NIDL_DIMENSION",
+ .logfmt = "dimension",
+ },
+ [NDF_SRC_TRANSPORT] = {
+ .journal = "ND_SRC_TRANSPORT",
+ .logfmt = "src_transport",
+ },
+ [NDF_SRC_IP] = {
+ .journal = "ND_SRC_IP",
+ .logfmt = "src_ip",
+ },
+ [NDF_SRC_PORT] = {
+ .journal = "ND_SRC_PORT",
+ .logfmt = "src_port",
+ },
+ [NDF_SRC_CAPABILITIES] = {
+ .journal = "ND_SRC_CAPABILITIES",
+ .logfmt = "src_capabilities",
+ },
+ [NDF_DST_TRANSPORT] = {
+ .journal = "ND_DST_TRANSPORT",
+ .logfmt = "dst_transport",
+ },
+ [NDF_DST_IP] = {
+ .journal = "ND_DST_IP",
+ .logfmt = "dst_ip",
+ },
+ [NDF_DST_PORT] = {
+ .journal = "ND_DST_PORT",
+ .logfmt = "dst_port",
+ },
+ [NDF_DST_CAPABILITIES] = {
+ .journal = "ND_DST_CAPABILITIES",
+ .logfmt = "dst_capabilities",
+ },
+ [NDF_REQUEST_METHOD] = {
+ .journal = "ND_REQUEST_METHOD",
+ .logfmt = "req_method",
+ },
+ [NDF_RESPONSE_CODE] = {
+ .journal = "ND_RESPONSE_CODE",
+ .logfmt = "code",
+ },
+ [NDF_CONNECTION_ID] = {
+ .journal = "ND_CONNECTION_ID",
+ .logfmt = "conn",
+ },
+ [NDF_TRANSACTION_ID] = {
+ .journal = "ND_TRANSACTION_ID",
+ .logfmt = "transaction",
+ },
+ [NDF_RESPONSE_SENT_BYTES] = {
+ .journal = "ND_RESPONSE_SENT_BYTES",
+ .logfmt = "sent_bytes",
+ },
+ [NDF_RESPONSE_SIZE_BYTES] = {
+ .journal = "ND_RESPONSE_SIZE_BYTES",
+ .logfmt = "size_bytes",
+ },
+ [NDF_RESPONSE_PREPARATION_TIME_USEC] = {
+ .journal = "ND_RESPONSE_PREP_TIME_USEC",
+ .logfmt = "prep_ut",
+ },
+ [NDF_RESPONSE_SENT_TIME_USEC] = {
+ .journal = "ND_RESPONSE_SENT_TIME_USEC",
+ .logfmt = "sent_ut",
+ },
+ [NDF_RESPONSE_TOTAL_TIME_USEC] = {
+ .journal = "ND_RESPONSE_TOTAL_TIME_USEC",
+ .logfmt = "total_ut",
+ },
+ [NDF_ALERT_ID] = {
+ .journal = "ND_ALERT_ID",
+ .logfmt = "alert_id",
+ },
+ [NDF_ALERT_UNIQUE_ID] = {
+ .journal = "ND_ALERT_UNIQUE_ID",
+ .logfmt = "alert_unique_id",
+ },
+ [NDF_ALERT_TRANSITION_ID] = {
+ .journal = "ND_ALERT_TRANSITION_ID",
+ .logfmt = "alert_transition_id",
+ },
+ [NDF_ALERT_EVENT_ID] = {
+ .journal = "ND_ALERT_EVENT_ID",
+ .logfmt = "alert_event_id",
+ },
+ [NDF_ALERT_CONFIG_HASH] = {
+ .journal = "ND_ALERT_CONFIG",
+ .logfmt = "alert_config",
+ },
+ [NDF_ALERT_NAME] = {
+ .journal = "ND_ALERT_NAME",
+ .logfmt = "alert",
+ },
+ [NDF_ALERT_CLASS] = {
+ .journal = "ND_ALERT_CLASS",
+ .logfmt = "alert_class",
+ },
+ [NDF_ALERT_COMPONENT] = {
+ .journal = "ND_ALERT_COMPONENT",
+ .logfmt = "alert_component",
+ },
+ [NDF_ALERT_TYPE] = {
+ .journal = "ND_ALERT_TYPE",
+ .logfmt = "alert_type",
+ },
+ [NDF_ALERT_EXEC] = {
+ .journal = "ND_ALERT_EXEC",
+ .logfmt = "alert_exec",
+ },
+ [NDF_ALERT_RECIPIENT] = {
+ .journal = "ND_ALERT_RECIPIENT",
+ .logfmt = "alert_recipient",
+ },
+ [NDF_ALERT_VALUE] = {
+ .journal = "ND_ALERT_VALUE",
+ .logfmt = "alert_value",
+ },
+ [NDF_ALERT_VALUE_OLD] = {
+ .journal = "ND_ALERT_VALUE_OLD",
+ .logfmt = "alert_value_old",
+ },
+ [NDF_ALERT_STATUS] = {
+ .journal = "ND_ALERT_STATUS",
+ .logfmt = "alert_status",
+ },
+ [NDF_ALERT_STATUS_OLD] = {
+ .journal = "ND_ALERT_STATUS_OLD",
+ .logfmt = "alert_value_old",
+ },
+ [NDF_ALERT_UNITS] = {
+ .journal = "ND_ALERT_UNITS",
+ .logfmt = "alert_units",
+ },
+ [NDF_ALERT_SUMMARY] = {
+ .journal = "ND_ALERT_SUMMARY",
+ .logfmt = "alert_summary",
+ },
+ [NDF_ALERT_INFO] = {
+ .journal = "ND_ALERT_INFO",
+ .logfmt = "alert_info",
+ },
+ [NDF_ALERT_DURATION] = {
+ .journal = "ND_ALERT_DURATION",
+ .logfmt = "alert_duration",
+ },
+ [NDF_ALERT_NOTIFICATION_REALTIME_USEC] = {
+ .journal = "ND_ALERT_NOTIFICATION_TIMESTAMP_USEC",
+ .logfmt = "alert_notification_timestamp",
+ .logfmt_annotator = timestamp_usec_annotator,
+ },
+
+ // put new items here
+ // leave the request URL and the message last
+
+ [NDF_REQUEST] = {
+ .journal = "ND_REQUEST",
+ .logfmt = "request",
+ },
+ [NDF_MESSAGE] = {
+ .journal = "MESSAGE",
+ .logfmt = "msg",
+ },
+};
+
+#define THREAD_FIELDS_MAX (sizeof(thread_log_fields) / sizeof(thread_log_fields[0]))
+
+ND_LOG_FIELD_ID nd_log_field_id_by_name(const char *field, size_t len) {
+ for(size_t i = 0; i < THREAD_FIELDS_MAX ;i++) {
+ if(thread_log_fields[i].journal && strlen(thread_log_fields[i].journal) == len && strncmp(field, thread_log_fields[i].journal, len) == 0)
+ return i;
+ }
+
+ return NDF_STOP;
+}
+
+void log_stack_pop(void *ptr) {
+ if(!ptr) return;
+
+ struct log_stack_entry *lgs = *(struct log_stack_entry (*)[])ptr;
+
+ if(unlikely(!thread_log_stack_next || lgs != thread_log_stack_base[thread_log_stack_next - 1])) {
+ fatal("You cannot pop in the middle of the stack, or an item not in the stack");
+ return;
+ }
+
+ thread_log_stack_next--;
+}
+
+void log_stack_push(struct log_stack_entry *lgs) {
+ if(!lgs || thread_log_stack_next >= THREAD_LOG_STACK_MAX) return;
+ thread_log_stack_base[thread_log_stack_next++] = lgs;
+}
+
+// ----------------------------------------------------------------------------
+// json formatter
+
+static void nd_logger_json(BUFFER *wb, struct log_field *fields, size_t fields_max) {
+
+ // --- FIELD_PARSER_VERSIONS ---
+ //
+ // IMPORTANT:
+ // THERE ARE 6 VERSIONS OF THIS CODE
+ //
+ // 1. journal (direct socket API),
+ // 2. journal (libsystemd API),
+ // 3. logfmt,
+ // 4. json,
+ // 5. convert to uint64
+ // 6. convert to int64
+ //
+ // UPDATE ALL OF THEM FOR NEW FEATURES OR FIXES
+
+ buffer_json_initialize(wb, "\"", "\"", 0, true, BUFFER_JSON_OPTIONS_MINIFY);
+ CLEAN_BUFFER *tmp = NULL;
+
+ for (size_t i = 0; i < fields_max; i++) {
+ if (!fields[i].entry.set || !fields[i].logfmt)
+ continue;
+
+ const char *key = fields[i].logfmt;
+
+ const char *s = NULL;
+ switch(fields[i].entry.type) {
+ case NDFT_TXT:
+ s = fields[i].entry.txt;
+ break;
+ case NDFT_STR:
+ s = string2str(fields[i].entry.str);
+ break;
+ case NDFT_BFR:
+ s = buffer_tostring(fields[i].entry.bfr);
+ break;
+ case NDFT_U64:
+ buffer_json_member_add_uint64(wb, key, fields[i].entry.u64);
+ break;
+ case NDFT_I64:
+ buffer_json_member_add_int64(wb, key, fields[i].entry.i64);
+ break;
+ case NDFT_DBL:
+ buffer_json_member_add_double(wb, key, fields[i].entry.dbl);
+ break;
+ case NDFT_UUID:{
+ char u[UUID_COMPACT_STR_LEN];
+ uuid_unparse_lower_compact(*fields[i].entry.uuid, u);
+ buffer_json_member_add_string(wb, key, u);
+ }
+ break;
+ case NDFT_CALLBACK: {
+ if(!tmp)
+ tmp = buffer_create(1024, NULL);
+ else
+ buffer_flush(tmp);
+ if(fields[i].entry.cb.formatter(tmp, fields[i].entry.cb.formatter_data))
+ s = buffer_tostring(tmp);
+ else
+ s = NULL;
+ }
+ break;
+ default:
+ s = "UNHANDLED";
+ break;
+ }
+
+ if(s && *s)
+ buffer_json_member_add_string(wb, key, s);
+ }
+
+ buffer_json_finalize(wb);
+}
+
+// ----------------------------------------------------------------------------
+// logfmt formatter
+
+
+static int64_t log_field_to_int64(struct log_field *lf) {
+
+ // --- FIELD_PARSER_VERSIONS ---
+ //
+ // IMPORTANT:
+ // THERE ARE 6 VERSIONS OF THIS CODE
+ //
+ // 1. journal (direct socket API),
+ // 2. journal (libsystemd API),
+ // 3. logfmt,
+ // 4. json,
+ // 5. convert to uint64
+ // 6. convert to int64
+ //
+ // UPDATE ALL OF THEM FOR NEW FEATURES OR FIXES
+
+ CLEAN_BUFFER *tmp = NULL;
+ const char *s = NULL;
+
+ switch(lf->entry.type) {
+ case NDFT_UUID:
+ case NDFT_UNSET:
+ return 0;
+
+ case NDFT_TXT:
+ s = lf->entry.txt;
+ break;
+
+ case NDFT_STR:
+ s = string2str(lf->entry.str);
+ break;
+
+ case NDFT_BFR:
+ s = buffer_tostring(lf->entry.bfr);
+ break;
+
+ case NDFT_CALLBACK:
+ if(!tmp)
+ tmp = buffer_create(0, NULL);
+ else
+ buffer_flush(tmp);
+
+ if(lf->entry.cb.formatter(tmp, lf->entry.cb.formatter_data))
+ s = buffer_tostring(tmp);
+ else
+ s = NULL;
+ break;
+
+ case NDFT_U64:
+ return lf->entry.u64;
+
+ case NDFT_I64:
+ return lf->entry.i64;
+
+ case NDFT_DBL:
+ return lf->entry.dbl;
+ }
+
+ if(s && *s)
+ return str2ll(s, NULL);
+
+ return 0;
+}
+
+static uint64_t log_field_to_uint64(struct log_field *lf) {
+
+ // --- FIELD_PARSER_VERSIONS ---
+ //
+ // IMPORTANT:
+ // THERE ARE 6 VERSIONS OF THIS CODE
+ //
+ // 1. journal (direct socket API),
+ // 2. journal (libsystemd API),
+ // 3. logfmt,
+ // 4. json,
+ // 5. convert to uint64
+ // 6. convert to int64
+ //
+ // UPDATE ALL OF THEM FOR NEW FEATURES OR FIXES
+
+ CLEAN_BUFFER *tmp = NULL;
+ const char *s = NULL;
+
+ switch(lf->entry.type) {
+ case NDFT_UUID:
+ case NDFT_UNSET:
+ return 0;
+
+ case NDFT_TXT:
+ s = lf->entry.txt;
+ break;
+
+ case NDFT_STR:
+ s = string2str(lf->entry.str);
+ break;
+
+ case NDFT_BFR:
+ s = buffer_tostring(lf->entry.bfr);
+ break;
+
+ case NDFT_CALLBACK:
+ if(!tmp)
+ tmp = buffer_create(0, NULL);
+ else
+ buffer_flush(tmp);
+
+ if(lf->entry.cb.formatter(tmp, lf->entry.cb.formatter_data))
+ s = buffer_tostring(tmp);
+ else
+ s = NULL;
+ break;
+
+ case NDFT_U64:
+ return lf->entry.u64;
+
+ case NDFT_I64:
+ return lf->entry.i64;
+
+ case NDFT_DBL:
+ return lf->entry.dbl;
+ }
+
+ if(s && *s)
+ return str2uint64_t(s, NULL);
+
+ return 0;
+}
+
+static void timestamp_usec_annotator(BUFFER *wb, const char *key, struct log_field *lf) {
+ usec_t ut = log_field_to_uint64(lf);
+
+ if(!ut)
+ return;
+
+ char datetime[RFC3339_MAX_LENGTH];
+ rfc3339_datetime_ut(datetime, sizeof(datetime), ut, 3, false);
+
+ if(buffer_strlen(wb))
+ buffer_fast_strcat(wb, " ", 1);
+
+ buffer_strcat(wb, key);
+ buffer_fast_strcat(wb, "=", 1);
+ buffer_json_strcat(wb, datetime);
+}
+
+static void errno_annotator(BUFFER *wb, const char *key, struct log_field *lf) {
+ int64_t errnum = log_field_to_int64(lf);
+
+ if(errnum == 0)
+ return;
+
+ char buf[1024];
+ const char *s = errno2str(errnum, buf, sizeof(buf));
+
+ if(buffer_strlen(wb))
+ buffer_fast_strcat(wb, " ", 1);
+
+ buffer_strcat(wb, key);
+ buffer_fast_strcat(wb, "=\"", 2);
+ buffer_print_int64(wb, errnum);
+ buffer_fast_strcat(wb, ", ", 2);
+ buffer_json_strcat(wb, s);
+ buffer_fast_strcat(wb, "\"", 1);
+}
+
+static void priority_annotator(BUFFER *wb, const char *key, struct log_field *lf) {
+ uint64_t pri = log_field_to_uint64(lf);
+
+ if(buffer_strlen(wb))
+ buffer_fast_strcat(wb, " ", 1);
+
+ buffer_strcat(wb, key);
+ buffer_fast_strcat(wb, "=", 1);
+ buffer_strcat(wb, nd_log_id2priority(pri));
+}
+
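+// A logfmt value needs quoting when it is empty, or contains '=', whitespace,
+// double quotes, backslashes, or any byte not in the printable set below.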
+static bool needs_quotes_for_logfmt(const char *s) {
+ static bool safe_for_logfmt[256] = {
+ [' '] = true, ['!'] = true, ['"'] = false, ['#'] = true, ['$'] = true, ['%'] = true, ['&'] = true,
+ ['\''] = true, ['('] = true, [')'] = true, ['*'] = true, ['+'] = true, [','] = true, ['-'] = true,
+ ['.'] = true, ['/'] = true, ['0'] = true, ['1'] = true, ['2'] = true, ['3'] = true, ['4'] = true,
+ ['5'] = true, ['6'] = true, ['7'] = true, ['8'] = true, ['9'] = true, [':'] = true, [';'] = true,
+ ['<'] = true, ['='] = true, ['>'] = true, ['?'] = true, ['@'] = true, ['A'] = true, ['B'] = true,
+ ['C'] = true, ['D'] = true, ['E'] = true, ['F'] = true, ['G'] = true, ['H'] = true, ['I'] = true,
+ ['J'] = true, ['K'] = true, ['L'] = true, ['M'] = true, ['N'] = true, ['O'] = true, ['P'] = true,
+ ['Q'] = true, ['R'] = true, ['S'] = true, ['T'] = true, ['U'] = true, ['V'] = true, ['W'] = true,
+ ['X'] = true, ['Y'] = true, ['Z'] = true, ['['] = true, ['\\'] = false, [']'] = true, ['^'] = true,
+ ['_'] = true, ['`'] = true, ['a'] = true, ['b'] = true, ['c'] = true, ['d'] = true, ['e'] = true,
+ ['f'] = true, ['g'] = true, ['h'] = true, ['i'] = true, ['j'] = true, ['k'] = true, ['l'] = true,
+ ['m'] = true, ['n'] = true, ['o'] = true, ['p'] = true, ['q'] = true, ['r'] = true, ['s'] = true,
+ ['t'] = true, ['u'] = true, ['v'] = true, ['w'] = true, ['x'] = true, ['y'] = true, ['z'] = true,
+ ['{'] = true, ['|'] = true, ['}'] = true, ['~'] = true, [0x7f] = true,
+ };
+
+ if(!*s)
+ return true;
+
+ while(*s) {
+        if(*s == '=' || isspace((uint8_t)*s) || !safe_for_logfmt[(uint8_t)*s])
+ return true;
+
+ s++;
+ }
+
+ return false;
+}
+
+static void string_to_logfmt(BUFFER *wb, const char *s) {
+ bool spaces = needs_quotes_for_logfmt(s);
+
+ if(spaces)
+ buffer_fast_strcat(wb, "\"", 1);
+
+ buffer_json_strcat(wb, s);
+
+ if(spaces)
+ buffer_fast_strcat(wb, "\"", 1);
+}
+
+static void nd_logger_logfmt(BUFFER *wb, struct log_field *fields, size_t fields_max) {
+
+ // --- FIELD_PARSER_VERSIONS ---
+ //
+ // IMPORTANT:
+ // THERE ARE 6 VERSIONS OF THIS CODE
+ //
+ // 1. journal (direct socket API),
+ // 2. journal (libsystemd API),
+ // 3. logfmt,
+ // 4. json,
+ // 5. convert to uint64
+ // 6. convert to int64
+ //
+ // UPDATE ALL OF THEM FOR NEW FEATURES OR FIXES
+
+ CLEAN_BUFFER *tmp = NULL;
+
+ for (size_t i = 0; i < fields_max; i++) {
+ if (!fields[i].entry.set || !fields[i].logfmt)
+ continue;
+
+ const char *key = fields[i].logfmt;
+
+ if(fields[i].logfmt_annotator)
+ fields[i].logfmt_annotator(wb, key, &fields[i]);
+ else {
+ if(buffer_strlen(wb))
+ buffer_fast_strcat(wb, " ", 1);
+
+ switch(fields[i].entry.type) {
+ case NDFT_TXT:
+ if(*fields[i].entry.txt) {
+ buffer_strcat(wb, key);
+ buffer_fast_strcat(wb, "=", 1);
+ string_to_logfmt(wb, fields[i].entry.txt);
+ }
+ break;
+ case NDFT_STR:
+ buffer_strcat(wb, key);
+ buffer_fast_strcat(wb, "=", 1);
+ string_to_logfmt(wb, string2str(fields[i].entry.str));
+ break;
+ case NDFT_BFR:
+ if(buffer_strlen(fields[i].entry.bfr)) {
+ buffer_strcat(wb, key);
+ buffer_fast_strcat(wb, "=", 1);
+ string_to_logfmt(wb, buffer_tostring(fields[i].entry.bfr));
+ }
+ break;
+ case NDFT_U64:
+ buffer_strcat(wb, key);
+ buffer_fast_strcat(wb, "=", 1);
+ buffer_print_uint64(wb, fields[i].entry.u64);
+ break;
+ case NDFT_I64:
+ buffer_strcat(wb, key);
+ buffer_fast_strcat(wb, "=", 1);
+ buffer_print_int64(wb, fields[i].entry.i64);
+ break;
+ case NDFT_DBL:
+ buffer_strcat(wb, key);
+ buffer_fast_strcat(wb, "=", 1);
+ buffer_print_netdata_double(wb, fields[i].entry.dbl);
+ break;
+ case NDFT_UUID: {
+ char u[UUID_COMPACT_STR_LEN];
+ uuid_unparse_lower_compact(*fields[i].entry.uuid, u);
+ buffer_strcat(wb, key);
+ buffer_fast_strcat(wb, "=", 1);
+ buffer_fast_strcat(wb, u, sizeof(u) - 1);
+ }
+ break;
+ case NDFT_CALLBACK: {
+ if(!tmp)
+ tmp = buffer_create(1024, NULL);
+ else
+ buffer_flush(tmp);
+ if(fields[i].entry.cb.formatter(tmp, fields[i].entry.cb.formatter_data)) {
+ buffer_strcat(wb, key);
+ buffer_fast_strcat(wb, "=", 1);
+ string_to_logfmt(wb, buffer_tostring(tmp));
+ }
+ }
+ break;
+ default:
+ buffer_strcat(wb, "UNHANDLED");
+ break;
+ }
+ }
+ }
+}
+
+// ----------------------------------------------------------------------------
+// journal logger
+
+bool nd_log_journal_socket_available(void) {
+ if(netdata_configured_host_prefix && *netdata_configured_host_prefix) {
+ char filename[FILENAME_MAX + 1];
+
+ snprintfz(filename, sizeof(filename), "%s%s",
+ netdata_configured_host_prefix, "/run/systemd/journal/socket");
+
+ if(is_path_unix_socket(filename))
+ return true;
+ }
+
+ return is_path_unix_socket("/run/systemd/journal/socket");
+}
+
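+// Send all set fields to systemd-journal via sd_journal_sendv(), building one
+// "KEY=VALUE" iovec entry per field.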
+static bool nd_logger_journal_libsystemd(struct log_field *fields, size_t fields_max) {
+#ifdef HAVE_SYSTEMD
+
+ // --- FIELD_PARSER_VERSIONS ---
+ //
+ // IMPORTANT:
+ // THERE ARE 6 VERSIONS OF THIS CODE
+ //
+ // 1. journal (direct socket API),
+ // 2. journal (libsystemd API),
+ // 3. logfmt,
+ // 4. json,
+ // 5. convert to uint64
+ // 6. convert to int64
+ //
+ // UPDATE ALL OF THEM FOR NEW FEATURES OR FIXES
+
+ struct iovec iov[fields_max];
+ int iov_count = 0;
+
+ memset(iov, 0, sizeof(iov));
+
+ CLEAN_BUFFER *tmp = NULL;
+
+ for (size_t i = 0; i < fields_max; i++) {
+ if (!fields[i].entry.set || !fields[i].journal)
+ continue;
+
+ const char *key = fields[i].journal;
+ char *value = NULL;
+ switch (fields[i].entry.type) {
+ case NDFT_TXT:
+ if(*fields[i].entry.txt)
+ asprintf(&value, "%s=%s", key, fields[i].entry.txt);
+ break;
+ case NDFT_STR:
+ asprintf(&value, "%s=%s", key, string2str(fields[i].entry.str));
+ break;
+ case NDFT_BFR:
+ if(buffer_strlen(fields[i].entry.bfr))
+ asprintf(&value, "%s=%s", key, buffer_tostring(fields[i].entry.bfr));
+ break;
+ case NDFT_U64:
+ asprintf(&value, "%s=%" PRIu64, key, fields[i].entry.u64);
+ break;
+ case NDFT_I64:
+ asprintf(&value, "%s=%" PRId64, key, fields[i].entry.i64);
+ break;
+ case NDFT_DBL:
+ asprintf(&value, "%s=%f", key, fields[i].entry.dbl);
+ break;
+ case NDFT_UUID: {
+ char u[UUID_COMPACT_STR_LEN];
+ uuid_unparse_lower_compact(*fields[i].entry.uuid, u);
+ asprintf(&value, "%s=%s", key, u);
+ }
+ break;
+ case NDFT_CALLBACK: {
+ if(!tmp)
+ tmp = buffer_create(1024, NULL);
+ else
+ buffer_flush(tmp);
+ if(fields[i].entry.cb.formatter(tmp, fields[i].entry.cb.formatter_data))
+ asprintf(&value, "%s=%s", key, buffer_tostring(tmp));
+ }
+ break;
+ default:
+ asprintf(&value, "%s=%s", key, "UNHANDLED");
+ break;
+ }
+
+ if (value) {
+ iov[iov_count].iov_base = value;
+ iov[iov_count].iov_len = strlen(value);
+ iov_count++;
+ }
+ }
+
+ int r = sd_journal_sendv(iov, iov_count);
+
+ // Clean up allocated memory
+ for (int i = 0; i < iov_count; i++) {
+ if (iov[i].iov_base != NULL) {
+ free(iov[i].iov_base);
+ }
+ }
+
+ return r == 0;
+#else
+ return false;
+#endif
+}
+
+static bool nd_logger_journal_direct(struct log_field *fields, size_t fields_max) {
+ if(!nd_log.journal_direct.initialized)
+ return false;
+
+ // --- FIELD_PARSER_VERSIONS ---
+ //
+ // IMPORTANT:
+ // THERE ARE 6 VERSIONS OF THIS CODE
+ //
+ // 1. journal (direct socket API),
+ // 2. journal (libsystemd API),
+ // 3. logfmt,
+ // 4. json,
+ // 5. convert to uint64
+ // 6. convert to int64
+ //
+ // UPDATE ALL OF THEM FOR NEW FEATURES OR FIXES
+
+ CLEAN_BUFFER *wb = buffer_create(4096, NULL);
+ CLEAN_BUFFER *tmp = NULL;
+
+ for (size_t i = 0; i < fields_max; i++) {
+ if (!fields[i].entry.set || !fields[i].journal)
+ continue;
+
+ const char *key = fields[i].journal;
+
+ const char *s = NULL;
+ switch(fields[i].entry.type) {
+ case NDFT_TXT:
+ s = fields[i].entry.txt;
+ break;
+ case NDFT_STR:
+ s = string2str(fields[i].entry.str);
+ break;
+ case NDFT_BFR:
+ s = buffer_tostring(fields[i].entry.bfr);
+ break;
+ case NDFT_U64:
+ buffer_strcat(wb, key);
+ buffer_putc(wb, '=');
+ buffer_print_uint64(wb, fields[i].entry.u64);
+ buffer_putc(wb, '\n');
+ break;
+ case NDFT_I64:
+ buffer_strcat(wb, key);
+ buffer_putc(wb, '=');
+ buffer_print_int64(wb, fields[i].entry.i64);
+ buffer_putc(wb, '\n');
+ break;
+ case NDFT_DBL:
+ buffer_strcat(wb, key);
+ buffer_putc(wb, '=');
+ buffer_print_netdata_double(wb, fields[i].entry.dbl);
+ buffer_putc(wb, '\n');
+ break;
+ case NDFT_UUID:{
+ char u[UUID_COMPACT_STR_LEN];
+ uuid_unparse_lower_compact(*fields[i].entry.uuid, u);
+ buffer_strcat(wb, key);
+ buffer_putc(wb, '=');
+ buffer_fast_strcat(wb, u, sizeof(u) - 1);
+ buffer_putc(wb, '\n');
+ }
+ break;
+ case NDFT_CALLBACK: {
+ if(!tmp)
+ tmp = buffer_create(1024, NULL);
+ else
+ buffer_flush(tmp);
+ if(fields[i].entry.cb.formatter(tmp, fields[i].entry.cb.formatter_data))
+ s = buffer_tostring(tmp);
+ else
+ s = NULL;
+ }
+ break;
+ default:
+ s = "UNHANDLED";
+ break;
+ }
+
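+        // Journal Export Format: single-line values are written as KEY=VALUE\n;
+        // values containing newlines are written as KEY\n, a 64-bit little-endian
+        // length, the raw value, and a trailing newline.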
+ if(s && *s) {
+ buffer_strcat(wb, key);
+ if(!strchr(s, '\n')) {
+ buffer_putc(wb, '=');
+ buffer_strcat(wb, s);
+ buffer_putc(wb, '\n');
+ }
+ else {
+ buffer_putc(wb, '\n');
+ size_t size = strlen(s);
+ uint64_t le_size = htole64(size);
+ buffer_memcat(wb, &le_size, sizeof(le_size));
+ buffer_memcat(wb, s, size);
+ buffer_putc(wb, '\n');
+ }
+ }
+ }
+
+ return journal_direct_send(nd_log.journal_direct.fd, buffer_tostring(wb), buffer_strlen(wb));
+}
+
+// ----------------------------------------------------------------------------
+// syslog logger - uses logfmt
+
+static bool nd_logger_syslog(int priority, ND_LOG_FORMAT format, struct log_field *fields, size_t fields_max) {
+ CLEAN_BUFFER *wb = buffer_create(1024, NULL);
+
+ nd_logger_logfmt(wb, fields, fields_max);
+ syslog(priority, "%s", buffer_tostring(wb));
+
+ return true;
+}
+
+// ----------------------------------------------------------------------------
+// file logger - uses logfmt
+
+static bool nd_logger_file(FILE *fp, ND_LOG_FORMAT format, struct log_field *fields, size_t fields_max) {
+ BUFFER *wb = buffer_create(1024, NULL);
+
+ if(format == NDLF_JSON)
+ nd_logger_json(wb, fields, fields_max);
+ else
+ nd_logger_logfmt(wb, fields, fields_max);
+
+ int r = fprintf(fp, "%s\n", buffer_tostring(wb));
+ fflush(fp);
+
+ buffer_free(wb);
+ return r > 0;
+}
+
+// ----------------------------------------------------------------------------
+// logger router
+
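+// Pick the effective output for a log source. When the configured method is
+// unavailable (journal or syslog not initialized, no file opened), fall back
+// to stderr, protected by its spinlock.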
+static ND_LOG_METHOD nd_logger_select_output(ND_LOG_SOURCES source, FILE **fpp, SPINLOCK **spinlock) {
+ *spinlock = NULL;
+ ND_LOG_METHOD output = nd_log.sources[source].method;
+
+ switch(output) {
+ case NDLM_JOURNAL:
+ if(unlikely(!nd_log.journal_direct.initialized && !nd_log.journal.initialized)) {
+ output = NDLM_FILE;
+ *fpp = stderr;
+ *spinlock = &nd_log.std_error.spinlock;
+ }
+ else {
+ *fpp = NULL;
+ *spinlock = NULL;
+ }
+ break;
+
+ case NDLM_SYSLOG:
+ if(unlikely(!nd_log.syslog.initialized)) {
+ output = NDLM_FILE;
+ *spinlock = &nd_log.std_error.spinlock;
+ *fpp = stderr;
+ }
+ else {
+ *spinlock = NULL;
+ *fpp = NULL;
+ }
+ break;
+
+ case NDLM_FILE:
+ if(!nd_log.sources[source].fp) {
+ *fpp = stderr;
+ *spinlock = &nd_log.std_error.spinlock;
+ }
+ else {
+ *fpp = nd_log.sources[source].fp;
+ *spinlock = &nd_log.sources[source].spinlock;
+ }
+ break;
+
+ case NDLM_STDOUT:
+ output = NDLM_FILE;
+ *fpp = stdout;
+ *spinlock = &nd_log.std_output.spinlock;
+ break;
+
+ default:
+ case NDLM_DEFAULT:
+ case NDLM_STDERR:
+ output = NDLM_FILE;
+ *fpp = stderr;
+ *spinlock = &nd_log.std_error.spinlock;
+ break;
+
+ case NDLM_DISABLED:
+ case NDLM_DEVNULL:
+ output = NDLM_DISABLED;
+ *fpp = NULL;
+ *spinlock = NULL;
+ break;
+ }
+
+ return output;
+}
+
+// ----------------------------------------------------------------------------
+// high level logger
+
+static void nd_logger_log_fields(SPINLOCK *spinlock, FILE *fp, bool limit, ND_LOG_FIELD_PRIORITY priority,
+ ND_LOG_METHOD output, struct nd_log_source *source,
+ struct log_field *fields, size_t fields_max) {
+ if(spinlock)
+ spinlock_lock(spinlock);
+
+ // check the limits
+ if(limit && nd_log_limit_reached(source))
+ goto cleanup;
+
+ if(output == NDLM_JOURNAL) {
+ if(!nd_logger_journal_direct(fields, fields_max) && !nd_logger_journal_libsystemd(fields, fields_max)) {
+ // we can't log to journal, let's log to stderr
+ if(spinlock)
+ spinlock_unlock(spinlock);
+
+ output = NDLM_FILE;
+ spinlock = &nd_log.std_error.spinlock;
+ fp = stderr;
+
+ if(spinlock)
+ spinlock_lock(spinlock);
+ }
+ }
+
+ if(output == NDLM_SYSLOG)
+ nd_logger_syslog(priority, source->format, fields, fields_max);
+
+ if(output == NDLM_FILE)
+ nd_logger_file(fp, source->format, fields, fields_max);
+
+
+cleanup:
+ if(spinlock)
+ spinlock_unlock(spinlock);
+}
+
+static void nd_logger_unset_all_thread_fields(void) {
+ size_t fields_max = THREAD_FIELDS_MAX;
+ for(size_t i = 0; i < fields_max ; i++)
+ thread_log_fields[i].entry.set = false;
+}
+
+static void nd_logger_merge_log_stack_to_thread_fields(void) {
+ for(size_t c = 0; c < thread_log_stack_next ;c++) {
+ struct log_stack_entry *lgs = thread_log_stack_base[c];
+
+ for(size_t i = 0; lgs[i].id != NDF_STOP ; i++) {
+ if(lgs[i].id >= _NDF_MAX || !lgs[i].set)
+ continue;
+
+ struct log_stack_entry *e = &lgs[i];
+ ND_LOG_STACK_FIELD_TYPE type = lgs[i].type;
+
+ // do not add empty / unset fields
+ if((type == NDFT_TXT && (!e->txt || !*e->txt)) ||
+ (type == NDFT_BFR && (!e->bfr || !buffer_strlen(e->bfr))) ||
+ (type == NDFT_STR && !e->str) ||
+ (type == NDFT_UUID && !e->uuid) ||
+ (type == NDFT_CALLBACK && !e->cb.formatter) ||
+ type == NDFT_UNSET)
+ continue;
+
+ thread_log_fields[lgs[i].id].entry = *e;
+ }
+ }
+}
+
+static void nd_logger(const char *file, const char *function, const unsigned long line,
+ ND_LOG_SOURCES source, ND_LOG_FIELD_PRIORITY priority, bool limit, int saved_errno,
+ const char *fmt, va_list ap) {
+
+ SPINLOCK *spinlock;
+ FILE *fp;
+ ND_LOG_METHOD output = nd_logger_select_output(source, &fp, &spinlock);
+ if(output != NDLM_FILE && output != NDLM_JOURNAL && output != NDLM_SYSLOG)
+ return;
+
+ // mark all fields as unset
+ nd_logger_unset_all_thread_fields();
+
+ // flatten the log stack into the fields
+ nd_logger_merge_log_stack_to_thread_fields();
+
+ // set the common fields that are automatically set by the logging subsystem
+
+ if(likely(!thread_log_fields[NDF_INVOCATION_ID].entry.set))
+ thread_log_fields[NDF_INVOCATION_ID].entry = ND_LOG_FIELD_UUID(NDF_INVOCATION_ID, &nd_log.invocation_id);
+
+ if(likely(!thread_log_fields[NDF_LOG_SOURCE].entry.set))
+ thread_log_fields[NDF_LOG_SOURCE].entry = ND_LOG_FIELD_TXT(NDF_LOG_SOURCE, nd_log_id2source(source));
+ else {
+ ND_LOG_SOURCES src = source;
+
+ if(thread_log_fields[NDF_LOG_SOURCE].entry.type == NDFT_TXT)
+ src = nd_log_source2id(thread_log_fields[NDF_LOG_SOURCE].entry.txt, source);
+ else if(thread_log_fields[NDF_LOG_SOURCE].entry.type == NDFT_U64)
+ src = thread_log_fields[NDF_LOG_SOURCE].entry.u64;
+
+ if(src != source && src >= 0 && src < _NDLS_MAX) {
+ source = src;
+ output = nd_logger_select_output(source, &fp, &spinlock);
+ if(output != NDLM_FILE && output != NDLM_JOURNAL && output != NDLM_SYSLOG)
+ return;
+ }
+ }
+
+ if(likely(!thread_log_fields[NDF_SYSLOG_IDENTIFIER].entry.set))
+ thread_log_fields[NDF_SYSLOG_IDENTIFIER].entry = ND_LOG_FIELD_TXT(NDF_SYSLOG_IDENTIFIER, program_name);
+
+ if(likely(!thread_log_fields[NDF_LINE].entry.set)) {
+ thread_log_fields[NDF_LINE].entry = ND_LOG_FIELD_U64(NDF_LINE, line);
+ thread_log_fields[NDF_FILE].entry = ND_LOG_FIELD_TXT(NDF_FILE, file);
+ thread_log_fields[NDF_FUNC].entry = ND_LOG_FIELD_TXT(NDF_FUNC, function);
+ }
+
+ if(likely(!thread_log_fields[NDF_PRIORITY].entry.set)) {
+ thread_log_fields[NDF_PRIORITY].entry = ND_LOG_FIELD_U64(NDF_PRIORITY, priority);
+ }
+
+ if(likely(!thread_log_fields[NDF_TID].entry.set))
+ thread_log_fields[NDF_TID].entry = ND_LOG_FIELD_U64(NDF_TID, gettid());
+
+ char os_threadname[NETDATA_THREAD_NAME_MAX + 1];
+ if(likely(!thread_log_fields[NDF_THREAD_TAG].entry.set)) {
+ const char *thread_tag = netdata_thread_tag();
+        if(!netdata_thread_tag_exists()) {
+            os_thread_get_current_name_np(os_threadname);
+            if ('\0' != os_threadname[0])
+                /* if the OS reports a non-empty thread name, use it instead of the default tag */
+                thread_tag = os_threadname;
+        }
+ thread_log_fields[NDF_THREAD_TAG].entry = ND_LOG_FIELD_TXT(NDF_THREAD_TAG, thread_tag);
+
+ // TODO: fix the ND_MODULE in logging by setting proper module name in threads
+// if(!thread_log_fields[NDF_MODULE].entry.set)
+// thread_log_fields[NDF_MODULE].entry = ND_LOG_FIELD_CB(NDF_MODULE, thread_tag_to_module, (void *)thread_tag);
+ }
+
+ if(likely(!thread_log_fields[NDF_TIMESTAMP_REALTIME_USEC].entry.set))
+ thread_log_fields[NDF_TIMESTAMP_REALTIME_USEC].entry = ND_LOG_FIELD_U64(NDF_TIMESTAMP_REALTIME_USEC, now_realtime_usec());
+
+ if(saved_errno != 0 && !thread_log_fields[NDF_ERRNO].entry.set)
+ thread_log_fields[NDF_ERRNO].entry = ND_LOG_FIELD_I64(NDF_ERRNO, saved_errno);
+
+ CLEAN_BUFFER *wb = NULL;
+ if(fmt && !thread_log_fields[NDF_MESSAGE].entry.set) {
+ wb = buffer_create(1024, NULL);
+ buffer_vsprintf(wb, fmt, ap);
+ thread_log_fields[NDF_MESSAGE].entry = ND_LOG_FIELD_TXT(NDF_MESSAGE, buffer_tostring(wb));
+ }
+
+ nd_logger_log_fields(spinlock, fp, limit, priority, output, &nd_log.sources[source],
+ thread_log_fields, THREAD_FIELDS_MAX);
+
+ if(nd_log.sources[source].pending_msg) {
+ // log a pending message
+
+ nd_logger_unset_all_thread_fields();
+
+ thread_log_fields[NDF_TIMESTAMP_REALTIME_USEC].entry = (struct log_stack_entry){
+ .set = true,
+ .type = NDFT_U64,
+ .u64 = now_realtime_usec(),
+ };
+
+ thread_log_fields[NDF_LOG_SOURCE].entry = (struct log_stack_entry){
+ .set = true,
+ .type = NDFT_TXT,
+ .txt = nd_log_id2source(source),
+ };
+
+ thread_log_fields[NDF_SYSLOG_IDENTIFIER].entry = (struct log_stack_entry){
+ .set = true,
+ .type = NDFT_TXT,
+ .txt = program_name,
+ };
+
+ thread_log_fields[NDF_MESSAGE].entry = (struct log_stack_entry){
+ .set = true,
+ .type = NDFT_TXT,
+ .txt = nd_log.sources[source].pending_msg,
+ };
+
+ nd_logger_log_fields(spinlock, fp, false, priority, output,
+ &nd_log.sources[source],
+ thread_log_fields, THREAD_FIELDS_MAX);
+
+ freez((void *)nd_log.sources[source].pending_msg);
+ nd_log.sources[source].pending_msg = NULL;
+ }
+
+ errno = 0;
+}
+
+static ND_LOG_SOURCES nd_log_validate_source(ND_LOG_SOURCES source) {
+ if(source >= _NDLS_MAX)
+ source = NDLS_DAEMON;
+
+ if(overwrite_thread_source)
+ source = overwrite_thread_source;
+
+ if(nd_log.overwrite_process_source)
+ source = nd_log.overwrite_process_source;
+
+ return source;
+}
+
+// ----------------------------------------------------------------------------
+// public API for loggers
+
+void netdata_logger(ND_LOG_SOURCES source, ND_LOG_FIELD_PRIORITY priority, const char *file, const char *function, unsigned long line, const char *fmt, ... ) {
+ int saved_errno = errno;
+ source = nd_log_validate_source(source);
+
+ if (source != NDLS_DEBUG && priority > nd_log.sources[source].min_priority)
+ return;
+
+ va_list args;
+ va_start(args, fmt);
+ nd_logger(file, function, line, source, priority,
+ source == NDLS_DAEMON || source == NDLS_COLLECTORS,
+ saved_errno, fmt, args);
+ va_end(args);
+}
+
+void netdata_logger_with_limit(ERROR_LIMIT *erl, ND_LOG_SOURCES source, ND_LOG_FIELD_PRIORITY priority, const char *file __maybe_unused, const char *function __maybe_unused, const unsigned long line __maybe_unused, const char *fmt, ... ) {
+ int saved_errno = errno;
+ source = nd_log_validate_source(source);
+
+ if (source != NDLS_DEBUG && priority > nd_log.sources[source].min_priority)
+ return;
+
+ if(erl->sleep_ut)
+ sleep_usec(erl->sleep_ut);
+
+ spinlock_lock(&erl->spinlock);
+
+ erl->count++;
+ time_t now = now_boottime_sec();
+ if(now - erl->last_logged < erl->log_every) {
+ spinlock_unlock(&erl->spinlock);
+ return;
+ }
+
+ spinlock_unlock(&erl->spinlock);
+
+ va_list args;
+ va_start(args, fmt);
+ nd_logger(file, function, line, source, priority,
+ source == NDLS_DAEMON || source == NDLS_COLLECTORS,
+ saved_errno, fmt, args);
+ va_end(args);
+ erl->last_logged = now;
+ erl->count = 0;
+}
+
+void netdata_logger_fatal( const char *file, const char *function, const unsigned long line, const char *fmt, ... ) {
+ int saved_errno = errno;
+ ND_LOG_SOURCES source = NDLS_DAEMON;
+ source = nd_log_validate_source(source);
+
+ va_list args;
+ va_start(args, fmt);
+ nd_logger(file, function, line, source, NDLP_ALERT, true, saved_errno, fmt, args);
+ va_end(args);
+
+ char date[LOG_DATE_LENGTH];
+ log_date(date, LOG_DATE_LENGTH, now_realtime_sec());
+
+ char action_data[70+1];
+ snprintfz(action_data, 70, "%04lu@%-10.10s:%-15.15s/%d", line, file, function, saved_errno);
+ char action_result[60+1];
+
+ char os_threadname[NETDATA_THREAD_NAME_MAX + 1];
+ const char *thread_tag = netdata_thread_tag();
+    if(!netdata_thread_tag_exists()) {
+        os_thread_get_current_name_np(os_threadname);
+        if ('\0' != os_threadname[0])
+            /* if the OS reports a non-empty thread name, use it instead of the default tag */
+            thread_tag = os_threadname;
+    }
+ if(!thread_tag)
+ thread_tag = "UNKNOWN";
+
+ const char *tag_to_send = thread_tag;
+
+ // anonymize thread names
+ if(strncmp(thread_tag, THREAD_TAG_STREAM_RECEIVER, strlen(THREAD_TAG_STREAM_RECEIVER)) == 0)
+ tag_to_send = THREAD_TAG_STREAM_RECEIVER;
+ if(strncmp(thread_tag, THREAD_TAG_STREAM_SENDER, strlen(THREAD_TAG_STREAM_SENDER)) == 0)
+ tag_to_send = THREAD_TAG_STREAM_SENDER;
+
+ snprintfz(action_result, 60, "%s:%s", program_name, tag_to_send);
+ send_statistics("FATAL", action_result, action_data);
+
+#ifdef HAVE_BACKTRACE
+ int fd = nd_log.sources[NDLS_DAEMON].fd;
+ if(fd == -1)
+ fd = STDERR_FILENO;
+
+ int nptrs;
+ void *buffer[10000];
+
+    nptrs = backtrace(buffer, sizeof(buffer) / sizeof(void *));
+ if(nptrs)
+ backtrace_symbols_fd(buffer, nptrs, fd);
+#endif
+
+#ifdef NETDATA_INTERNAL_CHECKS
+ abort();
+#endif
+
+ netdata_cleanup_and_exit(1);
+}
+
+// ----------------------------------------------------------------------------
+// log limits
+
+void nd_log_limits_reset(void) {
+ usec_t now_ut = now_monotonic_usec();
+
+ spinlock_lock(&nd_log.std_output.spinlock);
+ spinlock_lock(&nd_log.std_error.spinlock);
+
+ for(size_t i = 0; i < _NDLS_MAX ;i++) {
+ spinlock_lock(&nd_log.sources[i].spinlock);
+ nd_log.sources[i].limits.prevented = 0;
+ nd_log.sources[i].limits.counter = 0;
+ nd_log.sources[i].limits.started_monotonic_ut = now_ut;
+ nd_log.sources[i].limits.logs_per_period = nd_log.sources[i].limits.logs_per_period_backup;
+ spinlock_unlock(&nd_log.sources[i].spinlock);
+ }
+
+ spinlock_unlock(&nd_log.std_output.spinlock);
+ spinlock_unlock(&nd_log.std_error.spinlock);
+}
+
+void nd_log_limits_unlimited(void) {
+ nd_log_limits_reset();
+ for(size_t i = 0; i < _NDLS_MAX ;i++) {
+ nd_log.sources[i].limits.logs_per_period = 0;
+ }
+}
+
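+// Flood protection: count entries per throttle period; once the counter exceeds
+// logs_per_period, suppress further entries and queue a pending message, then,
+// when the period elapses, log how many entries were prevented and resume.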
+static bool nd_log_limit_reached(struct nd_log_source *source) {
+ if(source->limits.throttle_period == 0 || source->limits.logs_per_period == 0)
+ return false;
+
+ usec_t now_ut = now_monotonic_usec();
+ if(!source->limits.started_monotonic_ut)
+ source->limits.started_monotonic_ut = now_ut;
+
+ source->limits.counter++;
+
+ if(now_ut - source->limits.started_monotonic_ut > (usec_t)source->limits.throttle_period) {
+ if(source->limits.prevented) {
+ BUFFER *wb = buffer_create(1024, NULL);
+ buffer_sprintf(wb,
+ "LOG FLOOD PROTECTION: resuming logging "
+ "(prevented %"PRIu32" logs in the last %"PRIu32" seconds).",
+ source->limits.prevented,
+ source->limits.throttle_period);
+
+ if(source->pending_msg)
+ freez((void *)source->pending_msg);
+
+ source->pending_msg = strdupz(buffer_tostring(wb));
+
+ buffer_free(wb);
+ }
+
+ // restart the period accounting
+ source->limits.started_monotonic_ut = now_ut;
+ source->limits.counter = 1;
+ source->limits.prevented = 0;
+
+ // log this error
+ return false;
+ }
+
+ if(source->limits.counter > source->limits.logs_per_period) {
+ if(!source->limits.prevented) {
+ BUFFER *wb = buffer_create(1024, NULL);
+ buffer_sprintf(wb,
+ "LOG FLOOD PROTECTION: too many logs (%"PRIu32" logs in %"PRId64" seconds, threshold is set to %"PRIu32" logs "
+ "in %"PRIu32" seconds). Preventing more logs from process '%s' for %"PRId64" seconds.",
+ source->limits.counter,
+ (int64_t)((now_ut - source->limits.started_monotonic_ut) / USEC_PER_SEC),
+ source->limits.logs_per_period,
+ source->limits.throttle_period,
+ program_name,
+ (int64_t)((source->limits.started_monotonic_ut + (source->limits.throttle_period * USEC_PER_SEC) - now_ut)) / USEC_PER_SEC);
+
+ if(source->pending_msg)
+ freez((void *)source->pending_msg);
+
+ source->pending_msg = strdupz(buffer_tostring(wb));
+
+ buffer_free(wb);
+ }
+
+ source->limits.prevented++;
+
+ // prevent logging this error
+#ifdef NETDATA_INTERNAL_CHECKS
+ return false;
+#else
+ return true;
+#endif
+ }
+
+ return false;
+}
diff --git a/libnetdata/log/log.h b/libnetdata/log/log.h
new file mode 100644
index 00000000..ad634693
--- /dev/null
+++ b/libnetdata/log/log.h
@@ -0,0 +1,301 @@
+// SPDX-License-Identifier: GPL-3.0-or-later
+
+#ifndef NETDATA_LOG_H
+#define NETDATA_LOG_H 1
+
+# ifdef __cplusplus
+extern "C" {
+# endif
+
+#include "../libnetdata.h"
+
+#define ND_LOG_DEFAULT_THROTTLE_LOGS 1000
+#define ND_LOG_DEFAULT_THROTTLE_PERIOD 60
+
+typedef enum __attribute__((__packed__)) {
+ NDLS_UNSET = 0, // internal use only
+ NDLS_ACCESS, // access.log
+ NDLS_ACLK, // aclk.log
+ NDLS_COLLECTORS, // collectors.log
+ NDLS_DAEMON, // error.log
+ NDLS_HEALTH, // health.log
+ NDLS_DEBUG, // debug.log
+
+ // terminator
+ _NDLS_MAX,
+} ND_LOG_SOURCES;
+
+typedef enum __attribute__((__packed__)) {
+ NDLP_EMERG = LOG_EMERG,
+ NDLP_ALERT = LOG_ALERT,
+ NDLP_CRIT = LOG_CRIT,
+ NDLP_ERR = LOG_ERR,
+ NDLP_WARNING = LOG_WARNING,
+ NDLP_NOTICE = LOG_NOTICE,
+ NDLP_INFO = LOG_INFO,
+ NDLP_DEBUG = LOG_DEBUG,
+} ND_LOG_FIELD_PRIORITY;
+
+typedef enum __attribute__((__packed__)) {
+ // KEEP THESE IN THE SAME ORDER AS in thread_log_fields (log.c)
+    // so that it is easy to audit for missing fields
+
+ NDF_STOP = 0,
+ NDF_TIMESTAMP_REALTIME_USEC, // the timestamp of the log message - added automatically
+ NDF_SYSLOG_IDENTIFIER, // the syslog identifier of the application - added automatically
+ NDF_LOG_SOURCE, // DAEMON, COLLECTORS, HEALTH, ACCESS, ACLK - set at the log call
+ NDF_PRIORITY, // the syslog priority (severity) - set at the log call
+ NDF_ERRNO, // the ERRNO at the time of the log call - added automatically
+ NDF_INVOCATION_ID, // the INVOCATION_ID of Netdata - added automatically
+ NDF_LINE, // the source code file line number - added automatically
+ NDF_FILE, // the source code filename - added automatically
+ NDF_FUNC, // the source code function - added automatically
+ NDF_TID, // the thread ID of the thread logging - added automatically
+ NDF_THREAD_TAG, // the thread tag of the thread logging - added automatically
+ NDF_MESSAGE_ID, // for specific events
+    NDF_MODULE,                         // for internal plugin modules; all others get the NDF_THREAD_TAG
+
+ NDF_NIDL_NODE, // the node / rrdhost currently being worked
+ NDF_NIDL_INSTANCE, // the instance / rrdset currently being worked
+ NDF_NIDL_CONTEXT, // the context of the instance currently being worked
+ NDF_NIDL_DIMENSION, // the dimension / rrddim currently being worked
+
+ // web server, aclk and stream receiver
+ NDF_SRC_TRANSPORT, // the transport we received the request, one of: http, https, pluginsd
+
+ // web server and stream receiver
+ NDF_SRC_IP, // the streaming / web server source IP
+ NDF_SRC_PORT, // the streaming / web server source Port
+ NDF_SRC_CAPABILITIES, // the stream receiver capabilities
+
+ // stream sender (established links)
+ NDF_DST_TRANSPORT, // the transport we send the request, one of: http, https
+ NDF_DST_IP, // the destination streaming IP
+ NDF_DST_PORT, // the destination streaming Port
+ NDF_DST_CAPABILITIES, // the destination streaming capabilities
+
+ // web server, aclk and stream receiver
+ NDF_REQUEST_METHOD, // for http like requests, the http request method
+ NDF_RESPONSE_CODE, // for http like requests, the http response code, otherwise a status string
+
+ // web server (all), aclk (queries)
+ NDF_CONNECTION_ID, // the web server connection ID
+ NDF_TRANSACTION_ID, // the web server and API transaction ID
+ NDF_RESPONSE_SENT_BYTES, // for http like requests, the response bytes
+ NDF_RESPONSE_SIZE_BYTES, // for http like requests, the uncompressed response size
+ NDF_RESPONSE_PREPARATION_TIME_USEC, // for http like requests, the preparation time
+ NDF_RESPONSE_SENT_TIME_USEC, // for http like requests, the time to send the response back
+ NDF_RESPONSE_TOTAL_TIME_USEC, // for http like requests, the total time to complete the response
+
+ // health alerts
+ NDF_ALERT_ID,
+ NDF_ALERT_UNIQUE_ID,
+ NDF_ALERT_EVENT_ID,
+ NDF_ALERT_TRANSITION_ID,
+ NDF_ALERT_CONFIG_HASH,
+ NDF_ALERT_NAME,
+ NDF_ALERT_CLASS,
+ NDF_ALERT_COMPONENT,
+ NDF_ALERT_TYPE,
+ NDF_ALERT_EXEC,
+ NDF_ALERT_RECIPIENT,
+ NDF_ALERT_DURATION,
+ NDF_ALERT_VALUE,
+ NDF_ALERT_VALUE_OLD,
+ NDF_ALERT_STATUS,
+ NDF_ALERT_STATUS_OLD,
+ NDF_ALERT_SOURCE,
+ NDF_ALERT_UNITS,
+ NDF_ALERT_SUMMARY,
+ NDF_ALERT_INFO,
+ NDF_ALERT_NOTIFICATION_REALTIME_USEC,
+ // NDF_ALERT_FLAGS,
+
+ // put new items here
+ // leave the request URL and the message last
+
+ NDF_REQUEST, // the request we are currently working on
+ NDF_MESSAGE, // the log message, if any
+
+ // terminator
+ _NDF_MAX,
+} ND_LOG_FIELD_ID;
+
+typedef enum __attribute__((__packed__)) {
+ NDFT_UNSET = 0,
+ NDFT_TXT,
+ NDFT_STR,
+ NDFT_BFR,
+ NDFT_U64,
+ NDFT_I64,
+ NDFT_DBL,
+ NDFT_UUID,
+ NDFT_CALLBACK,
+} ND_LOG_STACK_FIELD_TYPE;
+
+void nd_log_set_user_settings(ND_LOG_SOURCES source, const char *setting);
+void nd_log_set_facility(const char *facility);
+void nd_log_set_priority_level(const char *setting);
+void nd_log_initialize(void);
+void nd_log_reopen_log_files(void);
+void chown_open_file(int fd, uid_t uid, gid_t gid);
+void nd_log_chown_log_files(uid_t uid, gid_t gid);
+void nd_log_set_flood_protection(size_t logs, time_t period);
+void nd_log_initialize_for_external_plugins(const char *name);
+void nd_log_set_thread_source(ND_LOG_SOURCES source);
+bool nd_log_journal_socket_available(void);
+ND_LOG_FIELD_ID nd_log_field_id_by_name(const char *field, size_t len);
+int nd_log_priority2id(const char *priority);
+const char *nd_log_id2priority(ND_LOG_FIELD_PRIORITY priority);
+const char *nd_log_method_for_external_plugins(const char *s);
+
+int nd_log_health_fd(void);
+typedef bool (*log_formatter_callback_t)(BUFFER *wb, void *data);
+
+struct log_stack_entry {
+ ND_LOG_FIELD_ID id;
+ ND_LOG_STACK_FIELD_TYPE type;
+ bool set;
+ union {
+ const char *txt;
+ struct netdata_string *str;
+ BUFFER *bfr;
+ uint64_t u64;
+ int64_t i64;
+ double dbl;
+ const uuid_t *uuid;
+ struct {
+ log_formatter_callback_t formatter;
+ void *formatter_data;
+ } cb;
+ };
+};
+
+#define ND_LOG_STACK _cleanup_(log_stack_pop) struct log_stack_entry
+#define ND_LOG_STACK_PUSH(lgs) log_stack_push(lgs)
+
+#define ND_LOG_FIELD_TXT(field, value) (struct log_stack_entry){ .id = (field), .type = NDFT_TXT, .txt = (value), .set = true, }
+#define ND_LOG_FIELD_STR(field, value) (struct log_stack_entry){ .id = (field), .type = NDFT_STR, .str = (value), .set = true, }
+#define ND_LOG_FIELD_BFR(field, value) (struct log_stack_entry){ .id = (field), .type = NDFT_BFR, .bfr = (value), .set = true, }
+#define ND_LOG_FIELD_U64(field, value) (struct log_stack_entry){ .id = (field), .type = NDFT_U64, .u64 = (value), .set = true, }
+#define ND_LOG_FIELD_I64(field, value) (struct log_stack_entry){ .id = (field), .type = NDFT_I64, .i64 = (value), .set = true, }
+#define ND_LOG_FIELD_DBL(field, value) (struct log_stack_entry){ .id = (field), .type = NDFT_DBL, .dbl = (value), .set = true, }
+#define ND_LOG_FIELD_CB(field, func, data) (struct log_stack_entry){ .id = (field), .type = NDFT_CALLBACK, .cb = { .formatter = (func), .formatter_data = (data) }, .set = true, }
+#define ND_LOG_FIELD_UUID(field, value) (struct log_stack_entry){ .id = (field), .type = NDFT_UUID, .uuid = (value), .set = true, }
+#define ND_LOG_FIELD_END() (struct log_stack_entry){ .id = NDF_STOP, .type = NDFT_UNSET, .set = false, }
+
+void log_stack_pop(void *ptr);
+void log_stack_push(struct log_stack_entry *lgs);
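+// Usage sketch (illustrative): push request-scoped fields on the thread's log
+// stack, so that subsequent nd_log() calls in this scope inherit them:
+//
+//    ND_LOG_STACK lgs[] = {
+//        ND_LOG_FIELD_TXT(NDF_REQUEST_METHOD, "GET"),
+//        ND_LOG_FIELD_U64(NDF_RESPONSE_CODE, 200),
+//        ND_LOG_FIELD_END(),
+//    };
+//    ND_LOG_STACK_PUSH(lgs);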
+
+#define D_WEB_BUFFER 0x0000000000000001
+#define D_WEB_CLIENT 0x0000000000000002
+#define D_LISTENER 0x0000000000000004
+#define D_WEB_DATA 0x0000000000000008
+#define D_OPTIONS 0x0000000000000010
+#define D_PROCNETDEV_LOOP 0x0000000000000020
+#define D_RRD_STATS 0x0000000000000040
+#define D_WEB_CLIENT_ACCESS 0x0000000000000080
+#define D_TC_LOOP 0x0000000000000100
+#define D_DEFLATE 0x0000000000000200
+#define D_CONFIG 0x0000000000000400
+#define D_PLUGINSD 0x0000000000000800
+#define D_CHILDS 0x0000000000001000
+#define D_EXIT 0x0000000000002000
+#define D_CHECKS 0x0000000000004000
+#define D_NFACCT_LOOP 0x0000000000008000
+#define D_PROCFILE 0x0000000000010000
+#define D_RRD_CALLS 0x0000000000020000
+#define D_DICTIONARY 0x0000000000040000
+#define D_MEMORY 0x0000000000080000
+#define D_CGROUP 0x0000000000100000
+#define D_REGISTRY 0x0000000000200000
+#define D_VARIABLES 0x0000000000400000
+#define D_HEALTH 0x0000000000800000
+#define D_CONNECT_TO 0x0000000001000000
+#define D_RRDHOST 0x0000000002000000
+#define D_LOCKS 0x0000000004000000
+#define D_EXPORTING 0x0000000008000000
+#define D_STATSD 0x0000000010000000
+#define D_POLLFD 0x0000000020000000
+#define D_STREAM 0x0000000040000000
+#define D_ANALYTICS 0x0000000080000000
+#define D_RRDENGINE 0x0000000100000000
+#define D_ACLK 0x0000000200000000
+#define D_REPLICATION 0x0000002000000000
+#define D_SYSTEM 0x8000000000000000
+
+extern uint64_t debug_flags;
+
+extern const char *program_name;
+
+#ifdef ENABLE_ACLK
+extern int aclklog_enabled;
+#endif
+
+#define LOG_DATE_LENGTH 26
+void log_date(char *buffer, size_t len, time_t now);
+
+static inline void debug_dummy(void) {}
+
+void nd_log_limits_reset(void);
+void nd_log_limits_unlimited(void);
+
+#define NDLP_INFO_STR "info"
+
+#ifdef NETDATA_INTERNAL_CHECKS
+#define netdata_log_debug(type, args...) do { if(unlikely(debug_flags & type)) netdata_logger(NDLS_DEBUG, NDLP_DEBUG, __FILE__, __FUNCTION__, __LINE__, ##args); } while(0)
+#define internal_error(condition, args...) do { if(unlikely(condition)) netdata_logger(NDLS_DAEMON, NDLP_DEBUG, __FILE__, __FUNCTION__, __LINE__, ##args); } while(0)
+#define internal_fatal(condition, args...) do { if(unlikely(condition)) netdata_logger_fatal(__FILE__, __FUNCTION__, __LINE__, ##args); } while(0)
+#else
+#define netdata_log_debug(type, args...) debug_dummy()
+#define internal_error(args...) debug_dummy()
+#define internal_fatal(args...) debug_dummy()
+#endif
+
+#define fatal(args...) netdata_logger_fatal(__FILE__, __FUNCTION__, __LINE__, ##args)
+#define fatal_assert(expr) ((expr) ? (void)(0) : netdata_logger_fatal(__FILE__, __FUNCTION__, __LINE__, "Assertion `%s' failed", #expr))
+
+// ----------------------------------------------------------------------------
+// normal logging
+
+void netdata_logger(ND_LOG_SOURCES source, ND_LOG_FIELD_PRIORITY priority, const char *file, const char *function, unsigned long line, const char *fmt, ... ) PRINTFLIKE(6, 7);
+#define nd_log(NDLS, NDLP, args...) netdata_logger(NDLS, NDLP, __FILE__, __FUNCTION__, __LINE__, ##args)
+#define nd_log_daemon(NDLP, args...) netdata_logger(NDLS_DAEMON, NDLP, __FILE__, __FUNCTION__, __LINE__, ##args)
+#define nd_log_collector(NDLP, args...) netdata_logger(NDLS_COLLECTORS, NDLP, __FILE__, __FUNCTION__, __LINE__, ##args)
+
+#define netdata_log_info(args...) netdata_logger(NDLS_DAEMON, NDLP_INFO, __FILE__, __FUNCTION__, __LINE__, ##args)
+#define netdata_log_error(args...) netdata_logger(NDLS_DAEMON, NDLP_ERR, __FILE__, __FUNCTION__, __LINE__, ##args)
+#define collector_info(args...) netdata_logger(NDLS_COLLECTORS, NDLP_INFO, __FILE__, __FUNCTION__, __LINE__, ##args)
+#define collector_error(args...) netdata_logger(NDLS_COLLECTORS, NDLP_ERR, __FILE__, __FUNCTION__, __LINE__, ##args)
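+// Usage sketch (illustrative): these macros expand to netdata_logger() with the
+// caller's file, function and line, e.g.:
+//    nd_log(NDLS_DAEMON, NDLP_WARNING, "cannot open '%s'", filename);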
+
+#define log_aclk_message_bin(__data, __data_len, __tx, __mqtt_topic, __message_name) \
+ nd_log(NDLS_ACLK, NDLP_INFO, \
+ "direction:%s message:'%s' topic:'%s' json:'%.*s'", \
+ (__tx) ? "OUTGOING" : "INCOMING", __message_name, __mqtt_topic, (int)(__data_len), __data)
+
+// ----------------------------------------------------------------------------
+// logging with limits
+
+typedef struct error_with_limit {
+ SPINLOCK spinlock;
+ time_t log_every;
+ size_t count;
+ time_t last_logged;
+ usec_t sleep_ut;
+} ERROR_LIMIT;
+
+#define nd_log_limit_static_global_var(var, log_every_secs, sleep_usecs) static ERROR_LIMIT var = { .last_logged = 0, .count = 0, .log_every = (log_every_secs), .sleep_ut = (sleep_usecs) }
+#define nd_log_limit_static_thread_var(var, log_every_secs, sleep_usecs) static __thread ERROR_LIMIT var = { .last_logged = 0, .count = 0, .log_every = (log_every_secs), .sleep_ut = (sleep_usecs) }
+void netdata_logger_with_limit(ERROR_LIMIT *erl, ND_LOG_SOURCES source, ND_LOG_FIELD_PRIORITY priority, const char *file, const char *function, unsigned long line, const char *fmt, ... ) PRINTFLIKE(7, 8);
+#define nd_log_limit(erl, NDLS, NDLP, args...) netdata_logger_with_limit(erl, NDLS, NDLP, __FILE__, __FUNCTION__, __LINE__, ##args)
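+// Usage sketch (illustrative): log a noisy error at most once every 60 seconds:
+//    nd_log_limit_static_global_var(erl, 60, 0);
+//    nd_log_limit(&erl, NDLS_DAEMON, NDLP_ERR, "repeated failure: %s", reason);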
+
+// ----------------------------------------------------------------------------
+
+void send_statistics(const char *action, const char *action_result, const char *action_data);
+void netdata_logger_fatal( const char *file, const char *function, unsigned long line, const char *fmt, ... ) NORETURN PRINTFLIKE(4, 5);
+
+# ifdef __cplusplus
+}
+# endif
+
+#endif /* NETDATA_LOG_H */
diff --git a/libnetdata/log/systemd-cat-native.c b/libnetdata/log/systemd-cat-native.c
new file mode 100644
index 00000000..de6211cc
--- /dev/null
+++ b/libnetdata/log/systemd-cat-native.c
@@ -0,0 +1,820 @@
+// SPDX-License-Identifier: GPL-3.0-or-later
+
+#include "systemd-cat-native.h"
+#include "../required_dummies.h"
+
+#ifdef __FreeBSD__
+#include <sys/endian.h>
+#endif
+
+#ifdef __APPLE__
+#include <machine/endian.h>
+#endif
+
+static void log_message_to_stderr(BUFFER *msg) {
+ CLEAN_BUFFER *tmp = buffer_create(0, NULL);
+
+ for(size_t i = 0; i < msg->len ;i++) {
+ if(isprint(msg->buffer[i]))
+ buffer_putc(tmp, msg->buffer[i]);
+ else {
+ buffer_putc(tmp, '[');
+ buffer_print_uint64_hex(tmp, msg->buffer[i]);
+ buffer_putc(tmp, ']');
+ }
+ }
+
+ fprintf(stderr, "SENDING: %s\n", buffer_tostring(tmp));
+}
+
+static inline buffered_reader_ret_t get_next_line(struct buffered_reader *reader, BUFFER *line, int timeout_ms) {
+ while(true) {
+ if(unlikely(!buffered_reader_next_line(reader, line))) {
+ buffered_reader_ret_t ret = buffered_reader_read_timeout(reader, STDIN_FILENO, timeout_ms, false);
+ if(unlikely(ret != BUFFERED_READER_READ_OK))
+ return ret;
+
+ continue;
+ }
+ else {
+ // make sure the buffer is NULL terminated
+ line->buffer[line->len] = '\0';
+
+ // remove the trailing newlines
+ while(line->len && line->buffer[line->len - 1] == '\n')
+ line->buffer[--line->len] = '\0';
+
+ return BUFFERED_READER_READ_OK;
+ }
+ }
+}
+
+static inline size_t copy_replacing_newlines(char *dst, size_t dst_len, const char *src, size_t src_len, const char *newline) {
+ if (!dst || !src) return 0;
+
+ const char *current_src = src;
+ const char *src_end = src + src_len; // Pointer to the end of src
+ char *current_dst = dst;
+ size_t remaining_dst_len = dst_len;
+ size_t newline_len = newline && *newline ? strlen(newline) : 0;
+
+ size_t bytes_copied = 0; // To track the number of bytes copied
+
+ while (remaining_dst_len > 1 && current_src < src_end) {
+ if (newline_len > 0) {
+ const char *found = strstr(current_src, newline);
+ if (found && found < src_end) {
+ size_t copy_len = found - current_src;
+ if (copy_len >= remaining_dst_len) copy_len = remaining_dst_len - 1;
+
+ memcpy(current_dst, current_src, copy_len);
+ current_dst += copy_len;
+ *current_dst++ = '\n';
+ remaining_dst_len -= (copy_len + 1);
+ bytes_copied += copy_len + 1; // +1 for the newline character
+ current_src = found + newline_len;
+ continue;
+ }
+ }
+
+ // Copy the remaining part of src to dst
+ size_t copy_len = src_end - current_src;
+ if (copy_len >= remaining_dst_len) copy_len = remaining_dst_len - 1;
+
+ memcpy(current_dst, current_src, copy_len);
+ current_dst += copy_len;
+ remaining_dst_len -= copy_len;
+ bytes_copied += copy_len;
+ break;
+ }
+
+ // Ensure the string is null-terminated
+ *current_dst = '\0';
+
+ return bytes_copied;
+}
+
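+// Append a KEY=VALUE line to the output buffer. When the value contains the
+// user-supplied newline marker, emit it in binary Journal Export Format:
+// KEY\n, a 64-bit little-endian length, the value with markers replaced by
+// '\n', and a trailing newline.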
+static inline void buffer_memcat_replacing_newlines(BUFFER *wb, const char *src, size_t src_len, const char *newline) {
+ if(!src) return;
+
+ const char *equal;
+ if(!newline || !*newline || !strstr(src, newline) || !(equal = strchr(src, '='))) {
+ buffer_memcat(wb, src, src_len);
+ buffer_putc(wb, '\n');
+ return;
+ }
+
+ size_t key_len = equal - src;
+ buffer_memcat(wb, src, key_len);
+ buffer_putc(wb, '\n');
+
+    // remember the offset of the length placeholder - the buffer may be
+    // reallocated below, so a raw pointer into it would be invalidated
+    size_t length_offset = wb->len;
+    uint64_t le_size = 0;
+    buffer_memcat(wb, &le_size, sizeof(le_size));
+
+    const char *value = ++equal;
+    size_t value_len = src_len - key_len - 1;
+    buffer_need_bytes(wb, value_len + 1);
+    size_t size = copy_replacing_newlines(&wb->buffer[wb->len], value_len + 1, value, value_len, newline);
+    wb->len += size;
+    buffer_putc(wb, '\n');
+
+    le_size = htole64(size);
+    memcpy(&wb->buffer[length_offset], &le_size, sizeof(le_size));
+}
+
+// ----------------------------------------------------------------------------
+// log to a systemd-journal-remote
+
+#ifdef HAVE_CURL
+#include <curl/curl.h>
+
+#ifndef HOST_NAME_MAX
+#define HOST_NAME_MAX 256
+#endif
+
+char global_hostname[HOST_NAME_MAX] = "";
+char global_boot_id[UUID_COMPACT_STR_LEN] = "";
+char global_machine_id[UUID_COMPACT_STR_LEN] = "";
+char global_stream_id[UUID_COMPACT_STR_LEN] = "";
+char global_namespace[1024] = "";
+char global_systemd_invocation_id[1024] = "";
+#define BOOT_ID_PATH "/proc/sys/kernel/random/boot_id"
+#define MACHINE_ID_PATH "/etc/machine-id"
+
+#define DEFAULT_PRIVATE_KEY "/etc/ssl/private/journal-upload.pem"
+#define DEFAULT_PUBLIC_KEY "/etc/ssl/certs/journal-upload.pem"
+#define DEFAULT_CA_CERT "/etc/ssl/ca/trusted.pem"
+
+struct upload_data {
+ char *data;
+ size_t length;
+};
+
+static size_t systemd_journal_remote_read_callback(void *ptr, size_t size, size_t nmemb, void *userp) {
+ struct upload_data *upload = (struct upload_data *)userp;
+ size_t buffer_size = size * nmemb;
+
+ if (upload->length) {
+ size_t copy_size = upload->length < buffer_size ? upload->length : buffer_size;
+ memcpy(ptr, upload->data, copy_size);
+ upload->data += copy_size;
+ upload->length -= copy_size;
+ return copy_size;
+ }
+
+ return 0;
+}
+
+CURL* initialize_connection_to_systemd_journal_remote(const char* url, const char* private_key, const char* public_key, const char* ca_cert, struct curl_slist **headers) {
+ CURL *curl = curl_easy_init();
+ if (!curl) {
+ fprintf(stderr, "Failed to initialize curl\n");
+ return NULL;
+ }
+
+ *headers = curl_slist_append(*headers, "Content-Type: application/vnd.fdo.journal");
+ *headers = curl_slist_append(*headers, "Transfer-Encoding: chunked");
+ curl_easy_setopt(curl, CURLOPT_HTTPHEADER, *headers);
+ curl_easy_setopt(curl, CURLOPT_URL, url);
+ curl_easy_setopt(curl, CURLOPT_POST, 1L);
+ curl_easy_setopt(curl, CURLOPT_READFUNCTION, systemd_journal_remote_read_callback);
+
+ if (strncmp(url, "https://", 8) == 0) {
+ if (private_key) curl_easy_setopt(curl, CURLOPT_SSLKEY, private_key);
+ if (public_key) curl_easy_setopt(curl, CURLOPT_SSLCERT, public_key);
+
+ if (strcmp(ca_cert, "all") != 0) {
+ curl_easy_setopt(curl, CURLOPT_CAINFO, ca_cert);
+ } else {
+ curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 0L);
+ }
+ }
+ // curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L); // Remove for less verbose output
+
+ return curl;
+}
+
+static void journal_remote_complete_event(BUFFER *msg, usec_t *monotonic_ut) {
+ usec_t ut = now_monotonic_usec();
+
+ if(monotonic_ut)
+ *monotonic_ut = ut;
+
+ buffer_sprintf(msg,
+ ""
+ "__REALTIME_TIMESTAMP=%llu\n"
+ "__MONOTONIC_TIMESTAMP=%llu\n"
+ "_MACHINE_ID=%s\n"
+ "_BOOT_ID=%s\n"
+ "_HOSTNAME=%s\n"
+ "_TRANSPORT=stdout\n"
+ "_LINE_BREAK=nul\n"
+ "_STREAM_ID=%s\n"
+ "_RUNTIME_SCOPE=system\n"
+ "%s%s\n"
+ , now_realtime_usec()
+ , ut
+ , global_machine_id
+ , global_boot_id
+ , global_hostname
+ , global_stream_id
+ , global_namespace
+ , global_systemd_invocation_id
+ );
+}
+
+static CURLcode journal_remote_send_buffer(CURL* curl, BUFFER *msg) {
+
+ // log_message_to_stderr(msg);
+
+ struct upload_data upload = {0};
+
+ if (!curl || !buffer_strlen(msg))
+ return CURLE_FAILED_INIT;
+
+ upload.data = (char *) buffer_tostring(msg);
+ upload.length = buffer_strlen(msg);
+
+ curl_easy_setopt(curl, CURLOPT_READDATA, &upload);
+ curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE, (curl_off_t)upload.length);
+
+ return curl_easy_perform(curl);
+}
+
+typedef enum {
+ LOG_TO_JOURNAL_REMOTE_BAD_PARAMS = -1,
+ LOG_TO_JOURNAL_REMOTE_CANNOT_INITIALIZE = -2,
+ LOG_TO_JOURNAL_REMOTE_CANNOT_SEND = -3,
+ LOG_TO_JOURNAL_REMOTE_CANNOT_READ = -4,
+} log_to_journal_remote_ret_t;
+
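+// Read KEY=VALUE events from stdin, complete each event with host, boot and
+// machine identification fields, batch completed events for up to ~0.5 seconds,
+// and POST them to <url>/upload of a systemd-journal-remote instance.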
+static log_to_journal_remote_ret_t log_input_to_journal_remote(const char *url, const char *key, const char *cert, const char *trust, const char *newline, int timeout_ms) {
+ if(!url || !*url) {
+ fprintf(stderr, "No URL is given.\n");
+ return LOG_TO_JOURNAL_REMOTE_BAD_PARAMS;
+ }
+
+ if(timeout_ms < 10)
+ timeout_ms = 10;
+
+ global_boot_id[0] = '\0';
+ char buffer[1024];
+ if(read_file(BOOT_ID_PATH, buffer, sizeof(buffer)) == 0) {
+ uuid_t uuid;
+ if(uuid_parse_flexi(buffer, uuid) == 0)
+ uuid_unparse_lower_compact(uuid, global_boot_id);
+ else
+ fprintf(stderr, "WARNING: cannot parse the UUID found in '%s'.\n", BOOT_ID_PATH);
+ }
+
+ if(global_boot_id[0] == '\0') {
+ fprintf(stderr, "WARNING: cannot read '%s'. Will generate a random _BOOT_ID.\n", BOOT_ID_PATH);
+ uuid_t uuid;
+ uuid_generate_random(uuid);
+ uuid_unparse_lower_compact(uuid, global_boot_id);
+ }
+
+ if(read_file(MACHINE_ID_PATH, buffer, sizeof(buffer)) == 0) {
+ uuid_t uuid;
+ if(uuid_parse_flexi(buffer, uuid) == 0)
+ uuid_unparse_lower_compact(uuid, global_machine_id);
+ else
+ fprintf(stderr, "WARNING: cannot parse the UUID found in '%s'.\n", MACHINE_ID_PATH);
+ }
+
+ if(global_machine_id[0] == '\0') {
+ fprintf(stderr, "WARNING: cannot read '%s'. Will generate a random _MACHINE_ID.\n", MACHINE_ID_PATH);
+ uuid_t uuid;
+ uuid_generate_random(uuid);
+        uuid_unparse_lower_compact(uuid, global_machine_id);
+ }
+
+ if(global_stream_id[0] == '\0') {
+ uuid_t uuid;
+ uuid_generate_random(uuid);
+ uuid_unparse_lower_compact(uuid, global_stream_id);
+ }
+
+ if(global_hostname[0] == '\0') {
+ if(gethostname(global_hostname, sizeof(global_hostname)) != 0) {
+ fprintf(stderr, "WARNING: cannot get system's hostname. Will use internal default.\n");
+ snprintfz(global_hostname, sizeof(global_hostname), "systemd-cat-native-unknown-hostname");
+ }
+ }
+
+ if(global_systemd_invocation_id[0] == '\0' && getenv("INVOCATION_ID"))
+ snprintfz(global_systemd_invocation_id, sizeof(global_systemd_invocation_id), "_SYSTEMD_INVOCATION_ID=%s\n", getenv("INVOCATION_ID"));
+
+ if(!key)
+ key = DEFAULT_PRIVATE_KEY;
+
+ if(!cert)
+ cert = DEFAULT_PUBLIC_KEY;
+
+ if(!trust)
+ trust = DEFAULT_CA_CERT;
+
+ char full_url[4096];
+ snprintfz(full_url, sizeof(full_url), "%s/upload", url);
+
+ CURL *curl;
+ CURLcode res = CURLE_OK;
+ struct curl_slist *headers = NULL;
+
+ curl_global_init(CURL_GLOBAL_ALL);
+ curl = initialize_connection_to_systemd_journal_remote(full_url, key, cert, trust, &headers);
+
+ if(!curl)
+ return LOG_TO_JOURNAL_REMOTE_CANNOT_INITIALIZE;
+
+ struct buffered_reader reader;
+ buffered_reader_init(&reader);
+ CLEAN_BUFFER *line = buffer_create(sizeof(reader.read_buffer), NULL);
+ CLEAN_BUFFER *msg = buffer_create(sizeof(reader.read_buffer), NULL);
+
+ size_t msg_full_events = 0;
+ size_t msg_partial_fields = 0;
+ usec_t msg_started_ut = 0;
+ size_t failures = 0;
+ size_t messages_logged = 0;
+
+ log_to_journal_remote_ret_t ret = 0;
+
+ while(true) {
+ buffered_reader_ret_t rc = get_next_line(&reader, line, timeout_ms);
+ if(rc == BUFFERED_READER_READ_POLL_TIMEOUT) {
+ if(msg_full_events && !msg_partial_fields) {
+ res = journal_remote_send_buffer(curl, msg);
+ if(res != CURLE_OK) {
+ fprintf(stderr, "journal_remote_send_buffer() failed: %s\n", curl_easy_strerror(res));
+ failures++;
+ ret = LOG_TO_JOURNAL_REMOTE_CANNOT_SEND;
+ goto cleanup;
+ }
+ else
+ messages_logged++;
+
+ msg_full_events = 0;
+ buffer_flush(msg);
+ }
+ }
+ else if(rc == BUFFERED_READER_READ_OK) {
+ if(!line->len) {
+ // an empty line - we are done for this message
+ if(msg_partial_fields) {
+ msg_partial_fields = 0;
+
+ usec_t ut;
+ journal_remote_complete_event(msg, &ut);
+ if(!msg_full_events)
+ msg_started_ut = ut;
+
+ msg_full_events++;
+
+ if(ut - msg_started_ut >= USEC_PER_SEC / 2) {
+ res = journal_remote_send_buffer(curl, msg);
+ if(res != CURLE_OK) {
+ fprintf(stderr, "journal_remote_send_buffer() failed: %s\n", curl_easy_strerror(res));
+ failures++;
+ ret = LOG_TO_JOURNAL_REMOTE_CANNOT_SEND;
+ goto cleanup;
+ }
+ else
+ messages_logged++;
+
+ msg_full_events = 0;
+ buffer_flush(msg);
+ }
+ }
+ }
+ else {
+ buffer_memcat_replacing_newlines(msg, line->buffer, line->len, newline);
+ msg_partial_fields++;
+ }
+
+ buffer_flush(line);
+ }
+ else {
+ fprintf(stderr, "cannot read input data, failed with code %d\n", rc);
+ ret = LOG_TO_JOURNAL_REMOTE_CANNOT_READ;
+ break;
+ }
+ }
+
+ if (msg_full_events || msg_partial_fields) {
+ if(msg_partial_fields) {
+ msg_partial_fields = 0;
+ msg_full_events++;
+ journal_remote_complete_event(msg, NULL);
+ }
+
+ if(msg_full_events) {
+ res = journal_remote_send_buffer(curl, msg);
+ if(res != CURLE_OK) {
+ fprintf(stderr, "journal_remote_send_buffer() failed: %s\n", curl_easy_strerror(res));
+ failures++;
+ }
+ else
+ messages_logged++;
+
+ msg_full_events = 0;
+ buffer_flush(msg);
+ }
+ }
+
+cleanup:
+ curl_easy_cleanup(curl);
+ curl_slist_free_all(headers);
+ curl_global_cleanup();
+
+ return ret;
+}
+
+#endif
+
+static int help(void) {
+ fprintf(stderr,
+ "\n"
+ "Netdata systemd-cat-native " PACKAGE_VERSION "\n"
+ "\n"
+            "This program reads lines from its standard input, in the format:\n"
+ "\n"
+ "KEY1=VALUE1\\n\n"
+ "KEY2=VALUE2\\n\n"
+ "KEYN=VALUEN\\n\n"
+ "\\n\n"
+ "\n"
+ "and sends them to systemd-journal.\n"
+ "\n"
+ " - Binary journal fields are not accepted at its input\n"
+ " - Binary journal fields can be generated after newline processing\n"
+ " - Messages have to be separated by an empty line\n"
+ " - Keys starting with underscore are not accepted (by journald)\n"
+            " - Other rules imposed by systemd-journald also apply (enforced by journald)\n"
+ "\n"
+ "Usage:\n"
+ "\n"
+ " %s\n"
+ " [--newline=STRING]\n"
+ " [--log-as-netdata|-N]\n"
+ " [--namespace=NAMESPACE] [--socket=PATH]\n"
+#ifdef HAVE_CURL
+ " [--url=URL [--key=FILENAME] [--cert=FILENAME] [--trust=FILENAME|all]]\n"
+#endif
+ "\n"
+ "The program has the following modes of logging:\n"
+ "\n"
+ " * Log to a local systemd-journald or stderr\n"
+ "\n"
+ " This is the default mode. If systemd-journald is available, logs will be\n"
+ " sent to systemd, otherwise logs will be printed on stderr, using logfmt\n"
+ " formatting. Options --socket and --namespace are available to configure\n"
+ " the journal destination:\n"
+ "\n"
+ " --socket=PATH\n"
+ " The path of a systemd-journald UNIX socket.\n"
+ " The program will use the default systemd-journald socket when this\n"
+ " option is not used.\n"
+ "\n"
+ " --namespace=NAMESPACE\n"
+ " The name of a configured and running systemd-journald namespace.\n"
+ " The program will produce the socket path based on its internal\n"
+ " defaults, to send the messages to the systemd journal namespace.\n"
+ "\n"
+ " * Log as Netdata, enabled with --log-as-netdata or -N\n"
+ "\n"
+ " In this mode the program uses environment variables set by Netdata for\n"
+ " the log destination. Only log fields defined by Netdata are accepted.\n"
+ " If the environment variables expected by Netdata are not found, it\n"
+ " falls back to stderr logging in logfmt format.\n"
+#ifdef HAVE_CURL
+ "\n"
+ " * Log to a systemd-journal-remote TCP socket, enabled with --url=URL\n"
+ "\n"
+            "    In this mode, the program sends logs directly to a remote systemd\n"
+            "    journal (systemd-journal-remote is expected at the destination).\n"
+            "    This mode is available even when the local system does not support\n"
+            "    systemd, or is not Linux at all, allowing a remote Linux\n"
+            "    systemd-journald to become the logs database of the local system.\n"
+ "\n"
+ " Unfortunately systemd-journal-remote does not accept compressed\n"
+ " data over the network, so the stream will be uncompressed.\n"
+ "\n"
+ " --url=URL\n"
+ " The destination systemd-journal-remote address and port, similarly\n"
+ " to what /etc/systemd/journal-upload.conf accepts.\n"
+ " Usually it is in the form: https://ip.address:19532\n"
+ " Both http and https URLs are accepted. When using https, the\n"
+ " following additional options are accepted:\n"
+ "\n"
+ " --key=FILENAME\n"
+ " The filename of the private key of the server.\n"
+ " The default is: " DEFAULT_PRIVATE_KEY "\n"
+ "\n"
+ " --cert=FILENAME\n"
+ " The filename of the public key of the server.\n"
+ " The default is: " DEFAULT_PUBLIC_KEY "\n"
+ "\n"
+ " --trust=FILENAME | all\n"
+ " The filename of the trusted CA public key.\n"
+ " The default is: " DEFAULT_CA_CERT "\n"
+ " The keyword 'all' can be used to trust all CAs.\n"
+ "\n"
+ " --namespace=NAMESPACE\n"
+ " Set the namespace of the messages sent.\n"
+ "\n"
+ " --keep-trying\n"
+ " Keep trying to send the message, if the remote journal is not there.\n"
+#endif
+ "\n"
+ " NEWLINES PROCESSING\n"
+            "    systemd-journal log entries may contain newlines. However, the Journal\n"
+            "    Export Format encodes such values as binary data, which makes them hard\n"
+            "    to produce with line-oriented text processing.\n"
+            "\n"
+            "    To overcome this limitation, this program accepts single-line text\n"
+            "    values at its input and converts them to binary formatted, multi-line\n"
+            "    Journal Export Format values at its output.\n"
+            "\n"
+            "    To achieve that, it replaces a given string with a newline character.\n"
+ " The parameter --newline=STRING allows setting the string to be replaced\n"
+ " with newlines.\n"
+ "\n"
+            "    For example, by setting --newline='--NEWLINE--', the program will replace\n"
+            "    all occurrences of --NEWLINE-- with the newline character, within each\n"
+            "    VALUE of the KEY=VALUE lines. Once this is done, the program will\n"
+            "    switch the field to the binary Journal Export Format before sending the\n"
+            "    log event to systemd-journal.\n"
+ "\n",
+ program_name);
+
+ return 1;
+}
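+// Example input (illustrative): a complete log event is a set of KEY=VALUE
+// lines terminated by an empty line, e.g. with --newline='--NEWLINE--':
+//
+//    MESSAGE=connection failed--NEWLINE--retrying in 5 seconds
+//    PRIORITY=3
+//    SYSLOG_IDENTIFIER=myapp
+//    <empty line>
+//
+// The MESSAGE value above becomes a two-line value, encoded in the binary
+// Journal Export Format, before the event is sent.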
+
+// ----------------------------------------------------------------------------
+// log as Netdata
+
+static void lgs_reset(struct log_stack_entry *lgs) {
+ for(size_t i = 1; i < _NDF_MAX ;i++) {
+ if(lgs[i].type == NDFT_TXT && lgs[i].set && lgs[i].txt)
+ freez((void *)lgs[i].txt);
+
+ lgs[i] = ND_LOG_FIELD_TXT(i, NULL);
+ }
+
+ lgs[0] = ND_LOG_FIELD_TXT(NDF_MESSAGE, NULL);
+ lgs[_NDF_MAX] = ND_LOG_FIELD_END();
+}
+
+static const char *strdupz_replacing_newlines(const char *src, const char *newline) {
+ if(!src) src = "";
+
+ size_t src_len = strlen(src);
+ char *buffer = mallocz(src_len + 1);
+ copy_replacing_newlines(buffer, src_len + 1, src, src_len, newline);
+ return buffer;
+}
+
+static int log_input_as_netdata(const char *newline, int timeout_ms) {
+ struct buffered_reader reader;
+ buffered_reader_init(&reader);
+ CLEAN_BUFFER *line = buffer_create(sizeof(reader.read_buffer), NULL);
+
+ ND_LOG_STACK lgs[_NDF_MAX + 1] = { 0 };
+ ND_LOG_STACK_PUSH(lgs);
+ lgs_reset(lgs);
+
+ size_t fields_added = 0;
+ size_t messages_logged = 0;
+ ND_LOG_FIELD_PRIORITY priority = NDLP_INFO;
+
+ while(get_next_line(&reader, line, timeout_ms) == BUFFERED_READER_READ_OK) {
+ if(!line->len) {
+ // an empty line - we are done for this message
+
+ nd_log(NDLS_HEALTH, priority,
+                   "added %zu fields", // if the user supplied a MESSAGE, this will be ignored
+ fields_added);
+
+ lgs_reset(lgs);
+ fields_added = 0;
+ messages_logged++;
+ }
+ else {
+ char *equal = strchr(line->buffer, '=');
+ if(equal) {
+ const char *field = line->buffer;
+ size_t field_len = equal - line->buffer;
+ ND_LOG_FIELD_ID id = nd_log_field_id_by_name(field, field_len);
+ if(id != NDF_STOP) {
+ const char *value = ++equal;
+
+ if(lgs[id].txt)
+ freez((void *) lgs[id].txt);
+
+ lgs[id].txt = strdupz_replacing_newlines(value, newline);
+ lgs[id].set = true;
+
+ fields_added++;
+
+ if(id == NDF_PRIORITY)
+ priority = nd_log_priority2id(value);
+ }
+ else {
+ struct log_stack_entry backup = lgs[NDF_MESSAGE];
+ lgs[NDF_MESSAGE] = ND_LOG_FIELD_TXT(NDF_MESSAGE, NULL);
+
+ nd_log(NDLS_COLLECTORS, NDLP_ERR,
+ "Field '%.*s' is not a Netdata field. Ignoring it.",
+                           (int)field_len, field);
+
+ lgs[NDF_MESSAGE] = backup;
+ }
+ }
+ else {
+ struct log_stack_entry backup = lgs[NDF_MESSAGE];
+ lgs[NDF_MESSAGE] = ND_LOG_FIELD_TXT(NDF_MESSAGE, NULL);
+
+ nd_log(NDLS_COLLECTORS, NDLP_ERR,
+ "Line does not contain an = sign; ignoring it: %s",
+ line->buffer);
+
+ lgs[NDF_MESSAGE] = backup;
+ }
+ }
+
+ buffer_flush(line);
+ }
+
+ if(fields_added) {
+        nd_log(NDLS_HEALTH, priority, "added %zu fields", fields_added);
+ messages_logged++;
+ }
+
+ return messages_logged ? 0 : 1;
+}
+
+// ----------------------------------------------------------------------------
+// log to a local systemd-journald
+
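+// Send one fully assembled Journal Export Format message to the journald socket.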
+static bool journal_local_send_buffer(int fd, BUFFER *msg) {
+ // log_message_to_stderr(msg);
+
+ bool ret = journal_direct_send(fd, msg->buffer, msg->len);
+ if (!ret)
+ fprintf(stderr, "Cannot send message to systemd journal.\n");
+
+ return ret;
+}
+
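+// Read Journal Export Format messages from stdin and forward them, one by one,
+// to the local journald socket (or to the namespace/custom socket requested).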
+static int log_input_to_journal(const char *socket, const char *namespace, const char *newline, int timeout_ms) {
+ char path[FILENAME_MAX + 1];
+ int fd = -1;
+
+ if(socket)
+ snprintfz(path, sizeof(path), "%s", socket);
+ else
+ journal_construct_path(path, sizeof(path), NULL, namespace);
+
+ fd = journal_direct_fd(path);
+ if (fd == -1) {
+ fprintf(stderr, "Cannot open '%s' as a UNIX socket (errno = %d)\n",
+ path, errno);
+ return 1;
+ }
+
+ struct buffered_reader reader;
+ buffered_reader_init(&reader);
+ CLEAN_BUFFER *line = buffer_create(sizeof(reader.read_buffer), NULL);
+ CLEAN_BUFFER *msg = buffer_create(sizeof(reader.read_buffer), NULL);
+
+ size_t messages_logged = 0;
+ size_t failed_messages = 0;
+
+ while(get_next_line(&reader, line, timeout_ms) == BUFFERED_READER_READ_OK) {
+ if (!line->len) {
+ // an empty line - we are done for this message
+ if (msg->len) {
+ if(journal_local_send_buffer(fd, msg))
+ messages_logged++;
+ else {
+ failed_messages++;
+ goto cleanup;
+ }
+ }
+
+ buffer_flush(msg);
+ }
+ else
+ buffer_memcat_replacing_newlines(msg, line->buffer, line->len, newline);
+
+ buffer_flush(line);
+ }
+
+ if (msg && msg->len) {
+ if(journal_local_send_buffer(fd, msg))
+ messages_logged++;
+ else
+ failed_messages++;
+ }
+
+cleanup:
+ return !failed_messages && messages_logged ? 0 : 1;
+}
+
+int main(int argc, char *argv[]) {
+ clocks_init();
+ nd_log_initialize_for_external_plugins(argv[0]);
+
+ int timeout_ms = -1; // wait forever
+ bool log_as_netdata = false;
+ const char *newline = NULL;
+ const char *namespace = NULL;
+ const char *socket = getenv("NETDATA_SYSTEMD_JOURNAL_PATH");
+#ifdef HAVE_CURL
+ const char *url = NULL;
+ const char *key = NULL;
+ const char *cert = NULL;
+ const char *trust = NULL;
+ bool keep_trying = false;
+#endif
+
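+    // parse the command line parameters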
+ for(int i = 1; i < argc ;i++) {
+ const char *k = argv[i];
+
+ if(strcmp(k, "--help") == 0 || strcmp(k, "-h") == 0)
+ return help();
+
+ else if(strcmp(k, "--log-as-netdata") == 0 || strcmp(k, "-N") == 0)
+ log_as_netdata = true;
+
+ else if(strncmp(k, "--namespace=", 12) == 0)
+ namespace = &k[12];
+
+ else if(strncmp(k, "--socket=", 9) == 0)
+ socket = &k[9];
+
+ else if(strncmp(k, "--newline=", 10) == 0)
+ newline = &k[10];
+
+#ifdef HAVE_CURL
+ else if (strncmp(k, "--url=", 6) == 0)
+ url = &k[6];
+
+ else if (strncmp(k, "--key=", 6) == 0)
+ key = &k[6];
+
+ else if (strncmp(k, "--cert=", 7) == 0)
+ cert = &k[7];
+
+ else if (strncmp(k, "--trust=", 8) == 0)
+ trust = &k[8];
+
+ else if (strcmp(k, "--keep-trying") == 0)
+ keep_trying = true;
+#endif
+ else {
+ fprintf(stderr, "Unknown parameter '%s'\n", k);
+ return 1;
+ }
+ }
+
+#ifdef HAVE_CURL
+ if(log_as_netdata && url) {
+ fprintf(stderr, "Cannot log to a systemd-journal-remote URL as Netdata. "
+ "Please either give --url or --log-as-netdata, not both.\n");
+ return 1;
+ }
+
+ if(socket && url) {
+ fprintf(stderr, "Cannot log to a systemd-journal-remote URL using a UNIX socket. "
+ "Please either give --url or --socket, not both.\n");
+ return 1;
+ }
+
+#endif
+
+ if(log_as_netdata && namespace) {
+ fprintf(stderr, "Cannot log as netdata using a namespace. "
+ "Please either give --log-as-netdata or --namespace, not both.\n");
+ return 1;
+ }
+
+ if(log_as_netdata)
+ return log_input_as_netdata(newline, timeout_ms);
+
+#ifdef HAVE_CURL
+    if(url) {
+        if(namespace && *namespace)
+            snprintfz(global_namespace, sizeof(global_namespace), "_NAMESPACE=%s\n", namespace);
+
+        log_to_journal_remote_ret_t rc;
+        do {
+            rc = log_input_to_journal_remote(url, key, cert, trust, newline, timeout_ms);
+        } while(keep_trying && rc == LOG_TO_JOURNAL_REMOTE_CANNOT_SEND);
+
+        // the input has already been consumed by the remote sender at this point;
+        // return here instead of falling through to the local journal path below
+        return rc ? 1 : 0;
+    }
+#endif
+
+ return log_input_to_journal(socket, namespace, newline, timeout_ms);
+}
diff --git a/libnetdata/log/systemd-cat-native.h b/libnetdata/log/systemd-cat-native.h
new file mode 100644
index 00000000..34e7a361
--- /dev/null
+++ b/libnetdata/log/systemd-cat-native.h
@@ -0,0 +1,8 @@
+// SPDX-License-Identifier: GPL-3.0-or-later
+
+#include "../libnetdata.h"
+
+#ifndef NETDATA_SYSTEMD_CAT_NATIVE_H
+#define NETDATA_SYSTEMD_CAT_NATIVE_H
+
+#endif //NETDATA_SYSTEMD_CAT_NATIVE_H
diff --git a/libnetdata/log/systemd-cat-native.md b/libnetdata/log/systemd-cat-native.md
new file mode 100644
index 00000000..b0b15f40
--- /dev/null
+++ b/libnetdata/log/systemd-cat-native.md
@@ -0,0 +1,209 @@
+# systemd-cat-native
+
+`systemd` includes a utility called `systemd-cat`. This utility reads log lines from its standard input and sends them
+to the local systemd journal. Its key limitation is that, although systemd journals support structured logs,
+`systemd-cat` cannot send structured logs to them.
+
+`systemd-cat-native` is a Netdata-supplied utility to push structured logs to systemd journals. Key features:
+
+- reads [Journal Export Format](https://systemd.io/JOURNAL_EXPORT_FORMATS/) formatted log entries
+- converts text fields into binary journal multiline log fields
+- sends logs to any of these:
+ - local default `systemd-journald`,
+ - local namespace `systemd-journald`,
+ - remote `systemd-journal-remote` using HTTP or HTTPS, the same way `systemd-journal-upload` does.
+- is the standard external logger of Netdata shell scripts
+
+## Simple use
+
+```bash
+printf "MESSAGE=hello world\nPRIORITY=6\n\n" | systemd-cat-native
+```
+
+The result:
+
+![image](https://github.com/netdata/netdata/assets/2662304/689d5e03-97ee-40a8-a690-82b7710cef7c)
+
+
+Sending `PRIORITY=3` (error):
+
+```bash
+printf "MESSAGE=hey, this is error\nPRIORITY=3\n\n" | systemd-cat-native
+```
+
+The result:
+![image](https://github.com/netdata/netdata/assets/2662304/faf3eaa5-ac56-415b-9de8-16e6ceed9280)
+
+Sending multi-line log entries (in this example we replace the text `--NEWLINE--` with a newline in the log entry):
+
+```bash
+printf "MESSAGE=hello--NEWLINE--world\nPRIORITY=6\n\n" | systemd-cat-native --newline='--NEWLINE--'
+```
+
+The result:
+
+![image](https://github.com/netdata/netdata/assets/2662304/d6037b4a-87da-4693-ae67-e07df0decdd9)
+
+
+Processing the standard `\n` string can be tricky due to shell escaping. This works, but note that
+we have to add a lot of backslashes to printf.
+
+```bash
+printf "MESSAGE=hello\\\\nworld\nPRIORITY=6\n\n" | systemd-cat-native --newline='\n'
+```
+
+`systemd-cat-native` needs to receive it like this for newline processing to work:
+
+```bash
+# printf "MESSAGE=hello\\\\nworld\nPRIORITY=6\n\n"
+MESSAGE=hello\nworld
+PRIORITY=6
+
+```
+
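+The same input format works for every destination. As a rough sketch (the `netdata` namespace and the
+URL below are placeholders; the namespace must already be configured on the target system, and a
+`systemd-journal-remote` must be listening at the address):
+
+```bash
+# send to a specific journald namespace
+printf "MESSAGE=hello world\nPRIORITY=6\n\n" | systemd-cat-native --namespace=netdata
+
+# send to a remote systemd-journal-remote over HTTPS, retrying while the remote is unreachable
+printf "MESSAGE=hello world\nPRIORITY=6\n\n" | systemd-cat-native --url=https://203.0.113.1:19532 --keep-trying
+```
+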
+## Best practices
+
+These are the rules about fields, enforced by `systemd-journald`:
+
+- field names can be up to **64 characters**,
+- field values can be up to **48k characters**,
+- the only characters allowed in field names are **A-Z**, **0-9** and **underscore**,
+- the **first** character of a field name cannot be a **digit**,
+- **protected** journal fields start with underscore:
+ * they are accepted by `systemd-journal-remote`,
+ * they are **NOT** accepted by a local `systemd-journald`.
+
+For best results, always include these fields:
+
+- `MESSAGE=TEXT`<br/>
+ The `MESSAGE` is the body of the log entry.
+ This field is what we usually see in our logs.
+
+- `PRIORITY=NUMBER`<br/>
+ `PRIORITY` sets the severity of the log entry.<br/>
+ `0=emerg, 1=alert, 2=crit, 3=err, 4=warn, 5=notice, 6=info, 7=debug`
+ - Emergency events (0) are usually broadcast to all terminals.
+ - Emergency, alert, critical, and error (0-3) are usually colored red.
+ - Warning (4) entries are usually colored yellow.
+ - Notice (5) entries are usually bold or have a brighter white color.
+ - Info (6) entries are the default.
+ - Debug (7) entries are usually grayed or dimmed.
+
+- `SYSLOG_IDENTIFIER=NAME`<br/>
+  `SYSLOG_IDENTIFIER` sets the name of the application.
+ Use something descriptive, like: `SYSLOG_IDENTIFIER=myapp`
+
+You can find the most common fields in `man systemd.journal-fields`.
+
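+Putting these together, a minimal well-formed entry could look like the sketch below (the
+`myapp` identifier and the message text are just placeholders):
+
+```bash
+printf "MESSAGE=hello world\nPRIORITY=6\nSYSLOG_IDENTIFIER=myapp\n\n" | systemd-cat-native
+```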
+
+## Usage
+
+```
+Netdata systemd-cat-native v1.43.0-333-g5af71b875
+
+This program reads from its standard input, lines in the format:
+
+KEY1=VALUE1\n
+KEY2=VALUE2\n
+KEYN=VALUEN\n
+\n
+
+and sends them to systemd-journal.
+
+ - Binary journal fields are not accepted at its input
+ - Binary journal fields can be generated after newline processing
+ - Messages have to be separated by an empty line
+ - Keys starting with underscore are not accepted (by journald)
+ - All other rules imposed by systemd-journald also apply (by journald)
+
+Usage:
+
+ systemd-cat-native
+ [--newline=STRING]
+ [--log-as-netdata|-N]
+ [--namespace=NAMESPACE] [--socket=PATH]
+ [--url=URL [--key=FILENAME] [--cert=FILENAME] [--trust=FILENAME|all]]
+
+The program has the following modes of logging:
+
+ * Log to a local systemd-journald or stderr
+
+ This is the default mode. If systemd-journald is available, logs will be
+ sent to systemd, otherwise logs will be printed on stderr, using logfmt
+ formatting. Options --socket and --namespace are available to configure
+ the journal destination:
+
+ --socket=PATH
+ The path of a systemd-journald UNIX socket.
+ The program will use the default systemd-journald socket when this
+ option is not used.
+
+ --namespace=NAMESPACE
+ The name of a configured and running systemd-journald namespace.
+ The program will produce the socket path based on its internal
+ defaults, to send the messages to the systemd journal namespace.
+
+ * Log as Netdata, enabled with --log-as-netdata or -N
+
+ In this mode the program uses environment variables set by Netdata for
+ the log destination. Only log fields defined by Netdata are accepted.
+ If the environment variables expected by Netdata are not found, it
+ falls back to stderr logging in logfmt format.
+
+ * Log to a systemd-journal-remote TCP socket, enabled with --url=URL
+
+   In this mode, the program will directly send logs to a remote systemd
+   journal (systemd-journal-remote is expected at the destination).
+   This mode is available even when the local system does not support
+   systemd, or even when it is not Linux, allowing a remote Linux systemd
+   journald to become the logs database of the local system.
+
+ Unfortunately systemd-journal-remote does not accept compressed
+ data over the network, so the stream will be uncompressed.
+
+ --url=URL
+ The destination systemd-journal-remote address and port, similarly
+ to what /etc/systemd/journal-upload.conf accepts.
+ Usually it is in the form: https://ip.address:19532
+ Both http and https URLs are accepted. When using https, the
+ following additional options are accepted:
+
+ --key=FILENAME
+ The filename of the private key of the server.
+ The default is: /etc/ssl/private/journal-upload.pem
+
+ --cert=FILENAME
+ The filename of the public key of the server.
+ The default is: /etc/ssl/certs/journal-upload.pem
+
+ --trust=FILENAME | all
+ The filename of the trusted CA public key.
+ The default is: /etc/ssl/ca/trusted.pem
+ The keyword 'all' can be used to trust all CAs.
+
+ --namespace=NAMESPACE
+ Set the namespace of the messages sent.
+
+ --keep-trying
+ Keep trying to send the message, if the remote journal is not there.
+
+ NEWLINES PROCESSING
+   systemd-journal log entries may contain newlines. However, the
+   Journal Export Format represents multi-line values as binary data,
+   which makes them hard to produce with plain text processing.
+
+   To overcome this limitation, this program accepts single-line text
+   values at its input and converts them to binary formatted multi-line
+   Journal Export Format at its output.
+
+   To achieve that, it can replace a given string with a newline.
+ The parameter --newline=STRING allows setting the string to be replaced
+ with newlines.
+
+   For example, by setting --newline='--NEWLINE--', the program will replace
+   all occurrences of --NEWLINE-- with the newline character, within each
+   VALUE of the KEY=VALUE lines. Once this is done, the program will
+   switch the field to the binary Journal Export Format before sending the
+   log event to systemd-journal.
+
+``` \ No newline at end of file