Diffstat
-rw-r--r--  source/configuration/modules/gssapi.png | bin 0 -> 35638 bytes
-rw-r--r--  source/configuration/modules/gssapi.rst | 73
-rw-r--r--  source/configuration/modules/idx_input.rst | 13
-rw-r--r--  source/configuration/modules/idx_library.rst | 9
-rw-r--r--  source/configuration/modules/idx_messagemod.rst | 15
-rw-r--r--  source/configuration/modules/idx_output.rst | 16
-rw-r--r--  source/configuration/modules/idx_parser.rst | 16
-rw-r--r--  source/configuration/modules/idx_stringgen.rst | 43
-rw-r--r--  source/configuration/modules/im3195.rst | 75
-rw-r--r--  source/configuration/modules/imbatchreport.rst | 222
-rw-r--r--  source/configuration/modules/imdocker.rst | 268
-rw-r--r--  source/configuration/modules/imfile.rst | 948
-rw-r--r--  source/configuration/modules/imgssapi.rst | 154
-rw-r--r--  source/configuration/modules/imhiredis.rst | 356
-rw-r--r--  source/configuration/modules/imhttp.rst | 369
-rw-r--r--  source/configuration/modules/imjournal.rst | 474
-rw-r--r--  source/configuration/modules/imkafka.rst | 177
-rw-r--r--  source/configuration/modules/imklog.rst | 230
-rw-r--r--  source/configuration/modules/imkmsg.rst | 188
-rw-r--r--  source/configuration/modules/immark.rst | 41
-rw-r--r--  source/configuration/modules/impcap.rst | 255
-rw-r--r--  source/configuration/modules/improg.rst | 170
-rw-r--r--  source/configuration/modules/impstats.rst | 405
-rw-r--r--  source/configuration/modules/imptcp.rst | 711
-rw-r--r--  source/configuration/modules/imrelp.rst | 595
-rw-r--r--  source/configuration/modules/imsolaris.rst | 59
-rw-r--r--  source/configuration/modules/imtcp.rst | 1086
-rw-r--r--  source/configuration/modules/imtuxedoulog.rst | 146
-rw-r--r--  source/configuration/modules/imudp.rst | 600
-rw-r--r--  source/configuration/modules/imuxsock.rst | 966
-rw-r--r--  source/configuration/modules/index.rst | 33
-rw-r--r--  source/configuration/modules/mmanon.rst | 370
-rw-r--r--  source/configuration/modules/mmcount.rst | 56
-rw-r--r--  source/configuration/modules/mmdarwin.rst | 229
-rw-r--r--  source/configuration/modules/mmdblookup.rst | 141
-rw-r--r--  source/configuration/modules/mmexternal.rst | 110
-rw-r--r--  source/configuration/modules/mmfields.rst | 132
-rw-r--r--  source/configuration/modules/mmjsonparse.rst | 147
-rw-r--r--  source/configuration/modules/mmkubernetes.rst | 630
-rw-r--r--  source/configuration/modules/mmnormalize.rst | 178
-rw-r--r--  source/configuration/modules/mmpstrucdata.rst | 101
-rw-r--r--  source/configuration/modules/mmrfc5424addhmac.rst | 93
-rw-r--r--  source/configuration/modules/mmrm1stspace.rst | 30
-rw-r--r--  source/configuration/modules/mmsequence.rst | 156
-rw-r--r--  source/configuration/modules/mmsnmptrapd.rst | 103
-rw-r--r--  source/configuration/modules/mmtaghostname.rst | 89
-rw-r--r--  source/configuration/modules/mmutf8fix.rst | 112
-rw-r--r--  source/configuration/modules/module_workflow.png | bin 0 -> 14749 bytes
-rw-r--r--  source/configuration/modules/omamqp1.rst | 476
-rw-r--r--  source/configuration/modules/omazureeventhubs.rst | 412
-rw-r--r--  source/configuration/modules/omclickhouse.rst | 324
-rw-r--r--  source/configuration/modules/omelasticsearch.rst | 1102
-rw-r--r--  source/configuration/modules/omfile.rst | 930
-rw-r--r--  source/configuration/modules/omfwd.rst | 795
-rw-r--r--  source/configuration/modules/omhdfs.rst | 114
-rw-r--r--  source/configuration/modules/omhiredis.rst | 779
-rw-r--r--  source/configuration/modules/omhttp.rst | 869
-rw-r--r--  source/configuration/modules/omhttpfs.rst | 149
-rw-r--r--  source/configuration/modules/omjournal.rst | 71
-rw-r--r--  source/configuration/modules/omkafka.rst | 478
-rw-r--r--  source/configuration/modules/omlibdbi.rst | 238
-rw-r--r--  source/configuration/modules/ommail.rst | 306
-rw-r--r--  source/configuration/modules/ommongodb.rst | 247
-rw-r--r--  source/configuration/modules/ommysql.rst | 201
-rw-r--r--  source/configuration/modules/omoracle.rst | 200
-rw-r--r--  source/configuration/modules/ompgsql.rst | 239
-rw-r--r--  source/configuration/modules/ompipe.rst | 48
-rw-r--r--  source/configuration/modules/omprog.rst | 530
-rw-r--r--  source/configuration/modules/omrabbitmq.rst | 404
-rw-r--r--  source/configuration/modules/omrelp.rst | 482
-rw-r--r--  source/configuration/modules/omruleset.rst | 184
-rw-r--r--  source/configuration/modules/omsnmp.rst | 265
-rw-r--r--  source/configuration/modules/omstdout.rst | 113
-rw-r--r--  source/configuration/modules/omudpspoof.rst | 209
-rw-r--r--  source/configuration/modules/omusrmsg.rst | 67
-rw-r--r--  source/configuration/modules/omuxsock.rst | 61
-rw-r--r--  source/configuration/modules/pmciscoios.rst | 183
-rw-r--r--  source/configuration/modules/pmdb2diag.rst | 146
-rw-r--r--  source/configuration/modules/pmlastmsg.rst | 68
-rw-r--r--  source/configuration/modules/pmnormalize.rst | 121
-rw-r--r--  source/configuration/modules/pmnull.rst | 123
-rw-r--r--  source/configuration/modules/pmrfc3164.rst | 161
-rw-r--r--  source/configuration/modules/pmrfc3164sd.rst | 5
-rw-r--r--  source/configuration/modules/pmrfc5424.rst | 6
-rw-r--r--  source/configuration/modules/sigprov_gt.rst | 94
-rw-r--r--  source/configuration/modules/sigprov_ksi.rst | 99
-rw-r--r--  source/configuration/modules/sigprov_ksi12.rst | 135
-rw-r--r--  source/configuration/modules/workflow.rst | 30
88 files changed, 22854 insertions, 0 deletions
diff --git a/source/configuration/modules/gssapi.png b/source/configuration/modules/gssapi.png
new file mode 100644
index 0000000..c82baa5
--- /dev/null
+++ b/source/configuration/modules/gssapi.png
Binary files differ
diff --git a/source/configuration/modules/gssapi.rst b/source/configuration/modules/gssapi.rst
new file mode 100644
index 0000000..157f5a3
--- /dev/null
+++ b/source/configuration/modules/gssapi.rst
@@ -0,0 +1,73 @@
+GSSAPI module support in rsyslog v3
+===================================
+
+What is it good for:
+
+- client-server authentication
+- encryption of log messages
+
+Requirements:
+
+- Kerberos infrastructure
+- rsyslog, rsyslog-gssapi
+
+Configuration:
+
+Let's assume there are three machines in a Kerberos realm:
+
+- the first is running KDC (Kerberos Authentication Service and Key
+ Distribution Center),
+- the second is a client sending its logs to the server,
+- the third is the receiver, gathering all logs.
+
+1. KDC:
+
+- The Kerberos database must be properly set up on the KDC machine first. Use
+  kadmin/kadmin.local to do that. Two principals need to be added in our
+  case:
+
+#. sender@REALM.ORG
+
+- the client must have a ticket for the principal sender
+- REALM.ORG is the Kerberos realm
+
+#. host/receiver.mydomain.com@REALM.ORG - service principal
+
+- Use ktadd to export the service principal and transfer it to
+  /etc/krb5.keytab on the receiver
+
+2. CLIENT:
+
+- set up rsyslog in /etc/rsyslog.conf (a consolidated example is shown
+  below)
+- $ModLoad omgssapi - load the output gss module
+- $GSSForwardServiceName otherThanHost - set the name of the service
+  principal, "host" is the default one
+- \*.\* :omgssapi:receiver.mydomain.com - action line, forward logs to
+  the receiver
+- kinit root - get the TGT ticket
+- service rsyslog start
+
+3. SERVER:
+
+- set up rsyslog in /etc/rsyslog.conf
+
+- $ModLoad `imgssapi <imgssapi.html>`_ - load input gss module
+
+- $InputGSSServerServiceName otherThanHost - set the name of service
+ principal, "host" is the default one
+
+- $InputGSSServerPermitPlainTCP on - accept GSS and TCP connections
+ (not authenticated senders), off by default
+
+- $InputGSSServerRun 514 - run server on port
+
+- service rsyslog start
+
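+Putting the pieces together, a minimal configuration sketch based on the
+legacy directives listed above could look as follows (host names, service
+name and port are the placeholders used in this document):
+
+.. code-block:: none
+
+   # client (sender) - /etc/rsyslog.conf
+   $ModLoad omgssapi
+   $GSSForwardServiceName otherThanHost
+   *.* :omgssapi:receiver.mydomain.com
+
+   # server (receiver) - /etc/rsyslog.conf
+   $ModLoad imgssapi
+   $InputGSSServerServiceName otherThanHost
+   $InputGSSServerPermitPlainTCP on
+   $InputGSSServerRun 514
+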
+The picture below demonstrates how things work.
+
+.. figure:: gssapi.png
+ :align: center
+ :alt: rsyslog gssapi support
+
+ rsyslog gssapi support
+
diff --git a/source/configuration/modules/idx_input.rst b/source/configuration/modules/idx_input.rst
new file mode 100644
index 0000000..74f485c
--- /dev/null
+++ b/source/configuration/modules/idx_input.rst
@@ -0,0 +1,13 @@
+Input Modules
+-------------
+
+Input modules are used to gather messages from various sources. They
+interface to message generators. They are generally defined via the
+:doc:`input <../input>` configuration object.
+
+.. toctree::
+ :glob:
+ :maxdepth: 1
+
+ im*
+
diff --git a/source/configuration/modules/idx_library.rst b/source/configuration/modules/idx_library.rst
new file mode 100644
index 0000000..62452a1
--- /dev/null
+++ b/source/configuration/modules/idx_library.rst
@@ -0,0 +1,9 @@
+Library Modules
+===============
+
+Library modules provide dynamically loadable functionality for parts of
+rsyslog, most often for other loadable modules. They can not be
+user-configured and are loaded automatically by some components. They
+are just mentioned so that error messages that point to library modules
+can be understood. No module list is provided.
+
diff --git a/source/configuration/modules/idx_messagemod.rst b/source/configuration/modules/idx_messagemod.rst
new file mode 100644
index 0000000..f29c6af
--- /dev/null
+++ b/source/configuration/modules/idx_messagemod.rst
@@ -0,0 +1,15 @@
+Message Modification Modules
+----------------------------
+
+Message modification modules are used to change the content of messages
+being processed. They can be implemented using either the output module
+or the parser module interface. From the rsyslog core's point of view,
+they actually are output or parser modules; it is their implementation
+that makes them special.
+
+.. toctree::
+ :maxdepth: 1
+ :glob:
+
+ mm*
+
diff --git a/source/configuration/modules/idx_output.rst b/source/configuration/modules/idx_output.rst
new file mode 100644
index 0000000..0be3046
--- /dev/null
+++ b/source/configuration/modules/idx_output.rst
@@ -0,0 +1,16 @@
+Output Modules
+--------------
+
+Output modules process messages. With them, message formats can be
+transformed and messages transmitted to various targets.
+They are generally defined via :doc:`action <../actions>` configuration
+objects.
+
+.. toctree::
+ :glob:
+ :maxdepth: 1
+
+ om*
+ sigprov_gt
+ sigprov_ksi
+ sigprov_ksi12
diff --git a/source/configuration/modules/idx_parser.rst b/source/configuration/modules/idx_parser.rst
new file mode 100644
index 0000000..220f2ca
--- /dev/null
+++ b/source/configuration/modules/idx_parser.rst
@@ -0,0 +1,16 @@
+Parser Modules
+--------------
+
+Parser modules are used to parse message content, once the message has
+been received. They can be used to process custom message formats or
+invalidly formatted messages. For details, please see the :doc:`rsyslog
+message parser documentation <../../concepts/messageparser>`.
+
+The following modules are currently provided as part of rsyslog:
+
+.. toctree::
+ :glob:
+ :maxdepth: 1
+
+ pm*
+
diff --git a/source/configuration/modules/idx_stringgen.rst b/source/configuration/modules/idx_stringgen.rst
new file mode 100644
index 0000000..221f0d4
--- /dev/null
+++ b/source/configuration/modules/idx_stringgen.rst
@@ -0,0 +1,43 @@
+String Generator Modules
+========================
+
+String generator modules are used, as the name implies, to generate
+strings based on the message content. They are currently tightly coupled
+with the template system. Their primary use is to speed up template
+processing by providing a native C interface to template generation.
+These modules have existed since 5.5.6. To get an idea of the potential
+speedup, the default file format, when generated by a string generator,
+provides a roughly 5% speedup. For more complex strings, especially
+those that include multiple regular expressions, the speedup may be
+considerably higher.
+
+String generator modules are written to a quite simple interface.
+However, a word of caution is due: they access the rsyslog message
+object via a low-level interface. That interface is not guaranteed yet
+to stay stable. So it may be necessary to modify string generator
+modules if the interface changes. Obviously, we will not do that without
+good reason, but it may happen.
+
+Rsyslog comes with a set of core, built-in string generators, which are
+used to provide those default templates that we consider to be
+time-critical:
+
+- smfile - the default rsyslog file format
+- smfwd - the default rsyslog (network) forwarding format
+- smtradfile - the traditional syslog file format
+- smtradfwd - the traditional syslog (network) forwarding format
+
+Note that when you replace these defaults with some custom strings, you
+will lose some performance (around 5%). For typical systems, this is
+not really relevant. But for high-performance systems, it may be very
+relevant. To solve that issue, create a new string generator module for
+your custom format, starting out from one of the default generators
+provided. If you cannot do this yourself, you may want to contact
+`Adiscon <mailto:info%40adiscon.com>`_ as we offer custom development of
+string generators at a very low price.
+
+Note that string generator modules can be dynamically loaded. However,
+the default ones provided are so important that they are built right
+into the executable. But this does not need to be done that way (and it
+is straightforward to do it dynamically).
+
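+As an illustration only, a custom string generator would typically be loaded
+and then referenced through a template of type "plugin"; the module name
+"smcustom" and the registered generator name "custom" below are hypothetical
+placeholders:
+
+.. code-block:: none
+
+   # load a custom string generator module (name is hypothetical)
+   module(load="smcustom")
+
+   # bind it via a plugin-type template and use that template in an action
+   template(name="myFormat" type="plugin" plugin="custom")
+   action(type="omfile" file="/var/log/custom.log" template="myFormat")
+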
diff --git a/source/configuration/modules/im3195.rst b/source/configuration/modules/im3195.rst
new file mode 100644
index 0000000..10a2d54
--- /dev/null
+++ b/source/configuration/modules/im3195.rst
@@ -0,0 +1,75 @@
+****************************
+im3195: RFC3195 Input Module
+****************************
+
+=========================== ===========================================================================
+**Module Name:**  **im3195**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+Receives syslog messages via RFC 3195. The RAW profile is fully
+implemented and the COOKED profile is provided in an experimental state.
+This module uses `liblogging <http://www.liblogging.org>`_ for the
+actual protocol handling.
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Input Parameter
+---------------
+
+Input3195ListenPort
+^^^^^^^^^^^^^^^^^^^
+
+.. note::
+
+ Parameter is only available in Legacy Format.
+
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "601", "no", "``$Input3195ListenPort``"
+
+The port on which imklog listens for RFC 3195 messages. The default
+port is 601 (the IANA-assigned port)
+
+
+Caveats/Known Bugs
+==================
+
+Due to no demand at all for RFC3195, we have converted rfc3195d to this
+input module, but we have NOT conducted any testing. Also, the module
+does not yet properly handle the recovery case. If someone intends to
+put this module into production, good testing should be conducted. It
+also is a good idea to notify the rsyslog project that you intend to use
+it in production. In this case, we'll probably give the module another
+cleanup. We don't do this now because so far it looks just like a big
+waste of time.
+
+Currently only a single listener can be defined. That one binds to all
+interfaces.
+
+Example
+=======
+
+The following sample accepts syslog messages via RFC 3195 on port 1601.
+
+.. code-block:: none
+
+ $ModLoad im3195
+ $Input3195ListenPort 1601
+
+
diff --git a/source/configuration/modules/imbatchreport.rst b/source/configuration/modules/imbatchreport.rst
new file mode 100644
index 0000000..439eae4
--- /dev/null
+++ b/source/configuration/modules/imbatchreport.rst
@@ -0,0 +1,222 @@
+****************************************
+imbatchreport: Batch report input module
+****************************************
+
+================ ==============================================================
+**Module Name:** **imbatchreport**
+**Authors:** Jean-Philippe Hilaire <jean-philippe.hilaire@pmu.fr> & Philippe Duveau <philippe.duveau@free.fr>
+================ ==============================================================
+
+
+Purpose
+=======
+
+This module allows rsyslog to manage batch reports.
+
+Batches are programs launched successively to process a large amount of
+information. These programs are organized in stages with passing conditions.
+The batch ends with a global execution summary. Each batch produces a single
+result file, usually named with the name of the batch and its date of execution.
+
+Those files make sense only when they are complete in one log message. When
+the file is collected it becomes useless and, like a state file, should be
+deleted or renamed.
+
+This module handles these characteristics:
+
+- reads the complete file,
+
+- extracts the structured data from the file (see managing structured data),
+
+- transmits the message to output module(s),
+
+- applies an action to the file to flag it as treated. Two different actions can be applied: delete or rename the file.
+
+If the file is too large to be handled in the message size defined by rsyslog,
+the file is renamed as a "rejected file". See \$maxMessageSize.
+
+**Managing structured data**
+
+As part of the batch summary, the structured data can be provided in the batch
+report file as the last part of the file.
+
+The last non-space char has to be a closing bracket ']'; then all chars between
+this char and the closest opening bracket '[' are treated as structured data.
+
+All the structured data has to be contained in the last 150 chars of the file.
+
+In general, structured data should contain the batch name (program) and the
+start timestamp. Those two values can be extracted to fill rsyslog message
+attributes.
+
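+For illustration, the tail of a batch report could end like this; the keys
+"SHELL" and "START" are only examples and correspond to the ProgramKey and
+TimestampKey parameters shown below:
+
+.. code-block:: none
+
+   ... 42 records processed, 0 errors, batch ended successfully.
+   [SHELL="daily_invoicing.sh" START=1672531200]
+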
+Compile
+=======
+
+To successfully compile the imbatchreport module::
+
+   ./configure --enable-imbatchreport ...
+
+Configuration Parameters
+========================
+
+Action Parameters
+-----------------
+
+Reports
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "yes", "glob definition",
+
+Glob definition used to identify reports to manage.
+
+Tag
+^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "yes", ,"none"
+
+The tag to be assigned to messages read from this file. If you would like to
+see the colon after the tag, you need to include it when you assign a tag
+value, like so: ``tag="myTagValue:"``.
+
+Facility
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", "facility\|number", "local0"
+
+The syslog facility to be assigned to messages read from this file. Can be
+specified in textual form (e.g. ``local0``, ``local1``, ...) or as numbers (e.g.
+16 for ``local0``). Textual form is suggested.
+
+Severity
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", "severity\|number", "notice"
+
+The syslog severity to be assigned to lines read. Can be specified
+in textual form (e.g. ``info``, ``warning``, ...) or as numbers (e.g. 6
+for ``info``). Textual form is suggested.
+
+DeduplicateSpaces
+^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "no", "", "on"
+
+This parameter modifies the way consecutive space-like chars are managed.
+When it is set to "on", consecutive space-like chars are reduced to a single one
+and trailing space-like chars are suppressed.
+
+Delete
+^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", "<regex> <reject>",
+
+This parameter informs the module to delete the report to flag it as treated.
+If the file is too large (or could not be removed) it is renamed, using the
+<regex> to identify the part of the file name that has to be replaced by
+<reject>. See the Examples section.
+
+Rename
+^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", "<regex> <sent> <reject>",
+
+This parameter informs the module to rename the report to flag it as treated.
+The file is renamed using the <regex> to identify the part of the file name
+that has to be replaced:
+
+- by <sent> if the file was successfully treated,
+
+- by <reject> if the file is too large to be sent.
+
+See the Examples section.
+
+Programkey
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", ,
+
+The attribute in structured data which contains the rsyslog APPNAME.
+This attribute has to be a String between double quotes (").
+
+Timestampkey
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", ,
+
+The attribute in structured data which contains the rsyslog TIMESTAMP.
+This attribute has to be a Number (Unix TimeStamp).
+
+Examples
+========
+
+This example shows the delete action. All files matching
+"/test/\*.ok" will be treated as batch reports and will be deleted
+on success or renamed from <file>.ok to <file>.rejected in other
+cases.
+
+.. code-block:: none
+
+ module(load="imbatchreport")
+ input(type="imbatchreport" reports="/test/*.ok"
+ ruleset="myruleset" tag="batch"
+ delete=".ok$ .rejected"
+ programkey="SHELL" timestampkey="START"
+ )
+
+This example shows the rename action. All files matching
+"/test/\*.ok" will be treated as batch reports and will be renamed
+from <file>.ok to <file>.sent on success or
+renamed from <file>.ok to <file>.rejected in other cases.
+
+.. code-block:: none
+
+ module(load="imbatchreport")
+ input(type="imbatchreport" reports="/test/*.ok"
+ ruleset="myruleset" tag="batch"
+ rename=".ok$ .sent .rejected"
+ programkey="SHELL" timestampkey="START"
+ )
diff --git a/source/configuration/modules/imdocker.rst b/source/configuration/modules/imdocker.rst
new file mode 100644
index 0000000..8c47b8f
--- /dev/null
+++ b/source/configuration/modules/imdocker.rst
@@ -0,0 +1,268 @@
+***************************************
+imdocker: Docker Input Module
+***************************************
+
+=========================== ===========================================================================
+**Module Name:**  **imdocker**
+**Author:** Nelson Yen
+**Available since:** 8.41.0
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+The imdocker input plug-in provides the ability to receive container logs from Docker (engine)
+via the Docker REST API.
+
+Other features include:
+
+- filter containers through the plugin options
+- handle long log lines (greater than 16kb)
+- obtain container metadata, such as container id, name, image id, labels, etc.
+
+**Note**: Multiple docker instances are not supported at the time of this writing.
+
+
+Configuration Parameters
+========================
+
+The configuration parameters for this module are designed for tailoring
+the behavior of imdocker.
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+.. note::
+
+ This module supports module parameters, only.
+
+
+
+Module Parameters
+-----------------
+
+
+DockerApiUnixSockAddr
+^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "/var/run/docker.sock", "no", "none"
+
+Specifies the Docker unix socket address to use.
+
+ApiVersionStr
+^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "v1.27", "no", "none"
+
+Specifies the version of Docker API to use. Must be in the format specified by the
+Docker api, e.g. similar to the default above (v1.27, v1.28, etc).
+
+
+PollingInterval
+^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "60", "no", "none"
+
+Specifies the polling interval in seconds, imdocker will poll for new containers by
+calling the 'List containers' API from the Docker engine.
+
+
+ListContainersOptions
+^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "", "no", "none"
+
+Specifies the HTTP query component of a 'List Containers' HTTP API request.
+See Docker API for more information about available options.
+**Note**: It is not necessary to prepend the string with '?'.
+
+
+GetContainerLogOptions
+^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "timestamps=0&follow=1&stdout=1&stderr=1&tail=1", "no", "none"
+
+Specifies the HTTP query component of a 'Get container logs' HTTP API request.
+See Docker API for more information about available options.
+**Note**: It is not necessary to prepend the string with '?'.
+
+
+RetrieveNewLogsFromStart
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "1", "no", "none"
+
+This option specifies whether imdocker will process newly found container logs from the beginning.
+The exception is for containers found on start-up. The container logs for containers
+that were active at imdocker start-up are controlled via 'GetContainerLogOptions', the
+'tail' in particular.
+
+
+DefaultFacility
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer or string (preferred)", "user", "no", "``$InputFileFacility``"
+
+The syslog facility to be assigned to log messages received. Can be specified
+in textual form (e.g. ``user``, ``local0``) or as numbers. Textual form is
+suggested.
+
+.. seealso::
+
+ https://en.wikipedia.org/wiki/Syslog
+
+
+DefaultSeverity
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer or string (preferred)", "notice", "no", "``$InputFileSeverity``"
+
+The syslog severity to be assigned to log messages received. Can be specified
+in textual form (e.g. ``info``, ``warning``, ...) or as numbers (e.g. 6 for
+``info``). Textual form is suggested. Default is ``notice``.
+
+.. seealso::
+
+ https://en.wikipedia.org/wiki/Syslog
+
+
+escapeLF
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "none"
+
+This is only meaningful if multi-line messages are to be processed.
+LF characters embedded into syslog messages cause a lot of trouble,
+as most tools and even the legacy syslog TCP protocol do not expect
+these. If set to "on", this option avoids this trouble by properly
+escaping LF characters to the 4-byte sequence "#012". This is
+consistent with other rsyslog control character escaping. By default,
+escaping is turned on. If you turn it off, make sure you test very
+carefully with all associated tools. Please note that if you intend
+to use plain TCP syslog with embedded LF characters, you need to
+enable octet-counted framing.
+For more details, see Rainer's blog posting on imfile LF escaping.
+
+
+Metadata
+========
+The imdocker module supports message metadata. It supports the following
+data items:
+
+- **Id** - the container id associated with the message.
+
+- **Names** - the first container name associated with the message.
+
+- **ImageID** - the image id of the container associated with the message.
+
+- **Labels** - all the labels of the container associated with the message in json format.
+
+**Note**: At the time of this writing, metadata is always enabled.
+
+
+Statistic Counter
+=================
+
+This plugin maintains `statistics <http://www.rsyslog.com/rsyslog-statistic-counter/>`_. The statistic is named "imdocker".
+
+The following properties are maintained for each listener:
+
+- **submitted** - total number of messages submitted to the main queue for
+  processing since startup, after being read from the Docker API. Not all
+  records may be submitted due to rate-limiting.
+
+- **ratelimit.discarded** - number of messages discarded due to rate-limiting within configured
+ rate-limiting interval.
+
+- **curl.errors** - total number of curl errors.
+
+
+Caveats/Known Bugs
+==================
+
+- At the moment, this plugin only supports a single instance of docker on a host.
+
+
+Configuration Examples
+======================
+
+Load module, with only defaults
+--------------------------------
+
+This activates the module with all the default options:
+
+.. code-block:: none
+
+ module(load="imdocker")
+
+
+Load module, with container filtering
+-------------------------------------
+
+This activates the module with container filtering on a label:
+
+.. code-block:: none
+
+ module(load="imdocker"
+ DockerApiUnixSockAddr="/var/run/docker.sock"
+ ApiVersionStr="v1.27"
+ PollingInterval="60"
+ ListContainersOptions="filters={\"label\":[\"log_opt_enabled\"]}"
+ GetContainerLogOptions="timestamps=0&follow=1&stdout=1&stderr=0&tail=1"
+ )
+
+
+Example template to get container metadata
+------------------------------------------
+
+An example of how to create a template with container metadata:
+
+.. code-block:: none
+
+ template (name="ImdockerFormat" type="string"
+ string="program:%programname% tag:%syslogtag% id:%$!metadata!Id% name:%$!metadata!Names% imageid:%$!metadata!ImageID% labels:%$!metadata!Labels% msg: %msg%\n"
+ )
+
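+Such a template can then be referenced from an action; a minimal sketch
+(the output file path is only an example):
+
+.. code-block:: none
+
+   action(type="omfile" file="/var/log/docker.log" template="ImdockerFormat")
+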
diff --git a/source/configuration/modules/imfile.rst b/source/configuration/modules/imfile.rst
new file mode 100644
index 0000000..cc11742
--- /dev/null
+++ b/source/configuration/modules/imfile.rst
@@ -0,0 +1,948 @@
+******************************
+imfile: Text File Input Module
+******************************
+
+.. index:: ! imfile
+
+=========================== ===========================================================================
+**Module Name:**  **imfile**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+=========================== ===========================================================================
+
+Purpose
+=======
+
+This module provides the ability to convert any standard text file
+into a syslog
+message. A standard text file is a file consisting of printable
+characters with lines being delimited by LF.
+
+The file is read line-by-line and any line read is passed to rsyslog's
+rule engine. The rule engine applies filter conditions and selects which
+actions need to be carried out. Empty lines are **not** processed, as
+they would result in empty syslog records. They are simply ignored.
+
+As new lines are written they are taken from the file and processed.
+Depending on the selected mode, this happens via inotify or based on
+a polling interval. Especially in polling mode, file reading doesn't
+happen immediately. But there are also slight delays (due to process
+scheduling and internal processing) in inotify mode.
+
+The file monitor supports file rotation. To fully work,
+rsyslogd must run while the file is rotated. Then, any remaining lines
+from the old file are read and processed and when done with that, the
+new file is being processed from the beginning. If rsyslogd is stopped
+during rotation, the new file is read, but any not-yet-reported lines
+from the previous file can no longer be obtained.
+
+When rsyslogd is stopped while monitoring a text file, it records the
+last processed location and continues to work from there upon restart.
+So no data is lost during a restart (except, as noted above, if the file
+is rotated just in this very moment).
+
+Notable Features
+================
+
+- :ref:`Metadata`
+- :ref:`State-Files`
+- :ref:`WildCards`
+- presentation on `using wildcards with imfile <http://www.slideshare.net/rainergerhards1/using-wildcards-with-rsyslogs-file-monitor-imfile>`_
+
+
+Configuration
+=============
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Module Parameters
+-----------------
+
+Mode
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "inotify", "no", "none"
+
+.. versionadded:: 8.1.5
+
+This specifies whether imfile shall run in inotify ("inotify") or polling
+("polling") mode. Traditionally, imfile used polling mode, which is
+much more resource-intense (and slower) than inotify mode. It is
+suggested that users turn on "polling" mode only if they experience
+strange problems in inotify mode. In theory, there should never be a
+reason to enable "polling" mode and later versions will most probably
+remove it.
+
+Note: if a legacy "$ModLoad" statement is used, the default is *polling*.
+This default was kept to prevent problems with old configurations. It
+might change in the future.
+
+.. versionadded:: 8.32.0
+
+On Solaris, the FEN API is used instead of INOTIFY. You can set the mode
+to fen or inotify (which is automatically mapped to fen on Solaris OS).
+Please note that the FEN is limited compared to INOTIFY. Deep wildcard
+matches may not work because of the API limits for now.
+
+
+readTimeout
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "none"
+
+*Default: 0 (no timeout)*
+
+.. versionadded:: 8.23.0
+
+This sets the default value for input *timeout* parameters. See there
+for exact meaning. Parameter value is the number of seconds.
+
+
+timeoutGranularity
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "1", "no", "none"
+
+.. versionadded:: 8.23.0
+
+This sets the interval in which multi-line-read timeouts are checked.
+The interval is specified in seconds. Note that
+this establishes a lower limit on the length of the timeout. For example, if
+a timeoutGranularity of 60 seconds is selected and a readTimeout value of 10 seconds
+is used, the timeout is nevertheless only checked every 60 seconds (if there is
+no other activity in imfile). This means that the readTimeout is also only
+checked every 60 seconds, which in turn means a timeout can occur only after 60
+seconds.
+
+Note that timeoutGranularity has some performance implications. The more frequently
+timeout processing is triggered, the more processing time is needed. This
+effect should be negligible, except if a very large number of files is being
+monitored.
+
+
+sortFiles
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+.. versionadded:: 8.32.0
+
+If this parameter is set to on, the files will be processed in sorted order, else
+not. However, due to the inherent asynchronicity of the whole operations involved
+in tracking files, it is not possible to guarantee this sorted order, as it also
+depends on operation mode and OS timing.
+
+
+PollingInterval
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "10", "no", "none"
+
+This setting specifies how often files are to be
+polled for new data. For obvious reasons, it has effect only if
+imfile is running in polling mode.
+The time specified is in seconds. During each
+polling interval, all files are processed in a round-robin fashion.
+
+A short poll interval provides more rapid message forwarding, but
+requires more system resources. While it is possible, we strongly
+recommend not to set the polling interval to 0 seconds. That will
+make rsyslogd become a CPU hog, taking up considerable resources. It
+is supported, however, for the few very unusual situations where this
+level may be needed. Even if you need a quick response, 1 second
+should be well enough. Please note that imfile keeps reading files as
+long as there is any data in them. So a "polling sleep" will only
+happen when nothing is left to be processed.
+
+**We recommend to use inotify mode.**
+
+
+statefile.directory
+^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "global(WorkDirectory) value", "no", "none"
+
+.. versionadded:: 8.1905.0
+
+This parameter permits specifying a dedicated directory for the storage of
+imfile state files. An absolute path name should be specified (e.g.
+`/var/rsyslog/imfilestate`). This permits keeping imfile state files separate
+from other rsyslog work items.
+
+If not specified the global `workDirectory` setting is used.
+
+**Important: The directory must exist before rsyslog is started.** Also,
+rsyslog needs write permissions to work correctly. Keep in mind that this
+also might require SELinux definitions (or similar for other enhanced security
+systems).
+
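+A minimal sketch of using a dedicated state file directory (the path is only
+an example and must exist with write permissions for rsyslog):
+
+.. code-block:: none
+
+   module(load="imfile" statefile.directory="/var/spool/rsyslog/imfilestate")
+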
+
+Input Parameters
+----------------
+
+File
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "yes", "``$InputFileName``"
+
+The file being monitored. So far, this must be an absolute name (no
+macros or templates). Note that wildcards are supported at the file
+name level (see **WildCards** below for more details).
+
+
+Tag
+^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "yes", "``$InputFileTag``"
+
+The tag to be assigned to messages read from this file. If you would like to
+see the colon after the tag, you need to include it when you assign a tag
+value, like so: ``tag="myTagValue:"``.
+
+
+Facility
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer or string (preferred)", "local0", "no", "``$InputFileFacility``"
+
+The syslog facility to be assigned to messages read from this file. Can be
+specified in textual form (e.g. ``local0``, ``local1``, ...) or as numbers (e.g.
+16 for ``local0``). Textual form is suggested. Default  is ``local0``.
+
+.. seealso::
+
+ https://en.wikipedia.org/wiki/Syslog
+
+
+Severity
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer or string (preferred)", "notice", "no", "``$InputFileSeverity``"
+
+The syslog severity to be assigned to lines read. Can be specified
+in textual form (e.g. ``info``, ``warning``, ...) or as numbers (e.g. 6
+for ``info``). Textual form is suggested. Default is ``notice``.
+
+.. seealso::
+
+ https://en.wikipedia.org/wiki/Syslog
+
+
+PersistStateInterval
+^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "``$InputFilePersistStateInterval``"
+
+Specifies how often the state file shall be written when processing
+the input file. The **default** value is 0, which means a new state
+file is at least being written when the monitored file is being closed (end of
+rsyslogd execution). Any other value n means that the state file is
+written at least every time n file lines have been processed. This setting can
+be used to guard against message duplication due to fatal errors
+(like power fail). Note that this setting affects imfile performance,
+especially when set to a low value. Frequently writing the state file
+is very time consuming.
+
+Note further that rsyslog may write state files
+more frequently. This happens if rsyslog has some reason to do so.
+There is intentionally no more precise description of when state files
+are being written, as this is an implementation detail and may change
+as needed.
+
+**Note: If this parameter is not set, state files are not created.**
+
+
+startmsg.regex
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+.. versionadded:: 8.10.0
+
+This permits the processing of multi-line messages. When set, a
+message is terminated when the next one begins, and
+``startmsg.regex`` contains the regex that identifies the start
+of a message. As this parameter is using regular expressions, it
+is more flexible than ``readMode`` but at the cost of lower
+performance.
+Note that ``readMode`` and ``startmsg.regex`` and ``endmsg.regex`` cannot all be
+defined for the same input.
+
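+A hedged sketch of an input where every new message starts with an ISO-like
+date at the beginning of the line (file path and regex are illustrative):
+
+.. code-block:: none
+
+   input(type="imfile"
+         File="/var/log/app/app.log"
+         Tag="app:"
+         startmsg.regex="^[0-9]{4}-[0-9]{2}-[0-9]{2} ")
+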
+
+endmsg.regex
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+.. versionadded:: 8.38.0
+
+This permits the processing of multi-line messages. When set, a message is
+terminated when ``endmsg.regex`` matches the line that
+identifies the end of a message. As this parameter is using regular
+expressions, it is more flexible than ``readMode`` but at the cost of lower
+performance.
+Note that ``readMode`` and ``startmsg.regex`` and ``endmsg.regex`` cannot all be
+defined for the same input.
+The primary use case for this is multiline container log files which look like
+this:
+
+.. code-block:: none
+
+ date stdout P start of message
+ date stdout P middle of message
+ date stdout F end of message
+
+The `F` means this is the line which contains the final part of the message.
+The fully assembled message should be `start of message middle of message end of
+message`. `endmsg.regex="^[^ ]+ stdout F "` will match.
+
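+A corresponding input sketch for such container log files could look like
+this (the file path is illustrative):
+
+.. code-block:: none
+
+   input(type="imfile"
+         File="/var/log/containers/*.log"
+         Tag="container:"
+         endmsg.regex="^[^ ]+ stdout F ")
+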
+readTimeout
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "none"
+
+.. versionadded:: 8.23.0
+
+This can be used with *startmsg.regex* (but not *readMode*). If specified,
+partial multi-line reads are timed out after the specified timeout interval.
+That means the current message fragment is being processed and the next
+message fragment arriving is treated as a completely new message. The
+typical use case for this parameter is a file that is infrequently being
+written. In such cases, the next message arrives relatively late, maybe hours
+later. Specifying a readTimeout will ensure that those "last messages" are
+emitted in a timely manner. In this use case, the "partial" messages being
+processed are actually full messages, so everything is fully correct.
+
+To guard against accidental too-early emission of a (partial) message, the
+timeout should be sufficiently large (5 to 10 seconds or more recommended).
+Specifying a value of zero turns off timeout processing. Also note the
+relationship to the *timeoutGranularity* global parameter, which sets the
+lower bound of *readTimeout*.
+
+Setting timeout values slightly increases processing time requirements; the
+effect should only be visible if a very large number of files is being
+monitored.
+
+
+readMode
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "``$InputFileReadMode``"
+
+This provides support for processing some standard types of multiline
+messages. It is less flexible than ``startmsg.regex`` or ``endmsg.regex`` but
+offers higher performance than regex processing. Note that ``readMode`` and
+``startmsg.regex`` and ``endmsg.regex`` cannot all be defined for the same
+input.
+
+The value can range from 0-2 and determines the multiline
+detection method.
+
+0 - (**default**) line based (each line is a new message)
+
+1 - paragraph (There is a blank line between log messages)
+
+2 - indented (new log messages start at the beginning of a line. If a
+line starts with a space or tab "\t" it is part of the log message before it)
+
+
+escapeLF
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "1", "no", "none"
+
+This is only meaningful if multi-line messages are to be processed.
+LF characters embedded into syslog messages cause a lot of trouble,
+as most tools and even the legacy syslog TCP protocol do not expect
+these. If set to "on", this option avoids this trouble by properly
+escaping LF characters to the 4-byte sequence "#012". This is
+consistent with other rsyslog control character escaping. By default,
+escaping is turned on. If you turn it off, make sure you test very
+carefully with all associated tools. Please note that if you intend
+to use plain TCP syslog with embedded LF characters, you need to
+enable octet-counted framing. For more details, see
+`Rainer Gerhards' blog posting on imfile LF escaping <https://rainer.gerhards.net/2013/09/imfile-multi-line-messages.html>`_.
+
+
+escapeLF.replacement
+^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "depending on use", "no", "none"
+
+.. versionadded:: 8.2001.0
+
+This parameter works in conjunction with `escapeLF`. It is only
+honored if `escapeLF="on"`.
+
+It permits to replace the default escape sequence by a different character
+sequence. The default historically is inconsistent and depends on which
+functionality is used to read the file. It can be either "#012" or "\\n". If
+you want to retain that default, do not configure this parameter.
+
+If it is configured, any sequence may be used. For example, to replace a LF
+with a simple space, use::
+
+ escapeLF.replacement=" "
+
+It is also possible to configure longer replacements. An example for this is::
+
+ escapeLF.replacement="[LF]"
+
+Finally, it is possible to completely remove the LF. This is done by specifying
+an empty replacement sequence::
+
+ escapeLF.replacement=""
+
+
+MaxLinesAtOnce
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "``$InputFileMaxLinesAtOnce``"
+
+This is a legacy setting that only is supported in *polling* mode.
+In *inotify* mode, it is fixed at 0 and all attempts to configure
+a different value will be ignored, but will generate an error
+message.
+
+Please note that future versions of imfile may not support this
+parameter at all. So it is suggested to not use it.
+
+In *polling* mode, if set to 0, each file will be fully processed and
+then processing switches to the next file. If it is set to any other
+value, a maximum of [number] lines is processed in sequence for each file,
+and then the file is switched. This provides a kind of multiplexing
+the load of multiple files and probably leads to a more natural
+distribution of events when multiple busy files are monitored. For
+*polling* mode, the **default** is 10240.
+
+
+MaxSubmitAtOnce
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "1024", "no", "none"
+
+This is an expert option. It can be used to set the maximum input
+batch size that imfile can generate. The **default** is 1024, which
+is suitable for a wide range of applications. Be sure to understand
+rsyslog message batch processing before you modify this option. If
+you do not know what this doc here talks about, this is a good
+indication that you should NOT modify the default.
+
+
+deleteStateOnFileDelete
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "none"
+
+This parameter controls if state files are deleted if their associated
+main file is deleted. Usually, this is a good idea, because otherwise
+problems would occur if a new file with the same name is created. In
+that case, imfile would pick up reading from the last position in
+the **deleted** file, which usually is not what you want.
+
+However, there is one situation where not deleting associated state
+file makes sense: this is the case if a monitored file is modified
+with an editor (like vi or gedit). Most editors write out modifications
+by deleting the old file and creating a new one. If the state file
+were deleted in that case, the whole file would be reprocessed,
+something that's probably not intended in most cases. As a side note,
+it is strongly suggested *not* to modify monitored files with
+editors. In any case, in such a situation, it makes sense to
+disable state file deletion. That also applies to similar use
+cases.
+
+In general, this parameter should only be set if the user
+knows exactly why this is required.
+
+
+Ruleset
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "``$InputFileBindRuleset``"
+
+Binds the listener to a specific :doc:`ruleset <../../concepts/multi_ruleset>`.
+
+
+addMetadata
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "-1", "no", "none"
+
+**Default: see intro section on Metadata**
+
+This is used to turn on or off the addition of metadata to the
+message object.
+
+
+addCeeTag
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+This is used to turn on or off the addition of the "@cee:" cookie to the
+message object.
+
+
+reopenOnTruncate
+^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+This is an **experimental** feature that tells rsyslog to reopen input file
+when it was truncated (inode unchanged but file size on disk is less than
+current offset in memory).
+
+
+MaxLinesPerMinute
+^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "none"
+
+Instructs rsyslog to enqueue up to the specified maximum number of lines
+as messages per minute. Lines above this value are discarded.
+
+The **default** value is 0, which means that no lines are discarded.
+
+
+MaxBytesPerMinute
+^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "none"
+
+Instructs rsyslog to enqueue a maximum number of bytes as messages per
+minute. Once MaxBytesPerMinute is reached, subsequent messages are
+discarded.
+
+Note that messages are not truncated as a result of MaxBytesPerMinute,
+rather the entire message is discarded if part of it would be above the
+specified maximum bytes per minute.
+
+The **default** value is 0, which means that no messages are discarded.
+
+
+trimLineOverBytes
+^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "none"
+
+This is used to tell rsyslog to truncate lines that are longer than the
+specified number of bytes. If it is a positive number, rsyslog truncates the
+line at the specified byte count. The default value of 'trimLineOverBytes' is
+0, which means lines are never truncated.
+
+This option can be used when ``readMode`` is 0 or 2.
+
+
+freshStartTail
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+This is used to tell rsyslog to seek to the end/tail of input files
+(discard old logs) **at its first start (freshStart)** and process only new
+log messages.
+
+When deploying rsyslog to a large number of servers, we may only care about
+new log messages generated after the deployment. Setting **freshStartTail**
+to **on** will discard old logs. Otherwise, there may be a vast burst of
+useless messages on the remote central log receiver.
+
+This parameter only applies to files that are already existing during
+rsyslog's initial processing of the file monitors.
+
+.. warning::
+
+ Depending on the number and location of existing files, this initial
+ startup processing may take some time as well. If another process
+ creates a new file at exactly the time of startup processing and writes
+ data to it, rsyslog might detect this file and its data as preexisting
+ and may skip it. This race is inevitable. So when freshStartTail is used,
+ some risk of data loss exists. The same holds true if between the last
+ shutdown of rsyslog and its restart log file content has been added.
+ As such, the rsyslog team advises against activating the freshStartTail
+ option.
+
+
+discardTruncatedMsg
+^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+When messages are too long they are truncated and the following part is
+processed as a new message. When this parameter is turned on the
+truncated part is not processed but discarded.
+
+
+msgDiscardingError
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "none"
+
+Upon truncation an error is given. When this parameter is turned off, no
+error will be shown upon truncation.
+
+
+needParse
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+.. versionadded:: 8.1903.0
+
+By default, messages read are sent to output modules without passing through
+parsers. This parameter informs rsyslog to also run the defined parser module(s).
+
+
+
+persistStateAfterSubmission
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+.. versionadded:: 8.2006.0
+
+This setting makes imfile persist state file information after a batch of
+messages has been submitted. It can be activated (switched to "on") in order
+to provide enhanced robustness against unclean shutdowns. Depending on the
+configuration of the rest of rsyslog (most importantly queues), persisting
+the state file after each message submission prevents message loss
+when reading files and the system is shutdown in an unclean way (e.g.
+loss of power).
+
+Please note that this setting may cause frequent state file writes and
+as such may cause some performance degradation.
+
+
+ignoreOlderThan
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "none"
+
+.. versionadded:: 8.2108.0
+
+Instructs imfile to ignore a discovered file that has not been modified in the
+specified number of seconds. Once a file is discovered, the file is no longer
+ignored and new data will be read. This option is disabled (set to 0) by default.
+
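+A hedged example that ignores files untouched for more than one day at
+discovery time (the path is illustrative):
+
+.. code-block:: none
+
+   input(type="imfile"
+         File="/var/log/archive/*.log"
+         Tag="archive:"
+         ignoreOlderThan="86400")
+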
+
+
+.. _Metadata:
+
+Metadata
+========
+The imfile module supports message metadata. It supports the following
+data items:
+
+- filename
+
+ Name of the file where the message originated from. This is most
+ useful when using wildcards inside file monitors, because it then
+ is the only way to know which file the message originated from.
+ The value can be accessed using the %$!metadata!filename% property.
+ **Note**: For symlinked files this does **not** contain the name of the
+ actual file (the source of the data) but the name of the symlink (the file
+ which matched the configured input).
+
+- fileoffset
+
+ Offset of the file in bytes at the time the message was read. The
+ offset reported is from the **start** of the line.
+ This information can be useful when recreating multi-line files
+ that may have been accessed or transmitted non-sequentially.
+ The value can be accessed using the %$!metadata!fileoffset% property.
+
+Metadata is only present if enabled. By default it is enabled for
+input() statements that contain wildcards. For all others, it is
+disabled by default. It can explicitly be turned on or off via the
+*addMetadata* input() parameter, which always overrides the default.
+
+
+.. _State-Files:
+
+State Files
+===========
+Rsyslog must keep track of which parts of the monitored file
+are already processed. This is done in so-called "state files" that
+are created in the rsyslog working directory and are read on startup to
+resume monitoring after a shutdown. The location of the rsyslog
+working directory is configurable via the ``global(workDirectory)``
+|FmtAdvancedName| format parameter.
+
+**Note**: The ``PersistStateInterval`` parameter must be set, otherwise state
+files will NOT be created.
+
+Rsyslog automatically generates state file names. These state file
+names will begin with the string ``imfile-state:`` and be followed
+by some suffix rsyslog generates.
+
+There is intentionally no more precise description of how state file names
+are generated, as this is an implementation detail and may change as needed.
+
+Note that it is possible to set a fixed state file name via the
+deprecated ``stateFile`` parameter. It is suggested to avoid this, as
+the user must take care of name clashes. Most importantly, if
+"stateFile" is set for file monitors with wildcards, the **same**
+state file is used for all occurrences of these files. In short,
+this will usually not work and cause confusion. Upon startup,
+rsyslog tries to detect these cases and emit warning messages.
+However, the detection simply checks for the presence of "*"
+and as such it will not cover more complex cases.
+
+Note that when the ``global(workDirectory)`` |FmtAdvancedName| format
+parameter points to a non-writable location, the state file
+**will not be generated**. In those cases, the file content will always
+be completely re-sent by imfile, because the module does not know that it
+already processed parts of that file. If the parameter is not set at all, it
+defaults to the file system root, which may or may not be writable by
+the rsyslog process.
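+
+A minimal sketch that sets a writable work directory and enables periodic state
+persisting; the directory, paths and interval are examples only:
+
+.. code-block:: none
+
+   global(workDirectory="/var/spool/rsyslog")
+
+   module(load="imfile")
+
+   input(type="imfile"
+         File="/var/log/app/app.log"
+         Tag="app:"
+         PersistStateInterval="100")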
+
+
+.. _WildCards:
+
+WildCards
+=========
+
+**Before Version: 8.25.0**
+   Wildcards are only supported in the filename part, not in directory names.
+
+* /var/log/\*.log **works**
+* /var/log/\*/syslog.log does **not work**
+
+
+**Since Version: 8.25.0**
+   Wildcards are supported in both the file name and the path, which means
+   these samples will work:
+
+* /var/log/\*.log **works**
+* /var/log/\*/syslog.log **works**
+* /var/log/\*/\*.log **works**
+
+
+   All matching files in all matching subfolders will work.
+   Note that this may decrease performance in imfile depending on how
+   many directories and files are being watched dynamically. A short
+   configuration sample follows below.
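+
+A hedged sample of a multi-level wildcard monitor; paths and tag are
+placeholders:
+
+.. code-block:: none
+
+   input(type="imfile"
+         File="/var/log/*/*.log"
+         Tag="dyn:"
+         PersistStateInterval="100")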
+
+
+
+
+Caveats/Known Bugs
+==================
+
+* symlink may not always be properly processed
+
+Configuration Examples
+======================
+
+The following sample monitors two files. If you need just one, remove
+the second one. If you need more, add them according to the sample ;).
+This code must be placed in /etc/rsyslog.conf (or wherever your distro
+puts rsyslog's config files). Note that only the parameters actually needed
+have to be specified. The second input uses fewer parameters and relies on
+defaults instead.
+
+.. code-block:: none
+
+ module(load="imfile" PollingInterval="10") #needs to be done just once
+
+ # File 1
+ input(type="imfile"
+ File="/path/to/file1"
+ Tag="tag1"
+ Severity="error"
+ Facility="local7")
+
+ # File 2
+ input(type="imfile"
+ File="/path/to/file2"
+ Tag="tag2")
+
+ # ... and so on ... #
+
+
+Deprecated parameters
+=====================
+
+**Note:** While these parameters are still accepted, they should no longer be
+used for newly created configurations.
+
+stateFile
+---------
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "``$InputFileStateFile``"
+
+This is the name of this file's state file. This parameter should
+usually **not** be used. Check the section on "State Files" above
+for more details.
+
+
diff --git a/source/configuration/modules/imgssapi.rst b/source/configuration/modules/imgssapi.rst
new file mode 100644
index 0000000..de51b36
--- /dev/null
+++ b/source/configuration/modules/imgssapi.rst
@@ -0,0 +1,154 @@
+************************************
+imgssapi: GSSAPI Syslog Input Module
+************************************
+
+=========================== ===========================================================================
+**Module Name:**  **imgssapi**
+**Author:** varmojfekoj
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+Provides the ability to receive syslog messages from the network
+protected via Kerberos 5 encryption and authentication. This module also
+accepts plain tcp syslog messages on the same port if configured to do
+so. If you need just plain tcp, use :doc:`imtcp <imtcp>` instead.
+
+Note: This is a contributed module, which is not supported by the
+rsyslog team. We recommend using RFC 5425 TLS-protected syslog
+instead.
+
+.. toctree::
+ :maxdepth: 1
+
+ gssapi
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Input Parameter
+---------------
+
+.. note::
+
+   Parameters are only available in legacy format.
+
+
+InputGSSServerRun
+^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "``$InputGSSServerRun``"
+
+Starts a GSSAPI server on the selected port - note that this runs
+independently from the TCP server.
+
+
+InputGSSServerServiceName
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "``$InputGSSServerServiceName``"
+
+The service name to use for the GSS server.
+
+
+InputGSSServerPermitPlainTCP
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "0", "no", "``$InputGSSServerPermitPlainTCP``"
+
+Permits the server to receive plain tcp syslog (without GSS) on the
+same port.
+
+
+InputGSSServerMaxSessions
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "200", "no", "``$InputGSSServerMaxSessions``"
+
+Sets the maximum number of sessions supported.
+
+
+InputGSSServerKeepAlive
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "0", "no", "``$InputGSSServerKeepAlive``"
+
+.. versionadded:: 8.5.0
+
+Enables or disables keep-alive handling.
+
+
+InputGSSListenPortFileName
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "``$InputGSSListenPortFileName``"
+
+.. versionadded:: 8.38.0
+
+With this parameter you can specify the name of a file into which the
+port the listener is bound to will be written.
+This parameter was introduced because the testbench works with dynamic ports.
+
+.. note::
+
+   If this parameter is set, 0 will be accepted as the port. Otherwise it
+   is automatically changed to port 514.
+
+
+Caveats/Known Bugs
+==================
+
+- module always binds to all interfaces
+- only a single listener can be bound
+
+Example
+=======
+
+This sets up a GSS server on port 1514 that also permits receiving
+plain tcp syslog messages (on the same port):
+
+.. code-block:: none
+
+ $ModLoad imgssapi # needs to be done just once
+ $InputGSSServerRun 1514
+ $InputGSSServerPermitPlainTCP on
+
+
diff --git a/source/configuration/modules/imhiredis.rst b/source/configuration/modules/imhiredis.rst
new file mode 100644
index 0000000..3574332
--- /dev/null
+++ b/source/configuration/modules/imhiredis.rst
@@ -0,0 +1,356 @@
+
+.. include:: <isonum.txt>
+
+*****************************
+Imhiredis: Redis input plugin
+*****************************
+
+==================== =====================================
+**Module Name:** **imhiredis**
+**Author:** Jeremie Jourdin <jeremie.jourdin@advens.fr>
+**Contributors:** Theo Bertin <theo.bertin@advens.fr>
+==================== =====================================
+
+Purpose
+=======
+
+Imhiredis is an input module reading arbitrary entries from Redis.
+It uses the `hiredis library <https://github.com/redis/hiredis.git>`_ to query Redis instances using 3 modes:
+
+- **queues**, using `LIST <https://redis.io/commands#list>`_ commands
+- **channels**, using `SUBSCRIBE <https://redis.io/commands#pubsub>`_ commands
+- **streams**, using `XREAD/XREADGROUP <https://redis.io/commands/?group=stream>`_ commands
+
+
+.. _imhiredis_queue_mode:
+
+Queue mode
+----------
+
+The **queue mode** uses Redis LISTs to push/pop messages to/from lists. It allows simple and efficient uses of Redis as a queueing system, providing both LIFO and FIFO methods.
+
+This mode should be preferred if the user wants to use Redis as a caching system, with one (or many) Rsyslog instances POP'ing out entries.
+
+.. Warning::
+   This mode was configured to provide optimal performance while not straining Redis, but as imhiredis has to poll the instance, some trade-offs had to be made:
+
+   - imhiredis POPs entries in batches of 10 to improve performance (the batch size is configurable via the batchsize parameter)
+   - when no entries are left in the list, the module sleeps for 1 second before checking the list again. This means messages might be delayed by as much as 1 second between a push to the list and a pop by imhiredis (entries will still be POP'ed out as fast as possible while the list is not empty)
+
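+A minimal sketch of a queue-mode input; the list name, server and batch size
+are placeholders:
+
+.. code-block:: none
+
+   module(load="imhiredis")
+
+   input(type="imhiredis"
+         mode="queue"
+         key="my_list"
+         server="127.0.0.1"
+         port="6379"
+         batchsize="10")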
+
+.. _imhiredis_channel_mode:
+
+Channel mode
+------------
+
+The **subscribe** mode uses Redis PUB/SUB system to listen to messages published to Redis' channels. It allows performant use of Redis as a message broker.
+
+This mode should be preferred when using Redis as a message broker, with zero, one, or many subscribers listening for new messages.
+
+.. Warning::
+   This mode shouldn't be used if messages have to be reliably processed, as messages published while no imhiredis instance is listening are lost.
+
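+A minimal sketch of a subscribe-mode input; the channel name and server are
+placeholders:
+
+.. code-block:: none
+
+   module(load="imhiredis")
+
+   input(type="imhiredis"
+         mode="subscribe"
+         key="my_channel"
+         server="127.0.0.1"
+         port="6379")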
+
+.. _imhiredis_stream_mode:
+
+Stream mode
+------------
+
+The **stream** mode uses `Redis Streams system <https://redis.io/docs/data-types/streams/>`_ to read entries published to Redis' streams. It is a good alternative when:
+ - sharing work is desired
+ - not losing any log (even in the case of a crash) is mandatory
+
+This mode is especially useful to define pools of workers that do various processing along the way, while ensuring not a single log is lost during processing by a worker.
+
+.. note::
+   As Redis streams do not insert simple values in keys, but rather field/value pairs, this mode can also be useful when handling structured data. This is better shown with the examples for the parameter :ref:`imhiredis_fields`.
+
+   This mode also adds internal metadata to the message. This metadata is not included in the json data or regular fields; instead:
+
+ - **$.redis!stream** will be added to the message, with the value of the source stream
+ - **$.redis!index** will be added to the message, with the exact ID of the entry
+ - **$.redis!group** will be added in the message (if :ref:`imhiredis_stream_consumergroup` is set), with the value of the group used to read the entry
+ - **$.redis!consumer** will be added in the message (if :ref:`imhiredis_stream_consumername` is set), with the value of the consumer name used to read the entry
+
+ This is especially useful when used with the omhiredis module, to allow it to get the required information semi-automatically (custom templates will still be required in the user configuration)
+
+.. Warning::
+   This mode is the most reliable way to handle entries stored in Redis, but it might also be the one with the most overhead. Although the overhead is still minimal, make sure to test the different options and determine if this mode is right for you!
+
+
+Master/Replica
+--------------
+
+This module is able to automatically connect to the master instance of a master/replica cluster. Given a valid connection entry point (either the current master or a valid replica), imhiredis redirects to the master node on startup and whenever the node states change.
+
+
+Configuration Parameters
+========================
+
+.. note::
+ Parameter names are case-insensitive
+
+
+Input Parameters
+----------------
+
+.. _imhiredis_mode:
+
+mode
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "subscribe", "yes", "none"
+
+| Defines the mode to use for the module.
+| Should be either "**subscribe**" (:ref:`imhiredis_channel_mode`), "**queue**" (:ref:`imhiredis_queue_mode`) or "**stream**" (:ref:`imhiredis_stream_mode`) (case-sensitive).
+
+
+ruleset
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+Assign messages from this input to a specific Rsyslog ruleset.
+
+
+batchsize
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "number", "10", "yes", "none"
+
+Defines the dequeue batch size for redis pipelining.
+imhiredis will read "**batchsize**" elements from redis at a time.
+
+When using the :ref:`imhiredis_queue_mode`, defines the size of the batch to use with LPOP / RPOP.
+
+
+.. _imhiredis_key:
+
+key
+^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "yes", "none"
+
+Defines either the name of the list to use (for :ref:`imhiredis_queue_mode`) or the channel to listen to (for :ref:`imhiredis_channel_mode`).
+
+
+.. _imhiredis_socketPath:
+
+socketPath
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "no", "if no :ref:`imhiredis_server` provided", "none"
+
+Defines the socket to use when trying to connect to Redis. Will be ignored if both :ref:`imhiredis_server` and :ref:`imhiredis_socketPath` are given.
+
+
+.. _imhiredis_server:
+
+server
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "ip", "127.0.0.1", "if no :ref:`imhiredis_socketPath` provided", "none"
+
+The Redis server's IP to connect to.
+
+
+.. _imhiredis_port:
+
+port
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "number", "6379", "no", "none"
+
+The Redis server's port to use when connecting via IP.
+
+
+.. _imhiredis_password:
+
+password
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+The password to use when connecting to a Redis node, if necessary.
+
+
+.. _imhiredis_uselpop:
+
+uselpop
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "boolean", "no", "no", "none"
+
+| When using the :ref:`imhiredis_queue_mode`, defines if imhiredis should use a LPOP instruction instead of a RPOP (the default).
+| Has no influence on the :ref:`imhiredis_channel_mode` and will be ignored if set with this mode.
+
+
+.. _imhiredis_stream_consumergroup:
+
+stream.consumerGroup
+^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "", "no", "none"
+
+| When using the :ref:`imhiredis_stream_mode`, defines a consumer group name to use (see `the XREADGROUP documentation <https://redis.io/commands/xreadgroup/>`_ for details). This parameter activates the use of **XREADGROUP** commands, in replacement to simple XREADs.
+| Has no influence in the other modes (queue or channel) and will be ignored.
+
+.. note::
+ If this parameter is set, :ref:`imhiredis_stream_consumername` should also be set
+
+
+.. _imhiredis_stream_consumername:
+
+stream.consumerName
+^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "", "no", "none"
+
+| When using the :ref:`imhiredis_stream_mode`, defines a consumer name to use (see `the XREADGROUP documentation <https://redis.io/commands/xreadgroup/>`_ for details). This parameter activates the use of **XREADGROUP** commands, in replacement to simple XREADs.
+| Has no influence in the other modes (queue or channel) and will be ignored.
+
+.. note::
+ If this parameter is set, :ref:`imhiredis_stream_consumergroup` should also be set
+
+
+.. _imhiredis_stream_readfrom:
+
+stream.readFrom
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "$", "no", "none"
+
+| When using the :ref:`imhiredis_stream_mode`, defines the `starting ID <https://redis.io/docs/data-types/streams-tutorial/#entry-ids>`_ for XREAD/XREADGROUP commands (can also use special IDs, see `documentation <https://redis.io/docs/data-types/streams-tutorial/#special-ids-in-the-streams-api>`_).
+| Has no influence in the other modes (queue or channel) and will be ignored.
+
+
+.. _imhiredis_stream_consumerack:
+
+stream.consumerACK
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "boolean", "on", "no", "none"
+
+| When using :ref:`imhiredis_stream_mode` with :ref:`imhiredis_stream_consumergroup` and :ref:`imhiredis_stream_consumername`, determines if the module should directly acknowledge the ID once read from the Consumer Group.
+| Has no influence in the other modes (queue or channel) and will be ignored.
+
+.. note::
+ When using Consumer Groups and imhiredis, omhiredis can also integrate with this workflow to acknowledge a processed message once put back in another stream (or somewhere else). This parameter is then useful set to **off** to let the omhiredis module acknowledge the input ID once the message is correctly sent.
+
+
+.. _imhiredis_stream_autoclaimidletime:
+
+stream.autoclaimIdleTime
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "positive number", "0", "no", "none"
+
+| When using :ref:`imhiredis_stream_mode` with :ref:`imhiredis_stream_consumergroup` and :ref:`imhiredis_stream_consumername`, determines if the module should check for pending IDs that exceed this time (**in milliseconds**), assume the original consumer failed to acknowledge the log, and claim them as its own (see `the redis documentation <https://redis.io/docs/data-types/streams-tutorial/#automatic-claiming>`_ for more details on how that works).
+| Has no influence in the other modes (queue or channel) and will be ignored.
+
+.. note::
+ If this parameter is set, the AUTOCLAIM operation will also take into account the specified :ref:`imhiredis_stream_readfrom` parameter. **If its value is '$' (default), the AUTOCLAIM commands will use '0-0' as the starting ID**.
+
+
+
+.. _imhiredis_fields:
+
+fields
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "array", "[]", "no", "none"
+
+| When using :ref:`imhiredis_stream_mode`, the module won't get a simple entry but will instead get hashes, with field/value pairs.
+| By default, the module inserts every value into its respective field in the **$!** object, but this parameter can change that behaviour. Each entry of the array is a string where:
+
+  - if the entry begins with a **!** or a **.**, it is taken as a key to look up in the original entry
+  - if the entry doesn't begin with a **!** or a **.**, the value is taken verbatim
+  - in addition, if the entry is prefixed with a **:<key>:** pattern, the value (verbatim or taken from the entry) is inserted under this specific key (or subkey)
+
+*Examples*:
+
+.. csv-table::
+ :header: "configuration", "result"
+ :widths: auto
+ :class: parameter-table
+
+ ``["static_value"]``, the value "static_value" will be inserted in $!static_value
+ ``[":key:static_value"]``, the value "static_value" will be inserted in $!key
+ ``["!field"]``, the value of the field "field" will be inserted in $!field
+ ``[":key!subkey:!field"]``, the value of the field "field" will be inserted in $!key!subkey
+
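+Putting the stream parameters together, a hedged sketch of a consumer-group
+reader; the stream, group and consumer names are placeholders:
+
+.. code-block:: none
+
+   module(load="imhiredis")
+
+   input(type="imhiredis"
+         mode="stream"
+         key="my_stream"
+         server="127.0.0.1"
+         port="6379"
+         stream.consumerGroup="my_group"
+         stream.consumerName="consumer1"
+         stream.readFrom="$"
+         stream.consumerACK="on"
+         fields=["!field", ":key:static_value"])
+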
diff --git a/source/configuration/modules/imhttp.rst b/source/configuration/modules/imhttp.rst
new file mode 100644
index 0000000..1200c39
--- /dev/null
+++ b/source/configuration/modules/imhttp.rst
@@ -0,0 +1,369 @@
+*************************
+imhttp: http input module
+*************************
+
+=========================== ===========
+**Module Name:**  **imhttp**
+**Author:** Nelson Yen
+=========================== ===========
+
+
+Purpose
+=======
+
+Provides the ability to receive ad hoc and plaintext syslog messages via http. The format of accepted messages
+depends on the configuration. imhttp exposes the capabilities and the underlying options of the http library
+used, which currently is civetweb.
+
+Civetweb documentation:
+
+- `Civetweb User Manual <https://github.com/civetweb/civetweb/blob/master/docs/UserManual.md>`_
+- `Civetweb Configuration Options <https://github.com/civetweb/civetweb/blob/master/docs/UserManual.md#configuration-options>`_
+
+Notable Features
+================
+
+- :ref:`imhttp-statistic-counter`
+- :ref:`imhttp-error-messages`
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Module Parameters
+-----------------
+
+Ports
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", "ports", "8080"
+
+Configures "listening_ports" in the civetweb library. This option may also be configured using the
+liboptions_ (below) however, this option will take precendence.
+
+- `Civetweb listening_ports <https://github.com/civetweb/civetweb/blob/master/docs/UserManual.md#listening_ports-8080>`_
+
+
+documentroot
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", "none", "."
+
+Configures "document_root" in the civetweb library. This option may also be configured using liboptions_, however
+this option will take precedence.
+
+- `Civetweb document_root <https://github.com/civetweb/civetweb/blob/master/docs/UserManual.md#document_root->`_
+
+
+.. _liboptions:
+
+liboptions
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", "none", "none"
+
+Configures civetweb library "Options".
+
+- `Civetweb Options <https://github.com/civetweb/civetweb/blob/master/docs/UserManual.md#options-from-civetwebc>`_
+
+
+Input Parameters
+----------------
+
+These parameters can be used with the "input()" statement. They apply to
+the input they are specified with.
+
+
+Endpoint
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "yes", "path that begins with '/' ", "none"
+
+Sets a request path for an http input. Path should always start with a '/'.
+
+
+DisableLFDelimiter
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "no", "", "off"
+
+By default, LF is used to delimit message frames when data is sent in batches.
+Set this to "on" if this behavior is not needed.
+
+
+Name
+^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", "", "imhttp"
+
+Sets a name for the inputname property. If no name is set "imhttp"
+is used by default. Setting a name is not strictly necessary, but can
+be useful to apply filtering based on which input the message was
+received from.
+
+
+Ruleset
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", "", "default ruleset"
+
+Binds specified ruleset to this input. If not set, the default
+ruleset is bound.
+
+
+SupportOctetCountedFraming
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "no", "", "off"
+
+Useful to send data using syslog style message framing, disabled by default. Message framing is described by `RFC 6587 <https://tools.ietf.org/html/rfc6587#section-3.4.1>`_ .
+
+
+RateLimit.Interval
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "no", "none", "0"
+
+Specifies the rate-limiting interval in seconds. Set it to a number
+of seconds to activate rate-limiting.
+
+
+RateLimit.Burst
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "no", "none", "10000"
+
+Specifies the rate-limiting burst in number of messages.
+
+
+
+flowControl
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "no", "none", "on"
+
+Flow control is used to throttle the sender if the receiver queue is
+near-full, preserving some space for input that cannot be throttled.
+
+
+
+addmetadata
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "no", "none", "off"
+
+Enables metadata injection into the `$!metadata` property. Currently, http header
+data and query parameters are supported. The following metadata will be injected
+into the following properties (see also the sketch after this list):
+
+- `$!metadata!httpheaders`: http header data will be injected here as key-value pairs. All header names will automatically be lowercased
+ for case-insensitive access.
+
+- `$!metadata!queryparams`: query parameters from the http request will be injected here as key-value pairs. All parameter names will automatically be lowercased
+  for case-insensitive access.
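+
+As an illustration, a hedged template sketch that picks one injected header out
+of the metadata; the ``user-agent`` header and the template, endpoint and
+ruleset names are assumptions, not fixed API:
+
+.. code-block:: none
+
+   template(name="with_useragent" type="string"
+            string="%$!metadata!httpheaders!user-agent% %msg%\n")
+
+   input(type="imhttp"
+         endpoint="/postrequest"
+         addmetadata="on"
+         ruleset="http_meta_rs")
+
+   ruleset(name="http_meta_rs") {
+       action(type="omfile" file="/var/log/http_messages" template="with_useragent")
+   }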
+
+
+basicAuthFile
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", "none", ""
+
+Enables access control to this endpoint using http basic authentication. This option is disabled by default.
+To enable it, set this option to an `htpasswd` file, which can be generated using a standard `htpasswd` tool.
+
+See also:
+
+- `HTTP Authorization <https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Authorization>`_
+- `HTTP Basic Authentication <https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication#basic_authentication_scheme>`_
+- `htpasswd utility <https://httpd.apache.org/docs/2.4/programs/htpasswd.html>`_
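+
+A hedged sketch of enabling basic authentication on one endpoint; the htpasswd
+path and ruleset name are placeholders:
+
+.. code-block:: none
+
+   input(type="imhttp"
+         endpoint="/postrequest"
+         basicAuthFile="/etc/rsyslog.d/imhttp.htpasswd"
+         ruleset="postrequest_rs")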
+
+
+.. _imhttp-statistic-counter:
+
+
+
+Statistic Counter
+=================
+
+This plugin maintains global imhttp :doc:`statistics <../rsyslog_statistic_counter>`. The statistic's origin and name is "imhttp" and is
+accumulated for all inputs. The statistic has the following counters:
+
+
+- **submitted** - Total number of messages successfully submitted for processing since startup.
+- **failed** - Total number of messages that failed since startup due to errors while processing a request.
+- **discarded** - Total number of messages discarded since startup due to rate limiting or similar.
+
+
+.. _imhttp-error-messages:
+
+Error Messages
+==============
+
+When a message is too long, it will be truncated and an error will show the remaining length of the message and the beginning of it. This makes it easier to comprehend the truncation.
+
+
+Caveats/Known Bugs
+==================
+
+- the module currently supports only a single http instance; however, multiple ports may be bound.
+
+
+Examples
+========
+
+Example 1
+---------
+
+This sets up a http server instance on port 8080 with two inputs.
+One input path at '/postrequest', and another at '/postrequest2':
+
+.. code-block:: none
+
+ # ports=8080
+ # document root='.'
+ module(load="imhttp") # needs to be done just once
+
+ # Input using default LF delimited framing
+ # For example, the following http request, with data body "Msg0001\nMsg0002\nMsg0003"
+ ##
+ # - curl -si http://localhost:$IMHTTP_PORT/postrequest -d $'Msg0001\nMsg0002\nMsg0003'
+ ##
+ # Results in the 3 message objects being submitted into rsyslog queues.
+ # - Message object with `msg` property set to `Msg0001`
+ # - Message object with `msg` property set to `Msg0002`
+ # - Message object with `msg` property set to `Msg0003`
+
+ input(type="imhttp"
+ name="myinput1"
+ endpoint="/postrequest"
+ ruleset="postrequest_rs")
+
+ # define 2nd input path, using octet-counted framing,
+ # and routing to different ruleset
+ input(type="imhttp"
+ name="myinput2"
+ endpoint="/postrequest2"
+ SupportOctetCountedFraming="on"
+ ruleset="postrequest_rs")
+
+ # handle the messages in ruleset
+ ruleset(name="postrequest_rs") {
+ action(type="omfile" file="/var/log/http_messages" template="myformat")
+ }
+
+
+Example 2
+---------
+
+This sets up an http server instance on ports 8080 and 443s (use 's' to indicate ssl) with an input path at '/postrequest':
+
+.. code-block:: none
+
+ # ports=8080, 443 (ssl)
+ # document root='.'
+ module(load="imhttp" ports=8080,443s)
+ input(type="imhttp"
+ endpoint="/postrequest"
+ ruleset="postrequest_rs")
+
+
+
+Example 3
+---------
+
+imhttp can also support the underlying options of `Civetweb <https://github.com/civetweb/civetweb/blob/master/docs/UserManual.md>`_ using the liboptions_ option.
+
+.. code-block:: none
+
+ module(load="imhttp"
+ liboptions=[
+ "error_log_file=my_log_file_path",
+ "access_log_file=my_http_access_log_path",
+ ])
+
+ input(type="imhttp"
+ endpoint="/postrequest"
+ ruleset="postrequest_rs"
+ )
diff --git a/source/configuration/modules/imjournal.rst b/source/configuration/modules/imjournal.rst
new file mode 100644
index 0000000..5c0404c
--- /dev/null
+++ b/source/configuration/modules/imjournal.rst
@@ -0,0 +1,474 @@
+***************************************
+imjournal: Systemd Journal Input Module
+***************************************
+
+=========================== ===========================================================================
+**Module Name:**  **imjournal**
+**Author:** Jiri Vymazal <jvymazal@redhat.com> (This module is **not** project-supported)
+**Available since:** 7.3.11
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+Provides the ability to import structured log messages from systemd
+journal to syslog.
+
+Note that this module reads the journal database, which is considered a
+relatively performance-intensive operation. As such, the performance of a
+configuration utilizing this module may be notably slower than when
+using `imuxsock <imuxsock.html>`_. The journal provides imuxsock with a
+copy of all "classical" syslog messages, however, it does not provide
+structured data. imjournal must be used only if that structured data is needed.
+Otherwise, imjournal may simply be replaced by imuxsock, and we highly
+suggest doing so.
+
+We suggest to check out our short presentation on `rsyslog journal
+integration <http://youtu.be/GTS7EuSdFKE>`_ to learn more details of
+anticipated use cases.
+
+**Warning:** Some versions of systemd journal have problems with
+database corruption, which leads the journal to return the same data
+endlessly in a tight loop. This results in massive message duplication
+inside rsyslog, probably resulting in a denial-of-service when the system
+resources get exhausted. This can be somewhat mitigated by using proper
+rate-limiters, but even then there are spikes of old data which are
+endlessly repeated. By default, ratelimiting is activated and permits
+processing of 20,000 messages within 10 minutes, which should be well enough
+for most use cases. If insufficient, use the parameters described below
+to adjust the permitted volume. **It is strongly recommended to use this
+plugin only if there is a hard need to do so.**
+
+
+Notable Features
+================
+
+- statistics counters
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Module Parameters
+=================
+
+
+PersistStateInterval
+^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "10", "no", "``$imjournalPersistStateInterval``"
+
+This is a global setting. It specifies how often the journal state
+should be persisted. The persist happens after each *number-of-messages*
+messages have been processed. This option allows rsyslog to resume reading
+from the last journal message it read.
+
+FileCreateMode
+^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "octalNumber", "0644", "no", "none"
+
+Set the access permissions for the state file. The value given must
+always be a 4-digit octal number, with the initial digit being zero.
+Please note that the actual permissions depend on rsyslogd's process
+umask. If in doubt, use "$umask 0000" right at the beginning of the
+configuration file to remove any restrictions. The state file's only
+consumer is rsyslog, so it's recommended to adjust the value according
+to that.
+
+
+StateFile
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "``$imjournalStateFile``"
+
+This is a global setting. It specifies where the state file for
+persisting journal state is located. If a full path name is given
+(starting with "/"), that path is used. Otherwise the given name
+is created inside the working directory.
+
+
+Ratelimit.Interval
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "600", "no", "``$imjournalRatelimitInterval``"
+
+Specifies the interval in seconds onto which rate-limiting is to be
+applied. If more than ratelimit.burst messages are read during that
+interval, further messages up to the end of the interval are
+discarded. The number of messages discarded is emitted at the end of
+the interval (if there were any discards).
+
+**Setting this value to 0 turns off ratelimiting.**
+
+Note that it is *not recommended to turn off ratelimiting*, unless you
+know for sure that journal database entries will never be corrupted. Without
+ratelimiting, a corrupted systemd journal database may cause a kind
+of denial of service. We are stressing this point as multiple users
+have reported such problems with the journal database to us - in June
+of 2013 and occasionally also after this time (up until the time of
+this writing in January 2019).
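+
+For example, to permit a larger burst while keeping the 10-minute interval;
+the values are illustrative only:
+
+.. code-block:: none
+
+   module(load="imjournal"
+          ratelimit.interval="600"
+          ratelimit.burst="50000")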
+
+
+Ratelimit.Burst
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "20000", "no", "``$imjournalRatelimitBurst``"
+
+Specifies the maximum number of messages that can be emitted within
+the ratelimit.interval interval. For further information, see
+description there.
+
+
+IgnorePreviousMessages
+^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "``$ImjournalIgnorePreviousMessages``"
+
+This option specifies whether imjournal should ignore messages
+currently in journal and read only new messages. This option is only
+used when there is no StateFile to avoid message loss.
+
+
+DefaultSeverity
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "severity", "5", "no", "``$ImjournalDefaultSeverity``"
+
+Some messages coming from journald don't have the SYSLOG_PRIORITY
+field. These are typically the messages logged through journald's
+native API. This option specifies the default severity for these
+messages. Can be given either as a name or a number. Defaults to 'notice'.
+
+
+DefaultFacility
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "facility", "LOG_USER", "no", "``$ImjournalDefaultFacility``"
+
+Some messages coming from journald don't have the SYSLOG_FACILITY
+field. These are typically the messages logged through journald's
+native API. This option specifies the default facility for these
+messages. Can be given either as a name or a number. Defaults to 'user'.
+
+
+UsePidFromSystem
+^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "0", "no", "none"
+
+Retrieves the trusted systemd parameter, _PID, instead of the user
+systemd parameter, SYSLOG_PID, which is the default.
+This option overrides the "usepid" option.
+It is now deprecated; it is better to use usepid="syslog" instead.
+
+
+UsePid
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "both", "no", "none"
+
+Sets the PID source from journal.
+
+*syslog*
+ *imjournal* retrieves SYSLOG_PID from journal as PID number.
+
+*system*
+ *imjournal* retrieves _PID from journal as PID number.
+
+*both*
+   *imjournal* tries to retrieve SYSLOG_PID first. When it is not
+   available, it also tries to retrieve _PID. When neither is available,
+   the message is parsed without a PID number.
+
+
+IgnoreNonValidStatefile
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "none"
+
+When a corrupted state file is read, imjournal ignores it and continues
+logging from the beginning of the journal (or from its end if IgnorePreviousMessages
+is on). After PersistStateInterval, or when rsyslog is stopped, the invalid state file
+is overwritten with a new valid cursor.
+
+
+WorkAroundJournalBug
+^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "none"
+
+.. versionadded:: 8.37.0
+
+**Deprecated.** This option was intended as temporary and has no effect now
+(since 8.1910.0). Left for backwards compatibility only.
+
+
+FSync
+^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+.. versionadded:: 8.1908.0
+
+When there is a hard crash, power loss or similar abrupt end of the rsyslog process,
+there is a risk of the state file not being written to persistent storage or possibly
+being corrupted. This then results in imjournal starting to read elsewhere than
+desired and most probably in message duplication. To mitigate this problem, you can
+turn this option on, which forces state file writes to persistent physical
+storage. Please note that fsync calls are costly, so especially with a lower
+PersistStateInterval value this may present a considerable performance hit.
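+
+A hedged sketch that trades some performance for robustness; the state file
+name and interval are examples only:
+
+.. code-block:: none
+
+   module(load="imjournal"
+          StateFile="imjournal.state"
+          PersistStateInterval="1000"
+          FSync="on")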
+
+
+Remote
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+.. versionadded:: 8.1910.0
+
+When this option is turned on, imjournal will pull not only all local journal
+files (the default behavior), but also any journal files on the machine
+originating from remote sources.
+
+defaultTag
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+.. versionadded:: 8.2312.0
+
+The DefaultTag option specifies the default value for the tag field.
+In imjournal, this can happen when one of the following is missing:
+
+* identifier string provided by the application (SYSLOG_IDENTIFIER) or
+* name of the process the journal entry originates from (_COMM)
+
+Under normal circumstances, at least one of the previously mentioned fields
+is always part of the journal message. But there are some corner cases
+where this is not the case. This parameter provides the ability to alter
+the content of the tag field.
+
+
+Input Module Parameters
+=======================
+
+Parameters specific to the input module.
+
+Main
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "journal", "no", "none"
+
+.. versionadded:: 8.2312.0
+
+When this option is turned on within the input module, imjournal will run the
+target ruleset in the main thread and will stop taking input if the output
+module is not accepting data. If multiple input modules set `main` to true, only
+the first one will be affected. The non-`main` rulesets will run in the
+background thread and are not affected by the output state.
+
+
+
+Statistic Counter
+=================
+
+.. _imjournal-statistic-counter:
+
+This plugin maintains :doc:`statistics <../rsyslog_statistic_counter>` for each listener and for each worker thread. The listener statistic is named "imjournal".
+
+The following properties are maintained for each listener:
+
+- **read** - total number of messages read from the journal since startup.
+
+- **submitted** - total number of messages submitted to the main queue for processing after reading from
+  the journal since startup. Not all records may be submitted due to rate-limiting.
+
+- **discarded** - total number of messages that were read but not submitted to main queue due to rate-limiting.
+
+- **failed** - total number of failures to read messages from journal.
+
+- **poll_failed** - total number of journal poll failures.
+
+- **rotations** - total number of journal file rotations.
+
+- **recovery_attempts** - total number of recovery attempts by imjournal after unknown errors by closing and
+ re-opening journal.
+
+- **ratelimit_discarded_in_interval** - number of messages discarded due to rate-limiting within configured
+ rate-limiting interval.
+
+- **disk_usage_bytes** - total size of journal obtained from sd_journal_get_usage().
+
+Here is an example output of corresponding imjournal impstat message, which is produced by loading imjournal
+with default rate-limit interval and burst and running a docker container with log-driver as journald that
+spews lots of logs to stdout:
+
+.. code-block:: none
+
+ Jun 13 15:02:48 app1-1.example.com rsyslogd-pstats: imjournal: origin=imjournal submitted=20000 read=216557
+ discarded=196557 failed=0 poll_failed=0 rotations=6 recovery_attempts=0 ratelimit_discarded_in_interval=196557
+ disk_usage_bytes=106610688
+
+Although these counters provide insight into imjournal-side message submissions to the main queue, as well as losses due to
+rate-limiting or other problems extracting messages from the journal, they don't offer full visibility into journal-side
+issues. While these counters measure journal rotations and disk usage, they do not offer visibility into message
+loss due to journal rate-limiting. The sd_journal_* API does not provide any visibility into messages that are
+discarded by the journal due to rate-limiting. Journald does emit a syslog message when log messages cannot make
+it into the journal due to rate-limiting:
+
+.. code-block:: none
+
+ Jun 13 15:50:32 app1-1.example.com systemd-journal[333]: Suppressed 102 messages from /system.slice/docker.service
+
+Such messages can be processed after they are read through imjournal to get a signal for message loss due to journal
+end rate-limiting using a dynamic statistics counter for such log lines with a rule like this:
+
+.. code-block:: none
+
+ dyn_stats(name="journal" resettable="off")
+ if $programname == 'journal' and $msg contains 'Suppressed' and $msg contains 'messages from' then {
+ set $.inc = dyn_inc("journal", "suppressed_count");
+ }
+
+Caveats/Known Bugs
+===================
+
+- As stated above, a corrupted systemd journal database can cause major
+ problems, depending on what the corruption results in. This is beyond
+ the control of the rsyslog team.
+
+- imjournal does not check if messages received actually originated
+ from rsyslog itself (via omjournal or other means). Depending on
+ configuration, this can also lead to a loop. With imuxsock, this
+ problem does not exist.
+
+
+Build Requirements
+===================
+
+Development headers for systemd, version >= 197.
+
+
+Example 1
+=========
+
+The following example shows pulling structured imjournal messages and
+saving them into /var/log/ceelog.
+
+.. code-block:: none
+
+ module(load="imjournal" PersistStateInterval="100"
+ StateFile="/path/to/file") #load imjournal module
+ module(load="mmjsonparse") #load mmjsonparse module for structured logs
+
+ template(name="CEETemplate" type="string" string="%TIMESTAMP% %HOSTNAME% %syslogtag% @cee: %$!all-json%\n" ) #template for messages
+
+ action(type="mmjsonparse")
+ action(type="omfile" file="/var/log/ceelog" template="CEETemplate")
+
+
+Example 2
+=========
+
+The following example is the same as `Example 1`, but with the input module.
+
+.. code-block:: none
+
+ ruleset(name="imjournam-example" queue.type="direct"){
+ action(type="mmjsonparse")
+ action(type="omfile" file="/var/log/ceelog" template="CEETemplate")
+ }
+
+ input(
+ type="imjournal"
+ ruleset="imjournam-example"
+ main="on"
+ )
diff --git a/source/configuration/modules/imkafka.rst b/source/configuration/modules/imkafka.rst
new file mode 100644
index 0000000..e589b49
--- /dev/null
+++ b/source/configuration/modules/imkafka.rst
@@ -0,0 +1,177 @@
+*******************************
+imkafka: read from Apache Kafka
+*******************************
+
+=========================== ===========================================================================
+**Module Name:** **imkafka**
+**Author:** Andre Lorbach <alorbach@adiscon.com>
+**Available since:** 8.27.0
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+The imkafka plug-in implements an Apache Kafka consumer, permitting
+rsyslog to receive data from Kafka.
+
+
+Configuration Parameters
+========================
+
+Note that imkafka supports some *Array*-type parameters. While the parameter
+name can only be set once, it is possible to set multiple values with that
+single parameter.
+
+For example, to select a broker, you can use
+
+.. code-block:: none
+
+ input(type="imkafka" topic="mytopic" broker="localhost:9092" consumergroup="default")
+
+which is equivalent to
+
+.. code-block:: none
+
+ input(type="imkafka" topic="mytopic" broker=["localhost:9092"] consumergroup="default")
+
+To specify multiple values, just use the bracket notation and create a
+comma-delimited list of values as shown here:
+
+.. code-block:: none
+
+ input(type="imkafka" topic="mytopic"
+ broker=["localhost:9092",
+ "localhost:9093",
+ "localhost:9094"]
+ )
+
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Module Parameters
+-----------------
+
+Currently none.
+
+
+Action Parameters
+-----------------
+
+Broker
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "array", "localhost:9092", "no", "none"
+
+Specifies the broker(s) to use.
+
+
+Topic
+^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "yes", "none"
+
+Specifies the topic to read from.
+
+
+ConfParam
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "array", "none", "no", "none"
+
+Permits specifying Kafka options. Rather than offering a myriad of
+config settings to match the Kafka parameters, we provide this setting
+as a vehicle to set any Kafka parameter. This has the big advantage
+that Kafka parameters that come up in new releases can immediately be used.
+
+Note that we use librdkafka for the Kafka connection, so the parameters
+are actually those that librdkafka supports. As of our understanding, this
+is a superset of the native Kafka parameters.
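+
+For illustration, librdkafka settings can be passed through confparam; the
+option names shown are examples of librdkafka consumer settings and should be
+checked against the librdkafka documentation:
+
+.. code-block:: none
+
+   input(type="imkafka" topic="mytopic"
+         broker="localhost:9092"
+         consumergroup="default"
+         confparam=["session.timeout.ms=30000",
+                    "socket.keepalive.enable=true"])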
+
+
+ConsumerGroup
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+With this parameter the group.id for the consumer is set. All consumers
+sharing the same group.id belong to the same group.
+
+
+Ruleset
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+Specifies the ruleset to be used.
+
+
+ParseHostname
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+.. versionadded:: 8.38.0
+
+If this parameter is set to on, imkafka will parse the hostname in the log
+message if it exists. The result can be retrieved from $hostname. If it is off,
+the local hostname is used for compatibility reasons, the same as in previous
+versions.
+
+
+Caveats/Known Bugs
+==================
+
+- currently none
+
+
+Examples
+========
+
+Example 1
+---------
+
+In this sample, a consumer for the topic "static" is created; it forwards the messages to the omfile action.
+
+.. code-block:: none
+
+ module(load="imkafka")
+ input(type="imkafka" topic="static" broker="localhost:9092"
+ consumergroup="default" ruleset="pRuleset")
+
+ ruleset(name="pRuleset") {
+ action(type="omfile" file="path/to/file")
+ }
diff --git a/source/configuration/modules/imklog.rst b/source/configuration/modules/imklog.rst
new file mode 100644
index 0000000..2de34ed
--- /dev/null
+++ b/source/configuration/modules/imklog.rst
@@ -0,0 +1,230 @@
+*******************************
+imklog: Kernel Log Input Module
+*******************************
+
+=========================== ===========================================================================
+**Module Name:**  **imklog**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+Reads messages from the kernel log and submits them to the syslog
+engine.
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Module Parameters
+-----------------
+
+InternalMsgFacility
+^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "facility", "(see description)", "no", "``$KLogInternalMsgFacility``"
+
+The facility which messages internally generated by imklog will
+have. imklog generates some messages of itself (e.g. on problems,
+startup and shutdown) and these do not stem from the kernel.
+Historically, under Linux, these too have "kern" facility. Thus, on
+Linux platforms the default is "kern" while on others it is
+"syslogd". You usually do not need to specify this configuration
+directive - it is included primarily for few limited cases where it
+is needed for good reason. Bottom line: if you don't have a good idea
+why you should use this setting, do not touch it.
+
+
+PermitNonKernelFacility
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "``$KLogPermitNonKernelFacility``"
+
+At least under BSD the kernel log may contain entries with
+non-kernel facilities. This setting controls how those are handled.
+The default is "off", in which case these messages are ignored.
+Switch it to on to submit non-kernel messages to rsyslog processing.
+
+
+ConsoleLogLevel
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "-1", "no", "``$klogConsoleLogLevel``"
+
+Sets the console log level. If specified, only messages with up to
+the specified level are printed to the console. The default is -1,
+which means that the current settings are not modified. To get this
+behavior, do not specify $klogConsoleLogLevel in the configuration
+file. Note that this is a global parameter. Each time it is changed,
+the previous definition is overwritten; the setting in effect is the one
+that is active when imklog actually starts processing. In short:
+do not specify this directive more than once!
+
+**Linux only**, ignored on other platforms (but may be specified)
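+
+For example, to limit console output to kernel messages of level 3 (error) and
+more severe, a minimal sketch:
+
+.. code-block:: none
+
+   module(load="imklog" ConsoleLogLevel="3")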
+
+
+ParseKernelTimestamp
+^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "``$klogParseKernelTimestamp``"
+
+If enabled and the kernel creates a timestamp for its log messages,
+this timestamp will be parsed and converted into the regular message time
+instead of using the receive time of the kernel message (as in 5.8.x
+and before). Default is 'off' to prevent parsing the kernel timestamp,
+because the clock used by the kernel to create the timestamps is not
+supposed to be as accurate as the monotonic clock required to convert
+it. Depending on the hardware and kernel, this can result in message
+time differences between kernel and system messages which occurred at
+the same time.
+
+
+KeepKernelTimestamp
+^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "``$klogKeepKernelTimestamp``"
+
+If enabled, this option keeps the [timestamp] provided by
+the kernel at the beginning of each message rather than removing it
+after it has been parsed and converted into local time for use as the
+regular message time. Only used when $klogParseKernelTimestamp is
+on.
+
+
+LogPath
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "(see description)", "no", "``$klogpath``"
+
+Defines the path to the kernel log file that is used.
+If this parameter is not set, a default is used:
+"/proc/kmsg" on Linux and "/dev/klog" otherwise.
+
+
+RatelimitInterval
+^^^^^^^^^^^^^^^^^
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "none"
+
+.. versionadded:: 8.35.0
+
+The rate-limiting interval in seconds. Value 0 turns off rate limiting.
+Set it to a number of seconds (5 recommended) to activate rate-limiting.
+
+
+RatelimitBurst
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "10000", "no", "none"
+
+.. versionadded:: 8.35.0
+
+Specifies the rate-limiting burst in number of messages. Set it high to
+preserve all bootup messages.
+
+
+Caveats/Known Bugs
+==================
+
+This is obviously platform specific and requires platform drivers.
+Currently, imklog functionality is available on Linux and BSD.
+
+This module is **not supported on Solaris** and not needed there. For
+Solaris kernel input, use :doc:`imsolaris <imsolaris>`.
+
+
+Example 1
+=========
+
+The following sample pulls messages from the kernel log. All parameters
+are left at their default values, which is usually a good idea. Please
+note that loading the plugin is sufficient to activate it. No directive
+is needed to start pulling kernel messages.
+
+.. code-block:: none
+
+ module(load="imklog")
+
+
+Example 2
+=========
+
+The following sample adds a rate limiter. Burst and interval are chosen
+so that a large volume of messages, e.g. during boot, can still pass.
+
+.. code-block:: none
+
+ module(load="imklog" RatelimitBurst="5000" RatelimitInterval="5")
+
+
+Unsupported |FmtObsoleteName| directives
+========================================
+
+.. function:: $DebugPrintKernelSymbols on/off
+
+ Linux only, ignored on other platforms (but may be specified).
+ Defaults to off.
+
+.. function:: $klogLocalIPIF
+
+ This directive is no longer supported. Use the global
+ $localHostIPIF directive instead.
+
+
+.. function:: $klogUseSyscallInterface on/off
+
+ Linux only, ignored on other platforms (but may be specified).
+ Defaults to off.
+
+.. function:: $klogSymbolsTwice on/off
+
+ Linux only, ignored on other platforms (but may be specified).
+ Defaults to off.
+
+
diff --git a/source/configuration/modules/imkmsg.rst b/source/configuration/modules/imkmsg.rst
new file mode 100644
index 0000000..318c9f5
--- /dev/null
+++ b/source/configuration/modules/imkmsg.rst
@@ -0,0 +1,188 @@
+**********************************
+imkmsg: /dev/kmsg Log Input Module
+**********************************
+
+=========================== ===========================================================================
+**Module Name:**  **imkmsg**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+ Milan Bartos <mbartos@redhat.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+Reads messages from the /dev/kmsg structured kernel log and submits them
+to the syslog engine.
+
+The printk log buffer contains log records. These records are exported
+by the /dev/kmsg device as structured data in the following format:
+"level,sequnum,timestamp;<message text>\\n"
+There may be continuation lines starting with a space that contain
+key/value pairs.
+Log messages are parsed as necessary into the rsyslog msg\_t structure.
+Continuation lines are parsed as JSON key/value pairs and added to
+rsyslog's message JSON representation.
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+Module Parameters
+-----------------
+
+parseKernelTimestamp
+^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "parseKernelTimestamp", "no", "none"
+
+.. versionadded:: 8.2312.0
+
+This parameter configures which timestamps will be used. It is an advanced
+setting and most users should probably keep the default mode ("startup").
+
+The Linux kernel message buffer contains a timestamp, which reflects the
+time the message was created. However, this is not the "normal" time one
+expects, but a so-called monotonic time in seconds since kernel start.
+For a datacenter system that runs 24 hours a day, 7 days a week, kernel
+time and actual wall clock time are mostly the same. Problems may occur
+during daylight saving time switches.
+
+For desktops and laptops this is not necessarily the case. The reason
+appears to be that during low power states (energy save mode,
+hibernation), kernel monotonic time **does not advance**. This is also
+**not** corrected when the system comes back to normal operations. As
+such, on systems using low power states from time to time, kernel time
+and wallclock time drift apart. We have been told of cases where the
+drift is in the magnitude of days. Just think about desktops that
+hibernate during the night, missing several hours each day. So this is
+a real-world problem.
+
+To work around this, we usually do **not** use the kernel timestamp when
+we calculate the message time. Instead, we use the wallclock time
+(obtained from the respective Linux timer) of the instant when imkmsg
+reads the message from the kernel log. As message creation and imkmsg
+reading it usually happen in very close time proximity, this approach
+works very well.
+
+**However**, this is not helpful for e.g. early boot messages. These
+were potentially generated some seconds to a minute or two before rsyslog
+startup. To provide a proper meaning of time for these events, we use
+the kernel timestamp instead of wallclock time during rsyslog startup.
+This is most probably correct, because it is extremely unlikely (close
+to impossible) that the system entered a low-power state before rsyslog
+startup.
+
+**Note well:** When rsyslog is restarted during normal system operations,
+existing imkmsg messages are re-read, and this is done with the kernel
+timestamp. This causes message duplication, but it is what imkmsg has
+always done. It is planned to enhance the module to improve this
+behaviour. This documentation page will be updated when changes are
+made.
+
+The *parseKernelTimestamp* parameter provides fine-grained control over
+the processing of kernel vs. wallclock time. Adjustments should be
+needed only rarely and only if there is a dedicated use case for them.
+So use this parameter only if you have a good reason to do so.
+
+Supported modes are:
+
+* **startup** - This is the **DEFAULT setting**.
+
+ Uses the kernel time stamp during the initial read
+ loop of /dev/kmsg, but uses system wallclock time once the initial
+ read is completed. This behavior is described in the text above in
+ detail.
+
+* **on** - kernel timestamps are always used and wallclock time never
+
+* **off** - kernel timestamps are never used, system wallclock time is
+ always used
+
+
+readMode
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "full-boot", "no", "none"
+
+.. versionadded:: 8.2312.0
+
+This parameter controls when imkmsg reads the full kernel log.
+
+It provides the following options:
+
+* **full-boot** - (default) read the full klog, but only "immediately"
+ after boot. "Immediately" here means within the number of seconds of
+ system uptime given in "expectedBootCompleteSeconds".
+
+* **full-always** - read the full klog on every rsyslog startup. Most
+ probably causes message duplication.
+
+* **new-only** - never emit existing kernel log messages, read only new ones.
+
+Note that some message loss can happen if rsyslog is stopped in "full-boot" or
+"new-only" read mode. The longer rsyslog is inactive, the higher the message
+loss probability and potential number of messages lost. For typical restart
+scenarios, this should be minimal. On HUP, no message loss occurs as rsyslog
+is not actually stopped.
+
+
+expectedBootCompleteSeconds
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "positive integer", "90", "no", "none"
+
+.. versionadded:: 8.2312.0
+
+This parameter works in conjunction with **readMode** and specifies for
+how many seconds after startup the system should be considered to be
+"just booted", which means that in **readMode** "full-boot" imkmsg reads
+all existing messages and forwards them to rsyslog processing.
+
+In any other **readMode**, **expectedBootCompleteSeconds** is
+ignored.
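+
+For illustration, a hedged sketch that reads the full kernel log only
+within the first two minutes of uptime (the values are examples only):
+
+.. code-block:: none
+
+ module(load="imkmsg" readMode="full-boot" expectedBootCompleteSeconds="120")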
+
+Caveats/Known Bugs
+==================
+
+This module cannot be used together with the imklog module. When using
+one of them, make sure the other one is not enabled.
+
+This is a Linux-specific module and requires the /dev/kmsg device with
+structured kernel logs.
+
+This module does not support rulesets. All messages are delivered to the
+default ruleset.
+
+
+
+Examples
+========
+
+The following sample pulls messages from the /dev/kmsg log device. All
+parameters are left at their default values, which is usually a good
+idea. Please note that loading the plugin is sufficient to activate it.
+No directive is needed to start pulling messages.
+
+.. code-block:: none
+
+ module(load="imkmsg")
+
+
diff --git a/source/configuration/modules/immark.rst b/source/configuration/modules/immark.rst
new file mode 100644
index 0000000..cbae437
--- /dev/null
+++ b/source/configuration/modules/immark.rst
@@ -0,0 +1,41 @@
+**********************************
+immark: Mark Message Input Module
+**********************************
+
+=========================== ===========================================================================
+**Module Name:**  **immark**
+**Author:** `Rainer Gerhards <http://www.gerhards.net/rainer>`_ <rgerhards@adiscon.com>
+=========================== ===========================================================================
+
+Purpose
+=======
+
+This module provides the ability to inject periodic "mark" messages into
+the input of rsyslog. This is useful for verifying that the logging
+system is functioning.
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+Module Parameters
+-----------------
+
+interval
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "max", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "1200", "", "no", "``$MarkMessagePeriod``"
+
+Specifies the mark message injection interval in seconds.
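+
+For example, a minimal sketch that emits a mark message every 60
+seconds:
+
+.. code-block:: none
+
+ module(load="immark" interval="60")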
+
+.. seealso::
+
+ The Action Parameter ``action.writeAllMarkMessages`` in :doc:`../actions`.
diff --git a/source/configuration/modules/impcap.rst b/source/configuration/modules/impcap.rst
new file mode 100644
index 0000000..99e0f49
--- /dev/null
+++ b/source/configuration/modules/impcap.rst
@@ -0,0 +1,255 @@
+
+*******************************
+Impcap: network traffic capture
+*******************************
+
+==================== =====================================
+**Module Name:** **impcap**
+**Author:** Theo Bertin <theo.bertin@advens.fr>
+==================== =====================================
+
+Purpose
+=======
+
+Impcap is an input module based upon `tcpdump's libpcap <https://www.tcpdump.org/>`_ library for network traffic capture.
+
+Its goal is to capture network traffic efficiently, parse network packet
+metadata AND data, and allow users/modules to make full use of it.
+
+
+
+Configuration Parameters
+========================
+
+.. note::
+ Parameter names are case-insensitive
+
+Module Parameter
+----------------
+
+metadata_container
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "!impcap", "no", "none"
+
+Defines the container to place all the parsed metadata of the network packet.
+
+.. Warning::
+ If overridden, this parameter should always begin with '!' to define the JSON object accompanying messages. No checks are done to ensure this,
+ and not complying with this rule may prevent impcap/rsyslog from running or result in unexpected behaviour.
+
+
+data_container
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "!data", "no", "none"
+
+Defines the container to place all the data of the network packet. 'data' here defines everything above transport layer
+in the OSI model, and is a string representation of the hexadecimal values of the stream.
+
+.. Warning::
+ If overridden, this parameter should always begin with '!' to define the JSON object accompanying messages. No checks are done to ensure this,
+ and not complying with this rule may prevent impcap/rsyslog from running or result in unexpected behaviour.
+
+
+
+snap_length
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "number", "65535", "no", "none"
+
+Defines the maximum size of captured packets.
+If captured packets are longer than the defined value, they will be capped.
+The default value allows any type of packet to be captured entirely, but it
+can be set much lower if only metadata capture is desired (500 to 2000 should
+still be safe, depending on the network protocols).
+Be wary though: impcap won't be able to parse metadata correctly if the value
+is not high enough.
+
+
+Input Parameters
+----------------
+
+interface
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+This parameter specifies the network interface to listen to. **If 'interface'
+is not specified, 'file' must be specified in order for the module to run.**
+
+.. note::
+ The name must be a valid network interface on the system (such as 'lo').
+ See :ref:`Supported interface types` for an exhaustive list of all supported interface link-layer types.
+
+
+file
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+This parameter specifies a pcap file to read.
+The file must respect the `pcap file format specification <https://www.tcpdump.org/pcap/pcap.html>`_.
+**If 'file' is not specified, 'interface' must be specified in order for the module to run.**
+
+.. Warning::
+ This functionality is not intended for production environments;
+ it is designed for development and testing.
+
+
+promiscuous
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "boolean", "off", "no", "none"
+
+When a valid interface is provided, sets the capture to promiscuous mode for
+this interface.
+
+.. warning::
+ Setting your network interface to promiscuous mode may conflict with your
+ local laws and regulations; the maintainers cannot be held responsible for
+ improper use of the module.
+
+
+filter
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+Set a filter for the capture.
+Filter semantics are defined in the `pcap man pages <https://www.tcpdump.org/manpages/pcap-filter.7.html>`_.
+
+
+tag
+^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+Set a tag for messages coming from this input.
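+
+As a hedged sketch, a basic capture configuration combining the
+parameters above might look like this (the interface name and filter are
+examples only, assuming the usual input() syntax):
+
+.. code-block:: none
+
+ module(load="impcap")
+ input(type="impcap" interface="lo" tag="impcap:" filter="tcp port 514")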
+
+
+ruleset
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+Assign messages from this input to a specific rsyslog ruleset.
+
+
+.. _no_buffer:
+
+no_buffer
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "boolean", "off", "no", "none"
+
+Disable buffering during capture.
+By default, impcap asks the system to buffer packets (see the :ref:`buffer_size`, :ref:`buffer_timeout` and
+:ref:`packet_count` parameters); this parameter disables buffering completely. This means packets are handled as soon
+as they arrive, but impcap will make more system calls to get them and might miss some, depending on the incoming rate
+and system performance.
+
+
+.. _buffer_size:
+
+buffer_size
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "number (octets)", "15740640", "no", "none"
+
+Set a buffer size in bytes for the capture handle.
+This parameter is only relevant when :ref:`no_buffer` is not active, and should be set depending on the input packet
+rate and the :ref:`buffer_timeout` and :ref:`packet_count` values.
+
+
+.. _buffer_timeout:
+
+buffer_timeout
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "number (ms)", "10", "no", "none"
+
+Set a timeout in milliseconds between two system calls to get buffered packets. This parameter prevents low-input-rate
+interfaces from keeping packets in buffers for too long, but does not guarantee that packets are fetched at exactly this interval (see the `pcap manpage <https://www.tcpdump.org/manpages/pcap.3pcap.html>`_ for more details).
+
+
+
+.. _packet_count:
+
+packet_count
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "number", "5", "no", "none"
+
+Set a maximum number of packets to process at a time. This parameter limits each batch call to at most this number of
+packets.
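+
+For illustration, a hedged sketch tuning the buffering parameters for a
+busier interface (the interface name and values are examples only):
+
+.. code-block:: none
+
+ module(load="impcap" snap_length="1500")
+ input(type="impcap" interface="eth0" buffer_size="31481280" buffer_timeout="20" packet_count="20")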
+
+
+.. _Supported interface types:
+
+Supported interface types
+=========================
+
+Impcap currently supports IEEE 802.3 Ethernet link-layer type interfaces.
+Please contact the maintainer if you need a different interface type!
diff --git a/source/configuration/modules/improg.rst b/source/configuration/modules/improg.rst
new file mode 100644
index 0000000..a0fd746
--- /dev/null
+++ b/source/configuration/modules/improg.rst
@@ -0,0 +1,170 @@
+****************************************
+improg: Program integration input module
+****************************************
+
+================ ==============================================================
+**Module Name:** **improg**
+**Authors:** Jean-Philippe Hilaire <jean-philippe.hilaire@pmu.fr> & Philippe Duveau <philippe.duveau@free.fr>
+================ ==============================================================
+
+
+Purpose
+=======
+
+This module allows rsyslog to spawn external command(s) and consume messages
+from pipe(s) (stdout of the external process).
+
+**Limitation:** `select()` seems not to support usage of `printf(...)` or
+`fprintf(stdout,...)`. Only `write(STDOUT_FILENO,...)` seems to work reliably.
+
+The input module consumes the pipes from all external programs in a
+single-threaded `runInput` method. This means that data processing is serialized.
+
+Optionally, the module manages the external program through keywords sent to
+it using a second pipe connected to stdin of the external process.
+
+An operational sample in C can be found at "github.com/pduveau/jsonperfmon".
+
+A bash script is also provided as tests/improg-simul.sh. The `echo` and `read` built-ins can be used to communicate with the module.
+External commands cannot be used to communicate. `printf` is unable to send data directly to the module but can be used through a variable and `echo`.
+
+
+Compile
+=======
+
+To compile the improg module, configure rsyslog with::
+
+ ./configure --enable-improg ...
+
+Configuration Parameters
+========================
+
+Action Parameters
+-----------------
+
+Binary
+^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "yes", "command arguments...",
+
+Command line: the external program and its arguments.
+
+Tag
+^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "yes", ,"none"
+
+The tag to be assigned to messages read from this input. If you would like to
+see the colon after the tag, you need to include it when you assign a tag
+value, like so: ``tag="myTagValue:"``.
+
+Facility
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", "facility\|number", "local0"
+
+The syslog facility to be assigned to messages read from this input. Can be
+specified in textual form (e.g. ``local0``, ``local1``, ...) or as numbers (e.g.
+16 for ``local0``). Textual form is suggested.
+
+Severity
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", "severity\|number", "notice"
+
+The syslog severity to be assigned to lines read. Can be specified
+in textual form (e.g. ``info``, ``warning``, ...) or as numbers (e.g. 6
+for ``info``). Textual form is suggested.
+
+confirmMessages
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "no", "on\|off", "on"
+
+Specifies whether the external program needs feedback from rsyslog via stdin.
+When this switch is set to "on", rsyslog confirms each received message.
+This feature facilitates error handling: instead of having to implement a retry
+logic, the external program can rely on the rsyslog queueing capabilities.
+The program receives a line with the word ``ACK`` from its standard input.
+
+Also, the program receives a ``STOP`` when rsyslog asks the module to stop.
+
+signalOnClose
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "no", "on\|off", "off"
+
+Specifies whether a TERM signal must be sent to the external program before
+closing it (when either the worker thread has been unscheduled, a restart
+of the program is being forced, or rsyslog is about to shut down).
+
+closeTimeout
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "number", "no", ,"200"
+
+Specifies the timeout rsyslog waits for the external program to terminate
+(when either the worker thread has been unscheduled, a restart of the program
+is being forced, or rsyslog is about to shut down) before a KILL signal may
+be sent (see killUnresponsive).
+
+killUnresponsive
+^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "no", "on\|off", "on"
+
+Specifies whether a KILL signal must be sent to the external program in case
+it does not terminate within the timeout indicated by closeTimeout
+(when either the worker thread has been unscheduled, a restart of the program
+is being forced, or rsyslog is about to shut down).
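+
+For illustration, a hedged configuration sketch combining the parameters
+above (the program path is a placeholder, assuming the usual input()
+syntax for input modules):
+
+.. code-block:: none
+
+ module(load="improg")
+ input(type="improg" binary="/usr/local/bin/my-collector --json" tag="improg:" confirmMessages="on" signalOnClose="on" closeTimeout="500")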
+
+Stop sequence
+=============
+
+1. If `confirmMessages` is set to "on", a `STOP` is written to stdin of the child.
+2. If `signalOnClose` is set to "on", a TERM signal is sent to the child.
+3. The pipes to the child process are closed (the child will receive EOF on stdin).
+4. rsyslog then waits for the child process to terminate during closeTimeout.
+5. If the child has not terminated within the timeout, a KILL signal is sent to it.
+
+
diff --git a/source/configuration/modules/impstats.rst b/source/configuration/modules/impstats.rst
new file mode 100644
index 0000000..ca95609
--- /dev/null
+++ b/source/configuration/modules/impstats.rst
@@ -0,0 +1,405 @@
+***********************************************************
+impstats: Generate Periodic Statistics of Internal Counters
+***********************************************************
+
+=========================== ===========================================================================
+**Module Name:**  **impstats**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+This module provides periodic output of rsyslog internal counters.
+
+The set of available counters will be output as a set of syslog
+messages. This output is periodic, with the interval being configurable
+(default is 5 minutes). Be sure that your configuration records the
+counter messages (default is syslog.=info). Besides logging to the
+regular syslog stream, the module can also be configured to write
+statistics data into a (local) file.
+
+When logging to the regular syslog stream, impstats records are emitted
+just like regular log messages. As such,
+counters increase when processing these messages. This must be taken into
+consideration when testing and troubleshooting.
+
+Note that loading this module has some impact on rsyslog performance.
+Depending on settings, this impact may be noticeable for high-load
+environments, but in general the overhead is pretty light.
+
+**Note that there is a** `rsyslog statistics online
+analyzer <http://www.rsyslog.com/impstats-analyzer/>`_ **available.** It
+can be given an impstats-generated file and will report the problems it
+detects. Note that the analyzer cannot replace a human in getting things
+right, but it is expected to be a good aid in starting to understand and
+gain information from the pstats logs.
+
+The rsyslog website has an overview of available `rsyslog
+statistic counters <http://rsyslog.com/rsyslog-statistic-counter/>`_.
+When browsing this page, please be sure to take note of which rsyslog
+version is required to provide a specific counter. Counters are
+continuously being added, and older versions do not support everything.
+
+
+Notable Features
+================
+
+- :ref:`impstats-statistic-counter`
+
+
+
+
+Configuration Parameters
+========================
+
+The configuration parameters for this module are designed for tailoring
+the method and process for outputting the rsyslog statistics to file.
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+.. note::
+
+ This module supports module parameters only.
+
+
+Module Parameters
+-----------------
+
+Interval
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "300", "no", "none"
+
+Sets the interval, in **seconds**, at which messages are generated.
+Please note that the actual interval may be a bit longer. We do not
+try to be precise and so the interval is actually a sleep period
+which is entered after generating all messages. So the actual
+interval is what is configured here plus the actual time required to
+generate messages. In general, the difference should not really
+matter.
+
+
+Facility
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "5", "no", "none"
+
+The numerical syslog facility code to be used for generated
+messages. Default is 5 (syslog). This is useful for filtering
+messages.
+
+
+Severity
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "6", "no", "none"
+
+The numerical syslog severity code to be used for generated
+messages. Default is 6 (info). This is useful for filtering messages.
+
+
+ResetCounters
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+When set to "on", counters are automatically reset after they are
+emitted. In that case, the contain only deltas to the last value
+emitted. When set to "off", counters always accumulate their values.
+Note that in auto-reset mode not all counters can be reset. Some
+counters (like queue size) are directly obtained from internal object
+and cannot be modified. Also, auto-resetting introduces some
+additional slight inaccuracies due to the multi-threaded nature of
+rsyslog and the fact that for performance reasons it cannot serialize
+access to counter variables. As an alternative to auto-reset mode,
+you can use rsyslog's statistics manipulation scripts to create delta
+values from the regular statistic logs. This is the suggested method
+if deltas are not necessarily needed in real-time.
+
+
+Format
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "legacy", "no", "none"
+
+.. versionadded:: 8.16.0
+
+Specifies the format of emitted stats messages. The default of
+"legacy" is compatible with pre-v6 rsyslog. The other options provide
+support for structured formats (note that "cee" is actually "project
+lumberjack" logging).
+
+The json-elasticsearch format supports the broken ElasticSearch
+JSON implementation. ES 2.0 no longer supports valid JSON and
+disallows dots inside names. The "json-elasticsearch" format
+option replaces those dots by the bang ("!") character. So
+"discarded.full" becomes "discarded!full".
+Options: json/json-elasticsearch/cee/legacy
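+
+For example, a minimal sketch selecting the structured JSON format:
+
+.. code-block:: none
+
+ module(load="impstats" interval="60" format="json")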
+
+
+log.syslog
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "none"
+
+This is a boolean setting specifying if data should be sent to the
+usual syslog stream. This is useful if custom formatting or more
+elaborate processing is desired. However, output is placed under the
+same restrictions as regular syslog data, especially in regard to the
+queue position (stats data may sit for an extended period of time in
+queues if they are full). If set to "off", you cannot bind the module to a
+ruleset.
+
+
+log.file
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+If specified, statistics data is written to the specified file. For
+robustness, this should be a local file. The file format cannot be
+customized, it consists of a date header, followed by a colon,
+followed by the actual statistics record, all on one line. Only very
+limited error handling is done, so if things go wrong stats records
+will probably be lost. Logging to a file can be a useful alternative if,
+for some reason (e.g. full queues), the regular syslog stream method
+should not be used alone. Note that turning on file logging does NOT
+turn off syslog logging. If that is desired, log.syslog="off" must be
+explicitly set.
+
+
+Ruleset
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+Binds the listener to a specific :doc:`ruleset <../../concepts/multi_ruleset>`.
+
+**Note** that setting ``ruleset`` and ``log.syslog="off"`` are mutually
+exclusive because syslog stream processing must be enabled to use a ruleset.
+
+
+Bracketing
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+.. versionadded:: 8.4.1
+
+This is a utility setting for folks who post-process impstats logs
+and would like to know the beginning and end of a block of statistics.
+When "bracketing" is set to "on", impstats issues a "BEGIN" message
+before the first counter is issued, then all counter values
+are issued, and then an "END" message follows. As such, if and only if messages
+are kept in sequence, a block of stats counts can easily be identified
+by those BEGIN and END messages.
+
+**Note well:** in general, sequence of syslog messages is **not**
+strict and is not ordered in sequence of message generation. There
+are various occasions that can cause message reordering; some
+examples are:
+
+* using multiple threads
+* using UDP forwarding
+* using relay systems, especially with buffering enabled
+* using disk-assisted queues
+
+This is not a problem with rsyslog, but rather the way a concurrent
+world works. For strict order, a specific order predicate (e.g. a
+sufficiently fine-grained timestamp) must be used.
+
+As such, BEGIN and END records may actually indicate the begin and
+end of a block of statistics - or they may *not*. Any order is possible
+in theory. So the bracketing option does not in all cases work as
+expected. This is the reason why it is turned off by default.
+
+*However*, bracketing may still be useful for many use cases. First
+and foremost, while there are many scenarios in which messages become
+reordered, in practice it happens relatively seldom. So most of the
+time the statistics records will come in as expected and actually
+will be bracketed by the BEGIN and END messages. Consequently, if
+an application can handle occasional out-of-order delivery (e.g. by
+graceful degradation), bracketing may actually be a great solution.
+It is, however, very important to know and
+handle out of order delivery. For most real-world deployments,
+a good way to handle it is to ignore unexpected
+records and use the previous values for ones missing in the current
+block. To guard against two or more blocks being mixed, it may also
+be a good idea to never reset a value to a lower bound, except when
+that lower bound is seen consistently (which happens due to a
+restart). Note that such lower bound logic requires *resetCounters*
+to be set to off.
+
+
+.. _impstats-statistic-counter:
+
+Statistic Counter
+=================
+
+The impstats plugin gathers some internal :doc:`statistics <../rsyslog_statistic_counter>`.
+They have different names depending on the actual statistics. Obviously, they do not
+relate to the plugin itself but rather to a broader object – most notably the
+rsyslog process itself. The "resource-usage" counter maintains process
+statistics. They are based on the getrusage() system call. The counters are
+named after the data members returned by getrusage(). So for details, looking
+them up in "man getrusage" is highly recommended, especially as values may
+differ depending on the platform. A getrusage() call is done immediately
+before the counter is emitted. The following individual counters are
+maintained:
+
+- ``utime`` - the user time in microseconds (the timeval structure combined into a single value)
+- ``stime`` - again, time given in microseconds
+- ``maxrss``
+- ``minflt``
+- ``majflt``
+- ``inblock``
+- ``outblock``
+- ``nvcsw``
+- ``nivcsw``
+- ``openfiles`` - number of file handles used by rsyslog; includes actual files, sockets and others
+
+
+Caveats/Known Bugs
+==================
+
+- This module MUST be loaded right at the top of rsyslog.conf,
+ otherwise stats may not get turned on in all places.
+
+
+Examples
+========
+
+Load module, send stats data to syslog stream
+---------------------------------------------
+
+This activates the module and records messages to /var/log/rsyslog-stats
+in 10 minute intervals:
+
+.. code-block:: none
+
+ module(load="impstats"
+ interval="600"
+ severity="7")
+
+ # to actually gather the data:
+ syslog.=debug /var/log/rsyslog-stats
+
+
+Load module, send stats data to local file
+------------------------------------------
+
+Here, the stats interval is set to 10 minutes. However, this time, stats
+data is NOT emitted to the syslog stream but to a local file instead.
+
+.. code-block:: none
+
+ module(load="impstats"
+ interval="600"
+ severity="7"
+ log.syslog="off"
+ # need to turn log stream logging off!
+ log.file="/path/to/local/stats.log")
+
+
+Load module, send stats data to local file and syslog stream
+------------------------------------------------------------
+
+Here we log to both the regular syslog log stream as well as a
+file. Within the log stream, we forward the data records to another
+server:
+
+.. code-block:: none
+
+ module(load="impstats"
+ interval="600"
+ severity="7"
+ log.file="/path/to/local/stats.log")
+
+ syslog.=debug @central.example.net
+
+
+Explanation of output
+=====================
+
+Example output for illustration::
+
+ Sep 17 11:43:49 localhost rsyslogd-pstats: imuxsock: submitted=16
+ Sep 17 11:43:49 localhost rsyslogd-pstats: main Q: size=1 enqueued=2403 full=0 maxqsize=2
+
+Explanation:
+
+All objects are shown in the results with a separate counter, one object per
+line.
+
+Line 1: shows details for
+
+- ``imuxsock``, an object
+- ``submitted=16``, a counter showing that 16 messages were received by the
+ imuxsock object.
+
+Line 2: shows details for the main queue:
+
+- ``main Q``, an object
+- ``size``, messages in the queue
+- ``enqueued``, all received messages thus far
+- ``full``, how often the queue was full
+- ``maxqsize``, the maximum number of messages that have been in the
+ queue at the same time since rsyslog was started
+
+See Also
+========
+
+- `rsyslog statistics
+ counter <http://www.rsyslog.com/rsyslog-statistic-counter/>`_
+- `impstats delayed or
+ lost <http://www.rsyslog.com/impstats-delayed-or-lost/>`_ - cause and
+ cure
diff --git a/source/configuration/modules/imptcp.rst b/source/configuration/modules/imptcp.rst
new file mode 100644
index 0000000..e495155
--- /dev/null
+++ b/source/configuration/modules/imptcp.rst
@@ -0,0 +1,711 @@
+************************
+imptcp: Plain TCP Syslog
+************************
+
+=========================== ===========================================================================
+**Module Name:**  **imptcp**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+Provides the ability to receive syslog messages via plain TCP syslog.
+This is a specialised input plugin tailored for high performance on
+Linux. It will probably not run on any other platform. Also, it does not
+provide TLS services. Encryption can be provided by using
+`stunnel <rsyslog_stunnel.html>`_.
+
+This module has no limit on the number of listeners and sessions that
+can be used.
+
+
+Notable Features
+================
+
+- :ref:`imptcp-statistic-counter`
+- :ref:`error-messages`
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Module Parameters
+-----------------
+
+Threads
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "2", "no", "``$InputPTCPServerHelperThreads``"
+
+Number of helper worker threads to process incoming messages. These
+threads are utilized to pull data off the network. On a busy system,
+additional helper threads (but not more than there are CPUs/Cores)
+can help improve performance. The default value is two, which means
+there is a default thread count of three (the main input thread plus
+two helpers). No more than 16 threads can be set (if a higher value is
+configured, rsyslog falls back to 16).
+
+
+MaxSessions
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "none"
+
+Maximum number of open sessions allowed. This is inherited by each
+"input()" config; however, it is not a global maximum, but rather sets
+the default per input.
+
+A setting of zero or less than zero means no limit.
+
+ProcessOnPoller
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "none"
+
+Instructs imptcp to process messages on the poller thread
+opportunistically. This leads to a lower resource footprint (as the
+poller thread doubles as a message-processing thread too). "On" works
+best when imptcp is handling low ingestion rates.
+
+At high throughput, though, it causes polling delay (as the poller
+spends time processing messages, which keeps connections in the
+read-ready state longer than they need to be, filling the socket buffer
+and hence eventually applying backpressure).
+
+It defaults to allowing messages to be processed on the poller thread
+(for backward compatibility).
+
+
+Input Parameters
+----------------
+
+These parameters can be used with the "input()" statement. They apply to
+the input they are specified with.
+
+
+Port
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "``$InputPTCPServerRun``"
+
+Select a port to listen on. It is an error to specify
+both `path` and `port`.
+
+
+Path
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+A path on the filesystem for a unix domain socket. It is an error to specify
+both `path` and `port`.
+
+
+DiscardTruncatedMsg
+^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+When a message is split because it is too long, the second part is normally
+processed as the next message. This can cause problems. When this parameter
+is turned on, the part of the message after the truncation is discarded.
+
+FileOwner
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "UID", "system default", "no", "none"
+
+Set the file owner for the domain socket. The
+parameter is a user name, for which the userid is obtained by
+rsyslogd during startup processing. Interim changes to the user
+mapping are *not* detected.
+
+
+FileOwnerNum
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "system default", "no", "none"
+
+Set the file owner for the domain socket. The
+parameter is a numerical ID, which is used regardless of
+whether the user actually exists. This can be useful if the user
+mapping is not available to rsyslog during startup.
+
+
+FileGroup
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "GID", "system default", "no", "none"
+
+Set the group for the domain socket. The parameter is
+a group name, for which the groupid is obtained by rsyslogd during
+startup processing. Interim changes to the user mapping are not
+detected.
+
+
+FileGroupNum
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "system default", "no", "none"
+
+Set the group for the domain socket. The parameter is
+a numerical ID, which is used regardless of whether the group
+actually exists. This can be useful if the group mapping is not
+available to rsyslog during startup.
+
+
+FileCreateMode
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "octalNumber", "0644", "no", "none"
+
+Set the access permissions for the domain socket. The value given must
+always be a 4-digit octal number, with the initial digit being zero.
+Please note that the actual permissions depend on rsyslogd's process
+umask. If in doubt, use "$umask 0000" right at the beginning of the
+configuration file to remove any restrictions.
+
+
+FailOnChOwnFailure
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "none"
+
+Rsyslog will not start if this is on and changing the file owner, group,
+or access permissions fails. Disable this to ignore these errors.
+
+
+Unlink
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+If a unix domain socket is being used this controls whether or not the socket
+is unlinked before listening and after closing.
+
+
+Name
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "imptcp", "no", "``$InputPTCPServerInputName``"
+
+Sets a name for the inputname property. If no name is set "imptcp"
+is used by default. Setting a name is not strictly necessary, but can
+be useful to apply filtering based on which input the message was
+received from. Note that the name also shows up in
+:doc:`impstats <impstats>` logs.
+
+
+Ruleset
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "``$InputPTCPServerBindRuleset``"
+
+Binds specified ruleset to this input. If not set, the default
+ruleset is bound.
+
+
+MaxFrameSize
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "200000", "no", "none"
+
+When in octet counted mode, the frame size is given at the beginning
+of the message. With this parameter the maximum size this frame can have
+is specified, and when the frame gets too large the mode is switched to
+octet stuffing.
+An upper limit is enforced because otherwise the integer could become
+negative, which would result in a segmentation fault. (Max value:
+200000000)
+
+
+MaxSessions
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "none"
+
+Maximum number of open sessions allowed. If more TCP connections are
+created, rsyslog drops those connections. Warning: this defaults to 0,
+which means unlimited, so take care to set it if you have limited
+memory and/or processing power.
+
+A setting of zero or a negative integer means no limit.
+
+Address
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "``$InputPTCPServerListenIP``"
+
+On multi-homed machines, specifies to which local address the
+listener should be bound.
+
+
+AddtlFrameDelimiter
+^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "-1", "no", "``$InputPTCPServerAddtlFrameDelimiter``"
+
+This directive permits specifying an additional frame delimiter for
+plain tcp syslog. The industry standard specifies using the LF
+character as frame delimiter. Some vendors, notably Juniper in their
+NetScreen products, use an invalid frame delimiter, in Juniper's case
+the NUL character. This directive permits specifying the ASCII value
+of the delimiter in question. Please note that this does not
+guarantee that all wrong implementations can be cured with this
+directive. It is not even a sure fix with all versions of NetScreen,
+as I suspect the NUL character is the effect of a (common) coding
+error and thus will probably go away at some time in the future. But
+for the time being, the value 0 can probably be used to make rsyslog
+handle NetScreen's invalid syslog/tcp framing. For additional
+information, see this `forum
+thread <http://kb.monitorware.com/problem-with-netscreen-log-t1652.html>`_.
+**If this doesn't work for you, please do not blame the rsyslog team.
+Instead file a bug report with Juniper!**
+
+Note that a similar, but worse, issue exists with Cisco's IOS
+implementation. They do not use any framing at all. This is confirmed
+from Cisco's side, but there seems to be very limited interest in
+fixing this issue. This directive **cannot** fix the Cisco bug. That
+would require much larger code changes, which I was unable to do so
+far. Full details can be found at the `Cisco tcp syslog
+anomaly <http://www.rsyslog.com/Article321.phtml>`_ page.
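+
+For example, a hedged sketch that accepts NUL-delimited frames from such
+senders on a dedicated listener (the port number is an example only):
+
+.. code-block:: none
+
+ module(load="imptcp")
+ input(type="imptcp" port="10514" addtlFrameDelimiter="0")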
+
+
+SupportOctetCountedFraming
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "``$InputPTCPServerSupportOctetCountedFraming``"
+
+If set to "on", the legacy octet-counted framing (similar to RFC5425
+framing) is activated. This is the default and should be left
+unchanged unless you know very well what you are doing. It may be useful
+to turn it off if you know this framing is not used and some senders
+emit multi-line messages into the message stream.
+
+
+NotifyOnConnectionClose
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "``$InputPTCPServerNotifyOnConnectionClose``"
+
+Instructs imptcp to emit a message if a remote peer closes the
+connection.
+
+
+NotifyOnConnectionOpen
+^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+Instructs imptcp to emit a message if a remote peer opens a
+connection. Hostname of the remote peer is given in the message.
+
+
+KeepAlive
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "``$InputPTCPServerKeepAlive``"
+
+Enable or disable keep-alive packets at the tcp socket layer. The
+default is to disable them.
+
+
+KeepAlive.Probes
+^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "``$InputPTCPServerKeepAlive_probes``"
+
+The number of unacknowledged probes to send before considering the
+connection dead and notifying the application layer. The default, 0,
+means that the operating system defaults are used. This only has an
+effect if keep-alive is enabled. The functionality may not be
+available on all platforms.
+
+
+KeepAlive.Interval
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "``$InputPTCPServerKeepAlive_intvl``"
+
+The interval between subsequent keepalive probes, regardless of
+what the connection has exchanged in the meantime. The default, 0,
+means that the operating system defaults are used. This only has an
+effect if keep-alive is enabled. The functionality may not be
+available on all platforms.
+
+
+KeepAlive.Time
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "``$InputPTCPServerKeepAlive_time``"
+
+The interval between the last data packet sent (simple ACKs are not
+considered data) and the first keepalive probe; after the connection
+is marked to need keepalive, this counter is not used any further.
+The default, 0, means that the operating system defaults are used.
+This only has an effect if keep-alive is enabled. The functionality may
+not be available on all platforms.
+
+
+RateLimit.Interval
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "none"
+
+Specifies the rate-limiting interval in seconds. Set it to a number
+of seconds (5 recommended) to activate rate-limiting.
+
+
+RateLimit.Burst
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "10000", "no", "none"
+
+Specifies the rate-limiting burst in number of messages.
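+
+For example, a hedged sketch enabling rate limiting on a listener
+(the values shown are illustrative only):
+
+.. code-block:: none
+
+ input(type="imptcp" port="514" ratelimit.interval="5" ratelimit.burst="50000")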
+
+
+Compression.mode
+^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+This is the counterpart to the compression modes set in
+:doc:`omfwd <omfwd>`.
+Please see it's documentation for details.
+
+
+flowControl
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "none"
+
+Flow control is used to throttle the sender if the receiver queue is
+near-full, preserving some space for input that cannot be throttled.
+
+
+MultiLine
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+Experimental parameter which causes rsyslog to recognise a new message
+only if the line feed is followed by a '<' or if there are no more characters.
+
+
+framing.delimiter.regex
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "off", "no", "none"
+
+Experimental parameter. It is similar to "MultiLine", but provides greater
+control of when a log message ends. You can specify a regular expression that
+characterizes the header to expect at the start of the next message. As such,
+it indicates the end of the current message. For example, one can use this
+setting to use a RFC3164 header as frame delimiter::
+
+ framing.delimiter.regex="^<[0-9]{1,3}>(Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)"
+
+Note that when oversize messages arrive this mode may have problems finding
+the proper frame terminator. There are some provisions inside imptcp to make
+these kinds of problems unlikely, but if the messages are very much over the
+configured MaxMessageSize, imptcp emits an error message. Chances are good
+that it will properly recover from such a situation.
+
+
+SocketBacklog
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "5", "no", "none"
+
+Specifies the backlog parameter sent to the listen() function.
+It defines the maximum length to which the queue of pending connections may grow.
+See the listen(2) man page for more information.
+The parameter controls the backlog for both TCP and UNIX sockets.
+The default value is arbitrarily set to 5.
+
+
+Defaulttz
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+Sets the default time zone. At most seven characters are used, as we
+would otherwise overrun our buffer.
+
+
+Framingfix.cisco.asa
+^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+Cisco very occasionally sends a space after a line feed, which breaks framing
+if no special care is taken. When this parameter is set to "on", we permit a
+space *in front of the next frame* and ignore it.
+
+
+ListenPortFileName
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+.. versionadded:: 8.38.0
+
+With this parameter you can specify the name of a file into which the
+port imptcp listens on is written.
+This parameter was introduced because the testbench works with dynamic ports.
+
+
+.. _imptcp-statistic-counter:
+
+Statistic Counter
+=================
+
+This plugin maintains :doc:`statistics <../rsyslog_statistic_counter>` for each listener. The statistic is
+named "imptcp", followed by the bound address, listener port and IP
+version in parentheses. For example, the counter for a listener on port
+514, bound to all interfaces and listening on IPv6, is called
+"imptcp(\*/514/IPv6)".
+
+The following properties are maintained for each listener:
+
+- **submitted** - total number of messages submitted for processing since startup
+
+
+.. _error-messages:
+
+Error Messages
+==============
+
+When a message is too long, it is truncated and an error message shows the remaining length of the message and its beginning. This makes the truncation easier to comprehend.
+
+
+Caveats/Known Bugs
+==================
+
+- module always binds to all interfaces
+
+
+Examples
+========
+
+Example 1
+---------
+
+This sets up a TCP server on port 514:
+
+.. code-block:: none
+
+ module(load="imptcp") # needs to be done just once
+ input(type="imptcp" port="514")
+
+
+Example 2
+---------
+
+This creates a listener that listens on the local loopback
+interface only.
+
+.. code-block:: none
+
+ module(load="imptcp") # needs to be done just once
+ input(type="imptcp" port="514" address="127.0.0.1")
+
+
+Example 3
+---------
+
+Create a unix domain socket:
+
+.. code-block:: none
+
+ module(load="imptcp") # needs to be done just once
+ input(type="imptcp" path="/tmp/unix.sock" unlink="on")
+
+
diff --git a/source/configuration/modules/imrelp.rst b/source/configuration/modules/imrelp.rst
new file mode 100644
index 0000000..eb0d9c5
--- /dev/null
+++ b/source/configuration/modules/imrelp.rst
@@ -0,0 +1,595 @@
+*************************
+imrelp: RELP Input Module
+*************************
+
+=========================== ===========================================================================
+**Module Name:**  **imrelp**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+Provides the ability to receive syslog messages via the reliable RELP
+protocol. This module requires `librelp <http://www.librelp.com>`__ to
+be present on the system. From the user's point of view, imrelp works
+much like imtcp or imgssapi, except that no message loss can occur.
+Please note that with the currently supported RELP protocol version, a
+minor message duplication may occur if a network connection between the
+relp client and relp server breaks after the client could successfully
+send some messages but the server could not acknowledge them. The window
+of opportunity is very slim, but in theory this is possible. Future
+versions of RELP will prevent this. Please also note that rsyslogd may
+lose a few messages if rsyslog is shut down while a network connection to
+the server is broken and could not yet be recovered. Future versions of
+RELP support in rsyslog will prevent that issue. Please note that both
+scenarios also exist with plain TCP syslog. RELP, even with the small
+nits outlined above, is a much more reliable solution than plain TCP
+syslog and so it is highly suggested to use RELP instead of plain TCP.
+Clients send messages to the RELP server via omrelp.
+
+
+Notable Features
+================
+
+- :ref:`imrelp-statistic-counter`
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Module Parameters
+-----------------
+
+Ruleset
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "``$InputRELPServerBindRuleset``"
+
+.. versionadded:: 7.5.0
+
+Binds the specified ruleset to **all** RELP listeners. This can be
+overridden at the instance level.
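+
+For illustration, a sketch that routes all RELP traffic through a dedicated
+ruleset (the ruleset name and file path are placeholders):
+
+.. code-block:: none
+
+   module(load="imrelp" ruleset="relp")
+   ruleset(name="relp") {
+       action(type="omfile" file="/var/log/relp.log")
+   }
+   input(type="imrelp" port="2514")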
+
+tls.tlslib
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+.. versionadded:: 8.1903.0
+
+Permits specifying the TLS library used by librelp.
+All RELP protocol operations are actually performed by librelp and
+not rsyslog itself. The value specified is passed down directly to
+librelp. Depending on librelp version and build parameters, the supported
+TLS libraries differ (or TLS may not be supported at all). If an unsupported
+library is selected, rsyslog emits an error message.
+
+Usually, the following options should be available: "openssl", "gnutls".
+
+Note that "gnutls" is the current default for historic reasons. We actually
+recommend to use "openssl". It provides better error messages and accepts
+a wider range of certificate types.
+
+If you have problems with the default setting, we recommend to switch to
+"openssl".
+
+
+Input Parameters
+----------------
+
+Port
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "yes", "``$InputRELPServerRun``"
+
+Starts a RELP server on the selected port.
+
+Address
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+.. versionadded:: 8.37.0
+
+Bind the RELP server to that address. If not specified, the server will be
+bound to the wildcard address.
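+
+For example, to accept RELP traffic on the loopback interface only:
+
+.. code-block:: none
+
+   input(type="imrelp" port="2514" address="127.0.0.1")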
+
+Name
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "imrelp", "no", "none"
+
+
+Ruleset
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+Binds specified ruleset to this listener. This overrides the
+module-level Ruleset parameter.
+
+
+MaxDataSize
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "size_nbr", ":doc:`global(maxMessageSize) <../../rainerscript/global>`", "no", "none"
+
+Sets the max message size (in bytes) that can be received. Messages that
+are too long are handled as specified in parameter oversizeMode. Note that
+maxDataSize cannot be smaller than the global parameter maxMessageSize.
+
+
+TLS
+^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+If set to "on", the RELP connection will be encrypted by TLS, so
+that the data is protected against observers. Please note that both
+the client and the server must have set TLS to either "on" or "off".
+Other combinations lead to unpredictable results.
+
+*Attention when using GnuTLS 2.10.x or older*
+
+Versions older than GnuTLS 2.10.x may cause a crash (Segfault) under
+certain circumstances, most likely when an imrelp input and an
+omrelp output are configured. The crash may happen when you are
+receiving/sending messages at the same time. Upgrade to a newer
+version like GnuTLS 2.12.21 to solve the problem.
+
+
+TLS.Compression
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+This controls whether the TLS stream should be compressed (zipped). While
+this increases CPU use, the network bandwidth should be reduced. Note
+that typical text-based log records usually compress rather well.
+
+
+TLS.dhbits
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "none"
+
+This setting controls how many bits are used for Diffie-Hellman key
+generation. If not set, the librelp default is used. For security
+reasons, at least 1024 bits should be used. Please note that the
+number of bits must be supported by GnuTLS. If an invalid number is
+given, rsyslog will report an error when the listener is started. We
+do this to be transparent to changes/upgrades in GnuTLS (to check at
+config processing time, we would need to hardcode the supported bits
+and keep them in sync with GnuTLS - this is even impossible when
+custom GnuTLS changes are made...).
+
+
+TLS.PermittedPeer
+^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "array", "none", "no", "none"
+
+PermittedPeer places access restrictions on this listener. Only peers which
+have been listed in this parameter may connect. The certificate presented
+by the remote peer is used for its validation.
+
+The *peer* parameter lists permitted certificate fingerprints. Note
+that it is an array parameter, so either a single or multiple
+fingerprints can be listed. When a non-permitted peer connects, the
+refusal is logged together with its fingerprint. So if the
+administrator knows this was a valid request, the fingerprint can simply be
+added by copy and paste from the logfile to rsyslog.conf.
+
+To specify multiple fingerprints, just enclose them in braces like
+this:
+
+.. code-block:: none
+
+ tls.permittedPeer=["SHA1:...1", "SHA1:....2"]
+
+To specify just a single peer, you can either specify the string
+directly or enclose it in braces. You may also use wildcards to match
+a larger number of permitted peers, e.g. ``*.example.com``.
+
+When using wildcards to match larger number of permitted peers, please
+know that the implementation is similar to Syslog RFC5425 which means:
+This wildcard matches any left-most DNS label in the server name.
+That is, the subject ``*.example.com`` matches the server names ``a.example.com``
+and ``b.example.com``, but does not match ``example.com`` or ``a.b.example.com``.
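+
+For example, to permit any host directly below the example.com domain:
+
+.. code-block:: none
+
+   tls.permittedPeer=["*.example.com"]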
+
+
+TLS.AuthMode
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+Sets the mode used for mutual authentication.
+
+Supported values are either "*fingerprint*\ " or "*name*\ ".
+
+Fingerprint mode basically is what SSH does. It does not require a
+full PKI to be present, instead self-signed certs can be used on all
+peers. Even if a CA certificate is given, the validity of the peer
+cert is NOT verified against it. Only the certificate fingerprint
+counts.
+
+In "name" mode, certificate validation happens. Here, the matching is
+done against the certificate's subjectAltName and, as a fallback, the
+subject common name. If the certificate contains multiple names, a
+match on any one of these names is considered good and permits the
+peer to talk to rsyslog.
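+
+A sketch of a fingerprint-mode listener; the certificate paths are the same
+placeholders used in the examples below and the fingerprint value is elided:
+
+.. code-block:: none
+
+   input(type="imrelp" port="2514"
+         tls="on"
+         tls.cacert="/tls-certs/ca.pem"
+         tls.mycert="/tls-certs/cert.pem"
+         tls.myprivkey="/tls-certs/key.pem"
+         tls.authmode="fingerprint"
+         tls.permittedpeer=["SHA1:..."])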
+
+
+About Chained Certificates
+--------------------------
+
+.. versionadded:: 8.2008.0
+
+With librelp 1.7.0, you can use chained certificates.
+If using "openssl" as tls.tlslib, we recommend at least OpenSSL Version 1.1
+or higher. Chained certificates will also work with OpenSSL Version 1.0.2, but
+they will be loaded into the main OpenSSL context object, making them available
+to all librelp instances (omrelp/imrelp) within the same process.
+
+If this is not desired, you will need to run multiple rsyslog instances
+with different omrelp configurations and certificates.
+
+
+TLS.CaCert
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+The CA certificate that is being used to verify the client certificates.
+Has to be configured if TLS.AuthMode is set to "*fingerprint*\ " or "*name*\ ".
+
+
+TLS.MyCert
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+The machine certificate that is being used for TLS communication.
+
+
+TLS.MyPrivKey
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+The machine private key for the configured TLS.MyCert.
+
+
+TLS.PriorityString
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+This parameter allows passing the so-called "priority string" to
+GnuTLS. This string gives complete control over all crypto
+parameters, including compression settings. For this reason, when the
+prioritystring is specified, the "tls.compression" parameter has no
+effect and is ignored.
+
+Full information about how to construct a priority string can be
+found in the GnuTLS manual. At the time of writing, this
+information was contained in `section 6.10 of the GnuTLS
+manual <http://gnutls.org/manual/html_node/Priority-Strings.html>`_.
+
+**Note: this is an expert parameter.** Do not use if you do not
+exactly know what you are doing.
+
+tls.tlscfgcmd
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+.. versionadded:: 8.2001.0
+
+The setting can be used if tls.tlslib is set to "openssl" to pass configuration commands to
+the OpenSSL library.
+OpenSSL Version 1.0.2 or higher is required for this feature.
+A list of possible commands and their valid values can be found in the documentation:
+https://www.openssl.org/docs/man1.0.2/man3/SSL_CONF_cmd.html
+
+The setting can be single or multiline; each configuration command is separated by a linefeed (\n).
+Command and value are separated by an equal sign (=). Here are a few samples:
+
+Example 1
+---------
+
+This will allow all protocols except for SSLv2 and SSLv3:
+
+.. code-block:: none
+
+ tls.tlscfgcmd="Protocol=ALL,-SSLv2,-SSLv3"
+
+
+Example 2
+---------
+
+This will allow all protocols except for SSLv2, SSLv3 and TLSv1.
+It will also set the minimum protocol to TLSv1.2.
+
+.. code-block:: none
+
+ tls.tlscfgcmd="Protocol=ALL,-SSLv2,-SSLv3,-TLSv1
+ MinProtocol=TLSv1.2"
+
+
+KeepAlive
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+Enable or disable keep-alive packets at the TCP socket layer. By
+default, keep-alives are disabled.
+
+
+KeepAlive.Probes
+^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "none"
+
+The number of keep-alive probes to send before considering the
+connection dead and notifying the application layer. The default, 0,
+means that the operating system defaults are used. This only has an
+effect if keep-alives are enabled. The functionality may not be
+available on all platforms.
+
+
+KeepAlive.Interval
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "none"
+
+The interval between subsequent keep-alive probes, regardless of what
+the connection has exchanged in the meantime. The default, 0,
+means that the operating system defaults are used. This only has an effect
+if keep-alive is enabled. The functionality may not be available on all
+platforms.
+
+
+KeepAlive.Time
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "none"
+
+The interval between the last data packet sent (simple ACKs are not
+considered data) and the first keepalive probe; after the connection
+is marked with keep-alive, this counter is not used any further.
+The default, 0, means that the operating system defaults are used.
+This only has an effect if keep-alive is enabled. The functionality may
+not be available on all platforms.
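+
+A sketch that enables keep-alive and tunes the probe behaviour (the numeric
+values are placeholders, not recommendations):
+
+.. code-block:: none
+
+   input(type="imrelp" port="2514"
+         keepAlive="on"
+         keepAlive.probes="3"
+         keepAlive.time="60"
+         keepAlive.interval="10")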
+
+
+oversizeMode
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "truncate", "no", "none"
+
+.. versionadded:: 8.35.0
+
+This parameter specifies how messages that are too long will be handled.
+A message counts as too long if it exceeds the value of the maxDataSize parameter.
+
+- truncate: Messages will be truncated to the maximum message size.
+- abort: This was the behaviour up to version 8.35.0. Upon receiving a
+  message that is too long, imrelp will abort.
+- accept: Messages will be accepted even if they are too long, and an error
+  message will be output. Using this option does have associated risks.
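+
+For example, to state the default truncation behaviour explicitly while
+raising the maximum data size (the size is a placeholder):
+
+.. code-block:: none
+
+   input(type="imrelp" port="2514" maxDataSize="64k" oversizeMode="truncate")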
+
+
+flowControl
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "light", "no", "none"
+
+.. versionadded:: 8.1911.0
+
+
+This parameter permits fine-tuning of the flow control behaviour.
+Possible values are "no", "light", and "full", with "light" being the default
+and previously the only value.
+
+Changing the flow control setting may be useful for some rare applications;
+this is an advanced setting and should only be changed if you know what you
+are doing. Most importantly, **rsyslog may block incoming data and become
+unresponsive if you change flowControl to "full"**. While this may be a
+desired effect when intentionally trying to make it most unlikely that
+rsyslog needs to lose/discard messages, usually this is not what you want.
+
+General rule of thumb: **if you do not fully understand what this description
+talks about, leave the parameter at its default value**.
+
+This part of the
+documentation is intentionally brief, as one needs to have deep understanding
+of rsyslog to evaluate usage of this parameter. If someone has the insight,
+the meaning of this parameter is crystal-clear. If not, that someone will
+most likely make the wrong decision when changing this parameter away
+from the default value.
+
+
+.. _imrelp-statistic-counter:
+
+Statistic Counter
+=================
+
+This plugin maintains :doc:`statistics <../rsyslog_statistic_counter>` for each listener.
+By default, the statistic is named "imrelp", followed by the listener port in
+parentheses. For example, the counter for a listener on port 514 is called "imrelp(514)".
+If the input is given a name, that input name is used instead of "imrelp". This counter is
+available starting with rsyslog 7.5.1.
+
+The following properties are maintained for each listener:
+
+- **submitted** - total number of messages submitted for processing since startup
+
+
+Caveats/Known Bugs
+==================
+
+- see description
+- To obtain the remote system's IP address, you need to have at least
+ librelp 1.0.0 installed. Versions below it return the hostname
+ instead of the IP address.
+
+
+Examples
+========
+
+Example 1
+---------
+
+This sets up a RELP server on port 2514 with a max message size of 10,000 bytes.
+
+.. code-block:: none
+
+ module(load="imrelp") # needs to be done just once
+ input(type="imrelp" port="2514" maxDataSize="10k")
+
+
+
+Receive RELP traffic via TLS
+----------------------------
+
+This receives RELP traffic via TLS using the recommended "openssl" library.
+Except for encryption support the scenario is the same as in Example 1.
+
+Certificate files must exist at configured locations. Note that authmode
+"certvalid" is not very strong - you may want to use a different one for
+actual deployments. For details, see parameter descriptions.
+
+.. code-block:: none
+
+ module(load="imrelp" tls.tlslib="openssl")
+ input(type="imrelp" port="2514" maxDataSize="10k"
+ tls="on"
+ tls.cacert="/tls-certs/ca.pem"
+ tls.mycert="/tls-certs/cert.pem"
+ tls.myprivkey="/tls-certs/key.pem"
+ tls.authmode="certvalid"
+ tls.permittedpeer="rsyslog")
+
diff --git a/source/configuration/modules/imsolaris.rst b/source/configuration/modules/imsolaris.rst
new file mode 100644
index 0000000..f215714
--- /dev/null
+++ b/source/configuration/modules/imsolaris.rst
@@ -0,0 +1,59 @@
+*******************************
+imsolaris: Solaris Input Module
+*******************************
+
+=========================== ===========================================================================
+**Module Name:**  **imsolaris**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+Reads local Solaris log messages including the kernel log.
+
+This module is specifically tailored for Solaris. Under Solaris, there
+is no special kernel input device. Instead, both kernel messages as well
+as messages emitted via syslog() are received from a single source.
+
+This module obeys the Solaris door() mechanism to detect a running
+syslogd instance. As such, only one can be active at one time. If it
+detects another active instance at startup, the module disables itself,
+but rsyslog will continue to run.
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+|FmtObsoleteName| Directives
+----------------------------
+
+$IMSolarisLogSocketName <name>
+   This is the name of the log socket (stream) to read. If not given,
+   /dev/log is read.
+
+
+Caveats/Known Bugs
+==================
+
+None currently known. For obvious reasons, works on Solaris, only (and
+compilation will most probably fail on any other platform).
+
+
+Examples
+========
+
+The following sample pulls messages from the default log source:
+
+.. code-block:: none
+
+ $ModLoad imsolaris
+
+
diff --git a/source/configuration/modules/imtcp.rst b/source/configuration/modules/imtcp.rst
new file mode 100644
index 0000000..052fa6b
--- /dev/null
+++ b/source/configuration/modules/imtcp.rst
@@ -0,0 +1,1086 @@
+******************************
+imtcp: TCP Syslog Input Module
+******************************
+
+=========================== ===========================================================================
+**Module Name:**  **imtcp**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+Provides the ability to receive syslog messages via TCP. Encryption is
+natively provided by selecting the appropriate network stream driver
+and can also be provided by using `stunnel <rsyslog_stunnel.html>`_ (an
+alternative is the use the `imgssapi <imgssapi.html>`_ module).
+
+
+Notable Features
+================
+
+- :ref:`imtcp-statistic-counter`
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Module Parameters
+-----------------
+
+AddtlFrameDelimiter
+^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "-1", "no", "``$InputTCPServerAddtlFrameDelimiter``"
+
+.. versionadded:: 4.3.1
+
+This directive permits specifying an additional frame delimiter for plain
+TCP syslog reception. The default of -1 means that no additional delimiter
+is used. Multiple receivers may be configured by specifying
+$InputTCPServerRun multiple times.
+
+
+DisableLFDelimiter
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "``$InputTCPServerDisableLFDelimiter``"
+
+Industry-standard plain text tcp syslog uses the LF to delimit
+syslog frames. However, some users brought up the case that it may be
+useful to define a different delimiter and totally disable LF as a
+delimiter (the use case named was multi-line messages). This mode is
+non-standard and will probably come with a lot of problems. However,
+as there is need for it and it is relatively easy to support, we do
+so. Be sure to turn this setting to "on" only if you exactly know
+what you are doing. You may run into all sorts of troubles, so be
+prepared to wrangle with that!
+
+
+MaxFrameSize
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "200000", "no", "none"
+
+When in octet-counted mode, the frame size is given at the beginning
+of the message. This parameter specifies the maximum size a frame can
+have; when a frame gets too large, the mode is switched to
+octet stuffing.
+An upper limit of 200000000 is enforced because a larger value could make
+the internal integer overflow into a negative number and cause a
+segmentation fault.
+
+
+NotifyOnConnectionOpen
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+Instructs imtcp to emit a message if the remote peer opens a
+connection.
+
+
+NotifyOnConnectionClose
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "``$InputTCPServerNotifyOnConnectionClose``"
+
+Instructs imtcp to emit a message if the remote peer closes a
+connection.
+
+
+
+KeepAlive
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "``$InputTCPServerKeepAlive``"
+
+Enable or disable keep-alive packets at the tcp socket layer. The
+default is to disable them.
+
+
+KeepAlive.Probes
+^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "``$InputTCPServerKeepAlive_probes``"
+
+The number of unacknowledged probes to send before considering the
+connection dead and notifying the application layer. The default, 0,
+means that the operating system defaults are used. This only has an
+effect if keep-alive is enabled. The functionality may not be
+available on all platforms.
+
+
+KeepAlive.Time
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "``$InputTCPServerKeepAlive_time``"
+
+The interval between the last data packet sent (simple ACKs are not
+considered data) and the first keepalive probe; after the connection
+is marked to need keepalive, this counter is not used any further.
+The default, 0, means that the operating system defaults are used.
+This only has an effect if keep-alive is enabled. The functionality may
+not be available on all platforms.
+
+
+KeepAlive.Interval
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", ""
+
+.. versionadded:: 8.2106.0
+
+The interval between keep-alive packets.
+
+
+
+
+FlowControl
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "``$InputTCPFlowControl``"
+
+This setting specifies whether some message flow control shall be
+exercised on the related TCP input. If set to on, messages are
+handled as "light delayable", which means the sender is throttled a
+bit when the queue becomes near-full. This is done in order to
+preserve some queue space for inputs that can not throttle (like
+UDP), but it may have some undesired effect in some configurations.
+Still, we consider this as a useful setting and thus it is the
+default. To turn the handling off, simply configure that explicitly.
+
+
+MaxListeners
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "20", "no", "``$InputTCPMaxListeners``"
+
+Sets the maximum number of listeners (server ports) supported.
+This must be set before the first $InputTCPServerRun directive.
+
+
+MaxSessions
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "200", "no", "``$InputTCPMaxSessions``"
+
+Sets the maximum number of sessions supported. This must be set
+before the first $InputTCPServerRun directive.
+
+
+StreamDriver.Name
+^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+Selects :doc:`network stream driver <../../concepts/netstrm_drvr>`
+for all inputs using this module.
+
+
+StreamDriver.Mode
+^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "``$InputTCPServerStreamDriverMode``"
+
+Sets the driver mode for the currently selected
+:doc:`network stream driver <../../concepts/netstrm_drvr>`.
+The numeric mode value is driver-specific.
+
+
+StreamDriver.AuthMode
+^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "``$InputTCPServerStreamDriverAuthMode``"
+
+Sets the stream driver authentication mode. Possible values and their meaning
+depend on the
+:doc:`network stream driver <../../concepts/netstrm_drvr>`
+used.
+
+
+StreamDriver.PermitExpiredCerts
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "warn", "no", "none"
+
+Controls how expired certificates will be handled when the stream driver is in TLS mode.
+It can have one of the following values:
+
+- on = Expired certificates are allowed
+
+- off = Expired certificates are not allowed (Default, changed from warn to off since Version 8.2012.0)
+
+- warn = Expired certificates are allowed, but a warning will be logged
+
+
+StreamDriver.CheckExtendedKeyPurpose
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+Whether to also check the purpose value in the extended fields part of the
+certificate for compatibility with rsyslog operation. (driver-specific)
+
+
+StreamDriver.PrioritizeSAN
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+Whether to use stricter SAN/CN matching. (driver-specific)
+
+
+StreamDriver.TlsVerifyDepth
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "TLS library default", "no", "none"
+
+
+Specifies the allowed maximum depth for the certificate chain verification.
+Support added in v8.2001.0, supported by GTLS and OpenSSL driver.
+If not set, the API default will be used.
+For OpenSSL, the default is 100 - see the doc for more:
+https://www.openssl.org/docs/man1.1.1/man3/SSL_set_verify_depth.html
+For GnuTLS, the default is 5 - see the doc for more:
+https://www.gnutls.org/manual/gnutls.html
+
+
+PermittedPeer
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "array", "none", "no", "``$InputTCPServerStreamDriverPermittedPeer``"
+
+Sets permitted peer IDs. Only these peers are able to connect to
+the listener. The semantics of the ID string depend on the currently
+selected AuthMode and
+:doc:`network stream driver <../../concepts/netstrm_drvr>`.
+PermittedPeer may not be set in anonymous modes. PermittedPeer may
+be set either to a single peer or an array of peers, either of type
+IP or name, depending on the TLS certificate.
+
+Single peer:
+
+.. code-block:: none
+
+   PermittedPeer="127.0.0.1"
+
+Array of peers:
+
+.. code-block:: none
+
+   PermittedPeer=["test1.example.net","10.1.2.3","test2.example.net","..."]
+
+
+DiscardTruncatedMsg
+^^^^^^^^^^^^^^^^^^^
+
+Normally when a message is truncated in octet stuffing mode the part that
+is cut off is processed as the next message. When this parameter is activated,
+the part that is cut off after a truncation is discarded and not processed.
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+
+gnutlsPriorityString
+^^^^^^^^^^^^^^^^^^^^
+
+The "gnutls priority string" parameter in rsyslog offers enhanced
+customization for secure communications, allowing detailed configuration
+of TLS driver properties. This includes specifying handshake algorithms
+and other settings for GnuTLS, as well as implementing OpenSSL
+configuration commands. Initially developed for GnuTLS, the "gnutls
+priority string" has evolved since version v8.1905.0 to also support
+OpenSSL, broadening its application and utility in network security
+configurations. This update signifies a key advancement in rsyslog's
+capabilities, making the "gnutls priority string" an essential
+feature for advanced TLS configuration.
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+.. versionadded:: 8.29.0
+
+
+**Configuring Driver-Specific Properties**
+
+This configuration string is used to set properties specific to different drivers. Originally designed for the GnuTLS driver, it has been extended to support OpenSSL configuration commands from version v8.1905.0 onwards.
+
+**GNUTLS Configuration**
+
+In GNUTLS, this setting determines the handshake algorithms and options for the TLS session. It's designed to allow user overrides of the library's default settings. If you leave this parameter unset (NULL), the system will revert to the default settings. For more detailed information on priority strings in GNUTLS, you can refer to the GnuTLS Priority Strings Documentation available at [GnuTLS Website](https://gnutls.org/manual/html_node/Priority-Strings.html).
+
+**OpenSSL Configuration**
+
+This feature is compatible with OpenSSL Version 1.0.2 and above. It enables the passing of configuration commands to the OpenSSL library. You can find a comprehensive list of commands and their acceptable values in the OpenSSL Documentation, accessible at [OpenSSL Documentation](https://www.openssl.org/docs/man1.0.2/man3/SSL_CONF_cmd.html).
+
+**General Configuration Guidelines**
+
+The configuration can be formatted as a single line or across multiple lines. Each command within the configuration is separated by a linefeed (`\n`). To differentiate between a command and its corresponding value, use an equal sign (`=`). Below are some examples to guide you in formatting these commands.
+
+
+Example 1
+---------
+
+This will allow all protocols except for SSLv2 and SSLv3:
+
+.. code-block:: none
+
+ gnutlsPriorityString="Protocol=ALL,-SSLv2,-SSLv3"
+
+
+Example 2
+---------
+
+This will allow all protocols except for SSLv2, SSLv3 and TLSv1.
+It will also set the minimum protocol to TLSv1.2.
+
+.. code-block:: none
+
+ gnutlsPriorityString="Protocol=ALL,-SSLv2,-SSLv3,-TLSv1
+ MinProtocol=TLSv1.2"
+
+
+PreserveCase
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "boolean", "on", "no", "none"
+
+.. versionadded:: 8.37.0
+
+This parameter controls the case of the fromhost property. If preserveCase is set to "off", the case in fromhost is not preserved; e.g., a message received from 'Host1.Example.Org' gets the fromhost value 'host1.example.org'. The default is "on" for backward compatibility.
+
+
+Input Parameters
+----------------
+
+Port
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "yes", "``$InputTCPServerRun``"
+
+Starts a TCP server on the selected port. If port zero is selected, the OS automatically
+assigns a free port. Use `listenPortFileName` in this case to obtain the information
+of which port was assigned.
+
+
+ListenPortFileName
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+.. versionadded:: 8.38.0
+
+This parameter specifies a file name into which the port number this input listens
+on is written. It is primarily intended for cases when `port` is set to 0 to let
+the OS automatically assign a free port number. This parameter was introduced
+because the testbench works with dynamic ports.
+
+.. note::
+
+   If this parameter is set, 0 will be accepted as the port. Otherwise it
+   is automatically changed to port 514.
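+
+A sketch with a placeholder file name:
+
+.. code-block:: none
+
+   module(load="imtcp")
+   input(type="imtcp" port="0" listenPortFileName="/var/run/imtcp-port")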
+
+
+Address
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+On multi-homed machines, specifies to which local address the
+listener should be bound.
+
+
+Name
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "imtcp", "no", "``$InputTCPServerInputName``"
+
+Sets a name for the inputname property. If no name is set "imtcp" is
+used by default. Setting a name is not strictly necessary, but can be
+useful to apply filtering based on which input the message was
+received from.
+
+
+Ruleset
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "``$InputTCPServerBindRuleset``"
+
+Binds the listener to a specific :doc:`ruleset <../../concepts/multi_ruleset>`.
+
+
+SupportOctetCountedFraming
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "``$InputTCPServerSupportOctetCountedFraming``"
+
+If set to "on", the legacy octed-counted framing (similar to RFC5425
+framing) is activated. This should be left unchanged until you know
+very well what you do. It may be useful to turn it off, if you know
+this framing is not used and some senders emit multi-line messages
+into the message stream.
+
+
+RateLimit.Interval
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "none"
+
+Specifies the rate-limiting interval in seconds. Default value is 0,
+which turns off rate limiting. Set it to a number of seconds (5
+recommended) to activate rate-limiting.
+
+
+RateLimit.Burst
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "10000", "no", "none"
+
+Specifies the rate-limiting burst in number of messages. Default is
+10,000.
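+
+A sketch that activates rate limiting on a listener (the values are
+placeholders, not recommendations):
+
+.. code-block:: none
+
+   input(type="imtcp" port="514" rateLimit.interval="5" rateLimit.burst="50000")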
+
+
+StreamDriver.Name
+^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "module parameter", "no", "none"
+
+.. versionadded:: 8.2106.0
+
+This permits to override the equally-named module parameter on the input()
+level. For further details, see the module parameter.
+
+
+StreamDriver.Mode
+^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "module parameter", "no", "``$InputTCPServerStreamDriverMode``"
+
+.. versionadded:: 8.2106.0
+
+This permits to override the equally-named module parameter on the input()
+level. For further details, see the module parameter.
+
+
+StreamDriver.AuthMode
+^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "module parameter", "no", "``$InputTCPServerStreamDriverAuthMode``"
+
+.. versionadded:: 8.2106.0
+
+This permits to override the equally-named module parameter on the input()
+level. For further details, see the module parameter.
+
+
+StreamDriver.PermitExpiredCerts
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "module parameter", "no", "none"
+
+.. versionadded:: 8.2106.0
+
+This permits to override the equally-named module parameter on the input()
+level. For further details, see the module parameter.
+
+
+StreamDriver.CheckExtendedKeyPurpose
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "module parameter", "no", "none"
+
+.. versionadded:: 8.2106.0
+
+This permits to override the equally-named module parameter on the input()
+level. For further details, see the module parameter.
+
+
+StreamDriver.PrioritizeSAN
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "module parameter", "no", "none"
+
+.. versionadded:: 8.2106.0
+
+This permits to override the equally-named module parameter on the input()
+level. For further details, see the module parameter.
+
+
+StreamDriver.TlsVerifyDepth
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "module parameter", "no", "none"
+
+.. versionadded:: 8.2106.0
+
+This permits to override the equally-named module parameter on the input()
+level. For further details, see the module parameter.
+
+
+streamDriver.CAFile
+^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "global parameter", "no", "none"
+
+.. versionadded:: 8.2108.0
+
+This permits to override the DefaultNetstreamDriverCAFile global parameter on the input()
+level. For further details, see the global parameter.
+
+streamDriver.CRLFile
+^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "optional", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "global parameter", "no", "none"
+
+.. versionadded:: 8.2308.0
+
+This permits to override the CRL (Certificate revocation list) file set via `global()` config
+object on a per-input basis. This parameter is ignored if the netstream driver and/or its
+mode does not need or support certificates.
+
+streamDriver.KeyFile
+^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "global parameter", "no", "none"
+
+.. versionadded:: 8.2108.0
+
+This permits to override the DefaultNetstreamDriverKeyFile global parameter on the input()
+level. For further details, see the global parameter.
+
+
+streamDriver.CertFile
+^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "global parameter", "no", "none"
+
+.. versionadded:: 8.2108.0
+
+This permits to override the DefaultNetstreamDriverCertFile global parameter on the input()
+level. For further details, see the global parameter.
+
+
+PermittedPeer
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "array", "none", "no", "equally-named module parameter"
+.. versionadded:: 8.2112.0
+
+This permits to override the equally-named module parameter on the input()
+level. For further details, see the module parameter.
+
+
+gnutlsPriorityString
+^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "module parameter", "no", "none"
+.. versionadded:: 8.2106.0
+
+This permits to override the equally-named module parameter on the input()
+level. For further details, see the module parameter.
+
+
+MaxSessions
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "module parameter", "no", ""
+
+.. versionadded:: 8.2106.0
+
+This permits to override the equally-named module parameter on the input()
+level. For further details, see the module parameter.
+
+
+MaxListeners
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "module parameter", "no", ""
+
+.. versionadded:: 8.2106.0
+
+This permits to override the equally-named module parameter on the input()
+level. For further details, see the module parameter.
+
+
+FlowControl
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "module parameter", "no", ""
+
+.. versionadded:: 8.2106.0
+
+This permits to override the equally-named module parameter on the input()
+level. For further details, see the module parameter.
+
+
+DisableLFDelimiter
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "module parameter", "no", ""
+
+
+.. versionadded:: 8.2106.0
+
+This permits to override the equally-named module parameter on the input()
+level. For further details, see the module parameter.
+
+
+DiscardTruncatedMsg
+^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "module parameter", "no", "none"
+
+.. versionadded:: 8.2106.0
+
+This permits to override the equally-named module parameter on the input()
+level. For further details, see the module parameter.
+
+
+NotifyOnConnectionClose
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "module parameter", "no", "none"
+
+.. versionadded:: 8.2106.0
+
+This permits to override the equally-named module parameter on the input()
+level. For further details, see the module parameter.
+
+
+AddtlFrameDelimiter
+^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "module parameter", "no", ""
+
+.. versionadded:: 8.2106.0
+
+This permits to override the equally-named module parameter on the input()
+level. For further details, see the module parameter.
+
+
+MaxFrameSize
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "module parameter", "no", "none"
+
+.. versionadded:: 8.2106.0
+
+This permits to override the equally-named module parameter on the input()
+level. For further details, see the module parameter.
+
+
+PreserveCase
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "boolean", "module parameter", "no", "none"
+
+.. versionadded:: 8.2106.0
+
+This permits to override the equally-named module parameter on the input()
+level. For further details, see the module parameter.
+
+
+KeepAlive
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "module parameter", "no", ""
+
+.. versionadded:: 8.2106.0
+
+This permits to override the equally-named module parameter on the input()
+level. For further details, see the module parameter.
+
+
+KeepAlive.Probes
+^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "module parameter", "no", ""
+
+.. versionadded:: 8.2106.0
+
+This permits to override the equally-named module parameter on the input()
+level. For further details, see the module parameter.
+
+
+KeepAlive.Time
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "module parameter", "no", ""
+
+.. versionadded:: 8.2106.0
+
+This permits to override the equally-named module parameter on the input()
+level. For further details, see the module parameter.
+
+
+KeepAlive.Interval
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "module parameter", "no", ""
+
+.. versionadded:: 8.2106.0
+
+This permits to override the equally-named module parameter on the input()
+level. For further details, see the module parameter.
+
+
+
+.. _imtcp-statistic-counter:
+
+Statistic Counter
+=================
+
+This plugin maintains :doc:`statistics <../rsyslog_statistic_counter>` for each listener. The statistic is named
+after the given input name (or "imtcp" if none is configured), followed by
+the listener port in parentheses. For example, the counter for a listener
+on port 514 with no set name is called "imtcp(514)".
+
+The following properties are maintained for each listener:
+
+- **submitted** - total number of messages submitted for processing since startup
+
+
+Caveats/Known Bugs
+==================
+
+- module always binds to all interfaces
+- can not be loaded together with `imgssapi <imgssapi.html>`_ (which
+ includes the functionality of imtcp)
+
+
+Examples
+========
+
+Example 1
+---------
+
+This sets up a TCP server on port 514 and permits it to accept up to 500
+connections:
+
+.. code-block:: none
+
+ module(load="imtcp" MaxSessions="500")
+ input(type="imtcp" port="514")
+
+
+Note that the global parameters (here: max sessions) need to be set when
+the module is loaded. Otherwise, the parameters will not apply.
+
+
+Additional Resources
+====================
+
+- `rsyslog video tutorial on how to store remote messages in a separate file <http://www.rsyslog.com/howto-store-remote-messages-in-a-separate-file/>`_ (for legacy syntax, but you get the idea).
+
diff --git a/source/configuration/modules/imtuxedoulog.rst b/source/configuration/modules/imtuxedoulog.rst
new file mode 100644
index 0000000..56f72a9
--- /dev/null
+++ b/source/configuration/modules/imtuxedoulog.rst
@@ -0,0 +1,146 @@
+**************************************
+imtuxedoulog: Tuxedo ULOG input module
+**************************************
+
+================ ==============================================================
+**Module Name:** **imtuxedoulog**
+**Authors:** Jean-Philippe Hilaire <jean-philippe.hilaire@pmu.fr> & Philippe Duveau <philippe.duveau@free.fr>
+================ ==============================================================
+
+
+Purpose
+=======
+
+This module allows rsyslog to process Tuxedo ULOG files.
+Tuxedo creates a new ULOG file for each day; the file name is built from
+
+- a prefix configured in the Tuxedo configuration
+
+- a suffix based on the date (".MMDDYY")
+
+This module is a copy of the polling mode of imfile, but the file name is
+recomputed on each polling cycle. The previous file is closed to limit the
+number of file descriptors open simultaneously.
+
+Another particularity of ULOG is that its lines contain only the time of
+day. The module therefore combines the date from the file name with the time
+from the log line to build the message timestamp.
+
+Compile
+=======
+
+To compile the imtuxedoulog module, pass the corresponding option to configure:
+
+.. code-block:: none
+
+   ./configure --enable-imtuxedoulog ...
+
+Configuration Parameters
+========================
+
+Action Parameters
+-----------------
+
+ulogbase
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "yes", "path of ULOG file",
+
+Path of the ULOG file as it is defined by ULOGPFX in the Tuxedo configuration.
+A dot and the date are appended to build the full file path.
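+
+A minimal sketch (the ULOG prefix path and the tag are placeholders):
+
+.. code-block:: none
+
+   module(load="imtuxedoulog")
+   input(type="imtuxedoulog" ulogbase="/home/tuxedo/logs/ULOG" tag="tuxedo:")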
+
+Tag
+^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "yes", ,"none"
+
+The tag to be assigned to messages read from this file. If you would like to
+see the colon after the tag, you need to include it when you assign a tag
+value, like so: ``tag="myTagValue:"``.
+
+Facility
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", "facility\|number", "local0"
+
+The syslog facility to be assigned to messages read from this file. Can be
+specified in textual form (e.g. ``local0``, ``local1``, ...) or as numbers (e.g.
+16 for ``local0``). Textual form is suggested.
+
+Severity
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", "severity\|number", "notice"
+
+The syslog severity to be assigned to lines read. Can be specified
+in textual form (e.g. ``info``, ``warning``, ...) or as numbers (e.g. 6
+for ``info``). Textual form is suggested.
+
+PersistStateInterval
+^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no",
+
+Specifies how often the state file shall be written when processing
+the input file. The **default** value is 0, which means a new state
+file is only written when the monitored file is being closed (end of
+rsyslogd execution). Any other value n means that the state file is
+written every time n file lines have been processed. This setting can
+be used to guard against message duplication due to fatal errors
+(like power failure). Note that this setting affects performance,
+especially when set to a low value. Frequently writing the state file
+is very time consuming.
+
+MaxLinesAtOnce
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no",
+
+If set to 0, the file will be fully processed. If it is set to any other
+value, a maximum of [number] lines is processed in sequence. The **default**
+is 10240.
+
+MaxSubmitAtOnce
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "1024", "no", "none"
+
+This is an expert option. It can be used to set the maximum input
+batch size that the module can generate. The **default** is 1024, which
+is suitable for a wide range of applications. Be sure to understand
+rsyslog message batch processing before you modify this option. If
+you do not know what this doc here talks about, this is a good
+indication that you should NOT modify the default.
diff --git a/source/configuration/modules/imudp.rst b/source/configuration/modules/imudp.rst
new file mode 100644
index 0000000..cfaf9e6
--- /dev/null
+++ b/source/configuration/modules/imudp.rst
@@ -0,0 +1,600 @@
+******************************
+imudp: UDP Syslog Input Module
+******************************
+
+.. index:: ! imudp
+
+=========================== ===========================================================================
+**Module Name:**  **imudp**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+Provides the ability to receive syslog messages via UDP.
+
+Multiple receivers may be configured by specifying multiple input
+statements.
+
+Note that in order to enable UDP reception, firewall rules probably
+need to be modified as well. Also, SELinux may need additional rules.
+
+
+Notable Features
+================
+
+- :ref:`imudp-statistic-counter`
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+.. index:: imudp; module parameters
+
+
+Module Parameters
+-----------------
+
+TimeRequery
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "2", "no", "``$UDPServerTimeRequery``"
+
+This is a performance optimization. Getting the system time is very
+costly. With this setting, imudp can be instructed to obtain the
+precise time only once for every n calls. This logic is only activated
+if messages come in at a very fast rate, so making less frequent time
+calls should usually be acceptable. The default value is two, because
+we have seen that even without optimization the kernel often returns
+the identical time twice in a row. You can set this value as high as
+you like, but do so at your own risk. The higher the value, the less
+precise the timestamp.
+
+**Note:** the timeRequery is done based on executed system calls
+(**not** messages received). So when batch sizes are used, multiple
+messages are received with one system call. All of these messages
+always receive the same timestamp, as they are effectively received
+at the same time. When there is very high traffic and successive
+system calls immediately return the next batch of messages, the time
+requery logic kicks in, which means that by default time is only
+queried for every second batch. Again, this should not cause too
+much deviation, as it requires messages to come in very rapidly.
+However, we advise not to set the "timeRequery" parameter to a large
+value (larger than 10) if input batches are used.
+
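+As an illustrative sketch, the following would query the precise time
+only once for every fourth system call (the value 4 is an arbitrary
+example):
+
+.. code-block:: none
+
+ module(load="imudp" timeRequery="4")
+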
+
+SchedulingPolicy
+^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "``$IMUDPSchedulingPolicy``"
+
+Can be used to set the scheduling policy, if the necessary
+functionality is provided by the platform. Most useful to select
+"fifo" for real-time processing under Linux (and thus reduce chance
+of packet loss). Other options are "rr" and "other".
+
+
+SchedulingPriority
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "none", "no", "``$IMUDPSchedulingPriority``"
+
+Scheduling priority to use.
+
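+A sketch of how both scheduling parameters might be combined (the
+priority value is illustrative and requires suitable OS privileges):
+
+.. code-block:: none
+
+ module(load="imudp" schedulingPolicy="fifo" schedulingPriority="10")
+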
+
+BatchSize
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "32", "no", "none"
+
+This parameter is only meaningful if the system supports recvmmsg()
+(newer Linux OSs do). The parameter is silently ignored if the
+system does not support it. If supported, it sets the maximum number
+of UDP messages that can be obtained with a single OS call. For
+systems with high UDP traffic, a relatively high batch size can
+reduce system overhead and improve performance. However, this
+parameter should not be set too high. For each buffer, memory for the
+maximum message size is statically required. Also, a too-high number
+leads to reduced efficiency, as some structures need to be completely
+initialized before the OS call is done. We suggest not setting
+it above a value of 128, unless experimental results show that
+this is useful.
+
+
+Threads
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "1", "no", "none"
+
+.. versionadded:: 7.5.5
+
+Number of worker threads to process incoming messages. These threads
+are utilized to pull data off the network. On a busy system,
+additional threads (but not more than there are CPUs/Cores) can help
+improve performance and avoid message loss. Note that with too
+many threads, performance can suffer. There is a hard upper limit on
+the number of threads that can be defined. Currently, this limit is
+set to 32. It may increase in the future when massive multicore
+processors become available.
+
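+As a rough sketch, a busy receiver might combine a larger batch size
+with an additional worker thread (the exact values should be validated
+by testing, for example with impstats):
+
+.. code-block:: none
+
+ module(load="imudp" threads="2" batchSize="128")
+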
+
+PreserveCase
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "boolean", "off", "no", "none"
+
+.. versionadded:: 8.37.0
+
+This parameter controls the case of the fromhost property. If preservecase is
+set to "on", the case of fromhost is preserved, e.g. 'Host1.Example.Org' when
+the message was received from 'Host1.Example.Org'. It defaults to "off" for
+backward compatibility.
+
+
+.. index:: imudp; input parameters
+
+
+Input Parameters
+----------------
+
+.. index:: imudp; address (input parameter)
+
+Address
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "``$UDPServerAddress``"
+
+Local IP address (or name) the UDP server should bind to. Use "*"
+to bind to all of the machine's addresses.
+
+
+Port
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "array", "514", "yes", "``$UDPServerRun``"
+
+Specifies the port the server shall listen on. Either a single port can
+be specified or an array of ports. If multiple ports are specified, a
+listener will be automatically started for each port. Thus, no
+additional inputs need to be configured.
+
+Single port: ``Port="514"``
+
+Array of ports: ``Port=["514","515","10514","..."]``
+
+
+IpFreeBind
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "2", "no", "none"
+
+.. versionadded:: 8.18.0
+
+Manages the IP_FREEBIND option on the UDP socket, which allows binding it to
+an IP address that is nonlocal or not (yet) associated with any network interface.
+
+The parameter accepts the following values:
+
+- 0 - does not enable the IP_FREEBIND option on the
+ UDP socket. If the *bind()* call fails because of *EADDRNOTAVAIL* error,
+ socket initialization fails.
+
+- 1 - silently enables the IP_FREEBIND socket
+ option if it is required to successfully bind the socket to a nonlocal address.
+
+- 2 - enables the IP_FREEBIND socket option and
+ warns when it is used to successfully bind the socket to a nonlocal address.
+
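+A sketch of a listener bound to an address that may not yet exist at
+startup (the address shown is just a documentation example):
+
+.. code-block:: none
+
+ module(load="imudp")
+ input(type="imudp" port="514" address="192.0.2.10" ipFreeBind="1")
+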
+
+Device
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+Binds the socket to the given device (e.g., eth0).
+
+For Linux with VRF support, the Device option can be used to specify the
+VRF for the Address.
+
+
+Ruleset
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "RSYSLOG_DefaultRuleset", "no", "``$InputUDPServerBindRuleset``"
+
+Binds the listener to a specific :doc:`ruleset <../../concepts/multi_ruleset>`.
+
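+A minimal sketch of binding a UDP listener to its own ruleset (the
+ruleset name and file path are illustrative):
+
+.. code-block:: none
+
+ module(load="imudp")
+ ruleset(name="remoteUDP") {
+ action(type="omfile" file="/var/log/remote-udp.log")
+ }
+ input(type="imudp" port="514" ruleset="remoteUDP")
+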
+
+RateLimit.Interval
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "none"
+
+.. versionadded:: 7.3.1
+
+The rate-limiting interval in seconds. Value 0 turns off rate limiting.
+Set it to a number of seconds (5 recommended) to activate rate-limiting.
+
+
+RateLimit.Burst
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "10000", "no", "none"
+
+.. versionadded:: 7.3.1
+
+Specifies the rate-limiting burst in number of messages.
+
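+A sketch combining both rate-limiting parameters (the burst value is an
+arbitrary example):
+
+.. code-block:: none
+
+ input(type="imudp" port="514"
+ rateLimit.Interval="5" rateLimit.Burst="50000")
+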
+
+Name
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "imudp", "no", "none"
+
+.. versionadded:: 8.3.3
+
+Specifies the value of the inputname property. In older versions,
+this was always "imudp" for all
+listeners, which still is the default. Starting with 7.3.9 it can be
+set to different values for each listener. Note that when a single
+input statement defines multiple listener ports, the inputname will be
+the same for all of them. If you want to differentiate in that case,
+use "name.appendPort" to make them unique. Note that the
+"name" parameter can be an empty string. In that case, the
+corresponding inputname property will obviously also be the empty
+string. This is primarily meant to be used together with
+"name.appendPort" to set the inputname equal to the port.
+
+
+Name.appendPort
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+.. versionadded:: 7.3.9
+
+Appends the port to the inputname property. Note that when no "name" is
+specified, the default of "imudp" is used and the port is appended to
+that default. So, for example, a listener port of 514 in that case
+will lead to an inputname of "imudp514". The ability to append a port
+is most useful when multiple ports are defined for a single input and
+each of the inputnames shall be unique. Note that there currently is
+no differentiation between IPv4/v6 listeners on the same port.
+
+
+DefaultTZ
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+This is an **experimental** parameter; details may change at any
+time and it may also be discontinued without any early warning.
+Permits to set a default timezone for this listener. This is useful
+when working with legacy syslog (RFC3164 et al) residing in different
+timezones. If set it will be used as timezone for all messages **that
+do not contain timezone info**. Currently, the format **must** be
+"+/-hh:mm", e.g. "-05:00", "+01:30". Other formats, including TZ
+names (like EST) are NOT yet supported. Note that consequently no
+daylight saving settings are evaluated when working with timezones.
+If an invalid format is used, "interesting" things can happen, among
+them malformed timestamps and rsyslogd segfaults. This will obviously
+be changed at the time this feature becomes non-experimental.
+
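+For illustration only (keeping in mind that this parameter is
+experimental), a listener that assumes UTC-05:00 for messages without
+timezone information might look like this:
+
+.. code-block:: none
+
+ input(type="imudp" port="514" defaultTZ="-05:00")
+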
+
+RcvBufSize
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "size", "none", "no", "none"
+
+.. versionadded:: 7.3.9
+
+This requests a socket receive buffer of a specific size from the operating system. It
+is an expert parameter, which should only be changed for a good reason.
+Note that setting this parameter disables Linux auto-tuning, which
+usually works pretty well. The default value is 0, which means "keep
+the OS buffer size unchanged". This is a size value. So in addition
+to pure integer values, sizes like "256k", "1m" and the like can be
+specified. Note that setting very large sizes may require root or
+other special privileges. Also note that the OS may slightly adjust
+the value or shrink it to a system-set max value if the user is not
+sufficiently privileged. Technically, this parameter will result in a
+setsockopt() call with SO\_RCVBUF (and SO\_RCVBUFFORCE if it is
+available). (Maximum Value: 1G)
+
+
+.. _imudp-statistic-counter:
+
+Statistic Counter
+=================
+
+This plugin maintains :doc:`statistics <../rsyslog_statistic_counter>` for each listener and for each worker thread.
+
+The listener statistic is named starting with "imudp", followed by the
+listener IP, a colon and port in parentheses. For example, the counter for a
+listener on port 514 (on all IPs) with no set name is called "imudp(\*:514)".
+
+If an "inputname" is defined for a listener, that inputname is used instead of
+"imudp" as statistic name. For example, if the inputname is set to "myudpinut",
+that corresponding statistic name in above case would be "myudpinput(\*:514)".
+This has been introduced in 7.5.3.
+
+The following properties are maintained for each listener:
+
+- **submitted** - total number of messages submitted for processing since startup
+
+The worker thread (in short: worker) statistic is named "imudp(wX)" where "X" is
+the worker thread ID, which is a monotonically increasing integer starting at 0.
+This means the first worker will have the name "imudp(w0)", the second "imudp(w1)"
+and so on. Note that workers are all equal. It doesn’t really matter which worker
+processes which messages, so the actual worker ID is not of much concern. More
+interesting is to check how the load is spread between the workers. Also note that
+there is no fixed worker-to-listener relationship: all workers process messages
+from all listeners.
+
+Note: worker thread statistics are available starting with rsyslog 7.5.5.
+
+- **disallowed** - total number of messages discarded due to disallowed sender
+
+This counts the number of messages that have been discarded because they have
+been received by a disallowed sender. Note that if no allowed senders are
+configured (the default), this counter will always be zero.
+
+This counter was introduced by rsyslog 8.35.0.
+
+
+The following properties are maintained for each worker thread:
+
+- **called.recvmmsg** - number of recvmmsg() OS calls done
+
+- **called.recvmsg** - number of recvmsg() OS calls done
+
+- **msgs.received** - number of actual messages received
+
+
+Caveats/Known Bugs
+==================
+
+- Scheduling parameters are set **after** privileges have been dropped.
+ In most cases, this means that setting them will not be possible
+ after privilege drop. This may be worked around by using a
+ sufficiently-privileged user account.
+
+Examples
+========
+
+Example 1
+---------
+
+This sets up a UDP server on port 514:
+
+.. code-block:: none
+
+ module(load="imudp") # needs to be done just once
+ input(type="imudp" port="514")
+
+
+Example 2
+---------
+
+This sets up a UDP server on port 514 bound to device eth0:
+
+.. code-block:: none
+
+ module(load="imudp") # needs to be done just once
+ input(type="imudp" port="514" device="eth0")
+
+
+Example 3
+---------
+
+The following sample is mostly equivalent to the first one, but requests
+a larger receive buffer size. Note that 1m most probably will not be honored
+by the OS unless the user is sufficiently privileged.
+
+.. code-block:: none
+
+ module(load="imudp") # needs to be done just once
+ input(type="imudp" port="514" rcvbufSize="1m")
+
+
+Example 4
+---------
+
+In the next example, we set up three listeners at ports 10514, 10515 and
+10516 and assign a listener name of "udp" to them, followed by the port
+number:
+
+.. code-block:: none
+
+ module(load="imudp")
+ input(type="imudp" port=["10514","10515","10516"]
+ inputname="udp" inputname.appendPort="on")
+
+
+Example 5
+---------
+
+The next example is almost equal to the previous one, but now the
+inputname property will just be set to the port number. So if a message
+was received on port 10515, the input name will be "10515" in this
+example whereas it was "udp10515" in the previous one. Note that to do
+that we set the inputname to the empty string.
+
+.. code-block:: none
+
+ module(load="imudp")
+ input(type="imudp" port=["10514","10515","10516"]
+ inputname="" inputname.appendPort="on")
+
+
+Additional Information on Performance Tuning
+============================================
+
+Threads and Ports
+-----------------
+
+The maximum number of threads is a module parameter. Thus there is no direct
+relation to the number of ports.
+
+Every worker thread processes all inbound ports in parallel. To do so, it
+adds all listen ports to an `epoll()` set and waits for packets to arrive. If
+the system supports the `recvmmsg()` call, it tries to receive up to `batchSize`
+messages at once. This reduces the number of transitions between user and
+kernel space and as such overhead.
+
+After the packets have been received, imudp processes each message and creates
+input batches which are then submitted according to the config file's queue
+definition. After that a new cycle begins and imudp returns to waiting for
+new packets to arrive.
+
+When multiple threads are defined, each thread performs the processing
+described above. All worker threads are created when imudp is started.
+Each of them is individually awoken from epoll as data
+is present. Each one reads as much available data as possible. With a low
+incoming volume this can be inefficient in that the threads compete for
+inbound data. At sufficiently high volumes this is not a problem because
+multiple workers permit reading data from the operating system buffers
+while other workers process the data they have read. It must be noted
+that "sufficiently high volume" is not a precise concept. A single thread
+can be very efficient. As such it is recommended to run impstats inside a
+performance testing lab to find out a good number of worker threads. If
+in doubt, start with a low number and increase only if performance
+actually increases by adding threads.
+
+A word of caution: just looking at thread CPU use is **not** a proper
+way to monitor imudp processing capabilities. With too many threads
+the overhead can increase, even strongly. This can result in a much higher
+CPU utilization but still overall less processing capability.
+
+Please also keep in mind that additional input worker threads may
+cause more mutex contention when adding data to processing queues.
+
+Too many threads may also reduce the number of messages received via
+a single recvmmsg() call, which in turn increases kernel/user space
+switching and thus system overhead.
+
+If **real time** priority is used it must be ensured that not all
+operating system cores are used by imudp threads. The reason is that
+otherwise for heavy workloads there is no ability to actually process
+messages. While this may be desirable in some cases where queue settings
+permit for large bursts, it in general can lead to pushback from the
+queues.
+
+For lower volumes, real time priority can increase the operating system
+overhead by awaking imudp more often than strictly necessary and thus
+reducing the effectiveness of `recvmmsg()`.
+
+imudp threads and queue worker threads
+--------------------------------------
+There is no direct relationship between these two entities. Imudp submits
+messages to the configured rulesets and places them into the respective
+queues. It is then up to the queue configuration, and outside of the scope
+and knowledge of imudp, how many queue worker threads will be spawned by
+the queue in question.
+
+Note, however, that queue worker threads and imudp input worker threads
+compete for system resources. As such the combined overall value should
+not overload the system. There is no strict rule to follow when sizing
+overall worker numbers: for queue workers it strongly depends on how
+compute-intense the workload is. For example, omfile actions need
+few worker threads as they are fast. On the contrary, omelasticsearch often
+waits for server replies and as such more worker threads can be beneficial.
+The queue subsystem auto-tuning of worker threads should handle the
+different needs in a useful way.
+
+Additional Resources
+====================
+
+- `rsyslog video tutorial on how to store remote messages in a separate file <http://www.rsyslog.com/howto-store-remote-messages-in-a-separate-file/>`_.
+- Description of `rsyslog statistic
+ counters <http://www.rsyslog.com/rsyslog-statistic-counter/>`_.
+ This also describes all imudp counters.
+
diff --git a/source/configuration/modules/imuxsock.rst b/source/configuration/modules/imuxsock.rst
new file mode 100644
index 0000000..8c05108
--- /dev/null
+++ b/source/configuration/modules/imuxsock.rst
@@ -0,0 +1,966 @@
+**********************************
+imuxsock: Unix Socket Input Module
+**********************************
+
+=========================== ===========================================================================
+**Module Name:**  **imuxsock**
+**Author:** `Rainer Gerhards <http://www.gerhards.net/rainer>`_ <rgerhards@adiscon.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+This module provides the ability to accept syslog messages from applications
+running on the local system via Unix sockets. Most importantly, this is the
+mechanism by which the :manpage:`syslog(3)` call delivers syslog messages
+to rsyslogd.
+
+.. seealso::
+
+ :doc:`omuxsock`
+
+
+Notable Features
+================
+
+- :ref:`imuxsock-rate-limiting-label`
+- :ref:`imuxsock-trusted-properties-label`
+- :ref:`imuxsock-flow-control-label`
+- :ref:`imuxsock-application-timestamps-label`
+- :ref:`imuxsock-systemd-details-label`
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+Module Parameters
+-----------------
+
+.. warning::
+
+ When running under systemd, **many "sysSock." parameters are ignored**.
+ See parameter descriptions and the :ref:`imuxsock-systemd-details-label` section for
+ details.
+
+
+SysSock.IgnoreTimestamp
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "``$SystemLogSocketIgnoreMsgTimestamp``"
+
+Ignore timestamps included in the messages. This applies to messages
+received via the system log socket.
+
+
+SysSock.IgnoreOwnMessages
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "none"
+
+Ignores messages that originated from the same instance of rsyslogd.
+There usually is no reason to receive messages from ourselves. This
+setting is vital when writing messages to the systemd journal.
+
+.. versionadded:: 7.3.7
+
+.. seealso::
+
+ See :doc:`omjournal <omjournal>` module documentation for a more
+ in-depth description.
+
+
+SysSock.Use
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "``$OmitLocalLogging``"
+
+Listen on the default local log socket (``/dev/log``) or, if provided, use
+the log socket value assigned to the ``SysSock.Name`` parameter instead
+of the default. This is most useful if you run multiple instances of
+rsyslogd where only one shall handle the system log socket. Unless
+disabled by the ``SysSock.Unlink`` setting, this socket is created
+upon rsyslog startup and deleted upon shutdown, according to
+traditional syslogd behavior.
+
+The behavior of this parameter is different for systemd systems. For those
+systems, ``SysSock.Use`` still needs to be enabled, but the value of
+``SysSock.Name`` is ignored and the socket provided by systemd is used
+instead. If this parameter is *not* enabled, then imuxsock will only be
+of use if a custom input is configured.
+
+See the :ref:`imuxsock-systemd-details-label` section for details.
+
+
+SysSock.Name
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "/dev/log", "no", "``$SystemLogSocketName``"
+
+Specifies an alternate log socket to be used instead of the default system
+log socket, traditionally ``/dev/log``. Unless disabled by the
+``SysSock.Unlink`` setting, this socket is created upon rsyslog startup
+and deleted upon shutdown, according to traditional syslogd behavior.
+
+The behavior of this parameter is different for systemd systems. See the
+:ref:`imuxsock-systemd-details-label` section for details.
+
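+A sketch of a non-systemd setup in which rsyslog listens on a custom
+log socket instead of ``/dev/log`` (the path is illustrative):
+
+.. code-block:: none
+
+ module(load="imuxsock" SysSock.Name="/var/run/custom/log")
+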
+
+SysSock.FlowControl
+^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "``$SystemLogFlowControl``"
+
+Specifies if flow control should be applied to the system log socket.
+
+
+SysSock.UsePIDFromSystem
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "``$SystemLogUsePIDFromSystem``"
+
+Specifies if the pid being logged shall be obtained from the log socket
+itself. If so, the TAG part of the message is rewritten. It is recommended
+to turn this option on, but the default is "off" to remain compatible
+with earlier versions of rsyslog.
+
+.. versionadded:: 5.7.0
+
+
+SysSock.RateLimit.Interval
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "max", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "", "no", "``$SystemLogRateLimitInterval``"
+
+Specifies the rate-limiting interval in seconds. Default value is 0,
+which turns off rate limiting. Set it to a number of seconds (5
+recommended) to activate rate-limiting. The default of 0 has been
+chosen as people experienced problems with this feature activated
+by default. Now it needs an explicit opt-in by setting this parameter.
+
+
+SysSock.RateLimit.Burst
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "max", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "200", "(2^31)-1", "no", "``$SystemLogRateLimitBurst``"
+
+Specifies the rate-limiting burst in number of messages.
+
+.. versionadded:: 5.7.1
+
+
+SysSock.RateLimit.Severity
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "max", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "1", "", "no", "``$SystemLogRateLimitSeverity``"
+
+Specifies the severity of messages that shall be rate-limited.
+
+.. seealso::
+
+ https://en.wikipedia.org/wiki/Syslog#Severity_level
+
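+A sketch that enables rate limiting on the system log socket (the values
+are illustrative; see also the :ref:`imuxsock-rate-limiting-label` section):
+
+.. code-block:: none
+
+ module(load="imuxsock"
+ SysSock.RateLimit.Interval="5"
+ SysSock.RateLimit.Burst="500")
+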
+
+SysSock.UseSysTimeStamp
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "``$SystemLogUseSysTimeStamp``"
+
+The same as the input parameter ``UseSysTimeStamp``, but for the system log
+socket. This parameter instructs ``imuxsock`` to obtain message time from
+the system (via control messages) instead of using time recorded inside
+the message. This may be most useful in combination with systemd. Due to
+the usefulness of this functionality, we decided to enable it by default.
+As such, the behavior is slightly different than previous versions.
+However, we do not see how this could negatively affect existing environments.
+
+.. versionadded:: 5.9.1
+
+
+SysSock.Annotate
+^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "``$SystemLogSocketAnnotate``"
+
+Turn on annotation/trusted properties for the system log socket. See
+the :ref:`imuxsock-trusted-properties-label` section for more info.
+
+
+SysSock.ParseTrusted
+^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "``$SystemLogParseTrusted``"
+
+If ``SysSock.Annotate`` is turned on, create JSON/lumberjack properties
+out of the trusted properties (which can be accessed via |FmtAdvancedName|
+JSON Variables, e.g. ``$!pid``) instead of adding them to the message.
+
+.. versionadded:: 7.2.7
+ |FmtAdvancedName| directive introduced
+
+.. versionadded:: 7.3.8
+ |FmtAdvancedName| directive introduced
+
+.. versionadded:: 6.5.0
+ |FmtObsoleteName| directive introduced
+
+
+SysSock.Unlink
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "none"
+
+If turned on (default), the system socket is unlinked and re-created
+when opened and also unlinked when finally closed. Note that this
+setting has no effect when running under systemd control (because
+systemd handles the socket). See the :ref:`imuxsock-systemd-details-label`
+section for details.
+
+.. versionadded:: 7.3.9
+
+
+SysSock.UseSpecialParser
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "none"
+
+The equivalent of the ``UseSpecialParser`` input parameter, but
+for the system socket. If turned on (the default) a special parser is
+used that parses the format that is usually used
+on the system log socket (the one :manpage:`syslog(3)` creates). If set to
+"off", the regular parser chain is used, in which case the format on the
+log socket can be arbitrary.
+
+.. note::
+
+ When the special parser is used, rsyslog is able to inject a more precise
+ timestamp into the message (it is obtained from the log socket). If the
+ regular parser chain is used, this is not possible.
+
+.. versionadded:: 8.9.0
+ The setting was previously hard-coded "on"
+
+
+SysSock.ParseHostname
+^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+.. note::
+
+ This option only has an effect if ``SysSock.UseSpecialParser`` is
+ set to "off".
+
+Normally, the local log sockets do *not* contain hostnames. If set
+to on, parsers will expect hostnames just like in regular formats. If
+set to off (the default), the parser chain is instructed to not expect
+them.
+
+.. versionadded:: 8.9.0
+
+
+Input Parameters
+----------------
+
+Ruleset
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "default ruleset", "no", "none"
+
+Binds specified ruleset to this input. If not set, the default
+ruleset is bound.
+
+.. versionadded:: 8.17.0
+
+
+IgnoreTimestamp
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "``$InputUnixListenSocketIgnoreMsgTimestamp``"
+
+Ignore timestamps included in messages received from the input being
+defined.
+
+
+IgnoreOwnMessages
+^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "none"
+
+Ignore messages that originated from the same instance of rsyslogd.
+There usually is no reason to receive messages from ourselves. This
+setting is vital when writing messages to the systemd journal.
+
+.. versionadded:: 7.3.7
+
+.. seealso::
+
+ See :doc:`omjournal <omjournal>` module documentation for a more
+ in-depth description.
+
+
+
+
+FlowControl
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "``$InputUnixListenSocketFlowControl``"
+
+Specifies if flow control should be applied to the input being defined.
+
+
+RateLimit.Interval
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "max", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "", "no", "``$IMUXSockRateLimitInterval``"
+
+Specifies the rate-limiting interval in seconds. Default value is 0, which
+turns off rate limiting. Set it to a number of seconds (5 recommended)
+to activate rate-limiting. The default of 0 has been chosen as people
+experienced problems with this feature activated by default. Now it
+needs an explicit opt-in by setting this parameter.
+
+
+RateLimit.Burst
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "max", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "200", "(2^31)-1", "no", "``$IMUXSockRateLimitBurst``"
+
+Specifies the rate-limiting burst in number of messages.
+
+.. versionadded:: 5.7.1
+
+
+RateLimit.Severity
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "max", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "1", "", "no", "``$IMUXSockRateLimitSeverity``"
+
+Specifies the severity of messages that shall be rate-limited.
+
+.. seealso::
+
+ https://en.wikipedia.org/wiki/Syslog#Severity_level
+
+
+UsePIDFromSystem
+^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "``$InputUnixListenSocketUsePIDFromSystem``"
+
+Specifies if the pid being logged shall be obtained from the log socket
+itself. If so, the TAG part of the message is rewritten. It is
+recommended to turn this option on, but the default is "off" to remain
+compatible with earlier versions of rsyslog.
+
+
+UseSysTimeStamp
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "``$InputUnixListenSocketUseSysTimeStamp``"
+
+This parameter instructs ``imuxsock`` to obtain message time from
+the system (via control messages) instead of using time recorded inside
+the message. This may be most useful in combination with systemd. Due to
+the usefulness of this functionality, we decided to enable it by default.
+As such, the behavior is slightly different than previous versions.
+However, we do not see how this could negatively affect existing environments.
+
+.. versionadded:: 5.9.1
+
+
+CreatePath
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "``$InputUnixListenSocketCreatePath``"
+
+Create directories in the socket path if they do not already exist.
+They are created with 0755 permissions with the owner being the
+process under which rsyslogd runs. The default is not to create
+directories. Keep in mind, though, that rsyslogd always creates
+the socket itself if it does not exist (just not the directories
+by default).
+
+This option is primarily considered useful for defining additional
+sockets that reside on non-permanent file systems. As rsyslogd probably
+starts up before the daemons that create these sockets, it is a vehicle
+to enable rsyslogd to listen to those sockets even though their directories
+do not yet exist.
+
+.. versionadded:: 4.7.0
+.. versionadded:: 5.3.0
+
+
+Socket
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "``$AddUnixListenSocket``"
+
+Adds an additional unix socket. Formerly specified with the ``-a`` option.
+
+
+HostName
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "NULL", "no", "``$InputUnixListenSocketHostName``"
+
+Allows overriding the hostname that shall be used inside messages
+taken from the input that is being defined.
+
+
+Annotate
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "``$InputUnixListenSocketAnnotate``"
+
+Turn on annotation/trusted properties for the input that is being defined.
+See the :ref:`imuxsock-trusted-properties-label` section for more info.
+
+
+ParseTrusted
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "``$ParseTrusted``"
+
+Equivalent to the ``SysSock.ParseTrusted`` module parameter, but applies
+to the input that is being defined.
+
+
+Unlink
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "``none``"
+
+If turned on (default), the socket is unlinked and re-created when opened
+and also unlinked when finally closed. Set it to off if you handle socket
+creation yourself.
+
+.. note::
+
+ Note that handling socket creation oneself has the
+ advantage that a limited number of messages may be queued by the OS
+ if rsyslog is not running.
+
+.. versionadded:: 7.3.9
+
+
+UseSpecialParser
+^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "none"
+
+Equivalent to the ``SysSock.UseSpecialParser`` module parameter, but applies
+to the input that is being defined.
+
+.. versionadded:: 8.9.0
+ The setting was previously hard-coded "on"
+
+
+ParseHostname
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+Equivalent to the ``SysSock.ParseHostname`` module parameter, but applies
+to the input that is being defined.
+
+.. versionadded:: 8.9.0
+
+
+.. _imuxsock-rate-limiting-label:
+
+Input rate limiting
+===================
+
+rsyslog supports (optional) input rate limiting to guard against the problems
+of a runaway logging process. If more than
+``SysSock.RateLimit.Interval`` \* ``SysSock.RateLimit.Burst`` log messages
+are emitted from the same process, those messages with
+``SysSock.RateLimit.Severity`` or lower will be dropped. It is not possible
+to recover anything about these messages, but imuxsock will tell you how
+many it has dropped once the interval has expired AND the next message is
+logged. Rate-limiting depends on ``SCM_CREDENTIALS``. If the platform does
+not support this socket option, rate limiting is turned off. If multiple
+sockets are configured, rate limiting works independently on each of
+them (that should be what you usually expect).
+
+The same functionality is available for additional log sockets, in which
+case the config statements just use the prefix RateLimit... but otherwise
+work exactly the same. When working with severities, please keep in mind
+that higher severity numbers mean lower severity and configure things
+accordingly. To turn off rate limiting, set the interval to zero.
+
+.. versionadded:: 5.7.1
+
+
+.. _imuxsock-trusted-properties-label:
+
+Trusted (syslog) properties
+===========================
+
+rsyslog can annotate messages from system log sockets (via imuxsock) with
+so-called `Trusted syslog
+properties <http://www.rsyslog.com/what-are-trusted-properties/>`_, (or just
+"Trusted Properties" for short). These are message properties not provided by
+the logging client application itself, but rather obtained from the system.
+As such, they cannot be faked by the user application and are trusted in
+this sense. This feature is based on a similar idea introduced in systemd.
+
+This feature requires a recent enough Linux Kernel and access to
+the ``/proc`` file system. In other words, this may not work on all
+platforms and may not work fully when privileges are dropped (depending
+on how they are dropped). Note that trusted properties can be very
+useful, but also typically cause the message to grow rather large. Also,
+the format of log messages is changed by adding the trusted properties at
+the end. For these reasons, the feature is **not enabled by default**.
+If you want to use it, you must turn it on (via
+``SysSock.Annotate`` and ``Annotate``).
+
+.. versionadded:: 5.9.4
+
+.. seealso::
+
+ `What are "trusted properties"?
+ <http://www.rsyslog.com/what-are-trusted-properties/>`_
+
+
+.. _imuxsock-flow-control-label:
+
+Flow-control of Unix log sockets
+================================
+
+If processing queues fill up, the unix socket reader is blocked for a
+short while to help prevent overrunning the queues. If the queues are
+overrun, this may cause excessive disk-io and impact performance.
+
+While turning on flow control for many systems does not hurt, it `can` lead
+to a very unresponsive system and as such is disabled by default.
+
+This means that log records are placed as quickly as possible into the
+processing queues. If you would like to have flow control, you
+need to enable it via the ``SysSock.FlowControl`` and ``FlowControl`` config
+directives. Just make sure you have thought about the implications and have
+tested the change on a non-production system first.
+
+
+.. _imuxsock-application-timestamps-label:
+
+Control over application timestamps
+===================================
+
+Application timestamps are ignored by default. This is needed, as some
+programs (e.g. sshd) log with inconsistent timezone information, which
+messes up the local logs (which by default don't even contain time zone
+information). This seems to be consistent with what sysklogd has done for
+many years. Alternate behaviour may be desirable if gateway-like processes
+send messages via the local log slot. In that case, it can be enabled via
+the ``SysSock.IgnoreTimestamp`` and ``IgnoreTimestamp`` config directives.
+
+
+.. _imuxsock-systemd-details-label:
+
+Coexistence with systemd
+========================
+
+Rsyslog should by default be configured for systemd support on all platforms
+that usually run systemd (which means most Linux distributions, but not, for
+example, Solaris).
+
+Rsyslog is able to coexist with systemd with minimal changes on the part of the
+local system administrator. While the ``systemd journal`` now assumes full
+control of the local ``/dev/log`` system log socket, systemd provides
+access to logging data via the ``/run/systemd/journal/syslog`` log socket.
+This log socket is provided by the ``syslog.socket`` file that is shipped
+with systemd.
+
+The imuxsock module can still be used in this setup and provides superior
+performance over :doc:`imjournal <imjournal>`, the alternative journal input
+module.
+
+.. note::
+
+ It must be noted, however, that the journal tends to drop messages
+ when it becomes busy instead of forwarding them to the system log socket.
+ This is because the journal uses an async log socket interface for forwarding
+ instead of the traditional synchronous one.
+
+.. versionadded:: 8.32.0
+ rsyslog emits an informational message noting the system log socket provided
+ by systemd.
+
+.. seealso::
+
+ :doc:`imjournal`
+
+
+Handling of sockets
+-------------------
+
+What follows is a brief description of the process rsyslog takes to determine
+what system socket to use, which sockets rsyslog should listen on, whether
+the sockets should be created and how rsyslog should handle the sockets when
+shutting down.
+
+.. seealso::
+
+ `Writing syslog Daemons Which Cooperate Nicely With systemd
+ <https://www.freedesktop.org/wiki/Software/systemd/syslog/>`_
+
+
+Step 1: Select name of system socket
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+#. If the user has not explicitly chosen to set ``SysSock.Use="off"`` then
+ the default listener socket (aka, "system log socket" or simply "system
+ socket") name is set to ``/dev/log``. Otherwise, if the user `has`
+ explicitly set ``SysSock.Use="off"``, then rsyslog will not listen on
+ ``/dev/log`` OR any socket defined by the ``SysSock.Name`` parameter and
+ the rest of this section does not apply.
+
+#. If the user has specified ``sysSock.Name="/path/to/custom/socket"`` (and not
+ explicitly set ``SysSock.Use="off"``), then the default listener socket name
+ is overwritten with ``/path/to/custom/socket``.
+
+#. Otherwise, if rsyslog is running under systemd AND
+ ``/run/systemd/journal/syslog`` exists, (AND the user has not
+ explicitly set ``SysSock.Use="off"``) then the default listener socket name
+ is overwritten with ``/run/systemd/journal/syslog``.
+
+
+Step 2: Listen on specified sockets
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. note::
+
+ This is true for all sockets, be it system socket or not. But if
+ ``SysSock.Use="off"``, the system socket will not be listened on.
+
+rsyslog evaluates the list of sockets it has been asked to activate:
+
+- the system log socket (if still enabled after completion of the last section)
+- any custom inputs defined by the user
+
+and then checks to see if each has been passed in via systemd (the name is
+checked). If it was passed in via systemd, the socket is used as-is (i.e., not
+recreated upon rsyslog startup); otherwise the log socket is unlinked, created
+and opened.
+
+
+Step 3: Shutdown log sockets
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. note::
+
+ This is true for all sockets, be it system socket or not.
+
+Upon shutdown, rsyslog processes each socket it is listening on and evaluates
+it. If the socket was originally passed in via systemd (name is checked), then
+rsyslog does nothing with the socket (systemd maintains the socket).
+
+If the socket was `not` passed in via systemd AND the configuration permits
+rsyslog to do so (the default setting), rsyslog will unlink/remove the log
+socket. If not permitted to do so (the user specified otherwise), then rsyslog
+will not unlink the log socket and will leave that cleanup step to the
+user or application that created the socket.
+
+
+Statistic Counter
+=================
+
+This plugin maintains a global :doc:`statistics <../rsyslog_statistic_counter>` with the following properties:
+
+- ``submitted`` - total number of messages submitted for processing since startup
+
+- ``ratelimit.discarded`` - number of messages discarded due to rate limiting
+
+- ``ratelimit.numratelimiters`` - number of currently active rate limiters
+ (small data structures used for the rate limiting logic)
+
+
+Caveats/Known Bugs
+==================
+
+- When running under systemd, **many "sysSock." parameters are ignored**.
+ See parameter descriptions and the :ref:`imuxsock-systemd-details-label` section for
+ details.
+
+- On systems where systemd is used this module is often not loaded by default.
+ See the :ref:`imuxsock-systemd-details-label` section for details.
+
+- Application timestamps are ignored by default. See the
+ :ref:`imuxsock-application-timestamps-label` section for details.
+
+- `imuxsock does not work on Solaris
+ <http://www.rsyslog.com/why-does-imuxsock-not-work-on-solaris/>`_
+
+.. todolist::
+
+
+Examples
+========
+
+Minimum setup
+-------------
+
+The following sample is the minimum setup required to accept syslog
+messages from applications running on the local system.
+
+.. code-block:: none
+
+ module(load="imuxsock")
+
+This only needs to be done once.
+
+
+Enable flow control
+-------------------
+
+.. code-block:: none
+ :emphasize-lines: 2
+
+ module(load="imuxsock" # needs to be done just once
+ SysSock.FlowControl="on") # enable flow control (use if needed)
+
+Enable trusted properties
+-------------------------
+
+As noted in the :ref:`imuxsock-trusted-properties-label` section, trusted properties
+are disabled by default. If you want to use them, you must turn the feature
+on via ``SysSock.Annotate`` for the system log socket and ``Annotate`` for
+inputs.
+
+Append to end of message
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+The following sample is used to activate message annotation and thus
+trusted properties on the system log socket. These trusted properties
+are appended to the end of each message.
+
+.. code-block:: none
+ :emphasize-lines: 2
+
+ module(load="imuxsock" # needs to be done just once
+ SysSock.Annotate="on")
+
+
+Store in JSON message properties
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The following sample is similar to the first one, but enables parsing of
+trusted properties, which places the results into JSON/lumberjack variables.
+
+.. code-block:: none
+ :emphasize-lines: 2
+
+ module(load="imuxsock"
+ SysSock.Annotate="on" SysSock.ParseTrusted="on")
+
+Read log data from jails
+------------------------
+
+The following sample is a configuration where rsyslogd pulls logs from
+two jails, and assigns different hostnames to each of the jails:
+
+.. code-block:: none
+ :emphasize-lines: 3,4,6,7
+
+ module(load="imuxsock") # needs to be done just once
+ input(type="imuxsock"
+ HostName="jail1.example.net"
+ Socket="/jail/1/dev/log")
+ input(type="imuxsock"
+ HostName="jail2.example.net"
+ Socket="/jail/2/dev/log")
+
+Read from socket on temporary file system
+-----------------------------------------
+
+The following sample is a configuration where rsyslogd reads the openssh
+log messages via a separate socket, but this socket is created on a
+temporary file system. As rsyslogd starts up before the sshd daemon, it needs
+to create the socket directories, because it otherwise cannot open the
+socket and thus cannot listen to openssh messages.
+
+.. code-block:: none
+ :emphasize-lines: 3,4
+
+ module(load="imuxsock") # needs to be done just once
+ input(type="imuxsock"
+ Socket="/var/run/sshd/dev/log"
+ CreatePath="on")
+
+
+Disable rate limiting
+---------------------
+
+The following sample is used to turn off input rate limiting on the
+system log socket.
+
+.. code-block:: none
+ :emphasize-lines: 2
+
+ module(load="imuxsock" # needs to be done just once
+ SysSock.RateLimit.Interval="0") # turn off rate limiting
+
diff --git a/source/configuration/modules/index.rst b/source/configuration/modules/index.rst
new file mode 100644
index 0000000..be423cd
--- /dev/null
+++ b/source/configuration/modules/index.rst
@@ -0,0 +1,33 @@
+Modules
+=======
+
+Rsyslog has a modular design. This enables functionality to be
+dynamically loaded from modules, which may also be written by any third
+party. Rsyslog itself offers all non-core functionality as modules.
+Consequently, there is a growing number of modules. Here is the entry
+point to their documentation and what they do (the list is currently not
+complete).
+
+Please note that each module provides (case-insensitive) configuration
+parameters, which are NOT necessarily all listed below. Also remember
+that a module's configuration parameters (and functionality) are only
+available if the module has been loaded.
+
+It is relatively easy to write a rsyslog module. If none of the
+provided modules solve your need, you may consider writing one or have
+one written for you by `Adiscon's professional services for
+rsyslog <http://www.rsyslog.com/professional-services>`_ (this often
+is a very cost-effective and efficient way of getting what you need).
+
+There exist different classes of loadable modules:
+
+.. toctree::
+ :maxdepth: 1
+
+ idx_output
+ idx_input
+ idx_parser
+ idx_messagemod
+ idx_stringgen
+ idx_library
+ workflow
diff --git a/source/configuration/modules/mmanon.rst b/source/configuration/modules/mmanon.rst
new file mode 100644
index 0000000..b1d1a4b
--- /dev/null
+++ b/source/configuration/modules/mmanon.rst
@@ -0,0 +1,370 @@
+****************************************
+IP Address Anonymization Module (mmanon)
+****************************************
+
+=========================== ===========================================================================
+**Module Name:**  **mmanon**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+**Available since:** 7.3.7
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+The mmanon module permits to anonymize IP addresses. It is a message
+modification module that actually changes the IP address inside the
+message, so after calling mmanon, the original message can no longer be
+obtained. Note that anonymization will break digital signatures on the
+message, if they exist.
+
+Please note that log files can also be anonymized via
+`SLFA <http://jan.gerhards.net/p/slfa.html>`_ after they
+have been created.
+
+*How are IP-Addresses defined?*
+
+We assume that an IPv4 address consists of four octets in dotted notation,
+where each of the octets has a value between 0 and 255, inclusively.
+
+An IPv6 address is defined as between zero and eight hex values between 0
+and ffff. These are separated by ':'. Leading zeros in blocks can be omitted
+and blocks full of zeros can be abbreviated by using '::'. However, this
+can only happen once in an IP address.
+
+An IPv6 address with embedded IPv4 is an IPv6 address where the last two blocks
+have been replaced by an IPv4 address (see also: RFC 4291, section 2.2.3).
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Action Parameters
+-----------------
+
+Parameters starting with 'IPv4.' will configure IPv4 anonymization,
+while 'IPv6.' parameters do the same for IPv6 anonymization.
+
+
+ipv4.enable
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "none"
+
+Allows to enable or disable the anonymization of IPv4 addresses.
+
+
+ipv4.mode
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "zero", "no", "none"
+
+There exist the "simple", "random", "random-consitent", and "zero"
+modes. In simple mode, only octets as whole can be anonymized
+and the length of the message is never changed. This means
+that when the last three octets of the address 10.1.12.123 are
+anonymized, the result will be 10.0.00.000. This means that
+the length of the original octets is still visible and may be used
+to draw some privacy-evasive conclusions. This mode is slightly
+faster than the other modes, and this may matter in high
+throughput environments.
+
+The modes "random" and "random-consistent" are very similar, in
+that they both anonymize ip-addresses by randomizing the last bits (any
+number) of a given address. However, while "random" mode assigns a new
+random ip-address for every address in a message, "random-consistent" will
+assign the same randomized address to every instance of the same original address.
+
+The default "zero" mode will do full anonymization of any number
+of bits and it will also normalize the address, so that no information
+about the original IP address is available. So in the above example,
+10.1.12.123 would be anonymized to 10.0.0.0.
+
+
+ipv4.bits
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "positive integer", "16", "no", "none"
+
+This sets the number of bits that should be anonymized (bits are from
+the right, so lower bits are anonymized first). This setting permits
+to save network information while still anonymizing user-specific
+data. The more bits you discard, the better the anonymization
+obviously is. The default of 16 bits reflects what German data
+privacy rules consider as being sufficiently anonymized. We assume
+this can also be used as a rough but conservative guideline for other
+countries.
+Note: when in simple mode, only bits on a byte boundary can be
+specified. As such, any value other than 8, 16, 24 or 32 is invalid.
+If an invalid value is given, it is rounded to the next byte boundary
+(so we favor stronger anonymization in that case). For example, a bit
+value of 12 will become 16 in simple mode (an error message is also
+emitted).
+
+
+ipv4.replaceChar
+^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "char", "x", "no", "none"
+
+In simple mode, this sets the character that the to-be-anonymized
+part of the IP address is to be overwritten with. In any other
+mode the parameter is ignored if set.
+
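+A sketch of an action that uses simple mode and overwrites the last
+octet (8 bits) with the replacement character:
+
+.. code-block:: none
+
+ action(type="mmanon" ipv4.mode="simple" ipv4.bits="8"
+ ipv4.replaceChar="0")
+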
+
+ipv6.enable
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "none"
+
+Allows to enable or disable the anonymization of IPv6 addresses.
+
+
+ipv6.anonmode
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "zero", "no", "none"
+
+This defines the mode in which IPv6 addresses will be anonymized.
+There exist the "random", "random-consistent", and "zero" modes.
+
+The modes "random" and "random-consistent" are very similar, in
+that they both anonymize ip-addresses by randomizing the last bits (any
+number) of a given address. However, while "random" mode assigns a new
+random ip-address for every address in a message, "random-consistent" will
+assign the same randomized address to every instance of the same original address.
+
+The default "zero" mode will do full anonymization of any number
+of bits and it will also normalize the address, so that no information
+about the original IP address is available.
+
+Also note that an anonymized IPv6 address will be normalized, meaning
+there will be no abbreviations, leading zeros will **not** be displayed,
+and capital letters in the hex numerals will be lowercase.
+
+
+ipv6.bits
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "positive integer", "96", "no", "none"
+
+This sets the number of bits that should be anonymized (bits are counted
+from the right, so lower bits are anonymized first). This setting permits
+retaining network information while still anonymizing the user-specific
+part of the address. The more bits you discard, the stronger the
+anonymization obviously is. The default of 96 bits reflects what German
+data privacy rules consider sufficiently anonymized. We assume this can
+also be used as a rough but conservative guideline for other countries.
+
+
+embeddedipv4.enable
+^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "none"
+
+Enables or disables the anonymization of IPv6 addresses with embedded IPv4.
+
+
+embeddedipv4.anonmode
+^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "zero", "no", "none"
+
+This defines the mode in which IPv6 addresses with embedded IPv4 will be
+anonymized. The available modes are "random", "random-consistent", and "zero".
+
+The modes "random" and "random-consistent" are very similar, in
+that they both anonymize IP addresses by randomizing any number of the
+last bits of a given address. However, while "random" mode assigns a new
+random IP address for every address in a message, "random-consistent" will
+assign the same randomized address to every instance of the same original address.
+
+The default "zero" mode will do full anonymization of any number
+of bits and it will also normalize the address, so that no information
+about the original IP address is available.
+
+Also note that an anonymmized IPv6 address will be normalized, meaning
+there will be no abbreviations, leading zeros will **not** be displayed,
+and capital letters in the hex numerals will be lowercase.
+
+
+embeddedipv4.bits
+^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "positive integer", "96", "no", "none"
+
+This sets the number of bits that should be anonymized (bits are counted
+from the right, so lower bits are anonymized first). This setting permits
+retaining network information while still anonymizing the user-specific
+part of the address. The more bits you discard, the stronger the
+anonymization obviously is. The default of 96 bits reflects what German
+data privacy rules consider sufficiently anonymized. We assume this can
+also be used as a rough but conservative guideline for other countries.
+
+
+See Also
+========
+
+- `Howto anonymize messages that go to specific
+ files <http://www.rsyslog.com/howto-anonymize-messages-that-go-to-specific-files/>`_
+
+
+Caveats/Known Bugs
+==================
+
+- will **not** anonymize addresses in the header
+
+
+Examples
+========
+
+Anonymizing messages
+--------------------
+
+In this snippet, we write one file without anonymization and another one
+with the message anonymized. Note that once mmanon has run, access to
+the original message is no longer possible (except if stored in user
+variables before anonymization).
+
+.. code-block:: none
+
+ module(load="mmanon")
+ action(type="omfile" file="/path/to/non-anon.log")
+ action(type="mmanon" ipv6.enable="off")
+ action(type="omfile" file="/path/to/anon.log")
+
+
+Anonymizing a specific part of the ip address
+---------------------------------------------
+
+This next snippet is almost identical to the first one, but here we
+anonymize the full IPv4 address. Note that by modifying the number of
+bits, you can anonymize different parts of the address. Keep in mind
+that in simple mode (used here), the bit values must match IP address
+bytes, so for IPv4 only the values 8, 16, 24 and 32 are valid. Also, in
+this example the replacement is done via asterisks instead of lower-case
+"x" letters. Also keep in mind that "ipv4.replaceChar" can only be set in
+simple mode.
+
+.. code-block:: none
+
+ module(load="mmanon") action(type="omfile" file="/path/to/non-anon.log")
+ action(type="mmanon" ipv4.bits="32" ipv4.mode="simple" replacementChar="\*" ipv6.enable="off")
+ action(type="omfile" file="/path/to/anon.log")
+
+
+Anonymizing an odd number of bits
+---------------------------------
+
+The next snippet is also based on the first one, but anonymizes an "odd"
+number of bits, 12. The value of 12 is used by some folks as a
+compromise between keeping privacy and still permitting some more
+in-depth insight to be gained from log files. Note that anonymizing 12
+bits may be insufficient to fulfill legal requirements (if such exist).
+
+.. code-block:: none
+
+ module(load="mmanon") action(type="omfile" file="/path/to/non-anon.log")
+ action(type="mmanon" ipv4.bits="12" ipv6.enable="off") action(type="omfile"
+ file="/path/to/anon.log")
+
+
+Anonymizing ipv4 and ipv6 addresses
+-----------------------------------
+
+You can also anonymize IPv4 and IPv6 in one go using a configuration like this.
+
+.. code-block:: none
+
+ module(load="mmanon") action(type="omfile" file="/path/to/non-anon.log")
+ action(type="mmanon" ipv4.bits="12" ipv6.bits="128" ipv6.anonmode="random") action(type="omfile"
+ file="/path/to/anon.log")
+
+
+Anonymizing with default values
+-------------------------------
+
+It is also possible to use the default configuration for both types of
+anonymization. This will result in IPv4 addresses being anonymized in zero
+mode anonymizing 16 bits. IPv6 addresses will also be anonymized in zero
+mode anonymizing 96 bits.
+
+.. code-block:: none
+
+ module(load="mmanon")
+ action(type="omfile" file="/path/to/non-anon.log")
+ action(type="mmanon")
+ action(type="omfile" file="/path/to/anon.log")
+
+
+Anonymizing only ipv6 addresses
+-------------------------------
+
+Another option is to anonymize only IPv6 addresses. When doing this you have to
+disable IPv4 anonymization. This example will lead to only IPv6 addresses being
+anonymized (using the random-consistent mode).
+
+.. code-block:: none
+
+ module(load="mmanon")
+ action(type="omfile" file="/path/to/non-anon.log")
+ action(type="mmanon" ipv4.enable="off" ipv6.anonmode="random-consistent")
+ action(type="omfile" file="/path/to/anon.log")
+
diff --git a/source/configuration/modules/mmcount.rst b/source/configuration/modules/mmcount.rst
new file mode 100644
index 0000000..de3247d
--- /dev/null
+++ b/source/configuration/modules/mmcount.rst
@@ -0,0 +1,56 @@
+*******
+mmcount
+*******
+
+=========================== ===========================================================================
+**Module Name:**  **mmcount**
+**Author:** Bala.FA <barumuga@redhat.com>
+**Available since:** 7.5.0
+=========================== ===========================================================================
+
+
+**Status:**\ Non project-supported module - contact author or rsyslog
+mailing list for questions
+
+
+Purpose
+=======
+
+Message modification plugin which counts messages.
+
+This module provides the capability to count log messages by severity
+or by a JSON property of a given app-name. The count value is added to the
+log message as a JSON property named 'mmcount'.
+
+
+Examples
+========
+
+Example usage of the module in the configuration file.
+
+.. code-block:: none
+
+ module(load="mmcount")
+
+ # count each severity of appname gluster
+ action(type="mmcount" appname="gluster")
+
+ # count each value of gf_code of appname gluster
+ action(type="mmcount" appname="glusterd" key="!gf_code")
+
+ # count value 9999 of gf_code of appname gluster
+ action(type="mmcount" appname="glusterfsd" key="!gf_code" value="9999")
+
+ # send email for every 50th mmcount
+ if $app-name == 'glusterfsd' and $!mmcount <> 0 and $!mmcount % 50 == 0 then {
+ $ActionMailSMTPServer smtp.example.com
+ $ActionMailFrom rsyslog@example.com
+ $ActionMailTo glusteradmin@example.com
+ $template mailSubject,"50th message of gf_code=9999 on %hostname%"
+ $template mailBody,"RSYSLOG Alert\r\nmsg='%msg%'"
+ $ActionMailSubject mailSubject
+ $ActionExecOnlyOnceEveryInterval 30
+ :ommail:;RSYSLOG_SyslogProtocol23Format
+ }
+
+
diff --git a/source/configuration/modules/mmdarwin.rst b/source/configuration/modules/mmdarwin.rst
new file mode 100644
index 0000000..17d8c0d
--- /dev/null
+++ b/source/configuration/modules/mmdarwin.rst
@@ -0,0 +1,229 @@
+.. index:: ! mmdarwin
+
+.. role:: json(code)
+ :language: json
+
+***************************
+Darwin connector (mmdarwin)
+***************************
+
+================ ===========================================
+**Module Name:** **mmdarwin**
+**Author:** Guillaume Catto <guillaume.catto@advens.fr>,
+ Theo Bertin <theo.bertin@advens.fr>
+================ ===========================================
+
+Purpose
+=======
+
+Darwin is an open source Artificial Intelligence Framework for CyberSecurity. The mmdarwin module allows us to call Darwin in order to enrich our JSON-parsed logs with a score, and/or to allow Darwin to generate alerts.
+
+How to build the module
+=======================
+
+To compile Rsyslog with mmdarwin you'll need to:
+
+* set *--enable-mmdarwin* on configure
+
+Configuration Parameters
+========================
+
+Input Parameters
+----------------
+
+key
+^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "yes", "none"
+
+The key name to use to store the returned data.
+
+For example, given the following log line:
+
+.. code-block:: json
+
+ {
+ "from": "192.168.1.42",
+ "date": "2012-12-21 00:00:00",
+ "status": "200",
+ "data": {
+ "status": true,
+ "message": "Request processed correctly"
+ }
+ }
+
+and the :json:`"certitude"` key, the enriched log line would be:
+
+.. code-block:: json
+ :emphasize-lines: 9
+
+ {
+ "from": "192.168.1.42",
+ "date": "2012-12-21 00:00:00",
+ "status": "200",
+ "data": {
+ "status": true,
+ "message": "Request processed correctly"
+ },
+ "certitude": 0
+ }
+
+where :json:`"certitude"` represents the score returned by Darwin.
+
+
+socketpath
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "yes", "none"
+
+The Darwin filter socket path to use.
+
+
+response
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", \"no", "no", "none"
+
+Tells the Darwin filter what to do next:
+
+* :json:`"no"`: no response will be sent, nothing will be sent to next filter.
+* :json:`"back"`: a score for the input will be returned by the filter, nothing will be forwarded to the next filter.
+* :json:`"darwin"`: the data provided will be forwarded to the next filter (in the format specified in the filter's configuration), no response will be given to mmdarwin.
+* :json:`"both"`: the filter will respond to mmdarwin with the input's score AND forward the data (in the format specified in the filter's configuration) to the next filter.
+
+.. note::
+
+ Please be mindful when setting this parameter: the called filter will only forward data to the next configured filter if you ask it to do so with :json:`"darwin"` or :json:`"both"`. If a next filter is configured but you ask for a :json:`"back"` response, the next filter **WILL NOT** receive anything!
+
+filtercode
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "0x00000000", "no", "none"
+
+Each Darwin module has a unique filter code. For example, the code of the hostlookup filter is :json:`"0x66726570"`.
+This code used to be mandatory but is now obsolete; you can leave it at its default.
+
+fields
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "array", "none", "yes", "none"
+
+Array containing values to be sent to Darwin as parameters.
+
+Two types of values can be set:
+
+* if it starts with a bang (:json:`"!"`), mmdarwin will search the JSON-parsed log line for the associated value. You can search in subkeys as well: just add a bang to go to a deeper level.
+* otherwise, the value is considered static, and will be forwarded directly to Darwin.
+
+For example, given the following log line:
+
+.. code-block:: json
+
+ {
+ "from": "192.168.1.42",
+ "date": "2012-12-21 00:00:00",
+ "status": "200",
+ "data": {
+ "status": true,
+ "message": "Request processed correctly"
+ }
+ }
+
+and the :json:`"fields"` array:
+
+.. code-block:: none
+
+ ["!from", "!data!status", "rsyslog"]
+
+The parameters sent to Darwin would be :json:`"192.168.1.42"`, :json:`true` and :json:`"rsyslog"`.
+
+.. note::
+ The order of the parameters is important. Thus, you have to be careful when providing the fields in the array.
+ Refer to `Darwin documentation`_ to see what each filter requires as parameters.
+
+.. _`Darwin documentation`: https://github.com/VultureProject/darwin/wiki
+
+send_partial
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "boolean", "off", "no", "none"
+
+Whether or not to send data to Darwin if not all :json:`"fields"` could be found in the message.
+All current Darwin filters require a strict number (and format) of parameters as input, so they will most likely not process the data if some fields are missing. This should be kept to "off", unless you know what you're doing.
+
+For example, for the following log line:
+
+.. code-block:: json
+
+ {
+ "from": "192.168.1.42",
+ "date": "2012-12-21 00:00:00",
+ "status": "200",
+ "data": {
+ "status": true,
+ "message": "Request processed correctly"
+ }
+ }
+
+and the :json:`"fields"` array:
+
+.. code-block:: none
+
+ ["!from", "!data!status", "!this!field!is!not!in!message"]
+
+the third field won't be found, so the call to Darwin will be dropped.
+
+
+Configuration example
+=====================
+
+This example shows a possible configuration of mmdarwin.
+
+.. code-block:: none
+
+ module(load="imtcp")
+ module(load="mmjsonparse")
+ module(load="mmdarwin")
+
+ input(type="imtcp" port="8042" Ruleset="darwin_ruleset")
+
+ ruleset(name="darwin_ruleset") {
+ action(type="mmjsonparse" cookie="")
+ action(type="mmdarwin" socketpath="/path/to/reputation_1.sock" fields=["!srcip", "ATTACK;TOR"] key="reputation" response="back" filtercode="0x72657075")
+
+ call darwin_output
+ }
+
+ ruleset(name="darwin_output") {
+ action(type="omfile" file="/path/to/darwin_output.log")
+ }
diff --git a/source/configuration/modules/mmdblookup.rst b/source/configuration/modules/mmdblookup.rst
new file mode 100644
index 0000000..d92f849
--- /dev/null
+++ b/source/configuration/modules/mmdblookup.rst
@@ -0,0 +1,141 @@
+.. index:: ! mmdblookup
+
+************************************
+MaxMind/GeoIP DB lookup (mmdblookup)
+************************************
+
+================ ==================================
+**Module Name:** mmdblookup
+**Author:** `chenryn <rao.chenlin@gmail.com>`_
+**Available:** 8.24+
+================ ==================================
+
+
+Purpose
+=======
+
+MaxMind DB is a file format for storing information about IP addresses
+in a highly optimized, flexible way. GeoIP2 databases are
+available in the MaxMind DB format.
+
+The plugin author claims that MaxMind DB lookups are roughly 4 to 6 times
+faster than legacy GeoIP lookups.
+
+
+How to build the module
+=======================
+
+To compile Rsyslog with mmdblookup you'll need to:
+
+* install *libmaxminddb-devel* package
+* set *--enable-mmdblookup* on configure
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Module Parameters
+-----------------
+
+container
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "!iplocation", "no", "none"
+
+.. versionadded:: 8.28.0
+
+Specifies the container to be used to store the fields amended by
+mmdblookup.
+
+
+Input Parameters
+----------------
+
+key
+^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "yes", "none"
+
+Name of the field containing the IP address to look up.
+
+
+mmdbfile
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "yes", "none"
+
+Location of the MaxMind DB file.
+
+
+fields
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "array", "none", "yes", "none"
+
+Fields that will be appended to the processed message. The fields will
+always be appended in the container used by mmdblookup (which may be
+overridden by the "container" parameter on module load).
+
+By default, the MaxMind DB field name is used as the variable name. This can
+be overridden by specifying a custom name between colons at the
+beginning of the field name. As usual, bang signs denote path levels.
+So for example, if you want to extract "!city!names!en" but rename it
+to "cityname", you can use ":cityname:!city!names!en" as the field name.
+
+
+Examples
+========
+
+Minimum configuration
+---------------------
+
+This example shows the minimum configuration.
+
+.. code-block:: none
+
+ # load module
+ module( load="mmdblookup" )
+
+ action( type="mmdblookup" mmdbfile="/etc/rsyslog.d/GeoLite2-City.mmdb"
+ fields=["!continent!code","!location"] key="!clientip" )
+
+
+Custom container and field name
+-------------------------------
+
+The following example uses a custom container and a custom field name.
+
+.. code-block:: none
+
+ # load module
+ module( load="mmdblookup" container="!geo_ip")
+
+ action( type="mmdblookup" mmdbfile="/etc/rsyslog.d/GeoLite2-City.mmdb"
+ fields=[":continent:!continent!code", ":loc:!location"]
+ key="!clientip")
+
+
diff --git a/source/configuration/modules/mmexternal.rst b/source/configuration/modules/mmexternal.rst
new file mode 100644
index 0000000..afc85cc
--- /dev/null
+++ b/source/configuration/modules/mmexternal.rst
@@ -0,0 +1,110 @@
+********************************************************
+Support module for external message modification modules
+********************************************************
+
+=========================== ===========================================================================
+**Module Name:**  **mmexternal**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+**Available since:** 8.3.0
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+This module permits the integration of external message modification plugins
+into rsyslog.
+
+For details on the interface specification, see the file
+./plugins/external/INTERFACE.md in the rsyslog source tree.
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Action Parameters
+-----------------
+
+binary
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "yes", "none"
+
+The name of the external message modification plugin to be called. This
+can be a full path name.
+
+
+interface.input
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "msg", "no", "none"
+
+This can either be "msg", "rawmsg" or "fulljson". In case of "fulljson", the
+message object is provided as a json object. Otherwise, the respective
+property is provided. This setting **must** match the external plugin's
+expectations. Check the external plugin documentation for what needs to be used.
+
+
+output
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+This is a debug aid. If set, this is a filename where the plugin's output
+is logged. Note that the output is also being processed as usual by rsyslog.
+Setting this parameter thus gives insight into the internal processing
+that happens between the plugin and the rsyslog core.
+
+
+forceSingleInstance
+^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+This is an expert parameter, just like the equivalent *omprog* parameter.
+See the message modification plugin's documentation if it is needed.
+
+
+Examples
+========
+
+Execute external module
+-----------------------
+
+The following config file snippet is used to execute an external
+message modification module "mmexternal.py". Note that the path to the
+module is specified here. This is necessary if the module is not in the
+default search path.
+
+.. code-block:: none
+
+ module (load="mmexternal") # needs to be done only once inside the config
+
+ action(type="mmexternal" binary="/path/to/mmexternal.py")
+
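+For orientation, here is a rough sketch of what such an external plugin
+might look like in Python. It assumes the line-based stdin/stdout exchange
+described in INTERFACE.md; the exact reply format rsyslog expects must be
+taken from that document, so the pass-through reply below is only a
+placeholder.
+
+.. code-block:: python
+
+ #!/usr/bin/env python3
+ # Rough sketch of an external message modification plugin.
+ # Assumption: rsyslog writes one record per line to stdin and reads one
+ # reply line per record from stdout (see ./plugins/external/INTERFACE.md
+ # for the authoritative protocol and reply format).
+ import sys
+
+ def process(record):
+     # Placeholder: a real plugin would inspect the record (msg, rawmsg or
+     # fulljson, depending on interface.input) and build the reply defined
+     # in INTERFACE.md.
+     return record
+
+ if __name__ == "__main__":
+     for line in sys.stdin:
+         sys.stdout.write(process(line.rstrip("\n")) + "\n")
+         sys.stdout.flush()  # reply immediately so rsyslog is not blocked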
+
diff --git a/source/configuration/modules/mmfields.rst b/source/configuration/modules/mmfields.rst
new file mode 100644
index 0000000..876a7ca
--- /dev/null
+++ b/source/configuration/modules/mmfields.rst
@@ -0,0 +1,132 @@
+***********************************
+Fields Extraction Module (mmfields)
+***********************************
+
+=========================== ===========================================================================
+**Module Name:**  **mmfields**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+**Available since:** 7.5.1
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+The mmfields module permits the extraction of fields. It is an alternative to
+using the property replacer field extraction capabilities. In contrast
+to the property replacer, all fields are extracted at once and stored
+inside the structured data part (more precisely: they become Lumberjack
+[JSON] properties).
+
+Using this module is of special advantage if a field-based log format is
+to be processed, like for example CEF, **and** either a large number
+of fields is needed or a specific field is used multiple times inside
+filters. In these scenarios, mmfields potentially offers better
+performance than the property replacer or the RainerScript field
+extraction method. The reason is that mmfields extracts all fields in
+one big sweep, whereas the other methods extract fields individually,
+which requires multiple passes through the same data. On the other hand,
+adding field content to the rsyslog property dictionary also has some
+overhead, so for high-performance use cases it is suggested to do some
+performance testing before finally deciding which method to use. This is
+most important if only a smaller subset of the fields is actually
+needed.
+
+In any case, mmfields provides a very handy and easy-to-use way to parse
+structured data into its individual data items. Again, a primary use
+case was support for CEF (Common Event Format), which is made extremely
+easy to do with this module.
+
+This module is implemented via the action interface. Thus it can be
+conditionally used depending on some prerequisites.
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Action Parameters
+-----------------
+
+separator
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "char", ",", "no", "none"
+
+This is the character used to separate fields. Currently, only a
+single character is permitted, while the RainerScript method permits
+specifying multi-character separator strings. For CEF, this is not
+required. If there is actual need to support multi-character
+separator strings, support can be added relatively easily. It is
+suggested to request it on the rsyslog mailing list, together with
+the use case - we intend to add functionality only if there is a real
+use case behind the request (in the past we too often implemented
+things that actually never got used).
+The fields are named f\ *nbr*, where *nbr* is the field number
+starting with one and being incremented for each field.
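+
+For illustration (the message content is a made-up example), a
+comma-separated message and the properties mmfields would create from it:
+
+.. code-block:: none
+
+ message:    error,disk1,94
+ properties: $!f1 = "error"
+             $!f2 = "disk1"
+             $!f3 = "94"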
+
+
+jsonRoot
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "!", "no", "none"
+
+This parameter specifies the JSON path into which the extracted fields
+shall be written. The default is to use the JSON root object itself.
+
+
+Examples
+========
+
+Parsing messages and writing them to file
+-----------------------------------------
+
+This is a very simple use case where each message is parsed. The default
+separator character of comma is being used.
+
+.. code-block:: none
+
+ module(load="mmfields")
+ template(name="ftpl"
+ type="string"
+ string="%$!%\\n")
+ action(type="mmfields")
+ action(type="omfile"
+ file="/path/to/logfile"
+ template="ftpl")
+
+
+Writing into a specific json path
+---------------------------------
+
+The following sample is similar to the previous one, but this time the
+colon is used as separator and data is written into the "$!mmfields"
+json path.
+
+.. code-block:: none
+
+ module(load="mmfields")
+ template(name="ftpl"
+ type="string"
+ string="%$!%\\n")
+ action(type="mmfields"
+ separator=":"
+ jsonRoot="!mmfields")
+ action(type="omfile"
+ file="/path/to/logfile"
+ template="ftpl")
+
diff --git a/source/configuration/modules/mmjsonparse.rst b/source/configuration/modules/mmjsonparse.rst
new file mode 100644
index 0000000..3d47c4f
--- /dev/null
+++ b/source/configuration/modules/mmjsonparse.rst
@@ -0,0 +1,147 @@
+***********************************************************
+JSON/CEE Structured Content Extraction Module (mmjsonparse)
+***********************************************************
+
+=========================== ===========================================================================
+**Module Name:**  **mmjsonparse**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+**Available since:** 6.6.0
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+This module provides support for parsing structured log messages that
+follow the CEE/lumberjack spec. The so-called "CEE cookie" is checked
+and, if present, the JSON-encoded structured message content is parsed.
+The properties are then available as original message properties.
+
+As a convenience, mmjsonparse will produce a valid CEE/lumberjack log
+message if passed a message without the CEE cookie. A JSON structure
+will be created in which "msg" is the only field, containing the
+original message text. Note that in this case, mmjsonparse will
+nonetheless return that the JSON parsing has failed.
+
+The "CEE cookie" is the character squence "@cee:" which must prepend the
+actual JSON. Note that the JSON must be valid and MUST NOT be followed
+by any non-JSON message. If either of these conditions is not true,
+mmjsonparse will **not** parse the associated JSON. This is based on the
+cookie definition used in CEE/project lumberjack and is meant to aid
+against an erroneous detection of a message as being CEE where it is
+not.
+
+This also means that mmjsonparse currently is NOT a generic JSON parser
+that picks up JSON from wherever it may occur in the message. This is
+intentional, but future versions may support config parameters to relax
+the format requirements.
+
+
+Notable Features
+================
+
+- :ref:`mmjsonparse-parsing-result`
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Action Parameters
+-----------------
+
+cookie
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "@cee:", "no", "none"
+
+Permits setting the cookie that must be present in front of the
+JSON part of the message.
+
+Most importantly, this can be set to the empty string ("") in order
+to not require any cookie. In this case, leading spaces are permitted
+in front of the JSON. No non-whitespace characters are permitted
+after the JSON. If such is required, mmnormalize must be used.
+
+
+useRawMsg
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+Specifies if the raw message should be used for normalization (on)
+or just the MSG part of the message (off).
+
+
+container
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "$!", "no", "none"
+
+Specifies the JSON container (path) under which parsed elements should be
+placed. By default, all parsed properties are merged into the root of the
+message properties. You can place them under a subtree instead. You
+can also place them in local variables by setting container="$.".
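+
+For example, a sketch that collects all parsed properties under a subtree
+(the subtree name "$!parsed" is chosen arbitrarily here):
+
+.. code-block:: none
+
+ action(type="mmjsonparse" container="$!parsed")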
+
+
+.. _mmjsonparse-parsing-result:
+
+Check parsing result
+====================
+
+You can check whether rsyslogd was able to successfully parse the
+message by reading the $parsesuccess variable:
+
+.. code-block:: none
+
+ action(type="mmjsonparse")
+ if $parsesuccess == "OK" then {
+ action(type="omfile" File="/tmp/output")
+ }
+ else if $parsesuccess == "FAIL" then {
+ action(type="omfile" File="/tmp/parsing_failure")
+ }
+
+
+Examples
+========
+
+Apply default normalization
+---------------------------
+
+This activates the module and applies normalization to all messages.
+
+.. code-block:: none
+
+ module(load="mmjsonparse")
+ action(type="mmjsonparse")
+
+
+Permit parsing messages without cookie
+--------------------------------------
+
+To permit parsing messages without a cookie, use this action statement:
+
+.. code-block:: none
+
+ action(type="mmjsonparse" cookie="")
+
diff --git a/source/configuration/modules/mmkubernetes.rst b/source/configuration/modules/mmkubernetes.rst
new file mode 100644
index 0000000..83efc18
--- /dev/null
+++ b/source/configuration/modules/mmkubernetes.rst
@@ -0,0 +1,630 @@
+*****************************************
+Kubernetes Metadata Module (mmkubernetes)
+*****************************************
+
+=========================== ===========================================================================
+**Module Name:**  **mmkubernetes**
+**Author:** `Tomáš Heinrich`
+ `Rich Megginson` <rmeggins@redhat.com>
+=========================== ===========================================================================
+
+Purpose
+=======
+
+This module is used to add `Kubernetes <https://kubernetes.io/>`_
+metadata to log messages logged by containers running in Kubernetes.
+It will add the namespace uuid, pod uuid, pod and namespace labels and
+annotations, and other metadata associated with the pod and
+namespace.
+
+.. note::
+
+ This **only** works with log files in `/var/log/containers/*.log` (docker
+ `--log-driver=json-file`, or CRI-O log files), or with journald entries with
+ message properties `CONTAINER_NAME` and `CONTAINER_ID_FULL` (docker
+ `--log-driver=journald`), and when the application running inside the
+ container writes logs to `stdout`/`stderr`. This **does not** currently
+ work with other log drivers.
+
+For json-file and CRI-O logs, you must use the `imfile` module with the
+`addmetadata="on"` parameter, and the filename must match the
+liblognorm rules specified by the `filenamerules`
+(:ref:`filenamerules`) or `filenamerulebase` (:ref:`filenamerulebase`)
+parameter values.
+
+For journald logs, there must be a message property `CONTAINER_NAME`
+which matches the liblognorm rules specified by the `containerrules`
+(:ref:`containerrules`) or `containerrulebase`
+(:ref:`containerrulebase`) parameter values. The record must also have
+the message property `CONTAINER_ID_FULL`.
+
+This module is implemented via the output module interface. This means
+that mmkubernetes should be called just like an action. After it has
+been called, there will be two new message properties: `kubernetes`
+and `docker`. There will be subfields of each one for the various
+metadata items: `$!kubernetes!namespace_name`
+`$!kubernetes!labels!this-is-my-label`, etc. There is currently only
+1 docker subfield: `$!docker!container_id`. See
+https://github.com/ViaQ/elasticsearch-templates/blob/master/namespaces/kubernetes.yml
+and
+https://github.com/ViaQ/elasticsearch-templates/blob/master/namespaces/docker.yml
+for more details.
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+Module Parameters and Action Parameters
+---------------------------------------
+
+.. _kubernetesurl:
+
+KubernetesURL
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "https://kubernetes.default.svc.cluster.local:443", "no", "none"
+
+The URL of the Kubernetes API server. Example: `https://localhost:8443`.
+
+.. _mmkubernetes-tls.cacert:
+
+tls.cacert
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+Full path and file name of file containing the CA cert of the
+Kubernetes API server cert issuer. Example: `/etc/rsyslog.d/mmk8s-ca.crt`.
+This parameter is not mandatory if using an `http` scheme instead of `https` in
+`kubernetesurl`, or if using `allowunsignedcerts="yes"`.
+
+.. _mmkubernetes-tls.mycert:
+
+tls.mycert
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+This is the full path and file name of the file containing the client cert for
+doing client cert auth against Kubernetes. This file is in PEM format. For
+example: `/etc/rsyslog.d/k8s-client-cert.pem`
+
+.. _mmkubernetes-tls.myprivkey:
+
+tls.myprivkey
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+This is the full path and file name of the file containing the private key
+corresponding to the cert `tls.mycert` used for doing client cert auth against
+Kubernetes. This file is in PEM format, and must be unencrypted, so take
+care to secure it properly. For example: `/etc/rsyslog.d/k8s-client-key.pem`
+
+.. _tokenfile:
+
+tokenfile
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+The file containing the token to use to authenticate to the Kubernetes API
+server. One of `tokenfile` or `token` is required if Kubernetes is configured
+with access control. Example: `/etc/rsyslog.d/mmk8s.token`
+
+.. _token:
+
+token
+^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+The token to use to authenticate to the Kubernetes API server. One of `token`
+or `tokenfile` is required if Kubernetes is configured with access control.
+Example: `UxMU46ptoEWOSqLNa1bFmH`
+
+.. _annotation_match:
+
+annotation_match
+^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "array", "none", "no", "none"
+
+By default no pod or namespace annotations will be added to the
+messages. This parameter is an array of patterns to match the keys of
+the `annotations` field in the pod and namespace metadata to include
+in the `$!kubernetes!annotations` (for pod annotations) or the
+`$!kubernetes!namespace_annotations` (for namespace annotations)
+message properties. Example: `["k8s.*master","k8s.*node"]`
+
+.. _srcmetadatapath:
+
+srcmetadatapath
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "$!metadata!filename", "no", "none"
+
+When reading json-file logs, with `imfile` and `addmetadata="on"`,
+this is the property where the filename is stored.
+
+.. _dstmetadatapath:
+
+dstmetadatapath
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "$!", "no", "none"
+
+This is where the `kubernetes` and `docker` properties will be
+written. By default, the module will add `$!kubernetes` and
+`$!docker`.
+
+.. _allowunsignedcerts:
+
+allowunsignedcerts
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "boolean", "off", "no", "none"
+
+If `"on"`, this will set the curl `CURLOPT_SSL_VERIFYPEER` option to
+`0`. You are strongly discouraged to set this to `"on"`. It is
+primarily useful only for debugging or testing.
+
+.. _skipverifyhost:
+
+skipverifyhost
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "boolean", "off", "no", "none"
+
+If `"on"`, this will set the curl `CURLOPT_SSL_VERIFYHOST` option to
+`0`. You are strongly discouraged to set this to `"on"`. It is
+primarily useful only for debugging or testing.
+
+.. _de_dot:
+
+de_dot
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "boolean", "on", "no", "none"
+
+When processing labels and annotations, if this parameter is set to
+`"on"`, the key strings will have their `.` characters replaced with
+the string specified by the `de_dot_separator` parameter.
+
+.. _de_dot_separator:
+
+de_dot_separator
+^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "_", "no", "none"
+
+When processing labels and annotations, if the `de_dot` parameter is
+set to `"on"`, the key strings will have their `.` characters replaced
+with the string specified by the string value of this parameter.
+
+.. _filenamerules:
+
+filenamerules
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "SEE BELOW", "no", "none"
+
+.. note::
+
+ This directive is not supported with liblognorm 2.0.2 and earlier.
+
+When processing json-file logs, these are the lognorm rules to use to
+match the filename and extract metadata. The default value is::
+
+ rule=:/var/log/containers/%pod_name:char-to:_%_%namespace_name:char-to:_%_%contai\
+ ner_name_and_id:char-to:.%.log
+
+.. note::
+
+ In the above rules, the slashes ``\`` ending each line indicate
+ line wrapping - they are not part of the rule.
+
+.. _filenamerulebase:
+
+filenamerulebase
+^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "/etc/rsyslog.d/k8s_filename.rulebase", "no", "none"
+
+When processing json-file logs, this is the rulebase used to match the filename
+and extract metadata. For the actual rules, see :ref:`filenamerules`.
+
+.. _containerrules:
+
+containerrules
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "SEE BELOW", "no", "none"
+
+.. note::
+
+ This directive is not supported with liblognorm 2.0.2 and earlier.
+
+For journald logs, there must be a message property `CONTAINER_NAME`
+whose value matches the rules specified by this parameter.
+The default value is::
+
+ rule=:%k8s_prefix:char-to:_%_%container_name:char-to:.%.%container_hash:char-to:\
+ _%_%pod_name:char-to:_%_%namespace_name:char-to:_%_%not_used_1:char-to:_%_%not_u\
+ sed_2:rest%
+ rule=:%k8s_prefix:char-to:_%_%container_name:char-to:_%_%pod_name:char-to:_%_%na\
+ mespace_name:char-to:_%_%not_used_1:char-to:_%_%not_used_2:rest%
+
+.. note::
+
+ In the above rules, the slashes ``\`` ending each line indicate
+ line wrapping - they are not part of the rule.
+
+There are two rules because the `container_hash` is optional.
+
+.. _containerrulebase:
+
+containerrulebase
+^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "/etc/rsyslog.d/k8s_container_name.rulebase", "no", "none"
+
+When processing journald logs, this is the rulebase used to match the
+CONTAINER_NAME property value and extract metadata. For the actual rules, see
+:ref:`containerrules`.
+
+.. _mmkubernetes-busyretryinterval:
+
+busyretryinterval
+^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "5", "no", "none"
+
+The number of seconds to wait before retrying operations to the Kubernetes API
+server after receiving a `429 Busy` response. The default `"5"` means that the
+module will retry the connection every `5` seconds. Records processed during
+this time will _not_ have any additional metadata associated with them, so you
+will need to handle cases where some of your records have all of the metadata
+and some do not.
+
+If you want to have rsyslog suspend the plugin until the Kubernetes API server
+is available, set `busyretryinterval` to `"0"`. This will cause the plugin to
+return an error to rsyslog.
+
+.. _mmkubernetes-sslpartialchain:
+
+sslpartialchain
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "boolean", "off", "no", "none"
+
+This option is only available if rsyslog was built with support for OpenSSL and
+only if the `X509_V_FLAG_PARTIAL_CHAIN` flag is available. If you attempt to
+set this parameter on other platforms, you will get an `INFO` level log
+message. This was done so that you could use the same configuration on
+different platforms.
+If `"on"`, this will set the OpenSSL certificate store flag
+`X509_V_FLAG_PARTIAL_CHAIN`. This will allow you to verify the Kubernetes API
+server cert with only an intermediate CA cert in your local trust store, rather
+than having to have the entire intermediate CA + root CA chain in your local
+trust store. See also `man s_client` - the `-partial_chain` flag.
+If you get errors like this, you probably need to set `sslpartialchain="on"`:
+
+.. code-block:: none
+
+ rsyslogd: mmkubernetes: failed to connect to [https://...url...] -
+ 60:Peer certificate cannot be authenticated with given CA certificates
+
+.. _mmkubernetes-cacheexpireinterval:
+
+cacheexpireinterval
+^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "-1", "no", "none"
+
+This parameter allows you to expire entries from the metadata cache. The
+values are:
+
+- -1 (default) - disables metadata cache expiration
+- 0 - check cache for expired entries before every cache lookup
+- 1 or higher - the number is a number of seconds - check the cache
+ for expired entries every this many seconds, when processing an
+ entry
+
+The cache is only checked if processing a record from Kubernetes. There
+is no housekeeping thread that continually runs cleaning up
+the cache. When a record from Kubernetes is processed:
+
+- If `cacheexpireinterval` is -1, do not check for cache expiration.
+- If `cacheexpireinterval` is 0, check for cache expiration.
+- If `cacheexpireinterval` is greater than 0, check for cache expiration
+ if the last check was more than this many seconds ago.
+
+When cache expiration is checked, all cache entries which have a ttl
+less than or equal to the current time will be deleted. The cache entry ttl
+is set using the `cacheentryttl` parameter.
+
+.. _mmkubernetes-cacheentryttl:
+
+cacheentryttl
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "3600", "no", "none"
+
+This parameter allows you to set the maximum age (time-to-live, or ttl) of
+an entry in the metadata cache. The value is in seconds. The default value
+is `3600` (one hour). When cache expiration is checked, if a cache entry
+has a ttl less than or equal to the current time, it will be removed from
+the cache.
+
+This option is only used if `cacheexpireinterval` is 0 or greater.
+
+This value must be 0 or greater; otherwise, if `cacheexpireinterval` is 0
+or greater, you will get an error.
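+
+As a sketch, the following action would check for expired entries at most
+once per minute and drop cached metadata older than 30 minutes (the
+values are illustrative):
+
+.. code-block:: none
+
+ action(type="mmkubernetes" cacheexpireinterval="60" cacheentryttl="1800")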
+
+.. _mmkubernetes-statistic-counter:
+
+Statistic Counter
+=================
+
+This plugin maintains per-action :doc:`statistics
+<../rsyslog_statistic_counter>`. The statistic is named
+"mmkubernetes($kubernetesurl)", where `$kubernetesurl` is the
+:ref:`kubernetesurl` setting for the action.
+
+Parameters are:
+
+- **recordseen** - number of messages seen by the action which the action has
+ determined have Kubernetes metadata associated with them
+
+- **namespacemetadatasuccess** - the number of times a successful request was
+ made to the Kubernetes API server for namespace metadata.
+
+- **namespacemetadatanotfound** - the number of times a request to the
+ Kubernetes API server for namespace metadata was returned with a `404 Not
+ Found` error code - the namespace did not exist at that time.
+
+- **namespacemetadatabusy** - the number of times a request to the Kubernetes
+ API server for namespace metadata was returned with a `429 Busy` error
+ code - the server was too busy to send a proper response.
+
+- **namespacemetadataerror** - the number of times a request to the Kubernetes
+ API server for namespace metadata was returned with some other error code
+ not handled above. These are typically "hard" errors which require some
+ sort of intervention to fix e.g. Kubernetes server down, credentials incorrect.
+
+- **podmetadatasuccess** - the number of times a successful request was made
+ to the Kubernetes API server for pod metadata.
+
+- **podmetadatanotfound** - the number of times a request to the Kubernetes
+ API server for pod metadata was returned with a `404 Not Found` error code -
+ the pod did not exist at that time.
+
+- **podmetadatabusy** - the number of times a request to the Kubernetes API
+ server for pod metadata was returned with a `429 Busy` error code - the
+ server was too busy to send a proper response.
+
+- **podmetadataerror** - the number of times a request to the Kubernetes API
+ server for pod metadata was returned with some other error code not handled
+ above. These are typically "hard" errors which require some sort of
+ intervention to fix e.g. Kubernetes server down, credentials incorrect.
+
+- **podcachenumentries** - the number of entries in the pod metadata cache.
+
+- **namespacecachenumentries** - the number of entries in the namespace metadata
+ cache.
+
+- **podcachehits** - the number of times a requested entry was found in the
+ pod metadata cache.
+
+- **namespacecachehits** - the number of times a requested entry was found in the
+ namespace metadata cache.
+
+- **podcachemisses** - the number of times a requested entry was not found in the
+ pod metadata cache, and had to be requested from Kubernetes.
+
+- **namespacecachemisses** - the number of times a requested entry was not found
+ in the namespace metadata cache, and had to be requested from Kubernetes.
+
+Fields
+------
+
+These are the fields added from the metadata in the json-file filename, or from
+the `CONTAINER_NAME` and `CONTAINER_ID_FULL` fields from the `imjournal` input:
+
+`$!kubernetes!namespace_name`, `$!kubernetes!pod_name`,
+`$!kubernetes!container_name`, `$!docker!id`, `$!kubernetes!master_url`.
+
+If mmkubernetes can extract the above fields from the input, the following
+fields will always be present. If they are not present, mmkubernetes
+failed to look up the namespace or pod in Kubernetes:
+
+`$!kubernetes!namespace_id`, `$!kubernetes!pod_id`,
+`$!kubernetes!creation_timestamp`, `$!kubernetes!host`
+
+The following fields may be present, depending on how the namespace and pod are
+defined in Kubernetes, and depending on the value of the directive
+`annotation_match`:
+
+`$!kubernetes!labels`, `$!kubernetes!annotations`, `$!kubernetes!namespace_labels`,
+`$!kubernetes!namespace_annotations`
+
+More fields may be added in the future.
+
+Error Handling
+--------------
+If the plugin encounters a `404 Not Found` in response to a request for
+namespace or pod metadata, that is, the pod or namespace is missing, the plugin
+will cache that result, and no metadata will be available for that pod or
+namespace forever. If the pod or namespace is recreated, you will need to
+restart rsyslog in order to clear the cache and allow it to find that metadata.
+
+If the plugin gets a `429 Busy` response, the plugin will _not_ cache that
+result, and will _not_ add the metadata to the record. This can happen in very
+large Kubernetes clusters when you run into the upper limit on the number of
+concurrent Kubernetes API service connections. You may have to increase that
+limit. In the meantime, you can control what the plugin does with those
+records using the :ref:`mmkubernetes-busyretryinterval` setting. If you want
+to continue to process the records, but with incomplete metadata, set
+`busyretryinterval` to a non-zero value, which will be the number of seconds
+after which mmkubernetes will retry the connection. The default value is `5`,
+so by default, the plugin will retry the connection every `5` seconds. If the
+`429` error condition in the Kubernetes API server is brief and transient, this
+means you will have some (hopefully small) number of records without the
+metadata such as the uuids, labels, and annotations, but your pipeline will not
+stop. If the `429` error condition in the Kubernetes API server is persistent,
+it may require Kubernetes API server administrator intervention to address, and
+you may want to use the `busyretryinterval` value of `"0"`. This will cause
+the module to return a "hard" error (see below).
+
+For other errors, the plugin will assume they are "hard" errors requiring admin
+intervention, return an error code, and rsyslog will suspend the plugin. Use
+the :ref:`mmkubernetes-statistic-counter` to monitor for problems getting data
+from the Kubernetes API service.
+
+Example
+-------
+
+Assuming you have an `imfile` input reading from docker json-file container
+logs managed by Kubernetes, with `addmetadata="on"` so that mmkubernetes can
+get the basic necessary Kubernetes metadata from the filename:
+
+.. code-block:: none
+
+ input(type="imfile" file="/var/log/containers/*.log"
+ tag="kubernetes" addmetadata="on")
+
+(Add `reopenOnTruncate="on"` if using Docker, not required by CRI-O).
+
+and/or an `imjournal` input for docker journald container logs annotated by
+Kubernetes:
+
+.. code-block:: none
+
+ input(type="imjournal")
+
+Then mmkubernetes can be used to annotate log records like this:
+
+.. code-block:: none
+
+ module(load="mmkubernetes")
+
+ action(type="mmkubernetes")
+
+After this, you should have log records with fields described in the `Fields`
+section above.
+
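+If the API server requires authentication over TLS, an action might look
+like the following sketch (the URL and file paths are placeholders; the
+parameters themselves are documented above):
+
+.. code-block:: none
+
+ module(load="mmkubernetes")
+
+ action(type="mmkubernetes"
+        kubernetesurl="https://k8s.example.com:6443"
+        tls.cacert="/etc/rsyslog.d/mmk8s-ca.crt"
+        tokenfile="/etc/rsyslog.d/mmk8s.token"
+        annotation_match=["k8s.*node"])
+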
+Credits
+-------
+
+This work is based on
+https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter
+and has many of the same features.
diff --git a/source/configuration/modules/mmnormalize.rst b/source/configuration/modules/mmnormalize.rst
new file mode 100644
index 0000000..7d526a5
--- /dev/null
+++ b/source/configuration/modules/mmnormalize.rst
@@ -0,0 +1,178 @@
+Log Message Normalization Module (mmnormalize)
+==============================================
+
+**Module Name:    mmnormalize**
+
+**Available since:** 6.1.2+
+
+**Author:** Rainer Gerhards <rgerhards@adiscon.com>
+
+**Description**:
+
+This module provides the capability to normalize log messages via
+`liblognorm <http://www.liblognorm.com>`_. Thanks to liblognorm,
+unstructured text, as usually found in log messages, can be parsed very
+quickly and put into a normal form. This is done so quickly that it
+should be possible to normalize events in real time.
+
+This module is implemented via the output module interface. This means
+that mmnormalize should be called just like an action. After it has been
+called, the normalized message properties are available and can be
+accessed. These properties are called the "CEE/lumberjack" properties,
+because liblognorm creates a format that is inspired by the
+CEE/lumberjack approach.
+
+**Please note:** CEE/lumberjack properties are different from regular
+properties. They always have "$!" prepended to the property name given
+in the rulebase. Such a property needs to be referenced as
+**%$!propertyname%**.
+
+Note that from a performance point of view mmnormalize should only be called
+once on each message, if possible. To do so, place all rules into a single
+rule base. If that is not possible, you can safely call mmnormalize multiple
+times. This incurs a small performance drawback.
+
+Module Parameters
+~~~~~~~~~~~~~~~~~
+
+Note: parameter names are case-insensitive.
+
+.. function:: allow_regex <boolean>
+
+ **Default**: off
+
+ Specifies if regex field-type should be allowed. Regex field-type has
+ significantly higher computational overhead compared to other fields,
+ so it should be avoided when another field-type can achieve the desired
+ effect. Needs to be "on" for regex field-type to work.
+
+Action Parameters
+~~~~~~~~~~~~~~~~~
+
+Note: parameter names are case-insensitive.
+
+.. function:: ruleBase <word>
+
+ Specifies which rulebase file to use. If there are multiple
+ mmnormalize instances, each one can use a different file. However, a
+ single instance can use only a single file. This parameter or **rule** MUST be
+ given, because normalization can only happen based on a rulebase. It
+ is recommended that an absolute path name is given. Information on
+ how to create the rulebase can be found in the `liblognorm
+ manual <http://www.liblognorm.com/files/manual/index.html>`_.
+
+.. function:: rule <array>
+
+ *(Available since: 8.26.0)*
+
+ Contains an array of strings which will be put together as the rulebase. This parameter
+ or **rulebase** MUST be given, because normalization can only happen based on a rulebase.
+
+.. function:: useRawMsg <boolean>
+
+ **Default**: off
+
+ Specifies if the raw message should be used for normalization (on)
+ or just the MSG part of the message (off).
+
+.. function:: path <word>
+
+ **Default**: $!
+
+ Specifies the JSON path under which parsed elements should be
+ placed. By default, all parsed properties are merged into root of
+ message properties. You can place them under a subtree, instead. You
+ can place them in local variables, also, by setting path="$.".
+
+.. function:: variable <word>
+
+ *(Available since: 8.5.1)*
+
+ Specifies a variable to use for normalization instead of the 'msg'
+ property. The variable can be a property, a local variable, a JSON path, etc.
+ Please note that **useRawMsg** overrides this parameter, so if **useRawMsg**
+ is set, **variable** will be ignored and the raw message will be used.
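+
+For instance, a minimal sketch (the property name "$!payload" is purely
+illustrative) that normalizes the content of a JSON property instead of
+the message text:
+
+::
+
+ action(type="mmnormalize" rulebase="/path/to/rulebase.rb" variable="$!payload")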
+
+
+
+
+Legacy Configuration Parameters
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Note: parameter names are case-insensitive.
+
+- $mmnormalizeRuleBase <rulebase-file> - equivalent to the "ruleBase"
+ parameter.
+- $mmnormalizeUseRawMsg <on/off> - equivalent to the "useRawMsg"
+ parameter.
+
+See Also
+~~~~~~~~
+
+- `First steps for
+ mmnormalize <http://www.rsyslog.com/normalizer-first-steps-for-mmnormalize/>`_
+- `Log normalization and special
+ characters <http://www.rsyslog.com/log-normalization-and-special-characters/>`_
+- `Log normalization and the leading
+ space <http://www.rsyslog.com/log-normalization-and-the-leading-space/>`_
+- `Using mmnormalize effectively with Adiscon
+ LogAnalyzer <http://www.rsyslog.com/using-rsyslog-mmnormalize-module-effectively-with-adiscon-loganalyzer/>`_
+
+Caveats/Known Bugs
+~~~~~~~~~~~~~~~~~~
+
+None known at this time.
+
+Example
+~~~~~~~
+
+**Sample 1:**
+
+In this sample, messages are received via imtcp. They are then normalized with the given rulebase
+and written to a file.
+
+::
+
+ module(load="mmnormalize")
+ module(load="imtcp")
+
+ input(type="imtcp" port="10514" ruleset="outp")
+
+ ruleset(name="outp") {
+ action(type="mmnormalize" rulebase="/tmp/rules.rulebase")
+ action(type="omfile" File="/tmp/output")
+ }
+
+**Sample 2:**
+
+In this sample, messages are received via imtcp and then normalized based on the given rules.
+The strings from **rule** are concatenated and are equivalent to a rulebase with the same content.
+
+::
+
+ module(load="mmnormalize")
+ module(load="imtcp")
+
+ input(type="imtcp" port="10514" ruleset="outp")
+
+ ruleset(name="outp") {
+ action(type="mmnormalize" rule=["rule=:%host:word% %tag:char-to:\\x3a%: no longer listening on %ip:ipv4%#%port:number%", "rule=:%host:word% %ip:ipv4% user was logged out"])
+ action(type="omfile" File="/tmp/output")
+ }
+
+**Sample 3:**
+
+This activates the module and applies normalization to all messages:
+
+::
+
+ module(load="mmnormalize")
+ action(type="mmnormalize" ruleBase="/path/to/rulebase.rb")
+
+The same in legacy format:
+
+::
+
+ $ModLoad mmnormalize
+ $mmnormalizeRuleBase /path/to/rulebase.rb
+ *.* :mmnormalize:
diff --git a/source/configuration/modules/mmpstrucdata.rst b/source/configuration/modules/mmpstrucdata.rst
new file mode 100644
index 0000000..481f2e0
--- /dev/null
+++ b/source/configuration/modules/mmpstrucdata.rst
@@ -0,0 +1,101 @@
+RFC5424 structured data parsing module (mmpstrucdata)
+=====================================================
+
+**Module Name:** mmpstrucdata
+
+**Author:** Rainer Gerhards <rgerhards@adiscon.com>
+
+**Available since**: 7.5.4
+
+**Description**:
+
+The mmpstrucdata module parses the structured data of `RFC5424 <https://tools.ietf.org/html/rfc5424>`_ into the message JSON variable tree. The parsed data, if available, is stored under "jsonRoot!rfc5424-sd!...". Please note that only RFC5424 messages will be processed.
+
+The difference with RFC5424 is in the message layout: the SYSLOG-MSG part only contains the structured-data part instead of the normal message part. Further down you can find an example of a structured-data part.
+
+**Module Configuration Parameters**:
+
+Note: parameter names are case-insensitive.
+
+Currently none.
+
+
+**Action Configuration Parameters**:
+
+Note: parameter names are case-insensitive.
+
+- **jsonRoot** - default "!"
+  Specifies the JSON container into which the data shall be parsed.
+
+- **sd_name.lowercase** - default "on"
+
+ Available: rsyslog 8.32.0 and above
+
+  Specifies whether SD names (SD-IDs) shall be lowercased. If set to "on",
+  they are; if set to "off", they are not. The default of "on" is used because
+  that was the traditional mode of operation. It is generally advised to
+  change the parameter to "off" if not otherwise required (see the example
+  below).
+
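+A minimal sketch that keeps the original case of the SD names:
+
+::
+
+   action(type="mmpstrucdata" sd_name.lowercase="off")
+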
+**See Also**
+
+- `Howto anonymize messages that go to specific
+ files <http://www.rsyslog.com/howto-anonymize-messages-that-go-to-specific-files/>`_
+
+**Caveats/Known Bugs:**
+
+- this module is currently experimental; feedback is appreciated
+- property names are treated case-insensitively in rsyslog. As such,
+  RFC5424 names are treated case-insensitively as well. If such names
+  differ only in case (which is not recommended anyway), problems will
+  occur.
+- structured data with duplicate SD-IDs and SD-PARAMS is not properly
+ processed
+
+**Samples:**
+
+Below you can find the structured data part of a sample message which has three parameters.
+
+::
+
+   [exampleSDID@32473 iut="3" eventSource="Application" eventID="1011"]
+
+
+In this snippet, we parse the structured data and write the message
+together with all JSON variables to a file.
+
+::
+
+ module(load="mmpstrucdata") action(type="mmpstrucdata")
+ template(name="jsondump" type="string" string="%msg%: %$!%\\n")
+ action(type="omfile" file="/path/to/log" template="jsondump")
+
+
+**A more practical one:**
+
+Take this example message (inspired by the RFC5424 sample):
+
+``<34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 [exampleSDID@32473 iut="3" eventSource="Application" eventID="1011"][id@2 test="tast"] BOM'su root' failed for lonvick on /dev/pts/8``
+
+We apply this configuration:
+
+::
+
+ module(load="mmpstrucdata") action(type="mmpstrucdata")
+ template(name="sample2" type="string" string="ALL: %$!%\\nSD:
+ %$!RFC5424-SD%\\nIUT:%$!rfc5424-sd!exampleSDID@32473!iut%\\nRAWMSG:
+ %rawmsg%\\n\\n") action(type="omfile" file="/path/to/log"
+ template="sample2")
+
+
+
+This will output:
+
+``ALL: { "rfc5424-sd": { "examplesdid@32473": { "iut": "3", "eventsource": "Application", "eventid": "1011" }, "id@2": { "test": "tast" } } } SD: { "examplesdid@32473": { "iut": "3", "eventsource": "Application", "eventid": "1011" }, "id@2": { "test": "tast" } } IUT:3 RAWMSG: <34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 [exampleSDID@32473 iut="3" eventSource="Application" eventID="1011"][id@2 test="tast"] BOM'su root' failed for lonvick on /dev/pts/8``
+
+As you can see, you can address each of the individual items. Note that
+the RFC5424 parameter names have been converted to lower case.
+
diff --git a/source/configuration/modules/mmrfc5424addhmac.rst b/source/configuration/modules/mmrfc5424addhmac.rst
new file mode 100644
index 0000000..d3e5333
--- /dev/null
+++ b/source/configuration/modules/mmrfc5424addhmac.rst
@@ -0,0 +1,93 @@
+mmrfc5424addhmac
+================
+
+**Module Name:** mmrfc5424addhmac
+
+**Author:** Rainer Gerhards <rgerhards@adiscon.com>
+
+**Available since**: 7.5.6
+
+**Description**:
+
+This module adds an HMAC to RFC5424 structured data if one is not already
+present. This is a custom module and uses openssl as requested by the
+sponsor. It works exclusively for RFC5424 formatted messages; all
+others are ignored.
+
+If both `mmpstrucdata <mmpstrucdata.html>`_ and mmrfc5424addhmac are to
+be used, the recommended calling sequence is
+
+#. mmrfc5424addhmac
+#. mmpstrucdata
+
+With that sequence, the generated hash will become available to
+mmpstrucdata.
+
+
+**Module Configuration Parameters**:
+
+Note: parameter names are case-insensitive.
+
+Currently none.
+
+
+**Action Configuration Parameters**:
+
+Note: parameter names are case-insensitive.
+
+- **key**
+  The key (string) to be used to generate the HMAC.
+- **hashfunction**
+ An openssl hash function name for the function to be used. This is
+ passed on to openssl, so see the openssl list of supported function
+ names.
+- **sd\_id**
+  The RFC5424 structured data ID to be used by this module. This is
+  the SD-ID that will be added. Note that nothing is added if this
+  SD-ID is already present (see the example below).
+
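+A minimal sketch combining these parameters with the recommended
+mmpstrucdata calling sequence; the key, hash function and SD-ID values
+are placeholders:
+
+::
+
+   module(load="mmrfc5424addhmac")
+   module(load="mmpstrucdata")
+   action(type="mmrfc5424addhmac" key="secret" hashfunction="sha256"
+          sd_id="hmac@32473")
+   action(type="mmpstrucdata")
+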
+**Verification method**
+
+rsyslog does not contain any tools to verify a log file (this was not
+part of the custom project). So you need to write your own verifier.
+
+When writing the verifier, keep in mind that the log file contains
+messages with the hash SD-ID included. For obvious reasons, this SD-ID
+was not present when the hash was created. So before the actual
+verification is done, this SD-ID must be removed, and the remaining
+(original) message be verified. Also, it is important to note that the
+output template must write the exact same message format that was
+received. Otherwise, a verification failure will obviously occur - and
+rightly so, because the message content actually was altered.
+
+So in a more formal description, verification of a message m can be done
+as follows:
+
+#. let m' be m with the configured SD-ID removed (everything between
+ []). Otherwise, m' must be an exact duplicate of m.
+#. call openssl's HMAC function as follows:
+ ``HMAC(hashfunction, key, len(key), m', len(m'), hash, &hashlen);``
+ Where hashfunction and key are the configured values and hash is an
+ output buffer for the hash.
+#. let h be the extracted hash value obtained from m within the relevant
+ SD-ID. Be sure to convert the hex string back to the actual byte
+ values.
+#. now compare hash and h, taking their sizes into consideration. If these
+   values match, the verification succeeds; otherwise the message was
+   modified.
+
+If you need help implementing a verifier function or want to sponsor
+development of a verification tool, please simply email
+`sales@adiscon.com <sales@adiscon.com>`_ for a quote.
+
+**See Also**
+
+- `How to add a HMAC to RFC5424
+ messages <http://www.rsyslog.com/how-to-add-a-hmac-to-rfc5424-structured-data-messages/>`_
+
+**Caveats/Known Bugs:**
+
+- none
+
diff --git a/source/configuration/modules/mmrm1stspace.rst b/source/configuration/modules/mmrm1stspace.rst
new file mode 100644
index 0000000..915f026
--- /dev/null
+++ b/source/configuration/modules/mmrm1stspace.rst
@@ -0,0 +1,30 @@
+mmrm1stspace: First Space Modification Module
+=============================================
+
+**Author:** Pascal Withopf <pascalwithopf1@gmail.com>
+
+In RFC 3164, the MSG part begins with the first character after the tag. It is
+often the case that this is an unnecessary space. This module removes this
+first character if it is a space.
+
+Configuration Parameters
+------------------------
+
+Note: parameter names are case-insensitive.
+
+Currently none.
+
+Examples
+--------
+
+This example receives messages over imtcp and modifies them before writing
+them to a file.
+
+::
+
+ module(load="imtcp")
+ module(load="mmrm1stspace")
+ input(type="imtcp" port="13514")
+ action(type="mmrm1stspace")
+ action(type="omfile" file="output.log")
+
diff --git a/source/configuration/modules/mmsequence.rst b/source/configuration/modules/mmsequence.rst
new file mode 100644
index 0000000..a35599f
--- /dev/null
+++ b/source/configuration/modules/mmsequence.rst
@@ -0,0 +1,156 @@
+Number generator and counter module (mmsequence)
+================================================
+
+**Module Name:** mmsequence
+
+**Author:** Pavel Levshin <pavel@levshin.spb.ru>
+
+**Status:** Non project-supported module - contact the author or the rsyslog
+mailing list for questions
+
+**This module is deprecated** in v8 and solely provided for backward
+compatibility reasons. It was written as a work-around for missing
+global variable support in v7. Global variables are available in v8,
+and at some point in time this module will entirely be removed.
+
+**Do not use this module for newly crafted config files.**
+Use global variables instead.
+
+
+**Available since**: 7.5.6
+
+**Description**:
+
+This module generates numeric sequences of different kinds. It can be
+used to count messages up to a limit and to number them. It can generate
+random numbers in a given range.
+
+This module is implemented via the output module interface, so it is
+called just as an action. The number generated is stored in a variable.
+
+
+**Action Parameters**:
+
+Note: parameter names are case-insensitive.
+
+- **mode** "random" or "instance" or "key"
+
+ Specifies mode of the action. In "random" mode, the module generates
+ uniformly distributed integer numbers in a range defined by "from"
+ and "to".
+
+ In "instance" mode, which is default, the action produces a counter
+ in range [from, to). This counter is specific to this action
+ instance.
+
+ In "key" mode, the counter can be shared between multiple instances.
+ This counter is identified by a name, which is defined with "key"
+ parameter.
+
+- **from** [non-negative integer], default "0"
+
+ Starting value for counters and lower margin for random generator.
+
+- **to** [positive integer], default "INT\_MAX"
+
+  Upper margin for all sequences. Note that this margin is not
+  inclusive. When the next value for a counter is equal to or greater
+  than this parameter, the counter resets to the starting value.
+
+- **step** [non-negative integer], default "1"
+
+  Increment for counters. If step is "0", it can be used to fetch the
+  current value without modification. The latter does not apply to
+  "random" mode. This is useful in "key" mode or to get constant values
+  in "instance" mode.
+
+- **key** [word], default ""
+
+ Name of the global counter which is used in this action.
+
+- **var** [word], default "$!mmsequence"
+
+ Name of the variable where the number will be stored. Should start
+ with "$".
+
+**Sample**:
+
+::
+
+ # load balance
+ Ruleset(
+ name="logd"
+ queue.workerthreads="5"
+ ){
+
+ Action(
+ type="mmsequence"
+ mode="instance"
+ from="0"
+ to="2"
+ var="$.seq"
+ )
+
+ if $.seq == "0" then {
+ Action(
+ type="mmnormalize"
+ userawmsg="on"
+ rulebase="/etc/rsyslog.d/rules.rb"
+ )
+ } else {
+ Action(
+ type="mmnormalize"
+ userawmsg="on"
+ rulebase="/etc/rsyslog.d/rules.rb"
+ )
+ }
+
+ # output logic here
+ }
+ # generate random numbers
+ action(
+ type="mmsequence"
+ mode="random"
+ to="100"
+ var="$!rndz"
+ )
+ # count from 0 to 99
+ action(
+ type="mmsequence"
+ mode="instance"
+ to="100"
+ var="$!cnt1"
+ )
+ # the same as before but the counter is global
+ action(
+ type="mmsequence"
+ mode="key"
+ key="key1"
+ to="100"
+ var="$!cnt2"
+ )
+ # count specific messages but place the counter in every message
+ if $msg contains "txt" then
+ action(
+ type="mmsequence"
+ mode="key"
+ to="100"
+ var="$!cnt3"
+ )
+ else
+ action(
+ type="mmsequence"
+ mode="key"
+ to="100"
+ step="0"
+ var="$!cnt3"
+ key=""
+ )
+
+**Legacy Configuration Parameters**:
+
+Note: parameter names are case-insensitive.
+
+Not supported.
+
diff --git a/source/configuration/modules/mmsnmptrapd.rst b/source/configuration/modules/mmsnmptrapd.rst
new file mode 100644
index 0000000..0f75d8c
--- /dev/null
+++ b/source/configuration/modules/mmsnmptrapd.rst
@@ -0,0 +1,103 @@
+mmsnmptrapd message modification module
+=======================================
+
+**Module Name:** mmsnmptrapd
+
+**Author:** Rainer Gerhards <rgerhards@adiscon.com> (custom-created)
+
+**Multi-Ruleset Support:** since 5.8.1
+
+**Description**:
+
+This module uses a specific configuration of snmptrapd's tag values to
+obtain information about the original source system and the severity
+present inside the original SNMP trap. It then replaces these fields
+inside the syslog message.
+
+Let's look at an example. Essentially, SNMPTT will invoke something like
+this:
+
+::
+
+ logger -t snmptrapd/warning/realhost Host 003c.abcd.ffff in vlan 17 is flapping between port Gi4/1 and port Gi3/2
+
+This message modification module will change the tag (removing the
+additional information), hostname and severity (not shown in example),
+so the log entry will look as follows:
+
+::
+
+    2011-04-21T16:43:09.101633+02:00 realhost snmptrapd: Host 003c.abcd.ffff in vlan 17 is flapping between port Gi4/1 and port Gi3/2
+
+The following logic is applied to all messages being processed:
+
+#. The module checks incoming syslog entries. If their TAG field starts
+   with "snmptrapd/" (configurable), they are modified, otherwise not.
+   If they are modified, this happens as follows:
+#. It will derive the hostname from the tag field which has format
+ snmptrapd/severity/hostname
+#. It should derive the severity from the tag field which has format
+ snmptrapd/severity/hostname. A configurable mapping table will be
+   used to derive a new severity value from that severity string. If no
+ mapping has been defined, the original severity is not changed.
+#. It replaces the "FromHost" value with the derived value from step 2
+#. It replaces the "Severity" value with the derived value from step 3
+
+Note that the placement of this module inside the configuration is
+important. All actions called before this module will work on the
+unmodified message. All actions after its call will work on the
+modified message. Please also note that there is some extra power in
+case it is required: as this module is implemented via the output module
+interface, a filter can be used (actually must be used) in order to tell
+when it is called. Usually, the catch-all filter (\*.\*) is used, but
+more specific filters are fully supported. So it is possible to define
+different parameters for this module depending on different filters. It
+is also possible to just run messages from one remote system through
+this module, with the help of filters or multiple rulesets and ruleset
+bindings. In short, all capabilities rsyslog offers to control
+output modules are also available to mmsnmptrapd.
+
+**Configuration Parameters**:
+
+Note: parameter names are case-insensitive.
+
+- **$mmsnmptrapdTag** [tagname]
+
+  Tells the module which start string inside the tag to look for. The
+  default is "snmptrapd". Note that a slash is automatically added to
+  this tag when it comes to matching incoming messages. The slash MUST
+  not be given explicitly, except if two slashes are required for whatever
+  reason (so "tag/" results in a check for "tag//" at the start of the
+  tag field).
+
+- **$mmsnmptrapdSeverityMapping** [severitymap]
+ This specifies the severity mapping table. It needs to be specified
+ as a list. Note that due to the current config system **no
+ whitespace** is supported inside the list, so be sure not to use any
+ whitespace inside it.
+  The list is constructed of Severity-Name/Severity-Value pairs,
+  delimited by commas. Severity-Name is a case-sensitive string, e.g.
+  "warning", with an associated numerical value (e.g. 4). Possible values
+  are in the range 0..7 and are defined in RFC5424, table 2. The given
+  sample would be specified as "warning/4".
+ If multiple instances of mmsnmptrapd are used, each instance uses
+ the most recently defined $mmsnmptrapdSeverityMapping before itself.
+
+**Caveats/Known Bugs:**
+
+- currently none known
+
+**Example:**
+
+This enables rewriting of messages from snmptrapd and configures the error
+and warning severities. The default tag is used.
+
+::
+
+ $ModLoad mmsnmptrapd # needs to be done just once
+ # ... other module loads and listener setup ...
+ *.* /path/to/file/with/originalMessage # this file receives unmodified messages
+ $mmsnmptrapdSeverityMapping warning/4,error/3
+ *.* :mmsnmptrapd: # now message is modified
+ *.* /path/to/file/with/modifiedMessage # this file receives modified messages
+ # ... rest of config ...
+
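+The following variant additionally overrides the tag start string; the
+value "mysnmptrapd" is a placeholder:
+
+::
+
+   $ModLoad mmsnmptrapd
+   $mmsnmptrapdTag mysnmptrapd
+   $mmsnmptrapdSeverityMapping warning/4,error/3
+   *.* :mmsnmptrapd:
+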
diff --git a/source/configuration/modules/mmtaghostname.rst b/source/configuration/modules/mmtaghostname.rst
new file mode 100644
index 0000000..114f723
--- /dev/null
+++ b/source/configuration/modules/mmtaghostname.rst
@@ -0,0 +1,89 @@
+******************************************
+mmtaghostname: message modification module
+******************************************
+
+================ ==============================================================
+**Module Name:** **mmtaghostname**
+**Authors:** Jean-Philippe Hilaire <jean-philippe.hilaire@pmu.fr> & Philippe Duveau <philippe.duveau@free.fr>
+================ ==============================================================
+
+
+Purpose
+=======
+
+As a message modification module, it can be used at different steps of
+message processing without interfering with the parser chain, and can be
+applied before or after the parsing process using rulesets.
+
+The purposes are:
+
+- to add a tag to messages produced by input modules which do not provide
+  a tag, like imudp or imtcp. This is useful when the tag is used for
+  routing the message.
+
+- to force the message hostname to the rsyslog value.
+  AWS use case: applications in auto-scaling systems provide logs to rsyslog
+  through UDP/TCP. As a result of auto-scaling, the name of the host is based
+  on an ephemeral IP (short-term meaning). In this situation the rsyslog local
+  hostname is generally closer to the business rule, so replacing the received
+  hostname with the rsyslog local hostname adds value to the collected logs.
+
+Compile
+=======
+
+To successfully compile the mmtaghostname module, configure rsyslog as follows:
+
+.. code-block:: none
+
+ ./configure --enable-mmtaghostname ...
+
+Configuration Parameters
+========================
+
+Tag
+^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", ,"none"
+
+The tag to be assigned to modified messages. If you would like to see the
+colon after the tag, you need to include it when you assign a tag value,
+like so: ``tag="myTagValue:"``.
+
+If this attribute is not provided, message tags are not modified.
+
+ForceLocalHostname
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "Binary", "no", ,"off"
+
+This attribute forces the HOSTNAME of the message to be set to the rsyslog
+value "localHostName". This allows setting a valid value for messages
+received from local applications through imudp or imtcp.
+
+Sample
+======
+
+In this sample, the received message is parsed by the RFC5424 parser, then
+the HOSTNAME is overwritten and a tag is set.
+
+.. code-block:: none
+
+ module(load='mmtaghostname')
+ module(load='imudp')
+ global(localhostname="sales-front")
+
+ ruleset(name="TagUDP" parser=[ "rsyslog.rfc5424" ]) {
+ action(type="mmtaghostname" tag="front" forcelocalhostname="on")
+ call ...
+ }
+ input(type="imudp" port="514" ruleset="TagUDP")
diff --git a/source/configuration/modules/mmutf8fix.rst b/source/configuration/modules/mmutf8fix.rst
new file mode 100644
index 0000000..594bb90
--- /dev/null
+++ b/source/configuration/modules/mmutf8fix.rst
@@ -0,0 +1,112 @@
+Fix invalid UTF-8 Sequences (mmutf8fix)
+=======================================
+
+**Module Name:** mmutf8fix
+
+**Author:** Rainer Gerhards <rgerhards@adiscon.com>
+
+**Available since**: 7.5.4
+
+**Description**:
+
+The mmutf8fix module permits fixing invalid UTF-8 sequences. Most often,
+such invalid sequences result from syslog sources sending in non-UTF
+character sets, e.g. ISO 8859. As syslog does not have a way to convey
+the character set information, these sequences are not properly handled.
+While they are typically uncritical with plain text files, they can
+cause big headaches with database destinations as well as systems like
+ElasticSearch.
+
+The module supports different "fixing" modes and fixes. The current
+implementation will always replace invalid bytes with a single US ASCII
+character. Additional replacement modes will probably be added in the
+future, depending on user demand. In the longer term it could also be
+evolved into an any-charset-to-UTF8 converter. But first let's see if it
+really gets into widespread enough use.
+
+**Proper Usage**:
+
+Some notes are due for proper use of this module. This is a message
+modification module utilizing the action interface, which means you call
+it like an action. This gives great flexibility on the question on when
+and how to call this module. Note that once it has been called, it
+actually modifies the message. The original message is then no longer
+available. However, this does **not** change any properties set, used or
+extracted before the modification is done.
+
+One potential use case is to normalize all messages. This is done by
+simply calling mmutf8fix right in front of all other actions.
+
+If only a specific source (or set of sources) is known to cause
+problems, mmutf8fix can be conditionally called only on messages from
+them. This also offers performance benefits. If such multiple sources
+exists, it probably is a good idea to define different listeners for
+their incoming traffic, bind them to specific
+`ruleset <multi_ruleset.html>`_ and call mmutf8fix as first action in
+this ruleset.
+
+**Module Configuration Parameters**:
+
+Note: parameter names are case-insensitive.
+
+Currently none.
+
+
+**Action Configuration Parameters**:
+
+Note: parameter names are case-insensitive.
+
+- **mode** - **utf-8**/controlcharacters
+
+ This sets the basic detection mode.
+ In **utf-8** mode (the default), proper UTF-8 encoding is checked and
+ bytes which are not proper UTF-8 sequences are acted on. If a proper
+ multi-byte start sequence byte is detected but any of the following
+ bytes is invalid, the whole sequence is replaced by the replacement
+ method. This mode is most useful with non-US-ASCII character sets,
+   which validly include multibyte sequences. Note that in this mode
+ control characters are NOT being replaced, because they are valid
+ UTF-8.
+ In **controlcharacters** mode, all bytes which do not represent a
+ printable US-ASCII character (codes 32 to 126) are replaced. Note
+ that this also mangles valid UTF-8 multi-byte sequences, as these are
+ (deliberately) outside of that character range. This mode is most
+ useful if it is known that no characters outside of the US-ASCII
+ alphabet need to be processed.
+- **replacementChar** - default " " (space), a single character
+
+ This is the character that invalid sequences are replaced by.
+ Currently, it MUST be a **printable** US-ASCII character.
+
+**Caveats/Known Bugs:**
+
+- overlong UTF-8 encodings are currently not detected in utf-8 mode.
+
+**Samples:**
+
+In this snippet, we write one file without fixing UTF-8 and another one
+with the message fixed. Note that once mmutf8fix has run, access to the
+original message is no longer possible.
+
+::
+
+ module(load="mmutf8fix") action(type="omfile"
+ file="/path/to/non-fixed.log") action(type="mmutf8fix")
+ action(type="omfile" file="/path/to/fixed.log")
+
+In this sample, we fix only messages originating from host 10.0.0.1.
+
+::
+
+ module(load="mmutf8fix") if $fromhost-ip == "10.0.0.1" then
+ action(type="mmutf8fix") # all other actions here...
+
+This is mostly the same as the previous sample, but uses
+"controlcharacters" processing mode.
+
+::
+
+ module(load="mmutf8fix") if $fromhost-ip == "10.0.0.1" then
+ action(type="mmutf8fix" mode="controlcharacters") # all other actions here...
+
diff --git a/source/configuration/modules/module_workflow.png b/source/configuration/modules/module_workflow.png
new file mode 100644
index 0000000..e1a72e9
--- /dev/null
+++ b/source/configuration/modules/module_workflow.png
Binary files differ
diff --git a/source/configuration/modules/omamqp1.rst b/source/configuration/modules/omamqp1.rst
new file mode 100644
index 0000000..c5e2e07
--- /dev/null
+++ b/source/configuration/modules/omamqp1.rst
@@ -0,0 +1,476 @@
+*****************************************
+omamqp1: AMQP 1.0 Messaging Output Module
+*****************************************
+
+=========================== ===========================================================================
+**Module Name:** **omamqp1**
+**Available Since:** **8.17.0**
+**Original Author:** Ken Giusti <kgiusti@gmail.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+This module provides the ability to send log messages via an AMQP 1.0
+compliant message bus. It puts the log messages into an AMQP
+message and sends the message to a destination on the bus.
+
+
+Notable Features
+================
+
+- :ref:`omamqp1-message-format`
+- :ref:`omamqp1-interoperability`
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Action Parameters
+-----------------
+
+Host
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "5672", "yes", "none"
+
+The address of the message bus in *host[:port]* format.
+The port defaults to 5672 if absent. Examples: *"localhost"*,
+*"127.0.0.1:9999"*, *"bus.someplace.org"*
+
+
+Target
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "yes", "none"
+
+The destination for the generated messages. This can be
+the name of a queue or topic. On some message buses it may be
+necessary to create this target manually. Example: *"amq.topic"*
+
+
+Username
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+Used by SASL to authenticate with the message bus.
+
+
+Password
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+Used by SASL to authenticate with the message bus.
+
+
+Template
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "RSYSLOG_FileFormat", "no", "none"
+
+Format for the log messages.
+
+
+idleTimeout
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "none"
+
+The idle timeout in seconds. This enables connection
+heartbeats and is used to detect a failed connection to the message
+bus. Set to zero to disable.
+
+
+reconnectDelay
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "5", "no", "none"
+
+The time in seconds this module will delay before
+attempting to re-establish a failed connection to the message bus.
+
+
+MaxRetries
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "10", "no", "none"
+
+The number of times an undeliverable message is
+re-sent to the message bus before it is dropped. This is unrelated
+to rsyslog's action.resumeRetryCount. Once the connection to the
+message bus is active this module is ready to receive log messages
+from rsyslog (i.e. the module has 'resumed'). Even though the
+connection is active, any particular message may be rejected by the
+message bus (e.g. 'unrouteable'). The module will retry
+(e.g. 'suspend') for up to *maxRetries* attempts before discarding
+the message as undeliverable. Setting this to zero disables the
+limit and unrouteable messages will be retried as long as the
+connection stays up. You probably do not want that to
+happen.
+
+
+DisableSASL
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "none"
+
+Setting this to a non-zero value will disable SASL
+negotiation. Only necessary if the message bus does not offer SASL
+support.
+
+
+Dependencies
+============
+
+* `libqpid-proton <http://qpid.apache.org/proton>`_
+
+Configure
+=========
+
+.. code-block:: none
+
+ ./configure --enable-omamqp1
+
+
+.. _omamqp1-message-format:
+
+Message Format
+==============
+
+Messages sent from this module to the message bus contain an AMQP List
+in the message body. This list contains one or more log messages as
+AMQP String types. Each string entry is a single log message. The
+list is ordered such that the oldest log appears at the front of the
+list (e.g. list index 0), whilst the most recent log is at the end of
+the list.
+
+
+.. _omamqp1-interoperability:
+
+Interoperability
+================
+
+The output plugin has been tested against the following messaging systems:
+
+* `QPID C++ Message Broker <http://qpid.apache.org/components/cpp-broker>`_
+* `QPID Dispatch Message Router <http://qpid.apache.org/components/dispatch-router>`_
+
+
+TODO
+====
+
+- Add support for SSL connections.
+
+
+Examples
+========
+
+Example 1
+---------
+
+This example shows a minimal configuration. The module will attempt
+to connect to a QPID broker at *broker.amqp.org*. Messages are
+sent to the *amq.topic* topic, which exists on the broker by default:
+
+.. code-block:: none
+
+ module(load="omamqp1")
+ action(type="omamqp1"
+ host="broker.amqp.org"
+ target="amq.topic")
+
+
+Example 2
+---------
+
+This example forces rsyslogd to authenticate with the message bus.
+The message bus must be provisioned such that user *joe* is allowed to
+send to the message bus. All messages are sent to *log-queue*. It is
+assumed that *log-queue* has already been provisioned:
+
+.. code-block:: none
+
+ module(load="omamqp1")
+
+ action(type="omamqp1"
+ host="bus.amqp.org"
+ target="log-queue"
+ username="joe"
+ password="trustno1")
+
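+
+Example 3
+---------
+
+This example is a sketch of the connection-tuning parameters: it enables
+connection heartbeats, shortens the reconnect delay and caps delivery
+retries. The host and target values are placeholders:
+
+.. code-block:: none
+
+   module(load="omamqp1")
+
+   action(type="omamqp1"
+          host="bus.amqp.org"
+          target="log-queue"
+          idleTimeout="60"
+          reconnectDelay="2"
+          maxRetries="5")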
+
+Notes on use with the QPID C++ broker (qpidd)
+=============================================
+
+*Note well*: These notes assume use of version 0.34 of the QPID C++
+broker. Previous versions may not be fully compatible.
+
+To use the Apache QPID C++ broker **qpidd** as the message bus, a
+version of qpidd that supports the AMQP 1.0 protocol must be used.
+
+Since qpidd can be packaged without AMQP 1.0 support you should verify
+AMQP 1.0 has been enabled by checking for AMQP 1.0 related options in
+the qpidd help text. For example:
+
+.. code-block:: none
+
+ qpidd --help
+
+ ...
+
+ AMQP 1.0 Options:
+ --domain DOMAIN Domain of this broker
+ --queue-patterns PATTERN Pattern for on-demand queues
+ --topic-patterns PATTERN Pattern for on-demand topics
+
+
+If no AMQP 1.0 related options appear in the help output then your
+instance of qpidd does not support AMQP 1.0 and cannot be used with
+this output module.
+
+The destination for messages (the target) *must* be created before log
+messages arrive. This can be done using the qpid-config tool.
+
+Example:
+
+.. code-block:: none
+
+ qpid-config add queue rsyslogd
+
+
+Alternatively the target can be created on demand by configuring a
+queue-pattern (or topic-pattern) that matches the target. To do this,
+add a *queue-patterns* or *topic-patterns* configuration directive to
+the qpidd configuration file /etc/qpid/qpidd.conf.
+
+For example to have qpidd automatically create a queue named
+*rsyslogd* add the following to the qpidd configuration file:
+
+.. code-block:: none
+
+ queue-patterns=rsyslogd
+
+
+or, if a topic behavior is desired instead of a queue:
+
+.. code-block:: none
+
+ topic-patterns=rsyslogd
+
+
+These dynamic targets are auto-delete and will be destroyed once there
+are no longer any subscribers or queue-bound messages.
+
+Versions of qpidd <= 0.34 also need to have the SASL service name set
+to *"amqp"* if SASL authentication is used. Add this to the qpidd.conf
+file:
+
+.. code-block:: none
+
+ sasl-service-name=amqp
+
+
+Notes on use with the QPID Dispatch Router (qdrouterd)
+======================================================
+
+*Note well*: These notes assume use of version 0.5 of the QPID Dispatch
+Router **qdrouterd**. Previous versions may not be fully compatible.
+
+The default qdrouterd configuration does not have SASL authentication
+turned on. If SASL authentication is required you must configure SASL
+in the qdrouter configuration file /etc/qpid-dispatch/qdrouterd.conf
+
+First create a SASL configuration file for qdrouterd. This
+configuration file is usually /etc/sasl2/qdrouterd.conf, but its
+default location may vary depending on your platform's configuration.
+
+This document assumes you understand how to properly configure Cyrus
+SASL.
+
+Here is an example qdrouterd SASL configuration file that allows the
+client to use either the **DIGEST-MD5** or **PLAIN** authentication
+mechanisms and specifies the path to the SASL user credentials
+database:
+
+.. code-block:: none
+
+ pwcheck_method: auxprop
+ auxprop_plugin: sasldb
+ sasldb_path: /var/lib/qdrouterd/qdrouterd.sasldb
+ mech_list: DIGEST-MD5 PLAIN
+
+
+Once a SASL configuration file has been set up for qdrouterd the path
+to the directory holding the configuration file and the name of the
+configuration file itself **without the '.conf' suffix** must be added
+to the /etc/qpid-dispatch/qdrouterd.conf configuration file. This is
+done by adding *saslConfigPath* and *saslConfigName* to the
+*container* section of the configuration file. For example, assuming
+the file /etc/sasl2/qdrouterd.conf holds the qdrouterd SASL
+configuration:
+
+.. code-block:: none
+
+ container {
+ workerThreads: 4
+ containerName: Qpid.Dispatch.Router.A
+ saslConfigPath: /etc/sasl2
+ saslConfigName: qdrouterd
+ }
+
+
+In addition, the listener used by the omamqp1 module to connect to
+qdrouterd must have SASL authentication turned on. This is done by
+setting the *authenticatePeer* attribute to 'yes' in the
+corresponding *listener* entry:
+
+.. code-block:: none
+
+ listener {
+ addr: 0.0.0.0
+ port: amqp
+ authenticatePeer: yes
+ }
+
+
+This should complete the SASL setup needed by qdrouterd.
+
+The target address used as the destination for the log messages must
+be picked with care. qdrouterd uses the prefix of the target address
+to determine the forwarding pattern used for messages sent to that
+target address. Addresses starting with the prefix *queue* are
+distributed to only one message receiver. If there are multiple
+message consumers listening to that target address only one listener
+will receive the message - mimicking the behavior of a queue with
+competing subscribers. For example: *queue/rsyslogd*
+
+If a multicast pattern is desired - where all active listeners receive
+their own copy of the message - the target address prefix *multicast*
+may be used. For example: *multicast/rsyslogd*
+
+Note well: if there are no active receivers for the log messages the
+messages will be rejected by qdrouterd since the messages are
+undeliverable. In this case the omamqp1 module will return a
+**SUSPENDED** status to the rsyslogd main task. rsyslogd may then
+re-submit the rejected log messages to the module which will attempt
+to send them again. This retry option is configured via rsyslogd - it
+is not part of this module. Refer to the rsyslogd actions
+documentation.
+
+
+Using qdrouterd in combination with qpidd
+=========================================
+
+A qdrouterd-based message bus can use a broker as a message storage
+mechanism for those that require broker-based message services (such
+as a message store). This section explains how to configure qdrouterd
+and qpidd for this type of deployment. Please read the above notes
+for deploying qpidd and qdrouterd first.
+
+Each qdrouterd instance that is to connect the broker to the message
+bus must define a *connector* section in the qdrouterd.conf file.
+This connector contains the addressing information necessary to have
+the message bus set up a connection to the broker. For example, if a
+broker is available on host broker.host.com at port 5672:
+
+.. code-block:: none
+
+ connector {
+ name: mybroker
+ role: on-demand
+ addr: broker.host.com
+ port: 5672
+ }
+
+
+In order to route messages to and from the broker, a static *link
+route* must be configured on qdrouterd. This link route contains a
+target address prefix and the name of the connector to use for
+forwarding matching messages.
+
+For example, to have qdrouterd forward messages that have a target
+address prefixed by "Broker" to the connector defined above, the
+following link pattern must be added to the qdrouterd.conf
+configuration:
+
+.. code-block:: none
+
+ linkRoutePattern {
+ prefix: /Broker/
+ connector: mybroker
+ }
+
+
+A queue must then be created on the broker. The name of the queue
+must be prefixed by the same prefix specified in the linkRoutePattern
+entry. For example:
+
+.. code-block:: none
+
+ $ qpid-config add queue Broker/rsyslogd
+
+
+Lastly, use the name of the queue as the target address for the omamqp1
+module action. For example, assuming qdrouterd is listening on local
+port 5672:
+
+.. code-block:: none
+
+ action(type="omamqp1"
+ host="localhost:5672"
+ target="Broker/rsyslogd")
+
+
diff --git a/source/configuration/modules/omazureeventhubs.rst b/source/configuration/modules/omazureeventhubs.rst
new file mode 100644
index 0000000..2c27660
--- /dev/null
+++ b/source/configuration/modules/omazureeventhubs.rst
@@ -0,0 +1,412 @@
+**********************************************************
+omazureeventhubs: Microsoft Azure Event Hubs Output Module
+**********************************************************
+
+=========================== ===========================================================================
+**Module Name:** **omazureeventhubs**
+**Author:** Andre Lorbach <alorbach@adiscon.com>
+**Available since:** v8.2304
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+The purpose of the rsyslog output plugin omazureeventhubs is to provide a
+fast and reliable way to send log data from rsyslog to Microsoft Azure Event Hubs.
+This plugin uses the Advanced Message Queuing Protocol (AMQP) to securely transmit
+log data from rsyslog to Microsoft Azure, where it can be centralized, analyzed, and stored.
+The plugin uses the "Qpid Proton C API" library to implement the AMQP protocol,
+providing a flexible and efficient solution for sending log data to Microsoft Azure Event Hubs.
+
+AMQP is a reliable and secure binary protocol for exchanging messages between applications,
+and it is widely used in the cloud and enterprise messaging systems. The use of AMQP in the
+omazureeventhubs plugin, in combination with the Qpid Proton C API library, ensures that
+log data is transmitted in a robust and reliable manner, even in the presence of network
+outages or other disruptions.
+
+The omazureeventhubs plugin supports various configuration options, allowing organizations to
+customize their log data pipeline to meet their specific requirements.
+This includes options for specifying the Event Hubs endpoint, port, and authentication credentials.
+With this plugin, organizations can easily integrate their rsyslog infrastructure with
+Microsoft Azure Event Hubs, providing a scalable and secure solution for log management.
+The plugin is designed to work with the latest versions of rsyslog and Microsoft Azure,
+ensuring compatibility and reliability.
+
+
+Requirements
+============
+
+To output logs from rsyslog to Microsoft Azure Event Hubs, you will need to fulfill the
+following requirements:
+
+- Qpid Proton C Library Version 0.13 or higher including Qpid Proton ProActor
+- The AMQP protocol requires firewall ports 5671 and 443 (TCP) to be open for outgoing connections.
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Action Parameters
+-----------------
+
+azurehost
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "yes", "none"
+
+Specifies the fully qualified domain name (FQDN) of the Event Hubs instance that
+the rsyslog output plugin should connect to. The format of the hostname should
+be **<namespace>.servicebus.windows.net**, where **<namespace>** is the name
+of the Event Hubs namespace that was created in Microsoft Azure.
+
+
+azureport
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "5671", "no", "none"
+
+Specifies the TCP port number used by the Event Hubs instance for incoming connections.
+The default port number for Event Hubs is 5671 for connections over the
+AMQP Secure Sockets Layer (SSL) protocol. This property is usually optional in the configuration
+file of the rsyslog output plugin, as the default value of 5671 is typically used.
+
+
+azure_key_name
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive", "Available since"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "yes", "none"
+
+The configuration property for the Azure key name used to connect to Microsoft Azure Event Hubs is
+typically referred to as the "Event Hubs shared access key name". It specifies the name of
+the shared access key that is used to authenticate and authorize connections to the Event Hubs instance.
+The shared access key is a secret string that is used to securely sign and validate requests
+to the Event Hubs instance.
+
+
+azure_key
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "yes", "none"
+
+The configuration property for the Azure key used to connect to Microsoft Azure Event Hubs is
+typically referred to as the "Event Hubs shared access key". It specifies the value of the
+shared access key that is used to authenticate and authorize connections to the Event Hubs instance.
+The shared access key is a secret string that is used to securely sign and validate requests
+to the Event Hubs instance.
+
+
+container
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "yes", "none"
+
+The configuration property for the Azure container used to connect to Microsoft Azure Event Hubs is
+typically referred to as the "Event Hubs Instance". It specifies the name of the Event Hubs Instance,
+to which log data should be sent.
+
+
+template
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "RSYSLOG_FileFormat", "no", "none"
+
+Specifies the template used to format and structure the log messages that will be sent from rsyslog to
+Microsoft Azure Event Hubs.
+
+The message template can include rsyslog variables, such as the timestamp, hostname, or process name,
+and it can use rsyslog macros, such as $rawmsg or $json, to control the formatting of log data.
+
+For a message template sample with valid JSON output see the sample below:
+
+.. code-block:: none
+
+ template(name="generic" type="list" option.jsonf="on") {
+ property(outname="timestamp" name="timereported" dateFormat="rfc3339" format="jsonf")
+ constant(value="\"source\": \"EventHubMessage\", ")
+ property(outname="host" name="hostname" format="jsonf")
+ property(outname="severity" name="syslogseverity" caseConversion="upper" format="jsonf" datatype="number")
+ property(outname="facility" name="syslogfacility" format="jsonf" datatype="number")
+ property(outname="appname" name="syslogtag" format="jsonf")
+ property(outname="message" name="msg" format="jsonf" )
+ property(outname="etlsource" name="$myhostname" format="jsonf")
+ }
+
+
+amqp_address
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+The configuration property for the AMQP address used to connect to Microsoft Azure Event Hubs is
+typically referred to as the "Event Hubs connection string". It specifies the URL that is used to connect
+to the target Event Hubs instance in Microsoft Azure. If the amqp_address is configured, the configuration
+parameters for **azurehost**, **azureport**, **azure_key_name** and **azure_key** will be ignored.
+
+A sample Event Hubs connection string URL is:
+
+.. code-block:: none
+
+ amqps://[Shared access key name]:[Shared access key]@[Event Hubs namespace].servicebus.windows.net/[Event Hubs Instance]
+
+
+eventproperties
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "array", "none", "no", "none"
+
+The **eventproperties** configuration property is an array property used to add key-value pairs as additional properties to the
+encoded AMQP message object, providing additional information about the log event.
+These properties can be used for filtering, routing, and grouping log events in Azure Event Hubs.
+
+The event properties property is specified as a list of key-value pairs separated by comma,
+with the key and value separated by an equal sign.
+
+For example, the following configuration setting adds two event properties:
+
+.. code-block:: none
+
+ eventproperties=[ "Table=TestTable",
+ "Format=JSON"]
+
+In this example, the Table and Format keys are added to the message object as event properties,
+with the corresponding values of TestTable and JSON, respectively.
+
+
+closeTimeout
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "2000", "no", "none"
+
+The close timeout configuration property is used in the rsyslog output module
+to specify the amount of time the output module should wait for a response
+from Microsoft Azure Event Hubs before timing out and closing the connection.
+
+This property is used to control the amount of time the output module will wait
+for a response from the target Event Hubs instance before giving up and
+assuming that the connection has failed. The close timeout property is specified in milliseconds.
+
+
+statsname
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "omazureeventhubs", "no", "none"
+
+The name assigned to statistics specific to this action instance. The supported set of
+statistics tracked for this action instance are **submitted**, **accepted**, **failures** and **failures_other**.
+See the :ref:`statistics-counter_omazureeventhubs_label` section for more details.
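+
+A sketch combining the closeTimeout and statsname parameters; the
+connection string is a placeholder:
+
+.. code-block:: none
+
+   action(type="omazureeventhubs"
+          amqp_address="amqps://<AccessKeyName>:<AccessKey>@<EventHubsNamespace>.servicebus.windows.net/<EventHubsInstance>"
+          closeTimeout="5000"
+          statsname="eventhubs_stats")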
+
+
+.. _statistics-counter_omazureeventhubs_label:
+
+Statistic Counter
+=================
+
+This plugin maintains global :doc:`statistics <../rsyslog_statistic_counter>` for omazureeventhubs that
+accumulate across all action instances. The statistic origin is named "omazureeventhubs" with the following counters:
+
+
+- **submitted** - This counter tracks the number of log messages that have been submitted by the rsyslog process
+ to the output module for delivery to Microsoft Azure Event Hubs.
+
+- **accepted** - This counter tracks the number of log messages that have been successfully delivered to
+ Microsoft Azure Event Hubs by the output module.
+
+- **failures** - This counter tracks the number of log messages that have failed to be delivered to
+ Microsoft Azure Event Hubs due to various error conditions, such as network connectivity issues,
+ incorrect configuration settings, or other technical problems. This counter provides important information about
+ any issues that may be affecting the delivery of log data to Microsoft Azure Event Hubs.
+
+- **failures_other** - This counter tracks the number of log messages that have failed to be delivered due to
+ other error conditions, such as incorrect payload format or unexpected data.
+
+These statistics counters are updated in real-time by the rsyslog output module as log data is processed,
+and they provide valuable information about the performance and operation of the output module.
+
+For multiple actions using statistics callback, there will be one record for each action.
+
+.. _omazureeventhubs-examples-label:
+
+Examples
+========
+
+Example 1: Use AMQP URL
+-----------------------
+
+The following sample does the following:
+
+- loads the omazureeventhubs module
+- outputs all logs to Microsoft Azure Event Hubs with standard template
+- Uses amqp_address parameter
+
+.. code-block:: none
+
+ module(load="omazureeventhubs")
+ action(type="omazureeventhubs" amqp_address="amqps://<AccessKeyName>:<AccessKey>@<EventHubsNamespace>.servicebus.windows.net/<EventHubsInstance>")
+
+
+Example 2: RAW Format
+---------------------
+
+The following sample does the following:
+
+- loads the omazureeventhubs module
+- outputs all logs to Microsoft Azure Event Hubs with simple custom template
+- Uses **azurehost**, **azureport**, **azure_key_name** and **azure_key**
+ parameters instead of **amqp_address** parameter
+
+.. code-block:: none
+
+ module(load="omazureeventhubs")
+ template(name="outfmt" type="string" string="%msg%\n")
+
+ action(type="omazureeventhubs"
+ azurehost="<EventHubsNamespace>.servicebus.windows.net"
+ azureport="5671"
+ azure_key_name="<AccessKeyName>"
+ azure_key="<AccessKey>"
+ container="<EventHubsInstance>"
+ template="outfmt"
+ )
+
+
+Example 3: JSON Format
+----------------------
+
+The following sample does the following:
+
+- loads the omazureeventhubs module
+- outputs all logs to Microsoft Azure Event Hubs with JSON custom template
+- Uses **azurehost**, **azureport**, **azure_key_name** and **azure_key**
+ parameters instead of **amqp_address** parameter
+- Uses **eventproperties** array parameter to set additional message properties
+
+.. code-block:: none
+
+ module(load="omazureeventhubs")
+ template(name="outfmtjson" type="list" option.jsonf="on") {
+ property(outname="timestamp" name="timereported" dateFormat="rfc3339" format="jsonf")
+ constant(value="\"source\": \"EventHubMessage\", ")
+ property(outname="host" name="hostname" format="jsonf")
+ property(outname="severity" name="syslogseverity" caseConversion="upper" format="jsonf" datatype="number")
+ property(outname="facility" name="syslogfacility" format="jsonf" datatype="number")
+ property(outname="appname" name="syslogtag" format="jsonf")
+ property(outname="message" name="msg" format="jsonf" )
+ property(outname="etlsource" name="$myhostname" format="jsonf")
+ }
+
+ action(type="omazureeventhubs"
+ azurehost="<EventHubsNamespace>.servicebus.windows.net"
+ azureport="5671"
+ azure_key_name="<AccessKeyName>"
+ azure_key="<AccessKey>"
+ container="<EventHubsInstance>"
+ template="outfmtjson"
+ eventproperties=[ "Table=CustomTable",
+ "Format=JSON"]
+ )
+
+Example 4: High Performance
+---------------------------
+
+To achieve high performance when sending syslog data to Azure Event Hubs, you should consider configuring your output module to use multiple worker instances. This can be done by setting the "queue.workerthreads" parameter in the configuration file.
+
+The following example is for high performance (Azure Premium Tier) and does the following:
+
+- loads the omazureeventhubs module
+- outputs all logs to Microsoft Azure Event Hubs with JSON custom template
+- Uses **azurehost**, **azureport**, **azure_key_name** and **azure_key**
+ parameters instead of **amqp_address** parameter
+- Uses **eventproperties** array parameter to set additional message properties
+- Uses a **LinkedList** in-memory queue, which enables multiple omazureeventhubs workers to run at the same time. Using a dequeue batch size of 2000 and a dequeue timeout of 1000 has shown very good results in performance tests.
+- Uses 8 worker threads in this example, which will be spawned automatically if more than 2000 messages are waiting in the queue. To achieve more performance, the number can be increased.
+
+.. code-block:: none
+
+ module(load="omazureeventhubs")
+ template(name="outfmtjson" type="list" option.jsonf="on") {
+ property(outname="timestamp" name="timereported" dateFormat="rfc3339" format="jsonf")
+ constant(value="\"source\": \"EventHubMessage\", ")
+ property(outname="host" name="hostname" format="jsonf")
+ property(outname="severity" name="syslogseverity" caseConversion="upper" format="jsonf" datatype="number")
+ property(outname="facility" name="syslogfacility" format="jsonf" datatype="number")
+ property(outname="appname" name="syslogtag" format="jsonf")
+ property(outname="message" name="msg" format="jsonf" )
+ property(outname="etlsource" name="$myhostname" format="jsonf")
+ }
+
+ action(type="omazureeventhubs"
+ azurehost="<EventHubsNamespace>.servicebus.windows.net"
+ azureport="5671"
+ azure_key_name="<AccessKeyName>"
+ azure_key="<AccessKey>"
+ container="<EventHubsInstance>"
+ template="outfmtjson"
+ eventproperties=[ "Table=CustomTable",
+ "Format=JSON"]
+ queue.type="linkedList"
+ queue.size="200000"
+ queue.saveonshutdown="on"
+ queue.dequeueBatchSize="2000"
+ queue.minDequeueBatchSize.timeout="1000"
+ queue.workerThreads="8"
+ queue.workerThreadMinimumMessages="2000"
+ queue.timeoutWorkerthreadShutdown="10000"
+ queue.timeoutshutdown="1000"
+ )
+
diff --git a/source/configuration/modules/omclickhouse.rst b/source/configuration/modules/omclickhouse.rst
new file mode 100644
index 0000000..499d8bd
--- /dev/null
+++ b/source/configuration/modules/omclickhouse.rst
@@ -0,0 +1,324 @@
+**************************************
+omclickhouse: ClickHouse Output Module
+**************************************
+
+=========================== ===========================================================================
+**Module Name:** **omclickhouse**
+**Author:** Pascal Withopf <pwithopf@adiscon.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+This module provides native support for logging to
+`ClickHouse <https://clickhouse.yandex/>`_.
+To enable the module use "--enable-clickhouse" while configuring rsyslog.
+Tests for the testbench can be enabled with "--enable-clickhouse-tests".
+
+
+Notable Features
+================
+
+- :ref:`omclickhouse-statistic-counter`
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Action Parameters
+-----------------
+
+Server
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "localhost", "no", "none"
+
+The address of a ClickHouse server.
+
+.. _port:
+
+Port
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "8123", "no", "none"
+
+HTTP port to use to connect to ClickHouse.
+
+
+usehttps
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "none"
+
+Selects the scheme used to send events to ClickHouse: "on" uses HTTPS,
+"off" uses plain HTTP.
+
+
+template
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "StdClickHouseFmt", "no", "none"
+
+This is the message format that will be sent to ClickHouse. The
+resulting string needs to be a valid INSERT Query, otherwise ClickHouse
+will return an error. Defaults to:
+
+.. code-block:: none
+
+ "\"INSERT INTO rsyslog.SystemEvents (severity, facility, "
+ "timestamp, hostname, tag, message) VALUES (%syslogseverity%, %syslogfacility%, "
+ "'%timereported:::date-unixtimestamp%', '%hostname%', '%syslogtag%', '%msg%')\""
+
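+As a minimal sketch, a custom template can be defined and referenced from the
+action; the template name below is arbitrary and the table layout simply mirrors
+the default shown above:
+
+.. code-block:: none
+
+ template(name="myClickHouseFmt" type="string"
+          string="INSERT INTO rsyslog.SystemEvents (severity, facility, timestamp, hostname, tag, message) VALUES (%syslogseverity%, %syslogfacility%, '%timereported:::date-unixtimestamp%', '%hostname%', '%syslogtag%', '%msg%')")
+
+ action(type="omclickhouse" template="myClickHouseFmt")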
+
+bulkmode
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "none"
+
+The "off" setting means logs are shipped one by one, each in its own
+HTTP request.
+The default "on" will send multiple logs in the same request. This is
+recommended, because it is many times faster than when bulkmode is turned off.
+The maximum number of logs sent in a single bulk request depends on your
+maxbytes and queue settings - usually limited by the `dequeue batch
+size <http://www.rsyslog.com/doc/node35.html>`_. More information
+about queues can be found
+`here <http://www.rsyslog.com/doc/node32.html>`_.
+
+
+maxbytes
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "Size", "104857600/100mb", "no", "none"
+
+When shipping logs with bulkmode **on**, maxbytes specifies the maximum
+size of the request body sent to ClickHouse. Logs are batched until
+either the buffer reaches maxbytes or the `dequeue batch
+size <http://www.rsyslog.com/doc/node35.html>`_ is reached.
+
+
+user
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "default", "no", "none"
+
+If you have basic HTTP authentication deployed, you can specify your username here.
+
+
+pwd
+^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "", "no", "none"
+
+Password for basic authentication.
+
+
+errorFile
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+If specified, records that failed in bulk mode are written to this file, including
+their error cause. Rsyslog itself does not process the file any further, but the
+idea behind this mechanism is that the user can create a script to periodically
+inspect the error file and react appropriately. As the complete request is
+included, it is possible to simply resubmit messages from that script.
+
+*Please note:* when rsyslog has problems connecting to ClickHouse, a general
+error is assumed. However, if we receive negative responses during batch
+processing, we assume an error in the data itself (like a mandatory field that is
+not filled in, a format error or something along those lines). Such errors
+cannot be solved by simply resubmitting the record. As such, they are written
+to the error file so that the user (or a script) can examine them and act appropriately.
+Note that e.g. after a schema reconfiguration (e.g. dropping a mandatory
+attribute) a resubmit may be successful.
+
+
+allowUnsignedCerts
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "none"
+
+If set to "on", the module accepts connections to servers that have unsigned
+certificates. If this parameter is disabled, the module will verify whether
+the certificates are authentic.
+
+
+skipverifyhost
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "boolean", "off", "no", "none"
+
+If `"on"`, this will set the curl `CURLOPT_SSL_VERIFYHOST` option to
+`0`. You are strongly discouraged from setting this to `"on"`. It is
+primarily useful for debugging or testing.
+
+
+healthCheckTimeout
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "int", "3500", "no", "none"
+
+This parameter sets the timeout for checking the availability
+of ClickHouse. Value is given in milliseconds.
+
+
+timeout
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "int", "0", "no", "none"
+
+This parameter sets the timeout for sending data to ClickHouse.
+Value is given in milliseconds.
+
+
+.. _omclickhouse-statistic-counter:
+
+Statistic Counter
+=================
+
+This plugin maintains global :doc:`statistics <../rsyslog_statistic_counter>`,
+which accumulate all action instances. The statistic is named "omclickhouse".
+Parameters are:
+
+- **submitted** - number of messages submitted for processing (with both
+ success and error result)
+
+- **fail.httprequests** - the number of times an HTTP request failed. Note
+ that a single HTTP request may be used to submit multiple messages, so this
+ number may be (much) lower than failed.http.
+
+- **failed.http** - number of message failures due to connection-like problems
+ (things like remote server down, broken link, etc.)
+
+- **fail.clickhouse** - number of failures due to a ClickHouse error reply. Note that
+ this counter does NOT count the number of failed messages but the number of
+ times a failure occurred (a potentially much smaller number). Counting messages
+ would be quite performance-intensive and is thus not done.
+
+- **response.success** - number of records successfully sent in bulk index
+ requests - counts the number of successful responses
+
+
+**The fail.httprequests and failed.http counters reflect only failures that
+omclickhouse detected.** Once it detects problems, it (usually, depending on
+circumstances) tells the rsyslog core that it wants to be suspended until the
+situation clears (this is a requirement for rsyslog output modules). Once it is
+suspended, it does NOT receive any further messages. Depending on the user
+configuration, messages will be lost during this period. Those lost messages will
+NOT be counted by impstats (as it does not see them).
+
+
+Examples
+========
+
+Example 1
+---------
+
+The following sample does the following:
+
+- loads the omclickhouse module
+- outputs all logs to ClickHouse using the default settings
+
+.. code-block:: none
+
+ module(load="omclickhouse")
+ action(type="omclickhouse")
+
+
+Example 2
+---------
+
+In this example, the REST URL is built from the specified parameters and
+plain HTTP is used instead of HTTPS.
+
+.. code-block:: none
+
+ module(load="omclickhouse")
+ action(type="omclickhouse" server="127.0.0.1" port="8124" user="user1" pwd="pwd1"
+ usehttps="off")
+
+
+Example 3
+---------
+
+This example will send messages in batches of up to 10MB.
+If an error occurs, it will be written to the error file.
+
+.. code-block:: none
+
+ module(load="omclickhouse")
+ action(type="omclickhouse" maxbytes="10mb" errorfile="clickhouse-error.log")
+
+
diff --git a/source/configuration/modules/omelasticsearch.rst b/source/configuration/modules/omelasticsearch.rst
new file mode 100644
index 0000000..ee561fc
--- /dev/null
+++ b/source/configuration/modules/omelasticsearch.rst
@@ -0,0 +1,1102 @@
+********************************************
+omelasticsearch: Elasticsearch Output Module
+********************************************
+
+=========================== ===========================================================================
+**Module Name:**  **omelasticsearch**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+This module provides native support for logging to
+`Elasticsearch <http://www.elasticsearch.org/>`_.
+
+
+Notable Features
+================
+
+- :ref:`omelasticsearch-statistic-counter`
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Action Parameters
+-----------------
+
+Server
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "array", "none", "no", "none"
+
+An array of Elasticsearch servers in the specified format. If no scheme
+is specified, it will be chosen according to usehttps_. If no port is
+specified, serverport_ will be used. Defaults to "localhost".
+
+Requests to Elasticsearch will be load-balanced between all servers in
+round-robin fashion.
+
+.. code-block:: none
+
+ Examples:
+ server="localhost:9200"
+ server=["elasticsearch1", "elasticsearch2"]
+
+
+.. _serverport:
+
+Serverport
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "9200", "no", "none"
+
+Default HTTP port to use to connect to Elasticsearch if none is specified
+on a server_. Defaults to 9200.
+
+
+.. _healthchecktimeout:
+
+HealthCheckTimeout
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "3500", "no", "none"
+
+Specifies the number of milliseconds to wait for a successful health check
+on a server_. Before trying to submit events to Elasticsearch, rsyslog will
+execute an *HTTP HEAD* to ``/_cat/health`` and expect an *HTTP OK* within
+this timeframe. Defaults to 3500.
+
+*Note, the health check is verifying connectivity only, not the state of
+the Elasticsearch cluster.*
+
+
+.. _esVersion_major:
+
+esVersion.major
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "none"
+
+Elasticsearch is notoriously bad at maintaining backwards compatibility. For this
+reason, this setting can be used to configure the server's major version number (e.g. 7, 8, ...).
+As far as we know, breaking changes only happen with major version changes.
+As of now, only the value 8 triggers API changes. All other values select
+pre-version-8 API usage.
+
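+For example, when indexing into an Elasticsearch 8 cluster, the version can be
+declared as follows (a minimal sketch; the server name is a placeholder):
+
+.. code-block:: none
+
+ module(load="omelasticsearch")
+ action(type="omelasticsearch" server="es8.example.com" esVersion.major="8")
+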
+.. _searchIndex:
+
+searchIndex
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+`Elasticsearch
+index <http://www.elasticsearch.org/guide/appendix/glossary.html#index>`_
+to send your logs to. Defaults to "system"
+
+
+.. _dynSearchIndex:
+
+dynSearchIndex
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+Whether the string provided for searchIndex_ should be taken as a
+`rsyslog template <http://www.rsyslog.com/doc/rsyslog_conf_templates.html>`_.
+Defaults to "off", which means the index name will be taken
+literally. Otherwise, it will look for a template with that name, and
+the resulting string will be the index name. For example, let's
+assume you define a template named "date-days" containing
+"%timereported:1:10:date-rfc3339%". Then, with dynSearchIndex="on",
+if you say searchIndex="date-days", each log will be sent to an
+index named after the first 10 characters of the timestamp, like
+"2013-03-22".
+
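+A minimal sketch of the setup described above could look like this:
+
+.. code-block:: none
+
+ template(name="date-days" type="string" string="%timereported:1:10:date-rfc3339%")
+ action(type="omelasticsearch" searchIndex="date-days" dynSearchIndex="on")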
+
+.. _searchType:
+
+searchType
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+`Elasticsearch
+type <http://www.elasticsearch.org/guide/appendix/glossary.html#type>`_
+to send your index to. Defaults to "events".
+Setting this parameter to an empty string will cause the type to be omitted,
+which is required since Elasticsearch 7.0. See
+`Elasticsearch documentation <https://www.elastic.co/guide/en/elasticsearch/reference/current/removal-of-types.html>`_
+for more information.
+
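+For example, to omit the type entirely (as required by Elasticsearch 7 and
+later), the parameter can be set to an empty string:
+
+.. code-block:: none
+
+ action(type="omelasticsearch" searchIndex="system" searchType="")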
+
+.. _dynSearchType:
+
+dynSearchType
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+Like dynSearchIndex_, it allows you to specify a
+`rsyslog template <http://www.rsyslog.com/doc/rsyslog_conf_templates.html>`_
+for searchType_, instead of a static string.
+
+
+.. _pipelineName:
+
+pipelineName
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+The `ingest node <https://www.elastic.co/guide/en/elasticsearch/reference/current/ingest.html>`_
+pipeline name to be included in the request. This allows pre-processing
+of events before indexing them. By default, events are not sent to a pipeline.
+
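+As a minimal sketch, assuming an ingest pipeline named "my-pipeline" has already
+been created in Elasticsearch, it can be referenced like this:
+
+.. code-block:: none
+
+ action(type="omelasticsearch" searchIndex="system" pipelineName="my-pipeline")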
+
+.. _dynPipelineName:
+
+dynPipelineName
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+Like dynSearchIndex_, it allows you to specify a
+`rsyslog template <http://www.rsyslog.com/doc/rsyslog_conf_templates.html>`_
+for pipelineName_, instead of a static string.
+
+
+.. _skipPipelineIfEmpty:
+
+skipPipelineIfEmpty
+^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+When POST'ing a document, Elasticsearch does not allow an empty pipeline
+parameter value. If boolean option skipPipelineIfEmpty is set to `"on"`, the
+pipeline parameter won't be posted. Default is `"off"`.
+
+
+.. _asyncrepl:
+
+asyncrepl
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+No longer supported as ElasticSearch no longer supports it.
+
+
+.. _usehttps:
+
+usehttps
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+Default scheme to use when sending events to Elasticsearch if none is
+specified on a server_. Good for when you have
+Elasticsearch behind Apache or something else that can add HTTPS.
+Note that if you have a self-signed certificate, you'd need to install
+it first. This is done by copying the certificate to a trusted path
+and then running *update-ca-certificates*. That trusted path is
+typically */usr/local/share/ca-certificates* but check the man page of
+*update-ca-certificates* for the default path of your distro.
+
+
+.. _timeout:
+
+timeout
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "1m", "no", "none"
+
+How long Elasticsearch will wait for a primary shard to be available
+for indexing your log before sending back an error. Defaults to "1m".
+
+
+.. _indextimeout:
+
+indexTimeout
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "none"
+
+.. versionadded:: 8.2204.0
+
+Specifies the number of milliseconds to wait for a successful log indexing
+request on a server_. By default there is no timeout.
+
+
+.. _template:
+
+template
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "see below", "no", "none"
+
+This is the JSON document that will be indexed in Elasticsearch. The
+resulting string needs to be valid JSON, otherwise Elasticsearch
+will return an error. Defaults to:
+
+.. code-block:: none
+
+ $template StdJSONFmt, "{\"message\":\"%msg:::json%\",\"fromhost\":\"%HOSTNAME:::json%\",\"facility\":\"%syslogfacility-text%\",\"priority\":\"%syslogpriority-text%\",\"timereported\":\"%timereported:::date-rfc3339%\",\"timegenerated\":\"%timegenerated:::date-rfc3339%\"}"
+
+This will produce documents of the following form (pretty-printed here for
+readability):
+
+.. code-block:: none
+
+ {
+     "message": " this is a test message",
+     "fromhost": "test-host",
+     "facility": "user",
+     "priority": "info",
+     "timereported": "2013-03-12T18:05:01.344864+02:00",
+     "timegenerated": "2013-03-12T18:05:01.344864+02:00"
+ }
+
+Another template, FullJSONFmt, is also available; it includes additional fields such as programname, PROCID (usually the process ID), and MSGID.
+
+.. _bulkmode:
+
+bulkmode
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+The default "off" setting means logs are shipped one by one, each in
+its own HTTP request, using the `Index
+API <http://www.elasticsearch.org/guide/reference/api/index_.html>`_.
+Set it to "on" and it will use Elasticsearch's `Bulk
+API <http://www.elasticsearch.org/guide/reference/api/bulk.html>`_ to
+send multiple logs in the same request. The maximum number of logs
+sent in a single bulk request depends on your maxbytes_
+and queue settings -
+usually limited by the `dequeue batch
+size <http://www.rsyslog.com/doc/node35.html>`_. More information
+about queues can be found
+`here <http://www.rsyslog.com/doc/node32.html>`_.
+
+
+.. _maxbytes:
+
+maxbytes
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "100m", "no", "none"
+
+.. versionadded:: 8.23.0
+
+When shipping logs with bulkmode_ **on**, maxbytes specifies the maximum
+size of the request body sent to Elasticsearch. Logs are batched until
+either the buffer reaches maxbytes or the `dequeue batch
+size <http://www.rsyslog.com/doc/node35.html>`_ is reached. In order to
+ensure Elasticsearch does not reject requests due to content length, verify
+this value is set according to the `http.max_content_length
+<https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-http.html>`_
+setting in Elasticsearch. Defaults to 100m.
+
+
+.. _parent:
+
+parent
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+Specifying a string here will index your logs with that string as the
+parent ID of those logs. Please note that you need to define the
+`parent
+field <http://www.elasticsearch.org/guide/reference/mapping/parent-field.html>`_
+in your
+`mapping <http://www.elasticsearch.org/guide/reference/mapping/>`_
+for that to work. By default, logs are indexed without a parent.
+
+
+.. _dynParent:
+
+dynParent
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+Using the same parent for all the logs sent in the same action is
+quite unlikely. So you'd probably want to turn this "on" and specify
+a
+`rsyslog template <http://www.rsyslog.com/doc/rsyslog_conf_templates.html>`_
+that will provide meaningful parent IDs for your logs.
+
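+A minimal sketch, assuming the parent ID is carried in a (hypothetical) JSON
+property `$!parent_id` and that a suitable parent field mapping exists:
+
+.. code-block:: none
+
+ template(name="parent-template" type="string" string="%$!parent_id%")
+ action(type="omelasticsearch" parent="parent-template" dynParent="on")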
+
+.. _uid:
+
+uid
+^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+If you have basic HTTP authentication deployed (e.g. through the
+`elasticsearch-basic
+plugin <https://github.com/Asquera/elasticsearch-http-basic>`_), you
+can specify your username here.
+
+
+.. _pwd:
+
+pwd
+^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+Password for basic authentication.
+
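+A minimal sketch of an action using basic authentication over HTTPS (server
+name and credentials are placeholders):
+
+.. code-block:: none
+
+ action(type="omelasticsearch" server="es.example.com" usehttps="on"
+        uid="elastic" pwd="changeme")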
+
+.. _errorfile:
+
+errorFile
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+If specified, records that failed in bulk mode are written to this file, including
+their error cause. Rsyslog itself does not process the file any further, but the
+idea behind this mechanism is that the user can create a script to periodically
+inspect the error file and react appropriately. As the complete request is
+included, it is possible to simply resubmit messages from that script.
+
+*Please note:* when rsyslog has problems connecting to elasticsearch, a general
+error is assumed and the submit is retried. However, if we receive negative
+responses during batch processing, we assume an error in the data itself
+(like a mandatory field is not filled in, a format error or something along
+those lines). Such errors cannot be solved by simply resubmitting the record.
+As such, they are written to the error file so that the user (script) can
+examine them and act appropriately. Note that e.g. after search index
+reconfiguration (e.g. dropping the mandatory attribute) a resubmit may
+be successful.
+
+.. _omelasticsearch-tls.cacert:
+
+tls.cacert
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+This is the full path and file name of the file containing the CA cert for the
+CA that issued the Elasticsearch server cert. This file is in PEM format. For
+example: `/etc/rsyslog.d/es-ca.crt`
+
+.. _tls.mycert:
+
+tls.mycert
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+This is the full path and file name of the file containing the client cert for
+doing client cert auth against Elasticsearch. This file is in PEM format. For
+example: `/etc/rsyslog.d/es-client-cert.pem`
+
+.. _tls.myprivkey:
+
+tls.myprivkey
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+This is the full path and file name of the file containing the private key
+corresponding to the cert `tls.mycert` used for doing client cert auth against
+Elasticsearch. This file is in PEM format, and must be unencrypted, so take
+care to secure it properly. For example: `/etc/rsyslog.d/es-client-key.pem`
+
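+When Elasticsearch is secured with TLS and client-certificate authentication,
+the three `tls.*` parameters are typically used together; a minimal sketch
+(the server name is a placeholder, the paths follow the examples above):
+
+.. code-block:: none
+
+ action(type="omelasticsearch" server="es.example.com" usehttps="on"
+        tls.cacert="/etc/rsyslog.d/es-ca.crt"
+        tls.mycert="/etc/rsyslog.d/es-client-cert.pem"
+        tls.myprivkey="/etc/rsyslog.d/es-client-key.pem")
+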
+.. _omelasticsearch-allowunsignedcerts:
+
+allowunsignedcerts
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "boolean", "off", "no", "none"
+
+If `"on"`, this will set the curl `CURLOPT_SSL_VERIFYPEER` option to
+`0`. You are strongly discouraged from setting this to `"on"`. It is
+primarily useful for debugging or testing.
+
+.. _omelasticsearch-skipverifyhost:
+
+skipverifyhost
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "boolean", "off", "no", "none"
+
+If `"on"`, this will set the curl `CURLOPT_SSL_VERIFYHOST` option to
+`0`. You are strongly discouraged from setting this to `"on"`. It is
+primarily useful for debugging or testing.
+
+.. _omelasticsearch-bulkid:
+
+bulkid
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+This is the unique id to assign to the record. The `bulk` part is misleading - this
+can be used both in bulk mode (:ref:`bulkmode`) and in index
+(record-at-a-time) mode. Although you can specify a static value for this
+parameter, you will almost always want to specify a *template* for the value of
+this parameter, and set `dynbulkid="on"` (:ref:`omelasticsearch-dynbulkid`). NOTE:
+you must use `bulkid` and `dynbulkid` in order to use `writeoperation="create"`
+(:ref:`omelasticsearch-writeoperation`).
+
+.. _omelasticsearch-dynbulkid:
+
+dynbulkid
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+If this parameter is set to `"on"`, then the `bulkid` parameter :ref:`omelasticsearch-bulkid`
+specifies a *template* to use to generate the unique id value to assign to the record. If
+using `bulkid` you will almost always want to set this parameter to `"on"` to assign
+a different unique id value to each record. NOTE:
+you must use `bulkid` and `dynbulkid` in order to use `writeoperation="create"`
+:ref:`omelasticsearch-writeoperation`.
+
+.. _omelasticsearch-writeoperation:
+
+writeoperation
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "index", "no", "none"
+
+The value of this parameter is either `"index"` (the default) or `"create"`. If `"create"` is
+used, this means the bulk action/operation will be `create` - create a document only if the
+document does not already exist. The record must have a unique id in order to use `create`.
+See :ref:`omelasticsearch-bulkid` and :ref:`omelasticsearch-dynbulkid`. See
+:ref:`omelasticsearch-writeoperation-example` for an example.
+
+.. _omelasticsearch-retryfailures:
+
+retryfailures
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+If this parameter is set to `"on"`, then the module will look for an
+`"errors":true` in the bulk index response. If found, each element in the
+response will be parsed to look for errors, since a bulk request may have some
+records which are successful and some which are failures. Failed requests will
+be converted back into records and resubmitted to rsyslog for
+reprocessing. Each failed request will be resubmitted with a local variable
+called `$.omes`. This is a hash consisting of the fields from the metadata
+header in the original request, and the fields from the response. If the same
+field is in the request and response, the value from the field in the *request*
+will be used, to facilitate retries that want to send the exact same request,
+and want to know exactly what was sent.
+See below :ref:`omelasticsearch-retry-example` for an example of how retry
+processing works.
+*NOTE* The retried record will be resubmitted at the "top" of your processing
+pipeline. If your processing pipeline is not idempotent (that is, your
+processing pipeline expects "raw" records), then you can specify a ruleset to
+redirect retries to. See :ref:`omelasticsearch-retryruleset` below.
+
+`$.omes` fields:
+
+* writeoperation - the operation used to submit the request - for rsyslog
+ omelasticsearch this currently means either `"index"` or `"create"`
+* status - the HTTP status code - typically an error will have a `4xx` or `5xx`
+ code - of particular note is `429` - this means Elasticsearch was unable to
+ process this bulk record request due to a temporary condition e.g. the bulk
+ index thread pool queue is full, and rsyslog should retry the operation.
+* _index, _type, _id, pipeline, _parent - the metadata associated with the
+ request - not all of these fields will be present with every request - for
+ example, if you do not use `"pipelinename"` or `"dynpipelinename"`, there
+ will be no `$.omes!pipeline` field.
+* error - a hash containing one or more, possibly nested, fields containing
+ more detailed information about a failure. Typically there will be fields
+ `$.omes!error!type` (a keyword) and `$.omes!error!reason` (a longer string)
+ with more detailed information about the rejection. NOTE: The format is
+ apparently not described in great detail, so code must not make any
+ assumption about the availability of `error` or any specific sub-field.
+
+There may be other fields too - the code just copies everything in the
+response. Here is an example of a detailed error response, in JSON format, from
+Elasticsearch 5.6.9:
+
+.. code-block:: json
+
+ {"omes":
+ {"writeoperation": "create",
+ "_index": "rsyslog_testbench",
+ "_type": "test-type",
+ "_id": "92BE7AF79CD44305914C7658AF846A08",
+ "status": 400,
+ "error":
+ {"type": "mapper_parsing_exception",
+ "reason": "failed to parse [msgnum]",
+ "caused_by":
+ {"type": "number_format_exception",
+ "reason": "For input string: \"x00000025\""}}}}
+
+Reference: https://www.elastic.co/guide/en/elasticsearch/guide/current/bulk.html#bulk
+
+.. _omelasticsearch-retryruleset:
+
+retryruleset
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "", "no", "none"
+
+If `retryfailures` is not `"on"` (:ref:`omelasticsearch-retryfailures`) then
+this parameter has no effect. This parameter specifies the name of a ruleset
+to use to route retries. This is useful if you do not want retried messages to
+be processed starting from the top of your processing pipeline, or if you have
+multiple outputs but do not want to send retried Elasticsearch failures to all
+of your outputs, and you do not want to clutter your processing pipeline with a
+lot of conditionals. See below :ref:`omelasticsearch-retry-example` for an
+example of how retry processing works.
+
+.. _omelasticsearch-ratelimit.interval:
+
+ratelimit.interval
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "600", "no", "none"
+
+If `retryfailures` is not `"on"` (:ref:`omelasticsearch-retryfailures`) then
+this parameter has no effect. Specifies the interval in seconds over which
+rate-limiting is to be applied. If more than ratelimit.burst messages are read
+during that interval, further messages up to the end of the interval are
+discarded. The number of messages discarded is emitted at the end of the
+interval (if there were any discards).
+Setting this to zero turns off rate limiting.
+
+.. _omelasticsearch-ratelimit.burst:
+
+ratelimit.burst
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "20000", "no", "none"
+
+If `retryfailures` is not `"on"` (:ref:`omelasticsearch-retryfailures`) then
+this parameter has no effect. Specifies the maximum number of messages that
+can be emitted within the ratelimit.interval interval. For further information,
+see description there.
+
+.. _omelasticsearch-rebindinterval:
+
+rebindinterval
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "-1", "no", "none"
+
+This parameter tells omelasticsearch to close the connection and reconnect
+to Elasticsearch after this many operations have been submitted. The default
+value `-1` means that omelasticsearch will not reconnect. A value greater
+than `-1` tells omelasticsearch, after this many operations have been
+submitted to Elasticsearch, to drop the connection and establish a new
+connection. This is useful when rsyslog connects to multiple Elasticsearch
+nodes through a router or load balancer, and you need to periodically drop
+and reestablish connections to help the router balance the connections. Use
+the counter `rebinds` to monitor the number of times this has happened.
+
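+For example, to drop and re-establish the connection after every 10000
+operations when going through a load balancer (the host name is a placeholder):
+
+.. code-block:: none
+
+ action(type="omelasticsearch" server="es-lb.example.com" rebindinterval="10000")
+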
+.. _omelasticsearch-statistic-counter:
+
+Statistic Counter
+=================
+
+This plugin maintains global :doc:`statistics <../rsyslog_statistic_counter>`,
+which accumulate all action instances. The statistic is named "omelasticsearch".
+Parameters are:
+
+- **submitted** - number of messages submitted for processing (with both
+ success and error result)
+
+- **fail.httprequests** - the number of times an HTTP request failed. Note
+ that a single HTTP request may be used to submit multiple messages, so this
+ number may be (much) lower than fail.http.
+
+- **fail.http** - number of message failures due to connection-like problems
+ (things like remote server down, broken link, etc.)
+
+- **fail.es** - number of failures due to an Elasticsearch error reply. Note that
+ this counter does NOT count the number of failed messages but the number of
+ times a failure occurred (a potentially much smaller number). Counting messages
+ would be quite performance-intensive and is thus not done.
+
+The following counters are available when `retryfailures="on"` is used:
+
+- **response.success** - number of records successfully sent in bulk index
+ requests - counts the number of successful responses
+
+- **response.bad** - number of times omelasticsearch received an entry in a
+ bulk index response that was unrecognized or could not be parsed. This may
+ indicate that omelasticsearch is attempting to communicate with a version of
+ Elasticsearch that is incompatible, or is otherwise sending back data in the
+ response that cannot be handled
+
+- **response.duplicate** - number of records in the bulk index request that
+ were duplicates of already existing records - this will only be reported if
+ using `writeoperation="create"` and `bulkid` to assign each record a unique
+ ID
+
+- **response.badargument** - number of times omelasticsearch received a
+ response that had a status indicating omelasticsearch sent bad data to
+ Elasticsearch. For example, status `400` and an error message indicating
+ omelasticsearch attempted to store a non-numeric string value in a numeric
+ field.
+
+- **response.bulkrejection** - number of times omelasticsearch received a
+ response that had a status indicating Elasticsearch was unable to process
+ the record at this time - status `429`. The record can be retried.
+
+- **response.other** - number of times omelasticsearch received a
+ response not recognized as one of the above responses, typically some other
+ `4xx` or `5xx` http status.
+
+- **rebinds** - if using `rebindinterval` this will be the number of
+ times omelasticsearch has reconnected to Elasticsearch
+
+**The fail.httprequests and fail.http counters reflect only failures that
+omelasticsearch detected.** Once it detects problems, it (usually, depending on
+circumstances) tells the rsyslog core that it wants to be suspended until the
+situation clears (this is a requirement for rsyslog output modules). Once it is
+suspended, it does NOT receive any further messages. Depending on the user
+configuration, messages will be lost during this period. Those lost messages will
+NOT be counted by impstats (as it does not see them).
+
+Note that some previous (pre 7.4.5) versions of this plugin had different counters.
+These were experimental and confusing. The only ones really used were "submits",
+which was the number of successfully processed messages, and "connfail", which
+was equivalent to "failed.http".
+
+How Retries Are Handled
+=======================
+
+When using `retryfailures="on"` (:ref:`omelasticsearch-retryfailures`), the
+original `Message` object (that is, the original `smsg_t *msg` object) **is not
+available**. This means none of the metadata associated with that object, such
+as various timestamps, host/IP addresses, etc., is available for the retry
+operation. The only things available are the metadata header (_index, _type,
+_id, pipeline, _parent), the original JSON string sent in the original request,
+and whatever data is returned in the error response. All of these are made
+available in the `$.omes` fields. If the same field name exists in the request
+metadata and the response, the field from the request will be used, in order to
+facilitate retrying the exact same request. For the message to retry, the code
+will take the original JSON string and parse it back into an internal `Message`
+object. This means you **may need to use a different template** to output
+messages for your retry ruleset. For example, if you used the following
+template to format the Elasticsearch message for the initial submission:
+
+.. code-block:: none
+
+ template(name="es_output_template"
+          type="list"
+          option.json="on") {
+            constant(value="{")
+              constant(value="\"timestamp\":\"")      property(name="timereported" dateFormat="rfc3339")
+              constant(value="\",\"message\":\"")     property(name="msg")
+              constant(value="\",\"host\":\"")        property(name="hostname")
+              constant(value="\",\"severity\":\"")    property(name="syslogseverity-text")
+              constant(value="\",\"facility\":\"")    property(name="syslogfacility-text")
+              constant(value="\",\"syslogtag\":\"")   property(name="syslogtag")
+            constant(value="\"}")
+          }
+
+You would have to use a different template for the retry, since none of the
+`timereported`, `msg`, etc. fields will have the same values for the retry as
+for the initial try.
+
+Same with the other omelasticsearch parameters which can be constructed with
+templates, such as `"dynpipelinename"`, `"dynsearchindex"`, `"dynsearchtype"`,
+`"dynparent"`, and `"dynbulkid"`. For example, if you generate the `_id` to
+use in the request, you will want to reuse the same `_id` for each subsequent
+retry:
+
+.. code-block:: none
+
+ template(name="id-template" type="string" string="%$.es_msg_id%")
+ if strlen($.omes!_id) > 0 then {
+     set $.es_msg_id = $.omes!_id;
+ } else {
+     # NOTE: depends on rsyslog being compiled with --enable-uuid
+     set $.es_msg_id = $uuid;
+ }
+ action(type="omelasticsearch" bulkid="id-template" ...)
+
+That is, if this is a retry, `$.omes!_id` will be set, so use that value for
+the bulk id for this record, otherwise, generate a new one with `$uuid`. Note
+that the template uses the temporary variable `$.es_msg_id` which must be set
+each time, to either `$.omes!_id` or `$uuid`.
+
+The `rawmsg` field is a special case. If the original request had a field
+called `message`, then when constructing the new message from the original to
+retry, the `rawmsg` message property will be set to the value of the `message`
+field. Otherwise, the `rawmsg` property value will be set to the entire
+original request - the data part, not the metadata. In previous versions,
+without the `message` field, the `rawmsg` property was set to the value of the
+data plus the Elasticsearch metadata, which caused problems with retries. See
+`rsyslog issue 3573 <https://github.com/rsyslog/rsyslog/issues/3573>`_
+
+Examples
+========
+
+Example 1
+---------
+
+The following sample does the following:
+
+- loads the omelasticsearch module
+- outputs all logs to Elasticsearch using the default settings
+
+.. code-block:: none
+
+ module(load="omelasticsearch")
+ *.* action(type="omelasticsearch")
+
+
+Example 2
+---------
+
+The following sample does the following:
+
+- loads the omelasticsearch module
+- outputs all logs to Elasticsearch using the full JSON logging template including program name
+
+.. code-block:: none
+
+ module(load="omelasticsearch")
+ *.* action(type="omelasticsearch" template="FullJSONFmt")
+
+
+Example 3
+---------
+
+The following sample does the following:
+
+- loads the omelasticsearch module
+- defines a template that will make the JSON contain the following
+ properties
+
+ - RFC-3339 timestamp when the event was generated
+ - the message part of the event
+ - hostname of the system that generated the message
+ - severity of the event, as a string
+ - facility, as a string
+ - the tag of the event
+
+- outputs to Elasticsearch with the following settings
+
+ - host name of the server is myserver.local
+ - port is 9200
+ - JSON docs will look as defined in the template above
+ - index will be "test-index"
+ - type will be "test-type"
+ - activate bulk mode. For that to work effectively, we use an
+ in-memory queue that can hold up to 5000 events. The maximum bulk
+ size will be 300
+ - retry indefinitely if the HTTP request failed (eg: if the target
+ server is down)
+
+.. code-block:: none
+
+ module(load="omelasticsearch")
+ template(name="testTemplate"
+          type="list"
+          option.json="on") {
+            constant(value="{")
+              constant(value="\"timestamp\":\"")      property(name="timereported" dateFormat="rfc3339")
+              constant(value="\",\"message\":\"")     property(name="msg")
+              constant(value="\",\"host\":\"")        property(name="hostname")
+              constant(value="\",\"severity\":\"")    property(name="syslogseverity-text")
+              constant(value="\",\"facility\":\"")    property(name="syslogfacility-text")
+              constant(value="\",\"syslogtag\":\"")   property(name="syslogtag")
+            constant(value="\"}")
+          }
+ action(type="omelasticsearch"
+        server="myserver.local"
+        serverport="9200"
+        template="testTemplate"
+        searchIndex="test-index"
+        searchType="test-type"
+        bulkmode="on"
+        maxbytes="100m"
+        queue.type="linkedlist"
+        queue.size="5000"
+        queue.dequeuebatchsize="300"
+        action.resumeretrycount="-1")
+
+
+.. _omelasticsearch-writeoperation-example:
+
+Example 4
+---------
+
+The following sample shows how to use :ref:`omelasticsearch-writeoperation`
+with :ref:`omelasticsearch-dynbulkid` and :ref:`omelasticsearch-bulkid`. For
+simplicity, it assumes rsyslog has been built with `--enable-libuuid` which
+provides the `uuid` property for each record:
+
+.. code-block:: none
+
+ module(load="omelasticsearch")
+ set $!es_record_id = $uuid;
+ template(name="bulkid-template" type="list") { property(name="$!es_record_id") }
+ action(type="omelasticsearch"
+        ...
+        bulkmode="on"
+        bulkid="bulkid-template"
+        dynbulkid="on"
+        writeoperation="create")
+
+
+.. _omelasticsearch-retry-example:
+
+Example 5
+---------
+
+The following sample shows how to use :ref:`omelasticsearch-retryfailures` to
+process, discard, or retry failed operations. This uses
+`writeoperation="create"` with a unique `bulkid` so that we can check for and
+discard duplicate messages as successful. The `try_es` ruleset is used both
+for the initial attempt and any subsequent retries. The code in the ruleset
+assumes that if `$.omes!status` is set and is non-zero, this is a retry for a
+previously failed operation. If the status was successful, or Elasticsearch
+said this was a duplicate, the record is already in Elasticsearch, so we can
+drop the record. If there was some error processing the response
+e.g. Elasticsearch sent a response formatted in some way that we did not know
+how to process, then submit the record to the `error_es` ruleset. If the
+response was a "hard" error like `400`, then submit the record to the
+`error_es` ruleset. In any other case, such as a status `429` or `5xx`, the
+record will be resubmitted to Elasticsearch. In the example, the `error_es`
+ruleset just dumps the records to a file.
+
+.. code-block:: none
+
+ module(load="omelasticsearch")
+ module(load="omfile")
+ template(name="bulkid-template" type="list") { property(name="$.es_record_id") }
+
+ ruleset(name="error_es") {
+     action(type="omfile" template="RSYSLOG_DebugFormat" file="es-bulk-errors.log")
+ }
+
+ ruleset(name="try_es") {
+     if strlen($.omes!status) > 0 then {
+         # retry case
+         if ($.omes!status == 200) or ($.omes!status == 201) or (($.omes!status == 409) and ($.omes!writeoperation == "create")) then {
+             stop # successful
+         }
+         if ($.omes!writeoperation == "unknown") or (strlen($.omes!error!type) == 0) or (strlen($.omes!error!reason) == 0) then {
+             call error_es
+             stop
+         }
+         if ($.omes!status == 400) or ($.omes!status < 200) then {
+             call error_es
+             stop
+         }
+         # else fall through to retry operation
+     }
+     if strlen($.omes!_id) > 0 then {
+         set $.es_record_id = $.omes!_id;
+     } else {
+         # NOTE: depends on rsyslog being compiled with --enable-uuid
+         set $.es_record_id = $uuid;
+     }
+     action(type="omelasticsearch"
+            ...
+            bulkmode="on"
+            bulkid="bulkid-template"
+            dynbulkid="on"
+            writeoperation="create"
+            retryfailures="on"
+            retryruleset="try_es")
+ }
+ call try_es
diff --git a/source/configuration/modules/omfile.rst b/source/configuration/modules/omfile.rst
new file mode 100644
index 0000000..b5d1b22
--- /dev/null
+++ b/source/configuration/modules/omfile.rst
@@ -0,0 +1,930 @@
+**************************
+omfile: File Output Module
+**************************
+
+=========================== ===========================================================================
+**Module Name:** **omfile**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+The omfile plug-in provides the core functionality of writing messages
+to files residing inside the local file system (which may actually be
+remote if methods like NFS are used). Both files named with static names
+as well files with names based on message content are supported by this
+module.
+
+
+Notable Features
+================
+
+- :ref:`omfile-statistic-counter`
+
+
+Configuration Parameters
+========================
+
+Omfile is a built-in module that does not need to be loaded. In order to
+specify module parameters, use
+
+.. code-block:: none
+
+ module(load="builtin:omfile" ...parameters...)
+
+
+Note that legacy parameters **do not** affect new-style RainerScript configuration
+objects. See :doc:`basic configuration structure doc <../basic_structure>` to
+learn about different configuration languages in use by rsyslog.
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+General Notes
+-------------
+
+As can be seen in the parameters below, owner and groups can be set either by
+name or by direct id (uid, gid). While using a name is more convenient, using
+the id is more robust. There may be some situations where the OS is not able
+to do the name-to-id resolution, and in these cases the owner information will be
+set to the process default. This seems to be uncommon and depends on the
+authentication provider and service start order. In general, using names
+is fine.
+
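+For instance, the module defaults could be set by name or, more robustly, by
+numeric id; the user, group, and id values below are just placeholders:
+
+.. code-block:: none
+
+ # set ownership defaults by name ...
+ module(load="builtin:omfile" fileOwner="syslog" fileGroup="adm")
+ # ... or, more robust against name-to-id resolution problems, by numeric id:
+ # module(load="builtin:omfile" fileOwnerNum="107" fileGroupNum="4")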
+
+Module Parameters
+-----------------
+
+Template
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "RSYSLOG_FileFormat", "no", "``$ActionFileDefaultTemplate``"
+
+Set the default template to be used if an action is not configured
+to use a specific template.
+
+
+DirCreateMode
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "FileCreateMode", "0700", "no", "``$DirCreateMode``"
+
+Sets the default dirCreateMode to be used for an action if no
+explicit one is specified.
+
+
+FileCreateMode
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "FileCreateMode", "0644", "no", "``$FileCreateMode``"
+
+Sets the default fileCreateMode to be used for an action if no
+explicit one is specified.
+
+
+fileOwner
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "UID", "process user", "no", "``$FileOwner``"
+
+Sets the default fileOwner to be used for an action if no
+explicit one is specified.
+
+
+fileOwnerNum
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "process user", "no", "none"
+
+Sets the default fileOwnerNum to be used for an action if no
+explicit one is specified.
+
+
+fileGroup
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "GID", "process user's primary group", "no", "``$FileGroup``"
+
+Sets the default fileGroup to be used for an action if no
+explicit one is specified.
+
+
+fileGroupNum
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "process user's primary group", "no", "none"
+
+Sets the default fileGroupNum to be used for an action if no
+explicit one is specified.
+
+
+dirOwner
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "UID", "process user", "no", "``$DirOwner``"
+
+Sets the default dirOwner to be used for an action if no
+explicit one is specified.
+
+
+dirOwnerNum
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "process user", "no", "none"
+
+Sets the default dirOwnerNum to be used for an action if no
+explicit one is specified.
+
+
+dirGroup
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "GID", "process user's primary group", "no", "``$DirGroup``"
+
+Sets the default dirGroup to be used for an action if no
+explicit one is specified.
+
+
+dirGroupNum
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "process user's primary group", "no", "none"
+
+Sets the default dirGroupNum to be used for an action if no
+explicit one is specified.
+
+
+dynafile.donotsuspend
+^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "none"
+
+This permits SUSPENDing dynafile actions. Traditionally, SUSPEND mode was
+never entered for dynafiles as it would have blocked overall processing
+flow. Default is not to suspend (and thus block).
+
+
+compression.driver
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "zlib", "no", "none"
+
+.. versionadded:: 8.2208.0
+
+For compressed operation ("zlib mode"), this permits setting the compression
+driver to be used. Originally, only zlib was supported and it is still the
+default. Since 8.2208.0, zstd is also supported. It provides much better
+compression ratios and performance, especially with multiple zstd worker
+threads enabled.
+
+Possible values are:
+
+- zlib
+- zstd
+
+
+compression.zstd.workers
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "positive integer", "zstd library default", "no", "none"
+
+.. versionadded:: 8.2208.0
+
+In zstd mode, this allows configuring zstd-internal compression worker threads.
+This setting has nothing to do with rsyslog workers. The zstd library provides
+an enhanced worker thread pool which permits multithreaded compression of serial
+data streams. Rsyslog fully supports this mode for optimal performance.
+
+Please note that for this parameter to have an effect, the zstd library must
+be compiled with multithreading support. As of this writing (2022), this is
+**not** the case for many frequently used distros and distro versions. In this
+case, you may want to install a custom build of the zstd library with threading
+enabled. Note that this does not require an rsyslog rebuild.
+
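+Putting both compression parameters together, a minimal sketch of enabling zstd
+with multiple worker threads could look like this (the file path is a
+placeholder; note that the action still needs a zipLevel to actually compress):
+
+.. code-block:: none
+
+ module(load="builtin:omfile" compression.driver="zstd" compression.zstd.workers="4")
+ action(type="omfile" file="/var/log/all-compressed.log.zst" zipLevel="6")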
+
+Action Parameters
+-----------------
+
+Note that **one** of the parameters *file* or *dynaFile* must be specified. This
+selects whether a static or dynamic file (name) shall be written to.
+
+
+File
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+This creates a static file output, always writing into the same file.
+If the file already exists, new data is appended to it. Existing
+data is not truncated. If the file does not already exist, it is
+created. Files are kept open as long as rsyslogd is active. This
+conflicts with external log file rotation. In order to close a file
+after rotation, send rsyslogd a HUP signal after the file has been
+rotated away. Either file or dynaFile can be used, but not both. If both
+are given, dynaFile will be used.
+
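+A minimal sketch of a static file action (the path is just an example):
+
+.. code-block:: none
+
+ action(type="omfile" file="/var/log/all-messages.log")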
+
+dynaFile
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+For each message, the file name is generated based on the given
+template. Then, this file is opened. As with the *file* property,
+data is appended if the file already exists. If the file does not
+exist, a new file is created. The template given in "templateName"
+is just a regular :doc:`rsyslog template <../templates>`, so
+you have full control over how to format the file name. Either file
+or dynaFile can be used, but not both. If both are given, dynaFile
+will be used.
+
+A cache of recent files is kept. Note
+that this cache can consume quite some memory (especially if large
+buffer sizes are used). Files are kept open as long as they stay
+inside the cache.
+Files are removed from the cache when a HUP signal is sent, the
+*closeTimeout* occurs, or the cache runs out of space, in which case
+the least recently used entry is evicted.
+
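+A minimal sketch, using a hypothetical template that builds the file name from
+the sending host:
+
+.. code-block:: none
+
+ template(name="perHostFile" type="string" string="/var/log/remote/%hostname%.log")
+ action(type="omfile" dynaFile="perHostFile" dynaFileCacheSize="100")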
+
+Template
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "template set via module parameter", "no", "``$ActionFileDefaultTemplate``"
+
+Sets the template to be used for this action.
+
+
+closeTimeout
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "File: 0 DynaFile: 10", "no", "none"
+
+.. versionadded:: 8.3.3
+
+Specifies after how many minutes of inactivity a file is
+automatically closed. Note that this functionality is implemented
+based on the
+:doc:`janitor process <../../concepts/janitor>`.
+See its doc to understand why and how janitor-based times are
+approximate.
+
+
+dynaFileCacheSize
+^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "10", "no", "``$DynaFileCacheSize``"
+
+This parameter specifies the maximum size of the cache for
+dynamically-generated file names (dynafile= parameter).
+This setting specifies how many open file handles should
+be cached. If, for example, the file name is generated with the hostname
+in it and you have 100 different hosts, a cache size of 100 would ensure
+that files are opened once and then stay open. This can be a great way
+to increase performance. If the cache size is lower than the number of
+different files, the least recently used one is discarded (and the file
+closed).
+
+Note that this is a per-action value, so if you have
+multiple dynafile actions, each of them has its own cache
+(which means the numbers sum up). Ideally, the cache size exactly
+matches the need. You can use :doc:`impstats <impstats>` to tune
+this value. Note that a too-low cache size can be a very considerable
+performance bottleneck.
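+
+For example, a dynafile action sized for roughly 100 distinct file names
+could look as follows; the template name is illustrative only:
+
+.. code-block:: none
+
+ action(type="omfile" dynaFile="DynFileName" dynaFileCacheSize="100")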
+
+
+zipLevel
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "``$OMFileZipLevel``"
+
+If greater than 0, turns on gzip compression of the output file. The
+higher the number, the better the compression, but also the more CPU
+is required for zipping.
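+
+A small sketch that writes gzip-compressed output; the file name is an
+example only:
+
+.. code-block:: none
+
+ action(type="omfile" file="/var/log/all-messages.log.gz" zipLevel="6")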
+
+
+veryRobustZip
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+.. versionadded:: 7.3.0
+
+If *zipLevel* is greater than 0,
+then this setting controls if extra headers are written to make the
+resulting file extra hardened against malfunction. If set to off,
+data appended to previously unclean closed files may not be
+accessible without extra tools (like `gztool <https://github.com/circulosmeos/gztool>`_ with: ``gztool -p``).
+Note that this risk is usually
+expected to be bearable, and thus "off" is the default mode. The
+extra headers considerably degrade compression; files with this
+option set to "on" may be four to five times as large as files
+processed in "off" mode.
+
+**In order to avoid this degradation in compression**, both the
+*flushOnTXEnd* and *asyncWriting* parameters must be set to "off",
+and *ioBufferSize* must be raised from its default "4k" value to
+at least "32k". This way a reasonable compression factor is
+maintained, similar to a non-blocked gzip file:
+
+.. code-block:: none
+
+ veryRobustZip="on" ioBufferSize="64k" flushOnTXEnd="off" asyncWriting="off"
+
+
+Do not forget to add your desired *zipLevel* to this configuration line.
+
+
+flushInterval
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "1", "no", "``$OMFileFlushInterval``"
+
+Defines, in seconds, the interval after which unwritten data is
+flushed.
+
+
+asyncWriting
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "``$OMFileASyncWriting``"
+
+If turned on, the files will be written in asynchronous mode via a
+separate thread. In that case, double buffers will be used so that
+one buffer can be filled while the other buffer is being written.
+Note that in order to enable FlushInterval, AsyncWriting must be set
+to "on". Otherwise, the flush interval will be ignored.
+
+
+flushOnTXEnd
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "``$OMFileFlushOnTXEnd``"
+
+Omfile has the capability to write output using a buffered writer.
+Disk writes are only done when the buffer is full. So if an error
+happens during that write, data is potentially lost. Bear in mind that
+the buffer may become full only after several hours or a rsyslog
+shutdown (however a buffer flush can still be forced by sending rsyslogd
+a HUP signal). In cases where this is unacceptable, set FlushOnTXEnd
+to "on". Then, data is written at the end of each transaction
+(for pre-v5 this means after each log message) and the usual error
+recovery thus can handle write errors without data loss.
+Note that this option severely reduces the effect of zip compression
+and should be switched to "off" for that use case.
+Also note that the default "on" is primarily an aid to preserve the
+traditional syslogd behaviour.
+
+If you are using dynamic file names (dynafiles), flushes can actually
+happen more frequently. In this case, a flush can also happen when
+the file name changes within a transaction.
+
+
+ioBufferSize
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "size", "4 KiB", "no", "``$OMFileIOBufferSize``"
+
+Size of the buffer used to write output data. The larger the
+buffer, the better the potential performance. The default of 4 KiB is
+quite conservative; it is useful to go up to 64 KiB, or 128 KiB if you
+use gzip compression (then, even higher sizes may make sense).
+
+
+dirOwner
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "UID", "system default", "no", "``$DirOwner``"
+
+Set the file owner for directories newly created. Please note that
+this setting does not affect the owner of directories already
+existing. The parameter is a user name, for which the userid is
+obtained by rsyslogd during startup processing. Interim changes to
+the user mapping are not detected.
+
+
+dirOwnerNum
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "system default", "no", "``$DirOwnerNum``"
+
+.. versionadded:: 7.5.8
+
+Set the file owner for directories newly created. Please note that
+this setting does not affect the owner of directories already
+existing. The parameter is a numerical ID, which is used regardless
+of whether the user actually exists. This can be useful if the user
+mapping is not available to rsyslog during startup.
+
+
+dirGroup
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "GID", "system default", "no", "``$DirGroup``"
+
+Set the group for directories newly created. Please note that this
+setting does not affect the group of directories already existing.
+The parameter is a group name, for which the groupid is obtained by
+rsyslogd during startup processing. Interim changes to the group
+mapping are not detected.
+
+
+dirGroupNum
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "system default", "no", "``$DirGroupNum``"
+
+Set the group for directories newly created. Please note that this
+setting does not affect the group of directories already existing.
+The parameter is a numerical ID, which is used regardless of whether
+the group actually exists. This can be useful if the group mapping is
+not available to rsyslog during startup.
+
+
+fileOwner
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "UID", "system default", "no", "``$FileOwner``"
+
+Set the file owner for files newly created. Please note that this
+setting does not affect the owner of files already existing. The
+parameter is a user name, for which the userid is obtained by
+rsyslogd during startup processing. Interim changes to the user
+mapping are *not* detected.
+
+
+fileOwnerNum
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "system default", "no", "``$FileOwnerNum``"
+
+.. versionadded:: 7.5.8
+
+Set the file owner for files newly created. Please note that this
+setting does not affect the owner of files already existing. The
+parameter is a numerical ID, which is used regardless of
+whether the user actually exists. This can be useful if the user
+mapping is not available to rsyslog during startup.
+
+
+fileGroup
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "GID", "system default", "no", "``$FileGroup``"
+
+Set the group for files newly created. Please note that this setting
+does not affect the group of files already existing. The parameter is
+a group name, for which the groupid is obtained by rsyslogd during
+startup processing. Interim changes to the group mapping are not
+detected.
+
+
+fileGroupNum
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "system default", "no", "``$FileGroupNum``"
+
+.. versionadded:: 7.5.8
+
+Set the group for files newly created. Please note that this setting
+does not affect the group of files already existing. The parameter is
+a numerical ID, which is used regardless of whether the group
+actually exists. This can be useful if the group mapping is not
+available to rsyslog during startup.
+
+
+fileCreateMode
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "equally-named module parameter", "no", "``$FileCreateMode``"
+
+The fileCreateMode parameter allows you to specify the creation mode
+with which rsyslogd creates new files. If not specified, the value
+0644 is used (which retains backward-compatibility with earlier
+releases). The value given must always be a 4-digit octal number,
+with the initial digit being zero.
+Please note that the actual permissions depend on rsyslogd's process
+umask. If in doubt, use "$umask 0000" right at the beginning of the
+configuration file to remove any restrictions.
+
+
+dirCreateMode
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "equally-named module parameter", "no", "``$DirCreateMode``"
+
+This is the same as FileCreateMode, but for directories
+automatically generated.
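+
+A sketch that creates missing directories and sets permissions and ownership
+on new files; the path, modes, and owner/group names are examples only:
+
+.. code-block:: none
+
+ action(type="omfile" file="/var/log/app/app.log" createDirs="on"
+ dirCreateMode="0750" fileCreateMode="0640"
+ dirOwner="syslog" dirGroup="adm" fileOwner="syslog" fileGroup="adm")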
+
+
+failOnChOwnFailure
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "``$FailOnCHOwnFailure``"
+
+This option modifies the behaviour of file creation. If different owners
+or groups are specified for new files or directories and rsyslogd
+fails to set these new owners or groups, it will log an error and NOT
+write to the file in question if that option is set to "on". If it is
+set to "off", the error will be ignored and processing continues.
+Keep in mind that in this case the files may be (in)accessible to
+people who should not have permission. The default is "on".
+
+
+createDirs
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "``$CreateDirs``"
+
+Create directories on an as-needed basis.
+
+
+sync
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "``$ActionFileEnableSync``"
+
+Enables file syncing capability of omfile.
+
+When enabled, rsyslog syncs the data file as well as the
+directory it resides in after processing each batch. There currently
+is no way to sync only after every n-th batch.
+
+Enabling sync causes a severe performance hit. Actually,
+it slows omfile down so much that the probability of losing messages
+**increases**. In short,
+you should enable syncing only if you know exactly what you are doing,
+fully understand how the rest of the engine works, and have tuned
+the rest of the engine for lossless operation.
+
+
+sig.Provider
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "no signature provider", "no", "none"
+
+Selects a signature provider for log signing. By selecting a provider,
+the signature feature is turned on.
+
+Currently there is one signature provider available: ":doc:`ksi_ls12 <sigprov_ksi12>`".
+
+Previous signature providers ":doc:`gt <sigprov_gt>`" and ":doc:`ksi <sigprov_ksi>`" are deprecated.
+
+
+cry.Provider
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "no crypto provider", "no", "none"
+
+Selects a crypto provider for log encryption. By selecting a provider,
+the encryption feature is turned on.
+
+Currently, there only is one provider called ":doc:`gcry <../cryprov_gcry>`".
+
+
+rotation.sizeLimit
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "size", "0 (disabled)", "no", "`$outchannel` (partly)"
+
+This permits setting a size limit on the output file. When the limit is reached,
+rotation of the file is attempted. The rotation script needs to be configured via
+`rotation.sizeLimitCommand`.
+
+Please note that the size limit is not exact. Some excess bytes are permitted
+to prevent messages from being split across two files. Also, a full batch of
+messages is not terminated in between. As such, in practice, the size of the
+output file can grow some KiB larger than configured.
+
+Also avoid configuring a too-low limit, especially for busy files. Calling the
+rotation script is relatively performance intense. As such, it could negatively
+affect overall rsyslog performance.
+
+
+rotation.sizeLimitCommand
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "(empty)", "no", "`$outchannel` (partly)"
+
+This permits configuring the script to be called when the size limit on the output
+file is reached. The actual size limit needs to be configured via
+`rotation.sizeLimit`.
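+
+A sketch combining both rotation parameters; the size value and the script
+path are hypothetical examples:
+
+.. code-block:: none
+
+ action(type="omfile" file="/var/log/app.log"
+ rotation.sizeLimit="50m"
+ rotation.sizeLimitCommand="/usr/local/bin/rotate-app-log")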
+
+
+.. _omfile-statistic-counter:
+
+Statistic Counter
+=================
+
+This plugin maintains :doc:`statistics <../rsyslog_statistic_counter>` for each
+dynafile cache. Dynafile cache performance is critical for overall system performance,
+so reviewing these counters on a busy system (especially one experiencing performance
+problems) is advisable. The statistic is named "dynafile cache", followed by the
+template name used for this dynafile action.
+
+The following properties are maintained for each dynafile:
+
+- **request** - total number of requests made to obtain a dynafile
+
+- **level0** - requests for the current active file, so no real cache
+ lookup needed to be done. These are extremely good.
+
+- **missed** - cache misses, where the required file did not reside in
+ cache. Even with a perfect cache, there will be at least one miss per
+ file. That happens when the file is being accessed for the first time
+ and brought into cache. So "missed" will always be at least as large
+ as the number of different files processed.
+
+- **evicted** - the number of times a file needed to be evicted from
+ the cache as it ran out of space. These can simply happen when
+ date-based files are used, and the previous date files are being
+ removed from the cache as time progresses. It is better, though, to
+ set an appropriate "closeTimeout" (counter described below), so that
+ files are removed from the cache after they are no longer accessed.
+ It is bad if active files need to be evicted from the cache. This is a
+ very costly operation, as an evict requires closing the file (thus a
+ full flush, regardless of its buffer state) and a later access requires
+ a re-open – and the eviction of another file, as the cache obviously has
+ run out of free entries. If this happens frequently, it can severely
+ affect performance. So a high eviction rate is a sign that the dynafile
+ cache size should be increased. If it is already very high, it is
+ recommended to re-think the design of the file store, at least if
+ the eviction process causes real performance problems.
+
+- **maxused** - the maximum number of cache entries ever used. This can
+ be used to trim the cache down to a value that’s actually useful but
+ does not waste resources. Note that when date-based files are used and
+ rsyslog is run for an extended period of time, the cache gradually fills
+ up to the max configured value as older files are migrated out of it.
+ This will make "maxused" questionable after some time. Purging the cache
+ frequently enough can prevent this (usually, once a day is sufficient).
+
+- **closetimeouts** - available since 8.3.3 – tells how often a file was
+ closed due to timeout settings ("closeTimeout" action parameter). These
+ are cases where dynafiles or static files have been closed by rsyslog due
+ to inactivity. Note that if no "closeTimeout" is specified for the action,
+ this counter always is zero. A high or low number in itself doesn’t mean
+ anything good or bad. It totally depends on the use case, so no general
+ advice can be given.
+
+
+Caveats/Known Bugs
+==================
+
+- People often report problems that dynafiles are not properly created.
+ The common cause for this problem is SELinux rules, which do not permit
+ the creation of those files (check generated file names and paths!). The
+ same happens for generic permission issues (this is often a problem
+ under Ubuntu, where permissions are dropped by default).
+
+- One needs to be careful with log rotation if signatures and/or
+ encryption are being used. These create side-files, which form a set
+ and must be kept together.
+ For signatures, the ".sigstate" file must NOT be rotated away if
+ signature chains are to be built across multiple files. This is
+ because .sigstate contains just global information for the whole file
+ set. However, all other files need to be rotated together. The proper
+ sequence is to
+
+ #. move all files inside the file set
+ #. only AFTER this is completely done, HUP rsyslog
+
+ This sequence will ensure that all files inside the set are
+ atomically closed and in sync. HUPing only after a subset of files
+ have been moved results in inconsistencies and will most probably
+ render the file set unusable.
+
+- If ``zipLevel`` is greater than 0 and ``veryRobustZip`` is set to off,
+ data appended to previously unclean closed files will not be
+ accessible with ``gunzip`` if rsyslog writes again in the same
+ file. Nonetheless, data is still there and can be correctly accessed
+ with other tools like `gztool <https://github.com/circulosmeos/gztool>`_ (v>=1.1) with: ``gztool -p``.
+
+
+Examples
+========
+
+Example 1
+---------
+
+The following command writes all syslog messages into a file.
+
+.. code-block:: none
+
+ action(type="omfile" dirCreateMode="0700" FileCreateMode="0644"
+ File="/var/log/messages")
+
+
diff --git a/source/configuration/modules/omfwd.rst b/source/configuration/modules/omfwd.rst
new file mode 100644
index 0000000..06adcd0
--- /dev/null
+++ b/source/configuration/modules/omfwd.rst
@@ -0,0 +1,795 @@
+**************************************
+omfwd: syslog Forwarding Output Module
+**************************************
+
+=========================== ===========================================================================
+**Module Name:**  **omfwd**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+The omfwd plug-in provides the core functionality of traditional message
+forwarding via UDP and plain TCP. It is a built-in module that does not
+need to be loaded.
+
+Notable Features
+================
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+Module Parameters
+-----------------
+
+Template
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "RSYSLOG_TraditionalForwardFormat", "no", "``$ActionForwardDefaultTemplateName``"
+
+Sets a non-standard default template for this module.
+
+Action Parameters
+-----------------
+
+Target
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+Name or IP address of the system that shall receive messages. Any
+resolvable name is fine.
+
+
+Port
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "514", "no", "none"
+
+Name or numerical value of port to use when connecting to target.
+
+
+Protocol
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "udp", "no", "none"
+
+Type of protocol to use for forwarding. Note that \`\`tcp'' means
+both legacy plain tcp syslog as well as RFC5425-based TLS-encrypted
+syslog. Which one is selected depends on the StreamDriver parameter.
+If StreamDriver is set to "ossl" or "gtls" it will use TLS-encrypted syslog.
+
+
+NetworkNamespace
+^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+Name of a network namespace as in /var/run/netns/ to use for forwarding.
+
+If the setns() system call is not available on the system (e.g. BSD
+kernel, Linux kernel before v2.6.24), the given namespace will be
+ignored.
+
+Address
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+.. versionadded:: 8.35.0
+
+Bind socket to a given local IP address. This option is only supported
+for UDP, not TCP.
+
+IpFreeBind
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "2", "no", "none"
+
+.. versionadded:: 8.35.0
+
+Manages the IP_FREEBIND option on the UDP socket, which allows binding it to
+an IP address that is not yet associated to any network interface. This option
+is only relevant if the address option is set.
+
+The parameter accepts the following values:
+
+- 0 - does not enable the IP_FREEBIND option on the
+ UDP socket. If the *bind()* call fails because of *EADDRNOTAVAIL* error,
+ socket initialization fails.
+
+- 1 - silently enables the IP_FREEBIND socket
+ option if it is required to successfully bind the socket to a nonlocal address.
+
+- 2 - enables the IP_FREEBIND socket option and
+ warns when it is used to successfully bind the socket to a nonlocal address.
+
+Device
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+Bind socket to given device (e.g., eth0)
+
+For Linux with VRF support, the Device option can be used to specify the
+VRF for the Target address.
+
+
+TCP_Framing
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "traditional", "no", "none"
+
+Framing-Mode to be used for forwarding, either "traditional" or
+"octet-counted". This affects only TCP-based protocols, it is ignored for UDP.
+In protocol engineering, "framing" means how multiple messages over the same
+connection are separated. Usually, this is transparent to users. Unfortunately,
+the early syslog protocol evolved and so there are cases where users need to
+specify the framing. The "traditional" framing is nontransparent. With it,
+messages end when an LF (aka "line break", "return") is encountered, and the
+next message starts immediately after the LF. If multi-line messages are
+received, these are essentially broken up into multiple messages, usually with
+all but the first message segment being incorrectly formatted. The
+"octet-counted" framing solves this issue. With it, each message is prefixed
+with the actual message length, so that a receiver knows exactly where the
+message ends. Multi-line messages cause no problem here. This mode is very
+close to the method described in RFC5425 for TLS-enabled syslog. Unfortunately,
+only few syslogd implementations support "octet-counted" framing. As such, the
+"traditional" framing is set as default, even though it has defects. If it is
+known that the receiver supports "octet-counted" framing, it is suggested to
+use that framing mode.
+
+
+TCP_FrameDelimiter
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "10", "no", "none"
+
+Sets a custom frame delimiter for TCP transmission when running TCP\_Framing
+in "traditional" mode. The delimiter has to be a number between 0 and 255
+(representing the ASCII-code of said character). The default value for this
+parameter is 10, representing a '\\n'. When using Graylog, the parameter
+must be set to 0.
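+
+For illustration, two forwarding sketches; target names and ports are
+examples only:
+
+.. code-block:: none
+
+ # octet-counted framing (receiver must support it)
+ action(type="omfwd" target="logs.example.net" port="514" protocol="tcp"
+ TCP_Framing="octet-counted")
+
+ # traditional framing with a NUL delimiter (e.g. for Graylog)
+ action(type="omfwd" target="graylog.example.net" port="5140" protocol="tcp"
+ TCP_Framing="traditional" TCP_FrameDelimiter="0")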
+
+
+ZipLevel
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "none"
+
+Compression level for messages.
+
+Up until rsyslog 7.5.1, this was the only compression setting that
+rsyslog understood. Starting with 7.5.1, we have different
+compression modes. All of them are affected by the ziplevel. If,
+however, no mode is explicitly set, setting ziplevel also turns on
+"single" compression mode, so pre 7.5.1 configuration will continue
+to work as expected.
+
+The compression level is specified via the usual factor of 0 to 9,
+with 9 being the strongest compression (taking up most processing
+time) and 0 being no compression at all (taking up no extra
+processing time).
+
+
+compression.Mode
+^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+*mode* is one of "none", "single", or "stream:always". The default
+is "none", in which case no compression happens at all.
+In "single" compression mode, Rsyslog implements a proprietary
+capability to zip transmitted messages. That compression happens on a
+message-per-message basis. As such, there is a performance gain only
+for larger messages. Before compressing a message, rsyslog checks if
+there is some gain by compression. If so, the message is sent
+compressed. If not, it is sent uncompressed. As such, it is totally
+valid that compressed and uncompressed messages are intermixed within
+a conversation.
+
+In "stream:always" compression mode the full stream is being
+compressed. This also uses non-standard protocol and is compatible
+only with receives that have the same abilities. This mode offers
+potentially very high compression ratios. With typical syslog
+messages, it can be as high as 95+% compression (so only one
+twentieth of data is actually transmitted!). Note that this mode
+introduces extra latency, as data is only sent when the compressor
+emits new compressed data. For typical syslog messages, this can mean
+that some hundred messages may be held in local buffers before they
+are actually sent. This mode has been introduced in 7.5.1.
+
+**Note: currently only imptcp supports receiving stream-compressed
+data.**
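+
+A sketch enabling single-message compression; the target is an example only:
+
+.. code-block:: none
+
+ action(type="omfwd" target="logs.example.net" port="514" protocol="tcp"
+ zipLevel="6" compression.mode="single")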
+
+
+compression.stream.flushOnTXEnd
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "none"
+
+.. versionadded:: 7.5.3
+
+This setting affects stream compression mode only. If enabled (the
+default), the compression buffer will be emptied at the end of an
+rsyslog batch. If set to "off", end of batch will not affect
+compression at all.
+
+While setting it to "off" can potentially greatly improve
+compression ratio, it will also introduce severe delay between when a
+message is being processed by rsyslog and actually sent out to the
+network. We have seen cases where for several thousand message not a
+single byte was sent. This is good in the sense that it can happen
+only if we have a great compression ratio. This is most probably a
+very good mode for busy machines which will process several thousand
+messages per second and the resulting short delay will not pose any
+problems. However, the default is more conservative, while it works
+more "naturally" with even low message traffic. Even in flush mode,
+notable compression should be achievable (but we do not yet have
+practice reports on actual compression ratios).
+
+
+RebindInterval
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "``$ActionSendTCPRebindInterval`` or ``$ActionSendUDPRebindInterval``"
+
+Permits specifying an interval at which the current connection is
+broken and re-established. This setting is primarily an aid to load
+balancers. After the configured number of batches (roughly equal to
+messages for UDP traffic, dependent on batch size for TCP) has been
+transmitted, the current connection is terminated and a new one
+started. Note that this setting applies to both TCP and UDP traffic.
+For UDP, the new \`\`connection'' uses a different source port (ports
+are cycled and not reused too frequently). This usually is perceived
+as a \`\`new connection'' by load balancers, which in turn forward
+messages to another physical target system.
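+
+For example, to re-establish the connection after every 1000 batches; the
+target and value are illustrative only:
+
+.. code-block:: none
+
+ action(type="omfwd" target="lb.example.net" port="514" protocol="tcp"
+ RebindInterval="1000")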
+
+
+KeepAlive
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+Enable or disable keep-alive packets at the tcp socket layer. The
+default is to disable them.
+
+
+KeepAlive.Probes
+^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "none"
+
+The number of unacknowledged probes to send before considering the
+connection dead and notifying the application layer. The default, 0,
+means that the operating system defaults are used. This has an effect
+only if keep-alive is enabled. The functionality may not be
+available on all platforms.
+
+
+KeepAlive.Interval
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "none"
+
+The interval between subsequent keepalive probes, regardless of
+what the connection has exchanged in the meantime. The default, 0,
+means that the operating system defaults are used. This has an effect
+only if keep-alive is enabled. The functionality may not be
+available on all platforms.
+
+
+KeepAlive.Time
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "none"
+
+The interval between the last data packet sent (simple ACKs are not
+considered data) and the first keepalive probe; after the connection
+is marked as needing keepalive, this counter is not used any further.
+The default, 0, means that the operating system defaults are used.
+This has an effect only if keep-alive is enabled. The functionality may
+not be available on all platforms.
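+
+A sketch enabling keep-alive with explicit probe settings; the values are
+illustrative and subject to platform support:
+
+.. code-block:: none
+
+ action(type="omfwd" target="logs.example.net" port="514" protocol="tcp"
+ KeepAlive="on" KeepAlive.Probes="3" KeepAlive.Interval="10" KeepAlive.Time="60")
+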
+
+ConErrSkip
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "none"
+
+The ConErrSkip parameter can be used to limit the number of network errors
+recorded in logs. For example, a value of 10 means that only each 10th error
+message is logged. Note that this option should be used as a last
+resort, since the necessity of its use indicates network issues.
+The default behavior is that all network errors are logged.
+
+RateLimit.Interval
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "max", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "", "no", "none"
+
+Specifies the rate-limiting interval in seconds. Default value is 0,
+which turns off rate limiting.
+
+RateLimit.Burst
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "max", "mandatory", "none"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "200", "(2^32)-1", "no", "none"
+
+Specifies the rate-limiting burst in number of messages.
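+
+A sketch that rate-limits a forwarding action; the values are illustrative
+only:
+
+.. code-block:: none
+
+ action(type="omfwd" target="logs.example.net" port="514" protocol="udp"
+ RateLimit.Interval="10" RateLimit.Burst="1000")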
+
+
+StreamDriver
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "``$ActionSendStreamDriver``"
+
+Choose the stream driver to be used. Default is plain tcp, but
+you can also choose "ossl" or "gtls" for TLS encryption.
+
+
+StreamDriverMode
+^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "``$ActionSendStreamDriverMode``"
+
+Mode to use with the stream driver (driver-specific)
+
+
+StreamDriverAuthMode
+^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "``$ActionSendStreamDriverAuthMode``"
+
+Authentication mode to use with the stream driver. Note that this
+parameter requires TLS netstream drivers. For all others, it will be
+ignored. (driver-specific).
+
+
+StreamDriver.PermitExpiredCerts
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "warn", "no", "none"
+
+Controls how expired certificates will be handled when the stream driver is in TLS mode.
+It can have one of the following values:
+
+- on = Expired certificates are allowed
+
+- off = Expired certificates are not allowed (default; changed from "warn" to "off" in version 8.2012.0)
+
+- warn = Expired certificates are allowed, but a warning will be logged
+
+
+StreamDriverPermittedPeers
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "``$ActionSendStreamDriverPermittedPeers``"
+
+Accepted fingerprint (SHA1) or name of remote peer. Note that this
+parameter requires TLS netstream drivers. For all others, it will be
+ignored. (driver-specific)
+
+
+StreamDriver.CheckExtendedKeyPurpose
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+Whether to also check the purpose value in the extended fields part of the
+certificate for compatibility with rsyslog operation. (driver-specific)
+
+
+StreamDriver.PrioritizeSAN
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+Whether to use stricter SAN/CN matching. (driver-specific)
+
+
+StreamDriver.TlsVerifyDepth
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "TLS library default", "no", "none"
+
+Specifies the allowed maximum depth for the certificate chain verification.
+Support was added in v8.2001.0; it is supported by the GTLS and OpenSSL drivers.
+If not set, the API default will be used.
+For OpenSSL, the default is 100 - see the doc for more:
+https://www.openssl.org/docs/man1.1.1/man3/SSL_set_verify_depth.html
+For GnuTLS, the default is 5 - see the doc for more:
+https://www.gnutls.org/manual/gnutls.html
+
+StreamDriver.CAFile
+^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "global() default", "no", "none"
+
+.. versionadded:: 8.2108.0
+
+This permits overriding the CA file set via the `global()` config object on a
+per-action basis. This parameter is ignored if the netstream driver and/or its
+mode does not need or support certificates.
+
+StreamDriver.CRLFile
+^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "optional", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "global() default", "no", "none"
+
+.. versionadded:: 8.2308.0
+
+This permits overriding the CRL (Certificate Revocation List) file set via the `global()` config
+object on a per-action basis. This parameter is ignored if the netstream driver and/or its
+mode does not need or support certificates.
+
+StreamDriver.KeyFile
+^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "global() default", "no", "none"
+
+.. versionadded:: 8.2108.0
+
+This permits overriding the key file set via the `global()` config object on a
+per-action basis. This parameter is ignored if the netstream driver and/or its
+mode does not need or support certificates.
+
+StreamDriver.CertFile
+^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "global() default", "no", "none"
+
+.. versionadded:: 8.2108.0
+
+This permits overriding the certificate file set via the `global()` config object
+on a per-action basis. This parameter is ignored if the netstream driver and/or its
+mode does not need or support certificates.
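+
+A sketch of a TLS-encrypted forwarding action using the "ossl" driver; the
+host name, certificate paths, and the "x509/name" authentication mode value
+are assumptions for illustration and depend on your setup and driver:
+
+.. code-block:: none
+
+ action(type="omfwd" target="central.example.net" port="6514" protocol="tcp"
+ StreamDriver="ossl" StreamDriverMode="1"
+ StreamDriverAuthMode="x509/name"
+ StreamDriverPermittedPeers="central.example.net"
+ StreamDriver.CAFile="/etc/rsyslog.d/ca.pem"
+ StreamDriver.CertFile="/etc/rsyslog.d/client-cert.pem"
+ StreamDriver.KeyFile="/etc/rsyslog.d/client-key.pem")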
+
+
+ResendLastMSGOnReconnect
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "``$ActionSendResendLastMsgOnReconnect``"
+
+Permits resending the last message when a connection is reconnected.
+This setting affects TCP-based syslog, only. It is most useful for
+traditional, plain TCP syslog. Using this protocol, it is not always
+possible to know which messages were successfully transmitted to the
+receiver when a connection breaks. In many cases, the last message
+sent is lost. By switching this setting to "on", rsyslog will always
+retransmit the last message when a connection is reestablished. This
+reduces potential message loss, but comes at the price that some
+messages may be duplicated (which is usually more acceptable).
+
+Please note that busy systems probably lose more than a
+single message in such cases. This is caused by an
+`inherent unreliability in plain tcp syslog
+<https://rainer.gerhards.net/2008/04/on-unreliability-of-plain-tcp-syslog.html>`_
+and there is no way rsyslog could prevent this from happening
+(if you read the detail description, be sure to follow the link
+to the follow-up posting). In order to prevent these problems,
+we recommend the use of :doc:`omrelp <omrelp>`.
+
+
+udp.SendToAll
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+When sending UDP messages, there are potentially multiple paths to
+the target destination. By default, rsyslogd
+only sends to the first target it can successfully send to. If this
+option is set to "on", messages are sent to all targets. This may improve
+reliability, but may also cause message duplication. This option
+should be enabled only if it is fully understood.
+
+Note: this option replaces the former -A command line option. In
+contrast to the -A option, this option must be set once per
+input() definition.
+
+
+udp.SendDelay
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "none"
+
+.. versionadded:: 8.7.0
+
+This is an **expert option**; only use it if you know very well
+why you are using it!
+
+This option permits introducing a small delay after *each* send
+operation. The integer specifies the delay in microseconds. This
+option can be used in cases where too-quick sending of UDP messages
+causes message loss (UDP is permitted to drop packets if e.g. a device
+runs out of buffers). Usually, you do not want this delay. The parameter
+was introduced in order to support some testbench tests. Be sure
+to think twice before you use it in production.
+
+
+gnutlsPriorityString
+^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+.. versionadded:: 8.29.0
+
+This string setting is used to configure driver-specific properties.
+Historically, the setting was only meant for the gnutls driver. However,
+with version v8.1905.0 and higher, the setting can also be used to set openssl configuration commands.
+
+For GnuTLS, the setting specifies the TLS session's handshake algorithms and
+options. These strings are intended as a user-specified override of the library
+defaults. If this parameter is NULL, the default settings are used. More
+information about priority strings is available
+`here <https://gnutls.org/manual/html_node/Priority-Strings.html>`_.
+
+For OpenSSL, the setting can be used to pass configuration commands to the openssl library.
+OpenSSL Version 1.0.2 or higher is required for this feature.
+A list of possible commands and their valid values can be found in the documentation:
+https://www.openssl.org/docs/man1.0.2/man3/SSL_CONF_cmd.html
+
+The setting can be single- or multi-line; each configuration command is separated by a linefeed (\n).
+Command and value are separated by an equal sign (=). Here are a few samples:
+
+Example 1
+---------
+
+This will allow all protocols except for SSLv2 and SSLv3:
+
+.. code-block:: none
+
+ gnutlsPriorityString="Protocol=ALL,-SSLv2,-SSLv3"
+
+
+Example 2
+---------
+
+This will allow all protocols except for SSLv2, SSLv3 and TLSv1.
+It will also set the minimum protocol to TLSv1.2.
+
+.. code-block:: none
+
+ gnutlsPriorityString="Protocol=ALL,-SSLv2,-SSLv3,-TLSv1
+ MinProtocol=TLSv1.2"
+
+
+Statistic Counter
+=================
+
+This plugin maintains :doc:`statistics <../rsyslog_statistic_counter>` for each forwarding action.
+The statistic is named "target-port-protocol" where "target", "port", and
+"protocol" are the respective configuration parameters. So an actual name might be
+"192.0.2.1-514-TCP" or "example.net-10514-UDP".
+
+The following properties are maintained for each action:
+
+- **bytes.sent** - total number of bytes sent to the network
+
+See Also
+========
+
+- `Encrypted Disk
+ Queues <http://www.rsyslog.com/encrypted-disk-queues/>`_
+
+
+Examples
+========
+
+Example 1
+---------
+
+The following command sends all syslog messages to a remote server via
+TCP port 10514.
+
+.. code-block:: none
+
+ action(type="omfwd" Target="192.168.2.11" Port="10514" Protocol="tcp" Device="eth0")
+
+
+Example 2
+---------
+
+In case the system in use has multiple (maybe virtual) network interfaces, network
+namespaces come in handy, each with its own routing table. To be able to distribute
+syslog messages to remote servers in different namespaces, specify them as separate actions.
+
+.. code-block:: none
+
+ action(type="omfwd" Target="192.168.1.13" Port="10514" Protocol="tcp" NetworkNamespace="ns_eth0.0")
+ action(type="omfwd" Target="192.168.2.24" Port="10514" Protocol="tcp" NetworkNamespace="ns_eth0.1")
+ action(type="omfwd" Target="192.168.3.38" Port="10514" Protocol="tcp" NetworkNamespace="ns_eth0.2")
diff --git a/source/configuration/modules/omhdfs.rst b/source/configuration/modules/omhdfs.rst
new file mode 100644
index 0000000..ccdf114
--- /dev/null
+++ b/source/configuration/modules/omhdfs.rst
@@ -0,0 +1,114 @@
+***************************************
+omhdfs: Hadoop Filesystem Output Module
+***************************************
+
+=========================== ===========================================================================
+**Module Name:**  **omhdfs**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+This module supports writing messages into files on Hadoop's HDFS file
+system.
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+|FmtObsoleteName| Directives
+----------------------------
+
+.. csv-table::
+ :header: "|FmtObsoleteName| directive", "type", "default", "mandatory"
+ :widths: auto
+ :class: parameter-table
+
+ "``$OMHDFSFileName``", "word", "none", "no"
+
+The name of the file to which the output data shall be written.
+
+.. csv-table::
+ :header: "|FmtObsoleteName| directive", "type", "default", "mandatory"
+ :widths: auto
+ :class: parameter-table
+
+ "``$OMHDFSHost``", "word", "default", "no"
+
+Name or IP address of the HDFS host to connect to.
+
+.. csv-table::
+ :header: "|FmtObsoleteName| directive", "type", "default", "mandatory"
+ :widths: auto
+ :class: parameter-table
+
+ "``$OMHDFSPort``", "integer", "0", "no"
+
+Port on which to connect to the HDFS host.
+
+.. csv-table::
+ :header: "|FmtObsoleteName| directive", "type", "default", "mandatory"
+ :widths: auto
+ :class: parameter-table
+
+ "``$OMHDFSDefaultTemplate``", "word", "RSYSLOG_FileFormat", "no"
+
+Default template to be used when none is specified. This saves the work of
+specifying the same template over and over again. Of course, the default
+template can be overridden via the usual method.
+
+
+Caveats/Known Bugs
+==================
+
+Building omhdfs is a challenge because we could not yet find out how to
+integrate Java properly into the autotools build process. The issue is
+that HDFS is written in Java and libhdfs uses JNI to talk to it. That
+requires that various system-specific environment options and paths be
+set correctly. At this point, we leave this to the user. If someone knows
+how to do it better, please drop us a line!
+
+- In order to build, you need to set these environment variables BEFORE
+ running ./configure:
+
+ - JAVA\_INCLUDES - must have all include paths that are needed to
+ build JNI C programs, including the -I options necessary for gcc.
+ An example is
+ # export
+ JAVA\_INCLUDES="-I/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86\_64/include
+ -I/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86\_64/include/linux"
+ - JAVA\_LIBS - must have all library paths that are needed to build
+ JNI C programs, including the -l/-L options necessary for gcc. An
+ example is
+ # export export
+ JAVA\_LIBS="-L/usr/java/jdk1.6.0\_21/jre/lib/amd64
+ -L/usr/java/jdk1.6.0\_21/jre/lib/amd64/server -ljava -ljvm
+ -lverify"
+
+- Due to the HDFS architecture, you must make sure that all relevant
+ environment variables (the usual Java stuff and HADOOP's home
+ directory) are properly set.
+- As it looks, libhdfs makes Java throw exceptions to stdout. There is
+ no known work-around for this (and it usually should not cause any
+ trouble).
+
+
+Examples
+========
+
+Example 1
+---------
+
+.. code-block:: none
+
+ $ModLoad omhdfs
+ $OMHDFSFileName /var/log/logfile
+ *.* :omhdfs:
+
+
diff --git a/source/configuration/modules/omhiredis.rst b/source/configuration/modules/omhiredis.rst
new file mode 100644
index 0000000..fb1a98c
--- /dev/null
+++ b/source/configuration/modules/omhiredis.rst
@@ -0,0 +1,779 @@
+******************************
+omhiredis: Redis Output Module
+******************************
+
+=========================== ===========================================================================
+**Module Name:**  **omhiredis**
+**Author:** Brian Knox <bknox@digitalocean.com>
+**Contributors:** Theo Bertin <theo.bertin@advens.fr>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+This module provides native support for writing to Redis,
+using the hiredis client library.
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Action Parameters
+-----------------
+
+Server
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+Name or address of the Redis server
+
+
+ServerPort
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "6379", "no", "none"
+
+Port of the Redis server if the server is not listening on the default port.
+
+
+ServerPassword
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+Password for an authenticated Redis database server, used to push messages
+across networks and datacenters. The parameter is optional; if not provided,
+no AUTH command will be sent to the server.
+
+
+Mode
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "template", "no", "none"
+
+Mode to run the output action in: "queue", "publish", "set" or "stream". If not supplied, the
+original "template" mode is used.
+
+.. note::
+
+ Due to a config parsing bug in 8.13, explicitly setting this to "template" mode will result in a config parsing
+ error.
+
+ If mode is "set", omhiredis will send SET commands. If "expiration" parameter is provided (see parameter below),
+ omhiredis will send SETEX commands.
+
+ If mode is "stream", logs will be sent using XADD. In that case, the template-formatted message will be inserted in
+ the **msg** field of the stream ID (this behaviour can be controlled through the :ref:`omhiredis_streamoutfield` parameter)
+
+.. _omhiredis_template:
+
+Template
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "RSYSLOG_ForwardFormat", "no", "none"
+
+.. Warning::
+ Template is required if using "template" mode.
+
+
+Key
+^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+Key is required if using "publish", "queue" or "set" modes.
+
+
+Dynakey
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+If set to "on", the key value will be considered a template by Rsyslog.
+Useful when dynamic key generation is desired.
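+
+A sketch of dynamic key generation, where the key is built from a template;
+the template name and key layout are examples only:
+
+.. code-block:: none
+
+ template(name="perProgramKey" type="string" string="logs:%programname%")
+
+ action(
+ name="push_redis_dyn"
+ type="omhiredis"
+ server="my-redis-server.example.com"
+ serverport="6379"
+ mode="queue"
+ key="perProgramKey"
+ dynakey="on")
+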
+
+Userpush
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+If set to "on", RPUSH is used instead of LPUSH; if not set or "off", LPUSH is used.
+
+
+Expiration
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "number", "0", "no", "none"
+
+Only applicable with mode "set". Specifies an expiration for the keys set by omhiredis.
+If this parameter is not specified, the value will be 0 so keys will last forever, otherwise they will last for X
+seconds.
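+
+A sketch of "set" mode with expiring keys (SETEX); the server, key, and
+expiration value are examples only:
+
+.. code-block:: none
+
+ action(
+ name="set_redis"
+ type="omhiredis"
+ server="my-redis-server.example.com"
+ serverport="6379"
+ mode="set"
+ key="last_message"
+ expiration="3600")
+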
+
+.. _omhiredis_streamoutfield:
+
+stream.outField
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "msg", "no", "none"
+
+| Only applicable with mode "stream".
+| The stream ID's field to use to insert the generated log.
+
+.. note::
+ Currently, the module cannot use the full message object, so it can only insert templated messages to a single stream entry's specific field
+
+
+.. _omhiredis_streamcapacitylimit:
+
+stream.capacityLimit
+^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "positive integer", "0", "no", "none"
+
+| Only applicable with mode "stream".
+| If set to a value greater than 0 (zero), the XADD will add a `MAXLEN <https://redis.io/docs/data-types/streams-tutorial/#capped-streams>`_ option with **approximate** trimming, limiting the number of stored entries in the stream at each insertion.
+
+.. Warning::
+ This parameter has no way to check if the deleted entries have been ACK'ed or even used; only set it if you are sure the insertion rate is lower than the dequeuing rate, to avoid losing entries!
+
+
+.. _omhiredis_streamack:
+
+stream.ack
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "boolean", "off", "no", "none"
+
+| Only applicable with mode "stream".
+| If set, the module will send an acknowledgement to Redis, for the stream defined by :ref:`omhiredis_streamkeyack`, with the group defined by :ref:`omhiredis_streamgroupack` and the ID defined by :ref:`omhiredis_streamindexack`.
+| This is especially useful when used with the :ref:`imhiredis_stream_consumerack` deactivated, as it allows omhiredis to acknowledge the correct processing of the log once the job is effectively done.
+
+
+.. _omhiredis_streamdel:
+
+stream.del
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "boolean", "off", "no", "none"
+
+| Only applicable with mode "stream".
+| If set, the module will send a XDEL command to remove an entry, for the stream defined by :ref:`omhiredis_streamkeyack`, and the ID defined by :ref:`omhiredis_streamindexack`.
+| This can be useful to automatically remove processed entries extracted on a previous stream by imhiredis.
+
+
+.. _omhiredis_streamkeyack:
+
+stream.keyAck
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "", "no", "none"
+
+| Only applicable with mode "stream".
+| Is **required**, if one of :ref:`omhiredis_streamack` or :ref:`omhiredis_streamdel` are **on**.
+| Defines the value to use for acknowledging/deleting while inserting a new entry; it can be either a constant value or a template name if :ref:`omhiredis_streamdynakeyack` is set.
+| This can be useful to automatically acknowledge/remove processed entries extracted on a previous stream by imhiredis.
+
+
+.. _omhiredis_streamdynakeyack:
+
+stream.dynaKeyAck
+^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "boolean", "off", "no", "none"
+
+| Only applicable with mode "stream".
+| If set to **on**, the value of :ref:`omhiredis_streamkeyack` is taken as an existing Rsyslog template.
+
+
+
+.. _omhiredis_streamgroupack:
+
+stream.groupAck
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "", "no", "none"
+
+| Only applicable with mode "stream".
+| Is **required**, if :ref:`omhiredis_streamack` is **on**.
+| Defines the value to use for acknowledging while inserting a new entry; it can be either a constant value or a template name if :ref:`omhiredis_streamdynagroupack` is set.
+| This can be useful to automatically acknowledge processed entries extracted on a previous stream by imhiredis.
+
+
+.. _omhiredis_streamdynagroupack:
+
+stream.dynaGroupAck
+^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "boolean", "off", "no", "none"
+
+| Only applicable with mode "stream".
+| If set to **on**, the value of :ref:`omhiredis_streamgroupack` is taken as an existing Rsyslog template.
+
+
+.. _omhiredis_streamindexack:
+
+stream.indexAck
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "", "no", "none"
+
+| Only applicable with mode "stream".
+| **Required** if :ref:`omhiredis_streamack` or :ref:`omhiredis_streamdel` is **on**.
+| Defines the entry ID to use for acknowledging/deleting while inserting a new entry; it can be either a constant value or a template name if :ref:`omhiredis_streamdynaindexack` is set.
+| This can be useful to automatically acknowledge/remove entries previously read from a stream by imhiredis.
+
+
+.. _omhiredis_streamdynaindexack:
+
+stream.dynaIndexAck
+^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "boolean", "off", "no", "none"
+
+| Only applicable with mode "stream".
+| If set to **on**, the value of :ref:`omhiredis_streamindexack` is taken as an existing Rsyslog template.
+
+Examples
+========
+
+Example 1: Template mode
+------------------------
+
+In "template" mode, the string constructed by the template is sent
+to Redis as a command.
+
+.. note::
+
+ This mode has problems with strings that contain spaces; the full message will not work correctly. In this mode, the template parameter is required and the key parameter is meaningless.
+
+.. code-block:: none
+
+ module(load="omhiredis")
+
+ template(
+ name="program_count_tmpl"
+ type="string"
+ string="HINCRBY progcount %programname% 1")
+
+ action(
+ name="count_programs"
+ server="my-redis-server.example.com"
+ serverport="6379"
+ type="omhiredis"
+ mode="template"
+ template="program_count_tmpl")
+
+
+Results
+^^^^^^^
+
+Here's an example redis-cli session where we HGETALL the counts:
+
+.. code-block:: none
+
+ > redis-cli
+ 127.0.0.1:6379> HGETALL progcount
+ 1) "rsyslogd"
+ 2) "35"
+ 3) "rsyslogd-pstats"
+ 4) "4302"
+
+
+Example 2: Queue mode
+---------------------
+
+In "queue" mode, the syslog message is pushed into a Redis list
+at "key", using the LPUSH command. If a template is not supplied,
+the plugin will default to the RSYSLOG_ForwardFormat template.
+
+.. code-block:: none
+
+ module(load="omhiredis")
+
+ action(
+ name="push_redis"
+ server="my-redis-server.example.com"
+ serverport="6379"
+ type="omhiredis"
+ mode="queue"
+ key="my_queue")
+
+
+Results
+^^^^^^^
+
+Here's an example redis-cli session where we RPOP from the queue:
+
+.. code-block:: none
+
+ > redis-cli
+ 127.0.0.1:6379> RPOP my_queue
+
+ "<46>2015-09-17T10:54:50.080252-04:00 myhost rsyslogd: [origin software=\"rsyslogd\" swVersion=\"8.13.0.master\" x-pid=\"6452\" x-info=\"http://www.rsyslog.com\"] start"
+
+ 127.0.0.1:6379>
+
+
+Example 3: Publish mode
+-----------------------
+
+In "publish" mode, the syslog message is published to a Redis
+topic set by "key". If a template is not supplied, the plugin
+will default to the RSYSLOG_ForwardFormat template.
+
+.. code-block:: none
+
+ module(load="omhiredis")
+
+ action(
+ name="publish_redis"
+ server="my-redis-server.example.com"
+ serverport="6379"
+ type="omhiredis"
+ mode="publish"
+ key="my_channel")
+
+
+Results
+^^^^^^^
+
+Here's an example redis-cli session where we SUBSCRIBE to the topic:
+
+.. code-block:: none
+
+ > redis-cli
+
+ 127.0.0.1:6379> subscribe my_channel
+
+ Reading messages... (press Ctrl-C to quit)
+
+ 1) "subscribe"
+
+ 2) "my_channel"
+
+ 3) (integer) 1
+
+ 1) "message"
+
+ 2) "my_channel"
+
+ 3) "<46>2015-09-17T10:55:44.486416-04:00 myhost rsyslogd-pstats: {\"name\":\"imuxsock\",\"origin\":\"imuxsock\",\"submitted\":0,\"ratelimit.discarded\":0,\"ratelimit.numratelimiters\":0}"
+
+
+Example 4: Set mode
+-------------------
+
+In "set" mode, the syslog message is set as a Redis key at "key". If a template is not supplied,
+the plugin will default to the RSYSLOG_ForwardFormat template.
+
+.. code-block:: none
+
+ module(load="omhiredis")
+
+ action(
+ name="set_redis"
+ server="my-redis-server.example.com"
+ serverport="6379"
+ type="omhiredis"
+ mode="set"
+ key="my_key")
+
+
+Results
+^^^^^^^
+Here's an example redis-cli session where we get the key:
+
+.. code-block:: none
+
+ > redis-cli
+
+ 127.0.0.1:6379> get my_key
+
+ "<46>2019-12-17T20:16:54.781239+00:00 localhost rsyslogd-pstats: { \"name\": \"main Q\", \"origin\": \"core.queue\",
+ \"size\": 3, \"enqueued\": 7, \"full\": 0, \"discarded.full\": 0, \"discarded.nf\": 0, \"maxqsize\": 3 }"
+
+ 127.0.0.1:6379> ttl my_key
+
+ (integer) -1
+
+
+Example 5: Set mode with expiration
+-----------------------------------
+
+In "set" mode when "expiration" is set to a positive integer, the syslog message is set as a Redis key at "key",
+with expiration "expiration".
+If a template is not supplied, the plugin will default to the RSYSLOG_ForwardFormat template.
+
+.. code-block:: none
+
+ module(load="omhiredis")
+
+ action(
+ name="set_redis"
+ server="my-redis-server.example.com"
+ serverport="6379"
+ type="omhiredis"
+ mode="set"
+ key="my_key"
+ expiration="10")
+
+
+Results
+^^^^^^^
+
+Here's an example redis-cli session where we get the key and test the expiration:
+
+.. code-block:: none
+
+ > redis-cli
+
+ 127.0.0.1:6379> get my_key
+
+ "<46>2019-12-17T20:16:54.781239+00:00 localhost rsyslogd-pstats: { \"name\": \"main Q\", \"origin\": \"core.queue\",
+ \"size\": 3, \"enqueued\": 7, \"full\": 0, \"discarded.full\": 0, \"discarded.nf\": 0, \"maxqsize\": 3 }"
+
+ 127.0.0.1:6379> ttl my_key
+
+ (integer) 10
+
+ 127.0.0.1:6379> ttl my_key
+
+ (integer) 3
+
+ 127.0.0.1:6379> ttl my_key
+
+ (integer) -2
+
+ 127.0.0.1:6379> get my_key
+
+ (nil)
+
+
+Example 6: Set mode with dynamic key
+------------------------------------
+
+In any mode with "key" defined and "dynakey" as "on", the key used during operation will be dynamically generated
+by Rsyslog using templating.
+
+.. code-block:: none
+
+ module(load="omhiredis")
+
+ template(name="example-template" type="string" string="%hostname%")
+
+ action(
+ name="set_redis"
+ server="my-redis-server.example.com"
+ serverport="6379"
+ type="omhiredis"
+ mode="set"
+ key="example-template"
+ dynakey="on")
+
+
+Results
+^^^^^^^
+Here's an example redis-cli session where we get the dynamic key:
+
+.. code-block:: none
+
+ > redis-cli
+
+ 127.0.0.1:6379> keys *
+
+ (empty list or set)
+
+ 127.0.0.1:6379> keys *
+
+ 1) "localhost"
+
+
+Example 7: "Simple" stream mode
+-------------------------------
+
+| By using the **stream mode**, the template-formatted log is inserted into a stream, using the :ref:`omhiredis_streamoutfield` parameter as the field name (or *msg* by default).
+| The output template can be set explicitly with the :ref:`omhiredis_template` option (otherwise the default *RSYSLOG_ForwardFormat* template is used).
+
+.. code-block:: none
+
+ module(load="omhiredis")
+
+ template(name="example-template" type="string" string="%hostname%")
+
+ action(
+ type="omhiredis"
+ server="my-redis-server.example.com"
+ serverport="6379"
+ template="example-template"
+ mode="stream"
+ key="stream_output"
+ stream.outField="data")
+
+
+Results
+^^^^^^^
+Here's an example redis-cli session where we get the newly inserted stream index:
+
+.. code-block:: none
+
+ > redis-cli
+
+ 127.0.0.1:6379> XLEN stream_output
+ 1
+
+ 127.0.0.1:6379> xread STREAMS stream_output 0
+ 1) 1) "stream_output"
+ 2) 1) 1) "1684507855284-0"
+ 2) 1) "data"
+ 2) "localhost"
+
+
+Example 8: Get from a stream with imhiredis, then insert in another one with omhiredis
+--------------------------------------------------------------------------------------
+
+| When you use omhiredis in stream mode with imhiredis in stream mode as input, you might want to acknowledge the entries read by imhiredis once omhiredis has inserted them somewhere else.
+| The module can acknowledge input entries using information either provided by the user through configuration or taken from the log entry itself.
+| Under the hood, imhiredis adds metadata to the logs it reads from Redis streams; this data includes the stream name, the entry ID, the group name and the consumer name (when read from a consumer group).
+| This information is added to the **$.redis** object and can be retrieved with the help of specific templates.
+
+.. code-block:: none
+
+ module(load="imhiredis")
+ module(load="omhiredis")
+
+ template(name="redisJsonMessage" type="list") {
+ property(name="$!output")
+ }
+
+ template(name="indexTemplate" type="list") {
+ property(name="$.redis!index")
+ }
+ # Not used in this example, but can be used to replace the static declarations in omhiredis' configuration below
+ template(name="groupTemplate" type="list") {
+ property(name="$.redis!group")
+ }
+ template(name="keyTemplate" type="list") {
+ property(name="$.redis!stream")
+ }
+
+ input(type="imhiredis"
+ server="127.0.0.1"
+ port="6379"
+ mode="stream"
+ key="stream_input"
+ stream.consumerGroup="group1"
+ stream.consumerName="consumer1"
+ stream.consumerACK="off"
+ ruleset="receive_redis")
+
+ ruleset(name="receive_redis") {
+
+ action(type="omhiredis"
+ server="127.0.0.1"
+ serverport="6379"
+ mode="stream"
+ key="stream_output"
+ stream.ack="on"
+ # The key and group values are set statically, but the index value is taken from imhiredis metadata
+ stream.dynaKeyAck="off"
+ stream.keyAck="stream_input"
+ stream.dynaGroupAck="off"
+ stream.groupAck="group1"
+ stream.indexAck="indexTemplate"
+ stream.dynaIndexAck="on"
+ template="redisJsonMessage"
+ )
+ }
+
+
+Results
+^^^^^^^
+Here's an example redis-cli session where we get the pending entries at the end of the log re-insertion:
+
+.. code-block:: none
+
+ > redis-cli
+
+ 127.0.0.1:6379> XINFO GROUPS stream_input
+ 1) 1) "name"
+ 1) "group1"
+ 2) "consumers"
+ 3) (integer) 1
+ 4) "pending"
+ 5) (integer) 0
+ 6) "last-delivered-id"
+ 7) "1684509391900-0"
+ 8) "entries-read"
+ 9) (integer) 1
+ 10) "lag"
+ 11) (integer) 0
+
+
+
+Example 9: Ensuring streams don't grow indefinitely
+---------------------------------------------------
+
+| While using Redis streams, stream entries are not automatically evicted, even if you acknowledge them.
+| You have several options to keep your streams under reasonable memory usage while making sure your data is not evicted before being processed.
+| Two options are available and can be used independently of each other (as they do not apply to the same stream):
+
+ - **stream.del** to delete processed entries (it can also be used to complement ACK'ing)
+ - **stream.capacityLimit** to enforce a hard limit on the number of entries kept in the output stream before older entries are dropped
+
+.. code-block:: none
+
+ module(load="imhiredis")
+ module(load="omhiredis")
+
+ template(name="redisJsonMessage" type="list") {
+ property(name="$!output")
+ }
+
+ template(name="indexTemplate" type="list") {
+ property(name="$.redis!index")
+ }
+ template(name="keyTemplate" type="list") {
+ property(name="$.redis!stream")
+ }
+
+ input(type="imhiredis"
+ server="127.0.0.1"
+ port="6379"
+ mode="stream"
+ key="stream_input"
+ ruleset="receive_redis")
+
+ ruleset(name="receive_redis") {
+
+ action(type="omhiredis"
+ server="127.0.0.1"
+ serverport="6379"
+ mode="stream"
+ key="stream_output"
+ stream.capacityLimit="1000000"
+ stream.del="on"
+ stream.dynaKeyAck="on"
+ stream.keyAck="keyTemplate"
+ stream.dynaIndexAck="on"
+ stream.indexAck="indexTemplate"
+ template="redisJsonMessage"
+ )
+ }
+
+
+Results
+^^^^^^^
+Here, the result of this configuration is:
+
+ - entries are deleted from the source stream *stream_input* after being inserted by omhiredis to *stream_output*
+ - *stream_output* won't hold more than (approximately) a million entries at a time
+
+.. Warning::
+ **stream.capacityLimit** is an approximate maximum! See the `Redis documentation on MAXLEN and the '~' option <https://redis.io/commands/xadd>`_ to understand how it works.
diff --git a/source/configuration/modules/omhttp.rst b/source/configuration/modules/omhttp.rst
new file mode 100644
index 0000000..3eee0aa
--- /dev/null
+++ b/source/configuration/modules/omhttp.rst
@@ -0,0 +1,869 @@
+********************************************
+omhttp: HTTP Output Module
+********************************************
+
+=========================== ===========================================================================
+**Module Name:**  **omhttp**
+**Module Type:**  **contributed** - not maintained by rsyslog core team
+**Current Maintainer:** `Nelson Yen <https://github.com/n2yen/>`_
+Original Author: `Christian Tramnitz <https://github.com/ctramnitz/>`_
+=========================== ===========================================================================
+
+.. warning::
+
+ This page is incomplete. If you want to contribute, you can do so on
+ GitHub in the `rsyslog-doc repository <https://github.com/rsyslog/rsyslog-doc>`_.
+
+
+
+Purpose
+=======
+
+This module provides the ability to send messages over an HTTP REST interface.
+
+This module supports sending messages in individual requests (the default), and batching multiple messages into a single request. Support for retrying failed requests is available in both modes. GZIP compression is configurable with the compress_ parameter. TLS encryption is configurable with the useHttps_ parameter and associated tls parameters.
+
+In the default mode, every message is sent in its own HTTP request and it is a drop-in replacement for any other output module. In batch_ mode, the module implements several batch formatting options that are configurable via the batch.format_ parameter. Some additional attention to message formatting and retry_ strategy is required in this mode.
+
+See the `Examples`_ section for some configuration examples.
+
+
+Notable Features
+================
+
+- `Statistic Counter`_
+- `Message Batching`_, with support for several formats:
+
+  - Newline concatenation, like the Elasticsearch bulk format.
+  - JSON Array as a generic batching strategy.
+  - Kafka REST Proxy format, to support sending data through the `Confluent Kafka REST API <https://docs.confluent.io/current/kafka-rest/docs/index.html>`_ to a Kafka cluster.
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Action Parameters
+-----------------
+
+Server
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "array", "localhost", "no", "none"
+
+The server address you want to connect to.
+
+
+Serverport
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "443", "no", "none"
+
+The port you want to connect to.
+
+
+healthchecktimeout
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "3500", "no", "none"
+
+The time, in milliseconds, after which the health check will time out.
+
+httpcontenttype
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "application/json; charset=utf-8", "no", "none"
+
+The HTTP "Content-Type" header sent with each request. This parameter will override other defaults. If a batching mode is specified, the correct content type is automatically configured. The "Content-Type" header can also be configured using the httpheaders_ parameter, it should be configured in only one of the parameters.
+
+
+httpheaderkey
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+The header key. Currently only a single additional header/key pair is configurable; to specify multiple headers, see the httpheaders_ parameter. This parameter, along with httpheadervalue_, may be deprecated in the future.
+
+
+httpheadervalue
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+The header value for httpheaderkey_.
+
+httpheaders
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "array", "none", "no", "none"
+
+An array of strings that defines a list of one or more HTTP headers to send with each message. Keep in mind that some HTTP headers are added using other parameters: "Content-Type" can be configured using httpcontenttype_, and "Content-Encoding: gzip" is added when the compress_ parameter is used.
+
+.. code-block:: text
+
+ action(
+ type="omhttp"
+ ...
+ httpheaders=[
+ "X-Insert-Key: key",
+ "X-Event-Source: logs"
+ ]
+ ...
+ )
+
+
+uid
+^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+The username for basic auth.
+
+
+pwd
+^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+The password for the user for basic auth.
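+
+For illustration, a minimal sketch of HTTP basic authentication; the endpoint and credentials below are placeholders:
+
+.. code-block:: text
+
+   # placeholder endpoint and credentials, shown only to illustrate uid/pwd
+   action(
+       type="omhttp"
+       server="logs.example.com"
+       serverport="443"
+       useHttps="on"
+       restpath="ingest"
+       uid="syslog-writer"
+       pwd="s3cret"
+   )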
+
+
+restpath
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+The rest path you want to use. Do not include the leading slash character. If the full path looks like "localhost:5000/my/path", restpath should be "my/path".
+
+
+dynrestpath
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+When this parameter is set to "on" you can specify a template name in the parameter
+restpath instead of the actual path. This way you will be able to use dynamic rest
+paths for your messages based on the template you are using.
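+
+As a sketch, a dynamic rest path could be configured as follows; the template name and path layout are assumptions made for this example:
+
+.. code-block:: text
+
+   # route each message to a per-host path such as "events/<hostname>" (illustrative only)
+   template(name="tpl_dyn_restpath" type="string" string="events/%hostname%")
+
+   action(
+       type="omhttp"
+       server="127.0.0.1"
+       serverport="8080"
+       restpath="tpl_dyn_restpath"
+       dynrestpath="on"
+   )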
+
+
+checkpath
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+The health check path you want to use. Do not include the leading slash character. If the full path looks like "localhost:5000/my/path", checkpath should be "my/path".
+When this parameter is set, omhttp utilizes this path to determine if it is safe to resume (from suspend mode) and communicates this status back to rsyslog core.
+This parameter defaults to none, which implies that health checks are not needed, and it is always safe to resume from suspend mode.
+
+**Important** - Note that it is highly recommended to set a valid health check path, as this allows omhttp to better determine whether it is safe to retry.
+See the `rsyslog action queue documentation for more info <https://www.rsyslog.com/doc/v8-stable/configuration/actions.html>`_ regarding general rsyslog suspend and resume behavior.
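+
+A small sketch of a health-checked action, assuming the receiving server exposes a "ready" endpoint (as Loki does, see Example 5 below):
+
+.. code-block:: text
+
+   # suspend/resume decisions are based on GET <server>:<port>/ready
+   action(
+       type="omhttp"
+       server="localhost"
+       serverport="3100"
+       checkpath="ready"
+       healthchecktimeout="2000"
+       restpath="loki/api/v1/push"
+   )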
+
+
+batch
+^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+Batch and bulkmode do the same thing; bulkmode is included for backwards compatibility. See the `Message Batching`_ section for a detailed breakdown of how batching is implemented.
+
+This parameter activates batching mode, which queues messages and sends them as a single request. There are several related parameters that specify the format and size of the batch: they are batch.format_, batch.maxbytes_, and batch.maxsize_.
+
+Note that rsyslog core is the ultimate authority on when a batch must be submitted, due to the way that batching is implemented. This plugin implements the `output plugin transaction interface <https://www.rsyslog.com/doc/v8-stable/development/dev_oplugins.html#output-plugin-transaction-interface>`_. There may be multiple batches in a single transaction, but a batch will never span multiple transactions. This means that if batch.maxsize_ or batch.maxbytes_ is set very large, you may never actually see batches hit this size. Additionally, the number of messages per transaction is determined by the size of the main, action, and ruleset queues as well.
+
+Additionally, due to some open issues with rsyslog and the transaction interface, batching requires some nuanced retry_ configuration.
+
+
+batch.format
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "newline", "no", "none"
+
+This parameter specifies how to combine multiple messages into a single batch. Valid options are *newline* (default), *jsonarray*, *kafkarest*, and *lokirest*.
+
+Each message on the "Inputs" line is the templated log line that is fed into the omhttp action, and the "Output" line describes the resulting payload sent to the configured HTTP server.
+
+1. *newline* - Concatenates each message into a single string joined by newline ("\\n") characters. This mode is default and places no restrictions on the structure of the input messages.
+
+.. code-block:: text
+
+ Inputs: "message 1" "message 2" "message 3"
+ Output: "message 1\nmessage2\nmessage3"
+
+2. *jsonarray* - Builds a JSON array containing all messages in the batch. This mode requires that each message is parseable JSON, since the plugin parses each message as JSON while building the array.
+
+.. code-block:: text
+
+ Inputs: {"msg": "message 1"} {"msg"": "message 2"} {"msg": "message 3"}
+ Output: [{"msg": "message 1"}, {"msg"": "message 2"}, {"msg": "message 3"}]
+
+3. *kafkarest* - Builds a JSON object that conforms to the `Kafka Rest Proxy specification <https://docs.confluent.io/current/kafka-rest/docs/quickstart.html>`_. This mode requires that each message is parseable JSON, since the plugin parses each message as JSON while building the batch object.
+
+.. code-block:: text
+
+ Inputs: {"msg": "message 1"} {"msg"": "message 2"} {"msg": "message 3"}
+ Output: {"records": [{"value": {"msg": "message 1"}}, {"value": {"msg": "message 2"}}, {"value": {"msg": "message 3"}}]}
+
+4. *lokirest* - Builds a JSON object that conforms to the `Loki Rest specification <https://github.com/grafana/loki/blob/master/docs/api.md#post-lokiapiv1push>`_. This mode requires that each message is parseable JSON, since the plugin parses each message as JSON while building the batch object. Additionally, the operator is responsible for providing index keys, and message values.
+
+.. code-block:: text
+
+ Inputs: {"stream": {"tag1":"value1"}, values:[[ "%timestamp%", "message 1" ]]} {"stream": {"tag2":"value2"}, values:[[ %timestamp%, "message 2" ]]}
+ Output: {"streams": [{"stream": {"tag1":"value1"}, values:[[ "%timestamp%", "message 1" ]]},{"stream": {"tag2":"value2"}, values:[[ %timestamp%, "message 2" ]]}]}
+
+batch.maxsize
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "Size", "100", "no", "none"
+
+This parameter specifies the maximum number of messages that will be sent in each batch.
+
+batch.maxbytes
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "Size", "10485760 (10MB)", "no", "none"
+
+batch.maxbytes and maxbytes do the same thing; maxbytes is included for backwards compatibility.
+
+This parameter specifies the maximum size in bytes for each batch.
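+
+A hedged sketch combining both batch limits; the values are arbitrary examples, not recommendations:
+
+.. code-block:: text
+
+   # submit a batch once it reaches 500 messages or roughly 1 MiB, whichever comes first
+   action(
+       type="omhttp"
+       ...
+       batch="on"
+       batch.format="newline"
+       batch.maxsize="500"
+       batch.maxbytes="1048576"
+   )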
+
+template
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "StdJSONFmt", "no", "none"
+
+The template to be used for the messages.
+
+Note that in batching mode, this describes the format of *each* individual message, *not* the format of the resulting batch. Some batch modes require that a template produces valid JSON.
+
+
+retry
+^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+This parameter specifies whether failed requests should be retried using the custom retry logic implemented in this plugin. Requests returning 5XX HTTP status codes are considered retriable. If retry is enabled, set retry.ruleset_ as well.
+
+Note that retries are generally handled in rsyslog by setting action.resumeRetryCount="-1" (or some other integer), and the plugin lets rsyslog know it should start retrying by suspending itself. This is still the recommended approach in the 2 cases enumerated below when using this plugin. In both of these cases, the output plugin transaction interface is not used. That is, from rsyslog core's point of view, each message is contained in its own transaction.
+
+1. Batching is off (batch="off")
+2. Batching is on and the maximum batch size is 1 (batch="on" batch.maxsize="1")
+
+This custom retry behavior is the result of a bug in rsyslog's handling of transaction commits. See `this issue <https://github.com/rsyslog/rsyslog/issues/2420>`_ for full details. Essentially, if rsyslog hands omhttp 4 messages, and omhttp batches them up but the request fails, rsyslog will only retry the LAST message that it handed the plugin, instead of all 4, even if the plugin returns the correct "defer commit" statuses for messages 1, 2, and 3. This means that omhttp cannot rely on action.resumeRetryCount for any transaction that processes more than a single message, and explains why the 2 above cases do work correctly.
+
+It looks promising that this issue will be resolved at some point, so this behavior can be revisited at that time.
+
+retry.ruleset
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+This parameter specifies the ruleset where this plugin should requeue failed messages if retry_ is on. This ruleset generally would contain another omhttp action instance.
+
+**Important** - Note that the message that is queued on the retry ruleset is the templated output of the initial omhttp action. This means that no further templating should be done to messages inside this ruleset, unless retries should be templated differently than first-tries. An "echo template" does the trick here.
+
+.. code-block:: text
+
+ template(name="tpl_echo" type="string" string="%msg%")
+
+This retry ruleset can recursively call itself as its own retry.ruleset to retry forever, but there is no timeout behavior currently implemented.
+
+Alternatively, the omhttp action in the retry ruleset could be configured to support action.resumeRetryCount as explained above in the retry parameter section. The benefits of this approach are that retried messages still hit the server in a batch format (though with a single message in it), and that rsyslog can be configured to give up after some number of resume attempts so as to avoid resource exhaustion.
+
+Or, if some data loss or high latency is acceptable, do not configure retries with the retry ruleset itself. A single retry from the original ruleset might catch most failures, and errors from the retry ruleset could still be logged using the errorfile parameter and sent later on via some other process.
+
+ratelimit.interval
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "600", "no", "none"
+
+This parameter sets the rate limiting behavior for the retry.ruleset_. It specifies the interval, in seconds, over which rate limiting is applied. If more than ratelimit.burst messages are read during that interval, further messages up to the end of the interval are discarded. The number of messages discarded is emitted at the end of the interval (if there were any discards). Setting this to zero turns off rate limiting.
+
+ratelimit.burst
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "20000", "no", "none"
+
+This parameter sets the rate limiting behavior for the retry.ruleset_. It specifies the maximum number of messages that can be emitted within the period set by ratelimit.interval. For further information, see the description there.
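+
+For example, requeueing into the retry ruleset could be capped like this (the values and ruleset name are illustrative):
+
+.. code-block:: text
+
+   # allow at most 5000 requeued messages per 300-second window
+   action(
+       type="omhttp"
+       ...
+       retry="on"
+       retry.ruleset="rs_omhttp_retry"
+       ratelimit.interval="300"
+       ratelimit.burst="5000"
+   )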
+
+
+errorfile
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+Here you can set the name of a file to which all errors will be written. Any request that returns a 4XX or 5XX HTTP code is recorded in the error file. Each line is JSON formatted with "request" and "response" fields; a pretty-printed example is shown below.
+
+.. code-block:: text
+
+ {
+ "request": {
+ "url": "https://url.com:443/path",
+ "postdata": "mypayload"
+ },
+ "response" : {
+ "status": 400,
+ "message": "error string"
+ }
+ }
+
+It is intended that a full replay of failed data is possible by processing this file.
+
+compress
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+When switched to "on" each message will be compressed as GZIP using zlib's deflate compression algorithm.
+
+A "Content-Encoding: gzip" HTTP header is added to each request when this feature is used. Set the compress.level_ for fine-grained control.
+
+compress.level
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "-1", "no", "none"
+
+Specify the zlib compression level if compress_ is enabled. Check the `zlib manual <https://www.zlib.net/manual.html>`_ for further documentation.
+
+"-1" is the default value that strikes a balance between best speed and best compression. "0" disables compression. "1" results in the fastest compression. "9" results in the best compression.
+
+useHttps
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+When switched to "on" you will use https instead of http.
+
+
+tls.cacert
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+This parameter sets the path to the Certificate Authority (CA) bundle. Expects .pem format.
+
+tls.mycert
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+This parameter sets the path to the SSL client certificate. Expects .pem format.
+
+tls.myprivkey
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+This parameter sets the path to the SSL private key. Expects .pem format.
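+
+A minimal TLS sketch; the certificate paths are placeholders:
+
+.. code-block:: text
+
+   # client-authenticated HTTPS with a custom CA bundle (paths are placeholders)
+   action(
+       type="omhttp"
+       ...
+       useHttps="on"
+       tls.cacert="/etc/rsyslog.d/ca.pem"
+       tls.mycert="/etc/rsyslog.d/client-cert.pem"
+       tls.myprivkey="/etc/rsyslog.d/client-key.pem"
+   )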
+
+allowunsignedcerts
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "boolean", "off", "no", "none"
+
+If `"on"`, this will set the curl `CURLOPT_SSL_VERIFYPEER` option to
+`0`. You are strongly discouraged to set this to `"on"`. It is
+primarily useful only for debugging or testing.
+
+skipverifyhost
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "boolean", "off", "no", "none"
+
+If `"on"`, this will set the curl `CURLOPT_SSL_VERIFYHOST` option to
+`0`. You are strongly discouraged to set this to `"on"`. It is
+primarily useful only for debugging or testing.
+
+reloadonhup
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+If this parameter is "on", the plugin will close and reopen any libcurl handles on a HUP signal. This option is primarily intended to enable reloading short-lived certificates without restarting rsyslog.
+
+Statistic Counter
+=================
+
+This plugin maintains global :doc:`statistics <../rsyslog_statistic_counter>` for omhttp that
+accumulate data across all action instances. The statistic origin is named "omhttp", with the following counters:
+
+- **messages.submitted** - Number of messages submitted to omhttp. Messages resubmitted via a retry ruleset will be counted twice.
+
+- **messages.success** - Number of messages successfully sent.
+
+- **messages.fail** - Number of messages that omhttp failed to deliver for any reason.
+
+- **messages.retry** - Number of messages that omhttp resubmitted for retry via the retry ruleset.
+
+- **request.count** - Number of attempted HTTP requests.
+
+- **request.success** - Number of successful HTTP requests. A successful request can return *any* HTTP status code.
+
+- **request.fail** - Number of failed HTTP requests. A failed request is something like an invalid SSL handshake, or the server is not reachable. Requests returning 4XX or 5XX HTTP status codes are *not* failures.
+
+- **request.status.success** - Number of requests returning 1XX or 2XX HTTP status codes.
+
+- **request.status.fail** - Number of requests returning 3XX, 4XX, or 5XX HTTP status codes. If a request fails (i.e. the server is not reachable), this counter will *not* be incremented.
+
+Message Batching
+================
+
+See the batch.format_ section for some light examples of available batching formats.
+
+Implementation
+--------------
+
+Here's the pseudocode of the batching algorithm used by omhttp. This section of code would run once per transaction.
+
+.. code-block:: python
+
+ Q = Queue()
+
+ def submit(Q): # function to submit
+ batch = serialize(Q) # serialize according to configured batch.format
+ result = post(batch) # http post serialized batch to server
+ checkFailureAndRetry(Q, result) # check whether the post failed and push failed messages to the configured retry.ruleset
+ Q.empty() # reset for next batch
+
+
+ while isActive(transaction): # rsyslog manages the transaction
+ message = receiveMessage() # rsyslog sends us messages
+ if wouldTriggerSubmit(Q, message): # if this message puts us over maxbytes or maxsize
+ submit(Q) # submit the current batch
+ Q.push(message) # queue this message on the current batch
+
+ submit(Q) # transaction is over, submit what is currently in the queue
+
+
+Walkthrough
+-----------
+
+This is a walkthrough of a scenario that tails a file into omhttp. Suppose we have a file called ``/var/log/my.log`` with this content:
+
+.. code-block:: text
+
+ 001 message
+ 002 message
+ 003 message
+ 004 message
+ 005 message
+ 006 message
+ 007 message
+ ...
+
+We are tailing this using imfile and defining a template to generate a JSON payload...
+
+.. code-block:: text
+
+ input(type="imfile" File="/var/log/my.log" ruleset="rs_omhttp" ... )
+
+ # Produces JSON formatted payload
+ template(name="tpl_omhttp_json" type="list") {
+ constant(value="{") property(name="msg" outname="message" format="jsonfr")
+ constant(value=",") property(name="hostname" outname="host" format="jsonfr")
+ constant(value=",") property(name="timereported" outname="timestamp" format="jsonfr" dateFormat="rfc3339")
+ constant(value="}")
+ }
+
+Our omhttp ruleset is configured to batch using the *jsonarray* format with 5 messages per batch, and to use a retry ruleset.
+
+
+.. code-block:: text
+
+ module(load="omhttp")
+
+ ruleset(name="rs_omhttp") {
+ action(
+ type="omhttp"
+ template="tpl_omhttp_json"
+ batch="on"
+ batch.format="jsonarray"
+ batch.maxsize="5"
+ retry="on"
+ retry.ruleset="rs_omhttp_retry"
+ ...
+ )
+ }
+
+ call rs_omhttp
+
+Each input message to this omhttp action is the output of ``tpl_omhttp_json`` with the following structure..
+
+.. code-block:: text
+
+ {"message": "001 message", "host": "localhost", "timestamp": "2018-12-28T21:14:13.840470+00:00"}
+
+After 5 messages have been queued, and a batch submit is triggered, omhttp serializes the messages as a JSON array and attempts to post the batch to the server. At this point the payload on the wire looks like this..
+
+.. code-block:: text
+
+ [
+ {"message": "001 message", "host": "localhost", "timestamp": "2018-12-28T21:14:13.000000+00:00"},
+ {"message": "002 message", "host": "localhost", "timestamp": "2018-12-28T21:14:14.000000+00:00"},
+ {"message": "003 message", "host": "localhost", "timestamp": "2018-12-28T21:14:15.000000+00:00"},
+ {"message": "004 message", "host": "localhost", "timestamp": "2018-12-28T21:14:16.000000+00:00"},
+ {"message": "005 message", "host": "localhost", "timestamp": "2018-12-28T21:14:17.000000+00:00"}
+ ]
+
+If the request fails, omhttp requeues each failed message onto the retry ruleset. However, recall that the inputs to the ``rs_omhttp`` ruleset are the rendered *outputs* of ``tpl_omhttp_json``, and therefore we *cannot* use the same template (and therefore the same action instance) to produce the retry messages. At this point, the ``msg`` rsyslog property is ``{"message": "001 message", "host": "localhost", "timestamp": "2018-12-28T21:14:13.000000+00:00"}`` instead of the original ``001 message``, and ``tpl_omhttp_json`` would render an incorrect payload.
+
+Instead, we define a simple template that echoes its input:
+
+.. code-block:: text
+
+ template(name="tpl_echo" type="string" string="%msg%")
+
+And assign it as the template of the action in the retry ruleset:
+
+.. code-block:: text
+
+ ruleset(name="rs_omhttp_retry") {
+ action(
+ type="omhttp"
+ template="tpl_echo"
+ batch="on"
+ batch.format="jsonarray"
+ batch.maxsize="5"
+ ...
+ )
+ }
+
+And the destination is none the wiser! The *newline*, *jsonarray*, and *kafkarest* formats all behave in the same way with respect to their batching and retry behavior, and differ only in the format of the on-the-wire payload. The formats themselves are described in the batch.format_ section.
+
+Examples
+========
+
+Example 1
+---------
+
+The following example shows basic usage: first the module is loaded, and then
+the action is used with a standard retry strategy.
+
+
+.. code-block:: text
+
+ module(load="omhttp")
+ template(name="tpl1" type="string" string="{\"type\":\"syslog\", \"host\":\"%HOSTNAME%\"}")
+ action(
+ type="omhttp"
+ server="127.0.0.1"
+ serverport="8080"
+ restpath="events"
+ template="tpl1"
+ action.resumeRetryCount="3"
+ )
+
+Example 2
+---------
+
+The following example shows basic batch usage with no retry processing.
+
+
+.. code-block:: text
+
+ module(load="omhttp")
+ template(name="tpl1" type="string" string="{\"type\":\"syslog\", \"host\":\"%HOSTNAME%\"}")
+ action(
+ type="omhttp"
+ server="127.0.0.1"
+ serverport="8080"
+ restpath="events"
+ template="tpl1"
+ batch="on"
+ batch.format="jsonarray"
+ batch.maxsize="10"
+ )
+
+
+Example 3
+---------
+
+The following example shows batch usage with a retry ruleset that retries forever.
+
+
+.. code-block:: text
+
+ module(load="omhttp")
+
+ template(name="tpl_echo" type="string" string="%msg%")
+ ruleset(name="rs_retry_forever") {
+ action(
+ type="omhttp"
+ server="127.0.0.1"
+ serverport="8080"
+ restpath="events"
+ template="tpl_echo"
+
+ batch="on"
+ batch.format="jsonarray"
+ batch.maxsize="10"
+
+ retry="on"
+ retry.ruleset="rs_retry_forever"
+ )
+ }
+
+ template(name="tpl1" type="string" string="{\"type\":\"syslog\", \"host\":\"%HOSTNAME%\"}")
+ action(
+ type="omhttp"
+ server="127.0.0.1"
+ serverport="8080"
+ restpath="events"
+ template="tpl1"
+
+ batch="on"
+ batch.format="jsonarray"
+ batch.maxsize="10"
+
+ retry="on"
+ retry.ruleset="rs_retry_forever"
+ )
+
+Example 4
+---------
+
+The following example shows batch usage with a couple of retry options.
+
+.. code-block:: text
+
+ module(load="omhttp")
+
+ template(name="tpl_echo" type="string" string="%msg%")
+
+ # This retry ruleset tries to send batches once then logs failures.
+ # Error log could be tailed by rsyslog itself or processed by some
+ # other program.
+ ruleset(name="rs_retry_once_errorfile") {
+ action(
+ type="omhttp"
+ server="127.0.0.1"
+ serverport="8080"
+ restpath="events"
+ template="tpl_echo"
+
+ batch="on"
+ batch.format="jsonarray"
+ batch.maxsize="10"
+
+ retry="off"
+ errorfile="/var/log/rsyslog/omhttp_errors.log"
+ )
+ }
+
+ # This retry ruleset gives up trying to batch messages and instead always
+ # uses a batch size of 1, relying on the suspend/resume mechanism to do
+ # further retries if needed.
+ ruleset(name="rs_retry_batchsize_1") {
+ action(
+ type="omhttp"
+ server="127.0.0.1"
+ serverport="8080"
+ restpath="events"
+ template="tpl_echo"
+
+ batch="on"
+ batch.format="jsonarray"
+ batch.maxsize="1"
+ action.resumeRetryCount="-1"
+ )
+ }
+
+ template(name="tpl1" type="string" string="{\"type\":\"syslog\", \"host\":\"%HOSTNAME%\"}")
+ action(
+ type="omhttp"
+ template="tpl1"
+
+ ...
+
+ retry="on"
+ retry.ruleset="<some_retry_ruleset>"
+ )
+
+Example 5
+---------
+
+The following example shows a batch action that pushes logs to Loki, using a health check path and queue settings.
+
+.. code-block:: text
+
+ module(load="omhttp")
+
+ template(name="loki" type="string" string="{\"stream\":{\"host\":\"%HOSTNAME%\",\"facility\":\"%syslogfacility-text%\",\"priority\":\"%syslogpriority-text%\",\"syslogtag\":\"%syslogtag%\"},\"values\": [[ \"%timegenerated:::date-unixtimestamp%000000000\", \"%msg%\" ]]}")
+
+
+ action(
+ name="loki"
+ type="omhttp"
+ useHttps="off"
+ server="localhost"
+ serverport="3100"
+ checkpath="ready"
+
+ restpath="loki/api/v1/push"
+ template="loki"
+ batch.format="lokirest"
+ batch="on"
+ batch.maxsize="10"
+
+ queue.size="10000" queue.type="linkedList"
+ queue.workerthreads="3"
+ queue.workerthreadMinimumMessages="1000"
+ queue.timeoutWorkerthreadShutdown="500"
+ queue.timeoutEnqueue="10000"
+ )
diff --git a/source/configuration/modules/omhttpfs.rst b/source/configuration/modules/omhttpfs.rst
new file mode 100644
index 0000000..acaf369
--- /dev/null
+++ b/source/configuration/modules/omhttpfs.rst
@@ -0,0 +1,149 @@
+*************************************
+omhttpfs: Hadoop HTTPFS Output Module
+*************************************
+
+=========================== ===========================================================================
+**Module Name:** **omhttpfs**
+**Available Since:** **8.10.0**
+**Author:** `sskaje <https://sskaje.me/2014/12/omhttpfs-rsyslog-hdfs-output-plugin/>`_ <sskaje@gmail.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+This module is an alternative to omhdfs via `Hadoop HDFS over HTTP <http://hadoop.apache.org/docs/current/hadoop-hdfs-httpfs/index.html>`_.
+
+
+Dependencies
+============
+
+* libcurl
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Action Parameters
+-----------------
+
+Host
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "127.0.0.1", "no", "none"
+
+HttpFS server host.
+
+
+Port
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "14000", "no", "none"
+
+HttpFS server port.
+
+
+User
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "hdfs", "no", "none"
+
+HttpFS auth user.
+
+
+https
+^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+Turn on if your HttpFS runs on HTTPS.
+
+
+File
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "yes", "none"
+
+File to write, or a template name.
+
+
+isDynFile
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+Turn this on if your **file** is a template name.
+See examples below.
+
+
+Template
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "RSYSLOG_FileFormat", "no", "none"
+
+Format your message when writing to **file**. Default: *RSYSLOG_FileFormat*
+
+
+Configure
+=========
+
+.. code-block:: none
+
+ ./configure --enable-omhttpfs
+
+
+Examples
+========
+
+Example 1
+---------
+
+.. code-block:: none
+
+ module(load="omhttpfs")
+ template(name="hdfs_tmp_file" type="string" string="/tmp/%$YEAR%/test.log")
+ template(name="hdfs_tmp_filecontent" type="string" string="%$YEAR%-%$MONTH%-%$DAY% %MSG% ==\n")
+ local4.* action(type="omhttpfs" host="10.1.1.161" port="14000" https="off" file="hdfs_tmp_file" isDynFile="on")
+ local5.* action(type="omhttpfs" host="10.1.1.161" port="14000" https="off" file="hdfs_tmp_file" isDynFile="on" template="hdfs_tmp_filecontent")
+
+
diff --git a/source/configuration/modules/omjournal.rst b/source/configuration/modules/omjournal.rst
new file mode 100644
index 0000000..fd250a4
--- /dev/null
+++ b/source/configuration/modules/omjournal.rst
@@ -0,0 +1,71 @@
+*********************************
+omjournal: Systemd Journal Output
+*********************************
+
+=========================== ===========================================================================
+**Module Name:**  **omjournal**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+This module provides native support for logging to the systemd journal.
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Action Parameters
+-----------------
+
+Template
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+Template to use when submitting messages.
+
+By default, rsyslog will use the incoming %msg% as the MESSAGE field
+of the journald entry, and include the syslog tag and priority.
+
+You can override the default formatting of the message, and include
+custom fields with a template. Complex fields in the template
+(e.g. JSON entries) will be added to the journal as JSON text. Other
+fields will be coerced to strings.
+
+Journald requires that you include a template parameter named MESSAGE.
+
+
+Examples
+========
+
+Example 1
+---------
+
+The following sample writes all syslog messages to the journal with a
+custom EVENT_TYPE field.
+
+.. code-block:: none
+
+ module(load="omjournal")
+
+ template(name="journal" type="list") {
+ constant(value="Something happened" outname="MESSAGE")
+ property(name="$!event-type" outname="EVENT_TYPE")
+ }
+
+ action(type="omjournal" template="journal")
+
+
diff --git a/source/configuration/modules/omkafka.rst b/source/configuration/modules/omkafka.rst
new file mode 100644
index 0000000..ece8cd6
--- /dev/null
+++ b/source/configuration/modules/omkafka.rst
@@ -0,0 +1,478 @@
+******************************
+omkafka: write to Apache Kafka
+******************************
+
+=========================== ===========================================================================
+**Module Name:** **omkafka**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+**Available since:** v8.7.0
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+The omkafka plug-in implements an Apache Kafka producer, permitting
+rsyslog to write data to Kafka.
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Action Parameters
+-----------------
+
+Note that omkafka supports some *Array*-type parameters. While the parameter
+name can only be set once, it is possible to set multiple values with that
+single parameter. See the :ref:`omkafka-examples-label` section for details.
+
+
+Broker
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "array", "localhost:9092", "no", "none"
+
+Specifies the broker(s) to use.
+
+
+Topic
+^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "yes", "none"
+
+Specifies the topic to produce to.
+
+
+Key
+^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+Kafka key to be used for all messages.
+
+If a key is provided and partitions.auto="on" is set, then all messages will
+be assigned to a partition based on the key.
+
+
+DynaKey
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive", "Available since"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none", v8.1903
+
+If set, the key parameter becomes a template for the key to base the
+partitioning on.
+
+
+DynaTopic
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+If set, the topic parameter becomes a template for which topic to
+produce messages to. The cache is cleared on HUP.
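+
+As a sketch, per-program topic selection might look like this; the template and topic naming scheme are assumptions made for the example:
+
+.. code-block:: none
+
+   # hypothetical layout: one Kafka topic per syslog program name
+   template(name="tpl_kafka_topic" type="string" string="logs-%programname%")
+
+   action(
+       type="omkafka"
+       broker=["localhost:9092"]
+       topic="tpl_kafka_topic"
+       dynatopic="on"
+       dynatopic.cachesize="100"
+   )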
+
+
+DynaTopic.Cachesize
+^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "50", "no", "none"
+
+If set, defines the number of topics that will be kept in the dynatopic
+cache.
+
+
+Partitions.Auto
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+Librdkafka provides an automatic partitioning function that will
+automatically distribute the produced messages into all partitions
+configured for that topic.
+
+To use it, set partitions.auto="on". This is an alternative to specifying the
+number of partitions on the producer side: the number of partitions per topic
+can then be changed in the Kafka cluster configuration instead of on every
+machine talking to Kafka via rsyslog.
+
+If no key is set, messages will be distributed randomly across partitions.
+This results in a very even load on all partitions, but does not preserve
+ordering between the messages.
+
+If a key is set, a partition will be chosen automatically based on it. All
+messages with the same key will be sorted into the same partition,
+preserving their ordering. For example, by setting the key to the hostname,
+messages from a specific host will be written to one partition and ordered,
+but messages from different nodes will be distributed across different
+partitions. This distribution is essentially random, but stable. If the
+number of different keys is much larger than the number of partitions on the
+topic, load will be distributed fairly evenly.
+
+If set, it will override any other partitioning scheme configured.
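+
+For illustration, key-based automatic partitioning could be configured as below; using the hostname as the key is just one possible choice:
+
+.. code-block:: none
+
+   # keep messages from the same host on the same partition (illustrative)
+   template(name="tpl_kafka_key" type="string" string="%hostname%")
+
+   action(
+       type="omkafka"
+       broker=["localhost:9092"]
+       topic="syslog"
+       partitions.auto="on"
+       key="tpl_kafka_key"
+       dynakey="on"
+   )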
+
+
+Partitions.number
+^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "none", "no", "none"
+
+If set, specifies how many partitions exist **and** activates
+load-balancing among them. Messages are distributed more or
+less evenly between the partitions. Note that the number specified
+must be correct. Otherwise, some errors may occur or some partitions
+may never receive data.
+
+
+Partitions.useFixed
+^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "none", "no", "none"
+
+If set, specifies the partition to which data is produced. All
+data goes to this partition; no other partition is ever involved
+for this action.
+
+
+errorFile
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+If set, messages that could not be sent and caused an error are
+written to the file specified. This file is in JSON format, with a
+single record being written for each message in error. The entry
+contains the full message, as well as the Kafka error number and
+reason string.
+
+The idea behind the error file is that the admin can periodically
+run a script that reads the error file and reacts to it. Note that
+the error file is kept open from when the first error occurred up
+until rsyslog is terminated or receives a HUP signal.
+
+
+statsFile
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+If set, the contents of the JSON object containing the full librdkafka
+statistics will be written to the file specified. The file will be
+updated based on the statistics.interval.ms confparam value, which must
+also be set.
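+
+A hedged sketch; the file path is a placeholder and ``statistics.interval.ms`` is the librdkafka option referred to above:
+
+.. code-block:: none
+
+   # dump librdkafka statistics once a minute (path is a placeholder)
+   action(
+       type="omkafka"
+       broker=["localhost:9092"]
+       topic="syslog"
+       statsFile="/var/log/rsyslog/omkafka-stats.json"
+       confParam=["statistics.interval.ms=60000"]
+   )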
+
+
+ConfParam
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "array", "none", "no", "none"
+
+Permits to specify Kafka options. Rather than offering a myriad of
+config settings to match the Kafka parameters, we provide this setting
+here as a vehicle to set any Kafka parameter. This has the big advantage
+that Kafka parameters that come up in new releases can immediately be used.
+
+Note that we use librdkafka for the Kafka connection, so the parameters
+are actually those that librdkafka supports. As of our understanding, this
+is a superset of the native Kafka parameters.
+
+
+TopicConfParam
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "array", "none", "no", "none"
+
+In essence the same as *confParam*, but for the Kafka topic.
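+
+To illustrate both settings, a possible combination of librdkafka options; the specific options and values are examples, not recommendations:
+
+.. code-block:: none
+
+   # global librdkafka options via confParam, per-topic options via topicConfParam
+   action(
+       type="omkafka"
+       broker=["localhost:9092"]
+       topic="syslog"
+       confParam=[
+           "socket.timeout.ms=10000",
+           "queue.buffering.max.messages=100000"
+       ]
+       topicConfParam=[
+           "request.required.acks=-1",
+           "compression.codec=snappy"
+       ]
+   )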
+
+
+Template
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "template set via template module parameter", "no", "none"
+
+Sets the template to be used for this action.
+
+
+closeTimeout
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "2000", "no", "none"
+
+Sets the time to wait in ms (milliseconds) for draining messages submitted to kafka-handle
+(provided by librdkafka) before closing it.
+
+The maximum value of closeTimeout used across all omkafka action instances
+is used as librdkafka unload-timeout while unloading the module
+(for shutdown, for instance).
+
+
+resubmitOnFailure
+^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+.. versionadded:: 8.28.0
+
+If enabled, failed messages will be resubmitted automatically once Kafka is able to accept
+messages again. To prevent message loss, this option should be enabled.
+
+**Note:** Messages that are rejected by Kafka because they exceed the maximum configured
+message size are automatically dropped. These errors are not retriable.
+
+KeepFailedMessages
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+If enabled, failed messages will be saved on shutdown, loaded on startup, and resent once
+the kafka server is able to receive messages again. This setting requires resubmitOnFailure to be enabled as well.
+
+
+failedMsgFile
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+.. versionadded:: 8.28.0
+
+Name of the file in which failed messages should be stored.
+It needs to be set when keepFailedMessages is enabled; otherwise, failed messages will not be saved.
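+
+As an illustration, the resubmission-related parameters could be combined as
+follows; the broker address and spool file path are example values only:
+
+.. code-block:: none
+
+ action(type="omkafka" topic="mytopic" broker=["localhost:9092"]
+        resubmitOnFailure="on"
+        keepFailedMessages="on"
+        failedMsgFile="/var/spool/rsyslog/omkafka-failed.msg")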
+
+
+statsName
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+.. versionadded:: 8.2108.0
+
+The name assigned to statistics specific to this action instance. The set of
+statistics tracked for this action instance is **submitted**, **acked**, and **failures**.
+See the :ref:`statistics-counter_label` section for more details.
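+
+A minimal sketch, assuming the impstats module is loaded to expose the
+counters (the interval and the statistics name are illustrative):
+
+.. code-block:: none
+
+ module(load="impstats" interval="60")
+ action(type="omkafka" topic="mytopic" broker=["localhost:9092"]
+        statsName="omkafka-main")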
+
+
+.. _statistics-counter_label:
+
+Statistic Counter
+=================
+
+This plugin maintains global :doc:`statistics <../rsyslog_statistic_counter>` for omkafka that
+accumulate the activity of all action instances. The statistic origin is named "omkafka" with the following counters:
+
+- **submitted** - number of messages submitted to omkafka for processing (with both acknowledged
+ deliveries to broker as well as failed or re-submitted from omkafka to librdkafka).
+
+- **maxoutqsize** - high water mark of output queue size.
+
+- **failures** - number of messages that librdkafka failed to deliver. This number is
+ broken down into counts of various types of failures.
+
+- **topicdynacache.skipped** - count of dynamic topic cache lookups that find an existing topic and
+ skip creating a new one.
+
+- **topicdynacache.miss** - count of dynamic topic cache lookups that fail to find an existing topic
+ and end up creating new ones.
+
+- **topicdynacache.evicted** - count of dynamic topic cache entry evictions.
+
+- **acked** - count of messages that were acknowledged by kafka broker. Note that
+ kafka broker provides two levels of delivery acknowledgements depending on topicConfParam:
+ default (acks=1) implies delivery to the leader only while acks=-1 implies delivery to leader
+ as well as replication to all brokers.
+
+- **failures_msg_too_large** - count of messages dropped by librdkafka when it failed to
+ deliver to the broker because broker considers message to be too large. Note that
+ omkafka may still resubmit to librdkafka depending on resubmitOnFailure option.
+
+- **failures_unknown_topic** - count of messages dropped by librdkafka when it failed to
+ deliver to the broker because broker does not recognize the topic.
+
+- **failures_queue_full** - count of messages dropped by librdkafka when its queue becomes
+ full. Note that default size of librdkafka queue is 100,000 messages.
+
+- **failures_unknown_partition** - count of messages that librdkafka failed to deliver because
+ broker does not recognize a partition.
+
+- **failures_other** - count of all of the rest of the failures that do not fall in any of
+ the above failure categories.
+
+- **errors_timed_out** - count of messages that librdkafka could not deliver within timeout. These
+ errors will cause action to be suspended but messages can be retried depending on retry options.
+
+- **errors_transport** - count of messages that librdkafka could not deliver due to transport errors.
+ These messages can be retried depending on retry options.
+
+- **errors_broker_down** - count of messages that librdkafka could not deliver because it thinks that
+ broker is not accessible. These messages can be retried depending on options.
+
+- **errors_auth** - count of messages that librdkafka could not deliver due to authentication errors.
+ These messages can be retried depending on the options.
+
+- **errors_ssl** - count of messages that librdkafka could not deliver due to ssl errors.
+ These messages can be retried depending on the options.
+
+- **errors_other** - count of rest of librdkafka errors.
+
+- **rtt_avg_usec** - broker round trip time in microseconds averaged over all brokers. It is based
+  on the statistics callback window specified through the statistics.interval.ms parameter to librdkafka.
+  The average excludes brokers with less than 100 microseconds rtt.
+
+- **throttle_avg_msec** - broker throttling time in milliseconds averaged over all brokers. This is
+  also part of the window statistics delivered by librdkafka. The average excludes brokers with zero throttling time.
+
+- **int_latency_avg_usec** - internal librdkafka producer queue latency in microseconds averaged over
+  all brokers. This is also part of the window statistics and the average excludes brokers with zero internal latency.
+
+Note that the three window statistics counters are not safe with multiple clients. When the statistics callback is
+enabled, for example, by using statistics.interval.ms=60000, omkafka will generate an internal log message every
+minute for the corresponding omkafka action:
+
+.. code-block:: none
+
+ 2018-03-31T01:51:59.368491+00:00 app1-1.example.com rsyslogd: statscb_window_stats:
+ handler_name=collections.rsyslog.core#producer-1 replyq=0 msg_cnt=30 msg_size=37986 msg_max=100000
+ msg_size_max=1073741824 rtt_avg_usec=41475 throttle_avg_msec=0 int_latency_avg_usec=2943224 [v8.32.0]
+
+For multiple actions using statistics callback, there will be one such record for each action after specified
+window period. See https://github.com/edenhill/librdkafka/wiki/Statistics for more details on statistics
+callback values.
+
+Examples
+========
+
+.. _omkafka-examples-label:
+
+Using Array Type Parameter
+--------------------------
+
+Set a single value
+^^^^^^^^^^^^^^^^^^
+
+For example, to select "snappy" compression, you can use:
+
+.. code-block:: none
+
+ action(type="omkafka" topic="mytopic" confParam="compression.codec=snappy")
+
+
+which is equivalent to:
+
+.. code-block:: none
+
+ action(type="omkafka" topic="mytopic" confParam=["compression.codec=snappy"])
+
+
+Set multiple values
+^^^^^^^^^^^^^^^^^^^
+
+To specify multiple values, just use the bracket notation and create a
+comma-delimited list of values as shown here:
+
+.. code-block:: none
+
+ action(type="omkafka" topic="mytopic"
+ confParam=["compression.codec=snappy",
+ "socket.timeout.ms=5",
+ "socket.keepalive.enable=true"]
+ )
+
+
diff --git a/source/configuration/modules/omlibdbi.rst b/source/configuration/modules/omlibdbi.rst
new file mode 100644
index 0000000..01d1e95
--- /dev/null
+++ b/source/configuration/modules/omlibdbi.rst
@@ -0,0 +1,238 @@
+****************************************
+omlibdbi: Generic Database Output Module
+****************************************
+
+=========================== ===========================================================================
+**Module Name:**  **omlibdbi**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+This module supports a large number of database systems via
+`libdbi <http://libdbi.sourceforge.net/>`_. Libdbi abstracts the
+database layer and provides drivers for many systems. Drivers are
+available via the
+`libdbi-drivers <http://libdbi-drivers.sourceforge.net/>`_ project. As
+of this writing, the following drivers are available:
+
+- `Firebird/Interbase <http://www.firebird.sourceforge.net/>`_
+- `FreeTDS <http://www.freetds.org/>`_ (provides access to `MS SQL
+ Server <http://www.microsoft.com/sql>`_ and
+ `Sybase <http://www.sybase.com/products/informationmanagement/adaptiveserverenterprise>`_)
+- `MySQL <http://www.mysql.com/>`_ (also supported via the native
+ `ommysql <ommysql.html>`_ plugin in rsyslog)
+- `PostgreSQL <http://www.postgresql.org/>`_\ (also supported via the
+ native `ompgsql <ompgsql.html>`_ plugin in rsyslog)
+- `SQLite/SQLite3 <http://www.sqlite.org/>`_
+
+The following drivers are in various stages of completion:
+
+- `Ingres <http://ingres.com/>`_
+- `mSQL <http://www.hughes.com.au/>`_
+- `Oracle <http://www.oracle.com/>`_
+
+These drivers seem to be quite usable, at least from an rsyslog point of
+view.
+
+Libdbi provides a slim layer between rsyslog and the actual database
+engine. We have not yet done any performance testing (e.g. omlibdbi vs.
+:doc:`ommysql`) but honestly believe that the performance impact should be
+irrelevant, if at all measurable. Part of that assumption is that
+rsyslog just does the "insert" and most of the time is spent either in
+the database engine or rsyslog itself. It's hard to think of any
+considerable time spent in the libdbi abstraction layer.
+
+
+Setup
+=====
+
+In order for this plugin to work, you need to have libdbi, the libdbi
+driver for your database backend and the client software for your
+database backend installed. There are libdbi packages for many
+distributions. Please note that rsyslogd requires a quite recent version
+(0.8.3) of libdbi. It may work with older versions, but these need some
+special ./configure options to support being called from a dlopen()ed
+plugin (as omlibdbi is). In short, you will probably save yourself a lot of
+headache if you make sure you have at least libdbi version 0.8.3 on your
+system.
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Module Parameters
+-----------------
+
+DriverDirectory
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "``$ActionLibdbiDriverDirectory``"
+
+This is a global setting. It points libdbi to its driver directory.
+Usually, you do not need to set it. If you installed the libdbi drivers
+at a non-standard location, you may need to specify the directory
+here. If you are unsure, do not use this configuration parameter.
+Usually, everything works just fine.
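+
+For illustration only, a module load pointing to a non-standard driver
+location might look like this (the path is an example, not a default):
+
+.. code-block:: none
+
+ module(load="omlibdbi" driverdirectory="/usr/local/lib/dbd")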
+
+
+Template
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+Standard template used for the actions.
+
+
+Action Parameters
+-----------------
+
+Driver
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "yes", "``$ActionLibdbiDriver``"
+
+Name of the dbi driver to use; see the libdbi-drivers documentation. As a
+quick excerpt, at least these were available at the time of this
+writing:
+
+- ``mysql`` (:doc:`ommysql` is recommended instead)
+- ``firebird`` (Firebird and InterBase)
+- ``ingres``
+- ``msql``
+- ``Oracle``
+- ``sqlite``
+- ``sqlite3``
+- ``freetds`` (for Microsoft SQL and Sybase)
+- ``pgsql`` (:doc:`ompgsql` is recommended instead)
+
+
+Server
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "yes", "``$ActionLibdbiHost``"
+
+The host to connect to.
+
+
+UID
+^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "yes", "``$ActionLibdbiUserName``"
+
+The user used to connect to the database.
+
+
+PWD
+^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "yes", "``$ActionlibdbiPassword``"
+
+That user's password.
+
+
+DB
+^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "yes", "``$ActionlibdbiDBName``"
+
+The database that shall be written to.
+
+
+Template
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+Template used for this action.
+
+
+Caveats/Known Bugs
+==================
+
+You must make sure that any templates used for omlibdbi properly escape
+strings. This is usually done by supplying the SQL (or STDSQL) option to
+the template. Omlibdbi rejects templates without this option for
+security reasons. However, omlibdbi does not detect if you used the
+right option for your backend. Future versions of rsyslog (with
+full expression support) will provide advanced ways of handling this
+situation. For now, you must be careful. The default template provided by
+rsyslog is suitable for MySQL, but not necessarily for your database
+backend. Be careful!
+
+If you receive the rsyslog error message "libdbi or libdbi drivers not
+present on this system" you may either not have libdbi and its drivers
+installed or (very probably) the version is earlier than 0.8.3. In this
+case, you need to make sure you have at least 0.8.3 and the libdbi
+driver for your database backend present on your system.
+
+I do not have most of the databases supported by omlibdbi in my lab, so
+the module has received only limited cross-platform testing. If you run
+into trouble, be sure to let us know at
+`http://www.rsyslog.com <http://www.rsyslog.com>`_.
+
+
+Examples
+========
+
+Example 1
+---------
+
+The following sample writes all syslog messages to the database
+"syslog_db" on mysqlserver.example.com. The server is MySQL and being
+accessed under the account of "user" with password "pwd".
+
+.. code-block:: none
+
+ module(load="omlibdbi")
+ action(type="omlibdbi" driver="mysql" server="mysqlserver.example.com"
+ uid="user" pwd="pwd" db="syslog_db")
+
+
diff --git a/source/configuration/modules/ommail.rst b/source/configuration/modules/ommail.rst
new file mode 100644
index 0000000..e28cd17
--- /dev/null
+++ b/source/configuration/modules/ommail.rst
@@ -0,0 +1,306 @@
+**************************
+ommail: Mail Output Module
+**************************
+
+.. index:: ! ommail
+
+=========================== ===========================================================================
+**Module Name:** **ommail**
+**Available Since:** **3.17.0**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+This module supports sending syslog messages via mail. Each syslog
+message is sent via its own mail. Obviously, you will want to apply
+rigorous filtering, otherwise your mailbox (and mail server) will be
+heavily spammed. The ommail plugin is primarily meant for alerting
+users. As such, it is assumed that mails will only be sent in an
+extremely limited number of cases.
+
+Ommail uses up to two templates, one for the mail body and optionally
+one for the subject line. Note that the subject line can also be set to
+a constant text.
+If neither a subject nor a mail body is provided, a rather meaningless
+subject line is used
+and the mail body will be a syslog message, just as if it were written to
+a file. It is expected that the user customizes both messages. In an
+effort to support cell phones (including SMS gateways), there is an
+option to turn off the body part altogether. This is considered useful
+for sending a short alert to a pager-like device.
+It is highly recommended to use the
+
+.. code-block:: none
+
+ action.execonlyonceeveryinterval="<seconds>"
+
+parameter to limit the number of mails that can potentially be
+generated. With it, mails are sent at most once per <seconds> interval. This
+may be your life saver. And remember that an hour has 3,600 seconds, so
+if you would like to receive mails at most once every two hours, include
+a
+
+.. code-block:: none
+
+ action.execonlyonceeveryinterval="7200"
+
+in the action definition. Messages sent more frequently are simply discarded.
+
+
+Configuration Parameters
+========================
+
+Configuration parameters are supported starting with v8.5.0. Earlier
+v7 and v8 versions only supported legacy parameters.
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Action Parameters
+-----------------
+
+Server
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "yes", "``$ActionMailSMTPServer``"
+
+Name or IP address of the SMTP server to be used. Must currently be
+set. The default is 127.0.0.1, the SMTP server on the local machine.
+Obviously it is not good to expect one to be present on each machine,
+so this value should be specified.
+
+
+Port
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "yes", "``$ActionMailSMTPPort``"
+
+Port number or name of the SMTP port to be used. The default is 25,
+the standard SMTP port.
+
+
+MailFrom
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "yes", "``$ActionMailFrom``"
+
+The email address used as the sender's address.
+
+
+MailTo
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "array", "none", "yes", "``$ActionMailTo``"
+
+The recipient email address(es). Note that this is an array parameter. See
+samples below on how to specify multiple recipients.
+
+
+Subject.Template
+^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "``$ActionMailSubject``"
+
+The name of the template to be used as the mail subject.
+
+If you want to include some information from the message inside the
+template, you need to use *subject.template* with an appropriate template.
+If you just need a constant text, you can simply use *subject.text*
+instead, which doesn't require a template definition.
+
+
+Subject.Text
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+This is used to set a **constant** subject text.
+
+
+Body.Enable
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "``$ActionMailEnableBody``"
+
+Setting this to "off" permits excluding the actual message body.
+This may be useful for pager-like devices or cell phone SMS messages.
+The default is "on", which is appropriate for almost all cases. Turn
+it off only if you know exactly what you are doing!
+
+
+Template
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "RSYSLOG_FileFormat", "no", "none"
+
+Template to be used for the mail body (if enabled).
+
+The *subject.template* and *subject.text* parameters cannot be given together
+inside a single action definition. Use either one of them. If neither is used,
+a more or less meaningless mail subject is generated (we don't tell you the exact
+text because that can change - if you want to have something specific, configure it!).
+
+
+Caveats/Known Bugs
+==================
+
+The current ommail implementation supports SMTP-direct mode only. In
+that mode, the plugin talks to the mail server via SMTP protocol. No
+other process is involved. This mode offers the best reliability as it does
+not depend on any external entity except the mail server. Mail server
+downtime is acceptable if the action is put onto its own action queue,
+so that it may wait for the SMTP server to come back online. However,
+the module implements only the bare SMTP essentials. Most importantly,
+it does not provide any authentication capabilities. So your mail server
+must be configured to accept incoming mail from ommail without any
+authentication (this may change in the future as the need arises,
+but you may also be referred to sendmail-mode).
+
+In theory, ommail should also offer a mode where it uses the sendmail
+utility to send its mail (sendmail-mode). This is somewhat less reliable
+(because we depend on an entity we do not have close control over -
+sendmail). It also requires dramatically more system resources, as we
+need to load the external process (but that should be no problem given
+the expected infrequent number of calls into this plugin). The big
+advantage of sendmail mode is that it supports all the bells and
+whistles of a full-blown SMTP implementation and may even work for local
+delivery without an SMTP server being present. Sendmail mode will be
+implemented as need arises. So if you need it, please drop us a line (If
+nobody does, sendmail mode will probably never be implemented).
+
+
+Examples
+========
+
+Example 1
+---------
+
+The following example alerts the operator if the string "hard disk fatal
+failure" is present inside a syslog message. The mail server at
+mail.example.net is used and the subject shall be "disk problem on
+<hostname>". Note how \\r\\n is included inside the body text to create
+line breaks. A message is sent at most once every 6 hours (21600 seconds),
+any other messages are silently discarded (or, to be precise, not being
+forwarded - they are still being processed by the rest of the configuration
+file).
+
+.. code-block:: none
+
+ module(load="ommail")
+
+ template (name="mailBody" type="string" string="RSYSLOG Alert\\r\\nmsg='%msg%'")
+ template (name="mailSubject" type="string" string="disk problem on %hostname%")
+
+ if $msg contains "hard disk fatal failure" then {
+ action(type="ommail" server="mail.example.net" port="25"
+ mailfrom="rsyslog@example.net"
+ mailto="operator@example.net"
+ subject.template="mailSubject"
+ action.execonlyonceeveryinterval="21600")
+ }
+
+
+Example 2
+---------
+
+The following example is exactly like the first one, but it sends the mails
+to two different email addresses:
+
+.. code-block:: none
+
+ module(load="ommail")
+
+ template (name="mailBody" type="string" string="RSYSLOG Alert\\r\\nmsg='%msg%'")
+ template (name="mailSubject" type="string" string="disk problem on %hostname%")
+
+ if $msg contains "hard disk fatal failure" then {
+ action(type="ommail" server="mail.example.net" port="25"
+ mailfrom="rsyslog@example.net"
+ mailto=["operator@example.net", "admin@example.net"]
+ subject.template="mailSubject"
+ action.execonlyonceeveryinterval="21600")
+ }
+
+
+Example 3
+---------
+
+Note the array syntax to specify email addresses. Note that while rsyslog
+permits you to specify as many recipients as you like, your mail server
+may limit their number. It is usually a bad idea to use more than 50
+recipients, and some servers may have lower limits. If you hit such a limit,
+you could either create additional actions or (recommended) create an
+email distribution list.
+
+The next example is again mostly equivalent to the previous one, but it uses a
+constant subject line, so no subject template is required:
+
+.. code-block:: none
+
+ module(load="ommail")
+
+ template (name="mailBody" type="string" string="RSYSLOG Alert\\r\\nmsg='%msg%'")
+
+ if $msg contains "hard disk fatal failure" then {
+ action(type="ommail" server="mail.example.net" port="25"
+ mailfrom="rsyslog@example.net"
+ mailto=["operator@example.net", "admin@example.net"]
+ subject.text="rsyslog detected disk problem"
+ action.execonlyonceeveryinterval="21600")
+ }
+
+
+Additional Resources
+====================
+
+A more advanced example plus a discussion on using the email feature
+inside a reliable system can be found in Rainer's blogpost "`Why is
+native email capability an advantage for a
+syslogd? <http://rgerhards.blogspot.com/2008/04/why-is-native-email-capability.html>`_\ "
+
+
diff --git a/source/configuration/modules/ommongodb.rst b/source/configuration/modules/ommongodb.rst
new file mode 100644
index 0000000..f048aa1
--- /dev/null
+++ b/source/configuration/modules/ommongodb.rst
@@ -0,0 +1,247 @@
+********************************
+ommongodb: MongoDB Output Module
+********************************
+
+=========================== ===========================================================================
+**Module Name:**  **ommongodb**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+This module provides native support for logging to MongoDB.
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Action Parameters
+-----------------
+
+UriStr
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+MongoDB connection string, as defined by the MongoDB connection string URI format (see: https://docs.mongodb.com/manual/reference/connection-string/). If uristr is defined, the following directives will be ignored: server, serverport, uid, pwd.
+
+
+SSL_Cert
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+Absolute path to the X509 certificate you want to use for TLS client authentication. This is optional.
+
+
+SSL_Ca
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+Absolute path to the trusted X509 CA certificate that signed the mongoDB server certificate. This is optional.
+
+
+db
+^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "syslog", "no", "none"
+
+Database to use.
+
+
+Collection
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "log", "no", "none"
+
+Collection to use.
+
+
+Allowed_Error_Codes
+^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "array", "no", "no", "none"
+
+The list of error codes returned by MongoDB you want ommongodb to ignore.
+Please use the following format: allowed_error_codes=["11000","47"].
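+
+A hedged sketch of an action that ignores duplicate-key errors (error code
+11000); the connection string and collection are illustrative values only:
+
+.. code-block:: none
+
+ action(type="ommongodb"
+        uristr="mongodb://localhost:27017/"
+        db="syslog" collection="log"
+        allowed_error_codes=["11000"])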
+
+
+Template
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "OMSR_TPL_AS_MSG", "no", "none"
+
+Template to use when submitting messages.
+
+Note that rsyslog contains a canned default template for writing to MongoDB.
+It is used automatically if no other template is specified. This
+template is:
+
+.. code-block:: none
+
+ template(name="BSON" type="string" string="\"sys\" : \"%hostname%\",
+ \"time\" : \"%timereported:::rfc3339%\", \"time_rcvd\" :
+ \"%timegenerated:::rfc3339%\", \"msg\" : \"%msg%\",
+ \"syslog_fac\" : \"%syslogfacility%\", \"syslog_server\" :
+ \"%syslogseverity%\", \"syslog_tag\" : \"%syslogtag%\",
+ \"procid\" : \"%programname%\", \"pid\" : \"%procid%\",
+ \"level\" : \"%syslogpriority-text%\"")
+
+
+This creates the BSON document needed for MongoDB if no template is
+specified. The default schema is aligned to CEE and project lumberjack.
+As such, the field names are standard lumberjack field names, and
+**not** `rsyslog property names <property_replacer.html>`_. When
+specifying templates, be sure to use rsyslog property names as given in
+the table. If you would like to use lumberjack-based field names inside
+MongoDB (which probably is useful depending on the use case), you need
+to select fields names based on the lumberjack schema. If you just want
+to use a subset of the fields, but with lumberjack names, you can look
+up the mapping in the default template. For example, the lumberjack
+field "level" contains the rsyslog property "syslogpriority-text".
+
+
+Examples
+========
+
+
+Write to Database
+-----------------
+
+The following sample writes all syslog messages to the database "syslog"
+and into the collection "log" on mongoserver.example.com. The server is
+being accessed under the account of "user" with password "pwd". Please note
+that this syntax is deprecated in favor of the "uristr" directive, shown below.
+
+.. code-block:: none
+
+ module(load="ommongodb")
+ action(type="ommongodb"
+ server="mongoserver.example.com" db="syslog" collection="log"
+ uid="user" pwd="pwd")
+
+
+Write to mongoDB server with TLS and client authentication
+----------------------------------------------------------
+
+Another sample that uses the new "uristr" directive to connect to a mongoDB server with TLS and client authentication.
+
+.. code-block:: none
+
+ module(load="ommongodb")
+ action(type="ommongodb"
+ uristr="mongodb://vulture:9091,vulture2:9091/?replicaset=Vulture&ssl=true"
+ ssl_cert="/var/db/mongodb/mongod.pem"
+ ssl_ca="/var/db/mongodb/ca.pem"
+ db="logs" collection="syslog")
+
+
+Deprecated Parameters
+=====================
+
+.. note::
+
+ While these parameters are still accepted, they should no longer be used for newly created configurations.
+
+
+Action Parameters
+-----------------
+
+Server
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "127.0.0.1", "no", "none"
+
+Name or address of the MongoDB server.
+
+
+ServerPort
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "27017", "no", "none"
+
+Permits selecting a non-standard port for the MongoDB server. There is
+no need to specify this parameter unless you know the server is
+running on a non-standard listen port.
+
+
+UID
+^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+Logon userid used to connect to server. Must have proper permissions.
+
+
+PWD
+^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+The user's password.
+
+
diff --git a/source/configuration/modules/ommysql.rst b/source/configuration/modules/ommysql.rst
new file mode 100644
index 0000000..12d2787
--- /dev/null
+++ b/source/configuration/modules/ommysql.rst
@@ -0,0 +1,201 @@
+*************************************
+ommysql: MySQL Database Output Module
+*************************************
+
+=========================== ===========================================================================
+**Module Name:**  **ommysql**
+**Author:** Michael Meckelein (Initial Author) / `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+This module provides native support for logging to MySQL databases. It
+offers superior performance over the more generic
+`omlibdbi <omlibdbi.html>`_ module.
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Action Parameters
+-----------------
+
+Server
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "yes", "none"
+
+This is the address of the MySQL-Server.
+
+
+Socket
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+This is the unix socket path of the MySQL server. When the server
+address is set to localhost, the MySQL client library connects using
+the default unix socket specified at build time.
+If your MySQL server uses a unix socket path different from that
+default, you can set the socket path with this option.
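+
+A sketch, assuming a distribution-specific socket path (the path shown is an
+example only):
+
+.. code-block:: none
+
+ module(load="ommysql")
+ action(type="ommysql" server="localhost"
+        socket="/var/run/mysqld/mysqld.sock"
+        db="syslog_db" uid="user" pwd="pwd")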
+
+
+db
+^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "yes", "none"
+
+This is the name of the database used.
+
+
+UID
+^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "yes", "none"
+
+This is the user name used to connect to the database.
+
+
+PWD
+^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "yes", "none"
+
+This is the password for the user specified in UID.
+
+
+ServerPort
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "none", "no", "``$ActionOmmysqlServerPort``"
+
+Permits to select a non-standard port for the MySQL server. The
+default is 0, which means the system default port is used. There is
+no need to specify this parameter unless you know the server is
+running on a non-standard listen port.
+
+
+MySQLConfig.File
+^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "``$OmMySQLConfigFile``"
+
+Permits the selection of an optional MySQL Client Library
+configuration file (my.cnf) for extended configuration functionality.
+The use of this configuration parameter is necessary only if you have
+a non-standard environment or if fine-grained control over the
+database connection is desired.
+
+
+MySQLConfig.Section
+^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "``$OmMySQLConfigSection``"
+
+Permits the selection of the section within the configuration file
+specified by the **$OmMySQLConfigFile** parameter.
+This will likely only be used where the database administrator
+provides a single configuration file with multiple profiles.
+This configuration parameter is ignored unless **$OmMySQLConfigFile**
+is also used in the rsyslog configuration file.
+If omitted, the MySQL Client Library default of "client" will be
+used.
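+
+As an illustration, both parameters could be used together as sketched below;
+the file path and section name are examples, not defaults:
+
+.. code-block:: none
+
+ module(load="ommysql")
+ action(type="ommysql" server="mysqlserver.example.com"
+        db="syslog_db" uid="user" pwd="pwd"
+        mysqlconfig.file="/etc/rsyslog.d/my.cnf"
+        mysqlconfig.section="rsyslog")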
+
+
+Template
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "StdDBFmt", "no", "none"
+
+Rsyslog contains a canned default template to write to the MySQL
+database. It works on the MonitorWare schema. This template is:
+
+.. code-block:: none
+
+ $template tpl,"insert into SystemEvents (Message, Facility, FromHost,
+ Priority, DeviceReportedTime, ReceivedAt, InfoUnitID, SysLogTag) values
+ ('%msg%', %syslogfacility%, '%HOSTNAME%', %syslogpriority%,
+ '%timereported:::date-mysql%', '%timegenerated:::date-mysql%', %iut%,
+ '%syslogtag%')",SQL
+
+
+As you can see, the template is an actual SQL statement. Note the ",SQL"
+option: it tells the template processor that the template is used for
+SQL processing, so quote characters are properly escaped to prevent security
+issues. You cannot assign a template without the ",SQL" option to a MySQL
+output action.
+
+If you would like to change field contents or add or delete your own
+fields, you can simply do so by modifying the schema (if required) and
+creating your own custom template.
+
+
+Examples
+========
+
+Example 1
+---------
+
+The following sample writes all syslog messages to the database
+"syslog_db" on mysqlserver.example.com. The server is being accessed
+under the account of "user" with password "pwd".
+
+.. code-block:: none
+
+ module(load="ommysql")
+ action(type="ommysql" server="mysqlserver.example.com" serverport="1234"
+ db="syslog_db" uid="user" pwd="pwd")
+
+
diff --git a/source/configuration/modules/omoracle.rst b/source/configuration/modules/omoracle.rst
new file mode 100644
index 0000000..4133eb7
--- /dev/null
+++ b/source/configuration/modules/omoracle.rst
@@ -0,0 +1,200 @@
+omoracle: Oracle Database Output Module
+=======================================
+
+**Module Name:    omoracle**
+
+**Author:**\ Luis Fernando Muñoz Mejías
+<Luis.Fernando.Munoz.Mejias@cern.ch> - this module is currently
+orphaned; the original author no longer supports it.
+
+**Available since:** 4.3.0, **does not work with recent rsyslog
+versions (v7 and up). Use** :doc:`omlibdbi <omlibdbi>` **instead.**
+An upgrade to the new interfaces is needed. If you would like
+to contribute, please send us a patch or open a github pull request.
+
+**Status:** contributed module, not maintained by rsyslog core authors
+
+**Description**:
+
+This module provides native support for logging to Oracle databases. It
+offers superior performance over the more generic
+`omlibdbi <omlibdbi.html>`_ module. It also includes a number of
+enhancements, most importantly prepared statements and batching, which
+provide a big performance improvement.
+
+Note that this module is not maintained by the rsyslog core authors. If you need
+assistance with it, it is suggested to post questions to the `rsyslog
+mailing list <http://lists.adiscon.net/mailman/listinfo/rsyslog>`_.
+
+From the header comments of this module:
+
+::
+
+
+ This is an output module feeding directly to an Oracle
+ database. It uses Oracle Call Interface, a propietary module
+ provided by Oracle.
+
+ Selector lines to be used are of this form:
+
+ :omoracle:;TemplateName
+
+ The module gets its configuration via rsyslog $... directives,
+ namely:
+
+ $OmoracleDBUser: user name to log in on the database.
+
+ $OmoracleDBPassword: password to log in on the database.
+
+ $OmoracleDB: connection string (an Oracle easy connect or a db
+ name as specified by tnsnames.ora)
+
+ $OmoracleBatchSize: Number of elements to send to the DB on each
+ transaction.
+
+ $OmoracleStatement: Statement to be prepared and executed in
+ batches. Please note that Oracle's prepared statements have their
+ placeholders as ':identifier', and this module uses the colon to
+ guess how many placeholders there will be.
+
+ All these directives are mandatory. The dbstring can be an Oracle
+ easystring or a DB name, as present in the tnsnames.ora file.
+
+ The form of the template is just a list of strings you want
+ inserted to the DB, for instance:
+
+ $template TestStmt,"%hostname%%msg%"
+
+ Will provide the arguments to a statement like
+
+ $OmoracleStatement \
+ insert into foo(hostname,message)values(:host,:message)
+
+ Also note that identifiers to placeholders are arbitrary. You
+ need to define the properties on the template in the correct order
+ you want them passed to the statement!
+
+Some additional documentation contributed by Ronny Egner:
+
+::
+
+ REQUIREMENTS:
+ --------------
+
+ - Oracle Instantclient 10g (NOT 11g) Base + Devel
+ (if you´re on 64-bit linux you should choose the 64-bit libs!)
+ - JDK 1.6 (not neccessary for oracle plugin but "make" didd not finsished successfully without it)
+
+ - "oracle-instantclient-config" script
+ (seems to shipped with instantclient 10g Release 1 but i was unable to find it for 10g Release 2 so here it is)
+
+
+ ====================== /usr/local/bin/oracle-instantclient-config =====================
+ #!/bin/sh
+ #
+ # Oracle InstantClient SDK config file
+ # Jean-Christophe Duberga - Bordeaux 2 University
+ #
+
+ # just adapt it to your environment
+ incdirs="-I/usr/include/oracle/10.2.0.4/client64"
+ libdirs="-L/usr/lib/oracle/10.2.0.4/client64/lib"
+
+ usage="\
+ Usage: oracle-instantclient-config [--prefix[=DIR]] [--exec-prefix[=DIR]] [--version] [--cflags] [--libs] [--static-libs]"
+
+ if test $# -eq 0; then
+ echo "${usage}" 1>&2
+ exit 1
+ fi
+
+ while test $# -gt 0; do
+ case "$1" in
+ -*=*) optarg=`echo "$1" | sed 's/[-_a-zA-Z0-9]*=//'` ;;
+ *) optarg= ;;
+ esac
+
+ case $1 in
+ --prefix=*)
+ prefix=$optarg
+ if test $exec_prefix_set = no ; then
+ exec_prefix=$optarg
+ fi
+ ;;
+ --prefix)
+ echo $prefix
+ ;;
+ --exec-prefix=*)
+ exec_prefix=$optarg
+ exec_prefix_set=yes
+ ;;
+ --exec-prefix)
+ echo ${exec_prefix}
+ ;;
+ --version)
+ echo ${version}
+ ;;
+ --cflags)
+ echo ${incdirs}
+ ;;
+ --libs)
+ echo $libdirs -lclntsh -lnnz10 -locci -lociei -locijdbc10
+ ;;
+ --static-libs)
+ echo "No static libs" 1>&2
+ exit 1
+ ;;
+ *)
+ echo "${usage}" 1>&2
+ exit 1
+ ;;
+ esac
+ shift
+ done
+
+ =============== END ==============
+
+
+
+
+ COMPILING RSYSLOGD
+ -------------------
+
+
+ ./configure --enable-oracle
+
+
+
+
+ RUNNING
+ -------
+
+ - make sure rsyslogd is able to locate the oracle libs (either via LD_LIBRARY_PATH or /etc/ld.so.conf)
+ - set TNS_ADMIN to point to your tnsnames.ora
+ - create a tnsnames.ora and test you are able to connect to the database
+
+ - create user in oracle as shown in the following example:
+ create user syslog identified by syslog default tablespace users quota unlimited on users;
+ grant create session to syslog;
+ create role syslog_role;
+ grant syslog_role to syslog;
+ grant create table to syslog_role;
+ grant create sequence to syslog_role;
+
+ - create tables as needed
+
+ - configure rsyslog as shown in the following example
+ $ModLoad omoracle
+
+ $OmoracleDBUser syslog
+ $OmoracleDBPassword syslog
+ $OmoracleDB syslog
+ $OmoracleBatchSize 1
+ $OmoracleBatchItemSize 4096
+
+ $OmoracleStatementTemplate OmoracleStatement
+ $template OmoracleStatement,"insert into foo(hostname,message) values (:host,:message)"
+ $template TestStmt,"%hostname%%msg%"
+ *.* :omoracle:;TestStmt
+ (you guess it: username = password = database = "syslog".... see $rsyslogd_source/plugins/omoracle/omoracle.c for me info)
+
diff --git a/source/configuration/modules/ompgsql.rst b/source/configuration/modules/ompgsql.rst
new file mode 100644
index 0000000..4df4745
--- /dev/null
+++ b/source/configuration/modules/ompgsql.rst
@@ -0,0 +1,239 @@
+.. index:: ! ompgsql
+
+*******************************************
+PostgreSQL Database Output Module (ompgsql)
+*******************************************
+
+================ ==========================================================================
+**Module Name:** ompgsql
+**Author:** `Rainer Gerhards <rgerhards@adiscon.com>`__ and `Dan Molik <dan@danmolik.com>`__
+**Available:** 8.32+
+================ ==========================================================================
+
+
+Purpose
+=======
+
+This module provides native support for logging to PostgreSQL databases.
+It's an alternative (with potentially superior performance) to the more
+generic :doc:`omlibdbi <omlibdbi>` module.
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Action Parameters
+-----------------
+
+Conninfo
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+The URI or set of key-value pairs that describe how to connect to the PostgreSQL
+server. This takes precedence over ``server``, ``port``, ``db``, and ``pass``
+parameters. Required if ``server`` and ``db`` are not specified.
+
+The format corresponds to `standard PostgreSQL connection string format
+<https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING>`_.
+
+Server
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+The hostname or address of the PostgreSQL server. Required if ``conninfo`` is
+not specified.
+
+
+Port/Serverport
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "5432", "no", "none"
+
+The IP port of the PostgreSQL server.
+
+
+db
+^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+The multi-tenant database name to ``INSERT`` rows into. Required if ``conninfo``
+is not specified.
+
+
+User/UID
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "postgres", "no", "none"
+
+The username to connect to the PostgreSQL server with.
+
+
+Pass/PWD
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "postgres", "no", "none"
+
+The password to connect to the PostgreSQL server with.
+
+
+Template
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+The template name to use to ``INSERT`` rows into the database with. Valid SQL
+syntax is required, as the module does not perform any insertion statement
+checking.
+
+
+Examples
+========
+
+Example 1
+---------
+
+A Basic Example using the internal PostgreSQL template.
+
+.. code-block:: none
+
+ # load module
+ module(load="ompgsql")
+
+ action(type="ompgsql" server="localhost"
+ user="rsyslog" pass="test1234"
+ db="syslog")
+
+
+Example 2
+---------
+
+A Basic Example using the internal PostgreSQL template and connection using URI.
+
+.. code-block:: none
+
+ # load module
+ module(load="ompgsql")
+
+ action(type="ompgsql"
+ conninfo="postgresql://rsyslog:test1234@localhost/syslog")
+
+
+Example 3
+---------
+
+A Basic Example using the internal PostgreSQL template and connection with TLS using URI.
+
+.. code-block:: none
+
+ # load module
+ module(load="ompgsql")
+
+ action(type="ompgsql"
+ conninfo="postgresql://rsyslog:test1234@postgres.example.com/syslog?sslmode=verify-full&sslrootcert=/path/to/cert")
+
+
+Example 4
+---------
+
+A Templated example.
+
+.. code-block:: none
+
+ template(name="sql-syslog" type="list" option.stdsql="on") {
+ constant(value="INSERT INTO SystemEvents (message, timereported) values ('")
+ property(name="msg")
+ constant(value="','")
+ property(name="timereported" dateformat="pgsql" date.inUTC="on")
+ constant(value="')")
+ }
+
+ # load module
+ module(load="ompgsql")
+
+ action(type="ompgsql" server="localhost"
+ user="rsyslog" pass="test1234"
+ db="syslog"
+ template="sql-syslog" )
+
+
+Example 5
+---------
+
+An action queue and templated example.
+
+.. code-block:: none
+
+ template(name="sql-syslog" type="list" option.stdsql="on") {
+ constant(value="INSERT INTO SystemEvents (message, timereported) values ('")
+ property(name="msg")
+ constant(value="','")
+ property(name="timereported" dateformat="pgsql" date.inUTC="on")
+ constant(value="')")
+ }
+
+ # load module
+ module(load="ompgsql")
+
+ action(type="ompgsql" server="localhost"
+ user="rsyslog" pass="test1234"
+ db="syslog"
+ template="sql-syslog"
+ queue.size="10000" queue.type="linkedList"
+ queue.workerthreads="5"
+ queue.workerthreadMinimumMessages="500"
+ queue.timeoutWorkerthreadShutdown="1000"
+ queue.timeoutEnqueue="10000")
+
+
+Building
+========
+
+To compile Rsyslog with PostgreSQL support you will need to:
+
+* install the *libpq* and *libpq-dev* packages; check your package manager for the correct names.
+* pass the *--enable-pgsql* switch to configure.
+
+
diff --git a/source/configuration/modules/ompipe.rst b/source/configuration/modules/ompipe.rst
new file mode 100644
index 0000000..1bf581e
--- /dev/null
+++ b/source/configuration/modules/ompipe.rst
@@ -0,0 +1,48 @@
+ompipe: Pipe Output Module
+==========================
+
+**Module Name:    ompipe**
+
+**Author:**\ Rainer Gerhards <rgerhards@adiscon.com>
+
+**Description**:
+
+The ompipe plug-in provides the core functionality for logging output to named pipes (fifos). It is a built-in module that does not need to be loaded.
+
+**Global Configuration Parameters:**
+
+Note: parameter names are case-insensitive.
+
+- Template: [templateName] sets a new default template for pipe actions.
+
+**Action specific Configuration Parameters:**
+
+Note: parameter names are case-insensitive.
+
+- Pipe: name of the fifo or named pipe to be used as a destination for log messages.
+- tryResumeReopen: Sometimes a pipe needs to be reopened after an ompipe action gets suspended. Sending a HUP signal does the job but requires interaction with rsyslog. When set to "on" and a resume action fails, the file descriptor is closed, causing a new open on the next resume; see the sketch below. Default: "off", preserving the behavior that existed before this option was introduced.
+
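+A minimal sketch of an action using tryResumeReopen (the pipe name is an
+example only):
+
+::
+
+     action(type="ompipe" Pipe="/var/log/alertpipe" tryResumeReopen="on")
+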
+**Caveats/Known Bugs:**
+None
+
+**Sample:**
+The following command sends all syslog messages to a pipe named "NameofPipe".
+
+::
+
+     module(load="builtin:ompipe")
+     *.* action(type="ompipe" Pipe="NameofPipe")
+
+**Legacy Configuration Parameters:**
+
+rsyslog has support for logging output to named pipes (fifos). A fifo or named pipe can be used as a destination for log messages by prepending a pipe symbol ("|") to the name of the file. This is handy for debugging. Note that the fifo must be created with the mkfifo(1) command before rsyslogd is started.
+
+**Legacy Sample:**
+
+The following command sends all syslog messages to a pipe named /var/log/pipe.
+
+::
+
+     $ModLoad ompipe
+     *.* |/var/log/pipe
+
diff --git a/source/configuration/modules/omprog.rst b/source/configuration/modules/omprog.rst
new file mode 100644
index 0000000..d96fec1
--- /dev/null
+++ b/source/configuration/modules/omprog.rst
@@ -0,0 +1,530 @@
+*****************************************
+omprog: Program integration Output module
+*****************************************
+
+=========================== ===========================================================================
+**Module Name:**  **omprog**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+This module permits integrating arbitrary external programs into
+rsyslog's logging. It is similar to the "execute program (^)" action,
+but offers better security and much higher performance. While "execute
+program (^)" can be a useful tool for executing programs if rare events
+occur, omprog can be used to provide massive amounts of log data to a
+program.
+
+Executes the configured program and feeds log messages to that binary
+via stdin. The binary is free to do whatever it wants with the supplied
+data. If the program terminates, it is re-started. If rsyslog
+terminates, the program's stdin will see EOF. The program must then
+terminate. The message format passed to the program can, as usual, be
+modified by defining rsyslog templates.
+
+Note that in order to execute the given program, rsyslog needs to have
+sufficient permissions on the binary file. This is especially true if
+not running as root. Also, keep in mind that default SELinux policies
+most probably do not permit rsyslogd to execute arbitrary binaries. As
+such, permissions must be appropriately added. Note that SELinux
+restrictions also apply if rsyslogd runs under root. To check if a
+problem is SELinux-related, you can temporarily disable SELinux and
+retry. If it then works, you know for sure you have a SELinux issue.
+
+Starting with 8.4.0, rsyslogd emits an error message via the ``syslog()``
+API call when there is a problem executing the binary. This can be
+extremely valuable in troubleshooting. For those technically savvy:
+when we execute a binary, we need to fork, and we do not have
+full access to rsyslog's usual error-reporting capabilities after the
+fork. As the actual execution must happen after the fork, we cannot
+use the default error logger to emit the error message. As such,
+we use ``syslog()``. In most cases, there is no real difference
+between both methods. However, if you run multiple rsyslog instances,
+the message shows up in that instance that processes the default
+log socket, which may be different from the one where the error occurred.
+Also, if you redirected the log destination, that redirection may
+not work as expected.
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Action Parameters
+-----------------
+
+template
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "RSYSLOG_FileFormat", "no", "none"
+
+Name of the :doc:`template <../templates>` to use to format the log messages
+passed to the external program.
+
+
+binary
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "", "yes", "``$ActionOMProgBinary``"
+
+Full path and command line parameters of the external program to execute.
+Arbitrary external programs should be placed under the /usr/libexec/rsyslog directory.
+That is, the binaries put in this namespaced directory are meant for the consumption
+of rsyslog, and are not intended to be executed by users.
+In legacy config, it is **not possible** to specify command line parameters.
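+
+As a hedged sketch, an action invoking a hypothetical script with command
+line parameters (the script path and its arguments are examples and not part
+of rsyslog) could look like this:
+
+.. code-block:: none
+
+ module(load="omprog")
+ action(type="omprog"
+        binary="/usr/libexec/rsyslog/forward_logs.py --mode=json")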
+
+
+.. _confirmMessages:
+
+confirmMessages
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+.. versionadded:: 8.31.0
+
+Specifies whether the external program provides feedback to rsyslog via stdout.
+When this switch is set to "on", rsyslog will wait for the program to confirm
+each received message. This feature facilitates error handling: instead of
+having to implement a retry logic, the external program can rely on the rsyslog
+queueing capabilities.
+
+To confirm a message, the program must write a line with the word ``OK`` to its
+standard output. If it writes a line containing anything else, rsyslog considers
+that the message could not be processed, keeps it in the action queue, and
+re-sends it to the program later (after the period specified by the
+:doc:`action.resumeInterval <../actions>` parameter).
+
+In addition, when a new instance of the program is started, rsyslog will also
+wait for the program to confirm it is ready to start consuming logs. This
+prevents rsyslog from starting to send logs to a program that could not
+complete its initialization properly.
+
+.. seealso::
+
+ `Interface between rsyslog and external output plugins
+ <https://github.com/rsyslog/rsyslog/blob/master/plugins/external/INTERFACE.md>`_
+
+
+.. _confirmTimeout:
+
+confirmTimeout
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "10000", "no", "none"
+
+.. versionadded:: 8.38.0
+
+Specifies how long rsyslog must wait for the external program to confirm
+each message when confirmMessages_ is set to "on". If the program does not
+send a response within this timeout, it will be restarted (see signalOnClose_,
+closeTimeout_ and killUnresponsive_ for details on the cleanup sequence).
+The value must be expressed in milliseconds and must be greater than zero.
+
+.. seealso::
+
+ `Interface between rsyslog and external output plugins
+ <https://github.com/rsyslog/rsyslog/blob/master/plugins/external/INTERFACE.md>`_
+
+
+.. _reportFailures:
+
+reportFailures
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+.. versionadded:: 8.38.0
+
+Specifies whether rsyslog must internally log a warning message whenever the
+program returns an error when confirming a message. The logged message will
+include the error line returned by the program. This parameter is ignored when
+confirmMessages_ is set to "off".
+
+Enabling this flag can be useful to log the problems detected by the program.
+However, the information that can be logged is limited to a short error line,
+and the logs will be tagged as originated by the 'syslog' facility (like the
+rest of rsyslog logs). To avoid these shortcomings, consider the use of the
+output_ parameter to capture the stderr of the program.
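+
+A hedged sketch combining these feedback-related parameters; the program
+path and output file name are hypothetical examples:
+
+.. code-block:: none
+
+ module(load="omprog")
+ action(type="omprog"
+        binary="/usr/libexec/rsyslog/feedback_consumer.py"
+        confirmMessages="on"
+        confirmTimeout="30000"
+        reportFailures="on"
+        output="/var/log/feedback_consumer.errors"
+        action.resumeInterval="5")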
+
+
+.. _useTransactions:
+
+useTransactions
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+.. versionadded:: 8.31.0
+
+Specifies whether the external program processes the messages in
+:doc:`batches <../../development/dev_oplugins>` (transactions). When this
+switch is enabled, the logs sent to the program are grouped in transactions.
+At the start of a transaction, rsyslog sends a special mark message to the
+program (see beginTransactionMark_). At the end of the transaction, rsyslog
+sends another mark message (see commitTransactionMark_).
+
+If confirmMessages_ is also set to "on", the program must confirm both the
+mark messages and the logs within the transaction. The mark messages must be
+confirmed by returning ``OK``, and the individual messages by returning
+``DEFER_COMMIT`` (instead of ``OK``). Refer to the link below for details.
+
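+As an illustrative sketch of the exchange described above (">" is rsyslog
+writing to the program's stdin, "<" is the program's reply on stdout):
+
+.. code-block:: none
+
+ > BEGIN TRANSACTION
+ < OK
+ > <log message 1>
+ < DEFER_COMMIT
+ > <log message 2>
+ < DEFER_COMMIT
+ > COMMIT TRANSACTION
+ < OK
+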
+.. seealso::
+
+ `Interface between rsyslog and external output plugins
+ <https://github.com/rsyslog/rsyslog/blob/master/plugins/external/INTERFACE.md>`_
+
+.. warning::
+
+ This feature is currently **experimental**. It could change in future releases
+ without keeping backwards compatibility with existing configurations or the
+ specified interface. There is also a `known issue
+ <https://github.com/rsyslog/rsyslog/issues/2420>`_ with the use of
+ transactions together with ``confirmMessages=on``.
+
+
+.. _beginTransactionMark:
+
+beginTransactionMark
+^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "BEGIN TRANSACTION", "no", "none"
+
+.. versionadded:: 8.31.0
+
+Allows specifying the mark message that rsyslog will send to the external
+program to indicate the start of a transaction (batch). This parameter is
+ignored if useTransactions_ is disabled.
+
+
+.. _commitTransactionMark:
+
+commitTransactionMark
+^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "COMMIT TRANSACTION", "no", "none"
+
+.. versionadded:: 8.31.0
+
+Allows specifying the mark message that rsyslog will send to the external
+program to indicate the end of a transaction (batch). This parameter is
+ignored if useTransactions_ is disabled.
+
+
+.. _output:
+
+output
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+.. versionadded:: v8.1.6
+
+Full path of a file where the output of the external program will be saved.
+If the file already exists, the output is appended to it. If the file does
+not exist, it is created with the permissions specified by fileCreateMode_.
+
+If confirmMessages_ is set to "off" (the default), both the stdout and
+stderr of the child process are written to the specified file.
+
+If confirmMessages_ is set to "on", only the stderr of the child is
+written to the specified file (since stdout is used for confirming the
+messages).
+
+Rsyslog will reopen the file whenever it receives a HUP signal. This allows
+the file to be externally rotated (using a tool like *logrotate*): after
+each rotation of the file, make sure a HUP signal is sent to rsyslogd.
+
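+For illustration, a minimal logrotate sketch that sends the required HUP
+signal after rotation (the output file and the pid file path are assumptions
+that vary by setup and distribution):
+
+.. code-block:: none
+
+ /var/log/db_forward.log {
+     weekly
+     rotate 4
+     postrotate
+         # pid file location is distribution-specific
+         /bin/kill -HUP $(cat /var/run/rsyslogd.pid) 2>/dev/null || true
+     endscript
+ }
+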
+If the omprog action is configured to use multiple worker threads
+(:doc:`queue.workerThreads <../../rainerscript/queue_parameters>` is
+set to a value greater than 1), the lines written by the various program
+instances will not appear intermingled in the output file, as long as the
+lines do not exceed a certain length and the program writes them to
+stdout/stderr in line-buffered mode. For details, refer to `Interface between
+rsyslog and external output plugins
+<https://github.com/rsyslog/rsyslog/blob/master/plugins/external/INTERFACE.md>`_.
+
+If this parameter is not specified, the output of the program will be
+redirected to ``/dev/null``.
+
+.. note::
+
+ Before version v8.38.0, this parameter was intended for debugging purposes
+ only. Since v8.38.0, the parameter can be used for production.
+
+
+.. _fileCreateMode:
+
+fileCreateMode
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "0600", "no", "none"
+
+.. versionadded:: v8.38.0
+
+Permissions the output_ file will be created with, in case the file does not
+exist. The value must be a 4-digit octal number, with the initial digit being
+zero. Please note that the actual permission depends on the rsyslogd process
+umask. If in doubt, use ``$umask 0000`` right at the beginning of the
+configuration file to remove any restrictions.
+
+
+hup.signal
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+.. versionadded:: 8.9.0
+
+Specifies which signal, if any, is to be forwarded to the external program
+when rsyslog receives a HUP signal. Currently, HUP, USR1, USR2, INT, and
+TERM are supported. If unset, no signal is sent on HUP; this is the default
+and matches the behavior of versions prior to 8.9.0.
+
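+For example, the following hedged sketch (the binary path is illustrative)
+forwards a received HUP as USR1, so that a program which writes its own files
+can be told to reopen them:
+
+.. code-block:: none
+
+ action(type="omprog"
+ binary="/usr/libexec/rsyslog/log.sh"
+ hup.signal="USR1")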
+
+.. _signalOnClose:
+
+signalOnClose
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+.. versionadded:: 8.23.0
+
+Specifies whether a TERM signal must be sent to the external program before
+closing it (when either the worker thread has been unscheduled, a restart
+of the program is being forced, or rsyslog is about to shut down).
+
+If this switch is set to "on", rsyslog will send a TERM signal to the child
+process before closing the pipe. That is, the process will first receive a
+TERM signal, and then an EOF on stdin.
+
+No signal is issued if this switch is set to "off" (default). The child
+process can still detect it must terminate because reading from stdin will
+return EOF.
+
+See the killUnresponsive_ parameter for more details.
+
+
+.. _closeTimeout:
+
+closeTimeout
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "5000", "no", "none"
+
+.. versionadded:: 8.35.0
+
+Specifies how long rsyslog must wait for the external program to terminate
+(when either the worker thread has been unscheduled, a restart of the program
+is being forced, or rsyslog is about to shut down) after closing the pipe,
+that is, after sending EOF to the stdin of the child process. The value must
+be expressed in milliseconds and must be greater than or equal to zero.
+
+See the killUnresponsive_ parameter for more details.
+
+
+.. _killUnresponsive:
+
+killUnresponsive
+^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "the value of 'signalOnClose'", "no", "none"
+
+.. versionadded:: 8.35.0
+
+Specifies whether a KILL signal must be sent to the external program in case
+it does not terminate within the timeout indicated by closeTimeout_
+(when either the worker thread has been unscheduled, a restart of the program
+is being forced, or rsyslog is about to shut down).
+
+If signalOnClose_ is set to "on", the default value of ``killUnresponsive``
+is also "on". In this case, the cleanup sequence of the child process is as
+follows: (1) a TERM signal is sent to the child, (2) the pipe with the child
+process is closed (the child will receive EOF on stdin), (3) rsyslog waits
+for the child process to terminate during closeTimeout_, (4) if the child
+has not terminated within the timeout, a KILL signal is sent to it.
+
+If signalOnClose_ is set to "off", the default value of ``killUnresponsive``
+is also "off". In this case, the child cleanup sequence is as follows: (1) the
+pipe with the child process is closed (the child will receive EOF on stdin),
+(2) rsyslog waits for the child process to terminate during closeTimeout_,
+(3) if the child has not terminated within the timeout, rsyslog ignores it.
+
+This parameter can also be set to a value different from signalOnClose_,
+which yields the corresponding variations of the cleanup sequences described above.
+
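+The following sketch (binary path and timeout value are illustrative) combines
+the related parameters so that the program first receives TERM and EOF, is
+given two seconds to exit, and is killed only if it is still running afterwards:
+
+.. code-block:: none
+
+ action(type="omprog"
+ binary="/usr/libexec/rsyslog/db_forward.py"
+ signalOnClose="on"
+ closeTimeout="2000"
+ killUnresponsive="on")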
+
+forceSingleInstance
+^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+.. versionadded:: v8.1.6
+
+By default, the omprog action will start an instance (process) of the
+external program per worker thread (the maximum number of worker threads
+can be specified with the
+:doc:`queue.workerThreads <../../rainerscript/queue_parameters>`
+parameter). Moreover, if the action is associated with a
+:doc:`disk-assisted queue <../../concepts/queues>`, an additional instance
+will be started when the queue is persisted, to process the items stored
+on disk.
+
+If you want to force a single instance of the program to be executed,
+regardless of the number of worker threads or the queue type, set this
+flag to "on". This is useful when the external program uses or accesses
+some kind of shared resource that does not allow concurrent access from
+multiple processes.
+
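+As a minimal sketch (binary path illustrative), the following action keeps a
+single program instance even though several queue workers are configured:
+
+.. code-block:: none
+
+ action(type="omprog"
+ binary="/usr/libexec/rsyslog/db_forward.py"
+ queue.type="LinkedList"
+ queue.workerThreads="4"
+ forceSingleInstance="on")
+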
+.. note::
+
+ Before version v8.38.0, this parameter had no effect.
+
+
+Examples
+========
+
+Example: command line arguments
+-------------------------------
+
+In the following example, logs will be sent to a program ``log.sh`` located
+in ``/usr/libexec/rsyslog``. The program will receive the command line arguments
+``p1``, ``p2`` and ``--param3="value 3"``.
+
+.. code-block:: none
+
+ module(load="omprog")
+
+ action(type="omprog"
+ binary="/usr/libexec/rsyslog/log.sh p1 p2 --param3=\"value 3\""
+ template="RSYSLOG_TraditionalFileFormat")
+
+
+Example: external program that writes logs to a database
+--------------------------------------------------------
+
+In this example, logs are sent to the stdin of a Python program that
+(let's assume) writes them to a database. A dedicated disk-assisted
+queue with (a maximum of) 5 worker threads is used, to avoid affecting
+other log destinations in moments of high load. The ``confirmMessages``
+flag is enabled, which tells rsyslog to wait for the program to confirm
+its initialization and each message received. The purpose of this setup
+is preventing logs from being lost because of database connection
+failures.
+
+If the program cannot write a log to the database, it will return a
+negative confirmation to rsyslog via stdout. Rsyslog will then keep the
+failed log in the queue, and send it again to the program after 5
+seconds. The program can also write error details to stderr, which will
+be captured by rsyslog and written to ``/var/log/db_forward.log``. If
+no response is received from the program within a 30-second timeout,
+rsyslog will kill and restart it.
+
+.. code-block:: none
+
+ module(load="omprog")
+
+ action(type="omprog"
+ name="db_forward"
+ binary="/usr/libexec/rsyslog/db_forward.py"
+ confirmMessages="on"
+ confirmTimeout="30000"
+ queue.type="LinkedList"
+ queue.saveOnShutdown="on"
+ queue.workerThreads="5"
+ action.resumeInterval="5"
+ killUnresponsive="on"
+ output="/var/log/db_forward.log")
+
+Note that the ``useTransactions`` flag is not used in this example. The
+program stores and confirms each log individually.
+
+
+|FmtObsoleteName| directives
+============================
+
+- **$ActionOMProgBinary** <binary>
+ The binary program to be executed.
diff --git a/source/configuration/modules/omrabbitmq.rst b/source/configuration/modules/omrabbitmq.rst
new file mode 100644
index 0000000..433abd4
--- /dev/null
+++ b/source/configuration/modules/omrabbitmq.rst
@@ -0,0 +1,404 @@
+**********************************
+omrabbitmq: RabbitMQ output module
+**********************************
+
+=========================== ===========================================================================
+**Module Name:** **omrabbitmq**
+**Authors:** Jean-Philippe Hilaire <jean-philippe.hilaire@pmu.fr> / Philippe Duveau <philippe.duveau@free.fr> / Hamid Maadani <hamid@dexo.tech>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+This module sends syslog messages to a RabbitMQ server.
+Only v6 configuration syntax is supported.
+
+**omrabbitmq is tested and running in production with rsyslog 8.x.**
+
+Compile
+=======
+
+To successfully compile the omrabbitmq module, you need the `rabbitmq-c <https://github.com/alanxz/rabbitmq-c>`_ library, version >= 0.4.
+
+ ./configure --enable-omrabbitmq ...
+
+Configuration Parameters
+========================
+
+Action Parameters
+-----------------
+
+host
+^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "yes", "hostname\[:port\]\[ hostname2\[:port2\]\]",
+
+RabbitMQ server(s) to connect to. See the HA configuration section below.
+
+port
+^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", "port", "5672"
+
+virtual\_host
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "yes", "path",
+
+Virtual host to use on the message broker.
+
+user
+^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "yes", "user",
+
+User name used for authentication.
+
+password
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "yes", "password",
+
+User password used for authentication.
+
+ssl
+^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "no", , "off"
+
+Enables TLS for the AMQP connection.
+
+init_openssl
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "no", , "off"
+
+Specifies whether rabbitmq-c should initialize OpenSSL. This option exists to prevent crashes caused by double initialization of OpenSSL. It should stay off in most cases; only turn it on if TLS does not work because OpenSSL has not been initialized.
+
+verify_peer
+^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "no", , "off"
+
+Enables SSL peer verification.
+
+verify_hostname
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "no", , "off"
+
+Enables SSL certificate hostname verification.
+
+ca_cert
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", "file path",
+
+CA certificate to be used for the SSL connection.
+
+heartbeat_interval
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "no", , "0"
+
+AMQP heartbeat interval in seconds. A value of 0 (the default) disables heartbeats.
+
+exchange
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", "name",
+
+Name of the exchange to publish messages to.
+
+routing\_key
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", "name",
+
+Constant value of the routing key.
+
+routing\_key\_template
+^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", "template_name",
+
+Template used to compute the routing key.
+
+body\_template
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", "template", "StdJSONFmt"
+
+Template used to compute the message body. If the template is an empty string, the message sent is %rawmsg%.
+
+delivery\_mode
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", "TRANSIENT\|PERSISTANT", "TRANSIENT"
+
+Persistence of the message in the broker.
+
+expiration
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", "milliseconds", no expiration
+
+TTL (time to live) of the AMQP message.
+
+populate\_properties
+^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "no", , "off"
+
+Fills the AMQP properties timestamp, appid, msgid and hostname (custom header) with information from the message.
+
+content\_type
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", "value",
+
+Content type as a MIME value.
+
+declare\_exchange
+^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "no", "off",
+
+If enabled, rsyslog tries to declare the exchange on startup. A declaration failure (the exchange already exists with different parameters, or insufficient rights) is reported as a warning but does not cancel the action instance.
+
+recover\_policy
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", "check\_interval;short\_failure_interval; short\_failure\_nb\_max;graceful\_interval", "60;6;3;600"
+
+Fail-over policy; see the HA configuration section below.
+
+HA configuration
+================
+
+The module can use two RabbitMQ servers in fail-over mode. To configure this mode, the host parameter has to reference the two servers, separated by a space.
+Each server can optionally be completed with a port (useful when the ports differ).
+One of the servers is chosen at startup as the preferred one; the module connects to it with a fail-over policy that can be defined through the action parameter "recover_policy".
+
+The module launches a background thread to monitor the connection. As soon as the connection fails, the thread tries to re-establish it, switching to the backup server if needed to recover the service. While connected to the backup server, the thread keeps trying to reconnect to the preferred server according to the recover policy. This behaviour load-balances clients across the two RabbitMQ servers under normal conditions, switches to the surviving server in case of failure, and rebalances across both servers as soon as the failed server has recovered, all without restarting clients.
+
+The recover policy is based on four parameters:
+
+- `check_interval` is the base duration between connection retries (default is 60 seconds)
+
+- `short_failure_interval` is the duration below which two successive failures are considered abnormal for the RabbitMQ server (default is `check_interval/10`)
+
+- `short_failure_nb_max` is the number of successive short failures that must be detected before the graceful interval is applied (default is 3)
+
+- `graceful_interval` is a longer duration used while the RabbitMQ server is unstable (default is `check_interval*10`).
+
+Short-failure detection deals with an unstable network or server: it forces a switch to the backup server for at least `graceful_interval`, avoiding heavy load on the unstable server. This can prevent dramatic scenarios in multi-site deployments.
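+
+As an illustrative sketch (server names and credentials are placeholders), the
+following action enables fail-over between two brokers and relaxes the recover
+policy to check every 120 seconds, treat failures within 12 seconds as short
+failures, and back off for 1200 seconds after 3 of them:
+
+.. code-block:: none
+
+ action(type="omrabbitmq"
+ host="rabbit1 rabbit2:5673"
+ virtual_host="/"
+ user="guest"
+ password="guest"
+ exchange="syslog"
+ routing_key="syslog.all"
+ recover_policy="120;12;3;1200")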
+
+Examples
+========
+
+Example 1
+---------
+
+This is the simplest action:
+
+- No High Availability
+
+- The routing-key is constant
+
+- The sent message uses JSON format
+
+.. code-block:: none
+
+ module(load='omrabbitmq')
+ action(type="omrabbitmq"
+ host="localhost"
+ virtual_host="/"
+ user="guest"
+ password="guest"
+ exchange="syslog"
+ routing_key="syslog.all")
+
+Example 2
+---------
+
+Action characteristics :
+
+- No High Availability
+
+- The routing-key is computed
+
+- The sent message is a raw message
+
+.. code-block:: none
+
+ module(load='omrabbitmq')
+ template(name="rkTpl" type="string" string="%syslogtag%.%syslogfacility-text%.%syslogpriority-text%")
+
+ action(type="omrabbitmq"
+ host="localhost"
+ virtual_host="/"
+ user="guest"
+ password="guest"
+ exchange="syslog"
+ routing_key_template="rkTpl"
+ body_template="")
+
+Example 3
+---------
+
+HA action :
+
+- High Availability between `server1:5672` and `server2:1234`
+
+- The routing-key is computed
+
+- The sent message is formatted using the RSYSLOG_ForwardFormat standard template
+
+.. code-block:: none
+
+ module(load='omrabbitmq')
+ template(name="rkTpl" type="string" string="%syslogtag%.%syslogfacility-text%.%syslogpriority-text%")
+
+ action(type="omrabbitmq"
+ host="server1 server2:1234"
+ virtual_host="production"
+ user="guest"
+ password="guest"
+ exchange="syslog"
+ routing_key_template="rkTpl"
+ template_body="RSYSLOG_ForwardFormat")
+
+Example 4
+---------
+
+SSL-enabled connection, with heartbeat:
+
+- No High Availability
+
+- The routing-key is constant
+
+- The sent message uses JSON format
+
+- Heartbeat is set to 20 seconds
+
+.. code-block:: none
+
+ module(load='omrabbitmq')
+ action(type="omrabbitmq"
+ host="localhost"
+ virtual_host="/"
+ user="guest"
+ password="guest"
+ ssl="on"
+ verify_peer="off"
+ verify_hostname="off"
+ heartbeat_interval="20"
+ exchange="syslog"
+ routing_key="syslog.all")
diff --git a/source/configuration/modules/omrelp.rst b/source/configuration/modules/omrelp.rst
new file mode 100644
index 0000000..11a3fd9
--- /dev/null
+++ b/source/configuration/modules/omrelp.rst
@@ -0,0 +1,482 @@
+**************************
+omrelp: RELP Output Module
+**************************
+
+=========================== ===========================================================================
+**Module Name:** **omrelp**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+This module supports sending syslog messages over the reliable RELP
+protocol. For RELP's advantages over plain tcp syslog, please see the
+documentation for :doc:`imrelp <imrelp>` (the server counterpart).
+
+
+Setup
+=====
+
+Please note that `librelp <http://www.librelp.com>`__ is required for
+omrelp (it provides the core RELP protocol implementation).
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+Module Parameters
+-----------------
+
+tls.tlslib
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+.. versionadded:: 8.1903.0
+
+Permits to specify the TLS library used by librelp.
+All RELP protocol operations are actually performed by librelp, not
+rsyslog itself. The value specified here is passed directly down to
+librelp. Depending on the librelp version and build parameters, the
+supported TLS libraries differ (or TLS may not be supported at all).
+In that case, rsyslog emits an error message.
+
+Usually, the following options should be available: "openssl", "gnutls".
+
+Note that "gnutls" is the current default for historic reasons. We actually
+recommend to use "openssl". It provides better error messages and accepts
+a wider range of certificate types.
+
+If you have problems with the default setting, we recommend to switch to
+"openssl".
+
+
+Action Parameters
+-----------------
+
+Target
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "yes", "none"
+
+The target server to connect to.
+
+
+Port
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "514", "no", "none"
+
+Name or numerical value of TCP port to use when connecting to target.
+
+
+Template
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "RSYSLOG_ForwardFormat", "no", "none"
+
+Defines the template to be used for the output.
+
+
+Timeout
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "int", "90", "no", "none"
+
+Timeout for RELP sessions. If set too low, valid sessions may be
+considered dead and recovery attempted.
+
+
+Conn.Timeout
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "int", "10", "no", "none"
+
+Timeout for the socket connection.
+
+
+RebindInterval
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "int", "0", "no", "none"
+
+Permits to specify an interval at which the current connection is
+broken and re-established. This setting is primarily an aid to load
+balancers. After the configured number of messages has been
+transmitted, the current connection is terminated and a new one
+started. This usually is perceived as a "new connection" by load
+balancers, which in turn forward messages to another physical target
+system.
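+
+As an illustrative sketch (the target name is a placeholder), the following
+action re-establishes the connection after every 10000 messages so that a
+load balancer can redistribute the traffic:
+
+.. code-block:: none
+
+ action(type="omrelp" target="lb.example.net" port="2514"
+ rebindInterval="10000")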
+
+
+WindowSize
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "int", "0", "no", "none"
+
+This is an **expert parameter**. It permits to override the RELP
+window size being used by the client. Changing the window size has
+both an effect on performance as well as potential message
+duplication in failure case. A larger window size means more
+performance, but also potentially more duplicated messages - and vice
+versa. The default of 0 means that librelp's default window size is
+used, which is considered a good compromise between these goals.
+For your information: at the time of this writing, the librelp
+default window size is 128 messages, but this may change at any time.
+Note that there is no equivalent server parameter, as the client
+proposes and manages the window size in the RELP protocol.
+
+
+TLS
+^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+If set to "on", the RELP connection will be encrypted by TLS, so
+that the data is protected against observers. Please note that both
+the client and the server must have set TLS to either "on" or "off".
+Other combinations lead to unpredictable results.
+
+*Attention when using GnuTLS 2.10.x or older*
+
+Versions older than GnuTLS 2.10.x may cause a crash (segfault) under
+certain circumstances, most likely when both an imrelp input and an
+omrelp output are configured. The crash may happen when you are
+receiving/sending messages at the same time. Upgrade to a newer
+version like GnuTLS 2.12.21 to solve the problem.
+
+
+TLS.Compression
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+This controls whether the TLS stream should be compressed (zipped). While
+this increases CPU use, the network bandwidth should be reduced. Note
+that typical text-based log records usually compress rather well.
+
+
+TLS.PermittedPeer
+^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "array", "none", "no", "none"
+
+Note: this parameter is mandatory depending on the value of
+`TLS.AuthMode`, but the code currently does not check this.
+
+Places access restrictions on this forwarder. Only peers which
+have been listed in this parameter may be connected to. This guards
+against rogue servers and man-in-the-middle attacks. The validation
+is based on the certificate the remote peer presents.
+
+This contains either remote system names or fingerprints, depending
+on the value of parameter `TLS.AuthMode`. One or more values may be
+entered.
+
+When a non-permitted peer is connected to, the refusal is logged
+together with the given remote peer identity. This is especially
+useful in *fingerprint* authentication mode: if the
+administrator knows this was a valid request, he can simply add the
+fingerprint by copy and paste from the logfile to rsyslog.conf. It
+must be noted, though, that this situation should usually not happen
+after initial client setup and administrators should be alert in this
+case.
+
+Note that usually a single remote peer should be all that is ever
+needed. Support for multiple peers is primarily included in support
+of load balancing scenarios. If the connection goes to a specific
+server, only one specific certificate is ever expected (just like
+when connecting to a specific ssh server).
+To specify multiple fingerprints, just enclose them in brackets like
+this:
+
+.. code-block:: none
+
+ tls.permittedPeer=["SHA1:...1", "SHA1:....2"]
+
+To specify just a single peer, you can either specify the string
+directly or enclose it in brackets.
+
+Note that in *name* authentication mode wildcards are supported.
+This can be done as follows:
+
+.. code-block:: none
+
+ tls.permittedPeer="*.example.com"
+
+Of course, there can also be multiple names used, some with and
+some without wildcards:
+
+.. code-block:: none
+
+ tls.permittedPeer=["*.example.com", "srv1.example.net", "srv2.example.net"]
+
+
+TLS.AuthMode
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+Sets the mode used for mutual authentication. Supported values are
+either "*fingerprint*" or "*name*". Fingerprint mode basically is
+what SSH does. It does not require a full PKI to be present, instead
+self-signed certs can be used on all peers. Even if a CA certificate
+is given, the validity of the peer cert is NOT verified against it.
+Only the certificate fingerprint counts.
+
+In "name" mode, certificate validation happens. Here, the matching is
+done against the certificate's subjectAltName and, as a fallback, the
+subject common name. If the certificate contains multiple names, a
+match on any one of these names is considered good and permits the
+peer to talk to rsyslog.
+
+The permitted names or fingerprints are configured via
+`TLS.PermittedPeer`.
+
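+For illustration, a minimal sketch of name-based authentication, assuming
+certificates already exist at the given (placeholder) paths:
+
+.. code-block:: none
+
+ action(type="omrelp"
+ target="central.example.com" port="2514" tls="on"
+ tls.cacert="/etc/rsyslog.d/ca.pem"
+ tls.mycert="/etc/rsyslog.d/cert.pem"
+ tls.myprivkey="/etc/rsyslog.d/key.pem"
+ tls.authmode="name"
+ tls.permittedpeer=["central.example.com"])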
+
+About Chained Certificates
+--------------------------
+
+.. versionadded:: 8.2008.0
+
+With librelp 1.7.0, you can use chained certificates.
+If using "openssl" as tls.tlslib, we recommend at least OpenSSL Version 1.1
+or higher. Chained certificates will also work with OpenSSL Version 1.0.2, but
+they will be loaded into the main OpenSSL context object making them available
+to all librelp instances (omrelp/imrelp) within the same process.
+
+If this is not desired, you will need to run multiple rsyslog instances
+with different omrelp configurations and certificates.
+
+
+TLS.CaCert
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+The CA certificate that can verify the machine certs.
+
+
+TLS.MyCert
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+The machine public certificate.
+
+
+TLS.MyPrivKey
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+The machine private key.
+
+
+TLS.PriorityString
+^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+This parameter permits to specify the so-called "priority string" to
+GnuTLS. This string gives complete control over all crypto
+parameters, including the compression setting. For this reason, when a
+priority string is specified, the "tls.compression" parameter has no
+effect and is ignored.
+Full information about how to construct a priority string can be
+found in the GnuTLS manual. At the time of this writing, this
+information was contained in `section 6.10 of the GnuTLS
+manual <http://gnutls.org/manual/html_node/Priority-Strings.html>`__.
+**Note: this is an expert parameter.** Do not use if you do not
+exactly know what you are doing.
+
+tls.tlscfgcmd
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "no", "none"
+
+.. versionadded:: 8.2001.0
+
+The setting can be used if tls.tlslib is set to "openssl" to pass configuration commands to
+the OpenSSL library.
+OpenSSL Version 1.0.2 or higher is required for this feature.
+A list of possible commands and their valid values can be found in the documentation:
+https://www.openssl.org/docs/man1.0.2/man3/SSL_CONF_cmd.html
+
+The setting can be single- or multi-line; configuration commands are separated by a linefeed (\n).
+Command and value are separated by an equals sign (=). Here are a few samples:
+
+Example 1
+---------
+
+This will allow all protocols except for SSLv2 and SSLv3:
+
+.. code-block:: none
+
+ tls.tlscfgcmd="Protocol=ALL,-SSLv2,-SSLv3"
+
+
+Example 2
+---------
+
+This will allow all protocols except for SSLv2, SSLv3 and TLSv1.
+It will also set the minimum protocol to TLSv1.2
+
+.. code-block:: none
+
+ tls.tlscfgcmd="Protocol=ALL,-SSLv2,-SSLv3,-TLSv1
+ MinProtocol=TLSv1.2"
+
+LocalClientIp
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+omrelp uses the given IP address as the local client address while
+connecting to the remote log server.
+
+
+Examples
+========
+
+Sending msgs with omrelp
+------------------------
+
+The following sample sends all messages to the central server
+"centralserv" at port 2514 (note that that server must run imrelp on
+port 2514).
+
+.. code-block:: none
+
+ module(load="omrelp")
+ action(type="omrelp" target="centralserv" port="2514")
+
+
+Sending msgs with omrelp via TLS
+------------------------------------
+
+This is the same as the previous example but uses TLS (via OpenSSL) for
+operations.
+
+Certificate files must exist at configured locations. Note that authmode
+"certvalid" is not very strong - you may want to use a different one for
+actual deployments. For details, see parameter descriptions.
+
+.. code-block:: none
+
+ module(load="omrelp" tls.tlslib="openssl")
+ action(type="omrelp"
+ target="centralserv" port="2514" tls="on"
+ tls.cacert="tls-certs/ca.pem"
+ tls.mycert="tls-certs/cert.pem"
+ tls.myprivkey="tls-certs/key.pem"
+ tls.authmode="certvalid"
+ tls.permittedpeer="rsyslog")
+
+
+|FmtObsoleteName| directives
+============================
+
+This module uses old-style action configuration to keep consistent with
+the forwarding rule. So far, no additional configuration directives can
+be specified. To send a message via RELP, use
+
+.. code-block:: none
+
+ *.* :omrelp:<server>:<port>;<template>
+
+
diff --git a/source/configuration/modules/omruleset.rst b/source/configuration/modules/omruleset.rst
new file mode 100644
index 0000000..abba0b2
--- /dev/null
+++ b/source/configuration/modules/omruleset.rst
@@ -0,0 +1,184 @@
+******************************************
+omruleset: ruleset output/including module
+******************************************
+
+=========================== ===========================================================================
+**Module Name:** **omruleset**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+=========================== ===========================================================================
+
+.. warning::
+
+ This module is outdated and is only provided to support configurations that
+ already use it. **Do not use it in new configurations.** It has
+ been replaced by the much more efficient `"call" RainerScript
+ statement <rainerscript_call.html>`_. The "call" statement supports
+ everything omruleset does, but in an easier-to-use way (see the sketch below).
+
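+For comparison, a minimal sketch of the modern equivalent using the ``call``
+statement (names and paths are illustrative):
+
+.. code-block:: none
+
+ ruleset(name="commonAction") {
+     action(type="omfile" file="/path/to/file.log")
+ }
+
+ if $msg contains "error" then {
+     call commonAction
+ }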
+
+**Available Since**: 5.3.4
+
+**Deprecated in**: 7.2.0+
+
+
+Purpose
+=======
+
+This is a very special "output" module. It permits to pass a message
+object to another rule set. While this is a very simple action, it
+enables very complex configurations, e.g. it supports high-speed "and"
+conditions, sending data to the same file in a non-racy way,
+include-ruleset functionality as well as some high-performance
+optimizations (in case the rule sets have the necessary queue
+definitions).
+
+While this output module offers seemingly simple functionality, it
+provides a lot of power. The complexity (and the capabilities) arise from how
+everything can be combined.
+
+With this module, a message can be sent for processing to another
+ruleset. This is somewhat similar to a "#include" in the C programming
+language. However, one needs to keep in mind that a ruleset can
+contain its own queue and that a queue can run in various modes.
+
+Note that if no queue is defined in the ruleset, the message is enqueued
+into the main message queue. This most often is not optimal and means
+that message processing may be severely deferred. Also note that when the
+ruleset's target queue is full and no free space can be acquired within
+the usual timeout, the message object may actually be lost. This is an
+extreme scenario, but users building an audit-grade system need to know
+this restriction. For regular installations, it should not really be
+relevant.
+
+**At minimum, be sure you understand the**
+:doc:`$RulesetCreateMainQueue <../ruleset/rsconf1_rulesetcreatemainqueue>`
+**directive as well as the importance of statement order in rsyslog.conf
+before using omruleset!**
+
+**Recommended Use:**
+
+- create rulesets specifically for omruleset
+- create these rulesets with their own main queue
+- decent queueing parameters (sizes, threads, etc) should be used for
+ the ruleset main queue. If in doubt, use the same parameters as for
+ the overall main queue.
+- if you use multiple levels of ruleset nesting, double check for
+ endless loops - the rsyslog engine does not detect these
+
+
+|FmtObsoleteName| directives
+============================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+- **$ActionOmrulesetRulesetName** ruleset-to-submit-to
+ This directive specifies the name of the ruleset that the message
+ provided to omruleset should be submitted to. This ruleset must
+ already have been defined. Note that the directive is automatically
+ reset after each :omruleset: action and there is no default. This is
+done to prevent accidental loops in ruleset definition, which can
+ happen very quickly. The :omruleset: action will NOT be honored if no
+ ruleset name has been defined. As usual, the ruleset name must be
+ specified in front of the action that it modifies.
+
+
+Examples
+========
+
+Ruleset for Write-to-file action
+--------------------------------
+
+This example creates a ruleset for a write-to-file action. The idea here
+is that the same file is written to based on multiple filters; problems
+occur if the file is used together with a buffer. That is because file
+buffers are action-specific, and so some partial buffers would be
+written. With omruleset, we create a single action inside its own
+ruleset and then pass all messages to it whenever we need to do so. Of
+course, such a simple situation could also be solved by a more complex
+filter, but the method used here can also be utilized in more complex
+scenarios (e.g. with multiple listeners). The example tries to keep it
+simple. Note that we create a ruleset-specific main queue (for
+simplicity with the default main queue parameters) in order to avoid
+re-queueing messages back into the main queue.
+
+.. code-block:: none
+
+ $ModLoad omruleset
+
+ # define ruleset for commonly written file
+ $RuleSet CommonAction
+ $RulesetCreateMainQueue on
+ *.* /path/to/file.log
+
+ #switch back to default ruleset
+ $ruleset RSYSLOG_DefaultRuleset
+
+ # begin first action
+ # note that we must first specify which ruleset to use for omruleset:
+ $ActionOmrulesetRulesetName CommonAction
+ mail.info :omruleset:
+ # end first action
+
+ # begin second action
+ # note that we must first specify which ruleset to use for omruleset:
+ $ActionOmrulesetRulesetName CommonAction
+ :FROMHOST, isequal, "myhost.example.com" :omruleset:
+ #end second action
+
+ # of course, we can have "regular" actions alongside :omruleset: actions
+ *.* /path/to/general-message-file.log
+
+
+High-performance filter condition
+---------------------------------
+
+The next example is used to create a high-performance nested and filter
+condition. Here, it is first checked if the message contains a string
+"error". If so, the message is forwarded to another ruleset which then
+applies some filters. The advantage of this is that we can use
+high-performance filters where we otherwise would need to use the (much
+slower) expression-based filters. Also, this enables pipeline
+processing, in that the second ruleset is executed in parallel to the first
+one.
+
+.. code-block:: none
+
+ $ModLoad omruleset
+ # define "second" ruleset
+ $RuleSet nested
+ $RulesetCreateMainQueue on
+ # again, we use our own queue
+ mail.* /path/to/mailerr.log
+ kernel.* /path/to/kernelerr.log
+ auth.* /path/to/autherr.log
+
+ #switch back to default ruleset
+ $ruleset RSYSLOG_DefaultRuleset
+
+ # begin first action - here we filter on "error"
+ # note that we must first specify which ruleset to use for omruleset:
+ $ActionOmrulesetRulesetName nested
+ :msg, contains, "error" :omruleset:
+ #end first action
+
+ # begin second action - as an example we can do anything else in
+ # this processing. Note that these actions are processed concurrently
+ # to the ruleset "nested"
+ :FROMHOST, isequal, "myhost.example.com" /path/to/host.log
+ #end second action
+
+ # of course, we can have "regular" actions alongside :omruleset: actions
+ *.* /path/to/general-message-file.log
+
+
+Caveats/Known Bugs
+==================
+
+The current configuration file language is not really adequate for a
+complex construct like omruleset. Unfortunately, more important work is
+currently preventing me from redoing the config language. So use extreme
+care when nesting rulesets and be sure to test-run your config before
+putting it into production, ensuring you have a sufficiently large probe
+of the traffic run over it. If problems arise, the `rsyslog debug
+log <troubleshoot.html>`_ is your friend.
+
diff --git a/source/configuration/modules/omsnmp.rst b/source/configuration/modules/omsnmp.rst
new file mode 100644
index 0000000..ba283f5
--- /dev/null
+++ b/source/configuration/modules/omsnmp.rst
@@ -0,0 +1,265 @@
+*******************************
+omsnmp: SNMP Trap Output Module
+*******************************
+
+=========================== ===========================================================================
+**Module Name:** **omsnmp**
+**Author:** Andre Lorbach <alorbach@adiscon.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+Provides the ability to send syslog messages as SNMPv1 and v2c traps.
+By default, SNMPv2c is preferred. The syslog message is wrapped in an
+OCTET STRING variable. This module uses the
+`NET-SNMP <http://net-snmp.sourceforge.net/>`_ library. In order to
+compile this module, you will need to have the
+`NET-SNMP <http://net-snmp.sourceforge.net/>`_ developer (headers)
+package installed.
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+Action Parameters
+-----------------
+
+Server
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "yes", "``$actionsnmptarget``"
+
+This can be a hostname or IP address, and is the SNMP target host.
+This parameter is required; if the target is not defined, nothing
+will be sent.
+
+
+Port
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "162", "no", "``$actionsnmptargetport``"
+
+The port which will be used; common values are 162 or 161.
+
+
+Transport
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "udp", "no", "``$actionsnmptransport``"
+
+Defines the transport type you wish to use. Technically we can
+support all transport types which are supported by NET-SNMP.
+To name a few possible values:
+udp, tcp, udp6, tcp6, icmp, icmp6 ...
+
+
+Version
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "1", "no", "``$actionsnmpversion``"
+
+There can only be two choices for this parameter for now.
+0 means SNMPv1 will be used.
+1 means SNMPv2c will be used.
+Any other value will default to 1.
+
+
+Community
+^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "public", "no", "``$actionsnmpcommunity``"
+
+This sets the used SNMP Community.
+
+
+TrapOID
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "1.3.6.1.4.1.19406.1.2.1", "no", "``$actionsnmptrapoid``"
+
+The default value means "ADISCON-MONITORWARE-MIB::syslogtrap".
+
+This configuration parameter is used for **SNMPv2** only.
+This is the OID which defines the trap-type, or notification-type
+rsyslog uses to send the trap.
+In order to decode this OID, you will need to have the
+ADISCON-MONITORWARE-MIB and ADISCON-MIB mibs installed on the
+receiver side. Downloads of these mib files can be found here:
+
+`http://www.adiscon.org/download/ADISCON-MIB.txt <http://www.adiscon.org/download/ADISCON-MIB.txt>`_
+
+`http://www.adiscon.org/download/ADISCON-MONITORWARE-MIB.txt <http://www.adiscon.org/download/ADISCON-MONITORWARE-MIB.txt>`_
+
+Thanks to the net-snmp mailing list for the help and the
+recommendations ;).
+
+
+MessageOID
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "1.3.6.1.4.1.19406.1.2.1", "no", "``$actionsnmpsyslogmessageoid``"
+
+This OID will be used as a variable, type "OCTET STRING". This
+variable will contain up to 255 characters of the original syslog
+message, including the syslog header. It is recommended to use the default
+OID.
+In order to decode this OID, you will need to have the
+ADISCON-MONITORWARE-MIB and ADISCON-MIB mibs installed on the
+receiver side. To download these custom mibs, see the description of
+**TrapOID**.
+
+
+EnterpriseOID
+^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "1.3.6.1.4.1.3.1.1", "no", "``$actionsnmpenterpriseoid``"
+
+The default value means "enterprises.cmu.1.1"
+
+Customize this value if needed. We recommend using the default value
+unless you require a different OID.
+This configuration parameter is used for **SNMPv1** only. It has no
+effect if **SNMPv2** is used.
+
+
+SpecificType
+^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "0", "no", "``$actionsnmpspecifictype``"
+
+This is the specific trap number. This configuration parameter is
+used for **SNMPv1** only. It has no effect if **SNMPv2** is used.
+
+
+Snmpv1DynSource
+^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "", "no", "none"
+
+.. versionadded:: 8.2001
+
+If set, the source field of the SNMP trap can be overwritten with a
+template. The internal default is "%fromhost-ip%". The result should be a
+valid IPv4 address; otherwise, setting the source will fail.
+
+Below is a sample template called "dynsource" which you can use to set the
+source to a custom property:
+
+.. code-block:: none
+
+ set $!custom_host = $fromhost;
+ template(name="dynsource" type="list") {
+ property(name="$!custom_host")
+ }
+
+
+This configuration parameter is used for **SNMPv1** only.
+It has no effect if **SNMPv2** is used.
+
+
+TrapType
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "6", "no", "``$actionsnmptraptype``"
+
+There are only 7 possible trap types defined which can be used here.
+These trap types are:
+
+.. code-block:: none
+
+ 0 = SNMP_TRAP_COLDSTART
+ 1 = SNMP_TRAP_WARMSTART
+ 2 = SNMP_TRAP_LINKDOWN
+ 3 = SNMP_TRAP_LINKUP
+ 4 = SNMP_TRAP_AUTHFAIL
+ 5 = SNMP_TRAP_EGPNEIGHBORLOSS
+ 6 = SNMP_TRAP_ENTERPRISESPECIFIC
+
+.. note::
+
+ Any other value will default to 6 automatically. This configuration
+ parameter is used for **SNMPv1** only. It has no effect if **SNMPv2**
+ is used.
+
+
+Caveats/Known Bugs
+==================
+
+- In order to decode the custom OIDs, you will need to have the adiscon
+ mibs installed.
+
+
+Examples
+========
+
+Sending messages as snmp traps
+------------------------------
+
+The following configuration sends every message as an SNMP trap.
+
+.. code-block:: none
+
+ module(load="omsnmp")
+ action(type="omsnmp" server="localhost" port="162" transport="udp"
+ version="1" community="public")
+
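+
+Sending SNMPv1 traps
+--------------------
+
+A hedged sketch of an SNMPv1 setup using the v1-only parameters documented
+above (the values shown are the documented defaults and purely illustrative):
+
+.. code-block:: none
+
+ module(load="omsnmp")
+ action(type="omsnmp" server="localhost" port="162" transport="udp"
+ version="0" community="public"
+ enterpriseoid="1.3.6.1.4.1.3.1.1"
+ specifictype="0"
+ traptype="6")
+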
diff --git a/source/configuration/modules/omstdout.rst b/source/configuration/modules/omstdout.rst
new file mode 100644
index 0000000..ee2ddf3
--- /dev/null
+++ b/source/configuration/modules/omstdout.rst
@@ -0,0 +1,113 @@
+***********************************************
+omstdout: stdout output module (testbench tool)
+***********************************************
+
+=========================== ===========================================================================
+**Module Name:** **omstdout**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+**Available Since:** 4.1.6
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+This module writes any messages that are passed to it to stdout. It
+was developed for the rsyslog test suite. However, some other (limited)
+uses may exist. Please note that we do not put much effort into
+the quality of this module, as we do not expect it to be used in real
+deployments. If you do, please drop us a note so that we can raise
+its priority!
+
+
+Configuration
+=============
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Module Parameters
+-----------------
+
+none
+
+
+Action Parameters
+-----------------
+
+template
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "RSYSLOG_FileFormat", "no", "none"
+
+Set the template which will be used for the output. If none is specified
+the default will be used.
+
+
+EnsureLFEnding
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "on", "no", "``$ActionOMStdoutEnsureLFEnding``"
+
+Makes sure that each message is written with a terminating LF. If the
+message contains a trailing LF, none is added. This is needed for the
+automated tests.
+
+
+Configure statement
+-------------------
+
+This is used when building rsyslog from source.
+
+./configure --enable-omstdout
+
+
+Legacy parameter not adopted in the new style
+---------------------------------------------
+
+- **$ActionOMStdoutArrayInterface**
+ [Default: off]
+ This setting instructs omstdout to use the alternate array based
+ method of parameter passing. If used, the values will be output with
+ commas between the values but no other padding bytes. This is a test
+ aid for the alternate calling interface.
+
+
+Examples
+========
+
+Minimum setup
+-------------
+
+The following sample is the minimum setup required to have syslog messages
+written to stdout.
+
+.. code-block:: none
+
+ module(load="omstdout")
+ action(type="omstdout")
+
+
+Example 2
+---------
+
+The following sample will write syslog messages to stdout, using a template.
+
+.. code-block:: none
+
+ module(load="omstdout")
+ action(type="omstdout" template="outfmt")
+
+
diff --git a/source/configuration/modules/omudpspoof.rst b/source/configuration/modules/omudpspoof.rst
new file mode 100644
index 0000000..2edb106
--- /dev/null
+++ b/source/configuration/modules/omudpspoof.rst
@@ -0,0 +1,209 @@
+**************************************
+omudpspoof: UDP spoofing output module
+**************************************
+
+=========================== ===========================================================================
+**Module Name:** **omudpspoof**
+**Author:** David Lang <david@lang.hm> and `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+**Available Since:** 5.1.3
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+This module is similar to the regular UDP forwarder, but permits spoofing
+the sender address. Also, it enables cycling through a number of
+source ports.
+
+**Important**: This module **requires root permissions**. This is a hard
+requirement because raw socket access is necessary to fake UDP sender
+addresses. As such, rsyslog cannot drop privileges if this module is
+to be used. Ensure that you do **not** use `$PrivDropToUser` or
+`$PrivDropToGroup`. Many distro default configurations (notably Ubuntu)
+contain these statements. You need to remove or comment them out if you
+want to use `omudpspoof`.
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+Module Parameters
+-----------------
+
+Template
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "RSYSLOG_TraditionalForwardFormat", "no", "none"
+
+This setting instructs omudpspoof to use a template different from
+the default template for all of its actions that do not have a
+template specified explicitly.
+
+
+Action Parameters
+-----------------
+
+Target
+^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "yes", "``$ActionOMUDPSpoofTargetHost``"
+
+Host that the messages shall be sent to.
+
+
+Port
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "514", "no", "``$ActionOMUDPSpoofTargetPort``"
+
+Remote port that the messages shall be sent to. Default is 514.
+
+
+SourceTemplate
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "RSYSLOG_omudpspoofDfltSourceTpl", "no", "``$ActionOMOMUDPSpoofSourceNameTemplate``"
+
+This is the name of the template that contains a numerical IP
+address that is to be used as the source system IP address. While it
+may often be a constant value, it can be generated as usual via the
+property replacer, as long as it is a valid IPv4 address. If not
+specified, the built-in default template
+RSYSLOG\_omudpspoofDfltSourceTpl is used. This template is defined as
+follows: ``$template RSYSLOG_omudpspoofDfltSourceTpl,"%fromhost-ip%"``.
+So in essence, the default template spoofs the address of the system
+the message was received from. This is considered the most important
+use case.
+
+
+SourcePort.start
+^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "32000", "no", "``$ActionOMUDPSpoofSourcePortStart``"
+
+Specify the start value for circling the source ports. Start must be
+less than or equal to sourcePort.End.
+
+
+SourcePort.End
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "42000", "no", "``$ActionOMUDPSpoofSourcePortEnd``"
+
+Specify the end value for circling the source ports. End must be
+greater than or equal to sourcePort.Start.
+
+
+MTU
+^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "1500", "no", "none"
+
+Maximum packet length to send.
+
+
+Template
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "RSYSLOG_TraditionalForwardFormat", "no", "``$ActionOMUDPSpoofDefaultTemplate``"
+
+This setting instructs omudpspoof to use a template different from
+the default template for all of its actions that do not have a
+template specified explicitly.
+
+
+Caveats/Known Bugs
+==================
+
+- **IPv6** is currently not supported. If you need this capability,
+ please let us know via the rsyslog mailing list.
+
+- Throughput is much lower than with the omfwd module.
+
+
+Examples
+========
+
+Forwarding message through multiple ports
+-----------------------------------------
+
+Forward the message to 192.168.1.1, using the original source address and source ports between 10000 and 19999.
+
+.. code-block:: none
+
+ action(type="omudpspoof"
+        target="192.168.1.1"
+        sourceport.start="10000"
+        sourceport.end="19999")
+
+
+Forwarding message using another source address
+-----------------------------------------------
+
+Forward the message to 192.168.1.1, using source address 192.168.111.111 and default ports.
+
+.. code-block:: none
+
+ module(load="omudpspoof")
+
+ template(name="spoofaddr" type="string" string="192.168.111.111")
+
+ action(type="omudpspoof"
+        target="192.168.1.1"
+        sourcetemplate="spoofaddr")
+
+
diff --git a/source/configuration/modules/omusrmsg.rst b/source/configuration/modules/omusrmsg.rst
new file mode 100644
index 0000000..ec00956
--- /dev/null
+++ b/source/configuration/modules/omusrmsg.rst
@@ -0,0 +1,67 @@
+**********************
+omusrmsg: notify users
+**********************
+
+=========================== ===========================================================================
+**Module Name:**  **omusrmsg**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+This module permits sending log messages to users' terminals. It is a
+built-in module, so it does not need to be loaded explicitly.
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Action Parameters
+-----------------
+
+Users
+^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "none", "yes", "none"
+
+The names of the users to send the message to.
+
+
+Template
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "WallFmt/StdUsrMsgFmt", "no", "none"
+
+Template to use for the message. The default is WallFmt when the users
+parameter is "*", and StdUsrMsgFmt otherwise.
+
+
+Examples
+========
+
+Write emergency messages to all users
+-------------------------------------
+
+The following action writes emergency messages to all users:
+
+.. code-block:: none
+
+ action(type="omusrmsg" users="*")
+
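+Notify a single user
+--------------------
+
+As a hedged sketch of a more selective setup, the following action sends
+critical kernel messages to the terminal of the user "root" (the filter and
+user name are illustrative assumptions only):
+
+.. code-block:: none
+
+ kern.crit action(type="omusrmsg" users="root")
+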
diff --git a/source/configuration/modules/omuxsock.rst b/source/configuration/modules/omuxsock.rst
new file mode 100644
index 0000000..a9ba8cd
--- /dev/null
+++ b/source/configuration/modules/omuxsock.rst
@@ -0,0 +1,61 @@
+************************************
+omuxsock: Unix sockets Output Module
+************************************
+
+=========================== ===========================================================================
+**Module Name:**  **omuxsock**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+**Available since:** 4.7.3, 5.5.7
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+This module supports sending syslog messages to local Unix sockets. Thus
+it provides a fast message-passing interface between different rsyslog
+instances. The counterpart to omuxsock is `imuxsock <imuxsock.html>`_.
+Note that the template used together with omuxsock must be suitable for
+processing by the receiver.
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+|FmtObsoleteName| directives
+----------------------------
+
+- **$OMUxSockSocket**
+ Name of the socket to send data to. This has no default and **must**
+ be set.
+- **$OMUxSockDefaultTemplate**
+ This can be used to override the default template to be used
+ together with omuxsock. This is primarily useful if there are many
+ forwarding actions and each of them should use the same template.
+
+
+Caveats/Known Bugs
+==================
+
+Currently, only datagram sockets are supported.
+
+
+Examples
+========
+
+Write all messages to socket
+----------------------------
+
+The following sample writes all messages to the "/tmp/socksample"
+socket.
+
+.. code-block:: none
+
+ $ModLoad omuxsock
+ $OMUxSockSocket /tmp/socksample
+ *.* :omuxsock:
+
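+Pair with an imuxsock receiver
+------------------------------
+
+For completeness, a minimal sketch of a receiving rsyslog instance listening
+on the same socket via imuxsock (assuming the imuxsock input parameter
+``Socket``; adjust the path to your setup):
+
+.. code-block:: none
+
+ module(load="imuxsock")
+ input(type="imuxsock" socket="/tmp/socksample")
+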
diff --git a/source/configuration/modules/pmciscoios.rst b/source/configuration/modules/pmciscoios.rst
new file mode 100644
index 0000000..dc82e43
--- /dev/null
+++ b/source/configuration/modules/pmciscoios.rst
@@ -0,0 +1,183 @@
+**********
+pmciscoios
+**********
+
+=========================== ===========================================================================
+**Module Name:**  **pmciscoios**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+**Available since:** 8.3.4+
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+This is a parser that understands Cisco IOS "syslog" format. Note
+that this format is quite different from RFC syslog format, and
+so the default parser chain cannot deal with it.
+
+Note that due to large differences in IOS logging format, pmciscoios
+may currently not be able to handle all possible format variations.
+Nevertheless, it should be fairly easy to adapt it to additional
+requirements. So be sure to ask if you run into problems with
+format issues.
+
+Note that if your Cisco system emits timezone information in a supported
+format, rsyslog will pick it up. In order to apply proper timezone offsets,
+the timezone ids (e.g. "EST") must be configured via the
+:doc:`timezone object <../timezone>`.
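+
+For illustration, such a timezone mapping could look like the following
+sketch (the timezone id and offset are examples only):
+
+.. code-block:: none
+
+ timezone(id="EST" offset="-05:00")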
+
+Note that if the clock on the Cisco device has not been set and cannot be
+verified, the device will prepend the timestamp field with an asterisk (*).
+If the clock has gone out of sync with its configured NTP server, the
+timestamp field will be prepended with a dot (.). In both of these cases
+parsing the timestamp would fail, therefore any preceding asterisks (*) or
+dots (.) are ignored. This may lead to "incorrect" timestamps being logged.
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Parser Parameters
+-----------------
+
+present.origin
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+This setting tells the parser whether the origin field is present inside
+the message. Due to the nature of Cisco's logging format, the parser
+cannot reliably deduce whether the origin field is present
+(at least not with reasonable performance). As such, the parser
+must be provided with that information. If the origin is present,
+its value is stored inside the HOSTNAME message property.
+
+
+present.xr
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+If syslog is received from an IOSXR device, the syslog format will usually
+start with the RSP/LC/etc. that produced the log, followed by the timestamp.
+It will also contain an additional syslog tag before the standard Cisco
+%TAG; this tag references the process that produced the log.
+In order to use this Cisco IOS parser module with XR format messages, both
+of these additional fields must be ignored.
+
+
+Examples
+========
+
+Listening to multiple devices, some emitting origin information and some not
+--------------------------------------------------------------------------------
+
+We assume a scenario where we have some devices configured to emit origin
+information whereas some others do not. In order to differentiate between
+the two classes, rsyslog accepts input on different ports, one per class.
+For each port, an input() object is defined, which binds the port to a
+ruleset that uses the appropriately-configured parser. Except for the
+different parsers, processing shall be identical for both classes. In our
+first example we do this via a common ruleset which carries out the
+actual processing:
+
+.. code-block:: none
+
+ module(load="imtcp")
+ module(load="pmciscoios")
+
+ input(type="imtcp" port="10514" ruleset="withoutOrigin")
+ input(type="imtcp" port="10515" ruleset="withOrigin")
+
+ ruleset(name="common") {
+ ... do processing here ...
+ }
+
+ ruleset(name="withoutOrigin" parser="rsyslog.ciscoios") {
+ /* this ruleset uses the default parser which was
+ * created during module load
+ */
+ call common
+ }
+
+ parser(name="custom.ciscoios.withOrigin" type="pmciscoios"
+ present.origin="on")
+ ruleset(name="withOrigin" parser="custom.ciscoios.withOrigin") {
+ /* this ruleset uses the parser defined immediately above */
+ call common
+ }
+
+
+Date stamp immediately following the origin
+-------------------------------------------
+
+The example configuration above is a good solution. However, it is possible
+to do the same thing in a somewhat condensed way, but if and only if the date
+stamp immediately follows the origin. In that case, the parser has a chance to
+detect if the origin is present or not. The key point here is to make sure
+the parser checking for the origin is given before the default one, in which
+case the first one will detect that it does not match and pass on to the next
+one inside the parser chain. However, this comes at the expense of additional
+runtime overhead. The example below is **not** good practice -- it is given
+as a purely educational sample to show some fine details of how parser
+definitions interact. In this case, we can use a single listener.
+
+.. code-block:: none
+
+ module(load="imtcp")
+ module(load="pmciscoios")
+
+ input(type="imtcp" port="10514" ruleset="ciscoBoth")
+
+ parser(name="custom.ciscoios.withOrigin" type="pmciscoios"
+ present.origin="on")
+ ruleset(name="ciscoBoth"
+ parser=["custom.ciscoios.withOrigin", "rsyslog.ciscoios"]) {
+ ... do processing here ...
+ }
+
+
+Handling Cisco IOS and IOSXR formats
+------------------------------------
+
+The following sample demonstrates how to handle Cisco IOS and IOSXR formats
+
+.. code-block:: none
+
+ module(load="imudp")
+ module(load="pmciscoios")
+
+ input(type="imudp" port="10514" ruleset="ios")
+ input(type="imudp" port="10515" ruleset="iosxr")
+
+ ruleset(name="common") {
+ ... do processing here ...
+ }
+
+ ruleset(name="ios" parser="rsyslog.ciscoios") {
+ call common
+ }
+
+ parser(name="custom.ciscoios.withXr" type="pmciscoios"
+ present.xr="on")
+ ruleset(name="iosxr" parser="custom.ciscoios.withXr"] {
+ call common
+ }
+
+
diff --git a/source/configuration/modules/pmdb2diag.rst b/source/configuration/modules/pmdb2diag.rst
new file mode 100644
index 0000000..dc36977
--- /dev/null
+++ b/source/configuration/modules/pmdb2diag.rst
@@ -0,0 +1,146 @@
+**************************************
+pmdb2diag: DB2 Diag file parser module
+**************************************
+
+=========================== ===========================================================================
+**Module Name:** **pmdb2diag**
+**Authors:** Jean-Philippe Hilaire <jean-philippe.hilaire@pmu.fr> & Philippe Duveau <philippe.duveau@free.fr>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+The objective of this module is to extract the timestamp, procid and appname
+from the log lines without altering them.
+
+The parser acts after an imfile input. This implies that imfile must be
+configured with needParse set to "on".
+
+Compile
+=======
+
+To successfully compile the pmdb2diag module, you need to enable it via
+configure::
+
+ ./configure --enable-pmdb2diag ...
+
+Configuration Parameters
+========================
+
+**Parser Name:** "db2.diag"
+
+The default parameter values assume that imfile is configured with escapeLF set to "on".
+
+timeformat
+^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "no", "see strptime manual","%Y-%m-%d-%H.%M.%S."
+
+Format of the timestamp in the db2diag log, including the decimal separator
+between seconds and fractions of a second.
+
+timepos
+^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "no", ,"0"
+
+Position of the timestamp in the db2 diag log.
+
+levelpos
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "no", ,"59"
+
+Position of the severity (level) in the db2 diag log.
+
+pidstarttoprogstartshift
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "mandatory", "format", "default"
+ :widths: auto
+ :class: parameter-table
+
+ "integer", "no", ,"49"
+
+Shift from the beginning of the pid field to the beginning of the prog field
+in the db2 diag log.
+
+Examples
+========
+
+Example 1
+^^^^^^^^^
+
+This is the simplest parsing with default values
+
+.. code-block:: none
+
+ module(load="pmdb2diag")
+ ruleset(name="ruleDB2" parser="db2.diag") {
+ ... do something
+ }
+ input(type="imfile" file="db2diag.log" ruleset="ruleDB2" tag="db2diag"
+ startmsg.regex="^[0-9]{4}-[0-9]{2}-[0-9]{2}" escapelf="on" needparse="on")
+
+
+Example 2
+^^^^^^^^^
+
+Parsing with custom values
+
+.. code-block:: none
+
+ module(load="pmdb2diag")
+ parser(type="pmdb2diag" name="custom.db2.diag" levelpos="57"
+ timeformat=""%y-%m-%d:%H.%M.%S.")
+ ruleset(name="ruleDB2" parser="custom.db2.diag") {
+ ... do something
+ }
+ input(type="imfile" file="db2diag.log" ruleset="ruleDB2" tag="db2diag"
+ startmsg.regex="^[0-9]{4}-[0-9]{2}-[0-9]{2}" escapelf="on" needparse="on")
+
+DB2 Log sample
+^^^^^^^^^^^^^^
+
+.. code-block:: none
+
+ 2015-05-06-16.53.26.989402+120 E1876227378A1702 LEVEL: Info
+ PID : 4390948 TID : 89500 PROC : db2sysc 0
+ INSTANCE: db2itst NODE : 000 DB : DBTEST
+ APPHDL : 0-14410 APPID: 10.42.2.166.36261.150506120149
+ AUTHID : DBUSR HOSTNAME: dev-dbm1
+ EDUID : 89500 EDUNAME: db2agent (DBTEST) 0
+ FUNCTION: DB2 UDB, relation data serv, sqlrr_dispatch_xa_request, probe:703
+ MESSAGE : ZRC=0x80100024=-2146435036=SQLP_NOTA "Transaction was not found"
+ DIA8036C XA error with request type of "". Transaction was not found.
+ DATA #1 : String, 27 bytes
+ XA Dispatcher received NOTA
+ CALLSTCK: (Static functions may not be resolved correctly, as they are resolved to the nearest symbol)
+ [0] 0x090000000A496B70 sqlrr_xrollback__FP14db2UCinterface + 0x11E0
+ [1] 0x090000000A356764 sqljsSyncRollback__FP14db2UCinterface + 0x6E4
+ [2] 0x090000000C1FAAA8 sqljsParseRdbAccessed__FP13sqljsDrdaAsCbP13sqljDDMObjectP14db2UCinterface + 0x529C
+ [3] 0x0000000000000000 ?unknown + 0x0
+ [4] 0x090000000C23D260 @72@sqljsSqlam__FP14db2UCinterfaceP8sqeAgentb + 0x1174
+ [5] 0x090000000C23CE54 @72@sqljsSqlam__FP14db2UCinterfaceP8sqeAgentb + 0xD68
+ [6] 0x090000000D74AB90 @72@sqljsDriveRequests__FP8sqeAgentP14db2UCconHandle + 0xA8
+ [7] 0x090000000D74B6A0 @72@sqljsDrdaAsInnerDriver__FP18SQLCC_INITSTRUCT_Tb + 0x5F8
+ [8] 0x090000000B8F85AC RunEDU__8sqeAgentFv + 0x48C38
+ [9] 0x090000000B876240 RunEDU__8sqeAgentFv + 0x124
+ [10] 0x090000000CD90DFC EDUDriver__9sqzEDUObjFv + 0x130
+ [11] 0x090000000BE01664 sqloEDUEntry + 0x390
+ [12] 0x09000000004F5E10 _pthread_body + 0xF0
+ [13] 0xFFFFFFFFFFFFFFFC ?unknown + 0xFFFFFFFF
diff --git a/source/configuration/modules/pmlastmsg.rst b/source/configuration/modules/pmlastmsg.rst
new file mode 100644
index 0000000..711612f
--- /dev/null
+++ b/source/configuration/modules/pmlastmsg.rst
@@ -0,0 +1,68 @@
+****************************************
+pmlastmsg: last message repeated n times
+****************************************
+
+=========================== ===========================================================================
+**Module Name:**  **pmlastmsg**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+**Available Since:** 5.5.6
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+Some syslogds are known to emit severely malformed messages with content
+"last message repeated n times". These messages can mess up message
+reception, as they lead to wrong interpretation with the standard
+RFC3164 parser. Rather than trying to fix this issue in pmrfc3164, we
+have created a new parser module specifically for these messages. The
+reason is that some processing overhead is involved in processing these
+messages (they must be recognized) and we would not like to place this
+toll on every user but only on those actually in need of the feature.
+Note that the performance toll is not large -- but if you expect a very
+high message rate with tens of thousands of messages per second, you will
+notice a difference.
+
+This module should be loaded first inside :doc:`rsyslog's parser
+chain </concepts/messageparser>`. It processes all those messages that
+contain a PRI, then none or some spaces and then the exact text
+(case-insensitive) "last message repeated n times" where n must be an
+integer. All other messages are left untouched.
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+There do not currently exist any configuration parameters for this
+module.
+
+
+Examples
+========
+
+Systems emitting malformed "repeated msg" messages
+--------------------------------------------------
+
+This example is the typical use case, where some systems emit malformed
+"repeated msg" messages. Other than that, the default :rfc:`5424` and
+:rfc:`3164` parsers should be used. Note that when a parser is specified,
+the default parser chain is removed, so we need to specify all three
+parsers. We use this together with the default ruleset.
+
+.. code-block:: none
+
+ module(load="pmlastmsg")
+
+ parser(type="pmlastmsg" name="custom.pmlastmsg")
+
+ ruleset(name="ruleset" parser=["custom.pmlastmsg", "rsyslog.rfc5424",
+ "rsyslog.rfc3164"]) {
+ ... do processing here ...
+ }
+
diff --git a/source/configuration/modules/pmnormalize.rst b/source/configuration/modules/pmnormalize.rst
new file mode 100644
index 0000000..2a01dd5
--- /dev/null
+++ b/source/configuration/modules/pmnormalize.rst
@@ -0,0 +1,121 @@
+*****************************************************
+Log Message Normalization Parser Module (pmnormalize)
+*****************************************************
+
+=========================== ===========================================================================
+**Module Name:**  **pmnormalize**
+**Author:** Pascal Withopf <pascalwithopf1@gmail.com>
+**Available since:** 8.27.0
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+This parser normalizes messages with the specified rules and populates the
+properties for further use.
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Parser Parameters
+-----------------
+
+Rulebase
+^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "word", "none", "no", "none"
+
+Specifies which rulebase file to use. If there are multiple
+pmnormalize instances, each one can use a different file. However, a
+single instance can use only a single file. This parameter or **rule**
+MUST be given, because normalization can only happen based on a rulebase.
+It is recommended that an absolute path name is given. Information on
+how to create the rulebase can be found in the `liblognorm
+manual <http://www.liblognorm.com/files/manual/index.html>`_.
+
+
+Rule
+^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "array", "none", "no", "none"
+
+Contains an array of strings which will be put together as the rulebase.
+This parameter or **rulebase** MUST be given, because normalization can
+only happen based on a rulebase.
+
+
+UndefinedPropertyError
+^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+This parameter controls whether an error message is emitted every time
+pmnormalize fails to normalize a message.
+
+
+Examples
+========
+
+Normalize msgs received via imtcp
+---------------------------------
+
+In this sample messages are received via imtcp. Then they are normalized with
+the given rulebase and written to a file.
+
+.. code-block:: none
+
+ module(load="imtcp")
+ module(load="pmnormalize")
+
+ input(type="imtcp" port="13514" ruleset="ruleset")
+
+ parser(name="custom.pmnormalize" type="pmnormalize" rulebase="/tmp/rules.rulebase")
+
+ ruleset(name="ruleset" parser="custom.pmnormalize") {
+ action(type="omfile" file="/tmp/output")
+ }
+
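+For illustration, the rulebase file referenced above could contain rules in
+liblognorm syntax, similar to the rule strings used in the next example
+(the exact contents here are only an assumption for this sketch):
+
+.. code-block:: none
+
+ rule=:<%pri:number%> %fromhost-ip:ipv4% %hostname:word% %syslogtag:char-to:\x3a%: %msg:rest%
+ rule=:<%pri:number%> %hostname:word% %fromhost-ip:ipv4% %syslogtag:char-to:\x3a%: %msg:rest%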
+
+Write normalized messages to file
+---------------------------------
+
+In this sample messages are received via imtcp. Then they are normalized with
+the given rule array. After that they are written in a file.
+
+.. code-block:: none
+
+ module(load="imtcp")
+ module(load="pmnormalize")
+
+ input(type="imtcp" port="10514" ruleset="outp")
+
+ parser(name="custom.pmnormalize" type="pmnormalize" rule=[
+ "rule=:<%pri:number%> %fromhost-ip:ipv4% %hostname:word% %syslogtag:char-to:\\x3a%: %msg:rest%",
+ "rule=:<%pri:number%> %hostname:word% %fromhost-ip:ipv4% %syslogtag:char-to:\\x3a%: %msg:rest%"])
+
+ ruleset(name="outp" parser="custom.pmnormalize") {
+ action(type="omfile" File="/tmp/output")
+ }
+
diff --git a/source/configuration/modules/pmnull.rst b/source/configuration/modules/pmnull.rst
new file mode 100644
index 0000000..3348cbf
--- /dev/null
+++ b/source/configuration/modules/pmnull.rst
@@ -0,0 +1,123 @@
+*********************************
+pmnull: Syslog Null Parser Module
+*********************************
+
+=========================== ===========================================================================
+**Module Name:**  **pmnull**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+When a message is received, rsyslog tries a set of parsers to populate the
+message properties. This parser module sets all properties except rawmsg to
+the empty string.
+There usually should be no need to use this module. It may be useful to
+process certain known non-syslog messages.
+
+The pmnull module was originally written because some people thought it would
+be nice to save 0.05% of processing time by not unnecessarily parsing the
+message. We doubt even that amount of performance gain is achieved, as the
+properties need to be populated in any case, so the saving is really minimal
+(but it exists).
+
+**If you just want to transmit or store messages exactly in the format that
+they arrived in you do not need pmnull!** You can use the `rawmsg` property::
+
+ template(name="asReceived" type="string" string="%rawmsg%")
+ action(type="omfwd" target="server.example.net" template="asReceived")
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Parser Parameters
+-----------------
+
+Tag
+^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "string", "", "no", "none"
+
+This setting assigns the given tag value to the message.
+
+
+SyslogFacility
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "Facility", "1", "no", "none"
+
+This setting sets the syslog facility value. The default comes from the
+rfc3164 standard.
+
+
+SyslogSeverity
+^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "Severity", "5", "no", "none"
+
+This setting sets the syslog severity value. The default comes from the
+rfc3164 standard.
+
+
+Examples
+========
+
+Process messages received via imtcp
+-----------------------------------
+
+In this example messages are received through imtcp on port 13514. The
+ruleset uses the parser pmnull which has the parameters tag, syslogfacility
+and syslogseverity given.
+
+.. code-block:: none
+
+ module(load="imtcp")
+ module(load="pmnull")
+
+ input(type="imtcp" port="13514" ruleset="ruleset")
+ parser(name="custom.pmnull" type="pmnull" tag="mytag" syslogfacility="3"
+ syslogseverity="1")
+
+ ruleset(name="ruleset" parser=["custom.pmnull", "rsyslog.pmnull"]) {
+ action(type="omfile" file="rsyslog.out.log")
+ }
+
+
+Process messages with default parameters
+----------------------------------------
+
+In this example the ruleset uses the parser pmnull with the default parameters
+because no specifics were given.
+
+.. code-block:: none
+
+ module(load="imtcp")
+ module(load="pmnull")
+
+ input(type="imtcp" port="13514" ruleset="ruleset")
+ parser(name="custom.pmnull" type="pmnull")
+
+ ruleset(name="ruleset" parser="custom.pmnull") {
+ action(type="omfile" file="rsyslog.out.log")
+ }
+
diff --git a/source/configuration/modules/pmrfc3164.rst b/source/configuration/modules/pmrfc3164.rst
new file mode 100644
index 0000000..46cff38
--- /dev/null
+++ b/source/configuration/modules/pmrfc3164.rst
@@ -0,0 +1,161 @@
+*******************************************
+pmrfc3164: Parse RFC3164-formatted messages
+*******************************************
+
+=========================== ===========================================================================
+**Module Name:**  **pmrfc3164**
+**Author:** `Rainer Gerhards <https://rainer.gerhards.net/>`_ <rgerhards@adiscon.com>
+=========================== ===========================================================================
+
+
+Purpose
+=======
+
+This parser module is for parsing messages according to the traditional/legacy
+syslog standard :rfc:`3164`.
+
+It is part of the default parser chain.
+
+The parser can also be customized to allow the parsing of specific formats,
+if they occur.
+
+
+Configuration Parameters
+========================
+
+.. note::
+
+ Parameter names are case-insensitive.
+
+
+Parser Parameters
+-----------------
+
+permit.squareBracketsInHostname
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+This setting tells the parser to accept hostnames that are enclosed in square
+brackets and to omit the brackets from the parsed hostname.
+
+
+permit.slashesInHostname
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+.. versionadded:: 8.20.0
+
+This setting tells the parser that hostnames may contain slashes. This
+is useful when messages e.g. from a syslog-ng relay chain are received.
+Syslog-ng puts the various relay hosts via slashes into the hostname
+field.
+
+
+permit.AtSignsInHostname
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+.. versionadded:: 8.25.0
+
+This setting tells the parser that hostnames may contain at-signs. This
+is useful when messages are relayed from a syslog-ng server in rfc3164
+format. The hostname field sent by syslog-ng may be prefixed by the source
+name followed by an at-sign character.
+
+
+force.tagEndingByColon
+^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+.. versionadded:: 8.25.0
+
+This setting tells the parser that a tag must end with a colon to be
+considered valid. Otherwise, the tag is set to a dash ("-") and the
+message is left unchanged.
+
+
+remove.msgFirstSpace
+^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+.. versionadded:: 8.25.0
+
+RFC3164 states that the message starts directly after the tag, including the
+first white space. This option removes that first white space from the message
+right after reading. It makes RFC3164 and RFC5424 syslog messages behave in a
+more consistent way.
+
+
+detect.YearAfterTimestamp
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. csv-table::
+ :header: "type", "default", "mandatory", "|FmtObsoleteName| directive"
+ :widths: auto
+ :class: parameter-table
+
+ "binary", "off", "no", "none"
+
+Some devices send syslog messages in a format that is similar to RFC3164,
+but they also attach the year to the timestamp (which is not compliant with
+the RFC). With regular parsing, the year would be recognized as the
+hostname and the hostname would become the syslogtag. This setting
+prevents that. Detection is limited to years between 2000 and 2099, so
+purely numeric hostnames outside this range can still be recognized
+correctly. However, everything in this range will be detected as a year.
+
+
+Examples
+========
+
+Receiving malformed RFC3164 messages
+------------------------------------
+
+We assume a scenario where some of the devices send malformed RFC3164
+messages. The parser module will automatically detect the malformed
+sections and parse them accordingly.
+
+.. code-block:: none
+
+ module(load="imtcp")
+
+ input(type="imtcp" port="514" ruleset="customparser")
+
+ parser(name="custom.rfc3164"
+ type="pmrfc3164"
+ permit.squareBracketsInHostname="on"
+ detect.YearAfterTimestamp="on")
+
+ ruleset(name="customparser" parser="custom.rfc3164") {
+ ... do processing here ...
+ }
+
diff --git a/source/configuration/modules/pmrfc3164sd.rst b/source/configuration/modules/pmrfc3164sd.rst
new file mode 100644
index 0000000..aeb1517
--- /dev/null
+++ b/source/configuration/modules/pmrfc3164sd.rst
@@ -0,0 +1,5 @@
+pmrfc3164sd: Parse RFC5424 structured data inside RFC3164 messages
+==================================================================
+
+A contributed module for supporting RFC5424 structured data inside
+RFC3164 messages (not supported by the rsyslog team)
diff --git a/source/configuration/modules/pmrfc5424.rst b/source/configuration/modules/pmrfc5424.rst
new file mode 100644
index 0000000..21554b7
--- /dev/null
+++ b/source/configuration/modules/pmrfc5424.rst
@@ -0,0 +1,6 @@
+pmrfc5424: Parse RFC5424-formatted messages
+===========================================
+
+This is the new Syslog Standard.
+
+:rfc:`5424`
diff --git a/source/configuration/modules/sigprov_gt.rst b/source/configuration/modules/sigprov_gt.rst
new file mode 100644
index 0000000..b3cd092
--- /dev/null
+++ b/source/configuration/modules/sigprov_gt.rst
@@ -0,0 +1,94 @@
+GuardTime Log Signature Provider (gt)
+=====================================
+
+**Signature Provider Name: gt**
+
+**Author:** Rainer Gerhards <rgerhards@adiscon.com>
+
+**Supported:** from 7.3.9 to 8.26.0
+
+**Description**:
+
+Provides the ability to sign syslog messages via the GuardTime signature
+services.
+
+**Configuration Parameters**:
+
+Note: parameter names are case-insensitive.
+
+Signature providers are loaded by omfile, when the provider is selected
+in its "sig.providerName" parameter. Parameters for the provider are
+given in the omfile action instance line.
+
+This provider creates a signature file with the same base name but the
+extension ".gtsig" for each log file (both for fixed-name files as well
+as dynafiles). Both files together form a set. So you need to archive
+both in order to prove integrity.
+
+- **sig.hashFunction** <Hash Algorithm>
+ The following hash algorithms are currently supported:
+
+ - SHA1
+ - RIPEMD-160
+ - SHA2-224
+ - SHA2-256
+ - SHA2-384
+ - SHA2-512
+
+- **sig.timestampService** <timestamper URL>
+ This provides the URL of the timestamper service. If not selected, a
+ default server is selected. This may not necessarily be a good one
+ for your region.
+
+ *Note:* If you need to supply user credentials, you can add them to
+ the timestamper URL. If, for example, you have a user "user" with
+ password "pass", you can do so as follows:
+
+ http://user:pass@timestamper.example.net
+
+- **sig.block.sizeLimit** <nbr-records>
+ The maximum number of records inside a single signature block. By
+ default, there is no size limit, so the signature is only written on
+ file closure. Note that a signature request typically takes between
+ one and two seconds. So signing too frequently is probably not a good
+ idea.
+
+- **sig.keepRecordHashes** <on/**off**>
+ Controls if record hashes are written to the .gtsig file. This
+ enhances the ability to spot the location of a signature breach, but
+ costs considerable disk space (65 bytes for each log record for
+ SHA2-512 hashes, for example).
+
+- **sig.keepTreeHashes** <on/**off**>
+ Controls if tree (intermediate) hashes are written to the .gtsig
+ file. This enhances the ability to spot the location of a signature
+ breach, but costs considerable disk space (a bit more than the amount
+ sig.keepRecordHashes requires). Note that both Tree and Record hashes
+ can be kept inside the signature file.
+
+**See Also**
+
+- `How to sign log messages through signature provider
+ Guardtime <http://www.rsyslog.com/how-to-sign-log-messages-through-signature-provider-guardtime/>`_
+
+**Caveats/Known Bugs:**
+
+- currently none known
+
+**Samples:**
+
+This writes a log file with its associated signature file. Default
+parameters are used.
+
+::
+
+ action(type="omfile" file="/var/log/somelog" sig.provider="gt")
+
+In the next sample, we use the more secure SHA2-512 hash function, sign
+every 10,000 records, and keep both Tree and Record hashes.
+
+::
+
+ action(type="omfile" file="/var/log/somelog" sig.provider="gt"
+ sig.hashfunction="SHA2-512" sig.block.sizelimit="10000"
+ sig.keepTreeHashes="on" sig.keepRecordHashes="on")
diff --git a/source/configuration/modules/sigprov_ksi.rst b/source/configuration/modules/sigprov_ksi.rst
new file mode 100644
index 0000000..99193cb
--- /dev/null
+++ b/source/configuration/modules/sigprov_ksi.rst
@@ -0,0 +1,99 @@
+Keyless Signature Infrastructure Provider (ksi)
+===============================================
+
+**Signature Provider Name: ksi**
+
+**Author:** Rainer Gerhards <rgerhards@adiscon.com>
+
+**Supported:** from 8.11.0 to 8.26.0
+
+**Description**:
+
+Provides the ability to sign syslog messages via the GuardTime KSI
+signature services.
+
+**Configuration Parameters**:
+
+Note: parameter names are case-insensitive.
+
+Signature providers are loaded by omfile, when the provider is selected
+in its "sig.providerName" parameter. Parameters for the provider are
+given in the omfile action instance line.
+
+This provider creates a signature file with the same base name but the
+extension ".ksisig" for each log file (both for fixed-name files as well
+as dynafiles). Both files together form a set. So you need to archive
+both in order to prove integrity.
+
+- **sig.hashFunction** <Hash Algorithm>
+ The following hash algorithms are currently supported:
+
+ - SHA1
+ - SHA2-256
+ - RIPEMD-160
+ - SHA2-224
+ - SHA2-384
+ - SHA2-512
+ - RIPEMD-256
+ - SHA3-244
+ - SHA3-256
+ - SHA3-384
+ - SHA3-512
+ - SM3
+
+- **sig.aggregator.uri** <KSI Aggregator URL>
+ This provides the URL of the KSI Aggregator service provided by
+ guardtime and looks like this:
+
+ ksi+tcp://[ip/dnsname]:3332
+
+- **sig.aggregator.user** <KSI UserID>
+ Set your username provided by Guardtime here.
+
+- **sig.aggregator.key** <KSI Key / Password>
+ Set your key provided by Guardtime here.
+
+- **sig.block.sizeLimit** <nbr-records>
+ The maximum number of records inside a single signature block. By
+ default, there is no size limit, so the signature is only written on
+ file closure. Note that a signature request typically takes between
+ one and two seconds. So signing too frequently is probably not a good
+ idea.
+
+- **sig.keepRecordHashes** <on/**off**>
+ Controls if record hashes are written to the .ksisig file. This
+ enhances the ability to spot the location of a signature breach, but
+ costs considerable disk space (65 bytes for each log record for
+ SHA2-512 hashes, for example).
+
+- **sig.keepTreeHashes** <on/**off**>
+ Controls if tree (intermediate) hashes are written to the .ksisig
+ file. This enhances the ability to spot the location of a signature
+ breach, but costs considerable disk space (a bit more than the amount
+ sig.keepRecordHashes requires). Note that both Tree and Record hashes
+ can be kept inside the signature file.
+
+**See Also**
+
+
+**Caveats/Known Bugs:**
+
+- currently none known
+
+**Samples:**
+
+This writes a log file with its associated signature file. Default
+parameters are used.
+
+::
+
+ action(type="omfile" file="/var/log/somelog" sig.provider="ksi")
+
+In the next sample, we use the more secure SHA2-512 hash function, sign
+every 10,000 records, and keep both Tree and Record hashes.
+
+::
+
+ action(type="omfile" file="/var/log/somelog" sig.provider="ksi"
+ sig.hashfunction="SHA2-512" sig.block.sizelimit="10000"
+ sig.keepTreeHashes="on" sig.keepRecordHashes="on")
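+
+A hedged sketch of a sample that also sets the aggregator connection
+parameters described above (the endpoint and credentials are placeholders):
+
+::
+
+ action(type="omfile" file="/var/log/somelog" sig.provider="ksi"
+ sig.aggregator.uri="ksi+tcp://aggregator.example.net:3332"
+ sig.aggregator.user="exampleUser" sig.aggregator.key="exampleKey")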
diff --git a/source/configuration/modules/sigprov_ksi12.rst b/source/configuration/modules/sigprov_ksi12.rst
new file mode 100644
index 0000000..d5b6ab7
--- /dev/null
+++ b/source/configuration/modules/sigprov_ksi12.rst
@@ -0,0 +1,135 @@
+KSI Signature Provider (rsyslog-ksi-ls12)
+============================================================
+
+**Module Name: rsyslog-ksi-ls12**
+
+**Available Since:** 8.27
+
+**Author:** Guardtime & Adiscon
+
+Description
+###########
+
+The ``rsyslog-ksi-ls12`` module enables record level log signing with Guardtime KSI Blockchain. KSI signatures provide long-term log integrity and prove the time of log records cryptographically using independent verification.
+
+Main features of the ``rsyslog-ksi-ls12`` module are:
+
+* Automated online signing of file output log.
+* Efficient block-based signing with record-level verification.
+* Log records removal detection.
+
+For best results use the ``rsyslog-ksi-ls12`` module together with the Guardtime ``logksi`` tool, which comes in handy for:
+
+* Signing recovery.
+* Extension of KSI signatures inside the log signature file.
+* Verification of the log using log signatures.
+* Extraction of record-level signatures.
+* Integration of log signature files (necessary when signing in async mode).
+
+Getting Started
+###############
+
+To get started with log signing:
+
+- Sign up to the Guardtime tryout service to be able to connect to KSI blockchain:
+ `guardtime.com/technology/blockchain-developers <https://guardtime.com/technology/blockchain-developers>`_
+- Install the ``libksi`` library (v3.20 or later)
+ `(libksi install) <https://github.com/guardtime/libksi#installation>`_
+- Install the ``rsyslog-ksi-ls12`` module (same version as rsyslog) from Adiscon repository.
+- Install the accompanying ``logksi`` tool (recommended v1.5 or later)
+ `(logksi install) <https://github.com/guardtime/logksi#installation>`_
+
+The format of the output depends on signing mode enabled (synchronous (``sync``) or asynchronous (``async``)).
+
+- In ``sync`` mode, the log signature file is written directly into the file ``<logfile>.logsig``. This mode is blocking, as issuing KSI signatures one at a time will halt the actual writing of log lines into log files. This mode suits a system where signatures are issued rarely and the delay caused by the signing process is acceptable. The advantage compared to ``async`` mode is that the user does not need to integrate intermediate files to get the actual log signature.
+
+- In ``async`` mode, log signature intermediate files are written into the directory ``<logfile>.logsig.parts``. This mode is non-blocking, enabling high availability and concurrent signing of several blocks at the same time. The log signature is divided into two files, where one contains info about log records and blocks, and the other contains KSI signatures issued asynchronously. To create ``<logfile>.logsig`` from ``<logfile>.logsig.parts``, use ``logksi integrate <logfile>``. The advantage compared to ``sync`` mode is much better operational stability and speed.
+
+Currently the log signing is only supported by the file output module, thus the action type must be ``omfile``. To activate signing, add the following parameters to the action of interest in your rsyslog configuration file:
+
+Mandatory parameters (no default value defined):
+
+- **sig.provider** specifies the signature provider; in case of ``rsyslog-ksi-ls12`` package this is ``"ksi_ls12"``.
+- **sig.block.levelLimit** defines the maximum level of the root of the local aggregation tree per one block. The maximum number of log lines in one block is calculated as ``2^(levelLimit - 1)``; for example, with a level limit of 8 a block can hold at most 128 records.
+- **sig.aggregator.url** defines the endpoint of the KSI signing service in KSI Gateway. In ``async`` mode it is possible to specify up to 3 endpoints for high availability service, where user credentials are integrated into URL. Supported URI schemes are:
+
+ - *ksi+http://*
+ - *ksi+tcp://*
+
+ Examples:
+
+ - sig.aggregator.url="ksi+tcp://signingservice1.example.com"
+
+ sig.aggregator.user="rsmith"
+
+ sig.aggregator.key= "secret"
+
+ - sig.aggregator.url="ksi+tcp://rsmith:secret@signingservice1.example.com|ksi+tcp://jsmith:terces@signingservice2.example.com"
+
+- **sig.aggregator.user** specifies the login name for the KSI signing service. For high availability service, credentials are specified in URI.
+- **sig.aggregator.key** specifies the key for the login name. For high availability service, credentials are specified in URI.
+
+Optional parameters (if not defined, default value is used):
+
+- **sig.syncmode** defines the signing mode: ``"sync"`` (default) or ``"async"``.
+- **sig.hashFunction** defines the hash function to be used for hashing, default is ``"SHA2-256"``.
+ Other SHA-2 functions, as well as RIPEMD-160, are supported.
+- **sig.block.timeLimit** defines the maximum duration of one block in seconds.
+ Default value ``"0"`` indicates that no time limit is set.
+- **sig.block.signTimeout** specifies the time window (in seconds) within which
+ block signatures have to be issued, default is ``10``. For example, when
+ issuing 4 signatures per second with a sign timeout of 10s, it is possible to
+ handle 4 x 10 signature requests created at the same time. Any more than that
+ will close the last blocks with a signature failure, as their signature
+ requests were not sent out within 10 seconds.
+- **sig.aggregator.hmacAlg** defines the HMAC algorithm to be used in communication with the KSI Gateway.
+ This must be agreed on with the KSI service provider, default is ``"SHA2-256"``.
+- **sig.keepTreeHashes** turns on/off the storing of the hashes that were used as leaves
+ for building the Merkle tree, default is ``"off"``.
+- **sig.keepRecordHashes** turns on/off the storing of the hashes of the log records, default is ``"on"``.
+- **sig.confInterval** defines interval of periodic request for aggregator configuration in seconds, default is ``3600``.
+- **sig.randomSource** defines the source of randomness as a file, default is ``"/dev/urandom"``.
+- **sig.debugFile** enables the libksi log and redirects it into the specified file. Note that the logger level has to be specified (see ``sig.debugLevel``).
+- **sig.debugLevel** specifies the libksi log level. Note that the log file has to be specified (see ``sig.debugFile``).
+
+ - *0* None (default).
+ - *1* Error.
+ - *2* Warning.
+ - *3* Notice.
+ - *4* Info.
+ - *5* Debug.
+
+The log signature file, which stores the KSI signatures and information about the signed blocks, appears in the same directory as the log file itself.
+
+Sample
+######
+
+To sign the logs in ``/var/log/secure`` with KSI:
+::
+
+ # The authpriv file has restricted access and is signed with KSI
+ authpriv.* action(type="omfile" file="/var/log/secure"
+ sig.provider="ksi_ls12"
+ sig.syncmode="sync"
+ sig.hashFunction="SHA2-256"
+ sig.block.levelLimit="8"
+ sig.block.timeLimit="0"
+ sig.aggregator.url=
+ "http://tryout.guardtime.net:8080/gt-signingservice"
+ sig.aggregator.user="rsmith"
+ sig.aggregator.key="secret"
+ sig.aggregator.hmacAlg="SHA2-256"
+ sig.keepTreeHashes="off"
+ sig.keepRecordHashes="on")
+
+
+Note that all parameter values must be between quotation marks!
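+
+For comparison, a hedged sketch of an ``async`` mode action using the
+high-availability URI form described above (endpoints and credentials are
+placeholders only):
+::
+
+ authpriv.* action(type="omfile" file="/var/log/secure"
+ sig.provider="ksi_ls12"
+ sig.syncmode="async"
+ sig.block.levelLimit="8"
+ sig.aggregator.url="ksi+tcp://rsmith:secret@signingservice1.example.com|ksi+tcp://jsmith:terces@signingservice2.example.com")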
+
+See Also
+########
+
+To better understand the log signing mechanism and the module's possibilities it is advised to consult with:
+
+- `KSI Rsyslog Integration User Guide <https://docs.guardtime.net/ksi-rsyslog-guide/>`_
+- `KSI Developer Guide <https://docs.guardtime.net/ksi-dev-guide/>`_
+
+Access for both of these documents requires Guardtime tryout service credentials, available from `<https://guardtime.com/technology/blockchain-developers>`_
diff --git a/source/configuration/modules/workflow.rst b/source/configuration/modules/workflow.rst
new file mode 100644
index 0000000..9df5f74
--- /dev/null
+++ b/source/configuration/modules/workflow.rst
@@ -0,0 +1,30 @@
+Where are the modules integrated into the Message Flow?
+=======================================================
+
+Depending on their module type, modules may access and/or modify
+messages at various stages during rsyslog's processing. Note that only
+the "core type" (e.g. input, output) but not any type derived from it
+(message modification module) specifies when a module is called.
+
+The simplified workflow is as follows:
+
+.. figure:: module_workflow.png
+ :align: center
+ :alt: module_workflow
+
+As can be seen, messages are received by input modules, then passed to
+one or many parser modules, which generate the in-memory representation
+of the message and may also modify the message itself. The internal
+representation is passed to output modules, which may output a message
+and (with the interfaces introduced in v5) may also modify
+message object content.
+
+String generator modules are not included inside this picture, because
+they are not a required part of the workflow. If used, they operate "in
+front of" the output modules, because they are called during template
+generation.
+
+Note that the actual flow is much more complex and depends a lot on
+queue and filter settings. The graphic above is a high-level message
+flow diagram.
+