authorDaniel Baumann <daniel.baumann@progress-linux.org>2024-04-19 17:39:49 +0000
committerDaniel Baumann <daniel.baumann@progress-linux.org>2024-04-19 17:39:49 +0000
commita0aa2307322cd47bbf416810ac0292925e03be87 (patch)
tree37076262a026c4b48c8a0e84f44ff9187556ca35 /doc/userguide/capture-hardware
parentInitial commit. (diff)
Adding upstream version 1:7.0.3.upstream/1%7.0.3
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'doc/userguide/capture-hardware')
-rw-r--r--doc/userguide/capture-hardware/af-xdp.rst287
-rw-r--r--doc/userguide/capture-hardware/dpdk.rst148
-rw-r--r--doc/userguide/capture-hardware/ebpf-xdp.rst600
-rw-r--r--doc/userguide/capture-hardware/endace-dag.rst42
-rw-r--r--doc/userguide/capture-hardware/index.rst12
-rw-r--r--doc/userguide/capture-hardware/myricom.rst96
-rw-r--r--doc/userguide/capture-hardware/napatech.rst534
-rw-r--r--doc/userguide/capture-hardware/netmap.rst223
8 files changed, 1942 insertions, 0 deletions
diff --git a/doc/userguide/capture-hardware/af-xdp.rst b/doc/userguide/capture-hardware/af-xdp.rst
new file mode 100644
index 0000000..ebe8585
--- /dev/null
+++ b/doc/userguide/capture-hardware/af-xdp.rst
@@ -0,0 +1,287 @@
+AF_XDP
+======
+
+AF_XDP is a high speed capture framework for Linux, introduced in Linux v4.18
+and built on the eXpress Data Path (XDP). AF_XDP aims at improving capture
+performance by redirecting ingress frames to user-space memory rings, thus
+bypassing the network stack.
+
+Note that during ``af_xdp`` operation the selected interface cannot be used for
+regular network usage.
+
+Further reading:
+
+ - https://www.kernel.org/doc/html/latest/networking/af_xdp.html
+
+Compiling Suricata
+------------------
+
+Linux
+~~~~~
+
+libxdp and libbpf are required for this feature. When building from source, the
+development files will also be required.
+
+Example::
+
+ dnf -y install libxdp-devel libbpf-devel
+
+This feature is enabled automatically when the libraries above are installed; the
+user does not need to add any additional command line options.
+
+The command line option ``--disable-af-xdp`` can be used to disable this
+feature.
+
+Example::
+
+ ./configure --disable-af-xdp
+
+Starting Suricata
+-----------------
+
+IDS
+~~~
+
+Suricata can be started as follows to use af-xdp:
+
+::
+
+ af-xdp:
+ suricata --af-xdp=<interface>
+ suricata --af-xdp=igb0
+
+In the above example Suricata will start reading from the `igb0` network interface.
+
+AF_XDP Configuration
+--------------------
+
+Each of these settings can be configured under ``af-xdp`` within the "Configure
+common capture settings" section of the suricata.yaml configuration file.
+
+The number of threads created can be configured in the suricata.yaml configuration
+file. It is recommended to set the number of threads equal to the number of NIC
+queues/CPU cores.
+
+Another option is to select ``auto`` which will allow Suricata to configure the
+number of threads based on the number of RSS queues available on the NIC.
+
+With ``auto`` selected, Suricata spawns receive threads equal to the number of
+configured RSS queues on the interface.
+
+::
+
+ af-xdp:
+ threads: <number>
+ threads: auto
+ threads: 8
+
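+To check how many RSS queues (combined channels) an interface currently exposes,
+and therefore how many receive threads ``auto`` would create, ``ethtool`` can be
+used. This is a sketch; ``igb0`` is the example interface from above and the
+queue count of 8 is illustrative ::
+
+    # show current and maximum channel (RSS queue) counts
+    ethtool -l igb0
+
+    # optionally align the queue count with the intended number of threads
+    ethtool -L igb0 combined 8
+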
+Advanced setup
+---------------
+
+The af-xdp capture source will operate using the default configuration settings.
+However, these settings can be adjusted in the suricata.yaml configuration file.
+
+Available configuration options are:
+
+force-xdp-mode
+~~~~~~~~~~~~~~
+
+There are two operating modes employed when loading the XDP program, these are:
+
+- XDP_DRV: Mode chosen when the driver supports AF_XDP
+- XDP_SKB: Mode chosen when the driver does not support AF_XDP
+
+XDP_DRV mode is the preferred mode, used to ensure best performance.
+
+::
+
+ af-xdp:
+ force-xdp-mode: <value> where: value = <skb|drv|none>
+ force-xdp-mode: drv
+
+force-bind-mode
+~~~~~~~~~~~~~~~
+
+During binding the kernel will first attempt to use zero-copy (preferred). If
+zero-copy support is unavailable it will fall back to copy mode, copying all
+packets out to user space.
+
+::
+
+ af-xdp:
+ force-bind-mode: <value> where: value = <copy|zero|none>
+ force-bind-mode: zero
+
+For both options, the kernel will attempt the 'preferred' mode first and fall
+back upon failure. Therefore the default (none) leaves the choice to the
+kernel. By configuring either option the user forces that mode. Note that when
+forced, the bind will only attempt that mode and, upon failure, the bind will
+fail, i.e. there is no fallback.
+
+mem-unaligned
+~~~~~~~~~~~~~~~~
+
+AF_XDP can operate in two memory alignment modes, these are:
+
+- Aligned chunk mode
+- Unaligned chunk mode
+
+Aligned chunk mode is the default option which ensures alignment of the
+data within the UMEM.
+
+Unaligned chunk mode uses hugepages for the UMEM.
+Hugepages start at the size of 2MB but they can be as large as 1GB.
+Lower count of pages (memory chunks) allows faster lookup of page entries.
+The hugepages need to be allocated on the NUMA node where the NIC and CPU reside.
+Otherwise, if the hugepages are allocated only on NUMA node 0 and the NIC is
+connected to NUMA node 1, then the application will fail to start.
+Therefore, it is recommended to first find out to which NUMA node the NIC is
+connected to and only then allocate hugepages and set CPU cores affinity
+to the given NUMA node.
+
+Memory assigned per socket/thread is 16MB, so each worker thread requires at least
+16MB of free space. As stated above hugepages can be of various sizes, consult the
+OS to confirm with ``cat /proc/meminfo``.
+
+Example ::
+
+    8 worker threads * 16MB = 128MB
+    hugepage size = 2MB (2048 kB)
+    pages required = 128MB / 2MB = 64 pages
+
+See https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt for detailed
+description.
+
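+A minimal sketch of checking which NUMA node the NIC is attached to and then
+reserving 2MB hugepages on that node (``eth3`` and the counts are illustrative;
+adjust them to your interface and number of worker threads) ::
+
+    # NUMA node of the NIC (-1 means no NUMA information is exposed)
+    cat /sys/class/net/eth3/device/numa_node
+
+    # reserve 64 x 2MB hugepages on NUMA node 1
+    echo 64 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
+
+    # verify the reservation
+    cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages
+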
+To enable unaligned chunk mode:
+
+::
+
+ af-xdp:
+ mem-unaligned: <yes/no>
+ mem-unaligned: yes
+
+Linux v5.11 introduced the ``SO_PREFER_BUSY_POLL`` socket option for AF_XDP,
+which allows true polling of the socket queues. This feature was introduced to
+reduce context switching and improve CPU reaction time during traffic
+reception.
+
+It is enabled by default. The following options are used to configure this
+feature; see ``enable-busy-poll`` below to disable it.
+
+enable-busy-poll
+~~~~~~~~~~~~~~~~
+
+Enables or disables busy polling.
+
+::
+
+ af-xdp:
+ enable-busy-poll: <yes/no>
+ enable-busy-poll: yes
+
+busy-poll-time
+~~~~~~~~~~~~~~
+
+Sets the approximate time in microseconds to busy poll on a ``blocking receive``
+when there is no data.
+
+::
+
+ af-xdp:
+ busy-poll-time: <time>
+ busy-poll-time: 20
+
+busy-poll-budget
+~~~~~~~~~~~~~~~~
+
+Budget allowed for batching of ingress frames. Larger values mean more
+frames can be stored/read. It is recommended to test this for performance.
+
+::
+
+ af-xdp:
+ busy-poll-budget: <budget>
+ busy-poll-budget: 64
+
+Linux tunables
+~~~~~~~~~~~~~~~
+
+The ``SO_PREFER_BUSY_POLL`` option works in concert with the following two Linux
+knobs to ensure best capture performance. These are not socket options:
+
+- gro-flush-timeout
+- napi-defer-hard-irq
+
+The purpose of these two knobs is to defer interrupts and to allow the
+NAPI context to be scheduled from a watchdog timer instead.
+
+The ``gro-flush-timeout`` indicates the timeout period for the watchdog
+timer. When no traffic is received for ``gro-flush-timeout`` the timer will
+exit and softirq handling will resume.
+
+The ``napi-defer-hard-irq`` indicates the number of queue scan attempts
+before exiting to interrupt context. When enabled, the softirq NAPI context will
+exit early, allowing busy polling.
+
+::
+
+ af-xdp:
+ gro-flush-timeout: 2000000
+ napi-defer-hard-irq: 2
+
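+For reference, these two knobs correspond to per-interface attributes that can
+also be inspected or set manually through sysfs on recent kernels (a sketch;
+``eth3`` is illustrative) ::
+
+    cat /sys/class/net/eth3/gro_flush_timeout
+    cat /sys/class/net/eth3/napi_defer_hard_irqs
+
+    # manual equivalent of the configuration shown above
+    echo 2000000 > /sys/class/net/eth3/gro_flush_timeout
+    echo 2 > /sys/class/net/eth3/napi_defer_hard_irqs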
+
+Hardware setup
+---------------
+
+Intel NIC setup
+~~~~~~~~~~~~~~~
+
+Intel network cards don't support symmetric hashing but it is possible to emulate
+it by using a specific hashing function.
+
+Follow these instructions closely for desired result::
+
+ ifconfig eth3 down
+
+Enable symmetric hashing ::
+
+ ifconfig eth3 down
+ ethtool -L eth3 combined 16 # if you have at least 16 cores
+ ethtool -K eth3 rxhash on
+ ethtool -K eth3 ntuple on
+ ifconfig eth3 up
+ ./set_irq_affinity 0-15 eth3
+ ethtool -X eth3 hkey 6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A equal 16
+ ethtool -x eth3
+ ethtool -n eth3
+
+In the above setup you are free to use any recent ``set_irq_affinity`` script. It is included in the Intel x520/710 NIC driver source downloads.
+
+**NOTE:**
+We use a special low entropy key for the symmetric hashing. `More info about the research for symmetric hashing set up <http://www.ndsl.kaist.edu/~kyoungsoo/papers/TR-symRSS.pdf>`_
+
+Disable any NIC offloading
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Suricata will disable NIC offloading based on the configuration parameter ``disable-offloading``, which is enabled by default.
+See the ``capture`` section of the yaml file.
+
+::
+
+ capture:
+ # disable NIC offloading. It's restored when Suricata exits.
+ # Enabled by default.
+ #disable-offloading: false
+
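+If you prefer to disable offloads manually, or want to verify what was changed,
+the usual ``ethtool`` loop from the eBPF and XDP chapter can be used (``eth3`` is
+illustrative) ::
+
+    for i in rx tx tso ufo gso gro lro tx-nocache-copy sg txvlan rxvlan; do
+        /sbin/ethtool -K eth3 $i off 2>&1 > /dev/null;
+    done
+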
+Balance as much as you can
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Try to use the network card's flow balancing as much as possible ::
+
+ for proto in tcp4 udp4 ah4 esp4 sctp4 tcp6 udp6 ah6 esp6 sctp6; do
+ /sbin/ethtool -N eth3 rx-flow-hash $proto sd
+ done
+
+This command triggers load balancing using only source and destination IPs. This may not be optimal
+in terms of load balancing fairness but this ensures all packets of a flow will reach the same thread
+even in the case of IP fragmentation (where source and destination port will not be available for
+some fragmented packets).
diff --git a/doc/userguide/capture-hardware/dpdk.rst b/doc/userguide/capture-hardware/dpdk.rst
new file mode 100644
index 0000000..1b9ecae
--- /dev/null
+++ b/doc/userguide/capture-hardware/dpdk.rst
@@ -0,0 +1,148 @@
+.. _dpdk:
+
+DPDK
+====
+
+Introduction
+-------------
+
+The Data Plane Development Kit (DPDK) is a set of libraries and drivers that
+enhance and speed up packet processing in the data plane. Its primary use is to
+provide faster packet processing by bypassing the kernel network stack, which
+can provide significant performance improvements. For detailed instructions on
+how to setup DPDK, please refer to :doc:`../configuration/suricata-yaml` to
+learn more about the basic setup for DPDK.
+The following sections contain examples of how to set up DPDK and Suricata for
+more obscure use-cases.
+
+Hugepage analysis
+-----------------
+
+Suricata can analyse utilized hugepages on the system. This can be particularly
+beneficial when there's a potential overallocation of hugepages.
+The hugepage analysis is designed to examine the hugepages in use and
+provide recommendations on an adequate number of hugepages. This then ensures
+Suricata operates optimally while leaving sufficient memory for other
+applications on the system. The analysis works by comparing snapshots of the
+hugepages before and after Suricata is initialized. After the initialization,
+no more hugepages are allocated by Suricata.
+The hugepage analysis can be seen at the Perf log level and is printed out
+during Suricata startup. It is only printed when Suricata detects some
+discrepancies in the system related to hugepage allocation.
+
+It's recommended to perform this analysis from a "clean" state -
+that is a state when all your hugepages are free. It is especially recommended
+when no other hugepage-dependent applications are running on your system.
+This can be checked in one of two ways:
+
+.. code-block::
+
+ # global check
+ cat /proc/meminfo
+
+ HugePages_Total: 1024
+ HugePages_Free: 1024
+
+ # per-numa check depends on NUMA node ID, hugepage size,
+ # and nr_hugepages/free_hugepages - e.g.:
+ cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/free_hugepages
+
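+If the free count is lower than what Suricata will need, hugepages can be
+reserved at runtime, either globally or per NUMA node. This is a sketch; the
+page counts are illustrative:
+
+.. code-block:: bash
+
+    # reserve 1024 x 2MB hugepages system-wide
+    echo 1024 | sudo tee /proc/sys/vm/nr_hugepages
+
+    # or reserve them on a specific NUMA node
+    echo 512 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
+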
+After the termination of Suricata and other hugepage-related applications,
+if the count of free hugepages is not equal to the total number of hugepages,
+it indicates some hugepages were not freed completely.
+This can be fixed by removing DPDK-related files from the hugepage-mounted
+directory (filesystem).
+It's important to exercise caution while removing hugepages, especially when
+other hugepage-dependent applications are in operation, as this action will
+disrupt their memory functionality.
+Removing the DPDK files from the hugepage directory can often be done as:
+
+.. code-block:: bash
+
+ sudo rm -rf /dev/hugepages/rtemap_*
+
+ # To check where hugepages are mounted:
+ dpdk-hugepages.py -s
+ # or
+ mount | grep huge
+
+Bond interface
+--------------
+
+The Link Bonding Poll Mode Driver (Bond PMD) is a software
+mechanism provided by the Data Plane Development Kit (DPDK) for aggregating
+multiple physical network interfaces into a single logical interface.
+Bonding can, for example, be used to:
+
+* deliver bidirectional flows of tapped interfaces to the same worker,
+* establish redundancy by monitoring multiple links,
+* improve network performance by load-balancing traffic across multiple links.
+
+Bond PMD is essentially a virtual driver that manages multiple
+physical network interfaces. It can operate in multiple modes as described
+in the `DPDK docs
+<https://doc.dpdk.org/guides/prog_guide/link_bonding_poll_mode_drv_lib.html>`_.
+The individual bonding modes can accommodate different user needs.
+DPDK Bond PMD requires that the aggregated interfaces be of the same device
+type - e.g. both physical ports run on the mlx5 PMD.
+Bond PMD supports multiple queues and therefore can work in workers runmode.
+It should have no effect on traffic distribution of the individual ports:
+flows should be distributed by the physical ports according to the RSS
+configuration in the same way as if they were configured independently.
+
+As an example of Bond PMD, we can set up Suricata to monitor 2 interfaces
+that receive TAP traffic from optical interfaces. This means that Suricata
+receives one direction of the communication on one interface and the other
+direction on the other interface.
+
+::
+
+  ...
+  dpdk:
+    eal-params:
+      proc-type: primary
+      vdev: 'net_bonding0,mode=0,slave=0000:04:00.0,slave=0000:04:00.1'
+
+    # DPDK capture support
+    # RX queues (and TX queues in IPS mode) are assigned to cores in 1:1 ratio
+    interfaces:
+      - interface: net_bonding0 # name of the bond interface (not a PCIe address)
+        # Threading: possible values are either "auto" or number of threads
+        # - auto takes all cores
+        # in IPS mode it is required to specify the number of cores and the
+        # numbers on both interfaces must match
+        threads: 4
+  ...
+
+In the DPDK part of suricata.yaml we have added a new parameter to the
+eal-params section for virtual devices - `vdev`.
+DPDK Environment Abstraction Layer (EAL) can initialize some virtual devices
+during the initialization of EAL.
+In this case, EAL creates a new device of type `net_bonding`. The suffix of
+`net_bonding` is the name of the interface instance (in this case zero).
+Extra arguments are passed after the device name, such as the bonding mode
+(`mode=0`). This is the round-robin mode as is described in the DPDK
+documentation of Bond PMD.
+Members (slaves) of the `net_bonding0` interface are appended after
+the bonding mode parameter.
+
+When the device is specified within EAL parameters, it can be used within
+Suricata `interfaces` list. Note that the list doesn't contain PCIe addresses
+of the physical ports but instead the `net_bonding0` interface.
+The threading section is also adjusted according to the items in the interfaces
+list by enabling set-cpu-affinity and listing the CPUs that should be used in
+the management and worker CPU sets.
+
+::
+
+  ...
+  threading:
+    set-cpu-affinity: yes
+    cpu-affinity:
+      - management-cpu-set:
+          cpu: [ 0 ]  # include only these CPUs in affinity settings
+      - receive-cpu-set:
+          cpu: [ 0 ]  # include only these CPUs in affinity settings
+      - worker-cpu-set:
+          cpu: [ 2,4,6,8 ]
+  ...
diff --git a/doc/userguide/capture-hardware/ebpf-xdp.rst b/doc/userguide/capture-hardware/ebpf-xdp.rst
new file mode 100644
index 0000000..1160387
--- /dev/null
+++ b/doc/userguide/capture-hardware/ebpf-xdp.rst
@@ -0,0 +1,600 @@
+.. _ebpf-xdp:
+
+eBPF and XDP
+============
+
+Introduction
+------------
+
+eBPF stands for extended BPF. This is an extended version of Berkeley Packet Filter available in recent
+Linux kernel versions.
+
+It provides more advanced features, with eBPF programs developed in C and the capability to use structured data shared
+between kernel and userspace.
+
+eBPF is used for three things in Suricata:
+
+- eBPF filter: any BPF-like filter can be developed. An example filter accepting only packets for some VLANs is provided. A bypass implementation is also provided.
+- eBPF load balancing: provide programmable load balancing. Simple ippair load balancing is provided.
+- XDP programs: Suricata can load XDP programs. A bypass program is provided.
+
+Bypass can be implemented in eBPF and XDP. The advantage of XDP is that the packets are dropped at the earliest stage
+possible, so performance is better. But bypassed packets don't reach the network stack, so you can't use this on regular
+traffic but only on duplicated/sniffed traffic.
+
+The bypass implementation relies on one of the most powerful concepts of eBPF: maps. A map is a data structure
+shared between user space and kernel space/hardware. It allows user space and kernel space to interact and pass
+information. Maps are often implemented as arrays or hash tables that can contain arbitrary key, value pairs.
+
+XDP
+~~~
+
+XDP provides another Linux native way of optimising Suricata's performance on sniffing high speed networks:
+
+ XDP or eXpress Data Path provides a high performance, programmable network data path in the Linux kernel as part of the IO Visor Project. XDP provides bare metal packet processing at the lowest point in the software stack which makes it ideal for speed without compromising programmability. Furthermore, new functions can be implemented dynamically with the integrated fast path without kernel modification.
+
+More info about XDP:
+
+- `IOVisor's XDP page <https://www.iovisor.org/technology/xdp>`__
+- `Cilium's BPF and XDP reference guide <https://docs.cilium.io/en/stable/bpf/>`__
+
+
+Requirements
+------------
+
+You will need a kernel that supports XDP and, for the most performance improvement, a network
+card that supports XDP in the driver.
+
+Suricata XDP code has been tested with 4.13.10 but 4.15 or later is necessary to use all
+features like the CPU redirect map.
+
+If you are using an Intel network card, you will need to stay with the in-tree kernel NIC drivers.
+The out-of-tree drivers do not contain the XDP support.
+
+Having a network card with support for RSS symmetric hashing is a plus; otherwise you will have to
+use the XDP CPU redirect map feature.
+
+Prerequisites
+-------------
+
+This guide has been confirmed on Debian/Ubuntu "LTS" Linux.
+
+Disable irqbalance
+~~~~~~~~~~~~~~~~~~
+
+``irqbalance`` may cause issues in most setups described here, so it is recommended
+to deactivate it ::
+
+ systemctl stop irqbalance
+ systemctl disable irqbalance
+
+Kernel
+~~~~~~
+
+You need to run a kernel 4.13 or newer.
+
+Clang and dependencies
+~~~~~~~~~~~~~~~~~~~~~~
+
+Make sure you have ``clang`` (>=3.9) installed on the system ::
+
+ sudo apt install clang
+
+Some i386 headers will also be needed as eBPF is not x86_64 and some included headers
+are architecture specific ::
+
+ sudo apt install libc6-dev-i386 --no-install-recommends
+
+libbpf
+~~~~~~
+
+Suricata uses libbpf to interact with eBPF and XDP ::
+
+ git clone https://github.com/libbpf/libbpf.git
+
+Now, you can build and install the library ::
+
+ cd libbpf/src/
+ make && sudo make install
+
+ sudo make install_headers
+ sudo ldconfig
+
+In some cases your system will not find the libbpf library that is installed under
+``/usr/lib64`` so you may need to modify your ldconfig configuration.
+
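+A sketch of one way to do that, assuming the library landed in ``/usr/lib64``
+(the configuration file name is arbitrary) ::
+
+    echo "/usr/lib64" | sudo tee /etc/ld.so.conf.d/libbpf.conf
+    sudo ldconfig
+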
+Compile and install Suricata
+----------------------------
+
+To get Suricata source, you can use the usual ::
+
+ git clone https://github.com/OISF/suricata.git
+ cd suricata && git clone https://github.com/OISF/libhtp.git -b 0.5.x
+
+ ./autogen.sh
+
+Then you need to add the eBPF flags to configure and specify the Clang
+compiler for building all C sources, including the eBPF programs ::
+
+ CC=clang ./configure --prefix=/usr/ --sysconfdir=/etc/ --localstatedir=/var/ \
+ --enable-ebpf --enable-ebpf-build
+
+ make clean && make
+ sudo make install-full
+ sudo ldconfig
+ sudo mkdir /usr/libexec/suricata/ebpf/
+
+The ``clang`` compiler is needed if you want to build eBPF files as the build
+is done via a specific eBPF backend available only in llvm/clang suite. If you
+don't want to use Clang for building Suricata itself, you can still specify it
+separately, using the ``--with-clang`` parameter ::
+
+ ./configure --prefix=/usr/ --sysconfdir=/etc/ --localstatedir=/var/ \
+ --enable-ebpf --enable-ebpf-build --with-clang=/usr/bin/clang
+
+Setup bypass
+------------
+
+If you plan to use eBPF or XDP for a kernel/hardware level bypass, you need to enable
+some of the following features:
+
+First, enable `bypass` in the `stream` section in ``suricata.yaml`` ::
+
+ stream:
+ bypass: true
+
+This will bypass flows as soon as the stream depth is reached.
+
+If you want, you can also bypass encrypted flows by setting `encryption-handling` to `bypass`
+in the app-layer tls section ::
+
+  app-layer:
+    protocols:
+      tls:
+        enabled: yes
+        detection-ports:
+          dp: 443
+
+        encryption-handling: bypass
+
+Another solution is to use a set of signatures using the ``bypass`` keyword to obtain
+a selective bypass. Suricata traffic ID defines flowbits that can be used in other signatures.
+For instance one could use ::
+
+ alert any any -> any any (msg:"bypass video"; flowbits:isset,traffic/label/video; noalert; bypass; sid:1000000; rev:1;)
+ alert any any -> any any (msg:"bypass Skype"; flowbits:isset,traffic/id/skype; noalert; bypass; sid:1000001; rev:1;)
+
+Setup eBPF filter
+-----------------
+
+The file `ebpf/vlan_filter.c` contains a list of VLAN ids in a switch statement
+that you need to edit to get something adapted to your network. Another
+filter dropping packets from or to a set of IPv4 addresses is also available in
+`ebpf/filter.c`. See :ref:`ebpf-pinned-maps` for more information.
+
+Suricata can load as eBPF filter any eBPF code exposing a ``filter`` section.
+
+Once modifications and build via ``make`` are complete, you can copy the resulting
+eBPF filter as needed ::
+
+ cp ebpf/vlan_filter.bpf /usr/libexec/suricata/ebpf/
+
+Then setup the `ebpf-filter-file` variable in af-packet section in ``suricata.yaml`` ::
+
+ - interface: eth3
+ threads: 16
+ cluster-id: 97
+ cluster-type: cluster_flow # choose any type suitable
+ defrag: yes
+ # eBPF file containing a 'filter' function that will be inserted into the
+ # kernel and used as load balancing function
+ ebpf-filter-file: /usr/libexec/suricata/ebpf/vlan_filter.bpf
+ use-mmap: yes
+ ring-size: 200000
+
+You can then run Suricata normally ::
+
+ /usr/bin/suricata --pidfile /var/run/suricata.pid --af-packet=eth3 -vvv
+
+Setup eBPF bypass
+-----------------
+
+You can also use eBPF bypass. To do that load the `bypass_filter.bpf` file and
+update af-packet configuration in ``suricata.yaml`` to set bypass to `yes` ::
+
+ - interface: eth3
+ threads: 16
+ cluster-id: 97
+ cluster-type: cluster_qm # symmetric RSS hashing is mandatory to use this mode
+ # eBPF file containing a 'filter' function that will be inserted into the
+ # kernel and used as packet filter function
+ ebpf-filter-file: /usr/libexec/suricata/ebpf/bypass_filter.bpf
+ bypass: yes
+ use-mmap: yes
+ ring-size: 200000
+
+Constraints on eBPF code to make it bypass compliant are stronger than for regular filters. The
+filter must expose `flow_table_v4` and `flow_table_v6` per-CPU array maps with definitions similar
+to the ones available in `bypass_filter.c`. These two maps will be accessed and
+maintained by Suricata to handle the lists of flows to bypass.
+
+If you are not using VLAN tracking (``vlan.use-for-tracking`` set to `false` in suricata.yaml) then you also have to set
+the ``VLAN_TRACKING`` define to `0` in ``bypass_filter.c``.
+
+Setup eBPF load balancing
+-------------------------
+
+eBPF load balancing allows load balancing the traffic over the listening sockets
+with any logic implemented in the eBPF filter. The value returned by the function
+tagged with the ``loadbalancer`` section is used with a modulo on the CPU count to determine to
+which socket the packet has to be sent.
+
+An implementation of a simple symmetric IP pair hashing function is provided in the ``lb.bpf``
+file.
+
+Copy the resulting eBPF filter as needed ::
+
+ cp ebpf/lb.bpf /usr/libexec/suricata/ebpf/
+
+Then use ``cluster_ebpf`` as load balancing method in the interface section of af-packet
+and point the ``ebpf-lb-file`` variable to the ``lb.bpf`` file ::
+
+ - interface: eth3
+ threads: 16
+ cluster-id: 97
+ cluster-type: cluster_ebpf
+ defrag: yes
+ # eBPF file containing a 'loadbalancer' function that will be inserted into the
+ # kernel and used as load balancing function
+ ebpf-lb-file: /usr/libexec/suricata/ebpf/lb.bpf
+ use-mmap: yes
+ ring-size: 200000
+
+Setup XDP bypass
+----------------
+
+XDP bypass allows Suricata to tell the kernel that packets for some
+flows have to be dropped via the XDP mechanism. This is an early
+drop that occurs before the datagram reaches the Linux kernel
+network stack.
+
+Linux 4.15 or newer is recommended to use that feature. You can use it
+on older kernels if you set ``BUILD_CPUMAP`` to `0` in ``ebpf/xdp_filter.c``.
+
+Copy the resulting XDP filter as needed::
+
+ cp ebpf/xdp_filter.bpf /usr/libexec/suricata/ebpf/
+
+Setup af-packet section/interface in ``suricata.yaml``.
+
+We will use ``cluster_qm`` as we have symmetric hashing on the NIC, ``xdp-mode: driver`` and we will
+also use the ``/usr/libexec/suricata/ebpf/xdp_filter.bpf`` (in our example TCP offloading/bypass) ::
+
+ - interface: eth3
+ threads: 16
+ cluster-id: 97
+ cluster-type: cluster_qm # symmetric hashing is a must!
+ defrag: yes
+ # Xdp mode, "soft" for skb based version, "driver" for network card based
+ # and "hw" for card supporting eBPF.
+ xdp-mode: driver
+ xdp-filter-file: /usr/libexec/suricata/ebpf/xdp_filter.bpf
+ # if the ebpf filter implements a bypass function, you can set 'bypass' to
+ # yes and benefit from this feature
+ bypass: yes
+ use-mmap: yes
+ ring-size: 200000
+ # Uncomment the following if you are using hardware XDP with
+ # a card like Netronome (default value is yes)
+ # use-percpu-hash: no
+
+
+XDP bypass is compatible with AF_PACKET IPS mode. Packets from bypassed flows will be sent directly
+from one card to the second card without going through the kernel network stack.
+
+If you are using hardware XDP offload you may have to set ``use-percpu-hash`` to false and
+build and install the XDP filter file after setting ``USE_PERCPU_HASH`` to 0.
+
+In the XDP filter file, you can set ``ENCRYPTED_TLS_BYPASS`` to 1 if you want to bypass
+the encrypted TLS 1.2 packets in the eBPF code. Be aware that this will mean that Suricata will
+be blind on packets on port 443 with the correct pattern.
+
+If you are not using VLAN tracking (``vlan.use-for-tracking`` set to false in suricata.yaml) then you also have to set
+the VLAN_TRACKING define to 0 in ``xdp_filter.c``.
+
+Intel NIC setup
+~~~~~~~~~~~~~~~
+
+Intel network cards don't support symmetric hashing but it is possible to emulate
+it by using a specific hashing function.
+
+Follow these instructions closely for desired result::
+
+ ifconfig eth3 down
+
+Use in tree kernel drivers: XDP support is not available in Intel drivers available on Intel website.
+
+Enable symmetric hashing ::
+
+ ifconfig eth3 down
+ ethtool -L eth3 combined 16 # if you have at least 16 cores
+ ethtool -K eth3 rxhash on
+ ethtool -K eth3 ntuple on
+ ifconfig eth3 up
+ ./set_irq_affinity 0-15 eth3
+ ethtool -X eth3 hkey 6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A equal 16
+ ethtool -x eth3
+ ethtool -n eth3
+
+In the above setup you are free to use any recent ``set_irq_affinity`` script. It is included in the Intel x520/710 NIC driver source downloads.
+
+**NOTE:**
+We use a special low entropy key for the symmetric hashing. `More info about the research for symmetric hashing set up <http://www.ndsl.kaist.edu/~kyoungsoo/papers/TR-symRSS.pdf>`_
+
+Disable any NIC offloading
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Run the following command to disable offloading ::
+
+ for i in rx tx tso ufo gso gro lro tx-nocache-copy sg txvlan rxvlan; do
+ /sbin/ethtool -K eth3 $i off 2>&1 > /dev/null;
+ done
+
+Balance as much as you can
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Try to use the network card's flow balancing as much as possible ::
+
+ for proto in tcp4 udp4 ah4 esp4 sctp4 tcp6 udp6 ah6 esp6 sctp6; do
+ /sbin/ethtool -N eth3 rx-flow-hash $proto sd
+ done
+
+This command triggers load balancing using only source and destination IPs. This may not be optimal
+in terms of load balancing fairness but this ensures all packets of a flow will reach the same thread
+even in the case of IP fragmentation (where source and destination port will not be available
+for some fragmented packets).
+
+The XDP CPU redirect case
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If your hardware is not able to do symmetric load balancing but supports XDP in driver mode, you
+can then use the CPU redirect map support available in the `xdp_filter.bpf` and `xdp_lb.bpf` files. In
+this mode, the load balancing will be done by the XDP filter and each CPU will handle the whole packet
+treatment including the creation of the skb structure in the kernel.
+
+You will need Linux 4.15 or newer to use that feature.
+
+To do so set the `xdp-cpu-redirect` variable in af-packet interface configuration to a set of CPUs.
+Then use the `cluster_cpu` as load balancing function. You will also need to set the affinity
+to be certain that CPU cores that have the skb assigned are used by Suricata.
+
+Also to avoid out of order packets, you need to set the RSS queue number to 1. So if our interface
+is `eth3` ::
+
+ /sbin/ethtool -L eth3 combined 1
+
+In case your system has more than 64 cores, you need to set `CPUMAP_MAX_CPUS` to a value greater
+than this number in `xdp_lb.c` and `xdp_filter.c`.
+
+A sample configuration for pure XDP load balancing could look like ::
+
+ - interface: eth3
+ threads: 16
+ cluster-id: 97
+ cluster-type: cluster_cpu
+ xdp-mode: driver
+ xdp-filter-file: /usr/libexec/suricata/ebpf/xdp_lb.bpf
+ xdp-cpu-redirect: ["1-17"] # or ["all"] to load balance on all CPUs
+ use-mmap: yes
+ ring-size: 200000
+
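+A matching threading section could pin the worker threads to the redirected CPUs
+(a sketch following the ``xdp-cpu-redirect: ["1-17"]`` example above; adjust the
+CPU lists to your system) ::
+
+  threading:
+    set-cpu-affinity: yes
+    cpu-affinity:
+      - management-cpu-set:
+          cpu: [ 0 ]
+      - worker-cpu-set:
+          cpu: [ "1-17" ]
+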
+It is possible to use `xdp_monitor` to get information about the behavior of the CPU redirect. This
+program is available in the Linux source tree under the `samples/bpf` directory and will be built by the
+make command. Sample output is the following ::
+
+ sudo ./xdp_monitor --stats
+ XDP-event CPU:to pps drop-pps extra-info
+ XDP_REDIRECT 11 2,880,212 0 Success
+ XDP_REDIRECT total 2,880,212 0 Success
+ XDP_REDIRECT total 0 0 Error
+ cpumap-enqueue 11:0 575,954 0 5.27 bulk-average
+ cpumap-enqueue sum:0 575,954 0 5.27 bulk-average
+ cpumap-kthread 0 575,990 0 56,409 sched
+ cpumap-kthread 1 576,090 0 54,897 sched
+
+Start Suricata with XDP
+~~~~~~~~~~~~~~~~~~~~~~~
+
+You can now start Suricata with XDP bypass activated ::
+
+ /usr/bin/suricata -c /etc/suricata/xdp-suricata.yaml --pidfile /var/run/suricata.pid --af-packet=eth3 -vvv
+
+Confirm you have the XDP filter engaged in the output (example)::
+
+ ...
+ ...
+ (runmode-af-packet.c:220) <Config> (ParseAFPConfig) -- Enabling locked memory for mmap on iface eth3
+ (runmode-af-packet.c:231) <Config> (ParseAFPConfig) -- Enabling tpacket v3 capture on iface eth3
+ (runmode-af-packet.c:326) <Config> (ParseAFPConfig) -- Using queue based cluster mode for AF_PACKET (iface eth3)
+ (runmode-af-packet.c:424) <Info> (ParseAFPConfig) -- af-packet will use '/usr/libexec/suricata/ebpf/xdp_filter.bpf' as XDP filter file
+ (runmode-af-packet.c:429) <Config> (ParseAFPConfig) -- Using bypass kernel functionality for AF_PACKET (iface eth3)
+ (runmode-af-packet.c:609) <Config> (ParseAFPConfig) -- eth3: enabling zero copy mode by using data release call
+ (util-runmodes.c:296) <Info> (RunModeSetLiveCaptureWorkersForDevice) -- Going to use 8 thread(s)
+ ...
+ ...
+
+.. _ebpf-pinned-maps:
+
+Pinned maps usage
+-----------------
+
+Pinned maps stay attached to the system if the creating process disappears and
+they can also be accessed by external tools. In Suricata bypass case, this can be
+used to keep bypassed flow tables active, so Suricata is not hit by previously bypassed flows when
+restarting. In the socket filter case, this can be used to maintain a map from tools outside
+of Suricata.
+
+To use pinned maps, you first have to mount the `bpf` pseudo filesystem ::
+
+ sudo mount -t bpf none /sys/fs/bpf
+
+You can also add to your `/etc/fstab` ::
+
+ bpffs /sys/fs/bpf bpf defaults 0 0
+
+and run `sudo mount -a`.
+
+Pinned maps will be accessible as file from the `/sys/fs/bpf` directory. Suricata
+will pin them under the name `suricata-$IFACE_NAME-$MAP_NAME`.
+
+To activate pinned maps for an interface, set `pinned-maps` to `true` in the `af-packet`
+configuration of this interface ::
+
+ - interface: eth3
+ pinned-maps: true
+
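+Once Suricata is running with `pinned-maps: true`, the pinned maps can be listed
+to confirm they were created. The exact names depend on the interface and on the
+maps defined in the loaded eBPF/XDP program, for example ::
+
+    ls /sys/fs/bpf/
+    # e.g. suricata-eth3-flow_table_v4  suricata-eth3-flow_table_v6
+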
+XDP and pinned-maps
+-------------------
+
+This option can be used to expose the maps of a socket filter to other processes.
+This allows, for example, the external handling of an accept list or block list of
+IP addresses. See `bpfctrl <https://github.com/StamusNetworks/bpfctrl/>`_ for an example
+of external list handling.
+
+In the case of XDP, the eBPF filter is attached to the interface so if you
+activate `pinned-maps` the eBPF will remain attached to the interface and
+the maps will remain accessible upon Suricata start.
+If XDP bypass is activated, Suricata will try at start to open the pinned maps
+`flow_v4_table` and `flow_v6_table`. If they are present, this means the XDP filter
+is still there and Suricata will just use them instead of attaching the XDP file to
+the interface.
+
+So if you want to reload the XDP filter, you need to remove the files from `/sys/fs/bpf/`
+before starting Suricata.
+
+If you are not using bypass, this means that the maps in use are managed from outside
+Suricata. As their names are not known by Suricata, you need to specify the name of a map to look
+for, which will be used to check for the presence of the XDP filter ::
+
+ - interface: eth3
+ pinned-maps: true
+ pinned-maps-name: ipv4_drop
+ xdp-filter-file: /usr/libexec/suricata/ebpf/xdp_filter.bpf
+
+If XDP bypass is used in IPS mode, stopping Suricata will trigger an interruption of the traffic.
+To fix that, the provided XDP filter `xdp_filter.bpf` contains a map that will trigger
+a global bypass if set to 1. You need to use `pinned-maps` to benefit from this feature.
+
+To use it you need to set `#define USE_GLOBAL_BYPASS 1` (instead of 0) in the `xdp_filter.c` file and rebuild
+the eBPF code and install the eBPF file in the correct place. If you write `1` as key `0` then the XDP
+filter will switch to global bypass mode. Set key `0` to value `0` to send traffic to Suricata.
+
+The switch must be activated on all sniffing interfaces. For an interface named `eth0` the global
+switch map will be `/sys/fs/bpf/suricata-eth0-global_bypass`.
+
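+As an illustration, such a pinned map can be updated from the command line with
+``bpftool``. This is a sketch only: it assumes a 4-byte key and a 4-byte value in
+little-endian byte order, which must match the map definition in `xdp_filter.c` ::
+
+    # switch eth0 to global bypass (key 0 -> value 1)
+    bpftool map update pinned /sys/fs/bpf/suricata-eth0-global_bypass \
+        key 0 0 0 0 value 1 0 0 0
+
+    # hand traffic back to Suricata (key 0 -> value 0)
+    bpftool map update pinned /sys/fs/bpf/suricata-eth0-global_bypass \
+        key 0 0 0 0 value 0 0 0 0
+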
+Pinned maps and eBPF filter
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Pinned maps can also be used with regular eBPF filters. The main difference is that the map will not
+persist after Suricata is stopped because it is attached to a socket and not an interface which
+is persistent.
+
+The eBPF filter `filter.bpf` uses a `ipv4_drop` map that contains the set of IPv4 addresses to drop.
+If `pinned-maps` is set to `true` in the interface configuration then the map will be pinned
+under `/sys/fs/bpf/suricata-eth3-ipv4_drop`.
+
+You can then use a tool like `bpfctrl` to manage the IPv4 addresses in the map.
+
+Hardware bypass with Netronome
+------------------------------
+
+Netronome cards support hardware bypass. In this case the eBPF code is running in the card
+itself. This introduces some architectural differences compared to driver mode and the configuration
+and eBPF filter need to be updated.
+
+On the eBPF side, as of Linux 4.19, CPU maps and interface redirect are not supported and these
+features need to be disabled. By architecture, the per-CPU hash should not be used and has to be disabled.
+To achieve this, edit the beginning of `ebpf/xdp_filter.c` and do ::
+
+ #define BUILD_CPUMAP 0
+ /* Increase CPUMAP_MAX_CPUS if ever you have more than 64 CPUs */
+ #define CPUMAP_MAX_CPUS 64
+
+ #define USE_PERCPU_HASH 0
+ #define GOT_TX_PEER 0
+
+Then build the bpf file with `make` and install it in the expected place.
+
+The Suricata configuration is rather simple as you need to activate
+hardware mode and the `use-percpu-hash` option in the `af-packet` configuration
+of the interface ::
+
+ xdp-mode: hw
+ use-percpu-hash: no
+
+The load balancing will be done on IP pairs inside the eBPF code, so
+using `cluster_qm` as cluster type is a good idea ::
+
+ cluster-type: cluster_qm
+
+As of Linux 4.19, the number of threads must be a power of 2. So set
+`threads` variable of the `af-packet` interface to a power
+of 2 and in the eBPF filter set the following variable accordingly ::
+
+ #define RSS_QUEUE_NUMBERS 32
+
+Getting live info about bypass
+------------------------------
+
+You can get information about bypass via the stats event and through the unix socket.
+``iface-stat`` will return the number of bypassed packets (adding packets for a flow when it times out) ::
+
+ suricatasc -c "iface-stat enp94s0np0" | jq
+ {
+ "message": {
+ "pkts": 56529854964,
+ "drop": 932328611,
+ "bypassed": 1569467248,
+ "invalid-checksums": 0
+ },
+ "return": "OK"
+ }
+
+``iface-bypassed-stats`` command will return the number of elements in IPv4 and IPv6 flow tables for
+each interface ::
+
+ # suricatasc
+ >>> iface-bypassed-stats
+ Success:
+ {
+ "enp94s0np0": {
+ "ipv4_fail": 0,
+ "ipv4_maps_count": 2303,
+ "ipv4_success": 4232,
+ "ipv6_fail": 0,
+ "ipv6_maps_count": 13131,
+ "ipv6_success": 13500
+
+ }
+ }
+
+The stats entry also contains a `stats.flow_bypassed` object that has local and capture
+bytes and packets counters as well as a bypassed and closed flow counter ::
+
+ {
+ "local_pkts": 0,
+ "local_bytes": 0,
+ "local_capture_pkts": 20,
+ "local_capture_bytes": 25000,
+ "closed": 84,
+ "pkts": 4799,
+ "bytes": 2975133
+ }
+
+`local_pkts` and `local_bytes` are for flows bypassed by Suricata itself. This can be because
+local bypass is used or because the capture method can not bypass more flows.
+`pkts` and `bytes` are counters coming from the capture method. They can take some
+time to appear due to the accounting at timeout.
+`local_capture_pkts` and `local_capture_bytes` are counters for packets that are seen
+by Suricata before the capture method effectively bypasses the traffic. There are almost
+always some for each flow because of the buffer in front of Suricata's reading threads.
diff --git a/doc/userguide/capture-hardware/endace-dag.rst b/doc/userguide/capture-hardware/endace-dag.rst
new file mode 100644
index 0000000..854fd4b
--- /dev/null
+++ b/doc/userguide/capture-hardware/endace-dag.rst
@@ -0,0 +1,42 @@
+Endace DAG
+==========
+
+Suricata comes with native Endace DAG card support. This means Suricata can use the *libdag* interface directly, instead of a libpcap wrapper (which should also work).
+
+Steps:
+
+Configure with DAG support:
+
+::
+
+ ./configure --enable-dag --prefix=/usr --sysconfdir=/etc --localstatedir=/var
+ make
+ sudo make install
+
+Results in:
+
+::
+
+ Suricata Configuration:
+ AF_PACKET support: no
+ PF_RING support: no
+ NFQueue support: no
+ IPFW support: no
+ DAG enabled: yes
+ Napatech enabled: no
+
+
+Start with:
+
+::
+
+ suricata -c suricata.yaml --dag 0:0
+
+
+Started up!
+
+::
+
+
+ [5570] 10/7/2012 -- 13:52:30 - (source-erf-dag.c:262) <Info> (ReceiveErfDagThreadInit) -- Attached and started stream: 0 on DAG: /dev/dag0
+ [5570] 10/7/2012 -- 13:52:30 - (source-erf-dag.c:288) <Info> (ReceiveErfDagThreadInit) -- Starting processing packets from stream: 0 on DAG: /dev/dag0
diff --git a/doc/userguide/capture-hardware/index.rst b/doc/userguide/capture-hardware/index.rst
new file mode 100644
index 0000000..992bd07
--- /dev/null
+++ b/doc/userguide/capture-hardware/index.rst
@@ -0,0 +1,12 @@
+Using Capture Hardware
+======================
+
+.. toctree::
+
+ endace-dag
+ napatech
+ myricom
+ ebpf-xdp
+ netmap
+ af-xdp
+ dpdk
diff --git a/doc/userguide/capture-hardware/myricom.rst b/doc/userguide/capture-hardware/myricom.rst
new file mode 100644
index 0000000..1898ff2
--- /dev/null
+++ b/doc/userguide/capture-hardware/myricom.rst
@@ -0,0 +1,96 @@
+Myricom
+=======
+
+From: https://blog.inliniac.net/2012/07/10/suricata-on-myricom-capture-cards/
+
+In this guide I'll describe using the Myricom libpcap support. I'm going to assume you installed the card properly, installed the Sniffer driver and made sure that all works. Make sure ``dmesg`` shows that the card is in sniffer mode:
+
+::
+
+
+ [ 2102.860241] myri_snf INFO: eth4: Link0 is UP
+ [ 2101.341965] myri_snf INFO: eth5: Link0 is UP
+
+I have installed the Myricom runtime and libraries in ``/opt/snf``
+
+Compile Suricata against Myricom's libpcap:
+
+::
+
+
+ ./configure --with-libpcap-includes=/opt/snf/include/ --with-libpcap-libraries=/opt/snf/lib/ --prefix=/usr --sysconfdir=/etc --localstatedir=/var
+ make
+ sudo make install
+
+Next, configure the number of ringbuffers. I'm going to work with 8 here, as my quad core + hyper threading has 8 logical CPUs. *See below* for additional information about the buffer-size parameter.
+
+
+::
+
+
+ pcap:
+ - interface: eth5
+ threads: 8
+ buffer-size: 512kb
+ checksum-checks: no
+
+The 8 threads setting causes Suricata to create 8 reader threads for eth5. The Myricom driver makes sure each of those is attached to its own ringbuffer.
+
+Then start Suricata as follows:
+
+::
+
+
+ SNF_NUM_RINGS=8 SNF_FLAGS=0x1 suricata -c suricata.yaml -i eth5 --runmode=workers
+
+If you want 16 ringbuffers, update the "threads" variable in the Suricata configuration file to `16` and start Suricata:
+
+::
+
+
+ SNF_NUM_RINGS=16 SNF_FLAGS=0x1 suricata -c suricata.yaml -i eth5 --runmode=workers
+
+Note that the ``pcap.buffer-size`` configuration setting shown above is currently ignored when using Myricom cards. The value is passed through to the ``pcap_set_buffer_size`` libpcap API within the Suricata source code. From Myricom support:
+
+::
+
+ "The libpcap interface to Sniffer10G ignores the pcap_set_buffer_size() value. The call to snf_open() uses zero as the dataring_size which informs the Sniffer library to use a default value or the value from the SNF_DATARING_SIZE environment variable."
+
+The following pull request opened by Myricom in the libpcap project indicates that a future SNF software release could provide support for setting the SNF_DATARING_SIZE via the pcap.buffer-size yaml setting:
+
+* https://github.com/the-tcpdump-group/libpcap/pull/435
+
+Until then, the data ring and descriptor ring values can be explicitly set using the SNF_DATARING_SIZE and SNF_DESCRING_SIZE environment variables, respectively.
+
+The SNF_DATARING_SIZE is the total amount of memory to be used for storing incoming packet data. This size is shared across all rings.
+The SNF_DESCRING_SIZE is the total amount of memory to be used for storing meta information about the packets (packet lengths, offsets, timestamps). This size is also shared across all rings.
+
+Myricom recommends that the descriptor ring be 1/4 the size of the data ring, but the ratio can be modified based on your traffic profile.
+If not set explicitly, Myricom uses the following default values: SNF_DATARING_SIZE = 256MB, and SNF_DESCRING_SIZE = 64MB.
+
+Expanding on the 16 thread example above, you can start Suricata with a 16GB Data Ring and a 4GB Descriptor Ring using the following command:
+
+::
+
+
+ SNF_NUM_RINGS=16 SNF_DATARING_SIZE=17179869184 SNF_DESCRING_SIZE=4294967296 SNF_FLAGS=0x1 suricata -c suricata.yaml -i eth5 --runmode=workers
+
+Debug Info
+~~~~~~~~~~
+
+Myricom also provides a means for obtaining debug information. This can be useful for verifying your configuration and gathering additional information.
+Setting SNF_DEBUG_MASK=3 enables debug information, and optionally setting the SNF_DEBUG_FILENAME allows you to specify the location of the output file.
+
+Following through with the example:
+
+::
+
+
+ SNF_NUM_RINGS=16 SNF_DATARING_SIZE=17179869184 SNF_DESCRING_SIZE=4294967296 SNF_FLAGS=0x1 SNF_DEBUG_MASK=3 SNF_DEBUG_FILENAME="/tmp/snf.out" suricata -c suricata.yaml -i eth5 --runmode=workers
+
+Additional Info
+~~~~~~~~~~~~~~~
+
+* http://www.40gbe.net/index_files/be59da7f2ab5bf0a299ab99ef441bb2e-28.html
+
+* https://www.broadcom.com/support/knowledgebase/1211161394432/how-to-use-emulex-oneconnect-oce12000-d-adapters-with-faststack-
diff --git a/doc/userguide/capture-hardware/napatech.rst b/doc/userguide/capture-hardware/napatech.rst
new file mode 100644
index 0000000..e382de4
--- /dev/null
+++ b/doc/userguide/capture-hardware/napatech.rst
@@ -0,0 +1,534 @@
+Napatech
+========
+
+Contents
+--------
+ * Introduction
+
+ * Package Installation
+
+ * Basic Configuration
+
+ * Advanced Multithreaded Configuration
+
+Introduction
+------------
+
+Napatech packet capture accelerator cards can greatly improve the performance of your Suricata deployment using these
+hardware based features:
+
+ * On board burst buffering (up to 12GB)
+
+ * Zero-copy kernel bypass DMA
+
+ * Non-blocking PCIe performance
+
+ * Port merging
+
+ * Load distribution to up to 128 host buffers
+
+ * Precise timestamping
+
+ * Accurate time synchronization
+
+The package uses a proprietary shell script to handle the installation process.
+In all cases, gcc, make and the kernel header files are required to compile the kernel module and
+install the software.
+
+Package Installation
+--------------------
+
+*Note that make, gcc, and the kernel headers are required for installation*
+
+*Root privileges are also required*
+
+The latest driver and tools installation package can be downloaded from: https://www.napatech.com/downloads.
+
+*Note that you will be prompted to install the Napatech libpcap library. Answer "yes" if you would like to
+use the Napatech card to capture packets in Wireshark, tcpdump, or another pcap based application.
+Libpcap is not needed for Suricata as native Napatech API support is included*
+
+Red Hat Based Distros::
+
+ $ yum install kernel-devel-$(uname -r) gcc make
+ $ ./package_install_3gd.sh
+
+Debian Based Distros::
+
+ $ apt-get install linux-headers-$(uname -r) gcc make
+ $ ./package_install_3gd.sh
+
+To complete installation on all distros, start ``ntservice``::
+
+ $ /opt/napatech3/bin/ntstart.sh -m
+
+Suricata Installation
+---------------------
+
+After downloading and extracting the Suricata tarball, you need to run configure to enable Napatech support and
+prepare for compilation::
+
+ $ ./configure --enable-napatech --with-napatech-includes=/opt/napatech3/include --with-napatech-libraries=/opt/napatech3/lib
+ $ make
+ $ make install-full
+
+Suricata configuration
+----------------------
+
+Now edit the suricata.yaml file to configure the system. There are three ways
+the system can be configured:
+
+ 1. Auto-config without cpu-affinity: In this mode you specify the stream
+ configuration in suricata.yaml file and allow the threads to
+ roam freely. This is good for single processor systems where NUMA node
+ configuration is not a performance concern.
+
+ 2. Auto-config with cpu-affinity: In this mode you use the cpu-affinity
+ of the worker threads to control the creation and configuration of streams.
+ One stream and one worker thread will be created for each cpu identified in
+ suricata.yaml. This is best in systems with multiple NUMA nodes (i.e.
+ multi-processor systems) as the NUMA node of the host buffers is matched
+ to the core on which the thread is running.
+
+ 3. Manual-config (legacy): In this mode the underlying Napatech streams are configured
+ by issuing NTPL commands prior to running Suricata. Suricata then connects
+ to the existing streams on startup.
+
+Example Configuration - Auto-config without cpu-affinity:
+---------------------------------------------------------
+
+If cpu-affinity is not used it is necessary to explicitly define the streams in
+the Suricata configuration file. To use this option the following options should
+be set in the Suricata configuration file:
+
+ 1. Turn off cpu-affinity
+
+ 2. Enable the Napatech "auto-config" option
+
+ 3. Specify the streams that should be created on startup
+
+ 4. Specify the ports that will provide traffic to Suricata
+
+ 5. Specify the hashmode used to distribute traffic to the streams
+
+Below are the options to set::
+
+  threading:
+    set-cpu-affinity: no
+  .
+  .
+  .
+  napatech:
+    auto-config: yes
+    streams: ["0-3"]
+    ports: [all]
+    hashmode: hash5tuplesorted
+
+Now modify ``ntservice.ini``. You also need to make sure that you have allocated enough
+host buffers in ``ntservice.ini`` for the streams. It's a good idea to also set the
+``TimeSyncReferencePriority``. To do this make the following changes to ntservice.ini::
+
+    HostBuffersRx = [4,16,-1]          # [number of host buffers, Size(MB), NUMA node]
+    TimeSyncReferencePriority = OSTime # Timestamp clock synchronized to the OS
+
+Stop and restart ``ntservice`` after making changes to ntservice.ini::
+
+ $ /opt/napatech3/bin/ntstop.sh
+ $ /opt/napatech3/bin/ntstart.sh
+
+Now you are ready to start Suricata::
+
+ $ suricata -c /usr/local/etc/suricata/suricata.yaml --napatech --runmode workers
+
+Example Configuration - Auto-config with cpu-affinity:
+------------------------------------------------------
+
+This option will create a single worker-thread and stream for each CPU defined in the
+``worker-cpu-set``. To use this option make the following changes to suricata.yaml:
+
+1. Turn on cpu-affinity
+2. Specify the worker-cpu-set
+3. Enable the Napatech "auto-config" option
+4. Specify the ports that will provide traffic to Suricata
+5. Specify the hashmode that will be used to control the distribution of
+ traffic to the different streams/cpus.
+
+When you are done it should look similar to this::
+
+  threading:
+    set-cpu-affinity: yes
+    cpu-affinity:
+      management-cpu-set:
+        cpu: [ 0 ]
+      receive-cpu-set:
+        cpu: [ 0 ]
+      worker-cpu-set:
+        cpu: [ all ]
+  .
+  .
+  .
+  napatech:
+    auto-config: yes
+    ports: [all]
+    hashmode: hash5tuplesorted
+
+Prior to running Suricata in this mode you also need to configure a sufficient
+number of host buffers on each NUMA node. So, for example, if you have a two
+processor server with 32 total cores and you plan to use all of the cores you
+will need to allocate 16 host buffers on each NUMA node. It is also desirable
+to set the Napatech card's time source to the OS.
+
+To do this make the following changes to ntservice.ini::
+
+ TimeSyncReferencePriority = OSTime # Timestamp clock synchronized to the OS
+ HostBuffersRx = [16,16,0],[16,16,1] # [number of host buffers, Size(MB), NUMA node]
+
+Stop and restart ``ntservice`` after making changes to ntservice.ini::
+
+ $ /opt/napatech3/bin/ntstop.sh -m
+ $ /opt/napatech3/bin/ntstart.sh -m
+
+Now you are ready to start Suricata::
+
+ $ suricata -c /usr/local/etc/suricata/suricata.yaml --napatech --runmode workers
+
+Example Configuration - Manual Configuration
+--------------------------------------------
+
+For Manual Configuration the Napatech streams are created by running NTPL
+commands prior to running Suricata.
+
+Note that this option is provided primarily for legacy configurations as previously
+this was the only way to configure Napatech products. Newer capabilities such as
+flow-awareness and inline processing cannot be configured manually.
+
+In this example we will setup the Napatech capture accelerator to merge all physical
+ports, and then distribute the merged traffic to four streams that Suricata will ingest.
+
+The steps for this configuration are:
+ 1. Disable the Napatech auto-config option in suricata.yaml
+ 2. Specify the streams that Suricata is to use in suricata.yaml
+ 3. Create a file with NTPL commands to create the underlying Napatech streams.
+
+First suricata.yaml should be configured similar to the following::
+
+ napatech:
+ auto-config: no
+ streams: ["0-3"]
+
+Next you need to make sure you have enough host buffers defined in ntservice.ini. It's
+also a good idea to set up the TimeSync. Here are the lines to change::
+
+ TimeSyncReferencePriority = OSTime # Timestamp clock synchronized to the OS
+ HostBuffersRx = [4,16,-1] # [number of host buffers, Size(MB), NUMA node]
+
+Stop and restart ntservice after making changes to ntservice.ini::
+
+ $ /opt/napatech3/bin/ntstop.sh
+ $ /opt/napatech3/bin/ntstart.sh
+
+Now that ntservice is running we need to execute a few NTPL (Napatech Programming Language)
+commands to complete the setup. Create a file with the following commands::
+
+ Delete=All # Delete any existing filters
+ Assign[streamid=(0..3)]= all # Distribute all physical ports across stream IDs 0..3
+
+Next execute those command using the ``ntpl`` tool::
+
+ $ /opt/napatech3/bin/ntpl -f <my_ntpl_file>
+
+Now you are ready to start Suricata::
+
+ $ suricata -c /usr/local/etc/suricata/suricata.yaml --napatech --runmode workers
+
+It is possible to specify much more elaborate configurations using this option, simply by
+creating the appropriate NTPL file and attaching Suricata to the streams.
+
+Bypassing Flows
+---------------
+
+On flow-aware Napatech products, traffic from individual flows can be automatically
+dropped or, in the case of inline configurations, forwarded by the hardware after
+an inspection of the initial packet(s) of the flow by Suricata. This will save
+CPU cycles since Suricata does not process packets for a flow that has already been
+adjudicated. This is enabled via the hardware-bypass option in the Napatech section
+of the configuration file.
+
+When hardware bypass is used it is important that the ports accepting upstream
+and downstream traffic from the network are configured with information on
+which ports the two sides of the connection will arrive. This is needed for the
+hardware to properly process traffic in both directions. This is indicated in the
+"ports" section as a hyphen-separated list of port-pairs that will be receiving
+upstream and downstream traffic, e.g.::
+
+ napatech:
+ hardware-bypass: true
+ ports: [0-1,2-3]
+
+Note that these "port-pairings" are also required for IDS configurations as the hardware
+needs to know on which port(s) two sides of the connection will arrive.
+
+For configurations relying on optical taps the two sides of the pairing will typically
+be different ports. For SPAN port configurations where both upstream and downstream traffic
+are delivered to a single port both sides of the "port-pair" will reference the same port.
+
+For example tap configurations have a form similar to this::
+
+ ports: [0-1,2-3]
+
+Whereas for SPAN port configurations it would look similar to this::
+
+ ports: [0-0,1-1,2-2,3-3]
+
+Note that SPAN and tap configurations may be combined on the same adapter.
+
+There are multiple ways that Suricata can be configured to bypass traffic.
+One way is to enable stream.bypass in the configuration file. E.g.::
+
+ stream:
+ bypass: true
+
+When enabled, once Suricata has evaluated the first chunk of the stream (the
+size of which is also configurable) it will indicate that the rest of the
+packets in the flow can be bypassed. In IDS mode this means that the subsequent
+packets of the flow will be dropped and not delivered to Suricata. In inline
+operation the packets will be transmitted on the output port but not delivered
+to Suricata.
+
+Another way is by specifying the "bypass" keyword in a rule. When a rule is
+triggered with this keyword then the "pass" or "drop" action will be applied
+to subsequent packets of the flow automatically without further analysis by
+Suricata. For example given the rule::
+
+ drop tcp any 443 <> any any (msg: "SURICATA Test rule"; bypass; sid:1000001; rev:2;)
+
+Once Suricata initially evaluates the first packet(s) and identifies the flow,
+all subsequent packets from the flow will be dropped by the hardware; thus
+saving CPU cycles for more important tasks.
+
+The timeout value for how long to wait before evicting stale flows from the
+hardware flow table can be specified via the FlowTimeout attribute in ntservice.ini.
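+
+The exact section in which ``FlowTimeout`` belongs can vary between adapter generations,
+so treat the following fragment purely as a sketch and verify both the placement and the
+units against the Napatech documentation::
+
+    [Adapter0]
+    FlowTimeout = 60000   # assumed value and units; check the Napatech docs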
+
+Inline Operation
+----------------
+
+Napatech flow-aware products can be configured for inline operation. This is
+specified in the configuration file. When enabled, ports are specified as
+port-pairs: traffic received on one port is transmitted out the peer port
+after inspection by Suricata. E.g. the configuration::
+
+ napatech:
+    inline: yes
+ ports[0-1, 2-3]
+
+This will pair ports 0 and 1, and ports 2 and 3, as peers. Rules can be defined to
+pass traffic matching a given signature. For example, given the rule::
+
+ pass tcp any 443 <> any any (msg: "SURICATA Test rule"; bypass; sid:1000001; rev:2;)
+
+Suricata will evaluate the initial packet(s) of the flow and program the flow
+into the hardware. Subsequent packets from the flow will automatically be
+shunted from one port to its peer.
+
+Counters
+--------
+
+The following counters are available:
+
+- napa_total.pkts - The total count of packets received by the card.
+
+- napa_total.byte - The total count of bytes received by the card.
+
+- napa_total.overflow_drop_pkts - The number of packets that were dropped because
+ the host buffers were full. (I.e. the application is not able to process
+ packets quickly enough.)
+
+- napa_total.overflow_drop_byte - The number of bytes that were dropped because
+ the host buffers were full. (I.e. the application is not able to process
+ packets quickly enough.)
+
+On flow-aware products the following counters are also available:
+
+- napa_dispatch_host.pkts, napa_dispatch_host.byte:
+
+ The total number of packets/bytes that were dispatched to a host buffer for
+ processing by Suricata. (Note: this count includes packets that may be
+ subsequently dropped if there is no room in the host buffer.)
+
+- napa_dispatch_drop.pkts, napa_dispatch_drop.byte:
+
+ The total number of packets/bytes that were dropped at the hardware as
+ a result of a Suricata "drop" bypass rule or other adjudication by
+ Suricata that the flow packets should be dropped. These packets are not
+ delivered to the application.
+
+- napa_dispatch_fwd.pkts, napa_dispatch_fwd.byte:
+
+ When inline operation is configured this is the total number of packets/bytes
+ that were forwarded as result of a Suricata "pass" bypass rule or as a result
+ of stream or encryption bypass being enabled in the configuration file.
+ These packets were not delivered to the application.
+
+- napa_bypass.active_flows:
+
+ The number of flows actively programmed on the hardware to be forwarded or dropped.
+
+- napa_bypass.total_flows:
+
+ The total count of flows programmed since the application started.
+
+If enable-stream-stats is enabled in the configuration file then, for each stream
+that is being processed, the following counters will be output in stats.log:
+
+- napa<streamid>.pkts: The number of packets received by the stream.
+
+- napa<streamid>.bytes: The total bytes received by the stream.
+
+- napa<streamid>.drop_pkts: The number of packets dropped from this stream due to buffer overflow conditions.
+
+- napa<streamid>.drop_byte: The number of bytes dropped from this stream due to buffer overflow conditions.
+
+This is useful for fine-grained debugging to determine if a specific CPU core or
+thread is falling behind, resulting in dropped packets.
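+
+To turn these per-stream counters on, set ``enable-stream-stats`` in the napatech section
+of ``suricata.yaml`` (a minimal sketch based on the option described later in this chapter)::
+
+    napatech:
+      enable-stream-stats: yes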
+
+If hba is enabled, the following counter will also be provided:
+
+- napa<streamid>.hba_drop: The number of packets dropped because the host buffer allowance high-water mark was reached.
+
+In addition to these counters, host buffer utilization is tracked and logged, which is
+also useful for debugging. Log messages are output for both host and on-board buffers
+when utilization reaches 25, 50 and 75 percent. Corresponding messages are output when
+utilization decreases.
+
+Debugging:
+
+For debugging configurations, it is useful to see what traffic is flowing, as well as
+which streams are created and receiving traffic. There are two tools in /opt/napatech3/bin
+that are useful for this:
+
+- monitoring: this tool will, among other things, show what traffic is arriving at the port interfaces.
+
+- profiling: this will show host-buffers, streams and traffic flow to the streams.
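+
+Both tools can be run directly from the Napatech tools directory, for example::
+
+    $ /opt/napatech3/bin/monitoring
+    $ /opt/napatech3/bin/profiling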
+
+If Suricata terminates abnormally, stream definitions, which are normally removed at shutdown,
+may remain in effect. If this happens they can be cleared by issuing the "delete=all" NTPL
+command as follows::
+
+ # /opt/napatech3/bin/ntpl -e "delete=all"
+
+Napatech configuration options
+------------------------------
+
+These are the Napatech options available in the Suricata configuration file::
+
+ napatech:
+ # The Host Buffer Allowance for all streams
+ # (-1 = OFF, 1 - 100 = percentage of the host buffer that can be held back)
+ # This may be enabled when sharing streams with another application.
+ # Otherwise, it should be turned off.
+ #
+ # Note: hba will be deprecated in Suricata 7
+ #
+ #hba: -1
+
+ # When use_all_streams is set to "yes" the initialization code will query
+ # the Napatech service for all configured streams and listen on all of them.
+ # When set to "no" the streams config array will be used.
+ #
+ # This option necessitates running the appropriate NTPL commands to create
+ # the desired streams prior to running Suricata.
+ #use-all-streams: no
+
+ # The streams to listen on when auto-config is disabled or when threading
+ # cpu-affinity is disabled. This can be either:
+ # an individual stream (e.g. streams: [0])
+ # or
+ # a range of streams (e.g. streams: ["0-3"])
+ #
+ streams: ["0-3"]
+
+ # Stream stats can be enabled to provide fine grain packet and byte counters
+ # for each thread/stream that is configured.
+ #
+ enable-stream-stats: no
+
+ # When auto-config is enabled the streams will be created and assigned
+ # automatically to the NUMA node where the thread resides. If cpu-affinity
+ # is enabled in the threading section, then the streams will be created
+ # according to the number of worker threads specified in the worker cpu set.
+ # Otherwise, the streams array is used to define the streams.
+ #
+    # This option cannot be used simultaneously with "use-all-streams".
+ #
+ auto-config: yes
+
+ # Enable hardware level flow bypass.
+ #
+ hardware-bypass: yes
+
+ # Enable inline operation. When enabled traffic arriving on a given port is
+    # automatically forwarded out its peer port after analysis by Suricata.
+ # hardware-bypass must be enabled when this is enabled.
+ #
+ inline: no
+
+    # Ports indicates which Napatech ports are to be used in auto-config mode.
+    # These are the port IDs of the ports that will be merged prior to the
+    # traffic being distributed to the streams.
+ #
+    # When hardware-bypass is enabled the ports must be configured as segments that
+    # specify the port(s) on which upstream and downstream traffic will arrive.
+ # This information is necessary for the hardware to properly process flows.
+ #
+ # When using a tap configuration one of the ports will receive inbound traffic
+ # for the network and the other will receive outbound traffic. The two ports on a
+ # given segment must reside on the same network adapter.
+ #
+ # When using a SPAN-port configuration the upstream and downstream traffic
+ # arrives on a single port. This is configured by setting the two sides of the
+ # segment to reference the same port. (e.g. 0-0 to configure a SPAN port on
+ # port 0).
+ #
+ # port segments are specified in the form:
+ # ports: [0-1,2-3,4-5,6-6,7-7]
+ #
+ # For legacy systems when hardware-bypass is disabled this can be specified in any
+ # of the following ways:
+ #
+ # a list of individual ports (e.g. ports: [0,1,2,3])
+ #
+ # a range of ports (e.g. ports: [0-3])
+ #
+ # "all" to indicate that all ports are to be merged together
+ # (e.g. ports: [all])
+ #
+ # This parameter has no effect if auto-config is disabled.
+ #
+ ports: [0-1,2-3]
+
+ # When auto-config is enabled the hashmode specifies the algorithm for
+ # determining to which stream a given packet is to be delivered.
+ # This can be any valid Napatech NTPL hashmode command.
+ #
+ # The most common hashmode commands are: hash2tuple, hash2tuplesorted,
+ # hash5tuple, hash5tuplesorted and roundrobin.
+ #
+    # See the Napatech NTPL documentation for other hashmodes and details on their use.
+ #
+ # This parameter has no effect if auto-config is disabled.
+ #
+ hashmode: hash5tuplesorted
+
+*Note: hba is useful only when a stream is shared with another application. When hba is enabled packets will be dropped
+(i.e. not delivered to Suricata) when the host-buffer utilization reaches the high-water mark indicated by the hba value.
+This ensures that, should Suricata get behind in its packet processing, the other application will still receive all
+of the packets. If this is enabled without another application sharing the stream it will result in sub-optimal packet
+buffering.*
+
+Make sure that there are enough host-buffers declared in ``ntservice.ini`` to
+accommodate the number of cores/streams being used.
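+
+For example, assuming the common ``HostBuffersRx = [number of buffers, buffer size in MB,
+NUMA node]`` format, four streams each backed by a 16 MB host buffer could be declared in
+the adapter section along these lines (values are illustrative; verify against your
+``ntservice.ini``)::
+
+    [Adapter0]
+    HostBuffersRx = [4,16,-1]   # 4 host buffers of 16 MB each; the last field selects the NUMA node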
+
+Support
+-------
+
+Contact a support engineer at: ntsupport@napatech.com
+
+Napatech Documentation can be found at: https://docs.napatech.com (Click the search icon, with no search text,
+to see all documents in the portal.)
diff --git a/doc/userguide/capture-hardware/netmap.rst b/doc/userguide/capture-hardware/netmap.rst
new file mode 100644
index 0000000..08f191d
--- /dev/null
+++ b/doc/userguide/capture-hardware/netmap.rst
@@ -0,0 +1,223 @@
+Netmap
+======
+
+Netmap is a high-speed capture framework for Linux and FreeBSD. On Linux it
+is available as an external module, while on FreeBSD 11+ it is available by
+default.
+
+
+Compiling Suricata
+------------------
+
+FreeBSD
+~~~~~~~
+
+On FreeBSD 11 and up, NETMAP is included and enabled by default in the kernel.
+
+To build Suricata with NETMAP, add ``--enable-netmap`` to the configure line.
+The location of the NETMAP includes (/usr/src/sys/net/) does not have to be
+specified.
+
+Linux
+~~~~~
+
+On Linux, NETMAP is not included by default. It can be pulled from GitHub.
+Follow the installation instructions included in the NETMAP repository.
+
+When NETMAP is installed, add ``--enable-netmap`` to the configure line.
+If the includes are not added to a standard location, the location can
+be specified when configuring Suricata.
+
+Example::
+
+ ./configure --enable-netmap --with-netmap-includes=/usr/local/include/netmap/
+
+Starting Suricata
+-----------------
+
+When opening an interface, netmap can take various special characters as
+options in the interface string.
+
+.. warning:: the interface that netmap reads from will become unavailable
+ for normal network operations. You can lock yourself out of
+ your system.
+
+IDS
+~~~
+
+Suricata can be started in two ways to use netmap:
+
+::
+
+ suricata --netmap=<interface>
+ suricata --netmap=igb0
+
+In the above example Suricata will start reading from the `igb0` network interface.
+The number of threads created depends on the number of RSS queues available on the NIC.
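+
+On Linux the number of RSS queues can usually be inspected and changed with ``ethtool``
+(the interface name and queue count below are only examples)::
+
+    # Show the supported and currently configured number of queues
+    ethtool -l igb0
+
+    # Use 4 combined queues, so that Suricata starts 4 capture threads
+    ethtool -L igb0 combined 4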
+
+::
+
+ suricata --netmap
+
+In the above example Suricata will take the ``netmap`` block from the Suricata
+configuration and open each of the interfaces listed.
+
+::
+
+ netmap:
+ - interface: igb0
+ threads: 2
+ - interface: igb1
+ threads: 4
+
+For the above configuration, both ``igb0`` and ``igb1`` would be opened, with 2
+capture threads for ``igb0`` and 4 for ``igb1``.
+
+.. warning:: This multi-threaded setup only works correctly if the NIC
+ has symmetric RSS hashing. If this is not the case, consider
+ using the 'lb' method below.
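+
+On NICs that support it, a symmetric hash can often be obtained by loading a low-entropy
+RSS key with ``ethtool``, as is commonly done for other Suricata capture methods. This is
+only a sketch: the interface name is an example and the required key length (40 or 52
+bytes) depends on the NIC::
+
+    ethtool -X igb0 hkey 6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A:6D:5A equal 4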
+
+IPS
+~~~
+
+Suricata's Netmap-based IPS mode creates a layer 2 software bridge between
+two interfaces: Suricata reads packets on one interface and transmits them on
+the other.
+
+Packets that are blocked by the IPS policy are simply not transmitted.
+
+::
+
+ netmap:
+ - interface: igb0
+ copy-mode: ips
+ copy-iface: igb1
+ - interface: igb1
+ copy-mode: ips
+ copy-iface: igb0
+
+Advanced setups
+---------------
+
+lb (load balance)
+~~~~~~~~~~~~~~~~~
+
+"lb" is a tool written by Seth Hall to allow for load balancing for single
+or multiple tools. One common use case is being able to run Suricata and
+Zeek together on the same traffic.
+
+starting lb::
+
+ lb -i eth0 -p suricata:6 -p zeek:6
+
+.. note:: On FreeBSD 11, the named pipe prefix doesn't work.
+
+yaml::
+
+ netmap:
+ - interface: netmap:suricata
+ threads: 6
+
+startup::
+
+ suricata --netmap=netmap:suricata
+
+The interface name as passed to Suricata includes a 'netmap:' prefix. This
+tells Suricata that it's going to read from netmap pipes instead of a real
+interface.
+
+Then Zeek (formerly Bro) can be configured to load 6 instances. Both will
+get a copy of the same traffic. The number of netmap pipes does not have
+to be equal for both tools.
+
+FreeBSD 11
+~~~~~~~~~~
+
+On FreeBSD 11 the named pipe prefix is not available, so the interface name is used instead.
+
+starting lb::
+
+ lb -i eth0 -p 6
+
+yaml::
+
+ netmap:
+ - interface: netmap:eth0
+ threads: 6
+
+startup::
+
+ suricata --netmap
+
+
+.. note:: "lb" is bundled with netmap.
+
+Single NIC
+~~~~~~~~~~
+
+When an interface enters NETMAP mode, it is no longer available to
+the OS for other operations. This can be undesirable in certain
+cases, but there is a workaround.
+
+Running Suricata in a special inline mode will make the interface's
+traffic visible to the OS again.
+
+::
+
+ netmap:
+ - interface: igb0
+ copy-mode: tap
+ copy-iface: igb0^
+ - interface: igb0^
+ copy-mode: tap
+ copy-iface: igb0
+
+The copy-mode can be either 'tap' or 'ips', where the former never
+drops packets based on the policies in use, while the latter may drop
+packets.
+
+.. warning:: Misconfiguration can lead to connectivity loss. Use
+ with care.
+
+.. note:: This setup can also be used to mix NETMAP with firewall
+          setups like pf or ipfw.
+
+VALE switches
+~~~~~~~~~~~~~
+
+VALE is a virtual switch that can be used to create an all-virtual
+network or a mix of virtual and real NICs.
+
+A simple all virtual setup::
+
+ vale-ctl -n vi0
+ vale-ctl -a vale0:vi0
+ vale-ctl -n vi1
+ vale-ctl -a vale0:vi1
+
+We now have a virtual switch "vale0" with 2 ports "vi0" and "vi1".
+
+We can start Suricata to listen on one of the ports::
+
+ suricata --netmap=vale0:vi1
+
+Traffic can then be fed into the switch through the other ports, such as ``vi0``,
+either from another application or from a physical NIC attached to the switch.
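+
+As a sketch (the interface name ``em1`` is only an example), a physical NIC can be
+attached to the same switch so that real traffic reaches Suricata::
+
+    vale-ctl -a vale0:em1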
+
+Inline IDS
+----------
+
+The inline IDS is almost the same as the IPS setup above, but it will not
+enforce ``drop`` policies.
+
+::
+
+ netmap:
+ - interface: igb0
+ copy-mode: tap
+ copy-iface: igb1
+ - interface: igb1
+ copy-mode: tap
+ copy-iface: igb0
+
+The only difference with the IPS mode is that the ``copy-mode`` setting is
+set to ``tap``.