Diffstat (limited to 'src/spdk/dpdk/doc/guides/rawdevs')
-rw-r--r--  src/spdk/dpdk/doc/guides/rawdevs/dpaa2_cmdif.rst   | 104
-rw-r--r--  src/spdk/dpdk/doc/guides/rawdevs/dpaa2_qdma.rst    | 101
-rw-r--r--  src/spdk/dpdk/doc/guides/rawdevs/ifpga.rst         | 112
-rw-r--r--  src/spdk/dpdk/doc/guides/rawdevs/index.rst         |  20
-rw-r--r--  src/spdk/dpdk/doc/guides/rawdevs/ioat.rst          | 265
-rw-r--r--  src/spdk/dpdk/doc/guides/rawdevs/ntb.rst           | 154
-rw-r--r--  src/spdk/dpdk/doc/guides/rawdevs/octeontx2_dma.rst | 115
-rw-r--r--  src/spdk/dpdk/doc/guides/rawdevs/octeontx2_ep.rst  |  89
8 files changed, 960 insertions(+), 0 deletions(-)
diff --git a/src/spdk/dpdk/doc/guides/rawdevs/dpaa2_cmdif.rst b/src/spdk/dpdk/doc/guides/rawdevs/dpaa2_cmdif.rst
new file mode 100644
index 000000000..be9805874
--- /dev/null
+++ b/src/spdk/dpdk/doc/guides/rawdevs/dpaa2_cmdif.rst
@@ -0,0 +1,104 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2018 NXP
+
+NXP DPAA2 CMDIF Driver
+======================
+
+The DPAA2 CMDIF is an implementation of the rawdev API that provides
+communication between the GPP and the AIOP (firmware). This is achieved
+using the DPCI devices exposed by the MC for GPP <--> AIOP interaction.
+
+More information can be found at `NXP Official Website
+<http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.
+
+Features
+--------
+
+The DPAA2 CMDIF implements the following features in the rawdev API:
+
+- Getting the object ID of the device (DPCI) using attributes
+- I/O to and from the AIOP device using DPCI
+
+Supported DPAA2 SoCs
+--------------------
+
+- LS2084A/LS2044A
+- LS2088A/LS2048A
+- LS1088A/LS1048A
+
+Prerequisites
+-------------
+
+See :doc:`../platform/dpaa2` for setup information.
+
+Currently supported by DPDK:
+
+- NXP SDK **19.09+**.
+- MC Firmware version **10.18.0** and higher.
+- Supported architectures: **arm64 LE**.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.
+
+.. note::
+
+ Some parts of the fslmc bus code (MC flib - object library) routines are
+ dual licensed (BSD & GPLv2).
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+
+- ``CONFIG_RTE_LIBRTE_PMD_DPAA2_CMDIF_RAWDEV`` (default ``y``)
+
+ Toggle compilation of the ``lrte_pmd_dpaa2_cmdif`` driver.
+
+Enabling logs
+-------------
+
+To enable logs, use the following EAL parameter:
+
+.. code-block:: console
+
+ ./your_cmdif_application <EAL args> --log-level=pmd.raw.dpaa2.cmdif,<level>
+
+Using ``pmd.raw.dpaa2.cmdif`` as the log matching criterion, all PMD logs at
+or below the given ``level`` are enabled.
+
+Driver Compilation
+~~~~~~~~~~~~~~~~~~
+
+To compile the DPAA2 CMDIF PMD for Linux arm64 gcc target, run the
+following ``make`` command:
+
+.. code-block:: console
+
+ cd <DPDK-source-directory>
+ make config T=arm64-dpaa-linux-gcc install
+
+Initialization
+--------------
+
+The DPAA2 CMDIF is exposed as a vdev device which consists of dpci devices.
+On EAL initialization, the dpci devices are probed, and the vdev device can
+then be created from the application code by either of the following:
+
+* Invoking ``rte_vdev_init("dpaa2_dpci")`` from the application
+
+* Using ``--vdev="dpaa2_dpci"`` in the EAL options, which calls
+ ``rte_vdev_init()`` internally
+
+Example:
+
+.. code-block:: console
+
+ ./your_cmdif_application <EAL args> --vdev="dpaa2_dpci"
+
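+The same can also be done programmatically. The following is a minimal,
+illustrative sketch of the first option (``rte_vdev_init()`` is declared in
+``rte_bus_vdev.h``; error handling kept trivial):
+
+.. code-block:: c
+
+   /* Create the CMDIF vdev on top of the dpci devices probed by EAL. */
+   if (rte_vdev_init("dpaa2_dpci", NULL) != 0)
+           printf("Cannot create the dpaa2_dpci vdev\n");
+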
+Platform Requirement
+~~~~~~~~~~~~~~~~~~~~
+
+DPAA2 drivers for DPDK can only work on NXP SoCs as listed in the
+``Supported DPAA2 SoCs``.
diff --git a/src/spdk/dpdk/doc/guides/rawdevs/dpaa2_qdma.rst b/src/spdk/dpdk/doc/guides/rawdevs/dpaa2_qdma.rst
new file mode 100644
index 000000000..129e83d5e
--- /dev/null
+++ b/src/spdk/dpdk/doc/guides/rawdevs/dpaa2_qdma.rst
@@ -0,0 +1,101 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2018 NXP
+
+NXP DPAA2 QDMA Driver
+=====================
+
+The DPAA2 QDMA is an implementation of the rawdev API that provides a means
+to initiate a DMA transaction from the CPU. The initiated DMA is performed
+without the CPU being involved in the actual DMA transaction. This is
+achieved using the DPDMAI device exposed by the MC.
+
+More information can be found at `NXP Official Website
+<http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.
+
+Features
+--------
+
+The DPAA2 QDMA implements the following features in the rawdev API:
+
+- Supports issuing DMA of data within memory without hogging the CPU while
+ performing the DMA operation.
+- Supports optionally retrieving the status of the DMA transaction on a
+ per-operation basis.
+
+Supported DPAA2 SoCs
+--------------------
+
+- LX2160A
+- LS2084A/LS2044A
+- LS2088A/LS2048A
+- LS1088A/LS1048A
+
+Prerequisites
+-------------
+
+See :doc:`../platform/dpaa2` for setup information.
+
+Currently supported by DPDK:
+
+- NXP SDK **19.09+**.
+- MC Firmware version **10.18.0** and higher.
+- Supported architectures: **arm64 LE**.
+
+- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.
+
+.. note::
+
+ Some parts of the fslmc bus code (MC flib - object library) routines are
+ dual licensed (BSD & GPLv2).
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+
+- ``CONFIG_RTE_LIBRTE_PMD_DPAA2_QDMA_RAWDEV`` (default ``y``)
+
+ Toggle compilation of the ``lrte_pmd_dpaa2_qdma`` driver.
+
+Enabling logs
+-------------
+
+To enable logs, use the following EAL parameter:
+
+.. code-block:: console
+
+ ./your_qdma_application <EAL args> --log-level=pmd.raw.dpaa2.qdma,<level>
+
+Using ``pmd.raw.dpaa2.qdma`` as the log matching criterion, all PMD logs at
+or below the given ``level`` are enabled.
+
+Driver Compilation
+~~~~~~~~~~~~~~~~~~
+
+To compile the DPAA2 QDMA PMD for Linux arm64 gcc target, run the
+following ``make`` command:
+
+.. code-block:: console
+
+ cd <DPDK-source-directory>
+ make config T=arm64-dpaa-linux-gcc install
+
+Initialization
+--------------
+
+The DPAA2 QDMA is exposed as a vdev device which consists of dpdmai devices.
+On EAL initialization, the dpdmai devices are probed and populated into the
+raw devices. The rawdev ID of a device can be obtained by
+
+* Invoking ``rte_rawdev_get_dev_id("dpdmai.x")`` from the application,
+ where x is the object ID of the DPDMAI object created by the MC. Users can
+ use this ID for further rawdev function calls, as shown in the sketch below.
+
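+The following is a minimal sketch of the lookup (``rte_rawdev_get_dev_id()``
+is declared in ``rte_rawdev.h``; the object ID ``0`` is only an example):
+
+.. code-block:: c
+
+   /* Look up the raw device created for DPDMAI object 0. */
+   uint16_t dev_id = rte_rawdev_get_dev_id("dpdmai.0");
+
+   /* dev_id can now be used with rte_rawdev_configure() and friends. */
+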
+Platform Requirement
+~~~~~~~~~~~~~~~~~~~~
+
+DPAA2 drivers for DPDK can only work on NXP SoCs as listed in the
+``Supported DPAA2 SoCs``.
diff --git a/src/spdk/dpdk/doc/guides/rawdevs/ifpga.rst b/src/spdk/dpdk/doc/guides/rawdevs/ifpga.rst
new file mode 100644
index 000000000..a3d92a62e
--- /dev/null
+++ b/src/spdk/dpdk/doc/guides/rawdevs/ifpga.rst
@@ -0,0 +1,112 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2018 Intel Corporation.
+
+IFPGA Rawdev Driver
+======================
+
+FPGAs are used more and more widely in Cloud and NFV. One primary reason is
+that an FPGA not only provides ASIC-class performance but is also more
+flexible than an ASIC.
+
+An FPGA achieves this flexibility through Partial Reconfiguration (PR) of
+parts of its bit stream. One FPGA device bit stream is divided into many
+partial bit streams, each defined as an AFU (Accelerated Function Unit),
+and each AFU is a hardware acceleration unit which can be dynamically
+reloaded independently.
+
+By partially reconfiguring AFUs, one FPGA's resources can be time-shared by
+different users, and FPGA hot upgrade and fault tolerance can be provided
+easily.
+
+The SW IFPGA Rawdev Driver (**ifpga_rawdev**) provides a rawdev driver
+that utilizes the Intel FPGA software stack OPAE (Open Programmable
+Acceleration Engine) for FPGA management.
+
+Implementation details
+----------------------
+
+Each instance of the IFPGA Rawdev Driver is probed by an Intel FpgaDev. In
+coordination with the OPAE shared code, the IFPGA Rawdev Driver provides the
+common FPGA management ops for FPGA operation; OPAE provides all of the
+following operations:
+
+- FPGA PR (Partial Reconfiguration) management
+- FPGA AFU identification
+- FPGA Thermal Management
+- FPGA Power Management
+- FPGA Performance reporting
+- FPGA Remote Debug
+
+All configuration parameters are taken by the vdev_ifpga_cfg driver. Besides
+configuration, the vdev_ifpga_cfg driver also handles hot plugging into the
+IFPGA Bus.
+
+All of the AFUs of one FPGA may share the same PCI BDF, and the AFU scan
+depends on the IFPGA Rawdev Driver, so the IFPGA Bus handles AFU device scan
+and AFU driver probe. Each AFU device driver binds to its AFU device by UUID
+(Universally Unique Identifier).
+
+To avoid unnecessary code duplication and ensure maximum performance,
+handling of AFU devices is left to different PMDs; the overall design is
+summarized by the following block diagram::
+
+ +---------------------------------------------------------------+
+ | Application(s) |
+ +----------------------------.----------------------------------+
+ |
+ |
+ +----------------------------'----------------------------------+
+ | DPDK Framework (APIs) |
+ +----------|------------|--------.---------------------|--------+
+ / \ |
+ / \ |
+ +-------'-------+ +-------'-------+ +--------'--------+
+ | Eth PMD | | Crypto PMD | | |
+ +-------.-------+ +-------.-------+ | |
+ | | | |
+ | | | |
+ +-------'-------+ +-------'-------+ | IFPGA |
+ | Eth AFU Dev | |Crypto AFU Dev | | Rawdev Driver |
+ +-------.-------+ +-------.-------+ |(OPAE Share Code)|
+ | | | |
+ | | Rawdev | |
+ +-------'------------------'-------+ Ops | |
+ | IFPGA Bus | -------->| |
+ +-----------------.----------------+ +--------.--------+
+ | |
+ Hot-plugin -->| |
+ | |
+ +-----------------'------------------+ +--------'--------+
+ | vdev_ifpga_cfg driver | | Intel FpgaDev |
+ +------------------------------------+ +-----------------+
+
+Build options
+-------------
+
+- ``CONFIG_RTE_LIBRTE_IFPGA_BUS`` (default ``y``)
+
+ Toggle compilation of IFPGA Bus library.
+
+- ``CONFIG_RTE_LIBRTE_IFPGA_RAWDEV`` (default ``y``)
+
+ Toggle compilation of the ``ifpga_rawdev`` driver.
+
+Run-time parameters
+-------------------
+
+This driver is invoked automatically on systems equipped with an Intel FPGA,
+but PR and the IFPGA Bus scan are triggered from the command line using the
+``--vdev 'ifpga_rawdev_cfg'`` EAL option (see the example after the
+parameter list below).
+
+The following device parameters are supported:
+
+- ``ifpga`` [string]
+
+ Provide a specific Intel FPGA device PCI BDF. Can be provided multiple
+ times for additional instances.
+
+- ``port`` [int]
+
+ Each FPGA can provide many channels for PR of AFUs by software; each
+ channel is identified by this parameter.
+
+- ``afu_bts`` [string]
+
+ If null, the AFU bit stream is assumed to have already been programmed (PR)
+ into the FPGA; if not null, it forces PR and identifies the AFU bit stream
+ file.
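+
+For example, the following command line combines these parameters. The
+instance suffix ``0``, the BDF ``b3:00.0`` and the port number are
+placeholders; substitute the values for your system:
+
+.. code-block:: console
+
+   ./your_fpga_application <EAL args> --vdev 'ifpga_rawdev_cfg0,ifpga=b3:00.0,port=0'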
diff --git a/src/spdk/dpdk/doc/guides/rawdevs/index.rst b/src/spdk/dpdk/doc/guides/rawdevs/index.rst
new file mode 100644
index 000000000..f64ec4427
--- /dev/null
+++ b/src/spdk/dpdk/doc/guides/rawdevs/index.rst
@@ -0,0 +1,20 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2018 NXP
+
+Rawdev Drivers
+==============
+
+The following is a list of raw device PMDs, which can be used from an
+application through the rawdev API.
+
+.. toctree::
+ :maxdepth: 2
+ :numbered:
+
+ dpaa2_cmdif
+ dpaa2_qdma
+ ifpga
+ ioat
+ ntb
+ octeontx2_dma
+ octeontx2_ep
diff --git a/src/spdk/dpdk/doc/guides/rawdevs/ioat.rst b/src/spdk/dpdk/doc/guides/rawdevs/ioat.rst
new file mode 100644
index 000000000..d0eee5e23
--- /dev/null
+++ b/src/spdk/dpdk/doc/guides/rawdevs/ioat.rst
@@ -0,0 +1,265 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Intel Corporation.
+
+.. include:: <isonum.txt>
+
+IOAT Rawdev Driver for Intel\ |reg| QuickData Technology
+======================================================================
+
+The ``ioat`` rawdev driver provides a poll-mode driver (PMD) for Intel\ |reg|
+QuickData Technology, part of Intel\ |reg| I/O Acceleration Technology
+`(Intel I/OAT)
+<https://www.intel.com/content/www/us/en/wireless-network/accel-technology.html>`_.
+This PMD, when used on supported hardware, allows data copies, for example,
+cloning packet data, to be accelerated by that hardware rather than having to
+be done by software, freeing up CPU cycles for other tasks.
+
+Hardware Requirements
+----------------------
+
+On Linux, the presence of Intel\ |reg| QuickData Technology hardware can
+be detected by checking the output of the ``lspci`` command, where the
+hardware will often be listed as "Crystal Beach DMA" or "CBDMA". For
+example, on a system with Intel\ |reg| Xeon\ |reg| CPU E5-2699 v4 @2.20GHz,
+lspci shows:
+
+.. code-block:: console
+
+ # lspci | grep DMA
+ 00:04.0 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 0 (rev 01)
+ 00:04.1 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 1 (rev 01)
+ 00:04.2 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 2 (rev 01)
+ 00:04.3 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 3 (rev 01)
+ 00:04.4 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 4 (rev 01)
+ 00:04.5 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 5 (rev 01)
+ 00:04.6 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 6 (rev 01)
+ 00:04.7 System peripheral: Intel Corporation Xeon E7 v4/Xeon E5 v4/Xeon E3 v4/Xeon D Crystal Beach DMA Channel 7 (rev 01)
+
+On a system with Intel\ |reg| Xeon\ |reg| Gold 6154 CPU @ 3.00GHz, lspci
+shows:
+
+.. code-block:: console
+
+ # lspci | grep DMA
+ 00:04.0 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
+ 00:04.1 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
+ 00:04.2 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
+ 00:04.3 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
+ 00:04.4 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
+ 00:04.5 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
+ 00:04.6 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
+ 00:04.7 System peripheral: Intel Corporation Sky Lake-E CBDMA Registers (rev 04)
+
+
+Compilation
+------------
+
+For builds done with ``make``, the driver compilation is enabled by the
+``CONFIG_RTE_LIBRTE_PMD_IOAT_RAWDEV`` build configuration option. This is
+enabled by default in builds for x86 platforms, and disabled in other
+configurations.
+
+For builds using ``meson`` and ``ninja``, the driver will be built when the
+target platform is x86-based.
+
+Device Setup
+-------------
+
+The Intel\ |reg| QuickData Technology HW devices will need to be bound to a
+user-space IO driver for use. The ``dpdk-devbind.py`` script
+included with DPDK can be used to view the state of the devices and to bind
+them to a suitable DPDK-supported kernel driver. When querying the status
+of the devices, they will appear under the category of "Misc (rawdev)
+devices", i.e. the command ``dpdk-devbind.py --status-dev misc`` can be
+used to see the state of those devices alone.
+
+Device Probing and Initialization
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Once bound to a suitable kernel device driver, the HW devices will be found
+as part of the PCI scan done at application initialization time. No vdev
+parameters need to be passed to create or initialize the device.
+
+Once probed successfully, the device will appear as a ``rawdev``, that is a
+"raw device type" inside DPDK, and can be accessed using APIs from the
+``rte_rawdev`` library.
+
+Using IOAT Rawdev Devices
+--------------------------
+
+To use the devices from an application, the rawdev API can be used, along
+with definitions taken from the device-specific header file
+``rte_ioat_rawdev.h``. This header is needed to get the definition of
+structure parameters used by some of the rawdev APIs for IOAT rawdev
+devices, as well as providing key functions for using the device for memory
+copies.
+
+Getting Device Information
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Basic information about each rawdev device can be queried using the
+``rte_rawdev_info_get()`` API. For most applications, this API will be
+needed to verify that the rawdev in question is of the expected type. For
+example, the following code snippet can be used to identify an IOAT
+rawdev device for use by an application:
+
+.. code-block:: C
+
+ for (i = 0; i < count && !found; i++) {
+ struct rte_rawdev_info info = { .dev_private = NULL };
+ found = (rte_rawdev_info_get(i, &info) == 0 &&
+ strcmp(info.driver_name,
+ IOAT_PMD_RAWDEV_NAME_STR) == 0);
+ }
+
+When calling the ``rte_rawdev_info_get()`` API for an IOAT rawdev device,
+the ``dev_private`` field in the ``rte_rawdev_info`` struct should either
+be NULL, or else be set to point to a structure of type
+``rte_ioat_rawdev_config``, in which case the size of the configured device
+input ring will be returned in that structure.
+
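+For illustration, the following sketch (not taken from the driver tests;
+error handling omitted, ``dev_id`` is the rawdev ID found above) queries the
+ring size of an already configured device:
+
+.. code-block:: C
+
+   struct rte_ioat_rawdev_config ioat_cfg = { 0 };
+   struct rte_rawdev_info info = { .dev_private = &ioat_cfg };
+
+   /* On success, ioat_cfg.ring_size holds the configured ring size. */
+   if (rte_rawdev_info_get(dev_id, &info) == 0)
+           printf("Ring size: %u\n", ioat_cfg.ring_size);
+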
+Device Configuration
+~~~~~~~~~~~~~~~~~~~~~
+
+Configuring an IOAT rawdev device is done using the
+``rte_rawdev_configure()`` API, which takes the same structure parameters
+as the previously referenced ``rte_rawdev_info_get()`` API. The main
+difference is that, because the parameter is used as input rather than
+output, the ``dev_private`` structure element cannot be NULL, and must
+point to a valid ``rte_ioat_rawdev_config`` structure, containing the ring
+size to be used by the device. The ring size must be a power of two,
+between 64 and 4096.
+
+The following code shows how the device is configured in
+``test_ioat_rawdev.c``:
+
+.. code-block:: C
+
+ #define IOAT_TEST_RINGSIZE 512
+ struct rte_ioat_rawdev_config p = { .ring_size = -1 };
+ struct rte_rawdev_info info = { .dev_private = &p };
+
+ /* ... */
+
+ p.ring_size = IOAT_TEST_RINGSIZE;
+ if (rte_rawdev_configure(dev_id, &info) != 0) {
+ printf("Error with rte_rawdev_configure()\n");
+ return -1;
+ }
+
+Once configured, the device can then be made ready for use by calling the
+``rte_rawdev_start()`` API.
+
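+For example (a minimal sketch, reusing ``dev_id`` from the snippets above):
+
+.. code-block:: C
+
+   if (rte_rawdev_start(dev_id) != 0) {
+           printf("Error with rte_rawdev_start()\n");
+           return -1;
+   }
+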
+Performing Data Copies
+~~~~~~~~~~~~~~~~~~~~~~~
+
+To perform data copies using IOAT rawdev devices, the functions
+``rte_ioat_enqueue_copy()`` and ``rte_ioat_do_copies()`` should be used.
+Once copies have been completed, the completion will be reported back when
+the application calls ``rte_ioat_completed_copies()``.
+
+The ``rte_ioat_enqueue_copy()`` function enqueues a single copy to the
+device ring for copying at a later point. The parameters to that function
+include the IOVA addresses of both the source and destination buffers,
+as well as two "handles" to be returned to the user when the copy is
+completed. These handles can be arbitrary values, but two are provided so
+that the library can track handles for both source and destination on
+behalf of the user, e.g. virtual addresses for the buffers, or mbuf
+pointers if packet data is being copied.
+
+While the ``rte_ioat_enqueue_copy()`` function enqueues a copy operation on
+the device ring, the copy will not actually be performed until after the
+application calls the ``rte_ioat_do_copies()`` function. This function
+informs the device hardware of the elements enqueued on the ring, and the
+device will begin to process them. It is expected that, for efficiency
+reasons, a burst of operations will be enqueued to the device via multiple
+enqueue calls between calls to the ``rte_ioat_do_copies()`` function.
+
+The following code from ``test_ioat_rawdev.c`` demonstrates how to enqueue
+a burst of copies to the device and start the hardware processing of them:
+
+.. code-block:: C
+
+ struct rte_mbuf *srcs[32], *dsts[32];
+ unsigned int j;
+
+ for (i = 0; i < RTE_DIM(srcs); i++) {
+ char *src_data;
+
+ srcs[i] = rte_pktmbuf_alloc(pool);
+ dsts[i] = rte_pktmbuf_alloc(pool);
+ srcs[i]->data_len = srcs[i]->pkt_len = length;
+ dsts[i]->data_len = dsts[i]->pkt_len = length;
+ src_data = rte_pktmbuf_mtod(srcs[i], char *);
+
+ for (j = 0; j < length; j++)
+ src_data[j] = rand() & 0xFF;
+
+ if (rte_ioat_enqueue_copy(dev_id,
+ srcs[i]->buf_iova + srcs[i]->data_off,
+ dsts[i]->buf_iova + dsts[i]->data_off,
+ length,
+ (uintptr_t)srcs[i],
+ (uintptr_t)dsts[i],
+ 0 /* nofence */) != 1) {
+ printf("Error with rte_ioat_enqueue_copy for buffer %u\n",
+ i);
+ return -1;
+ }
+ }
+ rte_ioat_do_copies(dev_id);
+
+To retrieve information about completed copies, the API
+``rte_ioat_completed_copies()`` should be used. This API will return to the
+application a set of completion handles passed in when the relevant copies
+were enqueued.
+
+The following code from ``test_ioat_rawdev.c`` shows the test code
+retrieving information about the completed copies and validating the data
+is correct before freeing the data buffers using the returned handles:
+
+.. code-block:: C
+
+ if (rte_ioat_completed_copies(dev_id, 64, (void *)completed_src,
+ (void *)completed_dst) != RTE_DIM(srcs)) {
+ printf("Error with rte_ioat_completed_copies\n");
+ return -1;
+ }
+ for (i = 0; i < RTE_DIM(srcs); i++) {
+ char *src_data, *dst_data;
+
+ if (completed_src[i] != srcs[i]) {
+ printf("Error with source pointer %u\n", i);
+ return -1;
+ }
+ if (completed_dst[i] != dsts[i]) {
+ printf("Error with dest pointer %u\n", i);
+ return -1;
+ }
+
+ src_data = rte_pktmbuf_mtod(srcs[i], char *);
+ dst_data = rte_pktmbuf_mtod(dsts[i], char *);
+ for (j = 0; j < length; j++)
+ if (src_data[j] != dst_data[j]) {
+ printf("Error with copy of packet %u, byte %u\n",
+ i, j);
+ return -1;
+ }
+ rte_pktmbuf_free(srcs[i]);
+ rte_pktmbuf_free(dsts[i]);
+ }
+
+
+Querying Device Statistics
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The statistics from the IOAT rawdev device can be retrieved via the xstats
+functions in the ``rte_rawdev`` library, i.e.
+``rte_rawdev_xstats_names_get()``, ``rte_rawdev_xstats_get()`` and
+``rte_rawdev_xstats_by_name_get()``. A usage sketch follows the list below.
+The statistics returned for each device instance are:
+
+* ``failed_enqueues``
+* ``successful_enqueues``
+* ``copies_started``
+* ``copies_completed``
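+
+The following is a minimal, illustrative sketch (error handling trimmed) that
+reads all statistics of one device through the generic xstats calls:
+
+.. code-block:: C
+
+   struct rte_rawdev_xstats_name names[16];
+   uint64_t values[16];
+   unsigned int ids[16];
+   int n, i;
+
+   /* First call returns the number of xstats and fills in their names. */
+   n = rte_rawdev_xstats_names_get(dev_id, names, RTE_DIM(names));
+   if (n > (int)RTE_DIM(names))
+           n = RTE_DIM(names);
+   for (i = 0; i < n; i++)
+           ids[i] = i;
+
+   /* Fetch the values for the requested IDs and print them. */
+   if (rte_rawdev_xstats_get(dev_id, ids, values, n) == n)
+           for (i = 0; i < n; i++)
+                   printf("%s: %"PRIu64"\n", names[i].name, values[i]);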
diff --git a/src/spdk/dpdk/doc/guides/rawdevs/ntb.rst b/src/spdk/dpdk/doc/guides/rawdevs/ntb.rst
new file mode 100644
index 000000000..aa7d80964
--- /dev/null
+++ b/src/spdk/dpdk/doc/guides/rawdevs/ntb.rst
@@ -0,0 +1,154 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2018 Intel Corporation.
+
+NTB Rawdev Driver
+=================
+
+The ``ntb`` rawdev driver provides a non-transparent bridge between two
+separate hosts so that they can communicate with each other. Thus, many
+use cases can benefit from this, such as fault tolerance and visual
+acceleration.
+
+This PMD allows two hosts to handshake for device start and stop, to allocate
+memory for the peer to access, and to read/write the memory allocated by the
+peer. The PMD also allows using doorbell registers to notify the peer, and
+sharing some information through scratchpad registers.
+
+BIOS setting on Intel Skylake
+-----------------------------
+
+The Intel Non-Transparent Bridge needs special BIOS settings. Since the PMD
+only supports the Intel Skylake platform, the BIOS settings for that platform
+are introduced here. The reference is
+https://www.intel.com/content/dam/support/us/en/documents/server-products/Intel_Xeon_Processor_Scalable_Family_BIOS_User_Guide.pdf
+
+- Set the needed PCIe port as NTB to NTB mode on both hosts.
+- Enable NTB bars and set bar size of bar 23 and bar 45 as 12-29 (2K-512M)
+ on both hosts. Note that bar size on both hosts should be the same.
+- Disable split bars for both hosts.
+- Set crosslink control override as DSD/USP on one host, USD/DSP on
+ another host.
+- Disable PCIe PLL SSC (Spread Spectrum Clocking) for both hosts. This
+ is a hardware requirement.
+
+Build Options
+-------------
+
+- ``CONFIG_RTE_LIBRTE_PMD_NTB_RAWDEV`` (default ``y``)
+
+ Toggle compilation of the ``ntb`` driver.
+
+Device Setup
+------------
+
+The Intel NTB devices need to be bound to a DPDK-supported kernel driver
+(e.g. igb_uio, vfio-pci) before use. The ``dpdk-devbind.py`` script can be
+used to show device status and to bind them to a suitable kernel driver. They
+will appear under the category of "Misc (rawdev) devices".
+
+Prerequisites
+-------------
+
+The NTB PMD needs the kernel PCI driver to support write combining (WC) to
+get better performance. The performance difference can be more than 10x.
+There are two ways to enable WC.
+
+- Insert igb_uio with the ``wc_activate=1`` flag if using the igb_uio driver.
+
+.. code-block:: console
+
+ insmod igb_uio.ko wc_activate=1
+
+- Enable WC for NTB device's Bar 2 and Bar 4 (Mapped memory) manually.
+ The reference is https://www.kernel.org/doc/html/latest/x86/mtrr.html
+ Get bar base address using ``lspci -vvv -s ae:00.0 | grep Region``.
+
+.. code-block:: console
+
+ # lspci -vvv -s ae:00.0 | grep Region
+ Region 0: Memory at 39bfe0000000 (64-bit, prefetchable) [size=64K]
+ Region 2: Memory at 39bfa0000000 (64-bit, prefetchable) [size=512M]
+ Region 4: Memory at 39bfc0000000 (64-bit, prefetchable) [size=512M]
+
+Use the following commands to enable WC.
+
+.. code-block:: console
+
+ echo "base=0x39bfa0000000 size=0x20000000 type=write-combining" >> /proc/mtrr
+ echo "base=0x39bfc0000000 size=0x20000000 type=write-combining" >> /proc/mtrr
+
+And the results:
+
+.. code-block:: console
+
+ # cat /proc/mtrr
+ reg00: base=0x000000000 ( 0MB), size= 2048MB, count=1: write-back
+ reg01: base=0x07f000000 ( 2032MB), size= 16MB, count=1: uncachable
+ reg02: base=0x39bfa0000000 (60553728MB), size= 512MB, count=1: write-combining
+ reg03: base=0x39bfc0000000 (60554240MB), size= 512MB, count=1: write-combining
+
+To disable WC for these regions, use the following.
+
+.. code-block:: console
+
+ echo "disable=2" >> /proc/mtrr
+ echo "disable=3" >> /proc/mtrr
+
+Ring Layout
+-----------
+
+Since reads and writes of the remote system's memory go through the PCI bus,
+a remote read is much more expensive than a remote write. Thus, the enqueue
+and dequeue based on the ntb ring should avoid remote reads. The ring layout
+for ntb is as follows:
+
+- Ring Format::
+
+ desc_ring:
+
+ 0 16 64
+ +---------------------------------------------------------------+
+ | buffer address |
+ +---------------+-----------------------------------------------+
+ | buffer length | resv |
+ +---------------+-----------------------------------------------+
+
+ used_ring:
+
+ 0 16 32
+ +---------------+---------------+
+ | packet length | flags |
+ +---------------+---------------+
+
+- Ring Layout::
+
+ +------------------------+ +------------------------+
+ | used_ring | | desc_ring |
+ | +---+ | | +---+ |
+ | | | | | | | |
+ | +---+ +--------+ | | +---+ |
+ | | | ---> | buffer | <+---+-| | |
+ | +---+ +--------+ | | +---+ |
+ | | | | | | | |
+ | +---+ | | +---+ |
+ | ... | | ... |
+ | | | |
+ | +---------+ | | +---------+ |
+ | | tx_tail | | | | rx_tail | |
+ | System A +---------+ | | System B +---------+ |
+ +------------------------+ +------------------------+
+ <---------traffic---------
+
+- Enqueue and Dequeue
+ Based on this ring layout, enqueue reads rx_tail to learn how many free
+ buffers are available, and writes used_ring and tx_tail to tell the peer
+ which buffers are filled with data.
+ Dequeue reads tx_tail to learn how many packets have arrived, and
+ writes desc_ring and rx_tail to tell the peer about the newly allocated
+ buffers.
+ In this way, only remote writes happen and remote reads are avoided,
+ giving better performance.
+
+Limitation
+----------
+
+- This PMD only supports the Intel Skylake platform.
diff --git a/src/spdk/dpdk/doc/guides/rawdevs/octeontx2_dma.rst b/src/spdk/dpdk/doc/guides/rawdevs/octeontx2_dma.rst
new file mode 100644
index 000000000..1e1dfbe93
--- /dev/null
+++ b/src/spdk/dpdk/doc/guides/rawdevs/octeontx2_dma.rst
@@ -0,0 +1,115 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Marvell International Ltd.
+
+OCTEON TX2 DMA Driver
+=====================
+
+OCTEON TX2 has an internal DMA unit which can be used by applications to
+initiate DMA transactions internally, or from/to the host when OCTEON TX2
+operates in PCIe End Point mode. The DMA PF function supports 8 VFs
+corresponding to 8 DMA queues. Each DMA queue is exposed as a VF function
+when SR-IOV is enabled.
+
+Features
+--------
+
+This DMA PMD supports the following three modes of memory transfer:
+
+#. Internal - OCTEON TX2 DRAM to DRAM without core intervention
+
+#. Inbound - Host DRAM to OCTEON TX2 DRAM without host/OCTEON TX2 cores involvement
+
+#. Outbound - OCTEON TX2 DRAM to Host DRAM without host/OCTEON TX2 cores involvement
+
+Prerequisites and Compilation procedure
+---------------------------------------
+
+ See :doc:`../platform/octeontx2` for setup information.
+
+
+Pre-Installation Configuration
+------------------------------
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+
+- ``CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_DMA_RAWDEV`` (default ``y``)
+
+ Toggle compilation of the ``lrte_pmd_octeontx2_dma`` driver.
+
+Enabling logs
+-------------
+
+To enable logs, use the following EAL parameter:
+
+.. code-block:: console
+
+ ./your_dma_application <EAL args> --log-level=pmd.raw.octeontx2.dpi,<level>
+
+Using ``pmd.raw.octeontx2.dpi`` as the log matching criterion, all PMD logs
+at or below the given ``level`` are enabled.
+
+Initialization
+--------------
+
+The number of DMA VFs (queues) enabled can be controlled by setting the
+sysfs entry ``sriov_numvfs`` for the corresponding PF driver.
+
+.. code-block:: console
+
+ echo <num_vfs> > /sys/bus/pci/drivers/octeontx2-dpi/0000\:05\:00.0/sriov_numvfs
+
+Once the required VFs are enabled, they need to be bound to the vfio-pci
+driver in order to be accessible from DPDK.
+
+Device Setup
+-------------
+
+The OCTEON TX2 DPI DMA HW devices will need to be bound to a
+user-space IO driver for use. The ``dpdk-devbind.py`` script
+included with DPDK can be used to view the state of the devices and to bind
+them to a suitable DPDK-supported kernel driver. When querying the status
+of the devices, they will appear under the category of "Misc (rawdev)
+devices", i.e. the command ``dpdk-devbind.py --status-dev misc`` can be
+used to see the state of those devices alone.
+
+Device Configuration
+--------------------
+
+Configuring a DMA rawdev device is done using the ``rte_rawdev_configure()``
+API, which takes a mempool as a parameter. The PMD uses this pool to submit
+DMA commands to the HW.
+
+The following code shows how the device is configured:
+
+.. code-block:: c
+
+ struct dpi_rawdev_conf_s conf = {0};
+ struct rte_rawdev_info rdev_info = {.dev_private = &conf};
+
+ conf.chunk_pool = (void *)rte_mempool_create_empty(...);
+ rte_mempool_set_ops_byname(conf.chunk_pool, rte_mbuf_platform_mempool_ops(), NULL);
+ rte_mempool_populate_default(conf.chunk_pool);
+
+ rte_rawdev_configure(dev_id, (rte_rawdev_obj_t)&rdev_info);
+
+Performing Data Transfer
+------------------------
+
+To perform data transfers using OCTEON TX2 DMA rawdev devices, use the
+standard ``rte_rawdev_enqueue_buffers()`` and ``rte_rawdev_dequeue_buffers()``
+APIs, as sketched below.
+
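+The following is a minimal, illustrative sketch of the generic buffer calls
+(declared in ``rte_rawdev.h``). The last argument is a driver-specific
+context object; it is shown here only as an opaque placeholder, so consult
+the driver's header for the actual structure to pass:
+
+.. code-block:: c
+
+   uint8_t cmd[128];                             /* placeholder buffer memory */
+   struct rte_rawdev_buf buf = { .buf_addr = cmd };
+   struct rte_rawdev_buf *bufs[1] = { &buf };
+   rte_rawdev_obj_t ctx = NULL;                  /* driver-specific context */
+
+   /* Submit one buffer to the DMA queue, then poll for its completion. */
+   rte_rawdev_enqueue_buffers(dev_id, bufs, 1, ctx);
+   rte_rawdev_dequeue_buffers(dev_id, bufs, 1, ctx);
+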
+Self test
+---------
+
+On EAL initialization, the DMA devices are probed and populated into the
+raw devices. The rawdev ID of a device can be obtained by
+
+* Invoking ``rte_rawdev_get_dev_id("DPI:x")`` from the application,
+ where x is the VF device's bus id specified in "bus:device.func" format.
+ Use this ID for further rawdev function calls.
+
+* This PMD supports a driver self test; to test the DMA internal mode from a
+ test application, one can directly call
+ ``rte_rawdev_selftest(rte_rawdev_get_dev_id("DPI:x"))``.
diff --git a/src/spdk/dpdk/doc/guides/rawdevs/octeontx2_ep.rst b/src/spdk/dpdk/doc/guides/rawdevs/octeontx2_ep.rst
new file mode 100644
index 000000000..bbcf530a4
--- /dev/null
+++ b/src/spdk/dpdk/doc/guides/rawdevs/octeontx2_ep.rst
@@ -0,0 +1,89 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2019 Marvell International Ltd.
+
+Marvell OCTEON TX2 End Point Rawdev Driver
+==========================================
+
+OCTEON TX2 has an internal SDP unit which provides End Point mode of operation
+by exposing its IOQs to the host; the IOQs are used for packet I/O between the
+host and OCTEON TX2. Each OCTEON TX2 SDP PF supports a maximum of 128 VFs, and
+each VF is associated with a set of IOQ pairs.
+
+Features
+--------
+
+This OCTEON TX2 End Point mode PMD supports:
+
+#. Packet Input - Host to OCTEON TX2 with direct data instruction mode.
+
+#. Packet Output - OCTEON TX2 to Host with info pointer mode.
+
+Config File Options
+~~~~~~~~~~~~~~~~~~~
+
+The following options can be modified in the ``config`` file.
+
+- ``CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EP_RAWDEV`` (default ``y``)
+
+ Toggle compilation of the ``lrte_pmd_octeontx2_ep`` driver.
+
+Initialization
+--------------
+
+The number of SDP VFs enabled can be controlled by setting the sysfs
+entry ``sriov_numvfs`` for the corresponding PF driver.
+
+.. code-block:: console
+
+ echo <num_vfs> > /sys/bus/pci/drivers/octeontx2-ep/0000\:04\:00.0/sriov_numvfs
+
+Once the required VFs are enabled, they need to be bound to the vfio-pci
+driver in order to be accessible from DPDK.
+
+Device Setup
+------------
+
+The OCTEON TX2 SDP End Point VF devices will need to be bound to a
+user-space IO driver for use. The ``dpdk-devbind.py`` script
+included with DPDK can be used to view the state of the devices and to bind
+them to a suitable DPDK-supported kernel driver. When querying the status
+of the devices, they will appear under the category of "Misc (rawdev)
+devices", i.e. the command ``dpdk-devbind.py --status-dev misc`` can be
+used to see the state of those devices alone.
+
+Device Configuration
+--------------------
+
+Configuring an SDP EP rawdev device is done using the
+``rte_rawdev_configure()`` API, which takes a mempool as a parameter. The PMD
+uses this pool to send/receive packets to/from the HW.
+
+The following code shows how the device is configured:
+
+.. code-block:: c
+
+ struct sdp_rawdev_info config = {0};
+ struct rte_rawdev_info rdev_info = {.dev_private = &config};
+ config.enqdeq_mpool = (void *)rte_mempool_create(...);
+
+ rte_rawdev_configure(dev_id, (rte_rawdev_obj_t)&rdev_info);
+
+Performing Data Transfer
+------------------------
+
+To perform data transfers using SDP VF EP rawdev devices, use the standard
+``rte_rawdev_enqueue_buffers()`` and ``rte_rawdev_dequeue_buffers()`` APIs.
+
+Self test
+---------
+
+On EAL initialization, the SDP VF devices are probed and populated into the
+raw devices. The rawdev ID of a device can be obtained by
+
+* Invoking ``rte_rawdev_get_dev_id("SDPEP:x")`` from the test application,
+ where x is the VF device's bus id specified in "bus:device.func" (BDF)
+ format. Use this ID for further rawdev function calls.
+
+* The driver's selftest rawdev API can be used to run the SDP EP mode
+ functional tests, which send/receive raw data packets to/from the
+ EP device, as sketched below.
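+
+A minimal sketch of invoking the self test (the device name ``SDPEP:0`` is
+only an example; substitute the BDF of the VF on your system):
+
+.. code-block:: c
+
+   uint16_t dev_id = rte_rawdev_get_dev_id("SDPEP:0");
+
+   /* Runs the driver's built-in send/receive functional test. */
+   if (rte_rawdev_selftest(dev_id) != 0)
+           printf("SDP EP self test failed\n");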