..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2010-2014 Intel Corporation.

Poll Mode Driver for Paravirtual VMXNET3 NIC
============================================

The VMXNET3 adapter is the next generation of paravirtualized NIC introduced by VMware* ESXi.
It is designed for performance, offers all the features available in VMXNET2, and adds several new features such as
multi-queue support (also known as Receive Side Scaling, RSS),
IPv6 offloads, and MSI/MSI-X interrupt delivery.
The same device can be used in a DPDK application through the VMXNET3 PMD.

In this chapter, two setups that use the VMXNET3 PMD are demonstrated:

#.  VMXNET3 with a native NIC connected to a vSwitch

#.  VMXNET3 chaining VMs connected to a vSwitch

VMXNET3 Implementation in the DPDK
----------------------------------

For details on the VMXNET3 device, refer to the vmxnet3 directory of the VMXNET3 driver sources and the support manual from VMware*.

For performance details, refer to the following link from VMware:

`http://www.vmware.com/pdf/vsp_4_vmxnet3_perf.pdf <http://www.vmware.com/pdf/vsp_4_vmxnet3_perf.pdf>`_

As a PMD, the VMXNET3 driver provides the packet reception and transmission callbacks, vmxnet3_recv_pkts and vmxnet3_xmit_pkts.

The VMXNET3 PMD handles all packet buffer memory allocation; the buffers reside in guest address space
and the PMD is solely responsible for freeing that memory when it is no longer needed.
The packet buffers and the features to be supported are made available to the hypervisor via the VMXNET3 PCI configuration space BARs.
During RX/TX, packet buffers are exchanged by their guest physical addresses (GPAs);
the hypervisor loads the buffers with packets in the RX case and sends packets to the vSwitch in the TX case.

The VMXNET3 PMD is compiled with the vmxnet3 device headers.
The interface is similar to that of the other PMDs available in the DPDK API.
The driver pre-allocates the packet buffers and loads the command ring descriptors in advance.
On packet arrival, the hypervisor fills those packet buffers and writes completion ring descriptors,
which are eventually pulled by the PMD.
After reception, the DPDK application frees the descriptors and loads new packet buffers for the coming packets.
Interrupts are disabled and no notification is required,
which keeps performance up on the RX side, even though the device provides a notification feature.
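
From the application's point of view, receiving on a VMXNET3 port is the usual polled rx-burst loop used with any other PMD.
The following is only a minimal sketch: the port ID, queue ID and burst size are assumptions,
and the received mbufs are simply freed instead of being processed.

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    /* Poll RX queue 0 of the given port (IDs chosen for illustration). */
    static void
    poll_rx(uint16_t port_id)
    {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb_rx, i;

        /* rte_eth_rx_burst() ends up in vmxnet3_recv_pkts(): it pulls the
         * completion ring, returns the filled mbufs and replenishes the
         * command ring with fresh buffers from the configured mempool. */
        nb_rx = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);

        for (i = 0; i < nb_rx; i++) {
            /* ... process the packet here ... */
            rte_pktmbuf_free(bufs[i]);  /* return the buffer to its pool */
        }
    }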

In the transmit routine, the DPDK application fills packet buffer pointers into the descriptors of the command ring
and notifies the hypervisor.
In response, the hypervisor takes the packets, passes them to the vSwitch, and writes into the completion descriptor ring.
The completion ring is read by the PMD on the next call to the transmit routine, and the buffers and descriptors are then freed.
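
Transmission follows the same burst pattern.
A minimal sketch, with the port and queue IDs again chosen only for illustration:

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Transmit a burst of already-built packets on TX queue 0 of a port. */
    static void
    send_burst(uint16_t port_id, struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
        /* rte_eth_tx_burst() maps to vmxnet3_xmit_pkts(): it fills the
         * command ring descriptors with the buffer addresses and notifies
         * the hypervisor; completed buffers are reclaimed on later calls. */
        uint16_t nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_pkts);

        /* Packets the ring could not accept remain owned by the caller. */
        while (nb_tx < nb_pkts)
            rte_pktmbuf_free(pkts[nb_tx++]);
    }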

Features and Limitations of VMXNET3 PMD
---------------------------------------

In release 1.6.0, the VMXNET3 PMD provides the basic functionality of packet reception and transmission.
There are several options available for filtering packets at the VMXNET3 device level, including:

#.  MAC Address based filtering:

    *   Unicast, Broadcast, All Multicast modes - SUPPORTED BY DEFAULT

    *   Multicast with Multicast Filter table - NOT SUPPORTED

    *   Promiscuous mode - SUPPORTED

    *   RSS based load balancing between queues - SUPPORTED

#.  VLAN filtering:

    *   VLAN tag based filtering without load balancing - SUPPORTED

.. note::

    *   Release 1.6.0 does not support separate header and body receive cmd_rings.
        Hence, multi-segment buffers are not supported.
        Only cmd_ring_0 is used for packet buffers, one for each descriptor.

    *   Receive and transmit of scattered packets is not supported.

    *   Multicast with Multicast Filter table is not supported.
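
The supported modes listed above are enabled through the generic ethdev configuration API rather than anything VMXNET3-specific.
The fragment below is a minimal sketch, not a complete port initialization: the queue counts, RSS hash types, VLAN ID and
macro names (which newer DPDK releases prefix with ``RTE_``) are assumptions chosen for illustration.

.. code-block:: c

    #include <rte_ethdev.h>

    static int
    configure_vmxnet3_port(uint16_t port_id, uint16_t nb_rx_queues,
                           uint16_t nb_tx_queues)
    {
        struct rte_eth_conf port_conf = {
            .rxmode = {
                .mq_mode  = ETH_MQ_RX_RSS,              /* RSS between queues */
                .offloads = DEV_RX_OFFLOAD_VLAN_FILTER, /* VLAN tag filtering */
            },
            .rx_adv_conf = {
                .rss_conf = {
                    .rss_hf = ETH_RSS_IP,               /* assumed hash types */
                },
            },
        };
        int ret;

        ret = rte_eth_dev_configure(port_id, nb_rx_queues, nb_tx_queues,
                                    &port_conf);
        if (ret != 0)
            return ret;

        /* Promiscuous mode - SUPPORTED */
        rte_eth_promiscuous_enable(port_id);

        /* VLAN tag based filtering (VLAN ID 100 is an arbitrary example). */
        return rte_eth_dev_vlan_filter(port_id, 100, 1);
    }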

Prerequisites
-------------

The following prerequisites apply:

*   Before starting a VM, a VMXNET3 interface must be assigned to the VM through the VMware vSphere Client.
    This is shown in the figure below.

.. _figure_vmxnet3_int:

.. figure:: img/vmxnet3_int.*

   Assigning a VMXNET3 interface to a VM using VMware vSphere Client

.. note::

    Depending on the Virtual Machine type, the VMware vSphere Client shows the available Ethernet adapters when adding an Ethernet device.
    Ensure that the VM type used offers a VMXNET3 device. Refer to the VMware documentation for a list of such VMs.

.. note::

    Follow the *DPDK Getting Started Guide* to set up the basic DPDK environment.

.. note::

    Follow the *DPDK Sample Application's User Guide* (L2 Forwarding, L3 Forwarding and TestPMD)
    for instructions on how to run a DPDK application using an assigned VMXNET3 device.
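
For orientation, the following is a minimal sketch of how a guest application can initialize the EAL and confirm
that the assigned VMXNET3 device was probed; it uses only the generic ethdev API and prints whatever ports are found.

.. code-block:: c

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>

    int
    main(int argc, char **argv)
    {
        uint16_t port_id;

        if (rte_eal_init(argc, argv) < 0)
            return -1;

        /* List the ethdev ports that were probed, e.g. the vmxnet3 device. */
        RTE_ETH_FOREACH_DEV(port_id) {
            struct rte_eth_dev_info dev_info;

            rte_eth_dev_info_get(port_id, &dev_info);
            printf("port %u: driver %s\n", port_id, dev_info.driver_name);
        }

        return 0;
    }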

VMXNET3 with a Native NIC Connected to a vSwitch
------------------------------------------------

This section describes an example setup for Phy-vSwitch-VM-Phy communication.

.. _figure_vswitch_vm:

.. figure:: img/vswitch_vm.*

   VMXNET3 with a Native NIC Connected to a vSwitch

.. note::

    Other instructions on preparing to use DPDK, such as hugepage enabling and uio port binding, are not listed here.
    Please refer to the *DPDK Getting Started Guide* and the *DPDK Sample Application's User Guide* for detailed instructions.

The packet reception and transmission flow path is::

    Packet generator -> 82576
                     -> VMware ESXi vSwitch
                     -> VMXNET3 device
                     -> Guest VM VMXNET3 port 0 rx burst
                     -> Guest VM 82599 VF port 0 tx burst
                     -> 82599 VF
                     -> Packet generator

VMXNET3 Chaining VMs Connected to a vSwitch
-------------------------------------------

The following figure shows an example of VM-to-VM communication over a Phy-VM-vSwitch-VM-Phy channel.

.. _figure_vm_vm_comms:

.. figure:: img/vm_vm_comms.*

   VMXNET3 Chaining VMs Connected to a vSwitch

.. note::

    When using the L2 Forwarding or L3 Forwarding applications,
    the destination MAC address of the packets needs to be rewritten so that they hit the other VM's VMXNET3 interface.
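
A minimal sketch of such a rewrite is shown below; the destination address value is an arbitrary example,
and the field names follow the older DPDK releases bundled in this tree (``d_addr`` rather than ``dst_addr``).

.. code-block:: c

    #include <rte_ether.h>
    #include <rte_mbuf.h>

    /* Overwrite the destination MAC so the packet targets the peer VM's
     * VMXNET3 interface; the address below is only an example value. */
    static void
    set_peer_dst_mac(struct rte_mbuf *pkt)
    {
        struct rte_ether_hdr *eth =
            rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);
        struct rte_ether_addr peer = {
            .addr_bytes = { 0x00, 0x0c, 0x29, 0x11, 0x22, 0x33 }
        };

        rte_ether_addr_copy(&peer, &eth->d_addr);
    }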

In this example, the packet flow path is::

    Packet generator -> 82599 VF
                     -> Guest VM 82599 port 0 rx burst
                     -> Guest VM VMXNET3 port 1 tx burst
                     -> VMXNET3 device
                     -> VMware ESXi vSwitch
                     -> VMXNET3 device
                     -> Guest VM VMXNET3 port 0 rx burst
                     -> Guest VM 82599 VF port 1 tx burst
                     -> 82599 VF
                     -> Packet generator