commit 389020e14594e4894e28d1eb9103c210b142509e (patch)
Author:    Daniel Baumann <daniel.baumann@progress-linux.org>  2024-05-23 16:45:13 +0000
Committer: Daniel Baumann <daniel.baumann@progress-linux.org>  2024-05-23 16:45:13 +0000
Tree:      2ba734cdd7a243f46dda7c3d0cc88c2293d9699f /doc/rbd
Parent:    Adding upstream version 18.2.2. (diff)
Adding upstream version 18.2.3. (upstream/18.2.3)
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'doc/rbd')
-rw-r--r--   doc/rbd/index.rst                      7
-rw-r--r--   doc/rbd/libvirt.rst                  122
-rw-r--r--   doc/rbd/nvmeof-initiator-esx.rst      70
-rw-r--r--   doc/rbd/nvmeof-initiator-linux.rst    83
-rw-r--r--   doc/rbd/nvmeof-initiators.rst         16
-rw-r--r--   doc/rbd/nvmeof-overview.rst           48
-rw-r--r--   doc/rbd/nvmeof-requirements.rst       14
-rw-r--r--   doc/rbd/nvmeof-target-configure.rst  122
-rw-r--r--   doc/rbd/rados-rbd-cmds.rst             8
-rw-r--r--   doc/rbd/rbd-encryption.rst            12
-rw-r--r--   doc/rbd/rbd-integrations.rst           1
-rw-r--r--   doc/rbd/rbd-nomad.rst                  1
-rw-r--r--   doc/rbd/rbd-snapshot.rst              31
13 files changed, 453 insertions, 82 deletions
diff --git a/doc/rbd/index.rst b/doc/rbd/index.rst
index 4a8029bba..96f1e1389 100644
--- a/doc/rbd/index.rst
+++ b/doc/rbd/index.rst
@@ -32,9 +32,9 @@ the ``librbd`` library.
Ceph's block devices deliver high performance with vast scalability to
`kernel modules`_, or to :abbr:`KVMs (kernel virtual machines)` such as `QEMU`_, and
-cloud-based computing systems like `OpenStack`_ and `CloudStack`_ that rely on
-libvirt and QEMU to integrate with Ceph block devices. You can use the same cluster
-to operate the :ref:`Ceph RADOS Gateway <object-gateway>`, the
+cloud-based computing systems like `OpenStack`_, `OpenNebula`_ and `CloudStack`_
+that rely on libvirt and QEMU to integrate with Ceph block devices. You can use
+the same cluster to operate the :ref:`Ceph RADOS Gateway <object-gateway>`, the
:ref:`Ceph File System <ceph-file-system>`, and Ceph block devices simultaneously.
.. important:: To use Ceph Block Devices, you must have access to a running
@@ -69,4 +69,5 @@ to operate the :ref:`Ceph RADOS Gateway <object-gateway>`, the
.. _kernel modules: ./rbd-ko/
.. _QEMU: ./qemu-rbd/
.. _OpenStack: ./rbd-openstack
+.. _OpenNebula: https://docs.opennebula.io/stable/open_cluster_deployment/storage_setup/ceph_ds.html
.. _CloudStack: ./rbd-cloudstack
diff --git a/doc/rbd/libvirt.rst b/doc/rbd/libvirt.rst
index e3523f8a8..a55a4f95b 100644
--- a/doc/rbd/libvirt.rst
+++ b/doc/rbd/libvirt.rst
@@ -4,11 +4,11 @@
.. index:: Ceph Block Device; livirt
-The ``libvirt`` library creates a virtual machine abstraction layer between
-hypervisor interfaces and the software applications that use them. With
-``libvirt``, developers and system administrators can focus on a common
+The ``libvirt`` library creates a virtual machine abstraction layer between
+hypervisor interfaces and the software applications that use them. With
+``libvirt``, developers and system administrators can focus on a common
management framework, common API, and common shell interface (i.e., ``virsh``)
-to many different hypervisors, including:
+to many different hypervisors, including:
- QEMU/KVM
- XEN
@@ -18,7 +18,7 @@ to many different hypervisors, including:
Ceph block devices support QEMU/KVM. You can use Ceph block devices with
software that interfaces with ``libvirt``. The following stack diagram
-illustrates how ``libvirt`` and QEMU use Ceph block devices via ``librbd``.
+illustrates how ``libvirt`` and QEMU use Ceph block devices via ``librbd``.
.. ditaa::
@@ -41,10 +41,11 @@ illustrates how ``libvirt`` and QEMU use Ceph block devices via ``librbd``.
The most common ``libvirt`` use case involves providing Ceph block devices to
-cloud solutions like OpenStack or CloudStack. The cloud solution uses
+cloud solutions like OpenStack, OpenNebula or CloudStack. The cloud solution uses
``libvirt`` to interact with QEMU/KVM, and QEMU/KVM interacts with Ceph block
-devices via ``librbd``. See `Block Devices and OpenStack`_ and `Block Devices
-and CloudStack`_ for details. See `Installation`_ for installation details.
+devices via ``librbd``. See `Block Devices and OpenStack`_,
+`Block Devices and OpenNebula`_ and `Block Devices and CloudStack`_ for details.
+See `Installation`_ for installation details.
You can also use Ceph block devices with ``libvirt``, ``virsh`` and the
``libvirt`` API. See `libvirt Virtualization API`_ for details.
@@ -62,12 +63,12 @@ Configuring Ceph
To configure Ceph for use with ``libvirt``, perform the following steps:
-#. `Create a pool`_. The following example uses the
+#. `Create a pool`_. The following example uses the
pool name ``libvirt-pool``.::
ceph osd pool create libvirt-pool
- Verify the pool exists. ::
+ Verify the pool exists. ::
ceph osd lspools
@@ -80,23 +81,23 @@ To configure Ceph for use with ``libvirt``, perform the following steps:
and references ``libvirt-pool``. ::
ceph auth get-or-create client.libvirt mon 'profile rbd' osd 'profile rbd pool=libvirt-pool'
-
- Verify the name exists. ::
-
+
+ Verify the name exists. ::
+
ceph auth ls
- **NOTE**: ``libvirt`` will access Ceph using the ID ``libvirt``,
- not the Ceph name ``client.libvirt``. See `User Management - User`_ and
- `User Management - CLI`_ for a detailed explanation of the difference
- between ID and name.
+ **NOTE**: ``libvirt`` will access Ceph using the ID ``libvirt``,
+ not the Ceph name ``client.libvirt``. See `User Management - User`_ and
+ `User Management - CLI`_ for a detailed explanation of the difference
+ between ID and name.
-#. Use QEMU to `create an image`_ in your RBD pool.
+#. Use QEMU to `create an image`_ in your RBD pool.
The following example uses the image name ``new-libvirt-image``
and references ``libvirt-pool``. ::
qemu-img create -f rbd rbd:libvirt-pool/new-libvirt-image 2G
- Verify the image exists. ::
+ Verify the image exists. ::
rbd -p libvirt-pool ls
@@ -111,7 +112,7 @@ To configure Ceph for use with ``libvirt``, perform the following steps:
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok
The ``client.libvirt`` section name should match the cephx user you created
- above.
+ above.
If SELinux or AppArmor is enabled, note that this could prevent the client
process (qemu via libvirt) from doing some operations, such as writing logs
or operate the images or admin socket to the destination locations (``/var/
@@ -123,7 +124,7 @@ Preparing the VM Manager
========================
You may use ``libvirt`` without a VM manager, but you may find it simpler to
-create your first domain with ``virt-manager``.
+create your first domain with ``virt-manager``.
#. Install a virtual machine manager. See `KVM/VirtManager`_ for details. ::
@@ -131,7 +132,7 @@ create your first domain with ``virt-manager``.
#. Download an OS image (if necessary).
-#. Launch the virtual machine manager. ::
+#. Launch the virtual machine manager. ::
sudo virt-manager
@@ -142,12 +143,12 @@ Creating a VM
To create a VM with ``virt-manager``, perform the following steps:
-#. Press the **Create New Virtual Machine** button.
+#. Press the **Create New Virtual Machine** button.
#. Name the new virtual machine domain. In the exemplary embodiment, we
use the name ``libvirt-virtual-machine``. You may use any name you wish,
- but ensure you replace ``libvirt-virtual-machine`` with the name you
- choose in subsequent commandline and configuration examples. ::
+ but ensure you replace ``libvirt-virtual-machine`` with the name you
+ choose in subsequent commandline and configuration examples. ::
libvirt-virtual-machine
@@ -155,9 +156,9 @@ To create a VM with ``virt-manager``, perform the following steps:
/path/to/image/recent-linux.img
- **NOTE:** Import a recent image. Some older images may not rescan for
+ **NOTE:** Import a recent image. Some older images may not rescan for
virtual devices properly.
-
+
#. Configure and start the VM.
#. You may use ``virsh list`` to verify the VM domain exists. ::
@@ -179,11 +180,11 @@ you that root privileges are required. For a reference of ``virsh``
commands, refer to `Virsh Command Reference`_.
-#. Open the configuration file with ``virsh edit``. ::
+#. Open the configuration file with ``virsh edit``. ::
sudo virsh edit {vm-domain-name}
- Under ``<devices>`` there should be a ``<disk>`` entry. ::
+ Under ``<devices>`` there should be a ``<disk>`` entry. ::
<devices>
<emulator>/usr/bin/kvm</emulator>
@@ -196,18 +197,18 @@ commands, refer to `Virsh Command Reference`_.
Replace ``/path/to/image/recent-linux.img`` with the path to the OS image.
- The minimum kernel for using the faster ``virtio`` bus is 2.6.25. See
+ The minimum kernel for using the faster ``virtio`` bus is 2.6.25. See
`Virtio`_ for details.
- **IMPORTANT:** Use ``sudo virsh edit`` instead of a text editor. If you edit
- the configuration file under ``/etc/libvirt/qemu`` with a text editor,
- ``libvirt`` may not recognize the change. If there is a discrepancy between
- the contents of the XML file under ``/etc/libvirt/qemu`` and the result of
- ``sudo virsh dumpxml {vm-domain-name}``, then your VM may not work
+ **IMPORTANT:** Use ``sudo virsh edit`` instead of a text editor. If you edit
+ the configuration file under ``/etc/libvirt/qemu`` with a text editor,
+ ``libvirt`` may not recognize the change. If there is a discrepancy between
+ the contents of the XML file under ``/etc/libvirt/qemu`` and the result of
+ ``sudo virsh dumpxml {vm-domain-name}``, then your VM may not work
properly.
-
-#. Add the Ceph RBD image you created as a ``<disk>`` entry. ::
+
+#. Add the Ceph RBD image you created as a ``<disk>`` entry. ::
<disk type='network' device='disk'>
<source protocol='rbd' name='libvirt-pool/new-libvirt-image'>
@@ -216,21 +217,21 @@ commands, refer to `Virsh Command Reference`_.
<target dev='vdb' bus='virtio'/>
</disk>
- Replace ``{monitor-host}`` with the name of your host, and replace the
- pool and/or image name as necessary. You may add multiple ``<host>``
+ Replace ``{monitor-host}`` with the name of your host, and replace the
+ pool and/or image name as necessary. You may add multiple ``<host>``
entries for your Ceph monitors. The ``dev`` attribute is the logical
- device name that will appear under the ``/dev`` directory of your
- VM. The optional ``bus`` attribute indicates the type of disk device to
- emulate. The valid settings are driver specific (e.g., "ide", "scsi",
+ device name that will appear under the ``/dev`` directory of your
+ VM. The optional ``bus`` attribute indicates the type of disk device to
+ emulate. The valid settings are driver specific (e.g., "ide", "scsi",
"virtio", "xen", "usb" or "sata").
-
+
See `Disks`_ for details of the ``<disk>`` element, and its child elements
and attributes.
-
+
#. Save the file.
-#. If your Ceph Storage Cluster has `Ceph Authentication`_ enabled (it does by
- default), you must generate a secret. ::
+#. If your Ceph Storage Cluster has `Ceph Authentication`_ enabled (it does by
+ default), you must generate a secret. ::
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
@@ -249,11 +250,11 @@ commands, refer to `Virsh Command Reference`_.
ceph auth get-key client.libvirt | sudo tee client.libvirt.key
-#. Set the UUID of the secret. ::
+#. Set the UUID of the secret. ::
sudo virsh secret-set-value --secret {uuid of secret} --base64 $(cat client.libvirt.key) && rm client.libvirt.key secret.xml
- You must also set the secret manually by adding the following ``<auth>``
+ You must also set the secret manually by adding the following ``<auth>``
entry to the ``<disk>`` element you entered earlier (replacing the
``uuid`` value with the result from the command line example above). ::
@@ -266,14 +267,14 @@ commands, refer to `Virsh Command Reference`_.
<auth username='libvirt'>
<secret type='ceph' uuid='{uuid of secret}'/>
</auth>
- <target ...
+ <target ...
- **NOTE:** The exemplary ID is ``libvirt``, not the Ceph name
- ``client.libvirt`` as generated at step 2 of `Configuring Ceph`_. Ensure
- you use the ID component of the Ceph name you generated. If for some reason
- you need to regenerate the secret, you will have to execute
- ``sudo virsh secret-undefine {uuid}`` before executing
+ **NOTE:** The exemplary ID is ``libvirt``, not the Ceph name
+ ``client.libvirt`` as generated at step 2 of `Configuring Ceph`_. Ensure
+ you use the ID component of the Ceph name you generated. If for some reason
+ you need to regenerate the secret, you will have to execute
+ ``sudo virsh secret-undefine {uuid}`` before executing
``sudo virsh secret-set-value`` again.
@@ -285,30 +286,31 @@ To verify that the VM and Ceph are communicating, you may perform the
following procedures.
-#. Check to see if Ceph is running::
+#. Check to see if Ceph is running::
ceph health
-#. Check to see if the VM is running. ::
+#. Check to see if the VM is running. ::
sudo virsh list
-#. Check to see if the VM is communicating with Ceph. Replace
- ``{vm-domain-name}`` with the name of your VM domain::
+#. Check to see if the VM is communicating with Ceph. Replace
+ ``{vm-domain-name}`` with the name of your VM domain::
sudo virsh qemu-monitor-command --hmp {vm-domain-name} 'info block'
#. Check to see if the device from ``<target dev='vdb' bus='virtio'/>`` exists::
-
+
virsh domblklist {vm-domain-name} --details
-If everything looks okay, you may begin using the Ceph block device
+If everything looks okay, you may begin using the Ceph block device
within your VM.
.. _Installation: ../../install
.. _libvirt Virtualization API: http://www.libvirt.org
.. _Block Devices and OpenStack: ../rbd-openstack
+.. _Block Devices and OpenNebula: https://docs.opennebula.io/stable/open_cluster_deployment/storage_setup/ceph_ds.html#datastore-internals
.. _Block Devices and CloudStack: ../rbd-cloudstack
.. _Create a pool: ../../rados/operations/pools#create-a-pool
.. _Create a Ceph User: ../../rados/operations/user-management#add-a-user
diff --git a/doc/rbd/nvmeof-initiator-esx.rst b/doc/rbd/nvmeof-initiator-esx.rst
new file mode 100644
index 000000000..6afa29f1e
--- /dev/null
+++ b/doc/rbd/nvmeof-initiator-esx.rst
@@ -0,0 +1,70 @@
+---------------------------------
+NVMe/TCP Initiator for VMware ESX
+---------------------------------
+
+Prerequisites
+=============
+
+- A VMware ESXi host running VMware vSphere Hypervisor (ESXi) version 7.0U3 or later.
+- Deployed Ceph NVMe-oF gateway.
+- Ceph cluster with NVMe-oF configuration.
+- Subsystem defined in the gateway.
+
+Configuration
+=============
+
+The following instructions use the default vSphere web client and ``esxcli``. A
+worked example with sample values follows the steps below.
+
+1. Enable NVMe/TCP on a NIC:
+
+ .. prompt:: bash #
+
+ esxcli nvme fabric enable --protocol TCP --device vmnicN
+
+ Replace ``N`` with the number of the NIC.
+
+2. Tag a VMkernel NIC to permit NVMe/TCP traffic:
+
+ .. prompt:: bash #
+
+      esxcli network ip interface tag add --interface-name vmkN --tagname NVMeTCP
+
+ Replace ``N`` with the ID of the VMkernel.
+
+3. Configure the VMware ESXi host for NVMe/TCP:
+
+ #. List the NVMe-oF adapter:
+
+ .. prompt:: bash #
+
+ esxcli nvme adapter list
+
+ #. Discover NVMe-oF subsystems:
+
+ .. prompt:: bash #
+
+ esxcli nvme fabric discover -a NVME_TCP_ADAPTER -i GATEWAY_IP -p 4420
+
+   #. Connect to the NVMe-oF gateway subsystem:
+
+ .. prompt:: bash #
+
+ esxcli nvme connect -a NVME_TCP_ADAPTER -i GATEWAY_IP -p 4420 -s SUBSYSTEM_NQN
+
+ #. List the NVMe/TCP controllers:
+
+ .. prompt:: bash #
+
+ esxcli nvme controller list
+
+ #. List the NVMe-oF namespaces in the subsystem:
+
+ .. prompt:: bash #
+
+ esxcli nvme namespace list
+
+4. Verify that the initiator has been set up correctly:
+
+ #. From the vSphere client go to the ESXi host.
+ #. On the Storage page go to the Devices tab.
+   #. Verify that the NVMe/TCP disks are listed in the table.
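+
+For reference, a complete discovery and connection sequence might look like the
+following. The adapter name ``vmhba65``, the gateway address ``10.0.0.10``, and
+the subsystem NQN used here are illustrative values; substitute the values
+reported by your own environment:
+
+.. prompt:: bash #
+
+   esxcli nvme fabric discover -a vmhba65 -i 10.0.0.10 -p 4420
+
+.. prompt:: bash #
+
+   esxcli nvme connect -a vmhba65 -i 10.0.0.10 -p 4420 -s nqn.2016-06.io.spdk:cnode1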
diff --git a/doc/rbd/nvmeof-initiator-linux.rst b/doc/rbd/nvmeof-initiator-linux.rst
new file mode 100644
index 000000000..4889e4132
--- /dev/null
+++ b/doc/rbd/nvmeof-initiator-linux.rst
@@ -0,0 +1,83 @@
+==============================
+ NVMe/TCP Initiator for Linux
+==============================
+
+Prerequisites
+=============
+
+- Kernel 5.0 or later
+- RHEL 9.2 or later
+- Ubuntu 24.04 or later
+- SLES 15 SP3 or later
+
+Installation
+============
+
+1. Install the ``nvme-cli`` package:
+
+ .. prompt:: bash #
+
+ yum install nvme-cli
+
+2. Load the NVMe-oF module:
+
+ .. prompt:: bash #
+
+ modprobe nvme-fabrics
+
+3. Verify the NVMe/TCP target is reachable:
+
+ .. prompt:: bash #
+
+ nvme discover -t tcp -a GATEWAY_IP -s 4420
+
+4. Connect to the NVMe/TCP target:
+
+ .. prompt:: bash #
+
+ nvme connect -t tcp -a GATEWAY_IP -n SUBSYSTEM_NQN
+
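+   For example, if the gateway is reachable at ``10.0.0.10`` and the subsystem
+   was created as ``nqn.2016-06.io.spdk:cnode1`` (both values are illustrative),
+   the connect command becomes:
+
+   .. prompt:: bash #
+
+      nvme connect -t tcp -a 10.0.0.10 -n nqn.2016-06.io.spdk:cnode1
+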
+Next steps
+==========
+
+Verify that the initiator is set up correctly:
+
+1. List the NVMe block devices:
+
+ .. prompt:: bash #
+
+ nvme list
+
+2. Create a filesystem on the desired device:
+
+ .. prompt:: bash #
+
+ mkfs.ext4 NVME_NODE_PATH
+
+3. Mount the filesystem:
+
+ .. prompt:: bash #
+
+ mkdir /mnt/nvmeof
+
+ .. prompt:: bash #
+
+ mount NVME_NODE_PATH /mnt/nvmeof
+
+4. List the files in the ``/mnt/nvmeof`` directory:
+
+ .. prompt:: bash #
+
+ ls /mnt/nvmeof
+
+5. Create a text file in the ``/mnt/nvmeof`` directory:
+
+ .. prompt:: bash #
+
+ echo "Hello NVME-oF" > /mnt/nvmeof/hello.text
+
+6. Verify that the file can be accessed:
+
+ .. prompt:: bash #
+
+ cat /mnt/nvmeof/hello.text
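+
+In the steps above, ``NVME_NODE_PATH`` is the block device node reported by
+``nvme list``, typically of the form ``/dev/nvmeXnY``. For example, if the
+namespace appears as ``/dev/nvme1n1`` (the exact name will vary), steps 2 and 3
+become:
+
+.. prompt:: bash #
+
+   mkfs.ext4 /dev/nvme1n1
+
+.. prompt:: bash #
+
+   mount /dev/nvme1n1 /mnt/nvmeof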
diff --git a/doc/rbd/nvmeof-initiators.rst b/doc/rbd/nvmeof-initiators.rst
new file mode 100644
index 000000000..8fa4a5b9d
--- /dev/null
+++ b/doc/rbd/nvmeof-initiators.rst
@@ -0,0 +1,16 @@
+.. _configuring-the-nvmeof-initiators:
+
+====================================
+ Configuring the NVMe-oF Initiators
+====================================
+
+- `NVMe/TCP Initiator for Linux <../nvmeof-initiator-linux>`_
+
+- `NVMe/TCP Initiator for VMware ESX <../nvmeof-initiator-esx>`_
+
+.. toctree::
+ :maxdepth: 1
+ :hidden:
+
+ Linux <nvmeof-initiator-linux>
+ VMware ESX <nvmeof-initiator-esx>
diff --git a/doc/rbd/nvmeof-overview.rst b/doc/rbd/nvmeof-overview.rst
new file mode 100644
index 000000000..070024a3a
--- /dev/null
+++ b/doc/rbd/nvmeof-overview.rst
@@ -0,0 +1,48 @@
+.. _ceph-nvmeof:
+
+======================
+ Ceph NVMe-oF Gateway
+======================
+
+The NVMe-oF Gateway presents an NVMe-oF target that exports
+RADOS Block Device (RBD) images as NVMe namespaces. The NVMe-oF protocol allows
+clients (initiators) to send NVMe commands to storage devices (targets) over a
+TCP/IP network, enabling clients without native Ceph client support to access
+Ceph block storage.
+
+Each NVMe-oF gateway consists of an `SPDK <https://spdk.io/>`_ NVMe-oF target
+with ``bdev_rbd`` and a control daemon. Ceph’s NVMe-oF gateway can be used to
+provision a fully integrated block-storage infrastructure with all the features
+and benefits of a conventional Storage Area Network (SAN).
+
+.. ditaa::
+ Cluster Network (optional)
+ +-------------------------------------------+
+ | | | |
+ +-------+ +-------+ +-------+ +-------+
+ | | | | | | | |
+ | OSD 1 | | OSD 2 | | OSD 3 | | OSD N |
+ | {s}| | {s}| | {s}| | {s}|
+ +-------+ +-------+ +-------+ +-------+
+ | | | |
+ +--------->| | +---------+ | |<----------+
+ : | | | RBD | | | :
+ | +----------------| Image |----------------+ |
+ | Public Network | {d} | |
+ | +---------+ |
+ | |
+ | +--------------------+ |
+ | +--------------+ | NVMeoF Initiators | +--------------+ |
+ | | NVMe‐oF GW | | +-----------+ | | NVMe‐oF GW | |
+ +-->| RBD Module |<--+ | Various | +-->| RBD Module |<--+
+ | | | | Operating | | | |
+ +--------------+ | | Systems | | +--------------+
+ | +-----------+ |
+ +--------------------+
+
+.. toctree::
+ :maxdepth: 1
+
+ Requirements <nvmeof-requirements>
+   Configuring the NVMe-oF Target <nvmeof-target-configure>
+ Configuring the NVMe-oF Initiators <nvmeof-initiators>
diff --git a/doc/rbd/nvmeof-requirements.rst b/doc/rbd/nvmeof-requirements.rst
new file mode 100644
index 000000000..a53d1c2d7
--- /dev/null
+++ b/doc/rbd/nvmeof-requirements.rst
@@ -0,0 +1,14 @@
+============================
+NVMe-oF Gateway Requirements
+============================
+
+We recommend that you provision at least two NVMe/TCP gateways on different
+nodes to implement a highly available Ceph NVMe/TCP solution.
+
+We recommend at a minimum a single 10Gb Ethernet link in the Ceph public
+network for the gateway. For hardware recommendations, see
+:ref:`hardware-recommendations`.
+
+.. note:: On the NVMe-oF gateway, the memory footprint is a function of the
+ number of mapped RBD images and can grow to be large. Plan memory
+   requirements accordingly based on the number of RBD images to be mapped.
diff --git a/doc/rbd/nvmeof-target-configure.rst b/doc/rbd/nvmeof-target-configure.rst
new file mode 100644
index 000000000..4aa7d6ab7
--- /dev/null
+++ b/doc/rbd/nvmeof-target-configure.rst
@@ -0,0 +1,122 @@
+==========================================
+Installing and Configuring NVMe-oF Targets
+==========================================
+
+Traditionally, block-level access to a Ceph storage cluster has been limited to
+(1) QEMU and ``librbd`` (which is a key enabler for adoption within OpenStack
+environments), and (2) the Linux kernel client. Starting with the Ceph Reef
+release, block-level access has been expanded to offer standard NVMe/TCP
+support, allowing wider platform usage and potentially opening new use cases.
+
+Prerequisites
+=============
+
+- Red Hat Enterprise Linux/CentOS 8.0 (or newer); Linux kernel v4.16 (or newer)
+
+- A working Ceph Reef or later storage cluster, deployed with ``cephadm``
+
+- NVMe-oF gateways, which can be colocated with OSD nodes or run on dedicated nodes
+
+- Separate network subnets for NVMe-oF front-end traffic and Ceph back-end traffic
+
+Explanation
+===========
+
+The Ceph NVMe-oF gateway is both an NVMe-oF target and a Ceph client. Think of
+it as a "translator" between Ceph's RBD interface and the NVME-oF protocol. The
+Ceph NVMe-oF gateway can run on a standalone node or be colocated with other
+daemons, for example on a Ceph Object Store Disk (OSD) node. When colocating
+the Ceph NVMe-oF gateway with other daemons, ensure that sufficient CPU and
+memory are available. The steps below explain how to install and configure the
+Ceph NVMe/TCP gateway for basic operation.
+
+
+Installation
+============
+
+Complete the following steps to install the Ceph NVMe-oF gateway (a worked
+example with sample values follows these steps):
+
+#. Create a pool in which the gateway configuration can be managed:
+
+ .. prompt:: bash #
+
+ ceph osd pool create NVME-OF_POOL_NAME
+
+#. Enable RBD on the NVMe-oF pool:
+
+ .. prompt:: bash #
+
+ rbd pool init NVME-OF_POOL_NAME
+
+#. Deploy the NVMe-oF gateway daemons on a specific set of nodes:
+
+ .. prompt:: bash #
+
+      ceph orch apply nvmeof NVME-OF_POOL_NAME --placement="host01, host02"
+
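+For example, a deployment that keeps the gateway configuration in a pool named
+``nvmeof_pool`` and runs gateways on ``host01`` and ``host02`` (all of these
+names are illustrative) would be:
+
+.. prompt:: bash #
+
+   ceph osd pool create nvmeof_pool
+
+.. prompt:: bash #
+
+   rbd pool init nvmeof_pool
+
+.. prompt:: bash #
+
+   ceph orch apply nvmeof nvmeof_pool --placement="host01, host02"
+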
+Configuration
+=============
+
+Download the ``nvmeof-cli`` container image before first use.
+To download it, run the following command:
+
+.. prompt:: bash #
+
+ podman pull quay.io/ceph/nvmeof-cli:latest
+
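+The commands below pass the gateway address and control port to ``nvmeof-cli``
+on every invocation. To avoid retyping the long ``podman run`` prefix, you can
+optionally define a shell alias. The address ``10.0.0.10`` is illustrative, and
+``5500`` is assumed here to be the gateway's control port:
+
+.. prompt:: bash #
+
+   alias nvmeof-cli='podman run -it quay.io/ceph/nvmeof-cli:latest --server-address 10.0.0.10 --server-port 5500'
+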
+#. Create an NVMe subsystem:
+
+ .. prompt:: bash #
+
+      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port GATEWAY_PORT subsystem add --subsystem SUBSYSTEM_NQN
+
+   The subsystem NQN is a user-defined string, for example ``nqn.2016-06.io.spdk:cnode1``.
+
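+   For example, with a hypothetical gateway reachable at ``10.0.0.10`` and a
+   control port of ``5500``:
+
+   .. prompt:: bash #
+
+      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address 10.0.0.10 --server-port 5500 subsystem add --subsystem nqn.2016-06.io.spdk:cnode1
+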
+#. Define the IP port on the gateway that will process the NVMe/TCP commands and I/O:
+
+   a. On the install node, get the NVMe-oF gateway name:
+
+ .. prompt:: bash #
+
+ ceph orch ps | grep nvme
+
+ b. Define the IP port for the gateway:
+
+ .. prompt:: bash #
+
+         podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port GATEWAY_PORT listener add --subsystem SUBSYSTEM_NQN --gateway-name GATEWAY_NAME --traddr GATEWAY_IP --trsvcid 4420
+
+#. Get the host NQN (NVMe Qualified Name) for each host. On a Linux initiator:
+
+   .. prompt:: bash #
+
+      cat /etc/nvme/hostnqn
+
+   On a VMware ESXi initiator:
+
+   .. prompt:: bash #
+
+      esxcli nvme info get
+
+#. Allow the initiator host to connect to the newly-created NVMe subsystem:
+
+ .. prompt:: bash #
+
+      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port GATEWAY_PORT host add --subsystem SUBSYSTEM_NQN --host "HOST_NQN1, HOST_NQN2"
+
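+   For example, to allow a single initiator whose host NQN is
+   ``nqn.2014-08.org.nvmexpress:uuid:11111111-2222-3333-4444-555555555555``
+   (this NQN, the gateway address, and the control port are illustrative
+   values):
+
+   .. prompt:: bash #
+
+      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address 10.0.0.10 --server-port 5500 host add --subsystem nqn.2016-06.io.spdk:cnode1 --host "nqn.2014-08.org.nvmexpress:uuid:11111111-2222-3333-4444-555555555555"
+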
+#. List all subsystems configured in the gateway:
+
+ .. prompt:: bash #
+
+      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port GATEWAY_PORT subsystem list
+
+#. Create a new NVMe namespace:
+
+ .. prompt:: bash #
+
+      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port GATEWAY_PORT namespace add --subsystem SUBSYSTEM_NQN --rbd-pool POOL_NAME --rbd-image IMAGE_NAME
+
+#. List all namespaces in the subsystem:
+
+ .. prompt:: bash #
+
+      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port GATEWAY_PORT namespace list --subsystem SUBSYSTEM_NQN
+
diff --git a/doc/rbd/rados-rbd-cmds.rst b/doc/rbd/rados-rbd-cmds.rst
index 0bbcb2611..a290dc1e5 100644
--- a/doc/rbd/rados-rbd-cmds.rst
+++ b/doc/rbd/rados-rbd-cmds.rst
@@ -4,7 +4,7 @@
.. index:: Ceph Block Device; image management
-The ``rbd`` command enables you to create, list, introspect and remove block
+The ``rbd`` command enables you to create, list, inspect and remove block
device images. You can also use it to clone images, create snapshots,
rollback an image to a snapshot, view a snapshot, etc. For details on using
the ``rbd`` command, see `RBD – Manage RADOS Block Device (RBD) Images`_ for
@@ -139,7 +139,7 @@ Retrieving Image Information
============================
To retrieve information from a particular image, run the following command, but
-replace ``{image-name}`` with the name for the image:
+replace ``{image-name}`` with the name of the image:
.. prompt:: bash $
@@ -250,13 +250,13 @@ Removing a Deferred Block Device from a Pool
--------------------------------------------
To remove a deferred block device from a pool, run the following command but
-replace ``{image-}`` with the ID of the image to be removed, and replace
+replace ``{image-id}`` with the ID of the image to be removed, and replace
``{pool-name}`` with the name of the pool from which the image is to be
removed:
.. prompt:: bash $
- rbd trash rm {pool-name}/{image-}
+ rbd trash rm {pool-name}/{image-id}
For example:
diff --git a/doc/rbd/rbd-encryption.rst b/doc/rbd/rbd-encryption.rst
index 3f37a8b1c..e9c788ecb 100644
--- a/doc/rbd/rbd-encryption.rst
+++ b/doc/rbd/rbd-encryption.rst
@@ -240,6 +240,18 @@ The same applies to creating a formatted clone of an unformatted
(plaintext) image since an unformatted image does not have a header at
all.
+To map a formatted clone, provide encryption formats and passphrases
+for the clone itself and all of its explicitly formatted parent images.
+The order in which ``encryption-format`` and ``encryption-passphrase-file``
+options should be provided is based on the image hierarchy: start with
+that of the cloned image, then its parent and so on.
+
+Here is an example of a command that maps a formatted clone:
+
+.. prompt:: bash #
+
+ rbd device map -t nbd -o encryption-passphrase-file=clone-passphrase.bin,encryption-passphrase-file=passphrase.bin mypool/myclone
+
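+A variant of the same command that also names the encryption formats explicitly
+is sketched below. It assumes, purely for illustration, that the clone was
+formatted with LUKS2 and its parent image with LUKS1:
+
+.. prompt:: bash #
+
+   rbd device map -t nbd -o encryption-format=luks2,encryption-passphrase-file=clone-passphrase.bin,encryption-format=luks1,encryption-passphrase-file=passphrase.bin mypool/myclone
+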
.. _journal feature: ../rbd-mirroring/#enable-image-journaling-feature
.. _Supported Formats: #supported-formats
.. _rbd-nbd: ../../man/8/rbd-nbd
diff --git a/doc/rbd/rbd-integrations.rst b/doc/rbd/rbd-integrations.rst
index f55604a6f..3c4afe38f 100644
--- a/doc/rbd/rbd-integrations.rst
+++ b/doc/rbd/rbd-integrations.rst
@@ -14,3 +14,4 @@
CloudStack <rbd-cloudstack>
LIO iSCSI Gateway <iscsi-overview>
Windows <rbd-windows>
+ NVMe-oF Gateway <nvmeof-overview>
diff --git a/doc/rbd/rbd-nomad.rst b/doc/rbd/rbd-nomad.rst
index 66d87d6ce..747bc3aca 100644
--- a/doc/rbd/rbd-nomad.rst
+++ b/doc/rbd/rbd-nomad.rst
@@ -372,6 +372,7 @@ using the newly created nomad user id and cephx key::
clusterID = "b9127830-b0cc-4e34-aa47-9d1a2e9949a8"
pool = "nomad"
imageFeatures = "layering"
+ mkfsOptions = "-t ext4"
}
After the ``ceph-volume.hcl`` file has been generated, create the volume:
diff --git a/doc/rbd/rbd-snapshot.rst b/doc/rbd/rbd-snapshot.rst
index 120dd8ec1..4a4309f8e 100644
--- a/doc/rbd/rbd-snapshot.rst
+++ b/doc/rbd/rbd-snapshot.rst
@@ -10,7 +10,7 @@ you can create snapshots of images to retain point-in-time state history. Ceph
also supports snapshot layering, which allows you to clone images (for example,
VM images) quickly and easily. Ceph block device snapshots are managed using
the ``rbd`` command and several higher-level interfaces, including `QEMU`_,
-`libvirt`_, `OpenStack`_, and `CloudStack`_.
+`libvirt`_, `OpenStack`_, `OpenNebula`_ and `CloudStack`_.
.. important:: To use RBD snapshots, you must have a running Ceph cluster.
@@ -18,14 +18,14 @@ the ``rbd`` command and several higher-level interfaces, including `QEMU`_,
.. note:: Because RBD is unaware of any file system within an image (volume),
snapshots are merely `crash-consistent` unless they are coordinated within
the mounting (attaching) operating system. We therefore recommend that you
- pause or stop I/O before taking a snapshot.
-
+ pause or stop I/O before taking a snapshot.
+
If the volume contains a file system, the file system should be in an
internally consistent state before a snapshot is taken. Snapshots taken
without write quiescing could need an `fsck` pass before they are mounted
again. To quiesce I/O you can use `fsfreeze` command. See the `fsfreeze(8)`
- man page for more details.
-
+ man page for more details.
+
For virtual machines, `qemu-guest-agent` can be used to automatically freeze
file systems when creating a snapshot.
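+
+   For example, for a file system mounted at a hypothetical ``/mnt/rbdfs``, the
+   freeze, snapshot, and unfreeze sequence could look like this:
+
+   .. prompt:: bash $
+
+      sudo fsfreeze --freeze /mnt/rbdfs
+      rbd snap create rbd/foo@snapname
+      sudo fsfreeze --unfreeze /mnt/rbdfs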
@@ -44,7 +44,7 @@ Cephx Notes
When `cephx`_ authentication is enabled (it is by default), you must specify a
user name or ID and a path to the keyring containing the corresponding key. See
-:ref:`User Management <user-management>` for details.
+:ref:`User Management <user-management>` for details.
.. prompt:: bash $
@@ -83,7 +83,7 @@ For example:
.. prompt:: bash $
rbd snap create rbd/foo@snapname
-
+
List Snapshots
--------------
@@ -135,7 +135,7 @@ name, the image name, and the snap name:
.. prompt:: bash $
rbd snap rm {pool-name}/{image-name}@{snap-name}
-
+
For example:
.. prompt:: bash $
@@ -186,20 +186,20 @@ snapshot simplifies semantics, making it possible to create clones rapidly.
| | to Parent | |
| (read only) | | (writable) |
+-------------+ +-------------+
-
+
Parent Child
.. note:: The terms "parent" and "child" refer to a Ceph block device snapshot
(parent) and the corresponding image cloned from the snapshot (child).
These terms are important for the command line usage below.
-
+
Each cloned image (child) stores a reference to its parent image, which enables
the cloned image to open the parent snapshot and read it.
A copy-on-write clone of a snapshot behaves exactly like any other Ceph
block device image. You can read to, write from, clone, and resize cloned
images. There are no special restrictions with cloned images. However, the
-copy-on-write clone of a snapshot depends on the snapshot, so you must
+copy-on-write clone of a snapshot depends on the snapshot, so you must
protect the snapshot before you clone it. The diagram below depicts this
process.
@@ -222,7 +222,7 @@ have performed these steps, you can begin cloning the snapshot.
| | | |
+----------------------------+ +-----------------------------+
|
- +--------------------------------------+
+ +--------------------------------------+
|
v
+----------------------------+ +-----------------------------+
@@ -265,7 +265,7 @@ Protecting a Snapshot
---------------------
Clones access the parent snapshots. All clones would break if a user
-inadvertently deleted the parent snapshot. To prevent data loss, you must
+inadvertently deleted the parent snapshot. To prevent data loss, you must
protect the snapshot before you can clone it:
.. prompt:: bash $
@@ -290,13 +290,13 @@ protect the snapshot before you can clone it:
.. prompt:: bash $
rbd clone {pool-name}/{parent-image-name}@{snap-name} {pool-name}/{child-image-name}
-
+
For example:
.. prompt:: bash $
rbd clone rbd/foo@snapname rbd/bar
-
+
.. note:: You may clone a snapshot from one pool to an image in another pool.
For example, you may maintain read-only images and snapshots as templates in
@@ -364,5 +364,6 @@ For example:
.. _cephx: ../../rados/configuration/auth-config-ref/
.. _QEMU: ../qemu-rbd/
.. _OpenStack: ../rbd-openstack/
+.. _OpenNebula: https://docs.opennebula.io/stable/management_and_operations/vm_management/vm_instances.html?highlight=ceph#managing-disk-snapshots
.. _CloudStack: ../rbd-cloudstack/
.. _libvirt: ../libvirt/