==========================================
Installing and Configuring NVMe-oF Targets
==========================================

Traditionally, block-level access to a Ceph storage cluster has been limited to
(1) QEMU and ``librbd`` (which is a key enabler for adoption within OpenStack
environments), and (2) the Linux kernel client. Starting with the Ceph Reef
release, block-level access has been expanded to offer standard NVMe/TCP
support, allowing wider platform usage and potentially opening new use cases.

Prerequisites
=============

-  Red Hat Enterprise Linux/CentOS 8.0 (or newer); Linux kernel v4.16 (or newer)

-  A working Ceph Reef or later storage cluster, deployed with ``cephadm``

-  NVMe-oF gateways, which can be colocated with OSD nodes or run on dedicated nodes

-  Separate network subnets for NVMe-oF front-end traffic and Ceph back-end traffic

Explanation
===========

The Ceph NVMe-oF gateway is both an NVMe-oF target and a Ceph client. Think of
it as a "translator" between Ceph's RBD interface and the NVME-oF protocol. The
Ceph NVMe-oF gateway can run on a standalone node or be colocated with other
daemons, for example on a Ceph Object Storage Daemon (OSD) node. When colocating
the Ceph NVMe-oF gateway with other daemons, ensure that sufficient CPU and
memory are available. The steps below explain how to install and configure the
Ceph NVMe/TCP gateway for basic operation.


Installation
============

Complete the following steps to install the Ceph NVMe-oF gateway (a worked example with concrete values follows this list):

#. Create a pool in which the gateway configuration can be managed:

   .. prompt:: bash #

      ceph osd pool create NVME-OF_POOL_NAME

#. Enable RBD on the NVMe-oF pool:

   .. prompt:: bash #
   
      rbd pool init NVME-OF_POOL_NAME

#. Deploy the NVMe-oF gateway daemons on a specific set of nodes:

   .. prompt:: bash #
   
      ceph orch apply nvmeof NVME-OF_POOL_NAME --placement="host01, host02"
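
For example, the following sequence creates the pool, initializes it for RBD,
and deploys gateway daemons on two hosts. The pool name ``nvmeof_pool`` is an
illustrative placeholder (as are the host names); substitute the values used in
your cluster:

.. prompt:: bash #

   ceph osd pool create nvmeof_pool
   rbd pool init nvmeof_pool
   ceph orch apply nvmeof nvmeof_pool --placement="host01, host02"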

Configuration
=============

Download the ``nvmeof-cli`` container image before first use:

.. prompt:: bash #

   podman pull quay.io/ceph/nvmeof-cli:latest

In the commands that follow, ``GATEWAY_IP`` and ``GATEWAY_PORT`` are the IP
address and control port on which the gateway accepts configuration requests;
the control port defaults to 5500. A complete worked example with concrete
values follows the configuration steps.
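
Each configuration step below invokes ``nvmeof-cli`` through ``podman run``. As
a convenience, you can optionally define a shell alias so that the commands are
shorter; this is only a sketch, and it assumes you substitute the gateway's
real address for ``GATEWAY_IP``:

.. prompt:: bash #

   alias nvmeof-cli='podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port 5500'

The steps below show the full ``podman run`` form so that they can be followed
without the alias.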

#. Create an NVMe subsystem:

   .. prompt:: bash #
   
      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port GATEWAY_PORT subsystem add --subsystem SUBSYSTEM_NQN

   The subsystem NQN is a user-defined string, for example ``nqn.2016-06.io.spdk:cnode1``.

#. Define the IP port on the gateway that will process the NVMe/TCP commands and I/O:

    a. On the install node, get the NVMe-oF gateway name:

       .. prompt:: bash #
       
          ceph orch ps | grep nvme

    b. Define the IP port for the gateway:

       .. prompt:: bash #
    
          podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port GATEWAY_PORT listener add --subsystem SUBSYSTEM_NQN --gateway-name GATEWAY_NAME --traddr GATEWAY_IP --trsvcid 4420

#. Get the host NQN (NVMe Qualified Name) of each initiator host.

   On a Linux host:

   .. prompt:: bash #

      cat /etc/nvme/hostnqn

   On a VMware ESXi host:

   .. prompt:: bash #

      esxcli nvme info get

#. Allow the initiator host(s) to connect to the newly created NVMe subsystem:

   .. prompt:: bash #
    
      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port GATEWAY_PORT host add --subsystem SUBSYSTEM_NQN --host "HOST_NQN1, HOST_NQN2"

#. List all subsystems configured in the gateway:

   .. prompt:: bash #
    
      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port GATEWAY_PORT subsystem list

#. Create a new NVMe namespace:

   .. prompt:: bash #
    
      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port GATEWAY_PORT namespace add --subsystem SUBSYSTEM_NQN --rbd-pool POOL_NAME --rbd-image IMAGE_NAME

#. List all namespaces in the subsystem:

   .. prompt:: bash #
    
      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port GATEWAY_PORT namespace list --subsystem SUBSYSTEM_NQN
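
Putting the configuration steps together, the following is a minimal worked
sketch. All concrete values are illustrative assumptions: a gateway reachable
at ``192.168.50.10`` with the default control port 5500, a listener on TCP port
4420, the example subsystem NQN shown above, a Linux initiator whose host NQN
was read from ``/etc/nvme/hostnqn``, and an RBD image ``image01`` that already
exists in the pool ``nvmeof_pool`` (created beforehand, for example with ``rbd
create nvmeof_pool/image01 --size 100G``). Keep ``GATEWAY_NAME`` set to the
daemon name reported by ``ceph orch ps``:

.. prompt:: bash #

   podman run -it quay.io/ceph/nvmeof-cli:latest --server-address 192.168.50.10 --server-port 5500 subsystem add --subsystem nqn.2016-06.io.spdk:cnode1

   podman run -it quay.io/ceph/nvmeof-cli:latest --server-address 192.168.50.10 --server-port 5500 listener add --subsystem nqn.2016-06.io.spdk:cnode1 --gateway-name GATEWAY_NAME --traddr 192.168.50.10 --trsvcid 4420

   podman run -it quay.io/ceph/nvmeof-cli:latest --server-address 192.168.50.10 --server-port 5500 host add --subsystem nqn.2016-06.io.spdk:cnode1 --host "nqn.2014-08.org.nvmexpress:uuid:1a2b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d"

   podman run -it quay.io/ceph/nvmeof-cli:latest --server-address 192.168.50.10 --server-port 5500 namespace add --subsystem nqn.2016-06.io.spdk:cnode1 --rbd-pool nvmeof_pool --rbd-image image01

After running these commands, the ``subsystem list`` and ``namespace list``
commands shown in the steps above can be used to confirm the configuration.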