=========================
Block Devices and Nomad
=========================

Like Kubernetes, Nomad can use Ceph Block Devices. This is made possible by
`ceph-csi`_, which allows you to dynamically provision RBD images or import
existing RBD images.

Every version of Nomad is compatible with `ceph-csi`_, but the reference
version of Nomad used to generate the procedures and guidance in this document
is Nomad v1.1.2, the latest version available at the time of writing.

To use Ceph Block Devices with Nomad, you must install and configure
``ceph-csi`` within your Nomad environment. The following diagram shows the
Nomad/Ceph technology stack.

.. ditaa::

    +-------------------------+-------------------------+
    |        Container        |        ceph--csi        |
    |                         |          node           |
    |          ^              |            ^            |
    |          |              |            |            |
    +----------+--------------+-------------------------+
    |          |                           |            |
    |          v                           |            |
    |                 Nomad                |            |
    |                                      |            |
    +---------------------------------------------------+
    |                     ceph--csi                     |
    |                     controller                    |
    +--------+------------------------------------------+
             |                             |
             | configures             maps |
             |                             |
             v                             v
    +------------------------+ +------------------------+
    |                        | |        rbd--nbd        |
    |     Kernel Modules     | +------------------------+
    |                        | |         librbd         |
    +------------------------+-+------------------------+
    |                   RADOS Protocol                  |
    +------------------------+-+------------------------+
    |          OSDs          | |        Monitors        |
    +------------------------+ +------------------------+

.. note::
   Nomad has many possible task drivers, but this example uses only a Docker
   container.

.. important::
   ``ceph-csi`` uses the RBD kernel modules by default, which may not support
   all Ceph `CRUSH tunables`_ or `RBD image features`_.

Create a Pool
=============

By default, Ceph block devices use the ``rbd`` pool. Ensure that your Ceph
cluster is running, then create a pool for Nomad persistent storage:

.. prompt:: bash $

   ceph osd pool create nomad

See `Create a Pool`_ for details on specifying the number of placement groups
for your pools and `Placement Groups`_ for details on the number of placement
groups you should set for them.

A newly created pool must be initialized prior to use. Use the ``rbd`` tool to
initialize the pool:

.. prompt:: bash $

   rbd pool init nomad

Configure ceph-csi
==================

Ceph Client Authentication Setup
--------------------------------

Create a new user for Nomad and `ceph-csi`_. Execute the following command and
record the generated key:

.. code-block:: console

   $ ceph auth get-or-create client.nomad mon 'profile rbd' osd 'profile rbd pool=nomad' mgr 'profile rbd pool=nomad'
   [client.nomad]
       key = AQAlh9Rgg2vrDxAARy25T7KHabs6iskSHpAEAQ==

Configure Nomad
---------------

Configuring Nomad to Allow Containers to Use Privileged Mode
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

By default, Nomad does not allow containers to use privileged mode. Configure
Nomad so that containers are allowed to use privileged mode by adding the
following configuration block to ``/etc/nomad.d/nomad.hcl``::

   plugin "docker" {
     config {
       allow_privileged = true
     }
   }

Loading the rbd module
~~~~~~~~~~~~~~~~~~~~~~

Nomad must have the ``rbd`` kernel module loaded. Run the following command to
confirm that the ``rbd`` module is loaded:

.. code-block:: console

   $ lsmod | grep rbd
   rbd                    94208  2
   libceph               364544  1 rbd

If the ``rbd`` module is not loaded, load it:

.. prompt:: bash $

   sudo modprobe rbd

Restarting Nomad
~~~~~~~~~~~~~~~~

Restart Nomad:

.. prompt:: bash $

   sudo systemctl restart nomad
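Before creating the ``ceph-csi`` plugin jobs, you may wish to confirm that the
new ``client.nomad`` user can reach the ``nomad`` pool from the Nomad client
host. The following is a minimal sketch of such a check; it assumes the key
recorded above is available in a keyring on that host (for example
``/etc/ceph/ceph.client.nomad.keyring``) together with a ``ceph.conf`` that
lists your monitors:

.. code-block:: console

   $ # create, list, and remove a small throwaway image as client.nomad
   $ rbd --id nomad create nomad/nomad-smoke-test --size 1
   $ rbd --id nomad ls nomad
   nomad-smoke-test
   $ rbd --id nomad rm nomad/nomad-smoke-test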
Create ceph-csi controller and plugin nodes
===========================================

The `ceph-csi`_ plugin requires two components:

- **Controller plugin**: communicates with the provider's API.
- **Node plugin**: executes tasks on the client.

.. note::
   We'll set the ceph-csi version in those files. See `ceph-csi release`_ for
   information about ceph-csi's compatibility with other versions.

Configure controller plugin
---------------------------

The controller plugin requires the Ceph monitor addresses of the Ceph cluster.
Collect both (1) the Ceph cluster unique ``fsid`` and (2) the monitor
addresses:

.. code-block:: console

   $ ceph mon dump
   <...>
   fsid b9127830-b0cc-4e34-aa47-9d1a2e9949a8
   <...>
   0: [v2:192.168.1.1:3300/0,v1:192.168.1.1:6789/0] mon.a
   1: [v2:192.168.1.2:3300/0,v1:192.168.1.2:6789/0] mon.b
   2: [v2:192.168.1.3:3300/0,v1:192.168.1.3:6789/0] mon.c

Generate a ``ceph-csi-plugin-controller.nomad`` file similar to the example
below. Substitute the ``fsid`` for "clusterID", and the monitor addresses for
"monitors"::

   job "ceph-csi-plugin-controller" {
     datacenters = ["dc1"]
     group "controller" {
       network {
         port "metrics" {}
       }
       task "ceph-controller" {
         template {
           data = <
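If you prefer to collect the "clusterID" and "monitors" values
non-interactively, the following is a minimal sketch. It assumes ``jq`` is
installed on the admin host; the JSON field names shown (``mons``,
``public_addrs``, ``addrvec``) reflect recent Ceph releases and may differ on
yours:

.. code-block:: console

   $ # cluster fsid, used as "clusterID"
   $ ceph fsid
   b9127830-b0cc-4e34-aa47-9d1a2e9949a8
   $ # v1 monitor addresses, used as "monitors"
   $ ceph mon dump --format json 2>/dev/null | \
     jq -r '.mons[].public_addrs.addrvec[] | select(.type == "v1") | .addr'
   192.168.1.1:6789
   192.168.1.2:6789
   192.168.1.3:6789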