commit 19fcec84d8d7d21e796c7624e521b60d28ee21ed
Author: Daniel Baumann <daniel.baumann@progress-linux.org>
Date:   2024-04-07 18:45:59 +0000

    Adding upstream version 16.2.11+ds.

    Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>

 doc/start/quick-rbd.rst | 69 +++
 1 file changed, 69 insertions(+)
diff --git a/doc/start/quick-rbd.rst b/doc/start/quick-rbd.rst
new file mode 100644
index 000000000..c1cf77098
--- /dev/null
+++ b/doc/start/quick-rbd.rst

==========================
 Block Device Quick Start
==========================

Ensure your :term:`Ceph Storage Cluster` is in an ``active + clean`` state
before working with the :term:`Ceph Block Device`.

.. note:: The Ceph Block Device is also known as :term:`RBD` or :term:`RADOS`
   Block Device.

.. ditaa::

    /------------------\         /----------------\
    |    Admin Node    |         |   ceph-client  |
    |                  +-------->+ cCCC           |
    |    ceph-deploy   |         |      ceph      |
    \------------------/         \----------------/

You may use a virtual machine for your ``ceph-client`` node, but do not
execute the following procedures on the same physical node as your Ceph
Storage Cluster nodes (unless you use a VM). See `FAQ`_ for details.

Create a Block Device Pool
==========================

#. On the admin node, use the ``ceph`` tool to `create a pool`_
   (we recommend the name ``rbd``).

#. On the admin node, use the ``rbd`` tool to initialize the pool for use by
   RBD::

       rbd pool init <pool-name>

Configure a Block Device
========================

#. On the ``ceph-client`` node, create a block device image. ::

       rbd create foo --size 4096 --image-feature layering [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring] [-p {pool-name}]

#. On the ``ceph-client`` node, map the image to a block device. ::

       sudo rbd map foo --name client.admin [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring] [-p {pool-name}]

#. Use the block device by creating a file system on the ``ceph-client``
   node. ::

       sudo mkfs.ext4 -m0 /dev/rbd/{pool-name}/foo

   This may take a few moments.

#. Mount the file system on the ``ceph-client`` node. ::

       sudo mkdir /mnt/ceph-block-device
       sudo mount /dev/rbd/{pool-name}/foo /mnt/ceph-block-device
       cd /mnt/ceph-block-device
#. Optionally configure the block device to be automatically mapped and
   mounted at boot (and unmounted/unmapped at shutdown). See the
   `rbdmap manpage`_.

See `block devices`_ for additional details.

.. _create a pool: ../../rados/operations/pools/#create-a-pool
.. _block devices: ../../rbd
.. _FAQ: http://wiki.ceph.com/How_Can_I_Give_Ceph_a_Try
.. _OS Recommendations: ../os-recommendations
.. _rbdmap manpage: ../../man/8/rbdmap
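The optional boot-time mapping in the last step is driven by two files that the `rbdmap manpage`_ describes. The fragment below is a hypothetical example under the same assumed names (pool ``rbd``, image ``foo``, the ``client.admin`` keyring); adjust all paths and names for your cluster.

```
# /etc/ceph/rbdmap -- one "pool/image  options" entry per device to map at boot
rbd/foo  id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

# /etc/fstab -- "noauto" defers the mount to the rbdmap service,
# which mounts the file system only after the device is mapped
/dev/rbd/rbd/foo  /mnt/ceph-block-device  ext4  noauto  0 0
```

At shutdown the rbdmap service unmounts and unmaps the devices in the reverse order.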
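The numbered steps above can be collected into a small shell sketch. This wrapper is not part of the Ceph documentation: the script itself, and the default pool/image/mountpoint names (``rbd``, ``foo``, ``/mnt/ceph-block-device``), are illustrative assumptions. It defaults to printing each command (``DRY_RUN=1``) so the sequence can be reviewed without a live cluster; set ``DRY_RUN=0`` on a real admin/client node to execute it.

```shell
#!/bin/sh
# Illustrative sketch of the quick-start sequence above; not an official
# Ceph script. Pool, image, size, and mountpoint are example values.
# DRY_RUN=1 (the default here) prints each command instead of running it.
set -eu

POOL=${POOL:-rbd}
IMAGE=${IMAGE:-foo}
SIZE_MB=${SIZE_MB:-4096}
MNT=${MNT:-/mnt/ceph-block-device}
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "$*"          # dry run: show the command
    else
        "$@"               # real run: execute it
    fi
}

# Admin node: create the pool and initialize it for RBD.
run ceph osd pool create "$POOL"
run rbd pool init "$POOL"

# ceph-client node: create the image and map it to a block device.
run rbd create "$IMAGE" --size "$SIZE_MB" --image-feature layering -p "$POOL"
run sudo rbd map "$IMAGE" --name client.admin -p "$POOL"

# ceph-client node: make a file system on the mapped device and mount it.
run sudo mkfs.ext4 -m0 "/dev/rbd/$POOL/$IMAGE"
run sudo mkdir -p "$MNT"
run sudo mount "/dev/rbd/$POOL/$IMAGE" "$MNT"
```

In dry-run mode the script only echoes the eight commands in order, which mirrors the copy-paste flow of the quick start while keeping the per-step ``-m``/``-k`` options from the original text available via your ``ceph.conf`` and keyring instead.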