Diffstat
 -rw-r--r--  doc/dev/ceph-volume/index.rst   |  14
 -rw-r--r--  doc/dev/ceph-volume/lvm.rst     | 179
 -rw-r--r--  doc/dev/ceph-volume/plugins.rst |  65
 -rw-r--r--  doc/dev/ceph-volume/systemd.rst |  37
 -rw-r--r--  doc/dev/ceph-volume/zfs.rst     | 176
 5 files changed, 471 insertions, 0 deletions
diff --git a/doc/dev/ceph-volume/index.rst b/doc/dev/ceph-volume/index.rst
new file mode 100644
index 000000000..5feef8089
--- /dev/null
+++ b/doc/dev/ceph-volume/index.rst
@@ -0,0 +1,14 @@
+===================================
+ceph-volume developer documentation
+===================================
+
+.. rubric:: Contents
+
+.. toctree::
+   :maxdepth: 1
+
+
+   plugins
+   lvm
+   zfs
+   systemd
diff --git a/doc/dev/ceph-volume/lvm.rst b/doc/dev/ceph-volume/lvm.rst
new file mode 100644
index 000000000..f2df6d850
--- /dev/null
+++ b/doc/dev/ceph-volume/lvm.rst
@@ -0,0 +1,179 @@
+
+.. _ceph-volume-lvm-api:
+
+LVM
+===
+The backend of ``ceph-volume lvm`` is LVM; it relies heavily on the use of
+tags, which are LVM's way of allowing its volume metadata to be extended.
+These values can later be queried against devices, and this is how they are
+discovered later.
+
+.. warning:: These APIs are not meant to be public, but are documented so that
+             it is clear what the tool is doing behind the scenes. Do not alter
+             any of these values.
+
+
+.. _ceph-volume-lvm-tag-api:
+
+Tag API
+-------
+The process of identifying logical volumes as part of Ceph relies on applying
+tags to all volumes. It follows a naming convention for the namespace that
+looks like::
+
+    ceph.<tag name>=<tag value>
+
+All tags are prefixed with the ``ceph`` keyword to claim ownership of the
+namespace and make it easily identifiable. This is how the OSD ID would be
+used in the context of LVM tags::
+
+    ceph.osd_id=0
+
+
+.. _ceph-volume-lvm-tags:
+
+Metadata
+--------
+The following describes all the metadata from Ceph OSDs that is stored on an
+LVM volume:
+
+
+``type``
+--------
+Describes whether the device is an OSD or a journal, with the ability to
+expand to other types when supported (for example a lockbox).
+
+Example::
+
+    ceph.type=osd
+
+
+``cluster_fsid``
+----------------
+Example::
+
+    ceph.cluster_fsid=7146B649-AE00-4157-9F5D-1DBFF1D52C26
+
+
+``data_device``
+---------------
+Example::
+
+    ceph.data_device=/dev/ceph/data-0
+
+
+``data_uuid``
+-------------
+Example::
+
+    ceph.data_uuid=B76418EB-0024-401C-8955-AE6919D45CC3
+
+
+``journal_device``
+------------------
+Example::
+
+    ceph.journal_device=/dev/ceph/journal-0
+
+
+``journal_uuid``
+----------------
+Example::
+
+    ceph.journal_uuid=2070E121-C544-4F40-9571-0B7F35C6CB2B
+
+
+``encrypted``
+-------------
+Example for encryption enabled with ``luks``::
+
+    ceph.encrypted=1
+
+When encryption is not supported or simply disabled::
+
+    ceph.encrypted=0
+
+
+``osd_fsid``
+------------
+Example::
+
+    ceph.osd_fsid=88ab9018-f84b-4d62-90b4-ce7c076728ff
+
+
+``osd_id``
+----------
+Example::
+
+    ceph.osd_id=1
+
+
+``block_device``
+----------------
+Only used on :term:`bluestore` backends. Captures the path to the logical
+volume.
+
+Example::
+
+    ceph.block_device=/dev/mapper/vg-block-0
+
+
+``block_uuid``
+--------------
+Only used on :term:`bluestore` backends. Captures either the logical volume
+UUID or the partition UUID.
+
+Example::
+
+    ceph.block_uuid=E5F041BB-AAD4-48A8-B3BF-31F7AFD7D73E
+
+
+``db_device``
+-------------
+Only used on :term:`bluestore` backends. Captures the path to the logical
+volume.
+
+Example::
+
+    ceph.db_device=/dev/mapper/vg-db-0
+
+
+``db_uuid``
+-----------
+Only used on :term:`bluestore` backends. Captures either the logical volume
+UUID or the partition UUID.
+
+Example::
+
+    ceph.db_uuid=F9D02CF1-31AB-4910-90A3-6A6302375525
+
+
+``wal_device``
+--------------
+Only used on :term:`bluestore` backends. Captures the path to the logical
+volume.
+
+Example::
+
+    ceph.wal_device=/dev/mapper/vg-wal-0
+
+
+``wal_uuid``
+------------
+Only used on :term:`bluestore` backends. Captures either the logical volume
+UUID or the partition UUID.
+
+Example::
+
+    ceph.wal_uuid=A58D1C68-0D6E-4CB3-8E99-B261AD47CC39
+
+
+``vdo``
+-------
+A VDO-enabled device is detected when the device is being prepared, and the
+result is stored for later checks when activating. This affects mount options
+by appending the ``discard`` mount flag, regardless of the mount flags already
+in use.
+
+Example for an enabled VDO device::
+
+    ceph.vdo=1
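Because the values above are stored as plain LVM tags, they can be inspected
with LVM's own tooling. A minimal sketch of how a tag gets applied and
queried; the volume group and logical volume names here are hypothetical::

    # add a tag to a logical volume (ceph-volume does this while preparing)
    lvchange --addtag ceph.osd_id=0 vg0/osd-data

    # list logical volumes together with their tags
    lvs -o lv_name,vg_name,lv_tags

Reading tags this way is harmless; as the warning above notes, the values
themselves should never be modified by hand.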
diff --git a/doc/dev/ceph-volume/plugins.rst b/doc/dev/ceph-volume/plugins.rst
new file mode 100644
index 000000000..95bc761e2
--- /dev/null
+++ b/doc/dev/ceph-volume/plugins.rst
@@ -0,0 +1,65 @@
+.. _ceph-volume-plugins:
+
+Plugins
+=======
+``ceph-volume`` started initially to provide support for using ``lvm`` as
+the underlying system for an OSD. It is included as part of the tool, but it
+is treated like a plugin.
+
+This modularity allows other device or device-like technologies to consume
+and re-use the utilities and workflows provided.
+
+Adding Plugins
+--------------
+As a Python tool, ``ceph-volume`` handles plugins with ``setuptools`` entry
+points. For a new plugin to be available, it should have an entry similar to
+this in its ``setup.py`` file:
+
+.. code-block:: python
+
+    setup(
+        ...
+        entry_points = dict(
+            ceph_volume_handlers = [
+                'my_command = my_package.my_module:MyClass',
+            ],
+        ),
+    )
+
+``MyClass`` should be a class that accepts ``sys.argv`` as its argument;
+``ceph-volume`` will pass that in at instantiation and call its ``main``
+method.
+
+This is what a plugin for ``ZFS`` could look like, for example:
+
+.. code-block:: python
+
+    import argparse
+
+
+    class ZFS(object):
+
+        help_menu = 'Deploy OSDs with ZFS'
+        _help = """
+    Use ZFS as the underlying technology for OSDs
+
+    --verbose    Increase the verbosity level
+    """
+
+        def __init__(self, argv):
+            self.argv = argv
+
+        def main(self):
+            parser = argparse.ArgumentParser()
+            args = parser.parse_args(self.argv)
+            ...
+
+And its entry point (via ``setuptools``) in ``setup.py`` would look like:
+
+.. code-block:: python
+
+    entry_points = {
+        'ceph_volume_handlers': [
+            'zfs = ceph_volume_zfs.zfs:ZFS',
+        ],
+    },
+
+After installation, the ``zfs`` subcommand would be listed and could be used
+as::
+
+    ceph-volume zfs
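On the consuming side, handlers registered under ``ceph_volume_handlers`` can
be discovered at runtime through the same entry point group. The following is
a minimal sketch of what that discovery could look like; ``load_plugins`` and
``dispatch`` are illustrative helpers, not ``ceph-volume``'s actual internals:

.. code-block:: python

    import pkg_resources


    def load_plugins():
        # map each entry point name (e.g. 'zfs') to its handler class
        plugins = {}
        for ep in pkg_resources.iter_entry_points(group='ceph_volume_handlers'):
            plugins[ep.name] = ep.load()
        return plugins


    def dispatch(argv):
        # the first argument selects the plugin, the rest is passed through
        plugins = load_plugins()
        subcommand, args = argv[0], argv[1:]
        plugins[subcommand](args).main()

With the ``ZFS`` plugin above installed, ``dispatch(['zfs', '--verbose'])``
would instantiate ``ZFS(['--verbose'])`` and call its ``main`` method.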
diff --git a/doc/dev/ceph-volume/systemd.rst b/doc/dev/ceph-volume/systemd.rst
new file mode 100644
index 000000000..8553430ee
--- /dev/null
+++ b/doc/dev/ceph-volume/systemd.rst
@@ -0,0 +1,37 @@
+.. _ceph-volume-systemd-api:
+
+systemd
+=======
+The workflow to *"activate"* an OSD relies on systemd unit files and their
+ability to persist information as a suffix to the instance name.
+
+``ceph-volume`` exposes the following convention for unit files::
+
+    ceph-volume@<sub command>-<extra metadata>
+
+For example, this is what enabling an OSD could look like for the
+:ref:`ceph-volume-lvm` sub-command::
+
+    systemctl enable ceph-volume@lvm-0-8715BEB4-15C5-49DE-BA6F-401086EC7B41
+
+
+The three pieces of information persisted here (the sub-command, the OSD id,
+and the OSD uuid) are needed by the sub-command so that it knows which OSD it
+needs to activate.
+
+Since ``lvm`` is not the only sub-command that will be supported, this
+convention is how other device types will be allowed to be defined.
+
+At some point, for example, plain disks could use::
+
+    systemctl enable ceph-volume@disk-0-8715BEB4-15C5-49DE-BA6F-401086EC7B41
+
+At startup, the systemd unit will execute a helper script that parses the
+suffix and ends up calling ``ceph-volume`` back. Using the previous lvm
+example, that call will look like::
+
+    ceph-volume lvm activate 0 8715BEB4-15C5-49DE-BA6F-401086EC7B41
+
+
+.. warning:: These workflows are not meant to be public, but are documented so
+             that it is clear what the tool is doing behind the scenes. Do not
+             alter any of these values.
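The following is a minimal sketch of how the helper script described above
could split the suffix back into its parts; ``parse_suffix`` is illustrative
only and does not reflect the actual helper:

.. code-block:: python

    def parse_suffix(instance):
        # split on the first two dashes only, since the OSD uuid
        # itself contains dashes
        sub_command, osd_id, osd_uuid = instance.split('-', 2)
        return sub_command, osd_id, osd_uuid


    sub_command, osd_id, osd_uuid = parse_suffix(
        'lvm-0-8715BEB4-15C5-49DE-BA6F-401086EC7B41')
    # the helper would then call:
    #   ceph-volume lvm activate 0 8715BEB4-15C5-49DE-BA6F-401086EC7B41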
diff --git a/doc/dev/ceph-volume/zfs.rst b/doc/dev/ceph-volume/zfs.rst
new file mode 100644
index 000000000..18de7652a
--- /dev/null
+++ b/doc/dev/ceph-volume/zfs.rst
@@ -0,0 +1,176 @@
+
+.. _ceph-volume-zfs-api:
+
+ZFS
+===
+The backend of ``ceph-volume zfs`` is ZFS; it relies heavily on the use of
+tags, which are ZFS's way of allowing its volume metadata to be extended.
+These values can later be queried against devices, and this is how they are
+discovered later.
+
+Currently this interface is only usable when running on FreeBSD.
+
+.. warning:: These APIs are not meant to be public, but are documented so that
+             it is clear what the tool is doing behind the scenes. Do not alter
+             any of these values.
+
+
+.. _ceph-volume-zfs-tag-api:
+
+Tag API
+-------
+The process of identifying filesystems, volumes and pools as part of Ceph
+relies on applying tags to all volumes. It follows a naming convention for the
+namespace that looks like::
+
+    ceph.<tag name>=<tag value>
+
+All tags are prefixed with the ``ceph`` keyword to claim ownership of the
+namespace and make it easily identifiable. This is how the OSD ID would be
+used in the context of ZFS tags::
+
+    ceph.osd_id=0
+
+Tags on filesystems are stored as properties.
+Tags on a zpool are stored in the comment property as a concatenated list
+separated by ``;``.
+
+.. _ceph-volume-zfs-tags:
+
+Metadata
+--------
+The following describes all the metadata from Ceph OSDs that is stored on a
+ZFS filesystem, volume, or pool:
+
+
+``type``
+--------
+Describes whether the device is an OSD or a journal, with the ability to
+expand to other types when supported.
+
+Example::
+
+    ceph.type=osd
+
+
+``cluster_fsid``
+----------------
+Example::
+
+    ceph.cluster_fsid=7146B649-AE00-4157-9F5D-1DBFF1D52C26
+
+
+``data_device``
+---------------
+Example::
+
+    ceph.data_device=/dev/ceph/data-0
+
+
+``data_uuid``
+-------------
+Example::
+
+    ceph.data_uuid=B76418EB-0024-401C-8955-AE6919D45CC3
+
+
+``journal_device``
+------------------
+Example::
+
+    ceph.journal_device=/dev/ceph/journal-0
+
+
+``journal_uuid``
+----------------
+Example::
+
+    ceph.journal_uuid=2070E121-C544-4F40-9571-0B7F35C6CB2B
+
+
+``osd_fsid``
+------------
+Example::
+
+    ceph.osd_fsid=88ab9018-f84b-4d62-90b4-ce7c076728ff
+
+
+``osd_id``
+----------
+Example::
+
+    ceph.osd_id=1
+
+
+``block_device``
+----------------
+Only used on :term:`bluestore` backends. Captures the path to the logical
+volume.
+
+Example::
+
+    ceph.block_device=/dev/gpt/block-0
+
+
+``block_uuid``
+--------------
+Only used on :term:`bluestore` backends. Captures either the logical volume
+UUID or the partition UUID.
+
+Example::
+
+    ceph.block_uuid=E5F041BB-AAD4-48A8-B3BF-31F7AFD7D73E
+
+
+``db_device``
+-------------
+Only used on :term:`bluestore` backends. Captures the path to the logical
+volume.
+
+Example::
+
+    ceph.db_device=/dev/gpt/db-0
+
+
+``db_uuid``
+-----------
+Only used on :term:`bluestore` backends. Captures either the logical volume
+UUID or the partition UUID.
+
+Example::
+
+    ceph.db_uuid=F9D02CF1-31AB-4910-90A3-6A6302375525
+
+
+``wal_device``
+--------------
+Only used on :term:`bluestore` backends. Captures the path to the logical
+volume.
+
+Example::
+
+    ceph.wal_device=/dev/gpt/wal-0
+
+
+``wal_uuid``
+------------
+Only used on :term:`bluestore` backends. Captures either the logical volume
+UUID or the partition UUID.
+
+Example::
+
+    ceph.wal_uuid=A58D1C68-0D6E-4CB3-8E99-B261AD47CC39
+
+
+``compression``
+---------------
+Compression can always be enabled on a volume or filesystem using the native
+ZFS settings; this can be done when the volume or filesystem is created.
+When it is activated by ``ceph-volume zfs``, this tag will be created.
+Compression set manually AFTER ``ceph-volume`` has run will go unnoticed,
+unless this tag is also set manually.
+
+Example for a compression-enabled device::
+
+    ceph.compression=1
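For reference, the native ZFS settings mentioned in the ``compression``
section are applied with ZFS's own tooling, and the concatenated tags on a
zpool can be read back from its comment property. A minimal sketch, using a
hypothetical ``osdpool`` pool::

    # enable native compression when creating a filesystem
    zfs create -o compression=lz4 osdpool/osd-data

    # or on an existing filesystem/volume
    zfs set compression=lz4 osdpool/osd-data

    # read the concatenated ceph tags stored in the zpool comment
    zpool get -H -o value comment osdpool

As with the LVM tags, reading these values is harmless, but per the warning
above they should never be modified by hand.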