author     Daniel Baumann <daniel.baumann@progress-linux.org>  2024-05-23 16:45:17 +0000
committer  Daniel Baumann <daniel.baumann@progress-linux.org>  2024-05-23 16:45:44 +0000
commit     17d6a993fc17d533460c5f40f3908c708e057c18 (patch)
tree       1a3bd93e0ecd74fa02f93a528fe2f87e5314c4b5 /doc/cephadm/host-management.rst
parent     Releasing progress-linux version 18.2.2-0progress7.99u1. (diff)
Merging upstream version 18.2.3.
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'doc/cephadm/host-management.rst')
-rw-r--r--  doc/cephadm/host-management.rst  72
1 file changed, 50 insertions(+), 22 deletions(-)
diff --git a/doc/cephadm/host-management.rst b/doc/cephadm/host-management.rst
index 4b964c5f4..1fba8e582 100644
--- a/doc/cephadm/host-management.rst
+++ b/doc/cephadm/host-management.rst
@@ -74,9 +74,9 @@ To add each new host to the cluster, perform two steps:
ceph orch host add host2 10.10.0.102
ceph orch host add host3 10.10.0.103
- It is best to explicitly provide the host IP address. If an IP is
+ It is best to explicitly provide the host IP address. If an address is
not provided, then the host name will be immediately resolved via
- DNS and that IP will be used.
+ DNS and the result will be used.
One or more labels can also be included to immediately label the
new host. For example, by default the ``_admin`` label will make
@@ -104,7 +104,7 @@ To drain all daemons from a host, run a command of the following form:
The ``_no_schedule`` and ``_no_conf_keyring`` labels will be applied to the
host. See :ref:`cephadm-special-host-labels`.
-If you only want to drain daemons but leave managed ceph conf and keyring
+If you want to drain daemons but leave managed ``ceph.conf`` and keyring
files on the host, you may pass the ``--keep-conf-keyring`` flag to the
drain command.
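+
+For example, to drain the daemons from ``host2`` (an illustrative host name)
+while keeping the managed config and keyring files in place:
+
+.. prompt:: bash #
+
+   ceph orch host drain host2 --keep-conf-keyring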
@@ -115,7 +115,8 @@ drain command.
This will apply the ``_no_schedule`` label to the host but not the
``_no_conf_keyring`` label.
-All OSDs on the host will be scheduled to be removed. You can check the progress of the OSD removal operation with the following command:
+All OSDs on the host will be scheduled to be removed. You can check
+progress of the OSD removal operation with the following command:
.. prompt:: bash #
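+
+   ceph orch osd rm status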
@@ -148,7 +149,7 @@ cluster by running the following command:
Offline host removal
--------------------
-Even if a host is offline and can not be recovered, it can be removed from the
+If a host is offline and cannot be recovered, it can be removed from the
cluster by running a command of the following form:
.. prompt:: bash #
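+
+   ceph orch host rm <host> --offline --force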
@@ -250,8 +251,8 @@ Rescanning Host Devices
=======================
Some servers and external enclosures may not register device removal or insertion with the
-kernel. In these scenarios, you'll need to perform a host rescan. A rescan is typically
-non-disruptive, and can be performed with the following CLI command:
+kernel. In these scenarios, you'll need to perform a device rescan on the appropriate host.
+A rescan is typically non-disruptive, and can be performed with the following CLI command:
.. prompt:: bash #
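+
+   ceph orch host rescan <hostname>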
@@ -314,19 +315,43 @@ create a new CRUSH host located in the specified hierarchy.
.. note::
- The ``location`` attribute will be only affect the initial CRUSH location. Subsequent
- changes of the ``location`` property will be ignored. Also, removing a host will not remove
- any CRUSH buckets.
+ The ``location`` attribute will only affect the initial CRUSH location.
+ Subsequent changes of the ``location`` property will be ignored. Also,
+ removing a host will not remove an associated CRUSH bucket unless the
+ ``--rm-crush-entry`` flag is provided to the ``orch host rm`` command.
See also :ref:`crush_map_default_types`.
+Removing a host from the CRUSH map
+==================================
+
+The ``ceph orch host rm`` command supports removing the associated host bucket
+from the CRUSH map. This is done by providing the ``--rm-crush-entry`` flag.
+
+.. prompt:: bash [ceph:root@host1/]#
+
+ ceph orch host rm host1 --rm-crush-entry
+
+When this flag is specified, cephadm will attempt to remove the host bucket
+from the CRUSH map as part of the host removal process. Note that if
+it fails to do so, cephadm will report the failure and the host will remain under
+cephadm control.
+
+.. note::
+
+ Removal from the CRUSH map will fail if there are OSDs deployed on the
+ host. If you would like to remove all of the host's OSDs as well, first
+ drain them with the ``ceph orch host drain`` command. Once the OSDs have
+ been removed, you may direct cephadm to remove the CRUSH bucket along with
+ the host by using the ``--rm-crush-entry`` flag.
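+
+A typical sequence, using ``host1`` as an illustrative host name, is to drain
+the host, wait until ``ceph orch osd rm status`` shows no remaining OSDs for
+it, and then remove the host together with its CRUSH bucket:
+
+.. prompt:: bash #
+
+   ceph orch host drain host1
+   ceph orch osd rm status
+   ceph orch host rm host1 --rm-crush-entry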
+
OS Tuning Profiles
==================
-Cephadm can be used to manage operating-system-tuning profiles that apply sets
-of sysctl settings to sets of hosts.
+Cephadm can be used to manage operating system tuning profiles that apply
+``sysctl`` settings to sets of hosts.
-Create a YAML spec file in the following format:
+To do so, create a YAML spec file in the following format:
.. code-block:: yaml
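+
+   # A minimal sketch; the profile name, host names, and settings values
+   # below are illustrative examples.
+   profile_name: 23-example-tuned-profile
+   placement:
+     hosts:
+       - host1
+       - host2
+   settings:
+     fs.file-max: 1000000
+     vm.swappiness: '13'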
@@ -345,18 +370,21 @@ Apply the tuning profile with the following command:
ceph orch tuned-profile apply -i <tuned-profile-file-name>
-This profile is written to ``/etc/sysctl.d/`` on each host that matches the
-hosts specified in the placement block of the yaml, and ``sysctl --system`` is
+This profile is written to a file under ``/etc/sysctl.d/`` on each host
+specified in the ``placement`` block, and then ``sysctl --system`` is
run on the host.
.. note::
The exact filename that the profile is written to within ``/etc/sysctl.d/``
is ``<profile-name>-cephadm-tuned-profile.conf``, where ``<profile-name>`` is
- the ``profile_name`` setting that you specify in the YAML spec. Because
+ the ``profile_name`` setting that you specify in the YAML spec. We suggest
+ naming these profiles following the usual ``sysctl.d`` ``NN-xxxxx`` convention. Because
sysctl settings are applied in lexicographical order (sorted by the filename
- in which the setting is specified), you may want to set the ``profile_name``
- in your spec so that it is applied before or after other conf files.
+ in which the setting is specified), you may want to carefully choose
+ the ``profile_name`` in your spec so that it is applied before or after other
+ conf files. Careful naming ensures that the values supplied here either
+ override, or are overridden by, those in other ``sysctl.d`` files, as desired.
.. note::
@@ -365,7 +393,7 @@ run on the host.
.. note::
- Applying tuned profiles is idempotent when the ``--no-overwrite`` option is
+ Applying tuning profiles is idempotent when the ``--no-overwrite`` option is
passed. Moreover, if the ``--no-overwrite`` option is passed, existing
profiles with the same name are not overwritten.
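+
+For example, re-applying a profile without overwriting an existing profile of
+the same name (the file name here is illustrative):
+
+.. prompt:: bash #
+
+   ceph orch tuned-profile apply -i my-tuned-profile.yaml --no-overwrite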
@@ -525,7 +553,7 @@ There are two ways to customize this configuration for your environment:
We do *not recommend* this approach. The path name must be
visible to *any* mgr daemon, and cephadm runs all daemons as
- containers. That means that the file either need to be placed
+ containers. That means that the file must either be placed
inside a customized container image for your deployment, or
manually distributed to the mgr data directory
(``/var/lib/ceph/<cluster-fsid>/mgr.<id>`` on the host, visible at
@@ -578,8 +606,8 @@ Note that ``man hostname`` recommends ``hostname`` to return the bare
host name:
The FQDN (Fully Qualified Domain Name) of the system is the
- name that the resolver(3) returns for the host name, such as,
- ursula.example.com. It is usually the hostname followed by the DNS
+ name that the resolver(3) returns for the host name, for example
+ ``ursula.example.com``. It is usually the short hostname followed by the DNS
domain name (the part after the first dot). You can check the FQDN
using ``hostname --fqdn`` or the domain name using ``dnsdomainname``.
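+
+For example, on a host whose FQDN is ``ursula.example.com``, ``hostname``
+should print the bare name ``ursula`` while ``hostname --fqdn`` should print
+the fully qualified name:
+
+.. prompt:: bash #
+
+   hostname          # ursula
+   hostname --fqdn   # ursula.example.com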