author     Daniel Baumann <daniel.baumann@progress-linux.org>  2024-05-23 16:45:13 +0000
committer  Daniel Baumann <daniel.baumann@progress-linux.org>  2024-05-23 16:45:13 +0000
commit     389020e14594e4894e28d1eb9103c210b142509e (patch)
tree       2ba734cdd7a243f46dda7c3d0cc88c2293d9699f /doc/start/intro.rst
parent     Adding upstream version 18.2.2. (diff)
Adding upstream version 18.2.3. (upstream/18.2.3)
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'doc/start/intro.rst')
-rw-r--r--  doc/start/intro.rst | 32 +++++++++++++++-----------------
1 file changed, 15 insertions(+), 17 deletions(-)
diff --git a/doc/start/intro.rst b/doc/start/intro.rst
index 3a50a8733..390f1b2d8 100644
--- a/doc/start/intro.rst
+++ b/doc/start/intro.rst
@@ -9,10 +9,10 @@ System`.
 All :term:`Ceph Storage Cluster` deployments begin with setting up each
 :term:`Ceph Node` and then setting up the network.

 A Ceph Storage Cluster requires the following: at least one Ceph Monitor and at
-least one Ceph Manager, and at least as many Ceph OSDs as there are copies of
-an object stored on the Ceph cluster (for example, if three copies of a given
-object are stored on the Ceph cluster, then at least three OSDs must exist in
-that Ceph cluster).
+least one Ceph Manager, and at least as many :term:`Ceph Object Storage
+Daemon<Ceph OSD>`\s (OSDs) as there are copies of a given object stored in the
+Ceph cluster (for example, if three copies of a given object are stored in the
+Ceph cluster, then at least three OSDs must exist in that Ceph cluster).

 The Ceph Metadata Server is necessary to run Ceph File System clients.

@@ -27,13 +27,13 @@ The Ceph Metadata Server is necessary to run Ceph File System clients.
 |      OSDs     |  |  Monitors  |  |  Managers  |  |      MDSs     |
 +---------------+  +------------+  +------------+  +---------------+

-- **Monitors**: A :term:`Ceph Monitor` (``ceph-mon``) maintains maps
-  of the cluster state, including the monitor map, manager map, the
-  OSD map, the MDS map, and the CRUSH map. These maps are critical
-  cluster state required for Ceph daemons to coordinate with each other.
-  Monitors are also responsible for managing authentication between
-  daemons and clients. At least three monitors are normally required
-  for redundancy and high availability.
+- **Monitors**: A :term:`Ceph Monitor` (``ceph-mon``) maintains maps of the
+  cluster state, including the :ref:`monitor map<display-mon-map>`, manager
+  map, the OSD map, the MDS map, and the CRUSH map. These maps are critical
+  cluster state required for Ceph daemons to coordinate with each other.
+  Monitors are also responsible for managing authentication between daemons and
+  clients. At least three monitors are normally required for redundancy and
+  high availability.

 - **Managers**: A :term:`Ceph Manager` daemon (``ceph-mgr``) is
   responsible for keeping track of runtime metrics and the current
@@ -51,12 +51,10 @@ The Ceph Metadata Server is necessary to run Ceph File System clients.
   heartbeat. At least three Ceph OSDs are normally required for
   redundancy and high availability.

-- **MDSs**: A :term:`Ceph Metadata Server` (MDS, ``ceph-mds``) stores
-  metadata on behalf of the :term:`Ceph File System` (i.e., Ceph Block
-  Devices and Ceph Object Storage do not use MDS). Ceph Metadata
-  Servers allow POSIX file system users to execute basic commands (like
-  ``ls``, ``find``, etc.) without placing an enormous burden on the
-  Ceph Storage Cluster.
+- **MDSs**: A :term:`Ceph Metadata Server` (MDS, ``ceph-mds``) stores metadata
+  for the :term:`Ceph File System`. Ceph Metadata Servers allow CephFS users to
+  run basic commands (like ``ls``, ``find``, etc.) without placing a burden on
+  the Ceph Storage Cluster.

 Ceph stores data as objects within logical storage pools. Using the
 :term:`CRUSH` algorithm, Ceph calculates which placement group (PG) should
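The first hunk ties the minimum OSD count to the number of copies kept of each object, which is the pool's replication factor. That relationship can be verified on a live cluster; the following is a minimal sketch, assuming a reachable cluster, admin credentials, and a replicated pool named ``mypool`` (a placeholder name):

    # Number of copies (replicas) kept for the pool -- its "size".
    $ ceph osd pool get mypool size

    # Count of OSDs in the cluster; this must be >= the largest pool size.
    $ ceph osd stat

    # Overall cluster health, including whether any OSDs are down or out.
    $ ceph -s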
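The rewritten **Monitors** bullet enumerates the maps that ``ceph-mon`` maintains. Each map can be inspected directly from the CLI, which is also what the new :ref:`monitor map<display-mon-map>` cross-reference points at. A sketch, assuming a running cluster and ``client.admin`` credentials:

    $ ceph mon dump        # the monitor map
    $ ceph mgr dump        # the manager map
    $ ceph osd dump        # the OSD map
    $ ceph fs dump         # the MDS (file system) map
    $ ceph osd crush dump  # the CRUSH map, as JSON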
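The closing context lines describe how the :term:`CRUSH` algorithm computes which placement group (PG) holds an object and which OSDs store that PG. That calculation can be previewed without writing any data; in this sketch, ``mypool`` and ``myobject`` are placeholder names:

    # Show the PG that CRUSH computes for the object, plus the set of
    # OSDs (the acting set) that hold its copies.
    $ ceph osd map mypool myobject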