author    Daniel Baumann <daniel.baumann@progress-linux.org> 2024-04-07 18:45:59 +0000
committer Daniel Baumann <daniel.baumann@progress-linux.org> 2024-04-07 18:45:59 +0000
commit    19fcec84d8d7d21e796c7624e521b60d28ee21ed (patch)
tree      42d26aa27d1e3f7c0b8bd3fd14e7d7082f5008dc /doc/start/intro.rst
parent    Initial commit. (diff)
Adding upstream version 16.2.11+ds. (upstream/16.2.11+ds, upstream)
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'doc/start/intro.rst')
-rw-r--r--  doc/start/intro.rst  89
1 file changed, 89 insertions(+), 0 deletions(-)
diff --git a/doc/start/intro.rst b/doc/start/intro.rst
new file mode 100644
index 000000000..d05ccbf7a
--- /dev/null
+++ b/doc/start/intro.rst
@@ -0,0 +1,89 @@
+===============
+ Intro to Ceph
+===============
+
+Whether you want to provide :term:`Ceph Object Storage` and/or
+:term:`Ceph Block Device` services to :term:`Cloud Platforms`, deploy
+a :term:`Ceph File System`, or use Ceph for another purpose, all
+:term:`Ceph Storage Cluster` deployments begin with setting up each
+:term:`Ceph Node`, your network, and the Ceph Storage Cluster. A Ceph
+Storage Cluster requires at least one Ceph Monitor, Ceph Manager, and
+Ceph OSD (Object Storage Daemon). The Ceph Metadata Server is also
+required when running Ceph File System clients.
+
+.. ditaa::
+
+ +---------------+ +------------+ +------------+ +---------------+
+ | OSDs | | Monitors | | Managers | | MDSs |
+ +---------------+ +------------+ +------------+ +---------------+
+
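+A quick way to confirm that these daemons are present in a running
+cluster is ``ceph status`` (or its short form ``ceph -s``), which
+summarizes the monitor quorum, the active manager, OSD counts, and MDS
+state in a single report. A minimal check might look like this:
+
+.. code-block:: console
+
+   $ ceph status   # one-shot summary of mon, mgr, OSD, and MDS health
+   $ ceph -w       # the same summary, followed by a live stream of cluster events
+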
+- **Monitors**: A :term:`Ceph Monitor` (``ceph-mon``) maintains maps
+ of the cluster state, including the monitor map, manager map, the
+ OSD map, the MDS map, and the CRUSH map. These maps are critical
+ cluster state required for Ceph daemons to coordinate with each other.
+ Monitors are also responsible for managing authentication between
+ daemons and clients. At least three monitors are normally required
+ for redundancy and high availability.
+
+- **Managers**: A :term:`Ceph Manager` daemon (``ceph-mgr``) is
+ responsible for keeping track of runtime metrics and the current
+ state of the Ceph cluster, including storage utilization, current
+ performance metrics, and system load. The Ceph Manager daemons also
+  host Python-based modules to manage and expose Ceph cluster
+ information, including a web-based :ref:`mgr-dashboard` and
+ `REST API`_. At least two managers are normally required for high
+ availability.
+
+- **Ceph OSDs**: An Object Storage Daemon (:term:`Ceph OSD`,
+  ``ceph-osd``) stores data and handles data replication, recovery,
+  and rebalancing. It also provides some monitoring information to
+  Ceph Monitors and Managers by checking other Ceph OSD Daemons for a
+  heartbeat. At least three Ceph OSDs are normally required for
+  redundancy and high availability.
+
+- **MDSs**: A :term:`Ceph Metadata Server` (MDS, ``ceph-mds``) stores
+  metadata on behalf of the :term:`Ceph File System` (that is, Ceph
+  Block Devices and Ceph Object Storage do not use MDS). Ceph Metadata
+  Servers allow POSIX file system users to execute basic commands
+  (such as ``ls`` and ``find``) without placing an enormous burden on
+  the Ceph Storage Cluster. Commands for inspecting all four daemon
+  types are sketched after this list.
+
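+Each of these daemons can be inspected from the command line. The
+following is a minimal sketch using standard ``ceph`` subcommands; the
+exact output varies by release and by cluster configuration:
+
+.. code-block:: console
+
+   $ ceph mon stat        # monitor quorum membership
+   $ ceph mgr module ls   # manager modules that are enabled or available
+   $ ceph osd tree        # OSDs arranged by their position in the CRUSH hierarchy
+   $ ceph fs status       # file systems and the MDS daemons serving them
+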
+Ceph stores data as objects within logical storage pools. Using the
+:term:`CRUSH` algorithm, Ceph calculates which placement group (PG) should
+contain the object, and which OSD should store the placement group. The
+CRUSH algorithm enables the Ceph Storage Cluster to scale, rebalance, and
+recover dynamically.
+
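+The cluster can perform this calculation on demand with ``ceph osd
+map``. In the sketch below, ``mypool`` and ``myobject`` are
+hypothetical names used only for illustration; ``mypool`` must name an
+existing pool, while ``myobject`` can be any object name, whether or
+not the object has been written:
+
+.. code-block:: console
+
+   $ ceph osd map mypool myobject   # prints the PG and the acting OSDs for the object
+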
+.. _REST API: ../../mgr/restful
+
+.. raw:: html
+
+ <style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
+ <table cellpadding="10"><colgroup><col width="50%"><col width="50%"></colgroup><tbody valign="top"><tr><td><h3>Recommendations</h3>
+
+To begin using Ceph in production, you should review our hardware
+recommendations and operating system recommendations.
+
+.. toctree::
+ :maxdepth: 2
+
+ Hardware Recommendations <hardware-recommendations>
+ OS Recommendations <os-recommendations>
+
+
+.. raw:: html
+
+ </td><td><h3>Get Involved</h3>
+
+ You can avail yourself of help or contribute documentation, source
+ code, or bug reports by getting involved in the Ceph community.
+
+.. toctree::
+ :maxdepth: 2
+
+ get-involved
+ documenting-ceph
+
+.. raw:: html
+
+ </td></tr></tbody></table>