authorDaniel Baumann <daniel.baumann@progress-linux.org>2024-04-21 11:54:28 +0000
committerDaniel Baumann <daniel.baumann@progress-linux.org>2024-04-21 11:54:28 +0000
commite6918187568dbd01842d8d1d2c808ce16a894239 (patch)
tree64f88b554b444a49f656b6c656111a145cbbaa28 /doc/rados/index.rst
parentInitial commit. (diff)
Adding upstream version 18.2.2.
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'doc/rados/index.rst')
-rw-r--r--   doc/rados/index.rst   81
1 file changed, 81 insertions, 0 deletions
diff --git a/doc/rados/index.rst b/doc/rados/index.rst
new file mode 100644
index 000000000..b506b7a7e
--- /dev/null
+++ b/doc/rados/index.rst
@@ -0,0 +1,81 @@
+.. _rados-index:
+
+======================
+ Ceph Storage Cluster
+======================
+
+The :term:`Ceph Storage Cluster` is the foundation for all Ceph deployments.
+Based upon :abbr:`RADOS (Reliable Autonomic Distributed Object Store)`, Ceph
+Storage Clusters consist of several types of daemons:
+
+  1. a :term:`Ceph OSD Daemon` (OSD) stores data as objects on a storage node.
+  2. a :term:`Ceph Monitor` (MON) maintains a master copy of the cluster map.
+  3. a :term:`Ceph Manager` daemon (MGR) keeps track of runtime metrics and
+     the current state of the cluster.
+
+A Ceph Storage Cluster might contain thousands of storage nodes. A
+minimal system has at least one Ceph Monitor and two Ceph OSD
+Daemons for data replication.
+
+The Ceph File System, Ceph Object Storage and Ceph Block Devices read data from
+and write data to the Ceph Storage Cluster.
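+
+At the lowest level, that data lives in the cluster as RADOS objects. The
+following is a minimal sketch, using the Python ``rados`` binding, of writing
+one object and reading it back; it assumes a reachable cluster, a readable
+``/etc/ceph/ceph.conf``, client keyring access, and an existing pool named
+``mypool`` (a placeholder name).
+
+.. code-block:: python
+
+   import rados
+
+   # Connect to the cluster described by the local ceph.conf (path assumed).
+   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
+   cluster.connect()
+   try:
+       # Open an I/O context on a pool; 'mypool' is a placeholder pool name.
+       ioctx = cluster.open_ioctx('mypool')
+       try:
+           # Write an object, then read it back from the cluster.
+           ioctx.write_full('hello-object', b'Hello, RADOS!')
+           print(ioctx.read('hello-object'))
+       finally:
+           ioctx.close()
+   finally:
+       cluster.shutdown()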
+
+.. container:: columns-3
+
+ .. container:: column
+
+ .. raw:: html
+
+ <h3>Config and Deploy</h3>
+
+ Ceph Storage Clusters have a few required settings, but most configuration
+ settings have default values. A typical deployment uses a deployment tool
+ to define a cluster and bootstrap a monitor. See :ref:`cephadm` for details.
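+
+      As a rough illustration of defaults and overrides, the snippet below
+      uses the Python ``rados`` binding to read one option's effective value
+      and override another before connecting; the option names and the
+      ``ceph.conf`` path are examples only.
+
+      .. code-block:: python
+
+         import rados
+
+         # Start from built-in defaults, then apply the local ceph.conf
+         # (the path here is an assumption).
+         cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
+
+         # Most options keep their defaults; conf_get() reports the
+         # effective value of an option.
+         print('mon_host =', cluster.conf_get('mon_host'))
+
+         # Overrides can also be applied programmatically before connecting.
+         cluster.conf_set('client_mount_timeout', '30')
+         print('client_mount_timeout =',
+               cluster.conf_get('client_mount_timeout'))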
+
+ .. toctree::
+ :maxdepth: 2
+
+ Configuration <configuration/index>
+
+ .. container:: column
+
+ .. raw:: html
+
+ <h3>Operations</h3>
+
+ Once you have deployed a Ceph Storage Cluster, you may begin operating
+ your cluster.
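+
+      Routine checks can also be scripted. The sketch below, which uses the
+      Python ``rados`` binding and assumes admin credentials, asks the
+      monitors for the cluster status and prints overall usage:
+
+      .. code-block:: python
+
+         import json
+
+         import rados
+
+         cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
+         cluster.connect()
+         try:
+             # Ask the monitors for the cluster status (the same data that
+             # 'ceph status --format json' reports).
+             ret, outbuf, errs = cluster.mon_command(
+                 json.dumps({'prefix': 'status', 'format': 'json'}), b'')
+             status = json.loads(outbuf)
+             print('health:', status['health']['status'])
+
+             # Overall usage as tracked by the cluster.
+             stats = cluster.get_cluster_stats()
+             print('kb used:', stats['kb_used'],
+                   'objects:', stats['num_objects'])
+         finally:
+             cluster.shutdown()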
+
+ .. toctree::
+ :maxdepth: 2
+
+ Operations <operations/index>
+
+ .. toctree::
+ :maxdepth: 1
+
+ Man Pages <man/index>
+
+ .. toctree::
+ :hidden:
+
+ troubleshooting/index
+
+ .. container:: column
+
+ .. raw:: html
+
+ <h3>APIs</h3>
+
+ Most Ceph deployments use `Ceph Block Devices`_, `Ceph Object Storage`_ and/or the
+ `Ceph File System`_. You may also develop applications that talk directly to
+ the Ceph Storage Cluster.
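+
+      As a rough sketch of such a direct application (assuming the Python
+      ``rados`` binding and a pool named ``mypool``, a placeholder), the
+      snippet below lists the pools in the cluster and the objects stored in
+      one of them:
+
+      .. code-block:: python
+
+         import rados
+
+         cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
+         cluster.connect()
+         try:
+             # Every pool in the cluster is visible through the client API.
+             print('pools:', cluster.list_pools())
+
+             # Iterate over the objects stored in one pool.
+             ioctx = cluster.open_ioctx('mypool')
+             try:
+                 for obj in ioctx.list_objects():
+                     print(obj.key)
+             finally:
+                 ioctx.close()
+         finally:
+             cluster.shutdown()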
+
+ .. toctree::
+ :maxdepth: 2
+
+ APIs <api/index>
+
+.. _Ceph Block Devices: ../rbd/
+.. _Ceph File System: ../cephfs/
+.. _Ceph Object Storage: ../radosgw/
+.. _Deployment: ../cephadm/