 doc/rados/index.rst | 78 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 78 insertions(+)
diff --git a/doc/rados/index.rst b/doc/rados/index.rst
new file mode 100644
index 000000000..5f3f112e8
--- /dev/null
+++ b/doc/rados/index.rst
@@ -0,0 +1,78 @@
+.. _rados-index:
+
+======================
+ Ceph Storage Cluster
+======================
+
+The :term:`Ceph Storage Cluster` is the foundation for all Ceph deployments.
+Based upon :abbr:`RADOS (Reliable Autonomic Distributed Object Store)`, Ceph
+Storage Clusters consist of two types of daemons: a :term:`Ceph OSD Daemon`
+(OSD), which stores data as objects on a storage node, and a
+:term:`Ceph Monitor` (MON), which maintains a master copy of the cluster map.
+A Ceph Storage Cluster may contain thousands of storage nodes. A minimal
+system has at least one Ceph Monitor and two Ceph OSD Daemons for data
+replication.
+
+The Ceph File System, Ceph Object Storage, and Ceph Block Devices all read
+data from and write data to the Ceph Storage Cluster.
+
+.. raw:: html
+
+ <style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
+ <table cellpadding="10"><colgroup><col width="33%"><col width="33%"><col width="33%"></colgroup><tbody valign="top"><tr><td><h3>Config and Deploy</h3>
+
+Ceph Storage Clusters have a few required settings, but most configuration
+settings have default values. A typical deployment uses a deployment tool
+to define the cluster and bootstrap an initial monitor. See `Deployment`_ for
+details on ``cephadm``.
+
+.. toctree::
+ :maxdepth: 2
+
+ Configuration <configuration/index>
+ Deployment <../cephadm/index>
+
+.. raw:: html
+
+ </td><td><h3>Operations</h3>
+
+Once you have deployed a Ceph Storage Cluster, you may begin operating it.
+
+.. toctree::
+ :maxdepth: 2
+
+ Operations <operations/index>
+
+.. toctree::
+ :maxdepth: 1
+
+ Man Pages <man/index>
+
+
+.. toctree::
+ :hidden:
+
+ troubleshooting/index
+
+.. raw:: html
+
+ </td><td><h3>APIs</h3>
+
+Most Ceph deployments use `Ceph Block Devices`_, `Ceph Object Storage`_, and/or
+the `Ceph File System`_. You may also develop applications that talk directly
+to the Ceph Storage Cluster.
+
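Such applications typically use ``librados``. As a minimal sketch, the Python binding (``python3-rados``, shipped with Ceph) can write an object to a pool and read it back; the pool name, object name, and config-file path below are placeholder assumptions, and running it requires a reachable cluster:

```python
import rados  # python3-rados binding shipped with Ceph


def roundtrip(pool, name, payload, conffile="/etc/ceph/ceph.conf"):
    """Write an object to a pool via librados and read it back."""
    # Connect using the cluster configuration file (path is an assumption).
    cluster = rados.Rados(conffile=conffile)
    cluster.connect()
    try:
        # An I/O context scopes all object operations to one pool.
        ioctx = cluster.open_ioctx(pool)
        try:
            ioctx.write_full(name, payload)  # replace object contents
            return ioctx.read(name)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
```

This goes straight to RADOS, bypassing the block, object, and file gateways, which is appropriate only when your application manages its own object naming and layout.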
+.. toctree::
+ :maxdepth: 2
+
+ APIs <api/index>
+
+.. raw:: html
+
+ </td></tr></tbody></table>
+
+.. _Ceph Block Devices: ../rbd/
+.. _Ceph File System: ../cephfs/
+.. _Ceph Object Storage: ../radosgw/
+.. _Deployment: ../cephadm/