.. _rados-index:

======================
 Ceph Storage Cluster
======================

The :term:`Ceph Storage Cluster` is the foundation for all Ceph deployments.
Based upon :abbr:`RADOS (Reliable Autonomic Distributed Object Store)`, Ceph
Storage Clusters consist of several types of daemons: 

  1. a :term:`Ceph OSD Daemon` (OSD) stores data as objects on a storage node
  2. a :term:`Ceph Monitor` (MON) maintains a master copy of the cluster map
  3. a :term:`Ceph Manager` (MGR) keeps track of runtime metrics and the
     current state of the cluster
A Ceph Storage Cluster might contain thousands of storage nodes. A
minimal system has at least one Ceph Monitor and two Ceph OSD
Daemons for data replication. 

The Ceph File System, Ceph Object Storage, and Ceph Block Devices read data
from and write data to the Ceph Storage Cluster.
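
A client application reaches the cluster by contacting the monitors, which
hand it the current cluster map. The following is a minimal sketch using the
Python ``rados`` bindings; it assumes the ``python3-rados`` package is
installed and that a valid ``/etc/ceph/ceph.conf`` and keyring are present on
the client host.

.. code-block:: python

   import rados

   # Read monitor addresses and credentials from the local ceph.conf.
   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
   cluster.connect()

   print("fsid:", cluster.get_fsid())       # unique cluster id
   stats = cluster.get_cluster_stats()      # usage totals across all OSDs
   print("kB used:", stats['kb_used'])

   cluster.shutdown()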

.. container:: columns-3

   .. container:: column

      .. raw:: html

          <h3>Config and Deploy</h3>

      Ceph Storage Clusters have a few required settings, but most configuration
      settings have default values. A typical deployment uses a deployment tool
      to define a cluster and bootstrap a monitor. See :ref:`cephadm` for details.
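
      The same defaults are visible to client applications. As a minimal
      sketch (assuming the Python ``rados`` bindings and a local
      ``ceph.conf``; the option names here are illustrative), a client can
      read an option's effective value or override it before connecting:

      .. code-block:: python

         import rados

         cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
         # Read the effective value, then override it for this client
         # only; the cluster's stored configuration is not changed.
         print(cluster.conf_get('mon_host'))
         cluster.conf_set('client_mount_timeout', '30')
         cluster.connect()
         cluster.shutdown()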

      .. toctree::
         :maxdepth: 2

         Configuration <configuration/index>

   .. container:: column

      .. raw:: html

          <h3>Operations</h3>

      Once you have deployed a Ceph Storage Cluster, you can begin operating
      it.
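
      Routine checks can also be scripted. As a hedged sketch (assuming the
      Python ``rados`` bindings), the monitor command interface accepts the
      same commands as the ``ceph`` CLI, encoded as JSON:

      .. code-block:: python

         import json
         import rados

         cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
         cluster.connect()
         # Equivalent to `ceph status --format json` on the command line.
         ret, out, errs = cluster.mon_command(
             json.dumps({'prefix': 'status', 'format': 'json'}), b'')
         print(json.loads(out)['health'])
         cluster.shutdown()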

      .. toctree::
         :maxdepth: 2

         Operations <operations/index>

      .. toctree::
         :maxdepth: 1

         Man Pages <man/index>

      .. toctree::
         :hidden:

         troubleshooting/index

   .. container:: column

      .. raw:: html

          <h3>APIs</h3>

      Most Ceph deployments use `Ceph Block Devices`_, `Ceph Object Storage`_,
      and/or the `Ceph File System`_. You may also develop applications that
      talk directly to the Ceph Storage Cluster.
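
      As a minimal sketch of direct access (assuming the Python ``rados``
      bindings and an existing pool; the pool and object names here are
      illustrative):

      .. code-block:: python

         import rados

         cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
         cluster.connect()

         # All object I/O goes through a context bound to a single pool.
         ioctx = cluster.open_ioctx('mypool')
         ioctx.write_full('greeting', b'hello world')  # create or replace
         print(ioctx.read('greeting'))                 # b'hello world'
         ioctx.remove_object('greeting')

         ioctx.close()
         cluster.shutdown()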

      .. toctree::
         :maxdepth: 2

         APIs <api/index>

.. _Ceph Block Devices: ../rbd/
.. _Ceph File System: ../cephfs/
.. _Ceph Object Storage: ../radosgw/