.. _ceph-conf-common-settings:

Common Settings
===============

The `Hardware Recommendations`_ section provides some hardware guidelines for
configuring a Ceph Storage Cluster. It is possible for a single :term:`Ceph
Node` to run multiple daemons. For example, a single node with multiple drives
may run one ``ceph-osd`` for each drive. Ideally, each node is dedicated to one
type of process. For example, some nodes may run ``ceph-osd`` daemons, other
nodes may run ``ceph-mds`` daemons, and still other nodes may run ``ceph-mon``
daemons.

Each node has a name, identified by the ``host`` setting. Monitors also specify
a network address and port (i.e., a domain name or IP address and a port
number), identified by the ``addr`` setting. A basic configuration file
typically specifies only minimal settings for each monitor daemon instance. For
example:

.. code-block:: ini

	[global]
	mon_initial_members = ceph1
	mon_host = 10.0.0.1


.. important:: The ``host`` setting is the short name of the node (i.e., not
   the FQDN). It is **NOT** an IP address either. Enter ``hostname -s`` on
   the command line to retrieve the name of the node. Do not use ``host``
   settings for anything other than initial monitors unless you are deploying
   Ceph manually. You **MUST NOT** specify ``host`` under individual daemons
   when using deployment tools like ``chef`` or ``cephadm``, as those tools
   will enter the appropriate values for you in the cluster map.


.. _ceph-network-config:

Networks
========

See the `Network Configuration Reference`_ for a detailed discussion about
configuring a network for use with Ceph.


Monitors
========

Production Ceph clusters typically provision a minimum of three :term:`Ceph
Monitor` daemons to ensure availability should a monitor instance crash. A
minimum of three monitors allows the Paxos algorithm to determine, from a
majority of the Ceph Monitors in the quorum, which version of the :term:`Ceph
Cluster Map` is the most recent.

.. note:: You may deploy Ceph with a single monitor, but if the instance fails,
   the lack of other monitors may interrupt data service availability.

Ceph Monitors normally listen on port ``3300`` for the new v2 protocol, and ``6789`` for the old v1 protocol.
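
When you need to spell out both protocols explicitly (for example, to pin a
monitor to specific ports), the monitor address can list the v2 and v1
endpoints together. The following is a minimal sketch, with ``10.0.0.1`` as a
placeholder address:

.. code-block:: ini

	[global]
	mon_host = [v2:10.0.0.1:3300,v1:10.0.0.1:6789]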

By default, Ceph expects to store monitor data under the
following path::

	/var/lib/ceph/mon/$cluster-$id

You or a deployment tool (e.g., ``cephadm``) must create the corresponding
directory. With metavariables fully expressed and a cluster named "ceph", the
foregoing directory would evaluate to::

	/var/lib/ceph/mon/ceph-a
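
For a manual deployment, you can create this directory yourself before starting
the monitor. A sketch for a monitor with the ID ``a``, where ``{mon-host}`` is
a placeholder for the monitor's host:

.. prompt:: bash $

	ssh {mon-host}
	sudo mkdir /var/lib/ceph/mon/ceph-a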

For additional details, see the `Monitor Config Reference`_.

.. _Monitor Config Reference: ../mon-config-ref


.. _ceph-osd-config:


Authentication
==============

.. versionadded:: Bobtail 0.56

For Bobtail (v0.56) and later releases, you should explicitly enable or disable
authentication in the ``[global]`` section of your Ceph configuration file. For
example:

.. code-block:: ini

	[global]
	auth_cluster_required = cephx
	auth_service_required = cephx
	auth_client_required = cephx

Additionally, you should enable message signing. See `Cephx Config Reference`_ for details.
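
As a rough sketch, enabling signing can look like the following; the
``cephx_require_signatures`` and ``cephx_sign_messages`` option names are
assumptions here, so confirm them against the `Cephx Config Reference`_:

.. code-block:: ini

	[global]
	cephx_require_signatures = true
	cephx_sign_messages = true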

.. _Cephx Config Reference: ../auth-config-ref


.. _ceph-monitor-config:


OSDs
====

Production Ceph clusters typically deploy :term:`Ceph OSD Daemons` with one OSD
daemon per storage device on a node. BlueStore is now the default back end, but
when using the legacy Filestore back end you must specify a journal size. For
example:

.. code-block:: ini

	[osd]
	osd_journal_size = 10000

	[osd.0]
	host = {hostname} #manual deployments only.


By default, Ceph expects to store a Ceph OSD Daemon's data at the
following path::

	/var/lib/ceph/osd/$cluster-$id

You or a deployment tool (e.g., ``cephadm``) must create the corresponding
directory. With metavariables fully expressed and a cluster named "ceph", this
example would evaluate to::

	/var/lib/ceph/osd/ceph-0

You may override this path using the ``osd_data`` setting, but we recommend
keeping the default location. Create the default directory on your OSD host:

.. prompt:: bash $

	ssh {osd-host}
	sudo mkdir /var/lib/ceph/osd/ceph-{osd-number}

The ``osd_data`` path ideally leads to a mount point on a device that is
separate from the device that contains the operating system and the daemons. If
an OSD is to use a device other than the OS device, prepare it for use with
Ceph and mount it on the directory you just created:

.. prompt:: bash $

	ssh {new-osd-host}
	sudo mkfs -t {fstype} /dev/{disk}
	sudo mount -o user_xattr /dev/{hdd} /var/lib/ceph/osd/ceph-{osd-number}

We recommend using the ``xfs`` file system when running
:command:`mkfs`.  (``btrfs`` and ``ext4`` are not recommended and are no
longer tested.)

See the `OSD Config Reference`_ for additional configuration details.


Heartbeats
==========

During runtime operations, Ceph OSD Daemons check up on other Ceph OSD Daemons
and report their findings to the Ceph Monitor. The defaults work without any
additional settings. However, if you have network latency issues, you may wish
to modify the settings.
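
As an illustrative sketch only (not a recommendation), relaxing the heartbeat
timing on a high-latency network might look like this; the values shown are
arbitrary placeholders:

.. code-block:: ini

	[osd]
	osd_heartbeat_interval = 12
	osd_heartbeat_grace = 40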

See `Configuring Monitor/OSD Interaction`_ for additional details.


.. _ceph-logging-and-debugging:

Logs / Debugging
================

Sometimes you may encounter issues with Ceph that require you to modify the
logging output and use Ceph's debugging features. See `Debugging and Logging`_
for details, including log rotation.
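
For example, raising the debug level of individual subsystems increases log
verbosity at the cost of performance and disk space. A minimal sketch, with
illustrative subsystems and levels:

.. code-block:: ini

	[global]
	debug_ms = 1

	[osd]
	debug_osd = 20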

.. _Debugging and Logging: ../../troubleshooting/log-and-debug


Example ceph.conf
=================

.. literalinclude:: demo-ceph.conf
   :language: ini

.. _ceph-runtime-config:



Running Multiple Clusters (DEPRECATED)
======================================

Each Ceph cluster has an internal name that is used as part of configuration
and log file names as well as directory and mountpoint names.  This name
defaults to "ceph".  Previous releases of Ceph allowed one to specify a custom
name instead, for example "ceph2".  This was intended to facilitate running
multiple logical clusters on the same physical hardware, but in practice this
was rarely exploited and should no longer be attempted.  Prior documentation
could also be misinterpreted as requiring unique cluster names in order to
use ``rbd-mirror``.

Custom cluster names are now considered deprecated and the ability to deploy
them has already been removed from some tools, though existing custom name
deployments continue to operate.  The ability to run and manage clusters with
custom names may be progressively removed by future Ceph releases, so it is
strongly recommended to deploy all new clusters with the default name "ceph".

Some Ceph CLI commands accept an optional ``--cluster`` (cluster name) option.
This option is present purely for backward compatibility and need not be
accommodated by new tools and deployments.
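
For reference, such an invocation generally looks like the sketch below, where
``ceph2`` is a hypothetical custom cluster name and the client is expected to
read its configuration from ``/etc/ceph/ceph2.conf``:

.. prompt:: bash $

	ceph --cluster ceph2 status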

If you do need to allow multiple clusters to exist on the same host, please use
:ref:`cephadm`, which uses containers to fully isolate each cluster.





.. _Hardware Recommendations: ../../../start/hardware-recommendations
.. _Network Configuration Reference: ../network-config-ref
.. _OSD Config Reference: ../osd-config-ref
.. _Configuring Monitor/OSD Interaction: ../mon-osd-interaction