===========
MON Service
===========

.. _deploy_additional_monitors:

Deploying additional monitors
=============================

A typical Ceph cluster has three or five monitor daemons that are spread
across different hosts.  We recommend deploying five monitors if there are
five or more nodes in your cluster.

.. _CIDR: https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation

As the cluster grows, cephadm deploys additional monitor daemons
automatically, and as the cluster shrinks, cephadm scales the number of
monitor daemons back down. The smooth execution of this automatic growing
and shrinking depends upon proper subnet configuration.

The cephadm bootstrap procedure assigns the first monitor daemon in the
cluster to a particular subnet. ``cephadm`` designates that subnet as the
default subnet of the cluster. New monitor daemons will be assigned by
default to that subnet unless cephadm is instructed to do otherwise. 

If all of the Ceph monitor daemons in your cluster are in the same subnet,
manual administration of the monitor daemons is not necessary.
``cephadm`` will automatically add up to five monitors to the subnet, as
needed, as new hosts are added to the cluster.

By default, cephadm will deploy five monitor daemons on arbitrary hosts. See
:ref:`orchestrator-cli-placement-spec` for details of specifying
the placement of daemons.
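
Placement can also be expressed as a service specification and applied with
``ceph orch apply -i <file>``. The sketch below pins monitors to three named
hosts; the hostnames and the filename ``mon.yaml`` are illustrative:

```yaml
# mon.yaml -- illustrative mon service spec; host names are placeholders
service_type: mon
placement:
  hosts:
    - host1
    - host2
    - host3
```

Applying this spec (``ceph orch apply -i mon.yaml``) makes cephadm manage the
monitors on exactly those hosts.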

Designating a Particular Subnet for Monitors
--------------------------------------------

To designate a particular IP subnet for use by ceph monitor daemons, use a
command of the following form, including the subnet's address in `CIDR`_
format (e.g., ``10.1.2.0/24``):

  .. prompt:: bash #

     ceph config set mon public_network *<mon-cidr-network>*

  For example:

  .. prompt:: bash #

     ceph config set mon public_network 10.1.2.0/24

Cephadm deploys new monitor daemons only on hosts that have IP addresses in
the designated subnet.
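
The subnet check is plain CIDR membership arithmetic. As a minimal
illustration (this is not cephadm code), Python's standard ``ipaddress``
module performs an equivalent test:

```python
# Sketch only: demonstrates CIDR membership, the same check cephadm
# applies when deciding whether a host is eligible for a new monitor.
import ipaddress

def host_in_subnet(host_ip: str, cidr: str) -> bool:
    """Return True if host_ip falls within the given CIDR network."""
    return ipaddress.ip_address(host_ip) in ipaddress.ip_network(cidr)

print(host_in_subnet("10.1.2.123", "10.1.2.0/24"))   # True
print(host_in_subnet("192.168.0.5", "10.1.2.0/24"))  # False
```

A host whose IP falls outside every configured ``public_network`` entry will
not receive an automatically placed monitor.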

You can also specify two public networks by using a list of networks:

  .. prompt:: bash #

     ceph config set mon public_network *<mon-cidr-network1>,<mon-cidr-network2>*

  For example:

  .. prompt:: bash #

     ceph config set mon public_network 10.1.2.0/24,192.168.0.0/24


Deploying Monitors on a Particular Network
------------------------------------------

You can explicitly specify the IP address or CIDR network for each monitor and
control where each monitor is placed.  To disable automated monitor deployment,
run this command:

  .. prompt:: bash #

    ceph orch apply mon --unmanaged

  To deploy each additional monitor:

  .. prompt:: bash #

    ceph orch daemon add mon *<host1:ip-or-network1>*

  For example, to deploy a second monitor on ``newhost1`` using an IP
  address ``10.1.2.123`` and a third monitor on ``newhost2`` in
  network ``10.1.2.0/24``, run the following commands:

  .. prompt:: bash #

    ceph orch apply mon --unmanaged
    ceph orch daemon add mon newhost1:10.1.2.123
    ceph orch daemon add mon newhost2:10.1.2.0/24

  Now, re-enable automatic placement of daemons, previewing the change with
  the ``--dry-run`` flag:

  .. prompt:: bash #

    ceph orch apply mon --placement="newhost1,newhost2,newhost3" --dry-run

  See :ref:`orchestrator-cli-placement-spec` for details of specifying
  the placement of daemons.

  Finally, apply the new placement by dropping the ``--dry-run`` flag:

  .. prompt:: bash #

    ceph orch apply mon --placement="newhost1,newhost2,newhost3"


Moving Monitors to a Different Network
--------------------------------------

To move monitors to a new network, deploy new monitors on the new network and
then remove the monitors from the old network. Modifying and injecting the
``monmap`` manually is not advised.

First, disable the automated placement of daemons:

  .. prompt:: bash #

    ceph orch apply mon --unmanaged

To deploy each additional monitor:

  .. prompt:: bash #

    ceph orch daemon add mon *<newhost1:ip-or-network1>*

For example, to deploy a second monitor on ``newhost1`` using an IP
address ``10.1.2.123`` and a third monitor on ``newhost2`` in
network ``10.1.2.0/24``, run the following commands:

  .. prompt:: bash #

    ceph orch apply mon --unmanaged
    ceph orch daemon add mon newhost1:10.1.2.123
    ceph orch daemon add mon newhost2:10.1.2.0/24

  Subsequently remove monitors from the old network:

  .. prompt:: bash #

    ceph orch daemon rm *mon.<oldhost1>*

  Update the ``public_network``:

  .. prompt:: bash #

     ceph config set mon public_network *<mon-cidr-network>*

  For example:

  .. prompt:: bash #

     ceph config set mon public_network 10.1.2.0/24

  Now, re-enable automatic placement of daemons, previewing the change with
  the ``--dry-run`` flag:

  .. prompt:: bash #

    ceph orch apply mon --placement="newhost1,newhost2,newhost3" --dry-run

  See :ref:`orchestrator-cli-placement-spec` for details of specifying
  the placement of daemons.

  Finally, apply the new placement by dropping the ``--dry-run`` flag:

  .. prompt:: bash #

    ceph orch apply mon --placement="newhost1,newhost2,newhost3"

Further Reading
===============

* :ref:`rados-operations`
* :ref:`rados-troubleshooting-mon`
* :ref:`cephadm-restore-quorum`