=================================
 Network Configuration Reference
=================================

Network configuration is critical for building a high-performance :term:`Ceph
Storage Cluster`. The Ceph Storage Cluster does not perform request routing or
dispatching on behalf of the :term:`Ceph Client`. Instead, Ceph Clients make
requests directly to Ceph OSD Daemons. Ceph OSD Daemons perform data replication
on behalf of Ceph Clients, which means replication and other factors impose
additional loads on Ceph Storage Cluster networks.

Our Quick Start configurations provide a trivial Ceph configuration file that
sets monitor IP addresses and daemon host names only. Unless you specify a
cluster network, Ceph assumes a single "public" network. Ceph functions just
fine with a public network only, but you may see significant performance
improvement with a second "cluster" network in a large cluster.

It is possible to run a Ceph Storage Cluster with two networks: a public
(client, front-side) network and a cluster (private, replication, back-side)
network. However, this approach complicates network configuration (both
hardware and software) and does not usually have a significant impact on
overall performance. For this reason, we recommend that dual-NIC systems
either bond these interfaces in active/active mode or implement a layer 3
multipath strategy with, e.g., FRR, for resilience and capacity.

If, despite the complexity, one still wishes to use two networks, each
:term:`Ceph Node` will need to have more than one network interface or VLAN. See `Hardware
Recommendations - Networks`_ for additional details.

.. ditaa::
                               +-------------+
                               | Ceph Client |
                               +----*--*-----+
                                    |  ^
                            Request |  : Response
                                    v  |
 /----------------------------------*--*-------------------------------------\
 |                              Public Network                               |
 \---*--*------------*--*-------------*--*------------*--*------------*--*---/
     ^  ^            ^  ^             ^  ^            ^  ^            ^  ^
     |  |            |  |             |  |            |  |            |  |
     |  :            |  :             |  :            |  :            |  :
     v  v            v  v             v  v            v  v            v  v
 +---*--*---+    +---*--*---+     +---*--*---+    +---*--*---+    +---*--*---+
 | Ceph MON |    | Ceph MDS |     | Ceph OSD |    | Ceph OSD |    | Ceph OSD |
 +----------+    +----------+     +---*--*---+    +---*--*---+    +---*--*---+
                                      ^  ^            ^  ^            ^  ^
     The cluster network relieves     |  |            |  |            |  |
     OSD replication and heartbeat    |  :            |  :            |  :
     traffic from the public network. v  v            v  v            v  v
 /------------------------------------*--*------------*--*------------*--*---\
 |   cCCC                      Cluster Network                               |
 \---------------------------------------------------------------------------/


IP Tables
=========

By default, daemons `bind`_ to ports within the ``6800:7300`` range. You may
configure this range at your discretion. Before configuring your IP tables,
check the default ``iptables`` configuration.

.. prompt:: bash $

   sudo iptables -L

Some Linux distributions include rules that reject all inbound requests
except SSH from all network interfaces. For example:: 

	REJECT all -- anywhere anywhere reject-with icmp-host-prohibited

You will need to delete these rules on both your public and cluster networks
initially, and replace them with appropriate rules when you are ready to 
harden the ports on your Ceph Nodes.
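
One way to remove such a rule is to list the ``INPUT`` chain with rule
numbers and then delete the offending entry by number (``{rule-number}`` is a
placeholder for the number reported by the first command):

.. prompt:: bash $

   sudo iptables -L INPUT --line-numbers
   sudo iptables -D INPUT {rule-number}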


Monitor IP Tables
-----------------

Ceph Monitors listen on ports ``3300`` and ``6789`` by default. Additionally,
Ceph Monitors always operate on the public network. When you add the rule
using the example below, make sure you replace ``{iface}`` with the public
network interface (e.g., ``eth0``, ``eth1``), ``{ip-address}`` with the IP
address of the public network, and ``{netmask}`` with the netmask of the
public network:

.. prompt:: bash $

   sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6789 -j ACCEPT
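
As a concrete illustration (the interface name and subnet here are
hypothetical), the following rule opens both default monitor ports, ``3300``
(msgr2) and ``6789`` (msgr1), on ``eth0`` for a ``192.168.0.0/24`` public
network:

.. prompt:: bash $

   sudo iptables -A INPUT -i eth0 -p tcp -s 192.168.0.0/24 -m multiport --dports 3300,6789 -j ACCEPT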


MDS and Manager IP Tables
-------------------------

A :term:`Ceph Metadata Server` or :term:`Ceph Manager` listens on the first
available port on the public network beginning at port 6800. Note that this
behavior is not deterministic, so if you are running more than one OSD or MDS
on the same host, or if you restart the daemons within a short window of time,
the daemons will bind to higher ports. You should open the entire ``6800:7300``
range by default. When you add the rule using the example below, make sure
you replace ``{iface}`` with the public network interface (e.g., ``eth0``,
``eth1``), ``{ip-address}`` with the IP address of the public network, and
``{netmask}`` with the netmask of the public network.

For example:

.. prompt:: bash $

   sudo iptables -A INPUT -i {iface} -m multiport -p tcp -s {ip-address}/{netmask} --dports 6800:7300 -j ACCEPT


OSD IP Tables
-------------

By default, Ceph OSD Daemons `bind`_ to the first available ports on a Ceph Node
beginning at port 6800.  Note that this behavior is not deterministic, so if you
are running more than one OSD or MDS on the same host, or if you restart the
daemons within a short window of time, the daemons will bind to higher ports.
Each Ceph OSD Daemon on a Ceph Node may use up to four ports:

#. One for talking to clients and monitors.
#. One for sending data to other OSDs.
#. Two for heartbeating on each interface.

.. ditaa::
              /---------------\
              |      OSD      |
              |           +---+----------------+-----------+
              |           | Clients & Monitors | Heartbeat |
              |           +---+----------------+-----------+
              |               |
              |           +---+----------------+-----------+
              |           | Data Replication   | Heartbeat |
              |           +---+----------------+-----------+
              | cCCC          |
              \---------------/

When a daemon fails and restarts without letting go of the port, the restarted
daemon will bind to a new port. You should open the entire ``6800:7300`` port
range to handle this possibility.

If you set up separate public and cluster networks, you must add rules for both
the public network and the cluster network, because clients will connect using
the public network and other Ceph OSD Daemons will connect using the cluster
network. When you add the rule using the example below, make sure you replace
``{iface}`` with the network interface (e.g., ``eth0``, ``eth1``, etc.),
``{ip-address}`` with the IP address and ``{netmask}`` with the netmask of the
public or cluster network. For example:

.. prompt:: bash $

   sudo iptables -A INPUT -i {iface}  -m multiport -p tcp -s {ip-address}/{netmask} --dports 6800:7300 -j ACCEPT
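
For instance, on a host with a hypothetical ``eth0`` interface on a
``192.168.0.0/24`` public network and ``eth1`` on a ``10.0.0.0/24`` cluster
network, you would add one rule per network:

.. prompt:: bash $

   sudo iptables -A INPUT -i eth0 -m multiport -p tcp -s 192.168.0.0/24 --dports 6800:7300 -j ACCEPT
   sudo iptables -A INPUT -i eth1 -m multiport -p tcp -s 10.0.0.0/24 --dports 6800:7300 -j ACCEPT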

.. tip:: If you run Ceph Metadata Servers on the same Ceph Node as the 
   Ceph OSD Daemons, you can consolidate the public network configuration step. 


Ceph Networks
=============

To configure Ceph networks, you must add a network configuration to the
``[global]`` section of the configuration file. Our 5-minute Quick Start
provides a trivial Ceph configuration file that assumes one public network
with client and server on the same network and subnet. Ceph functions just fine
with a public network only. However, Ceph allows you to establish much more
specific criteria, including multiple IP networks and subnet masks for your
public network. You can also establish a separate cluster network to handle OSD
heartbeat, object replication, and recovery traffic. Don't confuse the IP
addresses you set in your configuration with the public-facing IP addresses
network clients may use to access your service. Typical internal IP networks
are often ``192.168.0.0`` or ``10.0.0.0``.

.. tip:: If you specify more than one IP address and subnet mask for
   either the public or the cluster network, the subnets within the network
   must be capable of routing to each other. Additionally, make sure you
   include each IP address/subnet in your IP tables and open ports for them
   as necessary.

.. note:: Ceph uses `CIDR`_ notation for subnets (e.g., ``10.0.0.0/24``).

When you have configured your networks, you may restart your cluster or restart
each daemon. Ceph daemons bind dynamically, so you do not have to restart the
entire cluster at once if you change your network configuration.
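
Putting this together, a minimal ``[global]`` section that declares both
networks might look like the following sketch (the subnets shown are
illustrative):

.. code-block:: ini

     [global]
         public_network = 192.168.0.0/24
         cluster_network = 10.0.0.0/24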


Public Network
--------------

To configure a public network, add the following option to the ``[global]``
section of your Ceph configuration file. 

.. code-block:: ini

	[global]
		# ... elided configuration
		public_network = {public-network/netmask}

.. _cluster-network:

Cluster Network
---------------

If you declare a cluster network, OSDs will route heartbeat, object replication
and recovery traffic over the cluster network. This may improve performance
compared to using a single network. To configure a cluster network, add the
following option to the ``[global]`` section of your Ceph configuration file. 

.. code-block:: ini

	[global]
		# ... elided configuration
		cluster_network = {cluster-network/netmask}

We prefer that the cluster network is **NOT** reachable from the public network
or the Internet for added security.

IPv4/IPv6 Dual Stack Mode
-------------------------

If you want to run in IPv4/IPv6 dual-stack mode and want to define your public
and/or cluster networks, then you need to specify both your IPv4 and IPv6
networks for each:

.. code-block:: ini

	[global]
		# ... elided configuration
		public_network = {IPv4 public-network/netmask}, {IPv6 public-network/netmask}

This is so that Ceph can find a valid IP address for both address families.

If you want just an IPv4 or an IPv6 stack environment, then make sure you set
the ``ms_bind_ipv4`` and ``ms_bind_ipv6`` options correctly.

.. note::
   Binding to IPv4 is enabled by default, so if you just add the option to bind to IPv6
   you'll actually put yourself into dual stack mode. If you want just IPv6, then disable IPv4 and
   enable IPv6. See `Bind`_ below.
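
For example, a minimal sketch of an IPv6-only setup (assuming your public
network is an IPv6 subnet) disables IPv4 binding and enables IPv6:

.. code-block:: ini

     [global]
         ms_bind_ipv4 = false
         ms_bind_ipv6 = true
         public_network = {IPv6 public-network/netmask}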

Ceph Daemons
============

Monitor daemons are each configured to bind to a specific IP address.  These
addresses are normally configured by your deployment tool.  Other components
in the Ceph cluster discover the monitors via the ``mon host`` configuration
option, normally specified in the ``[global]`` section of the ``ceph.conf`` file.

.. code-block:: ini

     [global]
         mon_host = 10.0.0.2, 10.0.0.3, 10.0.0.4

The ``mon_host`` value can be a list of IP addresses or a name that is
looked up via DNS.  In the case of a DNS name with multiple A or AAAA
records, all records are probed in order to discover a monitor.  Once
one monitor is reached, all other current monitors are discovered, so
the ``mon host`` configuration option only needs to be sufficiently up
to date such that a client can reach one monitor that is currently online.
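
For example, a DNS name with one A or AAAA record per monitor (the hostname
here is hypothetical) can stand in for the explicit address list:

.. code-block:: ini

     [global]
         mon_host = mons.example.com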

The MGR, OSD, and MDS daemons will bind to any available address and
do not require any special configuration.  However, it is possible to
specify a specific IP address for them to bind to with the ``public
addr`` (and/or, in the case of OSD daemons, the ``cluster addr``)
configuration option.  For example,

.. code-block:: ini

	[osd.0]
		public addr = {host-public-ip-address}
		cluster addr = {host-cluster-ip-address}

.. topic:: One NIC OSD in a Two Network Cluster

   Generally, we do not recommend deploying an OSD host with a single network interface in a 
   cluster with two networks. However, you may accomplish this by forcing the 
   OSD host to operate on the public network by adding a ``public_addr`` entry
   to the ``[osd.n]`` section of the Ceph configuration file, where ``n`` 
   refers to the ID of the OSD with one network interface. Additionally, the public
   network and cluster network must be able to route traffic to each other, 
   which we don't recommend for security reasons.


Network Config Settings
=======================

Network configuration settings are not required. Ceph assumes a public network
with all hosts operating on it unless you specifically configure a cluster 
network.


Public Network
--------------

The public network configuration allows you to specifically define IP
addresses and subnets for the public network. You may specifically assign
static IP addresses or override ``public_network`` settings using the
``public_addr`` setting for a specific daemon.

``public_network``

:Description: The IP address and netmask of the public (front-side) network 
              (e.g., ``192.168.0.0/24``). Set in ``[global]``. You may specify
              comma-separated subnets.

:Type: ``{ip-address}/{netmask} [, {ip-address}/{netmask}]``
:Required: No
:Default: N/A


``public_addr``

:Description: The IP address for the public (front-side) network. 
              Set for each daemon.

:Type: IP Address
:Required: No
:Default: N/A



Cluster Network
---------------

The cluster network configuration allows you to declare a cluster network, and
specifically define IP addresses and subnets for the cluster network. You may
specifically assign static IP addresses or override ``cluster_network``
settings using the ``cluster_addr`` setting for specific OSD daemons.


``cluster_network``

:Description: The IP address and netmask of the cluster (back-side) network 
              (e.g., ``10.0.0.0/24``).  Set in ``[global]``. You may specify
              comma-separated subnets.

:Type: ``{ip-address}/{netmask} [, {ip-address}/{netmask}]``
:Required: No
:Default: N/A


``cluster_addr``

:Description: The IP address for the cluster (back-side) network. 
              Set for each daemon.

:Type: IP Address
:Required: No
:Default: N/A


Bind
----

Bind settings set the default port ranges Ceph OSD and MDS daemons use. The
default range is ``6800:7300``. Ensure that your `IP Tables`_ configuration
allows you to use the configured port range.

You may also enable Ceph daemons to bind to IPv6 addresses instead of IPv4
addresses.
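
As an illustrative sketch (the widened upper port is an arbitrary example),
the following ``[global]`` snippet raises the bind range and enables IPv6
binding alongside the IPv4 default:

.. code-block:: ini

     [global]
         ms_bind_port_min = 6800
         ms_bind_port_max = 7500
         ms_bind_ipv6 = true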


``ms_bind_port_min``

:Description: The minimum port number to which an OSD or MDS daemon will bind.
:Type: 32-bit Integer
:Default: ``6800``
:Required: No


``ms_bind_port_max``

:Description: The maximum port number to which an OSD or MDS daemon will bind.
:Type: 32-bit Integer
:Default: ``7300``
:Required: No

``ms_bind_ipv4``

:Description: Enables Ceph daemons to bind to IPv4 addresses.
:Type: Boolean
:Default: ``true``
:Required: No

``ms_bind_ipv6``

:Description: Enables Ceph daemons to bind to IPv6 addresses.
:Type: Boolean
:Default: ``false``
:Required: No

``public_bind_addr``

:Description: In some dynamic deployments the Ceph MON daemon might bind
              to an IP address locally that is different from the ``public_addr``
              advertised to other peers in the network. The environment must ensure
              that routing rules are set correctly. If ``public_bind_addr`` is set
              the Ceph Monitor daemon will bind to it locally and use ``public_addr``
              in the monmaps to advertise its address to peers. This behavior is limited
              to the Monitor daemon.

:Type: IP Address
:Required: No
:Default: N/A
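
For example, in a NAT-style deployment (all addresses here are hypothetical),
a monitor might bind locally to a private address while advertising a routable
one to its peers:

.. code-block:: ini

     [mon.a]
         public_addr = 203.0.113.10
         public_bind_addr = 10.0.0.2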



TCP
---

Ceph disables TCP buffering by default.


``ms_tcp_nodelay``

:Description: Ceph enables ``ms_tcp_nodelay`` so that each request is sent 
              immediately (no buffering). Disabling `Nagle's algorithm`_
              increases network traffic, which can introduce latency. If you 
              experience large numbers of small packets, you may try 
              disabling ``ms_tcp_nodelay``. 

:Type: Boolean
:Required: No
:Default: ``true``
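
For instance, to re-enable Nagle's algorithm cluster-wide if you observe
floods of small packets (a judgment call, not a general recommendation):

.. code-block:: ini

     [global]
         ms_tcp_nodelay = false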


``ms_tcp_rcvbuf``

:Description: The size of the socket buffer on the receiving end of a network
              connection. Disabled by default.

:Type: 32-bit Integer
:Required: No
:Default: ``0``


``ms_tcp_read_timeout``

:Description: If a client or daemon makes a request to another Ceph daemon and
              does not drop an unused connection, ``ms_tcp_read_timeout``
              defines the connection as idle after the specified number
              of seconds.

:Type: Unsigned 64-bit Integer
:Required: No
:Default: ``900`` (15 minutes)



.. _Scalability and High Availability: ../../../architecture#scalability-and-high-availability
.. _Hardware Recommendations - Networks: ../../../start/hardware-recommendations#networks
.. _hardware recommendations: ../../../start/hardware-recommendations
.. _Monitor / OSD Interaction: ../mon-osd-interaction
.. _Message Signatures: ../auth-config-ref#signatures
.. _CIDR: https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing
.. _Nagle's Algorithm: https://en.wikipedia.org/wiki/Nagle's_algorithm