==================
Service Management
==================

A service is a group of daemons configured together. See these chapters
for details on individual services:

.. toctree::
    :maxdepth: 1

    mon
    mgr
    osd
    rgw
    mds
    nfs
    iscsi
    custom-container
    monitoring
    snmp-gateway

Service Status
==============


To see the status of one
of the services running in the Ceph cluster, do the following:

#. Use the command line to print a list of services.
#. Locate the service whose status you want to check.
#. Print the status of the service.

The following command prints a list of services known to the orchestrator. To
limit the output to services of a particular type (mon, osd, mgr, mds, rgw),
use the optional ``--service_type`` parameter. To limit the output to a single
service, use the optional ``--service_name`` parameter:

   .. prompt:: bash #

     ceph orch ls [--service_type type] [--service_name name] [--export] [--format f] [--refresh]
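
For example, to list only RGW services:

   .. prompt:: bash #

     ceph orch ls --service_type rgw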

Discover the status of a particular service or daemon:

   .. prompt:: bash #

     ceph orch ls --service_type <type> --service_name <name> [--refresh]
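
For example, assuming a CephFS file system named ``myfs`` (whose MDS service
would typically be named ``mds.myfs``):

   .. prompt:: bash #

     ceph orch ls --service_type mds --service_name mds.myfs --refresh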

To export the service specifications known to the orchestrator, run the following command:

   .. prompt:: bash #

     ceph orch ls --export

The service specifications are exported in YAML format, and that YAML can be
used with the ``ceph orch apply -i`` command.

For information about retrieving the specifications of single services (including examples of commands), see :ref:`orchestrator-cli-service-spec-retrieve`.

Daemon Status
=============

A daemon is a running systemd unit that is part of a service.

To see the status of a daemon, do the following:

#. Print a list of all daemons known to the orchestrator.
#. Query the status of the target daemon.

First, print a list of all daemons known to the orchestrator:

   .. prompt:: bash #

    ceph orch ps [--hostname host] [--daemon_type type] [--service_name name] [--daemon_id id] [--format f] [--refresh]

Then query the status of a particular service instance (mon, osd, mds, rgw).
For OSDs the id is the numeric OSD ID. For MDS services the id is the file
system name:

   .. prompt:: bash #

    ceph orch ps --daemon_type osd --daemon_id 0
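
To list all daemons of a particular service instead, filter by service name
(``mds.myfs`` here is a hypothetical MDS service):

   .. prompt:: bash #

    ceph orch ps --service_name mds.myfs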

.. _orchestrator-cli-service-spec:

Service Specification
=====================

A *Service Specification* is a data structure that is used to specify the
deployment of services. In addition to parameters such as ``placement`` or
``networks``, the user can set initial values of service configuration
parameters by means of the ``config`` section. For each param/value
configuration pair, cephadm calls the following command to set its value:

   .. prompt:: bash #

    ceph config set <service-name> <param> <value>

cephadm raises health warnings if invalid configuration parameters are found
in the spec (``CEPHADM_INVALID_CONFIG_OPTION``) or if any error occurs while
trying to apply the new configuration option(s) (``CEPHADM_FAILED_SET_OPTION``).
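
For instance, a minimal sketch (the configuration parameter shown is only an
illustration): applying a ``mon`` spec such as

.. code-block:: yaml

    service_type: mon
    config:
      mon_cluster_log_to_file: true

would cause cephadm to run ``ceph config set mon mon_cluster_log_to_file true``.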

Here is an example of a service specification in YAML:

.. code-block:: yaml

    service_type: rgw
    service_id: realm.zone
    placement:
      hosts:
        - host1
        - host2
        - host3
    config:
      param_1: val_1
      ...
      param_N: val_N
    unmanaged: false
    networks:
    - 192.169.142.0/24
    spec:
      # Additional service specific attributes.

In this example, the properties of this service specification are:

.. py:currentmodule:: ceph.deployment.service_spec

.. autoclass:: ServiceSpec
   :members:

Each service type can have additional service-specific properties.

Service specifications of type ``mon``, ``mgr``, and the monitoring
types do not require a ``service_id``.
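
For example, a minimal ``mon`` specification needs no ``service_id``:

.. code-block:: yaml

    service_type: mon
    placement:
      count: 5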

A service of type ``osd`` is described in :ref:`drivegroups`.

Many service specifications can be applied at once using ``ceph orch apply -i``
by submitting a multi-document YAML file::

    cat <<EOF | ceph orch apply -i -
    service_type: mon
    placement:
      host_pattern: "mon*"
    ---
    service_type: mgr
    placement:
      host_pattern: "mgr*"
    ---
    service_type: osd
    service_id: default_drive_group
    placement:
      host_pattern: "osd*"
    data_devices:
      all: true
    EOF

.. _orchestrator-cli-service-spec-retrieve:

Retrieving the running Service Specification
--------------------------------------------

If the services have been started via ``ceph orch apply...``, then directly
changing the Service Specification is complicated. Instead of attempting to
change the Service Specification directly, export the running Service
Specification by using commands of the following forms:

   .. prompt:: bash #

    ceph orch ls --service-name rgw.<realm>.<zone> --export > rgw.<realm>.<zone>.yaml
    ceph orch ls --service-type mgr --export > mgr.yaml
    ceph orch ls --export > cluster.yaml

The Specification can then be changed and re-applied as above.

Updating Service Specifications
-------------------------------

The Ceph Orchestrator maintains a declarative state of each
service in a ``ServiceSpec``. For certain operations, like updating
the RGW HTTP port, we need to update the existing
specification.

1. List the current ``ServiceSpec``:

   .. prompt:: bash #

    ceph orch ls --service_name=<service-name> --export > myservice.yaml

2. Update the YAML file:

   .. prompt:: bash #

    vi myservice.yaml

3. Apply the new ``ServiceSpec``:

   .. prompt:: bash #

    ceph orch apply -i myservice.yaml [--dry-run]
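
For example, to change the HTTP port of a hypothetical RGW service
``rgw.myrealm.myzone``, the exported YAML could be edited to set
``rgw_frontend_port`` under ``spec`` before being re-applied (the service name
and port below are illustrative assumptions):

.. code-block:: yaml

    service_type: rgw
    service_id: myrealm.myzone
    placement:
      hosts:
        - host1
    spec:
      rgw_frontend_port: 8080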

.. _orchestrator-cli-placement-spec:

Daemon Placement
================

For the orchestrator to deploy a *service*, it needs to know where to deploy
*daemons*, and how many to deploy.  This is the role of a placement
specification. Placement specifications can either be passed as command line
arguments or in a YAML file.

.. note::

   cephadm will not deploy daemons on hosts with the ``_no_schedule`` label; see :ref:`cephadm-special-host-labels`.

.. note::
   The **apply** command can be confusing. For this reason, we recommend using
   YAML specifications.

   Each ``ceph orch apply <service-name>`` command supersedes the one before it.
   If you do not use the proper syntax, you will clobber your work
   as you go.

   For example:

   .. prompt:: bash #

        ceph orch apply mon host1
        ceph orch apply mon host2
        ceph orch apply mon host3

   This results in only one host having a monitor applied to it: host3.

   (The first command creates a monitor on host1. Then the second command
   clobbers the monitor on host1 and creates a monitor on host2. Then the
   third command clobbers the monitor on host2 and creates a monitor on
   host3. In this scenario, at this point, there is a monitor ONLY on
   host3.)

   To make certain that a monitor is applied to each of these three hosts,
   run a command like this:

   .. prompt:: bash #

     ceph orch apply mon "host1,host2,host3"

   There is another way to apply monitors to multiple hosts: a ``yaml`` file
   can be used. Instead of using the "ceph orch apply mon" commands, run a
   command of this form:

   .. prompt:: bash #

      ceph orch apply -i file.yaml

   Here is a sample **file.yaml** file:

   .. code-block:: yaml

        service_type: mon
        placement:
          hosts:
           - host1
           - host2
           - host3

Explicit placements
-------------------

Daemons can be explicitly placed on hosts by simply specifying them:

   .. prompt:: bash #

    ceph orch apply prometheus --placement="host1 host2 host3"

Or in YAML:

.. code-block:: yaml

    service_type: prometheus
    placement:
      hosts:
        - host1
        - host2
        - host3

MONs and other services may require some enhanced network specifications:

   .. prompt:: bash #

    ceph orch daemon add mon --placement="myhost:[v2:1.2.3.4:3300,v1:1.2.3.4:6789]=name"

where ``[v2:1.2.3.4:3300,v1:1.2.3.4:6789]`` is the network address of the monitor
and ``=name`` specifies the name of the new monitor.

.. _orch-placement-by-labels:

Placement by labels
-------------------

Daemon placement can be limited to hosts that match a specific label. To add
the label ``mylabel`` to the appropriate hosts, run this command:

   .. prompt:: bash #

     ceph orch host label add <hostname> mylabel

To view the current hosts and labels, run this command:

   .. prompt:: bash #

     ceph orch host ls

For example:

   .. prompt:: bash #

     ceph orch host label add host1 mylabel
     ceph orch host label add host2 mylabel
     ceph orch host label add host3 mylabel
     ceph orch host ls

   .. code-block:: bash

     HOST   ADDR   LABELS  STATUS
     host1         mylabel
     host2         mylabel
     host3         mylabel
     host4
     host5

Now, tell cephadm to deploy daemons based on the label by running
this command:

   .. prompt:: bash #

    ceph orch apply prometheus --placement="label:mylabel"

Or in YAML:

.. code-block:: yaml

    service_type: prometheus
    placement:
      label: "mylabel"

* See :ref:`orchestrator-host-labels`

Placement by pattern matching
-----------------------------

Daemons can also be placed on hosts that match a host pattern:

   .. prompt:: bash #

    ceph orch apply prometheus --placement='myhost[1-3]'

Or in YAML:

.. code-block:: yaml

    service_type: prometheus
    placement:
      host_pattern: "myhost[1-3]"

To place a service on *all* hosts, use ``"*"``:

   .. prompt:: bash #

    ceph orch apply node-exporter --placement='*'

Or in YAML:

.. code-block:: yaml

    service_type: node-exporter
    placement:
      host_pattern: "*"


Changing the number of daemons
------------------------------

By specifying ``count``, only the number of daemons specified will be created:

   .. prompt:: bash #

    ceph orch apply prometheus --placement=3

To deploy *daemons* on a subset of hosts, specify the count:

   .. prompt:: bash #

    ceph orch apply prometheus --placement="2 host1 host2 host3"

If the count is greater than the number of hosts, cephadm deploys one daemon per host:

   .. prompt:: bash #

    ceph orch apply prometheus --placement="3 host1 host2"

The command immediately above results in two Prometheus daemons.

YAML can also be used to specify limits, in the following way:

.. code-block:: yaml

    service_type: prometheus
    placement:
      count: 3

YAML can also be used to combine a count with an explicit list of candidate hosts:

.. code-block:: yaml

    service_type: prometheus
    placement:
      count: 2
      hosts:
        - host1
        - host2
        - host3

.. _cephadm_co_location:

Co-location of daemons
----------------------

Cephadm supports the deployment of multiple daemons on the same host:

.. code-block:: yaml

    service_type: rgw
    placement:
      label: rgw
      count_per_host: 2

The main reason for deploying multiple daemons per host is the performance
benefit of running multiple RGW or MDS daemons on the same host.

See also: 

* :ref:`cephadm_mgr_co_location`.
* :ref:`cephadm-rgw-designated_gateways`.

This feature was introduced in Pacific.

Algorithm description
---------------------

Cephadm's declarative state consists of a list of service specifications
containing placement specifications.

Cephadm continually compares a list of daemons actually running in the cluster
against the list in the service specifications. Cephadm adds new daemons and
removes old daemons as necessary in order to conform to the service
specifications.

Cephadm does the following to maintain compliance with the service
specifications.

Cephadm first selects a list of candidate hosts. Cephadm seeks explicit host
names and selects them. If cephadm finds no explicit host names, it looks for
label specifications. If no label is defined in the specification, cephadm
selects hosts based on a host pattern. If no host pattern is defined, as a last
resort, cephadm selects all known hosts as candidates.

Cephadm is aware of existing daemons running services and tries to avoid moving
them.

Cephadm supports deploying a specific number of daemons for a service.
Consider the following service specification:

.. code-block:: yaml

    service_type: mds
    service_id: myfs
    placement:
      count: 3
      label: myfs

This service specification instructs cephadm to deploy three daemons on hosts
labeled ``myfs`` across the cluster.

If there are fewer than three daemons deployed on the candidate hosts, cephadm
randomly chooses hosts on which to deploy new daemons.

If there are more than three daemons deployed on the candidate hosts, cephadm
removes existing daemons.

Finally, cephadm removes daemons on hosts that are outside of the list of
candidate hosts.

.. note::

   There is a special case that cephadm must consider.

   If there are fewer hosts selected by the placement specification than
   demanded by ``count``, cephadm will deploy only on the selected hosts.

Extra Container Arguments
=========================

.. warning::
  The arguments provided for extra container args are limited to whatever
  arguments are available for a ``run`` command from whichever container
  engine you are using. Providing any arguments the ``run`` command does not
  support (or invalid values for arguments) will cause the daemon to fail to
  start.


Cephadm supports providing extra miscellaneous container arguments for
specific cases when they may be necessary. For example, if a user needed to
limit the number of CPUs their mon daemons use, they could apply a spec like
the following:

.. code-block:: yaml

    service_type: mon
    service_name: mon
    placement:
      hosts:
        - host1
        - host2
        - host3
    extra_container_args:
      - "--cpus=2"

This would cause each mon daemon to be deployed with ``--cpus=2``.

Mounting Files with Extra Container Arguments
---------------------------------------------

A common use case for extra container arguments is to mount additional
files within the container. However, some intuitive formats for doing
so can cause deployment to fail (see https://tracker.ceph.com/issues/57338).
The recommended syntax for mounting a file with extra container arguments is:

.. code-block:: yaml

    extra_container_args:
      - "-v"
      - "/absolute/file/path/on/host:/absolute/file/path/in/container"

For example:

.. code-block:: yaml

    extra_container_args:
      - "-v"
      - "/opt/ceph_cert/host.cert:/etc/grafana/certs/cert_file:ro"

.. _orch-rm:

Removing a Service
==================

To remove a service, including all daemons of that service, run a command of
the following form:

.. prompt:: bash

  ceph orch rm <service-name>

For example:

.. prompt:: bash

  ceph orch rm rgw.myrgw

.. _cephadm-spec-unmanaged:

Disabling automatic deployment of daemons
=========================================

Cephadm supports disabling the automated deployment and removal of daemons on
a per-service basis. This is done by setting ``unmanaged: true`` in the
service specification, as shown below.

In order to fully remove a service, see :ref:`orch-rm`.

Disabling automatic management of daemons
-----------------------------------------

To disable the automatic management of daemons, set ``unmanaged=True`` in the
:ref:`orchestrator-cli-service-spec` (``mgr.yaml``).

``mgr.yaml``:

.. code-block:: yaml

  service_type: mgr
  unmanaged: true
  placement:
    label: mgr


.. prompt:: bash #

   ceph orch apply -i mgr.yaml


.. note::

  After you apply this change in the Service Specification, cephadm will no
  longer deploy any new daemons (even if the placement specification matches
  additional hosts).

Deploying a daemon on a host manually
-------------------------------------

.. note::

  This workflow has a very limited use case and should only be used
  in rare circumstances.

To manually deploy a daemon on a host, follow these steps:

Modify the service spec for a service by getting the
existing spec, adding ``unmanaged: true``, and applying the modified spec.
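
A minimal sketch of that first step, using the ``mgr`` service as an example
(export the spec, add ``unmanaged: true`` to the file, then re-apply it):

   .. prompt:: bash #

     ceph orch ls --service_name mgr --export > mgr.yaml
     vi mgr.yaml
     ceph orch apply -i mgr.yaml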

Then manually deploy the daemon using the following:

   .. prompt:: bash #

     ceph orch daemon add <daemon-type> --placement=<placement spec>

For example:

   .. prompt:: bash #

     ceph orch daemon add mgr --placement=my_host

.. note::

  Removing ``unmanaged: true`` from the service spec will
  enable the reconciliation loop for this service and will
  potentially lead to the removal of the daemon, depending
  on the placement spec.

Removing a daemon from a host manually
--------------------------------------

To manually remove a daemon, run a command of the following form:

   .. prompt:: bash #

     ceph orch daemon rm <daemon name>... [--force]

For example:

   .. prompt:: bash #

     ceph orch daemon rm mgr.my_host.xyzxyz

.. note::

  For managed services (``unmanaged=False``), cephadm will automatically
  deploy a new daemon a few seconds later.

See also
--------

* See :ref:`cephadm-osd-declarative` for special handling of unmanaged OSDs.
* See also :ref:`cephadm-pause`