.. _cephadm_deploying_new_cluster:

============================
Deploying a new Ceph cluster
============================

Cephadm creates a new Ceph cluster by "bootstrapping" on a single
host, expanding the cluster to encompass any additional hosts, and
then deploying the needed services.

.. highlight:: console

.. _cephadm-host-requirements:

Requirements
============

- Python 3
- Systemd
- Podman or Docker for running containers
- Time synchronization (such as chrony or NTP)
- LVM2 for provisioning storage devices

Any modern Linux distribution should be sufficient.  Dependencies
are installed automatically by the bootstrap process below.
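
A quick way to sanity-check these prerequisites on a host is to query each
component's version (a sketch; exact package and service names vary by
distribution):

.. prompt:: bash #

   python3 --version                      # Python 3
   systemctl --version                    # systemd
   podman --version || docker --version   # container runtime
   chronyc tracking                       # time synchronization (if using chrony)
   lvm version                            # LVM2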

See the section :ref:`Compatibility With Podman
Versions<cephadm-compatibility-with-podman>` for a table of Ceph versions that
are compatible with Podman. Not every version of Podman is compatible with
Ceph.



.. _get-cephadm:

Install cephadm
===============

There are two ways to install ``cephadm``:

#. a :ref:`curl-based installation<cephadm_install_curl>` method
#. :ref:`distribution-specific installation methods<cephadm_install_distros>`

.. important:: These methods of installing ``cephadm`` are mutually exclusive.
   Choose either the distribution-specific method or the curl-based method. Do
   not attempt to use both these methods on one system.

.. _cephadm_install_distros:

distribution-specific installations
-----------------------------------

Some Linux distributions may already include up-to-date Ceph packages.  In
that case, you can install cephadm directly. For example:

  In Ubuntu:

  .. prompt:: bash #

     apt install -y cephadm

  In CentOS Stream:

  .. prompt:: bash #
     :substitutions:

     dnf search release-ceph
     dnf install --assumeyes centos-release-ceph-|stable-release|
     dnf install --assumeyes cephadm

  In Fedora:

  .. prompt:: bash #

     dnf -y install cephadm

  In SUSE:

  .. prompt:: bash #

     zypper install -y cephadm

.. _cephadm_install_curl:

curl-based installation
-----------------------

* First, determine what version of Ceph you will need. You can use the releases
  page to find the `latest active releases <https://docs.ceph.com/en/latest/releases/#active-releases>`_.
  For example, we might look at that page and find that ``18.2.0`` is the latest
  active release.

* Use ``curl`` to fetch a build of cephadm for that release.

  .. prompt:: bash #
     :substitutions:

     CEPH_RELEASE=18.2.0 # replace this with the active release
     curl --silent --remote-name --location https://download.ceph.com/rpm-${CEPH_RELEASE}/el9/noarch/cephadm

  Ensure the ``cephadm`` file is executable:

  .. prompt:: bash #

     chmod +x cephadm

  This file can be run directly from the current directory:

  .. prompt:: bash #

     ./cephadm <arguments...>
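
  For example, you can confirm that the downloaded binary runs by printing
  its version:

  .. prompt:: bash #

     ./cephadm version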

* If you encounter any issues with running cephadm due to errors including
  the message ``bad interpreter``, then you may not have Python or
  the correct version of Python installed. The cephadm tool requires Python 3.6
  or later. You can manually run cephadm with a particular version of Python by
  prefixing the command with your installed Python version. For example:

  .. prompt:: bash #
     :substitutions:

     python3.8 ./cephadm <arguments...>

.. _cephadm_update:

update cephadm
--------------

The cephadm binary can be used to bootstrap a cluster and for a variety
of other management and debugging tasks. The Ceph team strongly recommends
using an actively supported version of cephadm. Additionally, although
the standalone cephadm binary is sufficient to get a cluster started, it is
convenient to have the ``cephadm`` command installed on the host. Older or LTS
distros may ship ``cephadm`` packages that are out of date; running
the commands below installs a more recent version
from the Ceph project's repositories.

To install the packages provided by the Ceph project that provide the
``cephadm`` command, run the following commands:

.. prompt:: bash #
   :substitutions:

   ./cephadm add-repo --release |stable-release|
   ./cephadm install

Confirm that ``cephadm`` is now in your PATH by running ``which`` or
``command -v``:

.. prompt:: bash #

   which cephadm

A successful ``which cephadm`` command will return this:

.. code-block:: bash

   /usr/sbin/cephadm

Bootstrap a new cluster
=======================

What to know before you bootstrap
---------------------------------

The first step in creating a new Ceph cluster is running the ``cephadm
bootstrap`` command on the Ceph cluster's first host. Running ``cephadm
bootstrap`` on the first host creates the cluster's first "monitor daemon",
and that monitor daemon needs an IP address. You must pass the IP address of
the Ceph cluster's first host to the ``cephadm bootstrap`` command, so you'll
need to know the IP address of that host.
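
If you do not know the IP address of the host, you can list the addresses of
its network interfaces with a standard ``iproute2`` command (not specific to
Ceph):

.. prompt:: bash #

   ip -brief address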

.. important:: ``ssh`` must be installed and running in order for the
   bootstrapping procedure to succeed. 

.. note:: If there are multiple networks and interfaces, be sure to choose one
   that will be accessible by any host accessing the Ceph cluster.

Running the bootstrap command
-----------------------------

Run the ``cephadm bootstrap`` command:

.. prompt:: bash #

   cephadm bootstrap --mon-ip *<mon-ip>*

This command will:

* Create a monitor and manager daemon for the new cluster on the local
  host.
* Generate a new SSH key for the Ceph cluster and add it to the root
  user's ``/root/.ssh/authorized_keys`` file.
* Write a copy of the public key to ``/etc/ceph/ceph.pub``.
* Write a minimal configuration file to ``/etc/ceph/ceph.conf``. This
  file is needed to communicate with the new cluster.
* Write a copy of the ``client.admin`` administrative (privileged!)
  secret key to ``/etc/ceph/ceph.client.admin.keyring``.
* Add the ``_admin`` label to the bootstrap host.  By default, any host
  with this label will (also) get a copy of ``/etc/ceph/ceph.conf`` and
  ``/etc/ceph/ceph.client.admin.keyring``.
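
Once bootstrap finishes, you can confirm on the bootstrap host that the files
described above were written:

.. prompt:: bash #

   ls /etc/ceph

You should see ``ceph.conf``, ``ceph.client.admin.keyring``, and ``ceph.pub``.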

.. _cephadm-bootstrap-further-info:

Further information about cephadm bootstrap
-------------------------------------------

The default bootstrap behavior will work for most users. But if you'd like
to know more about ``cephadm bootstrap`` right away, read the list below.

Also, you can run ``cephadm bootstrap -h`` to see all of the options
available to ``cephadm bootstrap``.

* By default, Ceph daemons send their log output to stdout/stderr, which is picked
  up by the container runtime (docker or podman) and (on most systems) sent to
  journald.  If you want Ceph to write traditional log files to ``/var/log/ceph/$fsid``,
  use the ``--log-to-file`` option during bootstrap.

* Larger Ceph clusters perform better when (external to the Ceph cluster)
  public network traffic is separated from (internal to the Ceph cluster)
  cluster traffic. The internal cluster traffic handles replication, recovery,
  and heartbeats between OSD daemons.  You can define the :ref:`cluster
  network<cluster-network>` by supplying the ``--cluster-network`` option to the ``bootstrap``
  subcommand. This parameter must define a subnet in CIDR notation (for example
  ``10.90.90.0/24`` or ``fe80::/64``).

* ``cephadm bootstrap`` writes to ``/etc/ceph`` the files needed to access
  the new cluster. This central location makes it possible for Ceph
  packages installed on the host (e.g., packages that give access to the
  cephadm command line interface) to find these files.

  Daemon containers deployed with cephadm, however, do not need
  ``/etc/ceph`` at all.  Use the ``--output-dir *<directory>*`` option
  to put them in a different directory (for example, ``.``). This may help
  avoid conflicts with an existing Ceph configuration (cephadm or
  otherwise) on the same host.

* You can pass any initial Ceph configuration options to the new
  cluster by putting them in a standard ini-style configuration file
  and using the ``--config *<config-file>*`` option.  For example::

      $ cat <<EOF > initial-ceph.conf
      [global]
      osd crush chooseleaf type = 0
      EOF
      $ ./cephadm bootstrap --config initial-ceph.conf ...

* The ``--ssh-user *<user>*`` option makes it possible to choose which SSH
  user cephadm will use to connect to hosts. The associated SSH key will be
  added to ``/home/*<user>*/.ssh/authorized_keys``. The user that you
  designate with this option must have passwordless sudo access.
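
  For example (a sketch; substitute your own user name):

  .. prompt:: bash #

     cephadm bootstrap --mon-ip *<mon-ip>* --ssh-user *<user>*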

* If you are using a container on an authenticated registry that requires
  login, you may add the argument:

  * ``--registry-json <path to json file>``

  Example contents of a JSON file with login info::

      {"url":"REGISTRY_URL", "username":"REGISTRY_USERNAME", "password":"REGISTRY_PASSWORD"}

  Cephadm will attempt to log in to this registry so it can pull your container
  and then store the login info in its config database. Other hosts added to
  the cluster will then also be able to make use of the authenticated registry.
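
  For example (the path to the JSON file is a placeholder):

  .. prompt:: bash #

     cephadm bootstrap --mon-ip *<mon-ip>* --registry-json /root/registry.json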

* See :ref:`cephadm-deployment-scenarios` for additional examples for using ``cephadm bootstrap``.

.. _cephadm-enable-cli:

Enable Ceph CLI
===============

Cephadm does not require any Ceph packages to be installed on the
host.  However, we recommend enabling easy access to the ``ceph``
command.  There are several ways to do this:

* The ``cephadm shell`` command launches a bash shell in a container
  with all of the Ceph packages installed. By default, if
  configuration and keyring files are found in ``/etc/ceph`` on the
  host, they are passed into the container environment so that the
  shell is fully functional. Note that when executed on a MON host,
  ``cephadm shell`` will infer the ``config`` from the MON container
  instead of using the default configuration. If ``--mount <path>``
  is given, then the host ``<path>`` (file or directory) will appear
  under ``/mnt`` inside the container:

  .. prompt:: bash #

     cephadm shell
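
  For example, to make a host file visible under ``/mnt`` inside the
  container (the path here is a placeholder):

  .. prompt:: bash #

     cephadm shell --mount /root/ceph-config.yaml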

* To execute ``ceph`` commands, you can also run commands like this:

  .. prompt:: bash #

     cephadm shell -- ceph -s

* You can install the ``ceph-common`` package, which contains all of the
  ceph commands, including ``ceph``, ``rbd``, ``mount.ceph`` (for mounting
  CephFS file systems), etc.:

  .. prompt:: bash #
     :substitutions:

     cephadm add-repo --release |stable-release|
     cephadm install ceph-common

Confirm that the ``ceph`` command is accessible with:

.. prompt:: bash #

  ceph -v


Confirm that the ``ceph`` command can connect to the cluster and report
its status with:

.. prompt:: bash #

  ceph status

Adding Hosts
============

Add all hosts to the cluster by following the instructions in
:ref:`cephadm-adding-hosts`.

By default, a ``ceph.conf`` file and a copy of the ``client.admin`` keyring are
maintained in ``/etc/ceph`` on all hosts that have the ``_admin`` label. This
label is initially applied only to the bootstrap host. We usually recommend
that one or more other hosts be given the ``_admin`` label so that the Ceph CLI
(for example, via ``cephadm shell``) is easily accessible on multiple hosts. To add
the ``_admin`` label to additional host(s), run a command of the following form:

  .. prompt:: bash #

    ceph orch host label add *<host>* _admin
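
For example, to add the label to a host named ``host2`` (a placeholder name):

  .. prompt:: bash #

    ceph orch host label add host2 _admin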


Adding additional MONs
======================

A typical Ceph cluster has three or five monitor daemons spread
across different hosts.  We recommend deploying five
monitors if there are five or more nodes in your cluster.

Please follow :ref:`deploy_additional_monitors` to deploy additional MONs.

Adding Storage
==============

To add storage to the cluster, you can tell Ceph to consume any
available and unused device(s):

  .. prompt:: bash #

    ceph orch apply osd --all-available-devices
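
To see which devices cephadm considers available before (or after) running
this command, list them with the orchestrator:

  .. prompt:: bash #

    ceph orch device ls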

See :ref:`cephadm-deploy-osds` for more detailed instructions.

Enabling OSD memory autotuning
------------------------------

.. warning:: By default, cephadm enables ``osd_memory_target_autotune`` on bootstrap, with ``mgr/cephadm/autotune_memory_target_ratio`` set to ``.7`` of total host memory.

See :ref:`osd_autotune`.

To deploy hyperconverged Ceph with TripleO, please refer to the TripleO documentation: `Scenario: Deploy Hyperconverged Ceph <https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/cephadm.html#scenario-deploy-hyperconverged-ceph>`_

In other hyperconverged deployments, where the cluster hardware is not used
exclusively by Ceph, reduce Ceph's memory consumption like so:

  .. prompt:: bash #

    # hyperconverged only:
    ceph config set mgr mgr/cephadm/autotune_memory_target_ratio 0.2

Then enable memory autotuning:

  .. prompt:: bash #

    ceph config set osd osd_memory_target_autotune true
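
You can verify the resulting settings with ``ceph config get``:

  .. prompt:: bash #

    ceph config get mgr mgr/cephadm/autotune_memory_target_ratio
    ceph config get osd osd_memory_target_autotune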


Using Ceph
==========

To use the *Ceph Filesystem*, follow :ref:`orchestrator-cli-cephfs`.

To use the *Ceph Object Gateway*, follow :ref:`cephadm-deploy-rgw`.

To use *NFS*, follow :ref:`deploy-cephadm-nfs-ganesha`.

To use *iSCSI*, follow :ref:`cephadm-iscsi`.

.. _cephadm-deployment-scenarios:

Different deployment scenarios
==============================

Single host
-----------

To configure a Ceph cluster to run on a single host, use the
``--single-host-defaults`` flag when bootstrapping. For use cases of this, see
:ref:`one-node-cluster`.
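
For example:

.. prompt:: bash #

   cephadm bootstrap --mon-ip *<mon-ip>* --single-host-defaults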

The ``--single-host-defaults`` flag sets the following configuration options::

  global/osd_crush_chooseleaf_type = 0
  global/osd_pool_default_size = 2
  mgr/mgr_standby_modules = False

For more information on these options, see :ref:`one-node-cluster` and
``mgr_standby_modules`` in :ref:`mgr-administrator-guide`.

.. _cephadm-airgap:

Deployment in an isolated environment
-------------------------------------

You might need to install cephadm in an environment that is not connected
directly to the internet (such an environment is also called an "isolated
environment"). This can be done if a custom container registry is used. Either
of two kinds of custom container registry can be used in this scenario: (1) a
Podman-based or Docker-based insecure registry, or (2) a secure registry.

The practice of installing software on systems that are not connected directly
to the internet is called "airgapping" and registries that are not connected
directly to the internet are referred to as "airgapped".

Make sure that your container image is inside the registry. Make sure that you
have access to all hosts that you plan to add to the cluster.

#. Run a local container registry:

   .. prompt:: bash #

      podman run --privileged -d --name registry -p 5000:5000 -v /var/lib/registry:/var/lib/registry --restart=always registry:2

#. If you are using an insecure registry, configure Podman or Docker with the
   hostname and port where the registry is running.

   .. note:: You must repeat this step for every host that accesses the local
             insecure registry.
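
   For Podman, one common way to do this is to add an entry to
   ``/etc/containers/registries.conf`` (a sketch; the exact file and syntax
   depend on your Podman version)::

      [[registry]]
      location = "*<hostname>*:5000"
      insecure = true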

#. Push your container image to your local registry. Here are some acceptable
   kinds of container images:

   * Ceph container image. See :ref:`containers`.
   * Prometheus container image
   * Node exporter container image
   * Grafana container image
   * Alertmanager container image
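
   For example, to mirror the Ceph image into the local registry (the image
   name and tag here are examples; use the image that matches your target
   release):

   .. prompt:: bash #

      podman pull quay.io/ceph/ceph:v18
      podman tag quay.io/ceph/ceph:v18 *<hostname>*:5000/ceph/ceph:v18
      podman push *<hostname>*:5000/ceph/ceph:v18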

#. Create a temporary configuration file to store the names of the monitoring
   images. (See :ref:`cephadm_monitoring-images`):

   .. code-block:: bash

      cat <<EOF > initial-ceph.conf
      [mgr]
      mgr/cephadm/container_image_prometheus = *<hostname>*:5000/prometheus
      mgr/cephadm/container_image_node_exporter = *<hostname>*:5000/node_exporter
      mgr/cephadm/container_image_grafana = *<hostname>*:5000/grafana
      mgr/cephadm/container_image_alertmanager = *<hostname>*:5000/alertmanager
      EOF

#. Run bootstrap using the ``--image`` flag and pass the name of your
   container image as the argument of the image flag. For example:

   .. prompt:: bash #

      cephadm --image *<hostname>*:5000/ceph/ceph bootstrap --mon-ip *<mon-ip>*


.. _cephadm-bootstrap-custom-ssh-keys:

Deployment with custom SSH keys
-------------------------------

Bootstrap allows users to create their own private/public SSH key pair
rather than having cephadm generate them automatically.
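
For example, you might generate such a pair with ``ssh-keygen`` (an ed25519
key is shown here; any key type accepted by your SSH setup works):

.. prompt:: bash #

   ssh-keygen -t ed25519 -f ceph-ssh-key -N ""

This creates ``ceph-ssh-key`` and ``ceph-ssh-key.pub``, which can then be
passed to bootstrap as shown below.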

To use custom SSH keys, pass the ``--ssh-private-key`` and ``--ssh-public-key``
fields to bootstrap. Both parameters require a path to the file where the
keys are stored:

.. prompt:: bash #

  cephadm bootstrap --mon-ip <ip-addr> --ssh-private-key <private-key-filepath> --ssh-public-key <public-key-filepath>

This setup allows you to use a key that has already been distributed to the
hosts you want in the cluster before bootstrap.

.. note:: In order for cephadm to connect to other hosts you'd like to add
   to the cluster, make sure the public key of the key pair provided is set up
   as an authorized key for the SSH user being used, typically root. If you'd
   like more info on using a non-root user as the SSH user, see
   :ref:`cephadm-bootstrap-further-info`.

.. _cephadm-bootstrap-ca-signed-keys:

Deployment with CA signed SSH keys
----------------------------------

As an alternative to standard public key authentication, cephadm also supports
deployment using CA signed keys. Before bootstrapping it's recommended to set up
the CA public key as a trusted CA key on hosts you'd like to eventually add to
the cluster. For example:

.. prompt:: bash

  # we will act as our own CA, therefore we'll need to make a CA key
  [root@host1 ~]# ssh-keygen -t rsa -f ca-key -N ""

  # make the ca key trusted on the host we've generated it on
  # this requires adding a line to our /etc/ssh/sshd_config
  # to mark this key as trusted
  [root@host1 ~]# cp ca-key.pub /etc/ssh
  [root@host1 ~]# vi /etc/ssh/sshd_config
  [root@host1 ~]# cat /etc/ssh/sshd_config | grep ca-key
  TrustedUserCAKeys /etc/ssh/ca-key.pub
  # now restart sshd so it picks up the config change
  [root@host1 ~]# systemctl restart sshd

  # now, on all other hosts we want in the cluster, also install the CA key
  [root@host1 ~]# scp /etc/ssh/ca-key.pub host2:/etc/ssh/

  # on other hosts, make the same changes to the sshd_config
  [root@host2 ~]# vi /etc/ssh/sshd_config
  [root@host2 ~]# cat /etc/ssh/sshd_config | grep ca-key
  TrustedUserCAKeys /etc/ssh/ca-key.pub
  # and restart sshd so it picks up the config change
  [root@host2 ~]# systemctl restart sshd

Once the CA key has been installed and marked as a trusted key, you are ready
to use a private key/CA signed cert combination for SSH. Continuing with our
current example, we will create a new key pair for host access and then
sign it with our CA key:

.. prompt:: bash

  # make a new key pair
  [root@host1 ~]# ssh-keygen -t rsa -f cephadm-ssh-key -N ""
  # sign the private key. This will create a new cephadm-ssh-key-cert.pub
  # note here we're using user "root". If you'd like to use a non-root
  # user the arguments to the -I and -n params would need to be adjusted
  # Additionally, note the -V param indicates how long until the cert
  # this creates will expire
  [root@host1 ~]# ssh-keygen -s ca-key -I user_root -n root -V +52w cephadm-ssh-key
  [root@host1 ~]# ls
  ca-key  ca-key.pub  cephadm-ssh-key  cephadm-ssh-key-cert.pub  cephadm-ssh-key.pub

  # verify our signed key is working. To do this, make sure the generated private
  # key ("cephadm-ssh-key" in our example) and the newly signed cert are stored
  # in the same directory. Then try to ssh using the private key
  [root@host1 ~]# ssh -i cephadm-ssh-key host2

Once you have your private key and the corresponding CA signed cert, and have
tested that SSH authentication using that key works, you can pass those keys
to bootstrap in order to have cephadm use them for SSHing between cluster
hosts:

.. prompt:: bash

  [root@host1 ~]# cephadm bootstrap --mon-ip <ip-addr> --ssh-private-key cephadm-ssh-key --ssh-signed-cert cephadm-ssh-key-cert.pub

Note that this setup does not require installing the corresponding public key
from the private key passed to bootstrap on other nodes. In fact, cephadm will
reject the ``--ssh-public-key`` argument when passed along with
``--ssh-signed-cert``. This is not because having the public key breaks
anything, but because it is not needed for this setup and it helps bootstrap
distinguish whether the user wants the CA signed key setup or standard public
key authentication. As a result, SSH key rotation is simply a matter of
getting another key signed by the same CA and providing cephadm with the new
private key and signed cert. No additional distribution of keys to cluster
nodes is needed after the initial setup of the CA key as a trusted key, no
matter how many new private key/signed cert pairs are rotated in.