Troubleshooting
===============

You might need to investigate why a cephadm command failed
or why a certain service no longer runs properly.

Cephadm deploys daemons as containers. This means that
troubleshooting those containerized daemons requires a different
process than troubleshooting traditional daemons that were
installed by means of packages.

Here are some tools and commands to help you troubleshoot
your Ceph environment.

.. _cephadm-pause:

Pausing or disabling cephadm
----------------------------

If something goes wrong and cephadm is behaving badly, you can
pause most of the Ceph cluster's background activity by running
the following command: 

.. prompt:: bash #

  ceph orch pause

This stops all changes in the Ceph cluster, but cephadm will
still periodically check hosts to refresh its inventory of
daemons and devices.
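
To resume the paused background activity, run the counterpart command:

.. prompt:: bash #

  ceph orch resume

You can disable cephadm completely by running the following commands: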

.. prompt:: bash #

  ceph orch set backend ''
  ceph mgr module disable cephadm

These commands disable all of the ``ceph orch ...`` CLI commands.
All previously deployed daemon containers continue to exist and
will start as they did before you ran these commands.
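
To re-enable cephadm later, reverse these steps:

.. prompt:: bash #

  ceph mgr module enable cephadm
  ceph orch set backend cephadm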

See :ref:`cephadm-spec-unmanaged` for information on disabling
individual services.


Per-service and per-daemon events
---------------------------------

To help with debugging failed daemon deployments, cephadm stores
events per service and per daemon. These events often contain
information relevant to troubleshooting your Ceph cluster.

Listing service events
~~~~~~~~~~~~~~~~~~~~~~

To see the events associated with a certain service, run a
command of the following form:

.. prompt:: bash #

  ceph orch ls --service_name=<service-name> --format yaml

This will return something in the following form:

.. code-block:: yaml

  service_type: alertmanager
  service_name: alertmanager
  placement:
    hosts:
    - unknown_host
  status:
    ...
    running: 1
    size: 1
  events:
  - 2021-02-01T08:58:02.741162 service:alertmanager [INFO] "service was created"
  - '2021-02-01T12:09:25.264584 service:alertmanager [ERROR] "Failed to apply: Cannot
    place <AlertManagerSpec for service_name=alertmanager> on unknown_host: Unknown hosts"'

Listing daemon events
~~~~~~~~~~~~~~~~~~~~~

To see the events associated with a certain daemon, run a
command of the following form:

.. prompt:: bash #

  ceph orch ps --service-name <service-name> --daemon-id <daemon-id> --format yaml

This will return something in the following form:

.. code-block:: yaml

  daemon_type: mds
  daemon_id: cephfs.hostname.ppdhsz
  hostname: hostname
  status_desc: running
  ...
  events:
  - 2021-02-01T08:59:43.845866 daemon:mds.cephfs.hostname.ppdhsz [INFO] "Reconfigured
    mds.cephfs.hostname.ppdhsz on host 'hostname'"


Checking cephadm logs
---------------------

To learn how to monitor the cephadm logs as they are generated, read :ref:`watching_cephadm_logs`.
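
For example, to watch cephadm log messages as they are generated, run the
command described there::

    ceph -W cephadm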

If your Ceph cluster has been configured to log events to files, a
cephadm log file called ``ceph.cephadm.log`` will exist on all monitor
hosts (see :ref:`cephadm-logs` for a more complete explanation).

Gathering log files
-------------------

Use ``journalctl`` to gather the log files of all daemons:

.. note:: By default cephadm now stores logs in journald. This means
   that you will no longer find daemon logs in ``/var/log/ceph/``.

To read the log file of one specific daemon, run::

    cephadm logs --name <name-of-daemon>

Note: this only works when run on the same host where the daemon is running. To
get logs of a daemon running on a different host, give the ``--fsid`` option::

    cephadm logs --fsid <fsid> --name <name-of-daemon>

where the ``<fsid>`` corresponds to the cluster ID printed by ``ceph status``.

To fetch all log files of all daemons on a given host, run::

    for name in $(cephadm ls | jq -r '.[].name') ; do
      cephadm logs --fsid <fsid> --name "$name" > "$name";
    done
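
Because these logs are stored in journald, you can also query them directly
with ``journalctl``; a sketch, assuming the systemd unit naming shown in the
next section::

    journalctl -u "ceph-<fsid>@<daemon-name>.service"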

Collecting systemd status
-------------------------

To print the state of a systemd unit, run::

      systemctl status "ceph-$(cephadm shell ceph fsid)@<service-name>.service"


To fetch the state of all daemons on a given host, run::

    fsid="$(cephadm shell ceph fsid)"
    for name in $(cephadm ls | jq -r '.[].name') ; do
      systemctl status "ceph-$fsid@$name.service" > "$name";
    done
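
To discover the relevant unit names on a host, you can use ``systemctl``'s
pattern matching; a sketch::

    systemctl list-units 'ceph-*'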


List all downloaded container images
------------------------------------

To list all container images that are downloaded on a host:

.. note:: ``Image`` might also be called ``ImageID``.

::

    podman ps -a --format json | jq '.[].Image'
    "docker.io/library/centos:8"
    "registry.opensuse.org/opensuse/leap:15.2"


Manually running containers
---------------------------

Cephadm writes small wrappers that run containers. Refer to
``/var/lib/ceph/<cluster-fsid>/<service-name>/unit.run`` for the
container execution command.
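
For example, to inspect the exact command that starts a daemon's container,
print that file (the placeholders stand for your cluster ID and daemon name)::

    cat /var/lib/ceph/<cluster-fsid>/<service-name>/unit.run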

.. _cephadm-ssh-errors:

SSH errors
----------

Error message::

  execnet.gateway_bootstrap.HostNotFound: -F /tmp/cephadm-conf-73z09u6g -i /tmp/cephadm-identity-ky7ahp_5 root@10.10.1.2
  ...
  raise OrchestratorError(msg) from e
  orchestrator._interface.OrchestratorError: Failed to connect to 10.10.1.2 (10.10.1.2).
  Please make sure that the host is reachable and accepts connections using the cephadm SSH key
  ...

Things users can do:

1. Ensure cephadm has an SSH identity key::

     [root@mon1~]# cephadm shell -- ceph config-key get mgr/cephadm/ssh_identity_key > ~/cephadm_private_key
     INFO:cephadm:Inferring fsid f8edc08a-7f17-11ea-8707-000c2915dd98
     INFO:cephadm:Using recent ceph image docker.io/ceph/ceph:v15 obtained 'mgr/cephadm/ssh_identity_key'
     [root@mon1 ~]# chmod 0600 ~/cephadm_private_key

   If this fails, cephadm doesn't have a key. Fix this by running the following command::

     [root@mon1 ~]# cephadm shell -- ceph cephadm generate-ssh-key

   or::

     [root@mon1 ~]# cat ~/cephadm_private_key | cephadm shell -- ceph cephadm set-ssh-key -i -

2. Ensure that the SSH config is correct::

     [root@mon1 ~]# cephadm shell -- ceph cephadm get-ssh-config > config

3. Verify that we can connect to the host::

     [root@mon1 ~]# ssh -F config -i ~/cephadm_private_key root@mon1

Verifying that the Public Key is Listed in the authorized_keys file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To verify that the public key is in the ``authorized_keys`` file, run the following commands::

     [root@mon1 ~]# cephadm shell -- ceph cephadm get-pub-key > ~/ceph.pub
     [root@mon1 ~]# grep "`cat ~/ceph.pub`"  /root/.ssh/authorized_keys
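
If the public key is missing from ``authorized_keys``, you can append it by
hand; a minimal sketch, assuming that cephadm connects as root::

     [root@mon1 ~]# cat ~/ceph.pub >> /root/.ssh/authorized_keys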

Failed to infer CIDR network error
----------------------------------

If you see this error::

   ERROR: Failed to infer CIDR network for mon ip ***; pass --skip-mon-network to configure it later

Or this error::

   Must set public_network config option or specify a CIDR network, ceph addrvec, or plain IP

This means that you must run a command of this form::

  ceph config set mon public_network <mon_network>
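
For example, assuming that the monitors live on the hypothetical
``10.1.2.0/24`` network::

  ceph config set mon public_network 10.1.2.0/24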

For more detail on operations of this kind, see :ref:`deploy_additional_monitors`.

Accessing the admin socket
--------------------------

Each Ceph daemon provides an admin socket that bypasses the
MONs (see :ref:`rados-monitoring-using-admin-socket`).

To access the admin socket, first enter the daemon container on the host::

    [root@mon1 ~]# cephadm enter --name <daemon-name>
    [ceph: root@mon1 /]# ceph --admin-daemon /var/run/ceph/ceph-<daemon-name>.asok config show
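
The admin socket accepts commands other than ``config show``; for example,
``help`` lists everything that the daemon supports::

    [ceph: root@mon1 /]# ceph --admin-daemon /var/run/ceph/ceph-<daemon-name>.asok help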

Calling miscellaneous ceph tools
--------------------------------

To call miscellaneous tools like ``ceph-objectstore-tool`` or
``ceph-monstore-tool``, run them from within a
``cephadm shell --name <daemon-name>`` session, like so::

    root@myhostname # cephadm unit --name mon.myhostname stop
    root@myhostname # cephadm shell --name mon.myhostname
    [ceph: root@myhostname /]# ceph-monstore-tool /var/lib/ceph/mon/ceph-myhostname get monmap > monmap         
    [ceph: root@myhostname /]# monmaptool --print monmap
    monmaptool: monmap file monmap
    epoch 1
    fsid 28596f44-3b56-11ec-9034-482ae35a5fbb
    last_changed 2021-11-01T20:57:19.755111+0000
    created 2021-11-01T20:57:19.755111+0000
    min_mon_release 17 (quincy)
    election_strategy: 1
    0: [v2:127.0.0.1:3300/0,v1:127.0.0.1:6789/0] mon.myhostname

This command sets up the environment in a way that is suitable
for extended daemon maintenance and for running the daemon interactively.
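
When the maintenance is finished, leave the shell and start the daemon again;
a sketch that mirrors the ``stop`` command above::

    [ceph: root@myhostname /]# exit
    root@myhostname # cephadm unit --name mon.myhostname start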

.. _cephadm-restore-quorum:

Restoring the MON quorum
------------------------

If the Ceph MONs cannot form a quorum, cephadm is unable to manage
the cluster until quorum is restored.

In order to restore the MON quorum, remove unhealthy MONs
from the monmap by following these steps:

1. Stop all MONs. For each MON host::

    ssh {mon-host}
    cephadm unit --name mon.`hostname` stop


2. Identify a surviving monitor and log in to that host::

    ssh {mon-host}
    cephadm enter --name mon.`hostname`

3. Follow the steps in :ref:`rados-mon-remove-from-unhealthy`.
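
Once the monmap has been repaired and quorum has been restored, start the
stopped MONs again; a sketch that mirrors the ``stop`` command from step 1::

    ssh {mon-host}
    cephadm unit --name mon.`hostname` start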

.. _cephadm-manually-deploy-mgr:

Manually deploying a MGR daemon
-------------------------------

cephadm requires an MGR daemon in order to manage the cluster. If the
last MGR daemon of a cluster was removed, follow these steps to manually
deploy an MGR daemon (``mgr.hostname.smfvfd`` in this example) on a host
of your cluster.

Disable the cephadm scheduler in order to prevent cephadm from removing
the new MGR. See :ref:`cephadm-enable-cli`::

  ceph config-key set mgr/cephadm/pause true

Then get or create the auth entry for the new MGR::

  ceph auth get-or-create mgr.hostname.smfvfd mon "profile mgr" osd "allow *" mds "allow *"

Get the ceph.conf::

  ceph config generate-minimal-conf

Get the container image::

  ceph config get "mgr.hostname.smfvfd" container_image

Create a file ``config-json.json`` that contains the information
necessary to deploy the daemon:

.. code-block:: json

  {
    "config": "# minimal ceph.conf for 8255263a-a97e-4934-822c-00bfe029b28f\n[global]\n\tfsid = 8255263a-a97e-4934-822c-00bfe029b28f\n\tmon_host = [v2:192.168.0.1:40483/0,v1:192.168.0.1:40484/0]\n",
    "keyring": "[mgr.hostname.smfvfd]\n\tkey = V2VyIGRhcyBsaWVzdCBpc3QgZG9vZi4=\n"
  }

Deploy the daemon::

  cephadm --image <container-image> deploy --fsid <fsid> --name mgr.hostname.smfvfd --config-json config-json.json
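
Once the new MGR is running, verify the cluster state with ``ceph -s`` and
re-enable the cephadm scheduler that was paused above; a sketch, assuming
that ``ceph orch resume`` clears the ``mgr/cephadm/pause`` flag that was set
earlier::

  cephadm shell -- ceph -s
  cephadm shell -- ceph orch resume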

Analyzing core dumps
---------------------

If a Ceph daemon crashes, cephadm supports analyzing core dumps. To enable core dumps, run

.. prompt:: bash #

  ulimit -c unlimited

Core dumps will now be written to ``/var/lib/systemd/coredump``.

.. note::

  Core dumps are not namespaced by the kernel, which means that
  they are written to ``/var/lib/systemd/coredump`` on the
  container host.

Now, wait for the crash to happen again. (To simulate the crash of a daemon, you can run, for example, ``killall -3 ceph-mon``.)
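
On hosts that use ``systemd-coredump``, captured dumps can also be listed
with ``coredumpctl``; a sketch::

    coredumpctl list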

Install debug packages by entering the cephadm shell and installing ``ceph-debuginfo``::

  # cephadm shell --mount /var/lib/systemd/coredump
  [ceph: root@host1 /]# dnf install ceph-debuginfo gdb zstd
  [ceph: root@host1 /]# unzstd /mnt/coredump/core.ceph-*.zst
  [ceph: root@host1 /]# gdb /usr/bin/ceph-mon /mnt/coredump/core.ceph-...
  (gdb) bt
  #0  0x00007fa9117383fc in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
  #1  0x00007fa910d7f8f0 in std::condition_variable::wait(std::unique_lock<std::mutex>&) () from /lib64/libstdc++.so.6
  #2  0x00007fa913d3f48f in AsyncMessenger::wait() () from /usr/lib64/ceph/libceph-common.so.2
  #3  0x0000563085ca3d7e in main ()