.. _mds-scrub:

======================
Ceph File System Scrub
======================

CephFS provides the cluster administrator (operator) with a set of scrub commands to
check the consistency of a file system. Scrub can be classified into two parts:

#. Forward Scrub: In which the scrub operation starts at the root of the file system
   (or a subdirectory) and looks at everything that can be touched in the hierarchy
   to ensure consistency.

#. Backward Scrub: In which the scrub operation looks at every RADOS object in the
   file system pools and maps it back to the file system hierarchy.

This document details the commands to initiate and control forward scrub (referred
to as scrub hereafter).

.. warning::

   CephFS forward scrubs are started and manipulated on rank 0. All scrub
   commands must be directed at rank 0.
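
Throughout this document the address ``mds.<fsname>:0`` is used to target rank 0 of
the named file system. If needed, the daemon currently holding rank 0 can be
confirmed with, for example (the file system name below is a placeholder)::

   ceph fs status cephfs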

Initiate File System Scrub
==========================

To start a scrub operation for a directory tree use the following command::

   ceph tell mds.<fsname>:0 scrub start <path> [scrubopts] [tag]

where ``scrubopts`` is a comma-delimited list of ``recursive``, ``force``, or
``repair`` and ``tag`` is an optional custom string tag (the default is a generated
UUID). An example command is::

   ceph tell mds.cephfs:0 scrub start / recursive
   {
       "return_code": 0,
       "scrub_tag": "6f0d204c-6cfd-4300-9e02-73f382fd23c1",
       "mode": "asynchronous"
   }

Recursive scrub is asynchronous (as hinted by ``mode`` in the output above).
Asynchronous scrubs must be polled using the ``scrub status`` command to
determine their status.
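
As a further sketch, a recursive scrub that also attempts repairs and carries a
custom tag might be started as follows (the directory path and the tag below are
placeholders)::

   ceph tell mds.cephfs:0 scrub start /path/to/dir recursive,repair nightly-check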

The scrub tag is used to differentiate scrubs and also to mark each inode's
first data object in the default data pool (where the backtrace information is
stored) with a ``scrub_tag`` extended attribute with the value of the tag. You
can verify an inode was scrubbed by looking at the extended attribute using the
RADOS utilities.
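
A minimal sketch of such a check (the mount point, file path, and data pool name
below are placeholders; a file's first data object is named
``<inode-number-in-hex>.00000000`` in the data pool)::

   # hexadecimal inode number of a file that was scrubbed
   ino_hex=$(printf '%x' "$(stat -c %i /mnt/cephfs/some/file)")

   # list the object's extended attributes and read the scrub_tag value
   rados -p cephfs_data listxattr "${ino_hex}.00000000"
   rados -p cephfs_data getxattr "${ino_hex}.00000000" scrub_tag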

Scrubs work with multiple active MDS daemons (multiple ranks). The scrub is managed
by rank 0 and distributed across MDS ranks as appropriate.


Monitor (ongoing) File System Scrubs
====================================

The status of ongoing scrubs can be monitored and polled using the ``scrub status``
command. This command lists ongoing scrubs (identified by their tag) along with the
path and options used to initiate each scrub::

   ceph tell mds.cephfs:0 scrub status
   {
       "status": "scrub active (85 inodes in the stack)",
       "scrubs": {
           "6f0d204c-6cfd-4300-9e02-73f382fd23c1": {
               "path": "/",
               "options": "recursive"
           }
       }
   }

``status`` shows the number of inodes that are scheduled to be scrubbed at any point in
time and can therefore change between subsequent ``scrub status`` invocations. In addition,
a high-level summary of the scrub operation (which includes the operation state and the
paths on which scrub was triggered) is displayed in ``ceph status``::

   ceph status
   [...]

   task status:
     scrub status:
         mds.0: active [paths:/]

   [...]

A scrub is complete when it no longer shows up in this list (although that may
change in future releases). Any damage will be reported via cluster health warnings.
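
As a sketch, completion of a tagged scrub can be waited for by polling
``scrub status`` until the tag disappears from its output (the file system name
and the tag below are placeholders)::

   tag="6f0d204c-6cfd-4300-9e02-73f382fd23c1"
   while ceph tell mds.cephfs:0 scrub status | grep -q "$tag"; do
       sleep 10
   done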

Control (ongoing) File System Scrubs
====================================

- Pause: Pausing an ongoing scrub operation results in no new or pending inodes being
  scrubbed once the in-flight RADOS operations (for the inodes that are currently being
  scrubbed) finish::

   ceph tell mds.cephfs:0 scrub pause
   {
       "return_code": 0
   }

  The ``scrub status`` output after pausing reflects the paused state. At this point,
  initiating new scrub operations (via ``scrub start``) only queues the inodes for
  scrubbing::

   ceph tell mds.cephfs:0 scrub status
   {
       "status": "PAUSED (66 inodes in the stack)",
       "scrubs": {
           "6f0d204c-6cfd-4300-9e02-73f382fd23c1": {
               "path": "/",
               "options": "recursive"
           }
       }
   }

- Resume: Resuming kick-starts a paused scrub operation::

   ceph tell mds.cephfs:0 scrub resume
   {
       "return_code": 0
   }

- Abort: Aborting an ongoing scrub operation removes pending inodes from the scrub
  queue (thereby aborting the scrub) once the in-flight RADOS operations (for the inodes
  that are currently being scrubbed) finish::

   ceph tell mds.cephfs:0 scrub abort
   {
       "return_code": 0
   }

Damages
=======

The types of damage that can be reported and repaired by File System Scrub are:

* DENTRY : Inode's dentry is missing.

* DIR_FRAG : Inode's directory fragment(s) is missing.

* BACKTRACE : Inode's backtrace in the data pool is corrupted.
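
As a sketch, damage entries recorded by scrub can be listed with the MDS
``damage ls`` admin command, and a repair attempt can then be made by re-running
scrub with the ``repair`` option (the file system name and the path below are
placeholders)::

   ceph tell mds.cephfs:0 damage ls
   ceph tell mds.cephfs:0 scrub start /path/to/damaged/dir recursive,repair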

Evaluate strays using recursive scrub
=====================================

- In order to evaluate strays (i.e. purge stray directories in ``~mdsdir``), use the following command::

    ceph tell mds.<fsname>:0 scrub start ~mdsdir recursive

- ``~mdsdir`` is not enqueued by default when scrubbing at the CephFS root. In order to perform stray
  evaluation at the root, run scrub with the ``scrub_mdsdir`` and ``recursive`` flags::

    ceph tell mds.<fsname>:0 scrub start / recursive,scrub_mdsdir