:orphan:

==========================================
 crushtool -- CRUSH map manipulation tool
==========================================

.. program:: crushtool

Synopsis
========

| **crushtool** ( -d *map* | -c *map.txt* | --build --num_osds *numosds*
  *layer1* *...* | --test ) [ -o *outfile* ]


Description
===========

**crushtool** is a utility that lets you create, compile, decompile
and test CRUSH map files.

CRUSH is a pseudo-random data distribution algorithm that efficiently
maps input values (which, in the context of Ceph, correspond to Placement
Groups) across a heterogeneous, hierarchically structured device map.
The algorithm was originally described in detail in the following paper
(although it has evolved somewhat since then)::

   http://www.ssrc.ucsc.edu/Papers/weil-sc06.pdf

The tool has four modes of operation.

.. option:: --compile|-c map.txt

   will compile a plaintext map.txt into a binary map file.

.. option:: --decompile|-d map

   will take the compiled map and decompile it into a plaintext source
   file, suitable for editing.

.. option:: --build --num_osds {num-osds} layer1 ...

   will create a map with the given layer structure. See below for a
   detailed explanation.

.. option:: --test

   will perform a dry run of a CRUSH mapping for a range of input
   values ``[--min-x,--max-x]`` (default ``[0,1023]``) which can be
   thought of as simulated Placement Groups. See below for a more
   detailed explanation.

Unlike other Ceph tools, **crushtool** does not accept generic options
such as **--debug-crush** from the command line. They can, however, be
provided via the CEPH_ARGS environment variable. For instance, to
silence all output from the CRUSH subsystem::

    CEPH_ARGS="--debug-crush 0" crushtool ...


Running tests with --test
=========================

The test mode will use the input crush map (as specified with **-i
map**) and perform a dry run of CRUSH mapping or random placement
(if **--simulate** is set). On completion, two kinds of reports can be
created:

1) The **--show-...** options print human-readable information on stderr.
2) The **--output-csv** option creates CSV files that are documented by
   the **--help-output** option.

Note: Each Placement Group (PG) has an integer ID which can be obtained
from ``ceph pg dump`` (for example, PG 2.2f means pool id 2, PG id 0x2f).
The pool and PG IDs are combined by a function to get a value which is
given to CRUSH to map it to OSDs. crushtool does not know about PGs or
pools; it only runs simulations by mapping values in the range
``[--min-x,--max-x]``.
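
For example, to simulate 100 such input values with three replicas,
using the options documented below (``mymap`` is a hypothetical
compiled map)::

    crushtool -i mymap --test --min-x 0 --max-x 99 \
        --num-rep 3 --show-mappings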


.. option:: --show-statistics

   Displays a summary of the distribution. For instance::

       rule 1 (metadata) num_rep 5 result size == 5:	1024/1024

   shows that rule **1**, which is named **metadata**, successfully
   mapped **1024** values to **result size == 5** devices when trying
   to map them to **num_rep 5** replicas. When it fails to provide the
   required mapping, presumably because the number of **tries** must
   be increased, a breakdown of the failures is displayed. For instance::

       rule 1 (metadata) num_rep 10 result size == 8:	4/1024
       rule 1 (metadata) num_rep 10 result size == 9:	93/1024
       rule 1 (metadata) num_rep 10 result size == 10:	927/1024

   shows that although **num_rep 10** replicas were required, **4**
   out of **1024** values (**4/1024**) were mapped to only **8**
   devices (**result size == 8**).
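
   For example, such a summary can be produced with a command like the
   following (``mymap`` is a hypothetical compiled map)::

      crushtool -i mymap --test --num-rep 5 --show-statistics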

.. option:: --show-mappings

   Displays the mapping of each value in the range ``[--min-x,--max-x]``.
   For instance::

       CRUSH rule 1 x 24 [11,6]

   shows that value **24** is mapped to devices **[11,6]** by rule
   **1**.

   One of the following is required when using the ``--show-mappings`` option:

        (a) ``--num-rep``
        (b) both ``--min-rep`` and ``--max-rep``

   ``--num-rep`` stands for "number of replicas"; it indicates the number
   of replicas in a pool and is used to specify an exact number of
   replicas (for example, ``--num-rep 5``). ``--min-rep`` and ``--max-rep``
   are used together to specify a range of replica counts (for example,
   ``--min-rep 1 --max-rep 10``).
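
   For example, to display mappings for every replica count from 1 to 10
   (``mymap`` is a hypothetical compiled map)::

      crushtool -i mymap --test --show-mappings --min-rep 1 --max-rep 10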

.. option:: --show-bad-mappings

   Displays which values failed to be mapped to the required number of
   devices. For instance::

     bad mapping rule 1 x 781 num_rep 7 result [8,10,2,11,6,9]

   shows that when rule **1** was required to map **7** devices, it
   could map only six: **[8,10,2,11,6,9]**.

.. option:: --show-utilization

   Displays the expected and actual utilization for each device, for
   each number of replicas. For instance::

     device 0: stored : 951      expected : 853.333
     device 1: stored : 963      expected : 853.333
     ...

   shows that device **0** stored **951** values and was expected to
   store approximately **853**.
   Implies **--show-statistics**.
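
   For example (``mymap`` is a hypothetical compiled map)::

      crushtool -i mymap --test --num-rep 3 --show-utilization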

.. option:: --show-utilization-all

   Displays the same as **--show-utilization** but does not suppress
   output when the weight of a device is zero.
   Implies **--show-statistics**.

.. option:: --show-choose-tries

   Displays how many attempts were needed to find a device mapping.
   For instance::

      0:     95224
      1:      3745
      2:      2225
      ..

   shows that **95224** mappings succeeded without retries, **3745**
   mappings succeeded after one retry, etc. There are as many rows as
   the value of the **--set-choose-total-tries** option.
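
   For example (``mymap`` is a hypothetical compiled map)::

      crushtool -i mymap --test --num-rep 3 --show-choose-tries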

.. option:: --output-csv

   Creates CSV files (in the current directory) containing information
   documented by **--help-output**. The files are named after the rule
   used when collecting the statistics. For instance, if the rule
   'metadata' is used, the CSV files will be::

      metadata-absolute_weights.csv
      metadata-device_utilization.csv
      ...

   The first line of each file briefly describes the column layout. For
   instance::

      metadata-absolute_weights.csv
      Device ID, Absolute Weight
      0,1
      ...
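
   For example, to generate the CSV files for a simulation with three
   replicas (``mymap`` is a hypothetical compiled map)::

      crushtool -i mymap --test --num-rep 3 --output-csv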

.. option:: --output-name NAME

   Prepend **NAME** to the file names generated when **--output-csv**
   is specified. For instance **--output-name FOO** will create
   files::

      FOO-metadata-absolute_weights.csv
      FOO-metadata-device_utilization.csv
      ...
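
   Combined with **--output-csv**, a full invocation might look like this
   (``mymap`` is a hypothetical compiled map)::

      crushtool -i mymap --test --num-rep 3 --output-csv --output-name FOO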

The **--set-...** options can be used to modify the tunables of the
input crush map. The input crush map is modified in
memory. For example::

      $ crushtool -i mymap --test --show-bad-mappings
      bad mapping rule 1 x 781 num_rep 7 result [8,10,2,11,6,9]

could be fixed by increasing the **choose-total-tries** as follows::

      $ crushtool -i mymap --test \
          --show-bad-mappings \
          --set-choose-total-tries 500
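
Once a satisfactory value is found, the modified map can be written out
with **-o** so the change becomes permanent (a sketch; ``mymap`` and
``fixedmap`` are hypothetical file names)::

      $ crushtool -i mymap --set-choose-total-tries 500 -o fixedmap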

Building a map with --build
===========================

The build mode will generate hierarchical maps. The first argument
specifies the number of devices (leaves) in the CRUSH hierarchy. Each
subsequent layer describes how the layer (or devices) preceding it
should be grouped.

Each layer consists of::

       bucket ( uniform | list | tree | straw | straw2 ) size

The first component, **bucket**, is the name of the bucket type in the
layer (e.g. "rack"). Each bucket name will be built by appending a unique
number to the **bucket** string (e.g. "rack0", "rack1"...).

The second component is the bucket algorithm: **straw** should be used
most of the time.

The third component is the maximum size of the bucket. A size of zero
means a bucket of infinite capacity.


Example
=======

Suppose we have two rows with two racks each and 20 nodes per rack. Suppose
each node contains 4 storage devices for Ceph OSD Daemons. This configuration
allows us to deploy 320 Ceph OSD Daemons. Let's assume a 42U rack with
2U nodes, leaving an extra 2U for a rack switch.

To reflect our hierarchy of devices, nodes, racks and rows, we would execute
the following::

    $ crushtool -o crushmap --build --num_osds 320 \
           node straw 4 \
           rack straw 20 \
           row straw 2 \
           root straw 0
    # id	weight	type name	reweight
    -87	320	root root
    -85	160		row row0
    -81	80			rack rack0
    -1	4				node node0
    0	1					osd.0	1
    1	1					osd.1	1
    2	1					osd.2	1
    3	1					osd.3	1
    -2	4				node node1
    4	1					osd.4	1
    5	1					osd.5	1
    ...

CRUSH rules are created so the generated crushmap can be
tested. They are the same rules as the ones created by default when
creating a new Ceph cluster. They can be further edited with::

       # decompile
       crushtool -d crushmap -o map.txt

       # edit
       emacs map.txt

       # recompile
       crushtool -c map.txt -o crushmap
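
After recompiling, the edited map can be checked with the test mode
described above, for instance::

       crushtool -i crushmap --test --num-rep 3 --show-bad-mappings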

Reclassify
==========

The *reclassify* function allows users to transition from older maps that
maintain parallel hierarchies for OSDs of different types to a modern CRUSH
map that makes use of the *device class* feature.  For more information,
see https://docs.ceph.com/en/latest/rados/operations/crush-map-edits/#migrating-from-a-legacy-ssd-rule-to-device-classes.
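
A sketch of a typical invocation, adapted from the linked documentation
(the bucket names and device classes shown are examples and will differ
per cluster)::

       crushtool -i original --reclassify \
         --set-subtree-class default hdd \
         --reclassify-root default hdd \
         --reclassify-bucket %-ssd ssd default \
         -o adjusted

The adjusted map can then be compared against the original with
``crushtool -i original --compare adjusted``.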

Example output from --test
==========================

See https://github.com/ceph/ceph/blob/master/src/test/cli/crushtool/set-choose.t
for sample ``crushtool --test`` commands and the output they produce.

Availability
============

**crushtool** is part of Ceph, a massively scalable, open-source, distributed storage system. Please
refer to the Ceph documentation at http://ceph.com/docs for more
information.


See also
========

:doc:`ceph <ceph>`\(8),
:doc:`osdmaptool <osdmaptool>`\(8)

Authors
=======

John Wilkins, Sage Weil, Loic Dachary