.. _rados_pools:

=======
 Pools
=======

Pools are logical partitions that are used to store objects.

Pools provide:

- **Resilience**: It is possible to set the number of OSDs that are allowed to
  fail without any data being lost. If your cluster uses replicated pools, the
  number of OSDs that can fail without data loss is equal to the number of
  replicas.
  
  For example: a typical configuration stores an object and two replicas
  (copies) of each RADOS object (that is: ``size = 3``), but you can configure
  the number of replicas on a per-pool basis. For `erasure-coded pools
  <../erasure-code>`_, resilience is defined as the number of coding chunks
  (for example, ``m = 2`` in the default **erasure code profile**).

- **Placement Groups**: You can set the number of placement groups (PGs) for
  the pool. In a typical configuration, the target number of PGs is
  approximately one hundred PGs per OSD. This provides reasonable balancing
  without consuming excessive computing resources.  When setting up multiple
  pools, be careful to set an appropriate number of PGs for each pool and for
  the cluster as a whole. Each PG belongs to a specific pool: when multiple
  pools use the same OSDs, make sure that the **sum** of PG replicas per OSD is
  in the desired PG-per-OSD target range. To calculate an appropriate number of
  PGs for your pools, use the `pgcalc`_ tool.

- **CRUSH Rules**: When data is stored in a pool, the placement of the object
  and its replicas (or chunks, in the case of erasure-coded pools) in your
  cluster is governed by CRUSH rules. Custom CRUSH rules can be created for a
  pool if the default rule does not fit your use case.

- **Snapshots**: The command ``ceph osd pool mksnap`` creates a snapshot of a
  pool.

Pool Names
==========

Pool names beginning with ``.`` are reserved for use by Ceph's internal
operations. Do not create or manipulate pools with these names.


List Pools
==========

There are multiple ways to get the list of pools in your cluster.

To list just your cluster's pool names (good for scripting), execute:

.. prompt:: bash $

   ceph osd pool ls

::

   .rgw.root
   default.rgw.log
   default.rgw.control
   default.rgw.meta

To list your cluster's pools with the pool number, run the following command:

.. prompt:: bash $

   ceph osd lspools

::

   1 .rgw.root
   2 default.rgw.log
   3 default.rgw.control
   4 default.rgw.meta

To list your cluster's pools with additional information, execute:

.. prompt:: bash $

   ceph osd pool ls detail

::

   pool 1 '.rgw.root' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 19 flags hashpspool stripe_width 0 application rgw read_balance_score 4.00
   pool 2 'default.rgw.log' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 21 flags hashpspool stripe_width 0 application rgw read_balance_score 4.00
   pool 3 'default.rgw.control' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 23 flags hashpspool stripe_width 0 application rgw read_balance_score 4.00
   pool 4 'default.rgw.meta' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 25 flags hashpspool stripe_width 0 pg_autoscale_bias 4 application rgw read_balance_score 4.00

To get even more detailed output, add the ``--format`` (or ``-f``) option with
one of the following values: ``json``, ``json-pretty``, ``xml``, or
``xml-pretty``.
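
For example, the following command prints the detailed pool listing as
pretty-printed JSON:

.. prompt:: bash $

   ceph osd pool ls detail --format=json-pretty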

.. _createpool:

Creating a Pool
===============

Before creating a pool, consult `Pool, PG and CRUSH Config Reference`_. Your
Ceph configuration file contains a setting (namely,
``osd_pool_default_pg_num``) that determines the default number of PGs for new
pools. This default value is NOT appropriate for most systems. In most cases,
you should override it when creating your pool. For details on PG numbers, see
`setting the number of placement groups`_.

For example, you might set the following defaults in your Ceph configuration
file::

    osd_pool_default_pg_num = 128
    osd_pool_default_pgp_num = 128

.. note:: In Luminous and later releases, each pool must be associated with the
   application that will be using the pool. For more information, see
   `Associating a Pool with an Application`_ below.

To create a pool, run one of the following commands:

.. prompt:: bash $

    ceph osd pool create {pool-name} [{pg-num} [{pgp-num}]] [replicated] \
             [crush-rule-name] [expected-num-objects]

or:

.. prompt:: bash $

    ceph osd pool create {pool-name} [{pg-num} [{pgp-num}]] erasure \
             [erasure-code-profile] [crush-rule-name] [expected-num-objects] [--autoscale-mode=<on,off,warn>]
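
For example, the following command creates a replicated pool (the pool name
and PG count shown here are only illustrative):

.. prompt:: bash $

    ceph osd pool create mypool 128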

For a brief description of the elements of the above commands, consult the
following:

.. describe:: {pool-name}

   The name of the pool. It must be unique.

   :Type: String
   :Required: Yes.

.. describe:: {pg-num}

   The total number of PGs in the pool. For details on calculating an
   appropriate number, see :ref:`placement groups`. The default value ``8`` is
   NOT suitable for most systems.

   :Type: Integer
   :Required: Yes.
   :Default: 8

.. describe:: {pgp-num}

   The total number of PGs for placement purposes. This **should be equal to
   the total number of PGs**, except briefly while ``pg_num`` is being
   increased or decreased. 

   :Type: Integer
   :Required: Yes. If no value has been specified in the command, then the default value is used (unless a different value has been set in Ceph configuration).
   :Default: 8

.. describe:: {replicated|erasure}

   The pool type. This can be either **replicated** (to recover from lost OSDs
   by keeping multiple copies of the objects) or **erasure** (to achieve a kind
   of `generalized parity RAID <../erasure-code>`_ capability).  The
   **replicated** pools require more raw storage but can implement all Ceph
   operations. The **erasure** pools require less raw storage but can perform
   only some Ceph tasks and may provide decreased performance.

   :Type: String
   :Required: No.
   :Default: replicated

.. describe:: [crush-rule-name]

   The name of the CRUSH rule to use for this pool. The specified rule must
   exist; otherwise the command will fail.

   :Type: String
   :Required: No.
   :Default: For **replicated** pools, it is the rule specified by the :confval:`osd_pool_default_crush_rule` configuration variable. This rule must exist.  For **erasure** pools, it is the ``erasure-code`` rule if the ``default`` `erasure code profile`_ is used or the ``{pool-name}`` rule  if not. This rule will be created implicitly if it doesn't already exist.

.. describe:: [erasure-code-profile=profile]

   For **erasure** pools only. Instructs Ceph to use the specified `erasure
   code profile`_. This profile must be an existing profile as defined by **osd
   erasure-code-profile set**.

   :Type: String
   :Required: No.

.. _erasure code profile: ../erasure-code-profile

.. describe:: --autoscale-mode=<on,off,warn>

   - ``on``: the Ceph cluster will automatically adjust the number of PGs in your pool based on actual usage.
   - ``warn``: the Ceph cluster will only recommend (via a health warning) changes to the number of PGs in your pool based on actual usage.
   - ``off``: the PG autoscaler is disabled for this pool. Refer to :ref:`placement groups` for more information.

   :Type: String
   :Required: No.
   :Default: The default behavior is determined by the :confval:`osd_pool_default_pg_autoscale_mode` option.

.. describe:: [expected-num-objects]

   The expected number of RADOS objects for this pool. By setting this value and
   assigning a negative value to **filestore merge threshold**, you arrange
   for the PG folder splitting to occur at the time of pool creation and
   avoid the latency impact that accompanies runtime folder splitting.

   :Type: Integer
   :Required: No.
   :Default: 0, no splitting at the time of pool creation.

.. _associate-pool-to-application:

Associating a Pool with an Application
======================================

Pools need to be associated with an application before they can be used. Pools
that are intended for use with CephFS and pools that are created automatically
by RGW are associated automatically. Pools that are intended for use with RBD
should be initialized with the ``rbd`` tool (see `Block Device Commands`_ for
more information).

For other cases, you can manually associate a free-form application name with a
pool by running the following command:

.. prompt:: bash $

   ceph osd pool application enable {pool-name} {application-name}

.. note:: CephFS uses the application name ``cephfs``, RBD uses the
   application name ``rbd``, and RGW uses the application name ``rgw``.
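
For example, the following command associates a hypothetical free-form
application name with a pool (both names are only illustrative):

.. prompt:: bash $

   ceph osd pool application enable mypool myapp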

Setting Pool Quotas
===================

To set pool quotas for the maximum number of bytes and/or the maximum number of
RADOS objects per pool, run the following command:

.. prompt:: bash $

   ceph osd pool set-quota {pool-name} [max_objects {obj-count}] [max_bytes {bytes}]

For example:

.. prompt:: bash $

   ceph osd pool set-quota data max_objects 10000

To remove a quota, set its value to ``0``.
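
For example, the following command removes the object-count quota that was set
above:

.. prompt:: bash $

   ceph osd pool set-quota data max_objects 0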


Deleting a Pool
===============

To delete a pool, run a command of the following form:

.. prompt:: bash $

   ceph osd pool delete {pool-name} [{pool-name} --yes-i-really-really-mean-it]

To remove a pool, you must set the ``mon_allow_pool_delete`` flag to ``true``
in the monitor's configuration. Otherwise, monitors will refuse to remove
pools.
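
For example, the flag can be set in the central configuration database by
running the following command:

.. prompt:: bash $

   ceph config set mon mon_allow_pool_delete true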

For more information, see `Monitor Configuration`_.

.. _Monitor Configuration: ../../configuration/mon-config-ref

If there are custom CRUSH rules that were created for a pool that is no longer
needed, consider deleting those rules as well. To see which rule a given pool
uses, run a command of the following form:

.. prompt:: bash $

   ceph osd pool get {pool-name} crush_rule

For example, if the custom rule is "123", check all pools to see whether they
use the rule by running the following command:

.. prompt:: bash $

    ceph osd dump | grep "^pool" | grep "crush_rule 123"

If no pools use this custom rule, then it is safe to delete the rule from the
cluster.
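
To delete the rule, run a command of the following form (specify the rule by
name):

.. prompt:: bash $

    ceph osd crush rule rm {rule-name}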

Similarly, if there are users with permissions restricted to a pool that no
longer exists, consider deleting those users by running commands of the
following forms:

.. prompt:: bash $

    ceph auth ls | grep -C 5 {pool-name}
    ceph auth del {user}


Renaming a Pool
===============

To rename a pool, run a command of the following form:

.. prompt:: bash $

   ceph osd pool rename {current-pool-name} {new-pool-name}

If you rename a pool for which an authenticated user has per-pool capabilities,
you must update the user's capabilities ("caps") to refer to the new pool name.
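
For example, the caps of a hypothetical user ``client.app`` could be updated to
reference the new pool name by running a command of the following form (the mon
and osd caps shown are only illustrative):

.. prompt:: bash $

   ceph auth caps client.app mon 'allow r' osd 'allow rwx pool={new-pool-name}'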


Showing Pool Statistics
=======================

To show a pool's utilization statistics, run the following command:

.. prompt:: bash $

   rados df

To obtain I/O information for a specific pool or for all pools, run a command
of the following form:

.. prompt:: bash $

   ceph osd pool stats [{pool-name}]


Making a Snapshot of a Pool
===========================

To make a snapshot of a pool, run a command of the following form:

.. prompt:: bash $

   ceph osd pool mksnap {pool-name} {snap-name}

Removing a Snapshot of a Pool
=============================

To remove a snapshot of a pool, run a command of the following form:

.. prompt:: bash $

   ceph osd pool rmsnap {pool-name} {snap-name}

.. _setpoolvalues:

Setting Pool Values
===================

To assign values to a pool's configuration keys, run a command of the following
form:

.. prompt:: bash $

   ceph osd pool set {pool-name} {key} {value}
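
For example, the following command enables aggressive inline compression on a
pool (the pool name is illustrative; see ``compression_mode`` below):

.. prompt:: bash $

   ceph osd pool set mypool compression_mode aggressive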

You may set values for the following keys:

.. _compression_algorithm:

.. describe:: compression_algorithm
   
   :Description: Sets the inline compression algorithm used in storing data on the underlying BlueStore back end. This key's setting overrides the global setting :confval:`bluestore_compression_algorithm`.
   :Type: String
   :Valid Settings: ``lz4``, ``snappy``, ``zlib``, ``zstd``

.. describe:: compression_mode
   
   :Description: Sets the policy for the inline compression algorithm used in storing data on the underlying BlueStore back end. This key's setting overrides the global setting :confval:`bluestore_compression_mode`.
   :Type: String
   :Valid Settings: ``none``, ``passive``, ``aggressive``, ``force``

.. describe:: compression_min_blob_size

   
   :Description: Sets the minimum size for the compression of chunks: that is, chunks smaller than this are not compressed.  This key's setting overrides the following global settings:
   
   * :confval:`bluestore_compression_min_blob_size` 
   * :confval:`bluestore_compression_min_blob_size_hdd`
   * :confval:`bluestore_compression_min_blob_size_ssd`

   :Type: Unsigned Integer


.. describe:: compression_max_blob_size
   
   :Description: Sets the maximum size for chunks: that is, chunks larger than this are broken into smaller blobs of this size before compression is performed.
   :Type: Unsigned Integer

.. _size:

.. describe:: size
   
   :Description: Sets the number of replicas for objects in the pool. For further details, see `Setting the Number of RADOS Object Replicas`_. Replicated pools only.
   :Type: Integer

.. _min_size:

.. describe:: min_size
   
   :Description: Sets the minimum number of replicas required for I/O.  For further details, see `Setting the Number of RADOS Object Replicas`_.  For erasure-coded pools, this should be set to a value greater than ``k``. If I/O is allowed at the value ``k``, then there is no redundancy and data will be lost in the event of a permanent OSD failure. For more information, see `Erasure Code <../erasure-code>`_.
   :Type: Integer
   :Version: ``0.54`` and above

.. _pg_num:

.. describe:: pg_num
   
   :Description: Sets the number of PGs for the pool.
   :Type: Integer
   :Valid Range: ``0`` to ``mon_max_pool_pg_num``. If set to ``0``, the value of ``osd_pool_default_pg_num`` will be used. 

.. _pgp_num:

.. describe:: pgp_num
   
   :Description: Sets the effective number of PGs to use when calculating data placement.
   :Type: Integer
   :Valid Range: Between ``1`` and the current value of ``pg_num``.

.. _crush_rule:

.. describe:: crush_rule
   
   :Description: Sets the CRUSH rule that Ceph uses to map object placement within the pool.
   :Type: String

.. _allow_ec_overwrites:

.. describe:: allow_ec_overwrites
   
   :Description: Determines whether writes to an erasure-coded pool are allowed to update only part of a RADOS object. This allows CephFS and RBD to use an EC (erasure-coded) pool for user data (but not for metadata). For more details, see `Erasure Coding with Overwrites`_.
   :Type: Boolean

   .. versionadded:: 12.2.0
   
.. describe:: hashpspool

   :Description: Sets and unsets the HASHPSPOOL flag on a given pool.
   :Type: Integer
   :Valid Range: ``1`` sets flag, ``0`` unsets flag

.. _nodelete:

.. describe:: nodelete

   :Description: Sets and unsets the NODELETE flag on a given pool.
   :Type: Integer
   :Valid Range: ``1`` sets flag, ``0`` unsets flag
   :Version: Version ``FIXME``

.. _nopgchange:

.. describe:: nopgchange

   :Description: Sets and unsets the NOPGCHANGE flag on a given pool.
   :Type: Integer
   :Valid Range: ``1`` sets flag, ``0`` unsets flag
   :Version: Version ``FIXME``

.. _nosizechange:

.. describe:: nosizechange

   :Description: Sets and unsets the NOSIZECHANGE flag on a given pool.
   :Type: Integer
   :Valid Range: ``1`` sets flag, ``0`` unsets flag
   :Version: Version ``FIXME``

.. _bulk:

.. describe:: bulk

   :Description: Sets and unsets the bulk flag on a given pool.
   :Type: Boolean
   :Valid Range: ``true``/``1`` sets flag, ``false``/``0`` unsets flag

.. _write_fadvise_dontneed:

.. describe:: write_fadvise_dontneed

   :Description: Sets and unsets the WRITE_FADVISE_DONTNEED flag on a given pool.
   :Type: Integer
   :Valid Range: ``1`` sets flag, ``0`` unsets flag

.. _noscrub:

.. describe:: noscrub

   :Description: Sets and unsets the NOSCRUB flag on a given pool.
   :Type: Integer
   :Valid Range: ``1`` sets flag, ``0`` unsets flag

.. _nodeep-scrub:

.. describe:: nodeep-scrub

   :Description: Sets and unsets the NODEEP_SCRUB flag on a given pool.
   :Type: Integer
   :Valid Range: ``1`` sets flag, ``0`` unsets flag

.. _target_max_bytes:

.. describe:: target_max_bytes
   
   :Description: Ceph will begin flushing or evicting objects when the
                 ``max_bytes`` threshold is triggered.
   :Type: Integer
   :Example: ``1000000000000``  # 1 TB

.. _target_max_objects:

.. describe:: target_max_objects
   
   :Description: Ceph will begin flushing or evicting objects when the
                 ``max_objects`` threshold is triggered.
   :Type: Integer
   :Example: ``1000000``  # 1M objects

.. _fast_read:

.. describe:: fast_read
   
   :Description: For erasure-coded pools, if this flag is turned ``on``, the
                 read request issues "sub reads" to all shards, and then waits
                 until it receives enough shards to decode before it serves 
                 the client. If *jerasure* or *isa* erasure plugins are in 
                 use, then after the first *K* replies have returned, the 
                 client's request is served immediately using the data decoded 
                 from these replies. This approach sacrifices resources in 
                 exchange for better performance. This flag is supported only 
                 for erasure-coded pools.
   :Type: Boolean 
   :Default: ``0``

.. _scrub_min_interval:

.. describe:: scrub_min_interval
   
   :Description: Sets the minimum interval (in seconds) for successive scrubs of the pool's PGs when the load is low. If the default value of ``0`` is in effect, then the value of ``osd_scrub_min_interval`` from central config is used.

   :Type: Double
   :Default: ``0``

.. _scrub_max_interval:

.. describe:: scrub_max_interval
   
   :Description: Sets the maximum interval (in seconds) for scrubs of the pool's PGs regardless of cluster load. If the value of ``scrub_max_interval`` is ``0``, then the value ``osd_scrub_max_interval`` from central config is used.

   :Type: Double
   :Default: ``0``

.. _deep_scrub_interval:

.. describe:: deep_scrub_interval
   
   :Description: Sets the interval (in seconds) for pool “deep” scrubs of the pool's PGs. If the value of ``deep_scrub_interval`` is ``0``, the value ``osd_deep_scrub_interval`` from central config is used.

   :Type: Double
   :Default: ``0``

.. _recovery_priority:

.. describe:: recovery_priority
   
   :Description: Setting this value adjusts a pool's computed reservation priority. This value must be in the range ``-10`` to ``10``. Any pool assigned a negative value will be given a lower priority than any new pools, so users are directed to assign negative values to low-priority pools.

   :Type: Integer
   :Default: ``0``


.. _recovery_op_priority:

.. describe:: recovery_op_priority
   
   :Description: Sets the recovery operation priority for a specific pool's PGs. This overrides the general priority determined by :confval:`osd_recovery_op_priority`.

   :Type: Integer
   :Default: ``0``


Getting Pool Values
===================

To get a value from a pool's key, run a command of the following form:

.. prompt:: bash $

   ceph osd pool get {pool-name} {key}
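
For example, the following command reports the replica count of a pool (the
pool name is illustrative):

.. prompt:: bash $

   ceph osd pool get mypool size

::

   size: 3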


You may get values from the following keys:


``size``

:Description: See size_.

:Type: Integer


``min_size``

:Description: See min_size_.

:Type: Integer
:Version: ``0.54`` and above


``pg_num``

:Description: See pg_num_.

:Type: Integer


``pgp_num``

:Description: See pgp_num_.

:Type: Integer
:Valid Range: Equal to or less than ``pg_num``.


``crush_rule``

:Description: See crush_rule_.


``target_max_bytes``

:Description: See target_max_bytes_.

:Type: Integer


``target_max_objects``

:Description: See target_max_objects_.

:Type: Integer


``fast_read``

:Description: See fast_read_.

:Type: Boolean


``scrub_min_interval``

:Description: See scrub_min_interval_.

:Type: Double


``scrub_max_interval``

:Description: See scrub_max_interval_.

:Type: Double


``deep_scrub_interval``

:Description: See deep_scrub_interval_.

:Type: Double


``allow_ec_overwrites``

:Description: See allow_ec_overwrites_.

:Type: Boolean


``recovery_priority``

:Description: See recovery_priority_.

:Type: Integer


``recovery_op_priority``

:Description: See recovery_op_priority_.

:Type: Integer


Setting the Number of RADOS Object Replicas
===========================================

To set the number of data replicas on a replicated pool, run a command of the
following form:

.. prompt:: bash $

   ceph osd pool set {poolname} size {num-replicas}

.. important:: The ``{num-replicas}`` argument includes the primary object
   itself.  For example, if you want there to be two replicas of the object in
   addition to the original object (for a total of three instances of the
   object), specify ``3`` by running the following command:

.. prompt:: bash $

   ceph osd pool set data size 3

You may run the above command for each pool. 

.. Note:: An object might accept I/Os in degraded mode with fewer than ``pool
   size`` replicas. To set a minimum number of replicas required for I/O, you
   should use the ``min_size`` setting.  For example, you might run the
   following command:

.. prompt:: bash $

   ceph osd pool set data min_size 2

This command ensures that no object in the data pool will receive I/O if it has
fewer than ``min_size`` (in this case, two) replicas.


Getting the Number of Object Replicas
=====================================

To get the number of object replicas, run the following command:

.. prompt:: bash $

   ceph osd dump | grep 'replicated size'

Ceph will list pools and highlight the ``replicated size`` attribute.  By
default, Ceph creates two replicas of an object (a total of three copies, for a
size of ``3``).
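
The output will resemble the detailed pool listing shown earlier, with one line
per replicated pool; for example (names and values will vary by cluster)::

   pool 1 'data' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 26 flags hashpspool stripe_width 0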

Managing pools that are flagged with ``--bulk``
===============================================

See :ref:`managing_bulk_flagged_pools`.


.. _pgcalc: https://old.ceph.com/pgcalc/
.. _Pool, PG and CRUSH Config Reference: ../../configuration/pool-pg-config-ref
.. _Bloom Filter: https://en.wikipedia.org/wiki/Bloom_filter
.. _setting the number of placement groups: ../placement-groups#set-the-number-of-placement-groups
.. _Erasure Coding with Overwrites: ../erasure-code#erasure-coding-with-overwrites
.. _Block Device Commands: ../../../rbd/rados-rbd-cmds/#create-a-block-device-pool