Local Pool Module
=================

The *localpool* module can automatically create RADOS pools that are
localized to a subset of the overall cluster. For example, by default, it
will create a pool for each distinct ``rack`` in the cluster. This can be
useful for deployments where it is desirable to distribute some data
locally and other data globally across the cluster. One use case is
measuring the performance and testing the behavior of specific drive, NIC,
or chassis models in isolation.
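
For illustration, in a hypothetical cluster whose CRUSH map contains two
racks named ``r1`` and ``r2``, the default settings would result in one
pool per rack, named with the default ``by-rack-`` prefix::

  # hypothetical output; actual names depend on your CRUSH rack names
  $ ceph osd pool ls
  by-rack-r1
  by-rack-r2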

Enabling
--------

The *localpool* module is enabled with::

  ceph mgr module enable localpool
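
To confirm that the module is active, list the enabled manager modules and
check that ``localpool`` appears in the output::

  ceph mgr module ls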

Configuring
-----------

The *localpool* module understands the following options:

* **subtree** (default: `rack`): which CRUSH subtree type the module
  should create a pool for.
* **failure_domain** (default: `host`): which failure domain data
  replicas should be separated across.
* **pg_num** (default: `128`): number of PGs to create for each pool.
* **num_rep** (default: `3`): number of replicas for each pool.
  (Currently, pools are always replicated.)
* **min_size** (default: none): value to set min_size to (unchanged from
  Ceph's default if this option is not set).
* **prefix** (default: `by-$subtreetype-`): prefix for the pool name.

These options are set via the config-key interface. For example, to
change the replication level to 2x with only 64 PGs::

  ceph config set mgr mgr/localpool/num_rep 2
  ceph config set mgr mgr/localpool/pg_num 64
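
As another example, to have the module create one pool per ``datacenter``
rather than one per ``rack`` (this sketch assumes that your CRUSH
hierarchy actually contains ``datacenter`` buckets)::

  ceph config set mgr mgr/localpool/subtree datacenter

The current value of an option can be read back with ``ceph config get``.
For example::

  ceph config get mgr mgr/localpool/subtree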