From 19fcec84d8d7d21e796c7624e521b60d28ee21ed Mon Sep 17 00:00:00 2001
From: Daniel Baumann
Date: Sun, 7 Apr 2024 20:45:59 +0200
Subject: Adding upstream version 16.2.11+ds.

Signed-off-by: Daniel Baumann
---
 doc/mgr/localpool.rst | 37 +++++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)
 create mode 100644 doc/mgr/localpool.rst

(limited to 'doc/mgr/localpool.rst')

diff --git a/doc/mgr/localpool.rst b/doc/mgr/localpool.rst
new file mode 100644
index 000000000..fe8bd3942
--- /dev/null
+++ b/doc/mgr/localpool.rst
@@ -0,0 +1,37 @@
+Local Pool Module
+=================
+
+The *localpool* module can automatically create RADOS pools that are
+localized to a subset of the overall cluster. For example, by default, it will
+create a pool for each distinct ``rack`` in the cluster. This can be useful for
+deployments where it is desirable to distribute some data locally and other data
+globally across the cluster. One use case is measuring the performance and
+testing the behavior of specific drive, NIC, or chassis models in isolation.
+
+Enabling
+--------
+
+The *localpool* module is enabled with::
+
+  ceph mgr module enable localpool
+
+Configuring
+-----------
+
+The *localpool* module understands the following options:
+
+* **subtree** (default: `rack`): which CRUSH subtree type the module
+  should create a pool for.
+* **failure_domain** (default: `host`): the failure domain across which
+  data replicas should be separated.
+* **pg_num** (default: `128`): number of PGs to create for each pool.
+* **num_rep** (default: `3`): number of replicas for each pool.
+  (Currently, pools are always replicated.)
+* **min_size** (default: none): value to set ``min_size`` to (left at
+  Ceph's default if this option is not set).
+* **prefix** (default: `by-$subtreetype-`): prefix for the pool name.
+
+These options are set via the config-key interface. For example, to
+change the replication level to 2x with only 64 PGs, ::
+
+  ceph config set mgr mgr/localpool/num_rep 2
+  ceph config set mgr mgr/localpool/pg_num 64
-- 
cgit v1.2.3
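
As a sketch of how the other options from the list above might be tuned
(the option names come from the documentation in this patch, but the
``datacenter`` subtree value is only an assumption about a particular
cluster's CRUSH map), pools could instead be created per datacenter with
replicas spread across racks::

  # assumes the CRUSH map actually defines 'datacenter' and 'rack' subtrees
  ceph config set mgr mgr/localpool/subtree datacenter
  ceph config set mgr mgr/localpool/failure_domain rack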
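To verify the result, the automatically created pools can be listed. With
the default ``by-$subtreetype-`` prefix and the default ``rack`` subtree,
names such as ``by-rack-*`` would be expected; the exact names depend on
the rack names in the cluster's CRUSH map::

  # lists all pools; the module-created ones carry the configured prefix
  ceph osd pool ls | grep by-rack-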