======================================
 Pool, PG and CRUSH Config Reference
======================================

.. index:: pools; configuration

When you create pools and set the number of placement groups (PGs) for each of
them, Ceph uses default values for any setting that you do not explicitly
override. **We recommend** overriding some of these defaults. Specifically, we
recommend setting a pool's replica size and overriding the default number of
placement groups. You can set these values when running `pool`_ commands. You
can also override the defaults by adding new settings to the ``[global]``
section of your Ceph configuration file.


.. literalinclude:: pool-pg.conf
   :language: ini


``mon_max_pool_pg_num``

:Description: The maximum number of placement groups per pool.
:Type: Integer
:Default: ``65536``


``mon_pg_create_interval``

:Description: Number of seconds between PG creation in the same
              Ceph OSD Daemon.

:Type: Float
:Default: ``30.0``


``mon_pg_stuck_threshold``

:Description: Number of seconds after which PGs can be considered as
              being stuck.

:Type: 32-bit Integer
:Default: ``300``

``mon_pg_min_inactive``

:Description: Raise ``HEALTH_ERR`` if the number of PGs that have been
              inactive longer than ``mon_pg_stuck_threshold`` exceeds this
              setting. A non-positive number disables the check, so
              ``HEALTH_ERR`` is never raised for this reason.
:Type: Integer
:Default: ``1``
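To illustrate how ``mon_pg_min_inactive`` and ``mon_pg_stuck_threshold``
interact, the check can be sketched as follows. This is a simplified model,
not the actual monitor code; the function name and arguments are hypothetical:

```python
MON_PG_STUCK_THRESHOLD = 300  # seconds (mon_pg_stuck_threshold default)
MON_PG_MIN_INACTIVE = 1       # mon_pg_min_inactive default

def inactive_pgs_health_err(inactive_since, now,
                            threshold=MON_PG_STUCK_THRESHOLD,
                            min_inactive=MON_PG_MIN_INACTIVE):
    """Return True if HEALTH_ERR should be raised for inactive PGs.

    inactive_since: per-PG timestamps at which each PG became inactive.
    """
    if min_inactive <= 0:  # non-positive disables the check entirely
        return False
    # Count PGs that have been inactive longer than the stuck threshold.
    stuck = sum(1 for t in inactive_since if now - t > threshold)
    return stuck >= min_inactive
```

With the defaults, a single PG inactive for more than 300 seconds is enough
to trip the check.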


``mon_pg_warn_min_per_osd``

:Description: Raise ``HEALTH_WARN`` if the average number
              of PGs per ``in`` OSD is under this number. A non-positive number
              disables this.
:Type: Integer
:Default: ``30``


``mon_pg_warn_min_objects``

:Description: Do not warn if the total number of RADOS objects in the cluster
              is below this number.
:Type: Integer
:Default: ``1000``


``mon_pg_warn_min_pool_objects``

:Description: Do not warn on pools whose RADOS object count is below this
              number.
:Type: Integer
:Default: ``1000``


``mon_pg_check_down_all_threshold``

:Description: The fraction of ``down`` OSDs above which all PGs are checked
              for stale ones.
:Type: Float
:Default: ``0.5``


``mon_pg_warn_max_object_skew``

:Description: Raise ``HEALTH_WARN`` if the average RADOS object count per PG
              of any pool is greater than ``mon_pg_warn_max_object_skew`` times
              the average RADOS object count per PG of all pools. A non-positive
              number disables this check. Note that this option applies to
              ``ceph-mgr`` daemons.
:Type: Float
:Default: ``10``
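A simplified sketch of this skew check follows. The helper is hypothetical
and is not the actual ``ceph-mgr`` implementation:

```python
def object_skew_warning(pool_objects_per_pg, cluster_objects_per_pg,
                        max_object_skew=10.0):
    """Return True if a pool's per-PG object count warrants HEALTH_WARN.

    pool_objects_per_pg: average RADOS objects per PG for one pool.
    cluster_objects_per_pg: average RADOS objects per PG across all pools.
    """
    if max_object_skew <= 0:  # non-positive disables the check
        return False
    return pool_objects_per_pg > max_object_skew * cluster_objects_per_pg
```

For example, with the default skew of ``10``, a pool averaging 150 objects
per PG warns when the cluster-wide average is 10 objects per PG.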


``mon_delta_reset_interval``

:Description: Seconds of inactivity before the PG delta is reset to ``0``.
              Ceph tracks the delta of each pool's used space to make it
              easier to gauge, for example, the progress of recovery or the
              performance of a cache tier. If no activity is reported for a
              pool within this interval, that pool's history of deltas is
              reset.
:Type: Integer
:Default: ``10``


``mon_osd_max_op_age``

:Description: The maximum age (in seconds) of an op before we become concerned
              (should be a power of two). ``HEALTH_WARN`` is raised if a
              request has been blocked longer than this limit.
:Type: Float
:Default: ``32.0``


``osd_pg_bits``

:Description: Placement group bits per Ceph OSD Daemon.
:Type: 32-bit Integer
:Default: ``6``


``osd_pgp_bits``

:Description: The number of bits per Ceph OSD Daemon for PGPs.
:Type: 32-bit Integer
:Default: ``6``


``osd_crush_chooseleaf_type``

:Description: The bucket type to use for ``chooseleaf`` in a CRUSH rule. Uses
              ordinal rank rather than name.

:Type: 32-bit Integer
:Default: ``1``. Typically a host containing one or more Ceph OSD Daemons.


``osd_crush_initial_weight``

:Description: The initial CRUSH weight for newly added OSDs.

:Type: Double
:Default: The size of the newly added OSD, expressed in TB. See
          `Weighting Bucket Items`_ for details.


``osd_pool_default_crush_rule``

:Description: The default CRUSH rule to use when creating a replicated pool.
:Type: 8-bit Integer
:Default: ``-1``, which means "pick the rule with the lowest numerical ID and 
          use that".  This is to make pool creation work in the absence of rule 0.


``osd_pool_erasure_code_stripe_unit``

:Description: Sets the default size, in bytes, of a chunk of an object
              stripe for erasure coded pools. Every object of size S
              will be stored as N stripes, with each data chunk
              receiving ``stripe unit`` bytes. Each stripe of ``N *
              stripe unit`` bytes will be encoded/decoded
              individually. This option can be overridden by the
              ``stripe_unit`` setting in an erasure code profile.

:Type: Unsigned 32-bit Integer
:Default: ``4096``
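To illustrate the arithmetic, the following sketch shows how an object maps
onto stripes for a profile with ``K`` data chunks. The helper and its
arguments are hypothetical; the actual layout is determined by the erasure
code plugin and profile:

```python
import math

def stripe_layout(object_size, k_data_chunks, stripe_unit=4096):
    """Return (stripe_width, stripe_count) for an erasure-coded object.

    Each stripe carries stripe_unit bytes per data chunk, so a full
    stripe holds k_data_chunks * stripe_unit bytes of object data, and
    each such stripe is encoded/decoded individually.
    """
    stripe_width = k_data_chunks * stripe_unit
    stripe_count = math.ceil(object_size / stripe_width)
    return stripe_width, stripe_count
```

With ``K = 4`` and the default ``stripe_unit`` of 4096 bytes, each stripe
encodes 16 KiB of object data.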


``osd_pool_default_size``

:Description: Sets the number of replicas for objects in the pool. The default
              value is the same as
              ``ceph osd pool set {pool-name} size {size}``.

:Type: 32-bit Integer
:Default: ``3``


``osd_pool_default_min_size``

:Description: Sets the minimum number of written replicas for objects in the
              pool in order to acknowledge a write operation to the client. If
              the minimum is not met, Ceph will not acknowledge the write to
              the client, **which may result in data loss**. This setting
              ensures a minimum number of replicas when operating in
              ``degraded`` mode.

:Type: 32-bit Integer
:Default: ``0``, which means no particular minimum is set. In that case, the
          effective minimum is ``size - (size / 2)``.
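The effective default can be computed as a quick sketch (integer division,
matching the formula above; the helper name is illustrative):

```python
def effective_min_size(size, configured_min_size=0):
    """Effective min_size when osd_pool_default_min_size is left at 0."""
    if configured_min_size > 0:
        return configured_min_size
    # Integer division: size 3 -> 2, size 2 -> 1, size 4 -> 2.
    return size - size // 2
```

So a replicated pool with ``size = 3`` requires at least 2 written replicas
before a write is acknowledged.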


``osd_pool_default_pg_num``

:Description: The default number of placement groups for a pool. The default
              value is the same as ``pg_num`` with ``mkpool``.

:Type: 32-bit Integer
:Default: ``32``


``osd_pool_default_pgp_num``

:Description: The default number of placement groups for placement (PGP) for a
              pool. The default value is the same as ``pgp_num`` with
              ``mkpool``. PG and PGP should be equal (for now).

:Type: 32-bit Integer
:Default: ``8``


``osd_pool_default_flags``

:Description: The default flags for new pools.
:Type: 32-bit Integer
:Default: ``0``


``osd_max_pgls``

:Description: The maximum number of placement groups to list. A client
              requesting a large number can tie up the Ceph OSD Daemon.

:Type: Unsigned 64-bit Integer
:Default: ``1024``
:Note: Default should be fine.


``osd_min_pg_log_entries``

:Description: The minimum number of placement group logs to maintain
              when trimming log files.

:Type: 32-bit Unsigned Integer
:Default: ``250``


``osd_max_pg_log_entries``

:Description: The maximum number of placement group logs to maintain
              when trimming log files.

:Type: 32-bit Unsigned Integer
:Default: ``10000``


``osd_default_data_pool_replay_window``

:Description: The time (in seconds) for an OSD to wait for a client to replay
              a request.

:Type: 32-bit Integer
:Default: ``45``

``osd_max_pg_per_osd_hard_ratio``

:Description: The ratio of the per-OSD PG limit beyond which an OSD refuses to
              create new PGs. An OSD stops creating new PGs when the number of
              PGs it serves exceeds
              ``osd_max_pg_per_osd_hard_ratio`` \* ``mon_max_pg_per_osd``.

:Type: Float
:Default: ``2``
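As a sketch of this limit: the ``mon_max_pg_per_osd`` value of ``250`` used
below is an assumption for illustration only; check the actual default for
your release. The helper is hypothetical:

```python
def pg_creation_blocked(pgs_on_osd, mon_max_pg_per_osd=250,
                        hard_ratio=2.0):
    """Return True if an OSD would refuse to create new PGs.

    The OSD blocks PG creation once the number of PGs it serves
    exceeds hard_ratio * mon_max_pg_per_osd.
    """
    return pgs_on_osd > hard_ratio * mon_max_pg_per_osd
```

Under these assumed values, an OSD serving more than 500 PGs would stop
creating new ones.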

``osd_recovery_priority``

:Description: Priority of recovery in the work queue.

:Type: Integer
:Default: ``5``

``osd_recovery_op_priority``

:Description: Default priority used for recovery operations if the pool does
              not override it.

:Type: Integer
:Default: ``3``

.. _pool: ../../operations/pools
.. _Monitoring OSDs and PGs: ../../operations/monitoring-osd-pg#peering
.. _Weighting Bucket Items: ../../operations/crush-map#weightingbucketitems