===================================
Configuring Directory Fragmentation
===================================

In CephFS, directories are *fragmented* when they become very large
or very busy.  Fragmentation splits up the directory's metadata so that
it can be shared between multiple MDS daemons, and stored across
multiple objects in the metadata pool.

In normal operation, directory fragmentation is invisible to
users and administrators, and all the configuration settings mentioned
here should be left at their default values.

While directory fragmentation enables CephFS to handle very large
numbers of entries in a single directory, application programmers should
remain conservative about creating very large directories, as they still
have a resource cost in situations such as a CephFS client listing
the directory, where all the fragments must be loaded at once.

.. tip:: The root directory cannot be fragmented.

All directories are initially created as a single fragment.  This fragment
may be *split* to divide up the directory into more fragments, and these
fragments may be *merged* to reduce the number of fragments in the directory.

Splitting and merging
=====================

When an MDS identifies a directory fragment that should be split, it
does not perform the split immediately.  Because splitting interrupts
metadata IO, a short delay is applied so that short bursts of client IO
can complete before the split begins.  This delay is configured with
``mds_bal_fragment_interval``, which defaults to 5 seconds.
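
The queueing behaviour can be pictured with a small model (the names
``pending_splits``, ``queue_split`` and ``run_due_splits`` are invented
for illustration; this is not the MDS implementation):

.. code-block:: python

    import time

    MDS_BAL_FRAGMENT_INTERVAL = 5.0   # seconds; the default delay

    # Hypothetical model: fragments queued for splitting, with the time
    # at which each fragment was marked for splitting.
    pending_splits = []               # list of (fragment_id, queued_at)

    def queue_split(fragment_id):
        """Mark a fragment for splitting; the split itself is deferred."""
        pending_splits.append((fragment_id, time.monotonic()))

    def run_due_splits(do_split):
        """Perform splits whose delay has elapsed; keep the rest queued."""
        now = time.monotonic()
        still_pending = []
        for fragment_id, queued_at in pending_splits:
            if now - queued_at >= MDS_BAL_FRAGMENT_INTERVAL:
                do_split(fragment_id)
            else:
                still_pending.append((fragment_id, queued_at))
        pending_splits[:] = still_pending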

When a split is performed, the directory fragment is broken up into
a power-of-two number of new fragments.  The number of new fragments
is two raised to the power ``mds_bal_split_bits``, i.e. if
``mds_bal_split_bits`` is 2, then four new fragments will be created.
The default setting is 3, i.e. splits create 8 new fragments.
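
As a quick arithmetic check (a minimal sketch, not MDS code):

.. code-block:: python

    def fragments_after_split(mds_bal_split_bits: int) -> int:
        """Number of new fragments produced by a single split."""
        return 2 ** mds_bal_split_bits

    print(fragments_after_split(2))  # 4
    print(fragments_after_split(3))  # 8 (the default)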

The criteria for initiating a split or a merge are described in the
following sections.

Size thresholds
===============

A directory fragment is eligible for splitting when its size exceeds
``mds_bal_split_size`` (default 10000 directory entries).  Ordinarily
this split is delayed by ``mds_bal_fragment_interval``, but if the
fragment's size exceeds the split size by a factor of
``mds_bal_fragment_fast_factor``, the split happens immediately
(holding up any client metadata IO on the directory).
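
A simplified model of this decision might look like the following (the
function name is invented, and the ``fast_factor`` default shown is an
assumption made for illustration; only ``mds_bal_split_size`` = 10000 is
a default quoted here):

.. code-block:: python

    def split_action(frag_size: int,
                     split_size: int = 10000,    # mds_bal_split_size default
                     fast_factor: float = 1.5):  # mds_bal_fragment_fast_factor (value assumed for illustration)
        """Classify a fragment: no split, delayed split, or immediate split."""
        if frag_size > fast_factor * split_size:
            # Oversized enough to skip the mds_bal_fragment_interval delay.
            return "immediate split"
        if frag_size > split_size:
            # Queued, performed after mds_bal_fragment_interval elapses.
            return "delayed split"
        return "no split"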

``mds_bal_fragment_size_max`` is the hard limit on the size of
directory fragments.  If it is reached, clients will receive
ENOSPC errors if they try to create files in the fragment.  On
a properly configured system, this limit should never be reached on
ordinary directories, as they will have split long before.  By default,
this is set to 10 times the split size, giving a dirfrag size limit of
100000 directory entries.  Increasing this limit may lead to oversized
directory fragment objects in the metadata pool, which the OSDs may not
be able to handle.

A directory fragment is eligible for merging when its size is less
than ``mds_bal_merge_size``.  There is no merge equivalent of the
"fast splitting" explained above: fast splitting exists to avoid
creating oversized directory fragments, and there is no equivalent
issue to avoid when merging.  The default merge size is 50 directory
entries.
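
Putting the size thresholds together, a sketch of the size-based
behaviour (again with an invented function name; the defaults match the
values quoted above) is:

.. code-block:: python

    def size_based_state(frag_size: int,
                         split_size: int = 10000,   # mds_bal_split_size
                         merge_size: int = 50,      # mds_bal_merge_size
                         size_max: int = 100000):   # mds_bal_fragment_size_max (10 x split size)
        """Classify a fragment according to the size thresholds above."""
        if frag_size >= size_max:
            return "hard limit reached: new creates fail with ENOSPC"
        if frag_size > split_size:
            return "eligible for splitting"
        if frag_size < merge_size:
            return "eligible for merging"
        return "no size-based action"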

Activity thresholds
===================

In addition to splitting fragments based on their size, the MDS may
split a directory fragment if its activity exceeds a threshold.

The MDS maintains separate time-decaying load counters for read and
write operations on directory fragments.  These counters decay
exponentially, with a half-life given by the ``mds_decay_halflife``
setting.
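
Conceptually, such a counter halves every ``mds_decay_halflife`` seconds
when no operations arrive; a minimal sketch of that behaviour (an
illustrative model, not the MDS's actual counter code) is:

.. code-block:: python

    import math
    import time

    class DecayingCounter:
        """Time-decaying counter: its value halves every `halflife` seconds."""

        def __init__(self, halflife: float):
            self.halflife = halflife
            self.value = 0.0
            self.last = time.monotonic()

        def _decay(self) -> None:
            now = time.monotonic()
            elapsed = now - self.last
            # Exponential decay: after one half-life the value is halved.
            self.value *= math.pow(0.5, elapsed / self.halflife)
            self.last = now

        def hit(self, amount: float = 1.0) -> None:
            """Record an operation (e.g. one readdir or one create)."""
            self._decay()
            self.value += amount

        def get(self) -> float:
            """Current decayed value of the counter."""
            self._decay()
            return self.value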

On each write operation, the write counter is incremented and compared
with ``mds_bal_split_wr``; a split is triggered if the threshold is
exceeded.  Write operations include metadata IO such as renames, unlinks
and creations.

The ``mds_bal_split_rd`` threshold is applied based on the read operation
load counter, which tracks readdir operations.

The ``mds_bal_split_rd`` and ``mds_bal_split_wr`` settings are
popularity thresholds.  Inside the MDS they are compared against
"read/write temperatures", which are closely related to the number of
read and write operations respectively.  By default, the read threshold
is 25000 operations and the write threshold is 10000 operations, i.e.
2.5x as many reads as writes would be required to trigger a split.
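
Combining the decaying counters with these thresholds, the
activity-based trigger can be pictured as follows (names are
illustrative; the thresholds are the defaults quoted above):

.. code-block:: python

    MDS_BAL_SPLIT_RD = 25000   # default read temperature threshold
    MDS_BAL_SPLIT_WR = 10000   # default write temperature threshold

    def should_split_for_activity(read_temp: float, write_temp: float) -> bool:
        """True if either decayed load counter exceeds its threshold."""
        return read_temp > MDS_BAL_SPLIT_RD or write_temp > MDS_BAL_SPLIT_WR

For example, this could be fed the ``get()`` values of two
``DecayingCounter`` instances from the earlier sketch, incrementing the
write counter on renames, unlinks and creations and the read counter on
readdirs.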

After fragments are split due to the activity thresholds, they are only
merged based on the size threshold (``mds_bal_merge_size``), so a spike
in activity may cause a directory to stay fragmented forever unless some
of its entries are unlinked.