author    Daniel Baumann <daniel.baumann@progress-linux.org>  2024-08-07 13:11:27 +0000
committer Daniel Baumann <daniel.baumann@progress-linux.org>  2024-08-07 13:11:27 +0000
commit    34996e42f82bfd60bc2c191e5cae3c6ab233ec6c (patch)
tree      62db60558cbf089714b48daeabca82bf2b20b20e /Documentation/mm
parent    Adding debian version 6.8.12-1. (diff)
Merging upstream version 6.9.7.
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'Documentation/mm')
-rw-r--r--  Documentation/mm/arch_pgtable_helpers.rst      |  6
-rw-r--r--  Documentation/mm/damon/design.rst              | 70
-rw-r--r--  Documentation/mm/damon/maintainer-profile.rst  |  8
-rw-r--r--  Documentation/mm/page_cache.rst                | 10
-rw-r--r--  Documentation/mm/page_owner.rst                | 48
-rw-r--r--  Documentation/mm/slub.rst                      | 60
6 files changed, 156 insertions(+), 46 deletions(-)
diff --git a/Documentation/mm/arch_pgtable_helpers.rst b/Documentation/mm/arch_pgtable_helpers.rst
index 2466d3363a..ad50ca6f49 100644
--- a/Documentation/mm/arch_pgtable_helpers.rst
+++ b/Documentation/mm/arch_pgtable_helpers.rst
@@ -140,7 +140,8 @@ PMD Page Table Helpers
+---------------------------+--------------------------------------------------+
| pmd_swp_clear_soft_dirty | Clears a soft dirty swapped PMD |
+---------------------------+--------------------------------------------------+
-| pmd_mkinvalid | Invalidates a mapped PMD [1] |
+| pmd_mkinvalid | Invalidates a present PMD; do not call for |
+| | non-present PMD [1] |
+---------------------------+--------------------------------------------------+
| pmd_set_huge | Creates a PMD huge mapping |
+---------------------------+--------------------------------------------------+
@@ -196,7 +197,8 @@ PUD Page Table Helpers
+---------------------------+--------------------------------------------------+
| pud_mkdevmap | Creates a ZONE_DEVICE mapped PUD |
+---------------------------+--------------------------------------------------+
-| pud_mkinvalid | Invalidates a mapped PUD [1] |
+| pud_mkinvalid | Invalidates a present PUD; do not call for |
+| | non-present PUD [1] |
+---------------------------+--------------------------------------------------+
| pud_set_huge | Creates a PUD huge mapping |
+---------------------------+--------------------------------------------------+
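
As an aside, the "present entries only" rule above is easiest to see in code. Below is a minimal kernel-style sketch, not from this patch; the helper name example_pmdp_mkinvalid is hypothetical, and real callers normally go through pmdp_invalidate() instead::

    #include <linux/mm_types.h>
    #include <linux/pgtable.h>

    /* Only invalidate entries that are actually present. */
    static void example_pmdp_mkinvalid(struct mm_struct *mm,
                                       unsigned long addr, pmd_t *pmdp)
    {
            pmd_t old = *pmdp;

            if (!pmd_present(old))          /* never call pmd_mkinvalid() here */
                    return;

            set_pmd_at(mm, addr, pmdp, pmd_mkinvalid(old));
    }

The same reasoning applies to pud_mkinvalid() and present PUDs.
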
diff --git a/Documentation/mm/damon/design.rst b/Documentation/mm/damon/design.rst
index 1bb69524a6..5620aab9b3 100644
--- a/Documentation/mm/damon/design.rst
+++ b/Documentation/mm/damon/design.rst
@@ -31,6 +31,8 @@ DAMON subsystem is configured with three layers including
interfaces for the user space, on top of the core layer.
+.. _damon_design_configurable_operations_set:
+
Configurable Operations Set
---------------------------
@@ -63,6 +65,8 @@ modules that built on top of the core layer using the API, which can be easily
used by the user space end users.
+.. _damon_operations_set:
+
Operations Set Layer
====================
@@ -71,16 +75,26 @@ The monitoring operations are defined in two parts:
1. Identification of the monitoring target address range for the address space.
2. Access check of specific address range in the target space.
-DAMON currently provides the implementations of the operations for the physical
-and virtual address spaces. Below two subsections describe how those work.
+DAMON currently provides the below three operations sets. The following two
+subsections describe how those work.
+
+ - vaddr: Monitor virtual address spaces of specific processes
+ - fvaddr: Monitor fixed virtual address ranges
+ - paddr: Monitor the physical address space of the system
+
+.. _damon_design_vaddr_target_regions_construction:
+
VMA-based Target Address Range Construction
-------------------------------------------
-This is only for the virtual address space monitoring operations
-implementation. That for the physical address space simply asks users to
-manually set the monitoring target address ranges.
+This is a mechanism of the ``vaddr`` DAMON operations set that automatically
+initializes and updates the monitoring target address regions so that the
+entire memory mappings of the target processes can be covered.
+
+This mechanism is only for the ``vaddr`` operations set. In the cases of the
+``fvaddr`` and ``paddr`` operations sets, users are asked to manually set the
+monitoring target address ranges.
Only small parts in the super-huge virtual address space of the processes are
mapped to the physical memory and accessed. Thus, tracking the unmapped
@@ -294,9 +308,29 @@ not mandated to support all actions of the list. Hence, the availability of
specific DAMOS action depends on what operations set is selected to be used
together.
-Applying an action to a region is considered as changing the region's
-characteristics. Hence, DAMOS resets the age of regions when an action is
-applied to those.
+The list of supported actions, their meanings, and the DAMON operations sets
+that support each action is as below.
+
+ - ``willneed``: Call ``madvise()`` for the region with ``MADV_WILLNEED``.
+   Supported by the ``vaddr`` and ``fvaddr`` operations sets.
+ - ``cold``: Call ``madvise()`` for the region with ``MADV_COLD``.
+   Supported by the ``vaddr`` and ``fvaddr`` operations sets.
+ - ``pageout``: Reclaim the region.
+   Supported by the ``vaddr``, ``fvaddr`` and ``paddr`` operations sets.
+ - ``hugepage``: Call ``madvise()`` for the region with ``MADV_HUGEPAGE``.
+   Supported by the ``vaddr`` and ``fvaddr`` operations sets.
+ - ``nohugepage``: Call ``madvise()`` for the region with ``MADV_NOHUGEPAGE``.
+   Supported by the ``vaddr`` and ``fvaddr`` operations sets.
+ - ``lru_prio``: Prioritize the region on its LRU lists.
+   Supported by the ``paddr`` operations set.
+ - ``lru_deprio``: Deprioritize the region on its LRU lists.
+   Supported by the ``paddr`` operations set.
+ - ``stat``: Do nothing but count the statistics.
+   Supported by all operations sets.
+
+Applying any action other than ``stat`` to a region is considered to change
+the region's characteristics. Hence, DAMOS resets the age of a region when
+such an action is applied to it.
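
For reference, the ``madvise()``-based actions above correspond to hints that user space could also issue directly; DAMOS applies the equivalent hints from kernel context on the monitored regions. A minimal user-space sketch, not from this patch and assuming a 4 KiB page size::

    #define _GNU_SOURCE
    #include <stdlib.h>
    #include <sys/mman.h>

    int main(void)
    {
            size_t len = 2 * 1024 * 1024;
            void *buf = aligned_alloc(4096, len);   /* page-aligned buffer */

            if (!buf)
                    return 1;

            madvise(buf, len, MADV_WILLNEED);       /* like the willneed action */
            madvise(buf, len, MADV_COLD);           /* like the cold action */
            madvise(buf, len, MADV_PAGEOUT);        /* roughly like pageout */

            free(buf);
            return 0;
    }
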
.. _damon_design_damos_access_pattern:
@@ -364,12 +398,28 @@ Aim-oriented Feedback-driven Auto-tuning
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Automatic feedback-driven quota tuning. Instead of setting the absolute quota
-value, users can repeatedly provide numbers representing how much of their goal
-for the scheme is achieved as feedback. DAMOS then automatically tunes the
+value, users can specify the metric of their interest and the target value
+they want that metric to reach. DAMOS then automatically tunes the
aggressiveness (the quota) of the corresponding scheme. For example, if DAMOS
is under achieving the goal, DAMOS automatically increases the quota. If DAMOS
is over achieving the goal, it decreases the quota.
+The goal can be specified with three parameters, namely ``target_metric``,
+``target_value``, and ``current_value``. The auto-tuning mechanism tries to
+make the ``current_value`` of ``target_metric`` the same as ``target_value``.
+Currently, two choices of ``target_metric`` are provided.
+
+- ``user_input``: User-provided value. Users can use any metric that they
+  are interested in for the value. The user space main workload's latency or
+  throughput, or system metrics like the free memory ratio or memory pressure
+  stall time (PSI), could be examples. Note that users should explicitly set
+  ``current_value`` on their own in this case. In other words, users should
+  repeatedly provide the feedback.
+- ``some_mem_psi_us``: System-wide ``some`` memory pressure stall information
+  in microseconds, measured from the last quota reset to the next quota
+  reset. DAMOS does the measurement on its own, so only ``target_value``
+  needs to be set by users at the initial time. In other words, DAMOS does
+  self-feedback.
+
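
To make the feedback loop concrete, below is a toy proportional update, not the in-kernel DAMOS algorithm; it assumes a metric where a current value above the target means the scheme should become more aggressive (as with a pressure-style metric driving a reclaim scheme)::

    /* Toy feedback step; the real DAMOS tuning logic differs. */
    static unsigned long tune_quota(unsigned long quota,
                                    unsigned long target_value,
                                    unsigned long current_value)
    {
            if (!target_value)
                    return quota;

            /* Above the target grows the quota, below the target shrinks it. */
            return quota * current_value / target_value;
    }
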
.. _damon_design_damos_watermarks:
diff --git a/Documentation/mm/damon/maintainer-profile.rst b/Documentation/mm/damon/maintainer-profile.rst
index a84c14e590..5a306e4de2 100644
--- a/Documentation/mm/damon/maintainer-profile.rst
+++ b/Documentation/mm/damon/maintainer-profile.rst
@@ -21,8 +21,8 @@ be queued in mm-stable [3]_ , and finally pull-requested to the mainline by the
memory management subsystem maintainer.
Note again the patches for review should be made against the mm-unstable
-tree[1] whenever possible. damon/next is only for preview of others' works in
-progress.
+tree [1]_ whenever possible. damon/next is only for preview of others' works
+in progress.
Submit checklist addendum
-------------------------
@@ -41,8 +41,8 @@ Further doing below and putting the results will be helpful.
Key cycle dates
---------------
-Patches can be sent anytime. Key cycle dates of the mm-unstable[1] and
-mm-stable[3] trees depend on the memory management subsystem maintainer.
+Patches can be sent anytime. Key cycle dates of the mm-unstable [1]_ and
+mm-stable [3]_ trees depend on the memory management subsystem maintainer.
Review cadence
--------------
diff --git a/Documentation/mm/page_cache.rst b/Documentation/mm/page_cache.rst
index 75eba7c431..138d61f869 100644
--- a/Documentation/mm/page_cache.rst
+++ b/Documentation/mm/page_cache.rst
@@ -3,3 +3,13 @@
==========
Page Cache
==========
+
+The page cache is the primary way that the user and the rest of the kernel
+interact with filesystems. It can be bypassed (e.g. with O_DIRECT),
+but normal reads, writes and mmaps go through the page cache.
+
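
As a user-space illustration of the O_DIRECT bypass mentioned above, here is a short sketch, not from this patch; "testfile" is a placeholder and O_DIRECT has filesystem-dependent alignment requirements::

    #define _GNU_SOURCE             /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
            void *buf;
            int fd;

            if (posix_memalign(&buf, 4096, 4096))  /* aligned buffer for O_DIRECT */
                    return 1;

            fd = open("testfile", O_RDONLY | O_DIRECT);  /* bypasses the page cache */
            if (fd >= 0) {
                    read(fd, buf, 4096);
                    close(fd);
            }

            fd = open("testfile", O_RDONLY);             /* goes through the page cache */
            if (fd >= 0) {
                    read(fd, buf, 4096);
                    close(fd);
            }

            free(buf);
            return 0;
    }
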
+Folios
+======
+
+The folio is the unit of memory management within the page cache.
+Operations
diff --git a/Documentation/mm/page_owner.rst b/Documentation/mm/page_owner.rst
index 62e3f7ab23..3a45a20fc0 100644
--- a/Documentation/mm/page_owner.rst
+++ b/Documentation/mm/page_owner.rst
@@ -24,6 +24,11 @@ fragmentation statistics can be obtained through gfp flag information of
each page. It is already implemented and activated if page owner is
enabled. Other usages are more than welcome.
+It can also be used to show all the stacks and their current number of
+allocated base pages, which gives us a quick overview of where the memory
+is going without the need to sift through all the pages and match each
+allocation with its corresponding free.
+
page owner is disabled by default. So, if you'd like to use it, you need
to add "page_owner=on" to your boot cmdline. If the kernel is built
with page owner and page owner is disabled in runtime due to not enabling
@@ -68,6 +73,49 @@ Usage
4) Analyze information from page owner::
+ cat /sys/kernel/debug/page_owner_stacks/show_stacks > stacks.txt
+ cat stacks.txt
+ post_alloc_hook+0x177/0x1a0
+ get_page_from_freelist+0xd01/0xd80
+ __alloc_pages+0x39e/0x7e0
+ allocate_slab+0xbc/0x3f0
+ ___slab_alloc+0x528/0x8a0
+ kmem_cache_alloc+0x224/0x3b0
+ sk_prot_alloc+0x58/0x1a0
+ sk_alloc+0x32/0x4f0
+ inet_create+0x427/0xb50
+ __sock_create+0x2e4/0x650
+ inet_ctl_sock_create+0x30/0x180
+ igmp_net_init+0xc1/0x130
+ ops_init+0x167/0x410
+ setup_net+0x304/0xa60
+ copy_net_ns+0x29b/0x4a0
+ create_new_namespaces+0x4a1/0x820
+ nr_base_pages: 16
+ ...
+ ...
+ echo 7000 > /sys/kernel/debug/page_owner_stacks/count_threshold
+ cat /sys/kernel/debug/page_owner_stacks/show_stacks > stacks_7000.txt
+ cat stacks_7000.txt
+ post_alloc_hook+0x177/0x1a0
+ get_page_from_freelist+0xd01/0xd80
+ __alloc_pages+0x39e/0x7e0
+ alloc_pages_mpol+0x22e/0x490
+ folio_alloc+0xd5/0x110
+ filemap_alloc_folio+0x78/0x230
+ page_cache_ra_order+0x287/0x6f0
+ filemap_get_pages+0x517/0x1160
+ filemap_read+0x304/0x9f0
+ xfs_file_buffered_read+0xe6/0x1d0 [xfs]
+ xfs_file_read_iter+0x1f0/0x380 [xfs]
+ __kernel_read+0x3b9/0x730
+ kernel_read_file+0x309/0x4d0
+ __do_sys_finit_module+0x381/0x730
+ do_syscall_64+0x8d/0x150
+ entry_SYSCALL_64_after_hwframe+0x62/0x6a
+ nr_base_pages: 20824
+ ...
+
cat /sys/kernel/debug/page_owner > page_owner_full.txt
./page_owner_sort page_owner_full.txt sorted_page_owner.txt
diff --git a/Documentation/mm/slub.rst b/Documentation/mm/slub.rst
index be75971532..b517ee28a9 100644
--- a/Documentation/mm/slub.rst
+++ b/Documentation/mm/slub.rst
@@ -9,7 +9,7 @@ SLUB can enable debugging only for selected slabs in order to avoid
an impact on overall system performance which may make a bug more
difficult to find.
-In order to switch debugging on one can add an option ``slub_debug``
+In order to switch debugging on one can add an option ``slab_debug``
to the kernel command line. That will enable full debugging for
all slabs.
@@ -26,16 +26,16 @@ be enabled on the command line. F.e. no tracking information will be
available without debugging on and validation can only partially
be performed if debugging was not switched on.
-Some more sophisticated uses of slub_debug:
+Some more sophisticated uses of slab_debug:
-------------------------------------------
-Parameters may be given to ``slub_debug``. If none is specified then full
+Parameters may be given to ``slab_debug``. If none is specified then full
debugging is enabled. Format:
-slub_debug=<Debug-Options>
+slab_debug=<Debug-Options>
Enable options for all slabs
-slub_debug=<Debug-Options>,<slab name1>,<slab name2>,...
+slab_debug=<Debug-Options>,<slab name1>,<slab name2>,...
Enable options only for select slabs (no spaces
after a comma)
@@ -60,23 +60,23 @@ Possible debug options are::
F.e. in order to boot just with sanity checks and red zoning one would specify::
- slub_debug=FZ
+ slab_debug=FZ
Trying to find an issue in the dentry cache? Try::
- slub_debug=,dentry
+ slab_debug=,dentry
to only enable debugging on the dentry cache. You may use an asterisk at the
end of the slab name, in order to cover all slabs with the same prefix. For
example, here's how you can poison the dentry cache as well as all kmalloc
slabs::
- slub_debug=P,kmalloc-*,dentry
+ slab_debug=P,kmalloc-*,dentry
Red zoning and tracking may realign the slab. We can just apply sanity checks
to the dentry cache with::
- slub_debug=F,dentry
+ slab_debug=F,dentry
Debugging options may require the minimum possible slab order to increase as
a result of storing the metadata (for example, caches with PAGE_SIZE object
@@ -84,20 +84,20 @@ sizes). This has a higher liklihood of resulting in slab allocation errors
in low memory situations or if there's high fragmentation of memory. To
switch off debugging for such caches by default, use::
- slub_debug=O
+ slab_debug=O
You can apply different options to different list of slab names, using blocks
of options. This will enable red zoning for dentry and user tracking for
kmalloc. All other slabs will not get any debugging enabled::
- slub_debug=Z,dentry;U,kmalloc-*
+ slab_debug=Z,dentry;U,kmalloc-*
You can also enable options (e.g. sanity checks and poisoning) for all caches
except some that are deemed too performance critical and don't need to be
debugged by specifying global debug options followed by a list of slab names
with "-" as options::
- slub_debug=FZ;-,zs_handle,zspage
+ slab_debug=FZ;-,zs_handle,zspage
The state of each debug option for a slab can be found in the respective files
under::
@@ -105,7 +105,7 @@ under::
/sys/kernel/slab/<slab name>/
If the file contains 1, the option is enabled, 0 means disabled. The debug
-options from the ``slub_debug`` parameter translate to the following files::
+options from the ``slab_debug`` parameter translate to the following files::
F sanity_checks
Z red_zone
@@ -129,7 +129,7 @@ in order to reduce overhead and increase cache hotness of objects.
Slab validation
===============
-SLUB can validate all object if the kernel was booted with slub_debug. In
+SLUB can validate all objects if the kernel was booted with slab_debug. In
order to do so you must have the ``slabinfo`` tool. Then you can do
::
@@ -150,29 +150,29 @@ list_lock once in a while to deal with partial slabs. That overhead is
governed by the order of the allocation for each slab. The allocations
can be influenced by kernel parameters:
-.. slub_min_objects=x (default 4)
-.. slub_min_order=x (default 0)
-.. slub_max_order=x (default 3 (PAGE_ALLOC_COSTLY_ORDER))
+.. slab_min_objects=x (default: automatically scaled by number of cpus)
+.. slab_min_order=x (default 0)
+.. slab_max_order=x (default 3 (PAGE_ALLOC_COSTLY_ORDER))
-``slub_min_objects``
+``slab_min_objects``
allows to specify how many objects must at least fit into one
slab in order for the allocation order to be acceptable. In
general slub will be able to perform this number of
allocations on a slab without consulting centralized resources
(list_lock) where contention may occur.
-``slub_min_order``
+``slab_min_order``
specifies a minimum order of slabs. A similar effect like
- ``slub_min_objects``.
+ ``slab_min_objects``.
-``slub_max_order``
- specified the order at which ``slub_min_objects`` should no
+``slab_max_order``
+ specifies the order at which ``slab_min_objects`` should no
longer be checked. This is useful to avoid SLUB trying to
- generate super large order pages to fit ``slub_min_objects``
+ generate super large order pages to fit ``slab_min_objects``
of a slab cache with large object sizes into one high order
page. Setting command line parameter
``debug_guardpage_minorder=N`` (N > 0), forces setting
- ``slub_max_order`` to 0, what cause minimum possible order of
+ ``slab_max_order`` to 0, which causes the minimum possible order of
slabs allocation.
SLUB Debug output
@@ -219,7 +219,7 @@ Here is a sample of slub debug output::
FIX kmalloc-8: Restoring Redzone 0xc90f6d28-0xc90f6d2b=0xcc
If SLUB encounters a corrupted object (full detection requires the kernel
-to be booted with slub_debug) then the following output will be dumped
+to be booted with slab_debug) then the following output will be dumped
into the syslog:
1. Description of the problem encountered
@@ -239,7 +239,7 @@ into the syslog:
pid=<pid of the process>
(Object allocation / free information is only available if SLAB_STORE_USER is
- set for the slab. slub_debug sets that option)
+ set for the slab. slab_debug sets that option)
2. The object contents if an object was involved.
@@ -262,7 +262,7 @@ into the syslog:
the object boundary.
(Redzone information is only available if SLAB_RED_ZONE is set.
- slub_debug sets that option)
+ slab_debug sets that option)
Padding <address> : <bytes>
Unused data to fill up the space in order to get the next object
@@ -296,7 +296,7 @@ Emergency operations
Minimal debugging (sanity checks alone) can be enabled by booting with::
- slub_debug=F
+ slab_debug=F
This will be generally be enough to enable the resiliency features of slub
which will keep the system running even if a bad kernel component will
@@ -311,13 +311,13 @@ and enabling debugging only for that cache
I.e.::
- slub_debug=F,dentry
+ slab_debug=F,dentry
If the corruption occurs by writing after the end of the object then it
may be advisable to enable a Redzone to avoid corrupting the beginning
of other objects::
- slub_debug=FZ,dentry
+ slab_debug=FZ,dentry
Extended slabinfo mode and plotting
===================================