From fc22b3d6507c6745911b9dfcc68f1e665ae13dbc Mon Sep 17 00:00:00 2001
From: Daniel Baumann
Date: Mon, 15 Apr 2024 21:43:11 +0200
Subject: Adding upstream version 4.22.0.

Signed-off-by: Daniel Baumann
---
 upstream/fedora-40/man7/numa.7 | 170 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 170 insertions(+)
 create mode 100644 upstream/fedora-40/man7/numa.7
(limited to 'upstream/fedora-40/man7/numa.7')

diff --git a/upstream/fedora-40/man7/numa.7 b/upstream/fedora-40/man7/numa.7
new file mode 100644
index 00000000..9823de16
--- /dev/null
+++ b/upstream/fedora-40/man7/numa.7
@@ -0,0 +1,170 @@
+.\" Copyright (c) 2008, Linux Foundation, written by Michael Kerrisk
+.\"
+.\" and Copyright 2003,2004 Andi Kleen, SuSE Labs.
+.\" numa_maps material Copyright (c) 2005 Silicon Graphics Incorporated.
+.\" Christoph Lameter, .
+.\"
+.\" SPDX-License-Identifier: Linux-man-pages-copyleft
+.\"
+.TH numa 7 2023-10-31 "Linux man-pages 6.06"
+.SH NAME
+numa \- overview of Non-Uniform Memory Architecture
+.SH DESCRIPTION
+Non-Uniform Memory Access (NUMA) refers to multiprocessor systems
+whose memory is divided into multiple memory nodes.
+The access time of a memory node depends on
+the relative locations of the accessing CPU and the accessed node.
+(This contrasts with a symmetric multiprocessor system,
+where the access time for all of the memory is the same for all CPUs.)
+Normally, each CPU on a NUMA system has a local memory node whose
+contents can be accessed faster than the memory in
+the node local to another CPU
+or the memory on a bus shared by all CPUs.
+.SS NUMA system calls
+The Linux kernel implements the following NUMA-related system calls:
+.BR get_mempolicy (2),
+.BR mbind (2),
+.BR migrate_pages (2),
+.BR move_pages (2),
+and
+.BR set_mempolicy (2).
+However, applications should normally use the interface provided by
+.IR libnuma ;
+see "Library Support" below.
+.SS \fI/proc/\fPpid\fI/numa_maps\fP (since Linux 2.6.14)
+.\" See also Changelog-2.6.14
+This file displays information about a process's
+NUMA memory policy and allocation.
+.P
+Each line contains information about a memory range used by the process,
+displaying\[em]among other information\[em]the effective memory policy for
+that memory range and on which nodes the pages have been allocated.
+.P
+.I numa_maps
+is a read-only file.
+When
+.IR /proc/ pid /numa_maps
+is read, the kernel will scan the virtual address space of the
+process and report how memory is used.
+One line is displayed for each unique memory range of the process.
+.P
+The first field of each line shows the starting address of the memory range.
+This field allows a correlation with the contents of the
+.IR /proc/ pid /maps
+file,
+which contains the end address of the range and other information,
+such as the access permissions and sharing.
+.P
+The second field shows the memory policy currently in effect for the
+memory range.
+Note that the effective policy is not necessarily the policy
+installed by the process for that memory range.
+Specifically, if the process installed a "default" policy for that range,
+the effective policy for that range will be the process policy,
+which may or may not be "default".
+.P
+The rest of the line contains information about the pages allocated in
+the memory range, as follows:
+.TP
+.I N<node>=<nr_pages>
+The number of pages allocated on
+.IR <node> .
+.I <nr_pages>
+includes only pages currently mapped by the process.
+Page migration and memory reclaim may have temporarily unmapped pages
+associated with this memory range.
+These pages may show up again only after the process has
+attempted to reference them.
+If the memory range represents a shared memory area or file mapping,
+other processes may currently have additional pages mapped in a
+corresponding memory range.
+.TP
+.I file=<filename>
+The file backing the memory range.
+If the file is mapped as private, write accesses may have generated
+COW (Copy-On-Write) pages in this memory range.
+These pages are displayed as anonymous pages.
+.TP
+.I heap
+Memory range is used for the heap.
+.TP
+.I stack
+Memory range is used for the stack.
+.TP
+.I huge
+Huge memory range.
+The page counts shown are huge pages and not regular sized pages.
+.TP
+.I anon=<pages>
+The number of anonymous pages in the range.
+.TP
+.I dirty=<pages>
+Number of dirty pages.
+.TP
+.I mapped=<pages>
+Total number of mapped pages, if different from
+.I dirty
+and
+.I anon
+pages.
+.TP
+.I mapmax=<count>
+Maximum mapcount (number of processes mapping a single page) encountered
+during the scan.
+This may be used as an indicator of the degree of sharing occurring in a
+given memory range.
+.TP
+.I swapcache=<count>
+Number of pages that have an associated entry on a swap device.
+.TP
+.I active=<pages>
+The number of pages on the active list.
+This field is shown only if different from the number of pages in this range.
+This means that some inactive pages exist in the memory range that may be
+removed from memory by the swapper soon.
+.TP
+.I writeback=<pages>
+Number of pages that are currently being written out to disk.
+.SH STANDARDS
+None.
+.SH NOTES
+The Linux NUMA system calls and
+.I /proc
+interface are available only
+if the kernel was configured and built with the
+.B CONFIG_NUMA
+option.
+.SS Library support
+Link with \fI\-lnuma\fP
+to get the system call definitions.
+.I libnuma
+and the required
+.I <numaif.h>
+header are available in the
+.I numactl
+package.
+.P
+However, applications should not use these system calls directly.
+Instead, the higher level interface provided by the
+.BR numa (3)
+functions in the
+.I numactl
+package is recommended.
+The
+.I numactl
+package is available at
+.UR ftp://oss.sgi.com\:/www\:/projects\:/libnuma\:/download/
+.UE .
+The package is also included in some Linux distributions.
+Some distributions include the development library and header
+in the separate
+.I numactl\-devel
+package.
+.SH SEE ALSO
+.BR get_mempolicy (2),
+.BR mbind (2),
+.BR move_pages (2),
+.BR set_mempolicy (2),
+.BR numa (3),
+.BR cpuset (7),
+.BR numactl (8)
-- 
cgit v1.2.3
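
As a concrete illustration of the interfaces documented in the page above, the
following C program is a minimal sketch (not part of the man page or of the
numactl documentation): it creates an anonymous mapping, applies a memory
policy to it with mbind(2), and then prints /proc/self/numa_maps so that the
corresponding "N<node>=<nr_pages>" line can be inspected.  It assumes that
node 0 exists and that the numactl development files (<numaif.h>, -lnuma) are
installed; error handling is deliberately minimal.

/* numa_maps_dump.c
 *
 * Build: cc numa_maps_dump.c -o numa_maps_dump -lnuma
 */
#include <numaif.h>     /* mbind(), MPOL_* (from the numactl package) */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int
main(void)
{
    size_t len = 4 * sysconf(_SC_PAGESIZE);

    /* Create an anonymous mapping that will show up in numa_maps. */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        exit(EXIT_FAILURE);
    }

    /* Ask for the pages of this range to be allocated on node 0
       (bit 0 of the nodemask); nonfatal for the demo if it fails. */
    unsigned long nodemask = 1UL;
    if (mbind(p, len, MPOL_BIND, &nodemask, sizeof(nodemask) * 8, 0) == -1)
        perror("mbind");

    memset(p, 0, len);          /* touch the pages so they are allocated */

    /* Dump numa_maps: one line per memory range, as described above. */
    FILE *f = fopen("/proc/self/numa_maps", "r");
    if (f == NULL) {
        perror("fopen");
        exit(EXIT_FAILURE);
    }

    char line[1024];
    while (fgets(line, sizeof(line), f) != NULL)
        fputs(line, stdout);

    fclose(f);
    exit(EXIT_SUCCESS);
}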
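
The man page recommends the higher-level numa(3) interface over the raw system
calls.  The short sketch below, again an illustrative example rather than
anything taken from the numactl documentation, shows the usual pattern with
libnuma: check numa_available() before calling anything else, query the
topology, and place an allocation on a specific node (node 0 is assumed to
exist).

/* libnuma_alloc.c
 *
 * Build: cc libnuma_alloc.c -o libnuma_alloc -lnuma
 */
#include <numa.h>       /* numa_available(), numa_alloc_onnode(), ... */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(void)
{
    /* numa_available() must be called first; -1 means no NUMA support. */
    if (numa_available() == -1) {
        fprintf(stderr, "NUMA is not available on this system\n");
        exit(EXIT_FAILURE);
    }

    printf("Highest node number: %d\n", numa_max_node());

    /* Allocate 1 MiB with its pages bound to node 0. */
    size_t size = 1024 * 1024;
    void *buf = numa_alloc_onnode(size, 0);
    if (buf == NULL) {
        fprintf(stderr, "numa_alloc_onnode failed\n");
        exit(EXIT_FAILURE);
    }

    memset(buf, 0, size);       /* touch the memory so pages are allocated */
    printf("Allocated %zu bytes on node 0\n", size);

    numa_free(buf, size);
    exit(EXIT_SUCCESS);
}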