.\" Copyright (c) 2008, Linux Foundation, written by Michael Kerrisk
.\" <mtk.manpages@gmail.com>
.\" and Copyright 2003,2004 Andi Kleen, SuSE Labs.
.\" numa_maps material Copyright (c) 2005 Silicon Graphics Incorporated.
.\" Christoph Lameter, <cl@linux-foundation.org>.
.\"
.\" SPDX-License-Identifier: Linux-man-pages-copyleft
.\"
.TH numa 7 2024-05-02 "Linux man-pages (unreleased)"
.SH NAME
numa \- overview of Non-Uniform Memory Access
.SH DESCRIPTION
Non-Uniform Memory Access (NUMA) refers to multiprocessor systems
whose memory is divided into multiple memory nodes.
The access time of a memory node depends on
the relative locations of the accessing CPU and the accessed node.
(This contrasts with a symmetric multiprocessor system,
where the access time for all of the memory is the same for all CPUs.)
Normally, each CPU on a NUMA system has a local memory node whose
contents can be accessed faster than the memory in
the node local to another CPU
or the memory on a bus shared by all CPUs.
.SS NUMA system calls
The Linux kernel implements the following NUMA-related system calls:
.BR get_mempolicy (2),
.BR mbind (2),
.BR migrate_pages (2),
.BR move_pages (2),
and
.BR set_mempolicy (2).
However, applications should normally use the interface provided by
.IR libnuma ;
see "Library Support" below.
.SS \fI/proc/\fPpid\fI/numa_maps\fP (since Linux 2.6.14)
.\" See also Changelog-2.6.14
This file displays information about a process's
NUMA memory policy and allocation.
.P
Each line contains information about a memory range used by the process,
displaying\[em]among other information\[em]the effective memory policy for
that memory range and on which nodes the pages have been allocated.
.P
.I numa_maps
is a read-only file.
When
.IR /proc/ pid /numa_maps
is read, the kernel will scan the virtual address space of the
process and report how memory is used.
One line is displayed for each unique memory range of the process.
.P
The first field of each line shows the starting address of the memory range.
This field allows a correlation with the contents of the
.IR /proc/ pid /maps
file,
which contains the end address of the range and other information,
such as the access permissions and sharing.
.P
The second field shows the memory policy currently in effect for the
memory range.
Note that the effective policy is not necessarily the policy
installed by the process for that memory range.
Specifically, if the process installed a "default" policy for that range,
the effective policy for that range will be the process policy,
which may or may not be "default".
.P
The rest of the line contains information about the pages allocated in
the memory range, as follows:
.TP
.I N<node>=<nr_pages>
The number of pages allocated on
.IR <node> .
.I <nr_pages>
includes only pages currently mapped by the process.
Page migration and memory reclaim may have temporarily unmapped pages
associated with this memory range.
These pages may show up again only after the process has
attempted to reference them.
If the memory range represents a shared memory area or file mapping,
other processes may currently have additional pages mapped in a
corresponding memory range.
.TP
.I file=<filename>
The file backing the memory range.
If the file is mapped as private, write accesses may have generated
COW (Copy-On-Write) pages in this memory range.
These pages are displayed as anonymous pages.
.TP
.I heap
Memory range is used for the heap.
.TP
.I stack
Memory range is used for the stack.
.TP
.I huge
Huge memory range.
The page counts shown are huge pages and not regular-sized pages.
.TP
.I anon=<pages>
The number of anonymous pages in the range.
.TP
.I dirty=<pages>
Number of dirty pages.
.TP
.I mapped=<pages>
Total number of mapped pages, if different from
.I dirty
and
.I anon
pages.
.TP
.I mapmax=<count>
Maximum mapcount (number of processes mapping a single page) encountered
during the scan.
This may be used as an indicator of the degree of sharing occurring in a
given memory range.
.TP
.I swapcache=<count>
Number of pages that have an associated entry on a swap device.
.TP
.I active=<pages>
The number of pages on the active list.
This field is shown only if different from the number of pages in this range.
This means that the memory range contains some inactive pages
that the swapper may soon remove from memory.
.TP
.I writeback=<pages>
Number of pages that are currently being written out to disk.
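.P
As a hypothetical illustration, a line describing an interleaved
anonymous mapping might look similar to the following
(the address and counts are invented, and the exact set of fields
shown varies with the kernel and the state of the mapping):
.P
.in +4n
.EX
7f0e96e00000 interleave:0\-1 anon=512 dirty=512 N0=256 N1=256
.EE
.in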
.SH STANDARDS
None.
.SH NOTES
The Linux NUMA system calls and
.I /proc
interface are available only
if the kernel was configured and built with the
.B CONFIG_NUMA
option.
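.P
One way to verify this, on distributions that install the kernel
configuration under
.IR /boot ,
is to search that file for the option
(the path below is distribution-dependent):
.P
.in +4n
.EX
$ grep CONFIG_NUMA= /boot/config\-$(uname \-r)
CONFIG_NUMA=y
.EE
.in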
.SS Library support
Link with \fI\-lnuma\fP
to get the system call definitions.
.I libnuma
and the required
.I <numaif.h>
header are available in the
.I numactl
package.
.P
However, applications should not use these system calls directly.
Instead, the higher level interface provided by the
.BR numa (3)
functions in the
.I numactl
package is recommended.
The
.I numactl
package is available at
.UR https://github.com\:/numactl\:/numactl
.UE .
The package is also included in some Linux distributions.
Some distributions include the development library and header
in the separate
.I numactl\-devel
package.
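.P
The following sketch conveys the flavor of the
.BR numa (3)
interface: it checks that NUMA is available and then allocates memory
placed on a specific node.
It assumes that the
.I numactl
development files are installed and that node 0 exists;
the size and node chosen are illustrative only
(compile with \fI\-lnuma\fP):
.P
.in +4n
.EX
#include <numa.h>    /* numa_available(), numa_alloc_onnode(), ... */
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    /* numa_available() must be called before any other numa(3)
       function; \-1 means the system does not support NUMA. */
    if (numa_available() == \-1) {
        fprintf(stderr, "NUMA is not available\en");
        exit(EXIT_FAILURE);
    }

    printf("highest node number: %d\en", numa_max_node());

    /* Allocate 1 MiB placed on node 0. */
    size_t len = 1024 * 1024;
    void *p = numa_alloc_onnode(len, 0);
    if (p == NULL) {
        fprintf(stderr, "numa_alloc_onnode failed\en");
        exit(EXIT_FAILURE);
    }

    /* ... use the memory ... */

    numa_free(p, len);    /* the size must match the allocation */
    exit(EXIT_SUCCESS);
}
.EE
.in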
.SH SEE ALSO
.BR get_mempolicy (2),
.BR mbind (2),
.BR move_pages (2),
.BR set_mempolicy (2),
.BR numa (3),
.BR cpuset (7),
.BR numactl (8)