Diffstat (limited to 'man7/cpuset.7')
-rw-r--r--  man7/cpuset.7  196
1 file changed, 98 insertions, 98 deletions
diff --git a/man7/cpuset.7 b/man7/cpuset.7
index 800e4da..2db2bfc 100644
--- a/man7/cpuset.7
+++ b/man7/cpuset.7
@@ -4,7 +4,7 @@
.\"
.\" SPDX-License-Identifier: GPL-2.0-only
.\"
-.TH cpuset 7 2023-07-18 "Linux man-pages 6.05.01"
+.TH cpuset 7 2023-10-31 "Linux man-pages 6.7"
.SH NAME
cpuset \- confine processes to processor and memory node subsets
.SH DESCRIPTION
@@ -14,7 +14,7 @@ which is used to control the processor placement
and memory placement of processes.
It is commonly mounted at
.IR /dev/cpuset .
-.PP
+.P
On systems with kernels compiled with built in support for cpusets,
all processes are attached to a cpuset, and cpusets are always present.
If a system supports cpusets, then it will have the entry
@@ -31,9 +31,9 @@ By default, if the cpuset configuration
on a system is not modified or if the cpuset filesystem
is not even mounted, then the cpuset mechanism,
though present, has no effect on the system's behavior.
-.PP
+.P
A cpuset defines a list of CPUs and memory nodes.
-.PP
+.P
The CPUs of a system include all the logical processing
units on which a process can execute, including, if present,
multiple processor cores within a package and Hyper-Threads
@@ -42,7 +42,7 @@ Memory nodes include all distinct
banks of main memory; small and SMP systems typically have
just one memory node that contains all the system's main memory,
while NUMA (non-uniform memory access) systems have multiple memory nodes.
-.PP
+.P
Cpusets are represented as directories in a hierarchical
pseudo-filesystem, where the top directory in the hierarchy
.RI ( /dev/cpuset )
@@ -52,7 +52,7 @@ another parent cpuset contains a subset of that parent's
CPUs and memory nodes.
The directories and files representing cpusets have normal
filesystem permissions.
-.PP
+.P
Every process in the system belongs to exactly one cpuset.
A process is confined to run only on the CPUs in
the cpuset it belongs to, and to allocate memory only
@@ -63,7 +63,7 @@ the child process is placed in the same cpuset as its parent.
With sufficient privilege, a process may be moved from one
cpuset to another and the allowed CPUs and memory nodes
of an existing cpuset may be changed.
-.PP
+.P
When the system begins booting, a single cpuset is
defined that includes all CPUs and memory nodes on the
system, and all processes are in that cpuset.
@@ -71,7 +71,7 @@ During the boot process, or later during normal system operation,
other cpusets may be created, as subdirectories of this top cpuset,
under the control of the system administrator,
and processes may be placed in these other cpusets.
-.PP
+.P
Cpusets are integrated with the
.BR sched_setaffinity (2)
scheduling affinity mechanism and the
@@ -93,7 +93,7 @@ other calls returning an error, if for example, such
a call ends up requesting an empty set of CPUs or
memory nodes, after that request is restricted to
the invoking process's cpuset.
-.PP
+.P
Typically, a cpuset is used to manage
the CPU and memory-node confinement for a set of
cooperating processes such as a batch scheduler job, and these
@@ -104,7 +104,7 @@ Each directory below
.I /dev/cpuset
represents a cpuset and contains a fixed set of pseudo-files
describing the state of that cpuset.
-.PP
+.P
New cpusets are created using the
.BR mkdir (2)
system call or the
@@ -114,13 +114,13 @@ The properties of a cpuset, such as its flags, allowed
CPUs and memory nodes, and attached processes, are queried and modified
by reading or writing to the appropriate file in that cpuset's directory,
as listed below.
-.PP
+.P
The pseudo-files in each cpuset directory are automatically created when
the cpuset is created, as a result of the
.BR mkdir (2)
invocation.
It is not possible to directly add or remove these pseudo-files.
-.PP
+.P
A cpuset directory that contains no child cpuset directories,
and has no attached processes, can be removed using
.BR rmdir (2)
@@ -128,7 +128,7 @@ or
.BR rmdir (1).
It is not necessary, or possible,
to remove the pseudo-files inside the directory before removing it.
-.PP
+.P
The pseudo-files in each cpuset directory are
small text files that may be read and
written using traditional shell utilities such as
@@ -142,7 +142,7 @@ such as
.BR write (2),
and
.BR close (2).
-.PP
+.P
The pseudo-files in a cpuset directory represent internal kernel
state and do not have any persistent image on disk.
Each of these per-cpuset files is listed and described below.
@@ -338,7 +338,7 @@ the wider
the range of CPUs over which immediate load balancing is attempted.
See \fBScheduler Relax Domain Level\fR, below, for further details.
.\" ================== proc cpuset ==================
-.PP
+.P
In addition to the above pseudo-files in each directory below
.IR /dev/cpuset ,
each process has a pseudo-file,
@@ -346,7 +346,7 @@ each process has a pseudo-file,
that displays the path of the process's cpuset directory
relative to the root of the cpuset filesystem.
.\" ================== proc status ==================
-.PP
+.P
Also the
.IR /proc/ pid /status
file for each process has four added lines,
@@ -357,7 +357,7 @@ displaying the process's
(on which memory nodes it may obtain memory),
in the two formats \fBMask Format\fR and \fBList Format\fR (see below)
as shown in the following example:
-.PP
+.P
.in +4n
.EX
Cpus_allowed: ffffffff,ffffffff,ffffffff,ffffffff
@@ -366,7 +366,7 @@ Mems_allowed: ffffffff,ffffffff
Mems_allowed_list: 0\-63
.EE
.in
-.PP
+.P
The "allowed" fields were added in Linux 2.6.24;
the "allowed_list" fields were added in Linux 2.6.26.
.\" ================== EXTENDED CAPABILITIES ==================
@@ -385,7 +385,7 @@ or
.IR mem_exclusive ,
no other cpuset, other than a direct ancestor or descendant,
may share any of the same CPUs or memory nodes.
-.PP
+.P
A cpuset that is
.I mem_exclusive
restricts kernel allocations for
@@ -426,7 +426,7 @@ and other data commonly shared by the kernel across multiple users.
All cpusets, whether
.I hardwall
or not, restrict allocations of memory for user space.
-.PP
+.P
This enables configuring a system so that several independent
jobs can share common kernel data, such as filesystem pages,
while isolating each job's user allocation in its own cpuset.
@@ -437,7 +437,7 @@ all the jobs, and construct child cpusets for each individual
job which are not
.I hardwall
cpusets.
-.PP
+.P
Only a small amount of kernel memory, such as requests from
interrupt handlers, is allowed to be taken outside even a
.I hardwall
@@ -455,7 +455,7 @@ the kernel will run the command
supplying the pathname (relative to the mount point of the
cpuset filesystem) of the abandoned cpuset.
This enables automatic removal of abandoned cpusets.
-.PP
+.P
The default value of
.I notify_on_release
in the root cpuset at system boot is disabled (0).
@@ -463,7 +463,7 @@ The default value of other cpusets at creation
is the current value of their parent's
.I notify_on_release
setting.
-.PP
+.P
The command
.I /sbin/cpuset_release_agent
is invoked, with the name
@@ -471,18 +471,18 @@ is invoked, with the name
relative path)
of the to-be-released cpuset in
.IR argv[1] .
-.PP
+.P
The usual contents of the command
.I /sbin/cpuset_release_agent
is simply the shell script:
-.PP
+.P
.in +4n
.EX
#!/bin/sh
rmdir /dev/cpuset/$1
.EE
.in
-.PP
+.P
As with other flag values below, this flag can
be changed by writing an ASCII
number 0 or 1 (with optional trailing newline)
@@ -494,30 +494,30 @@ The
of a cpuset provides a simple per-cpuset running average of
the rate that the processes in a cpuset are attempting to free up in-use
memory on the nodes of the cpuset to satisfy additional memory requests.
-.PP
+.P
This enables batch managers that are monitoring jobs running in dedicated
cpusets to efficiently detect what level of memory pressure that job
is causing.
-.PP
+.P
This is useful both on tightly managed systems running a wide mix of
submitted jobs, which may choose to terminate or reprioritize jobs that
are trying to use more memory than allowed on the nodes assigned them,
and with tightly coupled, long-running, massively parallel scientific
computing jobs that will dramatically fail to meet required performance
goals if they start to use more memory than allowed to them.
-.PP
+.P
This mechanism provides a very economical way for the batch manager
to monitor a cpuset for signs of memory pressure.
It's up to the batch manager or other user code to decide
what action to take if it detects signs of memory pressure.
-.PP
+.P
Unless memory pressure calculation is enabled by setting the pseudo-file
.IR /dev/cpuset/cpuset.memory_pressure_enabled ,
it is not computed for any cpuset, and reads from any
.I memory_pressure
always return zero, as represented by the ASCII string "0\en".
See the \fBWARNINGS\fR section, below.
-.PP
+.P
A per-cpuset, running average is employed for the following reasons:
.IP \[bu] 3
Because this meter is per-cpuset rather than per-process or per virtual
@@ -535,7 +535,7 @@ the batch scheduler can obtain the key information\[em]memory
pressure in a cpuset\[em]with a single read, rather than having to
query and accumulate results over all the (dynamically changing)
set of processes in the cpuset.
-.PP
+.P
The
.I memory_pressure
of a cpuset is calculated using a per-cpuset simple digital filter
@@ -543,7 +543,7 @@ that is kept within the kernel.
For each cpuset, this filter tracks
the recent rate at which processes attached to that cpuset enter the
kernel direct reclaim code.
-.PP
+.P
The kernel direct reclaim code is entered whenever a process has to
satisfy a memory page request by first finding some other page to
repurpose, due to lack of any readily available already free pages.
@@ -552,7 +552,7 @@ to disk.
Unmodified filesystem buffer pages are repurposed
by simply dropping them, though if that page is needed again, it
will have to be reread from disk.
-.PP
+.P
The
.I cpuset.memory_pressure
file provides an integer number representing the recent (half-life of
@@ -568,14 +568,14 @@ They are called
.I cpuset.memory_spread_page
and
.IR cpuset.memory_spread_slab .
-.PP
+.P
If the per-cpuset Boolean flag file
.I cpuset.memory_spread_page
is set, then
the kernel will spread the filesystem buffers (page cache) evenly
over all the nodes that the faulting process is allowed to use, instead
of preferring to put those pages on the node where the process is running.
-.PP
+.P
If the per-cpuset Boolean flag file
.I cpuset.memory_spread_slab
is set,
@@ -583,12 +583,12 @@ then the kernel will spread some filesystem-related slab caches,
such as those for inodes and directory entries, evenly over all the nodes
that the faulting process is allowed to use, instead of preferring to
put those pages on the node where the process is running.
-.PP
+.P
The setting of these flags does not affect the data segment
(see
.BR brk (2))
or stack segment pages of a process.
-.PP
+.P
By default, both kinds of memory spreading are off and the kernel
prefers to allocate memory pages on the node local to where the
requesting process is running.
@@ -596,10 +596,10 @@ If that node is not allowed by the
process's NUMA memory policy or cpuset configuration or if there are
insufficient free memory pages on that node, then the kernel looks
for the nearest node that is allowed and has sufficient free memory.
-.PP
+.P
When new cpusets are created, they inherit the memory spread settings
of their parent.
-.PP
+.P
Setting memory spreading causes allocations for the affected page or
slab caches to ignore the process's NUMA memory policy and be spread
instead.
@@ -614,7 +614,7 @@ no cpuset-specified memory spreading is in effect, even if it is.
If cpuset memory spreading is subsequently turned off, the NUMA
memory policy most recently specified by these calls is automatically
reapplied.
-.PP
+.P
Both
.I cpuset.memory_spread_page
and
@@ -623,10 +623,10 @@ are Boolean flag files.
By default, they contain "0", meaning that the feature is off
for that cpuset.
If a "1" is written to that file, that turns the named feature on.
-.PP
+.P
Cpuset-specified memory spreading behaves similarly to what is known
(in other contexts) as round-robin or interleave memory placement.
-.PP
+.P
Cpuset-specified memory spreading can provide substantial performance
improvements for jobs that:
.IP \[bu] 3
@@ -636,7 +636,7 @@ frequently access that data; but also
.IP \[bu]
need to access large filesystem data sets that must be spread
across the several nodes in the job's cpuset in order to fit.
-.PP
+.P
Without this policy,
the memory allocation across the nodes in the job's cpuset
can become very uneven,
@@ -652,19 +652,19 @@ was allocated, so long as it remains allocated, even if the
cpuset's memory-placement policy
.I mems
subsequently changes.
-.PP
+.P
When memory migration is enabled in a cpuset, if the
.I mems
setting of the cpuset is changed, then any memory page in use by any
process in the cpuset that is on a memory node that is no longer
allowed will be migrated to a memory node that is allowed.
-.PP
+.P
Furthermore, if a process is moved into a cpuset with
.I memory_migrate
enabled, any memory pages it uses that were on memory nodes allowed
in its previous cpuset, but which are not allowed in its new cpuset,
will be migrated to a memory node allowed in the new cpuset.
-.PP
+.P
The relative placement of a migrated page within
the cpuset is preserved during these migration operations if possible.
For example,
@@ -679,7 +679,7 @@ the kernel will look for processes on other more
overloaded CPUs and move those processes to the underutilized CPU,
within the constraints of such placement mechanisms as cpusets and
.BR sched_setaffinity (2).
-.PP
+.P
The algorithmic cost of load balancing and its impact on key shared
kernel data structures such as the process list increases more than
linearly with the number of CPUs being balanced.
@@ -692,17 +692,17 @@ and the cost of load balancing depends
on implementation details of the kernel process scheduler, which is
subject to change over time, as improved kernel scheduler algorithms
are implemented.)
-.PP
+.P
The per-cpuset flag
.I sched_load_balance
provides a mechanism to suppress this automatic scheduler load
balancing in cases where it is not needed and suppressing it would have
worthwhile performance benefits.
-.PP
+.P
By default, load balancing is done across all CPUs, except those
marked isolated using the kernel boot time "isolcpus=" argument.
(See \fBScheduler Relax Domain Level\fR, below, to change this default.)
-.PP
+.P
This default load balancing across all CPUs is not well suited to
the following two situations:
.IP \[bu] 3
@@ -713,7 +713,7 @@ on separate sets of CPUs, full load balancing is unnecessary.
Systems supporting real-time on some CPUs need to minimize
system overhead on those CPUs, including avoiding process load
balancing if that is not needed.
-.PP
+.P
When the per-cpuset flag
.I sched_load_balance
is enabled (the default setting),
@@ -723,7 +723,7 @@ ensuring that load balancing can move a process (not otherwise pinned,
as by
.BR sched_setaffinity (2))
from any CPU in that cpuset to any other.
-.PP
+.P
When the per-cpuset flag
.I sched_load_balance
is disabled, then the
@@ -732,7 +732,7 @@ scheduler will avoid load balancing across the CPUs in that cpuset,
has
.I sched_load_balance
enabled.
-.PP
+.P
So, for example, if the top cpuset has the flag
.I sched_load_balance
enabled, then the scheduler will load balance across all
@@ -740,12 +740,12 @@ CPUs, and the setting of the
.I sched_load_balance
flag in other cpusets has no effect,
as we're already fully load balancing.
-.PP
+.P
Therefore in the above two situations, the flag
.I sched_load_balance
should be disabled in the top cpuset, and only some of the smaller,
child cpusets would have this flag enabled.
-.PP
+.P
When doing this, you don't usually want to leave any unpinned processes in
the top cpuset that might use nontrivial amounts of CPU, as such processes
may be artificially constrained to some subset of CPUs, depending on
@@ -753,7 +753,7 @@ the particulars of this flag setting in descendant cpusets.
Even if such a process could use spare CPU cycles in some other CPUs,
the kernel scheduler might not consider the possibility of
load balancing that process to the underused CPU.
-.PP
+.P
Of course, processes pinned to a particular CPU can be left in a cpuset
that disables
.I sched_load_balance
@@ -780,7 +780,7 @@ In any case, of course, tasks will be scheduled to run only on
CPUs allowed by their cpuset, as modified by
.BR sched_setaffinity (2)
system calls.
-.PP
+.P
On small systems, such as those with just a few CPUs, immediate load
balancing is useful to improve system interactivity and to minimize
wasteful idle CPU cycles.
@@ -788,7 +788,7 @@ But on large systems, attempting immediate
load balancing across a large number of CPUs can be more costly than
it is worth, depending on the particular performance characteristics
of the job mix and the hardware.
-.PP
+.P
The exact meaning of the small integer values of
.I sched_relax_domain_level
will depend on internal
@@ -796,12 +796,12 @@ implementation details of the kernel scheduler code and on the
non-uniform architecture of the hardware.
Both of these will evolve
over time and vary by system architecture and kernel version.
-.PP
+.P
As of this writing, when this capability was introduced in Linux
2.6.26, on certain popular architectures, the positive values of
.I sched_relax_domain_level
have the following meanings.
-.PP
+.P
.PD 0
.TP
.B 1
@@ -823,7 +823,7 @@ Perform immediate load balancing across over several
Perform immediate load balancing across over all CPUs
in system [On NUMA systems].
.PD
-.PP
+.P
The
.I sched_relax_domain_level
value of zero (0) always means
@@ -831,7 +831,7 @@ don't perform immediate load balancing,
hence that load balancing is done only periodically,
not immediately when a CPU becomes available or another task becomes
runnable.
-.PP
+.P
The
.I sched_relax_domain_level
value of minus one (\-1)
@@ -839,7 +839,7 @@ always means use the system default value.
The system default value can vary by architecture and kernel version.
This system default value can be changed by kernel
boot-time "relax_domain_level=" argument.
-.PP
+.P
In the case of multiple overlapping cpusets which have conflicting
.I sched_relax_domain_level
values, then the highest such value
@@ -860,7 +860,7 @@ The \fBMask Format\fR is used to represent CPU and memory-node bit masks
in the
.IR /proc/ pid /status
file.
-.PP
+.P
This format displays each 32-bit
word in hexadecimal (using ASCII characters "0" - "9" and "a" - "f");
words are filled with leading zeros, if required.
@@ -868,12 +868,12 @@ For masks longer than one word, a comma separator is used between words.
Words are displayed in big-endian
order, which has the most significant bit first.
The hex digits within a word are also in big-endian order.
-.PP
+.P
The number of 32-bit words displayed is the minimum number needed to
display all bits of the bit mask, based on the size of the bit mask.
-.PP
+.P
Examples of the \fBMask Format\fR:
-.PP
+.P
.in +4n
.EX
00000001 # just bit 0 set
@@ -883,15 +883,15 @@ Examples of the \fBMask Format\fR:
00000000,000e3862 # 1,5,6,11\-13,17\-19 set
.EE
.in
-.PP
+.P
A mask with bits 0, 1, 2, 4, 8, 16, 32, and 64 set displays as:
-.PP
+.P
.in +4n
.EX
00000001,00000001,00010117
.EE
.in
-.PP
+.P
The first "1" is for bit 64, the
second for bit 32, the third for bit 16, the fourth for bit 8, the
fifth for bit 4, and the "7" is for bits 2, 1, and 0.
@@ -903,9 +903,9 @@ and
.I mems
is a comma-separated list of CPU or memory-node
numbers and ranges of numbers, in ASCII decimal.
-.PP
+.P
Examples of the \fBList Format\fR:
-.PP
+.P
.in +4n
.EX
0\-4,9 # bits 0, 1, 2, 3, 4, and 9 set
@@ -940,7 +940,7 @@ The permissions of a cpuset are determined by the permissions
of the directories and pseudo-files in the cpuset filesystem,
normally mounted at
.IR /dev/cpuset .
-.PP
+.P
For instance, a process can put itself in some other cpuset (than
its current one) if it can write the
.I tasks
@@ -949,14 +949,14 @@ This requires execute permission on the encompassing directories
and write permission on the
.I tasks
file.
-.PP
+.P
An additional constraint is applied to requests to place some
other process in a cpuset.
One process may not attach another to
a cpuset unless it would have permission to send that process
a signal (see
.BR kill (2)).
-.PP
+.P
A process may create a child cpuset if it can access and write the
parent cpuset directory.
It can modify the CPUs or memory nodes
@@ -967,7 +967,7 @@ corresponding
or
.I mems
file.
-.PP
+.P
There is one minor difference between the manner in which these
permissions are evaluated and the manner in which normal filesystem
operation permissions are evaluated.
@@ -988,7 +988,7 @@ to its cpuset directory beneath
which is a bit unusual)
or if some user code converts the relative cpuset path to a
full filesystem path.
-.PP
+.P
In theory, this means that user code should specify cpusets
using absolute pathnames, which requires knowing the mount point of
the cpuset filesystem (usually, but not necessarily,
@@ -1024,13 +1024,13 @@ command in some shells does not display an error message if the
system call fails.
.\" Gack! csh(1)'s echo does this
For example, if the command:
-.PP
+.P
.in +4n
.EX
echo 19 > cpuset.mems
.EE
.in
-.PP
+.P
failed because memory node 19 was not allowed (perhaps
the current system does not have a memory node 19), then the
.B echo
@@ -1041,7 +1041,7 @@ external command to change cpuset file settings, as this
command will display
.BR write (2)
errors, as in the example:
-.PP
+.P
.in +4n
.EX
/bin/echo 19 > cpuset.mems
@@ -1053,7 +1053,7 @@ errors, as in the example:
.SS Memory placement
Not all allocations of system memory are constrained by cpusets,
for the following reasons.
-.PP
+.P
If hot-plug functionality is used to remove all the CPUs that are
currently assigned to a cpuset, then the kernel will automatically
update the
@@ -1068,7 +1068,7 @@ rather than starving a process that has had all its allowed CPUs or
memory nodes taken offline.
User code should reconfigure cpusets to refer only to online CPUs
and memory nodes when using hot-plug to add or remove such resources.
-.PP
+.P
A few kernel-critical, internal memory-allocation requests, marked
GFP_ATOMIC, must be satisfied immediately.
The kernel may drop some
@@ -1076,7 +1076,7 @@ request or malfunction if one of these allocations fail.
If such a request cannot be satisfied within the current process's cpuset,
then we relax the cpuset, and look for memory anywhere we can find it.
It's better to violate the cpuset than stress the kernel.
-.PP
+.P
Allocations of memory requested by kernel drivers while processing
an interrupt lack any relevant process context, and are not confined
by cpusets.
@@ -1092,7 +1092,7 @@ a different directory is not permitted.
The Linux kernel implementation of cpusets sets
.I errno
to specify the reason for a failed system call affecting cpusets.
-.PP
+.P
The possible
.I errno
settings and their meaning when set on
@@ -1359,7 +1359,7 @@ options using shell commands.
.SS Creating and attaching to a cpuset.
To create a new cpuset and attach the current command shell to it,
the steps are:
-.PP
+.P
.PD 0
.IP (1) 5
mkdir /dev/cpuset (if not already done)
@@ -1373,11 +1373,11 @@ Assign CPUs and memory nodes to the new cpuset.
.IP (5)
Attach the shell to the new cpuset.
.PD
-.PP
+.P
For example, the following sequence of commands will set up a cpuset
named "Charlie", containing just CPUs 2 and 3, and memory node 1,
and then attach the current shell to that cpuset.
-.PP
+.P
.in +4n
.EX
.RB "$" " mkdir /dev/cpuset"
@@ -1399,7 +1399,7 @@ To migrate a job (the set of processes attached to a cpuset)
to different CPUs and memory nodes in the system, including moving
the memory pages currently allocated to that job,
perform the following steps.
-.PP
+.P
.PD 0
.IP (1) 5
Let's say we want to move the job in cpuset
@@ -1424,9 +1424,9 @@ Then move each process from
to
.IR beta .
.PD
-.PP
+.P
The following sequence of commands accomplishes this.
-.PP
+.P
.in +4n
.EX
.RB "$" " cd /dev/cpuset"
@@ -1438,22 +1438,22 @@ The following sequence of commands accomplishes this.
.RB "$" " while read i; do /bin/echo $i; done < ../alpha/tasks > tasks"
.EE
.in
-.PP
+.P
The above should move any processes in
.I alpha
to
.IR beta ,
and any memory held by these processes on memory nodes 2\[en]3 to memory
nodes 8\[en]9, respectively.
-.PP
+.P
Notice that the last step of the above sequence did not do:
-.PP
+.P
.in +4n
.EX
.RB "$" " cp ../alpha/tasks tasks"
.EE
.in
-.PP
+.P
The
.I while
loop, rather than the seemingly easier use of the
@@ -1462,7 +1462,7 @@ command, was necessary because
only one process PID at a time may be written to the
.I tasks
file.
-.PP
+.P
The same effect (writing one PID at a time) as the
.I while
loop can be accomplished more efficiently, in fewer keystrokes and in
@@ -1470,7 +1470,7 @@ syntax that works on any shell, but alas more obscurely, by using the
.B \-u
(unbuffered) option of
.BR sed (1):
-.PP
+.P
.in +4n
.EX
.RB "$" " sed \-un p < ../alpha/tasks > tasks"
@@ -1493,7 +1493,7 @@ syntax that works on any shell, but alas more obscurely, by using the
.BR sched (7),
.BR migratepages (8),
.BR numactl (8)
-.PP
+.P
.I Documentation/admin\-guide/cgroup\-v1/cpusets.rst
in the Linux kernel source tree
.\" commit 45ce80fb6b6f9594d1396d44dd7e7c02d596fef8