From b15a952c52a6825376d3e7f6c1bf5c886c6d8b74 Mon Sep 17 00:00:00 2001
From: Daniel Baumann
Date: Sat, 27 Apr 2024 12:06:00 +0200
Subject: Adding debian version 5.10.209-2.

Signed-off-by: Daniel Baumann
---
 .../0075-doc-Use-CONFIG_PREEMPTION.patch | 250 +++++++++++++++++++++
 1 file changed, 250 insertions(+)
 create mode 100644 debian/patches-rt/0075-doc-Use-CONFIG_PREEMPTION.patch

diff --git a/debian/patches-rt/0075-doc-Use-CONFIG_PREEMPTION.patch b/debian/patches-rt/0075-doc-Use-CONFIG_PREEMPTION.patch
new file mode 100644
index 000000000..3ddcf14fa
--- /dev/null
+++ b/debian/patches-rt/0075-doc-Use-CONFIG_PREEMPTION.patch
@@ -0,0 +1,250 @@
+From d9780d88d268b12562427ef709de6ab2b8c85188 Mon Sep 17 00:00:00 2001
+From: Sebastian Andrzej Siewior
+Date: Tue, 15 Dec 2020 15:16:49 +0100
+Subject: [PATCH 075/323] doc: Use CONFIG_PREEMPTION
+Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/5.10/older/patches-5.10.204-rt100.tar.xz
+
+CONFIG_PREEMPTION is selected by CONFIG_PREEMPT and by CONFIG_PREEMPT_RT.
+Both PREEMPT and PREEMPT_RT require the same functionality, which today
+depends on CONFIG_PREEMPT.
+
+Update the documents and mention CONFIG_PREEMPTION. Spell out
+CONFIG_PREEMPT_RT (instead of PREEMPT_RT) since it is an option now.
+
+Signed-off-by: Sebastian Andrzej Siewior
+Signed-off-by: Paul E. McKenney
+Signed-off-by: Sebastian Andrzej Siewior
+---
+ .../Expedited-Grace-Periods.rst | 4 ++--
+ .../RCU/Design/Requirements/Requirements.rst | 24 +++++++++----------
+ Documentation/RCU/checklist.rst | 2 +-
+ Documentation/RCU/rcubarrier.rst | 6 ++---
+ Documentation/RCU/stallwarn.rst | 4 ++--
+ Documentation/RCU/whatisRCU.rst | 10 ++++----
+ 6 files changed, 25 insertions(+), 25 deletions(-)
+
+diff --git a/Documentation/RCU/Design/Expedited-Grace-Periods/Expedited-Grace-Periods.rst b/Documentation/RCU/Design/Expedited-Grace-Periods/Expedited-Grace-Periods.rst
+index 72f0f6fbd53c..6f89cf1e567d 100644
+--- a/Documentation/RCU/Design/Expedited-Grace-Periods/Expedited-Grace-Periods.rst
++++ b/Documentation/RCU/Design/Expedited-Grace-Periods/Expedited-Grace-Periods.rst
+@@ -38,7 +38,7 @@ sections.
+ RCU-preempt Expedited Grace Periods
+ ===================================
+ 
+-``CONFIG_PREEMPT=y`` kernels implement RCU-preempt.
++``CONFIG_PREEMPTION=y`` kernels implement RCU-preempt.
+ The overall flow of the handling of a given CPU by an RCU-preempt
+ expedited grace period is shown in the following diagram:
+ 
+@@ -112,7 +112,7 @@ things.
+ RCU-sched Expedited Grace Periods
+ ---------------------------------
+ 
+-``CONFIG_PREEMPT=n`` kernels implement RCU-sched. The overall flow of
++``CONFIG_PREEMPTION=n`` kernels implement RCU-sched. The overall flow of
+ the handling of a given CPU by an RCU-sched expedited grace period is
+ shown in the following diagram:
+ 
+diff --git a/Documentation/RCU/Design/Requirements/Requirements.rst b/Documentation/RCU/Design/Requirements/Requirements.rst
+index 0f7e0237ea14..17d38480ef5c 100644
+--- a/Documentation/RCU/Design/Requirements/Requirements.rst
++++ b/Documentation/RCU/Design/Requirements/Requirements.rst
+@@ -78,7 +78,7 @@ RCU treats a nested set as one big RCU read-side critical section.
+ Production-quality implementations of ``rcu_read_lock()`` and
+ ``rcu_read_unlock()`` are extremely lightweight, and in fact have
+ exactly zero overhead in Linux kernels built for production use with
+-``CONFIG_PREEMPT=n``.
++``CONFIG_PREEMPTION=n``.
+ 
+ This guarantee allows ordering to be enforced with extremely low
+ overhead to readers, for example:
+@@ -1182,7 +1182,7 @@ and has become decreasingly so as memory sizes have expanded and memory
+ costs have plummeted. However, as I learned from Matt Mackall's
+ `bloatwatch `__ efforts, memory
+ footprint is critically important on single-CPU systems with
+-non-preemptible (``CONFIG_PREEMPT=n``) kernels, and thus `tiny
++non-preemptible (``CONFIG_PREEMPTION=n``) kernels, and thus `tiny
+ RCU `__
+ was born. Josh Triplett has since taken over the small-memory banner
+ with his `Linux kernel tinification `__
+@@ -1498,7 +1498,7 @@ limitations.
+ 
+ Implementations of RCU for which ``rcu_read_lock()`` and
+ ``rcu_read_unlock()`` generate no code, such as Linux-kernel RCU when
+-``CONFIG_PREEMPT=n``, can be nested arbitrarily deeply. After all, there
++``CONFIG_PREEMPTION=n``, can be nested arbitrarily deeply. After all, there
+ is no overhead. Except that if all these instances of
+ ``rcu_read_lock()`` and ``rcu_read_unlock()`` are visible to the
+ compiler, compilation will eventually fail due to exhausting memory,
+@@ -1771,7 +1771,7 @@ implementation can be a no-op.
+ 
+ However, once the scheduler has spawned its first kthread, this early
+ boot trick fails for ``synchronize_rcu()`` (as well as for
+-``synchronize_rcu_expedited()``) in ``CONFIG_PREEMPT=y`` kernels. The
++``synchronize_rcu_expedited()``) in ``CONFIG_PREEMPTION=y`` kernels. The
+ reason is that an RCU read-side critical section might be preempted,
+ which means that a subsequent ``synchronize_rcu()`` really does have to
+ wait for something, as opposed to simply returning immediately.
+@@ -2010,7 +2010,7 @@ the following:
+ 5 rcu_read_unlock();
+ 6 do_something_with(v, user_v);
+ 
+-If the compiler did make this transformation in a ``CONFIG_PREEMPT=n`` kernel
++If the compiler did make this transformation in a ``CONFIG_PREEMPTION=n`` kernel
+ build, and if ``get_user()`` did page fault, the result would be a quiescent
+ state in the middle of an RCU read-side critical section. This misplaced
+ quiescent state could result in line 4 being a use-after-free access,
+@@ -2292,7 +2292,7 @@ conjunction with the `-rt
+ patchset `__. The
+ real-time-latency response requirements are such that the traditional
+ approach of disabling preemption across RCU read-side critical sections
+-is inappropriate. Kernels built with ``CONFIG_PREEMPT=y`` therefore use
++is inappropriate. Kernels built with ``CONFIG_PREEMPTION=y`` therefore use
+ an RCU implementation that allows RCU read-side critical sections to be
+ preempted. This requirement made its presence known after users made it
+ clear that an earlier `real-time
+@@ -2414,7 +2414,7 @@ includes ``rcu_read_lock_bh()``, ``rcu_read_unlock_bh()``,
+ ``call_rcu_bh()``, ``rcu_barrier_bh()``, and
+ ``rcu_read_lock_bh_held()``. However, the update-side APIs are now
+ simple wrappers for other RCU flavors, namely RCU-sched in
+-CONFIG_PREEMPT=n kernels and RCU-preempt otherwise.
++CONFIG_PREEMPTION=n kernels and RCU-preempt otherwise.
+ 
+ Sched Flavor (Historical)
+ ~~~~~~~~~~~~~~~~~~~~~~~~~
+@@ -2432,11 +2432,11 @@ not have this property, given that any point in the code outside of an
+ RCU read-side critical section can be a quiescent state. Therefore,
+ *RCU-sched* was created, which follows “classic” RCU in that an
+ RCU-sched grace period waits for pre-existing interrupt and NMI
+-handlers. In kernels built with ``CONFIG_PREEMPT=n``, the RCU and
++handlers. In kernels built with ``CONFIG_PREEMPTION=n``, the RCU and
+ RCU-sched APIs have identical implementations, while kernels built with
+-``CONFIG_PREEMPT=y`` provide a separate implementation for each.
++``CONFIG_PREEMPTION=y`` provide a separate implementation for each.
+ 
+-Note well that in ``CONFIG_PREEMPT=y`` kernels,
++Note well that in ``CONFIG_PREEMPTION=y`` kernels,
+ ``rcu_read_lock_sched()`` and ``rcu_read_unlock_sched()`` disable and
+ re-enable preemption, respectively. This means that if there was a
+ preemption attempt during the RCU-sched read-side critical section,
+@@ -2599,10 +2599,10 @@ userspace execution also delimit tasks-RCU read-side critical sections.
+ 
+ The tasks-RCU API is quite compact, consisting only of
+ ``call_rcu_tasks()``, ``synchronize_rcu_tasks()``, and
+-``rcu_barrier_tasks()``. In ``CONFIG_PREEMPT=n`` kernels, trampolines
++``rcu_barrier_tasks()``. In ``CONFIG_PREEMPTION=n`` kernels, trampolines
+ cannot be preempted, so these APIs map to ``call_rcu()``,
+ ``synchronize_rcu()``, and ``rcu_barrier()``, respectively. In
+-``CONFIG_PREEMPT=y`` kernels, trampolines can be preempted, and these
++``CONFIG_PREEMPTION=y`` kernels, trampolines can be preempted, and these
+ three APIs are therefore implemented by separate functions that check
+ for voluntary context switches.
+ 
+diff --git a/Documentation/RCU/checklist.rst b/Documentation/RCU/checklist.rst
+index 2efed9926c3f..7ed4956043bd 100644
+--- a/Documentation/RCU/checklist.rst
++++ b/Documentation/RCU/checklist.rst
+@@ -214,7 +214,7 @@ over a rather long period of time, but improvements are always welcome!
+ 	the rest of the system.
+ 
+ 7.	As of v4.20, a given kernel implements only one RCU flavor,
+-	which is RCU-sched for PREEMPT=n and RCU-preempt for PREEMPT=y.
++	which is RCU-sched for PREEMPTION=n and RCU-preempt for PREEMPTION=y.
+ 	If the updater uses call_rcu() or synchronize_rcu(),
+ 	then the corresponding readers may use rcu_read_lock() and
+ 	rcu_read_unlock(), rcu_read_lock_bh() and rcu_read_unlock_bh(),
+diff --git a/Documentation/RCU/rcubarrier.rst b/Documentation/RCU/rcubarrier.rst
+index f64f4413a47c..3b4a24877496 100644
+--- a/Documentation/RCU/rcubarrier.rst
++++ b/Documentation/RCU/rcubarrier.rst
+@@ -9,7 +9,7 @@ RCU (read-copy update) is a synchronization mechanism that can be thought
+ of as a replacement for reader-writer locking (among other things), but with
+ very low-overhead readers that are immune to deadlock, priority inversion,
+ and unbounded latency. RCU read-side critical sections are delimited
+-by rcu_read_lock() and rcu_read_unlock(), which, in non-CONFIG_PREEMPT
++by rcu_read_lock() and rcu_read_unlock(), which, in non-CONFIG_PREEMPTION
+ kernels, generate no code whatsoever.
+ 
+ This means that RCU writers are unaware of the presence of concurrent
+@@ -329,10 +329,10 @@ Answer: This cannot happen. The reason is that on_each_cpu() has its last
+ 	to smp_call_function() and further to smp_call_function_on_cpu(),
+ 	causing this latter to spin until the cross-CPU invocation of
+ 	rcu_barrier_func() has completed. This by itself would prevent
+-	a grace period from completing on non-CONFIG_PREEMPT kernels,
++	a grace period from completing on non-CONFIG_PREEMPTION kernels,
+ 	since each CPU must undergo a context switch (or other quiescent
+ 	state) before the grace period can complete. However, this is
+-	of no use in CONFIG_PREEMPT kernels.
++	of no use in CONFIG_PREEMPTION kernels.
+ 
+ 	Therefore, on_each_cpu() disables preemption across its call
+ 	to smp_call_function() and also across the local call to
+diff --git a/Documentation/RCU/stallwarn.rst b/Documentation/RCU/stallwarn.rst
+index c9ab6af4d3be..e97d1b4876ef 100644
+--- a/Documentation/RCU/stallwarn.rst
++++ b/Documentation/RCU/stallwarn.rst
+@@ -25,7 +25,7 @@ warnings:
+ 
+ - A CPU looping with bottom halves disabled.
+ 
+-- For !CONFIG_PREEMPT kernels, a CPU looping anywhere in the kernel
++- For !CONFIG_PREEMPTION kernels, a CPU looping anywhere in the kernel
+   without invoking schedule(). If the looping in the kernel is
+   really expected and desirable behavior, you might need to add
+   some calls to cond_resched().
+@@ -44,7 +44,7 @@ warnings:
+   result in the ``rcu_.*kthread starved for`` console-log message,
+   which will include additional debugging information.
+ 
+-- A CPU-bound real-time task in a CONFIG_PREEMPT kernel, which might
++- A CPU-bound real-time task in a CONFIG_PREEMPTION kernel, which might
+   happen to preempt a low-priority task in the middle of an RCU
+   read-side critical section. This is especially damaging if
+   that low-priority task is not permitted to run on any other CPU,
+diff --git a/Documentation/RCU/whatisRCU.rst b/Documentation/RCU/whatisRCU.rst
+index fb3ff76c3e73..3b2b1479fd0f 100644
+--- a/Documentation/RCU/whatisRCU.rst
++++ b/Documentation/RCU/whatisRCU.rst
+@@ -684,7 +684,7 @@ Quick Quiz #1:
+ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ This section presents a "toy" RCU implementation that is based on
+ "classic RCU". It is also short on performance (but only for updates) and
+-on features such as hotplug CPU and the ability to run in CONFIG_PREEMPT
++on features such as hotplug CPU and the ability to run in CONFIG_PREEMPTION
+ kernels. The definitions of rcu_dereference() and rcu_assign_pointer()
+ are the same as those shown in the preceding section, so they are omitted.
+ ::
+@@ -740,7 +740,7 @@ Quick Quiz #2:
+ Quick Quiz #3:
+ 	If it is illegal to block in an RCU read-side
+ 	critical section, what the heck do you do in
+-	PREEMPT_RT, where normal spinlocks can block???
++	CONFIG_PREEMPT_RT, where normal spinlocks can block???
+ 
+ :ref:`Answers to Quick Quiz <8_whatisRCU>`
+ 
+@@ -1094,7 +1094,7 @@ Quick Quiz #2:
+ 	overhead is **negative**.
+ 
+ Answer:
+-	Imagine a single-CPU system with a non-CONFIG_PREEMPT
++	Imagine a single-CPU system with a non-CONFIG_PREEMPTION
+ 	kernel where a routing table is used by process-context
+ 	code, but can be updated by irq-context code (for example,
+ 	by an "ICMP REDIRECT" packet). The usual way of handling
+@@ -1121,10 +1121,10 @@ Answer:
+ Quick Quiz #3:
+ 	If it is illegal to block in an RCU read-side
+ 	critical section, what the heck do you do in
+-	PREEMPT_RT, where normal spinlocks can block???
++	CONFIG_PREEMPT_RT, where normal spinlocks can block???
+ 
+ Answer:
+-	Just as PREEMPT_RT permits preemption of spinlock
++	Just as CONFIG_PREEMPT_RT permits preemption of spinlock
+ 	critical sections, it permits preemption of RCU
+ 	read-side critical sections. It also permits
+ 	spinlocks blocking while in RCU read-side critical
+-- 
+2.43.0
+ 
-- 
cgit v1.2.3
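
Editor's note (not part of the commit above): the documentation hunks keep
returning to one pattern, so a compact illustration may help readers who
land on this packaging patch without an RCU background. The sketch below is
hypothetical; the names (my_config, cfg, demo_*) are invented and nothing
here is taken from the patched files. It shows the reader/updater pairing
the quoted text describes: rcu_read_lock() and rcu_read_unlock() compile to
nothing in CONFIG_PREEMPTION=n kernels and map onto preemptible RCU in
CONFIG_PREEMPTION=y kernels, while the updater publishes a new version and
waits for a grace period before freeing the old one.

#include <linux/errno.h>
#include <linux/mutex.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct my_config {
	int threshold;
	int interval;
};

static struct my_config __rcu *cfg;	/* RCU-protected pointer */
static DEFINE_MUTEX(cfg_mutex);		/* serializes updaters */

/* Reader: runs concurrently with updates and never blocks. */
static int demo_read_threshold(void)
{
	struct my_config *p;
	int ret = -ENOENT;

	rcu_read_lock();		/* zero overhead if CONFIG_PREEMPTION=n */
	p = rcu_dereference(cfg);
	if (p)
		ret = p->threshold;
	rcu_read_unlock();
	return ret;
}

/* Updater: publish a new version, wait a grace period, free the old one. */
static int demo_update(int threshold, int interval)
{
	struct my_config *newp, *oldp;

	newp = kmalloc(sizeof(*newp), GFP_KERNEL);
	if (!newp)
		return -ENOMEM;
	newp->threshold = threshold;
	newp->interval = interval;

	mutex_lock(&cfg_mutex);
	oldp = rcu_dereference_protected(cfg, lockdep_is_held(&cfg_mutex));
	rcu_assign_pointer(cfg, newp);	/* publish the new version */
	mutex_unlock(&cfg_mutex);

	synchronize_rcu();		/* wait for pre-existing readers */
	kfree(oldp);			/* no reader can still hold oldp */
	return 0;
}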
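
A second hypothetical sketch, tied to the rcubarrier.rst hunk above:
synchronize_rcu() only waits for pre-existing readers, whereas rcu_barrier()
also waits until every callback already queued with call_rcu() has been
invoked. That is why module unload paths call it, since a callback must
never run after the module text is gone. Again, all demo_* names are
invented for illustration.

#include <linux/module.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

struct demo_node {
	struct rcu_head rcu;
	int payload;
};

static void demo_free_cb(struct rcu_head *head)
{
	kfree(container_of(head, struct demo_node, rcu));
}

/* Normal operation: defer the free until a grace period has elapsed. */
static void __maybe_unused demo_retire(struct demo_node *node)
{
	call_rcu(&node->rcu, demo_free_cb);
}

static void __exit demo_exit(void)
{
	/* ...unlink all demo_node instances from global structures... */
	rcu_barrier();	/* wait for all queued demo_free_cb() invocations */
}
module_exit(demo_exit);
MODULE_LICENSE("GPL");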