From 5c27e6fdf46d68180a46fdf7944aa7e4668680c3 Mon Sep 17 00:00:00 2001
From: Wander Lairson Costa
Date: Wed, 14 Jun 2023 09:23:22 -0300
Subject: [PATCH 57/62] sched: avoid false lockdep splat in put_task_struct()

Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/6.1/older/patches-6.1.69-rt21.tar.xz

In put_task_struct(), a spin_lock is indirectly acquired in the stock
kernel. When running the kernel in the real-time (RT) configuration,
the operation is instead deferred to a preemptible context so that the
preemption guarantees are preserved. However, if PROVE_RAW_LOCK_NESTING
is enabled and __put_task_struct() is called while holding a
raw_spinlock, lockdep incorrectly reports an "Invalid lock context"
splat in the stock kernel.

This false splat occurs because lockdep is unaware of the different
code path taken under RT. To address this, override the inner wait
type so the false lockdep splat is not triggered.

Signed-off-by: Wander Lairson Costa
Suggested-by: Oleg Nesterov
Suggested-by: Sebastian Andrzej Siewior
Suggested-by: Peter Zijlstra
Cc: Steven Rostedt
Cc: Luis Goncalves
Link: https://lore.kernel.org/r/20230614122323.37957-3-wander@redhat.com
Signed-off-by: Sebastian Andrzej Siewior
(cherry picked from commit a5e446e728e89d5f5c5e427cc919bc7813c64c28)
Signed-off-by: Clark Williams
---
 include/linux/sched/task.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index 7291fb6399d2..de7ebd2bf3ba 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -141,8 +141,12 @@ static inline void put_task_struct(struct task_struct *t)
 	 */
 	if (IS_ENABLED(CONFIG_PREEMPT_RT) && !preemptible())
 		call_rcu(&t->rcu, __put_task_struct_rcu_cb);
-	else
+	else {
+		static DEFINE_WAIT_OVERRIDE_MAP(put_task_map, LD_WAIT_SLEEP);
+		lock_map_acquire_try(&put_task_map);
 		__put_task_struct(t);
+		lock_map_release(&put_task_map);
+	}
 }
 
 static inline void put_task_struct_many(struct task_struct *t, int nr)
-- 
2.43.0
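
For readers unfamiliar with the lockdep wait-type override used in the hunk
above, here is a minimal, annotated sketch of the same pattern in isolation.
It is illustrative only, not part of the patch: release_object(),
free_resources() and obj_lock are hypothetical stand-ins for
put_task_struct(), __put_task_struct() and the lock it indirectly takes,
and the snippet assumes a kernel built with CONFIG_PROVE_RAW_LOCK_NESTING
so that the annotation actually matters.

#include <linux/lockdep.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(obj_lock);

/*
 * Hypothetical stand-in for __put_task_struct(): takes a regular
 * spin_lock, which is a sleeping lock on PREEMPT_RT.
 */
static void free_resources(void)
{
	spin_lock(&obj_lock);
	/* ... release per-object state ... */
	spin_unlock(&obj_lock);
}

static void release_object(void)
{
	/*
	 * The override map with LD_WAIT_SLEEP tells lockdep not to flag
	 * the nested spin_lock as an invalid lock context when this path
	 * is reached with a raw_spinlock held on !RT, mirroring the fact
	 * that on RT the work is deferred to preemptible context anyway.
	 */
	static DEFINE_WAIT_OVERRIDE_MAP(release_map, LD_WAIT_SLEEP);

	lock_map_acquire_try(&release_map);
	free_resources();
	lock_map_release(&release_map);
}

The static DEFINE_WAIT_OVERRIDE_MAP() keeps the lockdep map alive for the
lifetime of the kernel, and lock_map_acquire_try()/lock_map_release() only
bracket the section whose wait type should be overridden; they compile away
entirely when lockdep is disabled.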