From: Josh Cartwright <joshc@ni.com>
Date: Thu, 31 Mar 2016 00:04:25 -0500
Subject: [PATCH 069/353] list_bl: fixup bogus lockdep warning
Origin: https://git.kernel.org/cgit/linux/kernel/git/rt/linux-stable-rt.git/commit?id=a22e7f3fc7bbc9dad2043bcbfc0d38ce9e4b00bb

At first glance, the use of 'static inline' seems appropriate for
INIT_HLIST_BL_HEAD().

However, when a 'static inline' function invocation is inlined by gcc,
all callers share any static local data declared within that inline
function.
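
For illustration only (not taken from this patch), a minimal sketch of
that sharing with an ordinary static local:

	/* hypothetical example: every caller in this translation unit
	 * operates on the same 'count' object */
	static inline int bump(void)
	{
		static int count;

		return ++count;
	}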

This presents a problem for how lockdep classes are set up.  raw_spinlocks,
for example, when CONFIG_DEBUG_SPINLOCK is enabled, are initialized with:

	# define raw_spin_lock_init(lock)				\
	do {								\
		static struct lock_class_key __key;			\
									\
		__raw_spin_lock_init((lock), #lock, &__key);		\
	} while (0)

When this macro is expanded inside a 'static inline' caller, like
INIT_HLIST_BL_HEAD():

	static inline void INIT_HLIST_BL_HEAD(struct hlist_bl_head *h)
	{
		h->first = NULL;
		raw_spin_lock_init(&h->lock);
	}

...the static local lock_class_key object becomes a static local of the
inline function itself.
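
Roughly, the expanded initializer then looks like this (a hand-written
sketch, not actual compiler output):

	static inline void INIT_HLIST_BL_HEAD(struct hlist_bl_head *h)
	{
		h->first = NULL;
		do {
			/* one instance, shared by every caller in the unit */
			static struct lock_class_key __key;

			__raw_spin_lock_init(&h->lock, "&h->lock", &__key);
		} while (0);
	}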

For compilation units which invoke INIT_HLIST_BL_HEAD() more than once,
all of the invocations therefore share this same static local object.

This can lead to some very confusing lockdep splats (example below).
Solve this problem by turning INIT_HLIST_BL_HEAD() into a macro, which
prevents the lockdep class object from being shared.
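
With the macro form (see the diff below), each call site expands its own
raw_spin_lock_init() and therefore gets its own static __key.  A
hypothetical user, assuming CONFIG_PREEMPT_RT_BASE:

	static struct hlist_bl_head head_a, head_b;

	static void init_heads(void)
	{
		INIT_HLIST_BL_HEAD(&head_a);	/* own __key, own class */
		INIT_HLIST_BL_HEAD(&head_b);	/* distinct __key/class */
	}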

 =============================================
 [ INFO: possible recursive locking detected ]
 4.4.4-rt11 #4 Not tainted
 ---------------------------------------------
 kswapd0/59 is trying to acquire lock:
  (&h->lock#2){+.+.-.}, at: mb_cache_shrink_scan

 but task is already holding lock:
  (&h->lock#2){+.+.-.}, at:  mb_cache_shrink_scan

 other info that might help us debug this:
  Possible unsafe locking scenario:

        CPU0
        ----
   lock(&h->lock#2);
   lock(&h->lock#2);

  *** DEADLOCK ***

  May be due to missing lock nesting notation

 2 locks held by kswapd0/59:
  #0:  (shrinker_rwsem){+.+...}, at: rt_down_read_trylock
  #1:  (&h->lock#2){+.+.-.}, at: mb_cache_shrink_scan

Reported-by: Luis Claudio R. Goncalves <lclaudio@uudg.org>
Tested-by: Luis Claudio R. Goncalves <lclaudio@uudg.org>
Signed-off-by: Josh Cartwright <joshc@ni.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 include/linux/list_bl.h | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/include/linux/list_bl.h b/include/linux/list_bl.h
index 69b659259bac..0b5de7d9ffcf 100644
--- a/include/linux/list_bl.h
+++ b/include/linux/list_bl.h
@@ -43,13 +43,15 @@ struct hlist_bl_node {
 	struct hlist_bl_node *next, **pprev;
 };
 
-static inline void INIT_HLIST_BL_HEAD(struct hlist_bl_head *h)
-{
-	h->first = NULL;
 #ifdef CONFIG_PREEMPT_RT_BASE
-	raw_spin_lock_init(&h->lock);
+#define INIT_HLIST_BL_HEAD(h)		\
+do {					\
+	(h)->first = NULL;		\
+	raw_spin_lock_init(&(h)->lock);	\
+} while (0)
+#else
+#define INIT_HLIST_BL_HEAD(h) (h)->first = NULL
 #endif
-}
 
 static inline void INIT_HLIST_BL_NODE(struct hlist_bl_node *h)
 {