From 4da7dd5c5195e9f48a911e768ca8bd317f5c6b7f Mon Sep 17 00:00:00 2001
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Fri, 26 Feb 2021 17:26:04 +0100
Subject: [PATCH 336/347] mm: slub: Don't resize the location tracking cache on
PREEMPT_RT
Origin: https://www.kernel.org/pub/linux/kernel/projects/rt/4.19/older/patches-4.19.246-rt110.tar.xz

The location tracking cache is one page in size and is resized when it
becomes too small. That reallocation happens with interrupts disabled
and therefore cannot be done on PREEMPT_RT.

Should one page be too small, more has to be allocated up front. The
only downside is that fewer callers will be visible: once the table is
full, additional call sites are dropped instead of the table being
grown.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
(cherry picked from commit 87bd0bf324f4c5468ea3d1de0482589f491f3145)
Signed-off-by: Clark Williams <williams@redhat.com>
---
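For review context, a sketch of alloc_loc_track() as it looks with this
hunk applied, based on the surrounding 4.19 mm/slub.c code (simplified;
only the parts relevant here):

/*
 * GFP_ATOMIC is only passed in from add_location(), which resizes the
 * table with interrupts disabled; the initial allocation in
 * list_locations() uses GFP_KERNEL. Refusing the atomic resize on
 * PREEMPT_RT makes the caller treat the table as full, so the new
 * sample is dropped instead of the table being grown.
 */
static int alloc_loc_track(struct loc_track *t, unsigned long max, gfp_t flags)
{
	struct location *l;
	int order;

	if (IS_ENABLED(CONFIG_PREEMPT_RT) && flags == GFP_ATOMIC)
		return 0;

	order = get_order(sizeof(struct location) * max);

	l = (void *)__get_free_pages(flags, order);
	if (!l)
		return 0;

	if (t->count) {
		/* Carry the old entries over, then free the old table. */
		memcpy(l, t->loc, sizeof(struct location) * t->count);
		free_loc_track(t);
	}
	t->max = max;
	t->loc = l;
	return 1;
}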
 mm/slub.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/slub.c b/mm/slub.c
index 497096152c39..6b9b894ba5bc 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4591,6 +4591,9 @@ static int alloc_loc_track(struct loc_track *t, unsigned long max, gfp_t flags)
 	struct location *l;
 	int order;
 
+	if (IS_ENABLED(CONFIG_PREEMPT_RT) && flags == GFP_ATOMIC)
+		return 0;
+
 	order = get_order(sizeof(struct location) * max);
 
 	l = (void *)__get_free_pages(flags, order);
--
2.36.1