From: Liu Shixin <liushixin2@huawei.com>
Date: Thu, 30 Jun 2022 19:33:31 +0800
Subject: swiotlb: skip swiotlb_bounce when orig_addr is zero
Origin: https://lore.kernel.org/stable/20220630113331.1544886-1-liushixin2@huawei.com/

After commit ddbd89deb7d3 ("swiotlb: fix info leak with DMA_FROM_DEVICE"),
swiotlb_bounce() is called unconditionally in swiotlb_tbl_map_single().
This requires that orig_addr be a valid physical address, which is not
always true on stable-4.19 or earlier versions.
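
For the non-highmem case, swiotlb_bounce() is essentially a memcpy()
against the virtual address derived from orig_addr. A simplified sketch
(from memory, not the exact stable-4.19 source) of why a zero orig_addr
blows up:

	static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
				   size_t size, enum dma_data_direction dir)
	{
		unsigned char *vaddr = phys_to_virt(tlb_addr);

		/*
		 * With orig_addr == 0, phys_to_virt(0) yields a bogus
		 * linear-map address and __memcpy() faults, as in the
		 * trace below.
		 */
		if (dir == DMA_TO_DEVICE)
			memcpy(vaddr, phys_to_virt(orig_addr), size);
		else
			memcpy(phys_to_virt(orig_addr), vaddr, size);
	}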

On stable-4.19, swiotlb_alloc_buffer() calls swiotlb_tbl_map_single() with
orig_addr equal to zero, which causes a panic like the following:

Unable to handle kernel paging request at virtual address ffffb77a40000000
...
pc : __memcpy+0x100/0x180
lr : swiotlb_bounce+0x74/0x88
...
Call trace:
__memcpy+0x100/0x180
swiotlb_tbl_map_single+0x2c8/0x338
swiotlb_alloc+0xb4/0x198
__dma_alloc+0x84/0x1d8
...
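
The offending call site on stable-4.19 looks roughly like this (quoted
from memory, treat it as an illustrative sketch rather than the exact
source):

	/* kernel/dma/swiotlb.c, swiotlb_alloc_buffer() */
	phys_addr = swiotlb_tbl_map_single(dev,
			__phys_to_dma(dev, io_tlb_start),
			0, size, DMA_FROM_DEVICE, attrs);	/* orig_addr == 0 */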

On stable-4.9 and stable-4.14, swiotlb_alloc_coherent() will call
map_single() with orig_addr equal to zero, which can cause the same panic.
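
There the zero address comes in through map_single(), roughly as below
(again an approximation from memory; the exact argument list differs
between 4.9 and 4.14):

	/* lib/swiotlb.c, swiotlb_alloc_coherent(); the 0 is orig_addr */
	paddr = map_single(hwdev, 0, size, DMA_FROM_DEVICE);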

Fix this by skipping swiotlb_bounce() when orig_addr is zero.

Fixes: ddbd89deb7d3 ("swiotlb: fix info leak with DMA_FROM_DEVICE")
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
---
kernel/dma/swiotlb.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -594,7 +594,8 @@ found:
 	 * unconditional bounce may prevent leaking swiotlb content (i.e.
 	 * kernel memory) to user-space.
 	 */
-	swiotlb_bounce(orig_addr, tlb_addr, size, DMA_TO_DEVICE);
+	if (orig_addr)
+		swiotlb_bounce(orig_addr, tlb_addr, size, DMA_TO_DEVICE);
 	return tlb_addr;
 }