- Jul 12, 2021
-
-
Lu Jialin authored
hulk inclusion category: bugfix bugzilla: 51815, https://gitee.com/openeuler/kernel/issues/I3IJ9I CVE: NA -------- static int alloc_mem_cgroup_per_node_info(struct mem_cgroup *memcg, int node) { ... pn = kzalloc_node(sizeof(*pn), GFP_KERNEL, tmp); if (!pn) return 1; pnext = to_mgpn_ext(pn); pnext->lruvec_stat_local = alloc_percpu(struct lruvec_stat); } The structure behind pnext is larger than the one allocated for pn, so accessing pnext->lruvec_stat_local is out of bounds. Signed-off-by:
Lu Jialin <lujialin4@huawei.com> Reviewed-by:
Xiu Jianfeng <xiujianfeng@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
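As a rough illustration of the bug and its fix direction, here is a minimal sketch under stated assumptions: the wrapper struct, helper name and field layout below are hypothetical stand-ins for the extension structure reached via to_mgpn_ext(); the point is that the allocation must be sized by the extended type, not the embedded one.

/*
 * Sketch only: mem_cgroup_per_node_ext and this helper are hypothetical,
 * standing in for the extension structure reached via to_mgpn_ext().
 */
struct mem_cgroup_per_node_ext {
	struct mem_cgroup_per_node pn;		/* must remain the first member */
	struct lruvec_stat __percpu *lruvec_stat_local;
};

static int alloc_mem_cgroup_per_node_ext(struct mem_cgroup *memcg, int node,
					 int tmp)
{
	struct mem_cgroup_per_node_ext *pnext;

	/* allocate the extended struct, not just the embedded pn */
	pnext = kzalloc_node(sizeof(*pnext), GFP_KERNEL, tmp);
	if (!pnext)
		return 1;

	pnext->lruvec_stat_local = alloc_percpu(struct lruvec_stat);
	if (!pnext->lruvec_stat_local) {
		kfree(pnext);
		return 1;
	}

	memcg->nodeinfo[node] = &pnext->pn;
	return 0;
}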
-
Mimi Zohar authored
stable inclusion from linux-4.19.196 commit ff660863628fb144badcb3395cde7821c82c13a6 CVE: CVE-2021-35039 -------------------------------- [ Upstream commit 0c18f29aae7ce3dadd26d8ee3505d07cc982df75 ] Irrespective as to whether CONFIG_MODULE_SIG is configured, specifying "module.sig_enforce=1" on the boot command line sets "sig_enforce". Only allow "sig_enforce" to be set when CONFIG_MODULE_SIG is configured. This patch makes the presence of /sys/module/module/parameters/sig_enforce dependent on CONFIG_MODULE_SIG=y. Fixes: fda784e5 ("module: export module signature enforcement status") Reported-by:
Nayna Jain <nayna@linux.ibm.com> Tested-by:
Mimi Zohar <zohar@linux.ibm.com> Tested-by:
Jessica Yu <jeyu@kernel.org> Signed-off-by:
Mimi Zohar <zohar@linux.ibm.com> Signed-off-by:
Jessica Yu <jeyu@kernel.org> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Sasha Levin <sashal@kernel.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Reviewed-by:
Xiu Jianfeng <xiujianfeng@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
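A minimal sketch of the gating this fix describes, assuming the standard module_param() machinery (not the exact upstream diff): the writable sig_enforce parameter only exists when CONFIG_MODULE_SIG is enabled, so /sys/module/module/parameters/sig_enforce disappears otherwise.

#ifdef CONFIG_MODULE_SIG
static bool sig_enforce = IS_ENABLED(CONFIG_MODULE_SIG_FORCE);
module_param(sig_enforce, bool_enable_only, 0644);
MODULE_PARM_DESC(sig_enforce, "Enforce module signature verification");
#else
/* no CONFIG_MODULE_SIG: nothing to enforce and no sysfs knob to expose */
#define sig_enforce false
#endif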
-
Yufen Yu authored
hulk inclusion category: feature bugzilla: 173267 CVE: NA --------------------------- For hibench applications such as kmeans, wordcount and terasort, we can use this bpf tool to improve IO performance. Usage: make -C bpf ./test_xfs_file spec_readahead Signed-off-by:
Yufen Yu <yuyufen@huawei.com> Signed-off-by:
Zhihao Cheng <chengzhihao1@huawei.com> Reviewed-by:
Hou Tao <houtao1@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Yufen Yu authored
hulk inclusion category: feature bugzilla: 173267 CVE: NA --------------------------- Hibench applications such as kmeans, wordcount and terasort read whole blk_xxx and blk_xxx.meta files from disk sequentially, and almost all of the reads issued to disk are triggered by async readahead. However, a sequential read from a single thread does not mean sequential IO on disk when multiple threads do it concurrently. Multiple threads interleaving sequential reads can make the IO issued to disk become random, which limits disk IO throughput. To reduce disk randomization, we can consider increasing the readahead window, so the IO generated by the filesystem is bigger on each async readahead. But, limited by the disk's max_hw_sectors_kb, big IO will be split, and the whole bio needs to wait for all split bios to complete, which can cause longer IO latency. Our traces show that many long latencies in threads are caused by waiting for async readahead IO to complete when the readahead window is set to a big value. That means the thread read speed is faster than async readahead completion. To improve performance, we provide a special async readahead method: * On the one hand, we read more sequential data from disk, which reduces disk randomization when multiple threads interleave. * On the other hand, the size of each IO issued to disk is 2M, which avoids big IO splits and long IO latency. Signed-off-by:
Yufen Yu <yuyufen@huawei.com> Signed-off-by:
Zhihao Cheng <chengzhihao1@huawei.com> Reviewed-by:
Hou Tao <houtao1@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
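A minimal sketch of the chunking idea above; the helper, the 2M constant and the use of force_page_cache_readahead() are assumptions for illustration, not the patch's implementation.

#define SPEC_RA_CHUNK	(2UL * 1024 * 1024)	/* assumed 2M per-IO target */

static void spec_readahead(struct file *file, loff_t pos, size_t window)
{
	size_t done;

	/*
	 * Read a large sequential window, but in fixed 2M pieces so each
	 * readahead stays below typical max_hw_sectors_kb and avoids splits.
	 */
	for (done = 0; done < window; done += SPEC_RA_CHUNK) {
		size_t len = min_t(size_t, SPEC_RA_CHUNK, window - done);

		force_page_cache_readahead(file->f_mapping, file,
					   (pos + done) >> PAGE_SHIFT,
					   len >> PAGE_SHIFT);
	}
}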
-
Yufen Yu authored
hulk inclusion category: feature bugzilla: 173267 CVE: NA --------------------------- If the page index of ra->prev_pos is equal to the current pos, the read is sequential, so clear the FMODE_RANDOM flag to enable async readahead. Usage: make -C bpf ./test_xfs_file clear Signed-off-by:
Yufen Yu <yuyufen@huawei.com> Signed-off-by:
Zhihao Cheng <chengzhihao1@huawei.com> Reviewed-by:
Hou Tao <houtao1@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
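A minimal sketch of the sequential-read check, assuming it keys off file->f_ra.prev_pos; the helper name is hypothetical.

static void maybe_clear_fmode_random(struct file *file, loff_t pos)
{
	struct file_ra_state *ra = &file->f_ra;

	/* previous read ended on the page we are about to read: sequential */
	if ((ra->prev_pos >> PAGE_SHIFT) == (pos >> PAGE_SHIFT))
		file->f_mode &= ~FMODE_RANDOM;
}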
-
Yufen Yu authored
hulk inclusion category: feature bugzilla: 173267 CVE: NA --------------------------- Add a new member clear_f_mode into struct xfs_writable_file so that we can clear some flags of file->f_mode. Signed-off-by:
Yufen Yu <yuyufen@huawei.com> Signed-off-by:
Zhihao Cheng <chengzhihao1@huawei.com> Reviewed-by:
Hou Tao <houtao1@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
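A sketch of how such a clear mask could be consumed; only the clear_f_mode member comes from the message, the other fields and the helper are assumptions.

struct xfs_writable_file {
	const unsigned char *name;	/* assumed pre-existing fields */
	fmode_t set_f_mode;
	fmode_t clear_f_mode;		/* new: f_mode flags to clear, e.g. FMODE_RANDOM */
	unsigned long long i_size;
	unsigned long long prev_pos;
};

static void apply_writable_file_mode(struct file *filp,
				     const struct xfs_writable_file *wf)
{
	filp->f_mode |= wf->set_f_mode;
	filp->f_mode &= ~wf->clear_f_mode;
}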
-
- Jul 09, 2021
-
-
yangerkun authored
hulk inclusion category: bugfix bugzilla: 172974 CVE: NA --------------------------- 72c9e4df ('jbd2: ensure abort the journal if detect IO error when writing original buffer back') adds 'j_atomic_flags', which breaks KABI in many places such as jbd2_journal_destroy/jbd2_journal_abort and so on. Fix it by adding a wrapper. Signed-off-by:
yangerkun <yangerkun@huawei.com> Reviewed-by:
Zhang Yi <yi.zhang@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
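The wrapper referred to above follows the usual KABI-preserving pattern; here is a minimal sketch of that pattern, with hypothetical names rather than the actual openEuler code.

/* journal_t keeps its public layout; the new field lives in a private wrapper */
struct journal_wrapper {
	journal_t jw_journal;		/* must remain the first member */
	unsigned long j_atomic_flags;	/* new state, invisible to KABI checkers */
};

static inline struct journal_wrapper *journal_to_wrapper(journal_t *journal)
{
	return container_of(journal, struct journal_wrapper, jw_journal);
}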
-
Qu Wenruo authored
mainline inclusion from mainline-v5.13-rc5 commit 6d4572a9 category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I39MZM CVE: NA ------------------------------------------------------ [BUG] When the data space is exhausted, even if the inode has NOCOW attribute, we will still refuse to truncate unaligned range due to ENOSPC. The following script can reproduce it pretty easily: #!/bin/bash dev=/dev/test/test mnt=/mnt/btrfs umount $dev &> /dev/null umount $mnt &> /dev/null mkfs.btrfs -f $dev -b 1G mount -o nospace_cache $dev $mnt touch $mnt/foobar chattr +C $mnt/foobar xfs_io -f -c "pwrite -b 4k 0 4k" $mnt/foobar > /dev/null xfs_io -f -c "pwrite -b 4k 0 1G" $mnt/padding &> /dev/null sync xfs_io -c "fpunch 0 2k" $mnt/foobar umount $mnt Currently this will fail at the fpunch part. [CAUSE] Because btrfs_truncate_block() always reserves space without checking the NOCOW attribute. Since the writeback path follows NOCOW bit, we only need to bother the space reservation code in btrfs_truncate_block(). [FIX] Make btrfs_truncate_block() follow btrfs_buffered_write() to try to reserve data space first, and fall back to NOCOW check only when we don't have enough space. Such always-try-reserve is an optimization introduced in btrfs_buffered_write(), to avoid expensive btrfs_check_can_nocow() call. This patch will export check_can_nocow() as btrfs_check_can_nocow(), and use it in btrfs_truncate_block() to fix the problem. Reference: https://patchwork.kernel.org/project/linux-btrfs/patch/20200130052822.11765-1-wqu@suse.com Reported-by:
Martin Doucha <martin.doucha@suse.com> Reviewed-by:
Filipe Manana <fdmanana@suse.com> Reviewed-by:
Anand Jain <anand.jain@oracle.com> Signed-off-by:
Qu Wenruo <wqu@suse.com> Reviewed-by:
David Sterba <dsterba@suse.com> Signed-off-by:
David Sterba <dsterba@suse.com> Conflicts: fs/btrfs/file.c fs/btrfs/inode.c Signed-off-by:
Gou Hao <gouhao@uniontech.com> Signed-off-by:
Cheng Jian <cj.chengjian@huawei.com> Reviewed-by:
Jiao Fenfang <jiaofenfang@uniontech.com> Reviewed-by:
Zhang Yi <yi.zhang@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
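A minimal sketch of the reservation order described above, mirroring btrfs_buffered_write(): btrfs_check_can_nocow() is named by the message, while try_reserve_data_space() and the helper itself are assumptions.

static int reserve_truncate_block_space(struct btrfs_inode *inode,
					loff_t pos, size_t len, bool *nocow)
{
	*nocow = false;

	/* cheap path first: plain data reservation */
	if (try_reserve_data_space(inode, pos, len) == 0)
		return 0;

	/* only on ENOSPC pay for the expensive NOCOW check */
	if (btrfs_check_can_nocow(inode, pos, &len) > 0) {
		*nocow = true;	/* the write will go through the NOCOW path */
		return 0;
	}

	return -ENOSPC;
}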
-
ChenXiaoSong authored
hulk inclusion category: bugfix bugzilla: NA CVE: NA ------------------------------------------------- commit a2ff6d97 ("NFSv4.1: Don't rebind to the same source port when reconnecting to the server") adds a new member into struct rpc_xprt, which breaks KABI. This patch fixes it. Signed-off-by:
ChenXiaoSong <chenxiaosong2@huawei.com> Reviewed-by:
Zhang Xiaoxu <zhangxiaoxu5@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
- Jul 08, 2021
-
-
Wang Hai authored
hulk inclusion category: bugfix bugzilla: 172330 CVE: NA -------------------------------- We can construct special USB packets that cause a kernel info leak through the following rndis steps. 1. Construct the packet to make rndis call gen_ndis_set_resp(). In gen_ndis_set_resp(), BufOffset comes from the USB packet and is not checked, so BufOffset can be any value. Therefore, if OID is RNDIS_OID_GEN_CURRENT_PACKET_FILTER, then *params->filter can get data at any address. 2. Construct the packet to make rndis call rndis_query_response(). In rndis_query_response(), if OID is RNDIS_OID_GEN_CURRENT_PACKET_FILTER, then the data of *params->filter is fetched and returned, resulting in an info leak. Therefore, we need to check BufOffset to prevent the info leak. Here, the buf size is USB_COMP_EP0_BUFSIZ; as long as "8 + BufOffset + BufLength" is less than USB_COMP_EP0_BUFSIZ, it is considered legal. Fixes: 1da177e4 ("Linux-2.6.12-rc2") Signed-off-by:
Wang Hai <wanghai38@huawei.com> Reviewed-by:
Wei Yongjun <weiyongjun1@huawei.com> Reviewed-by:
Xiu Jianfeng <xiujianfeng@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
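A minimal sketch of the bounds check described above; the helper is hypothetical, while the "8 + BufOffset + BufLength" base and the USB_COMP_EP0_BUFSIZ limit come from the message.

static bool rndis_set_param_in_bounds(u32 buf_offset, u32 buf_len)
{
	/* widen before adding so a crafted offset/length pair cannot wrap */
	u64 end = 8ULL + buf_offset + buf_len;

	return end < USB_COMP_EP0_BUFSIZ;
}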
-
- Jul 05, 2021
-
-
Kefeng Wang authored
hulk inclusion category: bugfix bugzilla: 172153 CVE: NA ------------------------------------------------- DO_ONCE DEFINE_STATIC_KEY_TRUE(___once_key); __do_once_done once_disable_jump(once_key); INIT_WORK(&w->work, once_deferred); struct once_work *w; w->key = key; schedule_work(&w->work); module unload //*the key is destroyed* process_one_work once_deferred BUG_ON(!static_key_enabled(work->key)); static_key_count((struct static_key *)x) //*access key, crash* When a module uses the DO_ONCE mechanism, it can crash due to the above concurrency problem; it can be reproduced with link [1]. Fix it by taking and putting the module refcount in the once work process. [1] https://lore.kernel.org/netdev/eaa6c371-465e-57eb-6be9-f4b16b9d7cbf@huawei.com/ Cc: Hannes Frederic Sowa <hannes@stressinduktion.org> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: David S. Miller <davem@davemloft.net> Cc: Eric Dumazet <edumazet@google.com> Reported-by:
Minmin chen <chenmingmin@huawei.com> Signed-off-by:
Kefeng Wang <wangkefeng.wang@huawei.com> Reviewed-by:
Xie XiuQi <xiexiuqi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
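A sketch of the fix direction, keyed to the names in lib/once.c but illustrative rather than the exact patch: pin the owning module while the deferred key-disable work is pending.

struct once_work {
	struct work_struct work;
	struct static_key_true *key;
	struct module *module;
};

static void once_deferred(struct work_struct *w)
{
	struct once_work *work = container_of(w, struct once_work, work);

	BUG_ON(!static_key_enabled(work->key));
	static_branch_disable(work->key);
	module_put(work->module);	/* drop the reference taken below */
	kfree(work);
}

static void once_disable_jump(struct static_key_true *key, struct module *mod)
{
	struct once_work *w;

	w = kmalloc(sizeof(*w), GFP_ATOMIC);
	if (!w)
		return;

	INIT_WORK(&w->work, once_deferred);
	w->key = key;
	w->module = mod;
	__module_get(mod);		/* keep the key's owner alive until the work runs */
	schedule_work(&w->work);
}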
-
Zhang Xiaoxu authored
mainline inclusion from mainline-v5.14 commit 5483b904bf336948826594610af4c9bbb0d9e3aa category: bugfix bugzilla: 51898 CVE: NA --------------------------- When finding a task from the wait queue to wake up, a non-privileged task may be found, rather than the privileged one. This may lead to a deadlock, the same as commit dfe1fe75e00e ("NFSv4: Fix deadlock between nfs4_evict_inode() and nfs4_opendata_get_inode()"): the privileged delegreturn task is queued to the privileged list because all the slots are assigned. If there are not enough slots to wake up the non-privileged batch tasks (session with fewer than 8 slots), then the privileged delegreturn task may miss its wakeup, because the task that was found cannot get a slot while the session is draining. So we should treat the privileged task as an emergency task, and execute it as early as we can. Reported-by:
Hulk Robot <hulkci@huawei.com> Fixes: 5fcdfacc ("NFSv4: Return delegations synchronously in evict_inode") Cc: stable@vger.kernel.org Signed-off-by:
Zhang Xiaoxu <zhangxiaoxu5@huawei.com> Signed-off-by:
Trond Myklebust <trond.myklebust@hammerspace.com> Reviewed-by:
Yue Haibing <yuehaibing@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Zhang Xiaoxu authored
mainline inclusion from mainline-v5.14 commit fcb170a9d825d7db4a3fb870b0300f5a40a8d096 category: bugfix bugzilla: 51898 CVE: NA --------------------------- 'queue->nr' will wrap around from 0 to 255 when only the current priority queue has tasks. This may lead to a deadlock, the same as commit dfe1fe75e00e ("NFSv4: Fix deadlock between nfs4_evict_inode() and nfs4_opendata_get_inode()"): the privileged delegreturn task is queued to the privileged list because all the slots are assigned. When a non-privileged task completes and releases its slot, another non-privileged task may be picked out; it may fail to allocate a slot while the session is draining. If 'queue->nr' has wrapped around to 255 and there are not enough slots to service it, then the privileged delegreturn task will miss its wakeup. So we should avoid the wraparound on 'queue->nr'. Reported-by:
Hulk Robot <hulkci@huawei.com> Fixes: 5fcdfacc ("NFSv4: Return delegations synchronously in evict_inode") Fixes: 1da177e4 ("Linux-2.6.12-rc2") Cc: stable@vger.kernel.org Signed-off-by:
Zhang Xiaoxu <zhangxiaoxu5@huawei.com> Signed-off-by:
Trond Myklebust <trond.myklebust@hammerspace.com> Reviewed-by:
Yue Haibing <yuehaibing@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Daniel Borkmann authored
mainline inclusion from mainline-v5.13-rc7 commit 9183671af6dbf60a1219371d4ed73e23f43b49db category: bugfix bugzilla: NA CVE: CVE-2021-33624 -------------------------------- The verifier only enumerates valid control-flow paths and skips paths that are unreachable in the non-speculative domain. And so it can miss issues under speculative execution on mispredicted branches. For example, a type confusion has been demonstrated with the following crafted program: // r0 = pointer to a map array entry // r6 = pointer to readable stack slot // r9 = scalar controlled by attacker 1: r0 = *(u64 *)(r0) // cache miss 2: if r0 != 0x0 goto line 4 3: r6 = r9 4: if r0 != 0x1 goto line 6 5: r9 = *(u8 *)(r6) 6: // leak r9 Since line 3 runs iff r0 == 0 and line 5 runs iff r0 == 1, the verifier concludes that the pointer dereference on line 5 is safe. But: if the attacker trains both the branches to fall-through, such that the following is speculatively executed ... r6 = r9 r9 = *(u8 *)(r6) // leak r9 ... then the program will dereference an attacker-controlled value and could leak its content under speculative execution via side-channel. This requires to mistrain the branch predictor, which can be rather tricky, because the branches are mutually exclusive. However such training can be done at congruent addresses in user space using different branches that are not mutually exclusive. That is, by training branches in user space ... A: if r0 != 0x0 goto line C B: ... C: if r0 != 0x0 goto line D D: ... ... such that addresses A and C collide to the same CPU branch prediction entries in the PHT (pattern history table) as those of the BPF program's lines 2 and 4, respectively. A non-privileged attacker could simply brute force such collisions in the PHT until observing the attack succeeding. Alternative methods to mistrain the branch predictor are also possible that avoid brute forcing the collisions in the PHT. A reliable attack has been demonstrated, for example, using the following crafted program: // r0 = pointer to a [control] map array entry // r7 = *(u64 *)(r0 + 0), training/attack phase // r8 = *(u64 *)(r0 + 8), oob address // [...] // r0 = pointer to a [data] map array entry 1: if r7 == 0x3 goto line 3 2: r8 = r0 // crafted sequence of conditional jumps to separate the conditional // branch in line 193 from the current execution flow 3: if r0 != 0x0 goto line 5 4: if r0 == 0x0 goto exit 5: if r0 != 0x0 goto line 7 6: if r0 == 0x0 goto exit [...] 187: if r0 != 0x0 goto line 189 188: if r0 == 0x0 goto exit // load any slowly-loaded value (due to cache miss in phase 3) ... 189: r3 = *(u64 *)(r0 + 0x1200) // ... and turn it into known zero for verifier, while preserving slowly- // loaded dependency when executing: 190: r3 &= 1 191: r3 &= 2 // speculatively bypassed phase dependency 192: r7 += r3 193: if r7 == 0x3 goto exit 194: r4 = *(u8 *)(r8 + 0) // leak r4 As can be seen, in training phase (phase != 0x3), the condition in line 1 turns into false and therefore r8 with the oob address is overridden with the valid map value address, which in line 194 we can read out without issues. However, in attack phase, line 2 is skipped, and due to the cache miss in line 189 where the map value is (zeroed and later) added to the phase register, the condition in line 193 takes the fall-through path due to prior branch predictor training, where under speculation, it'll load the byte at oob address r8 (unknown scalar type at that point) which could then be leaked via side-channel. 
One way to mitigate these is to 'branch off' an unreachable path, meaning, the current verification path keeps following the is_branch_taken() path and we push the other branch to the verification stack. Given this is unreachable from the non-speculative domain, this branch's vstate is explicitly marked as speculative. This is needed for two reasons: i) if this path is solely seen from speculative execution, then we later on still want the dead code elimination to kick in in order to sanitize these instructions with jmp-1s, and ii) to ensure that paths walked in the non-speculative domain are not pruned from earlier walks of paths walked in the speculative domain. Additionally, for robustness, we mark the registers which have been part of the conditional as unknown in the speculative path given there should be no assumptions made on their content. The fix in here mitigates type confusion attacks described earlier due to i) all code paths in the BPF program being explored and ii) existing verifier logic already ensuring that given memory access instruction references one specific data structure. An alternative to this fix that has also been looked at in this scope was to mark aux->alu_state at the jump instruction with a BPF_JMP_TAKEN state as well as direction encoding (always-goto, always-fallthrough, unknown), such that mixing of different always-* directions themselves as well as mixing of always-* with unknown directions would cause a program rejection by the verifier, e.g. programs with constructs like 'if ([...]) { x = 0; } else { x = 1; }' with subsequent 'if (x == 1) { [...] }'. For unprivileged, this would result in only single direction always-* taken paths, and unknown taken paths being allowed, such that the former could be patched from a conditional jump to an unconditional jump (ja). Compared to this approach here, it would have two downsides: i) valid programs that otherwise are not performing any pointer arithmetic, etc, would potentially be rejected/broken, and ii) we are required to turn off path pruning for unprivileged, where both can be avoided in this work through pushing the invalid branch to the verification stack. The issue was originally discovered by Adam and Ofek, and later independently discovered and reported as a result of Benedict and Piotr's research work. Fixes: b2157399 ("bpf: prevent out-of-bounds speculation") Reported-by:
Adam Morrison <mad@cs.tau.ac.il> Reported-by:
Ofek Kirzner <ofekkir@gmail.com> Reported-by:
Benedict Schlueter <benedict.schlueter@rub.de> Reported-by:
Piotr Krysiuk <piotras@gmail.com> Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Reviewed-by:
John Fastabend <john.fastabend@gmail.com> Reviewed-by:
Benedict Schlueter <benedict.schlueter@rub.de> Reviewed-by:
Piotr Krysiuk <piotras@gmail.com> Acked-by:
Alexei Starovoitov <ast@kernel.org> Conflicts: kernel/bpf/verifier.c [yyl: bypass_spec_v1 is not introduced in kernel-4.19, use allow_ptr_leaks instead] Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Signed-off-by:
He Fengqing <hefengqing@huawei.com> Reviewed-by:
Kuohai Xu <xukuohai@huawei.com> Reviewed-by:
Xiu Jianfeng <xiujianfeng@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Daniel Borkmann authored
mainline inclusion from mainline-v5.13-rc7 commit fe9a5ca7e370e613a9a75a13008a3845ea759d6e category: bugfix bugzilla: NA CVE: CVE-2021-33624 -------------------------------- ... in such circumstances, we do not want to mark the instruction as seen given the goal is still to jmp-1 rewrite/sanitize dead code, if it is not reachable from the non-speculative path verification. We do however want to verify it for safety regardless. With the patch as-is all the insns that have been marked as seen before the patch will also be marked as seen after the patch (just with a potentially different non-zero count). An upcoming patch will also verify paths that are unreachable in the non-speculative domain, hence this extension is needed. Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Reviewed-by:
John Fastabend <john.fastabend@gmail.com> Reviewed-by:
Benedict Schlueter <benedict.schlueter@rub.de> Reviewed-by:
Piotr Krysiuk <piotras@gmail.com> Acked-by:
Alexei Starovoitov <ast@kernel.org> Conflicts: kernel/bpf/verifier.c pass_cnt is not introduced in kernel-4.19. Signed-off-by:
He Fengqing <hefengqing@huawei.com> Reviewed-by:
Kuohai Xu <xukuohai@huawei.com> Reviewed-by:
Xiu Jianfeng <xiujianfeng@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Daniel Borkmann authored
mainline inclusion from mainline-v5.13-rc7 commit d203b0fd863a2261e5d00b97f3d060c4c2a6db71 category: bugfix bugzilla: NA CVE: CVE-2021-33624 -------------------------------- Instead of relying on current env->pass_cnt, use the seen count from the old aux data in adjust_insn_aux_data(), and expand it to the new range of patched instructions. This change is valid given we always expand 1:n with n>=1, so what applies to the old/original instruction needs to apply for the replacement as well. Not relying on env->pass_cnt is a prerequisite for a later change where we want to avoid marking an instruction seen when verified under speculative execution path. Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Reviewed-by:
John Fastabend <john.fastabend@gmail.com> Reviewed-by:
Benedict Schlueter <benedict.schlueter@rub.de> Reviewed-by:
Piotr Krysiuk <piotras@gmail.com> Acked-by:
Alexei Starovoitov <ast@kernel.org> Conflicts: kernel/bpf/verifier.c seen of bpf_insn_aux_data is bool in kernel-4.19. Signed-off-by:
He Fengqing <hefengqing@huawei.com> Reviewed-by:
Kuohai Xu <xukuohai@huawei.com> Reviewed-by:
Xiu Jianfeng <xiujianfeng@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Daniel Borkmann authored
stable inclusion from linux-4.19.193 commit 138b0ec1064c8f154a32297458e562591a94773f -------------------------------- commit d7a5091351756d0ae8e63134313c455624e36a13 upstream Update various selftest error messages: * The 'Rx tried to sub from different maps, paths, or prohibited types' is reworked into more specific/differentiated error messages for better guidance. * The change into 'value -4294967168 makes map_value pointer be out of bounds' is due to moving the mixed bounds check into the speculation handling and thus occurring slightly later than the above-mentioned sanity check. * The change into 'math between map_value pointer and register with unbounded min value' is similarly due to the register sanity check coming before the mixed bounds check. * The case of 'map access: known scalar += value_ptr from different maps' now loads fine given masks are the same from the different paths (despite max map value size being different). Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Reviewed-by:
John Fastabend <john.fastabend@gmail.com> Acked-by:
Alexei Starovoitov <ast@kernel.org> [OP: 4.19 backport, account for split test_verifier and different / missing tests] Signed-off-by:
Ovidiu Panait <ovidiu.panait@windriver.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Daniel Borkmann authored
stable inclusion from linux-4.19.193 commit d1e281d6cb8841122c4677b47fcebdc6f410bd74 -------------------------------- [ no upstream commit ] Switch the comparison, so that is_branch_taken() will recognize that below branch is never taken: [...] 17: [...] R1_w=inv0 [...] R8_w=inv(id=0,smin_value=-2147483648,smax_value=-1,umin_value=18446744071562067968,var_off=(0xffffffff80000000; 0x7fffffff)) [...] 17: (67) r8 <<= 32 18: [...] R8_w=inv(id=0,smax_value=-4294967296,umin_value=9223372036854775808,umax_value=18446744069414584320,var_off=(0x8000000000000000; 0x7fffffff00000000)) [...] 18: (c7) r8 s>>= 32 19: [...] R8_w=inv(id=0,smin_value=-2147483648,smax_value=-1,umin_value=18446744071562067968,var_off=(0xffffffff80000000; 0x7fffffff)) [...] 19: (6d) if r1 s> r8 goto pc+16 [...] R1_w=inv0 [...] R8_w=inv(id=0,smin_value=-2147483648,smax_value=-1,umin_value=18446744071562067968,var_off=(0xffffffff80000000; 0x7fffffff)) [...] [...] Currently we check for is_branch_taken() only if either K is source, or source is a scalar value that is const. For upstream it would be good to extend this properly to check whether dst is const and src not. For the sake of the test_verifier, it is probably not needed here: # ./test_verifier 101 #101/p bpf_get_stack return R0 within range OK Summary: 1 PASSED, 0 SKIPPED, 0 FAILED I haven't seen this issue in test_progs* though, they are passing fine: # ./test_progs-no_alu32 -t get_stack Switching to flavor 'no_alu32' subdirectory... #20 get_stack_raw_tp:OK Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED # ./test_progs -t get_stack #20 get_stack_raw_tp:OK Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Acked-by:
Alexei Starovoitov <ast@kernel.org> Acked-by:
John Fastabend <john.fastabend@gmail.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> [OP: backport to 4.19] Signed-off-by:
Ovidiu Panait <ovidiu.panait@windriver.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
John Fastabend authored
stable inclusion from linux-4.19.193 commit f915e7975fc2d593ddb60b67d14eef314eb6dd08 -------------------------------- commit 9ac26e99 upstream. With current ALU32 subreg handling and retval refine fix from last patches we see an expected failure in test_verifier. With verbose verifier state being printed at each step for clarity we have the following relavent lines [I omit register states that are not necessarily useful to see failure cause], #101/p bpf_get_stack return R0 within range FAIL Failed to load prog 'Success'! [..] 14: (85) call bpf_get_stack#67 R0_w=map_value(id=0,off=0,ks=8,vs=48,imm=0) R3_w=inv48 15: R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff)) 15: (b7) r1 = 0 16: R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff)) R1_w=inv0 16: (bf) r8 = r0 17: R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff)) R1_w=inv0 R8_w=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff)) 17: (67) r8 <<= 32 18: R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff)) R1_w=inv0 R8_w=inv(id=0,smax_value=9223372032559808512, umax_value=18446744069414584320, var_off=(0x0; 0xffffffff00000000), s32_min_value=0, s32_max_value=0, u32_max_value=0, var32_off=(0x0; 0x0)) 18: (c7) r8 s>>= 32 19 R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff)) R1_w=inv0 R8_w=inv(id=0,smin_value=-2147483648, smax_value=2147483647, var32_off=(0x0; 0xffffffff)) 19: (cd) if r1 s< r8 goto pc+16 R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff)) R1_w=inv0 R8_w=inv(id=0,smin_value=-2147483648, smax_value=0, var32_off=(0x0; 0xffffffff)) 20: R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff)) R1_w=inv0 R8_w=inv(id=0,smin_value=-2147483648, smax_value=0, R9=inv48 20: (1f) r9 -= r8 21: (bf) r2 = r7 22: R2_w=map_value(id=0,off=0,ks=8,vs=48,imm=0) 22: (0f) r2 += r8 value -2147483648 makes map_value pointer be out of bounds After call bpf_get_stack() on line 14 and some moves we have at line 16 an r8 bound with max_value 48 but an unknown min value. This is to be expected bpf_get_stack call can only return a max of the input size but is free to return any negative error in the 32-bit register space. The C helper is returning an int so will use lower 32-bits. Lines 17 and 18 clear the top 32 bits with a left/right shift but use ARSH so we still have worst case min bound before line 19 of -2147483648. At this point the signed check 'r1 s< r8' meant to protect the addition on line 22 where dst reg is a map_value pointer may very well return true with a large negative number. Then the final line 22 will detect this as an invalid operation and fail the program. What we want to do is proceed only if r8 is positive non-error. So change 'r1 s< r8' to 'r1 s> r8' so that we jump if r8 is negative. Next we will throw an error because we access past the end of the map value. The map value size is 48 and sizeof(struct test_val) is 48 so we walk off the end of the map value on the second call to get bpf_get_stack(). Fix this by changing sizeof(struct test_val) to 24 by using 'sizeof(struct test_val) / 2'. After this everything passes as expected. Signed-off-by:
John Fastabend <john.fastabend@gmail.com> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/158560426019.10843.3285429543232025187.stgit@john-Precision-5820-Tower Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> [OP: backport to 4.19] Signed-off-by:
Ovidiu Panait <ovidiu.panait@windriver.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Alexei Starovoitov authored
stable inclusion from linux-4.19.193 commit e0b86677fb3e4622b444dcdd8546caa0dba8a689 -------------------------------- commit fb8d251e upstream This patch extends the is_branch_taken() logic from JMP+K instructions to JMP+X instructions. Conditional branches are often done when src and dst registers contain known scalars. In such a case the verifier can follow the branch that is going to be taken when the program executes. That speeds up the verification and is an essential feature to support bounded loops. Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Acked-by:
Andrii Nakryiko <andriin@fb.com> Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> [OP: drop is_jmp32 parameter from is_branch_taken() calls and adjust context] Signed-off-by:
Ovidiu Panait <ovidiu.panait@windriver.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Ovidiu Panait authored
stable inclusion from linux-4.19.193 commit c905bfe767e98a13dd886bf241ba9ee0640a53ff -------------------------------- Backport the missing selftest part of commit 7da6cd690c43 ("bpf: improve verifier branch analysis") in order to fix the following test_verifier failures: ... Unexpected success to load! 0: (b7) r0 = 0 1: (75) if r0 s>= 0x0 goto pc+1 3: (95) exit processed 3 insns (limit 131072), stack depth 0 Unexpected success to load! 0: (b7) r0 = 0 1: (75) if r0 s>= 0x0 goto pc+1 3: (95) exit processed 3 insns (limit 131072), stack depth 0 ... The changesets apply with a minor context difference. Fixes: 7da6cd690c43 ("bpf: improve verifier branch analysis") Signed-off-by:
Ovidiu Panait <ovidiu.panait@windriver.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Andrey Ignatov authored
stable inclusion from linux-4.19.193 commit 737f5f3a633518feae7b2793f4666c67e39bcc5a -------------------------------- commit 6c2afb67 upstream Test the following narrow loads in test_verifier for context __sk_buff: * off=1, size=1 - ok; * off=2, size=1 - ok; * off=3, size=1 - ok; * off=0, size=2 - ok; * off=1, size=2 - fail; * off=2, size=2 - ok; * off=3, size=2 - fail. Signed-off-by:
Andrey Ignatov <rdna@fb.com> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Signed-off-by:
Ovidiu Panait <ovidiu.panait@windriver.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Piotr Krysiuk authored
stable inclusion from linux-4.19.193 commit 1982f436a9a990e338ac4d7ed80a9fb40e0a1885 -------------------------------- commit 0a13e3537ea67452d549a6a80da3776d6b7dedb3 upstream Fix up test_verifier error messages for the case where the original error message changed, or for the case where pointer alu errors differ between privileged and unprivileged tests. Also, add alternative tests for keeping coverage of the original verifier rejection error message (fp alu), and newly reject map_ptr += rX where rX == 0 given we now forbid alu on these types for unprivileged. All test_verifier cases pass after the change. The test case fixups were kept separate to ease backporting of core changes. Signed-off-by:
Piotr Krysiuk <piotras@gmail.com> Co-developed-by:
Daniel Borkmann <daniel@iogearbox.net> Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Acked-by:
Alexei Starovoitov <ast@kernel.org> [OP: backport to 4.19, skipping non-existent tests] Signed-off-by:
Ovidiu Panait <ovidiu.panait@windriver.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Ovidiu Panait authored
stable inclusion from linux-4.19.193 commit b190383c714a379002b00bc8de43371e78d291d8 -------------------------------- After the backport of the changes to fix CVE 2019-7308, the selftests also need to be fixed up, as was done originally in mainline 80c9b2fa ("bpf: add various test cases to selftests"). This is a backport of upstream commit 80c9b2fa ("bpf: add various test cases to selftests") adapted to 4.19 in order to fix the selftests that began to fail after CVE-2019-7308 fixes. Suggested-by:
Frank van der Linden <fllinden@amazon.com> Signed-off-by:
Ovidiu Panait <ovidiu.panait@windriver.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
- Jul 02, 2021
-
-
Chao Leng authored
mainline inclusion from mainline-v5.11-rc5 commit 7674073b2ed35ac951a49c425dec6b39d5a57140 category: bugfix bugzilla: NA CVE: NA Link: https://gitee.com/openeuler/kernel/issues/I1WGZE ------------------------------------------------- A crash happens when injecting a long request completion delay (nearly 30s). Each namespace has a request queue; when completions are delayed that long, multiple request queues may have timed-out requests at the same time, so nvme_rdma_timeout will execute concurrently. Requests in different request queues may be queued on the same rdma queue, so multiple nvme_rdma_timeout calls may invoke nvme_rdma_stop_queue at the same time. The first nvme_rdma_timeout will clear NVME_RDMA_Q_LIVE and continue stopping the rdma queue (draining the qp), but the others see NVME_RDMA_Q_LIVE already cleared and directly complete the requests; completing requests before the qp is fully drained may lead to a use-after-free condition. Add a mutex lock to serialize nvme_rdma_stop_queue. Signed-off-by:
Chao Leng <lengchao@huawei.com> Tested-by:
Israel Rukshin <israelr@nvidia.com> Reviewed-by:
Israel Rukshin <israelr@nvidia.com> Signed-off-by:
Christoph Hellwig <hch@lst.de> conflicts: drivers/nvme/host/rdma.c [lrz: adjust context] Signed-off-by:
Ruozhu Li <liruozhu@huawei.com> Reviewed-by:
Hou Tao <houtao1@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
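A minimal sketch of the serialization described above; the queue_lock field and the __nvme_rdma_stop_queue() split are assumptions about the shape of the fix.

static void nvme_rdma_stop_queue(struct nvme_rdma_queue *queue)
{
	/* serialize concurrent timeouts that target the same rdma queue */
	mutex_lock(&queue->queue_lock);
	if (test_and_clear_bit(NVME_RDMA_Q_LIVE, &queue->flags))
		__nvme_rdma_stop_queue(queue);	/* drains the qp exactly once */
	mutex_unlock(&queue->queue_lock);
}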
-
Eric W. Biederman authored
mainline inclusion from mainline-v5.8-rc1 commit e7f77854 category: bugfix bugzilla: 36868 CVE: NA ----------------------------------------------- In 2016 Linus moved install_exec_creds immediately after setup_new_exec, in binfmt_elf as a cleanup and as part of closing a potential information leak. Perform the same cleanup for the other binary formats. Different binary formats doing the same things the same way makes exec easier to reason about and easier to maintain. Greg Ungerer reports: > I tested the the whole series on non-MMU m68k and non-MMU arm > (exercising binfmt_flat) and it all tested out with no problems, > so for the binfmt_flat changes: Tested-by:
Greg Ungerer <gerg@linux-m68k.org> Ref: 9f834ec1 ("binfmt_elf: switch to new creds when switching to new mm") Reviewed-by:
Kees Cook <keescook@chromium.org> Reviewed-by:
Greg Ungerer <gerg@linux-m68k.org> Signed-off-by: "Eric W. Biede...
-
- Jul 01, 2021
-
-
Pavel Skripkin authored
mainline inclusion from mainline-5.14 commit 618f003199c6188e01472b03cdbba227f1dc5f24 category: bugfix bugzilla: 167360 CVE: NA ------------------------------------------------- static int kthread(void *_create) will return -ENOMEM or -EINTR in case of internal failure or if a kthread_stop() call happens before the threadfn call. To avoid fancy error checking and make the code more straightforward, we moved all cleanup code out of the kmmpd threadfn. Also, struct mmpd_data is dropped altogether. Now struct super_block is the threadfn data and the struct buffer_head is embedded into struct ext4_sb_info. Reported-by:
<syzbot+d9e482e303930fa4f6ff@syzkaller.appspotmail.com> Signed-off-by:
Pavel Skripkin <paskripkin@gmail.com> Link: https://lore.kernel.org/r/20210430185046.15742-1-paskripkin@gmail.com Signed-off-by:
Theodore Ts'o <tytso@mit.edu> Conflicts: fs/ext4/ext4.h fs/ext4/super.c Signed-off-by:
Baokun Li <libaokun1@huawei.com> Reviewed-by:
Zhang Yi <yi.zhang@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Xi Wang authored
driver inclusion category: bugfix bugzilla: NA CVE: NA Wrap the common logic into 3 functions: hns_roce_mtr_create(), hns_roce_mtr_destroy() and hns_roce_mtr_map() to support hopnum values ranging from 0 to 3. In addition, make the mtr interfaces easier to use. Signed-off-by:
Xi Wang <wangxi11@huawei.com> Signed-off-by:
Shunfeng Yang <yangshunfeng2@huawei.com> Reviewed-by:
chunzhi hu <huchunzhi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Xi Wang authored
driver inclusion category: bugfix bugzilla: NA CVE: NA When the value of nbufs is 1, the buffer is in direct mode, which may cause confusion. So optimize the current code to make it easier to maintain. Signed-off-by:
Xi Wang <wangxi11@huawei.com> Signed-off-by:
Shunfeng Yang <yangshunfeng2@huawei.com> Reviewed-by:
chunzhi hu <huchunzhi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Lang Cheng authored
mainline inclusion from mainline-v5.7 commit 026ded37 category: bugfix bugzilla: NA CVE: NA The depth of a qp shouldn't be allowed to be set to zero; after ensuring that, the subsequent process can be simplified. Also, when a qp is changed from reset to reset, the capability of minimum qp depth was used to identify hip06 hardware; it should be changed into a more readable form. Link: https://lore.kernel.org/r/1584006624-11846-1-git-send-email-liweihang@huawei.com Signed-off-by:
Lang Cheng <chenglang@huawei.com> Signed-off-by:
Weihang Li <liweihang@huawei.com> Signed-off-by:
Jason Gunthorpe <jgg@mellanox.com> Signed-off-by:
Shunfeng Yang <yangshunfeng2@huawei.com> Reviewed-by:
chunzhi hu <huchunzhi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Xi Wang authored
mainline inclusion from mainline-v5.7 commit ae85bf92 category: bugfix bugzilla: NA CVE: NA Encapsulate the qp param setup related code into set_qp_param(). Link: https://lore.kernel.org/r/1582526258-13825-6-git-send-email-liweihang@huawei.com Signed-off-by:
Xi Wang <wangxi11@huawei.com> Signed-off-by:
Weihang Li <liweihang@huawei.com> Signed-off-by:
Jason Gunthorpe <jgg@mellanox.com> Signed-off-by:
Shunfeng Yang <yangshunfeng2@huawei.com> Reviewed-by:
chunzhi hu <huchunzhi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Xi Wang authored
mainline inclusion from mainline-v5.7 commit 24c22112 category: bugfix bugzilla: NA CVE: NA Encapsulate qp buffer allocation related code into 3 functions: alloc_qp_buf(), map_wqe_buf() and free_qp_buf(). Link: https://lore.kernel.org/r/1582526258-13825-5-git-send-email-liweihang@huawei.com Signed-off-by:
Xi Wang <wangxi11@huawei.com> Signed-off-by:
Weihang Li <liweihang@huawei.com> Signed-off-by:
Jason Gunthorpe <jgg@mellanox.com> Signed-off-by:
Shunfeng Yang <yangshunfeng2@huawei.com> Reviewed-by:
chunzhi hu <huchunzhi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Xi Wang authored
mainline inclusion from mainline-v5.7 commit e365b26c category: bugfix bugzilla: NA CVE: NA Wrap the duplicate code in the hip08 and hip06 qp destruction process as hns_roce_qp_destroy() to simplify the qp destroy flow. Link: https://lore.kernel.org/r/1582526258-13825-2-git-send-email-liweihang@huawei.com Signed-off-by:
Xi Wang <wangxi11@huawei.com> Signed-off-by:
Weihang Li <liweihang@huawei.com> Signed-off-by:
Jason Gunthorpe <jgg@mellanox.com> Signed-off-by:
Shunfeng Yang <yangshunfeng2@huawei.com> Reviewed-by:
chunzhi hu <huchunzhi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Leon Romanovsky authored
mainline inclusion from mainline-v5.2 commit 57425822 category: bugfix bugzilla: NA CVE: NA Verbs destroy callbacks are synchronous operations and can't be delayed. The expectation is that after driver returned from destroy function, the memory can be freed and user won't be able to access it again. Ditch workqueue implementation used in HNS driver. Fixes: d838c481 ("IB/hns: Fix the bug when destroy qp") Signed-off-by:
Leon Romanovsky <leonro@mellanox.com> Acked-by:
oulijun <oulijun@huawei.com> Signed-off-by:
Jason Gunthorpe <jgg@mellanox.com> Signed-off-by:
Shunfeng Yang <yangshunfeng2@huawei.com> Reviewed-by:
chunzhi hu <huchunzhi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Lijun Ou authored
mainline inclusion from mainline-v5.6 commit 468d020e category: bugfix bugzilla: NA CVE: NA The driver should first check whether the sge is valid, then fill the valid sge and the calculated total into hardware; otherwise invalid sges will cause an error. Fixes: 52e3b42a ("RDMA/hns: Filter for zero length of sge in hip08 kernel mode") Fixes: 7bdee415 ("RDMA/hns: Fill sq wqe context of ud type in hip08") Link: https://lore.kernel.org/r/1578571852-13704-1-git-send-email-liweihang@huawei.com Signed-off-by:
Lijun Ou <oulijun@huawei.com> Signed-off-by:
Weihang Li <liweihang@huawei.com> Signed-off-by:
Jason Gunthorpe <jgg@mellanox.com> Signed-off-by:
Shunfeng Yang <yangshunfeng2@huawei.com> Reviewed-by:
chunzhi hu <huchunzhi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Yixian Liu authored
mainline inclusion from mainline-v5.5 commit ec6adad0 category: bugfix bugzilla: NA CVE: NA There is no need to define max_post in hns_roce_wq, as it does the same thing as wqe_cnt. Link: https://lore.kernel.org/r/1572952082-6681-2-git-send-email-liweihang@hisilicon.com Signed-off-by:
Yixian Liu <liuyixian@huawei.com> Signed-off-by:
Weihang Li <liweihang@hisilicon.com> Signed-off-by:
Jason Gunthorpe <jgg@mellanox.com> Signed-off-by:
Shunfeng Yang <yangshunfeng2@huawei.com> Reviewed-by:
chunzhi hu <huchunzhi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Xi Wang authored
mainline inclusion from mainline-v5.5 commit 99441ab5 category: bugfix bugzilla: NA CVE: NA Currently, more than 20 lines of duplicate code exist in function 'modify_qp_init_to_init' and function 'modify_qp_reset_to_init', which affects the readability of the code. Consolidate them. Link: https://lore.kernel.org/r/1562593285-8037-6-git-send-email-oulijun@huawei.com Signed-off-by:
Xi Wang <wangxi11@huawei.com> Signed-off-by:
Jason Gunthorpe <jgg@mellanox.com> Signed-off-by:
Shunfeng Yang <yangshunfeng2@huawei.com> Reviewed-by:
chunzhi hu <huchunzhi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Jason Gunthorpe authored
mainline inclusion from mainline-v5.5 commit 515f6000 category: bugfix bugzilla: NA CVE: NA The "ucmd->log_sq_bb_count" variable is a user controlled variable in the 0-255 range. If we shift more than the number of bits in an int then it's undefined behavior (it shift wraps), and potentially the int could become negative. Fixes: 9a443537 ("IB/hns: Add driver files for hns RoCE driver") Link: https://lore.kernel.org/r/20190608092514.GC28890@mwanda Reported-by:
Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by:
Jason Gunthorpe <jgg@mellanox.com> Reviewed-by:
Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by:
Shunfeng Yang <yangshunfeng2@huawei.com> Reviewed-by:
chunzhi hu <huchunzhi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
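A minimal sketch of the kind of guard the message calls for (illustrative, not the upstream diff): validate the user-supplied shift count before computing the queue size.

static int check_sq_size(struct hns_roce_dev *hr_dev, u8 log_sq_bb_count)
{
	/* shifting an int by >= 32 bits is undefined behaviour */
	if (log_sq_bb_count >= 32)
		return -EINVAL;

	if ((1u << log_sq_bb_count) > hr_dev->caps.max_wqes)
		return -EINVAL;

	return 0;
}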
-
Xi Wang authored
driver inclusion category: bugfix bugzilla: NA CVE: NA This helper does the same as rdma_for_each_block(), except it works on a umem. This simplifies most of the call sites. Link: https://lore.kernel.org/r/4-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com Acked-by:
Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> Acked-by:
Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by:
Jason Gunthorpe <jgg@nvidia.com> Signed-off-by:
Xi Wang <wangxi11@huawei.com> Signed-off-by:
Shunfeng Yang <yangshunfeng2@huawei.com> Reviewed-by:
chunzhi hu <huchunzhi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
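A short usage sketch of the helper described above, assuming the upstream rdma_umem_for_each_dma_block()/rdma_block_iter_dma_address() interface; the collecting function itself is hypothetical.

static unsigned int collect_umem_blocks(struct ib_umem *umem, u64 *pages,
					unsigned long page_size)
{
	struct ib_block_iter biter;
	unsigned int n = 0;

	/* walk the DMA-mapped SGL in HW-supported, page_size-aligned blocks */
	rdma_umem_for_each_dma_block(umem, &biter, page_size)
		pages[n++] = rdma_block_iter_dma_address(&biter);

	return n;
}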
-
Xi Wang authored
driver inclusion category: bugfix bugzilla: NA CVE: NA This helper iterates over a DMA-mapped SGL and returns contiguous memory blocks aligned to a HW supported page size. Suggested-by:
Jason Gunthorpe <jgg@ziepe.ca> Signed-off-by:
Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by:
Jason Gunthorpe <jgg@mellanox.com> Signed-off-by:
Xi Wang <wangxi11@huawei.com> Signed-off-by:
Shunfeng Yang <yangshunfeng2@huawei.com> Reviewed-by:
chunzhi hu <huchunzhi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-