- Apr 22, 2022
Josh Poimboeuf authored
stable inclusion from stable-v4.19.238 commit ac069a960d357a7495fd2a1cde28d9b03250cace
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I53TS3
CVE: NA

--------------------------------

commit 1fb14363 upstream.

In swapgs_restore_regs_and_return_to_usermode, after the stack is switched to the trampoline stack, the existing UNWIND_HINT_REGS hint is no longer valid, which can result in the following ORC unwinder warning:

    WARNING: can't dereference registers at 000000003aeb0cdd for ip swapgs_restore_regs_and_return_to_usermode+0x93/0xa0

For full correctness, we could try to add complicated unwind hints so the unwinder could continue to find the registers, but when it's this close to kernel exit, unwind hints aren't really needed anymore and it's fine to just use an empty hint which tells the unwinder to stop. For consistency, also move the UNWIND_HINT_EMPTY in entry_SYSCALL_64_after_hwframe to a similar location.

Fixes: 3e3b9293 ("x86/entry/64: Return to userspace from the trampoline stack")
Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Reported-by: Dave Jones <dsj@fb.com>
Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reported-by: Joe Mario <jmario@redhat.com>
Reported-by: Jann Horn <jannh@google.com>
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/60ea8f562987ed2d9ace2977502fe481c0d7c9a0.1587808742.git.jpoimboe@redhat.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Zou Yipeng <zouyipeng@huawei.com>
Reviewed-by: Zhang Jianhua <chris.zjh@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
- Apr 21, 2022
Haimin Zhang authored
stable inclusion from stable-v4.19.238 commit 693fe8af9a2625139de07bd1ae212a7d89c37795
category: bugfix
bugzilla: 186606, https://gitee.com/src-openeuler/kernel/issues/I53SSV
CVE: CVE-2022-1353

--------------------------------

[ Upstream commit 9a564bccb78a76740ea9d75a259942df8143d02c ]

Add the __GFP_ZERO flag for compose_sadb_supported in function pfkey_register to initialize the buffer of supp_skb and fix a kernel-info-leak issue:
1) Function pfkey_register calls compose_sadb_supported to request a sk_buff.
2) compose_sadb_supported calls alloc_skb to allocate a sk_buff, but it doesn't zero it.
3) If auth_len is greater than 0, compose_sadb_supported treats the memory as a struct sadb_supported and begins to initialize it. But it only initializes the fields sadb_supported_len and sadb_supported_exttype, leaving sadb_supported_reserved untouched.

Reported-by: TCS Robot <tcs_robot@tencent.com>
Signed-off-by: Haimin Zhang <tcs_kernel@tencent.com>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Xu Jia <xujia39@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Reviewed-by: Wang Weiyang <wangweiyang2@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
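
As an illustration, a minimal sketch assuming the allocation ends up in alloc_skb() as the commit text describes; the actual one-line fix simply adds __GFP_ZERO to the allocation flags that pfkey_register passes down, and the function name below is hypothetical:

    #include <linux/gfp.h>
    #include <linux/skbuff.h>

    /* Hedged sketch, not the exact hunk from net/key/af_key.c. */
    static struct sk_buff *pfkey_supported_skb_sketch(unsigned int size)
    {
        /* __GFP_ZERO makes alloc_skb() hand back zeroed data, so fields the
         * caller never writes (such as sadb_supported_reserved) read back as
         * zero instead of stale kernel memory later copied to userspace. */
        return alloc_skb(size, GFP_KERNEL | __GFP_ZERO);
    }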
-
- Apr 20, 2022
James Morse authored
stable inclusion from stable-v4.19.236 commit ed5dec3fae86f20db52930e1d9a7cc38403994cc
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit 228a26b912287934789023b4132ba76065d9491c upstream.

Future CPUs may implement a clearbhb instruction that is sufficient to mitigate Spectre-BHB. CPUs that implement this instruction, but not CSV2.3, must be affected by Spectre-BHB. Add support to use this instruction as the BHB mitigation on CPUs that support it. The instruction is in the hint space, so it will be treated as a NOP by older CPUs.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
[ modified for stable: Use a KVM vector template instead of alternatives ]
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
Joey Gouly authored
stable inclusion from stable-v4.19.236 commit a44e7ddb5822b943cd50c5ad6a2541fb445d58bd
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit 9e45365f1469ef2b934f9d035975dbc9ad352116 upstream.

This is a new ID register, introduced in 8.7.

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Alexandru Elisei <alexandru.elisei@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Reiji Watanabe <reijiw@google.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211210165432.8106-3-joey.gouly@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion from stable-v4.19.236 commit 5f051d32b03f08a0507ac1afd7b9c0a30c8e5d59
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit a5905d6af492ee6a4a2205f0d550b3f931b03d03 upstream.

KVM allows the guest to discover whether the ARCH_WORKAROUND SMCCC are implemented, and to preserve that state during migration through its firmware register interface. Add the necessary boilerplate for SMCCC_ARCH_WORKAROUND_3.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
[ kvm code moved to virt/kvm/arm, removed fw regs ABI. Added 32bit stub ]
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Conflicts:
    arch/arm/include/asm/kvm_host.h
    arch/arm64/include/asm/kvm_host.h
    virt/kvm/arm/psci.c
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion from stable-v4.19.236 commit c20d551744797000c4af993f7d59ef8c69732949
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit 558c303c9734af5a813739cd284879227f7297d2 upstream.

Speculation attacks against some high-performance processors can make use of branch history to influence future speculation. When taking an exception from user-space, a sequence of branches or a firmware call overwrites or invalidates the branch history.

The sequence of branches is added to the vectors, and should appear before the first indirect branch. For systems using KPTI the sequence is added to the kpti trampoline where it has a free register as the exit from the trampoline is via a 'ret'. For systems not using KPTI, the same register tricks are used to free up a register in the vectors.

For the firmware call, arch-workaround-3 clobbers 4 registers, so there is no choice but to save them to the EL1 stack. This only happens for entry from EL0, so if we take an exception due to the stack access, it will not become re-entrant.

For KVM, the existing branch-predictor-hardening vectors are used. When a spectre version of these vectors is in use, the firmware call is sufficient to mitigate against Spectre-BHB. For the non-spectre versions, the sequence of branches is added to the indirect vector.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@kernel.org> # <v5.17.x 72bb9dcb6c33c arm64: Add Cortex-X2 CPU part definition
Cc: <stable@kernel.org> # <v5.16.x 2d0d656700d67 arm64: Add Neoverse-N2, Cortex-A710 CPU part definition
Cc: <stable@kernel.org> # <v5.10.x 8a6b88e6 arm64: Add part number for Arm Cortex-A77
[ modified for stable, moved code to cpu_errata.c removed bitmap of mitigations, use kvm template infrastructure ]
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Conflicts:
    arch/arm64/Kconfig
    arch/arm64/include/asm/cpufeature.h
    arch/arm64/include/asm/cputype.h
    arch/arm64/kernel/cpu_errata.c
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion from stable-v4.19.236 commit a68912a3ae3413be5febcaa40e7e0ec1fd62adee
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

KVM writes the Spectre-v2 mitigation template at the beginning of each vector when a CPU requires a specific sequence to run. Because the template is copied, it cannot be modified by the alternatives at runtime.

As the KVM template code is intertwined with the bp-hardening callbacks, all templates must have a bp-hardening callback. Add templates for calling ARCH_WORKAROUND_3 and one for each value of K in the branchy loop. Identify these sequences by a new parameter template_start, and add a copy of install_bp_hardening_cb() that is able to install them.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Conflicts:
    arch/arm64/include/asm/cpucaps.h
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion from stable-v4.19.236 commit 7b012f6597e55a2ea4c7efe94b5d9a792b6e5757
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit dee435be76f4117410bbd90573a881fd33488f37 upstream.

Speculation attacks against some high-performance processors can make use of branch history to influence future speculation as part of a spectre-v2 attack. This is not mitigated by CSV2, meaning CPUs that previously reported 'Not affected' are now moderately mitigated by CSV2. Update the value in /sys/devices/system/cpu/vulnerabilities/spectre_v2 to also show the state of the BHB mitigation.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
[ code move to cpu_errata.c for backport ]
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Conflicts:
    arch/arm64/include/asm/cpufeature.h
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion from stable-v4.19.236 commit 5b5ca2608fbd6f250281b6a1d0d73613f250e6f1
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit bd09128d16fac3c34b80bd6a29088ac632e8ce09 upstream.

The Spectre-BHB workaround adds a firmware call to the vectors. This is needed on some CPUs, but not others. To avoid the unaffected CPU in a big/little pair from making the firmware call, create per cpu vectors.

The per-cpu vectors only apply when returning from EL0. Systems using KPTI can use the canonical 'full-fat' vectors directly at EL1, the trampoline exit code will switch to this_cpu_vector on exit to EL0. Systems not using KPTI should always use this_cpu_vector. this_cpu_vector will point at a vector in tramp_vecs or __bp_harden_el1_vectors, depending on whether KPTI is in use.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Conflicts:
    arch/arm64/kernel/cpufeature.c
    arch/arm64/kvm/hyp/switch.c
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion from stable-v4.19.236 commit e18876b523d5f5fd8b8f34721f60a470caf20aa1
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit b28a8eebe81c186fdb1a0078263b30576c8e1f42 upstream.

The trampoline code needs to use the address of symbols in the wider kernel, e.g. vectors. PC-relative addressing wouldn't work as the trampoline code doesn't run at the address the linker expected.

tramp_ventry uses a literal pool, unless CONFIG_RANDOMIZE_BASE is set, in which case it uses the data page as a literal pool because the data page can be unmapped when running in user-space, which is required for CPUs vulnerable to meltdown.

Pull this logic out as a macro, instead of adding a third copy of it.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion from stable-v4.19.236 commit 91429ed04ebe9dbec88f97c6fd136b722bc3f3c5
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit ba2689234be92024e5635d30fe744f4853ad97db upstream.

Some CPUs affected by Spectre-BHB need a sequence of branches, or a firmware call to be run before any indirect branch. This needs to go in the vectors. No CPU needs both.

While this can be patched in, it would run on all CPUs as there is a single set of vectors. If only one part of a big/little combination is affected, the unaffected CPUs have to run the mitigation too.

Create extra vectors that include the sequence. Subsequent patches will allow affected CPUs to select this set of vectors. Later patches will modify the loop count to match what the CPU requires.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion from stable-v4.19.236 commit 901c0a20aa94d09a9328899e2dd69a8d43a3a920
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit aff65393fa1401e034656e349abd655cfe272de0 upstream.

kpti is an optional feature, for systems not using kpti a set of vectors for the spectre-bhb mitigations is needed. Add another set of vectors, __bp_harden_el1_vectors, that will be used if a mitigation is needed and kpti is not in use.

The EL1 ventries are repeated verbatim as there is no additional work needed for entry from EL1.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion from stable-v4.19.236 commit 22fdfcf1c2cea8e6dc383d46cbbe59d476d24a96
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit a9c406e6462ff14956d690de7bbe5131a5677dc9 upstream.

Adding a second set of vectors to .entry.tramp.text will make it larger than a single 4K page. Allow the trampoline text to occupy up to three pages by adding two more fixmap slots. Previous changes to tramp_valias allowed it to reach beyond a single page.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion from stable-v4.19.236 commit 9e056623dfc538909ed2a914f70a66d68ec71ec3
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit c47e4d04ba0f1ea17353d85d45f611277507e07a upstream.

Spectre-BHB needs to add sequences to the vectors. Having one global set of vectors is a problem for big/little systems where the sequence is costly on cpus that are not vulnerable.

Making the vectors per-cpu in the style of KVM's bh_harden_hyp_vecs requires the vectors to be generated by macros. Make the kpti re-mapping of the kernel optional, so the macros can be used without kpti.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion from stable-v4.19.236 commit f689fa53bb944873f75fe1584f446cae1aabd2c1
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit 13d7a08352a83ef2252aeb464a5e08dfc06b5dfd upstream.

The macros for building the kpti trampoline are all behind CONFIG_UNMAP_KERNEL_AT_EL0, and in a region that outputs to the .entry.tramp.text section. Move the macros out so they can be used to generate other kinds of trampoline. Only the symbols need to be guarded by CONFIG_UNMAP_KERNEL_AT_EL0 and appear in the .entry.tramp.text section.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion from stable-v4.19.236 commit af484e69b5e83095609d8b5c8abaf13a5460229e
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit ed50da7764535f1e24432ded289974f2bf2b0c5a upstream.

The tramp_ventry macro uses tramp_vectors as the address of the vectors when calculating which ventry in the 'full fat' vectors to branch to. While there is one set of tramp_vectors, this will be true. Adding multiple sets of vectors will break this assumption.

Move the generation of the vectors to a macro, and pass the start of the vectors as an argument to tramp_ventry.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion from stable-v4.19.236 commit ebcdd80d0016c7445e8395cec99b9ce266a26001
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit 6c5bf79b69f911560fbf82214c0971af6e58e682 upstream.

Systems using kpti enter and exit the kernel through a trampoline mapping that is always mapped, even when the kernel is not. tramp_valias is a macro to find the address of a symbol in the trampoline mapping.

Adding extra sets of vectors will expand the size of the entry.tramp.text section to beyond 4K. tramp_valias will be unable to generate addresses for symbols beyond 4K as it uses the 12 bit immediate of the add instruction.

As there are now two registers available when tramp_alias is called, use the extra register to avoid the 4K limit of the 12 bit immediate.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion from stable-v4.19.236 commit 266b1ef1368e06ac4c5a89eb9774ef2bbaa54e19
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit c091fb6ae059cda563b2a4d93fdbc548ef34e1d6 upstream.

The trampoline code has a data page that holds the address of the vectors, which is unmapped when running in user-space. This ensures that with CONFIG_RANDOMIZE_BASE, the randomised address of the kernel can't be discovered until after the kernel has been mapped.

If the trampoline text page is extended to include multiple sets of vectors, it will be larger than a single page, making it tricky to find the data page without knowing the size of the trampoline text pages, which will vary with PAGE_SIZE.

Move the data page to appear before the text page. This allows the data page to be found without knowing the size of the trampoline text pages. 'tramp_vectors' is used to refer to the beginning of the .entry.tramp.text section, do that explicitly.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion from stable-v4.19.236 commit 51acb81130d1feee7fd043760b75f5377ab8d4f0
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit 03aff3a77a58b5b52a77e00537a42090ad57b80b upstream.

Kpti stashes x30 in far_el1 while it uses x30 for all its work.

Making the vectors a per-cpu data structure will require a second register.

Allow tramp_exit two registers before it unmaps the kernel, by leaving x30 on the stack, and stashing x29 in far_el1.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion from stable-v4.19.236 commit 87eccd56c52fcdd6c55b048d789da5c9c2e51ed3
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit d739da1694a0eaef0358a42b76904b611539b77b upstream.

Subsequent patches will add additional sets of vectors that use the same tricks as the kpti vectors to reach the full-fat vectors. The full-fat vectors contain some cleanup for kpti that is patched in by alternatives when kpti is in use. Once there are additional vectors, the cleanup will be needed in more cases.

But on big/little systems, the cleanup would be harmful if no trampoline vector were in use. Instead of forcing CPUs that don't need a trampoline vector to use one, make the trampoline cleanup optional.

Entry at the top of the vectors will skip the cleanup. The trampoline vectors can then skip the first instruction, triggering the cleanup to run.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion from stable-v4.19.236 commit e8bfe29afc09ac77b347540a0f4c789e6530a436
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit 4330e2c5c04c27bebf89d34e0bc14e6943413067 upstream.

Subsequent patches add even more code to the ventry slots. Ensure kernels that overflow a ventry slot don't get built.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
Josh Poimboeuf authored
stable inclusion from stable-v4.19.234 commit 9711b12a3f4c0fc73dd257c1e467e6e42155a5f1
category: bugfix
bugzilla: 186453, https://gitee.com/src-openeuler/kernel/issues/I50WBM
CVE: CVE-2022-0001

--------------------------------

commit 0de05d056afdb00eca8c7bbb0c79a3438daf700c upstream.

The commit 44a3918c8245 ("x86/speculation: Include unprivileged eBPF status in Spectre v2 mitigation reporting") added a warning for the "eIBRS + unprivileged eBPF" combination, which has been shown to be vulnerable against Spectre v2 BHB-based attacks.

However, there's no warning about the "eIBRS + LFENCE retpoline + unprivileged eBPF" combo. The LFENCE adds more protection by shortening the speculation window after a mispredicted branch. That makes an attack significantly more difficult, even with unprivileged eBPF. So at least for now the logic doesn't warn about that combination.

But if you then add SMT into the mix, the SMT attack angle weakens the effectiveness of the LFENCE considerably. So extend the "eIBRS + unprivileged eBPF" warning to also include the "eIBRS + LFENCE + unprivileged eBPF + SMT" case.

[ bp: Massage commit message. ]

Suggested-by: Alyssa Milburn <alyssa.milburn@linux.intel.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
Josh Poimboeuf authored
stable inclusion from stable-v4.19.234 commit 8bfdba77595aee5c3e83ed1c9994c35d6d409605
category: bugfix
bugzilla: 186453, https://gitee.com/src-openeuler/kernel/issues/I50WBM
CVE: CVE-2022-0001

--------------------------------

commit eafd987d4a82c7bb5aa12f0e3b4f8f3dea93e678 upstream.

With: f8a66d608a3e ("x86,bugs: Unconditionally allow spectre_v2=retpoline,amd") it became possible to enable the LFENCE "retpoline" on Intel. However, Intel doesn't recommend it, as it has some weaknesses compared to retpoline.

Now AMD doesn't recommend it either.

It can still be left available as a cmdline option. It's faster than retpoline but is weaker in certain scenarios -- particularly SMT, but even non-SMT may be vulnerable in some cases.

So just unconditionally warn if the user requests it on the cmdline.

[ bp: Massage commit message. ]

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
Kim Phillips authored
stable inclusion from stable-v4.19.234 commit c034d344e733a3ac574dd09e39e911a50025c607
category: bugfix
bugzilla: 186453, https://gitee.com/src-openeuler/kernel/issues/I50WBM
CVE: CVE-2022-0001

--------------------------------

commit e9b6013a7ce31535b04b02ba99babefe8a8599fa upstream.

Update the link to the "Software Techniques for Managing Speculation on AMD Processors" whitepaper.

Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
Kim Phillips authored
stable inclusion from stable-v4.19.234 commit d3cb3a6927222268a10b2f12dfb8c9444f7cc39e
category: bugfix
bugzilla: 186453, https://gitee.com/src-openeuler/kernel/issues/I50WBM
CVE: CVE-2022-0001

--------------------------------

commit 244d00b5dd4755f8df892c86cab35fb2cfd4f14b upstream.

AMD retpoline may be susceptible to speculation. The speculation execution window for an incorrect indirect branch prediction using LFENCE/JMP sequence may potentially be large enough to allow exploitation using Spectre V2.

By default, don't use retpoline,lfence on AMD. Instead, use the generic retpoline.

Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
Josh Poimboeuf authored
stable inclusion from stable-v4.19.234 commit 995629e1d8e6751936c6e2b738f70b392b0461de
category: bugfix
bugzilla: 186453, https://gitee.com/src-openeuler/kernel/issues/I50WBM
CVE: CVE-2022-0001

--------------------------------

commit 44a3918c8245ab10c6c9719dd12e7a8d291980d8 upstream.

With unprivileged eBPF enabled, eIBRS (without retpoline) is vulnerable to Spectre v2 BHB-based attacks. When both are enabled, print a warning message and report it in the 'spectre_v2' sysfs vulnerabilities file.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
[fllinden@amazon.com: backported to 4.19]
Signed-off-by: Frank van der Linden <fllinden@amazon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Conflicts:
    kernel/sysctl.c
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
Peter Zijlstra authored
stable inclusion from stable-v4.19.234 commit 7af95ef3ec6248696300fce5c68f6c8c4f50e4a4
category: bugfix
bugzilla: 186453, https://gitee.com/src-openeuler/kernel/issues/I50WBM
CVE: CVE-2022-0001

--------------------------------

commit 5ad3eb1132453b9795ce5fd4572b1c18b292cca9 upstream.

Update the doc with the new fun.

[ bp: Massage commit message. ]

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
[fllinden@amazon.com: backported to 4.19]
Signed-off-by: Frank van der Linden <fllinden@amazon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
Peter Zijlstra authored
stable inclusion from stable-v4.19.234 commit 3f66bedb96ff4c064a819e68499f79b38297ba26
category: bugfix
bugzilla: 186453, https://gitee.com/src-openeuler/kernel/issues/I50WBM
CVE: CVE-2022-0001

--------------------------------

commit 1e19da8522c81bf46b335f84137165741e0d82b7 upstream.

Thanks to the chaps at VUsec it is now clear that eIBRS is not sufficient, therefore allow enabling of retpolines along with eIBRS.

Add spectre_v2=eibrs, spectre_v2=eibrs,lfence and spectre_v2=eibrs,retpoline options to explicitly pick your preferred means of mitigation.

Since there's new mitigations there's also user visible changes in /sys/devices/system/cpu/vulnerabilities/spectre_v2 to reflect these new mitigations.

[ bp: Massage commit message, trim error messages, do more precise eIBRS mode checking. ]

Co-developed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Patrick Colp <patrick.colp@oracle.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
[fllinden@amazon.com: backported to 4.19 (no Hygon)]
Signed-off-by: Frank van der Linden <fllinden@amazon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Conflicts:
    arch/x86/kernel/cpu/bugs.c
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
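
As a usage illustration (assumed bootloader syntax, annotations added here and not part of the original commit), the new modes are selected on the kernel command line and then reflected in /sys/devices/system/cpu/vulnerabilities/spectre_v2:

    spectre_v2=eibrs              # eIBRS only
    spectre_v2=eibrs,lfence       # eIBRS plus the LFENCE retpoline
    spectre_v2=eibrs,retpoline    # eIBRS plus the full retpoline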
-
Peter Zijlstra (Intel) authored
stable inclusion from stable-v4.19.234 commit 25440a8c77dd2fde6a8e9cfc0c616916febf408e
category: bugfix
bugzilla: 186453, https://gitee.com/src-openeuler/kernel/issues/I50WBM
CVE: CVE-2022-0001

--------------------------------

commit d45476d9832409371537013ebdd8dc1a7781f97a upstream.

The RETPOLINE_AMD name is unfortunate since it isn't necessarily AMD only, in fact Hygon also uses it. Furthermore it will likely be sufficient for some Intel processors. Therefore rename the thing to RETPOLINE_LFENCE to better describe what it is.

Add the spectre_v2=retpoline,lfence option as an alias to spectre_v2=retpoline,amd to preserve existing setups. However, the output of /sys/devices/system/cpu/vulnerabilities/spectre_v2 will be changed.

[ bp: Fix typos, massage. ]

Co-developed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
[fllinden@amazon.com: backported to 4.19]
Signed-off-by: Frank van der Linden <fllinden@amazon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Conflicts:
    arch/x86/kernel/cpu/bugs.c
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
Peter Zijlstra authored
stable inclusion from stable-v4.19.234 commit 1dd7efcfcccee59c974f56848739d4f5a5a570c7
category: bugfix
bugzilla: 186453, https://gitee.com/src-openeuler/kernel/issues/I50WBM
CVE: CVE-2022-0001

--------------------------------

commit f8a66d608a3e471e1202778c2a36cbdc96bae73b upstream.

Currently Linux prevents usage of retpoline,amd on !AMD hardware, this is unfriendly and gets in the way of testing. Remove this restriction.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Borislav Petkov <bp@suse.de>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Tested-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/r/20211026120310.487348118@infradead.org
[fllinden@amazon.com: backported to 4.19 (no Hygon in 4.19)]
Signed-off-by: Frank van der Linden <fllinden@amazon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Conflicts:
    arch/x86/kernel/cpu/bugs.c
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
Borislav Petkov authored
stable inclusion from stable-v4.19.234 commit d86fc674172edc0794f84198405f59f00b71512a
category: bugfix
bugzilla: 186453, https://gitee.com/src-openeuler/kernel/issues/I50WBM
CVE: CVE-2022-0001

--------------------------------

commit a5ce9f2b upstream.

Merge the test whether the CPU supports STIBP into the test which determines whether STIBP is required. Thus try to simplify what is already an insane logic.

Remove a superfluous newline in a comment, while at it.

Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Anthony Steinhauser <asteinhauser@google.com>
Link: https://lkml.kernel.org/r/20200615065806.GB14668@zn.tnic
[fllinden@amazon.com: fixed contextual conflict (comment) for 4.19]
Signed-off-by: Frank van der Linden <fllinden@amazon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Conflicts:
    arch/x86/kernel/cpu/bugs.c
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
Liu Shixin authored
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5323R
CVE: NA

--------------------------------

Patch "mm: parallelize clear_gigantic_page" moved mem_map_next() after clear_user_highpage() but did not modify the parameter i, so each loop iteration actually cleared the previous page and, as a result, the last page was never cleared. Fix it by putting mem_map_next() back in its original position.

Fixes: ae0cd4d4 ("mm: parallelize clear_gigantic_page")
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Wei Li <liwei391@huawei.com>
Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
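
A minimal sketch of the loop with mem_map_next() back in its original position; this assumes the pre-parallelization shape of clear_gigantic_page() in mm/memory.c and omits the parallel path added by the hulk patch:

    static void clear_gigantic_page_sketch(struct page *page, unsigned long addr,
                                           unsigned int pages_per_huge_page)
    {
        int i;
        struct page *p = page;

        might_sleep();
        /* mem_map_next() runs in the increment clause, after i++, so that
         * iteration i really clears page i; calling it after
         * clear_user_highpage() without updating i clears the previous page
         * and leaves the last one untouched. */
        for (i = 0; i < pages_per_huge_page;
             i++, p = mem_map_next(p, page, i)) {
            cond_resched();
            clear_user_highpage(p, addr + i * PAGE_SIZE);
        }
    }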
-
- Apr 19, 2022
Ye Bin authored
mainline inclusion from mainline-v5.18-rc1 commit 7aab5c84a0f6ec2290e2ba4a6b245178b1bf949a
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I52WOM
CVE: NA

--------------------------------

We inject an IO error when rmdir'ing a non-empty directory, then get the following issue:

    step1: mkfs.ext4 -F /dev/sda
    step2: mount /dev/sda test
    step3: cd test
    step4: mkdir -p 1/2
    step5: rmdir 1
      [ 110.920551] ext4_empty_dir: inject fault
      [ 110.921926] EXT4-fs warning (device sda): ext4_rmdir:3113: inode #12: comm rmdir: empty directory '1' has too many links (3)
    step6: cd ..
    step7: umount test
    step8: fsck.ext4 -f /dev/sda
      e2fsck 1.42.9 (28-Dec-2013)
      Pass 1: Checking inodes, blocks, and sizes
      Pass 2: Checking directory structure
      Entry '..' in .../??? (13) has deleted/unused inode 12. Clear<y>? yes
      Pass 3: Checking directory connectivity
      Unconnected directory inode 13 (...)
      Connect to /lost+found<y>? yes
      Pass 4: Checking reference counts
      Inode 13 ref count is 3, should be 2. Fix<y>? yes
      Pass 5: Checking group summary information
      /dev/sda: ***** FILE SYSTEM WAS MODIFIED *****
      /dev/sda: 12/131072 files (0.0% non-contiguous), 26157/524288 blocks

The relevant code:

    ext4_rmdir
        if (!ext4_empty_dir(inode))
            goto end_rmdir;
    ext4_empty_dir
        bh = ext4_read_dirblock(inode, 0, DIRENT_HTREE);
        if (IS_ERR(bh))
            return true;

If reading the directory block fails, 'ext4_empty_dir' currently returns true, assuming the directory is empty, which obviously leads to the issue above. To solve this, make 'ext4_empty_dir' return false when reading the directory block fails; returning false also avoids making things worse when the file system is already corrupted.

Signed-off-by: Ye Bin <yebin10@huawei.com>
Cc: stable@kernel.org
Link: https://lore.kernel.org/r/20220228024815.3952506-1-yebin10@huawei.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Conflicts:
    fs/ext4/namei.c
Signed-off-by: Ye Bin <yebin10@huawei.com>
Reviewed-by: Zhang Yi <yi.zhang@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
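
The corrected early exit, sketched from the excerpt quoted above; only the error handling changes, and the rest of ext4_empty_dir() in fs/ext4/namei.c is elided:

    static bool ext4_empty_dir_sketch(struct inode *inode)
    {
        struct buffer_head *bh;

        bh = ext4_read_dirblock(inode, 0, DIRENT_HTREE);
        if (IS_ERR(bh))
            return false;   /* was "return true": a read error no longer makes
                               the directory look empty, so rmdir fails instead
                               of unlinking a possibly non-empty directory */
        /* ... scan the block for entries other than '.' and '..' ... */
        brelse(bh);
        return true;
    }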
-
Ye Bin authored
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I52WOM
CVE: NA

--------------------------------

This reverts commit 35bd50bc.

Signed-off-by: Ye Bin <yebin10@huawei.com>
Reviewed-by: Zhang Yi <yi.zhang@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
Peter Zijlstra authored
mainline inclusion from mainline-v5.17-rc5 commit b1e8206582f9d680cff7d04828708c8b6ab32957
category: bugfix
bugzilla: 186609, https://gitee.com/openeuler/kernel/issues/I532B0
CVE: NA

--------------------------------

Where commit 4ef0c5c6b5ba ("kernel/sched: Fix sched_fork() access an invalid sched_task_group") fixed a fork race vs cgroup, it opened up a race vs syscalls by not placing the task on the runqueue before it gets exposed through the pidhash.

Commit 13765de8148f ("sched/fair: Fix fault in reweight_entity") is trying to fix a single instance of this, instead fix the whole class of issues, effectively reverting this commit.

Fixes: 4ef0c5c6b5ba ("kernel/sched: Fix sched_fork() access an invalid sched_task_group")
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Tadeusz Struk <tadeusz.struk@linaro.org>
Tested-by: Zhang Qiao <zhangqiao22@huawei.com>
Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Link: https://lkml.kernel.org/r/YgoeCbwj5mbCR0qA@hirez.programming.kicks-ass.net
Conflicts:
    include/linux/sched/task.h
    kernel/fork.c
    kernel/sched/core.c
Signed-off-by: Zhang Qiao <zhangqiao22@huawei.com>
Reviewed-by: Chen Hui <judy.chenhui@huawei.com>
Reviewed-by: Wang Weiyang <wangweiyang2@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
Xunlei Pang authored
mainline inclusion from mainline-v5.10-rc1 commit df3cb4ea
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I52QKB
CVE: NA

--------------------------------

We've met problems that occasionally tasks with full cpumask (e.g. by putting it into a cpuset or setting to full affinity) were migrated to our isolated cpus in production environment.

After some analysis, we found that it is due to the current select_idle_smt() not considering the sched_domain mask.

Steps to reproduce on my 31-CPU hyperthreads machine:
1. with boot parameter: "isolcpus=domain,2-31" (thread lists: 0,16 and 1,17)
2. cgcreate -g cpu:test; cgexec -g cpu:test "test_threads"
3. some threads will be migrated to the isolated cpu16~17.

Fix it by checking the valid domain mask in select_idle_smt().

Fixes: 10e2f1ac ("sched/core: Rewrite and improve select_idle_siblings()")
Reported-by: Wetp Zhang <wetp.zy@linux.alibaba.com>
Signed-off-by: Xunlei Pang <xlpang@linux.alibaba.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Jiang Biao <benbjiang@tencent.com>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Link: https://lkml.kernel.org/r/1600930127-76857-1-git-send-email-xlpang@linux.alibaba.com
Conflicts:
    kernel/sched/fair.c
Signed-off-by: Yu jiahua <yujiahua1@huawei.com>
Reviewed-by: zheng zucheng <zhengzucheng@huawei.com>
Reviewed-by: zheng zucheng <zhengzucheng@huawei.com>
Reviewed-by: Zhang Qiao <zhangqiao22@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
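
A hedged sketch of the check described above; helper and field names approximate the 4.19-era kernel/sched/fair.c and may differ from the exact backport:

    static int select_idle_smt_sketch(struct task_struct *p,
                                      struct sched_domain *sd, int target)
    {
        int cpu;

        if (!static_branch_likely(&sched_smt_present))
            return -1;

        for_each_cpu(cpu, cpu_smt_mask(target)) {
            /* The fix: also require the sibling to sit inside the
             * sched_domain span, so CPUs isolated with isolcpus=domain,...
             * are never chosen as an idle target. */
            if (!cpumask_test_cpu(cpu, &p->cpus_allowed) ||
                !cpumask_test_cpu(cpu, sched_domain_span(sd)))
                continue;
            if (available_idle_cpu(cpu))
                return cpu;
        }

        return -1;
    }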
-
- Apr 18, 2022
Pablo Neira Ayuso authored
mainline inclusion from mainline-v5.18-rc1 commit 4c905f6740a365464e91467aa50916555b28213d
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I50WAZ
CVE: CVE-2022-1016

--------------------------------

Initialize registers to avoid stack leak into userspace.

Fixes: 96518518 ("netfilter: add nftables")
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Conflicts:
    net/netfilter/nf_tables_core.c
Signed-off-by: Lu Wei <luwei32@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
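
A hedged illustration of the change in nft_do_chain() (net/netfilter/nf_tables_core.c): the on-stack register file is zero-initialized before any rule runs, so an expression that reads a register no earlier expression wrote sees zeroes rather than stale stack bytes:

    #include <linux/netfilter.h>
    #include <net/netfilter/nf_tables.h>

    /* Sketch only; the real function walks the chain's rules and evaluates
     * each expression against these registers. */
    static unsigned int nft_do_chain_sketch(struct nft_pktinfo *pkt)
    {
        struct nft_regs regs = {};      /* the fix: zeroed, not uninitialized */

        (void)pkt;
        /* ... rule evaluation elided ... */
        return NF_ACCEPT;
    }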
-
- Apr 16, 2022
Zhang Wensheng authored
hulk inclusion
category: bugfix, https://gitee.com/openeuler/kernel/issues/I51ABL
bugzilla: 186386
CVE: NA

--------------------------------

When 'index' is a big number, it may become negative when forced into an 'int'; 'index << part_shift' might then overflow to a positive value that is not greater than '0xfffff', and sysfs might complain about duplicate creation. Because of this, moving the 'index' check to the front fixes the problem and is cleaner.

Fixes: b0d9111a ("nbd: use an idr to keep track of nbd devices")
Fixes: 940c264984fd ("nbd: fix possible overflow for 'first_minor' in nbd_dev_add()")
Signed-off-by: Zhang Wensheng <zhangwensheng5@huawei.com>
Reviewed-by: Jason Yan <yanaijie@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
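
A hedged sketch of the reordered check in nbd_dev_add() (drivers/block/nbd.c); the helper name is hypothetical, and the real function goes on to allocate the device and set first_minor to index << part_shift:

    #include <linux/errno.h>
    #include <linux/kdev_t.h>       /* MINORMASK == 0xfffff */

    static int nbd_index_ok_sketch(int index, int part_shift)
    {
        /* Validate the user-controlled index before it is shifted: a huge
         * value can wrap negative when truncated to int, and
         * index << part_shift can overflow back into the valid minor range
         * (<= 0xfffff), which is what made sysfs report a duplicate. */
        if (index < 0 || index > MINORMASK >> part_shift)
            return -EINVAL;
        return 0;
    }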
-
Zhengchao Shao authored
hulk inclusion
category: bugfix
CVE: CVE-2021-39713

--------------------------------

To adapt to KABI, put 'rcu' before 'gso_skb' for 64-bit kernels. The rcu head uses 16 bytes, and the space there is enough. It is unused for 32-bit kernels.

Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
Cong Wang authored
stable inclusion from linux-4.19.221 commit f9ff09e266ca70c801b9911280f6ae64c9183d85
category: bugfix
CVE: CVE-2021-39713

--------------------------------

commit 460b3601 upstream.

When tcf_block_find() fails, it has already rolled back the qdisc refcount, so its callers don't need to clean it up again. Avoid calling qdisc_put() a second time by resetting qdisc to NULL for callers.

Reported-by: <syzbot+37b8770e6d5a8220a039@syzkaller.appspotmail.com>
Fixes: e368fdb6 ("net: sched: use Qdisc rcu API instead of relying on rtnl lock")
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
Conflicts:
    net/sched/cls_api.c
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
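
A hedged sketch of the rollback described above; the real tcf_block_find() in net/sched/cls_api.c takes more parameters and has several error labels, so only the relevant exit path is shown:

    #include <linux/err.h>
    #include <net/sch_generic.h>

    static struct tcf_block *tcf_block_find_error_sketch(struct Qdisc **q, int err)
    {
        if (*q) {
            qdisc_put(*q);  /* drop the reference taken while looking up the block */
            *q = NULL;      /* the fix: callers see NULL and must not call
                               qdisc_put() a second time */
        }
        return ERR_PTR(err);
    }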
-