- Oct 27, 2022
-
-
Thomas Gleixner authored
mainline inclusion from v5.10 commit 190113b4 category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I5Y4BR CVE: NA
-------------------------------------------
Prarit reported that depending on the affinity setting the 'irq $N: Affinity broken due to vector space exhaustion.' message shows up in dmesg, but the vector space on the CPUs in the affinity mask is definitely not exhausted.

Shung-Hsi provided traces and analysis which pinpoint the problem: the ordering of trying to assign an interrupt vector in assign_irq_vector_any_locked() is simply wrong if the interrupt data has a valid node assigned. It does:

 1) Try the intersection of affinity mask and node mask
 2) Try the node mask
 3) Try the full affinity mask
 4) Try the full online mask

Obviously #2 and #3 are in the wrong order, as the requested affinity mask has to take precedence. In the observed cases #1 failed because the affinity mask did not contain CPUs from node 0. That made it allocate a vector from node 0, thereby breaking affinity and emitting the misleading message.

Reverse the order of #2 and #3 so that the full affinity mask without the node intersection is tried before affinity is actually broken. If no node is assigned, only the full affinity mask and, if that fails, the full online mask are tried.

Fixes: d6ffc6ac ("x86/vector: Respect affinity mask in irq descriptor") Reported-by:
Prarit Bhargava <prarit@redhat.com> Reported-by:
Shung-Hsi Yu <shung-hsi.yu@suse.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Tested-by:
Shung-Hsi Yu <shung-hsi.yu@suse.com> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/87ft4djtyp.fsf@nanos.tec.linutronix.de Signed-off-by:
zhangjp12 <zhangjp12@chinatelecom.com> Signed-off-by:
Xibo.Wang <wangxb12@chinatelecom.cn>
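For reference, the corrected ordering in assign_irq_vector_any_locked() looks roughly like this (a sketch abridged from the upstream fix; the vector_searchmask plumbing follows arch/x86/kernel/apic/vector.c):

  static int assign_irq_vector_any_locked(struct irq_data *irqd)
  {
          /* Affinity mask: either irq_default_affinity or the user-set mask */
          const struct cpumask *affmsk = irq_data_get_affinity_mask(irqd);
          int node = irq_data_get_node(irqd);

          if (node != NUMA_NO_NODE) {
                  /* 1) Try the intersection of affinity mask and node mask */
                  cpumask_and(vector_searchmask, cpumask_of_node(node), affmsk);
                  if (!assign_vector_locked(irqd, vector_searchmask))
                          return 0;
          }

          /* 2) Try the full affinity mask: it takes precedence over the node */
          cpumask_and(vector_searchmask, affmsk, cpu_online_mask);
          if (!assign_vector_locked(irqd, vector_searchmask))
                  return 0;

          /* 3) Only now break affinity and fall back to the node mask ... */
          if (node != NUMA_NO_NODE &&
              !assign_vector_locked(irqd, cpumask_of_node(node)))
                  return 0;

          /* 4) ... and finally to the full online mask */
          return assign_vector_locked(irqd, cpu_online_mask);
  }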
-
- Sep 22, 2022
-
-
Borislav Petkov authored
mainline inclusion from mainline-v5.7 commit a9a3ed1e category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I5Q0UG?from=project-issue CVE: NA
---------------------------
... or the odyssey of trying to disable the stack protector for the function which generates the stack canary value.

The whole story started with Sergei reporting a boot crash with a kernel built with gcc-10:

  Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: start_secondary
  CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.6.0-rc5-00235-gfffb08b37df9 #139
  Hardware name: Gigabyte Technology Co., Ltd. To be filled by O.E.M./H77M-D3H, BIOS F12 11/14/2013
  Call Trace:
   dump_stack
   panic
   ? start_secondary
   __stack_chk_fail
   start_secondary
   secondary_startup_64
  ---[ end Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: start_secondary

This happens because gcc-10 tail-call optimizes the last function call in start_secondary() - cpu_startup_entry() - and thus emits a stack canary check which fails because the canary value changes after the boot_init_stack_canary() call.

To fix that, the initial attempt was to mark the one function which generates the stack canary with:

  __attribute__((optimize("-fno-stack-protector"))) ... start_secondary(void *unused)

however, using the optimize attribute doesn't work cumulatively, as the attribute does not add to but rather replaces previously supplied optimization options - roughly all -fxxx options. The key one among them being -fno-omit-frame-pointer, this leads to a missing frame pointer - a frame pointer which the kernel needs.

The next attempt to prevent compilers from tail-call optimizing the last function call cpu_startup_entry(), shy of carving out start_secondary() into a separate compilation unit and building it with -fno-stack-protector, was to add an empty asm(""). This solution was short and sweet, and reportedly supported by both compilers, but we didn't get very far this time: future (LTO?) optimization passes could potentially eliminate it, which leads us to the third attempt: having an actual memory barrier there which the compiler cannot ignore or move around, etc.

That should hold for a long time, but hey, we said that about the other two solutions too, so... Reported-by:
Sergei Trofimovich <slyfox@gentoo.org> Signed-off-by:
Borislav Petkov <bp@suse.de> Tested-by:
Kalle Valo <kvalo@codeaurora.org> Cc: <stable@vger.kernel.org> Link: https://lkml.kernel.org/r/20200314164451.346497-1-slyfox@gentoo.org Signed-off-by:
tangbin <tangbin_yewu@cmss.chinamobile.com>
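The third attempt amounts to a compiler barrier placed after the final call; a minimal sketch of the shape of the mainline change (the compiler.h macro plus its use at the end of start_secondary()):

  /* A full memory barrier the compiler must not elide or reorder,
   * unlike an empty asm("") which a future (LTO) pass might drop. */
  #define prevent_tail_call_optimization()        mb()

  static void notrace start_secondary(void *unused)
  {
          /* ... CPU bring-up; canary set by boot_init_stack_canary() ... */
          cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
          /* Keep the call above from being tail-call optimized into a jmp,
           * which would emit a canary check after the canary changed. */
          prevent_tail_call_optimization();
  }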
-
- Sep 07, 2022
-
-
Johan Hovold authored
stable inclusion from stable-v4.19.256 commit e2b45e82e453c01ffe97308aad81ac44c4bbd019 category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I5Q0SQ CVE: NA -------------------------------- [ Upstream commit 229e73d46994f15314f58b2d39bf952111d89193 ] Make sure to free the platform device in the unlikely event that registration fails. Fixes: 7a67832c ("libnvdimm, e820: make CONFIG_X86_PMEM_LEGACY a tristate option") Signed-off-by:
Johan Hovold <johan@kernel.org> Signed-off-by:
Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20220620140723.9810-1-johan@kernel.org Signed-off-by:
Sasha Levin <sashal@kernel.org> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
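The error-path pattern, sketched (abridged; in mainline the device is created in register_e820_pmem() in arch/x86/kernel/pmem.c):

  pdev = platform_device_alloc("e820_pmem", -1);
  if (!pdev)
          return -ENOMEM;

  rc = platform_device_add(pdev);
  if (rc)
          /* Registration failed: drop the reference so the device is freed */
          platform_device_put(pdev);

  return rc;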
-
Chen Zhongjin authored
mainline inclusion from mainline-v6.0-rc3 commit fc2e426b1161761561624ebd43ce8c8d2fa058da category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I5Q4RA CVE: NA
--------------------------------
When meeting ftrace trampolines in ORC unwinding, the unwinder uses the address of ftrace_{regs_}call to find the ORC entry, which gets the next frame at sp+176. If an IRQ hits at sub $0xa8,%rsp, the next frame should be at sp+8 instead of sp+176. This makes the unwinder skip the correct frame and throw warnings such as "wrong direction" or "can't access registers", etc., depending on the content of the incorrect frame address.

By adding the offset *ip - ops->trampoline* to the base address ftrace_{regs_}caller, we can get the correct address to find the ORC entry. Also rename "caller" to "tramp_addr" so the variable name matches its content.

[ mingo: Clarified the changelog a bit. ]

Fixes: 6be7fa3c ("ftrace, orc, x86: Handle ftrace dynamically allocated trampolines") Signed-off-by:
Chen Zhongjin <chenzhongjin@huawei.com> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Reviewed-by:
Steven Rostedt (Google) <rostedt@goodmis.org> Cc: <stable@vger.kernel.org> Link: https://lore.kernel.org/r/20220819084334.244016-1-chenzhongjin@huawei.com Signed-off-by:
Chen Zhongjin <chenzhongjin@huawei.com> Reviewed-by:
Yang Jihong <yangjihong1@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
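A sketch of the resulting lookup (abridged from the upstream orc_ftrace_find(); the recursion guard is elided):

  static struct orc_entry *orc_ftrace_find(unsigned long ip)
  {
          struct ftrace_ops *ops = ftrace_ops_trampoline(ip);
          unsigned long tramp_addr;

          if (!ops)
                  return NULL;

          /* Start of the ftrace replacement trampoline */
          if (ops->flags & FTRACE_OPS_FL_SAVE_REGS)
                  tramp_addr = (unsigned long)ftrace_regs_caller;
          else
                  tramp_addr = (unsigned long)ftrace_caller;

          /* Advance to the position within the trampoline that ip is at,
           * so an IRQ inside the trampoline resolves to the right entry. */
          tramp_addr += ip - ops->trampoline;

          return orc_find(tramp_addr);
  }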
-
- Aug 19, 2022
-
-
Daniel Sneddon authored
stable inclusion from stable-v4.19.255 commit b6c5011934a15762cd694e36fe74f2f2f93eac9b category: bugfix bugzilla: 187492, https://gitee.com/src-openeuler/kernel/issues/I5N1SO CVE: CVE-2022-26373
--------------------------------
commit 2b1299322016731d56807aa49254a5ea3080b6b3 upstream.

tl;dr: The Enhanced IBRS mitigation for Spectre v2 does not work as documented for RET instructions after VM exits. Mitigate it with a new one-entry RSB stuffing mechanism and a new LFENCE.

== Background ==

Indirect Branch Restricted Speculation (IBRS) was designed to help mitigate Branch Target Injection and Speculative Store Bypass, i.e. Spectre, attacks. IBRS prevents software run in less privileged modes from affecting branch prediction in more privileged modes. IBRS requires the MSR to be written on every privilege level change.

To overcome some of the performance issues of IBRS, Enhanced IBRS was introduced. eIBRS is an "always on" IBRS, in other words, just turn it on once instead of writing the MSR on every privilege level change. When eIBRS is enabled, more privileged modes should be protected from less privileged modes, including protecting VMMs from guests.

== Problem ==

Here's a simplification of how guests are run on Linux' KVM:

  void run_kvm_guest(void)
  {
          // Prepare to run guest
          VMRESUME();
          // Clean up after guest runs
  }

The execution flow for that would look something like this to the processor:

 1. Host-side: call run_kvm_guest()
 2. Host-side: VMRESUME
 3. Guest runs, does "CALL guest_function"
 4. VM exit, host runs again
 5. Host might make some "cleanup" function calls
 6. Host-side: RET from run_kvm_guest()

Now, when back on the host, there are a couple of possible scenarios of post-guest activity the host needs to do before executing host code:

 * on pre-eIBRS hardware (legacy IBRS, or nothing at all), the RSB is not touched and Linux has to do a 32-entry stuffing.
 * on eIBRS hardware, VM exit with IBRS enabled, or restoring the host IBRS=1 shortly after VM exit, has a documented side effect of flushing the RSB except in this PBRSB situation where the software needs to stuff the last RSB entry "by hand".

IOW, with eIBRS supported, host RET instructions should no longer be influenced by guest behavior after the host retires a single CALL instruction. However, if the RET instructions are "unbalanced" with CALLs after a VM exit as is the RET in #6, it might speculatively use the address for the instruction after the CALL in #3 as an RSB prediction. This is a problem since the (untrusted) guest controls this address. Balanced CALL/RET instruction pairs such as in step #5 are not affected.

== Solution ==

The PBRSB issue affects a wide variety of Intel processors which support eIBRS. But not all of them need mitigation. Today, X86_FEATURE_RETPOLINE triggers an RSB filling sequence that mitigates PBRSB. Systems setting RETPOLINE need no further mitigation - i.e., eIBRS systems which enable retpoline explicitly. However, such systems (X86_FEATURE_IBRS_ENHANCED) do not set RETPOLINE and most of them need a new mitigation.

Therefore, introduce a new feature flag X86_FEATURE_RSB_VMEXIT_LITE which triggers a lighter-weight PBRSB mitigation versus RSB filling at vmexit.

The lighter-weight mitigation performs a CALL instruction which is immediately followed by a speculative execution barrier (INT3). This steers speculative execution to the barrier -- just like a retpoline -- which ensures that speculation can never reach an unbalanced RET. Then, ensure this CALL is retired before continuing execution with an LFENCE.

In other words, the window of exposure is opened at VM exit where RET behavior is troublesome. While the window is open, force RSB predictions sampling for RET targets to a dead end at the INT3. Close the window with the LFENCE.

There is a subset of eIBRS systems which are not vulnerable to PBRSB. Add these systems to the cpu_vuln_whitelist[] as NO_EIBRS_PBRSB. Future systems that aren't vulnerable will set ARCH_CAP_PBRSB_NO.

[ bp: Massage, incorporate review comments from Andy Cooper. ] [ Pawan: Update commit message to replace RSB_VMEXIT with RETPOLINE ] Signed-off-by:
Daniel Sneddon <daniel.sneddon@linux.intel.com> Co-developed-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> conflict: arch/x86/include/asm/cpufeatures.h arch/x86/kernel/cpu/common.c Signed-off-by:
Chen Jiahao <chenjiahao16@huawei.com> Reviewed-by:
Zhang Jianhua <chris.zjh@huawei.com> Reviewed-by:
Liao Chang <liaochang1@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
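The lighter-weight sequence is short enough to sketch in full; a rough C rendering of the CALL/INT3/LFENCE idea (the helper name is hypothetical and the operands illustrative, not the exact upstream asm macro):

  static __always_inline void vmexit_fill_rsb_lite(void)
  {
          asm volatile("call 1f\n\t"          /* push return address: RSB now predicts 1f */
                       "int3\n\t"             /* dead end for any speculative RET here */
                       "1: add $8, %%rsp\n\t" /* drop the return address the call pushed */
                       "lfence"               /* ensure the CALL retired before going on */
                       ::: "memory");
  }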
-
- Aug 15, 2022
-
-
Yipeng Zou authored
hulk inclusion category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I5MC71 CVE: NA
---------------------------------
This reverts commit 657a6bec.

The reverted patch was backported from mainline to fix the REG_SP_INDIRECT type in the ORC unwinder. However, it fixes an objtool problem that only exists on mainline: the upstream commit it depends on has not been merged into hulk-4.19, so the backport caused the sp value parsed from the ORC data to be wrong. Therefore revert this patch. Signed-off-by:
Yipeng Zou <zouyipeng@huawei.com> Reviewed-by:
Zhang Jianhua <chris.zjh@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
-
- Aug 09, 2022
-
-
Juergen Gross authored
stable inclusion from stable-4.19.253 commit 36e2f161fb01795722f2ff1a24d95f08100333dd category: bugfix bugzilla: https://gitee.com/src-openeuler/kernel/issues/I5JTYM CVE: CVE-2022-36123 -------------------------------- [ Upstream commit 38fa5479b41376dc9d7f57e71c83514285a25ca0 ] The .brk section has the same properties as .bss: it is an alloc-only section and should be cleared before being used. Not doing so is especially a problem for Xen PV guests, as the hypervisor will validate page tables (check for writable page tables and hypervisor private bits) before accepting them to be used. Make sure .brk is initially zero by letting clear_bss() clear the brk area, too. Signed-off-by:
Juergen Gross <jgross@suse.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Link: https://lore.kernel.org/r/20220630071441.28576-3-jgross@suse.com Signed-off-by:
Sasha Levin <sashal@kernel.org> Signed-off-by:
GONG, Ruiqi <gongruiqi1@huawei.com> Reviewed-by:
Xiu Jianfeng <xiujianfeng@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
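The fix is a one-line extension of clear_bss(); a sketch following the upstream change:

  void __init clear_bss(void)
  {
          memset(__bss_start, 0,
                 (unsigned long) __bss_stop - (unsigned long) __bss_start);
          /* .brk is alloc-only like .bss and must also start out zeroed,
           * e.g. so Xen PV page-table validation sees clean pages. */
          memset(__brk_base, 0,
                 (unsigned long) __brk_limit - (unsigned long) __brk_base);
  }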
-
Dmitry Monakhov authored
mainline inclusion from mainline-v5.18-rc5 commit 6c8ef58a50b5fab6e364b558143490a2014e2a4f category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I5LCHG CVE: NA
--------------------------------
A crash was observed in the ORC unwinder:

  BUG: stack guard page was hit at 000000000dd984a2 (stack is 00000000d1caafca..00000000613712f0)
  kernel stack overflow (page fault): 0000 [#1] SMP NOPTI
  CPU: 93 PID: 23787 Comm: context_switch1 Not tainted 5.4.145 #1
  RIP: 0010:unwind_next_frame
  Call Trace:
   <NMI>
   perf_callchain_kernel
   get_perf_callchain
   perf_callchain
   perf_prepare_sample
   perf_event_output_forward
   __perf_event_overflow
   perf_ibs_handle_irq
   perf_ibs_nmi_handler
   nmi_handle
   default_do_nmi
   do_nmi
   end_repeat_nmi

This was really two bugs:

 1) The perf IBS code passed inconsistent regs to the unwinder.
 2) The unwinder didn't handle the bad input gracefully.

Fix the latter bug. The ORC unwinder needs to be immune against bad inputs. The problem is that stack_access_ok() doesn't recheck the validity of the full range of registers after switching to the next valid stack with get_stack_info(). Fix that.

[ jpoimboe: rewrote commit log ] Signed-off-by:
Dmitry Monakhov <dmtrmonakhov@yandex-team.ru> Signed-off-by:
Josh Poimboeuf <jpoimboe@redhat.com> Link: https://lore.kernel.org/r/1650353656-956624-1-git-send-email-dmtrmonakhov@yandex-team.ru Signed-off-by:
Peter Zijlstra <peterz@infradead.org> Signed-off-by:
Yipeng Zou <zouyipeng@huawei.com> Reviewed-by:
Zhang Jianhua <chris.zjh@huawei.com> Reviewed-by:
Liao Chang <liaochang1@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
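The fix re-runs the range check after the stack switch; a sketch of stack_access_ok() with the recheck (following the shape of the upstream change):

  static bool stack_access_ok(struct unwind_state *state, unsigned long _addr,
                              size_t len)
  {
          struct stack_info *info = &state->stack_info;
          void *addr = (void *)_addr;

          if (on_stack(info, addr, len))
                  return true;

          /* After switching to the next stack, verify that the whole range
           * [addr, addr + len) actually lies on that stack, too. */
          return !get_stack_info(addr, state->task, info, &state->stack_mask) &&
                 on_stack(info, addr, len);
  }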
-
Josh Poimboeuf authored
mainline inclusion from mainline-v5.12-rc3 commit b59cc97674c947861783ca92b9a6e7d043adba96 category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I5LCHG CVE: NA -------------------------------- The ORC unwinder attempts to fall back to frame pointers when ORC data is missing for a given instruction. It sets state->error, but then tries to keep going as a best-effort type of thing. That may result in further warnings if the unwinder gets lost. Until we have some way to register generated code with the unwinder, missing ORC will be expected, and occasionally going off the rails will also be expected. So don't warn about it. Signed-off-by:
Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by:
Borislav Petkov <bp@suse.de> Tested-by:
Ivan Babrou <ivan@cloudflare.com> Link: https://lkml.kernel.org/r/06d02c4bbb220bd31668db579278b0352538efbb.1612534649.git.jpoimboe@redhat.com Signed-off-by:
Yipeng Zou <zouyipeng@huawei.com> Reviewed-by:
Zhang Jianhua <chris.zjh@huawei.com> Reviewed-by:
Liao Chang <liaochang1@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
-
Peter Zijlstra authored
mainline inclusion from mainline-v5.12-rc1 commit 87ccc826bf1c9e5ab4c2f649b404e02c63e47622 category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I5LCHG CVE: NA -------------------------------- Currently REG_SP_INDIRECT is unused but means (%rsp + offset), change it to mean (%rsp) + offset. The reason is that we're going to swizzle stack in the middle of a C function with non-trivial stack footprint. This means that when the unwinder finds the ToS, it needs to dereference it (%rsp) and then add the offset to the next frame, resulting in: (%rsp) + offset This is somewhat unfortunate, since REG_BP_INDIRECT is used (by DRAP) and thus needs to retain the current (%rbp + offset). Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by:
Miroslav Benes <mbenes@suse.cz> Acked-by:
Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by:
Yipeng Zou <zouyipeng@huawei.com> Reviewed-by:
Zhang Jianhua <chris.zjh@huawei.com> Reviewed-by:
Liao Chang <liaochang1@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
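On the unwinder side the new semantics read roughly like this (a simplified sketch of the relevant unwind_next_frame() fragments, not the full function):

          case ORC_REG_SP_INDIRECT:
                  sp = state->sp;         /* was: state->sp + orc->sp_offset */
                  indirect = true;
                  break;

          /* ... later, once sp is known ... */
          if (indirect) {
                  /* Dereference the top of stack first ... */
                  if (!deref_stack_reg(state, sp, &sp))
                          return false;
                  /* ... then apply the offset: (%rsp) + offset */
                  if (orc->sp_reg == ORC_REG_SP_INDIRECT)
                          sp += orc->sp_offset;
          }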
-
- Jul 08, 2022
-
-
Randy Dunlap authored
stable inclusion from stable-4.19.247 commit 61967ac7ba2814fefd033eb3979058057a18edc1 category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I5FNPY CVE: NA -------------------------------- [ Upstream commit 12441ccdf5e2f5a01a46e344976cbbd3d46845c9 ] __setup() handlers should return 1 to obsolete_checksetup() in init/main.c to indicate that the boot option has been handled. A return of 0 causes the boot option/value to be listed as an Unknown kernel parameter and added to init's (limited) argument (no '=') or environment (with '=') strings. So return 1 from these x86 __setup handlers. Examples: Unknown kernel command line parameters "apicpmtimer BOOT_IMAGE=/boot/bzImage-517rc8 vdso=1 ring3mwait=disable", will be passed to user space. Run /sbin/init as init process with arguments: /sbin/init apicpmtimer with environment: HOME=/ TERM=linux BOOT_IMAGE=/boot/bzImage-517rc8 vdso=1 ring3mwait=disabl...
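One of the fixed handlers, sketched (the apicpmtimer handler from arch/x86/kernel/apic/apic.c, abridged):

  static int __init setup_apicpmtimer(char *s)
  {
          apic_calibrate_pmtmr = 1;
          notsc_setup(NULL);
          /* Return 1: the option was consumed here and must not be handed
           * to init as an unknown parameter or environment string. */
          return 1;
  }
  __setup("apicpmtimer", setup_apicpmtimer);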
-
- Jul 06, 2022
-
-
Josh Poimboeuf authored
stable inclusion from stable-v4.19.248 commit 0255c936bfaa1887f7043b995f1c9e1049bb25f1 category: bugfix bugzilla: https://gitee.com/src-openeuler/kernel/issues/I5D5RS CVE: CVE-2022-21123,CVE-2022-21125,CVE-2022-21166 -------------------------------- commit 1dc6ff02c8bf77d71b9b5d11cbc9df77cfb28626 upstream Similar to MDS and TAA, print a warning if SMT is enabled for the MMIO Stale Data vulnerability. Signed-off-by:
Josh Poimboeuf <jpoimboe@kernel.org> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yipeng Zou <zouyipeng@huawei.com> Reviewed-by:
Zhang Jianhua <chris.zjh@huawei.com> Reviewed-by:
Liao Chang <liaochang1@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
-
Pawan Gupta authored
stable inclusion from stable-v4.19.248 commit 0e94464009ee37217a7e450c96ea1f8d42d3a6b5 category: bugfix bugzilla: https://gitee.com/src-openeuler/kernel/issues/I5D5RS CVE: CVE-2022-21123,CVE-2022-21125,CVE-2022-21166 -------------------------------- commit a992b8a4682f119ae035a01b40d4d0665c4a2875 upstream The Shared Buffers Data Sampling (SBDS) variant of Processor MMIO Stale Data vulnerabilities may expose RDRAND, RDSEED and SGX EGETKEY data. Mitigation for this is added by a microcode update. As some of the implications of SBDS are similar to SRBDS, SRBDS mitigation infrastructure can be leveraged by SBDS. Set X86_BUG_SRBDS and use SRBDS mitigation. Mitigation is enabled by default; use srbds=off to opt-out. Mitigation status can be checked from below file: /sys/devices/system/cpu/vulnerabilities/srbds Signed-off-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> [cascardo: adjust for processor model names] Signed-off-by:
Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yipeng Zou <zouyipeng@huawei.com> Reviewed-by:
Zhang Jianhua <chris.zjh@huawei.com> Reviewed-by:
Liao Chang <liaochang1@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
-
Pawan Gupta authored
stable inclusion from stable-v4.19.248 commit 3ecb6dbad25b448ed8240f0ec2c7a8ff5155b7ea category: bugfix bugzilla: https://gitee.com/src-openeuler/kernel/issues/I5D5RS CVE: CVE-2022-21123,CVE-2022-21125,CVE-2022-21166 -------------------------------- commit 22cac9c677c95f3ac5c9244f8ca0afdc7c8afb19 upstream Currently, Linux disables SRBDS mitigation on CPUs not affected by MDS and have the TSX feature disabled. On such CPUs, secrets cannot be extracted from CPU fill buffers using MDS or TAA. Without SRBDS mitigation, Processor MMIO Stale Data vulnerabilities can be used to extract RDRAND, RDSEED, and EGETKEY data. Do not disable SRBDS mitigation by default when CPU is also affected by Processor MMIO Stale Data vulnerabilities. Signed-off-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yipeng Zou <zouyipeng@huawei.com> Reviewed-by:
Zhang Jianhua <chris.zjh@huawei.com> Reviewed-by:
Liao Chang <liaochang1@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
-
Pawan Gupta authored
stable inclusion from stable-v4.19.248 commit f2983fbba1cccac611d4966277f0336374fad0be category: bugfix bugzilla: https://gitee.com/src-openeuler/kernel/issues/I5D5RS CVE: CVE-2022-21123,CVE-2022-21125,CVE-2022-21166 -------------------------------- commit 8d50cdf8b8341770bc6367bce40c0c1bb0e1d5b3 upstream Add the sysfs reporting file for Processor MMIO Stale Data vulnerability. It exposes the vulnerability and mitigation state similar to the existing files for the other hardware vulnerabilities. Signed-off-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yipeng Zou <zouyipeng@huawei.com> Reviewed-by:
Zhang Jianhua <chris.zjh@huawei.com> Reviewed-by:
Liao Chang <liaochang1@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
-
Pawan Gupta authored
stable inclusion from stable-v4.19.248 commit 8b42145e8c9903d4805651e08f4fca628e166642 category: bugfix bugzilla: https://gitee.com/src-openeuler/kernel/issues/I5D5RS CVE: CVE-2022-21123,CVE-2022-21125,CVE-2022-21166 -------------------------------- commit 99a83db5a605137424e1efe29dc0573d6a5b6316 upstream When the CPU is affected by Processor MMIO Stale Data vulnerabilities, Fill Buffer Stale Data Propagator (FBSDP) can propagate stale data out of Fill buffer to uncore buffer when CPU goes idle. Stale data can then be exploited with other variants using MMIO operations. Mitigate it by clearing the Fill buffer before entering idle state. Signed-off-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Co-developed-by:
Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by:
Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yipeng Zou <zouyipeng@huawei.com> Reviewed-by:
Zhang Jianhua <chris.zjh@huawei.com> Reviewed-by:
Liao Chang <liaochang1@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
-
Pawan Gupta authored
stable inclusion from stable-v4.19.248 commit 54974c8714283feb5bf64df3bfe0f44267db5a3c category: bugfix bugzilla: https://gitee.com/src-openeuler/kernel/issues/I5D5RS CVE: CVE-2022-21123,CVE-2022-21125,CVE-2022-21166 -------------------------------- commit e5925fb867290ee924fcf2fe3ca887b792714366 upstream MDS, TAA and Processor MMIO Stale Data mitigations rely on clearing CPU buffers. Moreover, status of these mitigations affects each other. During boot, it is important to maintain the order in which these mitigations are selected. This is especially true for md_clear_update_mitigation() that needs to be called after MDS, TAA and Processor MMIO Stale Data mitigation selection is done. Introduce md_clear_select_mitigation(), and select all these mitigations from there. This reflects relationships between these mitigations and ensures proper ordering. Signed-off-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yipeng Zou <zouyipeng@huawei.com> Reviewed-by:
Zhang Jianhua <chris.zjh@huawei.com> Reviewed-by:
Liao Chang <liaochang1@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
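The resulting selection order is compact enough to quote; a sketch per the upstream md_clear_select_mitigation():

  static void __init md_clear_select_mitigation(void)
  {
          mds_select_mitigation();
          taa_select_mitigation();
          mmio_select_mitigation();

          /* These mitigations are inter-related and all rely on VERW to
           * clear the microarchitectural buffers, so update and print
           * their status only after all three have been selected. */
          md_clear_update_mitigation();
  }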
-
Pawan Gupta authored
stable inclusion from stable-v4.19.248 commit 9f2ce43ebc33713ba02a89a66bd5f93c2f3a82cf category: bugfix bugzilla: https://gitee.com/src-openeuler/kernel/issues/I5D5RS CVE: CVE-2022-21123,CVE-2022-21125,CVE-2022-21166
--------------------------------
commit 8cb861e9e3c9a55099ad3d08e1a3b653d29c33ca upstream

Processor MMIO Stale Data is a class of vulnerabilities that may expose data after an MMIO operation. For details please refer to Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst.

These vulnerabilities are broadly categorized as:

Device Register Partial Write (DRPW): Some endpoint MMIO registers incorrectly handle writes that are smaller than the register size. Instead of aborting the write or only copying the correct subset of bytes (for example, 2 bytes for a 2-byte write), more bytes than specified by the write transaction may be written to the register. On some processors, this may expose stale data from the fill buffers of the core that created the write transaction.

Shared Buffers Data Sampling (SBDS): After propagators may have moved data around the uncore and copied stale data into client core fill buffers, processors affected by MFBDS can leak data from the fill buffer.

Shared Buffers Data Read (SBDR): It is similar to Shared Buffer Data Sampling (SBDS) except that the data is directly read into the architectural software-visible state.

An attacker can use these vulnerabilities to extract data from CPU fill buffers using MDS and TAA methods. Mitigate it by clearing the CPU fill buffers using the VERW instruction before returning to a user or a guest.

On CPUs not affected by MDS and TAA, user application cannot sample data from CPU fill buffers using MDS or TAA. A guest with MMIO access can still use DRPW or SBDR to extract data architecturally. Mitigate it with VERW instruction to clear fill buffers before VMENTER for MMIO capable guests.

Add a kernel parameter mmio_stale_data={off|full|full,nosmt} to control the mitigation. Signed-off-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> [cascardo: arch/x86/kvm/vmx.c has been moved] Signed-off-by:
Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Conflicts: arch/x86/kvm/vmx.c Signed-off-by:
Yipeng Zou <zouyipeng@huawei.com> Reviewed-by:
Zhang Jianhua <chris.zjh@huawei.com> Reviewed-by:
Liao Chang <liaochang1@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
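The buffer clearing itself reuses the MDS primitive; for reference, the VERW helper in mainline looks like this (arch/x86/include/asm/nospec-branch.h):

  static __always_inline void mds_clear_cpu_buffers(void)
  {
          static const u16 ds = __KERNEL_DS;

          /* The memory-operand variant of VERW is the one documented to
           * flush the CPU buffers; "cc" is clobbered because VERW sets ZF. */
          asm volatile("verw %[ds]" : : [ds] "m" (ds) : "cc");
  }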
-
Pawan Gupta authored
stable inclusion from stable-v4.19.248 commit d03de576a604899741a0ebadcfe2a4a19ee53ba3 category: bugfix bugzilla: https://gitee.com/src-openeuler/kernel/issues/I5D5RS CVE: CVE-2022-21123,CVE-2022-21125,CVE-2022-21166 -------------------------------- commit f52ea6c26953fed339aa4eae717ee5c2133c7ff2 upstream Processor MMIO Stale Data mitigation uses similar mitigation as MDS and TAA. In preparation for adding its mitigation, add a common function to update all mitigations that depend on MD_CLEAR. [ bp: Add a newline in md_clear_update_mitigation() to separate statements better. ] Signed-off-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yipeng Zou <zouyipeng@huawei.com> Reviewed-by:
Zhang Jianhua <chris.zjh@huawei.com> Reviewed-by:
Liao Chang <liaochang1@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
-
Pawan Gupta authored
stable inclusion from stable-v4.19.248 commit 9277b11cafd0472db9e7d634de52d7c5d8d25462 category: bugfix bugzilla: https://gitee.com/src-openeuler/kernel/issues/I5D5RS CVE: CVE-2022-21123,CVE-2022-21125,CVE-2022-21166 -------------------------------- commit 51802186158c74a0304f51ab963e7c2b3a2b046f upstream Processor MMIO Stale Data is a class of vulnerabilities that may expose data after an MMIO operation. For more details please refer to Documentation/admin-guide/hw-vuln/processor_mmio_stale_data.rst Add the Processor MMIO Stale Data bug enumeration. A microcode update adds new bits to the MSR IA32_ARCH_CAPABILITIES, define them. Signed-off-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> [cascardo: adapted family names to the ones in v4.19] Signed-off-by:
Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yipeng Zou <zouyipeng@huawei.com> Reviewed-by:
Zhang Jianhua <chris.zjh@huawei.com> Reviewed-by:
Liao Chang <liaochang1@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
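For reference, the new IA32_ARCH_CAPABILITIES bits as defined in the mainline msr-index.h (bit positions reproduced from upstream; shown here only as a reading aid):

  #define ARCH_CAP_SBDR_SSDP_NO   BIT(13) /* Not susceptible to SBDR and SSDP */
  #define ARCH_CAP_FBSDP_NO       BIT(14) /* Not susceptible to FBSDP */
  #define ARCH_CAP_PSDP_NO        BIT(15) /* Not susceptible to PSDP */
  #define ARCH_CAP_FB_CLEAR       BIT(17) /* VERW clears CPU fill buffer */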
-
- Jun 13, 2022
-
-
Peter Zijlstra authored
mainline inclusion from mainline-v5.16-rc2 commit 0dc636b3b757a6b747a156de613275f9d74a4a66 category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I5BKYL CVE: NA -------------------------------- When commit 5d1ceb3969b6 ("x86: Fix __get_wchan() for !STACKTRACE") moved from stacktrace to native unwind_*() usage, the try_get_task_stack() got lost, leading to use-after-free issues for dying tasks. Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Fixes: 5d1ceb3969b6 ("x86: Fix __get_wchan() for !STACKTRACE") Link: https://bugzilla.kernel.org/show_bug.cgi?id=215031 Link: https://lore.kernel.org/stable/YZV02RCRVHIa144u@fedora64.linuxtx.org/ Reported-by:
Justin Forbes <jmforbes@linuxtx.org> Reported-by:
Holger Hoffstätte <holger@applied-asynchrony.com> Cc: Qi Zheng <zhengqi.arch@bytedance.com> Cc: Kees Cook <keescook@chromium.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Lin Yujun <linyujun809@huawei.com> Reviewed-by:
Zhang Jianhua <chris.zjh@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
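The fix brackets the walk with a stack pin; in outline (the walk itself is the loop shown under the next entry):

  unsigned long __get_wchan(struct task_struct *p)
  {
          struct unwind_state state;
          unsigned long addr = 0;

          /* Pin the task stack so a dying task cannot free it under us */
          if (!try_get_task_stack(p))
                  return 0;

          /* ... unwind loop, see the next entry ... */

          put_task_stack(p);
          return addr;
  }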
-
Peter Zijlstra authored
mainline inclusion from mainline-v5.16-rc1 commit 5d1ceb3969b6b2e47e2df6d17790a7c5a20fcbb4 category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I5BKYL CVE: NA -------------------------------- Use asm/unwind.h to implement wchan, since we cannot always rely on STACKTRACE=y. Fixes: bc9bbb81730e ("x86: Fix get_wchan() to support the ORC unwinder") Reported-by:
Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by:
Kees Cook <keescook@chromium.org> Link: https://lkml.kernel.org/r/20211022152104.137058575@infradead.org Signed-off-by:
Lin Yujun <linyujun809@huawei.com> Reviewed-by:
Zhang Jianhua <chris.zjh@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
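The unwind-based walk, sketched per the mainline __get_wchan() (arch/x86/kernel/process.c):

  for (unwind_start(&state, p, NULL, NULL); !unwind_done(&state);
       unwind_next_frame(&state)) {
          addr = unwind_get_return_address(&state);
          if (!addr)
                  break;
          /* Skip scheduler internals; the first frame outside them
           * is what wchan should report. */
          if (in_sched_functions(addr))
                  continue;
          break;
  }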
-
Josh Poimboeuf authored
mainline inclusion from mainline-v5.7-rc5 commit 81b67439 category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I5BJD1 CVE: NA -------------------------------- The following execution path is possible: fsnotify() [ realign the stack and store previous SP in R10 ] <IRQ> [ only IRET regs saved ] common_interrupt() interrupt_entry() <NMI> [ full pt_regs saved ] ... [ unwind stack ] When the unwinder goes through the NMI and the IRQ on the stack, and then sees fsnotify(), it doesn't have access to the value of R10, because it only has the five IRET registers. So the unwind stops prematurely. However, because the interrupt_entry() code is careful not to clobber R10 before saving the full regs, the unwinder should be able to read R10 from the previously saved full pt_regs associated with the NMI. Handle this case properly. When encountering an IRET regs frame immediately after a full pt_regs frame, use the pt_regs as a backup which can be used to get the C register values. Also, note that a call frame resets the 'prev_regs' value, because a function is free to clobber the registers. For this fix to work, the IRET and full regs frames must be adjacent, with no FUNC frames in between. So replace the FUNC hint in interrupt_entry() with an IRET_REGS hint. Fixes: ee9f8fce ("x86/unwind: Add the ORC unwinder") Reviewed-by:
Miroslav Benes <mbenes@suse.cz> Signed-off-by:
Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Dave Jones <dsj@fb.com> Cc: Jann Horn <jannh@google.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Link: https://lore.kernel.org/r/97a408167cc09f1cfa0de31a7b70dd88868d743f.1587808742.git.jpoimboe@redhat.com Signed-off-by:
Yipeng Zou <zouyipeng@huawei.com> Reviewed-by:
Zhang Jianhua <chris.zjh@huawei.com> Reviewed-by:
Liao Chang <liaochang1@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
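In outline (a simplified sketch, not the exact upstream diff): when the unwinder steps from a full pt_regs frame straight into an IRET-only frame, it keeps a pointer to the full frame and falls back to it for registers the five IRET regs do not cover:

          case UNWIND_HINT_TYPE_REGS_PARTIAL:
                  /* Remember the adjacent full pt_regs frame as a backup
                   * source for the callee-visible C registers. */
                  if (state->full_regs)
                          state->prev_regs = state->regs;
                  state->regs = (struct pt_regs *)sp;
                  state->full_regs = false;
                  break;

  /* Later, when a register such as R10 (the fsnotify() case above) is
   * needed but the current frame only holds the five IRET registers: */
          if (!state->full_regs && state->prev_regs)
                  val = state->prev_regs->r10;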
-
Josh Poimboeuf authored
mainline inclusion from mainline-v5.7-rc5 commit b08418b5 category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I5BJD1 CVE: NA -------------------------------- There's some daring kernel code out there which dumps the stack of another task without first making sure the task is inactive. If the task happens to be running while the unwinder is reading the stack, unusual unwinder warnings can result. There's no race-free way for the unwinder to know whether such a warning is legitimate, so just disable unwinder warnings for all non-current tasks. Reviewed-by:
Miroslav Benes <mbenes@suse.cz> Signed-off-by:
Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by:
Ingo Molnar <mingo@kernel.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Dave Jones <dsj@fb.com> Cc: Jann Horn <jannh@google.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Vince Weaver <vincent.weaver@maine.edu> Link: https://lore.kernel.org/r/ec424a2aea1d461eb30cab48a28c6433de2ab784.1587808742.git.jpoimboe@redhat.com Signed-off-by:
Yipeng Zou <zouyipeng@huawei.com> Reviewed-by:
Zhang Jianhua <chris.zjh@huawei.com> Reviewed-by:
Liao Chang <liaochang1@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
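The mechanism is a guarded warning macro, per the upstream orc_warn_current():

  /* Only warn when unwinding the current task: a non-current task may be
   * running and mutating its own stack, so a bogus frame proves nothing. */
  #define orc_warn_current(args...)                       \
  ({                                                      \
          if (state->task == current)                     \
                  orc_warn(args);                         \
  })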
-
- May 06, 2022
-
-
Thomas Gleixner authored
mainline inclusion from mainline-v5.3-rc1 commit 7652ac92 category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I563FS CVE: NA -------------------------------- The pinning of sensitive CR0 and CR4 bits caused a boot crash when loading the kvm_intel module on a kernel compiled with CONFIG_PARAVIRT=n. The reason is that the static key which controls the pinning is marked RO after init. The kvm_intel module contains a CR4 write which requires to update the static key entry list. That obviously does not work when the key is in a RO section. With CONFIG_PARAVIRT enabled this does not happen because the CR4 write uses the paravirt indirection and the actual write function is built in. As the key is intended to be immutable after init, move native_write_cr0/4() out of line. While at it consolidate the update of the cr4 shadow variable and store the value right away when the pinning is initialized on a booting CPU. No point in reading it back 20 instructions later. This allows to confine the static key and the pinning variable to cpu/common and allows to mark them static. Fixes: 8dbec27a ("x86/asm: Pin sensitive CR0 bits") Fixes: 873d50d5 ("x86/asm: Pin sensitive CR4 bits") Reported-by:
Linus Torvalds <torvalds@linux-foundation.org> Reported-by:
Xi Ruoyao <xry111@mengyan1223.wang> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Tested-by:
Xi Ruoyao <xry111@mengyan1223.wang> Acked-by:
Kees Cook <keescook@chromium.org> Acked-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://lkml.kernel.org/r/alpine.DEB.2.21.1907102140340.1758@nanos.tec.linutronix.de Signed-off-by:
Lin Yujun <linyujun809@huawei.com> Reviewed-by:
Zhang Jianhua <chris.zjh@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
-
Kees Cook authored
mainline inclusion from mainline-v5.3-rc1 commit 873d50d5 category: bugfix bugzilla: https://gitee.com/openeuler/kernel/issues/I563FS CVE: NA
--------------------------------
Several recent exploits have used direct calls to the native_write_cr4() function to disable SMEP and SMAP before then continuing their exploits using userspace memory access. Direct calls of this form can be mitigated by pinning bits of CR4 so that they cannot be changed through a common function. This is not intended to be a general ROP protection (which would require CFI to defend against properly), but rather a way to avoid trivial direct function calling (or CFI bypasses via a matching function prototype) as seen in: https://googleprojectzero.blogspot.com/2017/05/exploiting-linux-kernel-via-packet.html (https://github.com/xairy/kernel-exploits/tree/master/CVE-2017-7308 )

The goals of this change:

 - Pin specific bits (SMEP, SMAP, and UMIP) when writing CR4.
 - Avoid setting the bits too early (they must become pinned only after CPU feature detection and selection has finished).
 - The pinning mask needs to be read-only during normal runtime.
 - Pinning needs to be checked after a write to validate the cr4 state.

Using __ro_after_init on the mask is done so it can't be first disabled with a malicious write. Since these bits are global state (once established by the boot CPU and kernel boot parameters), they are safe to write to secondary CPUs before those CPUs have finished feature detection. As such, the bits are set at the first cr4 write, so that cr4 write bugs can be detected (instead of silently papered over). This uses a few bytes less storage of a location we don't have: read-only per-CPU data.

A check is performed after the register write because an attack could just skip directly to the register write. Such a direct jump is possible because of how this function may be built by the compiler (especially due to the removal of frame pointers) where it doesn't add a stack frame (function exit may only be a retq without pops), which is sufficient for trivial exploitation like in the timer overwrites mentioned above.

The asm argument constraints gain the "+" modifier to convince the compiler that it shouldn't make ordering assumptions about the arguments or memory, and treat them as changed. Signed-off-by:
Kees Cook <keescook@chromium.org> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Dave Hansen <dave.hansen@intel.com> Cc: kernel-hardening@lists.openwall.com Link: https://lkml.kernel.org/r/20190618045503.39105-3-keescook@chromium.org Signed-off-by:
Lin Yujun <linyujun809@huawei.com> Reviewed-by:
Zhang Jianhua <chris.zjh@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
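Taken together with the previous entry, the pinned write ends up shaped like this (a sketch of the consolidated mainline native_write_cr4()):

  void native_write_cr4(unsigned long val)
  {
          unsigned long bits_missing = 0;

  set_register:
          asm volatile("mov %0,%%cr4" : "+r" (val), "+m" (cr4_pinned_bits));

          if (static_branch_likely(&cr_pinning)) {
                  if (unlikely((val & cr4_pinned_bits) != cr4_pinned_bits)) {
                          bits_missing = ~val & cr4_pinned_bits;
                          val |= bits_missing;
                          goto set_register;
                  }
                  /* Warn only after the missing bits have been restored. */
                  WARN_ONCE(bits_missing, "CR4 bits went missing: %lx!?\n",
                            bits_missing);
          }
  }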
-
- Apr 20, 2022
-
-
Josh Poimboeuf authored
stable inclusion from stable-v4.19.234 commit 9711b12a3f4c0fc73dd257c1e467e6e42155a5f1 category: bugfix bugzilla: 186453, https://gitee.com/src-openeuler/kernel/issues/I50WBM CVE: CVE-2022-0001 -------------------------------- commit 0de05d056afdb00eca8c7bbb0c79a3438daf700c upstream. The commit 44a3918c8245 ("x86/speculation: Include unprivileged eBPF status in Spectre v2 mitigation reporting") added a warning for the "eIBRS + unprivileged eBPF" combination, which has been shown to be vulnerable against Spectre v2 BHB-based attacks. However, there's no warning about the "eIBRS + LFENCE retpoline + unprivileged eBPF" combo. The LFENCE adds more protection by shortening the speculation window after a mispredicted branch. That makes an attack significantly more difficult, even with unprivileged eBPF. So at least for now the logic doesn't warn about that combination. But if you then add SMT into the mix, the SMT attack angle weakens the effectiveness of the LFENCE considerably. So extend the "eIBRS + unprivileged eBPF" warning to also include the "eIBRS + LFENCE + unprivileged eBPF + SMT" case. [ bp: Massage commit message. ] Suggested-by:
Alyssa Milburn <alyssa.milburn@linux.intel.com> Signed-off-by:
Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Chen Jiahao <chenjiahao16@huawei.com> Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Reviewed-by:
Liao Chang <liaochang1@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
-
Josh Poimboeuf authored
stable inclusion from stable-v4.19.234 commit 8bfdba77595aee5c3e83ed1c9994c35d6d409605 category: bugfix bugzilla: 186453, https://gitee.com/src-openeuler/kernel/issues/I50WBM CVE: CVE-2022-0001 -------------------------------- commit eafd987d4a82c7bb5aa12f0e3b4f8f3dea93e678 upstream. With: f8a66d608a3e ("x86,bugs: Unconditionally allow spectre_v2=retpoline,amd") it became possible to enable the LFENCE "retpoline" on Intel. However, Intel doesn't recommend it, as it has some weaknesses compared to retpoline. Now AMD doesn't recommend it either. It can still be left available as a cmdline option. It's faster than retpoline but is weaker in certain scenarios -- particularly SMT, but even non-SMT may be vulnerable in some cases. So just unconditionally warn if the user requests it on the cmdline. [ bp: Massage commit message. ] Signed-off-by:
Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Chen Jiahao <chenjiahao16@huawei.com> Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Reviewed-by:
Liao Chang <liaochang1@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
-
Kim Phillips authored
stable inclusion from stable-v4.19.234 commit d3cb3a6927222268a10b2f12dfb8c9444f7cc39e category: bugfix bugzilla: 186453, https://gitee.com/src-openeuler/kernel/issues/I50WBM CVE: CVE-2022-0001 -------------------------------- commit 244d00b5dd4755f8df892c86cab35fb2cfd4f14b upstream. AMD retpoline may be susceptible to speculation. The speculation execution window for an incorrect indirect branch prediction using LFENCE/JMP sequence may potentially be large enough to allow exploitation using Spectre V2. By default, don't use retpoline,lfence on AMD. Instead, use the generic retpoline. Signed-off-by:
Kim Phillips <kim.phillips@amd.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Chen Jiahao <chenjiahao16@huawei.com> Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Reviewed-by:
Liao Chang <liaochang1@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
-
Josh Poimboeuf authored
stable inclusion from stable-v4.19.234 commit 995629e1d8e6751936c6e2b738f70b392b0461de category: bugfix bugzilla: 186453, https://gitee.com/src-openeuler/kernel/issues/I50WBM CVE: CVE-2022-0001 -------------------------------- commit 44a3918c8245ab10c6c9719dd12e7a8d291980d8 upstream. With unprivileged eBPF enabled, eIBRS (without retpoline) is vulnerable to Spectre v2 BHB-based attacks. When both are enabled, print a warning message and report it in the 'spectre_v2' sysfs vulnerabilities file. Signed-off-by:
Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Reviewed-by:
Thomas Gleixner <tglx@linutronix.de> [fllinden@amazon.com: backported to 4.19] Signed-off-by:
Frank van der Linden <fllinden@amazon.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Conflicts: kernel/sysctl.c Signed-off-by:
Chen Jiahao <chenjiahao16@huawei.com> Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Reviewed-by:
Liao Chang <liaochang1@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
-
Peter Zijlstra authored
stable inclusion from stable-v4.19.234 commit 3f66bedb96ff4c064a819e68499f79b38297ba26 category: bugfix bugzilla: 186453, https://gitee.com/src-openeuler/kernel/issues/I50WBM CVE: CVE-2022-0001 -------------------------------- commit 1e19da8522c81bf46b335f84137165741e0d82b7 upstream. Thanks to the chaps at VUsec it is now clear that eIBRS is not sufficient, therefore allow enabling of retpolines along with eIBRS. Add spectre_v2=eibrs, spectre_v2=eibrs,lfence and spectre_v2=eibrs,retpoline options to explicitly pick your preferred means of mitigation. Since there are new mitigations, there are also user-visible changes in /sys/devices/system/cpu/vulnerabilities/spectre_v2 to reflect them. [ bp: Massage commit message, trim error messages, do more precise eIBRS mode checking. ] Co-developed-by:
Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by:
Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by:
Borislav Petkov <bp@suse.de> Reviewed-by:
Patrick Colp <patrick.colp@oracle.com> Reviewed-by:
Thomas Gleixner <tglx@linutronix.de> [fllinden@amazon.com: backported to 4.19 (no Hygon)] Signed-off-by:
Frank van der Linden <fllinden@amazon.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Conflicts: arch/x86/kernel/cpu/bugs.c Signed-off-by:
Chen Jiahao <chenjiahao16@huawei.com> Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Reviewed-by:
Liao Chang <liaochang1@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
-
Peter Zijlstra (Intel) authored
stable inclusion from stable-v4.19.234 commit 25440a8c77dd2fde6a8e9cfc0c616916febf408e category: bugfix bugzilla: 186453, https://gitee.com/src-openeuler/kernel/issues/I50WBM CVE: CVE-2022-0001 -------------------------------- commit d45476d9832409371537013ebdd8dc1a7781f97a upstream. The RETPOLINE_AMD name is unfortunate since it isn't necessarily AMD only, in fact Hygon also uses it. Furthermore it will likely be sufficient for some Intel processors. Therefore rename the thing to RETPOLINE_LFENCE to better describe what it is. Add the spectre_v2=retpoline,lfence option as an alias to spectre_v2=retpoline,amd to preserve existing setups. However, the output of /sys/devices/system/cpu/vulnerabilities/spectre_v2 will be changed. [ bp: Fix typos, massage. ] Co-developed-by:
Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by:
Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by:
Borislav Petkov <bp@suse.de> Reviewed-by:
Thomas Gleixner <tglx@linutronix.de> [fllinden@amazon.com: backported to 4.19] Signed-off-by:
Frank van der Linden <fllinden@amazon.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Conflicts: arch/x86/kernel/cpu/bugs.c Signed-off-by:
Chen Jiahao <chenjiahao16@huawei.com> Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Reviewed-by:
Liao Chang <liaochang1@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
-
Peter Zijlstra authored
stable inclusion from stable-v4.19.234 commit 1dd7efcfcccee59c974f56848739d4f5a5a570c7 category: bugfix bugzilla: 186453, https://gitee.com/src-openeuler/kernel/issues/I50WBM CVE: CVE-2022-0001 -------------------------------- commit f8a66d608a3e471e1202778c2a36cbdc96bae73b upstream. Currently Linux prevents usage of retpoline,amd on !AMD hardware, this is unfriendly and gets in the way of testing. Remove this restriction. Signed-off-by:
Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by:
Borislav Petkov <bp@suse.de> Acked-by:
Josh Poimboeuf <jpoimboe@redhat.com> Tested-by:
Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/r/20211026120310.487348118@infradead.org [fllinden@amazon.com: backported to 4.19 (no Hygon in 4.19)] Signed-off-by:
Frank van der Linden <fllinden@amazon.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Conflicts: arch/x86/kernel/cpu/bugs.c Signed-off-by:
Chen Jiahao <chenjiahao16@huawei.com> Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Reviewed-by:
Liao Chang <liaochang1@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
-
Borislav Petkov authored
stable inclusion from stable-v4.19.234 commit d86fc674172edc0794f84198405f59f00b71512a category: bugfix bugzilla: 186453, https://gitee.com/src-openeuler/kernel/issues/I50WBM CVE: CVE-2022-0001 -------------------------------- commit a5ce9f2b upstream. Merge the test whether the CPU supports STIBP into the test which determines whether STIBP is required. Thus try to simplify what is already an insane logic. Remove a superfluous newline in a comment, while at it. Signed-off-by:
Borislav Petkov <bp@suse.de> Cc: Anthony Steinhauser <asteinhauser@google.com> Link: https://lkml.kernel.org/r/20200615065806.GB14668@zn.tnic [fllinden@amazon.com: fixed contextual conflict (comment) for 4.19] Signed-off-by:
Frank van der Linden <fllinden@amazon.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Conflicts: arch/x86/kernel/cpu/bugs.c Signed-off-by:
Chen Jiahao <chenjiahao16@huawei.com> Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Reviewed-by:
Liao Chang <liaochang1@huawei.com> Signed-off-by:
Yongqiang Liu <liuyongqiang13@huawei.com>
-
- Feb 22, 2022
-
-
Zheng Yejian authored
hulk inclusion category: bugfix bugzilla: 186253, https://gitee.com/openeuler/kernel/issues/I4TYA9 CVE: NA
-----------------------------------------------
As the code below shows, 'strncpy' stops copying once it encounters a NUL character. For example, when 'code' is "53 be 00 0a 05", 'old_code' ends up as "53 be 00 00 00".

> 276 static unsigned char *klp_old_code(unsigned char *code)
> 277 {
> 278         static union klp_code_union old_code;
> 279
> 280         strncpy(old_code.code, code, JMP_E9_INSN_SIZE);
> 281         return old_code.code;
> 282 }

As a result, the instructions cannot be restored completely, and the system becomes abnormal.

Fixes: 7e2ab91e ("livepatch/x86: support livepatch without ftrace") Signed-off-by:
Zheng Yejian <zhengyejian1@huawei.com> Reviewed-by:
Kuohai Xu <xukuohai@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
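The fix replaces the string copy with a length-based memcpy; a sketch of the corrected helper:

  static unsigned char *klp_old_code(unsigned char *code)
  {
          static union klp_code_union old_code;

          /* memcpy copies all JMP_E9_INSN_SIZE bytes, including the 0x00
           * bytes that strncpy treated as a source terminator. */
          memcpy(old_code.code, code, JMP_E9_INSN_SIZE);
          return old_code.code;
  }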
-
- Jan 30, 2022
-
-
Jan H. Schönherr authored
mainline inclusion from mainline-v5.6-rc1 commit 7a8bc2b0 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA -------------------------------- The function mce_severity() is not required to update its msg argument. In fact, mce_severity_amd() does not, which makes mce_no_way_out() return uninitialized data, which may be used later for printing. Assuming that implementations of mce_severity() either always or never update the msg argument (which is currently the case), it is sufficient to initialize the temporary variable in mce_no_way_out(). While at it, avoid printing a useless "Unknown". Signed-off-by:
Jan H. Schönherr <jschoenh@amazon.de> Signed-off-by:
Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200103150722.20313-4-jschoenh@amazon.de Signed-off-by: Jackie Liu <liuyun01@kylinos.cn> #openEuler_contributor Signed-off-by:
Laibin Qiu <qiulaibin@huawei.com> Reviewed-by:
Xiongfeng Wang <wangxiongfeng2@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Yazen Ghannam authored
mainline inclusion from mainline-v5.6-rc1 commit 89a76171 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA -------------------------------- Add support for a new version of the Load Store unit bank type as indicated by its McaType value, which will be present in future SMCA systems. Add the new (HWID, MCATYPE) tuple. Reuse the same name, since this is logically the same to the user. Also, add the new error descriptions to edac_mce_amd. Signed-off-by:
Yazen Ghannam <yazen.ghannam@amd.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20200110015651.14887-2-Yazen.Ghannam@amd.com Signed-off-by: Jackie Liu <liuyun01@kylinos.cn> #openEuler_contributor Signed-off-by:
Laibin Qiu <qiulaibin@huawei.com> Reviewed-by:
Xiongfeng Wang <wangxiongfeng2@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Yazen Ghannam authored
mainline inclusion from mainline-v5.1-rc1 commit 8a5dd2cd category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA -------------------------------- Some SMCA bank types on future systems will report new error types even though the bank type is not treated as a new version. These new error types will reported by bits that are reserved in past systems. Add the new error descriptions to the lists in edac_mce_amd. Signed-off-by:
Yazen Ghannam <yazen.ghannam@amd.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Kees Cook <keescook@chromium.org> Cc: linux-edac <linux-edac@vger.kernel.org> Cc: Mauro Carvalho Chehab <mchehab@kernel.org> Cc: Shirish S <Shirish.S@amd.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/20190201225534.8177-4-Yazen.Ghannam@amd.com Signed-off-by: Jackie Liu <liuyun01@kylinos.cn> #openEuler_contributor Signed-off-by:
Laibin Qiu <qiulaibin@huawei.com> Reviewed-by:
Xiongfeng Wang <wangxiongfeng2@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Yazen Ghannam authored
mainline inclusion from mainline-v5.1-rc1 commit 3ad7e748 category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA -------------------------------- The existing CS, PSP, and SMU SMCA bank types will see new versions (as indicated by their McaTypes) in future SMCA systems. Add the new (HWID, MCATYPE) tuples for these new versions. Reuse the same names as the older versions, since they are logically the same to the user. SMCA systems won't mix and match IP blocks with different McaType versions in the same system, so there isn't a need to distinguish them. The MCA_IPID register is saved when logging an MCA error, and that can be used to triage the error. Also, add the new error descriptions to edac_mce_amd. Some error types (positions in the list) are overloaded compared to the previous McaTypes. Therefore, just create new lists of the error descriptions to keep things simple even if some of the error descriptions are the same between versions. Signed-off-by:
Yazen Ghannam <yazen.ghannam@amd.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Cc: Arnd Bergmann <arnd@arndb.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Kees Cook <keescook@chromium.org> Cc: linux-edac <linux-edac@vger.kernel.org> Cc: Mauro Carvalho Chehab <mchehab@kernel.org> Cc: Pu Wen <puwen@hygon.cn> Cc: Qiuxu Zhuo <qiuxu.zhuo@intel.com> Cc: Shirish S <Shirish.S@amd.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vishal Verma <vishal.l.verma@intel.com> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/20190201225534.8177-3-Yazen.Ghannam@amd.com Signed-off-by: Jackie Liu <liuyun01@kylinos.cn> #openEuler_contributor Signed-off-by:
Laibin Qiu <qiulaibin@huawei.com> Reviewed-by:
Xiongfeng Wang <wangxiongfeng2@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Yazen Ghannam authored
mainline inclusion from mainline-v5.1-rc1 commit cbfa447e category: feature bugzilla: https://gitee.com/openeuler/kernel/issues/I4MKP4 CVE: NA -------------------------------- Add the (HWID, MCATYPE) tuples and names for the new MP5, NBIO, and PCIE SMCA bank types. Also, add their respective error descriptions to the MCE decoding module edac_mce_amd. Signed-off-by:
Yazen Ghannam <yazen.ghannam@amd.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Cc: Arnd Bergmann <arnd@arndb.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Kees Cook <keescook@chromium.org> Cc: linux-edac <linux-edac@vger.kernel.org> Cc: Mauro Carvalho Chehab <mchehab@kernel.org> Cc: Pu Wen <puwen@hygon.cn> Cc: Qiuxu Zhuo <qiuxu.zhuo@intel.com> Cc: Shirish S <Shirish.S@amd.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: Vishal Verma <vishal.l.verma@intel.com> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/20190201225534.8177-2-Yazen.Ghannam@amd.com Signed-off-by: Jackie Liu <liuyun01@kylinos.cn> #openEuler_contributor Signed-off-by:
Laibin Qiu <qiulaibin@huawei.com> Reviewed-by:
Xiongfeng Wang <wangxiongfeng2@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-