- Apr 28, 2022
-
-
Russell King (Oracle) authored
stable inclusion
from stable-v4.19.234
commit 99e14db3b711c27f93079ba9d7f2fff169916d5f
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit b9baf5c8c5c356757f4f9d8180b5e9d234065bc3 upstream.

Workaround the Spectre BHB issues for Cortex-A15, Cortex-A57, Cortex-A72, Cortex-A73 and Cortex-A75. We also include Brahma B15 as well to be safe, which is affected by Spectre V2 in the same ways as Cortex-A15.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
[changes due to lack of SYSTEM_FREEING_INITMEM - gregkh]
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
Russell King (Oracle) authored
stable inclusion
from stable-v4.19.234
commit 67e1f18a972be16363c6e88d7b29cde880774164
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit 8d9d651ff2270a632e9dc497b142db31e8911315 upstream.

Use the linker's LOADADDR() macro to get the load address of the sections, and provide a macro to set the start and end symbols.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
Russell King (Oracle) authored
stable inclusion
from stable-v4.19.234
commit 45c25917ceb7a5377883ef4c3a675276fba8a268
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit 04e91b7324760a377a725e218b5ee783826d30f5 upstream.

Provide a couple of helpers to copy the vectors and stubs, and also to flush the copied vectors and stubs.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
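
To make the "copy, then flush" idea above concrete, here is a minimal, hedged sketch (the helper name and prototype are illustrative, not the actual patch): after vector or stub code is copied into its destination page, the instruction cache must be flushed for that range before the copy may be executed.

```c
#include <linux/string.h>
#include <asm/cacheflush.h>

/*
 * Illustrative helper only: copy a blob of vector/stub code and make sure
 * the instruction cache sees the new contents before it can be executed.
 */
static void copy_and_flush(void *dst, const void *src, size_t len)
{
	memcpy(dst, src, len);
	flush_icache_range((unsigned long)dst, (unsigned long)dst + len);
}
```

Without the flush, a CPU could still fetch stale instructions from the copied range, which is why the two steps are bundled into one helper.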
-
Russell King (Oracle) authored
stable inclusion
from stable-v4.19.234
commit dc64af755099d1e51fd64e99fe3a59b75595814a
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit 9dd78194a3722fa6712192cdd4f7032d45112a9a upstream.

As per other architectures, add support for reporting the Spectre vulnerability status via sysfs CPU.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
[ preserve res variable and add SMCCC_ARCH_WORKAROUND_RET_UNAFFECTED - gregkh ]
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
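
For context, the generic CPU code exposes a weak sysfs hook that each architecture overrides to report its Spectre v2 state. The sketch below is hedged: the enum, variable and the exact strings are illustrative stand-ins, not the arch's real state machine; only the `cpu_show_spectre_v2()` prototype is the generic interface.

```c
#include <linux/cpu.h>
#include <linux/device.h>
#include <linux/kernel.h>

/* Illustrative per-arch state; the real code derives this during boot. */
enum demo_spectre_v2_state { DEMO_UNAFFECTED, DEMO_MITIGATED, DEMO_VULNERABLE };
static enum demo_spectre_v2_state demo_state = DEMO_MITIGATED;

/*
 * Overrides the weak stub in drivers/base/cpu.c and feeds
 * /sys/devices/system/cpu/vulnerabilities/spectre_v2.
 */
ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr,
			    char *buf)
{
	switch (demo_state) {
	case DEMO_UNAFFECTED:
		return sprintf(buf, "Not affected\n");
	case DEMO_MITIGATED:
		return sprintf(buf, "Mitigation: Branch predictor hardening\n");
	default:
		return sprintf(buf, "Vulnerable\n");
	}
}
```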
-
- Apr 26, 2022
-
-
Hangyu Hua authored
mainline inclusion
from mainline-v5.18-rc1
commit 3d3925ff6433f98992685a9679613a2cc97f3ce2
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I51YBQ
CVE: CVE-2022-28388

-------------------------------------------------

There is no need to call dev_kfree_skb() when usb_submit_urb() fails because can_put_echo_skb() deletes original skb and can_free_echo_skb() deletes the cloned skb.

Fixes: 0024d8ad ("can: usb_8dev: Add support for USB2CAN interface from 8 devices")
Link: https://lore.kernel.org/all/20220311080614.45229-1-hbh25y@gmail.com
Cc: stable@vger.kernel.org
Signed-off-by: Hangyu Hua <hbh25y@gmail.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Conflicts: drivers/net/can/usb/usb_8dev.c
Signed-off-by: Ziyang Xuan <william.xuanziyang@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
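
The ownership rule the commit describes can be illustrated with a simplified, hypothetical transmit error path (this is not the driver's actual code; the function name and parameters are invented, and the CAN echo helper signatures follow the 4.19-era API): once the echo layer has taken the skb, a failed URB submission is unwound with `can_free_echo_skb()` only.

```c
#include <linux/can/dev.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/usb.h>

/*
 * Simplified sketch of a start_xmit error path. After can_put_echo_skb()
 * the echo machinery owns the skb, so calling dev_kfree_skb() here as well
 * would free the same buffer twice.
 */
static netdev_tx_t demo_xmit_error_path(struct sk_buff *skb,
					struct net_device *netdev,
					struct urb *urb, unsigned int idx)
{
	can_put_echo_skb(skb, netdev, idx);	/* echo layer owns skb now */

	if (usb_submit_urb(urb, GFP_ATOMIC)) {
		can_free_echo_skb(netdev, idx);	/* drops the echo copy */
		/* no dev_kfree_skb(skb): that would be the double free */
	}
	return NETDEV_TX_OK;
}
```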
-
- Apr 24, 2022
-
-
Yang Jihong authored
hulk inclusion
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I53VHE
CVE: NA

--------------------------------

This reverts commit 03804742.

That patch was used to solve a race between close() and fork() in perf. However, it was not accepted by the community. As a result, the destroy interface is incorrectly invoked during perf_remove_from_context(), causing a UAF; see https://lkml.org/lkml/2019/6/28/856 . For the 4.19 kernel, the final fix has been incorporated, see eb41044b. Therefore, the patch needs to be reverted.

Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
Reviewed-by: Kuohai Xu <xukuohai@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
- Apr 22, 2022
-
-
Duoming Zhou authored
mainline inclusion
from mainline-v5.18-rc1
commit fc6d01ff9ef03b66d4a3a23b46fc3c3d8cf92009
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I53VJO
CVE: CVE-2022-1205

--------------------------------

The previous commit 7ec02f5ac8a5 ("ax25: fix NPD bug in ax25_disconnect") moved ax25_disconnect into lock_sock() in order to prevent NPD bugs. But there are race conditions that may lead to null pointer dereferences in ax25_heartbeat_expiry(), ax25_t1timer_expiry(), ax25_t2timer_expiry(), ax25_t3timer_expiry() and ax25_idletimer_expiry(), when we use ax25_kill_by_device() to detach the ax25 device.

One of the race conditions that cause null pointer dereferences can be shown as below:

     (Thread 1)                    |     (Thread 2)
ax25_connect()                     |
 ax25_std_establish_data_link()    |
  ax25_start_t1timer()             |
   mod_timer(&ax25->t1timer,..)    |
                                   | ax25_kill_by_device()
(wait a time)                      |  ...
                                   |  s->ax25_dev = NULL; //(1)
ax25_t1timer_expiry()              |
 ax25->ax25_dev->values[..] //(2)  |  ...
 ...                               |

We set ax25_cb->ax25_dev to NULL at position (1) and dereference the null pointer at position (2).

The corresponding fail log is shown below:

===============================================================
BUG: kernel NULL pointer dereference, address: 0000000000000050
CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.17.0-rc6-00794-g45690b7d0
RIP: 0010:ax25_t1timer_expiry+0x12/0x40
...
Call Trace:
 call_timer_fn+0x21/0x120
 __run_timers.part.0+0x1ca/0x250
 run_timer_softirq+0x2c/0x60
 __do_softirq+0xef/0x2f3
 irq_exit_rcu+0xb6/0x100
 sysvec_apic_timer_interrupt+0xa2/0xd0
...

This patch moves ax25_disconnect() before s->ax25_dev = NULL and uses del_timer_sync() to delete timers in ax25_disconnect(). If ax25_disconnect() is called by ax25_kill_by_device() or ax25->ax25_dev is NULL, the reason in ax25_disconnect() will be equal to ENETUNREACH, and it will wait for all timers to stop before s->ax25_dev is set to NULL in ax25_kill_by_device().

Fixes: 7ec02f5ac8a5 ("ax25: fix NPD bug in ax25_disconnect")
Signed-off-by: Duoming Zhou <duoming@zju.edu.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
Conflict: net/ax25/af_ax25.c
Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
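
The core of the fix is an ordering rule that generalizes beyond ax25. A minimal sketch, with hypothetical structure and field names, of why the timers have to be stopped synchronously before the pointer they use is cleared:

```c
#include <linux/timer.h>

/*
 * A timer handler that dereferences conn->dev must be stopped with
 * del_timer_sync() before that pointer is cleared; otherwise a handler
 * already running on another CPU can dereference NULL.
 */
struct demo_conn {
	struct timer_list t1timer;
	void *dev;			/* what the expiry handler dereferences */
};

static void demo_detach(struct demo_conn *conn)
{
	del_timer_sync(&conn->t1timer);	/* waits for a running handler */
	conn->dev = NULL;		/* safe: no handler can run anymore */
}
```

del_timer_sync() both deletes a pending timer and waits for a concurrently executing handler to return, which is what closes the window described in the commit message.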
-
Duoming Zhou authored
mainline inclusion
from mainline-v5.17-rc4
commit 7ec02f5ac8a5be5a3f20611731243dc5e1d9ba10
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I53VJO
CVE: CVE-2022-1199

--------------------------------

The ax25_disconnect() in ax25_kill_by_device() is not protected by any locks, thus there is a race condition between ax25_disconnect() and ax25_destroy_socket(). When ax25->sk is assigned NULL by ax25_destroy_socket(), a NULL pointer dereference bug will occur if site (1) or (2) dereferences ax25->sk.

ax25_kill_by_device()            | ax25_release()
 ax25_disconnect()               |  ax25_destroy_socket()
  ...                            |   if(ax25->sk != NULL)
                                 |   ...
  ...                            |   ax25->sk = NULL;
  bh_lock_sock(ax25->sk); //(1)  |   ...
  ...                            |
  bh_unlock_sock(ax25->sk); //(2)|

This patch moves ax25_disconnect() into lock_sock(), which can synchronize with ax25_destroy_socket() in ax25_release().

Fail log:

===============================================================
BUG: kernel NULL pointer dereference, address: 0000000000000088
...
RIP: 0010:_raw_spin_lock+0x7e/0xd0
...
Call Trace:
 ax25_disconnect+0xf6/0x220
 ax25_device_event+0x187/0x250
 raw_notifier_call_chain+0x5e/0x70
 dev_close_many+0x17d/0x230
 rollback_registered_many+0x1f1/0x950
 unregister_netdevice_queue+0x133/0x200
 unregister_netdev+0x13/0x20
...

Signed-off-by: Duoming Zhou <duoming@zju.edu.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
Conflict: net/ax25/af_ax25.c
Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
Duoming Zhou authored
stable inclusion
from linux-4.19.235
commit 5ab8de9377edde3eaf1de9872e2f01d43157cd6c
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I53VJO
CVE: CVE-2022-1199

--------------------------------

[ Upstream commit 71171ac8eb34ce7fe6b3267dce27c313ab3cb3ac ]

When two ax25 devices attempt to establish a connection, the requester uses ax25_create(), ax25_bind() and ax25_connect() to initiate the connection. The receiver uses ax25_rcv() to accept the connection and uses ax25_create_cb() in ax25_rcv() to create the ax25_cb, but ax25_cb->sk is NULL. When the receiver is detaching, a NULL pointer dereference bug caused by sock_hold(sk) in ax25_kill_by_device() will happen. The corresponding fail log is shown below:

===============================================================
BUG: KASAN: null-ptr-deref in ax25_device_event+0xfd/0x290
Call Trace:
...
 ax25_device_event+0xfd/0x290
 raw_notifier_call_chain+0x5e/0x70
 dev_close_many+0x174/0x220
 unregister_netdevice_many+0x1f7/0xa60
 unregister_netdevice_queue+0x12f/0x170
 unregister_netdev+0x13/0x20
 mkiss_close+0xcd/0x140
 tty_ldisc_release+0xc0/0x220
 tty_release_struct+0x17/0xa0
 tty_release+0x62d/0x670
...

This patch adds a condition check in ax25_kill_by_device(). If s->sk is NULL, it will go to the if branch to kill the device.

Fixes: 4e0f718daf97 ("ax25: improve the incomplete fix to avoid UAF and NPD bugs")
Reported-by: Thomas Osterried <thomas@osterried.de>
Signed-off-by: Duoming Zhou <duoming@zju.edu.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
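
A hedged sketch of the added check (names simplified, not the actual ax25 code): control blocks created by the receive path may have no socket attached, so the detach loop must test for NULL before taking a reference.

```c
#include <net/sock.h>

/*
 * Listener-side control blocks may have no socket attached, so the detach
 * path must not sock_hold() a NULL pointer.
 */
static void demo_detach_one(struct sock *sk)
{
	if (!sk) {
		/* no socket: just tear down the device state */
		return;
	}
	sock_hold(sk);		/* keep the socket alive while unlinking */
	/* ... teardown that dereferences sk ... */
	sock_put(sk);
}
```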
-
Duoming Zhou authored
stable inclusion
from linux-4.19.235
commit 3072e72814de56f3c674650a8af98233ddf78b19
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I53VJO
CVE: CVE-2022-1199

--------------------------------

[ Upstream commit 4e0f718daf97d47cf7dec122da1be970f145c809 ]

The previous commit 1ade48d0c27d ("ax25: NPD bug when detaching AX25 device") introduced lock_sock() into ax25_kill_by_device to prevent the NPD bug. But a concurrent NPD or UAF bug will occur when lock_sock() or release_sock() dereferences the ax25_cb->sock.

The NULL pointer dereference bug can be shown as below:

ax25_kill_by_device()        | ax25_release()
                             |  ax25_destroy_socket()
                             |   ax25_cb_del()
 ...                         |    ...
                             |    ax25->sk = NULL;
 lock_sock(s->sk);    //(1)  |
 s->ax25_dev = NULL;         |    ...
 ...                         |
 release_sock(s->sk); //(2)  |
 ...                         |

The root cause is that the sock is set to null before dereference site (1) or (2). Therefore, this patch extracts the ax25_cb->sock in advance, and uses ax25_list_lock to protect it, which can synchronize with ax25_cb_del() and ensure the value of sock is not null before dereference sites.

The concurrency UAF bug can be shown as below:

ax25_kill_by_device()        | ax25_release()
                             |  ax25_destroy_socket()
 ...                         |   ...
                             |   sock_put(sk); //FREE
 lock_sock(s->sk);    //(1)  |
 s->ax25_dev = NULL;         |   ...
 ...                         |
 release_sock(s->sk); //(2)  |
 ...                         |

The root cause is that the sock is released before dereference site (1) or (2). Therefore, this patch uses sock_hold() to increase the refcount of sock and uses ax25_list_lock to protect it, which can synchronize with ax25_cb_del() in ax25_destroy_socket() and ensure the sock will not be released before dereference sites.

Signed-off-by: Duoming Zhou <duoming@zju.edu.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
Lin Ma authored
stable inclusion
from linux-4.19.235
commit bd05a8f1b7368ef4f7845548312fc61ab4fa63ce
category: bugfix
bugzilla: https://gitee.com/src-openeuler/kernel/issues/I53VJO
CVE: CVE-2022-1199

--------------------------------

commit 1ade48d0c27d5da1ccf4b583d8c5fc8b534a3ac8 upstream.

The existing cleanup routine implementation is not well synchronized with the syscall routine. When a device is detaching, the race below could occur.

static int ax25_sendmsg(...) {
  ...
  lock_sock()
  ax25 = sk_to_ax25(sk);
  if (ax25->ax25_dev == NULL)                 // CHECK
  ...
  ax25_queue_xmit(skb, ax25->ax25_dev->dev);  // USE
  ...
}

static void ax25_kill_by_device(...) {
  ...
  if (s->ax25_dev == ax25_dev) {
    s->ax25_dev = NULL;
    ...
}

Other syscall functions like ax25_getsockopt, ax25_getname and ax25_info_show also suffer from similar races. To fix them, this patch introduces lock_sock() into ax25_kill_by_device in order to guarantee that the nullify action in the cleanup routine cannot proceed while another socket request is pending.

Signed-off-by: Hanjie Wu <nagi@zju.edu.cn>
Signed-off-by: Lin Ma <linma@zju.edu.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
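
The pattern behind this fix is the classic check-then-use race: the cleanup path must hold the same lock as the syscall paths so its "set pointer to NULL" store cannot interleave between another thread's NULL check and its use. A minimal, hedged sketch (names simplified, not the actual ax25 code):

```c
#include <net/sock.h>

/*
 * Taking lock_sock() in the cleanup path means the nullify below cannot
 * run while a sendmsg/getsockopt path is between its "dev == NULL" check
 * and its use of dev, because those paths hold the same socket lock.
 */
static void demo_kill_by_device(struct sock *sk, void **devp)
{
	lock_sock(sk);		/* serialize with in-flight socket requests */
	*devp = NULL;		/* safe: no request is between CHECK and USE */
	release_sock(sk);
}
```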
-
Josh Poimboeuf authored
stable inclusion
from stable-v4.19.238
commit 9fbfc77d0f5e04a7434fa6d32112036e2a41bc6e
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I53TS3
CVE: NA

--------------------------------

commit d8dd25a4 upstream.

When the current frame address (CFA) is stored on the stack (i.e., cfa->base == CFI_SP_INDIRECT), objtool neglects to adjust the stack offset when there are subsequent pushes or pops. This results in bad ORC data at the end of the ENTER_IRQ_STACK macro, when it puts the previous stack pointer on the stack and does a subsequent push.

This fixes the following unwinder warning:

  WARNING: can't dereference registers at 00000000f0a6bdba for ip interrupt_entry+0x9f/0xa0

Fixes: 627fce14 ("objtool: Add ORC unwind table generation")
Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Reported-by: Dave Jones <dsj@fb.com>
Reported-by: Steven Rostedt <rostedt@goodmis.org>
Reported-by: Vegard Nossum <vegard.nossum@oracle.com>
Reported-by: Joe Mario <jmario@redhat.com>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Jann Horn <jannh@google.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/853d5d691b29e250333332f09b8e27410b2d9924.1587808742.git.jpoimboe@redhat.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Zou Yipeng <zouyipeng@huawei.com>
Reviewed-by: Zhang Jianhua <chris.zjh@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
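
A hedged sketch of the bookkeeping the fix adds (these are not objtool's real data structures, just an illustration of the idea): when the CFA value itself is stored on the stack, every push moves that slot further from the stack pointer and every pop moves it back, so the tracked offset must follow.

```c
/* Simplified stand-in for objtool's CFA tracking state. */
struct demo_cfa {
	int sp_indirect;	/* CFA is stored at sp + offset */
	int offset;		/* offset of that slot from the stack pointer */
};

static void demo_account_push(struct demo_cfa *cfa, int bytes)
{
	if (cfa->sp_indirect)
		cfa->offset += bytes;	/* slot is now 'bytes' further from sp */
}

static void demo_account_pop(struct demo_cfa *cfa, int bytes)
{
	if (cfa->sp_indirect)
		cfa->offset -= bytes;	/* slot moves back toward sp */
}
```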
-
Josh Poimboeuf authored
stable inclusion
from stable-v4.19.238
commit ac069a960d357a7495fd2a1cde28d9b03250cace
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I53TS3
CVE: NA

--------------------------------

commit 1fb14363 upstream.

In swapgs_restore_regs_and_return_to_usermode, after the stack is switched to the trampoline stack, the existing UNWIND_HINT_REGS hint is no longer valid, which can result in the following ORC unwinder warning:

  WARNING: can't dereference registers at 000000003aeb0cdd for ip swapgs_restore_regs_and_return_to_usermode+0x93/0xa0

For full correctness, we could try to add complicated unwind hints so the unwinder could continue to find the registers, but when it's this close to kernel exit, unwind hints aren't really needed anymore and it's fine to just use an empty hint which tells the unwinder to stop.

For consistency, also move the UNWIND_HINT_EMPTY in entry_SYSCALL_64_after_hwframe to a similar location.

Fixes: 3e3b9293 ("x86/entry/64: Return to userspace from the trampoline stack")
Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Reported-by: Dave Jones <dsj@fb.com>
Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reported-by: Joe Mario <jmario@redhat.com>
Reported-by: Jann Horn <jannh@google.com>
Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/60ea8f562987ed2d9ace2977502fe481c0d7c9a0.1587808742.git.jpoimboe@redhat.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Zou Yipeng <zouyipeng@huawei.com>
Reviewed-by: Zhang Jianhua <chris.zjh@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
- Apr 21, 2022
-
-
Haimin Zhang authored
stable inclusion
from stable-v4.19.238
commit 693fe8af9a2625139de07bd1ae212a7d89c37795
category: bugfix
bugzilla: 186606, https://gitee.com/src-openeuler/kernel/issues/I53SSV
CVE: CVE-2022-1353

--------------------------------

[ Upstream commit 9a564bccb78a76740ea9d75a259942df8143d02c ]

Add the __GFP_ZERO flag for compose_sadb_supported in function pfkey_register to initialize the buffer of supp_skb, fixing a kernel-info-leak issue:
1) Function pfkey_register calls compose_sadb_supported to request a sk_buff.
2) compose_sadb_supported calls alloc_skb to allocate a sk_buff, but it doesn't zero it.
3) If auth_len is greater than 0, then compose_sadb_supported treats the memory as a struct sadb_supported and begins to initialize it. But it only initializes the fields sadb_supported_len and sadb_supported_exttype, not the field sadb_supported_reserved.

Reported-by: TCS Robot <tcs_robot@tencent.com>
Signed-off-by: Haimin Zhang <tcs_kernel@tencent.com>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Xu Jia <xujia39@huawei.com>
Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
Reviewed-by: Wang Weiyang <wangweiyang2@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
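
The gist of the fix fits in one line: request a zeroed buffer, so fields the code never writes (such as sadb_supported_reserved) cannot carry stale kernel memory to user space. A hedged sketch, with an illustrative function name rather than the actual pfkey code:

```c
#include <linux/gfp.h>
#include <linux/skbuff.h>

/*
 * __GFP_ZERO clears the whole skb data area at allocation time, so any
 * reserved or padding fields left untouched later are guaranteed to be 0.
 */
static struct sk_buff *demo_compose_supported(unsigned int len, gfp_t allocation)
{
	return alloc_skb(len, allocation | __GFP_ZERO);
}
```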
-
- Apr 20, 2022
-
-
James Morse authored
stable inclusion
from stable-v4.19.236
commit ed5dec3fae86f20db52930e1d9a7cc38403994cc
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit 228a26b912287934789023b4132ba76065d9491c upstream.

Future CPUs may implement a clearbhb instruction that is sufficient to mitigate SpectreBHB. CPUs that implement this instruction, but not CSV2.3, must be affected by Spectre-BHB.

Add support to use this instruction as the BHB mitigation on CPUs that support it. The instruction is in the hint space, so it will be treated as a NOP by older CPUs.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
[ modified for stable: Use a KVM vector template instead of alternatives ]
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
Joey Gouly authored
stable inclusion
from stable-v4.19.236
commit a44e7ddb5822b943cd50c5ad6a2541fb445d58bd
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit 9e45365f1469ef2b934f9d035975dbc9ad352116 upstream.

This is a new ID register, introduced in 8.7.

Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Alexandru Elisei <alexandru.elisei@arm.com>
Cc: Suzuki K Poulose <suzuki.poulose@arm.com>
Cc: Reiji Watanabe <reijiw@google.com>
Acked-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20211210165432.8106-3-joey.gouly@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion
from stable-v4.19.236
commit 5f051d32b03f08a0507ac1afd7b9c0a30c8e5d59
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit a5905d6af492ee6a4a2205f0d550b3f931b03d03 upstream.

KVM allows the guest to discover whether the ARCH_WORKAROUND SMCCC are implemented, and to preserve that state during migration through its firmware register interface.

Add the necessary boiler plate for SMCCC_ARCH_WORKAROUND_3.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
[ kvm code moved to virt/kvm/arm, removed fw regs ABI. Added 32bit stub ]
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Conflicts:
	arch/arm/include/asm/kvm_host.h
	arch/arm64/include/asm/kvm_host.h
	virt/kvm/arm/psci.c
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion
from stable-v4.19.236
commit c20d551744797000c4af993f7d59ef8c69732949
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit 558c303c9734af5a813739cd284879227f7297d2 upstream.

Speculation attacks against some high-performance processors can make use of branch history to influence future speculation. When taking an exception from user-space, a sequence of branches or a firmware call overwrites or invalidates the branch history.

The sequence of branches is added to the vectors, and should appear before the first indirect branch. For systems using KPTI the sequence is added to the kpti trampoline where it has a free register as the exit from the trampoline is via a 'ret'. For systems not using KPTI, the same register tricks are used to free up a register in the vectors.

For the firmware call, arch-workaround-3 clobbers 4 registers, so there is no choice but to save them to the EL1 stack. This only happens for entry from EL0, so if we take an exception due to the stack access, it will not become re-entrant.

For KVM, the existing branch-predictor-hardening vectors are used. When a spectre version of these vectors is in use, the firmware call is sufficient to mitigate against Spectre-BHB. For the non-spectre versions, the sequence of branches is added to the indirect vector.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: <stable@kernel.org> # <v5.17.x 72bb9dcb6c33c arm64: Add Cortex-X2 CPU part definition
Cc: <stable@kernel.org> # <v5.16.x 2d0d656700d67 arm64: Add Neoverse-N2, Cortex-A710 CPU part definition
Cc: <stable@kernel.org> # <v5.10.x 8a6b88e6 arm64: Add part number for Arm Cortex-A77
[ modified for stable, moved code to cpu_errata.c removed bitmap of mitigations, use kvm template infrastructure ]
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Conflicts:
	arch/arm64/Kconfig
	arch/arm64/include/asm/cpufeature.h
	arch/arm64/include/asm/cputype.h
	arch/arm64/kernel/cpu_errata.c
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion
from stable-v4.19.236
commit a68912a3ae3413be5febcaa40e7e0ec1fd62adee
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

KVM writes the Spectre-v2 mitigation template at the beginning of each vector when a CPU requires a specific sequence to run. Because the template is copied, it can not be modified by the alternatives at runtime. As the KVM template code is intertwined with the bp-hardening callbacks, all templates must have a bp-hardening callback.

Add templates for calling ARCH_WORKAROUND_3 and one for each value of K in the branchy loop. Identify these sequences by a new parameter template_start, and add a copy of install_bp_hardening_cb() that is able to install them.

Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Conflicts: arch/arm64/include/asm/cpucaps.h
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion
from stable-v4.19.236
commit 7b012f6597e55a2ea4c7efe94b5d9a792b6e5757
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit dee435be76f4117410bbd90573a881fd33488f37 upstream.

Speculation attacks against some high-performance processors can make use of branch history to influence future speculation as part of a spectre-v2 attack. This is not mitigated by CSV2, meaning CPUs that previously reported 'Not affected' are now moderately mitigated by CSV2.

Update the value in /sys/devices/system/cpu/vulnerabilities/spectre_v2 to also show the state of the BHB mitigation.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
[ code move to cpu_errata.c for backport ]
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Conflicts: arch/arm64/include/asm/cpufeature.h
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion
from stable-v4.19.236
commit 5b5ca2608fbd6f250281b6a1d0d73613f250e6f1
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit bd09128d16fac3c34b80bd6a29088ac632e8ce09 upstream.

The Spectre-BHB workaround adds a firmware call to the vectors. This is needed on some CPUs, but not others. To avoid the unaffected CPU in a big/little pair from making the firmware call, create per cpu vectors.

The per-cpu vectors only apply when returning from EL0.

Systems using KPTI can use the canonical 'full-fat' vectors directly at EL1, the trampoline exit code will switch to this_cpu_vector on exit to EL0. Systems not using KPTI should always use this_cpu_vector.

this_cpu_vector will point at a vector in tramp_vecs or __bp_harden_el1_vectors, depending on whether KPTI is in use.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Conflicts:
	arch/arm64/kernel/cpufeature.c
	arch/arm64/kvm/hyp/switch.c
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion
from stable-v4.19.236
commit e18876b523d5f5fd8b8f34721f60a470caf20aa1
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit b28a8eebe81c186fdb1a0078263b30576c8e1f42 upstream.

The trampoline code needs to use the address of symbols in the wider kernel, e.g. vectors. PC-relative addressing wouldn't work as the trampoline code doesn't run at the address the linker expected.

tramp_ventry uses a literal pool, unless CONFIG_RANDOMIZE_BASE is set, in which case it uses the data page as a literal pool because the data page can be unmapped when running in user-space, which is required for CPUs vulnerable to meltdown.

Pull this logic out as a macro, instead of adding a third copy of it.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion
from stable-v4.19.236
commit 91429ed04ebe9dbec88f97c6fd136b722bc3f3c5
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit ba2689234be92024e5635d30fe744f4853ad97db upstream.

Some CPUs affected by Spectre-BHB need a sequence of branches, or a firmware call to be run before any indirect branch. This needs to go in the vectors. No CPU needs both.

While this can be patched in, it would run on all CPUs as there is a single set of vectors. If only one part of a big/little combination is affected, the unaffected CPUs have to run the mitigation too.

Create extra vectors that include the sequence. Subsequent patches will allow affected CPUs to select this set of vectors. Later patches will modify the loop count to match what the CPU requires.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion
from stable-v4.19.236
commit 901c0a20aa94d09a9328899e2dd69a8d43a3a920
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit aff65393fa1401e034656e349abd655cfe272de0 upstream.

kpti is an optional feature, for systems not using kpti a set of vectors for the spectre-bhb mitigations is needed.

Add another set of vectors, __bp_harden_el1_vectors, that will be used if a mitigation is needed and kpti is not in use. The EL1 ventries are repeated verbatim as there is no additional work needed for entry from EL1.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion
from stable-v4.19.236
commit 22fdfcf1c2cea8e6dc383d46cbbe59d476d24a96
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit a9c406e6462ff14956d690de7bbe5131a5677dc9 upstream.

Adding a second set of vectors to .entry.tramp.text will make it larger than a single 4K page.

Allow the trampoline text to occupy up to three pages by adding two more fixmap slots. Previous changes to tramp_valias allowed it to reach beyond a single page.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion
from stable-v4.19.236
commit 9e056623dfc538909ed2a914f70a66d68ec71ec3
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit c47e4d04ba0f1ea17353d85d45f611277507e07a upstream.

Spectre-BHB needs to add sequences to the vectors. Having one global set of vectors is a problem for big/little systems where the sequence is costly on cpus that are not vulnerable.

Making the vectors per-cpu in the style of KVM's bh_harden_hyp_vecs requires the vectors to be generated by macros.

Make the kpti re-mapping of the kernel optional, so the macros can be used without kpti.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion
from stable-v4.19.236
commit f689fa53bb944873f75fe1584f446cae1aabd2c1
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit 13d7a08352a83ef2252aeb464a5e08dfc06b5dfd upstream.

The macros for building the kpti trampoline are all behind CONFIG_UNMAP_KERNEL_AT_EL0, and in a region that outputs to the .entry.tramp.text section.

Move the macros out so they can be used to generate other kinds of trampoline. Only the symbols need to be guarded by CONFIG_UNMAP_KERNEL_AT_EL0 and appear in the .entry.tramp.text section.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion
from stable-v4.19.236
commit af484e69b5e83095609d8b5c8abaf13a5460229e
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit ed50da7764535f1e24432ded289974f2bf2b0c5a upstream.

The tramp_ventry macro uses tramp_vectors as the address of the vectors when calculating which ventry in the 'full fat' vectors to branch to.

While there is one set of tramp_vectors, this will be true. Adding multiple sets of vectors will break this assumption.

Move the generation of the vectors to a macro, and pass the start of the vectors as an argument to tramp_ventry.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion
from stable-v4.19.236
commit ebcdd80d0016c7445e8395cec99b9ce266a26001
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit 6c5bf79b69f911560fbf82214c0971af6e58e682 upstream.

Systems using kpti enter and exit the kernel through a trampoline mapping that is always mapped, even when the kernel is not. tramp_valias is a macro to find the address of a symbol in the trampoline mapping.

Adding extra sets of vectors will expand the size of the entry.tramp.text section to beyond 4K. tramp_valias will be unable to generate addresses for symbols beyond 4K as it uses the 12 bit immediate of the add instruction.

As there are now two registers available when tramp_alias is called, use the extra register to avoid the 4K limit of the 12 bit immediate.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion
from stable-v4.19.236
commit 266b1ef1368e06ac4c5a89eb9774ef2bbaa54e19
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit c091fb6ae059cda563b2a4d93fdbc548ef34e1d6 upstream.

The trampoline code has a data page that holds the address of the vectors, which is unmapped when running in user-space. This ensures that with CONFIG_RANDOMIZE_BASE, the randomised address of the kernel can't be discovered until after the kernel has been mapped.

If the trampoline text page is extended to include multiple sets of vectors, it will be larger than a single page, making it tricky to find the data page without knowing the size of the trampoline text pages, which will vary with PAGE_SIZE.

Move the data page to appear before the text page. This allows the data page to be found without knowing the size of the trampoline text pages. 'tramp_vectors' is used to refer to the beginning of the .entry.tramp.text section, do that explicitly.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion
from stable-v4.19.236
commit 51acb81130d1feee7fd043760b75f5377ab8d4f0
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit 03aff3a77a58b5b52a77e00537a42090ad57b80b upstream.

Kpti stashes x30 in far_el1 while it uses x30 for all its work.

Making the vectors a per-cpu data structure will require a second register.

Allow tramp_exit two registers before it unmaps the kernel, by leaving x30 on the stack, and stashing x29 in far_el1.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion
from stable-v4.19.236
commit 87eccd56c52fcdd6c55b048d789da5c9c2e51ed3
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit d739da1694a0eaef0358a42b76904b611539b77b upstream.

Subsequent patches will add additional sets of vectors that use the same tricks as the kpti vectors to reach the full-fat vectors. The full-fat vectors contain some cleanup for kpti that is patched in by alternatives when kpti is in use. Once there are additional vectors, the cleanup will be needed in more cases.

But on big/little systems, the cleanup would be harmful if no trampoline vector were in use. Instead of forcing CPUs that don't need a trampoline vector to use one, make the trampoline cleanup optional.

Entry at the top of the vectors will skip the cleanup. The trampoline vectors can then skip the first instruction, triggering the cleanup to run.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
James Morse authored
stable inclusion
from stable-v4.19.236
commit e8bfe29afc09ac77b347540a0f4c789e6530a436
category: bugfix
bugzilla: 186460, https://gitee.com/src-openeuler/kernel/issues/I53MHA
CVE: CVE-2022-23960

--------------------------------

commit 4330e2c5c04c27bebf89d34e0bc14e6943413067 upstream.

Subsequent patches add even more code to the ventry slots. Ensure kernels that overflow a ventry slot don't get built.

Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
Josh Poimboeuf authored
stable inclusion
from stable-v4.19.234
commit 9711b12a3f4c0fc73dd257c1e467e6e42155a5f1
category: bugfix
bugzilla: 186453, https://gitee.com/src-openeuler/kernel/issues/I50WBM
CVE: CVE-2022-0001

--------------------------------

commit 0de05d056afdb00eca8c7bbb0c79a3438daf700c upstream.

The commit 44a3918c8245 ("x86/speculation: Include unprivileged eBPF status in Spectre v2 mitigation reporting") added a warning for the "eIBRS + unprivileged eBPF" combination, which has been shown to be vulnerable against Spectre v2 BHB-based attacks.

However, there's no warning about the "eIBRS + LFENCE retpoline + unprivileged eBPF" combo. The LFENCE adds more protection by shortening the speculation window after a mispredicted branch. That makes an attack significantly more difficult, even with unprivileged eBPF. So at least for now the logic doesn't warn about that combination.

But if you then add SMT into the mix, the SMT attack angle weakens the effectiveness of the LFENCE considerably.

So extend the "eIBRS + unprivileged eBPF" warning to also include the "eIBRS + LFENCE + unprivileged eBPF + SMT" case.

[ bp: Massage commit message. ]

Suggested-by: Alyssa Milburn <alyssa.milburn@linux.intel.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
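
A hedged sketch of the combined condition being described (the two helper functions are stand-ins for state the real mitigation code tracks internally, not kernel APIs): the warning only fires once the eIBRS+LFENCE mode, unprivileged eBPF, and SMT are all in play.

```c
#include <linux/printk.h>
#include <linux/sched/smt.h>
#include <linux/types.h>

/* Stand-ins for state the mitigation code would track at boot/runtime. */
static bool demo_using_eibrs_lfence(void)  { return true; }
static bool demo_unpriv_ebpf_enabled(void) { return true; }

static void demo_check_spectre_v2_combo(void)
{
	/* LFENCE narrows the speculation window, but SMT plus unprivileged
	 * eBPF is considered enough to make the combination attackable. */
	if (demo_using_eibrs_lfence() && demo_unpriv_ebpf_enabled() &&
	    sched_smt_active())
		pr_warn_once("Spectre v2: eIBRS+LFENCE with unprivileged eBPF and SMT may be vulnerable to BHB attacks\n");
}
```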
-
Josh Poimboeuf authored
stable inclusion
from stable-v4.19.234
commit 8bfdba77595aee5c3e83ed1c9994c35d6d409605
category: bugfix
bugzilla: 186453, https://gitee.com/src-openeuler/kernel/issues/I50WBM
CVE: CVE-2022-0001

--------------------------------

commit eafd987d4a82c7bb5aa12f0e3b4f8f3dea93e678 upstream.

With: f8a66d608a3e ("x86,bugs: Unconditionally allow spectre_v2=retpoline,amd") it became possible to enable the LFENCE "retpoline" on Intel. However, Intel doesn't recommend it, as it has some weaknesses compared to retpoline.

Now AMD doesn't recommend it either.

It can still be left available as a cmdline option. It's faster than retpoline but is weaker in certain scenarios -- particularly SMT, but even non-SMT may be vulnerable in some cases.

So just unconditionally warn if the user requests it on the cmdline.

[ bp: Massage commit message. ]

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
Kim Phillips authored
stable inclusion
from stable-v4.19.234
commit c034d344e733a3ac574dd09e39e911a50025c607
category: bugfix
bugzilla: 186453, https://gitee.com/src-openeuler/kernel/issues/I50WBM
CVE: CVE-2022-0001

--------------------------------

commit e9b6013a7ce31535b04b02ba99babefe8a8599fa upstream.

Update the link to the "Software Techniques for Managing Speculation on AMD Processors" whitepaper.

Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
Kim Phillips authored
stable inclusion
from stable-v4.19.234
commit d3cb3a6927222268a10b2f12dfb8c9444f7cc39e
category: bugfix
bugzilla: 186453, https://gitee.com/src-openeuler/kernel/issues/I50WBM
CVE: CVE-2022-0001

--------------------------------

commit 244d00b5dd4755f8df892c86cab35fb2cfd4f14b upstream.

AMD retpoline may be susceptible to speculation. The speculation execution window for an incorrect indirect branch prediction using LFENCE/JMP sequence may potentially be large enough to allow exploitation using Spectre V2.

By default, don't use retpoline,lfence on AMD. Instead, use the generic retpoline.

Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
Josh Poimboeuf authored
stable inclusion
from stable-v4.19.234
commit 995629e1d8e6751936c6e2b738f70b392b0461de
category: bugfix
bugzilla: 186453, https://gitee.com/src-openeuler/kernel/issues/I50WBM
CVE: CVE-2022-0001

--------------------------------

commit 44a3918c8245ab10c6c9719dd12e7a8d291980d8 upstream.

With unprivileged eBPF enabled, eIBRS (without retpoline) is vulnerable to Spectre v2 BHB-based attacks.

When both are enabled, print a warning message and report it in the 'spectre_v2' sysfs vulnerabilities file.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
[fllinden@amazon.com: backported to 4.19]
Signed-off-by: Frank van der Linden <fllinden@amazon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Conflicts: kernel/sysctl.c
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
Peter Zijlstra authored
stable inclusion
from stable-v4.19.234
commit 7af95ef3ec6248696300fce5c68f6c8c4f50e4a4
category: bugfix
bugzilla: 186453, https://gitee.com/src-openeuler/kernel/issues/I50WBM
CVE: CVE-2022-0001

--------------------------------

commit 5ad3eb1132453b9795ce5fd4572b1c18b292cca9 upstream.

Update the doc with the new fun.

[ bp: Massage commit message. ]

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
[fllinden@amazon.com: backported to 4.19]
Signed-off-by: Frank van der Linden <fllinden@amazon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-
Peter Zijlstra authored
stable inclusion
from stable-v4.19.234
commit 3f66bedb96ff4c064a819e68499f79b38297ba26
category: bugfix
bugzilla: 186453, https://gitee.com/src-openeuler/kernel/issues/I50WBM
CVE: CVE-2022-0001

--------------------------------

commit 1e19da8522c81bf46b335f84137165741e0d82b7 upstream.

Thanks to the chaps at VUsec it is now clear that eIBRS is not sufficient, therefore allow enabling of retpolines along with eIBRS.

Add spectre_v2=eibrs, spectre_v2=eibrs,lfence and spectre_v2=eibrs,retpoline options to explicitly pick your preferred means of mitigation.

Since there's new mitigations there's also user visible changes in /sys/devices/system/cpu/vulnerabilities/spectre_v2 to reflect these new mitigations.

[ bp: Massage commit message, trim error messages, do more precise eIBRS mode checking. ]

Co-developed-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Patrick Colp <patrick.colp@oracle.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
[fllinden@amazon.com: backported to 4.19 (no Hygon)]
Signed-off-by: Frank van der Linden <fllinden@amazon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Conflicts: arch/x86/kernel/cpu/bugs.c
Signed-off-by: Chen Jiahao <chenjiahao16@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Reviewed-by: Liao Chang <liaochang1@huawei.com>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
-