- Jun 28, 2020
-
-
Anthony Steinhauser authored
commit dbbe2ad0 upstream. On context switch the changes of TIF_SSBD and TIF_SPEC_IB are evaluated to adjust the mitigations accordingly. This is optimized to avoid the expensive MSR write if not needed. This optimization is buggy and allows an attacker to shut down the SSBD protection of a victim process. The update logic reads the cached base value for the speculation control MSR, which has neither the SSBD nor the STIBP bit set. It then OR's in the SSBD bit only when TIF_SSBD differs and requests the MSR update. That means if TIF_SSBD of the previous and next task is the same, the base value is not updated even if TIF_SSBD is set, and the MSR write is not requested. Subsequently, if the TIF_SPEC_IB bit differs, the STIBP bit is updated in the base value and the MSR is written with a wrong SSBD value. This was introduced when the per task/process conditional STIBP switching was added on top of the existing SSBD switching. It is exploitable if the attacker creates a process which enforces SSBD and has the opposite STIBP setting to the victim process (i.e. if the victim process enforces STIBP, the attacker process must not enforce it; if the victim process does not enforce STIBP, the attacker process must enforce it) and schedules it on the same core as the victim process. If the victim runs after the attacker, the victim becomes vulnerable to Spectre V4. To fix this, update the MSR value independent of the TIF_SSBD difference and dependent on the SSBD mitigation method available. This ensures that a subsequent STIBP-initiated MSR write has the correct state of SSBD. [ tglx: Handle X86_FEATURE_VIRT_SSBD & X86_FEATURE_LS_CFG_SSBD correctly and massaged changelog ] Fixes: 5bfbe3ad ("x86/speculation: Prepare for per task indirect branch speculation control") Signed-off-by:
Anthony Steinhauser <asteinhauser@google.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
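A minimal sketch of the corrected update logic described above, assuming simplified flag handling; spec_ctrl_base and the helper shape stand in for the real kernel symbols and this is not the actual patch (which also covers the VIRT_SSBD/LS_CFG paths):

  /*
   * Hedged sketch: derive the SSBD bit from the *next* task's flag
   * unconditionally, so a STIBP-only change can no longer write the MSR
   * with a stale (cleared) SSBD bit.
   */
  static void spec_ctrl_update(unsigned long tifp, unsigned long tifn)
  {
      u64 msr = spec_ctrl_base;   /* cached value: no SSBD/STIBP bits set */
      bool updmsr = !!((tifp ^ tifn) & (_TIF_SPEC_IB | _TIF_SSBD));

      if (tifn & _TIF_SSBD)       /* always reflect the next task's state */
          msr |= SPEC_CTRL_SSBD;
      if (tifn & _TIF_SPEC_IB)
          msr |= SPEC_CTRL_STIBP;

      if (updmsr)
          wrmsrl(MSR_IA32_SPEC_CTRL, msr);
  }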
-
Alex Williamson authored
mainline inclusion from mainline-v5.8-rc1 commit abafbc55 category: bugfix bugzilla: NA CVE: CVE-2020-12888 --------------------------- Accessing the disabled memory space of a PCI device would typically result in a master abort response on conventional PCI, or an unsupported request on PCI express. The user would generally see these as a -1 response for the read return data and the write would be silently discarded, possibly with an uncorrected, non-fatal AER error triggered on the host. Some systems however take it upon themselves to bring down the entire system when they see something that might indicate a loss of data, such as this discarded write to a disabled memory space. To avoid this, we want to try to block the user from accessing memory spaces while they're disabled. We start with a semaphore around the memory enable bit, where writers modify the memory enable state and must be serialized, while readers make use of the memory region and can access in parallel. Writers include both direct manipulation via the command register, as well as any reset path where the internal mechanics of the reset may both explicitly and implicitly disable memory access, and manipulation of the MSI-X configuration, where the MSI-X vector table resides in MMIO space of the device. Readers include the read and write file ops to access the vfio device fd offsets as well as memory mapped access. In the latter case, we make use of our new vma list support to zap, or invalidate, those memory mappings in order to force them to be faulted back in on access. Our semaphore usage will stall user access to MMIO spaces across internal operations like reset, but the user might experience new behavior when trying to access the MMIO space while disabled via the PCI command register. Access via read or write while disabled will return -EIO and access via memory maps will result in a SIGBUS. This is expected to be compatible with known use cases and potentially provides better error handling capabilities than present in the hardware, while avoiding the more readily accessible and severe platform error responses that might otherwise occur. Fixes: CVE-2020-12888 Reviewed-by:
Peter Xu <peterx@redhat.com> Signed-off-by:
Alex Williamson <alex.williamson@redhat.com> Conflicts: drivers/vfio/pci/vfio_pci_private.h drivers/vfio/pci/vfio_pci.c [yyl: adjust context] Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Reviewed-by:
Jason Yan <yanaijie@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
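A rough sketch of the read-side rule described above, assuming a per-device rw_semaphore and a cached view of the command register; the field and helper names (memory_lock, __vfio_pci_memory_enabled(), __mmio_rw()) are illustrative, not the exact vfio-pci code:

  /*
   * Hedged sketch: readers of the MMIO region take the semaphore shared
   * and bail out with -EIO if memory space is disabled; writers that
   * toggle the enable bit (command write, reset, MSI-X setup) take it
   * exclusively elsewhere.
   */
  static ssize_t mmio_rw(struct vfio_pci_device *vdev, char __user *buf,
                         size_t count, loff_t *ppos, bool iswrite)
  {
      ssize_t ret;

      down_read(&vdev->memory_lock);
      if (!__vfio_pci_memory_enabled(vdev)) {   /* MEM bit clear */
          up_read(&vdev->memory_lock);
          return -EIO;                          /* instead of a platform abort */
      }
      ret = __mmio_rw(vdev, buf, count, ppos, iswrite);  /* hypothetical helper */
      up_read(&vdev->memory_lock);
      return ret;
  }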
-
Alex Williamson authored
mainline inclusion from mainline-v5.8-rc1 commit 11c4cd07 category: bugfix bugzilla: NA CVE: CVE-2020-12888 --------------------------- Rather than calling remap_pfn_range() when a region is mmap'd, setup a vm_ops handler to support dynamic faulting of the range on access. This allows us to manage a list of vmas actively mapping the area that we can later use to invalidate those mappings. The open callback invalidates the vma range so that all tracking is inserted in the fault handler and removed in the close handler. Reviewed-by:
Peter Xu <peterx@redhat.com> Signed-off-by:
Alex Williamson <alex.williamson@redhat.com> Conflicts: drivers/vfio/pci/vfio_pci.c drivers/vfio/pci/vfio_pci_private.h [yyl: adjust context] Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Reviewed-by:
Jason Yan <yanaijie@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
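A condensed sketch of the fault-based mapping scheme described above; list handling and locking are simplified and the tracking helper is hypothetical, not the exact vfio-pci code:

  /*
   * Hedged sketch: nothing is mapped at mmap() time; the fault handler
   * inserts the PFNs on first access and records the vma so it can later
   * be zapped (invalidated) to force a re-fault.
   */
  static vm_fault_t vdev_vma_fault(struct vm_fault *vmf)
  {
      struct vm_area_struct *vma = vmf->vma;
      struct vfio_pci_device *vdev = vma->vm_private_data;

      mutex_lock(&vdev->vma_lock);
      vdev_track_vma(vdev, vma);      /* hypothetical: add vma to vdev->vma_list */
      mutex_unlock(&vdev->vma_lock);

      if (remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
                          vma->vm_end - vma->vm_start, vma->vm_page_prot))
          return VM_FAULT_SIGBUS;

      return VM_FAULT_NOPAGE;
  }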
-
Alex Williamson authored
mainline inclusion from mainline-v5.8-rc1 commit 41311242 category: bugfix bugzilla: NA CVE: CVE-2020-12888 --------------------------- With conversion to follow_pfn(), DMA mapping a PFNMAP range depends on the range being faulted into the vma. Add support to manually provide that, in the same way as done on KVM with hva_to_pfn_remapped(). Reviewed-by:
Peter Xu <peterx@redhat.com> Signed-off-by:
Alex Williamson <alex.williamson@redhat.com> Conflicts: drivers/vfio/vfio_iommu_type1.c [yyl: adjust context] Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Reviewed-by:
Jason Yan <yanaijie@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
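A small sketch of the fallback described above, assuming the follow_pfn()-based lookup of the related patch below; fixup_user_fault() is the mechanism KVM's hva_to_pfn_remapped() relies on, and error handling is trimmed:

  /*
   * Hedged sketch: if the PFNMAP range has not been faulted in yet,
   * follow_pfn() fails; fault it in manually and retry once.
   */
  static int pfnmap_fault_and_retry(struct vm_area_struct *vma,
                                    struct mm_struct *mm,
                                    unsigned long vaddr, unsigned long *pfn)
  {
      int ret = follow_pfn(vma, vaddr, pfn);

      if (ret) {
          bool unlocked = false;

          ret = fixup_user_fault(NULL, mm, vaddr,
                                 FAULT_FLAG_REMOTE, &unlocked);
          if (ret)
              return ret;
          ret = follow_pfn(vma, vaddr, pfn);   /* retry after the fault */
      }
      return ret;
  }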
-
Sean Christopherson authored
commit 5cbf3264 upstream. Use follow_pfn() to get the PFN of a PFNMAP VMA instead of assuming that vma->vm_pgoff holds the base PFN of the VMA. This fixes a bug where attempting to do VFIO_IOMMU_MAP_DMA on an arbitrary PFNMAP'd region of memory calculates garbage for the PFN. Hilariously, this only got detected because the first "PFN" calculated by vaddr_get_pfn() is PFN 0 (vma->vm_pgoff==0), and iommu_iova_to_phys() uses PA==0 as an error, which triggers a WARN in vfio_unmap_unpin() because the translation "failed". PFN 0 is now unconditionally reserved on x86 in order to mitigate L1TF, which causes is_invalid_reserved_pfn() to return true and in turns results in vaddr_get_pfn() returning success for PFN 0. Eventually the bogus calculation runs into PFNs that aren't reserved and leads to failure in vfio_pin_map_dma(). The subsequent call to vfio_remove_dma() attempts to unmap PFN 0 and WARNs. WARNING: CPU: 8 PID: 5130 at drivers/vfio/vfio_iommu_type1.c:750 vfio_unmap_unpin+0x2e1/0x310 [vfio_iommu_type1] Modules linked in: vfio_pci vfio_virqfd vfio_iommu_type1 vfio ... CPU: 8 PID: 5130 Comm: sgx Tainted: G W 5.6.0-rc5-705d787c7fee-vfio+ #3 Hardware name: Intel Corporation Mehlow UP Server Platform/Moss Beach Server, BIOS CNLSE2R1.D00.X119.B49.1803010910 03/01/2018 RIP: 0010:vfio_unmap_unpin+0x2e1/0x310 [vfio_iommu_type1] Code: <0f> 0b 49 81 c5 00 10 00 00 e9 c5 fe ff ff bb 00 10 00 00 e9 3d fe RSP: 0018:ffffbeb5039ebda8 EFLAGS: 00010246 RAX: 0000000000000000 RBX: ffff9a55cbf8d480 RCX: 0000000000000000 RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff9a52b771c200 RBP: 0000000000000000 R08: 0000000000000040 R09: 00000000fffffff2 R10: 0000000000000001 R11: ffff9a51fa896000 R12: 0000000184010000 R13: 0000000184000000 R14: 0000000000010000 R15: ffff9a55cb66ea08 FS: 00007f15d3830b40(0000) GS:ffff9a55d5600000(0000) knlGS:0000000000000000 CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 CR2: 0000561cf39429e0 CR3: 000000084f75f005 CR4: 00000000003626e0 DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 Call Trace: vfio_remove_dma+0x17/0x70 [vfio_iommu_type1] vfio_iommu_type1_ioctl+0x9e3/0xa7b [vfio_iommu_type1] ksys_ioctl+0x92/0xb0 __x64_sys_ioctl+0x16/0x20 do_syscall_64+0x4c/0x180 entry_SYSCALL_64_after_hwframe+0x44/0xa9 RIP: 0033:0x7f15d04c75d7 Code: <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 81 48 2d 00 f7 d8 64 89 01 48 Fixes: 73fa0d10 ("vfio: Type1 IOMMU implementation") Signed-off-by:
Sean Christopherson <sean.j.christopherson@intel.com> Signed-off-by:
Alex Williamson <alex.williamson@redhat.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
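A minimal illustration of the change described above, assuming the caller already holds the mmap semaphore; the helper name is made up:

  /*
   * Hedged sketch: resolve a user address in a VM_PFNMAP vma by walking
   * the page tables with follow_pfn() instead of assuming vma->vm_pgoff
   * is the base PFN of the mapping.
   */
  static int pfnmap_vaddr_to_pfn(struct mm_struct *mm, unsigned long vaddr,
                                 unsigned long *pfn)
  {
      struct vm_area_struct *vma = find_vma(mm, vaddr);

      if (!vma || vaddr < vma->vm_start || !(vma->vm_flags & VM_PFNMAP))
          return -EFAULT;

      return follow_pfn(vma, vaddr, pfn);   /* works for any offset in the vma */
  }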
-
Eric Auger authored
[ Upstream commit 0cfd027b ] pci_map_rom()/pci_get_rom_size() perform memory accesses in the ROM. If Memory Space accesses are disabled, readw() is likely to trigger a synchronous external abort on some platforms. In case memory accesses were disabled, re-enable them before the call and disable them again just after. Fixes: 89e1f7d4 ("vfio: Add PCI device driver") Signed-off-by:
Eric Auger <eric.auger@redhat.com> Suggested-by:
Alex Williamson <alex.williamson@redhat.com> Signed-off-by:
Alex Williamson <alex.williamson@redhat.com> Signed-off-by:
Sasha Levin <sashal@kernel.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
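A hedged sketch of the bracketing described above: temporarily set the MEM bit in PCI_COMMAND around the ROM access, then restore the original value. Simplified from the description, not the exact vfio-pci hunk:

  static size_t probe_rom_size(struct pci_dev *pdev)
  {
      u16 orig, cmd;
      size_t size = 0;
      void __iomem *io;

      pci_read_config_word(pdev, PCI_COMMAND, &orig);
      cmd = orig | PCI_COMMAND_MEMORY;
      if (cmd != orig)
          pci_write_config_word(pdev, PCI_COMMAND, cmd);   /* enable memory space */

      io = pci_map_rom(pdev, &size);    /* readw() of the ROM is now safe */
      if (io)
          pci_unmap_rom(pdev, io);

      if (cmd != orig)
          pci_write_config_word(pdev, PCI_COMMAND, orig);  /* put it back */

      return size;
  }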
-
- Jun 20, 2020
-
-
Chiqijun authored
driver inclusion category: bugfix bugzilla: 4472 ----------------------------------------------------------------------- The statistics saved by the driver are 110K in size, but the tool only obtains 4K at a time. When the last chunk is obtained, the length calculation is wrong, which may lead to an out-of-bounds copy. Signed-off-by:
Chiqijun <chiqijun@huawei.com> Reviewed-by:
Zengweiliang <zengweiliang.zengweiliang@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
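A tiny illustration of the bound the fix implies, with made-up names; the real driver copies a ~110K statistics buffer in 4K chunks:

  /*
   * Hedged sketch: the length of the last chunk must be clamped to what
   * is actually left, otherwise the copy runs past the end of the buffer.
   */
  static size_t stats_chunk_len(size_t total, size_t offset, size_t chunk)
  {
      if (offset >= total)
          return 0;
      return (total - offset < chunk) ? total - offset : chunk;
  }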
-
Yu'an Wang authored
driver inclusion category: bugfix bugzilla: NA CVE: NA In this patch, add parameter checks to uacce_hw_err_isolate()/uacce_wake_up()/uacce_register() to avoid the risk of operating on invalid input. Signed-off-by:
Yu'an Wang <wangyuan46@huawei.com> Reviewed-by:
Weiwei Deng <dengweiwei@huawei.com> Reviewed-by:
Cheng Hu <hucheng.hu@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Shengzui You authored
driver inclusion category: bugfix bugzilla: NA CVE: NA -------------------------------- This patch is used to update driver version to 1.9.38.1. Signed-off-by:
Shengzui You <youshengzui@huawei.com> Reviewed-by:
Weiwei Deng <dengweiwei@huawei.com> Reviewed-by:
Zhaohui Zhong <zhongzhaohui@huawei.com> Reviewed-by:
Junxin Chen <chenjunxin1@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Junxin Chen authored
driver inclusion category: bugfix bugzilla: NA CVE: NA ---------------------------------- The user-space tool already checks the device name and the kernel driver limits its length, but when get_netdev_by_ifname() returns NULL the device name is printed without checking for a terminator. This patch fixes the bug: when the device name's last character isn't a '\0', the kernel driver returns -EINVAL to user space. Signed-off-by:
Junxin Chen <chenjunxin1@huawei.com> Reviewed-by:
Weiwei Deng <dengweiwei@huawei.com> Reviewed-by:
Shengzui You <youshengzui@huawei.com> Reviewed-by:
Zhaohui Zhong <zhongzhaohui@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
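A hedged sketch of the check described above: refuse to use (or print) a name buffer from user space that is not NUL-terminated. The helper is generic and illustrative:

  static int validate_ifname(const char *name, size_t len)
  {
      if (!len || name[len - 1] != '\0')
          return -EINVAL;     /* unterminated string from user space */
      return 0;
  }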
-
zhangyi (F) authored
mainline inclusion from mainline-v5.8-rc1 commit 7b97d868 category: bugfix bugzilla: 34619 CVE: NA --------------------------- In the ext4 filesystem with errors=panic, if one process is recording errno in the superblock when invoking jbd2_journal_abort() due to some error case, it can be raced by another __ext4_abort() which sets the SB_RDONLY flag but misses the panic because errno has not been recorded yet.

jbd2_journal_commit_transaction()
  jbd2_journal_abort()
    journal->j_flags |= JBD2_ABORT;
    jbd2_journal_update_sb_errno()
                                    | ext4_journal_check_start()
                                    |   __ext4_abort()
                                    |     sb->s_flags |= SB_RDONLY;
                                    |     if (!JBD2_REC_ERR)
                                    |         return;
    journal->j_flags |= JBD2_REC_ERR;

Finally, it will no longer trigger a panic because the filesystem has already been set read-only. Fix this by introducing j_abort_mutex to make sure the journal abort is completed before the panic, and remove the JBD2_REC_ERR flag. Fixes: 4327ba52 ("ext4, jbd2: ensure entering into panic after recording an error in superblock") Signed-off-by:
zhangyi (F) <yi.zhang@huawei.com> Reviewed-by:
Jan Kara <jack@suse.cz> Cc: stable@vger.kernel.org Link: https://lore.kernel.org/r/20200609073540.3810702-1-yi.zhang@huawei.com Signed-off-by:
Theodore Ts'o <tytso@mit.edu> Conflict: fs/ext4/super.c Signed-off-by:
zhangyi (F) <yi.zhang@huawei.com> Reviewed-by:
Zhang Xiaoxu <zhangxiaoxu5@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
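A hedged sketch of the serialization described above: hold j_abort_mutex across recording the errno, so a racing error handler that wants to panic can wait until the error really is in the superblock. Simplified, not the actual hunk:

  static void journal_abort_sketch(journal_t *journal, int errno)
  {
      mutex_lock(&journal->j_abort_mutex);
      journal->j_flags |= JBD2_ABORT;
      if (errno && !journal->j_errno) {
          journal->j_errno = errno;
          jbd2_journal_update_sb_errno(journal);  /* record before releasing */
      }
      mutex_unlock(&journal->j_abort_mutex);
  }

The ext4 abort path can then take the same mutex before deciding whether to panic, instead of keying off the removed JBD2_REC_ERR flag.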
-
zhangyi (F) authored
hulk inclusion category: bugfix bugzilla: 34619 CVE: NA --------------------------- This reverts commit 2e0fefbd. The upstream kernel fixes this problem through commit 7b97d868 ("ext4, jbd2: ensure panic by fix a race between jbd2 abort and ext4 error handlers"), so revert this patch and backport 7b97d868 instead. Signed-off-by:
zhangyi (F) <yi.zhang@huawei.com> Reviewed-by:
Zhang Xiaoxu <zhangxiaoxu5@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
- Jun 16, 2020
-
-
Josh Poimboeuf authored
commit 3798cc4d upstream Make the docs match the code. Signed-off-by:
Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Mark Gross authored
commit 7222a1b5 upstream Add documentation for the SRBDS vulnerability and its mitigation. [ bp: Massage. jpoimboe: sysfs table strings. ] Signed-off-by:
Mark Gross <mgross@linux.intel.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Reviewed-by:
Tony Luck <tony.luck@intel.com> Reviewed-by:
Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Mark Gross authored
commit 7e5b3c26 upstream SRBDS is an MDS-like speculative side channel that can leak bits from the random number generator (RNG) across cores and threads. New microcode serializes the processor access during the execution of RDRAND and RDSEED. This ensures that the shared buffer is overwritten before it is released for reuse. While it is present on all affected CPU models, the microcode mitigation is not needed on models that enumerate ARCH_CAPABILITIES[MDS_NO] in the cases where TSX is not supported or has been disabled with TSX_CTRL. The mitigation is activated by default on affected processors and it increases latency for RDRAND and RDSEED instructions. Among other effects this will reduce throughput from /dev/urandom. * Enable administrator to configure the mitigation off when desired using either mitigations=off or srbds=off. * Export vulnerability status via sysfs * Rename file-scoped macros to apply for non-whitelist table initializations. [ bp: Massage, - s/VULNBL_INTEL_STEPPING/VULNBL_INTEL_STEPPINGS/g, - do not read arch cap MSR a second time in tsx_fused_off() - just pass it in, - flip check in cpu_set_bug_bits() to save an indentation level, - reflow comments. jpoimboe: s/Mitigated/Mitigation/ in user-visible strings tglx: Dropped the fused off magic for now ] Signed-off-by:
Mark Gross <mgross@linux.intel.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Reviewed-by:
Tony Luck <tony.luck@intel.com> Reviewed-by:
Pawan Gupta <pawan.kumar.gupta@linux.intel.com> Reviewed-by:
Josh Poimboeuf <jpoimboe@redhat.com> Tested-by:
Neelima Krishnan <neelima.krishnan@intel.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Mark Gross authored
commit 93920f61 upstream To make cpu_matches() reusable for other matching tables, have it take a pointer to a x86_cpu_id table as an argument. [ bp: Flip arguments order. ] Signed-off-by:
Mark Gross <mgross@linux.intel.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Reviewed-by:
Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
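A hedged sketch of the refactor described above: the match table becomes a parameter instead of a single file-scope array, so the same helper can be used with the vulnerability whitelist and other tables:

  static bool cpu_matches(const struct x86_cpu_id *table, unsigned long which)
  {
      const struct x86_cpu_id *m = x86_match_cpu(table);

      return m && (m->driver_data & which);
  }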
-
Mark Gross authored
commit e9d71445 upstream Intel uses the same family/model for several CPUs. Sometimes the stepping must be checked to tell them apart. On x86 there can be at most 16 steppings. Add a steppings bitmask to x86_cpu_id and a X86_MATCH_VENDOR_FAMILY_MODEL_STEPPING_FEATURE macro and support for matching against family/model/stepping. [ bp: Massage. tglx: Lightweight variant for backporting ] Signed-off-by:
Mark Gross <mgross@linux.intel.com> Signed-off-by:
Borislav Petkov <bp@suse.de> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Reviewed-by:
Tony Luck <tony.luck@intel.com> Reviewed-by:
Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
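A hedged sketch of the matching rule described above: a 16-bit steppings bitmask in the match entry, where bit N set means stepping N matches and 0 means "any stepping" for backwards compatibility. Field names follow the description rather than the real header:

  struct cpu_model_id {
      u16 vendor, family, model;
      u16 steppings;
      unsigned long driver_data;
  };

  static bool stepping_matches(const struct cpu_model_id *id,
                               unsigned int stepping)
  {
      return !id->steppings || (id->steppings & (1u << stepping));
  }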
-
- Jun 15, 2020
-
-
yangerkun authored
hulk inclusion category: bugfix bugzilla: 35487 CVE: NA ----------------------------------------------- Currently the error code from ext4_commit_super() overwrites the EROFS that already exists in ext4_setup_super(). There is actually no need to call ext4_commit_super() since we will return EROFS anyway; fix it by jumping to the done label directly. Fixes: c89128a0 ("ext4: handle errors on ext4_commit_super") Signed-off-by:
yangerkun <yangerkun@huawei.com> Reviewed-by:
zhangyi (F) <yi.zhang@huawei.com> Signed-off-by:
Ye Bin <yebin10@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Cheng Jian authored
hulk inclusion category: bugfix bugzilla: 34546 CVE: NA ---------------------------------------- There are two problems with the implementation and use of zap_lock(). Firstly, console_sem does not require re-initialization in zap_lock(), because: 1) printk() itself does try_lock() and skips console handling when the semaphore is not available; 2) panic() tries to push the messages later in console_flush_on_panic(), which ignores the semaphore. Also, most console drivers ignore their internal locks because oops_in_progress is set by bust_spinlocks(). Secondly, the situation is more complicated when NMI is not used: 1) non-stopped CPUs are in an unknown state, most likely in a busy loop, and nobody knows whether printk() is repeatedly called in that loop; if it is called, re-initializing any lock would cause a double unlock and deadlock. 2) It would be possible to add more hacks, but there are two groups of users: one prefers to risk a deadlock and have a chance to see the messages, the other prefers to always reach emergency_restart() and reboot the machine. Fixes: b15f5676 ("printk/panic: Avoid deadlock in printk()") Signed-off-by:
Cheng Jian <cj.chengjian@huawei.com> Reviewed-by:
Xie XiuQi <xiexiuqi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
- Jun 12, 2020
-
-
Dmitry Torokhov authored
mainline inclusion from mainline-v5.7 commit b86dab05 category: bugfix bugzilla: NA CVE: CVE-2020-13974 --------------------------- When k_ascii is invoked several times in a row there is a potential for signed integer overflow: UBSAN: Undefined behaviour in drivers/tty/vt/keyboard.c:888:19 signed integer overflow: 10 * 1111111111 cannot be represented in type 'int' CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.6.11 #1 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011 Call Trace: <IRQ> __dump_stack lib/dump_stack.c:77 [inline] dump_stack+0xce/0x128 lib/dump_stack.c:118 ubsan_epilogue+0xe/0x30 lib/ubsan.c:154 handle_overflow+0xdc/0xf0 lib/ubsan.c:184 __ubsan_handle_mul_overflow+0x2a/0x40 lib/ubsan.c:205 k_ascii+0xbf/0xd0 drivers/tty/vt/keyboard.c:888 kbd_keycode drivers/tty/vt/keyboard.c:1477 [inline] kbd_event+0x888/0x3be0 drivers/tty/vt/keyboard.c:1495 While it can be worked around by using check_mul_overflow()/ check_add_overflow(), it is better to introduce a separate flag to signal that number pad is being used to compose a symbol, and change type of the accumulator from signed to unsigned, thus avoiding undefined behavior when it overflows. Reported-by:
Kyungtae Kim <kt0755@gmail.com> Signed-off-by:
Dmitry Torokhov <dmitry.torokhov@gmail.com> Cc: stable <stable@vger.kernel.org> Link: https://lore.kernel.org/r/20200525232740.GA262061@dtor-ws Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Reviewed-by:
Jason Yan <yanaijie@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
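A hedged sketch of the approach described above: an explicit "composing via keypad" flag plus an unsigned accumulator, so repeated digits can at worst wrap rather than trigger signed-overflow undefined behaviour. Not the exact keyboard.c hunk:

  static unsigned int npadch_value;
  static bool npadch_active;

  static void k_ascii_sketch(unsigned int digit)
  {
      if (!npadch_active) {
          npadch_value = 0;
          npadch_active = true;
      }
      npadch_value = npadch_value * 10 + digit;  /* unsigned: defined on wrap */
  }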
-
Zhang Xiaoxu authored
hulk inclusion category: bugfix bugzilla: 34540 CVE: NA --------------------------- Before the journal checkpoint completes, the latest block bitmap lives in the journal area and in memory, while the copy in the metadata area is stale. If an I/O error occurs during checkpoint, the uptodate flag is cleared from the buffer, and then reading the old block bitmap from the metadata area may lead to bitmap corruption. This is only a temporary solution: if the buffer is reclaimed by the system, this patch cannot fix the problem either. A mainline solution is under discussion; we merge this patch to reduce the probability of hitting the issue. Signed-off-by:
Zhang Xiaoxu <zhangxiaoxu5@huawei.com> Reviewed-by:
zhangyi (F) <yi.zhang@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Fan Yang authored
mainline inclusion from mainline-v5.7 commit 5bfea2d9 category: bugfix bugzilla: NA CVE: CVE-2020-10757 --------------------------- The original code in mm/mremap.c checks huge pmd by: if (is_swap_pmd(*old_pmd) || pmd_trans_huge(*old_pmd)) { However, a DAX mapped nvdimm is mapped as huge page (by default) but it is not transparent huge page (_PAGE_PSE | PAGE_DEVMAP). This commit changes the condition to include the case. This addresses CVE-2020-10757. Fixes: 5c7fb56e ("mm, dax: dax-pmd vs thp-pmd vs hugetlbfs-pmd") Cc: <stable@vger.kernel.org> Reported-by:
Fan Yang <Fan_Yang@sjtu.edu.cn> Signed-off-by:
Fan Yang <Fan_Yang@sjtu.edu.cn> Tested-by:
Fan Yang <Fan_Yang@sjtu.edu.cn> Tested-by:
Dan Williams <dan.j.williams@intel.com> Reviewed-by:
Dan Williams <dan.j.williams@intel.com> Acked-by:
Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Signed-off-by:
Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Reviewed-by:
Kefeng Wang <wangkefeng.wang@huawei.com> Reviewed-by:
Jason Yan <yanaijie@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
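A hedged helper, illustrative only, capturing the widened condition the commit describes: a DAX/devmap huge pmd is treated like a transparent huge pmd when deciding whether mremap() takes the huge-pmd move path:

  static bool take_huge_pmd_path(pmd_t *old_pmd)
  {
      return is_swap_pmd(*old_pmd) || pmd_trans_huge(*old_pmd) ||
             pmd_devmap(*old_pmd);
  }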
-
Jouni Hogander authored
commit ddd9b5e3 upstream. Dev_hold has to be called always in rx_queue_add_kobject. Otherwise usage count drops below 0 in case of failure in kobject_init_and_add. Fixes: b8eb7183 ("net-sysfs: Fix reference count leak in rx|netdev_queue_add_kobject") Reported-by:
syzbot <syzbot+30209ea299c09d8785c9@syzkaller.appspotmail.com> Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Cc: David Miller <davem@davemloft.net> Cc: Lukas Bulwahn <lukas.bulwahn@gmail.com> Signed-off-by:
Jouni Hogander <jouni.hogander@unikie.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Jouni Hogander authored
commit e0b60903 upstream. Dev_hold has to be called always in netdev_queue_add_kobject. Otherwise usage count drops below 0 in case of failure in kobject_init_and_add. Fixes: b8eb7183 ("net-sysfs: Fix reference count leak in rx|netdev_queue_add_kobject") Reported-by:
Hulk Robot <hulkci@huawei.com> Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp> Cc: David Miller <davem@davemloft.net> Cc: Lukas Bulwahn <lukas.bulwahn@gmail.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Eric Dumazet authored
commit 48a322b6 upstream. kobject_put() should only be called in error path. Fixes: b8eb7183 ("net-sysfs: Fix reference count leak in rx|netdev_queue_add_kobject") Signed-off-by:
Eric Dumazet <edumazet@google.com> Cc: Jouni Hogander <jouni.hogander@unikie.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Jouni Hogander authored
commit b8eb7183 upstream. kobject_init_and_add takes reference even when it fails. This has to be given up by the caller in error handling. Otherwise memory allocated by kobject_init_and_add is never freed. Originally found by Syzkaller: BUG: memory leak unreferenced object 0xffff8880679f8b08 (size 8): comm "netdev_register", pid 269, jiffies 4294693094 (age 12.132s) hex dump (first 8 bytes): 72 78 2d 30 00 36 20 d4 rx-0.6 . backtrace: [<000000008c93818e>] __kmalloc_track_caller+0x16e/0x290 [<000000001f2e4e49>] kvasprintf+0xb1/0x140 [<000000007f313394>] kvasprintf_const+0x56/0x160 [<00000000aeca11c8>] kobject_set_name_vargs+0x5b/0x140 [<0000000073a0367c>] kobject_init_and_add+0xd8/0x170 [<0000000088838e4b>] net_rx_queue_update_kobjects+0x152/0x560 [<000000006be5f104>] netdev_register_kobject+0x210/0x380 [<00000000e31dab9d>] register_netdevice+0xa1b/0xf00 [<00000000f68b2465>] __tun_chr_ioctl+0x20d5/0x3dd0 [<000000004c50599f>] tun_chr_ioctl+0x2f/0x40 [<00000000bbd4c317>] do_vfs_ioctl+0x1c7/0x1510 [<00000000d4c59e8f>] ksys_ioctl+0x99/0xb0 [<00000000946aea81>] __x64_sys_ioctl+0x78/0xb0 [<0000000038d946e5>] do_syscall_64+0x16f/0x580 [<00000000e0aa5d8f>] entry_SYSCALL_64_after_hwframe+0x44/0xa9 [<00000000285b3d1a>] 0xffffffffffffffff Cc: David Miller <davem@davemloft.net> Cc: Lukas Bulwahn <lukas.bulwahn@gmail.com> Signed-off-by:
Jouni Hogander <jouni.hogander@unikie.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
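A hedged, generic illustration of the rule this series of net-sysfs fixes converges on: take the device reference before kobject_init_and_add(), and give up the kobject reference with kobject_put() only on failure, since the release callback then performs the matching cleanup. Names are illustrative:

  static int queue_add_kobject_sketch(struct kobject *kobj,
                                      struct kobj_type *ktype,
                                      struct kobject *parent,
                                      struct net_device *dev, int index)
  {
      int error;

      dev_hold(dev);              /* before init: release will dev_put() */
      error = kobject_init_and_add(kobj, ktype, parent, "rx-%d", index);
      if (error)
          kobject_put(kobj);      /* frees the name, drops the dev reference */
      return error;
  }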
-
Zheng Bin authored
hulk inclusion category: bugfix bugzilla: 35486 CVE: NA ----------------------------------------------- If RPC uses UDP as its transport protocol, transport->connect_worker will call xs_udp_setup_socket.

xs_setup_udp
  INIT_DELAYED_WORK(&transport->connect_worker, xs_udp_setup_socket)

xs_connect            |                              |
  queue_delayed_work  |                              |
                      | xprt_destroy                 |
                      |   wait_on_bit_lock(LOCKED)   |
                      |   del_timer_sync(del timer)  |
                      |   xprt_destroy_cb            |
                      |     xs_destroy               |
                      |       cancel_delayed_work_sync
                      |                              | xs_udp_setup_socket
                      |                              |   xprt_unlock_connect
                      |                              |     test_bit(XPRT_LOCKED(OK)
                      |                              |     xprt_schedule_autodisconnect
                      |                              |       mod_timer(insert timer to list)
                      | xs_xprt_free(free xprt)      |
                      |                              | access timer(use-after-free)

Delete xprt->timer to avoid this. Signed-off-by:
Zheng Bin <zhengbin13@huawei.com> Reviewed-by:
YueHaibing <yuehaibing@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Cheng Jian authored
hulk inclusion category: bugfix bugzilla: 34599 CVE: NA A deadlock caused by logbuf_lock occurs on panic: a) the panic CPU is running in non-NMI context; b) the panic CPU sends out the shutdown IPI via the NMI vector; c) one of the CPUs brought down via the NMI vector held logbuf_lock; d) the panic CPU tries to hold logbuf_lock, and the deadlock occurs. We try to re-init logbuf_lock in printk_safe_flush_on_panic() to avoid the deadlock, but it does not work here, because: Firstly, it is inappropriate to check num_online_cpus() there. When the CPUs are brought down via the NMI vector, the panic CPU won't wait long for the other cores to stop, so when this problem occurs num_online_cpus() may be greater than 1. Secondly, printk_safe_flush_on_panic() is called after the panic notifier callbacks, so if printk() is called in a panic notifier callback, the deadlock still occurs. E.g. if ftrace_dump_on_oops is set, we print some debug information, which tries to hold logbuf_lock. To avoid this deadlock, attempt to re-init logbuf_lock from the panic CPU before the panic_notifier_list callbacks. Signed-off-by:
Cheng Jian <cj.chengjian@huawei.com> Reviewed-by:
Xie XiuQi <xiexiuqi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Jason Yan authored
hulk inclusion category: bugfix bugzilla: 35485 CVE: NA ----------------------------------------------- In blkdev_get() we call __blkdev_get() to do some internal jobs and if there is some errors in __blkdev_get(), the bdput() is called which means we have released the refcount of the bdev (actually the refcount of the bdev inode). This means we cannot access bdev after that point. But accually bdev is still accessed in blkdev_get() after calling __blkdev_get(). This may leads to use-after-free if the refcount is the last one we released in __blkdev_get(). Let's take a look at the following scenerio: CPU0 CPU1 CPU2 blkdev_open blkdev_open Remove disk bd_acquire blkdev_get __blkdev_get del_gendisk bdev_unhash_inode bd_acquire bdev_get_gendisk bd_forget failed because of unhashed bdput bdput (the last one) bdev_evict_inode access bdev => use after free [ 459.350216] BUG: KASAN: use-after-free in __lock_acquire+0x24c1/0x31b0 [ 459.351190] Read of size 8 at addr ffff88806c815a80 by task syz-executor.0/20132 [ 459.352347] [ 459.352594] CPU: 0 PID: 20132 Comm: syz-executor.0 Not tainted 4.19.90 #2 [ 459.353628] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014 [ 459.354947] Call Trace: [ 459.355337] dump_stack+0x111/0x19e [ 459.355879] ? __lock_acquire+0x24c1/0x31b0 [ 459.356523] print_address_description+0x60/0x223 [ 459.357248] ? __lock_acquire+0x24c1/0x31b0 [ 459.357887] kasan_report.cold+0xae/0x2d8 [ 459.358503] __lock_acquire+0x24c1/0x31b0 [ 459.359120] ? _raw_spin_unlock_irq+0x24/0x40 [ 459.359784] ? lockdep_hardirqs_on+0x37b/0x580 [ 459.360465] ? _raw_spin_unlock_irq+0x24/0x40 [ 459.361123] ? finish_task_switch+0x125/0x600 [ 459.361812] ? finish_task_switch+0xee/0x600 [ 459.362471] ? mark_held_locks+0xf0/0xf0 [ 459.363108] ? __schedule+0x96f/0x21d0 [ 459.363716] lock_acquire+0x111/0x320 [ 459.364285] ? blkdev_get+0xce/0xbe0 [ 459.364846] ? blkdev_get+0xce/0xbe0 [ 459.365390] __mutex_lock+0xf9/0x12a0 [ 459.365948] ? blkdev_get+0xce/0xbe0 [ 459.366493] ? bdev_evict_inode+0x1f0/0x1f0 [ 459.367130] ? blkdev_get+0xce/0xbe0 [ 459.367678] ? destroy_inode+0xbc/0x110 [ 459.368261] ? mutex_trylock+0x1a0/0x1a0 [ 459.368867] ? __blkdev_get+0x3e6/0x1280 [ 459.369463] ? bdev_disk_changed+0x1d0/0x1d0 [ 459.370114] ? blkdev_get+0xce/0xbe0 [ 459.370656] blkdev_get+0xce/0xbe0 [ 459.371178] ? find_held_lock+0x2c/0x110 [ 459.371774] ? __blkdev_get+0x1280/0x1280 [ 459.372383] ? lock_downgrade+0x680/0x680 [ 459.373002] ? lock_acquire+0x111/0x320 [ 459.373587] ? bd_acquire+0x21/0x2c0 [ 459.374134] ? do_raw_spin_unlock+0x4f/0x250 [ 459.374780] blkdev_open+0x202/0x290 [ 459.375325] do_dentry_open+0x49e/0x1050 [ 459.375924] ? blkdev_get_by_dev+0x70/0x70 [ 459.376543] ? __x64_sys_fchdir+0x1f0/0x1f0 [ 459.377192] ? inode_permission+0xbe/0x3a0 [ 459.377818] path_openat+0x148c/0x3f50 [ 459.378392] ? kmem_cache_alloc+0xd5/0x280 [ 459.379016] ? entry_SYSCALL_64_after_hwframe+0x49/0xbe [ 459.379802] ? path_lookupat.isra.0+0x900/0x900 [ 459.380489] ? __lock_is_held+0xad/0x140 [ 459.381093] do_filp_open+0x1a1/0x280 [ 459.381654] ? may_open_dev+0xf0/0xf0 [ 459.382214] ? find_held_lock+0x2c/0x110 [ 459.382816] ? lock_downgrade+0x680/0x680 [ 459.383425] ? __lock_is_held+0xad/0x140 [ 459.384024] ? do_raw_spin_unlock+0x4f/0x250 [ 459.384668] ? _raw_spin_unlock+0x1f/0x30 [ 459.385280] ? __alloc_fd+0x448/0x560 [ 459.385841] do_sys_open+0x3c3/0x500 [ 459.386386] ? filp_open+0x70/0x70 [ 459.386911] ? trace_hardirqs_on_thunk+0x1a/0x1c [ 459.387610] ? 
trace_hardirqs_off_caller+0x55/0x1c0 [ 459.388342] ? do_syscall_64+0x1a/0x520 [ 459.388930] do_syscall_64+0xc3/0x520 [ 459.389490] entry_SYSCALL_64_after_hwframe+0x49/0xbe [ 459.390248] RIP: 0033:0x416211 [ 459.390720] Code: 75 14 b8 02 00 00 00 0f 05 48 3d 01 f0 ff ff 0f 83 04 19 00 00 c3 48 83 ec 08 e8 0a fa ff ff 48 89 04 24 b8 02 00 00 00 0f 05 <48> 8b 3c 24 48 89 c2 e8 53 fa ff ff 48 89 d0 48 83 c4 08 48 3d 01 [ 459.393483] RSP: 002b:00007fe45dfe9a60 EFLAGS: 00000293 ORIG_RAX: 0000000000000002 [ 459.394610] RAX: ffffffffffffffda RBX: 00007fe45dfea6d4 RCX: 0000000000416211 [ 459.395678] RDX: 00007fe45dfe9b0a RSI: 0000000000000002 RDI: 00007fe45dfe9b00 [ 459.396758] RBP: 000000000076bf20 R08: 0000000000000000 R09: 000000000000000a [ 459.397930] R10: 0000000000000075 R11: 0000000000000293 R12: 00000000ffffffff [ 459.399022] R13: 0000000000000bd9 R14: 00000000004cdb80 R15: 000000000076bf2c [ 459.400168] [ 459.400430] Allocated by task 20132: [ 459.401038] kasan_kmalloc+0xbf/0xe0 [ 459.401652] kmem_cache_alloc+0xd5/0x280 [ 459.402330] bdev_alloc_inode+0x18/0x40 [ 459.402970] alloc_inode+0x5f/0x180 [ 459.403510] iget5_locked+0x57/0xd0 [ 459.404095] bdget+0x94/0x4e0 [ 459.404607] bd_acquire+0xfa/0x2c0 [ 459.405113] blkdev_open+0x110/0x290 [ 459.405702] do_dentry_open+0x49e/0x1050 [ 459.406340] path_openat+0x148c/0x3f50 [ 459.406926] do_filp_open+0x1a1/0x280 [ 459.407471] do_sys_open+0x3c3/0x500 [ 459.408010] do_syscall_64+0xc3/0x520 [ 459.408572] entry_SYSCALL_64_after_hwframe+0x49/0xbe [ 459.409415] [ 459.409679] Freed by task 1262: [ 459.410212] __kasan_slab_free+0x129/0x170 [ 459.410919] kmem_cache_free+0xb2/0x2a0 [ 459.411564] rcu_process_callbacks+0xbb2/0x2320 [ 459.412318] __do_softirq+0x225/0x8ac Fix this by delaying bdput() to the end of blkdev_get() which means we have finished accessing bdev. Cc: Christoph Hellwig <hch@lst.de> Cc: Jens Axboe <axboe@kernel.dk> Cc: Ming Lei <ming.lei@redhat.com> Cc: Jan Kara <jack@suse.cz> Reported-by:
Hulk Robot <hulkci@huawei.com> Signed-off-by:
Jason Yan <yanaijie@huawei.com> Reviewed-by:
Hou Tao <houtao1@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Ye Bin authored
hulk inclusion category: bugfix bugzilla: 34626 CVE: NA ----------------------------------------------- BUG: KASAN: use-after-free in ata_scsi_mode_select_xlat+0x10bd/0x10f0 drivers/ata/libata-scsi.c:4045 Read of size 1 at addr ffff88803b8cd003 by task syz-executor.6/12621 CPU: 1 PID: 12621 Comm: syz-executor.6 Not tainted 4.19.95 #1 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1ubuntu1 04/01/2014 Call Trace: __dump_stack lib/dump_stack.c:77 [inline] dump_stack+0xac/0xee lib/dump_stack.c:118 print_address_description+0x60/0x223 mm/kasan/report.c:253 kasan_report_error mm/kasan/report.c:351 [inline] kasan_report mm/kasan/report.c:409 [inline] kasan_report.cold+0xae/0x2d8 mm/kasan/report.c:393 ata_scsi_mode_select_xlat+0x10bd/0x10f0 drivers/ata/libata-scsi.c:4045 ata_scsi_translate+0x2da/0x680 drivers/ata/libata-scsi.c:2035 __ata_scsi_queuecmd drivers/ata/libata-scsi.c:4360 [inline] ata_scsi_queuecmd+0x2e4/0x790 drivers/ata/libata-scsi.c:4409 scsi_dispatch_cmd+0x2ee/0x6c0 drivers/scsi/scsi_lib.c:1867 scsi_queue_rq+0xfd7/0x1990 drivers/scsi/scsi_lib.c:2170 blk_mq_dispatch_rq_list+0x1e1/0x19a0 block/blk-mq.c:1186 blk_mq_do_dispatch_sched+0x147/0x3d0 block/blk-mq-sched.c:108 blk_mq_sched_dispatch_requests+0x427/0x680 block/blk-mq-sched.c:204 __blk_mq_run_hw_queue+0xbc/0x200 block/blk-mq.c:1308 __blk_mq_delay_run_hw_queue+0x3c0/0x460 block/blk-mq.c:1376 blk_mq_run_hw_queue+0x152/0x310 block/blk-mq.c:1413 blk_mq_sched_insert_request+0x337/0x6c0 block/blk-mq-sched.c:397 blk_execute_rq_nowait+0x124/0x320 block/blk-exec.c:64 blk_execute_rq+0xc5/0x112 block/blk-exec.c:101 sg_scsi_ioctl+0x3b0/0x6a0 block/scsi_ioctl.c:507 sg_ioctl+0xd37/0x23f0 drivers/scsi/sg.c:1106 vfs_ioctl fs/ioctl.c:46 [inline] file_ioctl fs/ioctl.c:501 [inline] do_vfs_ioctl+0xae6/0x1030 fs/ioctl.c:688 ksys_ioctl+0x76/0xa0 fs/ioctl.c:705 __do_sys_ioctl fs/ioctl.c:712 [inline] __se_sys_ioctl fs/ioctl.c:710 [inline] __x64_sys_ioctl+0x6f/0xb0 fs/ioctl.c:710 do_syscall_64+0xa0/0x2e0 arch/x86/entry/common.c:293 entry_SYSCALL_64_after_hwframe+0x44/0xa9 RIP: 0033:0x45c479 Code: ad b6 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 7b b6 fb ff c3 66 2e 0f 1f 84 00 00 00 00 RSP: 002b:00007fb0e9602c78 EFLAGS: 00000246 ORIG_RAX: 0000000000000010 RAX: ffffffffffffffda RBX: 00007fb0e96036d4 RCX: 000000000045c479 RDX: 0000000020000040 RSI: 0000000000000001 RDI: 0000000000000003 RBP: 000000000076bfc0 R08: 0000000000000000 R09: 0000000000000000 R10: 0000000000000000 R11: 0000000000000246 R12: 00000000ffffffff R13: 000000000000046d R14: 00000000004c6e1a R15: 000000000076bfcc Allocated by task 12577: set_track mm/kasan/kasan.c:460 [inline] kasan_kmalloc mm/kasan/kasan.c:553 [inline] kasan_kmalloc+0xbf/0xe0 mm/kasan/kasan.c:531 __kmalloc+0xf3/0x1e0 mm/slub.c:3749 kmalloc include/linux/slab.h:520 [inline] load_elf_phdrs+0x118/0x1b0 fs/binfmt_elf.c:441 load_elf_binary+0x2de/0x4610 fs/binfmt_elf.c:737 search_binary_handler fs/exec.c:1654 [inline] search_binary_handler+0x15c/0x4e0 fs/exec.c:1632 exec_binprm fs/exec.c:1696 [inline] __do_execve_file.isra.0+0xf52/0x1a90 fs/exec.c:1820 do_execveat_common fs/exec.c:1866 [inline] do_execve fs/exec.c:1883 [inline] __do_sys_execve fs/exec.c:1964 [inline] __se_sys_execve fs/exec.c:1959 [inline] __x64_sys_execve+0x8a/0xb0 fs/exec.c:1959 do_syscall_64+0xa0/0x2e0 arch/x86/entry/common.c:293 entry_SYSCALL_64_after_hwframe+0x44/0xa9 Freed by task 12577: set_track mm/kasan/kasan.c:460 [inline] 
__kasan_slab_free+0x129/0x170 mm/kasan/kasan.c:521 slab_free_hook mm/slub.c:1370 [inline] slab_free_freelist_hook mm/slub.c:1397 [inline] slab_free mm/slub.c:2952 [inline] kfree+0x8b/0x1a0 mm/slub.c:3904 load_elf_binary+0x1be7/0x4610 fs/binfmt_elf.c:1118 search_binary_handler fs/exec.c:1654 [inline] search_binary_handler+0x15c/0x4e0 fs/exec.c:1632 exec_binprm fs/exec.c:1696 [inline] __do_execve_file.isra.0+0xf52/0x1a90 fs/exec.c:1820 do_execveat_common fs/exec.c:1866 [inline] do_execve fs/exec.c:1883 [inline] __do_sys_execve fs/exec.c:1964 [inline] __se_sys_execve fs/exec.c:1959 [inline] __x64_sys_execve+0x8a/0xb0 fs/exec.c:1959 do_syscall_64+0xa0/0x2e0 arch/x86/entry/common.c:293 entry_SYSCALL_64_after_hwframe+0x44/0xa9 The buggy address belongs to the object at ffff88803b8ccf00 which belongs to the cache kmalloc-512 of size 512 The buggy address is located 259 bytes inside of 512-byte region [ffff88803b8ccf00, ffff88803b8cd100) The buggy address belongs to the page: page:ffffea0000ee3300 count:1 mapcount:0 mapping:ffff88806cc03080 index:0xffff88803b8cc780 compound_mapcount: 0 flags: 0x100000000008100(slab|head) raw: 0100000000008100 ffffea0001104080 0000000200000002 ffff88806cc03080 raw: ffff88803b8cc780 00000000800c000b 00000001ffffffff 0000000000000000 page dumped because: kasan: bad access detected Memory state around the buggy address: ffff88803b8ccf00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ffff88803b8ccf80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb >ffff88803b8cd000: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ^ ffff88803b8cd080: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ffff88803b8cd100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc You can refer to "https://www.lkml.org/lkml/2019/1/17/474 " reproduce this error. The exception code is "bd_len = p[3];", "p" value is ffff88803b8cd000 which belongs to the cache kmalloc-512 of size 512. The "page_address(sg_page(scsi_sglist(scmd)))" maybe from sg_scsi_ioctl function "buffer" which allocated by kzalloc, so "buffer" may not page aligned. This also looks completely buggy on highmem systems and really needs to use a kmap_atomic. --Christoph Hellwig To address above bugs, Paolo Bonzini advise to simpler to just make a char array of size CACHE_MPAGE_LEN+8+8+4-2(or just 64 to make it easy), use sg_copy_to_buffer to copy from the sglist into the buffer, and workthere. Signed-off-by:
Ye Bin <yebin10@huawei.com> Reviewed-by:
Jason Yan <yanaijie@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
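A hedged sketch of the suggested approach: copy the MODE SELECT payload out of the scatterlist into a small, bounded bounce buffer and parse that, instead of dereferencing page_address() of the first sg entry:

  static int copy_mode_select(struct scsi_cmnd *scmd, u8 *buf, size_t len)
  {
      size_t n = sg_copy_to_buffer(scsi_sglist(scmd), scsi_sg_count(scmd),
                                   buf, len);

      return n ? 0 : -EIO;    /* parse buf[0..n) afterwards */
  }

The caller would declare something like "u8 buf[64];" on the stack, matching the "just 64 to make it easy" suggestion quoted above.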
-
Chuhong Yuan authored
mainline inclusion from mainline-v5.6-rc1 commit 9453264e category: bugfix bugzilla: NA CVE: CVE-2019-20810 --------------------------- go7007_snd_init() misses a snd_card_free() in an error path. Add the missed call to fix it. Signed-off-by:
Chuhong Yuan <hslester96@gmail.com> Signed-off-by:
Hans Verkuil <hverkuil-cisco@xs4all.nl> Signed-off-by:
Mauro Carvalho Chehab <mchehab+huawei@kernel.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Reviewed-by:
Jason Yan <yanaijie@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Nicolas Pitre authored
[ Upstream commit 57d38f26 ] By directly using kfree() in different places we risk missing one if it is switched to using vfree(), especially if the corresponding vmalloc() is hidden away within a common abstraction. Oh wait, that's exactly what happened here. So let's fix this by creating a common abstraction for the free case as well. Signed-off-by:
Nicolas Pitre <nico@fluxnic.net> Reported-by:
<syzbot+0bfda3ade1ee9288a1be@syzkaller.appspotmail.com> Fixes: 9a98e7a8 ("vt: don't use kmalloc() for the unicode screen buffer") Cc: <stable@vger.kernel.org> Reviewed-by:
Sam Ravnborg <sam@ravnborg.org> Link: https://lore.kernel.org/r/nycvar.YSQ.7.76.2005021043110.2671@knanqh.ubzr Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Sasha Levin <sashal@kernel.org>
-
Nicolas Pitre authored
commit 9a98e7a8 upstream. Even if the actual screen size is bounded in vc_do_resize(), the unicode buffer is still a little more than twice the size of the glyph buffer and may exceed MAX_ORDER down the kmalloc() path. This can be triggered from user space. Since there is no point having a physically contiguous buffer here, let's avoid the above issue as well as reducing pressure on high order allocations by using vmalloc() instead. Signed-off-by:
Nicolas Pitre <nico@fluxnic.net> Cc: <stable@vger.kernel.org> Acked-by:
Sam Ravnborg <sam@ravnborg.org> Link: https://lore.kernel.org/r/nycvar.YSQ.7.76.2003282214210.2671@knanqh.ubzr Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
- Jun 11, 2020
-
-
Ye Bin authored
hulk inclusion category: bugfix bugzilla: 34604 CVE: NA ----------------------------------------------- Signed-off-by:
Ye Bin <yebin10@huawei.com> Reviewed-by:
Xie XiuQi <xiexiuqi@huawei.com> Reviewed-by:
Hou Tao <houtao1@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Ewan D. Milne authored
mainline inclusion from mainline-v5.7-rc1 commit b0962c53 category: bugfix bugzilla: 34604 CVE: NA ----------------------------------------------- Large queues of I/O to offline devices that are eventually submitted when devices are unblocked result in a many repeated "rejecting I/O to offline device" messages. These messages can fill up the dmesg buffer in crash dumps so no useful prior messages remain. In addition, if a serial console is used, the flood of messages can cause a hard lockup in the console code. Introduce a flag indicating the message has already been logged for the device, and reset the flag when scsi_device_set_state() changes the device state. Link: https://lore.kernel.org/r/20200311143930.20674-1-emilne@redhat.com Reviewed-by:
Bart van Assche <bvanassche@acm.org> Signed-off-by:
Ewan D. Milne <emilne@redhat.com> Signed-off-by:
Martin K. Petersen <martin.petersen@oracle.com> conflicts: drivers/scsi/scsi_lib.c include/scsi/scsi_device.h Signed-off-by:
Ye Bin <yebin10@huawei.com> Reviewed-by:
Hou Tao <houtao1@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
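A hedged sketch of the rate limiting described above: log the rejection once per offline episode, with scsi_device_set_state() clearing the flag again on any state change. The field name follows the commit description:

  static void report_offline_once(struct scsi_device *sdev)
  {
      if (sdev->offline_already)
          return;
      sdev->offline_already = true;
      sdev_printk(KERN_ERR, sdev, "rejecting I/O to offline device\n");
  }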
-
Yang Yingliang authored
hulk inclusion category: bugfix bugzilla: NA CVE: CVE-2018-12928 --------------------------- It will cause a null-ptr-deref in hfs_find_init() in fuzz test. [ 107.092729] hfs: continuing without an alternate MDB [ 107.097632] general protection fault, probably for non-canonical address 0xdffffc0000000008: 0000 [#1] SMP KASAN PTI [ 107.104679] KASAN: null-ptr-deref in range [0x0000000000000040-0x0000000000000047] [ 107.109100] CPU: 0 PID: 379 Comm: hfs_inject Not tainted 5.7.0-rc7-00001-g24627f5f2973 #897 [ 107.114142] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014 [ 107.121095] RIP: 0010:hfs_find_init+0x72/0x170 [ 107.123609] Code: c1 ea 03 80 3c 02 00 0f 85 e6 00 00 00 4c 8d 65 40 48 c7 43 18 00 00 00 00 48 b8 00 00 00 00 00 fc ff df 4c 89 e2 48 c1 ea 03 <0f> b6 04 02 84 c0 74 08 3c 03 0f 8e a5 00 00 00 8b 45 40 be c0 0c [ 107.134660] RSP: 0018:ffff88810291f3f8 EFLAGS: 00010202 [ 107.137897] RAX: dffffc0000000000 RBX: ffff88810291f468 RCX: 1ffff110175cdf05 [ 107.141874] RDX: 0000000000000008 RSI: ffff88810291f468 RDI: ffff88810291f480 [ 107.145844] RBP: 0000000000000000 R08: 0000000000000000 R09: ffffed1020381013 [ 107.149431] R10: ffff88810291f500 R11: ffffed1020381012 R12: 0000000000000040 [ 107.152315] R13: 0000000000000000 R14: ffff888101c0814a R15: ffff88810291f468 [ 107.155464] FS: 00000000009ea880(0000) GS:ffff88810c600000(0000) knlGS:0000000000000000 [ 107.159795] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [ 107.162987] CR2: 00005605a19dd284 CR3: 0000000103a0c006 CR4: 0000000000020ef0 [ 107.166665] Call Trace: [ 107.167969] ? find_held_lock+0x33/0x1c0 [ 107.169972] hfs_ext_read_extent+0x16b/0xb00 [ 107.172092] ? create_page_buffers+0x14e/0x1b0 [ 107.174303] ? hfs_free_extents+0x280/0x280 [ 107.176437] ? lock_downgrade+0x730/0x730 [ 107.178272] hfs_get_block+0x496/0x8a0 [ 107.179972] block_read_full_page+0x241/0x8d0 [ 107.181971] ? hfs_extend_file+0xae0/0xae0 [ 107.183814] ? end_buffer_async_read_io+0x10/0x10 [ 107.185954] ? add_to_page_cache_lru+0x13f/0x1f0 [ 107.188006] ? add_to_page_cache_locked+0x10/0x10 [ 107.190175] do_read_cache_page+0xc6a/0x1180 [ 107.192096] ? generic_file_read_iter+0x4c0/0x4c0 [ 107.194234] ? hfs_btree_open+0x408/0x1000 [ 107.196068] ? lock_downgrade+0x730/0x730 [ 107.197926] ? wake_bit_function+0x180/0x180 [ 107.199845] ? lockdep_init_map_waits+0x267/0x7c0 [ 107.201895] hfs_btree_open+0x455/0x1000 [ 107.203479] hfs_mdb_get+0x122c/0x1ae8 [ 107.205065] ? hfs_mdb_put+0x350/0x350 [ 107.206590] ? queue_work_node+0x260/0x260 [ 107.208309] ? rcu_read_lock_sched_held+0xa1/0xd0 [ 107.210227] ? lockdep_init_map_waits+0x267/0x7c0 [ 107.212144] ? lockdep_init_map_waits+0x267/0x7c0 [ 107.213979] hfs_fill_super+0x9ba/0x1280 [ 107.215444] ? bdev_name.isra.9+0xf1/0x2b0 [ 107.217028] ? hfs_remount+0x190/0x190 [ 107.218428] ? pointer+0x5da/0x710 [ 107.219745] ? file_dentry_name+0xf0/0xf0 [ 107.221262] ? mount_bdev+0xd1/0x330 [ 107.222592] ? vsnprintf+0x7bd/0x1250 [ 107.224007] ? pointer+0x710/0x710 [ 107.225332] ? down_write+0xe5/0x160 [ 107.226698] ? hfs_remount+0x190/0x190 [ 107.228120] ? snprintf+0x91/0xc0 [ 107.229388] ? vsprintf+0x10/0x10 [ 107.230628] ? sget+0x3af/0x4a0 [ 107.231848] ? hfs_remount+0x190/0x190 [ 107.233300] mount_bdev+0x26e/0x330 [ 107.234611] ? hfs_statfs+0x540/0x540 [ 107.236015] legacy_get_tree+0x101/0x1f0 [ 107.237431] ? security_capable+0x58/0x90 [ 107.238832] vfs_get_tree+0x89/0x2d0 [ 107.240082] ? 
ns_capable_common+0x5c/0xd0 [ 107.241521] do_mount+0xd8a/0x1720 [ 107.242727] ? lock_downgrade+0x730/0x730 [ 107.244116] ? copy_mount_string+0x20/0x20 [ 107.245557] ? _copy_from_user+0xbe/0x100 [ 107.246967] ? memdup_user+0x47/0x70 [ 107.248212] __x64_sys_mount+0x162/0x1b0 [ 107.249537] do_syscall_64+0xa5/0x4f0 [ 107.250742] entry_SYSCALL_64_after_hwframe+0x49/0xb3 [ 107.252369] RIP: 0033:0x44e8ea [ 107.253360] Code: 48 c7 c1 c0 ff ff ff f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 c0 ff ff ff f7 d8 64 89 01 48 [ 107.259240] RSP: 002b:00007ffd910e4c28 EFLAGS: 00000207 ORIG_RAX: 00000000000000a5 [ 107.261668] RAX: ffffffffffffffda RBX: 0000000000400400 RCX: 000000000044e8ea [ 107.263920] RDX: 000000000049321e RSI: 0000000000493222 RDI: 00007ffd910e4d00 [ 107.266177] RBP: 00007ffd910e5d10 R08: 0000000000000000 R09: 000000000000000a [ 107.268451] R10: 0000000000000001 R11: 0000000000000207 R12: 0000000000401c40 [ 107.270721] R13: 0000000000000000 R14: 00000000006ba018 R15: 0000000000000000 [ 107.273025] Modules linked in: [ 107.274029] Dumping ftrace buffer: [ 107.275121] (ftrace buffer empty) [ 107.276370] ---[ end trace c5e0b9d684f3570e ]--- We need check tree in hfs_find_init(). https://lore.kernel.org/linux-fsdevel/20180419024358.GA5215@bombadil.infradead.org/ https://marc.info/?l=linux-fsdevel&m=152406881024567&w=2 References: CVE-2018-12928 Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Reviewed-by:
Jason Yan <yanaijie@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
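A hedged sketch of the missing check the text points at: hfs_find_init() must not dereference a btree that was never successfully opened. Illustrative only:

  int hfs_find_init_checked(struct hfs_btree *tree, struct hfs_find_data *fd)
  {
      if (!tree)
          return -EINVAL;     /* e.g. the extent tree failed to open */
      fd->tree = tree;
      /* ... existing search-key buffer allocation ... */
      return 0;
  }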
-
zhangyi (F) authored
hulk inclusion category: bugfix bugzilla: 34619 CVE: NA --------------------------- In the ext4 filesystem with errors=panic, if one process is recording errno in the superblock when invoking jbd2_journal_abort() due to some error case, it can be raced by another __ext4_abort() which sets the SB_RDONLY flag but misses the panic because errno has not been recorded yet.

jbd2_journal_abort()
  journal->j_flags |= JBD2_ABORT;
  jbd2_journal_update_sb_errno()
                                  | __ext4_abort()
                                  |   sb->s_flags |= SB_RDONLY;
                                  |   if (!JBD2_REC_ERR)
                                  |       return;
  journal->j_flags |= JBD2_REC_ERR;

Finally, it will no longer trigger a panic because the filesystem has already been set read-only. Fix this by removing JBD2_REC_ERR and switching to a completion variable instead. Fixes: 4327ba52 ("ext4, jbd2: ensure entering into panic after recording an error in superblock") Signed-off-by:
zhangyi (F) <yi.zhang@huawei.com> Reviewed-by:
Zhang Xiaoxu <zhangxiaoxu5@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
zhangyi (F) authored
mainline inclusion from mainline-5.6-rc1 commit 7f6225e4 category: bugfix bugzilla: 34619 CVE: NA --------------------------- __jbd2_journal_abort_hard() is no longer used, so now we can merge __jbd2_journal_abort_hard() and __journal_abort_soft() these two functions into jbd2_journal_abort() and remove them. Signed-off-by:
zhangyi (F) <yi.zhang@huawei.com> Reviewed-by:
Jan Kara <jack@suse.cz> Link: https://lore.kernel.org/r/20191204124614.45424-5-yi.zhang@huawei.com Signed-off-by:
Theodore Ts'o <tytso@mit.edu> Reviewed-by:
Zhang Xiaoxu <zhangxiaoxu5@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
zhangyi (F) authored
[ Upstream commit 0e98c084 ] Commit fb7c0244 ("ext4: pass -ESHUTDOWN code to jbd2 layer") wanted to allow the jbd2 layer to distinguish a shutdown journal abort from other error cases. So ESHUTDOWN should take precedence over any other errno which has already been recorded after EXT4_FLAGS_SHUTDOWN is set, but currently the errno in the journal superblock is only updated if the old errno is 0. Fixes: fb7c0244 ("ext4: pass -ESHUTDOWN code to jbd2 layer") Signed-off-by:
zhangyi (F) <yi.zhang@huawei.com> Reviewed-by:
Jan Kara <jack@suse.cz> Link: https://lore.kernel.org/r/20191204124614.45424-4-yi.zhang@huawei.com Signed-off-by:
Theodore Ts'o <tytso@mit.edu> Signed-off-by:
Sasha Levin <sashal@kernel.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
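A hedged sketch of the precedence rule described above: a recorded errno is normally kept, but -ESHUTDOWN overrides whatever was stored before. Simplified, not the actual hunk:

  static void record_errno(journal_t *journal, int errno)
  {
      if (!journal->j_errno || errno == -ESHUTDOWN)
          journal->j_errno = errno;   /* shutdown always wins */
  }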
-
Eric Biggers authored
commit 7cf64b18 upstream. vt_in_use() dereferences console_driver->ttys[i] without proper locking. This is broken because the tty can be closed and freed concurrently. We could fix this by using 'READ_ONCE(console_driver->ttys[i]) != NULL' and skipping the check of tty_struct::count. But, looking at console_driver->ttys[i] isn't really appropriate anyway because even if it is NULL the tty can still be in the process of being closed. Instead, fix it by making vt_in_use() require console_lock() and check whether the vt is allocated and has port refcount > 1. This works since following the patch "vt: vt_ioctl: fix VT_DISALLOCATE freeing in-use virtual console" the port refcount is incremented while the vt is open. Reproducer (very unreliable, but it worked for me after a few minutes): #include <fcntl.h> #include <linux/vt.h> int main() { int fd, nproc; struct vt_stat state; char ttyname[16]; fd = open("/dev/tty10", O_RDONLY); for (nproc = 1; nproc < 8; nproc *= 2) fork(); for (;;) { sprintf(ttyname, "/dev/tty%d", rand() % 8); close(open(ttyname, O_RDONLY)); ioctl(fd, VT_GETSTATE, &state); } } KASAN report: BUG: KASAN: use-after-free in vt_in_use drivers/tty/vt/vt_ioctl.c:48 [inline] BUG: KASAN: use-after-free in vt_ioctl+0x1ad3/0x1d70 drivers/tty/vt/vt_ioctl.c:657 Read of size 4 at addr ffff888065722468 by task syz-vt2/132 CPU: 0 PID: 132 Comm: syz-vt2 Not tainted 5.6.0-rc5-00130-g089b6d3654916 #13 Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS ?-20191223_100556-anatol 04/01/2014 Call Trace: [...] vt_in_use drivers/tty/vt/vt_ioctl.c:48 [inline] vt_ioctl+0x1ad3/0x1d70 drivers/tty/vt/vt_ioctl.c:657 tty_ioctl+0x9db/0x11b0 drivers/tty/tty_io.c:2660 [...] Allocated by task 136: [...] kzalloc include/linux/slab.h:669 [inline] alloc_tty_struct+0x96/0x8a0 drivers/tty/tty_io.c:2982 tty_init_dev+0x23/0x350 drivers/tty/tty_io.c:1334 tty_open_by_driver drivers/tty/tty_io.c:1987 [inline] tty_open+0x3ca/0xb30 drivers/tty/tty_io.c:2035 [...] Freed by task 41: [...] kfree+0xbf/0x200 mm/slab.c:3757 free_tty_struct+0x8d/0xb0 drivers/tty/tty_io.c:177 release_one_tty+0x22d/0x2f0 drivers/tty/tty_io.c:1468 process_one_work+0x7f1/0x14b0 kernel/workqueue.c:2264 worker_thread+0x8b/0xc80 kernel/workqueue.c:2410 [...] Fixes: 4001d7b7 ("vt: push down the tty lock so we can see what is left to tackle") Cc: <stable@vger.kernel.org> # v3.4+ Acked-by:
Jiri Slaby <jslaby@suse.cz> Signed-off-by:
Eric Biggers <ebiggers@google.com> Link: https://lore.kernel.org/r/20200322034305.210082-3-ebiggers@kernel.org Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
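A hedged sketch of the check described above: with console_lock held, a VT counts as "in use" when it is allocated and its tty_port holds more than the baseline reference taken at allocation time. Close to the description, but still a sketch:

  static bool vt_in_use_sketch(unsigned int i)
  {
      const struct vc_data *vc = vc_cons[i].d;

      WARN_CONSOLE_UNLOCKED();
      return vc && kref_read(&vc->port.kref) > 1;
  }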
-