  1. Sep 26, 2022
    • mm: avoid potential deadlock triggered by writing slab-attr-file · b2297d93
      Li Lingfeng authored
      hulk inclusion
      category: bugfix
      bugzilla: https://gitee.com/openeuler/kernel/issues/I5SR8X
      
      
      CVE: NA
      
      --------------------------------
      
      ======================================================
      WARNING: possible circular locking dependency detected
      4.18.0+ #4 Tainted: G                 ---------r-  -
      ------------------------------------------------------
      dmsetup/923 is trying to acquire lock:
      000000008d8170dd (kn->count#184){++++}, at: kernfs_remove+0x24/0x40 fs/kernfs/dir.c:1354
      
      but task is already holding lock:
      000000003377330b (slab_mutex){+.+.}, at: kmem_cache_destroy+0xec/0x320 mm/slab_common.c:928
      
      which lock already depends on the new lock.
      
      the existing dependency chain (in reverse order) is:
      
      -> #1 (slab_mutex){+.+.}:
             __mutex_lock_common kernel/locking/mutex.c:925 [inline]
             __mutex_lock+0x105/0x11a0 kernel/locking/mutex.c:1072
             slab_attr_store+0x6d/0xe0 mm/slub.c:5526
             sysfs_kf_write+0x10f/0x170 fs/sysfs/file.c:139
             kernfs_fop_write+0x290/0x440 fs/kernfs/file.c:316
             __vfs_write+0x81/0x100 fs/read_write.c:485
             vfs_write+0x184/0x4c0 fs/read_write.c:549
             ksys_write+0xc6/0x1a0 fs/read_write.c:598
             do_syscall_64+0xca/0x5a0 arch/x86/entry/common.c:298
             entry_SYSCALL_64_after_hwframe+0x6a/0xdf
      
      -> #0 (kn->count#184){++++}:
             lock_acquire+0x10f/0x340 kernel/locking/lockdep.c:3868
             kernfs_drain fs/kernfs/dir.c:467 [inline]
             __kernfs_remove fs/kernfs/dir.c:1320 [inline]
             __kernfs_remove+0x6d0/0x890 fs/kernfs/dir.c:1279
             kernfs_remove+0x24/0x40 fs/kernfs/dir.c:1354
             sysfs_remove_dir+0xb6/0xf0 fs/sysfs/dir.c:99
             kobject_del.part.1+0x35/0xe0 lib/kobject.c:573
             kobject_del+0x1b/0x30 lib/kobject.c:569
             shutdown_cache+0x17f/0x310 mm/slab_common.c:592
             kmem_cache_destroy+0x263/0x320 mm/slab_common.c:943
             bio_put_slab block/bio.c:152 [inline]
             bioset_exit+0x20d/0x330 block/bio.c:1916
             cleanup_mapped_device+0x64/0x360 drivers/md/dm.c:1903
             free_dev+0xbc/0x240 drivers/md/dm.c:2058
             __dm_destroy+0x317/0x490 drivers/md/dm.c:2426
             dm_hash_remove_all+0x8f/0x250 drivers/md/dm-ioctl.c:314
             remove_all+0x4d/0x90 drivers/md/dm-ioctl.c:471
             ctl_ioctl+0x426/0x910 drivers/md/dm-ioctl.c:1870
             dm_ctl_ioctl+0x23/0x30 drivers/md/dm-ioctl.c:1892
             vfs_ioctl fs/ioctl.c:46 [inline]
             file_ioctl fs/ioctl.c:509 [inline]
             do_vfs_ioctl+0x1a5/0x1100 fs/ioctl.c:696
             ksys_ioctl+0x7c/0xa0 fs/ioctl.c:713
             __do_sys_ioctl fs/ioctl.c:720 [inline]
             __se_sys_ioctl fs/ioctl.c:718 [inline]
             __x64_sys_ioctl+0x74/0xb0 fs/ioctl.c:718
             do_syscall_64+0xca/0x5a0 arch/x86/entry/common.c:298
             entry_SYSCALL_64_after_hwframe+0x6a/0xdf
      
      other info that might help us debug this:
      
       Possible unsafe locking scenario:
      
             CPU0                    CPU1
             ----                    ----
        lock(slab_mutex);
                                     lock(kn->count#184);
                                     lock(slab_mutex);
        lock(kn->count#184);
      
      A potential deadlock may occur when a slab attr file in
      /sys/kernel/slab/xxx/ is removed and written at the same time.
      The lock sequence in the remove path is:
      slab_mutex --> kn->count
      The lock sequence in the write path is:
      kn->count --> slab_mutex
      This can be fixed by replacing mutex_lock() with mutex_trylock() in
      slab_attr_store().
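
      A minimal sketch of the idea (hedged: only the locking hunk is shown; the
      surrounding propagation code in mm/slub.c is abbreviated and may differ in
      this tree):

        static ssize_t slab_attr_store(struct kobject *kobj, struct attribute *attr,
                                       const char *buf, size_t len)
        {
                /* ... attribute->store(s, buf, len) has already run here ... */
                if (slab_state >= FULL && err >= 0 && is_root_cache(s)) {
                        /*
                         * The write path already holds this attr file's kn->count;
                         * blocking on slab_mutex here inverts the remove path's
                         * slab_mutex --> kn->count order. Back off instead of
                         * blocking to break the potential cycle.
                         */
                        if (!mutex_trylock(&slab_mutex))
                                return -EBUSY;
                        /* ... propagate the change to child caches ... */
                        mutex_unlock(&slab_mutex);
                }
                return err;
        }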
      
      Signed-off-by: Li Lingfeng <lilingfeng3@huawei.com>
      Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
      Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
      b2297d93
    • ext4: fix use-after-free in ext4_ext_shift_extents · ae52ee4a
      Baokun Li authored
      hulk inclusion
      category: bugfix
      bugzilla: 187600, https://gitee.com/openeuler/kernel/issues/I5SV2U
      
      
      CVE: NA
      
      --------------------------------
      
      If the starting position of the insert range happens to fall in the hole
      between two ext4_extent_idx entries, then the lblk of every ext4_extent in
      the previous ext4_extent_idx is less than start. This walks the "extent"
      variable across the boundary of the block, and the following UAF is
      triggered:
      
      ==================================================================
      BUG: KASAN: use-after-free in ext4_ext_shift_extents+0x257/0x790
      Read of size 4 at addr ffff88819807a008 by task fallocate/8010
      CPU: 3 PID: 8010 Comm: fallocate Tainted: G            E     5.10.0+ #492
      Call Trace:
       dump_stack+0x7d/0xa3
       print_address_description.constprop.0+0x1e/0x220
       kasan_report.cold+0x67/0x7f
       ext4_ext_shift_extents+0x257/0x790
       ext4_insert_range+0x5b6/0x700
       ext4_fallocate+0x39e/0x3d0
       vfs_fallocate+0x26f/0x470
       ksys_fallocate+0x3a/0x70
       __x64_sys_fallocate+0x4f/0x60
       do_syscall_64+0x33/0x40
       entry_SYSCALL_64_after_hwframe+0x44/0xa9
      ==================================================================
      
      For right shifts, we can divide them into the following situations:
      
      1. When the first ee_block of ext4_extent_idx is greater than or equal to
         start, make right shifts directly from the first ee_block.
          1) If it is greater than start, we need to continue searching in the
             previous ext4_extent_idx.
          2) If it is equal to start, we can exit the loop (iterator=NULL).
      
      2. When the first ee_block of ext4_extent_idx is less than start, then
         traverse from the last extent to find the first extent whose ee_block
         is less than start.
          1) If extent is still the last extent after traversal, it means that
             the last ee_block of ext4_extent_idx is less than start, that is,
             start is located in the hole between idx and (idx+1), so we can
             exit the loop directly (break) without right shifts.
          2) Otherwise, make right shifts at the corresponding position of the
             found extent, and then exit the loop (iterator=NULL).
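
      A hedged sketch of this case analysis for the SHIFT_RIGHT branch of
      ext4_ext_shift_extents() (the helper variables ex_start/ex_last are
      illustrative; the actual code in fs/ext4/extents.c may differ):

        ex_start = EXT_FIRST_EXTENT(path[depth].p_hdr);
        ex_last  = EXT_LAST_EXTENT(path[depth].p_hdr);

        if (le32_to_cpu(ex_start->ee_block) >= start) {
                /* Case 1: shift from the first extent of this leaf */
                extent = ex_start;
                if (le32_to_cpu(ex_start->ee_block) == start)
                        iterator = NULL;        /* 1.2): stop after this shift */
                /* 1.1): otherwise keep searching in the previous idx */
        } else {
                /* Case 2: walk back from the last extent to the first extent
                 * whose ee_block is less than start, never leaving this leaf */
                extent = ex_last;
                while (extent != ex_start &&
                       le32_to_cpu(extent->ee_block) >= start)
                        extent--;
                if (extent == ex_last)
                        break;                  /* 2.1): start is in the hole */
                iterator = NULL;                /* 2.2): shift here, then stop */
        }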
      
      Fixes: 331573fe ("ext4: Add support FALLOC_FL_INSERT_RANGE for fallocate")
      Cc: stable@vger.kernel.org
      Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
      Signed-off-by: Baokun Li <libaokun1@huawei.com>
      Reviewed-by: Zhang Yi <yi.zhang@huawei.com>
      Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
      ae52ee4a
    • quota: Add more checking after reading from quota file · f66997d9
      Zhihao Cheng authored
      hulk inclusion
      category: bugfix
      bugzilla: 187046, https://gitee.com/openeuler/kernel/issues/I5QH0X
      
      
      CVE: NA
      
      --------------------------------
      
      It would be better to do more sanity checking (e.g. dqdh_entries, block
      numbers) on the content read from the quota file, which can prevent the
      quota file from being corrupted further.
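
      For illustration only, a hedged sketch of the kind of check this adds in
      fs/quota/quota_tree.c (the exact call sites and error code are assumptions):

        struct qt_disk_dqdbheader *dh = (struct qt_disk_dqdbheader *)buf;

        /* Do not trust an on-disk entry count that exceeds the per-block
         * capacity, e.g. before reusing a block taken off the free list. */
        if (le16_to_cpu(dh->dqdh_entries) >= qtree_dqstr_in_blk(info))
                return -EUCLEAN;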
      
      Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
      Signed-off-by: Li Lingfeng <lilingfeng3@huawei.com>
      Reviewed-by: Zhihao Cheng <chengzhihao1@huawei.com>
      Reviewed-by: Zhang Yi <yi.zhang@huawei.com>
      Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
      f66997d9
    • quota: Replace all block number checking with helper function · 1e9a49cf
      Zhihao Cheng authored
      hulk inclusion
      category: bugfix
      bugzilla: 187046, https://gitee.com/openeuler/kernel/issues/I5QH0X
      
      
      CVE: NA
      
      --------------------------------
      
      Clean up all block-number checking places and replace them with the helper
      function do_check_range().
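
      The helper is roughly of this shape (a hedged sketch; the exact signature,
      message and error code in this tree may differ):

        static int do_check_range(struct super_block *sb, const char *val_name,
                                  uint val, uint min_val, uint max_val)
        {
                if (val < min_val || val > max_val) {
                        quota_error(sb, "Getting %s %u out of range %u-%u",
                                    val_name, val, min_val, max_val);
                        return -EUCLEAN;
                }
                return 0;
        }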
      
      Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
      Signed-off-by: Li Lingfeng <lilingfeng3@huawei.com>
      Reviewed-by: Zhihao Cheng <chengzhihao1@huawei.com>
      Reviewed-by: Zhang Yi <yi.zhang@huawei.com>
      Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
      1e9a49cf
    • quota: Check next/prev free block number after reading from quota file · 6c27d754
      Zhihao Cheng authored
      hulk inclusion
      category: bugfix
      bugzilla: 187046, https://gitee.com/openeuler/kernel/issues/I5QH0X
      CVE: NA
      
      --------------------------------
      
      Following process:
       Init: v2_read_file_info: <3> dqi_free_blk 0 dqi_free_entry 5 dqi_blks 6
      
       Step 1. chown bin f_a -> dquot_acquire -> v2_write_dquot:
        qtree_write_dquot
         do_insert_tree
          find_free_dqentry
           get_free_dqblk
            write_blk(info->dqi_blocks) // info->dqi_blocks = 6, failure. The
      	   content in physical block (corresponding to blk 6) is random.
      
       Step 2. chown root f_a -> dquot_transfer -> dqput_all -> dqput ->
               ext4_release_dquot -> v2_release_dquot -> qtree_delete_dquot:
        dquot_release
         remove_tree
          free_dqentry
           put_free_dqblk(6)
            info->dqi_free_blk = blk    // info->dqi_free_blk = 6
      
       Step 3. drop cache (buffer head for block 6 is released)
      
       Step 4. chown bin f_b -> dquot_acquire -> commit_dqblk -> v2_write_dquot:
        qtree_write_dquot
         do_insert_tree
          find_free_dqentry
           get_free_dqblk
            dh = (struct qt_disk_dqdbheader *)buf
            blk = info->dqi_free_blk     // 6
            ret = read_blk(info, blk, buf)  // The content of buf is random
            info->dqi_free_blk = le32_to_cpu(dh->dqdh_next_free)  // random blk
      
       Step 5. chown bin f_c -> notify_change -> ext4_setattr -> dquot_transfer:
        dquot = dqget -> acquire_dquot -> ext4_acquire_dquot -> dquot_acquire ->
                commit_dqblk -> v2_write_dquot -> dq_insert_tree:
         do_insert_tree
          find_free_dqentry
           get_free_dqblk
            blk = info->dqi_free_blk    // If blk < 0 and blk is not an error
      				     code, it will be returned as dquot
      
        transfer_to[USRQUOTA] = dquot  // A random negative value
        __dquot_transfer(transfer_to)
         dquot_add_inodes(transfer_to[cnt])
          spin_lock(&dquot->dq_dqb_lock)  // page fault
      
      , which will lead to a kernel page fault:
       Quota error (device sda): qtree_write_dquot: Error -8000 occurred
       while creating quota
       BUG: unable to handle page fault for address: ffffffffffffe120
       #PF: supervisor write access in kernel mode
       #PF: error_code(0x0002) - not-present page
       Oops: 0002 [#1] PREEMPT SMP
       CPU: 0 PID: 5974 Comm: chown Not tainted 6.0.0-rc1-00004
       Hardware name: QEMU Standard PC (i440FX + PIIX, 1996)
       RIP: 0010:_raw_spin_lock+0x3a/0x90
       Call Trace:
        dquot_add_inodes+0x28/0x270
        __dquot_transfer+0x377/0x840
        dquot_transfer+0xde/0x540
        ext4_setattr+0x405/0x14d0
        notify_change+0x68e/0x9f0
        chown_common+0x300/0x430
        __x64_sys_fchownat+0x29/0x40
      
      In order to avoid accessing an invalid quota memory address, this patch adds
      block number checking of the next/prev free block read from the quota file.
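
      A hedged sketch of the check in get_free_dqblk(), reusing the
      do_check_range() helper from the previous patch (the exact placement and
      label string are assumptions):

        blk = info->dqi_free_blk;
        ret = read_blk(info, blk, buf);
        if (ret < 0)
                goto out_buf;
        ret = do_check_range(info->dqi_sb, "dqdh_next_free",
                             le32_to_cpu(dh->dqdh_next_free), 0,
                             info->dqi_blocks - 1);
        if (ret)
                goto out_buf;
        info->dqi_free_blk = le32_to_cpu(dh->dqdh_next_free);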
      
      Fetch a reproducer in [Link].
      
      Link: https://bugzilla.kernel.org/show_bug.cgi?id=216372
      
      
      Fixes: 1da177e4 ("Linux-2.6.12-rc2")
      Signed-off-by: Zhihao Cheng <chengzhihao1@huawei.com>
      Signed-off-by: Li Lingfeng <lilingfeng3@huawei.com>
      Reviewed-by: Zhihao Cheng <chengzhihao1@huawei.com>
      Reviewed-by: Zhang Yi <yi.zhang@huawei.com>
      Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
      6c27d754
    • efi: capsule-loader: Fix use-after-free in efi_capsule_write · 27dfef31
      Hyunwoo Kim authored
      mainline inclusion
      from mainline-v6.0-rc5
      commit 9cb636b5f6a8cc6d1b50809ec8f8d33ae0c84c95
      category: bugfix
      bugzilla: https://gitee.com/src-openeuler/kernel/issues/I5QI0W
      
      
      CVE: CVE-2022-40307
      
      ---------------------------
      
      A race condition may occur if the user calls close() on another thread
      during a write() operation on the device node of the efi capsule.
      
      This is a race condition that occurs between the efi_capsule_write() and
      efi_capsule_flush() functions of efi_capsule_fops, which ultimately
      results in UAF.
      
      So, the page freeing process is modified to be done in
      efi_capsule_release() instead of efi_capsule_flush().
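
      A hedged sketch of the resulting release path in
      drivers/firmware/efi/capsule-loader.c (abbreviated; handling of an
      incomplete upload is left out here):

        static int efi_capsule_release(struct inode *inode, struct file *file)
        {
                struct capsule_info *cap_info = file->private_data;

                /*
                 * Freeing the staging pages only at final release (close) closes
                 * the race with a concurrent write() on another thread: flush no
                 * longer frees pages the writer may still be using.
                 */
                kfree(cap_info->pages);
                kfree(cap_info->phys);
                kfree(cap_info);
                file->private_data = NULL;
                return 0;
        }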
      
      Cc: <stable@vger.kernel.org> # v4.9+
      Signed-off-by: Hyunwoo Kim <imv4bel@gmail.com>
      Link: https://lore.kernel.org/all/20220907102920.GA88602@ubuntu/
      
      
      Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
      Signed-off-by: Xia Longlong <xialonglong1@huawei.com>
      Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
      Reviewed-by: Xiu Jianfeng <x...
      27dfef31
    • ipvlan: Fix out-of-bound bugs caused by unset skb->mac_header · 4b633c1e
      Lu Wei authored
      mainline inclusion
      from mainline-v6.0-rc6
      commit 81225b2ea161af48e093f58e8dfee6d705b16af4
      category: bugfix
      bugzilla: https://gitee.com/openeuler/kernel/issues/I5SYBY
      
      
      CVE: NA
      
      --------------------------------
      
      If an AF_PACKET socket is used to send packets through ipvlan, and the
      default xmit function of the AF_PACKET socket has been changed from
      dev_queue_xmit() to packet_direct_xmit() via setsockopt() with the
      PACKET_QDISC_BYPASS option name, skb->mac_header may not be reset and
      remains at its initial value of 65535. This may trigger slab-out-of-bounds
      bugs such as the following:
      
      =================================================================
      BUG: KASAN: slab-out-of-bounds in ipvlan_xmit_mode_l2+0xdb/0x330 [ipvlan]
      CPU: 2 PID: 1768 Comm: raw_send Kdump: loaded Not tainted 6.0.0-rc4+ #6
      Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-1.fc33
      Call Trace:
      print_address_description.constprop.0+0x1d/0x160
      print_report.cold+0x4f/0x112
      kasan_report+0xa3/0x130
      ipvlan_xmit_mode_l2+0xdb/0x330 [ipvlan]
      ipvlan_start_xmit+0x29/0xa0 [ipvlan]
      __dev_direct_xmit+0x2e2/0x380
      packet_direct_xmit+0x22/0x60
      packet_snd+0x7c9/0xc40
      sock_sendmsg+0x9a/0xa0
      __sys_sendto+0x18a/0x230
      __x64_sys_sendto+0x74/0x90
      do_syscall_64+0x3b/0x90
      entry_SYSCALL_64_after_hwframe+0x63/0xcd
      
      The root cause is:
        1. packet_snd() only resets skb->mac_header when sock->type is SOCK_RAW
           and skb->protocol is not specified, as in packet_parse_headers()

        2. packet_direct_xmit() doesn't reset skb->mac_header the way
           dev_queue_xmit() does

      In this case, skb->mac_header is 65535 when ipvlan_xmit_mode_l2() is
      called. So when ipvlan_xmit_mode_l2() gets the mac header with eth_hdr(),
      which uses "skb->head + skb->mac_header", an out-of-bounds access occurs.

      This patch replaces eth_hdr() with skb_eth_hdr() in ipvlan_xmit_mode_l2()
      and resets the mac header in the multicast path to fix this out-of-bounds
      bug.
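
      A hedged sketch of the change in ipvlan_xmit_mode_l2() in
      drivers/net/ipvlan/ipvlan_core.c (abbreviated; unrelated branches omitted):

        static int ipvlan_xmit_mode_l2(struct sk_buff *skb, struct net_device *dev)
        {
                /* skb_eth_hdr() uses skb->data instead of skb->mac_header, which
                 * may still be unset (65535) on the PACKET_QDISC_BYPASS path. */
                struct ethhdr *eth = skb_eth_hdr(skb);
                ...
                } else if (is_multicast_ether_addr(eth->h_dest)) {
                        /* The multicast path hands the skb on to other devices,
                         * so make mac_header valid before queueing it. */
                        skb_reset_mac_header(skb);
                        ipvlan_skb_crossing_ns(skb, NULL);
                        ipvlan_multicast_enqueue(ipvlan->port, skb, true);
                        return NET_XMIT_SUCCESS;
                }
                ...
        }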
      
      Fixes: 2ad7bf36 ("ipvlan: Initial check-in of the IPVLAN driver.")
      Signed-off-by: Lu Wei <luwei32@huawei.com>
      Reviewed-by: Eric Dumazet <edumazet@google.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
      Signed-off-by: Lu Wei <luwei32@huawei.com>
      Reviewed-by: Yue Haibing <yuehaibing@huawei.com>
      Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
      4b633c1e
    • mm/sharepool: Fix UAF reported by KASAN · 89f5304b
      Wang Wensheng authored
      hulk inclusion
      category: bugfix
      bugzilla: https://gitee.com/openeuler/kernel/issues/I5PD4P
      CVE: NA
      
      --------------------------------
      
      [ 2058.802818][  T290] BUG: KASAN: use-after-free in get_process_sp_res+0x70/0x134
      [ 2058.810194][  T290] Read of size 8 at addr ffff00088dc6ab28 by task test_debug_loop/290
      [ 2058.820520][  T290] CPU: 5 PID: 290 Comm: test_debug_loop Tainted: G        W  OE     5.10.0+ #2
      [ 2058.829377][  T290] Hardware name: EVB(EP) (DT)
      [ 2058.833982][  T290] Call trace:
      [ 2058.837217][  T290]  dump_backtrace+0x0/0x30c
      [ 2058.841660][  T290]  show_stack+0x20/0x30
      [ 2058.845758][  T290]  dump_stack+0x120/0x1b0
      [ 2058.850028][  T290]  print_address_description.constprop.0+0x2c/0x1fc
      [ 2058.856555][  T290]  __kasan_report+0xfc/0x160
      [ 2058.861086][  T290]  kasan_report+0x44/0xb0
      [ 2058.865356][  T290]  __asan_load8+0x94/0xd0
      [ 2058.869623][  T290]  get_process_sp_res+0x70/0x134
      [ 2058.874501][  T290]  proc_usage_show+0x1ac/0x304
      [ 2058.879208][  T290]...
      89f5304b
    • blk-mq: avoid extending delays of active hctx from blk_mq_delay_run_hw_queues · 9c7724ae
      David Jeffery authored
      mainline inclusion
      from mainline-v5.18-rc1
      commit 8f5fea65b06de1cc51d4fc23fb4d378d1abd6ed7
      category: bugfix
      bugzilla: 187541, https://gitee.com/openeuler/kernel/issues/I5RUM6
      
      
      CVE: NA
      
      --------------------------------
      
      When blk_mq_delay_run_hw_queues sets an hctx to run in the future, it can
      reset the delay length for an already pending delayed work run_work. This
      creates a scenario where multiple hctx may have their queues set to run,
      but if one runs first and finds nothing to do, it can reset the delay of
      another hctx and stall the other hctx's ability to run requests.
      
      To avoid this I/O stall, when an hctx's run_work is already pending, leave
      it untouched to run at its current designated time rather than extending
      its delay. The work will still run, which keeps closed the race that
      blk_mq_delay_run_hw_queues() is needed for, while also avoiding the I/O
      stall.
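
      A hedged sketch of the check in blk_mq_delay_run_hw_queues() in
      block/blk-mq.c (abbreviated; the dispatch-selection logic is omitted):

        queue_for_each_hw_ctx(q, hctx, i) {
                if (blk_mq_hctx_stopped(hctx))
                        continue;
                /*
                 * If run_work is already pending, leave its current delay alone:
                 * re-queueing it here would push back this hctx's scheduled run
                 * and could stall its requests.
                 */
                if (delayed_work_pending(&hctx->run_work))
                        continue;
                blk_mq_delay_run_hw_queue(hctx, msecs);
        }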
      
      Signed-off-by: David Jeffery <djeffery@redhat.com>
      Reviewed-by: Ming Lei <ming.lei@redhat.com>
      Link: https://lore.kernel.org/r/20220131203337.GA17666@redhat
      
      
      Signed-off-by: Jens Axboe <axboe@kernel.dk>
      Signed-off-by: Yu Kuai <yukuai3@huawei.com>
      Reviewed-by: Jason Yan <yanaijie@huawei.com>
      Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
      9c7724ae
    • mm: mem_reliable: Start fallback if no suitable zone found · f8f0da00
      Ma Wupeng authored
      hulk inclusion
      category: bugfix
      bugzilla: https://gitee.com/openeuler/kernel/issues/I4SK3S
      
      
      CVE: NA
      
      --------------------------------
      
      For a reliable memory allocation bound to nodes which do not have any
      reliable zones, the allocation will fail and a warning message will be
      produced at the end of __alloc_pages_slowpath().

      Though this memory allocation can fall back to the movable zone in
      check_after_alloc() if fallback is enabled, something should be done to
      prevent this pointless warning.

      To solve this problem, fall back to the movable zone if no suitable zone is
      found.
      
      Signed-off-by: Ma Wupeng <mawupeng1@huawei.com>
      Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
      Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
      f8f0da00