  3. Jul 08, 2021
    • usb: gadget: rndis: Fix info leak of rndis · 7f1196b6
      Wang Hai authored
      
      hulk inclusion
      category: bugfix
      bugzilla: 172330
      CVE: NA
      
      --------------------------------
      
      Specially constructed USB packets can cause a kernel info leak via
      rndis through the following steps.
      
      1. construct the packet to make rndis call gen_ndis_set_resp().
      
      In gen_ndis_set_resp(), BufOffset comes from the USB packet and is
      not validated, so it can hold any value. Therefore, if the OID is
      RNDIS_OID_GEN_CURRENT_PACKET_FILTER, *params->filter can be loaded
      from an arbitrary address.
      
      2. construct the packet to make rndis call rndis_query_response().
      
      In rndis_query_response(), if the OID is
      RNDIS_OID_GEN_CURRENT_PACKET_FILTER, the value of *params->filter is
      fetched and returned, resulting in an info leak.
      
      Therefore, check BufOffset to prevent the info leak. The buffer size
      is USB_COMP_EP0_BUFSIZ; a request is considered legal only if
      "8 + BufOffset + BufLength" is less than USB_COMP_EP0_BUFSIZ.
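      The check described above can be sketched as follows. The function name is hypothetical, and USB_COMP_EP0_BUFSIZ is assumed to be 4096 here purely for illustration; the real constant comes from the gadget code:

      ```c
      #include <assert.h>
      #include <stdbool.h>
      #include <stdint.h>

      /* Assumed value for illustration only. */
      #define USB_COMP_EP0_BUFSIZ 4096u

      /* Sketch of the legality check from the commit message: widen to
       * 64 bits so a crafted BufOffset cannot wrap the sum back into
       * range. */
      static bool rndis_buf_params_valid(uint32_t buf_offset,
                                         uint32_t buf_length)
      {
          return (uint64_t)8 + buf_offset + buf_length < USB_COMP_EP0_BUFSIZ;
      }
      ```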
      
      Fixes: 1da177e4 ("Linux-2.6.12-rc2")
      Signed-off-by: Wang Hai <wanghai38@huawei.com>
      Reviewed-by: Wei Yongjun <weiyongjun1@huawei.com>
      Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
  4. Jul 05, 2021
    • once: Fix panic when module unload · 23eb8e37
      Kefeng Wang authored
      hulk inclusion
      category: bugfix
      bugzilla: 172153
      CVE: NA
      
      -------------------------------------------------
      
      DO_ONCE
        DEFINE_STATIC_KEY_TRUE(___once_key);
      __do_once_done
        once_disable_jump(once_key);
          struct once_work *w;
          INIT_WORK(&w->work, once_deferred);
          w->key = key;
          schedule_work(&w->work);                   module unload
                                                       /* the key is destroyed */
      process_one_work
        once_deferred
          BUG_ON(!static_key_enabled(work->key));
            static_key_count((struct static_key *)x)   /* access key, crash */
      
      When a module uses the DO_ONCE mechanism, it can crash due to the
      above race; it can be reproduced with link [1].
      
      Fix it by taking a module refcount when the once work is queued and
      dropping it when the work completes.
      
      [1]
      https://lore.kernel.org/netdev/eaa6c371-465e-57eb-6be9-f4b16b9d7cbf@huawei.com/
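      A userspace sketch of the fix's idea, with all names invented: the deferred work pins its owner (try_module_get()/module_put() in the kernel), so the static key cannot be destroyed while the work is pending:

      ```c
      #include <assert.h>
      #include <stdbool.h>

      /* Illustrative stand-ins for a module and its refcount. */
      struct owner { int refs; bool alive; };

      /* Analogue of try_module_get(): pin the owner before queuing work. */
      static bool owner_get(struct owner *o)
      {
          if (!o->alive)
              return false;
          o->refs++;
          return true;
      }

      /* Analogue of module_put(), called when the deferred work finishes. */
      static void owner_put(struct owner *o)
      {
          o->refs--;
      }

      /* Analogue of module unload: refused while work is still pending. */
      static bool owner_unload(struct owner *o)
      {
          if (o->refs)
              return false;
          o->alive = false;
          return true;
      }

      /* once_deferred(): the key is guaranteed valid because a ref is held. */
      static void deferred_work(struct owner *o)
      {
          assert(o->alive); /* without the ref, this is the BUG_ON crash */
          owner_put(o);
      }
      ```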
      
      
      
      Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
      Cc: Daniel Borkmann <daniel@iogearbox.net>
      Cc: David S. Miller <davem@davemloft.net>
      Cc: Eric Dumazet <edumazet@google.com>
      Reported-by: Minmin Chen <chenmingmin@huawei.com>
      Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
      Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    • SUNRPC: Should wake up the privileged task firstly. · 3d5dba2f
      Zhang Xiaoxu authored
      
      mainline inclusion
      from mainline-v5.14
      commit 5483b904bf336948826594610af4c9bbb0d9e3aa
      category: bugfix
      bugzilla: 51898
      CVE: NA
      
      ---------------------------
      
      When picking a task from the wait queue to wake up, a non-privileged
      task may be chosen instead of the privileged one. This may lead to a
      deadlock similar to commit dfe1fe75e00e ("NFSv4: Fix deadlock between
      nfs4_evict_inode() and nfs4_opendata_get_inode()"):
      
      The privileged delegreturn task is queued to the privileged list
      because all the slots are assigned. If there are not enough slots to
      wake up the non-privileged batch tasks (the session has fewer than 8
      slots), the privileged delegreturn task may miss its wakeup, because
      the task that was picked cannot get a slot while the session is
      draining.
      
      So treat the privileged task as an emergency task and execute it as
      early as we can.
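      The intent can be sketched with a toy wait queue (structures invented for illustration, not the SUNRPC ones): always drain the privileged list before the normal one:

      ```c
      #include <assert.h>
      #include <stddef.h>

      struct task { struct task *next; };

      /* Toy wait queue with a separate list for privileged tasks, such as
       * the delegreturn task described above. */
      struct waitq {
          struct task *privileged;
          struct task *normal;
      };

      static struct task *pop(struct task **list)
      {
          struct task *t = *list;

          if (t)
              *list = t->next;
          return t;
      }

      /* Emergency (privileged) tasks are always woken first. */
      static struct task *pick_next(struct waitq *q)
      {
          struct task *t = pop(&q->privileged);

          return t ? t : pop(&q->normal);
      }
      ```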
      
      Reported-by: Hulk Robot <hulkci@huawei.com>
      Fixes: 5fcdfacc ("NFSv4: Return delegations synchronously in evict_inode")
      Cc: stable@vger.kernel.org
      Signed-off-by: Zhang Xiaoxu <zhangxiaoxu5@huawei.com>
      Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
      Reviewed-by: Yue Haibing <yuehaibing@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    • SUNRPC: Fix the batch tasks count wraparound. · 9b06b695
      Zhang Xiaoxu authored
      
      mainline inclusion
      from mainline-v5.14
      commit fcb170a9d825d7db4a3fb870b0300f5a40a8d096
      category: bugfix
      bugzilla: 51898
      CVE: NA
      
      ---------------------------
      
      The 'queue->nr' will wrap around from 0 to 255 when only the current
      priority queue has tasks. This may lead to a deadlock similar to
      commit dfe1fe75e00e ("NFSv4: Fix deadlock between nfs4_evict_inode()
      and nfs4_opendata_get_inode()"):
      
      The privileged delegreturn task is queued to the privileged list
      because all the slots are assigned. When a non-privileged task
      completes and releases its slot, a non-privileged task may be picked
      out; it may then fail to allocate a slot if the session is draining.
      
      If 'queue->nr' has wrapped around to 255 and there are not enough
      slots to service it, the privileged delegreturn task misses its
      wakeup.
      
      So avoid the wraparound on 'queue->nr'.
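      'queue->nr' is a small unsigned counter, so decrementing it at zero wraps to 255. A minimal sketch of the guarded decrement (struct and helper names invented):

      ```c
      #include <assert.h>
      #include <stdint.h>

      struct toy_queue { uint8_t nr; /* remaining batch budget */ };

      /* Decrement the batch counter without letting it wrap from 0 to
       * 255; returns nonzero when the batch is exhausted. */
      static int batch_step(struct toy_queue *q)
      {
          if (q->nr)
              q->nr--;
          return q->nr == 0;
      }
      ```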
      
      Reported-by: Hulk Robot <hulkci@huawei.com>
      Fixes: 5fcdfacc ("NFSv4: Return delegations synchronously in evict_inode")
      Fixes: 1da177e4 ("Linux-2.6.12-rc2")
      Cc: stable@vger.kernel.org
      Signed-off-by: Zhang Xiaoxu <zhangxiaoxu5@huawei.com>
      Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
      Reviewed-by: Yue Haibing <yuehaibing@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    • bpf: Fix leakage under speculation on mispredicted branches · 78d76ae7
      Daniel Borkmann authored
      
      mainline inclusion
      from mainline-v5.13-rc7
      commit 9183671af6dbf60a1219371d4ed73e23f43b49db
      category: bugfix
      bugzilla: NA
      CVE: CVE-2021-33624
      
      --------------------------------
      
      The verifier only enumerates valid control-flow paths and skips paths that
      are unreachable in the non-speculative domain. And so it can miss issues
      under speculative execution on mispredicted branches.
      
      For example, a type confusion has been demonstrated with the following
      crafted program:
      
        // r0 = pointer to a map array entry
        // r6 = pointer to readable stack slot
        // r9 = scalar controlled by attacker
        1: r0 = *(u64 *)(r0) // cache miss
        2: if r0 != 0x0 goto line 4
        3: r6 = r9
        4: if r0 != 0x1 goto line 6
        5: r9 = *(u8 *)(r6)
        6: // leak r9
      
      Since line 3 runs iff r0 == 0 and line 5 runs iff r0 == 1, the verifier
      concludes that the pointer dereference on line 5 is safe. But: if the
      attacker trains both the branches to fall-through, such that the following
      is speculatively executed ...
      
        r6 = r9
        r9 = *(u8 *)(r6)
        // leak r9
      
      ... then the program will dereference an attacker-controlled value and could
      leak its content under speculative execution via side-channel. This requires
      to mistrain the branch predictor, which can be rather tricky, because the
      branches are mutually exclusive. However such training can be done at
      congruent addresses in user space using different branches that are not
      mutually exclusive. That is, by training branches in user space ...
      
        A:  if r0 != 0x0 goto line C
        B:  ...
        C:  if r0 != 0x0 goto line D
        D:  ...
      
      ... such that addresses A and C collide to the same CPU branch prediction
      entries in the PHT (pattern history table) as those of the BPF program's
      lines 2 and 4, respectively. A non-privileged attacker could simply brute
      force such collisions in the PHT until observing the attack succeeding.
      
      Alternative methods to mistrain the branch predictor are also possible that
      avoid brute forcing the collisions in the PHT. A reliable attack has been
      demonstrated, for example, using the following crafted program:
      
        // r0 = pointer to a [control] map array entry
        // r7 = *(u64 *)(r0 + 0), training/attack phase
        // r8 = *(u64 *)(r0 + 8), oob address
        // [...]
        // r0 = pointer to a [data] map array entry
        1: if r7 == 0x3 goto line 3
        2: r8 = r0
        // crafted sequence of conditional jumps to separate the conditional
        // branch in line 193 from the current execution flow
        3: if r0 != 0x0 goto line 5
        4: if r0 == 0x0 goto exit
        5: if r0 != 0x0 goto line 7
        6: if r0 == 0x0 goto exit
        [...]
        187: if r0 != 0x0 goto line 189
        188: if r0 == 0x0 goto exit
        // load any slowly-loaded value (due to cache miss in phase 3) ...
        189: r3 = *(u64 *)(r0 + 0x1200)
        // ... and turn it into known zero for verifier, while preserving slowly-
        // loaded dependency when executing:
        190: r3 &= 1
        191: r3 &= 2
        // speculatively bypassed phase dependency
        192: r7 += r3
        193: if r7 == 0x3 goto exit
        194: r4 = *(u8 *)(r8 + 0)
        // leak r4
      
      As can be seen, in training phase (phase != 0x3), the condition in line 1
      turns into false and therefore r8 with the oob address is overridden with
      the valid map value address, which in line 194 we can read out without
      issues. However, in attack phase, line 2 is skipped, and due to the cache
      miss in line 189 where the map value is (zeroed and later) added to the
      phase register, the condition in line 193 takes the fall-through path due
      to prior branch predictor training, where under speculation, it'll load the
      byte at oob address r8 (unknown scalar type at that point) which could then
      be leaked via side-channel.
      
      One way to mitigate these is to 'branch off' an unreachable path, meaning,
      the current verification path keeps following the is_branch_taken() path
      and we push the other branch to the verification stack. Given this is
      unreachable from the non-speculative domain, this branch's vstate is
      explicitly marked as speculative. This is needed for two reasons: i) if
      this path is solely seen from speculative execution, then we later on still
      want the dead code elimination to kick in in order to sanitize these
      instructions with jmp-1s, and ii) to ensure that paths walked in the
      non-speculative domain are not pruned from earlier walks of paths walked in
      the speculative domain. Additionally, for robustness, we mark the registers
      which have been part of the conditional as unknown in the speculative path
      given there should be no assumptions made on their content.
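      A heavily simplified sketch of that idea (all types and names invented, nothing here is the verifier's real API): follow the proven direction, but push the other branch onto the verification stack flagged as speculative so its instructions are still checked:

      ```c
      #include <assert.h>
      #include <stdbool.h>

      #define MAX_STACK 8

      /* Toy verification state: an instruction index plus a flag saying
       * the path is only reachable speculatively. */
      struct vstate {
          int insn;
          bool speculative;
      };

      struct vstack {
          struct vstate st[MAX_STACK];
          int top;
      };

      static bool push_state(struct vstack *s, int insn, bool speculative)
      {
          if (s->top == MAX_STACK)
              return false;
          s->st[s->top].insn = insn;
          s->st[s->top].speculative = speculative;
          s->top++;
          return true;
      }

      /* Branch direction proven by is_branch_taken(): keep verifying
       * 'taken', and queue 'other' as a speculative-only path instead of
       * skipping it. */
      static int follow_branch(struct vstack *s, int taken, int other)
      {
          push_state(s, other, true);
          return taken;
      }
      ```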
      
      The fix in here mitigates type confusion attacks described earlier due to
      i) all code paths in the BPF program being explored and ii) existing
      verifier logic already ensuring that given memory access instruction
      references one specific data structure.
      
      An alternative to this fix that has also been looked at in this scope was to
      mark aux->alu_state at the jump instruction with a BPF_JMP_TAKEN state as
      well as direction encoding (always-goto, always-fallthrough, unknown), such
      that mixing of different always-* directions themselves as well as mixing of
      always-* with unknown directions would cause a program rejection by the
      verifier, e.g. programs with constructs like 'if ([...]) { x = 0; } else
      { x = 1; }' with subsequent 'if (x == 1) { [...] }'. For unprivileged, this
      would result in only single direction always-* taken paths, and unknown taken
      paths being allowed, such that the former could be patched from a conditional
      jump to an unconditional jump (ja). Compared to this approach here, it would
      have two downsides: i) valid programs that otherwise are not performing any
      pointer arithmetic, etc, would potentially be rejected/broken, and ii) we are
      required to turn off path pruning for unprivileged, where both can be avoided
      in this work through pushing the invalid branch to the verification stack.
      
      The issue was originally discovered by Adam and Ofek, and later independently
      discovered and reported as a result of Benedict and Piotr's research work.
      
      Fixes: b2157399 ("bpf: prevent out-of-bounds speculation")
      Reported-by: Adam Morrison <mad@cs.tau.ac.il>
      Reported-by: Ofek Kirzner <ofekkir@gmail.com>
      Reported-by: Benedict Schlueter <benedict.schlueter@rub.de>
      Reported-by: Piotr Krysiuk <piotras@gmail.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Reviewed-by: John Fastabend <john.fastabend@gmail.com>
      Reviewed-by: Benedict Schlueter <benedict.schlueter@rub.de>
      Reviewed-by: Piotr Krysiuk <piotras@gmail.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      
      Conflicts:
        kernel/bpf/verifier.c
      [yyl: bypass_spec_v1 is not introduced in kernel-4.19,
        use allow_ptr_leaks instead]
      
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
      Signed-off-by: He Fengqing <hefengqing@huawei.com>
      Reviewed-by: Kuohai Xu <xukuohai@huawei.com>
      Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    • bpf: Do not mark insn as seen under speculative path verification · b2fdc6d8
      Daniel Borkmann authored
      
      mainline inclusion
      from mainline-v5.13-rc7
      commit fe9a5ca7e370e613a9a75a13008a3845ea759d6e
      category: bugfix
      bugzilla: NA
      CVE: CVE-2021-33624
      
      --------------------------------
      
      ... in such circumstances, we do not want to mark the instruction as seen given
      the goal is still to jmp-1 rewrite/sanitize dead code, if it is not reachable
      from the non-speculative path verification. We do however want to verify it for
      safety regardless.
      
      With the patch as-is all the insns that have been marked as seen before the
      patch will also be marked as seen after the patch (just with a potentially
      different non-zero count). An upcoming patch will also verify paths that are
      unreachable in the non-speculative domain, hence this extension is needed.
      
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Reviewed-by: John Fastabend <john.fastabend@gmail.com>
      Reviewed-by: Benedict Schlueter <benedict.schlueter@rub.de>
      Reviewed-by: Piotr Krysiuk <piotras@gmail.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      
      Conflicts:
        kernel/bpf/verifier.c
      
      pass_cnt is not introduced in kernel-4.19.
      
      Signed-off-by: He Fengqing <hefengqing@huawei.com>
      Reviewed-by: Kuohai Xu <xukuohai@huawei.com>
      Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    • bpf: Inherit expanded/patched seen count from old aux data · 9d1b583d
      Daniel Borkmann authored
      
      mainline inclusion
      from mainline-v5.13-rc7
      commit d203b0fd863a2261e5d00b97f3d060c4c2a6db71
      category: bugfix
      bugzilla: NA
      CVE: CVE-2021-33624
      
      --------------------------------
      
      Instead of relying on current env->pass_cnt, use the seen count from the
      old aux data in adjust_insn_aux_data(), and expand it to the new range of
      patched instructions. This change is valid given we always expand 1:n
      with n>=1, so what applies to the old/original instruction needs to apply
      for the replacement as well.
      
      Not relying on env->pass_cnt is a prerequisite for a later change where we
      want to avoid marking an instruction seen when verified under speculative
      execution path.
      
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Reviewed-by: John Fastabend <john.fastabend@gmail.com>
      Reviewed-by: Benedict Schlueter <benedict.schlueter@rub.de>
      Reviewed-by: Piotr Krysiuk <piotras@gmail.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      
      Conflicts:
        kernel/bpf/verifier.c
      
      The 'seen' field of bpf_insn_aux_data is a bool in kernel-4.19.
      
      Signed-off-by: He Fengqing <hefengqing@huawei.com>
      Reviewed-by: Kuohai Xu <xukuohai@huawei.com>
      Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    • bpf: Update selftests to reflect new error states · 040bd002
      Daniel Borkmann authored
      
      stable inclusion
      from linux-4.19.193
      commit 138b0ec1064c8f154a32297458e562591a94773f
      
      --------------------------------
      
      commit d7a5091351756d0ae8e63134313c455624e36a13 upstream
      
      Update various selftest error messages:
      
       * The 'Rx tried to sub from different maps, paths, or prohibited types'
         is reworked into more specific/differentiated error messages for better
         guidance.
      
       * The change into 'value -4294967168 makes map_value pointer be out of
         bounds' is due to moving the mixed bounds check into the speculation
         handling and thus occurring slightly later than the above-mentioned
         sanity check.
      
       * The change into 'math between map_value pointer and register with
         unbounded min value' is similarly due to register sanity check coming
         before the mixed bounds check.
      
       * The case of 'map access: known scalar += value_ptr from different maps'
         now loads fine given masks are the same from the different paths (despite
         max map value size being different).
      
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Reviewed-by: John Fastabend <john.fastabend@gmail.com>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      [OP: 4.19 backport, account for split test_verifier and
      different / missing tests]
      Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    • bpf, test_verifier: switch bpf_get_stack's 0 s> r8 test · 0dae2841
      Daniel Borkmann authored
      
      stable inclusion
      from linux-4.19.193
      commit d1e281d6cb8841122c4677b47fcebdc6f410bd74
      
      --------------------------------
      
      [ no upstream commit ]
      
      Switch the comparison, so that is_branch_taken() will recognize that
      the branch below is never taken:
      
        [...]
        17: [...] R1_w=inv0 [...] R8_w=inv(id=0,smin_value=-2147483648,smax_value=-1,umin_value=18446744071562067968,var_off=(0xffffffff80000000; 0x7fffffff)) [...]
        17: (67) r8 <<= 32
        18: [...] R8_w=inv(id=0,smax_value=-4294967296,umin_value=9223372036854775808,umax_value=18446744069414584320,var_off=(0x8000000000000000; 0x7fffffff00000000)) [...]
        18: (c7) r8 s>>= 32
        19: [...] R8_w=inv(id=0,smin_value=-2147483648,smax_value=-1,umin_value=18446744071562067968,var_off=(0xffffffff80000000; 0x7fffffff)) [...]
        19: (6d) if r1 s> r8 goto pc+16
        [...] R1_w=inv0 [...] R8_w=inv(id=0,smin_value=-2147483648,smax_value=-1,umin_value=18446744071562067968,var_off=(0xffffffff80000000; 0x7fffffff)) [...]
        [...]
      
      Currently we check for is_branch_taken() only if either K is source, or source
      is a scalar value that is const. For upstream it would be good to extend this
      properly to check whether dst is const and src not.
      
      For the sake of the test_verifier, it is probably not needed here:
      
        # ./test_verifier 101
        #101/p bpf_get_stack return R0 within range OK
        Summary: 1 PASSED, 0 SKIPPED, 0 FAILED
      
      I haven't seen this issue in test_progs* though, they are passing fine:
      
        # ./test_progs-no_alu32 -t get_stack
        Switching to flavor 'no_alu32' subdirectory...
        #20 get_stack_raw_tp:OK
        Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED
      
        # ./test_progs -t get_stack
        #20 get_stack_raw_tp:OK
        Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED
      
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      [OP: backport to 4.19]
      Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    • bpf: Test_verifier, bpf_get_stack return value add <0 · 011e1131
      John Fastabend authored
      
      stable inclusion
      from linux-4.19.193
      commit f915e7975fc2d593ddb60b67d14eef314eb6dd08
      
      --------------------------------
      
      commit 9ac26e99 upstream.
      
      With the current ALU32 subreg handling and the retval refine fix from
      the last patches we see an expected failure in test_verifier. With
      verbose verifier state printed at each step for clarity, we have the
      following relevant lines [register states that are not useful for
      seeing the failure cause are omitted]:
      
      #101/p bpf_get_stack return R0 within range FAIL
      Failed to load prog 'Success'!
      [..]
      14: (85) call bpf_get_stack#67
       R0_w=map_value(id=0,off=0,ks=8,vs=48,imm=0)
       R3_w=inv48
      15:
       R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff))
      15: (b7) r1 = 0
      16:
       R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff))
       R1_w=inv0
      16: (bf) r8 = r0
      17:
       R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff))
       R1_w=inv0
       R8_w=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff))
      17: (67) r8 <<= 32
      18:
       R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff))
       R1_w=inv0
       R8_w=inv(id=0,smax_value=9223372032559808512,
                     umax_value=18446744069414584320,
                     var_off=(0x0; 0xffffffff00000000),
                     s32_min_value=0,
                     s32_max_value=0,
                     u32_max_value=0,
                     var32_off=(0x0; 0x0))
      18: (c7) r8 s>>= 32
      19:
       R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff))
       R1_w=inv0
       R8_w=inv(id=0,smin_value=-2147483648,
                     smax_value=2147483647,
                     var32_off=(0x0; 0xffffffff))
      19: (cd) if r1 s< r8 goto pc+16
       R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff))
       R1_w=inv0
       R8_w=inv(id=0,smin_value=-2147483648,
                     smax_value=0,
                     var32_off=(0x0; 0xffffffff))
      20:
       R0=inv(id=0,smax_value=48,var32_off=(0x0; 0xffffffff))
       R1_w=inv0
       R8_w=inv(id=0,smin_value=-2147483648,
                     smax_value=0,
       R9=inv48
      20: (1f) r9 -= r8
      21: (bf) r2 = r7
      22:
       R2_w=map_value(id=0,off=0,ks=8,vs=48,imm=0)
      22: (0f) r2 += r8
      value -2147483648 makes map_value pointer be out of bounds
      
      After the call to bpf_get_stack() on line 14 and some moves, at line
      16 we have an r8 bounded with max_value 48 but an unknown min value.
      This is to be expected: the bpf_get_stack call can only return at
      most the input size, but is free to return any negative error in the
      32-bit register space. The C helper returns an int, so it will use
      the lower 32 bits.
      
      Lines 17 and 18 clear the top 32 bits with a left/right shift but use
      ARSH so we still have worst case min bound before line 19 of -2147483648.
      At this point the signed check 'r1 s< r8' meant to protect the addition
      on line 22 where dst reg is a map_value pointer may very well return
      true with a large negative number. Then the final line 22 will detect
      this as an invalid operation and fail the program. What we want to do
      is proceed only if r8 is positive non-error. So change 'r1 s< r8' to
      'r1 s> r8' so that we jump if r8 is negative.
      
      Next we will throw an error because we access past the end of the map
      value. The map value size is 48 and sizeof(struct test_val) is 48 so
      we walk off the end of the map value on the second call to
      bpf_get_stack(). Fix this by changing sizeof(struct test_val) to 24
      by using 'sizeof(struct test_val) / 2'. After this everything passes
      as expected.
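      The shift pair in the trace ('r8 <<= 32' then 'r8 s>>= 32') is how a 32-bit return value is sign-extended to 64 bits, which is why a negative errno survives as a negative 64-bit value and the guard must jump when r8 is negative. A small C illustration (relying on arithmetic right shift of signed values, as gcc/clang implement it):

      ```c
      #include <assert.h>
      #include <stdint.h>

      /* Sign-extend the low 32 bits of r into a 64-bit value, mirroring
       * the BPF sequence 'r8 <<= 32; r8 s>>= 32' from the trace above. */
      static int64_t sext32(uint64_t r)
      {
          int64_t v = (int64_t)(r << 32);

          return v >> 32; /* arithmetic shift on a signed type */
      }
      ```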
      
      Signed-off-by: John Fastabend <john.fastabend@gmail.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Link: https://lore.kernel.org/bpf/158560426019.10843.3285429543232025187.stgit@john-Precision-5820-Tower
      
      
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      [OP: backport to 4.19]
      Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    • bpf: extend is_branch_taken to registers · 6147ca1f
      Alexei Starovoitov authored
      
      stable inclusion
      from linux-4.19.193
      commit e0b86677fb3e4622b444dcdd8546caa0dba8a689
      
      --------------------------------
      
      commit fb8d251e upstream
      
      This patch extends is_branch_taken() logic from JMP+K instructions
      to JMP+X instructions.
      Conditional branches are often done when src and dst registers
      contain known scalars. In such case the verifier can follow
      the branch that is going to be taken when program executes.
      That speeds up the verification and is an essential feature to
      support bounded loops.
      
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Acked-by: Andrii Nakryiko <andriin@fb.com>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      [OP: drop is_jmp32 parameter from is_branch_taken() calls and
           adjust context]
      Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    • selftests/bpf: add selftest part of "bpf: improve verifier branch analysis" · b680cd59
      Ovidiu Panait authored
      
      stable inclusion
      from linux-4.19.193
      commit c905bfe767e98a13dd886bf241ba9ee0640a53ff
      
      --------------------------------
      
      Backport the missing selftest part of commit 7da6cd690c43 ("bpf: improve
      verifier branch analysis") in order to fix the following test_verifier
      failures:
      
      ...
      Unexpected success to load!
      0: (b7) r0 = 0
      1: (75) if r0 s>= 0x0 goto pc+1
      3: (95) exit
      processed 3 insns (limit 131072), stack depth 0
      Unexpected success to load!
      0: (b7) r0 = 0
      1: (75) if r0 s>= 0x0 goto pc+1
      3: (95) exit
      processed 3 insns (limit 131072), stack depth 0
      ...
      
      The changesets apply with a minor context difference.
      
      Fixes: 7da6cd690c43 ("bpf: improve verifier branch analysis")
      Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    • selftests/bpf: Test narrow loads with off > 0 in test_verifier · ff5ead11
      Andrey Ignatov authored
      
      stable inclusion
      from linux-4.19.193
      commit 737f5f3a633518feae7b2793f4666c67e39bcc5a
      
      --------------------------------
      
      commit 6c2afb67 upstream
      
      Test the following narrow loads in test_verifier for context __sk_buff:
      * off=1, size=1 - ok;
      * off=2, size=1 - ok;
      * off=3, size=1 - ok;
      * off=0, size=2 - ok;
      * off=1, size=2 - fail;
      * off=2, size=2 - ok;
      * off=3, size=2 - fail.
      
      Signed-off-by: Andrey Ignatov <rdna@fb.com>
      Signed-off-by: Alexei Starovoitov <ast@kernel.org>
      Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    • bpf, selftests: Fix up some test_verifier cases for unprivileged · 10bec3c6
      Piotr Krysiuk authored
      
      stable inclusion
      from linux-4.19.193
      commit 1982f436a9a990e338ac4d7ed80a9fb40e0a1885
      
      --------------------------------
      
      commit 0a13e3537ea67452d549a6a80da3776d6b7dedb3 upstream
      
      Fix up test_verifier error messages for the case where the original error
      message changed, or for the case where pointer alu errors differ between
      privileged and unprivileged tests. Also, add alternative tests for keeping
      coverage of the original verifier rejection error message (fp alu), and
      newly reject map_ptr += rX where rX == 0 given we now forbid alu on these
      types for unprivileged. All test_verifier cases pass after the change. The
      test case fixups were kept separate to ease backporting of core changes.
      
      Signed-off-by: Piotr Krysiuk <piotras@gmail.com>
      Co-developed-by: Daniel Borkmann <daniel@iogearbox.net>
      Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
      Acked-by: Alexei Starovoitov <ast@kernel.org>
      [OP: backport to 4.19, skipping non-existent tests]
      Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    • bpf: fix up selftests after backports were fixed · dcead3a0
      Ovidiu Panait authored
      
      stable inclusion
      from linux-4.19.193
      commit b190383c714a379002b00bc8de43371e78d291d8
      
      --------------------------------
      
      After the backport of the changes to fix CVE-2019-7308, the selftests
      also need to be fixed up, as was done originally in mainline
      80c9b2fa ("bpf: add various test cases to selftests"). This is a
      backport of that upstream commit adapted to 4.19 in order to fix the
      selftests that began to fail after the CVE-2019-7308 fixes.
      
      Suggested-by: Frank van der Linden <fllinden@amazon.com>
      Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
      Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>