- Jul 05, 2021
-
-
Ovidiu Panait authored
stable inclusion from linux-4.19.193 commit c905bfe767e98a13dd886bf241ba9ee0640a53ff -------------------------------- Backport the missing selftest part of commit 7da6cd690c43 ("bpf: improve verifier branch analysis") in order to fix the following test_verifier failures: ... Unexpected success to load! 0: (b7) r0 = 0 1: (75) if r0 s>= 0x0 goto pc+1 3: (95) exit processed 3 insns (limit 131072), stack depth 0 Unexpected success to load! 0: (b7) r0 = 0 1: (75) if r0 s>= 0x0 goto pc+1 3: (95) exit processed 3 insns (limit 131072), stack depth 0 ... The changesets apply with a minor context difference. Fixes: 7da6cd690c43 ("bpf: improve verifier branch analysis") Signed-off-by:
Ovidiu Panait <ovidiu.panait@windriver.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Andrey Ignatov authored
stable inclusion from linux-4.19.193 commit 737f5f3a633518feae7b2793f4666c67e39bcc5a -------------------------------- commit 6c2afb67 upstream Test the following narrow loads in test_verifier for context __sk_buff: * off=1, size=1 - ok; * off=2, size=1 - ok; * off=3, size=1 - ok; * off=0, size=2 - ok; * off=1, size=2 - fail; * off=2, size=2 - ok; * off=3, size=2 - fail. Signed-off-by:
Andrey Ignatov <rdna@fb.com> Signed-off-by:
Alexei Starovoitov <ast@kernel.org> Signed-off-by:
Ovidiu Panait <ovidiu.panait@windriver.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
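The pass/fail pattern above suggests the rule being exercised: a narrow load must be size-aligned and must stay within the 4-byte context field. A minimal user-space sketch of that rule (`narrow_load_ok` is a hypothetical helper for illustration, not a kernel function):

```c
#include <stdbool.h>

/* Hypothetical check mirroring the test matrix above: a narrow load of
 * `size` bytes at byte offset `off` within a 4-byte __sk_buff field is
 * accepted only when the offset is size-aligned and the access stays
 * inside the field. */
static bool narrow_load_ok(int off, int size)
{
    return size > 0 && off >= 0 &&
           off % size == 0 &&   /* size-aligned offset */
           off + size <= 4;     /* stays within the 4-byte field */
}
```

This reproduces the matrix: all byte loads at offsets 1-3 pass, while 2-byte loads pass only at even offsets.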
-
Piotr Krysiuk authored
stable inclusion from linux-4.19.193 commit 1982f436a9a990e338ac4d7ed80a9fb40e0a1885 -------------------------------- commit 0a13e3537ea67452d549a6a80da3776d6b7dedb3 upstream Fix up test_verifier error messages for the case where the original error message changed, or for the case where pointer alu errors differ between privileged and unprivileged tests. Also, add alternative tests for keeping coverage of the original verifier rejection error message (fp alu), and newly reject map_ptr += rX where rX == 0 given we now forbid alu on these types for unprivileged. All test_verifier cases pass after the change. The test case fixups were kept separate to ease backporting of core changes. Signed-off-by:
Piotr Krysiuk <piotras@gmail.com> Co-developed-by:
Daniel Borkmann <daniel@iogearbox.net> Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Acked-by:
Alexei Starovoitov <ast@kernel.org> [OP: backport to 4.19, skipping non-existent tests] Signed-off-by:
Ovidiu Panait <ovidiu.panait@windriver.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Ovidiu Panait authored
stable inclusion from linux-4.19.193 commit b190383c714a379002b00bc8de43371e78d291d8 -------------------------------- After the backport of the changes to fix CVE-2019-7308, the selftests also need to be fixed up, as was done originally in mainline 80c9b2fa ("bpf: add various test cases to selftests"). This is a backport of upstream commit 80c9b2fa ("bpf: add various test cases to selftests") adapted to 4.19 in order to fix the selftests that began to fail after the CVE-2019-7308 fixes. Suggested-by:
Frank van der Linden <fllinden@amazon.com> Signed-off-by:
Ovidiu Panait <ovidiu.panait@windriver.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
- Jul 02, 2021
-
-
Chao Leng authored
mainline inclusion from mainline-v5.11-rc5 commit 7674073b2ed35ac951a49c425dec6b39d5a57140 category: bugfix bugzilla: NA CVE: NA Link: https://gitee.com/openeuler/kernel/issues/I1WGZE ------------------------------------------------- A crash happens when request completion is delayed for a long time (nearly 30s) by fault injection. Each namespace has its own request queue, so when completion is delayed this long, multiple request queues may have timed-out requests at the same time, and nvme_rdma_timeout will execute concurrently. Requests from different request queues may be queued on the same rdma queue, so multiple nvme_rdma_timeout calls may invoke nvme_rdma_stop_queue at the same time. The first nvme_rdma_timeout clears NVME_RDMA_Q_LIVE and continues stopping the rdma queue (draining the qp), but the others see NVME_RDMA_Q_LIVE already cleared and directly complete the requests; completing requests before the qp is fully drained may lead to a use-after-free condition. Add...
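The fix described above hinges on making the queue stop exclusive: only the caller that actually clears the LIVE bit may assume the qp has been drained. A minimal test-and-clear sketch, with illustrative names rather than the driver's actual structures:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative model of the race: several timeout handlers may target
 * the same rdma queue, but exactly one of them should drain the qp. */
static atomic_bool queue_live = true;
static atomic_int drains = 0;

static bool stop_queue(void)
{
    /* test-and-clear: exactly one caller observes `true` and drains */
    if (atomic_exchange(&queue_live, false)) {
        drains++;       /* drain the qp here */
        return true;
    }
    return false;       /* another handler is (or was) draining */
}
```

The losing callers must then wait for the drain to finish instead of completing requests immediately, which is the use-after-free the patch closes.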
-
Eric W. Biederman authored
mainline inclusion from mainline-v5.8-rc1 commit e7f77854 category: bugfix bugzilla: 36868 CVE: NA ----------------------------------------------- In 2016 Linus moved install_exec_creds immediately after setup_new_exec, in binfmt_elf as a cleanup and as part of closing a potential information leak. Perform the same cleanup for the other binary formats. Different binary formats doing the same things the same way makes exec easier to reason about and easier to maintain. Greg Ungerer reports: > I tested the whole series on non-MMU m68k and non-MMU arm > (exercising binfmt_flat) and it all tested out with no problems, > so for the binfmt_flat changes: Tested-by:
Greg Ungerer <gerg@linux-m68k.org> Ref: 9f834ec1 ("binfmt_elf: switch to new creds when switching to new mm") Reviewed-by:
Kees Cook <keescook@chromium.org> Reviewed-by:
Greg Ungerer <gerg@linux-m68k.org> Signed-off-by: "Eric W. Biede...
-
- Jul 01, 2021
-
-
Pavel Skripkin authored
mainline inclusion from mainline-5.14 commit 618f003199c6188e01472b03cdbba227f1dc5f24 category: bugfix bugzilla: 167360 CVE: NA ------------------------------------------------- static int kthread(void *_create) will return -ENOMEM or -EINTR in case of internal failure, or if a kthread_stop() call happens before threadfn is called. To avoid fancy error checking and make the code more straightforward, we moved all cleanup code out of the kmmpd threadfn. Also, dropped struct mmpd_data entirely: now struct super_block is the threadfn data and struct buffer_head is embedded into struct ext4_sb_info. Reported-by:
<syzbot+d9e482e303930fa4f6ff@syzkaller.appspotmail.com> Signed-off-by:
Pavel Skripkin <paskripkin@gmail.com> Link: https://lore.kernel.org/r/20210430185046.15742-1-paskripkin@gmail.com Signed-off-by:
Theodore Ts'o <tytso@mit.edu> Conflicts: fs/ext4/ext4.h fs/ext4/super.c Signed-off-by:
Baokun Li <libaokun1@huawei.com> Reviewed-by:
Zhang Yi <yi.zhang@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Xi Wang authored
driver inclusion category: bugfix bugzilla: NA CVE: NA Wrap the common logic into 3 functions: hns_roce_mtr_create(), hns_roce_mtr_destroy() and hns_roce_mtr_map(), supporting hop numbers ranging from 0 to 3. In addition, make the mtr interfaces easier to use. Signed-off-by:
Xi Wang <wangxi11@huawei.com> Signed-off-by:
Shunfeng Yang <yangshunfeng2@huawei.com> Reviewed-by:
chunzhi hu <huchunzhi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Xi Wang authored
driver inclusion category: bugfix bugzilla: NA CVE: NA When the value of nbufs is 1, the buffer is in direct mode, which may cause confusion. So optimize the current code to make it easier to maintain. Signed-off-by:
Xi Wang <wangxi11@huawei.com> Signed-off-by:
Shunfeng Yang <yangshunfeng2@huawei.com> Reviewed-by:
chunzhi hu <huchunzhi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Lang Cheng authored
mainline inclusion from mainline-v5.7 commit 026ded37 category: bugfix bugzilla: NA CVE: NA The qp depth shouldn't be allowed to be set to zero; after ensuring that, the subsequent process can be simplified. Also, when a qp is changed from reset to reset, the minimum qp depth capability was used to identify hip06 hardware; this should be changed into a more readable form. Link: https://lore.kernel.org/r/1584006624-11846-1-git-send-email-liweihang@huawei.com Signed-off-by:
Lang Cheng <chenglang@huawei.com> Signed-off-by:
Weihang Li <liweihang@huawei.com> Signed-off-by:
Jason Gunthorpe <jgg@mellanox.com> Signed-off-by:
Shunfeng Yang <yangshunfeng2@huawei.com> Reviewed-by:
chunzhi hu <huchunzhi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Xi Wang authored
mainline inclusion from mainline-v5.7 commit ae85bf92 category: bugfix bugzilla: NA CVE: NA Encapsulate the qp param setup related code into set_qp_param(). Link: https://lore.kernel.org/r/1582526258-13825-6-git-send-email-liweihang@huawei.com Signed-off-by:
Xi Wang <wangxi11@huawei.com> Signed-off-by:
Weihang Li <liweihang@huawei.com> Signed-off-by:
Jason Gunthorpe <jgg@mellanox.com> Signed-off-by:
Shunfeng Yang <yangshunfeng2@huawei.com> Reviewed-by:
chunzhi hu <huchunzhi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Xi Wang authored
mainline inclusion from mainline-v5.7 commit 24c22112 category: bugfix bugzilla: NA CVE: NA Encapsulate qp buffer allocation related code into 3 functions: alloc_qp_buf(), map_wqe_buf() and free_qp_buf(). Link: https://lore.kernel.org/r/1582526258-13825-5-git-send-email-liweihang@huawei.com Signed-off-by:
Xi Wang <wangxi11@huawei.com> Signed-off-by:
Weihang Li <liweihang@huawei.com> Signed-off-by:
Jason Gunthorpe <jgg@mellanox.com> Signed-off-by:
Shunfeng Yang <yangshunfeng2@huawei.com> Reviewed-by:
chunzhi hu <huchunzhi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Xi Wang authored
mainline inclusion from mainline-v5.7 commit e365b26c category: bugfix bugzilla: NA CVE: NA Wrap the duplicate code in the hip08 and hip06 qp destruction process as hns_roce_qp_destroy() to simplify the qp destroy flow. Link: https://lore.kernel.org/r/1582526258-13825-2-git-send-email-liweihang@huawei.com Signed-off-by:
Xi Wang <wangxi11@huawei.com> Signed-off-by:
Weihang Li <liweihang@huawei.com> Signed-off-by:
Jason Gunthorpe <jgg@mellanox.com> Signed-off-by:
Shunfeng Yang <yangshunfeng2@huawei.com> Reviewed-by:
chunzhi hu <huchunzhi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Leon Romanovsky authored
mainline inclusion from mainline-v5.2 commit 57425822 category: bugfix bugzilla: NA CVE: NA Verbs destroy callbacks are synchronous operations and can't be delayed. The expectation is that after driver returned from destroy function, the memory can be freed and user won't be able to access it again. Ditch workqueue implementation used in HNS driver. Fixes: d838c481 ("IB/hns: Fix the bug when destroy qp") Signed-off-by:
Leon Romanovsky <leonro@mellanox.com> Acked-by:
oulijun <oulijun@huawei.com> Signed-off-by:
Jason Gunthorpe <jgg@mellanox.com> Signed-off-by:
Shunfeng Yang <yangshunfeng2@huawei.com> Reviewed-by:
chunzhi hu <huchunzhi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Lijun Ou authored
mainline inclusion from mainline-v5.6 commit 468d020e category: bugfix bugzilla: NA CVE: NA Driver should first check whether the sge is valid, then fill the valid sge and the calculated total into hardware, otherwise invalid sges will cause an error. Fixes: 52e3b42a ("RDMA/hns: Filter for zero length of sge in hip08 kernel mode") Fixes: 7bdee415 ("RDMA/hns: Fill sq wqe context of ud type in hip08") Link: https://lore.kernel.org/r/1578571852-13704-1-git-send-email-liweihang@huawei.com Signed-off-by:
Lijun Ou <oulijun@huawei.com> Signed-off-by:
Weihang Li <liweihang@huawei.com> Signed-off-by:
Jason Gunthorpe <jgg@mellanox.com> Signed-off-by:
Shunfeng Yang <yangshunfeng2@huawei.com> Reviewed-by:
chunzhi hu <huchunzhi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Yixian Liu authored
mainline inclusion from mainline-v5.5 commit ec6adad0 category: bugfix bugzilla: NA CVE: NA There is no need to define max_post in hns_roce_wq, as it does same thing as wqe_cnt. Link: https://lore.kernel.org/r/1572952082-6681-2-git-send-email-liweihang@hisilicon.com Signed-off-by:
Yixian Liu <liuyixian@huawei.com> Signed-off-by:
Weihang Li <liweihang@hisilicon.com> Signed-off-by:
Jason Gunthorpe <jgg@mellanox.com> Signed-off-by:
Shunfeng Yang <yangshunfeng2@huawei.com> Reviewed-by:
chunzhi hu <huchunzhi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Xi Wang authored
mainline inclusion from mainline-v5.5 commit 99441ab5 category: bugfix bugzilla: NA CVE: NA Currently, more than 20 lines of duplicate code exist in function 'modify_qp_init_to_init' and function 'modify_qp_reset_to_init', which affects the readability of the code. Consolidate them. Link: https://lore.kernel.org/r/1562593285-8037-6-git-send-email-oulijun@huawei.com Signed-off-by:
Xi Wang <wangxi11@huawei.com> Signed-off-by:
Jason Gunthorpe <jgg@mellanox.com> Signed-off-by:
Shunfeng Yang <yangshunfeng2@huawei.com> Reviewed-by:
chunzhi hu <huchunzhi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Jason Gunthorpe authored
mainline inclusion from mainline-v5.5 commit 515f6000 category: bugfix bugzilla: NA CVE: NA The "ucmd->log_sq_bb_count" variable is a user controlled variable in the 0-255 range. If we shift more than the number of bits in an int then it's undefined behavior (it shift wraps), and potentially the int could become negative. Fixes: 9a443537 ("IB/hns: Add driver files for hns RoCE driver") Link: https://lore.kernel.org/r/20190608092514.GC28890@mwanda Reported-by:
Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by:
Jason Gunthorpe <jgg@mellanox.com> Reviewed-by:
Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by:
Shunfeng Yang <yangshunfeng2@huawei.com> Reviewed-by:
chunzhi hu <huchunzhi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
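A minimal sketch of the kind of guard this implies, assuming a user-supplied log2 value must be validated before being used as a shift amount (`sq_size_from_log` is a hypothetical helper, not the driver's actual fix):

```c
#include <stdbool.h>
#include <limits.h>

/* Illustrative guard: `1 << n` is undefined behavior when n is at
 * least the bit-width of the shifted type, so a user-controlled
 * 0-255 value must be range-checked first. */
static bool sq_size_from_log(unsigned char log_sq_bb_count,
                             unsigned long *out)
{
    if (log_sq_bb_count >= sizeof(unsigned long) * CHAR_BIT)
        return false;             /* would shift-wrap: refuse */
    *out = 1UL << log_sq_bb_count;
    return true;
}
```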
-
Xi Wang authored
driver inclusion category: bugfix bugzilla: NA CVE: NA This helper does the same as rdma_for_each_block(), except it works on a umem. This simplifies most of the call sites. Link: https://lore.kernel.org/r/4-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com Acked-by:
Miguel Ojeda <miguel.ojeda.sandonis@gmail.com> Acked-by:
Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by:
Jason Gunthorpe <jgg@nvidia.com> Signed-off-by:
Xi Wang <wangxi11@huawei.com> Signed-off-by:
Shunfeng Yang <yangshunfeng2@huawei.com> Reviewed-by:
chunzhi hu <huchunzhi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Xi Wang authored
driver inclusion category: bugfix bugzilla: NA CVE: NA This helper iterates over a DMA-mapped SGL and returns contiguous memory blocks aligned to a HW supported page size. Suggested-by:
Jason Gunthorpe <jgg@ziepe.ca> Signed-off-by:
Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by:
Jason Gunthorpe <jgg@mellanox.com> Signed-off-by:
Xi Wang <wangxi11@huawei.com> Signed-off-by:
Shunfeng Yang <yangshunfeng2@huawei.com> Reviewed-by:
chunzhi hu <huchunzhi@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Thadeu Lima de Souza Cascardo authored
mainline inclusion from mainline-v5.14-rc1 commit d5f9023fa61ee8b94f37a93f08e94b136cf1e463 category: bugfix bugzilla: NA CVE: CVE-2021-3609 -------------------------------- can_rx_register() callbacks may be called concurrently with can_rx_unregister(). The callbacks and callback data, though, are protected by RCU and the struct sock reference count. So the callback data is really attached to the life of sk, meaning that it should be released on sk_destruct. However, bcm_remove_op() calls tasklet_kill(), and RCU callbacks may be called under RCU softirq, so that cannot be used on kernels before the introduction of HRTIMER_MODE_SOFT. However, bcm_rx_handler() is called under RCU protection, so after calling can_rx_unregister(), we may call synchronize_rcu() in order to wait for any RCU read-side critical sections to finish. That is, bcm_rx_handler() won't be called anymore for those ops. So, we only free them, after we do that s...
-
Kemeng Shi authored
euleros inclusion category: bugfix bugzilla: NA ------------------------------ Struct page_idle_ctrl is allocated at the beginning of vm_idle_read(), but it is not freed when vm_idle_read() ends. Fixes: bad4d883 ("etmem: add etmem-scan feature") Signed-off-by:
Kemeng Shi <shikemeng@huawei.com> Reviewed-by:
Jing Xiangfeng <jingxiangfeng@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Masami Hiramatsu authored
stable inclusion from linux-4.19.163 commit 6bb78b3fff90fcf76991f689822ae58a4feee36d -------------------------------- commit 4e9a5ae8 upstream. Since insn.prefixes.nbytes can be bigger than the size of insn.prefixes.bytes[] when a prefix is repeated, the proper check must be insn.prefixes.bytes[i] != 0 and i < 4 instead of using insn.prefixes.nbytes. Introduce a for_each_insn_prefix() macro for this purpose. Debugged by Kees Cook <keescook@chromium.org>. [ bp: Massage commit message, sync with the respective header in tools/ and drop "we". ] Fixes: 2b144498 ("uprobes, mm, x86: Add the ability to install and remove uprobes breakpoints") Reported-by:
<syzbot+9b64b619f10f19d19a7c@syzkaller.appspotmail.com> Signed-off-by:
Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by:
Borislav Petkov <bp@suse.de> Reviewed-by:
Srikar Dronamraju <srikar@linux.vnet.ibm.com> Cc: stable@vger.kernel.org Li...
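A user-space sketch of the iteration rule described above; the macro name matches the kernel's, but the struct and loop shape are simplified for illustration:

```c
/* Simplified model: walk the 4-byte prefix array, stopping at the
 * first zero byte instead of trusting a possibly too-large nbytes
 * count (the bug the patch fixes). */
struct insn_prefixes { unsigned char bytes[4]; };

#define for_each_insn_prefix(p, idx, prefix) \
    for (idx = 0; idx < 4 && (prefix = (p)->bytes[idx]) != 0; idx++)

static int count_prefixes(const struct insn_prefixes *p)
{
    int i, n = 0;
    unsigned char pfx;

    for_each_insn_prefix(p, i, pfx)
        n++;
    return n;
}
```

The bound `idx < 4` is what keeps a repeated prefix from walking past the end of bytes[].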
-
- Jun 30, 2021
-
-
Yang Yingliang authored
This reverts commit 8fa0b010. Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Yang Yingliang authored
This reverts commit 9c4ec8f5. Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Yang Yingliang authored
This reverts commit d4fcfb2e. Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Yonglong Liu authored
driver inclusion category: bugfix bugzilla: NA CVE: NA ----------------------------- This patch is used to update driver version to 1.9.40.24. Signed-off-by:
Yonglong Liu <liuyonglong@huawei.com> Reviewed-by:
li yongxin <liyongxin1@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Yonglong Liu authored
driver inclusion category: bugfix bugzilla: NA CVE: NA ---------------------------- Signed-off-by:
Yonglong Liu <liuyonglong@huawei.com> Reviewed-by:
li yongxin <liyongxin1@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Colin Ian King authored
mainline inclusion from mainline-v5.13-rc1 commit d0494135f94c7ab5a9cf7a9094fbb233275c7ba6 category: bugfix bugzilla: NA CVE: NA ---------------------------- The reset_prepare and reset_done calls have a null pointer check on ae_dev, however ae_dev is being dereferenced via the call to hns3_is_phys_func with the ae_dev->pdev argument. Fix this by performing a null pointer check on ae_dev and hence short-circuiting the dereference to ae_dev on the call to hns3_is_phys_func. Addresses-Coverity: ("Dereference before null check") Fixes: 715c58e94f0d ("net: hns3: add suspend and resume pm_ops") Signed-off-by:
Colin Ian King <colin.king@canonical.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Yonglong Liu <liuyonglong@huawei.com> Reviewed-by:
li yongxin <liyongxin1@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Huazhong Tan authored
mainline inclusion from mainline-v5.2-rc1 commit 146e92c1 category: feature bugzilla: NA CVE: NA ---------------------------- Since the hardware does not handle mailboxes and the hardware reset includes TQP reset, it is unnecessary to reset the TQPs in hclgevf_ae_stop() while doing a VF reset. It is also unnecessary to reset the remaining TQPs when one reset fails. Signed-off-by:
Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by:
Peng Li <lipeng321@huawei.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Yonglong Liu <liuyonglong@huawei.com> Reviewed-by:
li yongxin <liyongxin1@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Yonglong Liu authored
driver inclusion category: cleanup bugzilla: NA CVE: NA ---------------------------- Signed-off-by:
Yonglong Liu <liuyonglong@huawei.com> Reviewed-by:
li yongxin <liyongxin1@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Yunsheng Lin authored
mainline inclusion from mainline-v5.13-rc1 commit 97b9e5c131f16e2e487139ba596f9e6df927ae87 category: feature bugzilla: NA CVE: NA ---------------------------- skb_put_padto() may fail because of memory failure; sw_err_cnt is already used to log memory failure in hns3_skb_linearize(), so use it to log the memory failure for skb_put_padto() too. Signed-off-by:
Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by:
Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Yonglong Liu <liuyonglong@huawei.com> Reviewed-by:
li yongxin <liyongxin1@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Yunsheng Lin authored
mainline inclusion from mainline-v5.13-rc1 commit 811c0830eb4ca8811ed80fe40378f622b9844835 category: feature bugzilla: NA CVE: NA ---------------------------- The actual on-wire size for a TSO skb should be (gso_segs - 1) * hdr + skb->len instead of skb->len, which can be seen by the user with the 'ethtool -S ethX' command, and 'Byte Queue Limit' also uses the send size stat to do the queue limiting, so add send_bytes in the desc_cb to record the actual send size for a skb. And since send_bytes is only for the tx desc_cb and page_offset is only for the rx desc, reuse the same space for both of them. Signed-off-by:
Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by:
Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Yonglong Liu <liuyonglong@huawei.com> Reviewed-by:
li yongxin <liyongxin1@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
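The wire-size formula above can be sketched directly (names are illustrative, not the driver's): the header goes on the wire once per segment, but skb->len already accounts for one copy of it.

```c
/* On-wire byte count for a TSO skb, per the formula above:
 * (gso_segs - 1) * hdr_len + skb_len. */
static unsigned long tso_wire_bytes(unsigned int gso_segs,
                                    unsigned int hdr_len,
                                    unsigned int skb_len)
{
    return (unsigned long)(gso_segs - 1) * hdr_len + skb_len;
}
```

For a non-TSO skb (gso_segs == 1) the result degenerates to skb_len, matching the old stat.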
-
Yunsheng Lin authored
mainline inclusion from mainline-v5.13-rc1 commit d5d5e0193ee8f88efbbc7f1471087255657bc19a category: feature bugzilla: NA CVE: NA ---------------------------- Currently the hns3 driver only handles xmit skbs with one level of fraglist skb; add handling for multiple levels by calling hns3_tx_bd_num() recursively when calculating the bd num and calling hns3_fill_skb_to_desc() recursively when filling tx descs. When the skb has a fraglist level of 24, the skb is simply dropped and stats.max_recursion_level is added to record the error. Move the stat handling from hns3_nic_net_xmit() to hns3_nic_maybe_stop_tx() in order to handle different error stats, and add the 'max_recursion_level' and 'hw_limitation' stats. Note that the max recursion level of 24 was chosen according to commit 48a1df65 ("skbuff: return -EMSGSIZE in skb_to_sgvec to prevent overflow"). As we are not able to find a testcase to verify the recursive fraglist case, a Fixes tag is not provided. Reported-by:
Barry Song <song.bao.hua@hisilicon.com> Signed-off-by:
Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by:
Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Yonglong Liu <liuyonglong@huawei.com> Reviewed-by:
li yongxin <liyongxin1@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
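A simplified model of the recursive counting with the 24-level cap (struct and names are illustrative, not the driver's actual types):

```c
/* Illustrative model: a fraglist skb may itself contain fraglist
 * skbs, so counting descriptors recurses; past 24 levels the caller
 * drops the skb, which a negative return signals here. */
#define MAX_RECURSION_LEVEL 24

struct fake_skb {
    int bd_num;                  /* descriptors for this skb alone */
    struct fake_skb *frag_list;  /* next fraglist level, or NULL */
};

static int count_bd_num(const struct fake_skb *skb, unsigned int level)
{
    if (level >= MAX_RECURSION_LEVEL)
        return -1;               /* too deep: caller drops the skb */

    int n = skb->bd_num;
    if (skb->frag_list) {
        int sub = count_bd_num(skb->frag_list, level + 1);
        if (sub < 0)
            return -1;           /* propagate the drop decision */
        n += sub;
    }
    return n;
}
```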
-
Yunsheng Lin authored
mainline inclusion from mainline-v5.10-rc1 commit 619ae331 category: feature bugzilla: NA CVE: NA ---------------------------- Use napi_consume_skb() to batch consuming skb when cleaning tx desc in NAPI polling. Signed-off-by:
Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by:
Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Yonglong Liu <liuyonglong@huawei.com> Reviewed-by:
li yongxin <liyongxin1@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Yunsheng Lin authored
mainline inclusion from mainline-v5.10-rc1 commit 48ee56fd category: feature bugzilla: NA CVE: NA ---------------------------- writel() can be used to order I/O vs memory by default when writing portable drivers. Use writel() to replace wmb() + writel_relaxed(), and writel() is dma_wmb() + writel_relaxed() for ARM64, so there is an optimization here because dma_wmb() is a lighter barrier than wmb(). Signed-off-by:
Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by:
Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Yonglong Liu <liuyonglong@huawei.com> Reviewed-by:
li yongxin <liyongxin1@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Yunsheng Lin authored
mainline inclusion from mainline-v5.10-rc1 commit 8c30e194 category: feature bugzilla: NA CVE: NA ---------------------------- Currently the HNS3_RING_RX_RING_FBDNUM_REG register is read to determine how many rx descs can be cleaned. To avoid the register read operation in the critical data path, use the valid bit in the rx desc to determine if a specific rx desc can be cleaned. The hns3 driver clears the valid bit in the rx desc before notifying the hw of the rx desc, and hw will only set the valid bit of the rx desc after the corresponding buffer is filled with packet data and the other fields in the rx desc are set accordingly. Add the hns3_rx_ring_move_fw() function to clear the valid bit in the rx desc before moving the rx ring's next_to_clean forward to avoid double cleaning a rx desc, and also add a dma_rmb() barrier in hns3_handle_rx_bd() to make sure the valid bit is set before reading other fields in the rx desc. Signed-off-by:
Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by:
Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Yonglong Liu <liuyonglong@huawei.com> Reviewed-by:
li yongxin <liyongxin1@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Yunsheng Lin authored
mainline inclusion from mainline-v5.10-rc1 commit 20d06ca2 category: feature bugzilla: NA CVE: NA ---------------------------- Currently the HNS3_RING_TX_RING_HEAD_REG register is read to determine how many tx descs can be cleaned. To avoid the register read operation in the critical data path, use the valid bit in the tx desc to determine if a specific tx desc can be cleaned. The hns3 driver sets the valid bit in the tx desc before ringing a doorbell to the hw, and hw will only clear the valid bit of the tx desc after the corresponding packet is sent out to the wire. And because next_to_use for the tx ring is a changing variable while the driver is filling tx descs, reuse the pull_len of the rx ring to record the tx descs that have been notified to the hw, so that hns3_nic_reclaim_desc() can decide how many tx descs' valid bits need checking when reclaiming tx descs. The io_err_cnt stat is also removed as it is not used anymore. Signed-off-by:
Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by:
Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Yonglong Liu <liuyonglong@huawei.com> Reviewed-by:
li yongxin <liyongxin1@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Yunsheng Lin authored
mainline inclusion from mainline-v5.10-rc1 commit f6061a05 category: feature bugzilla: NA CVE: NA ---------------------------- Use netdev_xmit_more() to defer the tx doorbell operation when the skb is passed to the driver continuously. By doing this we can improve the overall xmit performance by avoid some doorbell operations. Also, the tx_err_cnt stat is not used, so rename it to tx_more stat. Signed-off-by:
Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by:
Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Yonglong Liu <liuyonglong@huawei.com> Reviewed-by:
li yongxin <liyongxin1@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
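A minimal model of the doorbell deferral described above, with illustrative names (the real driver keys this off netdev_xmit_more()): the doorbell is rung only when no further skb is queued behind the current one.

```c
/* Illustrative sketch: defer the tx doorbell while the stack signals
 * more skbs are coming, flushing everything queued on the last one. */
static int doorbells;  /* doorbell writes actually issued */
static int tx_more;    /* mirrors the tx_more stat mentioned above */

static void tx_push(int xmit_more)
{
    if (xmit_more) {
        tx_more++;     /* defer: another skb follows immediately */
        return;
    }
    doorbells++;       /* flush everything queued so far */
}
```

Three back-to-back skbs thus cost one doorbell write instead of three.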
-
Yunsheng Lin authored
mainline inclusion from mainline-v5.10-rc1 commit aeda9bf8 category: feature bugzilla: NA CVE: NA ---------------------------- Batch the page reference count updates instead of doing them one at a time. By doing this we can improve the overall receive performance by avoid some atomic increment operations when the rx page is reused. Signed-off-by:
Yunsheng Lin <linyunsheng@huawei.com> Signed-off-by:
Huazhong Tan <tanhuazhong@huawei.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Yonglong Liu <liuyonglong@huawei.com> Reviewed-by:
li yongxin <liyongxin1@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
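The batching idea can be sketched as folding a locally accumulated reuse count into the shared refcount with a single atomic add instead of one atomic increment per reuse (names are illustrative, not the driver's):

```c
#include <stdatomic.h>

/* Illustrative sketch: accumulate page reuses in a plain local
 * counter, then publish them to the shared refcount in one atomic
 * operation when the batch ends. */
static atomic_int page_refcount = 1;

static void page_reuse_flush(int batched_reuses)
{
    if (batched_reuses)
        atomic_fetch_add(&page_refcount, batched_reuses);
}
```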
-