- Dec 27, 2019
-
commit c32cc30c upstream.

cpu_to_le32/le32_to_cpu is defined in include/linux/byteorder/generic.h, which is not exported to user-space. UAPI headers must use the ones prefixed with a double underscore.

Detected by compile-testing exported headers:

  include/linux/nilfs2_ondisk.h: In function `nilfs_checkpoint_set_snapshot':
  include/linux/nilfs2_ondisk.h:536:17: error: implicit declaration of function `cpu_to_le32' [-Werror=implicit-function-declaration]
     cp->cp_flags = cpu_to_le32(le32_to_cpu(cp->cp_flags) | \
                    ^
  include/linux/nilfs2_ondisk.h:552:1: note: in expansion of macro `NILFS_CHECKPOINT_FNS'
   NILFS_CHECKPOINT_FNS(SNAPSHOT, snapshot)
   ^~~~~~~~~~~~~~~~~~~~
  include/linux/nilfs2_ondisk.h:536:29: error: implicit declaration of function `le32_to_cpu' [-Werror=implicit-function-declaration]
     cp->cp_flags = cpu_to_le32(le32_to_cpu(cp->cp_flags) | \
                                ^
  include/linux/nilfs2_ondisk.h:552:1: note: in expansion of macro `NILFS_CHECKPOINT_FNS'
   NILFS_CHECKPOINT_FNS(SNAPSHOT, snapshot)
   ^~~~~~~~~~~~~~~~~~~~
  include/linux/nilfs2_ondisk.h: In function `nilfs_segment_usage_set_clean':
  include/linux/nilfs2_ondisk.h:622:19: error: implicit declaration of function `cpu_to_le64' [-Werror=implicit-function-declaration]
     su->su_lastmod = cpu_to_le64(0);
                      ^~~~~~~~~~~

Link: http://lkml.kernel.org/r/20190605053006.14332-1-yamada.masahiro@socionext.com
Fixes: e63e88bc ("nilfs2: move ioctl interface and disk layout to uapi separately")
Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Acked-by: Ryusuke Konishi <konishi.ryusuke@gmail.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Joe Perches <joe@perches.com>
Cc: <stable@vger.kernel.org> [4.9+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
commit abbe3acd upstream.

Thinkpad t480 laptops had some touchpad features disabled, resulting in the loss of pinch to activities in GNOME, on wayland, and other touch gestures being slower. This patch adds the touchpad of the t480 to the smbus_pnp_ids whitelist to enable the extra features. In my testing this does not break suspend (on fedora, with wayland, and GNOME, using the rc-6 kernel), while also fixing the feature on a T480.

Signed-off-by: Cole Rogers <colerogers@disroot.org>
Acked-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
commit d17ba0f6 upstream.

The driver does not want to keep packets in the Tx queue when the link is lost. But the present code only resets the NIC to flush them; it does not prevent queuing new packets. Moreover, the reset sequence itself could generate new packets via netconsole, and the NIC falls into an endless reset loop.

This patch wakes the Tx queue only when the NIC is ready to send packets.

This is the proper fix for the problem addressed by commit 0f9e980b ("e1000e: fix cyclic resets at link up with active tx").

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Suggested-by: Alexander Duyck <alexander.duyck@gmail.com>
Tested-by: Joseph Yasi <joe.yasi@gmail.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Tested-by: Oleksandr Natalenko <oleksandr@redhat.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
commit caff422e upstream.

This reverts commit 0f9e980b. That change caused a false-positive warning about a hardware hang:

  e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
  IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
  e1000e 0000:00:1f.6 eth0: Detected Hardware Unit Hang:
    TDH                  <0>
    TDT                  <1>
    next_to_use          <1>
    next_to_clean        <0>
  buffer_info[next_to_clean]:
    time_stamp           <fffba7a7>
    next_to_watch        <0>
    jiffies              <fffbb140>
    next_to_watch.status <0>
  MAC Status             <40080080>
  PHY Status             <7949>
  PHY 1000BASE-T Status  <0>
  PHY Extended Status    <3000>
  PCI Status             <10>
  e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx

Apart from the warning, everything works fine. The original issue will be fixed properly in the following patch.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Reported-by: Joseph Yasi <joe.yasi@gmail.com>
Link: https://bugzilla.kernel.org/show_bug.cgi?id=203175
Tested-by: Joseph Yasi <joe.yasi@gmail.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Tested-by: Oleksandr Natalenko <oleksandr@redhat.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

Use polling mode to adapt to automated test cases.

Signed-off-by: Weihang Li <liweihang@hisilicon.com>
Reviewed-by: liuyixian <liuyixian@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

The application layer should be aware of a cmq timeout, so we assign last_status with CMD_EXEC_TIMEOUT in this case. In other situations, the app layer doesn't care about this variable.

Feature or Bugfix: Bugfix

Signed-off-by: Weihang Li <liweihang@hisilicon.com>
Reviewed-by: chenglang <chenglang@huawei.com>
Reviewed-by: liuyixian <liuyixian@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

When the rq/srq sge length is smaller than the sq sge length, a local length error would occur. So, for rq wqes and srq wqes, one reserved sge pointing to a reserved mr is used to avoid this error.

Feature or Bugfix: Bugfix

Signed-off-by: Weihang Li <liweihang@hisilicon.com>
Reviewed-by: liuyixian <liuyixian@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

This patch adds the structure hns_roce_ib_create_ah_resp to hns-abi.h.

Feature or Bugfix: Bugfix

Signed-off-by: oulijun <oulijun@huawei.com>
Reviewed-by: liuyixian <liuyixian@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

Even if there is no response from hardware, make sure that qp-related resources are completely released.

Feature or Bugfix: Bugfix

Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: liyangyang (M) <liyangyang20@huawei.com>
Reviewed-by: liuyixian <liuyixian@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

When exiting the "for loop", the actual value of pi has been increased by 1, which is compatible with the next calculation. But when pi is equal to "ci + hr_cq->ib_cq.cqe", the "break" is taken while pi holds the actual value, which leaves one cqe unprocessed. So the "==" should be modified to ">".

Feature or Bugfix: Bugfix

Signed-off-by: Yangyang Li <liyangyang20@huawei.com>
Signed-off-by: liyangyang (M) <liyangyang20@huawei.com>
Reviewed-by: oulijun <oulijun@huawei.com>
Reviewed-by: liuyixian <liuyixian@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

Since commit 518a2f19 ("dma-mapping: zero memory returned from dma_alloc_*"), the allocated memory is zeroed. So it is not necessary to zero it again or to allocate it with the __GFP_ZERO flag.

Feature or Bugfix: Bugfix

Signed-off-by: Xiang Chen <chenxiang66@hisilicon.com>
Signed-off-by: John Garry <john.garry@huawei.com>
Reviewed-by: John Garry <john.garry@huawei.com>
Signed-off-by: chenxiang (M) <chenxiang66@hisilicon.com>
Reviewed-by: tanxiaofei <tanxiaofei@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

We found a memory out-of-bounds issue in hisi_sas_debug_I_T_nexus_reset(). This function needs to use sas_phy (struct asd_sas_phy) when handling the link reset of a directly attached environment. Since the controller has 8 phys, only 8 sas_phy spaces are allocated at probe time. At the beginning of this function, we get the sas_phy pointer of the corresponding phy by "sas_phy = sas_ha->sas_phy[local_phy->number]". The problem is that in the directly attached case local_phy->number is guaranteed to be less than 8, but in the expander case local_phy->number can be greater than 8, which causes an out-of-bounds access when running "sas_phy = sas_ha->sas_phy[local_phy->number]".

We fix this OOB problem by moving the problematic code into the code section for the directly attached backplane.

Feature or Bugfix: Bugfix

Signed-off-by: Jiaxing Luo <luojiaxing@huawei.com>
Signed-off-by: John Garry <john.garry@huawei.com>
Signed-off-by: luojiaxing <luojiaxing@huawei.com>
Reviewed-by: chenxiang <chenxiang66@hisilicon.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

Currently, the reset interrupt is cleared in the reset task, which is too late. When the hardware finishes the previous reset, it can begin a new global/IMP reset; if this new reset is of the same type as the previous one, the driver will clear them together, and then the driver cannot tell that there is another reset, while the hardware still waits for the driver to deal with the second one. So this patch clears the PF's reset interrupt status in hclge_irq_handle(); the hardware waits for handshaking from the driver before doing reset, so the driver and hardware deal with resets one by one.

BTW, when the VF is doing a global/IMP reset, it reads the PF's reset interrupt register to learn whether the PF driver's re-initialization is done, since the VF's re-initialization should be done after the PF's. So we add a new command and a register bit to do that. When the VF receives the reset interrupt, it sets this bit; when the PF finishes re-initialization, it sends a command to clear this bit, and then the VF does its re-initialization.

Feature or Bugfix: Bugfix

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Reviewed-by: linyunsheng <linyunsheng@huawei.com>
Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

In the current process, the management table is missing after an IMP reset. This patch adds restoring the management table to the reset process.

Feature or Bugfix: Bugfix

Signed-off-by: YufengMo <moyufeng@huawei.com>
Reviewed-by: lipeng <lipeng321@huawei.com>
Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

Currently, the driver clears the handshake status when re-initializing the CMDQ, and does not recover this status when reset fails. This will cause the hardware to miss the handshake and do nothing anymore. So this patch delays clearing the handshake status until just before UP, and recovers this status when reset fails.

Fixes: ada13ee3 ("net: hns3: add handshake with hardware while doing reset")
Feature or Bugfix: Bugfix

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Reviewed-by: lipeng <lipeng321@huawei.com>
Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
mainline inclusion
from mainline-v5.2
commit b617158d
category: bugfix
bugzilla: 18679
CVE: NA

-------------------------------------------------

Some applications set tiny SO_SNDBUF values and expect TCP to just work. Recent patches to address CVE-2019-11478 broke them in case of losses, since retransmits might be prevented.

We should allow these flows to make progress.

This patch allows the first and last skb in the retransmit queue to be split even if memory limits are hit. It also adds some room due to the fact that tcp_sendmsg() and tcp_sendpage() might overshoot sk_wmem_queued by about one full TSO skb (64KB size). Note this allowance was already present in stable backports for kernels < 4.15.

Note for < 4.15 backports: tcp_rtx_queue_tail() will probably look like:

  static inline struct sk_buff *tcp_rtx_queue_tail(const struct sock *sk)
  {
          struct sk_buff *skb = tcp_send_head(sk);

          return skb ? tcp_write_queue_prev(sk, skb) : tcp_write_queue_tail(sk);
  }

Fixes: f070ef2a ("tcp: tcp_fragment() should apply sane memory limits")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Andrew Prout <aprout@ll.mit.edu>
Tested-by: Andrew Prout <aprout@ll.mit.edu>
Tested-by: Jonathan Lemon <jonathan.lemon@gmail.com>
Tested-by: Michal Kubecek <mkubecek@suse.cz>
Acked-by: Neal Cardwell <ncardwell@google.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Acked-by: Christoph Paasch <cpaasch@apple.com>
Cc: Jonathan Looney <jtl@netflix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Wenan Mao <maowenan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
mainline inclusion
from mainline-v5.1
commit ca2fe295
category: bugfix
bugzilla: NA
CVE: NA

-------------------------------------------------

Richard and Bruno both reported that my commit added a bug, and Bruno was able to determine that the problem came when a segment with a FIN packet was coalesced to a prior one in the tcp backlog queue.

It turns out the header prediction in tcp_rcv_established() looks back to TCP headers in the packet, not in the metadata (aka TCP_SKB_CB(skb)->tcp_flags).

The fast path in tcp_rcv_established() is not supposed to handle a FIN flag (it does not call tcp_fin()). Therefore we need to make sure to propagate the FIN flag, so that the coalesced packet does not go through the fast path, the same as a GRO packet carrying a FIN flag.

While we are at it, make sure we do not coalesce packets with RST or SYN, or if they do not have ACK set.

Many thanks to Richard and Bruno for pinpointing the bad commit, and to Richard for providing a first version of the fix.

Fixes: 4f693b55 ("tcp: implement coalescing on backlog queue")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Richard Purdie <richard.purdie@linuxfoundation.org>
Reported-by: Bruno Prémont <bonbons@sysophe.eu>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Biaoxiang <yebiaoxiang@huawei.com>
Reviewed-by: Mao Wenan <maowenan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
mainline inclusion
from mainline-v5.0-rc1
commit 4f693b55
category: perf
bugzilla: NA
CVE: NA

-------------------------------------------------

In case GRO is not as efficient as it should be or is disabled, we might have a user thread trapped in __release_sock() while the softirq handler floods packets up to the point we have to drop.

This patch balances work done from the user thread and softirq, to give more chances to __release_sock() to complete its work before new packets are added to the backlog.

This also helps if we receive many ACK packets, since GRO does not aggregate them.

This patch brings ~60% throughput increase on a receiver without GRO, but the spectacular gain is really the 1000x release_sock() latency reduction I have measured.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Yuchung Cheng <ycheng@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Biaoxiang <yebiaoxiang@huawei.com>
Conflicts: net/ipv4/tcp_ipv4.c
Reviewed-by: Mao Wenan <maowenan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
mainline inclusion
from mainline-v5.0-rc1
commit 85bdf7db
category: perf
bugzilla: NA
CVE: NA

-------------------------------------------------

Jean-Louis Dupond reported poor iscsi TCP receive performance that we tracked to backlog drops. Apparently we fail to send window updates reflecting the fact that we are under stress.

Note that we might lack a proper window increase when the backlog is fully processed, since __release_sock() clears sk->sk_backlog.len _after_ all skbs have been processed. This should not matter in practice. If we had a significant load through the socket backlog, we are in a dangerous situation.

Reported-by: Jean-Louis Dupond <jean-louis@dupond.be>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Tested-by: Jean-Louis Dupond <jean-louis@dupond.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Biaoxiang <yebiaoxiang@huawei.com>
Reviewed-by: Mao Wenan <maowenan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
mainline inclusion
from mainline-v5.0-rc1
commit 19119f29
category: perf
bugzilla: NA
CVE: NA

-------------------------------------------------

Neal pointed out that non-SACK flows might suffer from the ACK compression added in the following patch ("tcp: implement coalescing on backlog queue").

Instead of tweaking tcp_add_backlog() we can take into account how many ACKs were coalesced; this information will be available in skb_shinfo(skb)->gso_segs.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Biaoxiang <yebiaoxiang@huawei.com>
Reviewed-by: Mao Wenan <maowenan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
mainline inclusion
from mainline-v5.0-rc1
commit ebeef4bc
category: perf
bugzilla: NA
CVE: NA

-------------------------------------------------

Tell the compiler that most TCP flows are using SACK these days.

There is no need to add the unlikely() clause in tcp_is_reno(); the compiler is able to infer it.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Biaoxiang <yebiaoxiang@huawei.com>
Reviewed-by: Mao Wenan <maowenan@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
mainline inclusion
from mainline-v5.2
commit 2a017fd8
category: bugfix
bugzilla: 13690
CVE: CVE-2019-13631

-------------------------------------------------

The GTCO tablet input driver configures itself from an HID report sent via USB during the initial enumeration process. Some debugging messages are generated during the parsing. A debugging message indentation counter is not bounds-checked, leading to the ability for a specially crafted HID report to cause '-' and null bytes to be written past the end of the indentation array. As long as the kernel has CONFIG_DYNAMIC_DEBUG enabled, this code will not be optimized out.

This was discovered during code review after a previous syzkaller bug was found in this driver.

Signed-off-by: Grant Hernandez <granthernandez@google.com>
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Reviewed-by: Yao Hongbo <yaohongbo@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
hulk inclusion
category: bugfix
bugzilla: 18664
CVE: NA

---------------------------

When a write bio returns an error, it is added to conf->retry_list and waits for the raid1d thread to retry the write and acknowledge badblocks. In narrow_write_error(), the error bio is split in units of the badblock shift (such as one sector) and the raid1d thread issues them one by one. Not until all of the split bios have finished can the raid1d thread go on processing other things, which is time consuming.

But there is a scene in error handling that is not necessary. When the device has been set faulty, flush_bio_list() may end bios in pending_bio_list with an error status. Since these bios have not actually been issued to the device, retrying the write and acknowledging badblocks make no sense.

Even without that scene, when the device is faulty, badblocks info can not be written out to the device. Thus, we also have no need to handle the error IO.

Link: https://marc.info/?l=linux-raid&m=156351499609153&w=2
Signed-off-by: Yufen Yu <yuyufen@huawei.com>
Reviewed-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

Since commit 07777246 ("net: phy: marvell: change default m88e1510 LED configuration"), the active LED of Hip07 devices is always off, because Hip07 uses just 2 LEDs. This patch adds a phy_register_fixup_for_uid() for the m88e1510 to correct the LED configuration.

Feature or Bugfix: Bugfix
Fixes: 07777246 ("net: phy: marvell: change default m88e1510 LED configuration")

Signed-off-by: Yonglong Liu <liuyonglong@huawei.com>
Reviewed-by: linyunsheng <linyunsheng@huawei.com>
Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

Currently, hclge_reset_err_handle() asserts a global reset when the failing count is smaller than MAX_RESET_FAIL_CNT, which will affect other running functions. So this patch removes this upgrading, and re-schedules the reset task instead.

Feature or Bugfix: Bugfix

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Reviewed-by: linyunsheng <linyunsheng@huawei.com>
Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

hns3_init_all_ring() is already called outside of hns3_change_all_ring_bd_num(), so we should remove the call from hns3_change_all_ring_bd_num().

Feature or Bugfix: Bugfix

Signed-off-by: shenjian (K) <shenjian15@huawei.com>
Reviewed-by: lipeng <lipeng321@huawei.com>
Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

In some cases, hns3_nic_net_up() may fail, but the firmware is unaware of this. It may still send a link change message to the PF. The PF needs to drop it, otherwise it may cause a panic when the net stack is up while the device is down.

Feature or Bugfix: Bugfix

Signed-off-by: shenjian (K) <shenjian15@huawei.com>
Reviewed-by: lipeng <lipeng321@huawei.com>
Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

ras_status is used to keep the register's values, so its type should be u32 instead of int.

Fixes: 41eeed900e4c ("net: hns3: add support for handling IMP error")

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Reviewed-by: lipeng <lipeng321@huawei.com>
Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

This patch removes a duplicated assignment in hclge_reset_prepare_wait().

Fixes: 7cc5f1c13564 ("net: hns3: add support for handling IMP error")
Feature or Bugfix: Bugfix

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Reviewed-by: lipeng <lipeng321@huawei.com>
Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

Before calling get_reset_level, we should check whether it is NULL.

Feature or Bugfix: Bugfix

Signed-off-by: huangguangbin (A) <huangguangbin2@huawei.com>
Reviewed-by: xuzaibo <xuzaibo@huawei.com>
Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
driver inclusion
category: bugfix
bugzilla: NA
CVE: NA

Since UEFI or another driver may have used the NIC hardware, when loading the HNS3 driver we should clear the residual values in these CMDQ registers, otherwise they will affect normal behaviour.

Feature or Bugfix: Bugfix

Signed-off-by: Huazhong Tan <tanhuazhong@huawei.com>
Reviewed-by: lipeng <lipeng321@huawei.com>
Reviewed-by: Yunsheng Lin <linyunsheng@huawei.com>
Reviewed-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
hulk inclusion
category: bugfix
bugzilla: NA
CVE: NA

-------------------

This patch fixes the same issue as commit cdda3bbfd0ca ("pciehp: use completion to wait irq_thread 'pciehp_ist'"), but that previous patch didn't fix the issue completely. This patch powers off the slot directly instead of waking up the irq_thread 'pciehp_ist'.

This patch also checks 'slot_being_removed_rescanned' before powering off the slot, to avoid the deadlock issue similar to commit 764cafd9875e ("pciehp: fix a race between pciehp and removing operations by sysfs").

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Yao Hongbo <yaohongbo@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
hulk inclusion
category: bugfix
bugzilla: NA
CVE: NA

-------------------

This reverts commit cdda3bbfd0cab7dea3cfb37e9a8648f945c0241d.

If 'slot_being_removed_rescanned' is set when we power off the slot through sysfs, the irq_thread 'pciehp_ist' will be woken up, but it will return immediately and schedule this thread 3 seconds later. However, the sysfs write should wait until the actual operation is finished. We will power off the slot in 'power_write_file' instead of waking up an irq_thread, so this patch has no use. Let's revert it. A following patch will fix the issue.

Signed-off-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Reviewed-by: Yao Hongbo <yaohongbo@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
hulk inclusion
category: bugfix
bugzilla: 18665
CVE: NA

-------------------

The kernel memory accounting for all memory cgroups is not stable now; it could lead to a kmem.usage refcount leak. It's used as a debug feature for now, so disable it by default. We can use the following command line to enable or disable it: cgroup.memory=kmem or cgroup.memory=nokmem.

Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
mainline inclusion
from mainline-v5.2-rc7
commit a050fa54
category: bugfix
bugzilla: 18663
CVE: NA

---------------------------

When we run several VMs with PCI passthrough and GICv4 enabled, not pinning vCPUs, we will occasionally see the below warnings in dmesg:

  ITS queue timeout (65440 65504 480)
  ITS cmd its_build_vmovp_cmd failed

The reason for the above issue is that in BUILD_SINGLE_CMD_FUNC:

1. Post the write command.
2. Release the lock.
3. Start to read GITS_CREADR to get the reader pointer.
4. Compare the reader pointer to the target pointer.
5. If the reader pointer does not reach the target, sleep 1us and continue to try.

If we have several processors running the above concurrently, other CPUs will post write commands while the 1st CPU is waiting for completion. So we may have the below issue:

phase 1:
  ---rd_idx-----from_idx-----to_idx--0---------

wait 1us:

phase 2:
  --------------from_idx-----to_idx--0-rd_idx--

That is, rd_idx may fly ahead of to_idx, and in case to_idx is near the wrap point, rd_idx will wrap around. So the below condition will not be met even after 1s:

  if (from_idx < to_idx && rd_idx >= to_idx)

There is another theoretical issue. For a slow and busy ITS, the initial rd_idx may fall behind from_idx a lot, just as below:

  ---rd_idx---0--from_idx-----to_idx-----------

This will cause the wait function to exit too early.

Actually, it does not make much sense to use from_idx to judge if to_idx is wrapped, but we need an initial rd_idx while the lock is still acquired, and it can be used to judge whether to_idx is wrapped and whether the current rd_idx is wrapped.

We switch to a method of calculating the delta of two adjacent reads and accumulating it to get the sum, so that we can get the real rd_idx from the wrapped value even when the queue is almost full.

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Jason Cooper <jason@lakedaemon.net>
Signed-off-by: Heyi Guo <guoheyi@huawei.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
mainline inclusion
from mainline-v5.2
commit 9176ab1b
category: bugfix
bugzilla: 16631
CVE: NA

---------------------------

The user value is validated after converting the timeval to a timespec, but for a wide range of negative tv_usec values the multiplication overflow turns them into positive numbers. So the later validation does not catch the invalid input.

Signed-off-by: zhengbin <zhengbin13@huawei.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/1562460701-113301-1-git-send-email-zhengbin13@huawei.com
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Xiongfeng Wang <wangxiongfeng2@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
mainline inclusion
from mainline-v5.2-rc2
commit a3839bc6
category: bugfix
bugzilla: 16233
CVE: NA

---------------------------------------------------------

My static checker complains about this line from dmz_get_zoned_device():

  aligned_capacity = dev->capacity & ~(blk_queue_zone_sectors(q) - 1);

The problem is that "aligned_capacity" and "dev->capacity" are sector_t type (which is a u64 under most configs) but blk_queue_zone_sectors(q) returns a u32, so the higher 32 bits in aligned_capacity are cleared to zero. This patch adds a cast to address the issue.

Fixes: 114e0259 ("dm zoned: ignore last smaller runt zone")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: SunKe <sunke32@huawei.com>
Reviewed-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
mainline inclusion
from mainline-v5.2
commit 7b8c87b2
category: performance
bugzilla: NA
CVE: NA

--------------------------------------------------

Add a coherency_max_size variable to record the maximum cache line size. cache_line_size() is derived from the CTR_EL0.CWG field and is called mostly by I/O device drivers. For some platforms like the HiSilicon Kunpeng920 server SoC, cache line sizes are different between the L1/L2 caches and the L3 cache: the L1 cache line size is 64 bytes while L3's is 128 bytes, but CTR_EL0.CWG is misreporting using the L1 cache line size. We should use the correct value, which is important for I/O performance.

Let's update the cache line size if it is detected from DT or PPTT information.

Cc: Will Deacon <will.deacon@arm.com>
Cc: Jeremy Linton <jeremy.linton@arm.com>
Cc: Zhenfa Qiu <qiuzhenfa@hisilicon.com>
Reported-by: Zhenfa Qiu <qiuzhenfa@hisilicon.com>
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
mainline inclusion
from mainline-v5.2
commit 9a83c84c
category: performance
bugzilla: NA
CVE: NA

--------------------------------------------------

Add a coherency_max_size variable to record the maximum cache line size for different cache levels. If it is available, we will use it as the cache line size; otherwise we will use the CTR_EL0.CWG value reported by cache_line_size() for arm64.

Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: Jeremy Linton <jeremy.linton@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Reviewed-by: Sudeep Holla <sudeep.holla@arm.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Reviewed-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-
mainline inclusion
from mainline-v5.2-rc1
commit 5daab580
category: bugfix
bugzilla: 15934
CVE: NA

-------------------------------------------------------------------------

The kernel parameter igfx_off is used to disable DMA remapping for the Intel integrated graphics device. It was designed for bare metal cases where a dedicated IOMMU is used for graphics. This doesn't apply to the virtual IOMMU case where an include-all IOMMU is used. This makes the kernel parameter work with the virtual IOMMU as well.

Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Suggested-by: Kevin Tian <kevin.tian@intel.com>
Fixes: c0771df8 ("intel-iommu: Export a flag indicating that the IOMMU is used for iGFX.")
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Tested-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Hanjun Guo <guohanjun@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
-