- Jun 01, 2021
-
-
Joao Martins authored
mainline inclusion from mainline-5.4 commit 97d3eb9d category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=34 CVE: NA When cpus != maxcpus cpuidle-haltpoll will fail to register all vcpus past the online ones and thus fail to register the idle driver. This is because cpuidle_add_sysfs() will return with -ENODEV as a consequence of get_cpu_device() returning no device for a non-existing CPU. Instead switch to cpuidle_register_driver() and manually register each of the present cpus through cpuhp_setup_state() callbacks and future ones that get onlined or offlined. This mimics similar logic that intel_idle does. Signed-off-by:
Joao Martins <joao.m.martins@oracle.com> Signed-off-by:
Boris Ostrovsky <boris.ostrovsky@oracle.com> Reviewed-by:
Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Yubo Miao <miaoyubo@huawei.com> Signed-off-by:
Xiangyou Xie <xiexiangyou@huawei.com> Reviewed-by:
Hailiang Zhang <zhang.zhanghailiang@huawei.com> Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Signed-off-by:
Jiajun Chen <chenjiajun8@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
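A minimal sketch of the registration scheme this describes, with illustrative names (haltpoll_cpu_online/offline, haltpoll_cpuidle_devices) rather than the exact upstream symbols: the driver is registered once with cpuidle_register_driver(), and per-CPU cpuidle devices are brought up and torn down through CPU hotplug callbacks so CPUs onlined after boot are covered as well.

#include <linux/cpuhotplug.h>
#include <linux/cpuidle.h>
#include <linux/init.h>
#include <linux/percpu.h>

static struct cpuidle_driver haltpoll_driver = {
	.name = "haltpoll",
	/* .states[] and .state_count omitted in this sketch */
};

static struct cpuidle_device __percpu *haltpoll_cpuidle_devices;

static int haltpoll_cpu_online(unsigned int cpu)
{
	struct cpuidle_device *dev = per_cpu_ptr(haltpoll_cpuidle_devices, cpu);

	if (!dev->registered) {
		dev->cpu = cpu;
		if (cpuidle_register_device(dev))
			return -EIO;
	}
	return 0;
}

static int haltpoll_cpu_offline(unsigned int cpu)
{
	struct cpuidle_device *dev = per_cpu_ptr(haltpoll_cpuidle_devices, cpu);

	if (dev->registered)
		cpuidle_unregister_device(dev);
	return 0;
}

static int __init haltpoll_init(void)
{
	int ret = cpuidle_register_driver(&haltpoll_driver);

	if (ret)
		return ret;

	haltpoll_cpuidle_devices = alloc_percpu(struct cpuidle_device);
	if (!haltpoll_cpuidle_devices) {
		cpuidle_unregister_driver(&haltpoll_driver);
		return -ENOMEM;
	}

	/* Runs the online callback on every present CPU now, and again on
	 * every CPU onlined later; the offline callback mirrors it. */
	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "cpuidle/haltpoll:online",
				haltpoll_cpu_online, haltpoll_cpu_offline);
	if (ret < 0) {
		free_percpu(haltpoll_cpuidle_devices);
		cpuidle_unregister_driver(&haltpoll_driver);
		return ret;
	}
	return 0;
}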
-
Joao Martins authored
mainline inclusion from mainline-5.4 commit 73214408 category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=34 CVE: NA Right now, guest governors have the following ratings: * ladder -> 10 * teo -> 19 * menu -> 20 * haltpoll -> 21 * ladder + nohz=off -> 25 The haltpoll governor got introduced and is now the default governor given its highest rating -- with ladder+nohz being the exception -- regardless of the idle driver in the guest. An example of an undesirable case is x86 KVM guests with MWAIT, which have intel_idle registered first and consequently will have haltpoll used as the governor, which would be limited to the poll state and state 1, leaving the other states unused. To keep the previous defaults we decrease the governor's rating to 9 (below the current lowest rating) and thus rely on the @governor switch in cpuidle_register_driver() to tie the haltpoll idle driver and governor together. Signed-off-by:
Joao Martins <joao.m.martins@oracle.com> Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Yubo Miao <miaoyubo@huawei.com> Signed-off-by:
Xiangyou Xie <xiexiangyou@huawei.com> Reviewed-by:
Hailiang Zhang <zhang.zhanghailiang@huawei.com> Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Signed-off-by:
Jiajun Chen <chenjiajun8@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
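The effect of the rating change, sketched as the governor descriptor it implies; only .rating reflects the commit, and the callback fields are omitted here.

#include <linux/cpuidle.h>

/* A rating of 9 sits below ladder (10), teo (19), menu (20) and
 * ladder+nohz=off (25), so haltpoll is never auto-selected and only
 * runs when a driver explicitly asks for it via @governor. */
static struct cpuidle_governor haltpoll_governor = {
	.name   = "haltpoll",
	.rating = 9,
	/* .enable/.select/.reflect callbacks omitted in this sketch */
};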
-
Joao Martins authored
mainline inclusion from mainline-5.4 commit cb5d8c45 category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=34 CVE: NA The recently introduced haltpoll driver is largely only useful with the haltpoll governor. To allow drivers to associate with a particular idle behaviour, add a @governor property to 'struct cpuidle_driver' and thus allow a cpuidle driver to switch to a *preferred* governor on idle driver registration. We save the previous governor, and when an idle driver is unregistered we switch back to that. The @governor can be overridden by the cpuidle.governor= boot param or alternatively be ignored if the governor doesn't exist. Signed-off-by:
Joao Martins <joao.m.martins@oracle.com> Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Yubo Miao <miaoyubo@huawei.com> Signed-off-by:
Xiangyou Xie <xiexiangyou@huawei.com> Reviewed-by:
Hailiang Zhang <zhang.zhanghailiang@huawei.com> Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Signed-off-by:
Jiajun Chen <chenjiajun8@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
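What a driver opting into the new property looks like, as a small sketch (state table omitted); the string names the preferred governor and, per the text above, can still be overridden by cpuidle.governor= or ignored if no such governor is registered.

#include <linux/cpuidle.h>

static struct cpuidle_driver haltpoll_driver = {
	.name     = "haltpoll",
	.governor = "haltpoll",	/* switched to on cpuidle_register_driver(),
				 * previous governor restored on unregister */
	/* .states[] and .state_count omitted in this sketch */
};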
-
Rafael J. Wysocki authored
mainline inclusion from mainline-5.1 commit 22782b3f category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=34 CVE: NA After commit ("cpuidle: Add cpuidle.governor= command line parameter") new cpuidle governors are not added to the list of available governors, so governor selection via sysfs doesn't work as expected (even though it is rarely used anyway). Fix that by making cpuidle_register_governor() add new governors to cpuidle_governors again. Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Yubo Miao <miaoyubo@huawei.com> Signed-off-by:
Xiangyou Xie <xiexiangyou@huawei.com> Reviewed-by:
Hailiang Zhang <zhang.zhanghailiang@huawei.com> Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Signed-off-by:
Jiajun Chen <chenjiajun8@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Rafael J. Wysocki authored
mainline inclusion from mainline-5.0 commit 61cb5758 category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=34 CVE: NA Add cpuidle.governor= command line parameter to allow the default cpuidle governor to be replaced. That is useful, for example, if someone running a tickful kernel wants to use the menu governor on it. Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Yubo Miao <miaoyubo@huawei.com> Signed-off-by:
Xiangyou Xie <xiexiangyou@huawei.com> Reviewed-by:
Hailiang Zhang <zhang.zhanghailiang@huawei.com> Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Signed-off-by:
Jiajun Chen <chenjiajun8@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
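A hedged sketch of how such a parameter is typically wired up; the variable name is illustrative, and the "cpuidle." prefix assumes the file is linked into a built-in object named cpuidle. Booting with cpuidle.governor=menu then forces the menu governor, matching the tickful-kernel example above.

#include <linux/cpuidle.h>
#include <linux/moduleparam.h>

/* Exposed on the kernel command line as cpuidle.governor=<name>;
 * read during governor registration to choose the default governor. */
static char param_governor[CPUIDLE_NAME_LEN];
module_param_string(governor, param_governor, CPUIDLE_NAME_LEN, 0444);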
-
Marcelo Tosatti authored
mainline inclusion from mainline-5.4 commit a1c4423b category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=34 CVE: NA When performing guest side polling, it is not necessary to also perform host side polling. So disable host side polling, via the new MSR interface, when loading the cpuidle-haltpoll driver. Signed-off-by:
Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Yubo Miao <miaoyubo@huawei.com> Signed-off-by:
Xiangyou Xie <xiexiangyou@huawei.com> Reviewed-by:
Hailiang Zhang <zhang.zhanghailiang@huawei.com> Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Signed-off-by:
Jiajun Chen <chenjiajun8@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
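A sketch of the guest side of that handshake, under the assumption that the MSR introduced by the KVM-side patch listed next is the poll-control MSR at index 0x4b564d05 and that writing 0 tells the host not to poll on HLT; the helper names are illustrative.

#include <linux/smp.h>
#include <asm/msr.h>

#define MSR_KVM_POLL_CONTROL	0x4b564d05	/* assumed index, see next entry */

static void haltpoll_set_poll_control(void *enable)
{
	/* 1 = host may poll on HLT, 0 = guest polls itself, host need not */
	wrmsrl(MSR_KVM_POLL_CONTROL, enable ? 1 : 0);
}

static void arch_haltpoll_enable(void)
{
	/* each vCPU owns its own copy of the MSR, so write it everywhere */
	on_each_cpu(haltpoll_set_poll_control, NULL, 1);
}

static void arch_haltpoll_disable(void)
{
	on_each_cpu(haltpoll_set_poll_control, (void *)1, 1);
}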
-
Marcelo Tosatti authored
mainline inclusion from mainline-5.3 commit 2d5ba19b category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=34 CVE: NA Add an MSR which allows the guest to disable host polling (specifically, cpuidle-haltpoll, when performing polling in the guest, disables host side polling). Signed-off-by:
Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by:
Paolo Bonzini <pbonzini@redhat.com> Signed-off-by:
Yubo Miao <miaoyubo@huawei.com> Signed-off-by:
Xiangyou Xie <xiexiangyou@huawei.com> Reviewed-by:
Hailiang Zhang <zhang.zhanghailiang@huawei.com> Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Signed-off-by:
Jiajun Chen <chenjiajun8@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Marcelo Tosatti authored
mainline inclusion from mainline-5.4 commit 2cffe9f6 category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=34 CVE: NA The cpuidle_haltpoll governor, in conjunction with the haltpoll cpuidle driver, allows guest vcpus to poll for a specified amount of time before halting. This provides the following benefits over host side polling: 1) The POLL flag is set while polling is performed, which allows a remote vCPU to avoid sending an IPI (and the associated cost of handling the IPI) when performing a wakeup. 2) The VM-exit cost can be avoided. The downside of guest side polling is that polling is performed even with other runnable tasks in the host. Results comparing halt_poll_ns and a server/client application where a small packet is ping-ponged:

host                                        --> 31.33
halt_poll_ns=300000 / no guest busy spin    --> 33.40 (93.8%)
halt_poll_ns=0 / guest_halt_poll_ns=300000  --> 32.73 (95.7%)

For the SAP HANA benchmarks (where idle_spin is a parameter of the previous version of the patch, results should be the same), with hpns == halt_poll_ns:

                           idle_spin=0/    idle_spin=800/   idle_spin=0/
                           hpns=200000     hpns=0           hpns=800000
DeleteC06T03 (100 thread)  1.76            1.71 (-3%)       1.78 (+1%)
InsertC16T02 (100 thread)  2.14            2.07 (-3%)       2.18 (+1.8%)
DeleteC00T01 (1 thread)    1.34            1.28 (-4.5%)     1.29 (-3.7%)
UpdateC00T03 (1 thread)    4.72            4.18 (-12%)      4.53 (-5%)

Signed-off-by:
Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Yubo Miao <miaoyubo@huawei.com> Signed-off-by:
Xiangyou Xie <xiexiangyou@huawei.com> Reviewed-by:
Hailiang Zhang <zhang.zhanghailiang@huawei.com> Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Signed-off-by:
Jiajun Chen <chenjiajun8@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Marcelo Tosatti authored
mainline inclusion from mainline-5.4 commit 7d4daeed category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=34 CVE: NA Since this field is shared by all governors, move it to cpuidle device structure. Signed-off-by:
Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Yubo Miao <miaoyubo@huawei.com> Signed-off-by:
Xiangyou Xie <xiexiangyou@huawei.com> Reviewed-by:
Hailiang Zhang <zhang.zhanghailiang@huawei.com> Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Signed-off-by:
Jiajun Chen <chenjiajun8@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Marcelo Tosatti authored
mainline inclusion from mainline-5.5 commit 36fcb429 category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=34 CVE: NA Commit 259231a0 ("cpuidle: add poll_limit_ns to cpuidle_device structure") changed, by mistake, the target residency from the first available sleep state to the last available sleep state (which should be longer). This might cause excessive polling. Fixes: 259231a0 ("cpuidle: add poll_limit_ns to cpuidle_device structure") Signed-off-by:
Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Xiangyou Xie <xiexiangyou@huawei.com> Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Signed-off-by:
Jiajun Chen <chenjiajun8@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Stephen Rothwell authored
mainline inclusion from mainline-5.4 commit 7dcddef6 category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=34 CVE: NA An x86_64 allmodconfig build produces these errors: x86_64-linux-gnu-ld: kernel/sched/core.o: in function `cpuidle_poll_time': core.c:(.text+0x230): multiple definition of `cpuidle_poll_time'; arch/x86/kernel/process.o:process.c:(.text+0xc0): first defined here (and more) Fixes: 259231a0 ("cpuidle: add poll_limit_ns to cpuidle_device structure") Signed-off-by:
Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Xiangyou Xie <xiexiangyou@huawei.com> Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Signed-off-by:
Jiajun Chen <chenjiajun8@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Marcelo Tosatti authored
mainline inclusion from mainline-5.4 commit 259231a0 category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=34 CVE: NA Add a poll_limit_ns variable to the cpuidle_device structure. Calculate and configure it in the new cpuidle_poll_time function, in case it's zero. Individual governors are allowed to override this value. Signed-off-by:
Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by:
Yubo Miao <miaoyubo@huawei.com> Signed-off-by:
Xiangyou Xie <xiexiangyou@huawei.com> Reviewed-by:
Hailiang Zhang <zhang.zhanghailiang@huawei.com> Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Signed-off-by:
Jiajun Chen <chenjiajun8@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
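The shape of the new helper, as a sketch rather than a verbatim copy: compute the limit once from the shallowest enabled state deeper than the polling state, cache it in the new dev->poll_limit_ns field, and let governors overwrite that value. This matches the behaviour after the follow-up fix listed above (commit 36fcb429).

#include <linux/cpuidle.h>
#include <linux/jiffies.h>
#include <linux/time64.h>

u64 cpuidle_poll_time(struct cpuidle_driver *drv, struct cpuidle_device *dev)
{
	u64 limit_ns = TICK_NSEC;	/* fallback if every deeper state is disabled */
	int i;

	if (dev->poll_limit_ns)
		return dev->poll_limit_ns;	/* cached or governor-provided value */

	for (i = 1; i < drv->state_count; i++) {
		if (drv->states[i].disabled || dev->states_usage[i].disable)
			continue;

		/* first enabled state deeper than the polling state */
		limit_ns = (u64)drv->states[i].target_residency * NSEC_PER_USEC;
		break;
	}

	dev->poll_limit_ns = limit_ns;
	return limit_ns;
}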
-
Marcelo Tosatti authored
mainline inclusion from mainline-5.4 commit fa86ee90 category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=34 CVE: NA Add a cpuidle driver that calls the architecture default_idle routine. To be used in conjunction with the haltpoll governor. Signed-off-by:
Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Yubo Miao <miaoyubo@huawei.com> Signed-off-by:
Xiangyou Xie <xiexiangyou@huawei.com> Reviewed-by:
Hailiang Zhang <zhang.zhanghailiang@huawei.com> Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Signed-off-by:
Jiajun Chen <chenjiajun8@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
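A sketch along those lines, kept close to what the description implies: state 0 is the usual polling state and state 1 simply calls the architecture default_idle() routine; the exact latencies and the poll-state initializer are assumptions.

#include <linux/cpuidle.h>
#include <linux/irqflags.h>
#include <linux/sched/idle.h>

static int default_enter_idle(struct cpuidle_device *dev,
			      struct cpuidle_driver *drv, int index)
{
	if (current_clr_polling_and_test()) {
		local_irq_enable();	/* a wakeup is already pending, bail out */
		return index;
	}
	default_idle();			/* architecture default HLT-style idle */
	return index;
}

static struct cpuidle_driver haltpoll_driver = {
	.name = "haltpoll",
	.states = {
		{ /* state 0 set up by cpuidle_poll_state_init() */ },
		{
			.enter            = default_enter_idle,
			.exit_latency     = 1,
			.target_residency = 1,
			.name             = "haltpoll idle",
			.desc             = "default architecture idle",
		},
	},
	.safe_state_index = 0,
	.state_count = 2,
};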
-
Doug Smythies authored
mainline inclusion from mainline-5.0 commit 1617971c category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=34 CVE: NA The default time is declared in units of microseconds, but is used as nanoseconds, resulting in significant accounting errors for idle state 0 time when all idle states deeper than 0 are disabled. Under these unusual conditions, we don't really care about the poll time limit anyhow. Fixes: 800fb34a ("cpuidle: poll_state: Disregard disable idle states") Signed-off-by:
Doug Smythies <dsmythies@telus.net> Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Xiangyou Xie <xiexiangyou@huawei.com> Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Signed-off-by:
Jiajun Chen <chenjiajun8@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Rafael J. Wysocki authored
mainline inclusion from mainline-5.0 commit 800fb34a category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=34 CVE: NA When computing the limit of time to spend in the loop in poll_idle(), use the target residency of the first enabled idle state deeper than state 0 instead of always using the target residency of state 1. This helps when state 1 is disabled for diagnostics, for instance. Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Yubo Miao <miaoyubo@huawei.com> Signed-off-by:
Xiangyou Xie <xiexiangyou@huawei.com> Reviewed-by:
Hailiang Zhang <zhang.zhanghailiang@huawei.com> Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Signed-off-by:
Jiajun Chen <chenjiajun8@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Rafael J. Wysocki authored
mainline inclusion from mainline-4.20 commit 01bad1c6 category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=34 CVE: NA If need_resched() returns "false", breaking out of the loop in poll_idle() will cause a new idle state to be selected, so in fact it usually doesn't make sense to spin in it longer than the target residency of the second state. [Note that the "polling" state is used only if there is at least one "real" state defined in addition to it, so the second state is always there.] On the other hand, breaking out of it early (say in case the next state is disabled) shouldn't hurt as it is polling anyway. For this reason, make the loop in poll_idle() break if the CPU has been spinning longer than the target residency of the second state (the "polling" state can only be state[0]). Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Yubo Miao <miaoyubo@huawei.com> Signed-off-by:
Xiangyou Xie <xiexiangyou@huawei.com> Reviewed-by:
Hailiang Zhang <zhang.zhanghailiang@huawei.com> Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Signed-off-by:
Jiajun Chen <chenjiajun8@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
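A sketch of the bounded loop this describes; the relax count used to amortize the clock reads is an assumption, and the limit is derived from states[1] because, as noted above, the polling state can only be state[0].

#include <linux/cpuidle.h>
#include <linux/sched.h>
#include <linux/sched/clock.h>
#include <linux/sched/idle.h>
#include <linux/time64.h>

#define POLL_IDLE_RELAX_COUNT	200	/* re-check the clock only every N spins */

static int poll_idle(struct cpuidle_device *dev,
		     struct cpuidle_driver *drv, int index)
{
	u64 time_start = local_clock();

	local_irq_enable();
	if (!current_set_polling_and_test()) {
		u64 limit_ns = (u64)drv->states[1].target_residency * NSEC_PER_USEC;
		unsigned int loop_count = 0;

		while (!need_resched()) {
			cpu_relax();
			if (loop_count++ < POLL_IDLE_RELAX_COUNT)
				continue;

			loop_count = 0;
			/* spinning past the next state's break-even point
			 * cannot pay off, so stop polling */
			if (local_clock() - time_start > limit_ns)
				break;
		}
	}
	current_clr_polling();

	return index;
}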
-
Rafael J. Wysocki authored
mainline inclusion from mainline-4.20 commit eb40a380 category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=34 CVE: NA It is not necessary to update data->last_state_idx in menu_select() as it only is used in menu_update() which only runs when data->needs_update is set and that is set only when updating data->last_state_idx in menu_reflect(). Accordingly, drop the update of data->last_state_idx from menu_select() and get rid of the (now redundant) "out" label from it. No intentional behavior changes. Signed-off-by:
Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by:
Yubo Miao <miaoyubo@huawei.com> Signed-off-by:
Xiangyou Xie <xiexiangyou@huawei.com> Reviewed-by:
Hailiang Zhang <zhang.zhanghailiang@huawei.com> Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Signed-off-by:
Jiajun Chen <chenjiajun8@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Daniel Borkmann authored
mainline inclusion from mainline-v5.13-rc4 commit a7036191277f9fa68d92f2071ddc38c09b1e5ee5 category: bugfix bugzilla: NA CVE: CVE-2021-33200 -------------------------------- In 801c6058d14a ("bpf: Fix leakage of uninitialized bpf stack under speculation") we replaced masking logic with direct loads of immediates if the register is a known constant. Given that in this case we do not apply any masking, there is also no reason for the operation to be truncated under the speculative domain. Therefore, there is also zero reason for the verifier to branch-off and simulate this case, it only needs to do it for unknown but bounded scalars. As a side-effect, this also enables a few test cases that were previously rejected due to simulation under zero truncation. Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Reviewed-by:
Piotr Krysiuk <piotras@gmail.com> Acked-by:
Alexei Starovoitov <ast@kernel.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Reviewed-by:
Kuohai Xu <xukuohai@huawei.com> Reviewed-by:
Xiu Jianfeng <xiujianfeng@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Daniel Borkmann authored
mainline inclusion from mainline-v5.13-rc4 commit bb01a1bba579b4b1c5566af24d95f1767859771e category: bugfix bugzilla: NA CVE: CVE-2021-33200 -------------------------------- Masking direction as indicated via mask_to_left is considered to be calculated once and then used to derive pointer limits. Thus, this needs to be placed into bpf_sanitize_info instead so we can pass it to sanitize_ptr_alu() call after the pointer move. Piotr noticed a corner case where the off reg causes masking direction change which then results in an incorrect final aux->alu_limit. Fixes: 7fedb63a8307 ("bpf: Tighten speculative pointer arithmetic mask") Reported-by:
Piotr Krysiuk <piotras@gmail.com> Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Reviewed-by:
Piotr Krysiuk <piotras@gmail.com> Acked-by:
Alexei Starovoitov <ast@kernel.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Reviewed-by:
Kuohai Xu <xukuohai@huawei.com> Reviewed-by:
Xiu Jianfeng <xiujianfeng@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Daniel Borkmann authored
mainline inclusion from mainline-v5.13-rc4 commit 3d0220f6861d713213b015b582e9f21e5b28d2e0 category: bugfix bugzilla: NA CVE: CVE-2021-33200 -------------------------------- Add a container structure struct bpf_sanitize_info which holds the current aux info, and update call-sites to sanitize_ptr_alu() to pass it in. This is needed for passing in additional state later on. Signed-off-by:
Daniel Borkmann <daniel@iogearbox.net> Reviewed-by:
Piotr Krysiuk <piotras@gmail.com> Acked-by:
Alexei Starovoitov <ast@kernel.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Reviewed-by:
Kuohai Xu <xukuohai@huawei.com> Reviewed-by:
Xiu Jianfeng <xiujianfeng@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
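A sketch of the container, including the member that the follow-up fix listed above moves into it so the masking direction is computed only once; this sits inside kernel/bpf/verifier.c where struct bpf_insn_aux_data is defined.

/* Aux state handed down to sanitize_ptr_alu() as one unit. */
struct bpf_sanitize_info {
	struct bpf_insn_aux_data aux;
	bool mask_to_left;	/* added by the CVE-2021-33200 follow-up above */
};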
-
- May 31, 2021
-
-
Xingui Yang authored
mainline inclusion from mainline-v5.12-rc6 commit 234e6d2c18f5 category: bugfix bugzilla: NA CVE: NA On Hisilicon Kunpeng920, ESP is set to 1 by default for all ports of the SATA controller. In some scenarios, some ports are not external SATA ports, and this causes disks connected to these ports to be identified as removable disks. So disable the SXS capability on the software side to prevent users from mistakenly considering non-removable disks as removable disks and performing related operations. Signed-off-by:
Xingui Yang <yangxingui@huawei.com> Signed-off-by:
Luo Jiaxing <luojiaxing@huawei.com> Reviewed-by:
John Garry <john.garry@huawei.com> Link: https://lore.kernel.org/r/1615544676-61926-1-git-send-email-luojiaxing@huawei.com Signed-off-by:
Jens Axboe <axboe@kernel.dk> Signed-off-by:
Yang Xingui <yangxingui@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Vasily Averin authored
mainline inclusion from mainline-v5.8-rc6 commit 7779b047 category: bugfix bugzilla: 39163 CVE: NA ------------------------------------------------- fuse_writepages() ignores some errors taken from fuse_writepages_fill(). I believe it is a bug: if .writepages is called with WB_SYNC_ALL it should either guarantee that all data was successfully saved or return an error. Fixes: 26d614df ("fuse: Implement writepages callback") Signed-off-by:
Vasily Averin <vvs@virtuozzo.com> Signed-off-by:
Miklos Szeredi <mszeredi@redhat.com> Signed-off-by:
Yu Kuai <yukuai3@huawei.com> Reviewed-by:
Zhang Yi <yi.zhang@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Trond Myklebust authored
mainline inclusion from mainline-v5.7-rc1 commit 75da9858 category: bugfix bugzilla: NA CVE: NA -------------------------------- We must not return from nfs_d_automount() without holding 2 references to the mount record. Doing so, will trigger the BUG() in finish_automount(). Also ensure that we don't try to reschedule the automount timer with a negative or zero timeout value. Fixes: 22a1ae9a ("NFS: If nfs_mountpoint_expiry_timeout < 0, do not expire submounts") Cc: stable@vger.kernel.org # v5.5+ Signed-off-by:
Trond Myklebust <trond.myklebust@hammerspace.com> Conflicts: fs/nfs/namespace.c Signed-off-by:
Zhang Xiaoxu <zhangxiaoxu5@huawei.com> Reviewed-by:
Zhang Yi <yi.zhang@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Trond Myklebust authored
mainline inclusion from mainline-v5.5-rc1 commit 22a1ae9a category: bugfix bugzilla: NA CVE: NA -------------------------------- If we set nfs_mountpoint_expiry_timeout to a negative value, then allow that to imply that we do not expire NFSv4 submounts. Signed-off-by:
Trond Myklebust <trond.myklebust@hammerspace.com> Signed-off-by:
Zhang Xiaoxu <zhangxiaoxu5@huawei.com> Reviewed-by:
Zhang Yi <yi.zhang@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Benjamin Coddington authored
mainline inclusion from mainline-v5.4-rc1 commit 581057c8 category: bugfix bugzilla: NA CVE: NA -------------------------------- This check has been hanging out since we used to have parallel paths to add dentry in nfs_create(), but that hasn't been the case for some years. Signed-off-by:
Benjamin Coddington <bcodding@redhat.com> Signed-off-by:
Anna Schumaker <Anna.Schumaker@Netapp.com> Signed-off-by:
Zhang Xiaoxu <zhangxiaoxu5@huawei.com> Reviewed-by:
Zhang Yi <yi.zhang@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Benjamin Coddington authored
mainline inclusion from mainline-v5.4-rc1 commit 17fd6e45 category: bugfix bugzilla: NA CVE: NA -------------------------------- Signed-off-by:
Benjamin Coddington <bcodding@redhat.com> Signed-off-by:
Anna Schumaker <Anna.Schumaker@Netapp.com> Signed-off-by:
Zhang Xiaoxu <zhangxiaoxu5@huawei.com> Reviewed-by:
Zhang Yi <yi.zhang@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Benjamin Coddington authored
mainline inclusion from mainline-v5.4-rc1 commit 406cd915 category: bugfix bugzilla: NA CVE: NA -------------------------------- Since commit b0c6108e ("nfs_instantiate(): prevent multiple aliases for directory inode"), nfs_instantiate() may succeed without actually instantiating the dentry that was passed in. That can be problematic for some callers in NFSv3, so this patch breaks things up so we can get the actual dentry obtained. Signed-off-by:
Benjamin Coddington <bcodding@redhat.com> Signed-off-by:
Anna Schumaker <Anna.Schumaker@Netapp.com> Signed-off-by:
Zhang Xiaoxu <zhangxiaoxu5@huawei.com> Reviewed-by:
Zhang Yi <yi.zhang@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
- May 29, 2021
-
-
Zheng Yejian authored
hulk inclusion category: bugfix bugzilla: 51349 CVE: CVE-2021-27365 --------------------------- sysfs_emit and sysfs_emit_at have a constraint that the output buffer should be aligned to PAGE_SIZE, but currently we can not guarantee that since 59bb4798 ("mm, sl[aou]b: guarantee natural alignment for kmalloc(power-of-two)") is not merged. This may lead to an unexpected warning when executing something like: 'cat /sys/class/iscsi_transport/tcp/handle'. As for the necessity of the address alignment constraint, Joe Perches (the code author) wrote that: > It's to make sure it's a PAGE_SIZE aligned buffer. > It's just so it would not be misused/abused in non-sysfs derived cases. So we'll not need to introduce 59bb4798 ("mm, sl[aou]b: guarantee natural alignment for kmalloc(power-of-two)") but just remove the address alignment constraint. For more discussions of the issue, see: https://www.spinics.net/lists/stable/msg455428.html Signed-off-by:
Zheng Yejian <zhengyejian1@huawei.com> Reviewed-by:
zhangyi (F) <yi.zhang@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Reviewed-by:
Zhang Yi <yi.zhang@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
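For reference, a sketch of the check being relaxed, under the assumption that the helper otherwise keeps its upstream shape (a WARN guard followed by vscnprintf into the page-sized buffer); this is not a verbatim copy of fs/sysfs/file.c.

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/sysfs.h>

int sysfs_emit(char *buf, const char *fmt, ...)
{
	va_list args;
	int len;

	/* Upstream also rejects unaligned buffers via offset_in_page(buf);
	 * that part of the guard is what this backport drops. */
	if (WARN(!buf, "invalid sysfs_emit: buf:%p\n", buf))
		return 0;

	va_start(args, fmt);
	len = vscnprintf(buf, PAGE_SIZE, fmt, args);
	va_end(args);

	return len;
}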
-
Yang Yingliang authored
hulk inclusion category: bugfix bugzilla: 51349 CVE: NA ------------------------------------------------- This patchset https://patchwork.kernel.org/project/linux-block/cover/20190826111627.7505-1-vbabka@suse.cz/ will cause a performance regression, so revert it and use another way to fix the warning introduced by the fix for CVE-2021-27365. Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Reviewed-by:
Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Yang Yingliang authored
hulk inclusion category: bugfix bugzilla: 51349 CVE: NA ------------------------------------------------- This patchset https://patchwork.kernel.org/project/linux-block/cover/20190826111627.7505-1-vbabka@suse.cz/ will cause a performance regression, so revert it and use another way to fix the warning introduced by the fix for CVE-2021-27365. Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Reviewed-by:
Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Yang Yingliang authored
hulk inclusion category: bugfix bugzilla: 51349 CVE: NA ------------------------------------------------- This patchset https://patchwork.kernel.org/project/linux-block/cover/20190826111627.7505-1-vbabka@suse.cz/ will cause a performance regression, so revert it and use another way to fix the warning introduced by the fix for CVE-2021-27365. Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Reviewed-by:
Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
- May 27, 2021
-
-
Thadeu Lima de Souza Cascardo authored
mainline inclusion from mainline-v5.13-rc1 commit d1f82808877bb10d3deee7cf3374a4eb3fb582db category: bugfix bugzilla: NA CVE: CVE-2021-3491 -------------------------------- Read and write operations are capped to MAX_RW_COUNT. Some read ops rely on that limit, and that is not guaranteed by the IORING_OP_PROVIDE_BUFFERS. Truncate those lengths when doing io_add_buffers, so buffer addresses still use the uncapped length. Also, take the chance and change struct io_buffer len member to __u32, so it matches struct io_provide_buffer len member. This fixes CVE-2021-3491, also reported as ZDI-CAN-13546. Fixes: ddf0322d ("io_uring: add IORING_OP_PROVIDE_BUFFERS") Reported-by:
Billy Jheng Bing-Jhong <(@st424204)> Signed-off-by:
Thadeu Lima de Souza Cascardo <cascardo@canonical.com> Signed-off-by:
Jens Axboe <axboe@kernel.dk> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Reviewed-by:
Xiu Jianfeng <xiujianfeng@huawei.com> Reviewed-by:
Zhang Yi <yi.zhang@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
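The essence of the fix, as a hedged sketch of the clamp io_add_buffers() applies; the helper name is hypothetical and the surrounding structure names (struct io_buffer, struct io_provide_buf) follow the commit text, so treat the internal layout as an assumption.

#include <linux/fs.h>		/* MAX_RW_COUNT */
#include <linux/kernel.h>	/* min_t() */
#include <linux/types.h>

/* Keep the provided length within what a single read/write may transfer;
 * the buffer address itself is stored unclamped. */
static inline __u32 io_clamp_buffer_len(__u32 requested_len)
{
	return min_t(__u32, requested_len, MAX_RW_COUNT);
}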
-
Shenwei Luo authored
kunpeng inclusion category: feature bugzilla: https://bugzilla.openeuler.org/show_bug.cgi?id=45 CVE: NA The ARM processor error section includes several ARM processor error information structures, several ARM processor context information structures, and several vendor-specific error information structures. Report all of this information to userspace via the perf interface. Shengwei Luo: backport for openEuler 20.xx kernel. V2: report severity info to userspace V1: fix the error in the original patch. Ensure all info is parsed correctly. Original-Author: Jason Tian <jason@os.amperecomputing.com> Signed-off-by:
Gong Chen <chengong15@huawei.com> Signed-off-by:
Shengwei Luo <luoshengwei@huawei.com> Cc: Chen Wei <chenwei68@huawei.com> Reviewed-by:
Xiongfeng Wang <wangxiongfeng2@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
- May 26, 2021
-
-
Miklos Szeredi authored
mainline inclusion from mainline-v5.8-rc1 commit 5ddd9ced category: bugfix bugzilla: 37636 CVE: NA ------------------------------------------------- A GETATTR request can race with FUSE_NOTIFY_INVAL_INODE, resulting in the attribute cache being updated with stale information after the invalidation. Fix this by bumping the attribute version in fuse_reverse_inval_inode(). Reported-by:
Krzysztof Rusek <rusek@9livesdata.com> Signed-off-by:
Miklos Szeredi <mszeredi@redhat.com> Conflict: fs/fuse/inode.c a. commit f15ecfef ("fuse: Introduce fi->lock to protect write related fields") is not backported, 'fi->lock' do not exist. b. commit 4510d86f ("fuse: Convert fc->attr_version into atomic64_t") is not backported, 'fc->lock' is needed to read 'fc->attr_version'. Signed-off-by:
Yu Kuai <yukuai3@huawei.com> Reviewed-by:
Zhang Yi <yi.zhang@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Xingjun Liu authored
anolis inclusion from anolis_master commit 7afda44c8a9e043f8f16dcc57dd8ef615522e2c8 category: performance bugzilla: NA CVE: NA --------------------------- alinux: random: speed up the initialization of module During the module initialization phase, entropy will be added to the entropy pool for every interrupt; this change should speed up the initialization of the random module. Before optimization: [ 22.180236] random: crng init done After optimization: [ 1.474832] random: crng init done Signed-off-by:
Xingjun Liu <xingjun.lxj@alibaba-inc.com> Reviewed-by:
Liu Jiang <gerry@linux.alibaba.com> Reviewed-by:
Caspar Zhang <caspar@linux.alibaba.com> Reviewed-by:
Jia Zhang <zhang.jia@linux.alibaba.com> Reviewed-by:
Yang Shi <yang.shi@linux.alibaba.com> Reviewed-by:
Liu Bo <bo.liu@linux.alibaba.com> Signed-off-by:
Chen Jialong <chenjialong@huawei.com> Reviewed-by:
Xiu Jianfeng <xiujianfeng@huawei.com> Reviewed-by:
Ziyuan Hu <huziyuan@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Pavel Skripkin authored
stable inclusion from linux-4.19.187 commit c166c0f5311dc9de687b8985574a5ee5166d367e CVE: CVE-2021-33033 -------------------------------- commit 1165affd484889d4986cf3b724318935a0b120d8 upstream. syzbot found general protection fault in crypto_destroy_tfm()[1]. It was caused by wrong clean up loop in llsec_key_alloc(). If one of the tfm array members is in IS_ERR() range it will cause general protection fault in clean up function [1]. Call Trace: crypto_free_aead include/crypto/aead.h:191 [inline] [1] llsec_key_alloc net/mac802154/llsec.c:156 [inline] mac802154_llsec_key_add+0x9e0/0xcc0 net/mac802154/llsec.c:249 ieee802154_add_llsec_key+0x56/0x80 net/mac802154/cfg.c:338 rdev_add_llsec_key net/ieee802154/rdev-ops.h:260 [inline] nl802154_add_llsec_key+0x3d3/0x560 net/ieee802154/nl802154.c:1584 genl_family_rcv_msg_doit+0x228/0x320 net/netlink/genetlink.c:739 genl_family_rcv_msg net/netlink/genetlink.c:783 [inline] genl_rcv_msg+0x328/0x580 net/netlink/genetlink.c:800 netlink_rcv_skb+0x153/0x420 net/netlink/af_netlink.c:2502 genl_rcv+0x24/0x40 net/netlink/genetlink.c:811 netlink_unicast_kernel net/netlink/af_netlink.c:1312 [inline] netlink_unicast+0x533/0x7d0 net/netlink/af_netlink.c:1338 netlink_sendmsg+0x856/0xd90 net/netlink/af_netlink.c:1927 sock_sendmsg_nosec net/socket.c:654 [inline] sock_sendmsg+0xcf/0x120 net/socket.c:674 ____sys_sendmsg+0x6e8/0x810 net/socket.c:2350 ___sys_sendmsg+0xf3/0x170 net/socket.c:2404 __sys_sendmsg+0xe5/0x1b0 net/socket.c:2433 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46 entry_SYSCALL_64_after_hwframe+0x44/0xae Signed-off-by:
Pavel Skripkin <paskripkin@gmail.com> Reported-by:
<syzbot+9ec037722d2603a9f52e@syzkaller.appspotmail.com> Acked-by:
Alexander Aring <aahringo@redhat.com> Link: https://lore.kernel.org/r/20210304152125.1052825-1-paskripkin@gmail.com Signed-off-by:
Stefan Schmidt <stefan@datenfreihafen.org> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Reviewed-by:
Xiu Jianfeng <xiujianfeng@huawei.com> Reviewed-by:
Yue Haibing <yuehaibing@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
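A sketch of the corrected clean-up pattern the fix amounts to: only array slots that hold a real tfm are freed, while ERR_PTR() values and never-initialized slots are skipped. The helper name is illustrative, not the upstream symbol.

#include <crypto/aead.h>
#include <linux/err.h>

static void llsec_free_tfms(struct crypto_aead **tfm, unsigned int count)
{
	unsigned int i;

	for (i = 0; i < count; i++) {
		/* calling crypto_free_aead() on an ERR_PTR() is what crashed above */
		if (!IS_ERR_OR_NULL(tfm[i]))
			crypto_free_aead(tfm[i]);
	}
}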
-
Paul Moore authored
stable inclusion from linux-4.19.181 commit a44af1c69737f9e64d5134c34eb9d5c4c2e04da1 CVE: CVE-2021-33033 -------------------------------- commit ad5d07f4a9cd671233ae20983848874731102c08 upstream. The current CIPSO and CALIPSO refcounting scheme for the DOI definitions is a bit flawed in that we: 1. Don't correctly match gets/puts in netlbl_cipsov4_list(). 2. Decrement the refcount on each attempt to remove the DOI from the DOI list, only removing it from the list once the refcount drops to zero. This patch fixes these problems by adding the missing "puts" to netlbl_cipsov4_list() and introduces a more conventional, i.e. not-buggy, refcounting mechanism to the DOI definitions. Upon the addition of a DOI to the DOI list, it is initialized with a refcount of one, removing a DOI from the list removes it from the list and drops the refcount by one; "gets" and "puts" behave as expected with respect to refcounts, increasing and decreasing the DOI's refcount by one. Fixes: b1edeb10 ("netlabel: Replace protocol/NetLabel linking with refrerence counts") Fixes: d7cce015 ("netlabel: Add support for removing a CALIPSO DOI.") Reported-by:
<syzbot+9ec037722d2603a9f52e@syzkaller.appspotmail.com> Signed-off-by:
Paul Moore <paul@paul-moore.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Reviewed-by:
Xiu Jianfeng <xiujianfeng@huawei.com> Reviewed-by:
Yue Haibing <yuehaibing@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Archie Pusaka authored
stable inclusion from linux-4.19.191 commit 75e26178e26f910f7f26c79c2824b726eecf0dfb CVE: CVE-2021-33034 -------------------------------- commit 5c4c8c9544099bb9043a10a5318130a943e32fc3 upstream. hci_chan can be created in 2 places: hci_loglink_complete_evt() if it is an AMP hci_chan, or l2cap_conn_add() otherwise. In theory, Only AMP hci_chan should be removed by a call to hci_disconn_loglink_complete_evt(). However, the controller might mess up, call that function, and destroy an hci_chan which is not initiated by hci_loglink_complete_evt(). This patch adds a verification that the destroyed hci_chan must have been init'd by hci_loglink_complete_evt(). Example crash call trace: Call Trace: __dump_stack lib/dump_stack.c:77 [inline] dump_stack+0xe3/0x144 lib/dump_stack.c:118 print_address_description+0x67/0x22a mm/kasan/report.c:256 kasan_report_error mm/kasan/report.c:354 [inline] kasan_report mm/kasan/report.c:412 [inline] kasan_report+0x251/0x28f mm/kasan/report.c:396 hci_send_acl+0x3b/0x56e net/bluetooth/hci_core.c:4072 l2cap_send_cmd+0x5af/0x5c2 net/bluetooth/l2cap_core.c:877 l2cap_send_move_chan_cfm_icid+0x8e/0xb1 net/bluetooth/l2cap_core.c:4661 l2cap_move_fail net/bluetooth/l2cap_core.c:5146 [inline] l2cap_move_channel_rsp net/bluetooth/l2cap_core.c:5185 [inline] l2cap_bredr_sig_cmd net/bluetooth/l2cap_core.c:5464 [inline] l2cap_sig_channel net/bluetooth/l2cap_core.c:5799 [inline] l2cap_recv_frame+0x1d12/0x51aa net/bluetooth/l2cap_core.c:7023 l2cap_recv_acldata+0x2ea/0x693 net/bluetooth/l2cap_core.c:7596 hci_acldata_packet net/bluetooth/hci_core.c:4606 [inline] hci_rx_work+0x2bd/0x45e net/bluetooth/hci_core.c:4796 process_one_work+0x6f8/0xb50 kernel/workqueue.c:2175 worker_thread+0x4fc/0x670 kernel/workqueue.c:2321 kthread+0x2f0/0x304 kernel/kthread.c:253 ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:415 Allocated by task 38: set_track mm/kasan/kasan.c:460 [inline] kasan_kmalloc+0x8d/0x9a mm/kasan/kasan.c:553 kmem_cache_alloc_trace+0x102/0x129 mm/slub.c:2787 kmalloc include/linux/slab.h:515 [inline] kzalloc include/linux/slab.h:709 [inline] hci_chan_create+0x86/0x26d net/bluetooth/hci_conn.c:1674 l2cap_conn_add.part.0+0x1c/0x814 net/bluetooth/l2cap_core.c:7062 l2cap_conn_add net/bluetooth/l2cap_core.c:7059 [inline] l2cap_connect_cfm+0x134/0x852 net/bluetooth/l2cap_core.c:7381 hci_connect_cfm+0x9d/0x122 include/net/bluetooth/hci_core.h:1404 hci_remote_ext_features_evt net/bluetooth/hci_event.c:4161 [inline] hci_event_packet+0x463f/0x72fa net/bluetooth/hci_event.c:5981 hci_rx_work+0x197/0x45e net/bluetooth/hci_core.c:4791 process_one_work+0x6f8/0xb50 kernel/workqueue.c:2175 worker_thread+0x4fc/0x670 kernel/workqueue.c:2321 kthread+0x2f0/0x304 kernel/kthread.c:253 ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:415 Freed by task 1732: set_track mm/kasan/kasan.c:460 [inline] __kasan_slab_free mm/kasan/kasan.c:521 [inline] __kasan_slab_free+0x106/0x128 mm/kasan/kasan.c:493 slab_free_hook mm/slub.c:1409 [inline] slab_free_freelist_hook+0xaa/0xf6 mm/slub.c:1436 slab_free mm/slub.c:3009 [inline] kfree+0x182/0x21e mm/slub.c:3972 hci_disconn_loglink_complete_evt net/bluetooth/hci_event.c:4891 [inline] hci_event_packet+0x6a1c/0x72fa net/bluetooth/hci_event.c:6050 hci_rx_work+0x197/0x45e net/bluetooth/hci_core.c:4791 process_one_work+0x6f8/0xb50 kernel/workqueue.c:2175 worker_thread+0x4fc/0x670 kernel/workqueue.c:2321 kthread+0x2f0/0x304 kernel/kthread.c:253 ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:415 The buggy address belongs to the object at ffff8881d7af9180 which belongs to 
the cache kmalloc-128 of size 128 The buggy address is located 24 bytes inside of 128-byte region [ffff8881d7af9180, ffff8881d7af9200) The buggy address belongs to the page: page:ffffea00075ebe40 count:1 mapcount:0 mapping:ffff8881da403200 index:0x0 flags: 0x8000000000000200(slab) raw: 8000000000000200 dead000000000100 dead000000000200 ffff8881da403200 raw: 0000000000000000 0000000080150015 00000001ffffffff 0000000000000000 page dumped because: kasan: bad access detected Memory state around the buggy address: ffff8881d7af9080: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb ffff8881d7af9100: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc >ffff8881d7af9180: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb ^ ffff8881d7af9200: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc ffff8881d7af9280: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc Signed-off-by:
Archie Pusaka <apusaka@chromium.org> Reported-by:
<syzbot+98228e7407314d2d4ba2@syzkaller.appspotmail.com> Reviewed-by:
Alain Michaud <alainm@chromium.org> Reviewed-by:
Abhishek Pandit-Subedi <abhishekpandit@chromium.org> Signed-off-by:
Marcel Holtmann <marcel@holtmann.org> Cc: George Kennedy <george.kennedy@oracle.com> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Reviewed-by:
Xiu Jianfeng <xiujianfeng@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-
Or Cohen authored
stable inclusion from linux-4.19.191 commit 48fba458fe54cc2a980a05c13e6c19b8b2cfb610 CVE: CVE-2021-23134 -------------------------------- commit c61760e6940dd4039a7f5e84a6afc9cdbf4d82b6 upstream. Commits 8a4cd82d ("nfc: fix refcount leak in llcp_sock_connect()") and c33b1cc62 ("nfc: fix refcount leak in llcp_sock_bind()") fixed a refcount leak bug in bind/connect but introduced a use-after-free if the same local is assigned to 2 different sockets. This can be triggered by the following simple program: int sock1 = socket( AF_NFC, SOCK_STREAM, NFC_SOCKPROTO_LLCP ); int sock2 = socket( AF_NFC, SOCK_STREAM, NFC_SOCKPROTO_LLCP ); memset( &addr, 0, sizeof(struct sockaddr_nfc_llcp) ); addr.sa_family = AF_NFC; addr.nfc_protocol = NFC_PROTO_NFC_DEP; bind( sock1, (struct sockaddr*) &addr, sizeof(struct sockaddr_nfc_llcp) ) bind( sock2, (struct sockaddr*) &addr, sizeof(struct sockaddr_nfc_llcp) ) close(sock1); close(sock2); Fix this by assigning NULL to llcp_sock->local after calling nfc_llcp_local_put. This addresses CVE-2021-23134. Reported-by:
Or Cohen <orcohen@paloaltonetworks.com> Reported-by:
Nadav Markus <nmarkus@paloaltonetworks.com> Fixes: c33b1cc62 ("nfc: fix refcount leak in llcp_sock_bind()") Signed-off-by:
Or Cohen <orcohen@paloaltonetworks.com> Signed-off-by:
David S. Miller <davem@davemloft.net> Signed-off-by:
Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com> Reviewed-by:
Xiu Jianfeng <xiujianfeng@huawei.com> Reviewed-by:
Yue Haibing <yuehaibing@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
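The pattern the fix applies in the bind/connect error paths, sketched as a hypothetical helper (the real change is open-coded in net/nfc/llcp_sock.c): drop the reference and clear the pointer so a second socket bound to the same local cannot trigger the use-after-free described above.

/* Assumes the net/nfc/llcp.h definitions of struct nfc_llcp_sock and
 * nfc_llcp_local_put(); the helper itself is illustrative. */
static void llcp_sock_drop_local(struct nfc_llcp_sock *llcp_sock)
{
	nfc_llcp_local_put(llcp_sock->local);
	llcp_sock->local = NULL;	/* no stale pointer left behind */
}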
-
Hans de Goede authored
mainline inclusion from mainline-5.7 commit 17e5888e category: bugfix bugzilla: NA CVE: NA ------------------------------------------------- Modern x86 laptops are starting to use GPIO pins as interrupts more and more, e.g. touchpads and touchscreens have almost all moved away from PS/2 and USB to using I2C with a GPIO pin as interrupt. Modern x86 laptops also have almost all moved to using s2idle instead of using the system S3 ACPI power state to suspend. The Intel and AMD pinctrl drivers do not define irq_retrigger handlers for the irqchips they register; this causes edge triggered interrupts which happen while suspended using s2idle to get lost. One specific example of this is the lid switch on some devices; lid switches used to be handled by the embedded-controller, but now the lid open/closed sensor is sometimes directly connected to a GPIO pin. On most devices the ACPI code for this looks like this: Method (_E00, ...) { Notify (LID0, 0x80) // Status Change } Where _E00 is an ACPI event handler for changes on both edges of the GPIO connected to the lid sensor; this event handler is then combined with an _LID method which directly reads the pin. When the device is resumed by opening the lid, the GPIO interrupt will wake the system, but because the pinctrl irqchip doesn't have an irq_retrigger handler, the Notify will not happen. This is not a problem in the case where the _LID method directly reads the GPIO, because the drivers/acpi/button.c code will call _LID on resume anyways. But some devices have an event handler for the GPIO connected to the lid sensor which looks like this: Method (_E00, ...) { if (LID_GPIO == One) LIDS = One else LIDS = Zero Notify (LID0, 0x80) // Status Change } And the _LID method returns the cached LIDS value. Since on open we do not re-run the edge-interrupt handler when we re-enable IRQs on resume (because of the missing irq_retrigger handler), _LID will now keep reporting closed, as LIDS was never changed to reflect the open status; this causes userspace to re-suspend the laptop again shortly after opening the lid. The Intel GPIO controllers do not allow implementing irq_retrigger without emulating it in software, at which point we are better off just using the generic HARDIRQS_SW_RESEND mechanism rather than re-implementing software emulation for this separately in approx. 14 different pinctrl drivers. Select HARDIRQS_SW_RESEND to solve the problem of edge-triggered GPIO interrupts not being re-triggered on resume when they were triggered during suspend (s2idle) and/or when they were the cause of the wakeup. This requires 008f1d60 ("x86/apic/vector: Force interupt handler invocation to irq context") and c16816ac ("genirq: Add protection against unsafe usage of generic_handle_irq()") to protect the APIC based interrupts from being wrecked by a software resend. Signed-off-by:
Hans de Goede <hdegoede@redhat.com> Signed-off-by:
Thomas Gleixner <tglx@linutronix.de> Link: https://lkml.kernel.org/r/20200123210242.53367-1-hdegoede@redhat.com Signed-off-by:
Liao Chang <liaochang1@huawei.com> Reviewed-by:
Hanjun Guo <guohanjun@huawei.com> Signed-off-by:
Yang Yingliang <yangyingliang@huawei.com>
-