- Oct 11, 2022
Kuniyuki Iwashima authored
stable inclusion from stable-v4.19.257 commit da89cab514b3ec7fdedb378617160c47fe3b60a9
category: bugfix
bugzilla: https://gitee.com/openeuler/kernel/issues/I5UQH4
CVE: NA

--------------------------------

[ Upstream commit c42b7cddea47503411bfb5f2f93a4154aaffa2d9 ]

While reading sysctl_net_busy_poll, it can be changed concurrently. Thus, we need to add READ_ONCE() to its reader.

Fixes: 06021292 ("net: add low latency socket poll")
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Yongqiang Liu <liuyongqiang13@huawei.com>
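
A minimal sketch of the pattern this backport applies; hedged, since the exact call site in include/net/busy_poll.h varies by kernel version, but net_busy_loop_on() is the canonical reader of this sysctl:

    /* Sketch: the sysctl can be written concurrently by the sysctl
     * handler, so the lockless reader must load it with READ_ONCE().
     */
    static inline bool net_busy_loop_on(void)
    {
            return READ_ONCE(sysctl_net_busy_poll);
    }

Without the annotation the compiler may reload or tear the access; with it, KCSAN stops flagging the read/write pair as a data race.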
-
- Oct 25, 2021
Eric Dumazet authored
stable inclusion from linux-4.19.200 commit c1a5cd807960d07381364c7b05aa3a43eb6d3a2f

--------------------------------

[ Upstream commit 0dbffbb5335a1e3aa6855e4ee317e25e669dd302 ]

sk_ll_usec is read locklessly from sk_can_busy_loop() while another thread can change its value in sock_setsockopt(). This is correct but needs annotations.

BUG: KCSAN: data-race in __skb_try_recv_datagram / sock_setsockopt

write to 0xffff88814eb5f904 of 4 bytes by task 14011 on cpu 0:
 sock_setsockopt+0x1287/0x2090 net/core/sock.c:1175
 __sys_setsockopt+0x14f/0x200 net/socket.c:2100
 __do_sys_setsockopt net/socket.c:2115 [inline]
 __se_sys_setsockopt net/socket.c:2112 [inline]
 __x64_sys_setsockopt+0x62/0x70 net/socket.c:2112
 do_syscall_64+0x4a/0x90 arch/x86/entry/common.c:47
 entry_SYSCALL_64_after_hwframe+0x44/0xae

read to 0xffff88814eb5f904 of 4 bytes by task 14001 on cpu 1:
 sk_can_busy_loop include/net/busy_poll.h:41 [inline]
 __skb_try_recv_datagram+0x14f/0x320 net/core/data...
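
The same annotation pattern applies on both sides of this race; a hedged sketch, with the function shape simplified from the busy_poll.h of that era:

    /* Writer side (sock_setsockopt(), sketch): publish the value once. */
    WRITE_ONCE(sk->sk_ll_usec, val);

    /* Reader side (sk_can_busy_loop(), sketch): load it once, locklessly. */
    static inline bool sk_can_busy_loop(const struct sock *sk)
    {
            return READ_ONCE(sk->sk_ll_usec) && !signal_pending(current);
    }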
-
- Dec 27, 2019
Eric Dumazet authored
[ Upstream commit ee8d153d ]

We already annotated most accesses to sk->sk_napi_id. We missed sk_mark_napi_id() and sk_mark_napi_id_once(), which might be called without the socket lock held in the UDP stack.

KCSAN reported:

BUG: KCSAN: data-race in udpv6_queue_rcv_one_skb / udpv6_queue_rcv_one_skb

write to 0xffff888121c6d108 of 4 bytes by interrupt on cpu 0:
 sk_mark_napi_id include/net/busy_poll.h:125 [inline]
 __udpv6_queue_rcv_skb net/ipv6/udp.c:571 [inline]
 udpv6_queue_rcv_one_skb+0x70c/0xb40 net/ipv6/udp.c:672
 udpv6_queue_rcv_skb+0xb5/0x400 net/ipv6/udp.c:689
 udp6_unicast_rcv_skb.isra.0+0xd7/0x180 net/ipv6/udp.c:832
 __udp6_lib_rcv+0x69c/0x1770 net/ipv6/udp.c:913
 udpv6_rcv+0x2b/0x40 net/ipv6/udp.c:1015
 ip6_protocol_deliver_rcu+0x22a/0xbe0 net/ipv6/ip6_input.c:409
 ip6_input_finish+0x30/0x50 net/ipv6/ip6_input.c:450
 NF_HOOK include/linux/netfilter.h:305 [inline]
 NF_HOOK include/linux/netfilter.h:299 [inline]
 ip6_input+0x177/0x190 net/ipv6/ip6_input.c:459
 dst_input include/net/dst.h:442 [inline]
 ip6_rcv_finish+0x110/0x140 net/ipv6/ip6_input.c:76
 NF_HOOK include/linux/netfilter.h:305 [inline]
 NF_HOOK include/linux/netfilter.h:299 [inline]
 ipv6_rcv+0x1a1/0x1b0 net/ipv6/ip6_input.c:284
 __netif_receive_skb_one_core+0xa7/0xe0 net/core/dev.c:5010
 __netif_receive_skb+0x37/0xf0 net/core/dev.c:5124
 process_backlog+0x1d3/0x420 net/core/dev.c:5955
 napi_poll net/core/dev.c:6392 [inline]
 net_rx_action+0x3ae/0xa90 net/core/dev.c:6460

write to 0xffff888121c6d108 of 4 bytes by interrupt on cpu 1:
 sk_mark_napi_id include/net/busy_poll.h:125 [inline]
 __udpv6_queue_rcv_skb net/ipv6/udp.c:571 [inline]
 udpv6_queue_rcv_one_skb+0x70c/0xb40 net/ipv6/udp.c:672
 udpv6_queue_rcv_skb+0xb5/0x400 net/ipv6/udp.c:689
 udp6_unicast_rcv_skb.isra.0+0xd7/0x180 net/ipv6/udp.c:832
 __udp6_lib_rcv+0x69c/0x1770 net/ipv6/udp.c:913
 udpv6_rcv+0x2b/0x40 net/ipv6/udp.c:1015
 ip6_protocol_deliver_rcu+0x22a/0xbe0 net/ipv6/ip6_input.c:409
 ip6_input_finish+0x30/0x50 net/ipv6/ip6_input.c:450
 NF_HOOK include/linux/netfilter.h:305 [inline]
 NF_HOOK include/linux/netfilter.h:299 [inline]
 ip6_input+0x177/0x190 net/ipv6/ip6_input.c:459
 dst_input include/net/dst.h:442 [inline]
 ip6_rcv_finish+0x110/0x140 net/ipv6/ip6_input.c:76
 NF_HOOK include/linux/netfilter.h:305 [inline]
 NF_HOOK include/linux/netfilter.h:299 [inline]
 ipv6_rcv+0x1a1/0x1b0 net/ipv6/ip6_input.c:284
 __netif_receive_skb_one_core+0xa7/0xe0 net/core/dev.c:5010
 __netif_receive_skb+0x37/0xf0 net/core/dev.c:5124
 process_backlog+0x1d3/0x420 net/core/dev.c:5955

Reported by Kernel Concurrency Sanitizer on:
CPU: 1 PID: 10890 Comm: syz-executor.0 Not tainted 5.4.0-rc3+ #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011

Fixes: e68b6e50 ("udp: enable busy polling for all sockets")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: syzbot <syzkaller@googlegroups.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
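
A hedged sketch of the annotated helpers this commit describes (the real versions sit behind CONFIG_NET_RX_BUSY_POLL in include/net/busy_poll.h):

    /* Called from softirq context without the socket lock, so the
     * store must be a single annotated write.
     */
    static inline void sk_mark_napi_id(struct sock *sk,
                                       const struct sk_buff *skb)
    {
            WRITE_ONCE(sk->sk_napi_id, skb->napi_id);
    }

    /* Variant for unconnected sockets: only record the first NAPI id. */
    static inline void sk_mark_napi_id_once(struct sock *sk,
                                            const struct sk_buff *skb)
    {
            if (!READ_ONCE(sk->sk_napi_id))
                    WRITE_ONCE(sk->sk_napi_id, skb->napi_id);
    }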
-
- Jul 31, 2018
Christoph Hellwig authored
Fold it into the only caller to make the code simpler and easier to read.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Christoph Hellwig authored
There is no point in hiding this logic in a helper. Also remove the useless events != 0 check and only busy loop once we know we actually have a poll method.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Jul 02, 2018
Amritha Nambiar authored
This patch adds a new field to sock_common, 'skc_rx_queue_mapping', which holds the receive queue number for the connection. The Rx queue is marked in tcp_finish_connect() to allow a client app to do SO_INCOMING_NAPI_ID after a connect() call to get the right queue association for a socket. The Rx queue is also marked in tcp_conn_request() to allow the syn-ack to go out on the right tx-queue associated with the queue on which the syn was received.

Signed-off-by: Amritha Nambiar <amritha.nambiar@intel.com>
Signed-off-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
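
From userspace, the point of marking the queue early is that an application can ask for the association right after connect(). A minimal sketch, with error handling abbreviated (SO_INCOMING_NAPI_ID is the real option name):

    #include <stdio.h>
    #include <sys/socket.h>

    /* Query which NAPI context feeds this connected socket, so the
     * application can pin its handling thread accordingly.
     */
    static void print_napi_id(int fd)
    {
            unsigned int napi_id = 0;
            socklen_t len = sizeof(napi_id);

            if (getsockopt(fd, SOL_SOCKET, SO_INCOMING_NAPI_ID,
                           &napi_id, &len) == 0)
                    printf("socket fed by NAPI id %u\n", napi_id);
    }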
-
- May 26, 2018
Christoph Hellwig authored
Factor out two busy poll related helpers for later reuse, and remove a comment that isn't very helpful, especially with the __poll_t annotations in place.

Signed-off-by: Christoph Hellwig <hch@lst.de>
-
- Aug 12, 2017
Daniel Borkmann authored
MIN_NAPI_ID is used in various places outside of CONFIG_NET_RX_BUSY_POLL wrapping, so when it's not set we run into build errors such as:

  net/core/dev.c: In function 'dev_get_by_napi_id':
  net/core/dev.c:886:16: error: 'MIN_NAPI_ID' undeclared (first use in this function)
     if (napi_id < MIN_NAPI_ID)
                   ^~~~~~~~~~~

Thus, have MIN_NAPI_ID always defined to fix these errors.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
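
The shape of the fix, sketched; the value shown matches the upstream definition of this era, while the surrounding layout is illustrative:

    /* include/net/busy_poll.h: keep MIN_NAPI_ID visible regardless of
     * the config. Ids below it are reserved (the field doubles as
     * sender_cpu in the sk_buff); valid NAPI ids start at NR_CPUS + 1.
     */
    #define MIN_NAPI_ID ((unsigned int)(NR_CPUS + 1))

    #ifdef CONFIG_NET_RX_BUSY_POLL
    /* real busy-poll helpers ... */
    #else
    /* no-op stubs ... */
    #endif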
-
- Mar 25, 2017
Sridhar Samudrala authored
Move the core functionality in sk_busy_loop() to napi_busy_loop() and make it independent of sk. This enables re-using this function in the epoll busy loop implementation.

Signed-off-by: Sridhar Samudrala <sridhar.samudrala@intel.com>
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
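
After the split, the socket variant reduces to a thin wrapper; a hedged sketch of that shape, simplified from the actual patch:

    /* sk_busy_loop() just resolves the socket's NAPI id and delegates;
     * napi_busy_loop() takes an end-condition callback instead of a sk.
     */
    static inline void sk_busy_loop(struct sock *sk, int nonblock)
    {
            unsigned int napi_id = READ_ONCE(sk->sk_napi_id);

            if (napi_id >= MIN_NAPI_ID)
                    napi_busy_loop(napi_id,
                                   nonblock ? NULL : sk_busy_loop_end, sk);
    }

epoll can then call napi_busy_loop() with its own end condition, which is exactly the reuse this commit prepares for.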
-
Alexander Duyck authored
This patch flips the logic we were using to determine if the busy polling has timed out. The main motivation for this is that we will need to support two different possible timeout values in the future, and by recording the start time rather than when we would want to end we can focus on making the end_time specific to the task, be it epoll or socket based polling.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
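
A hedged sketch of the flipped accounting, close to the shape the patch introduces (types simplified):

    /* Record when polling started ... */
    static inline unsigned long busy_loop_current_time(void)
    {
            return (unsigned long)(local_clock() >> 10);  /* ~usecs */
    }

    /* ... and let each caller derive its own deadline from that start. */
    static inline bool busy_loop_timeout(unsigned long start_time)
    {
            unsigned long bp_usec = READ_ONCE(sysctl_net_busy_poll);

            if (bp_usec) {
                    unsigned long end_time = start_time + bp_usec;

                    return time_after(busy_loop_current_time(), end_time);
            }
            return true;
    }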
-
Alexander Duyck authored
This patch removes the need for checking the return value of sk_busy_loop. As there are only a few consumers of that data, and the data being checked for can be replaced with a check for !skb_queue_empty(), we might as well just pull the code out of sk_busy_loop and place it in the spots that actually need it.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Alexander Duyck authored
Instead of defining two versions of skb_mark_napi_id, I think it is more readable to just match the format of the sk_mark_napi_id functions and wrap the contents of the function instead of defining two versions of it. This way we can save a few lines of code, since we only need 2 of the ifdef/endif but needed 5 for the extra function declaration.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
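
The style in question, sketched: one definition whose body, not whose declaration, carries the config dependency:

    /* Single definition; the ifdef wraps only the contents, mirroring
     * the sk_mark_napi_id helpers.
     */
    static inline void skb_mark_napi_id(struct sk_buff *skb,
                                        struct napi_struct *napi)
    {
    #ifdef CONFIG_NET_RX_BUSY_POLL
            skb->napi_id = napi->napi_id;
    #endif
    }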
-
Alexander Duyck authored
This patch is a cleanup/fix for NAPI IDs following the changes that made it so that sender_cpu and napi_id were doing a better job of sharing the same location in the sk_buff. One issue I found is that we weren't validating the napi_id as being valid before we started trying to set up the busy polling. This change corrects that by using the MIN_NAPI_ID value that is now used both in allocating the NAPI IDs and in validating them.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Mar 02, 2017
Ingo Molnar authored
sched/headers: Prepare to move signal wakeup & sigpending methods from <linux/sched.h> into <linux/sched/signal.h>. Fix up affected files that include this signal functionality via sched.h.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
Ingo Molnar authored
We are going to split <linux/sched/clock.h> out of <linux/sched.h>, which will have to be picked up from other headers and .c files. Create a trivial placeholder <linux/sched/clock.h> file that just maps to <linux/sched.h> to make this patch obviously correct and bisectable. Include the new header in the files that are going to need it.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
-
- Feb 14, 2017
Eric Dumazet authored
Commit 79e7fff4 ("net: remove support for per driver ndo_busy_poll()") made them obsolete.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Nov 18, 2016
Eric Dumazet authored
UDP busy polling is restricted to connected UDP sockets. This is because sk_busy_loop() only takes care of one NAPI context.

There are cases where it could be extended:

1) Some hosts receive traffic on a single NIC, with one RX queue.
2) Some applications use SO_REUSEPORT and an associated BPF filter to split the incoming traffic on one UDP socket per RX queue/thread/cpu.
3) Some UDP sockets are used to send/receive traffic for one flow, but they do not bother with connect().

This patch records the napi_id of the first received skb, giving more reach to busy polling.

Tested:
lpaa23:~# echo 70 >/proc/sys/net/core/busy_read
lpaa24:~# echo 70 >/proc/sys/net/core/busy_read
lpaa23:~# for f in `seq 1 10`; do ./super_netperf 1 -H lpaa24 -t UDP_RR -l 5; done

Before patch: 27867 28870 37324 41060 41215 36764 36838 44455 41282 43843
After patch:  73920 73213 70147 74845 71697 68315 68028 75219 70082 73707

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Nov 17, 2016
Eric Dumazet authored
Now that sk_busy_loop() can schedule by itself, we can remove the need_resched() check from sk_can_busy_loop(). Also add a const to its struct sock parameter.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Willem de Bruijn <willemb@google.com>
Cc: Adam Belay <abelay@google.com>
Cc: Tariq Toukan <tariqt@mellanox.com>
Cc: Yuval Mintz <Yuval.Mintz@cavium.com>
Cc: Ariel Elior <ariel.elior@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Nov 19, 2015
Eric Dumazet authored
There is really little gain from inlining this big function. We'll soon make it even bigger in following patches. This means we no longer need to export napi_by_id().

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Jan 14, 2014
Peter Zijlstra authored
The only valid use of preempt_enable_no_resched() is if the very next line is schedule() or if we know preemption cannot actually be enabled by that statement due to known more preempt_count 'refs'.

This busy_poll stuff looks to be completely and utterly broken: sched_clock() can return utter garbage with interrupts enabled (rare but still), and it can drift unbounded between CPUs. This means that if you get preempted/migrated and your new CPU is years behind on the previous CPU, we get to busy spin for a _very_ long time.

There is a _REASON_ sched_clock() warns about preemptability: papering over it with a preempt_disable()/preempt_enable_no_resched() is just terminal brain damage on so many levels.

Replace the sched_clock() usage with local_clock(), which has a bounded drift between CPUs (<2 jiffies).

There is a further problem with the entire busy wait poll thing in that the spin time is additive to the syscall timeout, not inclusive.

Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: rui.zhang@intel.com
Cc: jacob.jun.pan@linux.intel.com
Cc: Mike Galbraith <bitbucket@online.de>
Cc: hpa@zytor.com
Cc: Arjan van de Ven <arjan@linux.intel.com>
Cc: lenb@kernel.org
Cc: rjw@rjwysocki.net
Cc: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/20131119151338.GF3694@twins.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
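
The replacement itself is small; a sketch matching the commit's description:

    /* local_clock() drifts at most ~2 jiffies between CPUs, so a
     * migrated task can no longer spin for an unbounded time.
     */
    static inline u64 busy_loop_us_clock(void)
    {
            return local_clock() >> 10;  /* ns -> ~us; cheap /1024 */
    }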
-
- Aug 29, 2013
Eliezer Tamir authored
Add a cpu_relax() to sk_busy_loop().

Julie Cummings reported performance issues when hyperthreading is on. Arjan van de Ven observed that we should have a cpu_relax() in the busy poll loop.

Reported-by: Julie Cummings <julie.a.cummings@intel.com>
Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
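
An abridged, hedged sketch of where the hint lands; poll_one_napi() here is a hypothetical stand-in for the per-device poll step, and the real loop also handles flush/busy return codes:

    do {
            rc = poll_one_napi(sk);
            cpu_relax();    /* pause hint: let an HT sibling make progress */
    } while (!nonblock && !rc &&
             skb_queue_empty(&sk->sk_receive_queue) &&
             !need_resched() && !busy_loop_timeout(end_time));

On x86, cpu_relax() compiles to the PAUSE instruction, which both saves power and hands execution resources to the sibling hyperthread.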
-
- Aug 10, 2013
Eliezer Tamir authored
Rename the mib counter from "low latency" to "busy poll".

v1 also moved the counter to the ip MIB (suggested by Shawn Bohrer), but Eric Dumazet suggested that the current location is better. So v2 just renames the counter to fit the new naming convention.

Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Aug 05, 2013
Eliezer Tamir authored
When renaming ll_poll to busy poll, I introduced a typo in the name of the do-nothing placeholder for sk_busy_loop and called it sk_busy_poll. This broke the build when busy poll was not configured. Cong Wang submitted a patch to fix that. This patch removes the now redundant, misspelled placeholder.

Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Aug 02, 2013
Cong Wang authored
Eliezer renamed several *ll_poll symbols to *busy_poll, but missed CONFIG_NET_LL_RX_POLL, so to avoid confusion, rename it too.

Cc: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Cong Wang <amwang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Cong Wang authored
When CONFIG_NET_LL_RX_POLL is not set, I got:

  net/socket.c: In function 'sock_poll':
  net/socket.c:1165:4: error: implicit declaration of function 'sk_busy_loop' [-Werror=implicit-function-declaration]

Fix this by adding a nop when !CONFIG_NET_LL_RX_POLL.

Cc: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Cong Wang <amwang@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
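
The usual kernel idiom for such a nop, sketched (the stub body is illustrative):

    #else /* !CONFIG_NET_LL_RX_POLL */

    /* Compile-time stub so callers like sock_poll() build unchanged. */
    static inline bool sk_busy_loop(struct sock *sk, int nonblock)
    {
            return false;  /* never busy-polls; fall through to blocking */
    }

    #endif /* CONFIG_NET_LL_RX_POLL */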
-
- Jul 11, 2013
Eliezer Tamir authored
Rename LL_SO to BUSY_POLL_SO. Rename sysctl_net_ll_{read,poll} to sysctl_busy_{read,poll}. Fix up users of these variables. Fix documentation for sysctl.

A patch for the socket.7 man page will follow separately, because of limitations of my mail setup.

Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eliezer Tamir authored
Rename ndo_ll_poll to ndo_busy_poll. Rename sk_mark_ll to sk_mark_napi_id. Rename skb_mark_ll to skb_mark_napi_id. Correct all users of these functions. Update comments and defines in include/net/busy_poll.h.

Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eliezer Tamir authored
Rename the file and correct all the places where it is included.

Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Jul 10, 2013
Eliezer Tamir authored
Suggested by Linus:

Changed time accounting for busy-poll:
- Make it microsecond based.
- Use unsigned longs.
- Revert back to use time_after instead of time_in_range.

Reorder poll/select busy loop conditions:
- Clear busy_flag after one time we can't busy-poll.
- Only init busy_end if we actually are going to busy-poll.

Added one more missing need_resched() test.

Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Jul 09, 2013
Eliezer Tamir authored
Rename functions in include/net/ll_poll.h to busy wait. Clarify documentation about the expected power use increase. Rename POLL_LL to POLL_BUSY_LOOP. Add need_resched() testing to poll/select busy loops.

Note that in select and poll, can_busy_poll is dynamic and is updated continuously to reflect the existence of supported sockets with valid queue information.

Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Jul 03, 2013
Eliezer Tamir authored
Correct placeholder declarations to prevent build breakage when !CONFIG_NET_LL_RX_POLL.

Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eliezer Tamir authored
time_in_range() will fail safely if we move to a different cpu with an extremely large clock skew. Add time_in_range64() and convert lls to use it.

changelog:
v2
- fixed double call to sched_clock in can_poll_ll
- fixed checkpatchisms

Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
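
The helper added here still lives in include/linux/jiffies.h; it is just the 64-bit variant of the bounds check:

    /* True if time a is in the closed interval [b, c]. */
    #define time_in_range64(a, b, c) \
            (time_after_eq64(a, b) && \
             time_before_eq64(a, c))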
-
- Jul 02, 2013
Eliezer Tamir authored
Change the Low Latency Sockets code for select and poll so that when LLS is disabled, sched_clock() is never called. Also, avoid sending POLL_LL to sockets if LLS is disabled.

Reported-by: Andi Kleen <andi@firstfloor.org>
Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eliezer Tamir authored
Our use of sched_clock() is OK because we don't mind the side effects of calling it and occasionally waking up on a different CPU. When CONFIG_DEBUG_PREEMPT is on, disable preemption before calling sched_clock() so we don't trigger a debug_smp_processor_id() warning.

Reported-by: Cody P Schafer <devel-lists@codyps.com>
Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Jun 26, 2013
Eliezer Tamir authored
select/poll busy-poll support.

Split the sysctl value into two separate ones, one for read and one for poll, and update Documentation/sysctl/net.txt accordingly.

Add a new poll flag, POLL_LL. When this flag is set, sock_poll will call sk_poll_ll if possible. sock_poll sets this flag in its return value to indicate to select/poll when a socket that can busy poll is found.

When poll/select have nothing to report, call the low-level sock_poll again until we are out of time or we find something. Once the system call finds something, it stops setting POLL_LL, so it can return the result to the user ASAP.

Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Jun 18, 2013
Eliezer Tamir authored
Adds a socket option for low latency polling. This allows overriding the global sysctl value with a per-socket one. Unexport sysctl_net_ll_poll, since for now it's not needed in modules.

Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
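
Userspace usage, sketched with the modern option name (the option introduced here was renamed SO_BUSY_POLL by the Jul 11, 2013 rename above):

    #include <sys/socket.h>

    /* Give this socket its own busy-poll budget, in microseconds,
     * overriding the global sysctl.
     */
    static int enable_busy_poll(int fd, unsigned int usecs)
    {
            return setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL,
                              &usecs, sizeof(usecs));
    }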
-
Eliezer Tamir authored
Use sched_clock() instead of get_cycles(). We can use sched_clock() because we don't care much about accuracy. Remove the dependency on X86_TSC.

Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
Eliezer Tamir authored
There is no reason for sysctl_net_ll_poll to be an unsigned long. Change it into an unsigned int. Fix the proc handler.

Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-
- Jun 11, 2013
Eliezer Tamir authored
Adds an ndo_ll_poll method and the code that supports it. This method can be used by low latency applications to busy-poll Ethernet device queues directly from the socket code. sysctl_net_ll_poll controls how many microseconds to poll. Default is zero (disabled). Individual protocol support will be added by subsequent patches.

Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Eliezer Tamir <eliezer.tamir@linux.intel.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Tested-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
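
A sketch of the hook's shape; the member and its signature follow the commit, while the surrounding struct is abbreviated:

    /* Excerpt of struct net_device_ops with the new per-device hook:
     * called from socket code to poll a device queue directly.
     */
    #ifdef CONFIG_NET_LL_RX_POLL
            int (*ndo_ll_poll)(struct napi_struct *dev);
    #endif

A driver wires this up to a function that harvests its RX queue once, returning the packets found or a busy indication, so the socket-side helper (sk_poll_ll in this series) can spin on it until data arrives or the microsecond budget runs out.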
-