Commit ea812ca1 authored by Alexander Duyck, committed by David S. Miller

x86: Align skb w/ start of cacheline on newer core 2/Xeon Arch


x86 architectures can handle unaligned accesses in hardware, and it has
been shown that unaligned DMA accesses can be expensive on Nehalem
architectures.  As such we should override NET_IP_ALIGN to resolve
this issue.
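For reference, the generic fallback in include/linux/skbuff.h reserves 2
bytes so that the IP header following the 14-byte Ethernet header starts on
a 4-byte boundary; architectures are free to override it, which is what
this patch does for x86.  A sketch of that generic definition (shown for
context, not part of this change):

/* Generic fallback; an arch header may define NET_IP_ALIGN first. */
#ifndef NET_IP_ALIGN
#define NET_IP_ALIGN	2
#endif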

Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Acked-by: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent cb836a97
@@ -457,4 +457,13 @@ static inline void rdtsc_barrier(void)
 	alternative(ASM_NOP3, "lfence", X86_FEATURE_LFENCE_RDTSC);
 }
 
+#ifdef CONFIG_MCORE2
+/*
+ * We handle most unaligned accesses in hardware.  On the other hand
+ * unaligned DMA can be quite expensive on some Nehalem processors.
+ *
+ * Based on this we disable the IP header alignment in network drivers.
+ */
+#define NET_IP_ALIGN	0
+#endif
 #endif /* _ASM_X86_SYSTEM_H */
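
The value matters to drivers because NET_IP_ALIGN is the offset a driver
typically passes to skb_reserve() before handing the buffer to the NIC for
DMA.  A minimal sketch of that conventional receive-path pattern, using a
hypothetical example_rx_alloc() helper (illustrative only, not from this
commit):

/*
 * Illustrative only: typical use of NET_IP_ALIGN in a driver's receive
 * path.  With NET_IP_ALIGN defined as 0 on x86, the skb_reserve() below
 * becomes a no-op, so DMA starts at the beginning of the cacheline
 * instead of 2 bytes into it.
 */
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static struct sk_buff *example_rx_alloc(struct net_device *dev,
					unsigned int len)
{
	struct sk_buff *skb;

	skb = netdev_alloc_skb(dev, len + NET_IP_ALIGN);
	if (!skb)
		return NULL;

	/* Offset the data pointer so the IP header is 4-byte aligned
	 * (a zero offset on x86 after this patch). */
	skb_reserve(skb, NET_IP_ALIGN);

	return skb;
}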