David Ahern [Tue, 2 Feb 2016 16:17:07 +0000 (08:17 -0800)]
net: Add support for filtering link dump by master device and kind
Add support for filtering link dumps by master device and kind, similar
to the filtering implemented for neighbor dumps.
Each net_device that exists adds between 1196 bytes (eth) and 1556 bytes
(bridge) to the link dump. As the number of interfaces increases so does
the amount of data pushed to user space for a link list. If the user
only wants to see a list of specific devices (e.g., interfaces enslaved
to a specific bridge or a list of VRFs) most of that data is thrown away.
Passing the filters to the kernel to have only relevant data returned
makes the dump more efficient.
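A minimal userspace sketch of such a filtered dump request over raw
rtnetlink, assuming the kernel honors an IFLA_MASTER attribute in the
RTM_GETLINK dump request as this patch describes (nlfd is an
already-opened NETLINK_ROUTE socket):

  #include <string.h>
  #include <sys/socket.h>
  #include <linux/netlink.h>
  #include <linux/rtnetlink.h>

  /* Ask for a link dump restricted to devices enslaved to the given
   * master ifindex; nlfd is an already-opened NETLINK_ROUTE socket. */
  static int request_filtered_link_dump(int nlfd, int master_ifindex)
  {
          struct {
                  struct nlmsghdr  nlh;
                  struct ifinfomsg ifm;
                  struct rtattr    rta;
                  int              master;
          } req;

          memset(&req, 0, sizeof(req));
          req.nlh.nlmsg_len   = sizeof(req);
          req.nlh.nlmsg_type  = RTM_GETLINK;
          req.nlh.nlmsg_flags = NLM_F_REQUEST | NLM_F_DUMP;
          req.ifm.ifi_family  = AF_UNSPEC;
          req.rta.rta_type    = IFLA_MASTER;            /* filter attribute */
          req.rta.rta_len     = RTA_LENGTH(sizeof(int));
          req.master          = master_ifindex;

          return send(nlfd, &req, sizeof(req), 0);
  }

With this in place, something like iproute2's "ip link show master br0"
only transfers the bridge ports instead of every link on the system.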
Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Acked-by: Roopa Prabhu <roopa@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 11 Feb 2016 08:54:23 +0000 (03:54 -0500)]
Merge branch 'tcp-fast-so_reuseport'
Craig Gallek says:
====================
Faster SO_REUSEPORT for TCP
This patch series complements an earlier series (6a5ef90c58da)
which added faster SO_REUSEPORT lookup for UDP sockets by
extending the feature to TCP sockets. It uses the same
array-based data structure which allows for socket selection
after finding the first listening socket that matches an incoming
packet. Prior to this feature, every socket in the reuseport
group needed to be found and examined before a selection could be
made.
With this series the SO_ATTACH_REUSEPORT_CBPF and
SO_ATTACH_REUSEPORT_EBPF socket options now work for TCP sockets
as well. The test at the end of the series includes an example of
how to use these options to select a reuseport socket based on the
cpu core id handling the incoming packet.
There are several refactoring patches that precede the feature
implementation. Only the last two patches in this series
should result in any behavioral changes.
v4
- Fix build issue when compiling IPv6 as a module. This required
moving the ipv6_rcv_saddr_equal into an object that is included as a
built-in object. I included this change in the second patch which
adds inet6_hash since that is where ipv6_rcv_saddr_equal will
later be called from non-module code.
v3:
- Another warning in the first patch caught by a build bot. Return 0 in
the no-op UDP hash function.
v2:
- In the first patch I missed a couple of hash functions that should now be
returning int instead of void. I missed these the first time through as it
only generated a warning and not an error :\
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Craig Gallek [Wed, 10 Feb 2016 16:50:41 +0000 (11:50 -0500)]
soreuseport: BPF selection functional test for TCP
Unfortunately the existing test relied on packet payload in order to
map incoming packets to sockets. In order to get this to work with TCP,
TCP_FASTOPEN needed to be used.
Since the fast open path is slightly different from the standard TCP path,
I created a second test which sends to reuseport group members based
on the receiving cpu core id. This will probably serve as a better
real-world usage example as well.
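A sketch of the kind of classic BPF program such a test presumably
attaches: it returns the id of the CPU handling the packet, which the
kernel then uses as an index into the reuseport group (the fallback
define covers older libc headers):

  #include <sys/socket.h>
  #include <linux/filter.h>

  #ifndef SO_ATTACH_REUSEPORT_CBPF
  #define SO_ATTACH_REUSEPORT_CBPF 51    /* asm-generic value */
  #endif

  /* Return the current CPU id; the kernel uses it as an index into
   * the reuseport group's socket array. */
  static int attach_cpu_selector(int fd)
  {
          struct sock_filter code[] = {
                  { BPF_LD  | BPF_W | BPF_ABS, 0, 0, SKF_AD_OFF + SKF_AD_CPU },
                  { BPF_RET | BPF_A,           0, 0, 0 },
          };
          struct sock_fprog prog = {
                  .len    = sizeof(code) / sizeof(code[0]),
                  .filter = code,
          };

          return setsockopt(fd, SOL_SOCKET, SO_ATTACH_REUSEPORT_CBPF,
                            &prog, sizeof(prog));
  }

With one listening socket per CPU, the returned id maps directly to a
group member, matching the cpu-core-id selection described above.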
Signed-off-by: Craig Gallek <kraig@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Craig Gallek [Wed, 10 Feb 2016 16:50:40 +0000 (11:50 -0500)]
soreuseport: fast reuseport TCP socket selection
This change extends the fast SO_REUSEPORT socket lookup implemented
for UDP to TCP. Listener sockets with SO_REUSEPORT and the same
receive address are additionally added to an array for faster
random access. This means that only a single socket from the group
must be found in the listener list before any socket in the group can
be used to receive a packet. Previously, every socket in the group
needed to be considered before handing off the incoming packet.
This feature also exposes the ability to use a BPF program when
selecting a socket from a reuseport group.
Signed-off-by: Craig Gallek <kraig@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Craig Gallek [Wed, 10 Feb 2016 16:50:39 +0000 (11:50 -0500)]
soreuseport: Prep for fast reuseport TCP socket selection
Both of the lines in this patch probably should have been included
in the initial implementation of this code for generic socket
support, but weren't technically necessary since only UDP sockets
were supported.
First, the sk_reuseport_cb points to a structure which assumes
each socket in the group has this pointer assigned at the same
time it's added to the array in the structure. The sk_clone_lock
function breaks this assumption. Since a child socket shouldn't
implicitly be in a reuseport group, the simple fix is to clear
the field in the clone.
Second, the SO_ATTACH_REUSEPORT_xBPF socket options require that
SO_REUSEPORT also be set first. For UDP sockets, this is easily
enforced at bind-time since that process both puts the socket in
the appropriate receive hlist and updates the reuseport structures.
Since these operations can happen at two different times for TCP
sockets (bind and listen) it must be explicitly checked to enforce
the use of SO_REUSEPORT with SO_ATTACH_REUSEPORT_xBPF in the
setsockopt call.
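In userspace terms, the second point means the attach must follow an
earlier SO_REUSEPORT on a TCP listener; a hedged sketch (prog built as
in the classic BPF example above):

  #include <sys/socket.h>
  #include <linux/filter.h>

  #ifndef SO_ATTACH_REUSEPORT_CBPF
  #define SO_ATTACH_REUSEPORT_CBPF 51    /* asm-generic value */
  #endif

  /* SO_REUSEPORT must already be set when SO_ATTACH_REUSEPORT_CBPF is
   * issued; per this patch the kernel enforces that explicitly for TCP,
   * since the reuseport group is only set up at bind()/listen() time. */
  static int reuseport_tcp_listener(int fd, const struct sockaddr *sa,
                                    socklen_t salen,
                                    const struct sock_fprog *prog)
  {
          int on = 1;

          if (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &on, sizeof(on)) < 0)
                  return -1;
          if (bind(fd, sa, salen) < 0)
                  return -1;
          if (setsockopt(fd, SOL_SOCKET, SO_ATTACH_REUSEPORT_CBPF,
                         prog, sizeof(*prog)) < 0)
                  return -1;
          return listen(fd, 128);
  }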
Signed-off-by: Craig Gallek <kraig@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Craig Gallek [Wed, 10 Feb 2016 16:50:38 +0000 (11:50 -0500)]
inet: refactor inet[6]_lookup functions to take skb
This is a preliminary step to allow fast socket lookup of SO_REUSEPORT
groups. Doing so with a BPF filter will require access to the
skb in question. This change plumbs the skb (and offset to payload
data) through the call stack to the listening socket lookup
implementations where it will be used in a following patch.
Signed-off-by: Craig Gallek <kraig@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Craig Gallek [Wed, 10 Feb 2016 16:50:37 +0000 (11:50 -0500)]
tcp: __tcp_hdrlen() helper
tcp_hdrlen is wasteful if you already have a pointer to struct tcphdr.
This splits the size calculation into a helper function that can be
used if a struct tcphdr is already available.
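A sketch of what the split plausibly looks like; the TCP header length
is simply the 4-bit data offset expressed in 32-bit words:

  #include <linux/tcp.h>

  /* size from a tcphdr that is already at hand... */
  static inline unsigned int __tcp_hdrlen(const struct tcphdr *th)
  {
          return th->doff * 4;
  }

  /* ...and the existing skb-based helper expressed on top of it */
  static inline unsigned int tcp_hdrlen(const struct sk_buff *skb)
  {
          return __tcp_hdrlen(tcp_hdr(skb));
  }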
Signed-off-by: Craig Gallek <kraig@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Craig Gallek [Wed, 10 Feb 2016 16:50:36 +0000 (11:50 -0500)]
inet: create IPv6-equivalent inet_hash function
In order to support fast lookups for TCP sockets with SO_REUSEPORT,
the function that adds sockets to the listening hash set needs
to be able to check receive address equality. Since this equality
check is different for IPv4 and IPv6, we will need two different
socket hashing functions.
This patch adds inet6_hash identical to the existing inet_hash function
and updates the appropriate references. A following patch will
differentiate the two by passing different comparison functions to
__inet_hash.
Additionally, in order to use the IPv6 address equality function from
inet6_hashtables (which is compiled as a built-in object when IPv6 is
enabled) it also needs to be in a built-in object file as well. This
moves ipv6_rcv_saddr_equal into inet_hashtables to accomplish this.
Signed-off-by: Craig Gallek <kraig@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Craig Gallek [Wed, 10 Feb 2016 16:50:35 +0000 (11:50 -0500)]
sock: struct proto hash function may error
In order to support fast reuseport lookups in TCP, the hash function
defined in struct proto must be capable of returning an error code.
This patch changes the function signature of all related hash functions
to return an integer and handles or propagates this return value at
all call sites.
Signed-off-by: Craig Gallek <kraig@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 11 Feb 2016 08:49:55 +0000 (03:49 -0500)]
Merge tag 'batman-adv-for-davem' of git://git.open-mesh.org/linux-merge
Antonio Quartulli says:
====================
Here you have a batch of patches by Sven Eckelmann that
drops our private reference counting implementation and
substitutes it with the kref objects/functions.
Then you have a patch, by Simon Wunderlich, that
makes the broadcast protection window code more generic so
that it can be re-used in the future by other components
with different requirements.
Lastly, Sven is also introducing two lockdep asserts in
functions operating on our TVLV container list, to make
sure that the proper lock is always acquired by the users.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 11 Feb 2016 08:47:05 +0000 (03:47 -0500)]
Merge branch 'be2net-next'
Ajit Khaparde says:
====================
be2net Patch series
Please consider applying these two patches to net-next
Patch-1: Request RSS capability of Rx interface depending on number of
Rx rings
Patch-2: Interpret and log new data that's added to the port
misconfigure async event
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Ajit Khaparde [Wed, 10 Feb 2016 17:15:54 +0000 (22:45 +0530)]
be2net: Interpret and log new data that's added to the port misconfigure async event
From FW version 11.0 onwards, the PORT_MISCONFIG event generated by the FW
will carry more information about the event in the "data_word1"
and "data_word2" fields. This patch adds support in the driver to parse the
new information and log it accordingly. This patch also changes some of the
messages that are being logged currently.
Signed-off-by: Suresh Reddy <suresh.reddy@broadcom.com> Signed-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com> Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Ajit Khaparde [Wed, 10 Feb 2016 17:15:53 +0000 (22:45 +0530)]
be2net: Request RSS capability of Rx interface depending on number of Rx rings
Currently we request RSS capability even if a single Rx ring is created.
As a result, in a few cases we unnecessarily consume an RSS-capable interface
which is a limited resource in the chip.
This patch enables RSS on an interface only if more than one Rx ring
is created.
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sven Eckelmann [Sat, 16 Jan 2016 09:29:57 +0000 (10:29 +0100)]
batman-adv: Convert batadv_tt_common_entry to kref
batman-adv uses a self-written reference implementation which is just based
on atomic_t. This is less obvious when reading the code than kref and
therefore increases the chance that the reference counting will be missed.
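A minimal sketch of the atomic_t-to-kref conversion pattern these
patches apply; the structure and helper names below are illustrative
placeholders, not the exact batman-adv ones:

  #include <linux/kernel.h>
  #include <linux/kref.h>
  #include <linux/slab.h>

  struct example_entry {
          struct kref refcount;           /* was: atomic_t refcount; */
          /* ... payload ... */
  };

  /* called only when the last reference is dropped */
  static void example_entry_release(struct kref *ref)
  {
          struct example_entry *e =
                  container_of(ref, struct example_entry, refcount);

          kfree(e);
  }

  static void example_entry_get(struct example_entry *e)
  {
          kref_get(&e->refcount);         /* was: atomic_inc(&e->refcount); */
  }

  static void example_entry_put(struct example_entry *e)
  {
          kref_put(&e->refcount, example_entry_release);
  }

kref makes the get/put pairing and the release path explicit, which is
the readability gain the commit message refers to.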
Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
Sven Eckelmann [Sat, 16 Jan 2016 09:29:56 +0000 (10:29 +0100)]
batman-adv: Convert batadv_orig_node to kref
batman-adv uses a self-written reference implementation which is just based
on atomic_t. This is less obvious when reading the code than kref and
therefore increases the chance that the reference counting will be missed.
Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
Sven Eckelmann [Sat, 16 Jan 2016 09:29:55 +0000 (10:29 +0100)]
batman-adv: Convert batadv_orig_node_vlan to kref
batman-adv uses a self-written reference implementation which is just based
on atomic_t. This is less obvious when reading the code than kref and
therefore increases the chance that the reference counting will be missed.
Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
Sven Eckelmann [Sat, 16 Jan 2016 09:29:54 +0000 (10:29 +0100)]
batman-adv: Convert batadv_hard_iface to kref
batman-adv uses a self-written reference implementation which is just based
on atomic_t. This is less obvious when reading the code than kref and
therefore increases the chance that the reference counting will be missed.
Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
Sven Eckelmann [Sat, 16 Jan 2016 09:29:53 +0000 (10:29 +0100)]
batman-adv: Convert batadv_neigh_node to kref
batman-adv uses a self-written reference implementation which is just based
on atomic_t. This is less obvious when reading the code than kref and
therefore increases the chance that the reference counting will be missed.
Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
Sven Eckelmann [Sat, 16 Jan 2016 09:29:52 +0000 (10:29 +0100)]
batman-adv: Convert batadv_orig_ifinfo to kref
batman-adv uses a self-written reference implementation which is just based
on atomic_t. This is less obvious when reading the code than kref and
therefore increases the chance that the reference counting will be missed.
Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
Sven Eckelmann [Sat, 16 Jan 2016 09:29:51 +0000 (10:29 +0100)]
batman-adv: Convert batadv_neigh_ifinfo to kref
batman-adv uses a self-written reference implementation which is just based
on atomic_t. This is less obvious when reading the code than kref and
therefore increases the chance that the reference counting will be missed.
Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
Sven Eckelmann [Sat, 16 Jan 2016 09:29:50 +0000 (10:29 +0100)]
batman-adv: Convert batadv_tt_orig_list_entry to kref
batman-adv uses a self-written reference implementation which is just based
on atomic_t. This is less obvious when reading the code than kref and
therefore increases the chance that the reference counting will be missed.
Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
Sven Eckelmann [Sat, 16 Jan 2016 09:29:49 +0000 (10:29 +0100)]
batman-adv: Convert batadv_tvlv_handler to kref
batman-adv uses a self-written reference implementation which is just based
on atomic_t. This is less obvious when reading the code than kref and
therefore increases the chance that the reference counting will be missed.
Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
Sven Eckelmann [Sat, 16 Jan 2016 09:29:48 +0000 (10:29 +0100)]
batman-adv: Convert batadv_tvlv_container to kref
batman-adv uses a self-written reference implementation which is just based
on atomic_t. This is less obvious when reading the code than kref and
therefore increases the chance that the reference counting will be missed.
Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
Sven Eckelmann [Sat, 16 Jan 2016 09:29:47 +0000 (10:29 +0100)]
batman-adv: Convert batadv_dat_entry to kref
batman-adv uses a self-written reference implementation which is just based
on atomic_t. This is less obvious when reading the code than kref and
therefore increases the chance that the reference counting will be missed.
Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
Sven Eckelmann [Sat, 16 Jan 2016 09:29:46 +0000 (10:29 +0100)]
batman-adv: Convert batadv_nc_path to kref
batman-adv uses a self-written reference implementation which is just based
on atomic_t. This is less obvious when reading the code than kref and
therefore increases the chance that the reference counting will be missed.
Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
Sven Eckelmann [Sat, 16 Jan 2016 09:29:45 +0000 (10:29 +0100)]
batman-adv: Convert batadv_nc_node to kref
batman-adv uses a self-written reference implementation which is just based
on atomic_t. This is less obvious when reading the code than kref and
therefore increases the chance that the reference counting will be missed.
Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
Sven Eckelmann [Sat, 16 Jan 2016 09:29:44 +0000 (10:29 +0100)]
batman-adv: Convert batadv_bla_claim to kref
batman-adv uses a self-written reference implementation which is just based
on atomic_t. This is less obvious when reading the code than kref and
therefore increases the chance that the reference counting will be missed.
Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
Sven Eckelmann [Sat, 16 Jan 2016 09:29:43 +0000 (10:29 +0100)]
batman-adv: Convert batadv_bla_backbone_gw to kref
batman-adv uses a self-written reference implementation which is just based
on atomic_t. This is less obvious when reading the code than kref and
therefore increases the chance that the reference counting will be missed.
Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
Sven Eckelmann [Sat, 16 Jan 2016 09:29:42 +0000 (10:29 +0100)]
batman-adv: Convert batadv_softif_vlan to kref
batman-adv uses a self-written reference implementation which is just based
on atomic_t. This is less obvious when reading the code than kref and
therefore increases the chance that the reference counting will be missed.
Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
Sven Eckelmann [Sat, 16 Jan 2016 09:29:41 +0000 (10:29 +0100)]
batman-adv: Convert batadv_gw_node to kref
batman-adv uses a self-written reference implementation which is just based
on atomic_t. This is less obvious when reading the code than kref and
therefore increases the chance that the reference counting will be missed.
Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
Sven Eckelmann [Sat, 16 Jan 2016 09:29:40 +0000 (10:29 +0100)]
batman-adv: Convert batadv_hardif_neigh_node to kref
batman-adv uses a self-written reference implementation which is just based
on atomic_t. This is less obvious when reading the code than kref and
therefore increases the chance that the reference counting will be missed.
Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
Sven Eckelmann [Sun, 20 Dec 2015 08:04:03 +0000 (09:04 +0100)]
batman-adv: Add lockdep assert for container_list_lock
The batadv_tvlv_container* functions state in their kernel-doc that they
require tvlv.container_list_lock. Add an assert to automatically detect
when this might have been ignored by the caller.
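A hedged sketch of what such an assert looks like at the top of a
container-list helper, assuming the lock lives in the batman-adv private
struct as the name quoted above suggests:

  #include <linux/lockdep.h>
  #include "types.h"   /* batman-adv private header, for struct batadv_priv */

  static void batadv_tvlv_container_example(struct batadv_priv *bat_priv)
  {
          /* complain (with lockdep enabled) if the caller forgot the lock */
          lockdep_assert_held(&bat_priv->tvlv.container_list_lock);

          /* ... walk or modify tvlv.container_list here ... */
  }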
Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
Simon Wunderlich [Mon, 23 Nov 2015 18:57:22 +0000 (19:57 +0100)]
batman-adv: add seqno maximum age and protection start flag parameters
To allow future use of the window protected function with different
maximum sequence numbers, add a parameter to set this value which
was previously hardcoded. Another parameter added for future use is a
flag to return whether the protection window has started.
While at it, also fix the kerneldoc.
Signed-off-by: Simon Wunderlich <simon@open-mesh.com> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
Sven Eckelmann [Tue, 5 Jan 2016 11:06:26 +0000 (12:06 +0100)]
batman-adv: Drop reference to netdevice on last reference
The references to the network device should be dropped inside the release
function for batadv_hard_iface similar to what is done with the batman-adv
internal datastructures.
Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Antonio Quartulli <a@unstable.cc>
Jean Sacren [Wed, 10 Feb 2016 03:47:17 +0000 (20:47 -0700)]
sxgbe: remove unused code
Remove the unused code of sxgbe_xpcs.
Reported-by: Julia Lawall <julia.lawall@lip6.fr> Suggested-by: David S. Miller <davem@davemloft.net> Signed-off-by: Jean Sacren <sakiwit@gmail.com> Cc: Byungho An <bh74.an@samsung.com> Cc: Girish K S <ks.giri@samsung.com> Link: http://lkml.kernel.org/r/alpine.DEB.2.10.1601191918470.2531@hadrien Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 10 Feb 2016 10:38:19 +0000 (05:38 -0500)]
Merge branch 'renesas-bit-twiddling'
Sergei Shtylyov says:
====================
Factor out register bit twiddling in the Renesas Ethernet drivers
Here's a set of 2 patches against DaveM's 'net-next.git' repo. We factor out
the often repeated pattern of reading a register, AND'ing and/or OR'ing some
bits, and then writing the value back.
[1/2] ravb: factor out register bit twiddling code
[2/2] sh_eth: factor out register bit twiddling code
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Sergei Shtylyov [Tue, 9 Feb 2016 22:38:28 +0000 (01:38 +0300)]
sh_eth: factor out register bit twiddling code
The driver has an often-repeated pattern of reading a register, AND'ing and/or
OR'ing some bits and writing the value back. Factor the pattern out into
sh_eth_modify() -- this saves 84 bytes of code with ARM gcc 4.7.3.
While at it, update Cogent Embedded's copyright.
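The factored-out helper presumably follows the usual read-modify-write
shape, roughly as below; the exact signature is assumed, and
sh_eth_read()/sh_eth_write() are the driver's existing register accessors:

  #include <linux/netdevice.h>

  /* clear the bits in 'clear', set the bits in 'set', in one register */
  static void sh_eth_modify(struct net_device *ndev, int enum_index,
                            u32 clear, u32 set)
  {
          sh_eth_write(ndev,
                       (sh_eth_read(ndev, enum_index) & ~clear) | set,
                       enum_index);
  }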
Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sergei Shtylyov [Tue, 9 Feb 2016 22:37:44 +0000 (01:37 +0300)]
ravb: factor out register bit twiddling code
The driver has an often-repeated pattern of reading a register, AND'ing and/or
OR'ing some bits and writing the value back. Factor the pattern out into
ravb_modify() -- this saves 260 bytes of code with ARM gcc 4.7.3.
While at it, update Cogent Embedded's copyrights.
Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 9 Feb 2016 11:43:59 +0000 (06:43 -0500)]
Merge branch 'tpacket-gso-csum-offload'
Willem de Bruijn says:
====================
packet: tpacket gso and csum offload
Extend PACKET_VNET_HDR socket option support to packet sockets with
memory mapped rings.
Patches 2 and 4 add support to tpacket_rcv and tpacket_snd.
Patch 1 prepares for this by moving the relevant virtio_net_hdr
logic out of packet_snd and packet_rcv into helper functions.
GSO transmission requires all headers in the skb linear section.
Patch 3 moves parsing of tx_ring slot headers before skb allocation
to enable allocation with sufficient linear size.
Changes
v1->v2:
- fix bounds checks:
- subtract sizeof(vnet_hdr) before comparing tp_len to size_max
- compare tp_len to size_max also with GSO, just do not truncate to MTU
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Support socket option PACKET_VNET_HDR together with PACKET_TX_RING.
When enabled, a struct virtio_net_hdr is expected to precede the data
in the ring. The vnet option must be set before the ring is created.
The implementation reuses the existing skb_copy_bits code that is used
when dev->hard_header_len is non-zero. Move this ll_header check to
before the skb alloc and combine it with a test for vnet_hdr->hdr_len.
Allocate and copy the max of the two.
Verified with test program at
github.com/wdebruij/kerneltools/blob/master/tests/psock_txring_vnet.c
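A hedged userspace sketch of the required ordering: the vnet header
option is enabled before the ring is created, and each ring slot's frame
then starts with a struct virtio_net_hdr:

  #include <sys/socket.h>
  #include <linux/if_packet.h>

  static int setup_vnet_tx_ring(int fd, const struct tpacket_req *req)
  {
          int one = 1;

          /* must precede ring creation, per this patch */
          if (setsockopt(fd, SOL_PACKET, PACKET_VNET_HDR,
                         &one, sizeof(one)) < 0)
                  return -1;
          return setsockopt(fd, SOL_PACKET, PACKET_TX_RING,
                            req, sizeof(*req));
  }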
Signed-off-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
GSO packet headers must be stored in the linear skb segment.
Move tpacket header parsing before sock_alloc_send_skb. The GSO
follow-on patch will later increase the skb linear argument to
sock_alloc_send_skb if needed for large packets.
The header parsing code does not require an allocated skb, so is
safe to move. The computed data start and length are later passed
to tpacket_fill_skb.
Signed-off-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Support socket option PACKET_VNET_HDR together with PACKET_RX_RING.
When enabled, a struct virtio_net_hdr will precede the data in the
packet ring slots.
Verified with test program at
github.com/wdebruij/kerneltools/blob/master/tests/psock_rxring_vnet.c
packet_snd and packet_rcv support virtio net headers for GSO.
Move this logic into helper functions to be able to reuse it in
tpacket_snd and tpacket_rcv.
This is a straightforward code move with one exception. Instead of
creating and passing a separate gso_type variable, reuse
vnet_hdr.gso_type after conversion from virtio to kernel gso type.
Signed-off-by: Willem de Bruijn <willemb@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Currently bonding allows ad_actor_system and prio to be set while the
bond device is down, but these are actually applied only if there aren't
any slaves yet (applied to the bond device when the first slave shows up,
and to slaves at 3ad bind time). After this patch, changes are applied
immediately and the new values can be used/seen once the bond is brought
up, so it is no longer necessary to release all slaves and enslave them
again to see the changes.
CC: Jay Vosburgh <j.vosburgh@gmail.com> CC: Veaceslav Falico <vfalico@gmail.com> CC: Andy Gospodarek <gospo@cumulusnetworks.com> Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: Jay Vosburgh <jay.vosburgh@canonical.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Elad Raz [Wed, 3 Feb 2016 08:57:06 +0000 (09:57 +0100)]
bridge: mdb: Passing the port-group pointer to br_mdb module
Passing the port-group to br_mdb in order to allow direct access to the
structure. br_mdb will later use the structure to reflect HW reflection
status via "state" variable.
Signed-off-by: Elad Raz <eladr@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 7 Feb 2016 19:36:21 +0000 (14:36 -0500)]
Merge branch 'ns-tcp-sysctls'
Nikolay Borisov says:
====================
Namespaceify more of the tcp sysctl knobs
This patch series continues making more of the tcp-related
sysctl knobs be per net-namespace. Most of these apply per
socket and have global defaults so should be safe and I
don't expect any breakages.
Having these per net-namespace is useful when multiple
containers are hosted and the tcp settings need to be tuned
for each independently of the host node.
I've split the patches to be per-sysctl but after
the review if the outcome is positive I'm happy
to either send it in one big blob or just.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 7 Feb 2016 19:32:50 +0000 (14:32 -0500)]
Merge tag 'batman-adv-for-davem' of git://git.open-mesh.org/linux-merge
Antonio Quartulli says:
====================
This batch of patches includes a number of corrections and
improvements for our kernel-doc. These changes also make sure
that our doc is now properly processed by the kernel-doc
parsing tool.
Other than that you have a patch updating all the copyright
lines to 2016 and another patch switching the URLs in our
readme, Kconfig and MAINTAINERS file from "http" to "https".
Both by Sven Eckelmann.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 7 Feb 2016 19:30:55 +0000 (14:30 -0500)]
Merge branch 'virtio_net_ethtool_settings'
Nikolay Aleksandrov says:
====================
virtio_net: add ethtool get/set settings support
Patch 1 adds ethtool speed/duplex validation functions which check if the
value is defined. Patch 2 adds support for ethtool (get|set)_settings and
uses the validation functions to check the user-supplied values.
v2: split in 2 patches to allow everyone to make use of the validation
functions and allow virtio_net devices to be half duplex
v3: added a check to return error if the user tries to change anything else
besides duplex/speed as per Michael's comment
v4: Set port type to PORT_OTHER
v5: clear diff1.port (ignore port) when checking for changes since we set
it now and ethtool uses it in the set request
Sorry about the pointless iterations; it should all be covered now.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
virtio_net: add ethtool support for set and get of settings
This patch allows the user to set and retrieve speed and duplex of the
virtio_net device via ethtool. Having this functionality is very helpful
for simulating different environments and also enables the virtio_net
device to participate in operations where proper speed and duplex are
required (e.g. currently bonding lacp mode requires full duplex). Custom
speed and duplex are not allowed; the user-supplied settings are validated
before being applied.
Example:
$ ethtool eth1
Settings for eth1:
...
Speed: Unknown!
Duplex: Unknown! (255)
$ ethtool -s eth1 speed 1000 duplex full
$ ethtool eth1
Settings for eth1:
...
Speed: 1000Mb/s
Duplex: Full
Based on a patch by Roopa Prabhu.
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Add functions which check if the speed/duplex are defined.
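A hedged sketch of what such checks amount to; the helper names are
illustrative, while the SPEED_*/DUPLEX_* constants are the standard
ethtool uapi ones:

  #include <stdbool.h>
  #include <limits.h>
  #include <linux/ethtool.h>

  /* a speed is acceptable if it is a sane positive value or "unknown" */
  static bool example_validate_speed(__u32 speed)
  {
          return speed <= INT_MAX || speed == (__u32)SPEED_UNKNOWN;
  }

  /* duplex must be one of the defined constants */
  static bool example_validate_duplex(__u8 duplex)
  {
          switch (duplex) {
          case DUPLEX_HALF:
          case DUPLEX_FULL:
          case DUPLEX_UNKNOWN:
                  return true;
          }
          return false;
  }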
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Acked-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 7 Feb 2016 19:09:59 +0000 (14:09 -0500)]
Merge branch 'tcp_cong_ctrl_refactoring'
Yuchung Cheng says:
====================
tcp: congestion control refactoring
This patch set refactors the sequence of congestion control,
loss recovery, and transmission logic in TCP ack processing.
The design goal is to decouple and sequence them in the following order:
0. ACK accounting: free or tag sent packets [unchanged]
1. loss recovery: identify lost/ecn packets and update congestion state
2. congestion control: up/down cwnd and pacing rate based on (1)
3. transmission: send new or retransmit old based on (1) and (2)
This refactoring makes the cwnd changes clearer because they are done
in one place. The packet accounting is also more robust, especially
for connections that do not support SACK. Patch 1-4 and 6 are
refactoring and patch 5 improves TCP performance under reordering.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Yuchung Cheng [Tue, 2 Feb 2016 18:33:09 +0000 (10:33 -0800)]
tcp: tcp_cong_control helper
Refactor and consolidate cwnd and rate updates into a new function
tcp_cong_control().
Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Eric Dumazet <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Yuchung Cheng [Tue, 2 Feb 2016 18:33:08 +0000 (10:33 -0800)]
tcp: make congestion control more robust against reordering
This change enables congestion control to update cwnd based not only
on packets cumulatively acked but also on packets delivered
out-of-order. This makes congestion control robust against packet
reordering because it may raise cwnd as long as packets are being
delivered once reordering has been detected (i.e., it only cares about
the amount of packets delivered, not the ordering among them).
Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Eric Dumazet <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Yuchung Cheng [Tue, 2 Feb 2016 18:33:07 +0000 (10:33 -0800)]
tcp: refactor pkts acked accounting
A small refactoring that gets the number of packets cumulatively acked
from tcp_clean_rtx_queue() directly.
Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Eric Dumazet <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
The old delivery estimate is (prior_packets - prior_sacked) -
(tp->packets_out - tp->sacked_out), where prior_packets and prior_sacked
are snapshots taken at the beginning of the ACK processing.
The new approach tracks the delivery information via a new
TCP state variable "delivered" which monotically increases
as new packets are delivered in order or out-of-order.
The reason for this change is that the current approach is
brittle that produces negative or inaccurate estimate.
1) For non-SACK connections, an ACK that advances the SND.UNA
could reset the DUPACK counters (tp->sacked_out) in
tcp_process_loss() or tcp_fastretrans_alert(). This inflates
the inflight suddenly and causes under-estimate or even
negative estimate. Here is a real example:
                 before    after (processing ACK)
  packets_out      75        73
  sacked_out       23         0
  ca state        Loss      Open
The old approach computes (75-23) - (73 - 0) = -21 delivered
while the new approach computes 1 delivered since it
considers the 2nd-24th packets are delivered OOO.
2) MSS change would re-count packets_out and sacked_out so
the estimate is inaccurate and can even become negative.
E.g., the inflight is doubled when MSS is halved.
3) Spurious retransmission signaled by DSACK is not accounted
The new approach is simpler and more robust. For SACK connections,
tp->delivered increments as packets are being acked or sacked in
SACK and ACK processing.
For non-sack connections, it's done in tcp_remove_reno_sacks() and
tcp_add_reno_sack(). When an ACK advances the SND.UNA, tp->delivered
is incremented by the number of packets ACKed (less the current
number of DUPACKs received plus one packet hole). Upon receiving
a DUPACK, tp->delivered is incremented assuming one out-of-order
packet is delivered.
Upon receiving a DSACK, tp->delivered is incremented assuming one
retransmission is delivered in tcp_sacktag_write_queue().
Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Eric Dumazet <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Yuchung Cheng [Tue, 2 Feb 2016 18:33:05 +0000 (10:33 -0800)]
tcp: move cwnd reduction after recovery state processing
Currently the cwnd is reduced and increased in various different
places. The reduction happens in various places in the recovery
state processing (tcp_fastretrans_alert) while the increase
happens afterward.
A better sequence is to identify lost packets and update
the congestion control state (icsk_ca_state) first. Then base
on the new state, raise or lower the cwnd in one central place. This
makes it clearer to reason about cwnd changes.
Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Eric Dumazet <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Yuchung Cheng [Tue, 2 Feb 2016 18:33:04 +0000 (10:33 -0800)]
tcp: retransmit after recovery processing and congestion control
The retransmission and F-RTO transmission currently happen inside
recovery state processing (tcp_fastretrans_alert) but before
congestion control. This refactoring moves the logic after both
so that we can determine how much to send (cwnd) before deciding what to
send.
Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Eric Dumazet <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
unix: properly account for FDs passed over unix sockets
Signed-off-by: David Herrmann <dh.herrmann@gmail.com> Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Paul Durrant [Tue, 2 Feb 2016 11:55:05 +0000 (11:55 +0000)]
xen-netback: implement dynamic multicast control
My recent patch to the Xen Project documents a protocol for 'dynamic
multicast control' in netif.h. This extends the previous multicast control
protocol to not require a shared ring reconnection to turn the feature off.
Instead the backend watches the "request-multicast-control" key in xenstore
and turns the feature off if the key value is written to zero.
This patch adds support for dynamic multicast control in xen-netback.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com> Cc: Ian Campbell <ian.campbell@citrix.com> Cc: Wei Liu <wei.liu2@citrix.com> Acked-by: Wei Liu <wei.liu2@citrix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 7 Feb 2016 18:55:29 +0000 (13:55 -0500)]
Merge branch 'be2net-non-critical-fixes'
Sriharsha Basavapatna says:
====================
be2net patch-set
v2 changes:
Patch-4: Changed a tab to space in be.h
Patches-6,7,8: Updated commit log summary line: benet --> be2net
Hi David,
The following patch set contains a few non-critical bug fixes. Please
consider applying this to the net-next tree. Thanks.
Patch-1 fixes be_set_phys_id() ethtool function to return an error code.
Patch-2 fixes a warning when some commands fail for VFs.
Patch-3 fixes be_vlan_rem_vid() to verify vlan being removed is in the list.
Patch-4 improves SRIOV queue distribution logic.
Patch-5 avoids running self test on VFs.
Patch-6 fixes error recovery in Lancer to clean up after moving to ready state.
Patch-7 adds retry logic to error recovery in case of recovery failures
Patch-8 fixes time interval used in eq delay computation routine
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
be2net: Fix interval calculation in interrupt moderation
Interrupt moderation parameters need to be recalculated only
after a time interval of 1 ms. The interval calculation is wrong
when jiffies roll over. Fix this by using the recommended way of
calculating intervals with jiffies.
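The recommended pattern is to subtract jiffies values and convert the
difference, which is well-defined across a wrap, rather than comparing
absolute timestamps; a minimal sketch (field name assumed):

  #include <linux/jiffies.h>

  /* true if at least 1 ms has elapsed since prev; the subtraction is
   * safe across a jiffies rollover, unlike absolute comparisons */
  static bool eqd_update_due(unsigned long prev)
  {
          return jiffies_to_msecs(jiffies - prev) >= 1;
  }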
Signed-off-by: Padmanabh Ratnakar <padmanabh.ratnakar@broadcom.com> Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
be2net: Add retry in case of error recovery failure
Retry error recovery MAX_ERR_RECOVERY_RETRY_COUNT times in case of
failure during error recovery.
Signed-off-by: Padmanabh Ratnakar <padmanabh.ratnakar@broadcom.com> Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
After error is detected, wait for adapter to move to ready state
before destroying queues and cleaning up other resources. Also
skip performing any cleanup for non-Lancer chips and move debug
messages to correct routine.
Signed-off-by: Padmanabh Ratnakar <padmanabh.ratnakar@broadcom.com> Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Somnath Kotur [Wed, 3 Feb 2016 04:19:20 +0000 (09:49 +0530)]
be2net: Don't run ethtool self-tests for VFs
The CMD_SUBSYSTEM_LOWLEVEL cmds need the DEV_CFG privilege to run,
which VFs don't have by default.
Self-tests need to be issued only for PFs.
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com> Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
be2net: SRIOV Queue distribution should factor in EQ-count of VFs
The SRIOV resource distribution logic for RX/TX queue counts is not optimal
when a small number of VFs are enabled. It does not take into account the
VF's EQ count while computing the queue counts. Because of this, the VF
gets a large number of queues, though it doesn't have sufficient EQs,
resulting in wasted queue resources. And the PF gets a smaller share of
queues though it has more EQs. Fix this by capping the VF queue count at
its EQ count.
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
be2net: Fix be_vlan_rem_vid() to check vlan id being removed
The driver decrements its vlan count without checking if it is really
present in its list. This results in an invalid vlan count and impacts
subsequent vlan add/rem ops. The function be_vlan_rem_vid() should be
updated to fix this.
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Suresh Reddy [Wed, 3 Feb 2016 04:19:17 +0000 (09:49 +0530)]
be2net: check for INSUFFICIENT_PRIVILEGES error
The driver currently logs the message "VF is not privileged to issue
opcode" by checking only the base_status field for UNAUTHORIZED_REQUEST.
Add a check to look for INSUFFICIENT_PRIVILEGES in the additional status
field as well, since not all cmds fail with that base status.
Signed-off-by: Suresh Reddy <suresh.reddy@broadcom.com> Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Suresh Reddy [Wed, 3 Feb 2016 04:19:16 +0000 (09:49 +0530)]
be2net: return error status from be_set_phys_id()
be_set_phys_id() returns 0 to ethtool when the command fails in the FW.
This patch fixes the set_phys_id() to return -EIO in case the FW cmd fails.
Signed-off-by: Suresh Reddy <suresh.reddy@broadcom.com> Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 7 Feb 2016 18:51:07 +0000 (13:51 -0500)]
Merge branch 'vxlan-headers-and-tx-cleanups'
Jiri Benc says:
====================
vxlan: cleanup headers and tx path
This is first part of vxlan cleanup. It makes vxlan.h better organized and
removes code duplication in the tx path. There's no change in functionality.
This is in preparation for VXLAN-GPE support. Other sets will follow with
cleanup of rx path, modifications of vxlan interface setup and
cleanup/modification of fdb handling. The goal of these patchsets is to
consolidate VXLAN extension handling (right now it's scattered all around
the code) and allow vxlan to operate in L3 mode (i.e. without inner Ethernet
headers).
The full picture (all 30 patches) can be seen at:
https://github.com/jbenc/linux-vxlan/commits/master
Changelog:
v2: added patch 5/6, fixing the udp sum problem spotted by Paolo Abeni
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Benc [Tue, 2 Feb 2016 17:09:15 +0000 (18:09 +0100)]
vxlan: consolidate csum flag handling
The flag for tx checksumming for tunneling over IPv4 and IPv6 is different.
Decide whether to do tx checksumming in vxlan_xmit_one and pass it on as
a separate flag. This will allow for tx path consolidation in the next
patch.
Unfortunately, gcc is not clever enough to see that udp_sum is always
initialized and gives an uninitialized variable warning. Set it to false to
silence the warning.
Signed-off-by: Jiri Benc <jbenc@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jiri Benc [Tue, 2 Feb 2016 17:09:13 +0000 (18:09 +0100)]
vxlan: restructure vxlan.h definitions
RCO and GBP are VXLAN extensions, not specified in RFC 7348. Because of
that, they need to be explicitly enabled when creating vxlan interface. By
default, those extensions are not used and plain VXLAN header is sent and
received.
Reflect this in vxlan.h: first, the plain VXLAN header is defined. Following
it, RCO is documented and defined, and likewise for GBP.
Signed-off-by: Jiri Benc <jbenc@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Sat, 6 Feb 2016 19:16:28 +0000 (11:16 -0800)]
tcp: fastopen: call tcp_fin() if FIN present in SYNACK
When we acknowledge a FIN, it is not enough to ack the sequence number
and queue the skb into the receive queue. We also have to call tcp_fin()
to properly update socket state and send proper poll() notifications.
It seems we also had the problem if we received a SYN packet with the
FIN flag set, but it does not seem an urgent issue, as no known
implementation can do that.
Fixes: 61d2bcae99f6 ("tcp: fastopen: accept data/FIN present in SYNACK message") Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Yuchung Cheng <ycheng@google.com> Cc: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 6 Feb 2016 08:42:09 +0000 (03:42 -0500)]
Merge branch 'tipc-topology-updates'
Parthasarathy Bhuvaragan says:
====================
tipc: cleanups, fixes & improvements for topology server
This series contains topology server cleanups, fixes and improvements.
Cleanups in #1-#4:
We remove duplicate data structures and align the rest of the code accordingly.
Fixes in #5-#8:
The bugs are race conditions that occur either during configuration or
while running on SMP targets, popping up under different situations.
Improvements in #9-#10:
Updates to decrease timer usage and improve readability.
v2: Updated commit message in patch 6 based on feedback from
Sergei Shtylyov sergei.shtylyov@cogentembedded.com
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
tipc: use alloc_ordered_workqueue() instead of WQ_UNBOUND w/ max_active = 1
Until now, tipc_rcv and tipc_send workqueues in server are allocated
with parameters WQ_UNBOUND & max_active = 1.
These parameters make the call equivalent to
alloc_ordered_workqueue(). The latter form is more explicit and
can inherit future ordered_workqueue changes.
In this commit we replace alloc_workqueue() with more readable
alloc_ordered_workqueue().
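The substitution described, sketched with an assumed queue name and flags:

  #include <linux/workqueue.h>

  static struct workqueue_struct *make_rcv_wq(void)
  {
          /* before: alloc_workqueue("tipc_rcv", WQ_UNBOUND, 1); */
          return alloc_ordered_workqueue("tipc_rcv", 0);
  }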
Acked-by: Ying Xue <ying.xue@windriver.com> Reviewed-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: Parthasarathy Bhuvaragan <parthasarathy.bhuvaragan@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
tipc: do not create timers if subscription timeout = TIPC_WAIT_FOREVER
Until now, we create timers even for the subscription requests
with timeout = TIPC_WAIT_FOREVER.
This can be improved by avoiding timer creation when the timeout
is set to TIPC_WAIT_FOREVER.
In this commit, we introduce a check to create timers only
when timeout != TIPC_WAIT_FOREVER.
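Conceptually the check amounts to something like the sketch below; the
subscription struct and field names are placeholders, not the exact
TIPC ones:

  #include <linux/types.h>
  #include <linux/timer.h>
  #include <linux/jiffies.h>
  #include <linux/tipc.h>

  struct example_subscription {
          struct timer_list timer;
          u32 timeout;                /* ms, or TIPC_WAIT_FOREVER */
  };

  static void example_start_sub_timer(struct example_subscription *sub)
  {
          /* no timer at all for "never expires" subscriptions */
          if (sub->timeout == TIPC_WAIT_FOREVER)
                  return;
          mod_timer(&sub->timer, jiffies + msecs_to_jiffies(sub->timeout));
  }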
Acked-by: Ying Xue <ying.xue@windriver.com> Reviewed-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: Parthasarathy Bhuvaragan <parthasarathy.bhuvaragan@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
tipc: protect tipc_subscrb_get() with subscriber spin lock
Until now, during subscription creation the mod_timer() &
tipc_subscrb_get() are called after releasing the subscriber
spin lock.
In an SMP system when performing a subscription creation, if the
subscription timeout occurs simultaneously (the timer is
scheduled to run on another CPU) then the timer thread
might decrement the subscribers refcount before the create
thread increments the refcount.
This can be simulated by creating a subscription with timeout=0 and
sometimes the timeout occurs before the create request is complete.
This leads to the following message:
[30.702949] BUG: spinlock bad magic on CPU#1, kworker/u8:3/87
[30.703834] general protection fault: 0000 [#1] SMP
[30.704826] CPU: 1 PID: 87 Comm: kworker/u8:3 Not tainted 4.4.0-rc8+ #18
[30.704826] Workqueue: tipc_rcv tipc_recv_work [tipc]
[30.704826] task: ffff88003f878600 ti: ffff88003fae0000 task.ti: ffff88003fae0000
[30.704826] RIP: 0010:[<ffffffff8109196c>] [<ffffffff8109196c>] spin_dump+0x5c/0xe0
[...]
[30.704826] Call Trace:
[30.704826] [<ffffffff81091a16>] spin_bug+0x26/0x30
[30.704826] [<ffffffff81091b75>] do_raw_spin_lock+0xe5/0x120
[30.704826] [<ffffffff81684439>] _raw_spin_lock_bh+0x19/0x20
[30.704826] [<ffffffffa0096f10>] tipc_subscrb_rcv_cb+0x1d0/0x330 [tipc]
[30.704826] [<ffffffffa00a37b1>] tipc_receive_from_sock+0xc1/0x150 [tipc]
[30.704826] [<ffffffffa00a31df>] tipc_recv_work+0x3f/0x80 [tipc]
[30.704826] [<ffffffff8106a739>] process_one_work+0x149/0x3c0
[30.704826] [<ffffffff8106aa16>] worker_thread+0x66/0x460
[30.704826] [<ffffffff8106a9b0>] ? process_one_work+0x3c0/0x3c0
[30.704826] [<ffffffff8106a9b0>] ? process_one_work+0x3c0/0x3c0
[30.704826] [<ffffffff8107029d>] kthread+0xed/0x110
[30.704826] [<ffffffff810701b0>] ? kthread_create_on_node+0x190/0x190
[30.704826] [<ffffffff81684bdf>] ret_from_fork+0x3f/0x70
In this commit,
1. we remove the check for the return code for mod_timer()
2. we protect tipc_subscrb_get() using the subscriber spin lock.
We increment the subscriber's refcount as soon as we add the
subscription to subscriber's subscription list.
Acked-by: Ying Xue <ying.xue@windriver.com> Reviewed-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: Parthasarathy Bhuvaragan <parthasarathy.bhuvaragan@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
tipc: hold subscriber->lock for tipc_nametbl_subscribe()
Until now, while creating a subscription the subscriber lock
protects only the subscriber's subscription list and not the
nametable. The call to tipc_nametbl_subscribe() is outside
the lock. However, at subscription timeout and cancel both
the subscriber's subscription list and the nametable are
protected by the subscriber lock.
This asymmetric locking mechanism leads to the following problem:
In an SMP system, the timer can fire on another core before
the create request is complete.
When the timer thread calls tipc_nametbl_unsubscribe() before create
thread calls tipc_nametbl_subscribe(), we get a NULL pointer dereference.
This can be simulated by creating a subscription with timeout=0 and
sometimes the timeout occurs before the create request is complete.
In this commit, we do the following at subscription creation:
1. set the subscription's subscriber pointer before performing
tipc_nametbl_subscribe(), as this value is required further in
the call chain, e.g. by tipc_subscrp_send_event().
2. move tipc_nametbl_subscribe() under the scope of the subscriber lock
Acked-by: Ying Xue <ying.xue@windriver.com> Reviewed-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: Parthasarathy Bhuvaragan <parthasarathy.bhuvaragan@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>