Ivan Khoronzhuk [Fri, 17 Jun 2016 10:25:40 +0000 (13:25 +0300)]
Documentation: DT: cpsw: remove rx_descs property
There is no reason to hold a software-dependent parameter in the device
tree. Moreover, there is no need for this parameter at all because the
davinci_cpdma driver splits the pool of descriptors equally between tx
and rx channels anyway.
Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Ivan Khoronzhuk [Fri, 17 Jun 2016 10:25:39 +0000 (13:25 +0300)]
net: ethernet: ti: cpsw: remove rx_descs property
There is no need for the rx_descs property because the davinci_cpdma
driver splits the pool of descriptors equally between tx and rx channels.
That is, if the number of descriptors is 256, 128 of them are for rx
channels. While receiving, a descriptor is freed back to the pool and
then allocated again with a new skb. And if "rx_descs" is set to 64 in
the DT, then 128 - 64 = 64 descriptors always sit in the pool and cannot
be used, for tx, for instance. That is poor resource usage; it is better
to set it to half of the pool, so the rx half can be used in full. This
has no impact on performance since the "redundant" descriptors were
unused anyway.
Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
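For illustration, a minimal sketch of the fixed 50/50 split described above;
the structure and field names are assumptions, not the actual davinci_cpdma
code:

/* Illustrative only: an equal tx/rx split of the descriptor pool makes a
 * separate "rx_descs" knob redundant. Names are made up for this sketch. */
struct example_pool { int num_desc; };
struct example_chan { int desc_num; };

static void example_split_pool(struct example_pool *pool,
			       struct example_chan *tx,
			       struct example_chan *rx)
{
	/* half of the pool for rx, half for tx; any fixed rx_descs value
	 * below num_desc/2 would just leave descriptors permanently unused */
	rx->desc_num = pool->num_desc / 2;
	tx->desc_num = pool->num_desc - rx->desc_num;
}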
Dan Carpenter [Fri, 17 Jun 2016 09:22:26 +0000 (12:22 +0300)]
tipc: potential shift wrapping bug in map_set()
"up_map" is a u64 type but we're not using the high 32 bits.
Fixes: 35c55c9877f8 ('tipc: add neighbor monitoring framework') Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
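The class of bug being fixed is worth a short illustration: shifting the int
constant 1 by a bit position that can reach 32 or more cannot set the upper
half of a u64 (and is undefined behaviour). A hedged sketch follows; the real
map_set() in tipc may differ in detail.

#include <linux/types.h>

static void map_set(u64 *up_map, int i, unsigned int v)
{
	/* Buggy pattern: "1 << i" is an int shift, so bits 32..63 are lost:
	 *
	 *   *up_map = (*up_map & ~(1 << i)) | (v << i);
	 *
	 * Fixed pattern: force 64-bit shifts.
	 */
	*up_map = (*up_map & ~(1ULL << i)) | ((u64)v << i);
}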
IPv6 address selection is currently messed up for several use cases such
as unnumbered deployments with global addresses on the VRF device and none
on the enslaved devices.
Update the source address selection to consider the real output route as
opposed to the VRF route that sends packets to the VRF device first (i.e.,
implement get_saddr6 similar to the IPv4 method) and update the IPv6
address selection to consider L3 domains and the preference for addresses
on the VRF device.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Thu, 16 Jun 2016 23:24:26 +0000 (16:24 -0700)]
net: ipv6: Address selection needs to consider L3 domains
IPv6 version of 3f2fb9a834cb ("net: l3mdev: address selection should only
consider devices in L3 domain") and the follow up commit, a17b693cdd876
("net: l3mdev: prefer VRF master for source address selection").
That is, if an outbound device is given, then the address preference
order is: an address from that device, an address from the master device
if it is enslaved, and then an address from a device in the same L3 domain.
Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Thu, 16 Jun 2016 23:24:25 +0000 (16:24 -0700)]
net: vrf: Implement get_saddr for IPv6
IPv6 source address selection needs to consider the real egress route.
Similar to IPv4, implement a get_saddr6 method which is called if the
source address has not been set. The get_saddr6 method does a full
lookup which means pulling a route from the VRF FIB table and properly
considering linklocal/multicast destination addresses. Lookup failures
(eg., unreachable) then cause the source address selection to fail
which gets propagated back to the caller.
Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Thu, 16 Jun 2016 23:24:24 +0000 (16:24 -0700)]
net: ipv6: Move ip6_route_get_saddr to inline
VRF driver needs access to ip6_route_get_saddr code. Since it does
little beyond ipv6_dev_get_saddr and ipv6_dev_get_saddr is already
exported for modules, move ip6_route_get_saddr to the header as an
inline.
Code move only; no functional change.
Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
These patches are meant to address two things. First we are currently
using the ndo_add/del_vxlan_port calls with VXLAN-GPE tunnels and we
cannot really support that as it is likely to cause more harm than
good since VXLAN-GPE can support tunnels without a MAC address on the
inner header.
As such we need to add a new offload type to advertise this, but doing so
would mean introducing 3 new functions: one for the driver to request the
ports, and two for the tunnel code to push port additions and deletions to
the device. Instead of taking that approach, I think it is much better to
provide one common function for fetching the ports and a generic means to
push the tunnels to the device. To make this work, this patch set does
several things.
First it merges the existing VXLAN and GENEVE functionality into one set of
functions and passes an enum in order to specify the type of tunnel we want
to offload. By doing this we only have to extend this enum in the future
if we want to add additional types.
Second it goes through the drivers replacing all of the tunnel specific
offload calls with implementations that support the generic calls so that
we can drop the VXLAN and GENEVE specific calls entirely.
Finally I go through in the last patch and replace the VXLAN specific
offload request that was being used for VXLAN-GPE with one that specifies
if we want to offload VXLAN or VXLAN-GPE so that the hardware can decide if
it can actually support it or not.
I also ended up with some minor clean-up built into the driver patches for
this. Most of it either fixes misuse of build flags or fixes checks that
specified a type to ignore instead of the type that should be used; in the
case of ixgbe I also moved an rtnl_lock/unlock in order to avoid taking it
unless it was actually needed.
v2:
I did my best to remove the word "offload" from any of the calls or
notifiers as this isn't really an offload. It
is a workaround for the fact that the drivers don't provide basic features
like CHECKSUM_COMPLETE. I also added a disclaimer to the section defining
the function prototypes explaining that these are essentially workarounds.
I ended up going through and stripping all of the VXLAN and GENEVE build
flags from the drivers. There isn't much point in carrying them. In
addition I dropped the use of the vxlan.h or geneve.h header files in favor
of udp_tunnel.h in the cases where a driver didn't need anything from
either of those headers.
I updated the tunnel add/del functions so that they pass a udp_tunnel_info
structure instead of a list of arguments. This way we should be able to
add additional information in the future with little impact on the other
drivers.
I updated bnxt so that it doesn't use a hard-coded port number for GENEVE.
I have been able to test mlx4e, mlx5e, and i40e and verified functionality
on these drivers. Though there are patches for the net tree I submitted
due to unrelated bugs I found in the mlx4e and i40e/i40evf drivers.
v3:
Fixed a typo that caused us to add geneve port when we should have been
deleting it.
Ended up dropping geneve and vxlan wrappers for
udp_tunnel_notify_rx_add/del_port and instead just called them directly.
Updated comments for functions to call out RTNL instead of port lock.
Updated patch description to remove changes that were moved into a second
patch.
Rebased on latest net-next to fix merge conflict on bnxt driver.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Thu, 16 Jun 2016 19:23:19 +0000 (12:23 -0700)]
vxlan: Add new UDP encapsulation offload type for VXLAN-GPE
The fact is VXLAN with Generic Protocol Extensions cannot be supported by
the same hardware parsers that support VXLAN. The protocol extensions
allow for things like a Next Protocol field which in turn allows for things
other than Ethernet to be passed over the tunnel. Most existing parsers
will not know how to interpret this.
To resolve this I am giving VXLAN-GPE its own UDP encapsulation offload
type. This way hardware that does support GPE can simply add this type to
the switch statement for VXLAN, and if they don't support it then this will
fix any issues where headers might be interpreted incorrectly.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
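A hedged sketch of what this separation lets a driver do in its port-add
handler: offload classic VXLAN, but only claim GPE support if the parser
really understands it. It assumes the udp_tunnel_info structure and the
UDP_TUNNEL_TYPE_* names introduced by the core patch further down in this
log; the helper example_hw_add_vxlan_port() is hypothetical.

static void example_udp_tunnel_add(struct net_device *dev,
				   struct udp_tunnel_info *ti)
{
	switch (ti->type) {
	case UDP_TUNNEL_TYPE_VXLAN:
		/* parser understands the classic VXLAN header */
		example_hw_add_vxlan_port(dev, ti->port);  /* hypothetical */
		break;
	case UDP_TUNNEL_TYPE_VXLAN_GPE:
		/* GPE allows non-Ethernet inner protocols; hardware that
		 * cannot handle that simply omits this case, so the port
		 * is never pushed to a parser that would misread it */
	default:
		break;
	}
}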
Alexander Duyck [Thu, 16 Jun 2016 19:23:12 +0000 (12:23 -0700)]
net: Remove deprecated tunnel specific UDP offload functions
Now that we have all the drivers using udp_tunnel_get_rx_ports,
ndo_add_udp_enc_rx_port, and ndo_del_udp_enc_rx_port we can drop the
function calls that were specific to VXLAN and GENEVE.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Thu, 16 Jun 2016 19:23:04 +0000 (12:23 -0700)]
qlcnic: Replace ndo_add/del_vxlan_port with ndo_add/del_udp_enc_port
This change replaces the network device operations for adding or removing a
VXLAN port with operations that are more generically defined to be used for
any UDP offload port but provide a type. As such by just adding a line to
verify that the offload type is VXLAN we can maintain the same
functionality.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Thu, 16 Jun 2016 19:22:57 +0000 (12:22 -0700)]
qede: Move all UDP port notifiers to single function
This patch goes through and combines the notifiers for VXLAN and GENEVE
into a single function for each action. So there is now one combined
function for getting ports, one for adding the ports, and one for deleting
the ports.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Thu, 16 Jun 2016 19:22:51 +0000 (12:22 -0700)]
nfp: Replace ndo_add/del_vxlan_port with ndo_add/del_udp_enc_port
This change replaces the network device operations for adding or removing a
VXLAN port with operations that are more generically defined to be used for
any UDP offload port but provide a type. As such by just adding a line to
verify that the offload type is VXLAN we can maintain the same
functionality.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Thu, 16 Jun 2016 19:22:38 +0000 (12:22 -0700)]
mlx5_en: Replace ndo_add/del_vxlan_port with ndo_add/del_udp_enc_port
This change replaces the network device operations for adding or removing a
VXLAN port with operations that are more generically defined to be used for
any UDP offload port but provide a type. As such by just adding a line to
verify that the offload type is VXLAN we can maintain the same
functionality.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Thu, 16 Jun 2016 19:22:30 +0000 (12:22 -0700)]
mlx4_en: Replace ndo_add/del_vxlan_port with ndo_add/del_udp_enc_port
This change replaces the network device operations for adding or removing a
VXLAN port with operations that are more generically defined to be used for
any UDP offload port but provide a type. As such by just adding a line to
verify that the offload type is VXLAN we can maintain the same
functionality.
In addition, I updated the socket address family check so that instead of
excluding IPv6 we abort if the family is not IPv4. This makes much more
sense as we should only be supporting IPv4 outer addresses on this
hardware.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Thu, 16 Jun 2016 19:22:19 +0000 (12:22 -0700)]
ixgbe: Replace ndo_add/del_vxlan_port with ndo_add/del_udp_enc_port
This change replaces the network device operations for adding or removing a
VXLAN port with operations that are more generically defined to be used for
any UDP offload port but provide a type. As such by just adding a line to
verify that the offload type is VXLAN we can maintain the same
functionality.
In addition, I updated the socket address family check so that instead of
excluding IPv6 we abort if the family is not IPv4. This makes much more
sense as we should only be supporting IPv4 outer addresses on this
hardware.
The last change is that I pulled the rtnl_lock/unlock into the conditional
statement for IXGBE_FLAG2_VXLAN_REREG_NEEDED. The motivation behind this
is to avoid unneeded bouncing of the mutex which will just slow down the
handling of this call anyway.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Thu, 16 Jun 2016 19:22:06 +0000 (12:22 -0700)]
i40e: Move all UDP port notifiers to single function
This patch goes through and combines the notifiers for VXLAN and GENEVE
into a single function for each action. So there is now one combined
function for getting ports, one for adding the ports, and one for deleting
the ports.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Thu, 16 Jun 2016 19:21:57 +0000 (12:21 -0700)]
fm10k: Replace ndo_add/del_vxlan_port with ndo_add/del_udp_enc_port
This change replaces the network device operations for adding or removing a
VXLAN port with operations that are more generically defined to be used for
any UDP offload port but provide a type. As such by just adding a line to
verify that the offload type is VXLAN we can maintain the same
functionality.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Thu, 16 Jun 2016 19:21:43 +0000 (12:21 -0700)]
benet: Replace ndo_add/del_vxlan_port with ndo_add/del_udp_enc_port
This change replaces the network device operations for adding or removing a
VXLAN port with operations that are more generically defined to be used for
any UDP offload port but provide a type. As such by just adding a line to
verify that the offload type is VXLAN we can maintain the same
functionality.
I have also gone through and removed the BE2NET_VXLAN config option since it
no longer relies on the VXLAN code anyway.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Thu, 16 Jun 2016 19:21:36 +0000 (12:21 -0700)]
bnxt: Move GENEVE support from hard-coded port to using port notifier
The port number for GENEVE is hard coded into the bnxt driver. This is the
kind of thing we want to avoid going forward. For now I will integrate
this back into the port notifier so that we can change the GENEVE port
number if we need to in the future.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Acked-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Thu, 16 Jun 2016 19:21:19 +0000 (12:21 -0700)]
bnxt: Update drivers to support unified UDP encapsulation offload functions
This patch ends up doing several things. First it updates the driver to
make use of the new unified UDP tunnel offload notifier functions. In
addition I updated the code so that we can work around the bits that were
checking whether VXLAN was enabled, since we are now using a notifier-based
setup.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Acked-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Thu, 16 Jun 2016 19:21:09 +0000 (12:21 -0700)]
bnx2x: Move all UDP port notifiers to single function
This patch goes through and combines the notifiers for VXLAN and GENEVE
into a single function for each action. So there is now one combined
function for getting ports, one for adding the ports, and one for deleting
the ports.
I also went through and dropped the BNX2X VXLAN and GENEVE specific build
flags.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Thu, 16 Jun 2016 19:21:00 +0000 (12:21 -0700)]
net: Merge VXLAN and GENEVE push notifiers into a single notifier
This patch merges the notifiers for VXLAN and GENEVE into a single UDP
tunnel notifier. The idea is that we will want to only have to make one
notifier call to receive the list of ports for VXLAN and GENEVE tunnels
that need to be offloaded.
In addition we add a new set of ndo functions named ndo_udp_tunnel_add and
ndo_udp_tunnel_del that are meant to allow us to track the tunnel meta-data
such as port and address family as tunnels are added and removed. The
tunnel meta-data is now transported in a structure named udp_tunnel_info
which for now carries the type, address family, and port number. In the
future this could be updated so that we can include a tuple of values
including things such as the destination IP address and other fields.
I also ended up going with a naming scheme that consisted of using the
prefix udp_tunnel on function names. I applied this to the notifier and
ndo ops as well so that it hopefully points to the fact that these are
primarily used in the udp_tunnel functions.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
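For reference, a hedged sketch of the meta-data structure and ndo hooks
described above. Only the names udp_tunnel_info, ndo_udp_tunnel_add and
ndo_udp_tunnel_del come from the text; the member types and prototype shape
are assumptions, not copied from udp_tunnel.h.

struct net_device;

/* Sketch of the tunnel meta-data: type, address family and port. */
struct udp_tunnel_info {
	unsigned short	type;		/* UDP_TUNNEL_TYPE_VXLAN, _GENEVE, ... */
	unsigned short	sa_family;	/* AF_INET or AF_INET6 */
	unsigned short	port;		/* UDP dst port, network byte order */
};

/* The new hooks a driver implements instead of the old VXLAN/GENEVE
 * specific ones (shape assumed): */
struct example_udp_tunnel_ndo {
	void (*ndo_udp_tunnel_add)(struct net_device *dev,
				   struct udp_tunnel_info *ti);
	void (*ndo_udp_tunnel_del)(struct net_device *dev,
				   struct udp_tunnel_info *ti);
};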
Alexander Duyck [Thu, 16 Jun 2016 19:20:52 +0000 (12:20 -0700)]
net: Combine GENEVE and VXLAN port notifiers into single functions
This patch merges the GENEVE and VXLAN code so that both functions pass
through a shared code path. This way we can start the effort of using a
single function on the network device drivers to handle both of these
tunnel types.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Thu, 16 Jun 2016 19:20:44 +0000 (12:20 -0700)]
vxlan/geneve: Include udp_tunnel.h in vxlan/geneve.h and fixup includes
This patch makes it so that we add udp_tunnel.h to vxlan.h and geneve.h
header files. This is useful as I plan to move the generic handlers for
the port offloads into the udp_tunnel header file and leave the vxlan and
geneve headers to be a bit more protocol specific.
I also went through and cleaned out a number of redundant includes that
where in the .h and .c files for these drivers.
Signed-off-by: Alexander Duyck <aduyck@mirantis.com> Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Thu, 16 Jun 2016 21:19:29 +0000 (23:19 +0200)]
net, cls: also reject deleting all filters when TCA_KIND present
When we check for RTM_DELTFILTER, we should also reject the request
for deleting all filters under a given parent when TCA_KIND attribute
is present. If present, it's currently just ignored but there's also
no point to let it pass in the first place either since this doesn't
have any meaning with wild-card removal.
Fixes: ea7f8277f907 ("net, cls: allow for deleting all filters for given parent") Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
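A hedged sketch of the check described above; the variable and attribute
names follow cls_api loosely and this is not a verbatim excerpt of the
patch.

/* kernel context assumed (linux/rtnetlink.h, net/pkt_cls.h) */
static int example_check_wildcard_delete(struct nlmsghdr *n, u32 prio,
					 struct nlattr **tca)
{
	/* A wildcard delete (prio == 0 on RTM_DELTFILTER) removes all
	 * filters under the parent, so a TCA_KIND attribute has no meaning
	 * there; reject it instead of silently ignoring it. */
	if (n->nlmsg_type == RTM_DELTFILTER && !prio && tca[TCA_KIND])
		return -EINVAL;
	return 0;
}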
David S. Miller [Fri, 17 Jun 2016 05:37:05 +0000 (22:37 -0700)]
Merge branch 'vmxnet3-upgrade-to-version3'
Shrikrishna Khare says:
====================
vmxnet3: upgrade to version 3
vmxnet3 emulation has recently added several new features, including
support for new commands the driver can issue to the emulation, changes in
descriptor fields, etc. This patch series extends the vmxnet3 driver to
leverage these new features.
Compatibility is maintained using existing vmxnet3 versioning mechanism as
follows:
- new features added to vmxnet3 emulation are associated with new vmxnet3
version viz. vmxnet3 version 3.
- emulation advertises all the versions it supports to the driver.
- during initialization, vmxnet3 driver picks the highest version number
supported by both the emulation and the driver and configures emulation
to run at that version.
In particular, following changes are introduced:
Patch 1:
Some command definitions from previous vmxnet3 versions are
missing. This patch adds those definitions before moving to vmxnet3
version 3. It also updates the copyright info and the maintained-by entry.
Patch 2:
This patch introduces generalized command interface which allows
for easily adding new commands that vmxnet3 driver can issue to the
emulation. Further patches in this series make use of this facility.
Patch 3:
Transmit data ring buffer is used to copy packet headers or small
packets. It is a fixed size buffer. This patch extends the driver to
allow variable sized transmit data ring buffer.
Patch 4:
This patch introduces receive data ring buffer - a set of small sized
buffers that are always mapped by the emulation. This avoids memory
mapping/unmapping overhead for small packets.
Patch 5:
The vmxnet3 emulation supports a variety of coalescing modes. This patch
extends vmxnet3 driver to allow querying and configuring these modes.
Patch 6:
In vmxnet3 version 3, the emulation added support for the vmxnet3 driver
to communicate information about the memory regions the driver will use
for rx/tx buffers. This patch exposes related commands to the driver.
Patch 7:
With all vmxnet3 version 3 changes incorporated in the vmxnet3 driver,
this patch lets the driver configure the emulation to run at vmxnet3
version 3.
Changes in v2:
- v1 patch used special values of rx-usecs to differentiate between
coalescing modes. v2 uses relevant fields in struct ethtool_coalesce
to choose modes. Also, a new command VMXNET3_CMD_GET_COALESCE
is introduced which allows driver to query the device for default
coalescing configuration.
Changes in v3:
- fix subject line to use vmxnet3: instead of Driver:Vmxnet3
- resubmit when net-next is open
Changes in v4:
- Address code review comments by Ben Hutchings: remove unnecessary memset
from vmxnet3_get_coalesce.
Changes in v5:
- Updated all the patches to add detailed commit messages.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
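A hedged sketch of the negotiation described in the cover letter: the driver
reads the versions the emulation advertises and activates the highest one
both sides support. The helper names and the bit-per-version encoding are
assumptions; the real driver negotiates through its version-report register.

#include <linux/types.h>

#define DRIVER_MAX_VERSION	3	/* this driver understands up to v3 */

static u32 read_device_version_bitmap(void)
{
	/* placeholder: the real driver reads a version-report register;
	 * here, pretend the device supports versions 1..3 */
	return 0x7;
}

static void write_active_version(u32 version)
{
	/* placeholder: the real driver writes the chosen version back */
	(void)version;
}

static int negotiate_version(void)
{
	u32 supported = read_device_version_bitmap();
	int v;

	for (v = DRIVER_MAX_VERSION; v >= 1; v--) {
		if (supported & (1u << (v - 1))) {	/* bit v-1 => version v (assumed) */
			write_active_version(v);
			return v;
		}
	}
	return -1;	/* no common version */
}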
With all vmxnet3 version 3 changes incorporated in the vmxnet3 driver,
the driver can configure emulation to run at vmxnet3 version 3, provided
the emulation advertises support for version 3.
Signed-off-by: Shrikrishna Khare <skhare@vmware.com> Signed-off-by: David S. Miller <davem@davemloft.net>
vmxnet3: introduce command to register memory region
In vmxnet3 version 3, the emulation added support for the vmxnet3 driver
to communicate information about the memory regions the driver will use
for rx/tx buffers. The driver can also indicate which rx/tx queue the
memory region is applicable for. If this information is communicated
to the emulation, the emulation will always keep these memory regions
mapped, thereby avoiding the mapping/unmapping overhead for every packet.
Currently, Linux vmxnet3 driver does not leverage this capability. The
feasibility of using this approach for the Linux vmxnet3 driver will be
investigated independently and if possible, will be part of a different
patch. This patch only exposes the emulation capability to the driver
(vmxnet3_defs.h is identical between the driver and the emulation).
Signed-off-by: Guolin Yang <gyang@vmware.com> Signed-off-by: Shrikrishna Khare <skhare@vmware.com> Signed-off-by: David S. Miller <davem@davemloft.net>
vmxnet3: add support for get_coalesce, set_coalesce ethtool operations
The emulation supports a variety of coalescing modes viz. disabled
(no coalescing), adaptive, static (number of packets to batch before
raising an interrupt), rate based (number of interrupts per second).
This patch implements get_coalesce and set_coalesce methods to allow
querying and configuring different coalescing modes.
Signed-off-by: Keyong Sun <sunk@vmware.com> Signed-off-by: Manoj Tammali <tammalim@vmware.com> Signed-off-by: Shrikrishna Khare <skhare@vmware.com> Signed-off-by: David S. Miller <davem@davemloft.net>
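A hedged skeleton of what get_coalesce/set_coalesce typically look like for a
driver that maps adaptive, packet-count and rate-based modes onto fields of
struct ethtool_coalesce. The vmxnet3-specific plumbing is omitted and the
per-adapter state here is hypothetical.

#include <linux/ethtool.h>
#include <linux/netdevice.h>

/* Hypothetical per-adapter coalescing state, not the vmxnet3 structure. */
struct example_coal_conf {
	bool adaptive;
	u32  max_frames;	/* static mode: packets per interrupt */
	u32  rate_usecs;	/* rate-based mode, expressed in usecs */
};

static struct example_coal_conf example_conf;

static int example_get_coalesce(struct net_device *dev,
				struct ethtool_coalesce *ec)
{
	ec->use_adaptive_rx_coalesce = example_conf.adaptive;
	ec->rx_max_coalesced_frames  = example_conf.max_frames;
	ec->rx_coalesce_usecs        = example_conf.rate_usecs;
	return 0;
}

static int example_set_coalesce(struct net_device *dev,
				struct ethtool_coalesce *ec)
{
	example_conf.adaptive   = ec->use_adaptive_rx_coalesce;
	example_conf.max_frames = ec->rx_max_coalesced_frames;
	example_conf.rate_usecs = ec->rx_coalesce_usecs;
	/* a real driver would now issue the corresponding device command */
	return 0;
}

From user space these map onto ethtool -c/-C, e.g. the adaptive-rx, rx-frames
and rx-usecs parameters.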
vmxnet3 driver preallocates buffers for receiving packets and posts the
buffers to the emulation. In order to deliver a received packet to the
guest, the emulation must map buffer(s) and copy the packet into it.
To avoid this memory mapping overhead, this patch introduces the receive
data ring - a set of small sized buffers that are always mapped by
the emulation. If a packet fits into the receive data ring buffer, the
emulation delivers the packet via the receive data ring (which must be
copied by the guest driver), or else the usual receive path is used.
Receive Data Ring buffer length is configurable via ethtool -G ethX rx-mini
Signed-off-by: Shrikrishna Khare <skhare@vmware.com> Signed-off-by: David S. Miller <davem@davemloft.net>
vmxnet3: allow variable length transmit data ring buffer
vmxnet3 driver supports transmit data ring viz. a set of fixed size
buffers used by the driver to copy packet headers. Small packets that
fit these buffers are copied into these buffers entirely.
Currently this buffer size is fixed at 128 bytes. This patch extends
transmit data ring implementation to allow variable length transmit
data ring buffers. The length of the buffer is read from the emulation
during initialization.
Signed-off-by: Sriram Rangarajan <rangarajans@vmware.com> Signed-off-by: Shrikrishna Khare <skhare@vmware.com> Signed-off-by: David S. Miller <davem@davemloft.net>
vmxnet3: introduce generalized command interface to configure the device
Shared memory is used to exchange information between the vmxnet3 driver
and the emulation. In order to request emulation to perform a task, the
driver first populates specific fields in this shared memory and then
issues corresponding command by writing to the command register(CMD). The
layout of the shared memory was defined by vmxnet3 version 1 and cannot
be extended for every new command without breaking backward compatibility.
To address this problem, in vmxnet3 version 3, the emulation repurposed
a reserved field in the shared memory to represent command information
instead. For new commands, the driver first populates the command
information field in the shared memory and then issues the command. The
emulation interprets the data written to the command information depending
on the type of the command. This patch exposes this capability to the driver.
Signed-off-by: Guolin Yang <gyang@vmware.com> Signed-off-by: Shrikrishna Khare <skhare@vmware.com> Signed-off-by: David S. Miller <davem@davemloft.net>
vmxnet3 is currently at version 2, but some command definitions from
previous vmxnet3 versions are missing. Add those definitions before
moving to version 3.
Also, introduce utility macros for vmxnet3 version comparison and update
the copyright information and the maintained-by entry.
Signed-off-by: Shrikrishna Khare <skhare@vmware.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sebastian Ott [Thu, 16 Jun 2016 14:19:03 +0000 (16:19 +0200)]
s390/qeth: fix indentation in qeth_l3_arp_query
gcc-6 warns about obviously wrong indentation:
drivers/s390/net/qeth_l3_main.c: In function 'qeth_l3_arp_query':
drivers/s390/net/qeth_l3_main.c:2315:3: warning: this 'if' clause does not
guard... [-Wmisleading-indentation]
if (copy_to_user(udata, qinfo.udata, 4))
^~
drivers/s390/net/qeth_l3_main.c:2317:4: note: ...this statement, but the
latter is misleadingly indented as if it is guarded by the 'if'
goto free_and_out;
^~~~
Although this particular case is harmless, fix the indentation to get rid
of that warning and improve readability.
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Hans Wippel [Thu, 16 Jun 2016 14:19:02 +0000 (16:19 +0200)]
qeth: omit outbound queue 3 for unicast packets in Priority Queuing on HiperSockets
On HiperSockets only outbound queues 0 to 2 are available for unicast
packets. Current Priority Queuing implementation in the qeth driver puts
outgoing packets in outbound queues 0 to 3.
This patch puts outgoing unicast packets into outbound queue 2 instead of
outbound queue 3 when using Priority Queuing on a HiperSocket.
Additionally, the default outbound queue cannot be set to outbound queue 3
on HiperSockets.
Signed-off-by: Hans Wippel <hwippel@linux.vnet.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Hans Wippel [Thu, 16 Jun 2016 14:19:01 +0000 (16:19 +0200)]
qeth: improve set_features error handling
The function set_features is called to configure network device features
on the hardware. If errors occur, the network device features should
reflect the changed hardware state and the function should return an
error in order to notify the user.
In case of an error, the current implementation does not necessarily
save the changed hardware state in the network device features before an
error is returned.
This patch improves error handling by saving features, that could be
changed, to the network device features before returning an error. If
the device is not running, an additional check in fix_features removes
features, that require hardware changes, before they are passed to
set_features. Thus, the corresponding check was removed in set_features.
Signed-off-by: Hans Wippel <hwippel@linux.vnet.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Hans Wippel [Thu, 16 Jun 2016 14:19:00 +0000 (16:19 +0200)]
qeth: add network device features for VLAN devices
Network device features indicate the capabilities of network devices (e.g.,
TX checksum offloading and TSO) and their configuration state. Additional
network device features (vlan_features) indicate, for each network device,
which capabilities can be used on VLAN devices that are configured on the
respective base network device.
In the current qeth implementation, network device features are only set
for the base network devices and not for the VLAN devices. Thus, features
like TX checksum offloading cannot be used on VLAN devices.
This patch adds network device features to vlan_features, so they can be
used by VLAN devices.
Signed-off-by: Hans Wippel <hwippel@linux.vnet.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
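The change amounts to propagating the offload capabilities into
vlan_features; a hedged one-function sketch, where the exact feature mask
qeth propagates is an assumption:

#include <linux/netdevice.h>

static void example_setup_vlan_features(struct net_device *netdev)
{
	/* let VLAN devices on top of this netdev use the same checksum/TSO
	 * offloads as the base device */
	netdev->vlan_features |= netdev->hw_features &
				 (NETIF_F_IP_CSUM | NETIF_F_RXCSUM | NETIF_F_TSO);
}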
Thomas Richter [Thu, 16 Jun 2016 14:18:59 +0000 (16:18 +0200)]
qeth layer 2 and layer 3 common feature handling
This patch introduces a common set of fix_features and set_features
functions for layer 2 and layer 3. The RX, TX and TSO offload
functionality on the OSA card is enabled using ethtool at user's
request and not at device initialization as done before.
For layer 3 the RX checksum offloading is disabled at device
initialization time.
Signed-off-by: Thomas Richter <tmricht@linux.vnet.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Lakhvich Dmitriy [Thu, 16 Jun 2016 14:18:58 +0000 (16:18 +0200)]
qeth: optimize IP handling in rx_mode callback
In layer3 mode of the qeth driver, multicast IP addresses
from struct net_device and other type of IP addresses
from other sources require mapping to the OSA-card.
This patch simplifies the IP address mapping logic and changes the
implementation of the ndo_set_rx_mode callback and the IP notifier events.
Addresses are stored in private hashtables instead of lists now.
It allows hardware registration/removal for new/deleted multicast
addresses only.
Signed-off-by: Lakhvich Dmitriy <ldmitriy@ru.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Reviewed-by: Evgeny Cherkashin <Eugene.Crosser@ru.ibm.com> Reviewed-by: Thomas Richter <tmricht@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eugene Crosser [Thu, 16 Jun 2016 14:18:57 +0000 (16:18 +0200)]
qeth: introduce linearization fail count to stats
When skb data touches too many pages, skb_linearize() is called
opportunistically in the hope that fewer pages will be required
for a big linear buffer than for multiple fragments. This patch
introduces a separate counter in the ethtool statistics structure
representing _failed_ linearization attempts.
Signed-off-by: Eugene Crosser <Eugene.Crosser@ru.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Reviewed-by: Lakhvich Dmitriy <ldmitriy@ru.ibm.com> Reviewed-by: Thomas Richter <tmricht@de.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
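A hedged sketch of where such a counter gets bumped; the statistics
structure and the tx_linearize_fail field name are made up, not the qeth
names.

#include <linux/skbuff.h>
#include <linux/types.h>

struct example_tx_stats {
	u64 tx_linearize_fail;	/* would show up in ethtool -S */
};

static int example_try_linearize(struct sk_buff *skb,
				 struct example_tx_stats *stats)
{
	if (skb_linearize(skb)) {
		stats->tx_linearize_fail++;
		return -ENOMEM;		/* fall back or drop, per driver policy */
	}
	return 0;
}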
Eugene Crosser [Thu, 16 Jun 2016 14:18:56 +0000 (16:18 +0200)]
qeth: enable scatter/gather by default
Set scatter/gather ON by default on OSA, for both layer 2 and
layer 3 modes. We always use fragmentation over QDIO anyway,
so let the upper layers of the stack take advantage of that.
Signed-off-by: Eugene Crosser <Eugene.Crosser@ru.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Reviewed-by: Lakhvich Dmitriy <ldmitriy@ru.ibm.com> Reviewed-by: Thomas Richter <tmricht@de.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eugene Crosser [Thu, 16 Jun 2016 14:18:55 +0000 (16:18 +0200)]
qeth: enable scatter/gather in layer 2 mode
The patch enables NETIF_F_SG flag for OSA in layer 2 mode.
It also adds performance accounting for fragmented sends,
adds a conditional skb_linearize() attempt if the skb had
too many fragments for QDIO SBAL, and fills netdevice->gso_*
attributes.
Signed-off-by: Eugene Crosser <Eugene.Crosser@ru.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Reviewed-by: Lakhvich Dmitriy <ldmitriy@ru.ibm.com> Reviewed-by: Thomas Richter <tmricht@de.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eugene Crosser [Thu, 16 Jun 2016 14:18:54 +0000 (16:18 +0200)]
qeth: fill netdevice->gso_* attributes accurately
Use QETH_MAX_BUFFER_ELEMENTS(card) instead of constant 16.
Also fill gso_max_segs and gso_min_segs.
Signed-off-by: Eugene Crosser <Eugene.Crosser@ru.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eugene Crosser [Thu, 16 Jun 2016 14:18:53 +0000 (16:18 +0200)]
qeth: clean up condition when tso is used
Make conditions under which TSO is activated more stringent.
Make calculation of SBALEs required for the skb more accurate.
Signed-off-by: Eugene Crosser <Eugene.Crosser@ru.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eugene Crosser [Thu, 16 Jun 2016 14:18:52 +0000 (16:18 +0200)]
qeth: refactor calculation of SBALE count
Rewrite the functions that calculate the required number of buffer
elements needed to represent SKB data, to make them hopefully more
comprehensible. Plus a few cleanups.
Signed-off-by: Eugene Crosser <Eugene.Crosser@ru.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eugene Crosser [Thu, 16 Jun 2016 14:18:51 +0000 (16:18 +0200)]
qeth: Include error message for "OS Mismatch"
Having understood the semantics of BRIDGEPORT error code 0x0010,
we can introduce a meaningful error message.
Signed-off-by: Eugene Crosser <Eugene.Crosser@ru.ibm.com> Signed-off-by: Ursula Braun <ubraun@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Arnd Bergmann [Thu, 16 Jun 2016 13:59:25 +0000 (15:59 +0200)]
net: xfrm: fix old-style declaration
Modern C standards expect the 'inline' keyword to come before the return
type in a declaration, and we get a couple of warnings for this with "make W=1"
in the xfrm{4,6}_policy.c files:
net/ipv6/xfrm6_policy.c:369:1: error: 'inline' is not at beginning of declaration [-Werror=old-style-declaration]
static int inline xfrm6_net_sysctl_init(struct net *net)
net/ipv6/xfrm6_policy.c:374:1: error: 'inline' is not at beginning of declaration [-Werror=old-style-declaration]
static void inline xfrm6_net_sysctl_exit(struct net *net)
net/ipv4/xfrm4_policy.c:339:1: error: 'inline' is not at beginning of declaration [-Werror=old-style-declaration]
static int inline xfrm4_net_sysctl_init(struct net *net)
net/ipv4/xfrm4_policy.c:344:1: error: 'inline' is not at beginning of declaration [-Werror=old-style-declaration]
static void inline xfrm4_net_sysctl_exit(struct net *net)
Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: David S. Miller <davem@davemloft.net>
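The fix itself is just keyword ordering; a before/after illustration (the
_old/_new suffixes are only there to keep the two declarations distinct):

struct net;

/* Before: 'inline' after the return type triggers -Wold-style-declaration */
static int inline xfrm6_net_sysctl_init_old(struct net *net);

/* After: storage class and 'inline' come before the return type */
static inline int xfrm6_net_sysctl_init_new(struct net *net);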
Arnd Bergmann [Thu, 16 Jun 2016 13:52:12 +0000 (15:52 +0200)]
isdn: eicon: fix old-style declarations
Modern C standards expect the '__inline__' keyword to come before the return
type in a declaration, and we get many warnings for this with "make W=1"
because the eicon driver has this in a header file:
eicon/divasmain.c:448:1: error: '__inline__' is not at beginning of declaration [-Werror=old-style-declaration]
eicon/divasmain.c:453:1: error: '__inline__' is not at beginning of declaration [-Werror=old-style-declaration]
eicon/divasmain.c:458:1: error: '__inline__' is not at beginning of declaration [-Werror=old-style-declaration]
eicon/divasmain.c:463:1: error: '__inline__' is not at beginning of declaration [-Werror=old-style-declaration]
eicon/divasmain.c:468:1: error: '__inline__' is not at beginning of declaration [-Werror=old-style-declaration]
eicon/divasmain.c:473:1: error: '__inline__' is not at beginning of declaration [-Werror=old-style-declaration]
eicon/platform.h:274:1: error: '__inline__' is not at beginning of declaration [-Werror=old-style-declaration]
eicon/platform.h:280:1: error: '__inline__' is not at beginning of declaration [-Werror=old-style-declaration]
A similar warning gets printed for the diva_os_register_io_port()
declaration, because 'register' is interpreted as a keyword instead
of a variable name:
In file included from eicon/diva_didd.c:21:0:
eicon/platform.h:206:1: error: 'register' is not at beginning of declaration [-Werror=old-style-declaration]
Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: David S. Miller <davem@davemloft.net>
Arnd Bergmann [Thu, 16 Jun 2016 11:38:23 +0000 (13:38 +0200)]
net: tlan: don't set unused function argument
We get a warning for tlan_handle_tx_eoc when building with "make W=1"
drivers/net/ethernet/ti/tlan.c: In function 'tlan_handle_tx_eoc':
drivers/net/ethernet/ti/tlan.c:1647:59: error: parameter 'host_int' set but not used [-Werror=unused-but-set-parameter]
static u32 tlan_handle_tx_eoc(struct net_device *dev, u16 host_int)
This is harmless, but removing the unused assignment lets us avoid
the warning with no downside.
Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: David S. Miller <davem@davemloft.net>
Arnd Bergmann [Thu, 16 Jun 2016 11:38:22 +0000 (13:38 +0200)]
net: qlcnic: don't set unused function argument
We get a warning for qlcnic_83xx_get_mac_address when building with
"make W=1":
drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c: In function 'qlcnic_83xx_get_mac_address':
drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c:2156:8: error: parameter 'function' set but not used [-Werror=unused-but-set-parameter]
Clearly this is harmless, but there is also no point in setting
the variable, so we can simply remove the assignment.
Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Rajesh Borundia <rajesh.borundia@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Arnd Bergmann [Thu, 16 Jun 2016 09:00:05 +0000 (11:00 +0200)]
dsa: b53: fix big-endian register access
The b53 dsa register access confusingly uses __raw register accessors
when both the CPU and the device are big-endian, but it uses little-
endian accessors when the same device is used from a little-endian
CPU, which makes no sense.
This uses normal accessors in device-endianess all the time, which
will work in all four combinations of register and CPU endianess,
and it will have the same barrier semantics in all cases.
This also seems to take care of a (false positive) warning I'm getting:
drivers/net/dsa/b53/b53_mmap.c: In function 'b53_mmap_read64':
drivers/net/dsa/b53/b53_mmap.c:109:10: error: 'hi' may be used uninitialized in this function [-Werror=maybe-uninitialized]
*val = ((u64)hi << 32) | lo;
I originally planned to submit another patch for that warning
and did this one as a preparation cleanup, but it does seem to be
sufficient by itself.
Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
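A hedged illustration of the accessor change described, not a verbatim diff:
pick an accessor that matches the device's endianness and carries normal
barrier semantics, instead of the __raw_* variants.

#include <linux/io.h>
#include <linux/types.h>

/* Illustrative: read a 32-bit register in the device's native endianness.
 * readl() handles a little-endian device, ioread32be() a big-endian one;
 * both include the usual I/O barriers, unlike __raw_readl(). */
static u32 example_reg_read32(void __iomem *base, unsigned int off,
			      bool big_endian)
{
	return big_endian ? ioread32be(base + off) : readl(base + off);
}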
Simon Horman [Thu, 16 Jun 2016 08:09:09 +0000 (17:09 +0900)]
mpls: allow routes on ipgre devices
This appears to be necessary and sufficient to provide
MPLS in GRE (RFC4023) support.
This can be used by establishing an ipgre tunnel device
and then routing MPLS over it.
The following example will forward MPLS frames received with an outermost
MPLS label 100 over tun1, a GRE tunnel. The forwarded packet will have the
outermost MPLS LSE removed and two new LSEs added with labels 200
(outermost) and 300 (next).
ip link add name tun1 type gre remote 10.0.99.193 local 10.0.99.192 ttl 225
ip link set up dev tun1
ip addr add 10.0.98.192/24 dev tun1
ip route sh
echo 1 > /proc/sys/net/mpls/conf/eth0/input
echo 101 > /proc/sys/net/mpls/platform_labels
ip -f mpls route add 100 as 200/300 via inet 10.0.98.193
ip -f mpls route sh
Also remove unnecessary braces.
Reviewed-by: Dinan Gunawardena <dinan.gunawardena@netronome.com> Signed-off-by: Simon Horman <simon.horman@netronome.com> Acked-by: Robert Shearman <rshearma@brocade.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Philippe Reynes [Wed, 15 Jun 2016 22:12:48 +0000 (00:12 +0200)]
net: ethernet: ax88796: use phydev from struct net_device
The private structure contain a pointer to phydev, but the structure
net_device already contain such pointer. So we can remove the pointer
phydev in the private structure, and update the driver to use the
one contained in struct net_device.
Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
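The pattern is generic; a hedged sketch (the helper is hypothetical, only
the net_device phydev pointer comes from the description above):

#include <linux/netdevice.h>
#include <linux/phy.h>

static int example_get_link(struct net_device *ndev)
{
	/* use the pointer struct net_device already carries instead of a
	 * copy cached in the driver's private structure */
	struct phy_device *phydev = ndev->phydev;	/* set when the PHY is connected */

	return phydev ? phydev->link : 0;
}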
David S. Miller [Thu, 16 Jun 2016 21:14:58 +0000 (14:14 -0700)]
Merge branch 'stmmac-wol'
Vincent Palatin says:
====================
net: stmmac: dwmac-rk: fixes for Wake-on-Lan on RK3288
In order to support Wake-On-Lan when using the RK3288 integrated MAC
(with an external RGMII PHY), we need to avoid shutting down the regulator
of the external PHY when the MAC is suspended as it's currently done in the MAC
platform code.
As a first step, create independent callbacks for suspend/resume rather than
re-using the exit/init callbacks, so the dwmac platform driver can behave
differently on suspend (where it might skip shutting down the PHY) and at
module unloading.
Then update the dwmac-rk driver to switch off the PHY regulator only if we are
not planning to wake up from the LAN.
Finally add the PMT interrupt to the MAC device tree configuration, so we can
wake up the core from it when the PHY has received the magic packet.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Vincent Palatin [Wed, 15 Jun 2016 18:32:23 +0000 (11:32 -0700)]
ARM: dts: rockchip: add interrupt for Wake-on-Lan on RK3288
In order to use Wake-on-Lan on RK3288 integrated MAC, we need to wake-up
the CPU on the PMT interrupt when the MAC and the PHY are in low power mode.
Adding the interrupt declaration.
Signed-off-by: Vincent Palatin <vpalatin@chromium.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Vincent Palatin [Wed, 15 Jun 2016 18:32:22 +0000 (11:32 -0700)]
net: stmmac: dwmac-rk: keep the PHY up for WoL
When suspending the machine, do not shutdown the external PHY by cutting
its regulator in the mac platform driver suspend code if Wake-on-Lan is enabled,
else it cannot wake us up.
In order to do this, split the suspend/resume callbacks from the
init/exit callbacks, so we can condition the power-down on the lack of
need to wake-up from the LAN but do it unconditionally when unloading the
module.
Signed-off-by: Vincent Palatin <vpalatin@chromium.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Vincent Palatin [Wed, 15 Jun 2016 18:32:21 +0000 (11:32 -0700)]
net: stmmac: allow to split suspend/resume from init/exit callbacks
Let the stmmac platform drivers provide dedicated suspend and resume
callbacks rather than always re-using the init and exit callbacks.
If the driver does not provide the suspend or resume callback, we fall
back to the old behavior trying to use exit or init.
This allows a specific platform to perform only a partial power-down on
suspend if Wake-on-Lan is enabled but always perform the full shutdown
sequence if the module is unloaded.
Signed-off-by: Vincent Palatin <vpalatin@chromium.org> Signed-off-by: David S. Miller <davem@davemloft.net>
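A hedged sketch of the fallback logic described; the callback signatures
and field names follow the stmmac platform data loosely and are not a
verbatim excerpt.

#include <linux/platform_device.h>
#include <linux/stmmac.h>

/* Prefer the dedicated suspend callback, fall back to exit() to keep the
 * old behaviour for platforms that do not provide one. */
static void example_platform_suspend(struct platform_device *pdev,
				     struct plat_stmmacenet_data *plat)
{
	if (plat->suspend)
		plat->suspend(pdev, plat->bsp_priv);
	else if (plat->exit)
		plat->exit(pdev, plat->bsp_priv);
}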
Xin Long [Wed, 15 Jun 2016 17:15:06 +0000 (01:15 +0800)]
sctp: change sk state to CLOSED instead of CLOSING in sctp_sock_migrate
Commit d46e416c11c8 ("sctp: sctp should change socket state when
shutdown is received") may set sk_state CLOSING in sctp_sock_migrate,
but inet_accept doesn't allow an sk_state other than ESTABLISHED/
CLOSED for sctp. So we will change sk_state to CLOSED instead of
CLOSING, since the sk is actually already closed there.
Fixes: d46e416c11c8 ("sctp: sctp should change socket state when shutdown is received") Reported-by: Ye Xiaolong <xiaolong.ye@intel.com> Signed-off-by: Xin Long <lucien.xin@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
This set improves BPF perf fd array map release wrt to purging
entries, first two extend the API as needed. Please see individual
patches for more details.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Wed, 15 Jun 2016 20:47:14 +0000 (22:47 +0200)]
bpf, maps: flush own entries on perf map release
The behavior of perf event arrays are quite different from all
others as they are tightly coupled to perf event fds, f.e. shown
recently by commit e03e7ee34fdd ("perf/bpf: Convert perf_event_array
to use struct file") to make refcounting on perf event more robust.
A remaining issue that the current code still has is that since
additions to the perf event array take a reference on the struct
file via perf_event_get() and are only released via fput() (that
cleans up the perf event eventually via perf_event_release_kernel())
when the element is either manually removed from the map from user
space or automatically when the last reference on the perf event
map is dropped. However, this leads us to dangling struct file's
when the map gets pinned after the application owning the perf
event descriptor exits, and since the struct file reference will
in such case only be manually dropped or via pinned file removal,
it leads to the perf event living longer than necessary, consuming
needlessly resources for that time.
Relations between perf event fds and bpf perf event map fds can be
rather complex. F.e. maps can act as demuxers among different perf
event fds that can possibly be owned by different threads and based
on the index selection from the program, events get dispatched to
one of the per-cpu fd endpoints. One perf event fd (or, rather a
per-cpu set of them) can also live in multiple perf event maps at
the same time, listening for events. Also, another requirement is
that perf event fds can get closed from application side after they
have been attached to the perf event map, so that on exit perf event
map will take care of dropping their references eventually. Likewise,
when such maps are pinned, the intended behavior is that a user
application does bpf_obj_get(), puts its fds in there and on exit
when fd is released, they are dropped from the map again, so the map
acts rather as connector endpoint. This also makes perf event maps
inherently different from program arrays as described in more detail
in commit c9da161c6517 ("bpf: fix clearing on persistent program
array maps").
To tackle this, map entries are marked by the map struct file that
added the element to the map. And when the last reference to that map
struct file is released from user space, then the tracked entries
are purged from the map. This is okay, because new map struct files
instances resp. frontends to the anon inode are provided via
bpf_map_new_fd() that is called when we invoke bpf_obj_get_user()
for retrieving a pinned map, but also when an initial instance is
created via map_create(). The rest is resolved by the vfs layer
automatically for us by keeping reference count on the map's struct
file. Any concurrent updates on the map slot are fine as well; it
just means that perf_event_fd_array_release() needs to delete fewer
of its own entries.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Wed, 15 Jun 2016 20:47:13 +0000 (22:47 +0200)]
bpf, maps: extend map_fd_get_ptr arguments
This patch extends map_fd_get_ptr() callback that is used by fd array
maps, so that struct file pointer from the related map can be passed
in. It's safe to remove map_update_elem() callback for the two maps since
this is only allowed from syscall side, but not from eBPF programs for these
two map types. Like in per-cpu map case, bpf_fd_array_map_update_elem()
needs to be called directly here due to the extra argument.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Wed, 15 Jun 2016 20:47:12 +0000 (22:47 +0200)]
bpf, maps: add release callback
Add a release callback for maps that is invoked when the last
reference to its struct file is gone and the struct file about
to be released by vfs. The handler will be used by fd array maps.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
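A hedged sketch of the shape of such a callback in the map ops, as described
above (the map plus the struct file being released); this is not a verbatim
copy of bpf.h at that time.

struct bpf_map;
struct file;

struct example_bpf_map_ops {
	/* invoked when the last reference to this map's struct file is
	 * dropped, letting fd array maps purge the entries they own */
	void (*map_release)(struct bpf_map *map, struct file *map_file);
};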
David S. Miller [Thu, 16 Jun 2016 05:26:27 +0000 (22:26 -0700)]
Merge branch 'sfc-rx-vlan-filtering'
Edward Cree says:
====================
sfc: RX VLAN filtering
Adds support for VLAN-qualified receive filters on EF10 hardware.
This is needed when running as a guest if the hypervisor has enabled
vfs-vlan-restrict, in which case the firmware rejects filters not qualified
with VLAN 0.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Andrew Rybchenko [Wed, 15 Jun 2016 16:52:08 +0000 (17:52 +0100)]
sfc: Fix VLAN filtering feature if vPort has VLAN_RESTRICT flag
If vPort has VLAN_RESTRICT flag, VLAN tagged traffic will not be
delivered without corresponding Rx filters which may be proxied to and
moderated by hypervisor.
Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Martin Habets [Wed, 15 Jun 2016 16:51:07 +0000 (17:51 +0100)]
sfc: VLAN filters must only be created if the firmware supports this.
If it is not supported we simply disable the feature.
For the feature to work we need firmware filter support for
OUTER_VID + LOC_MAC and for OUTER_VID + LOC_MAC_IG.
The low-latency firmware can match on OUTER_VID + LOC_MAC but not on
OUTER_VID + LOC_MAC_IG.
For the capture packet firmware it is the other way around.
Only the full-feature variant can match on both combinations.
Incorporates a fix by Andrew Rybchenko <Andrew.Rybchenko@oktetlabs.ru>
in the net_dev->[hw_]features handling.
Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Andrew Rybchenko [Wed, 15 Jun 2016 16:49:30 +0000 (17:49 +0100)]
sfc: Fix dup unknown multicast/unicast filters after datapath reset
Filter match flags are not a unique criterion for mapping to priority,
because both unknown unicast and unknown multicast are mapped to
LOC_MAC_IG. So the local MAC is required to map a filter to a priority.
The MCDI filter flags are a unique criterion for finding the filter priority.
Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Edward Cree [Wed, 15 Jun 2016 16:49:05 +0000 (17:49 +0100)]
sfc: Refactor checks for invalid filter ID
Nearly every time we call efx_ef10_filter_remove_unsafe, we first check
for EFX_EF10_FILTER_ID_INVALID, in which case we do nothing. So move
that check into the function, simplifying all the call sites.
Also, change the return type to void, since none of the callers check it.
Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
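A hedged sketch of the refactor described; the constant and function names
come from the commit message, while the argument list is abbreviated and
assumed.

/* Fold the "invalid ID" check into the helper so callers no longer have
 * to repeat it, and drop the unused return value. */
static void efx_ef10_filter_remove_unsafe(struct efx_nic *efx,
					  unsigned int priority,
					  u32 filter_id)
{
	if (filter_id == EFX_EF10_FILTER_ID_INVALID)
		return;
	/* ... existing removal logic ... */
}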
Martin Habets [Wed, 15 Jun 2016 16:48:49 +0000 (17:48 +0100)]
sfc: Take mac_lock before calling efx_ef10_filter_table_probe
When trying to enslave an SFC interface to a bond the following BUG_ON was
hit:
kernel BUG [in ef10.c]!
CPU: 0 PID: 4383 Comm: ifenslave Tainted: G
...
Call Trace:
efx_ef10_filter_add_vlan+0x121/0x180 [sfc]
efx_ef10_filter_table_probe+0x2a2/0x4f0 [sfc]
efx_ef10_set_mac_address+0x370/0x6d0 [sfc]
efx_set_mac_address+0x7d/0x120 [sfc]
dev_set_mac_address+0x43/0xa0
bond_enslave+0x337/0xea0 [bonding]
This comes from function efx_ef10_filter_vlan_sync_rx_mode.
To solve the bug we ensure the mac_lock is taken before calling
efx_ef10_filter_add_vlan. But to avoid a priority inversion mac_lock must
be taken before filter_sem.
To satisfy these requirements we end up taking mac_lock in
efx_ef10_vport_set_mac_address, efx_ef10_set_mac_address,
efx_ef10_sriov_set_vf_vlan and efx_probe_filters.
Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Andrew Rybchenko [Wed, 15 Jun 2016 16:45:56 +0000 (17:45 +0100)]
sfc: Store unicast and multicast promisc flag with address cache
These flags are built when address cache is updated.
The information will be required when VLAN filtering is added and address
cache is used without re-sync.
Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Andrew Rybchenko [Wed, 15 Jun 2016 16:45:36 +0000 (17:45 +0100)]
sfc: Move filter IDs to per-VLAN data structure
It is a step to support VLAN filtering in HW.
Until then, there is only one struct efx_ef10_filter_vlan per struct
efx_ef10_filter_table, with no VLAN information yet.
Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Andrew Rybchenko [Wed, 15 Jun 2016 16:44:20 +0000 (17:44 +0100)]
sfc: Forget filter ID when the filter is marked old
This requires removing the setting of filter IDs to invalid from the
multicast and unicast address caching functions.
Initialize the IDs to invalid when the filter table is created.
Add paranoid checks to track consistency.
Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Andrew Rybchenko [Wed, 15 Jun 2016 16:43:00 +0000 (17:43 +0100)]
sfc: Move last mc_promisc flag to EF10 filter table state
It is used for EF10 only and logically belongs to EF10 filter table state.
It is OK that it is reset to false on filter table recreation since all
filters are removed on destruction.
Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 16 Jun 2016 05:22:17 +0000 (22:22 -0700)]
Merge tag 'rxrpc-rewrite-20160615' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs
David Howells says:
====================
rxrpc: Rework endpoint record handling
Here's the next part of the AF_RXRPC rewrite. In this set I rework
endpoint record handling. There are two types of endpoint record, local
and peer. The local endpoint record is used as an anchor for the transport
socket that AF_RXRPC uses (at the moment a UDP socket). Local endpoints
can be shared between AF_RXRPC sockets under certain restricted
circumstances.
The peer endpoint is a record of the remote end. It is (or will be) used
to keep track of MTU and RTT values and, with these changes, is used to find
the call(s) to abort when a network error occurs.
The following significant changes are made:
(1) The local endpoint event handling code is split out into its own file.
(2) The local endpoint list bottom-half-excluding spinlock is removed as
things are arranged such that sk_user_data will not change whilst the
transport socket callbacks are in progress.
(3) Local endpoints can now only be shared if they have the same transport
address (as before) and have a local service ID of 0 (ie. they're not
listening for incoming calls). This prevents callbacks from a server
to one process being picked up by another process.
(4) Local endpoint destruction is now accomplished by the same work item
as processes events, meaning that the destructor doesn't need to wait
for the event processor.
(5) Peer endpoints are now held in a hash table rather than a flat list.
(6) Peer endpoints are now destroyed by RCU rather than by work item.
(7) Peer endpoints are now differentiated by local endpoint and remote
transport port in addition to remote transport address and transport
type and family.
This means that a firewall that excludes access between a particular
local port and remote port won't cause calls to be aborted that use a
different port pair.
(8) Error report handling no longer assumes that the source is always
an IPv4 ICMP message from a UDP port; the assumption that an ICMP
message comes from an IPv4 socket has been removed. At some point
IPv6 support will be added.
(9) Peer endpoints rather than local endpoints are now the anchor point
for distributing network error reports.
(10) Both types of endpoint records are now disposed of as soon as all
references to them are gone. There is less hanging around and once
their usage counts hit zero, records can no longer be resurrected.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 16 Jun 2016 04:44:33 +0000 (21:44 -0700)]
Merge branch 'liquidio-next'
Raghu Vatsavayi says:
====================
liquidio: Updates and Bug fixes
Following are updates for liquidio bug fixes and driver
support for new firmware interface. These updates are divided
into smaller logical patches as mentioned by you. These set of
nine patches should be applied in the following order as some of
them depend on earlier patches in the list.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>