Yuval Mintz [Wed, 24 Aug 2016 10:27:19 +0000 (13:27 +0300)]
bnx2x: Don't flush multicast MACs
When ndo_set_rx_mode() is called for bnx2x, as part of the process of
configuring the new MAC address filters [both unicast & multicast],
the driver begins by flushing the existing configuration and then iterates
over the network device's list of addresses, configuring those instead.
This has the side-effect of creating a short gap where traffic wouldn't
be properly classified, as no filters are configured in HW.
While for unicasts this is rather insignificant [as unicast MACs don't
frequently change while the interface is actually running],
for multicast traffic it does pose an issue, as there are multicast-based
networks where new multicast groups are constantly being added and
removed.
This patch tries to remedy this [at least for the newer adapters] -
instead of flushing & reconfiguring all existing multicast filters,
the driver creates the approximate hash match that would
result from the required filters. It then compares it against the
currently configured approximate hash match, and only adds and removes
the delta between the two.
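As an illustration of the delta approach (a rough sketch, not the bnx2x code;
the 256-bin vector size and the enable/disable helpers are assumptions), the
reconfiguration reduces to comparing two bin vectors and touching only the
bits that differ:

#include <linux/bitops.h>
#include <linux/netdevice.h>

#define MCAST_BINS 256

/* hypothetical helpers that would program/clear one approximate-match bin */
void mcast_bin_enable(struct net_device *dev, unsigned int bin);
void mcast_bin_disable(struct net_device *dev, unsigned int bin);

static void mcast_apply_delta(struct net_device *dev,
			      const unsigned long *old_bins,
			      const unsigned long *new_bins)
{
	unsigned int bin;

	/* program bins that are newly required */
	for_each_set_bit(bin, new_bins, MCAST_BINS)
		if (!test_bit(bin, old_bins))
			mcast_bin_enable(dev, bin);

	/* clear bins that are no longer required */
	for_each_set_bit(bin, old_bins, MCAST_BINS)
		if (!test_bit(bin, new_bins))
			mcast_bin_disable(dev, bin);
}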
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
These two patches add a better client connection management strategy. They
need to be applied on top of the just-posted fixes.
(1) Duplicate the connection list and separate out procfs iteration from
garbage collection. This is necessary for the next patch as, with that
applied, client connections no longer appear on a single list and may not
appear on a list at all - and really shouldn't be exposed to the
old garbage collector.
(Note that client conns aren't left dangling, they're also in a tree
rooted in the local endpoint so that they can be found by a user
wanting to make a new client call. Service conns do not appear in
this tree.)
(2) Implement a better lifetime management and garbage collection strategy
for client connections.
In this, a client connection can be in one of five cache states
(inactive, waiting, active, culled and idle). Limits are set on the
number of client conns that may be active at any one time, and users are
made to wait if they want to start a new call when there isn't capacity
available.
To make capacity available, active and idle connections can be culled,
after a short delay (to allow for retransmission). The delay is
reduced if the capacity exceeds a tunable threshold.
If there is spare capacity, client conns are permitted to hang around
a fair bit longer (tunable) so as to allow reuse of negotiated
security contexts.
After this patch, the client conn strategy is separate from that of
service conns (which continues to use the old code for the moment).
This difference in strategy is because the client side retains control
over when it allows a connection to become active, whereas the service
side has no control over when it sees a new connection or a new call
on an old connection.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 24 Aug 2016 16:42:57 +0000 (09:42 -0700)]
Merge tag 'rxrpc-rewrite-20160824-1' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs
David Howells says:
====================
rxrpc: More fixes
Here are a couple of fix patches:
(1) Fix the conn-based retransmission patch posted yesterday. This breaks
if it actually has to retransmit. However, it seems the likelihood of
this happening is really low, despite the server I'm testing against
being located >3000 miles away, and some of the time it's handled
in the call background processor before we manage to disconnect the
call - which is why I didn't spot it.
(2) /proc/net/rxrpc_calls can cause a crash if accessed whilst a call is
being torn down. The window of opportunity is pretty small, however,
as calls don't stay in this state for long.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Ido says:
This patchset addresses two long standing issues in the mlxsw driver
concerning FDB learning.
Patch 1 limits the number of FDB records processed by the driver in a
single session. This is useful in situations in which many new records
need to be processed, which would otherwise cause the RTNL mutex to be
held for long periods of time.
Patches 2-6 offload the learning configuration (on / off) of bridge
ports to the device instead of having the driver decide whether a
record needs to be learned or not.
The last patch is fallout and removes configuration no longer necessary
after the first patches are applied.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel [Wed, 24 Aug 2016 10:00:29 +0000 (12:00 +0200)]
mlxsw: spectrum: Don't set learning when creating vPorts
Before commit 99724c18fc66 ("mlxsw: spectrum: Introduce support for
router interfaces") we used to assign vFIDs to the created vPorts. Since
these vPorts were used for slow path traffic we had to disable learning
for them, as it doesn't make sense to have it enabled.
This is no longer the case and now vPorts are either used for router
interfaces (for which learning is disabled by the firmware) or bridge
ports (for which learning is explicitly enabled by the driver).
Therefore, we can remove the learning configuration upon vPort creation.
Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel [Wed, 24 Aug 2016 10:00:27 +0000 (12:00 +0200)]
mlxsw: spectrum: Offload learning to the switch ASIC
Up until now we simply stored the learning configuration of a bridge
port in the driver and decided whether to learn a new FDB record based
on this value.
However, this is sub-optimal in cases where learning is disabled on the
bridge port, as the device repeatedly generates learning notifications
for the same record.
Instead, offload the learning configuration to the device, thereby
preventing it from generating notifications when learning is disabled.
Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel [Wed, 24 Aug 2016 10:00:26 +0000 (12:00 +0200)]
mlxsw: spectrum: Configure learning for VLAN-aware bridge port
We are going to prevent the device from generating learning
notifications for a port that was configured with learning disabled.
Since learning configuration is done per {Port, VID} we need to apply
the port's learning configuration for any VID that is added to the
bridge port's VLAN filter list.
When a VID is added to the VLAN filter list of a VLAN-aware bridge port,
configure the {Port, VID} learning status according to the port's
configuration. When the VID is removed, disable learning for the {Port,
VID}.
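A minimal sketch of that add/remove symmetry (illustrative only; the port
structure and the learning helper are stand-ins, not the mlxsw API):

#include <linux/types.h>

struct br_port {
	bool learning;			/* bridge port's learning setting */
};

/* hypothetical stand-in for the register write toggling {Port, VID} learning */
int port_vid_learning_set(struct br_port *port, u16 vid, bool learn);

static int br_port_vid_add(struct br_port *port, u16 vid)
{
	/* a new VID inherits the bridge port's learning configuration */
	return port_vid_learning_set(port, vid, port->learning);
}

static void br_port_vid_del(struct br_port *port, u16 vid)
{
	/* learning is switched off when the VID leaves the filter list */
	port_vid_learning_set(port, vid, false);
}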
Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel [Wed, 24 Aug 2016 10:00:24 +0000 (12:00 +0200)]
mlxsw: spectrum: Make VLAN deletion function symmetric
Commit 05978481e77e ("mlxsw: spectrum: Create PVID vPort before
registering netdevice") removed __mlxsw_sp_port_vlans_del() from the
init sequence of the driver, which forced it to be non-symmetric with
regards to __mlxsw_sp_port_vlans_add().
Make both functions symmetric as the constraint no longer exists.
Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Ido Schimmel [Wed, 24 Aug 2016 10:00:23 +0000 (12:00 +0200)]
mlxsw: spectrum: Limit number of FDB records per learning session
Up until now a learning session ended whenever the number of queried
records was zero. This turned out to be problematic in situations where
a large number of MACs (48K) had to be processed by the switch driver,
as the RTNL mutex is held during the learning session.
Instead, limit the number of FDB records that can be processed in a
session to 64. This means that every time the device is queried for
learning notifications (currently, every 100ms), up to 64 records will
be processed by the switch driver.
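The capping logic amounts to something like the following (a sketch using the
numbers quoted above; the record struct and the query/processing helpers are
hypothetical placeholders, not the mlxsw functions):

#include <linux/rtnetlink.h>

#define FDB_NOTIFY_MAX_RECORDS	64	/* cap per learning session */

struct my_switch;
struct fdb_record { unsigned char mac[6]; unsigned short fid; };	/* assumed shape */

/* hypothetical helpers: fetch the next queued record / apply it to the bridge */
bool fdb_query_next(struct my_switch *sw, struct fdb_record *rec);
void fdb_process(struct my_switch *sw, const struct fdb_record *rec);

static void fdb_notify_session(struct my_switch *sw)
{
	struct fdb_record rec;
	unsigned int n = 0;

	rtnl_lock();
	while (n < FDB_NOTIFY_MAX_RECORDS && fdb_query_next(sw, &rec)) {
		fdb_process(sw, &rec);
		n++;		/* stop after 64 even if more are pending */
	}
	rtnl_unlock();
	/* anything left over is handled on the next poll (~100ms later) */
}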
Signed-off-by: Ido Schimmel <idosch@mellanox.com> Signed-off-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
This series contains some low level and API updates for mlx5 core
driver interface and mlx5_ifc.h, plus mlx5 LAG core driver support,
to be shared as base code for net-next and rdma mlx5 4.9 submissions.
From Alex and Artemy, update mlx5_ifc for modify RQ and XRC bits.
From Noa, expose mlx5 link modes so they can be used in the RDMA tree for rdma tools.
From Aviv, LAG support needed for RDMA:
- Add needed hardware structures, layouts and interface
- mlx5 core driver LAG implementation
- Introduce mlx5 core driver LAG API for mlx5_ib
From Maor, add two low level patches for mlx5 hardware sniffer QP
infrastructure bits and capabilities, plus add the namespace for sniffer
steering tables. Needed for the RDMA subtree.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David Howells [Wed, 24 Aug 2016 06:30:52 +0000 (07:30 +0100)]
rxrpc: Improve management and caching of client connection objects
Improve the management and caching of client rxrpc connection objects.
From this point, client connections will be managed separately from service
connections because AF_RXRPC controls the creation and re-use of client
connections but doesn't have that luxury with service connections.
Further, there will be limits on the number of client connections that may
be live on a machine. No direct restriction will be placed on the number
of client calls, except that each client connection can support a
maximum of four concurrent calls.
Note that, for a number of reasons, we don't want to simply discard a
client connection as soon as the last call is apparently finished:
(1) Security is negotiated per-connection and the context is then shared
between all calls on that connection. The context can be negotiated
again if the connection lapses, but that involves holding up calls
whilst at least two packets are exchanged and various crypto bits are
performed - so we'd ideally like to cache it for a little while at
least.
(2) If a packet goes astray, we will need to retransmit a final ACK or
ABORT packet. To make this work, we need to keep around the
connection details for a little while.
(3) The locally held structures represent some amount of setup time, to be
weighed against their occupation of memory when idle.
To this end, the client connection cache is managed by a state machine on
each connection. There are five states:
(1) INACTIVE - The connection is not held in any list and may not have
been exposed to the world. If it has been previously exposed, it was
discarded from the idle list after expiring.
(2) WAITING - The connection is waiting for the number of client conns to
drop below the maximum capacity. Calls may be in progress upon it
from when it was active and got culled.
The connection is on the rxrpc_waiting_client_conns list which is kept
in to-be-granted order. Culled conns with waiters go to the back of
the queue just like new conns.
(3) ACTIVE - The connection has at least one call in progress upon it, it
may freely grant available channels to new calls and calls may be
waiting on it for channels to become available.
The connection is on the rxrpc_active_client_conns list which is kept
in activation order for culling purposes.
(4) CULLED - The connection got summarily culled to try and free up
capacity. Calls currently in progress on the connection are allowed
to continue, but new calls will have to wait. There can be no waiters
in this state - the conn would have to go to the WAITING state
instead.
(5) IDLE - The connection has no calls in progress upon it and must have
been exposed to the world (ie. the EXPOSED flag must be set). When it
expires, the EXPOSED flag is cleared and the connection transitions to
the INACTIVE state.
The connection is on the rxrpc_idle_client_conns list which is kept in
order of how soon they'll expire.
A connection in the ACTIVE or CULLED state must have at least one active
call upon it; if in the WAITING state it may have active calls upon it;
other states may not have active calls.
As long as a connection remains active and doesn't get culled, it may
continue to process calls - even if there are connections on the wait
queue. This simplifies things a bit and reduces the amount of checking we
need to do.
There are a couple of flags of relevance to the cache:
(1) EXPOSED - The connection ID got exposed to the world. If this flag is
set, an extra ref is added to the connection preventing it from being
reaped when it has no calls outstanding. This flag is cleared and the
ref dropped when a conn is discarded from the idle list.
(2) DONT_REUSE - The connection should be discarded as soon as possible and
should not be reused.
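Summarised as code, the cache model above looks roughly like this (the
identifiers are illustrative paraphrases of the text, not the actual rxrpc
names):

enum client_conn_cache_state {
	CONN_INACTIVE,	/* on no list; possibly never exposed to the world */
	CONN_WAITING,	/* queued until the active count drops below the cap */
	CONN_ACTIVE,	/* calls in progress; free channels may be granted */
	CONN_CULLED,	/* evicted to free capacity; existing calls continue */
	CONN_IDLE,	/* exposed, no calls; expires back to INACTIVE */
};

#define CONN_FLAG_EXPOSED	0	/* conn ID hit the wire; holds an extra ref */
#define CONN_FLAG_DONT_REUSE	1	/* discard as soon as possible */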
This commit also provides a number of new settings:
(*) /proc/net/rxrpc/max_client_conns
The maximum number of live client connections. Above this number, new
connections get added to the wait list and must wait for an active
conn to be culled. Culled connections can be reused, but they will go
to the back of the wait list and have to wait.
(*) /proc/net/rxrpc/reap_client_conns
If the number of desired connections exceeds the maximum above, the
active connection list will be culled until there are only this many
left in it.
(*) /proc/net/rxrpc/idle_conn_expiry
The normal expiry time for a client connection, provided there are
fewer than reap_client_conns of them around.
(*) /proc/net/rxrpc/idle_conn_fast_expiry
The expedited expiry time, used when there are more than
reap_client_conns of them around.
Note that I combined the Tx wait queue with the channel grant wait queue to
save space as only one of these should be in use at once.
Note also that, for the moment, the service connection cache still uses the
old connection management code.
Signed-off-by: David Howells <dhowells@redhat.com>
David Howells [Wed, 24 Aug 2016 06:30:52 +0000 (07:30 +0100)]
rxrpc: Dup the main conn list for the proc interface
The main connection list is used for two independent purposes: primarily it
is used to find connections to reap and secondarily it is used to list
connections in procfs.
Split the procfs list out from the reap list. This allows us to stop using
the reap list for client connections when they acquire a separate
management strategy from service connections.
The client connections will not be on a single management list, and sometimes
won't be on a management list at all. This doesn't leave them floating,
however, as they will also be on an rb-tree rooted on the socket so that the
socket can find them to dispatch calls.
Signed-off-by: David Howells <dhowells@redhat.com>
David Howells [Wed, 24 Aug 2016 13:31:43 +0000 (14:31 +0100)]
rxrpc: Make /proc/net/rxrpc_calls safer
Make /proc/net/rxrpc_calls safer by stashing a copy of the peer pointer in
the rxrpc_call struct and checking in the show routine that the peer
pointer, the socket pointer and the local pointer obtained from the socket
pointer aren't NULL before we use them.
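The shape of the defensive check is roughly the following (a sketch; the
struct and field names here are placeholders rather than the rxrpc ones):

#include <linux/seq_file.h>

struct my_local { int unused; };
struct my_peer { int unused; };
struct my_sock { struct my_local *local; };
struct my_call {
	struct my_sock *socket;
	struct my_peer *peer;		/* copy stashed at call setup */
};

static int call_seq_show(struct seq_file *seq, void *v)
{
	struct my_call *call = v;
	struct my_sock *sock = call->socket;
	struct my_peer *peer = call->peer;

	/* a call being torn down may have dropped any of these already */
	if (!sock || !peer || !sock->local)
		return 0;	/* skip the entry instead of oopsing */

	/* ... print the call details as before ... */
	return 0;
}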
Signed-off-by: David Howells <dhowells@redhat.com>
David Howells [Wed, 24 Aug 2016 12:06:14 +0000 (13:06 +0100)]
rxrpc: Fix conn-based retransmit
If a duplicate packet comes in for a call that has just completed on a
connection's channel then there will be an oops in the data_ready handler
because it tries to examine the connection struct via a call struct (which
we don't have - the pointer is unset).
Since the connection struct pointer is available to us, go direct instead.
Also, the ACK packet to be retransmitted needs three octets of padding
between the soft ack list and the ackinfo.
Fixes: 18bfeba50dfd0c8ee420396f2570f16a0bdbd7de ("rxrpc: Perform terminal call ACK/ABORT retransmission from conn processor") Signed-off-by: David Howells <dhowells@redhat.com>
Since IPv6 socket lookups no longer dereference the pinet6 pointer
and UDP lost its SLAB_DESTROY_BY_RCU special rules, we no longer
need special clear_sk() methods.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Wed, 24 Aug 2016 04:06:33 +0000 (21:06 -0700)]
net: diag: support SOCK_DESTROY for UDP sockets
This implements SOCK_DESTROY for UDP sockets similar to what was done
for TCP with commit c1e64e298b8ca ("net: diag: Support destroying TCP
sockets.") A process with a UDP socket targeted for destroy is awakened
and recvmsg fails with ECONNABORTED.
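From the application's point of view the effect is simply an error return
from recvmsg(); a small user-space sketch (assuming the socket has been
targeted through the inet_diag SOCK_DESTROY operation by an administrator):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

static void wait_for_datagram(int fd)
{
	char buf[2048];
	struct iovec iov = { .iov_base = buf, .iov_len = sizeof(buf) };
	struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1 };

	/* a blocked receiver is woken up when the socket is destroyed */
	if (recvmsg(fd, &msg, 0) < 0 && errno == ECONNABORTED)
		fprintf(stderr, "socket destroyed: %s\n", strerror(errno));
}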
Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Here are some improvements that are part of the AF_RXRPC rewrite. They
need to be applied on top of the just posted cleanups.
(1) Set the connection expiry on the connection becoming idle when its
last currently active call completes rather than each time put is
called.
This means that the connection isn't held open by retransmissions,
pings and duplicate packets. Future patches will limit the number of
live connections that the kernel will support, so making sure that old
connections don't overstay their welcome is necessary.
(2) Calculate packet serial skew in the UDP data_ready callback rather
than in the call processor on a work queue. Deferring it like this
causes the skew to be elevated by further packets coming in before we
get to make the calculation.
(3) Move retransmission of the terminal ACK or ABORT packet for a
connection to the connection processor, using the terminal state
cached in the rxrpc_connection struct. This means that once last_call
is set in a channel to the current call's ID, no more packets will be
routed to that rxrpc_call struct.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 24 Aug 2016 00:19:59 +0000 (17:19 -0700)]
Merge tag 'rxrpc-rewrite-20160823-1' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs
David Howells says:
====================
rxrpc: Cleanups
Here are some cleanups for the AF_RXRPC rewrite:
(1) Remove some unused bits.
(2) Call releasing on socket closure is now done in the order in which
calls progress through the phases so that we don't miss a call
actively moving between lists.
(3) The rxrpc_call struct's channel number field is redundant and replaced
with accesses to the masked off cid field instead.
(4) Use a tracepoint for socket buffer accounting rather than printks.
Unfortunately, since this would require currently non-existent
arch-specific help to divine the current instruction location, the
accounting functions are moved out of line so that
__builtin_return_address() can be used.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Add provision for configuring the fastpath queues with Tx (or Rx) only
functionality.
Signed-off-by: Sudarsana Reddy Kalluru <sudarsana.kalluru@qlogic.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Phil Sutter [Tue, 23 Aug 2016 11:14:31 +0000 (13:14 +0200)]
net: rtnetlink: Don't export empty RTAX_FEATURES
Since the features bit field has bits for internal-only use as well, it
may happen that the kernel exports the RTAX_FEATURES attribute with a zero
value, which is pointless.
Fix this by making sure the attribute is added only if the exported
value is non-zero.
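The gist of the fix, as a sketch (the internal-bits mask below is a made-up
name, and the real rtnetlink code differs in detail):

#include <linux/rtnetlink.h>
#include <net/netlink.h>

#define FEATURES_INTERNAL_MASK	0xFFFF0000u	/* hypothetical internal-only bits */

static int put_rtax_features(struct sk_buff *skb, u32 features)
{
	u32 exported = features & ~FEATURES_INTERNAL_MASK;

	if (!exported)
		return 0;	/* don't emit an empty RTAX_FEATURES attribute */

	return nla_put_u32(skb, RTAX_FEATURES, exported);
}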
Signed-off-by: Phil Sutter <phil@nwl.cc> Signed-off-by: David S. Miller <davem@davemloft.net>
cxgb4: Fix issue while re-registering VF mgmt netdev
When we disable SRIOV, we used to unregister the netdev, but it wasn't
freed. The next time the same netdev was registered, since its state
was still 'NETREG_UNREGISTERED', we would hit a BUG_ON in register_netdevice,
which expects the state to be 'NETREG_UNINITIALIZED'.
Allocate the netdevs and register them while configuring SRIOV, and free them
when SRIOV is disabled. Also add a new function to set up Ethernet
properties instead of using ether_setup. Set the carrier off by default,
since we don't have to do any transmit on the interface.
Fixes: 7829451c695e ("cxgb4: Add control net_device for configuring PCIe VF") Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Yuchung Cheng [Tue, 23 Aug 2016 00:17:54 +0000 (17:17 -0700)]
net-tcp: retire TFO_SERVER_WO_SOCKOPT2 config
TFO_SERVER_WO_SOCKOPT2 was intended for debugging purposes during
Fast Open development. Remove this config option and also
update/clean-up the documentation of the Fast Open sysctl.
Reported-by: Piotr Jurkiewicz <piotr.jerzy.jurkiewicz@gmail.com> Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
hv_netvsc: add ethtool statistics for tx packet issues
Printing console messages is not helpful when the system is out of memory,
and can be disastrous with netconsole. Instead, keep statistics
of these anomalous conditions.
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
The function get_netvsc_net_device had conditional locking. This was
unnecessary and incorrect, but harmless. It was unnecessary since the
code is only called from the netlink netdev event callback, where RTNL
is always acquired before the callbacks are run. It was incorrect
because it used trylock and then continued regardless.
Fix by replacing it with a proper assertion.
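The pattern the fix boils down to (a sketch, not the hv_netvsc function
itself):

#include <linux/netdevice.h>
#include <linux/rtnetlink.h>

static struct net_device *lookup_from_notifier(struct net_device *event_dev)
{
	/* netdev notifier callbacks always run with RTNL held, so assert it
	 * instead of attempting (and possibly failing) a trylock */
	ASSERT_RTNL();

	/* ... walk the device list safely ... */
	return event_dev;
}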
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com> Signed-off-by: David S. Miller <davem@davemloft.net>
This series contains several low level and API updates for mlx5 core
commands interface and mlx5_ifc.h to be shared as base code for net-next and
rdma mlx5 4.9 submissions.
From Saeed, ten patches that refactor old layouts of firmware commands which
were manually generated before we introduced mlx5_ifc; now all of the firmware
command inbox/outbox layouts use mlx5_ifc and the old
manually generated structures are removed. In addition to those ten patches,
we add two patches that unify the mlx5 command execution interface and
improve the driver log messages in that area.
From Hadar and Ilya, added the needed hardware bits and infrastructure for
minimum inline headers setting and encap/decap commands and capabilities,
needed for E-Switch offloads.
This series applies on top of the latest net-next and rdma/master, and merges smoothly with
the latest "Mellanox 100G mlx5 fixes 2016-08-16" series already applied to the net branch.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David Howells [Tue, 23 Aug 2016 14:27:25 +0000 (15:27 +0100)]
rxrpc: Perform terminal call ACK/ABORT retransmission from conn processor
Perform terminal call ACK/ABORT retransmission in the connection processor
rather than in the call processor. With this change, once last_call is
set, no more incoming packets will be routed to the corresponding call or
any earlier calls on that channel (call IDs must only increase on a channel
on a connection).
Further, if a packet's callNumber is before the last_call ID, or a packet is
aimed at a successfully completed service call, then that packet is discarded
and ignored.
Signed-off-by: David Howells <dhowells@redhat.com>
David Howells [Tue, 23 Aug 2016 14:27:25 +0000 (15:27 +0100)]
rxrpc: Calculate serial skew on packet reception
Calculate the serial number skew in the data_ready handler when a packet
has been received and a connection looked up. The skew is cached in the
sk_buff's priority field.
The connection highest received serial number is updated at this time also.
This can be done without locks or atomic instructions because, at this
point, the code is serialised by the socket.
This generates more accurate skew data because if the packet is offloaded
to a work queue before this is determined, more packets may come in,
bumping the highest serial number and thereby increasing the apparent skew.
This also removes some unnecessary atomic ops.
Signed-off-by: David Howells <dhowells@redhat.com>
David Howells [Tue, 23 Aug 2016 14:27:24 +0000 (15:27 +0100)]
rxrpc: Set connection expiry on idle, not put
Set the connection expiry time when a connection becomes idle rather than
doing this in rxrpc_put_connection(). This makes the put path more
efficient (it is likely to be called occasionally whilst a connection has
outstanding calls because active workqueue items need to be given a ref).
The time is also preset in the connection allocator in case the connection
never gets used.
Signed-off-by: David Howells <dhowells@redhat.com>
David Howells [Tue, 23 Aug 2016 14:27:24 +0000 (15:27 +0100)]
rxrpc: Drop channel number field from rxrpc_call struct
Drop the channel number (channel) field from the rxrpc_call struct to
reduce the size of the call struct. The field is redundant: if the call is
attached to a connection, the channel can be obtained from there by AND'ing
with RXRPC_CHANNELMASK.
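In other words, accesses become a masked read of the cid, roughly as follows
(a sketch; the struct name is illustrative, and the mask value reflects the
four-calls-per-connection limit mentioned elsewhere in this log):

#include <linux/types.h>

#define RXRPC_CHANNELMASK	3	/* four channels per connection */

struct my_call { u32 cid; };

static inline unsigned int call_channel(const struct my_call *call)
{
	/* the low bits of the connection ID encode the channel */
	return call->cid & RXRPC_CHANNELMASK;
}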
Signed-off-by: David Howells <dhowells@redhat.com>
David Howells [Tue, 23 Aug 2016 14:27:24 +0000 (15:27 +0100)]
rxrpc: When clearing a socket, clear the call sets in the right order
When clearing a socket, we should clear the securing-in-progress list
first, then the accept queue and last the main call tree because that's the
order in which a call progresses. Not that a call should move from the
accept queue to the main tree whilst we're shutting down a socket, but a
call could possibly move from secureq to acceptq whilst we're clearing up.
Signed-off-by: David Howells <dhowells@redhat.com>
David Howells [Tue, 23 Aug 2016 14:27:24 +0000 (15:27 +0100)]
rxrpc: Tidy up the rxrpc_call struct a bit
Do a little tidying of the rxrpc_call struct:
(1) in_clientflag is no longer compared against the value that's in the
packet, so keeping it in this form isn't necessary. Use a flag in
flags instead and provide a pair of wrapper functions.
(2) We don't read the epoch value, so that can go.
(3) Move what remains of the data that was used for hashing up in the
struct to sit with the channel number.
(4) Get rid of the local pointer. We can get at this via the socket
struct and we only use this in the procfs viewer.
Signed-off-by: David Howells <dhowells@redhat.com>
David S. Miller [Tue, 23 Aug 2016 07:13:11 +0000 (00:13 -0700)]
Merge branch 'cpsw-mq'
Ivan Khoronzhuk says:
====================
net: ethernet: ti: cpsw: add cpdma multi-queue support
This series is intended to allow the cpsw driver to use the cpdma ability of
the h/w shaper to send/receive data with up to 8 tx and 8 rx queues. The
series doesn't contain an interface to configure the h/w shaper itself; it
contains only the multi-queue support part and the ability to configure the
number of tx/rx queues with ethtool. It also doesn't contain mapping of
input traffic to rx queues, as that can depend on usage and requires a
separate interface for setup.
The default shaper mode is priority mode. The h/w shaper configuration will
be added with a separate patch series. This series doesn't affect net
throughput.
Tested on:
am572x-idk, 1Gbps link
am335-boneblack, 100Mbps link.
A simple example for splitting traffic on queues:
$ ethtool -l eth0
$ ethtool -L eth0 rx 8 tx 8
$ tc qdisc add dev eth0 root handle 1: multiq
$ tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
match ip dst 172.22.39.12 \
action skbedit queue_mapping 5
Based on: net-next/master
V3: https://lkml.org/lkml/2016/8/15/788
Since v3:
- changed arg to priv in fill_rx_channels in
  "net: ethernet: ti: davinci_cpdma: split descs num between all channels"
- added more comments to cpsw_set_channels
Since v2:
- added new patch to avoid warn while ctrl stop
net: ethernet: ti: cpsw: add ethtool channels support
- enable ctrl in case at least one interface is running
Since v1:
- removed cpdam_check_free_desc function
- remove pm_runtime calls as they are used in begin/complete ethtool calls now
- removed change of driver version. it can be done later
- corrected setup of channels for dual_emac mode with ethtool
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Ivan Khoronzhuk [Mon, 22 Aug 2016 18:18:28 +0000 (21:18 +0300)]
net: ethernet: ti: cpsw: add ethtool channels support
These ops allow controlling the number of channels the driver is allowed to
work with at the cpdma level. The maximum number of channels is 8 for
rx and 8 for tx. In dual_emac mode the h/w channels are shared
between the two interfaces, and changing the number on one interface changes
the number of channels on the other.
Show how many channels are supported and enabled:
$ ethtool -l ethX
Change the number of channels (up to 8):
$ ethtool -L ethX rx 6 tx 6
Per-channel statistics:
$ ethtool -S ethX
Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Reviewed-by: Mugunthan V N <mugunthanvnm@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Keep the driver internals in the C file. Currently drivers aren't required
to know whether a channel is rx or tx, except in the create function.
So correct the "channel create" function, and keep all channel struct
macros for internal use only.
Reviewed-by: Mugunthan V N <mugunthanvnm@ti.com> Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Ivan Khoronzhuk [Mon, 22 Aug 2016 18:18:26 +0000 (21:18 +0300)]
net: ethernet: ti: cpsw: add multi queue support
The cpsw h/w supports up to 8 tx and 8 rx channels. This patch adds
multi-queue support to the driver only; shaper configuration will
be added with a separate patch series. The default shaper mode is, as
before, priority mode, but with corrected priority order: 0 is the
highest priority, 7 the lowest.
The poll function handles all unprocessed channels, until all of
them are free, beginning with the highest-priority channel.
In dual_emac mode the channels are shared between the two network devices,
as with the single-queue default mode.
The statistics for every channel can be read with:
$ ethtool -S ethX
Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Reviewed-by: Mugunthan V N <mugunthanvnm@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Ivan Khoronzhuk [Mon, 22 Aug 2016 18:18:25 +0000 (21:18 +0300)]
net: ethernet: ti: davinci_cpdma: fix locking while ctrl_stop
Interrupts shouldn't be disabled while receiving skbs, but during
ctrl_stop the channels are stopped and all remaining packets are
handled with netif_receive_skb(). This can cause a WARN_ONCE in rare
cases when the controller is stopping while not all packets have been
handled by the NAPIs.
So, split the locking while stopping the controller so that interrupts
stay enabled while skbs are being handled.
Reviewed-by: Mugunthan V N <mugunthanvnm@ti.com> Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Ivan Khoronzhuk [Mon, 22 Aug 2016 18:18:24 +0000 (21:18 +0300)]
net: ethernet: ti: davinci_cpdma: split descs num between all channels
Tx channels share the same pool of descriptors, so one channel can
block another if the pool is emptied by one of them. But the shaper should
decide which channel is allowed to send packets. To avoid such
impact of one channel on another, let every channel have its
own piece of the pool.
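The split itself is conceptually just an even division of the pool
(an illustrative sketch, not the davinci_cpdma code):

/* each channel gets its own slice so one busy channel can't starve the rest */
static unsigned int chan_desc_quota(unsigned int pool_size,
				    unsigned int num_channels)
{
	return num_channels ? pool_size / num_channels : pool_size;
}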
Signed-off-by: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> Reviewed-by: Mugunthan V N <mugunthanvnm@ti.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Yuval Mintz [Tue, 23 Aug 2016 04:19:50 +0000 (07:19 +0300)]
qed: Fix address macros
The last FW submission reverted various macros to an older form,
in which they generate compilation warnings on some architectures.
Bring back the newer macros instead.
Fixes: 05fafbfb3d77 ("qed: utilize FW 8.10.10.0") Reported-by: kbuild test robot <fengguang.wu@intel.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 23 Aug 2016 04:08:09 +0000 (21:08 -0700)]
Merge branch 'dsa-fix-MV88E6131-tagging'
Andrew Lunn says:
====================
Fix MV88E6131 tagging
Marvell has two different tagging protocols for frames passed to a
switch. There is the older DSA and the newer EDSA. Somewhere along the
way, we broke support for switches which only support DSA, by trying
to configure them to use EDSA. These patches add back support for
switches which only support DSA, by allowing the drivers to
dynamically indicate the tagging protocol they support to the DSA
core. This needs to be dynamic since the mv88e6xxx has to support two
protocols.
Thanks go to Jamie Lentin for reporting the problem, helping debug it,
providing some of the fix, and testing.
====================
Tested-By: Jamie Lentin <jm@lentin.co.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
Jamie Lentin [Mon, 22 Aug 2016 14:01:04 +0000 (16:01 +0200)]
net: mv88e6xxx: Enable PORT_CONTROL_FORWARD_UNKNOWN for DSA-tagged CPU ports
Without it, a mv88e6131 switch will not forward incoming unicast
packets to the CPU port.
Signed-off-by: Jamie Lentin <jm@lentin.co.uk> Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Andrew Lunn [Mon, 22 Aug 2016 14:01:03 +0000 (16:01 +0200)]
dsa: mv88e6xxx: Delete ppu timer when removing module
The PPU method of accessing PHYs makes use of a timer. Make sure this
timer is deleted before unloading the driver.
Reported-by: Jamie Lentin <jm@lentin.co.uk> Signed-off-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Andrew Lunn [Mon, 22 Aug 2016 14:01:02 +0000 (16:01 +0200)]
net: dsa: mv88e6xxx: Fix support for DSA tagging for older switches.
Older chips only support DSA tagging on the CPU port. New devices
support both DSA and EDSA. The driver needs to tell the core the tag
protocol to use, and configure the switch for what is available.
Signed-off-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Andrew Lunn [Mon, 22 Aug 2016 14:01:01 +0000 (16:01 +0200)]
net: dsa: Allow the DSA driver to indicate the tag protocol
DSA drivers may drive different families of switches which need
different tag protocols. Rather than hard-code the tag protocol in the
driver structure, have a callback for the DSA core to call.
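Conceptually the callback is just a per-chip capability query; a rough
sketch (the enum and hook shown here are stand-ins, not the exact DSA core
interface of this era):

enum my_tag_proto {
	TAG_PROTO_DSA,		/* older Marvell header format */
	TAG_PROTO_EDSA,		/* extended header on newer chips */
};

struct my_chip { bool has_edsa; };

static enum my_tag_proto my_get_tag_protocol(struct my_chip *chip)
{
	/* chips without EDSA support fall back to plain DSA tagging */
	return chip->has_edsa ? TAG_PROTO_EDSA : TAG_PROTO_DSA;
}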
Signed-off-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net: ipconfig: Fix NULL pointer dereference on RARP/BOOTP/DHCP timeout
If no RARP, BOOTP, or DHCP response is received, ic_dev is never set,
causing a NULL pointer dereference in ic_close_devs():
Sending DHCP requests ...... timed out!
Unable to handle kernel NULL pointer dereference at virtual address 00000004
To fix this, add a check to avoid dereferencing ic_dev if it is still
NULL.
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Fixes: 2647cffb2bc6fbed ("net: ipconfig: Support using "delayed" DHCP replies") Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 23 Aug 2016 03:38:25 +0000 (20:38 -0700)]
Merge tag 'batadv-next-for-davem-20160822' of git://git.open-mesh.org/linux-merge
Simon Wunderlich says:
====================
This feature patchset includes the following changes:
- place kref_get near usage of referenced objects, separate patches
for various used objects to improve readability and maintainability,
by Sven Eckelmann (18 patches)
- Keep the batadv net device when all hard interfaces disappear, to
improve situations where tools currently use workarounds, by
Sven Eckelmann
- Add an option to disable debugfs support to minimize the footprint when
userspace uses netlink only, by Sven Eckelmann
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 23 Aug 2016 01:29:14 +0000 (18:29 -0700)]
Merge branch 'cxgb4-tx-rate-limiting'
Rahul Lakkireddy says:
====================
TX max rate limiting for Chelsio T4/T5 adapters
This series of patches implements tx max rate limiting per queue on
Chelsio T4/T5 hardware. This is achieved by first creating a tx
scheduling class with the specified max rate. The queue is then
bound to the newly created class. If a scheduling class with similar
max rate already exists, then the queue is bound to the matching class.
Patch 1 adds support for setting tx scheduling classes.
Patch 2 adds support to bind/unbind queues to/from the scheduling classes.
Patch 3 implements the set_tx_maxrate NDO.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Rahul Lakkireddy [Mon, 22 Aug 2016 10:59:08 +0000 (16:29 +0530)]
cxgb4: add support for tx max rate limiting
Implement set_tx_maxrate NDO to perform per queue tx rate limiting.
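The general shape of such an NDO implementation, following the scheme
described in the cover letter above (a sketch; the scheduling-class helpers
are hypothetical, not the cxgb4 functions):

#include <linux/err.h>
#include <linux/netdevice.h>

struct sched_class;

/* hypothetical helpers: find-or-create a class for a rate, (un)bind a queue */
struct sched_class *sched_class_get(struct net_device *dev, u32 maxrate);
int sched_queue_bind(struct net_device *dev, int queue, struct sched_class *cl);
int sched_queue_unbind(struct net_device *dev, int queue);

static int my_set_tx_maxrate(struct net_device *dev, int queue, u32 maxrate)
{
	struct sched_class *cl;

	if (!maxrate)				/* treat 0 as "no limit" */
		return sched_queue_unbind(dev, queue);

	cl = sched_class_get(dev, maxrate);	/* reuses a matching class */
	if (IS_ERR(cl))
		return PTR_ERR(cl);

	return sched_queue_bind(dev, queue, cl);
}

static const struct net_device_ops my_netdev_ops = {
	.ndo_set_tx_maxrate	= my_set_tx_maxrate,
};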
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com> Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Rahul Lakkireddy [Mon, 22 Aug 2016 10:59:07 +0000 (16:29 +0530)]
cxgb4: add support for per queue tx scheduling
Add support to bind/unbind specified tx queues to/from scheduling
classes. If a queue is already bound to a scheduling class, it is
unbound first and then bound to a new specified class.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com> Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Rahul Lakkireddy [Mon, 22 Aug 2016 10:59:06 +0000 (16:29 +0530)]
cxgb4: add support for tx traffic scheduling classes
Add support to create tx traffic scheduling classes with specified
scheduling parameters. Return an existing class if a match is found
with same scheduling parameters.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com> Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 23 Aug 2016 01:24:52 +0000 (18:24 -0700)]
Merge branch 'qed-sriov-legacy'
Yuval Mintz says:
====================
qed*: IOV patch series
Recent FW [8.10.10.0] enabled us to support sriov interaction
with legacy VF/PF. This patch series adds the necessary driver changes
to utilize this additional compatibility.
In addition, utilize the new FW ability to prevent pause floods by VFs,
and fix a bug that is [mostly] exposed by the added legacy support.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Yuval Mintz [Mon, 22 Aug 2016 10:25:12 +0000 (13:25 +0300)]
qed: Change locking scheme for VF channel
Each VF employs a lock that's supposed to serialize its usage of the
HW channel for communication with its PF, but the critical section is
ill-defined:
- VFs currently release the lock whenever the PF response arrives,
prior to actually processing the reply buffer [which was also supposed
to have been protected by the same lock].
- The lock would be released on the first response, ignoring the possibility
that the sw flow isn't over [as might be the case in the acquisition flow].
As a result, the flow would run unprotected and could cause a double
mutex release [as the additional message completion would release it
while it's actually already free].
Change the flow to have a dedicated function that is called at the end of
each flow and releases the lock.
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Yuval Mintz [Mon, 22 Aug 2016 10:25:11 +0000 (13:25 +0300)]
qed*: Add support for VFs over legacy PFs
Modern VFs can't run on old non-compatible PFs as the fastpath HSI is
slightly changed - but as the HSI is actually very close [basically,
a single bit whose meaning flipped] this can be supported with small
modifications.
The major differences would be in:
- Recognizing that a VF is running on top of a legacy PF.
- Returning some slowpath configurations that are no longer needed
on top of modern PFs, but would be required when working over
the legacy ones.
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Yuval Mintz [Mon, 22 Aug 2016 10:25:09 +0000 (13:25 +0300)]
qed: Add support for legacy VFs
The 8.10.x FW added support for forward compatibility as well as
'future' backward compatibility, but only for those VFs that were
using an HSI which was 8.10.x based or newer.
The latest firmware now supports backward compatibility for the
older VFs based on 8.7.x and 8.8.x firmware as well.
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Wei Yongjun [Sun, 21 Aug 2016 22:46:15 +0000 (22:46 +0000)]
net: phy: Add missing of_node_put() in xgmiitorgmii_probe()
The node pointer is returned by of_parse_phandle() with its
refcount incremented. Call of_node_put() on it
before exiting this function.
This is detected by Coccinelle semantic patch.
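The pattern being enforced is the usual phandle get/put pairing; a sketch
(not the xgmiitorgmii probe function verbatim, and the property name is only
an example):

#include <linux/errno.h>
#include <linux/of.h>

static int probe_phy_handle(struct device_node *np)
{
	struct device_node *phy_node;

	phy_node = of_parse_phandle(np, "phy-handle", 0);	/* takes a ref */
	if (!phy_node)
		return -ENODEV;

	/* ... look up and wire up the PHY ... */

	of_node_put(phy_node);		/* drop the ref on every exit path */
	return 0;
}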
Signed-off-by: Wei Yongjun <weiyj.lk@gmail.com> Reviewed-by: Kedareswara rao Appana <appanad@xilinx.com> Signed-off-by: David S. Miller <davem@davemloft.net>
WANG Cong [Fri, 19 Aug 2016 19:36:54 +0000 (12:36 -0700)]
net_sched: properly handle failure case of tcf_exts_init()
After commit 22dc13c837c3 ("net_sched: convert tcf_exts from list to pointer array")
we do dynamic allocation in tcf_exts_init(), therefore we need
to handle the ENOMEM case properly.
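Call sites therefore need to check the return value, roughly like this (a
classifier-side sketch; the exact call sites differ):

#include <net/pkt_cls.h>

static int my_cls_set_exts(struct tcf_exts *exts)
{
	int err;

	/* tcf_exts_init() allocates the action array and may return -ENOMEM */
	err = tcf_exts_init(exts, TCA_U32_ACT, TCA_U32_POLICE);
	if (err < 0)
		return err;

	/* ... parse actions; on later failure, clean up with tcf_exts_destroy() ... */
	return 0;
}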
Cc: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Acked-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Philippe Reynes [Fri, 19 Aug 2016 22:52:19 +0000 (00:52 +0200)]
net: ethernet: renesas: ravb: use new api ethtool_{get|set}_link_ksettings
The ethtool api {get|set}_settings is deprecated.
We move this driver to the new api {get|set}_link_ksettings.
Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Acked-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Philippe Reynes [Fri, 19 Aug 2016 22:52:18 +0000 (00:52 +0200)]
net: ethernet: renesas: ravb: use phydev from struct net_device
The private structure contains a pointer to the phydev, but struct
net_device already contains such a pointer. So we can remove the pointer
phy_dev from the private structure, and update the driver to use the
one contained in struct net_device.
Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Acked-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 21 Aug 2016 04:58:49 +0000 (21:58 -0700)]
Merge branch '10GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue
Jeff Kirsher says:
====================
10GbE Intel Wired LAN Driver Updates 2016-08-20
This series contains updates to ixgbe and ixgbevf.
Veola fixes how the backplane reports the media in ethtool, as KR, KX or
KX4 based on the backplane interface present.
Emil fixes ixgbevf since an incorrect size parameter for
ixgbevf_write_msg_read_ack() ended up only giving the PF the first 4
bytes of the MAC address, so correct the size by calculating it on the
fly for all instances where we call ixgbevf_write_msg_read_ack(). Added
geneve receive offload support for x550em_a.
Don fixes the LED interface for x557 since it uses a different interface.
Added support for the new x557 copper device.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Don Skidmore [Thu, 18 Aug 2016 00:34:40 +0000 (20:34 -0400)]
ixgbe: Add support for new X557 device
This patch adds support for the new copper device X557.
Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Don Skidmore [Wed, 17 Aug 2016 21:34:07 +0000 (17:34 -0400)]
ixgbe: add device to MDIO speed setting
This shouldn't matter as nothing should be attached; still, to be
consistent, control the MDIO speed for these devices as well.
Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Don Skidmore [Wed, 17 Aug 2016 18:11:57 +0000 (14:11 -0400)]
ixgbe: Fix led interface for X557 devices
The X557 devices use a different interface to the LED for the port.
This patch reflects that change.
Signed-off-by: Don Skidmore <donald.c.skidmore@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Emil Tantilov [Wed, 10 Aug 2016 18:19:23 +0000 (11:19 -0700)]
ixgbe: add support for geneve Rx offload
Add geneve Rx offload support for x550em_a.
The implementation follows the vxlan code, with the lower 16 bits of
the VXLANCTRL register holding the UDP port for VXLAN and the upper
16 bits for Geneve.
Disabled NFS filters in the RFCTL register, which allows us to simplify
the check for VXLAN and Geneve packets in ixgbe_rx_checksum().
Removed vxlan from the names of the callback functions and replaced it
with udp_tunnel, which is more in line with the new API.
Signed-off-by: Emil Tantilov <emil.s.tantilov@intel.com> Tested-by: Andrew Bowers <andrewx.bowers@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>