David Howells [Mon, 4 Apr 2016 13:00:40 +0000 (14:00 +0100)]
rxrpc: Split client connection code out into its own file
Split the client-specific connection code out into its own file. It will
behave somewhat differently from the service-specific connection code, so
it makes sense to separate them.
Signed-off-by: David Howells <dhowells@redhat.com>
David Howells [Mon, 27 Jun 2016 13:39:44 +0000 (14:39 +0100)]
rxrpc: Call channels should have separate call number spaces
Each channel on a connection has a separate, independent number space from
which to allocate callNumber values. It is entirely possible, for example,
to have a connection with four active calls, each with call number 1.
Note that the callNumber values for any particular channel don't have to
start at 1, but they are supposed to increment monotonically for that
channel from a client's perspective and may not be reused once the call
number is transmitted (until the epoch cycles all the way back round).
Currently, however, call numbers are allocated on a per-connection basis
and, further, are held in an rb-tree. The rb-tree is redundant as the four
channel pointers in the rxrpc_connection struct are entirely capable of
pointing to all the calls currently in progress on a connection.
To this end, make the following changes:
(1) Handle call number allocation independently per channel.
(2) Get rid of the conn->calls rb-tree. This is overkill as a connection
may have a maximum of four calls in progress at any one time. Use the
pointers in the channels[] array instead, indexed by the channel
number from the packet.
(3) For each channel, save the result of the last call that was in
progress on that channel in conn->channels[] so that the final ACK or
ABORT packet can be replayed if necessary. Any call earlier than that
is just ignored. If we've seen the next call number in a packet, the
last one is most definitely defunct.
(4) When generating a RESPONSE packet for a connection, the call number
counter for each channel must be included in it.
(5) When parsing a RESPONSE packet for a connection, the call number
counters contained therein should be used to set the minimum expected
call numbers on each channel.
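As a rough illustration of the per-channel tracking described in the changes above (struct and field names here are assumptions, not the actual rxrpc layout), the shape is roughly:

    #include <linux/types.h>

    /* Hedged sketch: each channel carries its own call slot and call
     * number counter, replacing the per-connection rb-tree.
     */
    struct rxrpc_channel {
    	struct rxrpc_call	*call;		/* active call, if any */
    	u32			call_id;	/* ID of the current/last call */
    	u32			call_counter;	/* last call number allocated */
    	u32			last_result;	/* final ACK/ABORT to replay */
    };

    struct rxrpc_connection {
    	/* ... */
    	struct rxrpc_channel	channels[4];	/* indexed by channel number */
    };

    /* Allocate the next call number on a channel (monotonically increasing). */
    static u32 rxrpc_alloc_call_number(struct rxrpc_channel *chan)
    {
    	return ++chan->call_counter;
    }

    /* Route an incoming packet to a call via the channel bits of the CID. */
    static struct rxrpc_call *rxrpc_find_call(struct rxrpc_connection *conn,
    					      unsigned int cid)
    {
    	return conn->channels[cid & 0x3].call;	/* assumes a 2-bit channel field */
    }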
To do in future commits:
(1) Replay terminal packets based on the last call stored in
conn->channels[].
(2) Connections should be retired before the callNumber space on any
channel runs out.
(3) A server is expected to disregard or reject any new incoming call that
has a call number less than the current call number counter. The call
number counter for that channel must be advanced to the new call
number.
Note that the server cannot just require that the next call that it
sees on a channel be exactly the call number counter + 1 because then
there's a scenario that could cause a problem: The client transmits a
packet to initiate a connection, the network goes out, the server
sends an ACK (which gets lost), the client sends an ABORT (which also
gets lost); the network then reconnects, the client then reuses the
call number for the next call (it doesn't know the server already saw
the call number), but the server thinks it already has the first
packet of this call (it doesn't know that the client doesn't know that
it saw the call number the first time).
Signed-off-by: David Howells <dhowells@redhat.com>
David Howells [Mon, 27 Jun 2016 16:11:19 +0000 (17:11 +0100)]
rxrpc: Add RCU destruction for connections and calls
Add RCU destruction for connections and calls as the RCU lookup from the
transport socket data_ready handler is going to come along shortly.
Whilst we're at it, move the cleanup workqueue flushing and RCU barrierage
into the destruction code for the objects that need it (locals and
connections) and add the extra RCU barrier required for connection cleanup.
Signed-off-by: David Howells <dhowells@redhat.com>
David Howells [Mon, 4 Apr 2016 13:00:38 +0000 (14:00 +0100)]
rxrpc: Release a call's connection ref on call disconnection
When a call is disconnected, clear the call's pointer to the connection and
release the associated ref on that connection. This means that the call no
longer pins the connection and the connection can be discarded even before
the call is.
As the code currently stands, the call struct is effectively pinned by
userspace until userspace has enacted a recvmsg() to retrieve the final
call state as sk_buffs on the receive queue pin the call to which they're
related because:
(1) The rxrpc_call struct contains the userspace ID that recvmsg() has to
include in the control message buffer to indicate which call is being
referred to. This ID must remain valid until the terminal packet is
completely read and must be invalidated immediately at that point as
userspace is entitled to immediately reuse it.
(2) The final ACK to the reply to a client call isn't sent until the last
data packet is entirely read (it's probably worth altering this in
future to send the ACK as soon as all the data has been received).
This change requires a bit of rearrangement to make sure that the call
isn't going to try and access the connection again after protocol
completion:
(1) Delete the error link earlier when we're releasing the call. Possibly
network errors should be distributed via connections at the cost of
adding in an access to the rxrpc_connection struct.
(2) Remove the call from the connection's call tree before disconnecting
the call. The call tree needs to be removed anyway and incoming
packets delivered by channel pointer instead.
(3) The release call event should be considered last after all other
events have been processed so that we don't need access to the
connection again.
(4) Move the channel_lock taking from rxrpc_release_call() to
rxrpc_disconnect_call() where it will be required in future.
Signed-off-by: David Howells <dhowells@redhat.com>
David Howells [Mon, 4 Apr 2016 13:00:39 +0000 (14:00 +0100)]
rxrpc: Fix handling of connection failure in client call creation
If rxrpc_connect_call() fails during the creation of a client connection,
there are two bugs that we can hit that need fixing:
(1) The call state should be moved to RXRPC_CALL_DEAD before the call
cleanup phase is invoked. If not, this can cause an assertion failure
later.
(2) call->link should be reinitialised after being deleted in
rxrpc_new_client_call() - which otherwise leads to a failure later
when the call cleanup attempts to delete the link again.
Signed-off-by: David Howells <dhowells@redhat.com>
David Howells [Mon, 27 Jun 2016 09:32:03 +0000 (10:32 +0100)]
rxrpc: Move usage count getting into rxrpc_queue_conn()
Rather than calling rxrpc_get_connection() manually before calling
rxrpc_queue_conn(), do it inside the queue wrapper.
This allows us to do some important fixes:
(1) If the usage count is 0, do nothing. This prevents connections from
being reanimated once they're dead.
(2) If rxrpc_queue_work() fails because the work item is already queued,
retract the usage count increment which would otherwise be lost.
(3) Don't take a ref on the connection in the work function. By passing
the ref through the work item, this is unnecessary. Doing it in the
work function is too late anyway. Previously, connection-directed
packets held a ref on the connection, but that's not really the best
idea.
And another useful change:
(*) We don't need to take a refcount on the connection in the data_ready
handler unless we invoke the connection's work item. We're using RCU
there so that's otherwise redundant.
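A minimal sketch of the pattern described above (the function and field names come from the text, but the body is an assumption, not the actual implementation):

    /* Sketch only: take the ref inside the queue wrapper and give it back
     * if the work item was already queued.  atomic_inc_not_zero() refuses
     * to resurrect a connection whose usage count has already hit zero.
     */
    void rxrpc_queue_conn(struct rxrpc_connection *conn)
    {
    	/* (1) If the usage count is 0, the connection is dead: do nothing. */
    	if (atomic_inc_not_zero(&conn->usage) == 0)
    		return;

    	/* (2) Pass our ref to the work item; if it was already queued,
    	 * retract the increment so it isn't lost.
    	 */
    	if (!rxrpc_queue_work(&conn->processor))
    		rxrpc_put_connection(conn);
    }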
Signed-off-by: David Howells <dhowells@redhat.com>
David Howells [Mon, 27 Jun 2016 09:32:02 +0000 (10:32 +0100)]
rxrpc: Check that the client conns cache is empty before module removal
Check that the client conns cache is empty before module removal and bug if
not, listing any offending connections that are still present. Unfortunately,
if there are connections still around, then the transport socket is still
unexpectedly open and active, so we can't just deallocate the connections.
Signed-off-by: David Howells <dhowells@redhat.com>
David Howells [Mon, 27 Jun 2016 09:32:02 +0000 (10:32 +0100)]
rxrpc: Provide queuing helper functions
Provide queueing helper functions so that the queueing of local and
connection objects can be fixed later.
The issue is that a ref on the object needs to be passed to the work queue,
but the act of queueing the object may fail because the object is already
queued. Testing beforehand whether an object is already queued doesn't work
because there can be a race with someone else trying to queue it. What
will have to be done is to adjust the refcount depending on the result of
the queue operation.
Signed-off-by: David Howells <dhowells@redhat.com>
Herbert Xu [Sun, 26 Jun 2016 21:55:24 +0000 (14:55 -0700)]
rxrpc: Avoid using stack memory in SG lists in rxkad
rxkad uses stack memory in SG lists which would not work if stacks were
allocated from vmalloc memory. In fact, in most cases this isn't even
necessary as the stack memory ends up getting copied over to kmalloc
memory.
This patch eliminates all the unnecessary stack memory uses by supplying
the final destination directly to the crypto API. In two instances where a
temporary buffer is actually needed, we switch to using a scratch area in
the rxrpc_call struct (only one DATA packet will be secured or
verified at a time).
Finally, there is no need to split a split-page buffer into two SG entries,
so code dealing with that has been removed.
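A rough illustration of the change in approach (the scratch-buffer field name is an assumption, for illustration only):

    #include <linux/scatterlist.h>
    #include <linux/string.h>

    /* Before (problematic with vmalloc'd stacks): the SG entry points at a
     * local variable:
     *
     *	struct rxkad_level1_hdr hdr;
     *	struct scatterlist sg;
     *	sg_init_one(&sg, &hdr, sizeof(hdr));
     *
     * After (sketch): build the header in a per-call scratch area so the SG
     * entry never references stack memory; call->crypto_buf is an assumed name.
     */
    static void rxkad_prep_sg(struct rxrpc_call *call, struct scatterlist *sg,
    			      const void *hdr, size_t len)
    {
    	memcpy(call->crypto_buf, hdr, len);
    	sg_init_one(sg, call->crypto_buf, len);
    }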
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: David Howells <dhowells@redhat.com>
David Howells [Thu, 30 Jun 2016 10:34:30 +0000 (11:34 +0100)]
rxrpc: Check the source of a packet to a client conn
When looking up a client connection to which to route a packet, we need to
check that the packet came from the correct source so that a peer can't try
to muck around with another peer's connection.
Signed-off-by: David Howells <dhowells@redhat.com>
David Howells [Fri, 1 Jul 2016 07:35:02 +0000 (08:35 +0100)]
rxrpc: Fix processing of authenticated/encrypted jumbo packets
When a jumbo packet is being split up and processed, the crypto checksum
for each split-out packet is in the jumbo header and needs placing in the
reconstructed packet header.
When the code was changed to keep the stored copy of the packet header in
host byte order, this reconstruction was missed.
Found with sparse with CF=-D__CHECK_ENDIAN__:
../net/rxrpc/input.c:479:33: warning: incorrect type in assignment (different base types)
../net/rxrpc/input.c:479:33: expected unsigned short [unsigned] [usertype] _rsvd
../net/rxrpc/input.c:479:33: got restricted __be16 [addressable] [usertype] _rsvd
Fixes: 0d12f8a4027d021c9cc942f09f38d28288020c5d ("rxrpc: Keep the skb private record of the Rx header in host byte order") Signed-off-by: David Howells <dhowells@redhat.com>
Russell King [Thu, 23 Jun 2016 13:50:25 +0000 (14:50 +0100)]
phy: improve safety of fixed-phy MII register reading
There is no prevention of a concurrent call to both fixed_mdio_read()
and fixed_phy_update_state(), which can result in the state being
modified while it's being inspected. Fix this by using a seqcount
to detect modifications, and memcpy()ing the state.
We remain slightly naughty here, calling link_update() and updating
the link status within the read-side loop - which would need rework
of the design to change.
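A hedged sketch of the read side described above (the structure and field names are assumptions):

    #include <linux/seqlock.h>
    #include <linux/string.h>

    /* Sketch: detect concurrent state updates with a seqcount and work on a
     * stable copy of the state.  fp->seqcount and fp->status are assumed names.
     */
    static void fixed_phy_read_state(struct fixed_phy *fp,
    				     struct fixed_phy_status *state)
    {
    	unsigned int seq;

    	do {
    		seq = read_seqcount_begin(&fp->seqcount);
    		/* link_update() and link polling happen here in the driver */
    		memcpy(state, &fp->status, sizeof(*state));
    	} while (read_seqcount_retry(&fp->seqcount, seq));
    }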
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
Russell King [Thu, 23 Jun 2016 13:50:20 +0000 (14:50 +0100)]
phy: generate swphy registers on the fly
Generate software phy registers as and when requested, rather than
duplicating the state in fixed_phy. This allows us to eliminate
the duplicate storage of the same data, which is only different
in format.
As fixed_phy_update_regs() no longer updates register state, rename
it to fixed_phy_update().
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
Russell King [Thu, 23 Jun 2016 13:50:15 +0000 (14:50 +0100)]
phy: separate swphy state validation from register generation
Separate out the generation of MII registers from the state validation.
This allows us to simplify the error handling in fixed_phy() by allowing
earlier error detection.
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
Russell King [Thu, 23 Jun 2016 13:50:10 +0000 (14:50 +0100)]
phy: convert swphy register generation to tabular form
Convert the swphy register generation to tabular form which allows us
to eliminate multiple switch() statements. This results in smaller
object code that is more efficient and makes it easier to add support
for faster speeds.
   text    data     bss     dec     hex filename
    324       0       0     324     144 swphy.o
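An illustrative fragment of what "tabular form" means here (the struct and entries are assumptions, not the actual swphy table):

    #include <linux/mii.h>

    /* Sketch: one table entry per speed/duplex combination replaces nested
     * switch() statements when generating the BMCR/LPA register values.
     */
    struct swphy_regs {
    	u16 bmcr;
    	u16 lpa;
    };

    static const struct swphy_regs swphy_regs_table[] = {
    	{ .bmcr = BMCR_ANENABLE,                 .lpa = LPA_10HALF },
    	{ .bmcr = BMCR_ANENABLE | BMCR_FULLDPLX, .lpa = LPA_10FULL },
    	{ .bmcr = BMCR_ANENABLE | BMCR_SPEED100, .lpa = LPA_100HALF },
    	{ .bmcr = BMCR_ANENABLE | BMCR_SPEED100 | BMCR_FULLDPLX,
    						 .lpa = LPA_100FULL },
    };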
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
Russell King [Thu, 23 Jun 2016 13:50:05 +0000 (14:50 +0100)]
phy: move fixed_phy MII register generation to a library
Move the fixed_phy MII register generation to a library to allow other
software phy implementations to use this code.
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net>
this is a pull request of 4 patches for net-next/master.
Arnd Bergmann's patch fixes a regression in af_can introduced in
linux-can-next-for-4.8-20160617. There are two patches by Ramesh
Shanmugasundaram, which add CAN-2.0 support to the rcar_canfd driver.
And a patch by Ed Spiridonov that adds better error diagnostic messages
to the mcp251x driver.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Replace calls to kmalloc followed by a memcpy with a direct call to
kmemdup.
The Coccinelle semantic patch used to make this change is as follows:
@@
expression from,to,size,flag;
statement S;
@@
- to = \(kmalloc\|kzalloc\)(size,flag);
+ to = kmemdup(from,size,flag);
if (to==NULL || ...) S
- memcpy(to, from, size);
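In C terms, the transformation performed by the semantic patch looks like this (illustrative helper only):

    #include <linux/slab.h>
    #include <linux/string.h>

    /* Before: to = kmalloc(size, GFP_KERNEL); if (!to) ...; memcpy(to, from, size); */
    /* After: a single kmemdup() call allocates and copies in one step.        */
    static void *dup_buffer(const void *from, size_t size)
    {
    	return kmemdup(from, size, GFP_KERNEL);
    }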
Signed-off-by: Amitoj Kaur Chawla <amitoj1606@gmail.com> Acked-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
This series includes multiple features extensions for mlx5 Ethernet netdevice driver.
Namely, TX Rate limiting, RX interrupt moderation, ethtool settings.
TX Rate limiting:
- ConnectX-4 rate limiting infrastructure
- Set max rate NDO support
RX interrupt moderation:
- CQE based coalescing option (controlled via priv flags)
- Adaptive RX coalescing
ethtool settings:
- priv flags callbacks
- Support new ksettings API
- Add 50G missing link mode
- Support auto negotiation on/off
Applied on top: 0e9390ebf1fe ("Merge branch 'mlxsw-next'")
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Gal Pressman [Thu, 23 Jun 2016 14:02:46 +0000 (17:02 +0300)]
net/mlx5e: Report correct auto negotiation and allow toggling
Previous to this patch auto negotiation was reported off although it was
on by default in hardware. This patch reports the correct information to
ethtool and allows the user to toggle it on/off.
Another parameter was added to the set port proto function in order to pass
the auto negotiation field to the hardware.
Signed-off-by: Gal Pressman <galp@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Gal Pressman [Thu, 23 Jun 2016 14:02:45 +0000 (17:02 +0300)]
net/mlx5e: Use new ethtool get/set link ksettings API
Use new get/set link ksettings and remove get/set settings legacy
callbacks.
This allows us to use bitmasks longer than 32 bit for supported and
advertised link modes and use modes that were previously not supported.
Signed-off-by: Gal Pressman <galp@mellanox.com> CC: Ben Hutchings <bwh@kernel.org> CC: David Decotigny <decot@googlers.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Gal Pressman [Thu, 23 Jun 2016 14:02:44 +0000 (17:02 +0300)]
net/mlx5e: Add missing 50G baseSR2 link mode
Add MLX5E_50GBASE_SR2 as ETHTOOL_LINK_MODE_50000baseSR2_Full_BIT.
Signed-off-by: Gal Pressman <galp@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Cc: Ben Hutchings <bwh@kernel.org> Cc: David Decotigny <decot@googlers.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Gal Pressman [Thu, 23 Jun 2016 14:02:43 +0000 (17:02 +0300)]
ethtool: Add 50G baseSR2 link mode
Add ETHTOOL_LINK_MODE_50000baseSR2_Full_BIT bit.
Signed-off-by: Gal Pressman <galp@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Cc: Ben Hutchings <bwh@kernel.org> Cc: David Decotigny <decot@googlers.com> Acked-By: David Decotigny <decot@googlers.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Gal Pressman [Thu, 23 Jun 2016 14:02:42 +0000 (17:02 +0300)]
net/mlx5e: Toggle link only after modifying port parameters
Add a dedicated function to toggle port link. It should be called only
after setting a port register.
The toggle will set the port link to down and bring it back up if its
admin status was up.
Signed-off-by: Gal Pressman <galp@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Tariq Toukan [Thu, 23 Jun 2016 14:02:40 +0000 (17:02 +0300)]
net/mlx5e: CQE based moderation
In this mode the moderation timer will restart upon
new completion (CQE) generation rather than upon interrupt
generation.
The outcome is that for bursty traffic the period timer will never
expire and thus only the moderation frame counter will dictate
interrupt generation; the interrupt rate will therefore be relative
to the incoming packet size.
If the burst ceases for the "moderation period" time then an interrupt
will be issued immediately.
CQE based moderation is off by default and can be controlled
via ethtool set_priv_flags.
Performance tested on ConnectX4-Lx 50G.
Less packet loss in netperf UDP and TCP tests, with no bw degradation,
for both single and multiple streams, with message sizes of
64, 1024, 1472 and 32768 bytes.
Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Achiad Shochat <achiad@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: Gal Pressman <galp@mellanox.com> Signed-off-by: Gil Rockah <gilr@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Gal Pressman [Thu, 23 Jun 2016 14:02:39 +0000 (17:02 +0300)]
net/mlx5e: Introduce net device priv flags infrastructure
Introduce an infrastructure for getting/setting private net device
flags.
Currently a 'nop' priv flag is added; following patches will replace
this flag with actual feature-specific flags.
Signed-off-by: Gal Pressman <galp@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Yevgeny Petrilin [Thu, 23 Jun 2016 14:02:38 +0000 (17:02 +0300)]
net/mlx5e: Add TXQ set max rate support
Implement set_maxrate ndo.
Use the rate index from the hardware table to attach to channel SQ/TXQ.
In case of failure to configure a new rate, the queue remains at an
unlimited rate.
We save the configuration in the priv structure and apply it each time
the Send Queues are reinitialized (after open/close operations).
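The ndo hook being implemented has this shape (the body is a hedged sketch under assumed helper/field names, not the mlx5e code):

    /* .ndo_set_tx_maxrate: rate is in Mbit/s, per TX queue.  The helper
     * mlx5e_rl_lookup_rate_index() and the priv->tx_maxrate[] array are
     * assumed names used only for illustration.
     */
    static int mlx5e_set_tx_maxrate(struct net_device *dev, int queue_index,
    				    u32 maxrate)
    {
    	struct mlx5e_priv *priv = netdev_priv(dev);
    	int err;

    	err = mlx5e_rl_lookup_rate_index(priv, maxrate, queue_index);
    	if (err)
    		return err;	/* on failure the queue keeps its previous (unlimited) rate */

    	/* Remember the setting so it survives close/open of the send queues. */
    	priv->tx_maxrate[queue_index] = maxrate;
    	return 0;
    }

    /* Wired up in net_device_ops:
     *	.ndo_set_tx_maxrate = mlx5e_set_tx_maxrate,
     */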
Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Yevgeny Petrilin [Thu, 23 Jun 2016 14:02:37 +0000 (17:02 +0300)]
net/mlx5: Rate limit tables support
Configuring and managing HW rate limit tables.
The HW holds a table of rate limits, each rate is
associated with an index in that table.
Later a Send Queue uses this index to set the rate limit.
Multiple Send Queues can have the same rate limit, which is
represented by a single entry in this table.
Even though a rate can be shared, each queue is being rate
limited independently of others.
The SW shadow of this table holds the rate itself,
the index in the HW table and the refcount (number of queues)
working with this rate.
The exported functions are mlx5_rl_add_rate and mlx5_rl_remove_rate.
The number of different rates and their values are derived
from HW capabilities.
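A simplified sketch of the SW shadow table described above (struct and field names are assumptions; only mlx5_rl_add_rate and mlx5_rl_remove_rate are named in the text):

    /* Sketch of a shadow entry: the rate, its HW table index and a refcount
     * of the queues currently sharing that rate.
     */
    struct mlx5_rl_entry {
    	u32 rate;	/* in Mbit/s */
    	u16 index;	/* index in the HW rate limit table */
    	u32 refcount;	/* number of queues using this rate */
    };

    /* Add a rate: reuse an existing entry if the rate is already programmed,
     * otherwise claim a free slot (HW programming omitted here).
     */
    static int rl_table_add_rate(struct mlx5_rl_entry *table, int size,
    			         u32 rate, u16 *index)
    {
    	int i, free = -1;

    	for (i = 0; i < size; i++) {
    		if (table[i].refcount && table[i].rate == rate) {
    			table[i].refcount++;
    			*index = table[i].index;
    			return 0;
    		}
    		if (!table[i].refcount && free < 0)
    			free = i;
    	}
    	if (free < 0)
    		return -ENOSPC;

    	table[free].rate = rate;
    	table[free].refcount = 1;
    	*index = table[free].index;
    	return 0;
    }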
Signed-off-by: Yevgeny Petrilin <yevgenyp@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sathya Perla [Wed, 22 Jun 2016 12:54:57 +0000 (08:54 -0400)]
be2net: update be2net maintainers list
This patch removes Padmanabh's name from the maintainers list as he's no
longer with the company. It also adds the driver name to the headline to
make it easy to look up the maintainers list by the driver name.
Signed-off-by: Sathya Perla <sathya.perla@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Somnath Kotur [Wed, 22 Jun 2016 12:54:56 +0000 (08:54 -0400)]
be2net: Change copyright markings in source files
This patch updates year and company name in the copyright markings in the
be2net source files.
Signed-off-by: Somnath Kotur <somnath.kotur@emulex.com> Signed-off-by: Sathya Perla <sathya.perla@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Somnath Kotur [Wed, 22 Jun 2016 12:54:55 +0000 (08:54 -0400)]
be2net: Fix broadcast echoes from EVB in BE3
On SR-IOV profiles, when the user connects a Linux Bridge or OVS to a BE3
vport, they suffer the "broadcast/multicast echo" problem. BE3 EVB echoes
broadcast and multicast packets back to PF's vport confusing the
Linux bridge. BE3 relies on the src-mac addr being programmed on the
interface to avoid sending back an echo of a broadcast or multicast packet
on a vPort. When a Linux bridge is connected to a BE3, the mac-addr of the
VM behind the bridge doesn't get configured on the vPort and so echo
cancellation doesn't work.
This patch works around this problem by disabling the EVB initially
and re-enabling it *only* when SR-IOV is enabled by the user. For the
driver fix to work, the BE3 FW version must be >= 11.1.84.0.
Signed-off-by: Somnath Kotur <somnath.kotur@emulex.com> Signed-off-by: Sathya Perla <sathya.perla@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sathya Perla [Wed, 22 Jun 2016 12:54:54 +0000 (08:54 -0400)]
be2net: support asymmetric rx/tx queue counts
be2net so far supported creation of RX/TX queues only in pairs.
On configs where rx and tx queue counts are different, creation of only
the lesser number of queues has been supported.
This patch now allows a combination of RX/TX-only channels along with
combined channels. N TX-queues and M RX-queues can be created with the
following cmds:
ethtool -L ethX combined N rx M-N (when N < M)
ethtool -L ethX combined M tx N-M (when M < N)
Setting both RX-only and TX-only channels is still not supported.
It is mandatory to create at least one combined channel.
Signed-off-by: Sathya Perla <sathya.perla@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sathya Perla [Wed, 22 Jun 2016 12:54:53 +0000 (08:54 -0400)]
be2net: fix definition of be_max_eqs()
The EQs available on a function are shared between NIC and RoCE.
The be_max_eqs() macro was so far being used to refer to the max number of
EQs available for NIC. This has caused some confusion in the code. To fix
this confusion, this patch introduces a new macro called be_max_nic_eqs()
to refer to the max number of EQs available for NIC only and renames
be_max_eqs() to be_max_func_eqs().
Signed-off-by: Sathya Perla <sathya.perla@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 27 Jun 2016 08:02:01 +0000 (04:02 -0400)]
Merge branch 'fec-new-type-device'
Andy Duan says:
====================
net: fec: add new type device
Different i.MX SoC FEC variants support different features, such as:
- i.MX6Q/DL FEC does not support AVB or interrupt coalescing
- i.MX6SX/i.MX7D supports AVB and interrupt coalescing
- i.MX6UL/ULL does not support AVB, but supports interrupt coalescing
Then, add a new quirk flag to indicate the supported features, and add a new
device type for i.MX6UL.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Fugang Duan [Wed, 22 Jun 2016 10:52:35 +0000 (18:52 +0800)]
net: fec: add interrupt coalesc quirk flag
Different i.MX SoC FEC variants support different features, such as:
- i.MX6Q/DL FEC does not support AVB or interrupt coalescing
- i.MX6SX/i.MX7D supports AVB and interrupt coalescing
- i.MX6UL/ULL does not support AVB, but supports interrupt coalescing
So, add a new quirk flag to indicate the supported features.
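A minimal sketch of what such a quirk flag looks like (the flag name and bit are assumptions for illustration):

    /* Assumed flag name/bit; the real values live in the driver header. */
    #define FEC_QUIRK_HAS_COALESCE	(1 << 13)

    static void fec_enet_itr_coal_init(struct fec_enet_private *fep)
    {
    	/* Only program the interrupt coalescing registers on SoCs that
    	 * advertise support via the quirk flag.
    	 */
    	if (!(fep->quirks & FEC_QUIRK_HAS_COALESCE))
    		return;

    	/* ... write RX/TX coalescing thresholds and timers here ... */
    }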
Signed-off-by: Fugang Duan <fugang.duan@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 26 Jun 2016 20:01:54 +0000 (16:01 -0400)]
Merge tag 'rxrpc-rewrite-20160622-2' of git://git.kernel.org/pub/scm/linux/kernel/git/dhowells/linux-fs
David Howells says:
====================
rxrpc: Get rid of conn bundle and transport structs
Here's the next part of the AF_RXRPC rewrite. The primary purpose of this
set is to get rid of the rxrpc_conn_bundle and rxrpc_transport structs.
This simplifies things for future development of the connection handling.
To this end, the following significant changes are made:
(1) The rxrpc_connection struct is given pointers to the local and peer
endpoints, inside the rxrpc_conn_parameters struct. Pointers to the
transport's copy of these pointers are then redirected to the
connection struct.
(2) Exclusive connection handling is fixed. Exclusive connections should
do just one call and then be retired. They are used in security
negotiations and, I believe, the idea is to avoid reuse of negotiated
security contexts.
The current code is doing a single connection per socket and doing all
the calls over that. With this change it gets a new connection for
each call made.
(3) A new sendmsg() control message marker is added to make individual
calls operate over exclusive connections. This should be used in
future in preference to the sockopt that marks a socket as "exclusive
connection".
(4) IDs for client connections initiated by a machine are now allocated
from a global pool using the IDR facility and are unique across all
client connections, no matter their destination. The IDR facility is
then used to look up a connection on the connection ID alone. Other
parameters are then verified afterwards.
Note that the IDR facility may use a lot of memory if the IDs it holds
are widely scattered. Given this, in a future commit, client
connections will be retired if they are more than a certain distance
from the last ID allocated.
The client epoch is advanced by 1 each time the client ID counter
wraps. Connections outside the current epoch will also be retired in
a future commit.
(5) The connection bundle concept is removed and the client connection
tree is moved into the local endpoint. The queue for waiting for a
call channel is moved to the rxrpc_connection struct as there can only
be one connection for any particular key going to any particular peer
now.
(6) The rxrpc_transport struct is removed and the service connection tree
is moved into the peer struct.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Xing Zheng [Tue, 21 Jun 2016 12:33:28 +0000 (20:33 +0800)]
net: stmmac: dwmac-rk: add rk3228-specific data
Add constants and callback functions for the dwmac on rk3228/rk3229 SoCs.
As can be seen, the base structure is the same, only registers and the
bits in them moved slightly.
Signed-off-by: Xing Zheng <zhengxing@rock-chips.com> Reviewed-by: Heiko Stuebner <heiko@sntech.de> Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 25 Jun 2016 16:19:41 +0000 (12:19 -0400)]
Merge branch 'net-sched-bulk-dequeue'
Eric Dumazet says:
====================
net_sched: bulk dequeue and deferred drops
The first patch adds an additional parameter to the ->enqueue() qdisc method
so that drops can be done outside of the critical section
(after locks are released).
Then fq_codel can have a small optimization to reduce the number of cache
line misses during a drop event
(possibly accumulating hundreds of packets to be freed).
A small htb change exports the backlog in class dumps.
Final patch adds bulk dequeue to qdiscs that were lacking this feature.
This series brings a nice qdisc performance increase (more than 80 %
in some cases).
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Wed, 22 Jun 2016 06:16:52 +0000 (23:16 -0700)]
net_sched: generalize bulk dequeue
When qdisc bulk dequeue was added in linux-3.18 (commit 5772e9a3463b "qdisc: bulk dequeue support for qdiscs
with TCQ_F_ONETXQUEUE"), it was constrained to some
specific qdiscs.
With some extra care, we can extend this to all qdiscs,
so that typical traffic shaping solutions can benefit from
small batches (8 packets in this patch).
For example, HTB is often used on some multi queue device.
And bonding/team are multi queue devices...
The idea is to bulk-dequeue packets mapping to the same transmit queue.
This brings a 35 to 80 % performance increase in an HTB setup
under pressure on a bonding setup:
1) NUMA node contention : 610,000 pps -> 1,110,000 pps
2) No node contention : 1,380,000 pps -> 1,930,000 pps
Now we should work to add batches on the enqueue() side ;)
Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: John Fastabend <john.r.fastabend@intel.com> Cc: Jesper Dangaard Brouer <brouer@redhat.com> Cc: Hannes Frederic Sowa <hannes@stressinduktion.org> Cc: Florian Westphal <fw@strlen.de> Cc: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Wed, 22 Jun 2016 06:16:50 +0000 (23:16 -0700)]
net_sched: fq_codel: cache skb->truesize into skb->cb
Now that we defer skb drops, it makes sense to keep a copy
of skb->truesize in struct codel_skb_cb to avoid one
cache line miss per dropped skb in fq_codel_drop(),
to reduce latencies a bit further.
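Roughly, the idea is the following (the control-block field name is an assumption):

    #include <net/codel_qdisc.h>

    /* Sketch: cache skb->truesize in the per-skb control block at enqueue
     * time so fq_codel_drop() never has to touch the skb's cache line again.
     * The mem_usage field name is assumed for illustration.
     */
    static void fq_codel_cache_truesize(struct sk_buff *skb)
    {
    	get_codel_cb(skb)->mem_usage = skb->truesize;
    }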
Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Wed, 22 Jun 2016 06:16:49 +0000 (23:16 -0700)]
net_sched: drop packets after root qdisc lock is released
Qdisc performance suffers when packets are dropped at enqueue()
time because drops (kfree_skb()) are done while qdisc lock is held,
delaying a dequeue() draining the queue.
Nominal throughput can be reduced by 50 % when this happens,
at a time we would like the dequeue() to proceed as fast as possible.
Even FQ is vulnerable to this problem, while one of FQ goals was
to provide some flow isolation.
This patch adds a 'struct sk_buff **to_free' parameter to all
qdisc->enqueue() methods and to the qdisc_drop() helper.
I measured a performance increase of up to 12 %, but this patch
is a prereq so that future batches in enqueue() can fly.
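A hedged sketch of the new calling convention (the qdisc and helper bodies are simplified, not taken from the patch):

    /* Enqueue now collects victims on a list instead of freeing them under
     * the qdisc lock; the caller frees them after the lock is released.
     */
    static int toy_enqueue(struct sk_buff *skb, struct Qdisc *sch,
    		           struct sk_buff **to_free)
    {
    	if (likely(sch->q.qlen < sch->limit))
    		return qdisc_enqueue_tail(skb, sch);
    	return qdisc_drop(skb, sch, to_free);	/* chains skb onto *to_free */
    }

    /* Caller side (simplified):
     *
     *	struct sk_buff *to_free = NULL;
     *
     *	spin_lock(qdisc_lock(q));
     *	rc = q->enqueue(skb, q, &to_free);
     *	spin_unlock(qdisc_lock(q));
     *	if (unlikely(to_free))
     *		kfree_skb_list(to_free);
     */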
Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Jesper Dangaard Brouer <brouer@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 25 Jun 2016 16:08:41 +0000 (12:08 -0400)]
Merge branch 'liquidio-next'
Raghu Vatsavayi says:
====================
liquidio: updates and bug fixes
Please consider the following patch series for liquidio bug fixes
and updates on top of net-next. The patches should be applied in
order, as some of them depend on earlier patches in the series.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Raghu Vatsavayi [Wed, 22 Jun 2016 05:53:09 +0000 (22:53 -0700)]
liquidio: chip reset changes
This patch resolves the order of chip reset while destroying
the resources by postponing the soft reset in the destroy resources
function until all queues have been removed properly.
Signed-off-by: Derek Chickles <derek.chickles@caviumnetworks.com> Signed-off-by: Satanand Burla <satananda.burla@caviumnetworks.com> Signed-off-by: Felix Manlunas <felix.manlunas@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <rvatsavayi@caviumnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Raghu Vatsavayi [Wed, 22 Jun 2016 05:53:06 +0000 (22:53 -0700)]
liquidio: Napi rx/tx traffic
This patch adds TX buffer handling to NAPI along with RX
traffic. Separate spinlocks are also introduced for handling
IQ posting and buffer reclaim so that the TX path and TX interrupt
do not compete against each other.
Signed-off-by: Derek Chickles <derek.chickles@caviumnetworks.com> Signed-off-by: Satanand Burla <satananda.burla@caviumnetworks.com> Signed-off-by: Felix Manlunas <felix.manlunas@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <rvatsavayi@caviumnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Raghu Vatsavayi [Wed, 22 Jun 2016 05:53:04 +0000 (22:53 -0700)]
liquidio: Vlan offloads changes
This patch adds support for VLAN offloads in the driver, and the
receive header structures are modified appropriately. Also,
requestID will no longer be used in the receive header.
Signed-off-by: Derek Chickles <derek.chickles@caviumnetworks.com> Signed-off-by: Satanand Burla <satananda.burla@caviumnetworks.com> Signed-off-by: Felix Manlunas <felix.manlunas@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <rvatsavayi@caviumnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Raghu Vatsavayi [Wed, 22 Jun 2016 05:53:03 +0000 (22:53 -0700)]
liquidio: soft command buffer limits
This patch increases the limits on the soft command buffer size and
the number of command buffers. This patch also has changes for queue macros
and limit-related changes for new chips.
Signed-off-by: Derek Chickles <derek.chickles@caviumnetworks.com> Signed-off-by: Satanand Burla <satananda.burla@caviumnetworks.com> Signed-off-by: Felix Manlunas <felix.manlunas@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <raghu.vatsavayi@caviumnetworks.com> Signed-off-by: Raghu Vatsavayi <rvatsavayi@caviumnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
The issue comes when there are multiple threads attempting to use the
mailbox facility at the same time. For the Virtual Function driver, the
only way to get the Virtual Interface statistics is to issue mailbox
commands to ask the firmware for the VI stats. And, because the VI stats
command can only retrieve a smallish number of stats per mailbox command,
we have to issue three mailbox commands in quick succession. When an
ethtool or netstat command to get interface stats and an interface
up/down are run in a loop every 0.1 sec, we observed mailbox collisions,
and with the present code one of the two commands would fail, since we
don't queue the second command.
To overcome this issue, add a queue for access to the mailbox. Whenever
a mailbox command is issued, add it to the queue. If it's at the head,
issue the mailbox command; otherwise wait for the existing command to
complete. A command usually takes less than a millisecond to complete.
Also time out of the loop if the command under execution takes a long
time to run.
In reality, the number of mailbox access collisions is going to be very
rare since no one runs such an abusive script.
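A simplified sketch of that queueing scheme (names, locking granularity and the timeout are assumptions, not the cxgb4vf code):

    #include <linux/list.h>
    #include <linux/spinlock.h>
    #include <linux/delay.h>

    struct mbox_waiter {
    	struct list_head entry;
    };

    /* Wait until our waiter reaches the head of the queue, then own the
     * mailbox.  Time out rather than spin forever if the command currently
     * under execution is stuck.
     */
    static int mbox_acquire(struct list_head *mboxq, spinlock_t *lock,
    			    struct mbox_waiter *me, int timeout_ms)
    {
    	spin_lock(lock);
    	list_add_tail(&me->entry, mboxq);
    	spin_unlock(lock);

    	while (timeout_ms-- > 0) {
    		bool at_head;

    		spin_lock(lock);
    		at_head = list_first_entry(mboxq, struct mbox_waiter,
    					   entry) == me;
    		spin_unlock(lock);
    		if (at_head)
    			return 0;	/* our turn: issue the mailbox command */
    		msleep(1);		/* commands normally finish in < 1 ms */
    	}

    	spin_lock(lock);
    	list_del(&me->entry);
    	spin_unlock(lock);
    	return -ETIMEDOUT;
    }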
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Philippe Reynes [Tue, 21 Jun 2016 22:32:35 +0000 (00:32 +0200)]
net: ethernet: macb: use phydev from struct net_device
The private structure contains a pointer to phydev, but struct
net_device already contains such a pointer. So we can remove the
phydev pointer from the private structure, and update the driver to use
the one contained in struct net_device.
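The change amounts to the following pattern (shown with a hypothetical helper; the same applies to the bgmac, altera_tse and sun4i-emac patches below):

    /* Before: keep a private copy of the phydev pointer.
     *
     *	struct macb *bp = netdev_priv(dev);
     *	phy_start(bp->phy_dev);
     *
     * After: use the pointer already present in struct net_device.
     */
    static void example_start_phy(struct net_device *dev)
    {
    	phy_start(dev->phydev);
    }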
Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jarno Rajahalme [Tue, 21 Jun 2016 21:59:38 +0000 (14:59 -0700)]
openvswitch: Only set mark and labels with a commit flag.
Only set conntrack mark or labels when the commit flag is specified.
This makes sure we can not set them before the connection has been
persisted, as in that case the mark and labels would be lost in the
event of a userspace upcall.
OVS userspace already requires the commit flag to accept setting
ct_mark and/or ct_labels. Validate for this in the kernel API.
Signed-off-by: Jarno Rajahalme <jarno@ovn.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Vivien Didelot [Tue, 21 Jun 2016 16:28:20 +0000 (12:28 -0400)]
net: dsa: mv88e6xxx: rename single-chip support
With the upcoming support for cross-chip operations, it will be hard to
distinguish portions of code supporting a single chip from a switch fabric
of interconnected chips.
Make the code clearer now, by renaming the mv88e6xxx_priv_state chip
structure to mv88e6xxx_chip. This patch brings no functional changes.
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vivien Didelot [Tue, 21 Jun 2016 16:28:19 +0000 (12:28 -0400)]
net: dsa: mv88e6xxx: move driver in its own folder
With the upcoming support for cross-chip operations and other mv88e6xxx
enhancements, new files will be added.
Similarly to mlxsw or b53, move mv88e6xxx files into their own folder.
In the meantime, update the MAINTAINERS entry to please checkpatch.pl,
by replacing the invalid 88E6352 entry with 88E6XXX, maintained by
Andrew and myself.
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
The patch series adds support for configuring/reading the adapter coalescing
parameters. Patch (1) adds the qed infrastructure/APIs for the support and
patch (2) adds the driver support for the following ethtool commands:
ethtool -c|--show-coalesce ethX
ethtool -C|--coalesce ethX [rx-usecs N] [tx-usecs N]
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
qede: Add support for coalescing config read/update.
Signed-off-by: Sudarsana Reddy Kalluru <sudarsana.kalluru@qlogic.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
qed: Add support for coalescing config read/update.
This patch adds support for configuring the device tx/rx coalescing
timeout values on the order of microseconds. It also adds APIs for
upper layer drivers for reading/updating the coalescing values.
Signed-off-by: Sudarsana Reddy Kalluru <sudarsana.kalluru@qlogic.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 23 Jun 2016 19:40:31 +0000 (15:40 -0400)]
Merge tag 'wireless-drivers-next-for-davem-2016-06-21' of git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/wireless-drivers-next
Kalle Valo says:
====================
wireless-drivers patches for 4.8
Major changes:
ath10k
* enable btcoex support without restarting firmware
* enable ipq4019 support using AHB bus
* add QCA9887 chipset support
* retrieve calibration data from EEPROM, currently only for QCA9887
wil6210
* add pm_notify handling
brcmfmac
* add support for the PCIE devices 43525 and 43465
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Rana Shahout [Tue, 21 Jun 2016 09:43:59 +0000 (12:43 +0300)]
net/mlx4_en: Add DCB PFC support through CEE netlink commands
This patch adds support for reading and updating priority flow
control (PFC) attributes in the driver via netlink.
Signed-off-by: Rana Shahout <ranas@mellanox.com> Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com> Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com> Signed-off-by: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Florian Fainelli [Tue, 21 Jun 2016 01:26:53 +0000 (18:26 -0700)]
net: dsa: b53: Fix statistics readings
Due to a typo we would always be using the MIB counter width of the
first element of the counter array instead of the current element, and
we would always be accessing the register statistics with a 64-bit
read, while some could be 32-bit. This went unnoticed in testing with
MDIO and SRAB which tolerate doing this, but testing with the SPI bus
revealed bogus values being returned. Fix this by using the proper
iterator here.
Fixes: 967dd82ffc52 ("net: dsa: b53: Add support for Broadcom RoboSwitch") Reported-by: Jonas Gorski <jogo@openwrt.org> Signed-off-by: Florian Fainelli <f.fainelli@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
Ben Hutchings [Tue, 21 Jun 2016 00:17:17 +0000 (01:17 +0100)]
of_mdio: Enable fixed PHY support if driver is a module
The fixed_phy driver doesn't have to be built-in, and it's
important that of_mdio supports it even if it's a module.
Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk> Acked-by: Florian Fainelli <f.fainelli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Ed Spiridonov [Mon, 20 Jun 2016 18:40:15 +0000 (21:40 +0300)]
can: mcp251x: add message about successful/unsuccessful probe
Silently ignoring errors during the probe procedure is a bad thing; this
patch fixes it. An extra message is added for hardware initialization
failure. Such common issues are mostly caused by wrong wiring. A message
about success is added as well; it should be useful for debugging a new
hardware configuration, especially in the case of several CAN buses.
Signed-off-by: Ed Spiridonov <edo.rus@gmail.com> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
As per Wolfgang G, all new drivers should support decreasing state
transitions (back-to-error-active). This patch adds this support.
This driver configures the controller to halt on bus-off entry. Hence,
when in error states less severe than the bus-off state, the TEC/REC
counters are checked for lower state transition eligibility and action.
Signed-off-by: Ramesh Shanmugasundaram <ramesh.shanmugasundaram@bp.renesas.com> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Arnd Bergmann [Mon, 20 Jun 2016 15:51:52 +0000 (17:51 +0200)]
can: only call can_stat_update with procfs
The change to leave out procfs support in CAN when CONFIG_PROC_FS
is not set was incomplete and leads to a build error:
net/built-in.o: In function `can_init':
:(.init.text+0x9858): undefined reference to `can_stat_update'
ERROR: "can_stat_update" [net/can/can.ko] undefined!
This tries a better approach, encapsulating all of the calls
within IS_ENABLED(), so we also leave out the timer function
from the object file.
Signed-off-by: Arnd Bergmann <arnd@arndb.de> Fixes: a20fadf85312 ("can: build proc support only if CONFIG_PROC_FS is activated") Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
William Tu [Mon, 20 Jun 2016 14:26:17 +0000 (07:26 -0700)]
openvswitch: Add packet len info to upcall.
The commit f2a4d086ed4c ("openvswitch: Add packet truncation support.")
introduces packet truncation before sending to userspace upcall receiver.
This patch passes up the skb->len before truncation so that the upcall
receiver knows the original packet size. Potentially this will be used
by sFlow, where OVS translates sFlow config header=N to a sample action,
truncating the packet to N bytes in the kernel datapath. Thus, only N
bytes instead of the full packet size are copied from kernel to
userspace, saving the
kernel-to-userspace bandwidth.
Signed-off-by: William Tu <u9012063@gmail.com> Cc: Pravin Shelar <pshelar@nicira.com> Acked-by: Pravin B Shelar <pshelar@ovn.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Philippe Reynes [Sun, 19 Jun 2016 20:37:05 +0000 (22:37 +0200)]
net: ethernet: bgmac: use phydev from struct net_device
The private structure contains a pointer to phydev, but struct
net_device already contains such a pointer. So we can remove the
phydev pointer from the private structure, and update the driver to use
the one contained in struct net_device.
Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Philippe Reynes [Sat, 18 Jun 2016 14:37:20 +0000 (16:37 +0200)]
net: ethernet: altera_tse: use phydev from struct net_device
The private structure contains a pointer to phydev, but struct
net_device already contains such a pointer. So we can remove the
phydev pointer from the private structure, and update the driver to use
the one contained in struct net_device.
Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Philippe Reynes [Sat, 18 Jun 2016 13:15:39 +0000 (15:15 +0200)]
net: ethernet: sun4i-emac: use phydev from struct net_device
The private structure contains a pointer to phydev, but struct
net_device already contains such a pointer. So we can remove the
phydev pointer from the private structure, and update the driver to use
the one contained in struct net_device.
Signed-off-by: Philippe Reynes <tremyfr@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David Howells [Fri, 17 Jun 2016 09:06:56 +0000 (10:06 +0100)]
rxrpc: Kill off the rxrpc_transport struct
The rxrpc_transport struct is now redundant, given that the rxrpc_peer
struct is now per peer port rather than per peer host, so get rid of it.
Service connection lists are transferred to the rxrpc_peer struct, as is
the conn_lock. Previous patches moved the client connection handling out
of the rxrpc_transport struct and discarded the connection bundling code.
Signed-off-by: David Howells <dhowells@redhat.com>
David Howells [Fri, 17 Jun 2016 14:42:35 +0000 (15:42 +0100)]
rxrpc: Kill the client connection bundle concept
Kill off the concept of maintaining a bundle of connections to a particular
target service in order to increase the number of call slots available
beyond four for that service (there are four call slots per connection).
This will make cleaning up the connection handling code easier and
facilitate removal of the rxrpc_transport struct. Bundling can be
reintroduced later if necessary.
Signed-off-by: David Howells <dhowells@redhat.com>
David Howells [Fri, 17 Jun 2016 10:07:55 +0000 (11:07 +0100)]
rxrpc: Calls displayed in /proc may in future lack a connection
Allocated rxrpc calls displayed in /proc/net/rxrpc_calls may in future be
on the proc list before they're connected or after they've been
disconnected - in which case they may not have a pointer to a connection
struct that can be used to get data from there.
Deal with this by using stuff from the call struct in preference where
possible and printing "no_connection" rather than a peer address if no
connection is assigned.
This change also has the added bonus that the service ID is now taken from
the call rather than the connection, which will allow per-call service upgrades
to be shown - something required for AuriStor server compatibility.
Signed-off-by: David Howells <dhowells@redhat.com>
David Howells [Fri, 17 Jun 2016 10:00:48 +0000 (11:00 +0100)]
rxrpc: Validate the net address given to rxrpc_kernel_begin_call()
Validate the net address given to rxrpc_kernel_begin_call() before using
it.
Whilst this should be mostly unnecessary for in-kernel users, it does clear
the tail of the address struct in case we want to hash or compare the whole
thing.
Signed-off-by: David Howells <dhowells@redhat.com>
David Howells [Mon, 4 Apr 2016 13:00:37 +0000 (14:00 +0100)]
rxrpc: Use IDR to allocate client conn IDs on a machine-wide basis
Use the IDR facility to allocate client connection IDs on a machine-wide
basis so that each client connection has a unique identifier. When the
connection ID space wraps, we advance the epoch by 1, thereby effectively
having a 62-bit ID space. The IDR facility is then used to look up client
connections during incoming packet routing instead of using an rbtree
rooted on the transport.
This change allows for the removal of the transport in the future and also
means that client connections can be looked up directly in the data-ready
handler by connection ID.
The ID management code is placed in a new file, conn-client.c, to which all
the client connection-specific code will eventually move.
Note that the IDR tree gets very expensive on memory if the connection IDs
are widely scattered throughout the number space, so we shall need to
retire connections that have, say, an ID more than four times the maximum
number of client conns away from the current allocation point to try and
keep the IDs concentrated. We will also need to retire connections from an
old epoch.
Also note that, for the moment, a pointer to the transport has to be passed
through into the ID allocation function so that we can take a BH lock to
prevent a locking issue against in-BH lookup of client connections. This
will go away later when RCU is used for server connections also.
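A hedged sketch of the cyclic ID allocation described above (the ID range, lock and field names are illustrative assumptions):

    #include <linux/idr.h>
    #include <linux/spinlock.h>

    static DEFINE_IDR(rxrpc_client_conn_ids);
    static DEFINE_SPINLOCK(rxrpc_conn_id_lock);

    /* Allocate a machine-wide unique client connection ID.  The BH-disabled
     * lock reflects the need described above to avoid a locking issue against
     * in-BH lookup of client connections.
     */
    static int rxrpc_get_client_connection_id(struct rxrpc_connection *conn,
    					      gfp_t gfp)
    {
    	int id;

    	idr_preload(gfp);
    	spin_lock_bh(&rxrpc_conn_id_lock);
    	id = idr_alloc_cyclic(&rxrpc_client_conn_ids, conn, 1, 0x40000000,
    			      GFP_NOWAIT);
    	spin_unlock_bh(&rxrpc_conn_id_lock);
    	idr_preload_end();

    	if (id < 0)
    		return id;

    	/* The low bits of the CID select one of the four channels; the shift
    	 * value is assumed here for illustration.
    	 */
    	conn->cid = id << 2;
    	return 0;
    }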
Signed-off-by: David Howells <dhowells@redhat.com>