Jacob Keller [Thu, 12 Jan 2017 23:59:41 +0000 (15:59 -0800)]
fm10k: update function header comment for fm10k_get_stats64
Re-word the comment to avoid stating that we return a value for this
void function. Additionally, there is no need to mention older kernels,
since this is the upstream kernel.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jacob Keller [Thu, 12 Jan 2017 23:59:40 +0000 (15:59 -0800)]
fm10k: allow service task to reschedule itself
If some code path executes fm10k_service_event_schedule(), it is
guaranteed that we only queue the service task once, since we use the
__FM10K_SERVICE_SCHED flag. Unfortunately this has the side effect that
if a service request occurs while the watchdog is currently running, we
may fail to notice the request and ignore it until the next time the
request occurs.
This can cause problems with pf/vf mailbox communication and other
service event tasks. To avoid this, introduce a FM10K_SERVICE_REQUEST
bit. When we successfully schedule the service task (and set the _SCHED
bit), we clear this bit. However, if we are currently unable to schedule
the service event, we just set the new SERVICE_REQUEST bit.
Finally, after the service event completes, we will re-schedule if the
request bit has been set.
This should ensure that we do not miss any service event schedules,
since we will re-schedule it once the currently running task finishes.
This means that for each request, we will always schedule the service
task to run at least once in full after the request came in.
This will avoid timing issues that can occur with the service event
scheduling. We do pay a cost in re-running many tasks, but all the
service event tasks use either flags to avoid duplicate work, or are
tolerant of being run multiple times.
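A minimal sketch of the schedule/complete pattern described above (the bit
names follow the commit text; the surrounding driver details are simplified
and illustrative, not the literal patch):

static void fm10k_service_event_schedule(struct fm10k_intfc *interface)
{
	if (!test_and_set_bit(__FM10K_SERVICE_SCHED, interface->state)) {
		/* We own the schedule; any pending request is now covered. */
		clear_bit(__FM10K_SERVICE_REQUEST, interface->state);
		queue_work(fm10k_workqueue, &interface->service_task);
	} else {
		/* Task already queued or running; remember the request. */
		set_bit(__FM10K_SERVICE_REQUEST, interface->state);
	}
}

static void fm10k_service_event_complete(struct fm10k_intfc *interface)
{
	clear_bit(__FM10K_SERVICE_SCHED, interface->state);

	/* If a request came in while we ran, run the task again in full. */
	if (test_bit(__FM10K_SERVICE_REQUEST, interface->state))
		fm10k_service_event_schedule(interface);
}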
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jacob Keller [Thu, 12 Jan 2017 23:59:39 +0000 (15:59 -0800)]
fm10k: future-proof state bitmaps using DECLARE_BITMAP
This ensures that future programmers do not have to remember to re-size
the bitmaps due to adding new values. Although this is unlikely for this
driver, it may happen and it's best to prevent it from ever being an
issue.
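As a rough sketch of the idea (enum and field names are illustrative), sizing
the bitmap from a terminating enum value means new state bits automatically
grow the storage:

enum fm10k_state_t {
	__FM10K_RESETTING,
	__FM10K_DOWN,
	__FM10K_SERVICE_SCHED,
	/* new state bits are added above this line */
	__FM10K_STATE_SIZE__,
};

struct fm10k_intfc {
	/* ... */
	DECLARE_BITMAP(state, __FM10K_STATE_SIZE__);
};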
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jacob Keller [Thu, 12 Jan 2017 23:59:38 +0000 (15:59 -0800)]
fm10k: use a BITMAP for flags to avoid race conditions
Replace bitwise operators and #defines with a BITMAP and enumeration
values. This is similar to how we handle the "state" values as well.
This has two distinct advantages over the old method. First, we ensure
correctness of operations which are currently problematic due to race
conditions. Suppose that two kernel threads are running, such as the
watchdog and an ethtool ioctl, and both modify flags. We'll say that the
watchdog is CPU A, and the ethtool ioctl is CPU B.
CPU A sets FLAG_1, which can be seen as
CPU A read FLAGS
CPU A write FLAGS | FLAG_1
CPU B sets FLAG_2, which can be seen as
CPU B read FLAGS
CPU B write FLAGS | FLAG_2
However, "|=" and "&=" operators are not actually atomic. So this could
be ordered like the following:
CPU A read FLAGS -> variable
CPU B read FLAGS -> variable
CPU A write FLAGS (variable | FLAG_1)
CPU B write FLAGS (variable | FLAG_2)
Notice how the 2nd write from CPU B could actually undo the write from
CPU A because it isn't guaranteed that the |= operation is atomic.
In practice the race window for most flag writes is incredibly narrow
so it is not easy to isolate issues. However, the more flags we have,
the more likely they will cause problems. Additionally, if such
a problem were to arise, it would be incredibly difficult to track down.
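A minimal before/after sketch of that difference (flag and helper names are
illustrative):

/* Before: plain u32, |= is a non-atomic read-modify-write and can race. */
interface->flags |= FM10K_FLAG_EXAMPLE;

/* After: flags is a bitmap, and the bit helpers are atomic per bit. */
set_bit(FM10K_FLAG_EXAMPLE, interface->flags);
if (test_and_clear_bit(FM10K_FLAG_EXAMPLE, interface->flags))
	handle_flag();	/* consume the flag exactly once (illustrative) */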
Second, there is an additional advantage beyond code correctness. We can
now automatically size the BITMAP when more flags are added, so we no
longer need to remember that flags is a u32 and risk over-running the
variable if too many flags are added. This is not a likely occurrence
for the fm10k driver, but this patch can serve as an example for other
drivers which have many more flags.
This particular change does have a bit of trouble converting some of the
idioms previously used with the #defines for flags, specifically the
FM10K_FLAG_RSS_FIELD_IPV[46]_UDP flags. That conversion was the most
problematic, because those flags were actually stored separately, which
could more easily expose the re-ordering issue described above.
It is really difficult to test whether the atomics make a difference in
practical scenarios, but we can at least ensure that basic functionality
remains the same. This patch has a lot of code coverage, but most of it
is relatively simple.
While we are modifying these files, update their copyright year.
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Phil Turnbull [Wed, 23 Nov 2016 18:33:58 +0000 (13:33 -0500)]
fm10k: correctly check if interface is removed
FM10K_REMOVED expects a hardware address, not a 'struct fm10k_hw'.
Fixes: 5cb8db4a4cbc ("fm10k: Add support for VF") Signed-off-by: Phil Turnbull <phil.turnbull@oracle.com> Tested-by: Krishneil Singh <krishneil.k.singh@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Jarod Wilson [Tue, 4 Apr 2017 21:32:42 +0000 (17:32 -0400)]
bonding: attempt to better support longer hw addresses
People are using bonding over Infiniband IPoIB connections, and who knows
what else. Infiniband has a hardware address length of 20 octets
(INFINIBAND_ALEN), and the network core defines a MAX_ADDR_LEN of 32.
Various places in the bonding code are currently hard-wired to 6 octets
(ETH_ALEN), such as the 3ad code, which I've left untouched here. Besides,
only alb is currently possible on Infiniband links right now anyway, due
to commit 1533e7731522, so the alb code is where most of the changes are.
One major component of this change is the addition of a bond_hw_addr_copy
function that takes a length argument, instead of using ether_addr_copy
everywhere that hardware addresses need to be copied about. The other
major component of this change is converting the bonding code from using
struct sockaddr for address storage to struct sockaddr_storage, as the
former has an address storage space of only 14, while the latter is 128
minus a few, which is necessary to support bonding over devices with up to
MAX_ADDR_LEN-octet hardware addresses. Additionally, this probably fixes
up some memory corruption issues with the current code, where it's
possible to write an infiniband hardware address into a sockaddr declared
on the stack.
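A hedged sketch of the two pieces described above (the helper name comes from
the commit text; the body and example use here are illustrative):

/* Length-aware copy used instead of ether_addr_copy(). */
static inline void bond_hw_addr_copy(u8 *dst, const u8 *src, unsigned int len)
{
	memcpy(dst, src, len);
}

/* Example use: a buffer sized for MAX_ADDR_LEN (32) octets can hold an
 * IPoIB address (20 octets), which struct sockaddr's 14-byte sa_data and
 * ETH_ALEN-sized copies cannot.
 */
u8 tmp_addr[MAX_ADDR_LEN];

bond_hw_addr_copy(tmp_addr, slave->dev->dev_addr, slave->dev->addr_len);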
Lightly tested on a dual mlx4 IPoIB setup, which properly shows a 20-octet
hardware address now:
Bonding Mode: fault-tolerance (active-backup) (fail_over_mac active)
Primary Slave: mlx4_ib0 (primary_reselect always)
Currently Active Slave: mlx4_ib0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 100
Down Delay (ms): 100
Slave Interface: mlx4_ib0
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr:
80:00:02:08:fe:80:00:00:00:00:00:00:e4:1d:2d:03:00:1d:67:01
Slave queue ID: 0
Slave Interface: mlx4_ib1
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr:
80:00:02:09:fe:80:00:00:00:00:00:01:e4:1d:2d:03:00:1d:67:02
Slave queue ID: 0
Also tested with a standard 1Gbps NIC bonding setup (with a mix of
e1000 and e1000e cards), running LNST's bonding tests.
CC: Jay Vosburgh <j.vosburgh@gmail.com> CC: Veaceslav Falico <vfalico@gmail.com> CC: Andy Gospodarek <andy@greyhouse.net> CC: netdev@vger.kernel.org Signed-off-by: Jarod Wilson <jarod@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Edward Cree [Tue, 4 Apr 2017 16:02:49 +0000 (17:02 +0100)]
sfc: don't insert mc_list on low-latency firmware if it's too long
If the mc_list is longer than 256 addresses, we enter mc_promisc mode.
If we're in mc_promisc mode and the firmware doesn't support cascaded
multicast, normally we also insert our mc_list, to prevent stealing by
another VI. However, if the mc_list was too long, this isn't really
helpful - the MC groups that didn't fit in the list can still get
stolen, and having only some of them stealable will probably cause
more confusing behaviour than having them all stealable. Since
inserting 256 multicast filters takes a long time and can lead to MCDI
state machine timeouts, just skip the mc_list insert in this overflow
condition.
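The resulting policy reads roughly like this sketch (all names are
illustrative, not the actual sfc code):

if (multicast_promisc) {
	insert_mc_promisc_filter();

	/* Only pin our own mc_list when it actually fit.  If it overflowed,
	 * partial insertion makes stealing behaviour more confusing, and
	 * inserting 256 filters risks MCDI state machine timeouts.
	 */
	if (!fw_has_cascaded_multicast && !mc_list_overflowed)
		insert_mc_list_filters();
}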
Signed-off-by: Edward Cree <ecree@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 5 Apr 2017 17:49:14 +0000 (10:49 -0700)]
Merge branch 'nfp-ksettings'
Jakub Kicinski says:
====================
nfp: ethtool link settings
This series adds support for getting and setting link settings
via the (moderately) new ethtool ksettings ops.
First patch introduces minimal speed and duplex reporting using
the information directly provided in PCI BAR0 memory.
The next few changes deal with the need to refresh port state read
from the service process, and patch 6 finally uses that information
to provide link speed and duplex. Patches 7 and 8 add auto
negotiation and port type reporting.
Remaining changes provide the set support for speed and auto
negotiation. An upcoming series will also add port splitting
support via devlink.
Quite a bit of churn in this series is caused by the fact that
currently port speed and split changes will usually require a
reboot to take effect. Current service process code is not capable
of performing MAC reinitialization after the chip has been passing
traffic. To make sure the user is aware of this limitation we refuse
the configuration unless the netdev is down, print a warning to the
logs and, if the configuration was performed but did not take effect,
we unregister the netdev. The service process has a "reboot needed"
sticky bit, so reloading the driver will not bring the netdev back.
Note that there is a helper in patch 13 which is marked as
__always_inline, because the FIELD_* macros require the parameters
to be known at compilation time. I hope that is OK. (A sketch of this
pattern follows after the cover letter.)
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
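Regarding the __always_inline note in the cover letter above: FIELD_GET()
and FIELD_PREP() from <linux/bitfield.h> require the mask to be a
compile-time constant, so a helper that takes the mask as a parameter has
to be forced inline. A hedged sketch (the helper name is illustrative, not
the actual nfp code):

#include <linux/bitfield.h>

/* Must be __always_inline: FIELD_GET() insists on a constant mask, which
 * only holds once this helper is collapsed into its constant-mask callers.
 */
static __always_inline u64 nfp_get_field_sketch(u64 reg, u64 mask)
{
	return FIELD_GET(mask, reg);
}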
Jakub Kicinski [Tue, 4 Apr 2017 23:12:35 +0000 (16:12 -0700)]
nfp: add support for .set_link_ksettings()
Support setting link speed and autonegotiation through
set_link_ksettings() ethtool op. If the port is reconfigured
in incompatible way and reboot is required the netdev will get
unregistered and not come back until user reboots the system.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Tue, 4 Apr 2017 23:12:34 +0000 (16:12 -0700)]
nfp: NSP backend for link configuration operations
Add NSP backend for upcoming link configuration operations.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Tue, 4 Apr 2017 23:12:33 +0000 (16:12 -0700)]
nfp: add extended error messages
Allow NSP to set the option code even when an error is reported. This
provides a way for NSP to give the user more precise information about
why a command failed.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Tue, 4 Apr 2017 23:12:32 +0000 (16:12 -0700)]
nfp: turn NSP port entry into a union
Make NSP port structure a union to simplify accessing the fields
from generic macros.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Tue, 4 Apr 2017 23:12:31 +0000 (16:12 -0700)]
nfp: allow multi-stage NSP configuration
NSP commands may be slow to respond; we should try to avoid doing
a command per item when the user requests changing multiple parameters,
for instance with an ethtool .set_settings() command.
Introduce a way for internal NSP code to carry state in the NSP
structure and add start/finish calls to perform the initialization and
kick-off of the configuration request, with potentially many parameters
being modified in between.
nfp_eth_set_mod_enable() will make use of the new code internally,
other "set" functions to follow.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Tue, 4 Apr 2017 23:12:30 +0000 (16:12 -0700)]
nfp: separate high level and low level NSP headers
We will soon add more NSP commands and structure definitions.
Move all high-level NSP header contents to a common nfp_nsp.h file.
Right now it mostly boils down to renaming nfp_nsp_eth.h and
moving some functions from nfp.h there.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Tue, 4 Apr 2017 23:12:29 +0000 (16:12 -0700)]
nfp: report port type in ethtool
Service process firmware provides us with information about the media
and interface (SFP module) plugged in; translate that to Linux's
PORT_* defines and report it via ethtool.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Tue, 4 Apr 2017 23:12:28 +0000 (16:12 -0700)]
nfp: report auto-negotiation in ethtool
NSP ABI version 0.17 is exposing the autonegotiation settings.
Report whether autoneg is on via ethtool.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Tue, 4 Apr 2017 23:12:27 +0000 (16:12 -0700)]
nfp: report link speed from NSP
On the PF prefer the link speed value provided by the NSP.
Refresh port table if needed.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Tue, 4 Apr 2017 23:12:26 +0000 (16:12 -0700)]
nfp: add port state refresh
We will need a way of refreshing port state for link settings
get/set. For get we need to refresh port speed and type.
When settings are changed, the reconfiguration may require a
reboot before it's effective. Unregister netdevs affected
by reconfiguration from a workqueue.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Tue, 4 Apr 2017 23:12:25 +0000 (16:12 -0700)]
nfp: track link state changes
For caching link settings - remember if we have seen link events
since the last time the eth_port information was refreshed.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Tue, 4 Apr 2017 23:12:24 +0000 (16:12 -0700)]
nfp: add mutex protection for the port list
We will want to unregister netdevs after their port got reconfigured.
For that we need to make sure manipulations of port list from the
port reconfiguration flow will not race with driver's .remove()
callback.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Tue, 4 Apr 2017 23:12:23 +0000 (16:12 -0700)]
nfp: don't spawn netdevs for reconfigured ports
After port reconfiguration (port split, media type change)
firmware will continue to report old configuration until
reboot. NSP will inform us that reconfiguration is pending.
To avoid user confusion refuse to spawn netdevs until the
new configuration is applied (reboot).
We need to split the netdev to eth_table port matching from
MAC search and move it earlier in the probe() flow.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Tue, 4 Apr 2017 23:12:22 +0000 (16:12 -0700)]
nfp: add support for .get_link_ksettings()
Read link speed from the BAR. This provides very basic information
and works for both PFs and VFs.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Simon Horman <simon.horman@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
This is a pull request of 5 patches for net-next/master.
There are two patches by Yegor Yefremov which convert the ti_hecc
driver into a DT only driver, as there is no in-tree user of the old
platform driver interface anymore. The next patch by Mario Kicherer
adds network namespace support to the can subsystem. The last two
patches by Akshay Bhat add support for the holt_hi311x SPI CAN driver.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 5 Apr 2017 15:14:14 +0000 (08:14 -0700)]
Merge branch 'rtnetlink-event-type'
Vladislav Yasevich says:
====================
rtnetlink: Updates to rtnetlink_event()
This series came out of the conversation that started as a result of
my first attempt to add netdevice event info to netlink messages.
This series converts event processing to a 'white list', where
we explicitly permit events to generate netlink messages. This
is meant to make people take a closer look and determine whether
these events should really trigger netlink messages.
I am also adding a V2 of my patch to add event type to the netlink
message. This version supports all events that we currently generate.
I will also update my patch to iproute that will show this data
through 'ip monitor'.
I actually need the ability to trap the NETDEV_NOTIFY_PEERS event
(as well as possibly NETDEV_RESEND_IGMP) to support handling of
macvtap on top of bonding. I hope others will also find this info useful.
V2: Added missed events (from David Ahern)
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
rtnl: Add support for netdev event to link messages
When netdev events happen, the rtnetlink_event() handler will send
messages for every event in its white list. These messages contain
current information about a particular device, but they do not include
the information about which event just happened. The consumer of
the message has to try to infer this information. In some cases
(ex: NETDEV_NOTIFY_PEERS), that is not possible.
This patch adds a new extension to the RTM_NEWLINK message called
IFLA_EVENT that carries an encoding of which event triggered this
message. This allows the message consumer to easily determine
if it is interested in a particular event or not.
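In rough terms (sketch only; the exact attribute encoding and the fill-path
plumbing are defined by the patch itself), the link-message fill routine
gains something like:

/* 'rtnl_fill_link_event' is an illustrative name for this sketch. */
static int rtnl_fill_link_event(struct sk_buff *skb, u32 event)
{
	/* IFLA_EVENT carries an encoding of the NETDEV_* event that
	 * triggered this RTM_NEWLINK message.
	 */
	if (nla_put_u32(skb, IFLA_EVENT, event))
		return -EMSGSIZE;
	return 0;
}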
Signed-off-by: Vladislav Yasevich <vyasevic@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
rtnetlink_event() currently functions as a blacklist where
we block certain netdev events from being sent to user space.
As a result, events have been added to the system that userspace
probably doesn't care about.
This patch converts the implementation to a white list so that
new events would have to be specifically added to the list to
be sent to userspace. This forces new event implementers to
consider whether a given event is useful to user space or if it's
just a kernel event.
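The white-list conversion amounts to an explicit filter along these lines
(sketch only; the precise set of permitted events is whatever the patch
enumerates):

/* Illustrative helper name; only listed events generate RTM_NEWLINK. */
static bool rtnetlink_event_is_allowed(unsigned long event)
{
	switch (event) {
	case NETDEV_CHANGEADDR:
	case NETDEV_CHANGENAME:
	case NETDEV_FEAT_CHANGE:
	case NETDEV_NOTIFY_PEERS:
	case NETDEV_RESEND_IGMP:
		return true;
	default:
		/* new events must be added here explicitly */
		return false;
	}
}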
Signed-off-by: Vladislav Yasevich <vyasevic@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net: tcp: Define the TCP_MAX_WSCALE instead of literal number 14
Define a new macro, TCP_MAX_WSCALE, instead of the literal number '14',
and use U16_MAX instead of 65535 as the max value of the TCP window.
There is another minor change: use rounddown(space, mss) instead of
(space / mss) * mss.
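A hedged sketch of the three small changes (the macro and helpers come from
the commit text; the surrounding variables are illustrative):

#define TCP_MAX_WSCALE	14U	/* replaces the literal '14' */

u8  wscale  = min_t(u8, req_wscale, TCP_MAX_WSCALE);	/* clamp window scale */
u32 window  = min_t(u32, space, U16_MAX);		/* 65535 -> U16_MAX */
u32 rounded = rounddown(space, mss);			/* (space / mss) * mss */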
Signed-off-by: Gao Feng <fgao@ikuai8.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Biggers [Tue, 4 Apr 2017 05:50:20 +0000 (22:50 -0700)]
net: ibm: emac: remove unused sysrq handler for 'c' key
Since commit d6580a9f1523 ("kexec: sysrq: simplify sysrq-c handler"),
the sysrq handler for the 'c' key has been sysrq_crash_op. Debugging
code in the ibm_emac driver also tries to register a handler for the 'c'
key, but this has no effect because register_sysrq_key() doesn't replace
existing handlers. Since evidently no one has cared enough to fix this
in the last 8 years, and it's very rare for drivers to register sysrq
handlers (for good reason), just remove the dead code.
Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Earlier patch c4adfc822bf5 ("bonding: make speed, duplex setting
consistent with link state") made an attempt to keep slave state
consistent with speed and duplex settings. Unfortunately link-state
transition is used to change the active link especially when used
in conjunction with mii-mon. The above mentioned patch broke that
logic. Also when speed and duplex settings for a link are updated
during a link-event, the link-status should not be changed to
invoke correct transition logic.
This patch fixes this issue by moving the link-state update outside
of the bond_update_speed_duplex() fn to the places where this fn
is called, updating link-state selectively.
Fixes: c4adfc822bf5 ("bonding: make speed, duplex setting consistent
with link state") Signed-off-by: Mahesh Bandewar <maheshb@google.com> Reviewed-by: Andy Gospodarek <andy@greyhouse.net> Signed-off-by: David S. Miller <davem@davemloft.net>
cb_running is reported in /proc/self/net/netlink and it is reported by
the ss tool, when it gets information from the proc files.
sock_diag is a new interface which is used instead of proc files, so it
looks reasonable that this interface has to report no less information
about sockets than proc files.
We use these flags to dump and restore netlink sockets.
Signed-off-by: Andrei Vagin <avagin@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Dan Carpenter [Mon, 3 Apr 2017 18:25:22 +0000 (21:25 +0300)]
qed: Add a missing error code
We should be returning -ENOMEM if qed_mcp_cmd_add_elem() fails. The
current code returns success.
Fixes: 4ed1eea82a21 ("qed: Revise MFW command locking") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Tomer Tayar <Tomer.Tayar@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Dan Carpenter [Mon, 3 Apr 2017 18:18:41 +0000 (21:18 +0300)]
net: sched: choke: remove some dead code
We accidentally left this dead code behind after commit 5952fde10c35
("net: sched: choke: remove dead filter classify code").
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Reviewed-by: Jiri Pirko <jiri@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Dan Carpenter [Mon, 3 Apr 2017 18:18:27 +0000 (21:18 +0300)]
liquidio: clear the correct memory
There is a cut and paste bug here so we accidentally clear the first
few bytes of "resp" a second time instead of clearing "ctx".
Fixes: 50c0add534d2 ("liquidio: refactor interrupt moderation code") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Felix Manlunas <felix.manlunas@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Tue, 4 Apr 2017 22:14:17 +0000 (18:14 -0400)]
bnxt_en: Cap the msix vector with the max completion rings.
The current code enables up to the maximum MSIX vectors in the PCIE
config space without considering the max completion rings available.
An MSIX vector is only useful when it has an associated completion
ring, so it is better to cap it.
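Conceptually the change is just a clamp before enabling the vectors,
something like this sketch (names illustrative):

/* Don't ask for more MSI-X vectors than we have completion rings to
 * service them with.
 */
total_vecs = min_t(int, total_vecs, max_cp_rings);
total_vecs = pci_enable_msix_range(bp->pdev, msix_ent, 1, total_vecs);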
Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Tue, 4 Apr 2017 22:14:16 +0000 (18:14 -0400)]
bnxt_en: Use short TX BDs for the XDP TX ring.
No offload is performed on the XDP_TX ring so we can use the short TX
BDs. This has the effect of doubling the size of the XDP TX ring so
that it now matches the size of the rx ring by default.
Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Tue, 4 Apr 2017 22:14:13 +0000 (18:14 -0400)]
bnxt_en: Add ethtool mac loopback self test.
The mac loopback self test operates in polling mode. To support that,
we need to add functions to open and close the NIC half way. The half
open mode allows the rings to operate without IRQ and NAPI. We
use the XDP transmit function to send the loopback packet.
Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Michael Chan [Tue, 4 Apr 2017 22:14:06 +0000 (18:14 -0400)]
bnxt_en: Update firmware interface spec to 1.7.6.2.
Features added include WoL and selftest.
Signed-off-by: Deepak Khungar <deepak.khungar@broadcom.com> Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Akshay Bhat [Fri, 17 Mar 2017 21:10:40 +0000 (17:10 -0400)]
can: hi311x: Add Holt HI-311x CAN driver
This patch adds support for the Holt HI-311x CAN controller. The HI311x
CAN controller is capable of transmitting and receiving standard data
frames, extended data frames and remote frames. The HI311x interfaces
with the host over SPI.
Mario Kicherer [Tue, 21 Feb 2017 11:19:47 +0000 (12:19 +0100)]
can: initial support for network namespaces
This patch adds initial support for network namespaces. The changes only
enable support in the CAN raw, proc and af_can code. GW and BCM still
have their checks that ensure that they are used only from the main
namespace.
The patch boils down to moving the global structures, i.e. the global
filter list and their /proc stats, into a per-namespace structure and passing
around the corresponding "struct net" in a lot of different places.
Changes since v1:
- rebased on current HEAD (2bfe01e)
- fixed overlong line
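A hedged sketch of the per-namespace pattern described above (struct and
field names are illustrative; the real layout is whatever the patch defines):

struct netns_can_sketch {
	struct hlist_head rx_dev_list;		/* was a global filter list */
	struct proc_dir_entry *proc_dir;	/* per-netns /proc entries */
};

static int __net_init can_pernet_init_sketch(struct net *net)
{
	/* allocate / initialize the per-namespace state hung off 'net' */
	return 0;
}

static void __net_exit can_pernet_exit_sketch(struct net *net)
{
	/* tear down the per-namespace state */
}

static struct pernet_operations can_pernet_ops_sketch = {
	.init = can_pernet_init_sketch,
	.exit = can_pernet_exit_sketch,
};

Registering this with register_pernet_subsys() gives each "struct net" its
own copy of what used to be global state.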
Signed-off-by: Mario Kicherer <dev@kicherer.org> Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
David S. Miller [Tue, 4 Apr 2017 02:16:38 +0000 (19:16 -0700)]
Merge branch 'qed-QM-ILT-changes'
Yuval Mintz says:
====================
qed: QM & ILT changes
This series introduces several changes and improvements to existing
queue manager and ILT configurations done during initialization.
Notice some of the patches are actually future fixes, i.e., bugs that
can't be triggered with the existing driver but are needed for some
future functionality.
Patch #1 refactors the configuration of the hardware's queue manager,
which is quite messy today. This contains most of the bulk [code-wise]
in the series.
Patch #2, #3 fix Timers related ILT configurations that are yet to
affect qed in existing scenarios.
Patch #4 reduces needless ILT lines wasted for RoCE configurations.
Patch #5 allows RoCE partitions to manage with fewer memory regions
[important, e.g., for multi-function partitions with RoCE support].
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
It's possible some configurations would prevent the driver from utilizing
all the Memory Regions due to a lack of ILT lines.
In such a case, calculate how many memory regions would have to be
dropped due to the limit, and manage without those.
Signed-off-by: Ram Amrani <Ram.Amrani@cavium.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
As of today there's no protocol supported that requires
support from the TM hardware block and enables SRIOV,
but we should still correct the calculation to reflect
the lines required for such future VFs instead of changing
the PF's own lines.
Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Michal Kalderon [Mon, 3 Apr 2017 09:21:10 +0000 (12:21 +0300)]
qed: Fix TM block ILT allocation
When configuring the HW timers block we should set the number of CIDs
up until the last CID that requires timers, instead of only those CIDs
whose protocol needs timers support.
Today, the protocols that require HW timers' support have their CIDs
before any other protocol, but that would change in the future [when we
add iWARP support].
Signed-off-by: Michal Kalderon <Michal.Kalderon@cavium.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Refactor and clean up the queue manager initialization logic.
Also, this adds support for RoCE low-latency queues, which would later
be used for improving RoCE latency in high throughput scenarios.
Signed-off-by: Ariel Elior <Ariel.Elior@cavium.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@cavium.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Add support for the net stats64 counters to the usbnet core. With that
in place put the hooks into every usbnet driver to use it.
This is a straightforward addition of 64-bit counters for RX and TX packet
and byte counts. It is done in the same style as for the other net drivers
that support stats64. Note that the other stats fields remain as 32-bit
values (error counts, etc).
The motivation to add this is that it is not particularly difficult to
get the RX and TX byte counts to wrap on 32bit platforms.
Signed-off-by: Greg Ungerer <gerg@linux-m68k.org> Acked-by: Bjørn Mork <bjorn@mork.no> Signed-off-by: David S. Miller <davem@davemloft.net>
Make ->hash_count, ->low_watermark and ->high_watermark unsigned int
and propagate unsignedness to other variables.
This change doesn't change code generation because these fields aren't
used in 64-bit contexts but make it anyway: these fields can't be
negative numbers.
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Andrew Lunn [Sun, 2 Apr 2017 18:20:47 +0000 (20:20 +0200)]
net/faraday: Add missing include of of.h
Breaking the include loop netdevice.h, dsa.h, devlink.h broke this
driver; it depends on includes brought in by these headers. Adding
linux/of.h fixes it.
Fixes: ed0e39e97d34 ("net: break include loop netdevice.h, dsa.h, devlink.h") Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
Vincent Bernat [Sun, 2 Apr 2017 09:00:06 +0000 (11:00 +0200)]
vxlan: fix ND proxy when skb doesn't have transport header offset
When an incoming frame is tagged or when GRO is disabled, the skb
handed to vxlan_xmit() doesn't contain a valid transport header
offset. This makes ND proxying fail.
We combine two changes: replace use of skb_transport_offset() and ensure
the necessary amount of skb is linear just before using it:
- In vxlan_xmit(), when determining if we have an ICMPv6 neighbor
discovery packet, just check if it is an ICMPv6 packet and rely on
neigh_reduce() to do more checks if this is the case. The use of
pskb_may_pull() is replaced by skb_header_pointer() for just the IPv6
header.
- In neigh_reduce(), add pskb_may_pull() for IPv6 header and neighbor
discovery message since this was removed from vxlan_xmit(). Replace
skb_transport_header() with ipv6_hdr() + 1.
- In vxlan_na_create(), replace first skb_transport_offset() with
ipv6_hdr() + 1 and second with skb_network_offset() + sizeof(struct
ipv6hdr). Additionally, ensure we pskb_may_pull() the whole skb as we
need it to iterate over the options.
Signed-off-by: Vincent Bernat <vincent@bernat.im> Signed-off-by: David S. Miller <davem@davemloft.net>
Xin Long [Sat, 1 Apr 2017 09:07:46 +0000 (17:07 +0800)]
sctp: add SCTP_PR_STREAM_STATUS sockopt for prsctp
When sctp prsctp was implemented, SCTP_PR_STREAM_STATUS wasn't
added, as it needs to save abandoned_(un)sent for every stream.
Now that sctp stream reconf has been added, the assoc has the structure
sctp_stream_out to save per-stream info.
This patch is to add SCTP_PR_STREAM_STATUS by putting the prsctp
per stream statistics into sctp_stream_out.
v1->v2:
fix an indent issue.
Signed-off-by: Xin Long <lucien.xin@gmail.com> Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
There is a bug on Hip06 where the tx ring interrupt packet count is
cleared when the driver sends data to the tx ring, so the tx packet
count never reaches the packet threshold and the generated interrupts
are delayed.
Sometimes, this causes sending performance lower than expected.
To fix this bug, we set the tx ring interrupt packet threshold to 1
permanently, to avoid the count being cleared. And we set the gap time
to 20us, to solve the problem of too many interrupts being generated
when the packet threshold is 1.
This patch can improve the send performance on ARM from 6.6G to 9.37G
with an iperf send thread on ARM and an iperf send thread on X86 for XGE.
Signed-off-by: lipeng <lipeng321@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Kejian Yan [Sat, 1 Apr 2017 11:03:46 +0000 (12:03 +0100)]
net: hns: Adjust the SBM module buffer threshold
HNS needs SBM buffers to store at least two packets after sending a
pause frame because of the link delay. The MTU of HNS is 9728. As
the processor user manual described, the SBM buffer threshold should
be modified.
Reported-by: Ping Zhang <zhangping5@huawei.com> Signed-off-by: Kejian Yan <yankejian@huawei.com> Reviewed-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Kejian Yan [Sat, 1 Apr 2017 11:03:45 +0000 (12:03 +0100)]
net: hns: Simplify the exception sequence in hns_ppe_init()
We need to free all ppe submodules if ppe initialization fails for
any reason, so this patch frees all ppe resources before
hns_ppe_init() returns in the exception path.
Reported-by: JinchuanTian <tianjinchuan1@huawei.com> Signed-off-by: Kejian Yan <yankejian@huawei.com> Reviewed-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Kejian Yan [Sat, 1 Apr 2017 11:03:42 +0000 (12:03 +0100)]
net: hns: Remove redundant mac table operations
This patch removes redundant functions used only for debugging
purposes.
Reported-by: Weiwei Deng <dengweiwei@huawei.com> Signed-off-by: Kejian Yan <yankejian@huawei.com> Reviewed-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Kejian Yan [Sat, 1 Apr 2017 11:03:41 +0000 (12:03 +0100)]
net: hns: Remove redundant mac_get_id()
There is a mac_id in the mac control block structure, so the callback
function mac_get_id() is useless. Here we remove this function.
Reported-by: Weiwei Deng <dengweiwei@huawei.com> Signed-off-by: Kejian Yan <yankejian@huawei.com> Reviewed-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Kejian Yan [Sat, 1 Apr 2017 11:03:40 +0000 (12:03 +0100)]
net: hns: Remove the redundant adding and deleting mac function
The functions hns_dsaf_set_mac_mc_entry() and hns_mac_del_mac() are
not called by any other functions. They are dead code in hns, and the
same features are already implemented by commit 66355f5.
Reported-by: Weiwei Deng <dengweiwei@huawei.com> Signed-off-by: Kejian Yan <yankejian@huawei.com> Reviewed-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
netif_tx_lock is a global spin lock; it takes effect on all rings in
the netdevice. The tx_poll_one process only needs to lock the current
ring, so in this case we define a spin lock in the hnae_ring struct
for it.
Signed-off-by: lipeng <lipeng321@huawei.com>
reviewed-by: Yisen Zhuang <yisen.zhuang@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net: hns: Fix to adjust buf_size of ring according to mtu
Because the ring buf_size is set to 2048, the rx_poll_one process
can reuse the page, so the performance of XGE can improve.
But the chip only supports three BDs in one packet, so the max MTU
is 6K when buf_size is set to 2048. For better performance with
smaller MTUs, we need to change buf_size according to the MTU.
When the user changes the MTU, hns only changes the descriptors in
memory. Some descriptors have already been fetched by the chip and
cannot be changed by the code, so we need to set the port into
loopback and send some packets to let the chip consume the stale
descriptors and fetch new ones.
Because the Pv660 does not support RSS indirection, we need to add a
version check to the MTU change process.
Signed-off-by: lipeng <lipeng321@huawei.com>
reviewed-by: Yisen Zhuang <yisen.zhuang@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net: hns: Optimize hns_nic_common_poll for better performance
After polling fewer than budget packets, we need to check again. If
there are still some packets, calling napi_schedule() to add to the
softirq queue is not the best way. So we return the budget value
instead of calling napi_schedule().
Signed-off-by: lipeng <lipeng321@huawei.com>
reviewed-by: Yisen Zhuang <yisen.zhuang@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net: hns: Remove redundant memset during buffer release
Because all members of desc_cb are assigned when transmitting a packet,
the memset can be deleted in hnae_free_buffer, as follows:
- "dma, priv, length, type" are assigned in fill_v2_desc.
- "page_offset, reuse_flag, buf" are not used in tx direction.
Signed-off-by: lipeng <lipeng321@huawei.com> Signed-off-by: Weiwei Deng <dengweiwei@huawei.com> Reviewed-by: Yisen Zhuang <yisen.zhuang@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net: hns: Fix the implementation of irq affinity function
This patch fixes the implementation of the IRQ affinity
function. This function is used to create the cpu mask
which eventually is used to initialize the cpu<->queue
association for XPS(Transmit Packet Steering).
Signed-off-by: lipeng <lipeng321@huawei.com> Signed-off-by: Kejian Yan <yankejian@huawei.com> Reviewed-by: Yisen Zhuang <yisen.zhuang@huawei.com> Signed-off-by: Salil Mehta <salil.mehta@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sowmini Varadhan [Fri, 31 Mar 2017 22:56:31 +0000 (15:56 -0700)]
rds: tcp: canonical connection order for all paths with index > 0
The rds_connect_worker() has a bug in the check that enforces the
canonical connection order described in the comments of
rds_tcp_state_change(). The intention is to make sure that all
the multipath connections are always initiated by the smaller IP
address via rds_start_mprds. To achieve this, rds_connection_worker
should check that cp_index > 0.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com> Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sowmini Varadhan [Fri, 31 Mar 2017 22:56:30 +0000 (15:56 -0700)]
rds: tcp: allow progress of rds_conn_shutdown if the rds_connection is marked ERROR by an intervening FIN
rds_conn_shutdown() runs in workq context, and marks the rds_connection
as DISCONNECTING before quiescing Tx/Rx paths. However, after all I/O
has quiesced, we may still find the rds_connection state to be
RDS_CONN_ERROR if an intervening FIN was processed in softirq context.
This is not a fatal error: rds_conn_shutdown() should continue the
shutdown, and there is no need to log noisy messages about this event.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com> Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Fri, 31 Mar 2017 21:59:25 +0000 (14:59 -0700)]
sock: correctly test SOCK_TIMESTAMP in sock_recv_ts_and_drops()
It seems the code does not match the intent.
This broke packetdrill, and probably other programs.
Fixes: 6c7c98bad488 ("sock: avoid dirtying sk_stamp, if possible") Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Paolo Abeni <pabeni@redhat.com> Acked-by: Paolo Abeni <pabeni@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Andrew Morton [Fri, 31 Mar 2017 20:09:41 +0000 (13:09 -0700)]
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c: fix build with gcc-4.4.4
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c: In function 'mlx5e_set_rxfh':
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c:1067: error: unknown field 'rss' specified in initializer
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c:1067: warning: missing braces around initializer
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c:1067: warning: (near initialization for 'rrp.<anonymous>')
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c:1068: error: unknown field 'rss' specified in initializer
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c:1069: warning: excess elements in struct initializer
drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c:1069: warning: (near initialization for 'rrp')
gcc-4.4.4 has issues with anonymous union initializers. Work around this.
Cc: Saeed Mahameed <saeedm@mellanox.com> Cc: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Andrew Morton [Fri, 31 Mar 2017 20:09:38 +0000 (13:09 -0700)]
drivers/net/ethernet/mellanox/mlx5/core/en_main.c: fix build with gcc-4.4.4
drivers/net/ethernet/mellanox/mlx5/core/en_main.c: In function 'mlx5e_redirect_rqts':
drivers/net/ethernet/mellanox/mlx5/core/en_main.c:2210: error: unknown field 'rqn' specified in initializer
drivers/net/ethernet/mellanox/mlx5/core/en_main.c:2211: warning: missing braces around initializer
drivers/net/ethernet/mellanox/mlx5/core/en_main.c:2211: warning: (near initialization for 'direct_rrp.<anonymous>')
drivers/net/ethernet/mellanox/mlx5/core/en_main.c: In function 'mlx5e_redirect_rqts_to_channels':
drivers/net/ethernet/mellanox/mlx5/core/en_main.c:2227: error: unknown field 'rss' specified in initializer
drivers/net/ethernet/mellanox/mlx5/core/en_main.c:2227: warning: missing braces around initializer
drivers/net/ethernet/mellanox/mlx5/core/en_main.c:2227: warning: (near initialization for 'rrp.<anonymous>')
drivers/net/ethernet/mellanox/mlx5/core/en_main.c:2227: warning: initialization makes integer from pointer without a cast
drivers/net/ethernet/mellanox/mlx5/core/en_main.c:2228: error: unknown field 'rss' specified in initializer
drivers/net/ethernet/mellanox/mlx5/core/en_main.c:2229: warning: excess elements in struct initializer
drivers/net/ethernet/mellanox/mlx5/core/en_main.c:2229: warning: (near initialization for 'rrp')
drivers/net/ethernet/mellanox/mlx5/core/en_main.c: In function 'mlx5e_redirect_rqts_to_drop':
drivers/net/ethernet/mellanox/mlx5/core/en_main.c:2238: error: unknown field 'rqn' specified in initializer
drivers/net/ethernet/mellanox/mlx5/core/en_main.c:2239: warning: missing braces around initializer
drivers/net/ethernet/mellanox/mlx5/core/en_main.c:2239: warning: (near initialization for 'drop_rrp.<anonymous>')
gcc-4.4.4 has issues with anonymous union initializers. Work around this.
Cc: Saeed Mahameed <saeedm@mellanox.com> Cc: Tariq Toukan <tariqt@mellanox.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Joao Pinto [Fri, 31 Mar 2017 13:22:02 +0000 (14:22 +0100)]
net: stmmac: fix cbs configuration
Sending again, because forgot to include net-dev.
The QoS IP does not accept AVB capabilities on the default queue 0; this way
we guarantee 75% bandwidth for AVB. This patch assures that only queues >= 1
get CBS configured. Additional info was also added to stmmac.txt.
Reported-by: Niklas Cassel <niklas.cassel@axis.com> Signed-off-by: Joao Pinto <jpinto@synopsys.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 2 Apr 2017 03:21:45 +0000 (20:21 -0700)]
Merge branch 'mpls-more-labels'
David Ahern says:
====================
net: mpls: Allow users to configure more labels per route
Increase the maximum number of new labels for MPLS routes from 2 to 30.
To keep memory consumption in check, the labels array is moved to the end
of mpls_nh and mpls_iptunnel_encap structs as a 0-sized array. Allocations
use the maximum number of labels across all nexthops in a route for LSR
and the number of labels configured for LWT.
The mpls_route layout is changed to:
+----------------------+
| mpls_route |
+----------------------+
| mpls_nh 0 |
+----------------------+
| alignment padding | 4 bytes for odd number of labels; 0 for even
+----------------------+
| via[rt_max_alen] 0 |
+----------------------+
| alignment padding | via's aligned on sizeof(unsigned long)
+----------------------+
| ... |
Meaning the via follows its mpls_nh providing better locality as the
number of labels increases. UDP_RR tests with namespaces show no impact
to a modest performance increase with this layout for 1 or 2 labels and
1 or 2 nexthops.
mpls_route allocation size is limited to 4096 bytes allowing on the
order of 30 nexthops with 30 labels (or more nexthops with fewer
labels). LWT encap shares same maximum number of labels as mpls routing.
v3
- initialize n_labels to 0 in case RTA_NEWDST is not defined; detected
by the kbuild test robot
v2
- updates per Eric's comments
+ added patch to ensure all reads of rt_nhn_alive and nh_flags in
the packet path use READ_ONCE and all writes via event handlers
use WRITE_ONCE
+ limit mpls_route size to 4096 (PAGE_SIZE for most arch)
+ mostly killed use of MAX_NEW_LABELS; it exists only for common
limit between lwt and routing paths
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Fri, 31 Mar 2017 14:14:04 +0000 (07:14 -0700)]
net: mpls: Increase max number of labels for lwt encap
Allow users to push down more labels per MPLS encap. Similar to the LSR case,
move the label array to the end of mpls_iptunnel_encap and allocate based on
the number of labels for the route.
For consistency with the LSR case, re-use the same maximum number of
labels.
Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Fri, 31 Mar 2017 14:14:03 +0000 (07:14 -0700)]
net: mpls: bump maximum number of labels
Allow users to push down more labels per MPLS route. With the previous
patches, no memory allocations are based on MAX_NEW_LABELS; the limit
is only used to keep userspace in check.
At this point MAX_NEW_LABELS is only used for mpls_route_config (copying
route data from userspace) and processing nexthops looking for the max
number of labels across the route spec.
Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David Ahern [Fri, 31 Mar 2017 14:14:01 +0000 (07:14 -0700)]
net: mpls: change mpls_route layout
Move labels to the end of mpls_nh as a 0-sized array and within mpls_route
move the via for a nexthop after the mpls_nh. The new layout becomes:
+----------------------+
| mpls_route |
+----------------------+
| mpls_nh 0 |
+----------------------+
| alignment padding | 4 bytes for odd number of labels; 0 for even
+----------------------+
| via[rt_max_alen] 0 |
+----------------------+
| alignment padding | via's aligned on sizeof(unsigned long)
+----------------------+
| ... |
+----------------------+
| mpls_nh n-1 |
+----------------------+
| via[rt_max_alen] n-1 |
+----------------------+
Memory allocated for nexthop + via is constant across all nexthops and
their via. It is based on the maximum number of labels across all nexthops
and the maximum via length. The size is saved in the mpls_route as
rt_nh_size. Accessing a nexthop becomes rt->rt_nh + index * rt->rt_nh_size.
The offset of the via address from a nexthop is saved as rt_via_offset
so that given an mpls_nh pointer the via for that hop is simply
nh + rt->rt_via_offset.
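In code terms, the accessors implied by this layout look roughly like the
following (a sketch based on the description above; the exact helpers in the
patch may differ):

static inline struct mpls_nh *mpls_nh_at(struct mpls_route *rt,
					 unsigned int index)
{
	/* nexthops are laid out back to back, rt_nh_size bytes apart */
	return (struct mpls_nh *)((u8 *)rt->rt_nh + index * rt->rt_nh_size);
}

static inline u8 *mpls_nh_via_sketch(struct mpls_route *rt,
				     struct mpls_nh *nh)
{
	/* the via for a nexthop sits rt_via_offset bytes after the mpls_nh */
	return (u8 *)nh + rt->rt_via_offset;
}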
With prior code, memory allocated per mpls_route with 1 nexthop:
via is an ethernet address - 64 bytes
via is an ipv4 address - 64
via is an ipv6 address - 72
With this patch set, memory allocated per mpls_route with 1 nexthop and
1 or 2 labels:
via is an ethernet address - 56 bytes
via is an ipv4 address - 56
via is an ipv6 address - 64
The 8-byte reduction is due to the previous patch; the change introduced
by this patch has no impact on the size of allocations for 1 or 2 labels.
Performance impact of this change was examined using network namespaces
with veth pairs connecting namespaces. ns0 inserts the packet to the
label-switched path using an lwt route with encap mpls. ns1 adds 1 or 2
labels depending on test, ns2 (and ns3 for 2-label test) pops the label
and forwards. ns3 (or ns4 for the 2-label test) is the destination.
A similar series of namespaces is used for the 2-nexthop test.
Intent is to measure changes to latency (overhead in manipulating the
packet) in the forwarding path. Tests used netperf with UDP_RR.
In short, the change ranges from no effect to a modest increase in performance.
This is expected since this patch does not really have an impact on routes
with 1 or 2 labels (the current limit) and 1 or 2 nexthops.
Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>