Haishuang Yan [Sun, 4 Sep 2016 10:52:51 +0000 (18:52 +0800)]
vxlan: Update tx_errors statistics if vxlan_build_skb return err.
If vxlan_build_skb() returns an error (err < 0), tx_errors should also be increased.
Signed-off-by: Haishuang Yan <yanhaishuang@cmss.chinamobile.com> Acked-by: Jiri Benc <jbenc@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net/mlx4_en: protect ring->xdp_prog with rcu_read_lock
Depending on the preempt mode, the bpf_prog stored in xdp_prog may be
freed despite the use of call_rcu inside bpf_prog_put. The situation is
possible when running in PREEMPT_RCU=y mode, for instance, since the rcu
callback for destroying the bpf prog can run even during the bh handling
in the mlx4 rx path.
Several options were considered before this patch was settled on:

- Add a napi_synchronize loop in mlx4_xdp_set, which would occur after all
  of the rings are updated with the new program. This approach has the
  disadvantage that as the number of rings increases, the speed of update
  will slow down significantly due to napi_synchronize's msleep(1).

- Add a new rcu_head in bpf_prog_aux, to be used by a new bpf_prog_put_bh.
  The action of the bpf_prog_put_bh would be to then call bpf_prog_put
  later. Those drivers that consume a bpf prog in a bh context (like mlx4)
  would then use the bpf_prog_put_bh instead when the ring is up. This has
  the problem of complexity, in maintaining proper refcnts and rcu lists,
  and would likely be harder to review. In addition, this approach to
  freeing must be exclusive with other frees of the bpf prog, for instance
  a _bh prog must not be referenced from a prog array that is consumed by
  a non-_bh prog.
The placement of rcu_read_lock in this patch is functionally the same as
putting an rcu_read_lock in napi_poll. Actually doing so could be a
potentially controversial change, but would bring the implementation in
line with sk_busy_loop (though of course the nature of those two paths
is substantially different), and would also avoid copy/paste problems
for future drivers adding XDP support. Still, this patch does not take
that opinionated option.
Testing was done with kernels in either PREEMPT_RCU=y or
CONFIG_PREEMPT_VOLUNTARY=y+PREEMPT_RCU=n modes, with neither exhibiting
any drawback. With PREEMPT_RCU=n, the extra call to rcu_read_lock did
not show up in the perf report whatsoever, and with PREEMPT_RCU=y the
overhead of rcu_read_lock (according to perf) was the same before/after.
In the rx path, rcu_read_lock is eventually called for every packet
from netif_receive_skb_internal, so the napi poll call's rcu_read_lock
is easily amortized.
v2:
Remove extra rcu_read_lock in mlx4_en_process_rx_cq body
Annotate xdp_prog with __rcu, and convert all usages to rcu_assign or
rcu_dereference[_protected] as appropriate.
Add explicit mutex lock around rcu_assign instead of xchg loop.
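As a rough, hedged illustration of the conversion described above (not the
actual mlx4 diff; the structure and function names below are made up for the
sketch), the RX path dereferences the program pointer under rcu_read_lock()
while the control path publishes a new program with rcu_assign_pointer()
under an explicit mutex:

#include <linux/rcupdate.h>
#include <linux/mutex.h>
#include <linux/bpf.h>

/* Simplified ring with an RCU-annotated program pointer. */
struct example_rx_ring {
	struct bpf_prog __rcu *xdp_prog;
};

/* RX path (bh context): dereference under the read lock. */
static void example_rx_poll(struct example_rx_ring *ring)
{
	struct bpf_prog *prog;

	rcu_read_lock();
	prog = rcu_dereference(ring->xdp_prog);
	if (prog) {
		/* run the program on each received frame */
	}
	rcu_read_unlock();
}

/* Control path: publish a new program under an explicit mutex. */
static void example_xdp_set(struct example_rx_ring *ring, struct mutex *lock,
			    struct bpf_prog *new_prog)
{
	mutex_lock(lock);
	rcu_assign_pointer(ring->xdp_prog, new_prog);
	mutex_unlock(lock);
}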
Fixes: d576acf0a22 ("net/mlx4_en: add page recycle to prepare rx ring for tx support") Acked-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <alexei.starovoitov@gmail.com> Signed-off-by: Brenden Blanco <bblanco@plumgrid.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 6 Sep 2016 20:33:20 +0000 (13:33 -0700)]
Merge branch 'mediatek-rx-path-enhancements'
Sean Wang says:
====================
net: ethernet: mediatek: add enhancements to RX path
Changes since v1:
- fix message typos and add coverletter
Changes since v2:
- split from the previous series so that these enhancements can be
submitted as a separate series targeting 'net-next', and add indents
before comments.
Changes since v3:
- merge the patch using PDMA RX path
- fixed mtk_poll_rx so that it is called with the remaining budget as input
Changes since v4:
- save one wmb and register update when no packet is handled inside
the mtk_poll_rx call
- fixed the incorrect packet count returned from mtk_napi_rx
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Wang [Sat, 3 Sep 2016 09:59:27 +0000 (17:59 +0800)]
net: ethernet: mediatek: enhance RX path by aggregating more SKBs into NAPI
The patch adds support for aggregating more SKBs fed into NAPI in
order to get more benefit from generic receive offload (GRO), by
peeking at the RX ring status and processing more packets right before
returning from the NAPI RX polling handler, as long as NAPI budget is
still available and packets are already present in hardware.
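A hedged sketch of the idea, not mediatek's actual code; rx_ring_has_packets()
and process_rx() stand in for the driver's real helpers:

#include <linux/netdevice.h>

static int example_napi_poll(struct napi_struct *napi, int budget)
{
	int work_done = 0;

	/* keep polling while budget remains and hardware reports more packets */
	do {
		work_done += process_rx(napi, budget - work_done);
	} while (work_done < budget && rx_ring_has_packets(napi));

	if (work_done < budget)
		napi_complete(napi);

	return work_done;
}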
Signed-off-by: Sean Wang <sean.wang@mediatek.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sean Wang [Sat, 3 Sep 2016 09:59:26 +0000 (17:59 +0800)]
net: ethernet: mediatek: enhance RX path by reducing the frequency of the memory barrier used
The patch moves wmb() outside the loop, which helps the RX path run
faster even though RX descriptors are no longer handed back for DMA use
as soon as possible. Based on my experiments, throughput still reaches
about 943 Mbps with no performance drop, tested on a setup with one
port using a gigabit PHY and 256 RX descriptors for DMA.
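Roughly, the change amounts to issuing one write barrier after the refill
loop instead of one per descriptor (sketch only; setup_rx_descriptor() and
the ring fields are stand-ins, not the actual mediatek code):

#include <linux/io.h>

static void example_refill_rx_ring(struct example_ring *ring, int count)
{
	int i;

	for (i = 0; i < count; i++)
		setup_rx_descriptor(ring, i);

	/* one barrier for the whole batch: descriptor writes must be
	 * visible before the index register update below */
	wmb();
	writel(ring->calc_idx, ring->calc_idx_reg);
}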
Signed-off-by: Sean Wang <sean.wang@mediatek.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Joe Perches [Fri, 2 Sep 2016 22:58:01 +0000 (15:58 -0700)]
hso: Use a more common logging style
Macros that end in an underscore are just odd.
Add hso_dbg(level, fmt, ...) and use it everywhere instead.
Several uses had additional unnecessary newlines as the
macro added a newline. Remove the newline from the macro
and add newlines to each use as appropriate.
Remove now unused D<digit> macros.
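A minimal sketch of such a macro, assuming a module-level 'debug' bitmask
(the real hso_dbg may differ in detail); note the explicit '\n' now supplied
at the call site:

#define hso_dbg(lvl, fmt, ...)						\
do {									\
	if ((lvl) & debug)						\
		pr_info("[%s] " fmt, __func__, ##__VA_ARGS__);		\
} while (0)

/* usage: hso_dbg(0x1, "tiocmget intr on port %d\n", port); */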
Signed-off-by: Joe Perches <joe@perches.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Add Microchip Linux Driver Support as maintainer
because this driver is maintained by Microchip.
Signed-off-by: Woojung Huh <Woojung.huh@gmail.com> Acked-by: Steve Glendinning <steve.glendinning@shawell.net> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Tue, 6 Sep 2016 19:58:14 +0000 (12:58 -0700)]
Merge branch 'mv88e6xxx-isolate-Global2'
Vivien Didelot says:
====================
net: dsa: mv88e6xxx: isolate Global2 support
Registers of Marvell chips are organized in internal SMI devices.
One of them at address 0x1C is called Global2. It provides an extended
set of registers, used for interrupt control, EEPROM access, indirect
PHY access (to bypass the PHY Polling Unit) and cross-chip setup.
Most chips have it, but some others don't (older ones such as 6060).
Now that its related code is isolated in mv88e6xxx_g2_* functions, move
it to its own global2.c file, making most of its setup code static.
Then make its compilation optional, which makes it possible to reduce the size of
the mv88e6xxx driver for devices such as home routers embedding Ethernet
chips without Global2 support.
It is present on most recent chips, so its support is enabled by default.
Changes in v2: fail probe if GLOBAL2 is required but not enabled.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Vivien Didelot [Fri, 2 Sep 2016 18:45:33 +0000 (14:45 -0400)]
net: dsa: mv88e6xxx: move Global2 code
Marvell chips are composed of multiple SMI devices. One of them at
address 0x1C is called Global2. It provides an extended set of
registers, used for interrupt control, EEPROM access, indirect PHY
access (to bypass the PHY Polling Unit) and cross-chip related setup.
Most chips have it, but some others don't (older ones such as 6060).
Now that its related code is isolated in mv88e6xxx_g2_* functions, move
it to its own global2.c file, making most of its setup code static.
Document each register along the way.
Its compilation can be later avoided for chips without such registers.
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Vivien Didelot [Fri, 2 Sep 2016 18:45:32 +0000 (14:45 -0400)]
net: dsa: mv88e6xxx: fix module naming
Since the mv88e6xxx.c file has been renamed, the driver compiled as a
module is called chip.ko instead of mv88e6xxx.ko. Fix this.
Fixes: fad09c73c270 ("net: dsa: mv88e6xxx: rename single-chip support") Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>
====================
Netfilter updates for net-next
The following patchset contains Netfilter updates for your net-next
tree. The most relevant updates are the removal of per-conntrack timers
in favour of a workqueue/garbage-collection approach from Florian
Westphal, the hash and numgen expressions for nf_tables from Laura
Garcia, updates on the nf_tables hash set to honor the NLM_F_EXCL flag,
removal of ip_conntrack sysctl and many other incremental updates on our
Netfilter codebase.
More specifically, they are:
1) Retrieve only 4 bytes to fetch ports in case of non-linear skb
transport area in dccp, sctp, tcp, udp and udplite protocol
conntrackers, from Gao Feng.
2) Missing whitespace on error message in physdev match, from Hangbin Liu.
3) Skip redundant IPv4 checksum calculation in nf_dup_ipv4, from Liping Zhang.
4) Add nf_ct_expires() helper function and use it, from Florian Westphal.
5) Replace opencoded nf_ct_kill() call in IPVS conntrack support, also
from Florian.
6) Rename nf_tables set implementation to nft_set_{name}.c
7) Introduce the hash expression to allow arbitrary hashing of selector
concatenations, from Laura Garcia Liebana.
8) Remove ip_conntrack sysctl backward compatibility code, this code has
been around for long time already, and we have two interfaces to do
this already: nf_conntrack sysctl and ctnetlink.
9) Use nf_conntrack_get_ht() helper function whenever possible, instead
of opencoding fetch of hashtable pointer and size, patch from Liping Zhang.
10) Add quota expression for nf_tables.
11) Add number generator expression for nf_tables, this supports
incremental and random generators that can be combined with maps,
very useful for load balancing purpose, again from Laura Garcia Liebana.
12) Fix a typo in a debug message in FTP conntrack helper, from Colin Ian King.
13) Introduce a nft_chain_parse_hook() helper function to parse chain hook
configuration, this is used by a follow up patch to perform better chain
update validation.
14) Add rhashtable_lookup_get_insert_key() to rhashtable and use it from the
nft_set_hash implementation to honor the NLM_F_EXCL flag.
15) Missing nulls check in nf_conntrack from nf_conntrack_tuple_taken(),
patch from Florian Westphal.
16) Don't use the DYING bit to know if the conntrack event has been already
delivered, instead a state variable to track event re-delivery
states, also from Florian.
17) Remove the per-conntrack timer, use the workqueue approach that was
discussed during the NFWS, from Florian Westphal.
18) Use the netlink conntrack table dump path to kill stale entries,
again from Florian.
19) Add a garbage collector to get rid of stale conntracks, from
Florian.
20) Reschedule garbage collector if eviction rate is high.
21) Get rid of the __nf_ct_kill_acct() helper.
22) Use ARPHRD_ETHER instead of hardcoded 1 from ARP logger.
23) Make nf_log_set() interface assertive on unsupported families.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
The workqueue "afs_lock_manager" queues work item &vnode->lock_work,
per vnode. Since there can be multiple vnodes and since their work items
can be executed concurrently, alloc_workqueue has been used to replace
the deprecated create_singlethread_workqueue instance.
The WQ_MEM_RECLAIM flag has been set to ensure forward progress under
memory pressure because the workqueue is being used on a memory reclaim
path.
Since there is a fixed number of work items, an explicit concurrency
limit is unnecessary here.
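The conversion pattern described here looks roughly like this (a sketch; the
queue name follows the text above, and the surrounding init function is
illustrative):

#include <linux/workqueue.h>
#include <linux/errno.h>

static struct workqueue_struct *afs_lock_manager;

static int example_init_lock_manager(void)
{
	/* was: create_singlethread_workqueue(...) */
	afs_lock_manager = alloc_workqueue("afs_lock_manager", WQ_MEM_RECLAIM, 0);
	if (!afs_lock_manager)
		return -ENOMEM;
	return 0;
}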
Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com> Signed-off-by: David Howells <dhowells@redhat.com>
The workqueue "afs_callback_update_worker" queues multiple work items
viz &vnode->cb_broken_work, &server->cb_break_work which require strict
execution ordering. Hence, an ordered dedicated workqueue has been used.
Since the workqueue is being used on a memory reclaim path, WQ_MEM_RECLAIM
has been set to ensure forward progress under memory pressure.
Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com> Signed-off-by: David Howells <dhowells@redhat.com>
The workqueue "afs_async_calls" queues work item
&call->async_work per afs_call. Since there could be multiple calls and since
these calls can be run concurrently, alloc_workqueue has been used to replace
the deprecated create_singlethread_workqueue instance.
The WQ_MEM_RECLAIM flag has been set to ensure forward progress under
memory pressure because the workqueue is being used on a memory reclaim
path.
Since there is a fixed number of work items, an explicit concurrency
limit is unnecessary here.
Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com> Signed-off-by: David Howells <dhowells@redhat.com>
The workqueue "afs_vlocation_update_worker" queues a single work item
&afs_vlocation_update, and hence it doesn't require execution ordering.
alloc_workqueue has therefore been used to replace the deprecated
create_singlethread_workqueue instance.
Since the workqueue is being used on a memory reclaim path, WQ_MEM_RECLAIM
flag has been set to ensure forward progress under memory pressure.
Since there is a fixed number of work items, an explicit concurrency
limit is unnecessary here.
Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com> Signed-off-by: David Howells <dhowells@redhat.com>
Paul Burton [Fri, 2 Sep 2016 14:22:48 +0000 (15:22 +0100)]
net: ti: cpmac: Fix compiler warning due to type confusion
cpmac_start_xmit() used the max() macro on skb->len (an unsigned int)
and ETH_ZLEN (a signed int literal). This led to the following compiler
warning:
In file included from include/linux/list.h:8:0,
from include/linux/module.h:9,
from drivers/net/ethernet/ti/cpmac.c:19:
drivers/net/ethernet/ti/cpmac.c: In function 'cpmac_start_xmit':
include/linux/kernel.h:748:17: warning: comparison of distinct pointer
types lacks a cast
(void) (&_max1 == &_max2); \
^
drivers/net/ethernet/ti/cpmac.c:560:8: note: in expansion of macro 'max'
len = max(skb->len, ETH_ZLEN);
^
On top of this, it assigned the result of the max() macro to a signed
integer whilst all further uses of it result in it being cast to varying
widths of unsigned integer.
Fix this up by using max_t to ensure the comparison is performed as
unsigned integers, and for consistency change the type of the len
variable to unsigned int.
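The fix boils down to the following pattern (a sketch, not the full driver
context):

#include <linux/kernel.h>	/* max_t() */
#include <linux/if_ether.h>	/* ETH_ZLEN */
#include <linux/skbuff.h>

static unsigned int example_padded_len(const struct sk_buff *skb)
{
	/* was: int len = max(skb->len, ETH_ZLEN); -- mixed signedness */
	return max_t(unsigned int, skb->len, ETH_ZLEN);
}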
Signed-off-by: Paul Burton <paul.burton@imgtec.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Add support for ndo_get_vf_config, and also fill in the default MAC
address that will be provided to the VF by firmware in case the user
doesn't provide one, so the user can also get the default MAC address
through ndo_get_vf_config.
Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com> Signed-off-by: David S. Miller <davem@davemloft.net>
WANG Cong [Fri, 2 Sep 2016 04:53:45 +0000 (21:53 -0700)]
netns: avoid disabling irq for netns id
We never read or change netns id in hardirq context;
the only place we read netns id in softirq context
is in vxlan_xmit(). So, it should be enough to just
disable BH.
Cc: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
WANG Cong [Fri, 2 Sep 2016 04:53:44 +0000 (21:53 -0700)]
vxlan: call peernet2id() in fdb notification
netns id should already be allocated each time we change
netns, that is, in dev_change_net_namespace() (more precisely
in rtnl_fill_ifinfo()). It is safe to just call peernet2id() here.
Cc: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Acked-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Joe Stringer [Fri, 2 Sep 2016 01:01:47 +0000 (18:01 -0700)]
openvswitch: Free tmpl with tmpl_free.
When an error occurs during conntrack template creation as part of
actions validation, we need to free the template. Previously we've been
using nf_ct_put() to do this, but nf_ct_tmpl_free() is more appropriate.
Signed-off-by: Joe Stringer <joe@ovn.org> Signed-off-by: David S. Miller <davem@davemloft.net>
David Howells [Sun, 4 Sep 2016 12:10:10 +0000 (13:10 +0100)]
rxrpc: The client call state must be changed before attachment to conn
We must set the client call state to RXRPC_CALL_CLIENT_SEND_REQUEST before
attaching the call to the connection struct, not after, as it's liable to
receive errors and conn aborts as soon as the assignment is made - and
these will cause its state to be changed outside of the initiating thread's
control.
Signed-off-by: David Howells <dhowells@redhat.com>
David S. Miller [Sat, 3 Sep 2016 00:11:32 +0000 (17:11 -0700)]
Merge branch 'liquidio-CN23XX-part-2'
Raghu Vatsavayi says:
====================
liquidio CN23XX support
I am posting the remaining half of the patchset after the acceptance
of the first half. With this patchset I am able to submit the complete
code of the V3 patchset, which you earlier advised me to split into
smaller ones.
This V5 patch also addresses all the comments from previous
submission:
1) Avoid busy loop while reading registers.
2) Other minor comments about debug messages and constants.
Please apply the patches in the following order, as some of them
depend on earlier patches.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
The broadcast protocol has turned out not to scale well beyond 70-80
nodes, while it is now possible to build TIPC clusters of at least ten
times that size. This commit series improves the NACK/retransmission
mechanism of the broadcast protocol to make it as scalable as the rest
of TIPC.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Jon Paul Maloy [Thu, 1 Sep 2016 17:52:51 +0000 (13:52 -0400)]
tipc: send broadcast nack directly upon sequence gap detection
Because of the risk of an excessive number of NACK messages and
retransmissions, receivers have until now abstained from sending
broadcast NACKs directly upon detection of a packet sequence number
gap. We have instead relied on such gaps being detected by link
protocol STATE message exchange, something that by necessity delays
such detection and subsequent retransmissions.
With the introduction of unicast NACK transmission and rate control
of retransmissions we can now remove this limitation. We now allow
receiving nodes to send NACKS immediately, while coordinating the
permission to do so among the nodes in order to avoid NACK storms.
Reviewed-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jon Paul Maloy [Thu, 1 Sep 2016 17:52:50 +0000 (13:52 -0400)]
tipc: rate limit broadcast retransmissions
As cluster sizes grow, so does the amount of identical or overlapping
broadcast NACKs generated by the packet receivers. This often leads to
'NACK crunches' resulting in huge numbers of redundant retransmissions
of the same packet ranges.
In this commit, we introduce rate control of broadcast retransmissions,
so that a retransmitted range cannot be retransmitted again until after
at least 10 ms. This reduces the frequency of duplicate, redundant
retransmissions by an order of magnitude, while having a significant
positive impact on overall throughput and scalability.
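The rate limit described above can be pictured as a simple timestamp check
(an illustrative sketch; the field and constant names are not tipc's own):

#include <linux/types.h>
#include <linux/jiffies.h>

#define EXAMPLE_BC_RETR_LIMIT	msecs_to_jiffies(10)

/* 'last_retr' records when this packet range was last retransmitted */
static bool example_retransmit_allowed(unsigned long *last_retr)
{
	if (time_before(jiffies, *last_retr + EXAMPLE_BC_RETR_LIMIT))
		return false;
	*last_retr = jiffies;
	return true;
}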
Reviewed-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jon Paul Maloy [Thu, 1 Sep 2016 17:52:49 +0000 (13:52 -0400)]
tipc: transfer broadcast nacks in link state messages
When we send broadcasts in clusters of more than 70-80 nodes, we sometimes
see the broadcast link resetting because of an excessive number of
retransmissions. This is caused by a combination of two factors:
1) A 'NACK crunch', where loss of broadcast packets is discovered
and NACK'ed by several nodes simultaneously, leading to multiple
redundant broadcast retransmissions.
2) The fact that the NACKs themselves are also sent as broadcast, leading
to excessive load and packet loss on the transmitting switch/bridge.
This commit deals with the latter problem, by moving sending of
broadcast nacks from the dedicated BCAST_PROTOCOL/NACK message type
to regular unicast LINK_PROTOCOL/STATE messages. We allocate 10 unused
bits in word 8 of the said message for this purpose, and introduce a
new capability bit, TIPC_BCAST_STATE_NACK in order to keep the change
backwards compatible.
Reviewed-by: Ying Xue <ying.xue@windriver.com> Signed-off-by: Jon Maloy <jon.maloy@ericsson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Roger Chen [Thu, 1 Sep 2016 17:50:00 +0000 (01:50 +0800)]
net: stmmac: dwmac-rk: fixes the gmac resume after PD on/off
The GMAC power domain (PD) will be disabled during suspend.
That causes the GRF registers to be reset, so the corresponding
GRF registers for the GMAC must be set up again.
Signed-off-by: Roger Chen <roger.chen@rock-chips.com> Signed-off-by: Caesar Wang <wxt@rock-chips.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Roger Chen [Thu, 1 Sep 2016 17:49:59 +0000 (01:49 +0800)]
net: stmmac: dwmac-rk: add rk3366 & rk3399 specific data
Add constants and callback functions for the dwmac on rk3228/rk3229 SoCs.
As can be seen, the base structure is the same, only registers and the
bits in them moved slightly.
Signed-off-by: Roger Chen <roger.chen@rock-chips.com> Signed-off-by: Caesar Wang <wxt@rock-chips.com> Reviewed-by: Heiko Stuebner <heiko@sntech.de> Signed-off-by: David S. Miller <davem@davemloft.net>
David Howells [Fri, 2 Sep 2016 21:39:44 +0000 (22:39 +0100)]
rxrpc: Fix uninitialised variable warning
Fix the following uninitialised variable warning:
../net/rxrpc/call_event.c: In function 'rxrpc_process_call':
../net/rxrpc/call_event.c:879:58: warning: 'error' may be used uninitialized in this function [-Wmaybe-uninitialized]
_debug("post net error %d", error);
^
Signed-off-by: David Howells <dhowells@redhat.com>
rxrpc: fix undefined behavior in rxrpc_mark_call_released
gcc -Wmaybe-uninitialized correctly points out a newly introduced bug
through which we can end up calling rxrpc_queue_call() for a dead
connection:
net/rxrpc/call_object.c: In function 'rxrpc_mark_call_released':
net/rxrpc/call_object.c:600:5: error: 'sched' may be used uninitialized in this function [-Werror=maybe-uninitialized]
This sets the 'sched' variable to zero to restore the previous
behavior.
Signed-off-by: Arnd Bergmann <arnd@arndb.de> Fixes: f5c17aaeb2ae ("rxrpc: Calls should only have one terminal state") Signed-off-by: David Howells <dhowells@redhat.com>
switchdev: Fix return value of switchdev_port_fdb_dump().
This patch fixes the return value of switchdev_port_fdb_dump() when
CONFIG_NET_SWITCHDEV is not set. This avoids "warning: return makes
integer from pointer without a cast [-Wint-conversion]" under several
compiler versions when building with that option unset.
This warning is due to commit d297653dd6f07afbe7e6c702a4bcd7615680002e
("rtnetlink: fdb dump: optimize by saving last interface markers").
Signed-off-by: Rami Rosen <rami.rosen@intel.com> Acked-by: Roopa Prabhu <roopa@cumulusnetworks.com> Reported-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 2 Sep 2016 17:46:45 +0000 (10:46 -0700)]
Merge branch 'bpf-perf-hw-sw-events'
Alexei Starovoitov says:
====================
perf, bpf: add support for bpf in sw/hw perf_events
this patch set is a follow up to the discussion:
https://lkml.kernel.org/r/20160804142853.GO6862@twins.programming.kicks-ass.net
It turned out to be simpler than what we discussed.
Patches 1-3 are bpf-side prep for the main patch 4,
which adds a bpf program as an overflow_handler to sw and hw perf_events.
Patches 5 and 6 are examples from myself and Brendan.
Peter,
to implement your suggestion to add ifdef CONFIG_BPF_SYSCALL
inside struct perf_event, I had to shuffle ifdefs in events/core.c
Please double check whether that is what you wanted to see.
v2->v3: fixed few more minor issues
v1->v2: fixed issues spotted by Peter and Daniel.
====================
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Sample instruction pointer and frequency count in a BPF map.
Signed-off-by: Brendan Gregg <bgregg@netflix.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
The bpf program is called 50 times a second and does
hashmap[kern&user_stackid]++. Its primary purpose is to check that key
bpf helpers like map lookup, update, get_stackid, trace_printk and ctx
access are all working.
It checks:
- PERF_COUNT_HW_CPU_CYCLES on all cpus
- PERF_COUNT_HW_CPU_CYCLES for current process and inherited perf_events to children
- PERF_COUNT_SW_CPU_CLOCK on all cpus
- PERF_COUNT_SW_CPU_CLOCK for current process
Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
perf, bpf: add perf events core support for BPF_PROG_TYPE_PERF_EVENT programs
Allow attaching BPF_PROG_TYPE_PERF_EVENT programs to sw and hw perf events
via overflow_handler mechanism.
When a program is attached, the overflow handlers become stacked.
The program acts as a filter.
Returning zero from the program means that the normal perf_event_output
handler will not be called and the sampling event won't be stored in the
ring buffer.
The overflow_handler_context==NULL is an additional safety check
to make sure programs are not attached to hw breakpoints and watchdog
in case other checks (that prevent that now anyway) get accidentally
relaxed in the future.
The program refcnt is incremented in case perf_events are inherited
when the target task is forked.
Similar to kprobe and tracepoint programs, there is no ioctl to
detach the program or swap an already attached program. User space is
expected to close(perf_event_fd) as it does right now for kprobe+bpf.
That restriction simplifies the code quite a bit.
The invocation of overflow_handler in __perf_event_overflow() is now
done via READ_ONCE, since that pointer can be replaced when the program
is attached while perf_event itself could have been active already.
There is no need to do similar treatment for event->prog, since it's
assigned only once before it's accessed.
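From user space, attachment is expected to follow the existing kprobe+bpf
convention, roughly as in this hedged sketch (error handling trimmed):

#include <sys/ioctl.h>
#include <linux/perf_event.h>

/* event_fd: perf_event_open() fd for a HW/SW event;
 * prog_fd:  fd of a loaded BPF_PROG_TYPE_PERF_EVENT program. */
static int example_attach_bpf(int event_fd, int prog_fd)
{
	if (ioctl(event_fd, PERF_EVENT_IOC_SET_BPF, prog_fd) < 0)
		return -1;
	if (ioctl(event_fd, PERF_EVENT_IOC_ENABLE, 0) < 0)
		return -1;
	/* no detach ioctl: close(event_fd) detaches and releases the program */
	return 0;
}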
Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
bpf: perf_event progs should only use preallocated maps
Make sure that BPF_PROG_TYPE_PERF_EVENT programs only use
preallocated hash maps, since doing memory allocation
in the overflow_handler can crash depending on where the NMI was triggered.
Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
bpf: introduce BPF_PROG_TYPE_PERF_EVENT program type
Introduce BPF_PROG_TYPE_PERF_EVENT programs that can be attached to
HW and SW perf events (PERF_TYPE_HARDWARE and PERF_TYPE_SOFTWARE,
respectively, in uapi/linux/perf_event.h).
The program visible context meta structure is
struct bpf_perf_event_data {
struct pt_regs regs;
__u64 sample_period;
};
which is accessible directly from the program:
int bpf_prog(struct bpf_perf_event_data *ctx)
{
... ctx->sample_period ...
... ctx->regs.ip ...
}
The bpf verifier rewrites the accesses into kernel internal
struct bpf_perf_event_data_kern which allows changing
struct perf_sample_data without affecting bpf programs.
New fields can be added to the end of struct bpf_perf_event_data
in the future.
Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
The verifier supported only 4-byte metafields in
struct __sk_buff and struct xdp_md. The metafields in the upcoming
struct bpf_perf_event are 8 bytes wide, to match the register width in
struct pt_regs.
Teach verifier to recognize 8-byte metafield access.
The patch doesn't affect safety of sockets and xdp programs.
They check for 4-byte only ctx access before these conditions are hit.
Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
We get a few warnings when building the kernel with W=1:
drivers/isdn/hardware/mISDN/hfcmulti.c:568:1: warning: no previous declaration for 'enablepcibridge' [-Wmissing-declarations]
drivers/isdn/hardware/mISDN/hfcmulti.c:574:1: warning: no previous declaration for 'disablepcibridge' [-Wmissing-declarations]
drivers/isdn/hardware/mISDN/hfcmulti.c:580:1: warning: no previous declaration for 'readpcibridge' [-Wmissing-declarations]
drivers/isdn/hardware/mISDN/hfcmulti.c:608:1: warning: no previous declaration for 'writepcibridge' [-Wmissing-declarations]
drivers/isdn/hardware/mISDN/hfcmulti.c:638:1: warning: no previous declaration for 'cpld_set_reg' [-Wmissing-declarations]
drivers/isdn/hardware/mISDN/hfcmulti.c:645:1: warning: no previous declaration for 'cpld_write_reg' [-Wmissing-declarations]
drivers/isdn/hardware/mISDN/hfcmulti.c:657:1: warning: no previous declaration for 'cpld_read_reg' [-Wmissing-declarations]
drivers/isdn/hardware/mISDN/hfcmulti.c:674:1: warning: no previous declaration for 'vpm_write_address' [-Wmissing-declarations]
drivers/isdn/hardware/mISDN/hfcmulti.c:681:1: warning: no previous declaration for 'vpm_read_address' [-Wmissing-declarations]
drivers/isdn/hardware/mISDN/hfcmulti.c:695:1: warning: no previous declaration for 'vpm_in' [-Wmissing-declarations]
drivers/isdn/hardware/mISDN/hfcmulti.c:716:1: warning: no previous declaration for 'vpm_out' [-Wmissing-declarations]
drivers/isdn/hardware/mISDN/hfcmulti.c:1028:1: warning: no previous declaration for 'plxsd_checksync' [-Wmissing-declarations]
....
In fact, these functions are only used in the file in which they are
declared, so they don't need a declaration and can instead be made static.
This patch therefore marks these functions 'static'.
Signed-off-by: Baoyou Xie <baoyou.xie@linaro.org> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 2 Sep 2016 05:48:33 +0000 (22:48 -0700)]
Merge branch 'br-next'
Nikolay Aleksandrov says:
====================
net: bridge: add per-port unknown multicast flood control
The first patch prepares the forwarding path by having the exact packet
type passed down so we can later filter based on it and the per-port
unknown mcast flood flag introduced in the second patch. It is similar to
how the per-port unknown unicast flood flag works.
Nice side-effects of patch 01 are the slight reduction of tests in the
fast-path and a few minor checkpatch fixes.
v3: don't change br_auto_mask as that will change user-visible behaviour
v2: make pkt_type an enum as per Stephen's comment
====================
Acked-by: Stephen Hemminger <stephen@networkplumber.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Add a per-port flag to control the unknown multicast flood, similar to the
unknown unicast flood flag, and break a few long lines in the netlink flag
exports.
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net: bridge: change unicast boolean to exact pkt_type
Remove the unicast flag and introduce an exact pkt_type. That would help us
for the upcoming per-port multicast flood flag and also slightly reduce the
tests in the input fast path.
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Roopa Prabhu [Wed, 31 Aug 2016 04:56:45 +0000 (21:56 -0700)]
rtnetlink: fdb dump: optimize by saving last interface markers
fdb dumps spanning multiple skb's currently restart from the first
interface again for every skb. This results in unnecessary
iterations on the already visited interfaces and their fdb
entries. In large scale setups, we have seen this slow
down fdb dumps considerably. On a system with 30k macs we
see fdb dumps spanning across more than 300 skbs.
To fix the problem, this patch replaces the existing single fdb
marker with three markers: netdev hash entries, netdevs and fdb
index to continue where we left off instead of restarting from the
first netdev. This is consistent with link dumps.
In the process of fixing the performance issue, this patch also
re-implements fix done by
commit 472681d57a5d ("net: ndo_fdb_dump should report -EMSGSIZE to rtnl_fdb_dump")
(with an internal fix from Wilson Kok) in the following ways:
- change ndo_fdb_dump handlers to return error code instead
of the last fdb index
- use cb->args strictly for dump frag markers and not error codes.
This is consistent with other dump functions.
Below results were taken on a system with 1000 netdevs
and 35085 fdb entries:
before patch:
$time bridge fdb show | wc -l
15065
real 1m11.791s
user 0m0.070s
sys 1m8.395s
(existing code does not return all macs)
after patch:
$time bridge fdb show | wc -l
35085
real 0m2.017s
user 0m0.113s
sys 0m1.942s
Signed-off-by: Roopa Prabhu <roopa@cumulusnetworks.com> Signed-off-by: Wilson Kok <wkok@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David Howells [Tue, 30 Aug 2016 19:42:14 +0000 (20:42 +0100)]
rxrpc: Don't expose skbs to in-kernel users [ver #2]
Don't expose skbs to in-kernel users, such as the AFS filesystem, but
instead provide a notification hook the indicates that a call needs
attention and another that indicates that there's a new call to be
collected.
This makes the following possibilities more achievable:
(1) Call refcounting can be made simpler if skbs don't hold refs to calls.
(2) skbs referring to non-data events will be able to be freed much sooner
rather than being queued for AFS to pick up as rxrpc_kernel_recv_data
will be able to consult the call state.
(3) We can shortcut the receive phase when a call is remotely aborted
because we don't have to go through all the packets to get to the one
cancelling the operation.
(4) It makes it easier to do encryption/decryption directly between AFS's
buffers and sk_buffs.
(5) Encryption/decryption can more easily be done in the AFS's thread
contexts - usually that of the userspace process that issued a syscall
- rather than in one of rxrpc's background threads on a workqueue.
(6) AFS will be able to wait synchronously on a call inside AF_RXRPC.
To make this work, the following interface function has been added:
This is the recvmsg equivalent. It allows the caller to find out about the
state of a specific call and to transfer received data into a buffer
piecemeal.
afs_extract_data() and rxrpc_kernel_recv_data() now do all the extraction
logic between them. They don't wait synchronously yet because the socket
lock needs to be dealt with.
As a temporary hack, sk_buffs going to an in-kernel call are queued on the
rxrpc_call struct (->knlrecv_queue) rather than being handed over to the
in-kernel user. To process the queue internally, a temporary function,
temp_deliver_data() has been added. This will be replaced with common code
between the rxrpc_recvmsg() path and the kernel_rxrpc_recv_data() path in a
future patch.
Signed-off-by: David Howells <dhowells@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
The workqueue "pegasus_workqueue" queues a single work item per pegasus
instance, and hence it doesn't require execution ordering.
alloc_workqueue has therefore been used to replace the deprecated
create_singlethread_workqueue instance.
The WQ_MEM_RECLAIM flag has been set to ensure forward progress under
memory pressure since it's a network driver.
Since there is a fixed number of work items, an explicit concurrency
limit is unnecessary here.
Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com> Acked-by: Tejun Heo <tj@kernel.org> Acked-by: Petko Manolov <petkan@mip-labs.com> Signed-off-by: David S. Miller <davem@davemloft.net>
alloc_ordered_workqueue() with WQ_MEM_RECLAIM set replaces the
deprecated create_singlethread_workqueue(). This is the identity
conversion.
The workqueue "wq" queues multiple work items viz
&bond->mcast_work, &nnw->work, &bond->mii_work, &bond->arp_work,
&bond->alb_work, &bond->ad_work, &bond->slave_arr_work
which require strict execution ordering. Hence, an ordered dedicated
workqueue has been used.
Since it is a network driver, WQ_MEM_RECLAIM has been set to
ensure forward progress under memory pressure.
Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com> Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Update the sky2 driver to pass the number of packets done to NAPI.
The driver was never updated when napi_complete_done was added.
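The pattern being applied is the following (a generic sketch, not sky2's
exact code):

#include <linux/netdevice.h>

static int example_poll(struct napi_struct *napi, int budget)
{
	int work_done = 0;

	/* ... process up to 'budget' packets, incrementing work_done ... */

	if (work_done < budget)
		napi_complete_done(napi, work_done);	/* report packets done */

	return work_done;
}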
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org> Acked-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 1 Sep 2016 21:03:40 +0000 (14:03 -0700)]
Merge branch 'stmmac-STM32F429'
Alexandre TORGUE says:
====================
Add Ethernet support on STM32F429
STM32F429 Chip embeds a Synopsys 3.50a MAC IP.
This series enhances the current stmmac driver to control it (code already
available) and adds basic glue for STM32F429 chip.
Changes since v5:
-Fix typo in bindings documentation patch.
-Change clock names in the stm32-dwmac glue driver / Documentation.
-After rebase, the stm32 ethernet node is now available. It has to be
updated according to the new clock names.
Changes since v4:
-Fix dirty copy/past in bindings documentation patch.
Changes since v3:
-Fix "tx-clk" and "rx-clk" as required clocks. Driver and bindings are
modified.
Changes since v2:
-Fix alphabetic order in Kconfig and Makefile.
-Improve code according to Joachim review.
-Binding: remove useless entry.
Changes since v1:
-Fix Kbuild issue in Kconfig.
-Remove init/exit callbacks. Suspend/resume and driver removal are no
longer handled in stmmac_pltfr but directly in the dwmac-stm32 glue driver.
-Take into account Joachim review.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexandre TORGUE [Mon, 29 Aug 2016 16:23:40 +0000 (18:23 +0200)]
net: ethernet: stmmac: add support of Synopsys 3.50a MAC IP
Add support for the Synopsys 3.50a MAC IP in the stmmac driver.
Acked-by: Giuseppe Cavallaro <peppe.cavallaro@st.com> Tested-by: Maxime Coquelin <maxime.coquelin@st.com> Signed-off-by: Alexandre TORGUE <alexandre.torgue@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Alexandre TORGUE [Mon, 29 Aug 2016 16:23:39 +0000 (18:23 +0200)]
Documentation: Bindings: Add STM32 DWMAC glue
Acked-by: Rob Herring <robh@kernel.org> Signed-off-by: Alexandre TORGUE <alexandre.torgue@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Alexandre TORGUE [Mon, 29 Aug 2016 16:23:38 +0000 (18:23 +0200)]
net: ethernet: dwmac: add Ethernet glue logic for stm32 chip
stm32f4xx family chips support the Synopsys MAC 3.510 IP.
This patch adds the settings for the glue logic:
-clocks
-mode selection: MII or RMII.
Reviewed-by: Joachim Eastwood <manabian@gmail.com> Acked-by: Giuseppe Cavallaro <peppe.cavallaro@st.com> Tested-by: Maxime Coquelin <maxime.coquelin@st.com> Signed-off-by: Alexandre TORGUE <alexandre.torgue@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Arnd Bergmann [Mon, 29 Aug 2016 12:37:14 +0000 (14:37 +0200)]
net: xgene: fix backward compatibility fix
A bugfix for backward compatibility handling introduced undefined
behavior for the case where of_parse_phandle() does not return
a valid entry, as "gcc -Wmaybe-uninitialized" reports:
drivers/net/ethernet/apm/xgene/xgene_enet_hw.c: In function 'xgene_enet_phy_connect':
drivers/net/ethernet/apm/xgene/xgene_enet_hw.c:776:6: error: 'phy_dev' may be used uninitialized in this function [-Werror=maybe-uninitialized]
drivers/net/ethernet/apm/xgene/xgene_enet_hw.c: In function 'xgene_enet_mdio_config':
drivers/net/ethernet/apm/xgene/xgene_enet_hw.c:776:6: error: 'phy_dev' may be used uninitialized in this function [-Werror=maybe-uninitialized]
We can work around this by removing the check for zero "np", as
of_phy_connect() will correctly handle a NULL argument so we fall
back into the normal error handling case.
Note that I had previously fixed another bug that resulted in the
exact same warning, but this is a different problem that was
introduced after my original fix.
Signed-off-by: Arnd Bergmann <arnd@arndb.de> Fixes: 03377e381bf4 ("drivers: net: xgene: Fix backward compatibility") Signed-off-by: David S. Miller <davem@davemloft.net>
This is a resubmission of v3, since the previous submission was not
sent to the netdev mailing list.
This series improves power management of the asix driver.
- Suspend/resume support is improved to save needed registers.
- Device disconnection is improved.
- Fixes AX88772x resume failures
- Implements IEEE 802.3 spec section "22.2.4.1.1 Reset" correctly
- Fixes AX_CMD_WRITE_MEDIUM_MODE being set incorrectly
Changes since v1:
- Added proper metadata tags to series.
- Added two more patches to series.
Changes since v2:
- Added coverletter
- Tested patches on AX88772A/AX88772B/AX88178/AX88179 hardware
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Robert Foss [Mon, 29 Aug 2016 13:32:19 +0000 (09:32 -0400)]
net: asix: autoneg will set WRITE_MEDIUM reg
From: Grant Grundler <grundler@chromium.org>
mii_nway_restart() causes PHY link change activity, and
ax88772_link_reset() will be called. link_reset will set the
AX_CMD_WRITE_MEDIUM_MODE register correctly.
asix_write_medium_mode() in reset() fills the register with a default
value, which may differ from the negotiation result, so do this first.
Ignore the return value since it's ignored in the XXX_link_reset()
functions.
Signed-off-by: Grant Grundler <grundler@google.com> Signed-off-by: Robert Foss <robert.foss@collabora.com> Tested-by: Robert Foss <robert.foss@collabora.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Robert Foss [Mon, 29 Aug 2016 13:32:18 +0000 (09:32 -0400)]
net: asix: see 802.3 spec for phy reset
From: Grant Grundler <grundler@chromium.org>
https://lkml.org/lkml/2014/11/11/947
Ben Hutchings is correct. IEEE 802.3 spec section "22.2.4.1.1 Reset" allows
the PHY up to a 500 ms delay. Mitigate the "max" delay by polling the PHY
until the BMCR_RESET bit is clear.
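A hedged sketch of that polling loop; read_phy_bmcr() stands in for the
driver's MDIO read helper and the surrounding function is illustrative:

#include <linux/mii.h>
#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/usb/usbnet.h>

static int example_wait_phy_reset(struct usbnet *dev)
{
	int i;

	for (i = 0; i < 500; i++) {	/* 500 ms worst case per 802.3 22.2.4.1.1 */
		if (!(read_phy_bmcr(dev) & BMCR_RESET))
			return 0;
		msleep(1);
	}
	return -ETIMEDOUT;
}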
Signed-off-by: Grant Grundler <grundler@chromium.org> Signed-off-by: Robert Foss <robert.foss@collabora.com> Tested-by: Robert Foss <robert.foss@collabora.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Robert Foss [Mon, 29 Aug 2016 13:32:17 +0000 (09:32 -0400)]
net: asix: Fix AX88772x resume failures
From: Allan Chou <allan@asix.com.tw>
The change fixes AX88772x resume failures by
- restoring incorrect AX88772A PHY registers when resetting
- stopping MAC operation when suspending
- restarting the MII when restoring the PHY
Signed-off-by: Allan Chou <allan@asix.com.tw> Signed-off-by: Robert Foss <robert.foss@collabora.com> Tested-by: Robert Foss <robert.foss@collabora.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Robert Foss [Mon, 29 Aug 2016 13:32:16 +0000 (09:32 -0400)]
net: asix: Avoid looping when the device is disconnected
From: Vincent Palatin <vpalatin@chromium.org>
Check the answers from the USB stack and avoid re-sending the request
multiple times if the device has disappeared.
Signed-off-by: Vincent Palatin <vpalatin@chromium.org> Signed-off-by: Robert Foss <robert.foss@collabora.com> Tested-by: Robert Foss <robert.foss@collabora.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Robert Foss [Mon, 29 Aug 2016 13:32:15 +0000 (09:32 -0400)]
net: asix: Add in_pm parameter
From: Freddy Xin <freddy@asix.com.tw>
In order to R/W registers in suspend/resume functions, in_pm flags are
added to some functions to determine whether the nopm version of usb
functions is called.
Save BMCR and ANAR PHY registers in suspend function and restore them
in resume function.
Reset HW in resume function to ensure the PHY works correctly.
Signed-off-by: Freddy Xin <freddy@asix.com.tw> Signed-off-by: Robert Foss <robert.foss@collabora.com> Tested-by: Robert Foss <robert.foss@collabora.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Julia Lawall [Wed, 31 Aug 2016 22:21:23 +0000 (00:21 +0200)]
net: axienet: constify ethtool_ops structures
Check for ethtool_ops structures that are only stored in the ethtool_ops
field of a net_device structure or passed as the second argument to
netdev_set_default_ethtool_ops. These contexts are declared const, so
ethtool_ops structures that have these properties can be declared as const
also.
The semantic patch that makes this change is as follows:
(http://coccinelle.lip6.fr/)
@ok2@
identifier r.i;
expression e;
position p;
@@
netdev_set_default_ethtool_ops(e, &i@p)
@bad@
position p != {r.p,ok1.p,ok2.p};
identifier r.i;
@@
i@p
@depends on !bad disable optional_qualifier@
identifier r.i;
@@
static
+const
struct ethtool_ops i = { ... };
// </smpl>
Suggested-by: Stephen Hemminger <stephen@networkplumber.org> Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
Julia Lawall [Wed, 31 Aug 2016 22:21:22 +0000 (00:21 +0200)]
r8152: constify ethtool_ops structures
Check for ethtool_ops structures that are only stored in the ethtool_ops
field of a net_device structure or passed as the second argument to
netdev_set_default_ethtool_ops. These contexts are declared const, so
ethtool_ops structures that have these properties can be declared as const
also.
The semantic patch that makes this change is as follows:
(http://coccinelle.lip6.fr/)
@ok2@
identifier r.i;
expression e;
position p;
@@
netdev_set_default_ethtool_ops(e, &i@p)
@bad@
position p != {r.p,ok1.p,ok2.p};
identifier r.i;
@@
i@p
@depends on !bad disable optional_qualifier@
identifier r.i;
@@
static
+const
struct ethtool_ops i = { ... };
// </smpl>
Suggested-by: Stephen Hemminger <stephen@networkplumber.org> Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
Julia Lawall [Wed, 31 Aug 2016 22:21:19 +0000 (00:21 +0200)]
net: mediatek: constify ethtool_ops structures
Check for ethtool_ops structures that are only stored in the ethtool_ops
field of a net_device structure or passed as the second argument to
netdev_set_default_ethtool_ops. These contexts are declared const, so
ethtool_ops structures that have these properties can be declared as const
also.
The semantic patch that makes this change is as follows:
(http://coccinelle.lip6.fr/)
@ok2@
identifier r.i;
expression e;
position p;
@@
netdev_set_default_ethtool_ops(e, &i@p)
@bad@
position p != {r.p,ok1.p,ok2.p};
identifier r.i;
@@
i@p
@depends on !bad disable optional_qualifier@
identifier r.i;
@@
static
+const
struct ethtool_ops i = { ... };
// </smpl>
Suggested-by: Stephen Hemminger <stephen@networkplumber.org> Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 31 Aug 2016 21:33:09 +0000 (14:33 -0700)]
Merge branch 'ppp-recursion'
Guillaume Nault says:
====================
ppp: fix deadlock upon recursive xmit
This series fixes the issue reported by Feng where packets looping
through a ppp device makes the module deadlock:
https://marc.info/?l=linux-netdev&m=147134567319038&w=2
The problem can occur on virtual interfaces (e.g. PPP over L2TP, or
PPPoE on vxlan devices), when a PPP packet is routed back to the PPP
interface.
PPP's xmit path isn't reentrant, so patch #1 uses a per-cpu variable
to detect and break recursion. Patch #2 sets the NETIF_F_LLTX flag to
avoid lock inversion issues between ppp and txqueue locks.
There are multiple entry points to the PPP xmit path. This series has
been tested with lockdep and should address recursion issues no matter
how the packet entered the path.
A similar issue in L2TP is not covered by this series:
l2tp_xmit_skb() also isn't reentrant, and it can be called as part of
PPP's xmit path (pppol2tp_xmit()), or directly from the L2TP socket
(l2tp_ppp_sendmsg()). If a packet is sent by l2tp_ppp_sendmsg() and
routed to the parent PPP interface, then it's going to hit
l2tp_xmit_skb() again.
Breaking recursion as done in ppp_generic is not enough, because we'd
still have a lock inversion issue (locking in l2tp_xmit_skb() can
happen before or after locking in ppp_generic). The best approach would
be to use the ip_tunnel functions and remove the socket locking in
l2tp_xmit_skb(). But that'd be something for net-next.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Guillaume Nault [Sat, 27 Aug 2016 20:22:32 +0000 (22:22 +0200)]
ppp: avoid deadlock on recursive xmit
In case of misconfiguration, a virtual PPP channel might send packets
back to their parent PPP interface. This typically happens in
misconfigured L2TP setups, where PPP's peer IP address is set with the
IP of the L2TP peer.
When that happens the system hangs due to PPP trying to recursively
lock its xmit path.
Key points about the lower layer's .start_xmit() callback:
* It can be called directly by a channel fd .write() or by
ppp_output_wakeup() or indirectly by a unit fd .write() or by
.ndo_start_xmit().
* In any case, it's always called with chan->downl held.
* It might route the packet back to its parent unit using
.ndo_start_xmit() as entry point.
This patch detects and breaks recursion in ppp_xmit_process(). This
function is a good candidate for the task because it's called early
enough after .ndo_start_xmit(), it's always part of the recursion
loop and it's on the path of whatever entry point is used to send
a packet on a PPP unit.
Recursion detection is done using the per-cpu ppp_xmit_recursion
variable.
Since ppp_channel_push() too locks the channel's xmit path and calls
the lower layer's .start_xmit() callback, we need to also increment
ppp_xmit_recursion there. However there's no need to check for
recursion, as it's out of the recursion loop.
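A minimal sketch of the guard described above (illustrative only; the real
patch integrates this with ppp_generic's existing locking, and
do_transmit_work() stands in for the existing xmit logic):

#include <linux/percpu.h>
#include <linux/bottom_half.h>

static DEFINE_PER_CPU(int, ppp_xmit_recursion);

static void example_ppp_xmit_process(struct ppp *ppp)
{
	local_bh_disable();
	if (unlikely(__this_cpu_read(ppp_xmit_recursion))) {
		/* packet routed back to its own unit: bail out instead of
		 * taking the xmit locks recursively and deadlocking */
		local_bh_enable();
		return;
	}

	__this_cpu_inc(ppp_xmit_recursion);
	do_transmit_work(ppp);
	__this_cpu_dec(ppp_xmit_recursion);
	local_bh_enable();
}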
Reported-by: Feng Gao <gfree.wind@gmail.com> Signed-off-by: Guillaume Nault <g.nault@alphalink.fr> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Wed, 31 Aug 2016 21:15:43 +0000 (14:15 -0700)]
Merge branch 'dsa-mdb-support'
Vivien Didelot says:
====================
net: dsa: add MDB support
This patchset adds the switchdev MDB object support to the DSA layer.
The MDB support for the mv88e6xxx driver is very similar to the FDB
support. The FDB operations care about unicast addresses while the MDB
operations care about multicast addresses.
Both operation sets load/purge/dump the Address Translation Unit (ATU),
thus common code is used.
Changes in v2 based on Andrew's comments:
- drop "group" in multicast database related doc and comment
- change _one for more relevant _fid in mv88e6xxx_port_db_dump_one
- return -EOPNOTSUPP if switchdev obj ID is neither _FDB nor _MDB
====================
Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
Vivien Didelot [Wed, 31 Aug 2016 15:50:04 +0000 (11:50 -0400)]
net: dsa: mv88e6xxx: make switchdev DB ops generic
The MDB support for the mv88e6xxx driver will be very similar to the FDB
support, since it consists of loading/purging/dumping addresses to/from
the Address Translation Unit (ATU).
Prepare the support for MDB by making the FDB code that accesses the ATU
generic. The FDB operations now provide access to the unicast addresses
while the MDB operations will provide access to the multicast addresses.
Signed-off-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com> Signed-off-by: David S. Miller <davem@davemloft.net>