Hadar Hen Zion [Fri, 1 Jul 2016 11:51:04 +0000 (14:51 +0300)]
net/mlx5e: Create NIC global resources only once
To allow creating more than one netdev over the same PCI function, we
change the driver such that global NIC resources are created once and
then shared among all the mlx5e netdevs running over that port.
Move the CQ UAR, PD (pdn), Transport Domain (tdn), MKey resources from
being kept in the mlx5e priv part to a new resources structure
(mlx5e_resources) placed under the mlx5_core device.
This patch doesn't add any new functionality.
Signed-off-by: Hadar Hen Zion <hadarh@mellanox.com> Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
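A minimal sketch of the shape such a shared-resources structure could take,
hanging off the mlx5_core device; field names and types here are assumptions
based on the resources listed above:

        struct mlx5e_resources {
                struct mlx5_uar       cq_uar;  /* shared CQ UAR */
                u32                   pdn;     /* protection domain */
                struct mlx5_td        td;      /* transport domain */
                struct mlx5_core_mkey mkey;    /* common MKey */
        };

        /* placed under the mlx5_core device so all mlx5e netdevs
         * running over the port share one instance
         */
        struct mlx5_core_dev {
                /* ... existing fields ... */
                struct mlx5e_resources mlx5e_res;
        };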
Or Gerlitz [Fri, 1 Jul 2016 11:51:03 +0000 (14:51 +0300)]
net/mlx5e: Add devlink based SRIOV mode changes
Implement handlers for the devlink commands to get and set the SRIOV
E-Switch mode.
When turning to the switchdev/offloads mode, we disable the e-switch
and enable it again in the new mode, create the NIC offloads table
and create VF reps.
When turning to legacy mode, we remove the VF reps and the offloads
table, and re-initiate the e-switch in its legacy mode.
The actual creation/removal of the VF reps is done in downstream patches.
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Or Gerlitz [Fri, 1 Jul 2016 11:51:02 +0000 (14:51 +0300)]
net/mlx5: Add devlink interface
The devlink interface is initially used to set/get the mode of the SRIOV e-switch.
Currently these are only stubs for get/set; a downstream patch will
actually fill them out.
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
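A hedged sketch of what such stubs could look like on the driver side; the
callback names follow the devlink E-Switch mode control patch below, while
the exact wiring is an assumption:

        static int mlx5_devlink_eswitch_mode_get(struct devlink *devlink, u16 *mode)
        {
                return -EOPNOTSUPP;     /* filled out by a downstream patch */
        }

        static int mlx5_devlink_eswitch_mode_set(struct devlink *devlink, u16 mode)
        {
                return -EOPNOTSUPP;     /* filled out by a downstream patch */
        }

        static const struct devlink_ops mlx5_devlink_ops = {
                .eswitch_mode_get = mlx5_devlink_eswitch_mode_get,
                .eswitch_mode_set = mlx5_devlink_eswitch_mode_set,
        };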
Or Gerlitz [Fri, 1 Jul 2016 11:51:01 +0000 (14:51 +0300)]
net/devlink: Add E-Switch mode control
Add the commands to set and show the mode of SRIOV E-Switch, two modes
are supported:
* legacy: operating in the "old" L2 based mode (DMAC --> VF vport)
* switchdev: the E-Switch is treated as a whitebox switch configured
using standard tools such as tc, bridge, openvswitch etc. To allow
working with these tools, for each VF a VF representor netdevice is
created by the E-Switch manager's vendor device driver instance (e.g. the PF).
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
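The two modes map naturally onto a uapi enum along these lines (a sketch;
the exact names are assumed to follow the devlink convention):

        enum devlink_eswitch_mode {
                DEVLINK_ESWITCH_MODE_LEGACY,    /* DMAC --> VF vport steering */
                DEVLINK_ESWITCH_MODE_SWITCHDEV, /* whitebox switch + VF representors */
        };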
Or Gerlitz [Fri, 1 Jul 2016 11:51:00 +0000 (14:51 +0300)]
net/mlx5: E-Switch, Add API to create vport rx rules
Add the API to create vport rx rules of the form
packet meta-data :: vport == $VPORT --> $TIR
where the TIR is opened by this VF representor.
This logic will be used for packets that didn't match any rule in the
e-switch datapath and should be received into the host OS through the
netdevice that represents the VF they were sent from.
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
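A hedged sketch of building such a rule with the mlx5 flow steering API of
this period; ft (the offloads table), vport and tirn are assumed inputs, and
error handling plus buffer cleanup are omitted:

        u32 *match_c = kzalloc(MLX5_ST_SZ_BYTES(fte_match_param), GFP_KERNEL);
        u32 *match_v = kzalloc(MLX5_ST_SZ_BYTES(fte_match_param), GFP_KERNEL);
        struct mlx5_flow_destination dest;
        struct mlx5_flow_rule *rule;
        void *misc;

        /* match: packet meta-data :: vport == $VPORT */
        misc = MLX5_ADDR_OF(fte_match_param, match_v, misc_parameters);
        MLX5_SET(fte_match_set_misc, misc, source_port, vport);
        misc = MLX5_ADDR_OF(fte_match_param, match_c, misc_parameters);
        MLX5_SET_TO_ONES(fte_match_set_misc, misc, source_port);

        /* action: forward to the representor's TIR */
        dest.type = MLX5_FLOW_DESTINATION_TYPE_TIR;
        dest.tir_num = tirn;

        rule = mlx5_add_flow_rule(ft, MLX5_MATCH_MISC_PARAMETERS,
                                  match_c, match_v,
                                  MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
                                  0, &dest);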
Or Gerlitz [Fri, 1 Jul 2016 11:50:59 +0000 (14:50 +0300)]
net/mlx5: E-Switch, Add offloads table
The offloads table belongs to the NIC offloads name-space and is to be
used as part of the SRIOV offloads logic to steer packets that hit the e-switch miss rule
to the TIR of the relevant VF representor.
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Or Gerlitz [Fri, 1 Jul 2016 11:50:58 +0000 (14:50 +0300)]
net/mlx5: Introduce offloads steering namespace
Add a new namespace (MLX5_FLOW_NAMESPACE_OFFLOADS) to be populated
with flow steering rules that have to be executed before the EN NIC
steering rules are matched.
The namespace is located after the bypass name-space and before the
kernel name-space. Therefore, it precedes the HW processing done for
rules set for the kernel NIC name-space.
Under SRIOV, it would allow us to match on e-switch missed packets
and forward them to the relevant VF representor TIR.
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Amir Vadai <amir@vadai.me> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
These rules are to be used for a send-to-vport logic which conceptually bypasses
the "normal" steering rules currently present at the e-switch datapath.
Such a rule should apply only to packets that originate in the e-switch
manager vport (0) and are sent for a given SQN used by a given VF
representor device; hence the matching logic.
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Or Gerlitz [Fri, 1 Jul 2016 11:50:56 +0000 (14:50 +0300)]
net/mlx5: E-Switch, Add miss rule for offloads mode
In the sriov offloads mode, packets that are not matched by any other
rule should be sent towards the e-switch manager for further processing.
Add such "miss" rule which matches ANY packet as the last rule in the
e-switch FDB and programs the HW to send the packet to vport 0 where
the e-switch manager runs.
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Or Gerlitz [Fri, 1 Jul 2016 11:50:55 +0000 (14:50 +0300)]
net/mlx5: E-Switch, Add support for the sriov offloads mode
Unlike in legacy mode, here forwarding rules are not learned by the
driver from events on macs set by VFs/VMs into their vports, but rather
should be programmed by higher-level SW entities.
That said, in the offloads mode (SRIOV_OFFLOADS), two flow
groups are still created by the driver for management (slow path) purposes:
The first group will be used for sending packets over e-switch vports
from the host OS where the e-switch management code runs, to be
received by VFs.
The second group will be used by a miss rule which forwards packets toward
the e-switch manager. Further logic will trap these packets such that
the receiving net-device as seen by the networking stack is the representor
of the vport that sent the packet over the e-switch data-path.
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Or Gerlitz [Fri, 1 Jul 2016 11:50:54 +0000 (14:50 +0300)]
net/mlx5: E-Switch, Add operational mode to the SRIOV e-Switch
Define three modes for the SRIOV e-switch operation, none (SRIOV_NONE,
none of the VF vports are enabled), legacy (SRIOV_LEGACY, the current mode)
and sriov offloads (SRIOV_OFFLOADS). Currently, when in SRIOV, only the
legacy mode is supported, where steering rules are of the form:
destination mac --> VF vport
This patch does not change any functionality.
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com> Signed-off-by: Saeed Mahameed <saeedm@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
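The three modes boil down to a small enum; a sketch, with the names as
described above:

        enum {
                SRIOV_NONE,     /* no VF vports enabled */
                SRIOV_LEGACY,   /* DMAC --> VF vport steering */
                SRIOV_OFFLOADS  /* rules programmed by higher-level SW */
        };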
Kejian Yan [Fri, 1 Jul 2016 09:34:13 +0000 (17:34 +0800)]
net: hns: get reset registers from DT
Since the subctrl registers may differ, it is better to
move them from the hns mdio driver routine to the device tree node.
Signed-off-by: Kejian Yan <yankejian@huawei.com> Signed-off-by: Yisen Zhuang <Yisen.Zhuang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Kejian Yan [Fri, 1 Jul 2016 09:34:12 +0000 (17:34 +0800)]
net: hns: add media-type property for hns
The port is of PORT_TP type if the service port is in GE mode. It is
wrong to infer the port type from whether it is a service port. Add a
media-type property so the port type can be determined correctly.
Reported-by: Jinchuan Tian <tianjinchuan1@huawei.com> Signed-off-by: Kejian Yan <yankejian@huawei.com> Signed-off-by: Yisen Zhuang <Yisen.Zhuang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
hns_mac_dev_to_enet_if() performs the same sequence as
hns_get_enet_interface(), and hns_get_enet_interface() is already called
at initialization to get the mac mode, which is not changed anywhere
afterwards. Thus calling hns_mac_dev_to_enet_if() to get the mac mode
is obviously redundant.
Reported-by: Jinchuan Tian <tianjinchuan1@huawei.com> Signed-off-by: Kejian Yan <yankejian@huawei.com> Signed-off-by: Yisen Zhuang <Yisen.Zhuang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
There are two approaches to assigning the data: one uses 2 loops, the
other uses 1. This patch normalizes the different methods to a single loop.
Signed-off-by: Daode Huang <huangdaode@hisilicon.com> Signed-off-by: Yisen Zhuang <Yisen.Zhuang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Some comment lines are missing a space before */, so this
patch adds the missing space.
Signed-off-by: Daode Huang <huangdaode@hisilicon.com> Signed-off-by: Yisen Zhuang <Yisen.Zhuang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Following previous review comments from Andy, this patch
deletes redundant parentheses.
Signed-off-by: Daode Huang <huangdaode@hisilicon.com> Signed-off-by: Yisen Zhuang <Yisen.Zhuang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
This patch fixes the code style of the hns driver to
simplify it.
Signed-off-by: Daode Huang <huangdaode@hisilicon.com> Signed-off-by: Yisen Zhuang <Yisen.Zhuang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
This patch adds maintainers for the HiSilicon network subsystem driver.
Signed-off-by: Daode Huang <huangdaode@hisilicon.com> Signed-off-by: Yisen Zhuang <Yisen.Zhuang@huawei.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 1 Jul 2016 20:45:24 +0000 (16:45 -0400)]
Merge branch 'rds-multipath-datastructures'
Sowmini Varadhan says:
====================
RDS:TCP data structure changes for multipath support
The second installment of changes to enable multipath support in
RDS-TCP. This series implements the changes in rds-tcp so that the
rds_conn_path has a pointer to the rds_tcp_connection in cp_transport_data.
Struct rds_tcp_connection keeps track of the inet_sk per path in
t_sock. The ->sk_user_data in turn is a pointer to the rds_conn_path.
With this set of changes, rds_tcp has the needed plumbing to handle
multiple paths (sockets) per rds_connection.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Sowmini Varadhan [Thu, 30 Jun 2016 23:11:18 +0000 (16:11 -0700)]
RDS: Do not send a pong to an incoming ping with 0 src port
RDS ping messages are sent with a non-zero src port to a zero
dst port, so that the rds pong messages can be sent back to the
originator's src port. However, if a confused/malicious sender
sends a ping with a 0 src port, we'd have an infinite ping-pong
loop. To avoid this, the receiver should ignore ping messages
with a 0 src port.
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sowmini Varadhan [Thu, 30 Jun 2016 23:11:17 +0000 (16:11 -0700)]
RDS: TCP: Simplify reconnect to avoid duelling reconnect attempts
When reconnecting, the peer with the smaller IP address will initiate
the reconnect, to avoid needless duelling SYN issues.
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
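A hedged sketch of the tie-break; c_laddr/c_faddr follow struct
rds_connection, while the surrounding reconnect plumbing is assumed:

        /* only the peer with the numerically smaller IP address initiates */
        if (conn->c_laddr > conn->c_faddr)
                return;         /* passive side: wait for the peer's SYN */
        queue_delayed_work(rds_wq, &cp->cp_conn_w, delay);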
Sowmini Varadhan [Thu, 30 Jun 2016 23:11:16 +0000 (16:11 -0700)]
RDS: TCP: Hooks to set up a single connection path
This patch adds ->conn_path_connect callbacks in the rds_transport
that are used to set up a single connection path.
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sowmini Varadhan [Thu, 30 Jun 2016 23:11:15 +0000 (16:11 -0700)]
RDS: TCP: make receive path use the rds_conn_path
The ->sk_user_data contains a pointer to the rds_conn_path
for the socket. Use this consistently in the rds_tcp_data_ready
callbacks to get the rds_conn_path for rds_recv_incoming.
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sowmini Varadhan [Thu, 30 Jun 2016 23:11:14 +0000 (16:11 -0700)]
RDS: TCP: make ->sk_user_data point to a rds_conn_path
The socket callbacks should all operate on a struct rds_conn_path,
in preparation for a MP capable RDS-TCP.
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sowmini Varadhan [Thu, 30 Jun 2016 23:11:13 +0000 (16:11 -0700)]
RDS: TCP: Refactor connection destruction to handle multiple paths
A single rds_connection may have multiple rds_conn_paths that have
to be carefully and correctly destroyed, for both rmmod and
netns-delete cases.
For both cases, we extract a single rds_tcp_connection for
each conn into a temporary list, and then invoke rds_conn_destroy()
which iteratively dismantles every path in the rds_connection.
For the netns deletion case, we additionally have to make sure
that we do not leave a socket in TIME_WAIT state, as this will
hold up the netns deletion. Thus we call rds_tcp_conn_paths_destroy()
to reset state quickly.
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sowmini Varadhan [Thu, 30 Jun 2016 23:11:12 +0000 (16:11 -0700)]
RDS: TCP: Make rds_tcp_connection track the rds_conn_path
The struct rds_tcp_connection is the transport-specific private
data structure that tracks TCP information per rds_conn_path.
Modify this structure to have a back-pointer to the rds_conn_path
for which it is the ->cp_transport_data.
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
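A hedged sketch of the resulting shape; member names are assumptions based
on the description above:

        struct rds_tcp_connection {
                struct socket        *t_sock;   /* inet socket for this path */
                struct rds_conn_path *t_cpath;  /* back-pointer: the path whose
                                                 * cp_transport_data points here */
        };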
Sowmini Varadhan [Thu, 30 Jun 2016 23:11:11 +0000 (16:11 -0700)]
RDS: TCP: Remove dead logic around c_passive in rds-tcp
The c_passive bit is only intended for the IB transport and will
never be encountered in rds-tcp, so remove the dead logic that
predicates on this bit.
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sowmini Varadhan [Thu, 30 Jun 2016 23:11:10 +0000 (16:11 -0700)]
RDS: Rework path specific indirections
Refactor code to avoid separate indirections for single-path
and multipath transports. All transports (both single and mp-capable)
will get a pointer to the rds_conn_path, and can trivially derive
the rds_connection from the ->cp_conn.
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 1 Jul 2016 20:32:27 +0000 (16:32 -0400)]
Merge branch 'bpf-cgroup2'
Martin KaFai Lau says:
====================
cgroup: bpf: cgroup2 membership test on skb
This series implements a bpf way to
check the cgroup2 membership of a skb (sk_buff).
It is similar to the feature added in netfilter: c38c4597e4bf ("netfilter: implement xt_cgroup cgroup2 path match")
The current target is the tc-like usage.
v3:
- Remove WARN_ON_ONCE(!rcu_read_lock_held())
- Stop BPF_MAP_TYPE_CGROUP_ARRAY usage in patch 2/4
- Avoid mounting bpf fs manually in patch 4/4
- Thanks to Daniel for the review and the above suggestions
- Check CONFIG_SOCK_CGROUP_DATA instead of CONFIG_CGROUPS. Thanks to
the kbuild bot's report.
Patch 2/4 only needs CONFIG_CGROUPS while patch 3/4 needs
CONFIG_SOCK_CGROUP_DATA. Since a single bpf cgrp2 array alone is
not useful for now, CONFIG_SOCK_CGROUP_DATA is also used in
patch 2/4. We can fine tune it later if we find other use cases
for the cgrp2 array.
- Return EAGAIN instead of ENOENT if the cgrp2 array entry is
NULL. It is to distinguish these two cases: 1) the userland has
not populated this array entry yet, or 2) no cgrp2 was found from the skb.
- Belated thanks to Alexei and Tejun for reviewing v1 and giving advice on
this work.
v2:
- Fix two return cases in cgroup_get_from_fd()
- Fix compilation errors when CONFIG_CGROUPS is not used:
- arraymap.c: avoid registering BPF_MAP_TYPE_CGROUP_ARRAY
- filter.c: tc_cls_act_func_proto() returns NULL on BPF_FUNC_skb_in_cgroup
- Add comments to BPF_FUNC_skb_in_cgroup and cgroup_get_from_fd()
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Martin KaFai Lau [Thu, 30 Jun 2016 17:28:45 +0000 (10:28 -0700)]
cgroup: bpf: Add an example to do cgroup checking in BPF
test_cgrp2_array_pin.c:
A userland program that creates a bpf_map (BPF_MAP_TYPE_CGROUP_ARRAY),
populates/updates it with a cgroup2 backed fd and pins it to a
bpf-fs's file. The pinned file can be loaded by tc and then used
by the bpf prog later. This program can also update an existing pinned
array, which could be useful for debugging/testing purposes.
test_cgrp2_tc_kern.c:
A bpf prog which should be loaded by tc. It is to demonstrate
the usage of bpf_skb_in_cgroup.
test_cgrp2_tc.sh:
A script that glues the test_cgrp2_array_pin.c and
test_cgrp2_tc_kern.c together. The idea is like:
1. Load the test_cgrp2_tc_kern.o by tc
2. Use test_cgrp2_array_pin.c to populate a BPF_MAP_TYPE_CGROUP_ARRAY
with a cgroup fd
3. Do a 'ping -6 ff02::1%ve' to ensure the packet has been
dropped because of a match on the cgroup
Most of the lines in test_cgrp2_tc.sh are boilerplate
to set up the cgroup/bpf-fs/net-devices/netns, etc. It is
not bulletproof on errors but should work well enough and
give enough debug info if things do not go well.
Signed-off-by: Martin KaFai Lau <kafai@fb.com> Cc: Alexei Starovoitov <ast@fb.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Tejun Heo <tj@kernel.org> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Martin KaFai Lau [Thu, 30 Jun 2016 17:28:44 +0000 (10:28 -0700)]
cgroup: bpf: Add bpf_skb_in_cgroup_proto
Adds a bpf helper, bpf_skb_in_cgroup, to decide if a skb->sk
belongs to a descendant of a cgroup2. It is similar to the
feature added in netfilter:
commit c38c4597e4bf ("netfilter: implement xt_cgroup cgroup2 path match")
The user is expected to populate a BPF_MAP_TYPE_CGROUP_ARRAY
which will be used by bpf_skb_in_cgroup().
Modifications to the bpf verifier ensure that BPF_MAP_TYPE_CGROUP_ARRAY
and bpf_skb_in_cgroup() are always used together.
Signed-off-by: Martin KaFai Lau <kafai@fb.com> Cc: Alexei Starovoitov <ast@fb.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Tejun Heo <tj@kernel.org> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
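A hedged sketch of a tc classifier using the pair together, in the style of
the kernel samples; the map and section names are illustrative:

        struct bpf_map_def SEC("maps") cgrp_map = {
                .type        = BPF_MAP_TYPE_CGROUP_ARRAY,
                .key_size    = sizeof(u32),
                .value_size  = sizeof(u32),
                .max_entries = 1,
        };

        SEC("classifier")
        int handle_egress(struct __sk_buff *skb)
        {
                /* slot 0 is populated from userland with a cgroup2 backed fd */
                if (bpf_skb_in_cgroup(skb, &cgrp_map, 0) == 1)
                        return TC_ACT_SHOT;     /* skb->sk is in the cgroup subtree */
                return TC_ACT_OK;
        }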
Martin KaFai Lau [Thu, 30 Jun 2016 17:28:43 +0000 (10:28 -0700)]
cgroup: bpf: Add BPF_MAP_TYPE_CGROUP_ARRAY
Add a BPF_MAP_TYPE_CGROUP_ARRAY and its bpf_map_ops's implementations.
To update an element, the caller is expected to obtain a cgroup2 backed
fd by open(cgroup2_dir) and then update the array with that fd.
Signed-off-by: Martin KaFai Lau <kafai@fb.com> Cc: Alexei Starovoitov <ast@fb.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Tejun Heo <tj@kernel.org> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Martin KaFai Lau [Thu, 30 Jun 2016 17:28:42 +0000 (10:28 -0700)]
cgroup: Add cgroup_get_from_fd
Add a helper function to get a cgroup2 from a fd. It will be
stored in a bpf array (BPF_MAP_TYPE_CGROUP_ARRAY) which will
be introduced in a later patch.
Signed-off-by: Martin KaFai Lau <kafai@fb.com> Cc: Alexei Starovoitov <ast@fb.com> Cc: Daniel Borkmann <daniel@iogearbox.net> Cc: Tejun Heo <tj@kernel.org> Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 1 Jul 2016 20:00:52 +0000 (16:00 -0400)]
Merge branch 'bpf-robustify'
Daniel Borkmann says:
====================
Further robustify putting BPF progs
This series addresses a potential issue reported to us by Jann Horn
with regards to putting progs. First patch moves progs generally under
RCU destruction and second patch refactors getting of progs to simplify
code a bit. For details, please see individual patches. Note, we think
that addressing this one in net-next should be sufficient.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Thu, 30 Jun 2016 15:24:44 +0000 (17:24 +0200)]
bpf: refactor bpf_prog_get and type check into helper
Since bpf_prog_get() and the program type check are used in a couple of places,
refactor this into a small helper function that we can make use of. Since
the non-RO prog->aux part is not used in performance critical paths and
program destruction via RCU is very unlikely when doing the put, we
shouldn't have an issue just doing the bpf_prog_get() + prog->type != type
check, but actually not taking the ref at all (due to being in fdget() /
fdput() section of the bpf fd) is even cleaner and makes the diff smaller
as well, so just go for that. Callsites are changed to make use of the new
helper where possible.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Thu, 30 Jun 2016 15:24:43 +0000 (17:24 +0200)]
bpf: generally move prog destruction to RCU deferral
Jann Horn reported the following analysis that could potentially result
in a very hard to trigger (if not impossible) UAF race; to quote his
event timeline:
- Set up a process with threads T1, T2 and T3
- Let T1 set up a socket filter F1 that invokes another filter F2
through a BPF map [tail call]
- Let T1 trigger the socket filter via a unix domain socket write,
don't wait for completion
- Let T2 call PERF_EVENT_IOC_SET_BPF with F2, don't wait for completion
- Now T2 should be behind bpf_prog_get(), but before bpf_prog_put()
- Let T3 close the file descriptor for F2, dropping the reference
count of F2 to 2
- At this point, T1 should have looked up F2 from the map, but not
finished executing it
- Let T3 remove F2 from the BPF map, dropping the reference count of
F2 to 1
- Now T2 should call bpf_prog_put() (wrong BPF program type), dropping
the reference count of F2 to 0 and scheduling bpf_prog_free_deferred()
via schedule_work()
- At this point, the BPF program could be freed
- BPF execution is still running in a freed BPF program
While at PERF_EVENT_IOC_SET_BPF time it's only guaranteed that the perf
event fd we're doing the syscall on doesn't disappear from underneath us
for whole syscall time, it may not be the case for the bpf fd used as
an argument only after we did the put. It needs to be a valid fd pointing
to a BPF program at the time of the call to make the bpf_prog_get() and
while T2 gets preempted, F2 must have dropped reference to 1 on the other
CPU. The fput() from the close() in T3 should also add additional delay
to the reference drop via exit_task_work() when bpf_prog_release() gets
called as well as scheduling bpf_prog_free_deferred().
That said, it nevertheless makes sense to move the BPF prog destruction
generally after an RCU grace period to guarantee that scenarios such as the above,
but also others as recently fixed in ceb56070359b ("bpf, perf: delay release
of BPF prog after grace period") with regards to tail calls won't happen.
Integrating bpf_prog_free_deferred() directly into the RCU callback is
not allowed since the invocation might happen from either softirq or
process context, so we're not permitted to block. Reviewing all bpf_prog_put()
invocations from the eBPF side (note, cBPF -> eBPF progs don't use this for
their destruction), the conversion to call_rcu() looks good to me.
Since we don't know whether, at the time of attaching the program, we're
already part of a tail call map, we need to use the RCU variant. However,
due to this there won't be severely more stress on the RCU callback queue:
situations with above bpf_prog_get() and bpf_prog_put() combo in practice
normally won't lead to releases, but even if they would, enough effort/
cycles have to be put into loading a BPF program into the kernel already.
Reported-by: Jann Horn <jannh@google.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
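A hedged sketch of the deferral on the last put; the real destruction path
also releases used maps and memlock charges, which is elided here:

        static void __bpf_prog_put_rcu(struct rcu_head *rcu)
        {
                struct bpf_prog_aux *aux = container_of(rcu, struct bpf_prog_aux, rcu);

                /* bpf_prog_free() itself defers to a workqueue, since this
                 * callback may run in softirq context where we cannot block
                 */
                bpf_prog_free(aux->prog);
        }

        void bpf_prog_put(struct bpf_prog *prog)
        {
                if (atomic_dec_and_test(&prog->aux->refcnt))
                        call_rcu(&prog->aux->rcu, __bpf_prog_put_rcu);
        }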
David S. Miller [Fri, 1 Jul 2016 09:40:58 +0000 (05:40 -0400)]
Merge branch 'qed-next'
Manish Chopra says:
====================
qede: Enhancements
This patch series adds support for a few small fastpath features
and some code refactoring.
Note - regarding get/set tunable configuration via ethtool:
surprisingly, there is NO ethtool application support for
such configuration even though we have kernel support.
Do let us know if we need to add support for that in user ethtool.
Please consider applying this series to "net-next".
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Manish Chopra [Thu, 30 Jun 2016 06:35:22 +0000 (02:35 -0400)]
qede: Bump up driver version to 8.10.1.20
Signed-off-by: Manish Chopra <manish.chopra@qlogic.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Manish Chopra [Thu, 30 Jun 2016 06:35:21 +0000 (02:35 -0400)]
qede: Add get/set rx copy break tunable support
Signed-off-by: Manish <manish.chopra@qlogic.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Manish Chopra [Thu, 30 Jun 2016 06:35:20 +0000 (02:35 -0400)]
qede: Utilize xmit_more
This patch uses the xmit_more optimization to reduce the
number of TX doorbell writes per packet.
Signed-off-by: Manish <manish.chopra@qlogic.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
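A hedged sketch of the pattern at the end of the xmit routine;
qede_update_tx_producer() stands in for the doorbell write and is an
assumed helper name:

        /* ring the doorbell only when the stack has no more packets
         * queued for this round, or when the queue is stopping anyway
         */
        if (!skb->xmit_more || netif_tx_queue_stopped(netdev_txq))
                qede_update_tx_producer(txq);   /* single doorbell write */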
Manish Chopra [Thu, 30 Jun 2016 06:35:19 +0000 (02:35 -0400)]
qede: qede_poll refactoring
This patch cleans up the qede_poll() routine a bit
and allows qede_poll() to do a single iteration to handle
TX completion [as under heavy TX load qede_poll() might
run for an indefinite time in the while(1) loop for TX
completion processing and cause a CPU stall].
Signed-off-by: Manish <manish.chopra@qlogic.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Manish Chopra [Thu, 30 Jun 2016 06:35:18 +0000 (02:35 -0400)]
qede: Add support for handling IP fragmented packets.
When handling IP fragmented packets with csum in their
transport header, the csum isn't changed as part of the
fragmentation. As a result, the packet containing the
transport headers would have the correct csum of the original
packet, but one that mismatches the actual packet that
passes on the wire. As a result, on receive path HW would
give an indication that the packet has incorrect csum,
which would cause qede to discard the incoming packet.
Since HW also delivers a notification of IP fragments,
change the driver behavior to pass such incoming packets
to the stack and let it decide whether they need
to be dropped.
Signed-off-by: Manish <manish.chopra@qlogic.com> Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 1 Jul 2016 09:32:28 +0000 (05:32 -0400)]
Merge branch 'tun-skb_array'
Jason Wang says:
====================
switch to use tx skb array in tun
This series tries to switch to use skb array in tun. This is used to
eliminate the spinlock contention between producer and consumer. The
conversion was straightforward: just introdce a tx skb array and use
it instead of sk_receive_queue.
A minor issue is to keep the tx_queue_len behaviour, since tun used to
use it for the length of sk_receive_queue. This is done through:
- add the ability to resize multiple rings at once to avoid handling
partial resize failure for multiple rings.
- add the support for zero length ring.
- introduce a notifier which was triggered when tx_queue_len was
changed for a netdev.
- resize all queues during the tx_queue_len changing.
Tests show about 15% improvement on guest rx pps:
Before: ~1300000pps
After : ~1500000pps
Changes from V3:
- fix kbuild warnings
- call NETDEV_CHANGE_TX_QUEUE_LEN on IFLA_TXQLEN
Changes from V2:
- add multiple rings resizing support for ptr_ring/skb_array
- add zero length ring support
- introduce a NETDEV_CHANGE_TX_QUEUE_LEN
- drop new flags
Changes from V1:
- switch to use skb array instead of a customized circular buffer
- add non-blocking support
- rename .peek to .peek_len
- drop lockless peeking since test show very minor improvement
====================
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Acked-from-altitude: 34697 feet. Signed-off-by: David S. Miller <davem@davemloft.net>
Jason Wang [Thu, 30 Jun 2016 06:45:36 +0000 (14:45 +0800)]
tun: switch to use skb array for tx
We used to queue tx packets in sk_receive_queue, which is less
efficient since it requires spinlocks to synchronize between producer
and consumer.
This patch tries to address this by:
- switch from sk_receive_queue to a skb_array, and resize it when
tx_queue_len was changed.
- introduce a new proto_ops, peek_len, which is used for peeking the
skb length.
- implement a tun version of peek_len for vhost_net to use and convert
vhost_net to use peek_len if possible.
Pktgen test shows about 15.3% improvement on guest receiving pps for small
buffers:
Before: ~1300000pps
After : ~1500000pps
Signed-off-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
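A hedged sketch of the producer/consumer split; tfile->tx_array is the
assumed per-queue skb_array:

        /* xmit path: produce into the ring, no queue spinlock taken */
        if (skb_array_produce(&tfile->tx_array, skb))
                goto drop;              /* ring full */

        /* read path: consume from the other end */
        skb = skb_array_consume(&tfile->tx_array);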
Jason Wang [Thu, 30 Jun 2016 06:45:35 +0000 (14:45 +0800)]
net: introduce NETDEV_CHANGE_TX_QUEUE_LEN
This patch introduces a new event - NETDEV_CHANGE_TX_QUEUE_LEN, which
will be triggered when tx_queue_len changes. It can be used by net devices
that want to do some processing at that time. An example is tun, which may
want to resize its tx array when tx_queue_len is changed.
Cc: John Fastabend <john.r.fastabend@intel.com> Signed-off-by: Jason Wang <jasowang@redhat.com> Acked-by: John Fastabend <john.r.fastabend@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
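A hedged sketch of a listener for the new event, roughly how tun could
hook it:

        static int tun_device_event(struct notifier_block *unused,
                                    unsigned long event, void *ptr)
        {
                struct net_device *dev = netdev_notifier_info_to_dev(ptr);

                switch (event) {
                case NETDEV_CHANGE_TX_QUEUE_LEN:
                        /* resize the per-queue skb arrays to dev->tx_queue_len */
                        break;
                }
                return NOTIFY_DONE;
        }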
Jason Wang [Thu, 30 Jun 2016 06:45:34 +0000 (14:45 +0800)]
skb_array: add wrappers for resizing
Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sometimes we need to support resizing multiple queues at once. This is
because it is not easy to recover from a partial failure when resizing
multiple queues.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Jason Wang [Thu, 30 Jun 2016 06:45:32 +0000 (14:45 +0800)]
skb_array: minor tweak
Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 1 Jul 2016 09:03:51 +0000 (05:03 -0400)]
Merge branch 'sch_hfsc-fixes-cleanups'
Michal Soltys says:
====================
HFSC patches, part 1
It's a revised version of part of the patches I submitted a really, really
long time ago (back then I asked Patrick to ignore them as I found some
issues shortly after submitting).
Anyway, this is the first set, with very simple fixes/changes, though some of
them are relatively subtle (I tried to write very exhaustive commit messages
explaining the what and why of those).
The patches are against net-next tree.
The second set will be heavier - or rather will come with more complex explanations; among those I have:
- a fix to subtle issue introduced in
http://permalink.gmane.org/gmane.linux.kernel.commits.2-4/8281
along with simplifying related stuff
- update times to 96 bits (which allows us to "just" use 32 bit shifts and
improves curve definition accuracy at more extreme low/high speeds)
- add curve "merging" instead of just selecting in convex case (computations
mirror those from concave intersection)
But these are eventually for later.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Michal Soltys [Thu, 30 Jun 2016 00:26:48 +0000 (02:26 +0200)]
net/sched/sch_hfsc.c: anchor virtual curve at proper vt in hfsc_change_fsc()
cl->cl_vt alone is relative only to the current backlog period, while
the curve operates on cumulative virtual time. This patch adds the
missing cl->cl_vtoff.
Signed-off-by: Michal Soltys <soltys@ziu.info> Signed-off-by: David S. Miller <davem@davemloft.net>
Michal Soltys [Thu, 30 Jun 2016 00:26:47 +0000 (02:26 +0200)]
net/sched/sch_hfsc.c: go passive after vt update
When a class is going passive, it should update its cl_vt first
to be consistent with the last dequeue operation.
Otherwise its cl_vt will be one packet behind and the parent's cvtmax might
not be updated either.
One possible side effect is if some class goes passive and subsequently
goes active /without/ its parent going passive - with cl_vt lagging one
packet behind - comparison made in init_vf() will be affected (same
period).
Signed-off-by: Michal Soltys <soltys@ziu.info> Signed-off-by: David S. Miller <davem@davemloft.net>
Michal Soltys [Thu, 30 Jun 2016 00:26:44 +0000 (02:26 +0200)]
net/sched/sch_hfsc.c: handle corner cases where head may change invalidating calculated deadline
Realtime scheduling implemented in HFSC uses head of the queue to make
the decision about which packet to schedule next. But in case of any
head drop, the deadline calculated for the previous head is not
necessarily correct for the next head (unless both packets have the same
length).
Thanks to the peek() function used during dequeue - which internally is a
dequeue operation - hfsc is almost safe from this issue, as peek()
dequeues and isolates the head storing it temporarily until the real
dequeue happens.
But there is one exception: if after the class activation a drop happens
before the first dequeue operation, there's never a chance to do the
peek().
Adding peek() call in enqueue - if this is the first packet in a new
backlog period AND the scheduler has realtime curve defined - fixes that
one corner case. The 1st hfsc_dequeue() will use that peeked packet,
just as every subsequent hfsc_dequeue() call uses the packet peeked by
the previous call.
Signed-off-by: Michal Soltys <soltys@ziu.info> Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Mon, 27 Jun 2016 16:51:53 +0000 (18:51 +0200)]
tcp: md5: use kmalloc() backed scratch areas
Some arches have virtually mapped kernel stacks, or soon will.
tcp_md5_hash_header() uses an automatic variable to copy tcp header
before mangling th->check and calling crypto function, which might
be problematic on such arches.
David says that using percpu storage is also problematic on non SMP
builds.
Just use kmalloc() to allocate scratch areas.
Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: Andy Lutomirski <luto@amacapital.net> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 30 Jun 2016 13:29:07 +0000 (09:29 -0400)]
Merge branch '1GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue
Jeff Kirsher says:
====================
Intel Wired LAN Driver Updates 2016-06-29
This series contains updates and fixes to e1000e, igb, ixgbe and fm10k. A
true smorgasbord of changes.
Jake cleans up some obscurity by not using the BIT() macro on a bitshift
operation and also fixes the calculated index when looping through the
indir array. He fixes the issue where igb's workqueue item for the overflow
check caused a surprise remove event. The ptp_flags variable is
added to simplify the work of writing several complex MAC type checks
in the PTP code while fixing the workqueue.
Alex Duyck fixes the receive buffers alignment which should not be L1
cache aligned, but to 512 bytes instead.
Denys Vlasenko prevents a division by zero which was reported under
VMWare for e1000e.
Amritha fixes an issue where filters in a child hash table must be
cleared from the hardware before deleting the filter links in ixgbe.
Bhaktipriya Shridhar simply replaces the deprecated create_workqueue()
with alloc_workqueue() for fm10k.
Tony corrects ixgbe ethtool reporting to show x550 supports hardware
timestamping of all packets.
Emil fixes an issue where MAC-VLANs on the VF fail to pass traffic due
to spoofed packets.
Andrew Lunn increases performance on some systems where syncing a buffer
for DMA is expensive. So rather than sync the whole 2K receive buffer,
only synchronize the length of the frame.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 30 Jun 2016 13:12:23 +0000 (09:12 -0400)]
Merge branch 'nfp-next'
Jakub Kicinski says:
====================
nfp: few code improvements
Three small patches for net-next. The first and second patches
improve the code quality by spelling things correctly and
removing unused parameters. The third patch hooks in the standard
kernel implementation of .get_link() in ethtool ops.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Wed, 29 Jun 2016 20:55:54 +0000 (21:55 +0100)]
nfp: remove unused parameter from nfp_net_write_mac_addr()
nfp_net_write_mac_addr() always writes to the BAR the current
device address taken from the netdev struct. The address given
as a parameter is actually ignored. Since all callers pass
netdev->dev_addr simply remove the parameter.
While at it improve the function's kdoc a bit.
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Dan Carpenter [Wed, 29 Jun 2016 14:39:43 +0000 (17:39 +0300)]
be2net: signedness bug in be_msix_enable()
"num_vec" needs to be signed for the error handling to work.
Fixes: e261768e9e39 ('be2net: support asymmetric rx/tx queue counts') Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Acked-by: Sathya Perla <sathya.perla@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 30 Jun 2016 12:52:12 +0000 (08:52 -0400)]
Merge branch 'mediatek-next'
John Crispin says:
====================
net-next: mediatek: IRQ cleanups, fixes and grouping
This series contains 2 small code cleanups that are leftovers from the
MIPS support. There is also a small fix that adds proper locking to the
code accessing the IRQ registers. Without this fix we saw deadlocks caused
by the last patch of the series, which adds IRQ grouping. The grouping
feature allows us to use different IRQs for TX and RX. By doing so we can
use affinity to let the SoC handle the IRQs on different cores.
This series depends on a previous series currently sitting in net.git
starting with
commit 562c5a70400c ("net: mediatek: only wake the queue if it is stopped")
up to
commit 82c6544dddc6 ("net: mediatek: remove superfluous queue wake up call")
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
John Crispin [Wed, 29 Jun 2016 11:38:11 +0000 (13:38 +0200)]
net-next: mediatek: add support for IRQ grouping
The ethernet core has 3 IRQs. Using the IRQ grouping registers we are able
to separate TX and RX IRQs, which allows us to service them on separate
cores. This patch splits the IRQ handler into 2 separate functions, one for
TX and another for RX. The TX housekeeping is split out into its own NAPI
handler.
Signed-off-by: John Crispin <john@phrozen.org> Signed-off-by: David S. Miller <davem@davemloft.net>
John Crispin [Wed, 29 Jun 2016 11:38:10 +0000 (13:38 +0200)]
net-next: mediatek: add IRQ locking
The code that enables and disables IRQs is missing proper locking. After
adding the IRQ grouping patch and routing the RX and TX IRQs to different
cores we experienced IRQ stalls. Fix this by adding proper locking.
We use a dedicated lock to reduce the latency of the IRQ code.
Signed-off-by: John Crispin <john@phrozen.org> Signed-off-by: David S. Miller <davem@davemloft.net>
John Crispin [Wed, 29 Jun 2016 11:38:09 +0000 (13:38 +0200)]
net-next: mediatek: don't use intermediate variables to store IRQ masks
The code currently uses variables to store the interrupt bit masks, which
are never modified. This is legacy code from an early version of the driver
that supported MIPS based SoCs where the IRQ bits depended on the actual
SoC. As the bits are the same for all ARM based SoCs using this driver we
can remove the intermediate variables.
Signed-off-by: John Crispin <john@phrozen.org> Signed-off-by: David S. Miller <davem@davemloft.net>
The driver was originally written for MIPS based SoCs. These required the
IRQ mask register to be read after writing it to ensure that the content
was actually applied. As this version only works on ARM based SoCs, we can
safely remove the 2 reads as they are no longer required.
Signed-off-by: John Crispin <john@phrozen.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Mateusz Bajorski [Wed, 29 Jun 2016 07:22:10 +0000 (09:22 +0200)]
fib_rules: Added NLM_F_EXCL support to fib_nl_newrule
When adding a rule with the NLM_F_EXCL flag, check whether the same rule
already exists. If it does, exit with -EEXIST.
This is already implemented in iproute2:
if (cmd == RTM_NEWRULE) {
req.n.nlmsg_flags |= NLM_F_CREATE|NLM_F_EXCL;
req.r.rtm_type = RTN_UNICAST;
}
Tested ipv4 and ipv6 with net-next linux on qemu x86.
Expected behavior after the patch:
localhost ~ # ip rule
0: from all lookup local
32766: from all lookup main
32767: from all lookup default
localhost ~ # ip rule add from 10.46.177.97 lookup 104 pref 1005
localhost ~ # ip rule add from 10.46.177.97 lookup 104 pref 1005
RTNETLINK answers: File exists
localhost ~ # ip rule
0: from all lookup local
1005: from 10.46.177.97 lookup 104
32766: from all lookup main
32767: from all lookup default
There was already a topic regarding this but I don't see any changes
merged and the problem still occurs.
https://lkml.kernel.org/r/1135778809.5944.7.camel+%28%29+localhost+%21+localdomain
Signed-off-by: Mateusz Bajorski <mateusz.bajorski@nokia.com> Acked-by: David Ahern <dsa@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
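A hedged sketch of the kernel-side check in fib_nl_newrule(); rule_exists()
is an assumed helper that compares the candidate against the installed
rules:

        if (nlh->nlmsg_flags & NLM_F_EXCL &&
            rule_exists(ops, frh, tb, rule)) {
                err = -EEXIST;
                goto errout_free;
        }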
"When peer uses tiny windows, there is no use in packetizing to sub-MSS
pieces for the sake of SWS or making sure there are enough packets in
the pipe for fast recovery."
The test should be > TCP_MSS_DEFAULT not >= 512. This allows low end
devices that send an MSS of 536 (TCP_MSS_DEFAULT) to see better network
performance by sending it 536 bytes of data at a time instead of bounding
to half window size (268). Other network stacks work this way, e.g. HP-UX.
Signed-off-by: Shane Seymour <shane.seymour@hpe.com> Signed-off-by: David S. Miller <davem@davemloft.net>
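A hedged sketch of the changed bound in tcp_bound_to_half_wnd():

        if (tp->max_window > TCP_MSS_DEFAULT)   /* was: >= 512 */
                cutoff = (tp->max_window >> 1); /* bound to half the window */
        else
                cutoff = tp->max_window;        /* tiny window: use it whole */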
Andrey Vagin [Mon, 27 Jun 2016 22:33:56 +0000 (15:33 -0700)]
tcp: add an ability to dump and restore window parameters
We found that sometimes a restored tcp socket doesn't work.
A reason for this bug is incorrect window parameters, in which case
tcp_acceptable_seq() returns tcp_wnd_end(tp) instead of tp->snd_nxt. The
other side drops packets with this seq, because seq is less than
tp->rcv_nxt (tcp_sequence()).
Data from a send queue is sent only if there is enough space in a
window, so when we restore unacked data, we need to expand a window to
fit this data.
This was in a first version of this patch:
"tcp: extend window to fit all restored unacked data in a send queue"
Then Alexey recommended restoring the window parameters instead of
adjusting them according to the data in the send queue. This sounds reasonable.
rcv_wnd has to be restored, because it was reported to the other side
and the offered window is never shrunk.
One of the reasons why we need to restore snd_wnd was described above.
Cc: Pavel Emelyanov <xemul@parallels.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru> Cc: James Morris <jmorris@namei.org> Cc: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org> Cc: Patrick McHardy <kaber@trash.net> Signed-off-by: Andrey Vagin <avagin@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net>
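A hedged sketch of the window state being dumped/restored; beyond the
snd_wnd/rcv_wnd fields discussed above, the exact uapi layout is an
assumption:

        struct tcp_repair_window {
                __u32   snd_wl1;
                __u32   snd_wnd;        /* must be restored, see above */
                __u32   max_window;
                __u32   rcv_wnd;        /* reported to the peer, never shrunk */
                __u32   rcv_wup;
        };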
David S. Miller [Thu, 30 Jun 2016 10:18:32 +0000 (06:18 -0400)]
Merge branch 'bridge-igmp-stats'
Nikolay Aleksandrov says:
====================
net: bridge: add support for IGMP/MLD stats
This patchset adds support for the new IFLA_STATS_LINK_XSTATS_SLAVE
attribute which can be used with RTM_GETSTATS in order to export per-slave
statistics. It works by passing the attribute to the linkxstats callback
and if the callback user supports it - it should dump that slave's stats.
This is much more scalable and permits us to request only a single port's
statistics instead of dumping everything every time.
The second patch adds support for per-port IGMP/MLD statistics and uses
the new API to export them for the bridge and its ports. The stats are
made in a very lightweight manner, the normal fast-path is not affected
at all and the flood paths (br_flood/br_multicast_flood) are only affected
if the packet is IGMP and the IGMP stats have been enabled using cache-hot
data for the check.
v2: Patch 01 is new, patch 02 has been reworked to use the new API, also
in addition counters for IGMP/MLD parse errors have been added and members
are added for per-port multicast traffic stats. The multicast counting has
been slightly optimized (moved the br_multicast_count inside the IPv4/6
IGMP functions after the checks for IGMP traffic) to avoid one conditional
that was on all of the multicast traffic path (both IGMP and other).
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
net: bridge: add support for IGMP/MLD stats and export them via netlink
This patch adds stats support for the currently used IGMP/MLD types by the
bridge. The stats are per-port (plus one stat per-bridge) and per-direction
(RX/TX). The stats are exported via netlink via the new linkxstats API
(RTM_GETSTATS). In order to minimize the performance impact, a new option
is used to enable/disable the stats - multicast_stats_enabled, similar to
the recent vlan stats. Also in order to avoid multiple IGMP/MLD type
lookups and checks, we make use of the current "igmp" member of the bridge
private skb->cb region to record the type on Rx (both host-generated and
external packets pass by multicast_rcv()). We can do that since the igmp
member was used as a boolean and all the valid IGMP/MLD types are positive
values. The normal bridge fast-path is not affected at all, the only
affected paths are the flooding ones and since we make use of the IGMP/MLD
type, we can quickly determine if the packet should be counted using
cache-hot data (cb's igmp member). We add counters for:
* IGMP Queries
* IGMP Leaves
* IGMP v1/v2/v3 reports
* MLD Queries
* MLD Leaves
* MLD v1/v2 reports
These are invaluable when monitoring or debugging complex multicast setups
with bridges.
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
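A hedged sketch of the flood-path accounting; the helper name, signature
and the enable-flag field are assumptions, the point being that the hot
path only tests the cache-hot cb byte:

        /* count only when the cached cb type says this is IGMP/MLD */
        if (br->multicast_stats_enabled && BR_INPUT_SKB_CB(skb)->igmp)
                br_multicast_count(br, p, skb, BR_INPUT_SKB_CB(skb)->igmp,
                                   BR_MCAST_DIR_TX);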
net: rtnetlink: add support for the IFLA_STATS_LINK_XSTATS_SLAVE attribute
This patch adds support for the IFLA_STATS_LINK_XSTATS_SLAVE attribute
which allows to export per-slave statistics if the master device supports
the linkxstats callback. The attribute is passed down to the linkxstats
callback and it is up to the callback user to use it (an example has been
added to the only current user - the bridge). This allows us to query only
specific slaves of master devices like bridge ports and export only what
we're interested in instead of having to dump all ports and searching only
for a single one. This will be used to export per-port IGMP/MLD stats and
also per-port vlan stats in the future, possibly other statistics as well.
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 30 Jun 2016 09:54:48 +0000 (05:54 -0400)]
Merge branch 'bpf-helper-improvements'
Daniel Borkmann says:
====================
BPF helper improvements
This set adds various BPF helper improvements, that is, cleaning
up and adding BPF_F_CURRENT_CPU flag for tracing helper, allowing
for preemption checks on bpf_get_smp_processor_id() helper, and
adding two new helpers bpf_skb_change_{proto, type} for tc related
programs. For further details please see individual patches.
Note, this set requires -net to be merged into -net-next tree first.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Tue, 28 Jun 2016 10:18:28 +0000 (12:18 +0200)]
bpf: add bpf_skb_change_type helper
This work adds a helper for changing skb->pkt_type in a controlled way.
We only allow a subset of possible values and can extend that in future
should other use cases come up. Doing this as a helper has the advantage
that errors can be handeled gracefully and thus helper kept extensible.
It's a write counterpart to pkt_type member we can already read from
struct __sk_buff context. Major use case is to change incoming skbs to
PACKET_HOST in a programmatic way instead of having to recirculate via
redirect(..., BPF_F_INGRESS), for example.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Tue, 28 Jun 2016 10:18:27 +0000 (12:18 +0200)]
bpf: add bpf_skb_change_proto helper
This patch adds a minimal helper for doing the groundwork of changing
the skb->protocol in a controlled way. Currently supported is v4 to
v6 and vice versa transitions, which allows f.e. for a minimal, static
nat64 implementation where applications in containers that still
require IPv4 can be transparently operated in an IPv6-only environment.
For example, host facing veth of the container can transparently do
the transitions in a programmatic way with the help of clsact qdisc
and cls_bpf.
Idea is to separate concerns for keeping complexity of the helper
lower, which means that the programs utilize bpf_skb_change_proto(),
bpf_skb_store_bytes() and bpf_lX_csum_replace() to get the job done,
instead of doing everything in a single helper (and thus partially
duplicating helper functionality). Also, bpf_skb_change_proto()
shouldn't need to deal with raw packet data as this is done by other
helpers.
bpf_skb_proto_6_to_4() and bpf_skb_proto_4_to_6() unclone the skb to
operate on a private one, push or pop additionally required header
space and migrate the gso/gro meta data from the shared info. We do
mark the gso type as dodgy so that headers are checked and segs
recalculated by the gso/gro engine. The gso_size target is adapted
as well. The flags argument added is currently reserved and can be
used for future extensions.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
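A hedged sketch of the intended composition inside a cls_bpf program for
the v4 -> v6 direction; ip6h, l4_off, old_csum and new_csum are assumed to
be prepared earlier, and this is nowhere near a complete nat64:

        if (bpf_skb_change_proto(skb, __constant_htons(ETH_P_IPV6), 0) < 0)
                return TC_ACT_SHOT;
        /* write the freshly built IPv6 header into the reserved room */
        bpf_skb_store_bytes(skb, ETH_HLEN, &ip6h, sizeof(ip6h), 0);
        /* patch the L4 checksum for the pseudo-header change */
        bpf_l4_csum_replace(skb, l4_off, old_csum, new_csum,
                            BPF_F_PSEUDO_HDR | sizeof(new_csum));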
Daniel Borkmann [Tue, 28 Jun 2016 10:18:26 +0000 (12:18 +0200)]
bpf: don't use raw processor id in generic helper
Use smp_processor_id() for the generic helper bpf_get_smp_processor_id()
instead of the raw variant. This allows for preemption checks when we
have DEBUG_PREEMPT, and otherwise uses the raw variant anyway. We only
need to keep the raw variant for socket filters, but we can reuse the
helper that is already there from cBPF side.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Tue, 28 Jun 2016 10:18:25 +0000 (12:18 +0200)]
bpf, trace: add BPF_F_CURRENT_CPU flag for bpf_perf_event_read
Follow-up commit to 1e33759c788c ("bpf, trace: add BPF_F_CURRENT_CPU
flag for bpf_perf_event_output") to add the same functionality into
bpf_perf_event_read() helper. The split of index into flags and index
component is also safe here, since such large maps are rejected during
map allocation time.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
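A hedged sketch from a tracing program; perf_map is assumed to be a
BPF_MAP_TYPE_PERF_EVENT_ARRAY populated from userland:

        /* read the counter of the executing CPU without computing the index */
        u64 cnt = bpf_perf_event_read(&perf_map, BPF_F_CURRENT_CPU);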
Daniel Borkmann [Tue, 28 Jun 2016 10:18:24 +0000 (12:18 +0200)]
bpf, trace: fetch current cpu only once
We currently have two invocations, which is unnecessary. Fetch it only
once and use the smp_processor_id() variant, so we also get preemption
checks along with it when DEBUG_PREEMPT is set.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Tue, 28 Jun 2016 10:18:23 +0000 (12:18 +0200)]
bpf: minor cleanups on fd maps and helpers
Some minor cleanups: i) Remove the unlikely() from fd array map lookups
and let the CPU branch predictor do its job; scenarios where there is not
always a map entry are perfectly valid. ii) Move the attribute type check
in the bpf_perf_event_read() helper a bit earlier so it's consistent wrt
checks with bpf_perf_event_output() helper as well. iii) remove some
comments that are self-documenting in kprobe_prog_is_valid_access() and
therefore make it consistent to tp_prog_is_valid_access() as well.
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Several cases of overlapping changes, except the packet scheduler
conflicts which deal with the addition of the free list parameter
to qdisc_enqueue().
Signed-off-by: David S. Miller <davem@davemloft.net>
Linus Lüssing [Tue, 10 May 2016 16:41:27 +0000 (18:41 +0200)]
batman-adv: Add debugfs table for mcast flags
This patch adds a debugfs table with originators and their corresponding
multicast flags to help users figure out why multicast optimizations
might be enabled or disabled for them.
Tested-by: Simon Wunderlich <sw@simonwunderlich.de> Signed-off-by: Linus Lüssing <linus.luessing@c0d3.blue> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
Linus Lüssing [Tue, 10 May 2016 16:41:26 +0000 (18:41 +0200)]
batman-adv: Adding logging of mcast flag changes
With this patch changes relevant to a node's own multicast flags are
printed to the 'mcast' log level.
Tested-by: Simon Wunderlich <sw@simonwunderlich.de> Signed-off-by: Linus Lüssing <linus.luessing@c0d3.blue> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
Linus Lüssing [Tue, 10 May 2016 16:41:25 +0000 (18:41 +0200)]
batman-adv: Add multicast optimization support for bridged setups
With this patch we are finally able to support multicast optimizations
in bridged setups, too. So far, if a bridge was added on top of a
soft-interface (e.g. bat0) the batman-adv multicast optimizations
needed to be disabled to avoid packet loss.
Current Linux bridge implementations and API can now provide us
with the so far missing information about interested but "remote"
multicast receivers behind bridge ports.
The Linux bridge performs the detection of remote participants
interested in multicast packets with its own mature, so-called
IGMP and MLD snooping code and stores the results in its
database. With the new API provided by the bridge, batman-adv can
now simply hook into this database.
We then reliably announce the gathered multicast listeners to
other nodes through the batman-adv translation table.
Additionally, the Linux bridge provides us with the information about
whether an IGMP/MLD querier exists. If there is none then we need to
disable multicast optimizations, as we cannot learn about multicast
listeners on external, bridged-in hosts in that case.
Tested-by: Simon Wunderlich <sw@simonwunderlich.de> Signed-off-by: Linus Lüssing <linus.luessing@c0d3.blue> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
Linus Lüssing [Tue, 10 May 2016 16:41:24 +0000 (18:41 +0200)]
batman-adv: Always flood IGMP/MLD reports
With this patch IGMP or MLD reports are always flooded. This is
necessary for the upcoming bridge integration to function without
multicast packet loss.
With the report handling so far bridges might miss interested multicast
listeners, leading to wrongly excluding ports from multicast packet
forwarding.
Currently we are treating IGMP/MLD reports, the messages bridges use to
learn about interested multicast listeners, just as any other multicast
packet: We try to send them to nodes matching its multicast destination.
Unfortunately, the destination addresses of reports of the older
IGMPv2/MLDv1 protocol families do not strictly adhere to their own
protocol: more precisely, the interested receiver, an IGMPv2 or MLDv1
querier, itself usually does not listen to the multicast destination
address of any reports.
Therefore with this patch we are simply excluding IGMP/MLD reports from
the multicast forwarding code path and keep flooding them. By that
any bridge receives them and can properly learn about listeners.
To avoid compatibility issues with older nodes not yet implementing this
report handling, we need to force them to flood reports: We do this by
bumping the multicast TVLV version to 2, effectively disabling their
multicast optimization.
Tested-by: Simon Wunderlich <sw@simonwunderlich.de> Signed-off-by: Linus Lüssing <linus.luessing@c0d3.blue> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
Andrew Lunn [Mon, 9 May 2016 18:03:36 +0000 (20:03 +0200)]
batman-adv: Include frame priority in fragment header
Unfragmented frames which traverse a node have their skb->priority set
by looking at the IP ToS byte, or the 802.1p header. However for
fragments this is not possible, only one of the fragments will contain
the headers. Instead, place the priority into the fragment header and
on receiving a fragment, use this information to set the skb->priority
for when the fragment is forwarded.
Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
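A hedged sketch of the two ends; batman-adv keeps skb->priority in the
256..263 range, so 3 bits in the fragment header suffice (the exact
bitfield layout is an assumption):

        /* sender: stash the priority in the fragment header */
        if (skb->priority >= 256 && skb->priority <= 263)
                frag_header.priority = skb->priority - 256;

        /* forwarding node: restore it before sending the fragment on */
        skb->priority = frag_packet->priority + 256;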
Andrew Lunn [Mon, 9 May 2016 18:03:35 +0000 (20:03 +0200)]
batman-adv: Set skb priority in fragments
BATMAN will set the skb->priority based on the IP precedence or 802.1q
tag. However, if it needs to fragment the frame, it currently leaves
the fragment skb with the default priority and actually overwrites the
priority in the unfragmented frame. Fix this.
Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
Marek Lindner [Tue, 10 May 2016 14:31:59 +0000 (22:31 +0800)]
batman-adv: init ELP tweaking options only once
The ELP interval and throughput override interface settings are initialized
with default settings every time an interface is added to a mesh.
This patch prevents this behavior by moving the configuration init to the
interface detection routine which runs only once per interface.
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
[a@unstable.cc: move initialization to batadv_v_hardif_init] Signed-off-by: Antonio Quartulli <a@unstable.cc> Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
batman-adv: move GW mode and selection class to private data structure
To reduce the field pollution in our main batadv_priv data structure
we've already created some substructures so that we could group fields
in a convenient manner.
However, gw_mode and gw_sel_class are still part of the main object.
Move both fields to the GW private substructure.
Signed-off-by: Antonio Quartulli <a@unstable.cc> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
batman-adv: remove useless inline attribute for sysfs helper function
The compiler can optimize functions within the same C file, so there
is no need to make inlining explicit.
Remove the useless inline attribute for __batadv_store_uint_attr()
Signed-off-by: Antonio Quartulli <a@unstable.cc> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
batman-adv: remove ogm_emit and ogm_schedule API calls
The ogm_emit and ogm_schedule API calls were rather tied to the
B.A.T.M.A.N. IV logic and therefore rather difficult to use
with other algorithm implementations.
Remove such calls and move the surrounding logic into the
B.A.T.M.A.N. IV specific code.
Signed-off-by: Antonio Quartulli <a@unstable.cc> Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
Marek Lindner [Mon, 2 May 2016 13:58:50 +0000 (21:58 +0800)]
batman-adv: remove unused callback from batadv_algo_ops struct
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
Marek Lindner [Mon, 2 May 2016 17:52:08 +0000 (01:52 +0800)]
batman-adv: refactor batadv_neigh_node_* functions to follow common style
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
Marek Lindner [Mon, 9 May 2016 10:42:12 +0000 (18:42 +0800)]
batman-adv: update elp interval documentation
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch> Acked-by: Antonio Quartulli <a@unstable.cc> Signed-off-by: Sven Eckelmann <sven@narfation.org> Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>