Ying Xue [Mon, 4 May 2015 02:36:47 +0000 (10:36 +0800)]
tipc: adjust locking policy of subscription
Currently the subscriber's lock protects not only the subscriber's
subscription list but also all subscriptions linked into that list.
However, as the members of a subscription are never changed after
initialization, there is no need to protect subscriptions under the
subscriber's lock. Using the lock only to protect the subscriber's
subscription list not only makes the locking policy simpler, but
also helps to avoid a deadlock which may occur if creating a
subscription fails.
Signed-off-by: Ying Xue <ying.xue@windriver.com> Reviewed-by: Jon Maloy <jon.maloy@ericson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Ying Xue [Mon, 4 May 2015 02:36:46 +0000 (10:36 +0800)]
tipc: involve reference counter for subscriber
At present the subscriber's lock is used to protect the subscription list
of the subscriber as well as the subscriptions linked into the list. While
one or all subscriptions are deleted by iterating over the list, the
subscriber's lock must be held. Meanwhile, as deletion of a subscription
may also happen in the subscription timer's handler, the lock must be
grabbed in that function as well. When a subscription's timer is terminated
with del_timer_sync() during the above iteration, the subscriber's lock has
to be temporarily released, otherwise a deadlock may occur. However, the
temporary release may cause a double free of a subscription, because the
subscription has not yet been unlinked from the subscription list.
If a reference counter is introduced for the subscriber, a subscription's
timer can be stopped asynchronously with del_timer(). As a result, the
issue is not only fixed, but the relevant code also becomes more
readable and understandable.
Signed-off-by: Ying Xue <ying.xue@windriver.com> Reviewed-by: Jon Maloy <jon.maloy@ericson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Ying Xue [Mon, 4 May 2015 02:36:44 +0000 (10:36 +0800)]
tipc: rename functions defined in subscr.c
When a topology server accepts a connection request from a client,
it allocates a connection instance and a tipc_subscriber structure
object. The former is used to communicate with the client, and the latter
is treated as a subscriber which manages all subscription events
requested by the same client. When the topology server receives a request
to subscribe to name services from a client over the connection, it
creates a tipc_subscription structure instance, which is seen as a
subscription recording which name services are subscribed. In order to
manage all subscriptions from the same client, the topology server links
them into the subscrp_list of the subscriber. So subscriber and
subscription represent completely different things,
but the function names associated with them are so confusing that it
is hard to tell which functions operate on a subscriber and
which on a subscription. Eliminate the confusion by
renaming them.
Signed-off-by: Ying Xue <ying.xue@windriver.com> Reviewed-by: Jon Maloy <jon.maloy@ericson.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 4 May 2015 18:49:23 +0000 (14:49 -0400)]
Merge branch 'igmp_mld_export'
Linus Lüssing says:
====================
Exporting IGMP/MLD checking from bridge code
The multicast optimizations in batman-adv are so far only usable and
enabled in non-bridged scenarios. To be able to support bridged setups,
batman-adv needs to be able to detect IGMP/MLD queriers and reports on
mesh nodes without bridges, too. See the following link for details:
To avoid duplicate code between the bridge and batman-adv, the IGMP/MLD
message validation code is moved from the bridge to the IPv4/IPv6 stack.
On the way, some refactoring to increase readability and to iron out
some subtle differences between the IGMP and MLD parsing code is done.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Linus Lüssing [Sat, 2 May 2015 12:01:07 +0000 (14:01 +0200)]
net: Export IGMP/MLD message validation code
With this patch, the IGMP and MLD message validation functions are moved
from the bridge code to IPv4/IPv6 multicast files. Some small
refactoring was done to enhance readability and to iron out some
differences in behaviour between the IGMP and MLD parsing code (e.g. the
skb-cloning of MLD messages is now only done if necessary, just like the
IGMP part always did).
Finally, these IGMP and MLD message validation functions are exported so
that not only the bridge but, later, also batman-adv can use them.
Signed-off-by: Linus Lüssing <linus.luessing@c0d3.blue> Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Linus Lüssing <linus.luessing@c0d3.blue> Acked-by: Stephen Hemminger <stephen@networkplumber.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Patches #6 and #7 are fairly simple barrier stuff.
Patch #8 closes some SMP transmit races - not that anyone really
complained about these but it's a bit hard to handwave that they
can be safely ignored. Some testing, especially SMP testing of
course, would be welcome.
. Changes since #2:
- added dma_rmb barrier in vlan related patch 6.
- s/wmb/dma_wmb/ in (*new*) patch 7 of 8.
- added explicit SMP barriers in (*new*) patch 8 of 8.
. Changes since #1:
- turned wmb() into dma_wmb() as suggested by davem and Alexander Duyck
in patch 1 of 6.
- forgot to reset rx_head_desc in rhine_reset_rbufs in patch 4 of 6.
- removed rx_head_desc altogether in (*new*) patch 5 of 6
- removed some vlan receive ugliness in (*new*) patch 6 of 6.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
françois romieu [Fri, 1 May 2015 20:14:45 +0000 (22:14 +0200)]
via-rhine: close SMP transmit races.
7ab87ff4c770eed71e3777936299292739fcd0fe ("via-rhine: move work from
irq handler to softirq and beyond") forgot to explicitly control the
lifespan of the tx_dirty and tx_cur pointers.
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com> Signed-off-by: David S. Miller <davem@davemloft.net>
françois romieu [Fri, 1 May 2015 20:14:41 +0000 (22:14 +0200)]
via-rhine: forbid holes in the receive descriptor ring.
Rationales:
- throttle work under memory pressure
- lower receive descriptor recycling latency for the network adapter
- lower the maintenance burden of uncommon paths
The patch is twofold:
- it fails early if the receive ring can't be completely initialized
at dev->open() time
- it drops packets on the floor in the napi receive handler so as to
keep the receive ring full
Signed-off-by: Francois Romieu <romieu@fr.zoreil.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 4 May 2015 04:09:09 +0000 (00:09 -0400)]
Merge branch 'flow_keys_digest'
Tom Herbert says:
====================
net: Eliminate calls to flow_dissector and introduce flow_keys_digest
In this patch set we add skb_get_hash_perturb which gets the skbuff
hash for a packet and perturbs it using a provided key and jhash1.
This function is used in several qdiscs and eliminates many calls
to flow_dissector and jhash3 to get a perturbed hash for a packet.
To handle the sch_choke issue (passes flow_keys in skbuff cb) we
add flow_keys_digest which is a digest of a flow constructed
from a flow_keys structure.
This is the second version of these patches I posted a while ago,
and is prerequisite work to increasing the size of the flow_keys
structure and hashing over it (full IPv6 address, flow label, VLAN ID,
etc.).
Version 2:
- Add keyval parameter to __flow_hash_from_keys which allows caller to
set the initval for jhash
- Perturb always does flow dissection and creates hash based on
input perturb value which acts as the keyval to __flow_hash_from_keys
- Added a _flow_keys_digest_data which is used in make_flow_keys_digest.
This fills out the digest by populating individual fields instead
of copying the whole structure.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Tom Herbert [Fri, 1 May 2015 18:30:17 +0000 (11:30 -0700)]
net: Add flow_keys digest
Some users of flow keys (well, just sch_choke now) need to pass
flow_keys in the skbuff cb and use them for exact comparisons of flows,
so skb->hash is not sufficient. In order to increase the size of
the flow_keys structure, we introduce another structure for
the purpose of passing flow keys in skbuff cb. We limit this structure
to sixteen bytes, and we will technically treat this as a digest of
flow_keys struct hence its name flow_keys_digest. In the first
incarnation we just copy the flow_keys structure up to 16 bytes--
this is the same information previously passed in the cb. In the
future, we'll adapt this for larger flow_keys and could use something
like SHA-1 over the whole flow_keys to improve the quality of the
digest.
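A minimal sketch of the first-incarnation digest described above; the type and
helper names are illustrative (not the kernel's), and it assumes struct
flow_keys is in scope and at least 16 bytes long:
#include <linux/string.h>
#include <linux/types.h>

#define EX_FLOW_KEYS_DIGEST_LEN	16

/* Fixed-size digest, small enough to live in skb->cb. */
struct ex_flow_keys_digest {
	u8 data[EX_FLOW_KEYS_DIGEST_LEN];
};

static void ex_make_flow_keys_digest(struct ex_flow_keys_digest *digest,
				     const struct flow_keys *flow)
{
	/* First incarnation: just the leading 16 bytes of struct flow_keys
	 * (assumes sizeof(struct flow_keys) >= EX_FLOW_KEYS_DIGEST_LEN). */
	memcpy(digest->data, flow, EX_FLOW_KEYS_DIGEST_LEN);
}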
Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Tom Herbert [Fri, 1 May 2015 18:30:12 +0000 (11:30 -0700)]
net: Add skb_get_hash_perturb
This calls the flow dissector and __skb_get_hash to procure a hash for a
packet. Input includes a key to initialize jhash. This function
does not set skb->hash.
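A hedged sketch of how a qdisc might consume it, assuming the u32-perturbation
signature added here and the existing reciprocal_scale() helper; the wrapper
name is made up:
#include <linux/kernel.h>	/* reciprocal_scale() */
#include <linux/skbuff.h>

/* Map a packet into one of 'divisor' buckets, reshuffled by 'perturbation',
 * without ever writing skb->hash. */
static u32 ex_hash_bucket(const struct sk_buff *skb, u32 perturbation,
			  u32 divisor)
{
	return reciprocal_scale(skb_get_hash_perturb(skb, perturbation),
				divisor);
}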
Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Andrew Lunn [Fri, 1 May 2015 14:39:54 +0000 (16:39 +0200)]
net: ipv4: route: Fix sending IGMP messages with link address
In setups with a global scope address on an interface, and a lesser
scope address on an interface sending IGMP reports, the reports can be
sent using the other interface's global scope address rather than the
local interface address. RFC 2236 suggests:
Ignore the Report if you cannot identify the source address of
the packet as belonging to a subnet assigned to the interface on
which the packet was received.
since such reports could be forged.
Look at the protocol when deciding if a RT_SCOPE_LINK address should
be used for the packet.
Signed-off-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
TC classifiers/actions were converted to RCU by John in the series:
http://thread.gmane.org/gmane.linux.network/329739/focus=329739
and many follow on patches.
This is the last patch from that series that finally drops
ingress spin_lock.
Single cpu ingress+u32 performance goes from 22.9 Mpps to 24.5 Mpps.
In the two cpu case, when both cores are receiving traffic on the same
device and go through the same ingress+u32, the performance jumps
from 4.5 + 4.5 Mpps to 23.5 + 23.5 Mpps.
Signed-off-by: John Fastabend <john.r.fastabend@intel.com> Signed-off-by: Alexei Starovoitov <ast@plumgrid.com> Signed-off-by: Jamal Hadi Salim <jhs@mojatatu.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 4 May 2015 03:18:02 +0000 (23:18 -0400)]
Merge branch 'tcp_sack_rttm'
Kenneth Klette Jonassen says:
====================
tcp: SACK RTTM changes for congestion control
This patch series improves SACK RTT measurements for congestion control:
o Picks the latest sequence SACKed for RTT, i.e. most accurate delay
signal.
o Calls the congestion control's pkts_acked hook with SACK RTTMs
even when not sequentially ACKing new data.
V2: amend misleading comment
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
tcp_sacktag_one() always picks the earliest sequence SACKed for RTT.
This might not make sense for congestion control in cases where:
1. ACKs are lost, i.e. a SACK following a lost SACK covers both
new and old segments at the receiver.
2. The receiver disregards the RFC 5681 recommendation to immediately
ACK out-of-order segments.
Give congestion control an RTT for the latest segment SACKed, which is the
most accurate RTT estimate, but preserve the conservative RTT for RTO.
Removes the call to skb_mstamp_get() in tcp_sacktag_one().
Cc: Yuchung Cheng <ycheng@google.com> Cc: Eric Dumazet <edumazet@google.com> Signed-off-by: Kenneth Klette Jonassen <kennetkl@ifi.uio.no> Acked-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Later patch passes two values set in tcp_sacktag_one() to
tcp_clean_rtx_queue(). Prepare passing them via struct tcp_sacktag_state.
Acked-by: Yuchung Cheng <ycheng@google.com> Cc: Eric Dumazet <edumazet@google.com> Signed-off-by: Kenneth Klette Jonassen <kennetkl@ifi.uio.no> Signed-off-by: David S. Miller <davem@davemloft.net>
This series improves the rhashtable self-test to:
* Avoid allocation of test objects
* Measure the time of test runs
* Use the iterator to walk the table for consistency
* Account for failed insertions due to memory pressure or
utilization pressure
* Ignore failed insertions when checking for consistency
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Thomas Graf [Thu, 30 Apr 2015 22:37:44 +0000 (22:37 +0000)]
rhashtable-test: Use walker to test bucket statistics
As resizes may continue to run in the background, use the walker to
ensure we see all entries. Also print the number of queued-up rehashes
encountered while traversing.
This may lead to warnings due to entries being seen multiple
times. We consider them non-fatal.
Signed-off-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
Thomas Graf [Thu, 30 Apr 2015 22:37:41 +0000 (22:37 +0000)]
rhashtable-test: Measure time to insert, remove & traverse entries
Make the test configurable by allowing all relevant knobs to be specified
through module parameters.
Do several test runs and measure the average time it takes to
insert & remove all entries. Note, a deferred resize might still
continue to run in the background.
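A hedged sketch of the configurability pattern being described; the parameter
names and defaults are illustrative, not necessarily the test module's:
#include <linux/module.h>
#include <linux/moduleparam.h>

static int entries = 50000;
module_param(entries, int, 0);
MODULE_PARM_DESC(entries, "Number of entries to insert per run");

static int runs = 4;
module_param(runs, int, 0);
MODULE_PARM_DESC(runs, "Number of test runs to average over");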
Signed-off-by: Thomas Graf <tgraf@suug.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 4 May 2015 02:30:36 +0000 (22:30 -0400)]
Merge branch 'eth_type_trans'
Alexander Duyck says:
====================
A few minor clean-ups to eth_type_trans
This series addresses a few minor issues I found in eth_type_trans that
allow us to gain back something like 3 or more cycles per packet.
The first change is to drop the byte swap since it isn't necessary. On x86
we could just check the first byte and compare that against the upper 8
bits of the Ethertype to determine if we are dealing with a size value or
not.
The second makes it so that the value we read in to test for multicast can
be used for the address comparison. This allows us to avoid a second read
of the destination address.
The final change is to avoid some unneeded instructions in computing the
Ethernet header pointer. When the call starts, the Ethernet header is at
skb->data, so we just use that rather than computing mac_header and then
adding it back to skb->head.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Thu, 30 Apr 2015 21:53:59 +0000 (14:53 -0700)]
etherdev: Use skb->data to retrieve Ethernet header instead of eth_hdr
Avoid recomputing the Ethernet header location and instead just use the
pointer provided by skb->data. The problem with using eth_hdr is that the
compiler wasn't smart enough to realize that skb->head + skb->mac_header
was the same thing as skb->data before it added ETH_HLEN. By just caching
it off before calling skb_pull_inline we can avoid a few unnecessary
instructions.
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Thu, 30 Apr 2015 21:53:54 +0000 (14:53 -0700)]
etherdev: Process is_multicast_ether_addr at same size as other operations
This change makes it so that we process the address in
is_multicast_ether_addr at the same size as the other calls. This allows
us to avoid duplicate reads when used with other calls such as
is_zero_ether_addr or eth_addr_copy. In addition I have added a 64 bit
version of the function so in eth_type_trans we can process the destination
address as a 64 bit value throughout.
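The idea behind the 64 bit variant, as a hedged sketch (helper name
illustrative): the multicast (I/G) bit is the least-significant bit of the
first address octet, so it can be tested on the same 8-byte load used by
helpers such as is_zero_ether_addr, provided two bytes of padding follow the
address:
#include <asm/byteorder.h>
#include <linux/types.h>

static inline bool ex_is_multicast_ether_addr_64bits(const u8 addr[6 + 2])
{
#ifdef __BIG_ENDIAN
	return 0x01 & ((*(const u64 *)addr) >> 56);
#else
	return 0x01 & (*(const u64 *)addr);
#endif
}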
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Alexander Duyck [Thu, 30 Apr 2015 21:53:48 +0000 (14:53 -0700)]
etherdev: Avoid unnecessary byte swap in check for Ethertype
This change takes advantage of the fact that ETH_P_802_3_MIN is aligned to
512 so as a result we can actually ignore the lower 8b when comparing the
Ethertype to ETH_P_802_3_MIN. This allows us to avoid a byte swap by simply
masking the value and comparing it to the byte swapped value for
ETH_P_802_3_MIN.
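Put differently, the comparison can be done entirely in network byte order.
A hedged sketch of the trick (the helper name is illustrative):
#include <linux/if_ether.h>

/* Equivalent to ntohs(proto) >= ETH_P_802_3_MIN: since 0x0600 has a zero
 * low-order byte, only the high-order byte of the Ethertype matters. */
static inline bool ex_proto_is_ethertype(__be16 proto)
{
	return (proto & htons(0xff00)) >= htons(ETH_P_802_3_MIN);
}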
Signed-off-by: Alexander Duyck <alexander.h.duyck@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Tom Herbert [Wed, 29 Apr 2015 22:33:21 +0000 (15:33 -0700)]
ipv6: Flow label state ranges
This patch divides the IPv6 flow label space into two ranges:
0-7ffff is reserved for flow label manager, 80000-fffff will be
used for creating auto flow labels (per RFC6438). This only affects how
labels are set on transmit, it does not affect receive. This range split
can be disabled by sysctl.
Background:
IPv6 flow labels have been an unmitigated disappointment thus far
in the lifetime of IPv6. Support in HW devices to use them for ECMP
is lacking, and OSes don't turn them on by default. If we had these
we could get much better hashing in IPv6 networks without resorting
to DPI, possibly eliminating some of the motivations to define new
encaps in UDP just for getting ECMP.
Unfortunately, the initial specifications of IPv6 did not clarify
how they are to be used. There has always been a vague concept that
these can be used for ECMP, flow hashing, etc. and we do now have a
good standard for how to do this in RFC6438. The problem is that flow labels
can be either stateful or stateless (as in RFC6438), and we are
presented with the possibility that a stateless label may collide
with a stateful one. Attempts to split the flow label space were
rejected in IETF. When we added support in Linux for RFC6438, we
could not turn on flow labels by default due to this conflict.
This patch splits the flow label space and should give us
a path to enabling auto flow labels by default for all IPv6 packets.
This is an API change so we need to consider compatibility with
existing deployment. The stateful range is chosen to be the lower
values in hopes that most uses would have chosen small numbers.
Once we resolve the stateless/stateful issue, we can proceed to
look at enabling RFC6438 flow labels by default (starting with
scaled testing).
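A hedged sketch of what the split means when generating an auto label; the
constant and function names are illustrative, and the behaviour is gated by
the sysctl mentioned above:
#include <linux/types.h>

/* 0x00000-0x7ffff stays with the stateful flow label manager; stateless
 * (auto, RFC6438-style) labels get the top bit of the 20-bit label set,
 * landing them in 0x80000-0xfffff. */
#define EX_IPV6_FL_STATELESS_BIT	0x80000u

static u32 ex_make_auto_flowlabel(u32 flow_hash)
{
	return (flow_hash & 0xfffffu) | EX_IPV6_FL_STATELESS_BIT;
}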
Signed-off-by: Tom Herbert <tom@herbertland.com> Signed-off-by: David S. Miller <davem@davemloft.net>
ipv6: Check RTF_LOCAL on rt->rt6i_flags instead of rt->dst.flags
In my earlier commit: 653437d02f1f ("ipv6: Stop /128 route from disappearing after pmtu update"),
there was a horrible typo. Instead of checking RTF_LOCAL on
rt->rt6i_flags, it was checked on rt->dst.flags. This patch fixes
it.
Signed-off-by: Martin KaFai Lau <kafai@fb.com> Cc: Hajime Tazaki <tazaki@sfc.wide.ad.jp> Cc: David S. Miller <davem@davemloft.net> Signed-off-by: David S. Miller <davem@davemloft.net>
pedit sets TC_MUNGED when packet content was altered, but all the core
does is unset MUNGED again and then set OK2MUNGE.
And the latter isn't tested anywhere. So let's remove both
TC_MUNGED and TC_OK2MUNGE.
Signed-off-by: Florian Westphal <fw@strlen.de> Acked-by: Alexei Starovoitov <ast@plumgrid.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: David S. Miller <davem@davemloft.net>
1) Receive packet length needs to be adjusted by 2 on RX to accommodate
the two padding bytes in the altera_tse driver. From Vlastimil Setka.
2) If an rx frame is dropped due to an out-of-memory condition in the macb
driver, we leave the receive ring descriptors in an undefined state. From Punnaiah
Choudary Kalluri
3) Some netlink subsystems erroneously signal NLM_F_MULTI. That is
only for dumps. Fix from Nicolas Dichtel.
4) Fix mis-use of the raw rt->rt_pmtu value in ipv4; one must always go via
the ipv4_mtu() helper. From Herbert Xu.
5) Fix null deref in bridge netfilter, and miscalculated lengths in
jump/goto nf_tables verdicts. From Florian Westphal.
6) Unhash ping sockets properly.
7) Software implementation of BPF divide did 64/32 rather than 64/64
bit divide. The JITs got it right. Fix from Alexei Starovoitov.
* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net: (30 commits)
ipv4: Missing sk_nulls_node_init() in ping_unhash().
net: fec: Fix RGMII-ID mode
net/mlx4_en: Schedule napi when RX buffers allocation fails
netxen_nic: use spin_[un]lock_bh around tx_clean_lock
net/mlx4_core: Fix unaligned accesses
mlx4_en: Use correct loop cursor in error path.
cxgb4: Fix MC1 memory offset calculation
bnx2x: Delay during kdump load
net: Fix Kernel Panic in bonding driver debugfs file: rlb_hash_table
net: dsa: Fix scope of eeprom-length property
net: macb: Fix race condition in driver when Rx frame is dropped
hv_netvsc: Fix a bug in netvsc_start_xmit()
altera_tse: Correct rx packet length
mlx4: Fix tx ring affinity_mask creation
tipc: fix problem with parallel link synchronization mechanism
tipc: remove wrong use of NLM_F_MULTI
bridge/nl: remove wrong use of NLM_F_MULTI
bridge/mdb: remove wrong use of NLM_F_MULTI
net: sched: act_connmark: don't zap skb->nfct
trivial: net: systemport: bcmsysport.h: fix 0x0x prefix
...
David S. Miller [Sat, 2 May 2015 02:02:47 +0000 (22:02 -0400)]
ipv4: Missing sk_nulls_node_init() in ping_unhash().
If we don't do that, then the poison value is left in the ->pprev
backlink.
This can cause crashes if we do a disconnect, followed by a connect().
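The fix amounts to the following idiom (a hedged fragment, not the literal
diff): after hlist_nulls_del() the node's ->pprev holds a poison value, so the
node must be re-initialized before the socket can ever be hashed again.
	hlist_nulls_del(&sk->sk_nulls_node);
	sk_nulls_node_init(&sk->sk_nulls_node);	/* clears the poisoned ->pprev */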
Tested-by: Linus Torvalds <torvalds@linux-foundation.org> Reported-by: Wen Xu <hotdog3645@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Simon Horman [Thu, 30 Apr 2015 06:21:29 +0000 (15:21 +0900)]
net: rocker: Use ether_addr_equal
A small cleanup to make use of the ether_addr_equal helper.
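The general shape of the cleanup, as a hedged illustration with made-up
variable names (not the rocker diff itself):
#include <linux/etherdevice.h>

static bool ex_same_station(const u8 *addr_a, const u8 *addr_b)
{
	/* Before: memcmp(addr_a, addr_b, ETH_ALEN) == 0 */
	return ether_addr_equal(addr_a, addr_b);
}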
Signed-off-by: Simon Horman <simon.horman@netronome.com> Acked-by: Scott Feldman <sfeldma@gmail.com> Acked-by: Jiri Pirko <jiri@resnulli.us> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 2 May 2015 00:57:07 +0000 (20:57 -0400)]
Merge branch 'rt6_pmtu'
Martin KaFai Lau says:
====================
ipv6: Stop /128 route from disappearing after pmtu update
The series is separated from another patch series,
'ipv6: Only create RTF_CACHE route after encountering pmtu exception',
which can be found here:
http://thread.gmane.org/gmane.linux.network/359140
This series focuses on fixing the /128 route issues. It is currently targeted
at net-next due to the amount of code churn, but it is also applicable
to net (it should apply without conflict). The originally reported problem can be
found here:
http://thread.gmane.org/gmane.linux.network/348138
Patches 01 and 02 prepare the fib6 search to expect both the
RTF_CACHE clone and its original route to exist at the same fib6_node.
Patch 03 fixes the /128 route disappearing bug.
Patches 04 and 05 stop rt6_info from using the inet_peer's metrics, to
prevent /128 routes (like a /128 clone and its original route)
from stepping on each other's metrics.
The second patch is by 'Steffen Klassert <steffen.klassert@secunet.com>',
which I pulled from netdev. The third patch is also mostly by
Steffen with one minor optimization.
Many thanks to Hannes Frederic Sowa <hannes@stressinduktion.org> on
reviewing the patches and giving advice.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Martin KaFai Lau [Tue, 28 Apr 2015 20:03:07 +0000 (13:03 -0700)]
ipv6: Remove DST_METRICS_FORCE_OVERWRITE and _rt6i_peer
_rt6i_peer is no longer needed after the last patch,
'ipv6: Stop rt6_info from using inet_peer's metrics'.
DST_METRICS_FORCE_OVERWRITE is added by
commit e5fd387ad5b3 ("ipv6: do not overwrite inetpeer metrics prematurely").
Since inetpeer is no longer used for metrics, this bit is also not needed.
Signed-off-by: Martin KaFai Lau <kafai@fb.com> Reviewed-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Cc: Michal Kubeček <mkubecek@suse.cz> Cc: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Martin KaFai Lau [Tue, 28 Apr 2015 20:03:06 +0000 (13:03 -0700)]
ipv6: Stop rt6_info from using inet_peer's metrics
inet_peer is indexed by the dst address alone. However, the fib6 tree
could have multiple routing entries (rt6_info) for the same dst. For
example,
1. A /128 dst via multiple gateways.
2. A RTF_CACHE route cloned from a /128 route.
In the above cases, all of them will share the same metrics and
step on each other.
This patch will steer away from inet_peer's metrics and use
dst_cow_metrics_generic() for everything.
Change Highlights:
1. Remove rt6_cow_metrics() which currently acquires metrics from
inet_peer for DST_HOST route (i.e. /128 route).
2. Add rt6i_pmtu to take care of the pmtu update to avoid creating a
full size metrics just to override the RTAX_MTU.
3. After (2), the RTF_CACHE route can also share the metrics with its
dst.from route, by:
dst_init_metrics(&cache_rt->dst, dst_metrics_ptr(cache_rt->dst.from), true);
4. Stop creating RTF_CACHE route by cloning another RTF_CACHE route. Instead,
directly clone from rt->dst.
[ Currently, cloning from another RTF_CACHE is only possible during
rt6_do_redirect(). Also, the old clone is removed from the tree
immediately after the new clone is added. ]
In case of cloning from an older redirect RTF_CACHE, it should work as
before.
In case of cloning from an older pmtu RTF_CACHE, this patch will forget
the pmtu and re-learn it (if there is any) from the redirected route.
The _rt6i_peer and DST_METRICS_FORCE_OVERWRITE will be removed
in the next cleanup patch.
Signed-off-by: Martin KaFai Lau <kafai@fb.com> Reviewed-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Cc: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Martin KaFai Lau [Tue, 28 Apr 2015 20:03:05 +0000 (13:03 -0700)]
ipv6: Stop /128 route from disappearing after pmtu update
This patch is mostly from Steffen Klassert <steffen.klassert@secunet.com>.
I only removed the (rt6->rt6i_dst.plen == 128) check from
ip6_rt_update_pmtu() because the (rt6->rt6i_flags & RTF_CACHE) test
has already implied it.
This patch:
1. Create RTF_CACHE route for /128 non local route
2. After (1), all routes that allow pmtu update should have a RTF_CACHE
clone. Hence, stop updating MTU for any non RTF_CACHE route.
Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com> Reviewed-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
ipv6: Extend the route lookups to low priority metrics.
We search only for routes with the highest priority metric in
find_rr_leaf(). However, if one of these routes is marked
as invalid, we may fail to find a route even if there is
an appropriate route with a lower priority. Then we lose
connectivity until the garbage collector deletes the
invalid route. This typically happens if a host route
expires after a pmtu event. Fix this by also searching
for routes with a lower priority metric.
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: Martin KaFai Lau <kafai@fb.com> Reviewed-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Martin KaFai Lau [Tue, 28 Apr 2015 20:03:03 +0000 (13:03 -0700)]
ipv6: Consider RTF_CACHE when searching the fib6 tree
This is prep work for the later bug-fix patch, which will stop a /128 route
from disappearing after pmtu update.
The later bug-fix patch will allow a /128 route and its RTF_CACHE clone
both exist at the same fib6_node. To do this, we need to prepare the
existing fib6 tree search to expect RTF_CACHE for /128 route.
Note that the fn->leaf is sorted by rt6i_metric. Hence,
RTF_CACHE (if there is any) is always at the front. This property
leads to the following:
1. When doing ip6_route_del(), it should honor the RTF_CACHE flag, which
the caller uses to ask for deletion of either a clone or a non-clone.
The rtm_to_fib6_config() should also check RTM_F_CLONED and
then set RTF_CACHE accordingly so that:
- 'ip -6 r del...' will make ip6_route_del() delete a route
and all its clones. Note that its clones are flushed by fib6_del()
- 'ip -6 r flush table cache' will make ip6_route_del()
only delete clone(s).
2. Exclude RTF_CACHE from addrconf_get_prefix_route() which
should not configure on a cloned route.
3. No change is needed for rt6_device_match() since it currently could
return a RTF_CACHE clone route, so the later bug-fix patch will not
affect it.
Signed-off-by: Martin KaFai Lau <kafai@fb.com> Reviewed-by: Hannes Frederic Sowa <hannes@stressinduktion.org> Cc: Steffen Klassert <steffen.klassert@secunet.com> Signed-off-by: David S. Miller <davem@davemloft.net>
When we end an I/O struct request with an error, we need to pass
obj_request->length as @nr_bytes so that the entire obj_request worth
of bytes is completed. Otherwise the block layer ends up confused and we
trip on an assertion in rbd_img_obj_callback(), due to "more" being true
no matter what. We
already do it in most cases but we are missing some, in particular
those where we don't even get a chance to submit any obj_requests, due
to an early -ENOMEM for example.
A number of obj_request->xferred assignments seem to be redundant but
I haven't touched any of obj_request->xferred stuff to keep this small
and isolated.
Cc: Alex Elder <elder@linaro.org> Cc: stable@vger.kernel.org # 3.10+ Reported-by: Shawn Edwards <lesser.evil@gmail.com> Reviewed-by: Sage Weil <sage@redhat.com> Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Linus Torvalds [Fri, 1 May 2015 14:46:21 +0000 (07:46 -0700)]
Merge branch 'for-linus-4.1' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs
Pull btrfs fixes from Chris Mason:
"A few more btrfs fixes.
These range from corners Filipe found in the new free space cache
writeback to a grab bag of fixes from the list"
* 'for-linus-4.1' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs:
Btrfs: btrfs_release_extent_buffer_page didn't free pages of dummy extent
Btrfs: fill ->last_trans for delayed inode in btrfs_fill_inode.
btrfs: unlock i_mutex after attempting to delete subvolume during send
btrfs: check io_ctl_prepare_pages return in __btrfs_write_out_cache
btrfs: fix race on ENOMEM in alloc_extent_buffer
btrfs: handle ENOMEM in btrfs_alloc_tree_block
Btrfs: fix find_free_dev_extent() malfunction in case device tree has hole
Btrfs: don't check for delalloc_bytes in cache_save_setup
Btrfs: fix deadlock when starting writeback of bg caches
Btrfs: fix race between start dirty bg cache writeout and bg deletion
Linus Torvalds [Fri, 1 May 2015 14:44:32 +0000 (07:44 -0700)]
Merge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 fixes from Will Deacon:
"Not too much here, but we've addressed a couple of nasty issues in the
dma-mapping code as well as adding the halfword and byte variants of
load_acquire/store_release following on from the CSD locking bug that
you fixed in the core.
- fix perf devicetree warnings at probe time
- fix memory leak in __dma_free()
- ensure DMA buffers are always zeroed
- show IRQ trigger in /proc/interrupts (for parity with ARM)
- implement byte and halfword access for smp_{load_acquire,store_release}"
* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
arm64: perf: Fix the pmu node name in warning message
arm64: perf: don't warn about missing interrupt-affinity property for PPIs
arm64: add missing PAGE_ALIGN() to __dma_free()
arm64: dma-mapping: always clear allocated buffers
ARM64: Enable CONFIG_GENERIC_IRQ_SHOW_LEVEL
arm64: add missing data types in smp_load_acquire/smp_store_release
Merge tag 'pm+acpi-4.1-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management and ACPI fixes from Rafael Wysocki:
"Three regression fixes this time, one for a recent regression in the
cpuidle core affecting multiple systems, one for an inadvertently
added duplicate typedef in ACPICA that breaks compilation with GCC 4.5
and one for an ACPI Smart Battery Subsystem driver regression
introduced during the 3.18 cycle (stable-candidate).
Specifics:
- Fix for a regression in the cpuidle core introduced by one of the
recent commits in the clockevents_notify() removal series that put
a call to a function which had to be executed with disabled
interrupts into a code path running with enabled interrupts (Rafael
J Wysocki)
- Fix for a build problem in ACPICA (with GCC 4.5) introduced by one
of the recent ACPICA tools commits that added a duplicate typedef
to one of the ACPICA's header files by mistake (Olaf Hering)
- Fix for a regression in the ACPI SBS (Smart Battery Subsystem)
driver introduced during the 3.18 development cycle causing the
smart battery manager to be marked as not present when it should be
marked as present (Chris Bainbridge)"
* tag 'pm+acpi-4.1-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
cpuidle: Run tick_broadcast_exit() with disabled interrupts
ACPI / SBS: Enable battery manager when present
ACPICA: remove duplicate u8 typedef
Merge tag 'sound-4.1-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound
Pull sound fixes from Takashi Iwai:
"One nice fix is Peter's patch to make the old good SB Audigy PCI to
work with 32bit DMA instead of 31bit. This allows the MIDI synth
running on modern machines again. Along with it, a few fixes for
emu10k1 have merged.
On the ASoC side, there is one fix in the common code, but it's just
trivial additions of static inline functions for CONFIG_PM=n. The
rest are various device-specific small fixes.
Last but not least, a few HD-audio fixes are included, as usual, too"
* tag 'sound-4.1-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound: (23 commits)
ASoC: rt5677: fixed wrong DMIC ref clock
ALSA: emu10k1: Emu10k2 32 bit DMA mode
ALSA: emux: Fix mutex deadlock in OSS emulation
ASoC: Update email-id of Rajeev Kumar
ASoC: rt5645: Fix mask for setting RT5645_DMIC_2_DP_GPIO12 bit
ALSA: hda - Fix missing va_end() call in snd_hda_codec_pcm_new()
ALSA: emux: Fix mutex deadlock at unloading
ALSA: emu10k1: Fix card shortname string buffer overflow
ALSA: hda - Add mute-LED mode control to Thinkpad
ALSA: hda - Fix mute-LED fixed mode
ALSA: hda - Fix click noise at start on Dell XPS13
ASoC: rt5645: Add ACPI match ID
ASoC: rt5677: add register patch for PLL
ASoC: Intel: fix the makefile for atom code
ASoC: dapm: Enable autodisable on SOC_DAPM_SINGLE_TLV_AUTODISABLE
ASoC: add static inline funcs to fix a compiling issue
ASoC: Intel: sst_byt: remove kfree for memory allocated with devm_kzalloc
ASoC: samsung: s3c24xx-i2s: Fix return value check in s3c24xx_iis_dev_probe()
ASoC: tfa9879: Fix return value check in tfa9879_i2c_probe()
ASoC: fsl_ssi: Fix platform_get_irq() error handling
...
net/mlx4_en: Schedule napi when RX buffers allocation fails
When the system is out of memory, refilling of RX buffers fails while
the driver continues to pass the received packets to the kernel stack.
At some point, when all RX buffers are depleted, the driver may fall into a
sleep, and not recover when memory for new RX buffers is once again
available. This is because the hardware does not have valid descriptors,
so no interrupt will be generated for the driver to return to work
in napi context. Fix this by scheduling the napi poll function from the
stats_task delayed workqueue, as long as the allocations fail.
Signed-off-by: Ido Shamay <idos@mellanox.com> Signed-off-by: Amir Vadai <amirv@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Daniel Borkmann [Thu, 30 Apr 2015 14:17:27 +0000 (16:17 +0200)]
test_bpf: indicate whether bpf prog got jited in test suite
I think this is useful to verify whether a filter could be JITed or
not in the case of bpf_jit_enable >= 1, which the test suite otherwise
doesn't tell, short of taking a good peek at the performance numbers.
Nicolas Schichan reported a bug in the ARM JIT compiler that rejected
and waved the filter to the interpreter although it shouldn't have.
Nevertheless, the test passes as expected, but such information is
not visible.
It's useful, e.g., for the remaining classic JITs, but also for
implementing remaining opcodes that are not yet present in eBPF JITs
(e.g. ARM64 waves some of them to the interpreter). This minor patch
allows to grep through dmesg to find those accordingly, but also
provides a total summary, i.e.: [<X>/53 JIT'ed]
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Cc: Nicolas Schichan <nschichan@freebox.fr> Cc: Alexei Starovoitov <ast@plumgrid.com> Acked-by: Alexei Starovoitov <ast@plumgrid.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Tony Camuso [Thu, 30 Apr 2015 11:51:27 +0000 (07:51 -0400)]
netxen_nic: use spin_[un]lock_bh around tx_clean_lock
While testing this driver with DEBUG_LOCKDEP and DEBUG_SPINLOCK
enabled did not produce any traces, it would be more prudent in the
case of tx_clean_lock to use spin_[un]lock_bh, since this lock is
manipulated in both the process and softirq contexts.
This patch was tested for functionality and regressions with netperf
and DEBUG_LOCKDEP and DEBUG_SPINLOCK enabled.
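The locking change boils down to the following pattern (a hedged fragment;
'adapter' is illustrative and the reclaim body is elided):
	spin_lock_bh(&adapter->tx_clean_lock);		/* was: spin_lock() */
	/* ... reclaim completed tx descriptors, also reached from softirq ... */
	spin_unlock_bh(&adapter->tx_clean_lock);	/* was: spin_unlock() */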
Signed-off-by: Tony Camuso <tcamuso@redhat.com> Acked-by: Neil Horman <nhorman@tuxdriver.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Thomas Falcon [Wed, 29 Apr 2015 21:25:47 +0000 (16:25 -0500)]
ibmveth: Add support for Large Receive Offload
Enables receiving large packets from other LPARs. These packets
have a -1 IP header checksum, so we must recalculate it to get
a valid checksum.
Signed-off-by: Brian King <brking@linux.vnet.ibm.com> Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Thomas Falcon [Wed, 29 Apr 2015 21:25:46 +0000 (16:25 -0500)]
ibmveth: Add GRO support
Cc: Brian King <brking@linux.vnet.ibm.com> Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Thomas Falcon [Wed, 29 Apr 2015 21:25:45 +0000 (16:25 -0500)]
ibmveth: Add support for TSO
Add support for TSO. TSO is turned off by default and
must be enabled and configured by the user. The driver
version number is increased so that users can be sure
that they are using ibmveth with TSO support.
Cc: Brian King <brking@linux.vnet.ibm.com> Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Thomas Falcon [Wed, 29 Apr 2015 21:25:44 +0000 (16:25 -0500)]
ibmveth: change rx buffer default allocation for CMO
This patch enables 64k rx buffer pools by default. If Cooperative
Memory Overcommitment (CMO) is enabled, the number of 64k buffers
is reduced to save memory.
Cc: Brian King <brking@linux.vnet.ibm.com> Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Benjamin Poirier [Wed, 29 Apr 2015 22:59:35 +0000 (15:59 -0700)]
mlx4_en: Use correct loop cursor in error path.
Signed-off-by: Benjamin Poirier <bpoirier@suse.de> Fixes: 9e311e7 ("net/mlx4_en: Use affinity hint") Acked-by: Amir Vadai <amirv@mellanox.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Thu, 30 Apr 2015 20:03:14 +0000 (16:03 -0400)]
Merge branch 'xgene-next'
Iyappan Subramanian says:
====================
drivers: net: xgene: Add ethernet with ring manager v2 support
Adding XFI based 10GbE and SGMII based 1GbE with ring manager v2 support
for APM X-Gene ethernet driver. The ring manager v2 is used by 2nd
generation SoC.
v1:
* Initial version
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Merge tag 'asoc-v4.1-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/sound into for-linus
ASoC: Fixes for v4.1
A few fixes for v4.1, none earth shattering and mostly driver related
except for one change to fix !PM builds for Intel platforms which is
done by adding stubs in the core so other platforms don't run into the
same issue.
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull kvm changes from Paolo Bonzini:
"Remove from guest code the handling of task migration during a pvclock
read; instead use the correct protocol in KVM.
This removes the need for task migration notifiers in core scheduler
code"
[ The scheduler people really hated the migration notifiers, so this was
kind of required - Linus ]
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
x86: pvclock: Really remove the sched notifier for cross-cpu migrations
kvm: x86: fix kvmclock update protocol
Merge tag 'dm-4.1-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm
Pull device mapper bugfixes from Mike Snitzer:
"Fix two bugs in the request-based DM blk-mq support that was added
during the 4.1 merge"
* tag 'dm-4.1-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm:
dm: fix free_rq_clone() NULL pointer when requeueing unmapped request
dm: only initialize the request_queue once
Merge tag 'tty-4.1-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty
Pull tty/serial fixes from Greg KH:
"Here are some small tty/serial driver fixes for 4.1-rc2.
They include some minor fixes that resolve reported issues, and a new
device quirk.
All have been in linux-next successfully"
* tag 'tty-4.1-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty:
serial: 8250_pci: Add support for 16 port Exar boards
serial: samsung: fix serial console break
tty/serial: at91: maxburst was missing for dma transfers
serial: of-serial: Remove device_type = "serial" registration
serial: xilinx: Use platform_get_irq to get irq description structure
serial: core: Fix kernel-doc build warnings
tty: Re-add external interface for tty_set_termios()
Merge tag 'usb-4.1-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb
Pull USB fixes from Greg KH:
"Here are a number of small USB fixes for 4.2-rc2. They revert one
problem patch, fix some minor things, and add some new quirks for
"broken" devices.
All have been in linux-next successfully"
* tag 'usb-4.1-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb:
cdc-acm: prevent infinite loop when parsing CDC headers.
Revert "usb: host: ehci-msm: Use devm_ioremap_resource instead of devm_ioremap"
usb: chipidea: otg: remove mutex unlock and lock while stop and start role
uas: Set max_sectors_240 quirk for ASM1053 devices
uas: Add US_FL_MAX_SECTORS_240 flag
uas: Allow uas_use_uas_driver to return usb-storage flags
Merge tag 'renesas-sh-drivers-for-v4.1' of git://git.kernel.org/pub/scm/linux/kernel/git/horms/renesas
Pull SH driver updates from Simon Horman:
- remove test for now unsupported sh7372 SoC
- disable PM runtime for multi-platform r8a73a4 and sh73a0 SoCs with
genpd
* tag 'renesas-sh-drivers-for-v4.1' of git://git.kernel.org/pub/scm/linux/kernel/git/horms/renesas:
drivers: sh: Remove test for now unsupported sh7372
drivers: sh: Disable PM runtime for multi-platform r8a73a4 with genpd
drivers: sh: Disable PM runtime for multi-platform sh73a0 with genpd
Mike Snitzer [Wed, 29 Apr 2015 14:48:09 +0000 (10:48 -0400)]
dm: fix free_rq_clone() NULL pointer when requeueing unmapped request
Commit 022333427a ("dm: optimize dm_mq_queue_rq to _not_ use kthread if
using pure blk-mq") mistakenly removed free_rq_clone()'s clone->q check
before testing clone->q->mq_ops. It was an oversight to discontinue
that check for 1 of the 2 use-cases for free_rq_clone():
1) free_rq_clone() called when an unmapped original request is requeued
2) free_rq_clone() called in the request-based IO completion path
The clone->q check made sense for case #1 but not for #2. However, we
cannot just reinstate the check as it'd mask a serious bug in the IO
completion case #2 -- no in-flight request should have an uninitialized
request_queue (basic block layer refcounting _should_ ensure this).
The NULL pointer seen for case #1 is detailed here:
https://www.redhat.com/archives/dm-devel/2015-April/msg00160.html
Fix this free_rq_clone() NULL pointer by simply checking if the
mapped_device's type is DM_TYPE_MQ_REQUEST_BASED (clone's queue is
blk-mq) rather than checking clone->q->mq_ops. This avoids the need to
dereference clone->q, but a WARN_ON_ONCE is added to let us know if an
uninitialized clone request is being completed.
Reported-by: Bart Van Assche <bart.vanassche@sandisk.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Commit bfebd1cdb4 ("dm: add full blk-mq support to request-based DM")
didn't properly account for the need to short-circuit re-initializing
DM's blk-mq request_queue if it was already initialized.
Otherwise, reloading a blk-mq request-based DM table (either manually
or via multipathd) resulted in errors, see:
https://www.redhat.com/archives/dm-devel/2015-April/msg00132.html
Fix is to only initialize the request_queue on the initial table load
(when the mapped_device type is assigned).
This is better than having dm_init_request_based_blk_mq_queue() return
early if the queue was already initialized because it elevates the
constraint to a more meaningful location in DM core. As such the
pre-existing early return in dm_init_request_based_queue() can now be
removed.
Fixes: bfebd1cdb4 ("dm: add full blk-mq support to request-based DM") Signed-off-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
arm64: perf: Fix the pmu node name in warning message
With commit d5efd9cc9cf2 ("arm64: pmu: add support for interrupt-affinity
property"), we print a warning when we find a PMU SPI with a missing
missing interrupt-affinity property in a pmu node. Unfortunately, we
pass the wrong (NULL) device node to of_node_full_name, resulting in
unhelpful messages such as:
hw perfevents: Failed to parse <no-node>/interrupt-affinity[0]
This patch fixes the name to that of the pmu node.
Fixes: d5efd9cc9cf2 (arm64: pmu: add support for interrupt-affinity property) Acked-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
Will Deacon [Fri, 17 Apr 2015 13:41:29 +0000 (14:41 +0100)]
arm64: perf: don't warn about missing interrupt-affinity property for PPIs
PPIs are affine by nature, so the interrupt-affinity property is not
used and therefore we shouldn't print a warning in its absence.
Reported-by: Maxime Ripard <maxime.ripard@free-electrons.com> Reviewed-by: Maxime Ripard <maxime.ripard@free-electrons.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
Eric Dumazet [Wed, 29 Apr 2015 23:20:58 +0000 (16:20 -0700)]
tcp_westwood: fix tcp_westwood_info()
I forgot to update tcp_westwood when changing get_info() behavior;
this patch should fix this.
Fixes: 64f40ff5bbdb ("tcp: prepare CC get_info() access from getsockopt()") Reported-by: kbuild test robot <fengguang.wu@intel.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
tcp: update reordering first before detecting loss
tcp_mark_lost_retrans is not used when FACK is disabled. Since
tcp_update_reordering may disable FACK, it should be called first
before tcp_mark_lost_retrans.
Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Nandita Dukkipati <nanditad@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Tue, 28 Apr 2015 23:23:49 +0000 (16:23 -0700)]
tcp: add TCP_CC_INFO socket option
Some Congestion Control modules can provide per flow information,
but the current way to get this information is to use netlink.
Like TCP_INFO, let's add TCP_CC_INFO so that applications can
issue a getsockopt() if they have a socket file descriptor,
instead of playing complex netlink games.
Sample usage would be :
union tcp_cc_info info;
socklen_t len = sizeof(info);
if (getsockopt(fd, SOL_TCP, TCP_CC_INFO, &info, &len) == -1)
	perror("getsockopt(TCP_CC_INFO)");	/* error handling added for completeness */
Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Yuchung Cheng <ycheng@google.com> Cc: Neal Cardwell <ncardwell@google.com> Acked-by: Neal Cardwell <ncardwell@google.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Tue, 28 Apr 2015 23:23:48 +0000 (16:23 -0700)]
tcp: prepare CC get_info() access from getsockopt()
We would like the optional info provided by Congestion Control
modules over netlink to also be readable using getsockopt().
This patch changes get_info() to put this information in a buffer,
instead of skb, like tcp_get_info(), so that following patch
can reuse this common infrastructure.
Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Yuchung Cheng <ycheng@google.com> Cc: Neal Cardwell <ncardwell@google.com> Acked-by: Neal Cardwell <ncardwell@google.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Tue, 28 Apr 2015 22:28:18 +0000 (15:28 -0700)]
tcp: add tcpi_bytes_received to tcp_info
This patch tracks total number of payload bytes received on a TCP socket.
This is the sum of all changes done to tp->rcv_nxt
RFC4898 named this : tcpEStatsAppHCThruOctetsReceived
This is a 64bit field, and can be fetched either from TCP_INFO
getsockopt() if one has a handle on a TCP socket, or from the inet_diag
netlink facility (an iproute2/ss patch will follow).
Note that tp->bytes_received was placed near tp->rcv_nxt for
best data locality and minimal performance impact.
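A hedged userspace illustration of reading the new counter; it assumes a
kernel and headers new enough to expose tcpi_bytes_received in struct tcp_info:
#include <stdio.h>
#include <string.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static void print_bytes_received(int fd)
{
	struct tcp_info ti;
	socklen_t len = sizeof(ti);

	memset(&ti, 0, sizeof(ti));
	if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &ti, &len) == 0)
		printf("payload bytes received: %llu\n",
		       (unsigned long long)ti.tcpi_bytes_received);
}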
Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Yuchung Cheng <ycheng@google.com> Cc: Matt Mathis <mattmathis@google.com> Cc: Eric Salo <salo@google.com> Cc: Martin Lau <kafai@fb.com> Cc: Chris Rapier <rapier@psc.edu> Acked-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>