From: Paolo Abeni
Date: Fri, 2 Dec 2016 16:35:49 +0000 (+0100)
Subject: udp: be less conservative with sock rmem accounting
X-Git-Url: https://git.karo-electronics.de/?a=commitdiff_plain;h=363dc73acacbbcdae98acf5612303e9770e04b1d;p=linux-beck.git

udp: be less conservative with sock rmem accounting

Before commit 850cbaddb52d ("udp: use it's own memory accounting
schema"), the udp protocol allowed sk_rmem_alloc to grow beyond
the rcvbuf by the whole current packet's truesize. After said commit
we allow sk_rmem_alloc to exceed the rcvbuf only if the receive queue
is empty. As reported by Jesper, this caused a performance regression
for some (small) values of rcvbuf.

This commit is intended to fix the regression, restoring the old
handling of the rcvbuf limit.

Reported-by: Jesper Dangaard Brouer
Fixes: 850cbaddb52d ("udp: use it's own memory accounting schema")
Signed-off-by: Paolo Abeni
Signed-off-by: David S. Miller
---

diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index e1d0bf8eba4b..16d88ba9ff1c 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1205,14 +1205,14 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
 	 * queue is full; always allow at least a packet
 	 */
 	rmem = atomic_read(&sk->sk_rmem_alloc);
-	if (rmem && (rmem + size > sk->sk_rcvbuf))
+	if (rmem > sk->sk_rcvbuf)
 		goto drop;
 
 	/* we drop only if the receive buf is full and the receive
 	 * queue contains some other skb
 	 */
 	rmem = atomic_add_return(size, &sk->sk_rmem_alloc);
-	if ((rmem > sk->sk_rcvbuf) && (rmem > size))
+	if (rmem > (size + sk->sk_rcvbuf))
 		goto uncharge_drop;
 
 	spin_lock(&list->lock);
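
For reference, a minimal standalone sketch (not kernel code: the helper
names, the plain-int arithmetic and the sample numbers below are made up
for illustration) contrasting the admission check introduced by
850cbaddb52d with the one restored by this patch:

/*
 * Standalone sketch, not kernel code: simplified stand-ins for
 * sk->sk_rmem_alloc, sk->sk_rcvbuf and skb->truesize, with the
 * atomic ops reduced to plain arithmetic (single-threaded).
 */
#include <stdbool.h>
#include <stdio.h>

/* Check after 850cbaddb52d: rmem may exceed rcvbuf only when nothing
 * is charged yet (rmem == 0), i.e. a packet is admitted past the
 * limit only into an empty queue.
 */
static bool admit_strict(int rmem, int rcvbuf, int size)
{
	if (rmem && (rmem + size > rcvbuf))
		return false;
	rmem += size;
	if (rmem > rcvbuf && rmem > size)
		return false;
	return true;
}

/* Check restored by this patch: any enqueue may push rmem up to one
 * packet's truesize past rcvbuf. In the kernel the second test catches
 * concurrent enqueues that together overshoot by more than one packet;
 * in this single-threaded sketch it never fires.
 */
static bool admit_relaxed(int rmem, int rcvbuf, int size)
{
	if (rmem > rcvbuf)
		return false;
	rmem += size;
	if (rmem > size + rcvbuf)
		return false;
	return true;
}

int main(void)
{
	int rcvbuf = 4096, size = 1500, rmem = 3000;

	/* strict:  3000 + 1500 > 4096 on a non-empty queue -> drop (0)
	 * relaxed: 3000 <= 4096 and 4500 <= 1500 + 4096   -> accept (1)
	 */
	printf("strict:  %d\n", admit_strict(rmem, rcvbuf, size));
	printf("relaxed: %d\n", admit_relaxed(rmem, rcvbuf, size));
	return 0;
}

With these sample values the sketch prints "strict: 0" and "relaxed: 1",
i.e. the restored check accepts a packet the stricter one would have
dropped, which is the "less conservative" behaviour described above.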