From a58266cf8e0b2d1ede969b5b3d8dacf03faedeae Mon Sep 17 00:00:00 2001
From: Mel Gorman
Date: Thu, 25 Oct 2012 12:14:33 +1100
Subject: [PATCH] mm: vmscan: scale number of pages reclaimed by
 reclaim/compaction only in direct reclaim

Jiri Slaby reported the following:

	(It's an effective revert of "mm: vmscan: scale number of pages
	reclaimed by reclaim/compaction based on failures".) Given kswapd
	had hours of runtime in ps/top output yesterday in the morning
	and after the revert it's now 2 minutes in sum for the last 24h,
	I would say, it's gone.

The intention of the patch in question was to compensate for the loss of
lumpy reclaim. Part of the reason lumpy reclaim worked was that it
aggressively reclaimed pages, and this patch was meant to be a sane
compromise.

When compaction fails, it gets deferred and both compaction and
reclaim/compaction are deferred to avoid excessive reclaim. However,
since commit c6543459 ("mm: remove __GFP_NO_KSWAPD"), kswapd is woken up
each time and continues reclaiming, which was not taken into account
when the patch was developed.

As this path does not take deferred compaction into account, kswapd
scans aggressively before falling out at the compaction_deferred() check
in compaction_ready(). This patch avoids kswapd scaling pages for
reclaim and leaves the aggressive reclaim to the process attempting the
THP allocation.

Signed-off-by: Mel Gorman
Reported-by: Jiri Slaby
Cc: Rik van Riel
Cc: Minchan Kim
Cc: Valdis Kletnieks
Signed-off-by: Andrew Morton
---
 mm/vmscan.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2624edcfb420..2b7edfab3b05 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1763,14 +1763,20 @@ static bool in_reclaim_compaction(struct scan_control *sc)
 #ifdef CONFIG_COMPACTION
 /*
  * If compaction is deferred for sc->order then scale the number of pages
- * reclaimed based on the number of consecutive allocation failures
+ * reclaimed based on the number of consecutive allocation failures. This
+ * scaling only happens for direct reclaim as it is about to attempt
+ * compaction. If compaction fails, future allocations will be deferred
+ * and reclaim avoided. On the other hand, kswapd does not take compaction
+ * deferral into account so if it scaled, it could scan excessively even
+ * though allocations are temporarily not being attempted.
  */
 static unsigned long scale_for_compaction(unsigned long pages_for_compaction,
 			struct lruvec *lruvec, struct scan_control *sc)
 {
 	struct zone *zone = lruvec_zone(lruvec);
 
-	if (zone->compact_order_failed <= sc->order)
+	if (zone->compact_order_failed <= sc->order &&
+	    !current_is_kswapd())
 		pages_for_compaction <<= zone->compact_defer_shift;
 	return pages_for_compaction;
 }
-- 
2.39.5
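
For readers outside the kernel tree, here is a minimal userspace sketch of
the behavioural change. The struct definitions and the is_kswapd flag are
simplified, hypothetical stand-ins for the real kernel types and for
current_is_kswapd(); it illustrates the scaling logic under those
assumptions and is not kernel code.

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical, simplified stand-ins for the kernel structures;
 * only the fields used by the scaling logic are modelled. */
struct zone {
	int compact_order_failed;		/* lowest order that recently failed */
	unsigned int compact_defer_shift;	/* grows with consecutive failures */
};

struct scan_control {
	int order;				/* allocation order being reclaimed for */
};

/* Models the patched scale_for_compaction(): the reclaim target grows
 * exponentially with consecutive compaction failures, but only for
 * direct reclaim, never for kswapd. */
static unsigned long scale_for_compaction(unsigned long pages_for_compaction,
		const struct zone *zone, const struct scan_control *sc,
		bool is_kswapd)
{
	if (zone->compact_order_failed <= sc->order && !is_kswapd)
		pages_for_compaction <<= zone->compact_defer_shift;
	return pages_for_compaction;
}

int main(void)
{
	struct zone zone = { .compact_order_failed = 9, .compact_defer_shift = 3 };
	struct scan_control sc = { .order = 9 };	/* THP-sized allocation */
	unsigned long base = 32;			/* baseline reclaim target */

	/* Direct reclaim still scales: 32 << 3 = 256 pages.
	 * kswapd no longer scales: stays at 32 pages. */
	printf("direct reclaim: %lu pages\n",
	       scale_for_compaction(base, &zone, &sc, false));
	printf("kswapd:         %lu pages\n",
	       scale_for_compaction(base, &zone, &sc, true));
	return 0;
}

Since compact_defer_shift is capped in the kernel (COMPACT_MAX_DEFER_SHIFT,
6 in kernels of this vintage), the direct-reclaim target can grow by at
most 64x, while kswapd now always works from the unscaled target.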