From: Sonny Rao
Date: Thu, 20 Dec 2012 23:05:07 +0000 (-0800)
Subject: mm: fix calculation of dirtyable memory
X-Git-Tag: v3.7.2~109
X-Git-Url: https://git.karo-electronics.de/?a=commitdiff_plain;h=9359dd031b11100a3532ef7bfc04f2fc7f23c33b;p=karo-tx-linux.git

mm: fix calculation of dirtyable memory

commit c8b74c2f6604923de91f8aa6539f8bb934736754 upstream.

The system uses global_dirtyable_memory() to calculate the number of
dirtyable pages, i.e. pages that can be allocated to the page cache.  A
bug causes an underflow, making the page count look like a huge unsigned
number.  This in turn confuses the dirty writeback throttling into
aggressively writing back pages as they become dirty (usually 1 page at a
time).  This generally only affects systems with highmem, because the
underflowed count gets subtracted from the global count of dirtyable
memory.

The problem was introduced with v3.2-4896-gab8fabd

The fix is to ensure we don't get an underflowed total of either highmem
or global dirtyable memory.

Signed-off-by: Sonny Rao
Signed-off-by: Puneet Kumar
Acked-by: Johannes Weiner
Tested-by: Damien Wyart
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 830893b2b3c7..c0fa8bdaa338 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -200,6 +200,18 @@ static unsigned long highmem_dirtyable_memory(unsigned long total)
 		x += zone_page_state(z, NR_FREE_PAGES) +
 		     zone_reclaimable_pages(z) - z->dirty_balance_reserve;
 	}
+	/*
+	 * Unreclaimable memory (kernel memory or anonymous memory
+	 * without swap) can bring down the dirtyable pages below
+	 * the zone's dirty balance reserve and the above calculation
+	 * will underflow.  However we still want to add in nodes
+	 * which are below threshold (negative values) to get a more
+	 * accurate calculation but make sure that the total never
+	 * underflows.
+	 */
+	if ((long)x < 0)
+		x = 0;
+
 	/*
 	 * Make sure that the number of highmem pages is never larger
 	 * than the number of the total dirtyable memory. This can only
@@ -222,8 +234,8 @@ static unsigned long global_dirtyable_memory(void)
 {
 	unsigned long x;
 
-	x = global_page_state(NR_FREE_PAGES) + global_reclaimable_pages() -
-	    dirty_balance_reserve;
+	x = global_page_state(NR_FREE_PAGES) + global_reclaimable_pages();
+	x -= min(x, dirty_balance_reserve);
 
 	if (!vm_highmem_is_dirtyable)
 		x -= highmem_dirtyable_memory(x);
@@ -290,9 +302,12 @@ static unsigned long zone_dirtyable_memory(struct zone *zone)
 	 * highmem zone can hold its share of dirty pages, so we don't
 	 * care about vm_highmem_is_dirtyable here.
 	 */
-	return zone_page_state(zone, NR_FREE_PAGES) +
-	       zone_reclaimable_pages(zone) -
-	       zone->dirty_balance_reserve;
+	unsigned long nr_pages = zone_page_state(zone, NR_FREE_PAGES) +
+		zone_reclaimable_pages(zone);
+
+	/* don't allow this to underflow */
+	nr_pages -= min(nr_pages, zone->dirty_balance_reserve);
+	return nr_pages;
 }
 
 /**
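
For illustration only (not part of the patch): a minimal user-space C sketch of
the underflow the commit message describes and of the min() clamp the fix uses.
The variable names avail and reserve and their values are made up; the kernel
counters involved are much larger and per-zone.

/* underflow.c - build with: gcc -o underflow underflow.c */
#include <stdio.h>

#define min(a, b) ((a) < (b) ? (a) : (b))

int main(void)
{
	/* Suppose a zone has fewer free+reclaimable pages than its reserve. */
	unsigned long avail = 100;	/* free + reclaimable pages (hypothetical) */
	unsigned long reserve = 150;	/* dirty balance reserve (hypothetical) */

	/* Old calculation: unsigned subtraction wraps to a huge number. */
	unsigned long buggy = avail - reserve;

	/* Fixed calculation: clamp the subtrahend so the result never wraps. */
	unsigned long fixed = avail - min(avail, reserve);

	printf("buggy: %lu\n", buggy);	/* ~1.8e19 on 64-bit, not 0 */
	printf("fixed: %lu\n", fixed);	/* 0 */
	return 0;
}

The huge "buggy" value is what then got subtracted from the global dirtyable
count on highmem systems, which is why writeback throttled so aggressively.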