From ad2f37f07b848d7c021e00091f271de35958bd4a Mon Sep 17 00:00:00 2001
From: Johannes Weiner
Date: Wed, 20 Feb 2013 13:14:02 +1100
Subject: [PATCH] mm: reduce rmap overhead for ex-KSM page copies created on
 swap faults

When ex-KSM pages are faulted from swap cache, the fault handler is not
capable of re-establishing anon_vma-spanning KSM pages.  In this case, a
copy of the page is created instead, just like during a COW break.

These freshly made copies are known to be exclusive to the faulting VMA
and there is no reason to go look for this page in parent and sibling
processes during rmap operations.

Use page_add_new_anon_rmap() for these copies.  This also puts them on
the proper LRU lists and marks them SwapBacked, so we can get rid of
doing this ad-hoc in the KSM copy code.

Signed-off-by: Johannes Weiner
Reviewed-by: Rik van Riel
Acked-by: Hugh Dickins
Cc: Simon Jeons
Cc: Mel Gorman
Cc: Michal Hocko
Cc: Satoru Moriya
Signed-off-by: Andrew Morton
---
 mm/ksm.c    | 6 ------
 mm/memory.c | 5 ++++-
 2 files changed, 4 insertions(+), 7 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 51573858938d..e1f1f278075f 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1590,13 +1590,7 @@ struct page *ksm_does_need_to_copy(struct page *page,
 
 		SetPageDirty(new_page);
 		__SetPageUptodate(new_page);
-		SetPageSwapBacked(new_page);
 		__set_page_locked(new_page);
-
-		if (!mlocked_vma_newpage(vma, new_page))
-			lru_cache_add_lru(new_page, LRU_ACTIVE_ANON);
-		else
-			add_page_to_unevictable_list(new_page);
 	}
 
 	return new_page;
diff --git a/mm/memory.c b/mm/memory.c
index bc8bec762db7..569558810b90 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3044,7 +3044,10 @@ static int do_swap_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	}
 	flush_icache_page(vma, page);
 	set_pte_at(mm, address, page_table, pte);
-	do_page_add_anon_rmap(page, vma, address, exclusive);
+	if (swapcache) /* ksm created a completely new copy */
+		page_add_new_anon_rmap(page, vma, address);
+	else
+		do_page_add_anon_rmap(page, vma, address, exclusive);
 	/* It's better to call commit-charge after rmap is established */
 	mem_cgroup_commit_charge_swapin(page, ptr);
 
-- 
2.39.5
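
Note for readers unfamiliar with the rmap helpers: the open-coded
SwapBacked and LRU handling can be deleted from ksm_does_need_to_copy()
because page_add_new_anon_rmap() already performs that work.  A
simplified sketch of the v3.8-era helper from mm/rmap.c (the exact body
varies by kernel version, and the NR_ANON_PAGES statistics accounting
is omitted here; this is an illustration, not a standalone program):

void page_add_new_anon_rmap(struct page *page,
	struct vm_area_struct *vma, unsigned long address)
{
	VM_BUG_ON(address < vma->vm_start || address >= vma->vm_end);
	SetPageSwapBacked(page);		/* previously done ad-hoc in ksm.c */
	atomic_set(&page->_mapcount, 0);	/* increment count (starts at -1) */
	/* exclusive == 1: point page->mapping at this VMA's own anon_vma */
	__page_set_anon_rmap(page, vma, address, 1);
	if (!mlocked_vma_newpage(vma, page))	/* previously done ad-hoc in ksm.c */
		lru_cache_add_lru(page, LRU_ACTIVE_ANON);
	else
		add_page_to_unevictable_list(page);
}

Because the exclusive path anchors the page to the faulting VMA's own
anon_vma rather than an anon_vma shared with parent or sibling
processes, later rmap walks over the copy stay local to this VMA,
which is the overhead reduction the subject line refers to.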