hugetlb: avoid taking i_mmap_mutex in unmap_single_vma() for hugetlb
author		Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
		Thu, 3 May 2012 05:43:36 +0000 (15:43 +1000)
committer	Stephen Rothwell <sfr@canb.auug.org.au>
		Thu, 3 May 2012 05:46:24 +0000 (15:46 +1000)
commit		5b132a166c222933b7527d828807f2db5e898a2f
tree		c0d1d2f062d300784493e3e91d1d7773217ea627
parent		deae5b3178249b0c37d84d27da25939cdb55b4e5
hugetlb: avoid taking i_mmap_mutex in unmap_single_vma() for hugetlb

The i_mmap_mutex lock was added to unmap_single_vma() by commit
502717f4e ("hugetlb: fix linked list corruption in
unmap_hugepage_range()"), but we no longer use page->lru in
unmap_hugepage_range().  Moreover, the lock is already taken higher up
the stack in some code paths, which would result in a deadlock:

unmap_mapping_range (i_mmap_mutex)
 -> unmap_mapping_range_tree
    -> unmap_mapping_range_vma
       -> zap_page_range_single
          -> unmap_single_vma
             -> unmap_hugepage_range (i_mmap_mutex)
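
The fix is to drop the lock around the hugetlb unmap in
unmap_single_vma() in mm/memory.c.  A minimal sketch of the idea (not
the verbatim diff; variable names follow the surrounding code):

	if (unlikely(is_vm_hugetlb_page(vma))) {
		/*
		 * vma->vm_file can be NULL in mmap_region()'s error
		 * cleanup path; no ptes were set up in that case, so
		 * it is safe to do nothing.
		 */
		if (vma->vm_file)
			/*
			 * No mutex_lock() on i_mmap_mutex around this
			 * call any more: callers such as
			 * unmap_mapping_range() may already hold it.
			 */
			__unmap_hugepage_range(tlb, vma, start, end, NULL);
	}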

As for shared page table support for huge pages: since the page table
pages are reference counted, we do not need any lock during
huge_pmd_unshare().  We do take i_mmap_mutex in huge_pmd_share() while
walking the mapping's vma_prio_tree (see commit 39dde65c9940c97f,
"shared page table for hugetlb page").
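
For reference, the refcount-only protocol looks roughly like the x86
huge_pmd_unshare() introduced by that commit (a simplified sketch,
reconstructed here for illustration, not quoted from the patch):

	int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr,
			     pte_t *ptep)
	{
		pgd_t *pgd = pgd_offset(mm, *addr);
		pud_t *pud = pud_offset(pgd, *addr);

		BUG_ON(page_count(virt_to_page(ptep)) == 0);
		if (page_count(virt_to_page(ptep)) == 1)
			return 0;	/* not shared; nothing to undo */

		/* Drop only this mm's reference to the shared pmd page. */
		pud_clear(pud);
		put_page(virt_to_page(ptep));
		*addr = ALIGN(*addr, HPAGE_SIZE * PTRS_PER_PTE) - HPAGE_SIZE;
		return 1;
	}

The page table page's reference count is what keeps the shared pmd
page alive, so the unshare path itself needs no i_mmap_mutex.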

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Hillf Danton <dhillf@gmail.com>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/memory.c