From ef64fe4216db21ccd23f747c1179a20ad1428605 Mon Sep 17 00:00:00 2001
From: "Aneesh Kumar K.V"
Date: Sat, 21 Jul 2012 10:53:58 +1000
Subject: [PATCH] hugetlb: avoid taking i_mmap_mutex in unmap_single_vma() for
 hugetlb

The i_mmap_mutex lock was added to unmap_single_vma() by commit
502717f4e ("hugetlb: fix linked list corruption in
unmap_hugepage_range()"), but we no longer use page->lru in
unmap_hugepage_range().  The lock is also taken higher up in the stack
in some code paths, which would result in a deadlock:

unmap_mapping_range (i_mmap_mutex)
 -> unmap_mapping_range_tree
  -> unmap_mapping_range_vma
   -> zap_page_range_single
    -> unmap_single_vma
     -> unmap_hugepage_range (i_mmap_mutex)

For shared page table support for huge pages, since page table pages
are reference counted, we don't need any lock during huge_pmd_unshare.
We do take i_mmap_mutex in huge_pmd_share() while walking the
vma_prio_tree in the mapping (commit 39dde65c9940c97f ("shared page
table for hugetlb page")).

Signed-off-by: Aneesh Kumar K.V
Cc: David Rientjes
Acked-by: KAMEZAWA Hiroyuki
Cc: Hillf Danton
Cc: Michal Hocko
Cc: KOSAKI Motohiro
Signed-off-by: Andrew Morton
---
 mm/memory.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 59e5bebc2e35..3b04b3ce3bb3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1343,11 +1343,8 @@ static void unmap_single_vma(struct mmu_gather *tlb,
			 * Since no pte has actually been setup, it is
			 * safe to do nothing in this case.
			 */
-			if (vma->vm_file) {
-				mutex_lock(&vma->vm_file->f_mapping->i_mmap_mutex);
+			if (vma->vm_file)
				__unmap_hugepage_range(tlb, vma, start, end, NULL);
-				mutex_unlock(&vma->vm_file->f_mapping->i_mmap_mutex);
-			}
		} else
			unmap_page_range(tlb, vma, start, end, details);
 }
--
2.39.5