From: Kirill A. Shutemov
Date: Thu, 29 Nov 2012 03:17:37 +0000 (+1100)
Subject: thp-change-split_huge_page_pmd-interface-v6
X-Git-Tag: next-20121205~1^2~266
X-Git-Url: https://git.karo-electronics.de/?a=commitdiff_plain;h=42b33af50b8e6932ccf0836d79143e97acad9529;p=karo-tx-linux.git

thp-change-split_huge_page_pmd-interface-v6

Pass vma instead of mm and add an address parameter.

In most cases we already have the vma on the stack.  We provide
split_huge_page_pmd_mm() for the few cases where we have the mm but
not the vma.

This change is preparation for the huge zero pmd splitting
implementation.

Signed-off-by: Kirill A. Shutemov
Cc: Andrea Arcangeli
Cc: Andi Kleen
Cc: "H. Peter Anvin"
Cc: Mel Gorman
Cc: David Rientjes
Signed-off-by: Andrew Morton
---

diff --git a/Documentation/vm/transhuge.txt b/Documentation/vm/transhuge.txt
index 677a599be430..8f5b41db314c 100644
--- a/Documentation/vm/transhuge.txt
+++ b/Documentation/vm/transhuge.txt
@@ -276,7 +276,7 @@ unaffected. libhugetlbfs will also work fine as usual.
 == Graceful fallback ==
 
 Code walking pagetables but unaware about huge pmds can simply call
-split_huge_page_pmd(vma, pmd, addr) where the pmd is the one returned by
+split_huge_page_pmd(vma, addr, pmd) where the pmd is the one returned by
 pmd_offset. It's trivial to make the code transparent hugepage aware
 by just grepping for "pmd_offset" and adding split_huge_page_pmd where
 missing after pmd_offset returns the pmd. Thanks to the graceful
@@ -299,7 +299,7 @@ diff --git a/mm/mremap.c b/mm/mremap.c
 		return NULL;
 
 	pmd = pmd_offset(pud, addr);
-+	split_huge_page_pmd(vma, pmd, addr);
++	split_huge_page_pmd(vma, addr, pmd);
 	if (pmd_none_or_clear_bad(pmd))
 		return NULL;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 53292fb1334f..9fed4ccb2aec 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2644,18 +2644,19 @@ void __split_huge_page_pmd(struct vm_area_struct *vma, unsigned long address,
 {
 	struct page *page;
 	unsigned long haddr = address & HPAGE_PMD_MASK;
+	struct mm_struct *mm = vma->vm_mm;
 
 	BUG_ON(vma->vm_start > haddr || vma->vm_end < haddr + HPAGE_PMD_SIZE);
 
-	spin_lock(&vma->vm_mm->page_table_lock);
+	spin_lock(&mm->page_table_lock);
 	if (unlikely(!pmd_trans_huge(*pmd))) {
-		spin_unlock(&vma->vm_mm->page_table_lock);
+		spin_unlock(&mm->page_table_lock);
 		return;
 	}
 	page = pmd_page(*pmd);
 	VM_BUG_ON(!page_count(page));
 	get_page(page);
-	spin_unlock(&vma->vm_mm->page_table_lock);
+	spin_unlock(&mm->page_table_lock);
 	split_huge_page(page);
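
For reference, the interface change described above reduces to the
following signatures.  This is a minimal illustrative sketch, not code
from the tree: the stub type definitions and the "_old" name are
stand-ins invented here, and only split_huge_page_pmd() and
split_huge_page_pmd_mm() reflect what the series introduces.

/*
 * Sketch of the before/after calling convention.  Stub types replace
 * the real kernel definitions so the declarations are self-contained.
 */
struct mm_struct;                                   /* opaque stub */
struct vm_area_struct { struct mm_struct *vm_mm; }; /* reduced stub */
typedef struct { unsigned long pmd; } pmd_t;        /* reduced stub */

/* Before this patch: callers passed the mm and had no address. */
void split_huge_page_pmd_old(struct mm_struct *mm, pmd_t *pmd);

/* After: callers pass the vma plus the virtual address covered by the
 * huge pmd; the huge zero pmd splitting work will need both. */
void split_huge_page_pmd(struct vm_area_struct *vma, unsigned long address,
			 pmd_t *pmd);

/* For the few walkers that hold only an mm, this helper looks up the
 * vma covering the address internally before splitting. */
void split_huge_page_pmd_mm(struct mm_struct *mm, unsigned long address,
			    pmd_t *pmd);

In other words, a caller that previously wrote
split_huge_page_pmd(mm, pmd) now writes
split_huge_page_pmd(vma, addr, pmd), or falls back to
split_huge_page_pmd_mm(mm, addr, pmd) when no vma is at hand.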