From: Pavel Emelyanov
Date: Thu, 23 May 2013 00:37:07 +0000 (+1000)
Subject: soft-dirty: call mmu notifiers when write-protecting ptes
X-Git-Tag: next-20130527~1^2~228
X-Git-Url: https://git.karo-electronics.de/?a=commitdiff_plain;h=305c1b6f598306cc9c74a59b9f00edeb11f4d4bb;p=karo-tx-linux.git

As noticed by Xiao, since the soft-dirty clear command modifies page tables,
we have to flush TLBs and call mmu notifiers.  The former is already done by
the clear_refs engine itself; the latter still needs to be added.

One thing to note -- in order not to call the per-page invalidate notifier
(the _whole_ address space is about to be changed), the
_invalidate_range_start and _end notifiers are used.  But the exact start and
end are not known for them, so the same trick as in exit_mmap() is used:
start is 0 and end is (unsigned long)-1.

Signed-off-by: Pavel Emelyanov
Cc: Matt Mackall
Cc: Xiao Guangrong
Cc: Glauber Costa
Cc: Marcelo Tosatti
Cc: KOSAKI Motohiro
Cc: Stephen Rothwell
Signed-off-by: Andrew Morton
---

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 9238acbc0a10..a18e065c1c3e 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include <linux/mmu_notifier.h>
 #include
 #include
@@ -791,6 +792,8 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 		.private = &cp,
 	};
 	down_read(&mm->mmap_sem);
+	if (type == CLEAR_REFS_SOFT_DIRTY)
+		mmu_notifier_invalidate_range_start(mm, 0, -1);
 	for (vma = mm->mmap; vma; vma = vma->vm_next) {
 		cp.vma = vma;
 		if (is_vm_hugetlb_page(vma))
@@ -811,6 +814,8 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 			walk_page_range(vma->vm_start, vma->vm_end,
 					&clear_refs_walk);
 	}
+	if (type == CLEAR_REFS_SOFT_DIRTY)
+		mmu_notifier_invalidate_range_end(mm, 0, -1);
 	flush_tlb_mm(mm);
 	up_read(&mm->mmap_sem);
 	mmput(mm);
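
For context, the 0 / -1 range added above is what a secondary-MMU driver
(KVM being the main user) will see in its notifier callbacks.  Below is a
minimal sketch of such a subscriber, not part of this patch: it assumes the
mmu notifier callback signatures of this era (start/end passed directly,
not a range struct), and the demo_* names are hypothetical.

/*
 * Illustrative sketch only: a secondary-MMU driver that subscribes to
 * mmu notifiers and therefore sees the 0 .. -1 range passed by
 * clear_refs_write().  The demo_* names are hypothetical.
 */
#include <linux/mm.h>
#include <linux/mmu_notifier.h>
#include <linux/printk.h>

static void demo_invalidate_range_start(struct mmu_notifier *mn,
					struct mm_struct *mm,
					unsigned long start,
					unsigned long end)
{
	/*
	 * start == 0 and end == -1 mean "the whole address space":
	 * drop all secondary mappings rather than walking page by page.
	 */
	pr_debug("invalidating [%#lx, %#lx)\n", start, end);
}

static void demo_invalidate_range_end(struct mmu_notifier *mn,
				      struct mm_struct *mm,
				      unsigned long start,
				      unsigned long end)
{
	/* primary page tables are consistent again; allow refaulting */
}

static const struct mmu_notifier_ops demo_mmu_notifier_ops = {
	.invalidate_range_start	= demo_invalidate_range_start,
	.invalidate_range_end	= demo_invalidate_range_end,
};

Such a driver registers its ops with mmu_notifier_register() against the mm
it shadows; the invalidate_range_start/_end pair added to clear_refs_write()
is what guarantees it is told before and after the ptes are write-protected.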