x86/mm: Completely drop the TLB flush from ptep_set_access_flags()
author    Rik van Riel <riel@redhat.com>
          Sat, 27 Oct 2012 16:12:11 +0000 (12:12 -0400)
committer Ingo Molnar <mingo@kernel.org>
          Mon, 19 Nov 2012 01:15:35 +0000 (02:15 +0100)
Intel has an architectural guarantee that the TLB entry causing
a page fault gets invalidated automatically. This means
we should be able to drop the local TLB invalidation.

Because of the way other areas of the page fault code work,
chances are good that all x86 CPUs do this.  However, if
someone somewhere has an x86 CPU that does not invalidate
the TLB entry causing a page fault, this one-liner should
be easy to revert - or a CPU model specific quirk could
be added to retain this optimization on most CPUs.

Signed-off-by: Rik van Riel <riel@redhat.com>
Acked-by: Linus Torvalds <torvalds@kernel.org>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michel Lespinasse <walken@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Hugh Dickins <hughd@google.com>
[ Applied changelog massage and moved this last in the series,
  to create bisection distance. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
arch/x86/mm/pgtable.c

index be3bb4690887e283712a77fb9b6cb23ba5a6b2a1..7353de3d98a75fcd928ee039fce89ea0fa4b3189 100644
@@ -317,7 +317,6 @@ int ptep_set_access_flags(struct vm_area_struct *vma,
        if (changed && dirty) {
                *ptep = entry;
                pte_update_defer(vma->vm_mm, address, ptep);
-               __flush_tlb_one(address);
        }
 
        return changed;