vmscan: add barrier to prevent evictable page in unevictable list
author    Minchan Kim <minchan.kim@gmail.com>
          Wed, 5 Oct 2011 00:43:15 +0000 (11:43 +1100)
committer Stephen Rothwell <sfr@canb.auug.org.au>
          Wed, 12 Oct 2011 06:32:14 +0000 (17:32 +1100)
When a race between putback_lru_page() and shmem_lock() with lock=0
happens, program execution order is as follows, but the clear_bit on
processor #1 could be reordered right before the spin_unlock of
processor #1.  Then, the page would be stranded on the unevictable list.

processor #0                    processor #1
(putback_lru_page)              (shmem_lock, lock=0)

spin_lock
SetPageLRU
spin_unlock
                                clear_bit(AS_UNEVICTABLE)
                                spin_lock
                                if PageLRU()
                                        if !test_bit(AS_UNEVICTABLE)
                                                move evictable list
smp_mb
if !test_bit(AS_UNEVICTABLE)
        move evictable list
                                spin_unlock

But, pagevec_lookup() in scan_mapping_unevictable_pages() has
rcu_read_[un]lock(), which happens to prevent the reordering before we
reach test_bit(AS_UNEVICTABLE) on processor #1, so this problem never
actually happens.  But that is an unexpected side effect and we should
solve this problem properly.

This patch adds an explicit smp_mb__after_clear_bit() after
mapping_clear_unevictable().

I have not actually hit this problem; I just found it during review.
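
For illustration only (not part of the patch): the interleaving above is
the classic store-buffer pattern, so the new smp_mb__after_clear_bit() in
shmem_lock() pairs with the existing smp_mb() in putback_lru_page().  A
minimal userspace sketch of that pairing, using C11 atomics in place of
the kernel primitives (page_lru, as_unevictable, putback_side and
shmem_unlock_side are made-up stand-ins, not kernel symbols):

/*
 * Userspace model of the race: page_lru stands in for PG_lru,
 * as_unevictable for AS_UNEVICTABLE, and the seq_cst fences stand in
 * for smp_mb() and smp_mb__after_clear_bit().  Build with -pthread.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int page_lru;             /* models PG_lru (initially clear) */
static atomic_int as_unevictable = 1;   /* models AS_UNEVICTABLE (initially set) */
static atomic_int moved;                /* how many paths rescued the page */

/* Models putback_lru_page(): set PG_lru, then re-check AS_UNEVICTABLE. */
static void *putback_side(void *arg)
{
	(void)arg;
	atomic_store_explicit(&page_lru, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);      /* smp_mb() */
	if (!atomic_load_explicit(&as_unevictable, memory_order_relaxed))
		atomic_fetch_add(&moved, 1);            /* move to evictable list */
	return NULL;
}

/* Models shmem_lock(lock=0): clear AS_UNEVICTABLE, then scan for LRU pages. */
static void *shmem_unlock_side(void *arg)
{
	(void)arg;
	atomic_store_explicit(&as_unevictable, 0, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);      /* smp_mb__after_clear_bit() */
	if (atomic_load_explicit(&page_lru, memory_order_relaxed))
		atomic_fetch_add(&moved, 1);            /* move to evictable list */
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, putback_side, NULL);
	pthread_create(&b, NULL, shmem_unlock_side, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/* With both fences in place, at least one path must see the other's store. */
	printf("page rescued by %d path(s)\n", atomic_load(&moved));
	return 0;
}

Drop either fence and both loads may read the stale value, which is
exactly the stranded-page case described above.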

Signed-off-by: Minchan Kim <minchan.kim@gmail.com>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Rik van Riel <riel@redhat.com>
Cc: Lee Schermerhorn <lee.schermerhorn@hp.com>
Acked-by: Johannes Weiner <jweiner@redhat.com>
Signed-off-by: Andrew Morton <akpm@google.com>
mm/shmem.c
mm/vmscan.c

diff --git a/mm/shmem.c b/mm/shmem.c
index 2d357729529880b29f18704edda807108f6cdc5b..fa4fa6ce13bc431c65de6725d9b86f6db24bf041 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1068,6 +1068,12 @@ int shmem_lock(struct file *file, int lock, struct user_struct *user)
                user_shm_unlock(inode->i_size, user);
                info->flags &= ~VM_LOCKED;
                mapping_clear_unevictable(file->f_mapping);
+               /*
+                * Ensure that a racing putback_lru_page() can see
+                * the pages of this mapping are evictable when we
+                * skip them due to !PageLRU during the scan.
+                */
+               smp_mb__after_clear_bit();
                scan_mapping_unevictable_pages(file->f_mapping);
        }
        retval = 0;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index f84541c59200340380e3f9d868bdc0f46afddaf6..5369cf45eb08ca837d2eeac02d13f40d0e519737 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -633,13 +633,14 @@ redo:
                lru = LRU_UNEVICTABLE;
                add_page_to_unevictable_list(page);
                /*
-                * When racing with an mlock clearing (page is
-                * unlocked), make sure that if the other thread does
-                * not observe our setting of PG_lru and fails
-                * isolation, we see PG_mlocked cleared below and move
+                * When racing with an mlock or AS_UNEVICTABLE clearing
+                * (page is unlocked) make sure that if the other thread
+                * does not observe our setting of PG_lru and fails
+                * isolation/check_move_unevictable_page,
+                * we see PG_mlocked/AS_UNEVICTABLE cleared below and move
                 * the page back to the evictable list.
                 *
-                * The other side is TestClearPageMlocked().
+                * The other side is TestClearPageMlocked() or shmem_lock().
                 */
                smp_mb();
        }