From cad58af2a8fba19c15c628d300f1bf5acf4b626b Mon Sep 17 00:00:00 2001
From: Michel Lespinasse
Date: Thu, 15 Nov 2012 13:37:41 +1100
Subject: [PATCH] mm: ensure safe rb_subtree_gap update when removing VMA

Using the trinity fuzzer, Sasha Levin uncovered a case where
rb_subtree_gap wasn't correctly updated.

Digging into this, the root cause was that vma insertions and removals
require both an rbtree insert or erase operation (which may trigger tree
rotations), and an update of the next vma's gap (which does not change
the tree topology, but may require iterating on the node's ancestors to
propagate the update). The rbtree rotations caused the rb_subtree_gap
values to be updated in some of the internal nodes, but without upstream
propagation. Then the subsequent update on the next vma didn't iterate
as high up the tree as it should have, as it stopped as soon as it hit
one of the internal nodes that had been updated as part of a tree
rotation.

The fix is to impose that all rb_subtree_gap values must be up to date
before any rbtree insertion or erase, with the possible exception that
the node being erased doesn't need to have an up to date rb_subtree_gap.

With this change, during VMA removal we remove the VMA from the rbtree
before we remove it from the linked list. The implication is that the
next vma's rb_subtree_gap value becomes stale when next->vm_prev is
updated, so we want to make sure vma_rb_erase() runs before there are
any such stale rb_subtree_gap values in the rbtree.
(I don't know of a reproducible test case for this particular issue)

Signed-off-by: Michel Lespinasse
Reported-by: Sasha Levin
Reviewed-by: Rik van Riel
Cc: Hugh Dickins
Signed-off-by: Andrew Morton
---
 mm/mmap.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 0ed8aa7d2a34..ad6af34f6cd1 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -570,12 +570,12 @@ static inline void
 __vma_unlink(struct mm_struct *mm, struct vm_area_struct *vma,
 		struct vm_area_struct *prev)
 {
-	struct vm_area_struct *next = vma->vm_next;
+	struct vm_area_struct *next;
 
-	prev->vm_next = next;
+	vma_rb_erase(vma, &mm->mm_rb);
+	prev->vm_next = next = vma->vm_next;
 	if (next)
 		next->vm_prev = prev;
-	vma_rb_erase(vma, &mm->mm_rb);
 	if (mm->mmap_cache == vma)
 		mm->mmap_cache = prev;
 }
-- 
2.39.5