The current implementation of the cleanup function for the
interval RB trees has two flaws which can cause problems when
the function runs concurrently with the MMU notifier.
The flaws are that the MMU notifier callbacks were deregistered
only after the tree had been emptied and, furthermore, that the
tree was not locked during the traversal.
This commit fixes both flaws by, first, switching the order of
the two operations and, second, locking the tree while
traversing it so that no other operation can race with the
removal.
Reviewed-by: Dean Luick <dean.luick@intel.com>
Signed-off-by: Mitko Haralanov <mitko.haralanov@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
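
For illustration only, below is a minimal, self-contained userspace
sketch of the corrected teardown ordering. The names here (struct
handler, notifier_unregister, handler_teardown) and the plain linked
list standing in for the interval RB tree are assumptions made for the
example, not the driver's actual code; only the two ordering rules come
from the patch: stop the notification source first, then hold the lock
for the entire traversal while emptying the structure.

/*
 * Illustrative userspace model of the fixed teardown ordering
 * (hypothetical names; a linked list stands in for the RB tree).
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

struct node {
	struct node *next;
	int key;
};

struct handler {
	pthread_mutex_t lock;   /* models handler->lock */
	struct node *head;      /* models the interval RB tree */
	bool registered;        /* models the MMU notifier registration */
};

/* Models mmu_notifier_unregister(): no callbacks arrive after this. */
static void notifier_unregister(struct handler *h)
{
	h->registered = false;
}

static void handler_teardown(struct handler *h)
{
	struct node *n;

	/* 1. Unregister first so no new notifications can race with us. */
	notifier_unregister(h);

	/* 2. Hold the lock for the whole traversal while emptying. */
	pthread_mutex_lock(&h->lock);
	while ((n = h->head) != NULL) {
		h->head = n->next;
		free(n);        /* models handler->ops->remove() */
	}
	pthread_mutex_unlock(&h->lock);

	pthread_mutex_destroy(&h->lock);
	free(h);
}

int main(void)
{
	struct handler *h = calloc(1, sizeof(*h));
	int i;

	pthread_mutex_init(&h->lock, NULL);
	h->registered = true;
	for (i = 0; i < 3; i++) {
		struct node *n = calloc(1, sizeof(*n));

		n->key = i;
		n->next = h->head;
		h->head = n;
	}
	handler_teardown(h);
	printf("teardown complete\n");
	return 0;
}

The ordering matters because, before the fix, a notifier callback could
still fire while the tree was being emptied and walk or remove nodes
with no lock held.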
if (!handler)
return;
+ /* Unregister first so we don't get any more notifications. */
+ if (current->mm)
+ mmu_notifier_unregister(&handler->mn, current->mm);
+
spin_lock_irqsave(&mmu_rb_lock, flags);
list_del(&handler->list);
spin_unlock_irqrestore(&mmu_rb_lock, flags);
+ spin_lock_irqsave(&handler->lock, flags);
	if (!RB_EMPTY_ROOT(root)) {
		struct rb_node *node;
		struct mmu_rb_node *rbnode;

		while ((node = rb_first(root))) {
			rbnode = rb_entry(node, struct mmu_rb_node, node);
			rb_erase(node, root);
			handler->ops->remove(root, rbnode, NULL);
		}
	}
+ spin_unlock_irqrestore(&handler->lock, flags);
- if (current->mm)
- mmu_notifier_unregister(&handler->mn, current->mm);
kfree(handler);
}