From: Jeremy Fitzhardinge
Date: Tue, 16 Oct 2007 18:51:31 +0000 (-0700)
Subject: xfs: eagerly remove vmap mappings to avoid upsetting Xen
X-Git-Url: https://git.karo-electronics.de/?a=commitdiff_plain;h=ace2e92e193126711cb3a83a3752b2c5b8396950;p=linux-beck.git

xfs: eagerly remove vmap mappings to avoid upsetting Xen

XFS leaves stray mappings around when it vmaps memory to make it
virtually contiguous.  This upsets Xen if one of those pages is being
recycled into a pagetable, since it finds an extra writable mapping of
the page.

This patch solves the problem in a brute force way, by making XFS
always eagerly unmap its mappings.  David Chinner says this shouldn't
have any performance impact on filesystems with default block sizes;
it will only affect filesystems with large block sizes.

Signed-off-by: Jeremy Fitzhardinge
Acked-by: David Chinner
Cc: Nick Piggin
Cc: XFS masters
Cc: Stable kernel
Cc: Morten Bøgeskov
Cc: Mark Williamson
---

diff --git a/fs/xfs/linux-2.6/xfs_buf.c b/fs/xfs/linux-2.6/xfs_buf.c
index 39f44ee572e8..455e042e0c10 100644
--- a/fs/xfs/linux-2.6/xfs_buf.c
+++ b/fs/xfs/linux-2.6/xfs_buf.c
@@ -187,6 +187,19 @@ free_address(
 {
 	a_list_t	*aentry;
 
+#ifdef CONFIG_XEN
+	/*
+	 * Xen needs to be able to make sure it can get an exclusive
+	 * RO mapping of pages it wants to turn into a pagetable.  If
+	 * a newly allocated page is also still being vmap()ed by xfs,
+	 * it will cause pagetable construction to fail.  This is a
+	 * quick workaround to always eagerly unmap pages so that Xen
+	 * is happy.
+	 */
+	vunmap(addr);
+	return;
+#endif
+
 	aentry = kmalloc(sizeof(a_list_t), GFP_NOWAIT);
 	if (likely(aentry)) {
 		spin_lock(&as_lock);