From: Mike Snitzer
Date: Fri, 18 Jul 2014 21:59:43 +0000 (-0400)
Subject: dm thin: set minimum_io_size to pool's data block size
X-Git-Tag: v3.17-rc1~29^2~4
X-Git-Url: https://git.karo-electronics.de/?a=commitdiff_plain;h=fdfb4c8c1;p=karo-tx-linux.git

dm thin: set minimum_io_size to pool's data block size

Before, if the block layer's limit stacking didn't establish an
optimal_io_size that was compatible with the thin-pool's data block size,
we'd set optimal_io_size to the data block size and minimum_io_size to 0
(which the block layer adjusts to be physical_block_size).

Update pool_io_hints() to set both minimum_io_size and optimal_io_size to
the thin-pool's data block size.

This fixes an issue reported where mkfs.xfs would create more XFS
Allocation Groups on thinp volumes than on a normal linear LV of
comparable size, see:
https://bugzilla.redhat.com/show_bug.cgi?id=1003227

Reported-by: Chris Murphy
Signed-off-by: Mike Snitzer
---

diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
index 0e844a5eca8f..4843801173fe 100644
--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -3177,7 +3177,7 @@ static void pool_io_hints(struct dm_target *ti, struct queue_limits *limits)
 	 */
 	if (io_opt_sectors < pool->sectors_per_block ||
 	    do_div(io_opt_sectors, pool->sectors_per_block)) {
-		blk_limits_io_min(limits, 0);
+		blk_limits_io_min(limits, pool->sectors_per_block << SECTOR_SHIFT);
 		blk_limits_io_opt(limits, pool->sectors_per_block << SECTOR_SHIFT);
 	}
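
The effect of the change can be observed from userspace by reading the queue
limits the thin-pool device exposes in sysfs (minimum_io_size and
optimal_io_size). The small sketch below is not part of the patch; it only
illustrates the check. The default device name "dm-0" is an assumption, so
pass the dm device backing your thin-pool as the first argument.

/*
 * check_io_hints.c - print the I/O size hints a block device reports.
 * Illustrative sketch only; "dm-0" below is an assumed default device.
 */
#include <stdio.h>
#include <stdlib.h>

static unsigned long read_queue_limit(const char *dev, const char *attr)
{
	char path[256];
	unsigned long val = 0;
	FILE *f;

	/* e.g. /sys/block/dm-0/queue/minimum_io_size */
	snprintf(path, sizeof(path), "/sys/block/%s/queue/%s", dev, attr);
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		exit(EXIT_FAILURE);
	}
	if (fscanf(f, "%lu", &val) != 1)
		fprintf(stderr, "failed to parse %s\n", path);
	fclose(f);
	return val;
}

int main(int argc, char **argv)
{
	const char *dev = (argc > 1) ? argv[1] : "dm-0";	/* assumed default */
	unsigned long min_io = read_queue_limit(dev, "minimum_io_size");
	unsigned long opt_io = read_queue_limit(dev, "optimal_io_size");

	printf("%s: minimum_io_size=%lu optimal_io_size=%lu\n",
	       dev, min_io, opt_io);

	/*
	 * With this patch applied, a thin-pool whose stacked optimal_io_size
	 * was not compatible with its data block size reports both hints as
	 * the data block size (in bytes), instead of minimum_io_size falling
	 * back to the physical block size.
	 */
	if (min_io != 0 && min_io == opt_io)
		printf("minimum_io_size and optimal_io_size agree\n");

	return 0;
}

Build and run with something like "gcc -o check_io_hints check_io_hints.c"
and "./check_io_hints dm-3", substituting the dm device of the thin-pool.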