From: Chao Yu
Date: Thu, 5 May 2016 11:13:03 +0000 (+0800)
Subject: f2fs: avoid panic when truncating to max filesize
X-Git-Tag: v4.7-rc1~84^2~27
X-Git-Url: https://git.karo-electronics.de/?a=commitdiff_plain;h=09210c973af30320edc03a6325422cdd0f03b580;p=karo-tx-linux.git

f2fs: avoid panic when truncating to max filesize

The following panic occurs when truncating an inode which has an inline
xattr to the max filesize.

[] get_dnode_of_data+0x4e/0x580 [f2fs]
[] ? read_node_page+0x51/0x90 [f2fs]
[] ? get_node_page.part.34+0xb9/0x170 [f2fs]
[] truncate_blocks+0x131/0x3f0 [f2fs]
[] f2fs_truncate+0x73/0x100 [f2fs]
[] f2fs_setattr+0x62/0x2a0 [f2fs]
[] notify_change+0x158/0x300
[] do_truncate+0x6b/0xa0
[] ? __sb_start_write+0x49/0x100
[] do_sys_ftruncate.constprop.12+0x118/0x170
[] SyS_ftruncate+0xe/0x10
[] tracesys+0xe1/0xe6
[] get_node_path+0x210/0x220 [f2fs]
---[ end trace 5fea664dfbcc6625 ]---

The reason is that truncate_blocks tries to truncate all node and data
blocks starting from the specified block offset, whose value here is
(max filesize / block size); but our valid max block offset is actually
(max filesize / block size) - 1, so f2fs trips a BUG_ON on the invalid
block offset in the truncation path.

This patch lets f2fs skip truncating data which exceeds the max
filesize.

Signed-off-by: Chao Yu
Signed-off-by: Jaegeuk Kim
---

diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
index dc47d5c7b882..dd50f30a2f57 100644
--- a/fs/f2fs/file.c
+++ b/fs/f2fs/file.c
@@ -563,6 +563,9 @@ int truncate_blocks(struct inode *inode, u64 from, bool lock)
 
 	free_from = (pgoff_t)F2FS_BYTES_TO_BLK(from + blocksize - 1);
 
+	if (free_from >= sbi->max_file_blocks)
+		goto free_partial;
+
 	if (lock)
 		f2fs_lock_op(sbi);
 
@@ -604,7 +607,7 @@ free_next:
 out:
 	if (lock)
 		f2fs_unlock_op(sbi);
-
+free_partial:
 	/* lastly zero out the first data page */
 	if (!err)
 		err = truncate_partial_data_page(inode, from, truncate_page);
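
[Editor's illustration, not part of the patch] The off-by-one reasoning above can be
shown with a standalone user-space C sketch. The names free_from and max_file_blocks
mirror the kernel variables, and the numeric limit is made up purely for illustration.

#include <stdio.h>

int main(void)
{
	/* Hypothetical limit, standing in for sbi->max_file_blocks. */
	const unsigned long long max_file_blocks = 4096;
	unsigned long long free_from;

	/*
	 * Truncating to exactly the max filesize yields a starting block
	 * offset equal to max_file_blocks, one past the last valid offset
	 * (max_file_blocks - 1), so node/data block truncation must be
	 * skipped for such offsets, as the patch does via free_partial.
	 */
	for (free_from = 4094; free_from <= 4097; free_from++)
		printf("free_from=%llu -> %s\n", free_from,
		       free_from >= max_file_blocks ?
		       "skip to free_partial (only zero the partial page)" :
		       "truncate node/data blocks");
	return 0;
}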