================================================================================
WHAT IS Flash-Friendly File System (F2FS)?
================================================================================

NAND flash memory-based storage devices, such as SSDs, eMMC, and SD cards, have
been equipped on a variety of systems ranging from mobile to server systems.
Since they are known to have different characteristics from the conventional
rotating disks, a file system, an upper layer to the storage device, should
adapt to the changes from the start at the design level.

F2FS is a file system exploiting NAND flash memory-based storage devices, which
is based on the Log-structured File System (LFS). The design has been focused
on addressing the fundamental issues in LFS, which are the snowball effect of
the wandering tree and the high cleaning overhead.

Since a NAND flash memory-based storage device shows different characteristics
according to its internal geometry or flash memory management scheme, namely
FTL, F2FS and its tools support various parameters not only for configuring the
on-disk layout, but also for selecting allocation and cleaning algorithms.

The following git tree provides the file system formatting tool (mkfs.f2fs),
a consistency checking tool (fsck.f2fs), and a debugging tool (dump.f2fs).
>> git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs-tools.git

For reporting bugs and sending patches, please use the following mailing list:
>> linux-f2fs-devel@lists.sourceforge.net

================================================================================
BACKGROUND AND DESIGN ISSUES
================================================================================

Log-structured File System (LFS)
--------------------------------
"A log-structured file system writes all modifications to disk sequentially in
a log-like structure, thereby speeding up both file writing and crash recovery.
The log is the only structure on disk; it contains indexing information so that
files can be read back from the log efficiently. In order to maintain large free
areas on disk for fast writing, we divide the log into segments and use a
segment cleaner to compress the live information from heavily fragmented
segments." from Rosenblum, M. and Ousterhout, J. K., 1992, "The design and
implementation of a log-structured file system", ACM Trans. Computer Systems
10, 1 (February 1992), 26-52.

Wandering Tree Problem
----------------------
In LFS, when file data is updated and written to the end of the log, its direct
pointer block is updated due to the changed location. Then the indirect pointer
block is also updated due to the direct pointer block update. In this manner,
the upper index structures such as inode, inode map, and checkpoint block are
also updated recursively. This problem is called the wandering tree problem [1],
and in order to enhance the performance, the update propagation should be
eliminated or relaxed as much as possible.

[1] Bityutskiy, A. 2005. JFFS3 design issues. http://www.linux-mtd.infradead.org/

Cleaning Overhead
-----------------
Since LFS is based on out-of-place writes, it produces many obsolete blocks
scattered across the whole storage. In order to serve new empty log space, it
needs to reclaim these obsolete blocks seamlessly to users. This job is called
the cleaning process.

The process consists of four operations as follows.
1. A victim segment is selected through referencing the segment usage table.
2. It loads parent index structures of all the data in the victim identified by
   segment summary blocks.
3. It checks the cross-reference between the data and its parent index structure.
4. It moves valid data selectively.

This cleaning job may cause unexpected long delays, so the most important goal
is to hide these latencies from users. In addition, it should reduce the amount
of valid data to be moved, and move them quickly as well.
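
As a rough illustration of the flow above, consider the following toy model in
C. It is only a sketch under simplified assumptions (a tiny in-memory validity
bitmap standing in for the segment usage table and summary blocks); none of the
names correspond to the in-kernel implementation.

  #include <stdio.h>
  #include <string.h>

  #define SEGS 8
  #define BLKS 4

  static unsigned char valid[SEGS][BLKS];   /* toy stand-in for SIT/SSA state */

  static int select_victim(void)            /* step 1: consult usage counts */
  {
          int best = 0, best_cnt = BLKS + 1;
          for (int s = 0; s < SEGS; s++) {
                  int cnt = 0;
                  for (int b = 0; b < BLKS; b++)
                          cnt += valid[s][b];
                  if (cnt < best_cnt) { best = s; best_cnt = cnt; }
          }
          return best;
  }

  int main(void)
  {
          memset(valid, 1, sizeof(valid));
          valid[3][0] = valid[3][2] = 0;    /* segment 3 is mostly obsolete */
          int victim = select_victim();
          /* steps 2-3 (loading parent indices and cross-checking) collapse
           * into a bitmap test in this toy; step 4 moves only valid blocks */
          for (int b = 0; b < BLKS; b++)
                  if (valid[victim][b])
                          printf("move block %d of segment %d\n", b, victim);
          return 0;
  }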

================================================================================
KEY FEATURES
================================================================================

Flash Awareness
---------------
- Enlarge the random write area for better performance, but provide the high
  spatial locality
- Align FS data structures to the operational units in FTL as best efforts

Wandering Tree Problem
----------------------
- Use a term, “node”, that represents inodes as well as various pointer blocks
- Introduce Node Address Table (NAT) containing the locations of all the “node”
  blocks; this will cut off the update propagation (see the sketch below).
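
The effect of the NAT can be sketched in a few lines of C. This is a toy model,
not the kernel code: the point is that parents store a stable node id, so an
out-of-place rewrite of a node only touches its NAT entry.

  #include <stdio.h>

  #define NODES 16
  static unsigned int nat[NODES];   /* node id -> current block address */

  int main(void)
  {
          unsigned int nid = 7;     /* parents reference node 7 by this id */
          nat[nid] = 1000;          /* node 7 initially lives at block 1000 */
          nat[nid] = 2048;          /* rewritten out of place: only the NAT
                                       entry changes, no parent is touched */
          printf("node %u now at block %u\n", nid, nat[nid]);
          return 0;
  }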

Cleaning Overhead
-----------------
- Support a background cleaning process
- Support greedy and cost-benefit algorithms for victim selection policies
- Support multi-head logs for static/dynamic hot and cold data separation
- Introduce adaptive logging for efficient block allocation

================================================================================
MOUNT OPTIONS
================================================================================

background_gc=%s       Turn on/off cleaning operations, namely garbage
                       collection, triggered in background when I/O subsystem is
                       idle. If background_gc=on, it will turn on the garbage
                       collection and if background_gc=off, garbage collection
                       will be turned off.
                       Default value for this option is on. So garbage
                       collection is on by default.
disable_roll_forward   Disable the roll-forward recovery routine
discard                Issue discard/TRIM commands when a segment is cleaned.
no_heap                Disable heap-style segment allocation which finds free
                       segments for data from the beginning of main area, while
                       for node from the end of main area.
nouser_xattr           Disable Extended User Attributes. Note: xattr is enabled
                       by default if CONFIG_F2FS_FS_XATTR is selected.
noacl                  Disable POSIX Access Control List. Note: acl is enabled
                       by default if CONFIG_F2FS_FS_POSIX_ACL is selected.
active_logs=%u         Support configuring the number of active logs. In the
                       current design, f2fs supports only 2, 4, and 6 logs.
                       Default number is 6.
disable_ext_identify   Disable the extension list configured by mkfs, so f2fs
                       is not aware of cold files such as media files.
inline_xattr           Enable the inline xattrs feature.
inline_data            Enable the inline data feature: Newly created small
                       (<~3.4KB) files can be written into the inode block.
inline_dentry          Enable the inline dir feature: data in newly created
                       directory entries can be written into the inode block.
                       The space of the inode block which is used to store
                       inline dentries is limited to ~3.4KB.
flush_merge            Merge concurrent cache_flush commands as much as possible
                       to eliminate redundant command issues. If the underlying
                       device handles the cache_flush command relatively slowly,
                       it is recommended to enable this option.
nobarrier              This option can be used if underlying storage guarantees
                       its cached data should be written to the non-volatile
                       area. If this option is set, no cache_flush commands are
                       issued but f2fs still guarantees the write ordering of
                       all the data writes.
fastboot               This option is used when a system wants to reduce mount
                       time as much as possible, even though normal performance
                       can be sacrificed.
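
For example, assuming a volume on /dev/sdb1, several of the options above can be
combined on the command line as usual:

# mount -t f2fs -o background_gc=on,discard,inline_data /dev/sdb1 /mnt/f2fs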

================================================================================
DEBUGFS ENTRIES
================================================================================

/sys/kernel/debug/f2fs/ contains information about all the partitions mounted as
f2fs. Each file shows the whole f2fs information.

/sys/kernel/debug/f2fs/status includes:
 - major file system information managed by f2fs currently
 - average SIT information about whole segments
 - current memory footprint consumed by f2fs.
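
For example, assuming debugfs is mounted at the usual location, the status file
can simply be read:

# cat /sys/kernel/debug/f2fs/status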

================================================================================
SYSFS ENTRIES
================================================================================

Information about mounted f2fs file systems can be found in
/sys/fs/f2fs. Each mounted filesystem will have a directory in
/sys/fs/f2fs based on its device name (e.g., /sys/fs/f2fs/sda).
The files in each per-device directory are shown in the table below.

Files in /sys/fs/f2fs/<devname>
(see also Documentation/ABI/testing/sysfs-fs-f2fs)
..............................................................................
 File                         Content

 gc_max_sleep_time            This tuning parameter controls the maximum sleep
                              time for the garbage collection thread. Time is
                              in milliseconds.

 gc_min_sleep_time            This tuning parameter controls the minimum sleep
                              time for the garbage collection thread. Time is
                              in milliseconds.

 gc_no_gc_sleep_time          This tuning parameter controls the default sleep
                              time for the garbage collection thread. Time is
                              in milliseconds.

 gc_idle                      This parameter controls the selection of victim
                              policy for garbage collection. Setting gc_idle = 0
                              (default) will disable this option. Setting
                              gc_idle = 1 will select the Cost Benefit approach
                              & setting gc_idle = 2 will select the greedy
                              approach.

 reclaim_segments             This parameter controls the number of prefree
                              segments to be reclaimed. If the number of prefree
                              segments is larger than this percentage of the
                              total number of segments, f2fs tries to conduct a
                              checkpoint to reclaim the prefree segments as free
                              segments. By default, 5% over total # of segments.

 max_small_discards           This parameter controls the number of discard
                              commands that consist of small blocks of less than
                              2MB. The candidates to be discarded are cached
                              until checkpoint is triggered, and issued during
                              the checkpoint. By default, it is disabled with 0.

 ipu_policy                   This parameter controls the policy of in-place
                              updates in f2fs. There are five policies:
                              0x01: F2FS_IPU_FORCE, 0x02: F2FS_IPU_SSR,
                              0x04: F2FS_IPU_UTIL, 0x08: F2FS_IPU_SSR_UTIL,
                              0x10: F2FS_IPU_FSYNC.

 min_ipu_util                 This parameter controls the threshold to trigger
                              in-place-updates. The number indicates percentage
                              of the filesystem utilization, and is used by the
                              F2FS_IPU_UTIL and F2FS_IPU_SSR_UTIL policies.

 min_fsync_blocks             This parameter controls the threshold to trigger
                              in-place-updates when F2FS_IPU_FSYNC mode is set.
                              The number indicates the number of dirty pages
                              when fsync needs to flush on its call path. If
                              the number is less than this value, it triggers
                              in-place-updates.

 max_victim_search            This parameter controls the number of trials to
                              find a victim segment when conducting SSR and
                              cleaning operations. The default value is 4096,
                              which covers an 8GB block address range.

 dir_level                    This parameter controls the directory level to
                              support a large directory. If a directory has a
                              large number of files, it can reduce the file
                              lookup latency by increasing this dir_level value.
                              Otherwise, it needs to decrease this value to
                              reduce the space overhead. The default value is 0.

 ram_thresh                   This parameter controls the memory footprint used
                              by free nids and cached nat entries. By default,
                              10 is set, which indicates 10 MB / 1 GB RAM.
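
For example, assuming an f2fs volume on /dev/sda, the victim policy and the
maximum GC sleep time above can be tuned by writing to the corresponding files:

# echo 1 > /sys/fs/f2fs/sda/gc_idle              (cost-benefit policy)
# echo 30000 > /sys/fs/f2fs/sda/gc_max_sleep_time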

================================================================================
USAGE
================================================================================

1. Download userland tools and compile them.

2. Skip, if f2fs was compiled statically inside kernel.
   Otherwise, insert the f2fs.ko module.
   # insmod f2fs.ko

3. Create a directory to use when mounting.
   # mkdir /mnt/f2fs

4. Format the block device, and then mount as f2fs.
   # mkfs.f2fs -l label /dev/block_device
   # mount -t f2fs /dev/block_device /mnt/f2fs

mkfs.f2fs
---------
The mkfs.f2fs is for the use of formatting a partition as the f2fs filesystem,
which builds a basic on-disk layout.

The options consist of:
-l [label]   : Give a volume label, up to 512 unicode characters.
-a [0 or 1]  : Split start location of each area for heap-based allocation.
               1 is set by default, which performs this.
-o [int]     : Set overprovision ratio in percent over volume size.
               5 is set by default.
-s [int]     : Set the number of segments per section.
               1 is set by default.
-z [int]     : Set the number of sections per zone.
               1 is set by default.
-e [str]     : Set basic extension list. e.g. "mp3,gif,mov"
-t [0 or 1]  : Disable discard command or not.
               1 is set by default, which conducts discard.
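
For example, the following formats an assumed device /dev/sdb1 with a label,
two segments per section, and extra extensions to be treated as cold data:

# mkfs.f2fs -l media -s 2 -e "avi,mkv" /dev/sdb1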

fsck.f2fs
---------
The fsck.f2fs is a tool to check the consistency of an f2fs-formatted
partition, which examines whether the filesystem metadata and user-made data
are cross-referenced correctly or not.
Note that the initial version of the tool does not fix any inconsistency.

The options consist of:
  -d debug level [default:0]

dump.f2fs
---------
The dump.f2fs shows the information of a specific inode and dumps SSA and SIT to
files named dump_ssa and dump_sit.

The dump.f2fs is used to debug on-disk data structures of the f2fs filesystem.
It shows on-disk inode information recognized by a given inode number, and is
able to dump all the SSA and SIT entries into predefined files, ./dump_ssa and
./dump_sit respectively.

The options consist of:
  -d debug level [default:0]
  -i inode no (hex)
  -s [SIT dump segno from #1~#2 (decimal), for all 0~-1]
  -a [SSA dump segno from #1~#2 (decimal), for all 0~-1]

Examples:
# dump.f2fs -i [ino] /dev/sdx
# dump.f2fs -s 0~-1 /dev/sdx (SIT dump)
# dump.f2fs -a 0~-1 /dev/sdx (SSA dump)

================================================================================
DESIGN
================================================================================

On-disk Layout
--------------

F2FS divides the whole volume into a number of segments, each of which is fixed
at 2MB in size. A section is composed of consecutive segments, and a zone
consists of a set of sections. By default, section and zone sizes are set to one
segment size identically, but users can easily modify the sizes by mkfs.

F2FS splits the entire volume into six areas, and all the areas except the
superblock consist of multiple segments as described below.

                                            align with the zone size <-|
                 |-> align with the segment size
     _________________________________________________________________________
    |            |            |   Segment   |    Node     |   Segment  |      |
    | Superblock | Checkpoint |    Info.    |   Address   |   Summary  | Main |
    |    (SB)    |   (CP)     | Table (SIT) | Table (NAT) | Area (SSA) |      |
    |____________|_____2______|______N______|______N______|______N_____|__N___|
                                                                       .      .
                                                             .                .
                                                 .                            .
                                    ._________________________________________.
                                    |_Segment_|_..._|_Segment_|_..._|_Segment_|
                                    .           .
                                    ._________._________
                                    |_section_|__...__|_
                                    .            .
                                    .________.
                                    |__zone__|

- Superblock (SB)
 : It is located at the beginning of the partition, and there exist two copies
   to avoid file system crash. It contains basic partition information and some
   default parameters of f2fs.

- Checkpoint (CP)
 : It contains file system information, bitmaps for valid NAT/SIT sets, orphan
   inode lists, and summary entries of current active segments.

- Segment Information Table (SIT)
 : It contains segment information such as the valid block count and bitmap for
   the validity of all the blocks.

- Node Address Table (NAT)
 : It is composed of a block address table for all the node blocks stored in
   the Main area.

- Segment Summary Area (SSA)
 : It contains summary entries which contain the owner information of all the
   data and node blocks stored in the Main area.

- Main Area
 : It contains file and directory data including their indices.

In order to avoid misalignment between file system and flash-based storage, F2FS
aligns the start block address of CP with the segment size. Also, it aligns the
start block address of the Main area with the zone size by reserving some
segments in the SSA area.

Reference the following survey for additional technical details.
https://wiki.linaro.org/WorkingGroups/Kernel/Projects/FlashCardSurvey

File System Metadata Structure
------------------------------

F2FS adopts the checkpointing scheme to maintain file system consistency. At
mount time, F2FS first tries to find the last valid checkpoint data by scanning
the CP area. In order to reduce the scanning time, F2FS uses only two copies of
the CP. One of them always indicates the last valid data, which is called the
shadow copy mechanism. In addition to the CP, the NAT and SIT also adopt the
shadow copy mechanism.
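
The selection between the two CP copies can be sketched as below. This is an
illustrative model only; the fields and checks (a version counter and a single
validity flag standing in for the real CRC/footer verification) are simplified,
and the actual on-disk format is defined in include/linux/f2fs_fs.h.

  #include <stdint.h>
  #include <stdio.h>

  struct cp_pack {
          uint64_t version;   /* incremented on every checkpoint */
          int      valid;     /* stand-in for the real integrity checks */
  };

  static const struct cp_pack *pick_cp(const struct cp_pack *a,
                                       const struct cp_pack *b)
  {
          if (!a->valid) return b;
          if (!b->valid) return a;
          return a->version > b->version ? a : b;   /* newest valid pack */
  }

  int main(void)
  {
          struct cp_pack cp0 = { .version = 41, .valid = 1 };
          struct cp_pack cp1 = { .version = 42, .valid = 0 }; /* torn write */
          printf("mount uses CP version %llu\n",
                 (unsigned long long)pick_cp(&cp0, &cp1)->version);
          return 0;
  }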

For file system consistency, each CP points to which NAT and SIT copies are
valid, as shown below.

  +--------+----------+---------+
  |   CP   |    SIT   |   NAT   |
  +--------+----------+---------+
  .         .          .          .
  .            .              .              .
  .               .                 .                 .
  +-------+-------+--------+--------+--------+--------+
  | CP #0 | CP #1 | SIT #0 | SIT #1 | NAT #0 | NAT #1 |
  +-------+-------+--------+--------+--------+--------+
  |             ^                      ^
  |             |                      |
  `----------------------------------------'

Index Structure
---------------

The key data structure to manage the data locations is a "node". Similar to
traditional file structures, F2FS has three types of node: inode, direct node,
indirect node. F2FS assigns 4KB to an inode block which contains 923 data block
indices, two direct node pointers, two indirect node pointers, and one double
indirect node pointer as described below. One direct node block contains 1018
data block indices, and one indirect node block also contains 1018 node block
indices. Thus, one inode block (i.e., a file) covers:

  4KB * (923 + 2 * 1018 + 2 * 1018 * 1018 + 1018 * 1018 * 1018) := 3.94TB.

   Inode block (4KB)
     |- data (923)
     |- direct node (2)
     |          `- data (1018)
     |- indirect node (2)
     |            `- direct node (1018)
     |                       `- data (1018)
     `- double indirect node (1)
                         `- indirect node (1018)
                                      `- direct node (1018)
                                                 `- data (1018)
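
The figure above can be checked with a few lines of arithmetic; the snippet
below just reproduces the block counts from the tree and multiplies by the 4KB
block size.

  #include <stdio.h>

  int main(void)
  {
          unsigned long long blocks = 923ULL            /* in-inode pointers  */
                  + 2 * 1018ULL                         /* via direct nodes   */
                  + 2 * 1018ULL * 1018                  /* via indirect nodes */
                  + 1018ULL * 1018 * 1018;              /* via double indirect*/
          printf("%llu blocks = %.2f TB\n",
                 blocks, blocks * 4096.0 / (1ULL << 40));
          return 0;
  }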

Note that all the node blocks are mapped by the NAT, which means the location of
each node is translated by the NAT table. In consideration of the wandering
tree problem, F2FS is able to cut off the propagation of node updates caused by
leaf data writes.

Directory Structure
-------------------

A directory entry occupies 11 bytes, which consists of the following attributes.

- hash    hash value of the file name
- ino     inode number
- len     the length of the file name
- type    file type such as directory, symlink, etc

A dentry block consists of 214 dentry slots and file names. Therein a bitmap is
used to represent whether each dentry is valid or not. A dentry block occupies
4KB with the following composition.

  Dentry Block (4KB) = bitmap (27 bytes) + reserved (3 bytes) +
                       dentries (11 * 214 bytes) + file names (8 * 214 bytes)

                     [Bucket]
             +--------------------------------+
             |dentry block 1 | dentry block 2 |
             +--------------------------------+
             .               .
       .                             .
  .       [Dentry Block Structure: 4KB]       .
  +--------+----------+----------+------------+
  | bitmap | reserved | dentries | file names |
  +--------+----------+----------+------------+
  [Dentry Block: 4KB] .   .
                 .               .
            .                          .
            +------+------+-----+------+
            | hash | ino  | len | type |
            +------+------+-----+------+
            [Dentry Structure: 11 bytes]
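
The 11-byte dentry above maps naturally onto a packed C struct. The sketch
below uses field widths implied by the figure (4-byte hash and inode number,
2-byte length, 1-byte type); the authoritative on-disk definition lives in
include/linux/f2fs_fs.h.

  #include <stdint.h>

  struct dentry_sketch {
          uint32_t hash;    /* hash value of the file name */
          uint32_t ino;     /* inode number                */
          uint16_t len;     /* length of the file name     */
          uint8_t  type;    /* dir, symlink, regular, ...  */
  } __attribute__((packed));

  /* 4 + 4 + 2 + 1 = 11 bytes, and the block composition adds up to 4KB:
   * 27 (bitmap) + 3 (reserved) + 11 * 214 + 8 * 214 = 4096. */
  _Static_assert(sizeof(struct dentry_sketch) == 11, "dentry is 11 bytes");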

F2FS implements multi-level hash tables for the directory structure. Each level
has a hash table with a dedicated number of hash buckets as shown below. Note
that "A(2B)" means a bucket includes 2 data blocks.

----------------------
A : bucket
B : block
N : MAX_DIR_HASH_DEPTH
----------------------

level #0   | A(2B)
           |
level #1   | A(2B) - A(2B)
           |
level #2   | A(2B) - A(2B) - A(2B) - A(2B)
     .     |   .       .       .       .
level #N/2 | A(2B) - A(2B) - A(2B) - A(2B) - A(2B) - ... - A(2B)
     .     |   .       .       .       .
level #N   | A(4B) - A(4B) - A(4B) - A(4B) - A(4B) - ... - A(4B)

The number of blocks and buckets are determined by,

                            ,- 2, if n < MAX_DIR_HASH_DEPTH / 2,
  # of blocks in level #n = |
                            `- 4, otherwise

                             ,- 2^(n + dir_level),
                             |     if n + dir_level < MAX_DIR_HASH_DEPTH / 2,
  # of buckets in level #n = |
                             `- 2^((MAX_DIR_HASH_DEPTH / 2) - 1),
                                   otherwise

When F2FS finds a file name in a directory, first, a hash value of the file
name is calculated. Then, F2FS scans the hash table in level #0 to find the
dentry consisting of the file name and its inode number. If not found, F2FS
scans the next hash table in level #1. In this way, F2FS scans hash tables in
each level incrementally from 1 to N. In each level F2FS needs to scan only
one bucket determined by the following equation, which shows O(log(# of files))
complexity.

  bucket number to scan in level #n = (hash value) % (# of buckets in level #n)
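
A direct transcription of the two formulas and the bucket equation is shown
below; MAX_DIR_HASH_DEPTH is given an assumed value here purely for
illustration, and the helper names are not the kernel's.

  #include <stdio.h>

  #define MAX_DIR_HASH_DEPTH 63          /* assumed value for the sketch */

  static unsigned int buckets_in_level(unsigned int n, unsigned int dir_level)
  {
          if (n + dir_level < MAX_DIR_HASH_DEPTH / 2)
                  return 1U << (n + dir_level);
          return 1U << (MAX_DIR_HASH_DEPTH / 2 - 1);
  }

  static unsigned int blocks_in_bucket(unsigned int n)
  {
          return n < MAX_DIR_HASH_DEPTH / 2 ? 2 : 4;
  }

  int main(void)
  {
          unsigned int hash = 0xabcd, n = 2, dir_level = 0;
          printf("level %u: %u blocks per bucket, scan bucket %u of %u\n",
                 n, blocks_in_bucket(n),
                 hash % buckets_in_level(n, dir_level),
                 buckets_in_level(n, dir_level));
          return 0;
  }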

In the case of file creation, F2FS finds empty consecutive slots that cover the
file name. F2FS searches the empty slots in the hash tables of whole levels from
1 to N in the same way as the lookup operation.

The following figure shows an example of two cases holding children.

       --------------> Dir <--------------
       |                                 |
    child                             child

    child - child                     [hole] - child

    child - child - child             [hole] - [hole] - child

   Case 1:                           Case 2:
   Number of children = 6,           Number of children = 3,
   File size = 7                     File size = 7

Default Block Allocation
------------------------

At runtime, F2FS manages six active logs inside the "Main" area: Hot/Warm/Cold
node and Hot/Warm/Cold data.

- Hot node  contains direct node blocks of directories.
- Warm node contains direct node blocks except hot node blocks.
- Cold node contains indirect node blocks.
- Hot data  contains dentry blocks.
- Warm data contains data blocks except hot and cold data blocks.
- Cold data contains multimedia data or migrated data blocks.
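
Conceptually the six logs form a small temperature-by-type matrix; a sketch in
C (the enumerator names here are illustrative and need not match the kernel's):

  enum active_log {
          HOT_NODE,    /* direct node blocks of directories  */
          WARM_NODE,   /* other direct node blocks           */
          COLD_NODE,   /* indirect node blocks               */
          HOT_DATA,    /* dentry blocks                      */
          WARM_DATA,   /* other data blocks                  */
          COLD_DATA,   /* multimedia or migrated data blocks */
  };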

LFS has two schemes for free space management: threaded log and
copy-and-compaction. The copy-and-compaction scheme, which is known as cleaning,
is well-suited for devices showing very good sequential write performance, since
free segments are served all the time for writing new data. However, it suffers
from cleaning overhead under high utilization. Contrarily, the threaded log
scheme suffers from random writes, but no cleaning process is needed. F2FS
adopts a hybrid scheme where the copy-and-compaction scheme is adopted by
default, but the policy is dynamically changed to the threaded log scheme
according to the file system status.

In order to align F2FS with underlying flash-based storage, F2FS allocates a
segment in a unit of section. F2FS expects that the section size would be the
same as the unit size of garbage collection in FTL. Furthermore, with respect
to the mapping granularity in FTL, F2FS allocates each section of the active
logs from different zones as much as possible, since FTL can write the data in
the active logs into one allocation unit according to its mapping granularity.

Cleaning process
----------------

F2FS does cleaning both on demand and in the background. On-demand cleaning is
triggered when there are not enough free segments to serve VFS calls. The
background cleaner is operated by a kernel thread, and triggers the cleaning job
when the system is idle.

F2FS supports two victim selection policies: greedy and cost-benefit algorithms.
In the greedy algorithm, F2FS selects a victim segment having the smallest number
of valid blocks. In the cost-benefit algorithm, F2FS selects a victim segment
according to the segment age and the number of valid blocks in order to address
the log block thrashing problem present in the greedy algorithm. F2FS adopts the
greedy algorithm for the on-demand cleaner, while the background cleaner adopts
the cost-benefit one.
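
The two policies can be contrasted with a small self-contained example. The
cost-benefit score below follows the classic LFS formulation, age * (1 - u) /
(1 + u) with utilization u; the kernel's exact scaling may differ, so treat
this as a sketch of the idea rather than the implementation.

  #include <stdio.h>

  struct seg { unsigned int valid, total, age; };

  static double greedy_score(const struct seg *s)
  {
          return s->valid;                         /* smaller is better */
  }

  static double cost_benefit_score(const struct seg *s)
  {
          double u = (double)s->valid / s->total;
          return s->age * (1.0 - u) / (1.0 + u);   /* larger is better */
  }

  int main(void)
  {
          struct seg young = { .valid = 100, .total = 512, .age = 10 };
          struct seg old   = { .valid = 150, .total = 512, .age = 1000 };

          printf("greedy picks:       %s\n",
                 greedy_score(&young) < greedy_score(&old) ? "young" : "old");
          printf("cost-benefit picks: %s\n",
                 cost_benefit_score(&young) > cost_benefit_score(&old)
                 ? "young" : "old");
          return 0;
  }

Here greedy favors the recently written segment with fewer valid blocks, while
cost-benefit prefers the aged segment that is unlikely to invalidate further,
which is exactly the thrashing-avoidance behavior described above.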

In order to identify whether the data in the victim segment are valid or not,
F2FS manages a bitmap. Each bit represents the validity of a block, and the
bitmap is composed of a bit stream covering the whole blocks in the Main area.
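
A validity test against such a bitmap is a one-line bit operation; the sketch
below is a toy userspace version, not the kernel's SIT accessors.

  #include <stdio.h>

  static unsigned char sit_bitmap[512 / 8];    /* toy area of 512 blocks */

  static int is_valid_blk(unsigned int blk)
  {
          return (sit_bitmap[blk / 8] >> (blk % 8)) & 1;
  }

  int main(void)
  {
          sit_bitmap[42 / 8] |= 1 << (42 % 8); /* mark block 42 valid */
          printf("block 42 is %s\n", is_valid_blk(42) ? "valid" : "obsolete");
          return 0;
  }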