From: Rik van Riel
Date: Thu, 12 Apr 2012 22:52:00 +0000 (+1000)
Subject: mm: add extra free kbytes tunable
X-Git-Tag: next-20120417~2^2~84
X-Git-Url: https://git.karo-electronics.de/?a=commitdiff_plain;h=9eb061d9ed0411e292a0d037077ef038e6ffae7d;p=karo-tx-linux.git

mm: add extra free kbytes tunable

Add a userspace visible knob to tell the VM to keep an extra amount of
memory free, by increasing the gap between each zone's min and low
watermarks.

This is useful for realtime applications that call system calls and have
a bound on the number of allocations that happen in any short time
period.  In this application, extra_free_kbytes would be left at an
amount equal to or larger than the maximum number of allocations that
happen in any burst.

It may also be useful to reduce the memory use of virtual machines
(temporarily?), in a way that does not cause memory fragmentation like
ballooning does.

Testing results from Satoru Moriya:

: I ran some sample workloads and measured memory allocation latency
: (latency of __alloc_page_nodemask()).
: The test setup was as follows:
:
: - CPU: 1 socket, 4 cores
: - Memory: 4GB
:
: - Background load:
:   $ dd if=/dev/zero of=/tmp/tmp1
:   $ dd if=/dev/zero of=/tmp/tmp2
:   $ dd if=/dev/zero of=/tmp/tmp3
:
: - Main load:
:   $ mapped-file-stream 1 $((1024 * 1024 * 640)) --(*)
:
: (*) This is made by Johannes Weiner
:     https://lkml.org/lkml/2010/8/30/226
:
: It allocates/accesses 640MByte of memory at a burst.
:
: The result is the following:
:
:                                 |         |  extra   |
:                                 | default |  kbytes  |
: ---------------------------------------------------------------
: min_free_kbytes                 |    8113 |     8113 |
: extra_free_kbytes               |       0 | 640*1024 | (KB)
: ---------------------------------------------------------------
: worst latency                   | 517.762 |   20.775 | (usec)
: ---------------------------------------------------------------
: vmstat result                   |         |          |
: nr_vmscan_write                 |       0 |        0 |
: pgsteal_dma                     |       0 |        0 |
: pgsteal_dma32                   |  143667 |   144882 |
: pgsteal_normal                  |   31486 |    27001 |
: pgsteal_movable                 |       0 |        0 |
: pgscan_kswapd_dma               |       0 |        0 |
: pgscan_kswapd_dma32             |  138617 |   156351 |
: pgscan_kswapd_normal            |   30593 |    27955 |
: pgscan_kswapd_movable           |       0 |        0 |
: pgscan_direct_dma               |       0 |        0 |
: pgscan_direct_dma32             |    5050 |        0 |
: pgscan_direct_normal            |     896 |        0 |
: pgscan_direct_movable           |       0 |        0 |
: kswapd_steal                    |  169207 |   171883 |
: kswapd_inodesteal               |       0 |        0 |
: kswapd_low_wmark_hit_quickly    |      43 |       45 |
: kswapd_high_wmark_hit_quickly   |       1 |        0 |
: allocstall                      |      32 |        0 |
:
: As you can see, in the default case there were 32 direct reclaims
: (allocstall) and the worst latency was 517.762 usecs.  This value may
: be larger if a process were to sleep or issue I/O in the direct
: reclaim path.  OTOH, in the other case where I added extra free
: kbytes, there were no direct reclaims and the worst latency was
: 20.775 usecs.
:
: In this test case, we can avoid direct reclaim and keep latency low.

Signed-off-by: Rik van Riel
Acked-by: Johannes Weiner
Tested-by: Satoru Moriya
Signed-off-by: Andrew Morton
---

diff --git a/Documentation/sysctl/vm.txt b/Documentation/sysctl/vm.txt
index 96f0ee825bed..9c11d97e075a 100644
--- a/Documentation/sysctl/vm.txt
+++ b/Documentation/sysctl/vm.txt
@@ -28,6 +28,7 @@ Currently, these files are in /proc/sys/vm:
 - dirty_writeback_centisecs
 - drop_caches
 - extfrag_threshold
+- extra_free_kbytes
 - hugepages_treat_as_movable
 - hugetlb_shm_group
 - laptop_mode
@@ -168,6 +169,21 @@ fragmentation index is <= extfrag_threshold.
 The default value is 500.

 ==============================================================
+extra_free_kbytes
+
+This parameter tells the VM to keep extra free memory between the threshold
+where background reclaim (kswapd) kicks in, and the threshold where direct
+reclaim (by allocating processes) kicks in.
+
+This is useful for workloads that require low latency memory allocations
+and have a bounded burstiness in memory allocations, for example a
+realtime application that receives and transmits network traffic
+(causing in-kernel memory allocations) with a maximum total message burst
+size of 200MB may need 200MB of extra free memory to avoid direct reclaim
+related latencies.
+
+==============================================================
+
 hugepages_treat_as_movable

 This parameter is only useful when kernelcore= is specified at boot time to
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index ba133ec3c4f5..38e0e70f008d 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -102,6 +102,7 @@ extern char core_pattern[];
 extern unsigned int core_pipe_limit;
 extern int pid_max;
 extern int min_free_kbytes;
+extern int extra_free_kbytes;
 extern int pid_max_min, pid_max_max;
 extern int sysctl_drop_caches;
 extern int percpu_pagelist_fraction;
@@ -1198,6 +1199,14 @@ static struct ctl_table vm_table[] = {
 		.proc_handler	= min_free_kbytes_sysctl_handler,
 		.extra1		= &zero,
 	},
+	{
+		.procname	= "extra_free_kbytes",
+		.data		= &extra_free_kbytes,
+		.maxlen		= sizeof(extra_free_kbytes),
+		.mode		= 0644,
+		.proc_handler	= min_free_kbytes_sysctl_handler,
+		.extra1		= &zero,
+	},
 	{
 		.procname	= "percpu_pagelist_fraction",
 		.data		= &percpu_pagelist_fraction,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a712fb9e04ce..decfbf0f71c2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -191,8 +191,20 @@ static char * const zone_names[MAX_NR_ZONES] = {
 	"Movable",
 };

+/*
+ * Try to keep at least this much lowmem free.  Do not allow normal
+ * allocations below this point, only high priority ones. Automatically
+ * tuned according to the amount of memory in the system.
+ */
 int min_free_kbytes = 1024;

+/*
+ * Extra memory for the system to try freeing. Used to temporarily
+ * free memory, to make space for new workloads. Anyone can allocate
+ * down to the min watermarks controlled by min_free_kbytes above.
+ */
+int extra_free_kbytes = 0;
+
 static unsigned long __meminitdata nr_kernel_pages;
 static unsigned long __meminitdata nr_all_pages;
 static unsigned long __meminitdata dma_reserve;
@@ -4986,6 +4998,7 @@ static void setup_per_zone_lowmem_reserve(void)
 void setup_per_zone_wmarks(void)
 {
 	unsigned long pages_min = min_free_kbytes >> (PAGE_SHIFT - 10);
+	unsigned long pages_low = extra_free_kbytes >> (PAGE_SHIFT - 10);
 	unsigned long lowmem_pages = 0;
 	struct zone *zone;
 	unsigned long flags;
@@ -4997,11 +5010,14 @@ void setup_per_zone_wmarks(void)
 	}

 	for_each_zone(zone) {
-		u64 tmp;
+		u64 min, low;

 		spin_lock_irqsave(&zone->lock, flags);
-		tmp = (u64)pages_min * zone->present_pages;
-		do_div(tmp, lowmem_pages);
+		min = (u64)pages_min * zone->present_pages;
+		do_div(min, lowmem_pages);
+		low = (u64)pages_low * zone->present_pages;
+		do_div(low, vm_total_pages);
+
 		if (is_highmem(zone)) {
 			/*
 			 * __GFP_HIGH and PF_MEMALLOC allocations usually don't
@@ -5025,11 +5041,13 @@ void setup_per_zone_wmarks(void)
 			 * If it's a lowmem zone, reserve a number of pages
 			 * proportionate to the zone's size.
 			 */
-			zone->watermark[WMARK_MIN] = tmp;
+			zone->watermark[WMARK_MIN] = min;
 		}

-		zone->watermark[WMARK_LOW]  = min_wmark_pages(zone) + (tmp >> 2);
-		zone->watermark[WMARK_HIGH] = min_wmark_pages(zone) + (tmp >> 1);
+		zone->watermark[WMARK_LOW]  = min_wmark_pages(zone) +
+					low + (min >> 2);
+		zone->watermark[WMARK_HIGH] = min_wmark_pages(zone) +
+					low + (min >> 1);
 		setup_zone_migrate_reserve(zone);
 		spin_unlock_irqrestore(&zone->lock, flags);
 	}
@@ -5127,7 +5145,7 @@ module_init(init_per_zone_wmark_min)
 /*
  * min_free_kbytes_sysctl_handler - just a wrapper around proc_dointvec() so
  * that we can call two helper functions whenever min_free_kbytes
- * changes.
+ * or extra_free_kbytes changes.
  */
 int min_free_kbytes_sysctl_handler(ctl_table *table, int write,
 	void __user *buffer, size_t *length, loff_t *ppos)