From: Johannes Weiner
Date: Thu, 8 Dec 2011 04:42:47 +0000 (+1100)
Subject: mm: page_cgroup: check page_cgroup arrays in lookup_page_cgroup() only when necessary
X-Git-Tag: next-20111213~1^2~37
X-Git-Url: https://git.karo-electronics.de/?a=commitdiff_plain;h=b82c9efa8b28988ce915f74555ff2a8a1560aa24;p=karo-tx-linux.git

mm: page_cgroup: check page_cgroup arrays in lookup_page_cgroup() only when necessary

lookup_page_cgroup() is usually used only against pages that are used in
userspace.  The exception is the CONFIG_DEBUG_VM-only memcg check from the
page allocator: it can run on pages without page_cgroup descriptors
allocated when the pages are fed into the page allocator for the first
time during boot or memory hotplug.

Include the array check only when CONFIG_DEBUG_VM is set and save the
unnecessary check in production kernels.

Signed-off-by: Johannes Weiner
Acked-by: KAMEZAWA Hiroyuki
Acked-by: Michal Hocko
Cc: Balbir Singh
Cc: David Rientjes
Cc: Hugh Dickins
Signed-off-by: Andrew Morton
---

diff --git a/mm/page_cgroup.c b/mm/page_cgroup.c
index f0559e049e00..e910524e5a08 100644
--- a/mm/page_cgroup.c
+++ b/mm/page_cgroup.c
@@ -28,9 +28,16 @@ struct page_cgroup *lookup_page_cgroup(struct page *page)
 	struct page_cgroup *base;
 
 	base = NODE_DATA(page_to_nid(page))->node_page_cgroup;
+#ifdef CONFIG_DEBUG_VM
+	/*
+	 * The sanity checks the page allocator does upon freeing a
+	 * page can reach here before the page_cgroup arrays are
+	 * allocated when feeding a range of pages to the allocator
+	 * for the first time during bootup or memory hotplug.
+	 */
 	if (unlikely(!base))
 		return NULL;
-
+#endif
 	offset = pfn - NODE_DATA(page_to_nid(page))->node_start_pfn;
 	return base + offset;
 }
@@ -85,9 +92,16 @@ struct page_cgroup *lookup_page_cgroup(struct page *page)
 {
 	unsigned long pfn = page_to_pfn(page);
 	struct mem_section *section = __pfn_to_section(pfn);
-
+#ifdef CONFIG_DEBUG_VM
+	/*
+	 * The sanity checks the page allocator does upon freeing a
+	 * page can reach here before the page_cgroup arrays are
+	 * allocated when feeding a range of pages to the allocator
+	 * for the first time during bootup or memory hotplug.
+	 */
 	if (!section->page_cgroup)
 		return NULL;
+#endif
 	return section->page_cgroup + pfn;
 }
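
For context, the CONFIG_DEBUG_VM-only memcg check mentioned in the changelog is the kind of sanity check the page allocator runs when freeing a page. A minimal sketch of such a caller is below; it is illustrative only, not part of the patch, and the function name page_cgroup_sanity_check() is hypothetical. The point is that a debug-only caller has to tolerate a NULL return from lookup_page_cgroup(), since the check can run before the page_cgroup arrays are allocated during boot or memory hotplug.

#ifdef CONFIG_DEBUG_VM
/*
 * Illustrative sketch, not part of the patch: a debug-only check
 * must handle lookup_page_cgroup() returning NULL, because a page
 * can be fed into the allocator before its page_cgroup array
 * exists (early boot, memory hotplug).
 */
static bool page_cgroup_sanity_check(struct page *page)
{
	struct page_cgroup *pc = lookup_page_cgroup(page);

	/* Arrays not allocated yet: nothing to check. */
	if (!pc)
		return true;

	/* A page being freed must not still be charged to a memcg. */
	return !PageCgroupUsed(pc);
}
#endif

In production kernels, by contrast, every page reaching lookup_page_cgroup() is expected to have its descriptor array in place, which is why the patch compiles the NULL check out when CONFIG_DEBUG_VM is not set.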