author    Greg Thelen <gthelen@google.com>
          Wed, 24 Aug 2011 23:47:43 +0000 (09:47 +1000)
committer Stephen Rothwell <sfr@canb.auug.org.au>
          Tue, 27 Sep 2011 07:14:03 +0000 (17:14 +1000)
commit    9a1d12eee5f6108cc42d06f5b8cfea46440925d5
tree      7d66a62fb2bc5764d5586b5eb890e1a80a8b0c7d
parent    cd1bcff68430d208ca74a76a3a99c6bc4ff86d91
Both mem_cgroup_charge_statistics() and mem_cgroup_move_account() were
unnecessarily disabling preemption when adjusting per-cpu counters:
    preempt_disable()
    __this_cpu_xxx()
    __this_cpu_yyy()
    preempt_enable()

With this change preemption is no longer disabled, so a CPU switch can
occur within these routines.  This is harmless because the per-cpu
counters are summed across all CPUs when stats are reported.  Now both
mem_cgroup_charge_statistics() and mem_cgroup_move_account() look like:
    this_cpu_xxx()
    this_cpu_yyy()

akpm: this is an optimisation for x86 and a deoptimisation for non-x86.
The non-x86 situation will be fixed as architectures implement their
atomic this_cpu_foo() operations.

Signed-off-by: Greg Thelen <gthelen@google.com>
Reported-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Johannes Weiner <jweiner@redhat.com>
Cc: Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
Cc: Balbir Singh <bsingharora@gmail.com>
Cc: Daisuke Nishimura <nishimura@mxp.nes.nec.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/memcontrol.c