git.karo-electronics.de Git - karo-tx-linux.git/log
Merge branch 'linus'
Ingo Molnar [Sat, 12 Oct 2013 17:19:12 +0000 (19:19 +0200)]
Merge branch 'linus'

Merge branch 'x86/uv'
Ingo Molnar [Sat, 12 Oct 2013 17:19:10 +0000 (19:19 +0200)]
Merge branch 'x86/uv'

manual merge of x86/urgent
Ingo Molnar [Sat, 12 Oct 2013 17:19:09 +0000 (19:19 +0200)]
manual merge of x86/urgent

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge branch 'x86/uaccess'
Ingo Molnar [Sat, 12 Oct 2013 17:19:05 +0000 (19:19 +0200)]
Merge branch 'x86/uaccess'

Merge branch 'x86/reboot'
Ingo Molnar [Sat, 12 Oct 2013 17:19:03 +0000 (19:19 +0200)]
Merge branch 'x86/reboot'

Merge branch 'x86/platform'
Ingo Molnar [Sat, 12 Oct 2013 17:19:01 +0000 (19:19 +0200)]
Merge branch 'x86/platform'

Merge branch 'x86/mm'
Ingo Molnar [Sat, 12 Oct 2013 17:18:59 +0000 (19:18 +0200)]
Merge branch 'x86/mm'

Merge branch 'x86/iommu'
Ingo Molnar [Sat, 12 Oct 2013 17:18:57 +0000 (19:18 +0200)]
Merge branch 'x86/iommu'

Merge branch 'x86/efi'
Ingo Molnar [Sat, 12 Oct 2013 17:18:54 +0000 (19:18 +0200)]
Merge branch 'x86/efi'

Merge branch 'x86/cleanups'
Ingo Molnar [Sat, 12 Oct 2013 17:18:52 +0000 (19:18 +0200)]
Merge branch 'x86/cleanups'

Merge branch 'x86/build'
Ingo Molnar [Sat, 12 Oct 2013 17:18:50 +0000 (19:18 +0200)]
Merge branch 'x86/build'

Merge branch 'sched/core'
Ingo Molnar [Sat, 12 Oct 2013 17:18:48 +0000 (19:18 +0200)]
Merge branch 'sched/core'

sched: Remove bogus parameter in structured comment
Ramkumar Ramachandra [Thu, 10 Oct 2013 10:20:33 +0000 (15:50 +0530)]
sched: Remove bogus parameter in structured comment

The balance parameter was removed by 23f0d20 ("sched: Factor out
code to should_we_balance()", 2013-08-06).

Signed-off-by: Ramkumar Ramachandra <artagnon@gmail.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381400433-2030-1-git-send-email-artagnon@gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus
Linus Torvalds [Fri, 11 Oct 2013 18:24:58 +0000 (11:24 -0700)]
Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus

Pull MIPS fix from Ralf Baechle:
 "Just one fix.  The stack protector was loading the value of the canary
  instead of its address"

* 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus:
  MIPS: stack protector: Fix per-task canary switch

Merge branch 'drm-fixes' of git://people.freedesktop.org/~airlied/linux
Linus Torvalds [Fri, 11 Oct 2013 17:41:21 +0000 (10:41 -0700)]
Merge branch 'drm-fixes' of git://people.freedesktop.org/~airlied/linux

Pull drm fixes from Dave Airlie:
 "All over the map..

   - nouveau:
     disable MSI, needs more work, will try again next merge window
   - radeon:
      audio + uvd regression fixes, dpm fixes, reset fixes
   - i915:
     the dpms fix might fix your haswell

  And one pain-in-the-ass revert: VGA arbitration, when implemented 4-5
  years ago, really hoped that GPUs could remove themselves from
  arbitration completely once they had a kernel driver.

  It seems Intel hw designers decided that was too nice a facility to
  allow us to have, so they removed it when they went on-die (since
  Ironlake at least).  Alex Williamson then added support for VGA
  arbitration for newer GPUs; however, this now exposes itself to
  userspace as requiring arbitration of GPU VGA regions, and the X
  server gets involved and disables things that it can't handle when VGA
  access is possibly required around every operation.

  So in order to not break userspace we just reverted things back to the
  old known-broken status, so maybe we can try and design our way out.

  Ville also had a patch to use stop_machine for the two times Intel
  needs to access VGA space; that might be acceptable with some rework,
  but for now Daniel and I agreed to just go back"

* 'drm-fixes' of git://people.freedesktop.org/~airlied/linux: (23 commits)
  Revert "i915: Update VGA arbiter support for newer devices"
  Revert "drm/i915: Delay disabling of VGA memory until vgacon->fbcon handoff is done"
  drm/radeon: re-enable sw ACR support on pre-DCE4
  drm/radeon/dpm: disable bapm on TN asics
  drm/radeon: improve soft reset on CIK
  drm/radeon: improve soft reset on SI
  drm/radeon/dpm: off by one in si_set_mc_special_registers()
  drm/radeon/dpm/btc: off by one in btc_set_mc_special_registers()
  drm/radeon: forever loop on error in radeon_do_test_moves()
  drm/radeon: fix hw contexts for SUMO2 asics
  drm/radeon: fix typo in CP DMA register headers
  drm/radeon/dpm: disable multiple UVD states
  drm/radeon: use hw generated CTS/N values for audio
  drm/radeon: fix N/CTS clock matching for audio
  drm/radeon: use 64-bit math to calculate CTS values for audio (v2)
  drm/edid: catch kmalloc failure in drm_edid_to_speaker_allocation
  Revert "drm/fb-helper: don't sleep for screen unblank when an oops is in progress"
  drm/gma500: fix things after get/put page helpers
  drm/nouveau/mc: disable msi support by default, it's busted in tons of places
  drm/i915: Only apply DPMS to the encoder if enabled
  ...

Merge branch 'x86/boot'
Ingo Molnar [Fri, 11 Oct 2013 07:31:05 +0000 (09:31 +0200)]
Merge branch 'x86/boot'

Merge branch 'timers/core'
Ingo Molnar [Fri, 11 Oct 2013 07:31:03 +0000 (09:31 +0200)]
Merge branch 'timers/core'

Merge branch 'sched/core'
Ingo Molnar [Fri, 11 Oct 2013 07:31:00 +0000 (09:31 +0200)]
Merge branch 'sched/core'

Merge branch 'perf/core'
Ingo Molnar [Fri, 11 Oct 2013 07:30:58 +0000 (09:30 +0200)]
Merge branch 'perf/core'

Merge branch 'irq/core'
Ingo Molnar [Fri, 11 Oct 2013 07:30:56 +0000 (09:30 +0200)]
Merge branch 'irq/core'

Merge branch 'core/urgent'
Ingo Molnar [Fri, 11 Oct 2013 07:30:55 +0000 (09:30 +0200)]
Merge branch 'core/urgent'

Merge branch 'core/locking'
Ingo Molnar [Fri, 11 Oct 2013 07:30:53 +0000 (09:30 +0200)]
Merge branch 'core/locking'

x86: Apply the asm_volatile_goto() compiler quirk
Ingo Molnar [Thu, 10 Oct 2013 08:16:30 +0000 (10:16 +0200)]
x86: Apply the asm_volatile_goto() compiler quirk

Apply the asm_volatile_goto() compiler quirk to the new rmwcc.h
file as well, introduced in:

   c2daa3bed53a sched, x86: Provide a per-cpu preempt_count implementation

Reported-and-tested-by: Fengguang Wu <fengguang.wu@intel.com>
Reported-by: Oleg Nesterov <oleg@redhat.com>
Reported-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Suggested-by: Jakub Jelinek <jakub@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge branch 'core/urgent' into sched/core
Ingo Molnar [Fri, 11 Oct 2013 05:39:37 +0000 (07:39 +0200)]
Merge branch 'core/urgent' into sched/core

Merge in asm goto fix, to be able to apply the asm/rmwcc.h fix.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
compiler/gcc4: Add quirk for 'asm goto' miscompilation bug
Ingo Molnar [Thu, 10 Oct 2013 08:16:30 +0000 (10:16 +0200)]
compiler/gcc4: Add quirk for 'asm goto' miscompilation bug

Fengguang Wu, Oleg Nesterov and Peter Zijlstra tracked down
a kernel crash to a GCC bug: GCC miscompiles certain 'asm goto'
constructs, as outlined here:

  http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58670

Implement a workaround suggested by Jakub Jelinek.
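A sketch of the quirk as it lands in the compiler headers: emit an
empty asm statement right after every 'asm goto' so that affected
GCC 4.x versions do not miscompile the construct:

  #define asm_volatile_goto(x...)	do { asm goto(x); asm (""); } while (0)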

Reported-and-tested-by: Fengguang Wu <fengguang.wu@intel.com>
Reported-by: Oleg Nesterov <oleg@redhat.com>
Reported-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Suggested-by: Jakub Jelinek <jakub@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: <stable@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Revert "i915: Update VGA arbiter support for newer devices"
Dave Airlie [Fri, 11 Oct 2013 05:12:04 +0000 (15:12 +1000)]
Revert "i915: Update VGA arbiter support for newer devices"

This reverts commit 81b5c7bc8de3e6f63419139c2fc91bf81dea8a7d.

Adding drm/i915 into the vga arbiter chain means that X (in a piece of
well-meant paranoia) will do a get/put on the vga decoding around
_every_ accel call down into the ddx, which results in some nice
performance disasters [1]. This really breaks userspace by disabling
DRI for everyone and stops OpenGL from working. This isn't limited
to just the i915 but affects both the integrated and discrete GPUs on
multi-gpu systems; in other words, this causes untold worlds of pain.

Ville tried to come up with a Great Hack to fiddle the required VGA
I/O ops behind everyone's back using stop_machine, but that didn't
really work out [2]. Given that we're fairly late in the -rc stage for
such games let's just revert this all.

One thing we might want to keep is to delay the disabling of the vga
decoding until the fbdev emulation and the fbcon screen is set up. If
we kill vga mem decoding beforehand, fbcon can end up with a white
square in the top-left corner that it tried to save from the vga memory
for a seamless transition. And we have bug reports on older platforms
which seem to match these symptoms.

But again that's something to play around with in -next.

References: [1] http://lists.x.org/archives/xorg-devel/2013-September/037763.html
References: [2] http://www.spinics.net/lists/intel-gfx/msg34062.html
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Revert "drm/i915: Delay disabling of VGA memory until vgacon->fbcon handoff is done"
Dave Airlie [Fri, 11 Oct 2013 05:11:52 +0000 (15:11 +1000)]
Revert "drm/i915: Delay disabling of VGA memory until vgacon->fbcon handoff is done"

This reverts commit 6e1b4fdad5157bb9e88777d525704aba24389bee.

This is part of a revert due to a userspace breakage, better explained in the revert of 1a1a4cbf4906a13c0c377f708df5d94168e7b582.

Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Merge branch 'drm-fixes-3.12' of git://people.freedesktop.org/~agd5f/linux into drm-fixes
Dave Airlie [Fri, 11 Oct 2013 03:07:15 +0000 (13:07 +1000)]
Merge branch 'drm-fixes-3.12' of git://people.freedesktop.org/~agd5f/linux into drm-fixes

Regression fixes for audio and UVD, several hang fixes,
some DPM fixes.

* 'drm-fixes-3.12' of git://people.freedesktop.org/~agd5f/linux:
  drm/radeon: re-enable sw ACR support on pre-DCE4
  drm/radeon/dpm: disable bapm on TN asics
  drm/radeon: improve soft reset on CIK
  drm/radeon: improve soft reset on SI
  drm/radeon/dpm: off by one in si_set_mc_special_registers()
  drm/radeon/dpm/btc: off by one in btc_set_mc_special_registers()
  drm/radeon: forever loop on error in radeon_do_test_moves()
  drm/radeon: fix hw contexts for SUMO2 asics
  drm/radeon: fix typo in CP DMA register headers
  drm/radeon/dpm: disable multiple UVD states
  drm/radeon: use hw generated CTS/N values for audio
  drm/radeon: fix N/CTS clock matching for audio
  drm/radeon: use 64-bit math to calculate CTS values for audio (v2)
  drm/edid: catch kmalloc failure in drm_edid_to_speaker_allocation

bcache: Fix a null ptr deref regression
Kent Overstreet [Fri, 11 Oct 2013 00:31:15 +0000 (17:31 -0700)]
bcache: Fix a null ptr deref regression

Commit c0f04d88e46d ("bcache: Fix flushes in writeback mode") was fixing
a reported data corruption bug, but it seems some last minute
refactoring or rebasing introduced a null pointer deref.

Signed-off-by: Kent Overstreet <kmo@daterainc.com>
Cc: linux-stable <stable@vger.kernel.org> # >= v3.10
Reported-by: Gabriel de Perthuis <g2p.code@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'hwmon-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging
Linus Torvalds [Fri, 11 Oct 2013 01:16:02 +0000 (18:16 -0700)]
Merge tag 'hwmon-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging

Pull hwmon fix from Guenter Roeck:
 "Fix root cause of crash/error seen in applesmc driver"

* tag 'hwmon-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging:
  hwmon: (applesmc) Always read until end of data

Merge branch 'rc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild
Linus Torvalds [Fri, 11 Oct 2013 01:15:23 +0000 (18:15 -0700)]
Merge branch 'rc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild

Pull kbuild fix from Michal Marek:
 "Here is an ARM Makefile fix that you even acked.  After nobody wanted
  to take it, it ended up in the kbuild tree"

* 'rc-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/mmarek/kbuild:
  arm, kbuild: make "make install" not depend on vmlinux

Merge git://www.linux-watchdog.org/linux-watchdog
Linus Torvalds [Thu, 10 Oct 2013 20:57:10 +0000 (13:57 -0700)]
Merge git://www.linux-watchdog.org/linux-watchdog

Pull watchdog fix from Wim Van Sebroeck:
 "Make sure that the hpwdt driver will not load auxiliary iLO devices"

* git://www.linux-watchdog.org/linux-watchdog:
  watchdog: hpwdt: Patch to ignore auxiliary iLO devices

kobject: show debug info on delayed kobject release
Fengguang Wu [Wed, 9 Oct 2013 01:26:21 +0000 (09:26 +0800)]
kobject: show debug info on delayed kobject release

Useful for locating buggy drivers on kernel oops.

It may add dozens of new lines to boot dmesg. DEBUG_KOBJECT_RELEASE is
hopefully only enabled in debug kernels (like maybe the Fedora rawhide
one, or by developers), so being a bit more verbose is likely OK.

Acked-by: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
watchdog: hpwdt: Patch to ignore auxiliary iLO devices
Mingarelli, Thomas [Fri, 9 Aug 2013 16:31:09 +0000 (16:31 +0000)]
watchdog: hpwdt: Patch to ignore auxiliary iLO devices

This patch is to prevent hpwdt from loading on any auxiliary iLO devices defined
after the initial (or main) iLO device. All auxiliary iLO devices will have a
subsystem device ID set to 0x1979 in order for hpwdt to differentiate between
the two types.

Signed-off-by: Thomas Mingarelli <thomas.mingarelli@hp.com>
Tested-by: Lisa Mitchell <lisa.mitchell@hp.com>
Signed-off-by: Wim Van Sebroeck <wim@iguana.be>
Merge tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random
Linus Torvalds [Thu, 10 Oct 2013 19:31:43 +0000 (12:31 -0700)]
Merge tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random

Pull /dev/random changes from Ted Ts'o:
 "These patches are designed to enable improvements to /dev/random for
  non-x86 platforms, in particular MIPS and ARM"

* tag 'random_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/random:
  random: allow architectures to optionally define random_get_entropy()
  random: run random_int_secret_init() after all late_initcalls

Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Linus Torvalds [Thu, 10 Oct 2013 18:33:48 +0000 (11:33 -0700)]
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull kvm fixes from Paolo Bonzini:
 "Fixes for 3.12-rc5: two old PPC bugs and one new (3.12-rc2) x86 bug"

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  kvm: ppc: booke: check range page invalidation progress on page setup
  KVM: PPC: Book3S HV: Fix typo in saving DSCR
  KVM: nVMX: fix shadow on EPT

Merge tag 'spi-v3.12-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi
Linus Torvalds [Thu, 10 Oct 2013 18:33:02 +0000 (11:33 -0700)]
Merge tag 'spi-v3.12-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi

Pull spi fixes from Mark Brown:
 "This is all driver updates, mostly fixes for error handling paths,
  except for the s3c64xx and hspi fixes for trying to use runtime PM
  before it is enabled, and the pxa2xx fix for interactions between power
  management and interrupt handling"

* tag 'spi-v3.12-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi:
  spi: atmel: Fix incorrect error path
  spi/hspi: fixup Runtime PM enable timing
  spi/s3c64xx: Ensure runtime PM is enabled prior to registration
  spi/clps711x: drop clk_put for devm_clk_get in spi_clps711x_probe()
  spi: fix return value check in dspi_probe()
  spi: mpc512x: fix error return code in mpc512x_psc_spi_do_probe()
  spi: clps711x: Don't call kfree() after spi_master_put/spi_unregister_master
  spi/pxa2xx: check status register as well to determine if the device is off

random: allow architectures to optionally define random_get_entropy()
Theodore Ts'o [Sat, 21 Sep 2013 17:58:22 +0000 (13:58 -0400)]
random: allow architectures to optionally define random_get_entropy()

Allow architectures which have a disabled get_cycles() function to
provide a random_get_entropy() function which provides a fine-grained,
rapidly changing counter that can be used by the /dev/random driver.

For example, an architecture might have a rapidly changing register
used to control random TLB cache eviction, or DRAM refresh that
doesn't meet the requirements of get_cycles(), but which is good
enough for the needs of the random driver.
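The generic fallback simply maps to get_cycles(); an architecture opts
in by defining the macro itself before the generic definition is seen,
roughly:

  #ifndef random_get_entropy
  #define random_get_entropy()	get_cycles()
  #endif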

Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Cc: stable@vger.kernel.org
Merge tag 'for-linus-20131008' of git://git.infradead.org/linux-mtd
Linus Torvalds [Thu, 10 Oct 2013 18:30:33 +0000 (11:30 -0700)]
Merge tag 'for-linus-20131008' of git://git.infradead.org/linux-mtd

Pull MTD fixes from Brian Norris:
 - fix a small memory leak in some new ONFI code
 - account for additional odd variations of Micron SPI flash

Acked by David Woodhouse.

* tag 'for-linus-20131008' of git://git.infradead.org/linux-mtd:
  mtd: m25p80: Fix 4 byte addressing mode for Micron devices.
  mtd: nand: fix memory leak in ONFI extended parameter page

drm/radeon: re-enable sw ACR support on pre-DCE4
Alex Deucher [Thu, 10 Oct 2013 15:47:01 +0000 (11:47 -0400)]
drm/radeon: re-enable sw ACR support on pre-DCE4

HW ACR support may have issues on some older chips, so
use SW ACR for now until we've tested further.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
CC: Rafał Miłecki <zajec5@gmail.com>
kvm: ppc: booke: check range page invalidation progress on page setup
Bharat Bhushan [Wed, 7 Aug 2013 10:03:46 +0000 (15:33 +0530)]
kvm: ppc: booke: check range page invalidation progress on page setup

When the MM code is invalidating a range of pages, it calls the KVM
kvm_mmu_notifier_invalidate_range_start() notifier function, which calls
kvm_unmap_hva_range(), which arranges to flush all the TLBs for guest pages.
However, the Linux PTEs for the range being flushed are still valid at
that point.  We are not supposed to establish any new references to pages
in the range until the ...range_end() notifier gets called.
The PPC-specific KVM code doesn't get any explicit notification of that;
instead, we are supposed to use mmu_notifier_retry() to test whether we
are or have been inside a range flush notifier pair while we have been
referencing a page.

This patch calls the mmu_notifier_retry() while mapping the guest
page to ensure we are not referencing a page when in range invalidation.

This call is inside a region locked with kvm->mmu_lock, which is the
same lock that is called by the KVM MMU notifier functions, thus
ensuring that no new notification can proceed while we are in the
locked region.
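
A sketch of the pattern, using the standard KVM idiom (details elided):

  mmu_seq = kvm->mmu_notifier_seq;
  smp_rmb();
  /* ... translate the gfn and take a reference on the page ... */
  spin_lock(&kvm->mmu_lock);
  if (mmu_notifier_retry(kvm, mmu_seq))
          goto out_unlock;        /* a range flush ran; retry the fault */
  /* ... safe to establish the guest mapping here ... */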

Signed-off-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Acked-by: Alexander Graf <agraf@suse.de>
[Backported to 3.12 - Paolo]
Reviewed-by: Bharat Bhushan <bharat.bhushan@freescale.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: PPC: Book3S HV: Fix typo in saving DSCR
Paul Mackerras [Fri, 20 Sep 2013 23:53:28 +0000 (09:53 +1000)]
KVM: PPC: Book3S HV: Fix typo in saving DSCR

This fixes a typo in the code that saves the guest DSCR (Data Stream
Control Register) into the kvm_vcpu_arch struct on guest exit.  The
effect of the typo was that the DSCR value was saved in the wrong place,
so changes to the DSCR by the guest didn't persist across guest exit
and entry, and some host kernel memory got corrupted.

Cc: stable@vger.kernel.org [v3.1+]
Signed-off-by: Paul Mackerras <paulus@samba.org>
Acked-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM: nVMX: fix shadow on EPT
Gleb Natapov [Wed, 9 Oct 2013 16:13:19 +0000 (19:13 +0300)]
KVM: nVMX: fix shadow on EPT

72f857950f6f19 broke shadow on EPT. This patch reverts it and fixes PAE
on nEPT (which the reverted commit fixed) in another way.

Shadow on EPT is now broken because while L1 builds a shadow page table
for L2 (which is PAE while L2 is in real mode) it never loads L2's
GUEST_PDPTR[0-3].  They do not need to be loaded because without nested
virtualization HW does this during guest entry if EPT is disabled,
but in our case L0 emulates L2's vmentry while EPT is enabled, so we
cannot rely on vmcs12->guest_pdptr[0-3] to contain up-to-date values
and need to re-read PDPTEs from L2 memory. This is what kvm_set_cr3()
does, but by clearing cache bits during L2 vmentry we drop values
that kvm_set_cr3() read from memory.

So why does the same code not work for PAE on nEPT? kvm_set_cr3()
reads pdptes into vcpu->arch.walk_mmu->pdptrs[]. walk_mmu points to
vcpu->arch.nested_mmu while the nested guest is running, but ept_load_pdptrs()
uses vcpu->arch.mmu, which contains incorrect values. Fix that by using
walk_mmu in ept_(load|save)_pdptrs.

Signed-off-by: Gleb Natapov <gleb@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Tested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
timer stats: Add a 'Collection: active/inactive' line to timer usage statistics
Dong Zhu [Thu, 10 Oct 2013 07:56:18 +0000 (15:56 +0800)]
timer stats: Add a 'Collection: active/inactive' line to timer usage statistics

We can enable/disable timer statistics collection via:

  echo [1|0] > /proc/timer_stats

and it would be nice if apps had the ability to check
what the current collection status is.

This patch adds a 'Collection: active/inactive' line to display the
current timer collection status.

Also bump up the timer stats version to v0.3.
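
The /proc/timer_stats header then reads along these lines (the sample
period value is illustrative):

  Timer Stats Version: v0.3
  Sample period: 3.888 s
  Collection: active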

Signed-off-by: Dong Zhu <bluezhudong@gmail.com>
Cc: John Stultz <john.stultz@linaro.org>
Link: http://lkml.kernel.org/r/20131010075618.GH2139@zhudong.nay.redhat.com
[ Improved the changelog and the code. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge branch 'fortglx/3.13/time' of git://git.linaro.org/people/jstultz/linux into timers/core
Ingo Molnar [Thu, 10 Oct 2013 04:25:23 +0000 (06:25 +0200)]
Merge branch 'fortglx/3.13/time' of git://git.linaro.org/people/jstultz/linux into timers/core

Pull more timekeeping items for v3.13 from John Stultz:

  * Small cleanup in the clocksource code.

  * Fix for rtc-pl031 to let it work with alarmtimers.

  * Move arm64 to using the generic sched_clock framework & resulting
    cleanup in the generic sched_clock code.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched_clock: Remove sched_clock_func() hook
Stephen Boyd [Thu, 18 Jul 2013 23:21:19 +0000 (16:21 -0700)]
sched_clock: Remove sched_clock_func() hook

Nobody is using sched_clock_func() anymore now that sched_clock
supports up to 64 bits. Remove the hook so that new code only
uses sched_clock_register().

Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
arch_timer: Move to generic sched_clock framework
Stephen Boyd [Thu, 18 Jul 2013 23:21:18 +0000 (16:21 -0700)]
arch_timer: Move to generic sched_clock framework

Register with the generic sched_clock framework now that it
supports 64 bits. This fixes two problems with the current
sched_clock support for machines using the architected timers.
First off, we don't subtract the start value from subsequent
sched_clock calls so we can potentially start off with
sched_clock returning gigantic numbers. Second, there is no
support for suspend/resume handling so problems such as discussed
in 6a4dae5 (ARM: 7565/1: sched: stop sched_clock() during
suspend, 2012-10-23) can happen without this patch. Finally, it
allows us to move the sched_clock setup out of the arch ports and
into drivers/clocksource.

Cc: Christopher Covington <cov@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
drm/radeon/dpm: disable bapm on TN asics
Alex Deucher [Tue, 8 Oct 2013 01:25:39 +0000 (21:25 -0400)]
drm/radeon/dpm: disable bapm on TN asics

Causes hangs on certain boards.

Fixes:
https://bugs.freedesktop.org/show_bug.cgi?id=70053

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
drm/radeon: improve soft reset on CIK
Alex Deucher [Wed, 2 Oct 2013 18:54:44 +0000 (14:54 -0400)]
drm/radeon: improve soft reset on CIK

Disable CG/PG before resetting.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
drm/radeon: improve soft reset on SI
Alex Deucher [Wed, 2 Oct 2013 18:50:57 +0000 (14:50 -0400)]
drm/radeon: improve soft reset on SI

Disable CG/PG and stop the rlc before resetting.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
drm/radeon/dpm: off by one in si_set_mc_special_registers()
Dan Carpenter [Sat, 28 Sep 2013 09:35:31 +0000 (12:35 +0300)]
drm/radeon/dpm: off by one in si_set_mc_special_registers()

These checks should be ">=" instead of ">".  j is used as an offset into
the table->mc_reg_address[] array and that has
SMC_SISLANDS_MC_REGISTER_ARRAY_SIZE (16) elements.
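
Illustrative sketch of the fix (context abridged; names taken from the
commit message):

  /* j indexes table->mc_reg_address[], which has
   * SMC_SISLANDS_MC_REGISTER_ARRAY_SIZE (16) entries, so j == 16
   * must be rejected as well: */
  if (j >= SMC_SISLANDS_MC_REGISTER_ARRAY_SIZE)   /* was: '>' */
          return -EINVAL;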

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
drm/radeon/dpm/btc: off by one in btc_set_mc_special_registers()
Dan Carpenter [Fri, 27 Sep 2013 20:18:39 +0000 (23:18 +0300)]
drm/radeon/dpm/btc: off by one in btc_set_mc_special_registers()

It should be ">=" instead of ">" here.  The table->mc_reg_address[]
array has SMC_EVERGREEN_MC_REGISTER_ARRAY_SIZE (16) elements.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
drm/radeon: forever loop on error in radeon_do_test_moves()
Dan Carpenter [Mon, 1 Jul 2013 16:39:34 +0000 (19:39 +0300)]
drm/radeon: forever loop on error in radeon_do_test_moves()

The error path does this:

for (--i; i >= 0; --i) {

which is a forever loop because "i" is unsigned.
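
A standalone illustration of the bug class (not the driver code itself):

  #include <stdio.h>

  int main(void)
  {
          unsigned int i = 0;

          /* --i wraps to UINT_MAX, so "i >= 0" is always true and an
           * error-unwind loop written that way never terminates. */
          printf("after --i: %u\n", --i);

          /* one common fix: post-decrement in the loop condition */
          for (i = 3; i-- > 0; )
                  printf("unwinding entry %u\n", i);

          return 0;
  }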

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
drm/radeon: fix hw contexts for SUMO2 asics
wojciech kapuscinski [Tue, 1 Oct 2013 23:54:33 +0000 (19:54 -0400)]
drm/radeon: fix hw contexts for SUMO2 asics

They have 4 rather than 8.

Fixes:
https://bugs.freedesktop.org/show_bug.cgi?id=63599

Signed-off-by: wojciech kapuscinski <wojtask9@wp.pl>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
drm/radeon: fix typo in CP DMA register headers
Alex Deucher [Tue, 1 Oct 2013 20:40:45 +0000 (16:40 -0400)]
drm/radeon: fix typo in CP DMA register headers

Wrong bit offset for SRC endian swapping.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
drm/radeon/dpm: disable multiple UVD states
Alex Deucher [Mon, 30 Sep 2013 23:11:24 +0000 (19:11 -0400)]
drm/radeon/dpm: disable multiple UVD states

Always use the regular UVD state for now.  This fixes
a performance regression with UVD playback on certain APUs.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
drm/radeon: use hw generated CTS/N values for audio
Alex Deucher [Fri, 27 Sep 2013 22:22:15 +0000 (18:22 -0400)]
drm/radeon: use hw generated CTS/N values for audio

Use the hw generated values rather than calculating
them in the driver.  There may be some older r6xx
asics where this doesn't work correctly.  This remains
to be seen.

See bug:
https://bugs.freedesktop.org/show_bug.cgi?id=69675

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
drm/radeon: fix N/CTS clock matching for audio
Alex Deucher [Fri, 27 Sep 2013 22:19:42 +0000 (18:19 -0400)]
drm/radeon: fix N/CTS clock matching for audio

The drm code that calculates the 1001 clocks rounds up
rather than truncating.  This allows the table to match
properly on those modes.

See bug:
https://bugs.freedesktop.org/show_bug.cgi?id=69675

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
drm/radeon: use 64-bit math to calculate CTS values for audio (v2)
Alex Deucher [Fri, 27 Sep 2013 22:09:54 +0000 (18:09 -0400)]
drm/radeon: use 64-bit math to calculate CTS values for audio (v2)

Avoid losing precision.  See bug:
https://bugs.freedesktop.org/show_bug.cgi?id=69675

v2: fix math as per Anssi's comments.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
drm/edid: catch kmalloc failure in drm_edid_to_speaker_allocation
Alex Deucher [Fri, 27 Sep 2013 22:44:39 +0000 (18:44 -0400)]
drm/edid: catch kmalloc failure in drm_edid_to_speaker_allocation

Return -ENOMEM if the allocation fails.

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Revert "drm/fb-helper: don't sleep for screen unblank when an oops is in progress"
Dave Airlie [Wed, 9 Oct 2013 21:05:41 +0000 (07:05 +1000)]
Revert "drm/fb-helper: don't sleep for screen unblank when an oops is in progress"

This reverts commit 928c2f0c006bf7f381f58af2b2786d2a858ae311.

This patch was applied twice, giving two checks for the price of one.

Signed-off-by: Dave Airlie <airlied@redhat.com>
hwmon: (applesmc) Always read until end of data
Henrik Rydberg [Wed, 2 Oct 2013 17:15:03 +0000 (19:15 +0200)]
hwmon: (applesmc) Always read until end of data

The crash reported and investigated in commit 5f4513 turned out to be
caused by a change to the read interface on newer (2012) SMCs.

Tests by Chris show that simply reading the data valid line is enough
for the problem to go away. Additional tests show that the newer SMCs
no longer wait for the number of requested bytes, but start sending
data right away.  Apparently the number of bytes to read is no longer
specified as before, but instead found out by reading until end of
data. Failure to read until end of data confuses the state machine,
which eventually causes the crash.

As a remedy, assuming bit0 is the read valid line, make sure there is
nothing more to read before leaving the read function.

Tested to resolve the original problem, and runtested on MBA3,1,
MBP4,1, MBP8,2, MBP10,1, MBP10,2. The patch seems to have no effect on
machines before 2012.

Tested-by: Chris Murphy <chris@cmurf.com>
Cc: stable@vger.kernel.org
Signed-off-by: Henrik Rydberg <rydberg@euromail.se>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
sched/numa: Reflow task_numa_group() to avoid a compiler warning
Peter Zijlstra [Wed, 9 Oct 2013 08:24:48 +0000 (10:24 +0200)]
sched/numa: Reflow task_numa_group() to avoid a compiler warning

Reflow the function a bit because GCC gets confused:

  kernel/sched/fair.c: In function ‘task_numa_fault’:
  kernel/sched/fair.c:1448:3: warning: ‘my_grp’ may be used uninitialized in this function [-Wmaybe-uninitialized]
  kernel/sched/fair.c:1463:27: note: ‘my_grp’ was declared here

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/n/tip-6ebt6x7u64pbbonq1khqu2z9@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched/numa: Retry task_numa_migrate() periodically
Rik van Riel [Mon, 7 Oct 2013 10:29:41 +0000 (11:29 +0100)]
sched/numa: Retry task_numa_migrate() periodically

Short spikes of CPU load can lead to a task being migrated
away from its preferred node for temporary reasons.

It is important that the task is migrated back to where it
belongs, in order to avoid migrating too much memory to its
new location, and generally disturbing a task's NUMA location.

This patch fixes NUMA placement for 4 specjbb instances on
a 4 node system. Without this patch, things take longer to
converge, and processes are not always completely on their
own node.

Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-64-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched/numa: Use unsigned longs for numa group fault stats
Mel Gorman [Mon, 7 Oct 2013 10:29:40 +0000 (11:29 +0100)]
sched/numa: Use unsigned longs for numa group fault stats

As Peter says "If you're going to hold locks you can also do away with all
that atomic_long_*() nonsense". Lock acquisition moved slightly to protect
the updates.

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-63-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched/numa: Skip some page migrations after a shared fault
Rik van Riel [Mon, 7 Oct 2013 10:29:39 +0000 (11:29 +0100)]
sched/numa: Skip some page migrations after a shared fault

Shared faults can lead to lots of unnecessary page migrations,
slowing down the system, and causing private faults to hit the
per-pgdat migration ratelimit.

This patch adds sysctl numa_balancing_migrate_deferred, which specifies
how many shared page migrations to skip unconditionally, after each page
migration that is skipped because it is a shared fault.
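
For example (the value is illustrative, and the file is assumed to land
with the other numa_balancing sysctls under /proc/sys/kernel/):

  echo 16 > /proc/sys/kernel/numa_balancing_migrate_deferred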

This reduces the number of page migrations back and forth in
shared fault situations. It also gives a strong preference to
the tasks that are already running where most of the memory is,
and to moving the other tasks closer to that memory.

Testing this with a much higher scan rate than the default
still seems to result in fewer page migrations than before.

Memory seems to be somewhat better consolidated than previously,
with multi-instance specjbb runs on a 4 node system.

Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-62-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
mm: numa: Revert temporarily disabling of NUMA migration
Rik van Riel [Mon, 7 Oct 2013 10:29:38 +0000 (11:29 +0100)]
mm: numa: Revert temporarily disabling of NUMA migration

With the scan rate code working (at least for multi-instance specjbb),
the large hammer that is "sched: Do not migrate memory immediately after
switching node" can be replaced with something smarter. Revert the
temporary migration disabling and all traces of numa_migrate_seq.

Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-61-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched/numa: Remove the numa_balancing_scan_period_reset sysctl
Mel Gorman [Mon, 7 Oct 2013 10:29:37 +0000 (11:29 +0100)]
sched/numa: Remove the numa_balancing_scan_period_reset sysctl

With scan rate adaptions based on whether the workload has properly
converged or not there should be no need for the scan period reset
hammer. Get rid of it.

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-60-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched/numa: Adjust scan rate in task_numa_placement
Rik van Riel [Mon, 7 Oct 2013 10:29:36 +0000 (11:29 +0100)]
sched/numa: Adjust scan rate in task_numa_placement

Adjust numa_scan_period in task_numa_placement, depending on how much
useful work the numa code can do. The more local faults there are in a
given scan window the longer the period (and hence the slower the scan rate)
during the next window. If there are excessive shared faults then the scan
period will decrease with the amount of scaling depending on whether the
ratio of shared/private faults. If the preferred node changes then the
scan rate is reset to recheck if the task is properly placed.

Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-59-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched/numa: Take false sharing into account when adapting scan rate
Mel Gorman [Mon, 7 Oct 2013 10:29:35 +0000 (11:29 +0100)]
sched/numa: Take false sharing into account when adapting scan rate

Scan rate is altered based on whether shared/private faults dominated.
task_numa_group() may detect false sharing but that information is not
taken into account when adapting the scan rate. Take it into account.

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-58-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched/numa: Be more careful about joining numa groups
Rik van Riel [Mon, 7 Oct 2013 10:29:34 +0000 (11:29 +0100)]
sched/numa: Be more careful about joining numa groups

Due to the way the pid is truncated, and tasks are moved between
CPUs by the scheduler, it is possible for the current task_numa_fault
to group together tasks that do not actually share memory.

This patch adds a few easy sanity checks to task_numa_fault, joining
tasks together if they share the same tsk->mm, or if the fault was on
a page with an elevated mapcount, in a shared VMA.
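
A standalone sketch of why truncated PIDs can falsely match (the 8-bit
width here is an assumption mirroring the fault-tracking code):

  #include <stdio.h>

  #define LAST__PID_BITS 8                        /* assumed width */
  #define LAST__PID_MASK ((1 << LAST__PID_BITS) - 1)

  int main(void)
  {
          int a = 100, b = 356;   /* two unrelated tasks... */

          /* ...yet 100 and 356 both truncate to 100, so the extra
           * checks (same mm, shared VMA) are needed before grouping. */
          printf("pid %d -> %d\n", a, a & LAST__PID_MASK);
          printf("pid %d -> %d\n", b, b & LAST__PID_MASK);
          return 0;
  }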

Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-57-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched/numa: Avoid migrating tasks that are placed on their preferred node
Peter Zijlstra [Mon, 7 Oct 2013 10:29:33 +0000 (11:29 +0100)]
sched/numa: Avoid migrating tasks that are placed on their preferred node

This patch classifies scheduler domains and runqueues into types depending
on the number of tasks that care about their NUMA placement and the number
that are currently running on their preferred node. The types are

regular: There are tasks running that do not care about their NUMA
placement.

remote: There are tasks running that care about their placement but are
currently running on a node remote to their ideal placement

all: No distinction

To implement this the patch tracks the number of tasks that are optimally
NUMA placed (rq->nr_preferred_running) and the number of tasks running
that care about their placement (nr_numa_running). The load balancer
uses this information to avoid migrating ideally placed NUMA tasks as long
as better options for load balancing exist. For example, it will not
consider balancing between a group whose tasks are all perfectly placed
and a group with remote tasks.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/1381141781-10992-56-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched/numa: Fix task or group comparison
Rik van Riel [Mon, 7 Oct 2013 10:29:32 +0000 (11:29 +0100)]
sched/numa: Fix task or group comparison

This patch separately considers task and group affinities when
searching for swap candidates during NUMA placement. If tasks
are part of the same group, or no group at all, the task weights
are considered.

Some hysteresis is added to prevent tasks within one group from
getting bounced between NUMA nodes due to tiny differences.

If tasks are part of different groups, the code compares group
weights, in order to favor grouping task groups together.

The patch also changes the group weight multiplier to be the
same as the task weight multiplier, since the two are no longer
added up like before.

Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-55-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched/numa: Decide whether to favour task or group weights based on swap candidate relationships
Rik van Riel [Mon, 7 Oct 2013 10:29:31 +0000 (11:29 +0100)]
sched/numa: Decide whether to favour task or group weights based on swap candidate relationships

This patch separately considers task and group affinities when searching
for swap candidates during task NUMA placement. If tasks are not part of
a group or the same group then the task weights are considered.
Otherwise the group weights are compared.

Signed-off-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-54-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched/numa: Add debugging
Ingo Molnar [Mon, 7 Oct 2013 10:29:30 +0000 (11:29 +0100)]
sched/numa: Add debugging

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: http://lkml.kernel.org/r/1381141781-10992-53-git-send-email-mgorman@suse.de
sched/numa: Prevent parallel updates to group stats during placement
Mel Gorman [Mon, 7 Oct 2013 10:29:29 +0000 (11:29 +0100)]
sched/numa: Prevent parallel updates to group stats during placement

Having multiple tasks in a group go through task_numa_placement
simultaneously can lead to a task picking a wrong node to run on, because
the group stats may be in the middle of an update. This patch avoids
parallel updates by holding the numa_group lock during placement
decisions.

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-52-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched/numa: Call task_numa_free() from do_execve()
Rik van Riel [Mon, 7 Oct 2013 10:29:28 +0000 (11:29 +0100)]
sched/numa: Call task_numa_free() from do_execve()

It is possible for a task in a numa group to call exec, and
have the new (unrelated) executable inherit the numa group
association from its former self.

This has the potential to break numa grouping, and is trivial
to fix.

Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-51-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched/numa: Use group fault statistics in numa placement
Mel Gorman [Mon, 7 Oct 2013 10:29:27 +0000 (11:29 +0100)]
sched/numa: Use group fault statistics in numa placement

This patch uses the fraction of faults on a particular node for both task
and group, to figure out the best node to place a task.  If the task and
group statistics disagree on what the preferred node should be then a full
rescan will select the node with the best combined weight.

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-50-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched/numa: Stay on the same node if CLONE_VM
Rik van Riel [Mon, 7 Oct 2013 10:29:26 +0000 (11:29 +0100)]
sched/numa: Stay on the same node if CLONE_VM

A newly spawned thread inside a process should stay on the same
NUMA node as its parent. This prevents processes from being "torn"
across multiple NUMA nodes every time they spawn a new thread.

Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-49-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
mm: numa: Do not batch handle PMD pages
Mel Gorman [Mon, 7 Oct 2013 10:29:25 +0000 (11:29 +0100)]
mm: numa: Do not batch handle PMD pages

With the THP migration races closed it is still possible to occasionally
see corruption. The problem is related to handling PMD pages in batch.
When a page fault is handled it can be assumed that the page being
faulted will also be flushed from the TLB. The same flushing does not
happen when handling PMD pages in batch. Fixing it is straightforward, but
there are a number of reasons not to:

1. Multiple TLB flushes may have to be sent depending on what pages get
   migrated
2. The handling of PMDs in batch means that faults get accounted to
   the task that is handling the fault. While care is taken to only
   mark PMDs where the last CPU and PID match, it can still have problems
   due to PID truncation when matching PIDs.
3. Batching on the PMD level may reduce faults but setting pmd_numa
   requires taking a heavy lock that can contend with THP migration
   and handling the fault requires the release/acquisition of the PTL
   for every page migrated. It's still pretty heavy.

PMD batch handling is not something that people have ever been happy
with. This patch removes it and later patches will deal with the
additional fault overhead using more intelligent migrate rate adaptation.

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-48-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
mm: numa: Do not group on RO pages
Peter Zijlstra [Mon, 7 Oct 2013 10:29:24 +0000 (11:29 +0100)]
mm: numa: Do not group on RO pages

And here's a little something to make sure not the whole world ends up
in a single group.

While we don't migrate shared executable pages, we do scan/fault on
them. And since everybody links to libc, everybody ends up in the same
group.

Suggested-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/1381141781-10992-47-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
mm: numa: Copy cpupid on page migration
Rik van Riel [Mon, 7 Oct 2013 10:29:23 +0000 (11:29 +0100)]
mm: numa: Copy cpupid on page migration

After page migration, the new page has the nidpid unset. This makes
every fault on a recently migrated page look like a first numa fault,
leading to another page migration.

Copying over the nidpid at page migration time should prevent erroneous
migrations of recently migrated pages.

Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-46-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched/numa: Report a NUMA task group ID
Mel Gorman [Mon, 7 Oct 2013 10:29:22 +0000 (11:29 +0100)]
sched/numa: Report a NUMA task group ID

It is desirable to model from userspace how the scheduler groups tasks
over time. This patch adds an ID to the numa_group and reports it via
/proc/PID/status.
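
The ID appears as an extra 'Ngid' line in /proc/PID/status, along these
lines (values illustrative):

  Name:   specjbb
  Tgid:   2314
  Ngid:   5
  Pid:    2314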

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-45-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched/numa: Use {cpu, pid} to create task groups for shared faults
Peter Zijlstra [Mon, 7 Oct 2013 10:29:21 +0000 (11:29 +0100)]
sched/numa: Use {cpu, pid} to create task groups for shared faults

While parallel applications tend to align their data on the cache
boundary, they tend not to align on the page or THP boundary.
Consequently tasks that partition their data can still "false-share"
pages presenting a problem for optimal NUMA placement.

This patch uses NUMA hinting faults to chain tasks together into
numa_groups. Alongside the NID a task was running on when
accessing a page, a truncated representation of the faulting PID is
stored. If subsequent faults are from different PIDs it is reasonable
to assume that those two tasks share a page and are candidates for
being grouped together. Note that this patch makes no scheduling
decisions based on the grouping information.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/1381141781-10992-44-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
mm: numa: Change page last {nid,pid} into {cpu,pid}
Peter Zijlstra [Mon, 7 Oct 2013 10:29:20 +0000 (11:29 +0100)]
mm: numa: Change page last {nid,pid} into {cpu,pid}

Change the per page last fault tracking to use cpu,pid instead of
nid,pid. This will allow us to try and lookup the alternate task more
easily. Note that even though it is the cpu that is stored in the page
flags, the mpol_misplaced decision is still based on the node.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/r/1381141781-10992-43-git-send-email-mgorman@suse.de
[ Fixed build failure on 32-bit systems. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched/numa: Fix placement of workloads spread across multiple nodes
Rik van Riel [Mon, 7 Oct 2013 10:29:19 +0000 (11:29 +0100)]
sched/numa: Fix placement of workloads spread across multiple nodes

The load balancer will spread workloads across multiple NUMA nodes,
in order to balance the load on the system. This means that sometimes
a task's preferred node has available capacity, but moving the task
there will not succeed, because that would create too large an imbalance.

In that case, other NUMA nodes need to be considered.

Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-42-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched/numa: Favor placing a task on the preferred node
Mel Gorman [Mon, 7 Oct 2013 10:29:18 +0000 (11:29 +0100)]
sched/numa: Favor placing a task on the preferred node

A task's preferred node is selected based on the number of faults
recorded for a node but the actual task_numa_migrate() conducts a global
search regardless of the preferred nid. This patch checks if the
preferred nid has capacity and if so, searches for a CPU within that
node. This avoids a global search when the preferred node is not
overloaded.

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-41-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched/numa: Use a system-wide search to find swap/migration candidates
Mel Gorman [Mon, 7 Oct 2013 10:29:17 +0000 (11:29 +0100)]
sched/numa: Use a system-wide search to find swap/migration candidates

This patch implements a system-wide search for swap/migration candidates
based on total NUMA hinting faults. It has a balance limit, however it
doesn't properly consider total node balance.

In the old scheme a task selected a preferred node based on the highest
number of private faults recorded on the node. In this scheme, the preferred
node is based on the total number of faults. If the preferred node for a
task changes then task_numa_migrate will search the whole system looking
for tasks to swap with that would improve both the overall compute
balance and minimise the expected number of remote NUMA hinting faults.

Note there is no guarantee that the node the source task is placed
on by task_numa_migrate() has any relationship to the newly selected
task->numa_preferred_nid due to compute overloading.

Signed-off-by: Mel Gorman <mgorman@suse.de>
[ Do not swap with tasks that cannot run on the source cpu. ]
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
[ Fixed compiler warning on UP. ]
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-40-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
11 years agosched/numa: Introduce migrate_swap()
Peter Zijlstra [Mon, 7 Oct 2013 10:29:16 +0000 (11:29 +0100)]
sched/numa: Introduce migrate_swap()

Use the new stop_two_cpus() to implement migrate_swap(), a function that
flips two tasks between their respective cpus.

I'm fairly sure there's a less crude way than employing the stop_two_cpus()
method, but everything I tried either got horribly fragile and/or complex. So
keep it simple for now.

The notable detail is how we 'migrate' tasks that aren't runnable
anymore. We'll make it appear like we migrated them before they went to
sleep. The sole difference is the previous cpu in the wakeup path, so we
override this.
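
A condensed sketch of the shape of migrate_swap() built on
stop_two_cpus(); locking, affinity and cpu-online checks are omitted,
and __migrate_swap_task() is shorthand for the deactivate/set-cpu/
activate dance:

    struct migration_swap_arg {
        struct task_struct *src_task, *dst_task;
        int src_cpu, dst_cpu;
    };

    static int migrate_swap_stop(void *data)
    {
        struct migration_swap_arg *arg = data;

        /* Runs with both CPUs stopped: exchange the two tasks. */
        __migrate_swap_task(arg->src_task, arg->dst_cpu);
        __migrate_swap_task(arg->dst_task, arg->src_cpu);
        return 0;
    }

    int migrate_swap(struct task_struct *cur, struct task_struct *p)
    {
        struct migration_swap_arg arg = {
            .src_task = cur, .src_cpu = task_cpu(cur),
            .dst_task = p,   .dst_cpu = task_cpu(p),
        };

        if (arg.src_cpu == arg.dst_cpu)
            return -EINVAL;

        return stop_two_cpus(arg.src_cpu, arg.dst_cpu,
                             migrate_swap_stop, &arg);
    }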

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Link: http://lkml.kernel.org/r/1381141781-10992-39-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
11 years agostop_machine: Introduce stop_two_cpus()
Peter Zijlstra [Mon, 7 Oct 2013 10:29:15 +0000 (11:29 +0100)]
stop_machine: Introduce stop_two_cpus()

Introduce stop_two_cpus() in order to allow controlled swapping of two
tasks. It repurposes the stop_machine() state machine but only stops
the two cpus, which we can do with on-stack structures, avoiding
machine-wide synchronization issues.

The ordering of CPUs is important to avoid deadlocks. If unordered, two
cpus calling stop_two_cpus() on each other simultaneously would attempt
to queue in the opposite order on each CPU, causing an ABBA-style deadlock.
By always having the lowest-numbered CPU do the queueing of works, we can
guarantee that works are always queued in the same order, and deadlocks
are avoided.
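
A condensed sketch of the ordering trick; queue_stop_work() and
wait_stop_done() are hypothetical stand-ins for the real per-CPU stopper
queueing and completion:

    int stop_two_cpus(unsigned int cpu1, unsigned int cpu2,
                      cpu_stop_fn_t fn, void *arg)
    {
        /*
         * Queue on the lower-numbered CPU first so that concurrent
         * stop_two_cpus(a, b) and stop_two_cpus(b, a) callers take
         * the two per-CPU queues in the same order, ruling out an
         * ABBA deadlock.
         */
        if (cpu1 > cpu2)
            swap(cpu1, cpu2);

        queue_stop_work(cpu1, fn, arg);  /* hypothetical helper */
        queue_stop_work(cpu2, fn, arg);
        return wait_stop_done(cpu1, cpu2);  /* hypothetical helper */
    }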

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
[ Implemented deadlock avoidance. ]
Signed-off-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Link: http://lkml.kernel.org/r/1381141781-10992-38-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
11 years agomm: numa: Trap pmd hinting faults only if we would otherwise trap PTE faults
Mel Gorman [Mon, 7 Oct 2013 10:29:14 +0000 (11:29 +0100)]
mm: numa: Trap pmd hinting faults only if we would otherwise trap PTE faults

Base page PMD faulting is meant to batch-handle NUMA hinting faults from
PTEs. However, even if no PTE faults would ever be handled within a
range, the kernel still traps PMD hinting faults. This patch avoids the
overhead.
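
Illustratively, in the protection-change walk the PMD is only marked for
hinting when the PTE scan beneath it actually changed something; the
helper names here are approximate, not the exact kernel functions:

    /* In the per-PMD loop of the prot_numa protection walk: */
    nr_updated = change_pte_range(vma, pmd, addr, next, newprot, prot_numa);
    if (prot_numa && nr_updated)
        mark_pmd_protnuma(vma->vm_mm, addr, pmd);  /* approximate name */
    pages += nr_updated;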

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-37-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
11 years agosched/numa: Do not trap hinting faults for shared libraries
Mel Gorman [Mon, 7 Oct 2013 10:29:13 +0000 (11:29 +0100)]
sched/numa: Do not trap hinting faults for shared libraries

NUMA hinting faults will not migrate a shared executable page mapped by
multiple processes on the grounds that the data is probably in the CPU
cache already and the page may just bounce between tasks running on multiple
nodes. Even if the migration is avoided, there is still the overhead of
trapping the fault, updating the statistics, making scheduler placement
decisions based on the information, etc. If we are never going to migrate
the page, this is overhead for no gain and, worse, a process may be placed on
a sub-optimal node for shared executable pages. This patch avoids trapping
faults for shared libraries entirely.
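
The skip amounts to filtering the VMA walk in task_numa_work(); roughly
as follows, though the exact flag test should be read as a sketch:

    /*
     * Skip mappings that look like shared libraries: file-backed,
     * readable and executable but not writable. Faults here would
     * never lead to migration, so don't trap them at all.
     */
    if (vma->vm_file &&
        (vma->vm_flags & (VM_READ|VM_WRITE|VM_EXEC)) == (VM_READ|VM_EXEC))
        continue;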

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-36-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
11 years agosched/numa: Increment numa_migrate_seq when task runs in correct location
Rik van Riel [Mon, 7 Oct 2013 10:29:12 +0000 (11:29 +0100)]
sched/numa: Increment numa_migrate_seq when task runs in correct location

When a task is already running on its preferred node, increment
numa_migrate_seq to indicate that the task is settled if migration is
temporarily disabled, so that memory still migrates towards it.

Signed-off-by: Rik van Riel <riel@redhat.com>
[ Only increment migrate_seq if migration temporarily disabled. ]
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-35-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
11 years agosched/numa: Retry migration of tasks to CPU on a preferred node
Mel Gorman [Mon, 7 Oct 2013 10:29:11 +0000 (11:29 +0100)]
sched/numa: Retry migration of tasks to CPU on a preferred node

When a preferred node is selected for a task there is an attempt to migrate
the task to a CPU there. This may fail, in which case the task will only
migrate if the active load balancer takes action. This may never happen if
the conditions are not right. This patch checks at NUMA hinting fault
time whether another attempt should be made to migrate the task. It will
only make an attempt once every five seconds.
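
Sketched against the hinting-fault path, with numa_migrate_retry as the
jiffies timestamp gating the retry; treat the field and helper names as
illustrative:

    /* In the hinting-fault path: periodically retry the preferred node. */
    if (task_node(p) != p->numa_preferred_nid &&
        time_after(jiffies, p->numa_migrate_retry)) {
        p->numa_migrate_retry = jiffies + 5 * HZ;
        numa_migrate_preferred(p);
    }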

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-34-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
11 years agosched/numa: Avoid overloading CPUs on a preferred NUMA node
Mel Gorman [Mon, 7 Oct 2013 10:29:10 +0000 (11:29 +0100)]
sched/numa: Avoid overloading CPUs on a preferred NUMA node

This patch replaces find_idlest_cpu_node with task_numa_find_cpu.
find_idlest_cpu_node has two critical limitations. It does not take the
scheduling class into account when calculating the load, and it is
unsuitable for use when comparing loads between NUMA nodes.

task_numa_find_cpu uses similar load calculations to wake_affine() when
selecting the least loaded CPU within a scheduling domain common to the
source and destination nodes. It avoids causing CPU load imbalances in
the machine by refusing to migrate if the relative load on the target
CPU is higher than on the source CPU.
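
A simplified sketch of the acceptance test; cpu_load() and task_load()
are stand-ins for the scheduler's weighted load calculations:

    long load = task_load(p);
    long src_load = cpu_load(src_cpu) - load;  /* load if the task leaves  */
    long dst_load = cpu_load(dst_cpu) + load;  /* load if the task arrives */

    /* Refuse moves that would make the destination the new hotspot. */
    if (dst_load > src_load)
        return false;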

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-33-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
11 years agomm: numa: Limit NUMA scanning to migrate-on-fault VMAs
Mel Gorman [Mon, 7 Oct 2013 10:29:09 +0000 (11:29 +0100)]
mm: numa: Limit NUMA scanning to migrate-on-fault VMAs

There is a 90% regression observed with a large Oracle performance test
on a 4-node system. Profiles indicated that the overhead was due to
contention on sp_lock when looking up shared memory policies. These
policies do not have the appropriate flags to allow them to be
automatically balanced, so trapping faults on them is pointless. This
patch skips VMAs that do not have MPOL_F_MOF set.
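
In the scanner's VMA walk the filter is a single early-out;
vma_policy_mof() names the "does this policy allow migrate-on-fault"
test and should be read as a sketch:

    /* Don't trap hinting faults where migrate-on-fault is impossible. */
    if (!vma_migratable(vma) || !vma_policy_mof(p, vma))
        continue;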

[riel@redhat.com: Initial patch]

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reported-and-tested-by: Joe Mario <jmario@redhat.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-32-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
11 years agosched/numa: Do not migrate memory immediately after switching node
Rik van Riel [Mon, 7 Oct 2013 10:29:08 +0000 (11:29 +0100)]
sched/numa: Do not migrate memory immediately after switching node

The load balancer can move tasks between nodes and does not take NUMA
locality into account. With automatic NUMA balancing this may result in the
task's working set being migrated to the new node. However, as the fault
buffer will still store faults from the old node, the scheduler may decide to
reset the preferred node and migrate the task back, resulting in more
migrations.

The ideal would be that the scheduler did not migrate tasks with a heavy
memory footprint, but this may result in nodes being overloaded. We could
also discard the fault information on task migration, but this would still
cause the task's entire working set to be migrated. This patch simply avoids
migrating the memory for a short time after a task is migrated.
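
Conceptually the page-migration decision gains a settling check, along
these lines; the numa_migrate_seq test is illustrative of the mechanism,
not the exact code:

    /*
     * The scheduler resets p->numa_migrate_seq when it moves the task
     * to a new node; until the task has stayed put for a full scan
     * pass, leave its pages where they are.
     */
    if (p->numa_migrate_seq == 0)
        return -1;  /* do not migrate the page yet */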

Signed-off-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-31-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
11 years agosched/numa: Set preferred NUMA node based on number of private faults
Mel Gorman [Mon, 7 Oct 2013 10:29:07 +0000 (11:29 +0100)]
sched/numa: Set preferred NUMA node based on number of private faults

Ideally it would be possible to distinguish between NUMA hinting faults that
are private to a task and those that are shared. If treated identically
there is a risk that shared pages bounce between nodes depending on
the order they are referenced by tasks. Ultimately what is desirable is
that task private pages remain local to the task while shared pages are
interleaved between sharing tasks running on different nodes to give good
average performance. This is further complicated by THP as even
applications that partition their data may not be partitioning on a huge
page boundary.

To start with, this patch assumes that multi-threaded or multi-process
applications partition their data and that, in the general case, private
accesses are more important for cpu->memory locality. Also,
no new infrastructure is required to treat private pages properly, but
interleaving for shared pages requires additional infrastructure.

To detect private accesses, the pid of the last accessing task is required,
but the storage requirements are high. This patch borrows heavily from
Ingo Molnar's patch "numa, mm, sched: Implement last-CPU+PID hash tracking"
to encode some bits from the last accessing task in the page flags as
well as the node information. Collisions will occur but it is better than
just depending on the node information. Node information is then used to
determine if a page needs to migrate. The PID information is used to detect
private/shared accesses. The preferred NUMA node is selected based on where
the maximum number of approximately private faults were measured. Shared
faults are not taken into consideration for a few reasons.

First, if there are many tasks sharing the page then they'll all move
towards the same node. The node will be compute overloaded and then
scheduled away later only to bounce back again. Alternatively the shared
tasks would just bounce around nodes because the fault information is
effectively noise. Either way, accounting for shared faults the same as
private faults can result in lower performance overall.

The second reason is based on a hypothetical workload that has a small
number of very important, heavily accessed private pages but a large shared
array. The shared array would dominate the number of faults and be selected
as a preferred node even though it's the wrong decision.

The third reason is that multiple threads in a process will race each
other to fault the shared page making the fault information unreliable.
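
Put together, the fault accounting looks roughly like this, reusing the
illustrative nidpid helpers sketched earlier; the two-counters-per-node
layout mirrors the private/shared split:

    /* Classify the fault: private if the stored PID bits match us. */
    priv = (nidpid_to_pid(last_nidpid) == (p->pid & NIDPID_PID_MASK));

    /* Account private and shared faults separately, per node. */
    p->numa_faults[2 * node + priv]++;

    /* Preferred-node selection later sums only the private column. */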

Signed-off-by: Mel Gorman <mgorman@suse.de>
[ Fix compilation error when !NUMA_BALANCING. ]
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-30-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
11 years agosched/numa: Remove check that skips small VMAs
Mel Gorman [Mon, 7 Oct 2013 10:29:06 +0000 (11:29 +0100)]
sched/numa: Remove check that skips small VMAs

task_numa_work skips small VMAs. At the time, the logic was to reduce the
scanning overhead, which was considerable. It is a dubious hack at best.
It would make much more sense to cache where faults have been observed
and only rescan those regions during subsequent PTE scans. Remove this
hack as motivation to do it properly in the future.
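
For reference, the removed check in task_numa_work() was essentially:

    /* Skip small VMAs. They are not likely to be of relevance. */
    if (vma->vm_end - vma->vm_start < HPAGE_SIZE)
        continue;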

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-29-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
11 years agomm: numa: Scan pages with elevated page_mapcount
Mel Gorman [Mon, 7 Oct 2013 10:29:05 +0000 (11:29 +0100)]
mm: numa: Scan pages with elevated page_mapcount

Currently automatic NUMA balancing is unable to distinguish between falsely
shared and private pages except by ignoring pages with an elevated
page_mapcount entirely. This avoids shared pages bouncing between the
nodes whose tasks are using them, but it also ignores quite a lot of data.

This patch kicks away the training wheels now that the support needed
for identifying shared/private pages is in place. The ordering is so
that the impact of the shared/private detection can be easily measured. Note
that the patch does not migrate shared, file-backed pages within VMAs marked
VM_EXEC, as these are generally shared library pages. Migrating such pages
is not beneficial as there is an expectation that they are read-shared between
caches, and iTLB and iCache pressure is generally low.
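
The training wheel being removed was a blanket gate in the
migrate-on-fault path, roughly:

    /* Old behaviour: never migrate a page mapped by more than one task. */
    if (page_mapcount(page) != 1)
        goto out;

With that gate gone, only the shared, file-backed VM_EXEC case is still
excluded, filtered during the scan instead.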

Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-28-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>