git.karo-electronics.de Git - karo-tx-linux.git/log
Merge branch 'x86/urgent'
Ingo Molnar [Wed, 24 Oct 2012 11:14:40 +0000 (13:14 +0200)]
Merge branch 'x86/urgent'

Merge branch 'perf/core'
Ingo Molnar [Wed, 24 Oct 2012 11:14:36 +0000 (13:14 +0200)]
Merge branch 'perf/core'

x86: Allow tracing of functions in arch/x86/kernel/rtc.c
David Vrabel [Mon, 8 Oct 2012 12:07:30 +0000 (13:07 +0100)]
x86: Allow tracing of functions in arch/x86/kernel/rtc.c

Move native_read_tsc() to tsc.c to allow profiling to be
re-enabled for rtc.c.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Link: http://lkml.kernel.org/r/1349698050-6560-1-git-send-email-david.vrabel@citrix.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
x86/irq/ioapic: Check for valid irq_cfg pointer in smp_irq_move_cleanup_interrupt
Dimitri Sivanich [Tue, 16 Oct 2012 12:50:21 +0000 (07:50 -0500)]
x86/irq/ioapic: Check for valid irq_cfg pointer in smp_irq_move_cleanup_interrupt

Posting this patch to fix an issue concerning sparse irqs that
I raised a while back.  There was discussion about adding
refcounting to sparse irqs (to fix other potential race
conditions), but that does not appear to have been addressed
yet.  This covers the only issue of this type that I've
encountered in this area.

A NULL pointer dereference can occur in
smp_irq_move_cleanup_interrupt() if we haven't yet setup the
irq_cfg pointer in the irq_desc.irq_data.chip_data.

In create_irq_nr() there is a window where we have set
vector_irq in __assign_irq_vector(), but not yet called
irq_set_chip_data() to set the irq_cfg pointer.

Should an IRQ_MOVE_CLEANUP_VECTOR hit the cpu in question during
this time, smp_irq_move_cleanup_interrupt() will attempt to
process the aforementioned irq, but panic when accessing
irq_cfg.

Only continue processing the irq if irq_cfg is non-NULL.
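
For illustration, the guard boils down to something like this inside the
per-vector loop of smp_irq_move_cleanup_interrupt() (a sketch, not the
literal hunk from this patch):

  desc = irq_to_desc(irq);
  if (!desc)
          continue;

  cfg = irq_cfg(irq);
  if (!cfg)
          continue;       /* irq_set_chip_data() has not run yet: skip this irq */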

Signed-off-by: Dimitri Sivanich <sivanich@sgi.com>
Cc: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Joerg Roedel <joerg.roedel@amd.com>
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Alexander Gordeev <agordeev@redhat.com>
Link: http://lkml.kernel.org/r/20121016125021.GA22935@sgi.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge branch 'perf/urgent'
Ingo Molnar [Wed, 24 Oct 2012 10:51:52 +0000 (12:51 +0200)]
Merge branch 'perf/urgent'

perf/x86: Remove unused variable in nhmex_rbox_alter_er()
Wei Yongjun [Mon, 22 Oct 2012 08:51:38 +0000 (16:51 +0800)]
perf/x86: Remove unused variable in nhmex_rbox_alter_er()

The variable 'port' is initialized but never otherwise used,
so remove it.

dpatch engine is used to auto generate this patch.
(https://github.com/weiyj/dpatch)

Signed-off-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Cc: Yan, Zheng <zheng.z.yan@intel.com>
Cc: a.p.zijlstra@chello.nl
Cc: paulus@samba.org
Cc: acme@ghostprotocols.net
Link: http://lkml.kernel.org/r/CAPgLHd8NZkYSkZm22FpZxiEh6HcA0q-V%3D29vdnheiDhgrJZ%2Byw@mail.gmail.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge branch 'x86/urgent'
Ingo Molnar [Wed, 24 Oct 2012 10:50:20 +0000 (12:50 +0200)]
Merge branch 'x86/urgent'

x86/efi: Fix oops caused by incorrect set_memory_uc() usage
Matt Fleming [Fri, 19 Oct 2012 12:25:46 +0000 (13:25 +0100)]
x86/efi: Fix oops caused by incorrect set_memory_uc() usage

Calling __pa() with an ioremap'd address is invalid. If we
encounter an efi_memory_desc_t without EFI_MEMORY_WB set in
->attribute we currently call set_memory_uc(), which in turn
calls __pa() on a potentially ioremap'd address.

On CONFIG_X86_32 this results in the following oops:

  BUG: unable to handle kernel paging request at f7f22280
  IP: [<c10257b9>] reserve_ram_pages_type+0x89/0x210
  *pdpt = 0000000001978001 *pde = 0000000001ffb067 *pte = 0000000000000000
  Oops: 0000 [#1] PREEMPT SMP
  Modules linked in:

  Pid: 0, comm: swapper Not tainted 3.0.0-acpi-efi-0805 #3
   EIP: 0060:[<c10257b9>] EFLAGS: 00010202 CPU: 0
   EIP is at reserve_ram_pages_type+0x89/0x210
   EAX: 0070e280 EBX: 38714000 ECX: f7814000 EDX: 00000000
   ESI: 00000000 EDI: 38715000 EBP: c189fef0 ESP: c189fea8
   DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068
  Process swapper (pid: 0, ti=c189e000 task=c18bbe60 task.ti=c189e000)
  Stack:
   80000200 ff108000 00000000 c189ff00 00038714 00000000 00000000 c189fed0
   c104f8ca 00038714 00000000 00038715 00000000 00000000 00038715 00000000
   00000010 38715000 c189ff48 c1025aff 38715000 00000000 00000010 00000000
  Call Trace:
   [<c104f8ca>] ? page_is_ram+0x1a/0x40
   [<c1025aff>] reserve_memtype+0xdf/0x2f0
   [<c1024dc9>] set_memory_uc+0x49/0xa0
   [<c19334d0>] efi_enter_virtual_mode+0x1c2/0x3aa
   [<c19216d4>] start_kernel+0x291/0x2f2
   [<c19211c7>] ? loglevel+0x1b/0x1b
   [<c19210bf>] i386_start_kernel+0xbf/0xc8

The only time we can call set_memory_uc() for a memory region is
when it is part of the direct kernel mapping. For the case where
we ioremap a memory region we must leave it alone.

This patch reimplements the fix from e8c7106280a3 ("x86, efi:
Calling __pa() with an ioremap()ed address is invalid") which
was reverted in e1ad783b12ec because it caused a regression on
some MacBooks (they hung at boot). The regression was caused
because the commit only marked EFI_RUNTIME_SERVICES_DATA as
E820_RESERVED_EFI, when it should have marked all regions that
have the EFI_MEMORY_RUNTIME attribute.

Despite first impressions, it's not possible to use
ioremap_cache() to map all cached memory regions on
CONFIG_X86_64 because of the way that the memory map might be
configured as detailed in the following bug report,

https://bugzilla.redhat.com/show_bug.cgi?id=748516

e.g. some of the EFI memory regions *need* to be mapped as part
of the direct kernel mapping.
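
A sketch of the rule being described (illustrative only; the direct-mapping
check is shown as a made-up helper, not the code from this patch):

  if (!(md->attribute & EFI_MEMORY_WB)) {
          if (region_is_direct_mapped(md))
                  /* part of the direct kernel mapping: __pa() is valid here */
                  set_memory_uc((unsigned long)va, md->num_pages);
          else
                  /* ioremap'd region: leave it alone, never feed it to __pa() */
                  va = ioremap_nocache(md->phys_addr,
                                       md->num_pages << EFI_PAGE_SHIFT);
  }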

Signed-off-by: Matt Fleming <matt.fleming@intel.com>
Cc: Matthew Garrett <mjg@redhat.com>
Cc: Zhang Rui <rui.zhang@intel.com>
Cc: Huang Ying <huang.ying.caritas@gmail.com>
Cc: Keith Packard <keithp@keithp.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Link: http://lkml.kernel.org/r/1350649546-23541-1-git-send-email-matt@console-pimps.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge branch 'x86/asm'
Ingo Molnar [Wed, 24 Oct 2012 10:43:16 +0000 (12:43 +0200)]
Merge branch 'x86/asm'

x86/asm: Clean up copy_page_*() comments and code
Ma Ling [Wed, 17 Oct 2012 19:52:45 +0000 (03:52 +0800)]
x86/asm: Clean up copy_page_*() comments and code

Modern CPUs use fast-string instructions to accelerate copy
performance by combining data into 128-bit chunks.

Modify the comments and coding style to match this.

Signed-off-by: Ma Ling <ling.ma@intel.com>
Cc: iant@google.com
Link: http://lkml.kernel.org/r/1350503565-19167-1-git-send-email-ling.ma@intel.com
[ Cleaned up the clean up. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge branch 'core/locking'
Ingo Molnar [Wed, 24 Oct 2012 10:39:19 +0000 (12:39 +0200)]
Merge branch 'core/locking'

lockdep: Use KSYM_NAME_LEN'ed buffer for __get_key_name()
Cyrill Gorcunov [Sat, 20 Oct 2012 19:05:19 +0000 (23:05 +0400)]
lockdep: Use KSYM_NAME_LEN'ed buffer for __get_key_name()

Not a big deal, but since the other __get_key_name() callers
use it, let's be consistent.
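
For reference, the pattern the other callers use (a sketch):

  char str[KSYM_NAME_LEN];
  const char *name = __get_key_name(class->key, str);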

Signed-off-by: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20121020190519.GH25467@moon
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge branch 'perf/urgent'
Ingo Molnar [Wed, 24 Oct 2012 10:01:03 +0000 (12:01 +0200)]
Merge branch 'perf/urgent'

perf/x86: Enable overflow on Intel KNC with a custom knc_pmu_handle_irq()
Vince Weaver [Wed, 17 Oct 2012 17:05:45 +0000 (13:05 -0400)]
perf/x86: Enable overflow on Intel KNC with a custom knc_pmu_handle_irq()

Although based on the Intel P6 design, the interrupt mechanism
for KNC more closely resembles the Intel architectural
perfmon one.

We can't just re-use that code though, because KNC has different
MSR numbers for the status and ack registers.

In this case we just cut and paste from perf_event_intel.c
with some minor changes, as it looks like it would not be
worth the trouble to change that code to be MSR-configurable.
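
The KNC-specific part reduces to a pair of tiny status/ack helpers around
the KNC MSRs; roughly (a sketch, with the MSR macro names assumed):

  static inline u64 knc_pmu_get_status(void)
  {
          u64 status;

          rdmsrl(MSR_KNC_IA32_PERF_GLOBAL_STATUS, status);        /* KNC status MSR */
          return status;
  }

  static inline void knc_pmu_ack_status(u64 ack)
  {
          wrmsrl(MSR_KNC_IA32_PERF_GLOBAL_OVF_CONTROL, ack);      /* KNC ack MSR */
  }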

Signed-off-by: Vince Weaver <vincent.weaver@maine.edu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: eranian@gmail.com
Cc: Meadows Lawrence F <lawrence.f.meadows@intel.com>
Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1210171304410.23243@vincent-weaver-1.um.maine.edu
[ Small stylistic edits. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
perf/x86: Remove cpuc->enable check on Intel KNC event enable/disable
Vince Weaver [Wed, 17 Oct 2012 17:04:33 +0000 (13:04 -0400)]
perf/x86: Remove cpuc->enable check on Intel KNC event enable/disable

x86_pmu.enable() is called from x86_pmu_enable() with
cpuc->enabled set to 0.  This means we weren't re-enabling the
counters after a context switch.

This patch just removes the check, as it shouldn't be necessary
(and the equivalent x86_* generic code does not have the checks).

The origin of this problem is the KNC driver being based on the
P6 one.   The P6 driver also has this issue, but works anyway
due to various lucky accidents.

Signed-off-by: Vince Weaver <vincent.weaver@maine.edu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: eranian@gmail.com
Cc: Meadows Lawrence F <lawrence.f.meadows@intel.com>
Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1210171303290.23243@vincent-weaver-1.um.maine.edu
Signed-off-by: Ingo Molnar <mingo@kernel.org>
perf/x86: Make Intel KNC use full 40-bit width of counters
Vince Weaver [Wed, 17 Oct 2012 17:03:21 +0000 (13:03 -0400)]
perf/x86: Make Intel KNC use full 40-bit width of counters

Early versions of Intel KNC chips have a bug where bits above 32
were not properly set.  We worked around this by only using the
bottom 32 bits (out of 40 that should be available).

It turns out this workaround breaks overflow handling.

The buggy silicon will in theory never be used in production
systems, so remove this workaround so we get proper overflow
support.
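
The gist of the change, sketched against the x86_pmu fields involved:

  .cntval_bits    = 40,                   /* was 32, working around the early-silicon bug */
  .cntval_mask    = (1ULL << 40) - 1,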

Signed-off-by: Vince Weaver <vincent.weaver@maine.edu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: eranian@gmail.com
Cc: Meadows Lawrence F <lawrence.f.meadows@intel.com>
Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1210171302140.23243@vincent-weaver-1.um.maine.edu
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge branch 'perf/urgent'
Ingo Molnar [Wed, 24 Oct 2012 08:57:37 +0000 (10:57 +0200)]
Merge branch 'perf/urgent'

perf/x86/uncore: Handle pci_read_config_dword() errors
Yan, Zheng [Wed, 24 Oct 2012 08:42:20 +0000 (16:42 +0800)]
perf/x86/uncore: Handle pci_read_config_dword() errors

This, beyond handling corner cases, also fixes some build warnings:

 arch/x86/kernel/cpu/perf_event_intel_uncore.c: In function ‘snbep_uncore_pci_disable_box’:
 arch/x86/kernel/cpu/perf_event_intel_uncore.c:124:9: warning: ‘config’ is used uninitialized in this function [-Wuninitialized]
 arch/x86/kernel/cpu/perf_event_intel_uncore.c: In function ‘snbep_uncore_pci_enable_box’:
 arch/x86/kernel/cpu/perf_event_intel_uncore.c:135:9: warning: ‘config’ is used uninitialized in this function [-Wuninitialized]
 arch/x86/kernel/cpu/perf_event_intel_uncore.c: In function ‘snbep_uncore_pci_read_counter’:
 arch/x86/kernel/cpu/perf_event_intel_uncore.c:164:2: warning: ‘count’ is used uninitialized in this function [-Wuninitialized]
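
The pattern is simply to check the return value before using what was read
back; a sketch (SNBEP_PMON_BOX_CTL_FRZ is used purely as an example bit):

  u32 config;

  if (!pci_read_config_dword(pdev, box_ctl, &config)) {
          config |= SNBEP_PMON_BOX_CTL_FRZ;       /* only touch 'config' on success */
          pci_write_config_dword(pdev, box_ctl, config);
  }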

Signed-off-by: Yan, Zheng <zheng.z.yan@intel.com>
Cc: a.p.zijlstra@chello.nl
Link: http://lkml.kernel.org/r/1351068140-13456-1-git-send-email-zheng.z.yan@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge branch 'x86/urgent'
Ingo Molnar [Wed, 24 Oct 2012 08:50:36 +0000 (10:50 +0200)]
Merge branch 'x86/urgent'

x86-64: Fix page table accounting
Jan Beulich [Thu, 4 Oct 2012 13:48:10 +0000 (14:48 +0100)]
x86-64: Fix page table accounting

Commit 20167d3421a089a1bf1bd680b150dc69c9506810 ("x86-64: Fix
accounting in kernel_physical_mapping_init()") went a little too
far by entirely removing the counting of pre-populated page
tables: this should be done at boot time (to cover the page
tables set up in early boot code), but shouldn't be done during
memory hot add.

Hence, re-add the removed increments of "pages", but make them
and the one in phys_pte_init() conditional upon !after_bootmem.
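
The shape of the fix, as a sketch:

  /* count pre-populated page tables at boot only, not during memory hot-add */
  if (!after_bootmem)
          pages++;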

Reported-Acked-and-Tested-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Cc: <stable@kernel.org>
Link: http://lkml.kernel.org/r/506DAFBA020000780009FA8C@nat28.tlf.novell.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge branch 'perf/core'
Ingo Molnar [Wed, 24 Oct 2012 08:41:44 +0000 (10:41 +0200)]
Merge branch 'perf/core'

perf test: Add automated tests for pmu sysfs translated events
Jiri Olsa [Wed, 10 Oct 2012 12:53:18 +0000 (14:53 +0200)]
perf test: Add automated tests for pmu sysfs translated events

Add automated tests for all events found under PMU/events
directory. Tested events are in the 'cpu/event=xxx/u' format,
where 'xxx' is substituted by every event found.

The 'event=xxx' term is translated to the cpu specific term.
We only check that the event is created (not the real config
numbers) and that the modifier is properly set.

Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1349873598-12583-9-git-send-email-jolsa@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
perf tools: Add support to specify hw event as PMU event term
Jiri Olsa [Wed, 10 Oct 2012 12:53:17 +0000 (14:53 +0200)]
perf tools: Add support to specify hw event as PMU event term

Add a way to specify hw event as PMU event term like:

 'cpu/event=cpu-cycles/u'
 'cpu/event=instructions,.../u'
 'cpu/cycles,.../u'

The 'event=cpu-cycles' term is replaced/translated by the hw events
term translation, which is exposed by sysfs 'events' group attribute.

Add parser bits, the rest is already handled by the PMU alias code.

Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1349873598-12583-8-git-send-email-jolsa@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
perf tools: Fix PMU object alias initialization
Jiri Olsa [Wed, 10 Oct 2012 12:53:16 +0000 (14:53 +0200)]
perf tools: Fix PMU object alias initialization

pmu_lookup() should still return PMUs that do not expose the 'events'
group attribute in sysfs. It should also fail when any other error
is hit during the 'events' lookup (i.e. when pmu_aliases() fails).

Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1349873598-12583-7-git-send-email-jolsa@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
perf/x86: Add hardware events translations for Intel P6 cpus
Jiri Olsa [Wed, 10 Oct 2012 12:53:15 +0000 (14:53 +0200)]
perf/x86: Add hardware events translations for Intel P6 cpus

Add support for Intel P6 processors to display 'events' sysfs
directory (/sys/devices/cpu/events/) with hw event translations:

  # ls /sys/devices/cpu/events/
  branch-instructions
  branch-misses
  bus-cycles
  cache-misses
  cache-references
  cpu-cycles
  instructions
  ref-cycles
  stalled-cycles-backend
  stalled-cycles-frontend

Suggested-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1349873598-12583-6-git-send-email-jolsa@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
perf/x86: Add hardware events translations for AMD cpus
Jiri Olsa [Wed, 10 Oct 2012 12:53:14 +0000 (14:53 +0200)]
perf/x86: Add hardware events translations for AMD cpus

Add support for AMD processors to display 'events' sysfs
directory (/sys/devices/cpu/events/) with hw event translations:

  # ls  /sys/devices/cpu/events/
  branch-instructions
  branch-misses
  bus-cycles
  cache-misses
  cache-references
  cpu-cycles
  instructions
  ref-cycles
  stalled-cycles-backend
  stalled-cycles-frontend

Suggested-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1349873598-12583-5-git-send-email-jolsa@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
perf/x86: Add hardware events translations for Intel cpus
Jiri Olsa [Wed, 10 Oct 2012 12:53:13 +0000 (14:53 +0200)]
perf/x86: Add hardware events translations for Intel cpus

Add support for Intel processors to display 'events' sysfs
directory (/sys/devices/cpu/events/) with hw event translations:

  # ls  /sys/devices/cpu/events/
  branch-instructions
  branch-misses
  bus-cycles
  cache-misses
  cache-references
  cpu-cycles
  instructions
  ref-cycles
  stalled-cycles-backend
  stalled-cycles-frontend

Suggested-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1349873598-12583-4-git-send-email-jolsa@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
perf/x86: Filter out undefined events from sysfs events attribute
Jiri Olsa [Wed, 10 Oct 2012 12:53:12 +0000 (14:53 +0200)]
perf/x86: Filter out undefined events from sysfs events attribute

The sysfs events group attribute currently shows all hw events,
including also undefined ones.

This patch filters out all undefined events out of the sysfs events
group attribute, so they don't even show up.

Suggested-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Stephane Eranian <eranian@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1349873598-12583-3-git-send-email-jolsa@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
perf/x86: Make hardware event translations available in sysfs
Jiri Olsa [Wed, 10 Oct 2012 12:53:11 +0000 (14:53 +0200)]
perf/x86: Make hardware event translations available in sysfs

Add support to display hardware events translations available
through the sysfs. Add 'events' group attribute under the sysfs
x86 PMU record with attribute/file for each hardware event.

This patch adds only the backbone for PMUs to display configs under
the 'events' directory. The specific PMU support itself will come
in subsequent patches; this is how the sysfs group will look:

  # ls  /sys/devices/cpu/events/
  branch-instructions
  branch-misses
  bus-cycles
  cache-misses
  cache-references
  cpu-cycles
  instructions
  ref-cycles
  stalled-cycles-backend
  stalled-cycles-frontend

The file - hw event ID mapping is:

  file                      hw event ID
  ---------------------------------------------------------------
  cpu-cycles                PERF_COUNT_HW_CPU_CYCLES
  instructions              PERF_COUNT_HW_INSTRUCTIONS
  cache-references          PERF_COUNT_HW_CACHE_REFERENCES
  cache-misses              PERF_COUNT_HW_CACHE_MISSES
  branch-instructions       PERF_COUNT_HW_BRANCH_INSTRUCTIONS
  branch-misses             PERF_COUNT_HW_BRANCH_MISSES
  bus-cycles                PERF_COUNT_HW_BUS_CYCLES
  stalled-cycles-frontend   PERF_COUNT_HW_STALLED_CYCLES_FRONTEND
  stalled-cycles-backend    PERF_COUNT_HW_STALLED_CYCLES_BACKEND
  ref-cycles                PERF_COUNT_HW_REF_CPU_CYCLES

Each file in the 'events' directory contains the term translation
for the symbolic hw event for the currently running cpu model.

  # cat /sys/devices/cpu/events/stalled-cycles-backend
  event=0xb1,umask=0x01,inv,cmask=0x01
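
The backbone amounts to hanging an 'events' attribute group off the x86
PMU; roughly like this (a sketch with an abbreviated attribute list; the
EVENT_PTR()-style helper names are illustrative):

  static struct attribute *events_attr[] = {
          EVENT_PTR(CPU_CYCLES),
          EVENT_PTR(INSTRUCTIONS),
          /* ... one pointer per hw event listed above ... */
          NULL,
  };

  static struct attribute_group x86_pmu_events_group = {
          .name  = "events",
          .attrs = events_attr,
  };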

Suggested-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Jiri Olsa <jolsa@redhat.com>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Corey Ashford <cjashfor@linux.vnet.ibm.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1349873598-12583-2-git-send-email-jolsa@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge branch 'sched/core'
Ingo Molnar [Wed, 24 Oct 2012 08:32:50 +0000 (10:32 +0200)]
Merge branch 'sched/core'

Conflicts:
kernel/sched/fair.c

Merge branch 'perf/urgent'
Ingo Molnar [Wed, 24 Oct 2012 08:32:26 +0000 (10:32 +0200)]
Merge branch 'perf/urgent'

Merge branch 'perf/core'
Ingo Molnar [Wed, 24 Oct 2012 08:32:16 +0000 (10:32 +0200)]
Merge branch 'perf/core'

Merge branch 'numa/core'
Ingo Molnar [Wed, 24 Oct 2012 08:32:12 +0000 (10:32 +0200)]
Merge branch 'numa/core'

perf/x86: Remove P6 cpuc->enabled check
Vince Weaver [Fri, 19 Oct 2012 21:33:38 +0000 (17:33 -0400)]
perf/x86: Remove P6 cpuc->enabled check

Between 2.6.33 and 2.6.34 the PMU code was made modular.

The x86_pmu_enable() call was extended to disable cpuc->enabled
and iterate the counters, enabling one at a time, before calling
enable_all() at the end, followed by re-enabling cpuc->enabled.

Since cpuc->enabled was set to 0, that change effectively caused
the "val |= ARCH_PERFMON_EVENTSEL_ENABLE;" code in p6_pmu_enable_event()
and p6_pmu_disable_event() to be dead code that was never called.

This change removes this code (which was confusing) and adds some
extra commentary to make it more clear what is going on.

Signed-off-by: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1210191732000.14552@vincent-weaver-1.um.maine.edu
Signed-off-by: Ingo Molnar <mingo@kernel.org>
perf/x86: Update/fix generic events on P6 PMU
Vince Weaver [Fri, 19 Oct 2012 21:31:54 +0000 (17:31 -0400)]
perf/x86: Update/fix generic events on P6 PMU

This patch updates the generic events on p6, including some new
extended cache events.

Values for these events were taken from the equivalent PAPI
predefined events.

Tested on a Pentium II.

Signed-off-by: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1210191730080.14552@vincent-weaver-1.um.maine.edu
Signed-off-by: Ingo Molnar <mingo@kernel.org>
perf/x86: Fix P6 FP_ASSIST event constraint
Vince Weaver [Fri, 19 Oct 2012 21:30:01 +0000 (17:30 -0400)]
perf/x86: Fix P6 FP_ASSIST event constraint

According to Intel SDM Volume 3B, FP_ASSIST is limited to Counter 1 only,
not Counter 0.

Tested on a Pentium II.
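
Sketched with the generic constraint macro (counter mask 0x2 selects
counter 1 only; the 0x11 event code is FP_ASSIST per the SDM):

  INTEL_EVENT_CONSTRAINT(0x11, 0x2),      /* FP_ASSIST: counter 1 only */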

Signed-off-by: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1210191728570.14552@vincent-weaver-1.um.maine.edu
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched: Describe CFS load-balancer
Peter Zijlstra [Tue, 3 Jul 2012 11:53:26 +0000 (13:53 +0200)]
sched: Describe CFS load-balancer

Add some scribbles on how and why the load-balancer works..

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1341316406.23484.64.camel@twins
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched: Introduce temporary FAIR_GROUP_SCHED dependency for load-tracking
Paul Turner [Thu, 4 Oct 2012 11:18:32 +0000 (13:18 +0200)]
sched: Introduce temporary FAIR_GROUP_SCHED dependency for load-tracking

While per-entity load-tracking is generally useful beyond computing the shares
distribution (e.g. for runnable-based load-balancing (in progress), governors,
power management, etc.), these facilities are not yet consumers of this data.

This may be trivially reverted when the information is required, but until then
avoid paying the overhead for calculations we will not use.

Signed-off-by: Paul Turner <pjt@google.com>
Reviewed-by: Ben Segall <bsegall@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120823141507.422162369@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched: Make __update_entity_runnable_avg() fast
Paul Turner [Thu, 4 Oct 2012 11:18:32 +0000 (13:18 +0200)]
sched: Make __update_entity_runnable_avg() fast

__update_entity_runnable_avg forms the core of maintaining an entity's runnable
load average.  In this function we charge the accumulated run-time since last
update and handle appropriate decay.  In some cases, e.g. a waking task, this
time interval may be much larger than our period unit.

Fortunately we can exploit some properties of our series to perform decay for a
blocked update in constant time and account the contribution for a running
update in essentially-constant* time.

[*]: For any running entity they should be performing updates at the tick which
gives us a soft limit of 1 jiffy between updates, and we can compute up to a
32 jiffy update in a single pass.
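
In the terms used above, the property being exploited is (given the
y^32 = 1/2 choice made by the generator program below):

  y^32 = 1/2,  so:  L*y^n = (L >> (n/32)) * y^(n % 32)

i.e. a blocked entity that missed n periods is decayed with one shift plus
one lookup in the precomputed runnable_avg_yN_inv[] table.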

C program to generate the magic constants in the arrays:

  #include <math.h>
  #include <stdio.h>

  #define N 32
  #define WMULT_SHIFT 32

  const long WMULT_CONST = ((1UL << N) - 1);
  double y;

  long runnable_avg_yN_inv[N];
  void calc_mult_inv() {
          int i;
          double yn = 0;

          printf("inverses\n");
          for (i = 0; i < N; i++) {
                  yn = (double)WMULT_CONST * pow(y, i);
                  runnable_avg_yN_inv[i] = yn;
                  printf("%2d: 0x%8lx\n", i, runnable_avg_yN_inv[i]);
          }
          printf("\n");
  }

  long mult_inv(long c, int n) {
          return (c * runnable_avg_yN_inv[n]) >> WMULT_SHIFT;
  }

  void calc_yn_sum(int n)
  {
          int i;
          double sum = 0, sum_fl = 0, diff = 0;

          /*
           * We take the floored sum to ensure the sum of partial sums is never
           * larger than the actual sum.
           */
          printf("sum y^n\n");
          printf("   %8s  %8s %8s\n", "exact", "floor", "error");
          for (i = 1; i <= n; i++) {
                  sum = (y * sum + y * 1024);
                  sum_fl = floor(y * sum_fl + y * 1024);
                  printf("%2d: %8.0f  %8.0f %8.0f\n", i, sum, sum_fl,
                         sum_fl - sum);
          }
          printf("\n");
  }

  void calc_conv(long n) {
          long old_n;
          int i = -1;

          printf("convergence (LOAD_AVG_MAX, LOAD_AVG_MAX_N)\n");
          do {
                  old_n = n;
                  n = mult_inv(n, 1) + 1024;
                  i++;
          } while (n != old_n);
          printf("%d> %ld\n", i - 1, n);
          printf("\n");
  }

  void main() {
          y = pow(0.5, 1/(double)N);
          calc_mult_inv();
          calc_conv(1024);
          calc_yn_sum(N);
  }

[ Compile with -lm ]
Signed-off-by: Paul Turner <pjt@google.com>
Reviewed-by: Ben Segall <bsegall@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120823141507.277808946@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched: Update_cfs_shares at period edge
Paul Turner [Thu, 4 Oct 2012 11:18:31 +0000 (13:18 +0200)]
sched: Update_cfs_shares at period edge

Now that our measurement intervals are small (~1ms) we can amortize the posting
of update_shares() to roughly once per period overflow.  This is a large cost
saving for frequently switching tasks.

Signed-off-by: Paul Turner <pjt@google.com>
Reviewed-by: Ben Segall <bsegall@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120823141507.200772172@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched: Refactor update_shares_cpu() -> update_blocked_avgs()
Paul Turner [Thu, 4 Oct 2012 11:18:31 +0000 (13:18 +0200)]
sched: Refactor update_shares_cpu() -> update_blocked_avgs()

Now that running entities maintain their own load-averages the work we must do
in update_shares() is largely restricted to the periodic decay of blocked
entities.  This allows us to be a little less pessimistic regarding our
occupancy on rq->lock and the associated rq->clock updates required.

Signed-off-by: Paul Turner <pjt@google.com>
Reviewed-by: Ben Segall <bsegall@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120823141507.133999170@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched: Replace update_shares weight distribution with per-entity computation
Paul Turner [Thu, 4 Oct 2012 11:18:31 +0000 (13:18 +0200)]
sched: Replace update_shares weight distribution with per-entity computation

Now that the machinery is in place to compute contributed load in a
bottom-up fashion, replace the shares distribution code within update_shares()
accordingly.

Signed-off-by: Paul Turner <pjt@google.com>
Reviewed-by: Ben Segall <bsegall@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120823141507.061208672@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched: Maintain runnable averages across throttled periods
Paul Turner [Thu, 4 Oct 2012 11:18:31 +0000 (13:18 +0200)]
sched: Maintain runnable averages across throttled periods

With bandwidth control, tracked entities may cease execution according to
user-specified bandwidth limits.  Charging this time as either throttled or
blocked, however, is incorrect and would falsely skew in either direction.

What we actually want is for any throttled periods to be "invisible" to
load-tracking as they are removed from the system for that interval and
contribute normally otherwise.

Do this by moderating the progression of time to omit any periods in which the
entity belonged to a throttled hierarchy.

Signed-off-by: Paul Turner <pjt@google.com>
Reviewed-by: Ben Segall <bsegall@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120823141506.998912151@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched: Normalize tg load contributions against runnable time
Paul Turner [Thu, 4 Oct 2012 11:18:31 +0000 (13:18 +0200)]
sched: Normalize tg load contributions against runnable time

Entities of equal weight should receive equitable distribution of cpu time.
This is challenging in the case of a task_group's shares as execution may be
occurring on multiple cpus simultaneously.

To handle this we divide up the shares into weights proportionate with the load
on each cfs_rq.  This does not, however, account for the fact that the sum of
the parts may be less than one cpu and so we need to normalize:
  load(tg) = min(runnable_avg(tg), 1) * tg->shares
Where runnable_avg is the aggregate time in which the task_group had runnable
children.

Signed-off-by: Paul Turner <pjt@google.com>
Reviewed-by: Ben Segall <bsegall@google.com>.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120823141506.930124292@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched: Compute load contribution by a group entity
Paul Turner [Thu, 4 Oct 2012 11:18:31 +0000 (13:18 +0200)]
sched: Compute load contribution by a group entity

Unlike task entities who have a fixed weight, group entities instead own a
fraction of their parenting task_group's shares as their contributed weight.

Compute this fraction so that we can correctly account hierarchies and shared
entity nodes.

Signed-off-by: Paul Turner <pjt@google.com>
Reviewed-by: Ben Segall <bsegall@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120823141506.855074415@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched: Aggregate total task_group load
Paul Turner [Thu, 4 Oct 2012 11:18:30 +0000 (13:18 +0200)]
sched: Aggregate total task_group load

Maintain a global running sum of the average load seen on each cfs_rq belonging
to each task group so that it may be used in calculating an appropriate
shares:weight distribution.

Signed-off-by: Paul Turner <pjt@google.com>
Reviewed-by: Ben Segall <bsegall@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120823141506.792901086@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched: Account for blocked load waking back up
Paul Turner [Thu, 4 Oct 2012 11:18:30 +0000 (13:18 +0200)]
sched: Account for blocked load waking back up

When a running entity blocks we migrate its tracked load to
cfs_rq->blocked_runnable_avg.  In the sleep case this occurs while holding
rq->lock and so is a natural transition.  Wake-ups however, are potentially
asynchronous in the presence of migration and so special care must be taken.

We use an atomic counter to track such migrated load, taking care to match this
with the previously introduced decay counters so that we don't migrate too much
load.

Signed-off-by: Paul Turner <pjt@google.com>
Reviewed-by: Ben Segall <bsegall@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120823141506.726077467@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched: Add an rq migration call-back to sched_class
Paul Turner [Thu, 4 Oct 2012 11:18:30 +0000 (13:18 +0200)]
sched: Add an rq migration call-back to sched_class

Since we are now doing bottom up load accumulation we need explicit
notification when a task has been re-parented so that the old hierarchy can be
updated.

Adds: migrate_task_rq(struct task_struct *p, int next_cpu)

(The alternative is to do this out of __set_task_cpu, but it was suggested that
this would be a cleaner encapsulation.)

Signed-off-by: Paul Turner <pjt@google.com>
Reviewed-by: Ben Segall <bsegall@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120823141506.660023400@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched: Maintain the load contribution of blocked entities
Paul Turner [Thu, 4 Oct 2012 11:18:30 +0000 (13:18 +0200)]
sched: Maintain the load contribution of blocked entities

We are currently maintaining:

  runnable_load(cfs_rq) = \Sum task_load(t)

For all running children t of cfs_rq.  While this can be naturally updated for
tasks in a runnable state (as they are scheduled); this does not account for
the load contributed by blocked task entities.

This can be solved by introducing a separate accounting for blocked load:

  blocked_load(cfs_rq) = \Sum runnable(b) * weight(b)

Obviously we do not want to iterate over all blocked entities to account for
their decay, we instead observe that:

  runnable_load(t) = \Sum p_i*y^i

and that to account for an additional idle period we only need to compute:

  y*runnable_load(t).

This means that we can compute all blocked entities at once by evaluating:

  blocked_load(cfs_rq)` = y * blocked_load(cfs_rq)

Finally we maintain a decay counter so that when a sleeping entity re-awakens
we can determine how much of its load should be removed from the blocked sum.

Signed-off-by: Paul Turner <pjt@google.com>
Reviewed-by: Ben Segall <bsegall@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120823141506.585389902@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched: Aggregate load contributed by task entities on parenting cfs_rq
Paul Turner [Thu, 4 Oct 2012 11:18:30 +0000 (13:18 +0200)]
sched: Aggregate load contributed by task entities on parenting cfs_rq

For a given task t, we can compute its contribution to load as:

  task_load(t) = runnable_avg(t) * weight(t)

On a parenting cfs_rq we can then aggregate:

  runnable_load(cfs_rq) = \Sum task_load(t), for all runnable children t

Maintain this bottom up, with task entities adding their contributed load to
the parenting cfs_rq sum.  When a task entity's load changes we add the same
delta to the maintained sum.

Signed-off-by: Paul Turner <pjt@google.com>
Reviewed-by: Ben Segall <bsegall@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120823141506.514678907@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched: Maintain per-rq runnable averages
Ben Segall [Thu, 4 Oct 2012 10:51:20 +0000 (12:51 +0200)]
sched: Maintain per-rq runnable averages

Since runqueues do not have a corresponding sched_entity we instead embed a
sched_avg structure directly.

Signed-off-by: Ben Segall <bsegall@google.com>
Reviewed-by: Paul Turner <pjt@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120823141506.442637130@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
sched: Track the runnable average on a per-task entity basis
Paul Turner [Thu, 4 Oct 2012 11:18:29 +0000 (13:18 +0200)]
sched: Track the runnable average on a per-task entity basis

Instead of tracking the average of the load parented by a cfs_rq, we can track
entity load directly, with the load for a given cfs_rq then being the sum
of its children.

To do this we represent the historical contribution to runnable average
within each trailing 1024us of execution as the coefficients of a
geometric series.

We can express this for a given task t as:

  runnable_sum(t) = \Sum u_i * y^i, runnable_avg_period(t) = \Sum 1024 * y^i
  load(t) = weight_t * runnable_sum(t) / runnable_avg_period(t)

Where: u_i is the usage in the last i-th 1024us period (approximately 1ms)
and y is chosen such that y^k = 1/2.  We currently choose k to be 32, which
roughly translates to about a sched period.

Signed-off-by: Paul Turner <pjt@google.com>
Reviewed-by: Ben Segall <bsegall@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/20120823141506.372695337@google.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge branch 'perf/urgent' into perf/core
Ingo Molnar [Wed, 24 Oct 2012 08:20:57 +0000 (10:20 +0200)]
Merge branch 'perf/urgent' into perf/core

Pick up v3.7-rc2 and fixes before applying more patches.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
sysctl/sched: Fix 'defined but not used' warning
Peter Zijlstra [Tue, 23 Oct 2012 15:47:12 +0000 (17:47 +0200)]
sysctl/sched: Fix 'defined but not used' warning

Since commit ("sched/numa: Implement NUMA home-node selection code")
building a kernel with CONFIG_SMP disabled causes the following
warnings:

  kernel/sysctl.c:259:12: warning: 'min_sched_tunable_scaling' defined but not used [-Wunused-variable]
  kernel/sysctl.c:260:12: warning: 'max_sched_tunable_scaling' defined but not used [-Wunused-variable]
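
The fix amounts to only compiling the two limits when they are used; a sketch:

  #ifdef CONFIG_SMP
  static int min_sched_tunable_scaling = SCHED_TUNABLESCALING_NONE;
  static int max_sched_tunable_scaling = SCHED_TUNABLESCALING_END-1;
  #endif /* CONFIG_SMP */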

Reported-by: Fabio Estevam <fabio.estevam@freescale.com>
[ Ingo preferred extra #ifdef variant over the __maybe_unused ]
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-p9w5w57ylinrj9zakvhc5zay@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge branch 'timers/urgent'
Ingo Molnar [Wed, 24 Oct 2012 08:17:02 +0000 (10:17 +0200)]
Merge branch 'timers/urgent'

timers, sched: Correct the comments for tick_sched_timer()
Chuansheng Liu [Wed, 24 Oct 2012 17:07:35 +0000 (01:07 +0800)]
timers, sched: Correct the comments for tick_sched_timer()

In the comments of tick_sched_timer(), the sentence
"timer->base->cpu_base->lock held" is not right.

In __run_hrtimer(), before calling timer->function(),
the cpu_base->lock has already been unlocked.
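
For reference, the relevant part of __run_hrtimer(), abridged (the lock is
dropped around the callback):

  raw_spin_unlock(&cpu_base->lock);
  restart = fn(timer);                    /* tick_sched_timer() runs here, unlocked */
  raw_spin_lock(&cpu_base->lock);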

Signed-off-by: liu chuansheng <chuansheng.liu@intel.com>
Cc: fei.li@intel.com
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1351098455.15558.1421.camel@cliu38-desktop-build
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge branch 'perf/urgent'
Ingo Molnar [Wed, 24 Oct 2012 08:02:16 +0000 (10:02 +0200)]
Merge branch 'perf/urgent'

perf, cpu hotplug: Use cached value of smp_processor_id()
Srivatsa S. Bhat [Tue, 16 Oct 2012 07:58:17 +0000 (13:28 +0530)]
perf, cpu hotplug: Use cached value of smp_processor_id()

The perf_cpu_notifier() macro invokes smp_processor_id()
multiple times. Optimize it by using a local variable.
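
The idea, sketched on one of the notifier invocations inside the macro (not
the full perf_cpu_notifier() body):

  unsigned int __cpu = smp_processor_id();        /* read once ... */

  fn(&fn##_nb, (unsigned long)CPU_UP_PREPARE,
     (void *)(unsigned long)__cpu);               /* ... and reuse it everywhere */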

Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: peterz@infradead.org
Cc: acme@ghostprotocols.net
Link: http://lkml.kernel.org/r/20121016075817.3572.76733.stgit@srivatsabhat.in.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
perf, cpu hotplug: Run CPU_STARTING notifiers with irqs disabled
Srivatsa S. Bhat [Tue, 16 Oct 2012 07:58:10 +0000 (13:28 +0530)]
perf, cpu hotplug: Run CPU_STARTING notifiers with irqs disabled

The CPU_STARTING notifiers are supposed to be run with irqs
disabled. But the perf_cpu_notifier() macro invokes them without
doing that. Fix it.
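
The shape of the fix, sketched on the CPU_STARTING invocation inside the macro:

  unsigned long flags;

  local_irq_save(flags);                          /* CPU_STARTING expects irqs off */
  fn(&fn##_nb, (unsigned long)CPU_STARTING,
     (void *)(unsigned long)cpu);                 /* 'cpu' as obtained earlier in the macro */
  local_irq_restore(flags);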

Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: peterz@infradead.org
Cc: acme@ghostprotocols.net
Link: http://lkml.kernel.org/r/20121016075809.3572.47848.stgit@srivatsabhat.in.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge branch 'x86/urgent'
Ingo Molnar [Wed, 24 Oct 2012 07:38:35 +0000 (09:38 +0200)]
Merge branch 'x86/urgent'

Revert "x86/mm: Fix the size calculation of mapping tables"
Dave Young [Thu, 18 Oct 2012 06:33:23 +0000 (14:33 +0800)]
Revert "x86/mm: Fix the size calculation of mapping tables"

Commit:

   722bc6b16771 x86/mm: Fix the size calculation of mapping tables

Tried to address the issue that the first 2/4M should use 4k pages
if PSE is enabled, but the extra counts should only be valid for x86_32.

This commit caused a kdump regression: the kdump kernel hangs.

Work is in progress to fundamentally fix the various page table
initialization issues that we have, via the design suggested
by H. Peter Anvin, but it's not ready yet to be merged.

So, to get a working kdump revert to the last known working version,
which is the revert of this commit and of a followup fix (which was
incomplete):

   bd2753b2dda7 x86/mm: Only add extra pages count for the first memory range during pre-allocation

Tested kdump on physical and virtual machines.

Signed-off-by: Dave Young <dyoung@redhat.com>
Acked-by: Yinghai Lu <yinghai@kernel.org>
Acked-by: Cong Wang <xiyou.wangcong@gmail.com>
Acked-by: Flavio Leitner <fbl@redhat.com>
Tested-by: Flavio Leitner <fbl@redhat.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Cong Wang <xiyou.wangcong@gmail.com>
Cc: Flavio Leitner <fbl@redhat.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: ianfang.cn@gmail.com
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: <stable@kernel.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge branch 'numa/core'
Ingo Molnar [Wed, 24 Oct 2012 07:11:28 +0000 (09:11 +0200)]
Merge branch 'numa/core'

numa, sched: Eliminate unused functions
Ingo Molnar [Wed, 24 Oct 2012 07:04:41 +0000 (09:04 +0200)]
numa, sched: Eliminate unused functions

Andrew Morton reported these allnoconfig warnings:

  kernel/sched/fair.c:800: warning: 'task_h_load' declared 'static' but never defined
  kernel/sched/fair.c:1004: warning: 'account_numa_enqueue' defined but not used

These are only used on CONFIG_SMP - fix it.

We should eventually resolve the Kconfig complexities here by turning
SMP (and NUMA) scheduling either into a separate source code file, or
by creating a single-model scheduler, which happens to build to a small
object file on !CONFIG_SMP or !CONFIG_NUMA kernels not via #ifdefs but
via more clever build time code elimination and zero-size data fields.

That's not a simple patch.

Reported-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Link: http://lkml.kernel.org/n/tip-wyctbug9qKulTs0umsxjyixi@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge branch 'perf/urgent'
Ingo Molnar [Wed, 24 Oct 2012 06:53:57 +0000 (08:53 +0200)]
Merge branch 'perf/urgent'

Merge branch 'numa/core'
Ingo Molnar [Wed, 24 Oct 2012 06:53:53 +0000 (08:53 +0200)]
Merge branch 'numa/core'

x86/perf: Fix virtualization sanity check
Andre Przywara [Tue, 9 Oct 2012 15:38:35 +0000 (17:38 +0200)]
x86/perf: Fix virtualization sanity check

In check_hw_exists() we try to detect non-emulated MSR accesses
by writing an arbitrary value into one of the PMU registers
and checking whether its value after a readout is still the same.
This algorithm silently assumes that the register does not contain
the magic value already, which is wrong in at least one situation.

Fix the algorithm to really do a read-modify-write cycle. This fixes
a warning under Xen under some circumstances on AMD family 10h CPUs.

The reasons, in more detail, actually sound like a story from
Believe It or Not!:

First you need an AMD family 10h/12h CPU. These do not reset the
PERF_CTR registers on a reboot.
Now you boot bare metal Linux, which goes successfully through this
check, but leaves the magic value of 0xabcd in the register. You
don't use the performance counters, but do a reboot (warm reset).
Then you choose to boot Xen. The check will be triggered with a
recent Linux kernel as Dom0 again, trying to write 0xabcd into the
MSR. Xen silently drops the write (expected), but the subsequent read
will return the value in the register, which just happens to be the
expected magic value. Thus the test misleadingly succeeds, leaving
the kernel in the belief that the PMU is available. This will trigger
the following message:

[    0.020294] ------------[ cut here ]------------
[    0.020311] WARNING: at arch/x86/xen/enlighten.c:730 xen_apic_write+0x15/0x17()
[    0.020318] Hardware name: empty
[    0.020323] Modules linked in:
[    0.020334] Pid: 1, comm: swapper/0 Not tainted 3.3.8 #7
[    0.020340] Call Trace:
[    0.020354]  [<ffffffff81050379>] warn_slowpath_common+0x80/0x98
[    0.020369]  [<ffffffff810503a6>] warn_slowpath_null+0x15/0x17
[    0.020378]  [<ffffffff810034df>] xen_apic_write+0x15/0x17
[    0.020392]  [<ffffffff8101cb2b>] perf_events_lapic_init+0x2e/0x30
[    0.020410]  [<ffffffff81ee4dd0>] init_hw_perf_events+0x250/0x407
[    0.020419]  [<ffffffff81ee4b80>] ? check_bugs+0x2d/0x2d
[    0.020430]  [<ffffffff81002181>] do_one_initcall+0x7a/0x131
[    0.020444]  [<ffffffff81edbbf9>] kernel_init+0x91/0x15d
[    0.020456]  [<ffffffff817caaa4>] kernel_thread_helper+0x4/0x10
[    0.020471]  [<ffffffff817c347c>] ? retint_restore_args+0x5/0x6
[    0.020481]  [<ffffffff817caaa0>] ? gs_change+0x13/0x13
[    0.020500] ---[ end trace a7919e7f17c0a725 ]---

The new code changes every one of the 16 low bits read from the
register and tries to write and read back that modified number
from the MSR.
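
A sketch of such a read-modify-write probe (illustrative, not the literal
patch; the MSR choice and error path are simplified):

  u64 val, val_new;

  if (rdmsrl_safe(MSR_ARCH_PERFMON_PERFCTR0, &val))
          goto msr_fail;
  val ^= 0xffffUL;                /* flip the low 16 bits: never write back the old value */
  if (wrmsrl_safe(MSR_ARCH_PERFMON_PERFCTR0, val))
          goto msr_fail;
  if (rdmsrl_safe(MSR_ARCH_PERFMON_PERFCTR0, &val_new) || val_new != val)
          goto msr_fail;          /* the write was dropped: no usable PMU */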

Signed-off-by: Andre Przywara <andre.przywara@amd.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Avi Kivity <avi@redhat.com>
Link: http://lkml.kernel.org/r/1349797115-28346-2-git-send-email-andre.przywara@amd.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
numa, sched, mm: Fix NULL-ptr deref
Peter Zijlstra [Mon, 22 Oct 2012 17:21:32 +0000 (19:21 +0200)]
numa, sched, mm: Fix NULL-ptr deref

Dan reported that there's a possible NULL pointer deref in this logic;
fix that. Further fix it to avoid a possible infinite loop (completely
unlikely in the case where no vma is migratable). Also don't endlessly loop
on a large length; simply truncate at the end of the address space and
restart on the next go.

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Link: http://lkml.kernel.org/n/tip-mnkio02xxtttiepsg9ek6qkw@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
numa, sched: Implement slow start for working set sampling
Peter Zijlstra [Mon, 22 Oct 2012 18:15:40 +0000 (20:15 +0200)]
numa, sched: Implement slow start for working set sampling

Add a 1 second delay before starting to scan the working set of
a task and starting to balance it amongst nodes.

The theory is that short-run tasks benefit very little from NUMA
placement: they come and go, and they better stick to the node
they were started on. As tasks mature and rebalance to other CPUs
and nodes, so does their NUMA placement have to change and so
does it start to matter more and more.

In practice this change fixes an observable kbuild regression:

   # [ a perf stat --null --repeat 10 test of ten bzImage builds to /dev/shm ]

   !NUMA:
   45.291088843 seconds time elapsed                                          ( +-  0.40% )
   45.154231752 seconds time elapsed                                          ( +-  0.36% )

   +NUMA, no slow start:
   46.172308123 seconds time elapsed                                          ( +-  0.30% )
   46.343168745 seconds time elapsed                                          ( +-  0.25% )

   +NUMA, 1 sec slow start:
   45.224189155 seconds time elapsed                                          ( +-  0.25% )
   45.160866532 seconds time elapsed                                          ( +-  0.17% )

and it also fixes an observable perf bench (hackbench) regression:

   # perf stat --null --repeat 10 perf bench sched messaging

   -NUMA:                  0.246225691 seconds time elapsed                   ( +-  1.31% )
   +NUMA no slow start:    0.252620063 seconds time elapsed                   ( +-  1.13% )
   +NUMA 1sec delay:       0.248076230 seconds time elapsed                   ( +-  1.35% )

The implementation is simple and straightforward, most of the patch
deals with adding the /proc/sys/kernel/sched_numa_scan_delay_ms tunable
knob and with renaming task_period to scan_period.
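
The core of the delay is a single early-out in the scanning path; a sketch
(the identifier names here are guesses derived from the knob name, since
this code lives in the numa/core branch):

  /* don't start working-set scanning until the task is old enough */
  if (time_before(jiffies, p->node_stamp +
                           msecs_to_jiffies(sysctl_sched_numa_scan_delay)))
          return;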

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Link: http://lkml.kernel.org/n/tip-vn7p3ynbwqt3qqewhdlvjltc@git.kernel.org
[ Wrote the changelog, ran measurements, tuned the default. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge branch 'perf/urgent'
Ingo Molnar [Wed, 24 Oct 2012 05:50:28 +0000 (07:50 +0200)]
Merge branch 'perf/urgent'

Merge tag 'perf-urgent-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git...
Ingo Molnar [Wed, 24 Oct 2012 05:47:40 +0000 (07:47 +0200)]
Merge tag 'perf-urgent-for-mingo' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/urgent

Pull perf/urgent fixes from Arnaldo Carvalho de Melo:

 * Validate syscall id before growing syscall table in 'trace', fixing potential
   excessive memory usage.

 * Validate perf_sample.raw_data, making 'trace' more robust, avoiding some
   potential SEGFAULTs when reading tracepoint fields.

 * Fix exclude_guest parse events 'perf test's, from Jiri Olsa.

 * Do not flush maps on COMM, that is sent by the kernel when a process is
   exec'ed, but also when a process changes its name. Since we were assuming
   a COMM always meant an EXEC, we were losing track of a process maps by
   flushing its maps. Fix from Luigi Semenzato.

 * A recent patch introduced a problem by not initializing what should be
   the first kind of pager to use, 'man', instead it was being left as zero
   which means no pager. This caused 'perf subcmd --help' to produce no output.
   Fix from Namhyung Kim.

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Merge branch 'linus'
Ingo Molnar [Wed, 24 Oct 2012 05:30:44 +0000 (07:30 +0200)]
Merge branch 'linus'

Merge tag 'stable/for-linus-3.7-rc2-tag' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Wed, 24 Oct 2012 02:17:27 +0000 (05:17 +0300)]
Merge tag 'stable/for-linus-3.7-rc2-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen

Pull xen bug-fixes from Konrad Rzeszutek Wilk:
 - Fix mysterious SIGSEGV or SIGKILL in applications due to corrupting
   of the %eip when returning from a signal handler.
 - Fix various ARM compile issues after the merge fallout.
 - Continue on making more of the Xen generic code usable by ARM
   platform.
 - Fix SR-IOV passthrough to mirror multifunction PCI devices.
 - Fix various compile warnings.
 - Remove hypercalls that don't exist anymore.

* tag 'stable/for-linus-3.7-rc2-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen:
  xen: dbgp: Fix warning when CONFIG_PCI is not enabled.
  xen: arm: comment on why 64-bit xen_pfn_t is safe even on 32 bit
  xen: balloon: use correct type for frame_list
  xen/x86: don't corrupt %eip when returning from a signal handler
  xen: arm: make p2m operations NOPs
  xen: balloon: don't include e820.h
  xen: grant: use xen_pfn_t type for frame_list.
  xen: events: pirq_check_eoi_map is X86 specific
  xen: XENMEM_translate_gpfn_list was remove ages ago and is unused.
  xen: sysfs: fix build warning.
  xen: sysfs: include err.h for PTR_ERR etc
  xen: xenbus: quirk uses x86 specific cpuid
  xen PV passthru: assign SR-IOV virtual functions to separate virtual slots
  xen/xenbus: Fix compile warning.
  xen/x86: remove duplicated include from enlighten.c

alpha: separate thread-synchronous flags
Al Viro [Sat, 20 Oct 2012 14:52:23 +0000 (15:52 +0100)]
alpha: separate thread-synchronous flags

... and fix the race in updating unaligned control ones

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Merge tag 'kvm-3.7-2' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Linus Torvalds [Wed, 24 Oct 2012 01:08:42 +0000 (04:08 +0300)]
Merge tag 'kvm-3.7-2' of git://git.kernel.org/pub/scm/virt/kvm/kvm

Pull kvm fixes from Avi Kivity:
 "KVM updates for 3.7-rc2"

* tag 'kvm-3.7-2' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
  KVM guest: exit idleness when handling KVM_PV_REASON_PAGE_NOT_PRESENT
  KVM: apic: fix LDR calculation in x2apic mode
  KVM: MMU: fix release noslot pfn

Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Wed, 24 Oct 2012 01:07:51 +0000 (04:07 +0300)]
Merge branch 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull perf fixes from Ingo Molnar:
 "Most of these are uprobes race fixes from Oleg, and their preparatory
  cleanups.  (It's larger than what I'd normally send for an -rc kernel,
  but they looked significant enough to not delay them.)

  There's also an oprofile fix and an uncore PMU fix."

* 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (22 commits)
  perf/x86: Disable uncore on virtualized CPUs
  oprofile, x86: Fix wrapping bug in op_x86_get_ctrl()
  ring-buffer: Check for uninitialized cpu buffer before resizing
  uprobes: Fix the racy uprobe->flags manipulation
  uprobes: Fix prepare_uprobe() race with itself
  uprobes: Introduce prepare_uprobe()
  uprobes: Fix handle_swbp() vs unregister() + register() race
  uprobes: Do not delete uprobe if uprobe_unregister() fails
  uprobes: Don't return success if alloc_uprobe() fails
  uprobes/x86: Only rep+nop can be emulated correctly
  uprobes: Simplify is_swbp_at_addr(), remove stale comments
  uprobes: Kill set_orig_insn()->is_swbp_at_addr()
  uprobes: Introduce copy_opcode(), kill read_opcode()
  uprobes: Kill set_swbp()->is_swbp_at_addr()
  uprobes: Restrict valid_vma(false) to skip VM_SHARED vmas
  uprobes: Change valid_vma() to demand VM_MAYEXEC rather than VM_EXEC
  uprobes: Change write_opcode() to use FOLL_FORCE
  uprobes: Move clear_thread_flag(TIF_UPROBE) to uprobe_notify_resume()
  uprobes: Kill UTASK_BP_HIT state
  uprobes: Fix UPROBE_SKIP_SSTEP checks in handle_swbp()
  ...

12 years agoMerge branch 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Wed, 24 Oct 2012 01:07:02 +0000 (04:07 +0300)]
Merge branch 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull core kernel fixes from Ingo Molnar:
 "Two small fixes"

* 'core-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  Documentation: Reflect the new location of the NMI watchdog info
  nohz: Fix idle ticks in cpu summary line of /proc/stat

12 years agoMerge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
Linus Torvalds [Wed, 24 Oct 2012 01:05:56 +0000 (04:05 +0300)]
Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux

Pull s390 fixes from Martin Schwidefsky:
 "Among the usual minor bug fixes the more interesting patches are the
  perf counters for the latest machine, the missing select to enable
  transparent huge pages and a build fix for the UAPI rework."

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
  s390,uapi: do not use uapi/asm-generic/kvm_para.h
  s390/cache: fix data/instruction cache output
  s390: fix linker script for 31 bit builds
  s390/thp: select HAVE_ARCH_TRANSPARENT_HUGEPAGE
  s390/kdump: Use 64 bit mode for 0x10000 entry point
  perf_cpum_cf: Add support for counters available with IBM zEC12
  s390/css: stop stsch loop after cc 3
  s390/cio: use generic bitmap functions
  s390/chpid: make headers usable (again)

12 years agoMerge branch 'stable' of git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux...
Linus Torvalds [Wed, 24 Oct 2012 01:05:15 +0000 (04:05 +0300)]
Merge branch 'stable' of git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile

Pull tile fixes from Chris Metcalf:
 "This fixes one issue with compiler flags that can cause modules not to
  load, and cleans up some warnings with ELF_R_xxx defines."

* 'stable' of git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile:
  arch/tile: avoid build warnings from duplicate ELF_R_xxx #defines
  arch/tile: avoid generating .eh_frame information in modules

12 years agoMerge tag 'please-pull-uapi-fix' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Wed, 24 Oct 2012 01:03:21 +0000 (04:03 +0300)]
Merge tag 'please-pull-uapi-fix' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux

Pull ia64 fix from Tony Luck:
 "Fix from dhowells for UAPI fallout"

* tag 'please-pull-uapi-fix' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux:
  UAPI: Make arch/ia64/include/asm/kvm_para.h generic

12 years agoMerge branch 'linus'
Ingo Molnar [Tue, 23 Oct 2012 14:39:24 +0000 (16:39 +0200)]
Merge branch 'linus'

12 years agoarch/tile: avoid build warnings from duplicate ELF_R_xxx #defines
Chris Metcalf [Fri, 19 Oct 2012 20:29:43 +0000 (16:29 -0400)]
arch/tile: avoid build warnings from duplicate ELF_R_xxx #defines

These are now provided in <asm-generic/module.h>, so clean up warnings
by not re-defining them in module.c.

Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
12 years agoarch/tile: avoid generating .eh_frame information in modules
Chris Metcalf [Fri, 19 Oct 2012 15:43:11 +0000 (11:43 -0400)]
arch/tile: avoid generating .eh_frame information in modules

The tile tool chain uses the .eh_frame information for backtracing.
The vmlinux build drops any .eh_frame sections at link time, but when
present in kernel modules, it causes a module load failure due to the
presence of unsupported pc-relative relocations.  When compiling to
use compiler feedback support, the compiler by default omits .eh_frame
information, so we don't see this problem.  But when not using feedback,
we need to explicitly suppress the .eh_frame.

Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
Cc: stable@vger.kernel.org
12 years agoMerge branch 'numa/core'
Ingo Molnar [Tue, 23 Oct 2012 09:54:13 +0000 (11:54 +0200)]
Merge branch 'numa/core'

12 years agonuma, mm, sched: Use down_write() in task_numa_work()
Ingo Molnar [Sat, 20 Oct 2012 21:06:00 +0000 (23:06 +0200)]
numa, mm, sched: Use down_write() in task_numa_work()

change_protection() needs to be called with the mmap_sem write-locked,
as the mprotect() variants do.

With that in place we can avoid the intrusive (and partially
incorrect) page locking changes in the:

   "numa, mm: Fix 4K migration races"

patch, because the down_write() will properly serialize with the
down_read() page fault path.

Keep the cleanups and debug code removal.

In theory calling change_protection() with just down_read()
should work, but in practice it seems messy.
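
A minimal userspace analogue of the serialization argument above, with a
pthread rwlock standing in for mmap_sem; the names are illustrative only,
not the kernel API:

   #include <pthread.h>
   #include <stdio.h>

   static pthread_rwlock_t mmap_sem = PTHREAD_RWLOCK_INITIALIZER;
   static int prot;                        /* stand-in for protection bits */

   /* page fault path: takes the lock for reading */
   static void *fault_path(void *arg)
   {
       (void)arg;
       pthread_rwlock_rdlock(&mmap_sem);
       printf("fault sees prot=%d\n", prot);
       pthread_rwlock_unlock(&mmap_sem);
       return NULL;
   }

   /* change_protection() analogue: the writer excludes all readers */
   static void change_protection(int new_prot)
   {
       pthread_rwlock_wrlock(&mmap_sem);
       prot = new_prot;                    /* no reader sees a half-done update */
       pthread_rwlock_unlock(&mmap_sem);
   }

   int main(void)
   {
       pthread_t t;

       pthread_create(&t, NULL, fault_path, NULL);
       change_protection(1);
       pthread_join(t, NULL);
       return 0;
   }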

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Link: http://lkml.kernel.org/n/tip-g3xyfmqqmmpubhcdww2TrbLc@git.kernel.org
12 years agonuma, mm: Rename the PROT_NONE fault handling functions to *_numa()
Rik van Riel [Thu, 18 Oct 2012 21:20:21 +0000 (17:20 -0400)]
numa, mm: Rename the PROT_NONE fault handling functions to *_numa()

Having the function name indicate what the function is used
for makes the code a little easier to read.  Furthermore,
the fault handling code largely consists of do_...._page
functions.

Rename the NUMA working set sampling fault handling functions
to _numa() names, to indicate what they are used for.

This separates the naming from the regular PROT_NONE names.

Signed-off-by: Rik van Riel <riel@redhat.com>
Cc: aarcange@redhat.com
Cc: a.p.zijlstra@chello.nl
Link: http://lkml.kernel.org/r/20121018172021.0b1f6e3d@cuia.bos.redhat.com
[ Converted two more usage sites ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
12 years agosched, numa: Add NUMA_MIGRATION feature flag
Ingo Molnar [Sat, 20 Oct 2012 20:20:19 +0000 (22:20 +0200)]
sched, numa: Add NUMA_MIGRATION feature flag

After this patch, doing:

   # echo NO_NUMA_MIGRATION > /sys/kernel/debug/sched_features

will turn off the NUMA placement logic/policy - but keep the
working set sampling faults in place.

This allows the WSS facility to be debugged by using it
while keeping vanilla, non-NUMA CPU and memory placement
policies.

Default enabled. Generates no extra code on !CONFIG_SCHED_DEBUG.

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Link: http://lkml.kernel.org/n/tip-xjt7bqjlphxRfjXxasqm4cdv@git.kernel.org
12 years agoMerge branch 'timers/core'
Ingo Molnar [Tue, 23 Oct 2012 09:50:21 +0000 (11:50 +0200)]
Merge branch 'timers/core'

12 years agoMerge branch 'x86/urgent'
Ingo Molnar [Tue, 23 Oct 2012 09:50:17 +0000 (11:50 +0200)]
Merge branch 'x86/urgent'

12 years agoMerge branch 'perf/urgent'
Ingo Molnar [Tue, 23 Oct 2012 09:50:10 +0000 (11:50 +0200)]
Merge branch 'perf/urgent'

12 years agoMerge branch 'core/urgent'
Ingo Molnar [Tue, 23 Oct 2012 09:49:57 +0000 (11:49 +0200)]
Merge branch 'core/urgent'

12 years agoMerge branch 'numa/misc'
Ingo Molnar [Tue, 23 Oct 2012 09:45:06 +0000 (11:45 +0200)]
Merge branch 'numa/misc'

12 years agox86, mm: Prevent gcc to re-read the pagetables
Andrea Arcangeli [Tue, 18 Sep 2012 00:14:51 +0000 (02:14 +0200)]
x86, mm: Prevent gcc to re-read the pagetables

GCC is very likely to read the pagetables just once and cache them in
the local stack or in a register, but it can also decide to re-read
the pagetables. The problem is that the pagetables in those places can
change from under gcc.

In the page fault path we only hold the ->mmap_sem for reading; both the
page fault and MADV_DONTNEED take the ->mmap_sem for reading only, and we
don't hold any PT lock yet.

In get_user_pages_fast() the TLB shootdown code can clear the pagetables
before firing any TLB flush (the page can't be freed until the TLB
flushing IPI has been delivered but the pagetables will be cleared well
before sending any TLB flushing IPI).

With THP/hugetlbfs the pmd (and the pud for hugetlbfs giga pages) can
change as well under gup_fast; it won't just be cleared, for the same
reasons described above for the pte in the page fault case.
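
The usual remedy for this class of problem is to force a single read of
the entry - the ACCESS_ONCE() idiom of this era. A simplified userspace
sketch of the idea (not the patch itself):

   #include <stdio.h>

   /* mirrors the kernel macro: a volatile access the compiler may not repeat */
   #define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

   static unsigned long pte;               /* stand-in for a pagetable entry */

   static int handle_fault(void)
   {
       unsigned long entry = ACCESS_ONCE(pte);   /* read exactly once */

       if (!(entry & 1))                   /* e.g. present bit clear */
           return -1;
       /* every later decision uses 'entry', never a fresh load of 'pte' */
       printf("entry=%lx\n", entry);
       return 0;
   }

   int main(void)
   {
       pte = 0x1003;
       return handle_fault() ? 1 : 0;
   }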

[ This patch was picked up from the AutoNUMA tree. ]

Originally-by: Andrea Arcangeli <aarcange@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
[ Ported to this tree, because we are modifying the page tables
  at a high rate here, so this problem is potentially more
  likely to show up in practice. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
12 years agomm: Check if PTE is already allocated during page fault
Mel Gorman [Wed, 12 Oct 2011 19:06:51 +0000 (21:06 +0200)]
mm: Check if PTE is already allocated during page fault

With transparent hugepage support, handle_mm_fault() has to be careful
that a normal PMD has been established before handling a PTE fault. To
achieve this, it used __pte_alloc() directly instead of pte_alloc_map
as pte_alloc_map is unsafe to run against a huge PMD. pte_offset_map()
is called once it is known the PMD is safe.

pte_alloc_map() is smart enough to check if a PTE is already present
before calling __pte_alloc but this check was lost. As a consequence,
PTEs may be allocated unnecessarily and the page table lock taken.
This useless PTE does get cleaned up, but it's a performance hit which
is visible in page_test from aim9.

This patch simply re-adds the check normally done by pte_alloc_map to
determine whether the PTE needs to be allocated before taking the page
table lock. The effect is noticeable in page_test from aim9 (see the
sketch after the numbers below).

 AIM9
                 2.6.38-vanilla 2.6.38-checkptenone
 creat-clo      446.10 ( 0.00%)   424.47 (-5.10%)
 page_test       38.10 ( 0.00%)    42.04 ( 9.37%)
 brk_test        52.45 ( 0.00%)    51.57 (-1.71%)
 exec_test      382.00 ( 0.00%)   456.90 (16.39%)
 fork_test       60.11 ( 0.00%)    67.79 (11.34%)
 MMTests Statistics: duration
 Total Elapsed Time (seconds)                611.90    612.22

(While this affects 2.6.38, it is a performance rather than a
functional bug, and so normally outside the rules for -stable. While the
big performance differences are in a microbenchmark, the difference in
fork and exec performance may be significant enough that -stable wants
to consider the patch.)
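
A minimal userspace sketch of the pattern being restored (allocate
outside the lock, re-check under it); the names are illustrative
stand-ins, not the mm code:

   #include <pthread.h>
   #include <stdlib.h>

   static pthread_mutex_t page_table_lock = PTHREAD_MUTEX_INITIALIZER;
   static void *pte_page;                  /* stand-in for the PTE page */

   static int pte_alloc(void)
   {
       void *new;

       if (pte_page)                       /* already populated: skip the */
           return 0;                       /* allocation and the lock */

       new = calloc(1, 4096);
       if (!new)
           return -1;

       pthread_mutex_lock(&page_table_lock);
       if (!pte_page) {                    /* re-check under the lock */
           pte_page = new;
           new = NULL;
       }
       pthread_mutex_unlock(&page_table_lock);
       free(new);                          /* lost the race: drop our copy */
       return 0;
   }

   int main(void)
   {
       return pte_alloc();
   }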

Reported-by: Raz Ben Yehuda <raziebe@gmail.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Rik van Riel <riel@redhat.com>
[ Picked this up from the AutoNUMA tree to help
  it upstream and to allow apples-to-apples
  performance comparisons. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
12 years agoMerge branch 'drm-fixes' of git://people.freedesktop.org/~airlied/linux
Linus Torvalds [Tue, 23 Oct 2012 05:51:07 +0000 (08:51 +0300)]
Merge branch 'drm-fixes' of git://people.freedesktop.org/~airlied/linux

Pull drm fixes from Dave Airlie:
 "Fixes for intel and nouveau mainly.

   - intel: disable HSW by default, sdvo fixes, link train regression
     fix
   - nouveau: acpi rom loading regression fix, with a few other fixes
     from the rework
   - core: just other minor fixes and race fixes for ttm."

* 'drm-fixes' of git://people.freedesktop.org/~airlied/linux: (24 commits)
  drm/ttm: Fix a theoretical race in ttm_bo_cleanup_refs()
  drm/ttm: Fix a theoretical race
  drm: platform: Don't initialize driver-private data
  drm/debugfs: remove redundant info from gem_names
  drm: fb: cma: Fail gracefully on allocation failure
  drm: fb: cma: Fix typo in debug message
  drm/nouveau/clock: fix missing pll type/addr when matching default entry
  drm/nouveau/fb: fix reporting of memory type on GF8+ IGPs
  drm/nv41/vm: don't init hw pciegart on boards with agp bridge
  drm/nouveau/bios: fetch full 4KiB block to determine ACPI ROM image size
  drm/nouveau: validate vbios size
  drm/nouveau: warn when trying to free mm which is still in use
  drm/nouveau: fix nouveau_mm/nouveau_mm_node leak
  drm/nouveau/bios: improve error handling when reading the vbios from ACPI
  drm/nouveau: handle same-fb page flips
  drm/i915: Initialize obj->pages before use by i915_gem_object_do_bit17_swizzle()
  drm/i915: Add no-lvds quirk for Supermicro X7SPA-H
  drm/i915: Insert i915_preliminary_hw_support variable.
  drm/i915: shut up spurious WARN in the gtt fault handler
  Revert "drm/i915: Try harder to complete DP training pattern 1"
  ...

12 years agoMerge tag 'jfs-3.7-2' of git://github.com/kleikamp/linux-shaggy
Linus Torvalds [Tue, 23 Oct 2012 05:49:34 +0000 (08:49 +0300)]
Merge tag 'jfs-3.7-2' of git://github.com/kleikamp/linux-shaggy

Pull jfs fix from Dave Kleikamp:
 "Bug fix: Fix FITRIM argument handling"

* tag 'jfs-3.7-2' of git://github.com/kleikamp/linux-shaggy:
  jfs: Fix FITRIM argument handling

12 years agoMerge tag 'ext4_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso...
Linus Torvalds [Tue, 23 Oct 2012 05:48:26 +0000 (08:48 +0300)]
Merge tag 'ext4_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4

Pull ext4 fixes from Ted Ts'o:
 "Various bug fixes for ext4.  The most serious of them fixes a security
  bug (CVE-2012-4508) which leads to stale data exposure when we have
  fallocate racing against writes to files undergoing delayed
  allocation.  We also have two fixes for the metadata checksum feature,
  the most serious of which can cause the superblock to have a invalid
  checksum after a power failure."

* tag 'ext4_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
  ext4: Avoid underflow in ext4_trim_fs()
  ext4: Checksum the block bitmap properly with bigalloc enabled
  ext4: fix undefined bit shift result in ext4_fill_flex_info
  ext4: fix metadata checksum calculation for the superblock
  ext4: race-condition protection for ext4_convert_unwritten_extents_endio
  ext4: serialize fallocate with ext4_convert_unwritten_extents

12 years agoMerge tag 'nfs-for-3.7-2' of git://git.linux-nfs.org/projects/trondmy/linux-nfs
Linus Torvalds [Tue, 23 Oct 2012 05:47:38 +0000 (08:47 +0300)]
Merge tag 'nfs-for-3.7-2' of git://git.linux-nfs.org/projects/trondmy/linux-nfs

Pull NFS client bugfixes from Trond Myklebust:
 - Do not call pnfs_return_layout() from an rpciod context
 - nfs4_ds_disconnect can cause Oopses.  Kill it...
 - Fix the return value for nfs_callback_start_svc
 - Fix a number of compile warnings

* tag 'nfs-for-3.7-2' of git://git.linux-nfs.org/projects/trondmy/linux-nfs:
  NFSv4: Fix the return value for nfs_callback_start_svc
  NFSv4.1: Declare osd_pri_2_pnfs_err(), objio_init_read/write to be static
  NFSv4: fs/nfs/nfs4getroot.c needs to include "internal.h"
  NFSv4.1: Use kcalloc() to allocate zeroed arrays instead of kzalloc()
  NFSv4.1: Do not call pnfs_return_layout() from an rpciod context
  NFSv4.1: Kill nfs4_ds_disconnect()

12 years agoMerge tag 'regmap-fix-mmio' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie...
Linus Torvalds [Tue, 23 Oct 2012 05:39:38 +0000 (08:39 +0300)]
Merge tag 'regmap-fix-mmio' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap

Pull regmap fix from Mark Brown:
 "regmap: Fix for dependencies for MMIO

  Trivial dependency issue, not noticed before as the only user of MMIO
  also needs I2C."

* tag 'regmap-fix-mmio' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regmap:
  regmap: select REGMAP if REGMAP_MMIO and REGMAP_IRQ enabled

12 years agodrm/ttm: Fix a theoretical race in ttm_bo_cleanup_refs()
Thomas Hellstrom [Mon, 22 Oct 2012 12:51:26 +0000 (12:51 +0000)]
drm/ttm: Fix a theoretical race in ttm_bo_cleanup_refs()

In theory, that function could release the lru lock between checking
for a bo on the ddestroy list and a successful reserve, if the bo was
already reserved and the function was called with waiting reserves
allowed.
However, all current reservers of a bo on the ddestroy list would
atomically take the bo off the list after a successful reserve, so this
race should not have been hit and there is no need to backport for
stable.

This patch also fixes a case found by Maarten Lankhorst where
ttm_mem_evict_first called with no_wait_gpu would incorrectly
spin waiting for bo idle if trying to evict a busy buffer that
also sits on the ddestroy list.
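
A generic sketch of the underlying hazard - not the ttm code itself:
state checked under a lock has to be re-validated if the lock can be
dropped before acting on it. All names below are illustrative.

   #include <pthread.h>
   #include <stdbool.h>

   static pthread_mutex_t lru_lock = PTHREAD_MUTEX_INITIALIZER;
   static bool needs_cleanup = true;       /* "bo still sits on the ddestroy list" */

   static void wait_for_reserve(void)      /* may sleep, so the lock is dropped */
   {
   }

   static void cleanup_refs(void)
   {
       bool pending;

       pthread_mutex_lock(&lru_lock);
       pending = needs_cleanup;

       if (pending) {
           pthread_mutex_unlock(&lru_lock);    /* drop the lock to wait */
           wait_for_reserve();
           pthread_mutex_lock(&lru_lock);
           pending = needs_cleanup;            /* earlier check is stale: re-check */
       }

       if (pending)
           needs_cleanup = false;              /* do the cleanup exactly once */
       pthread_mutex_unlock(&lru_lock);
   }

   int main(void)
   {
       cleanup_refs();
       return 0;
   }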

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
12 years agodrm/ttm: Fix a theoretical race
Thomas Hellstrom [Mon, 22 Oct 2012 12:51:25 +0000 (12:51 +0000)]
drm/ttm: Fix a theoretical race

The ttm_mem_evict_first function could theoretically drop the
lru lock without retrying if a reservation from off the LRU list
ended up waiting.
However, since there are currently no users that could cause a wait
in that situation, this is not suitable for stable.

Signed-off-by: Thomas Hellstrom <thellstrom@vmware.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>